I wanted a very simple SDR with jack inputs and outputs for a
demonstration I was doing. I had a look at the DSP guts of dttsp and
quisk, and sat down to code.
Now, since I wanted to demonstrate how you could use LADSPA filters to
clean up received audio, it occurred to me that I should implement my
SDR core as a LADSPA plugin. So, I did.
It "works for me". If you try it out, let me know how you get on. At
256 frames/period it sits at about 3% usage on my P4-2.8 without any
other LADSPAs running - not bad, but it probably could be better.
If you want to build it, get the code with:
git clone git://lovesthepython.org/ladspa-sdr.git
then build it with scons. You'll need to manually copy the resulting
sdr.so to wherever your LADSPA plugins live. Load it up in jack-rack
and add in an amplifier plugin (there's no AGC) and some sort of filter
(I recommend the Glame Bandpass Filter).
Performance and quality aren't exactly amazing, but for less than 300
lines of code - much of that used to set up the plugin - it's not too
bad.
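In case it helps anyone rolling their own, the setup side really is mostly
boilerplate; a stripped-down sketch of that part (not the actual sdr.so
source - the label, ID and port layout below are made up) looks roughly
like this:

#include <stdlib.h>
#include <ladspa.h>

enum { PORT_IN_I, PORT_IN_Q, PORT_OUT, NUM_PORTS };

typedef struct { LADSPA_Data *port[NUM_PORTS]; } Sdr;

static LADSPA_Handle instantiate(const LADSPA_Descriptor *d, unsigned long rate)
{
    return calloc(1, sizeof(Sdr));
}

static void connect_port(LADSPA_Handle h, unsigned long p, LADSPA_Data *buf)
{
    ((Sdr *) h)->port[p] = buf;
}

static void run(LADSPA_Handle h, unsigned long nframes)
{
    Sdr *s = (Sdr *) h;
    unsigned long i;
    /* the real DSP (mixing, filtering) would live here;
       this just copies the I input to the output */
    for (i = 0; i < nframes; i++)
        s->port[PORT_OUT][i] = s->port[PORT_IN_I][i];
}

static void cleanup(LADSPA_Handle h) { free(h); }

static const LADSPA_PortDescriptor pdesc[NUM_PORTS] = {
    LADSPA_PORT_INPUT  | LADSPA_PORT_AUDIO,
    LADSPA_PORT_INPUT  | LADSPA_PORT_AUDIO,
    LADSPA_PORT_OUTPUT | LADSPA_PORT_AUDIO
};
static const char * const pnames[NUM_PORTS] = { "Input I", "Input Q", "Output" };
static const LADSPA_PortRangeHint phints[NUM_PORTS];   /* no hints needed */

static const LADSPA_Descriptor desc = {
    .UniqueID        = 9999,        /* placeholder - real plugins need a registered ID */
    .Label           = "sdr_sketch",
    .Name            = "SDR sketch",
    .Maker           = "example",
    .Copyright       = "None",
    .PortCount       = NUM_PORTS,
    .PortDescriptors = pdesc,
    .PortNames       = pnames,
    .PortRangeHints  = phints,
    .instantiate     = instantiate,
    .connect_port    = connect_port,
    .run             = run,
    .cleanup         = cleanup
};

const LADSPA_Descriptor *ladspa_descriptor(unsigned long index)
{
    return index == 0 ? &desc : NULL;
}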
Gordon MM0YEQ
We seem to be fairly interested in the same things, James!
I don't know if you have access to University Lecturers... if you do, go
have a chat with the software engineering lecturer. I've only had positive
experiences when approaching them about "totally-unrelated-to-course"
projects.
On the other hand, I bought a book (forget the exact name.. can find out)
which showed some of the basic object-oriented stuff, but at the same time,
I found it to be relatively useless when trying to apply it to
"music-software" (i.e. Ardour, Seq24, Dino, that kind of program).
Spending time drawing out program diagrams (you know, the "standard" boxes
approach to explaining how classes interact) - that's been my approach. I
didn't really find any great resources online. If you do find any, please
post back here! :-)
Good luck, -Harry
Jorn, Fons, I'm looking for a LADSPA UHJ encoder, and can't seem to
find one. Any idea if such a beast exists? Or if there's a standalone
instance or ambdec preset I can use, and route in and out of?
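(For reference, and in case no ready-made plugin turns up: as far as I
understand the published equations, the 2-channel UHJ encode from B-format
W, X, Y is the per-sample recipe sketched below. The shifted terms need a
+90 degree phase shift, so a real encoder also needs a Hilbert/allpass
network, which this sketch leaves out - w_shift/x_shift just stand in for
the phase-shifted signals.)

static void uhj_encode(float w, float x, float y,
                       float w_shift, float x_shift, /* 90-deg shifted copies */
                       float *left, float *right)
{
    /* sum/difference form of the usual UHJ stereo encode equations */
    float s = 0.9396926f * w + 0.1855740f * x;
    float d = (-0.3420201f * w_shift + 0.5098604f * x_shift)
              + 0.6554516f * y;

    *left  = 0.5f * (s + d);
    *right = 0.5f * (s - d);
}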
Jorn, I've had several browses over your web examples of using AMB
plugins with Ardour, and have reflected the setup where possible in
Non-Mixer.
I'm using samples (a la LSampler) for noise, but I'll ask here: what's
the function of using the tetraproc mike plugin over something else?
I'm lost in your explanation.
I'm still getting my feet wet in ambisonics, and making plenty of
errors along the way, but progress seems imminent (as it always does,
I guess, for the optimistic among us).
Some general questions.
When I use Jconvolver standalone (my preference) and test with an
*amb.conf, I get 1 input and 4 outputs, WXYZ. Is this correct for 4
signals coming into 1, into the *amb.conf, or do I need to change this
to reflect individual WXYZ routing, from something like a MASTER
strip, or from an ambdec plugin in a channel strip? (I'm trying to get
the signal chain sorted out correctly.) i.e. 4 in, 4 out.
I'm using all mono ins for sound sources, and want to reflect
positioning in the busses, as I have multitrack 1st violins,
2nd violins, etc...
So my 1st violins (4 mono tracks) are going into a 1st violin buss (4
ins), and in the buss signal chain I'm adding a LADSPA AMB mono
panner, which naturally gives me 4 outs; then the chain continues to
the MASTER and jconvolver, back into a jconv buss in the mixer with
the intent of finally routing that to the UHJ buss...
Should I then stay "faithful" to that signal chain and, up to a UHJ
encode to stereo (which I hope exists in LADSPA form), maintain the
4-port stream to stay compliant with WXYZ?
The intent with this is to provide ambisonic positioning, and convolver
tail, right up until downsizing to stereo as the last part of the
signal chain.
I'm finding the challenge of this interesting, and may have more
questions as more of this slowly seeps into my head.
Feel free to point out obvious errors, or offer alternative (meaning
smarter) suggestions.
Alex.
--
www.openoctave.org
midi-subscribe(a)openoctave.org
development-subscribe(a)openoctave.org
Good day...
Just coming to grips with, and learning, the alias system...
Under what conditions might a Jack port not have any alias names?
When might I expect to encounter that situation?
Because our app supports both ALSA midi and Jack midi, the app's very own
ALSA ports are showing up in its list of Jack midi ports. We don't want our
own ALSA ports listed in there.
So to filter them out of our Jack midi ports list, I look at the port's (first)
alias name and see if the app's name is in there, and filter out the port if
it's a match.
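In code the check amounts to something like the sketch below (names are
placeholders - "MyApp" stands for our client name, and a real version
would iterate over the result of jack_get_ports()):

#include <stdlib.h>
#include <string.h>
#include <jack/jack.h>

/* Return non-zero if any alias of the named port mentions our app.
   "MyApp" is a placeholder for the real client name.               */
static int is_our_alsa_port(jack_client_t *client, const char *port_name)
{
    jack_port_t *port = jack_port_by_name(client, port_name);
    char *aliases[2];
    int i, n, match = 0;

    if (!port)
        return 0;

    aliases[0] = malloc(jack_port_name_size());
    aliases[1] = malloc(jack_port_name_size());

    n = jack_port_get_aliases(port, aliases);   /* number of aliases, 0..2 */
    for (i = 0; i < n; i++) {
        if (strstr(aliases[i], "MyApp")) {
            match = 1;
            break;
        }
    }

    free(aliases[0]);
    free(aliases[1]);
    return match;
}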
So far so good, but I'm worried what happens if there's no alias to work with.
I can't figure out a way to determine if a non-aliased name like
"system:midi_playback_4" actually belongs to our app's own ALSA ports.
I don't know how or if 'alias renaming' will affect my plans.
Still learning + investigating much about this system.
Thanks. Tim.
Hey, has anyone been seeing strange behavior from this combination?
kernel 2.6.31.x rt20 + alsa 1.0.22 userland
RME card (pcmcia card + multiface)
hdspmixer is not doing the right thing (it does not initialize the card in
a way in which playback works); it does not see the hwdep interface (or
something like that) and so disables metering, and alsamixer even segfaults
when I reach the end of the controls listed. Plain weird. Smells like
something changed deep in the kernel that makes alsa-lib very unhappy.
Alsa-tools rebuilt from source does not make a difference.
Weirdness goes away when I boot into 2.6.29.6 rt23...
Is there anything in alsa-* that depends on which _kernel_ is available
at compile time?
-- Fernando
<ralf.mardorf(a)alice-dsl.net> wrote:
> With my CPU [1] resampling seems to be still a problem. I'm using
> Qtractor, but didn't use its time stretch feature for audio clips;
> instead I used Rubber Band as a plugin to pitch down drums. This forced
> the CPU to its knees. Another issue might be trouble because of
> transients. I like Rubber Band as an effect, so I'm fine with
> transients, but for your needs they might be a problem.
> http://www.breakfastquay.com/rubberband/technical.html
Found a new one: Zita-resampler. Can't wait to try it.
http://www.kokkinizita.net/linuxaudio
So far I'm only using SRC, and am currently writing classes for the others.
So far so good, as long as I don't push it with way too many audio tracks.
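For anyone curious, the simplest way I know to drive SRC is the one-shot
src_simple() call - roughly the sketch below (buffer handling is simplified
and the converter choice is just illustrative; a proper per-track path
would use the streaming src_process() API instead):

#include <stdio.h>
#include <samplerate.h>

/* Resample one block of interleaved floats; returns frames produced,
   or -1 on error.                                                    */
int resample_block(float *in, long in_frames,
                   float *out, long out_frames,
                   double ratio, int channels)
{
    SRC_DATA d;
    int err;

    d.data_in       = in;
    d.data_out      = out;
    d.input_frames  = in_frames;
    d.output_frames = out_frames;
    d.src_ratio     = ratio;              /* output rate / input rate */

    err = src_simple(&d, SRC_SINC_MEDIUM_QUALITY, channels);
    if (err) {
        fprintf(stderr, "SRC error: %s\n", src_strerror(err));
        return -1;
    }
    return (int) d.output_frames_gen;     /* frames actually written */
}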
To all, for your information.
I have received the following message from Mr. Nick Copeland.
It was sent privately, but since this is the continuation of
a thread on this list and the person concerned has gone well
beyond any reasonable limits of decent behaviour, I feel free
to post it here.
*** Start included message ***
The last time I looked at the AMS source code I was under the impression
that the Moog VCF filters were not actually yours; they were Tossavainen's
and Kellett's. They were published and eventually placed on musicdsp.org;
however, there are a couple of things here:
1. In their original publication they did not advocate the use of
quantisation of their parameters - you implemented this as an
adaptation of their algorithm.
2. In not quoting the original authors you are plagiarising them.
In the source to AMS they are quoted; however, it is a bit disingenuous
to imply that this was 'your' MoogVCF.
In short, you did not actually write the 'Moog' VCF parts, and
suggesting that this is how their algorithms worked is just an attempt
to justify the claim that quantisation to 1/16 is not deleterious
to quality, a claim which is definitely not attributable to the original
authors.
From what you are saying I would like to just say you are an
arrogant self-publicist; however, based on the claims in your email
you are actually a lying plagiarist.
Regards nick.
*** End included message ***
--
FA
O tu, che porte, correndo si ?
E guerra e morte !
Once upon a time, jack_diplomat was alive and well. Sadly, all
attempts to find it have failed, as the hosting site seems to be
extinct.
Does anyone have a copy of this app on their machine that they might
consider sharing?
Alex.
--
www.openoctave.org
midi-subscribe(a)openoctave.org
development-subscribe(a)openoctave.org
Hi,
The sequencer/arpeggiator/WYWTCI I'm working on will (eventually)
partially generate MIDI events (primarily NOTE-ON/OFF) from a pattern. The
pattern consists of equidistant intervals (i.e. 1/8th or 1/16th etc), where
each interval has the potential to play a note or not.
The note pitch and velocity are generated by something akin to window
placement. Given a grid: pitch @ X, velocity @ Y, place a box within the
grid using an algorithm to prevent overlapping other boxes* etc, but only
when the pattern says to do so.
Initially I thought I'd have this generative stuff within the JACK process
callback, but soon decided not to...
Here is how I think this will work:
1) The 'pattern processor' processes 1 quarter note at a time (I guess)...
looking ahead, processing notes which will be playing shortly, adding the
generated note data to a main event list for the sequencer (??)
2) The sequencer processes the main event list, creating the MIDI events
and adding them to a jack_ringbuffer.
3) the JACK process thread/callback reads the jack_ringbuffer and outputs
the MIDI events it finds there to the JACK MIDI port.
Now, somewhere along the line, the GUI has to provide visual feedback at
exactly (or as exactly as possible) the time any note on/off occurs.
Perhaps then:
4) the JACK process thread/callback, upon sending a MIDI event, adds to a
2nd ring buffer which is read by the GUI thread, which displays the visual
feedback (i.e. a coloured box).
Also, the 'pattern processor' will need to be notified of note off events
so it can remove the box from the grid (such that new notes might replace
the old). So..
5) the JACK process thread callback, upon sending a note-off, adds to a 3rd
ringbuffer which is read by the pattern processor to notify it of the
removal of a note/box.
Does this sound about right?
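To make steps 2) to 4) concrete, here's roughly the shape I have in mind
(just a sketch - the event struct, names and sizes are made up, not code
from the actual app):

#include <jack/jack.h>
#include <jack/midiport.h>
#include <jack/ringbuffer.h>

typedef struct {
    jack_nframes_t frame;    /* offset of the event within the period */
    unsigned char  data[3];  /* status, note, velocity */
} midi_ev;

static jack_port_t       *midi_out;
static jack_ringbuffer_t *ev_rb;   /* sequencer  -> process callback */
static jack_ringbuffer_t *gui_rb;  /* process cb -> GUI feedback     */

/* Step 2 (non-RT sequencer thread): queue an event for output. */
static void queue_event(const midi_ev *ev)
{
    if (jack_ringbuffer_write_space(ev_rb) >= sizeof *ev)
        jack_ringbuffer_write(ev_rb, (const char *) ev, sizeof *ev);
}

/* Steps 3 and 4 (RT thread): drain the ringbuffer into the MIDI port
   buffer, and echo every event sent into the GUI ringbuffer.         */
static int process(jack_nframes_t nframes, void *arg)
{
    void    *buf = jack_port_get_buffer(midi_out, nframes);
    midi_ev  ev;

    jack_midi_clear_buffer(buf);

    while (jack_ringbuffer_read_space(ev_rb) >= sizeof ev) {
        unsigned char *out;

        jack_ringbuffer_read(ev_rb, (char *) &ev, sizeof ev);

        /* events must be reserved in non-decreasing frame order */
        out = jack_midi_event_reserve(buf, ev.frame, 3);
        if (out) {
            out[0] = ev.data[0];
            out[1] = ev.data[1];
            out[2] = ev.data[2];
        }

        /* tell the GUI / pattern processor; never block in the RT thread */
        if (jack_ringbuffer_write_space(gui_rb) >= sizeof ev)
            jack_ringbuffer_write(gui_rb, (const char *) &ev, sizeof ev);
    }
    return 0;
}

The GUI thread would then just poll gui_rb on a timer and draw/remove
boxes from whatever it reads there.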
Cheers,
James.
* see also: http://jwm-art.net/art/text/xwinmidiarptoy and
http://jwm-art.net/art/text/xwinmidiarptoy.txt