On 19 Feb 2010, at 14:30, Fons Adriaensen wrote:
> On Fri, Feb 19, 2010 at 01:59:34PM +0000, Simon Jenkins wrote:
>> On 18 Feb 2010, at 17:32, alex stone wrote:
>>> So it's feasible to create another type of port (CV/Fs)
>>> without crippling something else in jack, or damaging the
>>> current API? If so, surely that would further enhance Jack's
>>> capabilities, and open it up to more options for devs and
>>> users alike.
>> A reduced rate CV port doesn't really make much possible
>> that's not already possible with a full buffer of values
>> at the audio sampling rate.
> True. The advantage is that if there is a 'standard' for
> such control signals (e.g. 1/16 of the audio rate) it becomes
> practical to store them as well. Of course you could do that
> at audio rate, but just imagine the consequences if you have
> at least 4 control signals for each audio channel, as is the
> case in the WFS system here. There is a huge difference between
> having to store 48 audio files of an hour each (25 GB) and 240
> of them (125 GB) - in particular if most of that storage is
> wasted anyway. In a mixdown session there can easily be many
> more than 4 automation tracks for each audio one. Reducing the
> rate at least brings this back to manageable levels.
I'd been thinking mainly in terms of something like an
(analog-style?) sequencer generating the CV, in which case you
don't store its outputs, you just set it running again. But
storage is a good point.
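
(For anyone checking the arithmetic: those figures work out if we
assume 24-bit/48 kHz mono files - the bit depth is my assumption,
not something stated above:

    48000 samples/s x 3 bytes x 3600 s  =  ~518 MB per file-hour
    48 audio files                      =  ~25 GB
    48 audio + 192 control, audio rate  =  ~125 GB
    48 audio + 192 control at 1/16      =  ~31 GB

so the reduced rate really does pull storage back to roughly the
size of the audio alone.)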
>> If a receiving application, for example, wants to update
>> its filter parameters at 1/16th the full sampling rate it
>> is perfectly capable of skipping 16-at-a-time along an
>> audio-rate buffer of values all by itself. Or 8-at-a-time.
>> Or 32, or whatever divisor makes most sense for *that*
>> application, or was configured by the user of that
>> application, or whatever.
>>
>> Meanwhile this same buffer can be routed at the same time
>> to applications that would prefer the full-rate data.
> All true, but you are confusing two quite separate issues:
> *internal update rate* and *useful bandwidth*.
I'm not confusing them; I just wasn't considering bandwidth as
the variable we were trying to optimise for. Maybe it is, though.
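
For what it's worth, here's the kind of thing I had in mind: a
minimal sketch of a JACK process callback that treats an ordinary
audio-rate port as CV and only looks at every DECIM-th sample.
The cv_port registration and the set_cutoff() setter are
hypothetical, not from any real app:

#include <jack/jack.h>

#define DECIM 16   /* per-application divisor: 8, 16, 32, ... */

extern jack_port_t *cv_port;        /* registered elsewhere */
extern void set_cutoff (float cv);  /* hypothetical parameter setter */

int process (jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *cv =
        jack_port_get_buffer (cv_port, nframes);

    for (jack_nframes_t i = 0; i < nframes; i += DECIM)
        set_cutoff (cv [i]);        /* skip DECIM-at-a-time */

    return 0;
}

Change DECIM (or make it a user option) and nothing upstream has
to know about it.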
> - The internal update rate of e.g. a filter or gain control
>   would always have to be audio rate, to avoid 'zipper' effects.
>   The filter could e.g. use linear interpolation over segments
>   of 16 samples, or 32, or 256. This is an implementation
>   detail of the DSP code.
>
> - The useful bandwidth of control signals in audio is very low.
>   Even if the internal update rate is audio rate, there will be
>   no energy in the control signal above a few tens of Hz. If
>   you modulate a filter or gain stage with anything above that
>   bandwidth it is no longer just a filter or gain control - you
>   will be producing quite an obvious effect (e.g. vibrato).
>   That makes sense in synthesisers etc., but in any normal
>   audio processing it's something to be avoided.
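
(Agreed on the zipper point. For readers following along, here is
a sketch of what that per-segment interpolation might look like,
assuming a plain gain stage and 16-sample segments; the function
and names are illustrative, not from any particular codebase:

#define SEG 16

static float gain_z = 1.0f;    /* gain at end of previous segment */

void process_segment (float *buf, float gain_new)
{
    float d = (gain_new - gain_z) / SEG;

    for (int i = 0; i < SEG; i++)
    {
        gain_z += d;           /* ramp towards the new value...   */
        buf [i] *= gain_z;     /* ...so the gain never jumps      */
    }
}

The control input only changes once per segment, but the applied
gain moves every sample, which is what kills the zipper noise.)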
> So with the exception of synth modules etc., control signals
> never need to be high rate,
So control signals don't need to be high rate, apart from the exceptions, which do.
;)
> and if they are, the DSP code would have to filter out the HF
> parts. Actually 1/16 (3 kHz at a 48 kHz sample rate) would be
> more than sufficient for use as envelopes etc. in a synth as
> well; anything faster would just generate clicks.
If the user sends a 20 kHz sine wave into an application's
"volume" port, that's either their mistake or it's exactly what
they wanted to do.
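
That said, a receiver that does want to enforce a low control
bandwidth can do it in a couple of lines. A one-pole lowpass
sketch (the coefficient is the usual exponential smoother form;
the function name and cutoff values are purely illustrative):

#include <math.h>

static float state = 0.0f;

/* Run over each incoming control sample; removes content well
   above fc (e.g. a few tens of Hz). fs is the sample rate. */
float smooth (float x, float fc, float fs)
{
    float a = 1.0f - expf (-2.0f * (float) M_PI * fc / fs);

    state += a * (x - state);
    return state;
}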
> --
> FA
>
> O tu, che porte, correndo si ?
> E guerra e morte !
_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev