[LAD] CV data protocol in apps.

Fons Adriaensen fons at kokkinizita.net
Fri Feb 19 14:30:23 UTC 2010


On Fri, Feb 19, 2010 at 01:59:34PM +0000, Simon Jenkins wrote:
> On 18 Feb 2010, at 17:32, alex stone wrote:
> 
> > So it's feasible to create another type of port (CV/Fs), without
> > crippling something else in jack, or damaging the current API?
> > 
> > If so, surely that would enhance further Jack's capabilities, and open
> > it up to more options for devs and users alike.
> 
> A reduced rate CV port doesn't really make much possible
> that's not already possible with a full buffer of values
> at the audio sampling rate.

True. The advantage is that if there is a 'standard' for
such control signals (e.g. 1/16 of the audio sample rate)
it becomes practical to store them as well. Of course you
could do that at audio rate, but just imagine the
consequences if you have, say, at least 4 control signals
for each audio channel, as is the case in the WFS system
here. There is a huge difference between having to store
48 audio files of an hour each (25 GB) and 240 of them
(125 GB) - in particular if most of that storage is wasted
anyway. In a mixdown session there can easily be many more
than 4 automation tracks for each audio track. Reducing the
rate at least brings this back to manageable levels.
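
(To make the arithmetic explicit - assuming mono files of
24-bit samples at 48 kHz, which is what those figures
correspond to:

  48000 * 3 bytes * 3600 s            ~ 0.52 GB per channel-hour
  48 audio channels                   ~ 25 GB
  48 + 4 * 48 channels at audio rate  ~ 125 GB
  48 + 4 * 48 channels at 1/16 rate   ~ 31 GB

so the reduced rate keeps the control data down to a few GB.)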

> If a receiving application, for example, wants to update
> its filter parameters at 1/16th the full sampling rate it
> is perfectly capable of skipping 16-at-a-time along an
> audio-rate buffer of values all by itself. Or 8-at-a-time.
> Or 32 or whatever divisor makes most sense for *that*
> application, or was configured by the user of that
> application, or whatever.
> 
> Meanwhile this same buffer can be routed at the same
> time to applications that would prefer the full rate data.

All true, but you are confusing two quite separate issues:
*internal update rate* and *useful bandwidth*.

- The internal update rate of e.g. a filter or gain control
  would always have to be the audio rate, to avoid 'zipper'
  effects. The filter could e.g. use linear interpolation
  over segments of 16 samples, or 32, or 256 (see the sketch
  after this list). This is an implementation detail of the
  DSP code.

- The useful bandwidth of control signals in audio is very
  low. Even if the internal update rate is the audio rate,
  there will be no energy in the control signal above a few
  tens of Hz. If you modulate a filter or gain stage with
  anything above that bandwidth it is no longer just a
  filter or gain control - you will be producing quite an
  audible effect (e.g. tremolo or vibrato). That makes sense
  in synthesisers etc., but in any normal audio processing
  it is something to be avoided.
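
To make the first point concrete, here is a minimal sketch
(made-up names, not taken from any existing code) of a gain
control that accepts new values at whatever low rate they
arrive, and ramps linearly over each process block so the
effective update rate is the audio rate:

/* De-zippered gain control: the control value may arrive at
 * any (low) rate, the DSP code interpolates linearly over
 * each block.
 */
#include <stddef.h>

typedef struct
{
    float current;   /* gain applied to the last output sample */
    float target;    /* most recent control value received     */
} GainCtl;

/* Called whenever a new control value arrives, e.g. once per
 * 16, 32 or 256 frames - the divisor does not matter here.
 */
static void gainctl_set (GainCtl *g, float value)
{
    g->target = value;
}

/* Audio rate processing of one block of n frames, ramping
 * linearly from the previous value to the latest one.
 */
static void gainctl_process (GainCtl *g, const float *inp, float *out, size_t n)
{
    float d = (g->target - g->current) / (float) n;
    float a = g->current;
    size_t i;

    for (i = 0; i < n; i++)
    {
        a += d;
        out [i] = a * inp [i];
    }
    g->current = g->target;
}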
  
So with the exception of synth modules etc., control signals
never need to be high rate, and if they are, the DSP code
would have to filter out the HF parts (a sketch of that
follows below). Actually 1/16 of 48 kHz (3 kHz) would be
more than sufficient for use as envelopes etc. in a synth
as well; anything faster would just generate clicks.
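
And if a control input does arrive at audio rate with HF
content, the filtering can be as simple as a one-pole
lowpass - again just a sketch with made-up names, and the
cutoff frequency is only an example:

/* One-pole lowpass used to smooth an audio rate control
 * signal in place. With fc of a few tens of Hz this removes
 * anything that would cause zipper noise or audible
 * modulation effects.
 */
#include <math.h>
#include <stddef.h>

static void ctl_smooth (float *ctl, size_t n, float *state, float fc, float fs)
{
    float w = 1.0f - expf (-6.283185f * fc / fs);
    float z = *state;
    size_t i;

    for (i = 0; i < n; i++)
    {
        z += w * (ctl [i] - z);
        ctl [i] = z;
    }
    *state = z;
}

/* Typical use, once per process callback:
 *   ctl_smooth (ctlbuf, nframes, &zstate, 30.0f, 48000.0f);
 */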

-- 
FA

O tu, che porte, correndo si ?
E guerra e morte !


