[LAD] automation on Linux (modular approach)

Nick Copeland nickycopeland at hotmail.com
Tue Mar 23 20:58:17 UTC 2010


fons wrote:
>  to send a stream of
> parameter updates, and then it all depends on the receiver
> if this results in a 'staircase' or a smooth trajectory. 

Agreed, and the MMA does advise on a few of them, such as the pan
curve mentioned in my last post. That it only advises on a few of
them just highlights the limitation.

> The advantage of a defined rate (audio or sub-audio) 'CV'
> style data type is that at least that interpretation is
> defined - it is bandlimited by its sample rate and that
> more or less imposes the only valid way to interpret it.

Perhaps the only issue I have is the naming. What is being
proposed - another message format over another port type
in jack?

The existing tuxfamily definition uses a jack audio port to
carry samples that provide the control 'voltage', in this case
as floats. This is a pretty cool method and a few people have
expressed interest in having it generalised further. A separate
port type could still be defined for quantised messages, but then
somebody would have to write jcv2cvd to convert them back into
sample-rate floats for the synths to work with.
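
For reference, here is a minimal sketch of that approach, written
against the stock JACK C API; the client and port names are my own,
not part of any spec:

/* Build: gcc cv_source.c -o cv_source -ljack */
#include <jack/jack.h>
#include <unistd.h>

static jack_port_t *cv_out;
static float cv_value = 0.5f;   /* the current 'voltage', e.g. a knob */

static int process(jack_nframes_t nframes, void *arg)
{
    (void) arg;
    float *buf = jack_port_get_buffer(cv_out, nframes);
    /* CV is just audio: one float per frame at the native rate */
    for (jack_nframes_t i = 0; i < nframes; i++)
        buf[i] = cv_value;
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("cv-source",
                                             JackNullOption, NULL);
    if (!client)
        return 1;
    cv_out = jack_port_register(client, "cv_out",
                                JACK_DEFAULT_AUDIO_TYPE,
                                JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    for (;;)
        sleep(1);               /* run until killed */
}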

What it comes down to is that the fully programmable synths
(ARP 2600, Moog Modular, MS20, Matrix, Synthi) allowed
anything to connect to anything, which means the patching
has to support the native format, currently floats at the native
sampling rate. If you introduce another 'CV' based on sub-
sample-rate 16-bit values then you introduce another special
case that needs to be catered for.

As such I don't see how the two could reasonably be used in
conjunction. Take the case of the ARP 2600: all of its oscillators
can modulate all the other oscillators. At low frequencies these
signals could be passed as your f/16 messages (one message per
16 samples). But what about at audible frequencies, when you want
FM-type effects - is this CV supposed to change representation
depending on whether the signal is LF or AF? If so, where is the
cutover done? This kind of modulation has to be carried in the
native format.

Since audible frequencies are the only ones that will work for
FM and similar effects, it is pretty much a done deal: the data
needs to be passed at the native rate between apps, as the sketch
below illustrates.
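
To make that concrete, here is a sketch (my own names, assuming a
48kHz rate) of an oscillator whose frequency is modulated per sample
by a float CV buffer. At sub-audio rates the cv input acts as an
LFO, at audio rates it gives FM, and the code never has to know
which - no cutover point exists:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE 48000.0f

/* 'cv' is the modulating oscillator's output, one float per frame */
void fm_osc(const float *cv, float *out, int nframes,
            float base_hz, float depth_hz)
{
    static float phase = 0.0f;
    for (int i = 0; i < nframes; i++) {
        float hz = base_hz + depth_hz * cv[i]; /* per-sample freq */
        phase += 2.0f * (float) M_PI * hz / SAMPLE_RATE;
        if (phase >= 2.0f * (float) M_PI)
            phase -= 2.0f * (float) M_PI;
        out[i] = sinf(phase);
    }
}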

If some synths want to have quantised parameters, that is fine,
but that really isn't CV, and using the name obfuscates what is
going on.

> Another limitation of MIDI is its handling of context, 
> the only way to do this is by using the channel number.
> There is no way to refer to anything higher level, to
> say e.g. this is a control message for note #12345 that
> started some time ago. 

This is also very true. It does raise the question of how much
data you want to carry in these f/16 messages: they now include
timing information, note information, the data itself, some
arbitrary concept of channel; they may need to carry a MIDI
mapping of some type, and probably also sequence numbers for
ordering, information on the source of the message, and so on.
The amount of data being passed at a rate of f/16 will then be
very close to f for continuous changes, as each of these messages
is going to be much larger than a single sample of type 'float'.
They are also going to be a lot more laborious to process. I still
feel that the CV should remain floats at the native sample rate,
and apps that want to quantise should just pull out every nth
sample, as sketched below.
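
As a back-of-envelope check, here is an entirely hypothetical event
layout (just the fields listed above packed into a struct) next to
the trivial decimation an app would do on a native-rate float
stream:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical quantised control event; fields as listed above */
struct cv_event {
    uint32_t frame_time;    /* timing information */
    uint32_t note_id;       /* note/context information */
    uint16_t channel;       /* arbitrary channel concept */
    uint16_t seq;           /* sequence number for ordering */
    uint32_t source_id;     /* source of the message */
    float    value;         /* the actual data */
};

/* An app that wants quantised control just decimates the
 * native-rate stream: keep every nth sample. */
void decimate(const float *cv, float *out, int nframes, int n)
{
    for (int i = 0, j = 0; i < nframes; i += n, j++)
        out[j] = cv[i];
}

int main(void)
{
    /* 20 bytes here, before any MIDI mapping is added; at f/16
     * an event only has to reach 64 bytes to match the 4 bytes
     * per frame of a plain float stream (64 / 16 = 4). */
    printf("event: %zu bytes, float: %zu bytes\n",
           sizeof(struct cv_event), sizeof(float));
    return 0;
}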

> As far a I can see, any application dealing with control
> data should offer MIDI only as one possible I/O format,
> but certainly not use it internally, or be based on it.

Your initial interpretation was that MIDI was just a way to get
some data from A to B. That is true, but wasn't a large part of
the actual goal to find a way to digitise CV from A to B? Perhaps
it is time to get rid of the messages, get back to the original
issue of moving analogue signals around, and recognise that in
the current digital world 'analogue' means floats at the sample
rate.

It is then up to the app to decide whether it wants to quantise 
those values.

That doesn't actually respond to the point you are making here
though: MIDI should certainly not be used internally to an
application. Again, very correct. It does, however, skirt around
the issue that applications need a MIDI interface, and that when
modulating audio data they need some form of audio interface
between the components. Proposing what appears to be an
enhancement to MIDI for the modulating interface just imposes an
extra level of complexity: an extra port is needed for the 'new
CV' data, and the audio data has to be mixed with the 'new CV'
definition, since it is not really reasonable to expect the 'new
CV' definition to be used to carry audio samples for internal
modulation paths. Unless, that is, the new CV really is an audio
stream.

Nick.