On Tuesday 07 December 2004 15:42, Artem Baguinski wrote:
> Suppose I have several devices that control various aspects of sound
> generation / processing. They all send a "handful of bytes" now and
> then, and I want my JACK client to react immediately.
You do realize that you can't react immediately regardless.
JACK, like all other audio APIs, processes audio in chunks/periods/buffers,
whatever they are called.
So it depends on how much jitter you tolerate in the response.
In the simple case this is determined by the period size: you handle the input
as soon as possible, which means in the next call to your process callback.
That is the easy approach, and if you are able to run a low-latency kernel it
is good enough for some kinds of controls. In this case, as long as the event
gets there by the next process cycle, you're OK.
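Something like this (an untested sketch, names like control_event_t and
on_device_event() are made up) would do for the simple case: a reader thread
stuffs the device bytes into a JACK ringbuffer and the process callback drains
it at the start of every period:

/* Minimal sketch: a non-audio thread pushes each device event into a
 * lock-free JACK ringbuffer; the process callback drains it once per
 * period, so events take effect with at most one period of delay. */
#include <jack/jack.h>
#include <jack/ringbuffer.h>

typedef struct { unsigned char bytes[4]; } control_event_t;

static jack_ringbuffer_t *rb;   /* created before jack_activate() */

/* Called from the device-reader thread when a handful of bytes arrives. */
void on_device_event(const control_event_t *ev)
{
    if (jack_ringbuffer_write_space(rb) >= sizeof(*ev))
        jack_ringbuffer_write(rb, (const char *)ev, sizeof(*ev));
}

/* JACK process callback: apply all pending events at the start of the
 * period, then render the audio for this period. */
int process(jack_nframes_t nframes, void *arg)
{
    control_event_t ev;
    (void)arg;

    while (jack_ringbuffer_read_space(rb) >= sizeof(ev)) {
        jack_ringbuffer_read(rb, (char *)&ev, sizeof(ev));
        /* ... update synthesis/processing parameters from ev.bytes ... */
    }
    /* ... generate/process nframes frames with the new parameters ... */
    return 0;
}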
Alternatively, you timestamp incoming events with respect to the position of
the audio playback pointer in the current period, and then delay them by the
same amount in the next available process cycle.
You increase the response time, but you decrease the jitter. This does require
accurate and as-quick-as-possible delivery of those incoming events, and that
is the situation Paul mentioned where it gets complicated.
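A rough, untested sketch of that timestamping scheme, using jack_frame_time()
in the reader thread and jack_last_frame_time() in the process callback (the
struct and helper names are made up):

/* Each event is stamped with the frame time at which it arrived and is
 * applied exactly one period later at the same offset, trading one
 * period of extra latency for low jitter. */
#include <jack/jack.h>
#include <jack/ringbuffer.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    jack_nframes_t time;          /* frame time when the event arrived */
    unsigned char  bytes[4];
} timed_event_t;

static jack_ringbuffer_t *rb;     /* created before jack_activate() */
static jack_client_t     *client; /* from jack_client_open() */

/* Device-reader thread: stamp the event and queue it. */
void on_device_event(const unsigned char *data)
{
    timed_event_t ev;
    ev.time = jack_frame_time(client);   /* estimate of "now" in frames */
    memcpy(ev.bytes, data, sizeof(ev.bytes));
    if (jack_ringbuffer_write_space(rb) >= sizeof(ev))
        jack_ringbuffer_write(rb, (const char *)&ev, sizeof(ev));
}

int process(jack_nframes_t nframes, void *arg)
{
    jack_nframes_t cycle_start = jack_last_frame_time(client);
    timed_event_t  ev;
    (void)arg;

    while (jack_ringbuffer_read_space(rb) >= sizeof(ev)) {
        jack_ringbuffer_read(rb, (char *)&ev, sizeof(ev));
        /* Delay by one full period: the offset inside this buffer equals
         * the offset at which the event arrived in the previous one. */
        int32_t offset = (int32_t)(ev.time + nframes - cycle_start);
        if (offset < 0)
            offset = 0;                    /* late (e.g. after an xrun) */
        else if (offset >= (int32_t)nframes)
            offset = (int32_t)nframes - 1; /* arrived during this cycle */
        /* ... apply ev.bytes at frame 'offset' of this period ... */
    }
    return 0;
}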
One solution for a custom hardware interface could be to use an audio input on
the sound card as a trigger/control input. Then you can just deal with it in
your process cycle. But most audio interfaces don't like DC signals, so you
have to be careful what you send in.
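An untested sketch of that trigger-input idea, detecting a short pulse (rather
than a DC level, since most inputs are AC-coupled) with a simple threshold plus
hysteresis inside the process callback (THRESHOLD and the port are
placeholders):

#include <jack/jack.h>

#define THRESHOLD 0.5f

static jack_port_t *trigger_in;   /* registered as an audio input port */
static int          armed = 1;    /* re-armed after the pulse falls away */

int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in = (jack_default_audio_sample_t *)
        jack_port_get_buffer(trigger_in, nframes);
    (void)arg;

    for (jack_nframes_t i = 0; i < nframes; i++) {
        if (armed && in[i] > THRESHOLD) {
            armed = 0;
            /* ... fire the control event at sample offset i ... */
        } else if (!armed && in[i] < THRESHOLD * 0.5f) {
            armed = 1;        /* hysteresis: wait for the pulse to end */
        }
    }
    return 0;
}

This gives you sample-accurate trigger positions for free, since the trigger
travels in the same buffers as the audio.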
Just so you know why it is complicated :)
G
--
electronic & acoustic musics
http://www.xs4all.nl/~gml