[linux-audio-dev] XAP and Event Outputs

Tim Goetze tim at quitte.de
Wed Dec 11 20:54:00 UTC 2002


Tim Hockin wrote:

>> i'm becoming tired of discussing this matter. fine by me if 
>> you can live with a plugin system that goes only half the way 
>> towards usable event handling. 
>
>I haven't been following this issue too closely, rather waiting for some
>decision.  I have been busy incorporating other ideas.  What do you suggest
>as an alternative to an unsigned 32 bit sample-counter?

i'm using event structures with a timestamp measured in
'ticks' for all plugins. the 'tick rate' is defined for 
any point in time through a tempo map in my implementation. 
the 'tick' type is floating-point.
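
to make this concrete, the event structure boils down to
something like the following -- a simplified sketch, not the
verbatim declaration from my code:

  typedef double tick_t;          /* musical time, floating-point */

  typedef struct {
      tick_t  tick;               /* timestamp in ticks */
      int     type;               /* note, control change, ... */
      int     index;              /* port / controller index */
      float   value;              /* payload for parameter-type events */
  } event_t;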

yes, all plugins need to issue 'host' calls if they want
to map 'tick' to 'time' or 'frame' and back. however,
the overhead is negligible in terms of performance.
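
the mapping itself is just a tempo-map lookup on the host
side; the prototypes could look roughly like this
(hypothetical names, reusing the tick_t from above, not my
exact api):

  struct host;                    /* opaque host handle */

  /* walk the host's tempo map to convert between musical
   * time and audio frames, and back */
  unsigned long host_tick_to_frame (struct host *h, tick_t tick);
  tick_t        host_frame_to_tick (struct host *h, unsigned long frame);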

allow me to digress somewhat beyond the scope of the question:

event outputs are implemented as lock-free fifos. 1..n outputs 
can connect to 1 input. because events remain in the 
outbound fifos until fetched, sorting is simple as long as 
individual fifos are filled in the correct order -- which hasn't 
yet proved problematic.
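
the fifos themselves are plain single-writer, single-reader
ring buffers. a bare-bones sketch, using the event_t from
above (my actual code differs in detail, and on smp you
would want proper memory barriers rather than 'volatile'):

  #define FIFO_SIZE 256           /* power of two */

  typedef struct {
      event_t slots[FIFO_SIZE];
      volatile unsigned write;    /* touched by the producing thread only */
      volatile unsigned read;     /* touched by the consuming thread only */
  } event_fifo;

  /* producer side: returns 0 if the fifo is full */
  int fifo_put (event_fifo *f, const event_t *e)
  {
      unsigned w = f->write, next = (w + 1) & (FIFO_SIZE - 1);
      if (next == f->read) return 0;
      f->slots[w] = *e;
      f->write = next;
      return 1;
  }

  /* consumer side: events stay in the fifo until explicitly
   * advanced past */
  const event_t * fifo_peek (event_fifo *f)
  {
      return (f->read == f->write) ? 0 : &f->slots[f->read];
  }

  void fifo_advance (event_fifo *f)
  {
      f->read = (f->read + 1) & (FIFO_SIZE - 1);
  }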

two strategies for block-based processors are possible
(a rough sketch of both follows below):
 
* fixed blocks -- calculate 'tick' at the end of the 
  block and process all events from all inbound fifos
  that are stamped <= 'tick'.

note that in this case, only one 'tick' mapping is needed;
the rest is simple comparison. of course, dividing the cycle
into subcycles for better time resolution is possible too.

* sample-accurate -- determine the next event from all
  inbound connections, map its tick to audio frames,
  process audio until this frame, process the event(s) found, 
  and repeat until the block is complete.

yes, this introduces some overhead if lots of events are
hurled at a plugin implementing sample accuracy. however,
i think this is less of a problem, as i have come to believe
that good interpolation methods should be preferred over
massive event usage.
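
to make the two strategies concrete, here is roughly what
the two run() variants could look like. this is paraphrased
for illustration, not lifted from my code, and it reuses the
fifo helpers and mapping calls sketched above;
earliest_pending(), source_of(), handle_event() and
process_audio() are hypothetical stand-ins for the plugin's
own bookkeeping:

  typedef struct plugin plugin;
  struct plugin { struct host *host; unsigned long frame; /* ... */ };

  const event_t *earliest_pending (plugin *p);  /* lowest-tick event across all inbound fifos */
  event_fifo    *source_of (plugin *p, const event_t *e);
  void           handle_event (plugin *p, const event_t *e);
  void           process_audio (plugin *p, unsigned long from, unsigned long to);

  /* fixed blocks: one tick mapping per cycle, the rest is comparison */
  void run_fixed (plugin *p, unsigned long frames)
  {
      tick_t end = host_frame_to_tick (p->host, p->frame + frames);
      const event_t *e;

      while ((e = earliest_pending (p)) && e->tick <= end) {
          handle_event (p, e);
          fifo_advance (source_of (p, e));
      }
      process_audio (p, 0, frames);
      p->frame += frames;
  }

  /* sample-accurate: split the block at each event's frame */
  void run_accurate (plugin *p, unsigned long frames)
  {
      unsigned long done = 0;

      while (done < frames) {
          const event_t *e = earliest_pending (p);
          unsigned long at = frames;    /* default: run to the end of the block */

          if (e) {
              unsigned long f = host_tick_to_frame (p->host, e->tick);
              if (f >= p->frame + frames)
                  e = 0;                /* belongs to a later block */
              else
                  at = (f > p->frame + done) ? f - p->frame : done;
          }
          if (at > done) {
              process_audio (p, done, at);
              done = at;
          }
          if (e) {
              handle_event (p, e);
              fifo_advance (source_of (p, e));
          }
      }
      p->frame += frames;
  }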

please let me go into yet more depth:

another, quite substantial, benefit of the design is that 
the fifos can be filled from one thread (midi in, for
example) and fetched from another (audio, for example).
this also allows for least-latency routing of events across 
threads.
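
in practice the midi thread does little more than stamp and
push, and the audio thread picks the events up at the start
of its cycle. a hypothetical fragment of the producing side
(current_frame() and EVENT_CONTROL are illustration only):

  enum { EVENT_CONTROL = 1 };                    /* illustration only */
  unsigned long current_frame (struct host *h);  /* hypothetical: audio time "now" */

  /* midi thread: stamp the incoming control change and push it,
   * nothing else */
  void on_midi_control (plugin *midi_in, event_fifo *out,
                        int controller, float value)
  {
      event_t e;

      e.tick  = host_frame_to_tick (midi_in->host,
                                    current_frame (midi_in->host));
      e.type  = EVENT_CONTROL;
      e.index = controller;
      e.value = value;

      fifo_put (out, &e);   /* lock-free, safe against the audio thread */
  }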

the current incarnation of this system handles plugins 
operating on the same sets of data and events in six major 
threads -- in fact, any combination of these can meet in one 
plugin (a rough sketch of that idea follows the list below):

periodic:
* audio (pcm interrupt)
* low-latency, high-frequency time (rtc interrupt, midi out)
* high-latency, low-frequency time (sequencer prequeuing)

on-demand:
* midi in (in fact anything that's pollable)
* script plugins (i use python which is not rt-capable)
* disk access.
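
a plugin could declare the combination of contexts it runs
in with a simple bitmask -- this is purely a hypothetical
illustration of the idea, not my actual declarations:

  enum {
      RUN_AUDIO     = 1 << 0,   /* pcm interrupt */
      RUN_TIME_FAST = 1 << 1,   /* rtc interrupt, midi out */
      RUN_TIME_SLOW = 1 << 2,   /* sequencer prequeuing */
      RUN_POLLED    = 1 << 3,   /* midi in, anything pollable */
      RUN_SCRIPT    = 1 << 4,   /* python, not rt-capable */
      RUN_DISK      = 1 << 5
  };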

the design was chosen because i deem it to impose the
fewest limitations on what plugins can be and how they can
be connected, and so far it hasn't failed to live up to 
this promise. 

currently it comprises midi in and out, jack and alsa 
(duplex), event sequencing, scheduled audio playback and 
recording, ladspa units (with event-based parameter i/o), 
tempo maps (modifiable in real time), a few native filters 
and oscillators, and the ability to code event-based plugins 
in python (there's even the possibility of processing audio 
with python, but it does introduce a good deal of latency).

i consider myself far from being a coding wizard. this
enumeration serves to prove that the design i've chosen,
which uses 'musical time' stamps throughout, can in fact
support a great variety of functionality, and that this
universality is a worthy goal. 

i'd also like you to understand this post as describing
the workings of my ideal candidate for a generic plugin
API, or parts thereof.

code is coming soon to an http server near you, when time
permits.

>I'd hate to lose good feedback because you got tired of it..

thanks. :)

tim



