[linux-audio-dev] XAP and Event Outputs

David Olofson david at olofson.net
Tue Dec 10 19:21:00 UTC 2002


On Wednesday 11 December 2002 00.08, Tim Goetze wrote:
> David Olofson wrote:
> >> that's a mistake, i think. there are some definite benefits to
> >> being able to define events' time in musical time as well.
> >
> >Like what? Since we're talking about sample accurate timing, isn't
> >asking the host about the musical time for an event timestamp
> >sufficient for when you want that information?
>
> like tempo changes without recalculating all later event
> times --

Of course - I'm perfectly aware of all the issues with tempo changes, 
looping, "seeking" and all that.

However, in a properly designed event system, there *are* no later 
event times to recalculate, since not even the events for the next 
buffer *exist* yet.

See how I handle this in Audiality. Originally, I thought it would be 
a nice idea to be able to queue events ahead of the current buffer, 
but it turned out to be a very bad idea for various reasons.

And normal plugins don't generate and "output" audio or control data 
an arbitrary number of buffers ahead. Why should they do that with 
events?

IMNSHO, the simple answer is "They should not!"


> this also allows prequeuing without emptying and
> reloading the queues on tempo change.

Why would you prequeue, and *what* would you prequeue?

Think about an event processor, and it becomes really rather obvious 
that you *cannot* produce output beyond the end of the "buffer time 
frame" you're supposed to work with. You don't have the *input* yet.


> in general, it makes
> all timing calculations (quantization, arpeggiators etc)
> one level easier, and they do tend to get hairy quickly
> enough.

And it's better to have an event system that needs host calls to even 
*look* at an event?

I believe controlling synths with timestamped events can be hairy 
enough without having to check the type of every timestamp as well.


> >Note that I'm talking about a low level communication protocol for
> >use in situations where you would otherwise use LADSPA style
> > control ports, or audio rate control streams. These are *not*
> > events as stored inside a sequencer.
>
> but you'll probably end up wanting to use a sequencer to
> store and/or (re)generate them, based on musical time.

Yes - but what's the problem?

When you get an event, ask the host what the music time is for the 
timestamp of that event, and store that. (Or transport time, or 
whatever you like best.)

When you want to send events for one buffer, you just ask for the 
music time for the first sample of the buffer, and for the sample 
after the end of the buffer (first of next). Then you find all events 
in that range, convert them from your "database" format into actual 
events (which includes converting the timestamps to "event time"), 
and send them.
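
In code, it could look something like this (the host call and struct 
names are invented for the example; the real API will differ):

#include <stdint.h>
#include <stddef.h>

/* Hypothetical host call: musical time at an absolute sample position. */
extern double host_music_time_at(uint64_t sample_pos);

/* What the sequencer stores internally... */
typedef struct
{
    double music_time;    /* position in ticks/beats, whatever you use */
    uint32_t type;
    float value;
} STORED_EVENT;

/* ...and what actually gets sent to plugins. */
typedef struct
{
    uint32_t frame;    /* sample offset within the buffer */
    uint32_t type;
    float value;
} EVENT;

/* Send all stored events that belong to the current buffer.
 * 'buf_start' is the absolute sample position of the buffer's first
 * frame.
 */
void send_events_for_buffer(const STORED_EVENT *db, size_t n,
        uint64_t buf_start, uint32_t nframes,
        void (*send)(const EVENT *ev))
{
    size_t i;
    /* Music time at the first sample of this buffer, and at the first
     * sample of the next one. Everything in between is ours to send. */
    double t0 = host_music_time_at(buf_start);
    double t1 = host_music_time_at(buf_start + nframes);

    for(i = 0; i < n; i++)
    {
        double frac;
        EVENT ev;
        if(db[i].music_time < t0 || db[i].music_time >= t1)
            continue;

        /* Convert music time back to a sample offset. The linear map
         * assumes constant tempo across the buffer; a tempo change
         * simply shows up as different t0/t1 for the next buffer. */
        frac = (db[i].music_time - t0) / (t1 - t0);
        ev.frame = (uint32_t)(frac * nframes);
        ev.type = db[i].type;
        ev.value = db[i].value;
        send(&ev);
    }
}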

That's it! Why do you want to force complexity that belongs in the 
sequencer upon every damn plugin in the system, as well as the host? 

(And people are complaining about multiple data types... *heh*)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---


