On Wednesday 11 December 2002 02.06, Tim Goetze wrote:
> David Olofson wrote:
>
> > And normal plugins don't generate and "output" audio or control
> > data an arbitrary number of buffers ahead. Why should they do
> > that with events?
>
> you may have an algorithm written in a scripting (non-rt
> capable) language to generate events for example.
That's a pretty special case, I'd say. (Still, I do have a scripting
language in Audiality... :-)
> or you don't want to iterate a lot of stored events at every
> sample to find out which to process, and still offer
> sample-accurate timing.
So, sort them and keep track of where you are. You'll have to sort
the events anyway, or the event system will break down when you send
events out-of-order. The latter is what the event processing loop of
every plugin will do, BTW - pretty trivial stuff.
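For illustration, here's a minimal sketch of the "sort on send, keep
track on receive" idea, assuming an event struct roughly like the
AEV_event used further down. ev_insert_sorted() is not part of any
real API, and timestamp wrap-around is ignored to keep it short:

typedef struct EV
{
	struct EV	*next;
	unsigned	frame;	/* audio frame timestamp */
	int		type;
} EV;

/* Sender side: keep the queue ordered by timestamp */
static void ev_insert_sorted(EV **first, EV *ev)
{
	EV **link = first;
	/* Find the first event with a later timestamp... */
	while(*link && (*link)->frame <= ev->frame)
		link = &(*link)->next;
	/* ...and splice the new event in before it. */
	ev->next = *link;
	*link = ev;
}

The receiver then just walks the queue from the front, popping
events as their timestamps come up - see the Audiality code below.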
> > Think about an event processor, and it becomes really rather
> > obvious that you *cannot* produce output beyond the end of the
> > "buffer time frame" you're supposed to work with. You don't have
> > the *input* yet.
>
> i don't see how this touches the workings of an event
> processor, rt or not.
Do event processors possess time-travelling capabilities? Otherwise,
I don't see how they possibly could even think about what happens
beyond the end of the current buffer. How would you deal with input
from real-time controllers, such as a MIDI keyboard?
> and a 'musical' event processor is more likely to be rooted
> in musical time than in audio time.
It sounds like you're talking about "music edit operation plugins"
rather than real time plugins.
> in general, it makes all timing calculations (quantization,
> arpeggiators etc) one level easier, and they do tend to get
> hairy quickly enough.
>
> > And it's better to have an event system that needs host calls
> > to even *look* at an event?
>
> host calls only to convert the timestamp on the event, i
> understand.
Yeah. And that's what you do for every event before even considering
processing it - which means you'll have to check the event twice
after each "run" of audio processing (if any).
> you need the reverse if your events are all
> audio-timestamped instead.
When and where? When would your average synth want to know about
musical time, for example?
> if you keep a table or other cache mapping audio frame to
> musical time for the current block of audio, you're just
> fine.
No, not if you're processing or generating events beyond the end of
the current block.
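For reference, the kind of per-block cache being described might
look something like this - purely hypothetical names, and note that
it only covers frames within the current block, which is exactly the
limitation above:

#define MAX_BLOCK_FRAMES	4096	/* assumed maximum block size */

typedef struct
{
	double		tick[MAX_BLOCK_FRAMES];	/* musical time per frame */
	unsigned	frames;			/* frames valid in this block */
} TICK_MAP;

/* Only meaningful for frame < map->frames */
static inline double frame2tick(const TICK_MAP *map, unsigned frame)
{
	return map->tick[frame];
}

The host would fill it in once per block from its tempo map.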
> > I believe controlling synths with timestamped events can be
> > hairy enough without having to check the type of every
> > timestamp as well.
>
> i think it's sane to keep timestamps within one domain.
Agreed.
That's it!
> > Why do you want to force complexity that belongs in the
> > sequencer upon every damn plugin in the system, as well as
> > the host?
>
> on average, this is not complex if done right i think.
No, but why do it *at all* in the average case, just to make the
special case a bit easier?
I think one or two host calls for every event processed is pretty
expensive, especially considering that my current implementation does
only this:
In the API headers:
/* Frames remaining until 'frame', counted from 'offset' frames
 * past the current timer position.
 */
#define AEV_TIME(frame, offset) \
	((unsigned)((frame) - aev_timer - (offset)) & \
	 AEV_TIMESTAMP_MASK)

/* Return the number of frames until the next event in the port,
 * or AEV_TIMESTAMP_MASK if the port is empty.
 */
static inline unsigned aev_next(AEV_port *evp, unsigned offset)
{
	AEV_event *ev = evp->first;
	if(ev)
		return AEV_TIME(ev->frame, offset);
	else
		return AEV_TIMESTAMP_MASK;
}

/* Detach and return the first event in the port, if any */
static inline AEV_event *aev_read(AEV_port *evp)
{
	AEV_event *ev = evp->first;
	if(!ev)
		return NULL;
	evp->first = ev->next;
	return ev;
}

/* Return an event to the global event pool */
static inline void aev_free(AEV_event *ev)
{
	ev->next = aev_event_pool;
	aev_event_pool = ev;
}
In the plugin:
while(frames)
{
	unsigned frag_frames;
	/* Handle all events that are due at the current offset 's' */
	while( !(frag_frames = aev_next(&v->port, s)) )
	{
		AEV_event *ev = aev_read(&v->port);
		switch(ev->type)
		{
		  case SOME_EVENT:
			...do something...
			break;
		  case SOME_OTHER_EVENT:
			...do something else...
			break;
		}
		aev_free(ev);
	}
	/* Run audio until the next event, or the end of the block */
	if(frag_frames > frames)
		frag_frames = frames;
	...process frag_frames of audio...
	s += frag_frames;	/* Start offset in buffers */
	frames -= frag_frames;
}
And again, why would I want to know the musical time of this event in
here? (This is a stripped version of the Audiality voice mixer - a
"sample player", that is.)
and
if i use a system to produce music, to me it seems natural
for the system to understand the concept of musical time.
If you just *use* a system, you won't have a clue what kind of
timestamps it uses.
Do you know how VST timestamps events?
//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing.  |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---