David Olofson wrote:
> And normal plugins don't generate and "output" audio or control data
> an arbitrary number of buffers ahead. Why should they do that with
> events?
you may have an algorithm written in a scripting (non-rt
capable) language generating events, for example. or you
don't want to iterate over a lot of stored events at every
sample to find out which to process, while still offering
sample-accurate timing.
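
for illustration, a minimal sketch of that per-block dispatch
(all names here are made up): events stay sorted by frame
timestamp, and each block you only touch those that fall inside
the current buffer, yet every one still lands on its exact
sample offset.

typedef struct {
    unsigned long frame;  /* absolute audio frame of the event */
    int           value;  /* payload, e.g. a controller change */
} event_t;

/* hypothetical callback applying one event at a sample offset */
void handle_event(const event_t *e, unsigned long offset);

void process_block(event_t *ev, int n_ev, int *next,
                   unsigned long block_start, unsigned long nframes)
{
    while (*next < n_ev && ev[*next].frame < block_start + nframes) {
        /* offset of this event within the current buffer */
        unsigned long offset = ev[*next].frame - block_start;
        handle_event(&ev[*next], offset);
        (*next)++;
    }
}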
> Think about an event processor, and it becomes really rather obvious
> that you *cannot* produce output beyond the end of the "buffer time
> frame" you're supposed to work with. You don't have the *input* yet.
i don't see how this touches the workings of an event
processor, rt or not. and a 'musical' event processor
is more likely to be rooted in musical time than in
audio time.
in general, it makes all timing calculations (quantization,
arpeggiators etc.) one level easier, and they do tend to get
hairy quickly enough.
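
to make "one level easier" concrete, a hypothetical sketch: with
tick timestamps, snapping an event to a 16th-note grid is plain
integer arithmetic; with audio-frame stamps you would first have
to go through the tempo map and back.

/* quantize a musical-time stamp to the nearest grid line;
 * 'grid' is in ticks, e.g. ppqn / 4 for 16ths ('ppqn' assumed) */
unsigned long quantize_ticks(unsigned long t, unsigned long grid)
{
    return ((t + grid / 2) / grid) * grid;
}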
> And it's better to have an event system that needs host calls to even
> *look* at an event?
host calls are needed only to convert the timestamp on the
event, as i understand it. and you need the reverse conversion
if your events are all audio-timestamped instead.
if you keep a table or other cache mapping audio frames to
musical time for the current block of audio, you're just
fine.
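
a minimal sketch of such a cache (all names assumed, and tempo
taken as constant within one block): a single entry valid for the
current buffer lets a plugin convert in either direction with a
multiply or a divide, no host call needed.

typedef struct {
    unsigned long frame0;          /* audio frame at block start */
    double        tick0;           /* musical time at frame0     */
    double        ticks_per_frame; /* tempo within this block    */
} tempo_cache_t;

static double frame_to_tick(const tempo_cache_t *c, unsigned long f)
{
    return c->tick0 + (f - c->frame0) * c->ticks_per_frame;
}

static unsigned long tick_to_frame(const tempo_cache_t *c, double t)
{
    /* assumes ticks_per_frame > 0, i.e. the transport is rolling */
    return c->frame0 + (unsigned long)((t - c->tick0) / c->ticks_per_frame);
}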
> I believe controlling synths with timestamped events can be hairy
> enough without having to check the type of every timestamp as well.
i think it's sane to keep timestamps within one domain.
> That's it! Why do you want to force complexity that belongs in the
> sequencer upon every damn plugin in the system, as well as the host?
on average this is not complex if done right, i think. and
if i use a system to produce music, it seems natural to me
for the system to understand the concept of musical time.
tim