Hi, and welcome! :-)
BTW, do you know about the GMPI discussion? The XAP team is over there
as well.
http://www.freelists.org/cgi-bin/list?list_id=gmpi
On Friday 21 February 2003 11.46, torbenh(a)gmx.de wrote:
[...]
When the offending event is received:
* Fire a worker callback to allocate and initialize new buffers.
It would be nice if the worker thread was fired when the event was
queued. Consider a step sequencer sequencing the delay time:
1. The clock would generate an event scheduled 1/Hz seconds into
the future.
2. The event gets processed immediately by the sequencer and queued,
still with a future timestamp, onto the delay.
3. Queuing the event on the delay could fire the worker thread
(roughly as in the sketch below).
4. When the event is due, the worker thread could already be
finished, and the buffer could be exchanged in real time.
This would require some flags on the sequencer plugin, which is a
pure event processor (or maybe not even that, since a pure event
processor does not have to be in sync with audio).
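Roughly, the idea would be a hook on the receiving queue, something
like this sketch (all names are made up; this is not a real API):

typedef struct EVENT
{
	struct EVENT	*next;
	unsigned	 when;	/* timestamp; possibly blocks in the future */
	float		 value;
} EVENT;

typedef void (*QUEUE_HOOK)(void *plugin, EVENT *e);

typedef struct QUEUE
{
	EVENT		*first, *last;
	QUEUE_HOOK	 hook;		/* e.g. "start the worker now" */
	void		*plugin;
} QUEUE;

static void queue_event(QUEUE *q, EVENT *e)
{
	e->next = NULL;
	if(q->last)
		q->last->next = e;
	else
		q->first = e;
	q->last = e;
	if(q->hook)
		q->hook(q->plugin, e);	/* receiver can start preparing early */
}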
Well...
1) Only events for the current block will be generated
and queued. Events are just like audio in that respect;
structured streams, basically. *Everything* is
processed one block at a time. That is, for there to
be any chance at all that the worker thread is finished
before the host calls the queue's "owner" plugin, you need an
SMP machine, or you have to run the worker thread at a
higher priority than the audio thread. The latter would
obviously be rather silly, and the former won't work
if both/all CPUs are under heavy load; RT or not.
2) Events are normally sent directly to receivers, and
either way, they're allocated, filled in, sent and
received using very simple inline functions. A host
could route some control events through itself to do
something before passing them on to the receiver, but
the host won't be able to do anything before the
sender process() call returns.
3) Doing anything like this just to *slightly* increase
the chances of soft real time worker threads finishing
within a single buffer cycle, while the audio thread
is working, in a way that can only work on SMP machines,
is rather pointless. If you fire a worker callback, you
do it because you suspect it might take so long to
finish that doing the work in the audio thread would
cause a drop-out. That is, you're not expecting a
result until one or more blocks later.
It's as simple as this: Either a control is implemented in a real time
safe manner (ie no need for a worker callback), or it is not really
usable for real time control. Worker callbacks are for big and/or
nondeterministic jobs that need to be done without stalling the whole
net.
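For illustration, here is a minimal sketch of that worker callback
pattern, assuming hypothetical host calls host_fire_worker() and
host_free_later(); nothing here is actual XAP API:

#include <stdlib.h>

/* Hypothetical host services; not actual XAP calls: */
extern void host_fire_worker(void (*cb)(void *), void *user);
extern void host_free_later(void *block);

typedef struct DELAY
{
	float		*buffer;	/* in use by the audio thread */
	float		*new_buffer;	/* prepared by the worker; NULL if none */
	unsigned	 size, new_size;
	volatile int	 worker_busy;	/* (real code would use proper atomics) */
} DELAY;

/* Worker callback; runs outside the audio thread, may block and malloc. */
static void delay_worker(void *user)
{
	DELAY *d = (DELAY *)user;
	d->new_buffer = calloc(d->new_size, sizeof(float));
	d->worker_busy = 0;
}

/* Audio thread, once per block. 'wanted_size' comes from a control event. */
static void delay_process(DELAY *d, unsigned wanted_size, unsigned frames)
{
	if(d->new_buffer && !d->worker_busy)
	{
		/* The result arrived, possibly several blocks after the
		 * event; swap it in, which is RT safe. */
		float *old = d->buffer;
		d->buffer = d->new_buffer;
		d->new_buffer = NULL;
		d->size = d->new_size;
		host_free_later(old);	/* don't free() in the RT thread */
	}
	if(wanted_size != d->size && !d->worker_busy)
	{
		/* Don't allocate here; hand the big job to the worker. */
		d->new_size = wanted_size;
		d->worker_busy = 1;
		host_fire_worker(delay_worker, d);
	}
	/* ...run the delay for 'frames' samples using d->buffer... */
	(void)frames;
}

The audio thread never waits for anything; it just notices, some
block later, that the new buffer has shown up.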
The only other way I can think of to get the job done would be to take
the whole plugin out of the net, have it process the event in a
"fake" engine thread, and then put it back in. That's a very messy
way of doing it (just for starters; event queues are not thread
safe), and it doesn't allow plugins to keep doing something useful
while waiting for the RT unsafe job to finish, even when they could.
It would just
be a hairy, nasty hack, and hardly even half a solution.
[...]
I see. I also
see a big difference between synths and "effects"
here. It's hard to tell when you need to start processing a synth
again, after it's been silent. You can *guess* by looking at the
input of an effect plugin, but you can't even be sure in that
case with some effects. Such effects would just have to ignore
this feature altogether and be processed at all times.
The synth could tell that it wants to be processed when the event
it just received becomes due.
How? The host won't see any events, as they're normally just added
directly to the synth's event queues by inline code in the sender...
Besides, even if the host snooped every single control of the synth
(ie all incoming events), it wouldn't know what's relevant and what
isn't. Only the synth can know what combinations of values actually
produce sound.
[...]
A silent
plugin is also one that you cannot know when to start
processing again... Unless the host eavesdrops on all event queues
of silent plugins. Plugins could provide a callback to check all
queues, but then what's the point? Plugins could just do this in
run() instead. Then they could even choose to ignore some events,
if appropriate.
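For example (pure sketch, invented names), run() could look
something like this, with only the plugin deciding which events
actually matter:

typedef struct EVENT
{
	struct EVENT	*next;
	unsigned	 when;
	int		 control;
	float		 value;
} EVENT;

typedef struct SYNTH
{
	EVENT	*queue;		/* incoming control events */
	int	 playing;	/* are we currently producing sound? */
} SYNTH;

/* Only the synth can know which events actually wake it up. */
static int wakes_us(SYNTH *s, EVENT *e)
{
	(void)s;
	return e->control == 0 && e->value > 0.0f;	/* say, "gate on" */
}

static void synth_run(SYNTH *s, float *out, unsigned frames)
{
	EVENT *e;
	for(e = s->queue; e; e = e->next)
		if(wakes_us(s, e))
			s->playing = 1;
	s->queue = NULL;	/* other events may simply be ignored */
	if(!s->playing)
		return;		/* still silent */
	/* ...render 'frames' samples into 'out'... */
	(void)out;
	(void)frames;
}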
The whole thing would get simpler if the queuing of events was
done with a callback on the plugin.
I actually think it would *complicate* things.
The main reason we have timestamped events is that we do *not* want
receivers to do stuff in between process() calls. The whole point is
to get sample accurate timing without buffer splitting or blockless
processing. Function calls are more expensive than inline event
handling, and IMO, pretty much eliminate the point of timestamped
events.
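For comparison, here is roughly what the receiving end looks like
with timestamped events (sketch only; struct layout and names are
invented). The plugin splits its own inner loop at event timestamps,
with no per-event function calls and no host-side buffer splitting:

typedef struct EVENT
{
	struct EVENT	*next;
	unsigned	 when;		/* frame offset within this block */
	float		 value;
} EVENT;

/* Sample accurate gain control. ('gain' would really be kept in the
 * plugin instance, so it persists across blocks.) */
static void plugin_process(EVENT *queue, float *out, unsigned frames)
{
	float gain = 1.0f;
	unsigned pos = 0;
	while(pos < frames)
	{
		unsigned stop = frames;
		/* Apply everything that is due right now... */
		while(queue && queue->when <= pos)
		{
			gain = queue->value;
			queue = queue->next;
		}
		/* ...then run the inner loop up to the next event. */
		if(queue && queue->when < stop)
			stop = queue->when;
		for( ; pos < stop; ++pos)
			out[pos] *= gain;
	}
}

The host never sees any of this; the events go straight from the
sender into this loop.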
Another issue with function calls is that they make remote connections
in low latency networks impossible, because each event transmission
becomes an RPC call, rather than an asynchronous write operation.
I suggest you look at the galan code to see what I mean,
or I will post some excerpts if you are interested.
Yes, some examples would be interesting. How do you deal with sample
accurate control? (API and implementation.)
At the moment it uses one global event queue, which makes
performance bad if you have two unrelated high frequency
clocks in the mesh. The XAP model of individual event queues
would fix this, though with a not-so-small coding effort involved.
Probably. It also makes it possible to avoid explicit event
dispatching inside plugins with more than one inner loop, and it
eliminates the need for sorting events as they're sent.
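Just to spell out the dispatching part (a contrived sketch with
invented names, timestamps left out for brevity): with one shared
queue, every event goes through a target switch, while with one
queue per control, each inner loop simply drains its own queue, and
since every sender sends in timestamp order, there is nothing to
sort:

typedef struct XEVENT
{
	struct XEVENT	*next;
	int		 target;	/* only needed with a shared queue */
	float		 value;
} XEVENT;

enum { TARGET_GAIN, TARGET_PAN };

/* One shared queue: every event goes through a dispatch switch. */
static void process_shared(XEVENT *q, float *gain, float *pan)
{
	for( ; q; q = q->next)
		switch(q->target)
		{
		  case TARGET_GAIN:	*gain = q->value; break;
		  case TARGET_PAN:	*pan = q->value; break;
		}
}

/* One queue per control: each inner loop just drains its own queue. */
static void process_own(XEVENT *q, float *value)
{
	for( ; q; q = q->next)
		*value = q->value;
}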
[...]
I think our
direct plugin->plugin design is much more efficient
than anything that requires events to be sent through the host.
And as I've pointed out before, if that works, you automatically
have an API that allows the host to really be just a *host*, with
sequencers and whatnot running as plugins; real or host
integrated.
This management stuff could also be done in a library all plugins
link to.
Sure, but that's just a matter of how you call something that really
belongs in the host, or (as it is now) as inline code in plugins.
I don't see a reason for implementing graph traversal and
ordering in every host. This is a fairly complex thing and should
not be seen by an XAP user.
What has that got to do with how it's actually implemented...? "Direct
communication" in no way rules out a host SDK, so I really don't get
your point.
Also, memory management for events has to be done
without malloc;
this should also be taken care of by the XAP core.
Of course. That doesn't mean you have to waste cycles making function
calls all the time. The XAP event system is almost identical to what
I have running in Audiality, and the only function call an event
sender would ever make is the "panic" host call you hit if the pool
is exhausted. The rest is just tiny inlines. (Well, there is a
"sorting" send function as well, but IIRC, I'm not using it, and
there won't even be one in XAP, because there's no need for one.)
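To give an idea of the scale of it (again a sketch with invented
names; the real structs and calls will differ), the whole sending
side boils down to something like this:

typedef struct EVENT
{
	struct EVENT	*next;
	unsigned	 when;
	int		 control;
	float		 value;
} EVENT;

typedef struct EVENT_POOL
{
	EVENT	*free_list;	/* preallocated by the host; no malloc in RT */
} EVENT_POOL;

typedef struct EVENT_QUEUE
{
	EVENT	*first, *last;
} EVENT_QUEUE;

/* The "panic" host call; hit only if the pool is exhausted. */
extern EVENT *host_event_panic(EVENT_POOL *pool);

static inline EVENT *event_alloc(EVENT_POOL *pool)
{
	EVENT *e = pool->free_list;
	if(!e)
		return host_event_panic(pool);	/* the only real function call */
	pool->free_list = e->next;
	return e;
}

static inline void event_send(EVENT_QUEUE *q, EVENT *e)
{
	e->next = NULL;
	if(q->last)
		q->last->next = e;
	else
		q->first = e;
	q->last = e;
}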
[stuff with
too much detail for me, being almost new to the discussion,
deleted; I will have a glance at your website, where was it?]
I hope it is not too late for me to push the API to the galan model
:-)
Well, I'll look at it when I get the time. :-) (During the weekend,
maybe.)
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---