On Fri, Feb 28, 2003 at 07:05:19PM +0100, David Olofson wrote:
On Friday 28 February 2003 09.20, torbenh(a)gmx.de wrote:
ok... but i see realtime as a subset of offline
audio processing...
It is not. Real time applications have some very strict requirements
that no other applications have. Since there is no way around this
fact, we just have to deal with it, and design the rest of the system
around the requirements.
there are some things you must not do in a
realtime system.
but you can do these in a nonRT system...
Exactly - and that is why you must design in a way that allows RT
hosts and plugins to operate without non RT safe actions.
yes and because a nonRT system can do more things it is a superset
of RT. (but this is just being picky :)
you can build realtime systems with C although
you can also do
while(1) ;
i see galan as a programming language.
so i dont want to lock the user into a system where he can only
build realtime systems. galan should warn the user if he built
something not realtime safe though... but it should not be an
error....
Well, it's not really an error to implement an XAP plugin that won't
run in real time and/or is nondeterministic. However, I don't quite
see what you could do in an RT capable *host* that inherently cannot
run in RT. Multi-pass subnets with conditional loops...? Can you give
some examples?
I realize that non-linear time processing falls into this category,
but I don't see how that is relevant to a real time plugin API. I can
see how it would be *possible* to support it in a plugin API that
also does real time processing, but I think designing something like
that is asking for serious trouble. There are just too many conflicts
between these two ways of working.
yeah i see your point...
how about a flag indicating "dont call my process() from the realtime
thread, but from another thread"
the event sending in this case needs to be different but this could
be a different function/macro which needs to be called in this case.
note that in galan i have the opengl rendering components which receive
events too. It would be nice if this could be handled by XAP...
[...]
I'm
not sure... Is this basically about splitting actions up into
two events; one "this is what we'll do" event, and one "do it
now" event?
yes sort of...
the idea is to put plugins doing the prepare stuff in front of
the delay so they could safely fire a worker thread...
when the worker thread finishes... the delay adjusts
the timestamp, compensating for the non-determinism of
the worker thread...
i see this as a way of trying to guarantee the worker thread
a certain amount of time...
but if the event is late after delaying it, the delay is not
calibrated correctly...
but it is not so bad because realtime processing was not
interrupted...
Right - but then, why mess with this on the API level, and why use a
delay at all? Well, I can see the jitter reduction advantage, but I
don't think it's worth it. Just design plugins so that you can tell
them what you're going to do first, and then have guaranteed RT
response.
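A minimal sketch of that "tell it first, trigger later" idea, with made-up
names rather than actual XAP API: the non-RT "prepare" step builds the data
up front, and the RT "commit" is then nothing but a bounded-time pointer swap.

#include <stddef.h>

typedef struct {
    float  *samples;
    size_t  length;
} sample_buf;

static sample_buf *pending = NULL;   /* filled outside the RT thread */
static sample_buf *active  = NULL;   /* read by process()            */

/* Non-RT context (UI, loader thread, song setup): may block,
 * allocate, hit the disk - anything goes here. */
void prepare_sample(sample_buf *loaded)
{
    pending = loaded;   /* a real plugin would hand this over atomically */
}

/* RT thread, on the "do it now" event: must not block or fail. */
void commit_sample(void)
{
    active = pending;
}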
how is this guaranteed ?
when i tell a plugin it should load sample X
it might take hours to download this url via modem.....
it seems like you expect the user to know that and not
use the sample until it's downloaded.
the evtdelay and an evtgate are some of the plugins which
could handle delayed events, and i dont see a reason
to rule these out.
the plugin must do its own buffer splitting, process ramps etc...
why dont you want it to do
// This if() needs to be a nice macro that handles wrapping,
// or #define SAMPLETIME gint64
if (event->timestamp < current_time)
    drop_event_if_i_dont_like_it();
even late ramp events could be fixed... if the host did
not destroy their timestamps by setting them to current_time...
what are your reasons for this ?
does it have to do with timestamp wrapping ?
this could be fixed trivially...
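For what it's worth, the wrap handling itself is small; a sketch with
assumed names, not actual XAP API:

#include <stdint.h>

/* Wrap-safe lateness test for 32-bit sample timestamps.
 * Unsigned subtraction followed by a signed compare makes the
 * wrap-around "just work", as long as events are never more
 * than 2^31 frames early or late. */
typedef uint32_t sampletime;

static inline int event_is_late(sampletime ev, sampletime now)
{
    return (int32_t)(ev - now) < 0;
}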
For example, the delay could have a "max delay time" control for
buffer allocation, and a "delay time" control for setting the actual
delay. That way, you can use the same plugin for ms delays as well as
delays of several seconds, without hard-coding the plugin to some
arbitrary huge buffer size.
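A sketch of that two-control idea (names made up, not actual XAP API):
"max delay" is set while the plugin is outside the RT net and sizes the
buffer; "delay time" only moves the read offset, so changing it is RT safe.

#include <stdlib.h>

typedef struct {
    float    *buf;
    unsigned  size;        /* frames, fixed after init        */
    unsigned  write_pos;
    unsigned  delay;       /* current delay in frames, < size */
} delay_state;

/* Non-RT: instantiation or "max delay" change. */
int delay_init(delay_state *d, unsigned max_delay_frames)
{
    d->buf = calloc(max_delay_frames, sizeof *d->buf);
    d->size = max_delay_frames;
    d->write_pos = 0;
    d->delay = 0;
    return d->buf ? 0 : -1;
}

/* RT safe: just clamps (assuming size > 0) and stores the new value. */
void delay_set_time(delay_state *d, unsigned frames)
{
    d->delay = frames < d->size ? frames : d->size - 1;
}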
max delay time is not realtime safe...
how shall such an event be handled ?
[...]
in XAP this event feedback would already be illegal.
No, feedback isn't illegal in any way. It's just "late events"
that are illegal; only events for this or future blocks may even
be found in an input event queue.
why ?
the semantics of a "late event" is: process me as fast as you can,
i am late.
Sure, but timestamps wrap, and as a result, they wrap into the future
if they arrive late. This can be "fixed", but it requires that we
agree on a specific maximum allowed time an event may be late.
no problem: MAX_TIMESTAMP - current_blocksize.
future events are not delivered to the plugins.
More importantly though; it's not possible to handle control ramping
properly, unless timestamps are respected as part of the data.
"Process ASAP" results in quantization, and that will screw up chains
of ramps. (XAP ramps are just "aim point" commands, and nothing is
guaranteed after an aim point is passed. Plugins are not required to
stop ramping automatically, but can expect to receive a new event in
time.)
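A sketch of that "aim point" ramping (names are assumptions, not XAP API):
a ramp event says "be at 'target' in 'frames' frames", the plugin derives a
per-frame slope and keeps applying it until the next event arrives, which is
why late or quantized events break chains of ramps.

typedef struct {
    float value;   /* current control value */
    float slope;   /* change per frame       */
} ramped_control;

void ramp_event(ramped_control *c, float target, unsigned frames)
{
    if (frames) {
        c->slope = (target - c->value) / (float)frames;
    } else {
        c->slope = 0.0f;
        c->value = target;     /* duration 0 means "set now" */
    }
}

void ramp_run(ramped_control *c, unsigned block_frames)
{
    /* advance one block; a real plugin would apply c->value per
     * frame inside its DSP loop */
    c->value += c->slope * (float)block_frames;
}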
yes... normal plugins can drop these late events or try to interpolate
to the current time.
there are not many plugins capable of handling late events,
but a delay could handle them.
plugins could report that they received a late event
to the host so it could notify the user or abort...
That's just moving a host implementation issue into plugins. Allowing
late events is almost like allowing late audio buffers. The only
difference is that with events, it *looks* simple, whereas with
audio, it's obvious that you'd need a different protocol. When you
look closer at it, specifically at ramping, it becomes obvious that
there isn't all that much of a difference.
I don't think plugins can do anything useful about late events that
hosts can't fix by adjusting the timestamps, and as this never
happens within a real time net in a single thread, it doesn't even
have to be considered in most hosts. Only hosts that support soft RT
connections between multiple threads or processes will need to deal
with this, and they can do it inside the event gateways that are
required anyway.
yes this is correct.
but if the plugin could decide by itself what to do with a late
event everything would be fine.
but why do you want to rule out late events ?
they can be handled in a sensible way.
I don't think they can be handled trivially, and I don't see any
reason why the API or plugins should consider it at all. Whenever a
host makes a connection that can result in late events, it has to
make sure the events are adjusted as needed, so plugins get delayed
or quantized events.
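A sketch of that host-side fix-up (made-up names): at a connection that can
produce late events, the host either pushes the whole stream by a fixed delay
or quantizes the stamp to the start of the current block, so the receiving
plugin never sees a timestamp in the past.

#include <stdint.h>

uint32_t adjust_timestamp(uint32_t ev, uint32_t block_start,
                          uint32_t fixed_delay)
{
    uint32_t t = ev + fixed_delay;         /* "delayed" option   */
    if ((int32_t)(t - block_start) < 0)
        t = block_start;                   /* "quantized" option */
    return t;
}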
yes... and how is this decision made ?
in my proposal the user would wire a delay or a quantizer into the net
and everything would be fine.
in your proposal some logic in the host would decide what to do,
generating complexity in the host, as this question must reach the user
somehow....
i think this is similar to the soft mode of jack....
Maybe, but audio is not structured data, and thus, isn't as sensitive
to glitches. You don't have to do anything special to "fix" drop-outs
in audio streams without totally screwing up the data.
yes...
but if the system was robust enough to handle an event dropout it would
be ok. also the user could guard himself against the dropout case...
with an eventgate of some sort.
[...]
i meant event feedback without a delay, but with an event gate to
abort on some condition...
the user can implement a for loop with this.
if the user is smart enough it can still be realtime safe...
We're talking about conditional "ping-pong" between plugins? Well,
that's kind of tricky to implement with a block based callback model,
since each plugin only gets to run once per block. A roundtrip always
means one block of latency.
yes... i see the advantages of running once per block...
this could also be done by the host compiling a graph into
an XAP plugin ... i am fine with this method as i see a way
out of the misery for me :)
(but if the code was smart enough it could solve
the cycle at the delay)
Plugins won't and shouldn't care. Feedback is a host
implementation issue.
hmm... ok... what about the for loop implemented with events?
i dont want to give this idea up....
What exactly do you want to achieve with this? It sounds like this
belongs on a much lower level than the plugin API... XAP plugin nets
are not like "bytecode" with their own flow control.
ok... i just like the idea of galan being Turing complete :)
well i dont really know what i want to achieve with this.
i have not yet spent a day experimenting with complex
event logic...
i spend too much time thinking about our current discussions...
[...]
There's simply no other way of doing it. It could theoretically
take *minutes* to load all the samples for a song. (At least if
they're on CDs in a CD-ROM changer...) What can the API or
plugins do about that...?
in this case the delay would turn the delayed event
into a late event, and it would be immediately processed
when injected into the realtime thread again...
Of course, but that's not useful. No matter how it's handled; if the
sampler isn't ready when I start playing notes, I'm screwed. The only
useful feature beyond plugins being able to do non RT stuff in worker
callbacks, would be the ability to tell whether a plugin is ready or
not. The only safe solution is to wait until a plugin is ready before
messing with it.
how do you know it's ready ?
the sample loading component would fire a worker thread,
which inserts the event with the same timestamp
as the incoming event... (at this point the timestamp is
in the past)
the evtdelay adjusts the timestamp to be in the future
again.
I don't see much point in this. How do you figure out the delay
value?
the user can control the value with a control in the panel...
if the user knows what he is doing he can set the delay to a
working value... where no dropouts occur...
This is not possible, and IMNSHO it's completely pointless to even
consider this on the API level. There is *no upper limit* to the time
it may take to allocate some memory in a general purpose OS, such as
Linux, Mac OS X or Windows. Deal with it, or switch to a complete
RTOS with virtual memory disabled.
if the user knows there are still 512 MB free there is an upper bound.
delayed events could be reported to the engine, making some led
blink...
Sure, but who cares? Depending on soft RT features in a hard RT system
makes the whole system essentially soft RT. I think a "READY" output
for that LED would be sufficient and useful, but going further than
that is just a futile attempt to make soft real time look harder than
it is. If you want all hard RT, just never change soft RT controls
during performance. Plugins that can't be used without doing that are
either broken or not meant for real time operation.
ah.. there are soft RT and real RT controls ?
is that a hint on the control ?
i take this approach for midi in also...
(not implemented yet)
the midi event has a timestamp from alsa, which corresponds
to the past. without the delay it would be processed now..
this would generate jitter.
with the delay some latency is imposed, but the jitter is gone.
the user can adjust the delay to his machine...
That's very different. This is what you're expected to do with
incoming real time events, and the resulting constant latency is
strictly defined by the block size. That's all there is to it. It
has nothing to do with worker callbacks, delayed events or other
"tricks" to deal with plugins that have non-RT safe controls.
i dont really see a difference... this is an event which comes
from another thread. like the event which comes from the worker
thread...
The worker thread belongs to the plugin, and is an internal
implementation matter. Worker threads are meant specifically for
jobs that are not RT safe. A plugin may not be able to process other
events *at all* until the worker thread has finished. It might just
sit there and track incoming events, so the rest of the net can keep
running.
MIDI events OTOH (or other events that come from other threads), come
from the outside world. Plugins don't care if they're late, and in
fact, won't even know about it, since they can't see the original
timestamps.
I think there's quite a difference between the two, especially since
the first case is strictly related to soft real time, whereas the
latter doesn't have to have anything to do with soft RT at all. The
former is a plugin implementation thing, while the latter is not even
visible to plugins.
hmm... i see your point somehow...
but i am still not convinced...
[...]
> > yes... But due to the event peeking code it would get the
> > event 100ms before it is due. The event peeker is too
> > complicated though.
>
> In fact, it's not even possible to implement in a generic
> way. (See above.)
i think i have found a method above...
I don't think so. You can't see into the future, so you must use
a delay. When you're dealing with non-deterministic operations
(which is the only case where you need to mess with this stuff),
you can't figure out what a sufficient delay time would be. When
dealing with "live" input, delays are just not an option, as
they'd defeat the whole purpose of real time processing.
well, what about midi jitter ?
midi event -> midiport -> timestamp -> kernel doing blabla.. ->
user space midi thread -> real time thread...
long path for the midi event... i suspect event bursts...
Sure, but it has nothing to do with peeking into the future. If you
can't get MIDI events delivered in time, it's an OS and/or driver
problem, and there isn't much we can do about it. (Short of fixing
the OS and/or driver, of course.)
This is just a soft->hard RT gateway. Assume a delay that will be
sufficient most of the time, and adjust late events if you get them.
That's all there is to it. I don't see how this is related in any way
to peeking in order to "prepare" nondeterministic plugins.
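A sketch of such a gateway (conversion and names are assumptions): translate
the driver's wall-clock stamp into audio frames, add one block of latency,
and if the result still lands in the past, deliver it at the start of the
next block.

#include <stdint.h>

uint32_t gateway_stamp(double event_sec, double stream_start_sec,
                       double sample_rate,
                       uint32_t next_block_start, uint32_t block_size)
{
    uint32_t t = (uint32_t)((event_sec - stream_start_sec) * sample_rate)
                 + block_size;
    if ((int32_t)(t - next_block_start) < 0)
        t = next_block_start;   /* late: deliver ASAP, order preserved */
    return t;
}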
[...]
Yes... but galan has no stop. It is always running.
Just like hardware synths - and XAP should work the same way. The
way I see it, very few hosts have valid reasons to ever stop
processing.
What I'm talking about is the *sequencer*, which will always have
transport control (with stop, start, rewind etc), and that's
where this comes in. When you load a song, you'll have to
initialize the net before you can start playing, just as with
external hardware samplers that need to grind and rattle some
before you can play.
yes but constructors are not executed in the realtime thread.
the plugin only gets inserted into the net
after its constructor has run..
I'm not talking about constructors, but about loading presets and
changing controls. That's nondeterministic stuff, so if you want to
be sure you won't screw up half your intro, you simply have to wait
for it to finish before you start playing.
yes ... correct...
but how about streaming components ?
the ogg_ra component has an input event triggering the loading
of the next buffer. in galan 0.3.x there is a worker thread
exchanging 2 buffers with the realtime thread to make this
operation realtime safe. but this does not handle skipping
elegantly....
i would rather implement the loading in a component getting
a time event and then emitting an array of audio data...
this way i can prebuffer in the eventdelay...
if the event does not come through in time the
old audio data will still sit in the sample component
and this will be played then...
just like an xrun where the soundcard keeps playing...
by adjusting the evtdelay the user could enable this ogg player
to handle most skips without an xrun (note that this is only
an xrun within the mesh-machine...) it can be detected by the mesh,
and it could take precautions to not make it sound so bad...
(eg turn on a low pass so that the pop wont be loud)
But a sample change will fire a worker and leave the
old sample until the new sample arrives.
Yes, but that's just a plugin implementation detail. The only API
implication it has is that such controls should be marked as "may
not respond instantly" - and not even that is strictly required.
(MIDI samplers don't have such a feature, AFAIK.)
ah... the execution path of an event can enter the realtime thread,
and leave it again into another thread...
Not quite, since the event itself never gets to the worker thread. The
event is received by the plugin, which sends the *job* off to a
worker thread. When the worker thread is done, the host sends a
"result" event to the plugin, which can then go on with any further,
RT safe operations.
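A sketch of that round trip (all names hypothetical, not actual XAP API):
the control event never leaves the RT thread, only the *job* does, and the
host posts a "result" event once the worker returns.

typedef struct {
    void *active_sample;     /* used by process()             */
    void *pending_sample;    /* filled by the worker          */
    int   busy;              /* waiting for the worker to end */
} plugin;

/* Hypothetical host entry point: runs the job in a worker thread
 * and posts a "result" event when it returns. */
extern void host_queue_job(plugin *p, const char *path);

/* RT thread: "load sample" control event arrives. */
void on_load_event(plugin *p, const char *path)
{
    p->busy = 1;
    host_queue_job(p, path);
}

/* Worker thread (not RT safe): the slow part. */
void worker_callback(plugin *p, const char *path)
{
    (void)path;
    p->pending_sample = 0;   /* would load from disk here; may take seconds */
}

/* RT thread again: host-generated "result" event. */
void on_result_event(plugin *p)
{
    p->active_sample = p->pending_sample;   /* cheap swap */
    p->busy = 0;
}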
but then the delay must be part of the plugin...
and i think that this is not necessary...
i think the mesh would be easier to understand if
the delay was an explicit element of the net and not hidden
in some component.
if the realtime thread advanced too far, then
the event would have a timestamp in the past.
No, because there would be no events at all outside the RT thread. The
"result" event is generated by the host as soon as it finds out that
the worker thread has finished.
hmmm... what about the opengl thread ?
i need a way to have different threads and dont want to process all
events in the realtime thread, which is obvious, isnt it.
This is also a requirement for the graph sorter...
how do we model this ?
events with timestamps < gen_get_sample_time() are always processed
first: better late than never.
Well, you could say the result events are handled like this, but it's
done by the host (by picking a suitable timestamp), and the events
aren't really late. They just tell the plugin that its worker thread
has finished.
now the host destroys a timestamp which could be fixed by a delay.
i dont want delays to be special components... they simply arent.
[...]
do you want to have audiality running in the kernel ?
Actually, the "second generation" of Audiality (the "real" one is
the third) was intended to run in kernel space, but that was
before Linux/lowlatency. Back then, the only way to get solid RT
performance on Linux was through RTLinux, so that's what I went
for.
did you ever install RTLinux ?
is that a hassle ?
It was quite some time ago I used it, but I had no major problems.
It's just a kernel patch and some modules. These days, I believe
there are some shared libs as well.
I installed RTAI recently, and the only issue with that was to find a
compiler that was supported by both RTAI and my kernel. I ended up
compiling my own compiler, and then everything worked fine. (This was
on an embedded SBC system. Yes, it did take ages to build the
compiler... :-)
hehe :)
i am experimenting with a 2.5.x kernel with alsa inside...
just had galan running without glitches...
Well, unfortunately, RTLinux or RTAI won't help, at least not without
some hacking. You could probably hack galan to run in user space via
LXRT ("RTAI in user space"), but you'd have to do some tricks with
the I/O, as you can't use Linux drivers directly without the RT going
soft. This is the major reason why I dropped RTLinux for audio when
Linux/lowlatency showed up.
ok... so i think i wont bother...
Anyway, both RTLinux and RTAI can schedule hard RT threads in
user space these days, so there's still no need to run in kernel
space, even if you need lower latency than Linux/lowlatency can
provide.
very nice i should really try one of them...
Sure, but be prepared to either port drivers, or use mmap() mode and a
PLL locked to the sound card IRQ. The latter shouldn't be too
complicated, but I haven't bothered with it for various reasons.
i have an event PLL in galan :)
it could be possible to model the output as a random-access input,
filling it when the clock event gets through...
hmm... i should think about this a little more...
this would make up some nice lowlevel component set :)
[...building Audiality 0.1.0...]
You have ALSA 0.9.x, but this version of Audiality only supports
0.5.x. It's just for raw MIDI anyway, so you might as well use OSS
emulation. './configure --without-alsa'.
ok... it compiled... what do i do now ?
need to look more into it...
any examples for it ?
--
torben Hohn
http://galan.sourceforge.net -- The graphical Audio language