On Thu, 18.06.09 19:57, Paul Davis (paul(a)linuxaudiosystems.com) wrote:
> On Thu, Jun 18, 2009 at 7:43 PM, Lennart Poettering <mzynq(a)0pointer.de> wrote:
>
> <snip>
> for (;;) {
>         n = jack_cycle_wait();
>         process(n);
>         jack_cycle_signal();
>
>         while (jack_frames_since_cycle_start() < threshold) {
>                 if (no_private_events_to_process())
>                         break;
>
>                 process_one_of_my_private_events();
>         }
> }
> </snip>
>
> how is this functionally different than adding 1 period of latency
> to every event, then processing every event marked to occur during a
> given process cycle *within* that process cycle?
My events are not time-critical. As long as they are dispatched, it
doesn't matter whether that happens now or 5ms later. Doing IO should
take priority over processing them.
Also, I generally think it is a good idea to signal the other threads
as quickly as possible after having finished the IO processing. While
that probably doesn't matter much if we only have one CPU and our
thread is RT, it does make a difference on SMP, i.e. on practically
all modern CPUs sold today.
> this is precisely what happens with MIDI and OSC sequencing. i.e:
>
> now = 0;
>
> for (;;) {
>         n = jack_cycle_wait ();
>
>         while (events_to_process (now, now + n)) {
>                 process_event ();
>         }
>
>         process_data ();
>         now += n;
>         jack_cycle_signal ();
> }
>
> as i mentioned, this is fundamentally what any MIDI sequencer that
> uses JACK MIDI is doing. the latency of the event is fixed, and there
> is close to zero jitter.
>
> no "waiting after", no potential stealing of cycles outside the
> process cycle, no scheduling issues.
But for MIDI, time is critical. For my control events it is not. That's
why I'd like to handle them after the _signal() invocation.
Lennart
--
Lennart Poettering Red Hat, Inc.
lennart [at] poettering [dot] net
http://0pointer.net/lennart/ GnuPG 0x1A015CC4