On Tue, Oct 05, 2010 at 08:27:02PM +1100, cal wrote:
> On 05/10/10 18:51, Arnout Engelen wrote:
> > On Tue, Oct 05, 2010 at 03:12:12PM +1100, cal wrote:
> > > latency in a soft synth (possibly yoshimi) ...
> > Latency? Or jitter?
> Not sure - possibly the main reason for the post was to seek help in
> resolving fallacies in my thinking. With a jack period of 128, a midi
> event associated with frame 129 won't actually get applied to audio
> generation until the generation of frames 257-384. On the other hand an
> event against frame 128 will make it into the generation of frames
> 128-256. I figure that variability of effective midi latency according
> to where the event sits in the period probably qualifies as jitter?
By 'make it into the generation' you mean the midi event associated with
frame 129 will end up at the beginning of frames 257-384, while an event
against frame 128 will end up at the beginning of frames 128-256? Then
yes, that 'variability of effective midi latency' is indeed exactly what
jitter is - in the worst case almost a full jack period.
> > [...] is the bottleneck. It seems more likely to me that your way of
> > getting the notes from the MIDI device (again - is this ALSA midi?
> > Can you share code/patches?) is not RT-safe. Have you tried running
> > it with http://repo.or.cz/w/jack_interposer.git ?
> Never heard of it but will investigate, thanks.
I like it, but I'm biased as I wrote it :)
> > To 'solve' ALSA MIDI jitter for JACK, you'll have to receive ALSA
> > MIDI messages in a separate thread, and make sure you apply them at
> > the right position in the process buffer based on a timestamp telling
> > you when exactly the note event was recorded.
> A dedicated thread for alsa midi is what I've been using all along. At
> the moment I'm exploring using an alsa timer callback, and I quite like
> that.
Ah, then I misunderstood you there.
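In case it's useful, this is roughly what I had in mind (untested
sketch - the ALSA reader hookup and apply_midi_event() are made up for
illustration, but the ringbuffer and time functions are plain JACK API):

/* A dedicated thread stamps each incoming ALSA MIDI event with the
 * current JACK frame time and hands it to the process callback through
 * a lock-free ringbuffer, so the process thread never blocks. */
#include <string.h>
#include <jack/jack.h>
#include <jack/ringbuffer.h>

typedef struct {
    jack_nframes_t stamp;    /* jack_frame_time() at arrival */
    unsigned char  data[3];  /* raw MIDI bytes */
} stamped_event_t;

extern jack_client_t     *client;
extern jack_ringbuffer_t *rb;  /* from jack_ringbuffer_create() */
extern void apply_midi_event(const unsigned char *midi,
                             jack_nframes_t offset);  /* synth hook */

/* called from the dedicated ALSA MIDI reader thread */
void on_alsa_midi(const unsigned char *bytes)
{
    stamped_event_t ev;
    ev.stamp = jack_frame_time(client);  /* safe outside process() */
    memcpy(ev.data, bytes, sizeof ev.data);
    if (jack_ringbuffer_write_space(rb) >= sizeof ev)
        jack_ringbuffer_write(rb, (const char *) &ev, sizeof ev);
}

int process(jack_nframes_t nframes, void *arg)
{
    jack_nframes_t period_start = jack_last_frame_time(client);
    stamped_event_t ev;
    while (jack_ringbuffer_read_space(rb) >= sizeof ev) {
        jack_ringbuffer_read(rb, (char *) &ev, sizeof ev);
        jack_nframes_t offset;
        if (ev.stamp >= period_start) {
            offset = 0;  /* stamped after this cycle started: asap */
        } else {
            jack_nframes_t age = period_start - ev.stamp;
            /* events from the previous period keep their relative
             * position, delayed by exactly one period; anything older
             * is applied as soon as possible */
            offset = age <= nframes ? nframes - age : 0;
        }
        apply_midi_event(ev.data, offset);
    }
    /* ... render nframes of audio, honouring the offsets ... */
    return 0;
}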
> > With JACK MIDI, JACK takes care of recording the MIDI events and
> > timestamping them in a sample-accurate way - all you have to do is
> > take the timestamp into account to correctly place them in the
> > process buffer.
> I wish that was "all you have to do". I think the crux of the biscuit
> is that given the way zyn/yoshimi does its magic, you simply can't have
> a midi event changing note or controller parameters during the
> generation of a given period of audio.
OK, now I understand what you meant better: splitting up the JACK period
into multiple subperiods is not something you want to do per se, but
something that makes re-using the existing zyn/yoshimi code easier. That
can make sense, yes.
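Just to make the JACK MIDI side concrete, the receiving end is roughly
this (untested sketch; apply_midi_event() again stands in for the
synth's own hook):

/* JACK MIDI events arrive pre-timestamped: ev.time is the sample-
 * accurate offset into the current period. */
#include <jack/jack.h>
#include <jack/midiport.h>

extern jack_port_t *midi_in_port;  /* registered at startup */
extern void apply_midi_event(const unsigned char *midi, size_t size,
                             jack_nframes_t offset);  /* synth hook */

int process(jack_nframes_t nframes, void *arg)
{
    void *buf = jack_port_get_buffer(midi_in_port, nframes);
    jack_nframes_t n = jack_midi_get_event_count(buf);
    for (jack_nframes_t i = 0; i < n; ++i) {
        jack_midi_event_t ev;
        if (jack_midi_event_get(&ev, buf, i) == 0)
            apply_midi_event(ev.buffer, ev.size, ev.time);
    }
    /* ... render audio, changing parameters at each ev.time ... */
    return 0;
}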
> Hence it's impossible to accurately honor the frame/time stamp of a
> midi event. That's what drove the experimentation with splitting the
> audio generation down to tighter blocks.
Yes, that could be an interesting way to reduce (though not eliminate entirely)
jitter even at large jack period sizes.
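I imagine that ends up looking something like this (untested sketch -
render_block() stands in for the zyn/yoshimi block renderer, and
SUBBLOCK is assumed to divide the jack period, which holds for the
usual power-of-two sizes):

/* Split the JACK period into SUBBLOCK-frame chunks and apply pending
 * MIDI events only at chunk boundaries: residual jitter shrinks from
 * one full period to at most SUBBLOCK frames. */
#include <jack/jack.h>
#include <jack/midiport.h>

#define SUBBLOCK 16

extern jack_port_t *midi_in_port, *audio_out_port;
extern void apply_midi_event(const unsigned char *midi, size_t size);
extern void render_block(jack_default_audio_sample_t *out,
                         jack_nframes_t nframes);

int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *out =
        jack_port_get_buffer(audio_out_port, nframes);
    void *midi = jack_port_get_buffer(midi_in_port, nframes);
    jack_nframes_t n_events = jack_midi_get_event_count(midi);
    jack_nframes_t e = 0;

    for (jack_nframes_t pos = 0; pos < nframes; pos += SUBBLOCK) {
        /* apply every event whose timestamp falls before the end of
         * this sub-block, i.e. round timestamps down to the boundary */
        jack_midi_event_t ev;
        while (e < n_events &&
               jack_midi_event_get(&ev, midi, e) == 0 &&
               ev.time < pos + SUBBLOCK) {
            apply_midi_event(ev.buffer, ev.size);
            ++e;
        }
        render_block(out + pos, SUBBLOCK);
    }
    return 0;
}

With SUBBLOCK at 16 and a period of 128 that is eight render calls per
cycle, trading a little per-block overhead for much tighter event
placement.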
Arnout