[linux-audio-dev] ALSA MIDI latency correction

Peter Brinkmann brinkman at math.TU-Berlin.DE
Sat Jul 30 16:00:58 UTC 2005



Hi,
I've been contemplating the problem of reconciling different latencies
of various sound sources, but so far the only solutions I have been able
to think of seemed sort of awkward, and googling didn't help much, either.

Specifically, here's the issue I'm looking at: I have a sequencer
application that has several MIDI output ports, each connected to some
device that accepts MIDI input. Those devices may have vastly different
latencies (for instance, I may want to use my digital piano, which has no
noticeable latency, together with timidity, which has serious latency even
when started with options like -B2,8, never mind the latency introduced
by my USB sound card), but of course I don't want to hear a time lag between
those devices. I don't mind a little bit of overall latency; the piano may
wait for timidity as long as they're in sync.

I'm currently scheduling all my events through one queue (is that the
recommended method? I've been wondering whether it would make more sense
to have, say, one queue per output port, but I don't see how this would
help), and the only solution I have been able to think of is to explicitly
schedule events for faster devices at a later time. This is clumsy, and
it's exacerbated by the fact that I'd like to schedule events in terms of
ticks rather than milliseconds. Since latencies are usually measured in
milliseconds, I have to convert them to ticks, taking the current tempo
of the queue into account. There's gotta be a better way.

Ideally, there are two things I'd like to do:
    1. Assign a delay dt to each output port, so that an event scheduled
    at time t0 will be sent at time t0+dt. That way, I could compute the
    maximum latency of all my devices, and the output port attached to a
    device would get a delay of (max latency - latency of device), so
    that everything would be in sync.

    2. Automatically determine the latencies of the devices I'm talking
    to. In theory, this should be possible. For instance, if timidity is
    connected to JACK, it could get JACK's current latency, add its own
    latency, and report the result. Is this science fiction?

Any thoughts on these issues would be appreciated!
Best,
    Peter
