[linux-audio-dev] Re: Language fanboys [was Re: light C++ set for WAV]

lazzaro lazzaro at eecs.berkeley.edu
Mon Jul 24 18:31:39 UTC 2006


On Jul 24, 2006, at 7:43 AM, Carmen wrote:
>
>> you need to send data over lossy networks
>> and that fits MIDI's semantics, and TCP's
>> head-of-line blocking latency is not acceptable,
>
> what sort of latency is this? ~10 ms?


The contract TCP makes is to deliver a stream
of bytes in the order the bytes were sent.  So,
let's consider what happens when:

[1] Packet 15 in a sequence is lost, but
[2] Packets 16-20 in a sequence arrive OK.

TCP can't deliver the data in packets 16-20
until it has packet 15 -- because TCP has
to deliver a byte stream in the order it was
sent.

And so, packet 15 is at the "head of the line"
blocking packets 16-20, which is why we call
it "head of line blocking".  Retransmitting
packet 15 requires a successful round-trip
(feedback from receiver to sender, and a resent
packet 15 from sender to receiver), and so
latency depends on the link latency and the
nature of packet loss on the link.  Note the
latency doesn't only affect packet 15 -- packets
16-20 experience just as much latency, because
they cannot be placed into the bytestream until
packet 15 is placed in the bytestream.
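
To put rough numbers on it -- purely illustrative, the 20 ms
one-way latency, 10 ms packet spacing, and ideal loss detection
below are assumptions, not measurements -- here is a little C
sketch of the delivery-time arithmetic:

/* Back-of-the-envelope sketch of head-of-line blocking.
 * Not real TCP -- just delivery-time arithmetic, assuming a
 * fixed one-way link latency, a fixed sender packet spacing,
 * and loss detected as soon as the next packet arrives.     */
#include <stdio.h>

int main(void)
{
    const double one_way_ms = 20.0;   /* assumed one-way latency  */
    const double spacing_ms = 10.0;   /* assumed packet spacing   */
    const int lost_pkt = 15;

    /* Receiver notices the gap when packet 16 arrives, feedback
     * goes back to the sender, and the resent copy of packet 15
     * comes forward again: two more one-way trips.             */
    double detect = (lost_pkt + 1) * spacing_ms + one_way_ms;
    double resend_arrival = detect + 2.0 * one_way_ms;

    for (int pkt = 15; pkt <= 20; pkt++) {
        double sent    = pkt * spacing_ms;
        double arrived = (pkt == lost_pkt) ? resend_arrival
                                           : sent + one_way_ms;
        /* TCP may not hand data to the app until every earlier
         * byte is in place, so everything waits for packet 15. */
        double delivered = (arrived > resend_arrival)
                         ? arrived : resend_arrival;
        printf("packet %d: delivered at %.0f ms "
               "(%.0f ms of head-of-line delay)\n",
               pkt, delivered, delivered - (sent + one_way_ms));
    }
    return 0;
}

With those numbers, packets 16-19 sit in the receiver's buffer
for tens of milliseconds even though they arrived on time.
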


> couldn't adjusting the MTU or something else alleviate
> it without having to invent "UDP + Parity Checking" ?


No, see above.  The problem happens because of the
nature of TCP's "reliable bytestream" contract.

Also, note that RTP MIDI isn't about "parity checking".
It's about encoding the present state of the entire
MIDI "state machine" (which Note's are down, where
is each Pitch Bend, where is the sustain pedal, etc) in
a journal that gets sent along with each packet.  Several
design tricks were used to keep the journal to a reasonable
size: for example, the sender uses slow feedback from the
receivers to figure out what information about the MIDI state
machine a receiver would need to recover from an
arbitrary packet loss.
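
Very loosely, the idea looks something like this toy C sketch --
the struct layout and the "copy the whole journal" repair below
are my simplifications for illustration, nothing like the actual
wire format defined in the I-Ds:

/* Toy sketch of the recovery-journal idea.  Alongside the new
 * MIDI commands in each UDP packet, send a summary of the MIDI
 * state machine, so a receiver that lost earlier packets can
 * resynchronize from the next packet that does arrive.        */
#include <stdint.h>
#include <string.h>

struct midi_channel_state {
    uint8_t  notes_down[16];     /* 128-bit map: which Notes are down */
    uint8_t  note_velocity[128];
    uint16_t pitch_bend;         /* current 14-bit Pitch Bend value   */
    uint8_t  sustain_pedal;      /* CC 64: 0 = up, 127 = down         */
    uint8_t  program;            /* last Program Change               */
};

struct toy_packet {
    uint16_t seq;                    /* RTP-style sequence number      */
    uint8_t  new_commands[64];       /* the MIDI commands for this pkt */
    uint8_t  new_len;
    struct midi_channel_state journal[16];  /* one per MIDI channel    */
};

/* Receiver side: on a sequence-number gap, repair the local state
 * machine from the journal instead of waiting for a retransmission
 * (which is what TCP would force us to do).                        */
void on_packet(struct toy_packet *pkt,
               struct midi_channel_state local[16],
               uint16_t *expected_seq)
{
    if (pkt->seq != *expected_seq) {
        /* Packets were lost: resync from the journal.  In the real
         * protocol the journal is delta-encoded and trimmed using
         * slow receiver feedback; here we just copy it all.       */
        memcpy(local, pkt->journal, sizeof pkt->journal);
    }
    /* ... then apply pkt->new_commands to the local state ... */
    *expected_seq = pkt->seq + 1;
}
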

This paper:

http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/nossdav01.pdf

explains the general concept of how the recovery
journal works; the folks in the IETF read it and
invited us to do RTP MIDI (in 2001), and the folks in
the MMA joined the process around 2003.  This
paper:

http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/aes117.pdf

was written closer to the end of the standardization
process (2004, presented at AES), although there
were some changes after that to better support some of the
MIDI idioms Emagic and Mackie used for Logic Control.
So, for the final word on RTP MIDI implementation, see the
I-Ds (soon to be RFCs) and sample code on this website:

http://www.cs.berkeley.edu/~lazzaro/rtpmidi/

---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---




