On Jul 24, 2006, at 1:39 PM, linux-audio-dev-
request(a)music.columbia.edu wrote:
> what about applying the journal data to an OSC-over-UDP stream. the
> journal data could be encapsulated in OSC. sounds like a paper and
> liblo patch waiting to happen ;)
Personally, my suggestion is that the community start by
defining OSC profiles for specific classes of gestural input
and synthesis methods that are widely used in the community.
These profiles should standardize syntax and semantics. If
you are working on a music project that does something
that fits a profile, use the profile. Otherwise, do as you do today.
If OSC goes down this route, one can imagine developing a
recovery-journal system with recovery semantics for all the
standard profiles. Part of developing a new OSC profile would
be defining the recovery journal for the profile.
The least of the benefits of a design like this would be
network resiliency. The big win is that, by defining OSC profiles
with semantics, it starts to make sense to create a hardware
or software synth that "understands OSC profile X" out of
the box, in the same way a synth understands MIDI. You
could also create mass-market controller hardware that
"puts out OSC data using profile X". Connect the two boxes
and you get plug and play -- just like MIDI.
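As a rough sketch of what sending under such a profile might look
like with liblo (the profile path and "iii" type signature below are
invented for illustration -- a real profile spec would define both):

    #include <lo/lo.h>

    int main(void)
    {
        /* Hypothetical "keyboard" profile: the path and signature
           are invented here; a real profile spec would standardize
           both. */
        lo_address synth = lo_address_new("localhost", "7770");

        /* channel 0, middle C, velocity 100 */
        lo_send(synth, "/profile/keyboard/note_on", "iii", 0, 60, 100);

        lo_address_free(synth);
        return 0;
    }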
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
On Jul 24, 2006, at 7:43 AM, Dave Robillard
<drobilla(a)connect.carleton.ca> wrote:
> Anyway, as soon as you go sysex you lose the semantics and you have
> the same
> problem anyway - retransmission is the only possible solution if you
> know nothing about the data (it becomes application specific).
RTP MIDI has several ways to deal with this. For senders that know
the semantics of what they are sending (as, say, Novation would if
they were adding Wi-Fi to their keyboard line), the recovery journal
syntax for SysEx lets the sender specify recovery data in a way that
suits those semantics, and the encoding lets a receiver figure out
how to use that data to recover.
For senders that don't know the semantics of what they are sending
(like a box with MIDI DIN jacks on one end and a Wi-Fi antenna on
the other), there are several options. One is to use the recovery
journal encoding for SysEx that is a simple list of all commands for
a type of SysEx, and rely on more frequent RTCP feedback from
receiver to sender to keep the journal trimmed to a reasonable
length. Alternatively, it's possible to split a MIDI cable into two
RTP MIDI streams -- one TCP and one UDP -- and gate the SysEx onto
the TCP stream.
> (especially custom sysex evil that throws interoperability completely
> out the window).
Most industry folks who need to do unusual things with MIDI don't start
with SysEx. They start by drawing analogies between what they need to
do and the standard MIDI command set, and repurposing those commands.
This is done partly to make sure DAWs can edit the data, and partly
to get the efficiency of running status over the wire. SysEx
is used for secondary features. You can see this design philosophy in
the Logic Control specification in Appendix B of:
http://manuals.info.apple.com/en/Logic7_DedicatedCntrlSurfaceInfo.pdf
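To see the running-status savings concretely (the notes here are
arbitrary), a three-note chord costs one status byte instead of three:

    /* Three Note On messages on channel 1 using running status:
       the 0x90 status byte is sent once, then only data bytes follow,
       saving two bytes versus repeating the status byte each time. */
    unsigned char chord[] = {
        0x90, 0x3C, 0x64,   /* Note On, middle C, velocity 100  */
              0x40, 0x64,   /* Note On, E (status byte omitted) */
              0x43, 0x64    /* Note On, G (status byte omitted) */
    };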
If I were rewriting an OSC application to use MIDI, with an eye
towards good RTP MIDI loss behavior, I'd take this repurposing
approach ... it would be interesting to see how Jazzmutant did it,
since in the latest release of the Lemur, MIDI is now a full-fledged
transport rather than being sent via OSC, if I read this web page
correctly:
http://www.jazzmutant.com/lemur_lastupdate.php
> Human readability and interoperability IS often
> important (eg using supercollider or pd or whatever to control
> things).
I use Structured Audio in my own work, and Eric Scheirer's language
support design for MIDI has many good aspects. See:
http://www.cs.berkeley.edu/~lazzaro/sa/book/control/midi/index.html
In 2006, if I were designing a replacement language, I'd do the MIDI
interface language design differently, given my experience using and
implementing SAOL. But I don't consider SAOL's MIDI support
hard to program in its present state, apart from some details of
extend() and turnoff for handling NoteOff release activities.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
Hello,
My name is Rene Bon Ciric; I am a Linuxer/musician/designer/webdev.
I am trying to assemble a team to start developing an Open Source OS for
the AKAI MPC4000 Sampler/Workstation. We have just started and are at
irc://freenode/openmpc
If you're interested, please join the group.
Thanks for your time!
Renich
On Monday, 24 July 2006 16:38, Lee Revell wrote:
> Take the sequence "80 3D 35 31 80 3A 39 0E 80 37 31 03 80 31 1F" in
> the first line for example. I know that 0x80 is note-off, 0x3D the
> note number, and 0x35 the velocity of the note-off. But what the
> heck is the next byte, 0x31?
The delta time of the next event, in variable-length representation.
> The MIDI standard says note-off is one status byte
> followed by 2 data bytes!
The SMF (Standard MIDI File) format must store timestamped events,
something the wire-level MIDI protocol doesn't deal with. There is a
good reference on the SMF format here:
http://borg.com/~jglatt/tech/midifile.htm
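Delta times are variable-length quantities: each byte carries 7 bits
of data, and the high bit flags "more bytes follow". Since 0x31 has
its high bit clear, it is a complete one-byte delta time of 49 ticks.
A minimal decoder sketch in C:

    #include <stdio.h>

    /* Decode one SMF variable-length quantity starting at *p.
       Each byte holds 7 data bits; the high bit means "more follows". */
    unsigned long read_vlq(const unsigned char **p)
    {
        unsigned long value = 0;
        unsigned char byte;
        do {
            byte = *(*p)++;
            value = (value << 7) | (byte & 0x7F);
        } while (byte & 0x80);
        return value;
    }

    int main(void)
    {
        const unsigned char data[] = { 0x31 };       /* one-byte delta */
        const unsigned char *p = data;
        printf("delta = %lu ticks\n", read_vlq(&p)); /* prints 49 */
        return 0;
    }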
You may want to try some SMF-to-text conversion utility:
http://alsa.opensrc.org/MidiComp
Regards,
Pedro
On Jul 24, 2006, at 7:43 AM, Forest Bond
<forest(a)alittletooquiet.net> wrote:
> I assume that RTP MIDI could be wrapped in a nice library that
> makes working
> with it a lot more pleasant? Couldn't someone implement a protocol
> over RTP
> MIDI sysex/NRPN/something that feels something like OSC at the code
> level? Or
> would that just not be very useful?
My suggestion would be ...
If you're creating a system where you are sending OSC and receiving
OSC, just use OSC. To handle loss and reordering, use the suggestions
people have made during this thread: resilient data encoding,
application-level retransmission, engineering an essentially
loss-free network, or some combination of the three.
The question that brought me into the discussion originally was: if
you in fact have a MIDI device, like one you bought in a store, and
you want to send its data stream over a lossy network (say, Wi-Fi or
the public Internet), what do you do? In this case, I think having
RTP MIDI built into your environment at some level makes sense. The
question is, how?
One way to go is to model Apple -- Tiger added networked MIDI cables
to CoreMIDI, so that applications use the standard CoreMIDI API
and see both direct-connect MIDI cables (via USB, FireWire, etc.)
and networked MIDI cables as the same thing. See:
http://www.soundonsound.com/sos/jul05/articles/tiger.htm#3
for a description of how it works in OS X Tiger. For Linux, I assume
this means building virtual MIDI cables into ALSA.
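For what it's worth, a minimal sketch of the ALSA end of such a
virtual cable, using the sequencer API (the client and port names
are arbitrary, and error handling is mostly omitted):

    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_seq_t *seq;

        /* Open the ALSA sequencer; a bridge daemon would feed decoded
           network MIDI events into this client. */
        if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_DUPLEX, 0) < 0)
            return 1;
        snd_seq_set_client_name(seq, "net-midi-bridge");

        /* A subscribable read/write port -- to other ALSA clients it
           looks just like any hardware MIDI cable endpoint. */
        snd_seq_create_simple_port(seq, "network",
            SND_SEQ_PORT_CAP_READ  | SND_SEQ_PORT_CAP_SUBS_READ |
            SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE,
            SND_SEQ_PORT_TYPE_MIDI_GENERIC |
            SND_SEQ_PORT_TYPE_APPLICATION);

        /* ... event loop shuttling events between port and network ... */

        snd_seq_close(seq);
        return 0;
    }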
Another option is to use a middleware package that has already
implemented virtual MIDI cables over networks, like MIDIShare.
Perhaps Dominique Fober or one of his collaborators can chime
in with more details.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
On Jul 24, 2006, at 7:43 AM, Carmen wrote:
>
>> you need to send data over lossy networks
>> and that fits MIDI's semantics, and TCP's
>> head-of-line blocking latency is not acceptable,
>
> what sort of latency is this? ~10 ms?
The contract TCP makes is to deliver a stream
of bytes in the order the bytes were sent. So,
let's consider what happens when:
[1] Packet 15 in a sequence is lost, but
[2] Packets 16-20 in a sequence arrive OK.
TCP can't deliver the data in packets 16-20
until it has packet 15 -- because TCP has
to deliver a byte stream in the order it was
sent.
And so, packet 15 is at the "head of the line"
blocking packets 16-20, which is why we call
it "head-of-line blocking". Retransmitting
packet 15 requires a successful round trip
(feedback from receiver to sender, and a resent
packet 15 from sender to receiver), so the
latency depends on the link latency and the
nature of packet loss on the link. For example,
on a link with a 50 ms round-trip time, the stall
is at least 50 ms, and typically longer once TCP's
retransmission timers come into play. Note that the
latency doesn't only affect packet 15 -- packets
16-20 experience just as much latency, because
they cannot be placed into the byte stream until
packet 15 is.
> couldn't adjusting the MTU or something else alleviate
> it without having to invent "UDP + Parity Checking" ?
No, see above. The problem happens because of the
nature of TCP's reliable-bytestream contract.
Also, note that RTP MIDI isn't about "parity checking".
It's about encoding the present state of the entire
MIDI "state machine" (which notes are down, where
each pitch bend sits, where the sustain pedal is, etc.) in
a journal that gets sent along with each packet. Several
design tricks were used to keep the journal to a reasonable
size: for example, the sender uses slow feedback from the
receivers to figure out what information about the MIDI state
machine a receiver would need to recover from an
arbitrary packet loss.
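As a rough picture of what that state machine covers, here is an
invented simplification in C (the real journal's chapter encoding is
far more compact -- see the drafts below for the actual bit layouts):

    /* Invented simplification of the per-channel MIDI state a
       recovery journal must capture so a receiver can resync
       after a loss. */
    struct channel_state {
        unsigned char note_down[128]; /* which notes are currently on */
        unsigned char note_vel[128];  /* their Note On velocities     */
        int           pitch_bend;     /* last 14-bit pitch bend value */
        unsigned char control[128];   /* last value per controller,
                                         incl. 0x40, the sustain pedal */
        int           program;        /* last Program Change          */
    };

    /* Fold one parsed channel message into the snapshot. */
    void track(struct channel_state *s, unsigned char status,
               unsigned char d1, unsigned char d2)
    {
        switch (status & 0xF0) {
        case 0x90: s->note_down[d1] = (d2 != 0);
                   s->note_vel[d1]  = d2;             break;
        case 0x80: s->note_down[d1] = 0;              break;
        case 0xB0: s->control[d1]   = d2;             break;
        case 0xC0: s->program       = d1;             break;
        case 0xE0: s->pitch_bend    = (d2 << 7) | d1; break;
        }
    }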
This paper:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/nossdav01.pdf
explains the general concept of how the recovery journal works;
the folks in the IETF read it and invited us to do RTP MIDI (in
2001), and the folks in the MMA joined the process around 2003.
This paper:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/aes117.pdf
was written closer to the end of the standardization process
(2004; it was presented at AES), although there were some changes
after that to better support some of the MIDI idioms Emagic and
Mackie used for Logic Control. So, for the final word on RTP MIDI
implementation, see the I-Ds (soon to be RFCs) and sample code on
this website:
http://www.cs.berkeley.edu/~lazzaro/rtpmidi/
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
> Dave Robillard <drobilla(a)connect.carleton.ca> wrote:
>
> OSC can go over TCP to avoid the packet loss issue (and messed up
> ordering which can be extremely annoying as well). liblo's TCP
> support
> needs some work though.
This comment illustrates an advantage of using RTP MIDI
to send MIDI over lossy networks. The recovery journal in
RTP MIDI supports graceful recovery from the loss of an arbitrary
number of packets upon receipt of the first packet after the
loss (this also works for reordering). Journalling is a feed-forward
process; no retransmission is used -- and thus none of the
head-of-line-blocking latency issues one has when running media
over TCP.
See:
http://www.cs.berkeley.edu/~lazzaro/rtpmidi/index.html
The IESG approved RTP MIDI in February, so the protocol is
frozen. Hopefully the copy-edit phase will be done by autumn
and then we'll have RFC numbers for RTP MIDI.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
On 7/20/06, Loki Davison <loki.davison(a)gmail.com> wrote:
> On 7/20/06, Erik de Castro Lopo <mle+la(a)mega-nerd.com> wrote:
> > Loki Davison wrote:
> >
> > > There are quite a few c++ 'not fans' on LAD. C and python all the way ;)
> >
> > I used to be a Python fan but for anything larger than a couple
> > of hundred lines I now prefer Ocaml.
> >
> > Erik
>
> I haven't tried ocaml. I should put it on the todo list. ;) I'm a bit
> of a lisper and i find it easier to find places that let me program in
> python than lisp/scheme, so it's a good back up plan. Jobs for both
> are pretty damn hard to find though. :)
>
> Loki
>
You pretty much can't ever ask something about C++ without all the
haters coming out.
I've worked professionally using only C++ (and a little plain old C)
for 8 years, and the last 6 have been exclusively on Linux. OCaml may
be programming nirvana, but it likely won't pay the bills, so I won't
be spending any time on it.
On Jul 23, 2006, at 2:06 PM, Dave Robillard
<drobilla(a)connect.carleton.ca> wrote:
> I don't see it as much of a problem anyway. At least in all my use
> cases, there's realtime crucial data (eg what MIDI tends to do,
> controllers, notes, etc) and there's data that just needs to get there
> sanely. The nice thing about the realtime stuff is that lost messages
> don't really matter, all you care about is the most recent one anyway.
Well, consider a volume slider on a mixing console.
One way to send its state is to sample its value and
send a packet every millisecond. In this case, your
"nice thing" is true -- the occasional burst of lost
or reordered packets will only cause brief "transient"
artifacts. The price paid for this "nice thing" is
sending 1000 packets per second, every second
of the performance.
The more efficient way to send the slider state
is to send a packet only when a human finger
moves the slider. In this case, your "nice thing"
is not true -- the slider might be moved by the
human once per minute, and if the packet coding
that move is lost, that lost packet matters for the
entire minute.
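A sketch of the two strategies, using a hypothetical send_cc()
transmit helper (invented here for illustration):

    #include <stdio.h>

    /* Hypothetical helper -- stands in for whatever actually puts
       a MIDI Control Change on the wire. */
    static void send_cc(int controller, int value)
    {
        printf("CC %d = %d\n", controller, value);
    }

    /* Strategy 1: sample-and-send. Called every millisecond; 1000
       packets/second whether or not the slider moved, but any lost
       packet is overwritten a millisecond later. */
    void tick_sampled(int slider)
    {
        send_cc(7, slider);              /* CC 7 = channel volume */
    }

    /* Strategy 2: incremental. Sends only on change; far fewer
       packets, but one lost packet misstates the mix until the
       slider next moves. */
    void tick_incremental(int slider)
    {
        static int last = -1;
        if (slider != last) {
            send_cc(7, slider);
            last = slider;
        }
    }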
RTP MIDI's recovery journal lets you safely use
this incremental update approach over a lossy
packet stream in an efficient way ... if
you need to send data over lossy networks
and that fits MIDI's semantics, and TCP's
head-of-line blocking latency is not acceptable,
I think RTP MIDI is the right protocol to use.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---