On Monday, 24 July 2006 16:38, Lee Revell wrote:
> Take the sequence "80 3D 35 31 80 3A 39 0E 80 37 31 03 80 31 1F" in
> the first line for example. I know that 0x80 is note-off, and 0x3D is the
> note number and 0x35 the velocity of the note-off. But what the heck is
> the next byte, 0x31?
Delta time of the next event, in variable length representation.
> The MIDI standard says note-off is one status byte
> followed by 2 data bytes!
The SMF (Standard MIDI File) format must store timestamped events, something
the wire-level MIDI protocol doesn't need. There is a good reference on the
SMF format here: http://borg.com/~jglatt/tech/midifile.htm
You may want to try some SMF-to-text conversion utility:
http://alsa.opensrc.org/MidiComp
Regards,
Pedro
On Jul 24, 2006, at 7:43 AM, Forest Bond
<forest(a)alittletooquiet.net> wrote:
> I assume that RTP MIDI could be wrapped in a nice library that
> makes working
> with it a lot more pleasant? Couldn't someone implement a protocol
> over RTP
> MIDI sysex/NRPN/something that feels something like OSC at the code
> level? Or
> would that just not be very useful?
My suggestion would be ...
If you're creating a system where you are sending OSC and receiving
OSC, just use OSC. To handle loss and reordering, use the suggestions
people have made during this thread -- resilient data encoding,
application-level retransmission, engineering an essentially loss-free
network, or some combination of the three.
The question that brought me into the discussion originally was: if you
in fact have a MIDI device, like one you bought in a store, and you
want to send its data stream over a lossy network (say, Wi-Fi or the
public Internet), what do you do? In this case, I think having RTP MIDI
built into your environment at some level makes sense. The question is, how?
One way to go is to model Apple -- Tiger added networked MIDI cables
to CoreMIDI, so that applications use the standard CoreMIDI API
and see both direct-connect MIDI cables (via USB, Firewire, etc)
and networked MIDI cables as the same thing. See:
http://www.soundonsound.com/sos/jul05/articles/tiger.htm#3
for a description of how it works in OS X Tiger. For Linux, I assume
this means building virtual MIDI cables into ALSA.
Another option is to use a middle-ware package that has already
implemented virtual MIDI cables over networks, like MIDIShare.
Perhaps Dominique Fober or one of his collaborators can chime
in with more details.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
On Jul 24, 2006, at 7:43 AM, Carmen wrote:
>
>> you need to send data over lossy networks
>> and that fits MIDI's semantics, and TCP's
>> head-of-line blocking latency is not acceptable,
>
> what sort of latency is this? ~10 ms?
The contract TCP makes is to deliver a stream
of bytes in the order the bytes were sent. So,
let's consider what happens when:
[1] Packet 15 in a sequence is lost, but
[2] Packets 16-20 in a sequence arrive OK.
TCP can't deliver the data in packets 16-20
until it has packet 15 -- because TCP has
to deliver a byte stream in the order it was
sent.
And so, packet 15 is at the "head of the line"
blocking packets 16-20, which is why we call
it "head of line blocking". Retransmitting
packet 15 requires a successful round-trip
(feedback from receiver to sender, and a resent
packet 15 from sender to receiver), and so
latency depends on the link latency and the
nature of packet loss on the link. Note that the
latency doesn't only affect packet 15 -- packets
16-20 experience just as much latency, because
they cannot be placed into the bytestream until
packet 15 is placed in the bytestream.
> couldn't adjusting the MTU or something else alleviate
> it without having to invent "UDP + Parity Checking" ?
No, see above. The problem happens because of the
nature of the "reliable-bytestream" TCP contract.
Also, note that RTP MIDI isn't about "parity checking".
It's about encoding the present state of the entire
MIDI "state machine" (which notes are down, where
is each Pitch Bend, where is the sustain pedal, etc) in
a journal that gets sent along with each packet. Several
design tricks were used to keep the journal to a reasonable
size: for example, the sender uses slow feedback from the
receivers to figure out what information about the MIDI state
machine a receiver would need to recover from an
arbitrary packet loss.
This paper:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/nossdav01.pdf
explains the general concept of how the recovery
journal works; the folks in the IETF read it and
invited us to do RTP MIDI (in 2001), and the folks in
the MMA joined the process around 2003. This
paper:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/aes117.pdf
was written closer to the end of the standardization
process (2004; it was presented at AES), although there
were some changes after that to better support some of the
MIDI idioms Emagic and Mackie used for Logic Control.
So, for the final word on RTP MIDI implementation, see the
I-Ds (soon to be RFCs) and sample code on this website:
http://www.cs.berkeley.edu/~lazzaro/rtpmidi/
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
> Dave Robillard <drobilla(a)connect.carleton.ca> wrote:
>
> OSC can go over TCP to avoid the packet loss issue (and messed up
> ordering which can be extremely annoying as well). liblo's TCP
> support
> needs some work though.
This comment illustrates an advantage for using RTP MIDI
to send MIDI over lossy networks. The recovery journal in
RTP MIDI supports graceful recovery from the loss of an arbitrary
number of packets upon the receipt of the first packet after the
loss (it also works for reordering). Journalling is a feed-forward
process; no retransmission is used -- thus, none of the
head-of-line-blocking latency issues one has when running
media over TCP.
See:
http://www.cs.berkeley.edu/~lazzaro/rtpmidi/index.html
The IESG approved RTP MIDI in February, so the protocol is
frozen. Hopefully the copy-edit phase will be done by autumn
and then we'll have RFC numbers for RTP MIDI.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
On 7/20/06, Loki Davison <loki.davison(a)gmail.com> wrote:
> On 7/20/06, Erik de Castro Lopo <mle+la(a)mega-nerd.com> wrote:
> > Loki Davison wrote:
> >
> > > There are quite a few c++ 'not fans' on LAD. C and python all the way ;)
> >
> > I used to be a Python fan but for anything larger than a couple
> > of hundred lines I now prefer Ocaml.
> >
> > Erik
>
> I haven't tried ocaml. I should put it on the todo list. ;) I'm a bit
> of a lisper and i find it easier to find places that let me program in
> python than lisp/scheme, so it's a good back up plan. Jobs for both
> are pretty damn hard to find though. :)
>
> Loki
>
You pretty much can't ever ask something about C++ without all the
haters coming out.
I've worked professionally using only C++ (and a little plain old C)
for 8 years, and the last 6 have been exclusively on Linux. Ocaml may
be programming nirvana, but it likely won't pay the bills, so I won't
be spending any time on it.
On Jul 23, 2006, at 2:06 PM, Dave Robillard
<drobilla(a)connect.carleton.ca> wrote:
> I don't see it as much of a problem anyway. At least in all my use
> cases, there's realtime crucial data (eg what MIDI tends to do,
> controllers, notes, etc) and there's data that just needs to get there
> sanely. The nice thing about the realtime stuff is that lost messages
> don't really matter, all you care about is the most recent one anyway.
Well, consider a volume slider on a mixing console.
One way to send its state is to sample its value and
send a packet every millisecond. In this case, your
"nice thing" is true -- the occasional burst of lost
or reordered packets will only cause brief "transient"
artifacts. The price paid for this "nice thing" is
sending 1000 packets per second, every second
of the performance.
The more efficient way to send the slider state
is to only send packets when a human finger
moves the slider. In this case, your "nice thing"
is not true -- the slider might be moved by the
human once per minute, and if the packet encoding
that move is lost, that lost packet matters for the
entire minute.
RTP MIDI's recovery journal lets you safely and
efficiently use this incremental update approach
over a lossy packet stream. If you need to send
data over lossy networks, the data fits MIDI's
semantics, and TCP's head-of-line blocking latency
is not acceptable, I think RTP MIDI is the right
protocol to use.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
Hello all,
I've set up a mailing list for specimen:
http://zhevny.com/mailman/listinfo/specimen
Additionally the web site now lives at http://zhevny.com/specimen. This
is a minimal re-working of Pete's old site. More improvements to follow.
Let me know if anything is badly broken. (I know the guide is broken,
but I need to re-do it with new screenshots anyway.)
I just changed the DNS record for zhevny.com to point to a new host
earlier today. The TTL was set to 1 day, so some of you may get pointed
to the old box. Let me know if you have any issues, or just try again a
little later.
Thanks,
Eric Rz.
[ in linux-audio-users, originally in nmedit-devel ]
>our first release of Nomad - Nord Modular Editor is now available.
It is at
http://nmedit.sourceforge.net
if that was not mentioned.
Nomad also has a UI builder for building UIs for modules.
Check the screenshots and "nomad-ui-editor.jpg".
Do we have anything similar, e.g. a collection of audio-related
widgets for Glade?
How could Nomad's UI builder be re-used in other projects?
By using it like VSTGUI?
By building module UIs for Csound opcodes and writing a Csound
exporter? I have earlier suggested that Nord Modular modules
could be written as Csound instruments, i.e.,
Nomad + Exporter + Csound == NM clone.
Several thousand NM patches can be found on the web.
Many of them are advanced and well documented.
http://nm-archives.electro-music.com/010_NordModular/014_Interesting_Thread…
Juhana
--
http://music.columbia.edu/mailman/listinfo/linux-graphics-dev
for developers of open source graphics software
>From: conrad berhrster <conrad.berhoerster(a)gmx.de>
>
>Now the question:
>- The worker thread is faster than the JACK thread. (Sure, it should be.) What
>is the usual way to pause the worker thread and wake it up again when the
>ringbuffer needs more data?
My alsashmrec uses
kill((pid_t)diskpid,SIGSTOP); (in disk writer process)
and
kill((pid_t)diskpid,SIGCONT); (in A/D reader process)
Perhaps that could be replaced with a semaphore, but what if
multiple processes use the same disk service process?
That increases the number of SIGCONTs, but otherwise the
processes would be uncoupled. How would it work with semaphores?
>- How should the ringbuffer be initialized when starting to play?
Fill it up if its size was selected optimally.
If you have multiple songs and you don't know which one the user
would like to play next, then make one ringbuffer per song and
fill them all up. Then playback can start instantly when the user
presses the song button.
Juhana