Hi everyone,
The Internet-Drafts for the RTP MIDI protocols
for sending MIDI over IP are now in "IETF Last Call" --
this is a process where the Internet Engineering Steering
Group (IESG) solicits comments from the community at large,
before making a decision on whether the protocol should
be blessed with standards-track RFC status.
See below for information on how to send comments
to the IESG (don't send them to me directly -- I can't pass
them on). Thanks!
-----
From: The IESG <iesg-secretary(a)ietf.org>
Date: January 6, 2006 8:46:17 AM PST
To: IETF-Announce <ietf-announce(a)ietf.org>
Cc: avt(a)ietf.org
Subject: Last Call: 'RTP Payload Format for MIDI' to Proposed Standard
Reply-To: iesg(a)ietf.org
The IESG has received a request from the Audio/Video Transport WG to
consider the following documents:
- 'RTP Payload Format for MIDI '
<draft-ietf-avt-rtp-midi-format-14.txt> as a Proposed Standard
- 'An Implementation Guide for RTP MIDI '
<draft-ietf-avt-rtp-midi-guidelines-14.txt> as an Informational RFC
The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send any comments to the
iesg(a)ietf.org mailing list by 2006-01-20.
The files can be obtained via
http://www.ietf.org/internet-drafts/draft-ietf-avt-rtp-midi-format-14.txt
http://www.ietf.org/internet-drafts/draft-ietf-avt-rtp-midi-guidelines-14.txt
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
Hi folks.
This message does get into driver specifics to an extent, but I'm mostly
coming to the list for advice on how to find out what "mystery codec" is
being used.
I've been putting some of the tiny bit of actual free time I get during
this winter holiday :) into trying to get a libusb-based driver going
for my little Olympus VN480PC. It's a digital voice recorder that comes
with a USB cable for transferring voice recordings to a computer, and
the accompanying software is Windows-only.
I'd very much like to be able to do this transfer using Linux instead of
needing Windows, though.
I think I -might- have the protocol mostly figured out, using a USB sniffer
on the Windows side, but that may prove to be the easy part of this
project. :-S
What I'm faced with now is 10K of data which just about has to be the
voice data in some format or other - but it isn't clear what format
it's in.
To try to figure out the format, I've:
1) Computed the difference in the lengths of the apparently-voice data
transferred via USB, and the "data" section of the resulting .wav file.
They do not differ by a constant - so it's not just a matter of tacking
on a header.
2) Computed the quotient of the lengths of the apparently-voice data
transferred via USB, and the "data" section of the resulting .wav file.
They do not differ by a constant factor - so it's not just a matter of
converting, for example, shorts to reals sample by sample, in which case
the lengths, I would think, should differ by a constant factor of 2.
3) Symlinked a file containing the data transferred via USB to names with
all of the file extensions known to sox, and attempted to use sox to
convert those files to .wav. None of the conversions succeeded.
4) Wrote a small python program to treat the data transferred via USB as
data to be stuffed into the "data" section of a .wav file, and created a
series of .wav files with all format types from 0 to 999 (a rough sketch
of this wrapping idea appears after this list). sndfile-info did not give
errors for 9 of these, but none of them look or sound right in gnusound.
5) Googled about Olympus and voice/audio codecs, to see if there is a
proprietary one they favor. It appears they were involved in the design
of the "DSS" format.
6) Downloaded "DSS Player Lite" from Olympus' web site, and copied the
data transferred via USB to "hi.dss". However, DSS Player Lite did not
recognize the file format.
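For anyone who wants to reproduce step 4, here is roughly what that
wrapping step looks like in C (the original program was python). The
sample rate, channel count and bit depth are pure guesses, and compressed
format tags often need extra fmt-chunk fields, so treat this as a sketch,
not a reference implementation:

/* wrap 'len' bytes of mystery data in a minimal 44-byte RIFF/WAVE header */
#include <stdint.h>
#include <stdio.h>

static void put_le32(FILE *f, uint32_t v)
{ fputc(v & 0xff, f); fputc((v >> 8) & 0xff, f); fputc((v >> 16) & 0xff, f); fputc((v >> 24) & 0xff, f); }
static void put_le16(FILE *f, uint16_t v)
{ fputc(v & 0xff, f); fputc((v >> 8) & 0xff, f); }

void write_wav(FILE *out, const unsigned char *data, uint32_t len,
               uint16_t format_tag)   /* e.g. 1=PCM, 6=A-law, 7=mu-law, 0x11=IMA ADPCM */
{
    const uint32_t rate = 8000, channels = 1, bits = 8;      /* guesses */
    fwrite("RIFF", 1, 4, out);  put_le32(out, 36 + len);  fwrite("WAVE", 1, 4, out);
    fwrite("fmt ", 1, 4, out);  put_le32(out, 16);            /* 16-byte fmt chunk */
    put_le16(out, format_tag);
    put_le16(out, channels);
    put_le32(out, rate);
    put_le32(out, rate * channels * bits / 8);                /* nominal byte rate */
    put_le16(out, channels * bits / 8);                       /* nominal block align */
    put_le16(out, bits);
    fwrite("data", 1, 4, out);  put_le32(out, len);
    fwrite(data, 1, len, out);                                /* the mystery bytes */
}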
Does anyone have any thoughts about what else I might try to see what
format this data is in, and/or convert it to a known format?
I've got detailed documentation of most of what I've done so far on this
project at http://dcs.nac.uci.edu/~strombrg/VN480PC/ The page includes
some .wav's, a binary file I'm assuming is voice data in a mystery
codec, full USB sniffer logs, and so on.
Does anyone have any suggestions - especially toward how to convert that
"likely voice data" in the USB sniff to some sort of known and
supported-on-Linux codec?
Thanks!
Hi!
Seems like the father of FM synthesis has joined Wikipedia. Some of you
guys might care to take a brief look at the FM synthesis page, just once
in a while, so it won't get vandalised again?
--
mvh // Jens M Andreasen
Florian Schmidt writes:
> I further assume that the alsa seq event system
> is used
This is true of Rosegarden,
> and midi events are not queued
> for future delivery but always delivered immediately.
but this isn't -- Rosegarden always queues events
from a non-RT thread and lets the ALSA sequencer
kernel layer deliver them. (Thru events are delivered
directly, with potential additional latency because of
the lower priority used for the MIDI thread.) In
principle this should mean that only the priority of
the receiving synth's MIDI thread is significant for
the timing of sequenced events. We also have a
mechanism to compensate for gradual drift between
the MIDI timing source (kernel timers or RTC) and
soundcard clock, when synchronising to audio, by
adjusting the sequencer skew factor. (This happens
to be similar to the mechanism for slaving to MTC,
which is handy.)
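For the curious, here is a rough sketch in C (not Rosegarden code; made-up
values, no error handling) of the two queue mechanisms mentioned above:
scheduling an event for future delivery so the kernel layer delivers it,
and nudging the queue's skew factor to track another clock:

#include <alsa/asoundlib.h>

/* schedule a note-on for tick 480 on queue q, sent from port p */
static void queue_note(snd_seq_t *seq, int q, int p)
{
    snd_seq_event_t ev;
    snd_seq_ev_clear(&ev);
    snd_seq_ev_set_noteon(&ev, 0, 60, 100);     /* channel 0, middle C */
    snd_seq_ev_set_source(&ev, p);
    snd_seq_ev_set_subs(&ev);                   /* to all subscribers */
    snd_seq_ev_schedule_tick(&ev, q, 0, 480);   /* absolute tick time */
    snd_seq_event_output(seq, &ev);
    snd_seq_drain_output(seq);                  /* kernel delivers it later */
}

/* compensate gradual drift: scale the queue clock by ratio (e.g. 1.0002) */
static void set_queue_skew(snd_seq_t *seq, int q, double ratio)
{
    snd_seq_queue_tempo_t *tempo;
    snd_seq_queue_tempo_alloca(&tempo);
    snd_seq_get_queue_tempo(seq, q, tempo);
    unsigned int base = snd_seq_queue_tempo_get_skew_base(tempo); /* usually 0x10000 */
    snd_seq_queue_tempo_set_skew(tempo, (unsigned int)(ratio * base));
    snd_seq_set_queue_tempo(seq, q, tempo);
}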
In my experience this is all a long way from
foolproof. The most common problems for users
seem to be:
- ALSA sequencer uses kernel timers by default, and of course they only
run at 100 or 250 Hz in many kernels.
- ALSA sequencer can sync to RTC, but the
associated module (snd-rtctimer) appears to hang
some kernels solid when loaded or used. I don't have
much information about that, but I can probably find
out some more.
- ALSA sequencer can sync to a soundcard clock,
but this induces jitter when used with JACK and has
caused confusion for users who find themselves
inadvertently sync'd to an unused soundcard (the
classic "first note plays, then nothing" symptom).
The biggest advantage of course is not having to run
an RT MIDI timing thread. My impression is that this
aspect of MusE (which does that, I think) causes
as many configuration problems for its users as using
ALSA sequencer queue timers does for Rosegarden's.
Any more thoughts on this?
Chris
hi everyone!
a happy new year to all folks on the gregorian calendar, and a generic
happy next 365 days to everyone else!
the music department at columbia university are taking the list server
down for an upgrade on the coming weekend, so expect interruptions for
linux-audio-dev, linux-audio-user and linux-audio-announce. you will
probably want to keep a copy of all the mails you send over the weekend,
so that you can re-send them in case they end up in the bit bucket.
let me take this opportunity to thank douglas irving repetto for many
years of painless hosting and friendly help, and the entire music dept.
for their generous donation of iron and bandwidth. kudos, guys!
rumor has it that the new list server will be an os x machine. hopefully
this will make the lists even more user-friendly and aesthetically
pleasing than before ;)
all the best,
jörn
-------- Original Message --------
Subject: music.columbia.edu server downtime
Date: Mon, 2 Jan 2006 16:21:13 -0500
From: douglas irving repetto <douglas(a)music.columbia.edu>
To: douglas(a)music.columbia.edu
Hello,
We will be upgrading the music.columbia.edu server this weekend.
Hopefully all of the work will be done on Saturday, but it may extend
into Sunday. music.columbia.edu will not be available during the
downtime. That means no websites will be served, no email will be
sent/delivered, no mailing lists will function, etc.
I'll send an update later this week with info about some changes that
you'll see on the new server.
Happy new year,
douglas
--
............................................... http://artbots.org
.....douglas.....irving........................ http://dorkbot.org
................................ http://ceait.calarts.edu/musicdsp
.......... repetto....... http://works.music.columbia.edu/organism
............................... http://music.columbia.edu/~douglas
--
jörn nettingsmeier
home://germany/45128 essen/lortzingstr. 11/
http://spunk.dnsalias.org
phone://+49/201/491621
if you are a free (as in "free speech") software developer
and you happen to be travelling near my home, drop me a line
and come round for a free (as in "free beer") beer. :-D
On Friday 30 December 2005 17:37, Werner Schweer wrote:
> The ALSA seq api is from ancient time were no realtime threads were
> available in linux. Only a kernel driver could provide usable
> midi timing. But with the introduction of RT threads the
> ALSA seq api is obsolete IMHO.
I don't agree with this statement. IMHO, a design based on raw MIDI ports used
like simple Unix file descriptors, with every user application implementing
its own event scheduling mechanism, is the ancient and traditional way, and it
should be considered obsolete now in Linux, since we have the advanced
queueing capabilities provided by the ALSA sequencer.
You guys are talking here about MIDI timing considering only the event
scheduling point of view, as if Rosegarden or MusE were simple MIDI players.
Of course, playing beats on time is a required feature. But my bigger concern
about MIDI timing issues is when you are *recording* events. This is where
ALSA queues, which provide accurate timestamps for incoming events, are so good.
It could be the absolute winner if problems like the audio synchronization
and slave MTC synchronization were solved likewise.
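As an illustration, here is a minimal sketch in C (untested, names made up)
of how a client can ask the sequencer to stamp incoming events against a
queue, so the application sees arrival times instead of timing them itself:

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_seq_t *seq;
    if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_DUPLEX, 0) < 0)
        return 1;
    snd_seq_set_client_name(seq, "timestamp-demo");

    int queue = snd_seq_alloc_queue(seq);

    /* a writable port that keyboards or other clients can connect to */
    int port = snd_seq_create_simple_port(seq, "input",
        SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE,
        SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION);

    /* ask the kernel to stamp events arriving here with real time
       taken from our queue */
    snd_seq_port_info_t *pinfo;
    snd_seq_port_info_alloca(&pinfo);
    snd_seq_get_port_info(seq, port, pinfo);
    snd_seq_port_info_set_timestamping(pinfo, 1);
    snd_seq_port_info_set_timestamp_real(pinfo, 1);
    snd_seq_port_info_set_timestamp_queue(pinfo, queue);
    snd_seq_set_port_info(seq, port, pinfo);

    snd_seq_start_queue(seq, queue, NULL);
    snd_seq_drain_output(seq);

    for (;;) {
        snd_seq_event_t *ev;
        snd_seq_event_input(seq, &ev);   /* blocks until an event arrives */
        printf("event type %d at %u.%09u s\n", ev->type,
               ev->time.time.tv_sec, ev->time.time.tv_nsec);
    }
}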
Regards,
Pedro
On Saturday 31 December 2005 17:10, Paul Davis wrote:
> On Fri, 2005-12-30 at 22:27 +0100, Pedro Lopez-Cabanillas wrote:
> > On Friday 30 December 2005 17:37, Werner Schweer wrote:
> > > The ALSA seq api is from ancient time were no realtime threads were
> > > available in linux. Only a kernel driver could provide usable
> > > midi timing. But with the introduction of RT threads the
> > > ALSA seq api is obsolete IMHO.
> >
> > I don't agree with this statement. IMHO, a design based on raw MIDI ports
> > used  like simple Unix file descriptors, and every user application
> > implementing its own event schedule mechanism is the ancient and
> > traditional way, and it should be considered obsolete now in Linux since
> > we have the advanced queueing capabilities provided by the ALSA
> > sequencer.
>
> low latency apps don't want queuing they just want routing. this is why
> the ALSA sequencer is obsolete for such apps. frank (v.d.p) had the
> right idea back when he started this, but i agree with werner's
> perspective that the queuing facilities are no longer relevant, at least
> not for "music" or "pro-audio" applications.
Many professional musicians want MIDI capabilities on their PCs because they
already own (or want to have) electronic musical instruments communicating
via MIDI. This means that the computer is another piece of musical equipment
in the musician's studio/network.
The kind of scenario you are painting about low latency applications seems
limited to soft synths listening to sequencing applications. Using MIDI for
this kind of communication between two processes running on the same machine
looks a bit like overkill to me. MusE has synth plugins, and Rosegarden has
DSSI synth plugins, without the ALSA sequencer being involved here.
> > It could be the absolute winner if problems like the audio
> > synchronization  and slave MTC synchronization were solved likewise.
>
> what problems? JACK demonstrates perfect audio sync in the only sensible
> way there is (the same way every other system does it); several JACK
> clients have MTC slave capabilities, including ardour, and it has
> nothing whatsoever to do with the ALSA sequencer.
Exactly. Please excuse my poor English; I meant functionality instead of
problem. Let me reword the sentence: ALSA could be even better if there were
another universal mechanism, available to every ALSA application, providing
an easy and consistent way to synchronize a queue with an external MTC
master, without needing to recode the whole process for each application.
I know that Ardour provides slave MTC synchronization, and so does
Rosegarden. Each one uses a different approach, and in the future there will
be many more implementations, better or worse.
I like the way Rosegarden solves it, using the ALSA sequencer queue skew
parameter. I guess that we could build another ALSA sequencer client, either a
kernel module or a userspace one, accepting MTC input and translating the
received MTC sysex messages into skew adjustments on queues also used by other
ALSA clients. Comments?
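To make the idea concrete, here is a very rough userspace sketch in C
(hypothetical; 25 fps assumed, no filtering, reusing the set_queue_skew()
helper from the earlier skew sketch) of such a client: it listens for MTC
quarter-frame events and turns the decoded time into a skew correction for
a target queue:

#include <alsa/asoundlib.h>

void set_queue_skew(snd_seq_t *seq, int q, double ratio);  /* see earlier sketch */

static unsigned char mtc_piece[8];       /* the 8 quarter-frame nibbles */

static double mtc_seconds(void)          /* 25 fps assumed for brevity */
{
    unsigned frames  = mtc_piece[0] | ((mtc_piece[1] & 0x1) << 4);
    unsigned seconds = mtc_piece[2] | ((mtc_piece[3] & 0x3) << 4);
    unsigned minutes = mtc_piece[4] | ((mtc_piece[5] & 0x3) << 4);
    unsigned hours   = mtc_piece[6] | ((mtc_piece[7] & 0x1) << 4);
    return hours * 3600.0 + minutes * 60.0 + seconds + frames / 25.0;
}

void handle_event(snd_seq_t *seq, int target_queue, const snd_seq_event_t *ev)
{
    if (ev->type != SND_SEQ_EVENT_QFRAME)
        return;
    unsigned piece = (ev->data.control.value >> 4) & 0x7;   /* which nibble */
    mtc_piece[piece] = ev->data.control.value & 0x0f;
    if (piece != 7)
        return;                          /* wait for a complete frame */

    double master = mtc_seconds();       /* where the external master is */

    snd_seq_queue_status_t *st;
    snd_seq_queue_status_alloca(&st);
    snd_seq_get_queue_status(seq, target_queue, st);
    const snd_seq_real_time_t *rt = snd_seq_queue_status_get_real_time(st);
    double local = rt->tv_sec + rt->tv_nsec / 1e9;

    /* naive proportional correction; a real client would filter this */
    if (local > 0.0)
        set_queue_skew(seq, target_queue, master / local);
}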
Regards,
Pedro
Paul Davis:
> i guess it all depends on one's definition of
> "sufficient". my take is that there are several MIDI
> h/w boxes that guarantee MIDI delivery to a
> resolution that matches the wire protocol
> (1/3msec). until we have scheduling capabilities that
> match this (or better), i don't feel comfortable
> calling them "sufficient".
Ah, I see. I've no argument with that, but it isn't
quite what I thought you were referring to.
Chris
On Saturday 31 December 2005 00:52, Florian Schmidt wrote:
> All of this depends on whether physical port midi activity is really
> handled by IRQ's too. Anyone know more?
I don't know the details of every MIDI interface, but there are many different
variations. Please, somebody with better knowledge, provide additional
info and correct my very probable mistakes.
USB MIDI interfaces don't generate interrupts themselves; instead, this is done
by the USB host controller (EHCI/OHCI/UHCI). The drivers for these devices
provide interrupt handling not directly but indirectly (through the in/out URB
completion handlers). So yes, this device type may be considered
interrupt-driven.
The oldest MIDI interface for PCs was the Roland MPU-401. It had two
operational modes: Intelligent and UART mode. Intelligent mode was necessary
because of the low power of early personal computer CPUs. This intelligent mode
required IRQ handlers for both MIDI input and output operations, and to
control the internal timer used for hardware scheduling and event
timestamping, and also for external MTC/SMPTE synchronization. It was a
rather sophisticated piece of hardware, but there is no ALSA driver for
these devices (and I don't know a single manufacturer selling them nowadays).
The MPU-401 UART mode doesn't provide an interrupt to signal output
completion, so you must use polling for output (it provides an
interrupt only for incoming events). There is an ALSA driver for this mode,
which is also used by many consumer sound cards emulating the MPU-401. I
wouldn't recommend using these MIDI interfaces.
Other chips, such as the Ensoniq 1370/1371 used in some cheap SoundBlaster
products, included a better UART mode, providing interrupts for both
reception and transmission. There is also an ALSA driver for the 16550 UART,
which can be used with a few external devices like the Roland Canvas and the
Midiator devices.
Regards,
Pedro