On Saturday 31 December 2005 17:10, Paul Davis wrote:
> On Fri, 2005-12-30 at 22:27 +0100, Pedro Lopez-Cabanillas wrote:
> > On Friday 30 December 2005 17:37, Werner Schweer wrote:
> > > The ALSA seq api is from an ancient time when no realtime threads were
> > > available in Linux. Only a kernel driver could provide usable
> > > MIDI timing. But with the introduction of RT threads the
> > > ALSA seq api is obsolete, IMHO.
> >
> > I don't agree with this statement. IMHO, a design based on raw MIDI ports
> > used like simple Unix file descriptors, with every user application
> > implementing its own event scheduling mechanism, is the ancient and
> > traditional way, and it is what should be considered obsolete in Linux
> > now that we have the advanced queueing capabilities provided by the ALSA
> > sequencer.
>
> low latency apps don't want queuing; they just want routing. this is why
> the ALSA sequencer is obsolete for such apps. frank (v.d.p) had the
> right idea back when he started this, but i agree with werner's
> perspective that the queuing facilities are no longer relevant, at least
> not for "music" or "pro-audio" applications.
Many professional musicians want MIDI capabilities on their PCs because they
already own (or want to have) electronic musical instruments communicating
via MIDI. This means that the computer is another piece of musical equipment
in the musician's studio/network.
The kind of scenario you are painting about low latency applications seems
limited to soft synths listening to sequencing applications. Using MIDI for
this kind of communication between two processes running on the same machine
looks like overkill to me. MusE has synth plugins, and Rosegarden has DSSI
synth plugins, without the ALSA sequencer being involved at all.
> > It could be the absolute winner if problems like the audio
> > synchronization  and slave MTC synchronization were solved likewise.
>
> what problems? JACK demonstrates perfect audio sync in the only sensible
> way there is (the same way every other system does it); several JACK
> clients have MTC slave capabilities, including ardour, and it has
> nothing whatsoever to do with the ALSA sequencer.
Exactly. Please excuse my poor English; I meant functionality rather than
problem. Let me reword the sentence: ALSA could be even better if there were
another universal mechanism available to every ALSA application, providing
an easy and consistent way to synchronize a queue with an external MTC
master, without needing to recode the whole process in each application.
I know that Ardour provides slave MTC synchronization, and so does
Rosegarden. Each one uses a different approach, and in the future there will
be many more implementations, better or worse.
I like the way Rosegarden solves it, using the ALSA sequencer queue skew
parameter. I guess we could build another ALSA sequencer client, either a
kernel module or a userspace one, accepting MTC input and translating the
received MTC sysex messages into skew adjustments on queues that other ALSA
clients also use. Comments?
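As a rough illustration of what the skew adjustment itself involves -- a
minimal sketch only: the MTC parsing and the drift measurement are omitted,
drift_ratio is a hypothetical input, and only a client with permission on
the queue can change its tempo:

skew_sketch.cc:
#include <alsa/asoundlib.h>

// Make queue q run at drift_ratio times nominal speed (1.0 = no change);
// skew/skew_base is the queue's rate multiplier.
int apply_skew(snd_seq_t *seq, int q, double drift_ratio)
{
    snd_seq_queue_tempo_t *tempo;
    snd_seq_queue_tempo_alloca(&tempo);

    int err = snd_seq_get_queue_tempo(seq, q, tempo);
    if (err < 0)
        return err;

    // skew_base is conventionally 0x10000
    unsigned int base = snd_seq_queue_tempo_get_skew_base(tempo);
    snd_seq_queue_tempo_set_skew(tempo, (unsigned int)(drift_ratio * base));

    return snd_seq_set_queue_tempo(seq, q, tempo);
}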
Regards,
Pedro
Paul Davis:
> i guess it all depends on one's definition of
> "sufficient". my take is that there are several MIDI
> h/w boxes that guarantee MIDI delivery to a
> resolution that matches the wire protocol
> (1/3msec). until we have scheduling capabilities that
> match this (or better), i don't feel comfortable
> calling them "sufficient".
Ah, I see. I've no argument with that, but it isn't
quite what I thought you were referring to.
Chris
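(For reference, the 1/3 msec figure is just the MIDI wire rate: the wire
runs at 31250 baud, and each byte is framed as 10 bits -- a start bit, 8
data bits, and a stop bit -- so one byte takes 10 / 31250 s = 320
microseconds, roughly 1/3 msec. A three-byte note-on therefore occupies
about 1 msec of wire time.)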
On Saturday 31 December 2005 00:52, Florian Schmidt wrote:
> All of this depends on whether physical port midi activity is really
> handled by IRQ's too. Anyone know more?
I don't know the details of every MIDI interface, but there are many
different variations. Perhaps somebody with better knowledge could provide
additional information and correct my probable mistakes.
USB MIDI interfaces don't generate interrupts themselves. Instead, this is
done by the USB host controller (EHCI/OHCI/UHCI). The driver for these
devices provides an interrupt handler not directly but indirectly (the
in/out URB completion handlers). So yes, this device type may be considered
interrupt-driven.
The oldest MIDI interface for PCs was the Roland MPU-401. It had two
operational modes: Intelligent mode and UART mode. Intelligent mode was
necessary because of the low power of early personal computer CPUs. This
intelligent mode required IRQ handlers for both MIDI input and output
operations, to control the internal timer used for hardware scheduling and
event timestamping, and also for external MTC/SMPTE synchronization. It was
a rather sophisticated piece of hardware, but there is no ALSA driver for
these devices (and I don't know of a single manufacturer selling one
nowadays).
The MPU-401 UART mode doesn't provide an interrupt to signal output
completion, so you must use polling for output (it provides an interrupt
only for incoming events). There is an ALSA driver for this mode, which is
also used by many consumer sound cards emulating the MPU-401. I wouldn't
recommend using these MIDI interfaces.
Other chips, such as the Ensoniq 1370/1371 used in some cheap SoundBlaster
products, included a better UART mode, providing interrupts for both
reception and transmission. There is also an ALSA driver for the 16550 UART,
which can be used with a few external devices like the Roland Canvas and the
Midiator devices.
Regards,
Pedro
Paul Davis:
> most of the ALSA sequencer's
> capabilities are redundant, which is compounded
> because it currently has
> no way of providing sufficiently accurate scheduling
You say this as if it were self-evident, when it's been
the subject of much of this thread. _Does_ it have
no way of providing sufficiently accurate scheduling?
If not, why not?
This would imply that there is in fact no way for a
userspace application on a normal Linux distribution
to provide MIDI timing accurate enough to be
perceived as correct in all circumstances.
Chris
Me:
> I'll have to review the sequencer API and look at
> adding a separate RT MIDI thread as an alternative
Actually no, hang on a minute. First I want to know
more about why the ALSA sequencer queue doesn't
work better here.
It's all very well saying it's irrelevant now that it's
so easy to create RT threads, but I think that's
bogus. Probably a substantial majority of Rosegarden
users doing MIDI only are using systems on which it
isn't possible for a random user to create RT
threads at all. For these users, the ALSA sequencer
ought to be able to do a lot better than an ordinary
unprivileged thread can. I'd like to know why it might not.
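For comparison, delivery through a sequencer queue looks roughly like this
-- a minimal sketch with error handling omitted, assuming seq_handle and
port_no are set up as in the midi_timer.cc program further down. The kernel
then delivers the event on time, with no RT privileges needed in the
application:

queued_note_sketch.cc:
#include <alsa/asoundlib.h>

void queue_note(snd_seq_t *seq_handle, int port_no)
{
    // create and start a queue; a kernel timer drives it
    int q = snd_seq_alloc_named_queue(seq_handle, "sketch");
    snd_seq_start_queue(seq_handle, q, NULL);
    snd_seq_drain_output(seq_handle);

    snd_seq_event_t ev;
    snd_seq_ev_clear(&ev);
    snd_seq_ev_set_source(&ev, port_no);
    snd_seq_ev_set_subs(&ev);
    snd_seq_ev_set_noteon(&ev, 0, 60, 100);

    // schedule at absolute queue time 0.5 s instead of sending
    // immediately; the queue has just started at time zero
    snd_seq_real_time_t when = { 0, 500000000 };  // sec, nsec
    snd_seq_ev_schedule_real(&ev, q, 0, &when);

    snd_seq_event_output(seq_handle, &ev);
    snd_seq_drain_output(seq_handle);
}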
I'm not at a proper computer just now to delve
through the code - does anyone have any more insight into this?
Chris
Florian Schmidt:
> Yeah, i got a nice and juicy BUG in it (see below). So
> this is what kills rosegarden regularly here when
> run with RTC timing source.
That'll be the chap. Mind you, I never saw the RTC-
based timer measure significantly better than the
system timer at 1000Hz - although your measurements
may vary and, it seems, probably would.
Chris
Florian Schmidt writes:
> Here's example output with rosegarden producing a
> supposedly steady stream of 16th notes at 120 bpm:
> [...]
Those results certainly are pretty poor. We do have
a very similar test in the Rosegarden tree (the
complainer test) but it doesn't stress the system
quite the way it seems your program does.
I'll have to review the sequencer API and look at
adding a separate RT MIDI thread as an alternative
(which should be straightforward enough). The
rationale for using queued events is simple -- ALSA
provides the service, why duplicate it? -- but it's
probably true that we've already spent far more time
working around problems with it than we saved by not
duplicating it. (Does anyone else use queued
sequencer events in earnest?)
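The thread creation itself is the easy part -- a minimal sketch, assuming
the process is permitted to use RT scheduling, with midi_loop() standing in
for the actual delivery loop:

rt_thread_sketch.cc:
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static void *midi_loop(void *)
{
    // sleep/wake and deliver events here
    return 0;
}

int start_midi_thread(pthread_t *tid)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);

    // don't inherit the creator's (non-RT) scheduling parameters
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);

    struct sched_param param;
    param.sched_priority = 90;  // above the JACK threads
    pthread_attr_setschedparam(&attr, &param);

    int err = pthread_create(tid, &attr, midi_loop, 0);
    if (err != 0)
        fprintf(stderr, "pthread_create failed: %d (no RT privileges?)\n", err);

    pthread_attr_destroy(&attr);
    return err;
}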
Chris
Hi,
i was wondering:
With the new shiny -rt kernels and realtime scheduling available to non-root
users via the usual mechanisms, there's the possibility of really
fine-tuning an audio/midi system.
The main issue i am interested in is the interplay between midi and
audio in such a system. Tuning the audio side to get a very reliable
system is pretty easy these days, thanks to the great jack audio
connection kit, alsa and the new -rt kernels.
But now i wonder how midi software fits into this. I'm interested here
in the special case of a software sequencer (e.g. Rosegarden) driving a
softsynth (e.g. om-synth or supercollider3) or whatever.
Ok, on a normal audio-tuned, -rt-equipped linux system, the SCHED_FIFO
priorities used for the different components look something like this:
99 - system timer
98 - RTC
81 - soundcard IRQ handler
80 - jack watchdog
70 - jack main loop
69 - jack clients' process loops
50 - the other IRQ handlers
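On an -rt kernel the IRQ handlers run as kernel threads, so these
priorities can be pinned by hand with chrt; a hypothetical example, where
1234 stands in for the PID of the soundcard IRQ thread as shown by ps:

chrt -f -p 81 1234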
Now, i wonder how midi threads would fit best into this scheme. Let's
assume our midi sequencer uses either sleep() or the RTC to get woken up at
regular intervals, and let's further assume that it deals properly with
these timing sources to get relatively jitter-free midi output, given
that it gets woken up often enough by the scheduler. I further assume
that the alsa seq event system is used and midi events are not queued
for future delivery but always delivered immediately.
All this implies that, for midi delivery timing not to be influenced by
audio processing on the system (which becomes a problem especially at large
buffer sizes, where quite a bit of work is done at a time), all the stuff
that handles midi should run with realtime priorities above the jack
stuff (i.e. around 90). I wonder whether it needs to have a higher
priority than the soundcard irq handler, too. Does the jackd main loop
"inherit" the priority of the soundcard irq handler?
Anyways, one more thing to note: for this to work nicely, the
softsynth needs to have an extra midi handling thread that also runs
with a priority in the 90s range, so it can timestamp each event
properly when it arrives.
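Something along these lines on the receiving side, presumably -- a minimal
sketch, error handling omitted; the handle is assumed to be open for input
with a port already subscribed:

input_stamp_sketch.cc:
#include <alsa/asoundlib.h>
#include <time.h>

void input_loop(snd_seq_t *seq_handle)
{
    while (1) {
        snd_seq_event_t *ev;

        // blocks until an event arrives
        if (snd_seq_event_input(seq_handle, &ev) < 0)
            continue;

        // stamp it immediately; with this thread in the 90s priority
        // range the stamp is close to the wire arrival time
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);

        // hand (ev, now) to the synth engine, e.g. via a lock-free
        // ring buffer, so the audio thread can place the note
        // sample-accurately within its next period
    }
}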
So i wonder now: assuming our system is set up as described above and all
midi handling is done from threads with sufficiently high priorities not
to get disturbed by the audio stuff, will the alsa event system play nice?
I ask this because i have set up a system as above with a simple midi
generator (see code below) and some different softsynths, one of which i
have written myself and which does have its midi thread at an appropriate
priority. You can get a tarball here:
http://affenbande.org/~tapas/ughsynth-0.0.3.tgz
Beware: it eats unbelievable amounts of cpu and is in no way considered
finished; it just happened to be lying around for this test ;). But i
still get some regular jitter in my sound.
Here's recorded example output (running jackd at a period size of 1024,
with the test notes produced at a frequency of 8 Hz), first with ughsynth
and then with jack-dssi-host hexter.so. The effect is less prominent with
hexter, i suppose because the jack load with it is only 2 or 3%, as
opposed to ughsynth, which uses 50% here on my athlon 1.2 ghz box. In
case you can't hear what i mean: the timing of roughly every 7th or 8th
note is a little bit off.
http://affenbande.org/~tapas/midi_timing.ogg
So i wonder: what's going wrong? Is the priority setup described above
incorrect? Is alsa seq handling somehow not done with RT priority?
What else could be wrong? Please enlighten me :)
And yeah, i do _not_ want to hear about jack midi. It's a good thing,
and i'm all for it, as it will make at least some scenarios work great
(sequencer and softsynth both being jack midi clients), but not all.
Thanks in advance,
Flo
midi_timer.cc:
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include <cstdlib>
#include <iomanip>
#include <pthread.h>
#include <linux/rtc.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>
#include <poll.h>
#include <signal.h>
#include <time.h>
#include <alsa/asoundlib.h>
#define RTC_FREQ 2048.0
#define NOTE_FREQ 8.0
#define RT_PRIO 85
int main()
{
    int fd = open("/dev/rtc", O_RDONLY);
    if (fd == -1) {
        perror("/dev/rtc");
        exit(errno);
    }

    // ask the RTC for periodic interrupts at RTC_FREQ Hz
    int retval = ioctl(fd, RTC_IRQP_SET, (int)RTC_FREQ);
    if (retval == -1) {
        perror("ioctl");
        exit(errno);
    }

    std::cout << "locking memory" << std::endl;
    mlockall(MCL_CURRENT);

    // std::cout << "sleeping 1 sec" << std::endl;
    // sleep(1);

    snd_seq_t *seq_handle;
    int err, port_no;

    err = snd_seq_open(&seq_handle, "default", SND_SEQ_OPEN_OUTPUT, 0);
    if (err < 0) {
        std::cout << "error opening sequencer" << std::endl;
        exit(1);
    }

    // set the client name to something reasonable..
    std::string port_name = "midi_timer";
    err = snd_seq_set_client_name(seq_handle, port_name.c_str());
    if (err < 0) {
        std::cout << "error setting client name" << std::endl;
        exit(1);
    }

    // this is the port others can connect to. we don't autoconnect ourselves
    err = snd_seq_create_simple_port(seq_handle, "midi_timer:output",
            SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
            SND_SEQ_PORT_TYPE_MIDI_GENERIC);
    if (err < 0) {
        std::cout << "error creating port" << std::endl;
        exit(1);
    }
    // on success the return value is our port number
    port_no = err;

    // switch this thread to SCHED_FIFO at RT_PRIO
    struct sched_param param;
    int policy;
    pthread_getschedparam(pthread_self(), &policy, &param);
    param.sched_priority = RT_PRIO;
    policy = SCHED_FIFO;
    if (pthread_setschedparam(pthread_self(), policy, &param) != 0)
        std::cout << "warning: could not set SCHED_FIFO" << std::endl;

    std::cout << "turning irq on" << std::endl;
    retval = ioctl(fd, RTC_PIE_ON, 0);
    if (retval == -1) {
        perror("ioctl");
        exit(errno);
    }

    snd_seq_event_t ev;
    unsigned long data;
    int ticks_passed = 0;

    while (1) {
        // block until the next RTC periodic interrupt
        retval = read(fd, &data, sizeof(unsigned long));
        if (retval == -1) {
            perror("read");
            exit(errno);
        }

        if ((float)ticks_passed >= (RTC_FREQ / NOTE_FREQ)) {
            // std::cout << "play note" << std::endl;
            ticks_passed -= (long int)(RTC_FREQ / NOTE_FREQ);

            // play a note: direct (unqueued) delivery to subscribers
            snd_seq_ev_clear(&ev);
            snd_seq_ev_set_direct(&ev);
            snd_seq_ev_set_subs(&ev);
            snd_seq_ev_set_source(&ev, port_no);
            ev.type = SND_SEQ_EVENT_NOTEON;
            ev.data.note.note = 53;
            ev.data.note.velocity = 100;
            snd_seq_event_output_direct(seq_handle, &ev);
            snd_seq_drain_output(seq_handle);
        }

        // the upper bytes of the value read hold the number of
        // interrupts since the last read
        data = (data >> 8);
        // std::cout << data << std::endl;
        ticks_passed += data;
    }

    return 0;
}
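For anyone wanting to try it, it should build with something like:

g++ -O2 -o midi_timer midi_timer.cc -lasound -lpthread

assuming the alsa-lib development headers are installed; /dev/rtc must
also be readable by the user.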
--
Palimm Palimm!
http://tapas.affenbande.org
> are you using it in a professional environment? So far it's been used only
> for home/hobbyist situations, and I would be really interested to hear
> about any use in a more professional situation.
Denis,
My application is that of a "non-commercial" mastering studio. I do a bit of
mastering for my friends in Nashville, but so far I've only worked with the
rough mixes, nothing that's made it onto a record.
The reason I'm so excited about DRC is that it transforms my
not-mastering-quality Klipschorns into something that is very accurate, or at
least measures so, and in my opinion is comparable to the high-end mastering
and mixing studios I've visited.
For those who don't know, Klipschorns use a folded horn to load a 15" woofer
and conventional horn-loaded midrange and tweeter into a cabinet that fits
tightly in the corner of the room. They have extremely high efficiency and
low distortion, but abysmal phase and frequency response. Luckily, these are
EXACTLY the things that DRC is designed to fix. Most speakers have 10% or so
distortion at low frequencies and moderate listening levels. As far as I
know, this can't be removed by any sort of electronic correction. With this
setup and some modest room treatment, I've got very low distortion, AND flat
frequency response.
-Ben Loftis
http://www.harrisonconsoles.com
http://www.studiooutfitters.com