Fons Adriaensen wrote:
With the release of Jack 0.109 we now have a (hopefully)
stable API for midi-over-jack. This led me to consider what
would be required to modify Aeolus to use this system,
and I did not like the conclusions.
Is it a good idea to insert a 30-year-old data format
that mixes real-time and general data on a single stream
into a real-time audio processing API? I don't think so.
1. Note on/off and controller events can now be 'sample
accurate'. That's nice to have. But a) they are not and
never will be 'sample accurate' if they come from a HW
midi device, and b) if they are generated in software then
you can have, and for some applications you actually want,
much finer-grained timestamps and controller values.
So it's a solution that in one case is plain overkill,
and in the other is just not good enough.
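(For reference: the 'sample accurate' part rests on the
per-event frame offset that each jack-midi event carries,
as declared in <jack/midiport.h>:

    typedef struct _jack_midi_event
    {
        jack_nframes_t    time;   /* sample offset within the period */
        size_t            size;   /* number of bytes of MIDI data    */
        jack_midi_data_t *buffer; /* the raw MIDI bytes              */
    } jack_midi_event_t;

So the resolution is exactly one audio frame, no finer.)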
2. *All* MIDI data now has to pass through the RT audio
thread, even if it's not related to any RT operation
at all, and in many cases not even meant for the receiving
application. What is the poor process callback to do with
sysex memory dumps, sample downloads, and in most cases
even program changes? The only thing it can do is get rid
of this ballast as fast as possible, dumping it in a buffer
to be processed by some lower-priority thread, and hoping
it will not be blocked or forced to discard data. Forcing
a critical RT thread to waste its time in this way is IMHO
not good program design, not even if the overhead can be
tolerated. That sort of data should never go there.
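To make that concrete, here is roughly what every receiving
app is forced to write - a sketch only, assuming the calls
as documented in <jack/midiport.h> and a jack ringbuffer as
the hand-off to a lower-priority thread:

    #include <jack/jack.h>
    #include <jack/midiport.h>
    #include <jack/ringbuffer.h>

    /* Set up elsewhere in the app. */
    extern jack_port_t       *midi_in;     /* our midi input port     */
    extern jack_ringbuffer_t *sysex_fifo;  /* lock-free hand-off FIFO */

    static int process (jack_nframes_t nframes, void *arg)
    {
        void *buf = jack_port_get_buffer (midi_in, nframes);
        jack_nframes_t i, n = jack_midi_get_event_count (buf);

        for (i = 0; i < n; i++)
        {
            jack_midi_event_t ev;
            if (jack_midi_event_get (&ev, buf, i) || ev.size == 0)
                continue;

            if (ev.buffer [0] == 0xF0)
            {
                /* Sysex: dump it into the FIFO for the non-RT
                   thread, or drop it if the FIFO is full. */
                if (jack_ringbuffer_write_space (sysex_fifo) >= ev.size)
                    jack_ringbuffer_write (sysex_fifo,
                                           (const char *) ev.buffer,
                                           ev.size);
            }
            else
            {
                /* note on/off, controllers, ... */
            }
        }
        return 0;
    }

All that boilerplate just to throw the data somewhere else.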
Some sort of solution could be to let the MIDI backend do
at least part of the filtering - it could have several
jack-midi ports corresponding to a single raw midi input,
e.g. separate ports for note on/off, controller events,
and sysex messages. An app that wants to receive everything
can still do so without any significant overhead, it just
needs a few more ports.
And once e.g. note on/off and controller updates have
dedicated ports, there is no longer any need to keep the
MIDI format with all its limitations.
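To illustrate the kind of demultiplexing I mean - written
here as an ordinary client for clarity, though the point is
that the backend would do it - a sketch:

    #include <jack/jack.h>
    #include <jack/midiport.h>

    /* One raw input, three filtered outputs (registered elsewhere). */
    extern jack_port_t *raw_in, *notes_out, *ctrl_out, *sysex_out;

    static int process (jack_nframes_t nframes, void *arg)
    {
        void *in = jack_port_get_buffer (raw_in,    nframes);
        void *nb = jack_port_get_buffer (notes_out, nframes);
        void *cb = jack_port_get_buffer (ctrl_out,  nframes);
        void *sb = jack_port_get_buffer (sysex_out, nframes);

        jack_midi_clear_buffer (nb);
        jack_midi_clear_buffer (cb);
        jack_midi_clear_buffer (sb);

        jack_nframes_t i, n = jack_midi_get_event_count (in);
        for (i = 0; i < n; i++)
        {
            jack_midi_event_t ev;
            if (jack_midi_event_get (&ev, in, i) || ev.size == 0)
                continue;

            void *dst;
            switch (ev.buffer [0] & 0xF0)
            {
            case 0x80: case 0x90: dst = nb; break; /* note on/off */
            case 0xB0:            dst = cb; break; /* controllers */
            default:              dst = sb; break; /* sysex & rest */
            }
            jack_midi_event_write (dst, ev.time, ev.buffer, ev.size);
        }
        return 0;
    }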
This will be enough for some flames, so I'll stop here
:-)
I guess you will agree if I state that it would have been
better had you performed this thought experiment while the
API was still in a volatile stage. But that observation
doesn't help us, so let's not go down that path. (FWIW,
I didn't think about the jack-midi API either back then...)
A first note on sample accuracy: a FireWire device provides
midi events muxed with the audio samples. These midi events
carry the same timestamps as the audio samples, IOW firewire
devices can provide sample-accurate midi timing. Whether
they actually *do* is another question, but technically
it doesn't look that challenging. Also note that while the
data format might be MIDI, there doesn't have to be an
actual midi cable in place - e.g. a master controller
keyboard with built-in FireWire transport.
Hence your 1.a statement is incorrect (especially the
'never be' part).
Regarding 1.b: that smells like the 'control ports'
discussed on jack-devel a while ago. I think it's a matter
of what the goal is, and I think the goal is more 'simple
midi support' than complex control flow. The latter could
also be implemented, I guess, but that needs a rather
thorough discussion of the API.
Regarding 2: your idea of backend filtering seems
reasonable, but I don't like the idea of having multiple
backend ports for the same 'midi channel'. A more general
approach would IMHO be some sort of subscription system
where a client can specify what type of midi events it
wants at a port.
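In purely hypothetical code - none of these names exist in
jack - it could look like:

    #include <jack/jack.h>

    /* Hypothetical event classes a port can subscribe to. */
    enum JackMidiClass
    {
        JackMidiNotes       = 1 << 0,  /* note on/off         */
        JackMidiControllers = 1 << 1,  /* CC, pitch bend, ... */
        JackMidiProgram     = 1 << 2,  /* program changes     */
        JackMidiSysex       = 1 << 3   /* system exclusive    */
    };

    /* Hypothetical call: the server drops everything the port
       did not subscribe to, before the process callback ever
       sees it. */
    int jack_port_set_midi_subscription (jack_port_t *port,
                                         int classes);

    /* E.g. a synth that only cares about notes and controllers:
       jack_port_set_midi_subscription (in_port,
           JackMidiNotes | JackMidiControllers);               */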
Note that this doesn't solve the issue of the RT thread
having to deal with MIDI data. But that's unavoidable since
there is no concept of 'non-RT ports' in jack. And
introducing that is not really easy, I'd say. Or maybe
it is?
A general solution might be to have:
* an IS_RT flag on a port
* both an RT callback and a non-RT callback
* a third port type that implements the ideal fine-grained control API
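In rough, purely hypothetical code (again, none of these
names exist in jack):

    #include <jack/jack.h>

    /* 1. a port flag, in the style of JackPortIsInput */
    #define JackPortIsNotRT 0x80

    /* 2. a second callback, invoked from a lower-priority
          thread with the data the RT callback does not want
          to handle itself */
    typedef int (*JackNonRTCallback) (jack_nframes_t nframes,
                                      void *arg);
    int jack_set_non_rt_callback (jack_client_t *client,
                                  JackNonRTCallback cb, void *arg);

    /* 3. a third port type carrying timestamped (time, value)
          pairs instead of MIDI bytes - the fine-grained
          control data of point 1.b */
    typedef struct
    {
        double time;   /* could even be sub-sample accurate */
        float  value;
    } jack_control_event_t;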
But I don't want to be the person to design/implement this.
The number of corner cases is huge.
Greets,
Pieter