On Tue, Jul 21, 2015 at 9:53 AM, Takashi Sakamoto
<o-takashi@sakamocchi.jp> wrote:
> I also know that MCP and HUI are combinations of MIDI messages. What
> I'm concerned about is the sequence. If the sequence requires device
> drivers to keep state (i.e. the current message has a different meaning
> depending on previous messages), that would mean a lot of work for me.
> That is the sense in which I use the word 'rule'.
First of all, building support for *interpreting* incoming MIDI into a
device driver is probably a bad idea on a *nix-like OS. It is just the
wrong place for it. If there's a desire to have something act on
incoming MCP or HUI messages, that should be a user-space daemon
receiving data from the driver.
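As a rough sketch of that split: the daemon, not the driver, reads raw bytes and carves them into complete MIDI messages before deciding what any of them mean. The function name and the subset of message types handled here are my own choices for illustration, not anything taken from ALSA or from an MCP/HUI specification:

```python
def split_midi_messages(data):
    """Split a raw byte stream into complete MIDI channel-voice messages.

    A deliberately minimal sketch: handles only 3-byte messages
    (note on/off, poly pressure, control change, pitch bend) and
    2-byte messages (program change, channel pressure); running
    status and system messages are skipped for brevity.
    """
    msgs = []
    i = 0
    while i < len(data):
        status = data[i]
        if 0x80 <= status < 0xC0 or 0xE0 <= status < 0xF0:
            size = 3   # note on/off, poly pressure, CC, pitch bend
        elif 0xC0 <= status < 0xE0:
            size = 2   # program change, channel pressure
        else:
            i += 1     # anything else is ignored in this sketch
            continue
        if i + size <= len(data):
            msgs.append(tuple(data[i:i + size]))
        i += size
    return msgs
```

A real daemon would wrap this in a loop reading from a rawmidi device or sequencer port and dispatch each parsed message to protocol-specific handlers, keeping all protocol state in user space.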
This means that the driver doesn't care about anything other than
receiving a stream of data from the hardware and passing it on, in the
same order as it was received, to any processes that are reading from
the device. The device driver does not "keep state" with respect to
incoming data, only the state of the hardware.
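To make the division of labor concrete, here is a toy model of the driver's role (class and method names are invented for illustration): it only queues bytes in arrival order and hands them to readers, interpreting nothing.

```python
from collections import deque


class RawMidiPassThrough:
    """Toy model of a rawmidi driver's job: buffer bytes in arrival
    order with no interpretation and no protocol state."""

    def __init__(self):
        self._fifo = deque()

    def irq_receive(self, chunk):
        # Called with raw bytes as they arrive from the hardware.
        self._fifo.extend(chunk)

    def read(self, n):
        # Called by a reading process; returns up to n bytes,
        # in exactly the order they were received.
        count = min(n, len(self._fifo))
        return bytes(self._fifo.popleft() for _ in range(count))
```

The only state here is the buffer itself, i.e. the state of the transport, never the meaning of the bytes.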
> Well, when DAWs and devices successfully establish a 'hand-shake',
> don't they have to maintain state, as in TCP?
Discovery in MCP and HUI is a much simpler system. In many
implementations, there is no hand-shake or discovery at all: the
device just powers up and can start receiving and transmitting
immediately. There is no state to maintain, no keep-alive protocol.
About the only thing that can sometimes be gained from the handshake
is determining the type/name of the device, but this is actually
rarely delivered.
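Even when a name is delivered, extracting it is a one-shot user-space job, not driver state. A sketch under an explicitly hypothetical SysEx layout (these framing bytes are not taken from any MCP or HUI document): `F0`, a 3-byte manufacturer ID, a model byte, ASCII name bytes, `F7`.

```python
def parse_device_name(sysex):
    """Extract an ASCII device name from a hypothetical inquiry reply.

    Assumed layout (illustration only, not a published spec):
    F0 <3-byte manufacturer id> <model id> <ASCII name bytes> F7
    """
    if not (len(sysex) >= 6 and sysex[0] == 0xF0 and sysex[-1] == 0xF7):
        return None
    payload = sysex[5:-1]  # skip F0, manufacturer id, model id
    # Keep only printable ASCII, dropping any stray data bytes.
    return bytes(b for b in payload if 0x20 <= b < 0x7F).decode("ascii")
```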
> Currently, ALSA middleware has no framework for Open Sound Control.
Let's hope it remains that way.