Hi Paul and Len,
Thanks for your replies.
On Jul 20 2015 22:41, Paul Davis wrote:
> On Mon, Jul 20, 2015 at 9:33 AM, Takashi Sakamoto
> <o-takashi(a)sakamocchi.jp> wrote:
>> Well, are there some developers who have enough knowledge about the
>> MIDI messaging rule for Mackie Control or Mackie Human User Interface
>> (HUI)?
> not sure what you mean by "rule". I'm intimately familiar with both
> MCP and HUI.

Great.

>> As far as I know, for these models, there are three types of
>> converters: usual MIDI messages such as Control Change (CC), Mackie
>> Control, and Mackie Human User Interface, while I have little
>> knowledge about the latter two types.
> MCP and HUI both consist of "usual MIDI messages". There are a couple
> of sysex messages used for hand-shaking + discovery; everything else
> is just normal CC and note messages.

I also understand that MCP and HUI are combinations of MIDI messages.
What concerns me is the sequence. If the sequence requires device
drivers to keep state (i.e. the current message has a different meaning
depending on previous messages), it means much more work for me. That
is the sense in which I used the word 'rule'.

Well, once DAWs and devices successfully finish the 'hand-shaking', must
they maintain state, as in TCP? And during 'discovery', must devices
retrieve information from the DAWs? Furthermore, are transactions (sets
of requests/responses) used in this 'rule'?
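
For example, if such a hand-shake is required, I imagine the driver (or
a converter layer) has to keep per-device state roughly like the sketch
below. The sysex bytes and state names are only placeholders of mine,
not the real Mackie sequence:

#include <stdint.h>
#include <string.h>

enum handshake_state {
        HS_IDLE,        /* nothing exchanged yet */
        HS_QUERIED,     /* host has sent its device query */
        HS_CONNECTED,   /* surface answered; normal messages follow */
};

struct surface {
        enum handshake_state state;
};

/* Placeholder sysex prefix; the real vendor/device bytes differ. */
static const uint8_t QUERY_REPLY[4] = { 0xf0, 0x00, 0x00, 0x66 };

/* Feed one complete sysex message and update the stored state. */
static void surface_feed_sysex(struct surface *s,
                               const uint8_t *msg, size_t len)
{
        if (len >= sizeof(QUERY_REPLY) &&
            memcmp(msg, QUERY_REPLY, sizeof(QUERY_REPLY)) == 0 &&
            s->state == HS_QUERIED)
                s->state = HS_CONNECTED;
}
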
On Jul 21 2015 01:25, Len Ovens wrote:
> It is (as Paul has said) straight MIDI. The best guide I know of is
> the "Logic Control User's Manual" from 2002. The MIDI implementation
> starts on page 105. The only thing maybe a bit odd is that there are
> encoders that use CC increment and decrement instead of straight
> values, but any sw written for these surfaces is aware of it.

It's nice, thanks. But the metering is one of my headaches...
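
About the encoders: in case my converter ever has to interpret the
relative CC values itself, I imagine something like the sketch below.
The sign-and-magnitude form with bit 0x40 is only my assumption; I
still have to confirm it against the manual:

#include <stdint.h>

/*
 * Decode one relative (increment/decrement) encoder CC value, assuming
 * the common sign-and-magnitude form where bit 0x40 selects the
 * direction. The exact encoding still has to be checked in the manual.
 */
static int encoder_delta(uint8_t value)
{
        int steps = value & 0x3f;               /* number of detents turned */

        return (value & 0x40) ? -steps : steps; /* 0x40 set: reverse (assumed) */
}
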
On Jul 21 2015 01:25, Len Ovens wrote:
> You will note the use of pitchbend for levels. CC has only 127 values
> which can give "zipper" artifacts. If using CC, the values need to be
> mapped to dB per tick and/or have SW smoothing. The top 50 dB of the
> range are most important.

I think you mean the rough approximation formula used in acoustics
engineering for human perception (i.e. ISO 226:2003).
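
If I end up with plain CC for the levels, I suppose the mapping would
look roughly like the sketch below, giving most of the 127 steps to the
top 50 dB as you suggest. The breakpoints are arbitrary placeholders of
mine:

/*
 * Map a 7-bit CC value to a gain in dB, spending most of the 127 steps
 * on the top 50 dB. The breakpoints are arbitrary placeholders, not a
 * tuned curve.
 */
static double cc_to_db(unsigned int cc)          /* cc in 0..127 */
{
        if (cc == 0)
                return -144.0;                   /* treat 0 as "off" */
        if (cc >= 27)
                return -50.0 + (cc - 27) * 0.5;  /* 100 steps over the top 50 dB */
        return -90.0 + (cc - 1) * (40.0 / 26.0); /* coarse steps below -50 dB */
}
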
On Jul 21 2015 01:25, Len Ovens wrote:
> You get to make up your own MIDI map is what it finally comes down to.
> OSC might be a better option as the values can be floats and there is
> no limit to the number of controls (MIDI has only 127 CCs and some of
> those are reserved).

Currently, ALSA middleware has no framework for Open Sound Control; it
just has implementations for MIDI-like messages. For now, I use the
rawmidi interface for my purpose. The MIDI messages will then be
available for userspace applications to read via the ALSA sequencer
functionality.
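
Concretely, I expect userspace to be able to read the bytes with
alsa-lib roughly like the example below (or through the sequencer API
on top of it). The device name "hw:1,0,0" is only an example:

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
        snd_rawmidi_t *in = NULL;
        unsigned char buf[256];
        ssize_t len;

        /* "hw:1,0,0" is only an example device name. */
        if (snd_rawmidi_open(&in, NULL, "hw:1,0,0", 0) < 0)
                return 1;

        while ((len = snd_rawmidi_read(in, buf, sizeof(buf))) > 0) {
                for (ssize_t i = 0; i < len; i++)
                        printf("%02x ", buf[i]);
                printf("\n");
        }

        snd_rawmidi_close(in);
        return 0;
}
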
Thanks
Takashi Sakamoto