[LAD] MIDI 2.0 is coming

Ralf Mardorf ralf.mardorf at alice-dsl.net
Tue Jan 29 18:10:47 CET 2019


On Tue, 29 Jan 2019 13:37:01 +0100 (CET), Jeanette C. wrote:
>why should I use Linux and LV2 plugins, if they don't work with my
>$1000 control keyboard? There's always hope.

From the link posted by Louigi:
"[snip] With previous MIDI feature additions, the challenge has often
been getting companies to actually implement it. Take MPE [1] as an
example – despite being officially adopted in 2018 to the MIDI spec,
only a handful of companies (like ROLI and Moog) have added it to
commercial products [snip]"

[1] From
https://www.midi.org/articles-old/midi-polyphonic-expression-mpe:

"[snip] Music making products (such as the ROLI Seaboard, Moog's
Animoog, and Apple's Logic) take advantage of this so that musicians
can apply multiple dimensions of finger movement control: left and
right, forward and back, downward pressure, and more. [snip]"

I mentioned the Animoog's touch screen feature in the LAU thread
about Dominique's DIY Theremin. A song of mine that is still in
progress has short parts with something similar to a Theremin, played
via touch screen using the Animoog. Right now it's impossible to
record this via MIDI. Maybe Logic could do it, but neither the iOS
nor the Linux software I'm using is able to record and play it back
via MIDI. I guess the Animoog doesn't even send the required data. I
had to record it as an audio track over and over again. However, even
the Animoog's special "keyboard" couldn't do the whole job: to get a
better spooky Theremin howling sound I needed to add a little bit of
a TalkBox effect and, important to this thread, I had to rework the
Theremin-like audio track using volume automation. If it had been
possible to use MIDI instead of an audio track, I could have moved
notes a little bit forward and backward instead of playing the part
again and again, but it still would have required some rework.
Programming the sound so that up and down finger movements either
produce the desired howling or allow volume control would have been
possible, but it would have required learning how to do it.

I purchased a lot of proprietary virtual synths with features that
neither old-fashioned digital nor analog synths provide. Sometimes
programming sounds with those new synths is easy, but often there's a
learning curve that IMO isn't worth the effort. IMO it's better to
spend time improving the skills to play a real instrument; that gains
more for making good music than learning how to program every
gimmick, which doesn't gain as much as people assume.

I'd like to get MIDI 2, but as pointed out in the QjackCtl GUI
thread, it would be more important if virtual synths would care about
e.g. MIDI 1 clock to sync delays, LFOs etc., something a lot of
proprietary synths already do, but that is still completely missing
on Linux. Host integration of virtual synths still needs to be
improved, especially for Linux FLOSS, but for proprietary software on
other OSes, too.
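For reference, syncing to MIDI 1 clock isn't rocket science: clock
(status byte 0xF8) is sent 24 times per quarter note, so a synth only
has to measure the spacing of the ticks to estimate the tempo and
retune its LFOs or delays. A rough sketch of the idea, with all names
invented and no real plugin or MIDI backend API used:

/* Sketch of a tempo-synced LFO from MIDI clock. MIDI clock (0xF8) arrives
 * 24 times per quarter note, so the tick spacing gives the tempo. All
 * names are invented for illustration; a real plugin would get the tick
 * timestamps from its host or MIDI backend. */
#include <stdio.h>

#define CLOCKS_PER_QUARTER 24.0

/* Called for every 0xF8 tick; returns the current tempo estimate in BPM. */
static double on_midi_clock(double tick_time_sec)
{
    static double last_tick = -1.0;
    static double bpm = 120.0;               /* fallback until two ticks seen */

    if (last_tick >= 0.0) {
        double quarter = (tick_time_sec - last_tick) * CLOCKS_PER_QUARTER;
        if (quarter > 0.0)
            bpm = 60.0 / quarter;
    }
    last_tick = tick_time_sec;
    return bpm;
}

/* LFO rate in Hz for a note length given in quarters (1.0 = quarter note). */
static double lfo_hz(double bpm, double quarters)
{
    return bpm / 60.0 / quarters;
}

int main(void)
{
    /* Simulate ticks at 120 BPM: one quarter note = 0.5 s, tick = 0.5/24 s. */
    double bpm = 120.0;
    for (int i = 0; i < 48; i++)
        bpm = on_midi_clock(i * (0.5 / 24.0));

    printf("tempo: %.1f BPM, dotted-eighth LFO: %.3f Hz\n",
           bpm, lfo_hz(bpm, 0.75));
    return 0;
}

A real implementation would of course smooth the estimate over
several ticks and handle Start (0xFA) and Stop (0xFC), but the point
stands: the data is already there in MIDI 1.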

IOW I'm sceptical that MIDI 2 will solve much, since easy-to-use
MIDI 1 features are already ignored, such as using MIDI clock to sync
the LFOs of synthesizers. It's comparable to politicians sharpening
laws that are already sharp enough, but suffer from other issues such
as bureaucracy or something else caused by reality. Sharpening a law
that isn't or can't be used doesn't solve the issue.

Regards,
Ralf

