[LAD] midi bindings -- was Re: ZASFX is mean with my Qtractor XML session files

Mark D. McCurry mark.d.mccurry at gmail.com
Sat Jul 9 03:16:54 UTC 2016


On 07-09, Robin Gareus wrote:
> On 07/08/2016 05:26 PM, Mark D. McCurry wrote:
> 
> > Last time I counted the total possible parameters given the default
> > number of parts/kits/voices/etc there's a bit over 6,000,000 parameters.
> > 
> > Think about how big of a .ttl that would be :p
> > 
> > The way that MIDI learn works is:
> > 1. you select one of these many parameters
> >    (Middle click or CTRL+right click in the fltk/ntk UI)
> >    (if you're in a version where this doesn't launch correctly, use
> >    zynaddsubfx-ext-gui osc.udp://localhost:PORT)
> > 2. you send zyn an unbound MIDI CC
> > 3. zyn creates an internal mapping from MIDI CC -> internal parameter
> > 
> > This isn't visible as a standard lv2/vst/etc parameter, so it's quite
> > non-standard in that sense, but IMO it's a reasonable solution given the
> > scope of zyn.
> > 
> > So, remove your doubt and enjoy the non-standard solution to this problem.
> 
> +1
> 
> setBfree uses exactly the same approach, with a slight difference in
> step 2: the CC does not have to be unbound; one can re-assign in one step.
Interesting.
There are trade-offs with either CC-grabbing approach.
I'm undecided whether it would eventually make sense to add the
complexity of a user-selectable learning mode.
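
The learn flow quoted above (steps 1-3) could be sketched roughly as
follows. This is a toy model, not zyn's actual implementation; the
`MidiLearn` class and the parameter path are made up for illustration:

```python
class MidiLearn:
    """Toy model of MIDI learn: select a parameter, then bind it to
    the next unbound CC that arrives."""

    def __init__(self):
        self.bindings = {}   # CC number -> parameter path
        self.pending = None  # parameter waiting for a CC

    def select_parameter(self, path):
        # Step 1: the user picks a parameter (e.g. middle click in the UI).
        self.pending = path

    def on_cc(self, cc, value):
        # Steps 2-3: if a parameter is pending and this CC is unbound,
        # create the mapping; then dispatch to whatever is bound.
        if self.pending is not None and cc not in self.bindings:
            self.bindings[cc] = self.pending
            self.pending = None
        if cc in self.bindings:
            return (self.bindings[cc], value)  # would update the parameter
        return None

learn = MidiLearn()
learn.select_parameter("/part0/Pvolume")   # hypothetical parameter path
print(learn.on_cc(74, 100))                # binds CC 74, then routes the value
```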

When I first put MIDI learn in place, I imagined it would primarily
be used in two scenarios:
1. patch design
2. adding freeform changes during a piece (e.g. material closer to
   droning pieces)
(Since updates to running notes have been added, there's also the
use case of automating parameters within normal songs, though there's
certainly plenty of work left in figuring out how to interpolate
some parameter values nicely.)

With patch design in mind, my own (somewhat limited) workflow, and one
that I've seen in quite a few video tutorials, is to generate a short
sequence and loop it until you're satisfied with the result.
This means you can have a sequencer injecting notes and CCs as well
as external hardware controllers injecting events.

In this case, if you learn to the first observed CC (bound or unbound),
the sequencer could grab different parameters, while waiting for an
unbound CC makes it easier to isolate the CCs coming from physical
controllers.
Waiting for unbound CCs also makes it easy to queue up a series of
controls to be learned and then turn a series of physical knobs
(which zyn's librtosc supports).

Of course this approach does make it a bit more complex to unlearn or
reassign parameters, so there are cases where each approach is more
suitable.
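
The two policies can be contrasted in a small sketch. The
`allow_rebind` flag and parameter paths are invented for illustration;
neither zyn nor setBfree exposes an API like this:

```python
def learn_cc(bindings, pending, cc, allow_rebind):
    """Attempt one learn step under either policy.

    bindings:     dict mapping CC number -> parameter path
    pending:      parameter waiting to be learned
    allow_rebind: True  -> setBfree-style (grab the first CC seen,
                           re-assigning it if already bound)
                  False -> zyn-style (only bind to an unbound CC)
    Returns True if the binding was made.
    """
    if allow_rebind or cc not in bindings:
        bindings[cc] = pending
        return True
    return False

bindings = {1: "/part0/Pvolume"}
# zyn-style ignores the already-bound CC 1 coming from the sequencer...
assert not learn_cc(bindings, "/part0/Ppanning", 1, allow_rebind=False)
# ...while setBfree-style re-assigns it in one step.
assert learn_cc(bindings, "/part0/Ppanning", 1, allow_rebind=True)
```

The unbound-only policy protects queued learns from stray sequencer
CCs, at the cost of requiring an explicit unlearn step before a CC can
be reassigned.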

Hopefully that's not too much rambling,
--Mark
