[LAU] Questions about LV2

Fons Adriaensen fons at linuxaudio.org
Wed May 15 23:10:28 UTC 2013


On Wed, May 15, 2013 at 01:01:02PM -0700, J. Liles wrote:

> You've seen the consequences of these design decisions in Rui's response.
> As extensible and awesome as your design may be on paper, the end result is
> that users get overly fancy, inconsistent (and probably slow) GUIs that
> fiddle with parameters through hidden channels and have poor accessibility.
> I think that is a real problem.

Since everybody seems to air his/her opinions about this, I'll add mine.

Considering that the DSP and GUI parts of a plugin could end up in
different places (on separate machines), and that in that case the
communication will be asynchronous no matter how it is presented,
whatever API is designed to support it should make it clear that it
*is* asynchronous, even if it doesn't have to be when the two parts
happen to be in the same process. Which probably means that it should
be based on exchanging messages.
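
To make that concrete, here is a minimal sketch (entirely my own
invention, not any existing or proposed LV2 API) of what an explicitly
asynchronous control interface could look like. The same pair of types
would be presented to both the DSP and the GUI component:

  #include <stdint.h>

  /* An opaque message, private to the plugin. The host never
     interprets the body, it only moves the bytes. */
  typedef struct
  {
      uint32_t size;    /* size of body in bytes       */
      uint32_t type;    /* plugin-defined message type */
      char     body [];
  } plugin_msg;

  /* Interface the host presents to each component. send() only
     queues the message and returns - there is no reply and no
     guaranteed delivery time, so the asynchronous nature is
     visible in the API itself. */
  typedef struct
  {
      void  *host_handle;
      void (*send) (void *host_handle, const plugin_msg *msg);
  } plugin_comm;

  /* Callback provided by each component, called by the host
     whenever a message from the peer component has arrived. */
  typedef void (receive_cb) (void *comp_handle, const plugin_msg *msg);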

The next question is who should take care of transporting such
messages, and I'd say it should be the host if DSP and GUI are on the
same machine, or the hosts (plural) if they are not, using whatever
mechanism they like. In both cases the interface presented to the two
plugin components should be the same.
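
In the same-process case that transport can be as trivial as a queue
drained once per GUI cycle, while two networked hosts would use
sockets instead - neither plugin component would have to change. A
rough sketch, reusing the invented types from above (a real
implementation would copy the message bytes rather than store the
caller's pointer):

  #define QUEUE_LEN 256

  typedef struct
  {
      const plugin_msg *slot [QUEUE_LEN];
      unsigned          head, tail;
  } msg_queue;

  /* Host-side send(): just enqueue, never block, never interpret. */
  static void host_send (void *handle, const plugin_msg *msg)
  {
      msg_queue *q = (msg_queue *) handle;
      unsigned   n = (q->head + 1) % QUEUE_LEN;
      if (n != q->tail)    /* drop the message if the queue is full */
      {
          q->slot [q->head] = msg;
          q->head = n;
      }
  }

  /* Host-side pump: deliver everything queued by one component to
     the other, e.g. once per GUI refresh or process() call. */
  static void host_pump (msg_queue *q, receive_cb *rx, void *comp_handle)
  {
      while (q->tail != q->head)
      {
          rx (comp_handle, q->slot [q->tail]);
          q->tail = (q->tail + 1) % QUEUE_LEN;
      }
  }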

The next question is whether the host(s) should interpret such
messages, and my answer to that is a definite *no*. IMHO the DSP-GUI
communication is private to the plugin. If the plugin designer wants
to expose some or all control ports for automation, MIDI binding or
OSC mapping, he/she should do that explicitly. The mere existence of
e.g. a slider on the GUI and a corresponding control port on the DSP
side should not enable the host to access the control port. This of
course assumes that all plugins provide their own GUI, and that it
will never be the host's task to create a GUI for a plugin (unless the
plugin explicitly asks for this to happen, in which case it has to
supply the required information).
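
What 'explicitly' could mean in practice: the plugin publishes a list
of the control ports it wants the host to see, and only those. Again
purely invented, just to illustrate the idea:

  enum
  {
      EXPOSE_AUTOMATION = 1,
      EXPOSE_MIDI       = 2,
      EXPOSE_OSC        = 4
  };

  typedef struct
  {
      const char *symbol;         /* e.g. "gain"             */
      float       min, max, def;  /* range and default value */
      unsigned    flags;          /* which bindings to allow */
  } exposed_port;

  /* A slider that exists on the GUI but does not appear in this
     list remains completely private to the plugin. */
  static const exposed_port exported [] =
  {
      { "gain", -60.0f, 10.0f, 0.0f, EXPOSE_AUTOMATION | EXPOSE_MIDI }
  };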

Related questions are whether automation, MIDI binding and OSC mapping
should take place at the DSP side or the GUI side. MIDI binding makes
most sense at the GUI side - in most cases the user will want to map
e.g. a slider to some HW controller, and since only 128 discrete
steps (0 to 127) are available it makes sense to include the
position-to-value mapping of the slider in the MIDI binding, and you
can't expect 'sample accurate' control via MIDI anyway. And if the DSP
and GUI are on separate machines, the user and his MIDI HW are
probably near the one that runs the GUI, while the DSP part could be
in a machine room on another floor. OSC mapping could make sense at
the DSP side.
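
To illustrate why the slider's mapping belongs in the binding (the
numbers are invented): take a gain slider covering -60 to +10 dB.
Mapping the 7-bit CC value linearly in dB - i.e. through the slider's
own position-to-value curve - spends the 128 available steps evenly
over the usable range instead of wasting most of them near full scale:

  #include <math.h>

  /* Map a MIDI CC value (0..127) to a linear gain factor, using
     the slider's own curve: linear in dB over -60..+10 dB. */
  static float cc_to_gain (int cc)
  {
      float pos = cc / 127.0f;           /* slider position, 0..1 */
      float db  = -60.0f + 70.0f * pos;  /* linear in dB          */
      return powf (10.0f, db / 20.0f);   /* dB to linear gain     */
  }
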
Automation is a dubious case. In systems such as Meyer Sound's
D-Mitri, all automation is done client side, i.e. by the CueStation
software, which in our context corresponds to the GUI. Even in that
case it could be 'sample accurate' if desired, but I can't see any
case where automation has to be sample accurate - if it is used for
anything other than replaying manual input it becomes something
entirely different that probably requires its own solution.

Probably heresy in some people's minds, but so be it.

Ciao,


-- 
FA

A world of exhaustive, reliable metadata would be a utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)


