[linux-audio-dev] plugin GUIs .. Mediastation, LADSPA ?

Benno Senoner sbenno at gardena.net
Wed Nov 19 01:35:00 UTC 2003


Disclaimer: I have not read the entire GUI thread, so please don't flame
me (but correct me) if I say nonsense.


Having written a few engines (midi player, audio player) and the
corresponding GUIs for the upcoming Lionstracs Mediastation keyboard,
we have faced the same old problem: we wanted the GUIs decoupled from
the engines, ideally allowing the engine to be controlled by more than
one GUI at the same time, with all GUIs automagically updating
themselves when the engine (or another GUI) changes its parameters.

I decided to adopt the simplest possible protocol. Perhaps it is a bit
inefficient compared to the most elegant solution, but I think the
simplicity of my model cannot be beaten that easily.
I'd like you folks to read the brief explanation of my protocol below
and comment on it, e.g. whether an even simpler solution exists, or
whether it has serious flaws (I don't think so :-) ).

Let's start with a simple scenario: a midi player which consists of a
GUI-less engine and a GUI.
It's basically a client/server system: GUIs are clients, engines are
servers.
The midi player accepts commands like

load filename.mid
start, stop, seek, get_engine_status

The method I use to communicate between the GUIs and the engine is SYSV
message queues, because they can handle multiple clients, but the API is
abstracted so that the underlying transport can be chosen arbitrarily.

The server does:

// opens a message port in server mode (creates the port)
// key identifies the port so that clients can find it
// and connect to it

int mcmd_open_server_port(int key, mcmd_info_t *mcmd_info);

key is a machine-wide unique identifier.
In my implementation it's just the SYSV IPC message queue key, but this
is not mandatory; the API can easily be adapted to use strings (e.g. for
the TCP/IP support which I will add soon).

Now, within its main loop, the server does:

// receives a message. If no message is pending in the queue,
// wait until a message arrives
int mcmd_receive(mcmd_info_t *mcmd_info, void *buffer, int buflen);

Based on the type of message received it then performs some action and
sends back a response to the client, which can be a simple ACK or a
message containing some payload.

It does so using this function:

int mcmd_send(mcmd_info_t *mcmd_info, void *buffer, int buflen, int destination);

The destination is a unique identifier of the client that sent the
message, so basically every server and client has a unique identifier,
which in my implementation is mapped to a SYSV IPC struct msgbuf.mtype
value (but this is only an implementation detail; as said, the
communication layer can be any form of IPC).
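To make this concrete, here is a minimal sketch of a server main loop
built on the three calls above. The command codes, the message struct
and the engine_*() helpers are illustrative placeholders; only the
mcmd_ calls are the ones declared above:

#define MCMD_KEY_MIDIPLAYER 0x4D49   /* arbitrary machine-wide key */

enum { CMD_LOAD_FILE = 1, CMD_START, CMD_STOP, CMD_GET_ENGINE_STATUS };

typedef struct {
    int  cmd;        /* one of the CMD_* codes above */
    int  sender;     /* unique id of the client, used as reply destination */
    char arg[256];   /* e.g. the filename for CMD_LOAD_FILE */
} cmd_msg_t;

void server_loop(void)
{
    mcmd_info_t port;
    cmd_msg_t   msg;
    int         status;

    /* create the port; clients find it via the well-known key */
    mcmd_open_server_port(MCMD_KEY_MIDIPLAYER, &port);

    for (;;) {
        /* blocks until a command arrives */
        mcmd_receive(&port, &msg, sizeof(msg));

        status = 0;
        switch (msg.cmd) {
            case CMD_LOAD_FILE: status = engine_load_file(msg.arg); break;
            case CMD_START:     status = engine_start();            break;
            case CMD_STOP:      status = engine_stop();             break;
            /* ... */
        }

        /* reply with a simple ACK carrying the status code */
        mcmd_send(&port, &status, sizeof(status), msg.sender);
    }
}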

There is an mcmd_receive_nowait() call too, which can be placed directly
inside a high-priority thread that does other work as well (e.g. a
midi/audio playing thread).
SYSV msg queues seem quite fast, a couple of usecs (or a few dozen
usecs) per call, so one could probably even put msgrcv() calls into an
audio processing loop that processes audio fragments of 1-2 msec. I'm
not sure whether there is some locking that might cause stalls.
If you are paranoid you could let an audio player accept external
commands via a lower-priority thread and then pass these commands to the
audio thread via a lock-free FIFO, or even better use lock-free FIFOs as
the transport method of the mcmd_ API.
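For the paranoid variant, a single-writer/single-reader ring buffer
between the two threads is all that is needed. A minimal sketch, reusing
the cmd_msg_t struct from the server sketch above (a production version
would add memory barriers on architectures that reorder stores, but the
idea is just two indices, each owned by exactly one thread):

#define FIFO_SIZE 64                 /* must be a power of two */

typedef struct {
    cmd_msg_t buf[FIFO_SIZE];
    volatile unsigned write_idx;     /* only advanced by the low-prio thread */
    volatile unsigned read_idx;      /* only advanced by the audio thread */
} cmd_fifo_t;

/* low-priority thread: push a command, returns 0 if the FIFO is full */
int fifo_push(cmd_fifo_t *f, const cmd_msg_t *m)
{
    unsigned next = (f->write_idx + 1) & (FIFO_SIZE - 1);
    if (next == f->read_idx)
        return 0;                    /* full: drop or retry later */
    f->buf[f->write_idx] = *m;
    f->write_idx = next;
    return 1;
}

/* audio thread: pop a pending command, never blocks */
int fifo_pop(cmd_fifo_t *f, cmd_msg_t *m)
{
    if (f->read_idx == f->write_idx)
        return 0;                    /* empty */
    *m = f->buf[f->read_idx];
    f->read_idx = (f->read_idx + 1) & (FIFO_SIZE - 1);
    return 1;
}

The low-priority thread blocks in mcmd_receive() and simply calls
fifo_push(); the audio thread calls fifo_pop() once per fragment and
handles whatever it finds there.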

So far so good; let's continue with the MIDI player scenario:

The GUI user wants to load a midi file.
The command sent to the server is load_file file.mid.
The server executes it (loads file.mid) and then responds with an ACK
(carrying a status code, so that the GUI knows whether loading
succeeded). The same goes for the start, stop, change volume of a midi
channel, etc. commands.
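On the GUI side this is a single request/reply round trip; roughly like
the sketch below. The port is assumed to have been opened in client mode
beforehand; my_id and MCMD_SERVER_DEST are placeholders for "my own
address" and "the engine's address", since only the server-side calls
are shown above:

#include <string.h>

/* GUI side: ask the engine to load a file and wait for the ACK */
int gui_load_file(mcmd_info_t *port, const char *path)
{
    cmd_msg_t msg;
    int status;

    msg.cmd    = CMD_LOAD_FILE;
    msg.sender = port->my_id;           /* so the engine knows whom to ACK */
    strncpy(msg.arg, path, sizeof(msg.arg) - 1);
    msg.arg[sizeof(msg.arg) - 1] = '\0';

    mcmd_send(port, &msg, sizeof(msg), MCMD_SERVER_DEST);   /* request */
    mcmd_receive(port, &status, sizeof(status));            /* blocking ACK */
    return status;                      /* 0 = loaded, nonzero = error */
}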
Now let's see how the GUI updates itself: we want the time display to
advance and the midi volume sliders to update automagically.

Basically the GUI has a timer callback that is called 10-20 times/sec.
It sends a GET_ENGINE_STATUS message to the server and the server
responds with an ENGINE_STATUS message. You can define your own
arbitrary message structs for this, because the protocol does not know
anything about the payload of messages.

This engine status message should contain the values that should be
exported to the GUI; in the midi player's case the current position
(midi ticks), the current values of the midi volumes, the muted channels
and so on.
If the payload gets too big, export flags that signal that certain
parameters in the engine have changed, and then let the GUI request an
additional info packet containing the values it needs.
The guideline should be to minimize the number of messages exchanged, so
in my specific case I have chosen to put all the needed parameters into
a single packet (100-200 bytes).
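For the midi player that single packet is just a small fixed-size
struct, something along these lines (the field names are purely
illustrative, the real struct will differ):

typedef struct {
    int            playing;             /* 0 = stopped, 1 = playing */
    unsigned int   position_ticks;      /* current song position in midi ticks */
    unsigned char  channel_volume[16];  /* current volume of each midi channel */
    unsigned short muted_channels;      /* bitmask, bit N = channel N muted */
    unsigned int   changed_flags;       /* bits telling the GUI which bigger
                                           parameter blocks changed since the
                                           last poll (request those separately) */
} engine_status_t;                      /* well below the 100-200 byte budget */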

Of course the GUI should do something like this to avoid unnecessary
repainting of elements (which could lead to a slow, flickering GUI):

if (curr_volume != old_volume) update_volume_slider(curr_volume);

In my example midi player you can see the volume sliders move when
there are volume changes in the player engine, because the GUI is
constantly fetching those values.
If you attach a second GUI to the engine, both GUIs update perfectly in
sync. If you move a slider on the first GUI it will move in sync on the
second GUI too (because GUI1 moves the slider and sends a set_volume
message to the engine, and during the next GUI update cycle of GUI2 this
value is fetched by GUI2 and its slider is moved).
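Putting it together, the GUI side of the update cycle is nothing more
than poll, compare, repaint. A rough sketch, using the illustrative
engine_status_t struct above; the update_*() calls stand for whatever
your toolkit provides:

static engine_status_t old_status;      /* copy of the last polled status */

/* timer callback, fired 10-20 times/sec by the GUI toolkit */
void gui_timer_callback(mcmd_info_t *port)
{
    cmd_msg_t       req;
    engine_status_t cur;
    int             ch;

    req.cmd    = CMD_GET_ENGINE_STATUS;
    req.sender = port->my_id;
    mcmd_send(port, &req, sizeof(req), MCMD_SERVER_DEST);
    mcmd_receive(port, &cur, sizeof(cur));

    /* only repaint widgets whose values actually changed */
    if (cur.position_ticks != old_status.position_ticks)
        update_time_display(cur.position_ticks);

    for (ch = 0; ch < 16; ch++)
        if (cur.channel_volume[ch] != old_status.channel_volume[ch])
            update_volume_slider(ch, cur.channel_volume[ch]);

    old_status = cur;
}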

Perhaps you might say this protocol is suboptimal because it sends
10-20 messages/sec even if the values do not change, and that it would
be better to let the server wake up the clients waiting for data.
Yes, that would probably be more elegant, but it would make things more
complex and the measurable gain would be almost zero (IPC message queues
are capable of sending tens or hundreds of thousands of msgs/sec, so the
10-20/sec used by my API is really negligible when measuring the overall
performance).

Plus, think about it: in the case of an audio/midi player GUI the song
pointer position must be updated frequently anyway (at least 10-20
times/sec so that it moves smoothly on screen), so even in the case
where the engine wakes up the GUI, the number of messages passed per
time unit is the same.

We will use the same API for LinuxSampler too, because in that case we
want a fully detached GUI and multiple ways to control it (who knows,
maybe LinuxSampler will some day run in expanders made of an embedded
x86 with a few buttons and a small LCD? :-) ).

I'm currently adding TCP/IP support too, via a proxy module which
allows you to run the GUIs on remote hosts and to manage multiple
engines (the headless midi player, sampler, etc.) through a single TCP
port (so that in the case of firewalled hosts you need to open/forward
only a single TCP port even if you run multiple engines on that host).
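The proxy only needs a tiny framing header so that one TCP stream can
carry traffic for several engines; something like this (just a sketch of
the idea, not the final wire format):

/* each message on the single TCP connection is prefixed with which
   local engine it is for and how long the payload is; the proxy then
   forwards the payload to that engine's local message port */
typedef struct {
    unsigned short engine_id;    /* which engine behind the proxy */
    unsigned short reserved;
    unsigned int   payload_len;  /* number of payload bytes that follow */
} proxy_frame_hdr_t;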
The protocol will be published soon (probably via LinuxSampler CVS;
I'll keep you posted when it's ready).

The question is: would it make sense to use such a protocol for LADSPA ?

For example, LADSPA could contain a definition that tells the host what
kind of parameters it must export to the GUI via the message passing
API.
At that point the GUI can be implemented using any GUI toolkit plus the
above message passing API.
The GUI would run in a separate process, so there would be no conflicts
with the host, and you get GUI automation for free: just change the
internal values in the host and the GUI will automatically update
itself.
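Concretely, such a definition could be as small as a table that the
plugin exports and the host walks when filling its status packets. This
is a purely hypothetical sketch; nothing like it exists in the real
LADSPA header today:

/* hypothetical: what a plugin could export so the host knows which
   parameters to publish over the message-passing API */
typedef struct {
    const char *name;            /* label shown by the external GUI */
    int         port_index;      /* LADSPA control port it maps to */
    float       min, max;        /* display range hint for the GUI */
} gui_param_desc_t;

typedef struct {
    int                     param_count;
    const gui_param_desc_t *params;      /* table of exported parameters */
} gui_export_desc_t;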

In conclusion: do you see any drawbacks in using this stuff in LADSPA ?
Better solutions ?

If you ask me, the GUI toolkit mixing is a kludge, and keeping the GUI
stuff out of LADSPA by separating the two things via a client/server
model can improve code quality, because you are not allowed to mix GUI
stuff and engine stuff in the same thread, which forces the programmer
to be a bit more disciplined.
Yes, it is a bit more work to write apps that way, but the reward is
quite big, trust me.
I think that without a client/server model writing the software for the
Mediastation would be a big PITA.
Perhaps the protocol is suboptimal, but it perfectly suits our needs.

cheers,
Benno
http://www.linuxsampler.org

Steve Harris wrote:

>On Tue, Nov 18, 2003 at 02:50:37 -0500, Paul Davis wrote:
>  
>
>>if we avoid the goal of having the host have some control over the
>>plugin GUI window, then this isn't necessary, and the design i
>>implemented last night will work without any special support from X or
>>toolkits. it does require a small library with what steve has termed
>>"toolkit specific hacks" - its not so much that as enumeration of
>>supported toolkits. however, the goal of the host having some control
>>over the plugin GUI window seems rather desirable.
>>    
>>
>
>If the host has no control over the UI then I'm not quite sure what the
>point is. It saves on processes, but the're pretty cheap on UNIX anyway.
>
>We need some kind of IPC (ITC?) between the host and UI anyway, to handle
>serialisation of control changes.
>
>- Steve
>
>  
>




