[linux-audio-dev] Project: modular synth editor

Dave Robillard drobilla at connect.carleton.ca
Fri Jan 16 20:32:17 UTC 2004


On Thu, 2004-01-15 at 14:46, Mike Rawes wrote:

> It's when you have several connected plugins forming a single module
> (I'll roll out my old example of an ADSR, a few DCOs, DCA etc) that GUI
> generation really falls down. An alternative is that used in gAlan
> (http://galan.sourceforge.net), where you build up your UI and hook it
> up to whatever ports you like. Or PD - which lets you do ... just about
> anything.

I suppose you have a point; nice UIs for a subpatch would be a good
thing.  Not a very complicated thing to implement in a host though,
especially in something like AMS, which can already control pretty much
any parameter (right now from MIDI).  The subpatch would literally
declare "these things should be on the master subpatch GUI" (just a big
list of params), and the app would draw them as it pleases, just as it
does for normal LADSPA plugins now.
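
Something like the following is what I'm picturing.  Sketch only - the
struct and field names (exported_control and so on) are invented, not
any existing host API - but note the host-facing side is nothing more
than the plugin's ordinary LADSPA port hints:

  /* Sketch: a subpatch keeps a flat list of (plugin instance, port)
   * pairs it wants shown on the master GUI, and the host draws each
   * one like a normal LADSPA control port, using the plugin's own
   * range hints.  All names here are hypothetical. */

  #include <ladspa.h>

  typedef struct {
      const LADSPA_Descriptor *plugin;   /* plugin inside the subpatch */
      LADSPA_Handle            instance; /* its instantiated handle    */
      unsigned long            port;     /* control input port index   */
      const char              *label;    /* name to show on the master */
  } exported_control;

  typedef struct {
      exported_control *controls;
      unsigned long     n_controls;
  } subpatch_exports;

  /* Host side: build widgets however it likes, then bind them. */
  void host_build_subpatch_gui(const subpatch_exports *ex,
                               LADSPA_Data            *knob_values)
  {
      unsigned long i;
      for (i = 0; i < ex->n_controls; ++i) {
          const exported_control     *c = &ex->controls[i];
          const LADSPA_PortRangeHint *h =
              &c->plugin->PortRangeHints[c->port];
          /* ...make a slider/knob for c->label using h->LowerBound,
           * h->UpperBound and h->HintDescriptor... */
          c->plugin->connect_port(c->instance, c->port, &knob_values[i]);
          (void)h;
      }
  }

Whether a given control gets drawn on the master GUI at all, or shows
up as a CV input port on the subpatch instead, could then just be a
per-control flag in that list.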

Subpatch GUIs are a good idea I hadn't thought of, and doing something
like the above (assuming I'm making any sense at all) doesn't seem very
difficult, nor does it have any major problems that I can see.
Comments?

> > The ideas ams/ssm have used are pretty good, but I would add the
> > capability of choosing on an individual control basis whether the
> > control was in the GUI, or exported as a port.  This is definitely
> > something severely limiting about ams for me right now... (sure, you
> > can use MIDI out to get around it, but ew).
> > 
> > > Another issue is that LADSPA plugins cannot be used for sample
> > > playback, as loading samples is impossible unless you stream the
> > > data in a la SooperLooper (http://essej.net/sooperlooper).
> > 
> > This is where the simplicity comes in.. we're not making a sampler :).
> > 
> > Use an external sampler, if you want the audio in the synth, bring it
> > in as audio.
> 
> I agree - for most cases a separate sampler is fine. Composite
> samples-plus-filters-and-stuff instruments might be trickier, but doable
> (just send the Note On or whatever to both the sampler and the synth).

Exactly.  That's not that much trickier - hell, I do it all the time
right now :)  (alsa seq is awesome..)
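
(In case anyone hasn't played with it: the whole trick is just port
subscriptions - one source port can feed any number of destinations,
and the sequencer delivers each event to all of them.  Totally untested
sketch, and the 64:0 / 128:0 client:port numbers are made up; use
whatever "aconnect -l" shows on your box.)

  #include <alsa/asoundlib.h>

  int main(void)
  {
      snd_seq_t *seq;
      int port;

      if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0) < 0)
          return 1;
      snd_seq_set_client_name(seq, "splitter");

      port = snd_seq_create_simple_port(seq, "out",
                SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
                SND_SEQ_PORT_TYPE_MIDI_GENERIC);

      /* same source port, two subscriptions - every event (Note On,
       * whatever) is delivered to both destinations */
      snd_seq_connect_to(seq, port, 64, 0);   /* hypothetical sampler */
      snd_seq_connect_to(seq, port, 128, 0);  /* hypothetical synth   */

      /* ... send events on `port' as usual ... */
      snd_seq_close(seq);
      return 0;
  }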

> [...]
> 
> > > > Anybody with more ideas/criteria for the
> > > > perfect modular synth? 
> > > 
> > > OK here I go: From my notes (and fuzzy memory :), what I'd like is a
> > > simple cli program that communicates via some form of IPC.
> > > OpenSoundControl (OSC) would seem to be a good mechanism, as it's
> > > quite widely supported and should be sufficient for all that would
> > > be required.[chop]
> > 
> > Making an "interface" or communication protocol, IPC, whatever you
> > wanna call it, I think is way way way overshooting what needs to be
> > done.  What's the point really?  The overhead is definitely not
> > worth it (realtime audio..).  The engine should be just code,
> > straight up.  We need to design the architecture of the _program_,
> > not an interface.
> 
> The engine would be straight up code. However, the interface would still
> need to be defined as a set of functions for manipulating the engine.
> All the OSC/IPC/whatever bit is for is to allow this manipulation to be
> done by a program that is entirely separate from the engine. It is
> likely that each OSC message (or whatever) would map one-for-one to each
> function.
> 
> The overhead is minimal - it's not being used to stream audio, or
> anything even close to that bandwidth.
>  
> > If you really wanted a UI like what you described, it could be
> > implemented as, well, a UI plugin.  This is my idea anyway, if you
> > have some good rationale for a system like that please elaborate
> 
> What I was suggesting was not a *user* interface: I wouldn't expect the
> user to control it directly by sending and receiving OSC messages!
> 
> For an excellent example of what I'm getting at, have a look at ecasound
> (http://eca.cx) - a multitrack audio engine. This is controlled by the
> Ecasound Control Interface (ECI) which can be used by any number of UIs,
> both graphical and text based, to control a common engine (ecasound
> itself) for a variety of purposes.
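
(Just so we're talking about the same thing, I assume you mean roughly
this.  Sketch only - liblo is just one OSC library that could do it,
and both the "/engine/connect" path and the engine_connect() call are
invented placeholders; the point is the one-for-one mapping, one
message per engine function.)

  #include <lo/lo.h>

  extern int engine_connect(const char *src_port, const char *dst_port);

  static int connect_handler(const char *path, const char *types,
                             lo_arg **argv, int argc,
                             lo_message msg, void *user_data)
  {
      /* "/engine/connect" with two string args -> one engine call */
      engine_connect(&argv[0]->s, &argv[1]->s);
      return 0;
  }

  int main(void)
  {
      lo_server s = lo_server_new("7770", NULL);   /* arbitrary port */
      lo_server_add_method(s, "/engine/connect", "ss",
                           connect_handler, NULL);
      for (;;)
          lo_server_recv(s);   /* block, dispatch, repeat */
      return 0;
  }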

But.. why?  Any "interface" (other than just functions/classes that are
Just There) is a LOT of added complexity.  Perhaps more important is
the overhead, 'cause there's going to be quite a bit of it.

The million dollar question: why?

Frontends can be built as part of the project (as ALSA Patch Bay does,
for example, with both FLTK and gtkmm interfaces).  I completely fail
to see the point of sending "messages" around just to have a UI.


> That's the idea - simplicity. I wouldn't want any (G)UI code anywhere
> near the engine. The engine should know nothing of knobs, sliders or
> whatever, just what it receives via its inputs.

The GUI code need not be anywhere remotely close to the engine code; a
message-passing interface isn't necessary to accomplish that.
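
Something like this is all I mean - a plain header the frontends link
against.  Every name here is invented, it's just to show the shape:

  /* engine.h -- sketch only; the "interface" is ordinary functions
   * that any frontend in the tree calls directly. */

  typedef struct engine engine_t;

  engine_t *engine_new(unsigned long sample_rate);
  int       engine_add_plugin(engine_t *e, const char *ladspa_label);
  int       engine_connect(engine_t *e, const char *src_port,
                                        const char *dst_port);
  void      engine_run(engine_t *e, unsigned long nframes);
  void      engine_free(engine_t *e);

A gtkmm (or fltk, or ncurses) frontend just calls engine_connect() from
a button callback.  The GUI still knows nothing about DSP and the
engine knows nothing about widgets; the separation lives at the header,
not at a socket.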



