On Wednesday 04 December 2002 19.24, Steve Harris wrote:
[...]
> In a modular system, each oscillator etc. is monophonic, and you
> clone whole blocks of the system to make it polyphonic. Also, pitch
> is controlled via CV control and gate, which doesn't map well to
> MIDI style note events.

Another reason why having those events at all is a bad idea...
Note that with an event based system (or a callback based system -
but not a shared variable based system, such as LADSPA), you still
actually have *events* for everything. It's just a matter of
interpretation. If you want to act like a MIDI synth, just interpret
the first velocity change to a non-zero value as NoteOn, with a
velocity of the argument value. Then wait for a velocity change to 0,
and treat that as NoteOff. (Or rather, NoteOn with vel == 0, which is
what most controllers and sequencers send, unfortunately. :-/ )
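
In rough C, that interpretation could look something like this; the
event struct and the names in it are made up for the example, not
taken from any existing API:

#include <stdio.h>

/* Hypothetical event type; only control changes exist, and
 * NoteOn/NoteOff are a matter of interpretation. */
typedef struct {
    unsigned voice;    /* which voice the change targets       */
    unsigned control;  /* control index; 0 == "velocity" here  */
    float    value;    /* new control value                    */
} Event;

#define CTRL_VELOCITY 0

/* First change to non-zero velocity => NoteOn; change to 0 => NoteOff. */
static void interpret(const Event *ev, int *gate)
{
    if (ev->control != CTRL_VELOCITY)
        return;
    if (!*gate && ev->value > 0.0f) {
        *gate = 1;
        printf("NoteOn, vel %.2f, voice %u\n", ev->value, ev->voice);
    } else if (*gate && ev->value == 0.0f) {
        *gate = 0;
        printf("NoteOff, voice %u\n", ev->voice);
    }
}

int main(void)
{
    Event evs[] = { { 0, CTRL_VELOCITY, 0.8f },
                    { 0, CTRL_VELOCITY, 0.0f } };
    int gate = 0;
    unsigned i;
    for (i = 0; i < sizeof evs / sizeof evs[0]; ++i)
        interpret(&evs[i], &gate);
    return 0;
}
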
> Sure, you could coerce the described system into a modular
> viewpoint, but there would be a lot of overhead and needless
> complexity.

Well, all I have to say on that is that I think it's a bad idea to
draw a strict line between the two kinds of systems. I think one API
that handles both would be a lot more like real analog stuff, where
you don't really have to worry about protocols and APIs - it's all
just analog signals. I'd like to be able to use virtual studio style
plugins inside my modular synth networks, and vice versa, as far as
practically possible.

> > > I'm not sure what people's opinions on numbers of outputs are;
> > > obviously the number needs to be variable per instrument at
> > > development time, but I don't think it should be variable at
> > > instantiation time. That doesn't sound useful, and it would be
> > > hell to optimise.

> > I'd agree with this, but for a few exceptions:
> >
> > * Mixer plugin. I really do want it to have a variable number of
> >   inputs. I don't want to say 48 channels or 128 channels.

> Right, but do mixer plugins belong in an instrument API? Be good at
> one thing...

Yes - but if you remove the features needed to support things like
mixers, you also remove features that instruments may need. Not
everyone wants to build instruments out of lots of little plugins.
Besides, mixers need automation...
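
For what it's worth, the kind of exception I mean could look
something like this in C; the descriptor layout and all the names
here are hypothetical, not taken from LADSPA, VST or JACK:

/* Hypothetical descriptor where the host picks the input count at
 * instantiation time, within limits fixed at development time. */
typedef struct Plugin Plugin;   /* opaque instance type (illustrative) */

typedef struct {
    const char *label;
    unsigned    min_inputs;   /* smallest usable input count      */
    unsigned    max_inputs;   /* upper limit, fixed by the plugin */
    unsigned    outputs;      /* fixed at development time        */
    Plugin   *(*instantiate)(unsigned inputs, double sample_rate);
} PluginDescriptor;

/* A mixer becomes instantiate(48, rate) or instantiate(128, rate)
 * against one descriptor, instead of separate "mixer48"/"mixer128"
 * plugins, while ordinary instruments just set
 * min_inputs == max_inputs. */
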
> If you can give me an example of an instrument that benefits from
> variable numbers of i/o and doesn't deserve to be a standalone JACK
> client, then I'll agree with you.

Good point - if we were talking about a plugin API strictly designed
for synths. From what I've seen on the VST list, assuming that an
instrument is one thing and an effect is another is just plain wrong
- and that's still within a single API.
Anyway, a stand-alone JACK synth would still need a way to get
control events from a sequencer - just like synth plugins, or effect
plugins. Should we use ten different ways of controlling synths, or
does it perhaps make some sense to have a common protocol for that?
That said, the way Audiality is designed currently, the event system
is a totally independent subsystem. All you need to take part is an
event port and a process() callback. If JACK clients could
communicate events of this type, there would be no problem; we could
"just" decide on a common protocol and hack away.
> > Can we assume that all voices have a standard control mapped to
> > velocity? Or should the per-voice controls actually be another
> > set of controls, and each instrument needs to specify velocity,
> > pitchbend (and others)?

> Here you could make use of well known labels again:
> note-on-velocity, note-off-velocity, pitchbend etc. The host can
> map these to the normal MIDI controls if it likes.
>
> It probably makes sense to list the per-voice controls separately
> from the per-instrument ones. It's just important that they are the
> same at a fundamental level (otherwise you end up with very
> confused developers and code).

We have another terminology clash, I think... I thought this was
about per-voice control vs per-channel (MIDI style) control - but
now that you mention it, there's a third kind of control:
per-plugin-instance - or per-instrument - controls. I would agree
that these should have a list of their own, as they may in fact have
no relation whatsoever to per-channel or per-voice controls. (I
would assume they're meant for controlling voice allocation
parameters, routing, internal master effects and things like that.)
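
To make the distinction concrete, a control descriptor could carry
an explicit scope along with a well known label. A sketch, with
invented names rather than any existing API:

/* The three control scopes discussed above. */
typedef enum {
    CTRL_PER_VOICE,     /* velocity, per-note pitchbend, ...         */
    CTRL_PER_CHANNEL,   /* MIDI style channel controls               */
    CTRL_PER_INSTANCE   /* voice allocation, routing, master fx, ... */
} ControlScope;

typedef struct {
    const char  *label;   /* well known name: "velocity", "pitchbend"... */
    ControlScope scope;
    float        min, max, def;
} ControlDescriptor;

/* A host could map well known per-voice controls onto ordinary MIDI
 * velocity and pitch bend if it likes, and list per-instance
 * controls separately in its generic UI. */
static const ControlDescriptor example_controls[] = {
    { "velocity",  CTRL_PER_VOICE,    0.0f,  1.0f,  1.0f },
    { "pitchbend", CTRL_PER_VOICE,   -1.0f,  1.0f,  0.0f },
    { "polyphony", CTRL_PER_INSTANCE, 1.0f, 64.0f, 16.0f },
};
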
> > > My feeling is that just float + "arbitrary opaque binary data"
> > > is enough. The float can be augmented with hints, enumerations,
> > > whatever.

> > String is needed if we want to deal with filenames without a
> > custom GUI. Opaque binary data has no meaning to the host, and
> > the host is what manipulates controls. Where does it get that
> > data?

> The binary data comes from "GUIs". I still don't see how a generic
> UI can usefully deal with filenames; it can pop up a file selector
> dialogue, but the user doesn't know what they're looking for.

Another good point. Maybe not even file names are a sensible
candidate for a standard control type. But I still think it may make
some sense if the host knows what to do with the *files*, even if it
doesn't understand anything about their formats or naming
conventions.
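
As a sketch of the float + string + opaque idea (again, invented
types, just to show the shape, not a real API):

#include <stddef.h>

typedef enum {
    VALUE_FLOAT,    /* host editable; can be augmented with hints */
    VALUE_STRING,   /* e.g. a filename the host can browse for    */
    VALUE_OPAQUE    /* blob only the plugin's own GUI understands */
} ValueType;

typedef struct {
    ValueType type;
    union {
        float       f;
        const char *string;
        struct {
            const void *data;
            size_t      size;
        } opaque;
    } v;
} ControlValue;

/* The host can build a generic UI for VALUE_FLOAT, hand VALUE_STRING
 * to a file selector, and just store/restore VALUE_OPAQUE verbatim
 * (e.g. in presets) without understanding its contents. */
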
//David Olofson - Programmer, Composer, Open Source Advocate
.- Coming soon from VaporWare Inc...------------------------.
| The Return of Audiality! Real, working software. Really! |
| Real time and off-line synthesis, scripting, MIDI, LGPL...|
`-----------------------------------> (Public Release RSN) -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---