(Either I forgot to comment on this yesterday, or I missed some
stuff...)
On Sunday 08 December 2002 06.52, Tim Hockin wrote:
> controls, two filter controls etc and making them slightly
> different and behave differently in each plug. Are we better to
> define the behavior, or at least offer a suggestion?
>
> 2D "address space":
>         Dimension 1: Channel
>         Dimension 2: Control
>
> Do we really want to have different controls for different
> channels?
Not different *sets of controls*, but (obviously) different
*instances* of them. See my earlier post about this Bay concept; a
Bay may have any number of Channels, but these Channels must all be
identical in terms of Control sets and other parameters, since those
parameters belong to the Bay object in the initialization stage.
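Something like this, perhaps. (A rough sketch only; all names and types are made up for illustration, not from any actual API draft.) The point is that the Control set is declared once, on the Bay, and every Channel of that Bay gets an instance of it:

```c
#include <assert.h>

/* Hypothetical sketch of the Bay idea: all Channels in a Bay share
 * one Control set, declared once on the Bay descriptor. */
typedef struct XAP_control_desc
{
        const char      *name;          /* e.g. "cutoff" */
        float           min, max;
} XAP_control_desc;

typedef struct XAP_bay_desc
{
        const char              *name;          /* e.g. "synth channels" */
        unsigned                nchannels;      /* may differ between Bays */
        unsigned                ncontrols;
        const XAP_control_desc  *controls;      /* shared by ALL Channels */
} XAP_bay_desc;

/* One shared Control set... */
static const XAP_control_desc synth_controls[] = {
        { "cutoff",     0.0f, 1.0f },
        { "resonance",  0.0f, 1.0f }
};

/* ...instantiated by every Channel of this Bay. */
static const XAP_bay_desc synth_bay = {
        "synth channels", 16, 2, synth_controls
};

/* Every Channel of a Bay exposes the same number of Controls. */
unsigned bay_controls_per_channel(const XAP_bay_desc *b)
{
        return b->ncontrols;
}
```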
> I can see how it might be neat, but since multiple channels is
> convenient for hardware people, shouldn't we leave the controls
> the same, too? :)
Yes. But if you for some reason *want* two kinds of Event Input
Channels, with different sets of controls (say synth channels and
mixer busses), you can have that. Just have one Bay for each.
> Simpler perhaps:
>
> nchannels
May differ between Bays. (You may not want to have as many mixer
busses as you have synth channels, for example.)
> master_controls[]
These would belong to the only Channel of the non-optional Master Bay.
(Since it would make sense to use the exact same protocol for all
Event Channels, this one could actually have Voice Controls as
well... May sound silly, but remember that I once mentioned that a
synth that supports only one Channel may use the Master Event Port
for everything. That's when you'd want this.)
> channel_controls[]
Channels of some Event Input Bay.
> voice_controls[]
Same Channels of the same Bay - only different events.
Or use -1 for the "voice" field when sending Channel controls - but I
strongly prefer different events, for complexity and performance
reasons. It's nicer to have a few more cases in the ever-present
event decoding switch than if() statements inside a bunch of the
cases. (And you can still merge them, if you really want to, for
some reason...)
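That is, something like this (hypothetical event struct and type names, of course) - one clean case per event type, no "voice == -1" convention to test for inside a shared case:

```c
#include <assert.h>

/* Separate event types for Channel and Voice controls: each gets
 * its own case in the decoding switch. (All names invented.) */
enum
{
        EV_CHANNEL_CONTROL,
        EV_VOICE_CONTROL
};

typedef struct event
{
        int     type;
        int     channel;
        int     voice;          /* used only by EV_VOICE_CONTROL */
        int     control;
        float   value;
} event;

static float channel_val[4][8];
static float voice_val[4][32][8];

void decode(const event *e)
{
        switch(e->type)
        {
          case EV_CHANNEL_CONTROL:
                channel_val[e->channel][e->control] = e->value;
                break;
          case EV_VOICE_CONTROL:
                voice_val[e->channel][e->voice][e->control] = e->value;
                break;
        }
}

float get_channel_val(int ch, int ctl)
{
        return channel_val[ch][ctl];
}

float get_voice_val(int ch, int v, int ctl)
{
        return voice_val[ch][v][ctl];
}
```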
> or, if we agree flags for this are best:
>
> controls[] (each is flagged MASTER/CHANNEL/VOICE)
Well, that's another way of describing the same thing. It's just a
matter of initialization API design. Nothing performance critical or
anything - just do it the cleanest and simplest way possible.
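For the record, the flagged variant would look roughly like this (again, names made up) - one flat array, with the per-scope sets recoverable by filtering on the flag:

```c
#include <assert.h>
#include <stddef.h>

/* The flagged alternative: one controls[] array where each entry
 * carries a scope flag, instead of separate master/channel/voice
 * arrays. Same information; different init API design. */
enum
{
        XAP_CTL_MASTER  = 1 << 0,
        XAP_CTL_CHANNEL = 1 << 1,
        XAP_CTL_VOICE   = 1 << 2
};

typedef struct control_desc
{
        const char      *name;
        unsigned        scope;  /* one of the flags above */
} control_desc;

static const control_desc controls[] = {
        { "master_volume",      XAP_CTL_MASTER  },
        { "pan",                XAP_CTL_CHANNEL },
        { "velocity",           XAP_CTL_VOICE   },
        { "pitch",              XAP_CTL_VOICE   }
};

/* A host can still recover the per-scope sets by filtering. */
unsigned count_scope(unsigned scope)
{
        size_t i;
        unsigned n = 0;
        for(i = 0; i < sizeof controls / sizeof controls[0]; ++i)
                if(controls[i].scope & scope)
                        ++n;
        return n;
}
```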
> > What if you need *only* pressure, and don't care about attack or
> > release velocity? Why not just make velocity optional as well?
> > :-)
>
> Even though I have conceded this point - JUST IGNORE IT? Whatever,
> it's old news :)
Well, yes! If your violin simulator doesn't simulate the "thud" when
the bow impacts the strings, what would you use velocity for? It's
already decided that the "bow speed" control (whatever you call it)
is the primary "loudness" control of this instrument. (That's
something that should be in the hints, BTW, so MIDI->event converters
and the like will have a chance of doing the right thing.)
> > However, it might be handy for the host to be able to ask for
> > the values of specific controllers. It may or may not be useful
> > to do that multiple times per buffer - but if it is, you'll
> > definitely want the timestamps of the "reply events" to match
> > those of your request events, or things will get hairy...
>
> Uggh, can we keep the get() of control values simpler than events?
> My previous proposal had a control->get() method and a ctrl->set()
> method. Obviously, the set() is superseded by events. Is the get,
> too?
Still not 100% sure about this... I *think* you might as well use
events both from the complexity POV and the performance POV, but I
think I'll have to hack some actual code to be sure.
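The event based get() could be as simple as this, though. (A sketch under my assumptions; the struct and names are invented.) The one rule that matters is that the plugin *copies* the request timestamp into the reply, so the host can match replies to requests even with several per buffer:

```c
#include <assert.h>

/* Control reads as events: the host queues a GET request with a
 * timestamp; the plugin answers with a VALUE event carrying the
 * *same* timestamp. (Hypothetical names throughout.) */
enum
{
        EV_GET,
        EV_VALUE
};

typedef struct event
{
        int             type;
        unsigned        timestamp;      /* sample frame within the buffer */
        int             control;
        float           value;
} event;

/* Plugin side: answer a GET by echoing its timestamp. */
event handle_get(const event *req, const float *control_values)
{
        event reply;
        reply.type      = EV_VALUE;
        reply.timestamp = req->timestamp;   /* copy; don't re-stamp! */
        reply.control   = req->control;
        reply.value     = control_values[req->control];
        return reply;
}
```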
> SILENT (plugin to host - sent when reverb tails or whatnot have
> died..)
Great idea. Sample accurate end-of-tail notification. :-) (In
Audiality, I do that by fiddling with the plugin state, which is
rather ugly and not sample accurate.)
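The tail detection itself could be as dumb as a threshold scan per buffer - something like this sketch (threshold and names invented), which yields the exact frame to timestamp the SILENT event with:

```c
#include <assert.h>
#include <math.h>

/* Return the first frame from which the rest of the buffer stays
 * below 'threshold' - i.e. the timestamp for a SILENT event - or
 * -1 if the tail is still audible at the end of the buffer. */
int find_silence_start(const float *buf, int frames, float threshold)
{
        int i, last_loud = -1;
        for(i = 0; i < frames; ++i)
                if(fabsf(buf[i]) >= threshold)
                        last_loud = i;
        if(last_loud == frames - 1)
                return -1;              /* still audible at buffer end */
        return last_loud + 1;           /* sample accurate end of tail */
}
```

(A real plugin would of course track this across buffers, and probably use an envelope follower rather than raw samples - but the SILENT event itself only needs that one frame number.)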
> I noodled on your state-model and realized that a state-change is
> just an event. :)
Yes (MAIA philosophy again!) - but some of those state changes cannot
be done inside the RT thread. And since there's no easy way to have
callback driven plugins detach themselves from the RT thread, I think
it's much better if they let the host worry about that, and just
assume that these state changes will be requested through calls made
from a suitable context.
That said, you can *still* use events sent to the Master Event Port +
calls to run()/process(), instead of calls to state(). However, there
are some less obvious problems with this.
The first one I think of is that event allocation and deallocation is
not inherently thread safe, so you'll have to give the plugin another
host struct (different event pool, different host event port etc)
before you can send that STATE event and call run()/process(). (Or
the plugin would potentially crash and/or crash the RT thread upon
freeing the event you sent it.)
Can be done, but is there really a point? You can't really change
state of a plugin in the middle of a buffer anyway, so the timestamps
are of no use.
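To be explicit about the allocation issue: the kind of event pool I have in mind is just a single-threaded free list, roughly like this (sketch; names invented). It is fast precisely because there is no locking - which is also why a non-RT thread must not alloc/free against the RT thread's pool:

```c
#include <assert.h>
#include <stddef.h>

/* A minimal single-threaded event pool: a plain free list with no
 * locking. Safe only within one thread - which is exactly why a
 * STATE event sent from another context needs its own pool. */
typedef struct event
{
        struct event    *next;
        int             type;
} event;

typedef struct event_pool
{
        event   *free_list;     /* NOT shared between threads */
} event_pool;

event *ev_alloc(event_pool *p)
{
        event *e = p->free_list;
        if(e)
                p->free_list = e->next;
        return e;
}

void ev_free(event_pool *p, event *e)
{
        /* Freeing into the wrong pool from another thread would
         * race right here - hence one pool per context. */
        e->next = p->free_list;
        p->free_list = e;
}
```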
//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---