On Monday 09 December 2002 13.27, Steve Harris wrote:
> On Sun, Dec 08, 2002 at 11:06:36PM +0100, David Olofson wrote:
> > Just like with LADSPA, the plugin will have to tell the host
> > whether or not it has absolute upper and/or lower limits. In VST,
> > the range is [0,1], period, but I think that's too restrictive.
> > Scaling that range to whatever you need is fine, but that's not
> > the point: Some controls just don't *have* sensible absolute
> > limits - so why force plugin designers to "invent" some?
> Actually it quickly became obvious that for a host to construct a
> useful GUI the plugin has to at least hint at some ranges.
Yes, *hint*.
> However, in LADSPA there's nothing to stop you going past the
> hints, and I even recommend it for some plugins.
...while other plugins would freak out, or even crash, if you did
that and they didn't range check everything.
In some cases, you don't need to range check at all (say, when the
events come directly from a native sequencer), and for the other
cases, why do it in every plugin instead of in the host? You tell the
host about the ranges anyway, so why not just tell it whether those
ranges are hard or not?
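To illustrate what I mean (just a rough sketch; none of these names
are actual XAP or LADSPA API, and the real thing would hang off the
control descriptors):

/* Hypothetical control range hint, as the plugin might publish it: */
typedef struct
{
	const char *name;
	float	min, max;	/* suggested range, eg for GUI sliders */
	int	hard_min;	/* nonzero: values below 'min' are illegal */
	int	hard_max;	/* nonzero: values above 'max' are illegal */
} control_hint_t;

/* Host-side range check; the plugin never has to validate input: */
static float host_check_value(const control_hint_t *h, float value)
{
	if(h->hard_min && value < h->min)
		value = h->min;
	if(h->hard_max && value > h->max)
		value = h->max;
	return value;
}

Soft limits just aren't checked at all - the hints are only there for
the GUI.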
> > Normally, plugins just do what you tell them. They don't change
> > their own input controls. (Well, they *could*, but that would be
> > pretty hairy, I think... Or maybe not. Anyone dare to try it? ;-)
> I think it would be bad :)
So do I. And then there's this target input + actual output solution
for this kind of stuff, if you really want it. (Needs no explicit API
support.)
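That is, something like this (hypothetical names; the point is just
that the plugin ramps its *output* instead of rewriting its input):

/* "Target input + actual output": the sender owns 'target', the
 * plugin owns 'actual', and nobody rewrites anyone else's control. */
typedef struct
{
	float	target;		/* control input, written by the sender */
	float	actual;		/* control output, written by the plugin */
	float	slew;		/* max change per call; plugin internal */
} smoothed_control_t;

static void smoothed_control_update(smoothed_control_t *c)
{
	float d = c->target - c->actual;
	if(d > c->slew)
		d = c->slew;
	else if(d < -c->slew)
		d = -c->slew;
	c->actual += d;
}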
> > So have I (when designing MAIA) - until I actually implemented
> > Audiality. There *should* be a single per-instance event port -
> > the Master Event Port - but you may optionally have one event
> > port for each channel as well, if you have any use for
> > per-channel events. You may well just use the Master Event Port
> > for everything in a multichannel plugin, but then you're on your
> > own when it comes to addressing of channels and stuff. (That
> > would require more dimensions of addressing in each event, which
> > means a bigger event struct, more decoding overhead etc.)
(I just solved that, BTW. See "XAP: Multiple Event Ports or not?".)
> Yes, but is that overhead larger than having multiple event ports?
What exactly is the overhead of having multiple event ports? :-)
> For instruments with high channel count and low event rate (eg. a
> sampler?) I would imagine that multiplexing into one event port
> would be more efficient?
No. Unless you process all channels in parallel, one sample at a time
in the inner loop, you would need to dispatch events internally
before going into the channel loop. That's overhead.
And when you do that, you end up with one internal event port per
channel anyway, or at least, one way or another, you still have to
check for events for each channel, whether there are events or not.
Finally, when you have only one input port, the host will have to do
a lot of shadowing + sort/merging to get your input event queue in
order - and then you're just going to rip it apart again? :-)
OTOH, if you have one port per channel, the only overhead is that
single check per buffer for each channel (check the timestamp of the
first event, if there is one), and if there are no events, you can do
pure DSP for the whole buffer for the channel. On the "outside", it's
quite likely that there will be at most one sender for each one of
your ports, and that means events go *directly* into your input
queues without any host intervention whatsoever.
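In code, the per-channel case might look something like this (a rough
sketch with made-up types, not actual XAP API; assume timestamps are
frame offsets within the current buffer):

typedef struct event
{
	unsigned	timestamp;	/* frame offset in this buffer */
	struct event	*next;
} event_t;

typedef struct
{
	event_t	*events;	/* this channel's input event queue */
	/* ...channel DSP state... */
} channel_t;

static void process_channel(channel_t *c, float *buf, unsigned frames)
{
	unsigned pos = 0;
	while(pos < frames)
	{
		/* The single cheap check: where's the next event? */
		unsigned until = (c->events &&
				c->events->timestamp < frames) ?
				c->events->timestamp : frames;
		/* Pure DSP from 'pos' up to the next event (or end): */
		while(pos < until)
			buf[pos++] *= 0.5f;	/* stand-in for real DSP */
		while(c->events && c->events->timestamp == pos)
		{
			/* ...apply the event to the channel state... */
			c->events = c->events->next;
		}
	}
}

With no events queued, that's one branch per channel per buffer, and
then it's pure DSP all the way.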
Anyway, now you can have it either way, and the host won't even know
the difference. :-)
[...]
> > the RT engine - *unless* you decide on a number of VVIDs to
> > allocate for each Channel of every plugin, right when they're
> > instantiated.
> That sounds most sensible. The instrument has to allocate voice
> table space, so there is likely to be an internal (soft) limit
> anyway.
Yes... But keep in mind that there is no strict relation between
Physical Voices and Virtual Voices. A synth with only 32 voices may
be played with 8 VVIDs or 256 VVIDs. The number of VVIDs is decided
by the *sender*, so it can manage the polyphony it wants in a
sensible way. The synth only uses the VVIDs to keep track of which
Virtual Voice the sender is talking about - whether or not it
actually has a Physical Voice at the moment. Voice allocator in
between.
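Something like this, roughly (hypothetical code, just to show the
idea; a real allocator would also handle voice stealing and the
"no free voice" case):

#define NVOICES	32	/* physical polyphony of this synth */

typedef struct
{
	int	vvid;	/* Virtual Voice this voice belongs to;
			 * -1 if the physical voice is free */
	/* ...oscillator/envelope state... */
} voice_t;

static voice_t voices[NVOICES];

/* Find the physical voice (if any) currently playing a VVID: */
static voice_t *voice_for_vvid(int vvid)
{
	int i;
	for(i = 0; i < NVOICES; ++i)
		if(voices[i].vvid == vvid)
			return &voices[i];
	return NULL;	/* VVID is valid, but the voice was stolen
			 * or never got a physical voice */
}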
//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---