I disagree with that - this is a waste of DSP cycles, processing output that
will be sent nowhere.
So, why would you ask the plugin to set up outputs that you won't
connect, and then force the plugin to have another conditional to
check whether the output is connected or not?
This confuses me. A plugin says it can handle 1-6 channels. The host only
connects 2 channels. The plugin loops for i = 0 to i = me->nchannels.
There isn't any checking. If the plugin says it can handle 2-6 channels and
the host only connects 1, it is an error. Connect at least the minimum, up
to the maximum. In typing this, I've seen that discontiguous connections
do, in fact, require conditionals. Maybe it is safe to say you have to
connect ports in order?
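(For concreteness, the kind of loop I mean looks something like this - the
struct and all the names are invented for the example, not from any draft:)

    /* Hypothetical instance data: the host has connected 'nchannels'
     * output buffers, contiguously and in order, before PLAY mode. */
    typedef struct {
        int    nchannels;   /* how many ports the host actually connected */
        float *out[6];      /* out[0] .. out[nchannels-1] are valid       */
    } my_plugin;

    static void my_run(my_plugin *me, unsigned long nframes)
    {
        for (int c = 0; c < me->nchannels; c++) {
            float *buf = me->out[c];
            for (unsigned long i = 0; i < nframes; i++)
                buf[i] = 0.0f;   /* real DSP goes here */
        }
        /* No "is this port connected?" test anywhere: the host promised
         * that ports 0..nchannels-1 are connected, in order. */
    }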
I would propose that the pre-instantiation host/plugin "negotiations"
include:
* A way for the host to tell the plugin how many ports of
each type it wants for a particular instance of the plugin.
This is exactly what I'm talking about with the connect methods. Before we
go into PLAY mode, we ask for a certain number of channels.
* A way for the host to *ask* the plugin to disable certain ports if
possible, so they can be left disconnected.
hmm, this is interesting, but now we're adding the conditional
plugin with two 1D, contiguous arrays (although possibly with some ports
disabled, if the plugin supports it); one for inputs and one for outputs.
That will simplify the low level/DSP code, and I think
Yes, I've come around to this. The question in my mind is now about
disabling (or just not connecting) some ports.
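(Something like this is the shape I picture for that negotiation, before we
go into PLAY - every name here is hypothetical, it's just to pin the idea
down:)

    typedef struct hyp_plugin hyp_plugin;
    struct hyp_plugin {
        /* Host -> plugin: "I want this many input and output ports."
         * Returns 0 on success, nonzero if the counts are out of range. */
        int (*set_port_count)(hyp_plugin *p, int n_in, int n_out);

        /* Host -> plugin: "May I leave output port 'idx' disconnected?"
         * The plugin may refuse (nonzero); then the host has to connect
         * the port to *something* before PLAY. */
        int (*request_disable)(hyp_plugin *p, int idx);

        void (*connect_output)(hyp_plugin *p, int idx, float *buf);
    };

    /* Host side: a 5.1-capable instance, but we only want front L/R. */
    static void hookup(hyp_plugin *p, float *left, float *right)
    {
        p->set_port_count(p, 0, 6);
        p->connect_output(p, 0, left);
        p->connect_output(p, 1, right);
        for (int i = 2; i < 6; i++)
            p->request_disable(p, i);   /* may be refused - see below */
    }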
Now, if the plugin didn't support DisableSingle on the output ports
of type Out;5.1, you'd have to accept getting all 6 outs, and just
route the bass and center channels to "/dev/null". It should be easy
enough for the host, and it could simplify and/or speed up the
average case (all outputs used, assumed) of the plugin a bit, since
there's no need for conditionals in the inner loop, mixing one buffer
for each output at a time, or having 63 (!) different versions of the
mixing loop.
ok, I see now. If the plugin supports disabling, the host can use it. If
the plugin is faster to assume all ports connected, it does that instead. I
think I rather like that.
I think it's a bad idea to *require* that plugins support it.
This is key, again, you've convinced me.
I strongly prefer working with individual mono waveforms, each on a
voice of their own, as this offers much more flexibility. (And it's
also a helluva' lot easier to implement a sampler that way! :-)
just so we're clear, 'voice' in your terminology == 'channel' in
mine?
...provided there is a guarantee that there is a buffer for the port.
Or you'll segfault unless you check every port before messing with
it. :-)
Do we need to provide a buffer for ports that are disabled?
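(The two answers I can see, roughly: either a disabled port really has no
buffer and the plugin eats one per-port check outside the sample loop, or
the host always connects *something* - even a shared scratch buffer - and
the plugin never checks at all. Made-up names again:)

    typedef struct {
        int    nports;     /* total ports the plugin exposed */
        float *out[6];     /* NULL where a port is disabled  */
    } hyp_ports;

    static void run_with_check(hyp_ports *me, unsigned long nframes)
    {
        for (int c = 0; c < me->nports; c++) {
            float *buf = me->out[c];
            if (!buf)
                continue;        /* disabled: no buffer to touch */
            for (unsigned long i = 0; i < nframes; i++)
                buf[i] = 0.0f;   /* real DSP goes here           */
        }
    }

    /* The other answer: the host points every disabled output at one
     * shared "bit bucket" buffer, so the plugin never checks anything
     * and the unwanted channels really do go to /dev/null. */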
Hrrm, so how does something like this sound?
(metacode)
Yeah, something like that. Add "count granularity", and you'll make
life for the plugin coder a lot easier, I think. (Again, see above.)
ok, I'll include something to this effect in the next draft.
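(Roughly what I'd put in the draft - a made-up descriptor with "count
granularity", so a plugin can say e.g. "2 to 8 channels, in pairs":)

    /* { 2, 8, 2 } would mean 2, 4, 6 or 8 channels - whole pairs only. */
    typedef struct {
        int min_count;
        int max_count;
        int step;          /* granularity, >= 1; 1 = any count in [min, max] */
    } hyp_port_count;

    static int count_is_valid(const hyp_port_count *pc, int n)
    {
        return n >= pc->min_count && n <= pc->max_count
            && (n - pc->min_count) % pc->step == 0;
    }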
> { "left(4):mono(4)" }, { "right(4)" },
Does this mean the plugin is supposed to understand that you want a
"mono mix" if you only connect the left output?
If the host connects this pad to a mono effect, it knows that the
'left' channel is also named 'mono'. I do not expect the plugin to mono-ize
a stereo sample (though it can if it feels clever).
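(In case it helps, this is how I imagine the host using the alias - after
parsing, "left(4):mono(4)" is just one port that answers to two names, and
the plugin never does a mixdown. Hypothetical types:)

    #include <string.h>

    /* One port, several names; "left(4):mono(4)" parses to two names
     * that both refer to slot 4. */
    typedef struct {
        const char *names[4];   /* e.g. { "left", "mono", NULL } */
        int         slot;       /* 4, in the example above       */
    } hyp_port_names;

    /* Host side: when patching into a mono destination, pick the port
     * that also answers to "mono". */
    static int answers_to(const hyp_port_names *p, const char *wanted)
    {
        for (int i = 0; p->names[i]; i++)
            if (strcmp(p->names[i], wanted) == 0)
                return 1;
        return 0;
    }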
all that much to it. There's no sensible way of describing the
input/output relations of every possible plugin, so it's debatable
whether we should care to try at all.
I'm agreeing now..
* note_on returns an int voice-id
* that voice-id is used by the host for note_off() or note_ctrl()
That's the way I do it in Audiality - but it doesn't mix well with
timestamped events, not even within the context of the RT engine
core.
how so - it seems that if you want to send a voice-specific event, you'd
need this.
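(Concretely, this made-up layout is what I'm picturing for a voice-specific
event - and writing it out, I can see where the friction with timestamps
comes in, since the note_on that would return the voice-id may itself still
be sitting in the same queue:)

    #include <stdint.h>

    typedef struct {
        uint32_t frame;   /* timestamp, in frames from the block start    */
        uint16_t type;    /* NOTE_ON, NOTE_OFF, NOTE_CTRL, ...            */
        int32_t  voice;   /* which voice this applies to - but if note_on
                             is queued too, its return value doesn't
                             exist yet                                    */
        float    value;
    } hyp_event;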
I don't think that's a good idea. The synth has a much better chance
of knowing which voice is "best" to steal - and if smart voice
stealing is not what you want, you shouldn't use a polyphonic synth
or sound.
ok, ok.
Besides, VSTi has it. DXi has it. I bet TDM has it. I'm sure all
major digital audio editing systems (s/w or h/w) have it. Sample
accurate timing. I guess there is a reason. (Or: It's not just me! :-)
yeah, VSTi also has MIDI - need I say more? I'm becoming convinced, though.
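(For the record, the way I understand sample-accurate events landing inside
a plugin is a block-splitting loop like this - all names and types are made
up:)

    #include <stddef.h>

    typedef struct { unsigned long frame; float value; } hyp_ctrl_event;

    typedef struct {
        float           gain;      /* stand-in for some control       */
        hyp_ctrl_event *events;    /* sorted by frame, for this block */
        size_t          n_events;
    } hyp_synth;

    /* Plain DSP over a sub-block; no event checks in here. */
    static void render(hyp_synth *s, float *out,
                       unsigned long from, unsigned long to)
    {
        for (unsigned long i = from; i < to; i++)
            out[i] = s->gain;      /* stand-in for real DSP */
    }

    /* Split the block at every event timestamp, so each change takes
     * effect on exactly the right frame. */
    static void run_block(hyp_synth *s, float *out, unsigned long nframes)
    {
        unsigned long pos = 0;
        size_t e = 0;
        while (pos < nframes) {
            unsigned long until = nframes;
            if (e < s->n_events && s->events[e].frame < nframes)
                until = s->events[e].frame;
            render(s, out, pos, until);
            if (until < nframes)
                s->gain = s->events[e++].value;
            pos = until;
        }
    }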
What kind of knobs need to be ints? And what range/resolution should
they have...? You don't have to decide if you use floats.
They should have the same range as floats - whatever their control struct
dictates.
I'd assume a violin modeller would have a BOWSPEED control. The
note_on() would tell it what the eventual pitch would be. The
plugin would use BOWSPEED to model the attack.
Then how do you control pitch continuously? ;-)
with a per-voice pitchbend
Some controls may not be possible to change in real time context -
but I still think it makes sense to use the control API for things
like that.
I don't know if I like the idea of controls being flagged RT vs NONRT, but
maybe it is necessary. Or maybe it's not, and a user who changes a sample
in real time can expect a glitch.
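(Thinking out loud, a control descriptor could carry both answers - the
range, whatever the value type, and a hint flag saying whether the control
is safe to poke from the RT thread. All hypothetical:)

    #include <stdint.h>

    enum { HYP_CTRL_FLOAT, HYP_CTRL_INT };
    #define HYP_CTRL_RT_SAFE (1u << 0)   /* may be changed during PLAY */

    typedef struct {
        const char *name;
        int         type;       /* HYP_CTRL_FLOAT or HYP_CTRL_INT */
        float       min, max;   /* range applies to either type   */
        uint32_t    flags;
    } hyp_control_desc;

    /* e.g. the violin model's bow speed vs. a "load this sample" slot: */
    static const hyp_control_desc bow_speed =
        { "BOWSPEED", HYP_CTRL_FLOAT, 0.0f, 1.0f,   HYP_CTRL_RT_SAFE };
    static const hyp_control_desc sample_slot =
        { "SAMPLE",   HYP_CTRL_INT,   0.0f, 127.0f, 0 };  /* change off-line,
                                                             or expect a glitch */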
An Algorithmically Generated Waveform script...?
But what I don't get is: who loads the data into the control?
a) host will call deserialize() with a string or other standard format
b) plugin will load it from a file, in which case host passes the filename
to the control
c) host loads a chunk of arbitrary data which it read from the plugin before
saving/restoring - in which case how did it get there in the first place?
(see a or b)
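(For (a) and (b) above, the hooks could be as simple as this - names
invented, nothing final:)

    #include <stddef.h>

    /* (a) The host hands back an opaque string/blob it got from the
     * plugin at save time, and gets a fresh one when it saves again. */
    int (*deserialize)(void *plugin, int control, const char *data, size_t len);
    int (*serialize)(void *plugin, int control, char *buf, size_t maxlen);

    /* (b) The control's value is just a filename; the plugin does the load. */
    int (*set_string)(void *plugin, int control, const char *path);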
Well, then I guess you'll need the "raw data block" type after all,
since advanced synth plugins will have a lot of input data that
cannot be expressed as one or more "normal" controls in any sane way.
Such as? Where does this data come from in the first place?
Just as with callback models, that depends entirely on the API and
the plugin implementation. AFAIK, DXi has "ramp events". The
Audiality synth has linear ramp events for output/send levels.
So does Apple Audio Units. I am starting to like the idea..
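(As I understand ramp events, the event carries a target and a length
instead of a bare value, and inside the plugin the control just turns into
a per-frame increment - a sketch with invented names:)

    typedef struct {
        unsigned long frame;    /* when the ramp starts             */
        unsigned long length;   /* how many frames it spans (>= 1)  */
        float         target;   /* value to land on at frame+length */
    } hyp_ramp_event;

    /* On receiving the event, compute the per-frame step... */
    static void start_ramp(float *value, float *step, const hyp_ramp_event *ev)
    {
        *step = (ev->target - *value) / (float)ev->length;
    }

    /* ...and in the sample loop it's just: gain += gain_step; out[i] *= gain; */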
Audiality, but if we're designing the same thing, why aren't we working on
the same project?
Well, that's the problem with Free/Open Source in general, I think.
The ones who care want to roll their own, and the ones that don't
care... well, they don't care, unless someone throws something nice
and ready to use at them.
As to Audiality, that basically came to be "by accident". It started
Interesting how it came about, but why are you helping me turn my API into
yours, instead of letting me work on yours? Just curious. I do like to
roll my own, but I don't want to waste time..
Tim