David Olofson wrote:
> I would think that users might want some way of "upgrading" their
> project files to use new plugin versions without just manually
> ripping out and replacing plugins, but even without some API help,
> I'd rather not see hosts trying to do this automatically...

Well, a common solution is to store a plugin version identifier (it
could even be a sequence number assigned by the plugin author) in the
song. Then, the plugin is able to convert at least the current
parameter values (but not, say, automation tracks) on song load.

It doesn't solve *all* the compatibility problems, but it can solve
the most immediate one, I think.

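Something like this, maybe (all names invented, C just to illustrate
the idea):

/* Hypothetical sketch - the host saves the version number along with
 * the parameter values, and hands it back on song load so the plugin
 * can convert old values. */

#define MY_PLUGIN_VERSION 3

typedef struct plugin_state {
    unsigned int version;    /* sequence number assigned by the author */
    float        params[8];  /* current parameter values from the song */
} plugin_state;

/* Called by the host after loading saved state from the song file. */
static void my_plugin_restore(plugin_state *s)
{
    if (s->version < 2)
        s->params[0] /= 20000.0f;  /* v1 stored cutoff in Hz, v2+ 0..1 */
    if (s->version < 3)
        s->params[7] = 1.0f;       /* v3 added drive; neutral default */
    s->version = MY_PLUGIN_VERSION;
}
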
> (suggesting a newer, partially compatible version of the plugin if
> there is one), but again, no silent automatic upgrades, please. Too
> much risk of the new version not working as expected.

Automatic conversion worked with VST and Buzz. But warning the user
about possible incompatibility because of a newer version is a good
idea. Maybe a plugin should be able to override it if it's absolutely
certain that no compatibility problems can arise, but that may cause
problems :)

> For now, I decided on 16:16 fixed point timestamps for the top secret
> Audiality 2 rewrite (DOH!), because the event loop deals in integers
> (audio sample frames), and because there'll be some integer/fixed
> point DSP in there.

I love the idea of 16:16 fixed point timestamps (assuming the time
would be relative to the current buffer start, not some absolute
time). Most plugins would just shift the timestamps right by 16 bits
and compare them to the loop iterator :) Sounds practical.

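For illustration, a made-up event structure and inner loop (nothing
here is from any real API):

typedef struct event {
    unsigned int  when;    /* 16:16 frames, relative to buffer start */
    float         value;
    struct event *next;
} event;

static void process(event *ev, float *out, unsigned int nframes,
                    float *gain)
{
    for (unsigned int frame = 0; frame < nframes; frame++) {
        /* Shift away the fraction and compare to the loop iterator. */
        while (ev && (ev->when >> 16) <= frame) {
            *gain = ev->value;          /* apply the control change */
            ev = ev->next;
        }
        out[frame] *= *gain;
    }
}
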
> As to sub-sample accurate timestamps, these are required for granular
> synthesis implemented as separate plugins. It just sounds horrible as
> soon as you try to generate tones rather than random noise "clouds".

I bet most plugins wouldn't support the fractional part of timestamps,
and those that would could report it as a separate feature, for use in
granular synthesis-aware hosts :) Yes, I'm reaching too far ahead
here, but you kind of asked for it :)

> Other than that, I'm not sure it has much value outside of
> marketing... Any other real uses, anyone?

Can't think of any. Events for true oscillator "hard sync", perhaps
(phase reset with subsample precision).

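Very roughly, assuming 16:16 timestamps and an invented oscillator
structure:

typedef struct osc {
    double phase;  /* current phase, in cycles (0..1) */
    double inc;    /* phase increment per sample frame */
} osc;

/* The reset lands *between* two frames; by the next integer frame the
 * oscillator has already advanced through the rest of that frame. */
static void hard_sync(osc *o, unsigned int when /* 16:16 */)
{
    double frac = (when & 0xFFFFu) / 65536.0;
    o->phase = (1.0 - frac) * o->inc;
}
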
> Short version, as I remember it: Plugins that want it can have a
> "music time" input port where they receive sample accurate updates
> on transport state. Advanced plugins could handle loop points and
> other jumps as they occur, but your average arpeggiator or similar
> would probably do reasonably well by just tracking tempo and
> position. (You need the latter unless you "synchronize" only on note
> events or similar.)

A separate port type (which would probably be implicitly
auto-connected by most hosts) would perhaps be nice for that, just so
that things aren't scattered too much. Although plain float (or other)
ports for BPM and position could do, too. What do you think?

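E.g. something like this (all fields invented for the sake of the
example):

#include <math.h>

typedef struct music_time {
    double tempo;     /* beats per minute */
    double position;  /* song position, in beats */
    int    rolling;   /* transport state: 0 = stopped, 1 = playing */
} music_time;

/* An average arpeggiator just tracks tempo and position, e.g. how far
 * (in beats) it is to the next step of a given length: */
static double beats_to_next_step(const music_time *mt, double step)
{
    return step - fmod(mt->position, step);
}

Delivering the same data as sample accurate events would work just as
well; the struct is only meant to show what a plugin actually needs.
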
> Makes sense to me. An "icon" like this is basically just a small GUI
> that doesn't take any user input. (Well, it *could*, but shouldn't
> rely on it, as a host really using it as an icon probably wouldn't
> care to let it have any input events...)

It could. I'm thinking of more and more examples of uses for it:

- an amplifier where you can set the amplification factor by just
  clicking and dragging on the knob, without opening a full-blown GUI
  window
- a waveshaper which displays the current shaping table (so that you
  know at first glance what kind of shaping is going on)
- an envelope icon that displays the actual current envelope shape,
  not just a generic ADSR icon :)
- a two-input toggle switch (A/B) which displays which state it's in,
  and allows toggling by clicking on the state icon

The ideas come from a BEAST user's perspective, where certain things
require opening too many windows :) Plus, if done well, it could be
quite the eye candy for modular environments.

> Somewhere around here is where I'd suggest using a "notification"
> style control interface instead - ie function calls, VST 1 style, or
> events of some sort. ;-)

Well, the parameter group bitmask is easy for the host and easy for
the plugin, and it's completely optional for both (if the host doesn't
want to bother with setting the "parameters changed" bitmask, it can
just set all 1's - and when the plugin doesn't want the information
about which parameters have changed, it just ignores the bitmask and
assumes that all parameters changed).

In other words, it's a decent optimization if both host and plugin
support it, and it's harmless for those that don't. What's more,
supporting it is really easy - for the host it's "just look up which
bits to set when changing certain parameters"; for the plugin it's
even simpler - check the bits and do the corresponding calculations.

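Just to illustrate, with made-up bit assignments:

#include <stdint.h>

#define CHANGED_FILTER   (1u << 0)    /* cutoff, resonance, ... */
#define CHANGED_ENVELOPE (1u << 1)    /* attack, decay, ... */
#define CHANGED_ALL      0xFFFFFFFFu  /* lazy host: "everything" */

typedef struct plugin {
    uint32_t changed;  /* set by the host, consumed by the plugin */
    /* ... parameters, filter coefficients, envelope tables ... */
} plugin;

static void recalc_filter(plugin *p)    { (void)p; /* recompute coeffs */ }
static void rebuild_envelope(plugin *p) { (void)p; /* rebuild tables */ }

static void update_changed(plugin *p)
{
    /* Check the bits and do only the calculations that are needed. */
    if (p->changed & CHANGED_FILTER)
        recalc_filter(p);
    if (p->changed & CHANGED_ENVELOPE)
        rebuild_envelope(p);
    p->changed = 0;
}
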
A VST1-style notification (a function call on every parameter change)
would work too, but it's pretty inefficient, especially when changing
several parameters at once. Float-valued events would also work,
although they'd push a bit of extra work onto the plugin side, which
may be undesirable, because there will be more plugins than hosts.

> Just realized that relationship too, but I'm not totally sure about
> the details yet. I'm probably going to try a 2D addressing approach;
> some ports may have multiple connections wired to different abstract
> instances of things (mixer voices, synth voices...) in the plugin.

My usual suggestion - keep it very simple.

> Is that (not being able to allocate hundreds or thousands of voices
> at any time, real time safe) an actual restriction to anyone...?

Not to me. Hundreds/thousands of individually controlled voices is an
uncommon, extreme case.

Of course, voice management still belongs to the plugin, in my
opinion, because different plugins can implement it in very different
ways (a monosynth, a normal polysynth, a polysynth using extra voices
for unison, an sf2 player using voice structures for layers). It's
just that the host should be able to tell the plugin to treat certain
notes in a certain way (individual pitch bend for selected notes etc).

Is that acceptable? I think the Fruityloops plugin standard had
individual control over each note (per-note pitch bends in the piano
roll, etc), and it worked pretty well; too bad I don't really remember
how they implemented it.

> A sequencer would generally have some fixed upper limit to the number
> of voices it can control, and with the exception of live recording,
> it can even figure out exactly how many voices it needs to control at
> once, maximum, for the currently loaded performance. I don't see a
> real problem here.

I think only certain notes would be "tagged" for individual control,
so a limit of 16 "note tags" doesn't seem very limiting (assuming we
use (channel, note tag) addressing). If that's what you mean, of
course. On the other hand, maybe someone has a use for more than 16
tags per channel? Unfortunately, my experience is limited here - for
the average synth and the average musician it's fine, but maybe for
things like MIDI control of stage lights etc it's not enough?

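Roughly what I have in mind (event layout invented, 16 tags per
channel as discussed):

#define TAGS_PER_CHANNEL 16

typedef struct note_event {
    unsigned char channel;
    unsigned char tag;   /* 0..15, assigned by the sequencer */
    float         bend;  /* per-note pitch bend, in semitones */
} note_event;

typedef struct synth {
    /* (channel, tag) -> internal voice index, -1 if unbound; voice
     * allocation itself stays inside the plugin, as discussed. */
    int voice_of[16][TAGS_PER_CHANNEL];
} synth;

static void apply_bend(synth *s, const note_event *e)
{
    int v = s->voice_of[e->channel & 15][e->tag % TAGS_PER_CHANNEL];
    if (v >= 0) {
        /* ... set pitch bend on voice v ... */
    }
}
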
Chris