[Oops. Now I've done it all - this one was originally slightly over
the 40 kB limit. *hehe*]
On Tuesday 04 February 2003 03.04, Laurent de Soras [Ohm Force] wrote:
> Hello everybody,
> I'm one of the PTAF authors and I've just subscribed to
> this list, seeing the standard is discussed here. I will
> try to answer your questions and to collect feedback.
Great! Welcome to the list. :-)
> I apologize for this long mail.
Well, I made an awful lot of comments in one post there - but rather
that than starting lots of threads...
> * Three states; created, initialized and activated.
[...]
> I preferred to keep the factory API quite simple. In most
[...]
Yeah, I see your point. As soon as the meta-data is *anything* but
pure, static data, plugin binaries will need code to generate it
somewhere. From that POV, it does look cleaner to have each plugin
"class" handle it's own meta-data in a lightweight state, than to
pack it into a central plugin factory in each binary.
If you have only one plugin per binary, it doesn't matter much, but
single file plugin packs are really rather common and useful. It
allows plugins to share code efficiently, and thus reduces cache
thrashing. It also allows plugins to share "live" data in various
ways.
Just as an example, you could implement an oscilloscope bank (for
testing, debugging and the general "importance effect", of course :-)
where a small plugin is used for tapping audio here and there, and
another plugin is used to gather data from all the others and pass it
to the GUI.
(In our designs, the GUI is normally a physically separate part from
the plugin. There are good reasons for this; read on.)
[...]
> Hmm, that makes me think about the issue of dual recursion
> in property checking (host-plug-host-plug etc) => infinite
> loops.
Yeah, I've thought about that too. Rather hairy stuff. The only easy
way to avoid it is to strictly define what the host should tell the
plugin before asking anything, and then just leave it at that.
However, that doesn't allow any actual negotiation. OTOH, I'm not sure
that's actually a problem. Hosts can still try various system
parameters until they have a set of responses for all cases they care
about.
[...the "sequencer" thread...]
Well, I don't get what this thread would be then. I was assuming it
was something like the automation sequencer thread found in some VST
hosts, but if it's not a sequencer at all...?
My point being, when would a host thread that is not a sequencer nor
an audio thread ever call a plugin? I don't see why plugins should
have to know about the existence of such things, if they never
interact with them directly.
> Of course I'm 100% for having audio and sequencing in the
> same thread. That's why there is only one opcode to process
> audio and sequence data at once. However... some VST hosts
> actually dissociate them, allowing the process() function to
> overlap with setParameter() or ProcessEvents(), which is
> 1) confusing, 2) a source of bugs that are hard to track,
> 3) difficult to synchronize.
Yeah, of course. That's why we want to avoid it at pretty much any
cost.
> * GUI code in the same binaries is not even possible on
>   some platforms. (At least not with standard toolkits.)
> I'm personally not familiar with Linux GUI toolkits (I'm
> confused with Gnome, KDE, X, Berlin, etc, sorry for
> my ignorance),
Who isn't confused...!? ;-)
> is there a problem with them ?
Yes. They use different GUI toolkits. That means, applications will
have to link with different libs, and generally, these libs just
don't mix, most importantly because virtually every GUI toolkit wants
to take over the main loop and drive the application through
callbacks or similar mechanisms. You generally can't have multiple
toolkits running in the same process, even if you try running them in
different threads.
> In this case wouldn't it be possible to launch the GUI
> module from the plug-in ?
Exactly. If your host is a KDE app using Qt and you're a plugin using
GTK+ (the toolkit used by GNOME as well as lots of non-GNOME apps),
you're screwed. Your GUI can't run in the host's process.
> What would you recommend ?
Keeping GUIs as separate executables that are installed in parallel
with the DSP plugins. For XAP, I'm thinking in terms of using an API
very similar to the DSP plugin API. A GUI "plugin" would basically
just have control inputs and outputs that match those of the DSP
plugin, so the host can connect them like any two plugins, although
through an RT/non-RT gateway and IPC, rather than the usual direct
sender-to-port event passing.
[...integrated GUIs, TDM etc...]
Right. The only issue for local, in-process execution is with the GUI
contra the host application. GUI and DSP will obviously run in
different threads in any RT host, but that's as far apart as you can get
them. Running plugins on cluster supercomputers and the like just
won't work, unless you can run the GUI on one machine and the DSP
part on another. Of course, this applies to traditional Unix style
server/workstation setups as well.
You *can* still have them in the same binary, but if you do
that, you have to make sure both parts have no external lib
references that cannot be resolved in a GUI-only or DSP-only host.
That is, you have to load any DSP plugin SDK lib, GUI toolkit libs
and whatnot *dynamically*, which means you force extra non-optional
and inherently non-portable code into every plugin.
If you make GUIs standalone executables, you don't have to worry much
about this, as they're really just normal applications, although with
a few special command line arguments and the desire to run as clients
to the host that spawns them.
> Actually this is mostly a design choice. If the plug-in
> must assume that its GUI and DSP code may be run in
> different locations (on two computers for example), both
> API and plug-ins should be designed to take that into
> account, especially instantiation and communication
> between both parts.
Well, if you make the GUIs use about the same API as the DSP plugins,
this is left to the host and support libs. The host fires up a GUI
and gives it a "context handle". The GUI uses an SDK lib to register
itself, and doesn't have to worry about how the communication is
done. (Shared memory, signals, messages, pipes, TCP/IP or whatever,
depending on platform and system configuration.) The plugin will then
basically be part of the processing net, and the host can connect it
to the DSP plugin in a suitable fashion. (Directly, through the
automation sequencer or whatever.)
> Current design is oriented toward a single object, making
> the API simpler and allowing flexible and hidden
> communication between GUI and plug-in core.
I think that's a bad idea. If all communication is done through the
official API - even if some of the data is just raw blocks that only
the plugin DSP and GUI understand - you can take advantage of the
transport layer that is already there. The host makes it totally
thread safe, and even makes the connections for you. DSP plugins
don't need any API at all for this, as the normal control API is
sufficient.
> Anyway the idea is indubitably interesting.
It's also basically a requirement for some platforms. Enforcing a
single, specific GUI toolkit simply isn't an option. You can provide
something like VSTGUI (or maybe even VSTGUI itself? It remains to be
seen what license they choose...), but you can't *force* people to use it, and
expect them to accept the API under those conditions. Lots of people
use and love GTK+, Qt and FLTK, and there are a few other
alternatives as well. Some just want to do raw low level graphics
using SDL - and SDL (as of now) can only handle one window per
process, regardless of platform. All of these have one thing in
common: They don't mix well, if at all.
[...tokens...]
> Several APIs (MAS for example, and recent VST extensions)
> use a similar system, it works like a charm.
I know, but I still think it's a bad idea. Not the token deal in
itself, but rather the fact that both hosts and plugins are
explicitly aware of this stuff. I think there are cleaner and more
flexible ways to do it.
> I don't understand why just sending events to the host would
> work. If several "parameter clients" act at the same
> time, how would you prevent the parameter from jumping
> continuously ?
You would need a feature that allows control outputs to be marked as
"active" or "passive". This allows hosts to control how things are
handled.
For example, if you have a plugin, its GUI (another plugin,
basically) and an automation sequencer, the normal case would be that
the automation sequencer sends recorded events to both the GUI and
the DSP plugin. The latter is - as always - just supposed to do its
thing, while the former should track the incoming control changes.
Now, if the user grabs a knob in the GUI, the GUI stops tracking input
events for that control, marks the corresponding output as active,
and starts transmitting instead. The automation sequencer would
notice the change and (normally - although this is completely
optional) pass the incoming control changes to the DSP plugin, and,
if in record mode, record the events in the process.
I don't think it gets cleaner and simpler than that - and note that
this works just fine even if there is substantial latency between the
GUI and the DSP host. The GUI never deals directly with the DSP
plugin, but rather just looks at the events it receives, and sends
events when it "feels like it".
> One solution is to make the host define implicit priority
> levels for these clients, for example 1 = automation,
> 2 = remote controls, 3 = main GUI. This is good enough
> for parameter changes arriving simultaneously, but is
> not appropriate for "interleaved" changes.
There will be no interleaved changes with my approach, since GUIs
never explicitly send events to DSP plugins. In fact, you could say
that they don't even know about the existence of the DSP plugins -
and indeed, you *could* run a GUI plugin against an automation
sequencer only, if you like. (Not sure what use it would be, but I'm
sure someone can think of something. :-)
As to priorities, well, you could do that as well, but it's a host
thing. In my example above, you'd connect the GUI plugin to the
sequencer, and then the sequencer to the DSP plugin. The sequencer
would react when the GUI toggles the active/passive "bit" of
controls, and decide what to do based on that. The normal action
would be to handle active/passive as a sort of punch in/out feature.
> Another solution is to use tokens. It can act like the
> first solution, but also allows the host to know that
> the user is holding his fingers on a knob, maintaining it
> at a fixed position, because the plug-in doesn't have to
> send any movement information. This is essential when
> recording automation in "touch" mode.
I think just having active/passive flags on control outputs is both
cleaner and more flexible. With tokens, a GUI has to be aware of the
DSP plugin, and actually mess "directly" with it, one way or another,
whereas the active/passive solution is just a control output feature.
When you grab a knob, the control becomes active. When you let go, it
goes passive.
DSP plugins could have the active/passive feature as well, although it
would probably always be in the active state, and whatever you
connect a control out from a DSP plugin to will most probably ignore
it anyway. I think it's a good idea to make the feature universal,
though, as it makes GUI plugins less special, and allows event
processor plugins to be inserted between the GUI and the automation
sequencer without screwing things up. It also allows the automation
sequencer to appear as a normal plugin, whether it actually is a
plugin, or integrated with the host.
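To make that a bit more concrete, here's a rough sketch in C of what
an automation sequencer sitting between a GUI and a DSP plugin might
do with the active/passive scheme. All the names and structs are made
up for illustration; nothing like this is specified anywhere yet:

    #include <stdio.h>

    /* Hypothetical control events coming from the GUI "plugin". */
    typedef enum
    {
        CTRL_VALUE,     /* a (ramped) value change */
        CTRL_ACTIVE,    /* the user grabbed the knob */
        CTRL_PASSIVE    /* the user let go of the knob */
    } ctrl_event_type;

    typedef struct
    {
        ctrl_event_type type;
        float           value;
    } ctrl_event;

    /* One control, as seen by the automation sequencer. */
    typedef struct
    {
        int   gui_active;   /* GUI output marked "active"? */
        int   recording;    /* host is in record mode */
        float track_value;  /* value from the automation track */
    } seq_control;

    /* Returns the value to forward to the DSP plugin. */
    static float seq_handle(seq_control *c, const ctrl_event *ev)
    {
        switch(ev->type)
        {
          case CTRL_ACTIVE:     /* "punch in" */
            c->gui_active = 1;
            break;
          case CTRL_PASSIVE:    /* "punch out" */
            c->gui_active = 0;
            break;
          case CTRL_VALUE:
            if(c->gui_active)
            {
                if(c->recording)
                    c->track_value = ev->value; /* record it */
                return ev->value;               /* GUI -> DSP */
            }
            break;
        }
        /* GUI passive: play back the automation track as usual. */
        return c->track_value;
    }

    int main(void)
    {
        seq_control cutoff = { 0, 1, 0.25f };
        ctrl_event grab = { CTRL_ACTIVE, 0.0f };
        ctrl_event move = { CTRL_VALUE, 0.8f };
        seq_handle(&cutoff, &grab);
        printf("forwarded: %f\n", seq_handle(&cutoff, &move));
        return 0;
    }

The DSP plugin never sees the GUI directly; it just gets one stream of
control events from whatever happens to sit in front of it.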
> * Why use C++ if you're actually writing C? If it won't
>   compile on a C compiler, it's *not* C.
> It was for the structured naming facilities offered by C++
> (namespaces).
Yeah. But that makes it C++, since it definitely won't compile with a
C compiler.
> It makes it possible to use short names when using the
> right scope and to be sure that these names wouldn't
> collide with other libraries.
You can do the same by putting things in structs - but indeed, there
is a problem with enums and constants.
However, namespaces don't really solve this; they just give it a
different syntax and some type checking. The latter is nice, of
course, but writing SomeNamespace::SomeSymbol instead of
SomeNamespace_SomeSymbol... well, what's the point, really?
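For the record, this is roughly what I mean by "putting things in
structs" - plain C, with made-up names:

    #include <stdio.h>

    /* Prefixed constants; the C counterpart of myapi::BLOCK_SIZE_MAX. */
    enum
    {
        MYAPI_BLOCK_SIZE_MAX = 4096,
        MYAPI_MAX_PINS = 32
    };

    /* "Methods" grouped in a struct give scoped access: api->process() */
    typedef struct
    {
        int (*process)(float *buf, unsigned frames);
        int (*set_control)(int index, float value);
    } MYAPI_vtable;

    static int my_process(float *buf, unsigned frames)
    {
        (void)buf;
        return (int)frames;
    }

    static int my_set_control(int index, float value)
    {
        printf("control %d = %f\n", index, value);
        return 0;
    }

    static const MYAPI_vtable myapi = { my_process, my_set_control };

    int main(void)
    {
        float buf[MYAPI_BLOCK_SIZE_MAX];
        myapi.set_control(0, 0.5f);
        return !myapi.process(buf, 16);
    }

Types and constants end up prefixed either way; about the only thing
namespaces really add is being able to drop the prefix inside a
"using" scope.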
[...VST style dispatcher...]
> But it's the easiest way to :
> 1) Ensure the bidirectional compatibility across API versions
I actually think having to try to *call* a function to find out
whether it's supported or not is rather nasty... I'd rather take a
look at the version field of the struct and choose what to do on a
higher level, possibly by using different wrappers for different
plugin versions. Likewise with plugins; if a plugin really supports
older hosts, it could select implementations during initialization,
rather than checking every host call in the middle of the battle.
> 2) Track every call when debugging. By having a single entry
> point you can monitor call order, concurrency, called
> functions, etc.
Yeah, that can be handy, but you might as well do that on the wrapper
level. And monitoring multiple entry points isn't all that hard
either.
> The system is simple and can be wrapped to your solution at
> a very low cost (just bound checking and jump in table)
Why bother? That's exactly what a switch() becomes when you compile
it...
Anyway, for plugins, it seems simpler to just fill in a struct and
stamp an API version on it, and then not worry more about it. I'd
rather have as little extra logic in plugins as possible.
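Something like this, that is - the descriptor layout is completely
hypothetical:

    #include <stdio.h>

    #define MYAPI_VERSION_1  1
    #define MYAPI_VERSION_2  2

    /* Hypothetical plugin descriptor, filled in once by the plugin. */
    typedef struct
    {
        unsigned api_version;
        int (*process)(float *out, unsigned frames);
        /* Added in version 2; a host must not look at this
         * for v1 plugins. */
        int (*process_events)(const void *events, unsigned count);
    } plugin_descriptor;

    static int dummy_process(float *out, unsigned frames)
    {
        (void)out;
        return (int)frames;
    }

    /* The host checks the version once, up front, and picks a
     * wrapper - instead of probing individual opcodes at run time. */
    static void host_register(const plugin_descriptor *d)
    {
        if(d->api_version >= MYAPI_VERSION_2)
            printf("v2 plugin: events go straight to the plugin\n");
        else
            printf("v1 plugin: host emulates event delivery\n");
    }

    int main(void)
    {
        plugin_descriptor d1 = { MYAPI_VERSION_1, dummy_process, NULL };
        plugin_descriptor d2 = { MYAPI_VERSION_2, dummy_process, NULL };
        host_register(&d1);
        host_register(&d2);
        return 0;
    }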
[...]
> * Hosts assume all plugins to be in-place broken. Why?
> * No mix output mode; only replace. More overhead...
> There are several reasons for these specifications :
> 1) Copy/mix/etc overhead on plug-in pins is *very* light
> on today's GHz computers. Think of computers in 5, 10
> years...
Yeah, I'm thinking along those lines as well, but I still think it's a
problem with simple/fast plugins. (EQ inserts and the like.) It still
becomes significant if you have lots of those in a virtual mixer or
similar.
> This would have been an issue if the plug-ins were designed
> to be building blocks for a modular synth, requiring
> hundreds of them per effect or instrument. However this is
> not the goal of PTAF, which is intended to host mid- to
> coarse-grained plug-ins. A modular synth API would be
> completely different.
I think this separation is a mistake. Of course, we can't have an API
for complex monolith synths scale perfectly to modular synth units,
but I don't see why one should explicitly prevent some scaling into
that range. Not a major design goal, though; just something to keep
in mind.
> 2) This is the most important reason: programmer failure.
> Having several similar functions differing only by += vs
> = is massively prone to bugs, and it has been confirmed
> in commercial products released by major companies.
> Implementing only one function makes things clear and
> doesn't require error-prone copy/paste or semi-automatic
> code generation.
Yeah, that's a good point.
> 3) Why no in-place processing ? Same reasons as above,
> especially when using block-processing. Moreover, allowing
> "in-place" modes can lead to weird configurations like
> crossed channels, without even considering the multi-pin /
> multi-channel configurations with different numbers
> of inputs and outputs.
Good point. Not all plugins process everything one frame at a time.
Especially not big monoliths like polyphonic synths...
I still think plugins should be able to just have a simple hint to say
"I'm in-place safe!" It's easy enough for hosts to check for this and
make use of it when it can save cycles. There's nothing to it for
plugins, and hosts can just assume that all plugins are in-place
broken, if that makes things easier.
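For illustration, the host side of that hint could be as simple as
this (flag name and descriptor made up):

    #include <stdio.h>
    #include <string.h>

    #define HINT_INPLACE_SAFE 0x0001

    typedef struct
    {
        unsigned hints;
        void (*process)(const float *in, float *out, unsigned frames);
    } fx_plugin;

    static void host_run(fx_plugin *p, float *in, float *scratch,
            unsigned frames)
    {
        if(p->hints & HINT_INPLACE_SAFE)
            p->process(in, in, frames);     /* reuse the buffer */
        else
        {
            p->process(in, scratch, frames);
            memcpy(in, scratch, frames * sizeof(float));
        }
    }

    /* Trivial "plugin" that happens to be in-place safe. */
    static void gain(const float *in, float *out, unsigned frames)
    {
        unsigned i;
        for(i = 0; i < frames; ++i)
            out[i] = in[i] * 0.5f;
    }

    int main(void)
    {
        float buf[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
        float scratch[4];
        fx_plugin p = { HINT_INPLACE_SAFE, gain };
        host_run(&p, buf, scratch, 4);
        printf("%f\n", buf[0]);
        return 0;
    }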
> Also, leaving the input buffers intact is useful for
> internal bypass or dry mix.
Yeah, but when you need that, you just don't reuse the input buffers
for the outputs. You're using the same callback for in-place
processing, only passing the same buffers for some in/out pairs.
> 4) One function with one functioning mode is easier to
> test. When developing a plug-in, your test hosts probably
> won't use every function in every mode, so there are bugs
> which cannot be detected. In concrete terms, this is a
> real problem with VST.
Yeah, I know. That's partly because the lack of a standardized output
gain control makes the adding process call essentially useless to
most hosts, but the point is still valid. (And it's the main reason
why I made the FX plugin "SDK" in Audiality emulate all modes with
whatever the plugin provides. I can just pick one of them and still
have the plugin work everywhere.)
> I think it's a good idea that an API specification takes
> care about programming errors when performance is not
> affected much. Indeed, let's face it, none of us can avoid
> bugs when coding even simple programs. The final user will
> attach value to reliable software, for sure.
Yeah. It might still be debatable whether or not the extra copying and
buffers are a real performance issue, but let's just say, I'm not
nearly as interested in these multiple calls now as I was just one or
two years ago.
I do believe a single "in-place capable" flag would be a rather nice
thing to have, though, as it's something you get for free with many
FX plugins, and because it's something hosts can make use of rather
easily if they want to. Not an incredible gain, maybe, but the
feature costs next to nothing.
> * Buffers 16 byte aligned. Sufficient?
> I don't know. We could extend it to 64 ? From what I've
> read about the 64-bit architecture of incoming CPUs, 128-bit
> registers are still the widest.
There have been 256 bit GPUs for a while, although I'm not sure how
"256 bit" they really are... Anyway, the point is that since SIMD
extensions and memory busses are 128 bit already, it probably won't
be long before we see 256 bits.
Anyway, I'd think the alignment requirements are really a function of
what the plugin uses, rather than what's available on the platform. However,
managing this on a per-plugin basis seems messy and pointless.
How about defining buffer alignment as "what works well for whatever
extensions that are available on this hardware"...?
Would mean 16 in most hosts on current hardware, but hosts would just
up it to 32 or something when new CPUs arrive, if it gains anything
for plugins that use the new extensions.
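That is, something like this on the host side (using posix_memalign()
here; other platforms have their own aligned allocators):

    /* Needed for posix_memalign() in strict compilation modes. */
    #define _POSIX_C_SOURCE 200112L
    #include <stdlib.h>
    #include <stdio.h>

    static size_t host_buffer_alignment(void)
    {
        /* Could query CPU features here; 16 covers SSE/SSE2
         * and AltiVec on current hardware. */
        return 16;
    }

    static float *host_alloc_buffer(unsigned frames)
    {
        void *p = NULL;
        if(posix_memalign(&p, host_buffer_alignment(),
                frames * sizeof(float)))
            return NULL;
        return (float *)p;
    }

    int main(void)
    {
        float *buf = host_alloc_buffer(256);
        printf("buffer at %p (alignment %zu)\n",
                (void *)buf, host_buffer_alignment());
        free(buf);
        return 0;
    }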
> Enforcing 16-byte alignment would ensure that SIMD
> instructions can be used, and with maximum efficiency.
> SSE/SSE2 still works when data is not aligned to 16 but
> seems slower.
Yeah, as soon as you have misalignment with the memory bus width,
there will be performance issues in some cases, since you can't write
to "odd" addresses without reading two memory words, modifying them
and writing them back. The cache and memory subsystem normally deals
with this through write combining, but obviously, that doesn't work
too well unless you're doing contiguous blocks, and there's always a
slight hit at the starts and ends of blocks. In real life, it also
seems that many CPUs run slightly slower even when write combining
*should* theoretically be able to deal with it. (I think early MMX
had serious trouble with this, but SSE(2) is probably improved in
that regard as well.)
> However AltiVec produces bus errors, and this instruction
> set is the only way to get decent performance with PPCs.
Ouch. Like old 68k CPUs... *heh* (Except 030+, which I learned the
hard way, looking for bugs that "simply cannot happen"... :-)
> So for me, 16 bytes is the minimum. It could be increased,
> but to which size ? It requires a visionary here ;)
I think it's platform dependent and subject to change over time, and
thus shouldn't be strictly specified in the API.
[...]
> I think about replacing all these redundant functions
> by just one, callable only in Initialized state. It
> would be kinda similar to the audio processing opcode,
> but would only receive events, which wouldn't be dated.
That's one way... I thought a lot about that for MAIA, but I'm not
sure I like it. Using events becomes somewhat pointless when you
change the rules and use another callback. Function calls are cleaner
and easier for everyone for this kind of stuff.
Anyway, I'm nervous about the (supposedly real time) connection event,
as it's not obvious that any plugin can easily handle connections in
a real time safe manner. This is mostly up to the host, I think, but
it's worth keeping this stuff in mind. As soon as something uses
events, one has to consider the implications. A situation where
inherently non RT safe actions *must* be performed in the process()
call effectively makes the API useless for serious real time
processing. I'd really rather not see all hosts forced to resort to
faking the audio thread to plugins in non-RT threads, just to be able
to use "everyday features" without compromising the audio thread.
[...]
> * Ramping API seems awkward...
> What would you propose ? I designed it to let plug-in
> developers bypass the ramp correctly. Because there are
> chances that ramps would be completely ignored by many
> of them. Indeed it is often preferable to smooth the
> transitions according to internal plug-in rules.
This is what I do in Audiality, and it's probably the way it'll be
done in XAP:
* There is only one event for controls; RAMP.
* The RAMP event brings an "aim point", expressed as
<target_value, duration>.
* A RAMP event with a duration of 0 instantly sets the
control value to 'target_value', and stops any
ramping in progress. (There are various reasons for
this special case. Most importantly, it's actually
required for correct ramping.)
* A plugin is expected to approximate linear ramping
from the current value to the target value.
* A plugin may ignore the duration field and always
handle it as if it was zero; ie all RAMP events are
treated as "SET" operations.
* What happens if you let a plugin run beyond the
last specified "aim point" is UNDEFINED! (For
implementational reasons, most plugins will just
keep ramping with the same slope "forever".)
This is
* Easy to handle for senders and receivers.
* Involves only one receiver side conditional.
(The duration == 0 test.) There are no
conditionals on the sender side.
* Allows non-ramped outputs to drive ramped
inputs and vice versa.
* Allows senders to let ramps run across
several blocks without new events, if desired.
* Makes accurate, click free ramping simple;
senders don't have to actively track the
current value of controls.
* The "aim point" approach avoids rounding
error build-up while ramping, since every
new RAMP event effectively gives the receiver
a new absolute reference.
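Here's a minimal receiver side sketch of that scheme, in C - the
struct and function names are mine, obviously:

    #include <stdio.h>

    typedef struct
    {
        float value;    /* current control value */
        float delta;    /* per-sample increment while ramping */
    } ramped_control;

    /* Handle a RAMP(target, duration) event. duration is in sample
     * frames; 0 means "set instantly and stop ramping". A plugin
     * that doesn't care could just treat every event as duration 0. */
    static void ctrl_ramp(ramped_control *c, float target,
            unsigned duration)
    {
        if(!duration)
        {
            c->value = target;
            c->delta = 0.0f;
        }
        else
            c->delta = (target - c->value) / (float)duration;
    }

    /* Advance the control one sample. If no further event arrives,
     * we just keep ramping with the same slope - behavior past the
     * last aim point is undefined anyway. */
    static float ctrl_run(ramped_control *c)
    {
        c->value += c->delta;
        return c->value;
    }

    int main(void)
    {
        ramped_control gain = { 0.0f, 0.0f };
        unsigned i;

        ctrl_ramp(&gain, 1.0f, 8);      /* reach 1.0 in 8 frames */
        for(i = 0; i < 8; ++i)
            printf("%u: %f\n", i, ctrl_run(&gain));
        ctrl_ramp(&gain, 1.0f, 0);      /* sender stops the ramp */
        return 0;
    }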
[...ramping...]
> Hmmm... I start to see your point. You mean that it
> is impossible for the plug-in to know if the block ramp
> is part of a bigger ramp, extending over several blocks ?
Yes, that's one part of it. It would have plugins that approximate
linear ramping (say, filters that ramp the internal coefficients
instead) depend on the block size, which is a bad idea, IMO. Also, it
drastically increases event traffic, since senders will have to post
new RAMP events *both* for block boundaries *and* the nodes of the
curves they're trying to implement.
Finally, if plugins are supposed to stop ramping at a specific point,
there are two issues:
1) This violates the basic principles of timestamped
events to some extent, as you effectively have a
single event with two timestamps, each with an
explicit action to perform.
2) Plugins will have to implement this, which means
they have to enqueue events for themselves, "poll"
a lot of internal counters (one per control with
ramping support), or similar.
I think this is completely pointless, as senders will have to maintain
a "solid" chain of events anyway, or there isn't much point with
ramping. That is, a sender knows better than the receiver when to do
what, and more specifically, it knows what to do at or before each
aim point. It just sends a RAMP(value, 0) to stop at the target
value, or (more commonly) another normal RAMP event, to continue
ramping to some other value.
[...voice IDs...]
Virtual Voice IDs. Basically the same thing, although with a twist:
Each VVID comes with an integer's worth of memory for the plugin to
use in any way it wants. A synth can then do something like this:
* Host creates a note context for 'vvid':
host->vvids[vvid] = alloc_voice();
// If allocation fails, alloc_voice()
// returns the index of a dead fake voice.
* Host sends an event to voice 'vvid':
MY_voice *v = voices[host->vvids[vvid]];
v->handle_event(...);
* Host wants to terminate note context 'vvid':
MY_voice *v = voices[host->vvids[vvid]];
v->free_when_done = 1;
// The voice is now free to go to sleep
// and return to the free pool when it's
// done with any release phase or similar.
That is, your look-up becomes two levels of indexing; first using the
VVID in the host managed VVID entry table, and then using your very
own index on whatever array you like.
Obviously, senders will have to allocate VVIDs from the host, and the
host will have to manage an array of VVID entries for plugins to use.
If you consider a synth playing a massive composition with lots of
continuous voice controlling going on, I think this makes a huge
difference, since *every* voice event is affected. If you have 128
voices, looking through on average 50% of them (or less if you use a
hash table) for a few thousand events per second is wasting a whole
lot of cycles.
> * Hz is not a good unit for pitch...
> Where have you read that pitch was expressed in Hz ?
I might have misinterpreted some table. It said Hz, as if it was the
unit of the pitch control...
> Pitch unit is semi-tone, relative or absolute, depending
> on the context, but always logarithmic (compared to a Hz
> scale).
I see. (Same as Audiality, although I'm using 16:16 fixed point there,
for various reasons.)
Anyway, IMHO, assuming that the 12tET scale is somehow special enough
to be regarded as the base of a generic pitch unit is a bit
narrow-minded. What's wrong with 1.0/octave?
That said, I can definitely see the use of something like 1.0/note,
but that's a different thing. It just *happens* to map trivially to
12.0/octave linear pitch, but it is in no way equivalent. Units could
be selected so that "note pitch" maps directly to linear pitch, so
you don't need to perform actual conversion for 12tET (which *is*
rather common, after all), but note pitch is still not the same
thing as linear pitch.
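Just to be explicit about the units (assuming 1.0/octave linear pitch
with 0.0 = A4 = 440 Hz, and MIDI style note numbers for the 12tET
"note pitch" - my conventions for the example, not PTAF's):

    #include <stdio.h>
    #include <math.h>

    static double linear_pitch_to_hz(double pitch)
    {
        return 440.0 * pow(2.0, pitch);
    }

    static double note_to_linear_pitch(double note)
    {
        /* Only valid for 12tET; other scales need a real mapping. */
        return (note - 69.0) / 12.0;
    }

    int main(void)
    {
        double note = 60.0;     /* middle C in 12tET */
        double pitch = note_to_linear_pitch(note);
        printf("note %.1f -> %.4f octaves -> %.2f Hz\n",
                note, pitch, linear_pitch_to_hz(pitch));
        return 0;
    }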
[...]
> If you want to synthesize a piano with its "weird" tuning,
> remap the Base Pitch to this tuning to get the real pitch
> in semi-tones, possibly using a table and interpolation.
This is where it gets hairy. We prefer to think of scale tuning as
something that is generally done by specialized plugins, or by
whatever sends the events (host/sequencer) *before* the synths. That
way, every synth doesn't have to implement scale support to be
usable.
As it is with VST, most plugins are essentially useless to people that
don't use 12tET, unless multiple channels + pitch bend is a
sufficient work-around.
[...]
> Now you can use the Transpose information for pitch bend,
> here on a per-note basis (unlike MIDI where you had to
> isolate notes into channels to pitch-bend them
> independently). Thus final pitch can be calculated very
> easily, and log-sized wavetable switching/crossfading
> remains fast.
And a continuous pitch controller *could* just use a fixed value for
Base Pitch and use Transpose as a linear pitch control, right?
> I think we can cover most cases by dissociating both
> pieces of information.
Yeah. The only part I don't like is assuming that Base Pitch isn't
linear pitch, but effectively note pitch, mapped in unknown ways.
Synths are supposed to play the *pitch* you tell them. Scale
converters integrated in synths are an idea closely related to
keyboard controllers, but it doesn't really work for continuous pitch
controllers. When it comes to controlling their instruments, scales
are not relevant to vocalists or players of the violin, fretless bass
and whatnot. They just think of a pitch and sing it or play it. The
connection with scales is in their brains; not in the instruments.
> * Why [0, 2] ranges for Velocity and Pressure?
[...]
Right, but MIDI is integers, and the range defines the resolution.
With floats, why have 2.0 if you can have 1.0...?
> * TransportJump: sample pos, beat, bar. (Why not just ticks?)
> But musically speaking, what is a tick, and to what is it
> relative ?
That's defined by the meter.
Anyway, what I'm suggesting is basically that this event should use
the same format as the running musical time "counter" that any tempo
or beat sync plugin would maintain. It seems easier to just count
ticks, and then convert to beats, bars or whatever when you need
those units.
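That is, something along these lines - the resolution and the meter
struct are just examples:

    #include <stdio.h>

    #define TICKS_PER_BEAT  1920    /* arbitrary sequencer resolution */

    typedef struct
    {
        unsigned beats_per_bar;     /* from the meter, e.g. 4 for 4/4 */
    } meter;

    /* Derive bar/beat/tick from the running tick counter on demand. */
    static void ticks_to_bbt(unsigned ticks, const meter *m,
            unsigned *bar, unsigned *beat, unsigned *tick)
    {
        unsigned beats = ticks / TICKS_PER_BEAT;
        *tick = ticks % TICKS_PER_BEAT;
        *bar  = beats / m->beats_per_bar;
        *beat = beats % m->beats_per_bar;
    }

    int main(void)
    {
        meter m = { 4 };
        unsigned bar, beat, tick;
        unsigned pos = 5 * 4 * TICKS_PER_BEAT   /* 5 full 4/4 bars */
                + 2 * TICKS_PER_BEAT + 960;     /* + 2.5 beats */
        ticks_to_bbt(pos, &m, &bar, &beat, &tick);
        printf("bar %u, beat %u, tick %u\n", bar, beat, tick);
        return 0;
    }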
> If you want to locate a position in a piece, you need two
> pieces of information: the absolute date and the musical
> date, if relevant.
Yes. And bars and beats is just *one* valid unit for musical time.
There's also SMPTE, hours/minutes/seconds, HDR audio time and other
stuff. Why bars and beats of all these, rather than just some linear,
single value representation?
Could be seconds or ticks, but the latter makes more sense as it's
locked to the song even if the tempo is changed.
> Bar has a strong rhythmic value and referring to it is
> important.
Why? If you want more info, just convert "linear musical time" into
whatever units you want. The host should provide means of translating
back and forth, querying the relevant timeline.
> Beat is the musical position within the bar, measured ...
> in beats. The tick trick is not needed since the beat value
> is fractional (~infinite accuracy).
Accuracy is *not* infinite. In fact, you'll have fractions with more
decimals than float or double can handle even in lots of hard
quantized music, and the tick subdivisions used by most sequencers
won't produce "clean" float values for much more than a few values.
Whether or not this is a real problem is another matter. It would be
for editing operations inside a sequencer, but if you're only playing
back stuff, you're fine as long as you stay sample accurate. And if
you record something from an event processor plugin, you're
restricted to sample accurate timing anyway, unless you use
non-integer event timestamps.
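A trivial example of what I mean: a triplet step (1/3 of a beat) has
no exact binary representation, so a float beat counter drifts where
a tick counter would stay exact:

    #include <stdio.h>

    int main(void)
    {
        float pos = 0.0f;
        int i;
        /* Accumulate 3000 triplet steps; the exact result is
         * 1000 beats, and a tick counter would hit it exactly. */
        for(i = 0; i < 3000; ++i)
            pos += 1.0f / 3.0f;
        printf("float position: %f (exact: 1000)\n", pos);
        return 0;
    }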
[...]
> * Why have normalized parameter values at all?
>   (Actual parameter values are [0, 1], like VST, but
>   then there are calls to convert back and forth.)
> Normalized parameter value is intended to show something
> like the potentiometer course, in order to have significant
> sound variations along the whole range, along with constant
> control accuracy. This is generally what the user wants,
> unless they're a masochist. Who wants to control IIR
> coefficients directly ? (ok, this example is a bit extreme
> but you get my point). Also automation curves make sense,
> they are not concentrated in 10% of the available space any
> more.
The host would of course know the range of each control, so the only
*real* issue is that natural control values mean more work for hosts.
There's another problem, though: In some cases, there are no obvious
absolute limits. For this reason, we'd like to support "soft limits",
that are only a recommendation or hint. Of course, that *could* be
supported with hardcoded [0, 1] as well, just having flags indicating
whether the low and high are hard limits or not.
> So parameter remapping is probably unavoidable, being
> done by the host or by the plug-in. Let the plug-in do
> it, it generally knows better what to do :)
Yeah. Doing it in one place (the host SDK), once and for all, sounds
better to me, though, and happens to be doable with the same
information hosts need to construct GUIs for GUI-less plugins.
A thought: You can actually have it both ways. For things that the
lin/log, range, unit etc approach cannot cover, plugins can provide
callbacks for conversion. If the standard hints fit, the host SDK
provides all the host needs. If not, the host SDK forwards the
requests to the plugin.
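Roughly like this, that is - the hint struct and all names are
invented for the example:

    #include <stdio.h>
    #include <math.h>

    typedef struct
    {
        int     is_log;                   /* 0 = linear, 1 = logarithmic */
        double  min, max;                 /* natural range */
        int     min_is_soft, max_is_soft; /* soft limits: hints only */
        /* Escape hatch for mappings the hints can't express: */
        double  (*to_natural)(double normalized);
    } control_hints;

    /* Host SDK side: hints first, plugin callback as fallback. */
    static double sdk_to_natural(const control_hints *h, double normalized)
    {
        if(h->to_natural)
            return h->to_natural(normalized);   /* ask the plugin */
        if(h->is_log)
            return h->min * pow(h->max / h->min, normalized);
        return h->min + (h->max - h->min) * normalized;
    }

    int main(void)
    {
        /* A frequency control from 20 Hz to 20 kHz, log scaled: */
        control_hints freq = { 1, 20.0, 20000.0, 0, 0, NULL };
        printf("50%% of the knob = %.1f Hz\n",
                sdk_to_natural(&freq, 0.5));
        return 0;
    }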
> Making the parameters available under a "natural" form
> is intended to facilitate numerical display (48% of the
> knob course is not relevant information for a gain
> display, but -13.5 dB is) and copy/paste operations,
> as well as parameter exchange between plug-ins.
Yes, of course.
> * The "save state chunk" call seems cool, but what's
>   the point, really?
> This is not mandatory at all, there are a lot of plug-ins
> which can live without it. It is intended to store the
> plug-in instance state when you save the host document.
Ok. Then it's not what I thought - and not all that interesting
either, IMHO.
> Let's take a simple example: how would you make a sampler
> without it ? You need to store sample information somewhere
> (the data itself or pathnames). It is not possible to do it
> in a global storage, like a file pointed to by an environment
> variable, the Windows registry, Mac preference files or
> whatever.
What's wrong with text and/or raw data controls?
> Actually the content is up to you. GUI data (current
> working directory, tab index, skin selection, etc),
> control data (LFO phases, random generator states, MIDI
> mappings, parameters which cannot be represented in the
> classic way...), audio data (samples, even delay line
> contents)... Everything you find useful for the user,
> saving him repetitive operations each time he loads a
> project.
Speaking of GUIs (as external processes), these would effectively
function like any plugins when connected to the host, and as such,
they'd have their control values saved as well. So, just have some
outputs for tab index and whatnot, and those will be saved as well.
BTW, this makes me realize something that's perhaps pretty obvious,
but should be pointed out: When an output is connected, the plugin
must send an initial event to set the connected input to the current
output value. (Otherwise nothing will happen until the output
*changes* - which might not happen at all.)
What I just realized is that this gives us a perfect bonus feature.
You can read the current output values from any plugins by just
reconnecting the outputs and running process() once. That is, if you
only want to save the current values at some point, you don't have to
connect all outputs and process all data they generate. Just connect
and take a peek when you need to.
[...]
> Token is managed by the host and requested explicitly
> by the plug-in GUI. There is no need for hidden
> communication here. Or am I missing something ?
Actually, I was thinking about the fact that GUIs talk directly to
their DSP parts. I just don't like the idea of multiple senders being
connected to the same input at the same time, no matter how it's
maintained. And for other reasons, I don't like the idea of GUIs
talking directly to DSP plugins at all, since it doesn't gain you
anything if the GUI is out-of-process.
It seems easier and cleaner to make the GUI part act more like a
plugin than a host, and it also opens up a lot of possibilities
without explicit API support.
For example, you could implement visualization plugins as *only* GUI
"plugins". The host could just do it's usual gateway thing it does
for GUIs that belong to DSP plugins, but without the special
connection management part. Just dump the gateway representation of
the plugin in the net, and let the user hook it up as if it was a
normal DSP plugin. If there's audio, it'll be async streaming back
and forth, just like with events. Latency is "as low as possible" (ie
no buffer queues), so this isn't usable for out-of-process audio
processing, but it *is* usable for visualization.
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---