Hello everybody,
I'm one of the PTAF authors and I've just subscribed to
this list, since the standard is being discussed here. I
will try to answer your questions and collect feedback.
I apologize for this long mail.
* Three states; created, initialized and activated.
This may be useful if plugins have to be instantiated
for hosts to get info from them. Why not just provide
metadata through the factory API?
I preferred to keep the factory API quite simple. In most
cases, the factory would produce only one kind of plug-in.
Detaching the plug-in capabilities from the plug-in itself
may become a pain. With factory-provided metadata and only
two states, the trivial solution is to instantiate the
plug-in internally to get its properties. If that brings it
directly into the Initialized state, it will likely increase
the overhead.
There is also an advantage when the plug-in is a wrapper,
as the factory can just reflect a directory content,
possibly filtered and checked.
Lastly, plug-in properties may depend on contextual data,
like OS or host versions, date, CPU, environment variables,
etc. - things sometimes useful for copy protection.
Hmm, that makes me think of the issue of mutual recursion
in property checking (host-plug-host-plug, etc.) => infinite
loops.
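
To make the trade-off concrete, here is a rough sketch of
how I picture it (all names below are made up for the
example, they are not the actual PTAF declarations): the
factory only creates, and a Created instance is cheap
enough to be queried and thrown away.

#include <memory>
#include <string>

enum class State { Created, Initialized, Activated };

struct PluginProperties
{
    std::string name;
    int         num_audio_ins  = 0;
    int         num_audio_outs = 0;
};

class Plugin
{
public:
    virtual ~Plugin () = default;
    virtual PluginProperties get_properties () const = 0; // legal in the Created state
    virtual void             init ()     = 0; // Created -> Initialized
    virtual void             activate () = 0; // Initialized -> Activated
};

class Factory
{
public:
    virtual ~Factory () = default;
    virtual std::unique_ptr <Plugin> create () = 0; // returns a Created instance
};

// Host-side scan: properties are read without paying for init ().
PluginProperties scan_plugin (Factory &factory)
{
    auto plugin = factory.create ();
    return plugin->get_properties (); // instance discarded, never initialized
}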
* Assuming that audio and sequencer stuff is in different
threads, and even have plugins deal with the sync sounds
like a *very* bad idea to me.
I think there is a misunderstanding here. The "Sequencer"
thread is not a thread sending notes or events or anything
related to the sequence. It is the generic thread for the
host, being neither the GUI thread nor the audio thread. The
word "Sequencer" is badly chosen, someone already told me
that too. If anyone has a better idea for its name...
Of course I'm 100% for having audio and sequencing in the
same thread. That's why there is only one opcode to process
audio and sequence data at once. However... some VST hosts
actually dissociate them, allowing the process() function to
overlap with setParameter() or ProcessEvents(), which is
1) confusing, 2) a source of hard-to-track bugs, and
3) difficult to synchronize.
* GUI code in the same binaries is not even possible on
some platforms. (At least not with standard toolkits.)
I'm personally not familiar with Linux GUI toolkits (I'm
confused by Gnome, KDE, X, Berlin, etc., sorry for
my ignorance), is there a problem with them ? In this
case, wouldn't it be possible to launch the GUI module from
the plug-in ? What would you recommend ?
* Agreed, this is not ideal, ohm force are a host based
processing shop, so haven't thought about these issues.
The TDM people will obviously have different ideas.
Yes, we have thought about it. TDM is not a problem here
because Digidesign is absolutely opposed to wrappers or
non-Digi-controlled standards, it is part of the SDK
license agreement :). UAD cards are a more interesting
case because their plug-ins show up as VSTs. It seems that
the current code architecture isn't an issue for them.
Actually this is mostly a design choice. If the plug-in
must assume that its GUI and DSP code may run in
different locations (on two computers, for example), both
the API and the plug-ins should be designed to take that
into account, especially instantiation and communication
between both parts. The current design is oriented toward
a single object, making the API simpler and allowing
flexible and hidden communication between the GUI and the
plug-in core. Anyway, the idea is indubitably interesting.
* Using tokens for control arbitration sounds pointless.
Why not just pipe events through the host? That turns
GUI/DSP interaction into perfectly normal control routing.
Several APIs (MAS for example, and recent VST extensions)
use a similar system, and it works like a charm. I don't
understand how just sending events to the host would
work. If several "parameter clients" act at the same
time, how would you prevent the parameter from jumping
continuously ?
One solution is to make the host define implicit priority
levels for these clients, for example 1 = automation,
2 = remote controls, 3 = main GUI. This is good enough
for parameter changes arriving simultaneously, but is
not appropriate for "interleaved" changes.
Another solution is to use tokens. It can act like the
first solution, but it also allows the host to know that
the user is holding his fingers on a knob, keeping it
at a fixed position, because the plug-in doesn't have to
send any movement information. This is essential when
recording automation in "touch" mode.
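
Here is a minimal sketch of how I see the token mechanism
on the host side (names and shape are purely illustrative,
not the PTAF opcodes): while a client holds the token for a
parameter, the host knows the parameter is "touched" even
if no value changes arrive, and other sources are ignored.

#include <map>

enum class Client { Automation, RemoteControl, MainGui };

class ParamArbiter
{
public:
    bool request_token (int param, Client who)
    {
        auto it = _owner.find (param);
        if (it == _owner.end ())
        {
            _owner [param] = who;  // free: grant the token
            return true;
        }
        return it->second == who;  // already owned by this client?
    }
    void release_token (int param, Client who)
    {
        auto it = _owner.find (param);
        if (it != _owner.end () && it->second == who)
        {
            _owner.erase (it);     // "touch" ends: automation may resume
        }
    }
    bool accepts_change (int param, Client who) const
    {
        auto it = _owner.find (param);
        return it == _owner.end () || it->second == who;
    }
private:
    std::map <int, Client> _owner; // param index -> current token holder
};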
* Why use C++ if you're actually writing C? If it won't
compile on a C compiler, it's *not* C.
It was for the structured naming facilities offered by C++
(namespaces). It makes it possible to use short names when
in the right scope and to be sure that these names
won't collide with other libraries.
I agree that it's not essential and everything could be
written in C.
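
For example, something like this (purely illustrative, not
the actual PTAF headers):

namespace ptaf
{
    const int ERR_OK = 0;
    struct Event { int type; double time; };
}

void example ()
{
    using namespace ptaf;
    Event e { 0, 0.0 }; // short name, unambiguous in this scope
    int   ret = ERR_OK; // doesn't collide with another library's ERR_OK
    (void) e;
    (void) ret;
}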
* I think the VST style dispatcher idea is silly
Performance of the calls is not a critical issue here.
In the audio thread, there is roughly only one call per
audio block. And the final plug-in or host developer will
likely use a wrapper, adding more overhead anyway.
But it's the easiest way to:
1) Ensure bidirectional compatibility across API versions
2) Track every call when debugging. By having a single entry
point, you can monitor call order, concurrency, called
functions, etc.
The system is simple and can be wrapped to your solution at
a very low cost (just bounds checking and a jump table).
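
For illustration, a sketch of what I mean (the opcode names
are hypothetical): the single entry point is one place to
trace everything, and the wrapper is only a bounds check
plus an indexed jump.

#include <cstdio>

enum Opcode { OP_INIT, OP_ACTIVATE, OP_PROCESS, OP_NBR_ELT };

typedef long (*OpcodeFunc) (void *plugin, void *params);

long do_init     (void *, void *) { return 0; }
long do_activate (void *, void *) { return 0; }
long do_process  (void *, void *) { return 0; }

static const OpcodeFunc dispatch_table [OP_NBR_ELT] =
{
    &do_init, &do_activate, &do_process
};

// Single entry point: every host call goes through here.
long dispatcher (void *plugin, long opcode, void *params)
{
    std::printf ("opcode %ld\n", opcode);            // one place to log call order
    if (opcode < 0 || opcode >= OP_NBR_ELT)
    {
        return -1;                                   // unknown opcode: newer API version?
    }
    return dispatch_table [opcode] (plugin, params); // bounds check + jump table
}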
* Is using exceptions internally in plugins safe and
portable?
It depends on your compiler. To be specific, the use of
exceptions is allowed within a call only if it has no side
effect for the caller, whatever it is. I think most
C++ compilers are compliant with this rule; it seems to be
the case for all the compilers I've worked with.
The sentence about exceptions came after many plug-in
developer questions on mailing lists: "can I safely use
exceptions in my plug-in ?" The answer generally given was
"Yes, but restricted to certain conditions, I don't know
them that well so I wouldn't recommend their use".
In the PTAF API, we wanted to clarify this point.
And no, C++ is not required for a PTAF plug-in, because it
defines a communication protocol at the lowest possible
level. The C/C++ syntax is just an easy and common way to
write it down.
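
In practice, the "no side effect for the caller" rule boils
down to something like this (just a sketch of one way to
comply, not a PTAF requirement): use exceptions freely
inside, but never let them cross the entry point.

long process_entry (void *plugin, void *params)
{
    (void) plugin;
    (void) params;
    try
    {
        // ... internal C++ code may throw here ...
        return 0;
    }
    catch (...)
    {
        return -1; // translated to an error code, the host never sees the exception
    }
}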
* UTF-8, rather than ASCII or UNICODE.
Actually it IS Unicode, packed in UTF-8. This is the
best of both worlds, because strings restricted to the
ASCII or partial Latin-1 character sets are stored in
UTF-8 exactly as in their "unpacked" form, whereas
"exotic" characters are represented using multi-byte
sequences.
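
A small illustration of the point:

// A pure-ASCII string has exactly the same bytes in ASCII and in UTF-8,
// while a non-ASCII character becomes a multi-byte sequence.
const char ascii_name [] = "Gain";         // 4 bytes, identical in both encodings
const char fancy_name [] = "D\xC3\xA9lai"; // "Délai": 'é' is the 2-byte sequence C3 A9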
* Hosts assume all plugins to be in-place broken. Why?
* No mix output mode; only replace. More overhead...
There are several reasons for these specifications:
1) Copy/mix/etc. overhead on plug-in pins is *very* light
on today's GHz computers. Think of computers in 5 or 10
years... This would have been an issue if the plug-ins
were designed to be building blocks for a modular synth,
requiring hundreds of them per effect or instrument.
However this is not the goal of PTAF, which is intended
to host mid- to coarse-grained plug-ins. A modular synth
API would be completely different.
2) This is the most important reason: programmer failure.
Having several similar functions differing only by += vs
= is massively prone to bugs, and this has been confirmed
in commercial products released by major companies.
Implementing only one function makes things clear and
doesn't require error-prone copy/paste or semi-automatic
code generation.
3) Why no in-place processing ? Same reasons as above,
especially when using block processing. Moreover, allowing
"in-place" modes can lead to weird configurations like
crossed channels, not to mention multi-pin / multi-channel
configurations with different numbers of inputs and
outputs. Also, leaving the input buffers intact is useful
for an internal bypass or dry mix.
4) One function with one functioning mode is easier to
test. When developing a plug-in, your test hosts probably
won't use every function in every mode, so there are bugs
which cannot be detected. In concrete terms, this is a
real problem with VST.
I think it's a good idea for an API specification to take
care of programming errors when performance is not
affected much. Indeed, let's face it, none of us can avoid
bugs when coding even simple programs. The final user will
attach value to reliable software, for sure.
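
To illustrate the convention (the signature below is
hypothetical, not the PTAF prototype): the output is always
overwritten, the input is only read, so there is a single
code path to write and test, and the input stays available
for an internal bypass or dry mix.

void process_block (const float * const in [], float * const out [],
                    int nbr_chn, int nbr_spl)
{
    for (int chn = 0; chn < nbr_chn; ++chn)
    {
        for (int pos = 0; pos < nbr_spl; ++pos)
        {
            const float x = in [chn] [pos]; // input is only read
            out [chn] [pos] = x * 0.5f;     // output is replaced, never mixed (no +=)
        }
    }
}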
* Buffers 16 byte aligned. Sufficient?
I don't know. Should we extend it to 64 ? From what I've
read about the 64-bit architecture of upcoming CPUs, 128-bit
registers are still the widest. Enforcing 16-byte
alignment would ensure that SIMD instructions can be
used, and with maximum efficiency. SSE/SSE2 still works
when data is not aligned to 16 but seems slower. However
Altivec produces a bus error, and this instruction set is
the only way to get decent performance on PPCs.
So for me, 16 bytes is a minimum. It could be increased,
but to which size ? It requires a visionary here ;)
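
On the host side, guaranteeing the alignment costs very
little; a C++17 sketch (the helper names are mine):

#include <cstddef>
#include <new>

float * allocate_buffer (std::size_t nbr_spl, std::size_t align = 16)
{
    // Aligned allocation: safe for SSE/SSE2 and required for Altivec loads.
    void *ptr = ::operator new (nbr_spl * sizeof (float), std::align_val_t (align));
    return static_cast <float *> (ptr);
}

void free_buffer (float *ptr, std::size_t align = 16)
{
    ::operator delete (ptr, std::align_val_t (align));
}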
* Events are used for making connections. (But there's
also a function call for it...)
This is the PTAF feature which embarrasses me... Having
two ways to control one "parameter" is awful in my
opinion.
I'm thinking about replacing all these redundant functions
with just one, callable only in the Initialized state. It
would be somewhat similar to the audio processing opcode,
but would only receive events, which wouldn't be dated.
* Tail size. (Can be unknown!)
Yes. Think about feedback delay lines with resonant
filters or waveshapers, etc. It may be only a few
ms or an infinite time. The tail size is not essential
because the plug-in can tell at each processing call
whether it has generated sound or not. It is rather a
hint for the host.
* "Clear buffers" call. Does not kill active
notes...
I can't remember exactly why I wrote that ??? A fair
requirement would be "can be called only when there
aren't any active notes any more".
* Ramping API seems awkward...
What would you propose ? I designed it to let plug-in
developers bypass the ramp correctly, because chances are
that ramps would be completely ignored by many of them.
Indeed, it is often preferable to smooth the transitions
according to internal plug-in rules.
* It is not specified whether ramping stops automatically
at the end value, although one would assume it should,
considering the design of the interface.
Hmmm... I'm starting to see your point. You mean that it
is impossible for the plug-in to know whether the block ramp
is part of a bigger ramp, extending over several blocks ?
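
For reference, here is how I picture a block ramp being
applied on the plug-in side (my own sketch, not the PTAF
text), which also shows the ambiguity: whether this block
is one piece of a longer ramp is invisible to the plug-in.

void apply_gain_ramp (float *buf, int nbr_spl, float val_beg, float val_end)
{
    const float step = (val_end - val_beg) / float (nbr_spl);
    float       val  = val_beg;
    for (int pos = 0; pos < nbr_spl; ++pos)
    {
        buf [pos] *= val; // linear interpolation across the block
        val       += step;
    }
    // After the block, the value just stays at val_end until told otherwise.
}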
* Note IDs are just "random" values chosen by the sender,
which means synths must hash and/or search...
Yes. It is easy to implement and still fast as long as
the polyphony is not too high, but I agree, there is
probably a more convenient solution. You were talking
about VVIDs... what is the concept ?
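
For what it's worth, here is what the "hash and/or search"
amounts to on the synth side (just a sketch): a map from
the sender-chosen ID to the voice playing it. With
host-assigned dense IDs, a plain array lookup would do
instead, if that is the VVID idea.

#include <unordered_map>

struct Voice { /* oscillator, envelope, etc. */ };

class VoicePool
{
public:
    Voice * find (int note_id)
    {
        auto it = _map.find (note_id);
        return (it != _map.end ()) ? it->second : nullptr;
    }
    void add    (int note_id, Voice *v) { _map [note_id] = v;   }
    void remove (int note_id)           { _map.erase (note_id); }
private:
    std::unordered_map <int, Voice *> _map; // note ID -> active voice
};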
* Hz is not a good unit for pitch...
Where have you read that pitch was expressed in Hz ?
The pitch unit is the semi-tone, relative or absolute,
depending on the context, but always logarithmic (compared
to a Hz scale).
* Why both pitch and transpose?
The final note pitch is made of two parameters: Base Pitch
and Transpose. Both are from the Pitch family (semi-tones).
Base Pitch is a convenient way to express a note frequency,
a drum hit id or any other way to identify a note within the
timbral palette of an instrument. This solution refers to a
debate on the vstplugins mailing-list some months ago (I
think it was this ML, correct me if I'm wrong).
If you want only a drum number, just round Base Pitch to the
nearest integer, and do what you want with the fractional
part.
If you want to synthesize a piano with its "weird" tuning,
remap the Base Pitch to this tuning to get the real pitch
in semi-tones, possibly using a table and interpolation.
If you want to make an instrument imposing a specific
micro-tuning, remap Base Pitch as you want, by stretching
or discretization.
These are examples of Base Pitch use, giving backward
compatibility with MIDI note IDs.
Now you can use the Transpose information for pitch bend,
here on a per-note basis (unlike MIDI, where you had to
isolate notes into channels to pitch-bend them
independently). Thus the final pitch can be calculated very
easily, and switching/crossfading between log-spaced
wavetables remains fast.
I think we can cover most cases by dissociating these two
pieces of information.
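
For example, converting the final pitch to a frequency is
just the usual equal-temperament formula (assuming the
common reference of MIDI note 69 = A 440 Hz; the reference
point is my example, not something fixed by PTAF):

#include <cmath>

double pitch_to_freq (double base_pitch, double transpose)
{
    const double pitch = base_pitch + transpose;          // both in semi-tones
    return 440.0 * std::pow (2.0, (pitch - 69.0) / 12.0); // log scale -> Hz
}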
* Why [0, 2] ranges for Velocity and Pressure?
As Steve stated, 1.0 is the medium, default intensity,
the middle of the parameter course as specified in the MIDI
specs. But I don't get why it is "wrong" ? IMHO it isn't
more wrong than the 0..64..127 MIDI scales. There is just
a constant factor between the two representations.
* TransportJump: sample pos, beat, bar. (Why not just
ticks?)
But musically speaking, what is a tick, and what is it
relative to ? If you want to locate a position in a piece,
you need two pieces of information: the absolute date and
the musical date, if relevant. The bar has a strong rhythmic
value and referring to it is important. The beat is the
musical position within the bar, measured... in beats. The
tick trick is not needed since the beat value is fractional
(~infinite accuracy).
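
For example, once the time signature and tempo are known,
converting a (bar, beat) position is trivial and needs no
tick grid (names below are just an illustration):

double to_beats (double bar, double beat, double beats_per_bar)
{
    return bar * beats_per_bar + beat; // beat is fractional, no tick grid needed
}

double to_seconds (double beats, double tempo_bpm)
{
    return beats * 60.0 / tempo_bpm;   // tempo in beats per minute
}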
* Parameter sets for note default params? Performance
hack - is it really worth it?
Good question.
* Why have normalized parameter values at all?
(Actual parameter values are [0, 1], like VST, but
then there are calls to convert back and forth.)
The normalized parameter value is intended to represent
something like the potentiometer course, in order to have
significant sound variations along the whole range, along
with constant control accuracy. This is generally what the
user wants, unless they are a masochist. Who wants to
control IIR coefficients directly ? (OK, this example is a
bit extreme, but you get my point.) Automation curves also
make sense; they are not concentrated in 10% of the
available space any more.
So parameter remapping is probably unavoidable, being
done by the host or by the plug-in. Let the plug-in do
it, it generally knows better what to do :)
Making the parameters available in a "natural" form
is intended to facilitate numerical display (48% of the
knob course is not relevant information for a gain
display, but -13.5 dB is) and copy/paste operations,
as well as parameter exchange between plug-ins.
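
As an illustration, a gain parameter could be mapped like
this (the range and function names are just an example):

#include <cmath>

// Normalized [0, 1] is the knob course; here 0.0 -> -60 dB, 1.0 -> +12 dB.
double gain_norm_to_db (double norm)
{
    return -60.0 + norm * 72.0;
}

double gain_db_to_norm (double db)
{
    return (db + 60.0) / 72.0;
}

// The DSP then uses the natural form converted to a linear factor.
double gain_db_to_lin (double db)
{
    return std::pow (10.0, db / 20.0);
}

The host only sees the [0, 1] value for automation and the
dB value for display; the mapping itself stays in the
plug-in.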
* The "save state chunk" call seems cool,
but what's
the point, really?
This is not mandatory at all; there are a lot of plug-ins
which can live without it. It is intended to store the
plug-in instance state when you save the host document.
Let's take a simple example: how would you make a sampler
without it ? You need to store the sample information
somewhere (the data itself or pathnames). It is not possible
to do it in global storage, like a file pointed to by an
environment variable, the Windows registry, Mac preference
files or whatever.
Actually the content is up to you. GUI data (current
working directory, tab index, skin selection, etc.),
control data (LFO phases, random generator states, MIDI
mappings, parameters which cannot be represented in the
classic way...), audio data (samples, even delay line
contents)... Everything you find useful for the user,
sparing him repetitive operations each time he loads a
project.
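
For the sampler example, the chunk could be as simple as
this (format and field names are entirely made up, the
content really is up to the plug-in; endianness is ignored
for brevity):

#include <cstdint>
#include <string>
#include <vector>

struct SamplerState
{
    std::vector <std::string> sample_paths; // or the raw sample data itself
    std::string               work_dir;     // GUI convenience data
    uint32_t                  rng_seed = 0; // control data
};

// Serialize into an opaque byte block handed to the host with the document.
std::vector <uint8_t> save_chunk (const SamplerState &state)
{
    std::vector <uint8_t> chunk;
    auto put_u32 = [&chunk] (uint32_t v)
    {
        const uint8_t *p = reinterpret_cast <const uint8_t *> (&v);
        chunk.insert (chunk.end (), p, p + 4);
    };
    auto put_str = [&] (const std::string &s)
    {
        put_u32 (uint32_t (s.size ()));
        chunk.insert (chunk.end (), s.begin (), s.end ());
    };
    put_u32 (uint32_t (state.sample_paths.size ()));
    for (const auto &path : state.sample_paths)
    {
        put_str (path);
    }
    put_str (state.work_dir);
    put_u32 (state.rng_seed);
    return chunk;
}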
* Just realized that plugin GUIs have to disconnect
or otherwise "detach" their outputs, so automation
knows when to override the recorded data and not.
Just grabbing a knob and hold it still counts as
automation override, even if the host can't see any
events coming... (PTAF uses tokens and assumes that
GUI and DSP parts have some connections "behind the
scenes", rather than implementing GUIs as out-of-
thread or out-of-process plugins.)
The token is managed by the host and requested explicitly
by the plug-in GUI. There is no need for hidden
communication here. Or am I missing something ?
-- Laurent
==================================+========================
Laurent de Soras | Ohm Force
DSP developer & Software designer | Digital Audio Software
mailto:laurent@ohmforce.com |
http://www.ohmforce.com
==================================+========================