On Tue, Feb 04, 2003 at 03:04:09 +0100, Laurent de Soras [Ohm Force] wrote:
> > * GUI code in the same binaries is not even possible on
> > some platforms. (At least not with standard toolkits.)
> I'm personally not familiar with Linux GUI toolkits (I'm
> confused by Gnome, KDE, X, Berlin, etc, sorry for
> my ignorance); is there a problem with them ? In this
> case wouldn't it be possible to launch the GUI module from
> the plug-in ? What would you recommend ?
There are some low-level issues that are hard to describe, but it's common
on UNIX for the UI to run on a different machine to the DSP processing.
There has also been some interest from hardware people, and there are
host-installed cards, like you said.
> Performance of the calls is not a critical issue
> here. In the audio thread, there is roughly only one call per
> audio block. And the final plug-in or host developer will
> likely use a wrapper, adding more overhead.
I don't understand why it's necessary to fetch the function pointers for
every call; they won't vary, right?
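To illustrate the point, here is a minimal sketch in C of a host caching the plug-in's process entry point once at activation rather than looking it up per block. All names here (`plugin`, `lookup_process`, etc.) are invented for illustration, not part of any real API:

```c
#include <assert.h>
#include <stddef.h>

/* Signature of a hypothetical per-block processing call. */
typedef void (*process_fn)(void *handle, float **bufs, unsigned nframes);

struct plugin {
    void *handle;
    process_fn process;   /* cached once, reused for every audio block */
};

/* Stand-in for a real plug-in's process function. */
static void dummy_process(void *handle, float **bufs, unsigned nframes)
{
    (void)handle; (void)bufs; (void)nframes;
}

/* Stand-in for whatever descriptor lookup the real API provides. */
static process_fn lookup_process(void)
{
    return dummy_process;
}

static void activate(struct plugin *p)
{
    p->process = lookup_process();   /* one fetch at activation time */
}

static void run_block(struct plugin *p, float **bufs, unsigned nframes)
{
    p->process(p->handle, bufs, nframes);   /* direct indirect call, no lookup */
}
```

Since the pointer cannot change while the plug-in is activated, the per-block cost is just one indirect call.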
> > * UTF-8, rather than ASCII or UNICODE.
I support this FWIW. ASCII is too limiting and UTF-16 is a mistake.
> > * Hosts assume all plugins to be in-place broken. Why?
> > * No mix output mode; only replace. More overhead...
> There are several reasons for these specifications:
This one I'm in two minds about; I hate duplicated code, but I don't like
wasting memory bandwidth either. At 96/64 I suspect it will start to hurt.
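For concreteness, a sketch of the two output conventions being discussed (function names invented): "replace" overwrites the output buffer, "mix" accumulates into it. A host given only "replace" has to run the plug-in into a scratch buffer and do the addition itself, which is the extra memory bandwidth mentioned above:

```c
/* "Replace" mode: the output buffer is overwritten. */
static void process_replace(const float *in, float *out, unsigned n, float gain)
{
    for (unsigned i = 0; i < n; i++)
        out[i] = in[i] * gain;
}

/* "Mix" mode: the plug-in accumulates into the output buffer,
 * so the host needs no scratch buffer or extra add pass. */
static void process_mix(const float *in, float *out, unsigned n, float gain)
{
    for (unsigned i = 0; i < n; i++)
        out[i] += in[i] * gain;
}
```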
> [buffer alignment]
> but to which size ? It requires a visionary here ;)
It doesn't seem like the API needs to specify this; the host can just
arrange it, given knowledge about the platform.
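A sketch of what "the host just arranges it" could look like on a POSIX system. The 16-byte figure is only an example (SSE-era SIMD width), not a value the API would mandate:

```c
#define _POSIX_C_SOURCE 200112L
#include <stdlib.h>
#include <stdint.h>

/* Allocate an audio buffer aligned for the platform's SIMD width.
 * 16 bytes is used here purely as an illustrative choice. */
static float *alloc_audio_buffer(size_t nframes)
{
    void *p = NULL;
    /* POSIX: alignment must be a power of two multiple of sizeof(void *). */
    if (posix_memalign(&p, 16, nframes * sizeof(float)) != 0)
        return NULL;
    return (float *)p;
}
```

Because the host owns the buffers, it can change the alignment for a new platform without touching the API or the plug-ins.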
> > * Hz is not a good unit for pitch...
> Where have you read that pitch was expressed in Hz ?
> The pitch unit is the semi-tone, relative or absolute, depending
> on the context, but always logarithmic (compared to a Hz
> scale).
My feeling is that 12tET is an arbitrary representation for pitch, and
octaves are more natural, but that's slightly controversial. Several
people here are interested in non-12tET music, but I don't really want to
bring that discussion up again.
Base pitch can be rolled into transpose.
Drums probably should not be specified by pitch; MIDI can be translated into
something more meaningful before it reaches the plugin. One of the aims of
XAP was to not inherit too many of MIDI's design flaws, whilst remaining
broadly compatible.
> > * Why [0, 2] ranges for Velocity and Pressure?
> As Steve stated, 1.0 is for the medium, default intensity,
> the middle of the parameter course as specified in the MIDI
> specs. But I don't get why it is "wrong". IMHO it isn't
> more wrong than the 0..64..127 MIDI scales. There is just
> a constant factor between both representations.
I would count this as a bad thing that can be inherited from MIDI;
logically you would expect a note with 1.0 velocity to have an output
amplitude of 1.0.
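The constant factor in question is just a division by 64, as this sketch shows (a hypothetical mapping, assuming the [0, 2] convention described above):

```c
/* Map MIDI note-on velocity (0..127) onto a [0, 2] scale where
 * 64, MIDI's middle value, maps to the "medium" intensity 1.0. */
static double midi_velocity_to_xap(int v)
{
    return v / 64.0;    /* 0 -> 0.0, 64 -> 1.0, 127 -> 1.984375 */
}
```

Steve's objection is about the other end of the pipeline: with this scale, full MIDI velocity yields roughly 2.0, not the 1.0 amplitude one might naively expect.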
> > * Why have normalized parameter values at all?
> > (Actual parameter values are [0, 1], like VST, but
> > then there are calls to convert back and forth.)
> The normalized parameter value is intended to reflect something
> like the potentiometer course, in order to have significant
> sound variations along the whole range, along with constant
> control accuracy. This is generally what the user wants,
> unless they are a masochist. Who wants to control IIR
> coefficients directly ?
My experience is that combining real, natural parameters with arbitrary
ranges is better, e.g. amplitude in dBs, time in seconds, pitch bias in
octaves, etc.
Giving the parameters semantically meaningful units also helps for
scripting and offline usage, something which is close to our text-based
UNIX hearts ;)
> > * The "save state chunk" call seems cool, but what's
> > the point, really?
> This is not mandatory at all; there are a lot of plug-ins
> which can live without it. It is intended to store the
> plug-in instance state when you save the host document.
I agree, this is useful.
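For the record, a hypothetical sketch of what such an optional pair of calls could look like, with a toy plug-in state (all names invented; the host treats the chunk as opaque bytes it stores in its document):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy internal state that isn't captured by ordinary parameters. */
struct toy_state { float phase; };

/* Returns the number of bytes written into buf, or 0 on failure. */
static size_t toy_save_chunk(void *handle, void *buf, size_t bufsize)
{
    if (bufsize < sizeof(struct toy_state))
        return 0;
    memcpy(buf, handle, sizeof(struct toy_state));
    return sizeof(struct toy_state);
}

/* Restores state previously produced by toy_save_chunk; 0 on success. */
static int toy_restore_chunk(void *handle, const void *buf, size_t size)
{
    if (size != sizeof(struct toy_state))
        return -1;
    memcpy(handle, buf, size);
    return 0;
}
```

Plug-ins whose state is fully described by their parameters would simply not implement the pair.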
- Steve