Here's that PTAF document that Ohm Force brought to Anaheim:
http://freezope2.nipltd.net/ldesoras/files/ptaf-2003.01.23.pdf
I've read some parts of it, and browsed all of it, and here are some
initial reflections of mine:
* Three states: created, initialized and activated.
This may be useful if plugins have to be instantiated
for hosts to get info from them. Why not just provide
metadata through the factory API?
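To illustrate what I mean by factory-level metadata, here's a rough C sketch (all names and fields are made up by me, not from the PTAF spec): the binary exports a descriptor the host can read without ever constructing a plugin instance.

```c
/* Hypothetical factory-level metadata, so hosts can scan plugins
 * without instantiating them. Illustrative only, not PTAF's API. */
typedef struct {
    const char *name;       /* plugin name, UTF-8 */
    const char *vendor;
    int num_inputs;         /* audio input channels */
    int num_outputs;
    int is_synth;           /* nonzero if it generates sound */
} PluginInfo;

/* A plugin binary could export one descriptor per plugin: */
static const PluginInfo example_info = {
    "Example Gain", "Example Vendor", 2, 2, 0
};

const PluginInfo *plugin_get_info(void)
{
    return &example_info;
}
```

The host scans descriptors at startup and only goes through created/initialized/activated when a plugin is actually used.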
* Bypass mode seems to be a good idea for stereo->surround.
* Assuming that audio and sequencer stuff is in different
  threads, and even having plugins deal with the sync,
  sounds like a *very* bad idea to me.
* GUI code in the same binaries is not even possible on
some platforms. (At least not with standard toolkits.)
* Using tokens for control arbitration sounds pointless. Why
not just pipe events through the host? That turns GUI/DSP
interaction into perfectly normal control routing.
* Why use C++ if you're actually writing C? If it won't
compile on a C compiler, it's *not* C.
* I think the VST style dispatcher idea is silly. A table
of function pointers, with a bunch of reserved NULLs
would be simpler, faster and just as extensible for all
  practical purposes.
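Here's roughly what I have in mind, in C (names are mine, purely illustrative): a struct of function pointers with reserved NULL slots, instead of switch()ing on integer opcodes in a dispatcher.

```c
/* Sketch of a function-pointer interface table with reserved slots,
 * as an alternative to a VST-style integer-opcode dispatcher.
 * All names here are hypothetical. */
#include <stddef.h>

typedef struct PluginVTable {
    int  (*init)(void *plugin);
    void (*process)(void *plugin, float **in, float **out, int frames);
    void (*destroy)(void *plugin);
    /* Reserved for future extension; hosts check for NULL
     * before calling, so old binaries stay compatible. */
    void (*reserved[8])(void);
} PluginVTable;

static int example_init(void *p) { (void)p; return 0; }

static const PluginVTable example_vtable = {
    example_init,
    NULL,               /* process not implemented in this sketch */
    NULL,
    { NULL }
};

/* The host calls an entry point only if the slot is non-NULL: */
int call_init(const PluginVTable *vt, void *p)
{
    return vt->init ? vt->init(p) : -1;
}
```

New entry points just fill in reserved slots; a direct call through a pointer also avoids the per-call opcode decoding that a dispatcher does.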
* Is using exceptions internally in plugins safe and
portable?
* Specifying the maximum string length when asking for
strings seems nice in some ways...
* UTF-8, rather than ASCII or UNICODE.
* Hosts assume all plugins to be in-place broken. Why?
* No mix output mode; only replace. More overhead...
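That "more overhead" is the extra pass a replace-only output mode forces on the host; a little illustrative C (not PTAF code):

```c
/* With replace-only outputs, the plugin writes its buffer and the
 * host then mixes it into the bus itself, instead of the plugin
 * accumulating directly. Illustrative sketch only. */
void mix_into(float *dst, const float *src, int frames)
{
    int i;
    for (i = 0; i < frames; ++i)
        dst[i] += src[i];   /* one extra read-modify-write per sample */
}
```

With a mix mode, that loop would live inside the plugin's own processing loop and cost essentially nothing.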
* Only mono buffers. (Good.)
* Buffers 16 byte aligned. Sufficient?
* Events are used for making connections. (But there's
also a function call for it...)
* Audio quality control. (Nice scalability feature.)
* Plugin input->output latency.
* Host process return->audible output latency.
* Tail size. (Can be unknown!)
* Process mode: Mixed/RT/Off-Line.
* Plugins send events specifically to the host...
* Events are delivered in arrays, just like in VST.
* "Clear buffers" call. Does not kill active notes...
* Timestamps are in audio frames.
* Events are always for the current block.
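For reference, this is the usual shape of that scheme (my own illustrative types, not PTAF's actual layout): events carry a frame offset into the current block, and the plugin splits its processing at each timestamp.

```c
/* Block-local events timestamped in audio frames, delivered as an
 * array per process() call (VST-style). Hypothetical sketch. */
typedef struct {
    unsigned frame;     /* offset into the current block */
    int      type;      /* hypothetical event type code */
    float    value;
} Event;

/* Process a block, splitting it at each event timestamp: */
void process_with_events(float *buf, int frames,
                         const Event *events, int num_events)
{
    int pos = 0;
    int i;
    for (i = 0; i <= num_events; ++i) {
        int end = (i < num_events) ? (int)events[i].frame : frames;
        int s;
        for (s = pos; s < end; ++s)
            buf[s] *= 0.5f;     /* stand-in DSP: fixed gain */
        pos = end;
        /* a real plugin would apply events[i] here before continuing */
    }
}
```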
* Ramping API seems awkward...
* Ramping cannot run across blocks.
* It is not specified whether ramping stops automatically
at the end value, although one would assume it should,
considering the design of the interface.
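Assuming it *does* stop at the end value, a block-limited ramp would look something like this (hypothetical helper, not the PTAF API):

```c
/* Block-limited linear ramp that clamps at the target value -
 * what one would assume the spec intends. Illustrative only. */
typedef struct {
    float value;
    float target;
    float step;     /* per-frame increment */
} Ramp;

float ramp_tick(Ramp *r)
{
    if ((r->step > 0.0f && r->value + r->step >= r->target) ||
        (r->step < 0.0f && r->value + r->step <= r->target)) {
        r->value = r->target;   /* clamp: stop at the end value */
        r->step = 0.0f;
    } else {
        r->value += r->step;
    }
    return r->value;
}
```

Since ramps can't cross blocks, the host has to reissue one of these every block for any slow sweep, which is part of what makes the API feel awkward.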
* Note IDs are just "random" values chosen by the sender,
which means synths must hash and/or search...
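Concretely, the kind of lookup random sender-chosen IDs force on a synth (my sketch, illustrative only):

```c
/* A synth mapping sender-chosen note IDs to voices. With no
 * structure in the IDs, every note event pays a search (or the
 * synth maintains a hash table). Illustrative names. */
#include <stdint.h>

#define MAX_VOICES 32

typedef struct {
    uint32_t note_id;   /* sender-chosen, nothing to exploit */
    int      active;
} Voice;

int find_voice(const Voice *voices, uint32_t id)
{
    int i;
    for (i = 0; i < MAX_VOICES; ++i)
        if (voices[i].active && voices[i].note_id == id)
            return i;
    return -1;  /* not found */
}
```

Host-assigned small integers (or letting the synth return the ID from note-on) would make this a plain array index instead.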
* Hz is not a good unit for pitch...
* Why both pitch and transpose?
* Why [0, 2] ranges for Velocity and Pressure?
* Note On/Off/End confuse note on/off with context
management.
* TimeChange: tempo, ts numerator, ts denominator.
* TransportJump: sample pos, beat, bar. (Why not just
ticks?)
* PlaybackChange: stopped/running
* Plugins (not hosts) maintain parameter sets/presets.
* Parameter sets for note default params? Performance
hack - is it really worth it?
* Why have normalized parameter values at all?
(Actual parameter values are [0, 1], like VST, but
then there are calls to convert back and forth.)
* The "save state chunk" call seems cool, but what's
the point, really?
* Just realized that plugin GUIs have to disconnect
or otherwise "detach" their outputs, so automation
knows when to override the recorded data and not.
  Just grabbing a knob and holding it still counts as
automation override, even if the host can't see any
events coming... (PTAF uses tokens and assumes that
GUI and DSP parts have some connections "behind the
scenes", rather than implementing GUIs as out-of-
thread or out-of-process plugins.)
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---