On Saturday 07 December 2002 12.21, Steve Harris wrote:
> On Sat, Dec 07, 2002 at 12:25:09 -0800, Tim Hockin wrote:
> > This is what I am thinking: One of the flags on each control is
> > whether it is a CTRL_MASTER, or a CTRL_VOICE (or both). This
> > allows the plugin to define a ctrl labelled (for example)
> > VELOCITY and the host can connect it to any MIDI input, if it
> > wants.
> I'm not sure that making one control both is meaningful. Per voice
> and per instrument controls should probably be kept separate.
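For reference, this is roughly how I read the "flag" idea - all names
below are invented for illustration, not taken from any actual API:

/* How I read the 'flag' idea; names are invented. */
#define CTRL_MASTER	0x01	/* per-instrument (per-Channel) control */
#define CTRL_VOICE	0x02	/* per-voice (per-note) control */

typedef struct
{
	const char	*label;		/* e.g. "VELOCITY" */
	int		flags;		/* CTRL_MASTER and/or CTRL_VOICE */
	float		min, max, def;
} ctrl_desc;

/* A plugin could export velocity as an ordinary per-voice control,
 * and the host may then map any MIDI input to it: */
static const ctrl_desc velocity_ctrl = { "VELOCITY", CTRL_VOICE,
		0.0f, 1.0f, 1.0f };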
I'm still not sure I understand the terminology here... Considering
some recent posts, I assume that the following applies:
Instrument:
	Monophonic or polyphonic plugin, capable of maintaining one
	set of "patch data" (or "one instrument sound") at a time. In
	popular MIDI synth terminology, an Instrument would correspond
	to "one part".

Voice:
	An abstract object inside an Instrument, representing "one
	note". A polyphonic Instrument may maintain several Voices at
	a time. Physically, a Voice may be anything from a single
	oscillator to a network of oscillators, filters and other
	objects.

Is that what you mean?
Personally, I'd rather not use the term "Instrument", but rather just
Plugin. In my terminology, an Instrument (as defined above) would be
a Channel, and Plugins may or may not be allowed to have multiple
Channels. As to the control protocol, if you want to do away with the
"channel" field, you could just allow synths to have more than one
event input port. One might think that that means more event decoding
overhead, but given that you'll most probably process one channel at
a time anyway, it would actually be an *advantage*. Oh, and you
wouldn't have to use separate negotiation schemes for
instrument/plugin controls and voice/note controls - you can just use
separate ports.
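A rough sketch of what I mean, with completely made-up names:

/* One event input port per Channel instead of a 'channel' field
 * in every event. All names are invented for the example. */
#define MAX_CHANNELS	16

typedef struct event
{
	unsigned	timestamp;	/* sample frame within the block */
	int		type;		/* e.g. VOICE_START, VOICE_PARAM, ... */
	int		arg1, arg2;	/* fixed-size payload */
	struct event	*next;
} event;

typedef struct plugin
{
	int	n_channels;
	event	*channel_port[MAX_CHANNELS];	/* one queue per Channel */
	/* ...audio ports, controls etc... */
} plugin;

void process_channel(plugin *p, int channel, event *ev, unsigned frames)
{
	/* decode 'ev' and render 'frames' samples for 'channel'... */
	(void)p; (void)channel; (void)ev; (void)frames;
}

/* No 'channel' field to decode; since you'll most likely render one
 * Channel at a time anyway, you just walk one queue per Channel: */
void run(plugin *p, unsigned frames)
{
	int c;
	for(c = 0; c < p->n_channels; ++c)
		process_channel(p, c, p->channel_port[c], frames);
}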
<audiality_internals>
BTW, in Audiality, there's no real plugin API for the synth engine
yet (it's used only for insert FX), but just objects with event
ports. As of now, there is one event port for each Channel, and one
event port for each physical Voice. "Virtual Voices" (i.e. Voices as
defined above, which I would rather call "Notes" or just that:
Virtual Voices) are not defined in the "API", and may not exist in
some patches - the Patch Plugin (which is what converts Channel
Events into... anything, basically) decides how to deal with Channel
and Voice events.
</audiality_internals>
> > 1) * Host calls plug->note_on(timestamp, note, velocity), and
> >    gets a voice-id.
> I don't think I like the first-class-ness of note and velocity; as
> we've discussed, they are meaningless for some instruments and could
> be overloaded.
I agree. When I implemented the simple "2D positional" sound FX API
for the Audiality engine, I quickly realized that this <pitch,
velocity> tuple is just an arbitrary set which has been hammered into
our minds through the years. It's a sensible compromise to save
bandwidth, and it works well for most keyboard-controlled
instruments, but that's about it.
I think it's time we stop thinking of two arbitrary parameters as an
obvious part of any "note on" message.
As to the "gets a voice ID" thing, no, I think this is a bad idea.
The host should *pick* a voice ID from a previously allocated pool of
Virtual Voice IDs. That way, you eliminate the host-plugin or
plugin-plugin (this is where it starts getting interesting)
roundtrip, which means you can, among other things (see the sketch
after this list):
* send controls *before* you start a voice, if you like
* implement smarter and tighter voice allocation
* use a timestamped event system
* pipe events over cables, somewhat like MIDI
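Something along these lines (again, all names and numbers are
invented for the example):

/* Host-side Virtual Voice ID allocation. */
#define EV_VOICE_PARAM	1
#define EV_VOICE_START	2
#define CTRL_PITCH	0
#define CTRL_VELOCITY	1

typedef struct { unsigned when; int type, vvid, ctrl, value; } event;
typedef struct { event buf[256]; int count; } event_queue;

/* The host owns the pool of VVIDs it has negotiated with the plugin,
 * so there is never a roundtrip to ask for an ID: */
static int next_vvid = 0;
static int vvid_alloc(void) { return next_vvid++ % 64; }  /* pool of 64 */

static void queue_event(event_queue *q, unsigned when, int type,
		int vvid, int ctrl, int value)
{
	event e = { when, type, vvid, ctrl, value };
	q->buf[q->count++] = e;
}

/* Controls can be queued *before* the voice is started, all
 * timestamped, and shipped off like any other events: */
void start_note(event_queue *q, unsigned when)
{
	int vvid = vvid_alloc();
	queue_event(q, when, EV_VOICE_PARAM, vvid, CTRL_PITCH, 60);
	queue_event(q, when, EV_VOICE_PARAM, vvid, CTRL_VELOCITY, 100);
	queue_event(q, when, EV_VOICE_START, vvid, 0, 0);
}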
> > * Host sends n VOICE_PARAM events to set up any params it wants
> You could just send pitch and velocity this way?
Yes, I think so. There's slightly more overhead, obviously, but the
alternative is to maintain a performance hack in the API, all plugins
and all hosts... Also note that event size (which has to be fixed)
increases if you must squeeze in multiple parameters in some events.
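To illustrate the size issue (names, and the extra "pressure" field,
are made up):

/* Fixed-size events: every event is as big as the largest payload
 * in the union. */
typedef struct
{
	unsigned	when;	/* sample-accurate timestamp */
	int		type;
	int		vvid;
	union
	{
		struct { int ctrl; float value; } param;	/* generic */
		struct { float pitch, velocity, pressure; } start;
	} data;
} event;

/* Dropping the special-cased 'start' payload and sending pitch,
 * velocity etc. as ordinary 'param' events keeps the union - and
 * therefore every event in every queue - at the minimum size, at
 * the cost of a couple of extra events per note. */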
> > Should each plugin provide serialize() and deserialize() methods
> > (thus storing plugin state from the plugin itself in an arbitrary
> > string) or should the host be expected to write-out each
> > control's data?
> The host should do it. You don't want to have to write serialising
> code for every single instrument. This also ensures that all the
> state can be automated.
++host_does_serialising;
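That is, the host can just walk the control list and write out
name/value pairs itself; a rough sketch (all names invented):

/* Host-side state save: the host already knows every control and
 * its current value, so no plugin code is needed. */
#include <stdio.h>

typedef struct
{
	const char	*name;
	float		value;
} control;

typedef struct
{
	int	n_controls;
	control	*controls;
} plugin_instance;

static void save_state(FILE *f, const plugin_instance *p)
{
	int i;
	for(i = 0; i < p->n_controls; ++i)
		fprintf(f, "%s = %f\n", p->controls[i].name,
				p->controls[i].value);
}

/* Restoring is the mirror image: parse the name/value pairs and send
 * them back to the plugin as ordinary control events - which is also
 * why the same path works for automation. */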
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---