On Tuesday 07 January 2003 15.03, Steve Harris wrote:
[...]
> > It's just that there's a *big* difference between latching
> > control values when starting a note and being able to "morph"
> > while the note is played... I think it makes a lot of sense to
> > allow synths to do it either way.
> I'm not convinced there are many things that should be latched. I
> guess if you're trying to emulate MIDI hardware, but there you can
> just ignore velocity that arrives after the voice on.
I don't think velocity mapping qualifies as "emulating MIDI
hardware", though, and likewise for "impact position" on drums.
Selecting a waveform at voice on is very different from switching
between waveforms during playback in a useful way. Changing the
position-dependent parameters of sounds that are already playing just
because "the drummer moves his aim" is simply incorrect.
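The distinction could be sketched like this (a minimal, hypothetical C
sketch; the types and names are made up for illustration, not from any
real API): velocity and impact position are latched into the voice at
voice-on, while other controls stay live and can morph.

```c
#include <assert.h>

/* Hypothetical voice/control layout, just to illustrate the point. */
typedef struct {
    float velocity;    /* latched copy, fixed for the note's lifetime */
    float impact_pos;  /* latched copy, fixed for the note's lifetime */
    int   active;
} Voice;

typedef struct {
    float velocity;    /* updated by incoming events at any time */
    float impact_pos;
    float cutoff;      /* NOT latched; read on every process call */
} Controls;

/* At voice-on the synth snapshots the per-hit parameters... */
static void voice_on(Voice *v, const Controls *c)
{
    v->velocity   = c->velocity;
    v->impact_pos = c->impact_pos;
    v->active     = 1;
}

/* ...so later control changes morph the cutoff, but "the drummer
 * moving his aim" no longer affects sounds already playing. */
static float process_sample(const Voice *v, const Controls *c)
{
    return v->velocity * v->impact_pos * c->cutoff;  /* toy synthesis */
}
```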
Either way, the problem with ignoring velocity after voice on is that
you have to consider event *timestamps* rather than event ordering.
This breaks the logic of timestamped event processing, IMHO.
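A toy illustration of that point, under an assumed event model (none
of these types or names come from a real API): to "ignore velocity
after voice on", the receiver has to compare timestamps, because a
velocity event for the same frame may legitimately arrive after
VOICE_ON in the stream.

```c
#include <assert.h>

/* Illustrative timestamped event model. */
typedef enum { EV_VOICE_ON, EV_VELOCITY } EvType;

typedef struct {
    unsigned timestamp;  /* in sample frames */
    EvType   type;
    float    value;
} Event;

typedef struct {
    unsigned on_time;
    int      active;
    float    velocity;
} Voice;

static void handle(Voice *v, const Event *e)
{
    switch (e->type) {
    case EV_VOICE_ON:
        v->active  = 1;
        v->on_time = e->timestamp;
        break;
    case EV_VELOCITY:
        /* Stream order is not enough: a velocity event for the same
         * frame can arrive after VOICE_ON, so the synth must inspect
         * the timestamp to decide whether to accept it. */
        if (!v->active || e->timestamp <= v->on_time)
            v->velocity = e->value;
        break;
    }
}
```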
> I guess I have no real problem with two-stage voice
> initialisation. It certainly beats having two classes of event.
Yes, and that's the more important side of this. Treating
"parameters" as different from controls has implications for both
hosts/senders and synths, whereas defining "voice on latched
controls" as a synth implementation detail has no implications for
hosts/senders.
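A sketch of how that could look, with a single event class and all
names hypothetical: the host addresses a voice, sends ordinary control
events, then a VOICE_ON. Whether anything gets latched at that point
is purely the synth's decision; the host never needs to know.

```c
#include <assert.h>

/* One event class for everything; no special "parameter" events. */
typedef enum { EV_VCONTROL, EV_VOICE_ON } EvType;

typedef struct { EvType type; int voice; int ctrl; float value; } Event;

enum { CTRL_VELOCITY = 0, NCTRL = 4 };

typedef struct {
    float ctrl[NCTRL];  /* written by ordinary control events */
    float velocity;     /* latched copy, taken at VOICE_ON */
    int   active;
} Voice;

static void dispatch(Voice *voices, const Event *e)
{
    Voice *v = &voices[e->voice];
    if (e->type == EV_VCONTROL) {
        v->ctrl[e->ctrl] = e->value;          /* just a control change */
    } else {                                  /* EV_VOICE_ON */
        v->velocity = v->ctrl[CTRL_VELOCITY]; /* latch: synth's choice */
        v->active   = 1;
    }
}
```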
It might seem handy to allow synths to explicitly say that some
controls *must* be rewritten at the instant of a VOICE_ON, but I
don't think that's useful enough (it's useless for continuous
velocity instruments, at least) to motivate the cost.
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---