that it has N
voices on at all times. Hmm. Still, VOICE_ALLOC is
akin to note_on.
Well, if a voice can be "started" and still actually be both silent and
physically deactivated (that is, acting as nothing more than a dumb
control tracker) - then yes.
I personally find this notion bizarre and counter-intuitive. The idea that
the note is turned on by some random control is just awkward. I'm willing
to concede it, but I just want to be on the record that I find it bizarre.
I still believe that the VOICE_ON approach is simpler to comprehend, and
more consistent. I want to toss MIDI, but not where the convention makes
things easy to understand. I think that explaining the idea that a voice is
created but not 'on' until the instrument decides is going to confuse
people. Over-engineered. I've said my piece. If everyone (speak up ppl!)
wants to pursue this notion, I'll go along.
On to more topics..
Apart from that, I just think it's ugly having
both hosts and senders
mess with "keys" that really belong in the synth internals. Even
I agree to some extent. I'm just idearizing still.
having hosts provide VVID entries (which they never
access) is
stretching it, but it's the cleanest way of avoiding "voice
searching" proposed so far, and it's a synth<->local host thing only.
If the host never accesses the VVID table, why is it in the host domain and
not the plugin's? Simpler linearity? I don't buy that.
The plugin CAN use the VVID table to store flags about the voice, as you
suggested. I just want to point out that this is essentially the same as
the plugin communicating to the host about voices, just more passively. It
seems useful.
If the plugin can flag VVID table entries as released, the host can have a
better idea of which VVIDs it can reuse.
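To make the "passive feedback" idea concrete, here is a minimal sketch of a host-owned VVID table with a release flag. The flag name, table size, and the int32-per-entry layout are all assumptions for illustration - nothing here is a settled API:

```c
#include <stdint.h>

/* Hypothetical layout: one int32 per VVID, allocated by the host,
 * but only ever written by the synth. */
enum { VVID_FLAG_RELEASED = 1 << 0 };

#define NUM_VVIDS 64
static int32_t vvid_table[NUM_VVIDS];

/* Synth side: flag the entry when the voice enters its release phase. */
static void synth_mark_released(int vvid)
{
    vvid_table[vvid] |= VVID_FLAG_RELEASED;
}

/* Host side: scan for a VVID that is safe to reuse. Returns -1 if none. */
static int host_find_reusable_vvid(void)
{
    for (int i = 0; i < NUM_VVIDS; i++)
        if (vvid_table[i] & VVID_FLAG_RELEASED)
            return i;
    return -1;
}
```

Note the host never interprets entries beyond the agreed flag bits, so this stays "passive" - the synth is not sending events back, just leaving hints.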
Well, what *actually* made me comment on that, is that
I thought
"vvid_is_active()" had something to do with whether or not the
*synth* is using the VVID.
Was that the idea? If so; again; VVID entries are not for feedback;
they simply do not exist to senders. They're a host provided VVID
mapping service for synths; nothing else.
That wasn't the idea until you suggested using int32 for error status :)
They can handle it by actually doing what the hint
suggests; sample
the "initializer" control values only when a note is started. That
way, they'll "sort of" do the right thing even when driven by data
generated/recorded for synths/sounds that use these controls as
continuous. And more interestingly; continuous control synth/sounds can
be driven properly by initializer oriented data.
Am I losing my mind or are we back at a prior scenario?
Init controls:
time X: ALLOC_VOICE
time X: CONTROL A SET
time X: CONTROL B SET
This tastes just like VOICE_ON, SET, SET. If controls A and B are both
required to start a voice, the synth has to expect both.
Agreed - they
are semantically the same. The question is whether
or not it has a counterpart to say that init-time controls are
done.
This is where the confusion/disagreement is, I think: I don't think
of this event as "INIT_START", but rather as "CONTEXT_START". I
don't
see the need for a specific "init" part of the lifetime of a context.
Initialization ends whenever the synth decides to start playing
instead of just tracking controls.
Right - this is the bit I find insane. From the user perspective: I want
to start a note. Not whenever you feel like it. Now. Here are the
non-default control values for this voice. Anything I did not send you,
assume the default. Go.
I can map this onto your same behavior, but I don't like the way you
characterize it. The init period is instantaneous. If I did not provide
enough information (e.g.: no velocity, and default is 0.0) then the voice is
silent. Same behavior different characterization.
The difference comes when the host sends the 'magic' start-voice control too
soon.
Assume a synth with a bunch of init-latched controls.
Assume velocity is the 'magic trigger'.
time0: Host sends VOICE_START/ALLOC/whatever
time0: Host sends controls A, B, C (latched, but no effect from the synth)
time0: Host sends control VELOCITY (host activates voice)
time0: Host sends controls D, E, F (ignored - they are init-latched, and
init is over!)
Do you see the problem? It is easily solved by declaring a context, then
setting init controls, then activating the voice. But the activation of the
voice has to be consistent for all Instruments, or the host can't get it
right.
If that means that the white-noise generator with no per-voice controls has
to receive VOICE_ALLOC(vvid) and VOICE_ON(vvid), then that is OK.
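The consistent ordering argued for above can be sketched as a host-side helper. The event names, the queue, and send_event() are placeholders, not a real API - the point is only that activation is an explicit event after all init controls, never a side effect of some "magic" control:

```c
/* Hypothetical event opcodes; real names are still undecided. */
enum { EV_VOICE_ALLOC, EV_CONTROL_SET, EV_VOICE_ON };

typedef struct { int type; int vvid; int arg; } event_t;

static event_t queue[16];
static int queue_len;

static void send_event(int type, int vvid, int arg)
{
    queue[queue_len++] = (event_t){ type, vvid, arg };
}

/* The ordering argued for: allocate the context, set ALL init-latched
 * controls (velocity included, with no special trigger role), then
 * activate explicitly. */
static void start_voice(int vvid, int velocity)
{
    send_event(EV_VOICE_ALLOC, vvid, 0);
    send_event(EV_CONTROL_SET, vvid, velocity);
    send_event(EV_VOICE_ON, vvid, 0);
}
```

Because activation is its own event, the host can never accidentally end the init phase early by sending controls in an unlucky order.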
physically when the sender has reassigned the VVID and
the synth has
killed the voice. Thus, no need for a "VOICE_END" or similar event
either.
The host still has to be able to end a voice, without starting a new one.
For a continuous velocity instrument, this is obvious;
default
velocity has to be 0, or you'll have a "random" note started soon as
a voice is allocated. Thus you *have* to change the velocity control
to start a note.
For a "latched velocity" instrument (MIDI style), it's exactly the
same thing. There will be a NOTE control, and it has to be "off" by
default. Just set it to "on" to start a note with default velocity.
That's fine, but can't it be consistent? The continuous velocity instrument
does not suffer because it has to accept a VOICE_ON (what you called NOTE
here). Does it map perfectly? No. Does it map well enough? Yes - and it
has the benefit of being consistent, and easy to explain.
Realistically MOST synths will have at least one init-time control
(velocity). As we evolve, this can go away and just be a normal control.
But until then we need a consistent way to handle init-latched controls.
Because they ARE special. They all need to be sent before the voice is
activated.
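On the synth side, handling an init-latched control is trivial once activation is explicit: just store the value until VOICE_ON arrives. A minimal sketch, with all struct and function names invented for illustration:

```c
#include <stdbool.h>

typedef struct {
    bool  active;
    float velocity;   /* init-latched control */
} voice_t;

/* Before VOICE_ON: latch the value. After: ignore it, since init is
 * over for a latched control. (A continuous-control synth would keep
 * tracking it instead.) */
static void voice_control_set(voice_t *v, float value)
{
    if (!v->active)
        v->velocity = value;
}

static void voice_on(voice_t *v)
{
    v->active = true;   /* start playing with whatever was latched */
}
```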
BTW, it might be a good idea to always provide NOTE
output from
senders, even though continuous velocity and similar synths won't care
about it. Then again, continuous velocity data doesn't work very well
as velocity data for note based synths, so it's probably not much
use...
We should - I agree. I just wrote that diatribe without reading ahead :)
Yes. So every voice creation involves a VOICE_ALLOC and a VOICE_ON. Once
VOICE_ON is received, the synth may make sound for the voice. It may be
silent, also, but the voice IS active.
Yes. "VOICE_ALLOC" doesn't trigger a
MIDIism allergy reaction for me
at least, but it's still a bit confusing...
Well, names are meant to be changed. Once we agree on WHAT it is, we can
name it properly.
For something
that has no per-voice controls (e.g. a
white-noise machine) you still need to send some event.
Fiddle with the NOTE control, maybe? (Which would default to "off",
of course.)
YES! The same as a VOICE_ON. or send a 1 to the VOICE control. Describe it
how you will, it tastes like VOICE_ON. Remember before when I said I would
concede this point? I lied! I might concede VVIDs and give up on
plugin-allocated VIDs, but this seems more and more right, and you yourself
talked me back into it.
Now, how do we express release velocity with that
scheme? Negative
Still broken, though: Release velocity doesn't fit
in... :-/
Use a third control for that!? Actually, that makes
quite some sense,
since many real instruments use totally different mechanisms for
starting and stopping notes. Pianos. Cymbals...
VELOCITY vs DAMPING, or something. (Makes sense for continuous
velocity instruments as well!)
I'll think about this scheme. As a simple alternative - mirror the
VOICE_ON? VOICE_RELEASE, CONTROL SET, VOICE_OFF.
It actually is starting to taste more like VOICE_PAUSE, now. Tell the synth
not to react to changes until unpause. I don't like that. Perhaps release
velocity is a different control. But let's not make velocity special again.
If VELOCITY means something special to the end of a note, maybe there are
others.
Still needs thought.
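One way to picture the VELOCITY/DAMPING split floated above: release velocity becomes an ordinary control set just before VOICE_OFF, mirroring velocity before VOICE_ON, with no special-casing of either. A sketch with purely hypothetical names:

```c
typedef struct {
    float velocity;   /* sampled at VOICE_ON  */
    float damping;    /* sampled at VOICE_OFF */
    int   playing;
} rvoice_t;

static void rvoice_on(rvoice_t *v, float vel)
{
    v->velocity = vel;
    v->playing = 1;
}

/* Release velocity is just another control; no "pause" semantics,
 * no reuse of VELOCITY for a second meaning. */
static void rvoice_off(rvoice_t *v, float damp)
{
    v->damping = damp;
    v->playing = 0;
}
```

This keeps the symmetry (controls, then an explicit on/off event) and sidesteps making velocity special at the end of a note.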
Well, yes. And you'll also have to send a third
event (corresponding
to VOICE_OFF) to stop it, since just "letting go" of a VVID doesn't
affect anything. (It's a NOP, as far as the API and synths are
concerned.)
Yes, I've always assumed that, and it makes perfect sense to me and doesn't
offend me at all :)
struct voice *v = myvoices[(*(host->vvid_table))[vvid]];
Why double indirection? I think it's enough to say that you must
never store the pointer across process() calls. Note that senders
I guess that works, too. :)
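For the record, the single-indirection version suggested above would look something like this: the host's table maps the VVID straight to whatever the synth stored (here, a voice index), and the resulting pointer is never kept across process() calls. Struct layouts and names are illustrative only:

```c
struct voice { int note; };

#define MAX_VOICES 8
static struct voice myvoices[MAX_VOICES];

/* Hypothetical host struct exposing the VVID table directly. */
struct host_info { int vvid_table[64]; };

/* One indirection: table entry is the synth's own voice index.
 * The returned pointer is only valid within the current process() call. */
static struct voice *lookup_voice(struct host_info *host, int vvid)
{
    return &myvoices[host->vvid_table[vvid]];
}
```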
Tim