On Thursday 16 January 2003 12.09, Steve Harris wrote:
[...]
> > 2) continuous control - this includes things like a violin,
> > which receives streams of parameters. This gets a VVID for
> > each new voice. If you want glissando, you would tell the
> > violin synth that fact, and it would handle it.
> > Mono-synths need a VOICE, too. Modular synth modules might not.
> > They are effectively ON all the time, just silent because of
> > some other control. This works for modular-style synths, because
> > they are essentially pure oscillators, right? Mono-synths can
> > still have init-latched values, releases, etc. Does it hurt
> > anything to give modular synths a VOICE control, if it buys
> > consistency at the higher levels? Maybe - I can be swayed on
> > that. But a modular synth is NOT the same as just any
> > mono-synth.
> (Analogue) monosynths do not have init-latched values. I guess if
> you're trying to mimic a digital monosynth you might want a VOICE,
> but I can't see how it would be anything but confusing when you're
> trying to implement a monosynth model.
Exactly. That's why I think it should be legal to not have a VOICE
control input, or just ignore the events. And I can't see any
problems with doing so.
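To make that concrete, here is a minimal sketch of a monosynth event handler that simply ignores VOICE events. All of the names (event types, structs) are hypothetical illustrations; the API discussed in this thread was never finalized, so this is only my reading of the idea:

```c
/* Hypothetical event types - illustration only, not a real API. */
enum { EV_VOICE, EV_PITCH, EV_GATE };

typedef struct { int type; float value; } Event;

typedef struct { float pitch; int gate; } MonoSynth;

/* A monosynth can legally ignore VOICE events altogether;
 * it only ever has one "voice", so there is nothing to address. */
static void monosynth_event(MonoSynth *s, const Event *ev)
{
    switch (ev->type) {
    case EV_PITCH: s->pitch = ev->value;          break;
    case EV_GATE:  s->gate  = (ev->value > 0.0f); break;
    case EV_VOICE: /* deliberately ignored */     break;
    }
}
```

The point is that the host can still send VOICE events to everything for consistency; a plugin that does not need them just falls through.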
> There are also effects that might want to receive note information,
> but wouldn't want (or expect) to have to deal with voices, eg.
> pitch shift.
That's OK, as long as the effects expect monophonic "instrument
control data". For a harmonizer (which is basically a polyphonic
pitch shifter), you'd still need to deal with VVIDs, just like
anything that wants to track more than one voice.
Anyway, these are perfect examples of why thinking of synths and
effects as different is just pointless. There's so much overlap that
we can't even agree on what to look at to tell them apart - so why
bother? The distinction is completely irrelevant anyway.
> Isn't the easiest thing just to make the instrument declare whether
> it's polyphonic or not? If it is (NB: it can have a polyphony of
> one), it will receive VVIDs; if not, it won't.
I think it would be much less confusing to just have a hint that
indicates whether or not the synth cares about VVIDs. If it has more
than one voice it will obviously have to use VVIDs, but this isn't
*really* because the synth is polyphonic; it's because it needs the
VVIDs for addressing.
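A hint like that could be sketched as a flag in the plugin descriptor. Again, every name here is a hypothetical illustration of the idea, not part of any finalized API:

```c
/* Hypothetical descriptor flag - illustration only. */
#define HINT_USES_VVIDS  (1u << 0)   /* plugin reads the vvid field */

typedef struct {
    const char *name;
    unsigned    hints;
    unsigned    max_voices;   /* may be 1 even when VVIDs are used */
} SynthDescriptor;

/* The host checks the hint, not the voice count, to decide
 * whether it has to allocate and send VVIDs at all. */
static int host_needs_vvids(const SynthDescriptor *d)
{
    return (d->hints & HINT_USES_VVIDS) != 0;
}
```

Note that `max_voices` and the hint are deliberately independent: a "polyphony of one" instrument could still ask for VVIDs, and a modular oscillator could decline them.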
This isn't all there is to it, though. You *can* implement a
polyphonic synth without real VVIDs. The distinction between real and
fake VVIDs I originally wanted to make relates only to the synth side
of VVIDs. Real VVIDs come with some "user space", whereas fake VVIDs
are just integers that are unique from the synth's POV.
So, there are actually *three* classes:

  No VVIDs:
        Fixed value, or VVID field not initialized.
        (Means you should never even *look* at them!)

  Fake VVIDs:
        Unique values only. You can use these to mark voices or
        whatever, internally, so you *can* still do polyphonic;
        it'll just be more expensive.

  Real VVIDs:
        Unique values that are also indices into a host-managed
        array of VVID entries. A VVID entry is a 32-bit integer
        that the synth may use in any way it desires. The idea is
        that synths can use this for instant voice lookup, instead
        of implementing hash-based searching or similar to address
        voices.
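The "real VVIDs" case can be sketched in a few lines. This is my own minimal illustration of the described scheme (host-managed 32-bit entries, synth-defined meaning), with hypothetical names and no bounds checking:

```c
#include <stdint.h>

/* Host-managed VVID entry table: one 32-bit word per VVID, which
 * the synth may use any way it likes (here: as a voice index). */
#define MAX_VVIDS  64
#define NO_VOICE   0xFFFFFFFFu   /* sentinel for "no voice attached" */

static uint32_t vvid_table[MAX_VVIDS];   /* owned by the host */

/* Synth side: stash a voice index under a real VVID...
 * (callers are assumed to pass vvid < MAX_VVIDS) */
static void voice_attach(uint32_t vvid, uint32_t voice_index)
{
    vvid_table[vvid] = voice_index;
}

/* ...and get it back in O(1) - no hashing or searching. */
static uint32_t voice_lookup(uint32_t vvid)
{
    return vvid_table[vvid];
}
```

With fake VVIDs the synth would instead have to keep its own map from the opaque integers to voices, which is exactly the hash-based searching the entry array is meant to avoid.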
//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---