On Tue, Dec 10, 2002 at 10:38:52 +0200, Sami P Perttu wrote:
> First, I don't understand why you want to design a "synth API". If you
> want to play a note, why not instantiate a DSP network that does the job,
> connect it to the main network (where system audio outs reside), run it
> for a while and then destroy it? That is what events are in my system -
> timed modifications to the DSP network.
For interoperability. It's definitely worth something if your synth's LFO
output can drive another synth's LFO input to get them running in sync.
> On the issue of pitch: if I understood why you want a synth API I would
> prefer 1.0/octave because it carries less cultural connotations. In my
> system (it doesn't have a name yet but let's call it MONKEY so I won't
> have to use the refrain "my system") you just give the frequency in Hz,
> there is absolutely no concept of pitch. However, if you want, you can
The problem with using frequency throughout is that it makes certain
common kinds of operation expensive (e.g. pitch modulation).
Received wisdom is that it's best to work with pitch data at the
generation, sequencing and modulation stages, then convert to the
oscillator's native form internally. It reduces the number of conversions
on average.
- Steve