On Wednesday 11 December 2002 5:19 pm, David Olofson wrote:
> (Oops. Replied to the direct reply, rather than via the list.
> Please, don't CC me - I'm on the list! :-)
Sorry, I just tend to hit "reply to all" because some lists seem to be set up
so that "reply" doesn't go to the list.
> I like the idea of enforced "explicit casting", but I think it's
> rather restrictive not to allow synths to take note_pitch. That
> would make it impossible to have synths with integrated event
> processors (including scale converters; although *that* might
> actually be a good idea).
That would be bad. If a synth takes note_pitch it's bound to interpret it as
12tET, which would be annoying to someone trying to use a different scale. A
synth could still have a built-in event processor, but it should only process
linear_pitch events. Scale converters should definitely not be built into
synths.
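
To make that concrete, here's a rough sketch in plain C (hypothetical names,
and linear_pitch assumed to be 1.0 per octave - none of this is the actual
API) of why the conversion belongs in a converter and not in the synth:

/* Scale converter: map a scale position (note_pitch, in scale steps)
 * to linear_pitch, assumed here to be 1.0 per octave, for an equal
 * tempered scale with notes_per_octave steps per octave. */
double note_to_linear(double note_pitch, double notes_per_octave)
{
    return note_pitch / notes_per_octave;
}

/* A synth that accepts note_pitch directly can only guess the scale,
 * and in practice that guess will be 12tET: */
double synth_builtin_guess(double note_pitch)
{
    return note_pitch / 12.0;   /* wrong for 19tET, 31tET, ... */
}

Feed a 19tET sequence through the second function and every interval comes
out wrong; keep the synth on linear_pitch only, and the choice of scale stays
in the converter, where the user can swap it.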
> Either way, there will *not* be a distinction between synths and
> other plugins in the API. Steinberg made that mistake, and has been
> forced to correct it. Let's not repeat it.
I wasn't thinking so much of an API distinction as a very well-documented
convention. Also, I was thinking more of the distinction being between
scale-related event processors and everything else, rather than synths and
everything else, which I agree would be bad.

You could enforce it with rules like "if it's got a note_pitch input port
it's not allowed to have any other kind of port, except in the case of a
plugin with one note_pitch input and one linear_pitch output, which is a
scale converter" - but there might be the odd case where these rules don't
make sense.
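
As a rough sketch, the check could look something like this on the host side
(purely hypothetical structs and names, and ignoring those odd cases where
the rule breaks down):

#include <stddef.h>

/* Hypothetical port description - not from any real API. */
typedef enum { PORT_NOTE_PITCH, PORT_LINEAR_PITCH, PORT_OTHER } PortType;
typedef struct { PortType type; int is_input; } Port;

/* A plugin obeys the convention if it has no note_pitch inputs at
 * all, if all of its ports carry note_pitch (a scale-related event
 * processor), or if it is exactly a scale converter: one note_pitch
 * input plus one linear_pitch output and nothing else. */
int obeys_pitch_convention(const Port *ports, size_t n)
{
    size_t note_in = 0, note_ports = 0, linear_out = 0;
    for (size_t i = 0; i < n; ++i) {
        if (ports[i].type == PORT_NOTE_PITCH) {
            ++note_ports;
            if (ports[i].is_input)
                ++note_in;
        } else if (ports[i].type == PORT_LINEAR_PITCH && !ports[i].is_input) {
            ++linear_out;
        }
    }
    if (note_in == 0)
        return 1;              /* never takes note_pitch: no restriction */
    if (note_ports == n)
        return 1;              /* only note_pitch ports: scale-related processor */
    return n == 2 && note_in == 1 && linear_out == 1;   /* scale converter */
}

The point is only that a host could verify the convention mechanically; the
convention itself stays documentation, not part of the API.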
>> If you have an algorithm that needs to know something about the
>> actual pitch rather than position on a scale then it should
>> operate on linear_pitch instead.
> Yes indeed - that's what note_pitch vs linear_pitch is all about.
>> I think that in this scheme note_pitch and linear_pitch are two
>> completely different things and shouldn't be interchangeable.
> You're right. Allowing implicit casting in the 1tET case is a pure
> performance hack.
>> That way you can enforce the correct order of operations:
>>
>>   Sequencer
>>       | note_pitch signal
>>       V
>>   scaled pitch bend (eg +/- 2 tones) /
>>   arpeggiator / shift along scale /
>>   other scale-related effects
>>       | note_pitch signal
>>       V
>>   scale converter (could be trivial)
>>       | linear_pitch signal
>>       V
>>   portamento / vibrato /
>>   relative-pitch arpeggiator /
>>   interval-preserving transpose /
>>   other frequency-related effects
>>       | linear_pitch signal
>>       V
>>   synth
>>
>> That way anyone who doesn't want to worry about notes and scales
>> can just always work in linear_pitch and know they'll never see
>> anything else.
> Yes. But anyone who doesn't truly understand all this should not go
> into the advanced options menu and check the "Allow implicit casting
> of note_pitch into linear_pitch" box.
>
> So, I basically agree with you. I was only suggesting a host-side
> performance hack for 1.0/octave diehards. It has nothing to do with
> the API.
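
If I follow, that hack amounts to something like the connection check below
(hypothetical names, nothing from the API): the cast is a plain pass-through,
which is only value-correct when one scale step equals one octave - the 1tET
case - so note_pitch already coincides with 1.0/octave linear_pitch. In the
chain above it just stands in for a trivial scale converter:

/* Hypothetical signal types and host option - illustration only. */
typedef enum { SIG_NOTE_PITCH, SIG_LINEAR_PITCH } SigType;

typedef struct {
    int allow_implicit_cast;   /* the "advanced options" checkbox */
} HostOptions;

/* Returns 1 if the host may wire the output straight to the input,
 * 0 if the user has to insert a scale converter in between. */
int can_connect(SigType out, SigType in, const HostOptions *opt)
{
    if (out == in)
        return 1;                        /* like to like: always fine */
    if (out == SIG_NOTE_PITCH && in == SIG_LINEAR_PITCH)
        /* Implicit cast: the value passes through unchanged, which
         * is only correct when the scale maps one note to one
         * octave (1tET / 1.0 per octave). Off by default. */
        return opt->allow_implicit_cast;
    return 0;                            /* linear_pitch -> note_pitch: never */
}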
> //David Olofson - Programmer, Composer, Open Source Advocate
>
> .- The Return of Audiality! --------------------------------.
> | Free/Open Source Audio Engine for use in Games or Studio. |
> | RT and off-line synth. Scripting. Sample accurate timing. |
> `---------------------------> http://olofson.net/audiality -'
> .- M A I A -------------------------------------------------.
> |    The Multimedia Application Integration Architecture    |
> `----------------------------> http://www.linuxdj.com/maia -'
>    --- http://olofson.net --- http://www.reologica.se ---