On Saturday 14 December 2002 11.15, Steve Harris wrote:
> On Sat, Dec 14, 2002 at 02:44:48 +0100, David Olofson wrote:
> > > Right, so don't allow plugins to talk notes... I still don't
> > > think it's necessary, it's just programmer convenience.
> > It's actually more *user* convenience than programmer
> > convenience. Programmers that work with traditional theory will
> > have to work with <something>/note regardless, but users won't be
> > able to tell for sure which plugins expect or generate what,
> > since it just says "PITCH" everywhere. Is that acceptable, or
> > maybe even desirable?

> Er, well, most people will just let the host do the wiring for
> them. So it will all work fine.

...as long as they put the plugins in the right order.

> If they do the wiring themselves then they will wire pitch output
> to pitch input and it will all work. There's no possibility of a
> pitch data mismatch, because there's only one format.

Running linear pitch with a scale applied into a plugin that expects
<something>/note is not a mismatch? So, how is that plugin going to
figure out what pitch in the input corresponds to which note in the
scale?
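
(To make the problem concrete, here's a rough sketch in plain C -
all names are made up for illustration, nothing here is from any
real or proposed API, and I'm taking "linear pitch" to mean
1.0/octave. It shows what a note based plugin would have to do to
recover a note index from a pitch that has already been tempered:
an inverse lookup against the very scale table it was never
supposed to need.)

#include <math.h>

/* Hypothetical scale description - just for illustration.
 * offsets[i] is how far note i deviates from N-tone ET, in
 * octaves, i.e. in linear pitch units. */
typedef struct
{
    int    notes_per_octave;    /* e.g. 12 */
    double offsets[12];         /* the "tweaks" a converter applies */
} Scale;

/* Given a *linear* pitch value (1.0/octave) that has already been
 * through a scale converter, try to figure out which note it was
 * meant to be. This cannot be done without the exact scale table -
 * which is precisely the knowledge a note based plugin shouldn't
 * need if it were handed 1.0/note in the first place. */
static int pitch_to_note(double pitch, const Scale *s)
{
    double frac = pitch - floor(pitch);   /* position within octave */
    double best_err = 2.0;
    int best = 0;
    int i;
    for(i = 0; i < s->notes_per_octave; ++i)
    {
        double p = (double)i / s->notes_per_octave + s->offsets[i];
        double err = fabs(p - frac);
        if(err < best_err)
        {
            best_err = err;
            best = i;
        }
    }
    return (int)floor(pitch) * s->notes_per_octave + best;
}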

> > Fine, it works for me, but I'm not sure I know how to explain
> > how this works to your average user.

> It won't need explaining, it's blatantly obvious, unlike if you
> have pitch and note pitch, when it's not obvious whether they will
> be compatible or not (even to the host).

I don't see how it's blatantly obvious that things will work if you
put the plugins in one order, whereas it will not work at all if you
put them in some other order.

> > > If you don't have it, there can't be any compatibility
> > > problems.

> > How can you avoid compatibility problems by sending two different
> > kinds of data, while pretending they are the same?

> There aren't two kinds of data, there's just pitch.

> > Useful when you think only in terms of linear pitch, yes. When
> > you do anything related to traditional music theory, you'll have
> > to guess which note the input pitch is supposed to be, and then
> > you'll have to guess what scale is desired for the output.

> This is true of all the systems we've discussed.

No. There is no guessing if you know you have 1.0/note. You can apply
traditional theory in NtET space, and then translate the result into
a "tweaked" tempering, without the traditional theory plugin having
to understand anything about tempering.
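
(A rough sketch of what I mean, again in plain C with made-up names -
the theory plugin works purely in 1.0/note, and the scale converter
that follows it owns all knowledge of the tempering:)

#include <math.h>

/* Stage 1: "traditional theory" in note space. The plugin thinks
 * in 1.0/note and assumes 12 notes/octave in the *musical* sense,
 * but it never needs to know how those notes are tempered. */
static void major_triad(double root, double chord[3])
{
    chord[0] = root;         /* root          */
    chord[1] = root + 4.0;   /* major third   */
    chord[2] = root + 7.0;   /* perfect fifth */
}

/* Stage 2: the scale converter, placed *after* the theory plugin.
 * cents[i] says how far note i deviates from 12tET. Output is
 * linear pitch, taking that to mean 1.0/octave. */
static double note_to_linear_pitch(double note, const double cents[12])
{
    int n = (int)floor(note + 0.5);              /* nearest note */
    double tweak = cents[((n % 12) + 12) % 12] / 1200.0;
    return note / 12.0 + tweak;                  /* 12 notes/octave */
}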

> > Nothing more sophisticated than autocomp plugins (rhythm +
> > harmonizer, basically) and other plugins based on traditional
> > music theory. Things that are relatively easy to implement if you
> > can assume that input and output are note based, and disregard
> > tempering of the scale, within reasonable limits. They still work
> > with non-ET scales, because that translation is done elsewhere.
> > (In the synths - but not many of them actually support it,
> > AFAIK... *heh*)

> Right, and none of this stuff is any harder if you just support
> pitch. In either case you need to know what scale it's in.

No, you don't - that's the whole point. You only have to know
*approximately* what scale is used. 1.0/note is just 1.0/note, even
if the scale converter (that must be placed *after* the note/scale
based plugins) converts it into some non-ET scale.

Obviously, you cannot force 12tET-based theory to apply to 16t - but
there is no point in trying to do that anyway; it's a completely
different theory, so you'll need a different plugin for it. You
*can*, however, use any 12t-based plugin, with any 12t scale, as long
as the relative pitch of the notes in the scale doesn't deviate too
much from what the plugin was designed for.
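
(For example - and this is just one tuning picked for illustration -
a 12-tone 5-limit just intonation scale could be carried by the
converter as a simple table of approximate cent deviations from
12tET:)

/* One possible 12-tone scale that is *not* 12tET: a common 5-limit
 * just intonation variant, written as approximate cent deviations
 * from 12tET for each of the 12 notes. */
static const double just_intonation_cents[12] = {
      0.0, +11.7,  +3.9, +15.6, -13.7,  -2.0,
     -9.8,  +2.0, +13.7, -15.6,  -3.9, -11.7
};

The largest tweak here is under 16 cents, so a 12t-based plugin
upstream - which only ever sees 1.0/note - still gets what it
expects to within a small fraction of a semitone; the converter
just nudges the final linear pitch.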

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---