On Wednesday 11 December 2002 02.43, Paul Davis wrote:
you are discussing an API that is intended to support
*instruments*.
And very few instruments understand musical time, and practically
none *should* think in terms of notes.
i didn't say anything about notes (which is why i deliberately used
a non-MIDI number to stand for a pitch code of some kind). see
below about musical time.
Well, it was the integer number that set off the alarm. ;-)
Just use time
(seconds, audio sample frames,...) and pitch (linear
pitch, Hz,...), and you'll eliminate the need for instruments to
understand musical time and scales, without imposing any
restrictions whatsoever upon them.
some people don't seem to agree with you about using frequency.
Nor do I! ;-) (It wasn't my suggestion.)
I only included Hz here for completeness, to suggest that anything
continuous will do, whereas integer note numbers will not.
any such API needs to be able to handle the
following kind of request:
at bar 13, beat 3, start playing a sound corresponding to
note 134, and enter a release phase at bar 14, beat 2.
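For reference, that request could be expressed as events stamped in musical time. This is only a sketch of what such a timestamp might look like; the struct, the field names, and the fixed 4/4 meter are all illustrative assumptions:

```c
/* Hypothetical event stamped in musical time (bar/beat, 1-based,
 * as in the example above). Not from any real API. */
typedef struct {
    int bar, beat;    /* musical timestamp */
    int type;         /* e.g. a hypothetical NOTE_ON / NOTE_RELEASE */
    int pitch_code;   /* e.g. the "note 134" pitch code */
} MusicalEvent;

/* Flatten bar/beat to absolute beats, assuming a fixed meter.
 * A real host would consult a meter map instead. */
static int musical_to_beats(const MusicalEvent *e, int beats_per_bar)
{
    return (e->bar - 1) * beats_per_bar + (e->beat - 1);
}
```

In 4/4, "bar 13, beat 3" flattens to beat 50 and "bar 14, beat 2" to beat 53; the question debated below is *where* that flattening should happen.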
This kind of information is relevant only in sequencers, and a few
special types of plugins. I don't see why the whole API should be
made significantly more complex and a lot slower, just to make
life slightly easier for the few that would ever consider writing
a plugin that cares about musical time.
i'm sorry, you're simply wrong here. tim's original proposal was
for an API centered around the needs of "instruments", not DSP
units. go take a look at the current set of VSTi's and you'll find
lots of them make some use of the concept of musical time,
particularly tempo.
Yes, I'm perfectly aware of this.
Yet, most of the *events* sent to these plugins do not have anything
to do with musical timing; the synth core just needs to know when to
perform certain control changes - and on that level, you care only
about audio time anyway.
you want the LFO to be tempo-synced?
Ask the host about the musical time for every N samples and sync your
LFO to that information.
All events will still have to be in, or be converted to, audio time
before they can be processed.
you want to
delay in the modulation section to follow the tempo?
I'm not sure what you mean here, but it sounds like you again need to
ask the host for the musical time at suitable intervals, and
calculate a delay time from that.
All events will still have to be in, or be converted to, audio time
before they can be processed.
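Calculating a tempo-synced delay time from such a query is a one-liner. A minimal sketch, assuming the tempo has already been obtained from the host:

```c
/* Delay length in audio samples for a given note value, e.g.
 * beats = 0.25 for a sixteenth note when one beat is a quarter note.
 * The tempo would come from a host query; here it is a parameter. */
static double tempo_delay_samples(double tempo_bpm, double beats,
                                  double sample_rate)
{
    double seconds_per_beat = 60.0 / tempo_bpm;
    return beats * seconds_per_beat * sample_rate;
}
```

At 120 BPM and 48 kHz, a one-beat delay is 24000 samples; again, only the *derivation* uses musical time, while the delay line itself runs purely in audio time.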
there are lots
of small, but musically rather important (and certainly pleasant)
capabilities that rely on musical time.
Yes indeed - but very few of them benefit from events being delivered
with timestamps in musical time format.
now, if you don't handle this by prequeuing events,
then that
simply means that something else has to queue the events and
deliver them at the right time.
That is the job of a sequencer. The job of the event system is to
transmit "messages" between ports with sample accurate timing.
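Sample-accurate delivery in audio time typically means the plugin splits its process block at each event. A sketch of that pattern, with illustrative names (the event struct and the split loop are assumptions, not a quote from any proposed API); events are assumed sorted by frame:

```c
/* Hypothetical event with an audio-time timestamp, relative to the
 * start of the current process block. */
typedef struct {
    unsigned frame;   /* audio frame within the block */
    int      type;    /* event type code */
    float    value;
} Event;

/* Process one block, splitting it at each event so every change
 * takes effect on exactly the right sample. Returns the number of
 * DSP sub-blocks rendered, just to make the split visible. */
static int process_block(const Event *ev, int nev, unsigned nframes)
{
    unsigned pos = 0;
    int i = 0, segments = 0;
    while (pos < nframes) {
        unsigned until = nframes;
        if (i < nev && ev[i].frame < nframes)
            until = ev[i].frame;
        if (until > pos) {
            /* render audio for frames [pos, until) here */
            segments++;
            pos = until;
        }
        /* apply every event scheduled exactly at 'pos' */
        while (i < nev && ev[i].frame == pos)
            i++;   /* apply(&ev[i]) would go here */
    }
    return segments;
}
```

Nothing in this loop needs to know about bars or beats; whoever queued the events has already placed them on the right audio frames.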
well, yes, the sequencer can do it, if indeed there *is* a
sequencer.
If there is not, you have only plain audio time. There is no musical time until
you throw a "timeline object" into the graph - and that may or may
not be part of the sequencer. (I would rather have it as a separate
plugin, which everyone - including the sequencer - gets audio and
transport time from.)
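Such a timeline object is, at its core, a mapping between audio time and musical time. A minimal sketch under a constant-tempo assumption (a real timeline would hold a tempo map with multiple segments; all names here are illustrative):

```c
/* Hypothetical timeline: maps audio frames <-> musical beats.
 * Constant tempo only, for the sake of the sketch. */
typedef struct {
    double sample_rate;   /* audio frames per second */
    double tempo_bpm;     /* constant tempo */
    double start_frame;   /* audio frame where beat 0 falls */
} Timeline;

/* Audio frame -> musical time in beats. */
static double timeline_frame_to_beats(const Timeline *t, double frame)
{
    double seconds = (frame - t->start_frame) / t->sample_rate;
    return seconds * t->tempo_bpm / 60.0;
}

/* Musical time in beats -> audio frame. */
static double timeline_beats_to_frame(const Timeline *t, double beats)
{
    double seconds = beats * 60.0 / t->tempo_bpm;
    return t->start_frame + seconds * t->sample_rate;
}
```

If this conversion lives in one place in the graph, the sequencer and any tempo-curious plugin can share it, and ordinary plugins never need to touch it.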
but this is an API we're talking about, and every
single
host that decides to use an API like this will end up needing to
prequeue events this way. i consider that wasteful.
I consider it utterly wasteful to force all plugins to convert back
and forth between timestamp formats, considering that the majority of
them will do fine with audio time in timestamps.
i agree with you that adding multiple timebases to an
events
timestamp field has a nasty edge of complexity to it. but i also
think you will find that the existing proprietary software world is
just beginning to understand the power of providing "virtual
instruments" with access to a rich and wonderful temporal
environment. i am concerned that you are losing sight of the
possibilities in favor of simplicity, and that it might turn out
that allowing events to be timestamped with musical time allows for
more flexibility.
Well, I'm interested in finding out what that flexibility might be.
I frankly can see only one real advantage of musical time in event
timestamps: Subsample accurate timing.
Considering that people have trouble even accepting sample accurate
timing as a required feature, *subsample* accurate timing would appear
to be of virtually no interest to anyone at all.
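For what it's worth, subsample accuracy does not actually require musical timestamps either; a fractional audio timestamp would do. A sketch using 16:16 fixed point (the layout and names are illustrative assumptions):

```c
#include <stdint.h>

/* Hypothetical subsample timestamp: 16 bits of whole audio frames,
 * 16 bits of fraction within the frame. */
typedef uint32_t SubFrame;

static SubFrame subframe_make(uint32_t frame, uint32_t frac)
{
    return (frame << 16) | (frac & 0xFFFFu);
}
static uint32_t subframe_whole(SubFrame t) { return t >> 16; }
static double   subframe_frac(SubFrame t)  { return (t & 0xFFFFu) / 65536.0; }
```

So even the one real advantage identified above could be had while keeping event timestamps in audio time.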
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---