On Sunday 15 December 2002 03.53, Tim Goetze wrote:
> David Olofson wrote:
> > Well, considering that we seem to have virtually *no* input from
> > people with solid experience with software sequencers or
> > traditional music theory based processing, I suggest we either
> > decide to build a prototype based on what we *know*, or put XAP
> > on hold until we manage to get input from people with real
> > experience in more fields.
> it's (mostly) all there for you to read.
Well, yes - but is there any sequencer out there that deals with the
basic timeline + sequencer stuff?
> there's a pure sequencer engine called 'tse3' out there that is
> somebody's third go at the sequencer alone. no matter what its
> merits are, reading its source is bound to give you an idea how
> things come together in a sequencer (hosted at sf.net iirc).
Sounds interesting. Will check.
> and there's always muse which, iirc, also comprises audio.
Just started browsing MusE source, actually. Discovered that the
synth API indeed uses *audio* timestamps (as expected), so I'll have
to look at the sequencer core for whatever we might be missing here.
> > * Is an explicitly scale related pitch control type needed?
> 1. / octave is the politically correct value i guess.
> 12. / octave is what i am happy with.
> since transformation between the two is a simple multiplication,
> i don't care much which gets voted.
That's not really what it's about. IMHO, we should have 1/octave for
basically "everything", but also something that's officially hinted
as "virtual pitch", which is related to an unspecified scale, rather
than directly to pitch.
Example:
1. You have integer MIDI notes in.
2. You have a scale converter. The scale is not 12tET,
   but a "pure" 12t scale, that results in better
   sounding intervals in a certain key.
3. Output is linear pitch (1.0/octave).
In this example, the scale converter would take what I call note
pitch, and generate linear pitch. Note pitch - whether it's expressed
as integer notes + pitch bend, or as continuous pitch - is virtual; it
is not what you want to use to control your synth. Linear pitch is
the *actual* pitch, that will drive the pitch inputs on synths.
So far, it's no big deal what you call the two; you could just
say that you have 12tET before the converter, and 12t pure
temperament after it.
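
For what it's worth, here's a minimal sketch of what such a converter
could look like - 12.0/octave note pitch in, 1.0/octave linear pitch
out, with a just intonation table standing in for "a 'pure' 12t
scale". All names are made up, of course; XAP defines nothing like
this yet:

#include <math.h>

/* Just intonation ratios for the 12 scale degrees; tonic = 1/1 */
static const double pure_ratio[12] = {
    1.0, 16.0/15.0, 9.0/8.0, 6.0/5.0, 5.0/4.0, 4.0/3.0,
    45.0/32.0, 3.0/2.0, 8.0/5.0, 5.0/3.0, 9.0/5.0, 15.0/8.0
};

/* note_pitch: continuous, 12.0/octave, 0.0 = the tonic.
 * Returns linear pitch, 1.0/octave. */
double scale_convert(double note_pitch)
{
    int    note   = (int)floor(note_pitch);
    double bend   = note_pitch - note;  /* fractional part */
    int    degree = ((note % 12) + 12) % 12;
    int    octave = (note - degree) / 12;
    /* table lookup for the degree; bend approximated as 12tET */
    return octave + log2(pure_ratio[degree]) + bend / 12.0;
}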
Now, if you wanted to insert an effect that looks at the input and
generates a suitable chord, where would you put it, and how would you
implement it?
Hint: There are two answers; one relatively trivial, and one that's
both complicated and requires that both plugins are completely
aware of the details of the pure 12t scale.
If this does not demonstrate why I think NOTEPITCH is useful, I
frankly have no idea how to explain it, short of implementing both
alternatives in code.
> > * Is there a good reason to make event system timestamps
> >   relate to musical time rather than audio time?
> yes. musical time is, literally, the way a musician perceives
> time. he will say something like "move the snare to the sixteenth
> before beat three there" but not "move it to sample 3440004."
Of course - but I don't see how that relates to audio timestamps.
Musical time is something that is locked to the sequencer's timeline,
whereas audio time is simply a running sample count.
Whenever the sequencer is running (and actually when it's stopped as
well!), there is a well defined relation between the two. If there
weren't, you would not be able to control a softsynth from the
sequencer with better than totally random latency!
Within the context of a plugin's process()/run() call, the sequencer
will already have defined the musical/audio relation very strictly,
so it doesn't matter which one you get - you can always translate.
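
For instance, within one block the translation could be as simple as
this - assuming the host pins down the relation for the duration of
the block (struct and field names are pure speculation on my part):

/* Sketch: musical -> audio time within one process() block */
typedef struct {
    double ticks_at_frame0; /* musical time of the block's 1st frame */
    double ticks_per_frame; /* tempo; constant across the block here */
} XAP_timeinfo;

static inline int ticks_to_frame(const XAP_timeinfo *ti, double ticks)
{
    return (int)((ticks - ti->ticks_at_frame0) / ti->ticks_per_frame);
}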
You can *always* translate? Well, not quite! Read on.
> the system should do its best to make things transparent to the
> musician who uses (and programs) it; that is why i am convinced
> the native time unit should relate to musical time.
OTOH, musical time is really rather opaque to DSP programmers, and
either way, has to be translated into audio time, sooner or later,
one way or another. I find it illogical to use a foreign unit in a
place where everything else is bound to "real" time and samples.
And that's not all there is to it. Surprise:
Musical time *stops* when you stop the sequencer, which means that
plugins can no longer exchange timestamped events! You may send and
receive events all you like, but they will all have the same
timestamp, and since time is not moving, you're not *allowed* to
handle the events you receive. (If you did, sample accurate timing
would be out the window.)
So, for example, you can't change controls on your mixer, unless you
have the sequencer running. How logical is that in a virtual studio?
How logical is it not to be able to play a synth *at all*, unless
"something" fakes a musical timeline for it?
Should plugins have a special case event handler for when time stands
still? If so, what would it do? How could it allow the automation's
nice declick ramping on PFL buttons and the like to work, if there is
no running time to relate the ramp durations to?
I'm afraid this simply won't work in any system that doesn't assume
that the whole world stops when you stop the sequencer.
> i do not think it should be explicitly 'bar.beat.tick', but
> total ticks that get translated when needed. this judgement is
> based on intuition rather than fact i fear. for one thing, it
> makes all arithmetic and comparisons on timestamps a good deal
> less cpu-bound. it is simpler to describe. however, in many
> algorithms it makes the % operator necessary instead of a
> direct comparison. otoh, the % operator can be used effectively
> to cover multi-bar patterns, which is where the bbt scheme
> becomes less handy.
Right. Anything like b.b.t assumes that you actually *have* bars and
beats, which may not be relevant at all, depending on what you're
doing. IMHO, it should not be hardcoded into sequencers, and
definitely not into APIs.
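
Just to make the arithmetic concrete, here's total ticks vs b.b.t as
a sketch, with the meter and resolution hardcoded purely for
illustration (a real implementation would read them from the meter
map instead):

#define PPQN          192  /* ticks per beat - illustration only   */
#define BEATS_PER_BAR 4    /* fixed 4/4, again just for the sketch */

typedef struct { int bar, beat, tick; } BBT;

static BBT ticks_to_bbt(int ticks)
{
    BBT t;
    t.tick = ticks % PPQN;
    t.beat = (ticks / PPQN) % BEATS_PER_BAR;
    t.bar  = ticks / (PPQN * BEATS_PER_BAR);
    return t;
}

/* The '%' trick: position within a two-bar pattern */
static int pattern_pos(int ticks)
{
    return ticks % (2 * PPQN * BEATS_PER_BAR);
}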
> > * Should plugins be able to ask the sequencer about *any*
> >   event, for the full length of the timeline?
> you're perfectly right in saying that all events destined for
> consumption during one cycle must be present when the plugin
> starts the cycle. i do not think it is sane to go beyond this
> timespan here.
That's what I'm thinking - and that's where the event system I'm
talking about comes in. It's nothing but an alternative to audio rate
controls, basically.
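
That is, instead of one control value per sample, the plugin just
splits its block at each event's timestamp. Roughly like this (the
event struct is hypothetical):

typedef struct XAP_event {
    unsigned frame;             /* offset within this block */
    int      ctrl;              /* which control to change  */
    float    value;
    struct XAP_event *next;
} XAP_event;

/* Render 'frames' samples, applying events sample accurately */
void run(float *out, unsigned frames, XAP_event *ev, float ctrls[])
{
    unsigned pos = 0;
    while (pos < frames) {
        unsigned end = ev ? ev->frame : frames;
        for (unsigned i = pos; i < end; ++i)
            out[i] = ctrls[0];  /* stand-in for the actual DSP */
        if (ev) {
            ctrls[ev->ctrl] = ev->value;
            ev = ev->next;
        }
        pos = end;
    }
}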
> however time conversion functions must exist that give
> valid results for points past and future with respect to
> the current transport time in order to correctly schedule
> future events.
Yes, I realize now that you have a good point here; if you know how
far into the future an event is to be executed, you don't have to
reevaluate it once per block. This could be considered a performance
hack, *provided* it actually costs less to evaluate a correct time
for a future event than to check once per block. I *think* it would.
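
Something like this, I suppose - assuming some kind of host
conversion call (the name is made up):

/* Sketch: the "schedule once" hack. Convert the musical time up
 * front, instead of re-testing the event every block. The result
 * is valid only until the next timeline edit or transport event! */
typedef struct XAP_host XAP_host;
extern long xap_ticks_to_frame(XAP_host *h, double ticks); /* assumed */

long schedule_event(XAP_host *host, double event_ticks)
{
    return xap_ticks_to_frame(host, event_ticks);
}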
That said, there is one very important thing to remember when using
this feature: No one knows anything about the future!
You *may* know about the future along the sequencer's timeline, but
you do *not* know what the relation between audio time and musical
time will be after the end of the current buffer.
After you return from process(), with events scheduled (be it with
sample counts or musical time as timestamps), someone might commit an
edit to the timeline, there could be a transport stop, or there could
be a transport jump. In any of those cases, you're in trouble.
If you have an audio rate timestamp, it simply becomes invalid, and
must be recalculated.
If it's a musical timestamp, you may end up leaking events. For
example, in a loop, you would end up scheduling some events after the
loop, over and over again. The events would never be delivered, and
thus, never removed.
Of course, in the first case, you can just invalidate all scheduled
events. (But make sure you have the original musical timestamps
somewhere! :-) In the latter case, you can deal with transport
events, and remove anything (what?) you won't be needing.
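
In code, that just means the musical timestamp stays authoritative
and the audio timestamp is only a cache; a sketch (names
illustrative):

typedef struct {
    double ticks;  /* authoritative musical time         */
    long   frame;  /* cached audio time, derived from it */
    int    valid;  /* 0 after a timeline edit or jump    */
} SchedEvent;

/* On a timeline edit or transport jump: drop the caches only.
 * Re-derive 'frame' from 'ticks' before the next block. */
void invalidate_schedule(SchedEvent *ev, int n)
{
    for (int i = 0; i < n; ++i)
        ev[i].valid = 0;
}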
No matter how you turn this stuff about, some things get a bit hairy.
The most important thing to keep in mind though, is that some designs
make some things virtually *impossible*.
> > * Is there a need for supporting multiple timelines?
> this is a political decision,
I disagree. It's also a technical decision. Many synths and effects
will sync with the tempo, and/or lock to the timeline. If you can
have only one timeline, you'll have trouble controlling these plugins
properly, since they treat the timeline pretty much like a "rhythm"
that's hardcoded into the timeline.
It's not just about moving notes around.
> and it's actually a decision you have to make twice: one --
> multiple tempi at the same point, and two -- multiple ways to
> count beats (7/8 time vs 3/4 time vs 4/4 time etc) in concurrence.
Well, if you have two tempo maps, how would you apply the "meter
map"? I guess the meter map would just be a shared object, and that
meter changes are related to musical time of the respective map, but
there *are* (at least theoretically) other ways.
Either way, IMHO, these belong together. (And SMPTE, MIDI clock and
stuff belongs there as well; all that makes "one timeline".)
> being politically quite incorrect, i am happy supporting only
> one tempo and one time at the same point. imagine how
> complicated things get when you answer 'yes' two times above,
> and add to this that i can describe the music i want to make
> without (even standard polyrhythmic patterns because they
> usually meet periodically).
It doesn't seem too complicated if you think of it as separate
sequencers, each with a timeline of its own... They're just sending
events to various units anyway, so what's the difference if they send
events describing different tempo maps as well?
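
A tempo map change could just be one more event type, tagged with the
timeline it describes - roughly (all names invented):

typedef struct {
    unsigned frame;    /* audio time of the change           */
    int      timeline; /* which timeline this map belongs to */
    double   ticks;    /* musical position at 'frame'        */
    double   tempo;    /* ticks per second from here on      */
} TempoEvent;

Plugins that lock to a timeline would simply pick the events with the
right timeline id and ignore the rest.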
> multiple tempi are really uncommon, and tend to irritate
> listeners easily.
Well, I can understand that... :-)
Though, it seems to me that it might be rather handy when dealing
with soundscapes and similar stuff. Most of the time, you'll
basically have no explicit tempo at all, but if you want to throw in
some little tunes and stuff... Imagine a large 3D soundscape that you
can walk around in; pretty much like a game environment. Some radio
playing music somewhere. Someone whistling...
How about having a sequencer understand Q3A maps with special speaker
entities, so you can create complete 3D soundscapes for maps right in
the sequencer? ;-)
> > * Is it at all possible, or reasonable, to support
> >   sequencers, audio editors and real time synths with
> >   one, single plugin API?
> the sequencer definitely needs a different kind of connection
> to the host. in fact it should be assumed it is part of, or
> simply, it is the host i think.
Maybe it will be in most cases, but I can't see any real reasons why
you couldn't implement it as a reasonably normal plugin. Indeed, it
would need a "private" interface for interaction with its GUI, but
that's to be expected.
> for simple hosts, default time conversion facilities are really
> simple to implement: one tempo and one time at transport time
> zero does it. conversion between linear and musical time is
> then a simple multiplication.
Yes, but you still have to deal with transport events. No big deal,
though; you just have to tell everyone that cares about them, so they
can adjust their internal "song position counters" at the right time.
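
For example, a tempo synced plugin might handle a (hypothetical)
transport event like this:

typedef enum { TRANSPORT_STOP, TRANSPORT_START, TRANSPORT_JUMP }
        TransportType;

typedef struct {
    unsigned      frame;  /* when, within the current block */
    TransportType type;
    double        ticks;  /* new musical position, for JUMP */
} TransportEvent;

/* Adjust the internal song position counter at the right time */
void on_transport(const TransportEvent *e, double *song_pos)
{
    if (e->type == TRANSPORT_JUMP)
        *song_pos = e->ticks;  /* resync; LFOs etc. re-phase here */
}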
> audio editors, i don't know. if you call it 'offline processing'
> instead i ask where's the basic difference to realtime.
For most plugins, there won't be a difference. It only matters to
driver plugins (and you don't use audio and MIDI drivers for off-line
processing), plugins with background worker threads, and plugins with
latency compensated interfaces for their GUIs. Even for many of
these, it's no big deal; they'll work anyway, although you'll
obviously get silly results in some cases.
> real time synths -- wait, that's the point, isn't it? ;)
Well, I *think* so... ;-)
//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---