[linux-audio-dev] XAP: a polemic

David Olofson david at olofson.net
Sun Dec 15 15:08:00 UTC 2002


On Sunday 15 December 2002 16.43, Tim Goetze wrote:
> David Olofson wrote:
> >If this does not demonstrate why I think NOTEPITCH is useful, I
> >frankly have no idea how to explain it, short of implementing both
> >alternatives in code.
>
> i agree that the ability to discern different scales is handy
> indeed. but the only clean way to implement it is by going
> back to integer note numbers.

Why? I don't see why you can't interpret "1.5" as being in between 1 
and 2.

Using a scale does not imply that there is no pitch bend or continuous 
pitch control. It certainly doesn't for 12tET, and at least my 1080 
doesn't disable pitch bend if I tweak the scale...
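
To be concrete, here's a minimal sketch of what I mean (not actual 
XAP API - the table and names are made up, and note numbers are 
counted in scale steps for clarity). An integer lands exactly on a 
scale note; 1.5 lands halfway between steps 1 and 2:

#include <math.h>

#define STEPS 12

/* One octave of scale steps, in 1.0/octave units; 12tET here, but
   any other tuning goes in the same table. */
static const double scale[STEPS + 1] = {
    0.0/12, 1.0/12, 2.0/12, 3.0/12, 4.0/12, 5.0/12, 6.0/12,
    7.0/12, 8.0/12, 9.0/12, 10.0/12, 11.0/12, 1.0
};

/* NOTEPITCH (scale steps, possibly fractional) -> PITCH (1.0/octave). */
double notepitch_to_pitch(double notepitch)
{
    double octave = floor(notepitch / STEPS);
    double step = notepitch - octave * STEPS;   /* in [0, STEPS) */
    int lo = (int)step;
    double frac;
    if (lo > STEPS - 1)    /* guard against rounding at the octave edge */
        lo = STEPS - 1;
    frac = step - lo;
    return octave + scale[lo] + frac * (scale[lo + 1] - scale[lo]);
}

So continuous pitch control works just fine; the scale only decides 
where the integer steps land.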


> this definitely is something plugins shouldn't have to care
> about i think.

So, let's not have event processors that deal with basic, traditional 
theory...?


> a reasonable approach is to leave the generation
> of correct pitch values to the sequencer -- if it does in fact
> support multiple tunings, it is bound to know how they map to
> pitch.

I disagree. People will not want to implement simple arpeggiators and 
harmonizers that break down as soon as you tweak the 12tET scale a 
little. Do we *have* to force this sort of stuff (that even VST can 
do!) into another API?


> no need to duplicate this knowledge in the api.

What knowledge would need to be in the API?


> or just go 12. / octave and meet me in politically incorrect
> westerners' hell

No way! (Ok, I'm going there anyway, but that's beside the point. :-)

1.0/octave.

Whatever NOTEPITCH becomes, I don't really give a damn - as long as 
the distinction is possible to make for those that care. Call it 
PITCH, and have an even "softer" hint (NOTE) somewhere else. Whatever.

NOTEPITCH is the exact same thing as PITCH *only* when you're dealing 
with ET scales, or when you don't care about scales at all. You may 
think of both as 1.0/octave. The only difference is that *when* you 
have a scale converter anywhere, NOTEPITCH is what you have *before* 
the scale converter, and PITCH is what you have after it.

If the scale is not ET, some event processors will *definitely* care 
which side you put them on, so I suggest it would be kind of nice if 
there was any way at all to tell what the plugin author intends.
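
For a concrete (made-up) illustration, take a harmonizer that adds a 
third, and a just intonation major scale. Whether it sits before or 
after the scale converter completely changes what it does:

/* Sketch, made-up names: the same "add a third" harmonizer on either
   side of a scale converter. Pitches are in 1.0/octave units;
   'degree' is a scale degree >= 0. */
static const double just_major[7] = {
    0.000000,   /* 1/1  */
    0.169925,   /* 9/8  */
    0.321928,   /* 5/4  */
    0.415037,   /* 4/3  */
    0.584963,   /* 3/2  */
    0.736966,   /* 5/3  */
    0.906891    /* 15/8 */
};

/* Before the converter (NOTEPITCH side): add two scale degrees.
   The harmony voice always lands on a note of the scale. */
double third_in_note_space(int degree)
{
    int d = degree + 2;
    return (d / 7) + just_major[d % 7];
}

/* After the converter (PITCH side): add a fixed 12tET major third.
   The harmony voice usually misses the scale entirely. */
double third_in_pitch_space(int degree)
{
    return (degree / 7) + just_major[degree % 7] + 4.0 / 12.0;
}

third_in_note_space(0) gives the 5/4 third (0.3219 octaves); 
third_in_pitch_space(0) gives 0.3333, which isn't in the scale at 
all. That's exactly the kind of breakage the hint is meant to avoid.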


> -- i'll buy you a beer there if you manage
> to keep your future posts a little shorter.

Ok, no beer for me! ;-)


> >Musical time *stops* when you stop the sequencer, which means that
> >
> >So, for example, you can't change controls on your mixer, unless
> > you have the sequencer running. How logical is that in a virtual
> > studio?
>
> this is a non-issue. if time stops but processing does not,
> all plugins keep processing events <= 'now', and the events
> they emit are stamped 'now'. that's how i do it anyway.

(Oops, there goes sample accurate timing when you don't have a 
sequencer...)

How would you go about implementing an event delay effect?
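
For reference, this is roughly what I mean by an event delay (sketch 
with made-up types, not a real API; holding events that fall beyond 
the current block until a later block is left out for brevity):

typedef struct {
    unsigned timestamp;   /* audio time, in sample frames */
    int      type;
    double   value;
} event_t;

/* Shift every incoming event 'delay_frames' into the future. */
void event_delay(const event_t *in, int n, event_t *out,
                 unsigned delay_frames)
{
    int i;
    for (i = 0; i < n; ++i) {
        out[i] = in[i];
        out[i].timestamp = in[i].timestamp + delay_frames;
    }
}

It only works if timestamps keep running in audio time, transport or 
not. If everything gets restamped "now" the moment the sequencer 
stops, the delay - and sample accuracy with it - is simply gone.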


> you can proceed counting time in stopped state, but it's
> void of any meaning within transport time to do so, whether
> you're counting samples or beats.

I'd really rather let plugins decide how they want to do it. 
Depending on what you're doing, and whether you're syncing or 
locking, there are many different answers - and cutting off the 
notion of time from all plugins makes half of them impossible to 
implement.

You can cheat internally in a plugin, but what do you do in an event 
processor? Unless *you* are sending timeline events to it, you can't 
even lie to it so that it accepts your faked timestamps.


> [bbt system]
>
> >IMHO, it should not be hardcoded into sequencers, and
> >definitely not into APIs.
>
> in fact i find myself happy without bbt mappings in the
> sequencer core, yes. others may have other needs.

Well, there *is* a relation between bars:beats and ticks, so you can 
always convert...
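
Something along these lines (sketch; constant meter assumed - a real 
sequencer would look the meter up in a tempo/meter map, since it can 
change along the timeline):

/* bars:beats:ticks <-> flat tick count, everything zero-based. */
typedef struct {
    int beats_per_bar;    /* e.g. 4 */
    int ticks_per_beat;   /* e.g. 480 */
} meter_t;

long bbt_to_ticks(const meter_t *m, int bar, int beat, int tick)
{
    return ((long)bar * m->beats_per_bar + beat) * m->ticks_per_beat
           + tick;
}

void ticks_to_bbt(const meter_t *m, long ticks,
                  int *bar, int *beat, int *tick)
{
    long beats = ticks / m->ticks_per_beat;
    *tick = (int)(ticks % m->ticks_per_beat);
    *beat = (int)(beats % m->beats_per_bar);
    *bar  = (int)(beats / m->beats_per_bar);
}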


> >You *may* know about the future along the sequencer's timeline,
> > but you do *not* know what the relation between audio time and
> > musical time will be after the end of the current buffer.
>
> you *will* know if you have a central tempo map -- which you
> have when you have a sequencer around, or instead confine
> yourself to one tempo, one beat, ie. the linear mapping case.

This won't help if you're locking to an external device. (I'm sure 
Paul can explain this a lot better - I only seem to confuse people 
most of the time...)


> >After you return from process(), with events scheduled (be it with
> > sample count or musical time as a timestamp), someone might
> > commit an edit to the timeline, there could be a transport stop,
> > or there could be a transport jump. In either of those cases,
> > you're in trouble.
>
> you're not. if you wouldn't check all queues on transport
> state change you don't understand enough of the workings of
> a sequencer yet.

How does that allow you to *take back* what you did in earlier 
blocks? Again, consider a simple event delay plugin.

Either way, I understand perfectly well that a sequencer cannot take 
back what it has sent down a MIDI cable. This is exactly the same 
thing. When events are placed on the event queues of other plugins, 
they are *gone* - there is nothing more you can do for that block.


> you may also want to synchronize changes to the tempo map and
> the loop points to be executed at cycle boundaries, which is
> how i am making these less invasive, but that's another story.

I certainly wouldn't want the API to depend on such limitations. 
Applications may do what they like, but thinking of block/cycle 
boundaries as anything more than the limits of the non-zero-length 
time span that is "now" is not helpful in any way if you want fully 
sample accurate timing.


> >> >	* Is there a need for supporting multiple timelines?
> >>
> >> this is a political decision,
> >
> >I disagree. It's also a technical decision. Many synths and
> > effects will sync with the tempo, and/or lock to the timeline. If
> > you can have only one timeline, you'll have trouble controlling
> > these plugins properly, since they treat the timeline pretty much
> > like a "rhythm" that's hardcoded into the timeline.
>
> to put this straight: there is no difference between musical
> time and 'real' time.

Except that musical time may stand still in relation to real time, 
but not the other way around. (Well, since "real" time is actually 
audio time, that *could* happen - but then no plugins will run 
anyway.)


> the funny thing is if you're rooted in
> one view, the other appears non-linear. but the mapping between
> the two is an isomorphism no matter how you look at it.

Yes - and that's why I want plugins to "breathe" in the time space 
that is least likely to freeze while plugins are still alive. Plugins 
can always translate one into the other, but they can't operate 
properly if the very reference they use for communication goes away.


> the trouble only starts when you have multiple mappings
> between musical and transport time, not when you have but one.
>
> back to the question, it is a decision that has deep technical
> implications, yes. but i insist it is largely political because
> few will ever need multiple concurrent 'timelines', and many
> will have to pay a price to enable them.

The argument about multiple timelines has very little to do with the 
argument about the timestamp format.

I'm *not* arguing for audio timestamps because it would make multiple 
timelines easier.

I'm arguing for audio timestamps, because I do not want plugins to 
have two different time domains forced upon them, especially not when 
stopping one of them prevents plugins from communicating properly.


> i can only say i don't need them in musical practice, and they
> are uncommon enough to let those needing them do the maths
> themselves. one of the few things common to all musical culture
> seems to be that there is one predominant rhythm, if there is
> rhythm at all.

Yes, I agree. There is no strong motivation behind multiple timelines 
at all. If they're a problem, let's just not have them.


> >It doesn't seem too complicated if you think of it as separate
> >sequencers, each with a timeline of its own... They're just
> > sending events to various units anyway, so what's the difference
> > if they send events describing different tempo maps as well?
>
> the point in having a sequencer is to have it become the
> central authority over tempo and time. the idea of sending
> tempo change events is, i'm afraid, another sign of some
> lack in understanding sequencers.
>
> and to make the point clear: you don't send tick events
> either, that's for synchronizing external equipment. you
> simply don't need it with a central time/tick/frame mapping.

You're missing my point. Tempo and transport events are nothing but 
an interface to the timeline. You may ask for the information through 
host calls or whatever - these are just *interfaces*.
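
Just to illustrate what I mean by "interface" (names made up, not a 
proposal for the actual API), the very same tempo map information 
could be delivered as a timestamped event, or asked for through a 
host call; a tempo-synced plugin only needs to cache what it was 
last told:

typedef struct {
    unsigned timestamp;        /* audio time, in sample frames */
    double   beats_per_minute;
} tempo_event_t;

typedef struct {
    double bpm;                /* last known tempo */
    double sample_rate;
} synced_plugin_t;

/* Apply a tempo change exactly where it is timestamped. */
void handle_tempo_event(synced_plugin_t *p, const tempo_event_t *e)
{
    p->bpm = e->beats_per_minute;
}

/* Example use of the cached tempo: length of one beat in frames. */
double frames_per_beat(const synced_plugin_t *p)
{
    return p->sample_rate * 60.0 / p->bpm;
}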


> >Maybe it will be in most cases, but I can't see any real reasons
> > why you couldn't implement it as a reasonably normal plugin.
>
> there's absolutely no reason for the api to cover sequencers
> in plugin guise. it is a one-one relationship to the host. to
> be able to sequence every event in the network, you must assume
> the sequencer's access to the network to be about equivalent to
> the host's access.

You're not required to sequence every event in the network. How 
about event processors? You want them to send events *back* to the 
host, or what?


> >Yes, but you still have to deal with transport events. No big deal,
> >though; you just have to tell everyone that cares about them, so
> > they can adjust their internal "song position counters" at the
> > right time.
>
> transport control is no event because it invariably involves
> a discontinuity in time, thus it transcends the very idea of
> an event in time.

Yes - if you think of time in terms of musical time only.


> and plugins don't have an internal 'song position counter'.

It could be anywhere, but then you'd have to make a call somewhere 
every time you want a sample accurate version of it.
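
What I have in mind is roughly this (made-up names again): the plugin 
counts frames itself, and transport events just correct the counter 
at the exact frame they're stamped for, so a sample accurate position 
is always available locally, without a call per event:

typedef struct {
    double position;     /* song position, in frames */
    int    rolling;      /* 1 while the transport is running */
} songpos_t;

/* Advance the counter over 'frames' frames of audio time. */
void songpos_advance(songpos_t *sp, unsigned frames)
{
    if (sp->rolling)
        sp->position += frames;
}

/* Handle a transport change at a known frame within the block:
   advance up to that frame, then apply the new state and position. */
void songpos_transport(songpos_t *sp, unsigned frames_before,
                       int rolling, double new_position)
{
    songpos_advance(sp, frames_before);
    sp->rolling = rolling;
    sp->position = new_position;
}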


> they rely on the host/sequencer to keep track of the passage
> of time, that's the whole point.

I don't see how pretending that "real time" (audio time or whatever) 
is irrelevant makes this any easier.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---


