On Saturday 14 December 2002 23.20, David Gerard Matthews wrote:
[...]
* Is an explicitly scale related pitch control type needed?
I would argue that it's not.
Do you have experience with processing events meant for non-ET
scales? I'm looking for a definitive answer to this question, but it
seems that this kind of stuff is used so rarely that one might argue
that 12tET and scaleless are the only two significant alternatives.
(And in that case, the answer is very simple: Use 1.0/octave, and
assume 12tET whenever you want to think in terms of notes.)
* Is there a good reason to make event system timestamps relate to
musical time rather than audio time?
Again, I would rather let the timestamps deal with audio time. Hosts
which work in bars/beats/frames should be capable of doing the
necessary conversion. Remember, there are plenty of parameters which
might need some time indications but which are completely unrelated
to notions of tempo. I'm thinking mostly about LFOs and other
modulation sources here (although there might be good grounds for
lumping these in with frequency controlled parameters). Just as I
would rather see pitch control make as few assumptions as possible
about tuning and temperament, I would like to see time control make
as few assumptions as possible about tempo and duration.
Well, you do have to be able to lock to tempo and/or musical time in
some cases - but (IMHO) that is an entirely different matter, which
has little to do with whatever format the event timestamps have.
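The conversion a bars/beats host would do when stamping events in audio time is trivial at constant tempo; a sketch, assuming a fixed BPM (a real host would integrate over a tempo map, and the function name is made up):

```c
/* Convert musical time (beats) to audio time (sample frames),
 * assuming constant tempo. One beat lasts 60/bpm seconds. */
static double beats_to_frames(double beats, double bpm, double rate)
{
	return beats * 60.0 / bpm * rate;
}
```

So at 120 BPM and 48 kHz, one bar of 4/4 (4 beats) spans 96000 frames. The host does this once when scheduling; plugins that don't care about tempo never see anything but frames.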
Sequencers generally do operate within the shared assumptions of
traditional concepts of periodic rhythm, but in a lot of music (from
pure ambient to many non-Western musics to much avant-garde music)
such notions are irrelevant at best.
This confirms my suspicions. (And I even have some experience of just
being annoyed with those non-optional grids... :-/ In some cases, you
just want to make the *music* the timeline; not the other way around.)
* Should plugins be able to ask the sequencer about *any* event, for
the full length of the timeline?
Not sure that I grok the ramifications of this.
Let's put it like this; two alternatives (or both, maybe):
1. Plugins receive timestamped events, telling them
what to do during each block. Effectively the same
thing as audio rate control streams; only structured
data instead of "raw" samples.
2. Plugins get direct access to the musical events,
as stored within the sequencer. (These will obviously
not have audio timestamps!) Now, plugins can just
look at the part of the timeline corresponding to
each block, and do whatever they like. Some event
processors may well play the whole track backwards!
This solution would also make it possible for
plugins to *modify* events in the sequencer database,
which means you can implement practically anything
as a plugin.
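Alternative 1 amounts to something like the following struct; a minimal sketch, with field names and sizes entirely hypothetical (not XAP):

```c
#include <stdint.h>

/* Alternative 1: events are "structured audio" - each one carries an
 * audio-time stamp as a frame offset into the current block. */
typedef struct {
	uint32_t frame;  /* offset into current block, in sample frames */
	uint16_t type;   /* e.g. note on/off, control change */
	uint16_t index;  /* which control/voice the event targets */
	double   value;  /* new control value */
} event_t;

/* A plugin scans its queue as processing advances through the block,
 * e.g. to find how many events apply to the first 'frames' frames: */
static int events_in_range(const event_t *ev, int n, uint32_t frames)
{
	int count = 0;
	for (int i = 0; i < n; ++i)
		if (ev[i].frame < frames)
			++count;
	return count;
}
```

Note that nothing here knows about bars or beats; that is exactly what makes this model work for real time streaming, and exactly why it cannot express alternative 2, where a plugin wants random access to the whole timeline.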
Obviously, these are two very different ways of dealing with events.
I would say they are solutions for completely different problem
spaces, with only slight overlap. One cannot replace the other; if
you want to deal with the combined problem space, you need both.
Why can't 2 deal with everything? Well, consider the simple case
where you want to chain plugins. One plugin reads events from the
sequencer, and is supposed to make a synth play the events. However,
now the *synth* will expect to have access to an event database as
well! So what does that first plugin do...? Plug in as a "wrapper"
around the whole sequencer database?
Well, I could think of ways to make that work, but none that make any
sense for a real time oriented API.
Suggestions?
Is there a third kind of "event system" that I've missed?
* Is there a need for supporting multiple timelines?
Possibly. I would say definitely if the audio event timestamps
relate to musical time.
Well, I see what you mean - but see above; The timestamps are not
much of an issue, since sync'ed and locked effects would get their
musical time info by other means.
Though, obviously - if timestamps are in musical time and you can
have only one timeline, you have a problem... Or not. (My question is
basically whether you need to be able to sync or lock to multiple
timelines or not.)
For example, in a sequencer, it should be possible to have different
tracks existing simultaneously with different tempi. Obviously, if
the timestamps are derived from audio time, then only a single
timeline is needed, because you have a single time source which
doesn't care about tempo.
For timestamps, yes - but what if you want your plugin to
"spontaneously" sync with the tempo or the beat (ie without explicit
"note" events)...?
This hypothetical sequencer would be able to convert between
arbitrary representations of bpm and time signature, but the code for
doing this would be in the host app, not the plugin.
Or in the sequencer *plugin* - if you like to implement it that way.
Since MuCoS/MAIA, I've been convinced that the host should be as
simple as possible. Basically just like the room in which you build
your studio. There's light so you can see what you're doing
(UI<->plugin connection support), power outlets for your machines
(memory management, event system,...), and various sorts of cables
hanging on the walls (routing/connection system). You'd bring in
controllers, synths, mixers, effects, powered speakers etc.
(Plugins!)
Now, if the plugin timestamps events internally using musical time,
then multiple timelines are necessary in the above scenario.
Yes. (See above, though.)
And the most fundamental, and most important question:
* Is it at all possible, or reasonable, to support
sequencers, audio editors and real time synths with
one, single plugin API?
Probably not.
Well, that's what everyone used to tell *me* - and now the situation
is pretty much the opposite. :-)
Well, at least I *tried* to squeeze it all into MAIA. I have indeed
given all of these issues a lot of thought, over and over again - and
finally realized that even if it *could* be made to work, it would
take ages to get right, and it would be way too complex to use for
anything serious.
I've learned my lesson. Maybe others should too...
For audio editors, I think JACK is doing a very fine job.
Well, yes - but I was rather thinking about CoolEdit and SoundForge
style things, and their off-line plugins.
You could say the term "audio editor" is rather unspecific! :-)
In fact, beginning with FreqTweak, there seems to be some precedent
for using JACK for plugins. JACK's biggest problem, however, is its
lack of MIDI support.
MIDI!? Aaargh! :-)
Well, if there was a way to sync + lock nicely with the ALSA
sequencer...?
Basically, the way I see it, XAP would be for plugins and realtime
synths hosted on a sequencer or DAW app which uses JACK for audio
input/output.
Yes, I agree totally on that.
JACK replaces crude and/or high latency hacks and allows applications
to play nicely together. LADSPA and eventually (or so we hope) XAP
turn monoliths into modular applications that can exchange and
combine units in any way the user wants.
If we *had*
sufficient information to answer these questions,
there wouldn't be much of an argument after everyone understood
the problem. The details would just have been a matter of taste.
Now, we seem to have lots of ideas, but few facts, so there's not
much point in further discussion. We need to test our ideas in
real applications, and learn from the experience.
One thing which has crossed my mind: several people have brought
up VST as a frame of reference,
but has anyone looked at AudioUnits?
Very good point! It's a much more recent design, and even if the
motives for creating it were perhaps not only technical, one would
assume that these guys know what they're doing.
I admit that I haven't
either, but the reference code is out there,
and perhaps it might be a good idea to take a look at it.
Yes.
(One potential problem is that the example code seems to be in
Objective C.)
Well, I don't exactly know Objective C, but I've read up on the
basics, for reasons I can't remember... (Probably to see if it was
the "C++ done right" I was looking for. In that case, it was not,
because the constructs are *higher* level; not lower.)
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---