On Wednesday 04 December 2002 10.09, Tim Hockin wrote:
Well, a
guaranteed unique ID is really rather handy when you want
to load up a project on another system and be *sure* that you're
using the right plugins... That's about the only strong
motivation I can think of right now, but it's strong enough for
me.
Ok, I see your motivation for this. I hate the idea of 'centrally
assigned' anythings for something as open as this. I'll think more
on it..
I see your point, but I'm not sure if it makes sense to take the
measures needed to guarantee uniqueness in some other way. Developer
IDs could probably be 16 bit, and I would assume that even if we
knock out both VST and DX (!), we're not going to have hundreds of
new developers registering every day - so one could just
have someone administer this manually via email.
Either way, it only takes one unique ID per developer... How about
using the h/w-address of one of your NICs? But then, what if you
don't have a NIC? :-)
IMHO, plugins
should not worry about whether or not their outputs
are connected. (In fact, there are reasons why you'd want to
always guarantee that all ports are connected before you let a
plugin run. Open inputs would be connected to a dummy silent
output, and open outputs would be connected to a /dev/null
equivalent.)
I disagree with that - it is a waste of DSP cycles to process audio
that will be sent nowhere.
So, why would you ask the plugin to set up outputs that you won't
connect, and then force the plugin to have another conditional to
check whether the output is connected or not?
Besides, it's not always as simple as adding a conditional to
*efficiently* eliminate the work related to an output. And even if it
is, chances are you have to put that conditional in the inner loop,
to avoid too much code duplication.
A stronger motivation, however, is that this would be another feature
of the API; one that every plugin is required to support.
That said, thinking about multiport plugins with complex internal
routing (i.e. output x may be affected by inputs other than input x), I
can see where something like this would be useful.
I would propose that the pre-instantiation host/plugin "negotiations"
include:
* A way for the host to ask the plugin what different types
of ports it has.
* A way of getting the maximum and minimum number of ports
of each type, as well as the granularity for port counts.
* A way for the host to tell the plugin how many ports of
each type it wants for a particular instance of the plugin.
* A way for the host to *ask* the plugin to disable certain
ports if possible, so they can be left disconnected.
The important point is that however you do it, you end up with a
plugin with two 1D, contiguous arrays (although possibly with some
ports disabled, if the plugin supports it); one for inputs and one
for outputs. That will simplify the low level/DSP code, and I think
*that's* where complexity matters the most. (Have as little code as
possible in the main path, to avoid optimizing stuff that can be done
elsewhere.)
As an example, say you have a plugin that takes mono input and
generates 5.1 output. Port info could look something like this:
Input type: In;MONO
Min count: 1
Max count: -1 (unlimited)
Granularity: 1
DisableSingle: FALSE (doesn't make sense)
DisableGroup: FALSE (group == single)
Output type: Out;5.1
Min count: 6
Max count: -1 (unlimited)
Granularity: 6
DisableSingle: TRUE
DisableGroup: FALSE (doesn't make sense)
So, if you want to process one mono input and have 4 channel output
(no bass and center channels), you'd ask for this when instantiating:
Inputs: In;MONO
Count: 1
Disabled: <None>
Outputs: Out;5.1
Count: 6
Disabled: <1, 5> (center, bass)
Now, if the plugin didn't support DisableSingle on the output ports
of type Out;5.1, you'd have to accept getting all 6 outs, and just
route the bass and center channels to "/dev/null". It should be easy
enough for the host, and it could simplify and/or speed up the
average case (all outputs used, assumed) of the plugin a bit, since
there's no need for conditionals in the inner loop, mixing one buffer
for each output at a time, or having 63 (!) different versions of the
mixing loop.
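Just to illustrate, in C-ish metacode (none of these names come from any
actual API; it's only a sketch of the idea), the per-type info and the
instantiation request above could boil down to something like this:

/* What a plugin could export for each port type. */
struct port_type_info {
	const char *type;        /* e.g. "In;MONO" or "Out;5.1" */
	int min_count;           /* minimum number of ports */
	int max_count;           /* maximum number of ports; -1 = unlimited */
	int granularity;         /* port count must be a multiple of this */
	int can_disable_single;  /* host may disable individual ports */
	int can_disable_group;   /* host may disable whole groups */
};

/* What the host could hand back when instantiating. */
struct port_request {
	const char *type;        /* which port type this refers to */
	int count;               /* how many ports the host wants */
	const int *disabled;     /* indices of ports to leave disabled */
	int n_disabled;
};

/* The 5.1 example above: six outs, center (1) and bass (5) disabled. */
static const int no_center_no_bass[] = { 1, 5 };
static const struct port_request outs_4_0 = {
	"Out;5.1", 6, no_center_no_bass, 2
};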
as some kind
of interface to the plugin, but to me, it seems way
too limited to be of any real use. You still need a serious
interface
if it has no other use than 'ignore this signal and spare the CPU
time', it is good enough for me.
Well, it sounds useful, but I'm afraid it's not always that easy to
make use of in real DSP code - and in the cases where it isn't, I
think it's a bad idea to *require* that plugins support it.
IMO, it should be optional, and plugin coders should be strongly
*recommended* to consider this feature when it means a speed-up in
useful configurations without penalizing the average case. Plugins
should have the chance of setting it up during initialization or
similar context, rather than checking for it in the "process()"
callback.
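For example (made-up names; just a sketch of the idea), a plugin that
does support it could pick its inner loop once, in init/connect context,
so process() never has to test individual outputs:

typedef void (*process_fn)(float **outs, int frames);

/* Two precompiled variants of the inner loop (real bodies omitted). */
static void process_full(float **outs, int frames) { (void)outs; (void)frames; }
static void process_no_lfe(float **outs, int frames) { (void)outs; (void)frames; }

/* Called from init/connect context, never from the audio thread;
 * pick the loop once, so process() needs no per-port conditionals. */
static process_fn select_process(unsigned enabled_mask)
{
	return (enabled_mask == 0x3fu) ? process_full : process_no_lfe;
}

With only two or three useful configurations that's cheap; with 63, you'd
obviously do it some other way - which is exactly why it should be optional.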
just like you
program a studio sampler to output stuff to the
outputs you want. This interface may be standardized or not - or
there may be both variants - but either way, it has to be more
sophisticated than one bit per output.
Ehh, again, I think it is simpler. Let's assume a simple sampler.
It has a single output with 0 or more channels (in my terminology).
If you load a stereo sample, it has 2 channels. A 5.1 sample has
6 channels. Let's consider an 8-pad drum machine. It has 8
outputs each with 0-2 channels. Load a stereo sample, that output
has 2 channels. Now, as I said, maybe this is a bad idea. Maybe
it should be assumed that all outputs have 2 channels and mono gets
duplicated to both or (simpler) LEFT is MONO.
Well, I've never seen a synth or sampler (hardware or software)
change its output "format" based on the format of the loaded
waveform - and I can't see that it would make sense, or even be
particularly useful. From the sound programming perspective, I
strongly prefer working with individual mono waveforms, each on a
voice of their own, as this offers much more flexibility. (And it's
also a helluva' lot easier to implement a sampler that way! :-)
Besides, I think it's the *host* (or indirectly, the user) that
should decide what output configuration it wants for a plugin, within
the limitations of the plugin.
What gets confusing is what we're really debating.
If I want to do
a simple stereo-only host, can I just connect the first pair of
outs and the plugin will route automatically? Or do I need to
connect all 8 to the same buffer in order to get all the output? In
the process of writing this I have convinced myself you are right
:) If the host does not connect pad #2, pad #2 is silent.
Well, I *am* right! ;-)
In the h/w world, the latter is how it works, and I'd say it makes a
lot of sense. There's a reason why you use a mixer (and/or a multichannel
audio interface + virtual mixer), and that is that you want control
of every detail of the mix. The more fine grained, the better. The
case where you just want to plain mix all outputs from a sampler
without EQ or anything is a special case, and I don't think it's
worth the effort implementing it inside plugins.
Plugins may or may not have their own internal routing and stuff,
which may *look* a lot like a motivation for a "smart routing"
feature in an API - but in fact, it's a different thing altogether.
The basic sampler or synth would just have one mono/stereo/whatever
output for each "part" (h/w synth terms; some brands), or "channel"
(MIDI terms) - no internal routing system or mixer at all. (That's
why you have a virtual mixer! :-)
I think there
should be as little policy as possible in an API.
As in; if a plugin can assume that all ins and outs will be
connected, there are no special cases to worry about, and thus,
no need for a policy.
Slight change - a plugin only needs to handle connected inouts. If
an inout is not connected, the plugin can skip it or do whatever it
likes.
...provided there is a guarantee that there is a buffer for the port.
Or you'll segfault unless you check every port before messing with
it. :-)
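One way for a host to provide that guarantee - again just a sketch, with
made-up names and an arbitrary maximum buffer size - is to point every
unconnected port at a shared dummy buffer:

#define MAX_FRAMES 4096

static float silent_in[MAX_FRAMES];  /* stays all zeroes; never written */
static float trash_out[MAX_FRAMES];  /* written by plugins, never read */

/* Host side: make sure no port pointer is ever NULL. */
static void connect_defaults(float **ins, int n_ins,
                             float **outs, int n_outs)
{
	int i;
	for (i = 0; i < n_ins; ++i)
		if (!ins[i])
			ins[i] = silent_in;
	for (i = 0; i < n_outs; ++i)
		if (!outs[i])
			outs[i] = trash_out;
}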
Strongest
reason *not* to use multichannel ports: They don't mix
well with how you work in a studio. If something gives you
multiple
I considered that. At some point I made a conscious decision to
trade off that ability for the simplicity of knowing that all my
stereo channels are bonded together. I guess I am rethinking that.
Well, you may have noticed that most "real" mixers don't have stereo
channels at all. The reason for that is that stereo channels are just
a shortcut; using two mono channels for each stereo source does have
advantages. (Say, the right overhead mike on the drum kit needs a
little more treble for perfect balance... A stereo strip wouldn't be
of much use there.)
Strongest
reason *for*: When implementing it as interleaved data,
it
bleh - I always assumed an inout was n mono channels. The only
reason for grouping them into inouts was to 'bond' them.
Yeah - and that's a nice idea. However, I think it should be
implemented on a slightly higher level. (See above.)
Like on a
studio mixing desk; little notes saying things like
"bdrum", "snare upper", "snare lower", "overhead
left", "overhead
right" etc.
Should I use a different word than 'port'? Is it too overloaded
with LADSPA?
Well, it depends on what you mean... I assume I don't know, so maybe
you're right about the confusion with LADSPA.
Hrrm, so how does something like this sound?
(metacode)
struct port_desc {
char *names;
};
simple sampler descriptor {
...
int n_out_ports = 6;
struct port_desc out_ports[] = {
{ "mono:left" },
{ "right" },
{ "rear:center" },
{ "rear-left" },
{ "rear-right" },
{ "sub:lfe" }
};
...
};
So the host would know that if it connects 1 output, the name is
"mono", and if it connects 2 ouptuts, the names are "left",
"right", etc. Then it can connect "left" to "left" on the
next
plugin automatically. And if you want to hook it up to a mono
output, the user could be asked, or assumptions can be made. This
has the advantage(?) of not specifying a range of acceptable
configs, but a list. It can have 1, 2, or 6 channels.
Yeah, something like that. Add "count granularity", and you'll make
life for the plugin coder a lot easier, I think. (Again, see above.)
another example:
drum machine descriptor {
...
int n_out_ports = 16;
struct port_desc out_ports[] = {
{ "left:left(0):mono(0)" }, { "right:right(0)" },
{ "left(1):mono(1)" }, { "right(1)" },
{ "left(2):mono(2)" }, { "right(2)" },
{ "left(3):mono(3)" }, { "right(3)" },
{ "left(4):mono(4)" }, { "right(4)" },
{ "left(5):mono(5)" }, { "right(5)" },
{ "left(6):mono(6)" }, { "right(6)" },
{ "left(7):mono(7)" }, { "right(7)" },
};
...
};
Does this mean the plugin is supposed to understand that you want a
"mono mix" if you only connect the left output?
and finally:
mixer descriptor {
...
int n_in_ports = -1;
struct port_desc in_ports[] = {
{ "in(%d)" }
};
int n_out_ports = 2;
struct port_desc out_ports[] = {
{ "left:mono" },
{ "right" }
};
};
Or something similar. It seems that this basic code would be
duplicated in almost every plugin. Can we make assumptions and let
the plugin leave it blank if the assumptions are correct?
Maybe... But that could be more confusing than helpful, actually.
"Hmm... What was the default again?"
In thinking about this I realized a potential problem
with not
having bonded channels. A mixer strip is now a mono strip. It
seems really nice to be able to say "Input 0 is 2-channels" and
load a stereo mixer slot, "Input 1 is 1-channel" and load a mono
mixer slot, "Input 2 is 6-channel" and load a 5.1 mixer slot.
Yeah, I've been thinking about this while writing... Note that this
is really a user interface issue; what we want is for the user to
know which outputs are related to which inputs - provided there *is*
such a relation. In the case of a basic mixer, it's quite obvious,
but we could face *any* sort of input/output relation in a plugin,
and should be able to deal with it - or rather, be able to ignore
that there is a relation at all, if we can't understand it.
Maybe this "granularity" field I proposed should be called "group
size" or something? It would make it more clear how the mapping from
input groups to output groups works, provided it is a 1:1 mapping. If
it's not, well, we'll just have to hope that the user can read the
docs and figure out how to connect things that way.
I'm back to being in a quandary. Someone convince
me!
Well, this is hairy stuff - but in this case, I don't think there's
all that much to it. There's no sensible way of describing the
input/output relations of every possible plugin, so it's debatable
whether we should care to try at all.
The 1:1 mapping seems simple enough and interesting, but honestly, I
can't say right now why a host would ever really be interested in
understanding it. It's still the user who has to decide how to
connect the plugins in a net, and the host can't help with much more
than grouping "stereo cables" together - until the user decides to
break those up into mono wires and route them manually, for extra fun.
Point being
that if the host understands the labels, it can
figure out what belongs together and thus may bundle mono ports
together into "multichannel cables" on the user interface level.
This is what the "inout is a bundle of mono channels" idea does.
Well, I don't quite understand the
voice_ison() call. I think
voice allocation is best handled internally by each synth, as it's
highly implementation dependent.
My ideas wrt polyphony:
* note_on returns an int voice-id
* that voice-id is used by the host for note_off() or note_ctrl()
That's the way I do it in Audiality - but it doesn't mix well with
timestamped events, not even within the context of the RT engine
core.
* you can limit polyphony in the host
- when I trigger the 3rd voice on an instrument set for 2-voices,
I can note_off() one of them
I don't think that's a good idea. The synth has a much better chance
of knowing which voice is "best" to steal - and if smart voice
stealing is not what you want, you shouldn't use a polyphonic synth
or sound.
* you can limit polyphony in the instrument
- host has triggered a 3rd voice, but I only support 2, so I
internally note_off() one of them and return that voice_id again.
The host can recognise that and account for polyphony accurately
(even if it is nothing more than a counter).
Yeah... I do that in Audiality as well - but that only means I have
to keep track of whether or not voices have been stolen before
I send more events to them. (Objects that you have allocated
disappear at random. Great fun.) It complicates the higher level
(patch; Roland synth lingo) code, which totally sucks.
* note_off identifies to the host if a voice has
already ended
(e.g. a sample)
* note_ison can be called by the host periodically
for each voice to see if it is still alive (think of step-sequenced
samples). If a sample ends, the host would want to decrement its
voice counter. The other option is a callback to the host. Not
sure which is less ugly.
Why not just let the synth deal with voice allocation? If you want to
know the number of playing voices, just throw in a control port
"active voices" or something. (The port name could be standardized,
so hosts will know what it's for.)
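In metacode (the names are made up), all it would take is a read-only
control with a well known name:

#define CTRL_ACTIVE_VOICES "active voices"

struct control_desc {
	const char *name;
	int readable;   /* host may read it */
	int writable;   /* host may write it */
};

/* The synth just updates this control; the host reads it if it cares. */
static const struct control_desc active_voices = {
	CTRL_ACTIVE_VOICES, 1, 0
};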
I am NOT trying to account for cross-app or cross-lan
voices,
though a JACK instrument which reads from a JACK port would be
neat.
Nor am I, really, but it's kind of neat if the protocol doesn't
totally prevent that kind of stuff by design, for no good reason. And
I see reasons for not even doing it this way inside the single real
time thread of a synth.
[...]
a real
problem so far, but I don't like it. I want
*everything* sample accurate! ;-)
Actually, our focus is slightly different. I'm FAR less concerned
with sample-accurate control. Small enough buffers make
tick-accurate control viable in my mind. But I could be convinced.
It sure is SIMPLER. :)
In my experience, buffer based timing is not accurate enough for
detailed control, and this is especially noticeable when you're not
quantizing everything. I don't want fast and accurate control to be
restricted to within synths and the like. The mixer automation should
be just as fast and accurate, so I don't have to use special plugins
as soon as I want something more demanding than a basic fade. Also,
I'd like to be able to build modular synths from plugins without
resorting to another plugin API (and other plugins...) just for that.
Besides, VSTi has it. DXi has it. I bet TDM has it. I'm sure all
major digital audio editing systems (s/w or h/w) have it. Sample
accurate timing. I guess there is a reason. (Or: It's not just me! :-)
quite convenient for things like strings and pads.
FL does
Velocity, Pan, Filter Cut, Filter Res, and Pitch Bend. Not
sure which of those I want to support, but I like the idea.
"None of those, but instead, anything" would be my suggestion. I
think it's a bad idea to "hardcode" a small number of controls
into the API. Some kind of loose "standard", such as the MIDI CC
allocation, could be handy, but the point is that control ports
should just be control ports; their function is supposed to be
decided by the plugin author.
I've contemplated an array of params that are configurable
per-note. Not everything is.
I would say pretty much everything is in your average synth, except
for any per-part inserts - although most synths (h/w, at least) have
only master effects. It's just that most synths offer no per-note
real time control short of poly pressure and possibly some MIDI
extensions.
What if we had something like
struct int_voice_param {
int id;
char *name;
int low;
int high;
};
and specify an array of them. The host can use this array to build
a list of per-note params to display to the user. This starts to
get messy with type-specific controls.
Type specific controls are messy, period. Not much we can do about
that. :-)
Perhaps this info belongs
as part of the control structure. Yes, I think so.
Yes...
In Audiality, I have this concept of "-1 means all" all over the
place. It could be applied to control events as well, but I haven't
messed with it yet, as the synth is still controlled from an internal
MIDI sequencer, which can't really make use of such a feature. (Well,
there is SysEx... Or I could use some controls to select voices prior
to sending normal MIDI events.)
Anyway, your average MIDI events would be translated with a voice ID
argument of -1, while native sequencers for the plugin API could
actually make use of that argument. That said, checking for -1 means
you have another conditional - but then again, you could put it in
the "couldn't find this voice" case... The alternative would be to
have two sets of control events; one where events take voice IDs, and
one where they don't. The advantage (when using an event system) is
that you make use of the event decoding switch() statement, which has
to be there anyway.
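A sketch of what I mean, with a made-up event struct (this is not actual
Audiality code):

#define MAX_VOICES   32
#define MAX_CONTROLS 8

enum { EV_CONTROL = 1 };

struct event {
	int type;      /* EV_CONTROL, ... */
	int voice;     /* voice ID, or -1 for "all voices" */
	int control;
	float value;
};

static float voice_ctrl[MAX_VOICES][MAX_CONTROLS];

static void handle_event(const struct event *ev)
{
	switch (ev->type) {
	case EV_CONTROL:
		if (ev->control < 0 || ev->control >= MAX_CONTROLS)
			break;
		if (ev->voice == -1) {
			/* "-1 means all": apply to every voice. */
			int v;
			for (v = 0; v < MAX_VOICES; ++v)
				voice_ctrl[v][ev->control] = ev->value;
		} else if (ev->voice >= 0 && ev->voice < MAX_VOICES)
			voice_ctrl[ev->voice][ev->control] = ev->value;
		/* else: the "couldn't find this voice" case; drop it. */
		break;
	default:
		break;
	}
}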
Should be
handled on the UI level, IMHO. (See above.) Doing it
down here only complicates the connection management for no real
gain.
I want to ignore as much of it as possible in the UI. I want to
keep it simple at the highest level so a musician spends his time
making music, not dragging virtual wires.
Of course. But that doesn't mean you have to move it all the way down
into the lowest levels of the plugin API. Most of it should be
handled by the host, IMNSHO. Hosts are few and plugins are many.
Ideally if there is a
stereo instrument and I want to add a stereo reverb, I'd just drop
it in place, all connections made automatically. If I have a mono
instrument and I want a stereo reverb, I'd drop the reverb in place
and it would automatically insert a mono-stereo panner plugin
between them.
Sure, that would be cool - and entirely doable. It only takes that
the host understands the port labels to some extent.
Yeah... But
this is one subject where I think you'll have to
search for a long time to find even two audio hackers that agree
on the same set of data types. ;-)
I think INT, FLOAT, and STRING suffice pretty well. And I MAY be
convinced that INT is not needed. Really, I prefer int (maps well
to MIDI).
How? What's wrong with just mapping [0,127] to [0,1] or whatever?
One might intuitively think that ints are better for on/off switches,
"selectors" and the like, but I don't think it matters much.
Truncating to int isn't all that expensive, at least provided you
disable ANSI rounding rules. (Those are not compatible with any
mainstream FPUs, it seems...)
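For example, the conversions really are trivial (just a sketch):

/* MIDI CC (0..127) to a normalized float control... */
static float cc_to_control(int cc)
{
	return (float)cc * (1.0f / 127.0f);
}

/* ...and back to one of 'steps' discrete positions, for switches
 * and selectors. Plain truncation; no ANSI rounding needed. */
static int control_to_step(float value, int steps)
{
	int i = (int)(value * (float)steps);
	if (i < 0)
		return 0;
	if (i >= steps)
		return steps - 1;
	return i;
}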
What kinds of knobs need to be floats?
What kind of knobs need to be ints? And what range/resolution should
they have...? You don't have to decide if you use floats.
(I should know... I use fixed point throughout in the current
Audiality engine, and I've changed the bias two or three times, IIRC.
*heh* I do not recommend it, unless it's strictly needed for
performance reasons on low end CPUs.)
Just a note
here: Most real instruments don't have an absolute
start or end of each note. For example, a violin has its pitch
defined as soon as you put your finger on the string - but when
is the note-on, and *what* is it? I would say "bow speed" would
be much more appropriate than on/off events.
I'd assume a violin modeller would have a BOWSPEED control. The
note_on() would tell it what the eventual pitch would be. The
plugin would use BOWSPEED to model the attack.
Then how do you control pitch continuously? ;-)
If you have a PITCH control, the need for a note_on() is eliminated,
and you need only controls to play the synth. You can still emulate
MIDI style behavior by sending a single BOWSPEED (which should have a
more generic name) change to the velocity of the note to start a
note, and another change to 0 to stop the note.
A more correct mapping would be to ramp the "BOWSPEED" control, since
that's actually what velocity (for keyboard instruments and the like)
is about; the motion of the hammer/stick/whatever. For a piano, this
control would effectively track the motion of the hammer - but being
*that* accurate might be overkill. :-)
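In metacode - assuming a made-up send_control() host call and made-up
control IDs, nothing from a real API - MIDI style notes would become:

enum { CTRL_PITCH, CTRL_BOWSPEED };

void send_control(int voice, int ctrl, float value);  /* assumed host call */

static void fake_note_on(int voice, float pitch, float velocity)
{
	send_control(voice, CTRL_PITCH, pitch);        /* pitch is just a control */
	send_control(voice, CTRL_BOWSPEED, velocity);  /* "note on" */
}

static void fake_note_off(int voice)
{
	send_control(voice, CTRL_BOWSPEED, 0.0f);      /* "note off" */
}

...while a violin modeller would get a continuous BOWSPEED ramp instead of
the single change in fake_note_on().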
Well, yes.
There *has* to be a set of basic types that cover
"anything we can think of". (Very small set; probably just float
and raw data blocks.) I'm thinking that one might be able to have
some "conveniency types" implemented on top of the others, rather
than a larger number of actual types.
I agree - Bool is a flag on INT. File is a flag on String.
Detecting whether a float == 0.0 or not isn't all that hard either.
:-)
Dunno if this
makes a lot of sense - I just have a feeling that
keeping the number of different objects in a system to a
functional minimum is generally a good idea. What the "functional
minimum" is here remains to see...
With this I agree. One of the reasons I HATE so many APIs is that
they are grossly over normalized. I don't need a pad_factory
object and a pad object and a plugin_factory object and a parameter
object and an
automatable_parameter object and a scope object... I want there to
be as FEW structs/objects as possible.
That said, one I am considering adding is a struct oapi_host. This
would have callbacks for things like malloc, free, and mem_failure
(the HOST should decide how to handle memory allocation failures,
not the plugin) as well as higher level stuff like get_buffer,
free_buffer, and who knows what else. Minimal, but it puts control
for error handling back in the hands of the host.
Yes. That's the way I'm going with Audiality as well. I already have
event allocation in the API (since it has to be done in real time
context, obviously), but I'm going to add calls for normal memory
allocation as well. Those would just wrap malloc() and free() on most
hosts, but if you really want bounded plugin instantiation and system
parameter change times, you could implement a real time memory
manager in the host. (Many plugins will have to recalculate filter
coefficients and stuff though, so we're only talking about *bounded*
times; not "instant" real time instantiation. Still a big difference
when you're controlling a synth on stage with a MIDI sequencer,
though. You don't want to miss that first note that should play for 4
whole bars, just because the synth decided to do some swapping when
instantiating a plugin...)
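Something roughly like this, I suppose (only a sketch of your oapi_host
suggestion; the signatures are guesses):

#include <stddef.h>

struct oapi_host {
	void  *(*malloc)(size_t size);
	void   (*free)(void *ptr);
	void   (*mem_failure)(void *plugin);   /* host decides the policy */
	float *(*get_buffer)(size_t frames);
	void   (*free_buffer)(float *buffer);
};

A host that cares about bounded times just plugs a real time allocator into
those callbacks; other hosts wrap plain malloc()/free().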
Yeah, I know.
It's just that I get nervous when something tries
to do "everything", but leaves out the "custom format" fallback
for cases that cannot be foreseen. :-)
We're speaking of controls here. In my mind controls have three
characteristics. 1) They have to specify enough information that
the host can draw a nice UI automatically.
I don't think this is possible without entirely ruling out some kinds
of plugins.
2) They are automatable
(whether it is sane or not is different!).
Some controls may not be possible to change in real time context -
but I still think it makes sense to use the control API for things
like that.
The "System Parameters" of Audiality plugins are in fact "normal"
controls; you're just not allowed to change them from within real
time context, since they may have the plugin reallocate internal
buffers, do massive calculations or whatnot.
In Audiality, the System Parameters are all within a hardcoded range
of controls, but I think it would be much more useful to have a flag
for each control that specifies whether or not it can be changed from
within real time context. In fact, I already have a TODO "enum" in
the plugin header:
typedef enum
{
/* Timing and dependencies (flags) */
FXCAP_TIMING_ = 0x0000000f,
FXCAP_TIMING_SYSCALLS = 0x00000001, /* Uses malloc() etc... */
FXCAP_TIMING_SLOW = 0x00000002, /* in relation to process() */
/* Data type (enumeration) */
FXCAP_TYPE_ = 0x000000f0,
FXCAP_TYPE_STROBE = 0x00000000, /* Event; value ignored */
FXCAP_TYPE_BOOLEAN = 0x00000010, /* 0 = false, !=0 = true */
FXCAP_TYPE_INTEGER = 0x00000020,
FXCAP_TYPE_FIXED_8 = 0x00000030, /* 8 fraction bits */
FXCAP_TYPE_FIXED_16 = 0x00000040, /* 16 fraction bits */
FXCAP_TYPE_FIXED_24 = 0x00000050, /* 24 fraction bits */
FXCAP_TYPE_FLOAT = 0x00000060, /* IEEE 32 bit float */
/* Access rules (flags) */
FXCAP_AXS_ = 0x00000f00,
FXCAP_AXS_READ = 0x00000100, /* May be read */
FXCAP_AXS_WRITE = 0x00000200, /* May be written */
FXCAP_AXS_READ_CALL = 0x00000400, /* Must use read_control()! */
FXCAP_AXS_WRITE_CALL = 0x00000800, /* Must use control()! */
/* Lifetime info - "When can I access this?" (flags)
* Note that there's no flag for PAUSED, as it's *never* legal
* to use any callback but state() in the PAUSED state.
*/
FXCAP_LIFE_ = 0x0000f000,
FXCAP_LIFE_OPEN = 0x00001000,
FXCAP_LIFE_READY = 0x00002000,
FXCAP_LIFE_RUNNING = 0x00004000
} a_fxcaps_t;
(Note that some of this is rather old, and irrelevant to the current
design.)
3) They alone compose a
preset. What would a raw_data_block be?
An Algorithmically Generated Waveform script...?
---8<-------------------------------------------------
/////////////////////////////////////////////
// Claps 2
// Copyright (C) David Olofson, 2002
/////////////////////////////////////////////
w_format target, MONO16, 32000;
w_blank target, 10000, 0;
procedure clap(delay, l1, l2, l3)
{
w_env AMPLITUDE,
delay, 0,
0, l1,
.01, l2,
.02, l3,
.2, 0;
w_osc target, NOISE;
}
//claps
w_env FREQUENCY, 0, 8000;
clap 0, .15, .05, .01;
clap .02, .5, .1, .02;
clap .05, .2, .1, .02;
clap .08, .3, .07, .015;
clap .12, .15, .05, .015;
//lpf
w_env MOD1;
w_env AMPLITUDE, 0, 1;
w_env FREQUENCY, 0, 4500;
w_filter target, LOWPASS_12;
//coloration
w_env AMPLITUDE, 0, 4;
w_env FREQUENCY, 0, 800;
w_filter target, PEAK_12;
w_env FREQUENCY, 0, 1100;
w_filter target, PEAK_12;
w_env AMPLITUDE, 0, 2;
w_env FREQUENCY, 0, 4000;
w_filter target, PEAK_12;
------------------------------------------------->8---
Sure, you could put it in a file (which is what I do), so no big deal.
I'd just want the host to understand that that filename (string)
refers to something that *must* go with the project if I decide to
write a complete bundle for backup or transfer. That shouldn't be a
major problem.
Well, you can
put stuff in external files, but that seems a bit
risky to me, in some situations. Hosts should provide per-project
space for files that should always go with the project, and some
rock solid way of ensuring that
I don't really want the plugins writing files. I'd rather see the
host write a preset file by reading all the control information, or
by the host calling a new char *oapi_serialize() method to store
and a new oapi_deserialize(char *data) method to load.
Well, then I guess you'll need the "raw data block" type after all,
since advanced synth plugins will have a lot of input data that
cannot be expressed as one or more "normal" controls in any sane way.
See my script above. Could be a NULL terminated string - but what
about plugins that want binary data? (Pascal strings rule...! ;-)
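A length + pointer pair would do it; something like this (made-up names):

#include <stddef.h>

/* A "raw data block" control value: length prefixed, so binary data
 * works too, not just NUL terminated strings. */
struct raw_block {
	size_t size;       /* number of bytes in data */
	const void *data;  /* AGW script, sample data, whatever */
};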
[...]
So do I.
I'm just trying to base my examples on well known
equipment and terminology.
Honestly, I don't know all the terminology.
Who does? :-) This is a wide field, and terminology is extended
continuously as new technology is introduced...
I have never worked
with much studio gear. Most of what I have done is in the software
space. So I may be making mistakes by that, but I may also be
tossing obsolete-but-accepted notions for the same reason :)
Well, I haven't messed all that much with "real" studio gear either
(short of synths and pro audio interfaces, that is), but the
terminology is seen in virtual studios as well. (And there is a lot
to read about the "real" gear. ;-)
[...event systems hands-on...]
Interesting. How important is this REALLY, though?
Important enough that virtually every serious system in the industry
supports it.
Let me break
it into two parts: note control and parameter control.
I disagree - I don't want to hardcode a strict separation into the
API.
Note
control can be tick accurate as far as I am concerned :)
Some would disagree. I'm in doubt myself. (It depends on what a
"tick" is... For some, even one sample could be too much! :-)
As for
param control, it seems to me that a host that will automate params
will PROBABLY have small ticks.
Well, yes. And it would also have more function call and function
init code overhead than necessary - and would still have real
limitations.
If the ticks are small (10-50
samples), is there a REAL drawback to tick-accurate control? I
know that philosophically there is, but REALLY.
Well, if you want to tweak the attack of a sound effect or some other
time critical stuff, you'll have to resort to destructive waveform
editing, unless your automation is sample accurate.
Another example would be stereo/phase effects, delay fine tuning of
percussion and that sort of stuff. That's very sensitive, and
requires *at least* sample accurate timing for perfect results. If
you don't have sample accurate note control, you'll have to program
this in the synths, or put audio delay plugins on their outputs - and
that only works if you want a *constant* delay for the whole track.
In the event model, if I want a smooth ramp for a
control between 0
and 100 across 10 ticks of 10 samples, do I need to send 10
'control += 1' events before each tick?
Just as with callback models, that depends entirely on the API and
the plugin implementation. AFAIK, DXi has "ramp events". The
Audiality synth has linear ramp events for output/send levels.
Interpolation is needed for basically everything, period. The hard
part is to decide how much of it to support/dictate in the API, and
how.
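To illustrate the idea (metacode; this is not the actual Audiality or DXi
event layout):

/* "Take this control to 'target' over 'frames' samples" -
 * one event instead of one event per sample. */
struct ramp_event {
	int control;
	float target;
	unsigned frames;
};

struct ramped_control {
	float value;
	float delta;     /* per-sample increment */
	unsigned left;   /* samples left on the ramp */
};

static void start_ramp(struct ramped_control *c, const struct ramp_event *ev)
{
	c->left = ev->frames ? ev->frames : 1;
	c->delta = (ev->target - c->value) / (float)c->left;
}

/* Called once per sample in the inner loop. */
static float run_ramp(struct ramped_control *c)
{
	if (c->left) {
		c->value += c->delta;
		--c->left;
	}
	return c->value;
}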
Seriously,
it's probably time to move on to the VSTi/DXi level
now. LADSPA and JACK rule, but the integration is still "only" on
the audio processing/routing level. We can't build a complete,
seriously useful virtual studio, until the execution and control
of synths is as rock solid as the audio.
Well, I really want to do it, so let's go. You keep talking about
Audiality, but if we're designing the same thing, why aren't we
working on the same project?
Well, that's the problem with Free/Open Source in general, I think.
The ones who care want to roll their own, and the ones that don't
care... well, they don't care, unless someone throws something nice
and ready to use at them.
As to Audiality, that basically came to be "by accident". It started
out as a very primitive sound FX player in a game, and eventually
contained a script driven off-line synth, a highly scalable
sampleplayer, a MIDI file player, FX routing, a "reverb", a
compressor and various other stuff.
I realized that the script driven synth was the best tool I ever used
for creating "real" sounds from scratch (no samples; only basic
oscillators), and decided that it was as close to the original idea
of Audiality I ever got - so I recycled the name, once again.
(The original project was a Win32 audio/MIDI sequencer that I dropped
due to the total lack of real time performance in Win32. The second
project was meant to become an audio engine running under RT-Linux -
but I dropped that for two reasons: 1) I needed a plugin API - which
is why I ended up here, and started the MAIA project, and 2) audio on
RT-Linux was rendered effectively obsolete by Mingo's lowlatency
patch.)
Lots of ideas to noodle on and lose sleep on. Looking
forward to
more discussion
*hehe* Well, I'm sure there will be a whole lot of activity on this
list for a while now. :-)
//David Olofson - Programmer, Composer, Open Source Advocate
.- Coming soon from VaporWare Inc...------------------------.
| The Return of Audiality! Real, working software. Really! |
| Real time and off-line synthesis, scripting, MIDI, LGPL...|
`-----------------------------------> (Public Release RSN) -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---