[linux-audio-dev] XAP: What is it for

David Olofson david at olofson.net
Mon Dec 16 19:31:01 UTC 2002


On Monday 16 December 2002 14.55, Sami P Perttu wrote:
> On Fri, 13 Dec 2002, David Olofson wrote:
> > [...valid points...]
> >
> > > > How? It's not the host that sends these events in general;
> > > > it's other plugins. The host never touches the events in the
> > > > normal case.
> > >
> > > Okay. I'm lost here because I don't know what a XAP app would
> > > look like. I only know trackers :).
> >
> > Well, it's not *that* different. Just think of the different
> > parts of a tracker (MIDI input, song, pattern, "classic" effects,
> > synth, DSP effects, audio output) as separate plugins, running
> > under a host that does pretty much nothing but loading,
> > connecting and running plugins. The sequencer becomes a plugin,
> > rather than a part of the host, and you can do the same with I/O
> > and pretty much everything.
>
> My first reaction to reading the above paragraph was "you are
> insane".

Well, that's probably true, but I'm not sure it's relevant. ;-)


> That is a very ambitious plan!

Yes. Guess why MuCoS/MAIA never got off the ground. Audiality is 
(sort of :-) flying, though, and it's designed around the same ideas, 
basically.


> Would it really be possible
> to accomplish all that with one API...?

Yes, I think so. Timestamped structured data is an incredibly 
powerful and accurate tool. While being an alternative to audio rate 
control data, it is also capable of acting much like the message 
passing APIs used in many real time operating systems. The latter is 
what makes this basic form of interface so powerful, and yet easy to 
use.
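
To make that a bit more concrete, a timestamped event could look 
something like this. (Just a sketch; the names and fields are made 
up on the spot, not actual XAP definitions.)

	typedef struct XAP_event
	{
		unsigned		timestamp;	/* frame within current block */
		unsigned		action;		/* what to do; set control, ... */
		unsigned		index;		/* target control, voice, ... */
		float			value;		/* payload */
		struct XAP_event	*next;		/* singly linked queue */
	} XAP_event;

Since every event carries a sample frame timestamp, the receiver can 
apply it mid-block, at exactly the right frame. That's how you get 
sample accurate control without audio rate control streams.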


> Is it possible to design it
> without a divine intervention...?

That remains to be seen. :-)


> In the end I fear you may find
> the host doing a whole lot more than "pretty much nothing". For
> instance, the metadata needed to describe all this could easily get
> out of hand. No metadata = no UI = useless API.

The "metadata" doesn't have to be all that much more complicated than 
for LADSPA. We have Control Ports and Audio Ports as well. Our 
control ports are slightly more complex, since we want to take 
advantage of some of the possibilities that timestamped events add. 
Our audio ports are just as simple as those in LADSPA, although we 
want to be able to arrange them in more structured ways than a single 
1D array.
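
For example, audio ports could be grouped into multichannel "bays" 
instead of one flat array. (A hypothetical sketch again; not the 
real thing.)

	typedef struct XAP_audio_bay
	{
		const char	*name;		/* e.g. "out" */
		unsigned	channels;	/* e.g. 2 for stereo */
		float		**buffers;	/* one buffer per channel */
	} XAP_audio_bay;

That way, a host can tell that the left and right channels of "out" 
belong together - something a single 1D port array can't express.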

It is going to be more complex than LADSPA, but I'm quite sure it 
will be simpler, cleaner, easier to use and more powerful than VST, 
at least in the areas we have the knowledge and resources to deal 
with now.

For example, we'll probably not have an entire side of the API 
dedicated to off-line processing. Not in 1.0. Maybe we'll never think 
of that as an integral part of an API of this kind.


In short, if you can host LADSPA plugins, you'll know how to host XAP 
plugins. There isn't all that much more you need to understand. The 
event system is 90% inline macros, and the rest comes in a Host 
Support Library, which also handles plugin registration, loading, 
connection management and most other stuff.
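
To give you an idea, here's the basic "split the block at each 
event" loop from the plugin side, using the hypothetical XAP_event 
sketched earlier. (The real thing would mostly hide this behind 
inline macros.)

	/* Render one block, applying events at their exact frames.
	 * Events are assumed to be sorted by timestamp, with all
	 * timestamps within the current block.
	 */
	static inline void run_plugin(XAP_event *ev, float *out,
			unsigned frames)
	{
		unsigned frame = 0;
		while(frame < frames)
		{
			/* Render audio up to the next event, or to
			 * the end of the block
			 */
			unsigned until = ev ? ev->timestamp : frames;
			for( ; frame < until; ++frame)
				out[frame] = 0.0f;	/* ...DSP here... */
			/* Apply all events that fall on this frame */
			while(ev && ev->timestamp <= frame)
			{
				/* ...handle ev->action, ev->value... */
				ev = ev->next;
			}
		}
	}

That's basically all there is to sample accurate event handling; the 
rest is ordinary block based DSP.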

I suspect that you'll be able to write a basic host in *less* time 
than you can write a LADSPA host. You should get away rather easily 
with basic plugins as well. There is a lot you *can* do, but very 
little you *have to* do.


> If I were me I would keep XAP a low-level block processing API.

Well, that's what it *is*. We're not really stretching beyond what 
VST does, and people seem to do pretty well with that API, despite 
the lack of documentation and a host SDK, and despite it mixing MIDI 
and all sorts of stuff into its event system, and despite it having 
a redundant but non-optional interface for parameters and presets. 
(Note that this view of VST is my personal opinion; not that of the 
XAP "team".)


> The
> idea of audio ports on the one hand and sample-accurate control
> events interleaved in a single queue on the other is ingenious.

There's a reason why everyone and his dog is using it. ;-)


> Add
> to that voice allocation and well defined metadata capabilities and
> we have a backbone for such a softstudio as you outline above.

Yep, but you're forgetting a few details that separate simple 
modular synth APIs from APIs that allow you to do what people have 
been doing on Windoze and Mac for a good while. We intend to cover 
that, and 
possibly some more, in a clean and easy-to-use API.


> Then
> design more APIs on top of it as needed.

Well, you can *extend* the API, or possibly implement some protocols 
on top of it - but there's no big difference. Most useful things seem 
to require host services and/or that all or many plugins understand 
the "new stuff" anyway. So, it's the same thing as releasing a now, 
incompatible API anyway.

Consider LADSPA:
	Did it ever get timestamped events?
	What would be the implications of adding them?

Then look at VST 1.0 vs 2.0:
	VST 1.0 did not have events.
	VST 2.0 added them - which resulted in massive redundancy,
	and an event system that is essentially crippled in many
	ways.

It's not like we were introduced to timestamped events, or any of 
these other concepts, last week. Many of us have spent *years* thinking 
about this, hacking, testing and learning what works and what doesn't.

This is not our first shot at an audio plugin API. We've all learned 
a lot since I came here with Audiality/RT-Linux, and IMHO, we should 
do our best to make XAP reflect our current knowledge and experience. 
I think we can do better than a slightly enhanced LADSPA.


> Don't let the event aspect
> fool you into thinking it is meaningful to make XAP encompass
> everything.
>
> Summa theologica: dump musical time,

This won't work. Beat sync effects are really rather important in any 
serious synth setup these days.
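
For example, a beat synced delay can't even be implemented without 
tempo information. The arithmetic is trivial (at 120 BPM, one beat 
is 60/120 = 0.5 s, which is 24000 frames at 48 kHz) - but the tempo 
has to come from *somewhere*:

	/* Musical time to audio frames, e.g. for a beat synced delay */
	static unsigned beats_to_frames(double beats, double bpm,
			double sample_rate)
	{
		return (unsigned)(beats * (60.0 / bpm) * sample_rate + 0.5);
	}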


> note pitch,

That, IMHO, only gives users another way to mix things up, without a 
good reason.

Either way, it's just a hint, and nothing that anyone really has to 
care about. Its only purpose is to allow nice hosts to suggest to 
lost users that they might want to place note-based effects *before* 
the scale converter - in case they actually use one. (With 12tET, 
there's no need, because NOTEPITCH == PITCH in that case.)
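
That is, with 12tET the scale converter is just the identity 
mapping. (The sketch below assumes 1.0/octave linear pitch; not an 
official XAP definition.)

	/* A scale converter maps NOTEPITCH to PITCH. With 12tET and
	 * 1.0/octave linear pitch, it's the identity:
	 */
	static double scale_convert_12tET(double notepitch)
	{
		return notepitch;	/* NOTEPITCH == PITCH */
	}

A just intonation converter would map the same note pitches to 
slightly different pitches.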


> asynchronous I/O.

Yes, I think I agree that that should be left out, at least until we 
really know what we're doing. Until it allows us to split 
Linuxsampler into a streaming plugin and a sample player plugin, 
there isn't much point in having this in the official API.


> Use Hz for pitch.

Why Hz? Linear pitch is so much easier to work with, whether you 
think in musical or pure pitch terms.
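
With linear pitch (say, 1.0/octave - the reference below is an 
assumption, not a XAP definition), transposition is plain addition, 
and an interval is the same size in every register. In Hz, 
everything turns into multiplication:

	#include <math.h>

	/* Linear pitch, 1.0/octave, with 0.0 at A4 = 440 Hz (assumed
	 * conventions). Transposing up an octave is pitch + 1.0;
	 * in Hz, you'd have to multiply by 2.
	 */
	static double pitch_to_hz(double pitch)
	{
		return 440.0 * pow(2.0, pitch);
	}

A 12tET semitone is simply 1/12, a cent is 1/1200, and so on.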


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---


