> designed to achieve a certain purpose. From the
> PortAudio homepage:
> | PortAudio is intended to promote the exchange of audio synthesis software
> | between developers on different platforms, and was recently selected as the
> | audio component of a larger PortMusic project that includes MIDI and sound
> | file support.
> This clearly states the purpose: if you want to write audio synthesis
> software, then you should use PortAudio. Then, I assume, the abstraction is
> well-developed. However, it does not state:
> "... is intended to play sound samples with sound servers easily. Or: ... is
> intended to port existing applications easily. Or: ... is intended to let
> the application choose its programming model freely."
> No. PortAudio makes a lot of choices for the software developer, and thus
> provides an easy abstraction.
The point is that PortAudio follows the same basic abstraction that
the audio APIs on the overwhelmingly dominant platforms for audio
software development follow (callback-driven). Those APIs have emerged
from years of practical experience writing serious audio applications,
and I strongly believe (as you know) that they should be taken very
seriously. The fact that a couple of early linux hackers felt that the
open/read/write/close/ioctl model was the right thing seems to me of
much less significance than the amount of working, useful,
sophisticated software that has been built around a set of
abstractions that are more or less identical to the ones PortAudio
provides.
> This will mean, however, that actually porting software to PortAudio will
> probably be hard (compared to CSL),
that depends on where you started from. if you started from ASIO, it
won't be hard. if your app started as a VSTi, it won't. but sure, if
you started as your typical linux audio app that reads a stereo file
from the disk and shoves it into the audio interface, it will be
fairly easy to use CSL and a bit harder to use PortAudio.
this porting thing bothers me. we have only a handful of really great
apps for audio under linux right now (though we have lots of rather
interesting ones) and most of them already can use JACK. i would much
rather see people forced (just like apple have done) to work with the
"correct" abstraction than continue with multiple wrappers and
multiple different abstractions as we move into a period where i hope
to see an explosion in linux audio applications.
> whereas writing new software for PortAudio might be convenient, _if_ the
> software falls in the scope of what the abstraction was made for.
well, there are at least two sets of evidence to consider there. i
think there is plenty of evidence in the world of windows that a large
amount of interesting audio software works with the ASIO and DirectX
models (which are semantically similar, if syntactically worlds
apart). it's not limited to audio synthesis. but at the same time, it's
worth noting that "most" apps on windows that emit audio for some
reason (i.e. the ones that are not actually audio applications) do not
use ASIO or DirectX. so there is evidence to support the idea that at
least a couple of abstractions are necessary.
in contrast, CoreAudio offers only one abstraction model as i understand
it. so apple at least appear to have been willing to bet that all
software can use a single model.
> > the only reason i was happy writing JACK was precisely because it's not
> > another wrapper API - it specifically removes 90% of the API present in
> > ALSA, OSS and other similar HAL-type APIs.
> I am glad you did write JACK (although back then I thought it was just
> another attempt to redo aRts, and we had some heated discussions), because
> some people seem to like it.
i don't think it's about liking it so much as the fact that it does
some things that are extremely important to many of us and that
nothing else can do.
> If some people will like CSL, why not?
well, i am *really* not trying to be argumentative just for the sake
of it, but ... the reason not to is that CSL doesn't offer any new
functionality. it's just another wrapper, and we already have at least one
of them, with some evidence that it's capable of supporting all apps.
> On the other hand, if you added JACK support to CSL, you could also mix
> the output of all of these "sound servers" into JACK, without endangering
> your latency properties.
since JACK has no desire to be a general purpose API for all apps,
this would be more appropriate than the other way around, i
think. there is no reason to run a JACK system that does audio i/o via
a sound server - it just isn't going to work with the performance
promises that JACK is supposed to provide.
i think that the fundamental problem here is the division between:
* the kinds of apps that emerge from the repeated questions on gtk-list
(and i'm sure the KDE equivalent): "what's the best way to play a
sound?"
* "serious" audio applications (and, i suppose, video apps too)
CSL seems to be about providing something to the first group that
PortAudio *could* provide, even though it may not be the best or
easiest method (although it might be the most appropriate). but i
really don't see how it offers anything to the second group, which wants
features (and to some extent, *lack* of features) beyond those that CSL or
even PortAudio provide.
i see it this way because CSL doesn't appear to me to have emerged
from the efforts of audio developers. it's come mostly from the desktop
world as a way to meet the need for a "cross-desktop" way to say "you've
got mail" or play the KDE startup theme, and other "singular" audio
events.
as i said above, the windows experience suggests that it may be
useful, perhaps even necessary, to have two different abstractions to
support these two rather different classes of applications. but the
jury is out on that.
--p