Hi!
On Tue, Feb 25, 2003 at 07:48:11PM +0200, Kai Vehmanen wrote:
> Date: Tue, 25 Feb 2003 12:20:22 -0500
> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Reply-To: linux-audio-dev(a)music.columbia.edu
> To: linux-audio-dev(a)music.columbia.edu
> Subject: Re: [linux-audio-dev] Fwd: CSL Motivation
>
> >There are discussions on kde-multimedia about
> >the future of Linux/Unix multimedia (especially sound).
> >This is one of the most interesting messages.
>
> CSL is proposed primarily as a wrapper layer around existing APIs. as
> such, it seems to me to have no particular merits over PortAudio,
> which has the distinct advantages of (1) existing
CSL also "exists".
> (2) working on many platforms already and
You're right about that one: CSL is not as complete as PortAudio w.r.t.
portability.
> (3) using well-developed abstractions.
I do not believe that something is ever a "well-developed abstraction" by
itself. Something is always a "well-developed abstraction" from something,
designed to achieve a certain purpose. From the PortAudio homepage:
| PortAudio is intended to promote the exchange of audio synthesis software
| between developers on different platforms, and was recently selected as the
| audio component of a larger PortMusic project that includes MIDI and sound
| file support.
This clearly states the purpose: if you want to write audio synthesis software,
then you should use PortAudio. Then, I assume, the abstraction is well-
developed. However, it does not state:
"... is intended to play sound samples with sound servers easily. Or: ... is
intended to port existing applications easily. Or: ... is intended to let
the application choose its programming model freely."
No. PortAudio makes a lot of choices for the software developer, and thus
provides an easy abstraction. This will mean, however, that actually porting
software to PortAudio will probably be hard (compared to CSL), whereas
writing new software for PortAudio might be convenient, _if_ the software
falls in the scope of what the abstraction was made for.
> CSL was
> written as if PortAudio doesn't exist. I don't know if this a NIH
> attitude, or something else, but I see little reason not to consider
> PortAudio as *the* CSL, and by corollary, little reason to develop Yet
> Another Wrapper API.
Well, I gave you some. The paper gives some more. Basically, CSL is intended
for porting _most_ free software, whereas PortAudio is intended for portable
synthesis software.
I think PortAudio would benefit in _supporting_ CSL, rather than aRts for
instance, because CSL is more generic; once new sound servers (like MAS)
come up, you need not patch PortAudio all the time, but just one place: a
CSL driver. The same is valid for other meta-frameworks like SDL.
> the only reason i was happy writing JACK was
> precisely because its not another wrapper API - it specifically
> removes 90% of the API present in ALSA, OSS and other similar HAL-type
> APIs.
I am glad you did write JACK (although at the time I thought it was just
another attempt to redo aRts, and we had some heated discussions back then),
because some people seem to like it. If some people will like CSL, why not?
If you added CSL support to JACK right now, you would never need to bother
with any of the "sound server guys" like me again, because you could always
say: "support CSL in your sound server thing, and then JACK will support your
sound server".
On the other hand, if you added JACK support to CSL, you could also mix the
output of all of these "sound servers" into JACK, without endangering your
latency properties.
Cu... Stefan
--
-* Stefan Westerfeld, stefan(a)space.twc.de (PGP!), Hamburg/Germany
KDE Developer, project infos at http://space.twc.de/~stefan/kde *-
Hallo,
with the LAD meeting getting closer, I'm getting a bit curious about what
the plans are for the open "Linux Sound Night" on 15.3. Will we
hear some of you guys perform, with Paul recording it?
ciao
--
Frank Barknecht _ ______footils.org__
Greetings:
I need to investigate the 2.5.x kernel series. Is there a recommended
version for audio work? Low-latency patches? Any special comments
about the 2.5 series?
TIA!
== Dave Phillips
On Thursday 27 February 2003 20:25, Tim Jansen wrote:
> Did the other platforms use a callback-driven approach because it is a
> superior API, or because it is the only way to have sound on
> cooperative-multitasking OSes like pre-X MacOS and early Windows versions?
>
> Callback-driven APIs are much harder to use, especially with many existing
> codecs and frameworks that have been written with a push API in mind.
>
I'll try to list some pros and cons of callback and push APIs:
push API:
Pro:
* simple to play a single file, basically a copy from source file to
destination.
Con:
* hard to write plugins that way. Take a look at aRts plugins. They all have
a 'calculateBlock' method = a callback!
Why?
In a pure push model each processing step reads data from
one file/pipe/device, processes it, and pushes it to a file/pipe/device.
You get:
* lots of threads/processes that are not optimally synchronized.
Any thread is runnable whenever input is available, until its
output is full.
But that is not the important case; consider the case when the last
processing step's output is almost empty (if it gets empty you will hear
a click). How do you prioritize it higher than all other threads? Should it
always be higher? Suppose it is a mixer that has several inputs...
Could it be done by a super server that sets priorities depending
on position in the chain? This is not easy...
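The difference between the two models can be sketched in a few lines of C.
All names here (write_dev, calculate_block_fn, engine_run) are made up for
illustration; they are not from aRts, CSL, JACK or any real API:

```c
#include <stddef.h>

#define BLOCK 64

/* Push model: the application drives the loop and blocks in the write.
   Scheduling is implicit in how the OS wakes up each writer. */
static void push_play(const float *src, size_t frames,
                      void (*write_dev)(const float *, size_t))
{
    size_t done = 0;
    while (done < frames) {
        size_t n = frames - done < BLOCK ? frames - done : BLOCK;
        write_dev(src + done, n);   /* may block until the device has room */
        done += n;
    }
}

/* Callback model: the engine drives, and the plugin only fills blocks
   (the 'calculateBlock' style). Prioritization lives in one place: the
   engine calls back exactly when data is needed. */
typedef void (*calculate_block_fn)(float *out, size_t frames, void *user);

static void engine_run(calculate_block_fn cb, void *user,
                       float *out, size_t frames)
{
    for (size_t done = 0; done < frames; done += BLOCK) {
        size_t n = frames - done < BLOCK ? frames - done : BLOCK;
        cb(out + done, n, user);
    }
}
```

Note how in the callback version the question "who runs next, and when?" has a
single answer (the engine), which is exactly what the priority problem above
asks for.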
* If plugins with a callback model are used to build the application, does
it not make sense to build the whole application (audio part) the same way?
There are some neat tricks that can be used, since the wrapper library
can know where the destination is.
* if the destination is in the same process, your output will end up somewhere
in your process memory.
* On the other hand, suppose the destination is another application;
it can allocate shared memory and let the output of your plugin end up
there.
* If the output is destined for an audio board,
the library could give you a memory-mapped hardware buffer instead of
ordinary memory to avoid the final copy. (You will get different buffers in
each process...)
* if your output type does not match the input type of the destination,
the library could automatically insert a converter/resampler either on
your side or on the destination side (pick the one that requires less
communication).
* Can the destination change during the run?
1. Your application starts alone, output format matches one supported
by hardware. => hardware buffers
2. Another application starts (suppose the device can have several
destinations open at once - like the SB Live!) => no change for your plugin
(but if the format of this plugin is not supported by hardware
=> in-process buffer + automatically inserted conversion plugin
+ hardware buffer)
3. Even more applications start... It is no longer possible for all of them
to output directly to hardware... suppose the library checks for matching
data types - and the first application matches perfectly!
=> the new application will get shared memory,
your application will be changed to ordinary memory, and these buffers will
be mixed by an automatically inserted plugin that outputs
to the hardware buffer...
4. The new application ends. => hardware buffers again
=> your application/plugin does not need to care. [No framework that I know
of implements this today - especially not the automatic parts]
With the push model your application needs to know about possible
destinations, or treat everything as a file or shared memory.
But how to handle the dynamic changes in the push model?
Pipes, shared memory?
It could also use a library that directs the produced buffer in the right
direction (CSL?) - but it will be hard to eliminate extra copying.
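One standard way such a library could bridge the two models - a sketch of
the general technique, not of what CSL actually does - is a single-reader/
single-writer ring buffer: the application pushes whenever it likes, and
the engine's callback drains whatever is available, padding with silence on
underrun instead of blocking. The extra copy through the ring is exactly
the copy that is hard to eliminate:

```c
#include <stdatomic.h>
#include <stddef.h>
#include <string.h>

#define RING_SIZE 1024                 /* must be a power of two */

typedef struct {
    float data[RING_SIZE];
    atomic_size_t wr, rd;              /* monotonically increasing counters */
} ring_t;

/* Push side: called from the application thread; never blocks. */
static size_t ring_push(ring_t *r, const float *src, size_t n)
{
    size_t wr = atomic_load(&r->wr);
    size_t rd = atomic_load(&r->rd);
    size_t space = RING_SIZE - (wr - rd);
    if (n > space) n = space;          /* partial write on overflow */
    for (size_t i = 0; i < n; i++)
        r->data[(wr + i) & (RING_SIZE - 1)] = src[i];
    atomic_store(&r->wr, wr + n);
    return n;
}

/* Callback side: called by the engine when the device needs data. */
static void ring_callback(ring_t *r, float *out, size_t n)
{
    size_t rd = atomic_load(&r->rd);
    size_t wr = atomic_load(&r->wr);
    size_t avail = wr - rd;
    size_t take = n < avail ? n : avail;
    for (size_t i = 0; i < take; i++)
        out[i] = r->data[(rd + i) & (RING_SIZE - 1)];
    memset(out + take, 0, (n - take) * sizeof(float));  /* silence on underrun */
    atomic_store(&r->rd, rd + take);
}
```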
Note: aRts and the KDE multimedia framework do a lot of things right today.
It even goes one step further, since it moves the plugins, including input and
output, into one process - artsd. But currently it does not work perfectly
together with applications using other frameworks.
/RogerL
--
Roger Larsson
Skellefteå
Sweden
Hello,
As announced earlier on this list, Frank Neumann and I are organizing a
Conference of Linux Audio Developers at ZKM Karlsruhe. More information is
available from http://www.linuxdj.com/audio/lad/eventszkm2003.php3
The list of speakers and talks is now complete and the webpage of the event
has been moved to ZKM: http://on1.zkm.de/zkm/stories/storyReader$3027
Information on accommodation has been added as well.
In addition to the speakers, the following LADers have registered so far:
Rene Bastian
Joern Nettingsmeier
Jean-Daniel Pauget
Kai Vehmanen
Several other LADers have shown interest but not yet registered. If you want to
register for the conference, please provide the following information:
1) Hardware you will be bringing (if any)
2) How long will you stay ?
3) Email address to which we can send last minute information
Remarks: 1) It is not necessary to bring any hardware, but if you do so,
it would be important for us to know because we need to
plan the rooms, network cabling, power supply etc.
2) In addition to the talks, there is room for LAD internal discussion
especially on Saturday morning and Sunday. We assume that on
Sunday this will last until about 18.00. Some LADers will be
around already on Friday morning (some even on Thursday
afternoon), however we might still be busy with preparations for
the talks.
A live audio stream of the talks will be available for those who cannot
attend the event.
Matthias
--
Dr. Matthias Nagorni
SuSE GmbH
Deutschherrnstr. 15-19 phone: +49 911 74053375
D - 90429 Nuernberg fax : +49 911 74053483
ReZound aims to be a stable, open source, and graphical audio file
editor primarily for but not limited to the Linux operating system.
http://sourceforge.net/project/showfiles.php?group_id=5056
--
Notes:
This release adds several new features and fixes a few bugs.
Changes:
- Added preliminary JACK support (the speed of and load on your computer
will determine how well this works)
- Added a "Repeat Count" or "Repeat Time" when pasting from any clipboard
- Added a "Balance" action which can change the balance or pan audio
between two channels
- Added some symmetry changing buttons and smoothing feature to the
graph parameter widget
- Added a "Recent Actions" under the "Edit" menu
- When an audio I/O system is selected at configure time, ReZound will
now fall back to OSS if the other is unavailable
- Buffer size and buffer count settings can now be made without rebuilding
- Made other minor cosmetic changes, backend changes and bug fixes
>yes, you can move audio over USB. the question is not whether you can,
>but whether you should, and my feeling is that professional or
>semi-professional users should avoid it completely, regardless of what
>Yamaha, Tascam, Edirol and others who want to provide *cheap*
>connectivity to home studio users say in the advertisements.
Actually they're not cheap at all. The main benefit of USB audio devices
is the portability. However, now that FireWire is becoming a much
cheaper alternative, USB devices are probably going to become obsolete,
like the LaserDisc has.
But it would be very nice if I could use my USB Quattro to manipulate
the sounds of my bandmates in realtime at low latency. I tried with ssm
at 64 bytes and there was noticeable lag, so we couldn't do anything live.
The best I can get out of JACK is 1024, but 2048 is more reliable.
Having to use PCI devices is a PITA when you are trying to gig at
different venues as they require a lot more space. There is also
something elegant about being able to instantly connect your setup to a
different computer by simply moving the USB cable.
However, it could be said that any sound device running on a PC is a
waste of time for serious musos as you cannot beat the sound quality
from a top of the line recording studio.
Each to their own but I would just like to be able to show people the
true potential of Linux Audio and currently I cannot unless I get a PCI
device. That, IMO, is what really sucks.
--
Patrick Shirkey - Boost Hardware Ltd.
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide
========================================
Being on stage with the band in front of crowds shouting, "Get off! No!
We want normal music!", I think that was more like acting than anything
I've ever done.
Goldie, 8 Nov, 2002
The Scotsman
Using the attached scripted setpci commands I have gotten the audio
latency of my crusoe laptop down to 3 ms. Throughput to anything but the
audio device is worse, but who cares when you're just running one audio
synthesis/analysis process? I might crank up the PCI latency of the IDE
controller if I ever need to stream audio to or from disk. Results of
latencytest are posted at http://www.ouroboros-complex.org/latency.
My present kernel and alsa packages can be found at
http://www.ouroboros-complex.org/moon. These are compatible with the
Planet CCRMA project. See http://www.ouroboros-complex.org/moon/README.
I have effectively reduced the weight of my rig from 120 to 2 pounds.
Jack still doesn't work, but I can do pretty much everything in one Pd
process anyway. It's no huge MOTM system, but then again, it's not a
huge MOTM either.
Thank you everyone.
--
(jfm3 2838 BCBA 93BA 3058 ED95 A42C 37DB 66D1 B43C 9FD0)
Let's continue the cross-post circus. :)
Does anyone here have good connections to the GNOME audio folks? Is
gstreamer leading the whole thing, or are there others? I think it would
be great if we could at least manage to start living on the same planet
(... and maybe even someday, gasp, cooperate! >;)).
---------- Forwarded message ----------
Date: Thu, 27 Feb 2003 00:05:04 +0200 (EET)
From: Kai Vehmanen <kai.vehmanen(a)wakkanet.fi>
To: KDE Multimedia <kde-multimedia(a)mail.kde.org>
Cc: Paul Davis <paul(a)linuxaudiosystems.com>
Subject: Re: CSL Motivation
On Tue, 25 Feb 2003, Tim Janik wrote:
> and, more importantly, stefan and i wrote a paper going into the details of
> why we think a project like CSL is necessary and what we intend to achieve
> with it:
Ok, I already forwarded a few mails concerning this from lad. I'll add a
few comments of my own:
I think I understand your reasons behind CSL, and I think it (CSL) might
just be the necessary glue to unite KDE and GNOME on the multimedia front.
But what I see as a risk is that you forget the efforts and existing APIs
outside these two desktop projects. In the end, it's the applications that
count. It's certainly possible that you can port all multimedia apps that
come with GNOME and KDE to CSL, but this will never happen for the huge
set of audio apps that are listed at http://www.linuxsound.at. And these
are apps that people (not all, but many) want to use.
A second point is that for many apps, the functionality of CSL is just not
enough. The ALSA PCM API is a very large one, but for a reason. Implementing a
flexible capabilities query API is very difficult (example: changing the
active srate affects the range of valid values for other parameters). The
selection of commonly used audio parameters has really grown: >2 channels,
different interleaving settings for channels, 20bit, 24bit, 24-in-4bytes,
24-in-3bytes, 24-in-lower3of4bytes, 32bit, 32bit-float, etc. These are
becoming more and more common. Then there is functionality for
selecting and querying available audio devices and setting up virtual
soundcards composed of multiple individual cards. These are all supported
by ALSA and just not available on other unices. Adding support for all
this to CSL would be a _huge_ task.
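To make the parameter interdependence concrete, here is a toy example with
an invented constraint (not any real card): suppose the hardware requires
each period to last at least 1 ms. The minimum valid period size in frames
then moves whenever the sample rate does, so a capability query for "period
size" is meaningless until the rate is fixed:

```c
/* Hypothetical device constraint: a period must last >= 1 ms.
   The valid minimum period size (in frames) therefore depends on
   the currently selected sample rate. */
static unsigned min_period_frames(unsigned rate_hz)
{
    return (rate_hz + 999) / 1000;     /* ceil(rate / 1000) */
}
/* min_period_frames(44100) -> 45 frames,
   min_period_frames(96000) -> 96 frames */
```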
Perhaps the most important area of the ALSA PCM API is the functions for
handling buffer size, interrupt frequency and wake-up parameters. In other
words, being able to set a buffer size is not enough when writing
high-performance (low-latency, high-bandwidth) audio applications. You
need more control and this is what ALSA brings you. And it's good to note
that these are not only needed by music creation (or sw for musicians for
lack of a better term) apps, but also for desktop apps. I have myself
written a few desktop'ish audio apps that have needed the added
flexibility of ALSA.
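For reference, the arithmetic that makes these parameters matter, using the
usual period/buffer terminology (a sketch, not tied to any particular API):

```c
/* Worst-case output latency of a period-based PCM configuration:
   the device plays through all queued periods before fresh data
   written now is heard, so latency = periods * period_size / rate. */
static double latency_ms(unsigned period_frames, unsigned periods,
                         unsigned rate_hz)
{
    return 1000.0 * (double)period_frames * (double)periods / (double)rate_hz;
}
/* latency_ms(64, 2, 48000)   -> about 2.7 ms
   latency_ms(1024, 2, 44100) -> about 46.4 ms */
```

This is why controlling period size and period count (not just one total
buffer size) is essential for low-latency work.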
Now JACK, on the other hand, offers completely new types of functionality
for audio apps: audio routing between audio applications, connection
management and transport control. These are all essential for music apps,
but don't make sense in an audio i/o abstraction like CSL.
So to summarize, I really hope that you leave a possibility for these APIs
(especially ALSA and JACK) in the KDE multimedia architecture, so that it
would be possible to run different apps without the need to completely
bypass other application groups (as is the situation today with
aRts/esd/ALSA/OSS/JACK apps).
As a more practical suggestion, I see the options as:
1) A front-end API that is part of the KDE devel API
a) aRts
b) gstreamer
c) CSL
d) Portaudio
e) ... others?
2) Backend server that is user-selectable (you have a nice GUI
widget for selecting which to use)
a) aRts (current design, again uses OSS/ALSA)
b) JACK (gstreamer already has support for it)
c) ALSA (dmix or aserver)
d) MAS
e) ... others?
All official (part of the base packages) KDE+GNOME apps would use (1), but
3rd party apps could directly interact with (2) if they so wished. If the
required (2) is not running, user can go to the configuration page and
change the audio backend.
Comments? :)
--
http://www.eca.cx
Audio software for Linux!