Greetings:
I need to investigate the 2.5.x kernel series. Is there a recommended
version for audio work? Low-latency patches? Any special comments
about the 2.5 series?
TIA!
== Dave Phillips
On Thursday 27 February 2003 20:25, Tim Jansen wrote:
> Did the other platforms use a callback-driven approach because it is a
> superior API, or because it is the only way to have sound on
> cooperative-multitasking OSes like pre-X MacOS and early Windows versions?
>
> Callback-driven APIs are much harder to use, especially with many existing
> codecs and frameworks that have been written with a push API in mind.
>
Let me list some pros and cons of callback and push APIs:
Push API:
Pro:
* Simple to play a single file: basically a copy from the source file to the
destination.
Con:
* Hard to write plugins that way. Take a look at the arts plugins: they all
have a 'calculateBlock' - a callback! (See the sketch after this list.)
Why?
In a pure push model each processing step reads data from
one file/pipe/device, processes it, and pushes it to a file/pipe/device.
You get:
* Lots of threads/processes that are not optimally synchronized.
Any thread is runnable while input is available, until its
output is full.
But that is not the important case; consider the case when the last
processing step's output is almost empty (if it runs empty you will hear
a click). How do you prioritize it above all other threads? Should it
always be higher? Suppose it is a mixer that has several inputs...
It could be done by a super-server that sets priorities depending
on position in the chain, but this is not easy...
* If plugins with a callback model are used to build the application, does
it not make sense to build the whole application (the audio part) the same way?
There are some neat tricks that can be used, since the wrapper library
can know where the destination is:
* If the destination is in the same process, your output will end up somewhere
in your process memory.
* If, on the other hand, the destination is another application,
it can allocate shared memory and let the output of your plugin end up
there.
* If the output is destined to go to an audio board,
the library could give you a memory-mapped hardware buffer instead of
ordinary memory to avoid the final copy. (You will get different buffers in
each process...)
* If your output format does not match the input format of the destination,
the library could automatically insert a converter/resampler, either on
your side or on the destination side (picking the one that requires less
communication).
* Can the destination change during the run?
1. Your application starts alone; its output format matches one supported
by the hardware. => hardware buffers
2. Another application starts (suppose the device can have several
destinations open at once - like the SB Live!). => no change for your plugin
(but if the format of the new plugin is not supported by the hardware
=> in-process buffer + automatically inserted conversion plugin
+ hardware buffer)
3. Even more applications start... It is no longer possible for all of them
to output directly to hardware... Suppose the library checks for matching
data types - and the first application matches perfectly!
=> the new application will get shared memory,
your application will be switched to ordinary memory, and these buffers will
be mixed by an automatically inserted plugin that outputs
to the hardware buffer...
4. The new application ends. => hardware buffers again
=> your application/plugin does not need to care. [No framework that I know
of implements this today - especially not the automatic parts.]
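To make the callback style concrete, here is a hypothetical sketch in C.
The names are made up (this is not the actual arts API); it only shows the
shape of the idea:

    /* The framework owns the buffers, so it can point them at process
     * memory, shared memory, or a mapped hardware buffer without the
     * plugin ever knowing. All names here are hypothetical. */
    typedef struct {
        /* Called by the framework whenever 'frames' samples are needed.
         * The plugin never chooses where 'in' and 'out' live. */
        void (*calculate_block)(void *self, const float *in,
                                float *out, unsigned frames);
        void *self;
    } audio_plugin;

    /* Example plugin body: a simple gain stage. */
    static void gain_block(void *self, const float *in,
                           float *out, unsigned frames)
    {
        float gain = *(float *)self;
        for (unsigned i = 0; i < frames; i++)
            out[i] = gain * in[i];
    }

The point is that the plugin computes when asked and writes wherever it is
told; all the buffer-placement tricks above live in the framework.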
With the push model your application needs to know about the possible
destinations, or treat everything as a file or shared memory.
But how would the push model handle the dynamic changes above?
Pipes? Shared memory? (A minimal push-style processing step is sketched
below.)
It could also use a library that directs the produced buffer to the right
destination (CSL?) - but it will be hard to eliminate the extra copying.
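For contrast, a push-style processing step can be as small as this sketch
(plain C, not tied to any real framework): it reads from its input pipe,
processes, and pushes downstream. Each such step runs as its own
process/thread, which is exactly where the scheduling problem above
comes from.

    #include <unistd.h>

    /* One step in a push pipeline: stdin -> process -> stdout. */
    int main(void)
    {
        float buf[256];
        ssize_t n;
        while ((n = read(0, buf, sizeof buf)) > 0) {
            for (int i = 0; i < (int)(n / sizeof(float)); i++)
                buf[i] *= 0.5f;          /* the "processing": attenuate */
            write(1, buf, n);            /* push to the next step */
        }
        return 0;
    }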
Note: arts and the KDE multimedia framework do a lot of things right today.
arts even goes one step further, since it moves the plugins, including input
and output, into one process - artsd. But currently it does not work
perfectly together with applications using other frameworks.
/RogerL
--
Roger Larsson
Skellefteå
Sweden
Hello,
As announced earlier on this list, Frank Neumann and I are organizing a
Conference of Linux Audio Developers at ZKM Karlsruhe. More information is
available from http://www.linuxdj.com/audio/lad/eventszkm2003.php3
The list of speakers and talks is now complete and the webpage of the event
has been moved to ZKM: http://on1.zkm.de/zkm/stories/storyReader$3027
Information on accommodation has been added as well.
In addition to the speakers, the following LADers have registered so far:
Rene Bastian
Joern Nettingsmeier
Jean-Daniel Pauget
Kai Vehmanen
Several other LADers have shown interest but not yet registered. If you want to
register for the conference, please provide the following information:
1) Hardware you will be bringing (if any)
2) How long will you stay?
3) Email address to which we can send last minute information
Remarks: 1) It is not necessary to bring any hardware, but if you do so,
it would be important for us to know because we need to
plan the rooms, network cabling, power supply etc.
2) In addition to the talks, there is room for LAD internal discussion
especially on Saturday morning and Sunday. We assume that on
Sunday this will last until about 18.00. Some LADers will be
around already on Friday morning (some even on Thursday
afternoon), however we might still be busy with preparations for
the talks.
A live audio stream of the talks will be available for those who cannot
attend the event.
Matthias
--
Dr. Matthias Nagorni
SuSE GmbH
Deutschherrnstr. 15-19 phone: +49 911 74053375
D - 90429 Nuernberg fax : +49 911 74053483
ReZound aims to be a stable, open source, and graphical audio file
editor primarily for but not limited to the Linux operating system.
http://sourceforge.net/project/showfiles.php?group_id=5056
--
Notes:
This release adds several new features and fixes a few bugs.
Changes:
- Added preliminary JACK support (the speed of and load on your computer
will determine how well this works)
- Added a "Repeat Count" or "Repeat Time" when pasting from any clipboard
- Added a "Balance" action which can change the balance or pan audio
between two channels
- Added some symmetry changing buttons and smoothing feature to the
graph parameter widget
- Added a "Recent Actions" under the "Edit" menu
- When an audio I/O system is selected at configure time, ReZound will
now fall back to OSS if the other is unavailable
- Buffer size and buffer count settings can now be made without rebuilding
- Made other minor cosmetic changes, backend changes, and bug fixes
>yes, you can move audio over USB. the question is not whether you can,
>but whether you should, and my feeling is that professional or
>semi-professional users should avoid it completely, regardless of what
>Yamaha, Tascam, Edirol and others who want to provide *cheap*
>connectivity to home studio users say in the advertisements.
Actually, they're not cheap at all. The main benefit of USB audio devices
is portability. However, now that FireWire is becoming a much cheaper
alternative, USB devices are probably going to become obsolete, like the
LaserDisc did.
But it would be very nice if I could use my USB Quattro to manipulate
the sounds of my bandmates in realtime at low latency. I tried with ssm
at 64 bytes and there was noticeable lag, so we couldn't do anything live.
The best I can get out of jack is 1024, but 2048 is more reliable.
Having to use PCI devices is a PITA when you are trying to gig at
different venues as they require a lot more space. There is also
something elegant about being able to instantly connect your setup to a
different computer by simply moving the USB cable.
However, it could be said that any sound device running on a PC is a
waste of time for serious musos, as you cannot beat the sound quality
of a top-of-the-line recording studio.
Each to their own, but I would just like to be able to show people the
true potential of Linux Audio, and currently I cannot unless I get a PCI
device. That, IMO, is what really sucks.
--
Patrick Shirkey - Boost Hardware Ltd.
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide
========================================
Being on stage with the band in front of crowds shouting, "Get off! No!
We want normal music!", I think that was more like acting than anything
I've ever done.
Goldie, 8 Nov, 2002
The Scotsman
Using the attached scripted setpci commands I have gotten the audio
latency of my Crusoe laptop down to 3 ms. Throughput to anything but the
audio device is worse, but who cares when you're just running one audio
synthesis/analysis process? I might crank up the PCI latency of the IDE
controller if I ever need to stream audio to or from disk. Results of
latencytest are posted at http://www.ouroboros-complex.org/latency.
My present kernel and alsa packages can be found at
http://www.ouroboros-complex.org/moon. These are compatible with the
Planet CCRMA project. See http://www.ouroboros-complex.org/moon/README.
I have effectively reduced the weight of my rig from 120 to 2 pounds.
Jack still doesn't work, but I can do pretty much everything in one Pd
process anyway. It's no huge MOTM system, but then again, it's not a
huge MOTM either.
Thank you everyone.
--
(jfm3 2838 BCBA 93BA 3058 ED95 A42C 37DB 66D1 B43C 9FD0)
Let's continue the cross-post circus. :)
Does anyone here have good connections to the GNOME audio folks? Is
gstreamer leading the whole thing, or are there others? I think it would
be great if we could at least manage to start living on the same planet
(... and maybe even someday, gasp, cooperate! >;)).
---------- Forwarded message ----------
Date: Thu, 27 Feb 2003 00:05:04 +0200 (EET)
From: Kai Vehmanen <kai.vehmanen(a)wakkanet.fi>
To: KDE Multimedia <kde-multimedia(a)mail.kde.org>
Cc: Paul Davis <paul(a)linuxaudiosystems.com>
Subject: Re: CSL Motivation
On Tue, 25 Feb 2003, Tim Janik wrote:
> and, more importantly, stefan and i wrote a paper going into the details of
> why we think a project like CSL is necessary and what we intend to achieve
> with it:
Ok, I already forwarded a few mails concerning this from lad. I'll add a
few comments of my own:
I think I understand your reasons behind CSL, and I think it (CSL) might
just be the necessary glue to unite KDE and GNOME on the multimedia front.
But what I see as a risk is that you forget the efforts and existing APIs
outside these two desktop projects. In the end, it's the applications that
count. It's certainly possible that you can port all multimedia apps that
come with GNOME and KDE to CSL, but this will never happen for the huge
set of audio apps that are listed at http://www.linuxsound.at. And these
are apps that people (not all, but many) want to use.
A second point is that for many apps, the functionality of CSL is just not
enough. The ALSA PCM API is a very large one, but for a reason. Implementing
a flexible capabilities query API is very difficult (example: changing the
active srate affects the range of valid values for other parameters). The
selection of commonly used audio parameters has really grown (>2 channels,
different interleaving settings for channels, 20bit, 24bit, 24-in-4bytes,
24-in-3bytes, 24-in-lower3of4bytes, 32bit, 32bit-float, etc); these
are becoming more and more common. Then you have functionality for
selecting and querying available audio devices and setting up virtual
soundcards composed of multiple individual cards. These are all supported
by ALSA and just not available on other unices. Adding support for all
this to CSL would be a _huge_ task.
Perhaps the most important area of the ALSA PCM API is the set of functions
for handling buffer size, interrupt frequency and wake-up parameters. In
other words, being able to set a buffer size value is not enough when
writing high-performance (low-latency, high-bandwidth) audio applications.
You need more control, and this is what ALSA brings you. And it's good to
note that these are not only needed by music creation apps ("sw for
musicians", for lack of a better term), but also by desktop apps. I have
myself written a few desktop'ish audio apps that have needed the added
flexibility of ALSA.
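For readers who haven't used it, here is a minimal (untested) sketch of the
kind of control meant here, using the real ALSA hw_params calls; the device
name and the sizes are arbitrary examples:

    #include <alsa/asoundlib.h>

    /* Configure playback with independent buffer and period (interrupt)
     * sizes - the part that a plain "set a buffer size" API cannot
     * express. */
    int open_pcm(snd_pcm_t **pcm)
    {
        snd_pcm_hw_params_t *hw;
        snd_pcm_uframes_t buffer_size = 1024; /* total ring buffer, frames */
        snd_pcm_uframes_t period_size = 256;  /* frames per interrupt */
        unsigned int rate = 44100;
        int dir = 0;

        if (snd_pcm_open(pcm, "hw:0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return -1;
        snd_pcm_hw_params_alloca(&hw);
        snd_pcm_hw_params_any(*pcm, hw);
        snd_pcm_hw_params_set_access(*pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
        snd_pcm_hw_params_set_format(*pcm, hw, SND_PCM_FORMAT_S16_LE);
        snd_pcm_hw_params_set_channels(*pcm, hw, 2);
        snd_pcm_hw_params_set_rate_near(*pcm, hw, &rate, &dir);
        snd_pcm_hw_params_set_buffer_size_near(*pcm, hw, &buffer_size);
        snd_pcm_hw_params_set_period_size_near(*pcm, hw, &period_size, &dir);
        return snd_pcm_hw_params(*pcm, hw);
    }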
Now JACK, on the other hand, offers completely new types of functionality
for audio apps: audio routing between applications, connection
management and transport control (see the sketch below). These are all
essential for music apps, but don't make sense in an audio i/o abstraction
like CSL.
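A bare-bones JACK client shows the callback-plus-routing model; the
playback port name on the last call assumes the usual alsa_pcm backend:

    #include <jack/jack.h>
    #include <string.h>
    #include <unistd.h>

    static jack_port_t *out_port;

    /* JACK pulls audio from us: the engine calls this for every block. */
    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *out =
            jack_port_get_buffer(out_port, nframes);
        memset(out, 0, nframes * sizeof(*out));  /* silence, for brevity */
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_new("sketch");
        if (!client)
            return 1;
        out_port = jack_port_register(client, "out",
                JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
        jack_set_process_callback(client, process, NULL);
        jack_activate(client);
        /* Inter-application routing is one call; the destination port
         * name assumes the usual alsa_pcm backend. */
        jack_connect(client, "sketch:out", "alsa_pcm:playback_1");
        for (;;)
            sleep(1);
    }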
So to summarize, I really hope that you leave a possibility for these APIs
(especially ALSA and JACK) in the KDE multimedia architecture, so that it
would be possible to run different apps without the need to completely
bypass other application groups (as is the situation today with
aRts/esd/ALSA/OSS/JACK apps).
As a more practical suggestion, I see the options as:
1) A front-end API that is part of the KDE devel API
a) aRts
b) gstreamer
c) CSL
d) Portaudio
e) ... others?
2) Backend server that is user-selectable (you have a nice GUI
widget for selecting which to use)
a) aRts (current design, again uses OSS/ALSA)
b) JACK (gstreamer already has support for it)
c) ALSA (dmix or aserver)
d) MAS
e) ... others?
All official (part of the base packages) KDE+GNOME apps would use (1), but
3rd-party apps could directly interact with (2) if they so wished. If the
required (2) is not running, the user can go to the configuration page and
change the audio backend.
Comments? :)
--
http://www.eca.cx
Audio software for Linux!
Hello,
I just released polarbear. I had the code lying around, and just merged
it with the jack/alsa i/o code of tapiir. Note that this is the first
public release. I did not test it thoroughly, and I am not sure if the
GUI is obvious enough (it should be if you are familiar with complex
filters), so any input is welcome.
polarbear is a tool for designing filters in the complex domain. Filters
can be designed by placing any number of poles and zeros on the z-plane.
From these, the filter coefficients are calculated, and the filter can be
applied in real time to an audio stream.
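For the curious, the pole/zero-to-coefficient step is standard DSP and
small enough to sketch in C (this is the general math, not necessarily
polarbear's actual code):

    #include <complex.h>

    /* A conjugate zero pair (z0, conj(z0)) and pole pair (p, conj(p))
     * give one biquad section:
     *   H(z) = (1 - 2*Re(z0)*z^-1 + |z0|^2*z^-2)
     *        / (1 - 2*Re(p) *z^-1 + |p|^2 *z^-2)          */
    void biquad_from_pole_zero(double complex z0, double complex p,
                               double b[3], double a[3])
    {
        b[0] = 1.0;  b[1] = -2.0 * creal(z0);  b[2] = creal(z0 * conj(z0));
        a[0] = 1.0;  a[1] = -2.0 * creal(p);   a[2] = creal(p * conj(p));
    }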
polarbear can be found at
http://www.iua.upf.es/~mdeboer/projects/polarbear/
For the (far) future, the idea is that polarbear and tapiir can work
together, in the sense that the filter coefficients calculated by polarbear
can be used to control the gains of tapiir. Maybe polarbear and tapiir will
even merge. That would be some animal :-)
Maarten
Hi all!
First post to this list for me..
I have observed a strange thing with my jackd + BruteFIR setup. It looks something like this:
HW: Emagic 2|6 USB thing (hw:0), directly connected to a USB port, no hubs. This runs with both inputs and outputs in analog mode for now, to rule out any kind of sample rate/clock rate mismatch that digital mode can introduce.
Kernel 2.4.19-1LL (running Planet CCRMA software here with lowlatency enabled in the kernel) - I have tried this on both RedHat 7.3 and RedHat 8.0 systems.
jackd -R -d alsa -d hw:0 -r 44100 -p 2048
BruteFIR running at 44100Hz with a filter size of 2048,8
This gives me a faint crackling noise in the output (it sounds like latency issues, but nothing gets logged anywhere). The symptom is the same at 48kHz, and to make sure that it's not really anything directly hardware-related, I can start jack and, instead of connecting BruteFIR, connect AlsaPlayer and play MP3s or whatever just fine (!)...
That's not the most interesting part though - the crackling disappears if I choose a filter size of 4096,4 (or 8) instead, with the same period size for jack. That really shouldn't make any difference, should it?
Has anyone else seen/heard this problem, or is it just me? :)
It seems that my 2|6 likes period sizes of 44100/1000 or 48000/1000 much better than the BruteFIR-enforced 2^something sizes, though - I can't reliably play anything using a period size of 1024, but one of 820 is fine at a 44100Hz rate (?)
What other information would be relevant here?
/WernerJ