Yes! I was just about to write this very same message to the list!
-------
NQuit
www.nquit.com
-------
----- Original Message -----
From: Dave Phillips
Sent: 7/13/2004 11:43:00 AM
To: linux-audio-dev@music.columbia.edu; linux-audio-user@music.columbia.edu
Subject: [linux-audio-dev] Ardour named Project Of The Year by LJ
> Greetings Earthlings:
>
> Just a quick note to say that the August 2004 issue of the Linux
> Journal has selected Ardour as Project Of The Year for its 2004 Editors
> Choice awards. Congratulations to Paul and everyone on the team !
>
> Best regards,
>
> dp
>
>
Hi,
I'm writing a pattern-based audio sequencer, and I'm wondering which file
format to choose for saving a whole song. My application handles the
usual stuff: metadata (bpm, settings), patterns, and samples.
Do you believe I should use (yet another) binary file format, embedding
patterns, samples, and possibly roasted chicken into one big standalone
(that's the good point) package?
Or should I produce an XML file (doesn't <pattern> look nicer than
[pattern]? ;)), with an option to mimic Mozilla's "Web Page, Complete"
save behaviour? That is, when saving song.xml, put a song.xml.files/
folder in the same directory to store the samples.
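For what it's worth, here is one way such an XML song file could look. All element and attribute names below are invented for illustration, not a proposed standard:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- hypothetical layout; samples live in song.xml.files/ next to this file -->
<song bpm="140">
  <samples>
    <sample id="kick"  file="song.xml.files/kick.wav"/>
    <sample id="snare" file="song.xml.files/snare.wav"/>
  </samples>
  <pattern name="intro" steps="16">
    <note step="0" sample="kick"/>
    <note step="4" sample="snare"/>
  </pattern>
</song>
```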
--
og
Maged Michael from IBM, who is publishing lots of practical lock-free
algorithms these days, has just published a paper describing a
lock-free memory allocation algorithm:
http://www.research.ibm.com/people/m/michael/pldi-2004.pdf
It seems plausible that you could use this to safely allocate memory
from RT threads. The only questions I have about its practicality are:
1. How easy/possible is it to use two malloc() implementations in the
same program? My brief research suggests that mixing the system
malloc(3) with sbrk(2) (the latter being the underlying mechanism for
obtaining more memory from the OS) is not guaranteed to be safe. A
possible solution I have encountered is to obtain memory from the OS
with mmap(/dev/zero) instead of using sbrk(2).
2. When your lock-free malloc needs more memory from the OS, it will
still take a system call to get it. I believe I have heard it said in
the past that system calls of any sort are unacceptable in RT code, but
isn't this a bit of a hard-line position?
Josh
From: "Bill Huey (hui)" <bhuey(a)lnxw.com>
> On Tue, Jul 13, 2004 at 11:44:59PM +0100, Martijn Sipkema wrote:
> [...]
> > The worst case latency is the one that counts and that is the contended case. If
> > you could guarantee no contention then the worst case latency would be the
> > very fast uncontended case, but I doubt there are many (any?) examples of this in
> > practice. There are valid uses of mutexes with priority inheritance/ceiling protocol;
> > the people making the POSIX standard aren't stupid...
>
> There are cases where you have to use priority inheritance, but the schemes that are
> typically used either have a kind of exhaustive analysis backing them or use a simple
> late-detection scheme. In a general-purpose OS, the latter is useful for various kinds
> of overload cases. But if your system is constantly hitting that specific case, then it's
> a sign that contention in the kernel must *also* be a problem under SMP conditions. The
> constant use of priority inheritance overloads the scheduler, puts pressure on the
> cache, and does other negative things that hurt the CPU-local performance of the system.
>
> The reason why I mention this is Linux's hand-crafted way of dealing
> with this. These are basically contention problems expressed in a different manner.
> The traditional Linux method is the correct way to deal with this in a general-purpose
> OS. This applies to application structure as well. The use of these
> mechanisms needs to be thought out before application.
To be honest, I don't understand a word of what you are saying here. Could you
give an example of a ``contention problem'' and how it should be solved?
> > > > It is often heard in the Linux audio community that mutexes are not realtime
> > > > safe and a lock-free ringbuffer should be used instead. Using such a lock-free
> > > > ringbuffer requires non-standard atomic integer operations and does not
> > > > guarantee memory synchronization (and should probably not perform
> > > > significantly better than a decent mutex implementation) and is thus not
> > > > portable.
> > >
> > > It's to decouple the system from various time-related problems with jitter.
> > > It's critical to use this since the nature of Linux is so temporally coarse
> > > that these techniques must be used to "smooth" over latency problems in the
> > > Linux kernel.
>
> > Either use mutexes or POSIX message queues... the latter also are not
> > intended for realtime use under Linux (though they are meant for it in
> > POSIX), since they don't preallocate memory on creation.
>
> The nature of these kinds of applications pushes them into a very demanding space
> where the typical methodologies surrounding the use of threads go out the window.
> Pushing both the IO and CPU resources of a kernel is the common case, and often you
> have to roll your own APIs and synchronization mechanisms to deal with these problems.
> The plain POSIX API and traditional mutexes are a bit too narrow in scope to solve these
> cross-system concurrency problems. It's not trivial stuff at all and can span
> from loosely to tightly coupled systems, yes, all for pro-audio/video.
>
> POSIX and friends in these cases simply aren't good enough to cut it.
I find this a little abstract. Sure, there might be areas where POSIX doesn't supply
all the needed tools, e.g. one might want some scheduling policy especially for
audio, but to say that POSIX isn't good enough without providing much
explanation...
--ms
From: "Bill Huey (hui)" <bhuey(a)lnxw.com>
> On Tue, Jul 13, 2004 at 01:09:28PM +0100, Martijn Sipkema wrote:
> > [...]
> > > Please double-check that there are no priority inversion problems and that
> > > the application is correctly setting the scheduling policy and that it is
> > > mlocking everything appropriately.
> >
> > I don't think it is currently possible to have cooperating threads with
> > different priorities without priority inversion when using a mutex to
> > serialize access to shared data; and using a mutex is in fact the only portable
> > way to do that...
> >
> > Thus, the fact that Linux does not support protocols to prevent priority
> > inversion (please correct me if I am wrong) kind of suggests that supporting
> > realtime applications is not considered very important.
>
> Any use of an explicit or implied blocking mutex across threads with differing
> priorities can result in priority inversion problems. The real problem, however,
> is contention. If you get rid of the contention in a certain critical section,
> you then also get rid of latency in the system. They are one and the same problem.
The worst case latency is the one that counts and that is the contended case. If
you could guarantee no contention then the worst case latency would be the
very fast uncontended case, but I doubt there are many (any?) examples of this in
practice. There are valid uses of mutexes with priority inheritance/ceiling protocol;
the people making the POSIX standard aren't stupid...
> > It is often heard in the Linux audio community that mutexes are not realtime
> > safe and a lock-free ringbuffer should be used instead. Using such a lock-free
> > ringbuffer requires non-standard atomic integer operations and does not
> > guarantee memory synchronization (and should probably not perform
> > significantly better than a decent mutex implementation) and is thus not
> > portable.
>
> It's to decouple the system from various time-related problems with jitter.
> It's critical to use this since the nature of Linux is so temporally coarse
> that these techniques must be used to "smooth" over latency problems in the
> Linux kernel.
Either use mutexes or POSIX message queues... the latter also are not
intended for realtime use under Linux (though they are meant for it in
POSIX), since they don't preallocate memory on creation.
> I personally would love to see these audio applications run on a first-class
> basis under Linux. Unfortunately, that won't happen until it gets near-realtime
> support pervasively throughout the kernel, just like in SGI's IRIX. Multimedia
> applications really need to run under a hard realtime system with special
> scheduler support so that CPU resources and IO channels can be throttled.
>
> The techniques Linux media folks are using now are basically a coarse hack
> to get things working. This won't change unless some fundamental
> concurrency issues (moving to a preemptive kernel with interrupt threads, etc.)
> change in Linux. Scattering preemption points manually over 2.6 is starting to
> look unmanageable from all of the stack traces I've been reading in these
> latency-related threads.
Improving the mutex and mqueue implementations to better support realtime
use would, I think, be a significant step toward making Linux quite suitable
for realtime audio work.
--ms
On Tue, 2004-07-13 at 15:12, Bill Huey wrote:
> On Tue, Jul 13, 2004 at 01:09:28PM +0100, Martijn Sipkema wrote:
> > [...]
> > > Please double-check that there are no priority inversion problems and that
> > > the application is correctly setting the scheduling policy and that it is
> > > mlocking everything appropriately.
> >
> > I don't think it is currently possible to have cooperating threads with
> > different priorities without priority inversion when using a mutex to
> > serialize access to shared data; and using a mutex is in fact the only portable
> > way to do that...
> >
> > Thus, the fact that Linux does not support protocols to prevent priority
> > inversion (please correct me if I am wrong) kind of suggests that supporting
> > realtime applications is not considered very important.
>
> Any use of an explicit or implied blocking mutex across threads with differing
> priorities can result in priority inversion problems. The real problem, however,
> is contention. If you get rid of the contention in a certain critical section,
> you then also get rid of latency in the system. They are one and the same problem.
>
> > It is often heard in the Linux audio community that mutexes are not realtime
> > safe and a lock-free ringbuffer should be used instead. Using such a lock-free
> > ringbuffer requires non-standard atomic integer operations and does not
> > guarantee memory synchronization (and should probably not perform
> > significantly better than a decent mutex implementation) and is thus not
> > portable.
>
> It's to decouple the system from various time-related problems with jitter.
> It's critical to use this since the nature of Linux is so temporally coarse
> that these techniques must be used to "smooth" over latency problems in the
> Linux kernel.
>
> I personally would love to see these audio applications run on a first-class
> basis under Linux. Unfortunately, that won't happen until it gets near-realtime
> support pervasively throughout the kernel, just like in SGI's IRIX. Multimedia
> applications really need to run under a hard realtime system with special
> scheduler support so that CPU resources and IO channels can be throttled.
>
I don't think invoking IRIX is going to get us a lot of sympathy on
LKML. It is widely reviled. BeOS is probably a better example.
Just my $0.02.
Lee
Some thoughts about low latency with dual-processor or HT-capable systems.
Isn't it possible to get low latency by locking everything (= all processes +
interrupts) except the audio/midi interrupt and the RT process(es) to
one CPU, and the latter two to the second, using the CPU and IRQ affinity
calls?
I thought that this way interrupts, bottom halves and the like are handled
per CPU and don't interfere (lock each other out) across CPUs, and processes
not making system calls should never be blocked.
If that is so, this would help at least the MP/HT systems, regardless of
the kernel used.
Or am I thinking wrong here?
Greetings Earthlings:
Just a quick note to say that the August 2004 issue of the Linux
Journal has selected Ardour as Project Of The Year for its 2004 Editors
Choice awards. Congratulations to Paul and everyone on the team !
Best regards,
dp
>From: Dr.Graef(a)t-online.de (Albert Graef)
>
>does anyone know a library (preferably C/C++, or anything that
>interfaces to it) which implements the Prony algorithm (a.k.a. least
>squares fitting of a sampled signal to a sum of damped sinusoids)?
I placed a couple of papers at
ftp://ftp.funet.fi/pub/sci/audio/devel/dsp/
Please make the source code available (GPL or public domain
or equivalent). Do you need more papers?
I need a detailed spectrogram in Audacity. Audacity currently seems to have
only FFT-based, linear-scale spectrograms. I will draw the pitch curves
manually over the display. The instruments to be analysed are flutes,
violins and guitars. Both time and frequency accuracy should be good
when tracing a single pitch curve.
I will check more papers on the topic, but I would really like to
see an overview of what techniques are available. Maybe a web search
will help.
Juhana
Hi,
I have a program with some (p)threads, and a jack ringbuffer. One
thread is writing to the ringbuffer, and another is reading from it.
The question is: is it (thread-)safe to have a _third_ thread that
looks at the ringbuffer via jack_ringbuffer_read_space() at random
times to determine how much data is in the ringbuffer? This third
thread does not actually read or write any data to/from the
ringbuffer; it just wants to know the amount of data in it.
So far I haven't seen any crashes/lockups. Is it really safe to do this,
or is it just dumb luck? :)
Thanks,
Tom