[linux-audio-dev] lock-free data structures

Joshua Haberman joshua at reverberate.org
Thu Jun 17 13:30:01 UTC 2004


Greetings, all.  I have not been subscribed to this list for several 
months now, but someone pointed me to Paul's message, which I quote 
below.  I realized that now is a good time to check back in, see 
what everyone is up to, and let you know what I'm up to.

A few days ago Paul posted:
 > the only sure way to do [live graph modification] is to use lock free
 > parallel programming techniques. this is still a highly experimental
 > area of computer science, and i have yet to see more than a glimmer of
 > a technique that would actually work in a complex real time system
 > like JACK.

Right now I am very interested in lock-free data structures, and I am 
working with Ross Bencina (of PortAudio and AudioMulch) to implement 
them in an open source library.  See [0] and [1].  It turns out 
several practical algorithms have been published in this field, most 
notably by Maged Michael of IBM Research.  Ross and I have been in 
contact with him for advice and clarification about his papers, which 
are available on his web site [2].  We have some working code, but 
since all of this is highly architecture-specific, we need to do more 
research.  A significant part of the library will probably end up being 
in assembly.
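
To give a flavor of the architecture-specific part: nearly all of 
these algorithms are built on an atomic compare-and-swap, which has 
no portable expression in C or C++.  Here is a minimal, hypothetical 
sketch for x86 using GCC inline assembly (cas32 is an illustrative 
name, not anything from our library):

static inline int
cas32(volatile int *addr, int expected, int desired)
{
    int prev;
    /* If *addr == expected, atomically store desired into *addr.
       cmpxchg leaves the old value of *addr in eax either way. */
    __asm__ __volatile__("lock; cmpxchgl %2, %1"
                         : "=a"(prev), "+m"(*addr)
                         : "r"(desired), "0"(expected)
                         : "memory", "cc");
    return prev == expected;   /* nonzero on success */
}

A lock-free structure is then essentially a loop: read the current 
state, compute the new state, and retry if the CAS fails because 
another thread got there first.  Every architecture spells the 
primitive differently, hence the assembly.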

For me, this is part of a bigger effort: rewriting Audacity's back-end, 
and making it a GUI-independent library.  We're tentatively calling 
this library "Mezzo" -- more about Mezzo's goals and basic API is at 
[3].  Mezzo's data representation scheme is just a refactoring of what 
Audacity uses now, but the real-time system will be completely 
redesigned to be similar to JACK's model: several nodes with input 
ports and output ports that implement callbacks.  The whole graph will 
be run from the audio API's callback.
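
To make that concrete, here is a rough, hypothetical sketch of the 
shape I have in mind (none of these names are Mezzo's actual API):

#include <cstddef>
#include <vector>

struct Port {
    float  *buffer;    // filled by the node that owns the output
    size_t  frames;
};

struct Node {
    std::vector<Port*> inputs;
    std::vector<Port*> outputs;
    // Runs once per audio callback; must never block or allocate.
    virtual void process(size_t frames) = 0;
    virtual ~Node() {}
};

// Called from the audio API's callback.  The nodes are sorted so
// that every node runs after the nodes feeding its inputs.
void run_graph(std::vector<Node*> &sorted, size_t frames)
{
    for (size_t i = 0; i < sorted.size(); ++i)
        sorted[i]->process(frames);
}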

Getting back to lock-free data structures, my intention is not to use 
locks anywhere in this real-time system.  Since I haven't written the 
whole thing yet, I'm not sure this is completely possible, but here 
is my general strategy:

1. The signal graph is constructed in the main thread.
2. Once audio starts, the audio thread runs the graph once for every 
callback.
3. If the main thread wants to change the graph while it's running, 
it does any heavy lifting (memory allocation, etc.) up front, then 
sends a message to the audio thread through a lock-free queue (e.g. 
"add this node and connect it to this other node"); see the queue 
sketch after this list.
4. Buffers are passed between the disk thread and the audio thread 
using lock-free queues.
5. In the real-time thread, buffers are allocated from and returned 
to a lock-free freelist.  For example, a node that generates a sine 
wave needs a buffer to put its data in when it runs.  Instead of 
calling malloc, it asks the freelist for a free buffer.
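
Since items 3 and 4 both hinge on the queue, here is a rough sketch 
of the single-producer/single-consumer ring buffer I have in mind. 
The names are illustrative, and I am writing the loads and stores as 
atomic operations for clarity; in the real code those become the 
per-architecture barrier/assembly bits mentioned above.

#include <atomic>
#include <cstddef>

template <typename T, size_t N>      // N must be a power of two
class SpscQueue {                    // holds at most N-1 items
public:
    SpscQueue() : head_(0), tail_(0) {}

    bool push(const T &item) {       // called by the producer only
        size_t head = head_.load(std::memory_order_relaxed);
        size_t next = (head + 1) & (N - 1);
        if (next == tail_.load(std::memory_order_acquire))
            return false;            // full; caller must retry later
        slots_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }

    bool pop(T &item) {              // called by the consumer only
        size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;            // empty
        item = slots_[tail];
        tail_.store((tail + 1) & (N - 1), std::memory_order_release);
        return true;
    }

private:
    T slots_[N];
    std::atomic<size_t> head_, tail_;
};

Because each index is written by exactly one thread and only ever 
advances, neither side can block the other.  The freelist in item 5 
can be the same structure pre-filled with empty buffers, but once 
more than two threads touch it you need one of Michael's 
multi-producer algorithms, which is where hazard pointers and the 
ABA problem come in.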

The real-time code is still in its early stages, but I have gotten 
it to play data from disk by sending it from a disk thread to an 
audio thread.  You can see the code in our SVN repository [4].  I'm 
also 
keeping a log of my activity and my thought process on my web site [5].

I am interested in any reactions people may have to this work.  I drew 
a lot of the inspiration for Mezzo's real-time system from JACK and 
other things I have read on this list.

Regards,
Josh Haberman

[0] http://www.audiomulch.com/~rossb/code/lockfree/
[1] http://www.audiomulch.com/~rossb/code/lockfree/spec.txt
[2] http://www.research.ibm.com/people/m/michael/pubs.htm
[3] http://www.audacityteam.org/wiki/index.pl?Mezzo
[4] http://mezzo.homelinux.net/svn/
[5] http://www.reverberate.org/log/



