I'm Wim Taymans and I'm working on a new project called PipeWire, which you
might have heard about. I have given some general presentations about it
during its various stages of development, some of which are online.
PipeWire started as a way to share arbitrary multimedia, which imposes vastly
different requirements regarding format support, device and memory management
than JACK does. It wasn't until I started experimenting with audio processing
that the design started to gravitate towards JACK. And then some of JACK's
features became a requirement for PipeWire.
The end goal of PipeWire is to interconnect applications and devices through
a shared graph in a secure and efficient way. Some of the first applications
will be Wayland screen sharing and camera sharing with access control for
sandboxed applications. It would be great if we could also use this to connect
audio apps and devices, possibly unifying the PulseAudio/JACK audio stack.
Because the general design is now, I think, very similar to JACK, many
people have been asking me if I'm collaborating with the Linux pro-audio
community on this in any way at all. I have not, but I really want to change
that. In this mail I hope to start a conversation about what I'm doing, and I
hope to get some help and experience from the broader professional audio
developer community on how we can make this into something useful for
everyone.
I've been looking hard at all the things that are out there, including
Wayland, JACK, LV2, CRAS, GStreamer, MFT, OMX, ... and have been trying to
combine the best ideas of these projects into PipeWire. A new plugin API was
designed for hard realtime processing of any media type. PipeWire is LGPL
licensed and depends only on a standard C library. It's currently targeting
Linux.
At the core of the PipeWire design is a graph of processing nodes with arbitrary
input/output ports. Before processing begins, ports need to be configured with a
format and a set of buffers for the data. Buffer data and metadata generally
lives in memfd shared memory but can also be dmabuf or anything that can be
passed as an fd between processes. There is a lot of flexibility in doing this
setup, reusing much of the GStreamer experience there is. This all happens on
the main thread, infrequently, and is not very important for the actual
execution of the graph.
In the realtime thread (PipeWire currently has 1 main thread and 1 realtime data
thread), events from various sources can start push/pull operations in the
graph. For the purpose of this mail, the audio sink uses a timerfd to wake up
when the alsa buffer fill level is below a threshold. This causes the sink to
fetch a buffer from its input port queue and copy it to the alsa ringbuffer. It
then issues a pull to fetch more data from all linked peer nodes for which there
is nothing queued. These peers will then eventually push another buffer in the
sink queue to be picked up in the next pull cycle of the sink. This is somewhat
similar to the JACK async scheduling model. In the generic case, PipeWire has to
walk upstream in the graph until it finds a node that can produce something (see
below how this can be optimized).
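The sink's pull cycle described above can be sketched as follows. This is a
simulation of the scheduling logic only: the real sink is woken by a timerfd
and checks the alsa fill level, and every name here is illustrative.

```python
# Toy model of the sink pull cycle: consume a queued buffer, then pull
# from linked peers for which nothing is queued. Illustrative names only.
from collections import deque

class Node:
    """A node with a queue of buffers on its input port and linked peers."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()
        self.peers = []

    def process_output(self):
        # a real node would render audio; here we just label the buffer
        return f"buffer-from-{self.name}"

def sink_cycle(sink, fill_level, threshold):
    """One timer wakeup of the sink."""
    if fill_level >= threshold:
        return None                    # device has enough queued; just rearm
    consumed = sink.queue.popleft() if sink.queue else None
    for peer in sink.peers:
        if not sink.queue:             # pull only where nothing is queued
            sink.queue.append(peer.process_output())
    return consumed
```

Running two cycles shows the async flavour: the first wakeup finds nothing
queued and issues the pull; the buffer it produced is consumed on the next
wakeup.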
Scheduling of nodes is, contrary to JACK's (and LADSPA's and LV2's) single
'process' method, done with 2 methods: process_input and process_output. This
is done to
support more complex plugins that need to decouple input from output and to also
support a pull model for plugins. For internal clients, we directly call the
methods; for external clients we use an eventfd and a shared ringbuffer to send
the right process command to the client.
When the external client has finished processing or needs to pull, it signals
PipeWire, which then wakes up the next clients if needed. This is different from
JACK, where a client directly wakes up the peers to avoid a server context
switch. JACK can do this because the graph and all client semaphores are shared.
PipeWire can't in general, for a couple of reasons: 1) you would need to bring
mixing of arbitrary formats to the clients, and 2) sandboxed clients should not
be trusted with this information and responsibility. In some cases it would
probably be possible
to improve that in the future (see below).
This kind of scheduling works well for generic desktop style audio and video.
Apps can send buffers of the size of their liking. Bigger buffers mean higher
latency but less frequent wakeups. The sink wakeup frequency is determined by
the smallest buffer size that needs to be mixed. There is an upper limit for the
largest amount of data that is mixed in one go to avoid having to do rewinds in
alsa and still have reasonable latency when doing volume changes or adding new
streams.
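As a worked example of that trade-off (numbers are illustrative): a client
submitting n frames per buffer at sample rate r adds n / r seconds of
buffering and forces a sink wakeup at most every n / r seconds.

```python
def buffer_interval_ms(frames, rate):
    """Time one buffer covers: bounds both added latency and wakeup period."""
    return 1000.0 * frames / rate

# a 256-frame buffer at 48 kHz covers ~5.33 ms ...
assert abs(buffer_interval_ms(256, 48000) - 5.333) < 0.01
# ... while a 4096-frame buffer covers ~85.3 ms: more latency, fewer wakeups
assert abs(buffer_interval_ms(4096, 48000) - 85.333) < 0.01
```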
The idea is to make a separate part of the graph dedicated to pro-audio. This
part of the graph runs with mono 32bit float sample buffers of a fixed size and
samplerate. The nodes running in this part of the graph also need to have a
fixed input-output pattern. In this part of the graph, negotiating the format
becomes trivial. We can preallocate a fixed size buffer for each port that is
used to send/mix data between nodes. Exactly like how JACK works. In this
scenario it would be possible to bring some of the graph state to trusted
clients so that they can wake up their peers directly.
As it turns out, the generic scheduling mechanism simplifies to the JACK way
of scheduling, with the option to do some optimisations (we can directly start
a push from the sources, bundle process_input/process_output calls, mixing on
ports is simplified by equal buffer sizes, ...).
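With equal-sized mono float buffers, the port mixing mentioned above reduces
to an element-wise sum over the inputs. This is a sketch of that idea, not
PipeWire's actual mixer.

```python
def mix_ports(buffers, n_frames):
    """Mix equally sized mono float buffers by plain summation."""
    out = [0.0] * n_frames
    for buf in buffers:
        for i in range(n_frames):
            out[i] += buf[i]
    return out

# two inputs linked to one port, fixed quantum of 4 frames
a = [0.5, 0.5, 0.0, -0.25]
b = [0.25, -0.5, 0.0, 0.25]
assert mix_ports([a, b], 4) == [0.75, 0.0, 0.0, 0.0]
```

In the generic part of the graph this step would instead need format
conversion per input, which is exactly what the fixed-format subgraph avoids.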
There is a lot more stuff that I can talk about and a lot of things that need
to be fleshed out like latency calculations, an equivalent of JACK transport,
session management, ... But this mail is already getting long :)
I would very much like to hear your ideas, comments, flames, thoughts on this
idea. I think I'm at a stage where I can present this to a bigger audience and
have enough experience with the matter to have meaningful discussions.
PipeWire is currently still in heavy development, many things can and do
still change. I'm currently writing a replacement libjack.so that runs jack
clients directly on PipeWire (mixing and complicated scheduling doesn't work
yet).
Hope to hear your comments,
[Apologies for cross posting, please circulate widely.]
*New submission deadline: March 26, 2018*
1st International Faust Conference - Johannes Gutenberg University, Mainz
(Germany), July 17-18, 2018
The International Faust Conference (IFC-18: http://www.ifc18.uni-mainz.de)
will take place at the Johannes Gutenberg University
<http://www.uni-mainz.de/> of Mainz (Germany) on July 17-18, 2018. It aims
at gathering developers and users of the Faust programming language
<http://faust.grame.fr/> to present current projects and discuss future
directions for Faust and its community.
Participants will be able to share their work through paper presentations.
A series of round tables on various topics will serve as a platform to
brainstorm on Faust's features, semantics, tools, applications, etc. to
determine future directions for this language. Open spaces for demos and
workshops will be available for participants to openly share their ongoing
projects with the rest of the community.
As a special event, the winner of GRAME's Faust Open-Source Software
Competition will be announced during IFC-18.
IFC-18 is free and everyone is welcome to attend!
*Call for Papers*
We welcome submissions from academics, professionals, independent
programmers, artists, etc. We solicit original papers centered around the Faust
programming language <http://faust.grame.fr/> in the following categories:
- Original research
- Technology tutorial
- Artistic project report (e.g., installation, composition, etc.)
Papers should be up to 14 pages in length, non-anonymous, and formatted
according to this template. Submissions should be carried out via our EasyChair
portal.
All submissions are subject to peer review. Acceptance may be conditional
upon changes being made to the paper as directed by reviewers.
Accepted papers will be published on-line as well as in the paper version of
the IFC-18 proceedings. They will be presented by their author(s) at IFC-18 as
15-minute presentations (+ 5 minutes for questions).
Feel free to contact us if you have any questions.
- Papers submission deadline: March 26, 2018 (extended from March 2, 2018)
- Notification of Acceptance: May 5, 2018 (was May 1, 2018)
- Camera-Ready Version: June 1, 2018
*Call for Round Table Topics*
A series of round tables on the following themes will take place both
afternoons of IFC-18:
- Faust Tools (e.g., Architectures, IDE, Faust Code Generator, On-Line
Services, etc.)
- DSP in Faust and Faust Libraries (e.g., New Algorithms, New Libraries,
Missing Functions, etc.)
- Faust Compiler and Semantics
- Other Topics/Open Session
We solicit topic suggestions from the Faust community for each of these
themes. Topics can be submitted by means of this Google form
<https://goo.gl/forms/0fBYxk28jlRdtqRM2>. They will be introduced during
the round tables by the session chair.
Please, address your questions to: ifc18(a)muwiinfa.geschichte.uni-mainz.de
Conference website: http://www.ifc18.uni-mainz.de
Dr. Albert Gräf
Computer Music Research Group, JGU Mainz, Germany
We just enabled all mail services for linuxaudio.org again. All mailing
lists are working again and mail can be sent and received for the
linuxaudio.org domains.
A short recap of what happened is that linuxaudio.org got compromised on
January 29th, probably with a compromised private SSH key or password
from an account with shell access. The attacker checked the kernel, saw
that it was vulnerable to Dirty COW¹, pulled in an exploit and got root.
This was quickly discovered by the IT department of Virginia Tech,
which disconnected the server from the internet and started a
forensic investigation procedure. As part of their IT security policy
the server had to be reinstalled and everything had to be set up from
scratch again. In the meanwhile I built an alternative setup and after
some discussion we agreed on moving linuxaudio.org away from the
Virginia Tech server.
So linuxaudio.org got a new home after 15 years at Virginia Tech². We're
very, very thankful that we could host linuxaudio.org on their servers
and we can't stress enough how grateful we are for all the work that has
been done on the side of Virginia Tech after the hack.
linuxaudio.org now lives at Fuga³, a fully open source OpenStack⁴ cloud
based in The Netherlands. Fuga is part of Cyso⁵, the company I work for.
The linuxaudio.org ecosystem now consists of three separate servers, a
web server, a mail server and a storage server. We rebuilt everything
with portability and scalability in mind with a strong focus on
security. You can never prevent passwords or SSH keys getting into the
hands of hackers but we'll try to keep the servers as up to date as we
can to narrow down the attack surface as much as possible.
A big thank you to all those who helped out! It was quite a ride but it
seems as if most parts of the linuxaudio.org ecosystem are accessible
again. If you find any web pages, downloads or other bits and parts that
don't work properly then please let us know so we can take a look at it.
Many thanks in advance and also many thanks for bearing with us!
[Apologies for cross-postings] [Please distribute]
LAC 2018: 2nd Call for Papers / Works
Conference date: 7th - 10th June 2018
The Linux Audio Conference 2018 will be hosted at c-base, Berlin -
in partnership with the Electronic Music Studio (TU Berlin) and
The deadline for all submissions has been extended:
March 15th, 2018 (23:59 UTC)
Submissions may include:
- Music Performances
- Multimedia Installations
For more details see the CFP on the website:
Looking forward to seeing you in Berlin!
The Linux Audio Conference 2018 team
Not really sure the subgraph is so good -- one of the things JACK gives us
is the extremely solid knowledge of what it just did, is doing now, and
will do next period. If I run Pulse with JACK, it's JACK controlling the
hardware and Pulse feeding into it, not the other way around, because Pulse
is not tightly synchronized, whereas JACK is. But if you can make it work
as well, more power to you.
Concerning seeking and timing, though, I have had to wonder. My impression
of JACK for a long time (and more learned ladies and gentlemen, please
correct) is that it uses a basically round-robin approach to its clients,
with variation. I have had to wonder, especially given my need for this
<https://github.com/ponderworthy/MultiJACK>, how practical a model might be
possible, using preemptive multitasking or even Ethernet-style collision
avoidance through entropic data, at current CPU speeds. It's chopped into
frames, right? Couldn't audio and MIDI data be mapped into networking
frames and then thrown around using the kernel networking stack? The
timestamps are there...the connectivity is there...have to do interesting
translations... :-) Could be done at the IP level or even lower I would
think. The lower you go, the more power you get, because you're closer to
the kernel at every step.
Jonathan E. Brickman jeb(a)ponderworthy.com
Hear us at http://ponderworthy.com -- CDs and MP3s now available!
<http://ponderworthy.com/ad-astra/ad-astra.html>
Music of compassion; fire, and life!!!
Call for Applications:
Workshop-in-Exposition: Thresholds of the Algorithmic
Bergen (NO), June 2018.
**Reminder and extended deadline: 23 February 2018**
(sorry for x-posting -- please distribute)
Algorithms have been used in music and sound art even before the
emergence of “computer music” in the 1950s, but today we witness an
entirely new wave of interest, reflected in festivals, genres,
publications and research projects. It is the very notion of algorithms
that is shifting. They are no longer an abstract formalisation, but
emerge from artistic praxis and experimentation and become entangled in it.
Almat and BEK are happy to announce a call for participation in a
workshop-in-exposition taking place in Bergen, Norway, June 2018. This
will be a part of BEK and Notam’s ongoing series of workshops for
advanced users. It is a hybrid format that places the workshop inside an
exhibition context, where the exposed works and artefacts form the basis
of the workshop’s activity. Instead of “closed works”, what is exposed
to the general public are objects, sounds or installations that are open
to engagement and reconfiguration during the workshop.
Algorithms that Matter (Almat) is an artistic research project funded by
the Austrian Science Fund FWF, PEEK AR 403-GBL, and based at the
Institute of Electronic Music and Acoustics (IEM) in Graz, Austria.
BEK and Notam are centers for innovation and use of technology in music
and the arts in Norway. Both Notam and BEK have a strong focus on
education, and strive to establish new goals and provide new impulses
for current music technologists and artists.
- Full text of the call:
- Application form:
## Theme and Format
Thresholds are locations of transitions, points where one modality
becomes another, where a qualitative change occurs. In physics the point
where an aggregate state changes—the phase transition—is a distinguished
transitional location where the properties of the adjacent states become
evident. Similarly, in this workshop-in-exposition we want to study the
properties of the algorithmic by putting ourselves in threshold
positions and actively shaping them. More than merely separating two
sides, one can spend time on a threshold, move along a ridge, performing
a tightrope walk while trying not to fall to either side.
Situated within the Almat artistic research project, this event aims at
bringing together practitioners and researchers in the field of digital
art, sound art and computational aesthetics. The hybrid format of
workshop-in-exposition puts on display works of the participants
pertaining to the theme, and at the same time avails them for
interrogation, discussion and reconfiguration during the week-long workshop.
The full call embeds a list of three different ‘thresholds’; applicants
should point out the specific one that they recognise as being addressed by
their own artistic work. This will act both as a
point for further exploration during the workshop and as a bridge
towards audience perception.
Please read the call carefully and fill out the form provided at
https://almat.iem.at/call2018.html and send it to almat(a)iem.at along
with the required accompanying documents.
We aim at a balance of gender and background of the applicants.
- Duration of exhibition: from 08 June to 17 June 2018
- Start date (in situ): 04 June 2018
(preparation and set up from 04 June to 08 June 2018)
- End date: 17 June 2018
- Applicants must be present during the workshop.
- Workshop fee must be paid by confirmed participants (see form)
**Application deadline: 23 February 2018** (e-mail reception, 24:00 CET)
If you have further questions, please do not hesitate to contact us at
iem.at | kug.ac.at | bek.no | notam02.no | fwf.ac.at