[LAD] PipeWire

Jonathan Brickman jeb at ponderworthy.com
Mon Feb 19 23:19:58 CET 2018


Greetings, Wim.  Amazing project you have there.  I hope you succeed.  Len
has covered lots of excellent thoughts.  Here are a few more, clearly
intersecting.

First of all, it's a great idea.  I'd love to see one layer which could do
all of JACK and Pulse.  But the pitfalls are many :-)  It's worth
remembering that the ALSA people tried a lot of this; the code bits and
configuration settings are still there waiting to be used, but Pulse and
JACK now do the same things, and more, far more reliably.

Second, the newer JACK+Pulse setup with Cadence controlling it is amazing,
a joy and a simplicity.  Kudos extremus (sorry, I am linguistically
challenged).  It does cost a bit of JACK DSP (5% on the big BNR hard server
when I tried it), but it works very reliably.

And third, I could certainly imagine one layer with three different kinds
of ports:  MIDI (using the JACK MIDI API), Pro Audio (using the JACK audio
API), and Desktop Audio (using the Pulse API).  All Desktop Audio ports
would behave like Pulse ports, be controlled through the Pulse control
APIs, and by default have their data mixed into a default Desktop Audio
hardware output.  At the control level (using JACK for everything), Pulse
ports would look like JACK ports and could be rerouted, but the underlying
layer would treat them differently, decoupling them from the rigid
round-robin of JACK.  This does not make for a simple system, because there
have to be both kinds of ports for the hardware audio, and I'm sure there
are many more complications which others will think of, and which will
emerge as soon as users start trying it!  A rough sketch of the port idea
follows.
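
To make that concrete, here is a minimal sketch in C of how such a layer
might tag its ports.  Everything here is invented for illustration; it is
not an existing PipeWire, JACK, or Pulse API.

    /* Hypothetical sketch only: names and types are invented. */
    enum port_kind {
        PORT_MIDI,          /* exposed through the JACK MIDI API  */
        PORT_PRO_AUDIO,     /* exposed through the JACK audio API */
        PORT_DESKTOP_AUDIO  /* exposed through the Pulse API, mixed to the
                               default hardware output unless rerouted */
    };

    struct unified_port {
        enum port_kind kind;
        const char    *name;
        int            follows_jack_cycle;  /* Pro Audio: strict round-robin;
                                               Desktop Audio: decoupled */
    };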

J.E.B.

On Mon, Feb 19, 2018 at 2:39 AM, Wim Taymans <wim.taymans at gmail.com> wrote:

> Hi everyone,
>
> I'm Wim Taymans and I'm working on a new project called PipeWire that you
> might have heard about [1]. I have given some general presentations about
> it during its various stages of development, some of which are online [2].
>
> PipeWire started as a way to share arbitrary multimedia, which has vastly
> different requirements for format support, device and memory management
> than JACK. It wasn't until I started experimenting with audio processing
> that the design started to gravitate towards JACK. And then some of JACK's
> features became a requirement for PipeWire.
>
> The end goal of PipeWire is to interconnect applications and devices
> through a shared graph in a secure and efficient way. Some of the first
> applications will be Wayland screen sharing and camera sharing with access
> control for sandboxed applications. It would be great if we could also use
> this to connect audio apps and devices, possibly unifying the
> pulseaudio/JACK audio stack.
>
> Because the general design is now, I think, very similar to JACK's, many
> people have been asking me if I'm collaborating with the Linux pro-audio
> community on this in any way at all. I have not, but I really want to
> change that. In this mail I hope to start a conversation about what I'm
> doing, and I hope to get some help and experience from the broader
> professional audio developer community on how we can make this into
> something useful for everybody.
>
> I've been looking hard at all the things that are out there, including
> Wayland, JACK, LV2, CRAS, GStreamer, MFT, OMX, ... and have been trying to
> combine the best ideas of these projects into PipeWire. A new plugin API
> was designed for hard realtime processing of any media type. PipeWire is
> LGPL licensed and depends only on a standard C library. It's currently
> targeting Linux.
>
> At the core of the PipeWire design is a graph of processing nodes with
> arbitrary input/output ports. Before processing begins, ports need to be
> configured with a format and a set of buffers for the data. Buffer data
> and metadata generally live in memfd shared memory, but can also be dmabuf
> or anything else that can be passed as an fd between processes. There is a
> lot of flexibility in doing this setup, reusing much of the experience
> from GStreamer. This all happens on the main thread, infrequently, and is
> not critical to the actual execution of the graph.
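
A minimal sketch of the memfd-based buffer sharing described above, using
the standard Linux memfd_create() and mmap() calls (glibc 2.27+, kernel
3.17+); the buffer name and helper are illustrative, not PipeWire's actual
code:

    /* Allocate a buffer in memfd-backed shared memory.  The returned fd can
     * be passed to another process over a unix socket (SCM_RIGHTS) and
     * mmap()ed there as well.  Error handling is trimmed for brevity. */
    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>

    static void *alloc_shared_buffer(size_t size, int *fd_out)
    {
        int fd = memfd_create("buffer", MFD_CLOEXEC);
        if (fd < 0)
            return NULL;
        if (ftruncate(fd, size) < 0) {
            close(fd);
            return NULL;
        }
        void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) {
            close(fd);
            return NULL;
        }
        *fd_out = fd;   /* send this fd to the peer process */
        return mem;
    }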
>
> In the realtime thread (PipeWire currently has 1 main thread and 1
> realtime data thread), events from various sources can start push/pull
> operations in the graph. For the purpose of this mail, the audio sink uses
> a timerfd to wake up when the ALSA buffer fill level is below a threshold.
> This causes the sink to fetch a buffer from its input port queue and copy
> it to the ALSA ringbuffer. It then issues a pull to fetch more data from
> all linked peer nodes for which there is nothing queued. These peers will
> then eventually push another buffer into the sink queue, to be picked up
> in the next pull cycle of the sink. This is somewhat similar to the JACK
> async scheduling model. In the generic case, PipeWire has to walk upstream
> in the graph until it finds a node that can produce something (see below
> how this can be optimized).
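
A minimal sketch of that timerfd-driven sink wakeup, using the standard
Linux timerfd calls; the queue handling and ALSA details are simplified
away, and a fixed period stands in for the fill-level-derived timeout:

    #include <sys/timerfd.h>
    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    /* Wake up every period_us microseconds (must be < 1 second in this
     * simple form), top up the ALSA ringbuffer, then pull from idle peers. */
    static void sink_loop(long period_us)
    {
        int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
        struct itimerspec its = {
            .it_value    = { .tv_nsec = period_us * 1000 },
            .it_interval = { .tv_nsec = period_us * 1000 },
        };
        timerfd_settime(tfd, 0, &its, NULL);

        for (;;) {
            uint64_t expirations;
            read(tfd, &expirations, sizeof expirations); /* block until fired */
            /* 1. pop a buffer from the input port queue, copy it to ALSA;  */
            /* 2. issue a pull to linked peers that have nothing queued so  */
            /*    they push a new buffer before the next cycle.             */
        }
    }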
>
> Scheduling of nodes is done, contrary to JACK's (and LADSPA's and LV2's)
> single 'process' method, with 2 methods: process_input and process_output.
> This is done to support more complex plugins that need to decouple input
> from output, and also to support a pull model for plugins. For internal
> clients, we directly call the methods; for external clients we use an
> eventfd and a shared ringbuffer to send the right process command to the
> client.
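
For the external-client path, the eventfd plus shared ringbuffer could look
roughly like the sketch below.  The eventfd calls are the standard Linux
API; the command codes and the ring_push()/ring_pop() helpers are assumed
for illustration and are not a real PipeWire interface:

    #include <sys/eventfd.h>
    #include <stdint.h>
    #include <unistd.h>

    enum process_cmd { CMD_PROCESS_INPUT = 1, CMD_PROCESS_OUTPUT = 2 };

    struct ring;                              /* opaque shared ringbuffer (assumed) */
    void ring_push(struct ring *r, int cmd);  /* assumed helpers */
    int  ring_pop(struct ring *r, int *cmd);
    void node_process_input(void);
    void node_process_output(void);

    /* Server side: queue a command, then kick the client's eventfd. */
    void server_send_cmd(int client_efd, struct ring *r, enum process_cmd cmd)
    {
        uint64_t one = 1;
        ring_push(r, cmd);
        write(client_efd, &one, sizeof one);
    }

    /* Client side: block on the eventfd, then drain and execute commands. */
    void client_wait_and_process(int efd, struct ring *r)
    {
        uint64_t count;
        int cmd;
        read(efd, &count, sizeof count);
        while (ring_pop(r, &cmd)) {
            if (cmd == CMD_PROCESS_INPUT)
                node_process_input();
            else
                node_process_output();
        }
    }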
>
> When the external client has finished processing or needs to pull, it
> signals PipeWire, which then wakes up the next clients if needed. This is
> different from JACK, where a client directly wakes up its peers to avoid a
> server context switch. JACK can do this because the graph and all client
> semaphores are shared. PipeWire can't, in general, for a couple of
> reasons: 1) you would need to bring mixing of arbitrary formats to the
> clients, and 2) sandboxed clients should not be trusted with this
> information and responsibility. In some cases it would probably be
> possible to improve this in the future (see below).
>
> This kind of scheduling works well for generic desktop-style audio and
> video. Apps can send buffers of whatever size they like. Bigger buffers
> mean higher latency but less frequent wakeups. The sink wakeup frequency
> is determined by the smallest buffer size that needs to be mixed. There is
> an upper limit on the amount of data that is mixed in one go, to avoid
> having to do rewinds in ALSA while still keeping reasonable latency when
> doing volume changes, adding new streams, etc.
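
The mixing step itself is conceptually a capped sum into the sink's buffer;
a minimal sketch (sizes in samples, names illustrative):

    /* Add one stream's samples into the mix buffer, clamping the amount
     * mixed per cycle so ALSA rewinds are never needed. */
    static void mix_into(float *mix, const float *src, int src_samples,
                         int max_per_cycle)
    {
        int n = src_samples < max_per_cycle ? src_samples : max_per_cycle;
        for (int i = 0; i < n; i++)
            mix[i] += src[i];
    }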
>
> The idea is to make a separate part of the graph dedicated to pro-audio.
> This part of the graph runs with mono 32-bit float sample buffers of a
> fixed size and sample rate. The nodes running in this part of the graph
> also need to have a fixed input-output pattern. In this part of the graph,
> negotiating the format becomes trivial. We can preallocate a fixed-size
> buffer for each port that is used to send/mix data between nodes, exactly
> like JACK works. In this scenario it would be possible to bring some of
> the graph state to trusted clients so that they can wake up their peers
> directly.
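
In that pro-audio part of the graph, a node's cycle reduces to the familiar
JACK pattern.  A minimal sketch, with the quantum value and the node/port
structures invented for illustration:

    #define QUANTUM 256   /* samples per cycle, illustrative value */

    struct pro_port { float buffer[QUANTUM]; };  /* preallocated, mono float */

    struct pro_node {
        struct pro_port *in;
        struct pro_port *out;
    };

    /* Called once per cycle: no format negotiation, no allocation. */
    static void pro_node_process(struct pro_node *node)
    {
        for (int i = 0; i < QUANTUM; i++)
            node->out->buffer[i] = node->in->buffer[i];  /* pass-through */
    }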
>
> As it turns out, the generic scheduling mechanism then simplifies to the
> JACK way of scheduling, with the option to do some optimisations (we can
> start the push directly from the sources, bundle process_input/output
> calls, mixing on ports is simplified by equal buffer sizes, ...).
>
> There is a lot more stuff that I can talk about and a lot of things that
> need
> to be fleshed out like latency calculations, an equivalent of JACK
> transport,
> session management, ... But this mail is already getting long :)
>
> I would very much like to hear your ideas, comments, flames, thoughts on
> this
> idea. I think I'm at a stage where I can present this to a bigger audience
> and
> have enough experience with the matter to have meaningful discussions.
>
> PipeWire is currently still in heavy development; many things can and do
> still change. I'm currently writing a replacement libjack.so [3] that runs
> JACK clients directly on PipeWire (mixing and complicated scheduling don't
> work yet).
>
> Hope to hear your comments,
> Wim Taymans
>
>
> [1] pipewire.org
> [2] https://www.youtube.com/watch?v=6Xgx7cRoS0M
> [3] https://github.com/PipeWire/pipewire-jack
> _______________________________________________
> Linux-audio-dev mailing list
> Linux-audio-dev at lists.linuxaudio.org
> https://lists.linuxaudio.org/listinfo/linux-audio-dev
>



-- 
Jonathan E. Brickman   jeb at ponderworthy.com   (785)233-9977
Hear us at http://ponderworthy.com -- CDs and MP3s now available!
<http://ponderworthy.com/ad-astra/ad-astra.html>
Music of compassion; fire, and life!!!

