Hello list,
I recently tried out petri-foo and I like it enough to care about it in
the form of bug reports.
I don't know how to contact the developers though.
The website http://petri-foo.sourceforge.net/ says the last release was in 2012,
and the GitHub repository has been switched to read-only.
Did I miss the active development place somehow? Fork of a fork?
-hgn
Hi everyone,
I'm Wim Taymans and I'm working on a new project called PipeWire you might
have heard about [1]. I have given some general presentations about it during
its various stages of development, some of which are online [2].
PipeWire started as a way to share arbitrary multimedia, which brings vastly
different requirements for format support, device handling and memory management
than JACK. It wasn't until I started experimenting with audio processing that
the design started to gravitate towards JACK. And then some of JACK's features
became a requirement for PipeWire.
The end goal of PipeWire is to interconnect applications and devices through
a shared graph in a secure and efficient way. Some of the first applications
will be wayland screen sharing and camera sharing with access control for
sandboxed applications. It would be great if we could also use this to connect
audio apps and devices, possibly unifying the pulseaudio/JACK audio stack.
Because the general design is now, I think, very similar to JACK's, many
people have been asking me if I'm collaborating with the Linux pro-audio
community on this in any way at all. I have not, but I really want to change
that. In this mail I hope to start a conversation about what I'm doing and I
hope to get some help and experience from the broader professional audio
developers community on how we can make this into something useful for
everybody.
I've been looking hard at all the things that are out there, including
Wayland, JACK, LV2, CRAS, GStreamer, MFT, OMX, ... and have been trying to
combine the best ideas of these projects into PipeWire. A new plugin API was
designed for hard realtime processing of any media type. PipeWire is LGPL
licensed and depends only on a standard C library. It's currently targeting
Linux.
At the core of the PipeWire design is a graph of processing nodes with arbitrary
input/output ports. Before processing begins, ports need to be configured with a
format and a set of buffers for the data. Buffer data and metadata generally
lives in memfd shared memory but can also be dmabuf or anything that can be
passed as an fd between processes. There is a lot of flexibility in doing this
setup, reusing much of the GStreamer experience there is. This all happens on
the main thread, infrequently, and is not critical for the actual execution of
the graph.
In the realtime thread (PipeWire currently has 1 main thread and 1 realtime data
thread), events from various sources can start push/pull operations in the
graph. For the purpose of this mail, the audio sink uses a timerfd to wake up
when the alsa buffer fill level is below a threshold. This causes the sink to
fetch a buffer from its input port queue and copy it to the alsa ringbuffer. It
then issues a pull to fetch more data from all linked peer nodes for which there
is nothing queued. These peers will then eventually push another buffer in the
sink queue to be picked up in the next pull cycle of the sink. This is somewhat
similar to the JACK async scheduling model. In the generic case, PipeWire has to
walk upstream in the graph until it finds a node that can produce something (see
below how this can be optimized).
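The pull cycle described above can be sketched as a recursive walk upstream. This is a toy model, not PipeWire's actual data structures: a node returns a queued buffer if one was already pushed, otherwise it pulls from its peers until a source can produce something:

```python
# Toy model of the generic pull scheduling: walk upstream from the sink
# until a node can produce data (names and structure are illustrative).
class Node:
    def __init__(self, name, peers=()):
        self.name = name
        self.peers = list(peers)   # upstream links
        self.queue = []            # buffers already pushed to the input port

    def pull(self):
        """Return one buffer, recursing upstream when nothing is queued."""
        if self.queue:                              # already pushed: use it
            return self.queue.pop(0)
        inputs = [p.pull() for p in self.peers]     # walk upstream
        return f"{self.name}({','.join(inputs)})"   # 'process' the inputs

# source -> effect -> sink: the sink's timer wakeup triggers the pull
source = Node("src")
effect = Node("fx", peers=[source])
sink = Node("sink", peers=[effect])
print(sink.pull())   # nothing queued anywhere, so the walk reaches src
```

In the real graph a peer can also push a buffer asynchronously, in which case the sink finds it queued on the next cycle and the walk stops early.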
Scheduling of nodes is, contrary to JACK's (and LADSPA's and LV2's) single
'process' method, done with 2 methods: process_input and process_output. This is
done to
support more complex plugins that need to decouple input from output and to also
support a pull model for plugins. For internal clients, we directly call the
methods, for external clients we use an eventfd and a shared ringbuffer to send
the right process command to the client.
When the external client has finished processing or needs to pull, it signals
PipeWire, which then wakes up the next clients if needed. This is different from
JACK, where a client directly wakes up the peers to avoid a server context
switch. JACK can do this because the graph and all client semaphores are shared.
PipeWire can't in general for a couple of reasons: 1) you would need to bring
mixing of arbitrary formats to the clients 2) sandboxed clients should not be
trusted with this information and responsibility. In some cases it would
probably be possible
to improve that in the future (see below).
This kind of scheduling works well for generic desktop style audio and video.
Apps can send buffers of the size of their liking. Bigger buffers mean higher
latency but less frequent wakeups. The sink wakeup frequency is determined by
the smallest buffer size that needs to be mixed. There is an upper limit for the
largest amount of data that is mixed in one go to avoid having to do rewinds in
alsa and still have reasonable latency when doing volume changes or adding new
streams etc.
The idea is to make a separate part of the graph dedicated to pro-audio. This
part of the graph runs with mono 32bit float sample buffers of a fixed size and
samplerate. The nodes running in this part of the graph also need to have a
fixed input-output pattern. In this part of the graph, negotiating the format
becomes trivial. We can preallocate a fixed size buffer for each port that is
used to send/mix data between nodes. Exactly like how JACK works. In this
scenario it would be possible to bring some of the graph state to trusted
clients so that they can wake up their peers directly.
As it turns out, the generic scheduling mechanism simplifies to the JACK way of
scheduling and opens up some optimisations (we can directly start a push from
the sources, bundle process_input/process_output calls, mixing on ports is
simplified by equal buffer sizes, ...)
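With a fixed quantum and a single format, the mixing mentioned above reduces to an elementwise add of equally sized float buffers, as in JACK. A trivial sketch (the fixed size of 4 frames is only for illustration):

```python
# Sketch: in the pro-audio part of the graph every port carries mono
# float32 buffers of one fixed size, so mixing two upstream outputs into
# one input port is just an elementwise add.
NFRAMES = 4   # fixed quantum shared by the whole pro-audio subgraph

def mix(a, b):
    assert len(a) == len(b) == NFRAMES   # equal sizes by construction
    return [x + y for x, y in zip(a, b)]

print(mix([0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]))
```

No format negotiation and no per-cycle allocation is needed, which is what makes the preallocated per-port buffers possible.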
There is a lot more stuff that I can talk about and a lot of things that need
to be fleshed out like latency calculations, an equivalent of JACK transport,
session management, ... But this mail is already getting long :)
I would very much like to hear your ideas, comments, flames, thoughts on this
idea. I think I'm at a stage where I can present this to a bigger audience and
have enough experience with the matter to have meaningful discussions.
PipeWire is currently still in heavy development, many things can and do
still change. I'm currently writing a replacement libjack.so[3] that runs jack
clients directly on PipeWire (mixing and complicated scheduling doesn't
work yet).
Hope to hear your comments,
Wim Taymans
[1] pipewire.org
[2] https://www.youtube.com/watch?v=6Xgx7cRoS0M
[3] https://github.com/PipeWire/pipewire-jack
Dear Linux Audio community,
we're sending this mail to let you know about the availability of the
remaining videos from LAC2018.
You can find them on media.ccc.de [1] and on the dedicated event pages
linked to in the schedule [2].
We hope you had a great time at the conference, and if you couldn't be
there physically, this is now the time to have a look at much of what has
happened in Berlin this year.
In other news, the website [3] is going to read-only mode shortly.
See you at future LACs!
[1] https://media.ccc.de/b/conferences/lac/lac18
[2] https://lac.linuxaudio.org/2018/pages/schedule/
[3] https://lac.linuxaudio.org/2018/
--
Linux Audio Conference team
Hello all,
Source code for octofile version 0.3.0 (linux) is now available at
<http://kokkinizita.linuxaudio.org/linuxaudio/downloads/index.html>
Octofile is the A-format to B-format converter for Core Sound's Octomic.
A-format input can be 1, 2, 4 or 8 audio files with respectively 8, 4, 2
or 1 channels each, at 44.1, 48 or 96 kHz.
Default output is a 2nd order Ambix file (CAF, SN3D, ACN) but the
legacy .amb or non-standard formats (e.g. Ambix in a .wav file)
are possible as well.
You will require a calibration file from Core Sound of course.
Ciao,
--
FA
Hello all,
Has anyone tried using multichannel USB audio on a Raspberry Pi 3B+ ?
It seems to work perfectly (using zita-alsa-pcmi) with stereo cards.
When I try my RME Babyface (12 in, 12 out) in CC mode, the device
opens without problems, but then Alsa_pcmi::pcm_wait() times out
waiting for the poll fd to become ready. Timeout in pcm_wait() is
1000 milliseconds.
The same seems to happen with Jackd, which uses similar code.
Is there anything in the Pi's system or configuration that
excludes multichannel cards ?
Ciao,
--
FA
DrumGizmo 0.9.15 Released!
DrumGizmo is an open source, multichannel, multilayered, cross-platform
drum plugin and stand-alone application. It enables you to compose drums
in midi and mix them with a multichannel approach, comparable to mixing a
real drumkit that has been recorded with a multi-mic setup.
We didn't quite make the yearly LAC (Linux Audio Conference) schedule.
But fret not, now it is here! The new 0.9.15 release is primed and ready
for use.
Loads of new stuff in this release. Most prominent is the new timing
humanizer and the bleed control! The timing humanizer works in addition
to the velocity humanizer. Where the velocity humanizer adjusts the
velocity of incoming midi hits, the timing humanizer does the same thing
but instead moves them back and forth in time. This all helps to achieve
a less machine-sounding output. To help with understanding the humanizing
features, we've also added a “visualizer” which mirrors the settings you
choose. This is the first iteration of the visualizer, so feedback is very
welcome.
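The idea behind the timing humanizer can be sketched in a few lines: each midi hit time gets a small random offset. This is only an illustration of the concept; "spread_ms" is an assumed parameter name, not DrumGizmo's actual setting:

```python
# Hypothetical sketch of a timing humanizer: nudge each midi hit by a
# small random amount so playback sounds less mechanical.
import random

def humanize(hit_ms, spread_ms, rng=random):
    """Return the hit time shifted by up to +/- spread_ms milliseconds."""
    return hit_ms + rng.uniform(-spread_ms, spread_ms)

rng = random.Random(42)                # seeded only to make the demo repeatable
hits = [0.0, 500.0, 1000.0, 1500.0]    # quarter notes at 120 bpm
for t in hits:
    print(f"{t:.0f} ms -> {humanize(t, 10.0, rng):.1f} ms")
```

The velocity humanizer works the same way, except the offset is applied to the midi velocity instead of the hit time.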
So what else? Oh yes, “bleed control”! And yep, it does exactly what you
expect. If you want a dry sounding kit, turn that slider aaaall the way
down. But if you want all of the ambience in there, turn it aaaall the
way up. Bleed is when a hit on any drum is also picked up by mics that
aren't the “primary” mic of that specific drum. For some genres bleed is
unwanted. For others, not so much. And now YOU are in control.
For the full list of changes, check the roadmap for 0.9.15 [2]
A note on the bleed control: Currently only the newly updated CrocellKit
v1.1 supports this [1]. We still need to update the rest of the kits
before it will work for those. So keep an eye out for that.
And that is all. Please enjoy this release! Get it here [3]
[1]: https://drumgizmo.org/wiki/doku.php?id=kits:crocellkit
[2]:
https://www.drumgizmo.org/wiki/doku.php?id=roadmap:features_roadmap#version…
[3]: http://www.drumgizmo.org/wiki/doku.php?id=getting_drumgizmo
Hey All,
It's been a long time, but there's a new Luppp release out now, version 1.2
https://github.com/openAVproductions/openAV-Luppp/releases/tag/release-1.2.0
Huge thanks to all the contributors; this release was largely driven by
contributions, community interaction, and interest in the Luppp project -
thanks to all!!
Keep an eye out, there is more stuff coming up soon! -Harry
---
Changelog:
Features:
-> clear clip with MIDI
-> build with meson
-> space triggers special clip
-> manual BPM input (right click on Tap-Button)
Improvements:
-> avoid noise on all controls
-> reduce default metronome volume
-> make label code consistent
-> fix compiler warnings
-> remove all hard coded scene numbers
-> add some debug outputs
-> metronome fancy fades
-> better icon file
Fixes:
-> fix several leaks and errors
-> fix broken waveforms
-> fix fltk/ntk conflict
-> fix generic MIDI launch bug
-> fix wrong output mapping
-> fix input signal flow
-> fix input volume for clip recording
-> fix timing issues after changing playspeed
-> fix scenes losing names once a scene is played