Hello list,
I recently tried out petri-foo and I like it enough to care about it in
the form of bug reports.
I don't know how to contact the developers though.
The website http://petri-foo.sourceforge.net/ says the last release was in
2012, and the GitHub repository has been switched to read-only.
Did I miss the active development place somehow? Fork of a fork?
-hgn
I need lossless JACK MIDI networking outside of JACK's built-in
networking, and not multicast unless someone can tell me
straightforwardly how to get multicast (qmidinet) to run within
localhost as well as outside it. Thus I am thinking of trying my hand
at using the Mido library to bridge JACK MIDI and TCP. I have never
done this sort of coding before; as a programmer I am mostly a deep
scripting guy, Python-heavy with a bunch of Bash on Linux, PowerShell-
heavy on Windows of late, with a pile of history further back in Perl on
both and VBA on Windows. Anyone have
hints... suggestions... alternatives... a best or better starting
place? Right now I don't want the applets to do any GUI at all; I just
want them to sit quietly in xterms, on JACK servers, keeping the
connection up, and passing MIDI data to and fro as other processes and
devices bring it.
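For concreteness, the sort of thing I have in mind is roughly the sketch
below. Completely untested; it assumes Mido's built-in socket ports
(mido.sockets) and a python-rtmidi build with JACK support, and the port
names, host and TCP port are placeholders.

#!/usr/bin/env python3
# Rough sketch: bridge a JACK MIDI port to TCP with Mido.
# Assumes python-rtmidi was built with JACK support; port names and
# host/port values below are placeholders.
import time
import mido

# Ask the RtMidi backend for its JACK MIDI API (assuming that API is
# available in this python-rtmidi build).
mido.set_backend('mido.backends.rtmidi/UNIX_JACK')

PEER_HOST, PEER_PORT = '192.168.1.50', 9080   # hypothetical peer

with mido.sockets.connect(PEER_HOST, PEER_PORT) as net, \
     mido.open_input('bridge_in', virtual=True) as jack_in, \
     mido.open_output('bridge_out', virtual=True) as jack_out:
    while True:
        # local JACK MIDI -> TCP
        for msg in jack_in.iter_pending():
            net.send(msg)
        # TCP -> local JACK MIDI
        for msg in net.iter_pending():
            jack_out.send(msg)
        time.sleep(0.001)   # crude pacing so the loop doesn't spin

The far end would run the mirror image of this, or a
mido.sockets.PortServer. Since the socket transport is plain TCP it
should be lossless, though it brings its own latency and jitter.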
--
Jonathan E. Brickman jeb(a)ponderworthy.com (785)233-9977
Hear us at ponderworthy.com -- CDs and MP3 available!
Music of compassion; fire, and life!!!
spectmorph-0.4.1 has been released.
Overview of Changes in spectmorph-0.4.1:
----------------------------------------
* macOS is now supported: provide a VST plugin for macOS >= 10.9
* Include instruments in source tarball and packages
* Install instruments to system-wide location
* New Instruments: Claudia Ah / Ih / Oh (female version of human voice)
* Improved tools for instrument building
- support displaying tuning in sminspector
- implement "smooth-tune" command for reducing vibrato from recordings
- minor encoder fixes/cleanups
  - smlive now supports enabling/disabling noise
* VST plugin: fix automation in Cubase (define "effCanBeAutomated")
* UI: use Source A / Source B instead of Left Source / Right Source
* UI: update dB label properly on grid instrument selection change
* Avoid exporting symbols that don't belong to the SpectMorph namespace
* Fix some LV2 ttl problems
* Fix locale related problems when using atof()
* Minor fixes and cleanups
What is SpectMorph?
-------------------
SpectMorph is a free software project for analyzing samples of musical
instruments and combining them (morphing). It can be used to construct hybrid
sounds, for instance a sound between a trumpet and a flute, or smooth
transitions, for instance a sound that starts as a trumpet and then gradually
changes to a flute.
SpectMorph ships with many ready-to-use instruments which can be combined using
morphing.
SpectMorph is implemented in C++ and licensed under the GNU LGPL version 3.
Integrating SpectMorph into your Work
-------------------------------------
SpectMorph is currently available for Linux, Windows and macOS users. Here is a
quick overview of how you can make music using SpectMorph.
- VST Plugin, especially for proprietary solutions that don't support LV2.
  (Available on Linux, 64-bit Windows and macOS)
- LV2 Plugin, for any sequencer that supports it.
- JACK Client.
- BEAST Module, integrating into BEAST's modular environment.
Note that at this point, we may still change the way sound synthesis works, so
newer versions of SpectMorph may sound (slightly) different than the current
version.
Links:
------
Website: http://www.spectmorph.org
Download: http://www.spectmorph.org/downloads
There are many audio demos on the website, which demonstrate morphing between
instruments.
--
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan
yes, that was not a great analogy - my example was only meant
to be general advice for any arbitrary dependency - it did not fit well for
this case, where that dependency is part of the tool-chain - indeed, you are
correct that the very purpose of makefiles is to abstract over system specifics
in the tool-chain (and pkg-config itself is for abstracting system-specific
library filenames) - if the two pkg-config binaries had different filenames
then it would perhaps call for a new makefile variable like: $PKGCONF [...]
On Tue, 28 Aug 2018 16:48:40 +0200 Hermann Meyer wrote:
> Using pkg-config means that it works on any system, even older ones,
Using pkg-config means that it works on any system that has pkg-config installed
- using pkgconf means that it works on any system that has pkgconf installed
- and either of pkg-config and pkgconf can be installed on any system regardless
of what the distro declares as the "official" implementation [...]
Hello all,
Updates are available at
<http://kokkinizita.linuxaudio.org/linuxaudio/downloads/index.html>
for all of the following:
aeolus-0.9.7
ambdec-0.7.1
clthreads-2.4.2
clxclient-3.9.2
ebumeter-0.4.2
jaaa-0.9.2
jack_delay-0.4.2
jack_utils-0.0.1
japa-0.9.2
jconvolver-1.0.2
jkmeter-0.6.5
jmatconvol-0.3.3
jmeters-0.4.4
jnoisemeter-0.2.2
octofile-0.3.2
tetraproc-0.8.6
yass-0.1.0
zita-ajbridge-0.8.2
zita-alsa-pcmi-0.3.2
zita-at1-0.6.2
zita-bls1-0.3.3
zita-convolver-4.0.2
zita-dpl1-0.3.3
zita-jclient-0.4.2
zita-lrx-0.1.2
zita-mu1-0.3.3
zita-njbridge-0.4.4
zita-resampler-1.6.2
zita-rev1-0.2.2
For most of them this is just small bug fixes, general maintenance,
and above all a systematic cleanup of the Makefiles.
Also the jacktools python package (presented at LAC 2018) is
available now. I actually did upload the files just before
the conference, but forgot to update index.html...
Ciao,
--
FA
Hi everyone,
I'm Wim Taymans and I'm working on a new project called PipeWire, which you
might have heard about [1]. I have given some general presentations about it
during its various stages of development, some of which are online [2].
PipeWire started as a way to share arbitrary multimedia, which brings vastly
different requirements regarding format support, device and memory management
than JACK. It wasn't until I started experimenting with audio processing that
the design started to gravitate toward JACK. And then some of JACK's features
became a requirement for PipeWire.
The end goal of PipeWire is to interconnect applications and devices through
a shared graph in a secure and efficient way. Some of the first applications
will be Wayland screen sharing and camera sharing with access control for
sandboxed applications. It would be great if we could also use this to connect
audio apps and devices, possibly unifying the PulseAudio/JACK audio stack.
Because the general design is now, I think, very similar to JACK, many
people have been asking me if I'm collaborating with the Linux pro-audio
community on this in any way at all. I have not, but I really want to change
that. In this mail I hope to start a conversation about what I'm doing, and I
hope to get some help and experience from the broader professional audio
developer community on how we can make this into something useful for
everybody.
I've been looking hard at all the things that are out there, including
Wayland, JACK, LV2, CRAS, GStreamer, MFT, OMX, ... and have been trying to
combine the best ideas of these projects into PipeWire. A new plugin API was
designed for hard realtime processing of any media type. PipeWire is LGPL
licensed and depends only on a standard C library. It currently targets
Linux.
At the core of the PipeWire design is a graph of processing nodes with
arbitrary input/output ports. Before processing begins, ports need to be
configured with a format and a set of buffers for the data. Buffer data and
metadata generally live in memfd shared memory, but can also be dmabuf or
anything else that can be passed as an fd between processes. There is a lot of
flexibility in this setup, reusing much of the experience gained in GStreamer.
This all happens on the main thread, infrequently, and is not very important
for the actual execution of the graph.
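To make the fd-passing part concrete, the underlying Linux mechanism (an
anonymous memfd whose file descriptor is handed to another process, which
then maps the same memory) can be sketched as follows. This is a generic
illustration, not PipeWire code; it needs Python 3.8+ for os.memfd_create
and 3.9+ for socket.send_fds/recv_fds, and the buffer size and names are
arbitrary.

#!/usr/bin/env python3
# Generic illustration of memfd + fd passing, not PipeWire code.
import mmap
import os
import socket

BUF_SIZE = 4096

parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

if os.fork() == 0:
    # Child: receive the fd and map the very same memory.
    parent_sock.close()
    _msg, fds, _flags, _addr = socket.recv_fds(child_sock, 1024, 1)
    with mmap.mmap(fds[0], BUF_SIZE) as buf:
        print('child sees:', buf[:12])
    os._exit(0)

# Parent: create an anonymous shared-memory file and fill it.
child_sock.close()
fd = os.memfd_create('audio-buffer')
os.ftruncate(fd, BUF_SIZE)
with mmap.mmap(fd, BUF_SIZE) as buf:
    buf[:12] = b'hello buffer'
    # Hand over the fd (not the data) to the other process.
    socket.send_fds(parent_sock, [b'buf'], [fd])
    os.wait()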
In the realtime thread (PipeWire currently has 1 main thread and 1 realtime
data thread), events from various sources can start push/pull operations in
the graph. For the purposes of this mail, consider an audio sink that uses a
timerfd to wake up when the ALSA buffer fill level is below a threshold. This
causes the sink to fetch a buffer from its input port queue and copy it to the
ALSA ring buffer. It then issues a pull to fetch more data from all linked peer
nodes for which there is nothing queued. These peers will then eventually push
another buffer into the sink queue, to be picked up in the next pull cycle of
the sink. This is somewhat similar to the JACK async scheduling model. In the
generic case, PipeWire has to walk upstream in the graph until it finds a node
that can produce something (see below for how this can be optimized).
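As an aside for readers who have not used timerfds, the wakeup mechanism
described above can be sketched in a few lines. This is only a generic
illustration of a timerfd-driven loop, not PipeWire code; it uses Python's
os.timerfd_create(), available from Python 3.13 (the same calls exist in
C), and the 256-frame / 48 kHz quantum is just an example value.

#!/usr/bin/env python3
# Generic illustration of a timerfd-driven wakeup loop, not PipeWire code.
import os
import struct
import time

PERIOD = 256 / 48000.0          # pretend quantum: 256 frames at 48 kHz

fd = os.timerfd_create(time.CLOCK_MONOTONIC)
# First expiry and interval; a real sink would re-arm this based on the
# ALSA buffer fill level instead of using a fixed interval.
os.timerfd_settime(fd, initial=PERIOD, interval=PERIOD)

for _ in range(10):
    # read() blocks until the timer expires and returns the number of
    # expirations since the last read (more than 1 means we were late).
    expirations, = struct.unpack('Q', os.read(fd, 8))
    print(f'woke up ({expirations} expiration(s)); pull and mix one quantum')

os.close(fd)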
Scheduling of nodes is done, contrary to JACK's (and LADSPA's and LV2's)
single 'process' method, with two methods: process_input and process_output.
This is done to support more complex plugins that need to decouple input from
output, and also to support a pull model for plugins. For internal clients, we
call the methods directly; for external clients, we use an eventfd and a shared
ringbuffer to send the right process command to the client.
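The eventfd half of that signalling looks roughly like the generic sketch
below. It is not the PipeWire implementation, and it leaves out the shared
ringbuffer that carries the actual command; it needs Python 3.10+ for
os.eventfd and runs on Linux only.

#!/usr/bin/env python3
# Generic illustration of eventfd wakeups between two processes,
# not PipeWire code.
import os

wake_client = os.eventfd(0)   # server -> client: "run your process cycle"
client_done = os.eventfd(0)   # client -> server: "finished, need more data"

if os.fork() == 0:
    # Client: block until woken, do the work, then signal completion.
    for _ in range(3):
        os.eventfd_read(wake_client)      # blocks while the counter is 0
        print('client: process_output()')
        os.eventfd_write(client_done, 1)
    os._exit(0)

# Server: wake the client for a few cycles and wait for each completion.
for cycle in range(3):
    os.eventfd_write(wake_client, 1)
    os.eventfd_read(client_done)
    print(f'server: cycle {cycle} done, wake the next node here')
os.wait()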
When the external client has finished processing, or needs to pull, it signals
PipeWire, which then wakes up the next clients if needed. This is different
from JACK, where a client directly wakes up its peers to avoid a server context
switch. JACK can do this because the graph and all client semaphores are
shared. PipeWire can't, in general, for a couple of reasons: 1) you would need
to bring mixing of arbitrary formats to the clients, and 2) sandboxed clients
should not be trusted with this information and responsibility. In some cases
it would probably be possible to improve on that in the future (see below).
This kind of scheduling works well for generic desktop-style audio and video.
Apps can send buffers of whatever size they like. Bigger buffers mean higher
latency but less frequent wakeups. The sink wakeup frequency is determined by
the smallest buffer size that needs to be mixed. There is an upper limit on the
largest amount of data that is mixed in one go, to avoid having to do rewinds
in ALSA while still having reasonable latency when doing volume changes, adding
new streams, etc.
The idea is to make a separate part of the graph dedicated to pro audio. This
part of the graph runs with mono 32-bit float sample buffers of a fixed size
and sample rate. The nodes running in this part of the graph also need to have
a fixed input/output pattern. In this part of the graph, negotiating the format
becomes trivial. We can preallocate a fixed-size buffer for each port that is
used to send/mix data between nodes, exactly like JACK works. In this scenario
it would be possible to bring some of the graph state to trusted clients so
that they can wake up their peers directly.
As it turns out, the generic scheduling mechanism then simplifies to the JACK
way of scheduling, with the option to do some optimisations (pushes can start
directly from the sources, process_input/process_output calls can be bundled,
mixing on ports is simplified by equal buffer sizes, ...).
There is a lot more stuff that I can talk about and a lot of things that need
to be fleshed out like latency calculations, an equivalent of JACK transport,
session management, ... But this mail is already getting long :)
I would very much like to hear your ideas, comments, flames, thoughts on this
idea. I think I'm at a stage where I can present this to a bigger audience and
have enough experience with the matter to have meaningful discussions.
PipeWire is currently still in heavy development; many things can and do
still change. I'm currently writing a replacement libjack.so [3] that runs JACK
clients directly on PipeWire (mixing and complicated scheduling don't
work yet).
Hope to hear your comments,
Wim Taymans
[1] pipewire.org
[2] https://www.youtube.com/watch?v=6Xgx7cRoS0M
[3] https://github.com/PipeWire/pipewire-jack
08/08/2018 (ignore this date)
PulseAudio tsched
=================
Between roughly December 2007 and February 2009, Lennart
Poettering asked some questions on the alsa-devel mailing
list while implementing timer-based audio scheduling for
PulseAudio.
In 2008 he wrote an article named "What's Cooking in
PulseAudio's glitch-free Branch".
I have not found detailed documentation about PulseAudio's
timer wakeup and I'm not sure about some implementation
details. Starting from Lennart's emails and his article,
documentation can begin to grow.
Does PulseAudio synchronize to the sound device by
adjusting the timer, or by adjusting the amount committed
(the application pointer)? The nice thing about the second
case is that the period time (from the application's point
of view) is bound to the system clock instead of the sound
device's.
What timer interface does PulseAudio use? timerfd?
POSIX timers?
Hardware issues
===============
How large is the deviation between the system clock and the
sound device clock today? Perhaps someone who works with
sound cards or SoCs can answer this. Does most of today's
hardware use the same clock for the sound device and the
system?
How do sound devices deal with random accesses to their
ring buffer? Do most of them allow reading/writing anywhere
at any time?
What are the accuracy and precision of the hardware pointer
returned by sound devices?
Does the SNDRV_PCM_INFO_BATCH flag reveal something about
this?
As PulseAudio runs on many Linux distributions today, it
seems an interesting idea to implement a utility to
collect some information regarding the above issues, and
then make it available to the public. E.g. it would be
possible to know the pointer accuracy of specific devices.
SCHED_DEADLINE and timerfd
==========================
There are a few existing works on SCHED_DEADLINE and
audio:
- This paper, presented at the Linux Audio Conference 2011:
BAGNOLI, Giacomo; CUCINOTTA, Tommaso; FAGGIOLI, Dario.
2011. Low-Latency Audio on Linux by Means of Real-Time
Scheduling.
Dario Faggioli is one of the authors of SCHED_DEADLINE.
- This presentation by Alessio Balsini: "Experimenting
with the Android Audio Pipeline and SCHED_DEADLINE".
OSPM 2018. Here is an overview:
<https://lwn.net/Articles/754923/>.
With SCHED_DEADLINE I assume there is no way to accurately
adjust the timer, nor any way to adjust when the first
expiration will be. I think this is because of the
Constant Bandwidth Server algorithm. I must ask one of its
developers.
Having this in mind, the only way to synchronize to the
sound device is by adjusting the amount read/written (the
application pointer).
Timerfd, on the other hand, is fully adjustable. However,
it can be preempted by a higher-priority task. Anyway, I
don't see that as a problem, as it should rarely happen.
Perhaps SCHED_DEADLINE is suitable for professional
environments (aiming for almost zero glitches).
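For reference, putting a thread under SCHED_DEADLINE from userspace looks
roughly like the sketch below. It is not taken from PulseAudio or from my
repository; it goes through the raw sched_setattr() syscall, the syscall
number 314 is x86_64-specific, the call normally requires root, and the
runtime/deadline/period values are arbitrary examples.

#!/usr/bin/env python3
# Sketch: set SCHED_DEADLINE on the current thread via sched_setattr().
import ctypes
import os

SYS_sched_setattr = 314       # x86_64 syscall number
SCHED_DEADLINE = 6

class SchedAttr(ctypes.Structure):
    _fields_ = [
        ('size', ctypes.c_uint32),
        ('sched_policy', ctypes.c_uint32),
        ('sched_flags', ctypes.c_uint64),
        ('sched_nice', ctypes.c_int32),
        ('sched_priority', ctypes.c_uint32),
        ('sched_runtime', ctypes.c_uint64),    # nanoseconds
        ('sched_deadline', ctypes.c_uint64),
        ('sched_period', ctypes.c_uint64),
    ]

libc = ctypes.CDLL(None, use_errno=True)

attr = SchedAttr()
attr.size = ctypes.sizeof(SchedAttr)
attr.sched_policy = SCHED_DEADLINE
attr.sched_runtime = 300_000        # 0.3 ms of CPU time ...
attr.sched_deadline = 1_000_000     # ... due within 1 ms ...
attr.sched_period = 1_000_000       # ... every 1 ms

if libc.syscall(SYS_sched_setattr, 0, ctypes.byref(attr), 0) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

print('running under SCHED_DEADLINE; the kernel now drives the wakeups')

Once the policy is set, the Constant Bandwidth Server decides when the
thread actually runs, which is why synchronization to the sound device
has to happen by adjusting the application pointer, as said above.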
Thoughts on this? Is there someone planning to implement
SCHED_DEADLINE in PulseAudio?
I'm currently working on both implementations (not for
PulseAudio). See
<https://github.com/ricardobiehl/simplesound>.
Take a look at 'Documentation/timer_wakeup.rst',
'tools/timer_wakeup.c' and 'tools/deadline_wakeup.c'.
'tools/waveplay.c' plays .wav files using the application
wakeup method specified in 'tools/Makefile'.
Documentation
=============
Many of the points I've mentioned should be well covered
by some documentation (e.g. ALSA's, PulseAudio's). Also, I
think ALSA's internal (kernel-level) documentation needs
to grow.
I'm actually working on that, documenting some things as I
develop a small sound library that talks directly to the
in-kernel ALSA interface and implements application wakeup
using timerfd and SCHED_DEADLINE.
Cheers!
Dear Linux Audio community,
we're sending this mail to let you know about the availability of the
remaining videos from LAC2018.
You can find them on media.ccc.de [1] and on the dedicated event pages
linked to in the schedule [2].
We hope you had a great time at the conference, and if you couldn't be
there physically, now is the time to have a look at much of what
happened in Berlin this year.
In other news, the website [3] is going into read-only mode shortly.
See you at future LACs!
[1] https://media.ccc.de/b/conferences/lac/lac18
[2] https://lac.linuxaudio.org/2018/pages/schedule/
[3] https://lac.linuxaudio.org/2018/
--
Linux Audio Conference team