Hello all,
I've been contemplating trying out Pipewire as a replacement
for Jack. What is holding me back is what seems to be a
serious lack of information. I'm not prepared to spend a lot
of time and risk breaking a perfectly working system just to
find out that it was a bad idea from the start. So I have a
lot of questions which maybe some of you reading this can
answer. Thanks in advance for all useful information.
A first thing to consider is that I actually *like* the
separation of the 'desktop' and 'pro audio' worlds that
using Jack provides. I don't want the former to interfere
(or just be able to do so) with the latter. Even so, it may
be useful in some cases to route e.g. browser audio or a
video conference to the Jack world. So the ideal solution
for me would be to have Pipewire as a Jack client.
So first question:
Q1. Is that still possible ? If not, why not ?
If the answer is no, then all of the following become
relevant.
Q2. Does Pipewire as a Jack replacement work, in a reliable
way [1], in real-life conditions, with tens of clients,
each maybe having up to a hundred ports ?
Q3. What overhead (memory, CPU) is incurred for such large
systems, compared to plain old Jack ?
A key feature of Jack is that all clients share a common idea
of what a 'period' is, including its timing. In particular
the information provided by jack_get_cycle_times(), which is
basically the state of the DLL and identical for all clients
in any particular period. Now if Pipewire allows (non-Jack)
clients with arbitrary periods (and even sample rates):
Q4. Where is the DLL and what does it lock to when Pipewire
is emulating Jack ?
Q5. Do all Jack clients see the same (and correct) info
regarding the state of the DLL in all cases ?
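For reference, the per-cycle info I mean is what a client reads in
its process callback (a minimal sketch, using only the documented
Jack API):

  #include <jack/jack.h>

  // All clients calling this in the same cycle should see identical
  // values - that is the property I care about.
  int process (jack_nframes_t nframes, void *arg)
  {
      jack_client_t *client = (jack_client_t *) arg;
      jack_nframes_t current_frames;
      jack_time_t    current_usecs, next_usecs;
      float          period_usecs;

      if (jack_get_cycle_times (client, &current_frames,
                                &current_usecs, &next_usecs,
                                &period_usecs) == 0)
      {
          // current_frames / current_usecs anchor this period in time,
          // next_usecs - current_usecs is the DLL's period estimate.
      }
      return 0;
  }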
The only way I can see this being OK would be that the Jack
emulation is not just a collection of Pipewire clients which
happen to have compatible parameters, but actually a dedicated
subsystem that operates almost independently of what the rest
of Pipewire is up to. Which in turn means that having Pipewire
as a Jack client would be the simpler (and hence preferred)
solution.
[1] which means I won't fall flat on my face in front of
a customer or a concert audience because of some software
hiccup.
Ciao,
--
FA
Hi all,
I haven't posted to any of the LA* mailing lists much in recent years, but today
I thought I should make an exception to point out that it is now exactly 20 years
ago that the first Linux Audio Conference or "LAC" for short (then still called the
"Linux Audio Developer's Meeting") took place at the ZKM in Karlsruhe, Germany.
https://lists.linuxaudio.org/hyperkitty/list/linux-audio-dev@lists.linuxaud…
Wow, 20 years come and gone so quickly! It's great to see a couple of "early
adopters" are still around today here, and many new names have entered the scene
in the meantime and left/leave a lasting footprint in it.
As a quick "trip down memory lane", here's a short list of things that happened back then:
- Takashi Iwai dove into the innards of an ALSA driver
- Fernando Lopez-Lezcano introduced the PlanetCCRMA distribution, based on Fedora
- Steve Harris explained the concept of a Bode frequency shifter as a LADSPA plugin
- Paul Davis held the keynote, AND spoke about JACK, AND shared his experience of writing
a DAW that is at the forefront of free, open-source and cross-platform DAWs today.
- Dave Phillips looked at the historical development of Linux audio, from OSS to ALSA and beyond
- Jörn Nettingsmeier managed to get the audio part of the presentations both recorded AND live-streamed
at a time when "streaming" was still a term unknown to most of us.
- The term "Linux Sound Night" already existed, but was..perhaps not as musical yet as you might have expected :-).
- And, we also learned that schedules are not easy to keep - I believe after the first day we
ran more than two hours late, and people began to STARVE.
- Posing in front of the ZKM "Kubus": https://linuxaudio.de/LAC2003_Posse.jpg
For some more memories, see also http://lac.linuxaudio.org/2003/zkm/
Of course, on the downside it has to be noted that Corona has impacted us too -
after an almost perfect track record of conferences or mini-conferences every year,
2021 and 2022 didn't see any event happening. There is a certain risk now that it
won't recover, but that's in our hands to change (sidenote: I have talked to my
contacts at ZKM here in Karlsruhe in January, but they are unable to finance and/or
organize an event of this size at the moment - certainly somewhat influenced by
current economics, but I am hearing ZKM has actually reduced their headcount quite
a bit, and after the recent passing of its artistic-scientific chairman, Peter
Weibel, it will have to be seen how his successor, Alistair Hudson, will steer
it into the future).
Whatever LAC's future will be, free and open-source audio software is certainly
flourishing, and will continue to do so. It just would be soo nice to enjoy the
results together with real, tangible people :-\.
If anyone sees an option for hosting a future LAC, lac-team@lists.linuxaudio.org
is willing to listen to proposals you might have (or just discuss it right here).
Well, what the heck - there is no steering committee or anything in place, so I guess
the first one brave enough to say "I think we can do it" will get the job :-).
I am forever grateful to all the folks who have enriched and continue to enrich
our open-source audio life by writing or presenting software, creating
documentation and tutorials, hosting (AND attending) conferences like the LAC,
and whatever else helps to keep the penguin dancing!
Greetings,
Frank
Hey hey,
while figuring out the syntax for C++20 concepts I have run into some challenges.
With a two-parameter concept like swappable_with or convertible_to, is it always
necessary to use a requires clause like this:
template<typename T>
requires std::convertible_to<T,int>
void f(T t) ...
...
With single-parameter concepts a syntax like this works:
template<std::integral T>
void f(T t)...
Also with single-parameter concepts a declaration like this should work:
std::integral auto my_variable = 42;
Will that work with a two-parameter concept like the above convertible_to? I
haven't seen/found a syntax which does it.
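By analogy with the single-parameter case I would expect the deduced
type to fill the concept's first parameter, with the remaining
arguments listed explicitly - something like this (an untested sketch):

#include <concepts>

// The constrained type takes the concept's first parameter slot,
// the remaining arguments are given in the angle brackets:
template<std::convertible_to<int> T>
void g(T t) {}

// ...and the same form for a constrained placeholder:
std::convertible_to<int> auto my_other_variable = 3.14;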
Any help is appreciated, definitely including a good read that WILL
demonstrate or demystify these things.
Best wishes,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Don't care about money
It doesn't give me half the thrill <3
(Britney Spears)
Hi,
the story continues: I'll release a second block of video tutorials
about LV2 plugin programming from scratch, for everybody who wants
to make their own plugins. This time it's about graphical user interfaces.
I'll show you:
* the LV2 way to go, with turtle and C/C++,
* toolkits and the pitfalls,
* some of the toolkits we can use, and
* finally how to make UIs for the plugins we made before.
Every Friday, for the next ten weeks or so. Starting with this
announcement video: https://youtu.be/7mCLDBBXajU
See you on Friday
Sven
Hi all,
I have just released SoundTracker v1.0.4-pre1. I've decided to declare a
feature freeze at this point, which means that 1.0.4 will have no new
features compared to this pre-release, only fixes and small
improvements if something is found to be inconvenient. So I ask everyone
who has an interest in SoundTracker to test this release. Here are the main
features of the 1.0.4-pre1 release compared to 1.0.3:
* General:
- Full-screen mode
- The volume of all samples of an instrument / all samples in the module can
be set to an absolute value, not only scaled. The panning of all samples can
also be adjusted. These functions can also modify the explicitly given
volume / panning values of notes in patterns
- Improved compatibility with FastTracker; MOD files are also played
more accurately
- Unused patterns are listed in the Pattern Sequence Editor
- Sampling rate can be specified while saving a sample as WAV
- New module optimizer with many control parameters
* Track Editor:
- Moving notes up / down is implemented
* Sample Editor:
- External programs can be used for sample processing. The interaction
with such a program is described using an XML spec; no recompilation of
ST is required
- The whole sample is drawn after recording
- Exponential / reverse exponential volume fade transients
* Instrument Editor:
- Envelope inversion and shifting is implemented
SoundTracker download page:
https://sourceforge.net/projects/soundtracker/files/
Regards,
Yury.
Hey hey,
I want to record MIDI into a sequencer application, using RtMidi. My
understanding is:
inside the application's tracks, events are stamped directly with delta
times, measured in ticks since the last event in that track. Internally
the sequencer runs a clock at a high rate (480 PPQN or higher).
It's probably best to "quantise" events to that internal clock while
they are recorded. So the track object, with its connected MIDI input
device, must be linked to the clock, which wakes up whenever it is
time for a tick. Linked meaning there must be some way to exchange relevant
info (one of them owning a reference, a callback, shared data, ...).
If that basic (and simplified) principle is sound, which solutions have
proven feasible to achieve it? For example: make every active track a
thread and have a global mutex? Have each track register a callback
function with the clock to store events? ...?
RtMidi's input object (RtMidiIn) offers to register a callback function
that will be called with a received MIDI message whenever input is
available. Such messages could be stored in a container and be delivered
when it's tick time. So again, a track might own a reference to the
clock thread object, query the current tick count since the last
record-start command, and use that information to calculate the correct
delta. I'm not sure, though, whether that might raise issues with
execution times. Using an atomic variable as a tick counter would at
least avoid having to use mutex locks.
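To make that concrete, here is a minimal sketch of the
callback-plus-atomic-counter idea (my own illustration; the track
container and the clock thread are placeholders):

#include <atomic>
#include <cstdint>
#include <iostream>
#include <vector>
#include "RtMidi.h"

// Ticks since record start; written by the clock thread,
// read by the RtMidi input callback.
std::atomic<std::uint64_t> tick_count{0};

struct Event {
    std::uint64_t tick;                 // arrival time in clock ticks
    std::vector<unsigned char> bytes;   // raw MIDI message
};

// RtMidi calls this from its own input thread for every message.
void on_midi_input(double /*delta_seconds*/,
                   std::vector<unsigned char>* message, void* user_data)
{
    auto* track = static_cast<std::vector<Event>*>(user_data);
    track->push_back({tick_count.load(std::memory_order_relaxed),
                      *message});
    // Per-event deltas (tick minus the previous event's tick) can be
    // computed when the events are flushed into the track's storage.
}

int main()
{
    std::vector<Event> track;           // stand-in for a real track object
    RtMidiIn midi_in;
    midi_in.openPort(0);
    midi_in.setCallback(&on_midi_input, &track);
    // A real clock thread would increment tick_count once per tick
    // (480 PPQN or whatever resolution is chosen).
    std::cin.get();                     // record until Enter is pressed
}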
Is there any practical advice? Something based on experience? A good
read?
Best wishes and thanks,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
But now I'm stronger than yesterday <3
(Britney Spears)
Hey hey,
how is a logarithmic curve usually programmed in a DAW or sequencer? Do you
scale the values of log(1) to log(2) to the desired range and stretch them over
time? Do you adjust the steepness by either using more or less of the log
function, or by changing both values, like log(20) to log(21)?
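To illustrate what I mean (my own sketch, not taken from any
particular program):

#include <cmath>

// Map a normalised position x in [0,1] onto [lo,hi] along a log
// segment log(a)..log(b); a larger ratio b/a gives a steeper curve,
// while b/a close to 1 (e.g. 20..21) is nearly linear.
double log_curve(double x, double lo, double hi,
                 double a = 1.0, double b = 2.0)
{
    double t = (std::log(a + x * (b - a)) - std::log(a))
             / (std::log(b) - std::log(a));    // 0..1, log-shaped
    return lo + t * (hi - lo);
}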
I'm sure there are many ways to do it, but I'd assume that there are a few
"classic" ways to approach this with anything from controller "automation" to
envelope curve settings to ...
I'd appreciate any hints or simple examples from software practice.
Best wishes and thanks,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Baby, take the time to realize
I'm not the kind to sacrifice <3
(Britney Spears)
Hello all,
What exactly is meant by a linear or exponential tempo change ?
Is the tempo a lin/exp function of time, or of score position ?
A bit of algebra leads to this:
Let
t = time (seconds)
p = score position (beats)
v = tempo (beats / second) [1]
We have
v = dp / dt # by definition
If v is a linear function of t, then
v (p) = the square root of a linear function of p
If v is an exponential function of t, then
v (p) = a linear function of p
If v is a linear function of p, then
v (t) = an exponential function of t
If v is an exponential function of p, then
v (t) = the inverse of a linear function of t
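As a check on the first case, take v(0) = v_0 and v = v_0 + a t;
then (in LaTeX notation):

p = \int_0^t v \, dt' = v_0 t + \tfrac{1}{2} a t^2
\quad\Rightarrow\quad v^2 = v_0^2 + 2 a p
\quad\Rightarrow\quad v(p) = \sqrt{v_0^2 + 2 a p}

The other cases follow in the same way.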
So there is plenty of room for confusion...
[1] Using SI units :-)
Ciao,
--
FA
Hey hey,
I'm experimenting with C++, trying to program something nice for MIDI. I'm now
experimenting with clocks, using both the standard C++ chrono library and the
Boost chrono library. My example program sets a desired delta between ticks of
250 ms. I see that there is a difference, since Boost chrono can also use
pthread thread parameters for realtime scheduling and priorities.
With the standard chrono and thread libraries my ticks are usually out by
anything between 150,000 and 290,000 nanoseconds. Using Boost, a tick is out by
anything between 110,000 and 124,000 nanoseconds. Yes, much better. But,
assuming that I correct the drift, does it make a difference for MIDI
instruments?
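For reference, the drift correction I have in mind is simply sleeping
until absolute deadlines (a sketch using only the standard library):

#include <chrono>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    constexpr auto period = std::chrono::milliseconds(250);

    // Sleeping until an absolute deadline, instead of for a relative
    // duration, keeps each wakeup's lateness from accumulating.
    auto next = clock::now() + period;
    for (int tick = 0; tick < 100; ++tick) {
        std::this_thread::sleep_until(next);   // late by jitter only
        // ... send the MIDI clock byte (0xF8) here ...
        next += period;                        // deadline advances exactly
    }
}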
I know one of my MIDI instruments with a clock-syncable delay can be rather
touchy about MIDI clock, but are there good guidelines or experience-based
values for how precise ticks should be?
Best wishes and thanks for any help,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Top down, on the strip
Lookin' in the mirror
I'm checkin' out my lipstick <3
(Britney Spears)