Friends
I am working on a recorder that prioritizes simple keyboard control; it has very little GUI. This is what I need, but one important use case is for people who cannot look at a GUI (me, for one: I cannot be bothered, and I find GUI displays usually very distracting and at best inconvenient).
I do want to have a peak indicator: a warning if the level is too high and data loss is occurring.
This is a new journey for me, and some concepts might be a bit fuzzy in my head, forgive me.
How can I have warnings in my software that will play nicely with accessibility software?
Should I output lines on stdout that screen readers can process and then do whatever they are configured for?
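For example, I was imagining something along these lines (just a sketch; "myrecorder" is a placeholder for my program, and spd-say is the speech-dispatcher command-line client):
# placeholder: myrecorder prints one warning per line on stdout;
# spd-say (from speech-dispatcher) speaks each line aloud
myrecorder | while read -r line; do spd-say "$line"; done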
I want to get this right from the very start, and I thought this might be a good place to ask.
I am sorry if this is a bit off topic for Linux audio.
Could someone explain what's going on here and how to fix it?
Since PipeWire is here, the JACK output plugin of linux-show-player
(LiSP) doesn't work anymore. Trying to help myself, I wanted to get output
on channels 3/4 of my Focusrite Scarlett 2i4, so I created a 4-channel
audio file with Audacity: empty on channels 1 and 2, signal on 3 and 4.
What happens?
In "Direct Scarlet 2i4 USB"-mode: LiSP creates a 4-channel-output
(output_[FL,FR,FC,LFE]), where the signal comes out at 1,2 (*_[FL,FR])
In "Pro Audio"-mode: LiSP only creates 2 outputs. Signal's also routed
to those.
Shouldn't multichannel stay multichannel with PipeWire?
I don't know why everything is just down-/upmixed to stereo…
(When a mono track is streamed, it's also thrown onto FL/FR, which in
that case is fortunately what I want…)
############
To complete this, here's my workaround:
Using LiSP's "command cue" I can use a combination of pw-play and pw-link
to start the stream and redirect it to the desired output. The "sleep"
adds a little delay so that the audio stream is established and its links
can be modified:
pw-play 4-ch-audio.wav & sleep .1 && \
pw-link pw-play:output_FL alsa_output.usb-Focusrite_Scarlett_2i4_USB-00.pro-output-0:playback_AUX3 && \
pw-link pw-play:output_FR alsa_output.usb-Focusrite_Scarlett_2i4_USB-00.pro-output-0:playback_AUX4 && \
pw-link -d pw-play:output_FR alsa_output.usb-Focusrite_Scarlett_2i4_USB-00.pro-output-0:playback_AUX1 && \
pw-link -d pw-play:output_FL alsa_output.usb-Focusrite_Scarlett_2i4_USB-00.pro-output-0:playback_AUX2
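For reference, the exact port names can be listed with pw-link itself:
pw-link -o   # list output ports
pw-link -i   # list input ports
pw-link -l   # list existing links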
It would be easier if pw-play could be told the desired output ports directly…
Greets!
Mitsch
Loopino — Christmas Release 🎄
More Filters, Better Control, Improved Standalone Workflow
Just in time for the holidays, the new Loopino Christmas Release brings
workflow improvements, new classic filter models, and important
stability fixes—making Loopino more flexible, expressive, and reliable
than ever.
For standalone users, Loopino now features command-line support to
fine-tune the audio and MIDI setup before launch. You can directly
specify the ALSA MIDI device, sample rate, buffer size, and GUI
scaling—ideal for live setups or custom studio configurations.
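For example, an invocation could look roughly like this (the binary name and values here are only placeholders; the actual options are listed below):
Loopino -d hw:1,0,0 -r 48000 -b 256 -s 1.5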
Sound shaping has been expanded with two new character filters: a gritty
Wasp-style filter and a classic TB-303 filter, joining the existing Moog
and Oberheim-inspired designs. A new Tone control adds fast and musical
spectral shaping, perfect for dialing in brightness or weight without
complex routing.
This release also includes important bug fixes for both CLAP and VST2
builds. Thanks to everyone who reported issues—your feedback helps keep
Loopino stable and dependable across platforms.
New in this Release
- Command-line options for the standalone version:
  - -d, --device <name> — select ALSA MIDI device (e.g. hw:1,0,0)
  - -b, --buffer <value> — set ALSA buffer size
  - -r, --rate <value> — set ALSA sample rate
  - -s, --scaling <value> — GUI scaling factor (default: 1)
- New Wasp-style filter
- New TB-303 filter
- New Tone control
- Bug fixes for CLAP and VST2 (thanks to the reporters!)
Alongside these updates, Loopino continues to offer its full feature
set: drag-and-drop sample loading, on-the-fly recording, pitch tracking,
micro-loop generation, non-destructive wave shaping, ADSR envelopes,
multiple modulation sources, built-in effects, preset handling, WAV
export in key, and up to 48 voices of polyphony.
Project Page:
https://github.com/brummer10/Loopino
Release Page:
https://github.com/brummer10/Loopino/releases/tag/v0.2.0
Thank you for your continued support and feedback.
Happy holidays and happy looping! 🎶❄️
Hi there!
Everything's just easy with PipeWire, isn't it?
Well, since PipeWire arrived I have some problems using my beloved
linux-show-player (LiSP) in combination with its JACK output.
So I'd like to use a workaround based on console commands…
I have a built-in audio device and a Bluetooth connection; pw-top lists
these devices:
Dummy-Driver
Freewheel-Driver
[…]
alsa-output.pci-0000_00_1b.0.analog-stereo
[…]
bluez_output.[bluetooth-address].1
[…]
How do I send a stereo or mono audio stream to the bluez output with a
console command? Can I use aplay? (Because aplay -L doesn't list the
Bluetooth speaker…)
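Or would something like this work, if pw-play's --target option does what I think it does (node name taken from pw-top, or the node id from something like wpctl status)?
pw-play --target bluez_output.[bluetooth-address].1 some-file.wav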
Any ideas?
Thank you!
Mitsch
**Loopino — New Release: Expressive Control, Classic Filters, and
Improved Standalone Support**
This new Loopino release focuses on expressiveness, classic
analogue-inspired sound shaping, and a more powerful standalone
experience. With newly added Pitch Wheel support, Loopino now responds
more like a real instrument, enabling expressive bends, subtle detuning,
and dynamic performance gestures via MIDI.
Sound design has been expanded with the addition of an Oberheim-style
filter, complementing the existing Moog-style ladder filter. Together,
they offer two distinct analogue flavours for sculpting everything from
smooth pads to aggressive textures. Velocity-dependent dynamic controls
allow envelopes, filters, and modulation depth to react naturally to
playing intensity, bringing Loopino even closer to a
performance-oriented sampler-synth hybrid.
To enrich spatial depth and movement, this release introduces an
integrated Chorus and Reverb, making it possible to create wide, lush,
and immersive sounds directly inside Loopino—no external effects required.
For standalone users, native ALSA audio and MIDI support has been added,
improving stability, latency, and system integration on Linux.
Alongside these additions, Loopino continues to offer its core feature
set: drag-and-drop sample loading, on-the-fly recording, pitch tracking,
a powerful Micro Loop Generator with selectable loop count and duration,
non-destructive wave shaping (square & saw), LP/HP ladder filtering,
phase modulators (sine, triangle, noise, Juno-style), vibrato, tremolo,
root frequency control, preset handling, WAV export in the selected key,
and up to 48 voices of polyphony.
**Highlights of this Release**
- MIDI Pitch Wheel support for expressive performance
- New Oberheim-style filter
- Velocity-based dynamic modulation controls
- Integrated Chorus and Reverb effects
- ALSA audio & MIDI support for the standalone version
- Continued improvements to stability, workflow, and sound quality
Loopino keeps evolving into a flexible, expressive instrument that
bridges sampling, synthesis, and performance—designed for sound
designers, experimental musicians, and anyone who enjoys turning raw
audio into playable instruments.
## Availability
- Linux: Standalone application, CLAP plugin, VST2 plugin
- Windows: CLAP plugin, VST2 plugin
Project Page:
https://github.com/brummer10/Loopino
Release Page:
https://github.com/brummer10/Loopino/releases/tag/v0.1.0
Dear list,
please excuse the following for being largely off-topic for linux audio. I
can't seem to find any useful information via startpage.com, nor can I
think of another list I am subscribed to where I could ask the following:
I sometimes have the need to show pdf slides in presentations and like
using pdfpc[1], a "presenter console with multi-monitor support for PDF
files" using an extended X display. I do as well have the need to show
the (single) desktop I am working on. The only way I can achieve this is
to switch back and forth between the two setups using xrandr.
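For illustration, the two setups I toggle between look roughly like this (the output names eDP-1 and HDMI-1 are just examples):
# extended display, projector to the right of the laptop (for pdfpc)
xrandr --output HDMI-1 --right-of eDP-1 --auto
# projector mirrors the laptop screen (to show my desktop)
xrandr --output HDMI-1 --same-as eDP-1 --auto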
I am using a window manager (fluxbox) with multiple virtual desktops.
I am wondering if there is a smarter, less disruptive way.
What if I had one extended display, whose left side is on my laptop's
screen and whose right side is on the video projector, and had something
like a VNC client on the right side mirroring what is going on on the left
side? I could then show what I am working on on the left screen, but could
also have the pdf presenter console span the left and right sides of this
extended display together.
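Something like this is what I imagine (untested, just a sketch; the geometry is a placeholder for the laptop half of the extended display):
# export only the left half of the extended display over VNC
x11vnc -clip 1920x1080+0+0 -viewonly -localhost &
# view it in a window placed on the right (projector) half
vncviewer localhost:0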
I hope my description and question do make (some) sense.
If anyone has an idea I'd be happy to know!
thanks, and excuse the OT post,
Peter
[1] https://pdfpc.github.io/
Hi.
Since I rely on accessibility and haven't found a "DJing tool" for Linux
which doesn't require a graphical interface...
I used to do offline mixes with a bunch of sox instances piped into each
other, orchestrated with a shell script...
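(Roughly this kind of thing, just as an illustration with placeholder file names:
# tempo-match one track and mix it with another
sox track_b.wav -t wav - tempo 0.98 | sox -m track_a.wav -t wav - mix.wav
...only with many more instances and effects chained together.)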
Which is rather tedious, so...
I recently came back to this and decided it was a great opportunity
to have a tool vibe-coded: a chance to play with vibe-coding while
actually solving a problem I've always had.
So I had GPT-5(.1) scratch my itch.
And out came
https://github.com/mlang/clmix
It has a built-in player + metronome for working out the bpm, the offset
of the main beat grid, and start/end cue points.
Seeking works in bars, so you don't have to fiddle with time.
Once you have worked out a track by ear, you can do a "snap-to-grid"
using aubio as a beat/onset detector to refine the offset if necessary.
Once you are happy with the track timing, you save it to your DB.
If you have a number of tracks in your DB, you can then
generate a (shuffled) mix. Selection of tracks from the DB
uses a tag expression language, so you can do things like:
clmix music.json --random "dnb | jungle" --export beats.wav
clmix will determine the mean bpm, bring all tracks to the same
tempo, and align cue points for you.
That's basically its main use case for me.
I use mpd with crossfade a lot for playing music at home,
and kind of wanted something that can actually generate good crossfades.
So when I want a "club mix" at home, I now simply
generate it from the favourites I have added to the clmix db.
Nothing special, just a tool to get a particular job done,
for a particular user group (command-line junkies).
Vibe-coding was fun.
From idea to first executable code was just a few hours.
However, I have a particular taste when it comes to C++,
so after a while of being amazed that so much apparently correct code
was written by the AI, I started to play the "let me fix this" game.
I ended up doing a lot of bikeshedding. OTOH, I still feel
that kind of ping-pong was necessary to keep the code halfway decent.
If you let the AI do its thing without supervision, the code
ends up pretty unreadable after a while.
In any case, I make it clear that this is vibe-coded because
I definitely do not claim the fame. While I love to play
with programming, this is neither my day job nor do I have a lot of
(recent) practice. So I likely wouldn't have written this
if I hadn't had some help. So, with all the war around AI use
on the net, I can say this was a nice outcome for me.
Nobody's job was taken away, and I got something nice for maybe $30 of
API costs.
I got what I wanted, and I didn't have to waste a lot of time on it.
In any case, if this is helpful to anyone, enjoy.
--
CYa,
⡍⠁⠗⠊⠕