-------- Forwarded Message --------
Subject: Re: [LAD] Re: First release for aloop v0.1
Date: Mon, 3 Feb 2025 14:24:11 +0100
From: Hermann Meyer <brummer-(a)web.de>
To: David Adler <d.adler(a)posteo.de>
On 02.02.25 at 22:03, David Adler wrote:
> On 02.02.2025 10:11, Janina Sajka wrote:
> ...
>> Maybe an arch aur, too?? :)
>
> Done.
> https://aur.archlinux.org/packages/aloop-git
>
> best
> -david
>
Oh, nice.
In the meantime I've pushed a new release, aloop v0.2.
It should fix some small remaining bugs and introduce drag and drop for
the playlist.
The playlist can now easily be re-sorted by drag and drop, a file can be
dropped into the player to start playing immediately, or dragged out of
the playlist. Changes to the playlist can optionally be saved.
enjoy the music
aloop is an audio file looper for Linux using PortAudio as back-end
(jack, pulse, alsa), libsndfile to load sound files and zita-resampler
to resample the files when needed. The GUI is created with libxputty.
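To give a rough idea of what the resample-on-load step does, here is a
small Python sketch. aloop itself uses libsndfile and zita-resampler;
the soundfile/scipy calls and the SESSION_RATE value below are only my
illustration, not aloop's code:

# Illustration only: load a file and resample it to the session rate.
from fractions import Fraction
import soundfile as sf
from scipy.signal import resample_poly

SESSION_RATE = 48000  # placeholder for the actual session sample rate

def load_for_session(path, session_rate=SESSION_RATE):
    data, file_rate = sf.read(path, dtype='float32', always_2d=True)
    if file_rate != session_rate:
        # e.g. 44100 -> 48000 becomes the rational ratio 160/147
        ratio = Fraction(session_rate, file_rate)
        data = resample_poly(data, ratio.numerator, ratio.denominator, axis=0)
    return data, session_rate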
This is the first release of aloop; it comes with the following features:
* support for all file formats supported by libsndfile
* resample files on load to match the session sample rate
* file loading by drag n' drop
* included file browser
* open file directly in a desktop file browser
* open file on command-line
* create, sort, save and load playlists
* select to loop over a single file or over the playlist
* move play-head to mouse position in wave view
* set loop points for start/end loop
* save loop points in the playlist
* save selected loop as wav file
* play backwards
* volume control
* pause playback (keyboard: space bar)
* reset play-head to start position (keyboard: cursor left)
Dependencies
* libsndfile1-dev
* portaudio19-dev
* libcairo2-dev
* libx11-dev
Project page:
https://github.com/brummer10/aloop
Release Page:
https://github.com/brummer10/aloop/releases/tag/v0.1
Please report issues to the project issue tracker.
Hi!
This mail is just a heads-up about a finding, discovered with the help
of the linux-usb mailing list, which I thought some other people might
benefit from. If this is nothing new to you, then please just ignore it.
Background:
The USB audio class 2.0 specification dictates that isochronous
transfers (i.e. audio frames to/from an audio interface) happen every
"micro-frame". USB micro-frames are 125 microseconds (us) apart. 125 us
= 0.125 milliseconds (ms).
The majority of USB audio interfaces (at least those that I have) use
synchronous audio-streaming, i.e. the sample clock is derived from the
bus clock. There are definitely interfaces that use e.g. adaptive or
asynchronous modes and this discussion would have to be altered for these.
Given the case of synchronous mode isochronous transfers at a sampling
rate of 48000 Hz (= 48 kHz) this would correspond to 6 audio frames per
USB micro-frame.
48000 frames/second * 0.000125 seconds = 6 frames
So in principle a very well behaved audio interface attached to a very
well behaved USB controller sitting in a well tuned system _should_ be
able to achieve a minimum round-trip latency of 2 * 6 frames = 12 frames
or 250 us. This leaves out additional buffering inside the audio
interface and additional latency by anti-aliasing and reconstruction
filters.
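Just to make the arithmetic explicit, here it is as a tiny Python
snippet (ideal case only, ignoring converter and filter latency):

rate = 48000                    # frames per second
microframe = 125e-6             # one USB micro-frame = 125 us
frames_per_microframe = rate * microframe         # 6.0 frames
min_roundtrip_frames = 2 * frames_per_microframe  # 12 frames
min_roundtrip_ms = min_roundtrip_frames / rate * 1000.0  # 0.25 ms
print(frames_per_microframe, min_roundtrip_frames, min_roundtrip_ms)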
The first caveat to the above in the context of Linux: the snd_usb_audio
driver does not seem to support period sizes of just 6 frames. It _does_
support 12 frames though, which is nice.
And here is the other caveat, which led to the title of this mail: some
controllers behave "worse" than others. There's this little thing in the
XHCI spec which amounts to the following: the XHCI can specify in a
register how many micro-frames have to be buffered at all times for
outgoing isochronous endpoints. The Intel XHCI in my ASRock N100DC-itx
main-board, for example, requests a whole USB frame (which corresponds
to 8 micro-frames or 1 ms) to be buffered at all times. Another XHCI
that I have (a Renesas controller) only requires one micro-frame. This
has direct consequences for the period sizes and numbers of periods
that are usable on these controllers. These consequences don't explain
everything, but at least you know that you can't ever get better than
this limit.
For example, for a period size of 48 frames I need to use 3 periods on
the Intel controller (resulting in 3 ms round-trip latency), but on the
Renesas I can use 2 periods at 48 frames (resulting in 2 ms round-trip
latency). On the Renesas controller even 2 periods at 24 frames works
fine (1 ms round-trip latency).
I can lower the latency on the Intel controller by using a smaller
period size but more periods, as long as the buffering requirement of
the controller is satisfied. One stable setting is, for example, a
period size of 24 and 5 periods, which results in a round-trip latency
of 2.5 ms.
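For the curious, the numbers above fall out of this small calculation;
note that the "satisfies" rule of thumb is my own rough model of the
buffering requirement, not something taken from the XHCI spec:

rate = 48000

def roundtrip_ms(period_size, nperiods):
    # round-trip latency of the duplex stream, as observed above
    return nperiods * period_size / rate * 1000.0

def satisfies(period_size, nperiods, controller_microframes):
    # Rough guess: the playback headroom beyond one period must exceed
    # the controller's buffering demand (6 frames per micro-frame at 48 kHz).
    required_frames = controller_microframes * 6
    return (nperiods - 1) * period_size > required_frames

# Intel XHCI (8 micro-frames): 48 x 3 and 24 x 5 work, 48 x 2 does not.
print(roundtrip_ms(48, 3), satisfies(48, 3, 8))   # 3.0 True
print(roundtrip_ms(24, 5), satisfies(24, 5, 8))   # 2.5 True
print(roundtrip_ms(48, 2), satisfies(48, 2, 8))   # 2.0 False
# Renesas XHCI (1 micro-frame): 24 x 2 works.
print(roundtrip_ms(24, 2), satisfies(24, 2, 1))   # 1.0 True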
So what's the take-away here? In some cases, if you are chasing stable
low-latency operation using a USB audio class 2.0 device, it might just
be worth installing a different, better-behaved XHCI in your computer
than the one you currently have (the above-mentioned Renesas controller
is just a PCI-Express card which sits in a slot in my N100DC-itx board).
Another take-away is that the above limitation only applies to the
outgoing direction (playback). If all I were interested in was the
capture direction, then 2 periods at 48 frames would work fine even on
the Intel controller.
Kind regards,
FPS
Ratatouille is a Neural Model loader and mixer for Linux/Windows.

This release introduces a normalization option for NAM models and
fixes an issue with the normalization (a.k.a. loudness compensation) of
IR files (thanks to @avanzzzi).
Ratatouille allows loading up to two neural model files and mixing their
output. Those models can be [*.nam files](https://tonehunt.org/all) or
[*.json or .aidax files](https://cloud.aida-x.cc/all). So you could
blend from clean to crunch, for example, or go wild and mix different
amp models, or mix an amp with a pedal simulation.
Ratatouille uses parallel processing for the second neural model and the
second IR file to reduce the DSP load.
The "Delay" control can add a small delay to the second model to
overcome phasing issues, or to add some color/reverb to the sound.
To round out the sound, it allows loading up to two impulse response
files and mixing their output as well. You could try the wildest
combinations, or be conservative and load just your single preferred IR
file.
Each neural model may expect a different sample rate; Ratatouille will
resample the buffer to match it. Impulse response files will be
resampled on the fly to match the session sample rate.
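To give a rough idea of the mixing concept, here is a purely
illustrative Python sketch. This is not Ratatouille's actual DSP code;
model_a and model_b are placeholders for loaded *.nam/*.aidax models:

import numpy as np

def mix_models(x, model_a, model_b, blend=0.5, delay_samples=0):
    # blend = 0.0 -> only model A, blend = 1.0 -> only model B
    a = model_a(x)                 # e.g. a clean amp capture
    b = model_b(x)                 # e.g. a crunch amp or a pedal capture
    if 0 < delay_samples < len(b):
        # the "Delay" control: shift the second model by a few samples
        delayed = np.zeros_like(b)
        delayed[delay_samples:] = b[:len(b) - delay_samples]
        b = delayed
    return (1.0 - blend) * a + blend * b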
Project Page:
https://github.com/brummer10/Ratatouille.lv2
Release Page:
https://github.com/brummer10/Ratatouille.lv2
Hey hey,
I'm working on some Python code. The idea is to have a kind of MIDI sequencer
with a built-in metronome that plays the clicks using simple .wav files.
Here's the code, my question follows:
https://www.dropbox.com/scl/fi/5t2z0qtpj3ejyhevqdy9s/clock_test.zip?rlkey=w…
Currently, I am using pygame.mixer.Sound to play the .wav files, but compared
to the MIDI messages sent, the clicks lag. I tried lowering the buffer size,
as you can see in the Metronome class on line 110:
pygame.mixer.init(buffer=32)
No good.
Is there a better, hopefully simple, way to play these clicks?
On my system I'm running jackd with a samplerate of 48kHz and
-n 3 -p 256
This usually serves me well in recording audio and MIDI.
If someone could make suggestions, I'd appreciate it very much.
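For what it's worth, here is a rough sketch of the kind of alternative I
have been wondering about: callback-based playback with the sounddevice
and soundfile packages. This is just an idea, not what the code in the
zip does, and 'click.wav' stands in for my actual click files:

import numpy as np
import sounddevice as sd
import soundfile as sf

click, rate = sf.read('click.wav', dtype='float32', always_2d=True)
pos = len(click)                 # >= len(click) means "not playing"

def trigger_click():
    # call this whenever the metronome tick fires
    global pos
    pos = 0

def callback(outdata, frames, time, status):
    global pos
    outdata.fill(0.0)
    if pos < len(click):
        n = min(frames, len(click) - pos)
        outdata[:n] = click[pos:pos + n]
        pos += n

stream = sd.OutputStream(samplerate=rate, blocksize=64,
                         channels=click.shape[1], callback=callback)
stream.start()

No idea yet whether that plays nicely with jackd, hence the question.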
Best wishes,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
What's practical is logical. What the hell, who cares?
All I know is I'm so happy when you're dancing there. <3
(Britney Spears)