Ratatouille is a Neural Model loader and mixer for Linux/Windows.

This release introduces a normalization option for NAM models and
fixes an issue with the normalization (a.k.a. loudness compensation) of IR
files (thanks to @avanzzzi).
Ratatouille allows you to load up to two neural model files and mix their
output. Those models can be [*.nam files](https://tonehunt.org/all) or
[*.json or .aidax files](https://cloud.aida-x.cc/all). So you can
blend from clean to crunch, for example, or go wild and mix different
amp models, or mix an amp with a pedal simulation.
Ratatouille uses parallel processing for the second neural model
and the second IR file to reduce the DSP load.
The "Delay" control could add a small delay to the second model to
overcome phasing issues, or to add some color/reverb to the sound.
To round out the sound, it allows loading up to two impulse response files
and mixing their output as well. You can try the wildest combinations,
or be conservative and load just your single preferred IR file.
Each neural model may expect a different sample rate; Ratatouille
will resample the buffer to match it.
Impulse response files will be resampled on the fly to match the session
sample rate.
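Conceptually, the blending stage boils down to something like this
(a simplified sketch, not Ratatouille's actual DSP code; the names and
the linear crossfade are illustrative):

  import numpy as np

  def blend_models(dry, model_a, model_b, mix, delay_samples=0):
      # run both neural models on the same dry input block
      out_a = model_a(dry)
      out_b = model_b(dry)
      # the "Delay" control: shift the second model by a few samples
      # to work around phasing issues between the two outputs
      if delay_samples > 0:
          out_b = np.concatenate((np.zeros(delay_samples), out_b))[:len(out_a)]
      # crossfade: mix = 0.0 -> only model A, mix = 1.0 -> only model B
      return (1.0 - mix) * out_a + mix * out_b
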
Project Page:
https://github.com/brummer10/Ratatouille.lv2
Release Page:
https://github.com/brummer10/Ratatouille.lv2
Hey hey,
I'm working on some Python code. The idea is to have a kind of MIDI sequencer
with a built-in metronome that plays the clicks using simple .wav files.
Here's the code, my question follows:
https://www.dropbox.com/scl/fi/5t2z0qtpj3ejyhevqdy9s/clock_test.zip?rlkey=w…
Currently, I am using pygame.mixer.Sound to play the .wav files, but compared
to the MIDI messages sent, the clicks lag. I tried lowering the buffer size,
as you can see in the Metronome class on line 110:
pygame.mixer.init(buffer=32)
No good.
Is there a better, hopefully simple, way to play these clicks?
On my system I'm running jackd with a samplerate of 48kHz and
-n 3 -p 256
This usually serves me well in recording audio and MIDI.
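If it helps, the relevant part boils down to roughly this (a stripped-down
reconstruction, not the actual code from the zip; the file and function
names here are made up):

  import pygame

  pygame.mixer.init(buffer=32)              # the Metronome class, line 110
  click = pygame.mixer.Sound("click.wav")   # loaded once, up front

  def tick():
      # called by the sequencer clock along with the outgoing MIDI message
      click.play()
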
If someone could make suggestions, I'd appreciate it very much.
Best wishes,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
What's practical is logical. What the hell, who cares?
All I know is I'm so happy when you're dancing there. <3
(Britney Spears)
Guitarix.vst is the full-blown guitarix stack as a VST3 plugin for Linux,
using Juce to wrap the guitarix engine into a VST3 plugin.
It allows you to load/save your presets, download presets online, and
load external LV2 plugins and IR files, like the guitarix stand-alone version,
but all that as a VST3 plugin in your DAW. All parameters are exposed
to the DAW, so they are accessible for automation.
Unlike the stand-alone version, the VST3 version allows switching the input
to a real stereo input, so it may better match your channel strip in the
DAW.
For HiDPI users, the GUI is fully scalable.
New in this release:
 - Add support for Parallel Processing (process second mono chain in
parallel when stereo is selected)
 - Fix downloading online presets
 - Update included Juce modules to v7.0.12
 - Hide disabled controllers in main amp when no-tube is selected
 - Update included guitarix engine to latest git head
Project Page:
https://github.com/brummer10/guitarix.vst
Release Page:
https://github.com/brummer10/guitarix.vst/releases/tag/v0.4
regards
hermann
On Thu, Oct 17, 2024 at 12:27:30AM +0200, yoann.le.borgne(a)free.fr wrote:
> Regarding your approach of white box reverse engineering, have you had a
> look at Jatin Chowdhury's work on the subject?
> (https://github.com/jatinchowdhury18/AnalogTapeModel).
> He went quite far simulating the full chain and his plugin has acquired
> quite a reputation.
Yes, I know of his work. But reading the paper, I don't think
his physical model is correct.
Unless I'm missing something, the algorithm he uses amounts to (roughly
as sketched in the code below):
- upsampling, giving S(t)
- adding the bias signal B(t)
- evaluating the hysteresis loop on S(t) + B(t), giving M(t)
- downsampling, which filters out the HF bias.
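In code terms, my reading of that pipeline is roughly this (np.tanh is
only a memoryless stand-in for the actual hysteresis model, and the
oversampling factor and bias settings are just illustrative):

  import numpy as np
  from scipy.signal import resample_poly

  def record_chain(x, fs, os=16, f_bias=100e3, a_bias=1.0):
      # upsample the audio, giving S(t)
      s = resample_poly(x, os, 1)
      t = np.arange(len(s)) / (fs * os)
      # add the HF bias signal B(t)
      b = a_bias * np.sin(2.0 * np.pi * f_bias * t)
      # evaluate the hysteresis loop on S(t) + B(t), giving M(t)
      m = np.tanh(s + b)
      # downsample; the anti-alias filter removes the HF bias again
      return resample_poly(m, 1, os)
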
But that is NOT how things work in reality.
Imagine following a small fragment of tape (corresponding to
one sample in the upsampled domain), as it moves past the
recording head and away from it.
The magnetic field of the record head extends well beyond
the head gap, decreasing in strength as you move away from
it.
So the small tape fragment will go through many
cycles of the bias + signal waveform with decreasing
amplitude. The resulting magnetisation is not just
the 'average' of M(t).
For an example, see
<http://kokkinizita.linuxaudio.org/linuxaudio/downloads/magnetisation.png>
X-axis is time in microseconds, bias frequency is 100 kHz,
bias amplitude is 1.0, signal (assumed constant during the
short time shown) = 0.5.
The green trace shows the magnetic field as seen by a small
tape fragment as it moves past the recording head.
The red trace is the resulting magnetisation, taking the
hysteresis effects into account. The constant value it
converges to is the signal that remains on the tape.
This is what I simulate, and it is way more complex than
what is described in Chowdhury's paper - that would just
be the average value of the red trace while it is still
at full amplitude.
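In code terms, the picture above corresponds to something like this toy
simulation (not my actual implementation; the hysteresis model is a
bare-bones, irreversible-only Jiles-Atherton variant, and all parameter
values and the field-decay envelope are made up for illustration):

  import numpy as np

  # made-up Jiles-Atherton style parameters:
  # saturation, shape, coupling, loop width
  MS, A, ALPHA, K = 1.0, 0.05, 1e-3, 0.3

  def langevin(x):
      # Langevin function, small-argument limit avoids the coth singularity
      return x / 3.0 if abs(x) < 1e-4 else 1.0 / np.tanh(x) - 1.0 / x

  def ja_step(m, h0, h1):
      # advance the magnetisation m as the field moves from h0 to h1
      dh = h1 - h0
      if dh == 0.0:
          return m
      delta = 1.0 if dh > 0.0 else -1.0
      man = MS * langevin((h0 + ALPHA * m) / A)   # anhysteretic magnetisation
      if delta * (man - m) <= 0.0:
          return m                                # no irreversible change
      dm = (man - m) / (K * delta - ALPHA * (man - m)) * dh
      if abs(dm) > abs(man - m):                  # keep the Euler step from overshooting
          dm = man - m
      return m + dm

  # field seen by one tape fragment: (bias + signal) scaled by an envelope
  # that decays as the fragment moves away from the record head
  fs = 5_000_000                          # 5 MHz step resolves a 100 kHz bias
  t = np.arange(0.0, 200e-6, 1.0 / fs)
  sig = 0.5                               # signal, assumed constant here
  bias = np.sin(2.0 * np.pi * 100e3 * t)  # bias amplitude 1.0
  env = np.exp(-t / 40e-6)                # made-up decay with distance
  h = env * (bias + sig)

  m = 0.0
  for i in range(1, len(h)):
      m = ja_step(m, float(h[i - 1]), float(h[i]))
  print("magnetisation left on this fragment:", m)
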
If the signal is HF, it can't be assumed constant during
the time the field decay takes; it may well go through a
complete cycle or more. This is what causes the 'self
erasure' at higher frequencies which Chowdhury doesn't
take into account at all (AFAICS).
Ciao,
--
FA