Ratatouille is a Neural Model loader and mixer for Linux/Windows.
This release implements easier file switching. It's now possible to
switch the selected files via the mouse wheel, via mouse button clicks, and
via the keyboard up/down keys. A right mouse button click pops up the
file list and allows selecting a file directly.
It also implements lighter CPU usage for convolution (IR files) on
non-power-of-two buffer sizes, by using the multi-threaded FFTConvolver engine.
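For the curious: convolution engines usually cope with arbitrary block sizes via overlap-add, where each incoming block is convolved with the IR and the overhanging tail is carried into the next block. Below is a minimal single-threaded Python sketch of that idea only; Ratatouille's actual engine is the multi-threaded FFTConvolver C++ library, and the function name here is invented:

```python
import numpy as np

def make_convolver(ir):
    """Stateful overlap-add convolver that accepts blocks of any size."""
    tail = np.zeros(len(ir) - 1)  # reverb tail carried between blocks

    def process(block):
        nonlocal tail
        # Convolve this block with the IR (a real engine does this with
        # partitioned FFTs; np.convolve is used here for clarity).
        y = np.convolve(block, ir)
        n = len(block)
        out = y[:n].copy()
        # Add the tail left over from previous blocks.
        m = min(len(tail), n)
        out[:m] += tail[:m]
        # New tail: the part of y past this block, plus any unused old tail.
        new_tail = y[n:].copy()
        rest = tail[m:]
        new_tail[:len(rest)] += rest
        tail = new_tail
        return out

    return process
```

Feeding the stream through in blocks of varying size produces exactly the same output as one long convolution, which is the property that frees the host from power-of-two buffer sizes.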
Besides that, the GUI was reworked a bit.
Ratatouille allows loading up to two neural model files and mixing their
output. Those models can be *.nam files <https://tonehunt.org/all> or
*.json or .aidax files <https://cloud.aida-x.cc/all>. So you could blend
from clean to crunch, for example, or go wild and mix different amp
models, or mix an amp with a pedal simulation.
The "Delay" control can add a small delay to the second model to
overcome phasing issues, or to add some color/reverb to the sound.
To round out the sound, it allows loading up to two Impulse Response files
and mixing their output as well. You could try the wildest combinations,
or be conservative and load just your single preferred IR file.
Each neural model may expect a different sample rate; Ratatouille
will resample the buffer to match it.
Impulse Response files are resampled on the fly to match the session
sample rate.
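Sample-rate matching of this kind boils down to interpolation. A naive linear-interpolation sketch in Python (real resamplers use polyphase or sinc kernels for quality; the function name is made up for illustration):

```python
import numpy as np

def resample_linear(buf, sr_in, sr_out):
    """Resample buf from sr_in to sr_out by linear interpolation."""
    n_out = int(round(len(buf) * sr_out / sr_in))
    t_in = np.arange(len(buf)) / sr_in    # original sample times
    t_out = np.arange(n_out) / sr_out     # requested sample times
    return np.interp(t_out, t_in, buf)    # interpolate at the new grid
```

So a model trained at 48 kHz can be fed from a 44.1 kHz session by mapping the session buffer onto the model's expected time grid.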
Project Page (Source code):
<https://github.com/brummer10/Ratatouille.lv2>
Release Page (Binaries):
<https://github.com/brummer10/Ratatouille.lv2/releases/tag/v0.6>
Ratatouille is a Neural Model loader and mixer for Linux/Windows.
This release fixes handling of frame buffer sizes of arbitrary size, i.e. not
a power of two, in the impulse response engine.
Ratatouille allows loading up to two neural model files and mixing their
output. Those models can be *.nam files <https://tonehunt.org/all> or
*.json or .aidax files <https://cloud.aida-x.cc/all>. So you could blend
from clean to crunch, for example, or go wild and mix different amp
models, or mix an amp with a pedal simulation.
The "Delay" control can add a small delay to the second model to
overcome phasing issues, or to add some color/reverb to the sound.
To round out the sound, it allows loading up to two Impulse Response files
and mixing their output as well. You could try the wildest combinations,
or be conservative and load just your single preferred IR file.
Each neural model may expect a different sample rate; Ratatouille
will resample the buffer to match it.
Impulse Response files are resampled on the fly to match the session
sample rate.
Release Page:
https://github.com/brummer10/Ratatouille.lv2/releases/tag/v0.5
Project Page:
https://github.com/brummer10/Ratatouille.lv2
Hi
I revisited an old guitarix project (SpecMatch) and ported it to Python 3.
SpecMatch aims to compare two sounds and generate an Impulse Response
file from the difference.
Originally it was developed to ease the process of recreating a
specific sound within guitarix.
Nowadays, with NAM and AIDAX around, there are better ways to do that.
Hence I've unbound it from guitarix and made it a tool of its own,
as there is still a need to add convolution to get the expected sound.
SpecMatch allows loading two sound files, comparing their frequency
spectra, and generating an Impulse Response file from the
difference. So it enables you to get the missing bits.
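In case you wonder what "an IR from the difference" can look like in practice: divide the magnitude spectra, transform back, and window the result. The sketch below is only my guess at the general approach, not SpecMatch's actual code, and all names are invented:

```python
import numpy as np

def spectral_match_ir(source, target, ir_len=1024):
    """Estimate a linear-phase FIR filter whose magnitude response maps
    the spectrum of `source` onto the spectrum of `target`."""
    n = 1 << (max(len(source), len(target)) - 1).bit_length()
    S = np.abs(np.fft.rfft(source, n)) + 1e-12  # magnitude only, no phase
    T = np.abs(np.fft.rfft(target, n)) + 1e-12
    H = T / S                        # per-bin correction factor
    ir = np.fft.irfft(H)             # zero-phase impulse response
    ir = np.roll(ir, ir_len // 2)    # shift to make the filter causal
    return ir[:ir_len] * np.hanning(ir_len)  # truncate and taper
```

Convolving the source with that IR then gives it (approximately) the target's frequency balance, which is "the missing bits" in filter form.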
Another use case is to obtain the "full impulse response" of a
destination file by using the usual NAM trainer input file as the source
file in SpecMatch.
I've posted some of my results using it this way, as a showcase, on
the linuxmusicians forum here:
https://linuxmusicians.com/viewtopic.php?p=168587#p168587
This is, after all these years, still a work in progress, and no
release should ever be expected, as it is plain
development. Anyway, if this stuff is of some interest to you, here it is:
https://github.com/brummer10/SpecMatch
regards
hermann
SpectMorph 1.0.0-beta2 has been released.
Compared to beta1, the only change is that the plugin user interface now
works properly on macOS 14. You can get the new version from
https://www.spectmorph.org/downloads
If you do not use macOS 14, there is no reason to update.
Changes in beta2:
-----------------
* Plugin user interface now works correctly on macOS 14 (#28).
* Update clang++ compiler version on macOS.
* Minimum supported macOS version is now macOS 11.
Feedback for any issues you might experience with the beta version is
appreciated.
--
Stefan Westerfeld, http://space.twc.de/~stefan
PandaResampler 0.2.1 has been released.
https://github.com/swesterfeld/pandaresampler
This is a header-only library for C++ which implements fast factor 2, 4,
or 8 upsampling and downsampling based on SSE instructions. I
developed the code for my DSP work in Anklang and SpectMorph.
It might be useful for you if you have some DSP loop which needs to be
oversampled to avoid aliasing.
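To illustrate why you would want this: a nonlinearity such as tanh creates harmonics above Nyquist that fold back down as aliasing, so you run it at 2x/4x/8x the rate and filter back down afterwards. Here is a Python sketch of the concept using ideal FFT resampling; this is not PandaResampler's API (which is C++/SSE), and the names are invented:

```python
import numpy as np

def oversampled_waveshape(x, shape=np.tanh, factor=2):
    """Apply a nonlinear waveshaper at `factor` times the sample rate."""
    n = len(x)
    # Upsample: pad the spectrum with zeros (ideal low-pass interpolation).
    X = np.fft.rfft(x)
    pad = np.zeros(n * factor // 2 + 1 - len(X))
    up = np.fft.irfft(np.concatenate([X, pad]), n * factor) * factor
    y = shape(up)                    # run the nonlinearity at the high rate
    # Downsample: discard the bins above the original Nyquist frequency.
    Y = np.fft.rfft(y)[: n // 2 + 1] / factor
    return np.fft.irfft(Y, n)
```

A real-time resampler like PandaResampler does the same job with short SSE-friendly FIR filters instead of whole-signal FFTs, so it works sample-stream by sample-stream.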
Changes in PandaResampler 0.2.1:
--------------------------------
* Use meson build system.
* Improve test coverage and CI tests.
* Support building a shared library (#3).
* Do not build tests unless -Ddevel=true is used (#2).
* Install headers to include install directory from meson (#1).
--
Stefan Westerfeld, http://space.twc.de/~stefan
Hello all,
I'm still working on a new autotuner, zita-at2.
Some examples can be checked here:
<http://kokkinizita.linuxaudio.org/linuxaudio/retune/>
There's no autotune in these, just fixed pitch or formant
shifts - they can now be controlled separately.
Comments welcome, and of course I still need some more
vocal tracks to test...
Ciao,
--
FA
Please pardon cross-posting. I would appreciate it if you would please
spread the word. Thank you.
Dear all,
After years of development, the Pd-L2Ork developer community is thrilled to
announce immediate availability of the *WebPdL2Ork* open *BETA* that is
capable of running just about any patch created using Pd-L2Ork inside a
browser. Simply upload your patch to a Web-accessible location and point
your browser to http://pd-l2ork.music.vt.edu:3000?url=<URL-to-your-patch>
All related subpatches and abstractions will be accessible as long as they
are in the path. The main patch will be stretched across the browser
window. Subpatches may be visible as floating windows as long as their
location has been saved within the box of the original patch. Some
adjustments to the subpatch locations may be necessary, or they can even be
embedded as graph-on-parent-enabled subpatches in the main patch window.
To test out patches already hosted on our page, please use the links provided
below. Select patches also have hidden shortcuts outlined below. For an
optimal experience, we recommend Google Chrome or Chromium.
VT Waves Project Learning Modules:
- Autotune (explore how an Adele solo refrain would sound if sung on a
single note)
- Distortion (experiment with clipping an audio signal to create a
guitar-like distortion)
- Phase Cancellation (learn how to cancel vocals from just about any
mainstream pop tune by subtracting the right channel from the left with an
inverted phase; use Shift+(1-3) to enable different sources)
- Pitch Relationships (explore frequency and pitch relationships; use
Shift+A to enable the pitch/frequency ratio viewer, and Shift+S to open the
spectrogram subpatch)
- Spectral Filtering (the iconic Forbidden Planet and FFT-based vocal
filtering)
- Spectral Filtering Harmonics (explore decomposition and reconstruction of
the human voice into 10 sine tones; use keys ~ and 1-0 to toggle individual
overtones, use Shift+(2-3) to enable other potential sound sources, toggle
off a source to "pause" the signal, use Shift+4 to toggle a slider that
cross-fades between the original and reconstructed signals)
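The phase-cancellation trick in the module above is simple enough to show in a few lines. A Python sketch (the patch itself is of course Pd; the names here are illustrative):

```python
import numpy as np

def cancel_center(stereo):
    """stereo: array of shape (n, 2). Subtracting the right channel from
    the left removes anything mixed dead-center (typically the lead
    vocal); side-panned content survives."""
    return stereo[:, 0] - stereo[:, 1]

# Illustrative mix: a center-panned "vocal" and a left-only "guitar".
t = np.arange(1000)
vocal = np.sin(0.05 * t)                 # identical in both channels
guitar = np.sin(0.02 * t)                # left channel only
mix = np.stack([vocal + guitar, vocal], axis=1)
karaoke = cancel_center(mix)             # the vocal cancels out
```

This only works for content that is bit-identical in both channels; stereo reverb on the vocal, for example, survives the subtraction.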
And, if you have a beefy computer, you can also run the entire L2Ork
Tweeter inside
the browser (currently networking is not supported, so only the offline
mode is available). Once loaded, consider opening one of the included saved
sessions using the top-right corner session loading option by clicking on
the green "LOAD" button positioned immediately to the left of the text box.
Please be patient with the loading process, as this is a CPU intensive
patch (the pd-l2ork patch itself is more than 5MB). Once the session is
loaded, it may take up to 10 seconds for the audio engine to catch up
before the audio dropouts stop. If dropouts do not stop, or if loading
takes much longer, chances are your CPU is not fast enough to handle the
patch running inside the browser (you can always explore the desktop
version which is considerably less CPU intensive). Use Shift+(F1-F12) to
take control of individual parts. For more info on L2Ork Tweeter, including
tutorial videos, visit our Tweeter page.
*What does not work:* the Gem library, networking objects (they load but do
not work due to the sandboxed nature of a web browser), and a few select
(and not commonly used) 3rd-party libraries are not yet supported.
Everything else should work out of the box.
*To learn how to build your own HTTPS-enabled web server:* visit the
pd-l2ork GitHub and read the emscripten/DOCUMENTATION.md file.
For additional info on L2Ork visit https://l2ork.music.vt.edu
This project is sponsored by the Department of the Navy, Office of Naval
Research under ONR award number N00014-22-1-2164. Any opinions, findings,
and conclusions or recommendations expressed in this material are those of
the author(s) and do not necessarily reflect the views of the Office of
Naval Research.
Best,
Ico