Does anyone know of software that can generate MIDI messages from a touchpad?
The idea would be to send CCs to a sequencer or soft synth, but being able to
send it to an external hardware device would also be very useful.
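For context, the mapping itself is tiny; a minimal sketch (all names hypothetical, not a specific existing tool) that turns touchpad coordinates into raw MIDI Control Change bytes, which could then be written to an ALSA or JACK MIDI port:

```python
def cc_message(channel, controller, value):
    # MIDI Control Change: status byte 0xB0 | channel, then 7-bit controller and value
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

def pointer_to_cc(x, y, width, height):
    # Map a pointer position inside a width x height surface to two CC values (0..127),
    # here arbitrarily on CC 1 (X axis) and CC 2 (Y axis), channel 1.
    cc_x = min(127, int(x * 128 / width))
    cc_y = min(127, int(y * 128 / height))
    return cc_message(0, 1, cc_x), cc_message(0, 2, cc_y)
```

The rest of such a tool is just reading pointer events and pushing these bytes to a MIDI output.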
--
Will J Godfrey
https://willgodfrey.bandcamp.com
http://yoshimi.github.io
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
I just unboxed a Focusrite Scarlett Solo.
I am very dissatisfied. Before I even turned it on there were serious problems. Do not buy this unless you are already familiar with the actual device.
There is no documentation supplied.
The labeling on the box is woeful. Not the slightest indication of which input is high-Z.
Some switches on the panel are ambiguous ("air"? WTF?).
The mic input is on the back, where it is less useful.
The promised software is not in the box (mēh!).
Very disappointed in it. Not well thought through in its design, presentation or packaging.
Worik
Ratatouille is a Neural Model loader and mixer for Linux/Windows.
This release implements easier file switching. It's now possible to
switch the selected files via the mouse wheel, via mouse button clicks,
and via the keyboard up/down keys. A right mouse button click pops up
the file list and allows selecting a file directly.
It also uses less CPU for convolution (IR files) on non-power-of-two
buffer sizes, by using the multi-threaded FFTConvolver engine.
Besides that, the GUI has been reworked a bit.
Ratatouille allows loading up to two neural model files and mixing their
output. Those models can be *.nam files <https://tonehunt.org/all> or
*.json or .aidax files <https://cloud.aida-x.cc/all>. So you could blend
from clean to crunch, for example, or go wild and mix different amp
models, or mix an amp with a pedal simulation.
The "Delay" control can add a small delay to the second model to
overcome phasing issues, or to add some color/reverb to the sound.
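The blend-and-delay idea can be sketched in a few lines (an illustration only, not Ratatouille's actual DSP code):

```python
def mix_models(out_a, out_b, blend, delay=0):
    """Crossfade two model outputs: blend 0.0 = all A, 1.0 = all B.
    out_b is delayed by `delay` samples before mixing."""
    delayed_b = [0.0] * delay + list(out_b)
    # Zero-pad both signals to the same length, then crossfade sample by sample.
    n = max(len(out_a), len(delayed_b))
    a = list(out_a) + [0.0] * (n - len(out_a))
    b = delayed_b + [0.0] * (n - len(delayed_b))
    return [(1.0 - blend) * x + blend * y for x, y in zip(a, b)]
```

A small delay on the second path is enough to de-correlate the two models when their latencies differ slightly.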
To round out the sound, it allows loading up to two impulse response
files and mixing their output as well. You can try the wildest
combinations, or be conservative and load just your single preferred IR file.
Each neural model may expect a different sample rate; Ratatouille
will resample the buffer to match it.
Impulse response files are resampled on the fly to match the session
sample rate.
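Conceptually, the resampling step looks like this (a naive linear-interpolation sketch; a real plugin would use a higher-quality polyphase or sinc resampler):

```python
def resample_linear(signal, src_rate, dst_rate):
    """Resample `signal` from src_rate to dst_rate by linear interpolation."""
    if src_rate == dst_rate:
        return list(signal)
    ratio = src_rate / dst_rate
    n_out = int(len(signal) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * ratio              # position in the source signal
        j = int(pos)
        frac = pos - j
        s0 = signal[j]
        s1 = signal[j + 1] if j + 1 < len(signal) else signal[j]
        out.append(s0 * (1.0 - frac) + s1 * frac)
    return out
```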
Project Page (Source code):
https://github.com/brummer10/Ratatouille.lv2
Release Page (Binaries):
https://github.com/brummer10/Ratatouille.lv2/releases/tag/v0.6
I find myself in need of a USB to S/PDIF adapter. Does anyone have experience
with one of the low cost Behringer interfaces, UCA202 or UCA222?
I have an audio interface, so I am not very interested in the analog
performance of the adapter, I just need reliable S/PDIF operation.
The manual for both of those devices indicates 16-bit converters, but does not
make mention of whether the S/PDIF data path is also limited to 16 bits, or
whether they will pass 24-bit data. Has anyone had opportunity to check that?
And lastly, any alternatives I should check? The Behringer is easy to get and
only about US$30, which is the primary reason I was considering one of those
modules. Input and output would be nice, but output is all I really need
currently.
--
Chris Caudle
Ratatouille is a Neural Model loader and mixer for Linux/Windows.
This release fixes handling of frame buffer sizes of arbitrary (i.e.
non-power-of-two) size in the impulse response engine.
Ratatouille allows loading up to two neural model files and mixing their
output. Those models can be *.nam files <https://tonehunt.org/all> or
*.json or .aidax files <https://cloud.aida-x.cc/all>. So you could blend
from clean to crunch, for example, or go wild and mix different amp
models, or mix an amp with a pedal simulation.
The "Delay" control can add a small delay to the second model to
overcome phasing issues, or to add some color/reverb to the sound.
To round out the sound, it allows loading up to two impulse response
files and mixing their output as well. You can try the wildest
combinations, or be conservative and load just your single preferred IR file.
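For illustration, applying an IR is conceptually just a convolution (direct form shown below; the impulse response engine mentioned above uses an FFT-based convolver, which is far faster for long IRs):

```python
def convolve(signal, ir):
    """Direct-form convolution of a signal with an impulse response.
    Output length is len(signal) + len(ir) - 1."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out
```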
Each neural model may expect a different sample rate; Ratatouille
will resample the buffer to match it.
Impulse response files are resampled on the fly to match the session
sample rate.
Release Page:
https://github.com/brummer10/Ratatouille.lv2/releases/tag/v0.5
Project Page:
https://github.com/brummer10/Ratatouille.lv2
Hi
I revisited an old guitarix project (SpecMatch) and ported it to Python 3.
SpecMatch aims to compare two sounds and generate an impulse response
file from the difference.
Originally it was developed to ease the process of recreating a
specific sound within guitarix.
These days, with NAM and AIDA-X around, there are better ways to do that.
Hence I've unbundled it from guitarix and made it a tool of its own,
as there is still a need to add convolution to get the expected sound.
SpecMatch allows loading two sound files, comparing their frequency
spectra, and generating an impulse response file from the
difference. So it enables you to get the missing bits.
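The core idea can be sketched with a toy DFT (a hypothetical simplification; the real tool works on magnitude spectra of long files, with windowing and smoothing):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spec):
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def spectral_match_ir(source, target, eps=1e-12):
    """IR whose convolution with `source` approximates the spectrum of `target`:
    divide the target spectrum by the source spectrum, then transform back."""
    src_spec, tgt_spec = dft(source), dft(target)
    h = [t / (s if abs(s) > eps else eps) for s, t in zip(src_spec, tgt_spec)]
    return idft(h)
```

If the source is a unit impulse, the resulting IR is simply the target itself, which is a handy sanity check.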
Another use case is to obtain the "full impulse response" of a
destination file by using the usual NAM trainer input file as the source
file in SpecMatch.
I've posted some of my results using it this way, as a showcase, on
the linuxmusicians forum here:
https://linuxmusicians.com/viewtopic.php?p=168587#p168587
This is, after all these years, still work in progress, and no release
should ever be expected, as it is plain development. Anyway, if this
stuff is of some interest to you, here it is:
https://github.com/brummer10/SpecMatch
regards
hermann
SpectMorph 1.0.0-beta2 has been released.
Compared to beta1, the only change is that the plugin user interface now
works properly on macOS 14. You can get the new version from
https://www.spectmorph.org/downloads
If you do not use macOS 14, there is no reason to update.
Changes in beta2:
-----------------
* Plugin user interface now works correctly on macOS 14 (#28).
* Update clang++ compiler version on macOS.
* Minimum supported macOS version is now macOS 11.
Feedback for any issues you might experience with the beta version is
appreciated.
--
Stefan Westerfeld, http://space.twc.de/~stefan
Hello all,
I'm still working on a new autotuner, zita-at2.
Some examples can be checked here:
<http://kokkinizita.linuxaudio.org/linuxaudio/retune/>
There's no autotune in these, just fixed pitch or formant
shifts - they can now be controlled separately.
Comments welcome, and of course I still need some more
vocal tracks to test...
Ciao,
--
FA