Hi list,
I have heard rumors that Calf plugins could soon be dropped from
distributions, mainly due to the GTK2 dependency (see e.g. the issue
where the Arch Linux maintainer was already warning about this in
2022) [1]. The last activity on the Calf GitHub repository seems to be
about three years ago.
I'd say the only plugin I used and still use quite a lot is the 'Vintage
Delay', mainly because of its controls [2] and its 'final' result /
effect (i.e. what I hear). So essentially I'm considering the creative /
expressive aspects of this plugin, not its 'technical' features.
I know technical talks about these plug-ins are particularly
'flammable', but please keep in mind my question is purely posed as
'linux audio user' i.e. some dude trying to use Linux and FOSS to make
music - do also keep in mind this is/was (also historically as LADSPA)
the most widespread delay plugin around ;-)
BPM (synced or arbitrary) and the stereo features ('ping-pong' or
stereo mode), including musical (sub)divisions of the beat, are maybe
the most important features for me, as well as the straightforward
controls (from a UX / musical workflow perspective). I.e. I'd like a
delay plugin where I don't have to solve BPM-to-millisecond formulas
and calculate left and right delays each time (for that I already have
a self-made Pd patch).
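Just to make the arithmetic concrete, the BPM-to-millisecond conversion
such a plugin would do internally can be sketched in a few lines (a
minimal illustration I wrote for this mail, not code from any
particular plugin; the function name is made up):

```python
# Hypothetical helper: delay time in milliseconds for a given tempo
# and musical division of the beat.
def delay_ms(bpm: float, division: float = 1.0) -> float:
    """One beat at `bpm` lasts 60000/bpm ms; scale by `division`.

    division examples: 1.0 = quarter note, 0.5 = eighth,
    0.75 = dotted eighth (a classic ping-pong tap).
    """
    return 60000.0 / bpm * division

# At 120 BPM: a dotted-eighth left tap and a quarter-note right tap.
left = delay_ms(120, 0.75)   # 375.0 ms
right = delay_ms(120, 1.0)   # 500.0 ms
```

This is exactly the calculation I'd rather not redo by hand for every
track, which is why BPM sync matters so much to me.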
Any recommendations for possible substitutes, ideally FOSS? Even
better, something I could just drop in as a replacement on the stereo
tracks / busses where this plug-in exists in my projects?
Or, if such a thing doesn't exist, are any friends from LAD willing to
make one (I'd be willing to help with requirements, testing, etc.)?
I have a proof of concept of a simpler 'stereo delay' (without the
musical times) made in Pd which mimics this, but I'd prefer an LV2 (or
similar) plugin. Also, with the 'echo' plugin in recent Yoshimi
versions, which can sync to BPM and allows independent left/right
delay controls, I have obtained similar creative results (albeit only
for Yoshimi sounds, of course).
Lorenzo
[1] https://github.com/calf-studio-gear/calf/issues/248
[2] https://calf-studio-gear.org/doc/Vintage%20Delay.html
Hello all,
zita-jclient-0.5.2 is now available at
<http://kokkinizita.linuxaudio.org/linuxaudio/downloads/index.html>
This version is required for the 'freewheeling' classes in
zita-jacktools to work. They will fail silently with older
versions of zita-jclient.
Ciao,
--
FA
Am 07.05.24 um 01:05 schrieb David W. Jones:
> The Fediverse's annual music contest/festival Fedivision is back
>
> The deadline for entries is next Sunday, voting will be two weeks later.
Well, apparently this was first announced in February:
https://fedivision.party/fedivision-2024/
Unfortunately, at this late date I think it's rather unrealistic for
most musicians to still finish a submission by Sunday if they haven't
started yet, especially since it needs to be an original, previously
unreleased composition.
Thanks anyway for the heads up, I didn't know of this contest.
Chris
Ratatouille is a Neural Model loader and mixer for Linux/Windows.
![Ratatouille](https://github.com/brummer10/Ratatouille.lv2/blob/main/Ratatouille.png?raw=true)
Ratatouille allows loading up to two neural model files and mixing
their output. Those models can be [*.nam files](https://tonehunt.org/all) or
[*.json or .aidax files](https://cloud.aida-x.cc/all). So you could
blend from clean to crunch, for example, or go wild and mix different
amp models, or mix an amp with a pedal simulation.
To round out the sound, it allows loading up to two Impulse Response
files and mixing their output as well. You could try the wildest
combinations, or be conservative and load just your single preferred
IR file.
Each neural model may expect a different Sample Rate; Ratatouille
will resample the buffer to match it.
Ready-to-use binaries are available on the Release Page. Please note
that Ratatouille.lv2-v3-v0.2-linux-x86_64.tar.xz is a fully optimized
version using the x86-64-v3 instruction set. You can check whether
your system supports that by running
`/usr/lib64/ld-linux-x86-64.so.2 --help 2>/dev/null | grep 'x86-64-v3 (supported'`
If that returns nothing, your system can't use this version; in that
case you should choose Ratatouille.lv2-v0.2-linux-x86_64.tar.xz.
Impulse Response Files will be resampled on the fly to match the session
Sample Rate.
To build from source, please use Ratatouille.lv2-v0.2-src.tar.xz, as
only that archive contains the needed submodules.
Release Page:
https://github.com/brummer10/Ratatouille.lv2/releases/tag/v0.2
Project Page:
https://github.com/brummer10/Ratatouille.lv2
Hello all,
Several people have asked how the pitch estimation
in zita-at1 works.
The basic method is to look at the autocorrelation
of the signal. This is a measure of how similar a
signal is to a time-shifted version of itself. It
can be computed efficiently as the inverse FFT of
the power spectrum.
In many cases the strongest autocorrelation peak
corresponds to the fundamental period. But this can
easily get ambiguous, as there will also be peaks at
integer multiples of that period, and, for strong
harmonics, at fractions of it. To avoid errors it is
necessary to also look at the signal spectrum and
level, and combine all that info in some way. How
exactly is mostly a matter of trial and error, which
is why I need more examples.
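One common heuristic for the multiple-of-the-period ambiguity (a
generic sketch of the idea, not necessarily what zita-at1 does) is to
check whether the autocorrelation at half the candidate lag is almost
as strong, and if so prefer the shorter period:

```python
import numpy as np

def resolve_octave(r: np.ndarray, lag: int, thresh: float = 0.85) -> int:
    """Hypothetical octave-ambiguity check (illustrative only).

    If the autocorrelation at half the candidate period is nearly as
    strong as at the period itself, the longer lag is probably a
    subharmonic (octave error), so prefer the shorter period.
    """
    half = lag // 2
    if half > 0 and r[half] >= thresh * r[lag]:
        return half
    return lag
```

The threshold is exactly the kind of parameter that ends up being
tuned by trial and error on real recordings.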
Have a look at
<http://kokkinizita.linuxaudio.org/linuxaudio/pitchdet1.png>
This is a test of the pitch detection algorithm used
in zita-at1.
The X-axis is time in seconds, a new pitch estimate is
made every 10.667 ms (512 samples at 48 kHz).
Vertically we have autocorrelation, the Y-axis is in
samples. Red is positive, blue negative. The green dots
are the detected pitch period, zero means unvoiced.
The blue line on top is signal level in dB.
Note how this singer has a habit of letting the pitch
'droop', by up to an octave, at the end of a note. He
is probably not aware of it. This happens at 28.7s,
again at 30.8s, and in fact during the entire track.
What should an autotuner do with this? Turn the glide
into a chromatic scale? The real solution here would
be to edit the recording, adding a fast fadeout just
before the 'droop'. Even a minimal amount of reverb
will hide this.
The fragment from 29.7 to 30.3s is an example of a
vowel with very strong harmonics which show up as
the red bands below the real pitch period. In this
case the 2nd and 3rd harmonic were actually about 20
dB stronger than the fundamental. This is resolved
because the autocorrelation is still strongest at
the fundamental pitch.
The very last estimate in the next fragment (at 30.85s)
is an example of where this goes wrong: the algorithm
selects twice the real pitch period, assuming the
first autocorrelation peak is the 2nd harmonic.
This happens because there was significant energy
at the subharmonic, actually leakage from another
track via the headphones used by the singer.
The false 'voiced' detection at 30.39s is also the
result of a signal leaking in via the headphones.
Ciao,
--
FA