[Hans Fugal]
I'm about to write a DSSI/LADSPA plugin that, among other things, detunes
the signal by up to 15 cents. My understanding is that detuning is
accomplished by resampling. If that's the case, what do you do
with the time difference? Do you pad/truncate to get the same number of
samples you started out with? Wouldn't that introduce undesirables?
It would. As far as I know, there are two common ways to do this:
You can periodically window and granulate the signal, resample the
grains and resynthesize -- that's the faster way, doing it all in the
time domain. You'll get some comb filtering where the phases of
overlapping grains cancel, but since you're only detuning by 15 cents,
it might turn out just fine. There's a library out there that does
this; I can't seem to remember its name though. Anyone?
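To make that concrete, here's a rough time-domain sketch of the idea,
not production code: each grain is read from the input at the detune
ratio 2^(cents/1200) via linear interpolation, Hann-windowed, and
overlap-added back at its original position, so the output length
matches the input. All the names and sizes here (granular_detune,
read_lin, GRAIN, HOP) are invented for illustration.

/* Minimal granulate-and-resample sketch, assuming mono float buffers. */
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define GRAIN 1024        /* grain length in samples */
#define HOP   (GRAIN / 4) /* 75% overlap; Hann windows then sum to 2.0 */

/* Linear-interpolation read at fractional position pos, clamped. */
static float read_lin(const float *in, size_t len, double pos)
{
    size_t i = (size_t)pos;
    double frac = pos - (double)i;
    if (i + 1 >= len)
        return in[len - 1];
    return (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
}

/* Detune `in` by `cents`, overlap-adding into `out` (zeroed, len samples).
 * The output stays the same length as the input: each grain is re-anchored
 * at its original start, only its contents are read at the new rate. */
void granular_detune(const float *in, float *out, size_t len, double cents)
{
    double ratio = pow(2.0, cents / 1200.0); /* +15 cents -> ~1.0087 */

    for (size_t start = 0; start + GRAIN <= len; start += HOP) {
        for (size_t n = 0; n < GRAIN; n++) {
            float w = 0.5f * (1.0f - cosf(2.0f * (float)M_PI * n / GRAIN));
            double pos = (double)start + (double)n * ratio;
            /* 0.5 compensates for the overlapped windows summing to 2. */
            out[start + n] += 0.5f * w * read_lin(in, len, pos);
        }
    }
}

The phase mismatch between neighbouring grains where they overlap is
exactly where the comb filtering comes from.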
The other way I know of is to use a phase vocoder. Basically, you
periodically window and granulate and then FFT the signal, do a bit of
math on the frequency content and resynthesize. The phase vocoder
keeps track of signal phase, so the comb filtering effect is not an issue.
The downside is the FFT and the math required to compute phase and
amplitude; it's quite a bit heavier on the CPU. Some code that does it
this way is at quitte.de/dsp/pvoc.html, but for your intended use I'd
first look at the simpler solution mentioned above.
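For a feel of the math involved, here's the per-bin core of phase
vocoder analysis: estimating a bin's true frequency from the phase
advance between two FFT frames. This is just the frequency-estimation
step, not a full pitch shifter, and the names are mine, not taken from
the pvoc code linked above.

/* Per-bin frequency estimation, the heart of phase vocoder analysis. */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Given a bin's phase in the current and previous analysis frames,
 * return its true frequency in Hz. fft_size and hop are in samples. */
double bin_true_freq(double phase_now, double phase_prev,
                     int bin, int fft_size, int hop, double sample_rate)
{
    double bin_freq = 2.0 * M_PI * bin / fft_size;  /* rad per sample */
    double expected = bin_freq * hop;               /* expected advance */
    double delta = (phase_now - phase_prev) - expected;

    /* Wrap the deviation into [-pi, pi): the measured phase advance is
     * only known modulo 2*pi. */
    delta -= 2.0 * M_PI * floor(delta / (2.0 * M_PI) + 0.5);

    return (bin_freq + delta / hop) * sample_rate / (2.0 * M_PI);
}

Resynthesis then scales these frequencies by the pitch ratio and
re-accumulates phase frame by frame, which is where the extra CPU cost
comes from.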
And of course, source code for these:
$ locate pitch|grep ladspa
/usr/lib/ladspa/am_pitchshift_1433.so
/usr/lib/ladspa/pitch_scale_1193.so
/usr/lib/ladspa/pitch_scale_1194.so
/usr/lib/ladspa/tap_pitch.so
should be easily obtainable. :)
Cheers, Tim