On Thu, Jun 19, 2003 at 11:23:07PM +0200, Tom Weber wrote:
> Instead of doing a discrete Fourier transform when reading a small
> frame of the sound, do a dense transform (every 0.1 Hz?) and pick out
> the peaks. Then assume that a similar enough frequency in the next
> frame comes from the same source; keep joining those and your sound
> will be represented by a lot of oscillators (wavelets) with amplitude
> and frequency curves. I guess this would overcome the weaknesses of
> having fixed frequencies.
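The scheme described above (a heavily zero-padded FFT per frame, peak
picking, then joining similar frequencies across frames) can be sketched
roughly as below. The frame size, hop, padding factor, peak threshold and
the 20 Hz "similar enough" limit are all assumptions, not anything from
the original mail:

```python
import numpy as np

def frame_peaks(frame, fs, pad_factor=8, thresh=0.1):
    """Zero-padded FFT of one frame (a 'dense' transform) and the
    frequencies of spectral peaks above thresh (relative to the max)."""
    n = len(frame) * pad_factor
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n))
    spec /= spec.max()
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    # local maxima above the threshold
    idx = np.where((spec[1:-1] > spec[:-2]) &
                   (spec[1:-1] > spec[2:]) &
                   (spec[1:-1] > thresh))[0] + 1
    return freqs[idx]

def track_partials(frames, fs, max_jump=20.0):
    """Join peaks of successive frames into tracks whenever the
    frequency moves by less than max_jump Hz."""
    tracks = []      # finished tracks: lists of (frame_index, freq)
    active = []      # tracks still open
    for i, frame in enumerate(frames):
        peaks = list(frame_peaks(frame, fs))
        still = []
        for tr in active:
            last = tr[-1][1]
            if peaks:
                j = int(np.argmin([abs(p - last) for p in peaks]))
                if abs(peaks[j] - last) < max_jump:
                    tr.append((i, peaks.pop(j)))
                    still.append(tr)
                    continue
            tracks.append(tr)        # no match: close this track
        for p in peaks:              # unmatched peaks open new tracks
            still.append([(i, p)])
        active = still
    return tracks + active

# a steady 440 Hz tone should come out as one long track near 440 Hz
fs = 8000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t)
frames = [sig[k:k + 512] for k in range(0, fs - 512, 256)]
tracks = track_partials(frames, fs)
longest = max(tracks, key=len)
print(len(longest), np.mean([f for _, f in longest]))
```

For real signals the amplitude of each peak would be stored alongside its
frequency, giving the amplitude and frequency curves of each oscillator.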
There may be some problems:

- What about noise-like signals?
- For a resolution of 0.1 Hz you need 10 seconds of sound. Only a pure
sine wave lasting the whole 10 s will transform as a 'peak'; everything
else will be smeared out.
The normal way to use wavelets is to make them shorter at high frequencies,
a sort of DFT with a log frequency scale.
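The two points above are just the time-frequency tradeoff in numbers. A
minimal sketch, assuming a 48 kHz sample rate and a Q of 34 (roughly
quarter-tone resolution, my choice, not from the mail):

```python
fs = 48000                    # sample rate (assumed)
df = 0.1                      # desired resolution in Hz
n = int(fs / df)              # samples needed: fs / df
print(n / fs)                 # length of sound required, in seconds

# the wavelet / log-scale compromise: window length proportional
# to 1/f, so resolution is a fixed fraction of the frequency
q = 34                        # ~ quarter-tone resolution (assumed)
for f in (55.0, 440.0, 3520.0):
    n_f = int(q * fs / f)     # window length for this frequency
    print(f, n_f / fs)        # seconds of signal per analysis window
```

So a fixed 0.1 Hz grid needs 10 s windows at every frequency, while the
log-scale version uses long windows only where the frequencies are low.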
> This idea shouldn't be new, where can I read about it? Has it been
> implemented anywhere?
There are lots of resources on wavelets on the web. Ask Google.
--
FA