On Wed, 2004-06-09 at 03:12, iain duncan wrote:
> > There's a /lot/ more information available in a MIDI performance, so
> > the potential to do interesting things is greater. Flash the screen
> > whenever the kick drum goes, have notes represented on screen as 3D
> > objects using frequency for location, filter cutoff controlling
> > lighting, blah blah etc. etc.
> This is simply not true though! If you have an audio feed that is
> kick only, there is far more information available by analysing the
> audio than with a simple MIDI note on/velocity/duration. If the kick
> sound is spectrally analysed, the light can ride the amplitude and
> frequency content over the course of the note, instead of just
> turning on when the drum starts.
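The "riding the amplitude and frequency content" idea above can be sketched in a few lines: split the audio into frames and compute per-frame RMS amplitude plus a spectral centroid, which could then drive a light's brightness and colour. This is a minimal illustration, not any real analysis library's API; the frame size, sample rate, and the toy decaying-sine "kick" are all assumptions for the example.

```python
import math

def frame_features(samples, sr, frame=256):
    """Per-frame (RMS amplitude, spectral centroid) of a mono signal.
    Hypothetical helper: a sketch of the kind of analysis described,
    using a naive DFT rather than a real FFT library."""
    feats = []
    for start in range(0, len(samples) - frame + 1, frame):
        chunk = samples[start:start + frame]
        rms = math.sqrt(sum(s * s for s in chunk) / frame)
        # naive DFT magnitudes for bins 1 .. frame/2 - 1
        mags = []
        for k in range(1, frame // 2):
            re = sum(s * math.cos(2 * math.pi * k * n / frame)
                     for n, s in enumerate(chunk))
            im = sum(-s * math.sin(2 * math.pi * k * n / frame)
                     for n, s in enumerate(chunk))
            mags.append(math.hypot(re, im))
        total = sum(mags) or 1.0
        # centroid = magnitude-weighted mean of the bin frequencies
        centroid = sum((k + 1) * sr / frame * m
                       for k, m in enumerate(mags)) / total
        feats.append((rms, centroid))
    return feats

# toy "kick": a decaying 60 Hz sine at an assumed 8 kHz sample rate
sr = 8000
kick = [math.exp(-n / 2000) * math.sin(2 * math.pi * 60 * n / sr)
        for n in range(2048)]
features = frame_features(kick, sr)
# the RMS column decays frame to frame, so a light driven by it fades
# over the course of the note instead of just switching on
```

A real setup would use an FFT and a window function, but the point stands: every frame yields fresh control data, where the MIDI note gave one event.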
Mmm ... So if the kick says "Dap Dap", is that because it was triggered
twice? Or is it because it was routed thru a slapback? Or was the
sample, for some obscure reason, made that way?
The MIDI control information is richer and more to the point if you are
interested in monitoring what *causes* the sound. Reverse engineering
the resulting sound makes no sense. For what it is worth, it could be
just a sample of some guy with a giant ghetto-blaster pushing CD-play
...
/jens
> However, the above does require a *lot* more CPU use to break apart
> composite audio channels, or a lot more hardware and CPU use to work
> on multi-track input.
>
> As I said earlier though, there is no reason not to enable both. Or
> even generate MIDI messages based on audio analysis.
>
> Iain
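The last suggestion, MIDI messages derived from audio analysis, can be sketched as a threshold onset detector that emits a note-on with a velocity mapped from the frame amplitude. Everything here is illustrative: the function name, the 0.1 threshold, and the RMS stream are assumptions, and the tuples only mimic MIDI messages rather than using a real MIDI library.

```python
def onsets_to_midi(frames_rms, threshold=0.1):
    """Emit ("note_on", frame_index, velocity) events whenever the
    per-frame RMS amplitude crosses the threshold from below."""
    events = []
    above = False
    for i, rms in enumerate(frames_rms):
        if rms >= threshold and not above:
            # map amplitude into the MIDI velocity range 0-127
            velocity = min(127, int(rms * 127))
            events.append(("note_on", i, velocity))
            above = True
        elif rms < threshold:
            above = False
    return events

# two "hits" in a stream of per-frame amplitudes
rms_stream = [0.0, 0.8, 0.5, 0.05, 0.0, 0.9, 0.3, 0.0]
events = onsets_to_midi(rms_stream)
# → [('note_on', 1, 101), ('note_on', 5, 114)]
```

This is the "both at once" point in miniature: the audio side supplies continuous data, and the same analysis can still hand discrete MIDI-style events to anything downstream that expects them.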