Well, since SOMEONE has to pipe up and say Pd can do
it, might as well
be me. :)
Pd with GEM is pretty cool as far as visualization goes. Far better
than anything I've ever seen or heard of actually, because you can
visualize MIDI any way you want, which has waaay more potential than
visualizing a boring old waveform (digital audio).
I'd say the opposite is true. MIDI is lame for visualizing a
continuous data stream, say for example mapping frequency content or
amplitude to a continuous visualization. While MIDI is easy to work
with, 7-bit CC values are pretty coarse. The trick to making the
waveform visualization interesting is to allow the user to break up the
composite waveform into user defined frequency bins so you can ride
the amplitude of specific frequency areas.
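For what it's worth, that sort of bin-riding is only a few lines in
Csound. Something like this (a rough, untested sketch; the 200 Hz
centre and 100 Hz bandwidth are just placeholder numbers for one
user-defined bin):

    instr 1
      ain   inch 1                   ; one channel of live audio in
      aband butterbp ain, 200, 100   ; isolate one user-defined frequency bin
      kamp  rms aband                ; ride the amplitude of that bin
      ; kamp is now a k-rate control you can map onto any visual parameter
    endin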
Iain
But you have lots of continuous controllers (is it 31?), and they control
16 channels separately, and on each channel you have identifiable
instruments playing at different times. This is a much richer stream
of musically relevant information to display than just what you can
derive from the output signal.
No, you're assuming this is a two-track-only input. Why would the app be
limited to only two audio tracks at once? What I would want to play with
is something that allows:
a) multi-track in, so that the music could be fed in with different
instruments per track for say visuals coming off a live multi-track setup.
b) filtering of tracks to separate an individual track into more tracks
for analysis, i.e. if you had only a two-track input you could use
user-controlled band-pass filters to separate the track into frequency
bins, allowing you to zero in on say the underlying bass pulse, or
underlying high-end pulse (see the sketch below).
These could of course work side by side with midi too, but I would think
the data available from audio analysis would be much richer and at a
finer resolution.
One could certainly do the above with either csound or PD ( among others
I'm sure! ).
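In Csound, for instance, (b) might look roughly like this (an untested
sketch; the channel numbers and the 150 Hz / 5 kHz cutoffs are just
placeholders):

    instr 1
      aL    inch 1                  ; two-track input
      aR    inch 2
      amono =  (aL + aR) * 0.5      ; mix down for analysis
      abass butterlp amono, 150     ; underlying bass pulse
      ahigh butterhp amono, 5000    ; underlying high-end pulse
      kbass rms abass               ; k-rate amplitude of each bin
      khigh rms ahigh
      ; kbass and khigh can each drive their own visual parameter
    endin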
Greg
The problem with the MIDI CCs is low resolution in both bit depth and
time ( MIDI chokes at a pretty low message throughput ) and that of
course you have to write these CC movements yourself to begin with. If
that was the plan, I think it would be far preferable to just use Csound
krate variables and make some sort of plugin/opcode to send the krate
variables to whatever visualization engine one wanted. Fortunately this
will actually be practical with all the new developments in Csound 5.
Hooray to the csound developers! = )
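As a very rough sketch of what I mean (assuming Csound 5 built with the
OSC opcodes; the port number and address are invented, and a visualizer
would have to be listening on the other end):

    instr 1
      ain   inch 1
      krms  rms ain                  ; any k-rate variable you want to ship out
      ktrig metro 30                 ; throttle to ~30 messages per second
      kwhen init 0
      kwhen =  kwhen + ktrig         ; OSCsend fires each time kwhen changes
      OSCsend kwhen, "localhost", 7770, "/csound/amp", "f", krms
    endin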
Hope that makes some kind of sense, it's late here. ; )
Iain