The bad news is that the best way to use
this new approach is to rethink the whole process of mixing and
mastering.
Or we can just find interesting ways to use the technology within the
current framework...
To Plug-In or not to Plug-In:
<SNIP>
So rather than speaking to developers about how to improve my
programs, as was suggested, I really need to speak with potential
users about what they might need or want, whether that be a plugin or
something completely different.
Well, the only issue that I've had with the conversation so far is that I do
not know what environment my consumer is going to listen to my music in.
Will he listen on a stereo, with real room acoustics, or on headphones,
without them? And how do I make one mix that works for both
environments?
I don't see that as practical today, but maybe I'm missing your point. So,
to make my point most clearly, I will do only a single final mix today for
mastering. Should it use this technology or not, given that I don't know
the listening environment?
For example, these same programs can also be used to create
instruments. (A room can be regarded as part of a three-dimensional
instrument.)
I think this was part of Ron's POV. Cool.
On the approach used --- IR?:
Mark Knecht asked whether or not this work was IR-based. I assume
that this means "impulse response" function based.
Sorry. Yes, that is what I meant. I'm sort of bug-eyed these days looking at
IR waveforms - gigabytes' worth of them.
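
For anyone following along, "IR-based" just means the processing is a
convolution of dry audio with a measured impulse response. Here is a
minimal offline sketch in Python of that core operation; the file names
and the mono assumption are mine, purely for illustration, and this is
the generic technique rather than my actual programs:

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Load a dry recording and a measured impulse response (both assumed
# to be mono WAV files at the same sample rate; names are illustrative).
rate, dry = wavfile.read("dry.wav")
ir_rate, ir = wavfile.read("room_ir.wav")
assert rate == ir_rate, "resample first if the rates differ"

# Work in float to avoid integer overflow during the convolution.
dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

# The core of any IR-based reverb: y = x * h (linear convolution).
wet = fftconvolve(dry, ir)

# Normalize and write out the result.
wet /= np.max(np.abs(wet))
wavfile.write("wet.wav", rate, (wet * 32767).astype(np.int16))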
What next?
To summarize a little bit: I can do a lot of different things here,
depending upon what people are interested in. I can try to write a
plugin that applies impulse response functions that I have generated;
I could perhaps make available the programs for producing them; I
could write a program that assists in applying them; I could write an
instrument generator; I could release a library of the utilities. Or
I could just do what Jörn suggested and wrap up what I've already
done. I suspect this would be the least useful approach for most
people, but the best approach for me and for potential collaborators.
I think this makes sense, actually. My input would be that you need more
ears and more recorded sources to help you understand where this works well
and where it doesn't. Electronica vs. symphonic, jazz trios vs. grunge rock,
complete sound tracks vs. single instruments, drums vs. guitars vs. vocals
vs. dot.dot.dot...
I presume that this technology does not lend itself to automation controls
within a real-time environment? When parameters are changed with the IR
stuff I'm using, there is a very large recalculation overhead before I can
start using it again. True for yours too? (If not, then a plugin could have
lots of uses!)
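
To show what I mean by recalculation overhead, here's a rough Python
sketch of the precompute step that a typical partitioned-convolution
engine caches. The block size and IR length are made-up numbers, and I
don't know whether your programs work this way; the point is just that
any parameter change that alters the IR forces all of this to be redone
before processing can resume:

import numpy as np

def partition_ir(ir, block_size):
    """Precompute the per-partition spectra that a partitioned-convolution
    engine caches. If a parameter change alters the IR, this must be
    called again for the whole IR; that is the recalculation overhead
    being discussed."""
    fft_size = 2 * block_size  # room for overlap (overlap-add/save)
    n_parts = int(np.ceil(len(ir) / block_size))
    spectra = []
    for i in range(n_parts):
        part = ir[i * block_size:(i + 1) * block_size]
        buf = np.zeros(fft_size)
        buf[:len(part)] = part
        spectra.append(np.fft.rfft(buf))  # one FFT per partition
    return spectra

# A 10-second IR at 48 kHz with 512-sample blocks gives ~938 partitions,
# so a single parameter tweak triggers ~938 FFTs before audio can flow.
ir = np.random.randn(48000 * 10)
cached = partition_ir(ir, 512)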
From my perspective, I think this is interesting work you've done. I hope
you'll release a command-line version for interested folks to use and give
you feedback.
Thanks once again for your comments and for listening to the demo.
I'd appreciate further discussion, either here or privately by email.
I've got a little more on the idea of a plugin and on real-time
concerns, which I'll send a little later.