On Wed, 19 Oct 2016 01:02:11 +0200
Robin Gareus <robin(a)gareus.org> wrote:
> User: Guys, give me the Dire Straits!
> EQ: Hey Overdrive, I'll add the brightness if you add the distortion.
> Overdrive: I'm in, dude.
> Reverb: I'll be your reverb and we'll sound like brothers in arms.
> EQ: Deal.
> Overdrive: EQ, please engage that low-shelf of yours, I'm a bit bassy today.
> Picard: Make it so.
Again, it is not so much about creating personalized tones. It is for the
mixing stage where, hopefully, when the mixing engineer is on the payroll
at $300 an hour (highly hypothetical), the decisions about the tones of
the tune have already been made. By humans.
The point is to assist the mixing engineer at the mixing stage.
> You not only need a protocol, but a complete language for detailed
> behavioral description of all building blocks, e.g. "How does a given
> effect affect the signal phase depending on <parameter-set>".
Of course. Naturally.
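To make the "behavioral description" idea concrete, here is a minimal sketch in Python of what such machine-readable metadata for one building block could look like. Everything in it is hypothetical: the field names, the parameter set and the phase/gain formulas are placeholders for illustration, not part of any existing plugin standard (LV2, VST or otherwise).

# Hypothetical behavioral description of one DSP building block.
# Not a real plugin standard; the formulas are placeholders,
# not actual shelving-filter maths.

import math

lowshelf_description = {
    "name": "ExampleEQ low shelf",  # invented name for the example
    "parameters": {
        "freq_hz": {"min": 20.0, "max": 500.0, "default": 100.0},
        "gain_db": {"min": -12.0, "max": 12.0, "default": 0.0},
    },
    # How the block affects the signal, as functions of the parameter set.
    "behaviour": {
        # Placeholder estimate of the phase shift at a probe frequency.
        "phase_shift_deg": lambda p, probe_hz:
            -math.degrees(math.atan(probe_hz / p["freq_hz"])) * (p["gain_db"] / 12.0),
        # Placeholder gain curve: full shelf gain below the corner frequency.
        "gain_db_at": lambda p, probe_hz:
            p["gain_db"] if probe_hz <= p["freq_hz"] else 0.0,
    },
}

if __name__ == "__main__":
    params = {"freq_hz": 120.0, "gain_db": -3.0}
    probe = 80.0  # Hz
    behaviour = lowshelf_description["behaviour"]
    print("phase at %.0f Hz: %5.1f deg" % (probe, behaviour["phase_shift_deg"](params, probe)))
    print("gain  at %.0f Hz: %+5.1f dB" % (probe, behaviour["gain_db_at"](params, probe)))

An auto-configuration "language" would need many more entries of this kind (latency, nonlinearity, gain staging, ...) for every block before two plugins could meaningfully negotiate anything.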
> I don't see plugin auto-config happen with a random set of plugins
> anytime soon, and certainly not in a decentralized way with decision
> logic built into the respective plugins. Then again, that would be a
> cool project, maybe ask the DeepMind team.
Cool, this DeepMind thing.
> Realistically it'll be a fixed set of plugins, the analysis tool has
> pre-shared knowledge about the available DSP + parameter behavior and
> it is trained (neural network, heuristics, presets, ...) specifically
> for those plugins.
Yes, this is very likely how it will start out. At least it will not be
tied to a specific DAW: packaged as a plugin, it will work with all the
DAWs out there.
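As a rough illustration of that "fixed set + pre-shared knowledge" idea, here is a small sketch in Python. The plugin names, the measured features and the rule-based suggestion logic are all invented for the example; a real tool would replace the if/else rules with something trained on the known plugin set.

# Hypothetical, heuristic sketch of a centralized analysis tool that only
# knows a fixed set of plugins and suggests parameter settings for them.
# Plugin names, parameter ranges and rules are invented for illustration.

KNOWN_PLUGINS = {
    "ExampleEQ":   {"low_shelf_gain_db": (-12.0, 12.0), "low_shelf_freq_hz": (20.0, 500.0)},
    "ExampleComp": {"threshold_db": (-40.0, 0.0), "ratio": (1.0, 10.0)},
}

def clamp(value, lo, hi):
    """Keep a suggested value inside the parameter range we know about."""
    return max(lo, min(hi, value))

def suggest_settings(features):
    """Map simple track measurements to settings for the known plugins.

    `features` is a dict of measurements extracted from the track, e.g.
    {"low_freq_excess_db": 4.0, "crest_factor_db": 14.0}. A trained model
    (neural network, heuristics, presets, ...) would sit here; these
    if/else rules are only a stand-in.
    """
    suggestions = {}

    # Too much low end -> pull the (hypothetical) EQ low shelf down a bit.
    excess = features.get("low_freq_excess_db", 0.0)
    if excess > 1.0:
        lo, hi = KNOWN_PLUGINS["ExampleEQ"]["low_shelf_gain_db"]
        suggestions["ExampleEQ"] = {
            "low_shelf_gain_db": clamp(-excess, lo, hi),
            "low_shelf_freq_hz": 120.0,
        }

    # Very dynamic signal -> suggest gentle compression.
    crest = features.get("crest_factor_db", 0.0)
    if crest > 12.0:
        lo, hi = KNOWN_PLUGINS["ExampleComp"]["ratio"]
        suggestions["ExampleComp"] = {
            "threshold_db": -18.0,
            "ratio": clamp(2.0 + (crest - 12.0) / 4.0, lo, hi),
        }

    return suggestions

if __name__ == "__main__":
    print(suggest_settings({"low_freq_excess_db": 4.0, "crest_factor_db": 16.0}))

Because everything lives in one analysis tool that ships with its own knowledge of the plugin set, nothing here requires decision logic inside the plugins themselves, which is exactly why a fixed set is so much easier than a random one.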