Ralf Mardorf <ralf.mardorf@alice-dsl.net> writes:

> On Fri, 05 May 2017 19:14:52 +0200, David Kastrup wrote:
> A compromise might be to control the DAW mixer and independently to
> control the hardware mixer, too, IOW something like hdspmixer
> controllable by MIDI events.  Do a lot of people need it?

Not enough if _every_ _single_ soundcard needs its own mixer
application coded from scratch.  If one creates a common, useful Jack
API for gained connections that one mixer application can work with,
there will certainly be enough demand to provide a MIDI-controlled
mixer capable of using hardware mixing where available.
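
To make that concrete, here is a rough sketch of what such an API
could look like.  No such calls exist in Jack today; the names and
signatures below are made up purely for illustration:

/* Hypothetical only: Jack has no such API today.  A "gained
 * connection" would carry a gain that jackd realizes either through
 * the card's hardware mixer matrix or by scaling samples in
 * software. */
#include <jack/jack.h>

/* connect two ports with an initial gain (1.0 = unity) */
int jack_connect_gained(jack_client_t *client,
                        const char *source_port,
                        const char *destination_port,
                        float gain);

/* adjust the gain of an existing connection, for example from a
 * MIDI-controlled mixer application */
int jack_connection_set_gain(jack_client_t *client,
                             const char *source_port,
                             const char *destination_port,
                             float gain);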

> That's absurd, since the different sound cards provide completely
> different routing features.

Which is why it makes good sense to have jackd fall back to software
gain when the card's known controls don't allow for doing it in
hardware.  That way you don't need specialized software to make the
best use of your hardware.
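
The probing needed for that decision is already possible with plain
alsa-lib.  A minimal sketch, not jackd code, with card and element
names as mere examples:

/* Sketch, not jackd code: try to apply a gain through an ALSA mixer
 * element and report failure so the caller can fall back to scaling
 * samples in software.  Card and element names are only examples. */
#include <stdio.h>
#include <alsa/asoundlib.h>

/* returns 0 if the card's control accepted the gain, -1 otherwise */
static int set_hw_gain(const char *card, const char *elem_name, long percent)
{
    snd_mixer_t *mixer;
    snd_mixer_selem_id_t *sid;
    snd_mixer_elem_t *elem;
    long min, max;

    if (snd_mixer_open(&mixer, 0) < 0)
        return -1;
    if (snd_mixer_attach(mixer, card) < 0 ||
        snd_mixer_selem_register(mixer, NULL, NULL) < 0 ||
        snd_mixer_load(mixer) < 0) {
        snd_mixer_close(mixer);
        return -1;
    }

    snd_mixer_selem_id_alloca(&sid);
    snd_mixer_selem_id_set_index(sid, 0);
    snd_mixer_selem_id_set_name(sid, elem_name);
    elem = snd_mixer_find_selem(mixer, sid);
    if (!elem || !snd_mixer_selem_has_playback_volume(elem)) {
        snd_mixer_close(mixer);
        return -1;                      /* no usable hardware control */
    }

    snd_mixer_selem_get_playback_volume_range(elem, &min, &max);
    snd_mixer_selem_set_playback_volume_all(elem,
                                            min + (max - min) * percent / 100);
    snd_mixer_close(mixer);
    return 0;
}

int main(void)
{
    if (set_hw_gain("hw:0", "Master", 75) == 0)
        puts("gain applied by the card's own mixer");
    else
        puts("no suitable hardware control, fall back to software gain");
    return 0;
}

Build with -lasound; when the lookup fails, the caller scales the
samples itself.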

> If you mix a project with a DAW, then the software mixer should be
> completely separated from the hardware mixer.

Because what? We don't separate software and hardware plugins either.
They are available as jack ports. So why should a connection not be
hardware where supported?
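
A small illustration, client name arbitrary: hardware ports come out
of the very same call as everything else, distinguished only by the
JackPortIsPhysical flag:

/* list the physical (hardware) ports the way any other port is
 * listed; only the JackPortIsPhysical flag differs */
#include <stdio.h>
#include <jack/jack.h>

int main(void)
{
    jack_client_t *client = jack_client_open("portlist", JackNullOption, NULL);
    if (!client)
        return 1;
    const char **ports = jack_get_ports(client, NULL, NULL,
                                        JackPortIsPhysical);
    for (int i = 0; ports && ports[i]; i++)
        printf("physical port: %s\n", ports[i]);
    jack_free(ports);
    jack_client_close(client);
    return 0;
}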

> If you want to get all in one, consider just using the I/Os of an
> audio interface and buying the right tool for that task, a separated
> mixing console [1].

A separated mixing console takes a whole lot more space than a
nanoKontrol if you are trying to get to a conference by train.
I mean, I'd get the "who's gonna do it" argument. But this is going
more in the "let's keep things as badly interoperable as they are
because that is the way of our fathers" direction.

All I hear is "that is silly", "that is absurd", "that's not the
right way to do things".  But no actual argument why.

>>> EQs, Reverb and delay would be nice for the hardware monitoring,
>>> too, and indeed, proprietary software for a lot of audio
>>> interfaces provides this.

>> For hardware monitoring?  Hardly.  At driver level?  Maybe.  But
>> then it's not fundamentally different to Jack plugins.

> Yes, hardware monitoring done with an external mixing console [1]
> very often is done by adding effects to the monitor signal, so some
> proprietary audio interface software provides this for the audio
> interface's hardware monitoring, too.
>
> The audio interface's hardware monitoring is done at driver level;
> while the software could be split into the driver and mixer
> software, access to the hardware monitoring is still part of the
> driver.
>
> Jack plugins?  Driver level?  Jack is a sound server using a backend
> such as ALSA.

Jack prefers working at realtime priority and some plugins may do so as
well, possibly in relation to priority elevation. Unfettered hardware
access is not necessary for the latency aspects.
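
A minimal sketch of the point, with arbitrary client and port names:
the gain is applied in the process callback, which Jack runs in its
realtime thread, and no hardware access is involved anywhere:

/* Minimal sketch, names arbitrary: a Jack client whose process
 * callback runs in Jack's realtime thread and applies a gain purely
 * in software, with no direct hardware access. */
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *in_port, *out_port;
static float gain = 0.5f;       /* could be driven by a MIDI fader */

static int process(jack_nframes_t nframes, void *arg)
{
    (void)arg;
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    for (jack_nframes_t i = 0; i < nframes; i++)
        out[i] = gain * in[i];
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("sw-gain", JackNullOption, NULL);
    if (!client)
        return 1;
    in_port  = jack_port_register(client, "in", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput, 0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    for (;;)                    /* the realtime work happens in process() */
        sleep(1);
}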

> I'm using hdspmixer for my RME audio interface.  My Focusrite isn't
> supported, so I'm using it class-compliant, without the option to
> have access to the hardware monitoring.  Actually, a driver for
> other Focusrite interfaces from the same series exists, and I most
> likely only need to add an ID to the driver's source code and build
> it to get access to the hardware monitoring, but I simply don't need
> it.  A lot of people don't need hardware monitoring provided by the
> audio interface.

Many affordable USB interfaces are considerably more prone to xruns
if you drive them in full duplex.  So particularly on consumer
hardware (and that's what most people use), making use of the
hardware monitoring offered by the built-in mixers would make good
sense.
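
Jack even has a hook for that already: ports whose backend can do
hardware monitoring carry the JackPortCanMonitor flag, and any client
may request it.  A sketch with an example port name:

/* Sketch: ask Jack to switch on hardware monitoring for a capture
 * port if the backend supports it (JackPortCanMonitor).  The port
 * name is an example; a real application would keep the client
 * around. */
#include <stdio.h>
#include <jack/jack.h>

int main(void)
{
    jack_client_t *client = jack_client_open("hwmon", JackNullOption, NULL);
    if (!client)
        return 1;
    const char *name = "system:capture_1";
    jack_port_t *port = jack_port_by_name(client, name);
    if (port && (jack_port_flags(port) & JackPortCanMonitor))
        jack_port_request_monitor(port, 1);    /* monitoring on */
    else
        printf("%s offers no hardware monitoring here\n", name);
    jack_client_close(client);
    return 0;
}
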
Physical controllers are nice to use for that, and this is why software
like Ardour supports digital controllers which can be used for mixing
instead of a hardware mixer. I don't see the point in maintaining
software/hardware barriers that should only be crossed using
hard-tailored rather than general-purpose software.
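
For what it's worth, mapping a fader on something like a nanoKontrol
to a gain value takes only a few lines with Jack MIDI.  A sketch, with
the CC number and names as examples:

/* Sketch, CC number and names are just examples: read controller
 * events from a Jack MIDI port (say, a nanoKontrol fader on CC 7)
 * and turn them into a gain value a mixer client could apply. */
#include <stdio.h>
#include <unistd.h>
#include <jack/jack.h>
#include <jack/midiport.h>

static jack_port_t *midi_in;
static volatile float gain = 1.0f;

static int process(jack_nframes_t nframes, void *arg)
{
    (void)arg;
    void *buf = jack_port_get_buffer(midi_in, nframes);
    jack_nframes_t n = jack_midi_get_event_count(buf);
    for (jack_nframes_t i = 0; i < n; i++) {
        jack_midi_event_t ev;
        jack_midi_event_get(&ev, buf, i);
        if (ev.size == 3 && (ev.buffer[0] & 0xf0) == 0xb0 && ev.buffer[1] == 7)
            gain = ev.buffer[2] / 127.0f;      /* fader position -> gain */
    }
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("midi-fader", JackNullOption,
                                             NULL);
    if (!client)
        return 1;
    midi_in = jack_port_register(client, "control", JACK_DEFAULT_MIDI_TYPE,
                                 JackPortIsInput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    for (;;) {
        sleep(1);
        printf("gain: %.2f\n", gain);
    }
}
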
--
David Kastrup