On Sun, Aug 17, 2014 at 06:34:25PM -0700, Len Ovens wrote:
> I would think ((Gain+54)/64)*7f uses a lot less CPU time than a real
> (proper) log. Think 8 fingers (plus thumbs?) fading around 80 steps in a
> small time. Remember that this calculation has to be done at both ends too
> and the receiving end also has to deal with doing more calculation on as
> many as 64 tracks of low latency audio at the same time (amongst other
> things).
In the equation you mention, 'gain' is in dB; it doesn't make any
sense otherwise. So after applying the inverse of that equation
you need to convert from dB to a linear gain value.
If M is the value sent by the controller (0..127) and G is the gain
in dB, then

  G = (64 * M / 127) - 54

which will be in the range -54..+10 dB. M = 0 is probably interpreted
as 'off' instead of -54 dB, as a special case. The step size is
64 / 127 dB, i.e. about 0.504 dB.
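
For reference, a minimal C sketch of that mapping (the function names
are mine, not taken from any existing code):

  #include <math.h>

  /* MIDI value M (0..127) to gain in dB; M = 0 is treated as 'off'. */
  static float midi_to_db (int M)
  {
      if (M <= 0) return -200.0f;          /* effectively 'off'  */
      return 64.0f * M / 127.0f - 54.0f;   /* -54 dB .. +10 dB   */
  }

  /* dB to linear amplitude. */
  static float db_to_linear (float dB)
  {
      return powf (10.0f, dB / 20.0f);
  }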
A slightly better mapping would be 80 steps of 0.5 dB for the
range +10..-30 dB, then smoothly increase the step size to arrive
at a minimum gain of -70 dB or so. Even for this the calculations
are trivial.
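
One possible way to realise such a taper (the knee point and the
constants below are my own choice, just an illustration):

  /* Top 80 values of M cover +10..-30 dB in 0.5 dB steps; below the
   * knee the step size grows smoothly so that M = 1 lands at -70 dB. */
  static float midi_to_db_tapered (int M)
  {
      if (M <= 0) return -200.0f;                   /* 'off'              */
      if (M >= 47) return 10.0f - 0.5f * (127 - M); /* +10..-30, 0.5 dB   */
      float d = 47.0f - M;                          /* distance below knee */
      return -30.0f - 0.5f * d - 0.008034f * d * d; /* -30 dB .. -70 dB   */
  }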
There is no problem with CPU use. On the sender side you transmit
0..127, which is just the 7 bits of an ADC measuring the voltage from
a linear fader or pot; there is no mapping at all.
On the receiver side, the conversion needs to be done just once
per 25 ms or so and only when the gain changes. Then interpolate
linearly in between, or not at all if the value hasn't changed.
The resulting CPU load is absolutely irrelevant compared to what
has to be done anyway.
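
As a sketch of what that could look like on the receiving end (the
struct and function names, and the idea of ramping over one audio
block, are assumptions of mine):

  #include <math.h>

  typedef struct
  {
      float current;   /* linear gain at the start of the block   */
      float target;    /* linear gain requested by the controller */
  } GainSmoother;

  /* Called only when a new MIDI value arrives, e.g. once per 25 ms. */
  static void gain_set_midi (GainSmoother *g, int M)
  {
      float dB = 64.0f * M / 127.0f - 54.0f;
      g->target = (M > 0) ? powf (10.0f, dB / 20.0f) : 0.0f;
  }

  /* Called once per audio block; ramps linearly towards the target. */
  static void gain_process (GainSmoother *g, float *buf, int nframes)
  {
      if (g->target == g->current)
      {
          for (int i = 0; i < nframes; i++) buf[i] *= g->current;
          return;
      }
      float step = (g->target - g->current) / nframes;
      for (int i = 0; i < nframes; i++)
      {
          g->current += step;
          buf[i] *= g->current;
      }
  }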
As an example, my 4-band EQ LADSPA plugin does this sort of calculation
for each of its 13 parameters, and calculating the internal filter
parameters is a bit more complicated than dB to linear. It makes
no difference at all in CPU use, which is dominated by the per-sample
processing.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)