[LAD] Fader mapping - was - Ardour MIDI tracer

Len Ovens len at ovenwerks.net
Thu Aug 21 15:30:30 UTC 2014


On Thu, 21 Aug 2014, Fons Adriaensen wrote:

> On Wed, Aug 20, 2014 at 07:37:43AM -0700, Len Ovens wrote:
>
>> http://www.allen-heath.com/media/GLD-Faders_2800.jpg
>> As you say, 0 is a special case for off.
>> Everything below -10 uses the same amount of travel for 10 dB.
>> -10 to -5 and +5 to +10 get the same travel for 5 dB that the lower
>> parts get for 10 dB.
>> -5 to +5 uses more travel again.
>
> To me it looks as if the range -10..+10 is expanded. -5..+5 looks
> even more expanded, but that's just because they left out the
> marks at +/-2.5.
>
> This is very easy to achieve with a simple calculation. The
> only problem is that the 'dB/mm' value changes almost stepwise
> at -10, which makes a smooth fade more difficult. The solution
> is to use either more linear (in dB) sections, or to make the
> dB/mm change more smoothly below the linear region. This is
> my favourite solution. Conversion in both directions is still
> quite easy, code on request.

On a 64-bit machine, 16 memory locations (128 bytes) would hold a tick-by-tick 
map of values, and a char[128] lookup table would be very fast. However, I am 
wondering how accurate the marks next to a fader are meant to be, and whether 
such a non-linear map makes sense anyway. If our steps end up being 0.5 dB in 
any case, then a "linear" (well, straight log) surface from top to bottom seems 
reasonable, in that the same travel always gives the same change. That is where 
I am going to start on the hardware side: a linear 0.5 dB per 1/128 of movement.
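
Something like the sketch below is what I mean by the table. It is only an 
illustration: the +6 dB top, the 0.5 dB per tick slope (which puts tick 1 
around -57 dB) and the names (fader_halfdb, fader_gain) are my own choices, 
not anything fixed.

/* Tick-by-tick map: one signed byte per MIDI tick, holding the level
 * in half-dB units so the whole table fits in 128 bytes.  Assumes
 * +6 dB at tick 127, 0.5 dB per tick below that, tick 0 = off. */
#include <math.h>
#include <stdint.h>

#define FADER_OFF INT8_MIN          /* sentinel for tick 0 = off     */

static int8_t fader_halfdb[128];    /* tick -> level in 0.5 dB units */

static void init_fader_table(void)
{
    fader_halfdb[0] = FADER_OFF;
    for (int tick = 1; tick < 128; tick++)
        fader_halfdb[tick] = (int8_t)(12 - (127 - tick)); /* 12 half-dB = +6 dB */
}

/* Turn a tick into a linear gain factor for the audio side. */
static double fader_gain(uint8_t tick)
{
    int8_t h = fader_halfdb[tick & 0x7f];
    return (h == FADER_OFF) ? 0.0 : pow(10.0, (h * 0.5) / 20.0);
}

With straight 0.5 dB steps the table is arguably overkill (the value is a 
straight line in the tick), but it leaves the door open for a non-linear 
taper later without touching anything else.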

However, because the middleware is effectively part of my project, I will 
have to do a conversion to 10 bit at some point, since I am trying to 
emulate the Mackie control surface. That is, I want to do something that 
uses some kind of standard rather than creating something new. I would 
suggest that the markings on a surface are guides or approximations, and the 
user should be using their ears anyway.

Because I am building my project around Ardour (it's what I use and have 
for easy reference), I have already said that my control movement will not 
mirror the GUI, but if I want it to be true to the 0.5 dB per tick, I do 
need to use the same calculation Ardour uses for the actual gain. There are 
no markings on the GUI fader, just the dB readout at the top. Center of the 
fader is -10.5 dB (pitchbend 0) and each step is 16, because pitchbend is 
14 bit and Mackie control is 10 bit. Top of the fader is +6 and bottom is 
off. Hmm, interesting: one step up from the bottom is -109.1 dB, but going 
back down one step gives -182.5 dB even though the pitchbend sent out is the 
same number... yet the surface going down one step does go to off. Not that 
either one matters, as the first value up on my surface will be higher, at 
-58 dB, so 45 ticks up from there (2d) is the same as tick 1 on my surface. 
My tick 2 is translated to 2f for -57.5.
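
For the Mackie side, the part I can pin down is the message packing: 
quantize a 14-bit position down to the 10-bit resolution the protocol 
effectively uses (steps of 16) and send it as the per-strip pitchbend 
message. The dB-to-position curve itself is a separate problem (a table, 
or whatever Ardour's calc says), so this sketch deliberately leaves that 
out; the function names are mine.

#include <stdint.h>

/* Quantize a 14-bit fader position (0..16383) to the 10-bit
 * resolution of a Mackie-style fader, i.e. steps of 16. */
static uint16_t mcu_quantize(uint16_t pos14)
{
    return (uint16_t)((pos14 & 0x3fff) & ~0x000f);
}

/* Pack the position into the 3-byte pitchbend message sent for
 * strip 0..7 (one MIDI channel per strip). */
static void mcu_fader_message(uint8_t strip, uint16_t pos14, uint8_t msg[3])
{
    uint16_t q = mcu_quantize(pos14);
    msg[0] = 0xE0 | (strip & 0x07);  /* pitch bend status + channel */
    msg[1] = q & 0x7f;               /* LSB: low 7 bits             */
    msg[2] = (q >> 7) & 0x7f;        /* MSB: high 7 bits            */
}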

Starting from the top at +6, I have to go down 3 Mackie ticks just to get 
to +5.9. To get to my first value of +5.5 takes 20 ticks, and another 20 to 
get to +5.0... It seems a lot of the 1024 available values (and bandwidth) 
are wasted in the Mackie control surface. I can't see anyone being able to 
hear the difference between +6.0 and +5.96, in a part of the fader no one 
uses at that.

My middle range (value 64) is -26 dB; in Mackie speak that is 4016 (looking 
at the whole 14 bits), or 1/4 of the range.

At this point I have to do some thinking, because this matches an analog 
fader too. Why did the analog fader have such fine control in the -10 to 
+10 window? I am thinking that at least part of the reason has to do with 
mechanical play/bend/lag. By the time the user has moved the fader the 
smallest amount, the fader knob post has bent slightly and the wiper has 
"jumped" once it overcomes any stickiness, rather than making any 0.1 dB 
movement.

> How much this matters depends on the application. Fades are
> rare in music mixing, but a very common thing in e.g. broad-
> casting.

Good, I must be doing things right. The only fades I have used are 
fade-outs at the end of a song, and I have drawn those in. Often, to make 
it sound right, I have to fade the lead (louder) tracks faster than the 
backing tracks anyway (they may be at zero as much as 2 seconds before the 
end).

From what I have seen of fades in broadcast, a button with a fixed 1-second 
(or shorter) fade would serve the purpose better than a fader. Dead air was 
a dreaded thing, so the next part of the program was already playing before 
the fade started, and the fade was mostly there to avoid a harsh cut.
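
As a rough illustration of what that button could do (none of this is from 
any real broadcast desk; the one-second length and the -60 dB floor are 
just placeholders), a fixed fade is only a gain ramp, linear in dB, applied 
over a known number of samples:

#include <math.h>
#include <stddef.h>

#define FADE_SECONDS  1.0
#define FADE_FLOOR_DB -60.0   /* treat anything at or below this as off */

/* Fade a mono buffer from start_db down to the floor over FADE_SECONDS,
 * muting whatever comes after the fade has finished. */
static void fade_out(float *buf, size_t nframes, double srate, double start_db)
{
    size_t total = (size_t)(FADE_SECONDS * srate);
    for (size_t i = 0; i < nframes; i++) {
        if (i >= total) {
            buf[i] = 0.0f;
            continue;
        }
        double db = start_db + (FADE_FLOOR_DB - start_db) * ((double)i / total);
        buf[i] *= (float)pow(10.0, db / 20.0);
    }
}

A real implementation would of course carry the fade position across 
process blocks instead of assuming one big buffer, but the idea is the 
same: one button, one known fade time.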

> In practice, if the resulting taper is OK you'll adapt to it fairly
> soon. As you do to e.g. a new car.

From what was said already that becomes obvious: most mixing is small 
adjustments rather than long fades. My only question becomes: will a jump 
from -58 to off be noticeable? That is 64 dB down from the +6 top on a 
digital signal that has a dynamic range of ~109 dB, depending on the analog 
HW going out. Background noise in the listening area will bring that lower 
still.

> The only function of the encoding is to represent the fader position
> in the best possible way. How that is interpreted later and mapped to
> dB (or anything else) is a separate matter.

I think it depends on what the main purpose is. If it is setting levels in 
a mix, then supporting long slow fades does not make sense... or to put it 
another way, using the same control for both mixing and fading is maybe not 
the best thing. The standard fader mapping exists because of this joint 
use. I think the +6 to -58 dB range is probably fine for mixing, and the 
-58-to-off step will be smoothed by software anyway, even when used for a 
fade.
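
By "smoothed by software" I mean something along the lines of the sketch 
below: instead of applying the new gain instantly, run the target through a 
one-pole lowpass so the step from the last tick (around -58 dB) to off 
becomes a short ramp rather than a click. The ~10 ms time constant and the 
names are my own assumptions.

#include <math.h>
#include <stddef.h>

typedef struct {
    double gain;    /* current (smoothed) linear gain   */
    double coeff;   /* per-sample smoothing coefficient */
} gain_smoother;

static void smoother_init(gain_smoother *s, double srate, double tau_ms)
{
    s->gain  = 0.0;
    s->coeff = exp(-1.0 / (srate * tau_ms * 0.001));
}

/* Apply the fader: 'target' is the linear gain asked for by the surface
 * (0.0 for off); the smoothing hides the size of the step. */
static void smoother_run(gain_smoother *s, double target, float *buf, size_t nframes)
{
    for (size_t i = 0; i < nframes; i++) {
        s->gain = target + (s->gain - target) * s->coeff;
        buf[i] *= (float)s->gain;
    }
}

With tau_ms around 10 at 48 kHz, the -58-to-off jump gets spread over a few 
hundred samples, which should be more than enough to avoid anything audible.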

> IMHO mostly because the music itself has changed, and also because
> a lot of mixing is now done by people who have no training for it
> at all and are just working by trial and error. One result of that
> is sessions with lots of plugins in each and every track, usually a
> clear sign of amateurism.

I got the idea that the use of plugins came about because (at the time) the 
HW could not handle a full input strip like the one Mixbus has, for 
example, so the user could just add what was needed for a channel. 
Certainly adding individual reverb per track does not make sense, as having 
the whole "band" in one sound space would be the normal target. I am 
thinking that with more modern HW, an expanded default channel strip with 
EQ, trim and minimal sends might again make sense.


--
Len Ovens
www.ovenwerks.net


