On 4/6/19 9:12 AM, Fons Adriaensen wrote:
On Fri, Apr 05, 2019 at 06:38:59PM -0400, Tim wrote:
[PC] -->--[ex. Par. port]-->--[Triggers]-->--[Sin oscs]
 |                                              |
 --<--[Audio input]---------------<--------------
It's similar to how round-trip latency might be measured,
except that there the sines are generated digitally via the audio output.
You don't even need the oscillators. Just connect the digital
output to a line input. If you make sure the pulse doesn't
overload the audio input (using resistors to attenuate it
if necessary), then the delay can easily be measured with
sub-sample accuracy.
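For what it's worth, here is a minimal sketch of one way to get
that sub-sample accuracy: find the peak sample of the captured
pulse and fit a parabola through it and its two neighbours. The
buffer and function names below are just illustrative.

/* Locate a positive-going pulse in a captured buffer with
 * sub-sample accuracy, by parabolic interpolation around
 * the peak sample. Illustrative sketch only. */
#include <stddef.h>

double pulse_position (const float *buf, size_t n)
{
    size_t k = 1;
    for (size_t i = 1; i + 1 < n; i++)
        if (buf [i] > buf [k]) k = i;
    double a = buf [k - 1], b = buf [k], c = buf [k + 1];
    /* Offset of the true peak from sample k, in (-0.5, 0.5). */
    double den = a - 2.0 * b + c;
    double d = (den != 0.0) ? 0.5 * (a - c) / den : 0.0;
    return (double) k + d;
}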
Hi Fons. Thanks for the reply.
The reason I mentioned oscillators is that I thought
a single pulse would be rounded somewhat by the input's
analog filters, making it harder for the software to tell
exactly where the pulse started.
So the software would be able to deal with sines better,
and measure phase differences rather than a pulse edge.
The reason I said two or more at non-harmonic frequencies
was in case the period of a single sine was too short:
the software might miss the peaks of some cycles that had
already passed, and it wouldn't know how many cycles had
elapsed. Two sine waves would help ensure the combined
signal doesn't repeat for at least several dozen cycles.
But after I posted I realized I only need one oscillator.
The two-oscillator method would only be required if the
sine waves were already running when the measurement
started; the much longer repetition time of the two-sine
combination would let the software know for sure which
sine peaks it was dealing with.
In this case, though, we can get away with just one
sine wave, since we know that we will be starting
the sine at a reliable time. The software knows that
it is expecting the start of the single wave, so two
waves won't be required.
Something like that. Hope that made sense ;-)
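In code terms, the measurement I have in mind is roughly the
sketch below: correlate the captured buffer against sine and
cosine references at the known frequency and take the phase of
the result as the delay. The names are illustrative, not MusE
code, and it assumes the sine's start time is known so the
whole-cycle count is unambiguous.

/* Estimate the delay of a received sine of known frequency
 * f (Hz), sampled at fs, by correlating against quadrature
 * references. The phase gives the delay modulo one period;
 * knowing when the sine was started resolves the rest. */
#include <math.h>
#include <stddef.h>

double sine_delay (const float *buf, size_t n, double f, double fs)
{
    double c = 0.0, s = 0.0;
    for (size_t k = 0; k < n; k++)
    {
        double w = 2.0 * M_PI * f * (double) k / fs;
        c += buf [k] * sin (w);   /* in-phase correlation   */
        s += buf [k] * cos (w);   /* quadrature correlation */
    }
    double phase = atan2 (-s, c); /* phase lag in radians */
    if (phase < 0.0) phase += 2.0 * M_PI;
    return phase / (2.0 * M_PI * f);  /* delay in seconds */
}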
The tricky part is how to define input or output
latency in an unambiguous way, and to understand when
and how it matters. It doesn't matter, for example, when
you record a new track via the sound card (i.e. using a
mic or DI of an external instrument) while listening to
already recorded tracks. In that case only the round-trip
latency matters.
Yes, playback material has the luxury of being able to assign
'negative' latency to it - that is, cue it up sooner.
But I think I still need to know what the latency of the input
signal of the newly recording track is, so that once it has
finished recording, its wave file can be adjusted.
I'm certain that I still need to know it.
My interest in latency is not just casual.
My MusE latency branch is attempting to add latency compensation
and correction to the application.
That's an important distinction - correction vs. compensation.
Only playback material can provide true 'correction', while
everything else can only be 'compensated' for, with
artificially inserted latency delay units.
This may not mean much to a casual reader, but have a look at
my graphic, which helped me work out the algorithms involved:
https://www.dropbox.com/s/dd8wg1ygdttxeaz/Latency%201.jpeg?dl=0
The coloured boxes represent actual MusE tracks.
Purple is Wave Tracks, Yellow is Group (Buss) Tracks,
Green is Audio Output Tracks, and Red is Audio Input Tracks.
The numbers represent the various latency values of signals,
as well as the resulting required 'compensator delay unit'
values and the 'correction' values applied to the wave track
playback material.
This was a very difficult thing to understand and build.
But I think I have all the concepts worked out and the
code in place to do it.
The only remaining task is the latency 'gathering' stage
at the beginning of each cycle, where we ask all tracks,
inputs, outputs, plugins and synths what their latencies
are, and then determine the required delay values of all
latency 'compensator' delay units, as well as the required
negative latency 'correction' values of all playback
material tracks.
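In rough terms the gathering pass boils down to something like
this (a simplified sketch with hypothetical names, not the
actual MusE code): each branch feeding a mix point reports its
total upstream latency, and the compensator delay on a branch
is the worst branch latency minus its own.

/* Align the input branches of a mix point (track, buss, etc.).
 * Each branch reports its total upstream latency in frames;
 * the compensator delay inserted on a branch is the worst
 * latency minus its own, so all branches line up. The mix
 * point then reports the worst value downstream. */
#include <stddef.h>

typedef struct {
    unsigned latency;  /* total upstream latency, frames */
    unsigned delay;    /* compensator delay to apply     */
} Branch;

unsigned align_branches (Branch *br, size_t n)
{
    unsigned worst = 0;
    for (size_t i = 0; i < n; i++)
        if (br [i].latency > worst) worst = br [i].latency;
    for (size_t i = 0; i < n; i++)
        br [i].delay = worst - br [i].latency;
    return worst;
}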
Thankfully LV2 has a feature that helps with this pre-cycle
'gathering' stage. Other plugin systems are a bit more difficult.
With those, we may have to run the plugin for just one sample
before a full cycle in order to find out what the plugin's
latency values are.
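For LV2 the latency comes from a designated control output port,
so a host using the lilv library can find it without a dummy
run. A sketch (port_bufs is assumed to be the host's array of
connected control buffers; error handling omitted):

#include <lilv/lilv.h>

/* Return the plugin's reported latency in samples, or 0 if it
 * does not declare a latency port. The value on that port is
 * written by the plugin, typically during run(). */
float plugin_latency (const LilvPlugin *p, float *const *port_bufs)
{
    if (!lilv_plugin_has_latency (p))
        return 0.0f;
    uint32_t i = lilv_plugin_get_latency_port_index (p);
    return *port_bufs [i];
}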
Cheers.
Tim.
It *does* matter when recording a new track which is
not input from the sound card, but generated internally.
In that case you have input latency on the MIDI input
which drives the software instrument, and output latency
on the audio output playing back the already recorded
tracks. This again is a round-trip latency, but with
the two components originating in different systems.
It also matters when you want to find out to which
audio sample a mouse click or keyboard event corresponds,
e.g. to mark a position for editing.
Let's assume you can generate a digital output pulse at
a time measured by jack_time(). On a Raspberry Pi, for
example, you could use one of the GPIO pins, which can be
manipulated from user space with almost no system delay.
Somewhat later, a Jack client will detect this pulse in
the audio stream. Note that the time at which this Jack
client runs is irrelevant. A Jack client can run anywhere
in the current cycle; it doesn't make any difference for
the audio processing.
The time that matters is the start time of the Jack cycle
in which the pulse is detected in the audio stream, plus
the time represented by the position of the pulse in the
current period. You can get the start and end times of
the current cycle using jack_get_cycle_times().
With
Tp = jack time at which the digital pulse is generated
T0, T1 = the start and end times of the current cycle
k = the position (in samples) of the pulse in the period
N = the period size
then the input latency (in microseconds) is
T0 + k * (T1 - T0) / N - Tp
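A minimal sketch of that computation inside a Jack process
callback (find_pulse() and tp_usecs are assumed to exist
elsewhere; find_pulse() returns the pulse position in the
period, or -1 if no pulse is present):

#include <jack/jack.h>

extern jack_client_t *client;     /* opened elsewhere          */
extern jack_port_t   *in_port;    /* registered elsewhere      */
extern jack_time_t    tp_usecs;   /* Tp, set when pulse fired  */
extern long find_pulse (const float *buf, jack_nframes_t n);
extern volatile double latency_us;

int process (jack_nframes_t nframes, void *arg)
{
    (void) arg;
    const float *in = jack_port_get_buffer (in_port, nframes);
    long k = find_pulse (in, nframes);
    if (k >= 0)
    {
        jack_nframes_t frames;
        jack_time_t t0, t1;
        float per;
        jack_get_cycle_times (client, &frames, &t0, &t1, &per);
        /* input latency (usecs) = T0 + k * (T1 - T0) / N - Tp */
        latency_us = (double) t0
                   + (double) k * (double) (t1 - t0) / nframes
                   - (double) tp_usecs;
    }
    return 0;
}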
Ciao,