Wait a minute. This discussion is making my head spin! How can increasing the
sampling rate, and changing nothing else, possibly improve the lowest reliable
latency of a system?
Lowest latency, in my experience, is a function of the longest time (in microseconds or
milliseconds) that you can expect your system to be unavailable for audio processing due
to other kernel work such as disk I/O, plus whatever per-cycle setup overhead you have in
your application, plus data processing time. If that is, say, 5ms at 48 kHz, the kernel
latency part (which generally dominates) will still be 5ms at 96 kHz (or 32 kHz, or ...)
unless you reconfigure your system to improve kernel behavior, or improve your app's
performance.
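To put rough numbers on that, here is a minimal sketch (the 5ms worst-case gap is the
assumed figure from above, not a measurement):

    # Sketch: if the worst-case time the kernel keeps you away from audio
    # work is fixed in wall-clock terms, the buffer needed to ride it out
    # scales with the sample rate, so the latency floor in ms doesn't move.
    worst_case_gap_ms = 5.0  # assumed kernel-induced unavailability
    for rate in (32000, 48000, 96000):
        frames_needed = worst_case_gap_ms / 1000.0 * rate
        print(rate, int(frames_needed), "frames ->",
              frames_needed * 1000.0 / rate, "ms")
    # 160, 240, 480 frames -- 5.0 ms every time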
Or are you just talking about the boundary case where kernel-dependent latency is very low
and you are limited by the smallest buffer size that the ALSA driver supports? In that
case I guess this would work. But you are more than doubling the amount of work your
system is doing, and if you are at the buffer size lower limit already, is it really an
audible improvement?
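If that really is the case, the arithmetic does work out. A minimal sketch, using an
assumed 64-frame driver minimum purely as an illustration:

    # Sketch: when the driver's minimum period size (in frames) is the
    # binding limit, the same frame count is worth half as many ms at 96 kHz.
    min_period_frames = 64  # assumed driver minimum, illustrative only
    for rate in (48000, 96000):
        print(rate, min_period_frames * 1000.0 / rate, "ms per period")
    # 48000 -> ~1.33 ms, 96000 -> ~0.67 ms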
Am I missing something?
Thanks,
Bill Gribble
On Jan 2, 2014, at 0:25, Joel Roth <joelz(a)pobox.com> wrote:
Harry van Haaren wrote:
On Wed, Jan 1, 2014 at 1:21 PM, Joel Roth <joelz(a)pobox.com> wrote:
I was curious if doubling the sample rate is a
practical way to reduce latency for live effects
processing. I would think it would reduce latency by half.
It would: you mention "practical", but I'm not sure I'd call it that.
If one wanted to avoid the tradeoff of handling
twice the usual amount of audio data,
CPU load will go up, since there is 2x as much data to process,
which also means every plugin / host has 2x more work to do.
That adds up quickly if you're doing things like convolution reverbs
or other CPU-intensive processing.
I was curious if ALSA sample rate conversion, or some other
clever hack, could be used to get the low-latency advantage of
the high sample rate while actually dealing with 48k streams
through JACK.
Theoretically possible, I suppose, but it seems like an awful lot of
effort to save a few ms of latency.
Latency below ~3ms isn't perceivable at all, IMO; most will agree.
Why not run JACK at 64 frames, 2 buffers? That'll achieve approx
3ms (at 44.1kHz or 48kHz), which is fine for the purpose?
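(That approximation is just frames * buffers / sample rate: 64 * 2 / 48000 is
about 2.7 ms, and 64 * 2 / 44100 is about 2.9 ms.)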
Perhaps I'm missing something: are you doing multiple passes
through the sound card, so that you're adding its latency two or more times?
For a live submix, the routing I want to use is exemplified by:
system:capture_5 --> Nama:sax_in_1 --( ecasound )--> Nama:sax_out_1
Nama:sax_out_1 --( ecasound )--> system:playback_11
If I understand correctly, the latency not due to Ecasound
in this graph is the soundcard round trip plus the hops, each
of which contributes (frames*buffers/sample-rate) of latency.
I'm adding one more hop, so that would be 3ms
at 64/2, 4ms at 64/3, and 5.3ms at 128/2.
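As a sanity check on those figures, a minimal sketch of the
(frames*buffers/sample-rate) formula at 48 kHz:

    # Sketch: per-hop latency from the period settings mentioned above.
    rate = 48000
    for frames, buffers in ((64, 2), (64, 3), (128, 2)):
        print(frames, buffers, frames * buffers * 1000.0 / rate, "ms")
    # 64/2 -> ~2.7 ms, 64/3 -> 4.0 ms, 128/2 -> ~5.3 ms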
Regards,
Joel
Cheers, -Harry
--
Joel Roth