On 4/4/19 8:41 PM, Thomas Brand wrote:
I just tried with the same settings to start jackd, and it only works
for 2 <= n <= 4. Anything above 4 will not start the server ("ALSA: got
smaller periods 4 than 5 for capture"). However, changing n is reflected
in the ports:
jack_lsp -l -L

    port playback latency = [ 0 0 ] frames
    port capture latency = [ 2048 2048 ] frames
    total latency = 2048 frames

    port playback latency = [ 4096 4096 ] frames
    port capture latency = [ 0 0 ] frames
    total latency = 4096 frames
I think latency in a graph is confusing per se. The points you made for
"shortest path" etc. make sense to me. I'm not sure the semantics of
latency for ports and latency ranges are well enough documented. It can
get very complicated when you consider a graph with many ports, each of
which can have a different latency, plus a mixture of latencies
depending on routing etc. I'd say you're not the only one confused here.
Searching for example clients, or clients that make use of the latency
features inside jack, I didn't find anything quickly. I suspect this API
isn't used much and could be a candidate for removal. It's not unlike
the session API, which sits there but isn't used (much).
Did I understand your use case correctly: you want to be able to tell
(from a client's perspective) how long it will take from *now* (this
cycle) until the start of the buffer is played out (e.g. the first
sample of the buffer hits the DAC)?
To get back to the port's latency configuration, here is a simple case:
[HW Input] -> [App] -> [HW Output]
Should the App set its input and output port latency to 0?
Or should the App set its port latencies as follows:
app.in.latency = hw_in.out.latency;
app.out.latency = app.in.latency + app.internal_latency;
Or something else?
And internally, the application would use the hw input latency and hw
output latency to compensate for the round-trip delay?
Is that all correct? Are my questions clear?
Thank you very much!