On Tue, 14 Feb 2023 at 16:32, Fons Adriaensen <fons(a)linuxaudio.org> wrote:
> On Tue, Feb 14, 2023 at 03:57:05PM +0100, Wim Taymans wrote:
> > The real difference between the two methods is 'sample count'
> > versus 'time' as the source of the event that starts a period.
> I always wondered why one would use a timer; it just amounts
> to polling. Suppose you look every 1 ms to check if there
> are enough samples to start a cycle.
You don't need to poll with timerfd: just set the timeout
according to some clock, add the timerfd to a poll loop, and it
wakes you up on time.
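For example, something along these lines (a minimal standalone
sketch, not the actual PipeWire code; the fixed 5 ms interval is
only for illustration, the real timeout would be computed from the
device position):

    #include <stdint.h>
    #include <stdio.h>
    #include <poll.h>
    #include <unistd.h>
    #include <sys/timerfd.h>

    int main(void)
    {
        /* error checking omitted for brevity */
        int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
        struct itimerspec its = {
            .it_value    = { .tv_nsec = 5 * 1000 * 1000 },
            .it_interval = { .tv_nsec = 5 * 1000 * 1000 },
        };
        timerfd_settime(tfd, 0, &its, NULL);

        struct pollfd pfd = { .fd = tfd, .events = POLLIN };
        for (int i = 0; i < 10; i++) {
            /* sleeps in poll(), no busy waiting */
            poll(&pfd, 1, -1);
            uint64_t exp;
            read(tfd, &exp, sizeof exp);  /* ack the expiration */
            printf("woke up (%llu expirations)\n",
                   (unsigned long long) exp);
        }
        close(tfd);
        return 0;
    }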
> It's of course not 'active polling' (spending all CPU time on
> testing a condition), but it is still polling in the sense
> that it is NOT the event you wait for (having enough samples
> to start a Jack cycle) that wakes you up. When using a timer
> you just test for that condition periodically, which means
> you can be up to that period late.
The idea is to use a DLL (delay-locked loop) to tweak the timeout
and wake up exactly when you have the desired number of samples in
the device.
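Roughly like this (an untested sketch of the usual second-order
DLL; the coefficients and names are illustrative, not what the
PipeWire source does):

    #include <math.h>

    typedef struct {
        double b, c;   /* loop gains, derived from the bandwidth */
        double t1;     /* predicted time of the next wakeup (s) */
        double e2;     /* filtered period estimate (s) */
    } dll_t;

    static void dll_init(dll_t *d, double now, double period,
                         double bandwidth)
    {
        double w = 2.0 * M_PI * bandwidth * period;
        d->b = sqrt(2.0) * w;
        d->c = w * w;
        d->e2 = period;
        d->t1 = now + period;
    }

    /* 'now' is when the desired number of samples actually became
     * available, e.g. derived from the ALSA hw pointer. The return
     * value is the absolute deadline to arm the timerfd with. */
    static double dll_update(dll_t *d, double now)
    {
        double e = now - d->t1;     /* phase error */
        d->t1 += d->b * e + d->e2;  /* correct the next deadline */
        d->e2 += d->c * e;          /* correct the period estimate */
        return d->t1;
    }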
> To avoid loss of period processing time, the timer period
> must be a very small fraction of the Jack period time. And
> then I wonder what the advantage is.
If you want to implement dynamic latency changes with IRQ-based
wakeups, you need to do the opposite: use small ALSA periods and
accumulate a few of them until you have the desired period for the
graph. It would be possible to implement something like this as an
alternative to timers.
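Schematically (hypothetical names, not PipeWire API):

    #define ALSA_PERIOD   64    /* frames per hardware interrupt */
    #define GRAPH_PERIOD  1024  /* frames per graph cycle */

    /* hypothetical: runs one cycle of the processing graph */
    extern void run_graph_cycle(int nframes);

    static int frames_ready = 0;

    /* called from the poll loop on every ALSA period interrupt */
    static void on_alsa_period(void)
    {
        frames_ready += ALSA_PERIOD;
        if (frames_ready >= GRAPH_PERIOD) {
            run_graph_cycle(GRAPH_PERIOD);
            frames_ready -= GRAPH_PERIOD;
        }
    }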
> > Very much like how ALSA wakes you up when a period expires.
> AFAIK, ALSA doesn't use timers for that.
> For a sound card on e.g. a PCI bus the start of a cycle would
> be the indirect result of a hardware interrupt. For USB
> or FireWire cards, it would be triggered by an event from
> the lower (USB/FireWire) layers.
> BTW, what about the 'signed differences' issue I pointed
> out earlier?
Should be fixed with this:
https://gitlab.freedesktop.org/pipewire/pipewire/-/commit/274b63e9723ec00dd…
Wim
> Ciao,
>
> --
> FA