[linux-audio-user] This criticism of jackd valid?

Maarten de Boer mdeboer at iua.upf.edu
Sun Jan 21 16:21:47 EST 2007

Hi Paul

I have a question about something you say in your slashdot post:

"The overhead of calling the graph associated with the data flow for
the frames is not insignificant, even on contemporary processors.
Therefore, calling the graph the minimum number of times is of some
significance, significance that only grows as the latency is reduced.
Because of this, all existing designs, including ASIO and CoreAudio
(with the proviso that CoreAudio is *not* driven by the interrupt from
the audio interface) call the graph only once for every hardware buffer."

Do you have some numbers to show how relevant this overhead actually is?
I mean, if I use a specific internal buffer size (say 128 samples),
independent of the system buffer size, would that really be noticeable?
I can think of some situations where this would be preferable. For
example, if you have many points in your call graph where a fixed buffer
size is required (say some FFTs). Rather than doing buffering at all
these points, it seems to make more sense to run the whole call graph
with that buffer size. I hope I make myself clear...

I did some experiments, and did not notice any significant difference
using different internal buffer sizes for my call graph. I am talking
about a call graph within a single application, and maybe you were
talking about a call graph with context switches?


