On Thu, Jun 16, 2005 at 10:30:29AM -0400, Paul Davis wrote:
> true, but i take it you get the way CoreAudio is doing it: it means
> you can drive audio processing from a different interrupt source
> (e.g. the system timer) because you have a very accurate idea of the
> position of the h/w frame pointer. In CoreAudio, the "callback" is
> decoupled from any PCI, USB or ieee1394 interrupt. Tasty.
Didn't know they were doing that. But what is gained this way? The
interrupt latency (probably a bit less of it) and the scheduling
delays are still there.
I once had a look at the CoreAudio code in SC3. It's in the same file
as the JACK interface, and both have similar code implementing some
sort of DLL-like smoothing. Any advantage CoreAudio has is certainly
not visible in that file. And when the JACK version is rewritten to
use the second DLL (the one using OSC time) that I'll be proposing, it
will be much simpler than the CoreAudio interface. It could even be
done using the current (frame time) DLL, with some complications.
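
For reference, the core of such a loop fits in a few lines. This is
only a sketch -- the names and the coefficient choice (critically
damped second order) are my own, not what the SC3 or JACK code
actually uses:

    #include <math.h>

    typedef struct {
        double t0, t1;   /* filtered start of current / next period */
        double e2;       /* filtered period duration                */
        double b, c;     /* loop gain coefficients                  */
    } dll_t;

    void dll_init (dll_t *d, double tnow, double tper, double bw)
    {
        double w = 2 * M_PI * bw * tper;
        d->b  = 1.4142135623730951 * w;   /* sqrt(2) * w */
        d->c  = w * w;
        d->e2 = tper;
        d->t0 = tnow;
        d->t1 = tnow + tper;
    }

    /* Call once per period interrupt with the raw (jittery) wakeup
       time; t0 and t1 then bracket the period in filtered time.   */
    void dll_update (dll_t *d, double tnow)
    {
        double e = tnow - d->t1;    /* timing error of this wakeup */
        d->t0  = d->t1;
        d->t1 += d->b * e + d->e2;
        d->e2 += d->c * e;
    }

Given t0 and t1, the h/w frame pointer at any instant t can be
estimated by linear interpolation, frames into the current period
~= period_frames * (t - t0) / (t1 - t0), which is all you need to
drive processing from some other timer source.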
>> There is nothing really wrong with that model per se, and you can
>> easily build a callback system on top of it, as jackd does.
> you can, true, though JACK doesn't. JACK uses poll and mmap.
I know, but would there be much difference between poll()/mmap() and
read()/write() if you look at what happens internally? The
read()/write() calls combine the act of waiting, probably using the
same mechanisms as poll(), with a data copy to or from a user-provided
buffer. With poll()/mmap() you do the latter yourself. Both systems
are the same in the sense that in both, the user waits for the driver.
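
Roughly, and ignoring all error and xrun handling, the two styles come
down to this -- a sketch only, 'fd', 'hwbuf', 'process' and the fixed
period size are hypothetical, not real driver code:

    #include <poll.h>
    #include <unistd.h>

    #define PERIOD_BYTES 4096

    /* read()/write(): the kernel waits and copies for us.  A
       callback API is trivially layered on top of this loop.   */
    void run_rw (int fd, void (*process)(char *, int))
    {
        char buf [PERIOD_BYTES];
        while (read (fd, buf, PERIOD_BYTES) == PERIOD_BYTES)
        {
            process (buf, PERIOD_BYTES);
            if (write (fd, buf, PERIOD_BYTES) != PERIOD_BYTES) break;
        }
    }

    /* poll()/mmap(): we do the waiting with poll() -- the same
       wait, on the same driver -- and the copy (or in-place
       processing) in the mapped buffer is now our job.          */
    void run_mmap (int fd, char *hwbuf, void (*process)(char *, int))
    {
        struct pollfd pfd = { fd, POLLIN | POLLOUT, 0 };
        while (poll (&pfd, 1, -1) > 0)
            process (hwbuf, PERIOD_BYTES);
    }

In both loops the wakeup comes from the same driver interrupt; only
who performs the copy differs.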
> expecting regular audio developers to use poll/mmap on a day-to-day
> basis creates very bad reactions :)
Where are all those real programmers? ;-)
--
FA