On 02/23/2016 04:35 PM, Jonathan Brickman wrote:
> What I want to do is to use the resources I have to run multiple
> signal generation and processing chains asynchronously, in parallel,
> and then use the final audio-hardware-synchronized chain to resample
> them all into one, perhaps using the Zita tools. Anyone know if this
> is possible? I saw this flow structure work very well in the video
> domain, quite a few years ago.
Video is a lot more forgiving when it comes to timing, and modern CPU
architectures are optimized for bandwidth rather than low latency.
It is entirely possible to reliably process 4K RGBA frames in 40ms
(25fps, ~900MB/s), while the same machine fails to process 128 audio
samples in under 2ms (~256kB/s).
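
A rough back-of-the-envelope view of those figures (my own assumptions:
DCI 4K at 4096x2160 with 4 bytes per RGBA pixel, and 32-bit float audio
samples; neither is stated above):

  video: 4096 px * 2160 px * 4 B * 25 fps            ~= 885 MB/s
  audio: 128 samples / 2 ms = 64000 samples/s * 4 B  ~= 256 kB/s

So the audio stream moves roughly three orders of magnitude less data,
but its per-buffer deadline is about 20x tighter (2ms vs 40ms), which is
why the bottleneck is latency rather than throughput.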
2c,
robin