Basically all audio processing in Linux is done on a buffer-by-buffer basis with a several-buffer queue. You don't need to do anything to get that; it's just the way the underlying system works.
s/Linux/all general purpose operating systems, all plugin APIs, most well known audio I/O APIs/
Basically, EVERYTHING works this way. You get a buffer's worth of data/space to read/write while the other buffer is being recorded/played. You get told when to work with a buffer; you don't get to decide.
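If you want to see what that looks like in code, here's a minimal pass-through client against the JACK C API (the client and port names are just made up for the example). All of your processing happens inside the process callback, which JACK calls once per buffer, whenever it decides to:

#include <jack/jack.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

/* JACK calls this once per buffer; the application never chooses when. */
static int process(jack_nframes_t nframes, void *arg)
{
    (void)arg;
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port,  nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);

    /* Do your DSP here; this just copies input to output. */
    memcpy(out, in, nframes * sizeof *out);
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("buffer_demo", JackNullOption, NULL);
    if (!client)
        return 1;

    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput,  0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);

    jack_set_process_callback(client, process, NULL);
    jack_activate(client);      /* from this point on, process() runs once per buffer */

    sleep(30);                  /* keep the client alive for the demo */
    jack_client_close(client);
    return 0;
}

Plugin APIs (LV2, VST, etc.) have the same shape, just with the host standing in for the JACK server.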
ALSA and JACK add an additional wrinkle in that it is possible to configure more than 2 buffers, but in general, with a well-designed audio interface and a reasonably sensible motherboard, there's never a particularly good reason for doing so.
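For reference, the buffer (period) count is the -n option to jackd's ALSA backend. Something along these lines asks for the usual 2 periods of 256 frames each; hw:0 is just a stand-in for whatever your interface actually is:

    jackd -d alsa -d hw:0 -r 48000 -p 256 -n 2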