[LAD] A question about audio file interfaces
fons at linuxaudio.org
Sun Dec 1 11:48:23 UTC 2013
On Sat, Nov 30, 2013 at 04:32:45PM -0800, Devin Anderson wrote:
> The questions you're asking hint at something that could be very
> similar to a library or two I've been working on in my spare time.
> I've been working on a service library for dispatching functions
> between a realtime thread and a set of non-realtime threads, and
> vice-versa, and intend to start work on another library that uses the
> former to do realtime disk-streaming.
Thanks to all who responded.
No, it's just about the code to read/write the files - what
it should do or not do in order to perform well with 'smart'
disk access threads which will have their own ways to
organise buffering. If two (or more) such schemes are stacked
on top of each other the result could be worse than with just
one of them.
The file format in this case is just a simple subset of CAF,
only linear PCM, but with multiple UUID chunks. The first
layer should just support reading/writing/modifying any raw
chunk data, the second will handle some specific chunk types.
None of this needs optimal performance, but reading or writing
the audio data does, so that's what my question is about.
More specifically:
* use mmap, read/write or stdio ?
* any point in implementing things like gather/scatter, or
async access ?
* for multichannel: offer de-interleaved access, sparse
channel sets, etc. ?
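On the scatter/gather point: with plain read/write (as opposed to
mmap or stdio), positioned I/O avoids touching the shared file
offset, so several streams can read one fd concurrently, and
preadv(2) fills two buffers (e.g. the two halves of a ring buffer
wrap-around) in one syscall. A sketch, assuming Linux/BSD preadv
(pread itself is POSIX):

```c
/* Sketch: positioned scatter read - fill two destination
   buffers from an explicit file offset in one system call,
   without moving the fd's shared file offset. */
#define _GNU_SOURCE
#include <sys/uio.h>
#include <unistd.h>
#include <fcntl.h>

/* Read n1 + n2 bytes at 'off' into buffers b1 and b2. */
static ssize_t scatter_read(int fd, void *b1, size_t n1,
                            void *b2, size_t n2, off_t off)
{
    struct iovec iov [2] = { { b1, n1 }, { b2, n2 } };
    return preadv(fd, iov, 2, off);
}
```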
I have various use cases for such a library, one of them being
to replace the private .ald format used by Aliki and some other
apps. But the code should also work efficiently with e.g. a DAW
having maybe a hundred files open at any time, and multiple
streams from the same file.
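For the de-interleaved / sparse channel question, one common
scheme is to read a block of interleaved frames once and copy out
only the channels a caller asked for. A sketch with illustrative
names of my own - a NULL output pointer skips a channel, which
gives sparse channel sets for free:

```c
/* Sketch: de-interleave a block of interleaved float frames
   into caller-supplied per-channel buffers.  A NULL entry in
   'out' means that channel is not wanted. */
#include <stddef.h>

static void deinterleave(const float *in, float *const *out,
                         int nchan, size_t nframes)
{
    for (size_t f = 0; f < nframes; f++)
        for (int c = 0; c < nchan; c++)
            if (out [c])
                out [c][f] = in [f * (size_t) nchan + c];
}
```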
A world of exhaustive, reliable metadata would be a utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)