Your design is way too simple and fundamentally wrong.

If you want low latency you need to use a pull model (aka callback model) for audio I/O to the device. Let the device tell you when it wants audio data, and deliver it, on time, without blocking (which means no on-demand file I/O in the same thread as the device audio I/O). See http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing
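
For example, with PortAudio the shape is roughly this (untested sketch; the mixer struct, the two pre-decoded buffers, and all the names here are placeholders for illustration, not anything from your code):

#include <portaudio.h>
#include <string.h>

/* Everything the callback needs, pre-decoded into memory up front
 * so the callback never touches the disk. */
typedef struct {
    const float *music;      /* interleaved stereo, pre-decoded */
    long music_frames;
    long music_pos;
    const float *sfx;        /* currently playing effect, or NULL */
    long sfx_frames;
    long sfx_pos;
} Mixer;

/* The device pulls audio from us; we just mix and return, no blocking. */
static int audio_cb(const void *in, void *out, unsigned long frames,
                    const PaStreamCallbackTimeInfo *time,
                    PaStreamCallbackFlags status, void *user)
{
    Mixer *m = (Mixer *)user;
    float *dst = (float *)out;
    memset(dst, 0, frames * 2 * sizeof(float));

    for (unsigned long i = 0; i < frames; i++) {
        if (m->music_pos < m->music_frames) {
            dst[2*i]   += m->music[2*m->music_pos];
            dst[2*i+1] += m->music[2*m->music_pos + 1];
            m->music_pos++;
        }
        if (m->sfx && m->sfx_pos < m->sfx_frames) {
            dst[2*i]   += m->sfx[2*m->sfx_pos];
            dst[2*i+1] += m->sfx[2*m->sfx_pos + 1];
            m->sfx_pos++;
        }
    }
    return paContinue;
}

/* Setup: open the default stereo output stream and hand it the callback. */
int start_audio(Mixer *m, PaStream **stream)
{
    if (Pa_Initialize() != paNoError) return -1;
    if (Pa_OpenDefaultStream(stream, 0, 2, paFloat32, 44100, 256,
                             audio_cb, m) != paNoError) return -1;
    return Pa_StartStream(*stream) == paNoError ? 0 : -1;
}

When the player triggers an effect, the game thread only has to point m->sfx at an already-loaded sample and reset sfx_pos; doing that hand-off safely (atomic swap or a lock-free queue, never a mutex the callback can block on) is exactly what the article above covers.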

On Tue, May 17, 2016 at 4:26 AM, David Griffith <dave@661.org> wrote:
On Tue, 17 May 2016, Andrea Del Signore wrote:

On Tue, May 17, 2016 at 12:25 AM, David Griffith <dave@661.org> wrote:
On Mon, 16 May 2016, Andrea Del Signore wrote:

> I'm not simply trying to mix two files.  My main project is a
> game engine in which two sounds are allowed at any one time. 
> For instance, there can be constant background music punctuated
> by sound effects.  I can't get these to mix correctly.

Hi,

In that case you can just skip the right number of frames before the sound starts playing.

I modified my code to take the start time for each file and schedule the
play time with frame accuracy.

http://pastebin.com/0PMyfPvK

If you want your timing to be sample-accurate, the algorithm is a bit more
complex.
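
The core of it is just an offset applied while mixing, roughly like this (simplified, not the exact code from the pastebin):

/* Mix one mono source into the current output block so that it starts
 * at an absolute frame position 'start_frame', i.e. skip the right
 * number of frames before it is heard.  'block_start' is the absolute
 * frame index of dst[0]. */
void mix_at(float *dst, long block_start, long block_frames,
            const float *src, long src_frames, long start_frame)
{
    for (long i = 0; i < block_frames; i++) {
        long pos = (block_start + i) - start_frame;
        if (pos >= 0 && pos < src_frames)
            dst[i] += src[pos];
    }
}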

That won't work.  Your code schedules everything ahead of time, before anything else happens.  I need to be able to fire off sound effects the instant the player does something to cause them.  I can't know that in advance.

--
David Griffith
dave@661.org
