[LAD] Pipes vs. Message Queues

Nick Copeland nickycopeland at hotmail.com
Fri Nov 25 14:21:28 UTC 2011


> From: clemens at ladisch.de
> To: nickycopeland at hotmail.com
> CC: d at drobilla.net; linux-audio-dev at lists.linuxaudio.org
>
> Perhaps I should revisit another project I was working on, which was syslog event
> correlation: it used multiple threads to be scalable to >1M syslog messages per second
> (big installation). I was testing it with socketpair()s and other mechanisms. I would be
> interested to know if scheduler changes affect it too.

So if the pipe() is replaced with 

    socketpair(PF_UNIX, SOCK_STREAM, PF_UNSPEC, pipe_fd);

then the issue I was seeing goes away. Perhaps the pipe() code has not been
optimised since sockets were developed to replace it, once IPC suddenly
needed to work between hosts rather than just between processes? Pure conjecture.
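
For reference, a minimal sketch of the swap. The real ipc.c is not posted in
this thread, so everything around the socketpair() call here is a guess at the
harness, not the actual benchmark:

    /* Hypothetical reduction of the pipe-to-socketpair swap; the real
     * benchmark (ipc.c) is not posted in this thread. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int pipe_fd[2];
        char buf[4096];

        /* Was: pipe(pipe_fd); a connected UNIX-domain stream pair is a
         * drop-in replacement, and both fds are read/write. */
        if (socketpair(PF_UNIX, SOCK_STREAM, PF_UNSPEC, pipe_fd) < 0) {
            perror("socketpair");
            return 1;
        }

        memset(buf, 0, sizeof(buf));
        if (write(pipe_fd[0], buf, sizeof(buf)) != (ssize_t)sizeof(buf))
            perror("write");
        if (read(pipe_fd[1], buf, sizeof(buf)) != (ssize_t)sizeof(buf))
            perror("read");

        close(pipe_fd[0]);
        close(pipe_fd[1]);
        return 0;
    }

(PF_UNSPEC is 0, so this is the same as passing protocol 0.)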

[nicky@fidelispc] /tmp [148] cc ipc.c -lrt -DSET_RT_SCHED=1
[nicky@fidelispc] /tmp [149] ./a.out 4096 10000000
Sending a 4096 byte message 10000000 times.
Pipe send time:  26.131743
Pipe recv time:  26.132117
Queue send time: 18.576559
Queue recv time: 18.576592
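
The 'Queue' rows are presumably POSIX message queues, which is what the -lrt
above suggests; since the real code is not posted, the following is only a
sketch of that path, with a queue name of my own choosing:

    /* Hypothetical sketch of the message-queue side; ipc.c itself is not
     * posted in this thread.  Build with -lrt, as above. */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 4096 };
        char buf[4096];

        /* "/ipc_bench" is an invented name, not the benchmark's. */
        mqd_t mq = mq_open("/ipc_bench", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) {
            perror("mq_open");
            return 1;
        }

        memset(buf, 0, sizeof(buf));
        if (mq_send(mq, buf, sizeof(buf), 0) < 0)        /* priority 0 */
            perror("mq_send");
        if (mq_receive(mq, buf, sizeof(buf), NULL) < 0)
            perror("mq_receive");

        mq_close(mq);
        mq_unlink("/ipc_bench");
        return 0;
    }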

The results were repeatable, CPU load was evenly distributed, and the ludicrous
context-switching figures were gone. Perhaps I should have relabelled 'Pipe send time'
as 'Socket send time'? The message queues still give the best results.
Somebody should compare that to a shared-memory lockless ring buffer, although
I have a feeling it would not beat the message queues used here.
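
For anyone who wants to try that comparison, here is a minimal sketch of a
single-producer/single-consumer lockless ring buffer using C11 atomics. None
of this is from ipc.c; in a real test the struct would sit in shared memory
(e.g. mmap() with MAP_SHARED before a fork()), and the open question is what
you do when the buffer is empty or full, since any blocking or spinning there
puts the scheduler right back in the picture:

    /* Hypothetical SPSC lockless ring buffer; not from ipc.c. */
    #include <stdatomic.h>
    #include <stddef.h>

    #define RB_SIZE (1u << 20)  /* power of two, so indices can be masked */

    struct ringbuf {
        _Atomic size_t head;    /* total bytes written; producer-owned */
        _Atomic size_t tail;    /* total bytes read; consumer-owned */
        char data[RB_SIZE];
    };

    /* Producer side: returns 1 on success, 0 if there is not enough room. */
    static int rb_write(struct ringbuf *rb, const char *src, size_t len)
    {
        size_t head = atomic_load_explicit(&rb->head, memory_order_relaxed);
        size_t tail = atomic_load_explicit(&rb->tail, memory_order_acquire);

        if (RB_SIZE - (head - tail) < len)
            return 0;                               /* full */
        for (size_t i = 0; i < len; i++)
            rb->data[(head + i) & (RB_SIZE - 1)] = src[i];
        atomic_store_explicit(&rb->head, head + len, memory_order_release);
        return 1;
    }

    /* Consumer side: returns 1 on success, 0 if len bytes are not there yet. */
    static int rb_read(struct ringbuf *rb, char *dst, size_t len)
    {
        size_t tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
        size_t head = atomic_load_explicit(&rb->head, memory_order_acquire);

        if (head - tail < len)
            return 0;                               /* not enough data */
        for (size_t i = 0; i < len; i++)
            dst[i] = rb->data[(tail + i) & (RB_SIZE - 1)];
        atomic_store_explicit(&rb->tail, tail + len, memory_order_release);
        return 1;
    }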

Regards, nick.