From: clemens(a)ladisch.de
To: nickycopeland(a)hotmail.com
CC: d(a)drobilla.net; linux-audio-dev(a)lists.linuxaudio.org
Perhaps I should revisit another project I was working on, a syslog event
correlator: it used multiple threads to scale to >1M syslog messages per second
(big installation). I was testing it with socketpair()s among other things, and I
would be interested to know whether scheduler changes affect it too.
So if the pipe() is replaced with
socketpair(PF_UNIX, SOCK_STREAM, PF_UNSPEC, pipe_fd);
then the issue I was seeing goes away. Perhaps the pipe() code has not been
optimised since sockets were developed to replace it, once IPC suddenly needed
to go between hosts rather than just between processes? Pure conjecture.
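For reference, here is a minimal sketch of the change in isolation (this is not
the actual ipc.c, just the shape of it; the buffer size and error handling are
made up):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int pipe_fd[2];
    char buf[4096];

    /* Instead of pipe(pipe_fd): a UNIX-domain stream socketpair.
     * PF_UNSPEC (0) as the protocol works; AF_UNIX, SOCK_STREAM, 0
     * is the more conventional spelling of the same thing. */
    if (socketpair(PF_UNIX, SOCK_STREAM, PF_UNSPEC, pipe_fd) < 0) {
        perror("socketpair");
        exit(1);
    }

    /* Both ends are then used with exactly the same read()/write()
     * calls as the pipe version. */
    memset(buf, 'x', sizeof(buf));
    if (write(pipe_fd[1], buf, sizeof(buf)) != (ssize_t)sizeof(buf))
        perror("write");
    if (read(pipe_fd[0], buf, sizeof(buf)) != (ssize_t)sizeof(buf))
        perror("read");

    close(pipe_fd[0]);
    close(pipe_fd[1]);
    return 0;
}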
[nicky@fidelispc] /tmp [148] cc ipc.c -lrt -DSET_RT_SCHED=1
[nicky@fidelispc] /tmp [149] ./a.out 4096 10000000
Sending a 4096 byte message 10000000 times.
Pipe send time: 26.131743
Pipe recv time: 26.132117
Queue send time: 18.576559
Queue recv time: 18.576592
The results were repeatable, the CPU load was evenly distributed, and the
ludicrous context-switching figures were gone. Perhaps I should have relabelled
'Pipe send time' as 'Socket send time'? The message queues still seem to give the
best results.
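(For anyone reading along: the 'Queue' numbers presumably come from POSIX
message queues, given the -lrt link, i.e. something along these lines. This is
only a stripped-down sketch, not the actual benchmark; the queue name and
attributes are invented:)

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 4096 };
    char buf[4096];

    /* The name "/ipc_test" is invented for this example. */
    mqd_t mq = mq_open("/ipc_test", O_RDWR | O_CREAT, 0600, &attr);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        exit(1);
    }

    /* One message in, one message out; a benchmark would loop this. */
    memset(buf, 'x', sizeof(buf));
    if (mq_send(mq, buf, sizeof(buf), 0) < 0)
        perror("mq_send");
    if (mq_receive(mq, buf, sizeof(buf), NULL) < 0)
        perror("mq_receive");

    mq_close(mq);
    mq_unlink("/ipc_test");
    return 0;
}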
Somebody should compare that to a shared-memory lockless ring buffer, although
I have a feeling it would not beat the message queues used here.
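If anyone does want to try it, I imagine the shape would be roughly this
(untested sketch, single producer and single consumer only, slot count and
message size invented, no fork() or timing loop shown):

/* Single-producer/single-consumer lockless ring buffer in a shared
 * anonymous mapping; there is no main() here. */
#define _DEFAULT_SOURCE           /* for MAP_ANONYMOUS under strict -std= */
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

#define SLOTS    1024             /* must be a power of two */
#define MSG_SIZE 4096

struct ring {
    _Atomic uint32_t head;        /* advanced by the producer */
    _Atomic uint32_t tail;        /* advanced by the consumer */
    char slots[SLOTS][MSG_SIZE];
};

/* The mapping would be shared with a child created later by fork(). */
static struct ring *ring_create(void)
{
    return mmap(NULL, sizeof(struct ring), PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
}

/* Producer: returns 0 on success, -1 if the ring is full. */
static int ring_push(struct ring *r, const char *msg)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

    if (head - tail == SLOTS)
        return -1;
    memcpy(r->slots[head & (SLOTS - 1)], msg, MSG_SIZE);
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 0;
}

/* Consumer: returns 0 on success, -1 if the ring is empty. */
static int ring_pop(struct ring *r, char *msg)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);

    if (head == tail)
        return -1;
    memcpy(msg, r->slots[tail & (SLOTS - 1)], MSG_SIZE);
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 0;
}

The acquire/release pairs on head and tail stand in for the locks; whether that
actually beats mq_send()/mq_receive() for 4k messages is exactly the question.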
Regards, nick.