> From: d@drobilla.net
> To: linux-audio-dev@lists.linuxaudio.org
> Date: Thu, 24 Nov 2011 19:10:26 -0500
> Subject: [LAD] Pipes vs. Message Queues
>
> I got curious, so I bashed out a quick program to benchmark pipes vs
> POSIX message queues. It just pumps a bunch of messages through the
> pipe/queue in a tight loop. The results were interesting:

Very interesting.

You might be running into some basic scheduler weirdness here, though,
and not something inherently wrong with the POSIX queues. I ran your
code here a few times in some different configurations. The results
with 1M messages had wild variance under SCHED_FIFO - sometimes 2s, 4s,
6s, etc. Not reliable, although without rescheduling they did seem more
consistent.

The runs below are with 10M messages to give longer run times:

a. no SCHED_FIFO, 10M cycles

[nicky@fidelispc] /tmp [65] cc ipc.c -lrt
[nicky@fidelispc] /tmp [66] ./a.out 4096 10000000
Sending a 4096 byte message 10000000 times.
Pipe recv time: 23.220948
Pipe send time: 23.220820
Queue recv time: 13.949289
Queue send time: 13.949226

b. SCHED_FIFO, again 10M cycles

[nicky@fidelispc] /tmp [69] cc ipc.c -lrt -DSET_RT_SCHED=1
[nicky@fidelispc] /tmp [70] ./a.out 4096 10000000
Sending a 4096 byte message 10000000 times.
Pipe send time: 34.514288
Pipe recv time: 34.514404
Queue send time: 19.004525
Queue recv time: 19.004427

This was on a dual-core laptop, 2.2GHz, no speed stepping; I was also
watching top whilst it ran.
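For reference, I'm assuming the SET_RT_SCHED build of ipc.c asks for
SCHED_FIFO the usual way, i.e. sched_setscheduler() on the calling
process, something like the sketch below. This is a guess at the shape
of it, not the actual code; priority 90 is just the value from your
mail, and it needs root (or CAP_SYS_NICE / an rtprio limit) to succeed.

/* Minimal sketch of requesting SCHED_FIFO for the calling process.
 * A guess at what a -DSET_RT_SCHED build would do, not the real ipc.c. */
#include <sched.h>
#include <stdio.h>

static int set_rt_sched(int priority)
{
    struct sched_param param = { .sched_priority = priority };

    if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
        perror("sched_setscheduler");
        return -1;
    }
    return 0;
}

int main(void)
{
    return set_rt_sched(90) != 0;
}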
Without FIFO the system CPU load spreads across both cores; they both
run up towards 100% load for both IPC methods.

With SCHED_FIFO/pipe the load does not distribute - I get 94% system
load on a single CPU whilst running through the loop. The POSIX queue
code did not show this effect.
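(For anyone reading along without the source to hand: as far as I can
tell the loop being timed is nothing fancier than a blocking
send/receive pair pumping fixed-size messages, roughly like the sketch
below. This is my own simplification, not Dave's actual ipc.c; the pipe
case is the same loop with write()/read() on a pipe fd instead.)

/* Rough shape of the message-pumping loop - a simplified sketch, not
 * the real ipc.c: the child blasts fixed-size messages into a blocking
 * POSIX queue, the parent drains it.  Build with: cc sketch.c -lrt */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define MSG_SIZE 4096
#define COUNT    1000000L

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = MSG_SIZE };
    mqd_t q = mq_open("/ipc_sketch", O_CREAT | O_RDWR, 0600, &attr);
    static char buf[MSG_SIZE];

    if (q == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    if (fork() == 0) {                          /* child: sender */
        for (long i = 0; i < COUNT; ++i)
            mq_send(q, buf, MSG_SIZE, 0);       /* blocks when queue full */
        return 0;
    }

    for (long i = 0; i < COUNT; ++i)            /* parent: receiver */
        mq_receive(q, buf, MSG_SIZE, NULL);     /* blocks when queue empty */

    wait(NULL);
    mq_close(q);
    mq_unlink("/ipc_sketch");
    return 0;
}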
Odd results, so I had a look at vmstat rather than top, and that gives
some indication of what is going on.

No rescheduling:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 1  0      0 6506924 312036 595456    0    0     0   196  750   1569  5 94  0  0
 1  0      0 6489188 312036 613344    0    0     0     0  961  25508  8 88  3  0
 1  0      0 6488724 312036 613536    0    0     0     0  991  21070  6 92  2  0
 1  0      0 6488812 312036 613552    0    0     0     0  697   1446  5 94  1  0

SCHED_FIFO pipe():

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 2  0      0 6516912 311372 586924    0    0     0   108  569 435180  5 46 44  4
 2  0      0 6516272 311372 586972    0    0     0     0  556 436042  3 47 50  0
 1  0      0 6516608 311372 586972    0    0     0     0  548 436482  6 46 48  0
 1  0      0 6516928 311372 586924    0    0     0     0  563 435930  2 51 48  0

Ouch. Almost 100 times the number of context switches. Is the whole
kernel bound up in a single thread doing process context switches?
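If anyone wants to pin down which side is doing the switching,
getrusage() reports per-process voluntary/involuntary context-switch
counts, so a couple of calls around the loop would cross-check the
vmstat numbers from inside the benchmark itself. Just a suggestion -
this isn't in ipc.c, something along these lines:

/* Print this process's context-switch counters; call once before and
 * once after the send/recv loop and compare the deltas. */
#include <stdio.h>
#include <sys/resource.h>

static void print_ctx_switches(const char *tag)
{
    struct rusage ru;

    if (getrusage(RUSAGE_SELF, &ru) == 0)
        printf("%s: voluntary=%ld involuntary=%ld\n",
               tag, ru.ru_nvcsw, ru.ru_nivcsw);
}

int main(void)
{
    print_ctx_switches("before");
    /* ... send/recv loop would go here ... */
    print_ctx_switches("after");
    return 0;
}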
SCHED_FIFO message queues - generally lower, far more variance:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 1  0      0 6510328 316116 587048    0    0     0     0  427 142445  2 83 15  0
 1  1      0 6509736 316120 587044    0    0     0    44  440    795  3 97  1  0
 1  0      0 6509900 316132 587052    0    0     0    64  439 281037  9 69 22  0
 1  0      0 6509436 316132 587048    0    0     0     0  437    796  3 95  2  0
 1  0      0 6509868 316160 587020    0    0     0   164  452 151290  5 81 15  0

Vanilla kernel: Linux fidelispc 2.6.32-35-generic #78-Ubuntu SMP Tue
Oct 11 16:11:24 UTC 2011 x86_64 GNU/Linux

I also tested with unbalanced priorities for the sender and receiver,
and with only one of them prioritised; the results were pretty much the
same as with 90/90.

Not sure if that helps any. I have another system with a single core;
I might try it out there later, since my results were so different from
yours.

Regards, nick.

"we have to make sure the old choice [Windows] doesn't disappear".
Jim Wong, president of IT products, Acer


> From: d@drobilla.net
> To: linux-audio-dev@lists.linuxaudio.org
> Date: Thu, 24 Nov 2011 19:10:26 -0500
> Subject: [LAD] Pipes vs. Message Queues
>
> I got curious, so I bashed out a quick program to benchmark pipes vs
> POSIX message queues. It just pumps a bunch of messages through the
> pipe/queue in a tight loop. The results were interesting:
>
> $ ./ipc 4096 1000000
> Sending a 4096 byte message 1000000 times.
> Pipe recv time: 6.881104
> Pipe send time: 6.880998
> Queue send time: 1.938512
> Queue recv time: 1.938581
>
> Whoah. Which made me wonder what happens with realtime priority
> (SCHED_FIFO priority 90):
>
> $ ./ipc 4096 1000000
> Sending a 4096 byte message 1000000 times.
> Pipe send time: 5.195232
> Pipe recv time: 5.195475
> Queue send time: 5.224862
> Queue recv time: 5.224987
>
> Pipes get a bit faster, and POSIX message queues get dramatically
> slower. Interesting.
>
> I am opening the queues as blocking here, and both sender and receiver
> are at the same priority, and aggressively pumping the queue as fast as
> they can, so there is a lot of competition and this is not an especially
> good model of any reality we care about, but it's interesting
> nonetheless.
>
> The first result really has me thinking how much Jack would benefit from
> using message queues instead of pipes and sockets. It looks like
> there's definitely potential here... I might try to write a more
> scientific benchmark that better emulates the case Jack would care about
> and measures wakeup latency, unless somebody beats me to it. That test
> could have the shm + wakeup pattern Jack actually uses and benchmark it
> vs. actually firing buffer payload over message queues...
>
> But I should be doing more pragmatic things, so here's this for now :)
>
> Program is here: http://drobilla.net/files/ipc.c
>
> Cheers,
>
> -dr
>
> _______________________________________________
> Linux-audio-dev mailing list
> Linux-audio-dev@lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-dev