David Robillard:
On Fri, 2011-11-25 at 15:21 +0100, Nick Copeland wrote:
[...]
So if the pipe() is replaced with
socketpair(PF_UNIX, SOCK_STREAM, PF_UNSPEC, pipe_fd);
then the issue I was seeing goes away. Perhaps the pipe() code has not
been optimised since sockets were developed to replace them when IPC
suddenly needed to be between hosts rather than processes? Pure conjecture.
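For reference, a minimal sketch of that substitution (assuming the same
pipe_fd[2] array the benchmark already passes to pipe(); error handling
abbreviated):

  #include <sys/socket.h>

  int pipe_fd[2];

  /* Original: unidirectional pipe; pipe_fd[0] is the read end,
   * pipe_fd[1] the write end. */
  /* if (pipe(pipe_fd) == -1) { perror("pipe"); } */

  /* Replacement: a connected pair of UNIX-domain stream sockets. Each fd
   * is bidirectional, but they can be used exactly like the pipe ends
   * above. PF_UNSPEC is 0, so the protocol argument is just "default". */
  if (socketpair(PF_UNIX, SOCK_STREAM, PF_UNSPEC, pipe_fd) == -1) {
      perror("socketpair");
  }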
Added sockets:
Normal scheduling:
$ ./ipc 4096 1000000
Sending a 4096 byte message 1000000 times.
Pipe recv time: 7.048740
Pipe send time: 7.048648
Socket send time: 2.365210
Socket recv time: 2.365292
Queue recv time: 2.072530
Queue send time: 2.072494
SCHED_FIFO:
$ ./ipc 4096 1000000
Sending a 4096 byte message 1000000 times.
Pipe send time: 5.279210
Pipe recv time: 5.279508
Socket send time: 2.709628
Socket recv time: 2.709645
Queue send time: 5.228892
Queue recv time: 5.228980
Interesting. I find it quite counter-intuitive that sockets are
significantly faster than the much simpler pipes.
Code at the same location:
http://drobilla.net/files/ipc.c
On my computer, sockets are sometimes slower than both pipes and queues.
However, by making sure the two processes don't change CPU during
runtime, sockets are always the fastest, and I get consistent results:
Realtime, same CPU:
[kjetil@ttleon c]$ ./ipc 4096 2000000
Sending a 4096 byte message 2000000 times.
Pipe send time: 7.762312
Pipe recv time: 7.762423
Socket send time: 4.607025
Socket recv time: 4.607103
Queue send time: 7.278029
Queue recv time: 7.278052
Realtime, different CPUs:
[kjetil@ttleon c]$ ./ipc 4096 2000000
Sending a 4096 byte message 2000000 times.
Pipe recv time: 5.520099
Pipe send time: 5.519875
Socket send time: 2.856027
Socket recv time: 2.856136
Queue recv time: 3.049226
Queue send time: 3.049159
Non-realtime, same CPU:
[kjetil@ttleon c]$ ./ipc 4096 2000000
Sending a 4096 byte message 2000000 times.
Pipe send time: 7.331822
Pipe recv time: 7.332272
Socket send time: 4.520883
Socket recv time: 4.520959
Queue send time: 7.103408
Queue recv time: 7.103448
Non-realtime, different CPUs:
[kjetil@ttleon c]$ ./ipc 4096 2000000
Sending a 4096 byte message 2000000 times.
Pipe recv time: 4.305804
Pipe send time: 4.305587
Socket send time: 2.899696
Socket recv time: 2.899806
Queue recv time: 3.028210
Queue send time: 3.028151
[kjetil@ttleon c]$ diff -u ipc.c~ ipc.c
--- ipc.c~ 2011-11-26 00:55:38.000000000 +0100
+++ ipc.c 2011-11-26 14:51:44.003407960 +0100
@@ -41,9 +41,10 @@
// Comment out this line to disable RT scheduling
//#define SET_RT_SCHED 1
-#ifdef SET_RT_SCHED
+#define _GNU_SOURCE
+#define __USE_GNU
#include <sched.h>
-#endif
+#include <pthread.h>
static inline double
elapsed_s(const struct timespec* start, const struct timespec* end)
@@ -177,6 +178,14 @@
return time;
}
+static void bound_thread_to_cpu(int cpu){
+ cpu_set_t set;
+ CPU_ZERO(&set);
+ CPU_SET(cpu,&set);
+ pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
+}
+
int
main(int argc, char** argv)
{
@@ -218,6 +227,11 @@
return 1;
}
+ if (child_pid ==0)
+ bound_thread_to_cpu(0);
+ else
+ bound_thread_to_cpu(1); // set to 0 to run both on the same CPU
+
#ifdef SET_RT_SCHED
struct sched_param param;
param.sched_priority = 90;
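For completeness, a roughly equivalent way to pin each process without
going through pthreads is sched_setaffinity() on the calling process
(a sketch assuming Linux and _GNU_SOURCE; this is not part of the posted
patch):

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  /* Pin the calling process to a single CPU; same effect as the
   * pthread_setaffinity_np(pthread_self(), ...) call in the patch above. */
  static void bind_process_to_cpu(int cpu)
  {
      cpu_set_t set;
      CPU_ZERO(&set);
      CPU_SET(cpu, &set);
      if (sched_setaffinity(0, sizeof(cpu_set_t), &set) == -1)
          perror("sched_setaffinity");
  }

Calling this right after the fork(), with CPU 0 in the child and CPU 1
(or 0) in the parent, should reproduce the same pinning as the diff.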