>> The apps already need to do some type of synchronization internally.
>> For example a player's disk thread, when its ringbuffer is full, needs
>> to wait for the process thread to consume some data and thus free up
>> space.
> Depends. If both ends are periodic processes no other synchronisation
> is required. And e.g. Jack callback is such a process, and likely to
> be one end.
How about the other "end" (i.e. the "disk thread")? Would that
normally be periodic?
OK, even if your disk thread is periodic for some reason, how does
that argue for library-level synchronization, *instead of* app-level
synchronization? In this case the cost would be the same -- no loss.
> You may be right about the (HW as opposed to compiler) re-ordering of
> data w.r.t. pointers on some architectures. But AFAIK, at least on Intel
> and AMD writes are not re-ordered w.r.t. other writes from the same CPU,
"From the same CPU"? Are we regressing to non-SMP-only schemes? And
"Intel and AMD" only?
How about multiple cores / CPUs / caches? Pipeline reordering is not
the main concern (though it can happen) -- cache coherence is.
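
To spell out what is actually needed: an ordering guarantee between
the data writes and the index update that holds across cores and
caches. A sketch of a single-producer / single-consumer buffer,
written here with C11 atomics purely for illustration (full-buffer
checks omitted, names made up):

#include <stdatomic.h>
#include <stddef.h>

#define RB_SIZE 1024                   /* illustrative size */

static float buffer[RB_SIZE];
static atomic_size_t write_idx;        /* shared producer index */

/* Producer: write the data first, then publish the index with release
   semantics, so the data can never become visible *after* the index. */
void push (float sample)
{
    size_t w = atomic_load_explicit (&write_idx, memory_order_relaxed);
    buffer[w % RB_SIZE] = sample;                      /* 1. the data  */
    atomic_store_explicit (&write_idx, w + 1,
                           memory_order_release);      /* 2. the index */
}

/* Consumer: acquire-load the index before touching the data; this
   pairs with the release store above on any architecture, regardless
   of caches or pipelines. */
int pop (size_t *read_idx, float *out)
{
    size_t w = atomic_load_explicit (&write_idx, memory_order_acquire);
    if (*read_idx == w)
        return 0;                      /* empty */
    *out = buffer[*read_idx % RB_SIZE];
    (*read_idx)++;
    return 1;
}

Volatile provides neither the ordering nor the atomicity of the index
update shown here.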
> Regarding the volatile declarations, at least on my version (which is
> slightly different from Jack's) there is no performance penalty.
Under which access patterns, with what compiler / optimization flags,
etc.? I would not make such generalizations... Volatile frustrates the
optimizer's ability to choose the optimal access patterns.
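
A trivial illustration of the cost (my example, not Jack code): with
the qualifier the compiler must issue a load on every access; without
it, the value can live in a register.

volatile float gain;                   /* hypothetical shared parameter */

void apply_gain (float *buf, int n)
{
    /* 'volatile' forces a fresh load of 'gain' on every iteration;
       without it the compiler could hoist the load out of the loop. */
    for (int i = 0; i < n; i++)
        buf[i] *= gain;
}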
> So I keep them just as reminders that these data are shared and may
> change in unexpected ways.
Hijacking volatile for *manual* type checking, at the cost of
frustrating the optimizer? Andrei Alexandrescu once advocated that
approach for *automatic* type checking in a famous article
(http://drdobbs.com/cpp/184403766). I believe the shortcomings have
been thoroughly discussed in comp.lang.c++.

If you want to remind yourself, you could group the variable(s) and
the mutex / semaphore in a structure, or name them similarly, etc.
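
Something like this (a sketch, not code from Jack) documents the
sharing just as well, and does not pessimize the generated code:

#include <stddef.h>
#include <pthread.h>

/* Shared state grouped with the lock that protects it -- a reminder
   that these members are touched by more than one thread. */
struct shared_position {
    pthread_mutex_t lock;              /* protects the fields below */
    size_t          frames_played;
    int             transport_rolling;
};

void advance (struct shared_position *p, size_t n)
{
    pthread_mutex_lock (&p->lock);
    p->frames_played += n;
    pthread_mutex_unlock (&p->lock);
}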
> You are wrong in saying that 'volatile' has no place in
> multi-threading. It is the correct way to go if you want to ensure
> that a value is e.g. read/written just once even if it is used many
> times:
It has no place in properly synchronized threaded programs. And it
cannot guarantee the correctness of un-synchronized threaded programs
(unless you assume non-SMP, non-hyper-threaded, Intel-like hardware --
*maybe*).
>     extern volatile int xval;    // written by other thread(s)
>
>     void f (void)
>     {
>         int x;
>         x = xval;
>         // use x many times, it won't change.
>     }
> Without the 'volatile', the compiler is free to read the memory value
> xval as many times as it wants, even if it has a local copy, and it
> probably will do so if you have many local variables.
What does that accomplish? You're merely frustrating the compiler's
ability to optimize. You're not achieving complete thread safety by
*adding* volatile -- not on arbitrary hardware. If your code is
completely thread-safe with volatile, it is also completely
thread-safe (and faster) without volatile. Volatile does not offer any
guarantees that cannot be later undone by the pipeline or CPU cache.
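
For comparison, a properly synchronized version of that example (a
sketch; the writers are assumed to take the same lock). The stability
of x comes from it being a plain local; the safety of the read comes
from the lock, not from 'volatile':

#include <pthread.h>

extern int xval;                       /* shared -- no volatile needed */
extern pthread_mutex_t xval_lock;      /* writers must take this too */

void f (void)
{
    int x;

    pthread_mutex_lock (&xval_lock);
    x = xval;                          /* one synchronized read */
    pthread_mutex_unlock (&xval_lock);

    /* use x many times; a plain local cannot change behind our back */
}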
-- Dan