[linux-audio-dev] pthread_mutex_unlock

Jens M Andreasen jens.andreasen at chello.se
Fri Jan 27 11:09:20 UTC 2006


On Thu, 2006-01-26 at 17:01 +0100, Alfons Adriaensen wrote:

<snip>
> But there is a completely different (but not very efficient) way to
> do this sort of thing. In this model, waiting for something (e.g. a
> lock) involves trying to get it inside a loop, and going to sleep
> when that fails. When for example a lock is released, all threads that
> could be waiting for it (or even in the limit all threads that are
> waiting for anything) are woken up, and when rescheduled and running
> again, try one more iteration of the loop. One of them will succeed
> and all the others will fail and go to sleep again. In such a system,
> the lock is not effectively taken until the new owner is actually
> running, and the original owner could take it back if it continues
> after the unlock. This method can involve a *lot* of rescheduling
> that is just a waste of time. But early UNIX kernels worked like
> this AFAIK (using hash lists to limit the number of wasted wakeups).
> 
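In C with pthreads, that model might be sketched along the lines below,
with a condition variable broadcast playing the wake-all role (all names
here are mine, not any real implementation):

  #include <pthread.h>

  /* A lock in the "wake them all and let them retry" style described
     above. The internal mutex/condvar only guard the 'held' flag. */
  typedef struct {
      pthread_mutex_t gate;
      pthread_cond_t  wake;
      int             held;
  } retry_lock_t;

  void retry_lock(retry_lock_t *l)
  {
      pthread_mutex_lock(&l->gate);
      while (l->held)                            /* try to get it ...  */
          pthread_cond_wait(&l->wake, &l->gate); /* ... or go to sleep */
      l->held = 1;                               /* one thread wins    */
      pthread_mutex_unlock(&l->gate);
  }

  void retry_unlock(retry_lock_t *l)
  {
      pthread_mutex_lock(&l->gate);
      l->held = 0;
      pthread_cond_broadcast(&l->wake);  /* wake *all* waiters; the losers
                                            re-test 'held' and sleep again */
      pthread_mutex_unlock(&l->gate);
  }

Note that between the broadcast and any waiter actually being scheduled,
the thread that just unlocked can take the lock straight back, exactly
as described above.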

It gets better. I dusted off an oldish book, the Pthreads Primer. It
has been a free download from Sun since 1996, so I expect quite a few
people to have read it (if not, go do it now.) 

On page 104 I find the following code snippet, which attempts to
explain how to create a basic mutual exclusion lock:


  try_again:	ldstub address -> register	! atomically fetch old value, store 0xFF
		compare register, 0		! was the lock free (zero)?
		branch_equal got_it		! yes: we now own it
		call go_to_sleep		! no: it is held, go to sleep
		jump try_again			! woken up -- try once more
  got_it:	return



The go_to_sleep/try_again sequence suggests a busy wait. Furthermore,
the surrounding text says: "Notice that there is nothing that guarantees
the thread that goes to sleep will ever get the lock when it wakes up."

Now, this is not exactly how pthread_mutex_lock should work. 

In all fairness though, the section at hand is about atomic operations
rather than the implementation of pthreads. (Here, ldstub is a SPARC
instruction guaranteed to load and store atomically system-wide, no
matter how many processors you have installed. It loads a memory
location and immediately sets that location to 0xFF.)
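For comparison, here is the same idea in portable C, with the GCC/Clang
__atomic_test_and_set builtin standing in for ldstub (it stores an
implementation-defined nonzero value rather than 0xFF, but the principle
is the same):

  static volatile char lock_byte = 0;  /* 0 = free, nonzero = held */

  void tas_lock(void)
  {
      /* Atomically set the byte and get its previous contents back,
         just as ldstub does. Nonzero means someone else holds it. */
      while (__atomic_test_and_set((void *)&lock_byte, __ATOMIC_ACQUIRE))
          ;  /* busy-wait -- or call go_to_sleep() as the book does */
  }

  void tas_unlock(void)
  {
      __atomic_store_n(&lock_byte, 0, __ATOMIC_RELEASE);
  }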

Let us flip a few pages forward and have a look at the actual
implementation. Here we have pretty pictures with sleeper queues just
like we expected. But we also have a description of a rare corner case,
where one running thread, not yet in the queue, can grab the lock before
any of those in the queue have had a chance to run. This again suggests
that a thread has to be not merely runnable, but actually running, to
get the lock. It also suggests that pthread_mutex_unlock will do
something like:

	mx[mutex] = 0				/* release first ...       */
	if (mxq[mutex]) wakeup_one(mxq[mutex])	/* ... then wake a sleeper */

... which is weird when we look at the pre- and post-conditions of the
mutex when there are threads waiting. It is bloody locked! So why would
we ever want to unlock it?

The described behaviour may or may not have been true for Solaris in
1996, but a more reasonable implementation would have been:

	if (mxq[mutex]) wakeup_one(mxq[mutex])	/* hand ownership straight over  */
	else mx[mutex] = 0			/* nobody waiting: really unlock */


Race condition, thou be gone!
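
Fleshed out in C, such a hand-off unlock could look like the sketch
below, with every sleeper parked on its own semaphore. The names are
mine, and I am not claiming this is what Solaris actually did:

  #include <pthread.h>
  #include <semaphore.h>
  #include <stddef.h>

  struct waiter {
      sem_t          sem;              /* this sleeper's private pillow */
      struct waiter *next;
  };

  struct handoff_mutex {
      pthread_spinlock_t guard;        /* protects 'held' and the queue */
      int                held;
      struct waiter     *head, *tail;
  };

  void hm_init(struct handoff_mutex *m)
  {
      pthread_spin_init(&m->guard, PTHREAD_PROCESS_PRIVATE);
      m->held = 0;
      m->head = m->tail = NULL;
  }

  void hm_lock(struct handoff_mutex *m)
  {
      pthread_spin_lock(&m->guard);
      if (!m->held) {                  /* uncontended: just take it */
          m->held = 1;
          pthread_spin_unlock(&m->guard);
          return;
      }
      struct waiter w;                 /* contended: queue up and sleep */
      sem_init(&w.sem, 0, 0);
      w.next = NULL;
      if (m->tail) m->tail->next = &w; else m->head = &w;
      m->tail = &w;
      pthread_spin_unlock(&m->guard);
      sem_wait(&w.sem);                /* when this returns, the lock is
                                          already ours                  */
      sem_destroy(&w.sem);
  }

  void hm_unlock(struct handoff_mutex *m)
  {
      pthread_spin_lock(&m->guard);
      struct waiter *w = m->head;
      if (w) {                         /* hand over: 'held' stays 1, so  */
          m->head = w->next;           /* no running thread can barge in */
          if (!m->head) m->tail = NULL;
          sem_post(&w->sem);
      } else {
          m->held = 0;                 /* nobody waiting: really unlock */
      }
      pthread_spin_unlock(&m->guard);
  }

The whole point is in hm_unlock: the mutex is never observably free
while there are sleepers, so the woken thread cannot lose the race the
book describes.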

-- 
mvh // Jens M Andreasen



