1 <!-- Copyright (C) 2003 Red Hat, Inc. -->
2 <!-- This material may be distributed only subject to the terms -->
3 <!-- and conditions set forth in the Open Publication License, v1.0 -->
4 <!-- or later (the latest version is presently available at -->
5 <!-- http://www.opencontent.org/openpub/). -->
6 <!-- Distribution of the work or derivative of the work in any -->
7 <!-- standard (paper) book form is prohibited unless prior -->
8 <!-- permission is obtained from the copyright holder. -->
13 ><meta name="MSSmartTagsPreventParsing" content="TRUE">
16 CONTENT="Modular DocBook HTML Stylesheet Version 1.76b+
19 TITLE="eCos Reference Manual"
20 HREF="ecos-ref.html"><LINK
22 TITLE="The eCos Kernel"
23 HREF="kernel.html"><LINK
26 HREF="kernel-alarms.html"><LINK
28 TITLE="Condition Variables"
29 HREF="kernel-condition-variables.html"></HEAD
40 SUMMARY="Header navigation table"
49 >eCos Reference Manual</TH
57 HREF="kernel-alarms.html"
71 HREF="kernel-condition-variables.html"
82 NAME="KERNEL-MUTEXES">Mutexes</H1
90 >cyg_mutex_init, cyg_mutex_destroy, cyg_mutex_lock, cyg_mutex_trylock, cyg_mutex_unlock, cyg_mutex_release, cyg_mutex_set_ceiling, cyg_mutex_set_protocol -- Synchronization primitive</DIV
92 CLASS="REFSYNOPSISDIV"
108 CLASS="FUNCSYNOPSISINFO"
109 >#include <cyg/kernel/kapi.h>
118 >void cyg_mutex_init</CODE
119 >(cyg_mutex_t* mutex);</CODE
125 >void cyg_mutex_destroy</CODE
126 >(cyg_mutex_t* mutex);</CODE
132 >cyg_bool_t cyg_mutex_lock</CODE
133 >(cyg_mutex_t* mutex);</CODE
139 >cyg_bool_t cyg_mutex_trylock</CODE
140 >(cyg_mutex_t* mutex);</CODE
146 >void cyg_mutex_unlock</CODE
147 >(cyg_mutex_t* mutex);</CODE
153 >void cyg_mutex_release</CODE
154 >(cyg_mutex_t* mutex);</CODE
160 >void cyg_mutex_set_ceiling</CODE
161 >(cyg_mutex_t* mutex, cyg_priority_t priority);</CODE
167 >void cyg_mutex_set_protocol</CODE
168 >(cyg_mutex_t* mutex, enum cyg_mutex_protocol protocol);</CODE
177 NAME="KERNEL-MUTEXES-DESCRIPTION"
182 >The purpose of mutexes is to let threads share resources safely. If
183 two or more threads attempt to manipulate a data structure with no
184 locking between them then the system may run for quite some time
185 without apparent problems, but sooner or later the data structure will
186 become inconsistent and the application will start behaving strangely
187 and is quite likely to crash. The same can apply even when
188 manipulating a single variable or some other resource. For example,
198 CLASS="PROGRAMLISTING"
199 >static volatile int counter = 0;
212 >Assume that after a certain period of time <TT
216 has a value of 42, and two threads A and B running at the same
220 >. Typically thread A
221 will read the value of <TT
225 increment this register to 43, and write this updated value back to
226 memory. Thread B will do the same, so usually
230 > will end up with a value of 44. However if
231 thread A is timesliced after reading the old value 42 but before
232 writing back 43, thread B will still read back the old value and will
233 also write back 43. The net result is that the counter only gets
234 incremented once, not twice, which depending on the application may or may not be a serious problem.
238 >Sections of code like the above which involve manipulating shared data
239 are generally known as critical regions. Code should claim a lock
240 before entering a critical region and release the lock when leaving.
241 Mutexes provide an appropriate synchronization primitive for this.
250 CLASS="PROGRAMLISTING"
251 >static volatile int counter = 0;
252 static cyg_mutex_t lock;
259 cyg_mutex_lock(&lock);
261 cyg_mutex_unlock(&lock);
268 >A mutex must be initialized before it can be used, by calling
272 >. This takes a pointer to a
276 > data structure which is typically
277 statically allocated, and may be part of a larger data structure. If a
278 mutex is no longer required and there are no threads waiting on it
281 >cyg_mutex_destroy</TT
285 >The main functions for using a mutex are
292 >cyg_mutex_unlock</TT
293 >. In normal operation
297 > will return success after claiming
298 the mutex lock, blocking if another thread currently owns the mutex.
299 However the lock operation may fail if other code calls
302 >cyg_mutex_release</TT
306 >cyg_thread_release</TT
307 >, so if these functions may get
308 used then it is important to check the return value. The current owner
309 of a mutex should call <TT
311 >cyg_mutex_unlock</TT
313 lock is no longer required. This operation must be performed by the
314 owner, not by another thread.
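Taken together, the usual lifecycle looks like the following sketch. The names counter_init, counter_increment and counter_fini are illustrative only, not part of the API, and the fragment assumes an eCos target: it will not build on a host system.

```c
#include <cyg/kernel/kapi.h>

static volatile int counter = 0;
static cyg_mutex_t lock;

/* Run once during initialization, before any thread uses the lock. */
void counter_init(void)
{
    cyg_mutex_init(&lock);
}

/* Called from thread context. */
void counter_increment(void)
{
    if (cyg_mutex_lock(&lock)) {   /* blocks while another thread owns it */
        counter++;                 /* critical region                     */
        cyg_mutex_unlock(&lock);   /* must be done by the owner           */
    }
    /* A failed lock means cyg_mutex_release() or cyg_thread_release()
       was invoked; the critical region was not entered. */
}

/* Only once the mutex is no longer required and no thread waits on it. */
void counter_fini(void)
{
    cyg_mutex_destroy(&lock);
}
```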
319 >cyg_mutex_trylock</TT
324 > that will always return
325 immediately, returning success or failure as appropriate. This
326 function is rarely useful. Typical code locks a mutex just before
327 entering a critical region, so if the lock cannot be claimed then
328 there may be nothing else for the current thread to do. Use of this
329 function may also cause a form of priority inversion if the
330 owner runs at a lower priority, because the priority inheritance code
331 will not be triggered. Instead the current thread continues running,
332 preventing the owner from getting any cpu time, completing the
333 critical region, and releasing the mutex.
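For completeness, a sketch of the non-blocking variant; as with the blocking lock, the return value must be checked. The function name try_housekeeping is illustrative, and the fragment assumes an eCos target.

```c
#include <cyg/kernel/kapi.h>

static cyg_mutex_t lock;

/* Attempt some optional housekeeping, but never block: if another
   thread currently owns the mutex, skip the work this time around. */
void try_housekeeping(void)
{
    if (cyg_mutex_trylock(&lock)) {
        /* ... critical region ... */
        cyg_mutex_unlock(&lock);
    }
    /* else: the lock is held elsewhere.  Note the owner receives no
       priority boost from this thread, so this pattern can itself
       cause a form of priority inversion. */
}
```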
338 >cyg_mutex_release</TT
339 > can be used to wake up all
340 threads that are currently blocked inside a call to
344 > for a specific mutex. These lock
345 calls will return failure. The current mutex owner is not affected.
351 NAME="KERNEL-MUTEXES-PRIORITY-INVERSION"
354 >Priority Inversion</H2
356 >The use of mutexes gives rise to a problem known as priority
357 inversion. In a typical scenario this requires three threads A, B, and
358 C, running at high, medium and low priority respectively. Thread A and
359 thread B are temporarily blocked waiting for some event, so thread C
360 gets a chance to run, needs to enter a critical region, and locks
361 a mutex. At this point threads A and B are woken up - the exact order
362 does not matter. Thread A needs to claim the same mutex but has to
363 wait until C has left the critical region and can release the mutex.
364 Meanwhile thread B works on something completely different and can
365 continue running without problems. Because thread C is running at a lower
366 priority than B it will not get a chance to run until B blocks for
367 some reason, and hence thread A cannot run either. The overall effect
368 is that a high-priority thread A cannot proceed because of a lower
369 priority thread B, and priority inversion has occurred.
372 >In simple applications it may be possible to arrange the code such
373 that priority inversion cannot occur, for example by ensuring that a
374 given mutex is never shared by threads running at different priority
375 levels. However this may not always be possible even at the
376 application level. In addition mutexes may be used internally by
377 underlying code, for example the memory allocation package, so careful
378 analysis of the whole system would be needed to be sure that priority
379 inversion cannot occur. Instead it is common practice to use one of
380 two techniques: priority ceilings and priority inheritance.
383 >Priority ceilings involve associating a priority with each mutex.
384 Usually this will match the highest priority thread that will ever
385 lock the mutex. When a thread running at a lower priority makes a
386 successful call to <TT
392 >cyg_mutex_trylock</TT
393 > its priority will be boosted to
394 that of the mutex. For example, given the previous example the
395 priority associated with the mutex would be that of thread A, so for
396 as long as it owns the mutex thread C will run in preference to thread
397 B. When C releases the mutex its priority drops to the normal value
398 again, allowing A to run and claim the mutex. Setting the
399 priority for a mutex involves a call to
402 >cyg_mutex_set_ceiling</TT
403 >, which is typically called
404 during initialization. It is possible to change the ceiling
405 dynamically but this will only affect subsequent lock operations, not
406 the current owner of the mutex.
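Setting a ceiling typically looks like the following sketch, run during initialization. The priority value 5 is an arbitrary example standing in for the priority of thread A, the highest-priority thread that will ever lock this mutex; lower numbers mean higher priority in eCos.

```c
#include <cyg/kernel/kapi.h>

static cyg_mutex_t lock;

void app_init(void)
{
    cyg_mutex_init(&lock);

    /* Thread A, the highest-priority user of this mutex, runs at
       priority 5, so use that as the ceiling.  Any lower-priority
       thread that locks the mutex is boosted to priority 5 until
       it unlocks again. */
    cyg_mutex_set_ceiling(&lock, 5);
}
```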
409 >Priority ceilings are very suitable for simple applications, where for
410 every thread in the system it is possible to work out which mutexes
411 will be accessed. For more complicated applications this may prove
412 difficult, especially if thread priorities change at run-time. An
413 additional problem occurs for any mutexes outside the application, for
414 example used internally within eCos packages. A typical eCos package
415 will be unaware of the details of the various threads in the system,
416 so it will have no way of setting suitable ceilings for its internal
417 mutexes. If those mutexes are not exported to application code then
418 using priority ceilings may not be viable. The kernel does provide a
422 >CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_DEFAULT_PRIORITY</TT
424 that can be used to set the default priority ceiling for all mutexes,
425 which may prove sufficient.
428 >The alternative approach is to use priority inheritance: if a thread
432 > for a mutex that is
433 currently owned by a lower-priority thread, then the owner will have
434 its priority raised to that of the current thread. Often this is more
435 efficient than priority ceilings because priority boosting only
436 happens when necessary, not for every lock operation, and the required
437 priority is determined at run-time rather than by static analysis.
438 However there are complications when multiple threads running at
439 different priorities try to lock a single mutex, or when the current
440 owner of a mutex then tries to lock additional mutexes, and this makes
441 the implementation significantly more complicated than priority
445 >There are a number of configuration options associated with priority
446 inversion. First, if after careful analysis it is known that priority
447 inversion cannot arise then the component
450 >CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL</TT
452 can be disabled. More commonly this component will be enabled, and one
456 >CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_INHERIT</TT
461 >CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_CEILING</TT
463 will be selected, so that one of the two protocols is available for
464 all mutexes. It is possible to select multiple protocols, so that some
465 mutexes can have priority ceilings while others use priority
466 inheritance or no priority inversion protection at all. Obviously this
467 flexibility will add to the code size and to the cost of mutex
468 operations. The default for all mutexes will be controlled by
471 >CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_DEFAULT</TT
473 and can be changed at run-time using
476 >cyg_mutex_set_protocol</TT
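When several protocols are configured in, the choice can be made per-mutex at run-time, as in the sketch below. The enumerator names CYG_MUTEX_CEILING and CYG_MUTEX_INHERIT are taken from the enum cyg_mutex_protocol argument shown in the synopsis; the mutex names are illustrative.

```c
#include <cyg/kernel/kapi.h>

static cyg_mutex_t app_lock;
static cyg_mutex_t alloc_lock;

void locks_init(void)
{
    cyg_mutex_init(&app_lock);
    cyg_mutex_init(&alloc_lock);

    /* Shared only by application threads with known priorities:
       a static ceiling suffices. */
    cyg_mutex_set_protocol(&app_lock, CYG_MUTEX_CEILING);
    cyg_mutex_set_ceiling(&app_lock, 5);

    /* Shared with code whose callers' priorities are unknown:
       let priority inheritance work them out at run-time. */
    cyg_mutex_set_protocol(&alloc_lock, CYG_MUTEX_INHERIT);
}
```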
480 >Priority inversion problems can also occur with other synchronization
481 primitives such as semaphores. For example there could be a situation
482 where a high-priority thread A is waiting on a semaphore, a
483 low-priority thread C needs to do just a little bit more work before
484 posting the semaphore, but a medium priority thread B is running and
485 preventing C from making progress. However a semaphore does not have
486 the concept of an owner, so there is no way for the system to know
487 that it is thread C which would next post to the semaphore. Hence
488 there is no way for the system to boost the priority of C
489 automatically and prevent the priority inversion. Instead situations
490 like this have to be detected by application developers and
491 appropriate precautions have to be taken, for example making sure that
492 all the threads run at suitable priorities at all times.
513 >The current implementation of priority inheritance within the eCos
514 kernel does not handle certain exceptional circumstances completely
515 correctly. Problems will only arise if a thread owns one mutex,
516 then attempts to claim another mutex, and there are other threads
517 attempting to lock these same mutexes. Although the system will
518 continue running, the current owners of the various mutexes involved
519 may not run at the priority they should. This situation never arises
520 in typical code because a mutex will only be locked for a small
521 critical region, and there is no need to manipulate other shared resources
522 inside this region. A more complicated implementation of priority
523 inheritance is possible but would add significant overhead and certain
524 operations would no longer be deterministic.
549 >Support for priority ceilings and priority inheritance is not
550 implemented for all schedulers. In particular neither priority
551 ceilings nor priority inheritance are currently available for the bitmap scheduler.
562 NAME="KERNEL-MUTEXES-ALTERNATIVES"
567 >In nearly all circumstances, if two or more threads need to share some
568 data then protecting this data with a mutex is the correct thing to
569 do. Mutexes are the only primitive that combine a locking mechanism
570 and protection against priority inversion problems. However this
571 functionality is achieved at a cost, and in exceptional circumstances
572 such as an application's most critical inner loop it may be desirable
573 to use some other means of locking.
576 >When a critical region is very small it is possible to lock the
577 scheduler, thus ensuring that no other thread can run until the
578 scheduler is unlocked again. This is achieved with calls to <A
579 HREF="kernel-schedcontrol.html"
582 >cyg_scheduler_lock</TT
587 >cyg_scheduler_unlock</TT
588 >. If the critical region
589 is sufficiently small then this can actually improve both performance
590 and dispatch latency because <TT
594 locks the scheduler for a brief period of time. This approach will not
595 work on SMP systems because another thread may already be running on a
596 different processor and accessing the critical region.
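A sketch of this approach, assuming an eCos target and a single-processor configuration; it is only appropriate when the region amounts to a handful of instructions.

```c
#include <cyg/kernel/kapi.h>

static volatile int counter = 0;

void counter_increment(void)
{
    cyg_scheduler_lock();     /* no other thread can be dispatched */
    counter++;                /* very small critical region        */
    cyg_scheduler_unlock();   /* normal scheduling resumes         */
}
```

Note that while the scheduler is locked no other thread will run at all, so keeping the region short matters for dispatch latency across the whole system, not just for the threads sharing the counter.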
599 >Another way of avoiding the use of mutexes is to make sure that all
600 threads that access a particular critical region run at the same
601 priority and configure the system with timeslicing disabled
604 >CYGSEM_KERNEL_SCHED_TIMESLICE</TT
606 Without timeslicing a thread can only be preempted by a higher-priority one,
607 or if it performs some operation that can block. This approach
608 requires that none of the operations in the critical region can block,
609 so for example it is not legal to call
612 >cyg_semaphore_wait</TT
613 >. It is also vulnerable to
614 any changes in the configuration or to the various thread priorities:
615 any such changes may now have unexpected side effects. It will not work on SMP systems.
622 NAME="KERNEL-MUTEXES-RECURSIVE"
625 >Recursive Mutexes</H2
627 >The implementation of mutexes within the eCos kernel does not support
628 recursive locks. If a thread has locked a mutex and then attempts to
629 lock the mutex again, typically as a result of some recursive call in
630 a complicated call graph, then either an assertion failure will be
631 reported or the thread will deadlock. This behaviour is deliberate.
632 When a thread has just locked a mutex associated with some data
633 structure, it can assume that that data structure is in a consistent
634 state. Before unlocking the mutex again it must ensure that the data
635 structure is again in a consistent state. Recursive mutexes allow a
636 thread to make arbitrary changes to a data structure, then in a
637 recursive call lock the mutex again while the data structure is still
638 inconsistent. The net result is that code can no longer make any
639 assumptions about data structure consistency, which defeats the
640 purpose of using mutexes.
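The failure mode can be illustrated with a sketch; table_update, table_audit and table_lock are hypothetical names, and the fragment assumes an eCos target.

```c
#include <cyg/kernel/kapi.h>

static cyg_mutex_t table_lock;

void table_audit(void);   /* defined below */

void table_update(void)
{
    cyg_mutex_lock(&table_lock);
    /* ... the table is temporarily inconsistent here ... */
    table_audit();               /* BUG: relocks table_lock */
    cyg_mutex_unlock(&table_lock);
}

void table_audit(void)
{
    /* Called with table_lock already held by this same thread: the
       second lock deadlocks, or fails an assertion in a debug build,
       because eCos mutexes are deliberately not recursive. */
    cyg_mutex_lock(&table_lock);
    /* ... */
    cyg_mutex_unlock(&table_lock);
}
```

Restructuring the code so that table_audit is only ever called with the lock already released, or splitting it into locked and lock-free variants, avoids the problem while preserving the consistency guarantee described above.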
646 NAME="KERNEL-MUTEXES-CONTEXT"
657 >cyg_mutex_set_ceiling</TT
661 >cyg_mutex_set_protocol</TT
662 > are normally called during
663 initialization but may also be called from thread context. The
664 remaining functions should only be called from thread context. Mutexes
665 serve as a mutual exclusion mechanism between threads, and cannot be
666 used to synchronize between threads and the interrupt handling
667 subsystem. If a critical region is shared between a thread and a DSR
668 then it must be protected using <A
669 HREF="kernel-schedcontrol.html"
672 >cyg_scheduler_lock</TT
677 >cyg_scheduler_unlock</TT
678 >. If a critical region is
679 shared between a thread and an ISR, it must be protected by disabling
680 or masking interrupts. Obviously these operations must be used with
681 care because they can affect dispatch and interrupt latencies.
689 SUMMARY="Footer navigation table"
700 HREF="kernel-alarms.html"
718 HREF="kernel-condition-variables.html"
742 >Condition Variables</TD