1 =======================================================
2 Semantics and Behavior of Atomic and Bitmask Operations
3 =======================================================
5 :Author: David S. Miller
7 This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.
11 Atomic Type And Operations
12 ==========================
14 The atomic_t type should be defined as a signed integer and
15 the atomic_long_t type as a signed long integer. Also, they should
16 be made opaque such that any kind of cast to a normal C integer type
17 will fail. Something like the following should suffice::
19 typedef struct { int counter; } atomic_t;
20 typedef struct { long counter; } atomic_long_t;
22 Historically, counter has been declared volatile. This is now discouraged.
23 See :ref:`Documentation/process/volatile-considered-harmful.rst
24 <volatile_considered_harmful>` for the complete rationale.
26 local_t is very similar to atomic_t. If the counter is per CPU and only
27 updated by one CPU, local_t is probably more appropriate. Please see
:ref:`Documentation/core-api/local_ops.rst <local_ops>` for the semantics of
local_t.
The first operations to implement for atomic_t's are the initializers and
plain writes. ::
34 #define ATOMIC_INIT(i) { (i) }
35 #define atomic_set(v, i) ((v)->counter = (i))
37 The first macro is used in definitions, such as::
39 static atomic_t my_counter = ATOMIC_INIT(1);
41 The initializer is atomic in that the return values of the atomic operations
42 are guaranteed to be correct reflecting the initialized value if the
43 initializer is used before runtime. If the initializer is used at runtime, a
44 proper implicit or explicit read memory barrier is needed before reading the
45 value with atomic_read from another thread.
47 As with all of the ``atomic_`` interfaces, replace the leading ``atomic_``
48 with ``atomic_long_`` to operate on atomic_long_t.
50 The second interface can be used at runtime, as in::
        struct foo { atomic_t counter; };
        ...

        struct foo *k;

        k = kmalloc(sizeof(*k), GFP_KERNEL);
        if (!k)
                return -ENOMEM;
        atomic_set(&k->counter, 0);
62 The setting is atomic in that the return values of the atomic operations by
63 all threads are guaranteed to be correct reflecting either the value that has
64 been set with this operation or set with another operation. A proper implicit
65 or explicit memory barrier is needed before the value set with the operation
66 is guaranteed to be readable with atomic_read from another thread.
Next, we have::

        #define atomic_read(v)  ((v)->counter)
72 which simply reads the counter value currently visible to the calling thread.
73 The read is atomic in that the return value is guaranteed to be one of the
74 values initialized or modified with the interface operations if a proper
75 implicit or explicit memory barrier is used after possible runtime
76 initialization by any other thread and the value is modified only with the
77 interface operations. atomic_read does not guarantee that the runtime
78 initialization by any other thread is visible yet, so the user of the
interface must take care of that with a proper implicit or explicit memory
barrier.
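As a hedged illustration of that rule (the flag init_done and the pointer
obj below are made-up names for this sketch), one way to publish a
runtime-initialized atomic_t to another thread is to pair explicit barriers
around a flag::

        /* initializing thread */
        atomic_set(&obj->counter, 42);
        smp_wmb();                      /* order the atomic_set() before the flag */
        WRITE_ONCE(init_done, 1);

        /* reading thread */
        if (READ_ONCE(init_done)) {
                smp_rmb();              /* pairs with the smp_wmb() above */
                val = atomic_read(&obj->counter);
        }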
84 ``atomic_read()`` and ``atomic_set()`` DO NOT IMPLY BARRIERS!
86 Some architectures may choose to use the volatile keyword, barriers, or
87 inline assembly to guarantee some degree of immediacy for atomic_read()
88 and atomic_set(). This is not uniformly guaranteed, and may change in
89 the future, so all users of atomic_t should treat atomic_read() and
90 atomic_set() as simple C statements that may be reordered or optimized
91 away entirely by the compiler or processor, and explicitly invoke the
92 appropriate compiler and/or memory barrier for each use case. Failure
93 to do so will result in code that may suddenly break when used with
94 different architectures or compiler optimizations, or even changes in
95 unrelated code which changes how the compiler optimizes the section
96 accessing atomic_t variables.
98 Properly aligned pointers, longs, ints, and chars (and unsigned
99 equivalents) may be atomically loaded from and stored to in the same
100 sense as described for atomic_read() and atomic_set(). The READ_ONCE()
101 and WRITE_ONCE() macros should be used to prevent the compiler from using
102 optimizations that might otherwise optimize accesses out of existence on
103 the one hand, or that might create unsolicited accesses on the other.
For example, consider the following code::

        while (a < 0)
                do_something();
110 If the compiler can prove that do_something() does not store to the
variable a, then the compiler is within its rights transforming this to
the following::

        if (a < 0)
                for (;;)
                        do_something();
119 If you don't want the compiler to do this (and you probably don't), then
120 you should use something like the following::
        while (READ_ONCE(a) < 0)
                do_something();
125 Alternatively, you could place a barrier() call in the loop.
For another example, consider the following code::

        tmp_a = a;
        do_something_with(tmp_a);
        do_something_else_with(tmp_a);
133 If the compiler can prove that do_something_with() does not store to the
134 variable a, then the compiler is within its rights to manufacture an
135 additional load as follows::
        tmp_a = a;
        do_something_with(tmp_a);
        tmp_a = a;
        do_something_else_with(tmp_a);
142 This could fatally confuse your code if it expected the same value
143 to be passed to do_something_with() and do_something_else_with().
145 The compiler would be likely to manufacture this additional load if
146 do_something_with() was an inline function that made very heavy use
147 of registers: reloading from variable a could save a flush to the
148 stack and later reload. To prevent the compiler from attacking your
149 code in this manner, write the following::
151 tmp_a = READ_ONCE(a);
152 do_something_with(tmp_a);
153 do_something_else_with(tmp_a);
155 For a final example, consider the following code, assuming that the
156 variable a is set at boot time before the second CPU is brought online
and never changed later, so that memory barriers are not needed::

        if (a)
                b = 9;
        else
                b = 42;
164 The compiler is within its rights to manufacture an additional store
by transforming the above code into the following::

        b = 42;
        if (a)
                b = 9;
171 This could come as a fatal surprise to other code running concurrently
172 that expected b to never have the value 42 if a was zero. To prevent
the compiler from doing this, write something like::

        if (a)
                WRITE_ONCE(b, 9);
        else
                WRITE_ONCE(b, 42);
180 Don't even -think- about doing this without proper use of memory barriers,
181 locks, or atomic operations if variable a can change at runtime!
185 ``READ_ONCE()`` OR ``WRITE_ONCE()`` DO NOT IMPLY A BARRIER!
Now, we move on to the atomic operation interfaces typically implemented with
188 the help of assembly code. ::
190 void atomic_add(int i, atomic_t *v);
191 void atomic_sub(int i, atomic_t *v);
192 void atomic_inc(atomic_t *v);
193 void atomic_dec(atomic_t *v);
195 These four routines add and subtract integral values to/from the given
196 atomic_t value. The first two routines pass explicit integers by
197 which to make the adjustment, whereas the latter two use an implicit
198 adjustment value of "1".
One very important aspect of these routines is that they DO NOT
201 require any explicit memory barriers. They need only perform the
202 atomic_t counter update in an SMP safe manner.
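For instance, a plain event counter that nothing else is ordered against
needs only these operations (a hedged sketch; the names are made up)::

        static atomic_t nr_events = ATOMIC_INIT(0);

        static void record_event(void)
        {
                atomic_inc(&nr_events); /* SMP safe, but implies no barrier */
        }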
Next comes::

        int atomic_inc_return(atomic_t *v);
        int atomic_dec_return(atomic_t *v);
209 These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.
213 Unlike the above routines, it is required that these primitives
214 include explicit memory barriers that are performed before and after
215 the operation. It must be done such that all memory operations before
216 and after the atomic operation calls are strongly ordered with respect
217 to the atomic operation itself.
219 For example, it should behave as if a smp_mb() call existed both
220 before and after the atomic operation.
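To make that requirement concrete, here is a hedged sketch (not any real
port): an architecture whose atomic instructions are unordered could satisfy
the rule by bracketing the operation with full barriers, where
arch_relaxed_inc_return() stands in for the hypothetical unordered
primitive::

        static inline int my_atomic_inc_return(atomic_t *v)
        {
                int ret;

                smp_mb();       /* order prior memory operations before the update */
                ret = arch_relaxed_inc_return(v);       /* hypothetical unordered atomic */
                smp_mb();       /* order the update before later memory operations */
                return ret;
        }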
222 If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

Let's move on::
228 int atomic_add_return(int i, atomic_t *v);
229 int atomic_sub_return(int i, atomic_t *v);
231 These behave just like atomic_{inc,dec}_return() except that an
232 explicit counter adjustment is given instead of the implicit "1".
233 This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next::
238 int atomic_inc_and_test(atomic_t *v);
239 int atomic_dec_and_test(atomic_t *v);
241 These two routines increment and decrement by 1, respectively, the
242 given atomic counter. They return a boolean indicating whether the
243 resulting counter value was zero or not.
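A typical use is the reference-count "put" path, shown here as a hedged
sketch (obj_destroy() is just a hypothetical destructor)::

        if (atomic_dec_and_test(&obj->refcnt))
                obj_destroy(obj);       /* last reference gone: safe to tear down */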
245 Again, these primitives provide explicit memory barrier semantics around
246 the atomic operation::
248 int atomic_sub_and_test(int i, atomic_t *v);
250 This is identical to atomic_dec_and_test() except that an explicit
251 decrement is given instead of the implicit "1". This primitive must
252 provide explicit memory barrier semantics around the operation::
254 int atomic_add_negative(int i, atomic_t *v);
256 The given increment is added to the given atomic counter value. A boolean
is returned which indicates whether the resulting counter value is negative.
This primitive must provide explicit memory barrier semantics around
the operation.

Then::
263 int atomic_xchg(atomic_t *v, int new);
265 This performs an atomic exchange operation on the atomic variable v, setting
266 the given new value. It returns the old value that the atomic variable v had
267 just before the operation.
269 atomic_xchg must provide explicit memory barriers around the operation. ::
271 int atomic_cmpxchg(atomic_t *v, int old, int new);
273 This performs an atomic compare exchange operation on the atomic value v,
274 with the given old and new values. Like all atomic_xxx operations,
275 atomic_cmpxchg will only satisfy its atomicity semantics as long as all
276 other accesses of \*v are performed through atomic_xxx operations.
278 atomic_cmpxchg must provide explicit memory barriers around the operation,
although if the comparison fails then no memory ordering guarantees are
required.
The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

Finally::
287 int atomic_add_unless(atomic_t *v, int a, int u);
289 If the atomic value v is not equal to u, this function adds a to v, and
returns non-zero. If v is equal to u then it returns zero. This is done as
an atomic operation.
293 atomic_add_unless must provide explicit memory barriers around the
294 operation unless it fails (returns 0).
atomic_inc_not_zero() is also provided; it is equivalent to
atomic_add_unless(v, 1, 0).
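As a hedged sketch (not the kernel's actual implementation),
atomic_add_unless() can be expressed as an atomic_cmpxchg() loop, which also
shows why the barrier semantics come along for free when it succeeds::

        static inline int my_atomic_add_unless(atomic_t *v, int a, int u)
        {
                int c = atomic_read(v);

                while (c != u) {
                        int old = atomic_cmpxchg(v, c, c + a);

                        if (old == c)
                                return 1;       /* added; cmpxchg supplied the barriers */
                        c = old;                /* lost a race, retry with the fresh value */
                }
                return 0;                       /* v was equal to u, nothing was done */
        }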
299 If a caller requires memory barrier semantics around an atomic_t
300 operation which does not return a value, a set of interfaces are
301 defined which accomplish this::
303 void smp_mb__before_atomic(void);
304 void smp_mb__after_atomic(void);
306 Preceding a non-value-returning read-modify-write atomic operation with
307 smp_mb__before_atomic() and following it with smp_mb__after_atomic()
308 provides the same full ordering that is provided by value-returning
309 read-modify-write atomic operations.
311 For example, smp_mb__before_atomic() can be used like so::
        obj->dead = 1;
        smp_mb__before_atomic();
        atomic_dec(&obj->ref_count);
317 It makes sure that all memory operations preceding the atomic_dec()
318 call are strongly ordered with respect to the atomic counter
319 operation. In the above example, it guarantees that the assignment of
320 "1" to obj->dead will be globally visible to other cpus before the
321 atomic counter decrement.
323 Without the explicit smp_mb__before_atomic() call, the
324 implementation could legally allow the atomic counter update visible
325 to other cpus before the "obj->dead = 1;" assignment.
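smp_mb__after_atomic() is the mirror image: it orders the atomic update
before everything that follows it. A hedged sketch (the field obj->pending
and the function wake_up_worker() are made-up names)::

        atomic_inc(&obj->pending);
        smp_mb__after_atomic();
        wake_up_worker();       /* ordered after the counter update is visible */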
327 A missing memory barrier in the cases where they are required by the
328 atomic_t implementation above can have disastrous results. Here is
329 an example, which follows a pattern occurring frequently in the Linux
330 kernel. It is the use of atomic counters to implement reference
331 counting, and it works such that once the counter falls to zero it can
332 be guaranteed that no other entity can be accessing the object::
        static void obj_list_add(struct obj *obj, struct list_head *head)
        {
                obj->active = 1;
                list_add(&obj->list, head);
        }

        static void obj_list_del(struct obj *obj)
        {
                list_del(&obj->list);
                obj->active = 0;
        }

        static void obj_destroy(struct obj *obj)
        {
                BUG_ON(obj->active);
                kfree(obj);
        }

        struct obj *obj_list_peek(struct list_head *head)
        {
                if (!list_empty(head)) {
                        struct obj *obj;

                        obj = list_entry(head->next, struct obj, list);
                        atomic_inc(&obj->refcnt);
                        return obj;
                }
                return NULL;
        }

        void obj_poke(void)
        {
                struct obj *obj;

                spin_lock(&global_list_lock);
                obj = obj_list_peek(&global_list);
                spin_unlock(&global_list_lock);

                if (obj) {
                        /* ... use obj ... */
                        if (atomic_dec_and_test(&obj->refcnt))
                                obj_destroy(obj);
                }
        }

        void obj_timeout(struct obj *obj)
        {
                spin_lock(&global_list_lock);
                obj_list_del(obj);
                spin_unlock(&global_list_lock);

                if (atomic_dec_and_test(&obj->refcnt))
                        obj_destroy(obj);
        }
391 This is a simplification of the ARP queue management in the generic
neighbour discovery code of the networking stack. Olaf Kirch found a bug with
respect to memory barriers in kfree_skb() that exposed the atomic_t memory barrier
394 requirements quite clearly.
396 Given the above scheme, it must be the case that the obj->active
397 update done by the obj list deletion be visible to other processors
398 before the atomic counter decrement is performed.
400 Otherwise, the counter could fall to zero, yet obj->active would still
401 be set, thus triggering the assertion in obj_destroy(). The error
402 sequence looks like this::
        obj_poke()                      obj_timeout()
        obj = obj_list_peek();
        ... gains ref to obj, refcnt=2
                                        obj_list_del(obj);
                                        obj->active = 0 ...
                                        ... visibility delayed ...
        atomic_dec_and_test()
        ... refcnt drops to 1 ...
                                        atomic_dec_and_test()
                                        ... refcount drops to 0 ...
                                                obj_destroy()
        BUG() triggers since obj->active
        still seen as one
                                        obj->active update visibility occurs
420 With the memory barrier semantics required of the atomic_t operations
421 which return values, the above sequence of memory visibility can never
422 happen. Specifically, in the above case the atomic_dec_and_test()
423 counter decrement would not become globally visible until the
424 obj->active update does.
426 As a historical note, 32-bit Sparc used to only allow usage of
427 24-bits of its atomic_t type. This was because it used 8 bits
428 as a spinlock for SMP safety. Sparc32 lacked a "compare and swap"
429 type instruction. However, 32-bit Sparc has since been moved over
430 to a "hash table of spinlocks" scheme, that allows the full 32-bit
431 counter to be realized. Essentially, an array of spinlocks are
432 indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation. Parisc uses the
same scheme.
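To illustrate that scheme (a hedged sketch with invented names, not
sparc32's actual code), the lock protecting a given atomic_t can be chosen
by hashing its address, and every operation is then performed under that
lock::

        #define NR_ATOMIC_LOCKS 32
        static spinlock_t atomic_locks[NR_ATOMIC_LOCKS];

        static spinlock_t *lock_for(atomic_t *v)
        {
                return &atomic_locks[((unsigned long)v >> 4) % NR_ATOMIC_LOCKS];
        }

        int hashed_atomic_add_return(int i, atomic_t *v)
        {
                unsigned long flags;
                int ret;

                spin_lock_irqsave(lock_for(v), flags);
                ret = (v->counter += i);
                spin_unlock_irqrestore(lock_for(v), flags);
                return ret;
        }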
436 Another note is that the atomic_t operations returning values are
437 extremely slow on an old 386.
Atomic Bitmask
==============

We will now cover the atomic bitmask operations. You will find that
444 their SMP and memory barrier semantics are similar in shape and scope
445 to the atomic_t ops above.
447 Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least that
size. The endianness of the bits within each "unsigned long" is the
450 native endianness of the cpu. ::
452 void set_bit(unsigned long nr, volatile unsigned long *addr);
453 void clear_bit(unsigned long nr, volatile unsigned long *addr);
454 void change_bit(unsigned long nr, volatile unsigned long *addr);
456 These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".
459 They must execute atomically, yet there are no implicit memory barrier
460 semantics required of these interfaces. ::
462 int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
463 int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
464 int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
466 Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.
470 WARNING! It is incredibly important that the value be a boolean,
i.e. "0" or "1". Do not try to be fancy and save a few instructions by
472 declaring the above to return "long" and just returning something like
473 "old_val & mask" because that will not work.
475 For one thing, this return value gets truncated to int in many code
476 paths using these interfaces, so on 64-bit if the bit is set in the
477 upper 32-bits then testers will never see that.
479 One great example of where this problem crops up are the thread_info
480 flag operations. Routines such as test_and_set_ti_thread_flag() chop
481 the return value into an int. There are other places where things
482 like this occur as well.
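To see concretely why this matters (a hedged illustration, not kernel code),
suppose bit 40 of a 64-bit word is being tested and the routine returned
"old_val & mask" instead of a boolean::

        unsigned long old_val = 1UL << 40;      /* the bit was already set */
        unsigned long mask    = 1UL << 40;

        long bad  = old_val & mask;             /* 0x10000000000 */
        int  seen = bad;                        /* truncated to int: becomes 0! */

        int  good = (old_val & mask) != 0;      /* boolean form: stays 1 */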
484 These routines, like the atomic_t counter operations returning values,
485 must provide explicit memory barrier semantics around their execution.
486 All memory operations before the atomic bit operation call must be
487 made visible globally before the atomic bit operation is made visible.
488 Likewise, the atomic bit operation must be visible globally before any
489 subsequent memory operation is made visible. For example::
        obj->dead = 1;
        if (test_and_set_bit(0, &obj->flags))
                /* ... */;
        obj->killed = 1;
496 The implementation of test_and_set_bit() must guarantee that
497 "obj->dead = 1;" is visible to cpus before the atomic memory operation
498 done by test_and_set_bit() becomes visible. Likewise, the atomic
499 memory operation done by test_and_set_bit() must become visible before
500 "obj->killed = 1;" is visible.
502 Finally there is the basic operation::
504 int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);
506 Which returns a boolean indicating if bit "nr" is set in the bitmask
507 pointed to by "addr".
509 If explicit memory barriers are required around {set,clear}_bit() (which do
not return a value, and thus do not need to provide memory barrier
511 semantics), two interfaces are provided::
513 void smp_mb__before_atomic(void);
514 void smp_mb__after_atomic(void);
They are used as follows, and are akin to their atomic_t operation
counterparts::
        /* All memory operations before this call will
         * be globally visible before the clear_bit().
         */
        smp_mb__before_atomic();
        clear_bit( ... );

        /* The clear_bit() will be visible before all
         * subsequent memory operations.
         */
        smp_mb__after_atomic();
530 There are two special bitops with lock barrier semantics (acquire/release,
531 same as spinlocks). These operate in the same way as their non-_lock/unlock
532 postfixed variants, except that they are to provide acquire/release semantics,
533 respectively. This means they can be used for bit_spin_trylock and
534 bit_spin_unlock type operations without specifying any more barriers. ::
536 int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
537 void clear_bit_unlock(unsigned long nr, unsigned long *addr);
538 void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
540 The __clear_bit_unlock version is non-atomic, however it still implements
541 unlock barrier semantics. This can be useful if the lock itself is protecting
542 the other bits in the word.
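As a hedged sketch (this is not the kernel's bit_spin_lock() implementation),
a tiny trylock built on these could look like::

        static inline int my_bit_trylock(unsigned long *word)
        {
                /* Acquire semantics: later accesses cannot move before this. */
                return !test_and_set_bit_lock(0, word);
        }

        static inline void my_bit_unlock(unsigned long *word)
        {
                /* Release semantics: earlier accesses cannot move after this. */
                clear_bit_unlock(0, word);
        }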
544 Finally, there are non-atomic versions of the bitmask operations
545 provided. They are used in contexts where some other higher-level SMP
546 locking scheme is being used to protect the bitmask, and thus less
547 expensive non-atomic operations may be used in the implementation.
548 They have names similar to the above bitmask operation interfaces,
549 except that two underscores are prefixed to the interface name. ::
551 void __set_bit(unsigned long nr, volatile unsigned long *addr);
552 void __clear_bit(unsigned long nr, volatile unsigned long *addr);
553 void __change_bit(unsigned long nr, volatile unsigned long *addr);
554 int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
555 int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
556 int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
These non-atomic variants also do not require any special memory
barrier semantics.
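For example (a hedged sketch; map and map_lock are made-up names), when a
spinlock already serializes every writer of a bitmap, the cheaper
non-atomic variants are sufficient::

        spin_lock(&map_lock);
        __set_bit(nr, map);             /* no other writer can race with us */
        __clear_bit(other_nr, map);
        spin_unlock(&map_lock);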
561 The routines xchg() and cmpxchg() must provide the same exact
memory-barrier semantics as the atomic and bit operations returning
values.
567 If someone wants to use xchg(), cmpxchg() and their variants,
568 linux/atomic.h should be included rather than asm/cmpxchg.h, unless the
569 code is in arch/* and can take care of itself.
571 Spinlocks and rwlocks have memory barrier expectations as well.
572 The rule to follow is simple:
574 1) When acquiring a lock, the implementation must make it globally
575 visible before any subsequent memory operation.
577 2) When releasing a lock, the implementation must make it such that
all previous memory operations are globally visible before the
lock release.
581 Which finally brings us to _atomic_dec_and_lock(). There is an
582 architecture-neutral version implemented in lib/dec_and_lock.c,
583 but most platforms will wish to optimize this in assembler. ::
585 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
Atomically decrement the given counter, and if it will drop to zero
atomically acquire the given spinlock and perform the decrement
of the counter to zero. If it does not drop to zero, do nothing
and return false; otherwise, return true with the spinlock held.
592 It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
594 sure the spinlock operation is globally visible before any
595 subsequent memory operation.
597 We can demonstrate this operation more clearly if we define
598 an abstract atomic operation::
600 long cas(long *mem, long old, long new);
602 "cas" stands for "compare and swap". It atomically:
604 1) Compares "old" with the value currently at "mem".
605 2) If they are equal, "new" is written to "mem".
606 3) Regardless, the current value at "mem" is returned.
As an example usage, here is what an atomic counter update
might look like::
        void example_atomic_inc(long *counter)
        {
                long old, new, ret;

                while (1) {
                        old = *counter;
                        new = old + 1;

                        ret = cas(counter, old, new);
                        if (ret == old)
                                break;
                }
        }
625 Let's use cas() in order to build a pseudo-C atomic_dec_and_lock()::
        int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
        {
                long old, new, ret;

                while (1) {
                        old = atomic_read(atomic);
                        new = old - 1;

                        if (new != 0) {
                                ret = cas(atomic, old, new);
                                if (ret == old)
                                        return 0;
                                continue;
                        }

                        /* The counter will drop to zero: grab the lock
                         * first, then make the final decrement visible.
                         */
                        spin_lock(lock);
                        ret = cas(atomic, old, new);
                        if (ret == old)
                                return 1;       /* zero reached, lock held */
                        spin_unlock(lock);
                }
        }
652 Now, as far as memory barriers go, as long as spin_lock()
653 strictly orders all subsequent memory operations (including
654 the cas()) with respect to itself, things will be fine.
656 Said another way, _atomic_dec_and_lock() must guarantee that
657 a counter dropping to zero is never made visible before the
658 spinlock being acquired.
662 Note that this also means that for the case where the counter is not
663 dropping to zero, there are no memory ordering requirements.
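For completeness, here is the usage pattern this primitive exists for, shown
as a hedged sketch reusing names similar to the reference-counting example
earlier (atomic_dec_and_lock() is the ordinary wrapper around
_atomic_dec_and_lock())::

        if (atomic_dec_and_lock(&obj->refcnt, &global_list_lock)) {
                /* Last reference dropped: the lock is held, so the object
                 * can be unlinked and freed without racing obj_list_peek().
                 */
                list_del(&obj->list);
                spin_unlock(&global_list_lock);
                kfree(obj);
        }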