1 ============================
2 LINUX KERNEL MEMORY BARRIERS
3 ============================
5 By: David Howells <dhowells@redhat.com>
6 Paul E. McKenney <paulmck@linux.vnet.ibm.com>
10 (*) Abstract memory access model.
15 (*) What are memory barriers?
17 - Varieties of memory barrier.
18 - What may not be assumed about memory barriers?
19 - Data dependency barriers.
20 - Control dependencies.
21 - SMP barrier pairing.
22 - Examples of memory barrier sequences.
23 - Read memory barriers vs load speculation.
26 (*) Explicit kernel barriers.
29 - CPU memory barriers.
32 (*) Implicit kernel memory barriers.
35 - Interrupt disabling functions.
36 - Sleep and wake-up functions.
37 - Miscellaneous functions.
39 (*) Inter-CPU locking barrier effects.
41 - Locks vs memory accesses.
42 - Locks vs I/O accesses.
44 (*) Where are memory barriers needed?
46 - Interprocessor interaction.
51 (*) Kernel I/O barrier effects.
53 (*) Assumed minimum execution ordering model.
55 (*) The effects of the cpu cache.
58 - Cache coherency vs DMA.
59 - Cache coherency vs MMIO.
61 (*) The things CPUs get up to.
63 - And then there's the Alpha.
72 ============================
73 ABSTRACT MEMORY ACCESS MODEL
74 ============================
76 Consider the following abstract model of the system:
	            :                :
	+-------+   :   +--------+   :   +-------+
	|       |   :   |        |   :   |       |
	| CPU 1 |<----->| Memory |<----->| CPU 2 |
	|       |   :   |        |   :   |       |
	+-------+   :   +--------+   :   +-------+
	    ^       :                :       ^
	    |       :   +--------+   :       |
	    +---------->| Device |<----------+
	            :   +--------+   :
	            :                :
101 Each CPU executes a program that generates memory access operations. In the
102 abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
103 perform the memory operations in any order it likes, provided program causality
104 appears to be maintained. Similarly, the compiler may also arrange the
105 instructions it emits in any order it likes, provided it doesn't affect the
106 apparent operation of the program.
108 So in the above diagram, the effects of the memory operations performed by a
109 CPU are perceived by the rest of the system as the operations cross the
110 interface between the CPU and rest of the system (the dotted lines).
113 For example, consider the following sequence of events:
	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = B;
	B = 4;		y = A;
121 The set of accesses as seen by the memory system in the middle can be arranged
122 in 24 different combinations:
124 STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
125 STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
126 STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
127 STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
128 STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
129 STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
	STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
	STORE B=4, ...
	...
and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3
142 Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores
were committed.
147 As a further example, consider this sequence of events:
	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;
155 There is an obvious data dependency here, as the value loaded into D depends on
156 the address retrieved from P by CPU 2. At the end of the sequence, any of the
157 following results are possible:
159 (Q == &A) and (D == 1)
160 (Q == &B) and (D == 2)
161 (Q == &B) and (D == 4)
Note that CPU 2 will never try to load C into D because the CPU will load P
164 into Q before issuing the load of *Q.
DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
171 locations, but the order in which the control registers are accessed is very
172 important. For instance, imagine an ethernet card with a set of internal
173 registers that are accessed through an address port register (A) and a data
port register (D). To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;
180 but this might show up as either of the following two sequences:
182 STORE *A = 5, x = LOAD *D
183 x = LOAD *D, STORE *A = 5
the second of which will almost certainly result in a malfunction, since it sets
186 the address _after_ attempting to read the register.
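The fix is to interpose a memory barrier between the two accesses. As a
sketch (assuming A and D point at the card's registers, mapped through a
region that is not strongly ordered), a mandatory barrier such as mb() is
needed, since the ordering matters even on a UP system; the available
barriers are covered under "Explicit kernel barriers" below:

	*A = 5;		/* select internal register 5 */
	mb();		/* the address store must reach the card first */
	x = *D;		/* only then read the data port */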
GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:
194 (*) On any given CPU, dependent memory accesses will be issued in order, with
195 respect to itself. This means that for:
197 Q = READ_ONCE(P); smp_read_barrier_depends(); D = READ_ONCE(*Q);
199 the CPU will issue the following memory operations:
201 Q = LOAD P, D = LOAD *Q
203 and always in that order. On most systems, smp_read_barrier_depends()
204 does nothing, but it is required for DEC Alpha. The READ_ONCE()
205 is required to prevent compiler mischief. Please note that you
206 should normally use something like rcu_dereference() instead of
open-coding smp_read_barrier_depends(); see the sketch just after this
list.
209 (*) Overlapping loads and stores within a particular CPU will appear to be
210 ordered within that CPU. This means that for:
212 a = READ_ONCE(*X); WRITE_ONCE(*X, b);
214 the CPU will only issue the following sequence of memory operations:
216 a = LOAD *X, STORE *X = b
220 WRITE_ONCE(*X, c); d = READ_ONCE(*X);
222 the CPU will only issue:
224 STORE *X = c, d = LOAD *X
(Loads and stores overlap if they are targeted at overlapping pieces of
memory).
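As an illustrative sketch of the first guarantee (P and Q as in the example
above), the open-coded barrier sequence is normally hidden inside a helper:

	Q = rcu_dereference(P);	/* subsumes READ_ONCE() and
				 * smp_read_barrier_depends() */
	D = READ_ONCE(*Q);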
229 And there are a number of things that _must_ or _must_not_ be assumed:
231 (*) It _must_not_ be assumed that the compiler will do what you want
232 with memory references that are not protected by READ_ONCE() and
233 WRITE_ONCE(). Without them, the compiler is within its rights to
234 do all sorts of "creative" transformations, which are covered in
235 the COMPILER BARRIER section.
237 (*) It _must_not_ be assumed that independent loads and stores will be issued
238 in the order given. This means that for:
240 X = *A; Y = *B; *D = Z;
242 we may get any of the following sequences:
244 X = LOAD *A, Y = LOAD *B, STORE *D = Z
245 X = LOAD *A, STORE *D = Z, Y = LOAD *B
246 Y = LOAD *B, X = LOAD *A, STORE *D = Z
247 Y = LOAD *B, STORE *D = Z, X = LOAD *A
248 STORE *D = Z, X = LOAD *A, Y = LOAD *B
249 STORE *D = Z, Y = LOAD *B, X = LOAD *A
251 (*) It _must_ be assumed that overlapping memory accesses may be merged or
252 discarded. This means that for:
254 X = *A; Y = *(A + 4);
256 we may get any one of the following sequences:
258 X = LOAD *A; Y = LOAD *(A + 4);
259 Y = LOAD *(A + 4); X = LOAD *A;
260 {X, Y} = LOAD {*A, *(A + 4) };
     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:
268 STORE *A = X; STORE *(A + 4) = Y;
269 STORE *(A + 4) = Y; STORE *A = X;
270 STORE {*A, *(A + 4) } = {X, Y};
272 And there are anti-guarantees:
274 (*) These guarantees do not apply to bitfields, because compilers often
275 generate code to modify these using non-atomic read-modify-write
sequences. Do not attempt to use bitfields to synchronize parallel
algorithms.
279 (*) Even in cases where bitfields are protected by locks, all fields
280 in a given bitfield must be protected by one lock. If two fields
281 in a given bitfield are protected by different locks, the compiler's
282 non-atomic read-modify-write sequences can cause an update to one
283 field to corrupt the value of an adjacent field.
285 (*) These guarantees apply only to properly aligned and sized scalar
286 variables. "Properly sized" currently means variables that are
287 the same size as "char", "short", "int" and "long". "Properly
288 aligned" means the natural alignment, thus no constraints for
289 "char", two-byte alignment for "short", four-byte alignment for
290 "int", and either four-byte or eight-byte alignment for "long",
291 on 32-bit and 64-bit systems, respectively. Note that these
292 guarantees were introduced into the C11 standard, so beware when
293 using older pre-C11 compilers (for example, gcc 4.6). The portion
294 of the standard containing this guarantee is Section 3.14, which
295 defines "memory location" as follows:
298 either an object of scalar type, or a maximal sequence
299 of adjacent bit-fields all having nonzero width
301 NOTE 1: Two threads of execution can update and access
separate memory locations without interfering with each other
305 NOTE 2: A bit-field and an adjacent non-bit-field member
306 are in separate memory locations. The same applies
307 to two bit-fields, if one is declared inside a nested
308 structure declaration and the other is not, or if the two
309 are separated by a zero-length bit-field declaration,
310 or if they are separated by a non-bit-field member
311 declaration. It is not safe to concurrently update two
312 bit-fields in the same structure if all members declared
313 between them are also bit-fields, no matter what the
314 sizes of those intervening bit-fields happen to be.
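To illustrate the bitfield anti-guarantees, here is a sketch of a layout
that is unsafe under the rules above (the structure and lock names are
hypothetical):

	struct foo {
		int a : 4;	/* protected by lock_a */
		int b : 4;	/* protected by lock_b -- BUG: 'a' and 'b'
				 * share one memory location, so the
				 * read-modify-write used to update 'b'
				 * can corrupt a concurrent update of 'a' */
	};

Separating the two fields with a non-bitfield member, or making them
full-sized scalars, places them in distinct memory locations.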
317 =========================
318 WHAT ARE MEMORY BARRIERS?
319 =========================
321 As can be seen above, independent memory operations are effectively performed
322 in random order, but this can be a problem for CPU-CPU interaction and for I/O.
323 What is required is some way of intervening to instruct the compiler and the
324 CPU to restrict the order.
326 Memory barriers are such interventions. They impose a perceived partial
327 ordering over the memory operations on either side of the barrier.
329 Such enforcement is important because the CPUs and other devices in a system
330 can use a variety of tricks to improve performance, including reordering,
331 deferral and combination of memory operations; speculative loads; speculative
332 branch prediction and various types of caching. Memory barriers are used to
333 override or suppress these tricks, allowing the code to sanely control the
334 interaction of multiple CPUs and/or devices.
337 VARIETIES OF MEMORY BARRIER
338 ---------------------------
340 Memory barriers come in four basic varieties:
342 (1) Write (or store) memory barriers.
344 A write memory barrier gives a guarantee that all the STORE operations
345 specified before the barrier will appear to happen before all the STORE
346 operations specified after the barrier with respect to the other
347 components of the system.
349 A write barrier is a partial ordering on stores only; it is not required
350 to have any effect on loads.
352 A CPU can be viewed as committing a sequence of store operations to the
353 memory system as time progresses. All stores before a write barrier will
354 occur in the sequence _before_ all the stores after the write barrier.
356 [!] Note that write barriers should normally be paired with read or data
357 dependency barriers; see the "SMP barrier pairing" subsection.
360 (2) Data dependency barriers.
362 A data dependency barrier is a weaker form of read barrier. In the case
363 where two loads are performed such that the second depends on the result
364 of the first (eg: the first load retrieves the address to which the second
365 load will be directed), a data dependency barrier would be required to
366 make sure that the target of the second load is updated before the address
367 obtained by the first load is accessed.
369 A data dependency barrier is a partial ordering on interdependent loads
370 only; it is not required to have any effect on stores, independent loads
371 or overlapping loads.
373 As mentioned in (1), the other CPUs in the system can be viewed as
374 committing sequences of stores to the memory system that the CPU being
375 considered can then perceive. A data dependency barrier issued by the CPU
376 under consideration guarantees that for any load preceding it, if that
377 load touches one of a sequence of stores from another CPU, then by the
378 time the barrier completes, the effects of all the stores prior to that
touched by the load will be perceptible to any loads issued after the data
dependency barrier.
382 See the "Examples of memory barrier sequences" subsection for diagrams
383 showing the ordering constraints.
385 [!] Note that the first load really has to have a _data_ dependency and
386 not a control dependency. If the address for the second load is dependent
387 on the first load, but the dependency is through a conditional rather than
388 actually loading the address itself, then it's a _control_ dependency and
389 a full read barrier or better is required. See the "Control dependencies"
390 subsection for more information.
392 [!] Note that data dependency barriers should normally be paired with
393 write barriers; see the "SMP barrier pairing" subsection.
396 (3) Read (or load) memory barriers.
398 A read barrier is a data dependency barrier plus a guarantee that all the
399 LOAD operations specified before the barrier will appear to happen before
400 all the LOAD operations specified after the barrier with respect to the
401 other components of the system.
403 A read barrier is a partial ordering on loads only; it is not required to
404 have any effect on stores.
Read memory barriers imply data dependency barriers, and so can substitute
for them.
409 [!] Note that read barriers should normally be paired with write barriers;
410 see the "SMP barrier pairing" subsection.
413 (4) General memory barriers.
415 A general memory barrier gives a guarantee that all the LOAD and STORE
416 operations specified before the barrier will appear to happen before all
417 the LOAD and STORE operations specified after the barrier with respect to
418 the other components of the system.
420 A general memory barrier is a partial ordering over both loads and stores.
422 General memory barriers imply both read and write memory barriers, and so
423 can substitute for either.
426 And a couple of implicit varieties:
428 (5) ACQUIRE operations.
430 This acts as a one-way permeable barrier. It guarantees that all memory
431 operations after the ACQUIRE operation will appear to happen after the
432 ACQUIRE operation with respect to the other components of the system.
ACQUIRE operations include LOCK operations and smp_load_acquire()
operations.
436 Memory operations that occur before an ACQUIRE operation may appear to
437 happen after it completes.
An ACQUIRE operation should almost always be paired with a RELEASE
operation.
443 (6) RELEASE operations.
445 This also acts as a one-way permeable barrier. It guarantees that all
446 memory operations before the RELEASE operation will appear to happen
447 before the RELEASE operation with respect to the other components of the
448 system. RELEASE operations include UNLOCK operations and
449 smp_store_release() operations.
451 Memory operations that occur after a RELEASE operation may appear to
452 happen before it completes.
454 The use of ACQUIRE and RELEASE operations generally precludes the need
455 for other sorts of memory barrier (but note the exceptions mentioned in
456 the subsection "MMIO write barrier"). In addition, a RELEASE+ACQUIRE
457 pair is -not- guaranteed to act as a full memory barrier. However, after
458 an ACQUIRE on a given variable, all memory accesses preceding any prior
459 RELEASE on that same variable are guaranteed to be visible. In other
460 words, within a given variable's critical section, all accesses of all
previous critical sections for that variable are guaranteed to have
completed.
464 This means that ACQUIRE acts as a minimal "acquire" operation and
465 RELEASE acts as a minimal "release" operation.
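As a minimal sketch of how (5) and (6) are typically used together (the
variable names are hypothetical), smp_store_release() publishes data that a
paired smp_load_acquire() consumes:

	CPU 1				CPU 2
	===============================	===============================
	WRITE_ONCE(data, 42);
	smp_store_release(&ready, 1);	r1 = smp_load_acquire(&ready);
					if (r1)
						r2 = READ_ONCE(data);

If CPU 2's acquire load observes CPU 1's release store (r1 == 1), then CPU 2
is also guaranteed to observe the store to 'data' that preceded it, so
r2 == 42.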
468 Memory barriers are only required where there's a possibility of interaction
469 between two CPUs or between a CPU and a device. If it can be guaranteed that
470 there won't be any such interaction in any particular piece of code, then
471 memory barriers are unnecessary in that piece of code.
474 Note that these are the _minimum_ guarantees. Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of
arch specific code.
479 WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
480 ----------------------------------------------
482 There are certain things that the Linux kernel memory barriers do not guarantee:
484 (*) There is no guarantee that any of the memory accesses specified before a
485 memory barrier will be _complete_ by the completion of a memory barrier
486 instruction; the barrier can be considered to draw a line in that CPU's
487 access queue that accesses of the appropriate type may not cross.
489 (*) There is no guarantee that issuing a memory barrier on one CPU will have
490 any direct effect on another CPU or any other hardware in the system. The
491 indirect effect will be the order in which the second CPU sees the effects
492 of the first CPU's accesses occur, but see the next point:
494 (*) There is no guarantee that a CPU will see the correct order of effects
495 from a second CPU's accesses, even _if_ the second CPU uses a memory
496 barrier, unless the first CPU _also_ uses a matching memory barrier (see
497 the subsection on "SMP Barrier Pairing").
499 (*) There is no guarantee that some intervening piece of off-the-CPU
500 hardware[*] will not reorder the memory accesses. CPU cache coherency
501 mechanisms should propagate the indirect effects of a memory barrier
502 between CPUs, but might not do so in order.
504 [*] For information on bus mastering DMA and coherency please read:
506 Documentation/PCI/pci.txt
507 Documentation/DMA-API-HOWTO.txt
508 Documentation/DMA-API.txt
511 DATA DEPENDENCY BARRIERS
512 ------------------------
514 The usage requirements of data dependency barriers are a little subtle, and
515 it's not always obvious that they're needed. To illustrate, consider the
516 following sequence of events:
	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      D = *Q;
527 There's a clear data dependency here, and it would seem that by the end of the
528 sequence, Q must be either &A or &B, and that:
530 (Q == &A) implies (D == 1)
531 (Q == &B) implies (D == 4)
533 But! CPU 2's perception of P may be updated _before_ its perception of B, thus
534 leading to the following situation:
536 (Q == &B) and (D == 2) ????
538 Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the
DEC Alpha).
542 To deal with this, a data dependency barrier or better must be inserted
543 between the address load and the data load:
	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      <data dependency barrier>
			      D = *Q;
555 This enforces the occurrence of one of the two implications, and prevents the
556 third possibility from arising.
558 A data-dependency barrier must also order against dependent writes:
	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      <data dependency barrier>
			      *Q = 5;
570 The data-dependency barrier must order the read into Q with the store
into *Q. This prohibits this outcome:

	(Q == &B) && (B == 4)
575 Please note that this pattern should be rare. After all, the whole point
576 of dependency ordering is to -prevent- writes to the data structure, along
577 with the expensive cache misses associated with those writes. This pattern
578 can be used to record rare error conditions and the like, and the ordering
579 prevents such records from being lost.
582 [!] Note that this extremely counterintuitive situation arises most easily on
583 machines with split caches, so that, for example, one cache bank processes
584 even-numbered cache lines and the other bank processes odd-numbered cache
585 lines. The pointer P might be stored in an odd-numbered cache line, and the
586 variable B might be stored in an even-numbered cache line. Then, if the
587 even-numbered bank of the reading CPU's cache is extremely busy while the
588 odd-numbered bank is idle, one can see the new value of the pointer P (&B),
589 but the old value of the variable B (2).
592 The data dependency barrier is very important to the RCU system,
593 for example. See rcu_assign_pointer() and rcu_dereference() in
594 include/linux/rcupdate.h. This permits the current target of an RCU'd
595 pointer to be replaced with a new modified target, without the replacement
596 target appearing to be incompletely initialised.
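A sketch of that pattern (the struct and variable names are hypothetical,
and error handling is omitted):

	struct obj { int a; int b; };
	struct obj __rcu *gp;		/* RCU-protected pointer */

	/* Publisher */
	p = kmalloc(sizeof(*p), GFP_KERNEL);
	p->a = 1;
	p->b = 2;
	rcu_assign_pointer(gp, p);	/* includes the needed write barrier */

	/* Reader, within an RCU read-side critical section */
	q = rcu_dereference(gp);	/* includes the dependency barrier */
	if (q)
		do_something_with(q->a);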
598 See also the subsection on "Cache Coherency" for a more thorough example.
CONTROL DEPENDENCIES
--------------------

A load-load control dependency requires a full read memory barrier, not
605 simply a data dependency barrier to make it work correctly. Consider the
606 following bit of code:
	q = READ_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		p = READ_ONCE(b);
	}
614 This will not have the desired effect because there is no actual data
615 dependency, but rather a control dependency that the CPU may short-circuit
616 by attempting to predict the outcome in advance, so that other CPUs see
617 the load from b as having happened before the load from a. In such a
case what's actually required is:

	q = READ_ONCE(a);
	if (q) {
		<read barrier>
		p = READ_ONCE(b);
	}
626 However, stores are not speculated. This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, p);
	}
634 Control dependencies pair normally with other types of barriers. That
635 said, please note that READ_ONCE() is not optional! Without the
636 READ_ONCE(), the compiler might combine the load from 'a' with other
637 loads from 'a', and the store to 'b' with other stores to 'b', with
638 possible highly counterintuitive effects on ordering.
640 Worse yet, if the compiler is able to prove (say) that the value of
641 variable 'a' is always non-zero, it would be well within its rights
642 to optimize the original example by eliminating the "if" statement
	q = READ_ONCE(a);
	b = p;  /* BUG: Compiler and CPU can both reorder!!! */
648 So don't leave out the READ_ONCE().
650 It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

	q = READ_ONCE(a);
	if (q) {
		barrier();
		WRITE_ONCE(b, p);
		do_something();
	} else {
		barrier();
		WRITE_ONCE(b, p);
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = READ_ONCE(a);
	barrier();
	WRITE_ONCE(b, p);  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* WRITE_ONCE(b, p); -- moved up, BUG!!! */
		do_something();
	} else {
		/* WRITE_ONCE(b, p); -- moved up, BUG!!! */
		do_something_else();
	}
678 Now there is no conditional between the load from 'a' and the store to
679 'b', which means that the CPU is within its rights to reorder them:
680 The conditional is absolutely required, and must be present in the
681 assembly code even after all compiler optimizations have been applied.
682 Therefore, if you need ordering in this example, you need explicit
683 memory barriers, for example, smp_store_release():
	q = READ_ONCE(a);
	if (q) {
		smp_store_release(&b, p);
		do_something();
	} else {
		smp_store_release(&b, p);
		do_something_else();
	}
694 In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, p);
		do_something();
	} else {
		WRITE_ONCE(b, r);
		do_something_else();
	}
706 The initial READ_ONCE() is still required to prevent the compiler from
707 proving the value of 'a'.
709 In addition, you need to be careful what you do with the local variable 'q',
710 otherwise the compiler might be able to guess the value and again remove
the needed conditional. For example:

	q = READ_ONCE(a);
	if (q % MAX) {
		WRITE_ONCE(b, p);
		do_something();
	} else {
		WRITE_ONCE(b, r);
		do_something_else();
	}
722 If MAX is defined to be 1, then the compiler knows that (q % MAX) is
723 equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = READ_ONCE(a);
	WRITE_ONCE(b, r);
	do_something_else();
730 Given this transformation, the CPU is not required to respect the ordering
731 between the load from variable 'a' and the store to variable 'b'. It is
732 tempting to add a barrier(), but this does not help. The conditional
733 is gone, and the barrier won't bring it back. Therefore, if you are
734 relying on this ordering, you should make sure that MAX is greater than
735 one, perhaps as follows:
	q = READ_ONCE(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
		WRITE_ONCE(b, p);
		do_something();
	} else {
		WRITE_ONCE(b, r);
		do_something_else();
	}
747 Please note once again that the stores to 'b' differ. If they were
748 identical, as noted earlier, the compiler could pull this store outside
749 of the 'if' statement.
751 You must also be careful not to rely too much on boolean short-circuit
evaluation. Consider this example:

	q = READ_ONCE(a);
	if (q || 1 > 0)
		WRITE_ONCE(b, 1);
758 Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating control dependency:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 1);
765 This example underscores the need to ensure that the compiler cannot
766 out-guess your code. More generally, although READ_ONCE() does force
767 the compiler to actually emit code for a given load, it does not force
768 the compiler to use the results.
770 Finally, control dependencies do -not- provide transitivity. This is
771 demonstrated by two related examples, with the initial values of
772 x and y both being zero:
	CPU 0			  CPU 1
	=======================   =======================
776 r1 = READ_ONCE(x); r2 = READ_ONCE(y);
777 if (r1 > 0) if (r2 > 0)
778 WRITE_ONCE(y, 1); WRITE_ONCE(x, 1);
780 assert(!(r1 == 1 && r2 == 1));
782 The above two-CPU example will never trigger the assert(). However,
783 if control dependencies guaranteed transitivity (which they do not),
784 then adding the following CPU would guarantee a related assertion:
	CPU 2
	=====================
	WRITE_ONCE(x, 2);
790 assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */
792 But because control dependencies do -not- provide transitivity, the above
793 assertion can fail after the combined three-CPU example completes. If you
794 need the three-CPU example to provide ordering, you will need smp_mb()
795 between the loads and stores in the CPU 0 and CPU 1 code fragments,
796 that is, just before or just after the "if" statements. Furthermore,
797 the original two-CPU example is very fragile and should be avoided.
799 These two examples are the LB and WWC litmus tests from this paper:
800 http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
801 site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.
In summary:

  (*) Control dependencies can order prior loads against later stores.
806 However, they do -not- guarantee any other sort of ordering:
807 Not prior loads against later loads, nor prior stores against
808 later anything. If you need these other forms of ordering,
809 use smp_rmb(), smp_wmb(), or, in the case of prior stores and
810 later loads, smp_mb().
812 (*) If both legs of the "if" statement begin with identical stores to
813 the same variable, then those stores must be ordered, either by
814 preceding both of them with smp_mb() or by using smp_store_release()
815 to carry out the stores. Please note that it is -not- sufficient
to use barrier() at the beginning of each leg of the "if" statement,
as optimizing compilers do not necessarily respect barrier() in this
case.
820 (*) Control dependencies require at least one run-time conditional
821 between the prior load and the subsequent store, and this
822 conditional must involve the prior load. If the compiler is able
823 to optimize the conditional away, it will have also optimized
824 away the ordering. Careful use of READ_ONCE() and WRITE_ONCE()
825 can help to preserve the needed conditional.
827 (*) Control dependencies require that the compiler avoid reordering the
828 dependency into nonexistence. Careful use of READ_ONCE() or
829 atomic{,64}_read() can help to preserve your control dependency.
830 Please see the COMPILER BARRIER section for more information.
832 (*) Control dependencies pair normally with other types of barriers.
834 (*) Control dependencies do -not- provide transitivity. If you
835 need transitivity, use smp_mb().
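Pulling these rules together, a minimal correct load-to-store control
dependency looks like this sketch (the variables are hypothetical):

	q = READ_ONCE(a);	/* READ_ONCE() keeps the load (and thus
				 * the conditional) from being optimized
				 * away or merged */
	if (q)			/* run-time conditional involving 'q' */
		WRITE_ONCE(b, 1); /* ordered after the load from 'a' */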
SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
842 always be paired. A lack of appropriate pairing is almost certainly an error.
844 General barriers pair with each other, though they also pair with most
845 other types of barriers, albeit without transitivity. An acquire barrier
846 pairs with a release barrier, but both may also pair with other barriers,
847 including of course general barriers. A write barrier pairs with a data
848 dependency barrier, a control dependency, an acquire barrier, a release
849 barrier, a read barrier, or a general barrier. Similarly a read barrier,
850 control dependency, or a data dependency barrier pairs with a write
851 barrier, an acquire barrier, a release barrier, or a general barrier:
	CPU 1		      CPU 2
	===============	      ===============
	WRITE_ONCE(a, 1);
	<write barrier>
	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
			      <read barrier>
			      y = READ_ONCE(a);
Or:

	CPU 1		      CPU 2
	===============	      ===============================
	a = 1;
	<write barrier>
	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
			      <data dependency barrier>
			      y = *x;
Or even:

	CPU 1		      CPU 2
	===============	      ===============================
	r1 = READ_ONCE(y);
	<general barrier>
	WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
			         <implicit control dependency>
			         WRITE_ONCE(y, 1);
			      }

	assert(r1 == 0 || r2 == 0);
Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.
887 [!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and
vice versa:
	CPU 1                               CPU 2
	===================                 ===================
893 WRITE_ONCE(a, 1); }---- --->{ v = READ_ONCE(c);
894 WRITE_ONCE(b, 2); } \ / { w = READ_ONCE(d);
895 <write barrier> \ <read barrier>
896 WRITE_ONCE(c, 3); } / \ { x = READ_ONCE(a);
897 WRITE_ONCE(d, 4); }---- --->{ y = READ_ONCE(b);
900 EXAMPLES OF MEMORY BARRIER SEQUENCES
901 ------------------------------------
903 Firstly, write barriers act as partial orderings on store operations.
904 Consider the following sequence of events:
	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5
915 This sequence of events is committed to the memory coherence system in an order
916 that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D,
STORE E }:
922 | |------>| C=3 | } /\
923 | | : +------+ }----- \ -----> Events perceptible to
924 | | : | A=1 | } \/ the rest of the system
926 | CPU 1 | : | B=2 | }
928 | | wwwwwwwwwwwwwwww } <--- At this point the write barrier
929 | | +------+ } requires all stores prior to the
930 | | : | E=5 | } barrier to be committed before
931 | | : +------+ } further stores may take place
936 | Sequence in which stores are committed to the
937 | memory system by CPU 1
941 Secondly, data dependency barriers act as partial orderings on data-dependent
942 loads. Consider the following sequence of events:
	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)
954 Without intervention, CPU 2 may perceive the events on CPU 1 in some
955 effectively random order, despite the write barrier issued by CPU 1:
958 | | +------+ +-------+ | Sequence of update
959 | |------>| B=2 |----- --->| Y->8 | | of perception on
960 | | : +------+ \ +-------+ | CPU 2
961 | CPU 1 | : | A=1 | \ --->| C->&Y | V
962 | | +------+ | +-------+
963 | | wwwwwwwwwwwwwwww | : :
965 | | : | C=&B |--- | : : +-------+
966 | | : +------+ \ | +-------+ | |
967 | |------>| D=4 | ----------->| C->&B |------>| |
968 | | +------+ | +-------+ | |
969 +-------+ : : | : : | |
973 Apparently incorrect ---> | | B->7 |------>| |
974 perception of B (!) | +-------+ | |
977 The load of X holds ---> \ | X->9 |------>| |
978 up the maintenance \ +-------+ | |
979 of coherence of B ----->| B->2 | +-------+
984 In the above example, CPU 2 perceives that B is 7, despite the load of *C
985 (which would be B) coming after the LOAD of C.
987 If, however, a data dependency barrier were to be placed between the load of C
988 and the load of *C (ie: B) on CPU 2:
	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<data dependency barrier>
				LOAD *C (reads B)
1001 then the following will occur:
1004 | | +------+ +-------+
1005 | |------>| B=2 |----- --->| Y->8 |
1006 | | : +------+ \ +-------+
1007 | CPU 1 | : | A=1 | \ --->| C->&Y |
1008 | | +------+ | +-------+
1009 | | wwwwwwwwwwwwwwww | : :
1011 | | : | C=&B |--- | : : +-------+
1012 | | : +------+ \ | +-------+ | |
1013 | |------>| D=4 | ----------->| C->&B |------>| |
1014 | | +------+ | +-------+ | |
1015 +-------+ : : | : : | |
1019 | | X->9 |------>| |
1021 Makes sure all effects ---> \ ddddddddddddddddd | |
1022 prior to the store of C \ +-------+ | |
1023 are perceptible to ----->| B->2 |------>| |
1024 subsequent loads +-------+ | |
1028 And thirdly, a read barrier acts as a partial order on loads. Consider the
1029 following sequence of events:
	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A
1040 Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
1041 some effectively random order, despite the write barrier issued by CPU 1:
1044 | | +------+ +-------+
1045 | |------>| A=1 |------ --->| A->0 |
1046 | | +------+ \ +-------+
1047 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1048 | | +------+ | +-------+
1049 | |------>| B=2 |--- | : :
1050 | | +------+ \ | : : +-------+
1051 +-------+ : : \ | +-------+ | |
1052 ---------->| B->2 |------>| |
1053 | +-------+ | CPU 2 |
1054 | | A->0 |------>| |
If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A
then the partial ordering imposed by CPU 1 will be perceived correctly by
CPU 2:
1081 | | +------+ +-------+
1082 | |------>| A=1 |------ --->| A->0 |
1083 | | +------+ \ +-------+
1084 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1085 | | +------+ | +-------+
1086 | |------>| B=2 |--- | : :
1087 | | +------+ \ | : : +-------+
1088 +-------+ : : \ | +-------+ | |
1089 ---------->| B->2 |------>| |
1090 | +-------+ | CPU 2 |
1093 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1094 barrier causes all effects \ +-------+ | |
1095 prior to the storage of B ---->| A->1 |------>| |
1096 to be perceptible to CPU 2 +-------+ | |
1100 To illustrate this more completely, consider what could happen if the code
1101 contained a load of A either side of the read barrier:
	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]
1114 Even though the two loads of A both occur after the load of B, they may both
1115 come up with different values:
1118 | | +------+ +-------+
1119 | |------>| A=1 |------ --->| A->0 |
1120 | | +------+ \ +-------+
1121 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1122 | | +------+ | +-------+
1123 | |------>| B=2 |--- | : :
1124 | | +------+ \ | : : +-------+
1125 +-------+ : : \ | +-------+ | |
1126 ---------->| B->2 |------>| |
1127 | +-------+ | CPU 2 |
1131 | | A->0 |------>| 1st |
1133 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1134 barrier causes all effects \ +-------+ | |
1135 prior to the storage of B ---->| A->1 |------>| 2nd |
1136 to be perceptible to CPU 2 +-------+ | |
1140 But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1141 before the read barrier completes anyway:
1144 | | +------+ +-------+
1145 | |------>| A=1 |------ --->| A->0 |
1146 | | +------+ \ +-------+
1147 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1148 | | +------+ | +-------+
1149 | |------>| B=2 |--- | : :
1150 | | +------+ \ | : : +-------+
1151 +-------+ : : \ | +-------+ | |
1152 ---------->| B->2 |------>| |
1153 | +-------+ | CPU 2 |
1157 ---->| A->1 |------>| 1st |
1159 rrrrrrrrrrrrrrrrr | |
1161 | A->1 |------>| 2nd |
1166 The guarantee is that the second load will always come up with A == 1 if the
1167 load of B came up with B == 2. No such guarantee exists for the first load of
1168 A; that may come up with either A == 0 or A == 1.
1171 READ MEMORY BARRIERS VS LOAD SPECULATION
1172 ----------------------------------------
1174 Many CPUs speculate with loads: that is they see that they will need to load an
1175 item from memory, and they find a time where they're not using the bus for any
1176 other loads, and so do the load in advance - even though they haven't actually
1177 got to that point in the instruction execution flow yet. This permits the
1178 actual load instruction to potentially complete immediately because the CPU
1179 already has the value to hand.
1181 It may turn out that the CPU didn't actually need the value - perhaps because a
1182 branch circumvented the load - in which case it can discard the value or just
1183 cache it for later use.
Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A
1194 Which might appear as this:
1198 --->| B->2 |------>| |
1202 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1203 division speculates on the +-------+ ~ | |
1207 Once the divisions are complete --> : : ~-->| |
1208 the CPU can then perform the : : | |
1209 LOAD with immediate effect : : +-------+
Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A
1223 will force any value speculatively obtained to be reconsidered to an extent
1224 dependent on the type of barrier used. If there was no change made to the
1225 speculated memory location, then the speculated value will just be used:
1229 --->| B->2 |------>| |
1233 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1234 division speculates on the +-------+ ~ | |
1239 rrrrrrrrrrrrrrrr~ | |
1246 but if there was an update or an invalidation from another CPU pending, then
1247 the speculation will be cancelled and the value reloaded:
1251 --->| B->2 |------>| |
1255 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1256 division speculates on the +-------+ ~ | |
1261 rrrrrrrrrrrrrrrrr | |
1263 The speculation is discarded ---> --->| A->1 |------>| |
1264 and an updated value is +-------+ | |
1265 retrieved : : +-------+
TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
1272 always provided by real computer systems. The following example
1273 demonstrates transitivity (also called "cumulativity"):
	CPU 1			CPU 2			CPU 3
	======================= ======================= =======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<general barrier>	<general barrier>
				LOAD Y			LOAD X
1282 Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1283 This indicates that CPU 2's load from X in some sense follows CPU 1's
1284 store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1285 store to Y. The question is then "Can CPU 3's load from X return 0?"
1287 Because CPU 2's load from X in some sense came after CPU 1's store, it
1288 is natural to expect that CPU 3's load from X must therefore return 1.
1289 This expectation is an example of transitivity: if a load executing on
1290 CPU A follows a load from the same variable executing on CPU B, then
1291 CPU A's load must either return the same value that CPU B's load did,
1292 or must return some later value.
1294 In the Linux kernel, use of general memory barriers guarantees
1295 transitivity. Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.
1299 However, transitivity is -not- guaranteed for read or write barriers.
1300 For example, suppose that CPU 2's general barrier in the above example
1301 is changed to a read barrier as shown below:
	CPU 1			CPU 2			CPU 3
	======================= ======================= =======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<read barrier>		<general barrier>
				LOAD Y			LOAD X
1310 This substitution destroys transitivity: in this example, it is perfectly
1311 legal for CPU 2's load from X to return 1, its load from Y to return 0,
1312 and CPU 3's load from X to return 0.
1314 The key point is that although CPU 2's read barrier orders its pair
1315 of loads, it does not guarantee to order CPU 1's store. Therefore, if
1316 this example runs on a system where CPUs 1 and 2 share a store buffer
1317 or a level of cache, CPU 2 might have early access to CPU 1's writes.
1318 General barriers are therefore required to ensure that all CPUs agree
1319 on the combined order of CPU 1's and CPU 2's accesses.
To reiterate, if your code requires transitivity, use general barriers
throughout.
1325 ========================
1326 EXPLICIT KERNEL BARRIERS
1327 ========================
The Linux kernel has a variety of different barriers that act at different
levels:
1332 (*) Compiler barrier.
1334 (*) CPU memory barriers.
1336 (*) MMIO write barrier.
COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();
1347 This is a general barrier -- there are no read-read or write-write
1348 variants of barrier(). However, READ_ONCE() and WRITE_ONCE() can be
1349 thought of as weak forms of barrier() that affect only the specific
1350 accesses flagged by the READ_ONCE() or WRITE_ONCE().
1352 The barrier() function has the following effects:
1354 (*) Prevents the compiler from reordering accesses following the
1355 barrier() to precede any accesses preceding the barrier().
1356 One example use for this property is to ease communication between
1357 interrupt-handler code and the code that was interrupted.
1359 (*) Within a loop, forces the compiler to load the variables used
1360 in that loop's conditional on each pass through that loop.
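For example, a sketch of the second property ('need_to_stop' is a
hypothetical shared flag):

	while (!need_to_stop)	/* reloaded on every pass... */
		barrier();	/* ...because barrier() discards the
				 * compiler's cached copy of memory */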
1362 The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
1363 optimizations that, while perfectly safe in single-threaded code, can
1364 be fatal in concurrent code. Here are some examples of these sorts
1367 (*) The compiler is within its rights to reorder loads and stores
1368 to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable. This means that
     the following code:

	a[0] = x;
	a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
1376 Prevent both the compiler and the CPU from doing this as follows:
1378 a[0] = READ_ONCE(x);
1379 a[1] = READ_ONCE(x);
1381 In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
1382 accesses from multiple CPUs to a single variable.
1384 (*) The compiler is within its rights to merge successive loads from
     the same variable. Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);
1391 into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);
1399 Use READ_ONCE() to prevent the compiler from doing this to you:
1401 while (tmp = READ_ONCE(a))
1402 do_something_with(tmp);
1404 (*) The compiler is within its rights to reload a variable, for example,
1405 in cases where high register pressure prevents the compiler from
1406 keeping all data of interest in registers. The compiler might
1407 therefore optimize the variable 'tmp' out of our previous example:
	while (tmp = a)
		do_something_with(tmp);
1412 This could result in the following code, which is perfectly safe in
1413 single-threaded code, but can be fatal in concurrent code:
	while (a)
		do_something_with(a);
1418 For example, the optimized version of this code could result in
1419 passing a zero to do_something_with() in the case where the variable
1420 a was modified by some other CPU between the "while" statement and
1421 the call to do_something_with().
1423 Again, use READ_ONCE() to prevent the compiler from doing this:
1425 while (tmp = READ_ONCE(a))
1426 do_something_with(tmp);
1428 Note that if the compiler runs short of registers, it might save
1429 tmp onto the stack. The overhead of this saving and later restoring
1430 is why compilers reload variables. Doing so is perfectly safe for
1431 single-threaded code, so you need to tell the compiler about cases
1432 where it is not safe.
1434 (*) The compiler is within its rights to omit a load entirely if it knows
1435 what the value will be. For example, if the compiler can prove that
1436 the value of variable 'a' is always zero, it can optimize this code:
	while (tmp = a)
		do_something_with(tmp);

     into this:

	do { } while (0);
1445 This transformation is a win for single-threaded code because it
1446 gets rid of a load and a branch. The problem is that the compiler
1447 will carry out its proof assuming that the current CPU is the only
1448 one updating variable 'a'. If variable 'a' is shared, then the
1449 compiler's proof will be erroneous. Use READ_ONCE() to tell the
1450 compiler that it doesn't know as much as it thinks it does:
1452 while (tmp = READ_ONCE(a))
1453 do_something_with(tmp);
1455 But please note that the compiler is also closely watching what you
1456 do with the value after the READ_ONCE(). For example, suppose you
1457 do the following and MAX is a preprocessor macro with the value 1:
1459 while ((tmp = READ_ONCE(a)) % MAX)
1460 do_something_with(tmp);
1462 Then the compiler knows that the result of the "%" operator applied
1463 to MAX will always be zero, again allowing the compiler to optimize
the code into near-nonexistence. (It will still load from the
variable 'a'.)
1467 (*) Similarly, the compiler is within its rights to omit a store entirely
1468 if it knows that the variable already has the value being stored.
1469 Again, the compiler assumes that the current CPU is the only one
1470 storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables. For example, suppose you have
     the following:

	a = 0;
	/* Code that does not store to variable a. */
	a = 0;
1478 The compiler sees that the value of variable 'a' is already zero, so
1479 it might well omit the second store. This would come as a fatal
surprise if some other CPU might have stored to variable 'a' in the
meantime.
     Use WRITE_ONCE() to prevent the compiler from making this sort of
     omission:

	WRITE_ONCE(a, 0);
	/* Code that does not store to variable a. */
	WRITE_ONCE(a, 0);
1490 (*) The compiler is within its rights to reorder memory accesses unless
1491 you tell it not to. For example, consider the following interaction
1492 between process-level code and an interrupt handler:
	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}
1506 There is nothing to prevent the compiler from transforming
1507 process_level() to the following, in fact, this might well be a
1508 win for single-threaded code:
	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}
If the interrupt occurs between these two statements, then
1517 interrupt_handler() might be passed a garbled msg. Use WRITE_ONCE()
1518 to prevent this as follows:
	void process_level(void)
	{
		WRITE_ONCE(msg, get_message());
		WRITE_ONCE(flag, true);
	}
	void interrupt_handler(void)
	{
		if (READ_ONCE(flag))
			process_message(READ_ONCE(msg));
	}
1532 Note that the READ_ONCE() and WRITE_ONCE() wrappers in
1533 interrupt_handler() are needed if this interrupt handler can itself
1534 be interrupted by something that also accesses 'flag' and 'msg',
1535 for example, a nested interrupt or an NMI. Otherwise, READ_ONCE()
1536 and WRITE_ONCE() are not needed in interrupt_handler() other than
1537 for documentation purposes. (Note also that nested interrupts
1538 do not typically occur in modern Linux kernels, in fact, if an
interrupt handler returns with interrupts enabled, you will get a
WARN_ONCE() splat.)
1542 You should assume that the compiler can move READ_ONCE() and
1543 WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
1544 barrier(), or similar primitives.
1546 This effect could also be achieved using barrier(), but READ_ONCE()
1547 and WRITE_ONCE() are more selective: With READ_ONCE() and
1548 WRITE_ONCE(), the compiler need only forget the contents of the
1549 indicated memory locations, while with barrier() the compiler must
discard the value of all memory locations that it has currently
1551 cached in any machine registers. Of course, the compiler must also
1552 respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
1553 though the CPU of course need not do so.
1555 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

	if (a)
		b = a;
	else
		b = 42;
     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;
1569 In single-threaded code, this is not only safe, but also saves
1570 a branch. Unfortunately, in concurrent code, this optimization
1571 could cause some other CPU to see a spurious value of 42 -- even
1572 if variable 'a' was never zero -- when loading variable 'b'.
     Use WRITE_ONCE() to prevent this as follows:

	if (a)
		WRITE_ONCE(b, a);
	else
		WRITE_ONCE(b, 42);
1580 The compiler can also invent loads. These are usually less
1581 damaging, but they can result in cache-line bouncing and thus in
poor performance and scalability. Use READ_ONCE() to prevent
invented loads.
 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which
     a single large access is replaced by multiple smaller accesses.
     For example, given an architecture having 16-bit store instructions
     with 7-bit immediate fields, the compiler might be tempted to use
     two 16-bit store-immediate instructions to implement the following
     32-bit store:

	p = 0x00010002;
1595 Please note that GCC really does use this sort of optimization,
1596 which is not surprising given that it would likely take more
1597 than two instructions to build the constant and then store it.
1598 This optimization can therefore be a win in single-threaded code.
1599 In fact, a recent bug (since fixed) caused GCC to incorrectly use
1600 this optimization in a volatile store. In the absence of such bugs,
1601 use of WRITE_ONCE() prevents store tearing in the following example:
1603 WRITE_ONCE(p, 0x00010002);
     Use of packed structures can also result in load and store tearing,
     as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;
1620 Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
1621 volatile markings, the compiler would be well within its rights to
1622 implement these three assignment statements as a pair of 32-bit
1623 loads followed by a pair of 32-bit stores. This would result in
1624 load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE()
1625 and WRITE_ONCE() again prevent tearing in this example:
	foo2.a = foo1.a;
	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
	foo2.c = foo1.c;
1631 All that aside, it is never necessary to use READ_ONCE() and
1632 WRITE_ONCE() on a variable that has been marked volatile. For example,
1633 because 'jiffies' is marked volatile, it is never necessary to
1634 say READ_ONCE(jiffies). The reason for this is that READ_ONCE() and
1635 WRITE_ONCE() are implemented as volatile casts, which has no effect when
1636 its argument is already marked volatile.
1638 Please note that these compiler barriers have no direct effect on the CPU,
1639 which may then reorder things however it wishes.
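For instance, a sketch of the difference (the variables are hypothetical):

	WRITE_ONCE(a, 1);
	barrier();		/* the compiler may not reorder across
				 * this... */
	WRITE_ONCE(b, 1);	/* ...but the CPU still may */

	WRITE_ONCE(a, 1);
	smp_wmb();		/* orders the stores as seen by other CPUs,
				 * and implies barrier() as well */
	WRITE_ONCE(b, 1);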
CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:
1647 TYPE MANDATORY SMP CONDITIONAL
1648 =============== ======================= ===========================
1649 GENERAL mb() smp_mb()
1650 WRITE wmb() smp_wmb()
1651 READ rmb() smp_rmb()
1652 DATA DEPENDENCY read_barrier_depends() smp_read_barrier_depends()
1655 All memory barriers except the data dependency barriers imply a compiler
1656 barrier. Data dependencies do not impose any additional compiler ordering.
1658 Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (eg. a[b] would have to load
1660 the value of b before loading a[b]), however there is no guarantee in
1661 the C specification that the compiler may not speculate the value of b
1662 (eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
1663 tmp = a[b]; ). There is also the problem of a compiler reloading b after
1664 having loaded a[b], thus having a newer copy of b than a[b]. A consensus
1665 has not yet been reached about these problems, however the READ_ONCE()
1666 macro is a good place to start looking.
1668 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1669 systems because it is assumed that a CPU will appear to be self-consistent,
1670 and will order overlapping accesses correctly with respect to itself.
1671 However, see the subsection on "Virtual Machine Guests" below.
1673 [!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.
1677 Mandatory barriers should not be used to control SMP effects, since mandatory
1678 barriers impose unnecessary overhead on both SMP and UP systems. They may,
1679 however, be used to control MMIO effects on accesses through relaxed memory I/O
1680 windows. These barriers are required even on non-SMP systems as they affect
1681 the order in which memory operations appear to a device by prohibiting both the
1682 compiler and the CPU from reordering them.
1685 There are some more advanced barrier functions:
1687 (*) smp_store_mb(var, value)
1689 This assigns the value to the variable and then inserts a full memory
1690 barrier after it. It isn't guaranteed to insert anything more than a
1691 compiler barrier in a UP compilation.
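One common use is in the sleep/wake-up pattern, sketched here
('event_indicated' is a hypothetical condition variable):

	for (;;) {
		smp_store_mb(current->state, TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

The full barrier ensures that the store to the task state cannot be
reordered after the test of the condition.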
1694 (*) smp_mb__before_atomic();
1695 (*) smp_mb__after_atomic();
1697 These are for use with atomic (such as add, subtract, increment and
1698 decrement) functions that don't return a value, especially when used for
1699 reference counting. These functions do not imply memory barriers.
1701 These are also used for atomic bitop functions that do not return a
1702 value (such as set_bit and clear_bit).
1704 As an example, consider a piece of code that marks an object as being dead
1705 and then decrements the object's reference count:
	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);
1711 This makes sure that the death mark on the object is perceived to be set
1712 *before* the reference counter is decremented.
1714 See Documentation/atomic_ops.txt for more information. See the "Atomic
1715 operations" subsection for information on where to use these.
1718 (*) lockless_dereference();
1719 This can be thought of as a pointer-fetch wrapper around the
1720 smp_read_barrier_depends() data-dependency barrier.
1722 This is also similar to rcu_dereference(), but in cases where
1723 object lifetime is handled by some mechanism other than RCU, for
example, when the objects are removed only when the system goes down.
1725 In addition, lockless_dereference() is used in some data structures
1726 that can be used both with and without RCU.
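	A sketch of its use ('gp' and the structure are hypothetical):

	p = lockless_dereference(gp);	/* dependency barrier is implied */
	if (p)
		do_something_with(p->a);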
1732 These are for use with consistent memory to guarantee the ordering
1733 of writes or reads of shared memory accessible to both the CPU and a
1736 For example, consider a device driver that shares memory with a device
1737 and uses a descriptor status value to indicate if the descriptor belongs
1738 to the device or the CPU, and a doorbell to notify it when new
1739 descriptors are available:
	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* force memory to sync before notifying device via MMIO */
		wmb();

		/* notify device of new descriptors */
		writel(DESC_NOTIFY, doorbell);
	}
The dma_rmb() allows us to guarantee that the device has released ownership
1763 before we read the data from the descriptor, and the dma_wmb() allows
1764 us to guarantee the data is written to the descriptor before the device
1765 can see it now has ownership. The wmb() is needed to guarantee that the
1766 cache coherent memory writes have completed before attempting a write to
1767 the cache incoherent MMIO region.
1769 See Documentation/DMA-API.txt for more information on consistent memory.
MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();
1779 This is a variation on the mandatory write barrier that causes writes to weakly
1780 ordered I/O regions to be partially ordered. Its effects may go beyond the
1781 CPU->Hardware interface and actually affect the hardware at some level.
1783 See the subsection "Locks vs I/O accesses" for more information.
1786 ===============================
1787 IMPLICIT KERNEL MEMORY BARRIERS
1788 ===============================
Some of the other functions in the Linux kernel imply memory barriers, amongst
1791 which are locking and scheduling functions.
1793 This specification is a _minimum_ guarantee; any particular architecture may
1794 provide more substantial guarantees, but these may not be relied upon outside
1795 of arch specific code.
LOCKING FUNCTIONS
-----------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
1809 In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1810 for each construct. These operations all imply certain barriers:
1812 (1) ACQUIRE operation implication:
1814 Memory operations issued after the ACQUIRE will be completed after the
1815 ACQUIRE operation has completed.
1817 Memory operations issued before the ACQUIRE may be completed after
1818 the ACQUIRE operation has completed. An smp_mb__before_spinlock(),
1819 combined with a following ACQUIRE, orders prior stores against
1820 subsequent loads and stores. Note that this is weaker than smp_mb()!
1821 The smp_mb__before_spinlock() primitive is free on many architectures.
 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due
     to being unable to get the lock immediately, or due to receiving an
     unblocked signal whilst asleep waiting for the lock to become available.
     Failed locks do not imply any sort of barrier.
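
As a concrete, purely illustrative sketch of the smp_mb__before_spinlock()
case from (1), with made-up variables X, Y and mylock:

	WRITE_ONCE(X, 1);		/* prior store */
	smp_mb__before_spinlock();
	spin_lock(&mylock);
	r = READ_ONCE(Y);		/* cannot be perceived before the store to X */
	spin_unlock(&mylock);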
[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory
barrier because it is possible for an access preceding the ACQUIRE to happen
after the ACQUIRE, and an access following the RELEASE to happen before the
RELEASE, and the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M
When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
not imply a full memory barrier.  Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M
It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

	Why does this work?

	One key point is that we are only talking about the CPU doing
	the reordering, not the compiler.  If the compiler (or, for
	that matter, the developer) switched the operations, deadlock
	_could_ occur.

	But suppose the CPU reordered the operations.  In this case,
	the unlock precedes the lock in the assembly code.  The CPU
	simply elected to try executing the later lock operation first.
	If there is a deadlock, this lock operation will simply spin (or
	try to sleep, but more on that later).  The CPU will eventually
	execute the unlock operation (which preceded the lock operation
	in the assembly code), which will unravel the potential deadlock,
	allowing the lock operation to succeed.

	But what if the lock is a sleeplock?  In that case, the code will
	try to enter the scheduler, where it will eventually encounter
	a memory barrier, which will force the earlier unlock operation
	to complete, again unraveling the deadlock.  There might be
	a sleep-unlock race, but the locking primitive needs to resolve
	such races properly in any case.
Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".
As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E
INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.
SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting
for the event and the global data used to indicate the event.  To make sure
that these appear to happen in the right order, the primitives to begin the
process of going to sleep, and the primitives to initiate a wake up imply
certain barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}
A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  smp_store_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();
Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A write memory barrier is implied by wake_up() and co. if and only if they
wake something up.  The barrier occurs before the task state is cleared, and
so sits between the STORE to indicate the event and the STORE to set
TASK_RUNNING:
	CPU 1				CPU 2
	===============================	===============================
	set_current_state();		STORE event_indicated
	  smp_store_mb();		wake_up();
	    STORE current->state	  <write barrier>
	    <general barrier>		  STORE current->state
	LOAD event_indicated
To repeat, this write memory barrier is present if and only if something
is actually awakened.  To see this, consider the following sequence of
events, where X and Y are both initially zero:

	CPU 1				CPU 2
	===============================	===============================
	X = 1;				STORE event_indicated
	smp_mb();			wake_up();
	Y = 1;				wait_event(wq, Y == 1);
	wake_up();			  load from Y sees 1, no memory barrier
					load from X might see 0

In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
to see 1.
The available waker functions include:

	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();
[!] Note that the memory barriers implied by the sleeper and the waker do
_not_ order multiple stores before the wake-up with respect to loads of those
stored values after the sleeper has called set_current_state().  For
instance, if the sleeper does:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
	__set_current_state(TASK_RUNNING);
	do_something(my_data);

and the waker does:

	my_data = value;
	event_indicated = 1;
	wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data.  In such a circumstance,
the code on both sides must interpolate its own memory barriers between the
separate data accesses.  Thus the above sleeper ought to do:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}

and the waker should do:

	my_data = value;
	smp_wmb();
	event_indicated = 1;
	wake_up(&event_wait_queue);
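
Pulling the pieces together, a self-contained sketch of the corrected pattern
might look like the following; it uses the example names above plus the
standard prepare_to_wait()/finish_wait() helpers so that the task really is
on the wait queue.  This is an illustration, not a canonical implementation:

	DEFINE_WAIT(wait);

	/* sleeper */
	for (;;) {
		prepare_to_wait(&event_wait_queue, &wait, TASK_INTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}
	finish_wait(&event_wait_queue, &wait);
	smp_rmb();			/* order the flag read before the data read */
	do_something(my_data);

	/* waker */
	my_data = value;
	smp_wmb();			/* publish the data before the flag */
	event_indicated = 1;
	wake_up(&event_wait_queue);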
MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.
===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier:
one that does affect memory access ordering on other CPUs, within the context
of conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------
Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	===============================	===============================
	WRITE_ONCE(*A, a);		WRITE_ONCE(*E, e);
	ACQUIRE M			ACQUIRE Q
	WRITE_ONCE(*B, b);		WRITE_ONCE(*F, f);
	WRITE_ONCE(*C, c);		WRITE_ONCE(*G, g);
	RELEASE M			RELEASE Q
	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q
ACQUIRES VS I/O ACCESSES
------------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by
the PCI bridge, because the PCI bridge does not necessarily participate in
the cache-coherence protocol, and is therefore incapable of issuing the
required read memory barriers.

For example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);

may be seen by the PCI bridge as follows:

	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.
What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);

this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
before either of the stores issued on CPU 2.
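
In driver terms the pattern might look like this sketch, where struct
foo_dev, its lock and the FOO_ADDR/FOO_DATA register offsets are all
hypothetical:

	static void foo_write_reg(struct foo_dev *fd, u32 reg, u32 val)
	{
		spin_lock(&fd->lock);
		writel(reg, fd->base + FOO_ADDR);
		writel(val, fd->base + FOO_DATA);
		mmiowb();	/* order the MMIO stores before the unlock */
		spin_unlock(&fd->lock);
	}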
Furthermore, following a store by a load from the same device obviates the
need for the mmiowb(), because the load forces the store to complete before
the load is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);

See Documentation/DocBook/deviceiobook.tmpl for more information.
=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.
INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process
is queued on the semaphore, by virtue of it having a piece of its stack
linked to the semaphore's list of waiting processes:
	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};
To wake up a particular waiter, the up_read() or up_write() functions have
to:

 (1) read the next pointer from this waiter's record to know where the next
     waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the
     semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.
Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means
that if the task pointer is cleared _before_ the next pointer in the list is
read, another CPU might start processing the waiter and might clobber the
waiter's stack before the up*() function has a chance to read the next
pointer.

Consider then what might happen to the above sequence of events:
	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.
The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before
the barrier will appear to happen before all the memory accesses after the
barrier with respect to the other CPUs on the system.  It does _not_
guarantee that all the memory accesses before the barrier will be complete by
the time the barrier instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.
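
For illustration, the up-side sequence might be sketched in C as below; this
is loosely modelled on the rwsem slow path, with names simplified, and is not
the actual implementation:

	struct rwsem_waiter *waiter =
		list_entry(sem->waiters.next, struct rwsem_waiter, list);
	struct list_head *next = waiter->list.next;	/* (1) */
	struct task_struct *tsk = waiter->task;		/* (2) */

	get_task_struct(tsk);
	smp_mb();		/* complete the loads above before... */
	waiter->task = NULL;	/* (3) ...the waiter record is given away */
	wake_up_process(tsk);	/* (4) */
	put_task_struct(tsk);	/* (5) */
	/* 'next' may now be used to continue the list walk; *waiter may
	   already have been clobbered by the woken task. */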
ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

Any atomic operation that modifies some state in memory and returns
information about the state (old or new) implies an SMP-conditional general
memory barrier (smp_mb()) on each side of the actual operation (with the
exception of explicit lock operations, described later).  These include:

	xchg();
	atomic_xchg();			atomic_long_xchg();
	atomic_inc_return();		atomic_long_inc_return();
	atomic_dec_return();		atomic_long_dec_return();
	atomic_add_return();		atomic_long_add_return();
	atomic_sub_return();		atomic_long_sub_return();
	atomic_inc_and_test();		atomic_long_inc_and_test();
	atomic_dec_and_test();		atomic_long_dec_and_test();
	atomic_sub_and_test();		atomic_long_sub_and_test();
	atomic_add_negative();		atomic_long_add_negative();
	test_and_set_bit();
	test_and_clear_bit();
	test_and_change_bit();

	/* when succeeds */
	atomic_cmpxchg();		atomic_long_cmpxchg();
	atomic_add_unless();		atomic_long_add_unless();

These are used for such things as implementing ACQUIRE-class and
RELEASE-class operations and adjusting reference counters towards object
destruction, and as such the implicit memory barrier effects are necessary.
The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as RELEASE-class
operations:

	atomic_set();
	set_bit();
	clear_bit();
	change_bit();

With these the appropriate explicit memory barrier should be used if
necessary (smp_mb__before_atomic() for instance).
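
For example, a RELEASE-class flag clear might be sketched as follows (the
flag bit and object are hypothetical):

	/* make all prior stores visible before the ownership flag clears */
	smp_mb__before_atomic();
	clear_bit(IN_USE_BIT, &obj->flags);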
The following also do _not_ imply memory barriers, and so may require
explicit memory barriers under some circumstances (smp_mb__before_atomic()
for instance):

	atomic_add();
	atomic_sub();
	atomic_inc();
	atomic_dec();

If they're used for statistics generation, then they probably don't need
memory barriers, unless there's a coupling between statistical data.

If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus the memory barrier,
unnecessary.

If they're used for constructing a lock of some description, then they
probably do need memory barriers as a lock primitive generally has to do
things in a specific order.

Basically, each usage case has to be carefully considered as to whether
memory barriers are needed or not.
The following operations are special locking primitives:

	test_and_set_bit_lock();
	clear_bit_unlock();
	__clear_bit_unlock();

These implement ACQUIRE-class and RELEASE-class operations.  These should be
used in preference to other operations when implementing locking primitives,
because their implementations can be optimised on many architectures.
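
By way of illustration, a minimal bit-spinlock sketch (the lock bit and word
are hypothetical):

	while (test_and_set_bit_lock(LOCK_BIT, &word))	/* ACQUIRE */
		cpu_relax();
	/* ... critical section ... */
	clear_bit_unlock(LOCK_BIT, &word);		/* RELEASE */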
[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full
memory barriers, and so barrier instructions are superfluous in conjunction
with them, and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.
ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're
just a set of memory locations.  To control such a device, the driver usually
has to make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate
accessor routines - such as inb() or writel() - which know how to make such
accesses appropriately sequential.  Whilst this, for the most part, renders
the explicit use of memory barriers unnecessary, there are a couple of
situations where they might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs,
     and so for _all_ general drivers locks should be used and mmiowb() must
     be issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering.

See Documentation/DocBook/deviceiobook.tmpl for more information.
INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus
the two parts of the driver may interfere with each other's attempts to
control or access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has
been handled, thus the interrupt handler does not need to lock against that.

However, consider a driver talking to an ethernet card that sports an address
register and a data register.  If that driver's core talks to the card under
interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA

If ordering rules are relaxed, it must be assumed that accesses done inside
an interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.

A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.
==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space, but
     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
     do indeed have special I/O space access cycles and instructions, but
     many CPUs don't have such a concept.

     The PCI bus, amongst others, defines an I/O space concept which - on
     such CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
     space.  However, it may also be mapped as a virtual I/O space in the
     CPU's memory map, particularly on those CPUs that don't support
     alternate I/O spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully honour
     that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other types
     of memory and I/O operation.

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of
     the MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same
     location is preferred[*], but a load from the same device or from
     configuration space should suffice for PCI.

     [*] NOTE! attempting to load from the same location as was written to
	 may cause a malfunction - consider the 16550 Rx/Tx serial registers
	 for example.
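
     A common way of exploiting this is to flush a posted write with a read
     from a harmless register on the same device, as in this sketch (the
     device structure and register offsets are invented):

	writel(FOO_GO, fd->base + FOO_CTRL);	/* start the operation */
	(void)readl(fd->base + FOO_STATUS);	/* flush the posted write; use
						   a different register, per
						   the 16550 caveat above */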
     Used with prefetchable I/O memory, an mmiowb() barrier may be required
     to force stores to be ordered.

     Please refer to the PCI specification for more information on
     interactions between PCI transactions.
 (*) readX_relaxed(), writeX_relaxed():

     These are similar to readX() and writeX(), but provide weaker memory
     ordering guarantees.  Specifically, they do not guarantee ordering with
     respect to normal memory accesses (e.g. DMA buffers) nor do they
     guarantee ordering with respect to LOCK or UNLOCK operations.  If the
     latter is required, an mmiowb() barrier can be used.  Note that relaxed
     accesses to the same peripheral are guaranteed to be ordered with
     respect to each other.
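
     For example, polling a free-running counter, where ordering against DMA
     buffers does not matter, might be sketched as (the register offset is
     hypothetical):

	u32 ticks = readl_relaxed(fd->base + FOO_TICK_COUNT);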
 (*) ioreadX(), iowriteX():

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().
========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it
will maintain the appearance of program causality with respect to itself.
Some CPUs (such as i386 or x86_64) are more constrained than others (such as
powerpc or frv), and so the most relaxed case (namely DEC Alpha) must be
assumed outside of arch-specific code.

This means that it must be considered that the CPU will execute its
instruction stream in any order it feels like - or even in parallel -
provided that if an instruction in the stream depends on an earlier
instruction, then that earlier instruction must be sufficiently complete[*]
before the later instruction may proceed; in other words: provided that the
appearance of causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.

Similarly, it has to be assumed that the compiler might reorder the
instruction stream in any way it sees fit, again provided the appearance of
causality is maintained.
============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected
to a certain extent by the caches that lie between CPUs and memory, and by
the memory coherence system that maintains the consistency of state in the
system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |    +--------+
	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |--->| Memory |
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |    +--------+
	                          :                 | Coherency |
	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
	|        |    |        |  :   |        |    |           |    |        |
	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |    +--------+
	+--------+    +--------+  :   +--------+    +-----------+
Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own
cache, it will still appear as if the full memory access had taken place as
far as the other CPUs are concerned since the cache coherency mechanisms will
migrate the cacheline over to the accessing CPU and propagate the effects
upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the
instructions generate load and store operations which then go into the queue
of memory accesses to be performed.  The core may place these in the queue in
any order it wishes, and continue execution until it is forced to wait for an
instruction to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends
on the properties of the memory window through which devices are accessed
and/or the use of any special device communication instructions the CPU may
have.
CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.

Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):

	+---------+  :      +----------+          +----------+
	|         |  :   +->| Cache A  |<-------->|          |
	|  CPU 1  |<-----+  +----------+          |          |
	|         |  :   +->| Cache B  |<-------->|          |
	+---------+  :      +----------+          |          |
	             :                            |  Memory  |
	+---------+  :      +----------+          |  System  |
	|         |  :   +->| Cache C  |<-------->|          |
	|  CPU 2  |<-----+  +----------+          |          |
	|         |  :   +->| Cache D  |<-------->|          |
	+---------+  :      +----------+          +----------+
	             :
Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still
     be resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that
     cache to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.
Imagine, then, that two writes are made on the first CPU, with a write
barrier between them to guarantee that they will appear to reach that CPU's
caches in the requisite order:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
			u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();			Make sure change to v is visible before
					 change to p
	<A:modify v=2>			v is now in cache A exclusively
	p = &v;
	<B:modify p=&v>			p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.
But now imagine that the second CPU wants to read those values:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
	...
			q = p;
			x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
			u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			x = *q;
			<C:read *q>	Reads from v before v updated in cache
			<C:unbusy>
			<C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually,
there's no guarantee that, without intervention, the order of update will be
the same as that committed on CPU 1.
To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
			u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			smp_read_barrier_depends()
			<C:unbusy>
			<C:commit v=2>
			x = *q;
			<C:read *q>	Reads from v after v updated in cache
This sort of problem can be encountered on DEC Alpha processors as they have
a split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the
various cachelets for normal memory accesses.  The semantics of the Alpha
removes the need for coordination in the absence of memory barriers.
CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.
In such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part
of the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until at such time as the
cacheline is discarded from the CPU's cache and reloaded.  To deal with this,
the appropriate part of the kernel must invalidate the overlapping bits of
the cache on each CPU.

See Documentation/cachetlb.txt for more information on cache management.
CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part
of a window in the CPU's memory space that has different properties assigned
than the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO
accesses may, in effect, overtake accesses to cached memory that were emitted
earlier.  A memory barrier isn't sufficient in such a case, but rather the
cache must be flushed between the cached memory write and the MMIO access if
the two are in any way dependent.
=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for
example, given the following piece of code to execute:

	a = READ_ONCE(*A);
	WRITE_ONCE(*B, b);
	c = READ_ONCE(*C);
	d = READ_ONCE(*D);
	WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation for
each instruction before moving on to the next one, leading to a definite
sequence of operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it
     prove to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been
     fetched at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent
     locations, thus cutting down on transaction setup costs (memory and PCI
     devices may both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the
     cache - there's no guarantee that the coherency management will be
     propagated in order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)
However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

	U = READ_ONCE(*A);
	WRITE_ONCE(*A, V);
	WRITE_ONCE(*A, W);
	X = READ_ONCE(*A);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view
of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
are -not- optional in the above example, as there are architectures
where a given CPU might reorder successive loads to the same location.
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this, for example, on Itanium the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.
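
A classic illustration of why READ_ONCE() matters to the compiler as well:
without it, the load below could be hoisted out of the loop, which would
spin forever (flag is a hypothetical shared variable):

	while (!READ_ONCE(*flag))
		cpu_relax();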
The compiler may also combine, discard or defer elements of the sequence
before the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be assumed
that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be reduced
to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.
AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to
have two semantically-related cache lines updated at separate times.  This is
where the data dependency barrier really becomes necessary as this
synchronises both caches with the memory coherence system, thus making it
seem like pointer changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache coherency" above.
VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even
if the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, the low-level virt_mb() etc. macros are
available.  These have the same effect as smp_mb() etc. when SMP is enabled,
but generate identical code for SMP and non-SMP systems.  For example,
virtual machine guests should use virt_mb() rather than smp_mb() when
synchronizing against a (possibly SMP) host.

These are equivalent to their smp_mb() etc. counterparts in all other
respects; in particular, they do not control MMIO effects: to control MMIO
effects, use mandatory barriers.
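
As an illustrative sketch, a guest publishing an entry to a ring shared with
the host might do the following; the ring layout is invented, and virt_wmb()
is the write-sided member of the virt_mb() family:

	ring->desc[idx] = desc;		/* fill the slot */
	virt_wmb();			/* publish the slot before the index */
	ring->avail_idx = idx + 1;	/* host may now consume the entry */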
============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

	Documentation/circular-buffers.txt

for details.
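
A minimal flavour of the technique, sketched with smp_load_acquire() /
smp_store_release() and the CIRC_* helpers from linux/circ_buf.h; the ring
structure is invented and RING_SIZE is assumed to be a power of two:

	struct ring {
		unsigned long head;	/* written only by the producer */
		unsigned long tail;	/* written only by the consumer */
		int buf[RING_SIZE];
	};

	bool produce(struct ring *r, int item)
	{
		unsigned long head = r->head;
		unsigned long tail = READ_ONCE(r->tail);

		if (!CIRC_SPACE(head, tail, RING_SIZE))
			return false;
		r->buf[head & (RING_SIZE - 1)] = item;
		smp_store_release(&r->head, head + 1);	/* publish the item */
		return true;
	}

	bool consume(struct ring *r, int *item)
	{
		unsigned long head = smp_load_acquire(&r->head);
		unsigned long tail = r->tail;

		if (!CIRC_CNT(head, tail, RING_SIZE))
			return false;
		*item = r->buf[tail & (RING_SIZE - 1)];
		smp_store_release(&r->tail, tail + 1);	/* free the slot */
		return true;
	}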
==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access