1 ============================
2 LINUX KERNEL MEMORY BARRIERS
3 ============================
5 By: David Howells <dhowells@redhat.com>
6 Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Contents:

(*) Abstract memory access model.

    - Device operations.
    - Guarantees.
15 (*) What are memory barriers?
17 - Varieties of memory barrier.
18 - What may not be assumed about memory barriers?
19 - Data dependency barriers.
20 - Control dependencies.
21 - SMP barrier pairing.
22 - Examples of memory barrier sequences.
    - Read memory barriers vs load speculation.
    - Transitivity.
26 (*) Explicit kernel barriers.
    - Compiler barrier.
    - CPU memory barriers.
    - MMIO write barrier.
32 (*) Implicit kernel memory barriers.
    - Locking functions.
    - Interrupt disabling functions.
36 - Sleep and wake-up functions.
37 - Miscellaneous functions.
39 (*) Inter-CPU locking barrier effects.
41 - Locks vs memory accesses.
42 - Locks vs I/O accesses.
44 (*) Where are memory barriers needed?
    - Interprocessor interaction.
    - Atomic operations.
    - Accessing devices.
    - Interrupts.
51 (*) Kernel I/O barrier effects.
53 (*) Assumed minimum execution ordering model.
(*) The effects of the CPU cache.

    - Cache coherency.
58 - Cache coherency vs DMA.
59 - Cache coherency vs MMIO.
61 (*) The things CPUs get up to.
63 - And then there's the Alpha.
72 ============================
73 ABSTRACT MEMORY ACCESS MODEL
74 ============================
76 Consider the following abstract model of the system:
                    :                :
        +-------+   :   +--------+   :   +-------+
        |       |   :   |        |   :   |       |
        | CPU 1 |<----->| Memory |<----->| CPU 2 |
        |       |   :   |        |   :   |       |
        +-------+   :   +--------+   :   +-------+
            ^       :       ^        :       ^
            |       :       |        :       |
            |       :       v        :       |
            |       :   +--------+   :       |
            |       :   |        |   :       |
            +---------->| Device |<----------+
                    :   |        |   :
                    :   +--------+   :
                    :                :
101 Each CPU executes a program that generates memory access operations. In the
102 abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
103 perform the memory operations in any order it likes, provided program causality
104 appears to be maintained. Similarly, the compiler may also arrange the
105 instructions it emits in any order it likes, provided it doesn't affect the
106 apparent operation of the program.
108 So in the above diagram, the effects of the memory operations performed by a
109 CPU are perceived by the rest of the system as the operations cross the
110 interface between the CPU and rest of the system (the dotted lines).
113 For example, consider the following sequence of events:
        CPU 1           CPU 2
        =============== ===============
        { A == 1; B == 2 }
        A = 3;          x = A;
        B = 4;          y = B;
121 The set of accesses as seen by the memory system in the middle can be arranged
122 in 24 different combinations:
124 STORE A=3, STORE B=4, x=LOAD A->3, y=LOAD B->4
125 STORE A=3, STORE B=4, y=LOAD B->4, x=LOAD A->3
126 STORE A=3, x=LOAD A->3, STORE B=4, y=LOAD B->4
127 STORE A=3, x=LOAD A->3, y=LOAD B->2, STORE B=4
128 STORE A=3, y=LOAD B->2, STORE B=4, x=LOAD A->3
129 STORE A=3, y=LOAD B->2, x=LOAD A->3, STORE B=4
130 STORE B=4, STORE A=3, x=LOAD A->3, y=LOAD B->4
and can thus result in four different combinations of values:

        x == 1, y == 2
        x == 1, y == 4
        x == 3, y == 2
        x == 3, y == 4
Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.
147 As a further example, consider this sequence of events:
        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;          Q = P;
        P = &B          D = *Q;
155 There is an obvious data dependency here, as the value loaded into D depends on
156 the address retrieved from P by CPU 2. At the end of the sequence, any of the
157 following results are possible:
159 (Q == &A) and (D == 1)
160 (Q == &B) and (D == 2)
161 (Q == &B) and (D == 4)
Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.
DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
171 locations, but the order in which the control registers are accessed is very
172 important. For instance, imagine an ethernet card with a set of internal
173 registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;
180 but this might show up as either of the following two sequences:
182 STORE *A = 5, x = LOAD *D
183 x = LOAD *D, STORE *A = 5
the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
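The way to deal with this is to interpose a mandatory memory barrier between
the two accesses.  A minimal sketch, assuming A and D are pointers to the
device's address and data port registers:

        *A = 5;                 /* select internal register 5 */
        mb();                   /* the address must reach the device first */
        x = *D;                 /* then read the register's contents */

In practice a driver would issue such accesses through the kernel's I/O
accessor functions; see the "KERNEL I/O BARRIER EFFECTS" section below.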
==========
GUARANTEES
==========

There are some minimal guarantees that may be expected of a CPU:
194 (*) On any given CPU, dependent memory accesses will be issued in order, with
195 respect to itself. This means that for:
        Q = ACCESS_ONCE(P); smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);
199 the CPU will issue the following memory operations:
201 Q = LOAD P, D = LOAD *Q
203 and always in that order. On most systems, smp_read_barrier_depends()
204 does nothing, but it is required for DEC Alpha. The ACCESS_ONCE()
205 is required to prevent compiler mischief. Please note that you
206 should normally use something like rcu_dereference() instead of
207 open-coding smp_read_barrier_depends().
209 (*) Overlapping loads and stores within a particular CPU will appear to be
210 ordered within that CPU. This means that for:
212 a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;
214 the CPU will only issue the following sequence of memory operations:
216 a = LOAD *X, STORE *X = b
    And for:

        ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);
222 the CPU will only issue:
224 STORE *X = c, d = LOAD *X
    (Loads and stores overlap if they are targeted at overlapping pieces of
    memory).
229 And there are a number of things that _must_ or _must_not_ be assumed:
231 (*) It _must_not_ be assumed that the compiler will do what you want with
232 memory references that are not protected by ACCESS_ONCE(). Without
233 ACCESS_ONCE(), the compiler is within its rights to do all sorts
234 of "creative" transformations:
236 (-) Repeat the load, possibly getting a different value on the second
237 and subsequent loads. This is especially prone to happen when
238 register pressure is high.
240 (-) Merge adjacent loads and stores to the same location. The most
        familiar example is the transformation from:

                while (a)
                        do_something();

        to:

                if (a)
                        for (;;)
                                do_something();
252 Using ACCESS_ONCE() as follows prevents this sort of optimization:
                while (ACCESS_ONCE(a))
                        do_something();
257 (-) "Store tearing", where a single store in the source code is split
258 into smaller stores in the object code. Note that gcc really
259 will do this on some architectures when storing certain constants.
260 It can be cheaper to do a series of immediate stores than to
261 form the constant in a register and then to store that register.
    (-) "Load tearing", which splits loads in a manner analogous to store
        tearing.  (A sketch of using ACCESS_ONCE() to defeat these
        transformations follows this list.)
266 (*) It _must_not_ be assumed that independent loads and stores will be issued
267 in the order given. This means that for:
269 X = *A; Y = *B; *D = Z;
271 we may get any of the following sequences:
273 X = LOAD *A, Y = LOAD *B, STORE *D = Z
274 X = LOAD *A, STORE *D = Z, Y = LOAD *B
275 Y = LOAD *B, X = LOAD *A, STORE *D = Z
276 Y = LOAD *B, STORE *D = Z, X = LOAD *A
277 STORE *D = Z, X = LOAD *A, Y = LOAD *B
278 STORE *D = Z, Y = LOAD *B, X = LOAD *A
280 (*) It _must_ be assumed that overlapping memory accesses may be merged or
281 discarded. This means that for:
283 X = *A; Y = *(A + 4);
285 we may get any one of the following sequences:
287 X = LOAD *A; Y = LOAD *(A + 4);
288 Y = LOAD *(A + 4); X = LOAD *A;
289 {X, Y} = LOAD {*A, *(A + 4) };
and for:

        *A = X; *(A + 4) = Y;

we may get any of:
297 STORE *A = X; STORE *(A + 4) = Y;
298 STORE *(A + 4) = Y; STORE *A = X;
299 STORE {*A, *(A + 4) } = {X, Y};
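To illustrate the compiler points above in code, the following sketch shows a
flag-based handshake of the sort those "creative" transformations can break;
shared_flag is a hypothetical variable shared between two CPUs:

        /* CPU 1: publish the flag as one whole store */
        ACCESS_ONCE(shared_flag) = 1;

        /* CPU 2: reload the flag on each pass around the loop */
        while (!ACCESS_ONCE(shared_flag))
                cpu_relax();

Without ACCESS_ONCE() the compiler could hoist the load out of the loop, or
merge or tear the accesses.  Note that ACCESS_ONCE() constrains only the
compiler; ordering these accesses against other variables still requires the
memory barriers described in the following sections.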
302 =========================
303 WHAT ARE MEMORY BARRIERS?
304 =========================
306 As can be seen above, independent memory operations are effectively performed
307 in random order, but this can be a problem for CPU-CPU interaction and for I/O.
308 What is required is some way of intervening to instruct the compiler and the
309 CPU to restrict the order.
311 Memory barriers are such interventions. They impose a perceived partial
312 ordering over the memory operations on either side of the barrier.
314 Such enforcement is important because the CPUs and other devices in a system
315 can use a variety of tricks to improve performance, including reordering,
316 deferral and combination of memory operations; speculative loads; speculative
317 branch prediction and various types of caching. Memory barriers are used to
318 override or suppress these tricks, allowing the code to sanely control the
319 interaction of multiple CPUs and/or devices.
322 VARIETIES OF MEMORY BARRIER
323 ---------------------------
325 Memory barriers come in four basic varieties:
327 (1) Write (or store) memory barriers.
329 A write memory barrier gives a guarantee that all the STORE operations
330 specified before the barrier will appear to happen before all the STORE
331 operations specified after the barrier with respect to the other
332 components of the system.
334 A write barrier is a partial ordering on stores only; it is not required
335 to have any effect on loads.
337 A CPU can be viewed as committing a sequence of store operations to the
338 memory system as time progresses. All stores before a write barrier will
339 occur in the sequence _before_ all the stores after the write barrier.
341 [!] Note that write barriers should normally be paired with read or data
342 dependency barriers; see the "SMP barrier pairing" subsection.
345 (2) Data dependency barriers.
347 A data dependency barrier is a weaker form of read barrier. In the case
348 where two loads are performed such that the second depends on the result
349 of the first (eg: the first load retrieves the address to which the second
350 load will be directed), a data dependency barrier would be required to
351 make sure that the target of the second load is updated before the address
352 obtained by the first load is accessed.
354 A data dependency barrier is a partial ordering on interdependent loads
355 only; it is not required to have any effect on stores, independent loads
356 or overlapping loads.
358 As mentioned in (1), the other CPUs in the system can be viewed as
359 committing sequences of stores to the memory system that the CPU being
360 considered can then perceive. A data dependency barrier issued by the CPU
361 under consideration guarantees that for any load preceding it, if that
362 load touches one of a sequence of stores from another CPU, then by the
363 time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.
367 See the "Examples of memory barrier sequences" subsection for diagrams
368 showing the ordering constraints.
370 [!] Note that the first load really has to have a _data_ dependency and
371 not a control dependency. If the address for the second load is dependent
372 on the first load, but the dependency is through a conditional rather than
373 actually loading the address itself, then it's a _control_ dependency and
374 a full read barrier or better is required. See the "Control dependencies"
375 subsection for more information.
377 [!] Note that data dependency barriers should normally be paired with
378 write barriers; see the "SMP barrier pairing" subsection.
381 (3) Read (or load) memory barriers.
383 A read barrier is a data dependency barrier plus a guarantee that all the
384 LOAD operations specified before the barrier will appear to happen before
385 all the LOAD operations specified after the barrier with respect to the
386 other components of the system.
388 A read barrier is a partial ordering on loads only; it is not required to
389 have any effect on stores.
     Read memory barriers imply data dependency barriers, and so can substitute
     for them.
394 [!] Note that read barriers should normally be paired with write barriers;
395 see the "SMP barrier pairing" subsection.
398 (4) General memory barriers.
400 A general memory barrier gives a guarantee that all the LOAD and STORE
401 operations specified before the barrier will appear to happen before all
402 the LOAD and STORE operations specified after the barrier with respect to
403 the other components of the system.
405 A general memory barrier is a partial ordering over both loads and stores.
407 General memory barriers imply both read and write memory barriers, and so
408 can substitute for either.
And a couple of implicit varieties:

 (5) LOCK operations.
415 This acts as a one-way permeable barrier. It guarantees that all memory
416 operations after the LOCK operation will appear to happen after the LOCK
417 operation with respect to the other components of the system.
     Memory operations that occur before a LOCK operation may appear to happen
     after it completes.
422 A LOCK operation should almost always be paired with an UNLOCK operation.
425 (6) UNLOCK operations.
427 This also acts as a one-way permeable barrier. It guarantees that all
428 memory operations before the UNLOCK operation will appear to happen before
429 the UNLOCK operation with respect to the other components of the system.
431 Memory operations that occur after an UNLOCK operation may appear to
432 happen before it completes.
434 LOCK and UNLOCK operations are guaranteed to appear with respect to each
435 other strictly in the order specified.
437 The use of LOCK and UNLOCK operations generally precludes the need for
438 other sorts of memory barrier (but note the exceptions mentioned in the
439 subsection "MMIO write barrier").
442 Memory barriers are only required where there's a possibility of interaction
443 between two CPUs or between a CPU and a device. If it can be guaranteed that
444 there won't be any such interaction in any particular piece of code, then
445 memory barriers are unnecessary in that piece of code.
448 Note that these are the _minimum_ guarantees. Different architectures may give
449 more substantial guarantees, but they may _not_ be relied upon outside of arch
453 WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
454 ----------------------------------------------
456 There are certain things that the Linux kernel memory barriers do not guarantee:
458 (*) There is no guarantee that any of the memory accesses specified before a
459 memory barrier will be _complete_ by the completion of a memory barrier
460 instruction; the barrier can be considered to draw a line in that CPU's
461 access queue that accesses of the appropriate type may not cross.
463 (*) There is no guarantee that issuing a memory barrier on one CPU will have
464 any direct effect on another CPU or any other hardware in the system. The
465 indirect effect will be the order in which the second CPU sees the effects
466 of the first CPU's accesses occur, but see the next point:
468 (*) There is no guarantee that a CPU will see the correct order of effects
469 from a second CPU's accesses, even _if_ the second CPU uses a memory
470 barrier, unless the first CPU _also_ uses a matching memory barrier (see
471 the subsection on "SMP Barrier Pairing").
473 (*) There is no guarantee that some intervening piece of off-the-CPU
474 hardware[*] will not reorder the memory accesses. CPU cache coherency
475 mechanisms should propagate the indirect effects of a memory barrier
476 between CPUs, but might not do so in order.
478 [*] For information on bus mastering DMA and coherency please read:
480 Documentation/PCI/pci.txt
481 Documentation/DMA-API-HOWTO.txt
482 Documentation/DMA-API.txt
485 DATA DEPENDENCY BARRIERS
486 ------------------------
488 The usage requirements of data dependency barriers are a little subtle, and
489 it's not always obvious that they're needed. To illustrate, consider the
490 following sequence of events:
        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        ACCESS_ONCE(P) = &B
                        Q = ACCESS_ONCE(P);
                        D = *Q;
501 There's a clear data dependency here, and it would seem that by the end of the
502 sequence, Q must be either &A or &B, and that:
504 (Q == &A) implies (D == 1)
505 (Q == &B) implies (D == 4)
507 But! CPU 2's perception of P may be updated _before_ its perception of B, thus
508 leading to the following situation:
510 (Q == &B) and (D == 2) ????
512 Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).
516 To deal with this, a data dependency barrier or better must be inserted
517 between the address load and the data load:
        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        ACCESS_ONCE(P) = &B
                        Q = ACCESS_ONCE(P);
                        <data dependency barrier>
                        D = *Q;
529 This enforces the occurrence of one of the two implications, and prevents the
530 third possibility from arising.
532 [!] Note that this extremely counterintuitive situation arises most easily on
533 machines with split caches, so that, for example, one cache bank processes
534 even-numbered cache lines and the other bank processes odd-numbered cache
535 lines. The pointer P might be stored in an odd-numbered cache line, and the
536 variable B might be stored in an even-numbered cache line. Then, if the
537 even-numbered bank of the reading CPU's cache is extremely busy while the
538 odd-numbered bank is idle, one can see the new value of the pointer P (&B),
539 but the old value of the variable B (2).
542 Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:
        CPU 1           CPU 2
        =============== ===============
        { M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
        M[1] = 4;
        <write barrier>
        ACCESS_ONCE(P) = 1
                        Q = ACCESS_ONCE(P);
                        <data dependency barrier>
                        D = M[Q];
557 The data dependency barrier is very important to the RCU system,
558 for example. See rcu_assign_pointer() and rcu_dereference() in
559 include/linux/rcupdate.h. This permits the current target of an RCU'd
560 pointer to be replaced with a new modified target, without the replacement
561 target appearing to be incompletely initialised.
563 See also the subsection on "Cache Coherency" for a more thorough example.
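As a rough sketch of the pattern this enables (struct foo, the global pointer
gp and do_something_with() are hypothetical, and locking and error handling
are omitted):

        struct foo {
                int a;
        };
        struct foo *gp;

        /* Publisher */
        p = kmalloc(sizeof(*p), GFP_KERNEL);
        p->a = 42;                      /* fully initialise the new target... */
        rcu_assign_pointer(gp, p);      /* ...then publish it; implies a write barrier */

        /* Reader, inside an RCU read-side critical section */
        q = rcu_dereference(gp);        /* implies a data dependency barrier */
        if (q)
                do_something_with(q->a);        /* cannot see uninitialised contents */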
CONTROL DEPENDENCIES
--------------------

A control dependency requires a full read memory barrier, not simply a data
dependency barrier, to make it work correctly.  Consider the following bit of
code:

        q = ACCESS_ONCE(a);
        if (q) {
                <data dependency barrier>  /* BUG: No data dependency!!! */
                p = ACCESS_ONCE(b);
        }
579 This will not have the desired effect because there is no actual data
580 dependency, but rather a control dependency that the CPU may short-circuit
581 by attempting to predict the outcome in advance, so that other CPUs see
582 the load from b as having happened before the load from a. In such a
case what's actually required is:

        q = ACCESS_ONCE(a);
        if (q) {
                <read barrier>
                p = ACCESS_ONCE(b);
        }
591 However, stores are not speculated. This means that ordering -is- provided
592 in the following example:
        q = ACCESS_ONCE(a);
        if (q) {
                ACCESS_ONCE(b) = p;
        }
599 Please note that ACCESS_ONCE() is not optional! Without the ACCESS_ONCE(),
600 the compiler is within its rights to transform this example:
        q = a;
        if (q) {
                b = p;  /* BUG: Compiler can reorder!!! */
                do_something();
        } else {
                b = p;  /* BUG: Compiler can reorder!!! */
                do_something_else();
        }
into this, which of course defeats the ordering:

        b = p;
        q = a;
        if (q)
                do_something();
        else
                do_something_else();
620 Worse yet, if the compiler is able to prove (say) that the value of
621 variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

        q = a;
        b = p;  /* BUG: Compiler and CPU can both reorder!!! */
        do_something();
629 The solution is again ACCESS_ONCE(), which preserves the ordering between
the load from variable 'a' and the store to variable 'b':

        q = ACCESS_ONCE(a);
        if (q) {
                ACCESS_ONCE(b) = p;
                do_something();
        } else {
                ACCESS_ONCE(b) = p;
                do_something_else();
        }
641 You could also use barrier() to prevent the compiler from moving
642 the stores to variable 'b', but barrier() would not prevent the
compiler from proving to itself that a==1 always, so ACCESS_ONCE()
is also needed.
It is important to note that control dependencies absolutely require
a conditional.  For example, the following "optimized" version of
the above example breaks ordering:

        q = ACCESS_ONCE(a);
        ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
        if (q) {
                /* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
                do_something();
        } else {
                /* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
                do_something_else();
        }
660 It is of course legal for the prior load to be part of the conditional,
661 for example, as follows:
        if (ACCESS_ONCE(a) > 0) {
                ACCESS_ONCE(b) = q / 2;
                do_something();
        } else {
                ACCESS_ONCE(b) = q / 3;
                do_something_else();
        }
671 This will again ensure that the load from variable 'a' is ordered before the
672 stores to variable 'b'.
674 In addition, you need to be careful what you do with the local variable 'q',
675 otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

        q = ACCESS_ONCE(a);
        if (q % MAX) {
                ACCESS_ONCE(b) = p;
                do_something();
        } else {
                ACCESS_ONCE(b) = r;
                do_something_else();
        }
687 If MAX is defined to be 1, then the compiler knows that (q % MAX) is
688 equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

        q = ACCESS_ONCE(a);
        ACCESS_ONCE(b) = r;
        do_something_else();
695 This transformation loses the ordering between the load from variable 'a'
696 and the store to variable 'b'. If you are relying on this ordering, you
697 should do something like the following:
        q = ACCESS_ONCE(a);
        BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
        if (q % MAX) {
                ACCESS_ONCE(b) = p;
                do_something();
        } else {
                ACCESS_ONCE(b) = r;
                do_something_else();
        }
709 Finally, control dependencies do -not- provide transitivity. This is
710 demonstrated by two related examples:
        CPU 0                   CPU 1
        =====================   =====================
714 r1 = ACCESS_ONCE(x); r2 = ACCESS_ONCE(y);
715 if (r1 >= 0) if (r2 >= 0)
716 ACCESS_ONCE(y) = 1; ACCESS_ONCE(x) = 1;
718 assert(!(r1 == 1 && r2 == 1));
720 The above two-CPU example will never trigger the assert(). However,
721 if control dependencies guaranteed transitivity (which they do not),
722 then adding the following two CPUs would guarantee a related assertion:
        CPU 2                   CPU 3
        =====================   =====================
726 ACCESS_ONCE(x) = 2; ACCESS_ONCE(y) = 2;
728 assert(!(r1 == 2 && r2 == 2 && x == 1 && y == 1)); /* FAILS!!! */
730 But because control dependencies do -not- provide transitivity, the
731 above assertion can fail after the combined four-CPU example completes.
732 If you need the four-CPU example to provide ordering, you will need
733 smp_mb() between the loads and stores in the CPU 0 and CPU 1 code fragments.
In summary:

  (*) Control dependencies can order prior loads against later stores.
738 However, they do -not- guarantee any other sort of ordering:
739 Not prior loads against later loads, nor prior stores against
740 later anything. If you need these other forms of ordering,
      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
742 later loads, smp_mb().
744 (*) Control dependencies require at least one run-time conditional
745 between the prior load and the subsequent store. If the compiler
746 is able to optimize the conditional away, it will have also
747 optimized away the ordering. Careful use of ACCESS_ONCE() can
748 help to preserve the needed conditional.
750 (*) Control dependencies require that the compiler avoid reordering the
751 dependency into nonexistence. Careful use of ACCESS_ONCE() or
752 barrier() can help to preserve your control dependency.
754 (*) Control dependencies do -not- provide transitivity. If you
755 need transitivity, use smp_mb().
SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
762 always be paired. A lack of appropriate pairing is almost certainly an error.
764 A write barrier should always be paired with a data dependency barrier or read
765 barrier, though a general barrier would also be viable. Similarly a read
barrier or a data dependency barrier should always be paired with at least a
767 write barrier, though, again, a general barrier is viable:
        CPU 1           CPU 2
        =============== ===============
        ACCESS_ONCE(a) = 1;
        <write barrier>
        ACCESS_ONCE(b) = 2;     x = ACCESS_ONCE(b);
                                <read barrier>
                                y = ACCESS_ONCE(a);
Or:

        CPU 1           CPU 2
        =============== ===============================
        a = 1;
        <write barrier>
        ACCESS_ONCE(b) = &a;    x = ACCESS_ONCE(b);
                                <data dependency barrier>
                                y = *x;
Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.
790 [!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:
        CPU 1                   CPU 2
        ===================     ===================
796 ACCESS_ONCE(a) = 1; }---- --->{ v = ACCESS_ONCE(c);
797 ACCESS_ONCE(b) = 2; } \ / { w = ACCESS_ONCE(d);
798 <write barrier> \ <read barrier>
799 ACCESS_ONCE(c) = 3; } / \ { x = ACCESS_ONCE(a);
800 ACCESS_ONCE(d) = 4; }---- --->{ y = ACCESS_ONCE(b);
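Expressed as code rather than as a diagram, such a pairing looks something
like the following sketch; data and flag are hypothetical shared variables
that are initially zero:

        /* CPU 1 */
        ACCESS_ONCE(data) = 42;
        smp_wmb();                      /* pairs with the smp_rmb() on CPU 2 */
        ACCESS_ONCE(flag) = 1;

        /* CPU 2 */
        while (!ACCESS_ONCE(flag))
                cpu_relax();
        smp_rmb();                      /* pairs with the smp_wmb() on CPU 1 */
        d = ACCESS_ONCE(data);          /* guaranteed to observe 42 */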
803 EXAMPLES OF MEMORY BARRIER SEQUENCES
804 ------------------------------------
806 Firstly, write barriers act as partial orderings on store operations.
807 Consider the following sequence of events:
        CPU 1
        =======================
        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5
818 This sequence of events is committed to the memory coherence system in an order
819 that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}.
        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  |     }     /\
        |       |  :    +------+     }-----  \  -----> Events perceptible to
        |       |  :    | A=1  |     }        \/       the rest of the system
        |       |  :    +------+     }
        | CPU 1 |  :    | B=2  |     }
        |       |       +------+     }
        |       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
        |       |       +------+     }        requires all stores prior to the
        |       |  :    | E=5  |     }        barrier to be committed before
        |       |  :    +------+     }        further stores may take place
        |       |       | D=4  |     }
        |       |       +------+
        +-------+       :      :
                   |
                   | Sequence in which stores are committed to the
                   | memory system by CPU 1
                   V
844 Secondly, data dependency barriers act as partial orderings on data-dependent
845 loads. Consider the following sequence of events:
        CPU 1                   CPU 2
        =======================	=======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                LOAD *C (reads B)
857 Without intervention, CPU 2 may perceive the events on CPU 1 in some
858 effectively random order, despite the write barrier issued by CPU 1:
861 | | +------+ +-------+ | Sequence of update
862 | |------>| B=2 |----- --->| Y->8 | | of perception on
863 | | : +------+ \ +-------+ | CPU 2
864 | CPU 1 | : | A=1 | \ --->| C->&Y | V
865 | | +------+ | +-------+
866 | | wwwwwwwwwwwwwwww | : :
868 | | : | C=&B |--- | : : +-------+
869 | | : +------+ \ | +-------+ | |
870 | |------>| D=4 | ----------->| C->&B |------>| |
871 | | +------+ | +-------+ | |
872 +-------+ : : | : : | |
876 Apparently incorrect ---> | | B->7 |------>| |
877 perception of B (!) | +-------+ | |
880 The load of X holds ---> \ | X->9 |------>| |
881 up the maintenance \ +-------+ | |
882 of coherence of B ----->| B->2 | +-------+
887 In the above example, CPU 2 perceives that B is 7, despite the load of *C
888 (which would be B) coming after the LOAD of C.
890 If, however, a data dependency barrier were to be placed between the load of C
891 and the load of *C (ie: B) on CPU 2:
        CPU 1                   CPU 2
        =======================	=======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                <data dependency barrier>
                                LOAD *C (reads B)
904 then the following will occur:
907 | | +------+ +-------+
908 | |------>| B=2 |----- --->| Y->8 |
909 | | : +------+ \ +-------+
910 | CPU 1 | : | A=1 | \ --->| C->&Y |
911 | | +------+ | +-------+
912 | | wwwwwwwwwwwwwwww | : :
914 | | : | C=&B |--- | : : +-------+
915 | | : +------+ \ | +-------+ | |
916 | |------>| D=4 | ----------->| C->&B |------>| |
917 | | +------+ | +-------+ | |
918 +-------+ : : | : : | |
924 Makes sure all effects ---> \ ddddddddddddddddd | |
925 prior to the store of C \ +-------+ | |
926 are perceptible to ----->| B->2 |------>| |
927 subsequent loads +-------+ | |
931 And thirdly, a read barrier acts as a partial order on loads. Consider the
932 following sequence of events:
        CPU 1                   CPU 2
        =======================	=======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A
943 Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
944 some effectively random order, despite the write barrier issued by CPU 1:
947 | | +------+ +-------+
948 | |------>| A=1 |------ --->| A->0 |
949 | | +------+ \ +-------+
950 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
951 | | +------+ | +-------+
952 | |------>| B=2 |--- | : :
953 | | +------+ \ | : : +-------+
954 +-------+ : : \ | +-------+ | |
955 ---------->| B->2 |------>| |
956 | +-------+ | CPU 2 |
If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

        CPU 1                   CPU 2
        =======================	=======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                <read barrier>
                                LOAD A
then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:
984 | | +------+ +-------+
985 | |------>| A=1 |------ --->| A->0 |
986 | | +------+ \ +-------+
987 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
988 | | +------+ | +-------+
989 | |------>| B=2 |--- | : :
990 | | +------+ \ | : : +-------+
991 +-------+ : : \ | +-------+ | |
992 ---------->| B->2 |------>| |
993 | +-------+ | CPU 2 |
996 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
997 barrier causes all effects \ +-------+ | |
998 prior to the storage of B ---->| A->1 |------>| |
999 to be perceptible to CPU 2 +-------+ | |
1003 To illustrate this more completely, consider what could happen if the code
1004 contained a load of A either side of the read barrier:
        CPU 1                   CPU 2
        =======================	=======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A [first load of A]
                                <read barrier>
                                LOAD A [second load of A]
1017 Even though the two loads of A both occur after the load of B, they may both
1018 come up with different values:
1021 | | +------+ +-------+
1022 | |------>| A=1 |------ --->| A->0 |
1023 | | +------+ \ +-------+
1024 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1025 | | +------+ | +-------+
1026 | |------>| B=2 |--- | : :
1027 | | +------+ \ | : : +-------+
1028 +-------+ : : \ | +-------+ | |
1029 ---------->| B->2 |------>| |
1030 | +-------+ | CPU 2 |
1034 | | A->0 |------>| 1st |
1036 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1037 barrier causes all effects \ +-------+ | |
1038 prior to the storage of B ---->| A->1 |------>| 2nd |
1039 to be perceptible to CPU 2 +-------+ | |
1043 But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1044 before the read barrier completes anyway:
1047 | | +------+ +-------+
1048 | |------>| A=1 |------ --->| A->0 |
1049 | | +------+ \ +-------+
1050 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1051 | | +------+ | +-------+
1052 | |------>| B=2 |--- | : :
1053 | | +------+ \ | : : +-------+
1054 +-------+ : : \ | +-------+ | |
1055 ---------->| B->2 |------>| |
1056 | +-------+ | CPU 2 |
1060 ---->| A->1 |------>| 1st |
1062 rrrrrrrrrrrrrrrrr | |
1064 | A->1 |------>| 2nd |
1069 The guarantee is that the second load will always come up with A == 1 if the
1070 load of B came up with B == 2. No such guarantee exists for the first load of
1071 A; that may come up with either A == 0 or A == 1.
1074 READ MEMORY BARRIERS VS LOAD SPECULATION
1075 ----------------------------------------
Many CPUs speculate with loads: that is, they see that they will need to load an
1078 item from memory, and they find a time where they're not using the bus for any
1079 other loads, and so do the load in advance - even though they haven't actually
1080 got to that point in the instruction execution flow yet. This permits the
1081 actual load instruction to potentially complete immediately because the CPU
1082 already has the value to hand.
1084 It may turn out that the CPU didn't actually need the value - perhaps because a
1085 branch circumvented the load - in which case it can discard the value or just
1086 cache it for later use.
        CPU 1                   CPU 2
        =======================	=======================
                                LOAD B
                                DIVIDE          } Divide instructions generally
                                DIVIDE          } take a long time to perform
                                LOAD A
1097 Which might appear as this:
1101 --->| B->2 |------>| |
1105 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1106 division speculates on the +-------+ ~ | |
1110 Once the divisions are complete --> : : ~-->| |
1111 the CPU can then perform the : : | |
1112 LOAD with immediate effect : : +-------+
Placing a read barrier or a data dependency barrier just before the second
load:

        CPU 1                   CPU 2
        =======================	=======================
                                LOAD B
                                DIVIDE
                                DIVIDE
                                <read barrier>
                                LOAD A
1126 will force any value speculatively obtained to be reconsidered to an extent
1127 dependent on the type of barrier used. If there was no change made to the
1128 speculated memory location, then the speculated value will just be used:
1132 --->| B->2 |------>| |
1136 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1137 division speculates on the +-------+ ~ | |
1142 rrrrrrrrrrrrrrrr~ | |
1149 but if there was an update or an invalidation from another CPU pending, then
1150 the speculation will be cancelled and the value reloaded:
1154 --->| B->2 |------>| |
1158 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1159 division speculates on the +-------+ ~ | |
1164 rrrrrrrrrrrrrrrrr | |
1166 The speculation is discarded ---> --->| A->1 |------>| |
1167 and an updated value is +-------+ | |
1168 retrieved : : +-------+
TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
1175 always provided by real computer systems. The following example
1176 demonstrates transitivity (also called "cumulativity"):
        CPU 1                   CPU 2                   CPU 3
        =======================	=======================	=======================
                { X = 0, Y = 0 }
        STORE X=1               LOAD X                  STORE Y=1
                                <general barrier>       <general barrier>
                                LOAD Y                  LOAD X
1185 Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1186 This indicates that CPU 2's load from X in some sense follows CPU 1's
1187 store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1188 store to Y. The question is then "Can CPU 3's load from X return 0?"
1190 Because CPU 2's load from X in some sense came after CPU 1's store, it
1191 is natural to expect that CPU 3's load from X must therefore return 1.
1192 This expectation is an example of transitivity: if a load executing on
1193 CPU A follows a load from the same variable executing on CPU B, then
1194 CPU A's load must either return the same value that CPU B's load did,
1195 or must return some later value.
1197 In the Linux kernel, use of general memory barriers guarantees
1198 transitivity. Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
return 1.
1202 However, transitivity is -not- guaranteed for read or write barriers.
1203 For example, suppose that CPU 2's general barrier in the above example
1204 is changed to a read barrier as shown below:
        CPU 1                   CPU 2                   CPU 3
        =======================	=======================	=======================
                { X = 0, Y = 0 }
        STORE X=1               LOAD X                  STORE Y=1
                                <read barrier>          <general barrier>
                                LOAD Y                  LOAD X
1213 This substitution destroys transitivity: in this example, it is perfectly
1214 legal for CPU 2's load from X to return 1, its load from Y to return 0,
1215 and CPU 3's load from X to return 0.
1217 The key point is that although CPU 2's read barrier orders its pair
1218 of loads, it does not guarantee to order CPU 1's store. Therefore, if
1219 this example runs on a system where CPUs 1 and 2 share a store buffer
1220 or a level of cache, CPU 2 might have early access to CPU 1's writes.
1221 General barriers are therefore required to ensure that all CPUs agree
1222 on the combined order of CPU 1's and CPU 2's accesses.
To reiterate, if your code requires transitivity, use general barriers
throughout.
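Written out as code, the three-CPU example above might look like the
following sketch, with x and y as hypothetical shared variables that are
initially zero:

        /* CPU 1 */
        ACCESS_ONCE(x) = 1;

        /* CPU 2 */
        r1 = ACCESS_ONCE(x);
        smp_mb();               /* general barrier: preserves transitivity */
        r2 = ACCESS_ONCE(y);

        /* CPU 3 */
        ACCESS_ONCE(y) = 1;
        smp_mb();
        r3 = ACCESS_ONCE(x);

If r1 == 1 and r2 == 0, then r3 is guaranteed to be 1; weakening CPU 2's
smp_mb() to smp_rmb() would permit the outcome r3 == 0.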
1228 ========================
1229 EXPLICIT KERNEL BARRIERS
1230 ========================
The Linux kernel has a variety of different barriers that act at different
levels:
1235 (*) Compiler barrier.
1237 (*) CPU memory barriers.
1239 (*) MMIO write barrier.
COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

        barrier();
This is a general barrier -- there are no read-read or write-write variants
of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
of barrier() that affects only the specific accesses flagged by the
ACCESS_ONCE().
1255 The compiler barrier has no direct effect on the CPU, which may then reorder
1256 things however it wishes.
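For example, here is a minimal sketch of busy-waiting on a hypothetical flag
that an interrupt handler sets:

        while (!flag)
                barrier();      /* force the compiler to reload flag each pass */

Without the barrier() (or an ACCESS_ONCE() on the load), the compiler would be
within its rights to hoist the load out of the loop and spin forever on a
stale value.  The CPU, however, remains free to reorder the accesses.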
CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:
1264 TYPE MANDATORY SMP CONDITIONAL
1265 =============== ======================= ===========================
1266 GENERAL mb() smp_mb()
1267 WRITE wmb() smp_wmb()
1268 READ rmb() smp_rmb()
1269 DATA DEPENDENCY read_barrier_depends() smp_read_barrier_depends()
1272 All memory barriers except the data dependency barriers imply a compiler
1273 barrier. Data dependencies do not impose any additional compiler ordering.
Aside: In the case of data dependencies, the compiler would be expected to
issue the loads in the correct order (eg. `a[b]` would have to load the value
of b before loading a[b]), however there is no guarantee in the C specification
that the compiler will not speculate the value of b (eg. guess that it is equal
to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ).  There
is also the problem of a compiler reloading b after having loaded a[b], thus
having a newer copy of b than a[b].  A consensus has not yet been reached about
these problems, however the ACCESS_ONCE() macro is a good place to start
looking.
1284 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1285 systems because it is assumed that a CPU will appear to be self-consistent,
1286 and will order overlapping accesses correctly with respect to itself.
1288 [!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.
1292 Mandatory barriers should not be used to control SMP effects, since mandatory
1293 barriers unnecessarily impose overhead on UP systems. They may, however, be
1294 used to control MMIO effects on accesses through relaxed memory I/O windows.
1295 These are required even on non-SMP systems as they affect the order in which
1296 memory operations appear to a device by prohibiting both the compiler and the
1297 CPU from reordering them.
1300 There are some more advanced barrier functions:
1302 (*) set_mb(var, value)
1304 This assigns the value to the variable and then inserts a full memory
1305 barrier after it, depending on the function. It isn't guaranteed to
1306 insert anything more than a compiler barrier in a UP compilation.
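     Conceptually, set_mb() behaves like the following sketch; the real
     definition is architecture-specific and may, for instance, be implemented
     with an atomic exchange instruction instead:

        #define set_mb(var, value)  do { (var) = (value); smp_mb(); } while (0)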
1309 (*) smp_mb__before_atomic_dec();
1310 (*) smp_mb__after_atomic_dec();
1311 (*) smp_mb__before_atomic_inc();
1312 (*) smp_mb__after_atomic_inc();
1314 These are for use with atomic add, subtract, increment and decrement
1315 functions that don't return a value, especially when used for reference
1316 counting. These functions do not imply memory barriers.
1318 As an example, consider a piece of code that marks an object as being dead
1319 and then decrements the object's reference count:
        obj->dead = 1;
        smp_mb__before_atomic_dec();
1323 atomic_dec(&obj->ref_count);
1325 This makes sure that the death mark on the object is perceived to be set
1326 *before* the reference counter is decremented.
1328 See Documentation/atomic_ops.txt for more information. See the "Atomic
1329 operations" subsection for information on where to use these.
1332 (*) smp_mb__before_clear_bit(void);
1333 (*) smp_mb__after_clear_bit(void);
1335 These are for use similar to the atomic inc/dec barriers. These are
1336 typically used for bitwise unlocking operations, so care must be taken as
1337 there are no implicit memory barriers here either.
1339 Consider implementing an unlock operation of some nature by clearing a
1340 locking bit. The clear_bit() would then need to be barriered like this:
        smp_mb__before_clear_bit();
        clear_bit( ... );
1345 This prevents memory operations before the clear leaking to after it. See
     the subsection on "Locking Functions" with reference to UNLOCK operation
     implications.
1349 See Documentation/atomic_ops.txt for more information. See the "Atomic
1350 operations" subsection for information on where to use these.
MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

        mmiowb();
1361 This is a variation on the mandatory write barrier that causes writes to weakly
1362 ordered I/O regions to be partially ordered. Its effects may go beyond the
1363 CPU->Hardware interface and actually affect the hardware at some level.
1365 See the subsection "Locks vs I/O accesses" for more information.
1368 ===============================
1369 IMPLICIT KERNEL MEMORY BARRIERS
1370 ===============================
Some of the other functions in the Linux kernel imply memory barriers, amongst
1373 which are locking and scheduling functions.
1375 This specification is a _minimum_ guarantee; any particular architecture may
1376 provide more substantial guarantees, but these may not be relied upon outside
1377 of arch specific code.
LOCKING FUNCTIONS
-----------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU
1392 In all cases there are variants on "LOCK" operations and "UNLOCK" operations
1393 for each construct. These operations all imply certain barriers:
1395 (1) LOCK operation implication:
1397 Memory operations issued after the LOCK will be completed after the LOCK
1398 operation has completed.
1400 Memory operations issued before the LOCK may be completed after the LOCK
1401 operation has completed.
1403 (2) UNLOCK operation implication:
1405 Memory operations issued before the UNLOCK will be completed before the
1406 UNLOCK operation has completed.
1408 Memory operations issued after the UNLOCK may be completed before the
1409 UNLOCK operation has completed.
1411 (3) LOCK vs LOCK implication:
1413 All LOCK operations issued before another LOCK operation will be completed
1414 before that LOCK operation.
1416 (4) LOCK vs UNLOCK implication:
1418 All LOCK operations issued before an UNLOCK operation will be completed
1419 before the UNLOCK operation.
1421 All UNLOCK operations issued before a LOCK operation will be completed
1422 before the LOCK operation.
1424 (5) Failed conditional LOCK implication:
1426 Certain variants of the LOCK operation may fail, either due to being
1427 unable to get the lock immediately, or due to receiving an unblocked
1428 signal whilst asleep waiting for the lock to become available. Failed
1429 locks do not imply any sort of barrier.
1431 Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is
1432 equivalent to a full barrier, but a LOCK followed by an UNLOCK is not.
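For example, in the following sequence (M and N being hypothetical locks):

        *A = a;
        UNLOCK M
        LOCK N
        *B = b;

the UNLOCK followed by the LOCK guarantees, per implications (1), (2) and (4)
above, that the store to *A will be perceived as happening before the store to
*B, just as if a full memory barrier stood between them.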
1434 [!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way
1435 barriers is that the effects of instructions outside of a critical section
1436 may seep into the inside of the critical section.
1438 A LOCK followed by an UNLOCK may not be assumed to be full memory barrier
1439 because it is possible for an access preceding the LOCK to happen after the
1440 LOCK, and an access following the UNLOCK to happen before the UNLOCK, and the
1441 two accesses can themselves then cross:
        *A = a;
        LOCK
        UNLOCK
        *B = b;

may occur as:

        LOCK, STORE *B, STORE *A, UNLOCK
1452 Locks and semaphores may not provide any guarantee of ordering on UP compiled
1453 systems, and so cannot be counted on in such a situation to actually achieve
1454 anything at all - especially with respect to I/O accesses - unless combined
1455 with interrupt disabling operations.
1457 See also the section on "Inter-CPU locking barrier effects".
As an example, consider the following:

        *A = a;
        *B = b;
        LOCK
        *C = c;
        *D = d;
        UNLOCK
        *E = e;
        *F = f;
1471 The following sequence of events is acceptable:
1473 LOCK, {*F,*A}, *E, {*C,*D}, *B, UNLOCK
1475 [+] Note that {*F,*A} indicates a combined access.
1477 But none of the following are:
1479 {*F,*A}, *B, LOCK, *C, *D, UNLOCK, *E
1480 *A, *B, *C, LOCK, *D, UNLOCK, *E, *F
1481 *A, *B, LOCK, *C, UNLOCK, *D, *E, *F
1482 *B, LOCK, *C, *D, UNLOCK, {*F,*A}, *E
1486 INTERRUPT DISABLING FUNCTIONS
1487 -----------------------------
1489 Functions that disable interrupts (LOCK equivalent) and enable interrupts
1490 (UNLOCK equivalent) will act as compiler barriers only. So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.
1495 SLEEP AND WAKE-UP FUNCTIONS
1496 ---------------------------
1498 Sleeping and waking on an event flagged in global data can be viewed as an
1499 interaction between two pieces of data: the task state of the task waiting for
1500 the event and the global data used to indicate the event. To make sure that
1501 these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.
1505 Firstly, the sleeper normally follows something like this sequence of events:
        for (;;) {
                set_current_state(TASK_UNINTERRUPTIBLE);
                if (event_indicated)
                        break;
                schedule();
        }
1514 A general memory barrier is interpolated automatically by set_current_state()
1515 after it has altered the task state:
        CPU 1
        ===============================
        set_current_state();
          set_mb();
            STORE current->state
            <general barrier>
        LOAD event_indicated
1525 set_current_state() may be wrapped by:
        prepare_to_wait();
        prepare_to_wait_exclusive();
1530 which therefore also imply a general memory barrier after setting the state.
1531 The whole sequence above is available in various canned forms, all of which
1532 interpolate the memory barrier in the right place:
        wait_event();
        wait_event_interruptible();
1536 wait_event_interruptible_exclusive();
1537 wait_event_interruptible_timeout();
1538 wait_event_killable();
1539 wait_event_timeout();
1544 Secondly, code that performs a wake up normally follows something like this:
1546 event_indicated = 1;
1547 wake_up(&event_wait_queue);
or:

        event_indicated = 1;
1552 wake_up_process(event_daemon);
1554 A write memory barrier is implied by wake_up() and co. if and only if they wake
1555 something up. The barrier occurs before the task state is cleared, and so sits
1556 between the STORE to indicate the event and the STORE to set TASK_RUNNING:
        CPU 1                           CPU 2
        ===============================	===============================
1560 set_current_state(); STORE event_indicated
1561 set_mb(); wake_up();
1562 STORE current->state <write barrier>
1563 <general barrier> STORE current->state
1564 LOAD event_indicated
1566 The available waker functions include:
1572 wake_up_interruptible();
1573 wake_up_interruptible_all();
1574 wake_up_interruptible_nr();
1575 wake_up_interruptible_poll();
1576 wake_up_interruptible_sync();
1577 wake_up_interruptible_sync_poll();
1579 wake_up_locked_poll();
1585 [!] Note that the memory barriers implied by the sleeper and the waker do _not_
1586 order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state().  For instance, if the
sleeper does:
1590 set_current_state(TASK_INTERRUPTIBLE);
        if (event_indicated)
                break;
1593 __set_current_state(TASK_RUNNING);
1594 do_something(my_data);
and the waker does:

        my_data = value;
        event_indicated = 1;
1600 wake_up(&event_wait_queue);
1602 there's no guarantee that the change to event_indicated will be perceived by
1603 the sleeper as coming after the change to my_data. In such a circumstance, the
1604 code on both sides must interpolate its own memory barriers between the
1605 separate data accesses. Thus the above sleeper ought to do:
1607 set_current_state(TASK_INTERRUPTIBLE);
1608 if (event_indicated) {
                smp_rmb();
                do_something(my_data);
        }
1613 and the waker should do:
        my_data = new_value;
        smp_wmb();
        event_indicated = 1;
1618 wake_up(&event_wait_queue);
1621 MISCELLANEOUS FUNCTIONS
1622 -----------------------
1624 Other functions that imply barriers:
1626 (*) schedule() and similar imply full memory barriers.
1629 =================================
1630 INTER-CPU LOCKING BARRIER EFFECTS
1631 =================================
1633 On SMP systems locking primitives give a more substantial form of barrier: one
1634 that does affect memory access ordering on other CPUs, within the context of
1635 conflict on any particular lock.
1638 LOCKS VS MEMORY ACCESSES
1639 ------------------------
1641 Consider the following: the system has a pair of spinlocks (M) and (Q), and
1642 three CPUs; then should the following sequence of events occur:
        CPU 1                           CPU 2
        ===============================	===============================
        ACCESS_ONCE(*A) = a;            ACCESS_ONCE(*E) = e;
        LOCK M                          LOCK Q
        ACCESS_ONCE(*B) = b;            ACCESS_ONCE(*F) = f;
        ACCESS_ONCE(*C) = c;            ACCESS_ONCE(*G) = g;
        UNLOCK M                        UNLOCK Q
        ACCESS_ONCE(*D) = d;            ACCESS_ONCE(*H) = h;
1653 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
1654 through *H occur in, other than the constraints imposed by the separate locks
1655 on the separate CPUs. It might, for example, see:
1657 *E, LOCK M, LOCK Q, *G, *C, *F, *A, *B, UNLOCK Q, *D, *H, UNLOCK M
1659 But it won't see any of:
1661 *B, *C or *D preceding LOCK M
1662 *A, *B or *C following UNLOCK M
1663 *F, *G or *H preceding LOCK Q
1664 *E, *F or *G following UNLOCK Q
1667 However, if the following occurs:
        CPU 1                           CPU 2
        ===============================	===============================
        ACCESS_ONCE(*A) = a;
        LOCK M       [1]
        ACCESS_ONCE(*B) = b;
        ACCESS_ONCE(*C) = c;
        UNLOCK M     [1]
        ACCESS_ONCE(*D) = d;            ACCESS_ONCE(*E) = e;
                                        LOCK M       [2]
                                        ACCESS_ONCE(*F) = f;
                                        ACCESS_ONCE(*G) = g;
                                        UNLOCK M     [2]
                                        ACCESS_ONCE(*H) = h;
CPU 3 might see:

        *E, LOCK M [1], *C, *B, *A, UNLOCK M [1],
1686 LOCK M [2], *H, *F, *G, UNLOCK M [2], *D
1688 But assuming CPU 1 gets the lock first, CPU 3 won't see any of:
1690 *B, *C, *D, *F, *G or *H preceding LOCK M [1]
1691 *A, *B or *C following UNLOCK M [1]
1692 *F, *G or *H preceding LOCK M [2]
1693 *A, *B, *C, *E, *F or *G following UNLOCK M [2]
1696 LOCKS VS I/O ACCESSES
1697 ---------------------
1699 Under certain circumstances (especially involving NUMA), I/O accesses within
1700 two spinlocked sections on two different CPUs may be seen as interleaved by the
1701 PCI bridge, because the PCI bridge does not necessarily participate in the
1702 cache-coherence protocol, and is therefore incapable of issuing the required
1703 read memory barriers.
For example:

        CPU 1                           CPU 2
        ===============================	===============================
        spin_lock(Q)
        writel(0, ADDR)
        writel(1, DATA);
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        writel(5, DATA);
                                        spin_unlock(Q);

may be seen by the PCI bridge as follows:
1720 STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
1722 which would probably cause the hardware to malfunction.
1725 What is necessary here is to intervene with an mmiowb() before dropping the
1726 spinlock, for example:
        CPU 1                           CPU 2
        ===============================	===============================
        spin_lock(Q)
        writel(0, ADDR)
        writel(1, DATA);
        mmiowb();
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        writel(5, DATA);
                                        mmiowb();
                                        spin_unlock(Q);
1741 this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
1742 before either of the stores issued on CPU 2.
1745 Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the load
is performed:
        CPU 1                           CPU 2
        ===============================	===============================
        spin_lock(Q)
        writel(0, ADDR)
        a = readl(DATA);
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        b = readl(DATA);
                                        spin_unlock(Q);
1761 See Documentation/DocBook/deviceiobook.tmpl for more information.
1764 =================================
1765 WHERE ARE MEMORY BARRIERS NEEDED?
1766 =================================
1768 Under normal operation, memory operation reordering is generally not going to
1769 be a problem as a single-threaded linear piece of code will still appear to
1770 work correctly, even if it's in an SMP kernel. There are, however, four
1771 circumstances in which reordering definitely _could_ be a problem:
1773 (*) Interprocessor interaction.
1775 (*) Atomic operations.
(*) Accessing devices.

(*) Interrupts.
1782 INTERPROCESSOR INTERACTION
1783 --------------------------
1785 When there's a system with more than one processor, more than one CPU in the
1786 system may be working on the same data set at the same time. This can cause
1787 synchronisation problems, and the usual way of dealing with them is to use
1788 locks. Locks, however, are quite expensive, and so it may be preferable to
1789 operate without the use of a lock if at all possible. In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.
1793 Consider, for example, the R/W semaphore slow path. Here a waiting process is
1794 queued on the semaphore, by virtue of it having a piece of its stack linked to
1795 the semaphore's list of waiting processes:
        struct rw_semaphore {
                ...
                struct list_head waiters;
        };

        struct rwsem_waiter {
                struct list_head list;
                struct task_struct *task;
        };
1808 To wake up a particular waiter, the up_read() or up_write() functions have to:
1810 (1) read the next pointer from this waiter's record to know as to where the
1811 next waiter record is;
1813 (2) read the pointer to the waiter's task structure;
1815 (3) clear the task pointer to tell the waiter it has been given the semaphore;
1817 (4) call wake_up_process() on the task; and
1819 (5) release the reference held on the waiter's task struct.
1821 In other words, it has to perform this sequence of events:
        LOAD waiter->list.next;
        LOAD waiter->task;
        STORE waiter->task;
        CALL wakeup
        RELEASE task
and if any of these steps occur out of order, then the whole thing may
malfunction.
1832 Once it has queued itself and dropped the semaphore lock, the waiter does not
1833 get the lock again; it instead just waits for its task pointer to be cleared
1834 before proceeding. Since the record is on the waiter's stack, this means that
1835 if the task pointer is cleared _before_ the next pointer in the list is read,
1836 another CPU might start processing the waiter and might clobber the waiter's
1837 stack before the up*() function has a chance to read the next pointer.
1839 Consider then what might happen to the above sequence of events:
        CPU 1                           CPU 2
        ===============================	===============================
                                        down_xxx()
                                        Queue waiter
                                        Sleep
        up_yyy()
        LOAD waiter->task;
        STORE waiter->task;
                                        Woken up by other event
        <preempt>
                                        Resume processing
                                        down_xxx() returns
                                        call foo()
                                        foo() clobbers *waiter
        </preempt>
        LOAD waiter->list.next;
        --- OOPS ---
1859 This could be dealt with using the semaphore lock, but then the down_xxx()
1860 function has to needlessly get the spinlock again after being woken up.
1862 The way to deal with this is to insert a general SMP memory barrier:
        LOAD waiter->list.next;
        LOAD waiter->task;
        smp_mb();
        STORE waiter->task;
        CALL wakeup
        RELEASE task
1871 In this case, the barrier makes a guarantee that all memory accesses before the
1872 barrier will appear to happen before all the memory accesses after the barrier
1873 with respect to the other CPUs on the system. It does _not_ guarantee that all
1874 the memory accesses before the barrier will be complete by the time the barrier
1875 instruction itself is complete.
1877 On a UP system - where this wouldn't be a problem - the smp_mb() is just a
1878 compiler barrier, thus making sure the compiler emits the instructions in the
1879 right order without actually intervening in the CPU. Since there's only one
1880 CPU, that CPU's dependency ordering logic will take care of everything else.
ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.
1891 Any atomic operation that modifies some state in memory and returns information
1892 about the state (old or new) implies an SMP-conditional general memory barrier
1893 (smp_mb()) on each side of the actual operation (with the exception of
1894 explicit lock operations, described later). These include:
        xchg();
        cmpxchg();
        atomic_xchg();                  atomic_long_xchg();
1899 atomic_cmpxchg(); atomic_long_cmpxchg();
1900 atomic_inc_return(); atomic_long_inc_return();
1901 atomic_dec_return(); atomic_long_dec_return();
1902 atomic_add_return(); atomic_long_add_return();
1903 atomic_sub_return(); atomic_long_sub_return();
1904 atomic_inc_and_test(); atomic_long_inc_and_test();
1905 atomic_dec_and_test(); atomic_long_dec_and_test();
1906 atomic_sub_and_test(); atomic_long_sub_and_test();
1907 atomic_add_negative(); atomic_long_add_negative();
        test_and_set_bit();
        test_and_clear_bit();
1910 test_and_change_bit();
1912 /* when succeeds (returns 1) */
1913 atomic_add_unless(); atomic_long_add_unless();
1915 These are used for such things as implementing LOCK-class and UNLOCK-class
1916 operations and adjusting reference counters towards object destruction, and as
1917 such the implicit memory barrier effects are necessary.
1920 The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as UNLOCK-class
operations:

        atomic_set();
        set_bit();
        clear_bit();
        change_bit();
1929 With these the appropriate explicit memory barrier should be used if necessary
1930 (smp_mb__before_clear_bit() for instance).
1933 The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic_dec() for
instance):

        atomic_add();
        atomic_sub();
        atomic_inc();
        atomic_dec();
1942 If they're used for statistics generation, then they probably don't need memory
1943 barriers, unless there's a coupling between statistical data.
If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references, making a lock, and thus a memory barrier, unnecessary.
1950 If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.
1954 Basically, each usage case has to be carefully considered as to whether memory
1955 barriers are needed or not.
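As a sketch of the reference-counting case, the implicit barriers of
atomic_dec_and_test() are typically all that is needed; obj and free_object()
are hypothetical:

        /* Drop a reference; the last putter frees the object. */
        if (atomic_dec_and_test(&obj->ref_count))
                free_object(obj);       /* all prior accesses to obj are ordered before this */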
1957 The following operations are special locking primitives:
1959 test_and_set_bit_lock();
        clear_bit_unlock();
        __clear_bit_unlock();
1963 These implement LOCK-class and UNLOCK-class operations. These should be used in
1964 preference to other operations when implementing locking primitives, because
1965 their implementations can be optimised on many architectures.
1967 [!] Note that special memory barrier primitives are available for these
1968 situations because on some CPUs the atomic instructions used imply full memory
1969 barriers, and so barrier instructions are superfluous in conjunction with them,
1970 and in such cases the special barrier primitives will be no-ops.
1972 See Documentation/atomic_ops.txt for more information.
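As an illustrative sketch, a simple bit-based lock built on these primitives
might look as follows; LOCK_BIT and the flags word are hypothetical:

        unsigned long flags = 0;

        /* Acquire: acts as a LOCK-class operation when it succeeds */
        while (test_and_set_bit_lock(LOCK_BIT, &flags))
                cpu_relax();

        /* ... critical section ... */

        /* Release: acts as an UNLOCK-class operation */
        clear_bit_unlock(LOCK_BIT, &flags);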
ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
1979 a set of memory locations. To control such a device, the driver usually has to
1980 make the right memory accesses in exactly the right order.
1982 However, having a clever CPU or a clever compiler creates a potential problem
1983 in that the carefully sequenced accesses in the driver code won't reach the
1984 device in the requisite order if the CPU or the compiler thinks it is more
1985 efficient to reorder, combine or merge accesses - something that would cause
1986 the device to malfunction.
Inside the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential. Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
may be needed:
1994 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
1995 so for _all_ general drivers locks should be used and mmiowb() must be
issued prior to unlocking the critical section (see the sketch below).
1998 (2) If the accessor functions are used to refer to an I/O memory window with
1999 relaxed memory access properties, then _mandatory_ memory barriers are
2000 required to enforce ordering.
2002 See Documentation/DocBook/deviceiobook.tmpl for more information.
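As a sketch of case (1) (dev, ADDR, DATA and val are illustrative names for a
hypothetical device, not from this document):

	spin_lock_irqsave(&dev->lock, flags);
	writel(0, ADDR);
	writel(val, DATA);
	mmiowb();	/* order the MMIO stores before the unlock */
	spin_unlock_irqrestore(&dev->lock, flags);

Without the mmiowb(), a second CPU taking the lock could have its own MMIO
stores reach the device before those issued under the lock by the first CPU.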
INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.
2012 This may be alleviated - at least in part - by disabling local interrupts (a
2013 form of locking), such that the critical operations are all contained within
2014 the interrupt-disabled section in the driver. Whilst the driver's interrupt
2015 routine is executing, the driver's core may not run on the same CPU, and its
2016 interrupt is not permitted to happen again until the current interrupt has been
2017 handled, thus the interrupt handler does not need to lock against that.
However, consider a driver that is talking to an ethernet card that sports an
address register and a data register. If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>
2032 The store to the data register might happen after the second store to the
2033 address register if ordering rules are sufficiently relaxed:
2035 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2038 If ordering rules are relaxed, it must be assumed that accesses done inside an
2039 interrupt disabled section may leak outside of it and may interleave with
2040 accesses performed in an interrupt - and vice versa - unless implicit or
2041 explicit barriers are used.
2043 Normally this won't be a problem because the I/O accesses done inside such
2044 sections will include synchronous load operations on strictly ordered I/O
2045 registers that form implicit I/O barriers. If this isn't sufficient then an
2046 mmiowb() may need to be used explicitly.
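For instance, in the style of the example above (STATUS being an illustrative
strictly ordered register), a synchronous load inside the section keeps the
stores from leaking out of it:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	q = readw(STATUS);	/* implicit I/O barrier */
	LOCAL IRQ ENABLE

The load cannot complete until the device has seen the preceding stores, so
those stores can no longer interleave with accesses made by the interrupt
handler.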
2049 A similar situation may occur between an interrupt routine and two routines
2050 running on separate CPUs that communicate with each other. If such a case is
2051 likely, then interrupt-disabling locks should be used to guarantee ordering.
2054 ==========================
2055 KERNEL I/O BARRIER EFFECTS
2056 ==========================
When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():
2063 These are intended to talk to I/O space rather than memory space, but
2064 that's primarily a CPU-specific concept. The i386 and x86_64 processors do
2065 indeed have special I/O space access cycles and instructions, but many
2066 CPUs don't have such a concept.
2068 The PCI bus, amongst others, defines an I/O space concept which - on such
2069 CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
2070 space. However, it may also be mapped as a virtual I/O space in the CPU's
memory map, particularly on those CPUs that don't support alternate I/O
spaces.
2074 Accesses to this space may be fully synchronous (as on i386), but
intermediary bridges (such as the PCI host bridge) may not fully honour
that.
2078 They are guaranteed to be fully ordered with respect to each other.
2080 They are not guaranteed to be fully ordered with respect to other types of
2081 memory and I/O operation.
2083 (*) readX(), writeX():
2085 Whether these are guaranteed to be fully ordered and uncombined with
2086 respect to each other on the issuing CPU depends on the characteristics
2087 defined for the memory window through which they're accessing. On later
i386 architecture machines, for example, this is controlled by way of the
MTRR registers.
2091 Ordinarily, these will be guaranteed to be fully ordered and uncombined,
2092 provided they're not accessing a prefetchable device.
2094 However, intermediary hardware (such as a PCI bridge) may indulge in
2095 deferral if it so wishes; to flush a store, a load from the same location
2096 is preferred[*], but a load from the same device or from configuration
space should suffice for PCI (see the sketch after this list).
2099 [*] NOTE! attempting to load from the same location as was written to may
cause a malfunction - consider the 16550 Rx/Tx serial registers for
example.
2103 Used with prefetchable I/O memory, an mmiowb() barrier may be required to
2104 force stores to be ordered.
2106 Please refer to the PCI specification for more information on interactions
2107 between PCI transactions.
 (*) readX_relaxed()

These are similar to readX(), but are not guaranteed to be ordered in any
2112 way. Be aware that there is no I/O read barrier available.
2114 (*) ioreadX(), iowriteX()
2116 These will perform appropriately for the type of access they're actually
2117 doing, be it inX()/outX() or readX()/writeX().
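As a sketch of the store-flushing technique mentioned under readX() and
writeX() above (dev, CTRL, STATUS and ENABLE are illustrative names for a
hypothetical device):

	writel(ENABLE, dev->regs + CTRL);
	(void) readl(dev->regs + STATUS);	/* push the posted write out */

The read cannot complete until any store deferred by an intermediary bridge
has been delivered to the device.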
2120 ========================================
2121 ASSUMED MINIMUM EXECUTION ORDERING MODEL
2122 ========================================
2124 It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2125 maintain the appearance of program causality with respect to itself. Some CPUs
2126 (such as i386 or x86_64) are more constrained than others (such as powerpc or
2127 frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2128 of arch-specific code.
2130 This means that it must be considered that the CPU will execute its instruction
2131 stream in any order it feels like - or even in parallel - provided that if an
2132 instruction in the stream depends on an earlier instruction, then that
2133 earlier instruction must be sufficiently complete[*] before the later
2134 instruction may proceed; in other words: provided that the appearance of
2135 causality is maintained.
2137 [*] Some instructions have more than one effect - such as changing the
2138 condition codes, changing registers or changing memory - and different
2139 instructions may depend on different effects.
2141 A CPU may also discard any instruction sequence that winds up having no
2142 ultimate effect. For example, if two adjacent instructions both load an
2143 immediate value into the same register, the first may be discarded.
Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.
2151 ============================
2152 THE EFFECTS OF THE CPU CACHE
2153 ============================
2155 The way cached memory operations are perceived across the system is affected to
2156 a certain extent by the caches that lie between CPUs and memory, and by the
2157 memory coherence system that maintains the consistency of state in the system.
2159 As far as the way a CPU interacts with another part of the system through the
2160 caches goes, the memory system has to include the CPU's caches, and memory
2161 barriers for the most part act at the interface between the CPU and its cache
2162 (memory barriers logically act on the dotted line in the following diagram):
	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |    +--------+
	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |--->| Memory |
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |    +--------+
	                          :                 | Coherency |
	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
	|        |    |        |  :   |        |    |           |    |        |
	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |    +--------+
	+--------+    +--------+  :   +--------+    +-----------+
2186 Although any particular load or store may not actually appear outside of the
2187 CPU that issued it since it may have been satisfied within the CPU's own cache,
2188 it will still appear as if the full memory access had taken place as far as the
2189 other CPUs are concerned since the cache coherency mechanisms will migrate the
2190 cacheline over to the accessing CPU and propagate the effects upon conflict.
2192 The CPU core may execute instructions in any order it deems fit, provided the
2193 expected program causality appears to be maintained. Some of the instructions
2194 generate load and store operations which then go into the queue of memory
2195 accesses to be performed. The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.
2199 What memory barriers are concerned with is controlling the order in which
2200 accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.
2204 [!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2205 their own loads and stores as if they had happened in program order.
2207 [!] MMIO or other device accesses may bypass the cache system. This depends on
2208 the properties of the memory window through which devices are accessed and/or
2209 the use of any special device communication instructions the CPU may have.
CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
2216 caches are expected to be coherent, there's no guarantee that that coherency
2217 will be ordered. This means that whilst changes made on one CPU will
2218 eventually become visible on all CPUs, there's no guarantee that they will
2219 become apparent in the same order on those other CPUs.
2222 Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2223 has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
	            :
	            :                          +--------+
	            :      +---------+         |        |
	+--------+  : +--->| Cache A |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 1 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache B |<------->|        |
	            :      +---------+         |        |
	            :                          | Memory |
	            :      +---------+         | System |
	+--------+  : +--->| Cache C |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 2 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache D |<------->|        |
	            :      +---------+         |        |
	            :                          +--------+
	            :
2245 Imagine the system has the following properties:
(*) an odd-numbered cache line may be in cache A, cache C or it may still be
resident in memory;
(*) an even-numbered cache line may be in cache B, cache D or it may still be
resident in memory;
2253 (*) whilst the CPU core is interrogating one cache, the other cache may be
2254 making use of the bus to access the rest of the system - perhaps to
2255 displace a dirty cacheline or to do a speculative load;
2257 (*) each cache has a queue of operations that need to be applied to that cache
2258 to maintain coherency with the rest of the system;
2260 (*) the coherency queue is not flushed by normal loads to lines already
2261 present in the cache, even though the contents of the queue may
2262 potentially affect those loads.
2264 Imagine, then, that two writes are made on the first CPU, with a write barrier
2265 between them to guarantee that they will appear to reach that CPU's caches in
2266 the requisite order:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();			Make sure change to v is visible before
					 change to p
	<A:modify v=2>			v is now in cache A exclusively
	p = &v;
	<B:modify p=&v>			p is now in cache B exclusively
2278 The write memory barrier forces the other CPUs in the system to perceive that
2279 the local CPU's caches have apparently been updated in the correct order. But
2280 now imagine that the second CPU wants to read those values:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
	...
			q = p;
			x = *q;
2288 The above pair of reads may then fail to happen in the expected order, as the
2289 cacheline holding p may get updated in one of the second CPU's caches whilst
2290 the update to the cacheline holding v is delayed in the other of the second
2291 CPU's caches by some other cache event:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			x = *q;
			<C:read *q>	Reads from v before v updated in cache
			<C:unbusy>
			<C:commit v=2>
2309 Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
2310 no guarantee that, without intervention, the order of update will be the same
2311 as that committed on CPU 1.
2314 To intervene, we need to interpolate a data dependency barrier or a read
2315 barrier between the loads. This will force the cache to commit its coherency
2316 queue before processing any further requests:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			smp_read_barrier_depends()
			<C:unbusy>
			<C:commit v=2>
			x = *q;
			<C:read *q>	Reads from v after v updated in cache
2336 This sort of problem can be encountered on DEC Alpha processors as they have a
2337 split cache that improves performance by making better use of the data bus.
2338 Whilst most CPUs do imply a data dependency barrier on the read when a memory
2339 access depends on a read, not all do, so it may not be relied on.
2341 Other CPUs may also have split caches, but must coordinate between the various
2342 cachelets for normal memory accesses. The semantics of the Alpha removes the
2343 need for coordination in the absence of memory barriers.
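The corresponding code pattern is the pointer-publication idiom (a sketch;
gdata, gp and the function names are illustrative):

	int gdata;
	int *gp;

	void writer(void)
	{
		gdata = 42;
		smp_wmb();		/* commit the data before the pointer */
		gp = &gdata;
	}

	int reader(void)
	{
		int *q = ACCESS_ONCE(gp);

		smp_read_barrier_depends();	/* required by DEC Alpha */
		return q ? *q : 0;
	}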
2346 CACHE COHERENCY VS DMA
2347 ----------------------
2349 Not all systems maintain cache coherency with respect to devices doing DMA. In
2350 such cases, a device attempting DMA may obtain stale data from RAM because
2351 dirty cache lines may be resident in the caches of various CPUs, and may not
2352 have been written back to RAM yet. To deal with this, the appropriate part of
2353 the kernel must flush the overlapping bits of cache on each CPU (and maybe
2354 invalidate them as well).
2356 In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2357 cache lines being written back to RAM from a CPU's cache after the device has
2358 installed its own data, or cache lines present in the CPU's cache may simply
2359 obscure the fact that RAM has been updated, until at such time as the cacheline
2360 is discarded from the CPU's cache and reloaded. To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.
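In practice, drivers rarely flush or invalidate caches by hand; the DMA
mapping API does whatever is necessary on non-coherent systems. A minimal
sketch (dev, buf and len are illustrative):

	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	/* ... hand 'handle' to the device and start the transfer ... */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);

The map operation performs any flush required for DMA_TO_DEVICE, and the unmap
(or a dma_sync_*() call) performs any invalidation required for
DMA_FROM_DEVICE.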
2364 See Documentation/cachetlb.txt for more information on cache management.
2367 CACHE COHERENCY VS MMIO
2368 -----------------------
2370 Memory mapped I/O usually takes place through memory locations that are part of
2371 a window in the CPU's memory space that has different properties assigned than
2372 the usual RAM directed window.
2374 Amongst these properties is usually the fact that such accesses bypass the
2375 caching entirely and go directly to the device buses. This means MMIO accesses
2376 may, in effect, overtake accesses to cached memory that were emitted earlier.
2377 A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
2382 =========================
2383 THE THINGS CPUS GET UP TO
2384 =========================
2386 A programmer might take it for granted that the CPU will perform memory
2387 operations in exactly the order specified, so that if the CPU is, for example,
2388 given the following piece of code to execute:
	a = ACCESS_ONCE(*A);
	ACCESS_ONCE(*B) = b;
	c = ACCESS_ONCE(*C);
	d = ACCESS_ONCE(*D);
	ACCESS_ONCE(*E) = e;
2396 they would then expect that the CPU will complete the memory operation for each
2397 instruction before moving on to the next one, leading to a definite sequence of
2398 operations as seen by external observers in the system:
2400 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2403 Reality is, of course, much messier. With many CPUs and compilers, the above
2404 assumption doesn't hold because:
2406 (*) loads are more likely to need to be completed immediately to permit
execution progress, whereas stores can often be deferred without a
problem;
2410 (*) loads may be done speculatively, and the result discarded should it prove
2411 to have been unnecessary;
2413 (*) loads may be done speculatively, leading to the result having been fetched
2414 at the wrong time in the expected sequence of events;
2416 (*) the order of the memory accesses may be rearranged to promote better use
2417 of the CPU buses and caches;
2419 (*) loads and stores may be combined to improve performance when talking to
2420 memory or I/O hardware that can do batched accesses of adjacent locations,
2421 thus cutting down on transaction setup costs (memory and PCI devices may
2422 both be able to do this); and
2424 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
2425 mechanisms may alleviate this - once the store has actually hit the cache
2426 - there's no guarantee that the coherency management will be propagated in
2427 order to other CPUs.
So what another CPU, say, might actually observe from the above piece of code
is:
2432 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2434 (Where "LOAD {*C,*D}" is a combined load)
2437 However, it is guaranteed that a CPU will be self-consistent: it will see its
2438 _own_ accesses appear to be correctly ordered, without the need for a memory
2439 barrier. For instance with the following code:
	U = ACCESS_ONCE(*A);
	ACCESS_ONCE(*A) = V;
	ACCESS_ONCE(*A) = W;
	X = ACCESS_ONCE(*A);
	ACCESS_ONCE(*A) = Y;
	Z = ACCESS_ONCE(*A);
2448 and assuming no intervention by an external influence, it can be assumed that
2449 the final result will appear to be:
	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:
2459 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
2461 in that order, but, without intervention, the sequence may have almost any
2462 combination of elements combined or discarded, provided the program's view of
2463 the world remains consistent. Note that ACCESS_ONCE() is -not- optional
2464 in the above example, as there are architectures where a given CPU might
2465 interchange successive loads to the same location. On such architectures,
2466 ACCESS_ONCE() does whatever is necessary to prevent this, for example, on
2467 Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the
2468 special ld.acq and st.rel instructions that prevent such reordering.
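For reference, ACCESS_ONCE() amounts to a volatile cast; this is essentially
its definition in linux/compiler.h:

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

The volatile access forbids the compiler from merging, re-fetching or
discarding the load or store, and on Itanium additionally selects the ordered
instruction forms as described above.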
The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them. For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or an ACCESS_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost. Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or an ACCESS_ONCE(), be reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appear outside of the CPU.
2496 AND THEN THERE'S THE ALPHA
2497 --------------------------
2499 The DEC Alpha CPU is one of the most relaxed CPUs there is. Not only that,
2500 some versions of the Alpha CPU have a split data cache, permitting them to have
2501 two semantically-related cache lines updated at separate times. This is where
2502 the data dependency barrier really becomes necessary as this synchronises both
2503 caches with the memory coherence system, thus making it seem like pointer
2504 changes vs new data occur in the right order.
2506 The Alpha defines the Linux kernel's memory barrier model.
2508 See the subsection on "Cache Coherency" above.
============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
2519 of a lock to serialise the producer with the consumer. See:
2521 Documentation/circular-buffers.txt
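In outline, the pattern described there looks like this for a power-of-two
sized ring (a condensed sketch assuming <linux/circ_buf.h>; buffer and the
item functions are illustrative):

	/* producer */
	unsigned long h = buffer->head;
	unsigned long t = ACCESS_ONCE(buffer->tail);

	if (CIRC_SPACE(h, t, buffer->size) >= 1) {
		produce_item(&buffer->items[h]);
		smp_wmb();	/* commit the item before moving the head */
		buffer->head = (h + 1) & (buffer->size - 1);
	}

	/* consumer */
	unsigned long h = ACCESS_ONCE(buffer->head);
	unsigned long t = buffer->tail;

	if (CIRC_CNT(h, t, buffer->size) >= 1) {
		smp_read_barrier_depends();	/* read index before item */
		consume_item(&buffer->items[t]);
		smp_mb();	/* finish with the item before moving the tail */
		buffer->tail = (t + 1) & (buffer->size - 1);
	}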
==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
2532 Chapter 5.2: Physical Address Space Characteristics
2533 Chapter 5.4: Caches and Write Buffers
2534 Chapter 5.5: Data Sharing
2535 Chapter 5.6: Read/Write Ordering
2537 AMD64 Architecture Programmer's Manual Volume 2: System Programming
2538 Chapter 7.1: Memory-Access Ordering
2539 Chapter 7.4: Buffering and Combining Memory Writes
2541 IA-32 Intel Architecture Software Developer's Manual, Volume 3:
2542 System Programming Guide
2543 Chapter 7.1: Locked Atomic Operations
2544 Chapter 7.2: Memory Ordering
2545 Chapter 7.4: Serializing Instructions
2547 The SPARC Architecture Manual, Version 9
2548 Chapter 8: Memory Models
2549 Appendix D: Formal Specification of the Memory Models
2550 Appendix J: Programming with the Memory Models
2552 UltraSPARC Programmer Reference Manual
2553 Chapter 5: Memory Accesses and Cacheability
2554 Chapter 15: Sparc-V9 Memory Models
2556 UltraSPARC III Cu User's Manual
2557 Chapter 9: Memory Models
2559 UltraSPARC IIIi Processor User's Manual
2560 Chapter 8: Memory Models
UltraSPARC Architecture 2005
	Chapter 9: Memory
2564 Appendix D: Formal Specifications of the Memory Models
2566 UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
2567 Chapter 8: Memory Models
2568 Appendix F: Caches and Cache Coherency
2570 Solaris Internals, Core Kernel Architecture, p63-68:
Chapter 3.3: Hardware Considerations for Locks and Synchronization
2574 Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
2575 for Kernel Programmers:
2576 Chapter 13: Other Memory Models
2578 Intel Itanium Architecture Software Developer's Manual: Volume 1:
2579 Section 2.6: Speculation
2580 Section 4.4: Memory Access