                         ============================
                         LINUX KERNEL MEMORY BARRIERS
                         ============================

By: David Howells <dhowells@redhat.com>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - The CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the CPU cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

                            :                :
                            :                :
                            :                :
                +-------+   :   +--------+   :   +-------+
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                | CPU 1 |<----->| Memory |<----->| CPU 2 |
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                +-------+   :   +--------+   :   +-------+
                    ^       :       ^        :       ^
                    |       :       |        :       |
                    |       :       |        :       |
                    |       :       v        :       |
                    |       :   +--------+   :       |
                    |       :   |        |   :       |
                    |       :   |        |   :       |
                    +---------->| Device |<----------+
                            :   |        |   :
                            :   |        |   :
                            :   +--------+   :
                            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).

For example, consider the following sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1; B == 2 }
        A = 3;          x = A;
        B = 4;          y = B;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

        STORE A=3,   STORE B=4,   x=LOAD A->3,  y=LOAD B->4
        STORE A=3,   STORE B=4,   y=LOAD B->4,  x=LOAD A->3
        STORE A=3,   x=LOAD A->3, STORE B=4,    y=LOAD B->4
        STORE A=3,   x=LOAD A->3, y=LOAD B->2,  STORE B=4
        STORE A=3,   y=LOAD B->2, STORE B=4,    x=LOAD A->3
        STORE A=3,   y=LOAD B->2, x=LOAD A->3,  STORE B=4
        STORE B=4,   STORE A=3,   x=LOAD A->3,  y=LOAD B->4
        STORE B=4, ...
        ...

and can thus result in four different combinations of values:

        x == 1, y == 2
        x == 1, y == 4
        x == 3, y == 2
        x == 3, y == 4


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;          Q = P;
        P = &B          D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

        (Q == &A) and (D == 1)
        (Q == &B) and (D == 2)
        (Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;

but this might show up as either of the following two sequences:

        STORE *A = 5, x = LOAD *D
        x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.

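As a hedged sketch only: with the kernel's MMIO accessors and a hypothetical
ioremapped register block (the offsets CARD_ADDR_PORT and CARD_DATA_PORT are
invented for illustration), the same access might be written as follows; the
accessor functions are discussed further under "Kernel I/O barrier effects":

        void __iomem *regs;                     /* from ioremap() */

        writel(5, regs + CARD_ADDR_PORT);       /* select internal register 5 */
        x = readl(regs + CARD_DATA_PORT);       /* then read its contents */
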

GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

        Q = P; D = *Q;

     the CPU will issue the following memory operations:

        Q = LOAD P, D = LOAD *Q

     and always in that order.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

        a = *X; *X = b;

     the CPU will only issue the following sequence of memory operations:

        a = LOAD *X, STORE *X = b

     And for:

        *X = c; d = *X;

     the CPU will only issue:

        STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

        X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

        X = LOAD *A, Y = LOAD *B, STORE *D = Z
        X = LOAD *A, STORE *D = Z, Y = LOAD *B
        Y = LOAD *B, X = LOAD *A, STORE *D = Z
        Y = LOAD *B, STORE *D = Z, X = LOAD *A
        STORE *D = Z, X = LOAD *A, Y = LOAD *B
        STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

        X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

        X = LOAD *A; Y = LOAD *(A + 4);
        Y = LOAD *(A + 4); X = LOAD *A;
        {X, Y} = LOAD {*A, *(A + 4) };

     And for:

        *A = X; Y = *A;

     we may get either of:

        STORE *A = X; Y = LOAD *A;
        STORE *A = Y = X;


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering between the memory operations specified on either side of the barrier.
They request that the sequence of memory events generated appears to other
parts of the system as if the barrier is effective on that CPU.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier is a combination of both a read memory barrier
     and a write memory barrier.  It is a partial ordering over both loads and
     stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) LOCK operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the LOCK operation will appear to happen after the LOCK
     operation with respect to the other components of the system.

     Memory operations that occur before a LOCK operation may appear to happen
     after it completes.

     A LOCK operation should almost always be paired with an UNLOCK operation.


 (6) UNLOCK operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the UNLOCK operation will appear to happen before
     the UNLOCK operation with respect to the other components of the system.

     Memory operations that occur after an UNLOCK operation may appear to
     happen before it completes.

     LOCK and UNLOCK operations are guaranteed to appear with respect to each
     other strictly in the order specified.

     The use of LOCK and UNLOCK operations generally precludes the need for
     other sorts of memory barrier (but note the exceptions mentioned in the
     subsection "MMIO write barrier").


Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP barrier pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

        [*] For information on bus mastering DMA and coherency please read:

            Documentation/pci.txt
            Documentation/DMA-mapping.txt
            Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        P = &B
                        Q = P;
                        D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

        (Q == &A) implies (D == 1)
        (Q == &B) implies (D == 4)

But! CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

        (Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier must be inserted between the
address load and the data load:

        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        P = &B
                        Q = P;
                        <data dependency barrier>
                        D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:

        CPU 1           CPU 2
        =============== ===============
        { M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
        M[1] = 4;
        <write barrier>
        P = 1
                        Q = P;
                        <data dependency barrier>
                        D = M[Q];


The data dependency barrier is very important to the RCU system, for example.
See rcu_dereference() in include/linux/rcupdate.h.  This permits the current
target of an RCU'd pointer to be replaced with a new modified target, without
the replacement target appearing to be incompletely initialised.

See also the subsection on "Cache coherency" for a more thorough example.

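As a hedged illustration of the publication pattern that rcu_dereference()
wraps up, assume a writer that publishes a hypothetical structure through a
shared pointer gp (all the names here are invented for the example):

        struct foo { int data; };
        struct foo *gp;                 /* shared pointer */

        /* writer (CPU 1) */
        p->data = 42;                   /* initialise the new object */
        smp_wmb();                      /* commit the contents before
                                         * publishing the pointer */
        gp = p;

        /* reader (CPU 2) */
        q = gp;
        smp_read_barrier_depends();     /* data dependency barrier */
        d = q->data;                    /* sees 42 if q points to the
                                         * new object */
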

CONTROL DEPENDENCIES
--------------------

A control dependency requires a full read memory barrier, not simply a data
dependency barrier, to make it work correctly.  Consider the following bit of
code:

        q = &a;
        if (p)
                q = &b;
        <data dependency barrier>
        x = *q;

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit by
attempting to predict the outcome in advance.  In such a case what's actually
required is:

        q = &a;
        if (p)
                q = &b;
        <read barrier>
        x = *q;

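In terms of the kernel's barrier primitives (covered under "The CPU memory
barriers" below), and assuming p, q, a and b are shared as above, that read
barrier would be smp_rmb(); the weaker smp_read_barrier_depends() would not
be sufficient here.  This is a sketch, not a recipe:

        q = &a;
        if (p)
                q = &b;
        smp_rmb();              /* full read barrier; a data dependency
                                 * barrier would not order this load */
        x = *q;
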

SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

A write barrier should always be paired with a data dependency barrier or read
barrier, though a general barrier would also be viable.  Similarly a read
barrier or a data dependency barrier should always be paired with at least a
write barrier, though, again, a general barrier is viable:

        CPU 1           CPU 2
        =============== ===============
        a = 1;
        <write barrier>
        b = 2;          x = b;
                        <read barrier>
                        y = a;

Or:

        CPU 1           CPU 2
        =============== ===============================
        a = 1;
        <write barrier>
        b = &a;         x = b;
                        <data dependency barrier>
                        y = *x;

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

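Written with the kernel's SMP-conditional barrier functions (see "The CPU
memory barriers" below), and assuming a and b are suitably shared variables,
the first pairing might be sketched as:

        /* CPU 1 (writer) */
        a = 1;
        smp_wmb();              /* pairs with the reader's smp_rmb() */
        b = 2;

        /* CPU 2 (reader) */
        x = b;
        smp_rmb();              /* pairs with the writer's smp_wmb() */
        y = a;                  /* if x == 2, then y must be 1 */
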

EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

        CPU 1
        =======================
        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  |     }     /\
        |       |  :    +------+     }-----  \  -----> Events perceptible
        |       |  :    | A=1  |     }        \/       to rest of system
        |       |  :    +------+     }
        | CPU 1 |  :    | B=2  |     }
        |       |       +------+     }
        |       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
        |       |       +------+     }        requires all stores prior to the
        |       |  :    | E=5  |     }        barrier to be committed before
        |       |  :    +------+     }        further stores may take place.
        |       |------>| D=4  |     }
        |       |       +------+
        +-------+       :      :
                           |
                           | Sequence in which stores are committed to the
                           | memory system by CPU 1
                           V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+  | Sequence of update
        |       |------>| B=2  |-----       --->| Y->8  |  | of perception on
        |       |  :    +------+     \          +-------+  | CPU 2
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
            Apparently incorrect --->  |        | B->7  |------>|       |
            perception of B (!)        |        +-------+       |       |
                                       |        :       :       |       |
                                       |        +-------+       |       |
            The load of X holds --->    \       | X->9  |------>|       |
            up the maintenance           \      +-------+       |       |
            of coherence of B             ----->| B->2  |       +-------+
                                                +-------+
                                                :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2, then the following will occur:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| B=2  |-----       --->| Y->8  |
        |       |  :    +------+     \          +-------+
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
                                        \       | X->9  |------>|       |
                                         \      +-------+       |       |
                                          ----->| B->2  |       |       |
                                                +-------+       |       |
               Makes sure all effects --->   ddddddddddddddddd  |       |
               prior to the store of C          +-------+       |       |
               are perceptible to               | B->2  |------>|       |
               successive loads                 +-------+       |       |
                                                :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
        STORE A=1
        STORE B=2
        STORE C=3
        <write barrier>
        STORE D=4
        STORE E=5
                                LOAD A
                                LOAD B
                                LOAD C
                                LOAD D
                                LOAD E

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  | }
        |       |  :    +------+ }
        |       |  :    | A=1  | }
        |       |  :    +------+ }
        | CPU 1 |  :    | B=2  | }---
        |       |       +------+ }   \
        |       |   wwwwwwwwwwwww}    \
        |       |       +------+ }     \        :       :       +-------+
        |       |  :    | E=5  | }      \       +-------+       |       |
        |       |  :    +------+ }       \    { | C->3  |------>|       |
        |       |------>| D=4  | }        \   { +-------+    :  |       |
        |       |       +------+           \  { | E->5  |    :  |       |
        +-------+       :      :            \ { +-------+    :  |       |
                           Transfer      -->{ | A->1  |    :  | CPU 2 |
                          from CPU 1        { +-------+    :  |       |
                           to CPU 2         { | D->4  |    :  |       |
                                            { +-------+    :  |       |
                                            { | B->2  |------>|       |
                                              +-------+       |       |
                                              :       :       +-------+


If, however, a read barrier were to be placed between the load of C and the
load of D on CPU 2, then the partial ordering imposed by CPU 1 will be
perceived correctly by CPU 2:

        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  | }
        |       |  :    +------+ }
        |       |  :    | A=1  | }---
        |       |  :    +------+ }   \
        | CPU 1 |  :    | B=2  | }    \
        |       |       +------+       \
        |       |   wwwwwwwwwwwwwwww    \
        |       |       +------+         \      :       :       +-------+
        |       |  :    | E=5  | }        \     +-------+       |       |
        |       |  :    +------+ }---      \  { | C->3  |------>|       |
        |       |------>| D=4  | }   \      \ { +-------+    :  |       |
        |       |       +------+      \   --->{ | B->2  |    :  |       |
        +-------+       :      :       \      { +-------+    :  |       |
                                        \     { | A->1  |    :  | CPU 2 |
                                         \      +-------+       |       |
           At this point the read ---->   \  rrrrrrrrrrrrrrrrr  |       |
           barrier causes all effects      \    +-------+       |       |
           prior to the storage of C        \ { | E->5  |    :  |       |
           to be perceptible to CPU 2     -->{ +-------+    :  |       |
                                              { | D->4  |------>|       |
                                                +-------+       |       |
                                                :       :       +-------+


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.

 (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

        barrier();

This is a general barrier - lesser varieties of compiler barrier do not exist.

The compiler barrier has no direct effect on the CPU, which may then reorder
things however it wishes.

785 | CPU MEMORY BARRIERS | |
786 | ------------------- | |
787 | ||
788 | The Linux kernel has eight basic CPU memory barriers: | |
789 | ||
790 | TYPE MANDATORY SMP CONDITIONAL | |
791 | =============== ======================= =========================== | |
792 | GENERAL mb() smp_mb() | |
793 | WRITE wmb() smp_wmb() | |
794 | READ rmb() smp_rmb() | |
795 | DATA DEPENDENCY read_barrier_depends() smp_read_barrier_depends() | |
796 | ||
797 | ||
798 | All CPU memory barriers unconditionally imply compiler barriers. | |
799 | ||
800 | SMP memory barriers are reduced to compiler barriers on uniprocessor compiled | |
801 | systems because it is assumed that a CPU will be appear to be self-consistent, | |
802 | and will order overlapping accesses correctly with respect to itself. | |
803 | ||
804 | [!] Note that SMP memory barriers _must_ be used to control the ordering of | |
805 | references to shared memory on SMP systems, though the use of locking instead | |
806 | is sufficient. | |
807 | ||
808 | Mandatory barriers should not be used to control SMP effects, since mandatory | |
809 | barriers unnecessarily impose overhead on UP systems. They may, however, be | |
810 | used to control MMIO effects on accesses through relaxed memory I/O windows. | |
811 | These are required even on non-SMP systems as they affect the order in which | |
812 | memory operations appear to a device by prohibiting both the compiler and the | |
813 | CPU from reordering them. | |
814 | ||

There are some more advanced barrier functions:

 (*) set_mb(var, value)
 (*) set_wmb(var, value)

     These assign the value to the variable and then insert at least a write
     barrier after it, depending on the function.  They aren't guaranteed to
     insert anything more than a compiler barrier in a UP compilation.

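     As a hedged sketch of typical usage - the names are hypothetical - a
     waiter might need its state published before it re-checks the wakeup
     condition, lest it miss a concurrent wakeup:

        set_mb(waiter->sleeping, 1);    /* store, then at least a barrier */
        if (!waiter->woken)
                schedule();
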

 (*) smp_mb__before_atomic_dec();
 (*) smp_mb__after_atomic_dec();
 (*) smp_mb__before_atomic_inc();
 (*) smp_mb__after_atomic_inc();

     These are for use with atomic add, subtract, increment and decrement
     functions that don't return a value, especially when used for reference
     counting.  These functions do not imply memory barriers.

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

        obj->dead = 1;
        smp_mb__before_atomic_dec();
        atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.


 (*) smp_mb__before_clear_bit(void);
 (*) smp_mb__after_clear_bit(void);

     These are for use similar to the atomic inc/dec barriers.  They are
     typically used for bitwise unlocking operations, so care must be taken as
     there are no implicit memory barriers here either.

     Consider implementing an unlock operation of some nature by clearing a
     locking bit.  The clear_bit() would then need to be barriered like this:

        smp_mb__before_clear_bit();
        clear_bit( ... );

     This prevents memory operations before the clear leaking to after it.  See
     the subsection on "Locking functions" with reference to UNLOCK operation
     implications.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.

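For instance, a hedged sketch of a simple bit-based lock, assuming bit 0 of a
hypothetical word 'lockword' serves as the lock bit:

        /* lock: test_and_set_bit() returns the old bit value and implies
         * full memory barriers on both sides (see "Atomic operations") */
        while (test_and_set_bit(0, &lockword))
                cpu_relax();

        /* ... critical section ... */

        /* unlock: clear_bit() implies no barrier, so supply one */
        smp_mb__before_clear_bit();
        clear_bit(0, &lockword);
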

MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

        mmiowb();

This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered.  Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.

See the subsection "Locks vs I/O accesses" for more information.


===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst
which are locking, scheduling and memory allocation functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


LOCKING FUNCTIONS
-----------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU

In all cases there are variants on "LOCK" operations and "UNLOCK" operations
for each construct.  These operations all imply certain barriers:

 (1) LOCK operation implication:

     Memory operations issued after the LOCK will be completed after the LOCK
     operation has completed.

     Memory operations issued before the LOCK may be completed after the LOCK
     operation has completed.

 (2) UNLOCK operation implication:

     Memory operations issued before the UNLOCK will be completed before the
     UNLOCK operation has completed.

     Memory operations issued after the UNLOCK may be completed before the
     UNLOCK operation has completed.

 (3) LOCK vs LOCK implication:

     All LOCK operations issued before another LOCK operation will be completed
     before that LOCK operation.

 (4) LOCK vs UNLOCK implication:

     All LOCK operations issued before an UNLOCK operation will be completed
     before the UNLOCK operation.

     All UNLOCK operations issued before a LOCK operation will be completed
     before the LOCK operation.

 (5) Failed conditional LOCK implication:

     Certain variants of the LOCK operation may fail, either due to being
     unable to get the lock immediately, or due to receiving an unblocked
     signal whilst asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is
equivalent to a full barrier, but a LOCK followed by an UNLOCK is not.

[!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way
barriers is that the effects of instructions outside of a critical section may
seep into the inside of the critical section.

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU locking barrier effects".


As an example, consider the following:

        *A = a;
        *B = b;
        LOCK
        *C = c;
        *D = d;
        UNLOCK
        *E = e;
        *F = f;

The following sequence of events is acceptable:

        LOCK, {*F,*A}, *E, {*C,*D}, *B, UNLOCK

        [+] Note that {*F,*A} indicates a combined access.

But none of the following are:

        {*F,*A}, *B,    LOCK, *C, *D,   UNLOCK, *E
        *A, *B, *C,     LOCK, *D,       UNLOCK, *E, *F
        *A, *B,         LOCK, *C,       UNLOCK, *D, *E, *F
        *B,             LOCK, *C, *D,   UNLOCK, {*F,*A}, *E


INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (LOCK equivalent) and enable interrupts
(UNLOCK equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some other
means.

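For example, a hedged sketch with hypothetical register offsets: an
interrupt-disabled section that still needs an explicit barrier to order MMIO
through a relaxed window, because disabling interrupts imposes no ordering of
its own:

        unsigned long flags;

        local_irq_save(flags);          /* compiler barrier only */
        writel(3, regs + ADDR);
        writel(y, regs + DATA);
        mb();                           /* mandatory barrier: the IRQ
                                         * disablement did not order these */
        local_irq_restore(flags);
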

MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.

 (*) Memory allocation and release functions imply full memory barriers.


=================================
INTER-CPU LOCKING BARRIER EFFECTS
=================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


LOCKS VS MEMORY ACCESSES
------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

        CPU 1                           CPU 2
        =============================== ===============================
        *A = a;                         *E = e;
        LOCK M                          LOCK Q
        *B = b;                         *F = f;
        *C = c;                         *G = g;
        UNLOCK M                        UNLOCK Q
        *D = d;                         *H = h;

Then there is no guarantee as to what order CPU #3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

        *E, LOCK M, LOCK Q, *G, *C, *F, *A, *B, UNLOCK Q, *D, *H, UNLOCK M

But it won't see any of:

        *B, *C or *D preceding LOCK M
        *A, *B or *C following UNLOCK M
        *F, *G or *H preceding LOCK Q
        *E, *F or *G following UNLOCK Q


However, if the following occurs:

        CPU 1                           CPU 2
        =============================== ===============================
        *A = a;
        LOCK M [1]
        *B = b;
        *C = c;
        UNLOCK M [1]
        *D = d;                         *E = e;
                                        LOCK M [2]
                                        *F = f;
                                        *G = g;
                                        UNLOCK M [2]
                                        *H = h;

CPU #3 might see:

        *E, LOCK M [1], *C, *B, *A, UNLOCK M [1],
                LOCK M [2], *H, *F, *G, UNLOCK M [2], *D

But assuming CPU #1 gets the lock first, it won't see any of:

        *B, *C, *D, *F, *G or *H preceding LOCK M [1]
        *A, *B or *C following UNLOCK M [1]
        *F, *G or *H preceding LOCK M [2]
        *A, *B, *C, *E, *F or *G following UNLOCK M [2]


LOCKS VS I/O ACCESSES
---------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by the
PCI bridge, because the PCI bridge does not necessarily participate in the
cache-coherence protocol, and is therefore incapable of issuing the required
read memory barriers.

For example:

        CPU 1                           CPU 2
        =============================== ===============================
        spin_lock(Q)
        writel(0, ADDR)
        writel(1, DATA);
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        writel(5, DATA);
                                        spin_unlock(Q);

may be seen by the PCI bridge as follows:

        STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.


What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

        CPU 1                           CPU 2
        =============================== ===============================
        spin_lock(Q)
        writel(0, ADDR)
        writel(1, DATA);
        mmiowb();
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        writel(5, DATA);
                                        mmiowb();
                                        spin_unlock(Q);

this will ensure that the two stores issued on CPU #1 appear at the PCI bridge
before either of the stores issued on CPU #2.


Furthermore, following a store by a load to the same device obviates the need
for an mmiowb(), because the load forces the store to complete before the load
is performed:

        CPU 1                           CPU 2
        =============================== ===============================
        spin_lock(Q)
        writel(0, ADDR)
        a = readl(DATA);
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        b = readl(DATA);
                                        spin_unlock(Q);


See Documentation/DocBook/deviceiobook.tmpl for more information.


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices (I/O).

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

        struct rw_semaphore {
                ...
                spinlock_t lock;
                struct list_head waiters;
        };

        struct rwsem_waiter {
                struct list_head list;
                struct task_struct *task;
        };

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the next
     waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

        LOAD waiter->list.next;
        LOAD waiter->task;
        STORE waiter->task;
        CALL wakeup
        RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

        CPU 1                           CPU 2
        =============================== ===============================
                                        down_xxx()
                                        Queue waiter
                                        Sleep
        up_yyy()
        LOAD waiter->task;
        STORE waiter->task;
                                        Woken up by other event
        <preempt>
                                        Resume processing
                                        down_xxx() returns
                                        call foo()
                                        foo() clobbers *waiter
        </preempt>
        LOAD waiter->list.next;
        --- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

        LOAD waiter->list.next;
        LOAD waiter->task;
        smp_mb();
        STORE waiter->task;
        CALL wakeup
        RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

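In C, and borrowing the structure names above, the corrected wake-up path
might be sketched as follows; this is a hedged illustration, not the actual
rwsem implementation:

        struct task_struct *tsk;
        struct list_head *next;

        next = waiter->list.next;       /* must be read... */
        tsk = waiter->task;             /* ...before the waiter is released */
        smp_mb();
        waiter->task = NULL;            /* the waiter may now vanish */
        wake_up_process(tsk);
        put_task_struct(tsk);
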
On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

Any atomic operation that modifies some state in memory and returns information
about the state (old or new) implies an SMP-conditional general memory barrier
(smp_mb()) on each side of the actual operation.  These include:

        xchg();
        cmpxchg();
        atomic_cmpxchg();
        atomic_inc_return();
        atomic_dec_return();
        atomic_add_return();
        atomic_sub_return();
        atomic_inc_and_test();
        atomic_dec_and_test();
        atomic_sub_and_test();
        atomic_add_negative();
        atomic_add_unless();
        test_and_set_bit();
        test_and_clear_bit();
        test_and_change_bit();

These are used for such things as implementing LOCK-class and UNLOCK-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.


The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as UNLOCK-class
operations:

        atomic_set();
        set_bit();
        clear_bit();
        change_bit();

With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_clear_bit() for instance).


The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic_dec() for
instance):

        atomic_add();
        atomic_sub();
        atomic_inc();
        atomic_dec();

If they're used for statistics generation, then they probably don't need memory
barriers, unless there's a coupling between statistical data.

If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.

If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.

Basically, each usage case has to be carefully considered as to whether memory
barriers are needed or not.

[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
barriers, and so barrier instructions are superfluous in conjunction with them,
and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.

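By way of a hedged example: a release path built on atomic_dec_and_test()
needs no extra barrier precisely because of the implied smp_mb() on each side
of the operation (struct obj and its fields are hypothetical):

        void put_obj(struct obj *obj)
        {
                /* the implied barriers stop this CPU's earlier stores to
                 * *obj from leaking past the final reference drop */
                if (atomic_dec_and_test(&obj->ref_count))
                        kfree(obj);
        }
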

ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers locks should be used and mmiowb() must be
     issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering (see the sketch below).

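A hedged sketch of case (2), assuming a hypothetical descriptor ring that the
device reads via DMA and a doorbell register reached through a relaxed
(e.g. write-combining) window; all the names are invented:

        desc->addr = dma_addr;          /* fill in the descriptor... */
        desc->len  = len;
        wmb();                          /* ...and commit it before ringing
                                         * the doorbell; the mandatory form
                                         * is needed even on a UP kernel */
        writel(1, regs + DOORBELL);
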
See Documentation/DocBook/deviceiobook.tmpl for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

        LOCAL IRQ DISABLE
        writew(ADDR, 3);
        writew(DATA, y);
        LOCAL IRQ ENABLE
        <interrupt>
        writew(ADDR, 4);
        q = readw(DATA);
        </interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

        STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.


1421 | ========================== | |
1422 | KERNEL I/O BARRIER EFFECTS | |
1423 | ========================== | |
1424 | ||
1425 | When accessing I/O memory, drivers should use the appropriate accessor | |
1426 | functions: | |
1427 | ||
1428 | (*) inX(), outX(): | |
1429 | ||
1430 | These are intended to talk to I/O space rather than memory space, but | |
1431 | that's primarily a CPU-specific concept. The i386 and x86_64 processors do | |
1432 | indeed have special I/O space access cycles and instructions, but many | |
1433 | CPUs don't have such a concept. | |
1434 | ||
1435 | The PCI bus, amongst others, defines an I/O space concept - which on such | |
1436 | CPUs as i386 and x86_64 cpus readily maps to the CPU's concept of I/O | |
1437 | space. However, it may also mapped as a virtual I/O space in the CPU's | |
1438 | memory map, particularly on those CPUs that don't support alternate | |
1439 | I/O spaces. | |
1440 | ||
1441 | Accesses to this space may be fully synchronous (as on i386), but | |
1442 | intermediary bridges (such as the PCI host bridge) may not fully honour | |
1443 | that. | |
1444 | ||
1445 | They are guaranteed to be fully ordered with respect to each other. | |
1446 | ||
1447 | They are not guaranteed to be fully ordered with respect to other types of | |
1448 | memory and I/O operation. | |

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of the
     MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI.

     [*] NOTE! attempting to load from the same location as was written to may
         cause a malfunction - consider the 16550 Rx/Tx serial registers for
         example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on interactions
     between PCI transactions.

 (*) readX_relaxed()

     These are similar to readX(), but are not guaranteed to be ordered in any
     way.  Be aware that there is no I/O read barrier available.

 (*) ioreadX(), iowriteX()

     These will perform as appropriate for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().
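
As a hedged sketch of flushing a deferred (posted) write, assuming a
hypothetical device with a harmless STATUS register separate from any data
FIFO:

        writel(val, base + COMMAND);
        (void) readl(base + STATUS);    /* pulls the posted write through the
                                         * bridge; don't read the FIFO itself -
                                         * see the 16550 note above */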


========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that earlier
instruction must be sufficiently complete[*] before the later instruction may
proceed; in other words: provided that the appearance of causality is
maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.
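
As a hedged illustration: in the kernel, the compiler can be forbidden from
reordering or merging accesses across a point with the compiler barrier,
barrier(), though this constrains only the compiler, not the CPU:

        a = 1;
        barrier();      /* compiler may not move or merge accesses across this */
        b = 2;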


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected
to a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

            <--- CPU --->         :       <----------- Memory ----------->
                                  :
        +--------+    +--------+  :   +--------+    +-----------+
        |        |    |        |  :   |        |    |           |    +--------+
        |  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |--->| Memory |
        |        |    |        |  :   |        |    |           |    |        |
        +--------+    +--------+  :   +--------+    |           |    |        |
                                  :                 | Cache     |    +--------+
                                  :                 | Coherency |
                                  :                 | Mechanism |    +--------+
        +--------+    +--------+  :   +--------+    |           |    |        |
        |        |    |        |  :   |        |    |           |    |        |
        |  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |    |        |
        |        |    |        |  :   |        |    |           |    +--------+
        +--------+    +--------+  :   +--------+    +-----------+
                                  :
                                  :

Although any particular load or store may not actually appear outside of the
CPU that issued it, since it may have been satisfied within the CPU's own
cache, it will still appear as if the full memory access had taken place as
far as the other CPUs are concerned, since the cache coherency mechanisms will
migrate the cacheline over to the accessing CPU and propagate the effects upon
conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends on
the properties of the memory window through which devices are accessed and/or
the use of any special device communication instructions the CPU may have.


CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.


Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):

                    :
                    :                          +--------+
                    :      +---------+         |        |
        +--------+  : +--->| Cache A |<------->|        |
        |        |  : |    +---------+         |        |
        |  CPU 1 |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache B |<------->|        |
                    :      +---------+         |        |
                    :                          | Memory |
                    :      +---------+         | System |
        +--------+  : +--->| Cache C |<------->|        |
        |        |  : |    +---------+         |        |
        |  CPU 2 |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache D |<------->|        |
                    :      +---------+         |        |
                    :                          +--------+
                    :

Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that cache
     to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.

Imagine, then, that two writes are made on the first CPU, with a write barrier
between them to guarantee that they will appear to reach that CPU's caches in
the requisite order:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();                      Make sure change to v is visible before
                                         change to p
        <A:modify v=2>                  v is now in cache A exclusively
        p = &v;
        <B:modify p=&v>                 p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.  But
now imagine that the second CPU wants to read those values:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
        ...
                        q = p;
                        x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        x = *q;
                        <C:read *q>     Reads from v before v updated in cache
                        <C:unbusy>
                        <C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
no guarantee that, without intervention, the order of update will be the same
as that committed on CPU 1.


To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        smp_read_barrier_depends()
                        <C:unbusy>
                        <C:commit v=2>
                        x = *q;
                        <C:read *q>     Reads from v after v updated in cache


This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha removes the
need for coordination in the absence of memory barriers.
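
The pattern above can be written as a hedged C sketch (shared variables u, v,
p and q as in the example):

        /* CPU 1: publish v, then the pointer to it */
        v = 2;
        smp_wmb();                      /* commit v before p */
        p = &v;

        /* CPU 2: consume the pointer, then the data it points to */
        q = p;
        smp_read_barrier_depends();     /* order the dependent load on Alpha */
        x = *q;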


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in a CPU's cache may simply
obscure the fact that RAM has been updated, until the cacheline is discarded
from the CPU's cache and reloaded.  To deal with this, the appropriate part of
the kernel must invalidate the overlapping bits of the cache on each CPU.
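
A hedged sketch of how a driver usually delegates this: the streaming DMA
mapping API performs the flush or invalidate described above on non-coherent
systems (dev, buf and len are assumed):

        /* CPU -> device: dirty cachelines covering buf are written back */
        dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

        /* ... point the device at handle and run the transfer ... */

        dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);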

See Documentation/cachetlb.txt for more information on cache management.


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
those of the usual RAM-directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if a CPU is, for example,
given the following piece of code to execute:

        a = *A;
        *B = b;
        c = *C;
        d = *D;
        *E = e;

They would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

        LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

        LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

        (Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance, with the following code:

        U = *A;
        *A = V;
        *A = W;
        X = *A;
        *A = Y;
        Z = *A;

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

        U == the original value of *A
        X == W
        Z == Y
        *A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

        U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view of
the world remains consistent.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

        *A = V;
        *A = W;

may be reduced to:

        *A = W;

since, without a write barrier, it can be assumed that the effect of the
storage of V to *A is lost.  Similarly:

        *A = Y;
        Z = *A;

may, without a memory barrier, be reduced to:

        *A = Y;
        Z = Y;

and the LOAD operation never appears outside of the CPU.
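
As a hedged sketch: a compiler barrier - barrier() in the kernel - between the
two stores prevents the compiler (though not the CPU) from discarding the
first one:

        *A = V;
        barrier();      /* compiler may not elide or reorder across this */
        *A = W;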


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary as this synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
        Chapter 5.2: Physical Address Space Characteristics
        Chapter 5.4: Caches and Write Buffers
        Chapter 5.5: Data Sharing
        Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
        Chapter 7.1: Memory-Access Ordering
        Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
        Chapter 7.1: Locked Atomic Operations
        Chapter 7.2: Memory Ordering
        Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
        Chapter 8: Memory Models
        Appendix D: Formal Specification of the Memory Models
        Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
        Chapter 5: Memory Accesses and Cacheability
        Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
        Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
        Chapter 8: Memory Models

UltraSPARC Architecture 2005
        Chapter 9: Memory
        Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
        Chapter 8: Memory Models
        Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
        Chapter 3.3: Hardware Considerations for Locks and
                     Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
        Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
        Section 2.6: Speculation
        Section 4.4: Memory Access