this_cpu operations
-------------------

this_cpu operations are a way of optimizing access to per cpu
variables associated with the *currently* executing processor. This is
done through the use of segment registers (or a dedicated register where
the cpu permanently stores the beginning of the per cpu area for a
specific processor).

this_cpu operations add a per cpu variable offset to the processor
specific per cpu base and encode that operation in the instruction
operating on the per cpu variable.

This means that there are no atomicity issues between the calculation of
the offset and the operation on the data. Therefore it is not
necessary to disable preemption or interrupts to ensure that the
processor is not changed between the calculation of the address and
the operation on the data.

Read-modify-write operations are of particular interest. Frequently
processors have special lower latency instructions that can operate
without the typical synchronization overhead, but still provide some
sort of relaxed atomicity guarantees. The x86, for example, can execute
RMW (Read Modify Write) instructions like inc/dec/cmpxchg without the
lock prefix and the associated latency penalty.

Access to the variable without the lock prefix is not synchronized but
synchronization is not necessary since we are dealing with per cpu
data specific to the currently executing processor. Only the current
processor should be accessing that variable and therefore there are no
concurrency issues with other processors in the system.

Please note that accesses by remote processors to a per cpu area are
exceptional situations and may impact performance and/or correctness
(remote write operations) of local RMW operations via this_cpu_*.

The main use of the this_cpu operations has been to optimize counter
operations.
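
A minimal sketch of such a counter follows. The variable nr_hits and
the function count_hit() are invented for this illustration and are not
part of any kernel API.

        DEFINE_PER_CPU(unsigned long, nr_hits);

        static void count_hit(void)
        {
                /* Increments the counter of whichever cpu we happen to be
                 * running on; safe without disabling preemption. */
                this_cpu_inc(nr_hits);
        }

Only the sum of nr_hits over all cpus is meaningful, as discussed
further below.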

The following this_cpu() operations with implied preemption protection
are defined. These operations can be used without worrying about
preemption and interrupts.

        this_cpu_read(pcp)
        this_cpu_write(pcp, val)
        this_cpu_add(pcp, val)
        this_cpu_and(pcp, val)
        this_cpu_or(pcp, val)
        this_cpu_add_return(pcp, val)
        this_cpu_xchg(pcp, nval)
        this_cpu_cmpxchg(pcp, oval, nval)
        this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
        this_cpu_sub(pcp, val)
        this_cpu_inc(pcp)
        this_cpu_dec(pcp)
        this_cpu_sub_return(pcp, val)
        this_cpu_inc_return(pcp)
        this_cpu_dec_return(pcp)
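
One way the xchg variant can be used is to manage a small per cpu
object cache. The sketch below is illustrative only; struct obj,
obj_cache and alloc_obj() are invented names, not kernel interfaces.

        DEFINE_PER_CPU(struct obj *, obj_cache);

        struct obj *o;

        /* Atomically take whatever object this cpu has cached, leaving
         * NULL behind. The exchange is a single this_cpu operation, so
         * no preemption protection is needed around it. */
        o = this_cpu_xchg(obj_cache, NULL);
        if (!o)
                o = alloc_obj();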


Inner working of this_cpu operations
------------------------------------

On x86 the fs: or the gs: segment registers contain the base of the
per cpu area. It is then possible to simply use the segment override
to relocate a per cpu relative address to the proper per cpu area for
the processor. So the relocation to the per cpu base is encoded in the
instruction via a segment register prefix.

For example:

        DEFINE_PER_CPU(int, x);
        int z;

        z = this_cpu_read(x);

results in a single instruction

        mov ax, gs:[x]

instead of a sequence of calculation of the address and then a fetch
from that address which occurs with the per cpu operations. Before
this_cpu_ops such a sequence also required preempt disable/enable to
prevent the kernel from moving the thread to a different processor
while the calculation is performed.

Consider the following this_cpu operation:

        this_cpu_inc(x)

The above results in the following single instruction (no lock prefix!)

        inc gs:[x]

instead of the following operations required if there is no segment
register:

        int *y;
        int cpu;

        cpu = get_cpu();
        y = per_cpu_ptr(&x, cpu);
        (*y)++;
        put_cpu();

Note that these operations can only be used on per cpu data that is
reserved for a specific processor. Without disabling preemption in the
surrounding code this_cpu_inc() will only guarantee that one of the
per cpu counters is correctly incremented. However, there is no
guarantee that the OS will not move the process directly before or
after the this_cpu instruction is executed. In general this means that
the values of the individual counters for each processor are
meaningless. The sum of all the per cpu counters is the only value
that is of interest.

Per cpu variables are used for performance reasons. Bouncing cache
lines can be avoided if multiple processors concurrently go through
the same code paths. Since each processor has its own per cpu
variables no concurrent cache line updates take place. The price that
has to be paid for this optimization is the need to add up the per cpu
counters when the value of a counter is needed.
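
A minimal sketch of such a summation, reusing the hypothetical nr_hits
counter from the earlier example:

        /* Fold all per cpu instances of nr_hits into a single total.
         * The reads are racy but good enough for statistics. */
        unsigned long total_hits(void)
        {
                unsigned long sum = 0;
                int cpu;

                for_each_possible_cpu(cpu)
                        sum += per_cpu(nr_hits, cpu);
                return sum;
        }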


Special operations:
-------------------

        y = this_cpu_ptr(&x)

Takes the offset of a per cpu variable (&x !) and returns the address
of the per cpu variable that belongs to the currently executing
processor. this_cpu_ptr avoids multiple steps that the common
get_cpu/put_cpu sequence requires. No processor number is
available. Instead, the offset of the local per cpu area is simply
added to the per cpu offset.

Note that this operation is usually used in a code segment when
preemption has been disabled. The pointer is then used to
access local per cpu data in a critical section. When preemption
is re-enabled this pointer is usually no longer useful since it may
no longer point to per cpu data of the current processor.
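
The typical shape of that pattern is sketched below; struct
worker_stats and account_work() are invented for this illustration.

        struct worker_stats {
                unsigned long handled;
        };
        DEFINE_PER_CPU(struct worker_stats, wstats);

        static void account_work(void)
        {
                struct worker_stats *ws;

                preempt_disable();
                ws = this_cpu_ptr(&wstats);
                ws->handled++;          /* plain access, preemption is off */
                preempt_enable();
                /* ws must not be used past this point: we may migrate. */
        }

The get_cpu_ptr()/put_cpu_ptr() helpers combine the preemption toggling
with the pointer calculation.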


Per cpu variables and offsets
-----------------------------

Per cpu variables have *offsets* to the beginning of the per cpu
area. They do not have addresses although they look like that in the
code. Offsets cannot be directly dereferenced. The offset must be
added to a base pointer of a per cpu area of a processor in order to
form a valid address.

Therefore the use of x or &x outside of the context of per cpu
operations is invalid and will generally be treated like a NULL
pointer dereference.

        DEFINE_PER_CPU(int, x);

In the context of per cpu operations the above implies that x is a per
cpu variable. Most this_cpu operations take a cpu variable.

        int __percpu *p = &x;

&x and hence p is the *offset* of a per cpu variable. this_cpu_ptr()
takes the offset of a per cpu variable which makes this look a bit
strange.
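
To make the distinction concrete, a short sketch (v is a local variable
introduced only for this example):

        int v;

        v = this_cpu_read(*p);  /* ok: the operation adds the per cpu base */
        v = *this_cpu_ptr(p);   /* ok: convert the offset to an address first */
        /* v = *p; */           /* wrong: p is an offset, not an address */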


Operations on a field of a per cpu structure
--------------------------------------------

Let's say we have a percpu structure

        struct s {
                int n,m;
        };

        DEFINE_PER_CPU(struct s, p);


Operations on these fields are straightforward

        this_cpu_inc(p.m)

        z = this_cpu_cmpxchg(p.m, 0, 1);


If we have an offset to struct s:

        struct s __percpu *ps = &p;

        this_cpu_dec(ps->m);

        z = this_cpu_inc_return(ps->n);


The calculation of the pointer may require the use of this_cpu_ptr()
if we do not make use of this_cpu ops later to manipulate fields:

        struct s *pp;

        pp = this_cpu_ptr(&p);

        pp->m--;

        z = pp->n++;

Variants of this_cpu ops
------------------------

this_cpu ops are interrupt safe. Some architectures do not support
these per cpu local operations. In that case the operation must be
replaced by code that disables interrupts, then does the operations
that are guaranteed to be atomic and then re-enables interrupts. Doing
so is expensive. If there are other reasons why the scheduler cannot
change the processor we are executing on then there is no reason to
disable interrupts. For that purpose the following __this_cpu operations
are provided.

These operations have no guarantee against concurrent interrupts or
preemption. If a per cpu variable is not used in an interrupt context
and the scheduler cannot preempt, then they are safe. If any interrupts
still occur while an operation is in progress and if the interrupt too
modifies the variable, then RMW actions can not be guaranteed to be
safe.

        __this_cpu_read(pcp)
        __this_cpu_write(pcp, val)
        __this_cpu_add(pcp, val)
        __this_cpu_and(pcp, val)
        __this_cpu_or(pcp, val)
        __this_cpu_add_return(pcp, val)
        __this_cpu_xchg(pcp, nval)
        __this_cpu_cmpxchg(pcp, oval, nval)
        __this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
        __this_cpu_sub(pcp, val)
        __this_cpu_inc(pcp)
        __this_cpu_dec(pcp)
        __this_cpu_sub_return(pcp, val)
        __this_cpu_inc_return(pcp)
        __this_cpu_dec_return(pcp)


        __this_cpu_inc(x)

will increment x and will not fall back to code that disables
interrupts on platforms that cannot accomplish atomicity through
address relocation and a Read-Modify-Write operation in the same
instruction.
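
For example, in a region that already runs with interrupts disabled the
cheaper variant is sufficient. The counter name nr_irq_events and the
function note_irq_event() below are invented for this sketch.

        DEFINE_PER_CPU(unsigned long, nr_irq_events);

        static void note_irq_event(void)
        {
                unsigned long flags;

                local_irq_save(flags);
                /* Interrupts are off, so neither an interrupt handler nor
                 * the scheduler can interleave with this RMW operation. */
                __this_cpu_inc(nr_irq_events);
                local_irq_restore(flags);
        }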


&this_cpu_ptr(pp)->n vs this_cpu_ptr(&pp->n)
--------------------------------------------

The first operation takes the offset and forms an address and then
adds the offset of the n field. This may result in two add
instructions emitted by the compiler.

The second one first adds the two offsets and then does the
relocation. IMHO the second form looks cleaner and has an easier time
with (). The second form also is consistent with the way
this_cpu_read() and friends are used.
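
Applied to the struct s example from above (ps is the __percpu pointer
declared there, np is a local introduced here), the two forms look like:

        int *np;

        np = &this_cpu_ptr(ps)->n;      /* relocate ps, then add the offset of n */
        np = this_cpu_ptr(&ps->n);      /* add both offsets, then relocate */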


Remote access to per cpu data
-----------------------------

Per cpu data structures are designed to be used by one cpu exclusively.
If you use the variables as intended, this_cpu_ops() are guaranteed to
be "atomic" as no other CPU has access to these data structures.

There are special cases where you might need to access per cpu data
structures remotely. It is usually safe to do a remote read access
and that is frequently done to summarize counters. Remote write access
is something which could be problematic because this_cpu ops do not
have lock semantics. A remote write may interfere with a this_cpu
RMW operation.

Remote write accesses to percpu data structures are highly discouraged
unless absolutely necessary. Please consider using an IPI to wake up
the remote CPU and perform the update to its per cpu area.
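
One possible shape of such an IPI based update, sketched with the
hypothetical nr_hits counter from earlier; smp_call_function_single()
runs the callback on the target cpu, where this_cpu ops are local again:

        static void reset_hits(void *unused)
        {
                /* Runs on the target cpu, so this is a local update. */
                this_cpu_write(nr_hits, 0);
        }

        /* Ask cpu 'cpu' to reset its own counter instead of writing to
         * its per cpu area from here. */
        smp_call_function_single(cpu, reset_hits, NULL, 1);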

To access per-cpu data structures remotely, typically the per_cpu_ptr()
function is used:


        DEFINE_PER_CPU(struct data, datap);

        struct data *p = per_cpu_ptr(&datap, cpu);

This makes it explicit that we are getting ready to access a percpu
area remotely.

You can also do the following to convert the datap offset to an address

        struct data *p = this_cpu_ptr(&datap);

but passing pointers calculated via this_cpu_ptr() to other cpus is
unusual and should be avoided.

Remote accesses are typically only for reading the status of another
cpu's per cpu data. Write accesses can cause unique problems due to the
relaxed synchronization requirements for this_cpu operations.

One example that illustrates some concerns with write operations is
the following scenario that occurs because two per cpu variables
share a cache-line but the relaxed synchronization is applied to
only one process updating the cache-line.

Consider the following example


        struct test {
                atomic_t a;
                int b;
        };

        DEFINE_PER_CPU(struct test, onecacheline);

There is some concern about what would happen if the field 'a' is updated
remotely from one processor and the local processor would use this_cpu ops
to update field b. Care should be taken that such simultaneous accesses to
data within the same cache line are avoided. Also costly synchronization
may be necessary. IPIs are generally recommended in such scenarios instead
of a remote write to the per cpu area of another processor.

Even in cases where the remote writes are rare, please bear in
mind that a remote write will evict the cache line from the processor
that most likely will access it. If the processor wakes up and finds a
missing local cache line of a per cpu area, its performance and hence
the wake up times will be affected.

Christoph Lameter, August 4th, 2014
Pranith Kumar, Aug 2nd, 2014