			Cache and TLB Flushing
			     Under Linux

		David S. Miller <davem@redhat.com>

This document describes the cache/TLB flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface, describes
its intended purpose, and the side effects expected after the
interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor.  The
SMP cases are a simple extension: just extend the definition such
that the side effect for a particular interface occurs on all
processors in the system.  Don't let this scare you into thinking
SMP cache/TLB flushing must be inefficient; this is in fact an
area where many optimizations are possible.  For example, if it
can be proven that a user address space has never executed on a
cpu (see vma->cpu_vm_mask), one need not perform a flush for this
address space on that cpu.

First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it
is possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) void flush_tlb_all(void)

	The most severe flush of all.  After this interface runs,
	any previous page table modification whatsoever will be
	visible to the cpu.

	This is usually invoked when the kernel page tables are
	changed, since such translations are "global" in nature.

2) void flush_tlb_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the TLB.  After running, this interface must make sure that
	any previous page table modifications for the address space
	'mm' will be visible to the cpu.  That is, after running,
	there will be no entries in the TLB for 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	fork and exec.  One common optimization is sketched below.

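	On cpus that tag TLB entries with an address space ID, a port
	often need not invalidate anything at all: allocating a fresh
	context makes the stale entries unreachable.  This is only a
	sketch; new_mmu_context() is a hypothetical helper, not a
	generic kernel interface.

		/* Sketch: "flush" an entire address space by giving
		 * it a new ASID/context.  Old TLB entries are tagged
		 * with the stale context and can never match again.
		 * A real port would also arrange for a new context
		 * to be allocated the next time 'mm' is switched in.
		 */
		static inline void flush_tlb_mm(struct mm_struct *mm)
		{
			if (mm == current->active_mm)
				new_mmu_context(mm);	/* hypothetical */
		}
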
3) void flush_tlb_range(struct vm_area_struct *vma,
			unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	address translations from the TLB.  After running, this
	interface must make sure that any previous page table
	modifications for the address space 'vma->vm_mm' in the range
	'start' to 'end-1' will be visible to the cpu.  That is, after
	running, there will be no entries in the TLB for 'vma->vm_mm'
	for virtual addresses in the range 'start' to 'end-1'.

	The "vma" is the backing store being used for the region.
	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized translations from the TLB, instead of having the kernel
	call flush_tlb_page (see below) for each entry which may be
	modified.  A simple fallback is sketched below.

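	For a port with no special range-flush instruction, one
	reasonable fallback is simply a loop.  This is a sketch of the
	trivial case, not any particular architecture's code:

		/* Sketch of a generic fallback: demap one page at a
		 * time.  A port whose hardware has a single range-
		 * or context-flush operation would use that instead.
		 */
		static inline void flush_tlb_range(struct vm_area_struct *vma,
						   unsigned long start,
						   unsigned long end)
		{
			unsigned long addr;

			for (addr = start & PAGE_MASK; addr < end;
			     addr += PAGE_SIZE)
				flush_tlb_page(vma, addr);
		}
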
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)

	This time we need to remove the PAGE_SIZE sized translation
	from the TLB.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction TLB' in
	split-tlb type setups).

	After running, this interface must make sure that any previous
	page table modification for address space 'vma->vm_mm' for
	user virtual address 'addr' will be visible to the cpu.  That
	is, after running, there will be no entries in the TLB for
	'vma->vm_mm' for virtual address 'addr'.

	This is used primarily during fault processing.  A sketch for
	a split-TLB cpu follows.

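	As an illustration of the VM_EXEC test, a cpu with split
	instruction/data TLBs might do something like the sketch
	below.  The demap primitives are hypothetical stand-ins for
	whatever demap instructions the cpu actually provides:

		static inline void flush_tlb_page(struct vm_area_struct *vma,
						  unsigned long addr)
		{
			/* dtlb_demap_page()/itlb_demap_page() are
			 * hypothetical wrappers around cpu-specific
			 * demap instructions.
			 */
			dtlb_demap_page(vma->vm_mm, addr & PAGE_MASK);
			if (vma->vm_flags & VM_EXEC)
				itlb_demap_page(vma->vm_mm, addr & PAGE_MASK);
		}
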
5) void flush_tlb_pgtables(struct mm_struct *mm,
			   unsigned long start, unsigned long end)

	The software page tables for address space 'mm' for virtual
	addresses in the range 'start' to 'end-1' are being torn down.

	Some platforms cache the lowest level of the software page tables
	in a linear virtually mapped array, to make TLB miss processing
	more efficient.  On such platforms, since the TLB is caching the
	software page table structure, it needs to be flushed when parts
	of the software page table tree are unlinked/freed.

	Sparc64 is one example of a platform which does this.

	Usually, when munmap()'ing an area of user virtual address
	space, the kernel leaves the page table parts around and just
	marks the individual pte's as invalid.  However, if very large
	portions of the address space are unmapped, the kernel frees up
	those portions of the software page tables to prevent potential
	excessive kernel memory usage caused by erratic mmap/munmap
	sequences.  It is at these times that flush_tlb_pgtables will
	be invoked.

6) void update_mmu_cache(struct vm_area_struct *vma,
			 unsigned long address, pte_t pte)

	At the end of every page fault, this routine is invoked to
	tell the architecture specific code that a translation
	described by "pte" now exists at virtual address "address"
	for address space "vma->vm_mm", in the software page tables.

	A port may use this information in any way it so chooses.
	For example, it could use this event to pre-load TLB
	translations for software managed TLB configurations.
	The sparc64 port currently does this.  A pre-load hook is
	sketched below.

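	For a software-managed TLB, such a pre-load might look
	roughly like this; tlb_preload_entry() is a hypothetical
	cpu-specific primitive that installs a translation directly
	into the TLB:

		void update_mmu_cache(struct vm_area_struct *vma,
				      unsigned long address, pte_t pte)
		{
			/* Only pre-load valid translations. */
			if (pte_present(pte))
				tlb_preload_entry(vma->vm_mm,
						  address & PAGE_MASK, pte);
		}
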
7) void tlb_migrate_finish(struct mm_struct *mm)

	This interface is called at the end of an explicit
	process migration.  This interface provides a hook
	to allow a platform to update TLB or context-specific
	information for the address space.

	The ia64 sn2 platform is one example of a platform
	that uses this interface.

8) void lazy_mmu_prot_update(pte_t pte)

	This interface is called whenever the protection on
	any user PTE changes.  This interface provides a notification
	to architecture specific code to take appropriate action.


Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:

	1) flush_cache_mm(mm);
	   change_all_page_tables_of(mm);
	   flush_tlb_mm(mm);

	2) flush_cache_range(vma, start, end);
	   change_range_of_page_tables(mm, start, end);
	   flush_tlb_range(vma, start, end);

	3) flush_cache_page(vma, addr, pfn);
	   set_pte(pte_pointer, new_pte_val);
	   flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed, physically tagged caches of IA32 processors have no need to
implement these interfaces, since the caches are fully synchronized
and have no dependency on translation information.

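On such a cpu the entire family can collapse to nops.  A sketch of
what such a port's asm/cacheflush.h might then contain, following
the pattern the IA32 port uses (names as per this document):

	/* Physically indexed, physically tagged caches: nothing
	 * to do when virtual-->physical mappings change.
	 */
	#define flush_cache_mm(mm)			do { } while (0)
	#define flush_cache_range(vma, start, end)	do { } while (0)
	#define flush_cache_page(vma, addr, pfn)	do { } while (0)
	#define flush_cache_vmap(start, end)		do { } while (0)
	#define flush_cache_vunmap(start, end)		do { } while (0)
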
Here are the routines, one by one:

1) void flush_cache_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the caches.  That is, after running, there will be no cache
	lines associated with 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	fork, exit, and exec.

2) void flush_cache_range(struct vm_area_struct *vma,
			  unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	addresses from the cache.  After running, there will be no
	entries in the cache for 'vma->vm_mm' for virtual addresses in
	the range 'start' to 'end-1'.

	The "vma" is the backing store being used for the region.
	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized regions from the cache, instead of having the kernel
	call flush_cache_page (see below) for each entry which may be
	modified.

3) void flush_cache_page(struct vm_area_struct *vma,
			 unsigned long addr, unsigned long pfn)

	This time we need to remove a PAGE_SIZE sized range
	from the cache.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction cache' in
	"Harvard" type cache layouts).

	The 'pfn' indicates the physical page frame (shift this value
	left by PAGE_SHIFT to get the physical address) that 'addr'
	translates to.  It is this mapping which should be removed from
	the cache.

	After running, there will be no entries in the cache for
	'vma->vm_mm' for virtual address 'addr' which translates
	to 'pfn'.

	This is used primarily during fault processing.

4) void flush_cache_kmaps(void)

	This routine need only be implemented if the platform utilizes
	highmem.  It will be called right before all of the kmaps
	are invalidated.

	After running, there will be no entries in the cache for
	the kernel virtual address range PKMAP_ADDR(0) to
	PKMAP_ADDR(LAST_PKMAP).

	This routine should be implemented in asm/highmem.h.  A
	trivial version is sketched below.

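	A port that already has a flush-everything primitive can
	simply reuse it; this sketch assumes the port defines
	flush_cache_all(), which is common but not guaranteed:

		/* asm/highmem.h (sketch) */
		static inline void flush_cache_kmaps(void)
		{
			/* Over-flushing, but always correct: the
			 * kmap range is part of the kernel address
			 * space.
			 */
			flush_cache_all();	/* assumed port primitive */
		}
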
5) void flush_cache_vmap(unsigned long start, unsigned long end)
   void flush_cache_vunmap(unsigned long start, unsigned long end)

	Here in these two interfaces we are flushing a specific range
	of (kernel) virtual addresses from the cache.  After running,
	there will be no entries in the cache for the kernel address
	space for virtual addresses in the range 'start' to 'end-1'.

	The first of these two routines is invoked after map_vm_area()
	has installed the page table entries.  The second is invoked
	before unmap_vm_area() deletes the page table entries.

There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

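For example, a port with a (hypothetical) 16KB virtually indexed
D-cache might put the following in asm/shmparam.h:

	/* Assumed example: 16KB virtually indexed D-cache, so
	 * shared mappings must be attached at 16KB multiples to
	 * keep all user mappings of a page the same cache color.
	 */
	#define SHMLBA	0x4000
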
NOTE: This does not fix shared mmaps; check out the sparc64 port for
one way to solve this (in particular SPARC_FLAG_MMAPSHARED).

Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping: that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist, since the kernel already
maps this page at its virtual address.

  void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)
  void clear_user_page(void *to, unsigned long addr, struct page *page)

	These two routines store data in user anonymous or COW
	pages.  They allow a port to efficiently avoid D-cache alias
	issues between userspace and the kernel.

	For example, a port may temporarily map 'from' and 'to' to
	kernel virtual addresses during the copy.  The virtual address
	for these two pages is chosen in such a way that the kernel
	load/store instructions happen at virtual addresses which are
	of the same "color" as the user mapping of the page.  Sparc64,
	for example, uses this technique.

	The 'addr' parameter tells the virtual address where the
	user will ultimately have this page mapped, and the 'page'
	parameter gives a pointer to the struct page of the target.

	If D-cache aliasing is not an issue, these two routines may
	simply call memcpy/memset directly and do nothing more.

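	In the no-aliasing case that means definitions as trivial as
	the following sketch:

		/* Trivial versions for a cpu with no D-cache
		 * aliasing: the kernel mapping is already coherent
		 * with any user mapping, so a plain copy/clear
		 * through 'to'/'from' suffices.
		 */
		#define copy_user_page(to, from, addr, page) \
			memcpy(to, from, PAGE_SIZE)
		#define clear_user_page(to, addr, page) \
			memset(to, 0, PAGE_SIZE)
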
  void flush_dcache_page(struct page *page)

	Any time the kernel writes to a page cache page, _OR_
	the kernel is about to read from a page cache page and
	user space shared/writable mappings of this page potentially
	exist, this routine is called.

	NOTE: This routine need only be called for page cache pages
	      which can potentially ever be mapped into the address
	      space of a user process.  So for example, VFS layer code
	      handling vfs symlinks in the page cache need not call
	      this interface at all.

	The phrase "kernel writes to a page cache page" means,
	specifically, that the kernel executes store instructions
	that dirty data in that page at the page->virtual mapping
	of that page.  It is important to flush here to handle
	D-cache aliasing, to make sure these kernel stores are
	visible to user space mappings of that page.

	The corollary case is just as important: if there are users
	which have shared+writable mappings of this file, we must make
	sure that kernel reads of these pages will see the most recent
	stores done by the user.

	If D-cache aliasing is not an issue, this routine may
	simply be defined as a nop on that architecture.

	There is a bit set aside in page->flags (PG_arch_1) as
	"architecture private".  The kernel guarantees that,
	for pagecache pages, it will clear this bit when such
	a page first enters the pagecache.

	This allows these interfaces to be implemented much more
	efficiently.  It allows one to "defer" (perhaps indefinitely)
	the actual flush if there are currently no user processes
	mapping this page.  See sparc64's flush_dcache_page and
	update_mmu_cache implementations for an example of how to go
	about doing this.

	The idea is, first at flush_dcache_page() time, if
	page->mapping->i_mmap is an empty tree and ->i_mmap_nonlinear
	an empty list, just mark the architecture private page flag bit.
	Later, in update_mmu_cache(), a check is made of this flag bit,
	and if set the flush is done and the flag bit is cleared.

	IMPORTANT NOTE: It is often important, if you defer the flush,
			that the actual flush occurs on the same CPU
			as the one that did the stores making the page
			dirty.  Again, see sparc64 for examples of how
			to deal with this.  A combined sketch follows.

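	Put together, a deferred implementation might look roughly
	like the sketch below.  mapping_mapped() performs the
	empty-tree/empty-list test described above, and
	__flush_dcache_page_all() is a hypothetical stand-in for the
	cpu-specific flush:

		void flush_dcache_page(struct page *page)
		{
			struct address_space *mapping = page_mapping(page);

			/* No user mappings yet: just record that the
			 * kernel copy is dirty and defer the flush.
			 */
			if (mapping && !mapping_mapped(mapping)) {
				set_bit(PG_arch_1, &page->flags);
				return;
			}
			__flush_dcache_page_all(page);	/* hypothetical */
		}

		void update_mmu_cache(struct vm_area_struct *vma,
				      unsigned long address, pte_t pte)
		{
			struct page *page;

			if (!pte_present(pte))
				return;
			page = pte_page(pte);

			/* A flush was deferred; do it now that a user
			 * mapping is being established.
			 */
			if (test_and_clear_bit(PG_arch_1, &page->flags))
				__flush_dcache_page_all(page);	/* hypothetical */
		}
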
  void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
			 unsigned long user_vaddr,
			 void *dst, void *src, int len)
  void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
			   unsigned long user_vaddr,
			   void *dst, void *src, int len)

	When the kernel needs to copy arbitrary data in and out
	of arbitrary user pages (e.g. for ptrace()) it will use
	these two routines.

	Any necessary cache flushing or other coherency operations
	that need to occur should happen here.  If the processor's
	instruction cache does not snoop cpu stores, it is very
	likely that you will need to flush the instruction cache
	for copy_to_user_page().  A minimal sketch follows.

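	For such a non-snooping processor, a minimal generic form
	might be the following sketch (it leans on flush_icache_range(),
	described below, and is not the required implementation):

		static inline void
		copy_to_user_page(struct vm_area_struct *vma,
				  struct page *page,
				  unsigned long user_vaddr,
				  void *dst, void *src, int len)
		{
			memcpy(dst, src, len);
			/* New instructions may have been written,
			 * e.g. ptrace inserting a breakpoint: make
			 * the I-cache coherent with the stores above.
			 */
			if (vma->vm_flags & VM_EXEC)
				flush_icache_range((unsigned long)dst,
						   (unsigned long)dst + len);
		}

		static inline void
		copy_from_user_page(struct vm_area_struct *vma,
				    struct page *page,
				    unsigned long user_vaddr,
				    void *dst, void *src, int len)
		{
			memcpy(dst, src, len);	/* reads need no I-cache work */
		}
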
  void flush_icache_range(unsigned long start, unsigned long end)

	When the kernel stores into addresses that it will execute
	out of (e.g. when loading modules), this function is called.

	If the icache does not snoop stores then this routine will need
	to flush it.

  void flush_icache_page(struct vm_area_struct *vma, struct page *page)

	All the functionality of flush_icache_page can be implemented in
	flush_dcache_page and update_mmu_cache.  In 2.7 the hope is to
	remove this interface completely.