= Transparent Hugepage Support =

== Objective ==

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative means of
using huge pages for the backing of virtual memory, one that supports
the automatic promotion and demotion of page sizes and that doesn't
have the shortcomings of hugetlbfs.
Currently it only works for anonymous memory mappings and tmpfs/shmem,
but in the future it can expand to other filesystems.

Applications run faster because of two factors. The first factor is
almost completely irrelevant and not of significant interest, because
it also has the downside of requiring larger clear-page and copy-page
operations in page faults, which is a potentially negative effect.
This first factor consists of taking a single page fault for each 2M
virtual region touched by userland (thus reducing the enter/exit
kernel frequency by a factor of 512). It only matters the first time
the memory is accessed over the lifetime of a memory mapping. The
second, long lasting and much more important factor affects all
subsequent accesses to the memory for the whole runtime of the
application. It consists of two components: 1) the TLB miss will run
faster (especially with virtualization using nested pagetables, but
almost always also on bare metal without virtualization) and 2) a
single TLB entry will map a much larger amount of virtual memory, in
turn reducing the number of TLB misses. With virtualization and nested
pagetables the TLB can map larger sizes only if both KVM and the Linux
guest are using hugepages, but a significant speedup already happens
if only one of the two is using hugepages, simply because the TLB miss
will run faster.

== Design ==

- "graceful fallback": mm components which don't have transparent hugepage
  knowledge fall back to breaking a huge pmd mapping into a table of ptes
  and, if necessary, splitting a transparent hugepage. Therefore these
  components can keep working on regular pages or regular pte mappings.

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated to hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to prevent unmovable pages from fragmenting all the memory, but such
  a tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

Transparent Hugepage Support maximizes the usefulness of free memory
compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or for other movable (or even
unmovable) entities. It doesn't require reservation to prevent
hugepage allocation failures from being noticeable from userland. It
allows paging and all other advanced VM features to be available on
the hugepages. It requires no modifications for applications to take
advantage of it.

Applications however can be further optimized to take advantage of
this feature, like for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by far not mandatory, and khugepaged can already take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.

In certain cases when hugepages are enabled system wide, an
application may end up allocating more memory resources. An
application may mmap a large region but only touch 1 byte of it; in
that case a 2M page might be allocated instead of a 4k page for no
good reason. This is why it's possible to disable hugepages
system-wide and to only have them inside MADV_HUGEPAGE madvise
regions.

Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting precious bytes of memory and to only
run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.

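For example, a minimal sketch of how an application might register a
performance critical region (the region size and error handling here
are illustrative assumptions, not taken from this document):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 256UL * 1024 * 1024;	/* illustrative size */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/* Ask the kernel to back this region with hugepages, even
	 * when "enabled" is set to "madvise" rather than "always". */
	if (madvise(p, len, MADV_HUGEPAGE))
		perror("madvise");

	/* ... use the region ... */
	return 0;
}
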
== sysfs ==

Transparent Hugepage Support for anonymous memory can be entirely disabled
(mostly for debugging purposes) or only enabled inside MADV_HUGEPAGE
regions (to avoid the risk of consuming more memory resources) or enabled
system wide. This can be achieved with one of:

echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled

It's also possible to limit defrag efforts in the VM to generating
anonymous hugepages, when they're not immediately available, only for
madvise regions, or to never try to defrag memory and simply fall back
to regular pages unless hugepages are immediately available. Clearly
if we spend CPU time to defrag memory, we would expect to gain even
more from the fact that hugepages will be used later instead of
regular pages. This isn't always guaranteed, but it is more likely
when the allocation is for a MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo defer >/sys/kernel/mm/transparent_hugepage/defrag
echo defer+madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag

"always" means that an application requesting THP will stall on allocation
failure and directly reclaim pages and compact memory in an effort to
allocate a THP immediately. This may be desirable for virtual machines
that benefit heavily from THP use and are willing to delay the VM start
to utilise them.

"defer" means that an application will wake kswapd in the background
to reclaim pages and wake kcompactd to compact memory so that THP is
available in the near future. It's the responsibility of khugepaged
to then install the THP pages later.

"defer+madvise" will enter direct reclaim and compaction like "always", but
only for regions that have used madvise(MADV_HUGEPAGE); all other regions
will wake kswapd in the background to reclaim pages and wake kcompactd to
compact memory so that THP is available in the near future.

"madvise" will enter direct reclaim like "always" but only for regions
that have used madvise(MADV_HUGEPAGE). This is the default behaviour.

"never" should be self-explanatory.

By default the kernel tries to use the huge zero page on read page faults
to anonymous mappings. It's possible to disable the huge zero page by
writing 0 or enable it back by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page

Some userspace (such as a test program, or an optimized memory allocation
library) may want to know the size (in bytes) of a transparent hugepage:

cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size

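A minimal sketch of how such a library might read this value at startup
(the helper name is hypothetical; it assumes the sysfs file is readable):

#include <stdio.h>

/* Returns the transparent hugepage size in bytes, or 0 on error. */
static long read_hpage_pmd_size(void)
{
	long size = 0;
	FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size",
			"r");

	if (!f)
		return 0;
	if (fscanf(f, "%ld", &size) != 1)
		size = 0;
	fclose(f);
	return size;
}
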
khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it'll
be automatically shut down if it's set to "never".

khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during page faults, it should be
worthwhile to invoke defrag at least in khugepaged. However it's also
possible to disable defrag in khugepaged by writing 0 or enable defrag
in khugepaged by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure, to throttle the next allocation attempt:

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

The khugepaged progress can be seen in the number of pages collapsed:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

and in the number of full passes it has completed:

/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans

max_ptes_none specifies how many extra small pages (that are
not already mapped) can be allocated when collapsing a group
of small pages into one large page.

/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none

A higher value leads to programs using additional memory. A lower
value leads to less THP performance gain. The amount of CPU time that
can be wasted because of max_ptes_none is very small, so it can be
ignored.

max_ptes_swap specifies how many pages can be brought in from
swap when collapsing a group of pages into a transparent huge page.

/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap

A higher value can cause excessive swap IO and waste
memory. A lower value can prevent THPs from being
collapsed, resulting in fewer pages being collapsed into
THPs, and lower memory access performance.

== Boot parameter ==

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always" or
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without "") to the kernel command line.

== Hugepages in tmpfs/shmem ==

You can control hugepage allocation policy in tmpfs with the mount
option "huge=". It can have the following values:

  - "always":
      Attempt to allocate huge pages every time we need a new page;

  - "never":
      Do not allocate huge pages;

  - "within_size":
      Only allocate huge page if it will be fully within i_size.
      Also respect fadvise()/madvise() hints;

  - "advise":
      Only allocate huge pages if requested with fadvise()/madvise();

The default policy is "never".

"mount -o remount,huge= /mountpoint" works fine after mount: remounting
huge=never will not attempt to break up huge pages at all, just stop more
from being allocated.

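As a hedged illustration, mounting such a tmpfs from a program could
look like the sketch below (the /mnt/ht mount point is a hypothetical
example, and the target directory must already exist):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* Equivalent to: mount -t tmpfs -o huge=always tmpfs /mnt/ht */
	if (mount("tmpfs", "/mnt/ht", "tmpfs", 0, "huge=always")) {
		perror("mount");
		return 1;
	}
	return 0;
}
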
There's also a sysfs knob to control hugepage allocation policy for the
internal shmem mount: /sys/kernel/mm/transparent_hugepage/shmem_enabled.
The mount is used for SysV SHM, memfds, shared anonymous mmaps (of
/dev/zero or MAP_ANONYMOUS), GPU drivers' DRM objects, Ashmem.

In addition to the policies listed above, shmem_enabled allows two further
values:

  - "deny":
      For use in emergencies, to force the huge option off from
      all mounts;
  - "force":
      Force the huge option on for all - very useful for testing;

== Need of application restart ==

The transparent_hugepage/enabled values and tmpfs mount option only affect
future behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to the
regions registered in khugepaged.

== Monitoring usage ==

The number of anonymous transparent huge pages currently used by the
system is available by reading the AnonHugePages field in /proc/meminfo.
To identify what applications are using anonymous transparent huge pages,
it is necessary to read /proc/PID/smaps and count the AnonHugePages fields
for each mapping.

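For instance, a minimal sketch of reading the system-wide counter (a
hypothetical standalone program, not part of the kernel tree):

#include <stdio.h>
#include <string.h>

/* Print the AnonHugePages line from /proc/meminfo. */
int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "AnonHugePages:", 14))
			fputs(line, stdout);
	fclose(f);
	return 0;
}
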
The number of file transparent huge pages mapped to userspace is available
by reading the ShmemPmdMapped and ShmemHugePages fields in /proc/meminfo.
To identify what applications are mapping file transparent huge pages, it
is necessary to read /proc/PID/smaps and count the FileHugeMapped fields
for each mapping.

Note that reading the smaps file is expensive and reading it
frequently will incur overhead.

There are a number of counters in /proc/vmstat that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc is incremented every time a huge page is successfully
	allocated to handle a page fault. This applies to both the
	first time a page is faulted and for COW faults.

thp_collapse_alloc is incremented by khugepaged when it has found
	a range of pages to collapse into one huge page and has
	successfully allocated a new huge page to store the data.

thp_fault_fallback is incremented if a page fault fails to allocate
	a huge page and instead falls back to using small pages.

thp_collapse_alloc_failed is incremented if khugepaged found a range
	of pages that should be collapsed into one huge page but failed
	the allocation.

thp_file_alloc is incremented every time a file huge page is successfully
	allocated.

thp_file_mapped is incremented every time a file huge page is mapped into
	user address space.

thp_split_page is incremented every time a huge page is split into base
	pages. This can happen for a variety of reasons but a common
	reason is that a huge page is old and is being reclaimed.
	This action implies splitting all PMDs the page is mapped with.

thp_split_page_failed is incremented if the kernel fails to split a huge
	page. This can happen if the page was pinned by somebody.

thp_deferred_split_page is incremented when a huge page is put onto split
	queue. This happens when a huge page is partially unmapped and
	splitting it would free up some memory. Pages on split queue are
	going to be split under memory pressure.

thp_split_pmd is incremented every time a PMD is split into a table of
	PTEs. This can happen, for instance, when an application calls
	mprotect() or munmap() on part of a huge page. It doesn't split
	the huge page, only the page table entry.

thp_zero_page_alloc is incremented every time a huge zero page is
	successfully allocated. It includes allocations which were
	dropped due to a race with another allocation. Note, it doesn't
	count every map of the huge zero page, only its allocation.

thp_zero_page_alloc_failed is incremented if the kernel fails to allocate
	a huge zero page and falls back to using small pages.

As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in /proc/vmstat to help
monitor this overhead.

compact_stall is incremented every time a process stalls to run
	memory compaction so that a huge page is free for use.

compact_success is incremented if the system compacted memory and
	freed a huge page for use.

compact_fail is incremented if the system tries to compact memory
	but fails.

compact_pages_moved is incremented each time a page is moved. If
	this value is increasing rapidly, it implies that the system
	is copying a lot of data to satisfy the huge page allocation.
	It is possible that the cost of copying exceeds any savings
	from reduced TLB misses.

compact_pagemigrate_failed is incremented when the underlying mechanism
	for moving a page failed.

compact_blocks_moved is incremented each time memory compaction examines
	a huge page aligned range of pages.

It is possible to establish how long the stalls were by using the
function tracer to record how long was spent in __alloc_pages_nodemask
and using the mm_page_alloc tracepoint to identify which allocations
were for huge pages.

== get_user_pages and follow_page ==

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most gup users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
is complete, so they won't ever notice that the page is huge. But if
any driver is going to mangle the page structure of the tail page
(like checking page->mapping or other bits that are relevant for the
head page and not the tail page), it should be updated to check the
head page instead. Taking a reference on any head/tail page would
prevent the page from being split by anyone.

NOTE: these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.

In case you can't handle compound pages returned by follow_page, the
FOLL_SPLIT bit can be specified as a parameter to follow_page, so that
it will split the hugepages before returning them. Migration, for
example, passes FOLL_SPLIT as a parameter to follow_page because it's
not hugepage aware and in fact it can't work at all on hugetlbfs (but
it works fine on transparent hugepages thanks to FOLL_SPLIT).
Migration simply can't deal with hugepages being returned (as it's not
only checking the pfn of the page and pinning it during the copy, but
it pretends to migrate the memory in regular page sizes and with
regular pte/pmd mappings).

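A hedged sketch of this pattern (kernel-internal code; it assumes the
caller holds mmap_sem, and the helper name is made up for illustration):

#include <linux/mm.h>

/* Look up the page at addr, asking for any hugepage to be split
 * first, so that only base pages are ever returned. */
static struct page *lookup_base_page(struct vm_area_struct *vma,
				     unsigned long addr)
{
	/* FOLL_GET pins the page; FOLL_SPLIT splits hugepages first. */
	return follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);
}
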
== Optimizing the applications ==

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.

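A minimal sketch combining alignment with madvise (the hardcoded 2M
size is an x86-specific assumption; a real program should read it from
hpage_pmd_size, and the helper name is illustrative):

#include <stdlib.h>
#include <sys/mman.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumed PMD size */

/* Allocate a hugepage-aligned buffer and advise the kernel to back
 * it with transparent hugepages; returns NULL on failure. */
static void *alloc_thp_buffer(size_t len)
{
	void *p;

	if (posix_memalign(&p, HPAGE_SIZE, len))
		return NULL;
	madvise(p, len, MADV_HUGEPAGE);	/* advisory, failure is fine */
	return p;
}
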
== Hugetlbfs ==

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.

== Graceful fallback ==

Code walking pagetables but unaware about huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swap out the hugepage, for example. split_huge_page() can
fail if the page is pinned and you must handle this correctly.

Example to make mremap.c transparent hugepage aware with a one liner
change:

diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
 		return NULL;
 
 	pmd = pmd_offset(pud, addr);
+	split_huge_pmd(vma, pmd, addr);
 	if (pmd_none_or_clear_bad(pmd))
 		return NULL;

== Locking in hugepage aware code ==

We want as much code as possible hugepage aware, as calling
split_huge_page() or split_huge_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
page table lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_pmd can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page table lock and fall back to the old code as
before. Otherwise you can proceed to process the huge pmd and the
hugepage natively. Once finished you can drop the page table lock.

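A hedged sketch of that pattern (the function name and the processing
callbacks are placeholders; the caller must hold mmap_sem as described
above):

#include <linux/mm.h>
#include <linux/huge_mm.h>

/* Walk one pmd in a huge-pmd-aware way. */
static void walk_one_pmd(struct vm_area_struct *vma, pmd_t *pmd,
			 unsigned long addr)
{
	if (pmd_trans_huge(*pmd)) {
		spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);

		/* Re-check under the lock: split_huge_pmd() may have
		 * converted the huge pmd to a regular pmd meanwhile. */
		if (pmd_trans_huge(*pmd)) {
			/* ... process the huge pmd natively ... */
			spin_unlock(ptl);
			return;
		}
		spin_unlock(ptl);
	}
	/* ... fall back to the regular pte code paths ... */
}
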
== Refcounts and transparent huge pages ==

Refcounting on THP is mostly consistent with refcounting on other compound
pages:

  - get_page()/put_page() and GUP operate on the head page's ->_refcount.

  - ->_refcount in tail pages is always zero: get_page_unless_zero() never
    succeeds on tail pages.

  - map/unmap of the pages with PTE entry increments/decrements ->_mapcount
    on the relevant sub-page of the compound page.

  - map/unmap of the whole compound page is accounted in compound_mapcount
    (stored in the first tail page). For file huge pages, we also increment
    ->_mapcount of all sub-pages in order to have race-free detection of
    the last unmap of subpages.

PageDoubleMap() indicates that the page is *possibly* mapped with PTEs.

For anonymous pages PageDoubleMap() also indicates ->_mapcount in all
subpages is offset up by one. This additional reference is required to
get race-free detection of unmap of subpages when we have them mapped with
both PMDs and PTEs.

This optimization is required to lower the overhead of per-subpage
mapcount tracking. The alternative is to alter ->_mapcount in all subpages
on each map/unmap of the whole compound page.

For anonymous pages, we set PG_double_map when a PMD of the page gets
split for the first time, but the page still has a PMD mapping. The
additional references go away with the last compound_mapcount.

File pages get PG_double_map set on the first map of the page with a PTE;
it goes away when the page gets evicted from the page cache.

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
entries. But we don't have enough information on how to distribute any
additional pins (i.e. from get_user_pages). split_huge_page() fails any
request to split a pinned huge page: it expects the page count to be equal
to the sum of the mapcount of all sub-pages plus one (the split_huge_page
caller must have a reference for the head page).

split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount of anonymous pages. File pages are just unmapped.

We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().

All tail pages have zero ->_refcount until atomic_add(). This prevents the
scanner from getting a reference to the tail page up to that point. After
the atomic_add() we don't care about the ->_refcount value. We already
know how many references should be uncharged from the head page.

For the head page get_page_unless_zero() will succeed and we don't mind.
It's clear where the reference should go after the split: it will stay on
the head page.

Note that split_huge_pmd() doesn't have any limitation on refcounting:
the pmd can be split at any point and the operation never fails.

== Partial unmap and deferred_split_huge_page() ==

Unmapping part of a THP (with munmap() or another way) is not going to
free the memory immediately. Instead, we detect that a subpage of the THP
is not in use in page_remove_rmap() and queue the THP for splitting if
memory pressure comes. Splitting will free up the unused subpages.

Splitting the page right away is not an option due to the locking context
in the place where we can detect partial unmap. It also might be
counterproductive since in many cases partial unmap happens during exit(2)
if a THP crosses a VMA boundary.

The function deferred_split_huge_page() is used to queue a page for
splitting. The splitting itself will happen when we get memory pressure
via the shrinker interface.