= Transparent Hugepage Support =

== Objective ==

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative means of
using huge pages for the backing of virtual memory, one that supports
the automatic promotion and demotion of page sizes and avoids the
shortcomings of hugetlbfs.

Currently it only works for anonymous memory mappings but in the
future it can expand over the pagecache layer starting with tmpfs.

The reason applications run faster is because of two factors. The
first factor is almost completely irrelevant and not of significant
interest because it also has the downside of requiring larger
clear-page and copy-page operations in page faults, which is a
potentially negative effect. The first factor consists in taking a
single page fault for each 2M virtual region touched by userland (so
reducing the enter/exit kernel frequency by a factor of 512). This
only matters the first time the memory is accessed for the lifetime of
a memory mapping. The second, long lasting and much more important
factor affects all subsequent accesses to the memory for the whole
runtime of the application. The second factor consists of two
components: 1) the TLB miss will run faster (especially with
virtualization using nested pagetables but almost always also on bare
metal without virtualization) and 2) a single TLB entry will be
mapping a much larger amount of virtual memory, in turn reducing the
number of TLB misses. With virtualization and nested pagetables the
TLB can hold entries of larger size only if both KVM and the Linux
guest are using hugepages, but a significant speedup already happens
if only one of the two is using hugepages, just because the TLB miss
is going to run faster.

== Design ==

- "graceful fallback": mm components which don't have transparent
  hugepage knowledge fall back to breaking a transparent hugepage and
  working on the regular pages and their respective regular pmd/pte
  mappings

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated to hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to prevent unmovable pages from fragmenting all the memory, but such
  a tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

- this initial support only offers the feature in anonymous memory
  regions but it'd be ideal to move it to tmpfs and the pagecache
  later

Transparent Hugepage Support maximizes the usefulness of free memory
compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or as other movable (or even
unmovable) entities. It doesn't require reservation to prevent
hugepage allocation failures from being noticeable from userland. It
allows paging and all other advanced VM features to be available on
the hugepages. It requires no modifications for applications to take
advantage of it.

Applications however can be further optimized to take advantage of
this feature, like for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by far not mandatory and khugepaged already can take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.

In certain cases when hugepages are enabled system wide, an
application may end up allocating more memory resources. An
application may mmap a large region but only touch 1 byte of it; in
that case a 2M page might be allocated instead of a 4k page for no
good reason. This is why it's possible to disable hugepages
system-wide and to only have them inside MADV_HUGEPAGE madvise
regions.

Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting any precious byte of memory and to
only run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.

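As a minimal sketch, the madvise(MADV_HUGEPAGE) pattern might look
like the following (the mapping size and the 2M hugepage size are
illustrative assumptions for x86, not part of any interface described
here):

#include <sys/mman.h>
#include <stddef.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)	/* assuming 2M hugepages */

static void *alloc_thp_candidate(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return NULL;
	/* mark the region as a hugepage candidate; this is effective
	   even when "enabled" is set to "madvise" */
	madvise(p, len, MADV_HUGEPAGE);
	return p;
}
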
== sysfs ==

Transparent Hugepage Support can be entirely disabled (mostly for
debugging purposes), enabled only inside MADV_HUGEPAGE regions (to
avoid the risk of consuming more memory resources) or enabled system
wide. This can be achieved with one of:

echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled

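The current setting can be read back from the same file; the selected
value is shown in brackets, e.g.:

cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
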
It's also possible to limit defrag efforts in the VM so that hugepages
are generated only for madvise regions in case they're not immediately
free, or to never try to defrag memory and simply fall back to regular
pages unless hugepages are immediately available. Clearly if we spend
CPU time to defrag memory, we would expect to gain even more by the
fact we use hugepages later instead of regular pages. This isn't
always guaranteed, but it may be more likely in case the allocation is
for a MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag

khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it
will be automatically shut down if it's set to "never".

khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during page faults, it should
be worth invoking defrag at least in khugepaged. However it's also
possible to disable defrag in khugepaged by writing 0, or to enable
defrag in khugepaged by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure, to throttle the next allocation attempt:

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

The khugepaged progress can be seen in the number of pages collapsed:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

and in the number of full scan passes completed:

/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans

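For example, to make khugepaged scan more pages per pass and sleep
less between passes, one might write (the values below are purely
illustrative, not recommendations):

echo 4096 >/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
echo 1000 >/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
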
== Boot parameter ==

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always" or
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without "") to the kernel command line.

== Need of application restart ==

The transparent_hugepage/enabled values only affect future
behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to
the regions registered in khugepaged.

== Monitoring usage ==

The number of transparent huge pages currently used by the system is
available by reading the AnonHugePages field in /proc/meminfo. To
identify what applications are using transparent huge pages, it is
necessary to read /proc/PID/smaps and count the AnonHugePages fields
for each mapping. Note that reading the smaps file is expensive and
reading it frequently will incur overhead.

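For example, to total the AnonHugePages usage of a single process one
could sum those fields with something like the following (the awk
invocation is illustrative, not part of any kernel interface):

grep AnonHugePages /proc/PID/smaps | awk '{sum += $2} END {print sum " kB"}'
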
There are a number of counters in /proc/vmstat that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc is incremented every time a huge page is successfully
	allocated to handle a page fault. This applies to both the
	first time a page is faulted and for COW faults.

thp_collapse_alloc is incremented by khugepaged when it has found
	a range of pages to collapse into one huge page and has
	successfully allocated a new huge page to store the data.

thp_fault_fallback is incremented if a page fault fails to allocate
	a huge page and instead falls back to using small pages.

thp_collapse_alloc_failed is incremented if khugepaged found a range
	of pages that should be collapsed into one huge page but failed
	the allocation.

thp_split is incremented every time a huge page is split into base
	pages. This can happen for a variety of reasons but a common
	reason is that a huge page is old and is being reclaimed.

thp_zero_page_alloc is incremented every time a huge zero page is
	successfully allocated. It includes allocations which were
	dropped due to a race with other allocations. Note, it doesn't
	count every map of the huge zero page, only its allocation.

thp_zero_page_alloc_failed is incremented if the kernel fails to
	allocate a huge zero page and falls back to using small pages.

As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in /proc/vmstat to help
monitor this overhead.

compact_stall is incremented every time a process stalls to run
	memory compaction so that a huge page is free for use.

compact_success is incremented if the system compacted memory and
	freed a huge page for use.

compact_fail is incremented if the system tries to compact memory
	but fails.

compact_pages_moved is incremented each time a page is moved. If
	this value is increasing rapidly, it implies that the system
	is copying a lot of data to satisfy the huge page allocation.
	It is possible that the cost of copying exceeds any savings
	from reduced TLB misses.

compact_pagemigrate_failed is incremented when the underlying mechanism
	for moving a page failed.

compact_blocks_moved is incremented each time memory compaction examines
	a huge page aligned range of pages.

It is possible to establish how long the stalls were using the function
tracer to record how long was spent in __alloc_pages_nodemask and
using the mm_page_alloc tracepoint to identify which allocations were
for huge pages.

== get_user_pages and follow_page ==

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most gup users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
is complete, so they won't ever notice the fact the page is huge. But
if any driver is going to mangle over the page structure of the tail
page (like for checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead (while serializing properly against
split_huge_page() to prevent the head and tail pages from disappearing
from under it; see the futex code for an example of that, hugetlbfs
also needed special handling in the futex code for similar reasons).

NOTE: these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.

In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as a parameter to
follow_page, so that it will split the hugepages before returning
them. Migration for example passes FOLL_SPLIT as a parameter to
follow_page because it's not hugepage aware and in fact it can't work
at all on hugetlbfs (but it instead works fine on transparent
hugepages thanks to FOLL_SPLIT). Migration simply can't deal with
hugepages being returned (as it's not only checking the pfn of the
page and pinning it during the copy but it pretends to migrate the
memory in regular page sizes and with regular pte/pmd mappings).

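As a sketch, an in-kernel user that cannot handle compound pages might
call follow_page like this (assuming mmap_sem is already held in read
mode; the surrounding code is illustrative):

struct page *page;

/* FOLL_SPLIT makes follow_page split any hugepage first, so only
   regular base pages are ever returned to this code */
page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);
if (page) {
	/* ... operate on the base page ... */
	put_page(page);
}
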
== Optimizing the applications ==

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.

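For instance, a hugepage-aligned buffer could be obtained as follows
(a sketch; the 2M constant assumes the x86 hugepage size and the
helper name is hypothetical):

#include <stdlib.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)	/* assuming 2M hugepages */

static void *alloc_hugepage_aligned(size_t len)
{
	void *buf;
	/* alignment == HPAGE_SIZE guarantees the buffer covers
	   naturally aligned 2M extents that can be mapped with huge
	   pmds right away */
	if (posix_memalign(&buf, HPAGE_SIZE, len) != 0)
		return NULL;
	return buf;
}
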
== Hugetlbfs ==

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.

== Graceful fallback ==

Code walking pagetables but unaware of huge pmds can simply call
split_huge_page_pmd(vma, addr, pmd) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_page_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one-liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swap out the hugepage for example.

Example to make mremap.c transparent hugepage aware with a one-liner
change:

diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
 		return NULL;
 
 	pmd = pmd_offset(pud, addr);
+	split_huge_page_pmd(vma, addr, pmd);
 	if (pmd_none_or_clear_bad(pmd))
 		return NULL;

== Locking in hugepage aware code ==

We want as much code as possible hugepage aware, as calling
split_huge_page() or split_huge_page_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
mm->page_table_lock and re-run pmd_trans_huge. Taking the
page_table_lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_page can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page_table_lock and fall back to the old code as
before. Otherwise you should run pmd_trans_splitting on the pmd. In
case pmd_trans_splitting returns true, it means split_huge_page is
already in the middle of splitting the page. So if pmd_trans_splitting
returns true it's enough to drop the page_table_lock and call
wait_split_huge_page and then fall back to the old code paths. You are
guaranteed that by the time wait_split_huge_page returns, the pmd is
no longer huge. If pmd_trans_splitting returns false, you can proceed
to process the huge pmd and the hugepage natively. Once finished you
can drop the page_table_lock.

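Putting the protocol together, a hugepage aware pagetable walk might
be structured roughly like this (a sketch only: error handling and the
actual huge pmd processing are elided, and taking the anon_vma from
the vma for wait_split_huge_page is an assumption):

pmd = pmd_offset(pud, addr);	/* mmap_sem held */
if (pmd_trans_huge(*pmd)) {
	spin_lock(&mm->page_table_lock);
	if (pmd_trans_huge(*pmd)) {
		if (unlikely(pmd_trans_splitting(*pmd))) {
			/* split_huge_page is running: wait for it,
			   then take the regular pte paths */
			spin_unlock(&mm->page_table_lock);
			wait_split_huge_page(vma->anon_vma, pmd);
		} else {
			/* stable huge pmd: process the hugepage
			   natively under page_table_lock */
			/* ... */
			spin_unlock(&mm->page_table_lock);
			return;
		}
	} else {
		/* the pmd was converted to a regular pmd under us:
		   drop the lock and fall through */
		spin_unlock(&mm->page_table_lock);
	}
}
/* old regular pmd/pte code paths */
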
== compound_lock, get_user_pages and put_page ==

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the
page structures. It can do that easily for refcounts taken by huge pmd
mappings. But the GUP API as created by hugetlbfs (that returns head
and tail pages if running get_user_pages on an address backed by any
hugepage) requires the refcount to be accounted on the tail pages and
not only in the head pages, if we want to be able to run
split_huge_page while there are gup pins established on any tail
page. Failure to be able to run split_huge_page if there's any gup pin
on any tail page would mean having to split all hugepages upfront in
get_user_pages, which is unacceptable as too many gup users are
performance critical and they must work natively on hugepages like
they work natively on hugetlbfs already (hugetlbfs is simpler because
hugetlbfs pages cannot be split, so there is no requirement to account
the pins on the tail pages for hugetlbfs). If we didn't account the
gup refcounts on the tail pages during gup, we wouldn't know anymore
which tail page is pinned by gup and which is not while we run
split_huge_page. But we still have to add the gup pin to the head page
too, to know when we can free the compound page in case it's never
split during its lifetime. That requires changing not just get_page,
but put_page as well so that when put_page runs on a tail page (and
only on a tail page) it will find its respective head page, and then
it will decrease the head page refcount in addition to the tail page
refcount. To obtain a head page reliably and to decrease its refcount
without race conditions, put_page has to serialize against
__split_huge_page_refcount using a special per-page lock called
compound_lock.
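
Conceptually, the tail page path of put_page looks like the following
(a simplified sketch of the idea described above, not the exact kernel
implementation):

void put_page(struct page *page)
{
	if (unlikely(PageTail(page))) {
		struct page *head = page->first_page;
		unsigned long flags;

		/* serialize against __split_huge_page_refcount so
		   the head page cannot change from under us */
		flags = compound_lock_irqsave(head);
		if (likely(PageTail(page))) {
			/* drop the tail page pin and the extra head
			   page pin taken by get_page */
			/* ... */
		}
		compound_unlock_irqrestore(head, flags);
		return;
	}
	/* regular head/small page refcounting */
	/* ... */
}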