Documentation for /proc/sys/vm/*	kernel version 2.6.29
	(c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>
	(c) 2008         Peter W. Morreale <pmorreale@novell.com>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:

- admin_reserve_kbytes
- block_dump
- compact_memory
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirtytime_expire_seconds
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- highmem_is_dirtyable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_hugepages_mempolicy
- nr_overcommit_hugepages
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_scale_factor
- zone_reclaim_mode

==============================================================

admin_reserve_kbytes

The amount of free memory in the system that should be reserved for users
with the capability cap_sys_admin.

admin_reserve_kbytes defaults to min(3% of free pages, 8MB)

That should provide enough for the admin to log in and kill a process,
if necessary, under the default overcommit 'guess' mode.

Systems running under overcommit 'never' should increase this to account
for the full Virtual Memory Size of programs used to recover. Otherwise,
root may not be able to log in to recover the system.

How do you calculate a minimum useful reserve?

sshd or login + bash (or some other shell) + top (or ps, kill, etc.)

For overcommit 'guess', we can sum resident set sizes (RSS).
On x86_64 this is about 8MB.

For overcommit 'never', we can take the max of their virtual sizes (VSZ)
and add the sum of their RSS.
On x86_64 this is about 128MB.
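
For example, the reserve for overcommit 'never' could be derived from the
recovery tools actually relied upon (a sketch; the process list here is an
assumption and should match the tools you would use to recover):

	# max(VSZ) + sum(RSS) of sshd, bash and top, in kilobytes
	ps -o vsz=,rss= -C sshd,bash,top |
	    awk '{if ($1 > v) v = $1; r += $2} END {print v + r}' \
	    > /proc/sys/vm/admin_reserve_kbytes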

Changing this takes effect whenever an application requests memory.

==============================================================

block_dump

block_dump enables block I/O debugging when set to a nonzero value. More
information on block I/O debugging is in Documentation/laptops/laptop-mode.txt.

==============================================================

compact_memory

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important for example in the allocation of
huge pages although processes will also directly compact memory as required.
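
For example, to trigger compaction of all zones manually:

	echo 1 > /proc/sys/vm/compact_memory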

==============================================================

compact_unevictable_allowed

Available only when CONFIG_COMPACTION is set. When set to 1, compaction is
allowed to examine the unevictable lru (mlocked pages) for pages to compact.
This should be used on systems where stalls for minor page faults are an
acceptable trade for large contiguous free memory. Set to 0 to prevent
compaction from moving pages that are unevictable. Default value is 1.

==============================================================

dirty_background_bytes

Contains the amount of dirty memory at which the background kernel
flusher threads will start writeback.

Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only
one of them may be specified at a time. When one sysctl is written it is
immediately taken into account to evaluate the dirty memory limits and the
other appears as 0 when read.
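
For example, writing one of the pair makes the other read back as zero:

	echo 104857600 > /proc/sys/vm/dirty_background_bytes	# 100MB
	cat /proc/sys/vm/dirty_background_ratio			# now reads 0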

==============================================================

dirty_background_ratio

Contains, as a percentage of total available memory (free pages plus
reclaimable pages), the threshold at which the background kernel flusher
threads will start writing out dirty data.

The total available memory is not equal to total system memory.

==============================================================

dirty_bytes

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
specified at a time. When one sysctl is written it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.

==============================================================

dirty_expire_centisecs

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads.  It is expressed in 100'ths
of a second.  Data which has been dirty in-memory for longer than this
interval will be written out next time a flusher thread wakes up.
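
For example, to make data that has been dirty for more than 30 seconds
(3000 centisecs) eligible for writeout:

	echo 3000 > /proc/sys/vm/dirty_expire_centisecs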

==============================================================

dirty_ratio

Contains, as a percentage of total available memory (free pages plus
reclaimable pages), the threshold at which a process which is generating
disk writes will itself start writing out dirty data.

The total available memory is not equal to total system memory.

==============================================================

dirtytime_expire_seconds

When a lazytime inode is constantly having its pages dirtied, the inode with
an updated timestamp will never get a chance to be written out.  And, if the
only thing that has happened on the file system is a dirtytime inode caused
by an atime update, a worker will be scheduled to make sure that inode
eventually gets pushed out to disk.  This tunable is used to define when a
dirty inode is old enough to be eligible for writeback by the kernel flusher
threads.  It is also used as the interval at which the dirtytime writeback
thread wakes up.

==============================================================

dirty_writeback_centisecs

The kernel flusher threads will periodically wake up and write `old' data
out to disk.  This tunable expresses the interval between those wakeups, in
100'ths of a second.

Setting this to zero disables periodic writeback altogether.

==============================================================

drop_caches

Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes.  Once dropped, their
memory becomes free.

To free pagecache:
	echo 1 > /proc/sys/vm/drop_caches
To free reclaimable slab objects (includes dentries and inodes):
	echo 2 > /proc/sys/vm/drop_caches
To free slab objects and pagecache:
	echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will not free any dirty objects.
To increase the number of objects freed by this operation, the user may run
`sync' prior to writing to /proc/sys/vm/drop_caches.  This will minimize the
number of dirty objects on the system and create more candidates to be
dropped.
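
For example, to drop as much as possible in one go:

	sync; echo 3 > /proc/sys/vm/drop_caches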

This file is not a means to control the growth of the various kernel caches
(inodes, dentries, pagecache, etc...)  These objects are automatically
reclaimed by the kernel when memory is needed elsewhere on the system.

Use of this file can cause performance problems.  Since it discards cached
objects, it may cost a significant amount of I/O and CPU to recreate the
dropped objects, especially if they were under heavy use.  Because of this,
use outside of a testing or debugging environment is not recommended.

You may see informational messages in your kernel log when this file is
used:

	cat (1234): drop_caches: 3

These are informational only.  They do not mean that anything is wrong
with your system.  To disable them, echo 4 (bit 3) into drop_caches.

==============================================================

extfrag_threshold

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation.  The extfrag/extfrag_index file in
debugfs shows what the fragmentation index for each order is in each zone in
the system.  Values tending towards 0 imply allocations would fail due to lack
of memory, values towards 1000 imply failures are due to fragmentation and -1
implies that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the
fragmentation index is <= extfrag_threshold.  The default value is 500.
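
For example, the per-order fragmentation indexes can be inspected with
(assuming debugfs is mounted at its usual location):

	cat /sys/kernel/debug/extfrag/extfrag_index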

==============================================================

highmem_is_dirtyable

Available only for systems with CONFIG_HIGHMEM enabled (32-bit systems).

This parameter controls whether the high memory is considered for dirty
writers throttling.  This is not the case by default which means that
only the amount of memory directly visible/usable by the kernel can
be dirtied.  As a result, on systems with a large amount of memory and
lowmem basically depleted, writers might be throttled too early and
streaming writes can get very slow.

Changing the value to non-zero would allow more memory to be dirtied
and thus allow writers to write more data which can be flushed to the
storage more effectively.  Note this also comes with a risk of premature
OOM killer invocation because some writers (e.g. direct block device
writes) can only use the low memory and they can fill it up with dirty
data without any throttling.

==============================================================

hugetlb_shm_group

hugetlb_shm_group contains the group id that is allowed to create SysV
shared memory segments using hugetlb pages.
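
For example, to allow members of group id 1001 (an illustrative gid) to
create SysV shared memory segments backed by huge pages:

	echo 1001 > /proc/sys/vm/hugetlb_shm_group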

==============================================================

laptop_mode

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.

==============================================================

legacy_va_layout

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.

==============================================================

lowmem_reserve_ratio

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone.  This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which _could_ use highmem from using too much lowmem.  This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region.  This
mechanism will also defend that region from allocations which could use
highmem or lowmem).

The `lowmem_reserve_ratio' tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap then
you probably should change the lowmem_reserve_ratio setting.

The lowmem_reserve_ratio is an array.  You can see it by reading this file.
-
% cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32
-

But these values are not used directly.  The kernel calculates the number of
protection pages for each zone from them.  These are shown as an array of
protection pages in /proc/zoneinfo, like the following (an example from an
x86-64 box).  Each zone has an array of protection pages like this.

-
Node 0, zone      DMA
  pages free     1355
        min      3
        low      3
        high     4
	:
	:
    numa_other   0
        protection: (0, 2004, 2004, 2004)
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  pagesets
    cpu: 0 pcp: 0
        :
-
These protection values are added to the watermark when judging whether this
zone should be used for page allocation or should be reclaimed.

In this example, if normal pages (index=2) are requested from this DMA zone
and watermark[WMARK_HIGH] is used as the watermark, the kernel judges that
this zone should not be used because pages_free(1355) is smaller than
watermark + protection[2] (4 + 2004 = 2008).  If this protection value is 0,
this zone would be used for a normal page request.  If the request is for the
DMA zone itself (index=0), protection[0] (=0) is used.

zone[i]'s protection[j] is calculated by the following expression.

(i < j):
  zone[i]->protection[j]
  = (total sums of managed_pages from zone[i+1] to zone[j] on the node)
    / lowmem_reserve_ratio[i];
(i = j):
  = 0  (a zone does not need to protect itself)
(i > j):
  = 0  (not used)

The default values of lowmem_reserve_ratio[i] are
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others).
As the expression above shows, they are the reciprocals of the ratios:
256 means 1/256, so the number of protection pages becomes about 0.39% of
the total managed pages of the higher zones on the node.
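
As a worked example: if the zones above the DMA zone together hold
1,000,000 managed pages and lowmem_reserve_ratio[0] is 256, the DMA zone
defends 1,000,000 / 256 = about 3,906 pages (roughly 15MB with 4KB pages)
against allocations that could have used the higher zones.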

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%).  A value less than 1 completely
disables protection of the pages.

==============================================================

max_map_count:

This file contains the maximum number of memory map areas a process
may have.  Memory map areas are used as a side-effect of calling
malloc, directly by mmap, mprotect, and madvise, and also when loading
shared libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.

==============================================================

memory_failure_early_kill:

Control how to kill processes when an uncorrected memory error (typically
a 2-bit error in a memory module) is detected in the background by hardware
that cannot be handled by the kernel.  In some cases (like the page
still having a valid copy on disk) the kernel will handle the failure
transparently without affecting any applications.  But if there is
no other up-to-date copy of the data it will kill the affected processes
to prevent any data corruption from propagating.

1: Kill all processes that have the corrupted and not reloadable page mapped
as soon as the corruption is detected.  Note this is not supported
for a few types of pages, like kernel internally allocated data or
the swap cache, but works for the majority of user pages.

0: Only unmap the corrupted page from all processes and only kill a process
that tries to access it.

The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
handle this if they want to.

This is only active on architectures/platforms with advanced machine
check handling and depends on the hardware capabilities.

Applications can override this setting individually with the PR_MCE_KILL prctl.

==============================================================

memory_failure_recovery

Enable memory failure recovery (when supported by the platform).

1: Attempt recovery.

0: Always panic on a memory failure.

==============================================================

min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages based
proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.

=============================================================

min_slab_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  During zone reclaim
(which occurs when fallback from the local zone happens), slabs will be
reclaimed if more than this percentage of pages in a zone are reclaimable
slab pages.  This ensures that the slab growth stays under control even
in NUMA systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.

=============================================================

min_unmapped_ratio:

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state that
zone_reclaim_mode allows to be reclaimed.

If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
against all file-backed unmapped pages including swapcache pages and tmpfs
files. Otherwise, only unmapped pages backed by normal files but not tmpfs
files and similar are considered.

The default is 1 percent.

==============================================================

mmap_min_addr

This file indicates the amount of address space which a user process will
be restricted from mmapping.  Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them.  By
default this value is set to 0 and no protections will be enforced by the
security module.  Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.
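
For example, to enforce the commonly used 64k floor:

	echo 65536 > /proc/sys/vm/mmap_min_addr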

==============================================================

mmap_rnd_bits:

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations on architectures which support
tuning address space randomization.  This value will be bounded
by the architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_bits tunable.

==============================================================

mmap_rnd_compat_bits:

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations for applications run in
compatibility mode on architectures which support tuning address
space randomization.  This value will be bounded by the
architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_compat_bits tunable.

==============================================================

nr_hugepages

Change the minimum size of the hugepage pool.

See Documentation/admin-guide/mm/hugetlbpage.rst
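
For example, to ask for 512 persistent huge pages (an illustrative count)
and verify the result:

	echo 512 > /proc/sys/vm/nr_hugepages
	grep HugePages_Total /proc/meminfo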

==============================================================

nr_hugepages_mempolicy

Change the size of the hugepage pool at run-time on a specific
set of NUMA nodes.

See Documentation/admin-guide/mm/hugetlbpage.rst

==============================================================

nr_overcommit_hugepages

Change the maximum size of the hugepage pool.  The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/admin-guide/mm/hugetlbpage.rst

==============================================================

nr_trim_pages

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively.  Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/nommu-mmap.txt for more information.

==============================================================

numa_zonelist_order

This sysctl is only for NUMA and it is deprecated.  Anything but
Node order will fail!

Where the memory is allocated from is controlled by zonelists.
(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for a simple explanation;
you may be able to read ZONE_DMA as ZONE_DMA32...)

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows.
ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of order.
Assume a 2-node NUMA system; below is the zonelist of Node(0)'s GFP_KERNEL.

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type(A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL exhaustion.  This increases the possibility
of out-of-memory (OOM) of ZONE_DMA because ZONE_DMA tends to be small.

Type(B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type(A) is called "Node" order.  Type(B) is "Zone" order.

"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone order" orders the zonelists by zone type, then by node within each
zone.  Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration.

On 32-bit, the Normal zone needs to be preserved for allocations accessible
by the kernel, so "zone" order will be selected.

On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
order will be selected.

Default order is recommended unless this is causing problems for your
system/application.

==============================================================

oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM kill.  The dump includes such information as
pid, uid, tgid, vm size, rss, pgtables_bytes, swapents, oom_score_adj
score, and name.  This is helpful to determine why the OOM killer was
invoked, to identify the rogue task that caused it, and to determine why
the OOM killer chose the task it did to kill.

If this is set to zero, this information is suppressed.  On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one.  Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 1 (enabled).

==============================================================

oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill.  This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition.  This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.

==============================================================

overcommit_kbytes:

When overcommit_memory is set to 2, the committed address space is not
permitted to exceed swap plus this amount of physical RAM.  See below.

Note: overcommit_kbytes is the counterpart of overcommit_ratio.  Only one
of them may be specified at a time.  Setting one disables the other (which
then appears as 0 when read).

==============================================================

overcommit_memory:

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.
Note that user_reserve_kbytes affects this policy.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting.rst and
mm/util.c::__vm_enough_memory() for more information.

==============================================================

overcommit_ratio:

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM.  See above.
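
For example, with 2GB of swap, 8GB of RAM and overcommit_ratio set to 50,
the commit limit is 2GB + 50% of 8GB = 6GB; the current limit and usage
are reported as CommitLimit and Committed_AS in /proc/meminfo.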

==============================================================

page-cluster

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt.  This is the swap counterpart
to page cache readahead.  "Consecutive" here does not refer to virtual or
physical addresses, but to consecutive positions in swap space - it means
the pages were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
Zero disables swap readahead completely.

The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.
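
For example, swap readahead can be disabled entirely, which may help
random-access workloads on SSD-backed swap:

	echo 0 > /proc/sys/vm/page-cluster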

Lower values mean lower latencies for initial faults, but extra faults and
I/O delays for subsequent faults that would have been brought in by the
consecutive-pages readahead.

=============================================================

panic_on_oom

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
OOM killer.  Usually, the OOM killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes by using
mempolicy/cpusets, and those nodes reach memory exhaustion, one process
may be killed by the OOM killer.  No panic occurs in this case, because
the memory of other nodes may be free, which means the system as a whole
may not be in a fatal state yet.

If this is set to 2, the kernel always panics in the situations mentioned
above.  Even if OOM happens under a memory cgroup, the whole system panics.

The default value is 0.
Values 1 and 2 are for failover of clustering.  Please select either
according to your policy of failover.
panic_on_oom=2 combined with kdump gives you a very strong tool for
investigating why OOM happens, since you can get a memory snapshot.

=============================================================

percpu_pagelist_fraction

This is the maximum fraction of pages in each zone (the pcp->high watermark)
that can be allocated for each per-cpu page list.  The minimum value for this
is 8.  It means that we don't allow more than 1/8th of the pages in each zone
to be allocated in any single per_cpu_pagelist.  This entry only changes the
value of hot per-cpu pagelists.  A user can specify a number like 100 to
allocate 1/100th of each zone to each per-cpu page list.

The batch value of each per-cpu pagelist is also updated as a result.  It is
set to pcp->high/4.  The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero.  The kernel does not use this value at boot time
to set the high water marks for each per-cpu page list.  If the user writes
'0' to this sysctl, it will revert to this default behavior.

==============================================================

stat_interval

The time interval at which vm statistics are updated.  The default
is 1 second.

==============================================================

stat_refresh

Any read or write (by root only) flushes all the per-cpu vm statistics
into their global totals, for more accurate reports when testing
e.g. cat /proc/sys/vm/stat_refresh /proc/meminfo

As a side-effect, it also checks for negative totals (elsewhere reported
as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
(At time of writing, a few stats are known sometimes to be found negative,
with no ill effects: errors and warnings on these stats are suppressed.)

==============================================================

numa_stat

This interface allows runtime configuration of numa statistics.

When page allocation performance becomes a bottleneck and you can tolerate
some possible tool breakage and decreased numa counter precision, you can
do:
	echo 0 > /proc/sys/vm/numa_stat

When page allocation performance is not a bottleneck and you want all
tooling to work, you can do:
	echo 1 > /proc/sys/vm/numa_stat

==============================================================

swappiness

This control is used to define how aggressively the kernel will swap
memory pages.  Higher values will increase aggressiveness, lower values
decrease the amount of swap.  A value of 0 instructs the kernel not to
initiate swap until the amount of free and file-backed pages is less
than the high water mark in a zone.

The default value is 60.

==============================================================

user_reserve_kbytes

When overcommit_memory is set to 2, "never overcommit" mode, reserve
min(3% of current process size, user_reserve_kbytes) of free memory.
This is intended to prevent a user from starting a single memory hogging
process, such that they cannot recover (kill the hog).

user_reserve_kbytes defaults to min(3% of the current process size, 128MB).

If this is reduced to zero, then the user will be allowed to allocate
all free memory with a single process, minus admin_reserve_kbytes.
Any subsequent attempts to execute a command will result in
"fork: Cannot allocate memory".

Changing this takes effect whenever an application requests memory.

==============================================================

vfs_cache_pressure

This percentage value controls the tendency of the kernel to reclaim
the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches.  When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions.  Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

Increasing vfs_cache_pressure significantly beyond 100 may have negative
performance impact.  Reclaim code needs to take various locks to find freeable
directory and inode objects.  With vfs_cache_pressure=1000, it will look for
ten times more freeable objects than there are.

=============================================================

watermark_scale_factor:

This factor controls the aggressiveness of kswapd.  It defines the
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000.  The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system.  The maximum value is 1000, or 10% of memory.
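
For example, on a node with 16GB of memory the default of 10 keeps the
watermarks about 0.1% apart, i.e. roughly 16MB; raising the value to 100
widens that to about 1%, or roughly 160MB.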

A high rate of threads entering direct reclaim (allocstall) or kswapd
going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
that the number of free pages kswapd maintains for latency reasons is
too small for the allocation bursts occurring in the system.  This knob
can then be used to tune kswapd aggressiveness accordingly.

==============================================================

zone_reclaim_mode:

Zone_reclaim_mode allows someone to set more or less aggressive approaches to
reclaim memory when a zone runs out of memory.  If it is set to zero then no
zone reclaim occurs.  Allocations will be satisfied from other zones / nodes
in the system.

This value is an OR'd combination of:

1	= Zone reclaim on
2	= Zone reclaim writes dirty pages out
4	= Zone reclaim swaps pages
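
For example, to enable zone reclaim and allow it to write out dirty pages
(1 OR'd with 2):

	echo 3 > /proc/sys/vm/zone_reclaim_mode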

zone_reclaim_mode is disabled by default.  For file servers or workloads
that benefit from having their data cached, zone_reclaim_mode should be
left disabled as the caching effect is likely to be more important than
data locality.

zone_reclaim may be enabled if it's known that the workload is partitioned
such that each partition fits within a NUMA node and that accessing remote
memory would cause a measurable performance reduction.  The page allocator
will then reclaim easily reusable pages (those page cache pages that are
currently not used) before allocating off node pages.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes.  Zone
reclaim will write out dirty pages if a zone fills up and so effectively
throttles the process.  This may decrease the performance of a single
process since it cannot use all of system memory to buffer the outgoing
writes anymore, but it preserves the memory on other nodes so that the
performance of other processes running on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.

============ End of Document =================================