Control Group v2

October, 2015		Tejun Heo <tj@kernel.org>
5
6This is the authoritative documentation on the design, interface and
7conventions of cgroup v2. It describes all userland-visible aspects
8of cgroup including core and specific controller behaviors. All
9future changes must be reflected in this document. Documentation for
v1 is available under Documentation/cgroup-v1/.
11
12CONTENTS
13
141. Introduction
15 1-1. Terminology
16 1-2. What is cgroup?
172. Basic Operations
18 2-1. Mounting
19 2-2. Organizing Processes
20 2-3. [Un]populated Notification
21 2-4. Controlling Controllers
22 2-4-1. Enabling and Disabling
23 2-4-2. Top-down Constraint
24 2-4-3. No Internal Process Constraint
25 2-5. Delegation
26 2-5-1. Model of Delegation
27 2-5-2. Delegation Containment
28 2-6. Guidelines
29 2-6-1. Organize Once and Control
30 2-6-2. Avoid Name Collisions
313. Resource Distribution Models
32 3-1. Weights
33 3-2. Limits
34 3-3. Protections
35 3-4. Allocations
364. Interface Files
37 4-1. Format
38 4-2. Conventions
39 4-3. Core Interface Files
405. Controllers
41 5-1. CPU
42 5-1-1. CPU Interface Files
43 5-2. Memory
44 5-2-1. Memory Interface Files
45 5-2-2. Usage Guidelines
46 5-2-3. Memory Ownership
47 5-3. IO
48 5-3-1. IO Interface Files
49 5-3-2. Writeback
50P. Information on Kernel Programming
51 P-1. Filesystem Support for Writeback
52D. Deprecated v1 Core Features
53R. Issues with v1 and Rationales for v2
54 R-1. Multiple Hierarchies
55 R-2. Thread Granularity
56 R-3. Competition Between Inner Nodes and Threads
57 R-4. Other Interface Issues
58 R-5. Controller Issues and Remedies
59 R-5-1. Memory
60
61
621. Introduction
63
641-1. Terminology
65
66"cgroup" stands for "control group" and is never capitalized. The
67singular form is used to designate the whole feature and also as a
68qualifier as in "cgroup controllers". When explicitly referring to
69multiple individual control groups, the plural form "cgroups" is used.
70
71
721-2. What is cgroup?
73
74cgroup is a mechanism to organize processes hierarchically and
75distribute system resources along the hierarchy in a controlled and
76configurable manner.
77
78cgroup is largely composed of two parts - the core and controllers.
79cgroup core is primarily responsible for hierarchically organizing
80processes. A cgroup controller is usually responsible for
81distributing a specific type of system resource along the hierarchy
82although there are utility controllers which serve purposes other than
83resource distribution.
84
85cgroups form a tree structure and every process in the system belongs
86to one and only one cgroup. All threads of a process belong to the
87same cgroup. On creation, all processes are put in the cgroup that
88the parent process belongs to at the time. A process can be migrated
89to another cgroup. Migration of a process doesn't affect already
90existing descendant processes.
91
92Following certain structural constraints, controllers may be enabled or
93disabled selectively on a cgroup. All controller behaviors are
94hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
96sub-hierarchy of the cgroup. When a controller is enabled on a nested
97cgroup, it always restricts the resource distribution further. The
98restrictions set closer to the root in the hierarchy can not be
99overridden from further away.
100
101
1022. Basic Operations
103
1042-1. Mounting
105
Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
107hierarchy can be mounted with the following mount command.
108
109 # mount -t cgroup2 none $MOUNT_POINT
110
The cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
112controllers which support v2 and are not bound to a v1 hierarchy are
113automatically bound to the v2 hierarchy and show up at the root.
114Controllers which are not in active use in the v2 hierarchy can be
115bound to other hierarchies. This allows mixing v2 hierarchy with the
116legacy v1 multiple hierarchies in a fully backward compatible way.
117
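As an illustration (the mount points are just example choices), a
hybrid setup might mount the v2 hierarchy while keeping a couple of
controllers on v1 hierarchies.

  # mount -t cgroup2 none /sys/fs/cgroup/unified
  # mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu
  # mount -t cgroup -o freezer none /sys/fs/cgroup/freezer

Controllers claimed by the v1 mounts above won't show up in the v2
hierarchy's "cgroup.controllers" until their v1 hierarchies are
unmounted.
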
118A controller can be moved across hierarchies only after the controller
119is no longer referenced in its current hierarchy. Because per-cgroup
120controller states are destroyed asynchronously and controllers may
121have lingering references, a controller may not show up immediately on
122the v2 hierarchy after the final umount of the previous hierarchy.
123Similarly, a controller should be fully disabled to be moved out of
124the unified hierarchy and it may take some time for the disabled
125controller to become available for other hierarchies; furthermore, due
126to inter-controller dependencies, other controllers may need to be
127disabled too.
128
129While useful for development and manual configurations, moving
130controllers dynamically between the v2 and other hierarchies is
131strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
133controllers after system boot.
134
135During transition to v2, system management software might still
136automount the v1 cgroup filesystem and so hijack all controllers
137during boot, before manual intervention is possible. To make testing
138and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
140
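For example, either of the following kernel command line additions
keeps the listed controllers (or, with "all", every controller) out of
v1 hierarchies so that they remain available to v2.

  cgroup_no_v1=cpu,memory
  cgroup_no_v1=all
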
141
1422-2. Organizing Processes
143
144Initially, only the root cgroup exists to which all processes belong.
145A child cgroup can be created by creating a sub-directory.
146
147 # mkdir $CGROUP_NAME
148
149A given cgroup may have multiple child cgroups forming a tree
150structure. Each cgroup has a read-writable interface file
151"cgroup.procs". When read, it lists the PIDs of all processes which
152belong to the cgroup one-per-line. The PIDs are not ordered and the
153same PID may show up more than once if the process got moved to
154another cgroup and then back or the PID got recycled while reading.
155
156A process can be migrated into a cgroup by writing its PID to the
157target cgroup's "cgroup.procs" file. Only one process can be migrated
158on a single write(2) call. If a process is composed of multiple
159threads, writing the PID of any thread migrates all threads of the
160process.
161
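For example, assuming the v2 hierarchy is mounted at /sys/fs/cgroup
and "test-cgroup" is a hypothetical child cgroup, the current shell
can be moved into it by writing its own PID.

  # echo $$ > /sys/fs/cgroup/test-cgroup/cgroup.procs
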
162When a process forks a child process, the new process is born into the
163cgroup that the forking process belongs to at the time of the
164operation. After exit, a process stays associated with the cgroup
165that it belonged to at the time of exit until it's reaped; however, a
166zombie process does not appear in "cgroup.procs" and thus can't be
167moved to another cgroup.
168
169A cgroup which doesn't have any children or live processes can be
170destroyed by removing the directory. Note that a cgroup which doesn't
171have any children and is associated only with zombie processes is
172considered empty and can be removed.
173
174 # rmdir $CGROUP_NAME
175
176"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
177cgroup is in use in the system, this file may contain multiple lines,
178one for each hierarchy. The entry for cgroup v2 is always in the
179format "0::$PATH".
180
181 # cat /proc/842/cgroup
182 ...
183 0::/test-cgroup/test-cgroup-nested
184
185If the process becomes a zombie and the cgroup it was associated with
186is removed subsequently, " (deleted)" is appended to the path.
187
188 # cat /proc/842/cgroup
189 ...
190 0::/test-cgroup/test-cgroup-nested (deleted)
191
192
1932-3. [Un]populated Notification
194
195Each non-root cgroup has a "cgroup.events" file which contains
196"populated" field indicating whether the cgroup's sub-hierarchy has
197live processes in it. Its value is 0 if there is no live process in
198the cgroup and its descendants; otherwise, 1. poll and [id]notify
199events are triggered when the value changes. This can be used, for
200example, to start a clean-up operation after all processes of a given
201sub-hierarchy have exited. The populated state updates and
202notifications are recursive. Consider the following sub-hierarchy
where the numbers in parentheses represent the number of processes
204in each cgroup.
205
206 A(4) - B(0) - C(1)
207 \ D(0)
208
209A, B and C's "populated" fields would be 1 while D's 0. After the one
210process in C exits, B and C's "populated" fields would flip to "0" and
211file modified events will be generated on the "cgroup.events" files of
212both cgroups.
213
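As a rough sketch of acting on these notifications, the following
waits for a modification event on a hypothetical cgroup's
"cgroup.events" using the inotifywait utility from inotify-tools,
re-checks the field and removes the cgroup once it is empty.

  # inotifywait -q -e modify /sys/fs/cgroup/test-cgroup/cgroup.events
  # cat /sys/fs/cgroup/test-cgroup/cgroup.events
  populated 0
  # rmdir /sys/fs/cgroup/test-cgroup
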
214
2152-4. Controlling Controllers
216
2172-4-1. Enabling and Disabling
218
219Each cgroup has a "cgroup.controllers" file which lists all
220controllers available for the cgroup to enable.
221
222 # cat cgroup.controllers
223 cpu io memory
224
225No controller is enabled by default. Controllers can be enabled and
226disabled by writing to the "cgroup.subtree_control" file.
227
228 # echo "+cpu +memory -io" > cgroup.subtree_control
229
230Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or they all fail. If multiple operations on the same controller
233are specified, the last one is effective.
234
235Enabling a controller in a cgroup indicates that the distribution of
236the target resource across its immediate children will be controlled.
237Consider the following sub-hierarchy. The enabled controllers are
238listed in parentheses.
239
240 A(cpu,memory) - B(memory) - C()
241 \ D()
242
243As A has "cpu" and "memory" enabled, A will control the distribution
244of CPU cycles and memory to its children, in this case, B. As B has
245"memory" enabled but not "CPU", C and D will compete freely on CPU
246cycles but their division of memory available to B will be controlled.
247
248As a controller regulates the distribution of the target resource to
249the cgroup's children, enabling it creates the controller's interface
250files in the child cgroups. In the above example, enabling "cpu" on B
251would create the "cpu." prefixed controller interface files in C and
252D. Likewise, disabling "memory" from B would remove the "memory."
253prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.
256
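As a sketch, assuming the current directory is a cgroup which has a
child directory "B", "memory" listed in "cgroup.controllers" and no
processes of its own, the effect can be observed as follows.

  # echo "+memory" > cgroup.subtree_control
  # ls B/ | grep '^memory\.'
  memory.current
  memory.events
  ...
  # echo "-memory" > cgroup.subtree_control
  # ls B/ | grep -c '^memory\.'
  0
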
257
2582-4-2. Top-down Constraint
259
260Resources are distributed top-down and a cgroup can further distribute
261a resource only if the resource has been distributed to it from the
262parent. This means that all non-root "cgroup.subtree_control" files
263can only contain controllers which are enabled in the parent's
264"cgroup.subtree_control" file. A controller can be enabled only if
265the parent has the controller enabled and a controller can't be
266disabled if one or more children have it enabled.
267
268
2692-4-3. No Internal Process Constraint
270
271Non-root cgroups can only distribute resources to their children when
272they don't have any processes of their own. In other words, only
273cgroups which don't contain any processes can have controllers enabled
274in their "cgroup.subtree_control" files.
275
276This guarantees that, when a controller is looking at the part of the
277hierarchy which has it enabled, processes are always only on the
278leaves. This rules out situations where child cgroups compete against
279internal processes of the parent.
280
281The root cgroup is exempt from this restriction. Root contains
282processes and anonymous resource consumption which can't be associated
283with any other cgroups and requires special treatment from most
284controllers. How resource consumption in the root cgroup is governed
285is up to each controller.
286
287Note that the restriction doesn't get in the way if there is no
288enabled controller in the cgroup's "cgroup.subtree_control". This is
289important as otherwise it wouldn't be possible to create children of a
290populated cgroup. To control resource distribution of a cgroup, the
291cgroup must create children and transfer all its processes to the
292children before enabling controllers in its "cgroup.subtree_control"
293file.
294
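A minimal sketch of that sequence, using a hypothetical "leaf" child
cgroup to hold the existing processes (writes for processes which have
already exited simply fail), could look like the following.

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory +io" > cgroup.subtree_control
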
295
2962-5. Delegation
297
2982-5-1. Model of Delegation
299
300A cgroup can be delegated to a less privileged user by granting write
301access of the directory and its "cgroup.procs" file to the user. Note
302that resource control interface files in a given directory control the
303distribution of the parent's resources and thus must not be delegated
304along with the directory.
305
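A minimal sketch, with "/sys/fs/cgroup/delegated" and user U0 as
illustrative names, hands the user only the directory and its
"cgroup.procs" file.

  # mkdir /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated/cgroup.procs

The resource control files in the delegated directory, e.g.
"memory.max", are left owned by root because they configure the
distribution of the parent's resources.
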
306Once delegated, the user can build sub-hierarchy under the directory,
307organize processes as it sees fit and further distribute the resources
308it received from the parent. The limits and other settings of all
309resource controllers are hierarchical and regardless of what happens
310in the delegated sub-hierarchy, nothing can escape the resource
311restrictions imposed by the parent.
312
313Currently, cgroup doesn't impose any restrictions on the number of
314cgroups in or nesting depth of a delegated sub-hierarchy; however,
315this may be limited explicitly in the future.
316
317
3182-5-2. Delegation Containment
319
320A delegated sub-hierarchy is contained in the sense that processes
321can't be moved into or out of the sub-hierarchy by the delegatee. For
322a process with a non-root euid to migrate a target process into a
323cgroup by writing its PID to the "cgroup.procs" file, the following
324conditions must be met.
325
326- The writer's euid must match either uid or suid of the target process.
327
328- The writer must have write access to the "cgroup.procs" file.
329
330- The writer must have write access to the "cgroup.procs" file of the
331 common ancestor of the source and destination cgroups.
332
333The above three constraints ensure that while a delegatee may migrate
334processes around freely in the delegated sub-hierarchy it can't pull
335in from or push out to outside the sub-hierarchy.
336
337For an example, let's assume cgroups C0 and C1 have been delegated to
338user U0 who created C00, C01 under C0 and C10 under C1 as follows and
339all processes under C0 and C1 belong to U0.
340
341 ~~~~~~~~~~~~~ - C0 - C00
342 ~ cgroup ~ \ C01
343 ~ hierarchy ~
344 ~~~~~~~~~~~~~ - C1 - C10
345
346Let's also say U0 wants to write the PID of a process which is
347currently in C10 into "C00/cgroup.procs". U0 has write access to the
file and its euid matches the process's uid; however, the common ancestor of the
349source cgroup C10 and the destination cgroup C00 is above the points
350of delegation and U0 would not have write access to its "cgroup.procs"
351files and thus the write will be denied with -EACCES.
352
353
3542-6. Guidelines
355
3562-6-1. Organize Once and Control
357
358Migrating a process across cgroups is a relatively expensive operation
359and stateful resources such as memory are not moved together with the
360process. This is an explicit design decision as there often exist
361inherent trade-offs between migration and various hot paths in terms
362of synchronization cost.
363
364As such, migrating processes across cgroups frequently as a means to
365apply different resource restrictions is discouraged. A workload
366should be assigned to a cgroup according to the system's logical and
367resource structure once on start-up. Dynamic adjustments to resource
368distribution can be made by changing controller configuration through
369the interface files.
370
371
3722-6-2. Avoid Name Collisions
373
374Interface files for a cgroup and its children cgroups occupy the same
375directory and it is possible to create children cgroups which collide
376with interface files.
377
378All cgroup core interface files are prefixed with "cgroup." and each
379controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
381'_'s but never begins with an '_' so it can be used as the prefix
382character for collision avoidance. Also, interface file names won't
383start or end with terms which are often used in categorizing workloads
384such as job, service, slice, unit or workload.
385
386cgroup doesn't do anything to prevent name collisions and it's the
387user's responsibility to avoid them.
388
389
3903. Resource Distribution Models
391
392cgroup controllers implement several resource distribution schemes
393depending on the resource type and expected use cases. This section
394describes major schemes in use along with their expected behaviors.
395
396
3973-1. Weights
398
399A parent's resource is distributed by adding up the weights of all
400active children and giving each the fraction matching the ratio of its
401weight against the sum. As only children which can make use of the
402resource at the moment participate in the distribution, this is
403work-conserving. Due to the dynamic nature, this model is usually
404used for stateless resources.
405
406All weights are in the range [1, 10000] with the default at 100. This
407allows symmetric multiplicative biases in both directions at fine
408enough granularity while staying in the intuitive range.
409
410As long as the weight is in range, all configuration combinations are
411valid and there is no reason to reject configuration changes or
412process migrations.
413
414"cpu.weight" proportionally distributes CPU cycles to active children
415and is an example of this type.
416
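For example, given two sibling cgroups which are both runnable,
setting one's "cpu.weight" to 200 while the other stays at the default
100 would give them roughly 2/3 and 1/3 of the contended CPU cycles.

  # cat cpu.weight
  100
  # echo 200 > cpu.weight
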
417
4183-2. Limits
419
A child can only consume up to the configured amount of the resource.
421Limits can be over-committed - the sum of the limits of children can
422exceed the amount of resource available to the parent.
423
Limits are in the range [0, max] and default to "max", which is noop.
425
426As limits can be over-committed, all configuration combinations are
427valid and there is no reason to reject configuration changes or
428process migrations.
429
430"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
431on an IO device and is an example of this type.
432
433
4343-3. Protections
435
A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best-effort
soft boundaries. Protections can also be over-committed, in which case
only up to the amount available to the parent is protected among
children.
442
Protections are in the range [0, max] and default to 0, which is
444noop.
445
446As protections can be over-committed, all configuration combinations
447are valid and there is no reason to reject configuration changes or
448process migrations.
449
450"memory.low" implements best-effort memory protection and is an
451example of this type.
452
453
4543-4. Allocations
455
456A cgroup is exclusively allocated a certain amount of a finite
457resource. Allocations can't be over-committed - the sum of the
458allocations of children can not exceed the amount of resource
459available to the parent.
460
Allocations are in the range [0, max] and default to 0, which is no
462resource.
463
464As allocations can't be over-committed, some configuration
465combinations are invalid and should be rejected. Also, if the
466resource is mandatory for execution of processes, process migrations
467may be rejected.
468
469"cpu.rt.max" hard-allocates realtime slices and is an example of this
470type.
471
472
4734. Interface Files
474
4754-1. Format
476
477All interface files should be in one of the following formats whenever
478possible.
479
480 New-line separated values
481 (when only one value can be written at once)
482
483 VAL0\n
484 VAL1\n
485 ...
486
487 Space separated values
488 (when read-only or multiple values can be written at once)
489
490 VAL0 VAL1 ...\n
491
492 Flat keyed
493
494 KEY0 VAL0\n
495 KEY1 VAL1\n
496 ...
497
498 Nested keyed
499
500 KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
501 KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
502 ...
503
504For a writable file, the format for writing should generally match
505reading; however, controllers may allow omitting later fields or
506implement restricted shortcuts for most common use cases.
507
508For both flat and nested keyed files, only the values for a single key
509can be written at a time. For nested keyed files, the sub key pairs
510may be specified in any order and not all pairs have to be specified.
511
512
5134-2. Conventions
514
515- Settings for a single feature should be contained in a single file.
516
517- The root cgroup should be exempt from resource control and thus
518 shouldn't have resource control interface files. Also,
519 informational files on the root cgroup which end up showing global
520 information available elsewhere shouldn't exist.
521
522- If a controller implements weight based resource distribution, its
523 interface file should be named "weight" and have the range [1,
524 10000] with 100 as the default. The values are chosen to allow
525 enough and symmetric bias in both directions while keeping it
526 intuitive (the default is 100%).
527
528- If a controller implements an absolute resource guarantee and/or
529 limit, the interface files should be named "min" and "max"
530 respectively. If a controller implements best effort resource
531 guarantee and/or limit, the interface files should be named "low"
532 and "high" respectively.
533
534 In the above four control files, the special token "max" should be
535 used to represent upward infinity for both reading and writing.
536
537- If a setting has a configurable default value and keyed specific
538 overrides, the default entry should be keyed with "default" and
539 appear as the first entry in the file.
540
541 The default value can be updated by writing either "default $VAL" or
542 "$VAL".
543
544 When writing to update a specific override, "default" can be used as
545 the value to indicate removal of the override. Override entries
546 with "default" as the value must not appear when read.
547
548 For example, a setting which is keyed by major:minor device numbers
549 with integer values may look like the following.
550
551 # cat cgroup-example-interface-file
552 default 150
553 8:0 300
554
555 The default value can be updated by
556
557 # echo 125 > cgroup-example-interface-file
558
559 or
560
561 # echo "default 125" > cgroup-example-interface-file
562
563 An override can be set by
564
565 # echo "8:16 170" > cgroup-example-interface-file
566
567 and cleared by
568
569 # echo "8:0 default" > cgroup-example-interface-file
570 # cat cgroup-example-interface-file
571 default 125
572 8:16 170
573
574- For events which are not very high frequency, an interface file
575 "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
577 generated on the file.
578
579
5804-3. Core Interface Files
581
582All cgroup core files are prefixed with "cgroup."
583
584 cgroup.procs
585
586 A read-write new-line separated values file which exists on
587 all cgroups.
588
589 When read, it lists the PIDs of all processes which belong to
590 the cgroup one-per-line. The PIDs are not ordered and the
591 same PID may show up more than once if the process got moved
592 to another cgroup and then back or the PID got recycled while
593 reading.
594
595 A PID can be written to migrate the process associated with
596 the PID to the cgroup. The writer should match all of the
597 following conditions.
598
599 - Its euid is either root or must match either uid or suid of
600 the target process.
601
602 - It must have write access to the "cgroup.procs" file.
603
604 - It must have write access to the "cgroup.procs" file of the
605 common ancestor of the source and destination cgroups.
606
607 When delegating a sub-hierarchy, write access to this file
608 should be granted along with the containing directory.
609
610 cgroup.controllers
611
612 A read-only space separated values file which exists on all
613 cgroups.
614
	It shows a space separated list of all controllers available to
616 the cgroup. The controllers are not ordered.
617
618 cgroup.subtree_control
619
620 A read-write space separated values file which exists on all
621 cgroups. Starts out empty.
622
	When read, it shows a space separated list of the controllers
624 which are enabled to control resource distribution from the
625 cgroup to its children.
626
627 Space separated list of controllers prefixed with '+' or '-'
628 can be written to enable or disable controllers. A controller
629 name prefixed with '+' enables the controller and '-'
630 disables. If a controller appears more than once on the list,
631 the last one is effective. When multiple enable and disable
632 operations are specified, either all succeed or all fail.
633
634 cgroup.events
635
636 A read-only flat-keyed file which exists on non-root cgroups.
637 The following entries are defined. Unless specified
638 otherwise, a value change in this file generates a file
639 modified event.
640
641 populated
642
643 1 if the cgroup or its descendants contains any live
644 processes; otherwise, 0.
645
646
6475. Controllers
648
6495-1. CPU
650
651[NOTE: The interface for the cpu controller hasn't been merged yet]
652
653The "cpu" controllers regulates distribution of CPU cycles. This
654controller implements weight and absolute bandwidth limit models for
655normal scheduling policy and absolute bandwidth allocation model for
656realtime scheduling policy.
657
658
6595-1-1. CPU Interface Files
660
661All time durations are in microseconds.
662
663 cpu.stat
664
665 A read-only flat-keyed file which exists on non-root cgroups.
666
667 It reports the following six stats.
668
669 usage_usec
670 user_usec
671 system_usec
672 nr_periods
673 nr_throttled
674 throttled_usec
675
676 cpu.weight
677
678 A read-write single value file which exists on non-root
679 cgroups. The default is "100".
680
681 The weight in the range [1, 10000].
682
683 cpu.max
684
685 A read-write two value file which exists on non-root cgroups.
686 The default is "max 100000".
687
688 The maximum bandwidth limit. It's in the following format.
689
690 $MAX $PERIOD
691
	which indicates that the group may consume up to $MAX in each
693 $PERIOD duration. "max" for $MAX indicates no limit. If only
694 one number is written, $MAX is updated.
695
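	As an illustrative sketch (see the note above about the cpu
	interface not being merged yet), limiting the group to 50ms of
	CPU time per 100ms period and later tightening only $MAX could
	look like the following.

	  # echo "50000 100000" > cpu.max
	  # echo 25000 > cpu.max
	  # cat cpu.max
	  25000 100000
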
696 cpu.rt.max
697
698 [NOTE: The semantics of this file is still under discussion and the
699 interface hasn't been merged yet]
700
701 A read-write two value file which exists on all cgroups.
702 The default is "0 100000".
703
704 The maximum realtime runtime allocation. Over-committing
705 configurations are disallowed and process migrations are
706 rejected if not enough bandwidth is available. It's in the
707 following format.
708
709 $MAX $PERIOD
710
	which indicates that the group may consume up to $MAX in each
712 $PERIOD duration. If only one number is written, $MAX is
713 updated.
714
715
7165-2. Memory
717
718The "memory" controller regulates distribution of memory. Memory is
719stateful and implements both limit and protection models. Due to the
720intertwining between memory usage and reclaim pressure and the
721stateful nature of memory, the distribution model is relatively
722complex.
723
724While not completely water-tight, all major memory usages by a given
725cgroup are tracked so that the total memory consumption can be
726accounted and controlled to a reasonable extent. Currently, the
727following types of memory usages are tracked.
728
729- Userland memory - page cache and anonymous memory.
730
731- Kernel data structures such as dentries and inodes.
732
733- TCP socket buffers.
734
735The above list may expand in the future for better coverage.
736
737
7385-2-1. Memory Interface Files
739
740All memory amounts are in bytes. If a value which is not aligned to
741PAGE_SIZE is written, the value may be rounded up to the closest
742PAGE_SIZE multiple when read back.
743
744 memory.current
745
746 A read-only single value file which exists on non-root
747 cgroups.
748
749 The total amount of memory currently being used by the cgroup
750 and its descendants.
751
752 memory.low
753
754 A read-write single value file which exists on non-root
755 cgroups. The default is "0".
756
757 Best-effort memory protection. If the memory usages of a
758 cgroup and all its ancestors are below their low boundaries,
759 the cgroup's memory won't be reclaimed unless memory can be
760 reclaimed from unprotected cgroups.
761
762 Putting more memory than generally available under this
763 protection is discouraged.
764
765 memory.high
766
767 A read-write single value file which exists on non-root
768 cgroups. The default is "max".
769
770 Memory usage throttle limit. This is the main mechanism to
771 control memory usage of a cgroup. If a cgroup's usage goes
772 over the high boundary, the processes of the cgroup are
773 throttled and put under heavy reclaim pressure.
774
775 Going over the high limit never invokes the OOM killer and
776 under extreme conditions the limit may be breached.
777
778 memory.max
779
780 A read-write single value file which exists on non-root
781 cgroups. The default is "max".
782
783 Memory usage hard limit. This is the final protection
784 mechanism. If a cgroup's memory usage reaches this limit and
785 can't be reduced, the OOM killer is invoked in the cgroup.
786 Under certain circumstances, the usage may go over the limit
787 temporarily.
788
789 This is the ultimate protection mechanism. As long as the
790 high limit is used and monitored properly, this limit's
791 utility is limited to providing the final safety net.
792
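	As a sketch with illustrative numbers, a workload which should
	normally stay under 512M but must never exceed 1G could be
	configured as follows (values are in bytes).

	  # echo 536870912 > memory.high
	  # echo 1073741824 > memory.max

	Reclaim pressure kicks in once usage crosses "memory.high" while
	the OOM killer is invoked only if usage reaches "memory.max" and
	can't be reclaimed below it.
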
793 memory.events
794
795 A read-only flat-keyed file which exists on non-root cgroups.
796 The following entries are defined. Unless specified
797 otherwise, a value change in this file generates a file
798 modified event.
799
800 low
801
802 The number of times the cgroup is reclaimed due to
803 high memory pressure even though its usage is under
804 the low boundary. This usually indicates that the low
805 boundary is over-committed.
806
807 high
808
809 The number of times processes of the cgroup are
810 throttled and routed to perform direct memory reclaim
811 because the high memory boundary was exceeded. For a
812 cgroup whose memory usage is capped by the high limit
813 rather than global memory pressure, this event's
814 occurrences are expected.
815
816 max
817
818 The number of times the cgroup's memory usage was
819 about to go over the max boundary. If direct reclaim
820 fails to bring it down, the OOM killer is invoked.
821
822 oom
823
824 The number of times the OOM killer has been invoked in
825 the cgroup. This may not exactly match the number of
826 processes killed but should generally be close.
827
828 memory.stat
829
830 A read-only flat-keyed file which exists on non-root cgroups.
831
832 This breaks down the cgroup's memory footprint into different
833 types of memory, type-specific details, and other information
834 on the state and past events of the memory management system.
835
836 All memory amounts are in bytes.
837
838 The entries are ordered to be human readable, and new entries
839 can show up in the middle. Don't rely on items remaining in a
840 fixed position; use the keys to look up specific values!
841
842 anon
843
844 Amount of memory used in anonymous mappings such as
845 brk(), sbrk(), and mmap(MAP_ANONYMOUS)
846
847 file
848
849 Amount of memory used to cache filesystem data,
850 including tmpfs and shared memory.
851
852 kernel_stack
853
854 Amount of memory allocated to kernel stacks.
855
856 slab
857
858 Amount of memory used for storing in-kernel data
859 structures.
860
861 sock
862
863 Amount of memory used in network transmission buffers
864
865 file_mapped
866
867 Amount of cached filesystem data mapped with mmap()
868
869 file_dirty
870
871 Amount of cached filesystem data that was modified but
872 not yet written back to disk
873
874 file_writeback
875
876 Amount of cached filesystem data that was modified and
877 is currently being written back to disk
878
879 inactive_anon
880 active_anon
881 inactive_file
882 active_file
883 unevictable
884
885 Amount of memory, swap-backed and filesystem-backed,
886 on the internal memory management lists used by the
887 page reclaim algorithm
888
889 slab_reclaimable
890
891 Part of "slab" that might be reclaimed, such as
892 dentries and inodes.
893
894 slab_unreclaimable
895
896 Part of "slab" that cannot be reclaimed on memory
897 pressure.
898
899 pgfault
900
901 Total number of page faults incurred
902
903 pgmajfault
904
905 Number of major page faults incurred
906
907 memory.swap.current
908
909 A read-only single value file which exists on non-root
910 cgroups.
911
912 The total amount of swap currently being used by the cgroup
913 and its descendants.
914
915 memory.swap.max
916
917 A read-write single value file which exists on non-root
918 cgroups. The default is "max".
919
920 Swap usage hard limit. If a cgroup's swap usage reaches this
	limit, anonymous memory of the cgroup will not be swapped out.
922

5-2-2. Usage Guidelines
925
926"memory.high" is the main mechanism to control memory usage.
Over-committing on the high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
929usage is a viable strategy.
930
931Because breach of the high limit doesn't trigger the OOM killer but
932throttles the offending cgroup, a management agent has ample
933opportunities to monitor and take appropriate actions such as granting
934more memory or terminating the workload.
935
936Determining whether a cgroup has enough memory is not trivial as
937memory usage doesn't indicate whether the workload can benefit from
more memory. For example, a workload which writes data received from
the network to a file can use all available memory but can also perform
just as well with a small amount of memory. A measure of memory
941pressure - how much the workload is being impacted due to lack of
942memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
944implemented yet.
945
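Until then, a management agent can crudely approximate it by watching
the "high" entry of "memory.events" together with "memory.current";
the following is a rough sketch with a hypothetical cgroup path and
polling interval.

  # cd /sys/fs/cgroup/workload
  # watch -n 10 'cat memory.current; grep high memory.events'
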
946
9475-2-3. Memory Ownership
948
949A memory area is charged to the cgroup which instantiated it and stays
950charged to the cgroup until the area is released. Migrating a process
951to a different cgroup doesn't move the memory usages that it
952instantiated while in the previous cgroup to the new cgroup.
953
954A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
956over time, the memory area is likely to end up in a cgroup which has
957enough memory allowance to avoid high reclaim pressure.
958
959If a cgroup sweeps a considerable amount of memory which is expected
960to be accessed repeatedly by other cgroups, it may make sense to use
961POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
962belonging to the affected files to ensure correct memory ownership.
963
964
9655-3. IO
966
967The "io" controller regulates the distribution of IO resources. This
968controller implements both weight based and absolute bandwidth or IOPS
969limit distribution; however, weight based distribution is available
970only if cfq-iosched is in use and neither scheme is available for
971blk-mq devices.
972
973
9745-3-1. IO Interface Files
975
976 io.stat
977
978 A read-only nested-keyed file which exists on non-root
979 cgroups.
980
981 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
982 The following nested keys are defined.
983
984 rbytes Bytes read
985 wbytes Bytes written
986 rios Number of read IOs
987 wios Number of write IOs
988
989 An example read output follows.
990
991 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353
992 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252
993
994 io.weight
995
996 A read-write flat-keyed file which exists on non-root cgroups.
997 The default is "default 100".
998
999 The first line is the default weight applied to devices
1000 without specific override. The rest are overrides keyed by
1001 $MAJ:$MIN device numbers and not ordered. The weights are in
	the range [1, 10000] and specify the relative amount of IO time
1003 the cgroup can use in relation to its siblings.
1004
1005 The default weight can be updated by writing either "default
1006 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
1007 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
1008
1009 An example read output follows.
1010
1011 default 100
1012 8:16 200
1013 8:0 50
1014
1015 io.max
1016
1017 A read-write nested-keyed file which exists on non-root
1018 cgroups.
1019
1020 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
1021 device numbers and not ordered. The following nested keys are
1022 defined.
1023
1024 rbps Max read bytes per second
1025 wbps Max write bytes per second
1026 riops Max read IO operations per second
1027 wiops Max write IO operations per second
1028
1029 When writing, any number of nested key-value pairs can be
1030 specified in any order. "max" can be specified as the value
1031 to remove a specific limit. If the same key is specified
1032 multiple times, the outcome is undefined.
1033
1034 BPS and IOPS are measured in each IO direction and IOs are
	delayed if the limit is reached. Temporary bursts are allowed.
1036
1037 Setting read limit at 2M BPS and write at 120 IOPS for 8:16.
1038
1039 echo "8:16 rbps=2097152 wiops=120" > io.max
1040
1041 Reading returns the following.
1042
1043 8:16 rbps=2097152 wbps=max riops=max wiops=120
1044
1045 Write IOPS limit can be removed by writing the following.
1046
1047 echo "8:16 wiops=max" > io.max
1048
1049 Reading now returns the following.
1050
1051 8:16 rbps=2097152 wbps=max riops=max wiops=max
1052
1053
10545-3-2. Writeback
1055
1056Page cache is dirtied through buffered writes and shared mmaps and
1057written asynchronously to the backing filesystem by the writeback
1058mechanism. Writeback sits between the memory and IO domains and
1059regulates the proportion of dirty memory by balancing dirtying and
1060write IOs.
1061
1062The io controller, in conjunction with the memory controller,
1063implements control of page cache writeback IOs. The memory controller
1064defines the memory domain that dirty memory ratio is calculated and
1065maintained for and the io controller defines the io domain which
1066writes out dirty pages for the memory domain. Both system-wide and
1067per-cgroup dirty memory states are examined and the more restrictive
1068of the two is enforced.
1069
1070cgroup writeback requires explicit support from the underlying
1071filesystem. Currently, cgroup writeback is implemented on ext2, ext4
1072and btrfs. On other filesystems, all writeback IOs are attributed to
1073the root cgroup.
1074
1075There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of writeback, an
1078inode is assigned to a cgroup and all IO requests to write dirty pages
1079from the inode are attributed to that cgroup.
1080
1081As cgroup ownership for memory is tracked per page, there can be pages
1082which are associated with different cgroups than the one the inode is
1083associated with. These are called foreign pages. The writeback
1084constantly keeps track of foreign pages and, if a particular foreign
1085cgroup becomes the majority over a certain period of time, switches
1086the ownership of the inode to that cgroup.
1087
1088While this model is enough for most use cases where a given inode is
1089mostly dirtied by a single cgroup even when the main writing cgroup
1090changes over time, use cases where multiple cgroups write to a single
1091inode simultaneously are not supported well. In such circumstances, a
1092significant portion of IOs are likely to be attributed incorrectly.
1093As memory controller assigns page ownership on the first use and
1094doesn't update it until the page is released, even if writeback
1095strictly follows page ownership, multiple cgroups dirtying overlapping
1096areas wouldn't work as expected. It's recommended to avoid such usage
1097patterns.
1098
1099The sysctl knobs which affect writeback behavior are applied to cgroup
1100writeback as follows.
1101
1102 vm.dirty_background_ratio
1103 vm.dirty_ratio
1104
1105 These ratios apply the same to cgroup writeback with the
1106 amount of available memory capped by limits imposed by the
1107 memory controller and system-wide clean memory.
1108
1109 vm.dirty_background_bytes
1110 vm.dirty_bytes
1111
	For cgroup writeback, this is calculated as a ratio against
1113 total available memory and applied the same way as
1114 vm.dirty[_background]_ratio.
1115
1116
1117P. Information on Kernel Programming
1118
1119This section contains kernel programming information in the areas
1120where interacting with cgroup is necessary. cgroup core and
1121controllers are not covered.
1122
1123
1124P-1. Filesystem Support for Writeback
1125
1126A filesystem can support cgroup writeback by updating
1127address_space_operations->writepage[s]() to annotate bio's using the
1128following two functions.
1129
1130 wbc_init_bio(@wbc, @bio)
1131
1132 Should be called for each bio carrying writeback data and
1133 associates the bio with the inode's owner cgroup. Can be
1134 called anytime between bio allocation and submission.
1135
1136 wbc_account_io(@wbc, @page, @bytes)
1137
1138 Should be called for each data segment being written out.
1139 While this function doesn't care exactly when it's called
1140 during the writeback session, it's the easiest and most
1141 natural to call it as data segments are added to a bio.
1142
1143With writeback bio's annotated, cgroup support can be enabled per
1144super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
1145selective disabling of cgroup writeback support which is helpful when
1146certain filesystem features, e.g. journaled data mode, are
1147incompatible.
1148
1149wbc_init_bio() binds the specified bio to its cgroup. Depending on
1150the configuration, the bio may be executed at a lower priority and if
1151the writeback session is holding shared resources, e.g. a journal
1152entry, may lead to priority inversion. There is no one easy solution
1153for the problem. Filesystems can try to work around specific problem
1154cases by skipping wbc_init_bio() or using bio_associate_blkcg()
1155directly.
1156
1157
1158D. Deprecated v1 Core Features
1159
1160- Multiple hierarchies including named ones are not supported.
1161
1162- All mount options and remounting are not supported.
1163
1164- The "tasks" file is removed and "cgroup.procs" is not sorted.
1165
1166- "cgroup.clone_children" is removed.
1167
1168- /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file
1169 at the root instead.
1170
1171
1172R. Issues with v1 and Rationales for v2
1173
1174R-1. Multiple Hierarchies
1175
1176cgroup v1 allowed an arbitrary number of hierarchies and each
1177hierarchy could host any number of controllers. While this seemed to
1178provide a high level of flexibility, it wasn't useful in practice.
1179
1180For example, as there is only one instance of each controller, utility
1181type controllers such as freezer which can be useful in all
1182hierarchies could only be used in one. The issue is exacerbated by
1183the fact that controllers couldn't be moved to another hierarchy once
1184hierarchies were populated. Another issue was that all controllers
1185bound to a hierarchy were forced to have exactly the same view of the
1186hierarchy. It wasn't possible to vary the granularity depending on
1187the specific controller.
1188
1189In practice, these issues heavily limited which controllers could be
1190put on the same hierarchy and most configurations resorted to putting
1191each controller on its own hierarchy. Only closely related ones, such
1192as the cpu and cpuacct controllers, made sense to be put on the same
1193hierarchy. This often meant that userland ended up managing multiple
1194similar hierarchies repeating the same steps on each hierarchy
1195whenever a hierarchy management operation was necessary.
1196
1197Furthermore, support for multiple hierarchies came at a steep cost.
1198It greatly complicated cgroup core implementation but more importantly
1199the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
1201
1202There was no limit on how many hierarchies there might be, which meant
1203that a thread's cgroup membership couldn't be described in finite
1204length. The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating number
1208of hierarchies.
1209
1210Also, as a controller couldn't have any expectation regarding the
1211topologies of hierarchies other controllers might be on, each
1212controller had to assume that all other controllers were attached to
1213completely orthogonal hierarchies. This made it impossible, or at
1214least very cumbersome, for controllers to cooperate with each other.
1215
1216In most use cases, putting controllers on hierarchies which are
1217completely orthogonal to each other isn't necessary. What usually is
1218called for is the ability to have differing levels of granularity
1219depending on the specific controller. In other words, hierarchy may
1220be collapsed from leaf towards root when viewed from specific
1221controllers. For example, a given configuration might not care about
1222how memory is distributed beyond a certain level while still wanting
1223to control how CPU cycles are distributed.
1224
1225
1226R-2. Thread Granularity
1227
1228cgroup v1 allowed threads of a process to belong to different cgroups.
1229This didn't make sense for some controllers and those controllers
1230ended up implementing different ways to ignore such situations but
1231much more importantly it blurred the line between API exposed to
1232individual applications and system management interface.
1233
1234Generally, in-process knowledge is available only to the process
1235itself; thus, unlike service-level organization of processes,
1236categorizing threads of a process requires active participation from
1237the application which owns the target process.
1238
1239cgroup v1 had an ambiguously defined delegation model which got abused
1240in combination with thread granularity. cgroups were delegated to
individual applications so that they could create and manage their own
1242sub-hierarchies and control resource distributions along them. This
1243effectively raised cgroup to the status of a syscall-like API exposed
1244to lay programs.
1245
1246First of all, cgroup has a fundamentally inadequate interface to be
1247exposed this way. For a process to access its own knobs, it has to
1248extract the path on the target hierarchy from /proc/self/cgroup,
1249construct the path by appending the name of the knob to the path, open
1250and then read and/or write to it. This is not only extremely clunky
1251and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can guarantee
1253that the process would actually be operating on its own sub-hierarchy.
1254
1255cgroup controllers implemented a number of knobs which would never be
1256accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem. cgroup ended up with interface
1258knobs which were not properly abstracted or refined and directly
1259revealed kernel internal details. These knobs got exposed to
1260individual applications through the ill-defined delegation mechanism
1261effectively abusing cgroup as a shortcut to implementing public APIs
1262without going through the required scrutiny.
1263
1264This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel inadvertently
exposed and became locked into constructs.
1267
1268
1269R-3. Competition Between Inner Nodes and Threads
1270
1271cgroup v1 allowed threads to be in any cgroups which created an
1272interesting problem where threads belonging to a parent cgroup and its
1273children cgroups competed for resources. This was nasty as two
1274different types of entities competed and there was no obvious way to
1275settle it. Different controllers did different things.
1276
1277The cpu controller considered threads and cgroups as equivalents and
1278mapped nice levels to cgroup weights. This worked for some cases but
1279fell flat when children wanted to be allocated specific ratios of CPU
1280cycles and the number of internal threads fluctuated - the ratios
1281constantly changed as the number of competing entities fluctuated.
1282There also were other issues. The mapping from nice level to weight
1283wasn't obvious or universal, and there were various other knobs which
1284simply weren't available for threads.
1285
1286The io controller implicitly created a hidden leaf node for each
1287cgroup to host the threads. The hidden leaf had its own copies of all
1288the knobs with "leaf_" prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
1290always added an extra layer of nesting which wouldn't be necessary
1291otherwise, made the interface messy and significantly complicated the
1292implementation.
1293
1294The memory controller didn't have a way to control what happened
1295between internal tasks and child cgroups and the behavior was not
1296clearly defined. There were attempts to add ad-hoc behaviors and
1297knobs to tailor the behavior to specific workloads which would have
1298led to problems extremely difficult to resolve in the long term.
1299
1300Multiple controllers struggled with internal tasks and came up with
1301different ways to deal with it; unfortunately, all the approaches were
1302severely flawed and, furthermore, the widely different behaviors
1303made cgroup as a whole highly inconsistent.
1304
1305This clearly is a problem which needs to be addressed from cgroup core
1306in a uniform way.
1307
1308
1309R-4. Other Interface Issues
1310
1311cgroup v1 grew without oversight and developed a large number of
1312idiosyncrasies and inconsistencies. One issue on the cgroup core side
1313was how an empty cgroup was notified - a userland helper binary was
1314forked and executed for each event. The event delivery wasn't
1315recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further complicating
1317the interface.
1318
1319Controller interfaces were problematic too. An extreme example is
1320controllers completely ignoring hierarchical organization and treating
1321all cgroups as if they were all located directly under the root
1322cgroup. Some controllers exposed a large amount of inconsistent
1323implementation details to userland.
1324
1325There also was no consistency across controllers. When a new cgroup
1326was created, some controllers defaulted to not imposing extra
1327restrictions while others disallowed any resource usage until
1328explicitly configured. Configuration knobs for the same type of
1329control used widely differing naming schemes and formats. Statistics
1330and information knobs were named arbitrarily and used different
1331formats and units even in the same controller.
1332
1333cgroup v2 establishes common conventions where appropriate and updates
1334controllers so that they expose minimal and consistent interfaces.
1335
1336
1337R-5. Controller Issues and Remedies
1338
1339R-5-1. Memory
1340
1341The original lower boundary, the soft limit, is defined as a limit
that is unset by default. As a result, the set of cgroups that
1343global reclaim prefers is opt-in, rather than opt-out. The costs for
1344optimizing these mostly negative lookups are so high that the
1345implementation, despite its enormous size, does not even provide the
1346basic desirable behavior. First off, the soft limit has no
1347hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are located
1349in the hierarchy. This makes subtree delegation impossible. Second,
1350the soft limit reclaim pass is so aggressive that it not just
1351introduces high allocation latencies into the system, but also impacts
1352system performance due to overreclaim, to the point where the feature
1353becomes self-defeating.
1354
1355The memory.low boundary on the other hand is a top-down allocated
1356reserve. A cgroup enjoys reclaim protection when it and all its
1357ancestors are below their low boundaries, which makes delegation of
subtrees possible. Secondly, new cgroups have no reserve by default
1359and in the common case most cgroups are eligible for the preferred
1360reclaim pass. This allows the new low boundary to be efficiently
1361implemented with just a minor addition to the generic reclaim code,
1362without the need for out-of-band data structures and reclaim passes.
1363Because the generic reclaim code considers all cgroups except for the
1364ones running low in the preferred first reclaim pass, overreclaim of
1365individual groups is eliminated as well, resulting in much better
1366overall workload performance.
1367
1368The original high boundary, the hard limit, is defined as a strict
1369limit that can not budge, even if the OOM killer has to be called.
1370But this generally goes against the goal of making the most out of the
1371available memory. The memory consumption of workloads varies during
1372runtime, and that requires users to overcommit. But doing that with a
1373strict upper limit requires either a fairly accurate prediction of the
1374working set size or adding slack to the limit. Since working set size
1375estimation is hard and error prone, and getting it wrong results in
1376OOM kills, most users tend to err on the side of a looser limit and
1377end up wasting precious resources.
1378
1379The memory.high boundary on the other hand can be set much more
1380conservatively. When hit, it throttles allocations by forcing them
1381into direct reclaim to work off the excess, but it never invokes the
1382OOM killer. As a result, a high boundary that is chosen too
1383aggressively will not terminate the processes, but instead it will
1384lead to gradual performance degradation. The user can monitor this
1385and make corrections until the minimal memory footprint that still
1386gives acceptable performance is found.
1387
1388In extreme cases, with many concurrent allocations and a complete
1389breakdown of reclaim progress within the group, the high boundary can
1390be exceeded. But even then it's mostly better to satisfy the
1391allocation from the slack available in other groups or the rest of the
1392system than killing the group. Otherwise, memory.max is there to
1393limit this type of spillover and ultimately contain buggy or even
1394malicious applications.
1396Setting the original memory.limit_in_bytes below the current usage was
1397subject to a race condition, where concurrent charges could cause the
1398limit setting to fail. memory.max on the other hand will first set the
1399limit to prevent new charges, and then reclaim and OOM kill until the
1400new limit is met - or the task writing to memory.max is killed.
1401
1402The combined memory+swap accounting and limiting is replaced by real
1403control over swap space.
1404
1405The main argument for a combined memory+swap facility in the original
1406cgroup design was that global or parental pressure would always be
1407able to swap all anonymous memory of a child group, regardless of the
1408child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing their
1410anonymous memory in a tight loop - and an admin can not assume full
1411swappability when overcommitting untrusted jobs.
1412
1413For trusted jobs, on the other hand, a combined counter is not an
1414intuitive userspace interface, and it flies in the face of the idea
1415that cgroup controllers should account and limit specific physical
1416resources. Swap space is a resource like all others in the system,
1417and that's why unified hierarchy allows distributing it separately.