Version 14 of schedstats includes support for sched_domains, which hit the
mainline kernel in 2.6.20, although it is identical to the stats from version
12, which was in the kernel from 2.6.13-2.6.19 (version 13 never saw a kernel
release). Some counters make more sense to be per-runqueue; others to be
per-domain. Note that domains (and their associated information) will only
be pertinent and available on machines utilizing CONFIG_SMP.

In version 14 of schedstat, there is at least one level of domain
statistics for each cpu listed, and there may well be more than one
domain. Domains have no particular names in this implementation, but
the highest numbered one typically arbitrates balancing across all the
cpus on the machine, while domain0 is the most tightly focused domain,
sometimes balancing only between pairs of cpus. At this time, there
are no architectures which need more than three domain levels. The first
field in the domain stats is a bit map indicating which cpus are affected
by that domain.

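As a purely hypothetical illustration (not actual output), a four-cpu
machine might show two domain levels under each cpu, with the cpumask
field identifying the cpus each domain spans:

    cpu0 ...
    domain0 0003 ...    (balances between cpus 0-1 only)
    domain1 000f ...    (balances across all four cpus)
    cpu1 ...
    domain0 0003 ...
    domain1 000f ...
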
These fields are counters, and only increment. Programs which make use
of these will need to start with a baseline observation and then calculate
the change in the counters at each subsequent observation. A perl script
which does this for many of the fields is available at

    http://eaglet.rain.com/rick/linux/schedstat/

Note that any such script will necessarily be version-specific, as the main
reason to change versions is changes in the output format. For those wishing
to write their own scripts, the fields are described here.

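As a starting point for such a script, the baseline-and-delta technique
might look like the C sketch below. This is a minimal illustration, not a
complete parser: it samples only the first counter on the cpu0 line and
assumes the version 14 format described in this file.

    /* delta.c - sample one schedstat counter twice and print the change.
     * Minimal sketch; a real tool would check the version line and walk
     * every cpu and domain.  Build: cc -o delta delta.c */
    #include <stdio.h>
    #include <unistd.h>

    /* Return the first counter on the "cpu0" line, or 0 if not found. */
    static unsigned long long read_cpu0_field1(void)
    {
            char line[1024];
            unsigned long long val = 0;
            FILE *f = fopen("/proc/schedstat", "r");

            if (!f)
                    return 0;
            while (fgets(line, sizeof(line), f))
                    if (sscanf(line, "cpu0 %llu", &val) == 1)
                            break;
            fclose(f);
            return val;
    }

    int main(void)
    {
            unsigned long long before = read_cpu0_field1();

            sleep(10);      /* observation interval */
            printf("change over 10 seconds: %llu\n",
                   read_cpu0_field1() - before);
            return 0;
    }
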
CPU statistics
--------------
cpu<N> 1 2 3 4 5 6 7 8 9 10 11 12

NOTE: In the sched_yield() statistics, the active queue is considered empty
if it has only one process in it, since obviously the process calling
sched_yield() is that process.

First four fields are sched_yield() statistics:
     1) # of times both the active and the expired queue were empty
     2) # of times just the active queue was empty
     3) # of times just the expired queue was empty
     4) # of times sched_yield() was called

Next three are schedule() statistics:
     5) # of times we switched to the expired queue and reused it
     6) # of times schedule() was called
     7) # of times schedule() left the processor idle

Next two are try_to_wake_up() statistics:
     8) # of times try_to_wake_up() was called
     9) # of times try_to_wake_up() was called to wake up the local cpu

Next three are statistics describing scheduling latency (a sketch that
uses them follows this list):
    10) sum of all time spent running by tasks on this processor (in jiffies)
    11) sum of all time spent waiting to run by tasks on this processor (in
        jiffies)
    12) # of timeslices run on this cpu

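As an example of using the latency fields, the sketch below computes the
average time in jiffies that tasks spent waiting to run per timeslice on
each cpu (field 11 divided by field 12). It is a minimal illustration
assuming the 12-field version 14 cpu line described above:

    /* avgwait.c - average runqueue wait per timeslice for each cpu.
     * Minimal sketch assuming the version 14 cpu line format. */
    #include <stdio.h>

    int main(void)
    {
            char line[1024];
            FILE *f = fopen("/proc/schedstat", "r");

            if (!f)
                    return 1;
            while (fgets(line, sizeof(line), f)) {
                    int cpu;
                    unsigned long long v[12];

                    /* fields 1-12 as described above */
                    if (sscanf(line, "cpu%d %llu %llu %llu %llu %llu %llu"
                               " %llu %llu %llu %llu %llu %llu", &cpu,
                               &v[0], &v[1], &v[2], &v[3], &v[4], &v[5],
                               &v[6], &v[7], &v[8], &v[9], &v[10],
                               &v[11]) != 13)
                            continue;
                    if (v[11])      /* field 12: # of timeslices */
                            printf("cpu%d: %llu jiffies waited per"
                                   " timeslice\n", cpu, v[10] / v[11]);
            }
            fclose(f);
            return 0;
    }
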
Domain statistics
-----------------
One of these is produced per domain for each cpu described. (Note that if
CONFIG_SMP is not defined, *no* domains are utilized and these lines
will not appear in the output.)

domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36

The first field is a bit mask indicating which cpus this domain operates
over (a sketch that decodes it appears after the field descriptions below).

The next 24 fields are a variety of load_balance() statistics, grouped into
types of idleness (idle, busy, and newly idle):

     1) # of times in this domain load_balance() was called when the
        cpu was idle
     2) # of times in this domain load_balance() checked but found
        the load did not require balancing when the cpu was idle
     3) # of times in this domain load_balance() tried to move one or
        more tasks and failed, when the cpu was idle
     4) sum of imbalances discovered (if any) with each call to
        load_balance() in this domain when the cpu was idle
     5) # of times in this domain pull_task() was called when the cpu
        was idle
     6) # of times in this domain pull_task() was called even though
        the target task was cache-hot when idle
     7) # of times in this domain load_balance() was called but did
        not find a busier queue while the cpu was idle
     8) # of times in this domain a busier queue was found while the
        cpu was idle but no busier group was found

     9) # of times in this domain load_balance() was called when the
        cpu was busy
    10) # of times in this domain load_balance() checked but found the
        load did not require balancing when busy
    11) # of times in this domain load_balance() tried to move one or
        more tasks and failed, when the cpu was busy
    12) sum of imbalances discovered (if any) with each call to
        load_balance() in this domain when the cpu was busy
    13) # of times in this domain pull_task() was called when busy
    14) # of times in this domain pull_task() was called even though the
        target task was cache-hot when busy
    15) # of times in this domain load_balance() was called but did not
        find a busier queue while the cpu was busy
    16) # of times in this domain a busier queue was found while the cpu
        was busy but no busier group was found

    17) # of times in this domain load_balance() was called when the
        cpu was just becoming idle
    18) # of times in this domain load_balance() checked but found the
        load did not require balancing when the cpu was just becoming idle
    19) # of times in this domain load_balance() tried to move one or more
        tasks and failed, when the cpu was just becoming idle
    20) sum of imbalances discovered (if any) with each call to
        load_balance() in this domain when the cpu was just becoming idle
    21) # of times in this domain pull_task() was called when newly idle
    22) # of times in this domain pull_task() was called even though the
        target task was cache-hot when just becoming idle
    23) # of times in this domain load_balance() was called but did not
        find a busier queue while the cpu was just becoming idle
    24) # of times in this domain a busier queue was found while the cpu
        was just becoming idle but no busier group was found

Next three are active_load_balance() statistics:
    25) # of times active_load_balance() was called
    26) # of times active_load_balance() tried to move a task and failed
    27) # of times active_load_balance() successfully moved a task

Next three are sched_balance_exec() statistics:
    28) sbe_cnt is not used
    29) sbe_balanced is not used
    30) sbe_pushed is not used

Next three are sched_balance_fork() statistics:
    31) sbf_cnt is not used
    32) sbf_balanced is not used
    33) sbf_pushed is not used

Next three are try_to_wake_up() statistics:
    34) # of times in this domain try_to_wake_up() awoke a task that
        last ran on a different cpu in this domain
    35) # of times in this domain try_to_wake_up() moved a task to the
        waking cpu because it was cache-cold on its own cpu anyway
    36) # of times in this domain try_to_wake_up() started passive balancing

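The sketch below shows one way to digest a domain line: it decodes the
cpumask and reports what share of idle-time load_balance() calls found no
busier queue (field 7 as a fraction of field 1). It is again a minimal
illustration assuming the version 14 layout, and it reads the cpumask as a
single hex word, which only suffices on machines with at most 64 cpus:

    /* domidle.c - per-domain idle balancing summary (version 14 format). */
    #include <stdio.h>

    int main(void)
    {
            char line[2048];
            FILE *f = fopen("/proc/schedstat", "r");

            if (!f)
                    return 1;
            while (fgets(line, sizeof(line), f)) {
                    int level, cpu;
                    unsigned long long mask, lb_idle, lb_nobusyq;

                    /* field 1 is the idle load_balance() count; fields
                     * 2-6 are skipped; field 7 is "no busier queue" */
                    if (sscanf(line, "domain%d %llx %llu %*u %*u %*u %*u"
                               " %*u %llu", &level, &mask, &lb_idle,
                               &lb_nobusyq) != 4)
                            continue;
                    printf("domain%d (cpus:", level);
                    for (cpu = 0; cpu < 64; cpu++)
                            if (mask & (1ULL << cpu))
                                    printf(" %d", cpu);
                    if (lb_idle)
                            printf(") idle balances finding no busier"
                                   " queue: %llu%%\n",
                                   100 * lb_nobusyq / lb_idle);
                    else
                            printf(") no idle balances yet\n");
            }
            fclose(f);
            return 0;
    }
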
/proc/<pid>/schedstat
---------------------
schedstats also adds a new /proc/<pid>/schedstat file to include some of
the same information on a per-process level. There are three fields in
this file, corresponding for that process to:
     1) time spent on the cpu
     2) time spent waiting on a runqueue
     3) # of timeslices run on this cpu

A program could be easily written to make use of these extra fields to
report on how well a particular process or set of processes is faring
under the scheduler's policies. A simple version of such a program is
available at

    http://eaglet.rain.com/rick/linux/schedstat/v12/latency.c
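
A minimal sketch of such a program, in the spirit of the latency.c example
above, might read /proc/<pid>/schedstat once and report the average wait
per timeslice for one process:

    /* platency.c - scheduling latency summary for one pid.
     * Usage: ./platency <pid>   (minimal sketch, no error reporting) */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
            char path[64], line[256];
            unsigned long long run, wait, slices;
            FILE *f;

            if (argc != 2)
                    return 1;
            snprintf(path, sizeof(path), "/proc/%s/schedstat", argv[1]);
            f = fopen(path, "r");
            if (!f || !fgets(line, sizeof(line), f))
                    return 1;
            if (sscanf(line, "%llu %llu %llu", &run, &wait, &slices) == 3
                && slices)
                    printf("pid %s: ran %llu, waited %llu, average wait"
                           " per timeslice %llu\n",
                           argv[1], run, wait, wait / slices);
            fclose(f);
            return 0;
    }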