#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FTRACE_NMI_ENTER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config HAVE_FTRACE_SYSCALLS
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER
	default y

config EVENT_TRACING
	bool

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway, they were tested to build and work. Note that new
	# exceptions to this list aren't welcome, better implement the
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT
	default y

if TRACING_SUPPORT

menu "Tracers"

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select FRAME_POINTER
	select KALLSYMS
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), then the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.
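	  (As an illustration: assuming debugfs is mounted at /debugfs, as
	  in the other examples in this file, the function tracer can be
	  selected at runtime via

	      echo function > /debugfs/tracing/current_tracer

	  and switched off again by echoing "nop" into the same file.)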

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	default y
	help
	  Enable the kernel to trace a function at both its entry and its
	  return.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread with some information like
	  the return value. This is done by saving the current return
	  address in a stack of calls on the current task structure.
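	  (As with the function tracer above, and assuming debugfs is
	  mounted at /debugfs, this tracer can be selected at runtime via

	      echo function_graph > /debugfs/tracing/current_tracer)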


config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	select TRACE_IRQFLAGS
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select TRACING
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
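	  (For illustration, and assuming debugfs is mounted at /debugfs,
	  the wakeup-latency tracer is selected like the other tracers:

	      echo wakeup > /debugfs/tracing/current_tracer

	  The recorded maximum can be reset with
	  "echo 0 > /debugfs/tracing/tracing_max_latency", as for the
	  irqs-off and preempt-off tracers above.)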

config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	select TRACING
	select MARKERS
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.
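	  (For illustration, and assuming the tracer is registered under
	  the name "sched_switch" with debugfs mounted at /debugfs, it is
	  selected at runtime via

	      echo sched_switch > /debugfs/tracing/current_tracer)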

config EVENT_TRACER
	bool "Trace various events in the kernel"
	select TRACING
	help
	  This tracer hooks into various trace points in the kernel,
	  allowing the user to pick and choose which trace points they
	  want to trace.

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_FTRACE_SYSCALLS
	select TRACING
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.

config BOOT_TRACER
	bool "Trace boot initcalls"
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  You must pass in ftrace=initcall on the kernel command line
	  to enable this on bootup.
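	  (A rough example session, assuming debugfs is mounted at /debug
	  as in the path above: boot with ftrace=initcall on the command
	  line, then after boot run something like

	      cat /debug/tracing/trace | perl scripts/bootgraph.pl > boot.svg

	  The exact bootgraph.pl invocation may differ; see the script
	  itself for the input it expects.)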

config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	select TRACING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

	  Say N if unsure.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether the branch was hit
	  or missed. The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. It should only be enabled when the system
	  is being analyzed.

	  Say N if unsure.

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely conditions are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on X86
	select TRACING
	help
	  This tracer helps developers to analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.


config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.
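	  (For example, assuming the standard sysctl interfaces are
	  available:

	      echo 1 > /proc/sys/kernel/stack_tracer_enabled

	  or "sysctl kernel.stack_tracer_enabled=1".)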

	  Say N if unsure.

config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	select TRACING
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.

config KMEMTRACE
	bool "Trace SLAB allocations"
	select TRACING
	help
	  kmemtrace provides tracing for slab allocator functions, such as
	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free, etc. Collected
	  data is then fed to the userspace application in order to analyse
	  allocation hotspots, internal fragmentation and so on, making it
	  possible to see how well an allocator performs, as well as debug
	  and profile kernel code.

	  This requires a userspace application to use. See
	  Documentation/trace/kmemtrace.txt for more information.

	  Saying Y will make the kernel somewhat larger and slower. However,
	  if you disable kmemtrace at run-time or boot-time, the performance
	  impact is minimal (depending on the arch the kernel is built for).

	  If unsure, say N.

config WORKQUEUE_TRACER
	bool "Trace workqueues"
	select TRACING
	help
	  The workqueue tracer provides some statistical information
	  about each CPU workqueue thread, such as the number of works
	  inserted and executed since its creation. It can help
	  to evaluate the amount of work each of them has to perform.
	  For example, it can help a developer decide whether to use
	  a per-CPU workqueue instead of a single-threaded one.

config BLK_DEV_IO_TRACE
	bool "Support for tracing block io actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select TRACING
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	      echo 1 > /sys/block/sda/sda1/trace/enable
	      echo blk > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (patching them out of the binary image and replacing them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
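	  (As an illustration of the related ftrace interface, the set of
	  functions that actually get traced can be restricted via the
	  set_ftrace_filter file, assuming debugfs is mounted at /debugfs:

	      echo 'sched*' > /debugfs/tracing/set_ftrace_filter

	  limits tracing to functions whose names match sched*.)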

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file profiling begins, and when a
	  zero is entered, profiling stops. A file in the trace_stats
	  directory called functions shows the list of functions that
	  have been hit and their counters.
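	  (For example, assuming debugfs is mounted at /debugfs:

	      echo 1 > /debugfs/tracing/function_profile_enabled
	      # ... run the workload to be profiled ...
	      echo 0 > /debugfs/tracing/function_profile_enabled

	  then read the results from the trace_stats directory mentioned
	  above.)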

	  If in doubt, say N.

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup,
	  a series of tests is run to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select TRACING
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.
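	  (A rough usage sketch, assuming debugfs is mounted at
	  /sys/kernel/debug as in the blktrace example above:

	      echo mmiotrace > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/trace_pipe > mydump.txt &
	      # ... load or exercise the driver under study ...
	      echo nop > /sys/kernel/debug/tracing/current_tracer

	  The documentation below describes the authoritative procedure.)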

	  See Documentation/trace/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

endmenu

endif # TRACING_SUPPORT