#
# General architecture dependent options
#

config OPROFILE
	tristate "OProfile system profiling"
	depends on PROFILING
	depends on HAVE_OPROFILE
	select RING_BUFFER
	select RING_BUFFER_ALLOW_SWAP
	help
	  OProfile is a profiling system capable of profiling the
	  whole system, including the kernel, kernel modules, libraries,
	  and applications.

	  If unsure, say N.

config OPROFILE_EVENT_MULTIPLEX
	bool "OProfile multiplexing support (EXPERIMENTAL)"
	default n
	depends on OPROFILE && X86
	help
	  The number of hardware counters is limited. The multiplexing
	  feature enables OProfile to gather more events than the hardware
	  provides counters for. This is realized by switching between
	  events at a user-specified time interval.

	  If unsure, say N.

config HAVE_OPROFILE
	bool

config OPROFILE_NMI_TIMER
	def_bool y
	depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !PPC64

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM have such
	  branches and include support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. Updates
	  of the condition are slower, but those are always very rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	  flags may increase the size of the kernel slightly. )

config STATIC_KEYS_SELFTEST
	bool "Static key selftest"
	depends on JUMP_LABEL
	help
	  Boot time self-test of the branch patching code.

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	depends on !PREEMPT

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If the function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

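# The KPROBES, OPTPROBES and KPROBES_ON_FTRACE options above only become
# available once an architecture advertises the corresponding capabilities.
# A minimal sketch (the arch symbol FOOARCH is hypothetical) of how an arch
# Kconfig opts in by selecting the HAVE_* symbols defined later in this file:
#
#	config FOOARCH
#		def_bool y
#		select HAVE_KPROBES
#		select HAVE_KRETPROBES
#		select HAVE_OPTPROBES
#		select HAVE_KPROBES_ON_FTRACE
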
config UPROBES
	def_bool n
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler.)

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

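# A brief sketch of how this capability symbol is typically consumed; the
# arch and driver option names below are hypothetical. The architecture
# selects the symbol, and other options (or #ifdef'd C code paths) can then
# key off CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS:
#
#	config FOOARCH
#		def_bool y
#		select HAVE_EFFICIENT_UNALIGNED_ACCESS
#
#	config FOO_NET_FAST_RX
#		def_bool y
#		depends on FOO_NET && HAVE_EFFICIENT_UNALIGNED_ACCESS
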
config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

config KRETPROBES
	def_bool y
	depends on KPROBES && HAVE_KRETPROBES

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config HAVE_NMI_WATCHDOG
	bool
#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()		in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()	if there is hardware single-step support
#	arch_has_block_step()	if there is hardware block-step support
#	asm/syscall.h		supplying asm-generic/syscall.h interface
#	linux/regset.h		user_regset interfaces
#	CORE_DUMP_USE_REGSET	#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE	calls tracehook_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME	calls tracehook_notify_resume()
#	signal delivery		calls tracehook_signal_handler()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_ATTRS
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

# Select if arch init_task initializer is different from init/init_task.c
config ARCH_INIT_TASK
	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
	bool

# Select if arch has its private alloc_thread_info() function
config ARCH_THREAD_INFO_ALLOCATOR
	bool

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example, the kprobes-based event tracer needs this API.

config HAVE_CLK
	bool
	help
	  The <linux/clk.h> calls support software clock gating and
	  thus are a key power management tool on many systems.

config HAVE_DMA_API_DEBUG
	bool

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, while others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. The arch also supports calculating CPU cycle events
	  to determine how many clock cycles have elapsed in a given period.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_RCU_TABLE_FREE
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However, selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP_FILTER
	bool
	help
	  An arch should select this symbol if it provides all of these things:
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up

	  For best performance, an arch should use seccomp_phase1 and
	  seccomp_phase2 directly. It should call seccomp_phase1 for all
	  syscalls if TIF_SECCOMP is set, but seccomp_phase1 does not
	  need to be called from a ptrace-safe context. It must then
	  call seccomp_phase2 if seccomp_phase1 returns anything other
	  than SECCOMP_PHASE1_OK or SECCOMP_PHASE1_SKIP.

	  As an additional optimization, an arch may provide seccomp_data
	  directly to seccomp_phase1; this avoids multiple calls
	  to the syscall_xyz helpers for every syscall.

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/prctl/seccomp_filter.txt for details.

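# SECCOMP_FILTER above is a def_bool, so it is normally not selected
# directly; it switches itself on once its dependencies are met. A rough
# sketch of the chain (the arch symbol FOOARCH is hypothetical): the
# architecture wires up the pieces listed under HAVE_ARCH_SECCOMP_FILTER
# and selects that symbol, and SECCOMP_FILTER then becomes y automatically
# in any configuration that also enables SECCOMP and NET:
#
#	config FOOARCH
#		def_bool y
#		select HAVE_ARCH_SECCOMP_FILTER
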
config HAVE_CC_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - its compiler supports the -fstack-protector option
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config CC_STACKPROTECTOR
	def_bool n
	help
	  Set when a stack-protector mode is enabled, so that the build
	  can enable kernel-side support for the GCC feature.

choice
	prompt "Stack Protector buffer overflow detection"
	depends on HAVE_CC_STACKPROTECTOR
	default CC_STACKPROTECTOR_NONE
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

config CC_STACKPROTECTOR_NONE
	bool "None"
	help
	  Disable "stack-protector" GCC feature.

config CC_STACKPROTECTOR_REGULAR
	bool "Regular"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.

config CC_STACKPROTECTOR_STRONG
	bool "Strong"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

endchoice

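# A short sketch of how the stack-protector pieces above fit together (the
# arch symbol FOOARCH is hypothetical): the architecture selects
# HAVE_CC_STACKPROTECTOR once it has a stack canary implementation, which
# makes the choice above visible; picking "Regular" or "Strong" then selects
# CC_STACKPROTECTOR for the kernel-side support, and the build keys on the
# chosen mode symbol to pass the matching -fstack-protector* compiler flag:
#
#	config FOOARCH
#		def_bool y
#		select HAVE_CC_STACKPROTECTOR
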
config HAVE_CONTEXT_TRACKING
	bool
	help
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter() through
	  the slow path using the TIF_NOHZ flag. Exception handlers must be
	  wrapped as well. Irqs are already protected inside
	  rcu_irq_enter/rcu_irq_exit() but preemption or signal handling on
	  irq exit still needs to be protected.

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config HAVE_UNDERSCORE_SYMBOL_PREFIX
	bool
	help
	  Some architectures generate an _ in front of C symbols; things like
	  module loading and assembly files need to know about this.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  The architecture executes not only the irq handler but also
	  irq_exit() on the irq stack. This way softirqs can be processed
	  on this irq stack instead of switching to a new one when
	  __do_softirq() is called at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config PGTABLE_LEVELS
	int
	default 2

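# The default of 2 above is only a fallback. An architecture that uses more
# page-table levels typically provides its own PGTABLE_LEVELS definition in
# its arch Kconfig with arch-specific defaults; a rough sketch (the guard
# symbols FOO_64BIT_VA and FOO_PAE are hypothetical):
#
#	config PGTABLE_LEVELS
#		int
#		default 4 if FOO_64BIT_VA
#		default 3 if FOO_PAE
#		default 2
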
config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

config HAVE_COPY_THREAD_TLS
	bool
	help
	  Architecture provides copy_thread_tls to accept tls argument via
	  normal C parameter passing, rather than extracting the syscall
	  argument from pt_regs.

#
# ABI hall of shame
#
config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	bool
	help
	  Architecture has tls passed as the 3rd argument of clone(2),
	  not the 5th one.

config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments.

config OLD_SIGSUSPEND
	bool
	help
	  Architecture has the old sigsuspend(2) syscall, of the
	  one-argument variety.

config OLD_SIGSUSPEND3
	bool
	help
	  Even weirder antique ABI - three-argument sigsuspend(2).

config OLD_SIGACTION
	bool
	help
	  Architecture has the old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but a fairly different variant of sigaction(2), thanks to OSF/1
	  compatibility...

config COMPAT_OLD_SIGACTION
	bool

source "kernel/gcov/Kconfig"