kcov: code coverage for fuzzing
===============================

kcov exposes kernel code coverage information in a form suitable for
coverage-guided fuzzing (randomized testing). Coverage data of a running
kernel is exported via the "kcov" debugfs file. Coverage collection is
enabled on a per-task basis, and thus it can capture precise coverage of a
single system call.

Note that kcov does not aim to collect as much coverage as possible. It aims
to collect more or less stable coverage that is a function of syscall inputs.
To achieve this goal it does not collect coverage in soft/hard interrupts,
and instrumentation of some inherently non-deterministic parts of the kernel
is disabled (e.g. scheduler, locking).

Usage
-----

Configure the kernel with::

    CONFIG_KCOV=y

CONFIG_KCOV requires gcc built from revision 231296 or later.
Profiling data will only become accessible once debugfs has been mounted::

    mount -t debugfs none /sys/kernel/debug

The following program demonstrates kcov usage from within a test program:

.. code-block:: c

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <fcntl.h>

    #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
    #define KCOV_ENABLE     _IO('c', 100)
    #define KCOV_DISABLE    _IO('c', 101)
    #define COVER_SIZE      (64<<10)

    int main(int argc, char **argv)
    {
        int fd;
        unsigned long *cover, n, i;

        /* A single fd descriptor allows coverage collection on a single
         * thread.
         */
        fd = open("/sys/kernel/debug/kcov", O_RDWR);
        if (fd == -1)
            perror("open"), exit(1);
        /* Setup trace mode and trace size. */
        if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
            perror("ioctl"), exit(1);
        /* Mmap buffer shared between kernel- and user-space. */
        cover = (unsigned long*)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if ((void*)cover == MAP_FAILED)
            perror("mmap"), exit(1);
        /* Enable coverage collection on the current thread. */
        if (ioctl(fd, KCOV_ENABLE, 0))
            perror("ioctl"), exit(1);
        /* Reset coverage from the tail of the ioctl() call. */
        __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
        /* That's the target syscall. */
        read(-1, NULL, 0);
        /* Read number of PCs collected. */
        n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
        for (i = 0; i < n; i++)
            printf("0x%lx\n", cover[i + 1]);
        /* Disable coverage collection for the current thread. After this call
         * coverage can be enabled for a different thread.
         */
        if (ioctl(fd, KCOV_DISABLE, 0))
            perror("ioctl"), exit(1);
        /* Free resources. */
        if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
            perror("munmap"), exit(1);
        if (close(fd))
            perror("close"), exit(1);
        return 0;
    }

After piping through addr2line, the output of the program looks as follows::

    SyS_read
    fs/read_write.c:562
    __fdget_pos
    fs/file.c:774
    __fget_light
    fs/file.c:746
    __fget_light
    fs/file.c:750
    __fget_light
    fs/file.c:760
    __fdget_pos
    fs/file.c:784
    SyS_read
    fs/read_write.c:562

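The symbolization itself is just a pipe. As a sketch (assuming the demo above
was built as ``kcov_test`` — a hypothetical binary name — and that ``vmlinux``
is the uncompressed kernel image with debug info matching the running
kernel)::

    ./kcov_test | addr2line -f -i -e vmlinux

``-f`` prints the function name for each address and ``-i`` expands inlined
call chains, which is what produces the interleaved function/file:line pairs
shown above.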
If a program needs to collect coverage from several threads (independently),
it needs to open /sys/kernel/debug/kcov in each thread separately.

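The layout of the shared buffer is worth spelling out: word 0 holds the
number of collected PCs and the PCs themselves start at word 1. The consumer
side can be sketched without the device at all; ``for_each_pc`` below is a
hypothetical helper (not part of the kcov API) and the buffer contents are
simulated:

.. code-block:: c

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical helper: walk a kcov-style buffer and hand every
     * collected PC to a callback. Word 0 is the count; PCs follow.
     */
    static unsigned long for_each_pc(const unsigned long *cover,
                                     void (*fn)(unsigned long pc))
    {
        unsigned long i, n = cover[0];

        for (i = 0; i < n; i++)
            fn(cover[i + 1]);
        return n;
    }

    static void print_pc(unsigned long pc)
    {
        printf("0x%lx\n", pc);
    }

    int main(void)
    {
        /* Simulated buffer: three PCs, laid out as the kernel writes them. */
        const unsigned long cover[] = { 3, 0x8100a1b2UL, 0x8100a1c0UL,
                                        0x8134fd10UL };

        assert(for_each_pc(cover, print_pc) == 3);
        return 0;
    }

In a real consumer the same walk runs over the mmaped buffer between the
KCOV_ENABLE and KCOV_DISABLE ioctls, as in the program above.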
The interface is fine-grained to allow efficient forking of test processes.
That is, a parent process opens /sys/kernel/debug/kcov, enables trace mode,
mmaps the coverage buffer, and then forks child processes in a loop. The
child processes only need to enable coverage (it is disabled automatically
when a thread exits).
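
The fork pattern above can be sketched as follows. This is illustrative only:
error handling is minimal, and when /sys/kernel/debug/kcov is absent (kcov
not configured, or debugfs not mounted) the demo simply skips itself rather
than failing:

.. code-block:: c

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
    #define KCOV_ENABLE     _IO('c', 100)
    #define KCOV_DISABLE    _IO('c', 101)
    #define COVER_SIZE      (64<<10)

    /* Returns 0 on success or when kcov is unavailable. */
    static int run_fork_demo(void)
    {
        unsigned long *cover;
        int fd, i;

        /* The parent opens the device, sets trace mode and mmaps the
         * shared buffer exactly once...
         */
        fd = open("/sys/kernel/debug/kcov", O_RDWR);
        if (fd == -1) {
            fprintf(stderr, "kcov unavailable, skipping\n");
            return 0;
        }
        if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
            return perror("ioctl"), 1;
        cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (cover == MAP_FAILED)
            return perror("mmap"), 1;

        /* ...and each forked child only has to enable collection;
         * coverage is disabled automatically when the child exits.
         */
        for (i = 0; i < 3; i++) {
            pid_t pid = fork();

            if (pid == 0) {
                if (ioctl(fd, KCOV_ENABLE, 0))
                    perror("ioctl"), exit(1);
                __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
                read(-1, NULL, 0); /* the syscall under test */
                printf("child %d: %lu PCs\n", i,
                       __atomic_load_n(&cover[0], __ATOMIC_RELAXED));
                exit(0);
            }
            waitpid(pid, NULL, 0);
        }
        munmap(cover, COVER_SIZE * sizeof(unsigned long));
        close(fd);
        return 0;
    }

    int main(void)
    {
        return run_fork_demo();
    }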