= Userfaultfd =

== Objective ==

Userfaults allow the implementation of on-demand paging from userland
and more generally they allow userland to take control of various
memory page faults, something otherwise only the kernel code could do.

For example userfaults allow a proper and more optimal implementation
of the PROT_NONE+SIGSEGV trick.
== Design ==

Userfaults are delivered and resolved through the userfaultfd syscall.

The userfaultfd (aside from registering and unregistering virtual
memory ranges) provides two primary functionalities:

1) a read/POLLIN protocol to notify a userland thread of the faults
   happening

2) various UFFDIO_* ioctls that can manage the virtual memory regions
   registered in the userfaultfd, allowing userland to efficiently
   resolve the userfaults it receives via 1) or to manage the virtual
   memory in the background

The real advantage of userfaults compared to regular virtual memory
management with mremap/mprotect is that userfaults in all their
operations never involve heavyweight structures like vmas (in fact the
userfaultfd runtime load never takes the mmap_sem for writing).

Vmas are not suitable for page- (or hugepage-) granular fault tracking
when dealing with virtual address spaces that could span
terabytes. Too many vmas would be needed for that.

The userfaultfd, once opened by invoking the syscall, can also be
passed using unix domain sockets to a manager process, so the same
manager process could handle the userfaults of a multitude of
different processes without them being aware of what is going on
(well of course unless they later try to use the userfaultfd
themselves on the same region the manager is already tracking, which
is a corner case that would currently return -EBUSY).

== API ==

When first opened the userfaultfd must be enabled by invoking the
UFFDIO_API ioctl with a uffdio_api.api value set to UFFD_API (or a
later API version), which specifies the read/POLLIN protocol userland
intends to speak on the UFFD, and with the uffdio_api.features
userland requires. If successful (i.e. if the requested uffdio_api.api
is also spoken by the running kernel and the requested features are
going to be enabled) the UFFDIO_API ioctl will return into
uffdio_api.features and uffdio_api.ioctls two 64bit bitmasks of,
respectively, all the available features of the read(2) protocol and
the generic ioctls available.

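As an illustration, a minimal sketch of this handshake in C could
look like the following (error handling is reduced to a bare minimum
and the variable names are only illustrative):

    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* open the userfaultfd; glibc provides no wrapper for it */
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        struct uffdio_api api = { .api = UFFD_API, .features = 0 };

        if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api)) {
            perror("userfaultfd/UFFDIO_API");
            return 1;
        }
        /* the kernel filled in the supported features and ioctls */
        printf("features 0x%llx ioctls 0x%llx\n",
               (unsigned long long)api.features,
               (unsigned long long)api.ioctls);
        return 0;
    }
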
The uffdio_api.features bitmask returned by the UFFDIO_API ioctl
defines what memory types are supported by the userfaultfd and which
events, other than page fault notifications, may be generated.

If the kernel supports registering userfaultfd ranges on hugetlbfs
virtual memory areas, UFFD_FEATURE_MISSING_HUGETLBFS will be set in
uffdio_api.features. Similarly, UFFD_FEATURE_MISSING_SHMEM will be
set if the kernel supports registering userfaultfd ranges on shared
memory (covering all shmem APIs, i.e. tmpfs, IPCSHM, /dev/zero
MAP_SHARED, memfd_create, etc).

A userland application that wants to use userfaultfd with hugetlbfs
or shared memory needs to set the corresponding flags in
uffdio_api.features to enable those features.

If userland desires to receive notifications for events other than
page faults, it has to verify that uffdio_api.features has the
appropriate UFFD_FEATURE_EVENT_* bits set. These events are described
in more detail in the "Non-cooperative userfaultfd" section below.

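For instance, a manager that wants hugetlbfs and shmem coverage plus
fork events could request them in the handshake roughly as follows (a
sketch; which feature bits to request depends on the application):

    struct uffdio_api api = {
        .api = UFFD_API,
        .features = UFFD_FEATURE_MISSING_HUGETLBFS |
                    UFFD_FEATURE_MISSING_SHMEM |
                    UFFD_FEATURE_EVENT_FORK,
    };

    /* fails with -EINVAL if a requested feature is unsupported */
    if (ioctl(uffd, UFFDIO_API, &api))
        perror("UFFDIO_API");
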
Once the userfaultfd has been enabled the UFFDIO_REGISTER ioctl should
be invoked (if present in the returned uffdio_api.ioctls bitmask) to
register a memory range in the userfaultfd by setting the
uffdio_register structure accordingly. The uffdio_register.mode
bitmask will specify to the kernel which kinds of faults to track for
the range (UFFDIO_REGISTER_MODE_MISSING would track missing
pages). The UFFDIO_REGISTER ioctl will return the
uffdio_register.ioctls bitmask of ioctls that are suitable to resolve
userfaults on the range registered. Not all ioctls will necessarily be
supported for all memory types, depending on the underlying virtual
memory backend (anonymous memory vs tmpfs vs real file-backed
mappings).

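Continuing the sketch above, registering an existing mapping for
missing-page tracking might look like this (addr and len are assumed
to be page aligned and to describe a valid mapping):

    struct uffdio_register reg = {
        .range = { .start = (unsigned long)addr, .len = len },
        .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };

    if (ioctl(uffd, UFFDIO_REGISTER, &reg))
        perror("UFFDIO_REGISTER");

    /* reg.ioctls now lists the ioctls usable on this range */
    if (!(reg.ioctls & (1ULL << _UFFDIO_COPY)))
        fprintf(stderr, "UFFDIO_COPY not supported here\n");
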
Userland can use the uffdio_register.ioctls to manage the virtual
address space in the background (to add or potentially also remove
memory from the userfaultfd registered range). This means a userfault
could trigger just before userland maps the user-faulted page in the
background.

The primary ioctl to resolve userfaults is UFFDIO_COPY. It
atomically copies a page into the userfault registered range and wakes
up the blocked userfaults (unless uffdio_copy.mode &
UFFDIO_COPY_MODE_DONTWAKE is set). Other ioctls work similarly to
UFFDIO_COPY. They're atomic in the sense of guaranteeing that nothing
can see a half-copied page, since it'll keep userfaulting until the
copy has finished.

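Resolving a fault with UFFDIO_COPY boils down to something like the
sketch below (page is a local buffer holding the new page contents
and fault_addr is the faulting address rounded down to the page
boundary; both names are hypothetical):

    struct uffdio_copy copy = {
        .dst  = fault_addr,            /* page aligned destination */
        .src  = (unsigned long)page,
        .len  = page_size,
        .mode = 0,                     /* 0 wakes the blocked fault */
    };

    if (ioctl(uffd, UFFDIO_COPY, &copy))
        perror("UFFDIO_COPY");
    /* on success copy.copy holds the number of bytes copied */
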
== QEMU/KVM ==

QEMU/KVM uses the userfaultfd syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
externalization consisting of a virtual machine running with part or
all of its memory residing on a different node in the cloud. The
userfaultfd abstraction is generic enough that not a single line of
KVM kernel code had to be modified in order to add postcopy live
migration to QEMU.

Guest async page faults, FOLL_NOWAIT and all other GUP features work
just fine in combination with userfaults. Userfaults trigger async
page faults in the guest scheduler so those guest processes that
aren't waiting for userfaults (i.e. network bound) can keep running in
the guest vcpus.

It is generally beneficial to run one pass of precopy live migration
just before starting postcopy live migration, in order to avoid
generating userfaults for readonly guest regions.

The implementation of postcopy live migration currently uses one
single bidirectional socket but in the future two different sockets
will be used (to reduce the latency of the userfaults to the minimum
possible without having to decrease /proc/sys/net/ipv4/tcp_wmem).

The QEMU in the source node writes all pages that it knows are missing
in the destination node into the socket, and the migration thread of
the QEMU running in the destination node runs UFFDIO_COPY|ZEROPAGE
ioctls on the userfaultfd in order to map the received pages into the
guest (UFFDIO_ZEROPAGE is used if the source page was a zero page).

A different postcopy thread in the destination node listens with
poll() to the userfaultfd in parallel. When a POLLIN event is
generated after a userfault triggers, the postcopy thread reads from
the userfaultfd and receives the fault address (or -EAGAIN in case the
userfault was already resolved and woken by a UFFDIO_COPY|ZEROPAGE run
by the parallel QEMU migration thread).

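The core of such a fault-handling thread boils down to a loop along
these lines (a simplified sketch, not QEMU's actual code;
handle_missing_page() stands in for whatever resolves the fault):

    #include <poll.h>

    struct pollfd pfd = { .fd = uffd, .events = POLLIN };
    struct uffd_msg msg;

    for (;;) {
        if (poll(&pfd, 1, -1) < 0)
            break;
        /* read() fails with -EAGAIN if the fault was already
         * resolved and woken in the meantime by the other thread */
        if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
            continue;
        if (msg.event == UFFD_EVENT_PAGEFAULT)
            handle_missing_page(msg.arg.pagefault.address);
    }
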
After the QEMU postcopy thread (running in the destination node) gets
the userfault address it writes the information about the missing page
into the socket. The QEMU source node receives the information and
roughly "seeks" to that page address and continues sending all
remaining missing pages from that new page offset. Soon after that
(just the time to flush the tcp_wmem queue through the network) the
migration thread in the QEMU running in the destination node will
receive the page that triggered the userfault and it'll map it as
usual with UFFDIO_COPY|ZEROPAGE (without actually knowing if it
was spontaneously sent by the source or if it was an urgent page
requested through a userfault).

By the time the userfaults start, the QEMU in the destination node
doesn't need to keep around any per-page state bitmap relative to the
live migration; only a single per-page bitmap has to be maintained in
the QEMU running in the source node to know which pages are still
missing in the destination node. The bitmap in the source node is
checked to find which missing pages to send in round robin, and we
seek over it when receiving incoming userfaults. After sending each
page the bitmap is of course updated accordingly. The bitmap is also
useful to avoid sending the same page twice (in case the userfault is
read by the postcopy thread just before UFFDIO_COPY|ZEROPAGE runs in
the migration thread).

== Non-cooperative userfaultfd ==

When the userfaultfd is monitored by an external manager, the manager
must be able to track changes in the process virtual memory
layout. Userfaultfd can notify the manager about such changes using
the same read(2) protocol as for the page fault notifications. The
manager has to explicitly enable these events by setting the
appropriate bits in uffdio_api.features passed to the UFFDIO_API
ioctl:

UFFD_FEATURE_EVENT_FORK - enable userfaultfd hooks for fork(). When
this feature is enabled, the userfaultfd context of the parent process
is duplicated into the newly created process. The manager receives
UFFD_EVENT_FORK with the file descriptor of the new userfaultfd
context in uffd_msg.fork.

UFFD_FEATURE_EVENT_REMAP - enable notifications about mremap()
calls. When the non-cooperative process moves a virtual memory area to
a different location, the manager will receive UFFD_EVENT_REMAP. The
uffd_msg.remap will contain the old and new addresses of the area and
its original length.

UFFD_FEATURE_EVENT_REMOVE - enable notifications about
madvise(MADV_REMOVE) and madvise(MADV_DONTNEED) calls. The event
UFFD_EVENT_REMOVE will be generated upon these calls to madvise. The
uffd_msg.remove will contain the start and end addresses of the
removed area.

UFFD_FEATURE_EVENT_UNMAP - enable notifications about memory
unmapping. The manager will get UFFD_EVENT_UNMAP with uffd_msg.remove
containing the start and end addresses of the unmapped area.

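Dispatching on these events in the manager's read loop could look
like the following sketch (track_child(), move_range() and
drop_range() are hypothetical helpers standing in for the manager's
own bookkeeping):

    switch (msg.event) {
    case UFFD_EVENT_FORK:
        /* a brand new userfaultfd for the child process */
        track_child(msg.arg.fork.ufd);
        break;
    case UFFD_EVENT_REMAP:
        move_range(msg.arg.remap.from, msg.arg.remap.to,
                   msg.arg.remap.len);
        break;
    case UFFD_EVENT_REMOVE:
    case UFFD_EVENT_UNMAP:
        /* UFFD_EVENT_UNMAP reuses the uffd_msg.remove field */
        drop_range(msg.arg.remove.start, msg.arg.remove.end);
        break;
    }
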
Although UFFD_FEATURE_EVENT_REMOVE and UFFD_FEATURE_EVENT_UNMAP are
pretty similar, they differ quite a bit in the action expected from
the userfaultfd manager. In the former case, the virtual memory is
removed, but the area is not: the area remains monitored by the
userfaultfd, and if a page fault occurs in that area it will be
delivered to the manager. The proper resolution for such a page fault
is to zeromap the faulting address. However, in the latter case, when
an area is unmapped, either explicitly (with the munmap() system
call) or implicitly (e.g. during mremap()), the area is removed and
in turn the userfaultfd context for that area disappears too, and the
manager will not get further userland page faults from the removed
area. Still, the notification is required in order to prevent the
manager from using UFFDIO_COPY on the unmapped area.

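Zeromapping the faulting address, as suggested above, is a matter of
a UFFDIO_ZEROPAGE call, roughly (fault_addr and page_size as in the
earlier sketches):

    struct uffdio_zeropage zp = {
        .range = { .start = fault_addr, .len = page_size },
        .mode  = 0,
    };

    if (ioctl(uffd, UFFDIO_ZEROPAGE, &zp))
        perror("UFFDIO_ZEROPAGE");
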
Unlike userland page faults, which have to be synchronous and require
an explicit or implicit wakeup, all the events are delivered
asynchronously and the non-cooperative process resumes execution as
soon as the manager executes read(). The userfaultfd manager should
carefully synchronize calls to UFFDIO_COPY with event
processing. To aid the synchronization, the UFFDIO_COPY ioctl will
return -ENOSPC when the monitored process exits at the time of
UFFDIO_COPY, and -ENOENT when the non-cooperative process has changed
its virtual memory layout simultaneously with an outstanding
UFFDIO_COPY operation.

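A manager therefore has to be prepared for UFFDIO_COPY to fail in
these two ways, along the lines of this sketch (drain_events() is a
hypothetical helper that processes the pending event queue):

    while (ioctl(uffd, UFFDIO_COPY, &copy)) {
        if (errno == ENOENT) {
            /* layout changed under us: handle events, then retry */
            drain_events(uffd);
            continue;
        }
        if (errno == ENOSPC) {
            /* the monitored process exited: stop tracking it */
            break;
        }
        perror("UFFDIO_COPY");
        break;
    }
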
The current asynchronous model of event delivery is optimal for
single-threaded non-cooperative userfaultfd manager implementations. A
synchronous event delivery model can be added later as a new
userfaultfd feature to facilitate multithreading enhancements of the
non-cooperative manager, for example to allow UFFDIO_COPY ioctls to
run in parallel with event reception. Single-threaded
implementations should continue to use the current async event
delivery model instead.