/*
 * Xen leaves the responsibility for maintaining p2m mappings to the
 * guests themselves, but it must also access and update the p2m array
 * during suspend/resume when all the pages are reallocated.
 *
 * The p2m table is logically a flat array, but we implement it as a
 * three-level tree to allow the address space to be sparse.
 *
 *                               Xen
 *                                |
 *       p2m_top              p2m_top_mfn
 *         /  \                   /   \
 * p2m_mid p2m_mid        p2m_mid_mfn p2m_mid_mfn
 *    / \    / \               /           /
 *  p2m p2m p2m p2m p2m p2m p2m ...
 *
 * The p2m_mid_mfn pages are mapped by p2m_top_mfn_p.
 *
 * The p2m_top and p2m_top_mfn levels are limited to 1 page, so the
 * maximum representable pseudo-physical address space is:
 *  P2M_TOP_PER_PAGE * P2M_MID_PER_PAGE * P2M_PER_PAGE pages
 *
 * P2M_PER_PAGE depends on the architecture, as an mfn is always
 * unsigned long (8 bytes on 64-bit, 4 bytes on 32-bit), leading to
 * 512 and 1024 entries respectively.
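 *
 * For concreteness, with 4 KiB pages that gives P2M_PER_PAGE = 4096 / 8 =
 * 512 on 64-bit (4096 / 4 = 1024 on 32-bit), so a fully populated 64-bit
 * tree can describe 512 * 512 * 512 = 2^27 pages, i.e. 512 GiB of
 * pseudo-physical address space.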
 *
 * In short, these structures contain the Machine Frame Number (MFN) of the PFN.
 *
 * However, not all entries are filled with MFNs. Any void entry - whether a
 * leaf entry or a top- or mid-level one - is assumed to be "missing". So,
 * for example:
 *  pfn_to_mfn(0x90909090)=INVALID_P2M_ENTRY.
 *
 * We also have the possibility of setting 1-1 mappings on certain regions, so
 * that:
 *  pfn_to_mfn(0xc0000)=0xc0000
 *
 * The benefit of this is that for non-RAM regions (think PCI BARs, or ACPI
 * spaces) we can create mappings easily, because we get the PFN value to
 * match the MFN.
 *
 * For this to work efficiently we have one new page p2m_identity and
 * allocate (via reserve_brk) any other pages we need to cover the sides
 * (1GB or 4MB boundary violations). All entries in p2m_identity are set to
 * INVALID_P2M_ENTRY type (the Xen toolstack only recognizes that value and
 * MFNs, no other fancy value).
 *
 * On lookup we spot that the entry points to p2m_identity and return the
 * identity value instead of dereferencing and returning INVALID_P2M_ENTRY.
 * If the entry points to an allocated page, we just proceed as before and
 * return the PFN. If the PFN has IDENTITY_FRAME_BIT set we unmask that in
 * the appropriate functions (pfn_to_mfn).
 *
 * The reason for having the IDENTITY_FRAME_BIT instead of just returning the
 * PFN is that we could find ourselves where pfn_to_mfn(pfn)==pfn for a
 * non-identity pfn. To protect ourselves against that, we elect to set (and
 * get) the IDENTITY_FRAME_BIT on all identity mapped PFNs.
 *
 * This simplistic diagram is used to explain the more subtle piece of code.
 * There is also a diagram of the P2M at the end that can help.
 * Imagine your E820 looking like this:
 *
 *                    1GB                                           2GB    4GB
 * /-------------------+---------\/----\         /----------\    /---+-----\
 * | System RAM        | Sys RAM ||ACPI|         | reserved |    | Sys RAM |
 * \-------------------+---------/\----/         \----------/    \---+-----/
 *                               ^- 1029MB                       ^- 2001MB
 *
 * [1029MB = 263424 (0x40500), 2001MB = 512256 (0x7D100),
 *  2048MB = 524288 (0x80000)]
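 *
 * (Conversion for the numbers above: with 4 KiB pages there are 256 PFNs
 * per MB, so 1029MB = 1029 * 256 = 263424, 2001MB = 2001 * 256 = 512256,
 * and 2048MB = 2048 * 256 = 524288.)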
 *
 * And dom0_mem=max:3GB,1GB is passed in to the guest, meaning memory past 1GB
 * is actually not present (would have to kick the balloon driver to put it in).
 *
 * When we are told to set the PFNs for identity mapping (see patch: "xen/setup:
 * Set identity mapping for non-RAM E820 and E820 gaps.") we pass in the start
 * PFN and the end PFN (263424 and 512256 respectively). The first step is to
 * reserve_brk a top leaf page if p2m[1] is missing. The top leaf page covers
 * 512^2 of page estate (1GB) and in case the start or end PFN is not aligned
 * on 512^2*PAGE_SIZE (1GB) we reserve_brk new middle and leaf pages as
 * required to split any existing p2m_mid_missing middle pages.
 *
 * With the E820 example above, 263424 is not 1GB aligned so we allocate a
 * reserve_brk page which will cover the PFN estate from 0x40000 to 0x80000.
 * Each entry in the allocated page is "missing" (points to p2m_missing).
 *
 * The next stage is to determine if we need to do a more granular boundary
 * check on the 4MB (or 2MB depending on architecture) boundaries at the start
 * and end PFNs. We check if the start PFN and end PFN violate that boundary
 * check, and if so reserve_brk a (p2m[x][y]) leaf page. This way we have a
 * much finer granularity of setting which PFNs are missing and which ones are
 * identity. In our example 263424 and 512256 both fail the check, so we
 * reserve_brk two pages. We populate them with INVALID_P2M_ENTRY (so they
 * both have "missing" values) and assign them to p2m[1][2] and p2m[1][488]
 * respectively.
 *
 * At this point we will have reserve_brk'd at minimum one page, but it could
 * be up to three. Each call to set_phys_range_identity has at maximum a three
 * page cost. If we were to query the P2M at this stage, all those entries from
 * the start PFN through the end PFN (so 1029MB -> 2001MB) would return
 * INVALID_P2M_ENTRY ("missing").
 *
 * The next step is to walk from the start pfn to the end pfn setting
 * the IDENTITY_FRAME_BIT on each PFN. This is done in set_phys_range_identity.
 * If we find that the middle entry is pointing to p2m_missing we can swap it
 * over to p2m_identity - this way covering 4MB (or 2MB) PFN space (and
 * similarly swapping p2m_mid_missing for p2m_mid_identity for larger regions).
 * At this point we do not need to worry about boundary alignment (so no need
 * to reserve_brk a middle page, or figure out which PFNs are "missing" and
 * which ones are identity), as that has been done earlier. If we find that the
 * middle leaf is not occupied by p2m_identity or p2m_missing, we dereference
 * that page (which covers 512 PFNs) and set the appropriate PFNs with the
 * IDENTITY_FRAME_BIT. In our example 263424 and 512256 end up there, and we
 * set p2m[1][2][256->511] and p2m[1][488][0->255] with the
 * IDENTITY_FRAME_BIT set.
 *
 * All other regions that are void (or not filled) either point to p2m_missing
 * (considered missing) or have the default value of INVALID_P2M_ENTRY (also
 * considered missing). In our case, p2m[1][2][0->255] and p2m[1][488][256->511]
 * contain the INVALID_P2M_ENTRY value and are considered "missing."
 *
 * Finally, the region beyond the end of the E820 (4 GB in this example)
 * is set to be identity (in case there are MMIO regions placed here).
 *
 * This is what the p2m ends up looking like (for the E820 above):
 *
 * p2m         /--------------\
 *   /-----\   | &mfn_list[0],|                     /-----------------\
 *   |  0  |------>| &mfn_list[1],|    /---------------\ | ~0, ~0, ..      |
 *   |-----|   |  ..., ~0, ~0  |    | ~0, ~0, [x]---+----->| IDENTITY [@256] |
 *   |  1  |---\ \--------------/    | [p2m_identity]+\    | IDENTITY [@257] |
 *   |-----|    \                    | [p2m_identity]+\\   | ....            |
 *   |  2  |--\  \-------------------->|  ...          | \\  \----------------/
 *   |-----|   \                       \---------------/  \\
 *   |  3  |-\  \                                          \\  p2m_identity [1]
 *   |-----|  \  \-------------------->/---------------\   /-----------------\
 *   | ..  |\ |                        | [p2m_identity]+-->| ~0, ~0, ~0, ... |
 *   \-----/ | |                       | [p2m_identity]+-->| ..., ~0         |
 *           | |                       | ....          |   \-----------------/
 *           | |                       +-[x], ~0, ~0.. +\
 *           | |                       \---------------/ \
 *           | |                                          \-> /---------------\
 *           | V  p2m_mid_missing         p2m_missing        | IDENTITY[@0]   |
 *           | /-----------------\     /------------\        | IDENTITY[@256] |
 *           | | [p2m_missing]   +---->| ~0, ~0, ...|        | ~0, ~0, ....   |
 *           | | [p2m_missing]   +---->| ..., ~0    |        \---------------/
 *           | | ...             |     \------------/
 *           | \-----------------/
 *           |
 *           |   p2m_mid_identity
 *           |   /-----------------\
 *           \-->| [p2m_identity]  +---->[1]
 *               | [p2m_identity]  +---->[1]
 *               | ...             |
 *               \-----------------/
 *
 * where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
 */

#include <linux/init.h>
#include <linux/module.h>
#include <linux/list.h>
#include <linux/hash.h>
#include <linux/sched.h>
#include <linux/seq_file.h>
#include <linux/bootmem.h>

#include <asm/cache.h>
#include <asm/setup.h>

#include <asm/xen/page.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/hypervisor.h>
#include <xen/balloon.h>
#include <xen/grant_table.h>

#include "multicalls.h"

static void __init m2p_override_init(void);

unsigned long xen_max_p2m_pfn __read_mostly;

static unsigned long *p2m_mid_missing_mfn;
static unsigned long *p2m_top_mfn;
static unsigned long **p2m_top_mfn_p;

/* Placeholders for holes in the address space */
static RESERVE_BRK_ARRAY(unsigned long, p2m_missing, P2M_PER_PAGE);
static RESERVE_BRK_ARRAY(unsigned long *, p2m_mid_missing, P2M_MID_PER_PAGE);

static RESERVE_BRK_ARRAY(unsigned long **, p2m_top, P2M_TOP_PER_PAGE);

static RESERVE_BRK_ARRAY(unsigned long, p2m_identity, P2M_PER_PAGE);
static RESERVE_BRK_ARRAY(unsigned long *, p2m_mid_identity, P2M_MID_PER_PAGE);

RESERVE_BRK(p2m_mid, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
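
/*
 * Sizing aside (an added illustration, assuming 64-bit where P2M_PER_PAGE
 * and P2M_MID_PER_PAGE are both 512): one mid page spans 512 * 512 = 262144
 * PFNs, i.e. 1GB, so the reservation above amounts to one page per potential
 * 1GB chunk of the domain, MAX_DOMAIN_PAGES / 262144 pages in total.
 */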

/* For each I/O range remapped we may lose up to two leaf pages for the boundary
 * violations and three mid pages to cover up to 3GB. With
 * early_can_reuse_p2m_middle() most of the leaf pages will be reused by the
 * 1-1 mapping. */
RESERVE_BRK(p2m_identity_remap, PAGE_SIZE * 2 * 3 * MAX_REMAP_RANGES);

static inline unsigned p2m_top_index(unsigned long pfn)
{
        BUG_ON(pfn >= MAX_P2M_PFN);
        return pfn / (P2M_MID_PER_PAGE * P2M_PER_PAGE);
}

static inline unsigned p2m_mid_index(unsigned long pfn)
{
        return (pfn / P2M_PER_PAGE) % P2M_MID_PER_PAGE;
}

static inline unsigned p2m_index(unsigned long pfn)
{
        return pfn % P2M_PER_PAGE;
}
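
/*
 * Illustrative self-check (an added sketch, not called anywhere): verify
 * that the helpers above decompose the worked-example PFNs from the header
 * comment the way that comment claims, assuming 64-bit (P2M_PER_PAGE and
 * P2M_MID_PER_PAGE both 512).
 */
static inline void p2m_index_selfcheck(void)
{
        /* 263424 (0x40500) should land at p2m[1][2][256]... */
        WARN_ON(p2m_top_index(263424) != 1 ||
                p2m_mid_index(263424) != 2 ||
                p2m_index(263424) != 256);
        /* ...and 512256 (0x7D100) at p2m[1][488][256]. */
        WARN_ON(p2m_top_index(512256) != 1 ||
                p2m_mid_index(512256) != 488 ||
                p2m_index(512256) != 256);
}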

static void p2m_top_init(unsigned long ***top)
{
        unsigned i;

        for (i = 0; i < P2M_TOP_PER_PAGE; i++)
                top[i] = p2m_mid_missing;
}

static void p2m_top_mfn_init(unsigned long *top)
{
        unsigned i;

        for (i = 0; i < P2M_TOP_PER_PAGE; i++)
                top[i] = virt_to_mfn(p2m_mid_missing_mfn);
}

static void p2m_top_mfn_p_init(unsigned long **top)
{
        unsigned i;

        for (i = 0; i < P2M_TOP_PER_PAGE; i++)
                top[i] = p2m_mid_missing_mfn;
}

static void p2m_mid_init(unsigned long **mid, unsigned long *leaf)
{
        unsigned i;

        for (i = 0; i < P2M_MID_PER_PAGE; i++)
                mid[i] = leaf;
}

static void p2m_mid_mfn_init(unsigned long *mid, unsigned long *leaf)
{
        unsigned i;

        for (i = 0; i < P2M_MID_PER_PAGE; i++)
                mid[i] = virt_to_mfn(leaf);
}

static void p2m_init(unsigned long *p2m)
{
        unsigned i;

        for (i = 0; i < P2M_PER_PAGE; i++)
                p2m[i] = INVALID_P2M_ENTRY;
}

/*
 * Build the parallel p2m_top_mfn and p2m_mid_mfn structures
 *
 * This is called both at boot time, and after resuming from suspend:
 * - At boot time we're called rather early, and must use alloc_bootmem*()
 *   to allocate memory.
 *
 * - After resume we're called from within stop_machine, but the mfn
 *   tree should already be completely allocated.
 */
void __ref xen_build_mfn_list_list(void)
{
        unsigned long pfn;

        if (xen_feature(XENFEAT_auto_translated_physmap))
                return;

        /* Pre-initialize p2m_top_mfn to be completely missing */
        if (p2m_top_mfn == NULL) {
                p2m_mid_missing_mfn = alloc_bootmem_align(PAGE_SIZE, PAGE_SIZE);
                p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);

                p2m_top_mfn_p = alloc_bootmem_align(PAGE_SIZE, PAGE_SIZE);
                p2m_top_mfn_p_init(p2m_top_mfn_p);

                p2m_top_mfn = alloc_bootmem_align(PAGE_SIZE, PAGE_SIZE);
                p2m_top_mfn_init(p2m_top_mfn);
        } else {
                /* Reinitialise, mfn's all change after migration */
                p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
        }

        for (pfn = 0; pfn < xen_max_p2m_pfn; pfn += P2M_PER_PAGE) {
                unsigned topidx = p2m_top_index(pfn);
                unsigned mididx = p2m_mid_index(pfn);
                unsigned long **mid;
                unsigned long *mid_mfn_p;

                mid = p2m_top[topidx];
                mid_mfn_p = p2m_top_mfn_p[topidx];

                /* Don't bother allocating any mfn mid levels if
                 * they're just missing, just update the stored mfn,
                 * since all could have changed over a migrate.
                 */
                if (mid == p2m_mid_missing) {
                        BUG_ON(mididx);
                        BUG_ON(mid_mfn_p != p2m_mid_missing_mfn);
                        p2m_top_mfn[topidx] = virt_to_mfn(p2m_mid_missing_mfn);
                        pfn += (P2M_MID_PER_PAGE - 1) * P2M_PER_PAGE;
                        continue;
                }

                if (mid_mfn_p == p2m_mid_missing_mfn) {
                        /*
                         * XXX boot-time only!  We should never find
                         * missing parts of the mfn tree after
                         * runtime.
                         */
                        mid_mfn_p = alloc_bootmem_align(PAGE_SIZE, PAGE_SIZE);
                        p2m_mid_mfn_init(mid_mfn_p, p2m_missing);

                        p2m_top_mfn_p[topidx] = mid_mfn_p;
                }

                p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
                mid_mfn_p[mididx] = virt_to_mfn(mid[mididx]);
        }
}

void xen_setup_mfn_list_list(void)
{
        if (xen_feature(XENFEAT_auto_translated_physmap))
                return;

        BUG_ON(HYPERVISOR_shared_info == &xen_dummy_shared_info);

        HYPERVISOR_shared_info->arch.pfn_to_mfn_frame_list_list =
                virt_to_mfn(p2m_top_mfn);
        HYPERVISOR_shared_info->arch.max_pfn = xen_max_p2m_pfn;
}

/* Set up p2m_top to point to the domain-builder provided p2m pages */
void __init xen_build_dynamic_phys_to_machine(void)
{
        unsigned long *mfn_list;
        unsigned long max_pfn;
        unsigned long pfn;

        if (xen_feature(XENFEAT_auto_translated_physmap))
                return;

        mfn_list = (unsigned long *)xen_start_info->mfn_list;
        max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
        xen_max_p2m_pfn = max_pfn;

        p2m_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
        p2m_init(p2m_missing);
        p2m_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
        p2m_init(p2m_identity);

        p2m_mid_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
        p2m_mid_init(p2m_mid_missing, p2m_missing);
        p2m_mid_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
        p2m_mid_init(p2m_mid_identity, p2m_identity);

        p2m_top = extend_brk(PAGE_SIZE, PAGE_SIZE);
        p2m_top_init(p2m_top);

        /*
         * The domain builder gives us a pre-constructed p2m array in
         * mfn_list for all the pages initially given to us, so we just
         * need to graft that into our tree structure.
         */
        for (pfn = 0; pfn < max_pfn; pfn += P2M_PER_PAGE) {
                unsigned topidx = p2m_top_index(pfn);
                unsigned mididx = p2m_mid_index(pfn);

                if (p2m_top[topidx] == p2m_mid_missing) {
                        unsigned long **mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
                        p2m_mid_init(mid, p2m_missing);

                        p2m_top[topidx] = mid;
                }

                /*
                 * As long as the mfn_list has enough entries to completely
                 * fill a p2m page, pointing into the array is ok. But if
                 * not the entries beyond the last pfn will be undefined.
                 */
                if (unlikely(pfn + P2M_PER_PAGE > max_pfn)) {
                        unsigned long p2midx;

                        p2midx = max_pfn % P2M_PER_PAGE;
                        for ( ; p2midx < P2M_PER_PAGE; p2midx++)
                                mfn_list[pfn + p2midx] = INVALID_P2M_ENTRY;
                }
                p2m_top[topidx][mididx] = &mfn_list[pfn];
        }

        m2p_override_init();
}

#ifdef CONFIG_X86_64
unsigned long __init xen_revector_p2m_tree(void)
{
        unsigned long va_start;
        unsigned long va_end;
        unsigned long pfn;
        unsigned long pfn_free = 0;
        unsigned long *mfn_list = NULL;
        unsigned long size;

        va_start = xen_start_info->mfn_list;
        /*
         * We copy in increments of P2M_PER_PAGE * sizeof(unsigned long),
         * so make sure it is rounded up to that.
         */
        size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
        va_end = va_start + size;

        /* If we were revectored already, don't do it again. */
        if (va_start <= __START_KERNEL_map && va_start >= __PAGE_OFFSET)
                return 0;

        mfn_list = alloc_bootmem_align(size, PAGE_SIZE);
        if (!mfn_list) {
                pr_warn("Could not allocate space for a new P2M tree!\n");
                return xen_start_info->mfn_list;
        }
        /* Fill it out with INVALID_P2M_ENTRY value */
        memset(mfn_list, 0xFF, size);

        for (pfn = 0; pfn < ALIGN(MAX_DOMAIN_PAGES, P2M_PER_PAGE); pfn += P2M_PER_PAGE) {
                unsigned topidx = p2m_top_index(pfn);
                unsigned mididx;
                unsigned long *mid_p;

                if (!p2m_top[topidx])
                        continue;

                if (p2m_top[topidx] == p2m_mid_missing)
                        continue;

                mididx = p2m_mid_index(pfn);
                mid_p = p2m_top[topidx][mididx];
                if (!mid_p)
                        continue;
                if ((mid_p == p2m_missing) || (mid_p == p2m_identity))
                        continue;

                if ((unsigned long)mid_p == INVALID_P2M_ENTRY)
                        continue;

                /* The old va. Rebase it on mfn_list */
                if (mid_p >= (unsigned long *)va_start &&
                    mid_p <= (unsigned long *)va_end) {
                        unsigned long *new;

                        if (pfn_free > (size / sizeof(unsigned long))) {
                                WARN(1, "Only allocated for %ld pages, but we want %ld!\n",
                                     size / sizeof(unsigned long), pfn_free);
                                return 0;
                        }
                        new = &mfn_list[pfn_free];

                        copy_page(new, mid_p);
                        p2m_top[topidx][mididx] = &mfn_list[pfn_free];

                        pfn_free += P2M_PER_PAGE;
                }
                /* This should be the leaves allocated for identity from _brk. */
        }
        return (unsigned long)mfn_list;
}
#else
unsigned long __init xen_revector_p2m_tree(void)
{
        return 0;
}
#endif

unsigned long get_phys_to_machine(unsigned long pfn)
{
        unsigned topidx, mididx, idx;

        if (unlikely(pfn >= MAX_P2M_PFN))
                return IDENTITY_FRAME(pfn);

        topidx = p2m_top_index(pfn);
        mididx = p2m_mid_index(pfn);
        idx = p2m_index(pfn);

        /*
         * The INVALID_P2M_ENTRY is filled in both p2m_*identity
         * and in p2m_*missing, so returning the INVALID_P2M_ENTRY
         * would be wrong for identity entries; spot those first.
         */
        if (p2m_top[topidx][mididx] == p2m_identity)
                return IDENTITY_FRAME(pfn);

        return p2m_top[topidx][mididx][idx];
}
EXPORT_SYMBOL_GPL(get_phys_to_machine);
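
/*
 * Usage sketch (an added illustration, not part of the original flow): the
 * three outcomes a caller of get_phys_to_machine() can see, per the header
 * comment. The example PFN is the 1-1 region from that comment and is an
 * assumption about the layout, not something this file guarantees.
 */
static inline unsigned long pfn_to_mfn_sketch(void)
{
        unsigned long mfn = get_phys_to_machine(0xc0000);

        if (mfn == INVALID_P2M_ENTRY)           /* "missing" entry */
                return mfn;
        if (mfn & IDENTITY_FRAME_BIT)           /* 1-1 region: unmask the bit */
                return mfn & ~IDENTITY_FRAME_BIT;
        return mfn;                             /* ordinary MFN */
}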

static void *alloc_p2m_page(void)
{
        return (void *)__get_free_page(GFP_KERNEL | __GFP_REPEAT);
}

static void free_p2m_page(void *p)
{
        free_page((unsigned long)p);
}

/*
 * Fully allocate the p2m structure for a given pfn.  We need to check
 * that both the top and mid levels are allocated, and make sure the
 * parallel mfn tree is kept in sync.  We may race with other cpus, so
 * the new pages are installed with cmpxchg; if we lose the race then
 * simply free the page we allocated and use the one that's there.
 */
static bool alloc_p2m(unsigned long pfn)
{
        unsigned topidx, mididx;
        unsigned long ***top_p, **mid;
        unsigned long *top_mfn_p, *mid_mfn;
        unsigned long *p2m_orig;

        topidx = p2m_top_index(pfn);
        mididx = p2m_mid_index(pfn);

        top_p = &p2m_top[topidx];
        mid = ACCESS_ONCE(*top_p);

        if (mid == p2m_mid_missing) {
                /* Mid level is missing, allocate a new one */
                mid = alloc_p2m_page();
                if (!mid)
                        return false;

                p2m_mid_init(mid, p2m_missing);

                if (cmpxchg(top_p, p2m_mid_missing, mid) != p2m_mid_missing)
                        free_p2m_page(mid);
        }

        top_mfn_p = &p2m_top_mfn[topidx];
        mid_mfn = ACCESS_ONCE(p2m_top_mfn_p[topidx]);

        BUG_ON(virt_to_mfn(mid_mfn) != *top_mfn_p);

        if (mid_mfn == p2m_mid_missing_mfn) {
                /* Separately check the mid mfn level */
                unsigned long missing_mfn;
                unsigned long mid_mfn_mfn;
                unsigned long old_mfn;

                mid_mfn = alloc_p2m_page();
                if (!mid_mfn)
                        return false;

                p2m_mid_mfn_init(mid_mfn, p2m_missing);

                missing_mfn = virt_to_mfn(p2m_mid_missing_mfn);
                mid_mfn_mfn = virt_to_mfn(mid_mfn);
                old_mfn = cmpxchg(top_mfn_p, missing_mfn, mid_mfn_mfn);
                if (old_mfn != missing_mfn) {
                        free_p2m_page(mid_mfn);
                        mid_mfn = mfn_to_virt(old_mfn);
                } else {
                        p2m_top_mfn_p[topidx] = mid_mfn;
                }
        }

        p2m_orig = ACCESS_ONCE(p2m_top[topidx][mididx]);
        if (p2m_orig == p2m_identity || p2m_orig == p2m_missing) {
                /* p2m leaf page is missing */
                unsigned long *p2m;

                p2m = alloc_p2m_page();
                if (!p2m)
                        return false;

                p2m_init(p2m);

                if (cmpxchg(&mid[mididx], p2m_orig, p2m) != p2m_orig)
                        free_p2m_page(p2m);
                else
                        mid_mfn[mididx] = virt_to_mfn(p2m);
        }

        return true;
}
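
/*
 * Distilled sketch (an added illustration) of the publish-with-cmpxchg
 * pattern alloc_p2m() applies at each level: allocate and initialize a page
 * privately, then attempt to install it; on losing the race, free our copy
 * and adopt the winner's. The "slot" parameter is generic, not a new API.
 */
static inline unsigned long *p2m_install_sketch(unsigned long **slot)
{
        unsigned long *page = alloc_p2m_page();

        if (!page)
                return NULL;
        p2m_init(page);

        if (cmpxchg(slot, p2m_missing, page) != p2m_missing) {
                /* Lost the race; someone else's page is already there. */
                free_p2m_page(page);
                page = *slot;
        }
        return page;
}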

static bool __init early_alloc_p2m(unsigned long pfn, bool check_boundary)
{
        unsigned topidx, mididx, idx;
        unsigned long *p2m;

        topidx = p2m_top_index(pfn);
        mididx = p2m_mid_index(pfn);
        idx = p2m_index(pfn);

        /* Pfff.. No boundary cross-over, lets get out. */
        if (!idx && check_boundary)
                return false;

        WARN(p2m_top[topidx][mididx] == p2m_identity,
             "P2M[%d][%d] == IDENTITY, should be MISSING (or alloced)!\n",
             topidx, mididx);

        /*
         * Could be done by xen_build_dynamic_phys_to_machine..
         */
        if (p2m_top[topidx][mididx] != p2m_missing)
                return false;

        /* Boundary cross-over for the edges: */
        p2m = extend_brk(PAGE_SIZE, PAGE_SIZE);

        p2m_init(p2m);

        p2m_top[topidx][mididx] = p2m;

        return true;
}

static bool __init early_alloc_p2m_middle(unsigned long pfn)
{
        unsigned topidx = p2m_top_index(pfn);
        unsigned long **mid;

        mid = p2m_top[topidx];
        if (mid == p2m_mid_missing) {
                mid = extend_brk(PAGE_SIZE, PAGE_SIZE);

                p2m_mid_init(mid, p2m_missing);

                p2m_top[topidx] = mid;
        }
        return true;
}

/*
 * Skim over the P2M tree looking at pages that are either filled with
 * INVALID_P2M_ENTRY or with 1:1 PFNs. If found, re-use that page and
 * replace the P2M leaf with a p2m_missing or p2m_identity.
 * Stick the old page in the new P2M tree location.
 */
static bool __init early_can_reuse_p2m_middle(unsigned long set_pfn)
{
        unsigned topidx;
        unsigned mididx;
        unsigned ident_pfns;
        unsigned inv_pfns;
        unsigned long *p2m;
        unsigned idx;
        unsigned long pfn;

        /* We only look when this entails a P2M middle layer */
        if (p2m_index(set_pfn))
                return false;

        for (pfn = 0; pfn < MAX_DOMAIN_PAGES; pfn += P2M_PER_PAGE) {
                topidx = p2m_top_index(pfn);

                if (!p2m_top[topidx])
                        continue;

                if (p2m_top[topidx] == p2m_mid_missing)
                        continue;

                mididx = p2m_mid_index(pfn);
                p2m = p2m_top[topidx][mididx];
                if (!p2m)
                        continue;

                if ((p2m == p2m_missing) || (p2m == p2m_identity))
                        continue;

                if ((unsigned long)p2m == INVALID_P2M_ENTRY)
                        continue;

                ident_pfns = 0;
                inv_pfns = 0;
                for (idx = 0; idx < P2M_PER_PAGE; idx++) {
                        /* IDENTITY_PFNs are 1:1 */
                        if (p2m[idx] == IDENTITY_FRAME(pfn + idx))
                                ident_pfns++;
                        else if (p2m[idx] == INVALID_P2M_ENTRY)
                                inv_pfns++;
                        else
                                break;
                }
                if ((ident_pfns == P2M_PER_PAGE) || (inv_pfns == P2M_PER_PAGE))
                        goto found;
        }
        return false;
found:
        /* Found one, replace old with p2m_identity or p2m_missing */
        p2m_top[topidx][mididx] = (ident_pfns ? p2m_identity : p2m_missing);

        /* Reset where we want to stick the old page in. */
        topidx = p2m_top_index(set_pfn);
        mididx = p2m_mid_index(set_pfn);

        /* This shouldn't happen */
        if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
                early_alloc_p2m_middle(set_pfn);

        if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
                return false;

        p2m_init(p2m);
        p2m_top[topidx][mididx] = p2m;

        return true;
}

bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
{
        if (unlikely(!__set_phys_to_machine(pfn, mfn))) {
                if (!early_alloc_p2m_middle(pfn))
                        return false;

                if (early_can_reuse_p2m_middle(pfn))
                        return __set_phys_to_machine(pfn, mfn);

                if (!early_alloc_p2m(pfn, false /* boundary crossover OK!*/))
                        return false;

                if (!__set_phys_to_machine(pfn, mfn))
                        return false;
        }

        return true;
}

static void __init early_split_p2m(unsigned long pfn)
{
        unsigned long mididx, idx;

        mididx = p2m_mid_index(pfn);
        idx = p2m_index(pfn);

        /*
         * Allocate new middle and leaf pages if this pfn lies in the
         * middle of one.
         */
        if (mididx || idx)
                early_alloc_p2m_middle(pfn);
        if (idx)
                early_alloc_p2m(pfn, false);
}

unsigned long __init set_phys_range_identity(unsigned long pfn_s,
                                             unsigned long pfn_e)
{
        unsigned long pfn;

        if (unlikely(pfn_s >= MAX_P2M_PFN))
                return 0;

        if (unlikely(xen_feature(XENFEAT_auto_translated_physmap)))
                return pfn_e - pfn_s;

        if (pfn_s > pfn_e)
                return 0;

        if (pfn_e > MAX_P2M_PFN)
                pfn_e = MAX_P2M_PFN;

        early_split_p2m(pfn_s);
        early_split_p2m(pfn_e);

        for (pfn = pfn_s; pfn < pfn_e;) {
                unsigned topidx = p2m_top_index(pfn);
                unsigned mididx = p2m_mid_index(pfn);

                if (!__set_phys_to_machine(pfn, IDENTITY_FRAME(pfn)))
                        break;
                pfn++;

                /*
                 * If the PFN was set to a middle or leaf identity
                 * page the remainder must also be identity, so skip
                 * ahead to the next middle or leaf entry.
                 */
                if (p2m_top[topidx] == p2m_mid_identity)
                        pfn = ALIGN(pfn, P2M_MID_PER_PAGE * P2M_PER_PAGE);
                else if (p2m_top[topidx][mididx] == p2m_identity)
                        pfn = ALIGN(pfn, P2M_PER_PAGE);
        }

        WARN((pfn - pfn_s) != (pfn_e - pfn_s),
             "Identity mapping failed. We are %ld short of 1-1 mappings!\n",
             (pfn_e - pfn_s) - (pfn - pfn_s));

        return pfn - pfn_s;
}

/* Try to install p2m mapping; fail if intermediate bits missing */
bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn)
{
        unsigned topidx, mididx, idx;

        /* don't track P2M changes in autotranslate guests */
        if (unlikely(xen_feature(XENFEAT_auto_translated_physmap)))
                return true;

        if (unlikely(pfn >= MAX_P2M_PFN)) {
                BUG_ON(mfn != INVALID_P2M_ENTRY);
                return true;
        }

        topidx = p2m_top_index(pfn);
        mididx = p2m_mid_index(pfn);
        idx = p2m_index(pfn);

        /* For sparse holes where the p2m leaf has a real PFN along with
         * PCI holes, stick in the PFN as the MFN value.
         *
         * set_phys_range_identity() will have allocated new middle
         * and leaf pages as required, so an existing p2m_mid_missing
         * or p2m_missing means that the whole range will be identity, so
         * these can be switched to p2m_mid_identity or p2m_identity.
         */
        if (mfn != INVALID_P2M_ENTRY && (mfn & IDENTITY_FRAME_BIT)) {
                if (p2m_top[topidx] == p2m_mid_identity)
                        return true;

                if (p2m_top[topidx] == p2m_mid_missing) {
                        WARN_ON(cmpxchg(&p2m_top[topidx], p2m_mid_missing,
                                        p2m_mid_identity) != p2m_mid_missing);
                        return true;
                }

                if (p2m_top[topidx][mididx] == p2m_identity)
                        return true;

                /* Swap over from MISSING to IDENTITY if needed. */
                if (p2m_top[topidx][mididx] == p2m_missing) {
                        WARN_ON(cmpxchg(&p2m_top[topidx][mididx], p2m_missing,
                                        p2m_identity) != p2m_missing);
                        return true;
                }
        }

        if (p2m_top[topidx][mididx] == p2m_missing)
                return mfn == INVALID_P2M_ENTRY;

        p2m_top[topidx][mididx][idx] = mfn;

        return true;
}

bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
{
        if (unlikely(!__set_phys_to_machine(pfn, mfn))) {
                if (!alloc_p2m(pfn))
                        return false;

                if (!__set_phys_to_machine(pfn, mfn))
                        return false;
        }

        return true;
}

#define M2P_OVERRIDE_HASH_SHIFT 10
#define M2P_OVERRIDE_HASH       (1 << M2P_OVERRIDE_HASH_SHIFT)

static RESERVE_BRK_ARRAY(struct list_head, m2p_overrides, M2P_OVERRIDE_HASH);
static DEFINE_SPINLOCK(m2p_override_lock);

static void __init m2p_override_init(void)
{
        unsigned i;

        m2p_overrides = extend_brk(sizeof(*m2p_overrides) * M2P_OVERRIDE_HASH,
                                   sizeof(unsigned long));

        for (i = 0; i < M2P_OVERRIDE_HASH; i++)
                INIT_LIST_HEAD(&m2p_overrides[i]);
}

static unsigned long mfn_hash(unsigned long mfn)
{
        return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
}
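
/*
 * Added illustration (a sketch, not used by the code): the bucket an
 * override for a given mfn is filed into - the same selection that
 * m2p_add_override() and m2p_find_override() below perform under
 * m2p_override_lock.
 */
static inline struct list_head *m2p_bucket_sketch(unsigned long mfn)
{
        return &m2p_overrides[mfn_hash(mfn)];
}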

int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
                            struct gnttab_map_grant_ref *kmap_ops,
                            struct page **pages, unsigned int count)
{
        int i, ret = 0;
        bool lazy = false;
        pte_t *pte;

        if (xen_feature(XENFEAT_auto_translated_physmap))
                return 0;

        if (kmap_ops &&
            !in_interrupt() &&
            paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
                arch_enter_lazy_mmu_mode();
                lazy = true;
        }

        for (i = 0; i < count; i++) {
                unsigned long mfn, pfn;

                /* Do not add to override if the map failed. */
                if (map_ops[i].status)
                        continue;

                if (map_ops[i].flags & GNTMAP_contains_pte) {
                        pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
                                (map_ops[i].host_addr & ~PAGE_MASK));
                        mfn = pte_mfn(*pte);
                } else {
                        mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
                }
                pfn = page_to_pfn(pages[i]);

                WARN_ON(PagePrivate(pages[i]));
                SetPagePrivate(pages[i]);
                set_page_private(pages[i], mfn);
                pages[i]->index = pfn_to_mfn(pfn);

                if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
                        ret = -ENOMEM;
                        goto out;
                }

                if (kmap_ops) {
                        ret = m2p_add_override(mfn, pages[i], &kmap_ops[i]);
                        if (ret)
                                goto out;
                }
        }

out:
        if (lazy)
                arch_leave_lazy_mmu_mode();

        return ret;
}
EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);

/* Add an MFN override for a particular page */
int m2p_add_override(unsigned long mfn, struct page *page,
                     struct gnttab_map_grant_ref *kmap_op)
{
        unsigned long flags;
        unsigned long pfn;
        unsigned long uninitialized_var(address);
        unsigned level;
        pte_t *ptep = NULL;

        pfn = page_to_pfn(page);
        if (!PageHighMem(page)) {
                address = (unsigned long)__va(pfn << PAGE_SHIFT);
                ptep = lookup_address(address, &level);
                if (WARN(ptep == NULL || level != PG_LEVEL_4K,
                         "m2p_add_override: pfn %lx not mapped", pfn))
                        return -EINVAL;
        }

        if (kmap_op != NULL) {
                if (!PageHighMem(page)) {
                        struct multicall_space mcs =
                                xen_mc_entry(sizeof(*kmap_op));

                        MULTI_grant_table_op(mcs.mc,
                                        GNTTABOP_map_grant_ref, kmap_op, 1);

                        xen_mc_issue(PARAVIRT_LAZY_MMU);
                }
        }
        spin_lock_irqsave(&m2p_override_lock, flags);
        list_add(&page->lru, &m2p_overrides[mfn_hash(mfn)]);
        spin_unlock_irqrestore(&m2p_override_lock, flags);

        /* p2m(m2p(mfn)) == mfn: the mfn is already present somewhere in
         * this domain. Set the FOREIGN_FRAME_BIT in the p2m for the other
         * pfn so that the following mfn_to_pfn(mfn) calls will return the
         * pfn from the m2p_override (the backend pfn) instead.
         * We need to do this because the pages shared by the frontend
         * (xen-blkfront) can be already locked (lock_page, called by
         * do_read_cache_page); when the userspace backend tries to use them
         * with direct_IO, mfn_to_pfn returns the pfn of the frontend, so
         * do_blockdev_direct_IO is going to try to lock the same pages
         * again resulting in a deadlock.
         * As a side effect get_user_pages_fast might not be safe on the
         * frontend pages while they are being shared with the backend,
         * because mfn_to_pfn (that ends up being called by GUPF) will
         * return the backend pfn rather than the frontend pfn. */
        pfn = mfn_to_pfn_no_overrides(mfn);
        if (get_phys_to_machine(pfn) == mfn)
                set_phys_to_machine(pfn, FOREIGN_FRAME(mfn));

        return 0;
}
EXPORT_SYMBOL_GPL(m2p_add_override);

int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
                              struct gnttab_map_grant_ref *kmap_ops,
                              struct page **pages, unsigned int count)
{
        int i, ret = 0;
        bool lazy = false;

        if (xen_feature(XENFEAT_auto_translated_physmap))
                return 0;

        if (kmap_ops &&
            !in_interrupt() &&
            paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
                arch_enter_lazy_mmu_mode();
                lazy = true;
        }

        for (i = 0; i < count; i++) {
                unsigned long mfn = get_phys_to_machine(page_to_pfn(pages[i]));
                unsigned long pfn = page_to_pfn(pages[i]);

                if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
                        ret = -EINVAL;
                        goto out;
                }

                set_page_private(pages[i], INVALID_P2M_ENTRY);
                WARN_ON(!PagePrivate(pages[i]));
                ClearPagePrivate(pages[i]);
                set_phys_to_machine(pfn, pages[i]->index);

                if (kmap_ops)
                        ret = m2p_remove_override(pages[i], &kmap_ops[i], mfn);
                if (ret)
                        goto out;
        }

out:
        if (lazy)
                arch_leave_lazy_mmu_mode();
        return ret;
}
EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);

int m2p_remove_override(struct page *page,
                        struct gnttab_map_grant_ref *kmap_op,
                        unsigned long mfn)
{
        unsigned long flags;
        unsigned long pfn;
        unsigned long uninitialized_var(address);
        unsigned level;
        pte_t *ptep = NULL;

        pfn = page_to_pfn(page);

        if (!PageHighMem(page)) {
                address = (unsigned long)__va(pfn << PAGE_SHIFT);
                ptep = lookup_address(address, &level);

                if (WARN(ptep == NULL || level != PG_LEVEL_4K,
                         "m2p_remove_override: pfn %lx not mapped", pfn))
                        return -EINVAL;
        }

        spin_lock_irqsave(&m2p_override_lock, flags);
        list_del(&page->lru);
        spin_unlock_irqrestore(&m2p_override_lock, flags);

        if (kmap_op != NULL) {
                if (!PageHighMem(page)) {
                        struct multicall_space mcs;
                        struct gnttab_unmap_and_replace *unmap_op;
                        struct page *scratch_page = get_balloon_scratch_page();
                        unsigned long scratch_page_address = (unsigned long)
                                __va(page_to_pfn(scratch_page) << PAGE_SHIFT);

                        /*
                         * It might be that we queued all the m2p grant table
                         * hypercalls in a multicall, then m2p_remove_override
                         * gets called before the multicall has actually been
                         * issued. In this case handle is going to be -1 because
                         * it hasn't been modified yet.
                         */
                        if (kmap_op->handle == -1)
                                xen_mc_flush();
                        /*
                         * Now if kmap_op->handle is negative it means that the
                         * hypercall actually returned an error.
                         */
                        if (kmap_op->handle == GNTST_general_error) {
                                printk(KERN_WARNING "m2p_remove_override: "
                                       "pfn %lx mfn %lx, failed to modify kernel mappings",
                                       pfn, mfn);
                                put_balloon_scratch_page();
                                return -1;
                        }

                        xen_mc_batch();

                        mcs = __xen_mc_entry(
                                        sizeof(struct gnttab_unmap_and_replace));
                        unmap_op = mcs.args;
                        unmap_op->host_addr = kmap_op->host_addr;
                        unmap_op->new_addr = scratch_page_address;
                        unmap_op->handle = kmap_op->handle;

                        MULTI_grant_table_op(mcs.mc,
                                        GNTTABOP_unmap_and_replace, unmap_op, 1);

                        mcs = __xen_mc_entry(0);
                        MULTI_update_va_mapping(mcs.mc, scratch_page_address,
                                        pfn_pte(page_to_pfn(scratch_page),
                                        PAGE_KERNEL_RO), 0);

                        xen_mc_issue(PARAVIRT_LAZY_MMU);

                        kmap_op->host_addr = 0;
                        put_balloon_scratch_page();
                }
        }

        /* p2m(m2p(mfn)) == FOREIGN_FRAME(mfn): the mfn is already present
         * somewhere in this domain, even before being added to the
         * m2p_override (see comment above in m2p_add_override).
         * If there are no other entries in the m2p_override corresponding
         * to this mfn, then remove the FOREIGN_FRAME_BIT from the p2m for
         * the original pfn (the one shared by the frontend): the backend
         * cannot do any IO on this page anymore because it has been
         * unshared. Removing the FOREIGN_FRAME_BIT from the p2m entry of
         * the original pfn causes mfn_to_pfn(mfn) to return the frontend
         * pfn again. */
        mfn &= ~FOREIGN_FRAME_BIT;
        pfn = mfn_to_pfn_no_overrides(mfn);
        if (get_phys_to_machine(pfn) == FOREIGN_FRAME(mfn) &&
            m2p_find_override(mfn) == NULL)
                set_phys_to_machine(pfn, mfn);

        return 0;
}
EXPORT_SYMBOL_GPL(m2p_remove_override);

struct page *m2p_find_override(unsigned long mfn)
{
        unsigned long flags;
        struct list_head *bucket = &m2p_overrides[mfn_hash(mfn)];
        struct page *p, *ret;

        ret = NULL;

        spin_lock_irqsave(&m2p_override_lock, flags);

        list_for_each_entry(p, bucket, lru) {
                if (page_private(p) == mfn) {
                        ret = p;
                        break;
                }
        }

        spin_unlock_irqrestore(&m2p_override_lock, flags);

        return ret;
}

unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn)
{
        struct page *p = m2p_find_override(mfn);
        unsigned long ret = pfn;

        if (p)
                ret = page_to_pfn(p);

        return ret;
}
EXPORT_SYMBOL_GPL(m2p_find_override_pfn);

#ifdef CONFIG_XEN_DEBUG_FS
#include <linux/debugfs.h>
#include "debugfs.h"
static int p2m_dump_show(struct seq_file *m, void *v)
{
        static const char * const level_name[] = { "top", "middle",
                                                "entry", "abnormal", "error"};
#define TYPE_IDENTITY 0
#define TYPE_MISSING 1
#define TYPE_PFN 2
#define TYPE_UNKNOWN 3
        static const char * const type_name[] = {
                                [TYPE_IDENTITY] = "identity",
                                [TYPE_MISSING] = "missing",
                                [TYPE_PFN] = "pfn",
                                [TYPE_UNKNOWN] = "abnormal"};
        unsigned long pfn, prev_pfn_type = 0, prev_pfn_level = 0;
        unsigned int uninitialized_var(prev_level);
        unsigned int uninitialized_var(prev_type);

        for (pfn = 0; pfn < MAX_DOMAIN_PAGES; pfn++) {
                unsigned topidx = p2m_top_index(pfn);
                unsigned mididx = p2m_mid_index(pfn);
                unsigned idx = p2m_index(pfn);
                unsigned lvl, type;

                lvl = 4;
                type = TYPE_UNKNOWN;
                if (p2m_top[topidx] == p2m_mid_missing) {
                        lvl = 0; type = TYPE_MISSING;
                } else if (p2m_top[topidx] == NULL) {
                        lvl = 0; type = TYPE_UNKNOWN;
                } else if (p2m_top[topidx][mididx] == NULL) {
                        lvl = 1; type = TYPE_UNKNOWN;
                } else if (p2m_top[topidx][mididx] == p2m_identity) {
                        lvl = 1; type = TYPE_IDENTITY;
                } else if (p2m_top[topidx][mididx] == p2m_missing) {
                        lvl = 1; type = TYPE_MISSING;
                } else if (p2m_top[topidx][mididx][idx] == 0) {
                        lvl = 2; type = TYPE_UNKNOWN;
                } else if (p2m_top[topidx][mididx][idx] == IDENTITY_FRAME(pfn)) {
                        lvl = 2; type = TYPE_IDENTITY;
                } else if (p2m_top[topidx][mididx][idx] == INVALID_P2M_ENTRY) {
                        lvl = 2; type = TYPE_MISSING;
                } else {
                        /* Any other value is a plain (or foreign) MFN. */
                        lvl = 2; type = TYPE_PFN;
                }
                if (pfn == 0) {
                        prev_level = lvl;
                        prev_type = type;
                }
                if (pfn == MAX_DOMAIN_PAGES-1) {
                        lvl = 3;
                        type = TYPE_UNKNOWN;
                }
                if (prev_type != type) {
                        seq_printf(m, " [0x%lx->0x%lx] %s\n",
                                   prev_pfn_type, pfn, type_name[prev_type]);
                        prev_pfn_type = pfn;
                        prev_type = type;
                }
                if (prev_level != lvl) {
                        seq_printf(m, " [0x%lx->0x%lx] level %s\n",
                                   prev_pfn_level, pfn, level_name[prev_level]);
                        prev_pfn_level = pfn;
                        prev_level = lvl;
                }
        }
        return 0;
#undef TYPE_IDENTITY
#undef TYPE_MISSING
#undef TYPE_PFN
#undef TYPE_UNKNOWN
}

static int p2m_dump_open(struct inode *inode, struct file *filp)
{
        return single_open(filp, p2m_dump_show, NULL);
}

static const struct file_operations p2m_dump_fops = {
        .open           = p2m_dump_open,
        .read           = seq_read,
        .llseek         = seq_lseek,
        .release        = single_release,
};

static struct dentry *d_mmu_debug;

static int __init xen_p2m_debugfs(void)
{
        struct dentry *d_xen = xen_init_debugfs();

        if (d_xen == NULL)
                return -ENOMEM;

        d_mmu_debug = debugfs_create_dir("mmu", d_xen);

        debugfs_create_file("p2m", 0600, d_mmu_debug, NULL, &p2m_dump_fops);
        return 0;
}
fs_initcall(xen_p2m_debugfs);
#endif /* CONFIG_XEN_DEBUG_FS */