KVM Lock Overview
=================

1. Acquisition Orders
---------------------

The acquisition orders for mutexes are as follows:

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

On x86, vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock.

For spinlocks, kvm_lock is taken outside kvm->mmu_lock.

Everything else is a leaf: no other lock is taken inside the critical
sections.
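
As an illustration, a path that needs both kvm->lock and a vcpu->mutex must
take them in the documented order. A minimal sketch (the helper name vcpu_op
is made up for illustration; the types come from linux/kvm_host.h):

	static int vcpu_op(struct kvm *kvm, struct kvm_vcpu *vcpu)
	{
		/* Documented order: kvm->lock outside vcpu->mutex.  Taking
		 * them the other way around can deadlock against any path
		 * that follows this order.
		 */
		mutex_lock(&kvm->lock);
		mutex_lock(&vcpu->mutex);

		/* ... work that needs both locks ... */

		mutex_unlock(&vcpu->mutex);
		mutex_unlock(&kvm->lock);
		return 0;
	}
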
2. Exception
------------

Fast page fault:

Fast page fault is the fast path which fixes the guest page fault out of
the mmu-lock on x86. Currently, the page fault can be fast only if the
shadow page table is present and it is caused by write-protect; that means
we just need to change the W bit of the spte.

What we use to avoid all the races is the SPTE_HOST_WRITEABLE bit and
the SPTE_MMU_WRITEABLE bit on the spte:
- SPTE_HOST_WRITEABLE means the gfn is writable on host.
- SPTE_MMU_WRITEABLE means the gfn is writable on mmu. The bit is set when
  the gfn is writable on guest mmu and it is not write-protected by shadow
  page write-protection.

On the fast page fault path, we will use cmpxchg to atomically set the spte
W bit if spte.SPTE_HOST_WRITEABLE = 1 and spte.SPTE_MMU_WRITEABLE = 1; this
is safe because any change to these bits can be detected by cmpxchg.

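A minimal sketch of that lockless update, simplified from the fast page
fault logic in arch/x86/kvm/mmu.c (sptep, gfn, vcpu and the hardware W bit
PT_WRITABLE_MASK are taken as given here, not defined by this document):

	u64 old_spte = *sptep;

	/* Only sptes that are writable on both host and guest mmu, and
	 * were merely write-protected (e.g. for dirty logging), may be
	 * fixed without holding mmu-lock.
	 */
	if ((old_spte & SPTE_HOST_WRITEABLE) &&
	    (old_spte & SPTE_MMU_WRITEABLE)) {
		/* cmpxchg fails if any bit of the spte changed under us;
		 * in that case we simply let the slow path handle it.
		 */
		if (cmpxchg64(sptep, old_spte,
			      old_spte | PT_WRITABLE_MASK) == old_spte)
			mark_page_dirty(vcpu->kvm, gfn);
	}
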
But we need to carefully check these cases:
1): The mapping from gfn to pfn
The mapping from gfn to pfn may be changed, since we can only ensure that
the pfn is not changed during cmpxchg. This is an ABA problem; for example,
the following case can happen:

At the beginning:
	gpte = gfn1
	gfn1 is mapped to pfn1 on host
	spte is the shadow page table entry corresponding with gpte and
	spte = pfn1

   VCPU 0                           VCPU 1
on fast page fault path:

   old_spte = *spte;
                                 pfn1 is swapped out:
                                    spte = 0;

                                 pfn1 is re-alloced for gfn2.

                                 gpte is changed to point to
                                 gfn2 by the guest:
                                    spte = pfn1;

   if (cmpxchg(spte, old_spte, old_spte+W))
      mark_page_dirty(vcpu->kvm, gfn1)
           OOPS!!!

We dirty-log for gfn1; that means gfn2 is lost in the dirty-bitmap.

For direct sp, we can easily avoid it since the spte of direct sp is fixed
to gfn. For indirect sp, before we do cmpxchg, we call gfn_to_pfn_atomic()
to pin gfn to pfn, because after gfn_to_pfn_atomic():
- We hold a refcount on the pfn, which means the pfn can not be freed and
  reused for another gfn.
- The pfn is writable, which means it can not be shared between different
  gfns by KSM.

Then we can ensure the dirty bitmap is correctly set for a gfn.

Currently, to simplify the whole thing, we disable fast page fault for
indirect shadow pages.

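As a hedged sketch, the pinning described above could look like the
following. This is hypothetical code, since fast page fault is disabled
for indirect shadow pages; old_spte, sptep, gfn and the spte_to_pfn()
helper are taken as given:

	kvm_pfn_t pfn;

	/* Grab a reference so the pfn can not be freed and reused for
	 * another gfn while we look at it.
	 */
	pfn = gfn_to_pfn_atomic(vcpu->kvm, gfn);
	if (is_error_pfn(pfn))
		return false;

	/* While we hold the reference, the gfn -> pfn mapping can not
	 * have been recycled if the spte still maps this pfn.
	 */
	if (spte_to_pfn(old_spte) == pfn &&
	    cmpxchg64(sptep, old_spte,
		      old_spte | PT_WRITABLE_MASK) == old_spte)
		mark_page_dirty(vcpu->kvm, gfn);

	kvm_release_pfn_clean(pfn);
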
2): Dirty bit tracking
In the original code, the spte can be updated fast (non-atomically) if the
spte is read-only and the Accessed bit has already been set, since then
neither the Accessed bit nor the Dirty bit can be lost.

But this no longer holds after fast page fault, since the spte can be
marked writable between reading and updating it. Consider the case below:

At the beginning:
	spte.W = 0
	spte.Accessed = 1

   VCPU 0                                       VCPU 1
In mmu_spte_clear_track_bits():

   old_spte = *spte;

   /* 'if' condition is satisfied. */
   if (old_spte.Accessed == 1 &&
        old_spte.W == 0)
      spte = 0ull;
                                         on fast page fault path:
                                             spte.W = 1
                                         memory write on the spte:
                                             spte.Dirty = 1

   else
      old_spte = xchg(spte, 0ull)


   if (old_spte.Accessed == 1)
      kvm_set_pfn_accessed(spte.pfn);
   if (old_spte.Dirty == 1)
      kvm_set_pfn_dirty(spte.pfn);
      OOPS!!!

The Dirty bit is lost in this case.

In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock (see spte_has_volatile_bits()); that
means the spte is always updated atomically in this case.

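A simplified sketch of how mmu_spte_clear_track_bits() applies this rule
(hand-reduced from arch/x86/kvm/mmu.c; the helper and mask names are the
ones used there):

	u64 old_spte = *sptep;

	if (!spte_has_volatile_bits(old_spte))
		/* Nothing can change under us: a plain write is enough. */
		__update_clear_spte_fast(sptep, 0ull);
	else
		/* The spte can be updated out of mmu-lock, so clear it
		 * atomically; Accessed/Dirty bits set concurrently are
		 * captured in old_spte instead of being lost.
		 */
		old_spte = __update_clear_spte_slow(sptep, 0ull);

	if (old_spte & shadow_accessed_mask)
		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
	if (old_spte & shadow_dirty_mask)
		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
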
3): Flushing TLBs due to spte updates
If the spte is updated from writable to read-only, we should flush all TLBs,
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might be cached in a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. In order to easily audit the path, we check in
mmu_spte_update() whether TLBs need to be flushed for this reason, since it
is the common function to update the spte (present -> present).

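Reduced to this aspect alone, the check amounts to something like the
following sketch (not the full mmu_spte_update(); is_writable_pte() is the
mmu.c predicate for the W bit):

	/* Returns true if the caller must flush TLBs: a writable spte
	 * became read-only, so a stale writable translation may still
	 * be cached on some CPU.
	 */
	static bool mmu_spte_update(u64 *sptep, u64 new_spte)
	{
		/* Always update atomically: the spte is "volatile" if it
		 * can be touched out of mmu-lock by fast page fault.
		 */
		u64 old_spte = xchg(sptep, new_spte);

		return is_writable_pte(old_spte) &&
		       !is_writable_pte(new_spte);
	}
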
Since the spte is "volatile" if it can be updated out of mmu-lock, we always
update the spte atomically, so the race caused by fast page fault can be
avoided. See the comments in spte_has_volatile_bits() and mmu_spte_update().

3. Reference
------------

Name:		kvm_lock
Type:		spinlock_t
Arch:		any
Protects:	- vm_list

Name:		kvm_count_lock
Type:		raw_spinlock_t
Arch:		any
Protects:	- hardware virtualization enable/disable
Comment:	'raw' because hardware enabling/disabling must be atomic wrt
		migration.

Name:		kvm_arch::tsc_write_lock
Type:		raw_spinlock
Arch:		x86
Protects:	- kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
		- tsc offset in vmcb
Comment:	'raw' because updating the tsc offsets must not be preempted.

Name:		kvm->mmu_lock
Type:		spinlock_t
Arch:		any
Protects:	- shadow page/shadow tlb entry
Comment:	it is a spinlock since it is used in the mmu notifier.

Name:		kvm->srcu
Type:		srcu lock
Arch:		any
Protects:	- kvm->memslots
		- kvm->buses
Comment:	The srcu read lock must be held while accessing memslots (e.g.
		when using gfn_to_* functions) and while accessing in-kernel
		MMIO/PIO address->device structure mapping (kvm->buses).
		The srcu index can be stored in kvm_vcpu->srcu_idx per vcpu
		if it is needed by multiple functions.
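
		A typical read side, as a sketch (srcu_read_lock() and
		srcu_read_unlock() are the standard SRCU primitives;
		gfn_to_memslot() stands in for any memslot accessor):

			struct kvm_memory_slot *slot;
			int idx;

			idx = srcu_read_lock(&kvm->srcu);
			/* kvm->memslots and kvm->buses may be
			 * dereferenced inside this section, e.g.:
			 */
			slot = gfn_to_memslot(kvm, gfn);
			srcu_read_unlock(&kvm->srcu, idx);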

Name:		blocked_vcpu_on_cpu_lock
Type:		spinlock_t
Arch:		x86
Protects:	blocked_vcpu_on_cpu
Comment:	This is a per-CPU lock and it is used for VT-d posted-interrupts.
		When VT-d posted-interrupts are supported and the VM has assigned
		devices, we put the blocked vCPU on the list blocked_vcpu_on_cpu
		protected by blocked_vcpu_on_cpu_lock. When the VT-d hardware
		issues a wakeup notification event because an external interrupt
		from an assigned device arrived, we find the vCPU on the list
		and wake it up.