author     Sean Christopherson <sean.j.christopherson@intel.com>
           Wed, 8 Jan 2020 20:24:46 +0000 (12:24 -0800)
committer  Paolo Bonzini <pbonzini@redhat.com>
           Mon, 27 Jan 2020 19:00:08 +0000 (20:00 +0100)
commit     293e306e7faac4eafaefb9518a1cd6eaecad88e9
tree       5f602370c71d3ed81ef44618078f55c870e108f3
parent     d32ec81bab670e599e645e1d1d5231d62de7d0d6
KVM: x86/mmu: Fold max_mapping_level() into kvm_mmu_hugepage_adjust()

Fold max_mapping_level() into kvm_mmu_hugepage_adjust() now that HugeTLB
mappings are handled in kvm_mmu_hugepage_adjust(), i.e. there isn't a
need to pre-calculate the max mapping level.  Co-locating all hugepage
checks eliminates a memslot lookup, at the cost of performing the
__mmu_gfn_lpage_is_disallowed() checks while holding mmu_lock.
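
As a rough illustration of the resulting shape (a simplified sketch, not
the actual diff: the pfn/compound-page handling is elided and the body is
abbreviated, though kvm_vcpu_gfn_to_memslot(),
__mmu_gfn_lpage_is_disallowed() and PT_PAGE_TABLE_LEVEL are the real
helpers/constants in this era's mmu.c):

    static void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
                                        int max_level, kvm_pfn_t *pfnp,
                                        int *levelp)
    {
            struct kvm_memory_slot *slot;

            if (unlikely(max_level == PT_PAGE_TABLE_LEVEL))
                    return;

            /* The single memslot lookup, now done under mmu_lock. */
            slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
            if (!slot)
                    return;

            /*
             * Walk down from the largest candidate level; this check
             * subsumes the old pre-computed max_mapping_level().
             */
            while (max_level > PT_PAGE_TABLE_LEVEL &&
                   __mmu_gfn_lpage_is_disallowed(gfn, max_level, slot))
                    max_level--;

            if (max_level == PT_PAGE_TABLE_LEVEL)
                    return;

            /* pfn alignment/compound-page adjustment elided. */
            *levelp = min(*levelp, max_level);
    }

The point of the sketch is that the memslot lookup happens exactly once,
inside the function, instead of once in a pre-computed
max_mapping_level() and again here.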

The latency of lpage_is_disallowed() is likely negligible relative to
the rest of the code run while holding mmu_lock, and can be offset to
some extent by eliminating the mmu_gfn_lpage_is_disallowed() check in
set_spte() in a future patch.  Eliminating the check in set_spte() is
made possible by performing the initial lpage_is_disallowed() checks
while holding mmu_lock.
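
For reference, the check that becomes removable looks roughly like the
following (a schematic fragment abbreviated from set_spte(), with the
surrounding logic elided; the removal itself lands only in the later
patch):

    /*
     * In set_spte(): with the level already vetted under mmu_lock by
     * kvm_mmu_hugepage_adjust(), this re-check can no longer fire, so
     * a future patch can delete it outright.
     */
    if (level > PT_PAGE_TABLE_LEVEL &&
        mmu_gfn_lpage_is_disallowed(vcpu, gfn, level))
            goto done;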

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/mmu/mmu.c
arch/x86/kvm/mmu/paging_tmpl.h