Fix:
[ 3190.059226] BUG: unable to handle kernel NULL pointer dereference at (null)
[ 3190.062224] IP: [<ffffffffa02aac66>] mmu_page_zap_pte+0x10/0xa7 [kvm]
[ 3190.063760] PGD 104f50067 PUD 112bea067 PMD 0
[ 3190.065309] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 3190.066860] CPU 1
[ ...... ]
[ 3190.109629] Call Trace:
[ 3190.111342]  [<ffffffffa02aada6>] kvm_mmu_prepare_zap_page+0xa9/0x1fc [kvm]
[ 3190.113091]  [<ffffffffa02ab2f5>] mmu_shrink+0x11f/0x1f3 [kvm]
[ 3190.114844]  [<ffffffffa02ab25d>] ? mmu_shrink+0x87/0x1f3 [kvm]
[ 3190.116598]  [<ffffffff81150c9d>] ? prune_super+0x142/0x154
[ 3190.118333]  [<ffffffff8110a4f4>] ? shrink_slab+0x39/0x31e
[ 3190.120043]  [<ffffffff8110a687>] shrink_slab+0x1cc/0x31e
[ 3190.121718]  [<ffffffff8110ca1d>] do_try_to_free_pages
This is caused by shrinking a page from an empty mmu. Although we check
n_used_mmu_pages before shrinking, the check is useless because it is done
outside of mmu-lock, so the active page list can become empty before the
lock is taken.
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
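
For illustration, below is a small standalone userspace program (not kernel
code; the list_head helpers, container_of() and the page_like struct are
minimal re-implementations written for this sketch) showing why taking
container_of() of the .prev pointer of an empty list yields a bogus element.
This is the same mechanism by which an empty active_mmu_pages list turns
into the NULL dereference in mmu_page_zap_pte() above.

/* Build: cc -o empty-list empty-list.c */
#include <stdio.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head;
	head->prev = head;
}

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

struct page_like {
	unsigned long gfn;	/* stand-in for kvm_mmu_page state */
	struct list_head link;
};

int main(void)
{
	struct list_head active;

	INIT_LIST_HEAD(&active);	/* empty list, like a fresh mmu */

	/*
	 * Without the list_empty() check, active.prev points back at
	 * 'active' itself, so 'page' would be a bogus pointer and
	 * page->gfn would be garbage read from the list head.
	 */
	if (!list_empty(&active)) {
		struct page_like *page =
			container_of(active.prev, struct page_like, link);
		printf("zapping gfn %lu\n", page->gfn);
	} else {
		printf("list empty, nothing to zap\n");
	}

	return 0;
}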
 {
 	struct kvm_mmu_page *page;

+	if (list_empty(&kvm->arch.active_mmu_pages))
+		return;
+
 	page = container_of(kvm->arch.active_mmu_pages.prev,
 			    struct kvm_mmu_page, link);
 	kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
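
Since this helper runs with mmu-lock already held, the list_empty() check
cannot be invalidated between the test and the container_of(): the list is
only modified under the same lock. The earlier n_used_mmu_pages check, done
without the lock, could give no such guarantee.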