mm/thp: fix call to mmu_notifier in set_pmd_migration_entry() v2
author Jérôme Glisse <jglisse@redhat.com>
Sat, 13 Oct 2018 04:34:36 +0000 (21:34 -0700)
committer Juerg Haefliger <juergh@canonical.com>
Wed, 24 Jul 2019 01:53:47 +0000 (19:53 -0600)
BugLink: https://bugs.launchpad.net/bugs/1836426
commit bfba8e5cf28f413aa05571af493871d74438979f upstream.

Inside set_pmd_migration_entry() we are holding page table locks and thus
we cannot sleep, so we cannot call invalidate_range_start/end().

So remove the calls to mmu_notifier_invalidate_range_start/end() because they
are already called inside the function that calls set_pmd_migration_entry()
(see try_to_unmap_one()).
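
For illustration only, a minimal caller-side sketch of that pattern follows
(this is not the upstream try_to_unmap_one() body: the name
try_to_unmap_one_sketch and the trimmed-down walk loop are invented for this
example, and it assumes the pre-4.20 mmu_notifier API used by this tree, where
invalidate_range_start/end take mm, start and end directly):

#include <linux/mm.h>
#include <linux/rmap.h>
#include <linux/swapops.h>
#include <linux/mmu_notifier.h>

/*
 * Sketch only: the range invalidation brackets the whole page table walk
 * in the caller, so set_pmd_migration_entry() never has to call
 * mmu_notifier_invalidate_range_start/end() while the page table lock is
 * held (where sleeping is forbidden).
 */
static void try_to_unmap_one_sketch(struct page *page,
				    struct vm_area_struct *vma,
				    unsigned long address)
{
	struct page_vma_mapped_walk pvmw = {
		.page = page,
		.vma = vma,
		.address = address,
	};
	unsigned long start = address;
	/* Assume the worst case (a PMD-sized mapping) for the range. */
	unsigned long end = min(vma->vm_end,
				start + (PAGE_SIZE << compound_order(page)));

	/* May sleep: no page table lock is held yet. */
	mmu_notifier_invalidate_range_start(vma->vm_mm, start, end);

	while (page_vma_mapped_walk(&pvmw)) {
		/*
		 * The walk returns with the page table lock held, so only
		 * non-sleeping work is allowed here; this is why the
		 * start/end calls were removed from set_pmd_migration_entry().
		 */
		if (pvmw.pmd && !pvmw.pte)
			set_pmd_migration_entry(&pvmw, page);
	}

	/* May sleep again: page_vma_mapped_walk() drops all locks on exit. */
	mmu_notifier_invalidate_range_end(vma->vm_mm, start, end);
}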

Link: http://lkml.kernel.org/r/20181012181056.7864-1-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reported-by: Andrea Arcangeli <aarcange@redhat.com>
Reviewed-by: Zi Yan <zi.yan@cs.rutgers.edu>
Acked-by: Michal Hocko <mhocko@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Nellans <dnellans@nvidia.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
mm/huge_memory.c

index ec05d69432e51a5c8b19261728b9fb90be4129c3..917878a7c7697241db1d703f1c011cc13e367011 100644
@@ -2893,9 +2893,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
        if (!(pvmw->pmd && !pvmw->pte))
                return;
 
-       mmu_notifier_invalidate_range_start(mm, address,
-                       address + HPAGE_PMD_SIZE);
-
        flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
        pmdval = *pvmw->pmd;
        pmdp_invalidate(vma, address, pvmw->pmd);
@@ -2908,9 +2905,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
        set_pmd_at(mm, address, pvmw->pmd, pmdswp);
        page_remove_rmap(page, true);
        put_page(page);
-
-       mmu_notifier_invalidate_range_end(mm, address,
-                       address + HPAGE_PMD_SIZE);
 }
 
 void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)