mm, vmscan: avoid thrashing anon lru when free + file is low
author David Rientjes <rientjes@google.com>
Mon, 10 Jul 2017 22:47:20 +0000 (15:47 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Mon, 10 Jul 2017 23:32:30 +0000 (16:32 -0700)
The purpose of the code that commit 623762517e23 ("revert 'mm: vmscan:
do not swap anon pages just because free+file is low'") reintroduces is
to prefer swapping anonymous memory rather than thrashing the file lru.

If the anonymous inactive lru for the set of eligible zones is
considered low, however, or if the list is too short at the given
reclaim priority for effective anonymous-only reclaim, avoid forcing
SCAN_ANON.  Forcing SCAN_ANON would end up thrashing the small list and
leaving unreclaimed memory on the file lrus.

If the inactive anon list is insufficient, fall back to balanced reclaim
so the file lru doesn't remain untouched.
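
As a minimal userspace sketch of the resulting decision (the function and
names below are illustrative, not the kernel's; the real check lives in
get_scan_count(), shown in the diff below, and SCAN_EQUAL here simply
stands in for falling through to the normal balanced heuristics):

#include <stdbool.h>
#include <stdio.h>

enum scan_balance { SCAN_EQUAL, SCAN_ANON };

/*
 * Model of the heuristic: force anon-only scanning only when free + file
 * pages sit below the high watermarks AND the inactive anon list is both
 * not "low" and still nonzero after shifting by the reclaim priority.
 */
static enum scan_balance pick_scan_balance(unsigned long pgdatfree,
					   unsigned long pgdatfile,
					   unsigned long total_high_wmark,
					   bool inactive_anon_is_low,
					   unsigned long inactive_anon_pages,
					   int priority)
{
	if (pgdatfile + pgdatfree <= total_high_wmark &&
	    !inactive_anon_is_low &&
	    (inactive_anon_pages >> priority))
		return SCAN_ANON;
	return SCAN_EQUAL;	/* let balanced reclaim touch the file lru too */
}

int main(void)
{
	/* hypothetical page counts */
	printf("%d\n", pick_scan_balance(100, 200, 1000, false, 8192, 12)); /* 1: SCAN_ANON */
	printf("%d\n", pick_scan_balance(100, 200, 1000, false, 3000, 12)); /* 0: SCAN_EQUAL */
	return 0;
}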

[akpm@linux-foundation.org: fix build]
Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1705011432220.137835@chino.kir.corp.google.com
Signed-off-by: David Rientjes <rientjes@google.com>
Suggested-by: Minchan Kim <minchan@kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/vmscan.c

index 9e95fafc026b4174331aee5b8dd91f0ba099a8c4..e9210f825219c4ec944b747a84656c5e8f5fd007 100644 (file)
@@ -2228,8 +2228,17 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
                }
 
                if (unlikely(pgdatfile + pgdatfree <= total_high_wmark)) {
-                       scan_balance = SCAN_ANON;
-                       goto out;
+                       /*
+                        * Force SCAN_ANON if there are enough inactive
+                        * anonymous pages on the LRU in eligible zones.
+                        * Otherwise, the small LRU gets thrashed.
+                        */
+                       if (!inactive_list_is_low(lruvec, false, memcg, sc, false) &&
+                           lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx)
+                                       >> sc->priority) {
+                               scan_balance = SCAN_ANON;
+                               goto out;
+                       }
                }
        }
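
For a sense of scale: DEF_PRIORITY is 12 in the kernel, so at the default
priority the shift-based guard only passes when the inactive anon list
holds at least 2^12 = 4096 pages (the list size below is hypothetical):

	unsigned long inactive_anon = 3000;	/* hypothetical list size, in pages */
	int priority = 12;			/* DEF_PRIORITY */
	/* 3000 >> 12 == 0: too small to reclaim from effectively, so
	 * SCAN_ANON is not forced and reclaim stays balanced. */
	unsigned long nr = inactive_anon >> priority;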