sched/fair: Ensure _sum and _avg values stay consistent
author Odin Ugedal <odin@uged.al>
Thu, 24 Jun 2021 11:18:15 +0000 (13:18 +0200)
committer Peter Zijlstra <peterz@infradead.org>
Mon, 28 Jun 2021 13:42:24 +0000 (15:42 +0200)
The _sum and _avg values are, in general, kept in sync with each other
through the PELT divider. They are however not always completely in
sync, which results in situations where _sum reaches zero while _avg
stays positive. Such situations are undesirable.

This comes from the fact that PELT will increase period_contrib, and
thereby the PELT divider, without updating the _sum and _avg values to
keep them in perfect sync (where _sum == _avg * divider). However, such
a PELT change will never lower _sum, so it can never by itself produce a
situation where _sum is zero while _avg is not.
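
As a rough worked example (round numbers, made up purely to illustrate
the mechanism, not actual PELT constants): suppose the divider has grown
from 46000 to 47000 while a cfs_rq has _avg == 100, so _sum is still
100 * 46000 == 4600000 even though _avg * divider is now 4700000. If a
task with _avg == 99 is then removed, the old code subtracted 99 from
_avg (leaving 1) and 99 * 47000 == 4653000 from _sum; since
4653000 >= 4600000, sub_positive() clamps _sum to 0 while _avg stays at
1, which is exactly the inconsistent state described above.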

Therefore, we need to ensure that when load is subtracted outside of
PELT and _sum reaches zero, _avg is set to zero as well. This occurs
when (_sum < _avg * divider) and the subtracted (_avg * divider) is
bigger than or equal to the current _sum, while the subtracted _avg is
smaller than the current _avg.
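
To make the before/after behaviour easy to see outside the kernel tree,
here is a minimal stand-alone sketch in plain C. The struct and helper
names (toy_avg, toy_sub_positive, remove_load_old/new) are simplified
stand-ins invented for illustration, not kernel API; the actual change
to update_cfs_rq_load_avg() is in the diff below.

  /* Minimal stand-alone illustration; types simplified from the kernel's
   * struct sched_avg. "divider" plays the role of the PELT divider. */
  struct toy_avg {
  	unsigned long load_avg;
  	unsigned long load_sum;
  };

  /* Subtract with clamping at zero, like the kernel's sub_positive(). */
  static void toy_sub_positive(unsigned long *val, unsigned long sub)
  {
  	*val = (*val > sub) ? (*val - sub) : 0;
  }

  /* Old approach: subtract from _avg and _sum independently; _sum can
   * hit zero while _avg stays positive once the two have drifted. */
  static void remove_load_old(struct toy_avg *sa, unsigned long r,
  			    unsigned long divider)
  {
  	toy_sub_positive(&sa->load_avg, r);
  	toy_sub_positive(&sa->load_sum, r * divider);
  }

  /* New approach: subtract from _avg, then re-derive _sum from it, so
   * (_sum == _avg * divider) holds again and _sum is zero iff _avg is. */
  static void remove_load_new(struct toy_avg *sa, unsigned long r,
  			    unsigned long divider)
  {
  	toy_sub_positive(&sa->load_avg, r);
  	sa->load_sum = sa->load_avg * divider;
  }

Besides guaranteeing that _sum is zero exactly when _avg is, re-deriving
_sum from _avg also restores the (_sum == _avg * divider) invariant and
removes any drift that had accumulated from earlier divider growth.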

Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Odin Ugedal <odin@uged.al>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Link: https://lore.kernel.org/r/20210624111815.57937-1-odin@uged.al
kernel/sched/fair.c

index 4a3e61a88acce6a3923550b7243953fc2303219e..45edf61eed7378730a5b1080ee2864bbd453b20b 100644
@@ -3657,15 +3657,15 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 
                r = removed_load;
                sub_positive(&sa->load_avg, r);
-               sub_positive(&sa->load_sum, r * divider);
+               sa->load_sum = sa->load_avg * divider;
 
                r = removed_util;
                sub_positive(&sa->util_avg, r);
-               sub_positive(&sa->util_sum, r * divider);
+               sa->util_sum = sa->util_avg * divider;
 
                r = removed_runnable;
                sub_positive(&sa->runnable_avg, r);
-               sub_positive(&sa->runnable_sum, r * divider);
+               sa->runnable_sum = sa->runnable_avg * divider;
 
                /*
                 * removed_runnable is the unweighted version of removed_load so we