sched/fair: Implement more accurate async detach
author    Peter Zijlstra <peterz@infradead.org>
          Fri, 12 May 2017 12:18:10 +0000 (14:18 +0200)
committer Ingo Molnar <mingo@kernel.org>
          Fri, 29 Sep 2017 17:35:17 +0000 (19:35 +0200)
commit    9a2dd585b2c431ec1e5d46a9d9568291c7a534cc
tree      987633c0ac6fa9ce7c9f278157d9621f6f1c0fbb
parent    f207934fb79d1af1de1a62b09d56a3a1914172c4

The problem with the overestimate is that it will subtract too big a
value from the load_sum, thereby pushing it down further than it ought
to go. Since runnable_load_avg is not subject to a similar 'force',
this results in the occasional 'runnable_load > load' situation.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c