From 39f2bfe0177d3f56c9feac4e70424e4952949e2a Mon Sep 17 00:00:00 2001
From: Sean Christopherson <seanjc@google.com>
Date: Wed, 10 Jan 2024 13:47:23 -0800
Subject: [PATCH] sched/core: Drop spinlocks on contention iff kernel is
 preemptible

Use preempt_model_preemptible() to detect a preemptible kernel when
deciding whether or not to reschedule in order to drop a contended
spinlock or rwlock. Because PREEMPT_DYNAMIC selects PREEMPTION, kernels
built with PREEMPT_DYNAMIC=y will yield contended locks even if the live
preemption model is "none" or "voluntary". In short, make kernels with
dynamically selected models behave the same as kernels with statically
selected models.

Somewhat counter-intuitively, NOT yielding a lock can provide better
latency for the relevant tasks/processes. E.g. KVM x86's mmu_lock, a
rwlock, is often contended between an invalidation event (takes mmu_lock
for write) and a vCPU servicing a guest page fault (takes mmu_lock for
read). For _some_ setups, letting the invalidation task complete even
if there is mmu_lock contention provides lower latency for *all* tasks,
i.e. the invalidation completes sooner *and* the vCPU services the guest
page fault sooner.

But even KVM's mmu_lock behavior isn't uniform, e.g. the "best" behavior
can vary depending on the host VMM, the guest workload, the number of
vCPUs, the number of pCPUs in the host, why there is lock contention, etc.

In other words, simply deleting the CONFIG_PREEMPTION guard (or doing the
opposite and removing contention yielding entirely) needs to come with a
big pile of data proving that changing the status quo is a net positive.

Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Marco Elver <elver@google.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 include/linux/sched.h | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 292c31697248..a274bc85f222 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2234,11 +2234,10 @@ static inline bool preempt_model_preemptible(void)
  */
 static inline int spin_needbreak(spinlock_t *lock)
 {
-#ifdef CONFIG_PREEMPTION
+	if (!preempt_model_preemptible())
+		return 0;
+
 	return spin_is_contended(lock);
-#else
-	return 0;
-#endif
 }
 
 /*
@@ -2251,11 +2250,10 @@ static inline int spin_needbreak(spinlock_t *lock)
  */
 static inline int rwlock_needbreak(rwlock_t *lock)
 {
-#ifdef CONFIG_PREEMPTION
+	if (!preempt_model_preemptible())
+		return 0;
+
 	return rwlock_is_contended(lock);
-#else
-	return 0;
-#endif
 }
 
 static __always_inline bool need_resched(void)
-- 
2.39.2
