git.proxmox.com Git - mirror_ubuntu-zesty-kernel.git/commit
x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()
author Peter Zijlstra <peterz@infradead.org>
Tue, 15 Nov 2016 15:47:06 +0000 (16:47 +0100)
committer Ingo Molnar <mingo@kernel.org>
Tue, 22 Nov 2016 11:48:11 +0000 (12:48 +0100)
commit 3cded41794818d788aa1dc028ede4a1c1222d937
tree b38f7540e28ee21ccec1e2fb850ff4235041984e
parent 05ffc951392df57edecc2519327b169210c3df75
x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()

Avoid the pointless function call to pv_lock_ops.vcpu_is_preempted()
when a paravirt-spinlock-enabled kernel is run on native hardware.
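
On native hardware the op can only ever report "not preempted", so the
call buys nothing. As a rough sketch (following the callee-save thunk
convention of the existing pv_lock_ops code; the function name below is
an assumption in the style of __native_queued_spin_unlock(), not quoted
from the diff), the native backing implementation amounts to:

  /*
   * Sketch: native backing for pv_lock_ops.vcpu_is_preempted.
   * Bare metal has no hypervisor that could preempt a vCPU,
   * so this can only ever return false.
   */
  __visible bool __native_vcpu_is_preempted(long cpu)
  {
          return false;
  }
  PV_CALLEE_SAVE_REGS_THUNK(__native_vcpu_is_preempted);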

Do this by patching out the CALL instruction with "XOR %RAX,%RAX",
which has the same effect (a zero return value).
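
Roughly, this reuses the DEF_NATIVE()/native_patch() machinery that
already patches pv_lock_ops.queued_spin_unlock; the fragment below is
illustrative of that pattern, not a copy of the diff (the helper name
follows the existing pv_is_native_spin_unlock() and is an assumption):

  /* Replacement bytes for the pv call site: inline "return 0/false". */
  DEF_NATIVE(pv_lock_ops, vcpu_is_preempted, "xor %rax, %rax");

  /*
   * Inside native_patch()'s switch: if the op still points at the
   * native implementation, copy the XOR bytes over the CALL at the
   * patch site; apply_paravirt() pads the remaining bytes with NOPs.
   */
  case PARAVIRT_PATCH(pv_lock_ops.vcpu_is_preempted):
          if (pv_is_native_vcpu_is_preempted()) {
                  start = start_pv_lock_ops_vcpu_is_preempted;
                  end   = end_pv_lock_ops_vcpu_is_preempted;
                  goto patch_site;
          }
          /* otherwise fall back to the default (indirect CALL) patching */
          goto patch_default;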

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David.Laight@ACULAB.COM
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: benh@kernel.crashing.org
Cc: boqun.feng@gmail.com
Cc: borntraeger@de.ibm.com
Cc: bsingharora@gmail.com
Cc: dave@stgolabs.net
Cc: jgross@suse.com
Cc: kernellwp@gmail.com
Cc: konrad.wilk@oracle.com
Cc: mpe@ellerman.id.au
Cc: paulmck@linux.vnet.ibm.com
Cc: paulus@samba.org
Cc: pbonzini@redhat.com
Cc: rkrcmar@redhat.com
Cc: will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/include/asm/paravirt.h
arch/x86/include/asm/paravirt_types.h
arch/x86/include/asm/qspinlock.h
arch/x86/include/asm/spinlock.h
arch/x86/kernel/kvm.c
arch/x86/kernel/paravirt-spinlocks.c
arch/x86/kernel/paravirt_patch_32.c
arch/x86/kernel/paravirt_patch_64.c
arch/x86/xen/spinlock.c