x86, vdso: Use asm volatile in __getcpu
author	Andy Lutomirski <luto@amacapital.net>
	Sun, 21 Dec 2014 16:57:46 +0000 (08:57 -0800)
committer	Andy Lutomirski <luto@amacapital.net>
	Tue, 23 Dec 2014 21:05:30 +0000 (13:05 -0800)
commit	1ddf0b1b11aa8a90cef6706e935fc31c75c406ba
tree	78e79b96126a6f54c5f4afa37d4f11f07aaab6d8
parent	394f56fe480140877304d342dec46d50dc823d46
x86, vdso: Use asm volatile in __getcpu

In Linux 3.18 and below, GCC hoists the lsl instructions in the
pvclock code all the way to the beginning of __vdso_clock_gettime,
slowing the non-paravirt case significantly.  For unknown reasons,
presumably related to the removal of a branch, the performance issue
is gone as of

e76b027e6408 x86,vdso: Use LSL unconditionally for vgetcpu

but I don't trust GCC enough to expect the problem to stay fixed.

There should be no correctness issue, because the __getcpu calls in
__vdso_clock_gettime were never necessary in the first place.

Note to stable maintainers: In 3.18 and below, depending on
configuration, gcc 4.9.2 generates code like this:

     9c3:       44 0f 03 e8             lsl    %ax,%r13d
     9c7:       45 89 eb                mov    %r13d,%r11d
     9ca:       0f 03 d8                lsl    %ax,%ebx

This patch won't apply as is to any released kernel, but I'll send a
trivial backported version if needed.

Fixes: 51c19b4f5927 ("x86: vdso: pvclock gettime support")
Cc: stable@vger.kernel.org # 3.8+
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
arch/x86/include/asm/vgtod.h