1. 18 Nov, 2020 1 commit
  2. 29 Oct, 2020 2 commits
  3. 09 Sep, 2020 3 commits
  4. 07 Sep, 2020 6 commits
  5. 11 Jun, 2020 1 commit
  6. 09 Jun, 2020 2 commits
    • mm: reorder includes after introduction of linux/pgtable.h · 65fddcfc
      Mike Rapoport authored
      
      The replacement of <asm/pgtable.h> with <linux/pgtable.h> left the include
      of the latter in the middle of the asm includes.  Fix this up with the aid
      of the script below and manual adjustments here and there.
      
      	import sys
      	import re
      
      	if len(sys.argv) is not 3:
      	    print "USAGE: %s <file> <header>" % (sys.argv[0])
      	    sys.exit(1)
      
      	hdr_to_move="#include <linux/%s>" % sys.argv[2]
      	moved = False
      	in_hdrs = False
      
      	with open(sys.argv[1], "r") as f:
      	    lines = f.readlines()
      	    for _line in lines:
      		line = _line.rstrip('\n')
      		if line == hdr_to_move:
      		    continue
      		if line.startswith("#include <linux/"):
      		    in_hdrs = True
      		elif not moved and in_hdrs:
      		    moved = True
      		    print hdr_to_move
      		print line
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-4-rppt@kernel.org
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      65fddcfc
    • mm: introduce include/linux/pgtable.h · ca5999fd
      Mike Rapoport authored
      
      The new include/linux/pgtable.h is going to be the home of the generic
      page table manipulation functions.
      
      Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
      make the latter include asm/pgtable.h.
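
      As an illustrative sketch only (not the verbatim kernel file), the new
      wrapper can be pictured like this:

      	/* include/linux/pgtable.h -- illustrative skeleton, not the real file */
      	#ifndef _LINUX_PGTABLE_H
      	#define _LINUX_PGTABLE_H

      	#include <asm/pgtable.h>	/* architecture-specific page table bits */

      	/* the generic page table helpers that used to live in
      	 * asm-generic/pgtable.h are collected below this point */

      	#endif /* _LINUX_PGTABLE_H */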
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ca5999fd
  7. 18 Oct, 2019 4 commits
    • x86/asm/64: Change all ENTRY+END to SYM_CODE_* · bc7b11c0
      Jiri Slaby authored
      
      Change all assembly code which is marked using END (and not ENDPROC)
      and switch it to the appropriate new annotations, SYM_CODE_START and
      SYM_CODE_END.
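
      For reference, a simplified sketch of what the new annotations expand to,
      modeled on include/linux/linkage.h (the in-tree definitions carry more
      variants, e.g. the _LOCAL and _NOALIGN forms):

      	#define SYM_CODE_START(name)				\
      		SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)

      	#define SYM_CODE_END(name)				\
      		SYM_END(name, SYM_T_NONE)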
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> [xen bits]
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Cao jin <caoj.fnst@cn.fujitsu.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: linux-arch@vger.kernel.org
      Cc: Maran Wilson <maran.wilson@oracle.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Cc: xen-devel@lists.xenproject.org
      Link: https://lkml.kernel.org/r/20191011115108.12392-24-jslaby@suse.cz
      bc7b11c0
    • x86/asm: Do not annotate functions with GLOBAL · 37818afd
      Jiri Slaby authored
      
      GLOBAL is a custom x86 macro and is going to die very soon. It was
      meant for global symbols, but here it was used for functions. Instead,
      use the new macros SYM_FUNC_START* and SYM_CODE_START* (depending on the
      type of the function), which are dedicated to global functions. And since
      both require closing with SYM_*_END, do that here too.
      
      startup_64, which does not use GLOBAL but uses .globl explicitly, is
      converted too.
      
      "No alignments" are preserved.
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Allison Randal <allison@lohutok.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Cao jin <caoj.fnst@cn.fujitsu.com>
      Cc: Enrico Weigelt <info@metux.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kate Stewart <kstewart@linuxfoundation.org>
      Cc: linux-arch@vger.kernel.org
      Cc: Maran Wilson <maran.wilson@oracle.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20191011115108.12392-17-jslaby@suse.cz
      37818afd
    • x86/asm/head: Annotate data appropriately · b1bd27b9
      Jiri Slaby authored
      
      Use the new SYM_DATA, SYM_DATA_START, and SYM_DATA_END in both the 32-bit
      and 64-bit head_*.S. In the 64-bit version, also define
      SYM_DATA_START_PAGE_ALIGNED locally, using the new SYM_START. It is used
      in the code instead of NEXT_PAGE(), which was defined in this file and
      had been using the obsolete macro GLOBAL().
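
      A sketch of how such a page-aligned data helper can be built on SYM_START
      (the actual local definition in head_64.S may differ in detail):

      	#define SYM_DATA_START_PAGE_ALIGNED(name)			\
      		SYM_START(name, SYM_L_GLOBAL, .balign PAGE_SIZE)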
      
      Now, the data in the 64-bit object file look sane:
        Value   Size Type    Bind   Vis      Ndx Name
          0000  4096 OBJECT  GLOBAL DEFAULT   15 init_level4_pgt
          1000  4096 OBJECT  GLOBAL DEFAULT   15 level3_kernel_pgt
          2000  2048 OBJECT  GLOBAL DEFAULT   15 level2_kernel_pgt
          3000  4096 OBJECT  GLOBAL DEFAULT   15 level2_fixmap_pgt
          4000  4096 OBJECT  GLOBAL DEFAULT   15 level1_fixmap_pgt
          5000     2 OBJECT  GLOBAL DEFAULT   15 early_gdt_descr
          5002     8 OBJECT  LOCAL  DEFAULT   15 early_gdt_descr_base
          500a     8 OBJECT  GLOBAL DEFAULT   15 phys_base
          0000     8 OBJECT  GLOBAL DEFAULT   17 initial_code
          0008     8 OBJECT  GLOBAL DEFAULT   17 initial_gs
          0010     8 OBJECT  GLOBAL DEFAULT   17 initial_stack
          0000     4 OBJECT  GLOBAL DEFAULT   19 early_recursion_flag
          1000  4096 OBJECT  GLOBAL DEFAULT   19 early_level4_pgt
          2000 0x40000 OBJECT  GLOBAL DEFAULT   19 early_dynamic_pgts
          0000  4096 OBJECT  GLOBAL DEFAULT   22 empty_zero_page
      
      All have correct size and type now.
      
      Note that this also removes the implicit 16-byte alignment previously
      inserted by ENTRY:

      * initial_code, setup_once_ref, initial_page_table, initial_stack,
        boot_gdt are still aligned
      * early_gdt_descr is now properly aligned, as was intended before ENTRY
        was added there a long time ago
      * phys_base's alignment is kept by an explicitly added new alignment
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Cao jin <caoj.fnst@cn.fujitsu.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: linux-arch@vger.kernel.org
      Cc: Maran Wilson <maran.wilson@oracle.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20191011115108.12392-12-jslaby@suse.cz
      b1bd27b9
    • x86/asm: Annotate local pseudo-functions · ef77e688
      Jiri Slaby authored
      
      Use the newly added SYM_CODE_START_LOCAL* to annotate the beginnings of
      all pseudo-functions (those ending with END until now) which do not
      have a ".globl" annotation. This is needed to balance END for tools that
      generate debuginfo. Note that ENDs are switched to SYM_CODE_END too, so
      that everybody can see the pairing.
      
      C-like functions (which handle the frame pointer etc.) are not annotated
      here, hence the SYM_CODE_* macros are used, not SYM_FUNC_*. Note that the
      32-bit version of early_idt_handler_common already had ENDPROC -- switch
      that to SYM_CODE_END for the same reason as above (and to match the
      64-bit version).

      While early_idt_handler_common is LOCAL, its name is not prepended with
      ".L" as it happens to appear in call traces.
      
      bad_get_user* and bad_put_user are now aligned, as they are separate
      functions. They do not mind being aligned -- there is no need to be
      compact there.

      early_idt_handler_common is aligned now too, as it comes after
      early_idt_handler_array, so there is likewise no need to be compact there.
      
      verify_cpu is self-standing and included in other .S files, so align it
      too.
      
      The others have alignment preserved to what it used to be (using the
      _NOALIGN variant of macros).
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Alexios Zavras <alexios.zavras@intel.com>
      Cc: Allison Randal <allison@lohutok.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Cao jin <caoj.fnst@cn.fujitsu.com>
      Cc: Enrico Weigelt <info@metux.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: linux-arch@vger.kernel.org
      Cc: Maran Wilson <maran.wilson@oracle.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20191011115108.12392-6-jslaby@suse.cz
      ef77e688
  8. 05 Oct, 2019 1 commit
    • x86/asm: Reorder early variables · 1a8770b7
      Jiri Slaby authored
      
      Moving early_recursion_flag (4 bytes) after early_level4_pgt (4k) and
      early_dynamic_pgts (256k) saves the 4k that were used to align
      early_level4_pgt after early_recursion_flag.
      
      The real improvement is merely on the source code side. Previously it
      was:
      * __INITDATA + .balign
      * early_recursion_flag variable
      * a ton of CPP MACROS
      * __INITDATA (again)
      * early_top_pgt and early_dynamic_pgts variables
      * .data
      
      Now, it is a bit simpler:
      * a ton of CPP MACROS
      * __INITDATA + .balign
      * early_top_pgt and early_dynamic_pgts variables
      * early_recursion_flag variable
      * .data
      
      On the binary level the change looks like this:
      Before:
       (sections)
        12 .init.data    00042000  0000000000000000  0000000000000000 00008000  2**12
       (symbols)
        000000       4 OBJECT  GLOBAL DEFAULT   22 early_recursion_flag
        001000    4096 OBJECT  GLOBAL DEFAULT   22 early_top_pgt
        002000 0x40000 OBJECT  GLOBAL DEFAULT   22 early_dynamic_pgts
      
      After:
       (sections)
        12 .init.data    00041004  0000000000000000  0000000000000000 00008000  2**12
       (symbols)
        000000    4096 OBJECT  GLOBAL DEFAULT   22 early_top_pgt
        001000 0x40000 OBJECT  GLOBAL DEFAULT   22 early_dynamic_pgts
        041000       4 OBJECT  GLOBAL DEFAULT   22 early_recursion_flag
      
      So the resulting vmlinux is smaller by 4k with my toolchain, as many
      other variables can be placed after early_recursion_flag to fill the
      rest of the page. Note that this is only .init data, so it is freed
      right after boot anyway. There are no on-disk savings -- compressing
      zeros is easy, so the size of bzImage is the same before and after the
      change.
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20191003095238.29831-1-jslaby@suse.cz
      1a8770b7
  9. 22 Jul, 2019 1 commit
  10. 18 Jul, 2019 1 commit
  11. 17 Jul, 2019 1 commit
  12. 17 Apr, 2019 1 commit
    • x86/irq/64: Split the IRQ stack into its own pages · e6401c13
      Andy Lutomirski authored
      
      Currently, the IRQ stack is hardcoded as the first page of the percpu
      area, and the stack canary lives on the IRQ stack. The former gets in
      the way of adding an IRQ stack guard page, and the latter is a potential
      weakness in the stack canary mechanism.
      
      Split the IRQ stack into its own private percpu pages.
      
      [ tglx: Make 64 and 32 bit share struct irq_stack ]
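
      As a sketch of the result (declaration modeled on the x86 headers; the
      names here are illustrative rather than authoritative), the IRQ stack
      becomes its own page-aligned per-CPU object:

      	/* a dedicated, page-aligned per-CPU backing store for the IRQ stack */
      	struct irq_stack {
      		char	stack[IRQ_STACK_SIZE];
      	} __aligned(IRQ_STACK_SIZE);

      	DECLARE_PER_CPU_PAGE_ALIGNED(struct irq_stack, irq_backing_store);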
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: Feng Tang <feng.tang@intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Jordan Borgner <mail@jordan-borgner.de>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Maran Wilson <maran.wilson@oracle.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Nicolai Stange <nstange@suse.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Pu Wen <puwen@hygon.cn>
      Cc: "Rafael Ávila de Espíndola" <rafael@espindo.la>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: x86-ml <x86@kernel.org>
      Cc: xen-devel@lists.xenproject.org
      Link: https://lkml.kernel.org/r/20190414160146.267376656@linutronix.de
      e6401c13
  13. 13 Dec, 2018 1 commit
  14. 20 Sep, 2018 1 commit
  15. 03 Sep, 2018 2 commits
  16. 03 Jul, 2018 1 commit
    • x86/asm/64: Use 32-bit XOR to zero registers · a7bea830
      Jan Beulich authored
      
      Some Intel CPUs don't recognize 64-bit XORs as zeroing idioms. Zeroing
      idioms don't require execution bandwidth, as they are handled in the
      frontend (through register renaming). Use 32-bit XORs instead.
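
      An illustrative inline-asm rendition of the idea (the patch itself touches
      .S files; "%k" asks GCC for the 32-bit register name):

      	/* writing the 32-bit register zero-extends to the full 64-bit
      	 * register, and the 32-bit form is recognized as a zeroing idiom */
      	static inline unsigned long zero_reg(void)
      	{
      		unsigned long v;

      		asm("xorl %k0, %k0" : "=r" (v));	/* rather than xorq %0, %0 */
      		return v;
      	}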
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: davem@davemloft.net
      Cc: herbert@gondor.apana.org.au
      Cc: pavel@ucw.cz
      Cc: rjw@rjwysocki.net
      Link: http://lkml.kernel.org/r/5B39FF1A02000078001CFB54@prv1-mh.provo.novell.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a7bea830
  17. 12 Apr, 2018 1 commit
    • x86/mm: Comment _PAGE_GLOBAL mystery · 430d4005
      Dave Hansen authored
      
      I was mystified as to where the _PAGE_GLOBAL in the kernel page tables
      for kernel text came from.  I audited all the places I could find, but
      I missed one: head_64.S.
      
      The page tables that we create in here live for a long time, and they
      also have _PAGE_GLOBAL set, regardless of whether the processor supports
      it or not.  It's harmless, and we got *lucky* that the pageattr code
      accidentally clears it when we wipe it out of __supported_pte_mask and
      then later try to mark kernel text read-only.
      
      Comment some of these properties to make them easier to find and
      understand in the future.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nadav Amit <namit@vmware.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20180406205513.079BB265@viggo.jf.intel.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      430d4005
  18. 21 Feb, 2018 3 commits
    • x86/mm: Optimize boot-time paging mode switching cost · 39b95522
      Kirill A. Shutemov authored
      
      By this point we have functioning boot-time switching between 4- and
      5-level paging mode, but the naive approach comes with a cost.

      The numbers below are for a kernel build, allmodconfig, 5 runs.
      
      CONFIG_X86_5LEVEL=n:
      
       Performance counter stats for 'sh -c make -j100 -B -k >/dev/null' (5 runs):
      
         17308719.892691      task-clock:u (msec)       #   26.772 CPUs utilized            ( +-  0.11% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
             331,993,164      page-faults:u             #    0.019 M/sec                    ( +-  0.01% )
      43,614,978,867,455      cycles:u                  #    2.520 GHz                      ( +-  0.01% )
      39,371,534,575,126      stalled-cycles-frontend:u #   90.27% frontend cycles idle     ( +-  0.09% )
      28,363,350,152,428      instructions:u            #    0.65  insn per cycle
                                                        #    1.39  stalled cycles per insn  ( +-  0.00% )
       6,316,784,066,413      branches:u                #  364.948 M/sec                    ( +-  0.00% )
         250,808,144,781      branch-misses:u           #    3.97% of all branches          ( +-  0.01% )
      
           646.531974142 seconds time elapsed                                          ( +-  1.15% )
      
      CONFIG_X86_5LEVEL=y:
      
       Performance counter stats for 'sh -c make -j100 -B -k >/dev/null' (5 runs):
      
         17411536.780625      task-clock:u (msec)       #   26.426 CPUs utilized            ( +-  0.10% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
             331,868,663      page-faults:u             #    0.019 M/sec                    ( +-  0.01% )
      43,865,909,056,301      cycles:u                  #    2.519 GHz                      ( +-  0.01% )
      39,740,130,365,581      stalled-cycles-frontend:u #   90.59% frontend cycles idle     ( +-  0.05% )
      28,363,358,997,959      instructions:u            #    0.65  insn per cycle
                                                        #    1.40  stalled cycles per insn  ( +-  0.00% )
       6,316,784,937,460      branches:u                #  362.793 M/sec                    ( +-  0.00% )
         251,531,919,485      branch-misses:u           #    3.98% of all branches          ( +-  0.00% )
      
           658.886307752 seconds time elapsed                                          ( +-  0.92% )
      
      The patch tries to fix the performance regression by using
      cpu_feature_enabled(X86_FEATURE_LA57) instead of pgtable_l5_enabled in
      all hot code paths. cpu_feature_enabled() statically patches the target
      code at boot for additional performance.
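
      In C terms the change boils down to something like the following sketch
      (simplified; the real kernel wiring has a few more wrinkles):

      	/* the feature check is patched in at boot via alternatives, so hot
      	 * paths pay no runtime cost for the paging-mode test */
      	static inline bool pgtable_l5_enabled(void)
      	{
      		return cpu_feature_enabled(X86_FEATURE_LA57);
      	}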
      
      CONFIG_X86_5LEVEL=y + the patch:
      
       Performance counter stats for 'sh -c make -j100 -B -k >/dev/null' (5 runs):
      
         17381990.268506      task-clock:u (msec)       #   26.907 CPUs utilized            ( +-  0.19% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
             331,862,625      page-faults:u             #    0.019 M/sec                    ( +-  0.01% )
      43,697,726,320,051      cycles:u                  #    2.514 GHz                      ( +-  0.03% )
      39,480,408,690,401      stalled-cycles-frontend:u #   90.35% frontend cycles idle     ( +-  0.05% )
      28,363,394,221,388      instructions:u            #    0.65  insn per cycle
                                                        #    1.39  stalled cycles per insn  ( +-  0.00% )
       6,316,794,985,573      branches:u                #  363.410 M/sec                    ( +-  0.00% )
         251,013,232,547      branch-misses:u           #    3.97% of all branches          ( +-  0.01% )
      
           645.991174661 seconds time elapsed                                          ( +-  1.19% )
      
      Unfortunately, this approach doesn't help with text size:
      
        vmlinux.before .text size:	8190319
        vmlinux.after .text size:	8200623
      
      The .text section is increased by about 4k. Not sure if we can do anything
      about this.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20180216114948.68868-4-kirill.shutemov@linux.intel.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      39b95522
    • x86/xen: Allow XEN_PV and XEN_PVH to be enabled with X86_5LEVEL · b9952ec7
      Kirill A. Shutemov authored
      
      With boot-time switching between paging modes, XEN_PV and XEN_PVH can be
      booted into 4-level paging mode.
      Tested-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20180216114948.68868-2-kirill.shutemov@linux.intel.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b9952ec7
    • x86/boot, objtool: Annotate indirect jump in secondary_startup_64() · bd89004f
      Peter Zijlstra authored
      
      The objtool retpoline validation found this indirect jump. Seeing how
      it's on CPU bringup, before we run userspace, it should be safe; annotate
      it.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      bd89004f
  19. 16 Feb, 2018 2 commits
  20. 23 Dec, 2017 1 commit
    • x86/mm/pti: Allocate a separate user PGD · d9e9a641
      Dave Hansen authored
      
      Kernel page table isolation requires two PGDs: one for the kernel, which
      contains the full kernel mapping plus the user space mapping, and one for
      user space, which contains the user space mappings and the minimal set of
      kernel mappings the architecture requires to be able to transition to and
      from user space.
      
      Add the necessary preliminaries.
      
      [ tglx: Split out from the big kaiser dump. EFI fixup from Kirill ]
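
      A rough sketch of the layout (simplified; the in-tree helpers derive the
      user PGD by flipping a bit in the kernel PGD pointer rather than by plain
      arithmetic):

      	/* allocate the kernel and user PGDs back to back (order-1), so the
      	 * user copy sits one page above the kernel one */
      	#define PGD_ALLOCATION_ORDER	1

      	static inline pgd_t *kernel_to_user_pgdp(pgd_t *pgdp)
      	{
      		return (pgd_t *)((unsigned long)pgdp + PAGE_SIZE);
      	}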
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d9e9a641
  21. 02 Nov, 2017 2 commits
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Greg Kroah-Hartman authored
      
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boilerplate text.
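
      Concretely, the change adds a single comment line at the top of each
      affected file; for a GPL-2.0 .c file that is:

      	// SPDX-License-Identifier: GPL-2.0

      Headers get the same tag in /* ... */ comment form.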
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information,
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to apply to a
      file was done in a spreadsheet of side-by-side results of the output of
      two independent scanners (ScanCode & Windriver) producing SPDX tag:value
      files created by Philippe Ombredanne.  Philippe prepared the base
      worksheet and did an initial spot review of a few thousand files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      to be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      The criteria used to select files for SPDX license identifier tagging were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, the file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based in part on an older version of FOSSology, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with the SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and they have been fixed to reflect
      the correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types.)  Finally Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b2441318
    • x86/entry/64: Split the IRET-to-user and IRET-to-kernel paths · 26c4ef9c
      Andy Lutomirski authored
      
      These code paths will diverge soon.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/dccf8c7b3750199b4b30383c812d4e2931811509.1509609304.git.luto@kernel.org
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      26c4ef9c
  22. 01 Nov, 2017 1 commit
    • x86/boot: Relocate definition of the initial state of CR0 · b0ce5b8c
      Ricardo Neri authored
      
      Both head_32.S and head_64.S utilize the same value to initialize the
      control register CR0. Also, other parts of the kernel might want to access
      this initial definition (e.g., emulation code for User-Mode Instruction
      Prevention uses this state to provide a sane dummy value for CR0 when
      emulating the smsw instruction). Thus, relocate this definition to a
      header file from which it can be conveniently accessed.
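
      A sketch of the definition being shared (the flag set matches the common
      boot-time CR0 value; the exact header it moves to is per the patch):

      	#define CR0_STATE	(X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | \
      				 X86_CR0_NE | X86_CR0_WP | X86_CR0_AM | \
      				 X86_CR0_PG)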
      Suggested-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: ricardo.neri@intel.com
      Cc: linux-mm@kvack.org
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Huang Rui <ray.huang@amd.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: linux-arch@vger.kernel.org
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: "Ravi V. Shankar" <ravi.v.shankar@intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Chen Yucong <slaoub@gmail.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: https://lkml.kernel.org/r/1509135945-13762-3-git-send-email-ricardo.neri-calderon@linux.intel.com
      b0ce5b8c
  23. 23 Oct, 2017 1 commit