1. 04 Feb, 2020 3 commits
  2. 05 Dec, 2019 1 commit
  3. 01 Dec, 2019 1 commit
  4. 25 Nov, 2019 1 commit
  5. 23 Nov, 2019 1 commit
  6. 15 Nov, 2019 2 commits
    • y2038: allow disabling time32 system calls · 942437c9
      Arnd Bergmann authored
      
      At the moment, the compilation of the old time32 system calls depends
      purely on the architecture. As systems with new libc based on 64-bit
      time_t are getting deployed, even architectures that previously supported
      these (notably x86-32 and arm32 but also many others) no longer depend on
      them, and removing them from a kernel image results in a smaller kernel
      binary, the same way we can leave out many other optional system calls.
      
      More importantly, on an embedded system that needs to keep working
      beyond year 2038, any user space program calling these system calls
      is likely a bug, so removing them from the kernel image provides
      extra debugging help for finding broken applications.
      
      I've gone back and forth on hiding this option unless CONFIG_EXPERT
      is set. This version leaves it visible based on the logic that
      eventually it will be turned off indefinitely.
      Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    • y2038: remove CONFIG_64BIT_TIME · 3ca47e95
      Arnd Bergmann authored
      
      The CONFIG_64BIT_TIME option is defined on all architectures, and can
      be removed for simplicity now.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  7. 24 Sep, 2019 2 commits
  8. 06 Sep, 2019 1 commit
  9. 04 Sep, 2019 1 commit
    • dma-mapping: remove CONFIG_ARCH_NO_COHERENT_DMA_MMAP · 62fcee9a
      Christoph Hellwig authored
      
      CONFIG_ARCH_NO_COHERENT_DMA_MMAP is now functionally identical to
      !CONFIG_MMU, so remove the separate symbol.  The only difference is that
      arm did not set it for !CONFIG_MMU, but arm uses a separate dma mapping
      implementation including its own mmap method, which is handled by moving
      the CONFIG_MMU check in dma_can_mmap so that it only applies to the
      dma-direct case, just as the other ifdefs for it.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>	# m68k
  10. 21 Aug, 2019 1 commit
  11. 09 Aug, 2019 1 commit
  12. 05 Aug, 2019 1 commit
  13. 31 Jul, 2019 1 commit
  14. 18 Jul, 2019 1 commit
  15. 03 Jul, 2019 1 commit
  16. 03 Jun, 2019 1 commit
  17. 28 May, 2019 1 commit
    • ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS · 86b3de60
      Steven Rostedt (VMware) authored
      Commit c19fa94a ("Add HAVE_64BIT_ALIGNED_ACCESS") added the config for
      architectures that required 64bit aligned access for all 64bit words. As
      the ftrace ring buffer stores data on 4 byte alignment, this config option
      was used to force it to store data on 8 byte alignment, making sure that
      8 byte words stored directly into the ring buffer were 8 byte aligned,
      since writing an 8 byte word to a memory location that is only 4 byte
      aligned would cause issues.
      
      But with the removal of the metag architecture, which was the only
      architecture to use this, there is no architecture supported by Linux that
      requires 8 byte aligned access for all 8 byte words (4 byte alignment is good
      enough). Removing this config can simplify the code a bit.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  18. 14 May, 2019 1 commit
  19. 30 Apr, 2019 1 commit
    • x86/mm/cpa: Add set_direct_map_*() functions · d253ca0c
      Rick Edgecombe authored
      
      Add two new functions set_direct_map_default_noflush() and
      set_direct_map_invalid_noflush() for setting the direct map alias for the
      page to its default valid permissions and to an invalid state that cannot
      be cached in a TLB, respectively. These functions do not flush the TLB.
      
      Note, __kernel_map_pages() does something similar but flushes the TLB and
      doesn't reset the permission bits to default on all architectures.
      
      Also add an ARCH config ARCH_HAS_SET_DIRECT_MAP for specifying whether
      these have an actual implementation or a default empty one.
      Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <akpm@linux-foundation.org>
      Cc: <ard.biesheuvel@linaro.org>
      Cc: <deneen.t.dock@intel.com>
      Cc: <kernel-hardening@lists.openwall.com>
      Cc: <kristen@linux.intel.com>
      Cc: <linux_dti@icloud.com>
      Cc: <will.deacon@arm.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20190426001143.4983-15-namit@vmware.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  20. 29 Apr, 2019 1 commit
    • y2038: Make CONFIG_64BIT_TIME unconditional · f3d96467
      Arnd Bergmann authored
      As Stepan Golosunov points out, there is a small mistake in the
      get_timespec64() function in the kernel. It was originally added under the
      assumption that CONFIG_64BIT_TIME would get enabled on all 32-bit and
      64-bit architectures, but when the conversion was done, it was only turned
      on for 32-bit ones.
      
      The effect is that the get_timespec64() function never clears the upper
      half of the tv_nsec field for 32-bit tasks in compat mode. Clearing this is
      required for POSIX compliant behavior of functions that pass a 'timespec'
      structure with a 64-bit tv_sec and a 32-bit tv_nsec, plus uninitialized
      padding.
      
      The easiest fix for linux-5.1 is to just make the Kconfig symbol
      unconditional, as it was originally intended. As a follow-up, the #ifdef
      CONFIG_64BIT_TIME can be removed completely.
      
      Note: for native 32-bit mode, no change is needed, this works as
      designed and user space should never need to clear the upper 32
      bits of the tv_nsec field, in or out of the kernel.
      
      Fixes: 00bf25d6 ("y2038: use time32 syscall names on 32-bit")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Joseph Myers <joseph@codesourcery.com>
      Cc: libc-alpha@sourceware.org
      Cc: linux-api@vger.kernel.org
      Cc: Deepa Dinamani <deepa.kernel@gmail.com>
      Cc: Lukasz Majewski <lukma@denx.de>
      Cc: Stepan Golosunov <stepan@golosunov.pp.ru>
      Link: https://lore.kernel.org/lkml/20190422090710.bmxdhhankurhafxq@sghpc.golosunov.pp.ru/
      Link: https://lkml.kernel.org/r/20190429131951.471701-1-arnd@arndb.de
  21. 10 Apr, 2019 2 commits
    • locking/rwsem: Enable lock event counting · a8654596
      Waiman Long authored
      
      Add lock event counting calls so that we can track the number of lock
      events happening in the rwsem code.
      
      With CONFIG_LOCK_EVENT_COUNTS on and booting a 4-socket 112-thread x86-64
      system, the rwsem counts after system bootup were as follows:
      
        rwsem_opt_fail=261
        rwsem_opt_wlock=50636
        rwsem_rlock=445
        rwsem_rlock_fail=0
        rwsem_rlock_fast=22
        rwsem_rtrylock=810144
        rwsem_sleep_reader=441
        rwsem_sleep_writer=310
        rwsem_wake_reader=355
        rwsem_wake_writer=2335
        rwsem_wlock=261
        rwsem_wlock_fail=0
        rwsem_wtrylock=20583
      
      It can be seen that most of the lock acquisitions in the slowpath were
      write-locks in the optimistic spinning code path with no sleeping at
      all. For this system, over 97% of the locks are acquired via optimistic
      spinning. It illustrates the importance of optimistic spinning in
      improving the performance of rwsem.
      Signed-off-by: Waiman Long <longman@redhat.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Davidlohr Bueso <dbueso@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20190404174320.22416-11-longman@redhat.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/lock_events: Make lock_events available for all archs & other locks · fb346fd9
      Waiman Long authored
      
      The QUEUED_LOCK_STAT option to report queued spinlock event counts
      was previously allowed only on the x86 architecture. To make the locking
      event counting code more useful, it is now renamed to the more generic
      LOCK_EVENT_COUNTS config option. This new option will be available to
      all the architectures that use qspinlock at the moment.
      
      Other locking code can now start to use the generic locking event
      counting code by including lock_events.h and putting the new locking
      event names into the lock_events_list.h header file.
      
      My experience with lock event counting is that it gives valuable insight
      on how the locking code works and what can be done to make it better. I
      would like to extend this benefit to other locking code like mutex and
      rwsem in the near future.
      
      The PV qspinlock specific code will stay in qspinlock_stat.h. The
      locking event counters will now reside in the <debugfs>/lock_event_counts
      directory.
      Signed-off-by: Waiman Long <longman@redhat.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Davidlohr Bueso <dbueso@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20190404174320.22416-9-longman@redhat.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  22. 03 Apr, 2019 3 commits
  23. 19 Feb, 2019 1 commit
    • 32-bit userspace ABI: introduce ARCH_32BIT_OFF_T config option · 942fa985
      Yury Norov authored
      
      All new 32-bit architectures should have a 64-bit userspace off_t type,
      but existing architectures have 32-bit ones.

      To enforce the rule, a new config option is added to arch/Kconfig so that
      ARCH_32BIT_OFF_T defaults to disabled for new 32-bit architectures. All
      existing 32-bit architectures enable it explicitly.
      
      The new option affects force_o_largefile() behaviour. Namely, if the
      userspace off_t is 64 bits long, there is no reason to prevent users
      from opening big files.
      
      Note that even if an architecture has only a 64-bit off_t in the kernel
      (arc, c6x, h8300, hexagon, nios2, openrisc, and unicore32),
      a libc may use a 32-bit off_t, and therefore want to limit the file size
      to 4GB unless specified differently in the open flags.
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Yury Norov <ynorov@marvell.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  24. 06 Feb, 2019 1 commit
    • y2038: use time32 syscall names on 32-bit · 00bf25d6
      Arnd Bergmann authored
      
      This is the big flip, where all 32-bit architectures set COMPAT_32BIT_TIME
      and use the _time32 system calls from the former compat layer instead
      of the system calls that take __kernel_timespec and similar arguments.
      
      The temporary redirects for __kernel_timespec, __kernel_itimerspec
      and __kernel_timex can get removed with this.
      
      It would be easy to split this commit by architecture, but with the new
      generated system call tables, it's easy enough to do it all at once,
      which makes it a little easier to check that the changes are the same
      in each table.
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  25. 04 Feb, 2019 1 commit
    • x86: Make ARCH_USE_MEMREMAP_PROT a generic Kconfig symbol · ce9084ba
      Ard Biesheuvel authored
      
      Turn ARCH_USE_MEMREMAP_PROT into a generic Kconfig symbol, and fix the
      dependency expression to reflect that AMD_MEM_ENCRYPT depends on it,
      instead of the other way around. This will permit ARCH_USE_MEMREMAP_PROT
      to be selected by other architectures.
      
      Note that the encryption related early memremap routines in
      arch/x86/mm/ioremap.c cannot be built for 32-bit x86 without triggering
      the following warning:
      
           arch/x86//mm/ioremap.c: In function 'early_memremap_encrypted':
        >> arch/x86/include/asm/pgtable_types.h:193:27: warning: conversion from
                           'long long unsigned int' to 'long unsigned int' changes
                           value from '9223372036854776163' to '355' [-Woverflow]
            #define __PAGE_KERNEL_ENC (__PAGE_KERNEL | _PAGE_ENC)
                                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~
           arch/x86//mm/ioremap.c:713:46: note: in expansion of macro '__PAGE_KERNEL_ENC'
             return early_memremap_prot(phys_addr, size, __PAGE_KERNEL_ENC);
      
      which essentially means they are 64-bit only anyway. However, we cannot
      make them dependent on CONFIG_ARCH_HAS_MEM_ENCRYPT, since that is always
      defined, even for i386 (and changing that results in a slew of build errors).
      
      So instead, build those routines only if CONFIG_AMD_MEM_ENCRYPT is
      defined.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Alexander Graf <agraf@suse.de>
      Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
      Cc: Jeffrey Hugo <jhugo@codeaurora.org>
      Cc: Lee Jones <lee.jones@linaro.org>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Peter Jones <pjones@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/20190202094119.13230-9-ard.biesheuvel@linaro.org
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  26. 22 Jan, 2019 1 commit
  27. 06 Jan, 2019 1 commit
    • jump_label: move 'asm goto' support test to Kconfig · e9666d10
      Masahiro Yamada authored
      
      Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label".
      
      The jump label is controlled by HAVE_JUMP_LABEL, which is defined
      like this:
      
        #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
        # define HAVE_JUMP_LABEL
        #endif
      
      We can improve this by testing 'asm goto' support in Kconfig, then
      making JUMP_LABEL depend on CC_HAS_ASM_GOTO.

      The ugly #ifdef HAVE_JUMP_LABEL will go away, and CONFIG_JUMP_LABEL
      will match the real kernel capability.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
  28. 04 Jan, 2019 1 commit
  29. 11 Oct, 2018 1 commit
  30. 27 Sep, 2018 1 commit
  31. 04 Sep, 2018 1 commit
    • x86/entry: Add STACKLEAK erasing the kernel stack at the end of syscalls · afaef01c
      Alexander Popov authored
      The STACKLEAK feature (initially developed by PaX Team) has the following
      benefits:
      
      1. Reduces the information that can be revealed through kernel stack leak
         bugs. The idea of erasing the thread stack at the end of syscalls is
         similar to CONFIG_PAGE_POISONING and memzero_explicit() in kernel
         crypto, which all comply with FDP_RIP.2 (Full Residual Information
         Protection) of the Common Criteria standard.
      
      2. Blocks some uninitialized stack variable attacks (e.g. CVE-2017-17712,
         CVE-2010-2963). Bugs of that kind should eventually be killed by
         improving C compilers, which might take a long time.
      
      This commit introduces the code filling the used part of the kernel
      stack with a poison value before returning to userspace. Full
      STACKLEAK feature also contains the gcc plugin which comes in a
      separate commit.
      
      The STACKLEAK feature is ported from grsecurity/PaX. More information at:
        https://grsecurity.net/
        https://pax.grsecurity.net/

      This code is modified from Brad Spengler/PaX Team's code in the last
      public patch of grsecurity/PaX based on our understanding of the code.
      Changes or omissions from the original code are ours and don't reflect
      the original grsecurity/PaX code.
      
      Performance impact:
      
      Hardware: Intel Core i7-4770, 16 GB RAM
      
      Test #1: building the Linux kernel on a single core
              0.91% slowdown
      
      Test #2: hackbench -s 4096 -l 2000 -g 15 -f 25 -P
              4.2% slowdown
      
      So the STACKLEAK description in Kconfig includes: "The tradeoff is the
      performance impact: on a single CPU system kernel compilation sees a 1%
      slowdown, other systems and workloads may vary and you are advised to
      test this feature on your expected workload before deploying it".
      Signed-off-by: Alexander Popov <alex.popov@linux.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Kees Cook <keescook@chromium.org>
  32. 23 Aug, 2018 1 commit
    • mm/tlb, x86/mm: Support invalidating TLB caches for RCU_TABLE_FREE · d86564a2
      Peter Zijlstra authored
      Jann reported that x86 was missing required TLB invalidates when he
      hit the !*batch slow path in tlb_remove_table().
      
      This is indeed the case; RCU_TABLE_FREE does not provide TLB (cache)
      invalidates, the PowerPC-hash where this code originated and the
      Sparc-hash where this was subsequently used did not need that. ARM
      which later used this put an explicit TLB invalidate in their
      __p*_free_tlb() functions, and PowerPC-radix followed that example.
      
      But when we hooked up x86 we failed to consider this. Fix this by
      (optionally) hooking tlb_remove_table() into the TLB invalidate code.
      
      NOTE: s390 was also needing something like this and might now
            be able to use the generic code again.
      
      [ Modified to be on top of Nick's cleanups, which simplified this patch
        now that tlb_flush_mmu_tlbonly() really only flushes the TLB - Linus ]
      
      Fixes: 9e52fc2b ("x86/mm: Enable RCU based page table freeing (CONFIG_HAVE_RCU_TABLE_FREE=y)")
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Rik van Riel <riel@surriel.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  33. 22 Aug, 2018 1 commit
    • arch: enable relative relocations for arm64, power and x86 · 271ca788
      Ard Biesheuvel authored
      Patch series "add support for relative references in special sections", v10.
      
      This adds support for emitting special sections such as initcall arrays,
      PCI fixups and tracepoints as relative references rather than absolute
      references.  This reduces the size by 50% on 64-bit architectures, but
      more importantly, it removes the need for carrying relocation metadata for
      these sections in relocatable kernels (e.g., for KASLR) that needs to be
      fixed up at boot time.  On arm64, this reduces the vmlinux footprint of
      such a reference by 8x (8 byte absolute reference + 24 byte RELA entry vs
      4 byte relative reference).
      
      Patch #3 was sent out before as a single patch.  This series supersedes
      the previous submission.  This version makes relative ksymtab entries
      dependent on the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS rather
      than trying to infer from kbuild test robot replies for which
      architectures it should be blacklisted.
      
      Patch #1 introduces the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS,
      and sets it for the main architectures that are expected to benefit the
      most from this feature, i.e., 64-bit architectures or ones that use
      runtime relocations.
      
      Patch #2 adds support for #define'ing __DISABLE_EXPORTS to get rid of
      ksymtab/kcrctab sections in decompressor and EFI stub objects when
      rebuilding existing C files to run in a different context.
      
      Patches #4 - #6 implement relative references for initcalls, PCI fixups
      and tracepoints, respectively, all of which produce sections with order
      ~1000 entries on an arm64 defconfig kernel with tracing enabled.  This
      means we save about 28 KB of vmlinux space for each of these patches.
      
      [From the v7 series blurb, which included the jump_label patches as well]:
      
        For the arm64 kernel, all patches combined reduce the memory footprint
        of vmlinux by about 1.3 MB (using a config copied from Ubuntu that has
        KASLR enabled), of which ~1 MB is the size reduction of the RELA section
        in .init, and the remaining 300 KB is reduction of .text/.data.
      
      This patch (of 6):
      
      Before updating certain subsystems to use place relative 32-bit
      relocations in special sections, to save space and reduce the number of
      absolute relocations that need to be processed at runtime by relocatable
      kernels, introduce the Kconfig symbol and define it for some architectures
      that should be able to support and benefit from it.
      
      Link: http://lkml.kernel.org/r/20180704083651.24360-2-ard.biesheuvel@linaro.org
      
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "Serge E. Hallyn" <serge@hallyn.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: James Morris <james.morris@microsoft.com>
      Cc: Jessica Yu <jeyu@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>