1. 21 Apr, 2019 3 commits
    • powerpc/32s: Implement Kernel Userspace Access Protection · a68c31fc
      Christophe Leroy authored
      
      This patch implements Kernel Userspace Access Protection for
      book3s/32.
      
      Due to limitations of the processor's page protection capabilities,
      the protection is only against writing; read protection cannot be
      achieved using page protection.
      
      The previous patch modifies the page protection so that RW user
      pages are RW for Key 0 and RO for Key 1, and it sets Key 0 for
      both user and kernel.
      
      This patch sets the userspace segment registers to Ku 0 and Ks 1.
      When the kernel needs to write to RW pages, the associated segment
      register is temporarily changed to Ks 0 in order to allow the
      kernel write access.
      
      In order to avoid having to read all segment registers when
      locking/unlocking the access, some data is kept in the thread_struct
      and saved on the stack on exceptions. The field identifies both the
      first unlocked segment and the first segment following the last
      unlocked one. When no segment is unlocked, it contains value 0.
      
      As the hash_page() function cannot easily determine whether a
      protfault is due to a bad kernel access to userspace, protfaults
      need to be handled by handle_page_fault() when KUAP is set.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      [mpe: Drop allow_read/write_to/from_user() as they're now in kup.h,
            and adapt allow_user_access() to do nothing when to == NULL]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/32s: Prepare Kernel Userspace Access Protection · f342adca
      Christophe Leroy authored
      
      This patch prepares Kernel Userspace Access Protection for
      book3s/32.
      
      Due to limitations of the processor's page protection capabilities,
      the protection is only against writing; read protection cannot be
      achieved using page protection.
      
      book3s/32 provides the following values for PP bits:
      
      PP00 provides RW for Key 0 and NA for Key 1
      PP01 provides RW for Key 0 and RO for Key 1
      PP10 provides RW for all
      PP11 provides RO for all
      
      Today PP10 is used for RW pages and PP11 for RO pages, and the user
      segment registers' Kp and Ks are set to 1. This patch modifies
      page protection to use PP01 for RW pages and sets user segment
      registers to Kp 0 and Ks 0.
      
      This will allow userspace write access protection to be set up by
      setting Ks to 1 in the following patch.
      
      Kernel space segment registers remain unchanged.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/32s: Implement Kernel Userspace Execution Prevention. · 31ed2b13
      Christophe Leroy authored
      
      To implement Kernel Userspace Execution Prevention, this patch
      sets NX bit on all user segments on kernel entry and clears NX bit
      on all user segments on kernel exit.
      
      Note that powerpc 601 doesn't have the NX bit, so KUEP will not
      work on it. A warning is displayed at startup.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 01 Apr, 2019 1 commit
  3. 18 Mar, 2019 1 commit
  4. 23 Feb, 2019 6 commits
  5. 21 Feb, 2019 10 commits
  6. 19 Dec, 2018 2 commits
  7. 26 Nov, 2018 1 commit
  8. 30 Jul, 2018 1 commit
  9. 13 Nov, 2017 1 commit
  10. 03 Aug, 2017 2 commits
  11. 02 Aug, 2017 1 commit
  12. 21 Mar, 2017 1 commit
  13. 26 Jan, 2017 1 commit
  14. 22 Sep, 2016 1 commit
  15. 08 Aug, 2016 1 commit
  16. 08 Mar, 2012 1 commit
    • powerpc: Call do_page_fault() with interrupts off · a546498f
      Benjamin Herrenschmidt authored
      
      We currently turn interrupts back to their previous state before
      calling do_page_fault(). This can be annoying when debugging as
      a bad fault will potentially have lost some processor state before
      getting into the debugger.
      
      We also end up calling some generic code, such as
      notify_page_fault(), with interrupts enabled, which could
      be unexpected.
      
      This changes our code to behave more like other architectures,
      making the assembly entry code call into do_page_fault() with
      interrupts disabled. They are conditionally re-enabled from
      within do_page_fault() at the same spot x86 does it.
      
      While there, add the might_sleep() test in the case of a successful
      trylock of the mmap semaphore, again like x86.
      
      Also fix a bug in the existing assembly where r12 (_MSR) could get
      clobbered by C calls (the DTL accounting in the exception common
      macro and DISABLE_INTS) in some cases.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ---
      
      v2. Add the r12 clobber fix
  17. 19 Sep, 2011 1 commit
    • powerpc/32: Pass device tree address as u64 to machine_init · 6dece0eb
      Scott Wood authored
      
      u64 is used rather than phys_addr_t to keep things simple, as
      this is called from assembly code.
      
      Update callers to pass a 64-bit address in r3/r4.  Other unused
      register assignments that were once parameters to machine_init
      are dropped.
      
      For FSL BookE, look up the physical address of the device tree from the
      effective address passed in r3 by the loader.  This is required for
      situations where memory does not start at zero (due to AMP or IOMMU-less
      virtualization), and thus the IMA doesn't start at zero, and thus the
      device tree effective address does not equal the physical address.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  18. 19 May, 2011 2 commits
  19. 01 Apr, 2011 1 commit
    • powerpc/smp: soft-replugged CPUs must go back to start_secondary · fa3f82c8
      Benjamin Herrenschmidt authored
      
      Various things are torn down when a CPU is hot-unplugged. That CPU
      is expected to go back to start_secondary when re-plugged, to
      re-initialize everything, such as clock sources, maps, ...
      
      Some implementations just return from the cpu_die() callback
      in the idle loop when the CPU is "re-plugged". This is not enough.
      
      We fix it using a little asm trampoline which resets the stack
      and calls back into start_secondary as if we were all fresh from
      boot. The trampoline already existed on ppc64; this adds it for
      ppc32.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  20. 17 May, 2010 1 commit
  21. 13 Dec, 2009 1 commit