- 19 Apr, 2022 1 commit
-
-
Peter Zijlstra authored
Objtool can figure out that some \cfunc()s are noreturn, and then complains about certain instances having unreachable tails: vmlinux.o: warning: objtool: asm_exc_xen_unknown_trap()+0x16: unreachable instruction Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20220408094718.441854969@infradead.org
-
- 15 Mar, 2022 7 commits
-
-
Peter Zijlstra authored
Without CONFIG_X86_ESPFIX64, exc_double_fault() is noreturn and objtool is clever enough to figure that out. vmlinux.o: warning: objtool: asm_exc_double_fault()+0x22: unreachable instruction

  0000000000001260 <asm_exc_double_fault>:
      1260: f3 0f 1e fa                   endbr64
      1264: 90                            nop
      1265: 90                            nop
      1266: 90                            nop
      1267: e8 84 03 00 00                call   15f0 <paranoid_entry>
      126c: 48 89 e7                      mov    %rsp,%rdi
      126f: 48 8b 74 24 78                mov    0x78(%rsp),%rsi
      1274: 48 c7 44 24 78 ff ff ff ff    movq   $0xffffffffffffffff,0x78(%rsp)
      127d: e8 00 00 00 00                call   1282 <asm_exc_double_fault+0x22>
                  127e: R_X86_64_PLT32    exc_double_fault-0x4
      1282: e9 09 04 00 00                jmp    1690 <paranoid_exit>

Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/Yi9gOW9f1GGwwUD6@hirez.programming.kicks-ass.net
-
Peter Zijlstra authored
No IBT on AMD so far... probably correct, who knows. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20220308154318.995109889@infradead.org
-
Peter Zijlstra authored
Annotate away some of the generic code references. These are cases where we take the address of a symbol for exception handling or return addresses (e.g. context switch). Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20220308154318.877758523@infradead.org
-
Peter Zijlstra authored
Kernel entry points should have ENDBR for IBT configs. The SYSCALL entry points are found by taking their respective addresses in order to program them into the MSRs, while the exception entry points are found through UNWIND_HINT_IRET_REGS. The rule is that any UNWIND_HINT_IRET_REGS at sym+0 should have an ENDBR; see the later objtool IBT validation patch. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20220308154317.933157479@infradead.org
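For reference, a minimal sketch of what the annotation expands to, assuming the ENDBR macro is gated on CONFIG_X86_KERNEL_IBT as in the kernel's ibt.h (the exact guards here are illustrative, not this commit's diff):

  /* Sketch: ENDBR emits an endbr64 instruction only on IBT builds,
   * so sprinkling it over the entry points is free on other configs. */
  #ifdef CONFIG_X86_KERNEL_IBT
  #define ENDBR  endbr64
  #else
  #define ENDBR
  #endif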
-
Peter Zijlstra authored
Even though Xen currently doesn't advertise IBT, prepare for when it eventually does and sprinkle the ENDBR dust accordingly. While most of the entry points are IRET-like, the CPL0 hypervisor can set WAIT-FOR-ENDBR and demand ENDBR at these sites. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20220308154317.873919996@infradead.org
-
Peter Zijlstra authored
By doing an early rewrite of 'jmp native_iret' in restore_regs_and_return_to_kernel() we can get rid of the last INTERRUPT_RETURN user and paravirt_iret. Suggested-by:
Andrew Cooper <Andrew.Cooper3@citrix.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20220308154317.815039833@infradead.org
-
Peter Zijlstra authored
Since commit 5c8f6a2e ("x86/xen: Add xenpv_restore_regs_and_return_to_usermode()"), Xen will no longer reach this code and we can do away with the paravirt SWAPGS/INTERRUPT_RETURN. Suggested-by:
Andrew Cooper <Andrew.Cooper3@citrix.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20220308154317.756014488@infradead.org
-
- 13 Dec, 2021 1 commit
-
-
Eric W. Biederman authored
There are two big uses of do_exit. The first is its designed use as the guts of the exit(2) system call. The second use is to terminate a task after something catastrophic has happened, like a NULL pointer dereference in kernel code. Add a function make_task_dead that is initially exactly the same as do_exit to cover the cases where do_exit is called to handle catastrophic failure. In time this can probably be reduced to just a light wrapper around do_task_dead. For now keep it exactly the same so that there will be no behavioral differences introduced by this new concept. Replace all of the uses of do_exit that perform catastrophic task cleanup with make_task_dead to make it clear what the code is doing. As part of this, rename rewind_stack_do_exit to rewind_stack_and_make_dead. Signed-off-by:
"Eric W. Biederman" <ebiederm@xmission.com>
-
- 11 Dec, 2021 1 commit
-
-
Peter Zijlstra authored
Place the anonymous .fixup code at the tail of the regular functions. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Reviewed-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Lai Jiangshan <jiangshanlai@gmail.com> Link: https://lore.kernel.org/r/20211110101325.186049322@infradead.org
-
- 08 Dec, 2021 1 commit
-
-
Peter Zijlstra authored
Replace all ret/retq instructions with RET in preparation for making RET a macro. Since AS is case-insensitive, this is a big no-op as long as RET is not defined.

  find arch/x86/ -name \*.S | while read file
  do
          sed -i 's/\<ret[q]*\>/RET/' $file
  done

Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211204134907.905503893@infradead.org
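A sketch of the macro this prepares for, assuming the CONFIG_SLS definition that a follow-up patch puts in the kernel's linkage.h (the exact guard and expansion are assumptions of this note):

  /* Sketch: with straight-line-speculation mitigation enabled, RET
   * expands to 'ret' followed by a speculation trap; otherwise it is
   * a plain 'ret', so the rename alone changes nothing. */
  #ifdef CONFIG_SLS
  #define RET  ret; int3
  #else
  #define RET  ret
  #endif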
-
- 03 Dec, 2021 3 commits
-
-
Lai Jiangshan authored
In the native case, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is the trampoline stack. But Xen PV doesn't use a trampoline stack, so there PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is the kernel stack as well. In that case, source and destination stacks are identical, which means that reusing swapgs_restore_regs_and_return_to_usermode() in Xen PV would cause %rsp to move up to the top of the kernel stack and leave the IRET frame below %rsp. This is dangerous, as the frame can be corrupted if an #NMI or #MC hits: either of these events occurring in the middle of the stack pushes would clobber data on the (original) stack. Moreover, with Xen PV, having swapgs_restore_regs_and_return_to_usermode() push the IRET frame onto its original address is useless and error-prone for any future attempt to modify the code. [ bp: Massage commit message. ] Fixes: 7f2590a1 ("x86/entry/64: Use a per-CPU trampoline stack for IDT entries") Signed-off-by:
Lai Jiangshan <laijs@linux.alibaba.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Boris Ostrovsky <boris.ostrovsky@oracle.com> Link: https://lkml.kernel.org/r/20211126101209.8613-4-jiangshanlai@gmail.com
-
Lai Jiangshan authored
The commit c7589070 ("x86/entry/64: Remove unneeded kernel CR3 switching") removed a CR3 write in the faulting path of load_gs_index(). But that path's FENCE_SWAPGS_USER_ENTRY has no fence operation if PTI is enabled, see spectre_v1_select_mitigation(). Rather, it depended on the serializing CR3 write of SWITCH_TO_KERNEL_CR3; since that got removed, add a FENCE_SWAPGS_KERNEL_ENTRY call to make sure speculation is blocked. [ bp: Massage commit message and comment. ] Fixes: c7589070 ("x86/entry/64: Remove unneeded kernel CR3 switching") Signed-off-by:
Lai Jiangshan <laijs@linux.alibaba.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20211126101209.8613-3-jiangshanlai@gmail.com
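For context, a sketch of what the fence macro emits, following the FENCE_SWAPGS_KERNEL_ENTRY definition in the kernel's calling.h (the exact spelling is an assumption of this note):

  /* Sketch: patch in an LFENCE only on CPUs that need a speculation
   * barrier after a kernel-entry SWAPGS; otherwise emit nothing. */
  #define FENCE_SWAPGS_KERNEL_ENTRY \
          ALTERNATIVE "", "lfence", X86_FEATURE_FENCE_SWAPGS_KERNEL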
-
Lai Jiangshan authored
Commit 18ec54fd ("x86/speculation: Prepare entry code for Spectre v1 swapgs mitigations") added FENCE_SWAPGS_{KERNEL|USER}_ENTRY for conditional SWAPGS. paranoid_entry() uses only FENCE_SWAPGS_KERNEL_ENTRY, for both branches, because the fence is required in both cases: the CR3 write is conditional even when PTI is enabled. But 96b23714 ("x86/entry/64: Switch CR3 before SWAPGS in paranoid entry") changed the order of SWAPGS and the CR3 write, and missed the FENCE_SWAPGS_KERNEL_ENTRY needed for the user GSBASE case. Add it back by rearranging the branches so that FENCE_SWAPGS_KERNEL_ENTRY covers both of them. [ bp: Massage, fix typos, remove obsolete comment while at it. ] Fixes: 96b23714 ("x86/entry/64: Switch CR3 before SWAPGS in paranoid entry") Signed-off-by:
Lai Jiangshan <laijs@linux.alibaba.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211126101209.8613-2-jiangshanlai@gmail.com
-
- 21 Jun, 2021 1 commit
-
-
Joerg Roedel authored
Split up the #VC handler code into a from-user and a from-kernel part. This allows clean and correct state tracking, as the #VC handler needs to enter NMI state when raised from kernel mode and plain IRQ state when raised from user mode. Fixes: 62441a1f ("x86/sev-es: Correctly track IRQ states in runtime #VC handler") Suggested-by:
Peter Zijlstra <peterz@infradead.org> Signed-off-by:
Joerg Roedel <jroedel@suse.de> Signed-off-by:
Borislav Petkov <bp@suse.de> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20210618115409.22735-3-joro@8bytes.org
-
- 20 May, 2021 1 commit
-
-
H. Peter Anvin (Intel) authored
Right now, *some* code will treat e.g. 0x0000000100000001 as a system call and some will not. Some of the code, notably in ptrace, will treat 0x0000000180000000 as a system call and some will not. Finally, right now, e.g. 335 for x86-64 will force the exit code to be set to -ENOSYS even if poked by ptrace, but 548 will not, because there is an observable difference between an out-of-range system call and a system call number that falls outside the range of the table. This is visible to the user: for example, the syscall_numbering_64 test fails if run under strace, because as strace uses ptrace, it ends up clobbering the upper half of the 64-bit system call number. The architecture-independent code all assumes that a system call number is an "int" and that the value -1 specifically, not just any negative value, is used for a non-system call. This is the case on x86 as well when arch-independent code is involved. The arch-independent API is defined/documented (but not *implemented*!) in <asm-generic/syscall.h>. This is an ABI change, but is in fact a revert to the original x86-64 ABI. The original assembly entry code would zero-extend the system call number; use sign extension instead to be explicit that this is treated as a signed number (although in practice it makes no difference, of course) and to avoid people getting the idea of "optimizing" it, as has happened on at least two(!) separate occasions. Do not store the extended value into regs->orig_ax, however: on x86-64, the ABI is that the callee is responsible for extending parameters, so only examining the lower 32 bits is fully consistent with any "int" argument to any system call, e.g. regs->di for write(2). The full value of %rax on entry to the kernel is thus still available. [ tglx: Add a comment to the ASM code ] Signed-off-by:
H. Peter Anvin (Intel) <hpa@zytor.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20210518191303.4135296-5-hpa@zytor.com
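A small user-space illustration of the truncation being fixed (hypothetical demo code, not kernel code): only the low 32 bits of %rax select the system call, and -1 specifically marks "not a system call".

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          /* Upper half of %rax clobbered, e.g. by a ptrace-based tracer. */
          uint64_t rax = 0x0000000100000001ULL;

          /* The entry code now examines a sign-extended 32-bit number,
           * so only the low 32 bits select the system call. */
          int nr = (int)rax;

          printf("raw rax = %#llx -> syscall nr = %d\n",
                 (unsigned long long)rax, nr);   /* prints nr = 1 */
          return 0;
  }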
-
- 12 May, 2021 1 commit
-
-
H. Peter Anvin (Intel) authored
Reverse the order of arguments to do_syscall_64() so that the first argument is the pt_regs pointer. This is not only consistent with *all* other entry points from assembly, but it actually makes the compiled code slightly better. Signed-off-by:
H. Peter Anvin (Intel) <hpa@zytor.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210510185316.3307264-3-hpa@zytor.com
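The signature change, sketched from the commit description (return type abridged):

  /* old: void do_syscall_64(unsigned long nr, struct pt_regs *regs); */
  void do_syscall_64(struct pt_regs *regs, unsigned long nr);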
-
- 21 Mar, 2021 1 commit
-
-
Ingo Molnar authored
Fix another ~42 single-word typos in arch/x86/ code comments that were missed in the first pass, in particular in .S files. Signed-off-by:
Ingo Molnar <mingo@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: linux-kernel@vger.kernel.org
-
- 11 Mar, 2021 1 commit
-
-
Juergen Gross authored
Instead of using paravirt patching for custom code sequences, use ALTERNATIVE for the functions with custom code replacements. Instead of patching a ud2 instruction into the caller site for unpopulated vector entries, use a simple function that just calls BUG() as the replacement. Simplify the register defines for assembler paravirt calling, as there isn't much usage left. Signed-off-by:
Juergen Gross <jgross@suse.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20210311142319.4723-14-jgross@suse.com
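A sketch of the replacement for the patched-in ud2, per the commit text (kernel-side; the name and qualifiers are taken from the patch description and treated as assumptions here):

  /* Sketch: unpopulated paravirt slots now point at a tiny function
   * that simply BUG()s, instead of a ud2 patched into the caller. */
  static void paravirt_BUG(void)
  {
          BUG();
  }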
-
- 10 Feb, 2021 5 commits
-
-
Thomas Gleixner authored
Use the new inline stack switching and remove the old ASM indirect call implementation. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20210210002512.972714001@linutronix.de
-
Thomas Gleixner authored
Convert device interrupts to inline stack switching by replacing the existing macro implementation with the new inline version. Tweak the function signature of the actual handler function to have the vector argument as u32. That allows the inline macro to avoid extra intermediates and lets the compiler be smarter about the whole thing. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20210210002512.769728139@linutronix.de
-
Thomas Gleixner authored
To inline the stack switching and to prepare for enabling CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK provide a macro template for system vectors and device interrupts and convert the system vectors over to it. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20210210002512.676197354@linutronix.de
-
Juergen Gross authored
USERGS_SYSRET64 is used to return from a syscall via SYSRET, but a Xen PV guest will nevertheless use the IRET hypercall, as there is no SYSRET PV hypercall defined. So instead of testing all the prerequisites for doing a SYSRET and then mangling the stack for Xen PV again for doing an IRET, just use the IRET exit from the beginning. This can easily be done via an ALTERNATIVE, like it is done for the sysenter compat case already. It should be noted that this drops the optimization in Xen of not restoring a few registers when returning to user mode, but it seems as if the saved instructions in the kernel more than compensate for this drop (a kernel build in a Xen PV guest was slightly faster with this patch applied). While at it, remove the stale sysret32 remnants. Signed-off-by:
Juergen Gross <jgross@suse.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210120135555.32594-6-jgross@suse.com
-
Juergen Gross authored
SWAPGS is used only for interrupts coming from user mode or for returning to user mode. So there is no reason to use the PARAVIRT framework, as it can easily be replaced by an ALTERNATIVE depending on X86_FEATURE_XENPV. There are several instances using the PV-aware SWAPGS macro in paths which are never executed in a Xen PV guest. Replace those with the plain swapgs instruction. For SWAPGS_UNSAFE_STACK the same applies. Signed-off-by:
Juergen Gross <jgross@suse.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Thomas Gleixner <tglx@linutronix.de> Acked-by:
Andy Lutomirski <luto@kernel.org> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20210120135555.32594-5-jgross@suse.com
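A sketch of the resulting definition, following the commit's description (the exact macro spelling is an assumption of this note):

  /* Sketch: emit a plain swapgs everywhere except on Xen PV, where
   * the instruction is patched out entirely -- no paravirt call. */
  #define SWAPGS  ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV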
-
- 22 Sep, 2020 1 commit
-
-
Thomas Gleixner authored
Sami reported that run_on_irqstack_cond() requires the caller to cast functions to mismatching types, which trips indirect call Control-Flow Integrity (CFI) in Clang. Instead of disabling CFI on that function, provide proper helpers for the three call variants. The actual ASM code stays the same as that is out of reach. [ bp: Fix __run_on_irqstack() prototype to match. ] Fixes: 931b9414 ("x86/entry: Provide helpers for executing on the irqstack") Reported-by:
Nathan Chancellor <natechancellor@gmail.com> Reported-by:
Sami Tolvanen <samitolvanen@google.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Borislav Petkov <bp@suse.de> Tested-by:
Sami Tolvanen <samitolvanen@google.com> Cc: <stable@vger.kernel.org> Link: https://github.com/ClangBuiltLinux/linux/issues/1052 Link: https://lkml.kernel.org/r/87pn6eb5tv.fsf@nanos.tec.linutronix.de
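The three properly-typed variants, sketched from the commit's description (signatures abridged; the exact parameters are assumptions of this note):

  /* Sketch: one helper per call variant, so Clang's indirect-call CFI
   * sees matching function types instead of casts. */
  void run_on_irqstack_cond(void (*func)(void),
                            struct pt_regs *regs);
  void run_sysvec_on_irqstack_cond(void (*func)(struct pt_regs *regs),
                                   struct pt_regs *regs);
  void run_irq_on_irqstack_cond(void (*func)(struct irq_desc *desc),
                                struct irq_desc *desc,
                                struct pt_regs *regs);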
-
- 09 Sep, 2020 1 commit
-
-
Joerg Roedel authored
The #VC handler needs special entry code because:

  1. It runs on an IST stack
  2. It needs to be able to handle nested #VC exceptions

To make this work, the entry code is implemented to pretend it doesn't use an IST stack. When entered from user mode or the early SYSCALL entry path, it switches to the task stack. If entered from kernel mode, it tries to switch back to the previous stack in the IRET frame. The stack found in the IRET frame is validated first, and if it is not safe to use for the #VC handler, the code switches to a fall-back stack (the #VC2 IST stack). From there, it can cause nested exceptions again. Signed-off-by:
Joerg Roedel <jroedel@suse.de> Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-46-joro@8bytes.org
-
- 24 Aug, 2020 1 commit
-
-
Borislav Petkov authored
Add the proper explanation why an LFENCE is not needed in the FSGSBASE case. Fixes: c82965f9 ("x86/entry/64: Handle FSGSBASE enabled paranoid entry/exit") Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200821090710.GE12181@zn.tnic
-
- 15 Aug, 2020 1 commit
-
-
Juergen Gross authored
There are some code parts using CONFIG_PARAVIRT for Xen pvops-related issues instead of the more stringent CONFIG_PARAVIRT_XXL. Signed-off-by:
Juergen Gross <jgross@suse.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200815100641.26362-4-jgross@suse.com
-
- 24 Jul, 2020 1 commit
-
-
Thomas Gleixner authored
Replace the x86 variant with the generic version. Provide the relevant architecture-specific helper functions and defines. Use a temporary define for idtentry_exit_user which will be cleaned up separately. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Acked-by:
Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.494648601@linutronix.de
-
- 21 Jul, 2020 1 commit
-
-
Eric W. Biederman authored
To allow the kernel to call exec without playing games with set_fs, implement kernel_execve. The function kernel_execve takes pointers into kernel memory and copies the values pointed to onto the new userspace stack. Calls to do_execve with arguments from kernel space are replaced with calls to kernel_execve. do_execve and do_execveat are made static, as there are now no callers outside of exec. The comments that mention do_execve are updated to refer to kernel_execve or execve, depending on the circumstances. In addition to correcting the comments, this makes it easy to grep for do_execve and verify it is not used. Inspired-by: https://lkml.kernel.org/r/20200627072704.2447163-1-hch@lst.de Reviewed-by:
Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/87wo365ikj.fsf@x220.int.ebiederm.org Signed-off-by:
"Eric W. Biederman" <ebiederm@xmission.com>
-
- 18 Jun, 2020 2 commits
-
-
Chang S. Bae authored
Without FSGSBASE, user space cannot change GSBASE other than through a PRCTL. The kernel enforces that the user space GSBASE value is positive, as negative values are used for detecting the kernel space GSBASE value in the paranoid entry code. If FSGSBASE is enabled, user space can set arbitrary GSBASE values without kernel intervention, including negative ones, which breaks the paranoid entry assumptions. To avoid this, paranoid entry needs to unconditionally save the current GSBASE value independent of the interrupted context, retrieve and write the kernel GSBASE and unconditionally restore the saved value on exit. The restore happens either in paranoid_exit or in the special exit path of the NMI low level code. All other entry code paths which use unconditional SWAPGS are not affected, as they do not depend on the actual content. [ tglx: Massaged changelogs and comments ] Suggested-by:
H. Peter Anvin <hpa@zytor.com> Suggested-by:
Andy Lutomirski <luto@kernel.org> Suggested-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Sasha Levin <sashal@kernel.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/1557309753-24073-13-git-send-email-chang.seok.bae@intel.com Link: https://lkml.kernel.org/r/20200528201402.1708239-12-sashal@kernel.org
-
Chang S. Bae authored
When FSGSBASE is enabled, the GSBASE handling in paranoid entry will need to retrieve the kernel GSBASE which requires that the kernel page table is active. As the CR3 switch to the kernel page tables (PTI is active) does not depend on kernel GSBASE, move the CR3 switch in front of the GSBASE handling. Comment the EBX content while at it. No functional change. [ tglx: Rewrote changelog and comments ] Signed-off-by:
Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Sasha Levin <sashal@kernel.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/1557309753-24073-11-git-send-email-chang.seok.bae@intel.com Link: https://lkml.kernel.org/r/20200528201402.1708239-10-sashal@kernel.org
-
- 15 Jun, 2020 1 commit
-
-
Vitaly Kuznetsov authored
KVM now supports using an interrupt for 'page ready' APF event delivery, and the legacy mechanism has been deprecated. Switch KVM guests to the new one. Signed-off-by:
Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20200525144125.143875-9-vkuznets@redhat.com> [Use HYPERVISOR_CALLBACK_VECTOR instead of a separate vector. - Paolo] Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
- 11 Jun, 2020 7 commits
-
-
Thomas Gleixner authored
The entry rework moved interrupt entry code from the irqentry to the noinstr section, which made the irqentry section empty. This breaks boundary checks which rely on the __irqentry_text_start/end markers to find out whether a function in a stack trace is interrupt/exception entry code. This affects the function graph tracer and filter_irq_stacks(). As the IDT entry points are all sequentially emitted, this is rather simple to unbreak by injecting __irqentry_text_start/end as global labels. To make this work correctly:

  - Remove the IRQENTRY_TEXT section from the x86 linker script
  - Define __irqentry so it breaks the build if it's used
  - Adjust the entry mirroring in PTI
  - Remove the redundant kprobes and unwinder bound checks

Reported-by:
Qian Cai <cai@lca.pw> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
Peter Zijlstra authored
Both #DB itself and all other IST users (NMI, #MC) now clear DR7 on entry. Combined with not allowing breakpoints on entry/noinstr/NOKPROBE text and no single-stepping (EFLAGS.TF) inside the #DB handler, this should guarantee no nested #DB. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200529213321.303027161@infradead.org
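A sketch of the DR7-clearing idiom this relies on, modeled on the local_db_save() helper in the kernel's debugreg.h (details abridged and treated as assumptions here):

  static __always_inline unsigned long local_db_save(void)
  {
          unsigned long dr7;

          get_debugreg(dr7, 7);
          if (dr7)
                  set_debugreg(0, 7);   /* disarm all breakpoints */
          return dr7;                   /* caller restores on exit */
  }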
-
Thomas Gleixner authored
No more users. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Acked-by:
Andy Lutomirski <luto@kernel.org> Link: https://lore.kernel.org/r/20200521202120.523289762@linutronix.de
-
Thomas Gleixner authored
The last step to remove the irq tracing cruft from ASM. Ignore #DF as the machine is going to die anyway. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Acked-by:
Andy Lutomirski <luto@kernel.org> Link: https://lore.kernel.org/r/20200521202120.414043330@linutronix.de
-
Thomas Gleixner authored
Since INT3/#BP no longer runs on an IST, this workaround is no longer required. Tested by running lockdep+ftrace as described in the initial commit: 5963e317 ("ftrace/x86: Do not change stacks in DEBUG when calling lockdep") Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Reviewed-by:
Steven Rostedt (VMware) <rostedt@goodmis.org> Acked-by:
Andy Lutomirski <luto@kernel.org> Link: https://lore.kernel.org/r/20200521202120.319418546@linutronix.de
-
Thomas Gleixner authored
No more users. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Acked-by:
Andy Lutomirski <luto@kernel.org> Link: https://lore.kernel.org/r/20200521202120.021462159@linutronix.de
-
Thomas Gleixner authored
Remove all the code which was there to emit the system vector stubs. All users are gone. Move the now unused GET_CR2_INTO macro muck to head_64.S where the last user is. Fix up the eye-hurting comment there while at it. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Acked-by:
Andy Lutomirski <luto@kernel.org> Link: https://lore.kernel.org/r/20200521202119.927433002@linutronix.de
-