- 09 May, 2022 40 commits
-
Kees Cook authored
commit d0f6cfb2 upstream. Control Flow Integrity (CFI) instrumentation of the kernel noticed that the caller, dev_attr_show(), and the callback, odvp_show(), did not have matching function prototypes, which would cause a CFI exception to be raised. Correct the prototype by using struct device_attribute instead of struct kobj_attribute. Reported-and-tested-by:
Joao Moreira <joao@overdrivepizza.com> Link: https://lore.kernel.org/lkml/067ce8bd4c3968054509831fa2347f4f@overdrivepizza.com/ Fixes: 006f006f ("thermal/int340x_thermal: Export OEM vendor variables") Cc: 5.8+ <stable@vger.kernel.org> # 5.8+ Signed-off-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
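For readers unfamiliar with CFI, the mismatch above can be reproduced in plain userspace C. The sketch below is a hypothetical illustration (stand-in types, not the driver code): built with clang's CFI enabled (roughly clang -flto -fvisibility=hidden -fsanitize=cfi), an indirect call through a pointer whose prototype names a different attribute type is rejected at runtime, which is essentially what the kernel reported for odvp_show().

#include <stdio.h>

struct dev_attr;    /* stand-in for struct device_attribute */
struct kobj_attr;   /* stand-in for struct kobj_attribute   */

/* what the dev_attr_show()-like caller expects */
typedef long (*dev_show_t)(void *dev, struct dev_attr *attr, char *buf);

/* callback written against the wrong attribute type */
static long bad_show(void *dev, struct kobj_attr *attr, char *buf)
{
    (void)dev; (void)attr; (void)buf;
    return 0;
}

int main(void)
{
    char buf[16];

    /* The cast silences the compiler; a CFI-enabled build still flags the
     * call, because the call-site type and the target type do not match. */
    dev_show_t show = (dev_show_t)bad_show;
    printf("%ld\n", show(NULL, NULL, buf));
    return 0;
}

Without CFI the call happens to work because the ABI is compatible, which is why this kind of mismatch can go unnoticed for a long time.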
-
Ville Syrjälä authored
commit fc45e55e upstream. The "safe state" index is used by acpi_idle_enter_bm() to avoid entering a C-state that may require bus mastering to be disabled on entry in the cases when this is not going to happen. For this reason, it should not be set to point to C3 type of C-states, because they may require bus mastering to be disabled on entry in principle. This was broken by commit d6b88ce2 ("ACPI: processor idle: Allow playing dead in C3 state") which inadvertently allowed the "safe state" index to point to C3 type of C-states. This results in a machine that won't boot past the point when it first enters C3. Restore the correct behaviour (either demote to C1/C2, or use C3 but also set ARB_DIS=1). I hit this on a Fujitsu Siemens Lifebook S6010 (P3) machine. Fixes: d6b88ce2 ("ACPI: processor idle: Allow playing dead in C3 state") Cc: 5.16+ <stable@vger.kernel.org> # 5.16+ Signed-off-by:
Ville Syrjälä <ville.syrjala@linux.intel.com> Tested-by:
Woody Suwalski <wsuwalski@gmail.com> [ rjw: Subject and changelog adjustments ] Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dinh Nguyen authored
commit 5fd1fe48 upstream. I made a mistake with the commit a6aaa003 ("net: ethernet: stmmac: fix altr_tse_pcs function when using a fixed-link"). I should have tested against both scenarios: having an SGMII interface and not having one. Without the SGMII PCS TSE adapter, the sgmii_adapter_base address is NULL, thus a write to this address will fail. Cc: stable@vger.kernel.org Fixes: a6aaa003 ("net: ethernet: stmmac: fix altr_tse_pcs function when using a fixed-link") Signed-off-by:
Dinh Nguyen <dinguyen@kernel.org> Link: https://lore.kernel.org/r/20220420152345.27415-1-dinguyen@kernel.org Signed-off-by:
Jakub Kicinski <kuba@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Imre Deak authored
commit 4ae4dd2e upstream. Fix typo in the _SEL_FETCH_PLANE_BASE_1_B register base address. Fixes: a5523e2f ("drm/i915: Add PSR2 selective fetch registers") References: https://gitlab.freedesktop.org/drm/intel/-/issues/5400 Cc: José Roberto de Souza <jose.souza@intel.com> Cc: <stable@vger.kernel.org> # v5.9+ Signed-off-by:
Imre Deak <imre.deak@intel.com> Reviewed-by:
José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220421162221.2261895-1-imre.deak@intel.com (cherry picked from commit af2cbc6e ) Signed-off-by:
Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jouni Högander authored
commit c05d8332 upstream. We have now seen a panel (XMG Core 15 e21 laptop) advertising support for Intel proprietary eDP backlight control via DPCD registers, but actually working only with legacy PWM control. This patch adds a check of the panel EDID for HDR static metadata, and Intel proprietary eDP backlight control is used only if that metadata exists. Missing HDR static metadata is ignored if the user specifically asks for Intel proprietary eDP backlight control via the enable_dpcd_backlight parameter.

v2:
 - Ignore missing HDR static metadata if Intel proprietary eDP backlight control is forced via i915.enable_dpcd_backlight
 - Print an info message if the panel is missing HDR static metadata and support for Intel proprietary eDP backlight control is detected

Fixes: 4a8d7990 ("drm/i915/dp: Enable Intel's HDR backlight interface (only SDR for now)") Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5284 Cc: Lyude Paul <lyude@redhat.com> Cc: Mika Kahola <mika.kahola@intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Filippo Falezza <filippo.falezza@outlook.it> Cc: stable@vger.kernel.org Signed-off-by:
Jouni Högander <jouni.hogander@intel.com> Signed-off-by:
Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220413082826.120634-1-jouni.hogander@intel.com Reviewed-by:
Lyude Paul <lyude@redhat.com> (cherry picked from commit b4b15757 ) Signed-off-by:
Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Martin Willi authored
commit 8ddffdb9 upstream. The commit referenced below fixed packet re-routing if Netfilter mangles a routing key property of a packet and the packet is routed in a VRF L3 domain. The fix, however, addressed IPv4 re-routing, only. This commit applies the same behavior for IPv6. While at it, untangle the nested ternary operator to make the code more readable. Fixes: 6d8b49c3 ("netfilter: Update ip_route_me_harder to consider L3 domain") Cc: stable@vger.kernel.org Signed-off-by:
Martin Willi <martin@strongswan.org> Reviewed-by:
David Ahern <dsahern@kernel.org> Signed-off-by:
Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Md Sadre Alam authored
commit ba7542eb upstream. This patch fixes a memory corruption that occurred in the nand_scan() path for Hynix NAND devices. On boot, a Hynix NAND device will panic at a weird place:

| Unable to handle kernel NULL pointer dereference at virtual address 00000070
| [00000070] *pgd=00000000
| Internal error: Oops: 5 [#1] PREEMPT SMP ARM
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.17.0-01473-g13ae1769cfb0 #38
| Hardware name: Generic DT based system
| PC is at nandc_set_reg+0x8/0x1c
| LR is at qcom_nandc_command+0x20c/0x5d0
| pc : [<c088b74c>] lr : [<c088d9c8>] psr: 00000113
| sp : c14adc50 ip : c14ee208 fp : c0cc970c
| r10: 000000a3 r9 : 00000000 r8 : 00000040
| r7 : c16f6a00 r6 : 00000090 r5 : 00000004 r4 : c14ee040
| r3 : 00000000 r2 : 0000000b r1 : 00000000 r0 : c14ee040
| Flags: nzcv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none
| Control: 10c5387d Table: 8020406a DAC: 00000051
| Register r0 information: slab kmalloc-2k start c14ee000 pointer offset 64 size 2048
| Process swapper/0 (pid: 1, stack limit = 0x(ptrval))
| nandc_set_reg from qcom_nandc_command+0x20c/0x5d0
| qcom_nandc_command from nand_readid_op+0x198/0x1e8
| nand_readid_op from hynix_nand_has_valid_jedecid+0x30/0x78
| hynix_nand_has_valid_jedecid from hynix_nand_init+0xb8/0x454
| hynix_nand_init from nand_scan_with_ids+0xa30/0x14a8
| nand_scan_with_ids from qcom_nandc_probe+0x648/0x7b0
| qcom_nandc_probe from platform_probe+0x58/0xac

The problem is that nand_scan()'s qcom_nand_attach_chip callback updates nandc->max_cwperpage from 1 to 4 or 8 based on the page size. This causes the sg_init_table of clear_bam_transaction() in the driver's qcom_nandc_command() to memset much more than what was initially allocated by alloc_bam_transaction(). This patch therefore updates nandc->max_cwperpage from 1 to 4 or 8 (based on the page size) in the qcom_nand_attach_chip callback itself, after first freeing the BAM transaction memory that was allocated for nandc->max_cwperpage = 1 and then re-allocating it for nandc->max_cwperpage = 4 or 8. Cc: stable@vger.kernel.org Fixes: 6a3cec64 ("mtd: rawnand: qcom: convert driver to nand_scan()") Reported-by:
Konrad Dybcio <konrad.dybcio@somainline.org> Reviewed-by:
Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Co-developed-by:
Sricharan R <quic_srichara@quicinc.com> Signed-off-by:
Sricharan R <quic_srichara@quicinc.com> Signed-off-by:
Md Sadre Alam <quic_mdalam@quicinc.com> Signed-off-by:
Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/1650268107-5363-1-git-send-email-quic_mdalam@quicinc.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Zqiang authored
commit 31fa985b upstream. kasan_quarantine_remove_cache() is called in kmem_cache_shrink()/destroy(). The kasan_quarantine_remove_cache() call is protected by cpuslock in kmem_cache_destroy() to ensure serialization with kasan_cpu_offline(). However, the kasan_quarantine_remove_cache() call is not protected by cpuslock in kmem_cache_shrink(). When a CPU is going offline and a cache shrink occurs at the same time, the cpu_quarantine may be corrupted by an interrupt (the per_cpu_remove_cache operation). So add a cpu_quarantine offline flags check in per_cpu_remove_cache(). [akpm@linux-foundation.org: add comment, per Zqiang] Link: https://lkml.kernel.org/r/20220414025925.2423818-1-qiang1.zhang@intel.com Signed-off-by:
Zqiang <qiang1.zhang@intel.com> Reviewed-by:
Dmitry Vyukov <dvyukov@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Damien Le Moal authored
commit 694852ea upstream. Ensure that the i_flags field of struct zonefs_inode_info is cleared to 0 when initializing a zone file inode, avoiding seeing the flag ZONEFS_ZONE_OPEN being incorrectly set. Fixes: b5c00e97 ("zonefs: open/close zone on file open/close") Cc: <stable@vger.kernel.org> Signed-off-by:
Damien Le Moal <damien.lemoal@opensource.wdc.com> Reviewed-by:
Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by:
Chaitanya Kulkarni <kch@nvidia.com> Reviewed-by:
Hans Holmberg <hans.holmberg@wdc.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Damien Le Moal authored
commit 1da18a29 upstream. The mount option "explicit_open" manages the device open zone resources to ensure that if an application opens a sequential file for writing, the file zone can always be written by explicitly opening the zone and accounting for that state with the s_open_zones counter. However, if some zones are already open when mounting, the device open zone resource usage status will be larger than the initial s_open_zones value of 0. Ensure that this inconsistency does not happen by closing any sequential zone that is open when mounting. Furthermore, with ZNS drives, closing an explicitly open zone that has not been written will change the zone state to "closed", that is, the zone will remain in an active state. Since this can then cause failures of explicit open operations on other zones if the drive active zone resources are exceeded, we need to make sure that the zone is not active anymore by resetting it instead of closing it. To address this, zonefs_zone_mgmt() is modified to change a REQ_OP_ZONE_CLOSE request into a REQ_OP_ZONE_RESET for sequential zones that have not been written. Fixes: b5c00e97 ("zonefs: open/close zone on file open/close") Cc: <stable@vger.kernel.org> Signed-off-by:
Damien Le Moal <damien.lemoal@opensource.wdc.com> Reviewed-by:
Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by:
Hans Holmberg <hans.holmberg@wdc.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ville Syrjälä authored
commit 20e582e1 upstream. This reverts commit bfe55a1f. This was presumably misdiagnosed as an inability to use C3 at all, whereas I suspect the real problem is just a misconfiguration of C3 vs. ARB_DIS. Signed-off-by:
Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: 5.16+ <stable@vger.kernel.org> # 5.16+ Tested-by:
Woody Suwalski <wsuwalski@gmail.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sidhartha Kumar authored
[ Upstream commit 18d609da ] Because mremap does not have a MAP_FIXED_NOREPLACE flag, it can destroy existing mappings. This causes a segfault when regions such as text are remapped and the permissions are changed. Verify the requested mremap destination address does not overlap any existing mappings by using mmap's MAP_FIXED_NOREPLACE flag. Keep incrementing the destination address until a valid mapping is found or fail the current test once the max address is reached. Link: https://lkml.kernel.org/r/20220420215721.4868-2-sidhartha.kumar@oracle.com Signed-off-by:
Sidhartha Kumar <sidhartha.kumar@oracle.com> Reviewed-by:
Shuah Khan <skhan@linuxfoundation.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
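A minimal userspace sketch of the probing strategy described above, assuming a 64-bit Linux system; it is illustrative only and not the selftest code. MAP_FIXED_NOREPLACE makes mmap() fail with EEXIST instead of silently replacing an existing mapping, so the destination can be advanced until a non-overlapping range is found.

#define _GNU_SOURCE
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_FIXED_NOREPLACE
#define MAP_FIXED_NOREPLACE 0x100000   /* UAPI value, for older libcs */
#endif

int main(void)
{
    const size_t len = 4096;
    uintptr_t addr = 0x100000000UL;         /* arbitrary starting guess */
    const uintptr_t max = 0x600000000000UL; /* arbitrary upper bound */

    while (addr < max) {
        void *p = mmap((void *)addr, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
                       -1, 0);
        if (p != MAP_FAILED) {
            printf("non-overlapping destination found at %p\n", p);
            munmap(p, len);
            return 0;
        }
        if (errno != EEXIST) {              /* unexpected failure: give up */
            perror("mmap");
            return 1;
        }
        addr += len;                        /* overlap: try the next page */
    }
    fprintf(stderr, "no free destination below the limit\n");
    return 1;
}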
-
Sidhartha Kumar authored
[ Upstream commit 9c85a9ba ] Avoid calling mmap with requested addresses that are less than the system's mmap_min_addr. When run as root, mmap returns EACCES when trying to map addresses < mmap_min_addr. This is not one of the error codes for the condition to retry the mmap in the test. Rather than arbitrarily retrying on EACCES, don't attempt an mmap until addr > vm.mmap_min_addr. Add a munmap call after an alignment check as the mappings are retained after the retry and can reach the vm.max_map_count sysctl. Link: https://lkml.kernel.org/r/20220420215721.4868-1-sidhartha.kumar@oracle.com Signed-off-by:
Sidhartha Kumar <sidhartha.kumar@oracle.com> Reviewed-by:
Shuah Khan <skhan@linuxfoundation.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
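A companion sketch (again illustrative, not the selftest itself) of the other half of the change: read vm.mmap_min_addr and refuse to probe below it, so the EACCES case never has to be retried.

#include <stdio.h>

static unsigned long read_mmap_min_addr(void)
{
    unsigned long val = 65536;   /* common default, used as a fallback */
    FILE *f = fopen("/proc/sys/vm/mmap_min_addr", "r");

    if (f) {
        if (fscanf(f, "%lu", &val) != 1)
            val = 65536;
        fclose(f);
    }
    return val;
}

int main(void)
{
    unsigned long min_addr = read_mmap_min_addr();
    unsigned long candidate = 0x10000;

    if (candidate < min_addr) {
        printf("skipping %#lx: below vm.mmap_min_addr (%#lx)\n",
               candidate, min_addr);
        candidate = min_addr;
    }
    printf("first address worth attempting: %#lx\n", candidate);
    return 0;
}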
-
Alexey Kardashevskiy authored
[ Upstream commit bb82c574 ] The "read_bhrb" global symbol is only called under CONFIG_PPC64 in arch/powerpc/perf/core-book3s.c, but it is compiled for both 32-bit and 64-bit anyway (and LLVM fails to link this on 32-bit). This fixes it by moving bhrb.o to the obj64 targets. Signed-off-by:
Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220421025756.571995-1-aik@ozlabs.ru Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Duoming Zhou authored
[ Upstream commit bc6de287 ] There is a deadlock in rr_close(), which is shown below:

   (Thread 1)                |   (Thread 2)
                             | rr_open()
 rr_close()                  |  add_timer()
  spin_lock_irqsave() //(1)  |  (wait a time)
  ...                        |  rr_timer()
  del_timer_sync()           |   spin_lock_irqsave() //(2)
  (wait timer to stop)       |   ...

We hold rrpriv->lock in position (1) of thread 1 and use del_timer_sync() to wait for the timer to stop, but the timer handler also needs rrpriv->lock in position (2) of thread 2. As a result, rr_close() will block forever. This patch moves del_timer_sync() out of the spin_lock_irqsave() protection, which lets the timer handler obtain the needed lock. Signed-off-by:
Duoming Zhou <duoming@zju.edu.cn> Link: https://lore.kernel.org/r/20220417125519.82618-1-duoming@zju.edu.cn Signed-off-by:
Paolo Abeni <pabeni@redhat.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
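The same locking rule can be demonstrated outside the kernel. The pthread program below is only an analogue under that assumption: the worker thread plays the role of rr_timer(), and the shutdown path waits for it only after dropping the lock, which is what moving del_timer_sync() outside spin_lock_irqsave() achieves.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool stop_requested;

static void *timer_thread(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);     /* corresponds to position (2) */
        bool stop = stop_requested;
        pthread_mutex_unlock(&lock);
        if (stop)
            return NULL;
        usleep(1000);                  /* periodic "timer" work */
    }
}

int main(void)
{
    pthread_t timer;

    pthread_create(&timer, NULL, timer_thread, NULL);

    pthread_mutex_lock(&lock);         /* corresponds to position (1) */
    stop_requested = true;             /* tear down state under the lock */
    pthread_mutex_unlock(&lock);

    /* The del_timer_sync() equivalent: waiting happens with the lock
     * dropped, so the worker can still acquire it and exit cleanly. */
    pthread_join(timer, NULL);
    puts("closed without deadlock");
    return 0;
}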
-
Ronnie Sahlberg authored
[ Upstream commit f5d0f921 ] Destage any unwritten data to the server before calling copychunk_write, because the copychunk_write might cover a region of the file that has not yet been sent to the server and would thus fail. A simple way to reproduce this is:

truncate -s 0 /mnt/testfile; strace -f -o x -ttT xfs_io -i -f -c 'pwrite 0k 128k' -c 'fcollapse 16k 24k' /mnt/testfile

The issue is that the 'pwrite 0k 128k' becomes rearranged on the wire with the 'fcollapse 16k 24k' due to write-back caching. fcollapse is implemented in cifs.ko as an SMB2 IOCTL(COPYCHUNK_WRITE) call and it will fail server-side, since the file is still 0 bytes in size server-side until the writes have been destaged. To avoid this we must ensure that we destage any unwritten data to the server before calling COPYCHUNK_WRITE. Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1997373 Reported-by:
Xiaoli Feng <xifeng@redhat.com> Signed-off-by:
Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by:
Steve French <stfrench@microsoft.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Mikulas Patocka authored
[ Upstream commit a6823e4e ] The first "if" condition in __memcpy_flushcache is supposed to align the "dest" variable to 8 bytes and copy data up to this alignment. However, this condition may misbehave if "size" is greater than 4GiB. The statement

min_t(unsigned, size, ALIGN(dest, 8) - dest);

casts both arguments to unsigned int and selects the smaller one. However, the cast truncates high bits in "size" and it results in misbehavior. For example, suppose that size == 0x100000001 and dest == 0x200000002; then

min_t(unsigned, size, ALIGN(dest, 8) - dest) == min_t(0x1, 0xe) == 0x1;
...
dest += 0x1;

so we copy just one byte and dest remains unaligned. This patch fixes the bug by replacing unsigned with size_t. Signed-off-by:
Mikulas Patocka <mpatocka@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
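The truncation is easy to demonstrate in a standalone program; the sketch below re-declares simplified min_t()/ALIGN() macros purely for illustration (64-bit build assumed) and is not the kernel code.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define min_t(type, x, y) ((type)(x) < (type)(y) ? (type)(x) : (type)(y))
#define ALIGN(x, a)       (((x) + (a) - 1) & ~((uintptr_t)(a) - 1))

int main(void)
{
    size_t size    = 0x100000001UL;   /* 4 GiB + 1 byte          */
    uintptr_t dest = 0x200000002UL;   /* destination, unaligned  */

    size_t head_bad  = min_t(unsigned, size, ALIGN(dest, 8) - dest);
    size_t head_good = min_t(size_t,   size, ALIGN(dest, 8) - dest);

    /* The head copy should be 6 bytes here (up to the 8-byte boundary);
     * the unsigned cast turns it into 1, leaving dest unaligned. */
    printf("min_t(unsigned, ...): copy %zu byte(s) first\n", head_bad);
    printf("min_t(size_t, ...):   copy %zu byte(s) first\n", head_good);
    return 0;
}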
-
suresh kumar authored
[ Upstream commit 49aefd13 ] Commit b5f86218 was introduced to discard the lowest hash bit for layer3+4 hashing, but it also removes the last bit from non-layer3+4 hashing. The script below shows that, with the above commit, layer2+3 hashing results in the same slave being used.

$ cat hash.py
#/usr/bin/python3.6
h_dests=[0xa0, 0xa1]
h_source=0xe3
hproto=0x8
saddr=0x1e7aa8c0
daddr=0x17aa8c0

for h_dest in h_dests:
    hash = (h_dest ^ h_source ^ hproto ^ saddr ^ daddr)
    hash ^= hash >> 16
    hash ^= hash >> 8
    print(hash)

print("with last bit removed")
for h_dest in h_dests:
    hash = (h_dest ^ h_source ^ hproto ^ saddr ^ daddr)
    hash ^= hash >> 16
    hash ^= hash >> 8
    hash = hash >> 1
    print(hash)

Output:
$ python3.6 hash.py
522133332
522133333   <-------------- will result in both slaves being used

with last bit removed
261066666
261066666   <-------------- only single slave used

Signed-off-by:
suresh kumar <suresh2514@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Namjae Jeon authored
[ Upstream commit 02655a70 ] Currently ksmbd is using ->f_bsize from vfs_statfs() as the sector size. If fat/exfat is a local share, ->f_bsize is a cluster size, which is too large to be used as a sector size. Sector sizes larger than 4K cause a problem when mounting an iso file through a Windows client. The error can be reproduced using the Mount-DiskImage command; the message is: "Mount-DiskImage : The sector size of the physical disk on which the virtual disk resides is not supported." This patch reports a fixed 4KB sector size if ->s_blocksize is bigger than 4KB. Signed-off-by:
Namjae Jeon <linkinjeon@kernel.org> Signed-off-by:
Steve French <stfrench@microsoft.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
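A minimal sketch of the reporting rule described above, with a hypothetical helper and constant name (not the ksmbd code): anything larger than 4KB is clamped to a fixed 4KB sector size.

#include <stdio.h>

#define MAX_REPORTED_SECTOR_SIZE 4096u   /* hypothetical constant */

static unsigned int reported_sector_size(unsigned long s_blocksize)
{
    /* clamp oversized cluster sizes, pass small values through */
    return s_blocksize > MAX_REPORTED_SECTOR_SIZE
               ? MAX_REPORTED_SECTOR_SIZE
               : (unsigned int)s_blocksize;
}

int main(void)
{
    /* exfat commonly reports a cluster size, e.g. 128KB, as f_bsize */
    printf("%u\n", reported_sector_size(131072));  /* -> 4096 */
    printf("%u\n", reported_sector_size(512));     /* -> 512  */
    return 0;
}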
-
Namjae Jeon authored
[ Upstream commit 8510a043 ] Add missing increment reference count of parent fp in ksmbd_lookup_fd_inode(). Signed-off-by:
Namjae Jeon <linkinjeon@kernel.org> Reviewed-by:
Hyunchul Lee <hyc.lee@gmail.com> Signed-off-by:
Steve French <stfrench@microsoft.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Duoming Zhou authored
[ Upstream commit eb5adc70 ] There is a deadlock in rs_close(), which is shown below:

   (Thread 1)             |   (Thread 2)
                          | rs_open()
 rs_close()               |  mod_timer()
  spin_lock_bh() //(1)    |  (wait a time)
  ...                     |  rs_poll()
  del_timer_sync()        |   spin_lock() //(2)
  (wait timer to stop)    |   ...

We hold timer_lock in position (1) of thread 1 and use del_timer_sync() to wait for the timer to stop, but the timer handler also needs timer_lock in position (2) of thread 2. As a result, rs_close() will block forever. This patch deletes the redundant timer_lock in order to prevent the deadlock, because there is no race condition between rs_close, rs_open and rs_poll. Signed-off-by:
Duoming Zhou <duoming@zju.edu.cn> Message-Id: <20220407154430.22387-1-duoming@zju.edu.cn> Signed-off-by:
Max Filippov <jcmvbkbc@gmail.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Ye Bin authored
[ Upstream commit b98535d0 ] We got issue as follows: ------------[ cut here ]------------ kernel BUG at fs/jbd2/transaction.c:389! invalid opcode: 0000 [#1] PREEMPT SMP KASAN PTI CPU: 9 PID: 131 Comm: kworker/9:1 Not tainted 5.17.0-862.14.0.6.x86_64-00001-g23f87daf7d74-dirty #197 Workqueue: events flush_stashed_error_work RIP: 0010:start_this_handle+0x41c/0x1160 RSP: 0018:ffff888106b47c20 EFLAGS: 00010202 RAX: ffffed10251b8400 RBX: ffff888128dc204c RCX: ffffffffb52972ac RDX: 0000000000000200 RSI: 0000000000000004 RDI: ffff888128dc2050 RBP: 0000000000000039 R08: 0000000000000001 R09: ffffed10251b840a R10: ffff888128dc204f R11: ffffed10251b8409 R12: ffff888116d78000 R13: 0000000000000000 R14: dffffc0000000000 R15: ffff888128dc2000 FS: 0000000000000000(0000) GS:ffff88839d680000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000001620068 CR3: 0000000376c0e000 CR4: 00000000000006e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: <TASK> jbd2__journal_start+0x38a/0x790 jbd2_journal_start+0x19/0x20 flush_stashed_error_work+0x110/0x2b3 process_one_work+0x688/0x1080 worker_thread+0x8b/0xc50 kthread+0x26f/0x310 ret_from_fork+0x22/0x30 </TASK> Modules linked in: ---[ end trace 0000000000000000 ]--- Above issue may happen as follows: umount read procfs error_work ext4_put_super flush_work(&sbi->s_error_work); ext4_mb_seq_groups_show ext4_mb_load_buddy_gfp ext4_mb_init_group ext4_mb_init_cache ext4_read_block_bitmap_nowait ext4_validate_block_bitmap ext4_error ext4_handle_error schedule_work(&EXT4_SB(sb)->s_error_work); ext4_unregister_sysfs(sb); jbd2_journal_destroy(sbi->s_journal); journal_kill_thread journal->j_flags |= JBD2_UNMOUNT; flush_stashed_error_work jbd2_journal_start start_this_handle BUG_ON(journal->j_flags & JBD2_UNMOUNT); To solve this issue, we call 'ext4_unregister_sysfs() before flushing s_error_work in ext4_put_super(). Signed-off-by:
Ye Bin <yebin10@huawei.com> Reviewed-by:
Jan Kara <jack@suse.cz> Reviewed-by:
Ritesh Harjani <riteshh@linux.ibm.com> Link: https://lore.kernel.org/r/20220322012419.725457-1-yebin10@huawei.com Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Zheyu Ma authored
[ Upstream commit 92ccbf17 ] When the driver fails during probing, the driver should disable the regulator, not just handle it in wm8731_hw_init(). The following log reveals it: [ 17.812483] WARNING: CPU: 1 PID: 364 at drivers/regulator/core.c:2257 _regulator_put+0x3ec/0x4e0 [ 17.815958] RIP: 0010:_regulator_put+0x3ec/0x4e0 [ 17.824467] Call Trace: [ 17.824774] <TASK> [ 17.825040] regulator_bulk_free+0x82/0xe0 [ 17.825514] devres_release_group+0x319/0x3d0 [ 17.825882] i2c_device_probe+0x766/0x940 [ 17.829198] i2c_register_driver+0xb5/0x130 Signed-off-by:
Zheyu Ma <zheyuma97@gmail.com> Link: https://lore.kernel.org/r/20220405121038.4094051-1-zheyuma97@gmail.com Signed-off-by:
Mark Brown <broonie@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Chao Song authored
[ Upstream commit 97326be1 ] The left speaker of max98373 uses spk_r_endpoint, and the right speaker uses spk_l_endpoint; this is obviously wrong. This patch corrects the endpoints for the max98373 codec. Signed-off-by:
Chao Song <chao.song@linux.intel.com> Signed-off-by:
Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Link: https://lore.kernel.org/r/20220406192341.271465-1-pierre-louis.bossart@linux.intel.com Signed-off-by:
Mark Brown <broonie@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Pengcheng Yang authored
[ Upstream commit d9157f68 ] Currently DSACK is regarded as a dupack, which may cause F-RTO to incorrectly enter "loss was real" when receiving DSACK. Packetdrill to demonstrate:

// Enable F-RTO and TLP
0 `sysctl -q net.ipv4.tcp_frto=2`
0 `sysctl -q net.ipv4.tcp_early_retrans=3`
0 `sysctl -q net.ipv4.tcp_congestion_control=cubic`

// Establish a connection
+0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0

// RTT 10ms, RTO 210ms
+.1 < S 0:0(0) win 32792 <mss 1000,sackOK,nop,nop,nop,wscale 7>
+0 > S. 0:0(0) ack 1 <...>
+.01 < . 1:1(0) ack 1 win 257
+0 accept(3, ..., ...) = 4

// Send 2 data segments
+0 write(4, ..., 2000) = 2000
+0 > P. 1:2001(2000) ack 1

// TLP
+.022 > P. 1001:2001(1000) ack 1

// Continue to send 8 data segments
+0 write(4, ..., 10000) = 10000
+0 > P. 2001:10001(8000) ack 1

// RTO
+.188 > . 1:1001(1000) ack 1

// The original data is acked and new data is sent(F-RTO step 2.b)
+0 < . 1:1(0) ack 2001 win 257
+0 > P. 10001:12001(2000) ack 1

// D-SACK caused by TLP is regarded as a dupack, this results in
// the incorrect judgment of "loss was real"(F-RTO step 3.a)
+.022 < . 1:1(0) ack 2001 win 257 <sack 1001:2001,nop,nop>

// Never-retransmitted data(3001:4001) are acked and
// expect to switch to open state(F-RTO step 3.b)
+0 < . 1:1(0) ack 4001 win 257
+0 %{ assert tcpi_ca_state == 0, tcpi_ca_state }%

Fixes: e33099f9 ("tcp: implement RFC5682 F-RTO") Signed-off-by:
Pengcheng Yang <yangpc@wangsu.com> Acked-by:
Neal Cardwell <ncardwell@google.com> Tested-by:
Neal Cardwell <ncardwell@google.com> Reviewed-by:
Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/1650967419-2150-1-git-send-email-yangpc@wangsu.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Dany Madden authored
[ Upstream commit aeaf59b7 ] This reverts commit 723ad916. When the client requests a channel or ring size larger than what the server can support, the server will cap the request to the supported max. So, the client would not be able to successfully request resources that exceed the server limit. Fixes: 723ad916 ("ibmvnic: Add ethtool private flag for driver-defined queue limits") Signed-off-by:
Dany Madden <drt@linux.ibm.com> Link: https://lore.kernel.org/r/20220427235146.23189-1-drt@linux.ibm.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Leon Romanovsky authored
[ Upstream commit f049efc7 ] The VF driver can forward any IPsec flags, which makes the function non-extendable and prone to backward/forward incompatibility. If new software runs on the VF, it won't know that the PF configured something completely different, as it "knows" only the XFRM_OFFLOAD_INBOUND flag. Fixes: eda0333a ("ixgbe: add VF IPsec management") Reviewed-by:
Raed Salem <raeds@nvidia.com> Signed-off-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Shannon Nelson <snelson@pensando.io> Tested-by:
Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by:
Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://lore.kernel.org/r/20220427173152.443102-1-anthony.l.nguyen@intel.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
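The compatibility concern generalizes to a simple whitelist check. The sketch below uses illustrative names and a hypothetical helper rather than the ixgbe code, and assumes the XFRM_OFFLOAD_INBOUND bit value from the xfrm UAPI; unknown flag bits are rejected instead of being forwarded blindly.

#include <errno.h>
#include <stdio.h>

#define XFRM_OFFLOAD_INBOUND 0x2u   /* assumed to match the xfrm UAPI bit */
#define ACCEPTED_VF_FLAGS    XFRM_OFFLOAD_INBOUND

static int check_vf_ipsec_flags(unsigned int flags)
{
    if (flags & ~ACCEPTED_VF_FLAGS)
        return -EINVAL;             /* unknown bits: refuse the request */
    return 0;
}

int main(void)
{
    printf("%d\n", check_vf_ipsec_flags(XFRM_OFFLOAD_INBOUND)); /* 0       */
    printf("%d\n", check_vf_ipsec_flags(0x80));                 /* -EINVAL */
    return 0;
}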
-
Timothy Hayes authored
[ Upstream commit 4e13f670 ] This patch corrects a bug whereby synthesized events from SPE samples are missing virtual addresses. Fixes: 54f7815e ("perf arm-spe: Fill address info for samples") Reviewed-by:
Leo Yan <leo.yan@linaro.org> Signed-off-by:
Timothy Hayes <timothy.hayes@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: bpf@vger.kernel.org Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Fastabend <john.fastabend@gmail.com> Cc: John Garry <john.garry@huawei.com> Cc: KP Singh <kpsingh@kernel.org> Cc: Leo Yan <leo.yan@linaro.org> Cc: linux-arm-kernel@lists.infradead.org Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: netdev@vger.kernel.org Cc: Song Liu <songliubraving@fb.com> Cc: Will Deacon <will@kernel.org> Cc: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20220421165205.117662-2-timothy.hayes@arm.com Signed-off-by:
Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Andreas Gruenbacher authored
[ Upstream commit 296abc0d ] Commit 00bfe02f ("gfs2: Fix mmap + page fault deadlocks for buffered I/O") changed gfs2_file_read_iter() and gfs2_file_buffered_write() to allow dropping the inode glock while faulting in user buffers. When the lock was dropped, a short result was returned to indicate that the operation was interrupted. As pointed out by Linus (see the link below), this behavior is broken and the operations should always re-acquire the inode glock and resume the operation instead. Link: https://lore.kernel.org/lkml/CAHk-=whaz-g_nOOoo8RRiWNjnv2R+h6_xk2F1J4TuSRxk1MtLw@mail.gmail.com/ Fixes: 00bfe02f ("gfs2: Fix mmap + page fault deadlocks for buffered I/O") Signed-off-by:
Andreas Gruenbacher <agruenba@redhat.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Andreas Gruenbacher authored
[ Upstream commit 3bde4c48 ] When direct writes fail with -ENOTBLK because we're writing into a hole (gfs2_iomap_begin()) or because of a page invalidation failure (iomap_dio_rw()), we're falling back to buffered writes. In that case, when we lose the inode glock in gfs2_file_buffered_write(), we want to re-acquire it instead of returning a short write. Signed-off-by:
Andreas Gruenbacher <agruenba@redhat.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Andreas Gruenbacher authored
[ Upstream commit 124c458a ] Clean up the retry logic in the read and write functions somewhat. Signed-off-by:
Andreas Gruenbacher <agruenba@redhat.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Andreas Gruenbacher authored
[ Upstream commit 554c577c ] Currently, instead of performing a short write, iomap_file_buffered_write will fail when part of its iov iterator cannot be read. In contrast, gfs2_file_buffered_write will loop around if it can read part of the iov iterator, so we can end up in an endless loop. This should be fixed in iomap_file_buffered_write (and also generic_perform_write), but this comes a bit late in the 5.16 development cycle, so work around it in the filesystem by trimming the iov iterator to the known-good size for now. Signed-off-by:
Andreas Gruenbacher <agruenba@redhat.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Yang Yingliang authored
[ Upstream commit d2b52ec0 ] Put device node in error path in fec_enet_init_stop_mode(). Fixes: 8a448bf8 ("net: ethernet: fec: move GPR register offset and bit into DT") Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20220426125231.375688-1-yangyingliang@huawei.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
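The rule behind this class of fix can be shown with a generic, userspace-shaped sketch (hypothetical types and names, not the fec driver): a reference taken on entry must be dropped on every exit path, typically through a single unwind label.

#include <errno.h>
#include <stdio.h>

struct node { int refs; };

static struct node *node_get(struct node *n) { n->refs++; return n; }
static void node_put(struct node *n)         { n->refs--; }

static int init_stop_mode(struct node *gpr_np, int simulate_failure)
{
    int err = 0;

    node_get(gpr_np);                 /* reference taken up front */

    if (simulate_failure) {
        err = -ENODEV;
        goto out_put;                 /* the error path must also unwind */
    }
    /* ... normal setup would happen here ... */

out_put:
    node_put(gpr_np);                 /* dropped on success and on error */
    return err;
}

int main(void)
{
    struct node np = { .refs = 1 };

    init_stop_mode(&np, 1);
    printf("refs after failed init: %d\n", np.refs);   /* still 1 */
    return 0;
}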
-
Manish Chopra authored
[ Upstream commit af68656d ] While handling PCI errors (AER flow) driver tries to disable NAPI [napi_disable()] after NAPI is deleted [__netif_napi_del()] which causes unexpected system hang/crash. System message log shows the following: ======================================= [ 3222.537510] EEH: Detected PCI bus error on PHB#384-PE#800000 [ 3222.537511] EEH: This PCI device has failed 2 times in the last hour and will be permanently disabled after 5 failures. [ 3222.537512] EEH: Notify device drivers to shutdown [ 3222.537513] EEH: Beginning: 'error_detected(IO frozen)' [ 3222.537514] EEH: PE#800000 (PCI 0384:80:00.0): Invoking bnx2x->error_detected(IO frozen) [ 3222.537516] bnx2x: [bnx2x_io_error_detected:14236(eth14)]IO error detected [ 3222.537650] EEH: PE#800000 (PCI 0384:80:00.0): bnx2x driver reports: 'need reset' [ 3222.537651] EEH: PE#800000 (PCI 0384:80:00.1): Invoking bnx2x->error_detected(IO frozen) [ 3222.537651] bnx2x: [bnx2x_io_error_detected:14236(eth13)]IO error detected [ 3222.537729] EEH: PE#800000 (PCI 0384:80:00.1): bnx2x driver reports: 'need reset' [ 3222.537729] EEH: Finished:'error_detected(IO frozen)' with aggregate recovery state:'need reset' [ 3222.537890] EEH: Collect temporary log [ 3222.583481] EEH: of node=0384:80:00.0 [ 3222.583519] EEH: PCI device/vendor: 168e14e4 [ 3222.583557] EEH: PCI cmd/status register: 00100140 [ 3222.583557] EEH: PCI-E capabilities and status follow: [ 3222.583744] EEH: PCI-E 00: 00020010 012c8da2 00095d5e 00455c82 [ 3222.583892] EEH: PCI-E 10: 10820000 00000000 00000000 00000000 [ 3222.583893] EEH: PCI-E 20: 00000000 [ 3222.583893] EEH: PCI-E AER capability register set follows: [ 3222.584079] EEH: PCI-E AER 00: 13c10001 00000000 00000000 00062030 [ 3222.584230] EEH: PCI-E AER 10: 00002000 000031c0 000001e0 00000000 [ 3222.584378] EEH: PCI-E AER 20: 00000000 00000000 00000000 00000000 [ 3222.584416] EEH: PCI-E AER 30: 00000000 00000000 [ 3222.584416] EEH: of node=0384:80:00.1 [ 3222.584454] EEH: PCI device/vendor: 168e14e4 [ 3222.584491] EEH: PCI cmd/status register: 00100140 [ 3222.584492] EEH: PCI-E capabilities and status follow: [ 3222.584677] EEH: PCI-E 00: 00020010 012c8da2 00095d5e 00455c82 [ 3222.584825] EEH: PCI-E 10: 10820000 00000000 00000000 00000000 [ 3222.584826] EEH: PCI-E 20: 00000000 [ 3222.584826] EEH: PCI-E AER capability register set follows: [ 3222.585011] EEH: PCI-E AER 00: 13c10001 00000000 00000000 00062030 [ 3222.585160] EEH: PCI-E AER 10: 00002000 000031c0 000001e0 00000000 [ 3222.585309] EEH: PCI-E AER 20: 00000000 00000000 00000000 00000000 [ 3222.585347] EEH: PCI-E AER 30: 00000000 00000000 [ 3222.586872] RTAS: event: 5, Type: Platform Error (224), Severity: 2 [ 3222.586873] EEH: Reset without hotplug activity [ 3224.762767] EEH: Beginning: 'slot_reset' [ 3224.762770] EEH: PE#800000 (PCI 0384:80:00.0): Invoking bnx2x->slot_reset() [ 3224.762771] bnx2x: [bnx2x_io_slot_reset:14271(eth14)]IO slot reset initializing... 
[ 3224.762887] bnx2x 0384:80:00.0: enabling device (0140 -> 0142) [ 3224.768157] bnx2x: [bnx2x_io_slot_reset:14287(eth14)]IO slot reset --> driver unload Uninterruptible tasks ===================== crash> ps | grep UN 213 2 11 c000000004c89e00 UN 0.0 0 0 [eehd] 215 2 0 c000000004c80000 UN 0.0 0 0 [kworker/0:2] 2196 1 28 c000000004504f00 UN 0.1 15936 11136 wickedd 4287 1 9 c00000020d076800 UN 0.0 4032 3008 agetty 4289 1 20 c00000020d056680 UN 0.0 7232 3840 agetty 32423 2 26 c00000020038c580 UN 0.0 0 0 [kworker/26:3] 32871 4241 27 c0000002609ddd00 UN 0.1 18624 11648 sshd 32920 10130 16 c00000027284a100 UN 0.1 48512 12608 sendmail 33092 32987 0 c000000205218b00 UN 0.1 48512 12608 sendmail 33154 4567 16 c000000260e51780 UN 0.1 48832 12864 pickup 33209 4241 36 c000000270cb6500 UN 0.1 18624 11712 sshd 33473 33283 0 c000000205211480 UN 0.1 48512 12672 sendmail 33531 4241 37 c00000023c902780 UN 0.1 18624 11648 sshd EEH handler hung while bnx2x sleeping and holding RTNL lock =========================================================== crash> bt 213 PID: 213 TASK: c000000004c89e00 CPU: 11 COMMAND: "eehd" #0 [c000000004d477e0] __schedule at c000000000c70808 #1 [c000000004d478b0] schedule at c000000000c70ee0 #2 [c000000004d478e0] schedule_timeout at c000000000c76dec #3 [c000000004d479c0] msleep at c0000000002120cc #4 [c000000004d479f0] napi_disable at c000000000a06448 ^^^^^^^^^^^^^^^^ #5 [c000000004d47a30] bnx2x_netif_stop at c0080000018dba94 [bnx2x] #6 [c000000004d47a60] bnx2x_io_slot_reset at c0080000018a551c [bnx2x] #7 [c000000004d47b20] eeh_report_reset at c00000000004c9bc #8 [c000000004d47b90] eeh_pe_report at c00000000004d1a8 #9 [c000000004d47c40] eeh_handle_normal_event at c00000000004da64 And the sleeping source code ============================ crash> dis -ls c000000000a06448 FILE: ../net/core/dev.c LINE: 6702 6697 { 6698 might_sleep(); 6699 set_bit(NAPI_STATE_DISABLE, &n->state); 6700 6701 while (test_and_set_bit(NAPI_STATE_SCHED, &n->state)) * 6702 msleep(1); 6703 while (test_and_set_bit(NAPI_STATE_NPSVC, &n->state)) 6704 msleep(1); 6705 6706 hrtimer_cancel(&n->timer); 6707 6708 clear_bit(NAPI_STATE_DISABLE, &n->state); 6709 } EEH calls into bnx2x twice based on the system log above, first through bnx2x_io_error_detected() and then bnx2x_io_slot_reset(), and executes the following call chains: bnx2x_io_error_detected() +-> bnx2x_eeh_nic_unload() +-> bnx2x_del_all_napi() +-> __netif_napi_del() bnx2x_io_slot_reset() +-> bnx2x_netif_stop() +-> bnx2x_napi_disable() +->napi_disable() Fix this by correcting the sequence of NAPI APIs usage, that is delete the NAPI after disabling it. Fixes: 7fa6f340 ("bnx2x: AER revised") Reported-by:
David Christensen <drc@linux.vnet.ibm.com> Tested-by:
David Christensen <drc@linux.vnet.ibm.com> Signed-off-by:
Manish Chopra <manishc@marvell.com> Signed-off-by:
Ariel Elior <aelior@marvell.com> Link: https://lore.kernel.org/r/20220426153913.6966-1-manishc@marvell.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Maxim Mikityanskiy authored
[ Upstream commit a0df7194 ] Calling tls_append_frag when max_open_record_len == record->len might add an empty fragment to the TLS record if the call happens to be on the page boundary. Normally tls_append_frag coalesces the zero-sized fragment to the previous one, but not if it's on page boundary. If a resync happens then, the mlx5 driver posts dump WQEs in tx_post_resync_dump, and the empty fragment may become a data segment with byte_count == 0, which will confuse the NIC and lead to a CQE error. This commit fixes the described issue by skipping tls_append_frag on zero size to avoid adding empty fragments. The fix is not in the driver, because an empty fragment is hardly the desired behavior. Fixes: e8f69799 ("net/tls: Add generic NIC offload infrastructure") Signed-off-by:
Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by:
Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20220426154949.159055-1-maximmi@nvidia.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Miaoqian Lin authored
[ Upstream commit 65e54987 ] When dcn20_clk_src_construct() fails, we need to release clk_src. Fixes: 6f4e6361 ("drm/amd/display: Add Renoir resource (v2)") Signed-off-by:
Miaoqian Lin <linmq006@gmail.com> Signed-off-by:
Alex Deucher <alexander.deucher@amd.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
David Yat Sin authored
[ Upstream commit 7c6b6e18 ] dqm->gws_queue_count and pdd->qpd.mapped_gws_queue need to be updated each time the queue gets evicted. Fixes: b8020b03 ("drm/amdkfd: Enable over-subscription with >1 GWS queue") Signed-off-by:
David Yat Sin <david.yatsin@amd.com> Reviewed-by:
Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by:
Alex Deucher <alexander.deucher@amd.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Volodymyr Mytnyk authored
[ Upstream commit 626873c4 ] The `nf_flowtable_udp_timeout` sysctl option is available only if CONFIG_NFT_FLOW_OFFLOAD is enabled, but the infrastructure for this flow offload UDP timeout was added under the CONFIG_NF_FLOW_TABLE config option. So, if you have CONFIG_NFT_FLOW_OFFLOAD disabled and CONFIG_NF_FLOW_TABLE enabled, `nf_flowtable_udp_timeout` is not present in sysfs. Please note that the TCP flow offload timeout sysctl option is present even when CONFIG_NFT_FLOW_OFFLOAD is disabled. I suppose it was a typo in the commit that adds the UDP flow offload timeout, and CONFIG_NF_FLOW_TABLE should be used instead. Fixes: 975c5750 ("netfilter: conntrack: Introduce udp offload timeout configuration") Signed-off-by:
Volodymyr Mytnyk <volodymyr.mytnyk@plvision.eu> Signed-off-by:
Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Jens Axboe authored
[ Upstream commit 5a1e99b6 ] We should check unused fields for non-zero and -EINVAL if they are set, making it consistent with other opcodes. Fixes: aa1fa28f ("io_uring: add support for recvmsg()") Signed-off-by:
Jens Axboe <axboe@kernel.dk> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Jens Axboe authored
[ Upstream commit 588faa1e ] We should check unused fields for non-zero and -EINVAL if they are set, making it consistent with other opcodes. Fixes: 0fa03c62 ("io_uring: add support for sendmsg()") Signed-off-by:
Jens Axboe <axboe@kernel.dk> Signed-off-by:
Sasha Levin <sashal@kernel.org>
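Both changes apply the same validation pattern. The sketch below is a generic illustration with a hypothetical, simplified submission-entry layout (not io_uring's actual structures): reserved fields the opcode does not use must be zero, otherwise the request fails with -EINVAL, which keeps those fields available for future extensions.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

struct fake_sqe {                 /* hypothetical, simplified submission entry */
    uint8_t  opcode;
    uint32_t used_flags;
    uint32_t reserved1;           /* unused by sendmsg/recvmsg-style opcodes */
    uint64_t reserved2;
};

static int prep_msg_request(const struct fake_sqe *sqe)
{
    if (sqe->reserved1 || sqe->reserved2)
        return -EINVAL;           /* unused fields must be zero */
    return 0;                     /* ... continue normal preparation ... */
}

int main(void)
{
    struct fake_sqe ok  = { .opcode = 1 };
    struct fake_sqe bad = { .opcode = 1, .reserved2 = 42 };

    printf("ok:  %d\n", prep_msg_request(&ok));    /* 0       */
    printf("bad: %d\n", prep_msg_request(&bad));   /* -EINVAL */
    return 0;
}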
-