- 09 Dec, 2017 25 commits
-
-
Greg Kroah-Hartman authored
-
Colin Ian King authored
commit 1d5a3158 upstream. The variable temp is incorrectly being updated; instead, offset should be updated, otherwise the loop just reads the same capability value and loops forever. Thanks to Alan Stern for pointing out the correct fix to my original fix. The fix also cleans up a clang warning: drivers/usb/host/ehci-dbg.c:840:4: warning: Value stored to 'temp' is never read Fixes: d49d4317 ("USB: misc ehci updates") Signed-off-by:
Colin Ian King <colin.king@canonical.com> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
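A minimal, self-contained sketch of the loop bug described in this entry; the register layout and values are invented for illustration and are not the EHCI driver's. The point is only that the walk must advance offset to the next capability rather than overwrite temp, or it re-reads the same capability forever.

    #include <stdint.h>
    #include <stdio.h>

    /* fake extended-capability "registers": low byte = capability ID,
     * next byte = offset of the next capability (0 terminates the list) */
    static const uint32_t fake_caps[16] = {
        [0x4] = 0x01 | (0x8 << 8),   /* capability 1, next entry at 0x8 */
        [0x8] = 0x0a | (0x0 << 8),   /* capability 10, end of list */
    };

    int main(void)
    {
        unsigned int offset = 0x4;

        while (offset) {
            uint32_t temp = fake_caps[offset];

            printf("capability %u at offset %#x\n", temp & 0xff, offset);
            /* the buggy loop updated temp here and never terminated;
             * the fix advances offset to the next entry instead */
            offset = (temp >> 8) & 0xff;
        }
        return 0;
    }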
-
Oliver Neukum authored
commit 446f666d upstream. USBDEVFS_URB_ISO_ASAP must be accepted only for ISO endpoints. Improve sanity checking. Reported-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Oliver Neukum <oneukum@suse.com> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dan Carpenter authored
commit 57999d11 upstream. There used to be an integer overflow check in proc_do_submiturb() but we removed it. It turns out that it's still required. The uurb->buffer_length variable is a signed integer and it's controlled by the user. It can lead to an integer overflow when we do: num_sgs = DIV_ROUND_UP(uurb->buffer_length, USB_SG_SIZE); If we strip away the macro then that line looks like this: num_sgs = (uurb->buffer_length + USB_SG_SIZE - 1) / USB_SG_SIZE; It's the first addition, uurb->buffer_length + USB_SG_SIZE - 1, which can overflow. Fixes: 1129d270 ("USB: Increase usbfs transfer limit") Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
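A small, self-contained sketch of the overflow described above; the constants and the upper bound are illustrative, not the driver's exact values. With a signed, user-controlled length, the rounded-up addition inside DIV_ROUND_UP can wrap, so the length must be range-checked first.

    #include <limits.h>
    #include <stdio.h>

    #define USB_SG_SIZE 16384
    #define MAX_BUFFER  (16 * 1024 * 1024)   /* example upper bound, not usbfs's */

    static int num_sgs_for(int buffer_length)
    {
        /* reject lengths for which buffer_length + USB_SG_SIZE - 1 would overflow */
        if (buffer_length < 0 || buffer_length > MAX_BUFFER)
            return -1;
        return (buffer_length + USB_SG_SIZE - 1) / USB_SG_SIZE;   /* DIV_ROUND_UP */
    }

    int main(void)
    {
        printf("%d\n", num_sgs_for(100000));    /* 7 scatter-gather entries */
        printf("%d\n", num_sgs_for(INT_MAX));   /* -1: rejected instead of overflowing */
        return 0;
    }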
-
Mateusz Berezecki authored
commit 1129d270 upstream. Promote a variable keeping track of USB transfer memory usage to a wider data type and allow for higher bandwidth transfers from a large number of USB devices connected to a single host. Signed-off-by:
Mateusz Berezecki <mateuszb@fastmail.fm> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mike Looijmans authored
commit 973593a9 upstream. Sometimes the USB device gets confused about the state of the initialization and the connection fails. In particular, the device thinks that it's already set up and running while the host thinks the device still needs to be configured. To work around this issue, power-cycle the hub's output to issue a sort of "reset" to the device. This makes the device restart its state machine and then the initialization succeeds. This fixes problems where the kernel reports a list of errors like this: usb 1-1.3: device not accepting address 19, error -71 The end result is a non-functioning device. After this patch, the sequence becomes like this: usb 1-1.3: new high-speed USB device number 18 using ci_hdrc usb 1-1.3: device not accepting address 18, error -71 usb 1-1.3: new high-speed USB device number 19 using ci_hdrc usb 1-1.3: device not accepting address 19, error -71 usb 1-1-port3: attempt power cycle usb 1-1.3: new high-speed USB device number 21 using ci_hdrc usb-storage 1-1.3:1.2: USB Mass Storage device detected Signed-off-by:
Mike Looijmans <mike.looijmans@topic.nl> Acked-by:
Alan Stern <stern@rowland.harvard.edu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Matt Wilson authored
commit 3bfd1300 upstream. This device will be used in future Amazon EC2 instances as the primary serial port (i.e., data sent to this port will be available via the GetConsoleOutput [1] EC2 API). [1] http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetConsoleOutput.html Signed-off-by:
Matt Wilson <msw@amazon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kai-Heng Feng authored
commit e43a12f1 upstream. KY-688 USB 3.1 Type-C Hub internally uses a Genesys Logic hub to connect to Realtek r8153. Similar to commit 7496cfe5 ("usb: quirks: Add no-lpm quirk for Moshi USB to Ethernet Adapter"), no-lpm can make the r8153 ethernet work. Signed-off-by:
Kai-Heng Feng <kai.heng.feng@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hans de Goede authored
commit 7fee72d5 upstream. We've been adding this as a quirk on a per device basis hoping that newer disk enclosures would do better, but that has not happened, so simply apply this quirk to all Seagate devices. Signed-off-by:
Hans de Goede <hdegoede@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Boshi Wang authored
[ Upstream commit ebe7c0a7 ] The hash_setup function always sets the hash_setup_done flag, even when the hash algorithm is invalid. This prevents the default hash algorithm defined as CONFIG_IMA_DEFAULT_HASH from being used. This patch sets the hash_setup_done flag only for valid hash algorithms. Fixes: e7a2ad7e ("ima: enable support for larger default filedata hash algorithms") Signed-off-by:
Boshi Wang <wangboshi@huawei.com> Signed-off-by:
Mimi Zohar <zohar@linux.vnet.ibm.com> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rui Sousa authored
[ Upstream commit 01f8902b ] Fix hardware setup of the multicast address hash: never clear the hardware hash (to avoid packet loss), and construct the hash register values in software, then write them to hardware once. Signed-off-by:
Rui Sousa <rui.sousa@nxp.com> Signed-off-by:
Fugang Duan <fugang.duan@nxp.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
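A self-contained sketch of the "build in software, write once" pattern described above; the hash function and the GAUR/GALR register writes are stand-ins, not the FEC driver's code.

    #include <stdint.h>
    #include <stdio.h>

    static unsigned int toy_hash6(const uint8_t mac[6])
    {
        unsigned int h = 0;

        for (int i = 0; i < 6; i++)
            h = h * 31 + mac[i];
        return h & 0x3f;               /* 6-bit bucket: 64-entry filter */
    }

    static void write_hash_regs(uint32_t hi, uint32_t lo)
    {
        printf("GAUR=%08x GALR=%08x\n", hi, lo);   /* stand-in for the register writes */
    }

    int main(void)
    {
        const uint8_t mc[][6] = {
            { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 },
            { 0x33, 0x33, 0x00, 0x00, 0x00, 0x01 },
        };
        uint32_t hash_high = 0, hash_low = 0;

        /* accumulate the whole filter locally ... */
        for (size_t i = 0; i < sizeof(mc) / sizeof(mc[0]); i++) {
            unsigned int bit = toy_hash6(mc[i]);

            if (bit >= 32)
                hash_high |= 1u << (bit - 32);
            else
                hash_low |= 1u << bit;
        }
        /* ... then write it once; the hardware hash is never cleared mid-update */
        write_hash_regs(hash_high, hash_low);
        return 0;
    }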
-
Jan Kara authored
[ Upstream commit 0911d004 ] Some ->page_mkwrite handlers may return VM_FAULT_RETRY as their return code (GFS2 or Lustre can definitely do this). However VM_FAULT_RETRY from ->page_mkwrite is completely unhandled by the mm code and results in locking and writeably mapping the page, which definitely is not what the caller wanted. Fix Lustre and block_page_mkwrite_ret() used by other filesystems (notably GFS2) to return VM_FAULT_NOPAGE instead, which results in bailing out from the fault code; the CPU then retries the access, and we fault again, effectively doing what the handler wanted. Link: http://lkml.kernel.org/r/20170203150729.15863-1-jack@suse.cz Signed-off-by:
Jan Kara <jack@suse.cz> Reported-by:
Al Viro <viro@ZenIV.linux.org.uk> Reviewed-by:
Jinshan Xiong <jinshan.xiong@intel.com> Cc: Matthew Wilcox <willy@infradead.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Parthasarathy Bhuvaragan authored
[ Upstream commit 35e22e49 ] In tipc_server_stop(), we iterate over the connections using the server's idr_in_use as the limiting factor. We ignore the fact that this variable is decremented in tipc_close_conn(), leading to a premature exit. In this commit, we iterate until we have no connections left. Acked-by:
Ying Xue <ying.xue@windriver.com> Acked-by:
Jon Maloy <jon.maloy@ericsson.com> Tested-by:
John Thompson <thompa.atl@gmail.com> Signed-off-by:
Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Colin Ian King authored
[ Upstream commit 0e73fc9a ] The comparison on the timeout can lead to an array overrun read on sctp_timer_tbl because of an off-by-one error. Fix this by using < instead of <= and also compare to the array size rather than SCTP_EVENT_TIMEOUT_MAX. Fixes CoverityScan CID#1397639 ("Out-of-bounds read") Signed-off-by:
Colin Ian King <colin.king@canonical.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
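A minimal illustration of the off-by-one described above; the table contents are placeholders, not SCTP's timer table. A "<= MAX" check lets the one-past-the-end index through, while comparing with "<" against the array size stays in bounds.

    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static const char *const timer_tbl[] = { "T1_INIT", "T2_SHUTDOWN", "T3_RTX" };

    static const char *timer_name(unsigned int id)
    {
        if (id < ARRAY_SIZE(timer_tbl))   /* was effectively: id <= MAX, one past the end */
            return timer_tbl[id];
        return "unknown";
    }

    int main(void)
    {
        printf("%s\n", timer_name(2));   /* T3_RTX */
        printf("%s\n", timer_name(3));   /* "unknown" rather than an out-of-bounds read */
        return 0;
    }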
-
Trond Myklebust authored
[ Upstream commit c6180a62 ] If the server reboots multiple times, the client should rely on the server to tell it that it cannot reclaim state, as per section 9.6.3.4 in RFC7530 and section 8.4.2.1 in RFC5661. Currently, the client is being too conservative, and is assuming that if the server reboots while state recovery is in progress, then it must ignore state that was not recovered before the reboot. Signed-off-by:
Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vlad Tsyrklevich authored
[ Upstream commit ce7e40c4 ] ipddp_route structs contain alignment padding so kernel heap memory is leaked when they are copied to user space in ipddp_ioctl(SIOCFINDIPDDPRT). Change kmalloc() to kzalloc() to clear that memory. Signed-off-by:
Vlad Tsyrklevich <vlad@tsyrklevich.net> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
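A user-space illustration of the padding leak described above; the struct layout is invented. A struct with alignment holes allocated without zeroing carries whatever bytes were already in the heap, and copying the whole struct out exposes them; zeroing the allocation, as kzalloc() does, closes the leak.

    #include <stdio.h>
    #include <stdlib.h>

    struct route_rec {           /* invented layout with alignment holes */
        char  type;              /* typically followed by 3 padding bytes */
        int   network;
        short node;              /* trailing padding is likely here too */
    };

    int main(void)
    {
        struct route_rec *leaky = malloc(sizeof(*leaky));    /* padding left uninitialized */
        struct route_rec *safe  = calloc(1, sizeof(*safe));  /* kzalloc() analogue: every byte zeroed */

        if (!leaky || !safe)
            return 1;
        leaky->type = 1; leaky->network = 42; leaky->node = 7;
        safe->type  = 1; safe->network  = 42; safe->node  = 7;

        /* copying sizeof(*rec) bytes (the copy_to_user() analogue) includes the holes */
        printf("struct size %zu vs fields %zu: the difference goes out with the copy\n",
               sizeof(*leaky), sizeof(char) + sizeof(int) + sizeof(short));

        free(leaky);
        free(safe);
        return 0;
    }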
-
David Forster authored
[ Upstream commit 93e246f7 ] The vti6 interface is registered before the rtnl_link_ops block is attached. As a result, the resulting RTM_NEWLINK message is missing IFLA_INFO_KIND. Re-order the attachment of the rtnl_link_ops block to fix this. Signed-off-by:
Dave Forster <dforster@brocade.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Peter Ujfalusi authored
[ Upstream commit 65727977 ] OMAP1510, OMAP5910 and OMAP310 have only 9 logical channels. OMAP1610, OMAP5912, OMAP1710, OMAP730, and OMAP850 have 16 logical channels available. The hard-wired 17 for lch_count must have been used to cover the 16 + 1 dedicated LCD channel; in reality we can only use 9 or 16 channels. The d->chan_count is not used by the omap-dma stack, so we can skip the setup. chan_count was configured to the number of logical channels and not the actual number of physical channels anyway. Signed-off-by:
Peter Ujfalusi <peter.ujfalusi@ti.com> Acked-by:
Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by:
Tony Lindgren <tony@atomide.com> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Richter authored
[ Upstream commit 22905582 ] Command perf test -v 16 (Setup struct perf_event_attr test) always reports success even if the test case fails. It works correctly if you also specify -F (for don't fork). [root@s35lp76 perf]# ./perf test -v 16 15: Setup struct perf_event_attr : --- start --- running './tests/attr/test-record-no-delay' [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.002 MB /tmp/tmp4E1h7R/perf.data (1 samples) ] expected task=0, got 1 expected precise_ip=0, got 3 expected wakeup_events=1, got 0 FAILED './tests/attr/test-record-no-delay' - match failure test child finished with 0 ---- end ---- Setup struct perf_event_attr: Ok The reason for the wrong error reporting is the return value of the system() library call. It is called in run_dir() in file tests/attr.c and returns the wait status, in the above case 0xff00. This value is given as a parameter to the exit() function, which can only handle values 0-0xff. The child process terminates with an exit value of 0 and the parent does not detect any error. This patch corrects the error reporting and prints the correct test result. Signed-off-by:
Thomas-Mich Richter <tmricht@linux.vnet.ibm.com> Acked-by:
Jiri Olsa <jolsa@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Thomas-Mich Richter <tmricht@linux.vnet.ibm.com> LPU-Reference: 20170913081209.39570-2-tmricht@linux.vnet.ibm.com Link: http://lkml.kernel.org/n/tip-rdube6rfcjsr1nzue72c7lqn@git.kernel.org Signed-off-by:
Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
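A short sketch of the status handling described above; the shell command is just an example. system() returns a wait status, not an exit code, so passing it straight to exit() truncates 0xff00 to 0 and the failure disappears; decoding it with WIFEXITED()/WEXITSTATUS() propagates the real result.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    int main(void)
    {
        int status = system("exit 255");          /* child exits non-zero */

        if (status == -1)
            return 1;                             /* system() itself failed */

        printf("raw wait status: %#x\n", status); /* e.g. 0xff00; exit(status) would report 0 */

        if (WIFEXITED(status))
            return WEXITSTATUS(status);           /* propagate the real exit code (255) */
        return 1;                                 /* killed by a signal or other failure */
    }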
-
Jibin Xu authored
[ Upstream commit b00bebbc ] When the kernel configuration options SMP, PREEMPT and DEBUG_PREEMPT are enabled, echo 1 >/proc/sys/kernel/sysrq followed by echo p >/proc/sysrq-trigger makes the kernel print a call trace as below: sysrq: SysRq : Show Regs BUG: using __this_cpu_read() in preemptible [00000000] code: sh/435 caller is __this_cpu_preempt_check+0x18/0x20 Call trace: [<ffffff8008088e80>] dump_backtrace+0x0/0x1d0 [<ffffff8008089074>] show_stack+0x24/0x30 [<ffffff8008447970>] dump_stack+0x90/0xb0 [<ffffff8008463950>] check_preemption_disabled+0x100/0x108 [<ffffff8008463998>] __this_cpu_preempt_check+0x18/0x20 [<ffffff80084c9194>] sysrq_handle_showregs+0x1c/0x40 [<ffffff80084c9c7c>] __handle_sysrq+0x12c/0x1a0 [<ffffff80084ca140>] write_sysrq_trigger+0x60/0x70 [<ffffff8008251e00>] proc_reg_write+0x90/0xd0 [<ffffff80081f1788>] __vfs_write+0x48/0x90 [<ffffff80081f241c>] vfs_write+0xa4/0x190 [<ffffff80081f3354>] SyS_write+0x54/0xb0 [<ffffff80080833f0>] el0_svc_naked+0x24/0x28 This can be seen on a common board like an r-pi3. This happens because on echo p >/proc/sysrq-trigger, get_irq_regs() is called outside of IRQ context; if preemption is enabled in this situation, the kernel will print the call trace. Since many prior discussions on the mailing lists have made it clear that get_irq_regs() either just returns NULL or stale data when used outside of IRQ context, we simply avoid calling it outside of IRQ context. Signed-off-by:
Jibin Xu <jibin.xu@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gustavo A. R. Silva authored
[ Upstream commit a8e9b186 ] Add missing break statement in order to prevent the code from falling through. Signed-off-by:
Gustavo A. R. Silva <garsilva@embeddedor.com> Cc: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Cc: linux-edac <linux-edac@vger.kernel.org> Link: http://lkml.kernel.org/r/20171016174029.GA19757@embeddedor.com Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
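A generic illustration of the fall-through fixed above (the EDAC code itself is not reproduced here): without the break, case 1 continues into case 2 and returns the wrong name.

    #include <stdio.h>

    static const char *mode_name(int mode)
    {
        const char *name = "unknown";

        switch (mode) {
        case 1:
            name = "mirrored";
            break;            /* the missing break let execution fall into case 2 */
        case 2:
            name = "lockstep";
            break;
        }
        return name;
    }

    int main(void)
    {
        printf("%s\n", mode_name(1));   /* "mirrored", not "lockstep" */
        return 0;
    }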
-
Hiromitsu Yamasaki authored
[ Upstream commit 36735783 ] DMA supports 32-bit words only, even if BITLEN1 of the SITMDR2 register is 16 bits. Fixes: b0d0ce8b ("spi: sh-msiof: Add DMA support") Signed-off-by:
Hiromitsu Yamasaki <hiromitsu.yamasaki.ym@renesas.com> Signed-off-by:
Simon Horman <horms+renesas@verge.net.au> Acked-by:
Geert Uytterhoeven <geert+renesas@glider.be> Acked-by:
Dirk Behme <dirk.behme@de.bosch.com> Signed-off-by:
Mark Brown <broonie@kernel.org> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lukas Wunner authored
[ Upstream commit 3236a965 ] This driver's ->rs485_config callback checks if SER_RS485_RTS_ON_SEND and SER_RS485_RTS_AFTER_SEND have the same value. If they do, it means the user has passed in invalid data with the TIOCSRS485 ioctl() since RTS must have a different polarity when sending and when not sending. In this case, rs485 mode is not enabled (the RS485_URA bit is not set in the RS485 Enable Register) and this is supposed to be signaled back to the user by clearing the SER_RS485_ENABLED bit in struct serial_rs485 ... except a missing tilde character is preventing that from happening. Fixes: 28e3fb6c ("serial: Add support for Fintek F81216A LPC to 4 UART") Cc: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com> Cc: "Ji-Ze Hong (Peter Hong)" <hpeter@gmail.com> Signed-off-by:
Lukas Wunner <lukas@wunner.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
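A minimal illustration of the missing tilde: clearing a flag needs "&= ~FLAG", while "&= FLAG" throws away every other bit and leaves the flag itself set. The flag is taken to be bit 0 here purely for illustration.

    #include <stdio.h>

    #define SER_RS485_ENABLED (1u << 0)   /* illustrative value */

    int main(void)
    {
        unsigned int flags = 0xffu;

        unsigned int buggy = flags & SER_RS485_ENABLED;    /* keeps only the flag: 0x01 */
        unsigned int fixed = flags & ~SER_RS485_ENABLED;   /* clears just the flag: 0xfe */

        printf("buggy=%#x fixed=%#x\n", buggy, fixed);
        return 0;
    }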
-
Rui Hua authored
commit e393aa24 upstream. When we send a read request and hit clean data in the cache device, there is a situation called a cache read race in bcache (see the comment in the tail of cache_lookup(); the following explanation is just copied from there): The bucket we're reading from might be reused while our bio is in flight, and we could then end up reading the wrong data. We guard against this by checking (in bch_cache_read_endio()) if the pointer is stale again; if so, we treat it as an error (s->iop.error = -EINTR) and reread from the backing device (but we don't pass that error up anywhere). It should be noted that a cache read race happens under normal circumstances, not just when the SSD fails; it is counted and shown in /sys/fs/bcache/XXX/internal/cache_read_races. Without this patch, when we use writeback mode, we will never reread from the backing device when a cache read race happens, until the whole cache device is clean, because the condition (s->recoverable && (dc && !atomic_read(&dc->has_dirty))) is false in cached_dev_read_error(). In this situation, s->iop.error (= -EINTR) will be passed up and the user will receive -EINTR when the bio ends, which is not appropriate and is confusing to the upper application. In this patch, we use s->read_dirty_data to judge whether the read request hit dirty data in the cache device; it is safe to reread data from the backing device when the read request hit clean data. This not only handles the cache read race, but also recovers data when a read request from the cache device fails. [edited by mlyle to fix up whitespace, commit log title, comment spelling] Fixes: d59b2379 ("bcache: only permit to recovery read error when cache device is clean") Signed-off-by:
Hua Rui <huarui.dev@gmail.com> Reviewed-by:
Michael Lyle <mlyle@lyle.org> Reviewed-by:
Coly Li <colyli@suse.de> Signed-off-by:
Michael Lyle <mlyle@lyle.org> Signed-off-by:
Jens Axboe <axboe@kernel.dk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
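A hedged, self-contained sketch of the decision change described in this entry; the structure and fields are stand-ins for bcache's, kept only to contrast the old whole-cache-clean test with the new per-request read_dirty_data test.

    #include <stdbool.h>
    #include <stdio.h>

    struct fake_search {
        bool recoverable;      /* the request may be retried at all */
        bool read_dirty_data;  /* this read touched dirty (not yet written back) data */
        bool cache_has_dirty;  /* any dirty data anywhere on the cache device */
    };

    static bool old_can_recover(const struct fake_search *s)
    {
        return s->recoverable && !s->cache_has_dirty;   /* too strict: blocks clean-data reads too */
    }

    static bool new_can_recover(const struct fake_search *s)
    {
        return s->recoverable && !s->read_dirty_data;   /* retry whenever the data read was clean */
    }

    int main(void)
    {
        struct fake_search s = {
            .recoverable = true, .read_dirty_data = false, .cache_has_dirty = true,
        };

        printf("old=%d new=%d\n", old_can_recover(&s), new_can_recover(&s));  /* old=0 new=1 */
        return 0;
    }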
-
Coly Li authored
commit d59b2379 upstream. When bcache does read I/Os, for example in writeback or writethrough mode, if a read request on the cache device fails, bcache will try to recover the request by reading from the cached (backing) device. If the data on the cached device is not synced with the cache device, the requester will get stale data. For a critical storage system like a database, providing stale data from recovery may result in application-level data corruption, which is unacceptable. With this patch, for a failed read request in writeback or writethrough mode, recovery of a recoverable read request only happens when the cache device is clean, that is to say, when all data on the cached device is up to date. For other cache modes in bcache, read requests will never hit cached_dev_read_error(), so they don't need this patch. Please note, because the cache mode can be switched arbitrarily at run time, a writethrough mode might have been switched from a writeback mode. Therefore checking dc->has_dirty in writethrough mode still makes sense. Changelog: V4: Fix parens error pointed out by Michael Lyle. v3: Per a response from Kent Overstreet, he thinks recovering stale data is a bug to fix, and an option to permit it is unnecessary. So in this version the sysfs file is removed. v2: rename sysfs entry from allow_stale_data_on_failure to allow_stale_data_on_failure, and fix the confusing commit log. v1: initial patch posted. [small change to patch comment spelling by mlyle] Signed-off-by:
Coly Li <colyli@suse.de> Signed-off-by:
Michael Lyle <mlyle@lyle.org> Reported-by:
Arne Wolf <awolf@lenovo.com> Reviewed-by:
Michael Lyle <mlyle@lyle.org> Cc: Kent Overstreet <kent.overstreet@gmail.com> Cc: Nix <nix@esperi.org.uk> Cc: Kai Krakow <hurikhan77@gmail.com> Cc: Eric Wheeler <bcache@lists.ewheeler.net> Cc: Junhui Tang <tang.junhui@zte.com.cn> Signed-off-by:
Jens Axboe <axboe@kernel.dk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 05 Dec, 2017 13 commits
-
-
Greg Kroah-Hartman authored
-
Ville Syrjälä authored
commit 56350fb8 upstream. The hardware always writes one or two bytes in the index portion of an indexed transfer. Make sure the message we send as the index doesn't have a zero length. Cc: Daniel Kurtz <djkurtz@chromium.org> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Sean Paul <seanpaul@chromium.org> Fixes: 56f9eac0 ("drm/i915/intel_i2c: use INDEX cycles for i2c read transactions") Signed-off-by:
Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171123194157.25367-3-ville.syrjala@linux.intel.com Reviewed-by:
Chris Wilson <chris@chris-wilson.co.uk> (cherry picked from commit bb9e0d4b ) Signed-off-by:
Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ville Syrjälä authored
commit ae5c631e upstream. We can only specify one slave address for indexed reads/writes. Make sure the messages we check are destined to the same slave address before deciding to do an indexed transfer. Cc: Daniel Kurtz <djkurtz@chromium.org> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Sean Paul <seanpaul@chromium.org> Fixes: 56f9eac0 ("drm/i915/intel_i2c: use INDEX cycles for i2c read transactions") Signed-off-by:
Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171123194157.25367-2-ville.syrjala@linux.intel.com Reviewed-by:
Chris Wilson <chris@chris-wilson.co.uk> (cherry picked from commit c4deb62d ) Signed-off-by:
Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
NeilBrown authored
commit b688741c upstream. For correct close-to-open semantics, NFS must validate the change attribute of a directory (or file) on open. Since commit ecf3d1f1 ("vfs: kill FS_REVAL_DOT by adding a d_weak_revalidate dentry op"), open() of "." or a path ending in ".." is not revalidated reliably (except when that directory is a mount point). Prior to that commit, "." was revalidated using nfs_lookup_revalidate() which checks the LOOKUP_OPEN flag and forces revalidation if the flag is set. Since that commit, nfs_weak_revalidate() is used for NFSv3 (which ignores the flags) and nothing is used for NFSv4. This is fixed by using nfs_lookup_verify_inode() in nfs_weak_revalidate(). This does the revalidation exactly when needed. Also, add a definition of .d_weak_revalidate for NFSv4. The incorrect behavior is easily demonstrated by running "echo *" in some non-mountpoint NFS directory while watching network traffic. Without this patch, "echo *" sometimes doesn't produce any traffic. With the patch it always does. Fixes: ecf3d1f1 ("vfs: kill FS_REVAL_DOT by adding a d_weak_revalidate dentry op") cc: stable@vger.kernel.org (3.9+) Signed-off-by:
NeilBrown <neilb@suse.com> Signed-off-by:
Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jonathan Liu authored
commit f3621a8e upstream. During panel removal or system shutdown panel_simple_disable() is called which disables the panel backlight but the panel is still powered due to missing calls to panel_simple_unprepare(). Fixes: d02fd93e ("drm/panel: simple - Disable panel on shutdown") Signed-off-by:
Jonathan Liu <net147@gmail.com> Signed-off-by:
Thierry Reding <treding@nvidia.com> Link: https://patchwork.freedesktop.org/patch/msgid/20170807115545.27747-1-net147@gmail.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Heiner Kallweit authored
commit d9bcd462 upstream. So far we completely rely on the caller to provide valid arguments. To be on the safe side, perform our own sanity check. Signed-off-by:
Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by:
Bartosz Golaszewski <brgl@bgdev.pl> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paolo Bonzini authored
commit 6ea6e843 upstream. Sometimes, a processor might execute an instruction while another processor is updating the page tables for that instruction's code page, but before the TLB shootdown completes. The interesting case happens if the page is in the TLB. In general, the processor will succeed in executing the instruction and nothing bad happens. However, what if the instruction is an MMIO access? If *that* happens, KVM invokes the emulator, and the emulator gets the updated page tables. If the update side had marked the code page as non present, the page table walk then will fail and so will x86_decode_insn. Unfortunately, even though kvm_fetch_guest_virt is correctly returning X86EMUL_PROPAGATE_FAULT, x86_decode_insn's caller treats the failure as a fatal error if the instruction cannot simply be reexecuted (as is the case for MMIO). And this in fact happened sometimes when rebooting Windows 2012r2 guests. Just checking ctxt->have_exception and injecting the exception if true is enough to fix the case. Thanks to Eduardo Habkost for helping in the debugging of this issue. Reported-by:
Yanan Fu <yfu@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Radim Krčmář <rkrcmar@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Liran Alon authored
commit 61cb57c9 upstream. Instruction emulation after trapping a #UD exception can result in an MMIO access, for example when emulating a MOVBE on a processor that doesn't support the instruction. In this case, the #UD vmexit handler must exit to user mode, but there wasn't any code to do so. Add it for both VMX and SVM. Signed-off-by:
Liran Alon <liran.alon@oracle.com> Reviewed-by:
Nikita Leshenko <nikita.leshchenko@oracle.com> Reviewed-by:
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by:
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by:
Wanpeng Li <wanpeng.li@hotmail.com> Reviewed-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Radim Krčmář <rkrcmar@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Josef Bacik authored
commit 8e138e0d upstream. We discovered a box that had double allocations, and suspected the space cache may be to blame. While auditing the write out path I noticed that if we've already set up the space cache we will just carry on. This means that for any error we hit after cache_save_setup, but before we go to actually write the cache out, we won't reset the inode generation, so whatever was already written will be considered correct, except it'll be stale. Fix this by _always_ resetting the generation on the block group inode; this way we only ever have valid or invalid cache. With this patch I was no longer able to reproduce cache corruption with dm-log-writes and my bpf error injection tool. Signed-off-by:
Josef Bacik <jbacik@fb.com> Signed-off-by:
David Sterba <dsterba@suse.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
chenjie authored
commit 6ea8d958 upstream. MADV_WILLNEED has always been a noop for DAX (formerly XIP) mappings. Unfortunately madvise_willneed() doesn't communicate this information properly to the generic madvise syscall implementation. The calling convention is quite subtle there. madvise_vma() is supposed to either return an error or update &prev, otherwise the main loop will never advance to the next vma and it will keep looping forever without a way to get out of the kernel. It seems this has been broken since its introduction. Nobody has noticed because nobody seems to be using MADV_WILLNEED on these DAX mappings. [mhocko@suse.com: rewrite changelog] Link: http://lkml.kernel.org/r/20171127115318.911-1-guoxuenan@huawei.com Fixes: fe77ba6f ("[PATCH] xip: madvice/fadvice: execute in place") Signed-off-by:
chenjie <chenjie6@huawei.com> Signed-off-by:
guoxuenan <guoxuenan@huawei.com> Acked-by:
Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: zhangyi (F) <yi.zhang@huawei.com> Cc: Miao Xie <miaoxie@huawei.com> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: Shaohua Li <shli@fb.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: David Rientjes <rientjes@google.com> Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com> Cc: Rik van Riel <riel@redhat.com> Cc: Carsten Otte <cotte@de.ibm.com> Cc: Dan Williams <dan.j.williams@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kirill A. Shutemov authored
commit a8f97366 upstream. Currently, we unconditionally make the page table dirty in touch_pmd(). It may result in a false-positive can_follow_write_pmd(). We can avoid the situation by only making the page table entry dirty if the caller asks for write access -- FOLL_WRITE. The patch also changes touch_pud() in the same way. Signed-off-by:
Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Hugh Dickins <hughd@google.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> [Salvatore Bonaccorso: backport for 3.16: - Adjust context - Drop specific part for PUD-sized transparent hugepages. Support for PUD-sized transparent hugepages was added in v4.11-rc1 ] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
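A self-contained sketch of the rule this entry introduces; the types are stand-ins for the kernel's page-table entries. The entry is marked accessed on every touch, but dirtied only when the caller asked for write access.

    #include <stdbool.h>
    #include <stdio.h>

    #define FOLL_WRITE 0x01u   /* local stand-in for the gup flag */

    struct fake_pte {
        bool accessed;
        bool dirty;
    };

    static void touch_entry(struct fake_pte *pte, unsigned int flags)
    {
        pte->accessed = true;          /* always note the access */
        if (flags & FOLL_WRITE)
            pte->dirty = true;         /* but only dirty it for write access */
    }

    int main(void)
    {
        struct fake_pte ro = { 0 }, rw = { 0 };

        touch_entry(&ro, 0);
        touch_entry(&rw, FOLL_WRITE);
        printf("read access: dirty=%d, write access: dirty=%d\n", ro.dirty, rw.dirty);  /* 0, 1 */
        return 0;
    }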
-
Herbert Xu authored
commit 1137b5e2 upstream. An independent security researcher, Mohamed Ghannam, has reported this vulnerability to Beyond Security's SecuriTeam Secure Disclosure program. The xfrm_dump_policy_done function expects xfrm_dump_policy to have been called at least once or it will crash. This can be triggered if a dump fails because the target socket's receive buffer is full. This patch fixes it by using the cb->start mechanism to ensure that the initialisation is always done regardless of the buffer situation. Fixes: 12a169e7 ("ipsec: Put dumpers on the dump list") Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
Steffen Klassert <steffen.klassert@secunet.com> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tom Herbert authored
commit fc9e50f5 upstream. The start callback allows the caller to set up a context for the dump callbacks. Presumably, the context can then be destroyed in the done callback. Signed-off-by:
Tom Herbert <tom@herbertland.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 30 Nov, 2017 2 commits
-
-
Greg Kroah-Hartman authored
-
Juergen Gross authored
[ Upstream commit 639b0881 ] When accessing Xenstore in a transaction the user specifies a transaction id which he normally obtained from Xenstore when starting the transaction. Xenstore validates a transaction id against all known transaction ids of the connection the request came in on. As all requests from a domain other than the one where Xenstore lives share one connection, transaction ids of the different Xenstore users in that domain should be validated by that domain's kernel, which is the multiplexer between the Xenstore users in that domain and Xenstore. In order to prevent one Xenstore user from "hijacking" a transaction of another user, the xenbus driver has to verify a given transaction id against all known transaction ids of the user before forwarding it to Xenstore. Signed-off-by:
Juergen Gross <jgross@suse.com> Reviewed-by:
Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by:
Juergen Gross <jgross@suse.com> Signed-off-by:
Sasha Levin <alexander.levin@verizon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
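A self-contained sketch of the verification described above; the data structures are stand-ins for the xenbus driver's. Before forwarding a transaction-scoped request to Xenstore, the id is checked against the transactions this particular user actually started.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_TRANS 8

    struct user_ctx {
        unsigned int trans_ids[MAX_TRANS];   /* ids handed out to this user */
        int ntrans;
    };

    static bool owns_transaction(const struct user_ctx *u, unsigned int id)
    {
        for (int i = 0; i < u->ntrans; i++)
            if (u->trans_ids[i] == id)
                return true;
        return false;                        /* reject: id belongs to another user (or to nobody) */
    }

    int main(void)
    {
        struct user_ctx u = { .trans_ids = { 17, 42 }, .ntrans = 2 };

        printf("forward id 42? %d\n", owns_transaction(&u, 42));   /* 1: this user's transaction */
        printf("forward id 99? %d\n", owns_transaction(&u, 99));   /* 0: would be hijacking */
        return 0;
    }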
-