This is the start of the stable review cycle for the 6.2.11 release. There are 173 patches in this series, all of which will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 14 Apr 2023 08:28:02 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.2.11-rc1....
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.2.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Linux 6.2.11-rc1
Liam R. Howlett <Liam.Howlett@Oracle.com> mm: enable maple tree RCU mode by default.
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: add RCU lock checking to rcu callback functions
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: add smp_rmb() to dead node detection
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: remove extra smp_wmb() from mas_dead_leaves()
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: fix freeing of nodes in rcu mode
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: detect dead nodes in mas_start()
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: refine ma_state init from mas_start()
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: be more cautious about dead nodes
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: fix mas_prev() and mas_find() state handling
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: fix handle of invalidated state in mas_wr_store_setup()
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: reduce user error potential
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: fix potential rcu issue
Liam R. Howlett <Liam.Howlett@Oracle.com> maple_tree: remove GFP_ZERO from kmem_cache_alloc() and kmem_cache_alloc_bulk()
Alistair Popple <apopple@nvidia.com> mm: take a page reference when removing device exclusive entries
Robert Foss <rfoss@kernel.org> drm/bridge: lt9611: Fix PLL being unable to lock
Tim Huang <tim.huang@amd.com> drm/amdgpu: skip psp suspend for IMU enabled ASICs mode2 reset
Alex Deucher <alexander.deucher@amd.com> drm/amdgpu: for S0ix, skip SDMA 5.x+ suspend/resume
Roman Li <roman.li@amd.com> drm/amd/display: Clear MST topology if it fails to resume
Peng Zhang <zhangpeng.00@bytedance.com> maple_tree: fix a potential concurrency bug in RCU mode
Peng Zhang <zhangpeng.00@bytedance.com> maple_tree: fix get wrong data_end in mtree_lookup_walk()
Peter Xu <peterx@redhat.com> mm/hugetlb: fix uffd wr-protection for CoW optimization path
Rongwei Wang <rongwei.wang@linux.alibaba.com> mm/swap: fix swap_info_struct race between swapoff and get_swap_pages()
Zheng Yejian <zhengyejian1@huawei.com> ring-buffer: Fix race while reader and writer are on the same page
Min Li <lm0963hack@gmail.com> drm/i915: fix race condition UAF in i915_perf_add_config_ioctl
Tvrtko Ursulin <tvrtko.ursulin@intel.com> drm/i915: Fix context runtime accounting
Karol Herbst <kherbst@redhat.com> drm/nouveau/disp: Support more modes by checking with lower bpc
Boris Brezillon <boris.brezillon@collabora.com> drm/panfrost: Fix the panfrost_mmu_map_fault_addr() error path
Jens Axboe <axboe@kernel.dk> ublk: read any SQE values upfront
Felix Fietkau <nbd@nbd.name> wifi: mt76: ignore key disable commands
Lorenzo Bianconi <lorenzo@kernel.org> wifi: mt76: mt7921: fix fw used for offload check for mt7922
Yafang Shao <laoar.shao@gmail.com> mm: vmalloc: avoid warn_alloc noise caused by fatal signal
Sergey Senozhatsky <senozhatsky@chromium.org> zsmalloc: document freeable stats
Steven Rostedt (Google) <rostedt@goodmis.org> tracing/synthetic: Make lastcmd_mutex static
Kan Liang <kan.liang@linux.intel.com> perf/core: Fix the same task check in perf_event_set_output
Peter Zijlstra <peterz@infradead.org> perf: Optimize perf_pmu_migrate_context()
Yu Kuai <yukuai3@huawei.com> block: don't set GD_NEED_PART_SCAN if scan partition failed
Ming Lei <ming.lei@redhat.com> block: ublk: make sure that block size is set correctly
Thiago Rafael Becker <tbecker@redhat.com> cifs: sanitize paths in cifs_update_super_prepath.
Keith Busch <kbusch@kernel.org> nvme: fix discard support without oncs
Zhong Jinghua <zhongjinghua@huawei.com> scsi: iscsi_tcp: Check that sock is valid before iscsi_set_param()
Li Zetao <lizetao1@huawei.com> scsi: qla2xxx: Fix memory leak in qla2x00_probe_one()
Wojciech Lukowicz <wlukowicz01@gmail.com> io_uring: fix memory leak when removing provided buffers
Wojciech Lukowicz <wlukowicz01@gmail.com> io_uring: fix return value when removing provided buffers
Nuno Sá <nuno.sa@analog.com> iio: adc: ad7791: fix IRQ flags
Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com> ASoC: SOF: avoid a NULL dereference with unsupported widgets
Jason Montleon <jmontleo@redhat.com> ASoC: hdac_hdmi: use set_stream() instead of set_tdm_slots()
Jason Gunthorpe <jgg@ziepe.ca> iommufd: Do not corrupt the pfn list when doing batch carry
Jason Gunthorpe <jgg@ziepe.ca> iommufd: Fix unpinning of pages when an access is present
Jason Gunthorpe <jgg@ziepe.ca> iommufd: Check for uptr overflow
Steven Rostedt (Google) <rostedt@goodmis.org> tracing: Free error logs of tracing instances
Daniel Bristot de Oliveira <bristot@kernel.org> tracing/osnoise: Fix notify new tracing_max_latency
Daniel Bristot de Oliveira <bristot@kernel.org> tracing/timerlat: Notify new max thread latency
Tze-nan Wu <Tze-nan.Wu@mediatek.com> tracing/synthetic: Fix races on freeing last_cmd
Song Yoong Siang <yoong.siang.song@intel.com> net: stmmac: Add queue reset into stmmac_xdp_open() function
Hans de Goede <hdegoede@redhat.com> ACPI: video: Add acpi_backlight=video quirk for Lenovo ThinkPad W530
Hans de Goede <hdegoede@redhat.com> ACPI: video: Add acpi_backlight=video quirk for Apple iMac14,1 and iMac14,2
Hans de Goede <hdegoede@redhat.com> ACPI: video: Make acpi_backlight=video work independent from GPU driver
Hans de Goede <hdegoede@redhat.com> ACPI: video: Add auto_detect arg to __acpi_video_get_backlight_type()
Oliver Hartkopp <socketcan@hartkopp.net> can: isotp: isotp_recvmsg(): use sock_recv_cmsgs() to get SOCK_RXQ_OVFL infos
Michal Sojka <michal.sojka@cvut.cz> can: isotp: isotp_ops: fix poll() to not report false EPOLLOUT events
Oliver Hartkopp <socketcan@hartkopp.net> can: isotp: fix race between isotp_sendmsg() and isotp_release()
Oleksij Rempel <linux@rempel-privat.de> can: j1939: j1939_tp_tx_dat_new(): fix out-of-bounds memory access
Christian Brauner <brauner@kernel.org> fs: drop peer group ids under namespace lock
Zheng Yejian <zhengyejian1@huawei.com> ftrace: Fix issue that 'direct->addr' not restored in modify_ftrace_direct()
John Keeping <john@metanate.com> ftrace: Mark get_lock_parent_ip() __always_inline
Keith Busch <kbusch@kernel.org> blk-mq: directly poll requests
William Breathitt Gray <william.gray@linaro.org> counter: 104-quad-8: Fix Synapse action reported for Index signals
William Breathitt Gray <william.gray@linaro.org> counter: 104-quad-8: Fix race condition between FLAG and CNTR reads
Steve Clevenger <scclevenger@os.amperecomputing.com> coresight-etm4: Fix for() loop drvdata->nr_addr_cmp range bug
Suzuki K Poulose <suzuki.poulose@arm.com> coresight: etm4x: Do not access TRCIDR1 for identification
Muchun Song <muchun.song@linux.dev> mm: kfence: fix handling discontiguous page
Muchun Song <muchun.song@linux.dev> mm: kfence: fix PG_slab and memcg_data clearing
Jeremi Piotrowski <jpiotrowski@linux.microsoft.com> KVM: SVM: Flush Hyper-V TLB when required
Sean Christopherson <seanjc@google.com> KVM: nVMX: Do not report error code when synthesizing VM-Exit from Real Mode
Sean Christopherson <seanjc@google.com> KVM: x86: Clear "has_error_code", not "error_code", for RM exception injection
Mario Limonciello <mario.limonciello@amd.com> x86/ACPI/boot: Use FADT version to check support for online capable
Eric DeVolder <eric.devolder@oracle.com> x86/acpi/boot: Correct acpi_is_processor_usable() check
Andy Chi <andy.chi@canonical.com> ALSA: hda/realtek: fix mute/micmute LEDs for a HP ProBook
Jeremy Soller <jeremy@system76.com> ALSA: hda/realtek: Add quirk for Clevo X370SNW
Namjae Jeon <linkinjeon@kernel.org> ksmbd: fix slab-out-of-bounds in init_smb2_rsp_hdr
Marios Makassikis <mmakassikis@freebox.fr> ksmbd: do not call kvmalloc() with __GFP_NORETRY | __GFP_NO_WARN
Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> serial: 8250: Prevent starting up DMA Rx on THRI interrupt
Geert Uytterhoeven <geert+renesas@glider.be> dt-bindings: serial: renesas,scif: Fix 4th IRQ for 4-IRQ SCIFs
Shiyang Ruan <ruansy.fnst@fujitsu.com> fsdax: force clear dirty mark if CoW
Shiyang Ruan <ruansy.fnst@fujitsu.com> fsdax: unshare: zero destination if srcmap is HOLE or UNWRITTEN
Shiyang Ruan <ruansy.fnst@fujitsu.com> fsdax: dedupe should compare the min of two iters' length
Ryusuke Konishi <konishi.ryusuke@gmail.com> nilfs2: fix sysfs interface lifetime
Ryusuke Konishi <konishi.ryusuke@gmail.com> nilfs2: fix potential UAF of struct nilfs_sc_info in nilfs_segctor_thread()
Sherry Sun <sherry.sun@nxp.com> tty: serial: fsl_lpuart: fix crash in lpuart_uport_is_active
Sherry Sun <sherry.sun@nxp.com> tty: serial: fsl_lpuart: avoid checking for transfer complete when UARTCTRL_SBK is asserted in lpuart32_tx_empty
Biju Das <biju.das.jz@bp.renesas.com> tty: serial: sh-sci: Fix Rx on RZ/G2L SCI
Biju Das <biju.das.jz@bp.renesas.com> tty: serial: sh-sci: Fix transmit end interrupt handler
Mårten Lindahl <marten.lindahl@axis.com> iio: light: vcnl4000: Fix WARN_ON on uninitialized lock
Kai-Heng Feng <kai.heng.feng@canonical.com> iio: light: cm32181: Unregister second I2C client if present
Nuno Sá <nuno.sa@analog.com> iio: buffer: make sure O_NONBLOCK is respected
Nuno Sá <nuno.sa@analog.com> iio: buffer: correctly return bytes written in output buffers
Mehdi Djait <mehdi.djait.k@gmail.com> iio: accel: kionix-kx022a: Get the timestamp from the driver's private data in the trigger_handler
Nuno Sá <nuno.sa@analog.com> iio: adc: max11410: fix read_poll_timeout() usage
William Breathitt Gray <william.gray@linaro.org> iio: dac: cio-dac: Fix max DAC write value check for 12-bit
Lars-Peter Clausen <lars@metafoo.de> iio: adc: ti-ads7950: Set `can_sleep` flag for GPIO chip
Andy Shevchenko <andriy.shevchenko@linux.intel.com> iio: adc: qcom-spmi-adc5: Fix the channel name
Arnd Bergmann <arnd@arndb.de> iio: adis16480: select CONFIG_CRC32
Ian Ray <ian.ray@ge.com> drivers: iio: adc: ltc2497: fix LSB shift
Bjørn Mork <bjorn@mork.no> USB: serial: option: add Quectel RM500U-CN modem
Enrico Sau <enrico.sau@gmail.com> USB: serial: option: add Telit FE990 compositions
RD Babiera <rdbabiera@google.com> usb: typec: altmodes/displayport: Fix configure initial pin assignment
Kees Jan Koster <kjkoster@kjkoster.org> USB: serial: cp210x: add Silicon Labs IFS-USB-DATACABLE IDs
Heikki Krogerus <heikki.krogerus@linux.intel.com> usb: dwc3: pci: add support for the Intel Meteor Lake-S
Pawel Laszczak <pawell@cadence.com> usb: cdnsp: Fixes error: uninitialized symbol 'len'
D Scott Phillips <scott@os.amperecomputing.com> xhci: also avoid the XHCI_ZERO_64B_REGS quirk with a passthrough iommu
Mathias Nyman <mathias.nyman@linux.intel.com> xhci: Free the command allocated for setting LPM if we return early
Wayne Chang <waynec@nvidia.com> usb: xhci: tegra: fix sleep in atomic call
Mathias Nyman <mathias.nyman@linux.intel.com> Revert "usb: xhci-pci: Set PROBE_PREFER_ASYNCHRONOUS"
Lukas Wunner <lukas@wunner.de> PCI/DOE: Fix memory leak with CONFIG_DEBUG_OBJECTS=y
Lukas Wunner <lukas@wunner.de> PCI/DOE: Silence WARN splat with CONFIG_DEBUG_OBJECTS=y
Lukas Wunner <lukas@wunner.de> cxl/pci: Handle excessive CDAT length
Lukas Wunner <lukas@wunner.de> cxl/pci: Handle truncated CDAT entries
Lukas Wunner <lukas@wunner.de> cxl/pci: Handle truncated CDAT header
Lukas Wunner <lukas@wunner.de> cxl/pci: Fix CDAT retrieval on big endian
Michael Sit Wei Hong <michael.wei.hong.sit@intel.com> net: stmmac: check fwnode for phy device before scanning for phy
Ard Biesheuvel <ardb@kernel.org> arm64: compat: Work around uninitialized variable warning
Shailend Chand <shailend@google.com> gve: Secure enough bytes in the first TX desc for all TCP pkts
Eric Dumazet <edumazet@google.com> netlink: annotate lockless accesses to nlk->max_recvmsg_len
Andy Roulin <aroulin@nvidia.com> ethtool: reset #lanes when lanes is omitted
Kuniyuki Iwashima <kuniyu@amazon.com> ping: Fix potential NULL deref for /proc/net/icmp.
Kuniyuki Iwashima <kuniyu@amazon.com> raw: Fix NULL deref in raw_get_next().
Eric Dumazet <edumazet@google.com> raw: use net_hash_mix() in hash function
Lingyu Liu <lingyu.liu@intel.com> ice: Reset FDIR counter in FDIR init stage
Simei Su <simei.su@intel.com> ice: fix wrong fallback logic for FDIR
Dai Ngo <dai.ngo@oracle.com> NFSD: callback request does not use correct credential for AUTH_SYS
Jeff Layton <jlayton@kernel.org> sunrpc: only free unix grouplist after RCU settles
Corinna Vinschen <vinschen@redhat.com> net: stmmac: fix up RX flow hash indirection table when setting channels
Siddharth Vadapalli <s-vadapalli@ti.com> net: ethernet: ti: am65-cpsw: Fix mdio cleanup in probe
Dhruva Gole <d-gole@ti.com> gpio: davinci: Add irq chip flag to skip set wake
Dhruva Gole <d-gole@ti.com> gpio: davinci: Do not clear the bank intr enable bit in save_context
Mark Pearson <mpearson-lenovo@squebb.ca> platform/x86: think-lmi: Clean up display of current_value on Thinkstation
Mark Pearson <mpearson-lenovo@squebb.ca> platform/x86: think-lmi: Fix memory leaks when parsing ThinkStation WMI strings
Armin Wolf <W_Armin@gmx.de> platform/x86: think-lmi: Fix memory leak when showing current settings
Ziyang Xuan <william.xuanziyang@huawei.com> ipv6: Fix an uninit variable access bug in __ip6_make_skb()
Sricharan Ramabadhran <quic_srichara@quicinc.com> net: qrtr: Do not do DEL_SERVER broadcast after DEL_CLIENT
Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> drm/i915/huc: Cancel HuC delayed load timer on reset.
Xin Long <lucien.xin@gmail.com> sctp: check send stream number after wait_for_sndbuf
Felix Fietkau <nbd@nbd.name> net: ethernet: mtk_eth_soc: fix remaining throughput regression
Gustav Ekelund <gustaek@axis.com> net: dsa: mv88e6xxx: Reset mv88e6393x force WD event bit
Jakub Kicinski <kuba@kernel.org> net: don't let netpoll invoke NAPI if in xmit context
Takashi Iwai <tiwai@suse.de> ALSA: hda/hdmi: Preserve the previous PCM device upon re-enablement
Eric Dumazet <edumazet@google.com> icmp: guard against too small mtu
Jeff Layton <jlayton@kernel.org> nfsd: call op_release, even when op_func returns an error
Chuck Lever <chuck.lever@oracle.com> NFSD: Avoid calling OPDESC() with ops->opnum == OP_ILLEGAL
Hans de Goede <hdegoede@redhat.com> wifi: brcmfmac: Fix SDIO suspend/resume regression
Andrea Righi <andrea.righi@canonical.com> l2tp: generate correct module alias strings
Michael Sit Wei Hong <michael.wei.hong.sit@intel.com> net: stmmac: remove redundant fixup to support fixed-link mode
Michael Sit Wei Hong <michael.wei.hong.sit@intel.com> net: stmmac: check if MAC needs to attach to a PHY
Michael Sit Wei Hong <michael.wei.hong.sit@intel.com> net: phylink: add phylink_expects_phy() method
Ziyang Xuan <william.xuanziyang@huawei.com> net: qrtr: Fix a refcount bug in qrtr_recvmsg()
Felix Fietkau <nbd@nbd.name> wifi: mac80211: fix invalid drv_sta_pre_rcu_remove calls for non-uploaded sta
Ryder Lee <ryder.lee@mediatek.com> wifi: mac80211: fix the size calculation of ieee80211_ie_len_eht_cap()
Nico Boehr <nrb@linux.ibm.com> KVM: s390: pv: fix external interruption loop not always detected
Srinivas Kandagatla <srinivas.kandagatla@linaro.org> ASoC: codecs: lpass: fix the order of clks turn off during suspend
Uwe Kleine-König <u.kleine-koenig@pengutronix.de> pwm: meson: Explicitly set .polarity in .get_state()
Uwe Kleine-König <u.kleine-koenig@pengutronix.de> pwm: sprd: Explicitly set .polarity in .get_state()
Uwe Kleine-König <u.kleine-koenig@pengutronix.de> pwm: iqs620a: Explicitly set .polarity in .get_state()
Uwe Kleine-König <u.kleine-koenig@pengutronix.de> pwm: cros-ec: Explicitly set .polarity in .get_state()
Uwe Kleine-König <u.kleine-koenig@pengutronix.de> pwm: hibvt: Explicitly set .polarity in .get_state()
Ranjani Sridharan <ranjani.sridharan@linux.intel.com> ASoC: SOF: ipc4: Ensure DSP is in D0I0 during sof_ipc4_set_get_data()
Mohammed Gamal <mgamal@redhat.com> Drivers: vmbus: Check for channel allocation before looking up relids
Randy Dunlap <rdunlap@infradead.org> gpio: GPIO_REGMAP: select REGMAP instead of depending on it
Ville Syrjälä <ville.syrjala@linux.intel.com> drm/i915: Add a .color_post_update() hook
Ville Syrjälä <ville.syrjala@linux.intel.com> drm/i915: Move the DSB setup/cleanup into the color code
Mike Snitzer <snitzer@kernel.org> dm: fix improper splitting for abnormal bios
Heinz Mauelshagen <heinzm@redhat.com> dm: change "unsigned" to "unsigned int"
Jiapeng Chong <jiapeng.chong@linux.alibaba.com> dm integrity: Remove bi_sector that's only used by commented debug code
Joe Thornber <ejt@redhat.com> dm cache: Add some documentation to dm-cache-background-tracker.h
-------------
Diffstat:
 .../devicetree/bindings/serial/renesas,scif.yaml | 4 +-
 Documentation/mm/zsmalloc.rst | 2 +
 Makefile | 4 +-
 arch/arm64/kernel/compat_alignment.c | 32 +-
 arch/s390/kvm/intercept.c | 32 +-
 arch/x86/kernel/acpi/boot.c | 9 +-
 arch/x86/kvm/kvm_onhyperv.h | 5 +
 arch/x86/kvm/svm/svm.c | 37 +-
 arch/x86/kvm/svm/svm_onhyperv.h | 15 +
 arch/x86/kvm/vmx/nested.c | 7 +-
 arch/x86/kvm/x86.c | 11 +-
 block/blk-mq.c | 4 +-
 block/genhd.c | 8 +-
 drivers/acpi/acpi_video.c | 15 +-
 drivers/acpi/video_detect.c | 58 +++-
 drivers/block/ublk_drv.c | 26 +-
 drivers/counter/104-quad-8.c | 31 +-
 drivers/cxl/core/pci.c | 38 +-
 drivers/cxl/cxlpci.h | 14 +
 drivers/gpio/Kconfig | 2 +-
 drivers/gpio/gpio-davinci.c | 5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 18 +
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +
 drivers/gpu/drm/bridge/lontium-lt9611.c | 1 +
 drivers/gpu/drm/i915/display/intel_color.c | 23 ++
 drivers/gpu/drm/i915/display/intel_color.h | 3 +
 drivers/gpu/drm/i915/display/intel_display.c | 28 +-
 drivers/gpu/drm/i915/display/intel_display.h | 8 +
 .../gpu/drm/i915/gt/intel_execlists_submission.c | 12 +-
 drivers/gpu/drm/i915/gt/uc/intel_huc.c | 7 +
 drivers/gpu/drm/i915/gt/uc/intel_huc.h | 7 +-
 drivers/gpu/drm/i915/i915_perf.c | 6 +-
 drivers/gpu/drm/nouveau/dispnv50/disp.c | 32 ++
 drivers/gpu/drm/nouveau/nouveau_dp.c | 8 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 1 +
 drivers/hv/connection.c | 4 +
 drivers/hwtracing/coresight/coresight-etm4x-core.c | 24 +-
 drivers/hwtracing/coresight/coresight-etm4x.h | 20 +-
 drivers/iio/accel/kionix-kx022a.c | 2 +-
 drivers/iio/adc/ad7791.c | 2 +-
 drivers/iio/adc/ltc2497.c | 6 +-
 drivers/iio/adc/max11410.c | 22 +-
 drivers/iio/adc/qcom-spmi-adc5.c | 10 +-
 drivers/iio/adc/ti-ads7950.c | 1 +
 drivers/iio/dac/cio-dac.c | 4 +-
 drivers/iio/imu/Kconfig | 1 +
 drivers/iio/industrialio-buffer.c | 21 +-
 drivers/iio/light/cm32181.c | 12 +
 drivers/iio/light/vcnl4000.c | 3 +-
 drivers/iommu/iommufd/pages.c | 16 +-
 drivers/md/dm-bio-prison-v1.c | 10 +-
 drivers/md/dm-bio-prison-v2.c | 12 +-
 drivers/md/dm-bio-prison-v2.h | 10 +-
 drivers/md/dm-bufio.c | 58 ++--
 drivers/md/dm-cache-background-tracker.c | 8 +-
 drivers/md/dm-cache-background-tracker.h | 46 ++-
 drivers/md/dm-cache-metadata.c | 40 +--
 drivers/md/dm-cache-metadata.h | 4 +-
 drivers/md/dm-cache-policy-internal.h | 10 +-
 drivers/md/dm-cache-policy-smq.c | 163 ++++-----
 drivers/md/dm-cache-policy.c | 2 +-
 drivers/md/dm-cache-policy.h | 4 +-
 drivers/md/dm-cache-target.c | 50 +--
 drivers/md/dm-core.h | 6 +-
 drivers/md/dm-crypt.c | 48 +--
 drivers/md/dm-delay.c | 6 +-
 drivers/md/dm-ebs-target.c | 2 +-
 drivers/md/dm-era-target.c | 32 +-
 drivers/md/dm-exception-store.c | 6 +-
 drivers/md/dm-exception-store.h | 18 +-
 drivers/md/dm-flakey.c | 22 +-
 drivers/md/dm-integrity.c | 328 +++++++++---------
 drivers/md/dm-io-rewind.c | 4 +-
 drivers/md/dm-io.c | 32 +-
 drivers/md/dm-ioctl.c | 18 +-
 drivers/md/dm-kcopyd.c | 30 +-
 drivers/md/dm-linear.c | 2 +-
 drivers/md/dm-log-userspace-base.c | 6 +-
 drivers/md/dm-log-userspace-transfer.c | 2 +-
 drivers/md/dm-log-writes.c | 10 +-
 drivers/md/dm-log.c | 10 +-
 drivers/md/dm-mpath.c | 46 +--
 drivers/md/dm-mpath.h | 2 +-
 drivers/md/dm-path-selector.h | 2 +-
 drivers/md/dm-ps-io-affinity.c | 4 +-
 drivers/md/dm-ps-queue-length.c | 10 +-
 drivers/md/dm-ps-round-robin.c | 6 +-
 drivers/md/dm-ps-service-time.c | 14 +-
 drivers/md/dm-raid.c | 2 +-
 drivers/md/dm-raid1.c | 22 +-
 drivers/md/dm-region-hash.c | 22 +-
 drivers/md/dm-rq.c | 16 +-
 drivers/md/dm-rq.h | 2 +-
 drivers/md/dm-snap-persistent.c | 8 +-
 drivers/md/dm-snap-transient.c | 6 +-
 drivers/md/dm-snap.c | 34 +-
 drivers/md/dm-stats.c | 74 ++--
 drivers/md/dm-stats.h | 6 +-
 drivers/md/dm-stripe.c | 10 +-
 drivers/md/dm-switch.c | 46 +--
 drivers/md/dm-table.c | 25 +-
 drivers/md/dm-thin-metadata.c | 24 +-
 drivers/md/dm-thin.c | 46 +--
 drivers/md/dm-uevent.c | 4 +-
 drivers/md/dm-uevent.h | 4 +-
 drivers/md/dm-verity-fec.c | 30 +-
 drivers/md/dm-verity-fec.h | 18 +-
 drivers/md/dm-verity-target.c | 30 +-
 drivers/md/dm-verity.h | 8 +-
 drivers/md/dm-writecache.c | 80 ++---
 drivers/md/dm.c | 55 ++-
 drivers/md/dm.h | 4 +-
 drivers/md/persistent-data/dm-array.c | 69 ++--
 drivers/md/persistent-data/dm-array.h | 2 +-
 drivers/md/persistent-data/dm-bitset.c | 12 +-
 drivers/md/persistent-data/dm-block-manager.c | 16 +-
 drivers/md/persistent-data/dm-block-manager.h | 6 +-
 drivers/md/persistent-data/dm-btree-remove.c | 46 +--
 drivers/md/persistent-data/dm-btree-spine.c | 4 +-
 drivers/md/persistent-data/dm-btree.c | 98 +++---
 drivers/md/persistent-data/dm-btree.h | 12 +-
 .../persistent-data/dm-persistent-data-internal.h | 6 +-
 drivers/md/persistent-data/dm-space-map-common.c | 28 +-
 drivers/md/persistent-data/dm-space-map-metadata.c | 20 +-
 .../md/persistent-data/dm-transaction-manager.c | 16 +-
 .../md/persistent-data/dm-transaction-manager.h | 2 +-
 drivers/net/dsa/mv88e6xxx/chip.c | 2 +-
 drivers/net/dsa/mv88e6xxx/global2.c | 20 ++
 drivers/net/dsa/mv88e6xxx/global2.h | 1 +
 drivers/net/ethernet/google/gve/gve.h | 2 +
 drivers/net/ethernet/google/gve/gve_tx.c | 12 +-
 drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c | 23 +-
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 4 +
 drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c | 1 -
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 21 +-
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 6 +-
 drivers/net/phy/phylink.c | 19 +
 .../wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c | 36 +-
 .../wireless/broadcom/brcm80211/brcmfmac/sdio.h | 2 +
 drivers/net/wireless/mediatek/mt76/mt7603/main.c | 10 +-
 drivers/net/wireless/mediatek/mt76/mt7615/mac.c | 70 ++--
 drivers/net/wireless/mediatek/mt76/mt7615/main.c | 15 +-
 drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h | 6 +-
 drivers/net/wireless/mediatek/mt76/mt76x02_util.c | 18 +-
 drivers/net/wireless/mediatek/mt76/mt7915/main.c | 13 +-
 drivers/net/wireless/mediatek/mt76/mt7921/main.c | 13 +-
 drivers/net/wireless/mediatek/mt76/mt7921/pci.c | 2 +-
 drivers/net/wireless/mediatek/mt76/mt7996/main.c | 13 +-
 drivers/nvme/host/core.c | 6 +-
 drivers/pci/doe.c | 30 +-
 drivers/platform/x86/think-lmi.c | 20 +-
 drivers/pwm/pwm-cros-ec.c | 1 +
 drivers/pwm/pwm-hibvt.c | 1 +
 drivers/pwm/pwm-iqs620a.c | 1 +
 drivers/pwm/pwm-meson.c | 8 +
 drivers/pwm/pwm-sprd.c | 1 +
 drivers/scsi/iscsi_tcp.c | 3 +-
 drivers/scsi/qla2xxx/qla_os.c | 1 +
 drivers/tty/serial/8250/8250_port.c | 11 +
 drivers/tty/serial/fsl_lpuart.c | 10 +-
 drivers/tty/serial/sh-sci.c | 10 +-
 drivers/usb/cdns3/cdnsp-ep0.c | 3 +-
 drivers/usb/dwc3/dwc3-pci.c | 4 +
 drivers/usb/host/xhci-pci.c | 7 +-
 drivers/usb/host/xhci-tegra.c | 6 +-
 drivers/usb/host/xhci.c | 7 +-
 drivers/usb/serial/cp210x.c | 1 +
 drivers/usb/serial/option.c | 10 +
 drivers/usb/typec/altmodes/displayport.c | 6 +-
 fs/cifs/fs_context.c | 13 +-
 fs/cifs/fs_context.h | 3 +
 fs/cifs/misc.c | 2 +-
 fs/dax.c | 52 ++-
 fs/ksmbd/connection.c | 5 +-
 fs/ksmbd/server.c | 5 +-
 fs/ksmbd/smb2pdu.c | 3 -
 fs/ksmbd/smb_common.c | 138 ++++++--
 fs/ksmbd/smb_common.h | 2 +-
 fs/namespace.c | 2 +-
 fs/nfsd/blocklayout.c | 1 +
 fs/nfsd/nfs4callback.c | 4 +-
 fs/nfsd/nfs4xdr.c | 15 +-
 fs/nilfs2/segment.c | 3 +-
 fs/nilfs2/super.c | 2 +
 fs/nilfs2/the_nilfs.c | 12 +-
 include/acpi/video.h | 15 +-
 include/linux/device-mapper.h | 38 +-
 include/linux/dm-bufio.h | 12 +-
 include/linux/dm-dirty-log.h | 6 +-
 include/linux/dm-io.h | 8 +-
 include/linux/dm-kcopyd.h | 22 +-
 include/linux/dm-region-hash.h | 2 +-
 include/linux/ftrace.h | 2 +-
 include/linux/mm_types.h | 3 +-
 include/linux/pci-doe.h | 8 +-
 include/linux/phylink.h | 1 +
 include/net/raw.h | 15 +-
 io_uring/io_uring.c | 2 +-
 io_uring/kbuf.c | 7 +-
 kernel/events/core.c | 14 +-
 kernel/fork.c | 3 +
 kernel/trace/ftrace.c | 15 +-
 kernel/trace/ring_buffer.c | 13 +-
 kernel/trace/trace.c | 1 +
 kernel/trace/trace_events_synth.c | 19 +-
 kernel/trace/trace_osnoise.c | 4 +-
 lib/maple_tree.c | 383 +++++++++++++--------
 mm/hugetlb.c | 14 +-
 mm/kfence/core.c | 32 +-
 mm/memory.c | 16 +-
 mm/mmap.c | 3 +-
 mm/swapfile.c | 3 +-
 mm/vmalloc.c | 8 +-
 net/can/isotp.c | 74 ++--
 net/can/j1939/transport.c | 5 +-
 net/core/netpoll.c | 19 +-
 net/ethtool/linkmodes.c | 7 +-
 net/ipv4/icmp.c | 5 +
 net/ipv4/ping.c | 8 +-
 net/ipv4/raw.c | 49 +--
 net/ipv4/raw_diag.c | 10 +-
 net/ipv6/ip6_output.c | 7 +-
 net/ipv6/raw.c | 14 +-
 net/l2tp/l2tp_ip.c | 8 +-
 net/l2tp/l2tp_ip6.c | 8 +-
 net/mac80211/sta_info.c | 3 +-
 net/mac80211/util.c | 2 +-
 net/netlink/af_netlink.c | 15 +-
 net/qrtr/af_qrtr.c | 2 +
 net/qrtr/ns.c | 15 +-
 net/sctp/socket.c | 4 +
 net/sunrpc/svcauth_unix.c | 17 +-
 sound/pci/hda/patch_hdmi.c | 11 +
 sound/pci/hda/patch_realtek.c | 2 +
 sound/soc/codecs/hdac_hdmi.c | 17 +-
 sound/soc/codecs/lpass-rx-macro.c | 4 +-
 sound/soc/codecs/lpass-tx-macro.c | 4 +-
 sound/soc/codecs/lpass-wsa-macro.c | 4 +-
 sound/soc/sof/ipc4-topology.c | 8 +
 sound/soc/sof/ipc4.c | 8 +
 tools/testing/radix-tree/maple.c | 18 +-
 241 files changed, 2637 insertions(+), 1757 deletions(-)
From: Joe Thornber <ejt@redhat.com>
[ Upstream commit 22c40e134c4c7a828ac09d25a5a8597b1e45c031 ]
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Stable-dep-of: f7b58a69fad9 ("dm: fix improper splitting for abnormal bios")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/md/dm-cache-background-tracker.h | 40 ++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 3 deletions(-)
diff --git a/drivers/md/dm-cache-background-tracker.h b/drivers/md/dm-cache-background-tracker.h
index 27ab90dbc2752..b5056e8275c15 100644
--- a/drivers/md/dm-cache-background-tracker.h
+++ b/drivers/md/dm-cache-background-tracker.h
@@ -12,19 +12,44 @@
 
 /*----------------------------------------------------------------*/
 
+/*
+ * The cache policy decides what background work should be performed,
+ * such as promotions, demotions and writebacks. The core cache target
+ * is in charge of performing the work, and does so when it sees fit.
+ *
+ * The background_tracker acts as a go between. Keeping track of future
+ * work that the policy has decided upon, and handing (issuing) it to
+ * the core target when requested.
+ *
+ * There is no locking in this, so calls will probably need to be
+ * protected with a spinlock.
+ */
+
 struct background_work;
 struct background_tracker;
 
 /*
- * FIXME: discuss lack of locking in all methods.
+ * Create a new tracker, it will not be able to queue more than
+ * 'max_work' entries.
  */
 struct background_tracker *btracker_create(unsigned max_work);
+
+/*
+ * Destroy the tracker. No issued, but not complete, work should
+ * exist when this is called. It is fine to have queued but unissued
+ * work.
+ */
 void btracker_destroy(struct background_tracker *b);
 
 unsigned btracker_nr_writebacks_queued(struct background_tracker *b);
 unsigned btracker_nr_demotions_queued(struct background_tracker *b);
 
 /*
+ * Queue some work within the tracker. 'work' should point to the work
+ * to queue, this will be copied (ownership doesn't pass). If pwork
+ * is not NULL then it will be set to point to the tracker's internal
+ * copy of the work.
+ *
  * returns -EINVAL iff the work is already queued. -ENOMEM if the work
  * couldn't be queued for another reason.
  */
@@ -33,11 +58,20 @@ int btracker_queue(struct background_tracker *b,
 		   struct policy_work **pwork);
 
 /*
+ * Hands out the next piece of work to be performed.
  * Returns -ENODATA if there's no work.
  */
 int btracker_issue(struct background_tracker *b, struct policy_work **work);
-void btracker_complete(struct background_tracker *b,
-		       struct policy_work *op);
+
+/*
+ * Informs the tracker that the work has been completed and it may forget
+ * about it.
+ */
+void btracker_complete(struct background_tracker *b, struct policy_work *op);
+
+/*
+ * Predicate to see if an origin block is already scheduled for promotion.
+ */
 bool btracker_promotion_already_present(struct background_tracker *b,
 					dm_oblock_t oblock);
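[ Editorial aside: for readers new to this interface, the comments added above pin down a small life cycle: work is queued (and copied) by the policy, issued to the core target, then completed, with all locking left to the caller. The following is a minimal userspace sketch of that contract, not the kernel API; every name in it (toy_tracker, toy_queue() and friends) is hypothetical, and a fixed four-slot table stands in for 'max_work'. ]

/*
 * Hedged sketch of the queue -> issue -> complete life cycle described
 * by the new dm-cache-background-tracker.h comments. The tracker does
 * no locking of its own, so the caller serializes every call, mirroring
 * the "protected with a spinlock" note above.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define TOY_MAX_WORK 4			/* stands in for 'max_work' */

struct toy_work { int oblock; int queued; int issued; };

struct toy_tracker { struct toy_work slots[TOY_MAX_WORK]; };

static int toy_queue(struct toy_tracker *t, const struct toy_work *w,
		     struct toy_work **pwork)
{
	int i;

	for (i = 0; i < TOY_MAX_WORK; i++)
		if (t->slots[i].queued && t->slots[i].oblock == w->oblock)
			return -EINVAL;	/* already queued, as documented */
	for (i = 0; i < TOY_MAX_WORK; i++) {
		if (!t->slots[i].queued) {
			t->slots[i] = *w;	/* copied: ownership doesn't pass */
			t->slots[i].queued = 1;
			if (pwork)
				*pwork = &t->slots[i]; /* tracker's internal copy */
			return 0;
		}
	}
	return -ENOMEM;	/* couldn't be queued for another reason */
}

static int toy_issue(struct toy_tracker *t, struct toy_work **w)
{
	int i;

	for (i = 0; i < TOY_MAX_WORK; i++) {
		if (t->slots[i].queued && !t->slots[i].issued) {
			t->slots[i].issued = 1;
			*w = &t->slots[i];
			return 0;
		}
	}
	return -ENODATA;	/* no work, as documented */
}

static void toy_complete(struct toy_tracker *t, struct toy_work *w)
{
	(void)t;
	memset(w, 0, sizeof(*w));	/* the tracker may now forget it */
}

int main(void)
{
	pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* caller-side locking */
	struct toy_tracker t = { 0 };
	struct toy_work promo = { .oblock = 42 }, *issued;

	pthread_mutex_lock(&lock);
	toy_queue(&t, &promo, NULL);
	printf("requeue: %d\n", toy_queue(&t, &promo, NULL)); /* prints -EINVAL */
	if (toy_issue(&t, &issued) == 0)
		toy_complete(&t, issued); /* issued work done before destroy */
	pthread_mutex_unlock(&lock);
	return 0;
}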
From: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
[ Upstream commit 5cd6d1d53a1f74222e73d8b42ab7ecf28ee2f34f ]
drivers/md/dm-integrity.c:1738:13: warning: variable 'bi_sector' set but not used.
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3895
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Stable-dep-of: f7b58a69fad9 ("dm: fix improper splitting for abnormal bios")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/md/dm-integrity.c | 7 -------
 1 file changed, 7 deletions(-)
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 1388ee35571e0..c62c21aadf329 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -1735,7 +1735,6 @@ static void integrity_metadata(struct work_struct *w)
 	}
 
 	if (unlikely(dio->op == REQ_OP_DISCARD)) {
-		sector_t bi_sector = dio->bio_details.bi_iter.bi_sector;
 		unsigned bi_size = dio->bio_details.bi_iter.bi_size;
 		unsigned max_size = likely(checksums != checksums_onstack) ? PAGE_SIZE : HASH_MAX_DIGESTSIZE;
 		unsigned max_blocks = max_size / ic->tag_size;
@@ -1752,13 +1751,7 @@ static void integrity_metadata(struct work_struct *w)
 				goto error;
 			}
 
-			/*if (bi_size < this_step_blocks << (SECTOR_SHIFT + ic->sb->log2_sectors_per_block)) {
-				printk("BUGG: bi_sector: %llx, bi_size: %u\n", bi_sector, bi_size);
-				printk("BUGG: this_step_blocks: %u\n", this_step_blocks);
-				BUG();
-			}*/
 			bi_size -= this_step_blocks << (SECTOR_SHIFT + ic->sb->log2_sectors_per_block);
-			bi_sector += this_step_blocks << ic->sb->log2_sectors_per_block;
 		}
 
 		if (likely(checksums != checksums_onstack))
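[ Editorial aside: for anyone chasing similar robot reports, the deleted local only triggered a diagnostic because its last reader was the commented-out debug block. A self-contained reduction (hypothetical file and values, assuming gcc where -Wall enables -Wunused-but-set-variable) reproduces the warning quoted in the commit message: ]

/* Build with: gcc -Wall demo.c
 * Like 'bi_sector' above, the local is advanced on every loop
 * iteration but never read again once the debug code is gone. */
#include <stdio.h>

int main(void)
{
	unsigned bi_size = 4096, step = 512;
	unsigned bi_sector = 0;		/* warning: set but not used */

	while (bi_size >= step) {
		bi_size -= step;
		bi_sector += step;	/* only ever written, never read */
	}
	printf("%u\n", bi_size);	/* deleting bi_sector is the whole fix */
	return 0;
}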
From: Heinz Mauelshagen <heinzm@redhat.com>
[ Upstream commit 86a3238c7b9b759cb864f4f768ab2e24687dc0e6 ]
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Stable-dep-of: f7b58a69fad9 ("dm: fix improper splitting for abnormal bios")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/md/dm-bio-prison-v1.c | 10 +-
 drivers/md/dm-bio-prison-v2.c | 12 +-
 drivers/md/dm-bio-prison-v2.h | 10 +-
 drivers/md/dm-bufio.c | 58 ++--
 drivers/md/dm-cache-background-tracker.c | 8 +-
 drivers/md/dm-cache-background-tracker.h | 6 +-
 drivers/md/dm-cache-metadata.c | 40 +--
 drivers/md/dm-cache-metadata.h | 4 +-
 drivers/md/dm-cache-policy-internal.h | 10 +-
 drivers/md/dm-cache-policy-smq.c | 163 ++++-----
 drivers/md/dm-cache-policy.c | 2 +-
 drivers/md/dm-cache-policy.h | 4 +-
 drivers/md/dm-cache-target.c | 50 +--
 drivers/md/dm-core.h | 6 +-
 drivers/md/dm-crypt.c | 48 +--
 drivers/md/dm-delay.c | 6 +-
 drivers/md/dm-ebs-target.c | 2 +-
 drivers/md/dm-era-target.c | 32 +-
 drivers/md/dm-exception-store.c | 6 +-
 drivers/md/dm-exception-store.h | 18 +-
 drivers/md/dm-flakey.c | 22 +-
 drivers/md/dm-integrity.c | 321 +++++++++---------
 drivers/md/dm-io-rewind.c | 4 +-
 drivers/md/dm-io.c | 32 +-
 drivers/md/dm-ioctl.c | 18 +-
 drivers/md/dm-kcopyd.c | 30 +-
 drivers/md/dm-linear.c | 2 +-
 drivers/md/dm-log-userspace-base.c | 6 +-
 drivers/md/dm-log-userspace-transfer.c | 2 +-
 drivers/md/dm-log-writes.c | 10 +-
 drivers/md/dm-log.c | 10 +-
 drivers/md/dm-mpath.c | 46 +--
 drivers/md/dm-mpath.h | 2 +-
 drivers/md/dm-path-selector.h | 2 +-
 drivers/md/dm-ps-io-affinity.c | 4 +-
 drivers/md/dm-ps-queue-length.c | 10 +-
 drivers/md/dm-ps-round-robin.c | 6 +-
 drivers/md/dm-ps-service-time.c | 14 +-
 drivers/md/dm-raid.c | 2 +-
 drivers/md/dm-raid1.c | 22 +-
 drivers/md/dm-region-hash.c | 22 +-
 drivers/md/dm-rq.c | 16 +-
 drivers/md/dm-rq.h | 2 +-
 drivers/md/dm-snap-persistent.c | 8 +-
 drivers/md/dm-snap-transient.c | 6 +-
 drivers/md/dm-snap.c | 34 +-
 drivers/md/dm-stats.c | 74 ++--
 drivers/md/dm-stats.h | 6 +-
 drivers/md/dm-stripe.c | 10 +-
 drivers/md/dm-switch.c | 46 +--
 drivers/md/dm-table.c | 25 +-
 drivers/md/dm-thin-metadata.c | 24 +-
 drivers/md/dm-thin.c | 46 +--
 drivers/md/dm-uevent.c | 4 +-
 drivers/md/dm-uevent.h | 4 +-
 drivers/md/dm-verity-fec.c | 30 +-
 drivers/md/dm-verity-fec.h | 18 +-
 drivers/md/dm-verity-target.c | 30 +-
 drivers/md/dm-verity.h | 8 +-
 drivers/md/dm-writecache.c | 80 ++---
 drivers/md/dm.c | 50 ++-
 drivers/md/dm.h | 4 +-
 drivers/md/persistent-data/dm-array.c | 69 ++--
 drivers/md/persistent-data/dm-array.h | 2 +-
 drivers/md/persistent-data/dm-bitset.c | 12 +-
 drivers/md/persistent-data/dm-block-manager.c | 16 +-
 drivers/md/persistent-data/dm-block-manager.h | 6 +-
 drivers/md/persistent-data/dm-btree-remove.c | 46 +--
 drivers/md/persistent-data/dm-btree-spine.c | 4 +-
 drivers/md/persistent-data/dm-btree.c | 98 +++---
 drivers/md/persistent-data/dm-btree.h | 12 +-
 .../dm-persistent-data-internal.h | 6 +-
 .../md/persistent-data/dm-space-map-common.c | 28 +-
 .../persistent-data/dm-space-map-metadata.c | 20 +-
 .../persistent-data/dm-transaction-manager.c | 16 +-
 .../persistent-data/dm-transaction-manager.h | 2 +-
 include/linux/device-mapper.h | 38 +--
 include/linux/dm-bufio.h | 12 +-
 include/linux/dm-dirty-log.h | 6 +-
 include/linux/dm-io.h | 8 +-
 include/linux/dm-kcopyd.h | 22 +-
 include/linux/dm-region-hash.h | 2 +-
 82 files changed, 1016 insertions(+), 1016 deletions(-)
diff --git a/drivers/md/dm-bio-prison-v1.c b/drivers/md/dm-bio-prison-v1.c
index 1f8f98efd97a0..138067abe14b7 100644
--- a/drivers/md/dm-bio-prison-v1.c
+++ b/drivers/md/dm-bio-prison-v1.c
@@ -285,14 +285,14 @@ EXPORT_SYMBOL_GPL(dm_cell_promote_or_release);
 
 struct dm_deferred_entry {
 	struct dm_deferred_set *ds;
-	unsigned count;
+	unsigned int count;
 	struct list_head work_items;
 };
 
 struct dm_deferred_set {
 	spinlock_t lock;
-	unsigned current_entry;
-	unsigned sweeper;
+	unsigned int current_entry;
+	unsigned int sweeper;
 	struct dm_deferred_entry entries[DEFERRED_SET_SIZE];
 };
 
@@ -338,7 +338,7 @@ struct dm_deferred_entry *dm_deferred_entry_inc(struct dm_deferred_set *ds)
 }
 EXPORT_SYMBOL_GPL(dm_deferred_entry_inc);
 
-static unsigned ds_next(unsigned index)
+static unsigned int ds_next(unsigned int index)
 {
 	return (index + 1) % DEFERRED_SET_SIZE;
 }
@@ -373,7 +373,7 @@ EXPORT_SYMBOL_GPL(dm_deferred_entry_dec);
 int dm_deferred_set_add_work(struct dm_deferred_set *ds, struct list_head *work)
 {
 	int r = 1;
-	unsigned next_entry;
+	unsigned int next_entry;
 
 	spin_lock_irq(&ds->lock);
 	if ((ds->sweeper == ds->current_entry) &&
diff --git a/drivers/md/dm-bio-prison-v2.c b/drivers/md/dm-bio-prison-v2.c
index 9dec3b61cf70a..0cc0d13c40e51 100644
--- a/drivers/md/dm-bio-prison-v2.c
+++ b/drivers/md/dm-bio-prison-v2.c
@@ -148,7 +148,7 @@ static bool __find_or_insert(struct dm_bio_prison_v2 *prison,
 
 static bool __get(struct dm_bio_prison_v2 *prison,
 		  struct dm_cell_key_v2 *key,
-		  unsigned lock_level,
+		  unsigned int lock_level,
 		  struct bio *inmate,
 		  struct dm_bio_prison_cell_v2 *cell_prealloc,
 		  struct dm_bio_prison_cell_v2 **cell)
@@ -171,7 +171,7 @@ static bool __get(struct dm_bio_prison_v2 *prison,
 
 bool dm_cell_get_v2(struct dm_bio_prison_v2 *prison,
 		    struct dm_cell_key_v2 *key,
-		    unsigned lock_level,
+		    unsigned int lock_level,
 		    struct bio *inmate,
 		    struct dm_bio_prison_cell_v2 *cell_prealloc,
 		    struct dm_bio_prison_cell_v2 **cell_result)
@@ -224,7 +224,7 @@ EXPORT_SYMBOL_GPL(dm_cell_put_v2);
 
 static int __lock(struct dm_bio_prison_v2 *prison,
 		  struct dm_cell_key_v2 *key,
-		  unsigned lock_level,
+		  unsigned int lock_level,
 		  struct dm_bio_prison_cell_v2 *cell_prealloc,
 		  struct dm_bio_prison_cell_v2 **cell_result)
 {
@@ -255,7 +255,7 @@ static int __lock(struct dm_bio_prison_v2 *prison,
 
 int dm_cell_lock_v2(struct dm_bio_prison_v2 *prison,
 		    struct dm_cell_key_v2 *key,
-		    unsigned lock_level,
+		    unsigned int lock_level,
 		    struct dm_bio_prison_cell_v2 *cell_prealloc,
 		    struct dm_bio_prison_cell_v2 **cell_result)
 {
@@ -291,7 +291,7 @@ EXPORT_SYMBOL_GPL(dm_cell_quiesce_v2);
 
 static int __promote(struct dm_bio_prison_v2 *prison,
 		     struct dm_bio_prison_cell_v2 *cell,
-		     unsigned new_lock_level)
+		     unsigned int new_lock_level)
 {
 	if (!cell->exclusive_lock)
 		return -EINVAL;
@@ -302,7 +302,7 @@ static int __promote(struct dm_bio_prison_v2 *prison,
 
 int dm_cell_lock_promote_v2(struct dm_bio_prison_v2 *prison,
 			    struct dm_bio_prison_cell_v2 *cell,
-			    unsigned new_lock_level)
+			    unsigned int new_lock_level)
 {
 	int r;
 
diff --git a/drivers/md/dm-bio-prison-v2.h b/drivers/md/dm-bio-prison-v2.h
index 6e04234268db3..5a7d996bbbd80 100644
--- a/drivers/md/dm-bio-prison-v2.h
+++ b/drivers/md/dm-bio-prison-v2.h
@@ -44,8 +44,8 @@ struct dm_cell_key_v2 {
 struct dm_bio_prison_cell_v2 {
 	// FIXME: pack these
 	bool exclusive_lock;
-	unsigned exclusive_level;
-	unsigned shared_count;
+	unsigned int exclusive_level;
+	unsigned int shared_count;
 	struct work_struct *quiesce_continuation;
 
 	struct rb_node node;
@@ -86,7 +86,7 @@ void dm_bio_prison_free_cell_v2(struct dm_bio_prison_v2 *prison,
  */
 bool dm_cell_get_v2(struct dm_bio_prison_v2 *prison,
 		    struct dm_cell_key_v2 *key,
-		    unsigned lock_level,
+		    unsigned int lock_level,
 		    struct bio *inmate,
 		    struct dm_bio_prison_cell_v2 *cell_prealloc,
 		    struct dm_bio_prison_cell_v2 **cell_result);
@@ -114,7 +114,7 @@ bool dm_cell_put_v2(struct dm_bio_prison_v2 *prison,
  */
 int dm_cell_lock_v2(struct dm_bio_prison_v2 *prison,
 		    struct dm_cell_key_v2 *key,
-		    unsigned lock_level,
+		    unsigned int lock_level,
 		    struct dm_bio_prison_cell_v2 *cell_prealloc,
 		    struct dm_bio_prison_cell_v2 **cell_result);
 
@@ -132,7 +132,7 @@ void dm_cell_quiesce_v2(struct dm_bio_prison_v2 *prison,
  */
 int dm_cell_lock_promote_v2(struct dm_bio_prison_v2 *prison,
 			    struct dm_bio_prison_cell_v2 *cell,
-			    unsigned new_lock_level);
+			    unsigned int new_lock_level);
 
 /*
  * Adds any held bios to the bio list.
diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 19caaf684ee34..382c5cc471952 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -89,7 +89,7 @@ struct dm_bufio_client {
 	unsigned long n_buffers[LIST_SIZE];
 
 	struct block_device *bdev;
-	unsigned block_size;
+	unsigned int block_size;
 	s8 sectors_per_block_bits;
 	void (*alloc_callback)(struct dm_buffer *);
 	void (*write_callback)(struct dm_buffer *);
@@ -98,9 +98,9 @@ struct dm_bufio_client {
 	struct dm_io_client *dm_io;
 
 	struct list_head reserved_buffers;
-	unsigned need_reserved_buffers;
+	unsigned int need_reserved_buffers;
 
-	unsigned minimum_buffers;
+	unsigned int minimum_buffers;
 
 	struct rb_root buffer_tree;
 	wait_queue_head_t free_buffer_wait;
@@ -145,14 +145,14 @@ struct dm_buffer {
 	unsigned char list_mode;		/* LIST_* */
 	blk_status_t read_error;
 	blk_status_t write_error;
-	unsigned accessed;
-	unsigned hold_count;
+	unsigned int accessed;
+	unsigned int hold_count;
 	unsigned long state;
 	unsigned long last_accessed;
-	unsigned dirty_start;
-	unsigned dirty_end;
-	unsigned write_start;
-	unsigned write_end;
+	unsigned int dirty_start;
+	unsigned int dirty_end;
+	unsigned int write_start;
+	unsigned int write_end;
 	struct dm_bufio_client *c;
 	struct list_head write_list;
 	void (*end_io)(struct dm_buffer *, blk_status_t);
@@ -220,7 +220,7 @@ static unsigned long global_num = 0;
 /*
  * Buffers are freed after this timeout
  */
-static unsigned dm_bufio_max_age = DM_BUFIO_DEFAULT_AGE_SECS;
+static unsigned int dm_bufio_max_age = DM_BUFIO_DEFAULT_AGE_SECS;
 static unsigned long dm_bufio_retain_bytes = DM_BUFIO_DEFAULT_RETAIN_BYTES;
 
 static unsigned long dm_bufio_peak_allocated;
@@ -438,7 +438,7 @@ static void *alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask,
 	 * as if GFP_NOIO was specified.
 	 */
 	if (gfp_mask & __GFP_NORETRY) {
-		unsigned noio_flag = memalloc_noio_save();
+		unsigned int noio_flag = memalloc_noio_save();
 		void *ptr = __vmalloc(c->block_size, gfp_mask);
 
 		memalloc_noio_restore(noio_flag);
@@ -591,7 +591,7 @@ static void dmio_complete(unsigned long error, void *context)
 }
 
 static void use_dmio(struct dm_buffer *b, enum req_op op, sector_t sector,
-		     unsigned n_sectors, unsigned offset)
+		     unsigned int n_sectors, unsigned int offset)
 {
 	int r;
 	struct dm_io_request io_req = {
@@ -629,11 +629,11 @@ static void bio_complete(struct bio *bio)
 }
 
 static void use_bio(struct dm_buffer *b, enum req_op op, sector_t sector,
-		    unsigned n_sectors, unsigned offset)
+		    unsigned int n_sectors, unsigned int offset)
 {
 	struct bio *bio;
 	char *ptr;
-	unsigned vec_size, len;
+	unsigned int vec_size, len;
 
 	vec_size = b->c->block_size >> PAGE_SHIFT;
 	if (unlikely(b->c->sectors_per_block_bits < PAGE_SHIFT - SECTOR_SHIFT))
@@ -654,7 +654,7 @@ static void use_bio(struct dm_buffer *b, enum req_op op, sector_t sector,
 	len = n_sectors << SECTOR_SHIFT;
 
 	do {
-		unsigned this_step = min((unsigned)(PAGE_SIZE - offset_in_page(ptr)), len);
+		unsigned int this_step = min((unsigned int)(PAGE_SIZE - offset_in_page(ptr)), len);
 		if (!bio_add_page(bio, virt_to_page(ptr), this_step,
 				  offset_in_page(ptr))) {
 			bio_put(bio);
@@ -684,9 +684,9 @@ static inline sector_t block_to_sector(struct dm_bufio_client *c, sector_t block
 static void submit_io(struct dm_buffer *b, enum req_op op,
 		      void (*end_io)(struct dm_buffer *, blk_status_t))
 {
-	unsigned n_sectors;
+	unsigned int n_sectors;
 	sector_t sector;
-	unsigned offset, end;
+	unsigned int offset, end;
 
 	b->end_io = end_io;
 
@@ -1156,7 +1156,7 @@ void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
 EXPORT_SYMBOL_GPL(dm_bufio_new);
 
 void dm_bufio_prefetch(struct dm_bufio_client *c,
-		       sector_t block, unsigned n_blocks)
+		       sector_t block, unsigned int n_blocks)
 {
 	struct blk_plug plug;
 
@@ -1232,7 +1232,7 @@ void dm_bufio_release(struct dm_buffer *b)
 EXPORT_SYMBOL_GPL(dm_bufio_release);
 
 void dm_bufio_mark_partial_buffer_dirty(struct dm_buffer *b,
-					unsigned start, unsigned end)
+					unsigned int start, unsigned int end)
 {
 	struct dm_bufio_client *c = b->c;
 
@@ -1529,13 +1529,13 @@ void dm_bufio_forget_buffers(struct dm_bufio_client *c, sector_t block, sector_t
 }
 EXPORT_SYMBOL_GPL(dm_bufio_forget_buffers);
 
-void dm_bufio_set_minimum_buffers(struct dm_bufio_client *c, unsigned n)
+void dm_bufio_set_minimum_buffers(struct dm_bufio_client *c, unsigned int n)
 {
 	c->minimum_buffers = n;
 }
 EXPORT_SYMBOL_GPL(dm_bufio_set_minimum_buffers);
 
-unsigned dm_bufio_get_block_size(struct dm_bufio_client *c)
+unsigned int dm_bufio_get_block_size(struct dm_bufio_client *c)
 {
 	return c->block_size;
 }
@@ -1734,15 +1734,15 @@ static unsigned long dm_bufio_shrink_count(struct shrinker *shrink, struct shrin
 /*
  * Create the buffering interface
 */
-struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsigned block_size,
-					       unsigned reserved_buffers, unsigned aux_size,
+struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsigned int block_size,
+					       unsigned int reserved_buffers, unsigned int aux_size,
 					       void (*alloc_callback)(struct dm_buffer *),
 					       void (*write_callback)(struct dm_buffer *),
 					       unsigned int flags)
 {
 	int r;
 	struct dm_bufio_client *c;
-	unsigned i;
+	unsigned int i;
 	char slab_name[27];
 
 	if (!block_size || block_size & ((1 << SECTOR_SHIFT) - 1)) {
@@ -1796,7 +1796,7 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
 
 	if (block_size <= KMALLOC_MAX_SIZE &&
 	    (block_size < PAGE_SIZE || !is_power_of_2(block_size))) {
-		unsigned align = min(1U << __ffs(block_size), (unsigned)PAGE_SIZE);
+		unsigned int align = min(1U << __ffs(block_size), (unsigned int)PAGE_SIZE);
 		snprintf(slab_name, sizeof slab_name, "dm_bufio_cache-%u", block_size);
 		c->slab_cache = kmem_cache_create(slab_name, block_size, align,
 						  SLAB_RECLAIM_ACCOUNT, NULL);
@@ -1872,7 +1872,7 @@ EXPORT_SYMBOL_GPL(dm_bufio_client_create);
  */
 void dm_bufio_client_destroy(struct dm_bufio_client *c)
 {
-	unsigned i;
+	unsigned int i;
 
 	drop_buffers(c);
 
@@ -1920,9 +1920,9 @@ void dm_bufio_set_sector_offset(struct dm_bufio_client *c, sector_t start)
 }
 EXPORT_SYMBOL_GPL(dm_bufio_set_sector_offset);
 
-static unsigned get_max_age_hz(void)
+static unsigned int get_max_age_hz(void)
 {
-	unsigned max_age = READ_ONCE(dm_bufio_max_age);
+	unsigned int max_age = READ_ONCE(dm_bufio_max_age);
 
 	if (max_age > UINT_MAX / HZ)
 		max_age = UINT_MAX / HZ;
@@ -1973,7 +1973,7 @@ static void do_global_cleanup(struct work_struct *w)
 	struct dm_bufio_client *locked_client = NULL;
 	struct dm_bufio_client *current_client;
 	struct dm_buffer *b;
-	unsigned spinlock_hold_count;
+	unsigned int spinlock_hold_count;
 	unsigned long threshold = dm_bufio_cache_size -
 		dm_bufio_cache_size / DM_BUFIO_LOW_WATERMARK_RATIO;
 	unsigned long loops = global_num * 2;
diff --git a/drivers/md/dm-cache-background-tracker.c b/drivers/md/dm-cache-background-tracker.c
index 7887f99b82bd5..c606e6bfc3f8b 100644
--- a/drivers/md/dm-cache-background-tracker.c
+++ b/drivers/md/dm-cache-background-tracker.c
@@ -17,7 +17,7 @@ struct bt_work {
 };
 
 struct background_tracker {
-	unsigned max_work;
+	unsigned int max_work;
 	atomic_t pending_promotes;
 	atomic_t pending_writebacks;
 	atomic_t pending_demotes;
@@ -29,7 +29,7 @@ struct background_tracker {
 	struct kmem_cache *work_cache;
 };
 
-struct background_tracker *btracker_create(unsigned max_work)
+struct background_tracker *btracker_create(unsigned int max_work)
 {
 	struct background_tracker *b = kmalloc(sizeof(*b), GFP_KERNEL);
 
@@ -155,13 +155,13 @@ static void update_stats(struct background_tracker *b, struct policy_work *w, in
 	}
 }
 
-unsigned btracker_nr_writebacks_queued(struct background_tracker *b)
+unsigned int btracker_nr_writebacks_queued(struct background_tracker *b)
 {
 	return atomic_read(&b->pending_writebacks);
 }
 EXPORT_SYMBOL_GPL(btracker_nr_writebacks_queued);
 
-unsigned btracker_nr_demotions_queued(struct background_tracker *b)
+unsigned int btracker_nr_demotions_queued(struct background_tracker *b)
 {
 	return atomic_read(&b->pending_demotes);
 }
diff --git a/drivers/md/dm-cache-background-tracker.h b/drivers/md/dm-cache-background-tracker.h
index b5056e8275c15..14d3d53dc77a3 100644
--- a/drivers/md/dm-cache-background-tracker.h
+++ b/drivers/md/dm-cache-background-tracker.h
@@ -32,7 +32,7 @@ struct background_tracker;
  * Create a new tracker, it will not be able to queue more than
  * 'max_work' entries.
  */
-struct background_tracker *btracker_create(unsigned max_work);
+struct background_tracker *btracker_create(unsigned int max_work);
 
 /*
  * Destroy the tracker. No issued, but not complete, work should
@@ -41,8 +41,8 @@ struct background_tracker *btracker_create(unsigned max_work);
  */
 void btracker_destroy(struct background_tracker *b);
 
-unsigned btracker_nr_writebacks_queued(struct background_tracker *b);
-unsigned btracker_nr_demotions_queued(struct background_tracker *b);
+unsigned int btracker_nr_writebacks_queued(struct background_tracker *b);
+unsigned int btracker_nr_demotions_queued(struct background_tracker *b);
 
 /*
  * Queue some work within the tracker. 'work' should point to the work
diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
index 83a5975bcc729..f5b4c996dc05f 100644
--- a/drivers/md/dm-cache-metadata.c
+++ b/drivers/md/dm-cache-metadata.c
@@ -104,7 +104,7 @@ struct dm_cache_metadata {
 	refcount_t ref_count;
 	struct list_head list;
 
-	unsigned version;
+	unsigned int version;
 	struct block_device *bdev;
 	struct dm_block_manager *bm;
 	struct dm_space_map *metadata_sm;
@@ -129,7 +129,7 @@ struct dm_cache_metadata {
 	bool clean_when_opened:1;
 
 	char policy_name[CACHE_POLICY_NAME_SIZE];
-	unsigned policy_version[CACHE_POLICY_VERSION_SIZE];
+	unsigned int policy_version[CACHE_POLICY_VERSION_SIZE];
 	size_t policy_hint_size;
 	struct dm_cache_statistics stats;
 
@@ -260,10 +260,10 @@ static int superblock_lock(struct dm_cache_metadata *cmd,
 static int __superblock_all_zeroes(struct dm_block_manager *bm, bool *result)
 {
 	int r;
-	unsigned i;
+	unsigned int i;
 	struct dm_block *b;
 	__le64 *data_le, zero = cpu_to_le64(0);
-	unsigned sb_block_size = dm_bm_block_size(bm) / sizeof(__le64);
+	unsigned int sb_block_size = dm_bm_block_size(bm) / sizeof(__le64);
 
 	/*
 	 * We can't use a validator here - it may be all zeroes.
@@ -727,7 +727,7 @@ static int __commit_transaction(struct dm_cache_metadata *cmd,
  */
 #define FLAGS_MASK ((1 << 16) - 1)
 
-static __le64 pack_value(dm_oblock_t block, unsigned flags)
+static __le64 pack_value(dm_oblock_t block, unsigned int flags)
 {
 	uint64_t value = from_oblock(block);
 	value <<= 16;
@@ -735,7 +735,7 @@ static __le64 pack_value(dm_oblock_t block, unsigned flags)
 	return cpu_to_le64(value);
 }
 
-static void unpack_value(__le64 value_le, dm_oblock_t *block, unsigned *flags)
+static void unpack_value(__le64 value_le, dm_oblock_t *block, unsigned int *flags)
 {
 	uint64_t value = le64_to_cpu(value_le);
 	uint64_t b = value >> 16;
@@ -749,7 +749,7 @@ static struct dm_cache_metadata *metadata_open(struct block_device *bdev,
 					       sector_t data_block_size,
 					       bool may_format_device,
 					       size_t policy_hint_size,
-					       unsigned metadata_version)
+					       unsigned int metadata_version)
 {
 	int r;
 	struct dm_cache_metadata *cmd;
@@ -810,7 +810,7 @@ static struct dm_cache_metadata *lookup_or_open(struct block_device *bdev,
 						sector_t data_block_size,
 						bool may_format_device,
 						size_t policy_hint_size,
-						unsigned metadata_version)
+						unsigned int metadata_version)
 {
 	struct dm_cache_metadata *cmd, *cmd2;
 
@@ -855,7 +855,7 @@ struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev,
 						 sector_t data_block_size,
 						 bool may_format_device,
 						 size_t policy_hint_size,
-						 unsigned metadata_version)
+						 unsigned int metadata_version)
 {
 	struct dm_cache_metadata *cmd = lookup_or_open(bdev, data_block_size,
 						       may_format_device, policy_hint_size,
 						       metadata_version);
@@ -890,7 +890,7 @@ static int block_clean_combined_dirty(struct dm_cache_metadata *cmd, dm_cblock_t
 	int r;
 	__le64 value;
 	dm_oblock_t ob;
-	unsigned flags;
+	unsigned int flags;
 
 	r = dm_array_get_value(&cmd->info, cmd->root, from_cblock(b), &value);
 	if (r)
@@ -1288,7 +1288,7 @@ static bool policy_unchanged(struct dm_cache_metadata *cmd,
 			     struct dm_cache_policy *policy)
 {
 	const char *policy_name = dm_cache_policy_get_name(policy);
-	const unsigned *policy_version = dm_cache_policy_get_version(policy);
+	const unsigned int *policy_version = dm_cache_policy_get_version(policy);
 	size_t policy_hint_size = dm_cache_policy_get_hint_size(policy);
 
 	/*
@@ -1339,7 +1339,7 @@ static int __load_mapping_v1(struct dm_cache_metadata *cmd,
 	__le32 *hint_value_le;
 
 	dm_oblock_t oblock;
-	unsigned flags;
+	unsigned int flags;
 	bool dirty = true;
 
 	dm_array_cursor_get_value(mapping_cursor, (void **) &mapping_value_le);
@@ -1381,7 +1381,7 @@ static int __load_mapping_v2(struct dm_cache_metadata *cmd,
 	__le32 *hint_value_le;
 
 	dm_oblock_t oblock;
-	unsigned flags;
+	unsigned int flags;
 	bool dirty = true;
 
 	dm_array_cursor_get_value(mapping_cursor, (void **) &mapping_value_le);
@@ -1513,7 +1513,7 @@ static int __dump_mapping(void *context, uint64_t cblock, void *leaf)
 {
 	__le64 value;
 	dm_oblock_t oblock;
-	unsigned flags;
+	unsigned int flags;
 
 	memcpy(&value, leaf, sizeof(value));
 	unpack_value(value, &oblock, &flags);
@@ -1547,7 +1547,7 @@ int dm_cache_changed_this_transaction(struct dm_cache_metadata *cmd)
 static int __dirty(struct dm_cache_metadata *cmd, dm_cblock_t cblock, bool dirty)
 {
 	int r;
-	unsigned flags;
+	unsigned int flags;
 	dm_oblock_t oblock;
 	__le64 value;
 
@@ -1574,10 +1574,10 @@ static int __dirty(struct dm_cache_metadata *cmd, dm_cblock_t cblock, bool dirty
 
 }
 
-static int __set_dirty_bits_v1(struct dm_cache_metadata *cmd, unsigned nr_bits, unsigned long *bits)
+static int __set_dirty_bits_v1(struct dm_cache_metadata *cmd, unsigned int nr_bits, unsigned long *bits)
 {
 	int r;
-	unsigned i;
+	unsigned int i;
 	for (i = 0; i < nr_bits; i++) {
 		r = __dirty(cmd, to_cblock(i), test_bit(i, bits));
 		if (r)
@@ -1594,7 +1594,7 @@ static int is_dirty_callback(uint32_t index, bool *value, void *context)
 	return 0;
 }
 
-static int __set_dirty_bits_v2(struct dm_cache_metadata *cmd, unsigned nr_bits, unsigned long *bits)
+static int __set_dirty_bits_v2(struct dm_cache_metadata *cmd, unsigned int nr_bits, unsigned long *bits)
 {
 	int r = 0;
 
@@ -1613,7 +1613,7 @@ static int __set_dirty_bits_v2(struct dm_cache_metadata *cmd, unsigned nr_bits,
 }
 
 int dm_cache_set_dirty_bits(struct dm_cache_metadata *cmd,
-			    unsigned nr_bits,
+			    unsigned int nr_bits,
 			    unsigned long *bits)
 {
 	int r;
@@ -1712,7 +1712,7 @@ static int write_hints(struct dm_cache_metadata *cmd, struct dm_cache_policy *po
 	int r;
 	size_t hint_size;
 	const char *policy_name = dm_cache_policy_get_name(policy);
-	const unsigned *policy_version = dm_cache_policy_get_version(policy);
+	const unsigned int *policy_version = dm_cache_policy_get_version(policy);
 
 	if (!policy_name[0] ||
 	    (strlen(policy_name) > sizeof(cmd->policy_name) - 1))
diff --git a/drivers/md/dm-cache-metadata.h b/drivers/md/dm-cache-metadata.h
index 0905f2c1615e1..b40322bc44cf7 100644
--- a/drivers/md/dm-cache-metadata.h
+++ b/drivers/md/dm-cache-metadata.h
@@ -60,7 +60,7 @@ struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev,
 						 sector_t data_block_size,
 						 bool may_format_device,
 						 size_t policy_hint_size,
-						 unsigned metadata_version);
+						 unsigned int metadata_version);
 
 void dm_cache_metadata_close(struct dm_cache_metadata *cmd);
 
@@ -96,7 +96,7 @@ int dm_cache_load_mappings(struct dm_cache_metadata *cmd,
 			   void *context);
 
 int dm_cache_set_dirty_bits(struct dm_cache_metadata *cmd,
-			    unsigned nr_bits, unsigned long *bits);
+			    unsigned int nr_bits, unsigned long *bits);
 
 struct dm_cache_statistics {
 	uint32_t read_hits;
diff --git a/drivers/md/dm-cache-policy-internal.h b/drivers/md/dm-cache-policy-internal.h
index 56f0a23f698c0..8e49baa78dc19 100644
--- a/drivers/md/dm-cache-policy-internal.h
+++ b/drivers/md/dm-cache-policy-internal.h
@@ -85,7 +85,7 @@ static inline void policy_tick(struct dm_cache_policy *p, bool can_block)
 }
 
 static inline int policy_emit_config_values(struct dm_cache_policy *p, char *result,
-					    unsigned maxlen, ssize_t *sz_ptr)
+					    unsigned int maxlen, ssize_t *sz_ptr)
 {
 	ssize_t sz = *sz_ptr;
 	if (p->emit_config_values)
@@ -112,18 +112,18 @@ static inline void policy_allow_migrations(struct dm_cache_policy *p, bool allow
 /*
  * Some utility functions commonly used by policies and the core target.
  */
-static inline size_t bitset_size_in_bytes(unsigned nr_entries)
+static inline size_t bitset_size_in_bytes(unsigned int nr_entries)
 {
 	return sizeof(unsigned long) * dm_div_up(nr_entries, BITS_PER_LONG);
 }
 
-static inline unsigned long *alloc_bitset(unsigned nr_entries)
+static inline unsigned long *alloc_bitset(unsigned int nr_entries)
 {
 	size_t s = bitset_size_in_bytes(nr_entries);
 	return vzalloc(s);
 }
 
-static inline void clear_bitset(void *bitset, unsigned nr_entries)
+static inline void clear_bitset(void *bitset, unsigned int nr_entries)
 {
 	size_t s = bitset_size_in_bytes(nr_entries);
 	memset(bitset, 0, s);
@@ -154,7 +154,7 @@ void dm_cache_policy_destroy(struct dm_cache_policy *p);
  */
 const char *dm_cache_policy_get_name(struct dm_cache_policy *p);
 
-const unsigned *dm_cache_policy_get_version(struct dm_cache_policy *p);
+const unsigned int *dm_cache_policy_get_version(struct dm_cache_policy *p);
 
 size_t dm_cache_policy_get_hint_size(struct dm_cache_policy *p);
 
diff --git a/drivers/md/dm-cache-policy-smq.c b/drivers/md/dm-cache-policy-smq.c index a3d281fc14c3a..54343812223e8 100644 --- a/drivers/md/dm-cache-policy-smq.c +++ b/drivers/md/dm-cache-policy-smq.c @@ -23,12 +23,12 @@ /* * Safe division functions that return zero on divide by zero. */ -static unsigned safe_div(unsigned n, unsigned d) +static unsigned int safe_div(unsigned int n, unsigned int d) { return d ? n / d : 0u; }
-static unsigned safe_mod(unsigned n, unsigned d) +static unsigned int safe_mod(unsigned int n, unsigned int d) { return d ? n % d : 0u; } @@ -36,10 +36,10 @@ static unsigned safe_mod(unsigned n, unsigned d) /*----------------------------------------------------------------*/
struct entry { - unsigned hash_next:28; - unsigned prev:28; - unsigned next:28; - unsigned level:6; + unsigned int hash_next:28; + unsigned int prev:28; + unsigned int next:28; + unsigned int level:6; bool dirty:1; bool allocated:1; bool sentinel:1; @@ -62,7 +62,7 @@ struct entry_space { struct entry *end; };
-static int space_init(struct entry_space *es, unsigned nr_entries) +static int space_init(struct entry_space *es, unsigned int nr_entries) { if (!nr_entries) { es->begin = es->end = NULL; @@ -82,7 +82,7 @@ static void space_exit(struct entry_space *es) vfree(es->begin); }
-static struct entry *__get_entry(struct entry_space *es, unsigned block) +static struct entry *__get_entry(struct entry_space *es, unsigned int block) { struct entry *e;
@@ -92,13 +92,13 @@ static struct entry *__get_entry(struct entry_space *es, unsigned block) return e; }
-static unsigned to_index(struct entry_space *es, struct entry *e) +static unsigned int to_index(struct entry_space *es, struct entry *e) { BUG_ON(e < es->begin || e >= es->end); return e - es->begin; }
-static struct entry *to_entry(struct entry_space *es, unsigned block) +static struct entry *to_entry(struct entry_space *es, unsigned int block) { if (block == INDEXER_NULL) return NULL; @@ -109,8 +109,8 @@ static struct entry *to_entry(struct entry_space *es, unsigned block) /*----------------------------------------------------------------*/
struct ilist { - unsigned nr_elts; /* excluding sentinel entries */ - unsigned head, tail; + unsigned int nr_elts; /* excluding sentinel entries */ + unsigned int head, tail; };
static void l_init(struct ilist *l) @@ -252,23 +252,23 @@ static struct entry *l_pop_tail(struct entry_space *es, struct ilist *l) struct queue { struct entry_space *es;
-	unsigned nr_elts;
-	unsigned nr_levels;
+	unsigned int nr_elts;
+	unsigned int nr_levels;
 	struct ilist qs[MAX_LEVELS];
 
 	/*
 	 * We maintain a count of the number of entries we would like in each
 	 * level.
 	 */
-	unsigned last_target_nr_elts;
-	unsigned nr_top_levels;
-	unsigned nr_in_top_levels;
-	unsigned target_count[MAX_LEVELS];
+	unsigned int last_target_nr_elts;
+	unsigned int nr_top_levels;
+	unsigned int nr_in_top_levels;
+	unsigned int target_count[MAX_LEVELS];
 };
-static void q_init(struct queue *q, struct entry_space *es, unsigned nr_levels) +static void q_init(struct queue *q, struct entry_space *es, unsigned int nr_levels) { - unsigned i; + unsigned int i;
q->es = es; q->nr_elts = 0; @@ -284,7 +284,7 @@ static void q_init(struct queue *q, struct entry_space *es, unsigned nr_levels) q->nr_in_top_levels = 0u; }
-static unsigned q_size(struct queue *q) +static unsigned int q_size(struct queue *q) { return q->nr_elts; } @@ -332,9 +332,9 @@ static void q_del(struct queue *q, struct entry *e) /* * Return the oldest entry of the lowest populated level. */ -static struct entry *q_peek(struct queue *q, unsigned max_level, bool can_cross_sentinel) +static struct entry *q_peek(struct queue *q, unsigned int max_level, bool can_cross_sentinel) { - unsigned level; + unsigned int level; struct entry *e;
max_level = min(max_level, q->nr_levels); @@ -369,7 +369,7 @@ static struct entry *q_pop(struct queue *q) * used by redistribute, so we know this is true. It also doesn't adjust * the q->nr_elts count. */ -static struct entry *__redist_pop_from(struct queue *q, unsigned level) +static struct entry *__redist_pop_from(struct queue *q, unsigned int level) { struct entry *e;
@@ -383,9 +383,10 @@ static struct entry *__redist_pop_from(struct queue *q, unsigned level) return NULL; }
-static void q_set_targets_subrange_(struct queue *q, unsigned nr_elts, unsigned lbegin, unsigned lend)
+static void q_set_targets_subrange_(struct queue *q, unsigned int nr_elts,
+				    unsigned int lbegin, unsigned int lend)
 {
-	unsigned level, nr_levels, entries_per_level, remainder;
+	unsigned int level, nr_levels, entries_per_level, remainder;
 
 	BUG_ON(lbegin > lend);
 	BUG_ON(lend > q->nr_levels);
@@ -426,7 +427,7 @@ static void q_set_targets(struct queue *q)
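For review context, the arithmetic inside q_set_targets_subrange_() spreads a target count across a range of levels: each level gets the quotient, and the first remainder levels get one extra. A standalone sketch under that reading, not the kernel function itself:

#include <stdio.h>

#define MAX_LEVELS 64

static unsigned int target_count[MAX_LEVELS];

/* spread nr_elts over levels [lbegin, lend): quotient per level,
 * plus one extra for the first 'remainder' levels */
static void set_targets_subrange(unsigned int nr_elts,
                                 unsigned int lbegin, unsigned int lend)
{
        unsigned int level, nr_levels = lend - lbegin;
        unsigned int entries_per_level = nr_elts / nr_levels;
        unsigned int remainder = nr_elts % nr_levels;

        for (level = lbegin; level < lend; level++)
                target_count[level] = entries_per_level +
                        (level < lbegin + remainder ? 1 : 0);
}

int main(void)
{
        set_targets_subrange(10, 0, 4); /* 3 3 2 2 */
        for (unsigned int i = 0; i < 4; i++)
                printf("level %u: %u\n", i, target_count[i]);
        return 0;
}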
static void q_redistribute(struct queue *q) { - unsigned target, level; + unsigned int target, level; struct ilist *l, *l_above; struct entry *e;
@@ -467,12 +468,12 @@ static void q_redistribute(struct queue *q) } }
-static void q_requeue(struct queue *q, struct entry *e, unsigned extra_levels, +static void q_requeue(struct queue *q, struct entry *e, unsigned int extra_levels, struct entry *s1, struct entry *s2) { struct entry *de; - unsigned sentinels_passed = 0; - unsigned new_level = min(q->nr_levels - 1u, e->level + extra_levels); + unsigned int sentinels_passed = 0; + unsigned int new_level = min(q->nr_levels - 1u, e->level + extra_levels);
/* try and find an entry to swap with */ if (extra_levels && (e->level < q->nr_levels - 1u)) { @@ -512,9 +513,9 @@ static void q_requeue(struct queue *q, struct entry *e, unsigned extra_levels, #define EIGHTH (1u << (FP_SHIFT - 3u))
struct stats { - unsigned hit_threshold; - unsigned hits; - unsigned misses; + unsigned int hit_threshold; + unsigned int hits; + unsigned int misses; };
enum performance { @@ -523,7 +524,7 @@ enum performance { Q_WELL };
-static void stats_init(struct stats *s, unsigned nr_levels) +static void stats_init(struct stats *s, unsigned int nr_levels) { s->hit_threshold = (nr_levels * 3u) / 4u; s->hits = 0u; @@ -535,7 +536,7 @@ static void stats_reset(struct stats *s) s->hits = s->misses = 0u; }
-static void stats_level_accessed(struct stats *s, unsigned level)
+static void stats_level_accessed(struct stats *s, unsigned int level)
 {
 	if (level >= s->hit_threshold)
 		s->hits++;
@@ -556,7 +557,7 @@ static void stats_miss(struct stats *s)
  */
 static enum performance stats_assess(struct stats *s)
 {
-	unsigned confidence = safe_div(s->hits << FP_SHIFT, s->hits + s->misses);
+	unsigned int confidence = safe_div(s->hits << FP_SHIFT, s->hits + s->misses);
 
 	if (confidence < SIXTEENTH)
 		return Q_POOR;
@@ -573,16 +574,16 @@ static enum performance stats_assess(struct stats *s)
 struct smq_hash_table {
 	struct entry_space *es;
 	unsigned long long hash_bits;
-	unsigned *buckets;
+	unsigned int *buckets;
};
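stats_assess() above works in fixed point: the hit ratio is scaled by 1 << FP_SHIFT and compared against the SIXTEENTH and EIGHTH constants. A sketch assuming FP_SHIFT is 8 as in the smq source, with the non-Q_POOR branches guessed from the constants visible in the hunk:

#include <stdio.h>

#define FP_SHIFT  8                     /* assumed; the smq value */
#define SIXTEENTH (1u << (FP_SHIFT - 4u))
#define EIGHTH    (1u << (FP_SHIFT - 3u))

static unsigned int safe_div(unsigned int n, unsigned int d) { return d ? n / d : 0u; }

/* hit ratio as an 8-bit fixed-point fraction: 256 means all hits */
static const char *assess(unsigned int hits, unsigned int misses)
{
        unsigned int confidence = safe_div(hits << FP_SHIFT, hits + misses);

        if (confidence < SIXTEENTH)
                return "poor";
        if (confidence < EIGHTH)
                return "fair";
        return "well";
}

int main(void)
{
        printf("%s\n", assess(1, 100)); /* about 1 percent: poor */
        printf("%s\n", assess(1, 3));   /* 25 percent: well */
        return 0;
}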
 /*
  * All cache entries are stored in a chained hash table.  To save space we
  * use indexing again, and only store indexes to the next entry.
  */
-static int h_init(struct smq_hash_table *ht, struct entry_space *es, unsigned nr_entries)
+static int h_init(struct smq_hash_table *ht, struct entry_space *es, unsigned int nr_entries)
 {
-	unsigned i, nr_buckets;
+	unsigned int i, nr_buckets;
 
 	ht->es = es;
 	nr_buckets = roundup_pow_of_two(max(nr_entries / 4u, 16u));
@@ -603,7 +604,7 @@ static void h_exit(struct smq_hash_table *ht)
 	vfree(ht->buckets);
 }
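h_init() above sizes the chained hash table at a quarter of the entry count with a floor of 16, rounded up to a power of two so bucket selection reduces to a cheap mask. A sketch with a local stand-in for the kernel's roundup_pow_of_two():

#include <stdio.h>

/* stand-in: round v up to the next power of two */
static unsigned int roundup_pow2(unsigned int v)
{
        unsigned int r = 1;

        while (r < v)
                r <<= 1;
        return r;
}

static unsigned int max_u(unsigned int a, unsigned int b) { return a > b ? a : b; }

int main(void)
{
        unsigned int nr_entries = 1000;
        unsigned int nr_buckets = roundup_pow2(max_u(nr_entries / 4u, 16u));

        printf("%u entries -> %u buckets\n", nr_entries, nr_buckets); /* 256 */
        return 0;
}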
-static struct entry *h_head(struct smq_hash_table *ht, unsigned bucket) +static struct entry *h_head(struct smq_hash_table *ht, unsigned int bucket) { return to_entry(ht->es, ht->buckets[bucket]); } @@ -613,7 +614,7 @@ static struct entry *h_next(struct smq_hash_table *ht, struct entry *e) return to_entry(ht->es, e->hash_next); }
-static void __h_insert(struct smq_hash_table *ht, unsigned bucket, struct entry *e) +static void __h_insert(struct smq_hash_table *ht, unsigned int bucket, struct entry *e) { e->hash_next = ht->buckets[bucket]; ht->buckets[bucket] = to_index(ht->es, e); @@ -621,11 +622,11 @@ static void __h_insert(struct smq_hash_table *ht, unsigned bucket, struct entry
static void h_insert(struct smq_hash_table *ht, struct entry *e) { - unsigned h = hash_64(from_oblock(e->oblock), ht->hash_bits); + unsigned int h = hash_64(from_oblock(e->oblock), ht->hash_bits); __h_insert(ht, h, e); }
-static struct entry *__h_lookup(struct smq_hash_table *ht, unsigned h, dm_oblock_t oblock, +static struct entry *__h_lookup(struct smq_hash_table *ht, unsigned int h, dm_oblock_t oblock, struct entry **prev) { struct entry *e; @@ -641,7 +642,7 @@ static struct entry *__h_lookup(struct smq_hash_table *ht, unsigned h, dm_oblock return NULL; }
-static void __h_unlink(struct smq_hash_table *ht, unsigned h, +static void __h_unlink(struct smq_hash_table *ht, unsigned int h, struct entry *e, struct entry *prev) { if (prev) @@ -656,7 +657,7 @@ static void __h_unlink(struct smq_hash_table *ht, unsigned h, static struct entry *h_lookup(struct smq_hash_table *ht, dm_oblock_t oblock) { struct entry *e, *prev; - unsigned h = hash_64(from_oblock(oblock), ht->hash_bits); + unsigned int h = hash_64(from_oblock(oblock), ht->hash_bits);
e = __h_lookup(ht, h, oblock, &prev); if (e && prev) { @@ -673,7 +674,7 @@ static struct entry *h_lookup(struct smq_hash_table *ht, dm_oblock_t oblock)
static void h_remove(struct smq_hash_table *ht, struct entry *e) { - unsigned h = hash_64(from_oblock(e->oblock), ht->hash_bits); + unsigned int h = hash_64(from_oblock(e->oblock), ht->hash_bits); struct entry *prev;
/* @@ -689,16 +690,16 @@ static void h_remove(struct smq_hash_table *ht, struct entry *e)
 struct entry_alloc {
 	struct entry_space *es;
-	unsigned begin;
+	unsigned int begin;
 
-	unsigned nr_allocated;
+	unsigned int nr_allocated;
 	struct ilist free;
 };
 
 static void init_allocator(struct entry_alloc *ea, struct entry_space *es,
-			   unsigned begin, unsigned end)
+			   unsigned int begin, unsigned int end)
 {
-	unsigned i;
+	unsigned int i;
ea->es = es; ea->nr_allocated = 0u; @@ -742,7 +743,7 @@ static struct entry *alloc_entry(struct entry_alloc *ea) /* * This assumes the cblock hasn't already been allocated. */ -static struct entry *alloc_particular_entry(struct entry_alloc *ea, unsigned i) +static struct entry *alloc_particular_entry(struct entry_alloc *ea, unsigned int i) { struct entry *e = __get_entry(ea->es, ea->begin + i);
@@ -770,12 +771,12 @@ static bool allocator_empty(struct entry_alloc *ea) return l_empty(&ea->free); }
-static unsigned get_index(struct entry_alloc *ea, struct entry *e) +static unsigned int get_index(struct entry_alloc *ea, struct entry *e) { return to_index(ea->es, e) - ea->begin; }
-static struct entry *get_entry(struct entry_alloc *ea, unsigned index) +static struct entry *get_entry(struct entry_alloc *ea, unsigned int index) { return __get_entry(ea->es, ea->begin + index); } @@ -800,9 +801,9 @@ struct smq_policy { sector_t cache_block_size;
sector_t hotspot_block_size; - unsigned nr_hotspot_blocks; - unsigned cache_blocks_per_hotspot_block; - unsigned hotspot_level_jump; + unsigned int nr_hotspot_blocks; + unsigned int cache_blocks_per_hotspot_block; + unsigned int hotspot_level_jump;
struct entry_space es; struct entry_alloc writeback_sentinel_alloc; @@ -831,7 +832,7 @@ struct smq_policy { * Keeps track of time, incremented by the core. We use this to * avoid attributing multiple hits within the same tick. */ - unsigned tick; + unsigned int tick;
/* * The hash tables allows us to quickly find an entry by origin @@ -846,8 +847,8 @@ struct smq_policy { bool current_demote_sentinels; unsigned long next_demote_period;
- unsigned write_promote_level; - unsigned read_promote_level; + unsigned int write_promote_level; + unsigned int read_promote_level;
unsigned long next_hotspot_period; unsigned long next_cache_period; @@ -859,24 +860,24 @@ struct smq_policy {
/*----------------------------------------------------------------*/
-static struct entry *get_sentinel(struct entry_alloc *ea, unsigned level, bool which) +static struct entry *get_sentinel(struct entry_alloc *ea, unsigned int level, bool which) { return get_entry(ea, which ? level : NR_CACHE_LEVELS + level); }
-static struct entry *writeback_sentinel(struct smq_policy *mq, unsigned level) +static struct entry *writeback_sentinel(struct smq_policy *mq, unsigned int level) { return get_sentinel(&mq->writeback_sentinel_alloc, level, mq->current_writeback_sentinels); }
-static struct entry *demote_sentinel(struct smq_policy *mq, unsigned level) +static struct entry *demote_sentinel(struct smq_policy *mq, unsigned int level) { return get_sentinel(&mq->demote_sentinel_alloc, level, mq->current_demote_sentinels); }
static void __update_writeback_sentinels(struct smq_policy *mq) { - unsigned level; + unsigned int level; struct queue *q = &mq->dirty; struct entry *sentinel;
@@ -889,7 +890,7 @@ static void __update_writeback_sentinels(struct smq_policy *mq)
static void __update_demote_sentinels(struct smq_policy *mq) { - unsigned level; + unsigned int level; struct queue *q = &mq->clean; struct entry *sentinel;
@@ -917,7 +918,7 @@ static void update_sentinels(struct smq_policy *mq)
static void __sentinels_init(struct smq_policy *mq) { - unsigned level; + unsigned int level; struct entry *sentinel;
for (level = 0; level < NR_CACHE_LEVELS; level++) { @@ -1008,7 +1009,7 @@ static void requeue(struct smq_policy *mq, struct entry *e) } }
-static unsigned default_promote_level(struct smq_policy *mq)
+static unsigned int default_promote_level(struct smq_policy *mq)
 {
 	/*
 	 * The promote level depends on the current performance of the
@@ -1030,9 +1031,9 @@ static unsigned default_promote_level(struct smq_policy *mq)
 		1, 1, 1, 2, 4, 6, 7, 8, 7, 6, 4, 4, 3, 3, 2, 2, 1
 	};
 
-	unsigned hits = mq->cache_stats.hits;
-	unsigned misses = mq->cache_stats.misses;
-	unsigned index = safe_div(hits << 4u, hits + misses);
+	unsigned int hits = mq->cache_stats.hits;
+	unsigned int misses = mq->cache_stats.misses;
+	unsigned int index = safe_div(hits << 4u, hits + misses);
 
 	return table[index];
 }
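default_promote_level() quantises the hit ratio to 0..16 with hits << 4 / (hits + misses) and uses it to index the 17-entry table shown in the hunk. A worked standalone example, not the kernel code:

#include <stdio.h>

static unsigned int safe_div(unsigned int n, unsigned int d) { return d ? n / d : 0u; }

int main(void)
{
        /* the 17-entry promote-level table from the hunk above */
        static const unsigned int table[] = {
                1, 1, 1, 2, 4, 6, 7, 8, 7, 6, 4, 4, 3, 3, 2, 2, 1
        };
        unsigned int hits = 300, misses = 100;
        unsigned int index = safe_div(hits << 4u, hits + misses);

        /* 300 hits of 400 accesses -> index 12 -> level 3 */
        printf("index %u -> promote level %u\n", index, table[index]);
        return 0;
}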
@@ -1042,7 +1043,7 @@ static void update_promote_levels(struct smq_policy *mq) * If there are unused cache entries then we want to be really * eager to promote. */ - unsigned threshold_level = allocator_empty(&mq->cache_alloc) ? + unsigned int threshold_level = allocator_empty(&mq->cache_alloc) ? default_promote_level(mq) : (NR_HOTSPOT_LEVELS / 2u);
threshold_level = max(threshold_level, NR_HOTSPOT_LEVELS); @@ -1124,7 +1125,7 @@ static void end_cache_period(struct smq_policy *mq) #define CLEAN_TARGET 25u #define FREE_TARGET 25u
-static unsigned percent_to_target(struct smq_policy *mq, unsigned p)
+static unsigned int percent_to_target(struct smq_policy *mq, unsigned int p)
 {
 	return from_cblock(mq->cache_size) * p / 100u;
 }
@@ -1150,7 +1151,7 @@ static bool clean_target_met(struct smq_policy *mq, bool idle)
static bool free_target_met(struct smq_policy *mq) { - unsigned nr_free; + unsigned int nr_free;
nr_free = from_cblock(mq->cache_size) - mq->cache_alloc.nr_allocated; return (nr_free + btracker_nr_demotions_queued(mq->bg_work)) >= @@ -1300,7 +1301,7 @@ static dm_oblock_t to_hblock(struct smq_policy *mq, dm_oblock_t b)
static struct entry *update_hotspot_queue(struct smq_policy *mq, dm_oblock_t b) { - unsigned hi; + unsigned int hi; dm_oblock_t hb = to_hblock(mq, b); struct entry *e = h_lookup(&mq->hotspot_table, hb);
@@ -1549,7 +1550,7 @@ static void smq_clear_dirty(struct dm_cache_policy *p, dm_cblock_t cblock) spin_unlock_irqrestore(&mq->lock, flags); }
-static unsigned random_level(dm_cblock_t cblock)
+static unsigned int random_level(dm_cblock_t cblock)
 {
 	return hash_32(from_cblock(cblock), 9) & (NR_CACHE_LEVELS - 1);
 }
@@ -1660,7 +1661,7 @@ static int mq_set_config_value(struct dm_cache_policy *p,
 }
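random_level() above spreads newly inserted cache blocks pseudo-randomly across queue levels. A sketch using the kernel's 32-bit multiplicative-hash constant for a local hash_32() stand-in, and assuming NR_CACHE_LEVELS is 64 (the smq value, not visible in this hunk):

#include <stdint.h>
#include <stdio.h>

#define NR_CACHE_LEVELS 64u     /* assumed here */

/* stand-in for the kernel's hash_32(): multiply, keep the top bits */
static uint32_t hash32(uint32_t val, unsigned int bits)
{
        return (val * 0x61C88647u) >> (32 - bits);
}

int main(void)
{
        for (uint32_t cblock = 0; cblock < 4; cblock++)
                printf("cblock %u -> level %u\n", cblock,
                       hash32(cblock, 9) & (NR_CACHE_LEVELS - 1));
        return 0;
}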
static int mq_emit_config_values(struct dm_cache_policy *p, char *result, - unsigned maxlen, ssize_t *sz_ptr) + unsigned int maxlen, ssize_t *sz_ptr) { ssize_t sz = *sz_ptr;
@@ -1699,16 +1700,16 @@ static void init_policy_functions(struct smq_policy *mq, bool mimic_mq)
static bool too_many_hotspot_blocks(sector_t origin_size, sector_t hotspot_block_size, - unsigned nr_hotspot_blocks) + unsigned int nr_hotspot_blocks) { return (hotspot_block_size * nr_hotspot_blocks) > origin_size; }
static void calc_hotspot_params(sector_t origin_size, sector_t cache_block_size, - unsigned nr_cache_blocks, + unsigned int nr_cache_blocks, sector_t *hotspot_block_size, - unsigned *nr_hotspot_blocks) + unsigned int *nr_hotspot_blocks) { *hotspot_block_size = cache_block_size * 16u; *nr_hotspot_blocks = max(nr_cache_blocks / 4u, 1024u); @@ -1724,9 +1725,9 @@ static struct dm_cache_policy *__smq_create(dm_cblock_t cache_size, bool mimic_mq, bool migrations_allowed) { - unsigned i; - unsigned nr_sentinels_per_queue = 2u * NR_CACHE_LEVELS; - unsigned total_sentinels = 2u * nr_sentinels_per_queue; + unsigned int i; + unsigned int nr_sentinels_per_queue = 2u * NR_CACHE_LEVELS; + unsigned int total_sentinels = 2u * nr_sentinels_per_queue; struct smq_policy *mq = kzalloc(sizeof(*mq), GFP_KERNEL);
if (!mq) diff --git a/drivers/md/dm-cache-policy.c b/drivers/md/dm-cache-policy.c index c1a3cee99b445..2e58bbcf3e3bd 100644 --- a/drivers/md/dm-cache-policy.c +++ b/drivers/md/dm-cache-policy.c @@ -154,7 +154,7 @@ const char *dm_cache_policy_get_name(struct dm_cache_policy *p) } EXPORT_SYMBOL_GPL(dm_cache_policy_get_name);
-const unsigned *dm_cache_policy_get_version(struct dm_cache_policy *p)
+const unsigned int *dm_cache_policy_get_version(struct dm_cache_policy *p)
 {
 	struct dm_cache_policy_type *t = p->private;
diff --git a/drivers/md/dm-cache-policy.h b/drivers/md/dm-cache-policy.h index 06eb31af626f1..6ba3e9c91af53 100644 --- a/drivers/md/dm-cache-policy.h +++ b/drivers/md/dm-cache-policy.h @@ -128,7 +128,7 @@ struct dm_cache_policy { * Configuration. */ int (*emit_config_values)(struct dm_cache_policy *p, char *result, - unsigned maxlen, ssize_t *sz_ptr); + unsigned int maxlen, ssize_t *sz_ptr); int (*set_config_value)(struct dm_cache_policy *p, const char *key, const char *value);
@@ -157,7 +157,7 @@ struct dm_cache_policy_type { * what gets passed on the target line to select your policy. */ char name[CACHE_POLICY_NAME_SIZE]; - unsigned version[CACHE_POLICY_VERSION_SIZE]; + unsigned int version[CACHE_POLICY_VERSION_SIZE];
/* * For use by an alias dm_cache_policy_type to point to the diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c index 17fde3e5a1f7b..8f7426b71e025 100644 --- a/drivers/md/dm-cache-target.c +++ b/drivers/md/dm-cache-target.c @@ -275,7 +275,7 @@ enum cache_io_mode { struct cache_features { enum cache_metadata_mode mode; enum cache_io_mode io_mode; - unsigned metadata_version; + unsigned int metadata_version; bool discard_passdown:1; };
@@ -362,7 +362,7 @@ struct cache { * Rather than reconstructing the table line for the status we just * save it and regurgitate. */ - unsigned nr_ctr_args; + unsigned int nr_ctr_args; const char **ctr_args;
struct dm_kcopyd_client *copier; @@ -378,7 +378,7 @@ struct cache { unsigned long *dirty_bitset; atomic_t nr_dirty;
- unsigned policy_nr_args; + unsigned int policy_nr_args; struct dm_cache_policy *policy;
/* @@ -409,7 +409,7 @@ struct cache {
struct per_bio_data { bool tick:1; - unsigned req_nr:2; + unsigned int req_nr:2; struct dm_bio_prison_cell_v2 *cell; struct dm_hook_info hook_info; sector_t len; @@ -517,7 +517,7 @@ static void build_key(dm_oblock_t begin, dm_oblock_t end, struct dm_cell_key_v2 #define WRITE_LOCK_LEVEL 0 #define READ_WRITE_LOCK_LEVEL 1
-static unsigned lock_level(struct bio *bio) +static unsigned int lock_level(struct bio *bio) { return bio_data_dir(bio) == WRITE ? WRITE_LOCK_LEVEL : @@ -1884,7 +1884,7 @@ static void check_migrations(struct work_struct *ws) */ static void destroy(struct cache *cache) { - unsigned i; + unsigned int i;
mempool_exit(&cache->migration_pool);
@@ -2124,7 +2124,7 @@ static int parse_features(struct cache_args *ca, struct dm_arg_set *as, };
int r, mode_ctr = 0; - unsigned argc; + unsigned int argc; const char *arg; struct cache_features *cf = &ca->features;
@@ -2544,7 +2544,7 @@ static int cache_create(struct cache_args *ca, struct cache **result)
static int copy_ctr_args(struct cache *cache, int argc, const char **argv) { - unsigned i; + unsigned int i; const char **copy;
copy = kcalloc(argc, sizeof(*copy), GFP_KERNEL); @@ -2566,7 +2566,7 @@ static int copy_ctr_args(struct cache *cache, int argc, const char **argv) return 0; }
-static int cache_ctr(struct dm_target *ti, unsigned argc, char **argv) +static int cache_ctr(struct dm_target *ti, unsigned int argc, char **argv) { int r = -EINVAL; struct cache_args *ca; @@ -2669,7 +2669,7 @@ static int write_dirty_bitset(struct cache *cache)
static int write_discard_bitset(struct cache *cache) { - unsigned i, r; + unsigned int i, r;
if (get_cache_mode(cache) >= CM_READ_ONLY) return -EINVAL; @@ -2983,11 +2983,11 @@ static void cache_resume(struct dm_target *ti) }
 static void emit_flags(struct cache *cache, char *result,
-		       unsigned maxlen, ssize_t *sz_ptr)
+		       unsigned int maxlen, ssize_t *sz_ptr)
 {
 	ssize_t sz = *sz_ptr;
 	struct cache_features *cf = &cache->features;
-	unsigned count = (cf->metadata_version == 2) + !cf->discard_passdown + 1;
+	unsigned int count = (cf->metadata_version == 2) + !cf->discard_passdown + 1;
 
 	DMEMIT("%u ", count);
 
@@ -3027,10 +3027,10 @@ static void emit_flags(struct cache *cache, char *result,
  * <policy name> <#policy args> <policy args>* <cache metadata mode> <needs_check>
  */
 static void cache_status(struct dm_target *ti, status_type_t type,
-			 unsigned status_flags, char *result, unsigned maxlen)
+			 unsigned int status_flags, char *result, unsigned int maxlen)
 {
 	int r = 0;
-	unsigned i;
+	unsigned int i;
 	ssize_t sz = 0;
 	dm_block_t nr_free_blocks_metadata = 0;
 	dm_block_t nr_blocks_metadata = 0;
@@ -3067,18 +3067,18 @@ static void cache_status(struct dm_target *ti, status_type_t type,
 		residency = policy_residency(cache->policy);
 
 		DMEMIT("%u %llu/%llu %llu %llu/%llu %u %u %u %u %u %u %lu ",
-		       (unsigned)DM_CACHE_METADATA_BLOCK_SIZE,
+		       (unsigned int)DM_CACHE_METADATA_BLOCK_SIZE,
 		       (unsigned long long)(nr_blocks_metadata - nr_free_blocks_metadata),
 		       (unsigned long long)nr_blocks_metadata,
 		       (unsigned long long)cache->sectors_per_block,
 		       (unsigned long long) from_cblock(residency),
 		       (unsigned long long) from_cblock(cache->cache_size),
-		       (unsigned) atomic_read(&cache->stats.read_hit),
-		       (unsigned) atomic_read(&cache->stats.read_miss),
-		       (unsigned) atomic_read(&cache->stats.write_hit),
-		       (unsigned) atomic_read(&cache->stats.write_miss),
-		       (unsigned) atomic_read(&cache->stats.demotion),
-		       (unsigned) atomic_read(&cache->stats.promotion),
+		       (unsigned int) atomic_read(&cache->stats.read_hit),
+		       (unsigned int) atomic_read(&cache->stats.read_miss),
+		       (unsigned int) atomic_read(&cache->stats.write_hit),
+		       (unsigned int) atomic_read(&cache->stats.write_miss),
+		       (unsigned int) atomic_read(&cache->stats.demotion),
+		       (unsigned int) atomic_read(&cache->stats.promotion),
 		       (unsigned long) atomic_read(&cache->nr_dirty));
 
 		emit_flags(cache, result, maxlen, &sz);
@@ -3257,11 +3257,11 @@ static int request_invalidation(struct cache *cache, struct cblock_range *range)
 	return r;
 }
-static int process_invalidate_cblocks_message(struct cache *cache, unsigned count, +static int process_invalidate_cblocks_message(struct cache *cache, unsigned int count, const char **cblock_ranges) { int r = 0; - unsigned i; + unsigned int i; struct cblock_range range;
if (!passthrough_mode(cache)) { @@ -3298,8 +3298,8 @@ static int process_invalidate_cblocks_message(struct cache *cache, unsigned coun * * The key migration_threshold is supported by the cache target core. */ -static int cache_message(struct dm_target *ti, unsigned argc, char **argv, - char *result, unsigned maxlen) +static int cache_message(struct dm_target *ti, unsigned int argc, char **argv, + char *result, unsigned int maxlen) { struct cache *cache = ti->private;
diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h index 6c6bd24774f25..28c641352de9b 100644 --- a/drivers/md/dm-core.h +++ b/drivers/md/dm-core.h @@ -119,7 +119,7 @@ struct mapped_device { struct dm_stats stats;
/* the number of internal suspends */ - unsigned internal_suspend_count; + unsigned int internal_suspend_count;
int swap_bios; struct semaphore swap_bios_semaphore; @@ -326,9 +326,9 @@ static inline struct completion *dm_get_completion_from_kobject(struct kobject * return &container_of(kobj, struct dm_kobject_holder, kobj)->completion; }
-unsigned __dm_get_module_param(unsigned *module_param, unsigned def, unsigned max); +unsigned int __dm_get_module_param(unsigned int *module_param, unsigned int def, unsigned int max);
-static inline bool dm_message_test_buffer_overflow(char *result, unsigned maxlen) +static inline bool dm_message_test_buffer_overflow(char *result, unsigned int maxlen) { return !maxlen || strlen(result) + 1 >= maxlen; } diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index dc2d0d61ade93..ee269b1d09fac 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -173,14 +173,14 @@ struct crypt_config { } iv_gen_private; u64 iv_offset; unsigned int iv_size; - unsigned short int sector_size; + unsigned short sector_size; unsigned char sector_shift;
union { struct crypto_skcipher **tfms; struct crypto_aead **tfms_aead; } cipher_tfm; - unsigned tfms_count; + unsigned int tfms_count; unsigned long cipher_flags;
/* @@ -214,7 +214,7 @@ struct crypt_config { * pool for per bio private data, crypto requests, * encryption requeusts/buffer pages and integrity tags */ - unsigned tag_pool_max_sectors; + unsigned int tag_pool_max_sectors; mempool_t tag_pool; mempool_t req_pool; mempool_t page_pool; @@ -231,7 +231,7 @@ struct crypt_config { #define POOL_ENTRY_SIZE 512
static DEFINE_SPINLOCK(dm_crypt_clients_lock); -static unsigned dm_crypt_clients_n = 0; +static unsigned int dm_crypt_clients_n = 0; static volatile unsigned long dm_crypt_pages_per_client; #define DM_CRYPT_MEMORY_PERCENT 2 #define DM_CRYPT_MIN_PAGES_PER_CLIENT (BIO_MAX_VECS * 16) @@ -356,7 +356,7 @@ static int crypt_iv_essiv_gen(struct crypt_config *cc, u8 *iv, static int crypt_iv_benbi_ctr(struct crypt_config *cc, struct dm_target *ti, const char *opts) { - unsigned bs; + unsigned int bs; int log;
if (crypt_integrity_aead(cc)) @@ -1466,7 +1466,7 @@ static void kcryptd_async_done(struct crypto_async_request *async_req, static int crypt_alloc_req_skcipher(struct crypt_config *cc, struct convert_context *ctx) { - unsigned key_index = ctx->cc_sector & (cc->tfms_count - 1); + unsigned int key_index = ctx->cc_sector & (cc->tfms_count - 1);
if (!ctx->r.req) { ctx->r.req = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO); @@ -1660,13 +1660,13 @@ static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone); * non-blocking allocations without a mutex first but on failure we fallback * to blocking allocations with a mutex. */ -static struct bio *crypt_alloc_buffer(struct dm_crypt_io *io, unsigned size) +static struct bio *crypt_alloc_buffer(struct dm_crypt_io *io, unsigned int size) { struct crypt_config *cc = io->cc; struct bio *clone; unsigned int nr_iovecs = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; gfp_t gfp_mask = GFP_NOWAIT | __GFP_HIGHMEM; - unsigned i, len, remaining_size; + unsigned int i, len, remaining_size; struct page *page;
retry: @@ -1806,7 +1806,7 @@ static void crypt_endio(struct bio *clone) { struct dm_crypt_io *io = clone->bi_private; struct crypt_config *cc = io->cc; - unsigned rw = bio_data_dir(clone); + unsigned int rw = bio_data_dir(clone); blk_status_t error;
/* @@ -2261,7 +2261,7 @@ static void crypt_free_tfms_aead(struct crypt_config *cc)
static void crypt_free_tfms_skcipher(struct crypt_config *cc) { - unsigned i; + unsigned int i;
if (!cc->cipher_tfm.tfms) return; @@ -2286,7 +2286,7 @@ static void crypt_free_tfms(struct crypt_config *cc)
static int crypt_alloc_tfms_skcipher(struct crypt_config *cc, char *ciphermode) { - unsigned i; + unsigned int i; int err;
cc->cipher_tfm.tfms = kcalloc(cc->tfms_count, @@ -2344,12 +2344,12 @@ static int crypt_alloc_tfms(struct crypt_config *cc, char *ciphermode) return crypt_alloc_tfms_skcipher(cc, ciphermode); }
-static unsigned crypt_subkey_size(struct crypt_config *cc) +static unsigned int crypt_subkey_size(struct crypt_config *cc) { return (cc->key_size - cc->key_extra_size) >> ilog2(cc->tfms_count); }
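crypt_subkey_size() splits the key material (minus any IV seed bytes) evenly across the tfms_count cipher instances; tfms_count is a power of two, hence the shift by its log2. A standalone sketch with a local ilog2 stand-in:

#include <stdio.h>

/* stand-in for the kernel's ilog2() on a power-of-two value */
static unsigned int ilog2_u(unsigned int v)
{
        unsigned int r = 0;

        while (v >>= 1)
                r++;
        return r;
}

int main(void)
{
        /* one subkey per tfm, as in crypt_subkey_size() above */
        unsigned int key_size = 64, key_extra_size = 0, tfms_count = 2;
        unsigned int subkey = (key_size - key_extra_size) >> ilog2_u(tfms_count);

        printf("subkey size: %u bytes per tfm\n", subkey); /* 32 */
        return 0;
}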
-static unsigned crypt_authenckey_size(struct crypt_config *cc) +static unsigned int crypt_authenckey_size(struct crypt_config *cc) { return crypt_subkey_size(cc) + RTA_SPACE(sizeof(struct crypto_authenc_key_param)); } @@ -2360,7 +2360,7 @@ static unsigned crypt_authenckey_size(struct crypt_config *cc) * This funcion converts cc->key to this special format. */ static void crypt_copy_authenckey(char *p, const void *key, - unsigned enckeylen, unsigned authkeylen) + unsigned int enckeylen, unsigned int authkeylen) { struct crypto_authenc_key_param *param; struct rtattr *rta; @@ -2378,7 +2378,7 @@ static void crypt_copy_authenckey(char *p, const void *key,
static int crypt_setkey(struct crypt_config *cc) { - unsigned subkey_size; + unsigned int subkey_size; int err = 0, i, r;
/* Ignore extra keys (which are used for IV etc) */ @@ -3417,7 +3417,7 @@ static int crypt_map(struct dm_target *ti, struct bio *bio) crypt_io_init(io, cc, bio, dm_target_offset(ti, bio->bi_iter.bi_sector));
if (cc->on_disk_tag_size) { - unsigned tag_len = cc->on_disk_tag_size * (bio_sectors(bio) >> cc->sector_shift); + unsigned int tag_len = cc->on_disk_tag_size * (bio_sectors(bio) >> cc->sector_shift);
if (unlikely(tag_len > KMALLOC_MAX_SIZE) || unlikely(!(io->integrity_metadata = kmalloc(tag_len, @@ -3445,14 +3445,14 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
 static char hex2asc(unsigned char c)
 {
-	return c + '0' + ((unsigned)(9 - c) >> 4 & 0x27);
+	return c + '0' + ((unsigned int)(9 - c) >> 4 & 0x27);
 }
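The hex2asc() one-liner being retyped here is branchless: for nibbles 0..9 the (9 - c) >> 4 term is zero, while for 10..15 the subtraction wraps around as unsigned and the & 0x27 mask leaves 39, which is exactly 'a' - 10 - '0'. A compilable demonstration:

#include <stdio.h>

/* branchless nibble-to-hex: '0'..'9' then 'a'..'f' */
static char hex2asc(unsigned char c)
{
        return c + '0' + ((unsigned int)(9 - c) >> 4 & 0x27);
}

int main(void)
{
        for (unsigned char c = 0; c < 16; c++)
                putchar(hex2asc(c));
        putchar('\n');  /* prints 0123456789abcdef */
        return 0;
}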
 static void crypt_status(struct dm_target *ti, status_type_t type,
-			 unsigned status_flags, char *result, unsigned maxlen)
+			 unsigned int status_flags, char *result, unsigned int maxlen)
 {
 	struct crypt_config *cc = ti->private;
-	unsigned i, sz = 0;
+	unsigned int i, sz = 0;
 	int num_feature_args = 0;
 
 	switch (type) {
@@ -3568,8 +3568,8 @@ static void crypt_resume(struct dm_target *ti)
  *	key set <key>
  *	key wipe
  */
-static int crypt_message(struct dm_target *ti, unsigned argc, char **argv,
-			 char *result, unsigned maxlen)
+static int crypt_message(struct dm_target *ti, unsigned int argc, char **argv,
+			 char *result, unsigned int maxlen)
 {
 	struct crypt_config *cc = ti->private;
 	int key_size, ret = -EINVAL;
@@ -3630,10 +3630,10 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
 		limits->max_segment_size = PAGE_SIZE;
 
 	limits->logical_block_size =
-		max_t(unsigned, limits->logical_block_size, cc->sector_size);
+		max_t(unsigned int, limits->logical_block_size, cc->sector_size);
 	limits->physical_block_size =
-		max_t(unsigned, limits->physical_block_size, cc->sector_size);
-	limits->io_min = max_t(unsigned, limits->io_min, cc->sector_size);
+		max_t(unsigned int, limits->physical_block_size, cc->sector_size);
+	limits->io_min = max_t(unsigned int, limits->io_min, cc->sector_size);
 	limits->dma_alignment = limits->logical_block_size - 1;
 }
diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c index 869afef5654ae..02b8f4e818276 100644 --- a/drivers/md/dm-delay.c +++ b/drivers/md/dm-delay.c @@ -20,8 +20,8 @@ struct delay_class { struct dm_dev *dev; sector_t start; - unsigned delay; - unsigned ops; + unsigned int delay; + unsigned int ops; };
struct delay_c { @@ -305,7 +305,7 @@ static int delay_map(struct dm_target *ti, struct bio *bio) DMEMIT("%s %llu %u", (c)->dev->name, (unsigned long long)(c)->start, (c)->delay)
static void delay_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { struct delay_c *dc = ti->private; int sz = 0; diff --git a/drivers/md/dm-ebs-target.c b/drivers/md/dm-ebs-target.c index 512cc6cea095a..7606c6695a0e2 100644 --- a/drivers/md/dm-ebs-target.c +++ b/drivers/md/dm-ebs-target.c @@ -390,7 +390,7 @@ static int ebs_map(struct dm_target *ti, struct bio *bio) }
static void ebs_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { struct ebs_c *ec = ti->private;
diff --git a/drivers/md/dm-era-target.c b/drivers/md/dm-era-target.c index e92c1afc3677f..a96290103cca8 100644 --- a/drivers/md/dm-era-target.c +++ b/drivers/md/dm-era-target.c @@ -51,7 +51,7 @@ static void writeset_free(struct writeset *ws) }
static int setup_on_disk_bitset(struct dm_disk_bitset *info, - unsigned nr_bits, dm_block_t *root) + unsigned int nr_bits, dm_block_t *root) { int r;
@@ -62,7 +62,7 @@ static int setup_on_disk_bitset(struct dm_disk_bitset *info, return dm_bitset_resize(info, *root, 0, nr_bits, false, root); }
-static size_t bitset_size(unsigned nr_bits) +static size_t bitset_size(unsigned int nr_bits) { return sizeof(unsigned long) * dm_div_up(nr_bits, BITS_PER_LONG); } @@ -323,10 +323,10 @@ static int superblock_lock(struct era_metadata *md, static int superblock_all_zeroes(struct dm_block_manager *bm, bool *result) { int r; - unsigned i; + unsigned int i; struct dm_block *b; __le64 *data_le, zero = cpu_to_le64(0); - unsigned sb_block_size = dm_bm_block_size(bm) / sizeof(__le64); + unsigned int sb_block_size = dm_bm_block_size(bm) / sizeof(__le64);
/* * We can't use a validator here - it may be all zeroes. @@ -363,12 +363,12 @@ static void ws_unpack(const struct writeset_disk *disk, struct writeset_metadata core->root = le64_to_cpu(disk->root); }
-static void ws_inc(void *context, const void *value, unsigned count) +static void ws_inc(void *context, const void *value, unsigned int count) { struct era_metadata *md = context; struct writeset_disk ws_d; dm_block_t b; - unsigned i; + unsigned int i;
for (i = 0; i < count; i++) { memcpy(&ws_d, value + (i * sizeof(ws_d)), sizeof(ws_d)); @@ -377,12 +377,12 @@ static void ws_inc(void *context, const void *value, unsigned count) } }
-static void ws_dec(void *context, const void *value, unsigned count) +static void ws_dec(void *context, const void *value, unsigned int count) { struct era_metadata *md = context; struct writeset_disk ws_d; dm_block_t b; - unsigned i; + unsigned int i;
for (i = 0; i < count; i++) { memcpy(&ws_d, value + (i * sizeof(ws_d)), sizeof(ws_d)); @@ -667,7 +667,7 @@ static void swap_writeset(struct era_metadata *md, struct writeset *new_writeset *--------------------------------------------------------------*/ struct digest { uint32_t era; - unsigned nr_bits, current_bit; + unsigned int nr_bits, current_bit; struct writeset_metadata writeset; __le32 value; struct dm_disk_bitset info; @@ -702,7 +702,7 @@ static int metadata_digest_transcribe_writeset(struct era_metadata *md, { int r; bool marked; - unsigned b, e = min(d->current_bit + INSERTS_PER_STEP, d->nr_bits); + unsigned int b, e = min(d->current_bit + INSERTS_PER_STEP, d->nr_bits);
for (b = d->current_bit; b < e; b++) { r = writeset_marked_on_disk(&d->info, &d->writeset, b, &marked); @@ -1439,7 +1439,7 @@ static bool valid_block_size(dm_block_t block_size) /* * <metadata dev> <data dev> <data block size (sectors)> */ -static int era_ctr(struct dm_target *ti, unsigned argc, char **argv) +static int era_ctr(struct dm_target *ti, unsigned int argc, char **argv) { int r; char dummy; @@ -1618,7 +1618,7 @@ static int era_preresume(struct dm_target *ti) * <current era> <held metadata root | '-'> */ static void era_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { int r; struct era *era = ti->private; @@ -1633,10 +1633,10 @@ static void era_status(struct dm_target *ti, status_type_t type, goto err;
DMEMIT("%u %llu/%llu %u", - (unsigned) (DM_ERA_METADATA_BLOCK_SIZE >> SECTOR_SHIFT), + (unsigned int) (DM_ERA_METADATA_BLOCK_SIZE >> SECTOR_SHIFT), (unsigned long long) stats.used, (unsigned long long) stats.total, - (unsigned) stats.era); + (unsigned int) stats.era);
if (stats.snap != SUPERBLOCK_LOCATION) DMEMIT(" %llu", stats.snap); @@ -1662,8 +1662,8 @@ static void era_status(struct dm_target *ti, status_type_t type, DMEMIT("Error"); }
-static int era_message(struct dm_target *ti, unsigned argc, char **argv, - char *result, unsigned maxlen) +static int era_message(struct dm_target *ti, unsigned int argc, char **argv, + char *result, unsigned int maxlen) { struct era *era = ti->private;
diff --git a/drivers/md/dm-exception-store.c b/drivers/md/dm-exception-store.c index 3997f34cfebc6..cc3987c97eb94 100644 --- a/drivers/md/dm-exception-store.c +++ b/drivers/md/dm-exception-store.c @@ -142,7 +142,7 @@ EXPORT_SYMBOL(dm_exception_store_type_unregister); static int set_chunk_size(struct dm_exception_store *store, const char *chunk_size_arg, char **error) { - unsigned chunk_size; + unsigned int chunk_size;
if (kstrtouint(chunk_size_arg, 10, &chunk_size)) { *error = "Invalid chunk size"; @@ -158,7 +158,7 @@ static int set_chunk_size(struct dm_exception_store *store, }
int dm_exception_store_set_chunk_size(struct dm_exception_store *store, - unsigned chunk_size, + unsigned int chunk_size, char **error) { /* Check chunk_size is a power of 2 */ @@ -190,7 +190,7 @@ int dm_exception_store_set_chunk_size(struct dm_exception_store *store,
int dm_exception_store_create(struct dm_target *ti, int argc, char **argv, struct dm_snapshot *snap, - unsigned *args_used, + unsigned int *args_used, struct dm_exception_store **store) { int r = 0; diff --git a/drivers/md/dm-exception-store.h b/drivers/md/dm-exception-store.h index b5f20eba36415..862df68a7db04 100644 --- a/drivers/md/dm-exception-store.h +++ b/drivers/md/dm-exception-store.h @@ -96,9 +96,9 @@ struct dm_exception_store_type { */ void (*drop_snapshot) (struct dm_exception_store *store);
-	unsigned (*status) (struct dm_exception_store *store,
-			    status_type_t status, char *result,
-			    unsigned maxlen);
+	unsigned int (*status) (struct dm_exception_store *store,
+				status_type_t status, char *result,
+				unsigned int maxlen);
 
 	/*
 	 * Return how full the snapshot is.
@@ -118,9 +118,9 @@ struct dm_exception_store {
 	struct dm_snapshot *snap;
 
 	/* Size of data blocks saved - must be a power of 2 */
-	unsigned chunk_size;
-	unsigned chunk_mask;
-	unsigned chunk_shift;
+	unsigned int chunk_size;
+	unsigned int chunk_mask;
+	unsigned int chunk_shift;
void *context;
@@ -144,7 +144,7 @@ static inline chunk_t dm_chunk_number(chunk_t chunk) return chunk & (chunk_t)((1ULL << DM_CHUNK_NUMBER_BITS) - 1ULL); }
-static inline unsigned dm_consecutive_chunk_count(struct dm_exception *e) +static inline unsigned int dm_consecutive_chunk_count(struct dm_exception *e) { return e->new_chunk >> DM_CHUNK_NUMBER_BITS; } @@ -181,12 +181,12 @@ int dm_exception_store_type_register(struct dm_exception_store_type *type); int dm_exception_store_type_unregister(struct dm_exception_store_type *type);
int dm_exception_store_set_chunk_size(struct dm_exception_store *store, - unsigned chunk_size, + unsigned int chunk_size, char **error);
int dm_exception_store_create(struct dm_target *ti, int argc, char **argv, struct dm_snapshot *snap, - unsigned *args_used, + unsigned int *args_used, struct dm_exception_store **store); void dm_exception_store_destroy(struct dm_exception_store *store);
diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
index 335684a1aeaa5..7efbdb42cf3b4 100644
--- a/drivers/md/dm-flakey.c
+++ b/drivers/md/dm-flakey.c
@@ -26,12 +26,12 @@ struct flakey_c {
 	struct dm_dev *dev;
 	unsigned long start_time;
 	sector_t start;
-	unsigned up_interval;
-	unsigned down_interval;
+	unsigned int up_interval;
+	unsigned int down_interval;
 	unsigned long flags;
-	unsigned corrupt_bio_byte;
-	unsigned corrupt_bio_rw;
-	unsigned corrupt_bio_value;
+	unsigned int corrupt_bio_byte;
+	unsigned int corrupt_bio_rw;
+	unsigned int corrupt_bio_value;
 	blk_opf_t corrupt_bio_flags;
 };
@@ -48,7 +48,7 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc, struct dm_target *ti) { int r; - unsigned argc; + unsigned int argc; const char *arg_name;
static const struct dm_arg _args[] = { @@ -148,7 +148,7 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc, BUILD_BUG_ON(sizeof(fc->corrupt_bio_flags) != sizeof(unsigned int)); r = dm_read_arg(_args + 3, as, - (__force unsigned *)&fc->corrupt_bio_flags, + (__force unsigned int *)&fc->corrupt_bio_flags, &ti->error); if (r) return r; @@ -324,7 +324,7 @@ static void corrupt_bio_data(struct bio *bio, struct flakey_c *fc) static int flakey_map(struct dm_target *ti, struct bio *bio) { struct flakey_c *fc = ti->private; - unsigned elapsed; + unsigned int elapsed; struct per_bio_data *pb = dm_per_bio_data(bio, sizeof(struct per_bio_data)); pb->bio_submitted = false;
@@ -417,11 +417,11 @@ static int flakey_end_io(struct dm_target *ti, struct bio *bio, }
 static void flakey_status(struct dm_target *ti, status_type_t type,
-			  unsigned status_flags, char *result, unsigned maxlen)
+			  unsigned int status_flags, char *result, unsigned int maxlen)
 {
-	unsigned sz = 0;
+	unsigned int sz = 0;
 	struct flakey_c *fc = ti->private;
-	unsigned drop_writes, error_writes;
+	unsigned int drop_writes, error_writes;
switch (type) { case STATUSTYPE_INFO: diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c index c62c21aadf329..53f9f765df9fd 100644 --- a/drivers/md/dm-integrity.c +++ b/drivers/md/dm-integrity.c @@ -157,13 +157,13 @@ struct alg_spec { char *alg_string; char *key_string; __u8 *key; - unsigned key_size; + unsigned int key_size; };
struct dm_integrity_c { struct dm_dev *dev; struct dm_dev *meta_dev; - unsigned tag_size; + unsigned int tag_size; __s8 log2_tag_size; sector_t start; mempool_t journal_io_mempool; @@ -171,8 +171,8 @@ struct dm_integrity_c { struct dm_bufio_client *bufio; struct workqueue_struct *metadata_wq; struct superblock *sb; - unsigned journal_pages; - unsigned n_bitmap_blocks; + unsigned int journal_pages; + unsigned int n_bitmap_blocks;
struct page_list *journal; struct page_list *journal_io; @@ -180,7 +180,7 @@ struct dm_integrity_c { struct page_list *recalc_bitmap; struct page_list *may_write_bitmap; struct bitmap_block_status *bbs; - unsigned bitmap_flush_interval; + unsigned int bitmap_flush_interval; int synchronous_mode; struct bio_list synchronous_bios; struct delayed_work bitmap_flush_work; @@ -201,12 +201,12 @@ struct dm_integrity_c { unsigned char journal_entries_per_sector; unsigned char journal_section_entries; unsigned short journal_section_sectors; - unsigned journal_sections; - unsigned journal_entries; + unsigned int journal_sections; + unsigned int journal_entries; sector_t data_device_sectors; sector_t meta_device_sectors; - unsigned initial_sectors; - unsigned metadata_run; + unsigned int initial_sectors; + unsigned int metadata_run; __s8 log2_metadata_run; __u8 log2_buffer_sectors; __u8 sectors_per_block; @@ -230,17 +230,17 @@ struct dm_integrity_c { unsigned char commit_seq; commit_id_t commit_ids[N_COMMIT_IDS];
-	unsigned committed_section;
-	unsigned n_committed_sections;
+	unsigned int committed_section;
+	unsigned int n_committed_sections;
 
-	unsigned uncommitted_section;
-	unsigned n_uncommitted_sections;
+	unsigned int uncommitted_section;
+	unsigned int n_uncommitted_sections;
 
-	unsigned free_section;
+	unsigned int free_section;
 	unsigned char free_section_entry;
-	unsigned free_sectors;
+	unsigned int free_sectors;
 
-	unsigned free_sectors_threshold;
+	unsigned int free_sectors_threshold;
struct workqueue_struct *commit_wq; struct work_struct commit_work; @@ -257,7 +257,7 @@ struct dm_integrity_c {
unsigned long autocommit_jiffies; struct timer_list autocommit_timer; - unsigned autocommit_msec; + unsigned int autocommit_msec;
wait_queue_head_t copy_to_journal_wait;
@@ -305,7 +305,7 @@ struct dm_integrity_io { struct dm_integrity_range range;
sector_t metadata_block; - unsigned metadata_offset; + unsigned int metadata_offset;
atomic_t in_flight; blk_status_t bi_status; @@ -329,7 +329,7 @@ struct journal_io { struct bitmap_block_status { struct work_struct work; struct dm_integrity_c *ic; - unsigned idx; + unsigned int idx; unsigned long *bitmap; struct bio_list bio_queue; spinlock_t bio_queue_lock; @@ -410,8 +410,8 @@ static bool dm_integrity_disable_recalculate(struct dm_integrity_c *ic) return false; }
-static commit_id_t dm_integrity_commit_id(struct dm_integrity_c *ic, unsigned i,
-					  unsigned j, unsigned char seq)
+static commit_id_t dm_integrity_commit_id(struct dm_integrity_c *ic, unsigned int i,
+					  unsigned int j, unsigned char seq)
 {
 	/*
 	 * Xor the number with section and sector, so that if a piece of
@@ -426,7 +426,7 @@ static void get_area_and_offset(struct dm_integrity_c *ic, sector_t data_sector,
 	if (!ic->meta_dev) {
 		__u8 log2_interleave_sectors = ic->sb->log2_interleave_sectors;
 		*area = data_sector >> log2_interleave_sectors;
-		*offset = (unsigned)data_sector & ((1U << log2_interleave_sectors) - 1);
+		*offset = (unsigned int)data_sector & ((1U << log2_interleave_sectors) - 1);
 	} else {
 		*area = 0;
 		*offset = data_sector;
@@ -435,15 +435,15 @@ static void get_area_and_offset(struct dm_integrity_c *ic, sector_t data_sector,
 
 #define sector_to_block(ic, n)						\
 do {									\
-	BUG_ON((n) & (unsigned)((ic)->sectors_per_block - 1));		\
+	BUG_ON((n) & (unsigned int)((ic)->sectors_per_block - 1));	\
 	(n) >>= (ic)->sb->log2_sectors_per_block;			\
 } while (0)
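sector_to_block() is a multi-statement macro, hence the do { } while (0) wrapper that keeps it safe inside unbraced if/else bodies, and it BUG()s if the count is not block-aligned (sectors_per_block is a power of two, so the bitwise AND test works). A userspace analogue with assert() standing in for BUG_ON():

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* same shape as the macro above: check alignment, convert in place */
#define SECTOR_TO_BLOCK(n, sectors_per_block, log2_spb)         \
do {                                                            \
        assert(((n) & ((sectors_per_block) - 1)) == 0);         \
        (n) >>= (log2_spb);                                     \
} while (0)

int main(void)
{
        uint64_t n = 64;        /* sectors */

        SECTOR_TO_BLOCK(n, 8, 3);       /* 8 sectors per block */
        printf("%llu blocks\n", (unsigned long long)n); /* 8 */
        return 0;
}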
static __u64 get_metadata_sector_and_offset(struct dm_integrity_c *ic, sector_t area, - sector_t offset, unsigned *metadata_offset) + sector_t offset, unsigned int *metadata_offset) { __u64 ms; - unsigned mo; + unsigned int mo;
ms = area << ic->sb->log2_interleave_sectors; if (likely(ic->log2_metadata_run >= 0)) @@ -484,7 +484,7 @@ static sector_t get_data_sector(struct dm_integrity_c *ic, sector_t area, sector return result; }
-static void wraparound_section(struct dm_integrity_c *ic, unsigned *sec_ptr) +static void wraparound_section(struct dm_integrity_c *ic, unsigned int *sec_ptr) { if (unlikely(*sec_ptr >= ic->journal_sections)) *sec_ptr -= ic->journal_sections; @@ -508,7 +508,7 @@ static int sb_mac(struct dm_integrity_c *ic, bool wr) { SHASH_DESC_ON_STACK(desc, ic->journal_mac); int r; - unsigned size = crypto_shash_digestsize(ic->journal_mac); + unsigned int size = crypto_shash_digestsize(ic->journal_mac);
if (sizeof(struct superblock) + size > 1 << SECTOR_SHIFT) { dm_integrity_io_error(ic, "digest is too long", -EINVAL); @@ -704,8 +704,8 @@ static bool block_bitmap_op(struct dm_integrity_c *ic, struct page_list *bitmap,
 static void block_bitmap_copy(struct dm_integrity_c *ic, struct page_list *dst, struct page_list *src)
 {
-	unsigned n_bitmap_pages = DIV_ROUND_UP(ic->n_bitmap_blocks, PAGE_SIZE / BITMAP_BLOCK_SIZE);
-	unsigned i;
+	unsigned int n_bitmap_pages = DIV_ROUND_UP(ic->n_bitmap_blocks, PAGE_SIZE / BITMAP_BLOCK_SIZE);
+	unsigned int i;
 
 	for (i = 0; i < n_bitmap_pages; i++) {
 		unsigned long *dst_data = lowmem_page_address(dst[i].page);
@@ -716,18 +716,18 @@ static void block_bitmap_copy(struct dm_integrity_c *ic, struct page_list *dst,
 
 static struct bitmap_block_status *sector_to_bitmap_block(struct dm_integrity_c *ic, sector_t sector)
 {
-	unsigned bit = sector >> (ic->sb->log2_sectors_per_block + ic->log2_blocks_per_bitmap_bit);
-	unsigned bitmap_block = bit / (BITMAP_BLOCK_SIZE * 8);
+	unsigned int bit = sector >> (ic->sb->log2_sectors_per_block + ic->log2_blocks_per_bitmap_bit);
+	unsigned int bitmap_block = bit / (BITMAP_BLOCK_SIZE * 8);
 
 	BUG_ON(bitmap_block >= ic->n_bitmap_blocks);
 	return &ic->bbs[bitmap_block];
 }
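sector_to_bitmap_block() maps a sector first to a bit (one bit covers 2^(log2_sectors_per_block + log2_blocks_per_bitmap_bit) sectors) and then to the bitmap block holding that bit. A sketch assuming a 4096-byte BITMAP_BLOCK_SIZE, a value not shown in this hunk:

#include <stdint.h>
#include <stdio.h>

#define BITMAP_BLOCK_SIZE 4096u /* bytes per bitmap block; assumed */

int main(void)
{
        unsigned int log2_sectors_per_block = 3;
        unsigned int log2_blocks_per_bitmap_bit = 0;
        uint64_t sector = 1u << 20;

        unsigned int bit = sector >> (log2_sectors_per_block +
                                      log2_blocks_per_bitmap_bit);
        unsigned int bitmap_block = bit / (BITMAP_BLOCK_SIZE * 8);

        printf("sector %llu -> bit %u in bitmap block %u\n",
               (unsigned long long)sector, bit, bitmap_block);
        return 0;
}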
-static void access_journal_check(struct dm_integrity_c *ic, unsigned section, unsigned offset, +static void access_journal_check(struct dm_integrity_c *ic, unsigned int section, unsigned int offset, bool e, const char *function) { #if defined(CONFIG_DM_DEBUG) || defined(INTERNAL_VERIFY) - unsigned limit = e ? ic->journal_section_entries : ic->journal_section_sectors; + unsigned int limit = e ? ic->journal_section_entries : ic->journal_section_sectors;
if (unlikely(section >= ic->journal_sections) || unlikely(offset >= limit)) { @@ -738,10 +738,10 @@ static void access_journal_check(struct dm_integrity_c *ic, unsigned section, un #endif }
-static void page_list_location(struct dm_integrity_c *ic, unsigned section, unsigned offset, - unsigned *pl_index, unsigned *pl_offset) +static void page_list_location(struct dm_integrity_c *ic, unsigned int section, unsigned int offset, + unsigned int *pl_index, unsigned int *pl_offset) { - unsigned sector; + unsigned int sector;
access_journal_check(ic, section, offset, false, "page_list_location");
@@ -752,9 +752,9 @@ static void page_list_location(struct dm_integrity_c *ic, unsigned section, unsi }
static struct journal_sector *access_page_list(struct dm_integrity_c *ic, struct page_list *pl, - unsigned section, unsigned offset, unsigned *n_sectors) + unsigned int section, unsigned int offset, unsigned int *n_sectors) { - unsigned pl_index, pl_offset; + unsigned int pl_index, pl_offset; char *va;
page_list_location(ic, section, offset, &pl_index, &pl_offset); @@ -767,14 +767,14 @@ static struct journal_sector *access_page_list(struct dm_integrity_c *ic, struct return (struct journal_sector *)(va + pl_offset); }
-static struct journal_sector *access_journal(struct dm_integrity_c *ic, unsigned section, unsigned offset) +static struct journal_sector *access_journal(struct dm_integrity_c *ic, unsigned int section, unsigned int offset) { return access_page_list(ic, ic->journal, section, offset, NULL); }
-static struct journal_entry *access_journal_entry(struct dm_integrity_c *ic, unsigned section, unsigned n) +static struct journal_entry *access_journal_entry(struct dm_integrity_c *ic, unsigned int section, unsigned int n) { - unsigned rel_sector, offset; + unsigned int rel_sector, offset; struct journal_sector *js;
access_journal_check(ic, section, n, true, "access_journal_entry"); @@ -786,7 +786,7 @@ static struct journal_entry *access_journal_entry(struct dm_integrity_c *ic, uns return (struct journal_entry *)((char *)js + offset * ic->journal_entry_size); }
-static struct journal_sector *access_journal_data(struct dm_integrity_c *ic, unsigned section, unsigned n) +static struct journal_sector *access_journal_data(struct dm_integrity_c *ic, unsigned int section, unsigned int n) { n <<= ic->sb->log2_sectors_per_block;
@@ -797,11 +797,11 @@ static struct journal_sector *access_journal_data(struct dm_integrity_c *ic, uns return access_journal(ic, section, n); }
-static void section_mac(struct dm_integrity_c *ic, unsigned section, __u8 result[JOURNAL_MAC_SIZE]) +static void section_mac(struct dm_integrity_c *ic, unsigned int section, __u8 result[JOURNAL_MAC_SIZE]) { SHASH_DESC_ON_STACK(desc, ic->journal_mac); int r; - unsigned j, size; + unsigned int j, size;
desc->tfm = ic->journal_mac;
@@ -866,10 +866,10 @@ static void section_mac(struct dm_integrity_c *ic, unsigned section, __u8 result memset(result, 0, JOURNAL_MAC_SIZE); }
-static void rw_section_mac(struct dm_integrity_c *ic, unsigned section, bool wr) +static void rw_section_mac(struct dm_integrity_c *ic, unsigned int section, bool wr) { __u8 result[JOURNAL_MAC_SIZE]; - unsigned j; + unsigned int j;
if (!ic->journal_mac) return; @@ -898,12 +898,12 @@ static void complete_journal_op(void *context) complete(&comp->comp); }
-static void xor_journal(struct dm_integrity_c *ic, bool encrypt, unsigned section, - unsigned n_sections, struct journal_completion *comp) +static void xor_journal(struct dm_integrity_c *ic, bool encrypt, unsigned int section, + unsigned int n_sections, struct journal_completion *comp) { struct async_submit_ctl submit; size_t n_bytes = (size_t)(n_sections * ic->journal_section_sectors) << SECTOR_SHIFT; - unsigned pl_index, pl_offset, section_index; + unsigned int pl_index, pl_offset, section_index; struct page_list *source_pl, *target_pl;
if (likely(encrypt)) { @@ -928,7 +928,7 @@ static void xor_journal(struct dm_integrity_c *ic, bool encrypt, unsigned sectio struct page *dst_page;
while (unlikely(pl_index == section_index)) { - unsigned dummy; + unsigned int dummy; if (likely(encrypt)) rw_section_mac(ic, section, true); section++; @@ -990,8 +990,8 @@ static bool do_crypt(bool encrypt, struct skcipher_request *req, struct journal_ return false; }
-static void crypt_journal(struct dm_integrity_c *ic, bool encrypt, unsigned section, - unsigned n_sections, struct journal_completion *comp) +static void crypt_journal(struct dm_integrity_c *ic, bool encrypt, unsigned int section, + unsigned int n_sections, struct journal_completion *comp) { struct scatterlist **source_sg; struct scatterlist **target_sg; @@ -1008,7 +1008,7 @@ static void crypt_journal(struct dm_integrity_c *ic, bool encrypt, unsigned sect
do { struct skcipher_request *req; - unsigned ivsize; + unsigned int ivsize; char *iv;
if (likely(encrypt)) @@ -1034,8 +1034,8 @@ static void crypt_journal(struct dm_integrity_c *ic, bool encrypt, unsigned sect complete_journal_op(comp); }
-static void encrypt_journal(struct dm_integrity_c *ic, bool encrypt, unsigned section, - unsigned n_sections, struct journal_completion *comp) +static void encrypt_journal(struct dm_integrity_c *ic, bool encrypt, unsigned int section, + unsigned int n_sections, struct journal_completion *comp) { if (ic->journal_xor) return xor_journal(ic, encrypt, section, n_sections, comp); @@ -1052,12 +1052,12 @@ static void complete_journal_io(unsigned long error, void *context) }
static void rw_journal_sectors(struct dm_integrity_c *ic, blk_opf_t opf, - unsigned sector, unsigned n_sectors, + unsigned int sector, unsigned int n_sectors, struct journal_completion *comp) { struct dm_io_request io_req; struct dm_io_region io_loc; - unsigned pl_index, pl_offset; + unsigned int pl_index, pl_offset; int r;
if (unlikely(dm_integrity_failed(ic))) { @@ -1099,10 +1099,10 @@ static void rw_journal_sectors(struct dm_integrity_c *ic, blk_opf_t opf, }
static void rw_journal(struct dm_integrity_c *ic, blk_opf_t opf, - unsigned section, unsigned n_sections, + unsigned int section, unsigned int n_sections, struct journal_completion *comp) { - unsigned sector, n_sectors; + unsigned int sector, n_sectors;
sector = section * ic->journal_section_sectors; n_sectors = n_sections * ic->journal_section_sectors; @@ -1110,12 +1110,12 @@ static void rw_journal(struct dm_integrity_c *ic, blk_opf_t opf, rw_journal_sectors(ic, opf, sector, n_sectors, comp); }
-static void write_journal(struct dm_integrity_c *ic, unsigned commit_start, unsigned commit_sections) +static void write_journal(struct dm_integrity_c *ic, unsigned int commit_start, unsigned int commit_sections) { struct journal_completion io_comp; struct journal_completion crypt_comp_1; struct journal_completion crypt_comp_2; - unsigned i; + unsigned int i;
io_comp.ic = ic; init_completion(&io_comp.comp); @@ -1135,7 +1135,7 @@ static void write_journal(struct dm_integrity_c *ic, unsigned commit_start, unsi rw_journal(ic, REQ_OP_WRITE | REQ_FUA | REQ_SYNC, commit_start, commit_sections, &io_comp); } else { - unsigned to_end; + unsigned int to_end; io_comp.in_flight = (atomic_t)ATOMIC_INIT(2); to_end = ic->journal_sections - commit_start; if (ic->journal_io) { @@ -1172,15 +1172,15 @@ static void write_journal(struct dm_integrity_c *ic, unsigned commit_start, unsi wait_for_completion_io(&io_comp.comp); }
-static void copy_from_journal(struct dm_integrity_c *ic, unsigned section, unsigned offset,
-			      unsigned n_sectors, sector_t target, io_notify_fn fn, void *data)
+static void copy_from_journal(struct dm_integrity_c *ic, unsigned int section, unsigned int offset,
+			      unsigned int n_sectors, sector_t target, io_notify_fn fn, void *data)
 {
 	struct dm_io_request io_req;
 	struct dm_io_region io_loc;
 	int r;
-	unsigned sector, pl_index, pl_offset;
+	unsigned int sector, pl_index, pl_offset;
 
-	BUG_ON((target | n_sectors | offset) & (unsigned)(ic->sectors_per_block - 1));
+	BUG_ON((target | n_sectors | offset) & (unsigned int)(ic->sectors_per_block - 1));
 
 	if (unlikely(dm_integrity_failed(ic))) {
 		fn(-1UL, data);
@@ -1221,7 +1221,7 @@ static bool add_new_range(struct dm_integrity_c *ic, struct dm_integrity_range *
 	struct rb_node **n = &ic->in_progress.rb_node;
 	struct rb_node *parent;
 
-	BUG_ON((new_range->logical_sector | new_range->n_sectors) & (unsigned)(ic->sectors_per_block - 1));
+	BUG_ON((new_range->logical_sector | new_range->n_sectors) & (unsigned int)(ic->sectors_per_block - 1));
if (likely(check_waiting)) { struct dm_integrity_range *range; @@ -1339,10 +1339,10 @@ static void remove_journal_node(struct dm_integrity_c *ic, struct journal_node *
#define NOT_FOUND (-1U)
-static unsigned find_journal_node(struct dm_integrity_c *ic, sector_t sector, sector_t *next_sector)
+static unsigned int find_journal_node(struct dm_integrity_c *ic, sector_t sector, sector_t *next_sector)
 {
 	struct rb_node *n = ic->journal_tree_root.rb_node;
-	unsigned found = NOT_FOUND;
+	unsigned int found = NOT_FOUND;
 	*next_sector = (sector_t)-1;
 	while (n) {
 		struct journal_node *j = container_of(n, struct journal_node, node);
@@ -1360,7 +1360,7 @@ static unsigned find_journal_node(struct dm_integrity_c *ic, sector_t sector, se
 	return found;
 }
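find_journal_node() walks the journal rbtree, remembering the smallest recorded sector at or beyond the one requested even when there is no exact match. The same search restated over a sorted array, as a hedged illustration only, not the kernel's rbtree walk:

#include <stdint.h>
#include <stdio.h>

#define NOT_FOUND (-1U)

/* return the slot holding 'sector' (or NOT_FOUND), and report the
 * smallest recorded sector at or beyond the one asked for */
static unsigned int find_node(const uint64_t *sectors, unsigned int n,
                              uint64_t sector, uint64_t *next_sector)
{
        unsigned int lo = 0, hi = n, found = NOT_FOUND;

        *next_sector = (uint64_t)-1;
        while (lo < hi) {
                unsigned int mid = lo + (hi - lo) / 2;

                if (sectors[mid] == sector)
                        found = mid;
                if (sectors[mid] >= sector) {
                        *next_sector = sectors[mid];
                        hi = mid;
                } else {
                        lo = mid + 1;
                }
        }
        return found;
}

int main(void)
{
        const uint64_t sectors[] = { 8, 16, 32, 64 };
        uint64_t next;
        unsigned int pos = find_node(sectors, 4, 24, &next);

        printf("pos %d, next sector %llu\n", (int)pos,
               (unsigned long long)next);      /* pos -1, next 32 */
        return 0;
}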
-static bool test_journal_node(struct dm_integrity_c *ic, unsigned pos, sector_t sector) +static bool test_journal_node(struct dm_integrity_c *ic, unsigned int pos, sector_t sector) { struct journal_node *node, *next_node; struct rb_node *next; @@ -1385,7 +1385,7 @@ static bool find_newer_committed_node(struct dm_integrity_c *ic, struct journal_ { struct rb_node *next; struct journal_node *next_node; - unsigned next_section; + unsigned int next_section;
BUG_ON(RB_EMPTY_NODE(&node->node));
@@ -1398,7 +1398,7 @@ static bool find_newer_committed_node(struct dm_integrity_c *ic, struct journal_ if (next_node->sector != node->sector) return false;
- next_section = (unsigned)(next_node - ic->journal_tree) / ic->journal_section_entries; + next_section = (unsigned int)(next_node - ic->journal_tree) / ic->journal_section_entries; if (next_section >= ic->committed_section && next_section < ic->committed_section + ic->n_committed_sections) return true; @@ -1413,17 +1413,17 @@ static bool find_newer_committed_node(struct dm_integrity_c *ic, struct journal_ #define TAG_CMP 2
static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, sector_t *metadata_block, - unsigned *metadata_offset, unsigned total_size, int op) + unsigned int *metadata_offset, unsigned int total_size, int op) { #define MAY_BE_FILLER 1 #define MAY_BE_HASH 2 - unsigned hash_offset = 0; - unsigned may_be = MAY_BE_HASH | (ic->discard ? MAY_BE_FILLER : 0); + unsigned int hash_offset = 0; + unsigned int may_be = MAY_BE_HASH | (ic->discard ? MAY_BE_FILLER : 0);
do { unsigned char *data, *dp; struct dm_buffer *b; - unsigned to_copy; + unsigned int to_copy; int r;
r = dm_integrity_failed(ic); @@ -1453,7 +1453,7 @@ static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, se goto thorough_test; } } else { - unsigned i, ts; + unsigned int i, ts; thorough_test: ts = total_size;
@@ -1652,7 +1652,7 @@ static void integrity_sector_checksum(struct dm_integrity_c *ic, sector_t sector __le64 sector_le = cpu_to_le64(sector); SHASH_DESC_ON_STACK(req, ic->internal_hash); int r; - unsigned digest_size; + unsigned int digest_size;
req->tfm = ic->internal_hash;
@@ -1709,13 +1709,13 @@ static void integrity_metadata(struct work_struct *w) if (ic->internal_hash) { struct bvec_iter iter; struct bio_vec bv; - unsigned digest_size = crypto_shash_digestsize(ic->internal_hash); + unsigned int digest_size = crypto_shash_digestsize(ic->internal_hash); struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io)); char *checksums; - unsigned extra_space = unlikely(digest_size > ic->tag_size) ? digest_size - ic->tag_size : 0; + unsigned int extra_space = unlikely(digest_size > ic->tag_size) ? digest_size - ic->tag_size : 0; char checksums_onstack[max((size_t)HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)]; sector_t sector; - unsigned sectors_to_process; + unsigned int sectors_to_process;
if (unlikely(ic->mode == 'R')) goto skip_io; @@ -1735,13 +1735,13 @@ static void integrity_metadata(struct work_struct *w) }
if (unlikely(dio->op == REQ_OP_DISCARD)) { - unsigned bi_size = dio->bio_details.bi_iter.bi_size; - unsigned max_size = likely(checksums != checksums_onstack) ? PAGE_SIZE : HASH_MAX_DIGESTSIZE; - unsigned max_blocks = max_size / ic->tag_size; + unsigned int bi_size = dio->bio_details.bi_iter.bi_size; + unsigned int max_size = likely(checksums != checksums_onstack) ? PAGE_SIZE : HASH_MAX_DIGESTSIZE; + unsigned int max_blocks = max_size / ic->tag_size; memset(checksums, DISCARD_FILLER, max_size);
while (bi_size) { - unsigned this_step_blocks = bi_size >> (SECTOR_SHIFT + ic->sb->log2_sectors_per_block); + unsigned int this_step_blocks = bi_size >> (SECTOR_SHIFT + ic->sb->log2_sectors_per_block); this_step_blocks = min(this_step_blocks, max_blocks); r = dm_integrity_rw_tag(ic, checksums, &dio->metadata_block, &dio->metadata_offset, this_step_blocks * ic->tag_size, TAG_WRITE); @@ -1763,7 +1763,7 @@ static void integrity_metadata(struct work_struct *w) sectors_to_process = dio->range.n_sectors;
__bio_for_each_segment(bv, bio, iter, dio->bio_details.bi_iter) { - unsigned pos; + unsigned int pos; char *mem, *checksums_ptr;
again: @@ -1816,13 +1816,13 @@ static void integrity_metadata(struct work_struct *w) if (bip) { struct bio_vec biv; struct bvec_iter iter; - unsigned data_to_process = dio->range.n_sectors; + unsigned int data_to_process = dio->range.n_sectors; sector_to_block(ic, data_to_process); data_to_process *= ic->tag_size;
bip_for_each_vec(biv, bip, iter) { unsigned char *tag; - unsigned this_len; + unsigned int this_len;
BUG_ON(PageHighMem(biv.bv_page)); tag = bvec_virt(&biv); @@ -1860,7 +1860,7 @@ static int dm_integrity_map(struct dm_target *ti, struct bio *bio) if (unlikely(dio->op == REQ_OP_DISCARD)) { if (ti->max_io_len) { sector_t sec = dm_target_offset(ti, bio->bi_iter.bi_sector); - unsigned log2_max_io_len = __fls(ti->max_io_len); + unsigned int log2_max_io_len = __fls(ti->max_io_len); sector_t start_boundary = sec >> log2_max_io_len; sector_t end_boundary = (sec + bio_sectors(bio) - 1) >> log2_max_io_len; if (start_boundary < end_boundary) { @@ -1890,7 +1890,7 @@ static int dm_integrity_map(struct dm_target *ti, struct bio *bio) ic->provided_data_sectors); return DM_MAPIO_KILL; } - if (unlikely((dio->range.logical_sector | bio_sectors(bio)) & (unsigned)(ic->sectors_per_block - 1))) { + if (unlikely((dio->range.logical_sector | bio_sectors(bio)) & (unsigned int)(ic->sectors_per_block - 1))) { DMERR("Bio not aligned on %u sectors: 0x%llx, 0x%x", ic->sectors_per_block, dio->range.logical_sector, bio_sectors(bio)); @@ -1912,7 +1912,7 @@ static int dm_integrity_map(struct dm_target *ti, struct bio *bio) bip = bio_integrity(bio); if (!ic->internal_hash) { if (bip) { - unsigned wanted_tag_size = bio_sectors(bio) >> ic->sb->log2_sectors_per_block; + unsigned int wanted_tag_size = bio_sectors(bio) >> ic->sb->log2_sectors_per_block; if (ic->log2_tag_size >= 0) wanted_tag_size <<= ic->log2_tag_size; else @@ -1942,11 +1942,11 @@ static int dm_integrity_map(struct dm_target *ti, struct bio *bio) }
static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio, - unsigned journal_section, unsigned journal_entry) + unsigned int journal_section, unsigned int journal_entry) { struct dm_integrity_c *ic = dio->ic; sector_t logical_sector; - unsigned n_sectors; + unsigned int n_sectors;
logical_sector = dio->range.logical_sector; n_sectors = dio->range.n_sectors; @@ -1969,7 +1969,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio, if (unlikely(dio->op == REQ_OP_READ)) { struct journal_sector *js; char *mem_ptr; - unsigned s; + unsigned int s;
if (unlikely(journal_entry_is_inprogress(je))) { flush_dcache_page(bv.bv_page); @@ -2006,12 +2006,12 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
if (!ic->internal_hash) { struct bio_integrity_payload *bip = bio_integrity(bio); - unsigned tag_todo = ic->tag_size; + unsigned int tag_todo = ic->tag_size; char *tag_ptr = journal_entry_tag(ic, je);
if (bip) do { struct bio_vec biv = bvec_iter_bvec(bip->bip_vec, bip->bip_iter); - unsigned tag_now = min(biv.bv_len, tag_todo); + unsigned int tag_now = min(biv.bv_len, tag_todo); char *tag_addr; BUG_ON(PageHighMem(biv.bv_page)); tag_addr = bvec_virt(&biv); @@ -2030,7 +2030,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
if (likely(dio->op == REQ_OP_WRITE)) { struct journal_sector *js; - unsigned s; + unsigned int s;
js = access_journal_data(ic, journal_section, journal_entry); memcpy(js, mem + bv.bv_offset, ic->sectors_per_block << SECTOR_SHIFT); @@ -2041,7 +2041,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio, } while (++s < ic->sectors_per_block);
if (ic->internal_hash) { - unsigned digest_size = crypto_shash_digestsize(ic->internal_hash); + unsigned int digest_size = crypto_shash_digestsize(ic->internal_hash); if (unlikely(digest_size > ic->tag_size)) { char checksums_onstack[HASH_MAX_DIGESTSIZE]; integrity_sector_checksum(ic, logical_sector, (char *)js, checksums_onstack); @@ -2098,8 +2098,8 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map { struct dm_integrity_c *ic = dio->ic; struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io)); - unsigned journal_section, journal_entry; - unsigned journal_read_pos; + unsigned int journal_section, journal_entry; + unsigned int journal_read_pos; struct completion read_comp; bool discard_retried = false; bool need_sync_io = ic->internal_hash && dio->op == REQ_OP_READ; @@ -2124,8 +2124,8 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map journal_read_pos = NOT_FOUND; if (ic->mode == 'J' && likely(dio->op != REQ_OP_DISCARD)) { if (dio->op == REQ_OP_WRITE) { - unsigned next_entry, i, pos; - unsigned ws, we, range_sectors; + unsigned int next_entry, i, pos; + unsigned int ws, we, range_sectors;
dio->range.n_sectors = min(dio->range.n_sectors, (sector_t)ic->free_sectors << ic->sb->log2_sectors_per_block); @@ -2178,8 +2178,8 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map if (unlikely(dio->range.n_sectors > next_sector - dio->range.logical_sector)) dio->range.n_sectors = next_sector - dio->range.logical_sector; } else { - unsigned i; - unsigned jp = journal_read_pos + 1; + unsigned int i; + unsigned int jp = journal_read_pos + 1; for (i = ic->sectors_per_block; i < dio->range.n_sectors; i += ic->sectors_per_block, jp++) { if (!test_journal_node(ic, jp, dio->range.logical_sector + i)) break; @@ -2211,7 +2211,7 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map */ if (journal_read_pos != NOT_FOUND) { sector_t next_sector; - unsigned new_pos = find_journal_node(ic, dio->range.logical_sector, &next_sector); + unsigned int new_pos = find_journal_node(ic, dio->range.logical_sector, &next_sector); if (unlikely(new_pos != journal_read_pos)) { remove_range_unlocked(ic, &dio->range); goto retry; @@ -2220,7 +2220,7 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map } if (ic->mode == 'J' && likely(dio->op == REQ_OP_DISCARD) && !discard_retried) { sector_t next_sector; - unsigned new_pos = find_journal_node(ic, dio->range.logical_sector, &next_sector); + unsigned int new_pos = find_journal_node(ic, dio->range.logical_sector, &next_sector); if (unlikely(new_pos != NOT_FOUND) || unlikely(next_sector < dio->range.logical_sector - dio->range.n_sectors)) { remove_range_unlocked(ic, &dio->range); @@ -2347,8 +2347,8 @@ static void pad_uncommitted(struct dm_integrity_c *ic) static void integrity_commit(struct work_struct *w) { struct dm_integrity_c *ic = container_of(w, struct dm_integrity_c, commit_work); - unsigned commit_start, commit_sections; - unsigned i, j, n; + unsigned int commit_start, commit_sections; + unsigned int i, j, n; struct bio *flushes;
del_timer(&ic->autocommit_timer); @@ -2426,17 +2426,17 @@ static void complete_copy_from_journal(unsigned long error, void *context) static void restore_last_bytes(struct dm_integrity_c *ic, struct journal_sector *js, struct journal_entry *je) { - unsigned s = 0; + unsigned int s = 0; do { js->commit_id = je->last_bytes[s]; js++; } while (++s < ic->sectors_per_block); }
-static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start, - unsigned write_sections, bool from_replay) +static void do_journal_write(struct dm_integrity_c *ic, unsigned int write_start, + unsigned int write_sections, bool from_replay) { - unsigned i, j, n; + unsigned int i, j, n; struct journal_completion comp; struct blk_plug plug;
@@ -2455,9 +2455,9 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start, for (j = 0; j < ic->journal_section_entries; j++) { struct journal_entry *je = access_journal_entry(ic, i, j); sector_t sec, area, offset; - unsigned k, l, next_loop; + unsigned int k, l, next_loop; sector_t metadata_block; - unsigned metadata_offset; + unsigned int metadata_offset; struct journal_io *io;
if (journal_entry_is_unused(je)) @@ -2465,7 +2465,7 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start, BUG_ON(unlikely(journal_entry_is_inprogress(je)) && !from_replay); sec = journal_entry_get_sector(je); if (unlikely(from_replay)) { - if (unlikely(sec & (unsigned)(ic->sectors_per_block - 1))) { + if (unlikely(sec & (unsigned int)(ic->sectors_per_block - 1))) { dm_integrity_io_error(ic, "invalid sector in journal", -EIO); sec &= ~(sector_t)(ic->sectors_per_block - 1); } @@ -2583,9 +2583,9 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start, static void integrity_writer(struct work_struct *w) { struct dm_integrity_c *ic = container_of(w, struct dm_integrity_c, writer_work); - unsigned write_start, write_sections; + unsigned int write_start, write_sections;
- unsigned prev_free_sectors; + unsigned int prev_free_sectors;
spin_lock_irq(&ic->endio_wait.lock); write_start = ic->committed_section; @@ -2632,12 +2632,12 @@ static void integrity_recalc(struct work_struct *w) struct dm_io_region io_loc; sector_t area, offset; sector_t metadata_block; - unsigned metadata_offset; + unsigned int metadata_offset; sector_t logical_sector, n_sectors; __u8 *t; - unsigned i; + unsigned int i; int r; - unsigned super_counter = 0; + unsigned int super_counter = 0;
DEBUG_print("start recalculation... (position %llx)\n", le64_to_cpu(ic->sb->recalc_sector));
@@ -2661,7 +2661,7 @@ static void integrity_recalc(struct work_struct *w) get_area_and_offset(ic, range.logical_sector, &area, &offset); range.n_sectors = min((sector_t)RECALC_SECTORS, ic->provided_data_sectors - range.logical_sector); if (!ic->meta_dev) - range.n_sectors = min(range.n_sectors, ((sector_t)1U << ic->sb->log2_interleave_sectors) - (unsigned)offset); + range.n_sectors = min(range.n_sectors, ((sector_t)1U << ic->sb->log2_interleave_sectors) - (unsigned int)offset);
add_new_range_and_wait(ic, &range); spin_unlock_irq(&ic->endio_wait.lock); @@ -2852,10 +2852,10 @@ static void bitmap_flush_work(struct work_struct *work) }
-static void init_journal(struct dm_integrity_c *ic, unsigned start_section, - unsigned n_sections, unsigned char commit_seq) +static void init_journal(struct dm_integrity_c *ic, unsigned int start_section, + unsigned int n_sections, unsigned char commit_seq) { - unsigned i, j, n; + unsigned int i, j, n;
if (!n_sections) return; @@ -2878,7 +2878,7 @@ static void init_journal(struct dm_integrity_c *ic, unsigned start_section, write_journal(ic, start_section, n_sections); }
-static int find_commit_seq(struct dm_integrity_c *ic, unsigned i, unsigned j, commit_id_t id) +static int find_commit_seq(struct dm_integrity_c *ic, unsigned int i, unsigned int j, commit_id_t id) { unsigned char k; for (k = 0; k < N_COMMIT_IDS; k++) { @@ -2891,11 +2891,11 @@ static int find_commit_seq(struct dm_integrity_c *ic, unsigned i, unsigned j, co
static void replay_journal(struct dm_integrity_c *ic) { - unsigned i, j; + unsigned int i, j; bool used_commit_ids[N_COMMIT_IDS]; - unsigned max_commit_id_sections[N_COMMIT_IDS]; - unsigned write_start, write_sections; - unsigned continue_section; + unsigned int max_commit_id_sections[N_COMMIT_IDS]; + unsigned int write_start, write_sections; + unsigned int continue_section; bool journal_empty; unsigned char unused, last_used, want_commit_seq;
@@ -3013,7 +3013,7 @@ static void replay_journal(struct dm_integrity_c *ic) ic->commit_seq = want_commit_seq; DEBUG_print("continuing from section %u, commit seq %d\n", write_start, ic->commit_seq); } else { - unsigned s; + unsigned int s; unsigned char erase_seq; clear_journal: DEBUG_print("clearing journal\n"); @@ -3245,10 +3245,10 @@ static void dm_integrity_resume(struct dm_target *ti) }
static void dm_integrity_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { struct dm_integrity_c *ic = (struct dm_integrity_c *)ti->private; - unsigned arg_count; + unsigned int arg_count; size_t sz = 0;
switch (type) { @@ -3298,7 +3298,7 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type, DMEMIT(" interleave_sectors:%u", 1U << ic->sb->log2_interleave_sectors); DMEMIT(" buffer_sectors:%u", 1U << ic->log2_buffer_sectors); if (ic->mode == 'J') { - DMEMIT(" journal_watermark:%u", (unsigned)watermark_percentage); + DMEMIT(" journal_watermark:%u", (unsigned int)watermark_percentage); DMEMIT(" commit_time:%u", ic->autocommit_msec); } if (ic->mode == 'B') { @@ -3377,7 +3377,7 @@ static void dm_integrity_io_hints(struct dm_target *ti, struct queue_limits *lim
static void calculate_journal_section_size(struct dm_integrity_c *ic) { - unsigned sector_space = JOURNAL_SECTOR_DATA; + unsigned int sector_space = JOURNAL_SECTOR_DATA;
ic->journal_sections = le32_to_cpu(ic->sb->journal_sections); ic->journal_entry_size = roundup(offsetof(struct journal_entry, last_bytes[ic->sectors_per_block]) + ic->tag_size, @@ -3454,9 +3454,10 @@ static void get_provided_data_sectors(struct dm_integrity_c *ic) } }
-static int initialize_superblock(struct dm_integrity_c *ic, unsigned journal_sectors, unsigned interleave_sectors) +static int initialize_superblock(struct dm_integrity_c *ic, + unsigned int journal_sectors, unsigned int interleave_sectors) { - unsigned journal_sections; + unsigned int journal_sections; int test_bit;
memset(ic->sb, 0, SB_SECTORS << SECTOR_SHIFT); @@ -3541,7 +3542,7 @@ static void dm_integrity_set(struct dm_target *ti, struct dm_integrity_c *ic)
static void dm_integrity_free_page_list(struct page_list *pl) { - unsigned i; + unsigned int i;
if (!pl) return; @@ -3550,10 +3551,10 @@ static void dm_integrity_free_page_list(struct page_list *pl) kvfree(pl); }
-static struct page_list *dm_integrity_alloc_page_list(unsigned n_pages) +static struct page_list *dm_integrity_alloc_page_list(unsigned int n_pages) { struct page_list *pl; - unsigned i; + unsigned int i;
pl = kvmalloc_array(n_pages + 1, sizeof(struct page_list), GFP_KERNEL | __GFP_ZERO); if (!pl) @@ -3576,7 +3577,7 @@ static struct page_list *dm_integrity_alloc_page_list(unsigned n_pages)
static void dm_integrity_free_journal_scatterlist(struct dm_integrity_c *ic, struct scatterlist **sl) { - unsigned i; + unsigned int i; for (i = 0; i < ic->journal_sections; i++) kvfree(sl[i]); kvfree(sl); @@ -3586,7 +3587,7 @@ static struct scatterlist **dm_integrity_alloc_journal_scatterlist(struct dm_int struct page_list *pl) { struct scatterlist **sl; - unsigned i; + unsigned int i;
sl = kvmalloc_array(ic->journal_sections, sizeof(struct scatterlist *), @@ -3596,10 +3597,10 @@ static struct scatterlist **dm_integrity_alloc_journal_scatterlist(struct dm_int
for (i = 0; i < ic->journal_sections; i++) { struct scatterlist *s; - unsigned start_index, start_offset; - unsigned end_index, end_offset; - unsigned n_pages; - unsigned idx; + unsigned int start_index, start_offset; + unsigned int end_index, end_offset; + unsigned int n_pages; + unsigned int idx;
page_list_location(ic, i, 0, &start_index, &start_offset); page_list_location(ic, i, ic->journal_section_sectors - 1, @@ -3617,7 +3618,7 @@ static struct scatterlist **dm_integrity_alloc_journal_scatterlist(struct dm_int sg_init_table(s, n_pages); for (idx = start_index; idx <= end_index; idx++) { char *va = lowmem_page_address(pl[idx].page); - unsigned start = 0, end = PAGE_SIZE; + unsigned int start = 0, end = PAGE_SIZE; if (idx == start_index) start = start_offset; if (idx == end_index) @@ -3704,7 +3705,7 @@ static int get_mac(struct crypto_shash **hash, struct alg_spec *a, char **error, static int create_journal(struct dm_integrity_c *ic, char **error) { int r = 0; - unsigned i; + unsigned int i; __u64 journal_pages, journal_desc_size, journal_tree_size; unsigned char *crypt_data = NULL, *crypt_iv = NULL; struct skcipher_request *req = NULL; @@ -3731,7 +3732,7 @@ static int create_journal(struct dm_integrity_c *ic, char **error) goto bad; } if (ic->journal_crypt_alg.alg_string) { - unsigned ivsize, blocksize; + unsigned int ivsize, blocksize; struct journal_completion comp;
comp.ic = ic; @@ -3820,7 +3821,7 @@ static int create_journal(struct dm_integrity_c *ic, char **error) crypto_free_skcipher(ic->journal_crypt); ic->journal_crypt = NULL; } else { - unsigned crypt_len = roundup(ivsize, blocksize); + unsigned int crypt_len = roundup(ivsize, blocksize);
req = skcipher_request_alloc(ic->journal_crypt, GFP_KERNEL); if (!req) { @@ -3908,7 +3909,7 @@ static int create_journal(struct dm_integrity_c *ic, char **error) }
for (i = 0; i < N_COMMIT_IDS; i++) { - unsigned j; + unsigned int j; retest_commit_id: for (j = 0; j < i; j++) { if (ic->commit_ids[j] == ic->commit_ids[i]) { @@ -3962,17 +3963,17 @@ static int create_journal(struct dm_integrity_c *ic, char **error) * journal_mac * recalculate */ -static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv) +static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv) { struct dm_integrity_c *ic; char dummy; int r; - unsigned extra_args; + unsigned int extra_args; struct dm_arg_set as; static const struct dm_arg _args[] = { {0, 18, "Invalid number of feature args"}, }; - unsigned journal_sectors, interleave_sectors, buffer_sectors, journal_watermark, sync_msec; + unsigned int journal_sectors, interleave_sectors, buffer_sectors, journal_watermark, sync_msec; bool should_write_sb; __u64 threshold; unsigned long long start; @@ -4051,7 +4052,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
while (extra_args--) { const char *opt_string; - unsigned val; + unsigned int val; unsigned long long llval; opt_string = dm_shift_arg(&as); if (!opt_string) { @@ -4384,7 +4385,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv) DEBUG_print(" journal_entries_per_sector %u\n", ic->journal_entries_per_sector); DEBUG_print(" journal_section_entries %u\n", ic->journal_section_entries); DEBUG_print(" journal_section_sectors %u\n", ic->journal_section_sectors); - DEBUG_print(" journal_sections %u\n", (unsigned)le32_to_cpu(ic->sb->journal_sections)); + DEBUG_print(" journal_sections %u\n", (unsigned int)le32_to_cpu(ic->sb->journal_sections)); DEBUG_print(" journal_entries %u\n", ic->journal_entries); DEBUG_print(" log2_interleave_sectors %d\n", ic->sb->log2_interleave_sectors); DEBUG_print(" data_device_sectors 0x%llx\n", bdev_nr_sectors(ic->dev->bdev)); @@ -4458,8 +4459,8 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv) }
if (ic->mode == 'B') { - unsigned i; - unsigned n_bitmap_pages = DIV_ROUND_UP(ic->n_bitmap_blocks, PAGE_SIZE / BITMAP_BLOCK_SIZE); + unsigned int i; + unsigned int n_bitmap_pages = DIV_ROUND_UP(ic->n_bitmap_blocks, PAGE_SIZE / BITMAP_BLOCK_SIZE);
ic->recalc_bitmap = dm_integrity_alloc_page_list(n_bitmap_pages); if (!ic->recalc_bitmap) { @@ -4479,7 +4480,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv) INIT_DELAYED_WORK(&ic->bitmap_flush_work, bitmap_flush_work); for (i = 0; i < ic->n_bitmap_blocks; i++) { struct bitmap_block_status *bbs = &ic->bbs[i]; - unsigned sector, pl_index, pl_offset; + unsigned int sector, pl_index, pl_offset;
INIT_WORK(&bbs->work, bitmap_block_work); bbs->ic = ic; @@ -4516,7 +4517,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv) goto bad; } if (ic->mode == 'B') { - unsigned max_io_len = ((sector_t)ic->sectors_per_block << ic->log2_blocks_per_bitmap_bit) * (BITMAP_BLOCK_SIZE * 8); + unsigned int max_io_len = ((sector_t)ic->sectors_per_block << ic->log2_blocks_per_bitmap_bit) * (BITMAP_BLOCK_SIZE * 8); if (!max_io_len) max_io_len = 1U << 31; DEBUG_print("max_io_len: old %u, new %u\n", ti->max_io_len, max_io_len); @@ -4587,7 +4588,7 @@ static void dm_integrity_dtr(struct dm_target *ti) if (ic->journal_io_scatterlist) dm_integrity_free_journal_scatterlist(ic, ic->journal_io_scatterlist); if (ic->sk_requests) { - unsigned i; + unsigned int i;
for (i = 0; i < ic->journal_sections; i++) { struct skcipher_request *req = ic->sk_requests[i]; diff --git a/drivers/md/dm-io-rewind.c b/drivers/md/dm-io-rewind.c index 0db53ccb94ba7..773c4cff8b89f 100644 --- a/drivers/md/dm-io-rewind.c +++ b/drivers/md/dm-io-rewind.c @@ -57,7 +57,7 @@ static void dm_bio_integrity_rewind(struct bio *bio, unsigned int bytes_done) { struct bio_integrity_payload *bip = bio_integrity(bio); struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk); - unsigned bytes = bio_integrity_bytes(bi, bytes_done >> 9); + unsigned int bytes = bio_integrity_bytes(bi, bytes_done >> 9);
bip->bip_iter.bi_sector -= bio_integrity_intervals(bi, bytes_done >> 9); dm_bvec_iter_rewind(bip->bip_vec, &bip->bip_iter, bytes); @@ -131,7 +131,7 @@ static inline void dm_bio_rewind_iter(const struct bio *bio, * rewinding from end of bio and restoring its original position. * Caller is also responsible for restoring bio's size. */ -static void dm_bio_rewind(struct bio *bio, unsigned bytes) +static void dm_bio_rewind(struct bio *bio, unsigned int bytes) { if (bio_integrity(bio)) dm_bio_integrity_rewind(bio, bytes); diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c index 7835645334593..e488b05e35fa3 100644 --- a/drivers/md/dm-io.c +++ b/drivers/md/dm-io.c @@ -48,7 +48,7 @@ static struct kmem_cache *_dm_io_cache; struct dm_io_client *dm_io_client_create(void) { struct dm_io_client *client; - unsigned min_ios = dm_get_reserved_bio_based_ios(); + unsigned int min_ios = dm_get_reserved_bio_based_ios(); int ret;
client = kzalloc(sizeof(*client), GFP_KERNEL); @@ -88,7 +88,7 @@ EXPORT_SYMBOL(dm_io_client_destroy); * bi_private. *---------------------------------------------------------------*/ static void store_io_and_region_in_bio(struct bio *bio, struct io *io, - unsigned region) + unsigned int region) { if (unlikely(!IS_ALIGNED((unsigned long)io, DM_IO_MAX_REGIONS))) { DMCRIT("Unaligned struct io pointer %p", io); @@ -99,7 +99,7 @@ static void store_io_and_region_in_bio(struct bio *bio, struct io *io, }
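Context for the two helpers above: dm-io packs a struct io pointer and a region number into the single bio->bi_private field. That only works because struct io is allocated DM_IO_MAX_REGIONS-aligned, leaving the low bits of the address zero; the alignment check above guards exactly that invariant (the OR/mask packing itself sits outside these hunks). A user-space sketch of the trick, with illustrative constants:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_REGIONS 8UL		/* stand-in for DM_IO_MAX_REGIONS */

struct io { int result; };

static void *pack(struct io *io, unsigned long region)
{
	/* Low bits are only free if the pointer is MAX_REGIONS-aligned. */
	assert(((unsigned long)io & (MAX_REGIONS - 1)) == 0);
	assert(region < MAX_REGIONS);
	return (void *)((unsigned long)io | region);
}

static struct io *unpack(void *priv, unsigned int *region)
{
	unsigned long val = (unsigned long)priv;

	*region = val & (MAX_REGIONS - 1);
	return (struct io *)(val & ~(MAX_REGIONS - 1));
}

int main(void)
{
	struct io *io = aligned_alloc(MAX_REGIONS, MAX_REGIONS);
	unsigned int region;
	struct io *back = unpack(pack(io, 5), &region);

	printf("io %p -> %p, region %u\n", (void *)io, (void *)back, region);
	free(io);
	return 0;
}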
static void retrieve_io_and_region_from_bio(struct bio *bio, struct io **io, - unsigned *region) + unsigned int *region) { unsigned long val = (unsigned long)bio->bi_private;
@@ -137,7 +137,7 @@ static void dec_count(struct io *io, unsigned int region, blk_status_t error) static void endio(struct bio *bio) { struct io *io; - unsigned region; + unsigned int region; blk_status_t error;
if (bio->bi_status && bio_data_dir(bio) == READ) @@ -160,11 +160,11 @@ static void endio(struct bio *bio) *---------------------------------------------------------------*/ struct dpages { void (*get_page)(struct dpages *dp, - struct page **p, unsigned long *len, unsigned *offset); + struct page **p, unsigned long *len, unsigned int *offset); void (*next_page)(struct dpages *dp);
union { - unsigned context_u; + unsigned int context_u; struct bvec_iter context_bi; }; void *context_ptr; @@ -177,9 +177,9 @@ struct dpages { * Functions for getting the pages from a list. */ static void list_get_page(struct dpages *dp, - struct page **p, unsigned long *len, unsigned *offset) + struct page **p, unsigned long *len, unsigned int *offset) { - unsigned o = dp->context_u; + unsigned int o = dp->context_u; struct page_list *pl = (struct page_list *) dp->context_ptr;
*p = pl->page; @@ -194,7 +194,7 @@ static void list_next_page(struct dpages *dp) dp->context_u = 0; }
-static void list_dp_init(struct dpages *dp, struct page_list *pl, unsigned offset) +static void list_dp_init(struct dpages *dp, struct page_list *pl, unsigned int offset) { dp->get_page = list_get_page; dp->next_page = list_next_page; @@ -206,7 +206,7 @@ static void list_dp_init(struct dpages *dp, struct page_list *pl, unsigned offse * Functions for getting the pages from a bvec. */ static void bio_get_page(struct dpages *dp, struct page **p, - unsigned long *len, unsigned *offset) + unsigned long *len, unsigned int *offset) { struct bio_vec bvec = bvec_iter_bvec((struct bio_vec *)dp->context_ptr, dp->context_bi); @@ -244,7 +244,7 @@ static void bio_dp_init(struct dpages *dp, struct bio *bio) * Functions for getting the pages from a VMA. */ static void vm_get_page(struct dpages *dp, - struct page **p, unsigned long *len, unsigned *offset) + struct page **p, unsigned long *len, unsigned int *offset) { *p = vmalloc_to_page(dp->context_ptr); *offset = dp->context_u; @@ -269,7 +269,7 @@ static void vm_dp_init(struct dpages *dp, void *data) * Functions for getting the pages from kernel memory. */ static void km_get_page(struct dpages *dp, struct page **p, unsigned long *len, - unsigned *offset) + unsigned int *offset) { *p = virt_to_page(dp->context_ptr); *offset = dp->context_u; @@ -293,15 +293,15 @@ static void km_dp_init(struct dpages *dp, void *data) /*----------------------------------------------------------------- * IO routines that accept a list of pages. *---------------------------------------------------------------*/ -static void do_region(const blk_opf_t opf, unsigned region, +static void do_region(const blk_opf_t opf, unsigned int region, struct dm_io_region *where, struct dpages *dp, struct io *io) { struct bio *bio; struct page *page; unsigned long len; - unsigned offset; - unsigned num_bvecs; + unsigned int offset; + unsigned int num_bvecs; sector_t remaining = where->count; struct request_queue *q = bdev_get_queue(where->bdev); sector_t num_sectors; @@ -508,7 +508,7 @@ static int dp_init(struct dm_io_request *io_req, struct dpages *dp, return 0; }
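All the get_page/next_page hunks above touch one abstraction: dm-io walks pages from several sources (a page_list, a bio's bvecs, vmalloc memory, plain kernel memory) behind a pair of function pointers in struct dpages. A reduced user-space sketch of that shape, with one toy backend and invented names:

#include <stdio.h>

struct dpages {
	void (*get_page)(struct dpages *dp, const char **p,
			 unsigned long *len, unsigned int *offset);
	void (*next_page)(struct dpages *dp);
	unsigned int context_u;		/* offset into current "page" */
	void *context_ptr;		/* backend cursor */
};

/* Toy backend: a flat buffer carved into fixed 4-byte "pages". */
static void buf_get_page(struct dpages *dp, const char **p,
			 unsigned long *len, unsigned int *offset)
{
	*p = dp->context_ptr;
	*offset = dp->context_u;
	*len = 4 - *offset;
}

static void buf_next_page(struct dpages *dp)
{
	dp->context_ptr = (char *)dp->context_ptr + 4;
	dp->context_u = 0;
}

int main(void)
{
	char data[] = "abcdefgh";
	struct dpages dp = {
		.get_page = buf_get_page,
		.next_page = buf_next_page,
		.context_ptr = data,
	};
	const char *p;
	unsigned long len;
	unsigned int off, i;

	for (i = 0; i < 2; i++) {
		dp.get_page(&dp, &p, &len, &off);
		printf("page %u: %.4s (len %lu, off %u)\n", i, p + off, len, off);
		dp.next_page(&dp);
	}
	return 0;
}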
-int dm_io(struct dm_io_request *io_req, unsigned num_regions, +int dm_io(struct dm_io_request *io_req, unsigned int num_regions, struct dm_io_region *where, unsigned long *sync_error_bits) { int r; diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c index e031088ff15c6..37f5ea7337cc2 100644 --- a/drivers/md/dm-ioctl.c +++ b/drivers/md/dm-ioctl.c @@ -31,7 +31,7 @@ struct dm_file { * poll will wait until the global event number is greater than * this value. */ - volatile unsigned global_event_nr; + volatile unsigned int global_event_nr; };
/*----------------------------------------------------------------- @@ -413,7 +413,7 @@ static struct mapped_device *dm_hash_rename(struct dm_ioctl *param, struct hash_cell *hc; struct dm_table *table; struct mapped_device *md; - unsigned change_uuid = (param->flags & DM_UUID_FLAG) ? 1 : 0; + unsigned int change_uuid = (param->flags & DM_UUID_FLAG) ? 1 : 0; int srcu_idx;
/* @@ -1021,7 +1021,7 @@ static int dev_rename(struct file *filp, struct dm_ioctl *param, size_t param_si int r; char *new_data = (char *) param + param->data_start; struct mapped_device *md; - unsigned change_uuid = (param->flags & DM_UUID_FLAG) ? 1 : 0; + unsigned int change_uuid = (param->flags & DM_UUID_FLAG) ? 1 : 0;
if (new_data < param->data || invalid_str(new_data, (void *) param + param_size) || !*new_data || @@ -1096,7 +1096,7 @@ static int dev_set_geometry(struct file *filp, struct dm_ioctl *param, size_t pa static int do_suspend(struct dm_ioctl *param) { int r = 0; - unsigned suspend_flags = DM_SUSPEND_LOCKFS_FLAG; + unsigned int suspend_flags = DM_SUSPEND_LOCKFS_FLAG; struct mapped_device *md;
md = find_device(param); @@ -1125,7 +1125,7 @@ static int do_suspend(struct dm_ioctl *param) static int do_resume(struct dm_ioctl *param) { int r = 0; - unsigned suspend_flags = DM_SUSPEND_LOCKFS_FLAG; + unsigned int suspend_flags = DM_SUSPEND_LOCKFS_FLAG; struct hash_cell *hc; struct mapped_device *md; struct dm_table *new_map, *old_map = NULL; @@ -1243,7 +1243,7 @@ static void retrieve_status(struct dm_table *table, char *outbuf, *outptr; status_type_t type; size_t remaining, len, used = 0; - unsigned status_flags = 0; + unsigned int status_flags = 0;
outptr = outbuf = get_result_buffer(param, param_size, &len);
@@ -1648,8 +1648,8 @@ static int table_status(struct file *filp, struct dm_ioctl *param, size_t param_ * Returns a number <= 1 if message was processed by device mapper. * Returns 2 if message should be delivered to the target. */ -static int message_for_md(struct mapped_device *md, unsigned argc, char **argv, - char *result, unsigned maxlen) +static int message_for_md(struct mapped_device *md, unsigned int argc, char **argv, + char *result, unsigned int maxlen) { int r;
@@ -1859,7 +1859,7 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern struct dm_ioctl *dmi; int secure_data; const size_t minimum_data_size = offsetof(struct dm_ioctl, data); - unsigned noio_flag; + unsigned int noio_flag;
if (copy_from_user(param_kernel, user, minimum_data_size)) return -EFAULT; diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c index 4d3bbbea2e9a8..0ef78e56aa88c 100644 --- a/drivers/md/dm-kcopyd.c +++ b/drivers/md/dm-kcopyd.c @@ -34,14 +34,14 @@ #define DEFAULT_SUB_JOB_SIZE_KB 512 #define MAX_SUB_JOB_SIZE_KB 1024
-static unsigned kcopyd_subjob_size_kb = DEFAULT_SUB_JOB_SIZE_KB; +static unsigned int kcopyd_subjob_size_kb = DEFAULT_SUB_JOB_SIZE_KB;
module_param(kcopyd_subjob_size_kb, uint, S_IRUGO | S_IWUSR); MODULE_PARM_DESC(kcopyd_subjob_size_kb, "Sub-job size for dm-kcopyd clients");
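Since kcopyd_subjob_size_kb is writable at runtime (S_IRUGO | S_IWUSR), dm_get_kcopyd_subjob_size() below re-reads and clamps it on every use through __dm_get_module_param(). A sketch of what that helper does, assuming the usual snapshot-and-clamp behaviour (the real one lives in drivers/md/dm.c):

#include <stdio.h>

static unsigned int get_module_param(unsigned int *module_param,
				     unsigned int def, unsigned int max)
{
	unsigned int param = *module_param;	/* kernel uses READ_ONCE() */

	if (!param)
		param = def;		/* 0 means "use the default" */
	else if (param > max)
		param = max;
	return param;
}

static unsigned int kcopyd_subjob_size_kb = 512;	/* DEFAULT_SUB_JOB_SIZE_KB */

int main(void)
{
	kcopyd_subjob_size_kb = 4096;	/* as if written via sysfs */
	printf("effective sub-job size: %u KiB\n",
	       get_module_param(&kcopyd_subjob_size_kb, 512, 1024));
	return 0;
}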
-static unsigned dm_get_kcopyd_subjob_size(void) +static unsigned int dm_get_kcopyd_subjob_size(void) { - unsigned sub_job_size_kb; + unsigned int sub_job_size_kb;
sub_job_size_kb = __dm_get_module_param(&kcopyd_subjob_size_kb, DEFAULT_SUB_JOB_SIZE_KB, @@ -56,9 +56,9 @@ static unsigned dm_get_kcopyd_subjob_size(void) *---------------------------------------------------------------*/ struct dm_kcopyd_client { struct page_list *pages; - unsigned nr_reserved_pages; - unsigned nr_free_pages; - unsigned sub_job_size; + unsigned int nr_reserved_pages; + unsigned int nr_free_pages; + unsigned int sub_job_size;
struct dm_io_client *io_client;
@@ -119,7 +119,7 @@ static DEFINE_SPINLOCK(throttle_spinlock);
static void io_job_start(struct dm_kcopyd_throttle *t) { - unsigned throttle, now, difference; + unsigned int throttle, now, difference; int slept = 0, skew;
if (unlikely(!t)) @@ -182,7 +182,7 @@ static void io_job_finish(struct dm_kcopyd_throttle *t) goto skip_limit;
if (!t->num_io_jobs) { - unsigned now, difference; + unsigned int now, difference;
now = jiffies; difference = now - t->last_jiffies; @@ -303,9 +303,9 @@ static void drop_pages(struct page_list *pl) /* * Allocate and reserve nr_pages for the use of a specific client. */ -static int client_reserve_pages(struct dm_kcopyd_client *kc, unsigned nr_pages) +static int client_reserve_pages(struct dm_kcopyd_client *kc, unsigned int nr_pages) { - unsigned i; + unsigned int i; struct page_list *pl = NULL, *next;
for (i = 0; i < nr_pages; i++) { @@ -341,7 +341,7 @@ static void client_free_pages(struct dm_kcopyd_client *kc) struct kcopyd_job { struct dm_kcopyd_client *kc; struct list_head list; - unsigned flags; + unsigned int flags;
/* * Error state of the job. @@ -582,7 +582,7 @@ static int run_io_job(struct kcopyd_job *job) static int run_pages_job(struct kcopyd_job *job) { int r; - unsigned nr_pages = dm_div_up(job->dests[0].count, PAGE_SIZE >> 9); + unsigned int nr_pages = dm_div_up(job->dests[0].count, PAGE_SIZE >> 9);
r = kcopyd_get_pages(job->kc, nr_pages, &job->pages); if (!r) { @@ -849,8 +849,8 @@ void dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from, EXPORT_SYMBOL(dm_kcopyd_copy);
void dm_kcopyd_zero(struct dm_kcopyd_client *kc, - unsigned num_dests, struct dm_io_region *dests, - unsigned flags, dm_kcopyd_notify_fn fn, void *context) + unsigned int num_dests, struct dm_io_region *dests, + unsigned int flags, dm_kcopyd_notify_fn fn, void *context) { dm_kcopyd_copy(kc, NULL, num_dests, dests, flags, fn, context); } @@ -906,7 +906,7 @@ int kcopyd_cancel(struct kcopyd_job *job, int block) struct dm_kcopyd_client *dm_kcopyd_client_create(struct dm_kcopyd_throttle *throttle) { int r; - unsigned reserve_pages; + unsigned int reserve_pages; struct dm_kcopyd_client *kc;
kc = kzalloc(sizeof(*kc), GFP_KERNEL); diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c index 3212ef6aa81bb..26b1af6461771 100644 --- a/drivers/md/dm-linear.c +++ b/drivers/md/dm-linear.c @@ -95,7 +95,7 @@ static int linear_map(struct dm_target *ti, struct bio *bio) }
static void linear_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { struct linear_c *lc = (struct linear_c *) ti->private; size_t sz = 0; diff --git a/drivers/md/dm-log-userspace-base.c b/drivers/md/dm-log-userspace-base.c index 9ab93ebea8895..9fc69382692bd 100644 --- a/drivers/md/dm-log-userspace-base.c +++ b/drivers/md/dm-log-userspace-base.c @@ -123,7 +123,7 @@ static int userspace_do_request(struct log_c *lc, const char *uuid, }
static int build_constructor_string(struct dm_target *ti, - unsigned argc, char **argv, + unsigned int argc, char **argv, char **ctr_str) { int i, str_size; @@ -188,7 +188,7 @@ static void do_flush(struct work_struct *work) * to the userspace ctr function. */ static int userspace_ctr(struct dm_dirty_log *log, struct dm_target *ti, - unsigned argc, char **argv) + unsigned int argc, char **argv) { int r = 0; int str_size; @@ -792,7 +792,7 @@ static region_t userspace_get_sync_count(struct dm_dirty_log *log) * Returns: amount of space consumed */ static int userspace_status(struct dm_dirty_log *log, status_type_t status_type, - char *result, unsigned maxlen) + char *result, unsigned int maxlen) { int r = 0; char *table_args; diff --git a/drivers/md/dm-log-userspace-transfer.c b/drivers/md/dm-log-userspace-transfer.c index fdf8ec304f8d2..072559b709edd 100644 --- a/drivers/md/dm-log-userspace-transfer.c +++ b/drivers/md/dm-log-userspace-transfer.c @@ -142,7 +142,7 @@ static void cn_ulog_callback(struct cn_msg *msg, struct netlink_skb_parms *nsp) fill_pkg(msg, NULL); else if (msg->len < sizeof(*tfr)) DMERR("Incomplete message received (expected %u, got %u): [%u]", - (unsigned)sizeof(*tfr), msg->len, msg->seq); + (unsigned int)sizeof(*tfr), msg->len, msg->seq); else fill_pkg(NULL, tfr); spin_unlock(&receiving_list_lock); diff --git a/drivers/md/dm-log-writes.c b/drivers/md/dm-log-writes.c index 178e13a5b059f..efdfb2e1868a4 100644 --- a/drivers/md/dm-log-writes.c +++ b/drivers/md/dm-log-writes.c @@ -792,10 +792,10 @@ static int normal_end_io(struct dm_target *ti, struct bio *bio, * INFO format: <logged entries> <highest allocated sector> */ static void log_writes_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, - unsigned maxlen) + unsigned int status_flags, char *result, + unsigned int maxlen) { - unsigned sz = 0; + unsigned int sz = 0; struct log_writes_c *lc = ti->private;
switch (type) { @@ -844,8 +844,8 @@ static int log_writes_iterate_devices(struct dm_target *ti, * Messages supported: * mark <mark data> - specify the marked data. */ -static int log_writes_message(struct dm_target *ti, unsigned argc, char **argv, - char *result, unsigned maxlen) +static int log_writes_message(struct dm_target *ti, unsigned int argc, char **argv, + char *result, unsigned int maxlen) { int r = -EINVAL; struct log_writes_c *lc = ti->private; diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c index cf10fa6677972..159f2c05dfd3c 100644 --- a/drivers/md/dm-log.c +++ b/drivers/md/dm-log.c @@ -223,7 +223,7 @@ struct log_c { unsigned int region_count; region_t sync_count;
- unsigned bitset_uint32_count; + unsigned int bitset_uint32_count; uint32_t *clean_bits; uint32_t *sync_bits; uint32_t *recovering_bits; /* FIXME: this seems excessive */ @@ -255,20 +255,20 @@ struct log_c { * The touched member needs to be updated every time we access * one of the bitsets. */ -static inline int log_test_bit(uint32_t *bs, unsigned bit) +static inline int log_test_bit(uint32_t *bs, unsigned int bit) { return test_bit_le(bit, bs) ? 1 : 0; }
static inline void log_set_bit(struct log_c *l, - uint32_t *bs, unsigned bit) + uint32_t *bs, unsigned int bit) { __set_bit_le(bit, bs); l->touched_cleaned = 1; }
static inline void log_clear_bit(struct log_c *l, - uint32_t *bs, unsigned bit) + uint32_t *bs, unsigned int bit) { __clear_bit_le(bit, bs); l->touched_dirtied = 1; @@ -582,7 +582,7 @@ static void fail_log_device(struct log_c *lc) static int disk_resume(struct dm_dirty_log *log) { int r; - unsigned i; + unsigned int i; struct log_c *lc = (struct log_c *) log->context; size_t size = lc->bitset_uint32_count * sizeof(uint32_t);
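The log_{test,set,clear}_bit() helpers above keep one bit per region in plain uint32_t arrays (clean_bits, sync_bits, recovering_bits); the kernel uses the _le bit ops so the on-disk bitmap layout is endian-stable. A plain user-space equivalent of the idea (without the endian handling or the touched_* bookkeeping):

#include <stdio.h>
#include <stdint.h>

#define BITS_PER_U32 32U

static void log_set_bit(uint32_t *bs, unsigned int bit)
{
	bs[bit / BITS_PER_U32] |= 1U << (bit % BITS_PER_U32);
}

static void log_clear_bit(uint32_t *bs, unsigned int bit)
{
	bs[bit / BITS_PER_U32] &= ~(1U << (bit % BITS_PER_U32));
}

static int log_test_bit(const uint32_t *bs, unsigned int bit)
{
	return (bs[bit / BITS_PER_U32] >> (bit % BITS_PER_U32)) & 1U;
}

int main(void)
{
	uint32_t clean_bits[2] = { 0 };	/* tracks 64 regions */

	log_set_bit(clean_bits, 40);
	printf("region 40 clean? %d\n", log_test_bit(clean_bits, 40));
	log_clear_bit(clean_bits, 40);
	printf("region 40 clean? %d\n", log_test_bit(clean_bits, 40));
	return 0;
}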
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c index 0e325469a252a..91c25ad8eed84 100644 --- a/drivers/md/dm-mpath.c +++ b/drivers/md/dm-mpath.c @@ -29,7 +29,7 @@
#define DM_MSG_PREFIX "multipath" #define DM_PG_INIT_DELAY_MSECS 2000 -#define DM_PG_INIT_DELAY_DEFAULT ((unsigned) -1) +#define DM_PG_INIT_DELAY_DEFAULT ((unsigned int) -1) #define QUEUE_IF_NO_PATH_TIMEOUT_DEFAULT 0
static unsigned long queue_if_no_path_timeout_secs = QUEUE_IF_NO_PATH_TIMEOUT_DEFAULT; @@ -39,7 +39,7 @@ struct pgpath { struct list_head list;
struct priority_group *pg; /* Owning PG */ - unsigned fail_count; /* Cumulative failure count */ + unsigned int fail_count; /* Cumulative failure count */
struct dm_path path; struct delayed_work activate_path; @@ -59,8 +59,8 @@ struct priority_group { struct multipath *m; /* Owning multipath instance */ struct path_selector ps;
- unsigned pg_num; /* Reference number */ - unsigned nr_pgpaths; /* Number of paths in PG */ + unsigned int pg_num; /* Reference number */ + unsigned int nr_pgpaths; /* Number of paths in PG */ struct list_head pgpaths;
bool bypassed:1; /* Temporarily bypass this PG? */ @@ -78,14 +78,14 @@ struct multipath { struct priority_group *next_pg; /* Switch to this PG if set */
atomic_t nr_valid_paths; /* Total number of usable paths */ - unsigned nr_priority_groups; + unsigned int nr_priority_groups; struct list_head priority_groups;
const char *hw_handler_name; char *hw_handler_params; wait_queue_head_t pg_init_wait; /* Wait for pg_init completion */ - unsigned pg_init_retries; /* Number of times to retry pg_init */ - unsigned pg_init_delay_msecs; /* Number of msecs before pg_init retry */ + unsigned int pg_init_retries; /* Number of times to retry pg_init */ + unsigned int pg_init_delay_msecs; /* Number of msecs before pg_init retry */ atomic_t pg_init_in_progress; /* Only one pg_init allowed at once */ atomic_t pg_init_count; /* Number of times pg_init called */
@@ -397,7 +397,7 @@ static struct pgpath *choose_pgpath(struct multipath *m, size_t nr_bytes) unsigned long flags; struct priority_group *pg; struct pgpath *pgpath; - unsigned bypassed = 1; + unsigned int bypassed = 1;
if (!atomic_read(&m->nr_valid_paths)) { spin_lock_irqsave(&m->lock, flags); @@ -840,7 +840,7 @@ static int parse_path_selector(struct dm_arg_set *as, struct priority_group *pg, { int r; struct path_selector_type *pst; - unsigned ps_argc; + unsigned int ps_argc;
static const struct dm_arg _args[] = { {0, 1024, "invalid number of path selector args"}, @@ -983,7 +983,7 @@ static struct priority_group *parse_priority_group(struct dm_arg_set *as, };
int r; - unsigned i, nr_selector_args, nr_args; + unsigned int i, nr_selector_args, nr_args; struct priority_group *pg; struct dm_target *ti = m->ti;
@@ -1049,7 +1049,7 @@ static struct priority_group *parse_priority_group(struct dm_arg_set *as,
static int parse_hw_handler(struct dm_arg_set *as, struct multipath *m) { - unsigned hw_argc; + unsigned int hw_argc; int ret; struct dm_target *ti = m->ti;
@@ -1101,7 +1101,7 @@ static int parse_hw_handler(struct dm_arg_set *as, struct multipath *m) static int parse_features(struct dm_arg_set *as, struct multipath *m) { int r; - unsigned argc; + unsigned int argc; struct dm_target *ti = m->ti; const char *arg_name;
@@ -1170,7 +1170,7 @@ static int parse_features(struct dm_arg_set *as, struct multipath *m) return r; }
-static int multipath_ctr(struct dm_target *ti, unsigned argc, char **argv) +static int multipath_ctr(struct dm_target *ti, unsigned int argc, char **argv) { /* target arguments */ static const struct dm_arg _args[] = { @@ -1181,8 +1181,8 @@ static int multipath_ctr(struct dm_target *ti, unsigned argc, char **argv) int r; struct multipath *m; struct dm_arg_set as; - unsigned pg_count = 0; - unsigned next_pg_num; + unsigned int pg_count = 0; + unsigned int next_pg_num; unsigned long flags;
as.argc = argc; @@ -1224,7 +1224,7 @@ static int multipath_ctr(struct dm_target *ti, unsigned argc, char **argv) /* parse the priority groups */ while (as.argc) { struct priority_group *pg; - unsigned nr_valid_paths = atomic_read(&m->nr_valid_paths); + unsigned int nr_valid_paths = atomic_read(&m->nr_valid_paths);
pg = parse_priority_group(&as, m); if (IS_ERR(pg)) { @@ -1365,7 +1365,7 @@ static int reinstate_path(struct pgpath *pgpath) int r = 0, run_queue = 0; unsigned long flags; struct multipath *m = pgpath->pg->m; - unsigned nr_valid_paths; + unsigned int nr_valid_paths;
spin_lock_irqsave(&m->lock, flags);
@@ -1454,7 +1454,7 @@ static void bypass_pg(struct multipath *m, struct priority_group *pg, static int switch_pg_num(struct multipath *m, const char *pgstr) { struct priority_group *pg; - unsigned pgnum; + unsigned int pgnum; unsigned long flags; char dummy;
@@ -1487,7 +1487,7 @@ static int switch_pg_num(struct multipath *m, const char *pgstr) static int bypass_pg_num(struct multipath *m, const char *pgstr, bool bypassed) { struct priority_group *pg; - unsigned pgnum; + unsigned int pgnum; char dummy;
if (!pgstr || (sscanf(pgstr, "%u%c", &pgnum, &dummy) != 1) || !pgnum || @@ -1789,14 +1789,14 @@ static void multipath_resume(struct dm_target *ti) * num_paths num_selector_args [path_dev [selector_args]* ]+ ]+ */ static void multipath_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { int sz = 0, pg_counter, pgpath_counter; unsigned long flags; struct multipath *m = ti->private; struct priority_group *pg; struct pgpath *p; - unsigned pg_num; + unsigned int pg_num; char state;
spin_lock_irqsave(&m->lock, flags); @@ -1948,8 +1948,8 @@ static void multipath_status(struct dm_target *ti, status_type_t type, spin_unlock_irqrestore(&m->lock, flags); }
-static int multipath_message(struct dm_target *ti, unsigned argc, char **argv, - char *result, unsigned maxlen) +static int multipath_message(struct dm_target *ti, unsigned int argc, char **argv, + char *result, unsigned int maxlen) { int r = -EINVAL; struct dm_dev *dev; diff --git a/drivers/md/dm-mpath.h b/drivers/md/dm-mpath.h index e230f71962596..5343698fe5f1b 100644 --- a/drivers/md/dm-mpath.h +++ b/drivers/md/dm-mpath.h @@ -17,6 +17,6 @@ struct dm_path { };
/* Callback for hwh_pg_init_fn to use when complete */ -void dm_pg_init_complete(struct dm_path *path, unsigned err_flags); +void dm_pg_init_complete(struct dm_path *path, unsigned int err_flags);
#endif diff --git a/drivers/md/dm-path-selector.h b/drivers/md/dm-path-selector.h index 83cac2b04b668..0f2b37af87662 100644 --- a/drivers/md/dm-path-selector.h +++ b/drivers/md/dm-path-selector.h @@ -52,7 +52,7 @@ struct path_selector_type { /* * Constructs a path selector object, takes custom arguments */ - int (*create) (struct path_selector *ps, unsigned argc, char **argv); + int (*create) (struct path_selector *ps, unsigned int argc, char **argv); void (*destroy) (struct path_selector *ps);
/* diff --git a/drivers/md/dm-ps-io-affinity.c b/drivers/md/dm-ps-io-affinity.c index f74501e65a8ed..76ce4ce872229 100644 --- a/drivers/md/dm-ps-io-affinity.c +++ b/drivers/md/dm-ps-io-affinity.c @@ -108,7 +108,7 @@ static int ioa_add_path(struct path_selector *ps, struct dm_path *path, return ret; }
-static int ioa_create(struct path_selector *ps, unsigned argc, char **argv) +static int ioa_create(struct path_selector *ps, unsigned int argc, char **argv) { struct selector *s;
@@ -138,7 +138,7 @@ static int ioa_create(struct path_selector *ps, unsigned argc, char **argv) static void ioa_destroy(struct path_selector *ps) { struct selector *s = ps->context; - unsigned cpu; + unsigned int cpu;
for_each_cpu(cpu, s->path_mask) ioa_free_path(s, cpu); diff --git a/drivers/md/dm-ps-queue-length.c b/drivers/md/dm-ps-queue-length.c index cef70657bbbc2..6fbec9fc242d9 100644 --- a/drivers/md/dm-ps-queue-length.c +++ b/drivers/md/dm-ps-queue-length.c @@ -35,7 +35,7 @@ struct selector { struct path_info { struct list_head list; struct dm_path *path; - unsigned repeat_count; + unsigned int repeat_count; atomic_t qlen; /* the number of in-flight I/Os */ };
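For context on the path_info hunk above: the queue-length selector tracks in-flight I/Os per path in qlen and sends new I/O down the least-loaded usable path. A toy version of that selection (array instead of the kernel's list, plain int instead of atomic_t):

#include <stdio.h>

struct path_info {
	const char *name;
	unsigned int repeat_count;
	int qlen;		/* in-flight I/Os; atomic_t in the kernel */
};

static struct path_info *ql_select(struct path_info *paths, unsigned int n)
{
	struct path_info *best = NULL;
	unsigned int i;

	for (i = 0; i < n; i++)
		if (!best || paths[i].qlen < best->qlen)
			best = &paths[i];
	return best;
}

int main(void)
{
	struct path_info paths[] = {
		{ "sda", 1, 4 },
		{ "sdb", 1, 2 },
		{ "sdc", 1, 7 },
	};

	printf("route to %s\n", ql_select(paths, 3)->name);	/* sdb */
	return 0;
}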
@@ -52,7 +52,7 @@ static struct selector *alloc_selector(void) return s; }
-static int ql_create(struct path_selector *ps, unsigned argc, char **argv) +static int ql_create(struct path_selector *ps, unsigned int argc, char **argv) { struct selector *s = alloc_selector();
@@ -84,9 +84,9 @@ static void ql_destroy(struct path_selector *ps) }
static int ql_status(struct path_selector *ps, struct dm_path *path, - status_type_t type, char *result, unsigned maxlen) + status_type_t type, char *result, unsigned int maxlen) { - unsigned sz = 0; + unsigned int sz = 0; struct path_info *pi;
/* When called with NULL path, return selector status/args. */ @@ -116,7 +116,7 @@ static int ql_add_path(struct path_selector *ps, struct dm_path *path, { struct selector *s = ps->context; struct path_info *pi; - unsigned repeat_count = QL_MIN_IO; + unsigned int repeat_count = QL_MIN_IO; char dummy; unsigned long flags;
diff --git a/drivers/md/dm-ps-round-robin.c b/drivers/md/dm-ps-round-robin.c index 27f44c5fa04e8..1d07392b5ed48 100644 --- a/drivers/md/dm-ps-round-robin.c +++ b/drivers/md/dm-ps-round-robin.c @@ -26,7 +26,7 @@ struct path_info { struct list_head list; struct dm_path *path; - unsigned repeat_count; + unsigned int repeat_count; };
static void free_paths(struct list_head *paths) @@ -62,7 +62,7 @@ static struct selector *alloc_selector(void) return s; }
-static int rr_create(struct path_selector *ps, unsigned argc, char **argv) +static int rr_create(struct path_selector *ps, unsigned int argc, char **argv) { struct selector *s;
@@ -119,7 +119,7 @@ static int rr_add_path(struct path_selector *ps, struct dm_path *path, { struct selector *s = ps->context; struct path_info *pi; - unsigned repeat_count = RR_MIN_IO; + unsigned int repeat_count = RR_MIN_IO; char dummy; unsigned long flags;
diff --git a/drivers/md/dm-ps-service-time.c b/drivers/md/dm-ps-service-time.c index 3ec9c33265c52..84d26234dc053 100644 --- a/drivers/md/dm-ps-service-time.c +++ b/drivers/md/dm-ps-service-time.c @@ -30,8 +30,8 @@ struct selector { struct path_info { struct list_head list; struct dm_path *path; - unsigned repeat_count; - unsigned relative_throughput; + unsigned int repeat_count; + unsigned int relative_throughput; atomic_t in_flight_size; /* Total size of in-flight I/Os */ };
@@ -48,7 +48,7 @@ static struct selector *alloc_selector(void) return s; }
-static int st_create(struct path_selector *ps, unsigned argc, char **argv) +static int st_create(struct path_selector *ps, unsigned int argc, char **argv) { struct selector *s = alloc_selector();
@@ -80,9 +80,9 @@ static void st_destroy(struct path_selector *ps) }
static int st_status(struct path_selector *ps, struct dm_path *path, - status_type_t type, char *result, unsigned maxlen) + status_type_t type, char *result, unsigned int maxlen) { - unsigned sz = 0; + unsigned int sz = 0; struct path_info *pi;
if (!path) @@ -113,8 +113,8 @@ static int st_add_path(struct path_selector *ps, struct dm_path *path, { struct selector *s = ps->context; struct path_info *pi; - unsigned repeat_count = ST_MIN_IO; - unsigned relative_throughput = 1; + unsigned int repeat_count = ST_MIN_IO; + unsigned int relative_throughput = 1; char dummy; unsigned long flags;
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c index 54263679a7b14..b26c12856b1db 100644 --- a/drivers/md/dm-raid.c +++ b/drivers/md/dm-raid.c @@ -3712,7 +3712,7 @@ static void raid_status(struct dm_target *ti, status_type_t type, }
static int raid_message(struct dm_target *ti, unsigned int argc, char **argv, - char *result, unsigned maxlen) + char *result, unsigned int maxlen) { struct raid_set *rs = ti->private; struct mddev *mddev = &rs->md; diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c index 06a38dc320253..8bd7e87d3538e 100644 --- a/drivers/md/dm-raid1.c +++ b/drivers/md/dm-raid1.c @@ -82,7 +82,7 @@ struct mirror_set {
struct work_struct trigger_event;
- unsigned nr_mirrors; + unsigned int nr_mirrors; struct mirror mirror[]; };
@@ -327,7 +327,7 @@ static void recovery_complete(int read_err, unsigned long write_err,
static void recover(struct mirror_set *ms, struct dm_region *reg) { - unsigned i; + unsigned int i; struct dm_io_region from, to[DM_KCOPYD_MAX_REGIONS], *dest; struct mirror *m; unsigned long flags = 0; @@ -593,7 +593,7 @@ static void do_reads(struct mirror_set *ms, struct bio_list *reads)
static void write_callback(unsigned long error, void *context) { - unsigned i; + unsigned int i; struct bio *bio = (struct bio *) context; struct mirror_set *ms; int should_wake = 0; @@ -963,10 +963,10 @@ static int get_mirror(struct mirror_set *ms, struct dm_target *ti, * Create dirty log: log_type #log_params <log_params> */ static struct dm_dirty_log *create_dirty_log(struct dm_target *ti, - unsigned argc, char **argv, - unsigned *args_used) + unsigned int argc, char **argv, + unsigned int *args_used) { - unsigned param_count; + unsigned int param_count; struct dm_dirty_log *dl; char dummy;
@@ -997,10 +997,10 @@ static struct dm_dirty_log *create_dirty_log(struct dm_target *ti, return dl; }
-static int parse_features(struct mirror_set *ms, unsigned argc, char **argv, - unsigned *args_used) +static int parse_features(struct mirror_set *ms, unsigned int argc, char **argv, + unsigned int *args_used) { - unsigned num_features; + unsigned int num_features; struct dm_target *ti = ms->ti; char dummy; int i; @@ -1389,7 +1389,7 @@ static char device_status_char(struct mirror *m)
static void mirror_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { unsigned int m, sz = 0; int num_feature_args = 0; @@ -1458,7 +1458,7 @@ static int mirror_iterate_devices(struct dm_target *ti, { struct mirror_set *ms = ti->private; int ret = 0; - unsigned i; + unsigned int i;
for (i = 0; !ret && i < ms->nr_mirrors; i++) ret = fn(ti, ms->mirror[i].dev, diff --git a/drivers/md/dm-region-hash.c b/drivers/md/dm-region-hash.c index 1f760451e6f48..adbdb4b671372 100644 --- a/drivers/md/dm-region-hash.c +++ b/drivers/md/dm-region-hash.c @@ -56,17 +56,17 @@ *---------------------------------------------------------------*/ struct dm_region_hash { uint32_t region_size; - unsigned region_shift; + unsigned int region_shift;
/* holds persistent region state */ struct dm_dirty_log *log;
/* hash table */ rwlock_t hash_lock; - unsigned mask; - unsigned nr_buckets; - unsigned prime; - unsigned shift; + unsigned int mask; + unsigned int nr_buckets; + unsigned int prime; + unsigned int shift; struct list_head *buckets;
/* @@ -74,7 +74,7 @@ struct dm_region_hash { */ int flush_failure;
- unsigned max_recovery; /* Max # of regions to recover in parallel */ + unsigned int max_recovery; /* Max # of regions to recover in parallel */
spinlock_t region_lock; atomic_t recovery_in_flight; @@ -163,12 +163,12 @@ struct dm_region_hash *dm_region_hash_create( struct bio_list *bios), void (*wakeup_workers)(void *context), void (*wakeup_all_recovery_waiters)(void *context), - sector_t target_begin, unsigned max_recovery, + sector_t target_begin, unsigned int max_recovery, struct dm_dirty_log *log, uint32_t region_size, region_t nr_regions) { struct dm_region_hash *rh; - unsigned nr_buckets, max_buckets; + unsigned int nr_buckets, max_buckets; size_t i; int ret;
@@ -236,7 +236,7 @@ EXPORT_SYMBOL_GPL(dm_region_hash_create);
void dm_region_hash_destroy(struct dm_region_hash *rh) { - unsigned h; + unsigned int h; struct dm_region *reg, *nreg;
BUG_ON(!list_empty(&rh->quiesced_regions)); @@ -263,9 +263,9 @@ struct dm_dirty_log *dm_rh_dirty_log(struct dm_region_hash *rh) } EXPORT_SYMBOL_GPL(dm_rh_dirty_log);
-static unsigned rh_hash(struct dm_region_hash *rh, region_t region) +static unsigned int rh_hash(struct dm_region_hash *rh, region_t region) { - return (unsigned) ((region * rh->prime) >> rh->shift) & rh->mask; + return (unsigned int) ((region * rh->prime) >> rh->shift) & rh->mask; }
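rh_hash() above is a multiplicative hash: multiply the region number by a large prime, shift the well-mixed high bits down, then mask into a power-of-two bucket array (mask, prime and shift are fields set up when the hash table is created). A stand-alone sketch with made-up constants:

#include <stdio.h>
#include <stdint.h>

#define NR_BUCKETS 64U			/* must be a power of two */

static unsigned int rh_hash(uint64_t region)
{
	const uint64_t prime = 4294967291ULL;	/* illustrative prime */
	const unsigned int shift = 12;		/* illustrative shift */

	return (unsigned int)((region * prime) >> shift) & (NR_BUCKETS - 1);
}

int main(void)
{
	uint64_t r;

	for (r = 0; r < 4; r++)
		printf("region %llu -> bucket %u\n",
		       (unsigned long long)r, rh_hash(r));
	return 0;
}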
static struct dm_region *__rh_lookup(struct dm_region_hash *rh, region_t region) diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c index a41209a43506c..80f46e01bca44 100644 --- a/drivers/md/dm-rq.c +++ b/drivers/md/dm-rq.c @@ -23,33 +23,33 @@ struct dm_rq_target_io { union map_info info; struct dm_stats_aux stats_aux; unsigned long duration_jiffies; - unsigned n_sectors; - unsigned completed; + unsigned int n_sectors; + unsigned int completed; };
#define DM_MQ_NR_HW_QUEUES 1 #define DM_MQ_QUEUE_DEPTH 2048 -static unsigned dm_mq_nr_hw_queues = DM_MQ_NR_HW_QUEUES; -static unsigned dm_mq_queue_depth = DM_MQ_QUEUE_DEPTH; +static unsigned int dm_mq_nr_hw_queues = DM_MQ_NR_HW_QUEUES; +static unsigned int dm_mq_queue_depth = DM_MQ_QUEUE_DEPTH;
/* * Request-based DM's mempools' reserved IOs set by the user. */ #define RESERVED_REQUEST_BASED_IOS 256 -static unsigned reserved_rq_based_ios = RESERVED_REQUEST_BASED_IOS; +static unsigned int reserved_rq_based_ios = RESERVED_REQUEST_BASED_IOS;
-unsigned dm_get_reserved_rq_based_ios(void) +unsigned int dm_get_reserved_rq_based_ios(void) { return __dm_get_module_param(&reserved_rq_based_ios, RESERVED_REQUEST_BASED_IOS, DM_RESERVED_MAX_IOS); }
-static unsigned dm_get_blk_mq_nr_hw_queues(void) +static unsigned int dm_get_blk_mq_nr_hw_queues(void) { return __dm_get_module_param(&dm_mq_nr_hw_queues, 1, 32); }
-static unsigned dm_get_blk_mq_queue_depth(void) +static unsigned int dm_get_blk_mq_queue_depth(void) { return __dm_get_module_param(&dm_mq_queue_depth, DM_MQ_QUEUE_DEPTH, BLK_MQ_MAX_DEPTH); diff --git a/drivers/md/dm-rq.h b/drivers/md/dm-rq.h index 1eea0da641db5..2c97ad1451400 100644 --- a/drivers/md/dm-rq.h +++ b/drivers/md/dm-rq.h @@ -38,7 +38,7 @@ void dm_stop_queue(struct request_queue *q);
void dm_mq_kick_requeue_list(struct mapped_device *md);
-unsigned dm_get_reserved_rq_based_ios(void); +unsigned int dm_get_reserved_rq_based_ios(void);
ssize_t dm_attr_rq_based_seq_io_merge_deadline_show(struct mapped_device *md, char *buf); ssize_t dm_attr_rq_based_seq_io_merge_deadline_store(struct mapped_device *md, diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c index 680cc05ec6542..5176810f5d243 100644 --- a/drivers/md/dm-snap-persistent.c +++ b/drivers/md/dm-snap-persistent.c @@ -303,7 +303,7 @@ static int read_header(struct pstore *ps, int *new_snapshot) { int r; struct disk_header *dh; - unsigned chunk_size; + unsigned int chunk_size; int chunk_size_supplied = 1; char *chunk_err;
@@ -895,11 +895,11 @@ static int persistent_ctr(struct dm_exception_store *store, char *options) return r; }
-static unsigned persistent_status(struct dm_exception_store *store, +static unsigned int persistent_status(struct dm_exception_store *store, status_type_t status, char *result, - unsigned maxlen) + unsigned int maxlen) { - unsigned sz = 0; + unsigned int sz = 0;
switch (status) { case STATUSTYPE_INFO: diff --git a/drivers/md/dm-snap-transient.c b/drivers/md/dm-snap-transient.c index 0e0ae4c36b374..d83a0565bd101 100644 --- a/drivers/md/dm-snap-transient.c +++ b/drivers/md/dm-snap-transient.c @@ -84,11 +84,11 @@ static int transient_ctr(struct dm_exception_store *store, char *options) return 0; }
-static unsigned transient_status(struct dm_exception_store *store, +static unsigned int transient_status(struct dm_exception_store *store, status_type_t status, char *result, - unsigned maxlen) + unsigned int maxlen) { - unsigned sz = 0; + unsigned int sz = 0;
switch (status) { case STATUSTYPE_INFO: diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c index d1c2f84d27e36..c64d987c544d7 100644 --- a/drivers/md/dm-snap.c +++ b/drivers/md/dm-snap.c @@ -41,7 +41,7 @@ static const char dm_snapshot_merge_target_name[] = "snapshot-merge";
struct dm_exception_table { uint32_t hash_mask; - unsigned hash_shift; + unsigned int hash_shift; struct hlist_bl_head *table; };
@@ -106,7 +106,7 @@ struct dm_snapshot { /* The on disk metadata handler */ struct dm_exception_store *store;
- unsigned in_progress; + unsigned int in_progress; struct wait_queue_head in_progress_wait;
struct dm_kcopyd_client *kcopyd_client; @@ -161,7 +161,7 @@ struct dm_snapshot { */ #define DEFAULT_COW_THRESHOLD 2048
-static unsigned cow_threshold = DEFAULT_COW_THRESHOLD; +static unsigned int cow_threshold = DEFAULT_COW_THRESHOLD; module_param_named(snapshot_cow_threshold, cow_threshold, uint, 0644); MODULE_PARM_DESC(snapshot_cow_threshold, "Maximum number of chunks being copied on write");
@@ -324,7 +324,7 @@ struct origin { struct dm_origin { struct dm_dev *dev; struct dm_target *ti; - unsigned split_boundary; + unsigned int split_boundary; struct list_head hash_list; };
@@ -377,7 +377,7 @@ static void exit_origin_hash(void) kfree(_dm_origins); }
-static unsigned origin_hash(struct block_device *bdev) +static unsigned int origin_hash(struct block_device *bdev) { return bdev->bd_dev & ORIGIN_MASK; } @@ -652,7 +652,7 @@ static void dm_exception_table_unlock(struct dm_exception_table_lock *lock) }
static int dm_exception_table_init(struct dm_exception_table *et, - uint32_t size, unsigned hash_shift) + uint32_t size, unsigned int hash_shift) { unsigned int i;
@@ -850,7 +850,7 @@ static int dm_add_exception(void *context, chunk_t old, chunk_t new) static uint32_t __minimum_chunk_size(struct origin *o) { struct dm_snapshot *snap; - unsigned chunk_size = rounddown_pow_of_two(UINT_MAX); + unsigned int chunk_size = rounddown_pow_of_two(UINT_MAX);
if (o) list_for_each_entry(snap, &o->snapshots, list) @@ -1010,7 +1010,7 @@ static int remove_single_exception_chunk(struct dm_snapshot *s) }
static int origin_write_extent(struct dm_snapshot *merging_snap, - sector_t sector, unsigned chunk_size); + sector_t sector, unsigned int chunk_size);
static void merge_callback(int read_err, unsigned long write_err, void *context); @@ -1183,7 +1183,7 @@ static int parse_snapshot_features(struct dm_arg_set *as, struct dm_snapshot *s, struct dm_target *ti) { int r; - unsigned argc; + unsigned int argc; const char *arg_name;
static const struct dm_arg _args[] = { @@ -1241,7 +1241,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) int r = -EINVAL; char *origin_path, *cow_path; dev_t origin_dev, cow_dev; - unsigned args_used, num_flush_bios = 1; + unsigned int args_used, num_flush_bios = 1; fmode_t origin_mode = FMODE_READ;
if (argc < 4) { @@ -2315,11 +2315,11 @@ static void snapshot_merge_resume(struct dm_target *ti) }
static void snapshot_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { - unsigned sz = 0; + unsigned int sz = 0; struct dm_snapshot *snap = ti->private; - unsigned num_features; + unsigned int num_features;
switch (type) { case STATUSTYPE_INFO: @@ -2592,7 +2592,7 @@ static int do_origin(struct dm_dev *origin, struct bio *bio, bool limit) * size must be a multiple of merging_snap's chunk_size. */ static int origin_write_extent(struct dm_snapshot *merging_snap, - sector_t sector, unsigned size) + sector_t sector, unsigned int size) { int must_wait = 0; sector_t n; @@ -2668,7 +2668,7 @@ static void origin_dtr(struct dm_target *ti) static int origin_map(struct dm_target *ti, struct bio *bio) { struct dm_origin *o = ti->private; - unsigned available_sectors; + unsigned int available_sectors;
bio_set_dev(bio, o->dev->bdev);
@@ -2679,7 +2679,7 @@ static int origin_map(struct dm_target *ti, struct bio *bio) return DM_MAPIO_REMAPPED;
available_sectors = o->split_boundary - - ((unsigned)bio->bi_iter.bi_sector & (o->split_boundary - 1)); + ((unsigned int)bio->bi_iter.bi_sector & (o->split_boundary - 1));
if (bio_sectors(bio) > available_sectors) dm_accept_partial_bio(bio, available_sectors); @@ -2713,7 +2713,7 @@ static void origin_postsuspend(struct dm_target *ti) }
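The `available_sectors` computation just above works because `split_boundary` is a power of two: `sector & (boundary - 1)` is the offset of the sector inside its region, and subtracting that from the boundary yields how many sectors remain before the next region starts. The same arithmetic in isolation (the numbers are made up for the example):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t sector = 1000;            /* example bio start sector */
	unsigned int split_boundary = 256; /* power-of-two region size, in sectors */

	unsigned int offset = (unsigned int)sector & (split_boundary - 1);
	unsigned int available = split_boundary - offset;

	/* 1000 % 256 == 232, so only 24 sectors fit before the boundary. */
	printf("sector %llu: %u sectors available\n",
	       (unsigned long long)sector, available);
	return 0;
}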
static void origin_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { struct dm_origin *o = ti->private;
diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c index d12ba9bce145d..7eeb3c2a2492b 100644 --- a/drivers/md/dm-stats.c +++ b/drivers/md/dm-stats.c @@ -42,12 +42,12 @@ struct dm_stat_shared { struct dm_stat { struct list_head list_entry; int id; - unsigned stat_flags; + unsigned int stat_flags; size_t n_entries; sector_t start; sector_t end; sector_t step; - unsigned n_histogram_entries; + unsigned int n_histogram_entries; unsigned long long *histogram_boundaries; const char *program_id; const char *aux_data; @@ -63,7 +63,7 @@ struct dm_stat {
struct dm_stats_last_position { sector_t last_sector; - unsigned last_rw; + unsigned int last_rw; };
/* @@ -255,8 +255,8 @@ static void dm_stats_recalc_precise_timestamps(struct dm_stats *stats) }
static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end, - sector_t step, unsigned stat_flags, - unsigned n_histogram_entries, + sector_t step, unsigned int stat_flags, + unsigned int n_histogram_entries, unsigned long long *histogram_boundaries, const char *program_id, const char *aux_data, void (*suspend_callback)(struct mapped_device *), @@ -475,11 +475,11 @@ static int dm_stats_delete(struct dm_stats *stats, int id) }
static int dm_stats_list(struct dm_stats *stats, const char *program, - char *result, unsigned maxlen) + char *result, unsigned int maxlen) { struct dm_stat *s; sector_t len; - unsigned sz = 0; + unsigned int sz = 0;
/* * Output format: @@ -499,7 +499,7 @@ static int dm_stats_list(struct dm_stats *stats, const char *program, if (s->stat_flags & STAT_PRECISE_TIMESTAMPS) DMEMIT(" precise_timestamps"); if (s->n_histogram_entries) { - unsigned i; + unsigned int i; DMEMIT(" histogram:"); for (i = 0; i < s->n_histogram_entries; i++) { if (i) @@ -523,7 +523,7 @@ static void dm_stat_round(struct dm_stat *s, struct dm_stat_shared *shared, * This is racy, but so is part_round_stats_single. */ unsigned long long now, difference; - unsigned in_flight_read, in_flight_write; + unsigned int in_flight_read, in_flight_write;
if (likely(!(s->stat_flags & STAT_PRECISE_TIMESTAMPS))) now = jiffies; @@ -534,8 +534,8 @@ static void dm_stat_round(struct dm_stat *s, struct dm_stat_shared *shared, if (!difference) return;
- in_flight_read = (unsigned)atomic_read(&shared->in_flight[READ]); - in_flight_write = (unsigned)atomic_read(&shared->in_flight[WRITE]); + in_flight_read = (unsigned int)atomic_read(&shared->in_flight[READ]); + in_flight_write = (unsigned int)atomic_read(&shared->in_flight[WRITE]); if (in_flight_read) p->io_ticks[READ] += difference; if (in_flight_write) @@ -596,9 +596,9 @@ static void dm_stat_for_entry(struct dm_stat *s, size_t entry, duration = stats_aux->duration_ns; } if (s->n_histogram_entries) { - unsigned lo = 0, hi = s->n_histogram_entries + 1; + unsigned int lo = 0, hi = s->n_histogram_entries + 1; while (lo + 1 < hi) { - unsigned mid = (lo + hi) / 2; + unsigned int mid = (lo + hi) / 2; if (s->histogram_boundaries[mid - 1] > duration) { hi = mid; } else { @@ -656,7 +656,7 @@ static void __dm_stat_bio(struct dm_stat *s, int bi_rw, }
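The lo/hi loop in the dm_stat_for_entry() hunk above is a plain binary search over the sorted histogram_boundaries array; it returns the first bucket whose upper boundary exceeds the measured duration. Extracted into a standalone sketch, with an example boundary table standing in for one parsed from the "histogram:" argument:

#include <stdio.h>

static unsigned long long boundaries[] = { 1000, 10000, 100000 }; /* example */
#define N_ENTRIES 3u

static unsigned int histogram_bucket(unsigned long long duration)
{
	unsigned int lo = 0, hi = N_ENTRIES + 1;

	while (lo + 1 < hi) {
		unsigned int mid = (lo + hi) / 2;

		if (boundaries[mid - 1] > duration)
			hi = mid; /* duration falls below this boundary */
		else
			lo = mid; /* duration is at or above it */
	}
	return lo; /* bucket index in 0..N_ENTRIES */
}

int main(void)
{
	/* 500 -> bucket 0, 5000 -> bucket 1, 1000000 -> bucket 3 */
	printf("%u %u %u\n", histogram_bucket(500), histogram_bucket(5000),
	       histogram_bucket(1000000));
	return 0;
}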
void dm_stats_account_io(struct dm_stats *stats, unsigned long bi_rw, - sector_t bi_sector, unsigned bi_sectors, bool end, + sector_t bi_sector, unsigned int bi_sectors, bool end, unsigned long start_time, struct dm_stats_aux *stats_aux) { @@ -745,7 +745,7 @@ static void __dm_stat_init_temporary_percpu_totals(struct dm_stat_shared *shared shared->tmp.io_ticks_total += READ_ONCE(p->io_ticks_total); shared->tmp.time_in_queue += READ_ONCE(p->time_in_queue); if (s->n_histogram_entries) { - unsigned i; + unsigned int i; for (i = 0; i < s->n_histogram_entries + 1; i++) shared->tmp.histogram[i] += READ_ONCE(p->histogram[i]); } @@ -779,7 +779,7 @@ static void __dm_stat_clear(struct dm_stat *s, size_t idx_start, size_t idx_end, p->time_in_queue -= shared->tmp.time_in_queue; local_irq_enable(); if (s->n_histogram_entries) { - unsigned i; + unsigned int i; for (i = 0; i < s->n_histogram_entries + 1; i++) { local_irq_disable(); p = &s->stat_percpu[smp_processor_id()][x]; @@ -816,7 +816,7 @@ static int dm_stats_clear(struct dm_stats *stats, int id) static unsigned long long dm_jiffies_to_msec64(struct dm_stat *s, unsigned long long j) { unsigned long long result; - unsigned mult; + unsigned int mult;
if (s->stat_flags & STAT_PRECISE_TIMESTAMPS) return j; @@ -836,9 +836,9 @@ static unsigned long long dm_jiffies_to_msec64(struct dm_stat *s, unsigned long long j)
static int dm_stats_print(struct dm_stats *stats, int id, size_t idx_start, size_t idx_len, - bool clear, char *result, unsigned maxlen) + bool clear, char *result, unsigned int maxlen) { - unsigned sz = 0; + unsigned int sz = 0; struct dm_stat *s; size_t x; sector_t start, end, step; @@ -894,7 +894,7 @@ static int dm_stats_print(struct dm_stats *stats, int id, dm_jiffies_to_msec64(s, shared->tmp.io_ticks[READ]), dm_jiffies_to_msec64(s, shared->tmp.io_ticks[WRITE])); if (s->n_histogram_entries) { - unsigned i; + unsigned int i; for (i = 0; i < s->n_histogram_entries + 1; i++) { DMEMIT("%s%llu", !i ? " " : ":", shared->tmp.histogram[i]); } @@ -943,11 +943,11 @@ static int dm_stats_set_aux(struct dm_stats *stats, int id, const char *aux_data return 0; }
-static int parse_histogram(const char *h, unsigned *n_histogram_entries, +static int parse_histogram(const char *h, unsigned int *n_histogram_entries, unsigned long long **histogram_boundaries) { const char *q; - unsigned n; + unsigned int n; unsigned long long last;
*n_histogram_entries = 1; @@ -982,23 +982,23 @@ static int parse_histogram(const char *h, unsigned *n_histogram_entries, }
static int message_stats_create(struct mapped_device *md, - unsigned argc, char **argv, - char *result, unsigned maxlen) + unsigned int argc, char **argv, + char *result, unsigned int maxlen) { int r; int id; char dummy; unsigned long long start, end, len, step; - unsigned divisor; + unsigned int divisor; const char *program_id, *aux_data; - unsigned stat_flags = 0; + unsigned int stat_flags = 0;
- unsigned n_histogram_entries = 0; + unsigned int n_histogram_entries = 0; unsigned long long *histogram_boundaries = NULL;
struct dm_arg_set as, as_backup; const char *a; - unsigned feature_args; + unsigned int feature_args;
/* * Input format: @@ -1107,7 +1107,7 @@ static int message_stats_create(struct mapped_device *md, }
static int message_stats_delete(struct mapped_device *md, - unsigned argc, char **argv) + unsigned int argc, char **argv) { int id; char dummy; @@ -1122,7 +1122,7 @@ static int message_stats_delete(struct mapped_device *md, }
static int message_stats_clear(struct mapped_device *md, - unsigned argc, char **argv) + unsigned int argc, char **argv) { int id; char dummy; @@ -1137,8 +1137,8 @@ static int message_stats_clear(struct mapped_device *md, }
static int message_stats_list(struct mapped_device *md, - unsigned argc, char **argv, - char *result, unsigned maxlen) + unsigned int argc, char **argv, + char *result, unsigned int maxlen) { int r; const char *program = NULL; @@ -1160,8 +1160,8 @@ static int message_stats_list(struct mapped_device *md, }
static int message_stats_print(struct mapped_device *md, - unsigned argc, char **argv, bool clear, - char *result, unsigned maxlen) + unsigned int argc, char **argv, bool clear, + char *result, unsigned int maxlen) { int id; char dummy; @@ -1187,7 +1187,7 @@ static int message_stats_print(struct mapped_device *md, }
static int message_stats_set_aux(struct mapped_device *md, - unsigned argc, char **argv) + unsigned int argc, char **argv) { int id; char dummy; @@ -1201,8 +1201,8 @@ static int message_stats_set_aux(struct mapped_device *md, return dm_stats_set_aux(dm_get_stats(md), id, argv[2]); }
-int dm_stats_message(struct mapped_device *md, unsigned argc, char **argv, - char *result, unsigned maxlen) +int dm_stats_message(struct mapped_device *md, unsigned int argc, char **argv, + char *result, unsigned int maxlen) { int r;
diff --git a/drivers/md/dm-stats.h b/drivers/md/dm-stats.h index ee32b099f1cf7..c6728c8b41594 100644 --- a/drivers/md/dm-stats.h +++ b/drivers/md/dm-stats.h @@ -26,11 +26,11 @@ void dm_stats_cleanup(struct dm_stats *st);
struct mapped_device;
-int dm_stats_message(struct mapped_device *md, unsigned argc, char **argv, - char *result, unsigned maxlen); +int dm_stats_message(struct mapped_device *md, unsigned int argc, char **argv, + char *result, unsigned int maxlen);
void dm_stats_account_io(struct dm_stats *stats, unsigned long bi_rw, - sector_t bi_sector, unsigned bi_sectors, bool end, + sector_t bi_sector, unsigned int bi_sectors, bool end, unsigned long start_time, struct dm_stats_aux *aux);
diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c index baa085cc67bde..a81ed080730a7 100644 --- a/drivers/md/dm-stripe.c +++ b/drivers/md/dm-stripe.c @@ -273,7 +273,7 @@ static int stripe_map(struct dm_target *ti, struct bio *bio) { struct stripe_c *sc = ti->private; uint32_t stripe; - unsigned target_bio_nr; + unsigned int target_bio_nr;
if (bio->bi_opf & REQ_PREFLUSH) { target_bio_nr = dm_bio_get_target_bio_nr(bio); @@ -359,7 +359,7 @@ static size_t stripe_dax_recovery_write(struct dm_target *ti, pgoff_t pgoff, */
static void stripe_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { struct stripe_c *sc = (struct stripe_c *) ti->private; unsigned int sz = 0; @@ -406,7 +406,7 @@ static void stripe_status(struct dm_target *ti, status_type_t type, static int stripe_end_io(struct dm_target *ti, struct bio *bio, blk_status_t *error) { - unsigned i; + unsigned int i; char major_minor[16]; struct stripe_c *sc = ti->private;
@@ -444,7 +444,7 @@ static int stripe_iterate_devices(struct dm_target *ti, { struct stripe_c *sc = ti->private; int ret = 0; - unsigned i = 0; + unsigned int i = 0;
do { ret = fn(ti, sc->stripe[i].dev, @@ -459,7 +459,7 @@ static void stripe_io_hints(struct dm_target *ti, struct queue_limits *limits) { struct stripe_c *sc = ti->private; - unsigned chunk_size = sc->chunk_size << SECTOR_SHIFT; + unsigned int chunk_size = sc->chunk_size << SECTOR_SHIFT;
blk_limits_io_min(limits, chunk_size); blk_limits_io_opt(limits, chunk_size * sc->stripes); diff --git a/drivers/md/dm-switch.c b/drivers/md/dm-switch.c index 534dc2ca8bb06..f734b5a097443 100644 --- a/drivers/md/dm-switch.c +++ b/drivers/md/dm-switch.c @@ -38,9 +38,9 @@ struct switch_path { struct switch_ctx { struct dm_target *ti;
- unsigned nr_paths; /* Number of paths in path_list. */ + unsigned int nr_paths; /* Number of paths in path_list. */
- unsigned region_size; /* Region size in 512-byte sectors */ + unsigned int region_size; /* Region size in 512-byte sectors */ unsigned long nr_regions; /* Number of regions making up the device */ signed char region_size_bits; /* log2 of region_size or -1 */
@@ -56,8 +56,8 @@ struct switch_ctx { struct switch_path path_list[]; };
-static struct switch_ctx *alloc_switch_ctx(struct dm_target *ti, unsigned nr_paths, - unsigned region_size) +static struct switch_ctx *alloc_switch_ctx(struct dm_target *ti, unsigned int nr_paths, + unsigned int region_size) { struct switch_ctx *sctx;
@@ -73,7 +73,7 @@ static struct switch_ctx *alloc_switch_ctx(struct dm_target *ti, unsigned nr_paths, return sctx; }
-static int alloc_region_table(struct dm_target *ti, unsigned nr_paths) +static int alloc_region_table(struct dm_target *ti, unsigned int nr_paths) { struct switch_ctx *sctx = ti->private; sector_t nr_regions = ti->len; @@ -124,7 +124,7 @@ static int alloc_region_table(struct dm_target *ti, unsigned nr_paths) }
static void switch_get_position(struct switch_ctx *sctx, unsigned long region_nr, - unsigned long *region_index, unsigned *bit) + unsigned long *region_index, unsigned int *bit) { if (sctx->region_entries_per_slot_bits >= 0) { *region_index = region_nr >> sctx->region_entries_per_slot_bits; @@ -137,10 +137,10 @@ static void switch_get_position(struct switch_ctx *sctx, unsigned long region_nr *bit *= sctx->region_table_entry_bits; }
-static unsigned switch_region_table_read(struct switch_ctx *sctx, unsigned long region_nr) +static unsigned int switch_region_table_read(struct switch_ctx *sctx, unsigned long region_nr) { unsigned long region_index; - unsigned bit; + unsigned int bit;
switch_get_position(sctx, region_nr, &region_index, &bit);
@@ -151,9 +151,9 @@ static unsigned switch_region_table_read(struct switch_ctx *sctx, unsigned long region_nr) /* * Find which path to use at given offset. */ -static unsigned switch_get_path_nr(struct switch_ctx *sctx, sector_t offset) +static unsigned int switch_get_path_nr(struct switch_ctx *sctx, sector_t offset) { - unsigned path_nr; + unsigned int path_nr; sector_t p;
p = offset; @@ -172,10 +172,10 @@ static unsigned switch_get_path_nr(struct switch_ctx *sctx, sector_t offset) }
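switch_get_path_nr() maps an I/O offset to a path in two steps: offset to region number (a shift when region_size is a power of two, a division otherwise, which is what the region_size_bits field encodes), then a packed region-table lookup. A sketch of just the first step, with made-up values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t offset = 123456;         /* target-relative sector (example) */
	signed char region_size_bits = 9; /* log2(region_size), or -1 if not 2^n */
	unsigned int region_size = 512;   /* region size in 512-byte sectors */

	uint64_t region_nr;

	if (region_size_bits >= 0)
		region_nr = offset >> region_size_bits; /* fast path: shift */
	else
		region_nr = offset / region_size;       /* general case */

	printf("offset %llu -> region %llu\n",
	       (unsigned long long)offset, (unsigned long long)region_nr);
	return 0;
}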
static void switch_region_table_write(struct switch_ctx *sctx, unsigned long region_nr, - unsigned value) + unsigned int value) { unsigned long region_index; - unsigned bit; + unsigned int bit; region_table_slot_t pte;
switch_get_position(sctx, region_nr, &region_index, &bit); @@ -191,7 +191,7 @@ static void switch_region_table_write(struct switch_ctx *sctx, unsigned long region_nr, */ static void initialise_region_table(struct switch_ctx *sctx) { - unsigned path_nr = 0; + unsigned int path_nr = 0; unsigned long region_nr;
for (region_nr = 0; region_nr < sctx->nr_regions; region_nr++) { @@ -249,7 +249,7 @@ static void switch_dtr(struct dm_target *ti) * Optional args are to allow for future extension: currently this * parameter must be 0. */ -static int switch_ctr(struct dm_target *ti, unsigned argc, char **argv) +static int switch_ctr(struct dm_target *ti, unsigned int argc, char **argv) { static const struct dm_arg _args[] = { {1, (KMALLOC_MAX_SIZE - sizeof(struct switch_ctx)) / sizeof(struct switch_path), "Invalid number of paths"}, @@ -259,7 +259,7 @@ static int switch_ctr(struct dm_target *ti, unsigned argc, char **argv)
struct switch_ctx *sctx; struct dm_arg_set as; - unsigned nr_paths, region_size, nr_optional_args; + unsigned int nr_paths, region_size, nr_optional_args; int r;
as.argc = argc; @@ -320,7 +320,7 @@ static int switch_map(struct dm_target *ti, struct bio *bio) { struct switch_ctx *sctx = ti->private; sector_t offset = dm_target_offset(ti, bio->bi_iter.bi_sector); - unsigned path_nr = switch_get_path_nr(sctx, offset); + unsigned int path_nr = switch_get_path_nr(sctx, offset);
bio_set_dev(bio, sctx->path_list[path_nr].dmdev->bdev); bio->bi_iter.bi_sector = sctx->path_list[path_nr].start + offset; @@ -371,9 +371,9 @@ static __always_inline unsigned long parse_hex(const char **string) }
static int process_set_region_mappings(struct switch_ctx *sctx, - unsigned argc, char **argv) + unsigned int argc, char **argv) { - unsigned i; + unsigned int i; unsigned long region_index = 0;
for (i = 1; i < argc; i++) { @@ -466,8 +466,8 @@ static int process_set_region_mappings(struct switch_ctx *sctx, * * Only set_region_mappings is supported. */ -static int switch_message(struct dm_target *ti, unsigned argc, char **argv, - char *result, unsigned maxlen) +static int switch_message(struct dm_target *ti, unsigned int argc, char **argv, + char *result, unsigned int maxlen) { static DEFINE_MUTEX(message_mutex);
@@ -487,10 +487,10 @@ static int switch_message(struct dm_target *ti, unsigned argc, char **argv, }
static void switch_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { struct switch_ctx *sctx = ti->private; - unsigned sz = 0; + unsigned int sz = 0; int path_nr;
switch (type) { @@ -519,7 +519,7 @@ static void switch_status(struct dm_target *ti, status_type_t type, static int switch_prepare_ioctl(struct dm_target *ti, struct block_device **bdev) { struct switch_ctx *sctx = ti->private; - unsigned path_nr; + unsigned int path_nr;
path_nr = switch_get_path_nr(sctx, 0);
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index 8541d5688f3a6..c571f2385b57f 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -126,7 +126,7 @@ static int alloc_targets(struct dm_table *t, unsigned int num) }
int dm_table_create(struct dm_table **result, fmode_t mode, - unsigned num_targets, struct mapped_device *md) + unsigned int num_targets, struct mapped_device *md) { struct dm_table *t = kzalloc(sizeof(*t), GFP_KERNEL);
@@ -470,10 +470,10 @@ static int adjoin(struct dm_table *t, struct dm_target *ti) * On the other hand, dm-switch needs to process bulk data using messages and * excessive use of GFP_NOIO could cause trouble. */ -static char **realloc_argv(unsigned *size, char **old_argv) +static char **realloc_argv(unsigned int *size, char **old_argv) { char **argv; - unsigned new_size; + unsigned int new_size; gfp_t gfp;
if (*size) { @@ -499,7 +499,7 @@ static char **realloc_argv(unsigned *size, char **old_argv) int dm_split_args(int *argc, char ***argvp, char *input) { char *start, *end = input, *out, **argv = NULL; - unsigned array_size = 0; + unsigned int array_size = 0;
*argc = 0;
@@ -732,9 +732,8 @@ int dm_table_add_target(struct dm_table *t, const char *type, /* * Target argument parsing helpers. */ -static int validate_next_arg(const struct dm_arg *arg, - struct dm_arg_set *arg_set, - unsigned *value, char **error, unsigned grouped) +static int validate_next_arg(const struct dm_arg *arg, struct dm_arg_set *arg_set, + unsigned int *value, char **error, unsigned int grouped) { const char *arg_str = dm_shift_arg(arg_set); char dummy; @@ -752,14 +751,14 @@ static int validate_next_arg(const struct dm_arg *arg, }
int dm_read_arg(const struct dm_arg *arg, struct dm_arg_set *arg_set, - unsigned *value, char **error) + unsigned int *value, char **error) { return validate_next_arg(arg, arg_set, value, error, 0); } EXPORT_SYMBOL(dm_read_arg);
int dm_read_arg_group(const struct dm_arg *arg, struct dm_arg_set *arg_set, - unsigned *value, char **error) + unsigned int *value, char **error) { return validate_next_arg(arg, arg_set, value, error, 1); } @@ -780,7 +779,7 @@ const char *dm_shift_arg(struct dm_arg_set *as) } EXPORT_SYMBOL(dm_shift_arg);
-void dm_consume_args(struct dm_arg_set *as, unsigned num_args) +void dm_consume_args(struct dm_arg_set *as, unsigned int num_args) { BUG_ON(as->argc < num_args); as->argc -= num_args; @@ -856,7 +855,7 @@ static int device_is_rq_stackable(struct dm_target *ti, struct dm_dev *dev,
static int dm_table_determine_type(struct dm_table *t) { - unsigned bio_based = 0, request_based = 0, hybrid = 0; + unsigned int bio_based = 0, request_based = 0, hybrid = 0; struct dm_target *ti; struct list_head *devices = dm_table_get_devices(t); enum dm_queue_mode live_md_type = dm_get_md_type(t->md); @@ -1535,7 +1534,7 @@ static bool dm_table_any_dev_attr(struct dm_table *t, static int count_device(struct dm_target *ti, struct dm_dev *dev, sector_t start, sector_t len, void *data) { - unsigned *num_devices = data; + unsigned int *num_devices = data;
(*num_devices)++;
@@ -1565,7 +1564,7 @@ bool dm_table_has_no_data_devices(struct dm_table *t) { for (unsigned int i = 0; i < t->num_targets; i++) { struct dm_target *ti = dm_table_get_target(t, i); - unsigned num_devices = 0; + unsigned int num_devices = 0;
if (!ti->type->iterate_devices) return false; diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c index 6bcc4c4786d89..80545ec541210 100644 --- a/drivers/md/dm-thin-metadata.c +++ b/drivers/md/dm-thin-metadata.c @@ -318,12 +318,12 @@ static void unpack_block_time(uint64_t v, dm_block_t *b, uint32_t *t) */ typedef int (*run_fn)(struct dm_space_map *, dm_block_t, dm_block_t);
-static void with_runs(struct dm_space_map *sm, const __le64 *value_le, unsigned count, run_fn fn) +static void with_runs(struct dm_space_map *sm, const __le64 *value_le, unsigned int count, run_fn fn) { uint64_t b, begin, end; uint32_t t; bool in_run = false; - unsigned i; + unsigned int i;
for (i = 0; i < count; i++, value_le++) { /* We know value_le is 8 byte aligned */ @@ -348,13 +348,13 @@ static void with_runs(struct dm_space_map *sm, const __le64 *value_le, unsigned count, run_fn fn) fn(sm, begin, end); }
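with_runs() above is a small but effective optimization: instead of calling dm_sm_inc_blocks()/dm_sm_dec_blocks() once per block, it coalesces consecutive block numbers into [begin, end) runs and issues one callback per run. The coalescing pattern on its own, with example input:

#include <stdint.h>
#include <stdio.h>

static void on_run(uint64_t begin, uint64_t end)
{
	printf("run [%llu, %llu)\n",
	       (unsigned long long)begin, (unsigned long long)end);
}

int main(void)
{
	/* Example block numbers; dm-thin reads these as __le64 btree values. */
	uint64_t blocks[] = { 5, 6, 7, 9, 10, 42 };
	unsigned int count = sizeof(blocks) / sizeof(blocks[0]);
	uint64_t begin = 0, end = 0;
	int in_run = 0;
	unsigned int i;

	for (i = 0; i < count; i++) {
		if (in_run && blocks[i] == end) {
			end++;                      /* extend the current run */
		} else {
			if (in_run)
				on_run(begin, end); /* flush the finished run */
			begin = blocks[i];
			end = begin + 1;
			in_run = 1;
		}
	}
	if (in_run)
		on_run(begin, end); /* prints [5,8) [9,11) [42,43) */
	return 0;
}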
-static void data_block_inc(void *context, const void *value_le, unsigned count) +static void data_block_inc(void *context, const void *value_le, unsigned int count) { with_runs((struct dm_space_map *) context, (const __le64 *) value_le, count, dm_sm_inc_blocks); }
-static void data_block_dec(void *context, const void *value_le, unsigned count) +static void data_block_dec(void *context, const void *value_le, unsigned int count) { with_runs((struct dm_space_map *) context, (const __le64 *) value_le, count, dm_sm_dec_blocks); @@ -374,21 +374,21 @@ static int data_block_equal(void *context, const void *value1_le, const void *value2_le) return b1 == b2; }
-static void subtree_inc(void *context, const void *value, unsigned count) +static void subtree_inc(void *context, const void *value, unsigned int count) { struct dm_btree_info *info = context; const __le64 *root_le = value; - unsigned i; + unsigned int i;
for (i = 0; i < count; i++, root_le++) dm_tm_inc(info->tm, le64_to_cpu(*root_le)); }
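subtree_inc()/subtree_dec() walk an array of on-disk values, converting each from little-endian with le64_to_cpu() before using it as a block number. The user-space equivalent of that load, written as manual byte assembly so the sketch stays portable across host endianness:

#include <stdint.h>
#include <stdio.h>

/* Portable little-endian load; kernel code uses le64_to_cpu() instead. */
static uint64_t le64_load(const unsigned char *p)
{
	uint64_t v = 0;
	int i;

	for (i = 7; i >= 0; i--)
		v = (v << 8) | p[i];
	return v;
}

int main(void)
{
	/* Two example on-disk values, 1 and 258, stored little-endian. */
	unsigned char raw[16] = { 1, 0, 0, 0, 0, 0, 0, 0,
				  2, 1, 0, 0, 0, 0, 0, 0 };
	unsigned int i;

	for (i = 0; i < 2; i++)
		printf("root %u = %llu\n", i,
		       (unsigned long long)le64_load(raw + 8 * i));
	return 0;
}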
-static void subtree_dec(void *context, const void *value, unsigned count) +static void subtree_dec(void *context, const void *value, unsigned int count) { struct dm_btree_info *info = context; const __le64 *root_le = value; - unsigned i; + unsigned int i;
for (i = 0; i < count; i++, root_le++) if (dm_btree_del(info, le64_to_cpu(*root_le))) @@ -448,10 +448,10 @@ static int superblock_lock(struct dm_pool_metadata *pmd, static int __superblock_all_zeroes(struct dm_block_manager *bm, int *result) { int r; - unsigned i; + unsigned int i; struct dm_block *b; __le64 *data_le, zero = cpu_to_le64(0); - unsigned block_size = dm_bm_block_size(bm) / sizeof(__le64); + unsigned int block_size = dm_bm_block_size(bm) / sizeof(__le64);
/* * We can't use a validator here - it may be all zeroes. @@ -971,7 +971,7 @@ struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev, int dm_pool_metadata_close(struct dm_pool_metadata *pmd) { int r; - unsigned open_devices = 0; + unsigned int open_devices = 0; struct dm_thin_device *td, *tmp;
down_read(&pmd->root_lock); @@ -1679,7 +1679,7 @@ int dm_thin_insert_block(struct dm_thin_device *td, dm_block_t block, static int __remove_range(struct dm_thin_device *td, dm_block_t begin, dm_block_t end) { int r; - unsigned count, total_count = 0; + unsigned int count, total_count = 0; struct dm_pool_metadata *pmd = td->pmd; dm_block_t keys[1] = { td->id }; __le64 value; diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c index e6e5ab29a95df..ba4ba6be7e232 100644 --- a/drivers/md/dm-thin.c +++ b/drivers/md/dm-thin.c @@ -32,7 +32,7 @@ #define COMMIT_PERIOD HZ #define NO_SPACE_TIMEOUT_SECS 60
-static unsigned no_space_timeout_secs = NO_SPACE_TIMEOUT_SECS; +static unsigned int no_space_timeout_secs = NO_SPACE_TIMEOUT_SECS;
DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM(snapshot_copy_throttle, "A percentage of time allocated for copy on write"); @@ -254,7 +254,7 @@ struct pool { struct delayed_work no_space_timeout;
unsigned long last_commit_jiffies; - unsigned ref_count; + unsigned int ref_count;
spinlock_t lock; struct bio_list deferred_flush_bios; @@ -2159,7 +2159,7 @@ static void process_thin_deferred_bios(struct thin_c *tc) struct bio *bio; struct bio_list bios; struct blk_plug plug; - unsigned count = 0; + unsigned int count = 0;
if (tc->requeue_mode) { error_thin_bio_list(tc, &tc->deferred_bio_list, @@ -2229,9 +2229,9 @@ static int cmp_cells(const void *lhs, const void *rhs) return 0; }
-static unsigned sort_cells(struct pool *pool, struct list_head *cells) +static unsigned int sort_cells(struct pool *pool, struct list_head *cells) { - unsigned count = 0; + unsigned int count = 0; struct dm_bio_prison_cell *cell, *tmp;
list_for_each_entry_safe(cell, tmp, cells, user_list) { @@ -2252,7 +2252,7 @@ static void process_thin_deferred_cells(struct thin_c *tc) struct pool *pool = tc->pool; struct list_head cells; struct dm_bio_prison_cell *cell; - unsigned i, j, count; + unsigned int i, j, count;
INIT_LIST_HEAD(&cells);
@@ -3115,7 +3115,7 @@ static int parse_pool_features(struct dm_arg_set *as, struct pool_features *pf, struct dm_target *ti) { int r; - unsigned argc; + unsigned int argc; const char *arg_name;
static const struct dm_arg _args[] = { @@ -3252,7 +3252,7 @@ static dm_block_t calc_metadata_threshold(struct pool_c *pt) * read_only: Don't allow any changes to be made to the pool metadata. * error_if_no_space: error IOs, instead of queueing, if no space. */ -static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv) +static int pool_ctr(struct dm_target *ti, unsigned int argc, char **argv) { int r, pool_created = 0; struct pool_c *pt; @@ -3648,7 +3648,7 @@ static void pool_postsuspend(struct dm_target *ti) (void) commit(pool); }
-static int check_arg_count(unsigned argc, unsigned args_required) +static int check_arg_count(unsigned int argc, unsigned int args_required) { if (argc != args_required) { DMWARN("Message received with %u arguments instead of %u.", @@ -3671,7 +3671,7 @@ static int read_dev_id(char *arg, dm_thin_id *dev_id, int warning) return -EINVAL; }
-static int process_create_thin_mesg(unsigned argc, char **argv, struct pool *pool) +static int process_create_thin_mesg(unsigned int argc, char **argv, struct pool *pool) { dm_thin_id dev_id; int r; @@ -3694,7 +3694,7 @@ static int process_create_thin_mesg(unsigned argc, char **argv, struct pool *pool) return 0; }
-static int process_create_snap_mesg(unsigned argc, char **argv, struct pool *pool) +static int process_create_snap_mesg(unsigned int argc, char **argv, struct pool *pool) { dm_thin_id dev_id; dm_thin_id origin_dev_id; @@ -3722,7 +3722,7 @@ static int process_create_snap_mesg(unsigned argc, char **argv, struct pool *pool) return 0; }
-static int process_delete_mesg(unsigned argc, char **argv, struct pool *pool) +static int process_delete_mesg(unsigned int argc, char **argv, struct pool *pool) { dm_thin_id dev_id; int r; @@ -3742,7 +3742,7 @@ static int process_delete_mesg(unsigned argc, char **argv, struct pool *pool) return r; }
-static int process_set_transaction_id_mesg(unsigned argc, char **argv, struct pool *pool) +static int process_set_transaction_id_mesg(unsigned int argc, char **argv, struct pool *pool) { dm_thin_id old_id, new_id; int r; @@ -3771,7 +3771,7 @@ static int process_set_transaction_id_mesg(unsigned argc, char **argv, struct pool *pool) return 0; }
-static int process_reserve_metadata_snap_mesg(unsigned argc, char **argv, struct pool *pool) +static int process_reserve_metadata_snap_mesg(unsigned int argc, char **argv, struct pool *pool) { int r;
@@ -3788,7 +3788,7 @@ static int process_reserve_metadata_snap_mesg(unsigned argc, char **argv, struct pool *pool) return r; }
-static int process_release_metadata_snap_mesg(unsigned argc, char **argv, struct pool *pool) +static int process_release_metadata_snap_mesg(unsigned int argc, char **argv, struct pool *pool) { int r;
@@ -3812,8 +3812,8 @@ static int process_release_metadata_snap_mesg(unsigned argc, char **argv, struct pool *pool) * reserve_metadata_snap * release_metadata_snap */ -static int pool_message(struct dm_target *ti, unsigned argc, char **argv, - char *result, unsigned maxlen) +static int pool_message(struct dm_target *ti, unsigned int argc, char **argv, + char *result, unsigned int maxlen) { int r = -EINVAL; struct pool_c *pt = ti->private; @@ -3853,9 +3853,9 @@ static int pool_message(struct dm_target *ti, unsigned argc, char **argv, }
static void emit_flags(struct pool_features *pf, char *result, - unsigned sz, unsigned maxlen) + unsigned int sz, unsigned int maxlen) { - unsigned count = !pf->zero_new_blocks + !pf->discard_enabled + + unsigned int count = !pf->zero_new_blocks + !pf->discard_enabled + !pf->discard_passdown + (pf->mode == PM_READ_ONLY) + pf->error_if_no_space; DMEMIT("%u ", count); @@ -3883,10 +3883,10 @@ static void emit_flags(struct pool_features *pf, char *result, * <pool mode> <discard config> <no space config> <needs_check> */ static void pool_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { int r; - unsigned sz = 0; + unsigned int sz = 0; uint64_t transaction_id; dm_block_t nr_free_blocks_data; dm_block_t nr_free_blocks_metadata; @@ -4148,7 +4148,7 @@ static void thin_dtr(struct dm_target *ti) * If the pool device has discards disabled, they get disabled for the thin * device as well. */ -static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv) +static int thin_ctr(struct dm_target *ti, unsigned int argc, char **argv) { int r; struct thin_c *tc; @@ -4371,7 +4371,7 @@ static int thin_preresume(struct dm_target *ti) * <nr mapped sectors> <highest mapped sector> */ static void thin_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { int r; ssize_t sz = 0; diff --git a/drivers/md/dm-uevent.c b/drivers/md/dm-uevent.c index 8671267200d88..a02b3f6ea47a8 100644 --- a/drivers/md/dm-uevent.c +++ b/drivers/md/dm-uevent.c @@ -60,7 +60,7 @@ static struct dm_uevent *dm_build_path_uevent(struct mapped_device *md, enum kobject_action action, const char *dm_action, const char *path, - unsigned nr_valid_paths) + unsigned int nr_valid_paths) { struct dm_uevent *event;
@@ -168,7 +168,7 @@ EXPORT_SYMBOL_GPL(dm_send_uevents); * */ void dm_path_uevent(enum dm_uevent_type event_type, struct dm_target *ti, - const char *path, unsigned nr_valid_paths) + const char *path, unsigned int nr_valid_paths) { struct mapped_device *md = dm_table_get_md(ti->table); struct dm_uevent *event; diff --git a/drivers/md/dm-uevent.h b/drivers/md/dm-uevent.h index d30d226f2a181..2c9ba561fd8e9 100644 --- a/drivers/md/dm-uevent.h +++ b/drivers/md/dm-uevent.h @@ -20,7 +20,7 @@ extern void dm_uevent_exit(void); extern void dm_send_uevents(struct list_head *events, struct kobject *kobj); extern void dm_path_uevent(enum dm_uevent_type event_type, struct dm_target *ti, const char *path, - unsigned nr_valid_paths); + unsigned int nr_valid_paths);
#else
@@ -37,7 +37,7 @@ static inline void dm_send_uevents(struct list_head *events, } static inline void dm_path_uevent(enum dm_uevent_type event_type, struct dm_target *ti, const char *path, - unsigned nr_valid_paths) + unsigned int nr_valid_paths) { }
diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c index 23cffce564035..962fc32c947c5 100644 --- a/drivers/md/dm-verity-fec.c +++ b/drivers/md/dm-verity-fec.c @@ -59,14 +59,14 @@ static int fec_decode_rs8(struct dm_verity *v, struct dm_verity_fec_io *fio, * to the data block. Caller is responsible for releasing buf. */ static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index, - unsigned *offset, struct dm_buffer **buf) + unsigned int *offset, struct dm_buffer **buf) { u64 position, block, rem; u8 *res;
position = (index + rsb) * v->fec->roots; block = div64_u64_rem(position, v->fec->io_size, &rem); - *offset = (unsigned)rem; + *offset = (unsigned int)rem;
res = dm_bufio_read(v->fec->bufio, block, buf); if (IS_ERR(res)) { @@ -102,7 +102,7 @@ static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index, */ static inline u8 *fec_buffer_rs_block(struct dm_verity *v, struct dm_verity_fec_io *fio, - unsigned i, unsigned j) + unsigned int i, unsigned int j) { return &fio->bufs[i][j * v->fec->rsn]; } @@ -111,7 +111,7 @@ static inline u8 *fec_buffer_rs_block(struct dm_verity *v, * Return an index to the current RS block when called inside * fec_for_each_buffer_rs_block. */ -static inline unsigned fec_buffer_rs_index(unsigned i, unsigned j) +static inline unsigned int fec_buffer_rs_index(unsigned int i, unsigned int j) { return (i << DM_VERITY_FEC_BUF_RS_BITS) + j; } @@ -121,12 +121,12 @@ static inline unsigned fec_buffer_rs_index(unsigned i, unsigned j) * starting from block_offset. */ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio, - u64 rsb, int byte_index, unsigned block_offset, + u64 rsb, int byte_index, unsigned int block_offset, int neras) { int r, corrected = 0, res; struct dm_buffer *buf; - unsigned n, i, offset; + unsigned int n, i, offset; u8 *par, *block;
par = fec_read_parity(v, rsb, block_offset, &offset, &buf); @@ -197,7 +197,7 @@ static int fec_is_erasure(struct dm_verity *v, struct dm_verity_io *io, * fits into buffers. Check for erasure locations if @neras is non-NULL. */ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io, - u64 rsb, u64 target, unsigned block_offset, + u64 rsb, u64 target, unsigned int block_offset, int *neras) { bool is_zero; @@ -208,7 +208,7 @@ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io, u64 block, ileaved; u8 *bbuf, *rs_block; u8 want_digest[HASH_MAX_DIGESTSIZE]; - unsigned n, k; + unsigned int n, k;
if (neras) *neras = 0; @@ -304,7 +304,7 @@ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io, */ static int fec_alloc_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio) { - unsigned n; + unsigned int n;
if (!fio->rs) fio->rs = mempool_alloc(&v->fec->rs_pool, GFP_NOIO); @@ -344,7 +344,7 @@ static int fec_alloc_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio) */ static void fec_init_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio) { - unsigned n; + unsigned int n;
fec_for_each_buffer(fio, n) memset(fio->bufs[n], 0, v->fec->rsn << DM_VERITY_FEC_BUF_RS_BITS); @@ -362,7 +362,7 @@ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io, bool use_erasures) { int r, neras = 0; - unsigned pos; + unsigned int pos;
r = fec_alloc_bufs(v, fio); if (unlikely(r < 0)) @@ -484,7 +484,7 @@ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, */ void verity_fec_finish_io(struct dm_verity_io *io) { - unsigned n; + unsigned int n; struct dm_verity_fec *f = io->v->fec; struct dm_verity_fec_io *fio = fec_io(io);
@@ -522,8 +522,8 @@ void verity_fec_init_io(struct dm_verity_io *io) /* * Append feature arguments and values to the status table. */ -unsigned verity_fec_status_table(struct dm_verity *v, unsigned sz, - char *result, unsigned maxlen) +unsigned int verity_fec_status_table(struct dm_verity *v, unsigned int sz, + char *result, unsigned int maxlen) { if (!verity_fec_is_enabled(v)) return sz; @@ -589,7 +589,7 @@ bool verity_is_fec_opt_arg(const char *arg_name) }
int verity_fec_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v, - unsigned *argc, const char *arg_name) + unsigned int *argc, const char *arg_name) { int r; struct dm_target *ti = v->ti; diff --git a/drivers/md/dm-verity-fec.h b/drivers/md/dm-verity-fec.h index 3c46c8d618833..8454070d28242 100644 --- a/drivers/md/dm-verity-fec.h +++ b/drivers/md/dm-verity-fec.h @@ -55,10 +55,10 @@ struct dm_verity_fec_io { struct rs_control *rs; /* Reed-Solomon state */ int erasures[DM_VERITY_FEC_MAX_RSN]; /* erasures for decode_rs8 */ u8 *bufs[DM_VERITY_FEC_BUF_MAX]; /* bufs for deinterleaving */ - unsigned nbufs; /* number of buffers allocated */ + unsigned int nbufs; /* number of buffers allocated */ u8 *output; /* buffer for corrected output */ size_t output_pos; - unsigned level; /* recursion level */ + unsigned int level; /* recursion level */ };
#ifdef CONFIG_DM_VERITY_FEC @@ -72,15 +72,15 @@ extern int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, enum verity_block_type type, sector_t block, u8 *dest, struct bvec_iter *iter);
-extern unsigned verity_fec_status_table(struct dm_verity *v, unsigned sz, - char *result, unsigned maxlen); +extern unsigned int verity_fec_status_table(struct dm_verity *v, unsigned int sz, + char *result, unsigned int maxlen);
extern void verity_fec_finish_io(struct dm_verity_io *io); extern void verity_fec_init_io(struct dm_verity_io *io);
extern bool verity_is_fec_opt_arg(const char *arg_name); extern int verity_fec_parse_opt_args(struct dm_arg_set *as, - struct dm_verity *v, unsigned *argc, + struct dm_verity *v, unsigned int *argc, const char *arg_name);
extern void verity_fec_dtr(struct dm_verity *v); @@ -106,9 +106,9 @@ static inline int verity_fec_decode(struct dm_verity *v, return -EOPNOTSUPP; }
-static inline unsigned verity_fec_status_table(struct dm_verity *v, - unsigned sz, char *result, - unsigned maxlen) +static inline unsigned int verity_fec_status_table(struct dm_verity *v, + unsigned int sz, char *result, + unsigned int maxlen) { return sz; } @@ -128,7 +128,7 @@ static inline bool verity_is_fec_opt_arg(const char *arg_name)
static inline int verity_fec_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v, - unsigned *argc, + unsigned int *argc, const char *arg_name) { return -EINVAL; diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index ccf5b852fbf7a..64e8ac429984d 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -41,7 +41,7 @@ #define DM_VERITY_OPTS_MAX (4 + DM_VERITY_OPTS_FEC + \ DM_VERITY_ROOT_HASH_VERIFICATION_OPTS)
-static unsigned dm_verity_prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE; +static unsigned int dm_verity_prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
module_param_named(prefetch_cluster, dm_verity_prefetch_cluster, uint, S_IRUGO | S_IWUSR);
@@ -51,7 +51,7 @@ struct dm_verity_prefetch_work { struct work_struct work; struct dm_verity *v; sector_t block; - unsigned n_blocks; + unsigned int n_blocks; };
/* @@ -196,10 +196,10 @@ int verity_hash(struct dm_verity *v, struct ahash_request *req, }
static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level, - sector_t *hash_block, unsigned *offset) + sector_t *hash_block, unsigned int *offset) { sector_t position = verity_position_at_level(v, block, level); - unsigned idx; + unsigned int idx;
*hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
@@ -287,7 +287,7 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io, u8 *data; int r; sector_t hash_block; - unsigned offset; + unsigned int offset;
verity_hash_at_level(v, block, level, &hash_block, &offset);
@@ -445,13 +445,13 @@ int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io, struct dm_verity_io *io, u8 *data, size_t len)) { - unsigned todo = 1 << v->data_dev_block_bits; + unsigned int todo = 1 << v->data_dev_block_bits; struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
do { int r; u8 *page; - unsigned len; + unsigned int len; struct bio_vec bv = bio_iter_iovec(bio, *iter);
page = bvec_kmap_local(&bv); @@ -688,7 +688,7 @@ static void verity_prefetch_io(struct work_struct *work) verity_hash_at_level(v, pw->block, i, &hash_block_start, NULL); verity_hash_at_level(v, pw->block + pw->n_blocks - 1, i, &hash_block_end, NULL); if (!i) { - unsigned cluster = READ_ONCE(dm_verity_prefetch_cluster); + unsigned int cluster = READ_ONCE(dm_verity_prefetch_cluster);
cluster >>= v->data_dev_block_bits; if (unlikely(!cluster)) @@ -753,7 +753,7 @@ static int verity_map(struct dm_target *ti, struct bio *bio) bio_set_dev(bio, v->data_dev->bdev); bio->bi_iter.bi_sector = verity_map_sector(v, bio->bi_iter.bi_sector);
- if (((unsigned)bio->bi_iter.bi_sector | bio_sectors(bio)) & + if (((unsigned int)bio->bi_iter.bi_sector | bio_sectors(bio)) & ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) { DMERR_LIMIT("unaligned io"); return DM_MAPIO_KILL; @@ -789,12 +789,12 @@ static int verity_map(struct dm_target *ti, struct bio *bio) * Status: V (valid) or C (corruption found) */ static void verity_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { struct dm_verity *v = ti->private; - unsigned args = 0; - unsigned sz = 0; - unsigned x; + unsigned int args = 0; + unsigned int sz = 0; + unsigned int x;
switch (type) { case STATUSTYPE_INFO: @@ -1054,7 +1054,7 @@ static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v, bool only_modifier_opts) { int r = 0; - unsigned argc; + unsigned int argc; struct dm_target *ti = v->ti; const char *arg_name;
@@ -1156,7 +1156,7 @@ static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v, * <digest> * <salt> Hex string or "-" if no salt. */ -static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv) +static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv) { struct dm_verity *v; struct dm_verity_sig_opts verify_args = {0}; diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h index 98f306ec6a33d..2f555b4203679 100644 --- a/drivers/md/dm-verity.h +++ b/drivers/md/dm-verity.h @@ -42,7 +42,7 @@ struct dm_verity { u8 *root_digest; /* digest of the root block */ u8 *salt; /* salt: its size is salt_size */ u8 *zero_digest; /* digest for a zero block */ - unsigned salt_size; + unsigned int salt_size; sector_t data_start; /* data offset in 512-byte sectors */ sector_t hash_start; /* hash start in blocks */ sector_t data_blocks; /* the number of data blocks */ @@ -54,10 +54,10 @@ struct dm_verity { unsigned char version; bool hash_failed:1; /* set if hash of any block failed */ bool use_tasklet:1; /* try to verify in tasklet before work-queue */ - unsigned digest_size; /* digest size for the current hash algorithm */ + unsigned int digest_size; /* digest size for the current hash algorithm */ unsigned int ahash_reqsize;/* the size of temporary space for crypto */ enum verity_mode mode; /* mode for handling verification errors */ - unsigned corrupted_errs;/* Number of errors for corrupted blocks */ + unsigned int corrupted_errs;/* Number of errors for corrupted blocks */
struct workqueue_struct *verify_wq;
@@ -77,7 +77,7 @@ struct dm_verity_io { bio_end_io_t *orig_bi_end_io;
sector_t block; - unsigned n_blocks; + unsigned int n_blocks; bool in_tasklet;
struct bvec_iter iter; diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c index 96a003eb73234..431c84595ddb7 100644 --- a/drivers/md/dm-writecache.c +++ b/drivers/md/dm-writecache.c @@ -128,9 +128,9 @@ struct dm_writecache { unsigned long max_age; unsigned long pause;
- unsigned uncommitted_blocks; - unsigned autocommit_blocks; - unsigned max_writeback_jobs; + unsigned int uncommitted_blocks; + unsigned int autocommit_blocks; + unsigned int max_writeback_jobs;
int error;
@@ -155,7 +155,7 @@ struct dm_writecache { sector_t data_device_sectors; void *block_start; struct wc_entry *entries; - unsigned block_size; + unsigned int block_size; unsigned char block_size_bits;
bool pmem_mode:1; @@ -178,13 +178,13 @@ struct dm_writecache { bool metadata_only:1; bool pause_set:1;
- unsigned high_wm_percent_value; - unsigned low_wm_percent_value; - unsigned autocommit_time_value; - unsigned max_age_value; - unsigned pause_value; + unsigned int high_wm_percent_value; + unsigned int low_wm_percent_value; + unsigned int autocommit_time_value; + unsigned int max_age_value; + unsigned int pause_value;
- unsigned writeback_all; + unsigned int writeback_all; struct workqueue_struct *writeback_wq; struct work_struct writeback_work; struct work_struct flush_work; @@ -202,7 +202,7 @@ struct dm_writecache {
struct dm_kcopyd_client *dm_kcopyd; unsigned long *dirty_bitmap; - unsigned dirty_bitmap_size; + unsigned int dirty_bitmap_size;
struct bio_set bio_set; mempool_t copy_pool; @@ -227,7 +227,7 @@ struct writeback_struct { struct list_head endio_entry; struct dm_writecache *wc; struct wc_entry **wc_list; - unsigned wc_list_n; + unsigned int wc_list_n; struct wc_entry *wc_list_inline[WB_LIST_INLINE]; struct bio bio; }; @@ -236,7 +236,7 @@ struct copy_struct { struct list_head endio_entry; struct dm_writecache *wc; struct wc_entry *e; - unsigned n_entries; + unsigned int n_entries; int error; };
@@ -369,7 +369,7 @@ static struct page *persistent_memory_page(void *addr) return virt_to_page(addr); }
-static unsigned persistent_memory_page_offset(void *addr) +static unsigned int persistent_memory_page_offset(void *addr) { return (unsigned long)addr & (PAGE_SIZE - 1); } @@ -502,11 +502,11 @@ static void ssd_commit_flushed(struct dm_writecache *wc, bool wait_for_ios) COMPLETION_INITIALIZER_ONSTACK(endio.c), ATOMIC_INIT(1), }; - unsigned bitmap_bits = wc->dirty_bitmap_size * 8; - unsigned i = 0; + unsigned int bitmap_bits = wc->dirty_bitmap_size * 8; + unsigned int i = 0;
while (1) { - unsigned j; + unsigned int j; i = find_next_bit(wc->dirty_bitmap, bitmap_bits, i); if (unlikely(i == bitmap_bits)) break; @@ -1100,7 +1100,7 @@ static void writecache_resume(struct dm_target *ti) wc_unlock(wc); }
-static int process_flush_mesg(unsigned argc, char **argv, struct dm_writecache *wc) +static int process_flush_mesg(unsigned int argc, char **argv, struct dm_writecache *wc) { if (argc != 1) return -EINVAL; @@ -1133,7 +1133,7 @@ static int process_flush_mesg(unsigned argc, char **argv, struct dm_writecache *wc) return 0; }
-static int process_flush_on_suspend_mesg(unsigned argc, char **argv, struct dm_writecache *wc) +static int process_flush_on_suspend_mesg(unsigned int argc, char **argv, struct dm_writecache *wc) { if (argc != 1) return -EINVAL; @@ -1153,7 +1153,7 @@ static void activate_cleaner(struct dm_writecache *wc) wc->freelist_low_watermark = wc->n_blocks; }
-static int process_cleaner_mesg(unsigned argc, char **argv, struct dm_writecache *wc) +static int process_cleaner_mesg(unsigned int argc, char **argv, struct dm_writecache *wc) { if (argc != 1) return -EINVAL; @@ -1167,7 +1167,7 @@ static int process_cleaner_mesg(unsigned argc, char **argv, struct dm_writecache *wc) return 0; }
-static int process_clear_stats_mesg(unsigned argc, char **argv, struct dm_writecache *wc) +static int process_clear_stats_mesg(unsigned int argc, char **argv, struct dm_writecache *wc) { if (argc != 1) return -EINVAL; @@ -1179,8 +1179,8 @@ static int process_clear_stats_mesg(unsigned argc, char **argv, struct dm_writecache *wc) return 0; }
-static int writecache_message(struct dm_target *ti, unsigned argc, char **argv, - char *result, unsigned maxlen) +static int writecache_message(struct dm_target *ti, unsigned int argc, char **argv, + char *result, unsigned int maxlen) { int r = -EINVAL; struct dm_writecache *wc = ti->private; @@ -1238,9 +1238,9 @@ static void memcpy_flushcache_optimized(void *dest, void *source, size_t size) static void bio_copy_block(struct dm_writecache *wc, struct bio *bio, void *data) { void *buf; - unsigned size; + unsigned int size; int rw = bio_data_dir(bio); - unsigned remaining_size = wc->block_size; + unsigned int remaining_size = wc->block_size;
do { struct bio_vec bv = bio_iter_iovec(bio, bio->bi_iter); @@ -1371,7 +1371,7 @@ static enum wc_map_op writecache_map_read(struct dm_writecache *wc, struct bio *bio) static void writecache_bio_copy_ssd(struct dm_writecache *wc, struct bio *bio, struct wc_entry *e, bool search_used) { - unsigned bio_size = wc->block_size; + unsigned int bio_size = wc->block_size; sector_t start_cache_sec = cache_sector(wc, e); sector_t current_cache_sec = start_cache_sec + (bio_size >> SECTOR_SHIFT);
@@ -1540,7 +1540,7 @@ static int writecache_map(struct dm_target *ti, struct bio *bio)
bio->bi_iter.bi_sector = dm_target_offset(ti, bio->bi_iter.bi_sector);
- if (unlikely((((unsigned)bio->bi_iter.bi_sector | bio_sectors(bio)) & + if (unlikely((((unsigned int)bio->bi_iter.bi_sector | bio_sectors(bio)) & (wc->block_size / 512 - 1)) != 0)) { DMERR("I/O is not aligned, sector %llu, size %u, block size %u", (unsigned long long)bio->bi_iter.bi_sector, @@ -1666,7 +1666,7 @@ static void writecache_copy_endio(int read_err, unsigned long write_err, void *ptr)
static void __writecache_endio_pmem(struct dm_writecache *wc, struct list_head *list) { - unsigned i; + unsigned int i; struct writeback_struct *wb; struct wc_entry *e; unsigned long n_walked = 0; @@ -1782,7 +1782,7 @@ static int writecache_endio_thread(void *data) static bool wc_add_block(struct writeback_struct *wb, struct wc_entry *e) { struct dm_writecache *wc = wb->wc; - unsigned block_size = wc->block_size; + unsigned int block_size = wc->block_size; void *address = memory_data(wc, e);
persistent_memory_flush_cache(address, block_size); @@ -1817,7 +1817,7 @@ static void __writecache_writeback_pmem(struct dm_writecache *wc, struct writeback_list *wbl) struct wc_entry *e, *f; struct bio *bio; struct writeback_struct *wb; - unsigned max_pages; + unsigned int max_pages;
while (wbl->size) { wbl->size--; @@ -1880,7 +1880,7 @@ static void __writecache_writeback_ssd(struct dm_writecache *wc, struct writeback_list *wbl) struct copy_struct *c;
while (wbl->size) { - unsigned n_sectors; + unsigned int n_sectors;
wbl->size--; e = container_of(wbl->list.prev, struct wc_entry, lru); @@ -2092,7 +2092,7 @@ static void writecache_writeback(struct work_struct *work) } }
-static int calculate_memory_size(uint64_t device_size, unsigned block_size, +static int calculate_memory_size(uint64_t device_size, unsigned int block_size, size_t *n_blocks_p, size_t *n_metadata_blocks_p) { uint64_t n_blocks, offset; @@ -2207,12 +2207,12 @@ static void writecache_dtr(struct dm_target *ti) kfree(wc); }
-static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv) +static int writecache_ctr(struct dm_target *ti, unsigned int argc, char **argv) { struct dm_writecache *wc; struct dm_arg_set as; const char *string; - unsigned opt_params; + unsigned int opt_params; size_t offset, data_size; int i, r; char dummy; @@ -2419,7 +2419,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv) goto invalid_optional; wc->autocommit_blocks_set = true; } else if (!strcasecmp(string, "autocommit_time") && opt_params >= 1) { - unsigned autocommit_msecs; + unsigned int autocommit_msecs; string = dm_shift_arg(&as), opt_params--; if (sscanf(string, "%u%c", &autocommit_msecs, &dummy) != 1) goto invalid_optional; @@ -2429,7 +2429,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv) wc->autocommit_time_value = autocommit_msecs; wc->autocommit_time_set = true; } else if (!strcasecmp(string, "max_age") && opt_params >= 1) { - unsigned max_age_msecs; + unsigned int max_age_msecs; string = dm_shift_arg(&as), opt_params--; if (sscanf(string, "%u%c", &max_age_msecs, &dummy) != 1) goto invalid_optional; @@ -2454,7 +2454,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv) } else if (!strcasecmp(string, "metadata_only")) { wc->metadata_only = true; } else if (!strcasecmp(string, "pause_writeback") && opt_params >= 1) { - unsigned pause_msecs; + unsigned int pause_msecs; if (WC_MODE_PMEM(wc)) goto invalid_optional; string = dm_shift_arg(&as), opt_params--; @@ -2653,11 +2653,11 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv) }
static void writecache_status(struct dm_target *ti, status_type_t type, - unsigned status_flags, char *result, unsigned maxlen) + unsigned int status_flags, char *result, unsigned int maxlen) { struct dm_writecache *wc = ti->private; - unsigned extra_args; - unsigned sz = 0; + unsigned int extra_args; + unsigned int sz = 0;
 	switch (type) {
 	case STATUSTYPE_INFO:
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 1b6c3c783a8eb..94e4899d8ac7c 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -83,7 +83,7 @@ struct clone_info {
 	struct bio *bio;
 	struct dm_io *io;
 	sector_t sector;
-	unsigned sector_count;
+	unsigned int sector_count;
 	bool is_abnormal_io:1;
 	bool submit_as_polled:1;
 };
@@ -111,7 +111,7 @@ struct bio *dm_bio_from_per_bio_data(void *data, size_t data_size)
 }
 EXPORT_SYMBOL_GPL(dm_bio_from_per_bio_data);

-unsigned dm_bio_get_target_bio_nr(const struct bio *bio)
+unsigned int dm_bio_get_target_bio_nr(const struct bio *bio)
 {
 	return container_of(bio, struct dm_target_io, clone)->target_bio_nr;
 }
@@ -142,7 +142,7 @@ struct table_device {
  * Bio-based DM's mempools' reserved IOs set by the user.
  */
 #define RESERVED_BIO_BASED_IOS 16
-static unsigned reserved_bio_based_ios = RESERVED_BIO_BASED_IOS;
+static unsigned int reserved_bio_based_ios = RESERVED_BIO_BASED_IOS;

 static int __dm_get_module_param_int(int *module_param, int min, int max)
 {
@@ -165,11 +165,10 @@ static int __dm_get_module_param_int(int *module_param, int min, int max)
 	return param;
 }

-unsigned __dm_get_module_param(unsigned *module_param,
-			       unsigned def, unsigned max)
+unsigned int __dm_get_module_param(unsigned int *module_param, unsigned int def, unsigned int max)
 {
-	unsigned param = READ_ONCE(*module_param);
-	unsigned modified_param = 0;
+	unsigned int param = READ_ONCE(*module_param);
+	unsigned int modified_param = 0;

 	if (!param)
 		modified_param = def;
@@ -184,14 +183,14 @@ unsigned __dm_get_module_param(unsigned *module_param,
 	return param;
 }

-unsigned dm_get_reserved_bio_based_ios(void)
+unsigned int dm_get_reserved_bio_based_ios(void)
 {
 	return __dm_get_module_param(&reserved_bio_based_ios,
 				     RESERVED_BIO_BASED_IOS, DM_RESERVED_MAX_IOS);
 }
 EXPORT_SYMBOL_GPL(dm_get_reserved_bio_based_ios);

-static unsigned dm_get_numa_node(void)
+static unsigned int dm_get_numa_node(void)
 {
 	return __dm_get_module_param_int(&dm_numa_node, DM_NUMA_NODE,
 					 num_online_nodes() - 1);
@@ -603,7 +602,7 @@ static void free_io(struct dm_io *io)
 }

 static struct bio *alloc_tio(struct clone_info *ci, struct dm_target *ti,
-		unsigned target_bio_nr, unsigned *len, gfp_t gfp_mask)
+		unsigned int target_bio_nr, unsigned int *len, gfp_t gfp_mask)
 {
 	struct mapped_device *md = ci->io->md;
 	struct dm_target_io *tio;
@@ -1314,11 +1313,11 @@ static size_t dm_dax_recovery_write(struct dax_device *dax_dev, pgoff_t pgoff,
 * the partially processed part (the sum of regions 1+2) must be the same for all
 * copies of the bio.
 */
-void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
+void dm_accept_partial_bio(struct bio *bio, unsigned int n_sectors)
 {
 	struct dm_target_io *tio = clone_to_tio(bio);
 	struct dm_io *io = tio->io;
-	unsigned bio_sectors = bio_sectors(bio);
+	unsigned int bio_sectors = bio_sectors(bio);

 	BUG_ON(dm_tio_flagged(tio, DM_TIO_IS_DUPLICATE_BIO));
 	BUG_ON(op_is_zone_mgmt(bio_op(bio)));
@@ -1447,7 +1446,7 @@ static void __map_bio(struct bio *clone)
 	}
 }

-static void setup_split_accounting(struct clone_info *ci, unsigned len)
+static void setup_split_accounting(struct clone_info *ci, unsigned int len)
 {
 	struct dm_io *io = ci->io;

@@ -1463,7 +1462,7 @@ static void setup_split_accounting(struct clone_info *ci, unsigned len)
 }
 static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
-				struct dm_target *ti, unsigned num_bios)
+				struct dm_target *ti, unsigned int num_bios)
 {
 	struct bio *bio;
 	int try;
@@ -1492,7 +1491,7 @@ static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
 }

 static int __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti,
-				 unsigned int num_bios, unsigned *len)
+				 unsigned int num_bios, unsigned int *len)
 {
 	struct bio_list blist = BIO_EMPTY_LIST;
 	struct bio *clone;
@@ -1560,10 +1559,9 @@ static void __send_empty_flush(struct clone_info *ci)
 }

 static void __send_changing_extent_only(struct clone_info *ci, struct dm_target *ti,
-					unsigned num_bios)
+					unsigned int num_bios)
 {
-	unsigned len;
-	unsigned int bios;
+	unsigned int len, bios;

 	len = min_t(sector_t, ci->sector_count,
 		    max_io_len_target_boundary(ti, dm_target_offset(ti, ci->sector)));
@@ -1601,7 +1599,7 @@ static bool is_abnormal_io(struct bio *bio)
 static blk_status_t __process_abnormal_io(struct clone_info *ci,
 					  struct dm_target *ti)
 {
-	unsigned num_bios = 0;
+	unsigned int num_bios = 0;

 	switch (bio_op(ci->bio)) {
 	case REQ_OP_DISCARD:
@@ -1679,7 +1677,7 @@ static blk_status_t __split_and_process_bio(struct clone_info *ci)
 {
 	struct bio *clone;
 	struct dm_target *ti;
-	unsigned len;
+	unsigned int len;

 	ti = dm_table_find_target(ci->map, ci->sector);
 	if (unlikely(!ti))
@@ -2376,7 +2374,7 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 struct mapped_device *dm_get_md(dev_t dev)
 {
 	struct mapped_device *md;
-	unsigned minor = MINOR(dev);
+	unsigned int minor = MINOR(dev);

 	if (MAJOR(dev) != _major || minor >= (1 << MINORBITS))
 		return NULL;
@@ -2659,7 +2657,7 @@ static void unlock_fs(struct mapped_device *md)
 * are being added to md->deferred list.
 */
 static int __dm_suspend(struct mapped_device *md, struct dm_table *map,
-			unsigned suspend_flags, unsigned int task_state,
+			unsigned int suspend_flags, unsigned int task_state,
 			int dmf_suspended_flag)
 {
 	bool do_lockfs = suspend_flags & DM_SUSPEND_LOCKFS_FLAG;
@@ -2766,7 +2764,7 @@ static int __dm_suspend(struct mapped_device *md, struct dm_table *map,
 *
 * To abort suspend, start the request_queue.
 */
-int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
+int dm_suspend(struct mapped_device *md, unsigned int suspend_flags)
 {
 	struct dm_table *map = NULL;
 	int r = 0;
@@ -2868,7 +2866,7 @@ int dm_resume(struct mapped_device *md)
 * It may be used only from the kernel.
 */

-static void __dm_internal_suspend(struct mapped_device *md, unsigned suspend_flags)
+static void __dm_internal_suspend(struct mapped_device *md, unsigned int suspend_flags)
 {
 	struct dm_table *map = NULL;

@@ -2970,10 +2968,10 @@ EXPORT_SYMBOL_GPL(dm_internal_resume_fast);
 * Event notification.
 *---------------------------------------------------------------*/
 int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,
-		      unsigned cookie, bool need_resize_uevent)
+		      unsigned int cookie, bool need_resize_uevent)
 {
 	int r;
-	unsigned noio_flag;
+	unsigned int noio_flag;
 	char udev_cookie[DM_COOKIE_LENGTH];
 	char *envp[3] = { NULL, NULL, NULL };
 	char **envpp = envp;
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index a9a3ffcad084c..a7917df09cafb 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -203,7 +203,7 @@ int dm_get_table_device(struct mapped_device *md, dev_t dev, fmode_t mode,
 void dm_put_table_device(struct mapped_device *md, struct dm_dev *d);
 int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,
-		      unsigned cookie, bool need_resize_uevent);
+		      unsigned int cookie, bool need_resize_uevent);

 void dm_internal_suspend(struct mapped_device *md);
 void dm_internal_resume(struct mapped_device *md);
@@ -222,6 +222,6 @@ void dm_free_md_mempools(struct dm_md_mempools *pools);
 /*
 * Various helpers
 */
-unsigned dm_get_reserved_bio_based_ios(void);
+unsigned int dm_get_reserved_bio_based_ios(void);

 #endif
diff --git a/drivers/md/persistent-data/dm-array.c b/drivers/md/persistent-data/dm-array.c
index 3a963d783a865..eff9b41869f29 100644
--- a/drivers/md/persistent-data/dm-array.c
+++ b/drivers/md/persistent-data/dm-array.c
@@ -68,8 +68,8 @@ static int array_block_check(struct dm_block_validator *v,
 					       CSUM_XOR));
 	if (csum_disk != bh_le->csum) {
 		DMERR_LIMIT("array_block_check failed: csum %u != wanted %u",
-			    (unsigned) le32_to_cpu(csum_disk),
-			    (unsigned) le32_to_cpu(bh_le->csum));
+			    (unsigned int) le32_to_cpu(csum_disk),
+			    (unsigned int) le32_to_cpu(bh_le->csum));
 		return -EILSEQ;
 	}
@@ -94,7 +94,7 @@ static struct dm_block_validator array_validator = {
 * index - The index into _this_ specific block.
 */
 static void *element_at(struct dm_array_info *info, struct array_block *ab,
-			unsigned index)
+			unsigned int index)
 {
 	unsigned char *entry = (unsigned char *) (ab + 1);

@@ -108,9 +108,9 @@ static void *element_at(struct dm_array_info *info, struct array_block *ab,
 * in an array block.
 */
 static void on_entries(struct dm_array_info *info, struct array_block *ab,
-		       void (*fn)(void *, const void *, unsigned))
+		       void (*fn)(void *, const void *, unsigned int))
 {
-	unsigned nr_entries = le32_to_cpu(ab->nr_entries);
+	unsigned int nr_entries = le32_to_cpu(ab->nr_entries);
 	fn(info->value_type.context, element_at(info, ab, 0), nr_entries);
 }

@@ -171,7 +171,7 @@ static int alloc_ablock(struct dm_array_info *info, size_t size_of_block,
 * the current number of entries.
 */
 static void fill_ablock(struct dm_array_info *info, struct array_block *ab,
-			const void *value, unsigned new_nr)
+			const void *value, unsigned int new_nr)
 {
 	uint32_t nr_entries, delta, i;
 	struct dm_btree_value_type *vt = &info->value_type;
@@ -194,7 +194,7 @@ static void fill_ablock(struct dm_array_info *info, struct array_block *ab,
 * entries.
 */
 static void trim_ablock(struct dm_array_info *info, struct array_block *ab,
-			unsigned new_nr)
+			unsigned int new_nr)
 {
 	uint32_t nr_entries, delta;
 	struct dm_btree_value_type *vt = &info->value_type;
@@ -247,7 +247,7 @@ static void unlock_ablock(struct dm_array_info *info, struct dm_block *block)
 * / max_entries).
 */
 static int lookup_ablock(struct dm_array_info *info, dm_block_t root,
-			 unsigned index, struct dm_block **block,
+			 unsigned int index, struct dm_block **block,
 			 struct array_block **ab)
 {
 	int r;
@@ -295,7 +295,7 @@ static int __shadow_ablock(struct dm_array_info *info, dm_block_t b,
 * The shadow op will often be a noop. Only insert if it really
 * copied data.
 */
-static int __reinsert_ablock(struct dm_array_info *info, unsigned index,
+static int __reinsert_ablock(struct dm_array_info *info, unsigned int index,
 			     struct dm_block *block, dm_block_t b,
 			     dm_block_t *root)
 {
@@ -321,7 +321,7 @@ static int __reinsert_ablock(struct dm_array_info *info, unsigned index,
 * for both the current root block, and the new one.
 */
 static int shadow_ablock(struct dm_array_info *info, dm_block_t *root,
-			 unsigned index, struct dm_block **block,
+			 unsigned int index, struct dm_block **block,
 			 struct array_block **ab)
 {
 	int r;
@@ -346,7 +346,7 @@ static int shadow_ablock(struct dm_array_info *info, dm_block_t *root,
 */
 static int insert_new_ablock(struct dm_array_info *info, size_t size_of_block,
 			     uint32_t max_entries,
-			     unsigned block_index, uint32_t nr,
+			     unsigned int block_index, uint32_t nr,
 			     const void *value, dm_block_t *root)
 {
 	int r;
@@ -365,8 +365,8 @@ static int insert_new_ablock(struct dm_array_info *info, size_t size_of_block,
 }

 static int insert_full_ablocks(struct dm_array_info *info, size_t size_of_block,
-			       unsigned begin_block, unsigned end_block,
-			       unsigned max_entries, const void *value,
+			       unsigned int begin_block, unsigned int end_block,
+			       unsigned int max_entries, const void *value,
 			       dm_block_t *root)
 {
 	int r = 0;
@@ -402,20 +402,20 @@ struct resize {
 	/*
 	 * Maximum nr entries in an array block.
 	 */
-	unsigned max_entries;
+	unsigned int max_entries;

 	/*
 	 * nr of completely full blocks in the array.
 	 *
 	 * 'old' refers to before the resize, 'new' after.
 	 */
-	unsigned old_nr_full_blocks, new_nr_full_blocks;
+	unsigned int old_nr_full_blocks, new_nr_full_blocks;

 	/*
 	 * Number of entries in the final block. 0 iff only full blocks in
 	 * the array.
 	 */
-	unsigned old_nr_entries_in_last_block, new_nr_entries_in_last_block;
+	unsigned int old_nr_entries_in_last_block, new_nr_entries_in_last_block;

 	/*
 	 * The default value used when growing the array.
@@ -430,8 +430,8 @@ struct resize {
 * begin_index - the index of the first array block to remove.
 * end_index - the one-past-the-end value. ie. this block is not removed.
 */
-static int drop_blocks(struct resize *resize, unsigned begin_index,
-		       unsigned end_index)
+static int drop_blocks(struct resize *resize, unsigned int begin_index,
+		       unsigned int end_index)
 {
 	int r;

@@ -449,8 +449,8 @@ static int drop_blocks(struct resize *resize, unsigned begin_index,
 /*
 * Calculates how many blocks are needed for the array.
 */
-static unsigned total_nr_blocks_needed(unsigned nr_full_blocks,
-				       unsigned nr_entries_in_last_block)
+static unsigned int total_nr_blocks_needed(unsigned int nr_full_blocks,
+					   unsigned int nr_entries_in_last_block)
 {
 	return nr_full_blocks + (nr_entries_in_last_block ? 1 : 0);
 }
@@ -461,7 +461,7 @@ static unsigned total_nr_blocks_needed(unsigned nr_full_blocks,
 static int shrink(struct resize *resize)
 {
 	int r;
-	unsigned begin, end;
+	unsigned int begin, end;
 	struct dm_block *block;
 	struct array_block *ab;
@@ -527,7 +527,7 @@ static int grow_add_tail_block(struct resize *resize)
 static int grow_needs_more_blocks(struct resize *resize)
 {
 	int r;
-	unsigned old_nr_blocks = resize->old_nr_full_blocks;
+	unsigned int old_nr_blocks = resize->old_nr_full_blocks;

 	if (resize->old_nr_entries_in_last_block > 0) {
 		old_nr_blocks++;
@@ -569,11 +569,11 @@ static int grow(struct resize *resize)
 * These are the value_type functions for the btree elements, which point
 * to array blocks.
 */
-static void block_inc(void *context, const void *value, unsigned count)
+static void block_inc(void *context, const void *value, unsigned int count)
 {
 	const __le64 *block_le = value;
 	struct dm_array_info *info = context;
-	unsigned i;
+	unsigned int i;

 	for (i = 0; i < count; i++, block_le++)
 		dm_tm_inc(info->btree_info.tm, le64_to_cpu(*block_le));
@@ -618,9 +618,9 @@ static void __block_dec(void *context, const void *value)
 	dm_tm_dec(info->btree_info.tm, b);
 }

-static void block_dec(void *context, const void *value, unsigned count)
+static void block_dec(void *context, const void *value, unsigned int count)
 {
-	unsigned i;
+	unsigned int i;
 	for (i = 0; i < count; i++, value += sizeof(__le64))
 		__block_dec(context, value);
 }
@@ -700,10 +700,11 @@ int dm_array_resize(struct dm_array_info *info, dm_block_t root,
 EXPORT_SYMBOL_GPL(dm_array_resize);

 static int populate_ablock_with_values(struct dm_array_info *info, struct array_block *ab,
-					value_fn fn, void *context, unsigned base, unsigned new_nr)
+					value_fn fn, void *context,
+					unsigned int base, unsigned int new_nr)
 {
 	int r;
-	unsigned i;
+	unsigned int i;
 	struct dm_btree_value_type *vt = &info->value_type;

 	BUG_ON(le32_to_cpu(ab->nr_entries));
@@ -728,7 +729,7 @@ int dm_array_new(struct dm_array_info *info, dm_block_t *root,
 	int r;
 	struct dm_block *block;
 	struct array_block *ab;
-	unsigned block_index, end_block, size_of_block, max_entries;
+	unsigned int block_index, end_block, size_of_block, max_entries;

 	r = dm_array_empty(info, root);
 	if (r)
@@ -776,7 +777,7 @@ int dm_array_get_value(struct dm_array_info *info, dm_block_t root,
 	struct dm_block *block;
 	struct array_block *ab;
 	size_t size_of_block;
-	unsigned entry, max_entries;
+	unsigned int entry, max_entries;

 	size_of_block = dm_bm_block_size(dm_tm_get_bm(info->btree_info.tm));
 	max_entries = calc_max_entries(info->value_type.size, size_of_block);
@@ -804,8 +805,8 @@ static int array_set_value(struct dm_array_info *info, dm_block_t root,
 	struct dm_block *block;
 	struct array_block *ab;
 	size_t size_of_block;
-	unsigned max_entries;
-	unsigned entry;
+	unsigned int max_entries;
+	unsigned int entry;
 	void *old_value;
 	struct dm_btree_value_type *vt = &info->value_type;

@@ -861,9 +862,9 @@ static int walk_ablock(void *context, uint64_t *keys, void *leaf)
 	struct walk_info *wi = context;

 	int r;
-	unsigned i;
+	unsigned int i;
 	__le64 block_le;
-	unsigned nr_entries, max_entries;
+	unsigned int nr_entries, max_entries;
 	struct dm_block *block;
 	struct array_block *ab;

diff --git a/drivers/md/persistent-data/dm-array.h b/drivers/md/persistent-data/dm-array.h
index d7d2d579c662c..b6c7077c73591 100644
--- a/drivers/md/persistent-data/dm-array.h
+++ b/drivers/md/persistent-data/dm-array.h
@@ -198,7 +198,7 @@ struct dm_array_cursor {

 	struct dm_block *block;
 	struct array_block *ab;
-	unsigned index;
+	unsigned int index;
 };
 int dm_array_cursor_begin(struct dm_array_info *info,
diff --git a/drivers/md/persistent-data/dm-bitset.c b/drivers/md/persistent-data/dm-bitset.c
index b7208d82e748a..625d93498cddb 100644
--- a/drivers/md/persistent-data/dm-bitset.c
+++ b/drivers/md/persistent-data/dm-bitset.c
@@ -41,7 +41,7 @@ EXPORT_SYMBOL_GPL(dm_bitset_empty);

 struct packer_context {
 	bit_value_fn fn;
-	unsigned nr_bits;
+	unsigned int nr_bits;
 	void *context;
 };

@@ -49,7 +49,7 @@ static int pack_bits(uint32_t index, void *value, void *context)
 {
 	int r;
 	struct packer_context *p = context;
-	unsigned bit, nr = min(64u, p->nr_bits - (index * 64));
+	unsigned int bit, nr = min(64u, p->nr_bits - (index * 64));
 	uint64_t word = 0;
 	bool bv;

@@ -147,7 +147,7 @@ static int get_array_entry(struct dm_disk_bitset *info, dm_block_t root,
 			   uint32_t index, dm_block_t *new_root)
 {
 	int r;
-	unsigned array_index = index / BITS_PER_ARRAY_ENTRY;
+	unsigned int array_index = index / BITS_PER_ARRAY_ENTRY;

 	if (info->current_index_set) {
 		if (info->current_index == array_index)
@@ -165,7 +165,7 @@ int dm_bitset_set_bit(struct dm_disk_bitset *info, dm_block_t root,
 		      uint32_t index, dm_block_t *new_root)
 {
 	int r;
-	unsigned b = index % BITS_PER_ARRAY_ENTRY;
+	unsigned int b = index % BITS_PER_ARRAY_ENTRY;

 	r = get_array_entry(info, root, index, new_root);
 	if (r)
@@ -182,7 +182,7 @@ int dm_bitset_clear_bit(struct dm_disk_bitset *info, dm_block_t root,
 			uint32_t index, dm_block_t *new_root)
 {
 	int r;
-	unsigned b = index % BITS_PER_ARRAY_ENTRY;
+	unsigned int b = index % BITS_PER_ARRAY_ENTRY;

 	r = get_array_entry(info, root, index, new_root);
 	if (r)
@@ -199,7 +199,7 @@ int dm_bitset_test_bit(struct dm_disk_bitset *info, dm_block_t root,
 		       uint32_t index, dm_block_t *new_root, bool *result)
 {
 	int r;
-	unsigned b = index % BITS_PER_ARRAY_ENTRY;
+	unsigned int b = index % BITS_PER_ARRAY_ENTRY;

 	r = get_array_entry(info, root, index, new_root);
 	if (r)
diff --git a/drivers/md/persistent-data/dm-block-manager.c b/drivers/md/persistent-data/dm-block-manager.c
index 11935864f50f5..1f40100908d7c 100644
--- a/drivers/md/persistent-data/dm-block-manager.c
+++ b/drivers/md/persistent-data/dm-block-manager.c
@@ -57,10 +57,10 @@ struct waiter {
 	int wants_write;
 };
-static unsigned __find_holder(struct block_lock *lock,
+static unsigned int __find_holder(struct block_lock *lock,
 			      struct task_struct *task)
 {
-	unsigned i;
+	unsigned int i;

 	for (i = 0; i < MAX_HOLDERS; i++)
 		if (lock->holders[i] == task)
@@ -73,7 +73,7 @@ static unsigned __find_holder(struct block_lock *lock,
 /* call this *after* you increment lock->count */
 static void __add_holder(struct block_lock *lock, struct task_struct *task)
 {
-	unsigned h = __find_holder(lock, NULL);
+	unsigned int h = __find_holder(lock, NULL);
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 	struct stack_store *t;
 #endif
@@ -90,14 +90,14 @@ static void __add_holder(struct block_lock *lock, struct task_struct *task)
 /* call this *before* you decrement lock->count */
 static void __del_holder(struct block_lock *lock, struct task_struct *task)
 {
-	unsigned h = __find_holder(lock, task);
+	unsigned int h = __find_holder(lock, task);
 	lock->holders[h] = NULL;
 	put_task_struct(task);
 }

 static int __check_holder(struct block_lock *lock)
 {
-	unsigned i;
+	unsigned int i;

 	for (i = 0; i < MAX_HOLDERS; i++) {
 		if (lock->holders[i] == current) {
@@ -376,8 +376,8 @@ struct dm_block_manager {
 };

 struct dm_block_manager *dm_block_manager_create(struct block_device *bdev,
-						 unsigned block_size,
-						 unsigned max_held_per_thread)
+						 unsigned int block_size,
+						 unsigned int max_held_per_thread)
 {
 	int r;
 	struct dm_block_manager *bm;
@@ -415,7 +415,7 @@ void dm_block_manager_destroy(struct dm_block_manager *bm)
 }
 EXPORT_SYMBOL_GPL(dm_block_manager_destroy);

-unsigned dm_bm_block_size(struct dm_block_manager *bm)
+unsigned int dm_bm_block_size(struct dm_block_manager *bm)
 {
 	return dm_bufio_get_block_size(bm->bufio);
 }
diff --git a/drivers/md/persistent-data/dm-block-manager.h b/drivers/md/persistent-data/dm-block-manager.h
index e728937f376a3..58a23b8ec1902 100644
--- a/drivers/md/persistent-data/dm-block-manager.h
+++ b/drivers/md/persistent-data/dm-block-manager.h
@@ -32,11 +32,11 @@ void *dm_block_data(struct dm_block *b);
 */
 struct dm_block_manager;
 struct dm_block_manager *dm_block_manager_create(
-	struct block_device *bdev, unsigned block_size,
-	unsigned max_held_per_thread);
+	struct block_device *bdev, unsigned int block_size,
+	unsigned int max_held_per_thread);
 void dm_block_manager_destroy(struct dm_block_manager *bm);

-unsigned dm_bm_block_size(struct dm_block_manager *bm);
+unsigned int dm_bm_block_size(struct dm_block_manager *bm);
 dm_block_t dm_bm_nr_blocks(struct dm_block_manager *bm);

 /*----------------------------------------------------------------*/
diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
index 4ead31e0d8ce5..ac213138b0217 100644
--- a/drivers/md/persistent-data/dm-btree-remove.c
+++ b/drivers/md/persistent-data/dm-btree-remove.c
@@ -124,10 +124,10 @@ static int node_copy(struct btree_node *left, struct btree_node *right, int shif
 /*
 * Delete a specific entry from a leaf node.
 */
-static void delete_at(struct btree_node *n, unsigned index)
+static void delete_at(struct btree_node *n, unsigned int index)
 {
-	unsigned nr_entries = le32_to_cpu(n->header.nr_entries);
-	unsigned nr_to_copy = nr_entries - (index + 1);
+	unsigned int nr_entries = le32_to_cpu(n->header.nr_entries);
+	unsigned int nr_to_copy = nr_entries - (index + 1);
 	uint32_t value_size = le32_to_cpu(n->header.value_size);
 	BUG_ON(index >= nr_entries);
@@ -144,20 +144,20 @@ static void delete_at(struct btree_node *n, unsigned index)
 	n->header.nr_entries = cpu_to_le32(nr_entries - 1);
 }

-static unsigned merge_threshold(struct btree_node *n)
+static unsigned int merge_threshold(struct btree_node *n)
 {
 	return le32_to_cpu(n->header.max_entries) / 3;
 }

 struct child {
-	unsigned index;
+	unsigned int index;
 	struct dm_block *block;
 	struct btree_node *n;
 };

 static int init_child(struct dm_btree_info *info, struct dm_btree_value_type *vt,
 		      struct btree_node *parent,
-		      unsigned index, struct child *result)
+		      unsigned int index, struct child *result)
 {
 	int r, inc;
 	dm_block_t root;
@@ -263,7 +263,7 @@ static int __rebalance2(struct dm_btree_info *info, struct btree_node *parent,
 		/*
 		 * Rebalance.
 		 */
-		unsigned target_left = (nr_left + nr_right) / 2;
+		unsigned int target_left = (nr_left + nr_right) / 2;
 		ret = shift(left, right, nr_left - target_left);
 		if (ret)
 			return ret;
@@ -273,7 +273,7 @@ static int __rebalance2(struct dm_btree_info *info, struct btree_node *parent,
 }

 static int rebalance2(struct shadow_spine *s, struct dm_btree_info *info,
-		      struct dm_btree_value_type *vt, unsigned left_index)
+		      struct dm_btree_value_type *vt, unsigned int left_index)
 {
 	int r;
 	struct btree_node *parent;
@@ -310,7 +310,7 @@ static int delete_center_node(struct dm_btree_info *info, struct btree_node *par
 			      uint32_t nr_left, uint32_t nr_center, uint32_t nr_right)
 {
 	uint32_t max_entries = le32_to_cpu(left->header.max_entries);
-	unsigned shift = min(max_entries - nr_left, nr_center);
+	unsigned int shift = min(max_entries - nr_left, nr_center);

 	if (nr_left + shift > max_entries) {
 		DMERR("node shift out of bounds");
@@ -351,10 +351,10 @@ static int redistribute3(struct dm_btree_info *info, struct btree_node *parent,
 {
 	int s, ret;
 	uint32_t max_entries = le32_to_cpu(left->header.max_entries);
-	unsigned total = nr_left + nr_center + nr_right;
-	unsigned target_right = total / 3;
-	unsigned remainder = (target_right * 3) != total;
-	unsigned target_left = target_right + remainder;
+	unsigned int total = nr_left + nr_center + nr_right;
+	unsigned int target_right = total / 3;
+	unsigned int remainder = (target_right * 3) != total;
+	unsigned int target_left = target_right + remainder;

 	BUG_ON(target_left > max_entries);
 	BUG_ON(target_right > max_entries);
@@ -422,7 +422,7 @@ static int __rebalance3(struct dm_btree_info *info, struct btree_node *parent,
 	uint32_t nr_center = le32_to_cpu(center->header.nr_entries);
 	uint32_t nr_right = le32_to_cpu(right->header.nr_entries);

-	unsigned threshold = merge_threshold(left) * 4 + 1;
+	unsigned int threshold = merge_threshold(left) * 4 + 1;

 	if ((left->header.max_entries != center->header.max_entries) ||
 	    (center->header.max_entries != right->header.max_entries)) {
@@ -440,7 +440,7 @@ static int __rebalance3(struct dm_btree_info *info, struct btree_node *parent,
 }

 static int rebalance3(struct shadow_spine *s, struct dm_btree_info *info,
-		      struct dm_btree_value_type *vt, unsigned left_index)
+		      struct dm_btree_value_type *vt, unsigned int left_index)
 {
 	int r;
 	struct btree_node *parent = dm_block_data(shadow_current(s));
@@ -519,7 +519,7 @@ static int rebalance_children(struct shadow_spine *s,
 	return r;
 }

-static int do_leaf(struct btree_node *n, uint64_t key, unsigned *index)
+static int do_leaf(struct btree_node *n, uint64_t key, unsigned int *index)
 {
 	int i = lower_bound(n, key);

@@ -539,7 +539,7 @@ static int do_leaf(struct btree_node *n, uint64_t key, unsigned *index)
 */
 static int remove_raw(struct shadow_spine *s, struct dm_btree_info *info,
 		      struct dm_btree_value_type *vt, dm_block_t root,
-		      uint64_t key, unsigned *index)
+		      uint64_t key, unsigned int *index)
 {
 	int i = *index, r;
 	struct btree_node *n;
@@ -589,7 +589,7 @@ static int remove_raw(struct shadow_spine *s, struct dm_btree_info *info,
 int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
 		    uint64_t *keys, dm_block_t *new_root)
 {
-	unsigned level, last_level = info->levels - 1;
+	unsigned int level, last_level = info->levels - 1;
 	int index = 0, r = 0;
 	struct shadow_spine spine;
 	struct btree_node *n;
@@ -601,7 +601,7 @@ int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
 		r = remove_raw(&spine, info,
 			       (level == last_level ?
 				&info->value_type : &le64_vt),
-			       root, keys[level], (unsigned *)&index);
+			       root, keys[level], (unsigned int *)&index);
 		if (r < 0)
 			break;
@@ -685,9 +685,9 @@ static int remove_nearest(struct shadow_spine *s, struct dm_btree_info *info,

 static int remove_one(struct dm_btree_info *info, dm_block_t root,
 		      uint64_t *keys, uint64_t end_key,
-		      dm_block_t *new_root, unsigned *nr_removed)
+		      dm_block_t *new_root, unsigned int *nr_removed)
 {
-	unsigned level, last_level = info->levels - 1;
+	unsigned int level, last_level = info->levels - 1;
 	int index = 0, r = 0;
 	struct shadow_spine spine;
 	struct btree_node *n;
@@ -698,7 +698,7 @@ static int remove_one(struct dm_btree_info *info, dm_block_t root,
 	init_shadow_spine(&spine, info);
 	for (level = 0; level < last_level; level++) {
 		r = remove_raw(&spine, info, &le64_vt,
-			       root, keys[level], (unsigned *) &index);
+			       root, keys[level], (unsigned int *) &index);
 		if (r < 0)
 			goto out;

@@ -742,7 +742,7 @@ static int remove_one(struct dm_btree_info *info, dm_block_t root,

 int dm_btree_remove_leaves(struct dm_btree_info *info, dm_block_t root,
 			   uint64_t *first_key, uint64_t end_key,
-			   dm_block_t *new_root, unsigned *nr_removed)
+			   dm_block_t *new_root, unsigned int *nr_removed)
 {
 	int r;

diff --git a/drivers/md/persistent-data/dm-btree-spine.c b/drivers/md/persistent-data/dm-btree-spine.c
index e653458888a7c..45a39d4f1c10f 100644
--- a/drivers/md/persistent-data/dm-btree-spine.c
+++ b/drivers/md/persistent-data/dm-btree-spine.c
@@ -234,12 +234,12 @@ dm_block_t shadow_root(struct shadow_spine *s)
 	return s->root;
 }

-static void le64_inc(void *context, const void *value_le, unsigned count)
+static void le64_inc(void *context, const void *value_le, unsigned int count)
 {
 	dm_tm_with_runs(context, value_le, count, dm_tm_inc_range);
 }

-static void le64_dec(void *context, const void *value_le, unsigned count)
+static void le64_dec(void *context, const void *value_le, unsigned int count)
 {
 	dm_tm_with_runs(context, value_le, count, dm_tm_dec_range);
 }
diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
index 5ce64e93aae74..1cc783d7030d8 100644
--- a/drivers/md/persistent-data/dm-btree.c
+++ b/drivers/md/persistent-data/dm-btree.c
@@ -23,8 +23,8 @@ static void memcpy_disk(void *dest, const void *src, size_t len)
 	__dm_unbless_for_disk(src);
 }

-static void array_insert(void *base, size_t elt_size, unsigned nr_elts,
-			 unsigned index, void *elt)
+static void array_insert(void *base, size_t elt_size, unsigned int nr_elts,
+			 unsigned int index, void *elt)
 	__dm_written_to_disk(elt)
 {
 	if (index < nr_elts)
@@ -80,7 +80,7 @@ void inc_children(struct dm_transaction_manager *tm, struct btree_node *n,
 	vt->inc(vt->context, value_ptr(n, 0), nr_entries);
 }

-static int insert_at(size_t value_size, struct btree_node *node, unsigned index,
+static int insert_at(size_t value_size, struct btree_node *node, unsigned int index,
 		     uint64_t key, void *value)
 	__dm_written_to_disk(value)
 {
@@ -162,9 +162,9 @@ EXPORT_SYMBOL_GPL(dm_btree_empty);
 struct frame {
 	struct dm_block *b;
 	struct btree_node *n;
-	unsigned level;
-	unsigned nr_children;
-	unsigned current_child;
+	unsigned int level;
+	unsigned int nr_children;
+	unsigned int current_child;
 };

 struct del_stack {
@@ -193,7 +193,7 @@ static int unprocessed_frames(struct del_stack *s)

 static void prefetch_children(struct del_stack *s, struct frame *f)
 {
-	unsigned i;
+	unsigned int i;
 	struct dm_block_manager *bm = dm_tm_get_bm(s->tm);

 	for (i = 0; i < f->nr_children; i++)
@@ -205,7 +205,7 @@ static bool is_internal_level(struct dm_btree_info *info, struct frame *f)
 	return f->level < (info->levels - 1);
 }

-static int push_frame(struct del_stack *s, dm_block_t b, unsigned level)
+static int push_frame(struct del_stack *s, dm_block_t b, unsigned int level)
 {
 	int r;
 	uint32_t ref_count;
@@ -371,7 +371,7 @@ static int btree_lookup_raw(struct ro_spine *s, dm_block_t block, uint64_t key,
 int dm_btree_lookup(struct dm_btree_info *info, dm_block_t root,
 		    uint64_t *keys, void *value_le)
 {
-	unsigned level, last_level = info->levels - 1;
+	unsigned int level, last_level = info->levels - 1;
 	int r = -ENODATA;
 	uint64_t rkey;
 	__le64 internal_value_le;
@@ -467,7 +467,7 @@ static int dm_btree_lookup_next_single(struct dm_btree_info *info, dm_block_t ro
 int dm_btree_lookup_next(struct dm_btree_info *info, dm_block_t root,
 			 uint64_t *keys, uint64_t *rkey, void *value_le)
 {
-	unsigned level;
+	unsigned int level;
 	int r = -ENODATA;
 	__le64 internal_value_le;
 	struct ro_spine spine;
@@ -502,9 +502,9 @@ EXPORT_SYMBOL_GPL(dm_btree_lookup_next);
 * Copies entries from one region of a btree node to another. The regions
 * must not overlap.
 */
-static void copy_entries(struct btree_node *dest, unsigned dest_offset,
-			 struct btree_node *src, unsigned src_offset,
-			 unsigned count)
+static void copy_entries(struct btree_node *dest, unsigned int dest_offset,
+			 struct btree_node *src, unsigned int src_offset,
+			 unsigned int count)
 {
 	size_t value_size = le32_to_cpu(dest->header.value_size);
 	memcpy(dest->keys + dest_offset, src->keys + src_offset, count * sizeof(uint64_t));
@@ -515,9 +515,9 @@ static void copy_entries(struct btree_node *dest, unsigned dest_offset,
 * Moves entries from one region fo a btree node to another. The regions
 * may overlap.
 */
-static void move_entries(struct btree_node *dest, unsigned dest_offset,
-			 struct btree_node *src, unsigned src_offset,
-			 unsigned count)
+static void move_entries(struct btree_node *dest, unsigned int dest_offset,
+			 struct btree_node *src, unsigned int src_offset,
+			 unsigned int count)
 {
 	size_t value_size = le32_to_cpu(dest->header.value_size);
 	memmove(dest->keys + dest_offset, src->keys + src_offset, count * sizeof(uint64_t));
@@ -528,7 +528,7 @@ static void move_entries(struct btree_node *dest, unsigned dest_offset,
 * Erases the first 'count' entries of a btree node, shifting following
 * entries down into their place.
 */
-static void shift_down(struct btree_node *n, unsigned count)
+static void shift_down(struct btree_node *n, unsigned int count)
 {
 	move_entries(n, 0, n, count, le32_to_cpu(n->header.nr_entries) - count);
 }
@@ -537,7 +537,7 @@ static void shift_down(struct btree_node *n, unsigned count)
 * Moves entries in a btree node up 'count' places, making space for
 * new entries at the start of the node.
 */
-static void shift_up(struct btree_node *n, unsigned count)
+static void shift_up(struct btree_node *n, unsigned int count)
 {
 	move_entries(n, count, n, 0, le32_to_cpu(n->header.nr_entries));
 }
@@ -548,18 +548,18 @@ static void shift_up(struct btree_node *n, unsigned count)
 */
 static void redistribute2(struct btree_node *left, struct btree_node *right)
 {
-	unsigned nr_left = le32_to_cpu(left->header.nr_entries);
-	unsigned nr_right = le32_to_cpu(right->header.nr_entries);
-	unsigned total = nr_left + nr_right;
-	unsigned target_left = total / 2;
-	unsigned target_right = total - target_left;
+	unsigned int nr_left = le32_to_cpu(left->header.nr_entries);
+	unsigned int nr_right = le32_to_cpu(right->header.nr_entries);
+	unsigned int total = nr_left + nr_right;
+	unsigned int target_left = total / 2;
+	unsigned int target_right = total - target_left;

 	if (nr_left < target_left) {
-		unsigned delta = target_left - nr_left;
+		unsigned int delta = target_left - nr_left;
 		copy_entries(left, nr_left, right, 0, delta);
 		shift_down(right, delta);
 	} else if (nr_left > target_left) {
-		unsigned delta = nr_left - target_left;
+		unsigned int delta = nr_left - target_left;
 		if (nr_right)
 			shift_up(right, delta);
 		copy_entries(right, 0, left, target_left, delta);
@@ -576,10 +576,10 @@ static void redistribute2(struct btree_node *left, struct btree_node *right)
 static void redistribute3(struct btree_node *left, struct btree_node *center,
 			  struct btree_node *right)
 {
-	unsigned nr_left = le32_to_cpu(left->header.nr_entries);
-	unsigned nr_center = le32_to_cpu(center->header.nr_entries);
-	unsigned nr_right = le32_to_cpu(right->header.nr_entries);
-	unsigned total, target_left, target_center, target_right;
+	unsigned int nr_left = le32_to_cpu(left->header.nr_entries);
+	unsigned int nr_center = le32_to_cpu(center->header.nr_entries);
+	unsigned int nr_right = le32_to_cpu(right->header.nr_entries);
+	unsigned int total, target_left, target_center, target_right;
BUG_ON(nr_center);
@@ -589,19 +589,19 @@ static void redistribute3(struct btree_node *left, struct btree_node *center,
 	target_right = (total - target_left - target_center);

 	if (nr_left < target_left) {
-		unsigned left_short = target_left - nr_left;
+		unsigned int left_short = target_left - nr_left;
 		copy_entries(left, nr_left, right, 0, left_short);
 		copy_entries(center, 0, right, left_short, target_center);
 		shift_down(right, nr_right - target_right);

 	} else if (nr_left < (target_left + target_center)) {
-		unsigned left_to_center = nr_left - target_left;
+		unsigned int left_to_center = nr_left - target_left;
 		copy_entries(center, 0, left, target_left, left_to_center);
 		copy_entries(center, left_to_center, right, 0, target_center - left_to_center);
 		shift_down(right, nr_right - target_right);

 	} else {
-		unsigned right_short = target_right - nr_right;
+		unsigned int right_short = target_right - nr_right;
 		shift_up(right, right_short);
 		copy_entries(right, 0, left, nr_left - right_short, right_short);
 		copy_entries(center, 0, left, target_left, nr_left - target_left);
@@ -642,7 +642,7 @@ static void redistribute3(struct btree_node *left, struct btree_node *center,
 *
 * Where A* is a shadow of A.
 */
-static int split_one_into_two(struct shadow_spine *s, unsigned parent_index,
+static int split_one_into_two(struct shadow_spine *s, unsigned int parent_index,
 			      struct dm_btree_value_type *vt, uint64_t key)
 {
 	int r;
@@ -696,7 +696,7 @@ static int split_one_into_two(struct shadow_spine *s, unsigned parent_index,
 * to the new shadow.
 */
 static int shadow_child(struct dm_btree_info *info, struct dm_btree_value_type *vt,
-			struct btree_node *parent, unsigned index,
+			struct btree_node *parent, unsigned int index,
 			struct dm_block **result)
 {
 	int r, inc;
@@ -725,11 +725,11 @@ static int shadow_child(struct dm_btree_info *info, struct dm_btree_value_type *
 * Splits two nodes into three. This is more work, but results in fuller
 * nodes, so saves metadata space.
 */
-static int split_two_into_three(struct shadow_spine *s, unsigned parent_index,
+static int split_two_into_three(struct shadow_spine *s, unsigned int parent_index,
 				struct dm_btree_value_type *vt, uint64_t key)
 {
 	int r;
-	unsigned middle_index;
+	unsigned int middle_index;
 	struct dm_block *left, *middle, *right, *parent;
 	struct btree_node *ln, *rn, *mn, *pn;
 	__le64 location;
@@ -830,7 +830,7 @@ static int btree_split_beneath(struct shadow_spine *s, uint64_t key)
 {
 	int r;
 	size_t size;
-	unsigned nr_left, nr_right;
+	unsigned int nr_left, nr_right;
 	struct dm_block *left, *right, *new_parent;
 	struct btree_node *pn, *ln, *rn;
 	__le64 val;
@@ -904,7 +904,7 @@ static int btree_split_beneath(struct shadow_spine *s, uint64_t key)
 * Redistributes a node's entries with its left sibling.
 */
 static int rebalance_left(struct shadow_spine *s, struct dm_btree_value_type *vt,
-			  unsigned parent_index, uint64_t key)
+			  unsigned int parent_index, uint64_t key)
 {
 	int r;
 	struct dm_block *sib;
@@ -933,7 +933,7 @@ static int rebalance_left(struct shadow_spine *s, struct dm_btree_value_type *vt
 * Redistributes a nodes entries with its right sibling.
 */
 static int rebalance_right(struct shadow_spine *s, struct dm_btree_value_type *vt,
-			   unsigned parent_index, uint64_t key)
+			   unsigned int parent_index, uint64_t key)
 {
 	int r;
 	struct dm_block *sib;
@@ -961,10 +961,10 @@ static int rebalance_right(struct shadow_spine *s, struct dm_btree_value_type *v
 /*
 * Returns the number of spare entries in a node.
 */
-static int get_node_free_space(struct dm_btree_info *info, dm_block_t b, unsigned *space)
+static int get_node_free_space(struct dm_btree_info *info, dm_block_t b, unsigned int *space)
 {
 	int r;
-	unsigned nr_entries;
+	unsigned int nr_entries;
 	struct dm_block *block;
 	struct btree_node *node;

@@ -990,12 +990,12 @@ static int get_node_free_space(struct dm_btree_info *info, dm_block_t b, unsigne
 */
 #define SPACE_THRESHOLD 8
 static int rebalance_or_split(struct shadow_spine *s, struct dm_btree_value_type *vt,
-			      unsigned parent_index, uint64_t key)
+			      unsigned int parent_index, uint64_t key)
 {
 	int r;
 	struct btree_node *parent = dm_block_data(shadow_parent(s));
-	unsigned nr_parent = le32_to_cpu(parent->header.nr_entries);
-	unsigned free_space;
+	unsigned int nr_parent = le32_to_cpu(parent->header.nr_entries);
+	unsigned int free_space;
 	int left_shared = 0, right_shared = 0;

 	/* Should we move entries to the left sibling? */
@@ -1080,7 +1080,7 @@ static bool has_space_for_insert(struct btree_node *node, uint64_t key)

 static int btree_insert_raw(struct shadow_spine *s, dm_block_t root,
 			    struct dm_btree_value_type *vt,
-			    uint64_t key, unsigned *index)
+			    uint64_t key, unsigned int *index)
 {
 	int r, i = *index, top = 1;
 	struct btree_node *node;
@@ -1214,7 +1214,7 @@ int btree_get_overwrite_leaf(struct dm_btree_info *info, dm_block_t root,
 }

 static bool need_insert(struct btree_node *node, uint64_t *keys,
-			unsigned level, unsigned index)
+			unsigned int level, unsigned int index)
 {
 	return ((index >= le32_to_cpu(node->header.nr_entries)) ||
 		(le64_to_cpu(node->keys[index]) != keys[level]));
@@ -1226,7 +1226,7 @@ static int insert(struct dm_btree_info *info, dm_block_t root,
 	__dm_written_to_disk(value)
 {
 	int r;
-	unsigned level, index = -1, last_level = info->levels - 1;
+	unsigned int level, index = -1, last_level = info->levels - 1;
 	dm_block_t block = root;
 	struct shadow_spine spine;
 	struct btree_node *n;
@@ -1412,7 +1412,7 @@ static int walk_node(struct dm_btree_info *info, dm_block_t block,
 		     void *context)
 {
 	int r;
-	unsigned i, nr;
+	unsigned int i, nr;
 	struct dm_block *node;
 	struct btree_node *n;
 	uint64_t keys;
@@ -1455,7 +1455,7 @@ EXPORT_SYMBOL_GPL(dm_btree_walk);
 static void prefetch_values(struct dm_btree_cursor *c)
 {
-	unsigned i, nr;
+	unsigned int i, nr;
 	__le64 value_le;
 	struct cursor_node *n = c->nodes + c->depth - 1;
 	struct btree_node *bn = dm_block_data(n->b);
diff --git a/drivers/md/persistent-data/dm-btree.h b/drivers/md/persistent-data/dm-btree.h
index d2ae5aa4d00b6..5566e7c32e829 100644
--- a/drivers/md/persistent-data/dm-btree.h
+++ b/drivers/md/persistent-data/dm-btree.h
@@ -58,14 +58,14 @@ struct dm_btree_value_type {
 	 * somewhere.) This method is _not_ called for insertion of a new
 	 * value: It is assumed the ref count is already 1.
 	 */
-	void (*inc)(void *context, const void *value, unsigned count);
+	void (*inc)(void *context, const void *value, unsigned int count);

 	/*
 	 * These values are being deleted. The btree takes care of freeing
 	 * the memory pointed to by @value. Often the del function just
 	 * needs to decrement a reference counts somewhere.
 	 */
-	void (*dec)(void *context, const void *value, unsigned count);
+	void (*dec)(void *context, const void *value, unsigned int count);

 	/*
 	 * A test for equality between two values. When a value is
@@ -84,7 +84,7 @@ struct dm_btree_info {
 	/*
 	 * Number of nested btrees. (Not the depth of a single tree.)
 	 */
-	unsigned levels;
+	unsigned int levels;
 	struct dm_btree_value_type value_type;
 };

@@ -149,7 +149,7 @@ int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
 */
 int dm_btree_remove_leaves(struct dm_btree_info *info, dm_block_t root,
 			   uint64_t *keys, uint64_t end_key,
-			   dm_block_t *new_root, unsigned *nr_removed);
+			   dm_block_t *new_root, unsigned int *nr_removed);

 /*
 * Returns < 0 on failure. Otherwise the number of key entries that have
@@ -188,7 +188,7 @@ int dm_btree_walk(struct dm_btree_info *info, dm_block_t root,

 struct cursor_node {
 	struct dm_block *b;
-	unsigned index;
+	unsigned int index;
 };

 struct dm_btree_cursor {
@@ -196,7 +196,7 @@ struct dm_btree_cursor {
 	dm_block_t root;

 	bool prefetch_leaves;
-	unsigned depth;
+	unsigned int depth;
 	struct cursor_node nodes[DM_BTREE_CURSOR_MAX_DEPTH];
 };

diff --git a/drivers/md/persistent-data/dm-persistent-data-internal.h b/drivers/md/persistent-data/dm-persistent-data-internal.h
index c49e26fff36c8..b945a2be93fb2 100644
--- a/drivers/md/persistent-data/dm-persistent-data-internal.h
+++ b/drivers/md/persistent-data/dm-persistent-data-internal.h
@@ -9,11 +9,11 @@
#include "dm-block-manager.h"
-static inline unsigned dm_hash_block(dm_block_t b, unsigned hash_mask)
+static inline unsigned int dm_hash_block(dm_block_t b, unsigned int hash_mask)
 {
-	const unsigned BIG_PRIME = 4294967291UL;
+	const unsigned int BIG_PRIME = 4294967291UL;

-	return (((unsigned) b) * BIG_PRIME) & hash_mask;
+	return (((unsigned int) b) * BIG_PRIME) & hash_mask;
 }

 #endif /* _PERSISTENT_DATA_INTERNAL_H */
diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
index bfbfa750e0160..af800efed9f3c 100644
--- a/drivers/md/persistent-data/dm-space-map-common.c
+++ b/drivers/md/persistent-data/dm-space-map-common.c
@@ -126,7 +126,7 @@ static void *dm_bitmap_data(struct dm_block *b)
#define WORD_MASK_HIGH 0xAAAAAAAAAAAAAAAAULL
-static unsigned dm_bitmap_word_used(void *addr, unsigned b)
+static unsigned int dm_bitmap_word_used(void *addr, unsigned int b)
 {
 	__le64 *words_le = addr;
 	__le64 *w_le = words_le + (b >> ENTRIES_SHIFT);
@@ -137,11 +137,11 @@ static unsigned dm_bitmap_word_used(void *addr, unsigned b)
 	return !(~bits & mask);
 }

-static unsigned sm_lookup_bitmap(void *addr, unsigned b)
+static unsigned int sm_lookup_bitmap(void *addr, unsigned int b)
 {
 	__le64 *words_le = addr;
 	__le64 *w_le = words_le + (b >> ENTRIES_SHIFT);
-	unsigned hi, lo;
+	unsigned int hi, lo;

 	b = (b & (ENTRIES_PER_WORD - 1)) << 1;
 	hi = !!test_bit_le(b, (void *) w_le);
@@ -149,7 +149,7 @@ static unsigned sm_lookup_bitmap(void *addr, unsigned b)
 	return (hi << 1) | lo;
 }

-static void sm_set_bitmap(void *addr, unsigned b, unsigned val)
+static void sm_set_bitmap(void *addr, unsigned int b, unsigned int val)
 {
 	__le64 *words_le = addr;
 	__le64 *w_le = words_le + (b >> ENTRIES_SHIFT);
@@ -167,8 +167,8 @@ static void sm_set_bitmap(void *addr, unsigned b, unsigned val)
 	__clear_bit_le(b + 1, (void *) w_le);
 }

-static int sm_find_free(void *addr, unsigned begin, unsigned end,
-			unsigned *result)
+static int sm_find_free(void *addr, unsigned int begin, unsigned int end,
+			unsigned int *result)
 {
 	while (begin < end) {
 		if (!(begin & (ENTRIES_PER_WORD - 1)) &&
@@ -237,7 +237,7 @@ int sm_ll_extend(struct ll_disk *ll, dm_block_t extra_blocks)
 {
 	int r;
 	dm_block_t i, nr_blocks, nr_indexes;
-	unsigned old_blocks, blocks;
+	unsigned int old_blocks, blocks;

 	nr_blocks = ll->nr_blocks + extra_blocks;
 	old_blocks = dm_sector_div_up(ll->nr_blocks, ll->entries_per_block);
@@ -351,7 +351,7 @@ int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,

 	for (i = index_begin; i < index_end; i++, begin = 0) {
 		struct dm_block *blk;
-		unsigned position;
+		unsigned int position;
 		uint32_t bit_end;

 		r = ll->load_ie(ll, i, &ie_disk);
@@ -369,7 +369,7 @@ int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
 		bit_end = (i == index_end - 1) ? end : ll->entries_per_block;

 		r = sm_find_free(dm_bitmap_data(blk),
-				 max_t(unsigned, begin, le32_to_cpu(ie_disk.none_free_before)),
+				 max_t(unsigned int, begin, le32_to_cpu(ie_disk.none_free_before)),
 				 bit_end, &position);
 		if (r == -ENOSPC) {
 			/*
@@ -1097,7 +1097,7 @@ static inline int ie_cache_writeback(struct ll_disk *ll, struct ie_cache *iec)
 			       &iec->index, &iec->ie, &ll->bitmap_root);
 }

-static inline unsigned hash_index(dm_block_t index)
+static inline unsigned int hash_index(dm_block_t index)
 {
 	return dm_hash_block(index, IE_CACHE_MASK);
 }
@@ -1106,7 +1106,7 @@ static int disk_ll_load_ie(struct ll_disk *ll, dm_block_t index,
 			   struct disk_index_entry *ie)
 {
 	int r;
-	unsigned h = hash_index(index);
+	unsigned int h = hash_index(index);
 	struct ie_cache *iec = ll->ie_cache + h;

 	if (iec->valid) {
@@ -1137,7 +1137,7 @@ static int disk_ll_save_ie(struct ll_disk *ll, dm_block_t index,
 			   struct disk_index_entry *ie)
 {
 	int r;
-	unsigned h = hash_index(index);
+	unsigned int h = hash_index(index);
 	struct ie_cache *iec = ll->ie_cache + h;

 	ll->bitmap_index_changed = true;
@@ -1164,7 +1164,7 @@ static int disk_ll_save_ie(struct ll_disk *ll, dm_block_t index,

 static int disk_ll_init_index(struct ll_disk *ll)
 {
-	unsigned i;
+	unsigned int i;
 	for (i = 0; i < IE_CACHE_SIZE; i++) {
 		struct ie_cache *iec = ll->ie_cache + i;
 		iec->valid = false;
@@ -1186,7 +1186,7 @@ static dm_block_t disk_ll_max_entries(struct ll_disk *ll)
 static int disk_ll_commit(struct ll_disk *ll)
 {
 	int r = 0;
-	unsigned i;
+	unsigned int i;

 	for (i = 0; i < IE_CACHE_SIZE; i++) {
 		struct ie_cache *iec = ll->ie_cache + i;
diff --git a/drivers/md/persistent-data/dm-space-map-metadata.c b/drivers/md/persistent-data/dm-space-map-metadata.c
index 392ae26134a4e..0d1fcdf29c835 100644
--- a/drivers/md/persistent-data/dm-space-map-metadata.c
+++ b/drivers/md/persistent-data/dm-space-map-metadata.c
@@ -94,8 +94,8 @@ struct block_op {
 };
 struct bop_ring_buffer {
-	unsigned begin;
-	unsigned end;
+	unsigned int begin;
+	unsigned int end;
 	struct block_op bops[MAX_RECURSIVE_ALLOCATIONS + 1];
 };

@@ -110,9 +110,9 @@ static bool brb_empty(struct bop_ring_buffer *brb)
 	return brb->begin == brb->end;
 }

-static unsigned brb_next(struct bop_ring_buffer *brb, unsigned old)
+static unsigned int brb_next(struct bop_ring_buffer *brb, unsigned int old)
 {
-	unsigned r = old + 1;
+	unsigned int r = old + 1;
 	return r >= ARRAY_SIZE(brb->bops) ? 0 : r;
 }

@@ -120,7 +120,7 @@ static int brb_push(struct bop_ring_buffer *brb,
 		    enum block_op_type type, dm_block_t b, dm_block_t e)
 {
 	struct block_op *bop;
-	unsigned next = brb_next(brb, brb->end);
+	unsigned int next = brb_next(brb, brb->end);

 	/*
 	 * We don't allow the last bop to be filled, this way we can
@@ -171,8 +171,8 @@ struct sm_metadata {
dm_block_t begin;
-	unsigned recursion_count;
-	unsigned allocated_this_transaction;
+	unsigned int recursion_count;
+	unsigned int allocated_this_transaction;
 	struct bop_ring_buffer uncommitted;

 	struct threshold threshold;
@@ -300,9 +300,9 @@ static int sm_metadata_get_count(struct dm_space_map *sm, dm_block_t b,
 				 uint32_t *result)
 {
 	int r;
-	unsigned i;
+	unsigned int i;
 	struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
-	unsigned adjustment = 0;
+	unsigned int adjustment = 0;

 	/*
 	 * We may have some uncommitted adjustments to add. This list
@@ -340,7 +340,7 @@ static int sm_metadata_count_is_more_than_one(struct dm_space_map *sm,
 					      dm_block_t b, int *result)
 {
 	int r, adjustment = 0;
-	unsigned i;
+	unsigned int i;
 	struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
 	uint32_t rc;

diff --git a/drivers/md/persistent-data/dm-transaction-manager.c b/drivers/md/persistent-data/dm-transaction-manager.c
index 16643fc974e84..39885f8355847 100644
--- a/drivers/md/persistent-data/dm-transaction-manager.c
+++ b/drivers/md/persistent-data/dm-transaction-manager.c
@@ -28,14 +28,14 @@ struct prefetch_set {
 	dm_block_t blocks[PREFETCH_SIZE];
 };

-static unsigned prefetch_hash(dm_block_t b)
+static unsigned int prefetch_hash(dm_block_t b)
 {
 	return hash_64(b, PREFETCH_BITS);
 }

 static void prefetch_wipe(struct prefetch_set *p)
 {
-	unsigned i;
+	unsigned int i;
 	for (i = 0; i < PREFETCH_SIZE; i++)
 		p->blocks[i] = PREFETCH_SENTINEL;
 }
@@ -48,7 +48,7 @@ static void prefetch_init(struct prefetch_set *p)

 static void prefetch_add(struct prefetch_set *p, dm_block_t b)
 {
-	unsigned h = prefetch_hash(b);
+	unsigned int h = prefetch_hash(b);

 	mutex_lock(&p->lock);
 	if (p->blocks[h] == PREFETCH_SENTINEL)
@@ -59,7 +59,7 @@ static void prefetch_add(struct prefetch_set *p, dm_block_t b)

 static void prefetch_issue(struct prefetch_set *p, struct dm_block_manager *bm)
 {
-	unsigned i;
+	unsigned int i;
mutex_lock(&p->lock);
@@ -103,7 +103,7 @@ struct dm_transaction_manager {
 static int is_shadow(struct dm_transaction_manager *tm, dm_block_t b)
 {
 	int r = 0;
-	unsigned bucket = dm_hash_block(b, DM_HASH_MASK);
+	unsigned int bucket = dm_hash_block(b, DM_HASH_MASK);
 	struct shadow_info *si;

 	spin_lock(&tm->lock);
@@ -123,7 +123,7 @@ static int is_shadow(struct dm_transaction_manager *tm, dm_block_t b)
 */
 static void insert_shadow(struct dm_transaction_manager *tm, dm_block_t b)
 {
-	unsigned bucket;
+	unsigned int bucket;
 	struct shadow_info *si;

 	si = kmalloc(sizeof(*si), GFP_NOIO);
@@ -393,11 +393,11 @@ void dm_tm_dec_range(struct dm_transaction_manager *tm, dm_block_t b, dm_block_t
 EXPORT_SYMBOL_GPL(dm_tm_dec_range);

 void dm_tm_with_runs(struct dm_transaction_manager *tm,
-		     const __le64 *value_le, unsigned count, dm_tm_run_fn fn)
+		     const __le64 *value_le, unsigned int count, dm_tm_run_fn fn)
 {
 	uint64_t b, begin, end;
 	bool in_run = false;
-	unsigned i;
+	unsigned int i;

 	for (i = 0; i < count; i++, value_le++) {
 		b = le64_to_cpu(*value_le);
diff --git a/drivers/md/persistent-data/dm-transaction-manager.h b/drivers/md/persistent-data/dm-transaction-manager.h
index 906c02ed0365b..0f573a4a01aeb 100644
--- a/drivers/md/persistent-data/dm-transaction-manager.h
+++ b/drivers/md/persistent-data/dm-transaction-manager.h
@@ -111,7 +111,7 @@ void dm_tm_dec_range(struct dm_transaction_manager *tm, dm_block_t b, dm_block_t
 */
 typedef void (*dm_tm_run_fn)(struct dm_transaction_manager *, dm_block_t, dm_block_t);
 void dm_tm_with_runs(struct dm_transaction_manager *tm,
-		     const __le64 *value_le, unsigned count, dm_tm_run_fn fn);
+		     const __le64 *value_le, unsigned int count, dm_tm_run_fn fn);
int dm_tm_ref(struct dm_transaction_manager *tm, dm_block_t b, uint32_t *result);
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 04c6acf7faaa5..201dd1ab7f1c6 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -87,10 +87,10 @@ typedef int (*dm_preresume_fn) (struct dm_target *ti);
 typedef void (*dm_resume_fn) (struct dm_target *ti);

 typedef void (*dm_status_fn) (struct dm_target *ti, status_type_t status_type,
-			      unsigned status_flags, char *result, unsigned maxlen);
+			      unsigned int status_flags, char *result, unsigned int maxlen);

-typedef int (*dm_message_fn) (struct dm_target *ti, unsigned argc, char **argv,
-			      char *result, unsigned maxlen);
+typedef int (*dm_message_fn) (struct dm_target *ti, unsigned int argc, char **argv,
+			      char *result, unsigned int maxlen);
typedef int (*dm_prepare_ioctl_fn) (struct dm_target *ti, struct block_device **bdev);
@@ -187,7 +187,7 @@ struct target_type {
 	uint64_t features;
 	const char *name;
 	struct module *module;
-	unsigned version[3];
+	unsigned int version[3];
 	dm_ctr_fn ctr;
 	dm_dtr_fn dtr;
 	dm_map_fn map;
@@ -313,31 +313,31 @@ struct dm_target {
 	 * It is a responsibility of the target driver to remap these bios
 	 * to the real underlying devices.
 	 */
-	unsigned num_flush_bios;
+	unsigned int num_flush_bios;

 	/*
 	 * The number of discard bios that will be submitted to the target.
 	 * The bio number can be accessed with dm_bio_get_target_bio_nr.
 	 */
-	unsigned num_discard_bios;
+	unsigned int num_discard_bios;

 	/*
 	 * The number of secure erase bios that will be submitted to the target.
 	 * The bio number can be accessed with dm_bio_get_target_bio_nr.
 	 */
-	unsigned num_secure_erase_bios;
+	unsigned int num_secure_erase_bios;

 	/*
 	 * The number of WRITE ZEROES bios that will be submitted to the target.
 	 * The bio number can be accessed with dm_bio_get_target_bio_nr.
 	 */
-	unsigned num_write_zeroes_bios;
+	unsigned int num_write_zeroes_bios;

 	/*
 	 * The minimum number of extra bytes allocated in each io for the
 	 * target to use.
 	 */
-	unsigned per_io_data_size;
+	unsigned int per_io_data_size;

 	/* target specific data */
 	void *private;
@@ -383,7 +383,7 @@ struct dm_target {

 void *dm_per_bio_data(struct bio *bio, size_t data_size);
 struct bio *dm_bio_from_per_bio_data(void *data, size_t data_size);
-unsigned dm_bio_get_target_bio_nr(const struct bio *bio);
+unsigned int dm_bio_get_target_bio_nr(const struct bio *bio);
u64 dm_start_time_ns_from_clone(struct bio *bio);
@@ -394,7 +394,7 @@ void dm_unregister_target(struct target_type *t);
 * Target argument parsing.
 */
 struct dm_arg_set {
-	unsigned argc;
+	unsigned int argc;
 	char **argv;
 };

@@ -403,8 +403,8 @@ struct dm_arg_set {
 * the error message to use if the number is found to be outside that range.
 */
 struct dm_arg {
-	unsigned min;
-	unsigned max;
+	unsigned int min;
+	unsigned int max;
 	char *error;
 };

@@ -413,7 +413,7 @@ struct dm_arg {
 * returning -EINVAL and setting *error.
 */
 int dm_read_arg(const struct dm_arg *arg, struct dm_arg_set *arg_set,
-		unsigned *value, char **error);
+		unsigned int *value, char **error);

 /*
 * Process the next argument as the start of a group containing between
@@ -421,7 +421,7 @@ int dm_read_arg(const struct dm_arg *arg, struct dm_arg_set *arg_set,
 * *num_args or, if invalid, return -EINVAL and set *error.
 */
 int dm_read_arg_group(const struct dm_arg *arg, struct dm_arg_set *arg_set,
-		      unsigned *num_args, char **error);
+		      unsigned int *num_args, char **error);

 /*
 * Return the current argument and shift to the next.
@@ -431,7 +431,7 @@ const char *dm_shift_arg(struct dm_arg_set *as);
 /*
 * Move through num_args arguments.
 */
-void dm_consume_args(struct dm_arg_set *as, unsigned num_args);
+void dm_consume_args(struct dm_arg_set *as, unsigned int num_args);

 /*-----------------------------------------------------------------
 * Functions for creating and manipulating mapped devices.
@@ -461,7 +461,7 @@ void *dm_get_mdptr(struct mapped_device *md);
 /*
 * A device can still be used while suspended, but I/O is deferred.
 */
-int dm_suspend(struct mapped_device *md, unsigned suspend_flags);
+int dm_suspend(struct mapped_device *md, unsigned int suspend_flags);
 int dm_resume(struct mapped_device *md);

 /*
@@ -481,7 +481,7 @@ struct gendisk *dm_disk(struct mapped_device *md);
 int dm_suspended(struct dm_target *ti);
 int dm_post_suspending(struct dm_target *ti);
 int dm_noflush_suspending(struct dm_target *ti);
-void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors);
+void dm_accept_partial_bio(struct bio *bio, unsigned int n_sectors);
 void dm_submit_bio_remap(struct bio *clone, struct bio *tgt_clone);
 union map_info *dm_get_rq_mapinfo(struct request *rq);

@@ -525,7 +525,7 @@ int dm_set_geometry(struct mapped_device *md, struct hd_geometry *geo);
 * First create an empty table.
 */
 int dm_table_create(struct dm_table **result, fmode_t mode,
-		    unsigned num_targets, struct mapped_device *md);
+		    unsigned int num_targets, struct mapped_device *md);
 /*
 * Then call this once for each target.
diff --git a/include/linux/dm-bufio.h b/include/linux/dm-bufio.h
index 15d9e15ca830d..1262d92ab88fc 100644
--- a/include/linux/dm-bufio.h
+++ b/include/linux/dm-bufio.h
@@ -26,8 +26,8 @@ struct dm_buffer;
 * Create a buffered IO cache on a given device
 */
 struct dm_bufio_client *
-dm_bufio_client_create(struct block_device *bdev, unsigned block_size,
-		       unsigned reserved_buffers, unsigned aux_size,
+dm_bufio_client_create(struct block_device *bdev, unsigned int block_size,
+		       unsigned int reserved_buffers, unsigned int aux_size,
 		       void (*alloc_callback)(struct dm_buffer *),
 		       void (*write_callback)(struct dm_buffer *),
 		       unsigned int flags);
@@ -81,7 +81,7 @@ void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
 * I/O to finish.
 */
 void dm_bufio_prefetch(struct dm_bufio_client *c,
-		       sector_t block, unsigned n_blocks);
+		       sector_t block, unsigned int n_blocks);

 /*
 * Release a reference obtained with dm_bufio_{read,get,new}. The data
@@ -106,7 +106,7 @@ void dm_bufio_mark_buffer_dirty(struct dm_buffer *b);
 * write the specified part of the buffer or it may write a larger superset.
 */
 void dm_bufio_mark_partial_buffer_dirty(struct dm_buffer *b,
-					unsigned start, unsigned end);
+					unsigned int start, unsigned int end);

 /*
 * Initiate writing of dirty buffers, without waiting for completion.
@@ -152,9 +152,9 @@ void dm_bufio_forget_buffers(struct dm_bufio_client *c, sector_t block, sector_t
 /*
 * Set the minimum number of buffers before cleanup happens.
 */
-void dm_bufio_set_minimum_buffers(struct dm_bufio_client *c, unsigned n);
+void dm_bufio_set_minimum_buffers(struct dm_bufio_client *c, unsigned int n);

-unsigned dm_bufio_get_block_size(struct dm_bufio_client *c);
+unsigned int dm_bufio_get_block_size(struct dm_bufio_client *c);
 sector_t dm_bufio_get_device_size(struct dm_bufio_client *c);
 struct dm_io_client *dm_bufio_get_dm_io_client(struct dm_bufio_client *c);
 sector_t dm_bufio_get_block_number(struct dm_buffer *b);
diff --git a/include/linux/dm-dirty-log.h b/include/linux/dm-dirty-log.h
index 7084503c3405f..843c857f07b0d 100644
--- a/include/linux/dm-dirty-log.h
+++ b/include/linux/dm-dirty-log.h
@@ -33,7 +33,7 @@ struct dm_dirty_log_type {
 	struct list_head list;

 	int (*ctr)(struct dm_dirty_log *log, struct dm_target *ti,
-		   unsigned argc, char **argv);
+		   unsigned int argc, char **argv);
 	void (*dtr)(struct dm_dirty_log *log);

 	/*
@@ -116,7 +116,7 @@ struct dm_dirty_log_type {
 	 * Support function for mirror status requests.
 	 */
 	int (*status)(struct dm_dirty_log *log, status_type_t status_type,
-		      char *result, unsigned maxlen);
+		      char *result, unsigned int maxlen);

 	/*
 	 * is_remote_recovering is necessary for cluster mirroring. It provides
@@ -139,7 +139,7 @@ int dm_dirty_log_type_unregister(struct dm_dirty_log_type *type);
 struct dm_dirty_log *dm_dirty_log_create(const char *type_name,
 			struct dm_target *ti,
 			int (*flush_callback_fn)(struct dm_target *ti),
-			unsigned argc, char **argv);
+			unsigned int argc, char **argv);
 void dm_dirty_log_destroy(struct dm_dirty_log *log);

 #endif /* __KERNEL__ */
diff --git a/include/linux/dm-io.h b/include/linux/dm-io.h
index 8e1c4ab5df043..92e7abfe04f92 100644
--- a/include/linux/dm-io.h
+++ b/include/linux/dm-io.h
@@ -26,7 +26,7 @@ struct page_list {
 	struct page *page;
 };
typedef void (*io_notify_fn)(unsigned long error, void *context);
enum dm_io_mem_type { DM_IO_PAGE_LIST,/* Page list */ @@ -38,7 +38,7 @@ enum dm_io_mem_type { struct dm_io_memory { enum dm_io_mem_type type;
- unsigned offset; + unsigned int offset;
union { struct page_list *pl; @@ -78,8 +78,8 @@ void dm_io_client_destroy(struct dm_io_client *client); * Each bit in the optional 'sync_error_bits' bitset indicates whether an * error occurred doing io to the corresponding region. */ -int dm_io(struct dm_io_request *io_req, unsigned num_regions, - struct dm_io_region *region, unsigned long *sync_error_bits); +int dm_io(struct dm_io_request *io_req, unsigned int num_regions, + struct dm_io_region *region, unsigned long *sync_error_bits);
#endif /* __KERNEL__ */ #endif /* _LINUX_DM_IO_H */ diff --git a/include/linux/dm-kcopyd.h b/include/linux/dm-kcopyd.h index c1707ee5b5408..68c412b31b788 100644 --- a/include/linux/dm-kcopyd.h +++ b/include/linux/dm-kcopyd.h @@ -23,11 +23,11 @@ #define DM_KCOPYD_WRITE_SEQ 2
struct dm_kcopyd_throttle { - unsigned throttle; - unsigned num_io_jobs; - unsigned io_period; - unsigned total_period; - unsigned last_jiffies; + unsigned int throttle; + unsigned int num_io_jobs; + unsigned int io_period; + unsigned int total_period; + unsigned int last_jiffies; };
/* @@ -60,12 +60,12 @@ void dm_kcopyd_client_flush(struct dm_kcopyd_client *kc); * read_err is a boolean, * write_err is a bitset, with 1 bit for each destination region */ typedef void (*dm_kcopyd_notify_fn)(int read_err, unsigned long write_err, void *context);
void dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from, - unsigned num_dests, struct dm_io_region *dests, - unsigned flags, dm_kcopyd_notify_fn fn, void *context); + unsigned int num_dests, struct dm_io_region *dests, + unsigned int flags, dm_kcopyd_notify_fn fn, void *context);
/* * Prepare a callback and submit it via the kcopyd thread. @@ -80,11 +80,11 @@ void dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from, */ void *dm_kcopyd_prepare_callback(struct dm_kcopyd_client *kc, dm_kcopyd_notify_fn fn, void *context); void dm_kcopyd_do_callback(void *job, int read_err, unsigned long write_err);
void dm_kcopyd_zero(struct dm_kcopyd_client *kc, - unsigned num_dests, struct dm_io_region *dests, - unsigned flags, dm_kcopyd_notify_fn fn, void *context); + unsigned int num_dests, struct dm_io_region *dests, + unsigned int flags, dm_kcopyd_notify_fn fn, void *context);
#endif /* __KERNEL__ */ #endif /* _LINUX_DM_KCOPYD_H */ diff --git a/include/linux/dm-region-hash.h b/include/linux/dm-region-hash.h index 9e2a7a401df50..e8691539e1d77 100644 --- a/include/linux/dm-region-hash.h +++ b/include/linux/dm-region-hash.h @@ -37,7 +37,7 @@ struct dm_region_hash *dm_region_hash_create( struct bio_list *bios), void (*wakeup_workers)(void *context), void (*wakeup_all_recovery_waiters)(void *context), - sector_t target_begin, unsigned max_recovery, + sector_t target_begin, unsigned int max_recovery, struct dm_dirty_log *log, uint32_t region_size, region_t nr_regions); void dm_region_hash_destroy(struct dm_region_hash *rh);
From: Mike Snitzer snitzer@kernel.org
[ Upstream commit f7b58a69fad9d2c4c90cab0247811155dd0d48e7 ]
"Abnormal" bios include discards, write zeroes and secure erase. By no longer passing the calculated 'len' pointer, commit 7dd06a2548b2 ("dm: allow dm_accept_partial_bio() for dm_io without duplicate bios") took a senseless approach to disallowing dm_accept_partial_bio() from working for duplicate bios processed using __send_duplicate_bios().
It inadvertently and incorrectly stopped the use of 'len' when initializing a target's io (in alloc_tio). As such the resulting tio could address more area of a device than it should.
For example, when discarding an entire DM striped device with the following DM table:

vg-lvol0: 0 159744 striped 2 128 7:0 2048 7:1 2048
vg-lvol0: 159744 45056 striped 2 128 7:2 2048 7:3 2048
Before this fix:
device-mapper: striped: target_stripe=0, bdev=7:0, start=2048 len=102400
blkdiscard: attempt to access beyond end of device
loop0: rw=2051, sector=2048, nr_sectors = 102400 limit=81920
device-mapper: striped: target_stripe=1, bdev=7:1, start=2048 len=102400
blkdiscard: attempt to access beyond end of device
loop1: rw=2051, sector=2048, nr_sectors = 102400 limit=81920
After this fix:
device-mapper: striped: target_stripe=0, bdev=7:0, start=2048 len=79872
device-mapper: striped: target_stripe=1, bdev=7:1, start=2048 len=79872
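To make the log numbers concrete, here is a small standalone C sketch. It is not part of the patch, and the kernel-side computation is more involved; the constants are simply lifted from the table and log lines above.

#include <stdio.h>

int main(void)
{
	unsigned long long target0 = 159744; /* sectors in the first striped target */
	unsigned long long target1 = 45056;  /* sectors in the second striped target */
	unsigned long long stripes = 2;
	unsigned long long limit = 81920;    /* capacity of each backing loop device */

	/* Before the fix: 'len' was dropped, so each stripe saw a length
	 * derived from the whole device rather than the current target. */
	unsigned long long before = (target0 + target1) / stripes; /* 102400 > limit */

	/* After the fix: 'len' confines the io to the current target. */
	unsigned long long after = target0 / stripes;              /* 79872 <= limit */

	printf("before=%llu after=%llu limit=%llu\n", before, after, limit);
	return 0;
}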
Fixes: 7dd06a2548b2 ("dm: allow dm_accept_partial_bio() for dm_io without duplicate bios") Cc: stable@vger.kernel.org Reported-by: Orange Kao orange@aiven.io Signed-off-by: Mike Snitzer snitzer@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/md/dm.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 94e4899d8ac7c..cdbf24def8af3 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1462,7 +1462,8 @@ static void setup_split_accounting(struct clone_info *ci, unsigned int len) }
static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci, - struct dm_target *ti, unsigned int num_bios) + struct dm_target *ti, unsigned int num_bios, + unsigned *len) { struct bio *bio; int try; @@ -1473,7 +1474,7 @@ static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci, if (try) mutex_lock(&ci->io->md->table_devices_lock); for (bio_nr = 0; bio_nr < num_bios; bio_nr++) { - bio = alloc_tio(ci, ti, bio_nr, NULL, + bio = alloc_tio(ci, ti, bio_nr, len, try ? GFP_NOIO : GFP_NOWAIT); if (!bio) break; @@ -1511,7 +1512,7 @@ static int __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti, if (len) setup_split_accounting(ci, *len); /* dm_accept_partial_bio() is not supported with shared tio->len_ptr */ - alloc_multiple_bios(&blist, ci, ti, num_bios); + alloc_multiple_bios(&blist, ci, ti, num_bios, len); while ((clone = bio_list_pop(&blist))) { dm_tio_set_flag(clone_to_tio(clone), DM_TIO_IS_DUPLICATE_BIO); __map_bio(clone);
From: Ville Syrjälä ville.syrjala@linux.intel.com
[ Upstream commit efb2b57edf20c32b08eee4ce8b436c459fe4caea ]
Since the color management code is the only user of the DSB at the moment move the DSB prepare/cleanup there too. The code has to anyway make decisions on whether to use the DSB or not (and how to use it). Also we'll need a place where we actually generate the DSB command buffer ahead of time rather than the current situation where it gets generated too late during the mmio programming of the hardware.
Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20221123152638.20622-9-ville.s... Reviewed-by: Uma Shankar uma.shankar@intel.com Stable-dep-of: c880f855d1e2 ("drm/i915: Add a .color_post_update() hook") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/display/intel_color.c | 10 ++++++++ drivers/gpu/drm/i915/display/intel_color.h | 2 ++ drivers/gpu/drm/i915/display/intel_display.c | 25 ++++++++------------ drivers/gpu/drm/i915/display/intel_display.h | 8 +++++++ 4 files changed, 30 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/intel_color.c b/drivers/gpu/drm/i915/display/intel_color.c index c3928d28cd443..ff6b8aaaa2194 100644 --- a/drivers/gpu/drm/i915/display/intel_color.c +++ b/drivers/gpu/drm/i915/display/intel_color.c @@ -1220,6 +1220,16 @@ void intel_color_commit_arm(const struct intel_crtc_state *crtc_state) i915->display.funcs.color->color_commit_arm(crtc_state); }
+void intel_color_prepare_commit(struct intel_crtc_state *crtc_state) +{ + intel_dsb_prepare(crtc_state); +} + +void intel_color_cleanup_commit(struct intel_crtc_state *crtc_state) +{ + intel_dsb_cleanup(crtc_state); +} + static bool intel_can_preload_luts(const struct intel_crtc_state *new_crtc_state) { struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc); diff --git a/drivers/gpu/drm/i915/display/intel_color.h b/drivers/gpu/drm/i915/display/intel_color.h index 2a5ada67774d0..0e85406036b54 100644 --- a/drivers/gpu/drm/i915/display/intel_color.h +++ b/drivers/gpu/drm/i915/display/intel_color.h @@ -17,6 +17,8 @@ void intel_color_init_hooks(struct drm_i915_private *i915); int intel_color_init(struct drm_i915_private *i915); void intel_color_crtc_init(struct intel_crtc *crtc); int intel_color_check(struct intel_crtc_state *crtc_state); +void intel_color_prepare_commit(struct intel_crtc_state *crtc_state); +void intel_color_cleanup_commit(struct intel_crtc_state *crtc_state); void intel_color_commit_noarm(const struct intel_crtc_state *crtc_state); void intel_color_commit_arm(const struct intel_crtc_state *crtc_state); void intel_color_load_luts(const struct intel_crtc_state *crtc_state); diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index f0aad2403109b..ca76408b99b38 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -93,7 +93,6 @@ #include "intel_dp_link_training.h" #include "intel_dpio_phy.h" #include "intel_dpt.h" -#include "intel_dsb.h" #include "intel_fbc.h" #include "intel_fbdev.h" #include "intel_fdi.h" @@ -6946,7 +6945,7 @@ static int intel_atomic_prepare_commit(struct intel_atomic_state *state)
for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) { if (intel_crtc_needs_color_update(crtc_state)) - intel_dsb_prepare(crtc_state); + intel_color_prepare_commit(crtc_state); }
return 0; @@ -7399,24 +7398,18 @@ static void intel_atomic_commit_fence_wait(struct intel_atomic_state *intel_stat &wait_reset); }
-static void intel_cleanup_dsbs(struct intel_atomic_state *state) -{ - struct intel_crtc_state *old_crtc_state, *new_crtc_state; - struct intel_crtc *crtc; - int i; - - for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, - new_crtc_state, i) - intel_dsb_cleanup(old_crtc_state); -} - static void intel_atomic_cleanup_work(struct work_struct *work) { struct intel_atomic_state *state = container_of(work, struct intel_atomic_state, base.commit_work); struct drm_i915_private *i915 = to_i915(state->base.dev); + struct intel_crtc_state *old_crtc_state; + struct intel_crtc *crtc; + int i; + + for_each_old_intel_crtc_in_state(state, crtc, old_crtc_state, i) + intel_color_cleanup_commit(old_crtc_state);
- intel_cleanup_dsbs(state); drm_atomic_helper_cleanup_planes(&i915->drm, &state->base); drm_atomic_helper_commit_cleanup_done(&state->base); drm_atomic_state_put(&state->base); @@ -7624,6 +7617,8 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state) * DSB cleanup is done in cleanup_work aligning with framebuffer * cleanup. So copy and reset the dsb structure to sync with * commit_done and later do dsb cleanup in cleanup_work. + * + * FIXME get rid of this funny new->old swapping */ old_crtc_state->dsb = fetch_and_zero(&new_crtc_state->dsb); } @@ -7774,7 +7769,7 @@ static int intel_atomic_commit(struct drm_device *dev, i915_sw_fence_commit(&state->commit_ready);
for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) - intel_dsb_cleanup(new_crtc_state); + intel_color_cleanup_commit(new_crtc_state);
drm_atomic_helper_cleanup_planes(dev, &state->base); intel_runtime_pm_put(&dev_priv->runtime_pm, state->wakeref); diff --git a/drivers/gpu/drm/i915/display/intel_display.h b/drivers/gpu/drm/i915/display/intel_display.h index 714030136b7f2..ef73730f32b09 100644 --- a/drivers/gpu/drm/i915/display/intel_display.h +++ b/drivers/gpu/drm/i915/display/intel_display.h @@ -440,6 +440,14 @@ enum hpd_pin { (__i)++) \ for_each_if(plane)
+#define for_each_old_intel_crtc_in_state(__state, crtc, old_crtc_state, __i) \ + for ((__i) = 0; \ + (__i) < (__state)->base.dev->mode_config.num_crtc && \ + ((crtc) = to_intel_crtc((__state)->base.crtcs[__i].ptr), \ + (old_crtc_state) = to_intel_crtc_state((__state)->base.crtcs[__i].old_state), 1); \ + (__i)++) \ + for_each_if(crtc) + #define for_each_new_intel_plane_in_state(__state, plane, new_plane_state, __i) \ for ((__i) = 0; \ (__i) < (__state)->base.dev->mode_config.num_total_plane && \
From: Ville Syrjälä ville.syrjala@linux.intel.com
[ Upstream commit c880f855d1e240a956dcfce884269bad92fc849c ]
We're going to need stuff after the color management register latching has happened. Add a corresponding hook.
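As a rough illustration of the optional-hook pattern being added here, the following self-contained userspace sketch models it; the struct layouts are simplified stand-ins for the i915 types and icl_post_update is an invented example implementation, not code from the patch.

#include <stdio.h>

struct crtc_state { int pipe; };

struct color_funcs {
	void (*commit_arm)(const struct crtc_state *state);
	/* New and optional: runs after the double buffered registers latched. */
	void (*post_update)(const struct crtc_state *state);
};

static void color_post_update(const struct color_funcs *funcs,
			      const struct crtc_state *state)
{
	if (funcs->post_update)            /* only some platforms need it */
		funcs->post_update(state);
}

static void icl_post_update(const struct crtc_state *state)
{
	printf("pipe %d: post-latch work\n", state->pipe);
}

int main(void)
{
	const struct color_funcs funcs = { .post_update = icl_post_update };
	const struct crtc_state state = { .pipe = 0 };

	color_post_update(&funcs, &state); /* would be called after latching */
	return 0;
}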
Cc: stable@vger.kernel.org #v5.19+ Cc: Manasi Navare navaremanasi@google.com Cc: Drew Davenport ddavenport@chromium.org Cc: Imre Deak imre.deak@intel.com Cc: Jouni Högander jouni.hogander@intel.com Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230320095438.17328-4-ville.s... Reviewed-by: Imre Deak imre.deak@intel.com (cherry picked from commit 3962ca4e080a525fc9eae87aa6b2286f1fae351d) Signed-off-by: Jani Nikula jani.nikula@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/i915/display/intel_color.c | 13 +++++++++++++ drivers/gpu/drm/i915/display/intel_color.h | 1 + drivers/gpu/drm/i915/display/intel_display.c | 3 +++ 3 files changed, 17 insertions(+)
diff --git a/drivers/gpu/drm/i915/display/intel_color.c b/drivers/gpu/drm/i915/display/intel_color.c index ff6b8aaaa2194..85a38d794dd9f 100644 --- a/drivers/gpu/drm/i915/display/intel_color.c +++ b/drivers/gpu/drm/i915/display/intel_color.c @@ -46,6 +46,11 @@ struct intel_color_funcs { * registers involved with the same commit. */ void (*color_commit_arm)(const struct intel_crtc_state *crtc_state); + /* + * Perform any extra tasks needed after all the + * double buffered registers have been latched. + */ + void (*color_post_update)(const struct intel_crtc_state *crtc_state); /* * Load LUTs (and other single buffered color management * registers). Will (hopefully) be called during the vblank @@ -1220,6 +1225,14 @@ void intel_color_commit_arm(const struct intel_crtc_state *crtc_state) i915->display.funcs.color->color_commit_arm(crtc_state); }
+void intel_color_post_update(const struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); + + if (i915->display.funcs.color->color_post_update) + i915->display.funcs.color->color_post_update(crtc_state); +} + void intel_color_prepare_commit(struct intel_crtc_state *crtc_state) { intel_dsb_prepare(crtc_state); diff --git a/drivers/gpu/drm/i915/display/intel_color.h b/drivers/gpu/drm/i915/display/intel_color.h index 0e85406036b54..0256f49c3910d 100644 --- a/drivers/gpu/drm/i915/display/intel_color.h +++ b/drivers/gpu/drm/i915/display/intel_color.h @@ -21,6 +21,7 @@ void intel_color_prepare_commit(struct intel_crtc_state *crtc_state); void intel_color_cleanup_commit(struct intel_crtc_state *crtc_state); void intel_color_commit_noarm(const struct intel_crtc_state *crtc_state); void intel_color_commit_arm(const struct intel_crtc_state *crtc_state); +void intel_color_post_update(const struct intel_crtc_state *crtc_state); void intel_color_load_luts(const struct intel_crtc_state *crtc_state); void intel_color_get_config(struct intel_crtc_state *crtc_state); int intel_color_get_gamma_bit_precision(const struct intel_crtc_state *crtc_state); diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index ca76408b99b38..2d46dcf820a23 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -1263,6 +1263,9 @@ static void intel_post_plane_update(struct intel_atomic_state *state, if (needs_cursorclk_wa(old_crtc_state) && !needs_cursorclk_wa(new_crtc_state)) icl_wa_cursorclkgating(dev_priv, pipe, false); + + if (intel_crtc_needs_color_update(new_crtc_state)) + intel_color_post_update(new_crtc_state); }
static void intel_crtc_enable_flip_done(struct intel_atomic_state *state,
From: Randy Dunlap rdunlap@infradead.org
[ Upstream commit d49765b5f4320a402fbc4ed5edfd73d87640f27c ]
REGMAP is a hidden (not user visible) symbol. Users cannot set it directly thru "make *config", so drivers should select it instead of depending on it if they need it.
Consistently using "select" or "depends on" can also help reduce Kconfig circular dependency issues.
Therefore, change the use of "depends on REGMAP" to "select REGMAP".
Fixes: ebe363197e52 ("gpio: add a reusable generic gpio_chip using regmap") Signed-off-by: Randy Dunlap rdunlap@infradead.org Cc: Michael Walle michael@walle.cc Cc: Linus Walleij linus.walleij@linaro.org Cc: Bartosz Golaszewski brgl@bgdev.pl Cc: linux-gpio@vger.kernel.org Acked-by: Michael Walle michael@walle.cc Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpio/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig index e9917a45b005a..42e5042d01495 100644 --- a/drivers/gpio/Kconfig +++ b/drivers/gpio/Kconfig @@ -100,7 +100,7 @@ config GPIO_GENERIC tristate
config GPIO_REGMAP - depends on REGMAP + select REGMAP tristate
# put drivers in the right section, in alphabetical order
From: Mohammed Gamal mgamal@redhat.com
[ Upstream commit 1eb65c8687316c65140b48fad27133d583178e15 ]
relid2channel() assumes the vmbus channel array to be allocated when called. However, in cases such as kdump/kexec, not all relids will be reset by the host. When the second kernel boots, if the guest receives a vmbus interrupt during vmbus driver initialization (before vmbus_connect() is called, before it finishes, or after it fails), the vmbus interrupt service routine is called, which in turn calls relid2channel() and can cause a NULL pointer dereference.
Print a warning and error out in relid2channel() for a channel id that's invalid in the second kernel.
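A minimal userspace model of the added guard follows; it is heavily simplified (the real function uses READ_ONCE() and pr_warn_once(), and a different array size), but it shows the failure mode being closed: dereferencing a channel array that vmbus_connect() has not allocated yet.

#include <stdio.h>

#define MAX_CHANNEL_RELIDS 256

struct vmbus_channel { unsigned int relid; };

static struct vmbus_channel **channels; /* allocated by vmbus_connect() */

static struct vmbus_channel *relid2channel(unsigned int relid)
{
	if (channels == NULL) {
		/* Interrupt arrived before vmbus_connect() finished. */
		fprintf(stderr, "relid2channel: relid=%u: no channels mapped\n", relid);
		return NULL;
	}
	if (relid >= MAX_CHANNEL_RELIDS)
		return NULL;
	return channels[relid];
}

int main(void)
{
	/* Before connect: must not dereference the unallocated array. */
	return relid2channel(5) == NULL ? 0 : 1;
}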
Fixes: 8b6a877c060e ("Drivers: hv: vmbus: Replace the per-CPU channel lists with a global array of channels")
Signed-off-by: Mohammed Gamal mgamal@redhat.com Reviewed-by: Dexuan Cui decui@microsoft.com Link: https://lore.kernel.org/r/20230217204411.212709-1-mgamal@redhat.com Signed-off-by: Wei Liu wei.liu@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hv/connection.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c index 9dc27e5d367a2..da51b50787dff 100644 --- a/drivers/hv/connection.c +++ b/drivers/hv/connection.c @@ -409,6 +409,10 @@ void vmbus_disconnect(void) */ struct vmbus_channel *relid2channel(u32 relid) { + if (vmbus_connection.channels == NULL) { + pr_warn_once("relid2channel: relid=%d: No channels mapped!\n", relid); + return NULL; + } if (WARN_ON(relid >= MAX_CHANNEL_RELIDS)) return NULL; return READ_ONCE(vmbus_connection.channels[relid]);
From: Ranjani Sridharan ranjani.sridharan@linux.intel.com
[ Upstream commit e51f49512d98783b90799c9cc2002895ec3aa0eb ]
The set_get_data() IPC op bypasses the check for the no_pm flag that is done for the regular IPC tx_msg op. Since set_get_data should be performed while the DSP is in D0I0, set the DSP power state to D0I0 before sending the IPCs in sof_ipc4_set_get_data().
Fixes: ceb89acc4dc8 ("ASoC: SOF: ipc4: Add support for mandatory message handling functionality") Signed-off-by: Ranjani Sridharan ranjani.sridharan@linux.intel.com Reviewed-by: Bard Liao yung-chuan.liao@linux.intel.com Reviewed-by: Péter Ujfalusi peter.ujfalusi@linux.intel.com Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Signed-off-by: Peter Ujfalusi peter.ujfalusi@linux.intel.com Link: https://lore.kernel.org/r/20230322085538.10214-1-peter.ujfalusi@linux.intel.... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/sof/ipc4.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/sound/soc/sof/ipc4.c b/sound/soc/sof/ipc4.c index 74cd7e9560193..280fc89043b16 100644 --- a/sound/soc/sof/ipc4.c +++ b/sound/soc/sof/ipc4.c @@ -393,6 +393,9 @@ static int sof_ipc4_tx_msg(struct snd_sof_dev *sdev, void *msg_data, size_t msg_ static int sof_ipc4_set_get_data(struct snd_sof_dev *sdev, void *data, size_t payload_bytes, bool set) { + const struct sof_dsp_power_state target_state = { + .state = SOF_DSP_PM_D0, + }; size_t payload_limit = sdev->ipc->max_payload_size; struct sof_ipc4_msg *ipc4_msg = data; struct sof_ipc4_msg tx = {{ 0 }}; @@ -423,6 +426,11 @@ static int sof_ipc4_set_get_data(struct snd_sof_dev *sdev, void *data,
tx.extension |= SOF_IPC4_MOD_EXT_MSG_FIRST_BLOCK(1);
+ /* ensure the DSP is in D0i0 before sending IPC */ + ret = snd_sof_dsp_set_power_state(sdev, &target_state); + if (ret < 0) + return ret; + /* Serialise IPC TX */ mutex_lock(&sdev->ipc->tx_mutex);
From: Uwe Kleine-König u.kleine-koenig@pengutronix.de
[ Upstream commit 6f57937980142715e927697a6ffd2050f38ed6f6 ]
The driver supports both polarities. Complete the implementation of .get_state() by setting .polarity according to the configured hardware state.
Fixes: d09f00810850 ("pwm: Add PWM driver for HiSilicon BVT SOCs") Link: https://lore.kernel.org/r/20230228135508.1798428-2-u.kleine-koenig@pengutron... Signed-off-by: Uwe Kleine-König u.kleine-koenig@pengutronix.de Signed-off-by: Thierry Reding thierry.reding@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pwm/pwm-hibvt.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/pwm/pwm-hibvt.c b/drivers/pwm/pwm-hibvt.c index 12c05c155cab0..1b9274c5ad872 100644 --- a/drivers/pwm/pwm-hibvt.c +++ b/drivers/pwm/pwm-hibvt.c @@ -146,6 +146,7 @@ static int hibvt_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
value = readl(base + PWM_CTRL_ADDR(pwm->hwpwm)); state->enabled = (PWM_ENABLE_MASK & value); + state->polarity = (PWM_POLARITY_MASK & value) ? PWM_POLARITY_INVERSED : PWM_POLARITY_NORMAL;
return 0; }
From: Uwe Kleine-König u.kleine-koenig@pengutronix.de
[ Upstream commit 30006b77c7e130e01d1ab2148cc8abf73dfcc4bf ]
The driver only supports normal polarity. Complete the implementation of .get_state() by setting .polarity accordingly.
Reviewed-by: Guenter Roeck groeck@chromium.org Fixes: 1f0d3bb02785 ("pwm: Add ChromeOS EC PWM driver") Link: https://lore.kernel.org/r/20230228135508.1798428-3-u.kleine-koenig@pengutron... Signed-off-by: Uwe Kleine-König u.kleine-koenig@pengutronix.de Signed-off-by: Thierry Reding thierry.reding@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pwm/pwm-cros-ec.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/pwm/pwm-cros-ec.c b/drivers/pwm/pwm-cros-ec.c index 86df6702cb835..ad18b0ebe3f1e 100644 --- a/drivers/pwm/pwm-cros-ec.c +++ b/drivers/pwm/pwm-cros-ec.c @@ -198,6 +198,7 @@ static int cros_ec_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
state->enabled = (ret > 0); state->period = EC_PWM_MAX_DUTY; + state->polarity = PWM_POLARITY_NORMAL;
/* * Note that "disabled" and "duty cycle == 0" are treated the same. If
From: Uwe Kleine-König u.kleine-koenig@pengutronix.de
[ Upstream commit b20b097128d9145fadcea1cbb45c4d186cb57466 ]
The driver only supports normal polarity. Complete the implementation of .get_state() by setting .polarity accordingly.
Fixes: 6f0841a8197b ("pwm: Add support for Azoteq IQS620A PWM generator") Reviewed-by: Jeff LaBundy jeff@labundy.com Link: https://lore.kernel.org/r/20230228135508.1798428-4-u.kleine-koenig@pengutron... Signed-off-by: Uwe Kleine-König u.kleine-koenig@pengutronix.de Signed-off-by: Thierry Reding thierry.reding@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pwm/pwm-iqs620a.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/pwm/pwm-iqs620a.c b/drivers/pwm/pwm-iqs620a.c index 4987ca940b648..01208c2f58843 100644 --- a/drivers/pwm/pwm-iqs620a.c +++ b/drivers/pwm/pwm-iqs620a.c @@ -126,6 +126,7 @@ static int iqs620_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm, mutex_unlock(&iqs620_pwm->lock);
state->period = IQS620_PWM_PERIOD_NS; + state->polarity = PWM_POLARITY_NORMAL;
return 0; }
From: Uwe Kleine-König u.kleine-koenig@pengutronix.de
[ Upstream commit 2be4dcf6627e1bcbbef8e6ba1811f5127d39202c ]
The driver only supports normal polarity. Complete the implementation of .get_state() by setting .polarity accordingly.
Fixes: 8aae4b02e8a6 ("pwm: sprd: Add Spreadtrum PWM support") Link: https://lore.kernel.org/r/20230228135508.1798428-5-u.kleine-koenig@pengutron... Signed-off-by: Uwe Kleine-König u.kleine-koenig@pengutronix.de Signed-off-by: Thierry Reding thierry.reding@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pwm/pwm-sprd.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/pwm/pwm-sprd.c b/drivers/pwm/pwm-sprd.c index d866ce345f977..bde579a338c27 100644 --- a/drivers/pwm/pwm-sprd.c +++ b/drivers/pwm/pwm-sprd.c @@ -109,6 +109,7 @@ static int sprd_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm, duty = val & SPRD_PWM_DUTY_MSK; tmp = (prescale + 1) * NSEC_PER_SEC * duty; state->duty_cycle = DIV_ROUND_CLOSEST_ULL(tmp, chn->clk_rate); + state->polarity = PWM_POLARITY_NORMAL;
/* Disable PWM clocks if the PWM channel is not in enable state. */ if (!state->enabled)
From: Uwe Kleine-König u.kleine-koenig@pengutronix.de
[ Upstream commit 8caa81eb950cb2e9d2d6959b37d853162d197f57 ]
The driver only supports normal polarity. Complete the implementation of .get_state() by setting .polarity accordingly.
This fixes a regression that was possible since commit c73a3107624d ("pwm: Handle .get_state() failures"), which stopped zero-initializing the state passed to the .get_state() callback. This was reported at https://forum.odroid.com/viewtopic.php?f=177&t=46360 . While this was an unintended side effect, the real issue is the driver's callback not setting the polarity.
There is a complicating fact: the .apply() callback fakes support for inversed polarity. This is not (and cannot be) matched by .get_state(). As fixing this isn't easy, only point it out in a comment to prevent authors of other drivers from copying that approach.
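The class of bug fixed by this run of PWM patches can be shown with a short userspace sketch (assumed, simplified types; not kernel code): once the caller stops zero-initializing the state, a .get_state() that never writes .polarity hands back whatever garbage was in that memory.

#include <stdio.h>
#include <string.h>

enum pwm_polarity { PWM_POLARITY_NORMAL, PWM_POLARITY_INVERSED };

struct pwm_state {
	unsigned long period;
	unsigned long duty_cycle;
	int enabled;
	enum pwm_polarity polarity;
};

/* Buggy callback: fills everything except .polarity. */
static int get_state_incomplete(struct pwm_state *state)
{
	state->period = 1000000;
	state->duty_cycle = 500000;
	state->enabled = 1;
	return 0;
}

/* Fixed callback: explicitly sets every field it reports. */
static int get_state_complete(struct pwm_state *state)
{
	get_state_incomplete(state);
	state->polarity = PWM_POLARITY_NORMAL; /* the one-line fix */
	return 0;
}

int main(void)
{
	struct pwm_state state;

	/* The PWM core used to memset() this; since c73a3107624d it does not. */
	memset(&state, 0xff, sizeof(state)); /* simulate uninitialized memory */

	get_state_incomplete(&state);
	printf("buggy: polarity=%d (garbage)\n", state.polarity);

	get_state_complete(&state);
	printf("fixed: polarity=%d\n", state.polarity);
	return 0;
}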
Fixes: c375bcbaabdb ("pwm: meson: Read the full hardware state in meson_pwm_get_state()") Reported-by: Munehisa Kamata kamatam@amazon.com Acked-by: Martin Blumenstingl martin.blumenstingl@googlemail.com Link: https://lore.kernel.org/r/20230310191405.2606296-1-u.kleine-koenig@pengutron... Signed-off-by: Uwe Kleine-König u.kleine-koenig@pengutronix.de Signed-off-by: Thierry Reding thierry.reding@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pwm/pwm-meson.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/drivers/pwm/pwm-meson.c b/drivers/pwm/pwm-meson.c index 16d79ca5d8f53..5cd7b90872c62 100644 --- a/drivers/pwm/pwm-meson.c +++ b/drivers/pwm/pwm-meson.c @@ -162,6 +162,12 @@ static int meson_pwm_calc(struct meson_pwm *meson, struct pwm_device *pwm, duty = state->duty_cycle; period = state->period;
+ /* + * Note this is wrong. The result is an output wave that isn't really + * inverted and so is wrongly identified by .get_state as normal. + * Fixing this needs some care however as some machines might rely on + * this. + */ if (state->polarity == PWM_POLARITY_INVERSED) duty = period - duty;
@@ -358,6 +364,8 @@ static int meson_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm, state->duty_cycle = 0; }
+ state->polarity = PWM_POLARITY_NORMAL; + return 0; }
From: Srinivas Kandagatla srinivas.kandagatla@linaro.org
[ Upstream commit a4a3203426f4b67535d6442ddc5dca8878a0678f ]
The order in which clocks are stopped matters, as some of the clocks, like NPL, are derived from MCLK.
Without this patch, the Dragonboard RB5 DSP would crash with the below error:

qcom_q6v5_pas 17300000.remoteproc: fatal error received: ABT_dal.c:278:ABTimeout: AHB Bus hang is detected, Number of bus hang detected := 2 , addr0 = 0x3370000 , addr1 = 0x0!!!
Turn off fsgen first, followed by npl, and finally mclk, which is exactly the reverse of the enable sequence.
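A compilable sketch of the pairing being enforced (userspace stubs stand in for the kernel clk API; the macro struct and names are simplified from the drivers below):

#include <stdio.h>

struct clk { const char *name; };

static void clk_prepare_enable(struct clk *clk)
{
	printf("enable  %s\n", clk->name);  /* stub for the kernel API */
}

static void clk_disable_unprepare(struct clk *clk)
{
	printf("disable %s\n", clk->name);  /* stub for the kernel API */
}

struct macro { struct clk *mclk, *npl, *fsgen; };

static void macro_suspend(struct macro *m)
{
	/* Children first, parent last: the exact reverse of enable, so NPL
	 * (derived from MCLK) never outlives its parent. */
	clk_disable_unprepare(m->fsgen);
	clk_disable_unprepare(m->npl);
	clk_disable_unprepare(m->mclk);
}

int main(void)
{
	struct clk mclk = { "mclk" }, npl = { "npl" }, fsgen = { "fsgen" };
	struct macro m = { .mclk = &mclk, .npl = &npl, .fsgen = &fsgen };

	clk_prepare_enable(m.mclk);         /* parent first */
	clk_prepare_enable(m.npl);          /* derived from mclk */
	clk_prepare_enable(m.fsgen);

	macro_suspend(&m);                  /* fsgen -> npl -> mclk */
	return 0;
}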
Fixes: 1dc3459009c3 ("ASoC: codecs: lpass: register mclk after runtime pm") Reported-by: Amit Pundir amit.pundir@linaro.org Signed-off-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Tested-by: Amit Pundir amit.pundir@linaro.org Link: https://lore.kernel.org/r/20230323110125.23790-1-srinivas.kandagatla@linaro.... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/lpass-rx-macro.c | 4 ++-- sound/soc/codecs/lpass-tx-macro.c | 4 ++-- sound/soc/codecs/lpass-wsa-macro.c | 4 ++-- 3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/sound/soc/codecs/lpass-rx-macro.c b/sound/soc/codecs/lpass-rx-macro.c index 8621cfabcf5b6..1639f3b66facb 100644 --- a/sound/soc/codecs/lpass-rx-macro.c +++ b/sound/soc/codecs/lpass-rx-macro.c @@ -3667,9 +3667,9 @@ static int __maybe_unused rx_macro_runtime_suspend(struct device *dev) regcache_cache_only(rx->regmap, true); regcache_mark_dirty(rx->regmap);
- clk_disable_unprepare(rx->mclk); - clk_disable_unprepare(rx->npl); clk_disable_unprepare(rx->fsgen); + clk_disable_unprepare(rx->npl); + clk_disable_unprepare(rx->mclk);
return 0; } diff --git a/sound/soc/codecs/lpass-tx-macro.c b/sound/soc/codecs/lpass-tx-macro.c index 8facdb922f076..9f33289ce2174 100644 --- a/sound/soc/codecs/lpass-tx-macro.c +++ b/sound/soc/codecs/lpass-tx-macro.c @@ -2093,9 +2093,9 @@ static int __maybe_unused tx_macro_runtime_suspend(struct device *dev) regcache_cache_only(tx->regmap, true); regcache_mark_dirty(tx->regmap);
- clk_disable_unprepare(tx->mclk); - clk_disable_unprepare(tx->npl); clk_disable_unprepare(tx->fsgen); + clk_disable_unprepare(tx->npl); + clk_disable_unprepare(tx->mclk);
return 0; } diff --git a/sound/soc/codecs/lpass-wsa-macro.c b/sound/soc/codecs/lpass-wsa-macro.c index c0b86d69c72e3..01149b20b4c93 100644 --- a/sound/soc/codecs/lpass-wsa-macro.c +++ b/sound/soc/codecs/lpass-wsa-macro.c @@ -2504,9 +2504,9 @@ static int __maybe_unused wsa_macro_runtime_suspend(struct device *dev) regcache_cache_only(wsa->regmap, true); regcache_mark_dirty(wsa->regmap);
- clk_disable_unprepare(wsa->mclk); - clk_disable_unprepare(wsa->npl); clk_disable_unprepare(wsa->fsgen); + clk_disable_unprepare(wsa->npl); + clk_disable_unprepare(wsa->mclk);
return 0; }
From: Nico Boehr nrb@linux.ibm.com
[ Upstream commit 21f27df854008b86349a203bf97fef79bb11f53e ]
To determine whether the guest has caused an external interruption loop upon code 20 (external interrupt) intercepts, the ext_new_psw needs to be inspected to see whether external interrupts are enabled.
Under non-PV, ext_new_psw can simply be taken from guest lowcore. Under PV, KVM can only access the encrypted guest lowcore and hence the ext_new_psw must not be taken from guest lowcore.
handle_external_interrupt() incorrectly did that and hence was not able to reliably tell whether an external interruption loop is happening or not. False negatives cause spurious failures of my kvm-unit-test for extint loops[1] under PV.
Since, under PV, code 20 is caused if and only if the guest's ext_new_psw is enabled for external interrupts, a false-positive detection of an external interruption loop cannot happen.
Fix this issue by instead looking at the guest PSW in the state description. Since the PSW swap for external interrupt is done by the ultravisor before the intercept is caused, this reliably tells whether the guest is enabled for external interrupts in the ext_new_psw.
Also update the comments to explain better what is happening.
[1] https://lore.kernel.org/kvm/20220812062151.1980937-4-nrb@linux.ibm.com/
Signed-off-by: Nico Boehr nrb@linux.ibm.com Reviewed-by: Janosch Frank frankja@linux.ibm.com Reviewed-by: Christian Borntraeger borntraeger@linux.ibm.com Fixes: 201ae986ead7 ("KVM: s390: protvirt: Implement interrupt injection") Link: https://lore.kernel.org/r/20230213085520.100756-2-nrb@linux.ibm.com Message-Id: 20230213085520.100756-2-nrb@linux.ibm.com Signed-off-by: Janosch Frank frankja@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/s390/kvm/intercept.c | 32 ++++++++++++++++++++++++-------- 1 file changed, 24 insertions(+), 8 deletions(-)
diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c index 0ee02dae14b2b..2cda8d9d7c6ef 100644 --- a/arch/s390/kvm/intercept.c +++ b/arch/s390/kvm/intercept.c @@ -271,10 +271,18 @@ static int handle_prog(struct kvm_vcpu *vcpu) * handle_external_interrupt - used for external interruption interceptions * @vcpu: virtual cpu * - * This interception only occurs if the CPUSTAT_EXT_INT bit was set, or if - * the new PSW does not have external interrupts disabled. In the first case, - * we've got to deliver the interrupt manually, and in the second case, we - * drop to userspace to handle the situation there. + * This interception occurs if: + * - the CPUSTAT_EXT_INT bit was already set when the external interrupt + * occurred. In this case, the interrupt needs to be injected manually to + * preserve interrupt priority. + * - the external new PSW has external interrupts enabled, which will cause an + * interruption loop. We drop to userspace in this case. + * + * The latter case can be detected by inspecting the external mask bit in the + * external new psw. + * + * Under PV, only the latter case can occur, since interrupt priorities are + * handled in the ultravisor. */ static int handle_external_interrupt(struct kvm_vcpu *vcpu) { @@ -285,10 +293,18 @@ static int handle_external_interrupt(struct kvm_vcpu *vcpu)
vcpu->stat.exit_external_interrupt++;
- rc = read_guest_lc(vcpu, __LC_EXT_NEW_PSW, &newpsw, sizeof(psw_t)); - if (rc) - return rc; - /* We can not handle clock comparator or timer interrupt with bad PSW */ + if (kvm_s390_pv_cpu_is_protected(vcpu)) { + newpsw = vcpu->arch.sie_block->gpsw; + } else { + rc = read_guest_lc(vcpu, __LC_EXT_NEW_PSW, &newpsw, sizeof(psw_t)); + if (rc) + return rc; + } + + /* + * Clock comparator or timer interrupt with external interrupt enabled + * will cause interrupt loop. Drop to userspace. + */ if ((eic == EXT_IRQ_CLK_COMP || eic == EXT_IRQ_CPU_TIMER) && (newpsw.mask & PSW_MASK_EXT)) return -EOPNOTSUPP;
From: Ryder Lee ryder.lee@mediatek.com
[ Upstream commit dd01579e5ed922dcfcb8fec53fa03b81c7649a04 ]
This should return the size of ieee80211_eht_cap_elem_fixed, so fix it.
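The underlying slip is taking sizeof on the wrong object. A hedged userspace sketch, with invented struct layouts (the real elements differ and carry optional trailing fields):

#include <stdio.h>

struct he_cap_elem  { unsigned char mac_cap_info[6], phy_cap_info[11]; };
struct eht_cap_elem { unsigned char mac_cap_info[2], phy_cap_info[9]; };

struct sta_caps {
	struct he_cap_elem he_cap_elem;
	struct eht_cap_elem eht_cap_elem;
};

int main(void)
{
	struct sta_caps caps;

	/* Wrong: sizes the HE element while computing the EHT element length. */
	printf("wrong len: %zu\n", sizeof(caps.he_cap_elem));

	/* Right: the length must match the element actually being emitted. */
	printf("right len: %zu\n", sizeof(caps.eht_cap_elem));
	return 0;
}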
Fixes: 820acc810fb6 ("mac80211: Add EHT capabilities to association/probe request") Signed-off-by: Ryder Lee ryder.lee@mediatek.com Link: https://lore.kernel.org/r/06c13635fc03bcff58a647b8e03e9f01a74294bd.167993525... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/util.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/mac80211/util.c b/net/mac80211/util.c index 9c219e525eded..ed9e659f49f63 100644 --- a/net/mac80211/util.c +++ b/net/mac80211/util.c @@ -4906,7 +4906,7 @@ u8 ieee80211_ie_len_eht_cap(struct ieee80211_sub_if_data *sdata, u8 iftype) &eht_cap->eht_cap_elem, is_ap); return 2 + 1 + - sizeof(he_cap->he_cap_elem) + n + + sizeof(eht_cap->eht_cap_elem) + n + ieee80211_eht_ppe_size(eht_cap->eht_ppe_thres[0], eht_cap->eht_cap_elem.phy_cap_info); return 0;
From: Felix Fietkau nbd@nbd.name
[ Upstream commit 12b220a6171faf10638ab683a975cadcf1a352d6 ]
Avoid potential data corruption issues caused by uninitialized driver private data structures.
Reported-by: Brian Coverstone brian@mainsequence.net Fixes: 6a9d1b91f34d ("mac80211: add pre-RCU-sync sta removal driver operation") Signed-off-by: Felix Fietkau nbd@nbd.name Link: https://lore.kernel.org/r/20230324120924.38412-3-nbd@nbd.name Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/sta_info.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c index 34cb833db25f5..39731ef51e03a 100644 --- a/net/mac80211/sta_info.c +++ b/net/mac80211/sta_info.c @@ -1261,7 +1261,8 @@ static int __must_check __sta_info_destroy_part1(struct sta_info *sta) list_del_rcu(&sta->list); sta->removed = true;
- drv_sta_pre_rcu_remove(local, sta->sdata, sta); + if (sta->uploaded) + drv_sta_pre_rcu_remove(local, sta->sdata, sta);
if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN && rcu_access_pointer(sdata->u.vlan.sta) == sta)
From: Ziyang Xuan william.xuanziyang@huawei.com
[ Upstream commit 44d807320000db0d0013372ad39b53e12d52f758 ]
Syzbot reported a bug as follows:
refcount_t: addition on 0; use-after-free.
...
RIP: 0010:refcount_warn_saturate+0x17c/0x1f0 lib/refcount.c:25
...
Call Trace:
 <TASK>
 __refcount_add include/linux/refcount.h:199 [inline]
 __refcount_inc include/linux/refcount.h:250 [inline]
 refcount_inc include/linux/refcount.h:267 [inline]
 kref_get include/linux/kref.h:45 [inline]
 qrtr_node_acquire net/qrtr/af_qrtr.c:202 [inline]
 qrtr_node_lookup net/qrtr/af_qrtr.c:398 [inline]
 qrtr_send_resume_tx net/qrtr/af_qrtr.c:1003 [inline]
 qrtr_recvmsg+0x85f/0x990 net/qrtr/af_qrtr.c:1070
 sock_recvmsg_nosec net/socket.c:1017 [inline]
 sock_recvmsg+0xe2/0x160 net/socket.c:1038
 qrtr_ns_worker+0x170/0x1700 net/qrtr/ns.c:688
 process_one_work+0x991/0x15c0 kernel/workqueue.c:2390
 worker_thread+0x669/0x1090 kernel/workqueue.c:2537
It occurs in the concurrent scenario of qrtr_recvmsg() and qrtr_endpoint_unregister(), as follows:
cpu0                                        cpu1
qrtr_recvmsg                                qrtr_endpoint_unregister
  qrtr_send_resume_tx                         qrtr_node_release
    qrtr_node_lookup                            mutex_lock(&qrtr_node_lock)
      spin_lock_irqsave(&qrtr_nodes_lock, )     refcount_dec_and_test(&node->ref) [node->ref == 0]
      radix_tree_lookup [node != NULL]            __qrtr_node_release
      qrtr_node_acquire                           spin_lock_irqsave(&qrtr_nodes_lock, )
        kref_get(&node->ref) [WARNING]            ...
                                                mutex_unlock(&qrtr_node_lock)
Use qrtr_node_lock to protect the qrtr_node_lookup() implementation; this actually improves the protection of the node reference.
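A userspace model of the idea (pthread-based; all names invented): reviving a refcount is only safe under the same lock that serializes the final release, so a node whose count already dropped to zero can never be re-acquired.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t node_lock = PTHREAD_MUTEX_INITIALIZER;

struct node { int refcount; };

static struct node the_node = { .refcount = 1 };

/* Stand-in for the radix tree: may still return a node whose refcount
 * has already dropped to zero. */
static struct node *tree_lookup(unsigned int nid)
{
	(void)nid;
	return &the_node;
}

static struct node *node_lookup(unsigned int nid)
{
	struct node *node;

	pthread_mutex_lock(&node_lock);   /* the lock that release also takes */
	node = tree_lookup(nid);
	if (node && node->refcount > 0)   /* cannot race with the final put */
		node->refcount++;
	else
		node = NULL;
	pthread_mutex_unlock(&node_lock);
	return node;
}

int main(void)
{
	the_node.refcount = 0;            /* simulate a concurrent final put */
	printf("%s\n", node_lookup(1) ? "acquired" : "lookup failed safely");
	return 0;
}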
Fixes: 0a7e0d0ef054 ("net: qrtr: Migrate node lookup tree to spinlock") Reported-by: syzbot+a7492efaa5d61b51db23@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?extid=a7492efaa5d61b51db23 Signed-off-by: Ziyang Xuan william.xuanziyang@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/qrtr/af_qrtr.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/net/qrtr/af_qrtr.c b/net/qrtr/af_qrtr.c index 5c2fb992803b7..3a70255c8d02f 100644 --- a/net/qrtr/af_qrtr.c +++ b/net/qrtr/af_qrtr.c @@ -393,10 +393,12 @@ static struct qrtr_node *qrtr_node_lookup(unsigned int nid) struct qrtr_node *node; unsigned long flags;
+ mutex_lock(&qrtr_node_lock); spin_lock_irqsave(&qrtr_nodes_lock, flags); node = radix_tree_lookup(&qrtr_nodes, nid); node = qrtr_node_acquire(node); spin_unlock_irqrestore(&qrtr_nodes_lock, flags); + mutex_unlock(&qrtr_node_lock);
return node; }
From: Michael Sit Wei Hong michael.wei.hong.sit@intel.com
[ Upstream commit 653a180957a85c3fc30320cc7e84f5dc913a64f8 ]
Provide phylink_expects_phy() to allow MAC drivers to check whether phylink is expecting a PHY to attach, since fixed-link setups do not need to attach to a PHY.

It provides a boolean value indicating whether the MAC should expect a PHY, returning true if a PHY is expected.
Reviewed-by: Russell King (Oracle) rmk+kernel@armlinux.org.uk Signed-off-by: Michael Sit Wei Hong michael.wei.hong.sit@intel.com Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: fe2cfbc96803 ("net: stmmac: check if MAC needs to attach to a PHY") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/phy/phylink.c | 19 +++++++++++++++++++ include/linux/phylink.h | 1 + 2 files changed, 20 insertions(+)
diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c index 4d2519cdb8012..bf8a8ed5d5d7b 100644 --- a/drivers/net/phy/phylink.c +++ b/drivers/net/phy/phylink.c @@ -1571,6 +1571,25 @@ void phylink_destroy(struct phylink *pl) } EXPORT_SYMBOL_GPL(phylink_destroy);
+/** + * phylink_expects_phy() - Determine if phylink expects a phy to be attached + * @pl: a pointer to a &struct phylink returned from phylink_create() + * + * When using fixed-link mode, or in-band mode with 1000base-X or 2500base-X, + * no PHY is needed. + * + * Returns true if phylink will be expecting a PHY. + */ +bool phylink_expects_phy(struct phylink *pl) +{ + if (pl->cfg_link_an_mode == MLO_AN_FIXED || + (pl->cfg_link_an_mode == MLO_AN_INBAND && + phy_interface_mode_is_8023z(pl->link_config.interface))) + return false; + return true; +} +EXPORT_SYMBOL_GPL(phylink_expects_phy); + static void phylink_phy_change(struct phy_device *phydev, bool up) { struct phylink *pl = phydev->phylink; diff --git a/include/linux/phylink.h b/include/linux/phylink.h index c492c26202b5b..637698ed5cb6c 100644 --- a/include/linux/phylink.h +++ b/include/linux/phylink.h @@ -574,6 +574,7 @@ struct phylink *phylink_create(struct phylink_config *, struct fwnode_handle *, phy_interface_t iface, const struct phylink_mac_ops *mac_ops); void phylink_destroy(struct phylink *); +bool phylink_expects_phy(struct phylink *pl);
int phylink_connect_phy(struct phylink *, struct phy_device *); int phylink_of_phy_connect(struct phylink *, struct device_node *, u32 flags);
From: Michael Sit Wei Hong michael.wei.hong.sit@intel.com
[ Upstream commit fe2cfbc9680356a3d9f8adde8a38e715831e32f5 ]
After the introduction of the fixed-link support, the MAC driver no longer attempts to scan for a PHY to attach to. This causes non-fixed-link setups to stop working.

Use phylink_expects_phy() to check and determine whether the MAC should expect, and attach, a PHY.
Fixes: ab21cf920928 ("net: stmmac: make mdio register skips PHY scanning for fixed-link") Signed-off-by: Michael Sit Wei Hong michael.wei.hong.sit@intel.com Signed-off-by: Lai Peter Jun Ann peter.jun.ann.lai@intel.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 7389718b4797b..20b51a39db38d 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -1135,6 +1135,7 @@ static int stmmac_init_phy(struct net_device *dev) { struct stmmac_priv *priv = netdev_priv(dev); struct fwnode_handle *fwnode; + bool phy_needed; int ret;
fwnode = of_fwnode_handle(priv->plat->phylink_node); @@ -1144,10 +1145,11 @@ static int stmmac_init_phy(struct net_device *dev) if (fwnode) ret = phylink_fwnode_phy_connect(priv->phylink, fwnode, 0);
+ phy_needed = phylink_expects_phy(priv->phylink); /* Some DT bindings do not set-up the PHY handle. Let's try to * manually parse it */ - if (!fwnode || ret) { + if (!fwnode || phy_needed || ret) { int addr = priv->plat->phy_addr; struct phy_device *phydev;
From: Michael Sit Wei Hong michael.wei.hong.sit@intel.com
[ Upstream commit 6fc21a6ed5953b1dd3a41ce7be1ea57f5ef8c081 ]
Currently, intel_speed_mode_2500() will fix up xpcs_an_inband to 1 if the underlying controller has a max speed of 1000Mbps. The value has already been initialized, and modified earlier if it is a fixed-link setup.

This patch removes the fix-up to allow for fixed-link setup support. In stmmac_phy_setup(), ovr_an_inband is set based on the value of xpcs_an_inband, which in turn makes phylink_parse_mode() return an error when MLO_AN_FIXED and ovr_an_inband are both set.
Fixes: c82386310d95 ("stmmac: intel: prepare to support 1000BASE-X phy interface setting") Signed-off-by: Michael Sit Wei Hong michael.wei.hong.sit@intel.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c index 13aa919633b47..ab9f876b6df7e 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c @@ -251,7 +251,6 @@ static void intel_speed_mode_2500(struct net_device *ndev, void *intel_data) priv->plat->mdio_bus_data->xpcs_an_inband = false; } else { priv->plat->max_speed = 1000; - priv->plat->mdio_bus_data->xpcs_an_inband = true; } }
From: Andrea Righi andrea.righi@canonical.com
[ Upstream commit 154e07c164859fc90bf4e8143f2f6c1af9f3a35e ]
Commit 65b32f801bfb ("uapi: move IPPROTO_L2TP to in.h") moved the definition of IPPROTO_L2TP from a define to an enum, but since __stringify doesn't work properly with enums, we ended up breaking the modalias strings for the l2tp modules:
$ modinfo l2tp_ip l2tp_ip6 | grep alias
alias: net-pf-2-proto-IPPROTO_L2TP
alias: net-pf-2-proto-2-type-IPPROTO_L2TP
alias: net-pf-10-proto-IPPROTO_L2TP
alias: net-pf-10-proto-2-type-IPPROTO_L2TP
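The first two macro lines below mirror include/linux/stringify.h; the rest is a standalone demonstration (PROTO_DEFINE and PROTO_ENUM are invented names) of why a #define stringifies to its value while an enum constant survives as its identifier:

#include <stdio.h>

#define __stringify_1(x...) #x
#define __stringify(x...)   __stringify_1(x)

#define PROTO_DEFINE 115        /* macros expand before stringification */
enum { PROTO_ENUM = 115 };      /* enums are invisible to the preprocessor */

int main(void)
{
	printf("%s\n", __stringify(PROTO_DEFINE)); /* prints "115" */
	printf("%s\n", __stringify(PROTO_ENUM));   /* prints "PROTO_ENUM" */
	return 0;
}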
Use the resolved number directly in MODULE_ALIAS_*() macros (as we already do with SOCK_DGRAM) to fix the alias strings:
$ modinfo l2tp_ip l2tp_ip6 | grep alias
alias: net-pf-2-proto-115
alias: net-pf-2-proto-115-type-2
alias: net-pf-10-proto-115
alias: net-pf-10-proto-115-type-2
Moreover, fix the ordering of the parameters passed to MODULE_ALIAS_NET_PF_PROTO_TYPE() by switching proto and type.
Fixes: 65b32f801bfb ("uapi: move IPPROTO_L2TP to in.h") Link: https://lore.kernel.org/lkml/ZCQt7hmodtUaBlCP@righiandr-XPS-13-7390 Signed-off-by: Guillaume Nault gnault@redhat.com Signed-off-by: Andrea Righi andrea.righi@canonical.com Reviewed-by: Wojciech Drewek wojciech.drewek@intel.com Tested-by: Wojciech Drewek wojciech.drewek@intel.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/l2tp/l2tp_ip.c | 8 ++++---- net/l2tp/l2tp_ip6.c | 8 ++++---- 2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c index 4db5a554bdbd9..41a74fc84ca13 100644 --- a/net/l2tp/l2tp_ip.c +++ b/net/l2tp/l2tp_ip.c @@ -677,8 +677,8 @@ MODULE_AUTHOR("James Chapman jchapman@katalix.com"); MODULE_DESCRIPTION("L2TP over IP"); MODULE_VERSION("1.0");
-/* Use the value of SOCK_DGRAM (2) directory, because __stringify doesn't like - * enums +/* Use the values of SOCK_DGRAM (2) as type and IPPROTO_L2TP (115) as protocol, + * because __stringify doesn't like enums */ -MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_INET, 2, IPPROTO_L2TP); -MODULE_ALIAS_NET_PF_PROTO(PF_INET, IPPROTO_L2TP); +MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_INET, 115, 2); +MODULE_ALIAS_NET_PF_PROTO(PF_INET, 115); diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c index 2478aa60145fb..5137ea1861ce2 100644 --- a/net/l2tp/l2tp_ip6.c +++ b/net/l2tp/l2tp_ip6.c @@ -806,8 +806,8 @@ MODULE_AUTHOR("Chris Elston celston@katalix.com"); MODULE_DESCRIPTION("L2TP IP encapsulation for IPv6"); MODULE_VERSION("1.0");
-/* Use the value of SOCK_DGRAM (2) directory, because __stringify doesn't like - * enums +/* Use the values of SOCK_DGRAM (2) as type and IPPROTO_L2TP (115) as protocol, + * because __stringify doesn't like enums */ -MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_INET6, 2, IPPROTO_L2TP); -MODULE_ALIAS_NET_PF_PROTO(PF_INET6, IPPROTO_L2TP); +MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_INET6, 115, 2); +MODULE_ALIAS_NET_PF_PROTO(PF_INET6, 115);
From: Hans de Goede hdegoede@redhat.com
[ Upstream commit e4efa515d58f1363d8a27e548f9c5769d3121e03 ]
After commit 92cadedd9d5f ("brcmfmac: Avoid keeping power to SDIO card unless WOWL is used"), the wifi adapter by default is turned off on suspend and then re-probed on resume.
In at least two models of x86/ACPI tablets with brcmfmac43430a1 wifi adapters, the newly added re-probe on resume fails like this:
brcmfmac: brcmf_sdio_bus_rxctl: resumed on timeout
ieee80211 phy1: brcmf_bus_started: failed: -110
ieee80211 phy1: brcmf_attach: dongle is not responding: err=-110
brcmfmac: brcmf_sdio_firmware_callback: brcmf_attach failed
It seems this specific brcmfmac model does not like being reprobed without it actually being turned off first.
And the adapter is not being turned off during suspend because of commit f0992ace680c ("brcmfmac: prohibit ACPI power management for brcmfmac driver").
Now that the driver is being reprobed on resume, the disabling of ACPI pm is no longer necessary, except when WOWL is used (in which case there is no re-probe).
Move the dis-/en-abling of ACPI pm to brcmf_sdio_wowl_config(), this fixes the brcmfmac43430a1 suspend/resume regression and should help save some power when suspended.
This change means that the code now also may re-enable ACPI pm when WOWL gets disabled. ACPI pm should only be re-enabled if it was enabled by the ACPI core originally. Add a brcmf_sdiod_acpi_save_power_manageable() to save the original state for this.
This has been tested on the following devices:
Asus T100TA               brcmfmac43241b4-sdio
Acer Iconia One 7 B1-750  brcmfmac43340-sdio
Chuwi Hi8                 brcmfmac43430a0-sdio
Chuwi Hi8                 brcmfmac43430a1-sdio
(the Asus T100TA is the device for which the prohibiting of ACPI pm was originally added)
Fixes: 92cadedd9d5f ("brcmfmac: Avoid keeping power to SDIO card unless WOWL is used") Cc: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Hans de Goede hdegoede@redhat.com Reviewed-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Kalle Valo kvalo@kernel.org Link: https://lore.kernel.org/r/20230320122252.240070-1-hdegoede@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../broadcom/brcm80211/brcmfmac/bcmsdh.c | 36 +++++++++++++------ .../broadcom/brcm80211/brcmfmac/sdio.h | 2 ++ 2 files changed, 28 insertions(+), 10 deletions(-)
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c index b7c918f241c91..65d4799a56584 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c @@ -994,15 +994,34 @@ static const struct sdio_device_id brcmf_sdmmc_ids[] = { MODULE_DEVICE_TABLE(sdio, brcmf_sdmmc_ids);
-static void brcmf_sdiod_acpi_set_power_manageable(struct device *dev, - int val) +static void brcmf_sdiod_acpi_save_power_manageable(struct brcmf_sdio_dev *sdiodev) { #if IS_ENABLED(CONFIG_ACPI) struct acpi_device *adev;
- adev = ACPI_COMPANION(dev); + adev = ACPI_COMPANION(&sdiodev->func1->dev); if (adev) - adev->flags.power_manageable = 0; + sdiodev->func1_power_manageable = adev->flags.power_manageable; + + adev = ACPI_COMPANION(&sdiodev->func2->dev); + if (adev) + sdiodev->func2_power_manageable = adev->flags.power_manageable; +#endif +} + +static void brcmf_sdiod_acpi_set_power_manageable(struct brcmf_sdio_dev *sdiodev, + int enable) +{ +#if IS_ENABLED(CONFIG_ACPI) + struct acpi_device *adev; + + adev = ACPI_COMPANION(&sdiodev->func1->dev); + if (adev) + adev->flags.power_manageable = enable ? sdiodev->func1_power_manageable : 0; + + adev = ACPI_COMPANION(&sdiodev->func2->dev); + if (adev) + adev->flags.power_manageable = enable ? sdiodev->func2_power_manageable : 0; #endif }
@@ -1012,7 +1031,6 @@ static int brcmf_ops_sdio_probe(struct sdio_func *func, int err; struct brcmf_sdio_dev *sdiodev; struct brcmf_bus *bus_if; - struct device *dev;
brcmf_dbg(SDIO, "Enter\n"); brcmf_dbg(SDIO, "Class=%x\n", func->class); @@ -1020,14 +1038,9 @@ static int brcmf_ops_sdio_probe(struct sdio_func *func, brcmf_dbg(SDIO, "sdio device ID: 0x%04x\n", func->device); brcmf_dbg(SDIO, "Function#: %d\n", func->num);
- dev = &func->dev; - /* Set MMC_QUIRK_LENIENT_FN0 for this card */ func->card->quirks |= MMC_QUIRK_LENIENT_FN0;
- /* prohibit ACPI power management for this device */ - brcmf_sdiod_acpi_set_power_manageable(dev, 0); - /* Consume func num 1 but dont do anything with it. */ if (func->num == 1) return 0; @@ -1059,6 +1072,7 @@ static int brcmf_ops_sdio_probe(struct sdio_func *func, dev_set_drvdata(&sdiodev->func1->dev, bus_if); sdiodev->dev = &sdiodev->func1->dev;
+ brcmf_sdiod_acpi_save_power_manageable(sdiodev); brcmf_sdiod_change_state(sdiodev, BRCMF_SDIOD_DOWN);
brcmf_dbg(SDIO, "F2 found, calling brcmf_sdiod_probe...\n"); @@ -1124,6 +1138,8 @@ void brcmf_sdio_wowl_config(struct device *dev, bool enabled)
if (sdiodev->settings->bus.sdio.oob_irq_supported || pm_caps & MMC_PM_WAKE_SDIO_IRQ) { + /* Stop ACPI from turning off the device when wowl is enabled */ + brcmf_sdiod_acpi_set_power_manageable(sdiodev, !enabled); sdiodev->wowl_enabled = enabled; brcmf_dbg(SDIO, "Configuring WOWL, enabled=%d\n", enabled); return; diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h index b76d34d36bde6..0d18ed15b4032 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h @@ -188,6 +188,8 @@ struct brcmf_sdio_dev { char nvram_name[BRCMF_FW_NAME_LEN]; char clm_name[BRCMF_FW_NAME_LEN]; bool wowl_enabled; + bool func1_power_manageable; + bool func2_power_manageable; enum brcmf_sdiod_state state; struct brcmf_sdiod_freezer *freezer; const struct firmware *clm_fw;
From: Chuck Lever chuck.lever@oracle.com
[ Upstream commit 804d8e0a6e54427268790472781e03bc243f4ee3 ]
OPDESC() simply indexes into nfsd4_ops[] by the op's operation number, without range checking that value. It assumes callers are careful to avoid calling it with an out-of-bounds opnum value.
nfsd4_decode_compound() is not so careful, and can invoke OPDESC() with opnum set to OP_ILLEGAL, which is 10044 -- well beyond the end of nfsd4_ops[].
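A userspace sketch of the out-of-bounds pattern (simplified; the table bound is assumed here, and the real fix validates via nfsd4_opnum_in_range() before ever calling OPDESC()):

#include <stdio.h>

#define LAST_NFS4_OP 75     /* assumed bound, for illustration only */
#define OP_ILLEGAL   10044  /* far beyond the end of the table */

struct nfsd4_operation { const char *op_name; };

static const struct nfsd4_operation nfsd4_ops[LAST_NFS4_OP + 1];

/* Like OPDESC(): no range checking, so opnum must be pre-validated. */
static const struct nfsd4_operation *opdesc(unsigned int opnum)
{
	return &nfsd4_ops[opnum];   /* OOB read if opnum == OP_ILLEGAL */
}

/* What callers must do instead, as the fix arranges. */
static const struct nfsd4_operation *opdesc_checked(unsigned int opnum)
{
	if (opnum > LAST_NFS4_OP)
		return NULL;        /* leave op->opdesc NULL */
	return &nfsd4_ops[opnum];
}

int main(void)
{
	printf("valid:   %p\n", (void *)opdesc(0));
	printf("illegal: %p\n", (void *)opdesc_checked(OP_ILLEGAL)); /* (nil) */
	return 0;
}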
Reported-by: Jeff Layton jlayton@kernel.org Fixes: f4f9ef4a1b0a ("nfsd4: opdesc will be useful outside nfs4proc.c") Signed-off-by: Chuck Lever chuck.lever@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfsd/nfs4xdr.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c index 97edb32be77f1..67bbd2d6334c4 100644 --- a/fs/nfsd/nfs4xdr.c +++ b/fs/nfsd/nfs4xdr.c @@ -2476,10 +2476,12 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp) for (i = 0; i < argp->opcnt; i++) { op = &argp->ops[i]; op->replay = NULL; + op->opdesc = NULL;
if (xdr_stream_decode_u32(argp->xdr, &op->opnum) < 0) return false; if (nfsd4_opnum_in_range(argp, op)) { + op->opdesc = OPDESC(op); op->status = nfsd4_dec_ops[op->opnum](argp, &op->u); if (op->status != nfs_ok) trace_nfsd_compound_decode_err(argp->rqstp, @@ -2490,7 +2492,7 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp) op->opnum = OP_ILLEGAL; op->status = nfserr_op_illegal; } - op->opdesc = OPDESC(op); + /* * We'll try to cache the result in the DRC if any one * op in the compound wants to be cached:
From: Jeff Layton jlayton@kernel.org
[ Upstream commit 15a8b55dbb1ba154d82627547c5761cac884d810 ]
For ops with "trivial" replies, nfsd4_encode_operation will shortcut most of the encoding work and skip to just marshalling up the status. One of the things it skips is calling op_release. This could cause a memory leak in the layoutget codepath if there is an error at an inopportune time.
Have the compound processing engine always call op_release, even when op_func sets an error in op->status. With this change, we also need nfsd4_block_get_device_info_scsi to set the gd_device pointer to NULL on error to avoid a double free.
Reported-by: Zhi Li yieli@redhat.com
Link: https://bugzilla.redhat.com/show_bug.cgi?id=2181403
Fixes: 34b1744c91cc ("nfsd4: define ->op_release for compound ops")
Signed-off-by: Jeff Layton jlayton@kernel.org
Signed-off-by: Chuck Lever chuck.lever@oracle.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/nfsd/blocklayout.c |  1 +
 fs/nfsd/nfs4xdr.c     | 11 +++++------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/nfsd/blocklayout.c b/fs/nfsd/blocklayout.c
index 04697f8dc37d6..01d7fd108cf3d 100644
--- a/fs/nfsd/blocklayout.c
+++ b/fs/nfsd/blocklayout.c
@@ -297,6 +297,7 @@ nfsd4_block_get_device_info_scsi(struct super_block *sb,

 out_free_dev:
 	kfree(dev);
+	gdp->gd_device = NULL;
 	return ret;
 }

diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 67bbd2d6334c4..7799835c2196e 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -5400,10 +5400,8 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
 	__be32 *p;

 	p = xdr_reserve_space(xdr, 8);
-	if (!p) {
-		WARN_ON_ONCE(1);
-		return;
-	}
+	if (!p)
+		goto release;
 	*p++ = cpu_to_be32(op->opnum);
 	post_err_offset = xdr->buf->len;

@@ -5418,8 +5416,6 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
 	op->status = encoder(resp, op->status, &op->u);
 	if (op->status)
 		trace_nfsd_compound_encode_err(rqstp, op->opnum, op->status);
-	if (opdesc && opdesc->op_release)
-		opdesc->op_release(&op->u);
 	xdr_commit_encode(xdr);

 	/* nfsd4_check_resp_size guarantees enough room for error status */
@@ -5460,6 +5456,9 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
 	}
 status:
 	*p = op->status;
+release:
+	if (opdesc && opdesc->op_release)
+		opdesc->op_release(&op->u);
 }
/*
From: Eric Dumazet edumazet@google.com
[ Upstream commit 7d63b67125382ff0ffdfca434acbc94a38bd092b ]
syzbot was able to trigger a panic [1] in icmp_glue_bits(), or more exactly in skb_copy_and_csum_bits().

There is no repro yet, but I think the issue is that syzbot manages to lower the device mtu to a small value, fooling __icmp_send().
__icmp_send() must make sure there is enough room for the packet to include at least the headers.
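As a worked example with illustrative numbers (the mtu and option length here are invented, not taken from the report):

	room = 68;				/* tiny device mtu */
	room -= sizeof(struct iphdr) + 40;	/* 68 - 60 = 8, with 40 bytes of IP options */
	room -= sizeof(struct icmphdr);		/* 8 - 8 = 0 */

With the guard added below, room (0) <= sizeof(struct iphdr) (20), so __icmp_send() bails out instead of asking icmp_glue_bits() to copy payload into space that does not exist.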
We might in the future refactor skb_copy_and_csum_bits() and its callers to no longer crash when something bad happens.
[1]
kernel BUG at net/core/skbuff.c:3343 !
invalid opcode: 0000 [#1] PREEMPT SMP KASAN
CPU: 0 PID: 15766 Comm: syz-executor.0 Not tainted 6.3.0-rc4-syzkaller-00039-gffe78bbd5121 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
RIP: 0010:skb_copy_and_csum_bits+0x798/0x860 net/core/skbuff.c:3343
Code: f0 c1 c8 08 41 89 c6 e9 73 ff ff ff e8 61 48 d4 f9 e9 41 fd ff ff 48 8b 7c 24 48 e8 52 48 d4 f9 e9 c3 fc ff ff e8 c8 27 84 f9 <0f> 0b 48 89 44 24 28 e8 3c 48 d4 f9 48 8b 44 24 28 e9 9d fb ff ff
RSP: 0018:ffffc90000007620 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 00000000000001e8 RCX: 0000000000000100
RDX: ffff8880276f6280 RSI: ffffffff87fdd138 RDI: 0000000000000005
RBP: 0000000000000000 R08: 0000000000000005 R09: 0000000000000000
R10: 00000000000001e8 R11: 0000000000000001 R12: 000000000000003c
R13: 0000000000000000 R14: ffff888028244868 R15: 0000000000000b0e
FS:  00007fbc81f1c700(0000) GS:ffff88802ca00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2df43000 CR3: 00000000744db000 CR4: 0000000000150ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <IRQ>
 icmp_glue_bits+0x7b/0x210 net/ipv4/icmp.c:353
 __ip_append_data+0x1d1b/0x39f0 net/ipv4/ip_output.c:1161
 ip_append_data net/ipv4/ip_output.c:1343 [inline]
 ip_append_data+0x115/0x1a0 net/ipv4/ip_output.c:1322
 icmp_push_reply+0xa8/0x440 net/ipv4/icmp.c:370
 __icmp_send+0xb80/0x1430 net/ipv4/icmp.c:765
 ipv4_send_dest_unreach net/ipv4/route.c:1239 [inline]
 ipv4_link_failure+0x5a9/0x9e0 net/ipv4/route.c:1246
 dst_link_failure include/net/dst.h:423 [inline]
 arp_error_report+0xcb/0x1c0 net/ipv4/arp.c:296
 neigh_invalidate+0x20d/0x560 net/core/neighbour.c:1079
 neigh_timer_handler+0xc77/0xff0 net/core/neighbour.c:1166
 call_timer_fn+0x1a0/0x580 kernel/time/timer.c:1700
 expire_timers+0x29b/0x4b0 kernel/time/timer.c:1751
 __run_timers kernel/time/timer.c:2022 [inline]
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Reported-by: syzbot+d373d60fddbdc915e666@syzkaller.appspotmail.com
Signed-off-by: Eric Dumazet edumazet@google.com
Link: https://lore.kernel.org/r/20230330174502.1915328-1-edumazet@google.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/ipv4/icmp.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
index 46aa2d65e40ab..635ed4f057495 100644
--- a/net/ipv4/icmp.c
+++ b/net/ipv4/icmp.c
@@ -746,6 +746,11 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
 		room = 576;
 	room -= sizeof(struct iphdr) + icmp_param.replyopts.opt.opt.optlen;
 	room -= sizeof(struct icmphdr);
+	/* Guard against tiny mtu. We need to include at least one
+	 * IP network header for this message to make any sense.
+	 */
+	if (room <= (int)sizeof(struct iphdr))
+		goto ende;

 	icmp_param.data_len = skb_in->len - icmp_param.offset;
 	if (icmp_param.data_len > room)
From: Takashi Iwai tiwai@suse.de
[ Upstream commit f785f5ee968f7045268b8be6b0abc850c4a4277c ]
When a DRM driver turns on or off the screen with the audio capability, it notifies the ELD to HD-audio HDMI codec driver via component ops. HDMI codec driver, in turn, attaches or detaches the PCM stream for the given port on the fly.
The problem is that, since the recent code change, the HDMI driver always assigns the PCM stream dynamically, and this ended up confusing the PCM device appearance: e.g. when a screen goes off and on again, the stream may appear on a different PCM device than before the screen-off. Although applications should cope with such a change, it doesn't seem to work gracefully with the current pipewire (and maybe PulseAudio, too).

As a workaround, this patch changes the HDMI codec driver behavior slightly to be more consistent. Now it remembers the PCM slot previously assigned to the given port and tries to assign it again: that is, if a port is re-enabled, the driver tries to use the same PCM slot that was assigned to that port before. Only if that slot conflicts with another assignment is a new slot searched and used, as before.
Note that multiple monitor connections are the only typical case where the PCM slot preservation is effective. As long as only a single monitor is connected, the behavior isn't changed, and the first PCM slot is still assigned always.
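An illustrative two-monitor timeline of the new behavior (port names and slot numbers are hypothetical):

	port A enabled    -> gets pcm_idx 0
	port B enabled    -> gets pcm_idx 1
	port B disabled   -> slot freed, prev_pcm_idx = 1 remembered
	port B re-enabled -> slot 1 still free, so port B reattaches to
	                     pcm_idx 1 (previously any free slot could win)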
Fixes: ef6f5494faf6 ("ALSA: hda/hdmi: Use only dynamic PCM device allocation")
Reviewed-by: Jaroslav Kysela perex@perex.cz
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217259
Link: https://lore.kernel.org/r/20230331142217.19791-1-tiwai@suse.de
Signed-off-by: Takashi Iwai tiwai@suse.de
Signed-off-by: Sasha Levin sashal@kernel.org
---
 sound/pci/hda/patch_hdmi.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
index 9ea633fe93393..4ffa3a59f419f 100644
--- a/sound/pci/hda/patch_hdmi.c
+++ b/sound/pci/hda/patch_hdmi.c
@@ -81,6 +81,7 @@ struct hdmi_spec_per_pin {
 	struct delayed_work work;
 	struct hdmi_pcm *pcm; /* pointer to spec->pcm_rec[n] dynamically*/
 	int pcm_idx; /* which pcm is attached. -1 means no pcm is attached */
+	int prev_pcm_idx; /* previously assigned pcm index */
 	int repoll_count;
 	bool setup; /* the stream has been set up by prepare callback */
 	bool silent_stream;
@@ -1380,9 +1381,17 @@ static void hdmi_attach_hda_pcm(struct hdmi_spec *spec,
 	/* pcm already be attached to the pin */
 	if (per_pin->pcm)
 		return;
+	/* try the previously used slot at first */
+	idx = per_pin->prev_pcm_idx;
+	if (idx >= 0) {
+		if (!test_bit(idx, &spec->pcm_bitmap))
+			goto found;
+		per_pin->prev_pcm_idx = -1; /* no longer valid, clear it */
+	}
 	idx = hdmi_find_pcm_slot(spec, per_pin);
 	if (idx == -EBUSY)
 		return;
+ found:
 	per_pin->pcm_idx = idx;
 	per_pin->pcm = get_hdmi_pcm(spec, idx);
 	set_bit(idx, &spec->pcm_bitmap);
@@ -1398,6 +1407,7 @@ static void hdmi_detach_hda_pcm(struct hdmi_spec *spec,
 		return;
 	idx = per_pin->pcm_idx;
 	per_pin->pcm_idx = -1;
+	per_pin->prev_pcm_idx = idx; /* remember the previous index */
 	per_pin->pcm = NULL;
 	if (idx >= 0 && idx < spec->pcm_used)
 		clear_bit(idx, &spec->pcm_bitmap);
@@ -1924,6 +1934,7 @@ static int hdmi_add_pin(struct hda_codec *codec, hda_nid_t pin_nid)

 	per_pin->pcm = NULL;
 	per_pin->pcm_idx = -1;
+	per_pin->prev_pcm_idx = -1;
 	per_pin->pin_nid = pin_nid;
 	per_pin->pin_nid_idx = spec->num_nids;
 	per_pin->dev_id = i;
From: Jakub Kicinski kuba@kernel.org
[ Upstream commit 275b471e3d2daf1472ae8fa70dc1b50c9e0b9e75 ]
Commit 0db3dc73f7a3 ("[NETPOLL]: tx lock deadlock fix") narrowed down the region under netif_tx_trylock() inside netpoll_send_skb(). (At that point in time netif_tx_trylock() would lock all queues of the device.) Taking the tx lock was problematic because driver's cleanup method may take the same lock. So the change made us hold the xmit lock only around xmit, and expected the driver to take care of locking within ->ndo_poll_controller().
Unfortunately this only works if netpoll isn't itself called with the xmit lock already held. Netpoll code is careful and uses trylock(). The drivers, however, may be using plain lock(). Printing while holding the xmit lock is going to result in rare deadlocks.
Luckily we record the xmit lock owners, so we can scan all the queues, the same way we scan NAPI owners. If any of the xmit locks is held by the local CPU we better not attempt any polling.
It would be nice if we could narrow down the check to only the NAPIs and the queue we're trying to use. I don't see a way to do that now.
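For context, a rough sketch of how the owner field gets recorded (simplified from include/linux/netdevice.h, not verbatim): the xmit path stores the locking CPU when it takes the queue lock, which is what makes the same-CPU check below reliable.

	static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
	{
		spin_lock(&txq->_xmit_lock);
		/* paired with the READ_ONCE() in the new helper below */
		WRITE_ONCE(txq->xmit_lock_owner, cpu);
	}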
Reported-by: Roman Gushchin roman.gushchin@linux.dev
Fixes: 0db3dc73f7a3 ("[NETPOLL]: tx lock deadlock fix")
Signed-off-by: Jakub Kicinski kuba@kernel.org
Reviewed-by: Eric Dumazet edumazet@google.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/core/netpoll.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index 9be762e1d0428..4ac8d0ad9f6fc 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -137,6 +137,20 @@ static void queue_process(struct work_struct *work)
 	}
 }

+static int netif_local_xmit_active(struct net_device *dev)
+{
+	int i;
+
+	for (i = 0; i < dev->num_tx_queues; i++) {
+		struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
+
+		if (READ_ONCE(txq->xmit_lock_owner) == smp_processor_id())
+			return 1;
+	}
+
+	return 0;
+}
+
 static void poll_one_napi(struct napi_struct *napi)
 {
 	int work;
@@ -183,7 +197,10 @@ void netpoll_poll_dev(struct net_device *dev)
 	if (!ni || down_trylock(&ni->dev_lock))
 		return;

-	if (!netif_running(dev)) {
+	/* Some drivers will take the same locks in poll and xmit,
+	 * we can't poll if local CPU is already in xmit.
+	 */
+	if (!netif_running(dev) || netif_local_xmit_active(dev)) {
 		up(&ni->dev_lock);
 		return;
 	}
From: Gustav Ekelund gustaek@axis.com
[ Upstream commit 089b91a0155c4de1209a07ff2a7dd299ff3ece47 ]
The force watchdog event bit is not cleared during SW reset in the mv88e6393x switch. This is a different behavior compared to the mv88e6390, which clears the force WD event bit as advertised. This causes a force WD event to be handled over and over again, as the SW reset following the event never clears the force WD event bit.
Explicitly clear the watchdog event register to 0 in irq_action when handling an event to prevent the switch from sending continuous interrupts. Marvell aren't aware of any other stuck bits apart from the force WD bit.
Fixes: de776d0d316f ("net: dsa: mv88e6xxx: add support for mv88e6393x family")
Signed-off-by: Gustav Ekelund gustaek@axis.com
Reviewed-by: Andrew Lunn andrew@lunn.ch
Reviewed-by: Florian Fainelli f.fainelli@gmail.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/dsa/mv88e6xxx/chip.c    |  2 +-
 drivers/net/dsa/mv88e6xxx/global2.c | 20 ++++++++++++++++++++
 drivers/net/dsa/mv88e6xxx/global2.h |  1 +
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
index 8211a4d373e81..e57d86484a3a4 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -5518,7 +5518,7 @@ static const struct mv88e6xxx_ops mv88e6393x_ops = {
 	 * .port_set_upstream_port method.
 	 */
 	.set_egress_port = mv88e6393x_set_egress_port,
-	.watchdog_ops = &mv88e6390_watchdog_ops,
+	.watchdog_ops = &mv88e6393x_watchdog_ops,
 	.mgmt_rsvd2cpu = mv88e6393x_port_mgmt_rsvd2cpu,
 	.pot_clear = mv88e6xxx_g2_pot_clear,
 	.reset = mv88e6352_g1_reset,
diff --git a/drivers/net/dsa/mv88e6xxx/global2.c b/drivers/net/dsa/mv88e6xxx/global2.c
index fa65ecd9cb853..ec49939968fac 100644
--- a/drivers/net/dsa/mv88e6xxx/global2.c
+++ b/drivers/net/dsa/mv88e6xxx/global2.c
@@ -931,6 +931,26 @@ const struct mv88e6xxx_irq_ops mv88e6390_watchdog_ops = {
 	.irq_free = mv88e6390_watchdog_free,
 };

+static int mv88e6393x_watchdog_action(struct mv88e6xxx_chip *chip, int irq)
+{
+	mv88e6390_watchdog_action(chip, irq);
+
+	/* Fix for clearing the force WD event bit.
+	 * Unreleased erratum on mv88e6393x.
+	 */
+	mv88e6xxx_g2_write(chip, MV88E6390_G2_WDOG_CTL,
+			   MV88E6390_G2_WDOG_CTL_UPDATE |
+			   MV88E6390_G2_WDOG_CTL_PTR_EVENT);
+
+	return IRQ_HANDLED;
+}
+
+const struct mv88e6xxx_irq_ops mv88e6393x_watchdog_ops = {
+	.irq_action = mv88e6393x_watchdog_action,
+	.irq_setup = mv88e6390_watchdog_setup,
+	.irq_free = mv88e6390_watchdog_free,
+};
+
 static irqreturn_t mv88e6xxx_g2_watchdog_thread_fn(int irq, void *dev_id)
 {
 	struct mv88e6xxx_chip *chip = dev_id;
diff --git a/drivers/net/dsa/mv88e6xxx/global2.h b/drivers/net/dsa/mv88e6xxx/global2.h
index 7536b8b0ad011..c05fad5c9f19d 100644
--- a/drivers/net/dsa/mv88e6xxx/global2.h
+++ b/drivers/net/dsa/mv88e6xxx/global2.h
@@ -363,6 +363,7 @@ int mv88e6xxx_g2_device_mapping_write(struct mv88e6xxx_chip *chip, int target,
 extern const struct mv88e6xxx_irq_ops mv88e6097_watchdog_ops;
 extern const struct mv88e6xxx_irq_ops mv88e6250_watchdog_ops;
 extern const struct mv88e6xxx_irq_ops mv88e6390_watchdog_ops;
+extern const struct mv88e6xxx_irq_ops mv88e6393x_watchdog_ops;

 extern const struct mv88e6xxx_avb_ops mv88e6165_avb_ops;
 extern const struct mv88e6xxx_avb_ops mv88e6352_avb_ops;
From: Felix Fietkau nbd@nbd.name
[ Upstream commit e669ce46740a9815953bb4452a6bc5a7fdc21a50 ]
Based on further tests, it seems that the QDMA shaper is not able to perform shaping close to the MAC link rate without throughput loss. This cannot be compensated by increasing the shaping rate, so it seems to be an internal limit.
Fix the remaining throughput regression by detecting that condition and limiting shaping to ports with lower link speed.
This patch intentionally ignores link speed gain from TRGMII, because even on such links, shaping to 1000 Mbit/s incurs some throughput degradation.
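Concretely, with the check added below and illustrative link speeds:

	/* mac->speed = 1000 (MAC link rate)
	 *   port at 1000 -> 1000 <= 1000 -> s.base.speed = 0 (left unshaped)
	 *   port at  100 -> 1000 >  100  -> queue shaped to 100 Mbit/s
	 */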
Fixes: f63959c7eec3 ("net: ethernet: mtk_eth_soc: implement multi-queue support for per-port queues")
Tested-By: Frank Wunderlich frank-w@public-files.de
Reported-by: Frank Wunderlich frank-w@public-files.de
Signed-off-by: Felix Fietkau nbd@nbd.name
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index bd7c18c839d42..f56d4e7d4ae5d 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -703,6 +703,7 @@ static void mtk_mac_link_up(struct phylink_config *config,
 					 MAC_MCR_FORCE_RX_FC);

 	/* Configure speed */
+	mac->speed = speed;
 	switch (speed) {
 	case SPEED_2500:
 	case SPEED_1000:
@@ -3169,6 +3170,9 @@ static int mtk_device_event(struct notifier_block *n, unsigned long event, void
 	if (dp->index >= MTK_QDMA_NUM_QUEUES)
 		return NOTIFY_DONE;

+	if (mac->speed > 0 && mac->speed <= s.base.speed)
+		s.base.speed = 0;
+
 	mtk_set_queue_speed(eth, dp->index + 3, s.base.speed);

 	return NOTIFY_DONE;
From: Xin Long lucien.xin@gmail.com
[ Upstream commit 2584024b23552c00d95b50255e47bd18d306d31a ]
This patch fixes a corner case where the asoc out stream count may change after wait_for_sndbuf.
When the main thread in the client starts a connection, if its out stream count is set to N while the in stream count in the server is set to N - 2, another thread in the client keeps sending the msgs with stream number N - 1, and waits for sndbuf before processing INIT_ACK.
However, after processing INIT_ACK, the out stream count in the client is shrunk to N - 2, the same to the in stream count in the server. The crash occurs when the thread waiting for sndbuf is awake and sends the msg in a non-existing stream(N - 1), the call trace is as below:
KASAN: null-ptr-deref in range [0x0000000000000038-0x000000000000003f]
Call Trace:
 <TASK>
 sctp_cmd_send_msg net/sctp/sm_sideeffect.c:1114 [inline]
 sctp_cmd_interpreter net/sctp/sm_sideeffect.c:1777 [inline]
 sctp_side_effects net/sctp/sm_sideeffect.c:1199 [inline]
 sctp_do_sm+0x197d/0x5310 net/sctp/sm_sideeffect.c:1170
 sctp_primitive_SEND+0x9f/0xc0 net/sctp/primitive.c:163
 sctp_sendmsg_to_asoc+0x10eb/0x1a30 net/sctp/socket.c:1868
 sctp_sendmsg+0x8d4/0x1d90 net/sctp/socket.c:2026
 inet_sendmsg+0x9d/0xe0 net/ipv4/af_inet.c:825
 sock_sendmsg_nosec net/socket.c:722 [inline]
 sock_sendmsg+0xde/0x190 net/socket.c:745
The fix is to add an unlikely check for the send stream number after the thread wakes up from the wait_for_sndbuf.
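In other words, a sketch of the rule being applied (the actual hunk is below): state validated before a sleeping wait must be re-validated afterwards.

	err = sctp_wait_for_sndbuf(asoc, &timeo, msg_len);
	/* the wait may sleep, and INIT_ACK processing can shrink
	 * asoc->stream.outcnt meanwhile, so re-check the stream number
	 */
	if (unlikely(sinfo->sinfo_stream >= asoc->stream.outcnt))
		err = -EINVAL;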
Fixes: 5bbbbe32a431 ("sctp: introduce stream scheduler foundations")
Reported-by: syzbot+47c24ca20a2fa01f082e@syzkaller.appspotmail.com
Signed-off-by: Xin Long lucien.xin@gmail.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/sctp/socket.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 84021a6c4f9da..ec7d1a89efbbd 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -1829,6 +1829,10 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
 		err = sctp_wait_for_sndbuf(asoc, &timeo, msg_len);
 		if (err)
 			goto err;
+		if (unlikely(sinfo->sinfo_stream >= asoc->stream.outcnt)) {
+			err = -EINVAL;
+			goto err;
+		}
 	}

 	if (sctp_state(asoc, CLOSED)) {
From: Daniele Ceraolo Spurio daniele.ceraolospurio@intel.com
[ Upstream commit c74237496fbc799257b091179dd01a3200f7314d ]
In the rare case where we do a full GT reset after starting the HuC load and before it completes (which basically boils down to i915 hanging during init), we need to cancel the delayed load fence, as it will be re-initialized in the post-reset recovery.
Fixes: 27536e03271d ("drm/i915/huc: track delayed HuC load with a fence")
Signed-off-by: Daniele Ceraolo Spurio daniele.ceraolospurio@intel.com
Cc: Alan Previn alan.previn.teres.alexis@intel.com
Reviewed-by: Alan Previn alan.previn.teres.alexis@intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/20230313205556.1174503-1-danie...
(cherry picked from commit cdf7911f7dbcb37228409a63bf75630776c45a15)
Signed-off-by: Jani Nikula jani.nikula@intel.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpu/drm/i915/gt/uc/intel_huc.c | 7 +++++++
 drivers/gpu/drm/i915/gt/uc/intel_huc.h | 7 +------
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
index 410905da8e974..0c103ca160d10 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
@@ -235,6 +235,13 @@ static void delayed_huc_load_fini(struct intel_huc *huc)
 	i915_sw_fence_fini(&huc->delayed_load.fence);
 }

+int intel_huc_sanitize(struct intel_huc *huc)
+{
+	delayed_huc_load_complete(huc);
+	intel_uc_fw_sanitize(&huc->fw);
+	return 0;
+}
+
 static bool vcs_supported(struct intel_gt *gt)
 {
 	intel_engine_mask_t mask = gt->info.engine_mask;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.h b/drivers/gpu/drm/i915/gt/uc/intel_huc.h
index 52db03620c609..db555b3c1f562 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_huc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.h
@@ -41,6 +41,7 @@ struct intel_huc {
 	} delayed_load;
 };

+int intel_huc_sanitize(struct intel_huc *huc);
 void intel_huc_init_early(struct intel_huc *huc);
 int intel_huc_init(struct intel_huc *huc);
 void intel_huc_fini(struct intel_huc *huc);
@@ -54,12 +55,6 @@ bool intel_huc_is_authenticated(struct intel_huc *huc);
 void intel_huc_register_gsc_notifier(struct intel_huc *huc, struct bus_type *bus);
 void intel_huc_unregister_gsc_notifier(struct intel_huc *huc, struct bus_type *bus);

-static inline int intel_huc_sanitize(struct intel_huc *huc)
-{
-	intel_uc_fw_sanitize(&huc->fw);
-	return 0;
-}
-
 static inline bool intel_huc_is_supported(struct intel_huc *huc)
 {
 	return intel_uc_fw_is_supported(&huc->fw);
From: Sricharan Ramabadhran quic_srichara@quicinc.com
[ Upstream commit 839349d13905927d8a567ca4d21d88c82028e31d ]
On the remote side, when QRTR socket is removed, af_qrtr will call qrtr_port_remove() which broadcasts the DEL_CLIENT packet to all neighbours including local NS. NS upon receiving the DEL_CLIENT packet, will remove the lookups associated with the node:port and broadcasts the DEL_SERVER packet.
But on the host side, due to the arrival of the DEL_CLIENT packet, the NS would've already deleted the server belonging to that port. So when the remote's NS again broadcasts the DEL_SERVER for that port, it throws below error message on the host:
"failed while handling packet from 2:-2"
So fix this error by not broadcasting the DEL_SERVER packet when the DEL_CLIENT packet gets processed.
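The message flow behind the error, sketched:

	remote: app closes QRTR socket
	remote: af_qrtr broadcasts DEL_CLIENT(node:port)
	host:   NS handles DEL_CLIENT, deletes the server on node:port
	remote: NS handles DEL_CLIENT, removes lookups and (before this
	        fix) broadcasts DEL_SERVER(node:port)
	host:   server already gone -> "failed while handling packet
	        from 2:-2"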
Fixes: 0c2204a4ad71 ("net: qrtr: Migrate nameservice to kernel from userspace")
Reviewed-by: Manivannan Sadhasivam mani@kernel.org
Signed-off-by: Ram Kumar Dharuman quic_ramd@quicinc.com
Signed-off-by: Sricharan Ramabadhran quic_srichara@quicinc.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/qrtr/ns.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
index e595079c2cafe..3e40a1ba48f79 100644
--- a/net/qrtr/ns.c
+++ b/net/qrtr/ns.c
@@ -273,7 +273,7 @@ static struct qrtr_server *server_add(unsigned int service,
 	return NULL;
 }

-static int server_del(struct qrtr_node *node, unsigned int port)
+static int server_del(struct qrtr_node *node, unsigned int port, bool bcast)
 {
 	struct qrtr_lookup *lookup;
 	struct qrtr_server *srv;
@@ -286,7 +286,7 @@ static int server_del(struct qrtr_node *node, unsigned int port)
 	radix_tree_delete(&node->servers, port);

 	/* Broadcast the removal of local servers */
-	if (srv->node == qrtr_ns.local_node)
+	if (srv->node == qrtr_ns.local_node && bcast)
 		service_announce_del(&qrtr_ns.bcast_sq, srv);

 	/* Announce the service's disappearance to observers */
@@ -372,7 +372,7 @@ static int ctrl_cmd_bye(struct sockaddr_qrtr *from)
 		}
 		slot = radix_tree_iter_resume(slot, &iter);
 		rcu_read_unlock();
-		server_del(node, srv->port);
+		server_del(node, srv->port, true);
 		rcu_read_lock();
 	}
 	rcu_read_unlock();
@@ -458,10 +458,13 @@ static int ctrl_cmd_del_client(struct sockaddr_qrtr *from,
 		kfree(lookup);
 	}

-	/* Remove the server belonging to this port */
+	/* Remove the server belonging to this port but don't broadcast
+	 * DEL_SERVER. Neighbours would've already removed the server belonging
+	 * to this port due to the DEL_CLIENT broadcast from qrtr_port_remove().
+	 */
 	node = node_get(node_id);
 	if (node)
-		server_del(node, port);
+		server_del(node, port, false);

 	/* Advertise the removal of this client to all local servers */
 	local_node = node_get(qrtr_ns.local_node);
@@ -566,7 +569,7 @@ static int ctrl_cmd_del_server(struct sockaddr_qrtr *from,
 	if (!node)
 		return -ENOENT;

-	return server_del(node, port);
+	return server_del(node, port, true);
 }

 static int ctrl_cmd_new_lookup(struct sockaddr_qrtr *from,
From: Ziyang Xuan william.xuanziyang@huawei.com
[ Upstream commit ea30388baebcce37fd594d425a65037ca35e59e8 ]
Syzbot reported a bug as following:
=====================================================
BUG: KMSAN: uninit-value in arch_atomic64_inc arch/x86/include/asm/atomic64_64.h:88 [inline]
BUG: KMSAN: uninit-value in arch_atomic_long_inc include/linux/atomic/atomic-long.h:161 [inline]
BUG: KMSAN: uninit-value in atomic_long_inc include/linux/atomic/atomic-instrumented.h:1429 [inline]
BUG: KMSAN: uninit-value in __ip6_make_skb+0x2f37/0x30f0 net/ipv6/ip6_output.c:1956
 arch_atomic64_inc arch/x86/include/asm/atomic64_64.h:88 [inline]
 arch_atomic_long_inc include/linux/atomic/atomic-long.h:161 [inline]
 atomic_long_inc include/linux/atomic/atomic-instrumented.h:1429 [inline]
 __ip6_make_skb+0x2f37/0x30f0 net/ipv6/ip6_output.c:1956
 ip6_finish_skb include/net/ipv6.h:1122 [inline]
 ip6_push_pending_frames+0x10e/0x550 net/ipv6/ip6_output.c:1987
 rawv6_push_pending_frames+0xb12/0xb90 net/ipv6/raw.c:579
 rawv6_sendmsg+0x297e/0x2e60 net/ipv6/raw.c:922
 inet_sendmsg+0x101/0x180 net/ipv4/af_inet.c:827
 sock_sendmsg_nosec net/socket.c:714 [inline]
 sock_sendmsg net/socket.c:734 [inline]
 ____sys_sendmsg+0xa8e/0xe70 net/socket.c:2476
 ___sys_sendmsg+0x2a1/0x3f0 net/socket.c:2530
 __sys_sendmsg net/socket.c:2559 [inline]
 __do_sys_sendmsg net/socket.c:2568 [inline]
 __se_sys_sendmsg net/socket.c:2566 [inline]
 __x64_sys_sendmsg+0x367/0x540 net/socket.c:2566
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Uninit was created at:
 slab_post_alloc_hook mm/slab.h:766 [inline]
 slab_alloc_node mm/slub.c:3452 [inline]
 __kmem_cache_alloc_node+0x71f/0xce0 mm/slub.c:3491
 __do_kmalloc_node mm/slab_common.c:967 [inline]
 __kmalloc_node_track_caller+0x114/0x3b0 mm/slab_common.c:988
 kmalloc_reserve net/core/skbuff.c:492 [inline]
 __alloc_skb+0x3af/0x8f0 net/core/skbuff.c:565
 alloc_skb include/linux/skbuff.h:1270 [inline]
 __ip6_append_data+0x51c1/0x6bb0 net/ipv6/ip6_output.c:1684
 ip6_append_data+0x411/0x580 net/ipv6/ip6_output.c:1854
 rawv6_sendmsg+0x2882/0x2e60 net/ipv6/raw.c:915
 inet_sendmsg+0x101/0x180 net/ipv4/af_inet.c:827
 sock_sendmsg_nosec net/socket.c:714 [inline]
 sock_sendmsg net/socket.c:734 [inline]
 ____sys_sendmsg+0xa8e/0xe70 net/socket.c:2476
 ___sys_sendmsg+0x2a1/0x3f0 net/socket.c:2530
 __sys_sendmsg net/socket.c:2559 [inline]
 __do_sys_sendmsg net/socket.c:2568 [inline]
 __se_sys_sendmsg net/socket.c:2566 [inline]
 __x64_sys_sendmsg+0x367/0x540 net/socket.c:2566
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
This is because the icmp6hdr is not in the skb linear region in the SOCK_RAW socket scenario, so accessing icmp6_hdr(skb)->icmp6_type directly triggers the uninit-value access.
Use a local variable icmp6_type to carry the correct value in different scenarios.
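A sketch of why the two sources differ (rawv6_sendmsg() pre-parses the type byte from the user buffer into the flow info):

	if (sk->sk_socket->type == SOCK_RAW && !inet_sk(sk)->hdrincl)
		/* header is user payload, possibly in page frags only */
		icmp6_type = fl6->fl6_icmp_type;
	else
		/* header was built by the kernel in the linear area */
		icmp6_type = icmp6_hdr(skb)->icmp6_type;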
Fixes: 14878f75abd5 ("[IPV6]: Add ICMPMsgStats MIB (RFC 4293) [rev 2]")
Reported-by: syzbot+8257f4dcef79de670baf@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?id=3d605ec1d0a7f2a269a1a6936ac7f2b85975ee9...
Signed-off-by: Ziyang Xuan william.xuanziyang@huawei.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/ipv6/ip6_output.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
index c314fdde0097c..95a55c6630add 100644
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -1965,8 +1965,13 @@ struct sk_buff *__ip6_make_skb(struct sock *sk,
 	IP6_UPD_PO_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUT, skb->len);
 	if (proto == IPPROTO_ICMPV6) {
 		struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb));
+		u8 icmp6_type;

-		ICMP6MSGOUT_INC_STATS(net, idev, icmp6_hdr(skb)->icmp6_type);
+		if (sk->sk_socket->type == SOCK_RAW && !inet_sk(sk)->hdrincl)
+			icmp6_type = fl6->fl6_icmp_type;
+		else
+			icmp6_type = icmp6_hdr(skb)->icmp6_type;
+		ICMP6MSGOUT_INC_STATS(net, idev, icmp6_type);
 		ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTMSGS);
 	}
From: Armin Wolf W_Armin@gmx.de
[ Upstream commit a3c4c053014585dcf20f4df954791b74d8a8afcd ]
When retrieving an item string with tlmi_setting(), the result has to be freed using kfree(). In current_value_show(), however, malformed item strings are not freed, causing a memory leak. Fix this by eliminating the early return responsible for the leak.
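The shape of the fix as a sketch, where malformed() stands in for the real validity checks (a hypothetical helper, for illustration only):

	ret = tlmi_setting(setting->index, &item, LENOVO_BIOS_SETTING_GUID);
	if (ret)
		return ret;	/* nothing allocated yet, early return is fine */

	if (malformed(item))	/* hypothetical stand-in for the checks */
		ret = -EINVAL;	/* no early return: item is owned now */
	else
		ret = sysfs_emit(buf, "%s\n", value + 1);

	kfree(item);		/* single exit frees item on every path */
	return ret;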
Reported-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr
Link: https://lore.kernel.org/platform-driver-x86/01e920bc-5882-ba0c-dd15-868bf0ec...
Tested-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr
Fixes: 0fdf10e5fc96 ("platform/x86: think-lmi: Split current_value to reflect only the value")
Signed-off-by: Armin Wolf W_Armin@gmx.de
Link: https://lore.kernel.org/r/20230331213319.41040-1-W_Armin@gmx.de
Tested-by: Mario Limonciello mario.limonciello@amd.com
Reviewed-by: Hans de Goede hdegoede@redhat.com
Signed-off-by: Hans de Goede hdegoede@redhat.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/platform/x86/think-lmi.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c
index 74af3e593b2ca..4e738ec5e6fb8 100644
--- a/drivers/platform/x86/think-lmi.c
+++ b/drivers/platform/x86/think-lmi.c
@@ -930,10 +930,12 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a
 	/* validate and split from `item,value` -> `value` */
 	value = strpbrk(item, ",");
 	if (!value || value == item || !strlen(value + 1))
-		return -EINVAL;
+		ret = -EINVAL;
+	else
+		ret = sysfs_emit(buf, "%s\n", value + 1);

-	ret = sysfs_emit(buf, "%s\n", value + 1);
 	kfree(item);
+
 	return ret;
 }
From: Mark Pearson mpearson-lenovo@squebb.ca
[ Upstream commit e7d796fccdc8d17c2d21817ebe4c7bf5bbfe5433 ]
My previous commit introduced a memory leak where the item allocated by tlmi_setting() was not freed. This commit fixes that and also renames the variable to avoid confusion with the similarly named variable in the same function.
Fixes: 8a02d70679fc ("platform/x86: think-lmi: Add possible_values for ThinkStation")
Reported-by: Mirsad Todorovac mirsad.todorovac@alu.unizg.hr
Link: https://lore.kernel.org/lkml/df26ff45-8933-f2b3-25f4-6ee51ccda7d8@gmx.de/T/
Signed-off-by: Mark Pearson mpearson-lenovo@squebb.ca
Link: https://lore.kernel.org/r/20230403013120.2105-1-mpearson-lenovo@squebb.ca
Tested-by: Mario Limonciello mario.limonciello@amd.com
Tested-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr
Reviewed-by: Hans de Goede hdegoede@redhat.com
Signed-off-by: Hans de Goede hdegoede@redhat.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/platform/x86/think-lmi.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c
index 4e738ec5e6fb8..70c4ee254c43a 100644
--- a/drivers/platform/x86/think-lmi.c
+++ b/drivers/platform/x86/think-lmi.c
@@ -1459,10 +1459,10 @@ static int tlmi_analyze(void)
 			 * name string.
 			 * Try and pull that out if it's available.
 			 */
-			char *item, *optstart, *optend;
+			char *optitem, *optstart, *optend;

-			if (!tlmi_setting(setting->index, &item, LENOVO_BIOS_SETTING_GUID)) {
-				optstart = strstr(item, "[Optional:");
+			if (!tlmi_setting(setting->index, &optitem, LENOVO_BIOS_SETTING_GUID)) {
+				optstart = strstr(optitem, "[Optional:");
 				if (optstart) {
 					optstart += strlen("[Optional:");
 					optend = strstr(optstart, "]");
@@ -1471,6 +1471,7 @@ static int tlmi_analyze(void)
 						kstrndup(optstart, optend - optstart,
 							 GFP_KERNEL);
 				}
+				kfree(optitem);
 			}
 		}
 		/*
From: Mark Pearson mpearson-lenovo@squebb.ca
[ Upstream commit 7065655216d4d034d71164641f3bec0b189ad6fa ]
On ThinkStations, when retrieving the attribute value, the BIOS appends the possible values to the string. Clean up the display in the current_value_show() function so that the options part is not displayed.
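For example, with a made-up setting string in the Workstation format (real BIOS strings may differ):

	/* item  = "WakeOnLAN,Enable;[Optional:Disable,Enable]"
	 *
	 * value = strpbrk(item, ",");	  -> points at ",Enable;[Optional:...]"
	 * p     = strchrnul(value, ';'); -> points at the ';'
	 * *p    = '\0';		  -> value + 1 is now just "Enable"
	 */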
Fixes: a40cd7ef22fb ("platform/x86: think-lmi: Add WMI interface support on Lenovo platforms")
Reported-by: Mario Limonciello Mario.Limonciello@amd.com
Link: https://github.com/fwupd/fwupd/issues/5077#issuecomment-1488730526
Signed-off-by: Mark Pearson mpearson-lenovo@squebb.ca
Link: https://lore.kernel.org/r/20230403013120.2105-2-mpearson-lenovo@squebb.ca
Tested-by: Mario Limonciello mario.limonciello@amd.com
Tested-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr
Reviewed-by: Hans de Goede hdegoede@redhat.com
Signed-off-by: Hans de Goede hdegoede@redhat.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/platform/x86/think-lmi.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c
index 70c4ee254c43a..336b9029d1515 100644
--- a/drivers/platform/x86/think-lmi.c
+++ b/drivers/platform/x86/think-lmi.c
@@ -920,7 +920,7 @@ static ssize_t display_name_show(struct kobject *kobj, struct kobj_attribute *at
 static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *attr,
 				  char *buf)
 {
 	struct tlmi_attr_setting *setting = to_tlmi_attr_setting(kobj);
-	char *item, *value;
+	char *item, *value, *p;
 	int ret;

 	ret = tlmi_setting(setting->index, &item, LENOVO_BIOS_SETTING_GUID);
@@ -931,9 +931,12 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a
 	value = strpbrk(item, ",");
 	if (!value || value == item || !strlen(value + 1))
 		ret = -EINVAL;
-	else
+	else {
+		/* On Workstations remove the Options part after the value */
+		p = strchrnul(value, ';');
+		*p = '\0';
 		ret = sysfs_emit(buf, "%s\n", value + 1);
-
+	}
 	kfree(item);

 	return ret;
From: Dhruva Gole d-gole@ti.com
[ Upstream commit fe092498cb9638418c96675be320c74a16306b48 ]
The interrupt enable bits might be set if we want to use the GPIO as a wakeup source. Clearing them would disable interrupts in the GPIO banks that we may want to wake up from. Thus, remove the line that was clearing this bit from the driver's save-context function.
Cc: Devarsh Thakkar devarsht@ti.com
Fixes: 0651a730924b ("gpio: davinci: Add support for system suspend/resume PM")
Signed-off-by: Dhruva Gole d-gole@ti.com
Reviewed-by: Linus Walleij linus.walleij@linaro.org
Acked-by: Keerthy j-keerthy@ti.com
Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpio/gpio-davinci.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpio/gpio-davinci.c b/drivers/gpio/gpio-davinci.c
index fa51a91afa54f..39b00855499b2 100644
--- a/drivers/gpio/gpio-davinci.c
+++ b/drivers/gpio/gpio-davinci.c
@@ -642,9 +642,6 @@ static void davinci_gpio_save_context(struct davinci_gpio_controller *chips,
 		context->set_falling = readl_relaxed(&g->set_falling);
 	}

-	/* Clear Bank interrupt enable bit */
-	writel_relaxed(0, base + BINTEN);
-
 	/* Clear all interrupt status registers */
 	writel_relaxed(GENMASK(31, 0), &g->intstat);
 }
From: Dhruva Gole d-gole@ti.com
[ Upstream commit 7b75c4703609a3ebaf67271813521bc0281e1ec1 ]
Add the IRQCHIP_SKIP_SET_WAKE flag since there are no special IRQ Wake bits that can be set to enable wakeup IRQ.
Fixes: 3d9edf09d452 ("[ARM] 4457/2: davinci: GPIO support")
Signed-off-by: Dhruva Gole d-gole@ti.com
Reviewed-by: Linus Walleij linus.walleij@linaro.org
Signed-off-by: Bartosz Golaszewski bartosz.golaszewski@linaro.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/gpio/gpio-davinci.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpio/gpio-davinci.c b/drivers/gpio/gpio-davinci.c
index 39b00855499b2..7a26919ff127b 100644
--- a/drivers/gpio/gpio-davinci.c
+++ b/drivers/gpio/gpio-davinci.c
@@ -325,7 +325,7 @@ static struct irq_chip gpio_irqchip = {
 	.irq_enable = gpio_irq_enable,
 	.irq_disable = gpio_irq_disable,
 	.irq_set_type = gpio_irq_type,
-	.flags = IRQCHIP_SET_TYPE_MASKED,
+	.flags = IRQCHIP_SET_TYPE_MASKED | IRQCHIP_SKIP_SET_WAKE,
 };

 static void gpio_irq_handler(struct irq_desc *desc)
From: Siddharth Vadapalli s-vadapalli@ti.com
[ Upstream commit c6b486fb33680ad5a3a6390ce693c835caaae3f7 ]
In the am65_cpsw_nuss_probe() function's cleanup path, the call to of_platform_device_destroy() for the common->mdio_dev device is invoked unconditionally. It is possible that the MDIO node is either absent from the device-tree or disabled in it. In both cases the MDIO device is not created, resulting in a NULL pointer dereference when of_platform_device_destroy() is invoked on common->mdio_dev in the cleanup path.
Fix this by ensuring that the common->mdio_dev device exists, before attempting to invoke of_platform_device_destroy().
Fixes: a45cfcc69a25 ("net: ethernet: ti: am65-cpsw-nuss: use of_platform_device_create() for mdio")
Signed-off-by: Siddharth Vadapalli s-vadapalli@ti.com
Reviewed-by: Roger Quadros rogerq@kernel.org
Link: https://lore.kernel.org/r/20230403090321.835877-1-s-vadapalli@ti.com
Signed-off-by: Paolo Abeni pabeni@redhat.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 3e17152798554..9286b2b3353e3 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -2854,7 +2854,8 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
 	am65_cpsw_nuss_phylink_cleanup(common);
 	am65_cpts_release(common->cpts);
 err_of_clear:
-	of_platform_device_destroy(common->mdio_dev, NULL);
+	if (common->mdio_dev)
+		of_platform_device_destroy(common->mdio_dev, NULL);
 err_pm_clear:
 	pm_runtime_put_sync(dev);
 	pm_runtime_disable(dev);
@@ -2883,7 +2884,8 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev)
 	am65_cpsw_nuss_phylink_cleanup(common);
 	am65_cpts_release(common->cpts);

-	of_platform_device_destroy(common->mdio_dev, NULL);
+	if (common->mdio_dev)
+		of_platform_device_destroy(common->mdio_dev, NULL);

 	pm_runtime_put_sync(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
From: Corinna Vinschen vinschen@redhat.com
[ Upstream commit 218c597325f4faf7b7a6049233a30d7842b5b2dc ]
stmmac_reinit_queues() fails to fix up the RX hash. Even if the number of channels gets restricted, the output of `ethtool -x' indicates that all RX queues are used:
$ ethtool -l enp0s29f2
Channel parameters for enp0s29f2:
Pre-set maximums:
RX:		8
TX:		8
Other:		n/a
Combined:	n/a
Current hardware settings:
RX:		8
TX:		8
Other:		n/a
Combined:	n/a

$ ethtool -x enp0s29f2
RX flow hash indirection table for enp0s29f2 with 8 RX ring(s):
    0:      0     1     2     3     4     5     6     7
    8:      0     1     2     3     4     5     6     7
[...]

$ ethtool -L enp0s29f2 rx 3

$ ethtool -x enp0s29f2
RX flow hash indirection table for enp0s29f2 with 3 RX ring(s):
    0:      0     1     2     3     4     5     6     7
    8:      0     1     2     3     4     5     6     7
[...]
Fix this by setting the indirection table according to the number of specified queues. The result is now as expected:
$ ethtool -L enp0s29f2 rx 3

$ ethtool -x enp0s29f2
RX flow hash indirection table for enp0s29f2 with 3 RX ring(s):
    0:      0     1     2     0     1     2     0     1
    8:      2     0     1     2     0     1     2     0
[...]
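For reference, the default indirection helper is just a modulo spread (paraphrased from include/linux/ethtool.h), which yields exactly the 0 1 2 0 1 2 ... pattern above:

	static inline u32 ethtool_rxfh_indir_default(u32 index, u32 n_rx_rings)
	{
		return index % n_rx_rings;
	}

With rx_cnt = 3, table[i] = i % 3 gives 0 1 2 0 1 2 0 1, 2 0 1 2 ...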
Tested on Intel Elkhart Lake.
Fixes: 0366f7e06a6b ("net: stmmac: add ethtool support for get/set channels")
Signed-off-by: Corinna Vinschen vinschen@redhat.com
Link: https://lore.kernel.org/r/20230403121120.489138-1-vinschen@redhat.com
Signed-off-by: Paolo Abeni pabeni@redhat.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 20b51a39db38d..4888536a31500 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -6948,7 +6948,7 @@ static void stmmac_napi_del(struct net_device *dev)
 int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt)
 {
 	struct stmmac_priv *priv = netdev_priv(dev);
-	int ret = 0;
+	int ret = 0, i;

 	if (netif_running(dev))
 		stmmac_release(dev);
@@ -6957,6 +6957,10 @@ int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt)

 	priv->plat->rx_queues_to_use = rx_cnt;
 	priv->plat->tx_queues_to_use = tx_cnt;
+	if (!netif_is_rxfh_configured(dev))
+		for (i = 0; i < ARRAY_SIZE(priv->rss.table); i++)
+			priv->rss.table[i] = ethtool_rxfh_indir_default(i,
+									rx_cnt);

 	stmmac_napi_add(dev);
From: Jeff Layton jlayton@kernel.org
[ Upstream commit 5085e41f9e83a1bec51da1f20b54f2ec3a13a3fe ]
While the unix_gid object is rcu-freed, the group_info list that it contains is not. Ensure that we only put the group list reference once we are really freeing the unix_gid object.
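As a general sketch of the pattern being applied (names simplified): kfree_rcu() can only free the object itself, so a teardown that must also drop an embedded reference after the grace period needs a real call_rcu() callback.

	static void obj_free_rcu(struct rcu_head *rcu)
	{
		struct obj *o = container_of(rcu, struct obj, rcu);

		put_group_info(o->gi);	/* dropped only after readers are done */
		kfree(o);
	}

	/* instead of putting the reference immediately and then
	 * kfree_rcu(o, rcu), which lets concurrent RCU readers see a
	 * freed group_info
	 */
	call_rcu(&o->rcu, obj_free_rcu);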
Reported-by: Zhi Li yieli@redhat.com
Link: https://bugzilla.redhat.com/show_bug.cgi?id=2183056
Signed-off-by: Jeff Layton jlayton@kernel.org
Fixes: fd5d2f78261b ("SUNRPC: Make server side AUTH_UNIX use lockless lookups")
Signed-off-by: Chuck Lever chuck.lever@oracle.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/sunrpc/svcauth_unix.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/svcauth_unix.c b/net/sunrpc/svcauth_unix.c
index b1efc34db6ed8..609ade4fb49ed 100644
--- a/net/sunrpc/svcauth_unix.c
+++ b/net/sunrpc/svcauth_unix.c
@@ -416,14 +416,23 @@ static int unix_gid_hash(kuid_t uid)
 	return hash_long(from_kuid(&init_user_ns, uid), GID_HASHBITS);
 }

-static void unix_gid_put(struct kref *kref)
+static void unix_gid_free(struct rcu_head *rcu)
 {
-	struct cache_head *item = container_of(kref, struct cache_head, ref);
-	struct unix_gid *ug = container_of(item, struct unix_gid, h);
+	struct unix_gid *ug = container_of(rcu, struct unix_gid, rcu);
+	struct cache_head *item = &ug->h;
+
 	if (test_bit(CACHE_VALID, &item->flags) &&
 	    !test_bit(CACHE_NEGATIVE, &item->flags))
 		put_group_info(ug->gi);
-	kfree_rcu(ug, rcu);
+	kfree(ug);
+}
+
+static void unix_gid_put(struct kref *kref)
+{
+	struct cache_head *item = container_of(kref, struct cache_head, ref);
+	struct unix_gid *ug = container_of(item, struct unix_gid, h);
+
+	call_rcu(&ug->rcu, unix_gid_free);
 }

 static int unix_gid_match(struct cache_head *corig, struct cache_head *cnew)
From: Dai Ngo dai.ngo@oracle.com
[ Upstream commit 7de82c2f36fb26aa78440bbf0efcf360b691d98b ]
Currently the callback request does not use the credential specified in CREATE_SESSION if the security flavor for the back channel is AUTH_SYS.

The problem was discovered by the pynfs 4.1 DELEG5 and DELEG7 tests, which fail with:

DELEG5 st_delegation.testCBSecParms : FAILURE
           expected callback with uid, gid == 17, 19, got 0, 0
Signed-off-by: Dai Ngo dai.ngo@oracle.com
Reviewed-by: Jeff Layton jlayton@kernel.org
Fixes: 8276c902bbe9 ("SUNRPC: remove uid and gid from struct auth_cred")
Signed-off-by: Chuck Lever chuck.lever@oracle.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/nfsd/nfs4callback.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index 2a815f5a52c4b..4039ffcf90ba5 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -946,8 +946,8 @@ static const struct cred *get_backchannel_cred(struct nfs4_client *clp, struct r
 		if (!kcred)
 			return NULL;

-		kcred->uid = ses->se_cb_sec.uid;
-		kcred->gid = ses->se_cb_sec.gid;
+		kcred->fsuid = ses->se_cb_sec.uid;
+		kcred->fsgid = ses->se_cb_sec.gid;
 		return kcred;
 	}
 }
From: Simei Su simei.su@intel.com
[ Upstream commit b4a01ace20f5c93c724abffc0a83ec84f514b98d ]
When adding an FDIR filter, if ice_vc_fdir_set_irq_ctx() returns failure, the inserted fdir entry is not removed, and if ice_vc_fdir_write_fltr() returns failure, the fdir context info for the irq handler is not cleared, which may lead to inconsistent state or a memory leak. This patch refines the failure paths to resolve the issue.
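The resulting unwind order, sketched: error labels run in reverse order of setup, and each failure enters at the label that undoes the last completed step.

	/* setup:  insert entry -> set irq ctx -> write filter
	 *
	 * err_clr_irq:    ice_vc_fdir_clear_irq_ctx(vf);
	 * err_rem_entry:  ice_vc_fdir_remove_entry(vf, conf, conf->flow_id);
	 * err_free_conf:  devm_kfree(dev, conf);
	 *
	 * A write-filter failure enters at err_clr_irq; a set-irq-ctx
	 * failure enters at err_rem_entry, skipping the irq cleanup it
	 * never set up.
	 */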
Fixes: 1f7ea1cd6a37 ("ice: Enable FDIR Configure for AVF")
Signed-off-by: Simei Su simei.su@intel.com
Tested-by: Rafal Romanowski rafal.romanowski@intel.com
Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
index a2645ff3100e4..f4ef76e37098c 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
@@ -1871,7 +1871,7 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
 		v_ret = VIRTCHNL_STATUS_SUCCESS;
 		stat->status = VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE;
 		dev_dbg(dev, "VF %d: set FDIR context failed\n", vf->vf_id);
-		goto err_free_conf;
+		goto err_rem_entry;
 	}

 	ret = ice_vc_fdir_write_fltr(vf, conf, true, is_tun);
@@ -1880,15 +1880,16 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
 		stat->status = VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE;
 		dev_err(dev, "VF %d: writing FDIR rule failed, ret:%d\n",
 			vf->vf_id, ret);
-		goto err_rem_entry;
+		goto err_clr_irq;
 	}

 exit:
 	kfree(stat);
 	return ret;

-err_rem_entry:
+err_clr_irq:
 	ice_vc_fdir_clear_irq_ctx(vf);
+err_rem_entry:
 	ice_vc_fdir_remove_entry(vf, conf, conf->flow_id);
 err_free_conf:
 	devm_kfree(dev, conf);
From: Lingyu Liu lingyu.liu@intel.com
[ Upstream commit 83c911dc5e0e8e6eaa6431c06972a8f159bfe2fc ]
Reset the FDIR counters when FDIR inits. Without this patch, when a VF initializes or resets, the FDIR counters are not cleared, which may cause unexpected behavior for future FDIR rule creation (e.g., rule conflicts).
Fixes: 1f7ea1cd6a37 ("ice: Enable FDIR Configure for AVF")
Signed-off-by: Junfeng Guo junfeng.guo@intel.com
Signed-off-by: Lingyu Liu lingyu.liu@intel.com
Tested-by: Rafal Romanowski rafal.romanowski@intel.com
Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 .../net/ethernet/intel/ice/ice_virtchnl_fdir.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
index f4ef76e37098c..7f72604079723 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
@@ -541,6 +541,21 @@ static void ice_vc_fdir_rem_prof_all(struct ice_vf *vf)
 	}
 }

+/**
+ * ice_vc_fdir_reset_cnt_all - reset all FDIR counters for this VF FDIR
+ * @fdir: pointer to the VF FDIR structure
+ */
+static void ice_vc_fdir_reset_cnt_all(struct ice_vf_fdir *fdir)
+{
+	enum ice_fltr_ptype flow;
+
+	for (flow = ICE_FLTR_PTYPE_NONF_NONE;
+	     flow < ICE_FLTR_PTYPE_MAX; flow++) {
+		fdir->fdir_fltr_cnt[flow][0] = 0;
+		fdir->fdir_fltr_cnt[flow][1] = 0;
+	}
+}
+
 /**
  * ice_vc_fdir_has_prof_conflict
  * @vf: pointer to the VF structure
@@ -1998,6 +2013,7 @@ void ice_vf_fdir_init(struct ice_vf *vf)
 	spin_lock_init(&fdir->ctx_lock);
 	fdir->ctx_irq.flags = 0;
 	fdir->ctx_done.flags = 0;
+	ice_vc_fdir_reset_cnt_all(fdir);
 }

 /**
From: Eric Dumazet edumazet@google.com
[ Upstream commit 6579f5bacc2c4cbc5ef6abb45352416939d1f844 ]
Some applications seem to rely on RAW sockets.
If they use private netns, we can avoid piling all RAW sockets bound to a given protocol into a single bucket.
Also place (struct raw_hashinfo).lock into its own cache line to limit false sharing.
Alternative would be to have per-netns hashtables, but this seems too expensive for most netns where RAW sockets are not used.
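Illustrative effect on bucket placement (bucket numbers invented for the example):

	/* old: bucket = IPPROTO_ICMP & (RAW_HTABLE_SIZE - 1) = 1 in every netns
	 * new: bucket = hash_32(net_hash_mix(net) ^ IPPROTO_ICMP, RAW_HTABLE_LOG)
	 *      netns A -> e.g. bucket 203
	 *      netns B -> e.g. bucket  17
	 */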
Signed-off-by: Eric Dumazet edumazet@google.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Stable-dep-of: 0a78cf7264d2 ("raw: Fix NULL deref in raw_get_next().")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/net/raw.h | 13 +++++++++++--
 net/ipv4/raw.c    | 13 +++++++------
 net/ipv6/raw.c    |  4 ++--
 3 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/include/net/raw.h b/include/net/raw.h
index 5e665934ebc7c..2c004c20ed996 100644
--- a/include/net/raw.h
+++ b/include/net/raw.h
@@ -15,6 +15,8 @@

 #include <net/inet_sock.h>
 #include <net/protocol.h>
+#include <net/netns/hash.h>
+#include <linux/hash.h>
 #include <linux/icmp.h>

 extern struct proto raw_prot;
@@ -29,13 +31,20 @@ int raw_local_deliver(struct sk_buff *, int);

 int raw_rcv(struct sock *, struct sk_buff *);

-#define RAW_HTABLE_SIZE	MAX_INET_PROTOS
+#define RAW_HTABLE_LOG	8
+#define RAW_HTABLE_SIZE	(1U << RAW_HTABLE_LOG)

 struct raw_hashinfo {
 	spinlock_t lock;
-	struct hlist_nulls_head ht[RAW_HTABLE_SIZE];
+
+	struct hlist_nulls_head ht[RAW_HTABLE_SIZE] ____cacheline_aligned;
 };

+static inline u32 raw_hashfunc(const struct net *net, u32 proto)
+{
+	return hash_32(net_hash_mix(net) ^ proto, RAW_HTABLE_LOG);
+}
+
 static inline void raw_hashinfo_init(struct raw_hashinfo *hashinfo)
 {
 	int i;
diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
index 006c1f0ed8b47..2a53a0bf29232 100644
--- a/net/ipv4/raw.c
+++ b/net/ipv4/raw.c
@@ -93,7 +93,7 @@ int raw_hash_sk(struct sock *sk)
 	struct raw_hashinfo *h = sk->sk_prot->h.raw_hash;
 	struct hlist_nulls_head *hlist;

-	hlist = &h->ht[inet_sk(sk)->inet_num & (RAW_HTABLE_SIZE - 1)];
+	hlist = &h->ht[raw_hashfunc(sock_net(sk), inet_sk(sk)->inet_num)];

 	spin_lock(&h->lock);
 	__sk_nulls_add_node_rcu(sk, hlist);
@@ -160,9 +160,9 @@ static int icmp_filter(const struct sock *sk, const struct sk_buff *skb)
  * RFC 1122: SHOULD pass TOS value up to the transport layer.
  * -> It does. And not only TOS, but all IP header.
  */
-static int raw_v4_input(struct sk_buff *skb, const struct iphdr *iph, int hash)
+static int raw_v4_input(struct net *net, struct sk_buff *skb,
+			const struct iphdr *iph, int hash)
 {
-	struct net *net = dev_net(skb->dev);
 	struct hlist_nulls_head *hlist;
 	struct hlist_nulls_node *hnode;
 	int sdif = inet_sdif(skb);
@@ -193,9 +193,10 @@ static int raw_v4_input(struct sk_buff *skb, const struct iphdr *iph, int hash)

 int raw_local_deliver(struct sk_buff *skb, int protocol)
 {
-	int hash = protocol & (RAW_HTABLE_SIZE - 1);
+	struct net *net = dev_net(skb->dev);

-	return raw_v4_input(skb, ip_hdr(skb), hash);
+	return raw_v4_input(net, skb, ip_hdr(skb),
+			    raw_hashfunc(net, protocol));
 }

 static void raw_err(struct sock *sk, struct sk_buff *skb, u32 info)
@@ -271,7 +272,7 @@ void raw_icmp_error(struct sk_buff *skb, int protocol, u32 info)
 	struct sock *sk;
 	int hash;

-	hash = protocol & (RAW_HTABLE_SIZE - 1);
+	hash = raw_hashfunc(net, protocol);
 	hlist = &raw_v4_hashinfo.ht[hash];

 	rcu_read_lock();
diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
index ada087b50541a..45b35b5f893c5 100644
--- a/net/ipv6/raw.c
+++ b/net/ipv6/raw.c
@@ -152,7 +152,7 @@ static bool ipv6_raw_deliver(struct sk_buff *skb, int nexthdr)
 	saddr = &ipv6_hdr(skb)->saddr;
 	daddr = saddr + 1;

-	hash = nexthdr & (RAW_HTABLE_SIZE - 1);
+	hash = raw_hashfunc(net, nexthdr);
 	hlist = &raw_v6_hashinfo.ht[hash];
 	rcu_read_lock();
 	sk_nulls_for_each(sk, hnode, hlist) {
@@ -338,7 +338,7 @@ void raw6_icmp_error(struct sk_buff *skb, int nexthdr,
 	struct sock *sk;
 	int hash;

-	hash = nexthdr & (RAW_HTABLE_SIZE - 1);
+	hash = raw_hashfunc(net, nexthdr);
 	hlist = &raw_v6_hashinfo.ht[hash];
 	rcu_read_lock();
 	sk_nulls_for_each(sk, hnode, hlist) {
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit 0a78cf7264d29abeca098eae0b188a10aabc8a32 ]
Dae R. Jeong reported a NULL deref in raw_get_next() [0].
It seems that the repro was running these sequences in parallel so that one thread was iterating on a socket that was being freed in another netns.
unshare(0x40060200)
r0 = syz_open_procfs(0x0, &(0x7f0000002080)='net/raw\x00')
socket$inet_icmp_raw(0x2, 0x3, 0x1)
pread64(r0, &(0x7f0000000000)=""/10, 0xa, 0x10000000007f)
After commit 0daf07e52709 ("raw: convert raw sockets to RCU"), we use RCU and hlist_nulls_for_each_entry() to iterate over SOCK_RAW sockets. However, we should use spinlock for slow paths to avoid the NULL deref.
Also, SOCK_RAW does not use SLAB_TYPESAFE_BY_RCU, and the slab object is not reused during iteration in the grace period. In fact, the lockless readers do not check the nulls marker with get_nulls_value(). So, SOCK_RAW should use hlist instead of hlist_nulls.
Instead of adding an unnecessary barrier by sk_nulls_for_each_rcu(), let's convert hlist_nulls to hlist and use sk_for_each_rcu() for fast paths and sk_for_each() and spinlock for /proc/net/raw.
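The resulting split, sketched:

	/* fast paths (packet input)      slow path (/proc/net/raw)
	 *
	 * rcu_read_lock();               spin_lock(&h->lock);
	 * sk_for_each_rcu(sk, hlist)     sk_for_each(sk, hlist)
	 *         ...                            ...
	 * rcu_read_unlock();             spin_unlock(&h->lock);
	 */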
[0]:
general protection fault, probably for non-canonical address 0xdffffc0000000005: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x0000000000000028-0x000000000000002f]
CPU: 2 PID: 20952 Comm: syz-executor.0 Not tainted 6.2.0-g048ec869bafd-dirty #7
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
RIP: 0010:read_pnet include/net/net_namespace.h:383 [inline]
RIP: 0010:sock_net include/net/sock.h:649 [inline]
RIP: 0010:raw_get_next net/ipv4/raw.c:974 [inline]
RIP: 0010:raw_get_idx net/ipv4/raw.c:986 [inline]
RIP: 0010:raw_seq_start+0x431/0x800 net/ipv4/raw.c:995
Code: ef e8 33 3d 94 f7 49 8b 6d 00 4c 89 ef e8 b7 65 5f f7 49 89 ed 49 83 c5 98 0f 84 9a 00 00 00 48 83 c5 c8 48 89 e8 48 c1 e8 03 <42> 80 3c 30 00 74 08 48 89 ef e8 00 3d 94 f7 4c 8b 7d 00 48 89 ef
RSP: 0018:ffffc9001154f9b0 EFLAGS: 00010206
RAX: 0000000000000005 RBX: 1ffff1100302c8fd RCX: 0000000000000000
RDX: 0000000000000028 RSI: ffffc9001154f988 RDI: ffffc9000f77a338
RBP: 0000000000000029 R08: ffffffff8a50ffb4 R09: fffffbfff24b6bd9
R10: fffffbfff24b6bd9 R11: 0000000000000000 R12: ffff88801db73b78
R13: fffffffffffffff9 R14: dffffc0000000000 R15: 0000000000000030
FS:  00007f843ae8e700(0000) GS:ffff888063700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055bb9614b35f CR3: 000000003c672000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 seq_read_iter+0x4c6/0x10f0 fs/seq_file.c:225
 seq_read+0x224/0x320 fs/seq_file.c:162
 pde_read fs/proc/inode.c:316 [inline]
 proc_reg_read+0x23f/0x330 fs/proc/inode.c:328
 vfs_read+0x31e/0xd30 fs/read_write.c:468
 ksys_pread64 fs/read_write.c:665 [inline]
 __do_sys_pread64 fs/read_write.c:675 [inline]
 __se_sys_pread64 fs/read_write.c:672 [inline]
 __x64_sys_pread64+0x1e9/0x280 fs/read_write.c:672
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4e/0xa0 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x478d29
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f843ae8dbe8 EFLAGS: 00000246 ORIG_RAX: 0000000000000011
RAX: ffffffffffffffda RBX: 0000000000791408 RCX: 0000000000478d29
RDX: 000000000000000a RSI: 0000000020000000 RDI: 0000000000000003
RBP: 00000000f477909a R08: 0000000000000000 R09: 0000000000000000
R10: 000010000000007f R11: 0000000000000246 R12: 0000000000791740
R13: 0000000000791414 R14: 0000000000791408 R15: 00007ffc2eb48a50
 </TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:read_pnet include/net/net_namespace.h:383 [inline]
RIP: 0010:sock_net include/net/sock.h:649 [inline]
RIP: 0010:raw_get_next net/ipv4/raw.c:974 [inline]
RIP: 0010:raw_get_idx net/ipv4/raw.c:986 [inline]
RIP: 0010:raw_seq_start+0x431/0x800 net/ipv4/raw.c:995
Code: ef e8 33 3d 94 f7 49 8b 6d 00 4c 89 ef e8 b7 65 5f f7 49 89 ed 49 83 c5 98 0f 84 9a 00 00 00 48 83 c5 c8 48 89 e8 48 c1 e8 03 <42> 80 3c 30 00 74 08 48 89 ef e8 00 3d 94 f7 4c 8b 7d 00 48 89 ef
RSP: 0018:ffffc9001154f9b0 EFLAGS: 00010206
RAX: 0000000000000005 RBX: 1ffff1100302c8fd RCX: 0000000000000000
RDX: 0000000000000028 RSI: ffffc9001154f988 RDI: ffffc9000f77a338
RBP: 0000000000000029 R08: ffffffff8a50ffb4 R09: fffffbfff24b6bd9
R10: fffffbfff24b6bd9 R11: 0000000000000000 R12: ffff88801db73b78
R13: fffffffffffffff9 R14: dffffc0000000000 R15: 0000000000000030
FS:  00007f843ae8e700(0000) GS:ffff888063700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f92ff166000 CR3: 000000003c672000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Fixes: 0daf07e52709 ("raw: convert raw sockets to RCU") Reported-by: syzbot syzkaller@googlegroups.com Reported-by: Dae R. Jeong threeearcat@gmail.com Link: https://lore.kernel.org/netdev/ZCA2mGV_cmq7lIfV@dragonet/ Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Reviewed-by: Eric Dumazet edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/raw.h | 4 ++-- net/ipv4/raw.c | 36 +++++++++++++++++++----------------- net/ipv4/raw_diag.c | 10 ++++------ net/ipv6/raw.c | 10 ++++------ 4 files changed, 29 insertions(+), 31 deletions(-)
diff --git a/include/net/raw.h b/include/net/raw.h index 2c004c20ed996..3af5289fdead9 100644 --- a/include/net/raw.h +++ b/include/net/raw.h @@ -37,7 +37,7 @@ int raw_rcv(struct sock *, struct sk_buff *); struct raw_hashinfo { spinlock_t lock;
- struct hlist_nulls_head ht[RAW_HTABLE_SIZE] ____cacheline_aligned; + struct hlist_head ht[RAW_HTABLE_SIZE] ____cacheline_aligned; };
static inline u32 raw_hashfunc(const struct net *net, u32 proto) @@ -51,7 +51,7 @@ static inline void raw_hashinfo_init(struct raw_hashinfo *hashinfo)
spin_lock_init(&hashinfo->lock); for (i = 0; i < RAW_HTABLE_SIZE; i++) - INIT_HLIST_NULLS_HEAD(&hashinfo->ht[i], i); + INIT_HLIST_HEAD(&hashinfo->ht[i]); }
#ifdef CONFIG_PROC_FS diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c index 2a53a0bf29232..af03aa8a8e513 100644 --- a/net/ipv4/raw.c +++ b/net/ipv4/raw.c @@ -91,12 +91,12 @@ EXPORT_SYMBOL_GPL(raw_v4_hashinfo); int raw_hash_sk(struct sock *sk) { struct raw_hashinfo *h = sk->sk_prot->h.raw_hash; - struct hlist_nulls_head *hlist; + struct hlist_head *hlist;
hlist = &h->ht[raw_hashfunc(sock_net(sk), inet_sk(sk)->inet_num)];
spin_lock(&h->lock); - __sk_nulls_add_node_rcu(sk, hlist); + sk_add_node_rcu(sk, hlist); sock_set_flag(sk, SOCK_RCU_FREE); spin_unlock(&h->lock); sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1); @@ -110,7 +110,7 @@ void raw_unhash_sk(struct sock *sk) struct raw_hashinfo *h = sk->sk_prot->h.raw_hash;
spin_lock(&h->lock); - if (__sk_nulls_del_node_init_rcu(sk)) + if (sk_del_node_init_rcu(sk)) sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); spin_unlock(&h->lock); } @@ -163,16 +163,15 @@ static int icmp_filter(const struct sock *sk, const struct sk_buff *skb) static int raw_v4_input(struct net *net, struct sk_buff *skb, const struct iphdr *iph, int hash) { - struct hlist_nulls_head *hlist; - struct hlist_nulls_node *hnode; int sdif = inet_sdif(skb); + struct hlist_head *hlist; int dif = inet_iif(skb); int delivered = 0; struct sock *sk;
hlist = &raw_v4_hashinfo.ht[hash]; rcu_read_lock(); - sk_nulls_for_each(sk, hnode, hlist) { + sk_for_each_rcu(sk, hlist) { if (!raw_v4_match(net, sk, iph->protocol, iph->saddr, iph->daddr, dif, sdif)) continue; @@ -264,10 +263,9 @@ static void raw_err(struct sock *sk, struct sk_buff *skb, u32 info) void raw_icmp_error(struct sk_buff *skb, int protocol, u32 info) { struct net *net = dev_net(skb->dev); - struct hlist_nulls_head *hlist; - struct hlist_nulls_node *hnode; int dif = skb->dev->ifindex; int sdif = inet_sdif(skb); + struct hlist_head *hlist; const struct iphdr *iph; struct sock *sk; int hash; @@ -276,7 +274,7 @@ void raw_icmp_error(struct sk_buff *skb, int protocol, u32 info) hlist = &raw_v4_hashinfo.ht[hash];
rcu_read_lock(); - sk_nulls_for_each(sk, hnode, hlist) { + sk_for_each_rcu(sk, hlist) { iph = (const struct iphdr *)skb->data; if (!raw_v4_match(net, sk, iph->protocol, iph->daddr, iph->saddr, dif, sdif)) @@ -948,14 +946,13 @@ static struct sock *raw_get_first(struct seq_file *seq, int bucket) { struct raw_hashinfo *h = pde_data(file_inode(seq->file)); struct raw_iter_state *state = raw_seq_private(seq); - struct hlist_nulls_head *hlist; - struct hlist_nulls_node *hnode; + struct hlist_head *hlist; struct sock *sk;
for (state->bucket = bucket; state->bucket < RAW_HTABLE_SIZE; ++state->bucket) { hlist = &h->ht[state->bucket]; - sk_nulls_for_each(sk, hnode, hlist) { + sk_for_each(sk, hlist) { if (sock_net(sk) == seq_file_net(seq)) return sk; } @@ -968,7 +965,7 @@ static struct sock *raw_get_next(struct seq_file *seq, struct sock *sk) struct raw_iter_state *state = raw_seq_private(seq);
do { - sk = sk_nulls_next(sk); + sk = sk_next(sk); } while (sk && sock_net(sk) != seq_file_net(seq));
if (!sk) @@ -987,9 +984,12 @@ static struct sock *raw_get_idx(struct seq_file *seq, loff_t pos) }
void *raw_seq_start(struct seq_file *seq, loff_t *pos) - __acquires(RCU) + __acquires(&h->lock) { - rcu_read_lock(); + struct raw_hashinfo *h = pde_data(file_inode(seq->file)); + + spin_lock(&h->lock); + return *pos ? raw_get_idx(seq, *pos - 1) : SEQ_START_TOKEN; } EXPORT_SYMBOL_GPL(raw_seq_start); @@ -1008,9 +1008,11 @@ void *raw_seq_next(struct seq_file *seq, void *v, loff_t *pos) EXPORT_SYMBOL_GPL(raw_seq_next);
void raw_seq_stop(struct seq_file *seq, void *v) - __releases(RCU) + __releases(&h->lock) { - rcu_read_unlock(); + struct raw_hashinfo *h = pde_data(file_inode(seq->file)); + + spin_unlock(&h->lock); } EXPORT_SYMBOL_GPL(raw_seq_stop);
diff --git a/net/ipv4/raw_diag.c b/net/ipv4/raw_diag.c index 999321834b94a..da3591a66a169 100644 --- a/net/ipv4/raw_diag.c +++ b/net/ipv4/raw_diag.c @@ -57,8 +57,7 @@ static bool raw_lookup(struct net *net, struct sock *sk, static struct sock *raw_sock_get(struct net *net, const struct inet_diag_req_v2 *r) { struct raw_hashinfo *hashinfo = raw_get_hashinfo(r); - struct hlist_nulls_head *hlist; - struct hlist_nulls_node *hnode; + struct hlist_head *hlist; struct sock *sk; int slot;
@@ -68,7 +67,7 @@ static struct sock *raw_sock_get(struct net *net, const struct inet_diag_req_v2 rcu_read_lock(); for (slot = 0; slot < RAW_HTABLE_SIZE; slot++) { hlist = &hashinfo->ht[slot]; - sk_nulls_for_each(sk, hnode, hlist) { + sk_for_each_rcu(sk, hlist) { if (raw_lookup(net, sk, r)) { /* * Grab it and keep until we fill @@ -142,9 +141,8 @@ static void raw_diag_dump(struct sk_buff *skb, struct netlink_callback *cb, struct raw_hashinfo *hashinfo = raw_get_hashinfo(r); struct net *net = sock_net(skb->sk); struct inet_diag_dump_data *cb_data; - struct hlist_nulls_head *hlist; - struct hlist_nulls_node *hnode; int num, s_num, slot, s_slot; + struct hlist_head *hlist; struct sock *sk = NULL; struct nlattr *bc;
@@ -161,7 +159,7 @@ static void raw_diag_dump(struct sk_buff *skb, struct netlink_callback *cb, num = 0;
hlist = &hashinfo->ht[slot]; - sk_nulls_for_each(sk, hnode, hlist) { + sk_for_each_rcu(sk, hlist) { struct inet_sock *inet = inet_sk(sk);
if (!net_eq(sock_net(sk), net)) diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c index 45b35b5f893c5..4fc511bdf176c 100644 --- a/net/ipv6/raw.c +++ b/net/ipv6/raw.c @@ -141,10 +141,9 @@ EXPORT_SYMBOL(rawv6_mh_filter_unregister); static bool ipv6_raw_deliver(struct sk_buff *skb, int nexthdr) { struct net *net = dev_net(skb->dev); - struct hlist_nulls_head *hlist; - struct hlist_nulls_node *hnode; const struct in6_addr *saddr; const struct in6_addr *daddr; + struct hlist_head *hlist; struct sock *sk; bool delivered = false; __u8 hash; @@ -155,7 +154,7 @@ static bool ipv6_raw_deliver(struct sk_buff *skb, int nexthdr) hash = raw_hashfunc(net, nexthdr); hlist = &raw_v6_hashinfo.ht[hash]; rcu_read_lock(); - sk_nulls_for_each(sk, hnode, hlist) { + sk_for_each_rcu(sk, hlist) { int filtered;
if (!raw_v6_match(net, sk, nexthdr, daddr, saddr, @@ -333,15 +332,14 @@ void raw6_icmp_error(struct sk_buff *skb, int nexthdr, u8 type, u8 code, int inner_offset, __be32 info) { struct net *net = dev_net(skb->dev); - struct hlist_nulls_head *hlist; - struct hlist_nulls_node *hnode; + struct hlist_head *hlist; struct sock *sk; int hash;
hash = raw_hashfunc(net, nexthdr); hlist = &raw_v6_hashinfo.ht[hash]; rcu_read_lock(); - sk_nulls_for_each(sk, hnode, hlist) { + sk_for_each_rcu(sk, hlist) { /* Note: ipv6_hdr(skb) != skb->data */ const struct ipv6hdr *ip6h = (const struct ipv6hdr *)skb->data;
From: Kuniyuki Iwashima kuniyu@amazon.com
[ Upstream commit ab5fb73ffa01072b4d8031cc05801fa1cb653bee ]
After commit dbca1596bbb0 ("ping: convert to RCU lookups, get rid of rwlock"), we use RCU for ping sockets, but we should use spinlock for /proc/net/icmp to avoid a potential NULL deref mentioned in the previous patch.
Let's go back to using spinlock there.
Note we can convert ping sockets to use hlist instead of hlist_nulls because we do not use SLAB_TYPESAFE_BY_RCU for ping sockets.
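To make the hlist_nulls point concrete, here is a minimal lookup sketch (illustrative only, not code from this patch) of the restart check that the nulls marker enables. It is only needed when sockets come from a SLAB_TYPESAFE_BY_RCU cache, where a node can be freed and reinserted on a different chain while an RCU reader is still walking it; ping and raw sockets are instead freed through a grace period (SOCK_RCU_FREE), so a plain hlist walk never sees a recycled node.

#include <linux/rculist_nulls.h>
#include <net/inet_sock.h>

static struct sock *nulls_lookup_sketch(struct hlist_nulls_head *ht,
					unsigned int slot, u16 num)
{
	struct hlist_nulls_node *node;
	struct sock *sk;

begin:
	sk_nulls_for_each_rcu(sk, node, &ht[slot]) {
		if (inet_sk(sk)->inet_num == num)
			return sk;
	}
	/* Chains are initialized with INIT_HLIST_NULLS_HEAD(&ht[i], i),
	 * so the nulls value names the chain the walk ended on. A
	 * mismatch means the walk drifted onto another chain via a
	 * recycled node and must be restarted. */
	if (get_nulls_value(node) != slot)
		goto begin;
	return NULL;
}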
Fixes: dbca1596bbb0 ("ping: convert to RCU lookups, get rid of rwlock") Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Reviewed-by: Eric Dumazet edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/ping.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c index 409ec2a1f95b0..5178a3f3cb537 100644 --- a/net/ipv4/ping.c +++ b/net/ipv4/ping.c @@ -1089,13 +1089,13 @@ static struct sock *ping_get_idx(struct seq_file *seq, loff_t pos) }
void *ping_seq_start(struct seq_file *seq, loff_t *pos, sa_family_t family) - __acquires(RCU) + __acquires(ping_table.lock) { struct ping_iter_state *state = seq->private; state->bucket = 0; state->family = family;
- rcu_read_lock(); + spin_lock(&ping_table.lock);
return *pos ? ping_get_idx(seq, *pos-1) : SEQ_START_TOKEN; } @@ -1121,9 +1121,9 @@ void *ping_seq_next(struct seq_file *seq, void *v, loff_t *pos) EXPORT_SYMBOL_GPL(ping_seq_next);
void ping_seq_stop(struct seq_file *seq, void *v) - __releases(RCU) + __releases(ping_table.lock) { - rcu_read_unlock(); + spin_unlock(&ping_table.lock); } EXPORT_SYMBOL_GPL(ping_seq_stop);
From: Andy Roulin aroulin@nvidia.com
[ Upstream commit e847c7675e19ef344913724dc68f83df31ad6a17 ]
If the number of lanes was forced and then subsequently the user omits this parameter, the ksettings->lanes is reset. The driver should then reset the number of lanes to the device's default for the specified speed.
However, although the ksettings->lanes is set to 0, the mod variable is not set to true to indicate the driver and userspace should be notified of the changes.
The consequence is that the same ethtool operation will produce different results based on the initial state.
If the initial state is:

$ ethtool swp1 | grep -A 3 'Speed: '
	Speed: 50000Mb/s
	Lanes: 2
	Duplex: Full
	Auto-negotiation: on

then executing 'ethtool -s swp1 speed 50000 autoneg off' will yield:

$ ethtool swp1 | grep -A 3 'Speed: '
	Speed: 50000Mb/s
	Lanes: 2
	Duplex: Full
	Auto-negotiation: off

While if the initial state is:

$ ethtool swp1 | grep -A 3 'Speed: '
	Speed: 50000Mb/s
	Lanes: 1
	Duplex: Full
	Auto-negotiation: off

executing the same 'ethtool -s swp1 speed 50000 autoneg off' results in:

$ ethtool swp1 | grep -A 3 'Speed: '
	Speed: 50000Mb/s
	Lanes: 1
	Duplex: Full
	Auto-negotiation: off

This patch fixes this behavior. Omitting lanes will always result in the driver choosing the default lane width for the chosen speed. In this scenario, regardless of the initial state, the end state will be, e.g.,

$ ethtool swp1 | grep -A 3 'Speed: '
	Speed: 50000Mb/s
	Lanes: 2
	Duplex: Full
	Auto-negotiation: off
Fixes: 012ce4dd3102 ("ethtool: Extend link modes settings uAPI with lanes") Signed-off-by: Andy Roulin aroulin@nvidia.com Reviewed-by: Danielle Ratson danieller@nvidia.com Reviewed-by: Ido Schimmel idosch@nvidia.com Link: https://lore.kernel.org/r/ac238d6b-8726-8156-3810-6471291dbc7f@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ethtool/linkmodes.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/net/ethtool/linkmodes.c b/net/ethtool/linkmodes.c index 126e06c713a3a..2d91f2a8c7626 100644 --- a/net/ethtool/linkmodes.c +++ b/net/ethtool/linkmodes.c @@ -282,11 +282,12 @@ static int ethnl_update_linkmodes(struct genl_info *info, struct nlattr **tb, "lanes configuration not supported by device"); return -EOPNOTSUPP; } - } else if (!lsettings->autoneg) { - /* If autoneg is off and lanes parameter is not passed from user, - * set the lanes parameter to 0. + } else if (!lsettings->autoneg && ksettings->lanes) { + /* If autoneg is off and lanes parameter is not passed from user but + * it was defined previously then set the lanes parameter to 0. */ ksettings->lanes = 0; + *mod = true; }
ret = ethnl_update_bitset(ksettings->link_modes.advertising,
From: Eric Dumazet edumazet@google.com
[ Upstream commit a1865f2e7d10dde00d35a2122b38d2e469ae67ed ]
syzbot reported a data-race in netlink_recvmsg() [1]
Indeed, netlink_recvmsg() can be run concurrently, and netlink_dump() also needs protection.
[1] BUG: KCSAN: data-race in netlink_recvmsg / netlink_recvmsg
read to 0xffff888141840b38 of 8 bytes by task 23057 on cpu 0:
 netlink_recvmsg+0xea/0x730 net/netlink/af_netlink.c:1988
 sock_recvmsg_nosec net/socket.c:1017 [inline]
 sock_recvmsg net/socket.c:1038 [inline]
 __sys_recvfrom+0x1ee/0x2e0 net/socket.c:2194
 __do_sys_recvfrom net/socket.c:2212 [inline]
 __se_sys_recvfrom net/socket.c:2208 [inline]
 __x64_sys_recvfrom+0x78/0x90 net/socket.c:2208
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

write to 0xffff888141840b38 of 8 bytes by task 23037 on cpu 1:
 netlink_recvmsg+0x114/0x730 net/netlink/af_netlink.c:1989
 sock_recvmsg_nosec net/socket.c:1017 [inline]
 sock_recvmsg net/socket.c:1038 [inline]
 ____sys_recvmsg+0x156/0x310 net/socket.c:2720
 ___sys_recvmsg net/socket.c:2762 [inline]
 do_recvmmsg+0x2e5/0x710 net/socket.c:2856
 __sys_recvmmsg net/socket.c:2935 [inline]
 __do_sys_recvmmsg net/socket.c:2958 [inline]
 __se_sys_recvmmsg net/socket.c:2951 [inline]
 __x64_sys_recvmmsg+0xe2/0x160 net/socket.c:2951
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
value changed: 0x0000000000000000 -> 0x0000000000001000
Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 23037 Comm: syz-executor.2 Not tainted 6.3.0-rc4-syzkaller-00195-g5a57b48fdfcb #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
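For reference, the annotation pattern the fix applies, distilled into a minimal sketch (illustrative; it mirrors the recvmsg hunk below, and struct netlink_sock is the private sock type from net/netlink/af_netlink.h): load the shared field once into a local with READ_ONCE(), compute on the local, and publish with WRITE_ONCE(), so the compiler can neither tear nor refetch the accesses and KCSAN knows the remaining race is intentional.

#include <linux/skbuff.h>

static void note_recvmsg_len_sketch(struct netlink_sock *nlk, size_t len)
{
	size_t max_len;

	/* one marked load, local arithmetic, one marked store */
	max_len = max(READ_ONCE(nlk->max_recvmsg_len), len);
	max_len = min_t(size_t, max_len, SKB_WITH_OVERHEAD(32768));
	WRITE_ONCE(nlk->max_recvmsg_len, max_len);
}

The lockless reader in netlink_dump() pairs with this via its own READ_ONCE(), as the second hunk below shows.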
Fixes: 9063e21fb026 ("netlink: autosize skb lengthes") Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: Eric Dumazet edumazet@google.com Reviewed-by: Simon Horman simon.horman@corigine.com Link: https://lore.kernel.org/r/20230403214643.768555-1-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netlink/af_netlink.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c index c642776597531..f365dfdd672d7 100644 --- a/net/netlink/af_netlink.c +++ b/net/netlink/af_netlink.c @@ -1952,7 +1952,7 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, struct scm_cookie scm; struct sock *sk = sock->sk; struct netlink_sock *nlk = nlk_sk(sk); - size_t copied; + size_t copied, max_recvmsg_len; struct sk_buff *skb, *data_skb; int err, ret;
@@ -1985,9 +1985,10 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, #endif
/* Record the max length of recvmsg() calls for future allocations */ - nlk->max_recvmsg_len = max(nlk->max_recvmsg_len, len); - nlk->max_recvmsg_len = min_t(size_t, nlk->max_recvmsg_len, - SKB_WITH_OVERHEAD(32768)); + max_recvmsg_len = max(READ_ONCE(nlk->max_recvmsg_len), len); + max_recvmsg_len = min_t(size_t, max_recvmsg_len, + SKB_WITH_OVERHEAD(32768)); + WRITE_ONCE(nlk->max_recvmsg_len, max_recvmsg_len);
copied = data_skb->len; if (len < copied) { @@ -2236,6 +2237,7 @@ static int netlink_dump(struct sock *sk) struct netlink_ext_ack extack = {}; struct netlink_callback *cb; struct sk_buff *skb = NULL; + size_t max_recvmsg_len; struct module *module; int err = -ENOBUFS; int alloc_min_size; @@ -2258,8 +2260,9 @@ static int netlink_dump(struct sock *sk) cb = &nlk->cb; alloc_min_size = max_t(int, cb->min_dump_alloc, NLMSG_GOODSIZE);
- if (alloc_min_size < nlk->max_recvmsg_len) { - alloc_size = nlk->max_recvmsg_len; + max_recvmsg_len = READ_ONCE(nlk->max_recvmsg_len); + if (alloc_min_size < max_recvmsg_len) { + alloc_size = max_recvmsg_len; skb = alloc_skb(alloc_size, (GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) | __GFP_NOWARN | __GFP_NORETRY);
From: Shailend Chand shailend@google.com
[ Upstream commit 3ce9345580974863c060fa32971537996a7b2d57 ]
Non-GSO TCP packets whose SKBs' linear portion did not include the entire TCP header were not populating the first Tx descriptor with as many bytes as the vNIC expected. This change ensures that all TCP packets populate the first descriptor with the correct number of bytes.
Fixes: 893ce44df565 ("gve: Add basic driver framework for Compute Engine Virtual NIC") Signed-off-by: Shailend Chand shailend@google.com Link: https://lore.kernel.org/r/20230403172809.2939306-1-shailend@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/google/gve/gve.h | 2 ++ drivers/net/ethernet/google/gve/gve_tx.c | 12 +++++------- 2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 64eb0442c82fd..005cb9dfe078b 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -47,6 +47,8 @@
#define GVE_RX_BUFFER_SIZE_DQO 2048
+#define GVE_GQ_TX_MIN_PKT_DESC_BYTES 182 + /* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */ struct gve_rx_desc_queue { struct gve_rx_desc *desc_ring; /* the descriptor ring */ diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c index 4888bf05fbedb..5e11b82367545 100644 --- a/drivers/net/ethernet/google/gve/gve_tx.c +++ b/drivers/net/ethernet/google/gve/gve_tx.c @@ -284,8 +284,8 @@ static inline int gve_skb_fifo_bytes_required(struct gve_tx_ring *tx, int bytes; int hlen;
- hlen = skb_is_gso(skb) ? skb_checksum_start_offset(skb) + - tcp_hdrlen(skb) : skb_headlen(skb); + hlen = skb_is_gso(skb) ? skb_checksum_start_offset(skb) + tcp_hdrlen(skb) : + min_t(int, GVE_GQ_TX_MIN_PKT_DESC_BYTES, skb->len);
pad_bytes = gve_tx_fifo_pad_alloc_one_frag(&tx->tx_fifo, hlen); @@ -454,13 +454,11 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st pkt_desc = &tx->desc[idx];
l4_hdr_offset = skb_checksum_start_offset(skb); - /* If the skb is gso, then we want the tcp header in the first segment - * otherwise we want the linear portion of the skb (which will contain - * the checksum because skb->csum_start and skb->csum_offset are given - * relative to skb->head) in the first segment. + /* If the skb is gso, then we want the tcp header alone in the first segment + * otherwise we want the minimum required by the gVNIC spec. */ hlen = is_gso ? l4_hdr_offset + tcp_hdrlen(skb) : - skb_headlen(skb); + min_t(int, GVE_GQ_TX_MIN_PKT_DESC_BYTES, skb->len);
info->skb = skb; /* We don't want to split the header, so if necessary, pad to the end
From: Ard Biesheuvel ardb@kernel.org
[ Upstream commit 32d85999680601d01b2a36713c9ffd7397c8688b ]
Dan reports that smatch complains about a potential uninitialized variable being used in the compat alignment fixup code.
The logic is not wrong per se, but we do end up using an uninitialized variable if reading the instruction that triggered the alignment fault from user space faults, even if the fault ensures that the uninitialized value doesn't propagate any further.
Given that we just give up and return 1 if any fault occurs when reading the instruction, let's get rid of the 'success handling' pattern that captures the fault in a variable and aborts later, and instead, just return 1 immediately if any of the get_user() calls result in an exception.
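Schematically, the change is from capture-and-check-later to fail-fast (a hedged contrast sketch; the helper decode_instr() is hypothetical, only the control flow is the point):

#include <linux/types.h>
#include <linux/uaccess.h>

/* Old shape: the fault is recorded and only tested at the end, so a
 * checker has to prove 'instr' is never consumed on the fault path. */
static int fixup_old_sketch(u32 __user *instrptr)
{
	u32 instr;
	int fault;

	fault = get_user(instr, instrptr);
	if (!fault)
		decode_instr(instr);	/* hypothetical */
	if (fault)
		return 1;
	return 0;
}

/* New shape: bail out on the spot, so every use of 'instr' is
 * dominated by a successful read. */
static int fixup_new_sketch(u32 __user *instrptr)
{
	u32 instr;

	if (get_user(instr, instrptr))
		return 1;
	decode_instr(instr);	/* hypothetical */
	return 0;
}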
Fixes: 3fc24ef32d3b ("arm64: compat: Implement misalignment fixups for multiword loads") Reported-by: kernel test robot lkp@intel.com Reported-by: Dan Carpenter error27@gmail.com Link: https://lore.kernel.org/r/202304021214.gekJ8yRc-lkp@intel.com/ Signed-off-by: Ard Biesheuvel ardb@kernel.org Link: https://lore.kernel.org/r/20230404103625.2386382-1-ardb@kernel.org Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/kernel/compat_alignment.c | 32 ++++++++++++---------------- 1 file changed, 14 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/kernel/compat_alignment.c b/arch/arm64/kernel/compat_alignment.c index 5edec2f49ec98..deff21bfa6800 100644 --- a/arch/arm64/kernel/compat_alignment.c +++ b/arch/arm64/kernel/compat_alignment.c @@ -314,36 +314,32 @@ int do_compat_alignment_fixup(unsigned long addr, struct pt_regs *regs) int (*handler)(unsigned long addr, u32 instr, struct pt_regs *regs); unsigned int type; u32 instr = 0; - u16 tinstr = 0; int isize = 4; int thumb2_32b = 0; - int fault;
instrptr = instruction_pointer(regs);
if (compat_thumb_mode(regs)) { __le16 __user *ptr = (__le16 __user *)(instrptr & ~1); + u16 tinstr, tinst2;
- fault = alignment_get_thumb(regs, ptr, &tinstr); - if (!fault) { - if (IS_T32(tinstr)) { - /* Thumb-2 32-bit */ - u16 tinst2; - fault = alignment_get_thumb(regs, ptr + 1, &tinst2); - instr = ((u32)tinstr << 16) | tinst2; - thumb2_32b = 1; - } else { - isize = 2; - instr = thumb2arm(tinstr); - } + if (alignment_get_thumb(regs, ptr, &tinstr)) + return 1; + + if (IS_T32(tinstr)) { /* Thumb-2 32-bit */ + if (alignment_get_thumb(regs, ptr + 1, &tinst2)) + return 1; + instr = ((u32)tinstr << 16) | tinst2; + thumb2_32b = 1; + } else { + isize = 2; + instr = thumb2arm(tinstr); } } else { - fault = alignment_get_arm(regs, (__le32 __user *)instrptr, &instr); + if (alignment_get_arm(regs, (__le32 __user *)instrptr, &instr)) + return 1; }
- if (fault) - return 1; - switch (CODING_BITS(instr)) { case 0x00000000: /* 3.13.4 load/store instruction extensions */ if (LDSTHD_I_BIT(instr))
From: Michael Sit Wei Hong michael.wei.hong.sit@intel.com
[ Upstream commit 8fbc10b995a506e173f1080dfa2764f232a65e02 ]
Some DT devices already have a phy device configured in the DT/ACPI. The current implementation scans for a phy unconditionally, even though a phy is already listed in the DT/ACPI and attached.

We should check the fwnode for a listed phy device and use that to decide whether to scan for a phy to attach to.
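A condensed sketch of the resulting decision (simplified from the diff below; the MDIO-bus fallback helper scan_mdio_bus() is hypothetical and stands in for the manual phy_addr scan):

#include <linux/err.h>
#include <linux/phy.h>
#include <linux/phylink.h>

static int stmmac_init_phy_sketch(struct stmmac_priv *priv,
				  struct fwnode_handle *fwnode)
{
	struct fwnode_handle *phy_fwnode;

	if (!phylink_expects_phy(priv->phylink))
		return 0;	/* e.g. fixed-link: nothing to attach */

	phy_fwnode = fwnode ? fwnode_get_phy_node(fwnode) : NULL;
	if (IS_ERR_OR_NULL(phy_fwnode))
		return scan_mdio_bus(priv);	/* hypothetical fallback */

	/* firmware names a PHY: connect it instead of scanning */
	fwnode_handle_put(phy_fwnode);
	return phylink_fwnode_phy_connect(priv->phylink, fwnode, 0);
}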
Fixes: fe2cfbc96803 ("net: stmmac: check if MAC needs to attach to a PHY") Reported-by: Martin Blumenstingl martin.blumenstingl@googlemail.com Link: https://lore.kernel.org/lkml/20230403212434.296975-1-martin.blumenstingl@goo... Tested-by: Guenter Roeck linux@roeck-us.net Tested-by: Shahab Vahedi shahab@synopsys.com Tested-by: Marek Szyprowski m.szyprowski@samsung.com Tested-by: Martin Blumenstingl martin.blumenstingl@googlemail.com Suggested-by: Russell King (Oracle) rmk+kernel@armlinux.org.uk Signed-off-by: Michael Sit Wei Hong michael.wei.hong.sit@intel.com Link: https://lore.kernel.org/r/20230406024541.3556305-1-michael.wei.hong.sit@inte... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 4888536a31500..622b95bfb0b2b 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -1134,22 +1134,26 @@ static void stmmac_check_pcs_mode(struct stmmac_priv *priv) static int stmmac_init_phy(struct net_device *dev) { struct stmmac_priv *priv = netdev_priv(dev); + struct fwnode_handle *phy_fwnode; struct fwnode_handle *fwnode; - bool phy_needed; int ret;
+ if (!phylink_expects_phy(priv->phylink)) + return 0; + fwnode = of_fwnode_handle(priv->plat->phylink_node); if (!fwnode) fwnode = dev_fwnode(priv->device);
if (fwnode) - ret = phylink_fwnode_phy_connect(priv->phylink, fwnode, 0); + phy_fwnode = fwnode_get_phy_node(fwnode); + else + phy_fwnode = NULL;
- phy_needed = phylink_expects_phy(priv->phylink); /* Some DT bindings do not set-up the PHY handle. Let's try to * manually parse it */ - if (!fwnode || phy_needed || ret) { + if (!phy_fwnode || IS_ERR(phy_fwnode)) { int addr = priv->plat->phy_addr; struct phy_device *phydev;
@@ -1165,6 +1169,9 @@ static int stmmac_init_phy(struct net_device *dev) }
ret = phylink_connect_phy(priv->phylink, phydev); + } else { + fwnode_handle_put(phy_fwnode); + ret = phylink_fwnode_phy_connect(priv->phylink, fwnode, 0); }
if (!priv->plat->pmt) {
From: Lukas Wunner lukas@wunner.de
commit fbaa38214cd9e150764ccaa82e04ecf42cc1140c upstream.
The CDAT exposed in sysfs differs between little endian and big endian arches: On big endian, every 4 bytes are byte-swapped.
PCI Configuration Space is little endian (PCI r3.0 sec 6.1). Accessors such as pci_read_config_dword() implicitly swap bytes on big endian. That way, the macros in include/uapi/linux/pci_regs.h work regardless of the arch's endianness. For an example of implicit byte-swapping, see ppc4xx_pciex_read_config(), which calls in_le32(), which uses lwbrx (Load Word Byte-Reverse Indexed).
DOE Read/Write Data Mailbox Registers are unlike other registers in Configuration Space in that they contain or receive a 4 byte portion of an opaque byte stream (a "Data Object" per PCIe r6.0 sec 7.9.24.5f). They need to be copied to or from the request/response buffer verbatim. So amend pci_doe_send_req() and pci_doe_recv_resp() to undo the implicit byte-swapping.
The CXL_DOE_TABLE_ACCESS_* and PCI_DOE_DATA_OBJECT_DISC_* macros assume implicit byte-swapping. Byte-swap requests after constructing them with those macros and byte-swap responses before parsing them.
Change the request and response type to __le32 to avoid sparse warnings. Per a request from Jonathan, replace sizeof(u32) with sizeof(__le32) for consistency.
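As a hedged illustration of the rule (simplified, not the exact driver code; PCI_DOE_READ and PCI_DOE_WRITE are the mailbox register offsets defined in drivers/pci/doe.c), one payload dword round-trips verbatim like this:

#include <linux/pci.h>

static void doe_xfer_dword_sketch(struct pci_dev *pdev, int offset,
				  __le32 req, __le32 *rsp)
{
	u32 val;

	/* send: le32_to_cpu() cancels the accessor's implicit swap
	 * on big endian, so the wire sees the bytes unchanged */
	pci_write_config_dword(pdev, offset + PCI_DOE_WRITE,
			       le32_to_cpu(req));

	/* receive: the accessor returns a CPU-endian dword; storing
	 * it as __le32 restores the on-the-wire byte order */
	pci_read_config_dword(pdev, offset + PCI_DOE_READ, &val);
	*rsp = cpu_to_le32(val);
}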
Fixes: c97006046c79 ("cxl/port: Read CDAT table") Tested-by: Ira Weiny ira.weiny@intel.com Signed-off-by: Lukas Wunner lukas@wunner.de Reviewed-by: Dan Williams dan.j.williams@intel.com Cc: stable@vger.kernel.org # v6.0+ Reviewed-by: Jonathan Cameron Jonathan.Cameron@huawei.com Link: https://lore.kernel.org/r/3051114102f41d19df3debbee123129118fc5e6d.167854349... Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/cxl/core/pci.c | 26 +++++++++++++------------- drivers/pci/doe.c | 25 ++++++++++++++----------- include/linux/pci-doe.h | 8 ++++++-- 3 files changed, 33 insertions(+), 26 deletions(-)
--- a/drivers/cxl/core/pci.c +++ b/drivers/cxl/core/pci.c @@ -480,7 +480,7 @@ static struct pci_doe_mb *find_cdat_doe( return NULL; }
-#define CDAT_DOE_REQ(entry_handle) \ +#define CDAT_DOE_REQ(entry_handle) cpu_to_le32 \ (FIELD_PREP(CXL_DOE_TABLE_ACCESS_REQ_CODE, \ CXL_DOE_TABLE_ACCESS_REQ_CODE_READ) | \ FIELD_PREP(CXL_DOE_TABLE_ACCESS_TABLE_TYPE, \ @@ -493,8 +493,8 @@ static void cxl_doe_task_complete(struct }
struct cdat_doe_task { - u32 request_pl; - u32 response_pl[32]; + __le32 request_pl; + __le32 response_pl[32]; struct completion c; struct pci_doe_task task; }; @@ -528,10 +528,10 @@ static int cxl_cdat_get_length(struct de return rc; } wait_for_completion(&t.c); - if (t.task.rv < sizeof(u32)) + if (t.task.rv < sizeof(__le32)) return -EIO;
- *length = t.response_pl[1]; + *length = le32_to_cpu(t.response_pl[1]); dev_dbg(dev, "CDAT length %zu\n", *length);
return 0; @@ -542,13 +542,13 @@ static int cxl_cdat_read_table(struct de struct cxl_cdat *cdat) { size_t length = cdat->length; - u32 *data = cdat->table; + __le32 *data = cdat->table; int entry_handle = 0;
do { DECLARE_CDAT_DOE_TASK(CDAT_DOE_REQ(entry_handle), t); size_t entry_dw; - u32 *entry; + __le32 *entry; int rc;
rc = pci_doe_submit_task(cdat_doe, &t.task); @@ -558,21 +558,21 @@ static int cxl_cdat_read_table(struct de } wait_for_completion(&t.c); /* 1 DW header + 1 DW data min */ - if (t.task.rv < (2 * sizeof(u32))) + if (t.task.rv < (2 * sizeof(__le32))) return -EIO;
/* Get the CXL table access header entry handle */ entry_handle = FIELD_GET(CXL_DOE_TABLE_ACCESS_ENTRY_HANDLE, - t.response_pl[0]); + le32_to_cpu(t.response_pl[0])); entry = t.response_pl + 1; - entry_dw = t.task.rv / sizeof(u32); + entry_dw = t.task.rv / sizeof(__le32); /* Skip Header */ entry_dw -= 1; - entry_dw = min(length / sizeof(u32), entry_dw); + entry_dw = min(length / sizeof(__le32), entry_dw); /* Prevent length < 1 DW from causing a buffer overflow */ if (entry_dw) { - memcpy(data, entry, entry_dw * sizeof(u32)); - length -= entry_dw * sizeof(u32); + memcpy(data, entry, entry_dw * sizeof(__le32)); + length -= entry_dw * sizeof(__le32); data += entry_dw; } } while (entry_handle != CXL_DOE_TABLE_ACCESS_LAST_ENTRY); --- a/drivers/pci/doe.c +++ b/drivers/pci/doe.c @@ -128,7 +128,7 @@ static int pci_doe_send_req(struct pci_d return -EIO;
/* Length is 2 DW of header + length of payload in DW */ - length = 2 + task->request_pl_sz / sizeof(u32); + length = 2 + task->request_pl_sz / sizeof(__le32); if (length > PCI_DOE_MAX_LENGTH) return -EIO; if (length == PCI_DOE_MAX_LENGTH) @@ -141,9 +141,9 @@ static int pci_doe_send_req(struct pci_d pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH, length)); - for (i = 0; i < task->request_pl_sz / sizeof(u32); i++) + for (i = 0; i < task->request_pl_sz / sizeof(__le32); i++) pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, - task->request_pl[i]); + le32_to_cpu(task->request_pl[i]));
pci_doe_write_ctrl(doe_mb, PCI_DOE_CTRL_GO);
@@ -195,11 +195,11 @@ static int pci_doe_recv_resp(struct pci_
/* First 2 dwords have already been read */ length -= 2; - payload_length = min(length, task->response_pl_sz / sizeof(u32)); + payload_length = min(length, task->response_pl_sz / sizeof(__le32)); /* Read the rest of the response payload */ for (i = 0; i < payload_length; i++) { - pci_read_config_dword(pdev, offset + PCI_DOE_READ, - &task->response_pl[i]); + pci_read_config_dword(pdev, offset + PCI_DOE_READ, &val); + task->response_pl[i] = cpu_to_le32(val); /* Prior to the last ack, ensure Data Object Ready */ if (i == (payload_length - 1) && !pci_doe_data_obj_ready(doe_mb)) return -EIO; @@ -217,7 +217,7 @@ static int pci_doe_recv_resp(struct pci_ if (FIELD_GET(PCI_DOE_STATUS_ERROR, val)) return -EIO;
- return min(length, task->response_pl_sz / sizeof(u32)) * sizeof(u32); + return min(length, task->response_pl_sz / sizeof(__le32)) * sizeof(__le32); }
static void signal_task_complete(struct pci_doe_task *task, int rv) @@ -317,14 +317,16 @@ static int pci_doe_discovery(struct pci_ { u32 request_pl = FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX, *index); + __le32 request_pl_le = cpu_to_le32(request_pl); + __le32 response_pl_le; u32 response_pl; DECLARE_COMPLETION_ONSTACK(c); struct pci_doe_task task = { .prot.vid = PCI_VENDOR_ID_PCI_SIG, .prot.type = PCI_DOE_PROTOCOL_DISCOVERY, - .request_pl = &request_pl, + .request_pl = &request_pl_le, .request_pl_sz = sizeof(request_pl), - .response_pl = &response_pl, + .response_pl = &response_pl_le, .response_pl_sz = sizeof(response_pl), .complete = pci_doe_task_complete, .private = &c, @@ -340,6 +342,7 @@ static int pci_doe_discovery(struct pci_ if (task.rv != sizeof(response_pl)) return -EIO;
+ response_pl = le32_to_cpu(response_pl_le); *vid = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_VID, response_pl); *protocol = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_PROTOCOL, response_pl); @@ -533,8 +536,8 @@ int pci_doe_submit_task(struct pci_doe_m * DOE requests must be a whole number of DW and the response needs to * be big enough for at least 1 DW */ - if (task->request_pl_sz % sizeof(u32) || - task->response_pl_sz < sizeof(u32)) + if (task->request_pl_sz % sizeof(__le32) || + task->response_pl_sz < sizeof(__le32)) return -EINVAL;
if (test_bit(PCI_DOE_FLAG_DEAD, &doe_mb->flags)) --- a/include/linux/pci-doe.h +++ b/include/linux/pci-doe.h @@ -34,6 +34,10 @@ struct pci_doe_mb; * @work: Used internally by the mailbox * @doe_mb: Used internally by the mailbox * + * Payloads are treated as opaque byte streams which are transmitted verbatim, + * without byte-swapping. If payloads contain little-endian register values, + * the caller is responsible for conversion with cpu_to_le32() / le32_to_cpu(). + * * The payload sizes and rv are specified in bytes with the following * restrictions concerning the protocol. * @@ -45,9 +49,9 @@ struct pci_doe_mb; */ struct pci_doe_task { struct pci_doe_protocol prot; - u32 *request_pl; + __le32 *request_pl; size_t request_pl_sz; - u32 *response_pl; + __le32 *response_pl; size_t response_pl_sz; int rv; void (*complete)(struct pci_doe_task *task);
From: Lukas Wunner lukas@wunner.de
commit 34bafc747c54fb58c1908ec3116fa6137393e596 upstream.
cxl_cdat_get_length() only checks whether the DOE response size is sufficient for the Table Access response header (1 dword), but not the succeeding CDAT header (1 dword length plus other fields).
It thus returns whatever uninitialized memory happens to be on the stack if a truncated DOE response with only 1 dword was received. Fix it.
Fixes: c97006046c79 ("cxl/port: Read CDAT table") Reported-by: Ming Li ming4.li@intel.com Tested-by: Ira Weiny ira.weiny@intel.com Signed-off-by: Lukas Wunner lukas@wunner.de Reviewed-by: Ming Li ming4.li@intel.com Reviewed-by: Dan Williams dan.j.williams@intel.com Reviewed-by: Jonathan Cameron Jonathan.Cameron@huawei.com Cc: stable@vger.kernel.org # v6.0+ Reviewed-by: Kuppuswamy Sathyanarayanan sathyanarayanan.kuppuswamy@linux.intel.com Link: https://lore.kernel.org/r/000e69cd163461c8b1bc2cf4155b6e25402c29c7.167854349... Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/cxl/core/pci.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/cxl/core/pci.c +++ b/drivers/cxl/core/pci.c @@ -528,7 +528,7 @@ static int cxl_cdat_get_length(struct de return rc; } wait_for_completion(&t.c); - if (t.task.rv < sizeof(__le32)) + if (t.task.rv < 2 * sizeof(__le32)) return -EIO;
*length = le32_to_cpu(t.response_pl[1]);
From: Lukas Wunner lukas@wunner.de
commit b56faef2312057db20479b240eb71bd2e51fb51c upstream.
If truncated CDAT entries are received from a device, the concatenation of those entries constitutes a corrupt CDAT, yet is happily exposed to user space.
Avoid by verifying response lengths and erroring out if truncation is detected.
The last CDAT entry may still be truncated despite the checks introduced herein if the length in the CDAT header is too small. However, that is easily detectable by user space because it reaches EOF prematurely. A subsequent commit which rightsizes the CDAT response allocation closes that remaining loophole.
The two lines introduced here which exceed 80 chars are shortened to less than 80 chars by a subsequent commit which migrates to a synchronous DOE API and replaces "t.task.rv" by "rc".
The existing acpi_cdat_header and acpi_table_cdat struct definitions provided by ACPICA cannot be used because they do not employ __le16 or __le32 types. I believe that cannot be changed because those types are Linux-specific and ACPI is specified for little endian platforms only, hence doesn't care about endianness. So duplicate the structs.
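A worked example of the sizes the new checks enforce (a sketch of the arithmetic, not code from the patch; struct layouts as defined in the diff below):

/*
 * Each Table Access response is one 4-byte DOE header dword followed
 * by the payload, so the expected response sizes are:
 *
 *   entry_handle == 0 (first read, returns the CDAT header):
 *       t.task.rv == sizeof(__le32) + sizeof(struct cdat_header)
 *                 ==             4 + 16 == 20 bytes
 *
 *   entry_handle  > 0 (subsequent reads, return one CDAT entry):
 *       t.task.rv >= sizeof(__le32) + sizeof(struct cdat_entry_header)
 *       t.task.rv == sizeof(__le32) + le16_to_cpu(entry->length)
 *
 * Any other size indicates a truncated entry and yields -EIO.
 */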
Fixes: c97006046c79 ("cxl/port: Read CDAT table") Tested-by: Ira Weiny ira.weiny@intel.com Signed-off-by: Lukas Wunner lukas@wunner.de Reviewed-by: Dan Williams dan.j.williams@intel.com Reviewed-by: Jonathan Cameron Jonathan.Cameron@huawei.com Cc: stable@vger.kernel.org # v6.0+ Link: https://lore.kernel.org/r/bce3aebc0e8e18a1173425a7a865b232c3912963.167854349... Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/cxl/core/pci.c | 13 +++++++++---- drivers/cxl/cxlpci.h | 14 ++++++++++++++ 2 files changed, 23 insertions(+), 4 deletions(-)
--- a/drivers/cxl/core/pci.c +++ b/drivers/cxl/core/pci.c @@ -547,8 +547,8 @@ static int cxl_cdat_read_table(struct de
do { DECLARE_CDAT_DOE_TASK(CDAT_DOE_REQ(entry_handle), t); + struct cdat_entry_header *entry; size_t entry_dw; - __le32 *entry; int rc;
rc = pci_doe_submit_task(cdat_doe, &t.task); @@ -557,14 +557,19 @@ static int cxl_cdat_read_table(struct de return rc; } wait_for_completion(&t.c); - /* 1 DW header + 1 DW data min */ - if (t.task.rv < (2 * sizeof(__le32))) + + /* 1 DW Table Access Response Header + CDAT entry */ + entry = (struct cdat_entry_header *)(t.response_pl + 1); + if ((entry_handle == 0 && + t.task.rv != sizeof(__le32) + sizeof(struct cdat_header)) || + (entry_handle > 0 && + (t.task.rv < sizeof(__le32) + sizeof(*entry) || + t.task.rv != sizeof(__le32) + le16_to_cpu(entry->length)))) return -EIO;
/* Get the CXL table access header entry handle */ entry_handle = FIELD_GET(CXL_DOE_TABLE_ACCESS_ENTRY_HANDLE, le32_to_cpu(t.response_pl[0])); - entry = t.response_pl + 1; entry_dw = t.task.rv / sizeof(__le32); /* Skip Header */ entry_dw -= 1; --- a/drivers/cxl/cxlpci.h +++ b/drivers/cxl/cxlpci.h @@ -62,6 +62,20 @@ enum cxl_regloc_type { CXL_REGLOC_RBI_TYPES };
+struct cdat_header { + __le32 length; + u8 revision; + u8 checksum; + u8 reserved[6]; + __le32 sequence; +} __packed; + +struct cdat_entry_header { + u8 type; + u8 reserved; + __le16 length; +} __packed; + int devm_cxl_port_enumerate_dports(struct cxl_port *port); struct cxl_dev_state; int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm);
From: Lukas Wunner lukas@wunner.de
commit 4fe2c13d59d849be3b45371e3913ec5dc77fc0fb upstream.
If the length in the CDAT header is larger than the concatenation of the header and all table entries, then the CDAT exposed to user space contains trailing null bytes.
Not every consumer may be able to handle that. Per Postel's robustness principle, "be liberal in what you accept" and silently reduce the cached length to avoid exposing those null bytes.
Fixes: c97006046c79 ("cxl/port: Read CDAT table") Tested-by: Ira Weiny ira.weiny@intel.com Signed-off-by: Lukas Wunner lukas@wunner.de Reviewed-by: Dan Williams dan.j.williams@intel.com Reviewed-by: Jonathan Cameron Jonathan.Cameron@huawei.com Cc: stable@vger.kernel.org # v6.0+ Link: https://lore.kernel.org/r/6d98b3c7da5343172bd3ccabfabbc1f31c079d74.167854349... Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/cxl/core/pci.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/cxl/core/pci.c +++ b/drivers/cxl/core/pci.c @@ -582,6 +582,9 @@ static int cxl_cdat_read_table(struct de } } while (entry_handle != CXL_DOE_TABLE_ACCESS_LAST_ENTRY);
+ /* Length in CDAT header may exceed concatenation of CDAT entries */ + cdat->length -= length; + return 0; }
From: Lukas Wunner lukas@wunner.de
commit 92dc899c3b4927f3cfa23f55bf759171234b5802 upstream.
Gregory Price reports a WARN splat with CONFIG_DEBUG_OBJECTS=y upon CXL probing because pci_doe_submit_task() invokes INIT_WORK() instead of INIT_WORK_ONSTACK() for a work_struct that was allocated on the stack.
All callers of pci_doe_submit_task() allocate the work_struct on the stack, so replace INIT_WORK() with INIT_WORK_ONSTACK() as a backportable short-term fix.
The long-term fix implemented by a subsequent commit is to move to a synchronous API which allocates the work_struct internally in the DOE library.
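For context, the canonical on-stack work pattern looks like this hedged sketch (simplified; the DOE code itself releases the object from its completion path, and the next patch in this series adds that matching destroy_work_on_stack() call):

#include <linux/workqueue.h>

static void doe_task_fn_sketch(struct work_struct *work)
{
	/* run the task; the real code signals a completion here */
}

static void doe_run_task_sketch(struct workqueue_struct *wq)
{
	struct work_struct work;

	/* INIT_WORK_ONSTACK() registers the object as stack-resident;
	 * plain INIT_WORK() on a stack object trips the debugobjects
	 * warning quoted above */
	INIT_WORK_ONSTACK(&work, doe_task_fn_sketch);
	queue_work(wq, &work);
	flush_work(&work);		/* must finish before we return */
	destroy_work_on_stack(&work);	/* avoids the debugobjects leak */
}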
Stacktrace for posterity:
WARNING: CPU: 0 PID: 23 at lib/debugobjects.c:545 __debug_object_init.cold+0x18/0x183
CPU: 0 PID: 23 Comm: kworker/u2:1 Not tainted 6.1.0-0.rc1.20221019gitaae703b02f92.17.fc38.x86_64 #1
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
Call Trace:
 pci_doe_submit_task+0x5d/0xd0
 pci_doe_discovery+0xb4/0x100
 pcim_doe_create_mb+0x219/0x290
 cxl_pci_probe+0x192/0x430
 local_pci_probe+0x41/0x80
 pci_device_probe+0xb3/0x220
 really_probe+0xde/0x380
 __driver_probe_device+0x78/0x170
 driver_probe_device+0x1f/0x90
 __driver_attach_async_helper+0x5c/0xe0
 async_run_entry_fn+0x30/0x130
 process_one_work+0x294/0x5b0
Fixes: 9d24322e887b ("PCI/DOE: Add DOE mailbox support functions") Link: https://lore.kernel.org/linux-cxl/Y1bOniJliOFszvIK@memverge.com/ Reported-by: Gregory Price gregory.price@memverge.com Tested-by: Ira Weiny ira.weiny@intel.com Tested-by: Gregory Price gregory.price@memverge.com Signed-off-by: Lukas Wunner lukas@wunner.de Reviewed-by: Ira Weiny ira.weiny@intel.com Reviewed-by: Dan Williams dan.j.williams@intel.com Reviewed-by: Gregory Price gregory.price@memverge.com Cc: stable@vger.kernel.org # v6.0+ Reviewed-by: Jonathan Cameron Jonathan.Cameron@huawei.com Acked-by: Bjorn Helgaas bhelgaas@google.com Link: https://lore.kernel.org/r/67a9117f463ecdb38a2dbca6a20391ce2f1e7a06.167854349... Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/pci/doe.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/pci/doe.c +++ b/drivers/pci/doe.c @@ -523,6 +523,8 @@ EXPORT_SYMBOL_GPL(pci_doe_supports_prot) * task->complete will be called when the state machine is done processing this * task. * + * @task must be allocated on the stack. + * * Excess data will be discarded. * * RETURNS: 0 when task has been successfully queued, -ERRNO on error @@ -544,7 +546,7 @@ int pci_doe_submit_task(struct pci_doe_m return -EIO;
task->doe_mb = doe_mb; - INIT_WORK(&task->work, doe_statemachine_work); + INIT_WORK_ONSTACK(&task->work, doe_statemachine_work); queue_work(doe_mb->work_queue, &task->work); return 0; }
From: Lukas Wunner lukas@wunner.de
commit abf04be0e7071f2bcd39bf97ba407e7d4439785e upstream.
After a pci_doe_task completes, its work_struct needs to be destroyed to avoid a memory leak with CONFIG_DEBUG_OBJECTS=y.
Fixes: 9d24322e887b ("PCI/DOE: Add DOE mailbox support functions") Tested-by: Ira Weiny ira.weiny@intel.com Signed-off-by: Lukas Wunner lukas@wunner.de Reviewed-by: Ira Weiny ira.weiny@intel.com Reviewed-by: Davidlohr Bueso dave@stgolabs.net Reviewed-by: Dan Williams dan.j.williams@intel.com Reviewed-by: Jonathan Cameron Jonathan.Cameron@huawei.com Cc: stable@vger.kernel.org # v6.0+ Acked-by: Bjorn Helgaas bhelgaas@google.com Link: https://lore.kernel.org/r/775768b4912531c3b887d405fc51a50e465e1bf9.167854349... Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/pci/doe.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/pci/doe.c +++ b/drivers/pci/doe.c @@ -224,6 +224,7 @@ static void signal_task_complete(struct { task->rv = rv; task->complete(task); + destroy_work_on_stack(&task->work); }
static void signal_task_abort(struct pci_doe_task *task, int rv)
From: Mathias Nyman mathias.nyman@linux.intel.com
commit 8e77d3d59d7b5da13deda1d832c51b8bbdbe2037 upstream.
This reverts commit 4c2604a9a6899bab195edbee35fc8d64ce1444aa.
Async probe caused a regression in a setup with both Renesas and Intel xHC controllers. Devices connected to the Renesas controller disconnected shortly after boot. With async probe, the bus numbers got interleaved.
xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 2
xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 3
xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 4
The reason why this commit causes the regression is still unknown; revert it while the issue is being debugged.
Fixes: 4c2604a9a689 ("usb: xhci-pci: Set PROBE_PREFER_ASYNCHRONOUS") Cc: stable stable@kernel.org Link: https://lore.kernel.org/linux-usb/20230307132120.5897c5af@deangelis.fenrir.o... Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20230330143056.1390020-3-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci-pci.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index fb988e4ea924..6db07ca419c3 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -771,12 +771,11 @@ static struct pci_driver xhci_pci_driver = { /* suspend and resume implemented later */
.shutdown = usb_hcd_pci_shutdown, - .driver = { #ifdef CONFIG_PM - .pm = &usb_hcd_pci_pm_ops, -#endif - .probe_type = PROBE_PREFER_ASYNCHRONOUS, + .driver = { + .pm = &usb_hcd_pci_pm_ops }, +#endif };
static int __init xhci_pci_init(void)
From: Wayne Chang waynec@nvidia.com
commit 4c7f9d2e413dc06a157c4e5dccde84aaf4655eb3 upstream.
When we set the dual-role port to Host mode, we observed the following splat:

[ 167.057718] BUG: sleeping function called from invalid context at include/linux/sched/mm.h:229
[ 167.057872] Workqueue: events tegra_xusb_usb_phy_work
[ 167.057954] Call trace:
[ 167.057962]  dump_backtrace+0x0/0x210
[ 167.057996]  show_stack+0x30/0x50
[ 167.058020]  dump_stack_lvl+0x64/0x84
[ 167.058065]  dump_stack+0x14/0x34
[ 167.058100]  __might_resched+0x144/0x180
[ 167.058140]  __might_sleep+0x64/0xd0
[ 167.058171]  slab_pre_alloc_hook.constprop.0+0xa8/0x110
[ 167.058202]  __kmalloc_track_caller+0x74/0x2b0
[ 167.058233]  kvasprintf+0xa4/0x190
[ 167.058261]  kasprintf+0x58/0x90
[ 167.058285]  tegra_xusb_find_port_node.isra.0+0x58/0xd0
[ 167.058334]  tegra_xusb_find_port+0x38/0xa0
[ 167.058380]  tegra_xusb_padctl_get_usb3_companion+0x38/0xd0
[ 167.058430]  tegra_xhci_id_notify+0x8c/0x1e0
[ 167.058473]  notifier_call_chain+0x88/0x100
[ 167.058506]  atomic_notifier_call_chain+0x44/0x70
[ 167.058537]  tegra_xusb_usb_phy_work+0x60/0xd0
[ 167.058581]  process_one_work+0x1dc/0x4c0
[ 167.058618]  worker_thread+0x54/0x410
[ 167.058650]  kthread+0x188/0x1b0
[ 167.058672]  ret_from_fork+0x10/0x20
The function tegra_xusb_padctl_get_usb3_companion eventually calls tegra_xusb_find_port, and this in turn calls kasprintf, which might sleep and so cannot be called from an atomic context.
Fix this by moving the call to tegra_xusb_padctl_get_usb3_companion to the tegra_xhci_id_work function where it is really needed.
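The shape of the fix, as a generic hedged sketch (hypothetical names; the real driver keeps more state and uses its own work item): the atomic notifier is reduced to bookkeeping plus a schedule_work(), and anything that can sleep runs from the worker.

#include <linux/notifier.h>
#include <linux/usb/phy.h>
#include <linux/workqueue.h>

struct id_ctx {
	struct notifier_block nb;
	struct work_struct work;
	bool host_mode;
	int usb3_port;
};

static int lookup_usb3_companion_sketch(struct id_ctx *ctx);	/* hypothetical */

static int id_notify_sketch(struct notifier_block *nb, unsigned long ev,
			    void *data)
{
	struct id_ctx *ctx = container_of(nb, struct id_ctx, nb);

	ctx->host_mode = (ev == USB_EVENT_ID);	/* atomic-safe bookkeeping */
	schedule_work(&ctx->work);		/* defer anything that sleeps */
	return NOTIFY_OK;
}

static void id_work_sketch(struct work_struct *work)
{
	struct id_ctx *ctx = container_of(work, struct id_ctx, work);

	/* process context: kasprintf() and friends may sleep safely here */
	ctx->usb3_port = lookup_usb3_companion_sketch(ctx);
}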
Fixes: f836e7843036 ("usb: xhci-tegra: Add OTG support") Cc: stable@vger.kernel.org Signed-off-by: Wayne Chang waynec@nvidia.com Signed-off-by: Haotien Hsu haotienh@nvidia.com Link: https://lore.kernel.org/r/20230327095548.1599470-1-haotienh@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci-tegra.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/usb/host/xhci-tegra.c +++ b/drivers/usb/host/xhci-tegra.c @@ -1225,6 +1225,9 @@ static void tegra_xhci_id_work(struct wo
mutex_unlock(&tegra->lock);
+ tegra->otg_usb3_port = tegra_xusb_padctl_get_usb3_companion(tegra->padctl, + tegra->otg_usb2_port); + if (tegra->host_mode) { /* switch to host mode */ if (tegra->otg_usb3_port >= 0) { @@ -1339,9 +1342,6 @@ static int tegra_xhci_id_notify(struct n }
tegra->otg_usb2_port = tegra_xusb_get_usb2_port(tegra, usbphy); - tegra->otg_usb3_port = tegra_xusb_padctl_get_usb3_companion( - tegra->padctl, - tegra->otg_usb2_port);
tegra->host_mode = (usbphy->last_event == USB_EVENT_ID) ? true : false;
From: Mathias Nyman mathias.nyman@linux.intel.com
commit f6caea4855553a8b99ba3ec23ecdb5ed8262f26c upstream.
The command allocated to set exit latency LPM values needs to be freed in case the command is never queued. This would be the case if there is no change in exit latency values, or if the device is missing.
Reported-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr Link: https://lore.kernel.org/linux-usb/24263902-c9b3-ce29-237b-1c3d6918f4fe@alu.u... Tested-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr Fixes: 5c2a380a5aa8 ("xhci: Allocate separate command structures for each LPM command") Cc: Stable@vger.kernel.org Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20230330143056.1390020-4-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -4406,6 +4406,7 @@ static int __maybe_unused xhci_change_ma
if (!virt_dev || max_exit_latency == virt_dev->current_mel) { spin_unlock_irqrestore(&xhci->lock, flags); + xhci_free_command(xhci, command); return 0; }
From: D Scott Phillips scott@os.amperecomputing.com
commit ecaa4902439298f6b0e29f47424a86b310a9ff4f upstream.
Previously the quirk was skipped when no iommu was present. The same rationale for skipping the quirk also applies in the iommu.passthrough=1 case.
Skip applying the XHCI_ZERO_64B_REGS quirk if the device's iommu domain is passthrough.
Fixes: 12de0a35c996 ("xhci: Add quirk to zero 64bit registers on Renesas PCIe controllers") Cc: stable stable@kernel.org Signed-off-by: D Scott Phillips scott@os.amperecomputing.com Acked-by: Marc Zyngier maz@kernel.org Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20230330143056.1390020-2-mathias.nyman@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -9,6 +9,7 @@ */
#include <linux/pci.h> +#include <linux/iommu.h> #include <linux/iopoll.h> #include <linux/irq.h> #include <linux/log2.h> @@ -228,6 +229,7 @@ int xhci_reset(struct xhci_hcd *xhci, u6 static void xhci_zero_64b_regs(struct xhci_hcd *xhci) { struct device *dev = xhci_to_hcd(xhci)->self.sysdev; + struct iommu_domain *domain; int err, i; u64 val; u32 intrs; @@ -246,7 +248,9 @@ static void xhci_zero_64b_regs(struct xh * an iommu. Doing anything when there is no iommu is definitely * unsafe... */ - if (!(xhci->quirks & XHCI_ZERO_64B_REGS) || !device_iommu_mapped(dev)) + domain = iommu_get_domain_for_dev(dev); + if (!(xhci->quirks & XHCI_ZERO_64B_REGS) || !domain || + domain->type == IOMMU_DOMAIN_IDENTITY) return;
xhci_info(xhci, "Zeroing 64bit base registers, expecting fault\n");
From: Pawel Laszczak pawell@cadence.com
commit 1edf48991a783d00a3a18dc0d27c88139e4030a2 upstream.
The patch 5bc38d33a5a1: "usb: cdnsp: Fixes issue with redundant Status Stage" leads to the following Smatch static checker warning:
drivers/usb/cdns3/cdnsp-ep0.c:470 cdnsp_setup_analyze() error: uninitialized symbol 'len'.
cc: stable@vger.kernel.org Fixes: 5bc38d33a5a1 ("usb: cdnsp: Fixes issue with redundant Status Stage") Signed-off-by: Pawel Laszczak pawell@cadence.com Link: https://lore.kernel.org/r/20230331090600.454674-1-pawell@cadence.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/cdns3/cdnsp-ep0.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/drivers/usb/cdns3/cdnsp-ep0.c +++ b/drivers/usb/cdns3/cdnsp-ep0.c @@ -414,7 +414,7 @@ static int cdnsp_ep0_std_request(struct void cdnsp_setup_analyze(struct cdnsp_device *pdev) { struct usb_ctrlrequest *ctrl = &pdev->setup; - int ret = 0; + int ret = -EINVAL; u16 len;
trace_cdnsp_ctrl_req(ctrl); @@ -424,7 +424,6 @@ void cdnsp_setup_analyze(struct cdnsp_de
if (pdev->gadget.state == USB_STATE_NOTATTACHED) { dev_err(pdev->dev, "ERR: Setup detected in unattached state\n"); - ret = -EINVAL; goto out; }
From: Heikki Krogerus heikki.krogerus@linux.intel.com
commit ec799c8a92e0be91e0940cc739a27f483242df65 upstream.
This patch adds the necessary PCI ID for Intel Meteor Lake-S devices.
Signed-off-by: Heikki Krogerus heikki.krogerus@linux.intel.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230330150224.89316-1-heikki.krogerus@linux.intel... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/dwc3/dwc3-pci.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/drivers/usb/dwc3/dwc3-pci.c +++ b/drivers/usb/dwc3/dwc3-pci.c @@ -49,6 +49,7 @@ #define PCI_DEVICE_ID_INTEL_RPLS 0x7a61 #define PCI_DEVICE_ID_INTEL_MTLM 0x7eb1 #define PCI_DEVICE_ID_INTEL_MTLP 0x7ec1 +#define PCI_DEVICE_ID_INTEL_MTLS 0x7f6f #define PCI_DEVICE_ID_INTEL_MTL 0x7e7e #define PCI_DEVICE_ID_INTEL_TGL 0x9a15 #define PCI_DEVICE_ID_AMD_MR 0x163a @@ -474,6 +475,9 @@ static const struct pci_device_id dwc3_p { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTLP), (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTLS), + (kernel_ulong_t) &dwc3_pci_intel_swnode, }, + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL), (kernel_ulong_t) &dwc3_pci_intel_swnode, },
From: Kees Jan Koster kjkoster@kjkoster.org
commit 71f8afa2b66e356f435b6141b4a9ccf953e18356 upstream.
The Silicon Labs IFS-USB-DATACABLE is used in conjunction with, for example, the Quint UPSes. It is used to enable Modbus communication with the UPS to query configuration, power and battery status.
Signed-off-by: Kees Jan Koster kjkoster@kjkoster.org Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/serial/cp210x.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/usb/serial/cp210x.c +++ b/drivers/usb/serial/cp210x.c @@ -120,6 +120,7 @@ static const struct usb_device_id id_tab { USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */ { USB_DEVICE(0x10C4, 0x8281) }, /* Nanotec Plug & Drive */ { USB_DEVICE(0x10C4, 0x8293) }, /* Telegesis ETRX2USB */ + { USB_DEVICE(0x10C4, 0x82AA) }, /* Silicon Labs IFS-USB-DATACABLE used with Quint UPS */ { USB_DEVICE(0x10C4, 0x82EF) }, /* CESINEL FALCO 6105 AC Power Supply */ { USB_DEVICE(0x10C4, 0x82F1) }, /* CESINEL MEDCAL EFD Earth Fault Detector */ { USB_DEVICE(0x10C4, 0x82F2) }, /* CESINEL MEDCAL ST Network Analyzer */
From: RD Babiera rdbabiera@google.com
commit eddebe39602efe631b83ff8d03f26eba12cfd760 upstream.
While determining the initial pin assignment to be sent in the configure message, using the DP_PIN_ASSIGN_DP_ONLY_MASK mask causes the DFP_U to send both Pin Assignment C and E when both are supported by the DFP_U and UFP_U. The spec (Table 5-7 DFP_U Pin Assignment Selection Mandates, VESA DisplayPort Alt Mode Standard v2.0) indicates that the DFP_U never selects Pin Assignment E when Pin Assignment C is offered.
Update the DP_PIN_ASSIGN_DP_ONLY_MASK conditional to initially select only Pin Assignment C if it is available.
Fixes: 0e3bb7d6894d ("usb: typec: Add driver for DisplayPort alternate mode") Cc: stable@vger.kernel.org Signed-off-by: RD Babiera rdbabiera@google.com Reviewed-by: Heikki Krogerus heikki.krogerus@linux.intel.com Link: https://lore.kernel.org/r/20230329215159.2046932-1-rdbabiera@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/typec/altmodes/displayport.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/usb/typec/altmodes/displayport.c +++ b/drivers/usb/typec/altmodes/displayport.c @@ -112,8 +112,12 @@ static int dp_altmode_configure(struct d if (dp->data.status & DP_STATUS_PREFER_MULTI_FUNC && pin_assign & DP_PIN_ASSIGN_MULTI_FUNC_MASK) pin_assign &= DP_PIN_ASSIGN_MULTI_FUNC_MASK; - else if (pin_assign & DP_PIN_ASSIGN_DP_ONLY_MASK) + else if (pin_assign & DP_PIN_ASSIGN_DP_ONLY_MASK) { pin_assign &= DP_PIN_ASSIGN_DP_ONLY_MASK; + /* Default to pin assign C if available */ + if (pin_assign & BIT(DP_PIN_ASSIGN_C)) + pin_assign = BIT(DP_PIN_ASSIGN_C); + }
if (!pin_assign) return -EINVAL;
From: Enrico Sau enrico.sau@gmail.com
commit 773e8e7d07b753474b2ccd605ff092faaa9e65b9 upstream.
Add the following Telit FE990 compositions:
0x1080: tty, adb, rmnet, tty, tty, tty, tty
0x1081: tty, adb, mbim, tty, tty, tty, tty
0x1082: rndis, tty, adb, tty, tty, tty, tty
0x1083: tty, adb, ecm, tty, tty, tty, tty
Signed-off-by: Enrico Sau enrico.sau@gmail.com Link: https://lore.kernel.org/r/20230314090059.77876-1-enrico.sau@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/serial/option.c | 8 ++++++++ 1 file changed, 8 insertions(+)
--- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -1300,6 +1300,14 @@ static const struct usb_device_id option .driver_info = NCTRL(0) | RSVD(1) }, { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990 (PCIe) */ .driver_info = RSVD(0) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990 (rmnet) */ + .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990 (MBIM) */ + .driver_info = NCTRL(0) | RSVD(1) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990 (RNDIS) */ + .driver_info = NCTRL(2) | RSVD(3) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990 (ECM) */ + .driver_info = NCTRL(0) | RSVD(1) }, { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
From: Bjørn Mork bjorn@mork.no
commit 7708a3858e69db91a8b69487994f33b96d20192a upstream.
This modem supports several modes with a class network function and a number of serial functions, all using class/subclass/protocol ff/00/00.
The device ID is the same in all modes.
RNDIS mode
----------
T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 2 Spd=480 MxCh= 0
D: Ver= 2.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
P: Vendor=2c7c ProdID=0900 Rev= 4.04
S: Manufacturer=Quectel
S: Product=RM500U-CN
S: SerialNumber=0123456789ABCDEF
C:* #Ifs= 7 Cfg#= 1 Atr=c0 MxPwr=500mA
A: FirstIf#= 0 IfCount= 2 Cls=e0(wlcon) Sub=01 Prot=03
I:* If#= 0 Alt= 0 #EPs= 1 Cls=e0(wlcon) Sub=01 Prot=03 Driver=rndis_host
E: Ad=82(I) Atr=03(Int.) MxPS= 8 Ivl=32ms
I:* If#= 1 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=rndis_host
E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=83(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 3 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 4 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=85(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=05(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 6 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=87(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=06(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
ECM mode
--------
T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 2 Spd=480 MxCh= 0
D: Ver= 2.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
P: Vendor=2c7c ProdID=0900 Rev= 4.04
S: Manufacturer=Quectel
S: Product=RM500U-CN
S: SerialNumber=0123456789ABCDEF
C:* #Ifs= 7 Cfg#= 1 Atr=c0 MxPwr=500mA
A: FirstIf#= 0 IfCount= 2 Cls=02(comm.) Sub=06 Prot=00
I:* If#= 0 Alt= 0 #EPs= 1 Cls=02(comm.) Sub=06 Prot=00 Driver=cdc_ether
E: Ad=82(I) Atr=03(Int.) MxPS= 16 Ivl=32ms
I: If#= 1 Alt= 0 #EPs= 0 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_ether
I:* If#= 1 Alt= 1 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_ether
E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=83(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 3 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 4 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=85(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=05(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 6 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=87(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=06(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
NCM mode
--------
T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 5 Spd=480 MxCh= 0
D: Ver= 2.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
P: Vendor=2c7c ProdID=0900 Rev= 4.04
S: Manufacturer=Quectel
S: Product=RM500U-CN
S: SerialNumber=0123456789ABCDEF
C:* #Ifs= 7 Cfg#= 1 Atr=c0 MxPwr=500mA
A: FirstIf#= 0 IfCount= 2 Cls=02(comm.) Sub=0d Prot=00
I:* If#= 0 Alt= 0 #EPs= 1 Cls=02(comm.) Sub=0d Prot=00 Driver=cdc_ncm
E: Ad=82(I) Atr=03(Int.) MxPS= 16 Ivl=32ms
I: If#= 1 Alt= 0 #EPs= 0 Cls=0a(data ) Sub=00 Prot=01 Driver=cdc_ncm
I:* If#= 1 Alt= 1 #EPs= 2 Cls=0a(data ) Sub=00 Prot=01 Driver=cdc_ncm
E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=83(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 3 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 4 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=85(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=05(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 6 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=87(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=06(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
Reported-by: Andrew Green askgreen@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Bjørn Mork bjorn@mork.no Signed-off-by: Johan Hovold johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/serial/option.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -1198,6 +1198,8 @@ static const struct usb_device_id option { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0xff, 0x30) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0x40) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0900, 0xff, 0, 0), /* RM500U-CN */ + .driver_info = ZLP }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200U, 0xff, 0, 0) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
From: Ian Ray ian.ray@ge.com
commit 6327a930ab7bfa1ab33bcdffd5f5f4b1e7131504 upstream.
Correct the "sub_lsb" shift for the ltc2497 and drop the sub_lsb element which is now constant.
An earlier version of the code shifted by 14, but this was a consequence of reading three bytes into a __be32 buffer and using be32_to_cpu(), so eight extra bits needed to be skipped. Now we use get_unaligned_be24() and thus the additional skip is wrong.
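A back-of-the-envelope check of the shift, treating the conversion word as (resolution + 2) MSB-aligned bits in the receive buffer (an assumption derived from the driver's arithmetic, not from the datasheet):

	sub_lsb = container_bits - (resolution + 2)

	32 - (16 + 2) = 14	/* old: 3 bytes read into a __be32, 8 pad bits */
	24 - (16 + 2) =  6	/* now: get_unaligned_be24() on the same data */
	32 - (24 + 2) =  6	/* LTC2499: 4-byte read via be32_to_cpu() */

Both remaining branches shift by 6, which is why sub_lsb can be dropped as a constant.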
Fixes: 2187cfeb3626 ("drivers: iio: adc: ltc2497: LTC2499 support") Signed-off-by: Ian Ray ian.ray@ge.com Link: https://lore.kernel.org/r/20230127125714.44608-1-ian.ray@ge.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/ltc2497.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
--- a/drivers/iio/adc/ltc2497.c +++ b/drivers/iio/adc/ltc2497.c @@ -28,7 +28,6 @@ struct ltc2497_driverdata { struct ltc2497core_driverdata common_ddata; struct i2c_client *client; u32 recv_size; - u32 sub_lsb; /* * DMA (thus cache coherency maintenance) may require the * transfer buffers to live in their own cache lines. @@ -65,10 +64,10 @@ static int ltc2497_result_and_measure(st * equivalent to a sign extension. */ if (st->recv_size == 3) { - *val = (get_unaligned_be24(st->data.d8) >> st->sub_lsb) + *val = (get_unaligned_be24(st->data.d8) >> 6) - BIT(ddata->chip_info->resolution + 1); } else { - *val = (be32_to_cpu(st->data.d32) >> st->sub_lsb) + *val = (be32_to_cpu(st->data.d32) >> 6) - BIT(ddata->chip_info->resolution + 1); }
@@ -122,7 +121,6 @@ static int ltc2497_probe(struct i2c_clie st->common_ddata.chip_info = chip_info;
resolution = chip_info->resolution; - st->sub_lsb = 31 - (resolution + 1); st->recv_size = BITS_TO_BYTES(resolution) + 1;
return ltc2497core_probe(dev, indio_dev);
From: Arnd Bergmann arnd@arndb.de
commit d9b540ee461cca7edca0dd2c2a42625c6b9ffb8f upstream.
In rare randconfig builds, the missing CRC32 helper causes a link error:
ld.lld: error: undefined symbol: crc32_le
referenced by usercopy_64.c
vmlinux.o:(adis16480_trigger_handler)
Fixes: 941f130881fa ("iio: adis16480: support burst read function") Signed-off-by: Arnd Bergmann arnd@arndb.de Reviewed-by: Nuno Sá nuno.sa@analog.com Link: https://lore.kernel.org/r/20230131094616.130238-1-arnd@kernel.org Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/imu/Kconfig | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/iio/imu/Kconfig +++ b/drivers/iio/imu/Kconfig @@ -47,6 +47,7 @@ config ADIS16480 depends on SPI select IIO_ADIS_LIB select IIO_ADIS_LIB_BUFFER if IIO_BUFFER + select CRC32 help Say yes here to build support for Analog Devices ADIS16375, ADIS16480, ADIS16485, ADIS16488 inertial sensors.
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
commit 701c875aded880013aacac608832995c4b052257 upstream.
The node name can contain an address part which is unused by the driver. Moreover, this string is propagated into the userspace label and sysfs filenames, *breaking ABI*.
Cut the address part out before assigning the channel name.
Fixes: 4f47a236a23d ("iio: adc: qcom-spmi-adc5: convert to device properties") Reported-by: Marijn Suijten marijn.suijten@somainline.org Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Reviewed-by: Marijn Suijten marijn.suijten@somainline.org Link: https://lore.kernel.org/r/20230118100623.42255-1-andriy.shevchenko@linux.int... Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/qcom-spmi-adc5.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
--- a/drivers/iio/adc/qcom-spmi-adc5.c +++ b/drivers/iio/adc/qcom-spmi-adc5.c @@ -626,12 +626,20 @@ static int adc5_get_fw_channel_data(stru struct fwnode_handle *fwnode, const struct adc5_data *data) { - const char *name = fwnode_get_name(fwnode), *channel_name; + const char *channel_name; + char *name; u32 chan, value, varr[2]; u32 sid = 0; int ret; struct device *dev = adc->dev;
+ name = devm_kasprintf(dev, GFP_KERNEL, "%pfwP", fwnode); + if (!name) + return -ENOMEM; + + /* Cut the address part */ + name[strchrnul(name, '@') - name] = '\0'; + ret = fwnode_property_read_u32(fwnode, "reg", &chan); if (ret) { dev_err(dev, "invalid channel number %s\n", name);
From: Lars-Peter Clausen lars@metafoo.de
commit 363c7dc72f79edd55bf1c4380e0fbf7f1bbc2c86 upstream.
The ads7950 uses a mutex as well as SPI transfers in its GPIO callbacks. This means these callbacks can sleep and the `can_sleep` flag should be set.
Having the flag set will make sure that warnings are generated when calling any of the callbacks from a potentially non-sleeping context.
Fixes: c97dce792dc8 ("iio: adc: ti-ads7950: add GPIO support") Signed-off-by: Lars-Peter Clausen lars@metafoo.de Acked-by: David Lechner david@lechnology.com Link: https://lore.kernel.org/r/20230312210933.2275376-1-lars@metafoo.de Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/ti-ads7950.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/iio/adc/ti-ads7950.c +++ b/drivers/iio/adc/ti-ads7950.c @@ -634,6 +634,7 @@ static int ti_ads7950_probe(struct spi_d st->chip.label = dev_name(&st->spi->dev); st->chip.parent = &st->spi->dev; st->chip.owner = THIS_MODULE; + st->chip.can_sleep = true; st->chip.base = -1; st->chip.ngpio = TI_ADS7950_NUM_GPIOS; st->chip.get_direction = ti_ads7950_get_direction;
From: William Breathitt Gray william.gray@linaro.org
commit c3701185ee1973845db088d8b0fc443397ab0eb2 upstream.
The CIO-DAC series of devices only supports DAC values up to 12-bit rather than 16-bit. Trying to write a 16-bit value results in only the lower 12 bits affecting the DAC output which is not what the user expects. Instead, adjust the DAC write value check to reject values larger than 12-bit so that they fail explicitly as invalid for the user.
Fixes: 3b8df5fd526e ("iio: Add IIO support for the Measurement Computing CIO-DAC family") Cc: stable@vger.kernel.org Signed-off-by: William Breathitt Gray william.gray@linaro.org Link: https://lore.kernel.org/r/20230311002248.8548-1-william.gray@linaro.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/dac/cio-dac.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/iio/dac/cio-dac.c +++ b/drivers/iio/dac/cio-dac.c @@ -66,8 +66,8 @@ static int cio_dac_write_raw(struct iio_ if (mask != IIO_CHAN_INFO_RAW) return -EINVAL;
- /* DAC can only accept up to a 16-bit value */ - if ((unsigned int)val > 65535) + /* DAC can only accept up to a 12-bit value */ + if ((unsigned int)val > 4095) return -EINVAL;
priv->chan_out_states[chan->channel] = val;
From: Nuno Sá nuno.sa@analog.com
commit 7b3825e9487d77e83bf1e27b10a74cd729b8f972 upstream.
Even though we are passing 'ret' as the stop condition for read_poll_timeout(), that return code is still being ignored. The reason is that the poll will stop if the passed condition is true, which will happen if the passed op() returns an error. However, read_poll_timeout() returns 0 if the *complete* condition evaluates to true. Therefore, the error code returned by op() will be ignored.
To fix this we need to check for both error codes, as sketched below:
* The one returned by read_poll_timeout(), which is either 0 or -ETIMEDOUT.
* The one returned by the passed op().
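A minimal sketch of the resulting pattern; op(), done(), dev and the timing parameters are placeholders rather than the driver's real names:

	int ret, ret2;
	u32 val;

	/* ret2 captures op()'s return value; the poll also stops early
	 * when op() itself fails */
	ret = read_poll_timeout(op, ret2, ret2 || done(val),
				sleep_us, timeout_us, false, dev, &val);
	if (ret)		/* 0 on success, -ETIMEDOUT on timeout */
		return ret;
	return ret2;		/* 0, or the error op() reported */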
Fixes: a44ef7c46097 ("iio: adc: add max11410 adc driver") Signed-off-by: Nuno Sá nuno.sa@analog.com Acked-by: Ibrahim Tilki Ibrahim.Tilki@analog.com Link: https://lore.kernel.org/r/20230307095303.713251-1-nuno.sa@analog.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/max11410.c | 22 +++++++++++++++------- 1 file changed, 15 insertions(+), 7 deletions(-)
--- a/drivers/iio/adc/max11410.c +++ b/drivers/iio/adc/max11410.c @@ -413,13 +413,17 @@ static int max11410_sample(struct max114 if (!ret) return -ETIMEDOUT; } else { + int ret2; + /* Wait for status register Conversion Ready flag */ - ret = read_poll_timeout(max11410_read_reg, ret, - ret || (val & MAX11410_STATUS_CONV_READY_BIT), + ret = read_poll_timeout(max11410_read_reg, ret2, + ret2 || (val & MAX11410_STATUS_CONV_READY_BIT), 5000, MAX11410_CONVERSION_TIMEOUT_MS * 1000, true, st, MAX11410_REG_STATUS, &val); if (ret) return ret; + if (ret2) + return ret2; }
/* Read ADC Data */ @@ -850,17 +854,21 @@ static int max11410_init_vref(struct dev
static int max11410_calibrate(struct max11410_state *st, u32 cal_type) { - int ret, val; + int ret, ret2, val;
ret = max11410_write_reg(st, MAX11410_REG_CAL_START, cal_type); if (ret) return ret;
/* Wait for status register Calibration Ready flag */ - return read_poll_timeout(max11410_read_reg, ret, - ret || (val & MAX11410_STATUS_CAL_READY_BIT), - 50000, MAX11410_CALIB_TIMEOUT_MS * 1000, true, - st, MAX11410_REG_STATUS, &val); + ret = read_poll_timeout(max11410_read_reg, ret2, + ret2 || (val & MAX11410_STATUS_CAL_READY_BIT), + 50000, MAX11410_CALIB_TIMEOUT_MS * 1000, true, + st, MAX11410_REG_STATUS, &val); + if (ret) + return ret; + + return ret2; }
static int max11410_self_calibrate(struct max11410_state *st)
From: Mehdi Djait mehdi.djait.k@gmail.com
commit 03fada47311a3e668f73efc9278c4a559e64ee85 upstream.
The trigger_handler gets called from the IRQ thread handler using iio_trigger_poll_chained(), which will only call the bottom half of the pollfunc, and therefore pf->timestamp will not get set.

Instead, use the timestamp from the driver's private data, which is always set in the IRQ handler.
Fixes: 7c1d1677b322 ("iio: accel: Support Kionix/ROHM KX022A accelerometer") Link: https://lore.kernel.org/linux-iio/Y+6QoBLh1k82cJVN@carbian/ Reviewed-by: Matti Vaittinen mazziesaccount@gmail.com Signed-off-by: Mehdi Djait mehdi.djait.k@gmail.com Link: https://lore.kernel.org/r/20230218135111.90061-1-mehdi.djait.k@gmail.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/accel/kionix-kx022a.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iio/accel/kionix-kx022a.c b/drivers/iio/accel/kionix-kx022a.c index f866859855cd..1c3a72380fb8 100644 --- a/drivers/iio/accel/kionix-kx022a.c +++ b/drivers/iio/accel/kionix-kx022a.c @@ -864,7 +864,7 @@ static irqreturn_t kx022a_trigger_handler(int irq, void *p) if (ret < 0) goto err_read;
- iio_push_to_buffers_with_timestamp(idev, data->buffer, pf->timestamp); + iio_push_to_buffers_with_timestamp(idev, data->buffer, data->timestamp); err_read: iio_trigger_notify_done(idev->trig);
From: Nuno Sá nuno.sa@analog.com
commit b5184a26a28fac1d708b0bfeeb958a9260c2924c upstream.
If for some reason 'rb->access->write()' does not write the full requested data and O_NONBLOCK is set, we would return 'n' to userspace, which is not really true. Hence, let's return the number of bytes we effectively wrote.
Fixes: 9eeee3b0bf190 ("iio: Add output buffer support") Signed-off-by: Nuno Sá nuno.sa@analog.com Reviewed-by: Lars-Peter Clausen lars@metafoo.de Link: https://lore.kernel.org/r/20230216101452.591805-2-nuno.sa@analog.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/industrialio-buffer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/iio/industrialio-buffer.c +++ b/drivers/iio/industrialio-buffer.c @@ -220,7 +220,7 @@ static ssize_t iio_buffer_write(struct f } while (ret == 0); remove_wait_queue(&rb->pollq, &wait);
- return ret < 0 ? ret : n; + return ret < 0 ? ret : written; }
/**
From: Nuno Sá nuno.sa@analog.com
commit 3da1814184582ed0faf039275a3f02e6f69944ee upstream.
For output buffers, there's no guarantee that the buffer won't be full in the first iteration of the loop, in which case we would block regardless of whether userspace passed O_NONBLOCK or not. Fix it by always checking the flag before going to sleep.
While at it (and as it's a bit related), refactor the loop so that the stop condition is 'written != n', i.e., run the loop until all data has been copied into the IIO buffers. This makes the code a bit simpler.
Fixes: 9eeee3b0bf190 ("iio: Add output buffer support") Signed-off-by: Nuno Sá nuno.sa@analog.com Reviewed-by: Lars-Peter Clausen lars@metafoo.de Link: https://lore.kernel.org/r/20230216101452.591805-3-nuno.sa@analog.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/industrialio-buffer.c | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-)
--- a/drivers/iio/industrialio-buffer.c +++ b/drivers/iio/industrialio-buffer.c @@ -203,21 +203,24 @@ static ssize_t iio_buffer_write(struct f break; }
+ if (filp->f_flags & O_NONBLOCK) { + if (!written) + ret = -EAGAIN; + break; + } + wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); continue; }
ret = rb->access->write(rb, n - written, buf + written); - if (ret == 0 && (filp->f_flags & O_NONBLOCK)) - ret = -EAGAIN; + if (ret < 0) + break;
- if (ret > 0) { - written += ret; - if (written != n && !(filp->f_flags & O_NONBLOCK)) - continue; - } - } while (ret == 0); + written += ret; + + } while (written != n); remove_wait_queue(&rb->pollq, &wait);
return ret < 0 ? ret : written;
From: Kai-Heng Feng kai.heng.feng@canonical.com
commit 099cc90a5a62e68b2fe3a42da011ab929b98bf73 upstream.
If a second dummy client that talks to the actual I2C address was created in probe(), there should be a proper cleanup on driver and device removal to avoid leakage.
So unregister the dummy client via another callback.
Reviewed-by: Hans de Goede hdegoede@redhat.com Suggested-by: Hans de Goede hdegoede@redhat.com Fixes: c1e62062ff54 ("iio: light: cm32181: Handle CM3218 ACPI devices with 2 I2C resources") Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2152281 Signed-off-by: Kai-Heng Feng kai.heng.feng@canonical.com Link: https://lore.kernel.org/r/20230223020059.2013993-1-kai.heng.feng@canonical.c... Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/light/cm32181.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)
--- a/drivers/iio/light/cm32181.c +++ b/drivers/iio/light/cm32181.c @@ -429,6 +429,14 @@ static const struct iio_info cm32181_inf .attrs = &cm32181_attribute_group, };
+static void cm32181_unregister_dummy_client(void *data) +{ + struct i2c_client *client = data; + + /* Unregister the dummy client */ + i2c_unregister_device(client); +} + static int cm32181_probe(struct i2c_client *client) { struct device *dev = &client->dev; @@ -460,6 +468,10 @@ static int cm32181_probe(struct i2c_clie client = i2c_acpi_new_device(dev, 1, &board_info); if (IS_ERR(client)) return PTR_ERR(client); + + ret = devm_add_action_or_reset(dev, cm32181_unregister_dummy_client, client); + if (ret) + return ret; }
cm32181 = iio_priv(indio_dev);
From: Mårten Lindahl marten.lindahl@axis.com
commit 42ec40b0883c1cce58b06e8fa82049a61033151c upstream.
There are different init functions for the sensors in this driver, of which only one initializes the generic vcnl4000_lock. With commit e21b5b1f2669 ("iio: light: vcnl4000: Preserve conf bits when toggle power") the vcnl4040 sensor started to depend on the lock, but initializing it in vcnl4040's init function was missed. This has not been visible until running lockdep on it:
DEBUG_LOCKS_WARN_ON(lock->magic != lock)
at kernel/locking/mutex.c:575 __mutex_lock+0x4f8/0x890
Call trace:
 __mutex_lock
 mutex_lock_nested
 vcnl4200_set_power_state
 vcnl4200_init
 vcnl4000_probe
Fix this by initializing the lock in the probe function instead of doing it in the chip-specific init functions.
Fixes: e21b5b1f2669 ("iio: light: vcnl4000: Preserve conf bits when toggle power") Signed-off-by: Mårten Lindahl marten.lindahl@axis.com Reviewed-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Link: https://lore.kernel.org/r/20230131140109.2067577-1-marten.lindahl@axis.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/light/vcnl4000.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/iio/light/vcnl4000.c b/drivers/iio/light/vcnl4000.c index cc1a2062e76d..69c5bc987e26 100644 --- a/drivers/iio/light/vcnl4000.c +++ b/drivers/iio/light/vcnl4000.c @@ -199,7 +199,6 @@ static int vcnl4000_init(struct vcnl4000_data *data)
data->rev = ret & 0xf; data->al_scale = 250000; - mutex_init(&data->vcnl4000_lock);
return data->chip_spec->set_power_state(data, true); }; @@ -1197,6 +1196,8 @@ static int vcnl4000_probe(struct i2c_client *client) data->id = id->driver_data; data->chip_spec = &vcnl4000_chip_spec_cfg[data->id];
+ mutex_init(&data->vcnl4000_lock); + ret = data->chip_spec->init(data); if (ret < 0) return ret;
From: Biju Das biju.das.jz@bp.renesas.com
commit b43a18647f03c87e77d50d6fe74904b61b96323e upstream.
The fourth interrupt on the SCI port type is the transmit end interrupt, as opposed to the break interrupt on other port types. So, shuffle the interrupts to fix the transmit end interrupt handler.
Fixes: e1d0be616186 ("sh-sci: Add h8300 SCI") Cc: stable stable@kernel.org Suggested-by: Geert Uytterhoeven geert+renesas@glider.be Signed-off-by: Biju Das biju.das.jz@bp.renesas.com Link: https://lore.kernel.org/r/20230317150403.154094-1-biju.das.jz@bp.renesas.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/serial/sh-sci.c | 8 ++++++++ 1 file changed, 8 insertions(+)
--- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -31,6 +31,7 @@ #include <linux/ioport.h> #include <linux/ktime.h> #include <linux/major.h> +#include <linux/minmax.h> #include <linux/module.h> #include <linux/mm.h> #include <linux/of.h> @@ -2864,6 +2865,13 @@ static int sci_init_single(struct platfo sci_port->irqs[i] = platform_get_irq(dev, i); }
+ /* + * The fourth interrupt on SCI port is transmit end interrupt, so + * shuffle the interrupts. + */ + if (p->type == PORT_SCI) + swap(sci_port->irqs[SCIx_BRI_IRQ], sci_port->irqs[SCIx_TEI_IRQ]); + /* The SCI generates several interrupts. They can be muxed together or * connected to different interrupt lines. In the muxed case only one * interrupt resource is specified as there is only one interrupt ID.
From: Biju Das biju.das.jz@bp.renesas.com
commit f92ed0cd9328aed918ebb0ebb64d259eccbcc6e7 upstream.
The SCI IP on RZ/G2L-alike SoCs does not need regshift, unlike other SCI IPs on SH platforms. Currently, regshift is applied and Rx is configured wrongly. Drop adding regshift for RZ/G2L-alike SoCs.
Fixes: dfc80387aefb ("serial: sh-sci: Compute the regshift value for SCI ports") Cc: stable@vger.kernel.org Signed-off-by: Biju Das biju.das.jz@bp.renesas.com Link: https://lore.kernel.org/r/20230321114753.75038-3-biju.das.jz@bp.renesas.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/serial/sh-sci.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -2937,7 +2937,7 @@ static int sci_init_single(struct platfo port->flags = UPF_FIXED_PORT | UPF_BOOT_AUTOCONF | p->flags; port->fifosize = sci_port->params->fifosize;
- if (port->type == PORT_SCI) { + if (port->type == PORT_SCI && !dev->dev.of_node) { if (sci_port->reg_size >= 0x20) port->regshift = 2; else
From: Sherry Sun sherry.sun@nxp.com
commit 9425914f3de6febbd6250395f56c8279676d9c3c upstream.
According to the LPUART RM, the Transmission Complete Flag becomes 0 if a break character is queued by writing 1 to CTRL[SBK], so we need to avoid checking for transmission complete when UARTCTRL_SBK is asserted; otherwise lpuart32_tx_empty() may never get TIOCSER_TEMT.
Commit 2411fd94ceaa ("tty: serial: fsl_lpuart: skip waiting for transmission complete when UARTCTRL_SBK is asserted") only fixed it in lpuart32_set_termios(); also fix it in lpuart32_tx_empty().
Fixes: 380c966c093e ("tty: serial: fsl_lpuart: add 32-bit register interface support") Cc: stable stable@kernel.org Signed-off-by: Sherry Sun sherry.sun@nxp.com Link: https://lore.kernel.org/r/20230323054415.20363-1-sherry.sun@nxp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/serial/fsl_lpuart.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
--- a/drivers/tty/serial/fsl_lpuart.c +++ b/drivers/tty/serial/fsl_lpuart.c @@ -832,11 +832,17 @@ static unsigned int lpuart32_tx_empty(st struct lpuart_port, port); unsigned long stat = lpuart32_read(port, UARTSTAT); unsigned long sfifo = lpuart32_read(port, UARTFIFO); + unsigned long ctrl = lpuart32_read(port, UARTCTRL);
if (sport->dma_tx_in_progress) return 0;
- if (stat & UARTSTAT_TC && sfifo & UARTFIFO_TXEMPT) + /* + * LPUART Transmission Complete Flag may never be set while queuing a break + * character, so avoid checking for transmission complete when UARTCTRL_SBK + * is asserted. + */ + if ((stat & UARTSTAT_TC && sfifo & UARTFIFO_TXEMPT) || ctrl & UARTCTRL_SBK) return TIOCSER_TEMT;
return 0;
From: Sherry Sun sherry.sun@nxp.com
commit 178e00f36f934a88682d96aa046c1f90cb6f83a7 upstream.
With the serdev framework, tty->dev is a NULL pointer, so lpuart_uport_is_active() calling device_may_wakeup() may cause a kernel NULL pointer crash. Add a NULL pointer check before using it.
Fixes: 4f5cb8c5e915 ("tty: serial: fsl_lpuart: enable wakeup source for lpuart") Cc: stable stable@kernel.org Signed-off-by: Sherry Sun sherry.sun@nxp.com Link: https://lore.kernel.org/r/20230323110923.24581-1-sherry.sun@nxp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/serial/fsl_lpuart.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/tty/serial/fsl_lpuart.c +++ b/drivers/tty/serial/fsl_lpuart.c @@ -2896,7 +2896,7 @@ static bool lpuart_uport_is_active(struc tty = tty_port_tty_get(port); if (tty) { tty_dev = tty->dev; - may_wake = device_may_wakeup(tty_dev); + may_wake = tty_dev && device_may_wakeup(tty_dev); tty_kref_put(tty); }
From: Ryusuke Konishi konishi.ryusuke@gmail.com
commit 6be49d100c22ffea3287a4b19d7639d259888e33 upstream.
The finalization of nilfs_segctor_thread() can race with nilfs_segctor_kill_thread(), which terminates that thread, potentially causing a use-after-free BUG as detected by KASAN.

At the end of nilfs_segctor_thread(), it assigns NULL to the "sc_task" member of "struct nilfs_sc_info" to indicate the thread has finished, and then notifies nilfs_segctor_kill_thread() of this using the waitqueue "sc_wait_task" on the struct nilfs_sc_info.
However, here, immediately after the NULL assignment to "sc_task", it is possible that nilfs_segctor_kill_thread() will detect it and return to continue the deallocation, freeing the nilfs_sc_info structure before the thread does the notification.
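A sketch of the problematic interleaving, pieced together from the description above:

	/* segctor thread                    nilfs_segctor_kill_thread()
	 * --------------                    ---------------------------
	 * sci->sc_task = NULL;
	 *                                   takes sc_state_lock,
	 *                                   sees sc_task == NULL, returns
	 *                                   ... struct nilfs_sc_info freed
	 * wake_up(&sci->sc_wait_task);  <-- use-after-free
	 */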
This fixes the issue by protecting the NULL assignment to "sc_task" and its notification with the spinlock "sc_state_lock" of the struct nilfs_sc_info. Since nilfs_segctor_kill_thread() does a final check to see if "sc_task" is NULL with "sc_state_lock" locked, this can eliminate the race.
Link: https://lkml.kernel.org/r/20230327175318.8060-1-konishi.ryusuke@gmail.com Reported-by: syzbot+b08ebcc22f8f3e6be43a@syzkaller.appspotmail.com Link: https://lkml.kernel.org/r/00000000000000660d05f7dfa877@google.com Signed-off-by: Ryusuke Konishi konishi.ryusuke@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/nilfs2/segment.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/fs/nilfs2/segment.c +++ b/fs/nilfs2/segment.c @@ -2607,11 +2607,10 @@ static int nilfs_segctor_thread(void *ar goto loop;
end_thread: - spin_unlock(&sci->sc_state_lock); - /* end sync. */ sci->sc_task = NULL; wake_up(&sci->sc_wait_task); /* for nilfs_segctor_kill_thread() */ + spin_unlock(&sci->sc_state_lock); return 0; }
From: Ryusuke Konishi konishi.ryusuke@gmail.com
commit 42560f9c92cc43dce75dbf06cc0d840dced39b12 upstream.
The current nilfs2 sysfs support has issues with the timing of creation and deletion of sysfs entries, potentially leading to null pointer dereferences, use-after-free, and lockdep warnings.
Some of the sysfs attributes for a nilfs2 per-filesystem instance refer to the metadata files "cpfile", "sufile", or "dat", but nilfs_sysfs_create_device_group(), which creates those attributes, is executed before the inodes for these metadata files are loaded, and nilfs_sysfs_delete_device_group(), which deletes these sysfs entries, is called after releasing their metadata file inodes.
Therefore, access to some of these sysfs attributes may occur outside of the lifetime of these metadata files, resulting in inode NULL pointer dereferences or use-after-free.
In addition, the call to nilfs_sysfs_create_device_group() is made while the semaphore "ns_sem" of the nilfs object is held, so a shrinker call caused by the memory allocation for the sysfs entries may create the lock dependency "ns_sem" -> (shrinker) -> "locks acquired in nilfs_evict_inode()".
Since nilfs2 may acquire "ns_sem" deep in the call stack holding other locks via its error handler __nilfs_error(), this causes lockdep to report circular locking. This is a false positive and no circular locking actually occurs as no inodes exist yet when nilfs_sysfs_create_device_group() is called. Fortunately, the lockdep warnings can be resolved by simply moving the call to nilfs_sysfs_create_device_group() out of "ns_sem".
This fixes these sysfs issues by revising where the device's sysfs interface is created/deleted and keeping its lifetime within the lifetime of the metadata files above.
Link: https://lkml.kernel.org/r/20230330205515.6167-1-konishi.ryusuke@gmail.com Fixes: dd70edbde262 ("nilfs2: integrate sysfs support into driver") Signed-off-by: Ryusuke Konishi konishi.ryusuke@gmail.com Reported-by: syzbot+979fa7f9c0d086fdc282@syzkaller.appspotmail.com Link: https://lkml.kernel.org/r/0000000000003414b505f7885f7e@google.com Reported-by: syzbot+5b7d542076d9bddc3c6a@syzkaller.appspotmail.com Link: https://lkml.kernel.org/r/0000000000006ac86605f5f44eb9@google.com Cc: Viacheslav Dubeyko slava@dubeyko.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/nilfs2/super.c | 2 ++ fs/nilfs2/the_nilfs.c | 12 +++++++----- 2 files changed, 9 insertions(+), 5 deletions(-)
--- a/fs/nilfs2/super.c +++ b/fs/nilfs2/super.c @@ -482,6 +482,7 @@ static void nilfs_put_super(struct super up_write(&nilfs->ns_sem); }
+ nilfs_sysfs_delete_device_group(nilfs); iput(nilfs->ns_sufile); iput(nilfs->ns_cpfile); iput(nilfs->ns_dat); @@ -1105,6 +1106,7 @@ nilfs_fill_super(struct super_block *sb, nilfs_put_root(fsroot);
failed_unload: + nilfs_sysfs_delete_device_group(nilfs); iput(nilfs->ns_sufile); iput(nilfs->ns_cpfile); iput(nilfs->ns_dat); --- a/fs/nilfs2/the_nilfs.c +++ b/fs/nilfs2/the_nilfs.c @@ -87,7 +87,6 @@ void destroy_nilfs(struct the_nilfs *nil { might_sleep(); if (nilfs_init(nilfs)) { - nilfs_sysfs_delete_device_group(nilfs); brelse(nilfs->ns_sbh[0]); brelse(nilfs->ns_sbh[1]); } @@ -305,6 +304,10 @@ int load_nilfs(struct the_nilfs *nilfs, goto failed; }
+ err = nilfs_sysfs_create_device_group(sb); + if (unlikely(err)) + goto sysfs_error; + if (valid_fs) goto skip_recovery;
@@ -366,6 +369,9 @@ int load_nilfs(struct the_nilfs *nilfs, goto failed;
failed_unload: + nilfs_sysfs_delete_device_group(nilfs); + + sysfs_error: iput(nilfs->ns_cpfile); iput(nilfs->ns_sufile); iput(nilfs->ns_dat); @@ -697,10 +703,6 @@ int init_nilfs(struct the_nilfs *nilfs, if (err) goto failed_sbh;
- err = nilfs_sysfs_create_device_group(sb); - if (err) - goto failed_sbh; - set_nilfs_init(nilfs); err = 0; out:
From: Shiyang Ruan ruansy.fnst@fujitsu.com
commit e900ba10d15041a6236cc75778cc6e06c3590a58 upstream.
In a dedupe comparison iter loop, the length of the iomap_iter decreases because it represents the remaining length after each iteration.
The dedupe command will fail with -EIO if the range is larger than one page size and not aligned to the page size. A warning is also reported in dmesg:
[ 4338.498374] ------------[ cut here ]------------
[ 4338.498689] WARNING: CPU: 3 PID: 1415645 at fs/iomap/iter.c:16 ...
The compare function should use the min length of the current iters, not the total length.
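A hypothetical walk-through, assuming 4 KiB blocks purely for illustration:

	/* comparing an 8 KiB range where the two files map differently:
	 *
	 *   iteration 1: src_iter maps 8 KiB, dst_iter maps 4 KiB
	 *                -> compare min(8 KiB, 4 KiB) = 4 KiB, not len
	 *   iteration 2: both iters have 4 KiB remaining
	 *                -> compare the final 4 KiB
	 *
	 * passing the total len would run past what dst_iter currently
	 * maps and trip the WARN in fs/iomap/iter.c
	 */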
Link: https://lkml.kernel.org/r/1679469958-2-1-git-send-email-ruansy.fnst@fujitsu.... Fixes: 0e79e3736d54 ("fsdax: dedupe: iter two files at the same time") Signed-off-by: Shiyang Ruan ruansy.fnst@fujitsu.com Reviewed-by: Darrick J. Wong djwong@kernel.org Cc: Dan Williams dan.j.williams@intel.com Cc: Jan Kara jack@suse.cz Cc: Matthew Wilcox (Oracle) willy@infradead.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/dax.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/dax.c +++ b/fs/dax.c @@ -2022,8 +2022,8 @@ int dax_dedupe_file_range_compare(struct
while ((ret = iomap_iter(&src_iter, ops)) > 0 && (ret = iomap_iter(&dst_iter, ops)) > 0) { - compared = dax_range_compare_iter(&src_iter, &dst_iter, len, - same); + compared = dax_range_compare_iter(&src_iter, &dst_iter, + min(src_iter.len, dst_iter.len), same); if (compared < 0) return ret; src_iter.processed = dst_iter.processed = compared;
From: Shiyang Ruan ruansy.fnst@fujitsu.com
commit 13dd4e04625f600e5affb1b3f0b6c35268ab839b upstream.
unshare copies data from the source to the destination. But if the source is a HOLE or UNWRITTEN extent, we should zero the destination; otherwise the HOLE or UNWRITTEN part will show up as user-visible old data in the newly allocated extent.
Found by running generic/649 while mounting with -o dax=always on pmem.
Link: https://lkml.kernel.org/r/1679483469-2-1-git-send-email-ruansy.fnst@fujitsu.... Fixes: d984648e428b ("fsdax,xfs: port unshare to fsdax") Signed-off-by: Shiyang Ruan ruansy.fnst@fujitsu.com Cc: Dan Williams dan.j.williams@intel.com Cc: Darrick J. Wong djwong@kernel.org Cc: Jan Kara jack@suse.cz Cc: Matthew Wilcox (Oracle) willy@infradead.org Cc: Alistair Popple apopple@nvidia.com Cc: Jason Gunthorpe jgg@nvidia.com Cc: John Hubbard jhubbard@nvidia.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/dax.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-)
--- a/fs/dax.c +++ b/fs/dax.c @@ -1258,15 +1258,20 @@ static s64 dax_unshare_iter(struct iomap /* don't bother with blocks that are not shared to start with */ if (!(iomap->flags & IOMAP_F_SHARED)) return length; - /* don't bother with holes or unwritten extents */ - if (srcmap->type == IOMAP_HOLE || srcmap->type == IOMAP_UNWRITTEN) - return length;
id = dax_read_lock(); ret = dax_iomap_direct_access(iomap, pos, length, &daddr, NULL); if (ret < 0) goto out_unlock;
+ /* zero the distance if srcmap is HOLE or UNWRITTEN */ + if (srcmap->flags & IOMAP_F_SHARED || srcmap->type == IOMAP_UNWRITTEN) { + memset(daddr, 0, length); + dax_flush(iomap->dax_dev, daddr, length); + ret = length; + goto out_unlock; + } + ret = dax_iomap_direct_access(srcmap, pos, length, &saddr, NULL); if (ret < 0) goto out_unlock;
From: Shiyang Ruan ruansy.fnst@fujitsu.com
commit f76b3a32879de215ced3f8c754c4077b0c2f79e3 upstream.
XFS allows CoW on non-shared extents to combat fragmentation[1]. The old non-shared extent could have been mwrite'd (written through mmap) before, so its dax entry is marked dirty.
This results in a WARNing:
[ 28.512349] ------------[ cut here ]------------
[ 28.512622] WARNING: CPU: 2 PID: 5255 at fs/dax.c:390 dax_insert_entry+0x342/0x390
[ 28.513050] Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace fscache netfs nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nf_tables
[ 28.515462] CPU: 2 PID: 5255 Comm: fsstress Kdump: loaded Not tainted 6.3.0-rc1-00001-g85e1481e19c1-dirty #117
[ 28.515902] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS Arch Linux 1.16.1-1-1 04/01/2014
[ 28.516307] RIP: 0010:dax_insert_entry+0x342/0x390
[ 28.516536] Code: 30 5b 5d 41 5c 41 5d 41 5e 41 5f c3 cc cc cc cc 48 8b 45 20 48 83 c0 01 e9 e2 fe ff ff 48 8b 45 20 48 83 c0 01 e9 cd fe ff ff <0f> 0b e9 53 ff ff ff 48 8b 7c 24 08 31 f6 e8 1b 61 a1 00 eb 8c 48
[ 28.517417] RSP: 0000:ffffc9000845fb18 EFLAGS: 00010086
[ 28.517721] RAX: 0000000000000053 RBX: 0000000000000155 RCX: 000000000018824b
[ 28.518113] RDX: 0000000000000000 RSI: ffffffff827525a6 RDI: 00000000ffffffff
[ 28.518515] RBP: ffffea00062092c0 R08: 0000000000000000 R09: ffffc9000845f9c8
[ 28.518905] R10: 0000000000000003 R11: ffffffff82ddb7e8 R12: 0000000000000155
[ 28.519301] R13: 0000000000000000 R14: 000000000018824b R15: ffff88810cfa76b8
[ 28.519703] FS: 00007f14a0c94740(0000) GS:ffff88817bd00000(0000) knlGS:0000000000000000
[ 28.520148] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 28.520472] CR2: 00007f14a0c8d000 CR3: 000000010321c004 CR4: 0000000000770ee0
[ 28.520863] PKRU: 55555554
[ 28.521043] Call Trace:
[ 28.521219] <TASK>
[ 28.521368] dax_fault_iter+0x196/0x390
[ 28.521595] dax_iomap_pte_fault+0x19b/0x3d0
[ 28.521852] __xfs_filemap_fault+0x234/0x2b0
[ 28.522116] __do_fault+0x30/0x130
[ 28.522334] do_fault+0x193/0x340
[ 28.522586] __handle_mm_fault+0x2d3/0x690
[ 28.522975] handle_mm_fault+0xe6/0x2c0
[ 28.523259] do_user_addr_fault+0x1bc/0x6f0
[ 28.523521] exc_page_fault+0x60/0x140
[ 28.523763] asm_exc_page_fault+0x22/0x30
[ 28.524001] RIP: 0033:0x7f14a0b589ca
[ 28.524225] Code: c5 fe 7f 07 c5 fe 7f 47 20 c5 fe 7f 47 40 c5 fe 7f 47 60 c5 f8 77 c3 66 0f 1f 84 00 00 00 00 00 40 0f b6 c6 48 89 d1 48 89 fa <f3> aa 48 89 d0 c5 f8 77 c3 66 66 2e 0f 1f 84 00 00 00 00 00 66 90
[ 28.525198] RSP: 002b:00007fff1dea1c98 EFLAGS: 00010202
[ 28.525505] RAX: 000000000000001e RBX: 000000000014a000 RCX: 0000000000006046
[ 28.525895] RDX: 00007f14a0c82000 RSI: 000000000000001e RDI: 00007f14a0c8d000
[ 28.526290] RBP: 000000000000006f R08: 0000000000000004 R09: 000000000014a000
[ 28.526681] R10: 0000000000000008 R11: 0000000000000246 R12: 028f5c28f5c28f5c
[ 28.527067] R13: 8f5c28f5c28f5c29 R14: 0000000000011046 R15: 00007f14a0c946c0
[ 28.527449] </TASK>
[ 28.527600] ---[ end trace 0000000000000000 ]---
To be able to delete this entry, clear its dirty mark before invalidate_inode_pages2_range().
[1] https://lore.kernel.org/linux-xfs/20230321151339.GA11376@frogsfrogsfrogs/
Link: https://lkml.kernel.org/r/1679653680-2-1-git-send-email-ruansy.fnst@fujitsu.... Fixes: f80e1668888f3 ("fsdax: invalidate pages when CoW") Signed-off-by: Shiyang Ruan ruansy.fnst@fujitsu.com Cc: Dan Williams dan.j.williams@intel.com Cc: Darrick J. Wong djwong@kernel.org Cc: Jan Kara jack@suse.cz Cc: Matthew Wilcox (Oracle) willy@infradead.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/dax.c | 37 +++++++++++++++++++++++++++++++++++++ 1 file changed, 37 insertions(+)
diff --git a/fs/dax.c b/fs/dax.c index 5d2e9b10030e..2ababb89918d 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -781,6 +781,33 @@ static int __dax_invalidate_entry(struct address_space *mapping, return ret; }
+static int __dax_clear_dirty_range(struct address_space *mapping, + pgoff_t start, pgoff_t end) +{ + XA_STATE(xas, &mapping->i_pages, start); + unsigned int scanned = 0; + void *entry; + + xas_lock_irq(&xas); + xas_for_each(&xas, entry, end) { + entry = get_unlocked_entry(&xas, 0); + xas_clear_mark(&xas, PAGECACHE_TAG_DIRTY); + xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE); + put_unlocked_entry(&xas, entry, WAKE_NEXT); + + if (++scanned % XA_CHECK_SCHED) + continue; + + xas_pause(&xas); + xas_unlock_irq(&xas); + cond_resched(); + xas_lock_irq(&xas); + } + xas_unlock_irq(&xas); + + return 0; +} + /* * Delete DAX entry at @index from @mapping. Wait for it * to be unlocked before deleting it. @@ -1440,6 +1467,16 @@ static loff_t dax_iomap_iter(const struct iomap_iter *iomi, * written by write(2) is visible in mmap. */ if (iomap->flags & IOMAP_F_NEW || cow) { + /* + * Filesystem allows CoW on non-shared extents. The src extents + * may have been mmapped with dirty mark before. To be able to + * invalidate its dax entries, we need to clear the dirty mark + * in advance. + */ + if (cow) + __dax_clear_dirty_range(iomi->inode->i_mapping, + pos >> PAGE_SHIFT, + (end - 1) >> PAGE_SHIFT); invalidate_inode_pages2_range(iomi->inode->i_mapping, pos >> PAGE_SHIFT, (end - 1) >> PAGE_SHIFT);
From: Geert Uytterhoeven geert+renesas@glider.be
commit 7b21f329ae0ab6361c0aebfc094db95821490cd1 upstream.
The fourth interrupt on SCIF variants with four interrupts (RZ/A1) is the Break interrupt, not the Transmit End interrupt (like on SCI(g)). Update the description and interrupt name to fix this.
Fixes: 384d00fae8e51f8f ("dt-bindings: serial: sh-sci: Convert to json-schema") Cc: stable stable@kernel.org Signed-off-by: Geert Uytterhoeven geert+renesas@glider.be Acked-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Link: https://lore.kernel.org/r/719d1582e0ebbe3d674e3a48fc26295e1475a4c3.167904639... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- Documentation/devicetree/bindings/serial/renesas,scif.yaml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/Documentation/devicetree/bindings/serial/renesas,scif.yaml +++ b/Documentation/devicetree/bindings/serial/renesas,scif.yaml @@ -92,7 +92,7 @@ properties: - description: Error interrupt - description: Receive buffer full interrupt - description: Transmit buffer empty interrupt - - description: Transmit End interrupt + - description: Break interrupt - items: - description: Error interrupt - description: Receive buffer full interrupt @@ -107,7 +107,7 @@ properties: - const: eri - const: rxi - const: txi - - const: tei + - const: bri - items: - const: eri - const: rxi
From: Ilpo Järvinen ilpo.jarvinen@linux.intel.com
commit 90b8596ac46043e4a782d9111f5b285251b13756 upstream.
Hans de Goede reported that Bluetooth adapters (HCIs) connected over a UART connection failed due to a corrupted Rx payload. The problem was narrowed down to DMA Rx starting on the UART_IIR_THRI interrupt. The problem occurs despite LSR having the DR bit set, which is a precondition for attempting to start DMA Rx in the first place.
From a debug patch:
[x.807834] 8250irq: iir=cc lsr+saved=60 received=0/15 ier=0f dma_t/rx/err=0/0/0
[x.808676] 8250irq: iir=c2 lsr+saved=61 received=0/0 ier=0f dma_t/rx/err=0/0/0
[x.808776] 8250irq: iir=cc lsr+saved=60 received=1/12 ier=0d dma_t/rx/err=0/1/0
[x.808870] Bluetooth: hci0: Frame reassembly failed (-84)
In the debug snippet, the received field indicates 1 byte was transferred over DMA and 12 bytes after that with the non-DMA Rx. The sole byte DMA handled was corrupted (it gets zeroed), which leads to the HCI failure.
This problem became apparent after commit e8ffbb71f783 ("serial: 8250: use THRE & __stop_tx also with DMA") changed Tx stop behavior. Tx stop is now triggered from a THRI interrupt.
Despite this problem looking like a HW bug, this fix does not add a UART_BUG_xx flag to the driver because it seems useful in general to avoid starting DMA when there are only a few bytes to transfer. Skipping DMA for small transfers avoids the extra overhead DMA incurs.
Thus, don't set up DMA Rx on UART_IIR_THRI but leave it to a subsequent interrupt which has an Rx-related IIR value.
By returning false from handle_rx_dma(), the DMA vs non-DMA decision is postponed until either UART_IIR_RDI (a FIFO threshold worth of bytes awaiting) or UART_IIR_TIMEOUT (inter-character timeout) triggers at a later time, which makes it easier to discern whether the number of bytes warrants starting DMA or not.
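Summarized as a table (an informal reading of the description above, not text from the driver):

	/* IIR value          meaning                       DMA Rx decision
	 * ---------          -------                       ---------------
	 * UART_IIR_THRI      THR empty (Tx-related)        none, return false
	 * UART_IIR_RDI       Rx FIFO threshold reached     enough data, DMA pays off
	 * UART_IIR_TIMEOUT   inter-character timeout       few bytes, copy by CPU
	 */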
Reported-by: Hans de Goede hdegoede@redhat.com Tested-by: Hans de Goede hdegoede@redhat.com Fixes: e8ffbb71f783 ("serial: 8250: use THRE & __stop_tx also with DMA") Cc: stable@vger.kernel.org Signed-off-by: Ilpo Järvinen ilpo.jarvinen@linux.intel.com Acked-by: Hans de Goede hdegoede@redhat.com Link: https://lore.kernel.org/r/20230317103034.12881-1-ilpo.jarvinen@linux.intel.c... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/serial/8250/8250_port.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/drivers/tty/serial/8250/8250_port.c +++ b/drivers/tty/serial/8250/8250_port.c @@ -1896,6 +1896,17 @@ EXPORT_SYMBOL_GPL(serial8250_modem_statu static bool handle_rx_dma(struct uart_8250_port *up, unsigned int iir) { switch (iir & 0x3f) { + case UART_IIR_THRI: + /* + * Postpone DMA or not decision to IIR_RDI or IIR_RX_TIMEOUT + * because it's impossible to do an informed decision about + * that with IIR_THRI. + * + * This also fixes one known DMA Rx corruption issue where + * DR is asserted but DMA Rx only gets a corrupted zero byte + * (too early DR?). + */ + return false; case UART_IIR_RDI: if (!up->dma->rx_running) break;
From: Marios Makassikis mmakassikis@freebox.fr
commit e416ea62a9166e6075a07a970cc5bf79255d2700 upstream.
Commit 83dcedd5540d ("ksmbd: fix infinite loop in ksmbd_conn_handler_loop()") changed the GFP modifiers passed to kvmalloc(). This causes the xfstests generic/551 test to fail. We limit the pdu length according to the connection status and the maximum number of connections. For the rest, memory allocation of requests is limited by credit management, so these flags are no longer needed.
Fixes: 83dcedd5540d ("ksmbd: fix infinite loop in ksmbd_conn_handler_loop()") Cc: stable@vger.kernel.org Signed-off-by: Marios Makassikis mmakassikis@freebox.fr Acked-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ksmbd/connection.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-)
--- a/fs/ksmbd/connection.c +++ b/fs/ksmbd/connection.c @@ -326,10 +326,7 @@ int ksmbd_conn_handler_loop(void *p)
/* 4 for rfc1002 length field */ size = pdu_size + 4; - conn->request_buf = kvmalloc(size, - GFP_KERNEL | - __GFP_NOWARN | - __GFP_NORETRY); + conn->request_buf = kvmalloc(size, GFP_KERNEL); if (!conn->request_buf) break;
From: Namjae Jeon linkinjeon@kernel.org
commit dc8289f912387c3bcfbc5d2db29c8947fa207c11 upstream.
When an smb1 mount fails, KASAN detects a slab-out-of-bounds in init_smb2_rsp_hdr like the following one. For an smb1 negotiate (56 bytes), init_smb2_rsp_hdr() for smb2 is called. The issue occurs while handling the smb1 negotiate with the smb2 server operations. Add smb server operations for smb1 (get_cmd_val, init_rsp_hdr, allocate_rsp_buf, check_user_session) to handle the smb1 negotiate so that the smb2 server operations do not handle it.
[ 411.400423] CIFS: VFS: Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers
[ 411.400452] CIFS: Attempting to mount \192.168.45.139\homes
[ 411.479312] ksmbd: init_smb2_rsp_hdr : 492
[ 411.479323] ==================================================================
[ 411.479327] BUG: KASAN: slab-out-of-bounds in init_smb2_rsp_hdr+0x1e2/0x1f4 [ksmbd]
[ 411.479369] Read of size 16 at addr ffff888488ed0734 by task kworker/14:1/199
[ 411.479379] CPU: 14 PID: 199 Comm: kworker/14:1 Tainted: G OE 6.1.21 #3
[ 411.479386] Hardware name: ASUSTeK COMPUTER INC. Z10PA-D8 Series/Z10PA-D8 Series, BIOS 3801 08/23/2019
[ 411.479390] Workqueue: ksmbd-io handle_ksmbd_work [ksmbd]
[ 411.479425] Call Trace:
[ 411.479428] <TASK>
[ 411.479432] dump_stack_lvl+0x49/0x63
[ 411.479444] print_report+0x171/0x4a8
[ 411.479452] ? kasan_complete_mode_report_info+0x3c/0x200
[ 411.479463] ? init_smb2_rsp_hdr+0x1e2/0x1f4 [ksmbd]
[ 411.479497] kasan_report+0xb4/0x130
[ 411.479503] ? init_smb2_rsp_hdr+0x1e2/0x1f4 [ksmbd]
[ 411.479537] kasan_check_range+0x149/0x1e0
[ 411.479543] memcpy+0x24/0x70
[ 411.479550] init_smb2_rsp_hdr+0x1e2/0x1f4 [ksmbd]
[ 411.479585] handle_ksmbd_work+0x109/0x760 [ksmbd]
[ 411.479616] ? _raw_spin_unlock_irqrestore+0x50/0x50
[ 411.479624] ? smb3_encrypt_resp+0x340/0x340 [ksmbd]
[ 411.479656] process_one_work+0x49c/0x790
[ 411.479667] worker_thread+0x2b1/0x6e0
[ 411.479674] ? process_one_work+0x790/0x790
[ 411.479680] kthread+0x177/0x1b0
[ 411.479686] ? kthread_complete_and_exit+0x30/0x30
[ 411.479692] ret_from_fork+0x22/0x30
[ 411.479702] </TASK>
Fixes: 39b291b86b59 ("ksmbd: return unsupported error on smb1 mount") Cc: stable@vger.kernel.org Signed-off-by: Namjae Jeon linkinjeon@kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/ksmbd/server.c | 5 - fs/ksmbd/smb2pdu.c | 3 - fs/ksmbd/smb_common.c | 138 +++++++++++++++++++++++++++++++++++++++----------- fs/ksmbd/smb_common.h | 2 4 files changed, 111 insertions(+), 37 deletions(-)
--- a/fs/ksmbd/server.c +++ b/fs/ksmbd/server.c @@ -289,10 +289,7 @@ static int queue_ksmbd_work(struct ksmbd work->request_buf = conn->request_buf; conn->request_buf = NULL;
- if (ksmbd_init_smb_server(work)) { - ksmbd_free_work_struct(work); - return -EINVAL; - } + ksmbd_init_smb_server(work);
ksmbd_conn_enqueue_request(work); atomic_inc(&conn->r_count); --- a/fs/ksmbd/smb2pdu.c +++ b/fs/ksmbd/smb2pdu.c @@ -235,9 +235,6 @@ int init_smb2_neg_rsp(struct ksmbd_work struct smb2_negotiate_rsp *rsp; struct ksmbd_conn *conn = work->conn;
- if (conn->need_neg == false) - return -EINVAL; - *(__be32 *)work->response_buf = cpu_to_be32(conn->vals->header_size);
--- a/fs/ksmbd/smb_common.c +++ b/fs/ksmbd/smb_common.c @@ -283,20 +283,121 @@ err_out: return BAD_PROT_ID; }
-int ksmbd_init_smb_server(struct ksmbd_work *work) +#define SMB_COM_NEGOTIATE_EX 0x0 + +/** + * get_smb1_cmd_val() - get smb command value from smb header + * @work: smb work containing smb header + * + * Return: smb command value + */ +static u16 get_smb1_cmd_val(struct ksmbd_work *work) { - struct ksmbd_conn *conn = work->conn; + return SMB_COM_NEGOTIATE_EX; +}
- if (conn->need_neg == false) +/** + * init_smb1_rsp_hdr() - initialize smb negotiate response header + * @work: smb work containing smb request + * + * Return: 0 on success, otherwise -EINVAL + */ +static int init_smb1_rsp_hdr(struct ksmbd_work *work) +{ + struct smb_hdr *rsp_hdr = (struct smb_hdr *)work->response_buf; + struct smb_hdr *rcv_hdr = (struct smb_hdr *)work->request_buf; + + /* + * Remove 4 byte direct TCP header. + */ + *(__be32 *)work->response_buf = + cpu_to_be32(sizeof(struct smb_hdr) - 4); + + rsp_hdr->Command = SMB_COM_NEGOTIATE; + *(__le32 *)rsp_hdr->Protocol = SMB1_PROTO_NUMBER; + rsp_hdr->Flags = SMBFLG_RESPONSE; + rsp_hdr->Flags2 = SMBFLG2_UNICODE | SMBFLG2_ERR_STATUS | + SMBFLG2_EXT_SEC | SMBFLG2_IS_LONG_NAME; + rsp_hdr->Pid = rcv_hdr->Pid; + rsp_hdr->Mid = rcv_hdr->Mid; + return 0; +} + +/** + * smb1_check_user_session() - check for valid session for a user + * @work: smb work containing smb request buffer + * + * Return: 0 on success, otherwise error + */ +static int smb1_check_user_session(struct ksmbd_work *work) +{ + unsigned int cmd = work->conn->ops->get_cmd_val(work); + + if (cmd == SMB_COM_NEGOTIATE_EX) return 0;
- init_smb3_11_server(conn); + return -EINVAL; +} + +/** + * smb1_allocate_rsp_buf() - allocate response buffer for a command + * @work: smb work containing smb request + * + * Return: 0 on success, otherwise -ENOMEM + */ +static int smb1_allocate_rsp_buf(struct ksmbd_work *work) +{ + work->response_buf = kmalloc(MAX_CIFS_SMALL_BUFFER_SIZE, + GFP_KERNEL | __GFP_ZERO); + work->response_sz = MAX_CIFS_SMALL_BUFFER_SIZE; + + if (!work->response_buf) { + pr_err("Failed to allocate %u bytes buffer\n", + MAX_CIFS_SMALL_BUFFER_SIZE); + return -ENOMEM; + }
- if (conn->ops->get_cmd_val(work) != SMB_COM_NEGOTIATE) - conn->need_neg = false; return 0; }
+static struct smb_version_ops smb1_server_ops = { + .get_cmd_val = get_smb1_cmd_val, + .init_rsp_hdr = init_smb1_rsp_hdr, + .allocate_rsp_buf = smb1_allocate_rsp_buf, + .check_user_session = smb1_check_user_session, +}; + +static int smb1_negotiate(struct ksmbd_work *work) +{ + return ksmbd_smb_negotiate_common(work, SMB_COM_NEGOTIATE); +} + +static struct smb_version_cmds smb1_server_cmds[1] = { + [SMB_COM_NEGOTIATE_EX] = { .proc = smb1_negotiate, }, +}; + +static void init_smb1_server(struct ksmbd_conn *conn) +{ + conn->ops = &smb1_server_ops; + conn->cmds = smb1_server_cmds; + conn->max_cmds = ARRAY_SIZE(smb1_server_cmds); +} + +void ksmbd_init_smb_server(struct ksmbd_work *work) +{ + struct ksmbd_conn *conn = work->conn; + __le32 proto; + + if (conn->need_neg == false) + return; + + proto = *(__le32 *)((struct smb_hdr *)work->request_buf)->Protocol; + if (proto == SMB1_PROTO_NUMBER) + init_smb1_server(conn); + else + init_smb3_11_server(conn); +} + int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work, int info_level, struct ksmbd_file *dir, struct ksmbd_dir_info *d_info, @@ -444,20 +545,10 @@ static int smb_handle_negotiate(struct k
ksmbd_debug(SMB, "Unsupported SMB1 protocol\n");
- /* - * Remove 4 byte direct TCP header, add 2 byte bcc and - * 2 byte DialectIndex. - */ - *(__be32 *)work->response_buf = - cpu_to_be32(sizeof(struct smb_hdr) - 4 + 2 + 2); + /* Add 2 byte bcc and 2 byte DialectIndex. */ + inc_rfc1001_len(work->response_buf, 4); neg_rsp->hdr.Status.CifsError = STATUS_SUCCESS;
- neg_rsp->hdr.Command = SMB_COM_NEGOTIATE; - *(__le32 *)neg_rsp->hdr.Protocol = SMB1_PROTO_NUMBER; - neg_rsp->hdr.Flags = SMBFLG_RESPONSE; - neg_rsp->hdr.Flags2 = SMBFLG2_UNICODE | SMBFLG2_ERR_STATUS | - SMBFLG2_EXT_SEC | SMBFLG2_IS_LONG_NAME; - neg_rsp->hdr.WordCount = 1; neg_rsp->DialectIndex = cpu_to_le16(work->conn->dialect); neg_rsp->ByteCount = 0; @@ -474,23 +565,12 @@ int ksmbd_smb_negotiate_common(struct ks ksmbd_debug(SMB, "conn->dialect 0x%x\n", conn->dialect);
if (command == SMB2_NEGOTIATE_HE) { - struct smb2_hdr *smb2_hdr = smb2_get_msg(work->request_buf); - - if (smb2_hdr->ProtocolId != SMB2_PROTO_NUMBER) { - ksmbd_debug(SMB, "Downgrade to SMB1 negotiation\n"); - command = SMB_COM_NEGOTIATE; - } - } - - if (command == SMB2_NEGOTIATE_HE) { ret = smb2_handle_negotiate(work); - init_smb2_neg_rsp(work); return ret; }
if (command == SMB_COM_NEGOTIATE) { if (__smb2_negotiate(conn)) { - conn->need_neg = true; init_smb3_11_server(conn); init_smb2_neg_rsp(work); ksmbd_debug(SMB, "Upgrade to SMB2 negotiation\n"); --- a/fs/ksmbd/smb_common.h +++ b/fs/ksmbd/smb_common.h @@ -427,7 +427,7 @@ bool ksmbd_smb_request(struct ksmbd_conn
int ksmbd_lookup_dialect_by_id(__le16 *cli_dialects, __le16 dialects_count);
-int ksmbd_init_smb_server(struct ksmbd_work *work); +void ksmbd_init_smb_server(struct ksmbd_work *work);
struct ksmbd_kstat; int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work,
From: Jeremy Soller jeremy@system76.com
commit 36d4d213c6d4fffae2645a601e8ae996de4c3645 upstream.
Fixes speaker output and headset detection on Clevo X370SNW.
Signed-off-by: Jeremy Soller jeremy@system76.com Signed-off-by: Tim Crawford tcrawford@system76.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230331162317.14992-1-tcrawford@system76.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -2624,6 +2624,7 @@ static const struct snd_pci_quirk alc882 SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS), SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3), SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX), + SND_PCI_QUIRK(0x1558, 0x3702, "Clevo X370SN[VW]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
From: Andy Chi andy.chi@canonical.com
commit 9fdc1605c504204e0fdec7892b29c916579e06f3 upstream.
There is an HP ProBook which uses the ALC236 codec and needs the ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF quirk to make the mute LED and micmute LED work.
Signed-off-by: Andy Chi andy.chi@canonical.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230331083242.58416-1-andy.chi@canonical.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -9443,6 +9443,7 @@ static const struct snd_pci_quirk alc269 SND_PCI_QUIRK(0x103c, 0x8b47, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8b5d, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), SND_PCI_QUIRK(0x103c, 0x8b5e, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), + SND_PCI_QUIRK(0x103c, 0x8b66, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), SND_PCI_QUIRK(0x103c, 0x8b7a, "HP", ALC236_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8b7d, "HP", ALC236_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8b87, "HP", ALC236_FIXUP_HP_GPIO_LED),
From: Eric DeVolder eric.devolder@oracle.com
commit fed8d8773b8ea68ad99d9eee8c8343bef9da2c2c upstream.
The logic in acpi_is_processor_usable() requires the online capable bit to be set for hotpluggable CPUs. The online capable bit was introduced in ACPI 6.3.
However, for ACPI revisions < 6.3 which do not support that bit, CPUs should be reported as usable, not the other way around.
Reverse the check.
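For easier auditing, the fixed check reads as the sketch below (a paraphrase of the hunk that follows, reusing its names; not meant as a drop-in):

	if (lapic_flags & ACPI_MADT_ENABLED)
		return true;	/* an already-enabled CPU is always usable */

	/*
	 * Firmware predating ACPI 6.3 cannot express "online capable" at
	 * all, so report such CPUs as usable; on 6.3+ require the bit.
	 */
	if (!acpi_support_online_capable ||
	    (lapic_flags & ACPI_MADT_ONLINE_CAPABLE))
		return true;

	return false;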
[ bp: Rewrite commit message. ]
Fixes: e2869bd7af60 ("x86/acpi/boot: Do not register processors that cannot be onlined for x2APIC") Suggested-by: Miguel Luis miguel.luis@oracle.com Suggested-by: Boris Ostrovsky boris.ovstrosky@oracle.com Signed-off-by: Eric DeVolder eric.devolder@oracle.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Tested-by: David R david@unsolicited.net Cc: stable@kernel.org Link: https://lore.kernel.org/r/20230327191026.3454-2-eric.devolder@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/acpi/boot.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/x86/kernel/acpi/boot.c +++ b/arch/x86/kernel/acpi/boot.c @@ -193,7 +193,8 @@ static bool __init acpi_is_processor_usa if (lapic_flags & ACPI_MADT_ENABLED) return true;
- if (acpi_support_online_capable && (lapic_flags & ACPI_MADT_ONLINE_CAPABLE)) + if (!acpi_support_online_capable || + (lapic_flags & ACPI_MADT_ONLINE_CAPABLE)) return true;
return false;
From: Mario Limonciello mario.limonciello@amd.com
commit a74fabfbd1b7013045afc8cc541e6cab3360ccb5 upstream.
ACPI 6.3 introduced the online capable bit, and also introduced MADT version 5.
The latter was used to determine whether the offset storing the online capable bit could be used. However, ACPI 6.2b has MADT version "45", which belongs to an errata release of the ACPI 6.2 spec. This means that the Linux code keying off the MADT revision will mistakenly flag ACPI 6.2b as supporting the online capable bit, which is inaccurate since that is an ACPI 6.3 feature.
Instead use the FADT major and minor revision fields to distinguish this.
[ bp: Massage. ]
Fixes: aa06e20f1be6 ("x86/ACPI: Don't add CPUs that are not online capable") Reported-by: Eric DeVolder eric.devolder@oracle.com Reported-by: Borislav Petkov bp@alien8.de Signed-off-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Cc: stable@kernel.org Link: https://lore.kernel.org/r/943d2445-84df-d939-f578-5d8240d342cc@unsolicited.n... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kernel/acpi/boot.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/arch/x86/kernel/acpi/boot.c +++ b/arch/x86/kernel/acpi/boot.c @@ -146,7 +146,11 @@ static int __init acpi_parse_madt(struct
pr_debug("Local APIC address 0x%08x\n", madt->address); } - if (madt->header.revision >= 5) + + /* ACPI 6.3 and newer support the online capable bit. */ + if (acpi_gbl_FADT.header.revision > 6 || + (acpi_gbl_FADT.header.revision == 6 && + acpi_gbl_FADT.minor_revision >= 3)) acpi_support_online_capable = true;
default_acpi_madt_oem_check(madt->header.oem_id,
From: Sean Christopherson seanjc@google.com
commit 6c41468c7c12d74843bb414fc00307ea8a6318c3 upstream.
When injecting an exception into a vCPU in Real Mode, suppress the error code by clearing the flag that tracks whether the error code is valid, not by clearing the error code itself. The "typo" was introduced by a recent fix for SVM's funky Paged Real Mode.
Opportunistically hoist the logic above the tracepoint so that the trace is coherent with respect to what is actually injected (this was also the behavior prior to the buggy commit).
Fixes: b97f07458373 ("KVM: x86: determine if an exception has an error code only when injecting it.") Cc: stable@vger.kernel.org Cc: Maxim Levitsky mlevitsk@redhat.com Signed-off-by: Sean Christopherson seanjc@google.com Message-Id: 20230322143300.2209476-2-seanjc@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/x86.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
--- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -9795,13 +9795,20 @@ int kvm_check_nested_events(struct kvm_v
static void kvm_inject_exception(struct kvm_vcpu *vcpu) { + /* + * Suppress the error code if the vCPU is in Real Mode, as Real Mode + * exceptions don't report error codes. The presence of an error code + * is carried with the exception and only stripped when the exception + * is injected as intercepted #PF VM-Exits for AMD's Paged Real Mode do + * report an error code despite the CPU being in Real Mode. + */ + vcpu->arch.exception.has_error_code &= is_protmode(vcpu); + trace_kvm_inj_exception(vcpu->arch.exception.vector, vcpu->arch.exception.has_error_code, vcpu->arch.exception.error_code, vcpu->arch.exception.injected);
- if (vcpu->arch.exception.error_code && !is_protmode(vcpu)) - vcpu->arch.exception.error_code = false; static_call(kvm_x86_inject_exception)(vcpu); }
From: Sean Christopherson seanjc@google.com
commit 80962ec912db56d323883154efc2297473e692cb upstream.
Don't report an error code to L1 when synthesizing a nested VM-Exit and L2 is in Real Mode. Per Intel's SDM, regarding the error code valid bit:
This bit is always 0 if the VM exit occurred while the logical processor was in real-address mode (CR0.PE=0).
The bug was introduced by a recent fix for AMD's Paged Real Mode, which moved the error code suppression from the common "queue exception" path to the "inject exception" path, but missed VMX's "synthesize VM-Exit" path.
Fixes: b97f07458373 ("KVM: x86: determine if an exception has an error code only when injecting it.") Cc: stable@vger.kernel.org Cc: Maxim Levitsky mlevitsk@redhat.com Signed-off-by: Sean Christopherson seanjc@google.com Message-Id: 20230322143300.2209476-3-seanjc@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/vmx/nested.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
--- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -3873,7 +3873,12 @@ static void nested_vmx_inject_exception_ exit_qual = 0; }
- if (ex->has_error_code) { + /* + * Unlike AMD's Paged Real Mode, which reports an error code on #PF + * VM-Exits even if the CPU is in Real Mode, Intel VMX never sets the + * "has error code" flags on VM-Exit if the CPU is in Real Mode. + */ + if (ex->has_error_code && is_protmode(vcpu)) { /* * Intel CPUs do not generate error codes with bits 31:16 set, * and more importantly VMX disallows setting bits 31:16 in the
From: Jeremi Piotrowski jpiotrowski@linux.microsoft.com
commit e5c972c1fadacc858b6a564d056f177275238040 upstream.
The Hyper-V "EnlightenedNptTlb" enlightenment is always enabled when KVM is running on top of Hyper-V and Hyper-V exposes support for it (which is always). On AMD CPUs this enlightenment results in ASID invalidations not flushing TLB entries derived from the NPT. To force the underlying (L0) hypervisor to rebuild its shadow page tables, an explicit hypercall is needed.
The original KVM implementation of Hyper-V's "EnlightenedNptTlb" on SVM only added remote TLB flush hooks. This worked out fine for a while, as sufficient remote TLB flushes were being issued in KVM to mask the problem. Since v5.17, changes in the TDP code reduced the number of flushes and the out-of-sync TLB prevents guests from booting successfully.
Split svm_flush_tlb_current() into separate callbacks for the 3 cases (guest/all/current), and issue the required Hyper-V hypercall when a Hyper-V TLB flush is needed. The most important case where the TLB flush was missing is when loading a new PGD, which is followed by what is now svm_flush_tlb_current().
Cc: stable@vger.kernel.org # v5.17+ Fixes: 1e0c7d40758b ("KVM: SVM: hyper-v: Remote TLB flush for SVM") Link: https://lore.kernel.org/lkml/43980946-7bbf-dcef-7e40-af904c456250@linux.micr... Suggested-by: Sean Christopherson seanjc@google.com Signed-off-by: Jeremi Piotrowski jpiotrowski@linux.microsoft.com Reviewed-by: Vitaly Kuznetsov vkuznets@redhat.com Message-Id: 20230324145233.4585-1-jpiotrowski@linux.microsoft.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/kvm_onhyperv.h | 5 +++++ arch/x86/kvm/svm/svm.c | 37 ++++++++++++++++++++++++++++++++++--- arch/x86/kvm/svm/svm_onhyperv.h | 15 +++++++++++++++ 3 files changed, 54 insertions(+), 3 deletions(-)
--- a/arch/x86/kvm/kvm_onhyperv.h +++ b/arch/x86/kvm/kvm_onhyperv.h @@ -12,6 +12,11 @@ int hv_remote_flush_tlb_with_range(struc int hv_remote_flush_tlb(struct kvm *kvm); void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp); #else /* !CONFIG_HYPERV */ +static inline int hv_remote_flush_tlb(struct kvm *kvm) +{ + return -EOPNOTSUPP; +} + static inline void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp) { } --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3718,7 +3718,7 @@ static void svm_enable_nmi_window(struct svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF); }
-static void svm_flush_tlb_current(struct kvm_vcpu *vcpu) +static void svm_flush_tlb_asid(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu);
@@ -3742,6 +3742,37 @@ static void svm_flush_tlb_current(struct svm->current_vmcb->asid_generation--; }
+static void svm_flush_tlb_current(struct kvm_vcpu *vcpu) +{ + hpa_t root_tdp = vcpu->arch.mmu->root.hpa; + + /* + * When running on Hyper-V with EnlightenedNptTlb enabled, explicitly + * flush the NPT mappings via hypercall as flushing the ASID only + * affects virtual to physical mappings, it does not invalidate guest + * physical to host physical mappings. + */ + if (svm_hv_is_enlightened_tlb_enabled(vcpu) && VALID_PAGE(root_tdp)) + hyperv_flush_guest_mapping(root_tdp); + + svm_flush_tlb_asid(vcpu); +} + +static void svm_flush_tlb_all(struct kvm_vcpu *vcpu) +{ + /* + * When running on Hyper-V with EnlightenedNptTlb enabled, remote TLB + * flushes should be routed to hv_remote_flush_tlb() without requesting + * a "regular" remote flush. Reaching this point means either there's + * a KVM bug or a prior hv_remote_flush_tlb() call failed, both of + * which might be fatal to the guest. Yell, but try to recover. + */ + if (WARN_ON_ONCE(svm_hv_is_enlightened_tlb_enabled(vcpu))) + hv_remote_flush_tlb(vcpu->kvm); + + svm_flush_tlb_asid(vcpu); +} + static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva) { struct vcpu_svm *svm = to_svm(vcpu); @@ -4747,10 +4778,10 @@ static struct kvm_x86_ops svm_x86_ops __ .set_rflags = svm_set_rflags, .get_if_flag = svm_get_if_flag,
- .flush_tlb_all = svm_flush_tlb_current, + .flush_tlb_all = svm_flush_tlb_all, .flush_tlb_current = svm_flush_tlb_current, .flush_tlb_gva = svm_flush_tlb_gva, - .flush_tlb_guest = svm_flush_tlb_current, + .flush_tlb_guest = svm_flush_tlb_asid,
.vcpu_pre_run = svm_vcpu_pre_run, .vcpu_run = svm_vcpu_run, --- a/arch/x86/kvm/svm/svm_onhyperv.h +++ b/arch/x86/kvm/svm/svm_onhyperv.h @@ -6,6 +6,8 @@ #ifndef __ARCH_X86_KVM_SVM_ONHYPERV_H__ #define __ARCH_X86_KVM_SVM_ONHYPERV_H__
+#include <asm/mshyperv.h> + #if IS_ENABLED(CONFIG_HYPERV)
#include "kvm_onhyperv.h" @@ -15,6 +17,14 @@ static struct kvm_x86_ops svm_x86_ops;
int svm_hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu);
+static inline bool svm_hv_is_enlightened_tlb_enabled(struct kvm_vcpu *vcpu) +{ + struct hv_vmcb_enlightenments *hve = &to_svm(vcpu)->vmcb->control.hv_enlightenments; + + return ms_hyperv.nested_features & HV_X64_NESTED_ENLIGHTENED_TLB && + !!hve->hv_enlightenments_control.enlightened_npt_tlb; +} + static inline void svm_hv_init_vmcb(struct vmcb *vmcb) { struct hv_vmcb_enlightenments *hve = &vmcb->control.hv_enlightenments; @@ -80,6 +90,11 @@ static inline void svm_hv_update_vp_id(s } #else
+static inline bool svm_hv_is_enlightened_tlb_enabled(struct kvm_vcpu *vcpu) +{ + return false; +} + static inline void svm_hv_init_vmcb(struct vmcb *vmcb) { }
From: Muchun Song songmuchun@bytedance.com
commit 3ee2d7471fa4963a2ced0a84f0653ce88b43c5b2 upstream.
The code does not reset PG_slab and memcg_data when KFENCE fails to initialize the kfence pool at runtime, which leads to a "Bad page state" report when the pool is freed back to the buddy allocator. The check for a compound head page seems unnecessary since we already guarantee this when allocating the kfence pool. Remove the check to simplify the code.
Link: https://lkml.kernel.org/r/20230320030059.20189-1-songmuchun@bytedance.com Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure") Signed-off-by: Muchun Song songmuchun@bytedance.com Cc: Alexander Potapenko glider@google.com Cc: Dmitry Vyukov dvyukov@google.com Cc: Jann Horn jannh@google.com Cc: Marco Elver elver@google.com Cc: Roman Gushchin roman.gushchin@linux.dev Cc: SeongJae Park sjpark@amazon.de Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/kfence/core.c | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-)
--- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -561,10 +561,6 @@ static unsigned long kfence_init_pool(vo if (!i || (i % 2)) continue;
- /* Verify we do not have a compound head page. */ - if (WARN_ON(compound_head(&pages[i]) != &pages[i])) - return addr; - __folio_set_slab(slab_folio(slab)); #ifdef CONFIG_MEMCG slab->memcg_data = (unsigned long)&kfence_metadata[i / 2 - 1].objcg | @@ -597,12 +593,26 @@ static unsigned long kfence_init_pool(vo
/* Protect the right redzone. */ if (unlikely(!kfence_protect(addr + PAGE_SIZE))) - return addr; + goto reset_slab;
addr += 2 * PAGE_SIZE; }
return 0; + +reset_slab: + for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) { + struct slab *slab = page_slab(&pages[i]); + + if (!i || (i % 2)) + continue; +#ifdef CONFIG_MEMCG + slab->memcg_data = 0; +#endif + __folio_clear_slab(slab_folio(slab)); + } + + return addr; }
static bool __init kfence_init_pool_early(void) @@ -632,16 +642,6 @@ static bool __init kfence_init_pool_earl * fails for the first page, and therefore expect addr==__kfence_pool in * most failure cases. */ - for (char *p = (char *)addr; p < __kfence_pool + KFENCE_POOL_SIZE; p += PAGE_SIZE) { - struct slab *slab = virt_to_slab(p); - - if (!slab) - continue; -#ifdef CONFIG_MEMCG - slab->memcg_data = 0; -#endif - __folio_clear_slab(slab_folio(slab)); - } memblock_free_late(__pa(addr), KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool)); __kfence_pool = NULL; return false;
From: Muchun Song songmuchun@bytedance.com
commit 1f2803b2660f4b04d48d065072c0ae0c9ca255fd upstream.
The struct pages could be discontiguous when the kfence pool is allocated via alloc_contig_pages() with CONFIG_SPARSEMEM and !CONFIG_SPARSEMEM_VMEMMAP.
This may result in setting PG_slab and memcg_data on an arbitrary address (which may not even be used as a struct page), and in the worst case this can corrupt the kernel.
So the iteration should use nth_page().
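A minimal illustration of the difference (assuming a SPARSEMEM, non-VMEMMAP layout; illustrative names, not the literal kernel code):

	/* Wrong: plain pointer arithmetic assumes one contiguous memmap
	 * array and can step into unrelated memory across a section gap. */
	struct page *bad = &pages[i];

	/* Right: nth_page() expands to pfn_to_page(page_to_pfn(page) + n),
	 * so the step is taken in PFN space and stays correct even when
	 * the struct pages are discontiguous. */
	struct page *good = nth_page(pages, i);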
Link: https://lkml.kernel.org/r/20230323025003.94447-1-songmuchun@bytedance.com Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure") Signed-off-by: Muchun Song songmuchun@bytedance.com Reviewed-by: Marco Elver elver@google.com Reviewed-by: Kefeng Wang wangkefeng.wang@huawei.com Cc: Alexander Potapenko glider@google.com Cc: Dmitry Vyukov dvyukov@google.com Cc: Jann Horn jannh@google.com Cc: SeongJae Park sjpark@amazon.de Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/kfence/core.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -556,7 +556,7 @@ static unsigned long kfence_init_pool(vo * enters __slab_free() slow-path. */ for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) { - struct slab *slab = page_slab(&pages[i]); + struct slab *slab = page_slab(nth_page(pages, i));
if (!i || (i % 2)) continue; @@ -602,7 +602,7 @@ static unsigned long kfence_init_pool(vo
reset_slab: for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) { - struct slab *slab = page_slab(&pages[i]); + struct slab *slab = page_slab(nth_page(pages, i));
if (!i || (i % 2)) continue;
From: Suzuki K Poulose suzuki.poulose@arm.com
commit 735e7b30a53a1679c050cddb73f5e5316105d2e3 upstream.
CoreSight ETM4x architecture clearly provides ways to identify a device via registers in the "Management" class, TRCDEVARCH and TRCDEVTYPE. These registers can be accessed without the Trace domain being powered on. We additionally added TRCIDR1 as a fallback in order to cover any ETMs that may not have implemented TRCDEVARCH. So far, nobody has reported hitting the WARNING we placed to catch such systems.
Also, more importantly, it is problematic to access TRCIDR1, a "Trace" register, via MMIO without clearing the OSLK. But we cannot mess with the OSLK until we know for sure that this is an ETMv4 device. This creates an unnecessary chicken-and-egg problem for systems "which are compliant" with the ETMv4 architecture.
Let us remove the TRCIDR1 fall back check and rely only on TRCDEVARCH.
Fixes: 8b94db1edaee ("coresight: etm4x: Use TRCDEVARCH for component discovery") Cc: stable@vger.kernel.org Reported-by: Steve Clevenger scclevenger@os.amperecomputing.com Link: https://lore.kernel.org/all/143540e5623d4c7393d24833f2b80600d8d745d2.1677881... Cc: Mike Leach mike.leach@linaro.org Cc: James Clark james.clark@arm.com Reviewed-by: Mike Leach mike.leach@linaro.org Reviewed-by: Anshuman Khandual anshuman.khandual@arm.com Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Link: https://lore.kernel.org/r/20230321104530.1547136-1-suzuki.poulose@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hwtracing/coresight/coresight-etm4x-core.c | 22 ++++++++------------- drivers/hwtracing/coresight/coresight-etm4x.h | 20 +++++-------------- 2 files changed, 15 insertions(+), 27 deletions(-)
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c +++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c @@ -1013,25 +1013,21 @@ static bool etm4_init_iomem_access(struc struct csdev_access *csa) { u32 devarch = readl_relaxed(drvdata->base + TRCDEVARCH); - u32 idr1 = readl_relaxed(drvdata->base + TRCIDR1);
/* * All ETMs must implement TRCDEVARCH to indicate that - * the component is an ETMv4. To support any broken - * implementations we fall back to TRCIDR1 check, which - * is not really reliable. + * the component is an ETMv4. Even though TRCIDR1 also + * contains the information, it is part of the "Trace" + * register and must be accessed with the OSLK cleared, + * with MMIO. But we cannot touch the OSLK until we are + * sure this is an ETM. So rely only on the TRCDEVARCH. */ - if ((devarch & ETM_DEVARCH_ID_MASK) == ETM_DEVARCH_ETMv4x_ARCH) { - drvdata->arch = etm_devarch_to_arch(devarch); - } else { - pr_warn("CPU%d: ETM4x incompatible TRCDEVARCH: %x, falling back to TRCIDR1\n", - smp_processor_id(), devarch); - - if (ETM_TRCIDR1_ARCH_MAJOR(idr1) != ETM_TRCIDR1_ARCH_ETMv4) - return false; - drvdata->arch = etm_trcidr_to_arch(idr1); + if ((devarch & ETM_DEVARCH_ID_MASK) != ETM_DEVARCH_ETMv4x_ARCH) { + pr_warn_once("TRCDEVARCH doesn't match ETMv4 architecture\n"); + return false; }
+ drvdata->arch = etm_devarch_to_arch(devarch); *csa = CSDEV_ACCESS_IOMEM(drvdata->base); return true; } --- a/drivers/hwtracing/coresight/coresight-etm4x.h +++ b/drivers/hwtracing/coresight/coresight-etm4x.h @@ -753,14 +753,12 @@ * TRCDEVARCH - CoreSight architected register * - Bits[15:12] - Major version * - Bits[19:16] - Minor version - * TRCIDR1 - ETM architected register - * - Bits[11:8] - Major version - * - Bits[7:4] - Minor version - * We must rely on TRCDEVARCH for the version information, - * however we don't want to break the support for potential - * old implementations which might not implement it. Thus - * we fall back to TRCIDR1 if TRCDEVARCH is not implemented - * for memory mapped components. + * + * We must rely only on TRCDEVARCH for the version information. Even though, + * TRCIDR1 also provides the architecture version, it is a "Trace" register + * and as such must be accessed only with Trace power domain ON. This may + * not be available at probe time. + * * Now to make certain decisions easier based on the version * we use an internal representation of the version in the * driver, as follows : @@ -786,12 +784,6 @@ static inline u8 etm_devarch_to_arch(u32 ETM_DEVARCH_REVISION(devarch)); }
-static inline u8 etm_trcidr_to_arch(u32 trcidr1) -{ - return ETM_ARCH_VERSION(ETM_TRCIDR1_ARCH_MAJOR(trcidr1), - ETM_TRCIDR1_ARCH_MINOR(trcidr1)); -} - enum etm_impdef_type { ETM4_IMPDEF_HISI_CORE_COMMIT, ETM4_IMPDEF_FEATURE_MAX,
From: Steve Clevenger scclevenger@os.amperecomputing.com
commit bf84937e882009075f57fd213836256fc65d96bc upstream.
In etm4_enable_hw(), fix the for() loop range so that it covers the address comparator pairs.
Fixes: 2e1cdfe184b5 ("coresight-etm4x: Adding CoreSight ETM4x driver") Cc: stable@vger.kernel.org Signed-off-by: Steve Clevenger scclevenger@os.amperecomputing.com Reviewed-by: James Clark james.clark@arm.com Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Link: https://lore.kernel.org/r/4a4ee61ce8ef402615a4528b21a051de3444fb7b.167754007... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hwtracing/coresight/coresight-etm4x-core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c +++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c @@ -454,7 +454,7 @@ static int etm4_enable_hw(struct etmv4_d if (etm4x_sspcicrn_present(drvdata, i)) etm4x_relaxed_write32(csa, config->ss_pe_cmp[i], TRCSSPCICRn(i)); } - for (i = 0; i < drvdata->nr_addr_cmp; i++) { + for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) { etm4x_relaxed_write64(csa, config->addr_val[i], TRCACVRn(i)); etm4x_relaxed_write64(csa, config->addr_acc[i], TRCACATRn(i)); }
From: William Breathitt Gray william.gray@linaro.org
commit 4aa3b75c74603c3374877d5fd18ad9cc3a9a62ed upstream.
The Counter (CNTR) register is 24 bits wide, but we can have an effective 25-bit count value by setting bit 24 to the XOR of the Borrow flag and Carry flag. The flags can be read from the FLAG register, but a race condition exists: the Borrow flag and Carry flag are instantaneous and could change by the time the count value is read from the CNTR register.
Since the race condition could result in an incorrect 25-bit count value, remove support for 25-bit count values from this driver; hard-coded maximum count values are replaced by a LS7267_CNTR_MAX define for consistency and clarity.
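For reference, the removed read pattern with the race window annotated (comments are illustrative; the statements mirror the hunk below):

	flags  = ioread8(&chan->control);	/* snapshot Borrow/Carry */
	borrow = flags & QUAD8_FLAG_BT;
	carry  = !!(flags & QUAD8_FLAG_CT);
	/* The hardware counter keeps running: it can roll over right
	 * here, so the snapshot above no longer matches the count that
	 * is latched below. */
	*val = (unsigned long)(borrow ^ carry) << 24;	/* bit 24 may be stale */
	/* ... bits 23:0 are then read from the CNTR register ... */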
Fixes: 28e5d3bb0325 ("iio: 104-quad-8: Add IIO support for the ACCES 104-QUAD-8") Cc: stable@vger.kernel.org # 6.1.x Cc: stable@vger.kernel.org # 6.2.x Link: https://lore.kernel.org/r/20230312231554.134858-1-william.gray@linaro.org/ Signed-off-by: William Breathitt Gray william.gray@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/counter/104-quad-8.c | 29 ++++++++--------------------- 1 file changed, 8 insertions(+), 21 deletions(-)
--- a/drivers/counter/104-quad-8.c +++ b/drivers/counter/104-quad-8.c @@ -97,10 +97,6 @@ struct quad8 { struct quad8_reg __iomem *reg; };
-/* Borrow Toggle flip-flop */ -#define QUAD8_FLAG_BT BIT(0) -/* Carry Toggle flip-flop */ -#define QUAD8_FLAG_CT BIT(1) /* Error flag */ #define QUAD8_FLAG_E BIT(4) /* Up/Down flag */ @@ -133,6 +129,9 @@ struct quad8 { #define QUAD8_CMR_QUADRATURE_X2 0x10 #define QUAD8_CMR_QUADRATURE_X4 0x18
+/* Each Counter is 24 bits wide */ +#define LS7267_CNTR_MAX GENMASK(23, 0) + static int quad8_signal_read(struct counter_device *counter, struct counter_signal *signal, enum counter_signal_level *level) @@ -156,18 +155,10 @@ static int quad8_count_read(struct count { struct quad8 *const priv = counter_priv(counter); struct channel_reg __iomem *const chan = priv->reg->channel + count->id; - unsigned int flags; - unsigned int borrow; - unsigned int carry; unsigned long irqflags; int i;
- flags = ioread8(&chan->control); - borrow = flags & QUAD8_FLAG_BT; - carry = !!(flags & QUAD8_FLAG_CT); - - /* Borrow XOR Carry effectively doubles count range */ - *val = (unsigned long)(borrow ^ carry) << 24; + *val = 0;
spin_lock_irqsave(&priv->lock, irqflags);
@@ -191,8 +182,7 @@ static int quad8_count_write(struct coun unsigned long irqflags; int i;
- /* Only 24-bit values are supported */ - if (val > 0xFFFFFF) + if (val > LS7267_CNTR_MAX) return -ERANGE;
spin_lock_irqsave(&priv->lock, irqflags); @@ -806,8 +796,7 @@ static int quad8_count_preset_write(stru struct quad8 *const priv = counter_priv(counter); unsigned long irqflags;
- /* Only 24-bit values are supported */ - if (preset > 0xFFFFFF) + if (preset > LS7267_CNTR_MAX) return -ERANGE;
spin_lock_irqsave(&priv->lock, irqflags); @@ -834,8 +823,7 @@ static int quad8_count_ceiling_read(stru *ceiling = priv->preset[count->id]; break; default: - /* By default 0x1FFFFFF (25 bits unsigned) is maximum count */ - *ceiling = 0x1FFFFFF; + *ceiling = LS7267_CNTR_MAX; break; }
@@ -850,8 +838,7 @@ static int quad8_count_ceiling_write(str struct quad8 *const priv = counter_priv(counter); unsigned long irqflags;
- /* Only 24-bit values are supported */ - if (ceiling > 0xFFFFFF) + if (ceiling > LS7267_CNTR_MAX) return -ERANGE;
spin_lock_irqsave(&priv->lock, irqflags);
From: William Breathitt Gray william.gray@linaro.org
commit 00f4bc5184c19cb33f468f1ea409d70d19f8f502 upstream.
Signal 16 and higher represent the device's Index lines. The priv->preset_enable array holds the device configuration for these Index lines. The preset_enable configuration is active low on the device, so invert the conditional check in quad8_action_read() to properly handle the logical state of preset_enable.
Fixes: f1d8a071d45b ("counter: 104-quad-8: Add Generic Counter interface support") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230316203426.224745-1-william.gray@linaro.org/ Signed-off-by: William Breathitt Gray william.gray@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/counter/104-quad-8.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/counter/104-quad-8.c +++ b/drivers/counter/104-quad-8.c @@ -368,7 +368,7 @@ static int quad8_action_read(struct coun
/* Handle Index signals */ if (synapse->signal->id >= 16) { - if (priv->preset_enable[count->id]) + if (!priv->preset_enable[count->id]) *action = COUNTER_SYNAPSE_ACTION_RISING_EDGE; else *action = COUNTER_SYNAPSE_ACTION_NONE;
From: Keith Busch kbusch@kernel.org
commit 38a8c4d1d45006841f0643f4cb29b5e50758837c upstream.
Polling needs a bio with a valid bi_bdev, but neither of those is guaranteed for polled driver requests. Make request-based polling directly use blk-mq's polling function instead.
When executing a request from a polled hctx, we know the request's cookie, and that it's from a live blk-mq queue that supports polling, so we can safely skip everything that bio_poll provides.
Cc: stable@kernel.org Reported-by: Martin Belanger Martin.Belanger@dell.com Reported-by: Daniel Wagner dwagner@suse.de Signed-off-by: Keith Busch kbusch@kernel.org Tested-by: Daniel Wagner dwagner@suse.de Reviewed-by: Daniel Wagner dwagner@suse.de Reviewed-by: Chaitanya Kulkarni kch@nvidia.com Reviewed-by: Sagi Grimberg sagi@grimberg.me Reviewed-by: Christoph Hellwig hch@lst.de Tested-by: Shin'ichiro Kawasaki shinichiro.kawasaki@wdc.com Link: https://lore.kernel.org/r/20230331180056.1155862-1-kbusch@meta.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- block/blk-mq.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
--- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -1359,8 +1359,6 @@ bool blk_rq_is_poll(struct request *rq) return false; if (rq->mq_hctx->type != HCTX_TYPE_POLL) return false; - if (WARN_ON_ONCE(!rq->bio)) - return false; return true; } EXPORT_SYMBOL_GPL(blk_rq_is_poll); @@ -1368,7 +1366,7 @@ EXPORT_SYMBOL_GPL(blk_rq_is_poll); static void blk_rq_poll_completion(struct request *rq, struct completion *wait) { do { - bio_poll(rq->bio, NULL, 0); + blk_mq_poll(rq->q, blk_rq_to_qc(rq), NULL, 0); cond_resched(); } while (!completion_done(wait)); }
From: John Keeping john@metanate.com
commit ea65b41807a26495ff2a73dd8b1bab2751940887 upstream.
If the compiler decides not to inline this function then preemption tracing will always show an IP inside the preemption disabling path and never the function actually calling preempt_{enable,disable}.
Link: https://lore.kernel.org/linux-trace-kernel/20230327173647.1690849-1-john@met...
Cc: Masami Hiramatsu mhiramat@kernel.org Cc: Mark Rutland mark.rutland@arm.com Cc: stable@vger.kernel.org Fixes: f904f58263e1d ("sched/debug: Fix preempt_disable_ip recording for preempt_disable()") Signed-off-by: John Keeping john@metanate.com Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/ftrace.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -970,7 +970,7 @@ static inline void __ftrace_enabled_rest #define CALLER_ADDR5 ((unsigned long)ftrace_return_address(5)) #define CALLER_ADDR6 ((unsigned long)ftrace_return_address(6))
-static inline unsigned long get_lock_parent_ip(void) +static __always_inline unsigned long get_lock_parent_ip(void) { unsigned long addr = CALLER_ADDR0;
From: Zheng Yejian zhengyejian1@huawei.com
commit 2a2d8c51defb446e8d89a83f42f8e5cd529111e9 upstream.
Syzkaller reports a WARNING: "WARN_ON(!direct)" in modify_ftrace_direct().
The root cause is that 'direct->addr' was changed from 'old_addr' to 'new_addr' but not restored if an error happened in ftrace_modify_direct_caller(). After that, 'direct' can no longer be found by that 'old_addr'.
To fix it, restore 'direct->addr' to 'old_addr' explicitly in error path.
Link: https://lore.kernel.org/linux-trace-kernel/20230330025223.1046087-1-zhengyej...
Cc: stable@vger.kernel.org Cc: mhiramat@kernel.org Cc: mark.rutland@arm.com Cc: ast@kernel.org Cc: daniel@iogearbox.net Fixes: 8a141dd7f706 ("ftrace: Fix modify_ftrace_direct.") Signed-off-by: Zheng Yejian zhengyejian1@huawei.com Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/ftrace.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-)
--- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -5568,12 +5568,15 @@ int modify_ftrace_direct(unsigned long i ret = 0; }
- if (unlikely(ret && new_direct)) { - direct->count++; - list_del_rcu(&new_direct->next); - synchronize_rcu_tasks(); - kfree(new_direct); - ftrace_direct_func_count--; + if (ret) { + direct->addr = old_addr; + if (unlikely(new_direct)) { + direct->count++; + list_del_rcu(&new_direct->next); + synchronize_rcu_tasks(); + kfree(new_direct); + ftrace_direct_func_count--; + } }
out_unlock:
From: Christian Brauner brauner@kernel.org
commit cb2239c198ad9fbd5aced22cf93e45562da781eb upstream.
When cleaning up peer group ids in the failure path we need to make sure to hold on to the namespace lock. Otherwise another thread might just turn the mount from a shared into a non-shared mount concurrently.
Link: https://lore.kernel.org/lkml/00000000000088694505f8132d77@google.com Fixes: 2a1867219c7b ("fs: add mount_setattr()") Reported-by: syzbot+8ac3859139c685c4f597@syzkaller.appspotmail.com Cc: stable@vger.kernel.org # 5.12+ Message-Id: 20230330-vfs-mount_setattr-propagation-fix-v1-1-37548d91533b@kernel.org Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/namespace.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/namespace.c +++ b/fs/namespace.c @@ -4286,9 +4286,9 @@ out: unlock_mount_hash();
if (kattr->propagation) { - namespace_unlock(); if (err) cleanup_group_ids(mnt, NULL); + namespace_unlock(); }
return err;
From: Oleksij Rempel o.rempel@pengutronix.de
commit b45193cb4df556fe6251b285a5ce44046dd36b4a upstream.
In the j1939_tp_tx_dat_new() function, an out-of-bounds memory access could occur during the memcpy() operation if the size of skb->cb is larger than the size of struct j1939_sk_buff_cb. This is because the memcpy() operation uses the size of skb->cb, leading to a read beyond the struct j1939_sk_buff_cb.
Update the memcpy() operation to use the size of struct j1939_sk_buff_cb instead of the size of skb->cb. This ensures that the memcpy() only reads memory within the bounds of struct j1939_sk_buff_cb, preventing out-of-bounds memory access.
Additionally, add a BUILD_BUG_ON() to check that the size of skb->cb is greater than or equal to the size of struct j1939_sk_buff_cb. This ensures that the skb->cb buffer is large enough to hold the j1939_sk_buff_cb structure.
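Spelled out with the sizes involved (skb->cb is a 48-byte scratch array while struct j1939_sk_buff_cb is smaller; illustrative restatement, not the literal hunk):

	/* Before: reads sizeof(skb->cb) == 48 bytes from re_skcb, running
	 * past the end of the source structure. */
	memcpy(skb->cb, re_skcb, sizeof(skb->cb));

	/* After: copies exactly one structure, staying in bounds, with a
	 * compile-time guarantee that the destination can hold it. */
	BUILD_BUG_ON(sizeof(skb->cb) < sizeof(*re_skcb));
	memcpy(skb->cb, re_skcb, sizeof(*re_skcb));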
Fixes: 9d71dd0c7009 ("can: add support of SAE J1939 protocol") Reported-by: Shuangpeng Bai sjb7183@psu.edu Tested-by: Shuangpeng Bai sjb7183@psu.edu Signed-off-by: Oleksij Rempel o.rempel@pengutronix.de Link: https://groups.google.com/g/syzkaller/c/G_LL-C3plRs/m/-8xCi6dCAgAJ Link: https://lore.kernel.org/all/20230404073128.3173900-1-o.rempel@pengutronix.de Cc: stable@vger.kernel.org [mkl: rephrase commit message] Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/can/j1939/transport.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/net/can/j1939/transport.c +++ b/net/can/j1939/transport.c @@ -604,7 +604,10 @@ sk_buff *j1939_tp_tx_dat_new(struct j193 /* reserve CAN header */ skb_reserve(skb, offsetof(struct can_frame, data));
- memcpy(skb->cb, re_skcb, sizeof(skb->cb)); + /* skb->cb must be large enough to hold a j1939_sk_buff_cb structure */ + BUILD_BUG_ON(sizeof(skb->cb) < sizeof(*re_skcb)); + + memcpy(skb->cb, re_skcb, sizeof(*re_skcb)); skcb = j1939_skb_to_cb(skb); if (swap_src_dst) j1939_skbcb_swap(skcb);
From: Oliver Hartkopp socketcan@hartkopp.net
commit 051737439eaee5bdd03d3c2ef5510d54a478fd05 upstream.
As discussed with Dae R. Jeong and Hillf Danton here [1] the sendmsg() function in isotp.c might get into a race condition when restoring the former tx.state from the old_state.
Remove the old_state concept and implement proper locking for the ISOTP_IDLE transitions in isotp_sendmsg(), inspired by a simplification idea from Hillf Danton.
Introduce a new tx.state ISOTP_SHUTDOWN and use the same locking mechanism from isotp_release(), which resolves a potential race between isotp_sendmsg() and isotp_release().
[1] https://lore.kernel.org/linux-can/ZB%2F93xJxq%2FBUqAgG@dragonet
v1: https://lore.kernel.org/all/20230331102114.15164-1-socketcan@hartkopp.net
v2: https://lore.kernel.org/all/20230331123600.3550-1-socketcan@hartkopp.net
    take care of signal interrupts for wait_event_interruptible() in isotp_release()
v3: https://lore.kernel.org/all/20230331130654.9886-1-socketcan@hartkopp.net
    take care of signal interrupts for wait_event_interruptible() in isotp_sendmsg() in the wait_tx_done case
v4: https://lore.kernel.org/all/20230331131935.21465-1-socketcan@hartkopp.net
    take care of signal interrupts for wait_event_interruptible() in isotp_sendmsg() in ALL cases
Cc: Dae R. Jeong threeearcat@gmail.com Cc: Hillf Danton hdanton@sina.com Signed-off-by: Oliver Hartkopp socketcan@hartkopp.net Fixes: 4f027cba8216 ("can: isotp: split tx timer into transmission and timeout") Link: https://lore.kernel.org/all/20230331131935.21465-1-socketcan@hartkopp.net Cc: stable@vger.kernel.org [mkl: rephrase commit message] Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/can/isotp.c | 55 +++++++++++++++++++++++++++++++------------------------ 1 file changed, 31 insertions(+), 24 deletions(-)
--- a/net/can/isotp.c +++ b/net/can/isotp.c @@ -119,7 +119,8 @@ enum { ISOTP_WAIT_FIRST_FC, ISOTP_WAIT_FC, ISOTP_WAIT_DATA, - ISOTP_SENDING + ISOTP_SENDING, + ISOTP_SHUTDOWN, };
struct tpcon { @@ -880,8 +881,8 @@ static enum hrtimer_restart isotp_tx_tim txtimer); struct sock *sk = &so->sk;
- /* don't handle timeouts in IDLE state */ - if (so->tx.state == ISOTP_IDLE) + /* don't handle timeouts in IDLE or SHUTDOWN state */ + if (so->tx.state == ISOTP_IDLE || so->tx.state == ISOTP_SHUTDOWN) return HRTIMER_NORESTART;
/* we did not get any flow control or echo frame in time */ @@ -918,7 +919,6 @@ static int isotp_sendmsg(struct socket * { struct sock *sk = sock->sk; struct isotp_sock *so = isotp_sk(sk); - u32 old_state = so->tx.state; struct sk_buff *skb; struct net_device *dev; struct canfd_frame *cf; @@ -928,23 +928,24 @@ static int isotp_sendmsg(struct socket * int off; int err;
- if (!so->bound) + if (!so->bound || so->tx.state == ISOTP_SHUTDOWN) return -EADDRNOTAVAIL;
+wait_free_buffer: /* we do not support multiple buffers - for now */ - if (cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SENDING) != ISOTP_IDLE || - wq_has_sleeper(&so->wait)) { - if (msg->msg_flags & MSG_DONTWAIT) { - err = -EAGAIN; - goto err_out; - } + if (wq_has_sleeper(&so->wait) && (msg->msg_flags & MSG_DONTWAIT)) + return -EAGAIN;
- /* wait for complete transmission of current pdu */ - err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE); - if (err) - goto err_out; + /* wait for complete transmission of current pdu */ + err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE); + if (err) + goto err_event_drop; + + if (cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SENDING) != ISOTP_IDLE) { + if (so->tx.state == ISOTP_SHUTDOWN) + return -EADDRNOTAVAIL;
- so->tx.state = ISOTP_SENDING; + goto wait_free_buffer; }
if (!size || size > MAX_MSG_LENGTH) { @@ -1074,7 +1075,9 @@ static int isotp_sendmsg(struct socket *
if (wait_tx_done) { /* wait for complete transmission of current pdu */ - wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE); + err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE); + if (err) + goto err_event_drop;
if (sk->sk_err) return -sk->sk_err; @@ -1082,13 +1085,15 @@ static int isotp_sendmsg(struct socket *
return size;
+err_event_drop: + /* got signal: force tx state machine to be idle */ + so->tx.state = ISOTP_IDLE; + hrtimer_cancel(&so->txfrtimer); + hrtimer_cancel(&so->txtimer); err_out_drop: /* drop this PDU and unlock a potential wait queue */ - old_state = ISOTP_IDLE; -err_out: - so->tx.state = old_state; - if (so->tx.state == ISOTP_IDLE) - wake_up_interruptible(&so->wait); + so->tx.state = ISOTP_IDLE; + wake_up_interruptible(&so->wait);
return err; } @@ -1150,10 +1155,12 @@ static int isotp_release(struct socket * net = sock_net(sk);
/* wait for complete transmission of current pdu */ - wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE); + while (wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE) == 0 && + cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SHUTDOWN) != ISOTP_IDLE) + ;
/* force state machines to be idle also when a signal occurred */ - so->tx.state = ISOTP_IDLE; + so->tx.state = ISOTP_SHUTDOWN; so->rx.state = ISOTP_IDLE;
spin_lock(&isotp_notifier_lock);
From: Michal Sojka michal.sojka@cvut.cz
commit 79e19fa79cb5d5f1b3bf3e3ae24989ccb93c7b7b upstream.
When using select()/poll()/epoll() with a non-blocking ISOTP socket to wait until a non-blocking write is possible, a false EPOLLOUT event is sometimes returned. This can happen at least after sending a message which has to be split into multiple CAN frames.
The reason is that isotp_sendmsg() returns -EAGAIN when tx.state is not equal to ISOTP_IDLE and this behavior is not reflected in datagram_poll(), which is used in isotp_ops.
This is fixed by introducing an ISOTP-specific poll function, which suppresses the EPOLLOUT events in that case.
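A minimal userspace sketch of the pattern that hit the false EPOLLOUT (interface name, CAN IDs and buffer size are placeholders):

	#include <fcntl.h>
	#include <net/if.h>
	#include <poll.h>
	#include <sys/socket.h>
	#include <unistd.h>
	#include <linux/can.h>
	#include <linux/can/isotp.h>

	int main(void)
	{
		int s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
		struct sockaddr_can addr = {
			.can_family  = AF_CAN,
			.can_ifindex = if_nametoindex("can0"),	/* placeholder */
		};
		char buf[100] = { 0 };		/* needs more than one CAN frame */

		addr.can_addr.tp.tx_id = 0x123;	/* placeholder IDs */
		addr.can_addr.tp.rx_id = 0x321;
		bind(s, (struct sockaddr *)&addr, sizeof(addr));
		fcntl(s, F_SETFL, O_NONBLOCK);

		write(s, buf, sizeof(buf));	/* starts a multi-frame PDU */

		struct pollfd pfd = { .fd = s, .events = POLLOUT };
		poll(&pfd, 1, -1);		/* bug: could report POLLOUT while
						 * the PDU was still in flight */
		write(s, buf, sizeof(buf));	/* ... making this fail with EAGAIN */
		return 0;
	}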
v2: https://lore.kernel.org/all/20230302092812.320643-1-michal.sojka@cvut.cz
v1: https://lore.kernel.org/all/20230224010659.48420-1-michal.sojka@cvut.cz
    https://lore.kernel.org/all/b53a04a2-ba1f-3858-84c1-d3eb3301ae15@hartkopp.ne...
Signed-off-by: Michal Sojka michal.sojka@cvut.cz Reported-by: Jakub Jira jirajak2@fel.cvut.cz Tested-by: Oliver Hartkopp socketcan@hartkopp.net Acked-by: Oliver Hartkopp socketcan@hartkopp.net Fixes: e057dd3fc20f ("can: add ISO 15765-2:2016 transport protocol") Link: https://lore.kernel.org/all/20230331125511.372783-1-michal.sojka@cvut.cz Cc: stable@vger.kernel.org Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/can/isotp.c | 17 ++++++++++++++++- 1 file changed, 16 insertions(+), 1 deletion(-)
--- a/net/can/isotp.c +++ b/net/can/isotp.c @@ -1615,6 +1615,21 @@ static int isotp_init(struct sock *sk) return 0; }
+static __poll_t isotp_poll(struct file *file, struct socket *sock, poll_table *wait) +{ + struct sock *sk = sock->sk; + struct isotp_sock *so = isotp_sk(sk); + + __poll_t mask = datagram_poll(file, sock, wait); + poll_wait(file, &so->wait, wait); + + /* Check for false positives due to TX state */ + if ((mask & EPOLLWRNORM) && (so->tx.state != ISOTP_IDLE)) + mask &= ~(EPOLLOUT | EPOLLWRNORM); + + return mask; +} + static int isotp_sock_no_ioctlcmd(struct socket *sock, unsigned int cmd, unsigned long arg) { @@ -1630,7 +1645,7 @@ static const struct proto_ops isotp_ops .socketpair = sock_no_socketpair, .accept = sock_no_accept, .getname = isotp_getname, - .poll = datagram_poll, + .poll = isotp_poll, .ioctl = isotp_sock_no_ioctlcmd, .gettstamp = sock_gettstamp, .listen = sock_no_listen,
From: Oliver Hartkopp socketcan@hartkopp.net
commit 0145462fc802cd447ef5d029758043c7f15b4b1e upstream.
isotp.c was still using sock_recv_timestamp() which does not provide control messages to detect dropped PDUs in the receive path.
Fixes: e057dd3fc20f ("can: add ISO 15765-2:2016 transport protocol") Signed-off-by: Oliver Hartkopp socketcan@hartkopp.net Link: https://lore.kernel.org/all/20230330170248.62342-1-socketcan@hartkopp.net Cc: stable@vger.kernel.org Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/can/isotp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/net/can/isotp.c +++ b/net/can/isotp.c @@ -1125,7 +1125,7 @@ static int isotp_recvmsg(struct socket * if (ret < 0) goto out_err;
- sock_recv_timestamp(msg, sk, skb); + sock_recv_cmsgs(msg, sk, skb);
if (msg->msg_name) { __sockaddr_check_size(ISOTP_MIN_NAMELEN);
From: Hans de Goede hdegoede@redhat.com
commit 78dfc9d1d1abb9e400386fa9c5724a8f7d75e3b9 upstream.
Allow callers of __acpi_video_get_backlight_type() to pass a pointer to a bool which will get set to false if the backlight-type comes from the cmdline or a DMI quirk and set to true if auto-detection was used.
And make __acpi_video_get_backlight_type() non-static so that it can be called directly from outside of video_detect.c.
While at it turn the acpi_video_get_backlight_type() and acpi_video_backlight_use_native() wrappers into static inline functions in include/acpi/video.h, so that we need to export one less symbol.
Fixes: 5aa9d943e9b6 ("ACPI: video: Don't enable fallback path for creating ACPI backlight by default") Cc: All applicable stable@vger.kernel.org Reviewed-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/acpi/video_detect.c | 21 ++++++++------------- include/acpi/video.h | 15 +++++++++++++-- 2 files changed, 21 insertions(+), 15 deletions(-)
--- a/drivers/acpi/video_detect.c +++ b/drivers/acpi/video_detect.c @@ -774,7 +774,7 @@ static bool prefer_native_over_acpi_vide * Determine which type of backlight interface to use on this system, * First check cmdline, then dmi quirks, then do autodetect. */ -static enum acpi_backlight_type __acpi_video_get_backlight_type(bool native) +enum acpi_backlight_type __acpi_video_get_backlight_type(bool native, bool *auto_detect) { static DEFINE_MUTEX(init_mutex); static bool nvidia_wmi_ec_present; @@ -799,6 +799,9 @@ static enum acpi_backlight_type __acpi_v native_available = true; mutex_unlock(&init_mutex);
+ if (auto_detect) + *auto_detect = false; + /* * The below heuristics / detection steps are in order of descending * presedence. The commandline takes presedence over anything else. @@ -810,6 +813,9 @@ static enum acpi_backlight_type __acpi_v if (acpi_backlight_dmi != acpi_backlight_undef) return acpi_backlight_dmi;
+ if (auto_detect) + *auto_detect = true; + /* Special cases such as nvidia_wmi_ec and apple gmux. */ if (nvidia_wmi_ec_present) return acpi_backlight_nvidia_wmi_ec; @@ -829,15 +835,4 @@ static enum acpi_backlight_type __acpi_v /* No ACPI video/native (old hw), use vendor specific fw methods. */ return acpi_backlight_vendor; } - -enum acpi_backlight_type acpi_video_get_backlight_type(void) -{ - return __acpi_video_get_backlight_type(false); -} -EXPORT_SYMBOL(acpi_video_get_backlight_type); - -bool acpi_video_backlight_use_native(void) -{ - return __acpi_video_get_backlight_type(true) == acpi_backlight_native; -} -EXPORT_SYMBOL(acpi_video_backlight_use_native); +EXPORT_SYMBOL(__acpi_video_get_backlight_type); --- a/include/acpi/video.h +++ b/include/acpi/video.h @@ -59,8 +59,6 @@ extern void acpi_video_unregister(void); extern void acpi_video_register_backlight(void); extern int acpi_video_get_edid(struct acpi_device *device, int type, int device_id, void **edid); -extern enum acpi_backlight_type acpi_video_get_backlight_type(void); -extern bool acpi_video_backlight_use_native(void); /* * Note: The value returned by acpi_video_handles_brightness_key_presses() * may change over time and should not be cached. @@ -69,6 +67,19 @@ extern bool acpi_video_handles_brightnes extern int acpi_video_get_levels(struct acpi_device *device, struct acpi_video_device_brightness **dev_br, int *pmax_level); + +extern enum acpi_backlight_type __acpi_video_get_backlight_type(bool native, + bool *auto_detect); + +static inline enum acpi_backlight_type acpi_video_get_backlight_type(void) +{ + return __acpi_video_get_backlight_type(false, NULL); +} + +static inline bool acpi_video_backlight_use_native(void) +{ + return __acpi_video_get_backlight_type(true, NULL) == acpi_backlight_native; +} #else static inline void acpi_video_report_nolcd(void) { return; }; static inline int acpi_video_register(void) { return -ENODEV; }
From: Hans de Goede hdegoede@redhat.com
commit e506731c8f35699d746c615164ed620cd53c00ca upstream.
Commit 3dbc80a3e4c5 ("ACPI: video: Make backlight class device registration a separate step (v2)"), combined with commit 5aa9d943e9b6 ("ACPI: video: Don't enable fallback path for creating ACPI backlight by default"), means that the video.ko code now fully depends on the GPU driver calling acpi_video_register_backlight() for the acpi_video# backlight class devices to get registered.
This means that if the GPU driver does not do this, acpi_backlight=video on the cmdline or DMI quirks selecting acpi_video# will not work.
This is a problem on, for example, Apple iMac14,1 all-in-ones, where the monitor's LCD panel shows up as a regular DP connection instead of eDP, so the GPU driver will not call acpi_video_register_backlight() [1].
Fix this by making video.ko directly register the acpi_video# devices when these have been explicitly requested either on the cmdline or through DMI quirks (rather than auto-detection being used).
[1] GPU drivers only call acpi_video_register_backlight() when an internal panel is detected, to avoid non working acpi_video# devices getting registered on desktops which unfortunately is a real issue.
Fixes: 5aa9d943e9b6 ("ACPI: video: Don't enable fallback path for creating ACPI backlight by default") Cc: All applicable stable@vger.kernel.org Reviewed-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/acpi/acpi_video.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-)
--- a/drivers/acpi/acpi_video.c +++ b/drivers/acpi/acpi_video.c @@ -1984,6 +1984,7 @@ static int instance; static int acpi_video_bus_add(struct acpi_device *device) { struct acpi_video_bus *video; + bool auto_detect; int error; acpi_status status;
@@ -2045,10 +2046,20 @@ static int acpi_video_bus_add(struct acp mutex_unlock(&video_list_lock);
/* - * The userspace visible backlight_device gets registered separately - * from acpi_video_register_backlight(). + * If backlight-type auto-detection is used then a native backlight may + * show up later and this may change the result from video to native. + * Therefor normally the userspace visible /sys/class/backlight device + * gets registered separately by the GPU driver calling + * acpi_video_register_backlight() when an internal panel is detected. + * Register the backlight now when not using auto-detection, so that + * when the kernel cmdline or DMI-quirks are used the backlight will + * get registered even if acpi_video_register_backlight() is not called. */ acpi_video_run_bcl_for_osi(video); + if (__acpi_video_get_backlight_type(false, &auto_detect) == acpi_backlight_video && + !auto_detect) + acpi_video_bus_register_backlight(video); + acpi_video_bus_add_notify_handler(video);
return 0;
From: Hans de Goede hdegoede@redhat.com
commit 2699107989431d6db44f8a9e809ea74c387336d1 upstream.
On the Apple iMac14,1 and iMac14,2 all-in-ones (monitors with builtin "PC") the connection between the GPU and the panel is seen by the GPU driver as regular DP instead of eDP, causing the GPU driver to never call acpi_video_register_backlight().
(GPU drivers only call acpi_video_register_backlight() when an internal panel is detected, to avoid non working acpi_video# devices getting registered on desktops which unfortunately is a real issue.)
Fix the missing acpi_video# backlight device on these all-in-ones by adding an acpi_backlight=video DMI quirk, so that video.ko will immediately register the backlight device instead of waiting for an acpi_video_register_backlight() call.
Fixes: 5aa9d943e9b6 ("ACPI: video: Don't enable fallback path for creating ACPI backlight by default") Cc: All applicable stable@vger.kernel.org Reviewed-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/acpi/video_detect.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+)
--- a/drivers/acpi/video_detect.c +++ b/drivers/acpi/video_detect.c @@ -277,6 +277,29 @@ static const struct dmi_system_id video_ },
/* + * Models which need acpi_video backlight control where the GPU drivers + * do not call acpi_video_register_backlight() because no internal panel + * is detected. Typically these are all-in-ones (monitors with builtin + * PC) where the panel connection shows up as regular DP instead of eDP. + */ + { + .callback = video_detect_force_video, + /* Apple iMac14,1 */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."), + DMI_MATCH(DMI_PRODUCT_NAME, "iMac14,1"), + }, + }, + { + .callback = video_detect_force_video, + /* Apple iMac14,2 */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."), + DMI_MATCH(DMI_PRODUCT_NAME, "iMac14,2"), + }, + }, + + /* * These models have a working acpi_video backlight control, and using * native backlight causes a regression where backlight does not work * when userspace is not handling brightness key events. Disable
From: Hans de Goede hdegoede@redhat.com
commit a5b2781dcab2c77979a4b8adda781d2543580901 upstream.
The Lenovo ThinkPad W530 uses an nvidia K1000M GPU. When this is used together with one of the older nvidia binary driver series (the latest series does not support it), backlight control does not work.
This is caused by commit 3dbc80a3e4c5 ("ACPI: video: Make backlight class device registration a separate step (v2)") combined with commit 5aa9d943e9b6 ("ACPI: video: Don't enable fallback path for creating ACPI backlight by default").
After these changes the acpi_video# backlight device is only registered when requested by a GPU driver calling acpi_video_register_backlight() which the nvidia binary driver does not do.
I realize that using the nvidia binary driver is not a supported use-case and users can work around this by adding acpi_backlight=video to the kernel commandline, but the ThinkPad W530 is a popular model among Linux users, so it seems worthwhile to add a quirk for this.
I will also email Nvidia asking them to make the driver call acpi_video_register_backlight() when an internal LCD panel is detected. So maybe the next maintenance release of the drivers will fix this...
Fixes: 5aa9d943e9b6 ("ACPI: video: Don't enable fallback path for creating ACPI backlight by default") Cc: All applicable stable@vger.kernel.org Reviewed-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/acpi/video_detect.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
--- a/drivers/acpi/video_detect.c +++ b/drivers/acpi/video_detect.c @@ -300,6 +300,20 @@ static const struct dmi_system_id video_ },
/* + * Older models with nvidia GPU which need acpi_video backlight + * control and where the old nvidia binary driver series does not + * call acpi_video_register_backlight(). + */ + { + .callback = video_detect_force_video, + /* ThinkPad W530 */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W530"), + }, + }, + + /* * These models have a working acpi_video backlight control, and using * native backlight causes a regression where backlight does not work * when userspace is not handling brightness key events. Disable
From: Song Yoong Siang yoong.siang.song@intel.com
commit 24e3fce00c0b557491ff596c0682a29dee6fe848 upstream.
Queue reset was moved out of the __init_dma_rx_desc_rings() and __init_dma_tx_desc_rings() functions. Thus, the driver fails to transmit and receive packets after XDP prog setup.
This commit adds the missing queue reset to the stmmac_xdp_open() function.
Fixes: f9ec5723c3db ("net: ethernet: stmicro: stmmac: move queue reset to dedicated functions")
Cc: stable@vger.kernel.org # 6.0+
Signed-off-by: Song Yoong Siang yoong.siang.song@intel.com
Reviewed-by: Alexander Duyck alexanderduyck@fb.com
Link: https://lore.kernel.org/r/20230404044823.3226144-1-yoong.siang.song@intel.co...
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 2 ++
 1 file changed, 2 insertions(+)

--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -6629,6 +6629,8 @@ int stmmac_xdp_open(struct net_device *d
 		goto init_error;
 	}
 
+	stmmac_reset_queues_param(priv);
+
 	/* DMA CSR Channel configuration */
 	for (chan = 0; chan < dma_csr_ch; chan++) {
 		stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan);
From: Tze-nan Wu Tze-nan.Wu@mediatek.com
commit 4ccf11c4e8a8e051499d53a12f502196c97a758e upstream.
Currently, the "last_cmd" variable can be accessed by multiple processes concurrently when multiple users manipulate the synthetic_events node at the same time, which can lead to a use-after-free or a double-free.
This patch adds a "lastcmd_mutex" to prevent "last_cmd" from being accessed concurrently.
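For readers skimming the diff below, the serialization boils down to the usual pattern for a shared, reallocatable string (a simplified sketch of what the patch does; see the actual hunks for the goto-based unlock paths):

    static DEFINE_MUTEX(lastcmd_mutex);
    static char *last_cmd;

    static void last_cmd_set(const char *str)
    {
            mutex_lock(&lastcmd_mutex);
            kfree(last_cmd);                      /* free the old copy */
            last_cmd = kstrdup(str, GFP_KERNEL);  /* publish the new one */
            mutex_unlock(&lastcmd_mutex);
    }

With errpos() and synth_err() taking the same mutex around their reads of last_cmd, a free in one writer can no longer interleave with a use or free in another.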
================================================================
It's easy to reproduce in the KASAN environment by running the two scripts below in different shells.
script 1:

  while :
  do
          echo -n -e '\x88' > /sys/kernel/tracing/synthetic_events
  done

script 2:

  while :
  do
          echo -n -e '\xb0' > /sys/kernel/tracing/synthetic_events
  done
================================================================
double-free scenario:

   process A                          process B
   -------------------                ---------------
   1. kstrdup last_cmd
                                      2. free last_cmd
   3. free last_cmd (double-free)
================================================================
use-after-free scenario:

   process A                          process B
   -------------------                ---------------
   1. kstrdup last_cmd
                                      2. free last_cmd
   3. tracing_log_err (use-after-free)
================================================================
Appendix 1. KASAN report double-free:
BUG: KASAN: double-free in kfree+0xdc/0x1d4
Free of addr ***** by task sh/4879
Call trace:
 ...
 kfree+0xdc/0x1d4
 create_or_delete_synth_event+0x60/0x1e8
 trace_parse_run_command+0x2bc/0x4b8
 synth_events_write+0x20/0x30
 vfs_write+0x200/0x830
 ...

Allocated by task 4879:
 ...
 kstrdup+0x5c/0x98
 create_or_delete_synth_event+0x6c/0x1e8
 trace_parse_run_command+0x2bc/0x4b8
 synth_events_write+0x20/0x30
 vfs_write+0x200/0x830
 ...

Freed by task 5464:
 ...
 kfree+0xdc/0x1d4
 create_or_delete_synth_event+0x60/0x1e8
 trace_parse_run_command+0x2bc/0x4b8
 synth_events_write+0x20/0x30
 vfs_write+0x200/0x830
 ...
================================================================
Appendix 2. KASAN report use-after-free:

BUG: KASAN: use-after-free in strlen+0x5c/0x7c
Read of size 1 at addr ***** by task sh/5483
sh: CPU: 7 PID: 5483 Comm: sh
 ...
 __asan_report_load1_noabort+0x34/0x44
 strlen+0x5c/0x7c
 tracing_log_err+0x60/0x444
 create_or_delete_synth_event+0xc4/0x204
 trace_parse_run_command+0x2bc/0x4b8
 synth_events_write+0x20/0x30
 vfs_write+0x200/0x830
 ...

Allocated by task 5483:
 ...
 kstrdup+0x5c/0x98
 create_or_delete_synth_event+0x80/0x204
 trace_parse_run_command+0x2bc/0x4b8
 synth_events_write+0x20/0x30
 vfs_write+0x200/0x830
 ...

Freed by task 5480:
 ...
 kfree+0xdc/0x1d4
 create_or_delete_synth_event+0x74/0x204
 trace_parse_run_command+0x2bc/0x4b8
 synth_events_write+0x20/0x30
 vfs_write+0x200/0x830
 ...
Link: https://lore.kernel.org/linux-trace-kernel/20230321110444.1587-1-Tze-nan.Wu@...
Fixes: 27c888da9867 ("tracing: Remove size restriction on synthetic event cmd error logging")
Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu mhiramat@kernel.org
Cc: Matthias Brugger matthias.bgg@gmail.com
Cc: AngeloGioacchino Del Regno angelogioacchino.delregno@collabora.com
Cc: "Tom Zanussi" zanussi@kernel.org
Signed-off-by: Tze-nan Wu Tze-nan.Wu@mediatek.com
Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 kernel/trace/trace_events_synth.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)
--- a/kernel/trace/trace_events_synth.c
+++ b/kernel/trace/trace_events_synth.c
@@ -44,14 +44,21 @@ enum { ERRORS };
 
 static const char *err_text[] = { ERRORS };
 
+DEFINE_MUTEX(lastcmd_mutex);
 static char *last_cmd;
 
 static int errpos(const char *str)
 {
+	int ret = 0;
+
+	mutex_lock(&lastcmd_mutex);
 	if (!str || !last_cmd)
-		return 0;
+		goto out;
 
-	return err_pos(last_cmd, str);
+	ret = err_pos(last_cmd, str);
+ out:
+	mutex_unlock(&lastcmd_mutex);
+	return ret;
 }
 
 static void last_cmd_set(const char *str)
@@ -59,18 +66,22 @@ static void last_cmd_set(const char *str
 	if (!str)
 		return;
 
+	mutex_lock(&lastcmd_mutex);
 	kfree(last_cmd);
-
 	last_cmd = kstrdup(str, GFP_KERNEL);
+	mutex_unlock(&lastcmd_mutex);
 }
 
 static void synth_err(u8 err_type, u16 err_pos)
 {
+	mutex_lock(&lastcmd_mutex);
 	if (!last_cmd)
-		return;
+		goto out;
 
 	tracing_log_err(NULL, "synthetic_events", last_cmd, err_text,
 			err_type, err_pos);
+ out:
+	mutex_unlock(&lastcmd_mutex);
 }
static int create_synth_event(const char *raw_command);
From: Daniel Bristot de Oliveira bristot@kernel.org
commit b9f451a9029a16eb7913ace09b92493d00f2e564 upstream.
timerlat is not reporting a new tracing_max_latency for the thread latency. The reason is that it does not call the notify_new_max_latency() function after the new thread latency is sampled.
Call notify_new_max_latency() after computing the thread latency.
Link: https://lkml.kernel.org/r/16e18d61d69073d0192ace07bf61e405cca96e9c.168010418...
Cc: stable@vger.kernel.org
Fixes: dae181349f1e ("tracing/osnoise: Support a list of trace_array *tr")
Signed-off-by: Daniel Bristot de Oliveira bristot@kernel.org
Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 kernel/trace/trace_osnoise.c | 2 ++
 1 file changed, 2 insertions(+)

--- a/kernel/trace/trace_osnoise.c
+++ b/kernel/trace/trace_osnoise.c
@@ -1738,6 +1738,8 @@ static int timerlat_main(void *data)
 
 		trace_timerlat_sample(&s);
 
+		notify_new_max_latency(diff);
+
 		timerlat_dump_stack(time_to_us(diff));
tlat->tracing_thread = false;
From: Daniel Bristot de Oliveira bristot@kernel.org
commit d3cba7f02cd82118c32651c73374d8a5a459d9a6 upstream.
The osnoise/timerlat tracers report a new max latency on instances where tracing is off, creating inconsistencies between the max values reported in the trace and in tracing_max_latency. Thus, only report a new tracing_max_latency on active tracing instances.
Link: https://lkml.kernel.org/r/ecd109fde4a0c24ab0f00ba1e9a144ac19a91322.168010418...
Cc: stable@vger.kernel.org
Fixes: dae181349f1e ("tracing/osnoise: Support a list of trace_array *tr")
Signed-off-by: Daniel Bristot de Oliveira bristot@kernel.org
Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 kernel/trace/trace_osnoise.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/trace/trace_osnoise.c
+++ b/kernel/trace/trace_osnoise.c
@@ -1296,7 +1296,7 @@ static void notify_new_max_latency(u64 l
 	rcu_read_lock();
 	list_for_each_entry_rcu(inst, &osnoise_instances, list) {
 		tr = inst->tr;
-		if (tr->max_latency < latency) {
+		if (tracer_tracing_is_on(tr) && tr->max_latency < latency) {
 			tr->max_latency = latency;
 			latency_fsnotify(tr);
 		}
From: Steven Rostedt (Google) rostedt@goodmis.org
commit 3357c6e429643231e60447b52ffbb7ac895aca22 upstream.
When a tracing instance is removed, the error messages that hold errors that occurred in the instance need to be freed. The following reproduces a memory leak:
 # cd /sys/kernel/tracing
 # mkdir instances/foo
 # echo 'hist:keys=x' > instances/foo/events/sched/sched_switch/trigger
 # cat instances/foo/error_log
 [ 117.404795] hist:sched:sched_switch: error: Couldn't find field
   Command: hist:keys=x
                      ^
 # rmdir instances/foo
Then check for memory leaks:
 # echo scan > /sys/kernel/debug/kmemleak
 # cat /sys/kernel/debug/kmemleak
 unreferenced object 0xffff88810d8ec700 (size 192):
   comm "bash", pid 869, jiffies 4294950577 (age 215.752s)
   hex dump (first 32 bytes):
     60 dd 68 61 81 88 ff ff 60 dd 68 61 81 88 ff ff  `.ha....`.ha....
     a0 30 8c 83 ff ff ff ff 26 00 0a 00 00 00 00 00  .0......&.......
   backtrace:
     [<00000000dae26536>] kmalloc_trace+0x2a/0xa0
     [<00000000b2938940>] tracing_log_err+0x277/0x2e0
     [<000000004a0e1b07>] parse_atom+0x966/0xb40
     [<0000000023b24337>] parse_expr+0x5f3/0xdb0
     [<00000000594ad074>] event_hist_trigger_parse+0x27f8/0x3560
     [<00000000293a9645>] trigger_process_regex+0x135/0x1a0
     [<000000005c22b4f2>] event_trigger_write+0x87/0xf0
     [<000000002cadc509>] vfs_write+0x162/0x670
     [<0000000059c3b9be>] ksys_write+0xca/0x170
     [<00000000f1cddc00>] do_syscall_64+0x3e/0xc0
     [<00000000868ac68c>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
 unreferenced object 0xffff888170c35a00 (size 32):
   comm "bash", pid 869, jiffies 4294950577 (age 215.752s)
   hex dump (first 32 bytes):
     0a 20 20 43 6f 6d 6d 61 6e 64 3a 20 68 69 73 74  .  Command: hist
     3a 6b 65 79 73 3d 78 0a 00 00 00 00 00 00 00 00  :keys=x.........
   backtrace:
     [<000000006a747de5>] __kmalloc+0x4d/0x160
     [<000000000039df5f>] tracing_log_err+0x29b/0x2e0
     [<000000004a0e1b07>] parse_atom+0x966/0xb40
     [<0000000023b24337>] parse_expr+0x5f3/0xdb0
     [<00000000594ad074>] event_hist_trigger_parse+0x27f8/0x3560
     [<00000000293a9645>] trigger_process_regex+0x135/0x1a0
     [<000000005c22b4f2>] event_trigger_write+0x87/0xf0
     [<000000002cadc509>] vfs_write+0x162/0x670
     [<0000000059c3b9be>] ksys_write+0xca/0x170
     [<00000000f1cddc00>] do_syscall_64+0x3e/0xc0
     [<00000000868ac68c>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
The problem is that the error log needs to be freed when the instance is removed.
Link: https://lore.kernel.org/lkml/76134d9f-a5ba-6a0d-37b3-28310b4a1e91@alu.unizg....
Link: https://lore.kernel.org/linux-trace-kernel/20230404194504.5790b95f@gandalf.l...
Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu mhiramat@kernel.org
Cc: Andrew Morton akpm@linux-foundation.org
Cc: Mark Rutland mark.rutland@arm.com
Cc: Thorsten Leemhuis regressions@leemhuis.info
Cc: Ulf Hansson ulf.hansson@linaro.org
Cc: Eric Biggers ebiggers@kernel.org
Fixes: 2f754e771b1a6 ("tracing: Have the error logs show up in the proper instances")
Reported-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr
Tested-by: Mirsad Todorovac mirsad.todorovac@alu.unizg.hr
Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 kernel/trace/trace.c | 1 +
 1 file changed, 1 insertion(+)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -9472,6 +9472,7 @@ static int __remove_instance(struct trac
 	tracefs_remove(tr->dir);
 	free_percpu(tr->last_func_repeats);
 	free_trace_buffers(tr);
+	clear_tracing_err_log(tr);
 
 	for (i = 0; i < tr->nr_topts; i++) {
 		kfree(tr->topts[i].topts);
From: Jason Gunthorpe jgg@nvidia.com
commit e4395701330fc4aee530905039516fe770b81417 upstream.
syzkaller found that setting up a map with a user VA that wraps past zero can trigger WARN_ONs, particularly from pin_user_pages weirdly returning 0 due to invalid arguments.
Prevent creating a pages object with a uptr and size whose sum would overflow.
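A worked example (hypothetical values on a 64-bit build) shows why the existing length check alone is not enough:

    unsigned long uptr = 0xfffffffffffff000UL; /* near the top of the VA space */
    size_t length = 0x2000;                    /* passes the SIZE_MAX - PAGE_SIZE check */
    unsigned long end;

    /* uptr + length wraps to 0x1000, so any later range math is nonsense;
     * check_add_overflow() returns true here and the fix rejects the map */
    if (check_add_overflow(uptr, length, &end))
            return ERR_PTR(-EOVERFLOW);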
WARNING: CPU: 0 PID: 518 at drivers/iommu/iommufd/pages.c:793 pfn_reader_user_pin+0x2e6/0x390
Modules linked in:
CPU: 0 PID: 518 Comm: repro Not tainted 6.3.0-rc2-eeac8ede1755+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
RIP: 0010:pfn_reader_user_pin+0x2e6/0x390
Code: b1 11 e9 25 fe ff ff e8 28 e4 0f ff 31 ff 48 89 de e8 2e e6 0f ff 48 85 db 74 0a e8 14 e4 0f ff e9 4d ff ff ff e8 0a e4 0f ff <0f> 0b bb f2 ff ff ff e9 3c ff ff ff e8 f9 e3 0f ff ba 01 00 00 00
RSP: 0018:ffffc90000f9fa30 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff821e2b72
RDX: 0000000000000000 RSI: ffff888014184680 RDI: 0000000000000002
RBP: ffffc90000f9fa78 R08: 00000000000000ff R09: 0000000079de6f4e
R10: ffffc90000f9f790 R11: ffff888014185418 R12: ffffc90000f9fc60
R13: 0000000000000002 R14: ffff888007879800 R15: 0000000000000000
FS:  00007f4227555740(0000) GS:ffff88807dc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020000043 CR3: 000000000e748005 CR4: 0000000000770ef0
PKRU: 55555554
Call Trace:
 <TASK>
 pfn_reader_next+0x14a/0x7b0
 ? interval_tree_double_span_iter_update+0x11a/0x140
 pfn_reader_first+0x140/0x1b0
 iopt_pages_rw_slow+0x71/0x280
 ? __this_cpu_preempt_check+0x20/0x30
 iopt_pages_rw_access+0x2b2/0x5b0
 iommufd_access_rw+0x19f/0x2f0
 iommufd_test+0xd11/0x16f0
 ? write_comp_data+0x2f/0x90
 iommufd_fops_ioctl+0x206/0x330
 __x64_sys_ioctl+0x10e/0x160
 ? __pfx_iommufd_fops_ioctl+0x10/0x10
 do_syscall_64+0x3b/0x90
 entry_SYSCALL_64_after_hwframe+0x72/0xdc
Cc: stable@vger.kernel.org
Fixes: 8d160cd4d506 ("iommufd: Algorithms for PFN storage")
Link: https://lore.kernel.org/r/1-v1-ceab6a4d7d7a+94-iommufd_syz_jgg@nvidia.com
Reviewed-by: Kevin Tian kevin.tian@intel.com
Reported-by: Pengfei Xu pengfei.xu@intel.com
Tested-by: Pengfei Xu pengfei.xu@intel.com
Signed-off-by: Jason Gunthorpe jgg@nvidia.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/iommu/iommufd/pages.c | 4 ++++
 1 file changed, 4 insertions(+)

--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -1140,6 +1140,7 @@ struct iopt_pages *iopt_alloc_pages(void
 				     bool writable)
 {
 	struct iopt_pages *pages;
+	unsigned long end;
 
 	/*
 	 * The iommu API uses size_t as the length, and protect the DIV_ROUND_UP
@@ -1148,6 +1149,9 @@ struct iopt_pages *iopt_alloc_pages(void
 	if (length > SIZE_MAX - PAGE_SIZE || length == 0)
 		return ERR_PTR(-EINVAL);
 
+	if (check_add_overflow((unsigned long)uptr, length, &end))
+		return ERR_PTR(-EOVERFLOW);
+
 	pages = kzalloc(sizeof(*pages), GFP_KERNEL_ACCOUNT);
 	if (!pages)
 		return ERR_PTR(-ENOMEM);
From: Jason Gunthorpe jgg@nvidia.com
commit 727c28c1cef2bc013d2c8bb6c50e410a3882a04e upstream.
syzkaller found that the calculation of batch_last_index should use 'start_index' since at input to this function the batch is either empty or it has already been adjusted to cross any accesses so it will start at the point we are unmapping from.
Getting this wrong causes the unmap to run past the end of the pages, which corrupts pages that were never mapped. In most cases this trips the pinned-pages accounting debug check:
WARNING: CPU: 0 PID: 557 at drivers/iommu/iommufd/pages.c:294 __iopt_area_unfill_domain+0x152/0x560
Modules linked in:
CPU: 0 PID: 557 Comm: repro Not tainted 6.3.0-rc2-eeac8ede1755 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
RIP: 0010:__iopt_area_unfill_domain+0x152/0x560
Code: d2 0f ff 44 8b 64 24 54 48 8b 44 24 48 31 ff 44 89 e6 48 89 44 24 38 e8 fc d3 0f ff 45 85 e4 0f 85 eb 01 00 00 e8 0e d2 0f ff <0f> 0b e8 07 d2 0f ff 48 8b 44 24 38 89 5c 24 58 89 18 8b 44 24 54
RSP: 0018:ffffc9000108baf0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 00000000ffffffff RCX: ffffffff821e3f85
RDX: 0000000000000000 RSI: ffff88800faf0000 RDI: 0000000000000002
RBP: ffffc9000108bd18 R08: 000000000003ca25 R09: 0000000000000014
R10: 000000000003ca00 R11: 0000000000000024 R12: 0000000000000004
R13: 0000000000000801 R14: 00000000000007ff R15: 0000000000000800
FS:  00007f3499ce1740(0000) GS:ffff88807dc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020000243 CR3: 00000000179c2001 CR4: 0000000000770ef0
PKRU: 55555554
Call Trace:
 <TASK>
 iopt_area_unfill_domain+0x32/0x40
 iopt_table_remove_domain+0x23f/0x4c0
 iommufd_device_selftest_detach+0x3a/0x90
 iommufd_selftest_destroy+0x55/0x70
 iommufd_object_destroy_user+0xce/0x130
 iommufd_destroy+0xa2/0xc0
 iommufd_fops_ioctl+0x206/0x330
 __x64_sys_ioctl+0x10e/0x160
 do_syscall_64+0x3b/0x90
 entry_SYSCALL_64_after_hwframe+0x72/0xdc
Also add some useful WARN_ON sanity checks.
Cc: stable@vger.kernel.org
Fixes: 8d160cd4d506 ("iommufd: Algorithms for PFN storage")
Link: https://lore.kernel.org/r/2-v1-ceab6a4d7d7a+94-iommufd_syz_jgg@nvidia.com
Reviewed-by: Kevin Tian kevin.tian@intel.com
Reported-by: Pengfei Xu pengfei.xu@intel.com
Tested-by: Pengfei Xu pengfei.xu@intel.com
Signed-off-by: Jason Gunthorpe jgg@nvidia.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/iommu/iommufd/pages.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -1205,13 +1205,21 @@ iopt_area_unpin_domain(struct pfn_batch
 			unsigned long start =
 				max(start_index, *unmapped_end_index);
 
+			if (IS_ENABLED(CONFIG_IOMMUFD_TEST) &&
+			    batch->total_pfns)
+				WARN_ON(*unmapped_end_index -
+						batch->total_pfns !=
+					start_index);
 			batch_from_domain(batch, domain, area, start,
 					  last_index);
-			batch_last_index = start + batch->total_pfns - 1;
+			batch_last_index = start_index + batch->total_pfns - 1;
 		} else {
 			batch_last_index = last_index;
 		}
 
+		if (IS_ENABLED(CONFIG_IOMMUFD_TEST))
+			WARN_ON(batch_last_index > real_last_index);
+
 		/*
 		 * unmaps must always 'cut' at a place where the pfns are not
 		 * contiguous to pair with the maps that always install
From: Jason Gunthorpe jgg@nvidia.com
commit 13a0d1ae7ee6b438f5537711a8c60cba00554943 upstream.
If batch->end is 0 then setting npfns[0] before computing the new value of pfns will fail to adjust the pfn and result in various page accounting corruptions. It should be ordered after.
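A worked example of the aliasing (my reading of batch_clear_carry(); the indices are illustrative): when the carried entry lives in slot 0, npfns[batch->end - 1] and npfns[0] are the same array element, so the old order

    batch->npfns[0] = keep_pfns;                 /* clobbers the old count */
    batch->pfns[0] = batch->pfns[batch->end - 1] +
                     (batch->npfns[batch->end - 1] - keep_pfns); /* reads keep_pfns back */

makes the adjustment term zero and leaves pfns[0] unadjusted. Writing npfns[0] after pfns[0], as the fix does, computes the adjustment from the old count.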
This seems to result in various kinds of page meta-data corruption related failures:
WARNING: CPU: 1 PID: 527 at mm/gup.c:75 try_grab_folio+0x503/0x740
Modules linked in:
CPU: 1 PID: 527 Comm: repro Not tainted 6.3.0-rc2-eeac8ede1755+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
RIP: 0010:try_grab_folio+0x503/0x740
Code: e3 01 48 89 de e8 6d c1 dd ff 48 85 db 0f 84 7c fe ff ff e8 4f bf dd ff 49 8d 47 ff 48 89 45 d0 e9 73 fe ff ff e8 3d bf dd ff <0f> 0b 31 db e9 d0 fc ff ff e8 2f bf dd ff 48 8b 5d c8 31 ff 48 89
RSP: 0018:ffffc90000f37908 EFLAGS: 00010046
RAX: 0000000000000000 RBX: 00000000fffffc02 RCX: ffffffff81504c26
RDX: 0000000000000000 RSI: ffff88800d030000 RDI: 0000000000000002
RBP: ffffc90000f37948 R08: 000000000003ca24 R09: 0000000000000008
R10: 000000000003ca00 R11: 0000000000000023 R12: ffffea000035d540
R13: 0000000000000001 R14: 0000000000000000 R15: ffffea000035d540
FS:  00007fecbf659740(0000) GS:ffff88807dd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000200011c3 CR3: 000000000ef66006 CR4: 0000000000770ee0
PKRU: 55555554
Call Trace:
 <TASK>
 internal_get_user_pages_fast+0xd32/0x2200
 pin_user_pages_fast+0x65/0x90
 pfn_reader_user_pin+0x376/0x390
 pfn_reader_next+0x14a/0x7b0
 pfn_reader_first+0x140/0x1b0
 iopt_area_fill_domain+0x74/0x210
 iopt_table_add_domain+0x30e/0x6e0
 iommufd_device_selftest_attach+0x7f/0x140
 iommufd_test+0x10ff/0x16f0
 iommufd_fops_ioctl+0x206/0x330
 __x64_sys_ioctl+0x10e/0x160
 do_syscall_64+0x3b/0x90
 entry_SYSCALL_64_after_hwframe+0x72/0xdc
Cc: stable@vger.kernel.org
Fixes: f394576eb11d ("iommufd: PFN handling for iopt_pages")
Link: https://lore.kernel.org/r/3-v1-ceab6a4d7d7a+94-iommufd_syz_jgg@nvidia.com
Reviewed-by: Kevin Tian kevin.tian@intel.com
Reported-by: Pengfei Xu pengfei.xu@intel.com
Tested-by: Pengfei Xu pengfei.xu@intel.com
Signed-off-by: Jason Gunthorpe jgg@nvidia.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/iommu/iommufd/pages.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -294,9 +294,9 @@ static void batch_clear_carry(struct pfn
 		  batch->npfns[batch->end - 1] < keep_pfns);
 
 	batch->total_pfns = keep_pfns;
-	batch->npfns[0] = keep_pfns;
 	batch->pfns[0] = batch->pfns[batch->end - 1] +
 			 (batch->npfns[batch->end - 1] - keep_pfns);
+	batch->npfns[0] = keep_pfns;
 	batch->end = 0;
 }
From: Jason Montleon jmontleo@redhat.com
commit f6887a71bdd2f0dcba9b8180dd2223cfa8637e85 upstream.
hdac_hdmi was not updated to use set_stream() instead of set_tdm_slots() in the original commit, so HDMI no longer produces audio.
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/regressions/CAJD_bPKQdtaExvVEKxhQ47G-ZXDA=k+gzhMJRHL...
Fixes: 636110411ca7 ("ASoC: Intel/SOF: use set_stream() instead of set_tdm_slots() for HDAudio")
Signed-off-by: Jason Montleon jmontleo@redhat.com
Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com
Link: https://lore.kernel.org/r/20230324170711.2526-1-jmontleo@redhat.com
Signed-off-by: Mark Brown broonie@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 sound/soc/codecs/hdac_hdmi.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

--- a/sound/soc/codecs/hdac_hdmi.c
+++ b/sound/soc/codecs/hdac_hdmi.c
@@ -436,23 +436,28 @@ static int hdac_hdmi_setup_audio_infofra
 	return 0;
 }
 
-static int hdac_hdmi_set_tdm_slot(struct snd_soc_dai *dai,
-		unsigned int tx_mask, unsigned int rx_mask,
-		int slots, int slot_width)
+static int hdac_hdmi_set_stream(struct snd_soc_dai *dai,
+				void *stream, int direction)
 {
 	struct hdac_hdmi_priv *hdmi = snd_soc_dai_get_drvdata(dai);
 	struct hdac_device *hdev = hdmi->hdev;
 	struct hdac_hdmi_dai_port_map *dai_map;
 	struct hdac_hdmi_pcm *pcm;
+	struct hdac_stream *hstream;
 
-	dev_dbg(&hdev->dev, "%s: strm_tag: %d\n", __func__, tx_mask);
+	if (!stream)
+		return -EINVAL;
+
+	hstream = (struct hdac_stream *)stream;
+
+	dev_dbg(&hdev->dev, "%s: strm_tag: %d\n", __func__, hstream->stream_tag);
 
 	dai_map = &hdmi->dai_map[dai->id];
 
 	pcm = hdac_hdmi_get_pcm_from_cvt(hdmi, dai_map->cvt);
 
 	if (pcm)
-		pcm->stream_tag = (tx_mask << 4);
+		pcm->stream_tag = (hstream->stream_tag << 4);
 
 	return 0;
 }
@@ -1544,7 +1549,7 @@ static const struct snd_soc_dai_ops hdmi
 	.startup = hdac_hdmi_pcm_open,
 	.shutdown = hdac_hdmi_pcm_close,
 	.hw_params = hdac_hdmi_set_hw_params,
-	.set_tdm_slot = hdac_hdmi_set_tdm_slot,
+	.set_stream = hdac_hdmi_set_stream,
 };
/*
From: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com
commit e3720f92e0237921da537e47a0b24e27899203f8 upstream.
If an IPC4 topology contains an unsupported widget, its .module_info field won't be set, then sof_ipc4_route_setup() will cause a kernel Oops trying to dereference it. Add a check for such cases.
Cc: stable@vger.kernel.org # 6.2
Signed-off-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com
Signed-off-by: Peter Ujfalusi peter.ujfalusi@linux.intel.com
Link: https://lore.kernel.org/r/20230329113828.28562-1-peter.ujfalusi@linux.intel....
Signed-off-by: Mark Brown broonie@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 sound/soc/sof/ipc4-topology.c | 8 ++++++++
 1 file changed, 8 insertions(+)

--- a/sound/soc/sof/ipc4-topology.c
+++ b/sound/soc/sof/ipc4-topology.c
@@ -1686,6 +1686,14 @@ static int sof_ipc4_route_setup(struct s
 	u32 header, extension;
 	int ret;
 
+	if (!src_fw_module || !sink_fw_module) {
+		/* The NULL module will print as "(efault)" */
+		dev_err(sdev->dev, "source %s or sink %s widget weren't set up properly\n",
+			src_fw_module->man4_module_entry.name,
+			sink_fw_module->man4_module_entry.name);
+		return -ENODEV;
+	}
+
 	sroute->src_queue_id = sof_ipc4_get_queue_id(src_widget, sink_widget,
 						     SOF_PIN_TYPE_SOURCE);
 	if (sroute->src_queue_id < 0) {
From: Nuno Sá nuno.sa@analog.com
[ Upstream commit 0c6ef985a1fd8a74dcb5cad941ddcadd55cb8697 ]
The interrupt is triggered on the falling edge rather than being a level low interrupt.
Fixes: da4d3d6bb9f6 ("iio: adc: ad-sigma-delta: Allow custom IRQ flags")
Signed-off-by: Nuno Sá nuno.sa@analog.com
Link: https://lore.kernel.org/r/20230120124645.819910-1-nuno.sa@analog.com
Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/iio/adc/ad7791.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iio/adc/ad7791.c b/drivers/iio/adc/ad7791.c
index fee8d129a5f08..86effe8501b44 100644
--- a/drivers/iio/adc/ad7791.c
+++ b/drivers/iio/adc/ad7791.c
@@ -253,7 +253,7 @@ static const struct ad_sigma_delta_info ad7791_sigma_delta_info = {
 	.has_registers = true,
 	.addr_shift = 4,
 	.read_mask = BIT(3),
-	.irq_flags = IRQF_TRIGGER_LOW,
+	.irq_flags = IRQF_TRIGGER_FALLING,
 };
static int ad7791_read_raw(struct iio_dev *indio_dev,
From: Wojciech Lukowicz wlukowicz01@gmail.com
[ Upstream commit c0921e51dab767ef5adf6175c4a0ba3c6e1074a3 ]
When a request to remove buffers is submitted and the given number to remove is larger than what is available in the specified buffer group, the CQE result will be the number of removed buffers + 1, one more than it should be.
Previously, the head was part of the list and it got removed after the loop, so the increment was needed. Now, the head is not an element of the list, so the increment shouldn't be there anymore.
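A worked example: with 3 buffers in the group and a request to remove 5, the loop removes 3 and i reaches 3 when the list empties; the old trailing i++ would have reported 4 removed buffers in the CQE even though only 3 existed.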
Fixes: dbc7d452e7cf ("io_uring: manage provided buffers strictly ordered")
Signed-off-by: Wojciech Lukowicz wlukowicz01@gmail.com
Link: https://lore.kernel.org/r/20230401195039.404909-2-wlukowicz01@gmail.com
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Sasha Levin sashal@kernel.org
---
 io_uring/kbuf.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 3002dc8271959..0fdcc0adbdbcc 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -228,7 +228,6 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx,
 		return i;
 	}
 
-	/* the head kbuf is the list itself */
 	while (!list_empty(&bl->buf_list)) {
 		struct io_buffer *nxt;
 
@@ -238,7 +237,6 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx,
 			return i;
 		cond_resched();
 	}
-	i++;
 
 	return i;
 }
From: Wojciech Lukowicz wlukowicz01@gmail.com
[ Upstream commit b4a72c0589fdea6259720375426179888969d6a2 ]
When removing provided buffers, io_buffer structs are not being disposed of, leading to a memory leak. They can't be freed individually, because they are allocated in page-sized groups. They need to be added to some free list instead, such as io_buffers_cache. All callers already hold the lock protecting it, apart from when destroying buffers, so the lock had to be extended there.
Fixes: cc3cec8367cb ("io_uring: speedup provided buffer handling")
Signed-off-by: Wojciech Lukowicz wlukowicz01@gmail.com
Link: https://lore.kernel.org/r/20230401195039.404909-2-wlukowicz01@gmail.com
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Sasha Levin sashal@kernel.org
---
 io_uring/io_uring.c | 2 +-
 io_uring/kbuf.c     | 5 ++++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index a4e9dbc7b67a8..add5cff7952c5 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2722,8 +2722,8 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
 	io_eventfd_unregister(ctx);
 	io_alloc_cache_free(&ctx->apoll_cache, io_apoll_cache_free);
 	io_alloc_cache_free(&ctx->netmsg_cache, io_netmsg_cache_free);
-	mutex_unlock(&ctx->uring_lock);
 	io_destroy_buffers(ctx);
+	mutex_unlock(&ctx->uring_lock);
 	if (ctx->sq_creds)
 		put_cred(ctx->sq_creds);
 	if (ctx->submitter_task)
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index 0fdcc0adbdbcc..a90c820ce99e1 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -228,11 +228,14 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx,
 		return i;
 	}
 
+	/* protects io_buffers_cache */
+	lockdep_assert_held(&ctx->uring_lock);
+
 	while (!list_empty(&bl->buf_list)) {
 		struct io_buffer *nxt;
 
 		nxt = list_first_entry(&bl->buf_list, struct io_buffer, list);
-		list_del(&nxt->list);
+		list_move(&nxt->list, &ctx->io_buffers_cache);
 		if (++i == nbufs)
 			return i;
 		cond_resched();
From: Li Zetao lizetao1@huawei.com
[ Upstream commit 85ade4010e13ef152ea925c74d94253db92e5428 ]
There is a memory leak reported by kmemleak:
unreferenced object 0xffffc900003f0000 (size 12288):
  comm "modprobe", pid 19117, jiffies 4299751452 (age 42490.264s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<00000000629261a8>] __vmalloc_node_range+0xe56/0x1110
    [<0000000001906886>] __vmalloc_node+0xbd/0x150
    [<000000005bb4dc34>] vmalloc+0x25/0x30
    [<00000000a2dc1194>] qla2x00_create_host+0x7a0/0xe30 [qla2xxx]
    [<0000000062b14b47>] qla2x00_probe_one+0x2eb8/0xd160 [qla2xxx]
    [<00000000641ccc04>] local_pci_probe+0xeb/0x1a0
The root cause is an error-handling path in qla2x00_probe_one() taken when initialization of the adapter "base_vha" fails. The fab_scan_rp "scan.l" is used to record port information and is allocated in qla2x00_create_host(). However, it is not released in the "probe_failed" error-handling path.
Fix this by freeing the memory of "scan.l" when an error occurs in the adapter initialization process.
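Note that scan.l comes from vmalloc() in qla2x00_create_host() (visible in the kmemleak backtrace above), so the error path must release it with vfree() rather than kfree(), which is exactly what the hunk below adds.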
Fixes: a4239945b8ad ("scsi: qla2xxx: Add switch command to simplify fabric discovery")
Signed-off-by: Li Zetao lizetao1@huawei.com
Link: https://lore.kernel.org/r/20230325110004.363898-1-lizetao1@huawei.com
Reviewed-by: Himanshu Madhani himanshu.madhani@oracle.com
Signed-off-by: Martin K. Petersen martin.petersen@oracle.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/scsi/qla2xxx/qla_os.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index 02913cc75195b..901c5c8035ef2 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -3607,6 +3607,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 probe_failed:
 	qla_enode_stop(base_vha);
 	qla_edb_stop(base_vha);
+	vfree(base_vha->scan.l);
 	if (base_vha->gnl.l) {
 		dma_free_coherent(&ha->pdev->dev, base_vha->gnl.size,
 		    base_vha->gnl.l, base_vha->gnl.ldma);
From: Zhong Jinghua zhongjinghua@huawei.com
[ Upstream commit 48b19b79cfa37b1e50da3b5a8af529f994c08901 ]
The validity of the sock should be checked before the assignment to avoid storing incorrect values. Commit 57569c37f0ad ("scsi: iscsi: iscsi_tcp: Fix null-ptr-deref while calling getpeername()") introduced this problem, which may lead to inconsistent values of tcp_sw_conn->sendpage and conn->datadgst_en.
Fix the issue by moving the position of the assignment.
Fixes: 57569c37f0ad ("scsi: iscsi: iscsi_tcp: Fix null-ptr-deref while calling getpeername()")
Signed-off-by: Zhong Jinghua zhongjinghua@huawei.com
Link: https://lore.kernel.org/r/20230329071739.2175268-1-zhongjinghua@huaweicloud....
Reviewed-by: Mike Christie michael.christie@oracle.com
Signed-off-by: Martin K. Petersen martin.petersen@oracle.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/scsi/iscsi_tcp.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
index 0454d94e8cf0d..e7a6fc01d9ca8 100644
--- a/drivers/scsi/iscsi_tcp.c
+++ b/drivers/scsi/iscsi_tcp.c
@@ -768,13 +768,12 @@ static int iscsi_sw_tcp_conn_set_param(struct iscsi_cls_conn *cls_conn,
 		iscsi_set_param(cls_conn, param, buf, buflen);
 		break;
 	case ISCSI_PARAM_DATADGST_EN:
-		iscsi_set_param(cls_conn, param, buf, buflen);
-
 		mutex_lock(&tcp_sw_conn->sock_lock);
 		if (!tcp_sw_conn->sock) {
 			mutex_unlock(&tcp_sw_conn->sock_lock);
 			return -ENOTCONN;
 		}
+		iscsi_set_param(cls_conn, param, buf, buflen);
 		tcp_sw_conn->sendpage = conn->datadgst_en ?
 			sock_no_sendpage : tcp_sw_conn->sock->ops->sendpage;
 		mutex_unlock(&tcp_sw_conn->sock_lock);
From: Keith Busch kbusch@kernel.org
[ Upstream commit d3205ab75e99a47539ec91ef85ba488f4ddfeaa9 ]
The device can report discard support without setting the ONCS DSM bit. When not set, the driver clears max_discard_size expecting it to be set later. We don't know the size until we have the namespace format, though, so setting it is deferred until configuring one, but the driver was abandoning the discard settings due to that initial clearing.
Move the max_discard_size calculation above the check for a '0' discard size.
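Concretely: on a controller that reports a DMRSL value but leaves the ONCS DSM bit clear, max_discard_sectors starts out as 0; with the hunks below, the DMRSL-derived limit is computed before the zero check, so such controllers keep their discard configuration instead of being treated as having no discard support.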
Fixes: 1a86924e4f46475 ("nvme: fix interpretation of DMRSL")
Reported-by: Laurence Oberman loberman@redhat.com
Signed-off-by: Keith Busch kbusch@kernel.org
Reviewed-by: Niklas Cassel niklas.cassel@wdc.com
Reviewed-by: Sagi Grimberg sagi@grimberg.me
Tested-by: Laurence Oberman loberman@redhat.com
Signed-off-by: Christoph Hellwig hch@lst.de
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/nvme/host/core.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 70b5e891f6b3b..ee1b075d12cfc 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1717,6 +1717,9 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns)
 	struct request_queue *queue = disk->queue;
 	u32 size = queue_logical_block_size(queue);
 
+	if (ctrl->dmrsl && ctrl->dmrsl <= nvme_sect_to_lba(ns, UINT_MAX))
+		ctrl->max_discard_sectors = nvme_lba_to_sect(ns, ctrl->dmrsl);
+
 	if (ctrl->max_discard_sectors == 0) {
 		blk_queue_max_discard_sectors(queue, 0);
 		return;
@@ -1731,9 +1734,6 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns)
 	if (queue->limits.max_discard_sectors)
 		return;
 
-	if (ctrl->dmrsl && ctrl->dmrsl <= nvme_sect_to_lba(ns, UINT_MAX))
-		ctrl->max_discard_sectors = nvme_lba_to_sect(ns, ctrl->dmrsl);
-
 	blk_queue_max_discard_sectors(queue, ctrl->max_discard_sectors);
 	blk_queue_max_discard_segments(queue, ctrl->max_discard_segments);
From: Thiago Rafael Becker tbecker@redhat.com
[ Upstream commit d19342c6609b67f2ba83b9eccca2777e3687f625 ]
After a server reboot, clients are failing to move files with ENOENT. This is caused by DFS referrals containing multiple separators, which the server move call doesn't recognize.
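As an illustration (a made-up prepath; the delimiter handling lives in cifs_sanitize_prepath() below), a referral prepath such as

    \\share1\\\share2\

is reduced to

    share1\share2

i.e. leading and trailing delimiters are dropped and runs of delimiters collapse to one, which is the form the server-side rename call accepts.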
v1: Initial patch.
v2: Move prototype to header.
Link: https://bugzilla.redhat.com/show_bug.cgi?id=2182472
Fixes: a31080899d5f ("cifs: sanitize multiple delimiters in prepath")
Actually-Fixes: 24e0a1eff9e2 ("cifs: switch to new mount api")
Reviewed-by: Paulo Alcantara (SUSE) pc@manguebit.com
Signed-off-by: Thiago Rafael Becker tbecker@redhat.com
Signed-off-by: Steve French stfrench@microsoft.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/cifs/fs_context.c | 13 +++++++------
 fs/cifs/fs_context.h |  3 +++
 fs/cifs/misc.c       |  2 +-
 3 files changed, 11 insertions(+), 7 deletions(-)
diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c
index 6d13f8207e96a..ace11a1a7c8ab 100644
--- a/fs/cifs/fs_context.c
+++ b/fs/cifs/fs_context.c
@@ -441,13 +441,14 @@ int smb3_parse_opt(const char *options, const char *key, char **val)
  * but there are some bugs that prevent rename from working if there are
  * multiple delimiters.
  *
- * Returns a sanitized duplicate of @path. The caller is responsible for
- * cleaning up the original.
+ * Returns a sanitized duplicate of @path. @gfp indicates the GFP_* flags
+ * for kstrdup.
+ * The caller is responsible for freeing the original.
  */
 #define IS_DELIM(c) ((c) == '/' || (c) == '\\')
-static char *sanitize_path(char *path)
+char *cifs_sanitize_prepath(char *prepath, gfp_t gfp)
 {
-	char *cursor1 = path, *cursor2 = path;
+	char *cursor1 = prepath, *cursor2 = prepath;
 
 	/* skip all prepended delimiters */
 	while (IS_DELIM(*cursor1))
@@ -469,7 +470,7 @@ static char *sanitize_path(char *path)
 		cursor2--;
 
 	*(cursor2) = '\0';
-	return kstrdup(path, GFP_KERNEL);
+	return kstrdup(prepath, gfp);
 }
 
 /*
@@ -531,7 +532,7 @@ smb3_parse_devname(const char *devname, struct smb3_fs_context *ctx)
 	if (!*pos)
 		return 0;
 
-	ctx->prepath = sanitize_path(pos);
+	ctx->prepath = cifs_sanitize_prepath(pos, GFP_KERNEL);
 	if (!ctx->prepath)
 		return -ENOMEM;
 
diff --git a/fs/cifs/fs_context.h b/fs/cifs/fs_context.h
index 3de00e7127ec4..f4eaf85589022 100644
--- a/fs/cifs/fs_context.h
+++ b/fs/cifs/fs_context.h
@@ -287,4 +287,7 @@ extern void smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb);
  */
 #define SMB3_MAX_DCLOSETIMEO (1 << 30)
 #define SMB3_DEF_DCLOSETIMEO (1 * HZ) /* even 1 sec enough to help eg open/write/close/open/read */
+
+extern char *cifs_sanitize_prepath(char *prepath, gfp_t gfp);
+
 #endif
diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
index 5542893ef03f7..2fae6b08314d9 100644
--- a/fs/cifs/misc.c
+++ b/fs/cifs/misc.c
@@ -1297,7 +1297,7 @@ int cifs_update_super_prepath(struct cifs_sb_info *cifs_sb, char *prefix)
 	kfree(cifs_sb->prepath);
 
 	if (prefix && *prefix) {
-		cifs_sb->prepath = kstrdup(prefix, GFP_ATOMIC);
+		cifs_sb->prepath = cifs_sanitize_prepath(prefix, GFP_ATOMIC);
 		if (!cifs_sb->prepath)
 			return -ENOMEM;
From: Ming Lei ming.lei@redhat.com
[ Upstream commit 1d1665279a845d16c93687389e364386e3fe0f38 ]
Block size is a key setting for the block layer, and a bad block size can easily panic the kernel.
Make sure that block size is set correctly.
Meanwhile, if ublk_validate_params() fails, clear ub->params so that the disk is prevented from being added.
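With the fix, the accepted logical block size lies between 512 bytes (shift 9) and the page size; a quick sketch of the new check:

    /* logical_bs_shift is log2 of the logical block size */
    if (p->logical_bs_shift > PAGE_SHIFT || p->logical_bs_shift < 9)
            return -EINVAL;

so e.g. shift 9 (512B) and 12 (4KiB) pass, while anything smaller than a sector or larger than a page is rejected up front.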
Fixes: 71f28f3136af ("ublk_drv: add io_uring based userspace block driver")
Reported-and-tested-by: Breno Leitao leitao@debian.org
Signed-off-by: Ming Lei ming.lei@redhat.com
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/block/ublk_drv.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 22a790d512842..341f490fdbb02 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -233,7 +233,7 @@ static int ublk_validate_params(const struct ublk_device *ub)
 	if (ub->params.types & UBLK_PARAM_TYPE_BASIC) {
 		const struct ublk_param_basic *p = &ub->params.basic;
 
-		if (p->logical_bs_shift > PAGE_SHIFT)
+		if (p->logical_bs_shift > PAGE_SHIFT || p->logical_bs_shift < 9)
 			return -EINVAL;
 
 		if (p->logical_bs_shift > p->physical_bs_shift)
@@ -1886,6 +1886,8 @@ static int ublk_ctrl_set_params(struct io_uring_cmd *cmd)
 		/* clear all we don't support yet */
 		ub->params.types &= UBLK_PARAM_TYPE_ALL;
 		ret = ublk_validate_params(ub);
+		if (ret)
+			ub->params.types = 0;
 	}
 	mutex_unlock(&ub->mutex);
 	ublk_put_device(ub);
From: Yu Kuai yukuai3@huawei.com
[ Upstream commit 3723091ea1884d599cc8b8bf719d6f42e8d4d8b1 ]
Currently, if disk_scan_partitions() fails, GD_NEED_PART_SCAN will still be set, and the partition scan will proceed again when blkdev_get_by_dev() is called. However, this causes a problem: re-assembling a partitioned raid device will create partitions for the underlying disk.
Test procedure:
  mdadm -CR /dev/md0 -l 1 -n 2 /dev/sda /dev/sdb -e 1.0
  sgdisk -n 0:0:+100MiB /dev/md0
  blockdev --rereadpt /dev/sda
  blockdev --rereadpt /dev/sdb
  mdadm -S /dev/md0
  mdadm -A /dev/md0 /dev/sda /dev/sdb
Test result: the underlying disk partitions and the raid partitions can be observed at the same time.
Note that this can still happen in some corner cases where GD_NEED_PART_SCAN is set for the underlying disk while re-assembling the raid device.
Fixes: e5cfefa97bcc ("block: fix scan partition for exclusively open device again")
Reviewed-by: Jan Kara jack@suse.cz
Reviewed-by: Ming Lei ming.lei@redhat.com
Signed-off-by: Yu Kuai yukuai3@huawei.com
Signed-off-by: Jens Axboe axboe@kernel.dk
Signed-off-by: Sasha Levin sashal@kernel.org
---
 block/genhd.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/block/genhd.c b/block/genhd.c
index 9c4c9aa559ab8..7082032636035 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -368,7 +368,6 @@ int disk_scan_partitions(struct gendisk *disk, fmode_t mode)
 	if (disk->open_partitions)
 		return -EBUSY;
 
-	set_bit(GD_NEED_PART_SCAN, &disk->state);
 	/*
 	 * If the device is opened exclusively by current thread already, it's
 	 * safe to scan partitons, otherwise, use bd_prepare_to_claim() to
@@ -381,12 +380,19 @@ int disk_scan_partitions(struct gendisk *disk, fmode_t mode)
 			return ret;
 	}
 
+	set_bit(GD_NEED_PART_SCAN, &disk->state);
 	bdev = blkdev_get_by_dev(disk_devt(disk), mode & ~FMODE_EXCL, NULL);
 	if (IS_ERR(bdev))
 		ret = PTR_ERR(bdev);
 	else
 		blkdev_put(bdev, mode & ~FMODE_EXCL);
 
+	/*
+	 * If blkdev_get_by_dev() failed early, GD_NEED_PART_SCAN is still set,
+	 * and this will cause that re-assemble partitioned raid device will
+	 * creat partition for underlying disk.
+	 */
+	clear_bit(GD_NEED_PART_SCAN, &disk->state);
 	if (!(mode & FMODE_EXCL))
 		bd_abort_claiming(disk->part0, disk_scan_partitions);
 	return ret;
From: Peter Zijlstra peterz@infradead.org
[ Upstream commit b168098912926236bbeebaf7795eb7aab76d2b45 ]
Thomas reported that offlining CPUs spends a lot of time in synchronize_rcu() as called from perf_pmu_migrate_context() even though he's not actually using uncore events.
Turns out, the thing is unconditionally waiting for RCU, even if there are no actual events to migrate.
Fixes: 0cda4c023132 ("perf: Introduce perf_pmu_migrate_context()")
Reported-by: Thomas Gleixner tglx@linutronix.de
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Tested-by: Thomas Gleixner tglx@linutronix.de
Reviewed-by: Thomas Gleixner tglx@linutronix.de
Reviewed-by: Paul E. McKenney paulmck@kernel.org
Link: https://lkml.kernel.org/r/20230403090858.GT4253@hirez.programming.kicks-ass....
Signed-off-by: Sasha Levin sashal@kernel.org
---
 kernel/events/core.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index fad170b475921..4b3205f6bed5e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12875,12 +12875,14 @@ void perf_pmu_migrate_context(struct pmu *pmu, int src_cpu, int dst_cpu)
 	__perf_pmu_remove(src_ctx, src_cpu, pmu, &src_ctx->pinned_groups, &events);
 	__perf_pmu_remove(src_ctx, src_cpu, pmu, &src_ctx->flexible_groups, &events);
 
-	/*
-	 * Wait for the events to quiesce before re-instating them.
-	 */
-	synchronize_rcu();
+	if (!list_empty(&events)) {
+		/*
+		 * Wait for the events to quiesce before re-instating them.
+		 */
+		synchronize_rcu();
 
-	__perf_pmu_install(dst_ctx, dst_cpu, pmu, &events);
+		__perf_pmu_install(dst_ctx, dst_cpu, pmu, &events);
+	}
 
 	mutex_unlock(&dst_ctx->mutex);
 	mutex_unlock(&src_ctx->mutex);
From: Kan Liang kan.liang@linux.intel.com
[ Upstream commit 24d3ae2f37d8bc3c14b31d353c5d27baf582b6a6 ]
The same-task check in perf_event_set_output() has some potential issues for certain usages.
With the current perf code, there is a problem when using perf_event_open() to have multiple events sample into the same mmap'd memory while both are attached to the same process. https://lore.kernel.org/all/92645262-D319-4068-9C44-2409EF44888E@gmail.com/ This is because the event->ctx is not yet ready when perf_event_set_output() is invoked from perf_event_open().
Besides the above issue, before commit bd2756811766 ("perf: Rewrite core context handling"), perf record could error out when sampling with a hardware event and a software event, as below.

  $ perf record -e cycles,dummy --per-thread ls
  failed to mmap with 22 (Invalid argument)

That is because, prior to that commit, a hardware event and a software event lived in different task contexts.
The problem has been a long-standing issue since commit c3f00c70276d ("perf: Separate find_get_context() from event initialization").
The task struct is stored in the event->hw.target for each per-thread event. It is a more reliable way to determine whether two events are attached to the same task.
The event->hw.target was also introduced several years ago, by commit 50f16a8bf9d7 ("perf: Remove type specific target pointers"). It can not only be used to fix the issue in the current code, but can also be backported to fix the issue in older kernels.
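A minimal userspace sketch of the case this enables (hypothetical attrs and pid; error handling omitted): two per-thread events on the same task sharing one ring buffer via PERF_EVENT_IOC_SET_OUTPUT:

    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    struct perf_event_attr hw_attr, sw_attr;
    pid_t pid = getpid();                 /* any single target task */

    memset(&hw_attr, 0, sizeof(hw_attr));
    hw_attr.type = PERF_TYPE_HARDWARE;
    hw_attr.config = PERF_COUNT_HW_CPU_CYCLES;
    hw_attr.size = sizeof(hw_attr);

    memset(&sw_attr, 0, sizeof(sw_attr));
    sw_attr.type = PERF_TYPE_SOFTWARE;
    sw_attr.config = PERF_COUNT_SW_DUMMY;
    sw_attr.size = sizeof(sw_attr);

    int fd1 = syscall(__NR_perf_event_open, &hw_attr, pid, -1, -1, 0);
    int fd2 = syscall(__NR_perf_event_open, &sw_attr, pid, -1, -1, 0);

    /* both events target the same task, so with the fix the
     * event->hw.target comparison matches and the redirect succeeds */
    ioctl(fd2, PERF_EVENT_IOC_SET_OUTPUT, fd1);

Before the fix, the ctx comparison could reject this even though both events are attached to the same process.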
Note: The event->hw.target was introduced later than commit c3f00c70276d. This patch may not apply between commit c3f00c70276d and commit 50f16a8bf9d7. Anybody who wants to backport to that range may have to find another solution.
Fixes: c3f00c70276d ("perf: Separate find_get_context() from event initialization")
Signed-off-by: Kan Liang kan.liang@linux.intel.com
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Reviewed-by: Zhengjun Xing zhengjun.xing@linux.intel.com
Link: https://lkml.kernel.org/r/20230322202449.512091-1-kan.liang@linux.intel.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 kernel/events/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12155,7 +12155,7 @@ perf_event_set_output(struct perf_event
 	/*
 	 * If its not a per-cpu rb, it must be the same task.
 	 */
-	if (output_event->cpu == -1 && output_event->ctx != event->ctx)
+	if (output_event->cpu == -1 && output_event->hw.target != event->hw.target)
 		goto out;
/*
From: Steven Rostedt (Google) rostedt@goodmis.org
commit 31c683967174b487939efaf65e41f5ff1404e141 upstream.
The lastcmd_mutex is only used in trace_events_synth.c and should be static.
Link: https://lore.kernel.org/linux-trace-kernel/202304062033.cRStgOuP-lkp@intel.c...
Link: https://lore.kernel.org/linux-trace-kernel/20230406111033.6e26de93@gandalf.l...
Cc: Masami Hiramatsu mhiramat@kernel.org
Cc: Mark Rutland mark.rutland@arm.com
Cc: Tze-nan Wu Tze-nan.Wu@mediatek.com
Fixes: 4ccf11c4e8a8e ("tracing/synthetic: Fix races on freeing last_cmd")
Reviewed-by: Mukesh Ojha quic_mojha@quicinc.com
Reported-by: kernel test robot lkp@intel.com
Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 kernel/trace/trace_events_synth.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/trace/trace_events_synth.c
+++ b/kernel/trace/trace_events_synth.c
@@ -44,7 +44,7 @@ enum { ERRORS };
 
 static const char *err_text[] = { ERRORS };
 
-DEFINE_MUTEX(lastcmd_mutex);
+static DEFINE_MUTEX(lastcmd_mutex);
 static char *last_cmd;
static int errpos(const char *str)
From: Sergey Senozhatsky senozhatsky@chromium.org
commit 618a8a917dbf5830e2064d2fa0568940eb5d2584 upstream.
When freeable class stat was added to classes file (back in 2016) we forgot to update zsmalloc documentation. Fix that.
Link: https://lkml.kernel.org/r/20230325024631.2817153-3-senozhatsky@chromium.org
Fixes: 1120ed548394 ("mm/zsmalloc: add `freeable' column to pool stat")
Signed-off-by: Sergey Senozhatsky senozhatsky@chromium.org
Cc: Minchan Kim minchan@kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 Documentation/mm/zsmalloc.rst | 2 ++
 1 file changed, 2 insertions(+)

--- a/Documentation/mm/zsmalloc.rst
+++ b/Documentation/mm/zsmalloc.rst
@@ -68,6 +68,8 @@ pages_used
 	the number of pages allocated for the class
 pages_per_zspage
 	the number of 0-order pages to make a zspage
+freeable
+	the approximate number of pages class compaction can free
We assign a zspage to ZS_ALMOST_EMPTY fullness group when n <= N / f, where
From: Yafang Shao laoar.shao@gmail.com
commit f349b15e183d6956f1b63d6ff57849ff10c7edd5 upstream.
There are some suspicious warn_alloc reports on my test server, for example:
[13366.518837] warn_alloc: 81 callbacks suppressed
[13366.518841] test_verifier: vmalloc error: size 4096, page order 0, failed to allocate pages, mode:0x500dc2(GFP_HIGHUSER|__GFP_ZERO|__GFP_ACCOUNT), nodemask=(null),cpuset=/,mems_allowed=0-1
[13366.522240] CPU: 30 PID: 722463 Comm: test_verifier Kdump: loaded Tainted: G W O 6.2.0+ #638
[13366.524216] Call Trace:
[13366.524702]  <TASK>
[13366.525148]  dump_stack_lvl+0x6c/0x80
[13366.525712]  dump_stack+0x10/0x20
[13366.526239]  warn_alloc+0x119/0x190
[13366.526783]  ? alloc_pages_bulk_array_mempolicy+0x9e/0x2a0
[13366.527470]  __vmalloc_area_node+0x546/0x5b0
[13366.528066]  __vmalloc_node_range+0xc2/0x210
[13366.528660]  __vmalloc_node+0x42/0x50
[13366.529186]  ? bpf_prog_realloc+0x53/0xc0
[13366.529743]  __vmalloc+0x1e/0x30
[13366.530235]  bpf_prog_realloc+0x53/0xc0
[13366.530771]  bpf_patch_insn_single+0x80/0x1b0
[13366.531351]  bpf_jit_blind_constants+0xe9/0x1c0
[13366.531932]  ? __free_pages+0xee/0x100
[13366.532457]  ? free_large_kmalloc+0x58/0xb0
[13366.533002]  bpf_int_jit_compile+0x8c/0x5e0
[13366.533546]  bpf_prog_select_runtime+0xb4/0x100
[13366.534108]  bpf_prog_load+0x6b1/0xa50
[13366.534610]  ? perf_event_task_tick+0x96/0xb0
[13366.535151]  ? security_capable+0x3a/0x60
[13366.535663]  __sys_bpf+0xb38/0x2190
[13366.536120]  ? kvm_clock_get_cycles+0x9/0x10
[13366.536643]  __x64_sys_bpf+0x1c/0x30
[13366.537094]  do_syscall_64+0x38/0x90
[13366.537554]  entry_SYSCALL_64_after_hwframe+0x72/0xdc
[13366.538107] RIP: 0033:0x7f78310f8e29
[13366.538561] Code: 01 00 48 81 c4 80 00 00 00 e9 f1 fe ff ff 0f 1f 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 17 e0 2c 00 f7 d8 64 89 01 48
[13366.540286] RSP: 002b:00007ffe2a61fff8 EFLAGS: 00000206 ORIG_RAX: 0000000000000141
[13366.541031] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f78310f8e29
[13366.541749] RDX: 0000000000000080 RSI: 00007ffe2a6200b0 RDI: 0000000000000005
[13366.542470] RBP: 00007ffe2a620010 R08: 00007ffe2a6202a0 R09: 00007ffe2a6200b0
[13366.543183] R10: 00000000000f423e R11: 0000000000000206 R12: 0000000000407800
[13366.543900] R13: 00007ffe2a620540 R14: 0000000000000000 R15: 0000000000000000
[13366.544623]  </TASK>
[13366.545260] Mem-Info:
[13366.546121] active_anon:81319 inactive_anon:20733 isolated_anon:0 active_file:69450 inactive_file:5624 isolated_file:0 unevictable:0 dirty:10 writeback:0 slab_reclaimable:69649 slab_unreclaimable:48930 mapped:27400 shmem:12868 pagetables:4929 sec_pagetables:0 bounce:0 kernel_misc_reclaimable:0 free:15870308 free_pcp:142935 free_cma:0
[13366.551886] Node 0 active_anon:224836kB inactive_anon:33528kB active_file:175692kB inactive_file:13752kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:59248kB dirty:32kB writeback:0kB shmem:18252kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB kernel_stack:4616kB pagetables:10664kB sec_pagetables:0kB all_unreclaimable? no
[13366.555184] Node 1 active_anon:100440kB inactive_anon:49404kB active_file:102108kB inactive_file:8744kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:50352kB dirty:8kB writeback:0kB shmem:33220kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB kernel_stack:3896kB pagetables:9052kB sec_pagetables:0kB all_unreclaimable? no
[13366.558262] Node 0 DMA free:15360kB boost:0kB min:304kB low:380kB high:456kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[13366.560821] lowmem_reserve[]: 0 2735 31873 31873 31873
[13366.561981] Node 0 DMA32 free:2790904kB boost:0kB min:56028kB low:70032kB high:84036kB reserved_highatomic:0KB active_anon:1936kB inactive_anon:20kB active_file:396kB inactive_file:344kB unevictable:0kB writepending:0kB present:3129200kB managed:2801520kB mlocked:0kB bounce:0kB free_pcp:5188kB local_pcp:0kB free_cma:0kB
[13366.565148] lowmem_reserve[]: 0 0 29137 29137 29137
[13366.566168] Node 0 Normal free:28533824kB boost:0kB min:596740kB low:745924kB high:895108kB reserved_highatomic:28672KB active_anon:222900kB inactive_anon:33508kB active_file:175296kB inactive_file:13408kB unevictable:0kB writepending:32kB present:30408704kB managed:29837172kB mlocked:0kB bounce:0kB free_pcp:295724kB local_pcp:0kB free_cma:0kB
[13366.569485] lowmem_reserve[]: 0 0 0 0 0
[13366.570416] Node 1 Normal free:32141144kB boost:0kB min:660504kB low:825628kB high:990752kB reserved_highatomic:69632KB active_anon:100440kB inactive_anon:49404kB active_file:102108kB inactive_file:8744kB unevictable:0kB writepending:8kB present:33554432kB managed:33025372kB mlocked:0kB bounce:0kB free_pcp:270880kB local_pcp:46860kB free_cma:0kB
[13366.573403] lowmem_reserve[]: 0 0 0 0 0
[13366.574015] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15360kB
[13366.575474] Node 0 DMA32: 782*4kB (UME) 756*8kB (UME) 736*16kB (UME) 745*32kB (UME) 694*64kB (UME) 653*128kB (UME) 595*256kB (UME) 552*512kB (UME) 454*1024kB (UME) 347*2048kB (UME) 246*4096kB (UME) = 2790904kB
[13366.577442] Node 0 Normal: 33856*4kB (UMEH) 51815*8kB (UMEH) 42418*16kB (UMEH) 36272*32kB (UMEH) 22195*64kB (UMEH) 10296*128kB (UMEH) 7238*256kB (UMEH) 5638*512kB (UEH) 5337*1024kB (UMEH) 3506*2048kB (UMEH) 1470*4096kB (UME) = 28533784kB
[13366.580460] Node 1 Normal: 15776*4kB (UMEH) 37485*8kB (UMEH) 29509*16kB (UMEH) 21420*32kB (UMEH) 14818*64kB (UMEH) 13051*128kB (UMEH) 9918*256kB (UMEH) 7374*512kB (UMEH) 5397*1024kB (UMEH) 3887*2048kB (UMEH) 2002*4096kB (UME) = 32141240kB
[13366.583027] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[13366.584380] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[13366.585702] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[13366.587042] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[13366.588372] 87386 total pagecache pages
[13366.589266] 0 pages in swap cache
[13366.590327] Free swap = 0kB
[13366.591227] Total swap = 0kB
[13366.592142] 16777082 pages RAM
[13366.593057] 0 pages HighMem/MovableOnly
[13366.594037] 357226 pages reserved
[13366.594979] 0 pages hwpoisoned
This failure really confused me, as there are still lots of available pages. I finally figured out that it was caused by a fatal signal. When a process is allocating memory via vm_area_alloc_pages(), it will break out directly, even if it hasn't allocated the requested number of pages, when it receives a fatal signal. In that case we shouldn't show this warn_alloc, as it is useless. We only need to show this warning when there really are not enough pages.
Link: https://lkml.kernel.org/r/20230330162625.13604-1-laoar.shao@gmail.com
Signed-off-by: Yafang Shao laoar.shao@gmail.com
Reviewed-by: Lorenzo Stoakes lstoakes@gmail.com
Cc: Christoph Hellwig hch@infradead.org
Cc: Uladzislau Rezki (Sony) urezki@gmail.com
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 mm/vmalloc.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3070,9 +3070,11 @@ static void *__vmalloc_area_node(struct
 	 * allocation request, free them via __vfree() if any.
 	 */
 	if (area->nr_pages != nr_small_pages) {
-		warn_alloc(gfp_mask, NULL,
-			"vmalloc error: size %lu, page order %u, failed to allocate pages",
-			area->nr_pages * PAGE_SIZE, page_order);
+		/* vm_area_alloc_pages() can also fail due to a fatal signal */
+		if (!fatal_signal_pending(current))
+			warn_alloc(gfp_mask, NULL,
+				"vmalloc error: size %lu, page order %u, failed to allocate pages",
+				area->nr_pages * PAGE_SIZE, page_order);
 		goto fail;
 	}
From: Lorenzo Bianconi lorenzo@kernel.org
commit eb85df0a5643612285f61f38122564498d0c49f7 upstream.
Fix the firmware used for the offload capability check for 0x0616 devices. This patch enables offload capabilities for 0x0616 devices.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217245
Fixes: 034ae28b56f1 ("wifi: mt76: mt7921: introduce remain_on_channel support")
Cc: stable@vger.kernel.org
Signed-off-by: Lorenzo Bianconi lorenzo@kernel.org
Signed-off-by: Kalle Valo kvalo@kernel.org
Link: https://lore.kernel.org/r/632d8f0c9781c9902d7160e2c080aa7e9232d50d.167999748...
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/net/wireless/mediatek/mt76/mt7921/pci.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
@@ -20,7 +20,7 @@ static const struct pci_device_id mt7921
 	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x0608),
 		.driver_data = (kernel_ulong_t)MT7921_FIRMWARE_WM },
 	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x0616),
-		.driver_data = (kernel_ulong_t)MT7921_FIRMWARE_WM },
+		.driver_data = (kernel_ulong_t)MT7922_FIRMWARE_WM },
 	{ },
 };
From: Felix Fietkau nbd@nbd.name
commit e6db67fa871dee37d22701daba806bfcd4d9df49 upstream.
This helps avoid cleartext leakage of already queued or powersave buffered packets, when a reassoc triggers the key deletion.
Cc: stable@vger.kernel.org
Signed-off-by: Felix Fietkau nbd@nbd.name
Signed-off-by: Kalle Valo kvalo@kernel.org
Link: https://lore.kernel.org/r/20230330091259.61378-1-nbd@nbd.name
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/net/wireless/mediatek/mt76/mt7603/main.c   | 10 +--
 drivers/net/wireless/mediatek/mt76/mt7615/mac.c    | 70 ++++++---------------
 drivers/net/wireless/mediatek/mt76/mt7615/main.c   | 15 ++--
 drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h |  6 -
 drivers/net/wireless/mediatek/mt76/mt76x02_util.c  | 18 ++---
 drivers/net/wireless/mediatek/mt76/mt7915/main.c   | 13 +--
 drivers/net/wireless/mediatek/mt76/mt7921/main.c   | 13 +--
 drivers/net/wireless/mediatek/mt76/mt7996/main.c   | 13 +--
 8 files changed, 62 insertions(+), 96 deletions(-)
--- a/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
@@ -512,15 +512,15 @@ mt7603_set_key(struct ieee80211_hw *hw,
 	    !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
 		return -EOPNOTSUPP;
 
-	if (cmd == SET_KEY) {
-		key->hw_key_idx = wcid->idx;
-		wcid->hw_key_idx = idx;
-	} else {
+	if (cmd != SET_KEY) {
 		if (idx == wcid->hw_key_idx)
 			wcid->hw_key_idx = -1;
 
-		key = NULL;
+		return 0;
 	}
+
+	key->hw_key_idx = wcid->idx;
+	wcid->hw_key_idx = idx;
 	mt76_wcid_key_setup(&dev->mt76, wcid, key);
 
 	return mt7603_wtbl_set_key(dev, wcid->idx, key);
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
@@ -1193,8 +1193,7 @@ EXPORT_SYMBOL_GPL(mt7615_mac_enable_rtsc
 static int
 mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
 			   struct ieee80211_key_conf *key,
-			   enum mt76_cipher_type cipher, u16 cipher_mask,
-			   enum set_key_cmd cmd)
+			   enum mt76_cipher_type cipher, u16 cipher_mask)
 {
 	u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx) + 30 * 4;
 	u8 data[32] = {};
@@ -1203,27 +1202,18 @@ mt7615_mac_wtbl_update_key(struct mt7615
 		return -EINVAL;
 
 	mt76_rr_copy(dev, addr, data, sizeof(data));
-	if (cmd == SET_KEY) {
-		if (cipher == MT_CIPHER_TKIP) {
-			/* Rx/Tx MIC keys are swapped */
-			memcpy(data, key->key, 16);
-			memcpy(data + 16, key->key + 24, 8);
-			memcpy(data + 24, key->key + 16, 8);
-		} else {
-			if (cipher_mask == BIT(cipher))
-				memcpy(data, key->key, key->keylen);
-			else if (cipher != MT_CIPHER_BIP_CMAC_128)
-				memcpy(data, key->key, 16);
-			if (cipher == MT_CIPHER_BIP_CMAC_128)
-				memcpy(data + 16, key->key, 16);
-		}
+	if (cipher == MT_CIPHER_TKIP) {
+		/* Rx/Tx MIC keys are swapped */
+		memcpy(data, key->key, 16);
+		memcpy(data + 16, key->key + 24, 8);
+		memcpy(data + 24, key->key + 16, 8);
 	} else {
+		if (cipher_mask == BIT(cipher))
+			memcpy(data, key->key, key->keylen);
+		else if (cipher != MT_CIPHER_BIP_CMAC_128)
+			memcpy(data, key->key, 16);
 		if (cipher == MT_CIPHER_BIP_CMAC_128)
-			memset(data + 16, 0, 16);
-		else if (cipher_mask)
-			memset(data, 0, 16);
-		if (!cipher_mask)
-			memset(data, 0, sizeof(data));
+			memcpy(data + 16, key->key, 16);
 	}
 
 	mt76_wr_copy(dev, addr, data, sizeof(data));
@@ -1234,7 +1224,7 @@ mt7615_mac_wtbl_update_key(struct mt7615
 static int
 mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
 			  enum mt76_cipher_type cipher, u16 cipher_mask,
-			  int keyidx, enum set_key_cmd cmd)
+			  int keyidx)
 {
 	u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx), w0, w1;
 
@@ -1253,9 +1243,7 @@ mt7615_mac_wtbl_update_pk(struct mt7615_
 	else
 		w0 &= ~MT_WTBL_W0_RX_IK_VALID;
 
-	if (cmd == SET_KEY &&
-	    (cipher != MT_CIPHER_BIP_CMAC_128 ||
-	     cipher_mask == BIT(cipher))) {
+	if (cipher != MT_CIPHER_BIP_CMAC_128 || cipher_mask == BIT(cipher)) {
 		w0 &= ~MT_WTBL_W0_KEY_IDX;
 		w0 |= FIELD_PREP(MT_WTBL_W0_KEY_IDX, keyidx);
 	}
@@ -1272,19 +1260,10 @@ mt7615_mac_wtbl_update_pk(struct mt7615_
 
 static void
 mt7615_mac_wtbl_update_cipher(struct mt7615_dev *dev, struct mt76_wcid *wcid,
-			      enum mt76_cipher_type cipher, u16 cipher_mask,
-			      enum set_key_cmd cmd)
+			      enum mt76_cipher_type cipher, u16 cipher_mask)
 {
 	u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx);
 
-	if (!cipher_mask) {
-		mt76_clear(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE);
-		return;
-	}
-
-	if (cmd != SET_KEY)
-		return;
-
 	if (cipher == MT_CIPHER_BIP_CMAC_128 &&
 	    cipher_mask & ~BIT(MT_CIPHER_BIP_CMAC_128))
 		return;
@@ -1295,8 +1274,7 @@ mt7615_mac_wtbl_update_cipher(struct mt7
 
 int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
 			      struct mt76_wcid *wcid,
-			      struct ieee80211_key_conf *key,
-			      enum set_key_cmd cmd)
+			      struct ieee80211_key_conf *key)
 {
 	enum mt76_cipher_type cipher;
 	u16 cipher_mask = wcid->cipher;
@@ -1306,19 +1284,14 @@ int __mt7615_mac_wtbl_set_key(struct mt7
 	if (cipher == MT_CIPHER_NONE)
 		return -EOPNOTSUPP;
 
-	if (cmd == SET_KEY)
-		cipher_mask |= BIT(cipher);
-	else
-		cipher_mask &= ~BIT(cipher);
-
-	mt7615_mac_wtbl_update_cipher(dev, wcid, cipher, cipher_mask, cmd);
-	err = mt7615_mac_wtbl_update_key(dev, wcid, key, cipher, cipher_mask,
-					 cmd);
+	cipher_mask |= BIT(cipher);
+	mt7615_mac_wtbl_update_cipher(dev, wcid, cipher, cipher_mask);
+	err = mt7615_mac_wtbl_update_key(dev, wcid, key, cipher, cipher_mask);
 	if (err < 0)
 		return err;
 
 	err = mt7615_mac_wtbl_update_pk(dev, wcid, cipher, cipher_mask,
-					key->keyidx, cmd);
+					key->keyidx);
 	if (err < 0)
 		return err;
 
@@ -1329,13 +1302,12 @@ int __mt7615_mac_wtbl_set_key(struct mt7
 
 int mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
 			    struct mt76_wcid *wcid,
-			    struct ieee80211_key_conf *key,
-			    enum set_key_cmd cmd)
+			    struct ieee80211_key_conf *key)
 {
 	int err;
 
 	spin_lock_bh(&dev->mt76.lock);
-	err = __mt7615_mac_wtbl_set_key(dev, wcid, key, cmd);
+	err = __mt7615_mac_wtbl_set_key(dev, wcid, key);
 	spin_unlock_bh(&dev->mt76.lock);
return err; --- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c +++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c @@ -391,18 +391,17 @@ static int mt7615_set_key(struct ieee802
if (cmd == SET_KEY) *wcid_keyidx = idx; - else if (idx == *wcid_keyidx) - *wcid_keyidx = -1; - else + else { + if (idx == *wcid_keyidx) + *wcid_keyidx = -1; goto out; + }
- mt76_wcid_key_setup(&dev->mt76, wcid, - cmd == SET_KEY ? key : NULL); - + mt76_wcid_key_setup(&dev->mt76, wcid, key); if (mt76_is_mmio(&dev->mt76)) - err = mt7615_mac_wtbl_set_key(dev, wcid, key, cmd); + err = mt7615_mac_wtbl_set_key(dev, wcid, key); else - err = __mt7615_mac_wtbl_set_key(dev, wcid, key, cmd); + err = __mt7615_mac_wtbl_set_key(dev, wcid, key);
out: mt7615_mutex_release(dev); --- a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h +++ b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h @@ -484,11 +484,9 @@ int mt7615_mac_write_txwi(struct mt7615_ void mt7615_mac_set_timing(struct mt7615_phy *phy); int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev, struct mt76_wcid *wcid, - struct ieee80211_key_conf *key, - enum set_key_cmd cmd); + struct ieee80211_key_conf *key); int mt7615_mac_wtbl_set_key(struct mt7615_dev *dev, struct mt76_wcid *wcid, - struct ieee80211_key_conf *key, - enum set_key_cmd cmd); + struct ieee80211_key_conf *key); void mt7615_mac_reset_work(struct work_struct *work); u32 mt7615_mac_get_sta_tid_sn(struct mt7615_dev *dev, int wcid, u8 tid);
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c @@ -455,20 +455,20 @@ int mt76x02_set_key(struct ieee80211_hw msta = sta ? (struct mt76x02_sta *)sta->drv_priv : NULL; wcid = msta ? &msta->wcid : &mvif->group_wcid;
- if (cmd == SET_KEY) { - key->hw_key_idx = wcid->idx; - wcid->hw_key_idx = idx; - if (key->flags & IEEE80211_KEY_FLAG_RX_MGMT) { - key->flags |= IEEE80211_KEY_FLAG_SW_MGMT_TX; - wcid->sw_iv = true; - } - } else { + if (cmd != SET_KEY) { if (idx == wcid->hw_key_idx) { wcid->hw_key_idx = -1; wcid->sw_iv = false; }
- key = NULL; + return 0; + } + + key->hw_key_idx = wcid->idx; + wcid->hw_key_idx = idx; + if (key->flags & IEEE80211_KEY_FLAG_RX_MGMT) { + key->flags |= IEEE80211_KEY_FLAG_SW_MGMT_TX; + wcid->sw_iv = true; } mt76_wcid_key_setup(&dev->mt76, wcid, key);
--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c +++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c @@ -410,16 +410,15 @@ static int mt7915_set_key(struct ieee802 mt7915_mcu_add_bss_info(phy, vif, true); }
- if (cmd == SET_KEY) + if (cmd == SET_KEY) { *wcid_keyidx = idx; - else if (idx == *wcid_keyidx) - *wcid_keyidx = -1; - else + } else { + if (idx == *wcid_keyidx) + *wcid_keyidx = -1; goto out; + }
- mt76_wcid_key_setup(&dev->mt76, wcid, - cmd == SET_KEY ? key : NULL); - + mt76_wcid_key_setup(&dev->mt76, wcid, key); err = mt76_connac_mcu_add_key(&dev->mt76, vif, &msta->bip, key, MCU_EXT_CMD(STA_REC_UPDATE), &msta->wcid, cmd); --- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c +++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c @@ -569,16 +569,15 @@ static int mt7921_set_key(struct ieee802
mt7921_mutex_acquire(dev);
- if (cmd == SET_KEY) + if (cmd == SET_KEY) { *wcid_keyidx = idx; - else if (idx == *wcid_keyidx) - *wcid_keyidx = -1; - else + } else { + if (idx == *wcid_keyidx) + *wcid_keyidx = -1; goto out; + }
- mt76_wcid_key_setup(&dev->mt76, wcid, - cmd == SET_KEY ? key : NULL); - + mt76_wcid_key_setup(&dev->mt76, wcid, key); err = mt76_connac_mcu_add_key(&dev->mt76, vif, &msta->bip, key, MCU_UNI_CMD(STA_REC_UPDATE), &msta->wcid, cmd); --- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c +++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c @@ -351,16 +351,15 @@ static int mt7996_set_key(struct ieee802 mt7996_mcu_add_bss_info(phy, vif, true); }
- if (cmd == SET_KEY) + if (cmd == SET_KEY) { *wcid_keyidx = idx; - else if (idx == *wcid_keyidx) - *wcid_keyidx = -1; - else + } else { + if (idx == *wcid_keyidx) + *wcid_keyidx = -1; goto out; + }
- mt76_wcid_key_setup(&dev->mt76, wcid, - cmd == SET_KEY ? key : NULL); - + mt76_wcid_key_setup(&dev->mt76, wcid, key); err = mt7996_mcu_add_key(&dev->mt76, vif, &msta->bip, key, MCU_WMWA_UNI_CMD(STA_REC_UPDATE), &msta->wcid, cmd);
From: Jens Axboe axboe@kernel.dk
commit 8c68ae3b22fa6fb2dbe83ef955ff10936503d28e upstream.
Since SQE memory is shared with userspace, we should only be reading it once. We cannot read it multiple times, particularly when it's read once for validation and then read again for the actual use.
ublk_ch_uring_cmd() is safe when called as a retry operation, as the memory backing is stable at that point. But for normal issue, we want to ensure that we only read ublksrv_io_cmd once. Wrap the function in a helper that reads the value into an on-stack copy of the struct.
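As an illustration (not the kernel code), here is a minimal userspace sketch of the same pattern the helper below applies: snapshot every field of a shared structure exactly once before acting on it. The struct layout and the READ_ONCE() definition are simplified stand-ins.

#include <stdint.h>

struct io_cmd {
	uint16_t q_id;
	uint16_t tag;
	int32_t  result;
	uint64_t addr;
};

/* simplified stand-in for the kernel's READ_ONCE() */
#define READ_ONCE(x) (*(volatile __typeof__(x) *)&(x))

static struct io_cmd snapshot_cmd(const struct io_cmd *shared)
{
	struct io_cmd copy;

	/*
	 * Each field is read exactly once from the shared buffer, so a
	 * concurrent writer cannot change a value between the read used
	 * for validation and the read used for the actual work.
	 */
	copy.q_id   = READ_ONCE(shared->q_id);
	copy.tag    = READ_ONCE(shared->tag);
	copy.result = READ_ONCE(shared->result);
	copy.addr   = READ_ONCE(shared->addr);
	return copy;
}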
Cc: stable@vger.kernel.org # 6.0+ Reviewed-by: Ming Lei ming.lei@redhat.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/block/ublk_drv.c | 22 ++++++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-)
--- a/drivers/block/ublk_drv.c +++ b/drivers/block/ublk_drv.c @@ -1202,9 +1202,10 @@ static void ublk_handle_need_get_data(st ublk_queue_cmd(ubq, req); }
-static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags) +static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd, + unsigned int issue_flags, + struct ublksrv_io_cmd *ub_cmd) { - struct ublksrv_io_cmd *ub_cmd = (struct ublksrv_io_cmd *)cmd->cmd; struct ublk_device *ub = cmd->file->private_data; struct ublk_queue *ubq; struct ublk_io *io; @@ -1306,6 +1307,23 @@ static int ublk_ch_uring_cmd(struct io_u return -EIOCBQUEUED; }
+static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags) +{ + struct ublksrv_io_cmd *ub_src = (struct ublksrv_io_cmd *) cmd->cmd; + struct ublksrv_io_cmd ub_cmd; + + /* + * Not necessary for async retry, but let's keep it simple and always + * copy the values to avoid any potential reuse. + */ + ub_cmd.q_id = READ_ONCE(ub_src->q_id); + ub_cmd.tag = READ_ONCE(ub_src->tag); + ub_cmd.result = READ_ONCE(ub_src->result); + ub_cmd.addr = READ_ONCE(ub_src->addr); + + return __ublk_ch_uring_cmd(cmd, issue_flags, &ub_cmd); +} + static const struct file_operations ublk_ch_fops = { .owner = THIS_MODULE, .open = ublk_ch_open,
From: Boris Brezillon boris.brezillon@collabora.com
commit 764a2ab9eb56e1200083e771aab16186836edf1d upstream.
Make sure all bo->base.pages entries are either NULL or pointing to a valid page before calling drm_gem_shmem_put_pages().
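A minimal sketch of the invariant the one-line fix below restores, using simplified ERR_PTR-style helpers and hypothetical grab_page()/put_page_ref() allocators (not real kernel APIs): a slot whose allocation failed must be set back to NULL, because the unwind path treats every non-NULL entry as a valid page.

#include <stdint.h>
#include <stddef.h>

#define MAX_PAGES 4

/* minimal ERR_PTR-style helpers mirroring the kernel's */
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-4095;
}

/* hypothetical allocator that can fail with an encoded error */
extern void *grab_page(int i);
extern void put_page_ref(void *p);

static int map_pages(void *pages[MAX_PAGES])
{
	int i, ret;

	for (i = 0; i < MAX_PAGES; i++) {
		pages[i] = grab_page(i);
		if (IS_ERR(pages[i])) {
			ret = (int)PTR_ERR(pages[i]);
			pages[i] = NULL; /* the fix: never leak an ERR_PTR */
			goto err_pages;
		}
	}
	return 0;

err_pages:
	/* cleanup treats any non-NULL entry as a valid page */
	for (i = 0; i < MAX_PAGES; i++)
		if (pages[i])
			put_page_ref(pages[i]);
	return ret;
}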
Reported-by: Tomeu Vizoso tomeu.vizoso@collabora.com Cc: stable@vger.kernel.org Fixes: 187d2929206e ("drm/panfrost: Add support for GPU heap allocations") Signed-off-by: Boris Brezillon boris.brezillon@collabora.com Reviewed-by: Steven Price steven.price@arm.com Link: https://patchwork.freedesktop.org/patch/msgid/20210521093811.1018992-1-boris... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/panfrost/panfrost_mmu.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -504,6 +504,7 @@ static int panfrost_mmu_map_fault_addr(s if (IS_ERR(pages[i])) { mutex_unlock(&bo->base.pages_lock); ret = PTR_ERR(pages[i]); + pages[i] = NULL; goto err_pages; } }
From: Karol Herbst kherbst@redhat.com
commit 7f67aa097e875c87fba024e850cf405342300059 upstream.
This allows us to advertise more modes, especially on HDR displays.

Fixes 4K@60 modes on my TV and main display, both of which use an HDMI-to-DP adapter. Also fixes similar issues for other users running into this.
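A rough, runnable userspace sketch of the bpc-reduction loop this patch adds (see the disp.c hunk below). The pixel clock is the standard 4K@60 value; max_rate is a made-up stand-in for link_nr * link_bw, chosen so that 10 bpc does not fit the link but 8 bpc does.

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int clock = 594000;     /* 4K@60 pixel clock, in kHz */
	unsigned int max_rate = 2160000; /* hypothetical link_nr * link_bw */
	unsigned int bpc = 10;           /* the driver caps bpc at 10 */
	unsigned int mode_rate;

	/* reduce the bpc until the mode fits the link */
	while (bpc > 6) {
		mode_rate = DIV_ROUND_UP(clock * bpc * 3, 8);
		if (mode_rate <= max_rate)
			break;
		bpc -= 2;
	}
	printf("negotiated bpc: %u\n", bpc); /* prints 8 with these numbers */
	return 0;
}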
Cc: stable@vger.kernel.org # 5.10+ Signed-off-by: Karol Herbst kherbst@redhat.com Reviewed-by: Lyude Paul lyude@redhat.com Link: https://patchwork.freedesktop.org/patch/msgid/20230330223938.4025569-1-kherb... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/nouveau/dispnv50/disp.c | 32 ++++++++++++++++++++++++++++++++ drivers/gpu/drm/nouveau/nouveau_dp.c | 8 +++++--- 2 files changed, 37 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c @@ -363,6 +363,35 @@ nv50_outp_atomic_check_view(struct drm_e return 0; }
+static void +nv50_outp_atomic_fix_depth(struct drm_encoder *encoder, struct drm_crtc_state *crtc_state) +{ + struct nv50_head_atom *asyh = nv50_head_atom(crtc_state); + struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder); + struct drm_display_mode *mode = &asyh->state.adjusted_mode; + unsigned int max_rate, mode_rate; + + switch (nv_encoder->dcb->type) { + case DCB_OUTPUT_DP: + max_rate = nv_encoder->dp.link_nr * nv_encoder->dp.link_bw; + + /* we don't support more than 10 anyway */ + asyh->or.bpc = min_t(u8, asyh->or.bpc, 10); + + /* reduce the bpc until it works out */ + while (asyh->or.bpc > 6) { + mode_rate = DIV_ROUND_UP(mode->clock * asyh->or.bpc * 3, 8); + if (mode_rate <= max_rate) + break; + + asyh->or.bpc -= 2; + } + break; + default: + break; + } +} + static int nv50_outp_atomic_check(struct drm_encoder *encoder, struct drm_crtc_state *crtc_state, @@ -381,6 +410,9 @@ nv50_outp_atomic_check(struct drm_encode if (crtc_state->mode_changed || crtc_state->connectors_changed) asyh->or.bpc = connector->display_info.bpc;
+ /* We might have to reduce the bpc */ + nv50_outp_atomic_fix_depth(encoder, crtc_state); + return 0; }
--- a/drivers/gpu/drm/nouveau/nouveau_dp.c +++ b/drivers/gpu/drm/nouveau/nouveau_dp.c @@ -263,8 +263,6 @@ nouveau_dp_irq(struct work_struct *work) }
/* TODO: - * - Use the minimum possible BPC here, once we add support for the max bpc * property. * - Validate against the DP caps advertised by the GPU (we don't check these * yet) */ @@ -276,7 +274,11 @@ nv50_dp_mode_valid(struct drm_connector { const unsigned int min_clock = 25000; unsigned int max_rate, mode_rate, ds_max_dotclock, clock = mode->clock; - const u8 bpp = connector->display_info.bpc * 3; + /* Check with the minimum bpc always, so we can advertise better modes. + * In particular not doing this causes modes to be dropped on HDR + * displays as we might check with a bpc of 16 even. + */ + const u8 bpp = 6 * 3;
if (mode->flags & DRM_MODE_FLAG_INTERLACE && !outp->caps.dp_interlace) return MODE_NO_INTERLACE;
From: Tvrtko Ursulin tvrtko.ursulin@intel.com
commit dc3421560a67361442f33ec962fc6dd48895a0df upstream.
When considering whether to mark one context as stopped and another as started we need to look at whether the previous and new _contexts_ are different, and not just the requests. Otherwise the software-tracked context start time was incorrectly updated to the most recent lite-restore timestamp, which in some cases resulted in active time going backward, until the context switch (typically the heartbeat pulse) would synchronise with the hardware-tracked context runtime. The easiest way to observe this behaviour was with a full-screen client at close to 100% engine load.
Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Fixes: bb6287cb1886 ("drm/i915: Track context current active time") Cc: stable@vger.kernel.org # v5.19+ Reviewed-by: Matthew Auld matthew.auld@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230320151423.1708436-1-tvrtk... [tursulin: Fix spelling in commit msg.] (cherry picked from commit b3e70051879c665acdd3a1ab50d0ed58d6a8001f) Signed-off-by: Jani Nikula jani.nikula@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/gt/intel_execlists_submission.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -2018,6 +2018,8 @@ process_csb(struct intel_engine_cs *engi * inspecting the queue to see if we need to resumbit. */ if (*prev != *execlists->active) { /* elide lite-restores */ + struct intel_context *prev_ce = NULL, *active_ce = NULL; + /* * Note the inherent discrepancy between the HW runtime, * recorded as part of the context switch, and the CPU @@ -2029,9 +2031,15 @@ process_csb(struct intel_engine_cs *engi * and correct overselves later when updating from HW. */ if (*prev) - lrc_runtime_stop((*prev)->context); + prev_ce = (*prev)->context; if (*execlists->active) - lrc_runtime_start((*execlists->active)->context); + active_ce = (*execlists->active)->context; + if (prev_ce != active_ce) { + if (prev_ce) + lrc_runtime_stop(prev_ce); + if (active_ce) + lrc_runtime_start(active_ce); + } new_timeslice(execlists); }
From: Min Li lm0963hack@gmail.com
commit dc30c011469165d57af9adac5baff7d767d20e5c upstream.
Userspace can guess the id value and try to race oa_config object creation with config remove, resulting in a use-after-free if we dereference the object after unlocking the metrics_lock. For that reason, unlocking the metrics_lock must be done after we are done dereferencing the object.
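A minimal pthread sketch of the fix pattern (illustrative names, not the i915 code): copy everything you need from the object while the lock is still held, and only use the copy after unlocking.

#include <pthread.h>

struct config {
	int id;
};

static pthread_mutex_t metrics_lock = PTHREAD_MUTEX_INITIALIZER;

/* hypothetical creation of an object tracked under metrics_lock */
extern struct config *create_config(void);

static int add_config(void)
{
	struct config *cfg;
	int id;

	pthread_mutex_lock(&metrics_lock);
	cfg = create_config();
	id = cfg->id;	/* copy while the object is still protected */
	pthread_mutex_unlock(&metrics_lock);

	/* cfg may already be freed by a concurrent remove at this point */
	return id;	/* safe: only the copy is used */
}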
Signed-off-by: Min Li lm0963hack@gmail.com Fixes: f89823c21224 ("drm/i915/perf: Implement I915_PERF_ADD/REMOVE_CONFIG interface") Cc: stable@vger.kernel.org # v4.14+ Reviewed-by: Andi Shyti andi.shyti@linux.intel.com Reviewed-by: Umesh Nerlige Ramappa umesh.nerlige.ramappa@intel.com Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230328093627.5067-1-lm0963ha... [tursulin: Manually added stable tag.] (cherry picked from commit 49f6f6483b652108bcb73accd0204a464b922395) Signed-off-by: Jani Nikula jani.nikula@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/i915_perf.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/i915/i915_perf.c +++ b/drivers/gpu/drm/i915/i915_perf.c @@ -4612,13 +4612,13 @@ int i915_perf_add_config_ioctl(struct dr err = oa_config->id; goto sysfs_err; } - - mutex_unlock(&perf->metrics_lock); + id = oa_config->id;
drm_dbg(&perf->i915->drm, "Added config %s id=%i\n", oa_config->uuid, oa_config->id); + mutex_unlock(&perf->metrics_lock);
- return oa_config->id; + return id;
sysfs_err: mutex_unlock(&perf->metrics_lock);
From: Zheng Yejian zhengyejian1@huawei.com
commit 6455b6163d8c680366663cdb8c679514d55fc30c upstream.
When a user reads the file 'trace_pipe', the kernel keeps printing the following logs that warn at "cpu_buffer->reader_page->read > rb_page_size(reader)" in rb_get_reader_page(). It looks like there is an infinite loop in tracing_read_pipe(). This problem occurred several times on the arm64 platform when testing v5.10 and below.
Call trace:
 rb_get_reader_page+0x248/0x1300
 rb_buffer_peek+0x34/0x160
 ring_buffer_peek+0xbc/0x224
 peek_next_entry+0x98/0xbc
 __find_next_entry+0xc4/0x1c0
 trace_find_next_entry_inc+0x30/0x94
 tracing_read_pipe+0x198/0x304
 vfs_read+0xb4/0x1e0
 ksys_read+0x74/0x100
 __arm64_sys_read+0x24/0x30
 el0_svc_common.constprop.0+0x7c/0x1bc
 do_el0_svc+0x2c/0x94
 el0_svc+0x20/0x30
 el0_sync_handler+0xb0/0xb4
 el0_sync+0x160/0x180
Then I dumped the vmcore and looked into the problematic per_cpu ring_buffer, and found that tail_page/commit_page/reader_page are all on the same page while reader_page->read is obviously abnormal:

tail_page == commit_page == reader_page == {
	.write = 0x100d20,
	.read = 0x8f9f4805,	// Far greater than 0xd20, obviously abnormal!!!
	.entries = 0x10004c,
	.real_end = 0x0,
	.page = {
		.time_stamp = 0x857257416af0,
		.commit = 0xd20,	// This page hasn't been fully filled.
					// .data[0...0xd20] seems normal.
	}
}
The root cause is most likely a race in which the reader and the writer are on the same page while the reader sees an event that has not been fully committed by the writer.
To fix this, add memory barriers to make sure the reader can see the content of what is committed. Since commit a0fcaaed0c46 ("ring-buffer: Fix race between reset page and reading page") has added the read barrier in rb_get_reader_page(), here we just need to add the write barrier.
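A loose userspace analogue of that barrier pairing, with C11 atomics standing in for smp_wmb()/smp_rmb() (the real code uses the page's commit index rather than a flag): the writer must publish the data before the commit, and the reader must observe the commit before trusting the data.

#include <stdatomic.h>

static int payload;          /* stands in for the event data */
static atomic_int committed; /* stands in for the commit index */

void writer(void)
{
	payload = 42;	/* write the event first */
	/* release pairs with the reader's acquire, like smp_wmb()/smp_rmb() */
	atomic_store_explicit(&committed, 1, memory_order_release);
}

int reader(void)
{
	if (atomic_load_explicit(&committed, memory_order_acquire))
		return payload;	/* guaranteed to see 42 */
	return -1;		/* not committed yet, try again later */
}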
Link: https://lore.kernel.org/linux-trace-kernel/20230325021247.2923907-1-zhengyej...
Cc: stable@vger.kernel.org Fixes: 77ae365eca89 ("ring-buffer: make lockless") Suggested-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Zheng Yejian zhengyejian1@huawei.com Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/trace/ring_buffer.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-)
--- a/kernel/trace/ring_buffer.c +++ b/kernel/trace/ring_buffer.c @@ -3102,6 +3102,10 @@ rb_set_commit_to_write(struct ring_buffe if (RB_WARN_ON(cpu_buffer, rb_is_reader_page(cpu_buffer->tail_page))) return; + /* + * No need for a memory barrier here, as the update + * of the tail_page did it for this page. + */ local_set(&cpu_buffer->commit_page->page->commit, rb_page_write(cpu_buffer->commit_page)); rb_inc_page(&cpu_buffer->commit_page); @@ -3111,6 +3115,8 @@ rb_set_commit_to_write(struct ring_buffe while (rb_commit_index(cpu_buffer) != rb_page_write(cpu_buffer->commit_page)) {
+ /* Make sure the readers see the content of what is committed. */ + smp_wmb(); local_set(&cpu_buffer->commit_page->page->commit, rb_page_write(cpu_buffer->commit_page)); RB_WARN_ON(cpu_buffer, @@ -4688,7 +4694,12 @@ rb_get_reader_page(struct ring_buffer_pe
/* * Make sure we see any padding after the write update - * (see rb_reset_tail()) + * (see rb_reset_tail()). + * + * In addition, a writer may be writing on the reader page + * if the page has not been fully filled, so the read barrier + * is also needed to make sure we see the content of what is + * committed by the writer (see rb_set_commit_to_write()). */ smp_rmb();
From: Rongwei Wang rongwei.wang@linux.alibaba.com
commit 6fe7d6b992113719e96744d974212df3fcddc76c upstream.
The si->lock must be held when deleting the si from the available list. Otherwise, another thread can re-add the si to the available list, which can lead to memory corruption. The only place we have found where this happens is in the swapoff path. This case can be described as below:
core 0 (swapoff)                     core 1
----------------                     ------
del_from_avail_list(si)              waiting
try to lock si->lock                 acquire swap_avail_lock and
                                     re-add si into swap_avail_head
acquire si->lock, but miss that si
has already been added again, and
continue to clear SWP_WRITEOK, etc.
It is easy to trigger massive numbers of warning messages inside get_swap_pages() in some special cases, for example, by calling madvise(MADV_PAGEOUT) on blocks of touched memory concurrently while running many swapon/swapoff operations (e.g. stress-ng-swap).
However, in the worst case, the above scenario can cause a panic. In swapoff(), the memory used by si could be kept in swap_info[] after turning off a swap. This means the memory corruption is not triggered immediately, but only once the entry is allocated and reset for a new swap in the swapon path. A panic message produced (with CONFIG_PLIST_DEBUG enabled):
------------[ cut here ]------------
top: 00000000e58a3003, n: 0000000013e75cda, p: 000000008cd4451a
prev: 0000000035b1e58a, n: 000000008cd4451a, p: 000000002150ee8d
next: 000000008cd4451a, n: 000000008cd4451a, p: 000000008cd4451a
WARNING: CPU: 21 PID: 1843 at lib/plist.c:60 plist_check_prev_next_node+0x50/0x70
Modules linked in: rfkill(E) crct10dif_ce(E)...
CPU: 21 PID: 1843 Comm: stress-ng Kdump: ... 5.10.134+
Hardware name: Alibaba Cloud ECS, BIOS 0.0.0 02/06/2015
pstate: 60400005 (nZCv daif +PAN -UAO -TCO BTYPE=--)
pc : plist_check_prev_next_node+0x50/0x70
lr : plist_check_prev_next_node+0x50/0x70
sp : ffff0018009d3c30
x29: ffff0018009d3c40 x28: ffff800011b32a98
x27: 0000000000000000 x26: ffff001803908000
x25: ffff8000128ea088 x24: ffff800011b32a48
x23: 0000000000000028 x22: ffff001800875c00
x21: ffff800010f9e520 x20: ffff001800875c00
x19: ffff001800fdc6e0 x18: 0000000000000030
x17: 0000000000000000 x16: 0000000000000000
x15: 0736076307640766 x14: 0730073007380731
x13: 0736076307640766 x12: 0730073007380731
x11: 000000000004058d x10: 0000000085a85b76
x9 : ffff8000101436e4 x8 : ffff800011c8ce08
x7 : 0000000000000000 x6 : 0000000000000001
x5 : ffff0017df9ed338 x4 : 0000000000000001
x3 : ffff8017ce62a000 x2 : ffff0017df9ed340
x1 : 0000000000000000 x0 : 0000000000000000
Call trace:
 plist_check_prev_next_node+0x50/0x70
 plist_check_head+0x80/0xf0
 plist_add+0x28/0x140
 add_to_avail_list+0x9c/0xf0
 _enable_swap_info+0x78/0xb4
 __do_sys_swapon+0x918/0xa10
 __arm64_sys_swapon+0x20/0x30
 el0_svc_common+0x8c/0x220
 do_el0_svc+0x2c/0x90
 el0_svc+0x1c/0x30
 el0_sync_handler+0xa8/0xb0
 el0_sync+0x148/0x180
irq event stamp: 2082270
Now si->lock is taken before calling del_from_avail_list(), to make sure other threads see that the si has been deleted and SWP_WRITEOK cleared together, and will not reinsert it again.
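The shape of the fix, as a minimal pthread sketch with illustrative types (the kernel manipulates plists under a spinlock rather than booleans under a mutex, and si->lock is assumed initialised elsewhere): the delete and the SWP_WRITEOK clearing happen in one critical section, so a re-adding thread can never observe one without the other.

#include <pthread.h>
#include <stdbool.h>

struct swap_info {
	pthread_mutex_t lock;	/* plays the role of si->lock */
	bool writeok;		/* plays the role of SWP_WRITEOK */
	bool on_avail_list;
};

static void del_from_avail_list(struct swap_info *si)
{
	/* the patch asserts that si->lock is held here */
	si->on_avail_list = false;
}

void swapoff_path(struct swap_info *si)
{
	pthread_mutex_lock(&si->lock);
	del_from_avail_list(si);
	si->writeok = false;
	pthread_mutex_unlock(&si->lock);
}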
This problem exists in versions after stable 5.10.y.
Link: https://lkml.kernel.org/r/20230404154716.23058-1-rongwei.wang@linux.alibaba.... Fixes: a2468cc9bfdff ("swap: choose swap device according to numa node") Tested-by: Yongchen Yin wb-yyc939293@alibaba-inc.com Signed-off-by: Rongwei Wang rongwei.wang@linux.alibaba.com Cc: Bagas Sanjaya bagasdotme@gmail.com Cc: Matthew Wilcox (Oracle) willy@infradead.org Cc: Aaron Lu aaron.lu@intel.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/swapfile.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -679,6 +679,7 @@ static void __del_from_avail_list(struct { int nid;
+ assert_spin_locked(&p->lock); for_each_node(nid) plist_del(&p->avail_lists[nid], &swap_avail_heads[nid]); } @@ -2435,8 +2436,8 @@ SYSCALL_DEFINE1(swapoff, const char __us spin_unlock(&swap_lock); goto out_dput; } - del_from_avail_list(p); spin_lock(&p->lock); + del_from_avail_list(p); if (p->prio < 0) { struct swap_info_struct *si = p; int nid;
From: Peter Xu peterx@redhat.com
commit 60d5b473d61be61ac315e544fcd6a8234a79500e upstream.
This patch fixes an issue where a hugetlb uffd-wr-protected mapping can be writable even with the uffd-wp bit set. It only happens with hugetlb private mappings, when someone first wr-protects a missing pte (which installs a pte marker) and then writes to the same page without any prior access to it.
Userfaultfd-wp trap for hugetlb was implemented in hugetlb_fault() before reaching hugetlb_wp() to avoid taking more locks that userfault won't need. However there's one CoW optimization path that can trigger hugetlb_wp() inside hugetlb_no_page(), which will bypass the trap.
This patch skips hugetlb_wp() for CoW and retries the fault if uffd-wp bit is detected. The new path will only trigger in the CoW optimization path because generic hugetlb_fault() (e.g. when a present pte was wr-protected) will resolve the uffd-wp bit already. Also make sure anonymous UNSHARE won't be affected and can still be resolved, IOW only skip CoW not CoR.
This patch will be needed for v5.19+ hence copy stable.
[peterx@redhat.com: v2] Link: https://lkml.kernel.org/r/ZBzOqwF2wrHgBVZb@x1n [peterx@redhat.com: v3] Link: https://lkml.kernel.org/r/20230324142620.2344140-1-peterx@redhat.com Link: https://lkml.kernel.org/r/20230321191840.1897940-1-peterx@redhat.com Fixes: 166f3ecc0daf ("mm/hugetlb: hook page faults for uffd write protection") Signed-off-by: Peter Xu peterx@redhat.com Reported-by: Muhammad Usama Anjum usama.anjum@collabora.com Tested-by: Muhammad Usama Anjum usama.anjum@collabora.com Acked-by: David Hildenbrand david@redhat.com Reviewed-by: Mike Kravetz mike.kravetz@oracle.com Cc: Andrea Arcangeli aarcange@redhat.com Cc: Axel Rasmussen axelrasmussen@google.com Cc: Mike Rapoport rppt@linux.vnet.ibm.com Cc: Nadav Amit nadav.amit@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/hugetlb.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-)
--- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5476,7 +5476,7 @@ static vm_fault_t hugetlb_wp(struct mm_s struct page *pagecache_page, spinlock_t *ptl) { const bool unshare = flags & FAULT_FLAG_UNSHARE; - pte_t pte; + pte_t pte = huge_ptep_get(ptep); struct hstate *h = hstate_vma(vma); struct page *old_page, *new_page; int outside_reserve = 0; @@ -5485,6 +5485,17 @@ static vm_fault_t hugetlb_wp(struct mm_s struct mmu_notifier_range range;
/* + * Never handle CoW for uffd-wp protected pages. It should be only + * handled when the uffd-wp protection is removed. + * + * Note that only the CoW optimization path (in hugetlb_no_page()) + * can trigger this, because hugetlb_fault() will always resolve + * uffd-wp bit first. + */ + if (!unshare && huge_pte_uffd_wp(pte)) + return 0; + + /* * hugetlb does not support FOLL_FORCE-style write faults that keep the * PTE mapped R/O such as maybe_mkwrite() would do. */ @@ -5497,7 +5508,6 @@ static vm_fault_t hugetlb_wp(struct mm_s return 0; }
- pte = huge_ptep_get(ptep); old_page = pte_page(pte);
delayacct_wpcopy_start();
From: Peng Zhang zhangpeng.00@bytedance.com
commit ec07967d7523adb3670f9dfee0232e3bc868f3de upstream.
if (likely(offset > end))
	max = pivots[offset];
The above code should be changed to if (likely(offset < end)), which is correct. This affects the correctness of ma_data_end(). It seems the final result will not actually be wrong, but it is best to fix it. This patch does not make exactly the change shown above, because it also simplifies the code along the way.
Link: https://lkml.kernel.org/r/20230314124203.91572-1-zhangpeng.00@bytedance.com Link: https://lkml.kernel.org/r/20230314124203.91572-2-zhangpeng.00@bytedance.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Peng Zhang zhangpeng.00@bytedance.com Reviewed-by: Liam R. Howlett Liam.Howlett@oracle.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 15 +++++---------- 1 file changed, 5 insertions(+), 10 deletions(-)
--- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -3875,18 +3875,13 @@ static inline void *mtree_lookup_walk(st end = ma_data_end(node, type, pivots, max); if (unlikely(ma_dead_node(node))) goto dead_node; - - if (pivots[offset] >= mas->index) - goto next; - do { - offset++; - } while ((offset < end) && (pivots[offset] < mas->index)); - - if (likely(offset > end)) - max = pivots[offset]; + if (pivots[offset] >= mas->index) { + max = pivots[offset]; + break; + } + } while (++offset < end);
-next: slots = ma_slots(node, type); next = mt_slot(mas->tree, slots, offset); if (unlikely(ma_dead_node(node)))
From: Peng Zhang zhangpeng.00@bytedance.com
commit c45ea315a602d45569b08b93e9ab30f6a63a38aa upstream.
There is a concurrency bug that may cause the wrong value to be loaded when a CPU is modifying the maple tree.
CPU1:
mtree_insert_range()
  mas_insert()
    mas_store_root()
      ...
      mas_root_expand()
        ...
        rcu_assign_pointer(mas->tree->ma_root, mte_mk_root(mas->node));
        ma_set_meta(node, maple_leaf_64, 0, slot); <---IP
CPU2:
mtree_load()
  mtree_lookup_walk()
    ma_data_end();
When CPU1 is about to execute the instruction pointed to by IP, the ma_data_end() executed by CPU2 may return the wrong end position, which will cause the value loaded by mtree_load() to be wrong.
An example of triggering the bug:
Add mdelay(100) between rcu_assign_pointer() and ma_set_meta() in mas_root_expand().
static DEFINE_MTREE(tree);

int work(void *p)
{
	unsigned long val;

	for (int i = 0; i < 30; ++i) {
		val = (unsigned long)mtree_load(&tree, 8);
		mdelay(5);
		pr_info("%lu", val);
	}
	return 0;
}

mt_init_flags(&tree, MT_FLAGS_USE_RCU);
mtree_insert(&tree, 0, (void *)12345, GFP_KERNEL);
run_thread(work);
mtree_insert(&tree, 1, (void *)56789, GFP_KERNEL);
In RCU mode, mtree_load() should always return either the value from before or the value from after the modification, and in this example mtree_load(&tree, 8) may return 56789, which is not expected; it should always return NULL. Fix it by moving ma_set_meta() in front of rcu_assign_pointer().
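A generic userspace sketch of the publish-order rule behind this fix, with a C11 release store standing in for rcu_assign_pointer() (the node layout here is made up): every field a reader may inspect, including the metadata, must be written before the pointer is published.

#include <stdatomic.h>
#include <stdlib.h>

struct node {
	int data;
	int meta_end;	/* stand-in for the maple node metadata */
};

static _Atomic(struct node *) root;

void publish(int data, int meta_end)
{
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return;
	n->data = data;
	n->meta_end = meta_end;	/* set the metadata FIRST, as the fix does */
	/* release store plays the role of rcu_assign_pointer() */
	atomic_store_explicit(&root, n, memory_order_release);
}

struct node *lookup(void)
{
	/* acquire load plays the role of rcu_dereference() */
	return atomic_load_explicit(&root, memory_order_acquire);
}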
Link: https://lkml.kernel.org/r/20230314124203.91572-4-zhangpeng.00@bytedance.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Peng Zhang zhangpeng.00@bytedance.com Reviewed-by: Liam R. Howlett Liam.Howlett@oracle.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -3659,10 +3659,9 @@ static inline int mas_root_expand(struct slot++; mas->depth = 1; mas_set_height(mas); - + ma_set_meta(node, maple_leaf_64, 0, slot); /* swap the new root into the tree */ rcu_assign_pointer(mas->tree->ma_root, mte_mk_root(mas->node)); - ma_set_meta(node, maple_leaf_64, 0, slot); return slot; }
From: Roman Li roman.li@amd.com
commit 3f6752b4de41896c7f1609b1585db2080e8150d8 upstream.
[Why] In case of failure to resume the MST topology after suspend, an empty MST tree prevents further MST hub detection on the same connector. That causes an issue where an MST hub is not detected on hotplug after being unplugged during suspend.
[How] Stop topology manager on the connector after detecting DM_MST failure.
Reviewed-by: Wayne Lin Wayne.Lin@amd.com Acked-by: Jasdeep Dhillon jdhillon@amd.com Signed-off-by: Roman Li roman.li@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: "Limonciello, Mario" Mario.Limonciello@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -2183,6 +2183,8 @@ static int detect_mst_link_for_all_conne DRM_ERROR("DM_MST: Failed to start MST\n"); aconnector->dc_link->type = dc_connection_single; + ret = dm_helpers_dp_mst_stop_top_mgr(aconnector->dc_link->ctx, + aconnector->dc_link); break; } }
From: Alex Deucher alexander.deucher@amd.com
commit 2a7798ea7390fd78f191c9e9bf68f5581d3b4a02 upstream.
SDMA 5.x is part of the GFX block so it's controlled via GFXOFF. Skip suspend as it should be handled the same as GFX.
v2: drop SDMA 4.x. That requires special handling.
Reviewed-by: Mario Limonciello mario.limonciello@amd.com Acked-by: Rajneesh Bhardwaj rajneesh.bhardwaj@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: "Limonciello, Mario" Mario.Limonciello@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -3045,6 +3045,12 @@ static int amdgpu_device_ip_suspend_phas adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_MES)) continue;
+ /* SDMA 5.x+ is part of GFX power domain so it's covered by GFXOFF */ + if (adev->in_s0ix && + (adev->ip_versions[SDMA0_HWIP][0] >= IP_VERSION(5, 0, 0)) && + (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SDMA)) + continue; + /* XXX handle errors */ r = adev->ip_blocks[i].version->funcs->suspend(adev); /* XXX handle errors */
From: Tim Huang tim.huang@amd.com
commit e11c775030c5585370fda43035204bb5fa23b139 upstream.
The psp suspend & resume should be skipped to avoid destroying the TMR and reloading FWs again for IMU-enabled APU ASICs.
Signed-off-by: Tim Huang tim.huang@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Mario Limonciello mario.limonciello@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -3051,6 +3051,18 @@ static int amdgpu_device_ip_suspend_phas (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SDMA)) continue;
+ /* Once swPSP provides the IMU, RLC FW binaries to TOS during cold-boot. + * These are in TMR, hence are expected to be reused by PSP-TOS to reload + * from this location and RLC Autoload automatically also gets loaded + * from here based on PMFW -> PSP message during re-init sequence. + * Therefore, the psp suspend & resume should be skipped to avoid destroy + * the TMR and reload FWs again for IMU enabled APU ASICs. + */ + if (amdgpu_in_reset(adev) && + (adev->flags & AMD_IS_APU) && adev->gfx.imu.funcs && + adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_PSP) + continue; + /* XXX handle errors */ r = adev->ip_blocks[i].version->funcs->suspend(adev); /* XXX handle errors */
From: Robert Foss robert.foss@linaro.org
commit 2a9df204be0bbb896e087f00b9ee3fc559d5a608 upstream.
This fixes the PLL being unable to lock, and is derived from an equivalent downstream commit.
Available LT9611 documentation does not list this register, neither does LT9611UXC (which is a different chip).
This commit has been confirmed to fix HDMI output on DragonBoard 845c.
Suggested-by: Amit Pundir amit.pundir@linaro.org Reviewed-by: Amit Pundir amit.pundir@linaro.org Signed-off-by: Robert Foss robert.foss@linaro.org Link: https://patchwork.freedesktop.org/patch/msgid/20221213150304.4189760-1-rober... Signed-off-by: Amit Pundir amit.pundir@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/bridge/lontium-lt9611.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/gpu/drm/bridge/lontium-lt9611.c +++ b/drivers/gpu/drm/bridge/lontium-lt9611.c @@ -258,6 +258,7 @@ static int lt9611_pll_setup(struct lt961 { 0x8126, 0x55 }, { 0x8127, 0x66 }, { 0x8128, 0x88 }, + { 0x812a, 0x20 }, };
regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
From: Alistair Popple apopple@nvidia.com
commit 7c7b962938ddda6a9cd095de557ee5250706ea88 upstream.
Device exclusive page table entries are used to prevent CPU access to a page whilst it is being accessed from a device. Typically this is used to implement atomic operations when the underlying bus does not support atomic access. When a CPU thread encounters a device exclusive entry it locks the page and restores the original entry after calling mmu notifiers to signal drivers that exclusive access is no longer available.
The device exclusive entry holds a reference to the page making it safe to access the struct page whilst the entry is present. However the fault handling code does not hold the PTL when taking the page lock. This means if there are multiple threads faulting concurrently on the device exclusive entry one will remove the entry whilst others will wait on the page lock without holding a reference.
This can lead to threads locking or waiting on a folio with a zero refcount. Whilst mmap_lock prevents the pages getting freed via munmap() they may still be freed by a migration. This leads to warnings such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked when the refcount drops to zero.
Fix this by trying to take a reference on the folio before locking it. The code already checks the PTE under the PTL and aborts if the entry is no longer there. It is also possible the folio has been unmapped, freed and re-allocated allowing a reference to be taken on an unrelated folio. This case is also detected by the PTE check and the folio is unlocked without further changes.
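A rough userspace model of the folio_try_get() semantics the fix relies on (simplified: a real folio frees itself when the count reaches zero): the speculative get fails on a zero refcount instead of resurrecting a freed object, so the caller can simply bail out.

#include <stdatomic.h>
#include <stdbool.h>

struct folio {
	atomic_int refcount;
};

/* only succeeds if the folio still has a non-zero refcount */
static bool folio_try_get(struct folio *f)
{
	int old = atomic_load(&f->refcount);

	while (old != 0) {
		if (atomic_compare_exchange_weak(&f->refcount, &old, old + 1))
			return true;	/* we now hold a reference */
	}
	return false;	/* already freed: caller returns without touching it */
}

static void folio_put(struct folio *f)
{
	atomic_fetch_sub(&f->refcount, 1);
}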
Link: https://lkml.kernel.org/r/20230330012519.804116-1-apopple@nvidia.com Fixes: b756a3b5e7ea ("mm: device exclusive memory access") Signed-off-by: Alistair Popple apopple@nvidia.com Reviewed-by: Ralph Campbell rcampbell@nvidia.com Reviewed-by: John Hubbard jhubbard@nvidia.com Acked-by: David Hildenbrand david@redhat.com Cc: Matthew Wilcox (Oracle) willy@infradead.org Cc: Christoph Hellwig hch@infradead.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/memory.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-)
--- a/mm/memory.c +++ b/mm/memory.c @@ -3580,8 +3580,21 @@ static vm_fault_t remove_device_exclusiv struct vm_area_struct *vma = vmf->vma; struct mmu_notifier_range range;
- if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) + /* + * We need a reference to lock the folio because we don't hold + * the PTL so a racing thread can remove the device-exclusive + * entry and unmap it. If the folio is free the entry must + * have been removed already. If it happens to have already + * been re-allocated after being freed all we do is lock and + * unlock it. + */ + if (!folio_try_get(folio)) + return 0; + + if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) { + folio_put(folio); return VM_FAULT_RETRY; + } mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma, vma->vm_mm, vmf->address & PAGE_MASK, (vmf->address & PAGE_MASK) + PAGE_SIZE, NULL); @@ -3594,6 +3607,7 @@ static vm_fault_t remove_device_exclusiv
pte_unmap_unlock(vmf->pte, vmf->ptl); folio_unlock(folio); + folio_put(folio);
mmu_notifier_invalidate_range_end(&range); return 0;
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 541e06b772c1aaffb3b6a245ccface36d7107af2 upstream.
Preallocations are common in the VMA code to avoid allocating under certain locking conditions. The preallocations must also cover the worst-case scenario. Removing the GFP_ZERO flag from the kmem_cache_alloc() (and bulk variant) calls will reduce the amount of time spent zeroing memory that may not be used. Only zero out the necessary area to keep track of the allocations in the maple state. Zero the entire node prior to using it in the tree.
This required internal changes to node counting on allocation, so the test code is also updated.
This restores some micro-benchmark performance:
  up to +9% in mmtests mmap1 by my testing
  +10% to +20% in mmap, mmapaddr, mmapmany tests reported by Red Hat
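A toy sketch of the allocation strategy described above (illustrative structure, not the real maple node layout): pool allocation skips blanket zeroing and clears only the tracking field, while a node is fully zeroed only when it is about to be placed in the tree.

#include <stdlib.h>
#include <string.h>

struct node {
	void *slot[16];
	unsigned char node_count;	/* the only field that must start at 0 */
};

/* pool allocation: no blanket zeroing (the __GFP_ZERO removal) */
static struct node *pool_alloc(void)
{
	struct node *n = malloc(sizeof(*n));

	if (n)
		n->node_count = 0;
	return n;
}

/* zero the entire node only when it is handed out for use in the tree */
static struct node *pool_pop(struct node *n)
{
	memset(n, 0, sizeof(*n));
	return n;
}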
Link: https://bugzilla.redhat.com/show_bug.cgi?id=2149636 Link: https://lkml.kernel.org/r/20230105160427.2988454-1-Liam.Howlett@oracle.com Cc: stable@vger.kernel.org Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam Howlett Liam.Howlett@oracle.com Reported-by: Jirka Hladky jhladky@redhat.com Suggested-by: Matthew Wilcox (Oracle) willy@infradead.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 80 ++++++++++++++++++++------------------- tools/testing/radix-tree/maple.c | 18 ++++---- 2 files changed, 52 insertions(+), 46 deletions(-)
--- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -149,13 +149,12 @@ struct maple_subtree_state { /* Functions */ static inline struct maple_node *mt_alloc_one(gfp_t gfp) { - return kmem_cache_alloc(maple_node_cache, gfp | __GFP_ZERO); + return kmem_cache_alloc(maple_node_cache, gfp); }
static inline int mt_alloc_bulk(gfp_t gfp, size_t size, void **nodes) { - return kmem_cache_alloc_bulk(maple_node_cache, gfp | __GFP_ZERO, size, - nodes); + return kmem_cache_alloc_bulk(maple_node_cache, gfp, size, nodes); }
static inline void mt_free_bulk(size_t size, void __rcu **nodes) @@ -1128,9 +1127,10 @@ static inline struct maple_node *mas_pop { struct maple_alloc *ret, *node = mas->alloc; unsigned long total = mas_allocated(mas); + unsigned int req = mas_alloc_req(mas);
/* nothing or a request pending. */ - if (unlikely(!total)) + if (WARN_ON(!total)) return NULL;
if (total == 1) { @@ -1140,27 +1140,25 @@ static inline struct maple_node *mas_pop goto single_node; }
- if (!node->node_count) { + if (node->node_count == 1) { /* Single allocation in this node. */ mas->alloc = node->slot[0]; - node->slot[0] = NULL; mas->alloc->total = node->total - 1; ret = node; goto new_head; } - node->total--; - ret = node->slot[node->node_count]; - node->slot[node->node_count--] = NULL; + ret = node->slot[--node->node_count]; + node->slot[node->node_count] = NULL;
single_node: new_head: - ret->total = 0; - ret->node_count = 0; - if (ret->request_count) { - mas_set_alloc_req(mas, ret->request_count + 1); - ret->request_count = 0; + if (req) { + req++; + mas_set_alloc_req(mas, req); } + + memset(ret, 0, sizeof(*ret)); return (struct maple_node *)ret; }
@@ -1179,21 +1177,20 @@ static inline void mas_push_node(struct unsigned long count; unsigned int requested = mas_alloc_req(mas);
- memset(reuse, 0, sizeof(*reuse)); count = mas_allocated(mas);
- if (count && (head->node_count < MAPLE_ALLOC_SLOTS - 1)) { - if (head->slot[0]) - head->node_count++; - head->slot[head->node_count] = reuse; + reuse->request_count = 0; + reuse->node_count = 0; + if (count && (head->node_count < MAPLE_ALLOC_SLOTS)) { + head->slot[head->node_count++] = reuse; head->total++; goto done; }
reuse->total = 1; if ((head) && !((unsigned long)head & 0x1)) { - head->request_count = 0; reuse->slot[0] = head; + reuse->node_count = 1; reuse->total += head->total; }
@@ -1212,7 +1209,6 @@ static inline void mas_alloc_nodes(struc { struct maple_alloc *node; unsigned long allocated = mas_allocated(mas); - unsigned long success = allocated; unsigned int requested = mas_alloc_req(mas); unsigned int count; void **slots = NULL; @@ -1228,24 +1224,29 @@ static inline void mas_alloc_nodes(struc WARN_ON(!allocated); }
- if (!allocated || mas->alloc->node_count == MAPLE_ALLOC_SLOTS - 1) { + if (!allocated || mas->alloc->node_count == MAPLE_ALLOC_SLOTS) { node = (struct maple_alloc *)mt_alloc_one(gfp); if (!node) goto nomem_one;
- if (allocated) + if (allocated) { node->slot[0] = mas->alloc; + node->node_count = 1; + } else { + node->node_count = 0; + }
- success++; mas->alloc = node; + node->total = ++allocated; requested--; }
node = mas->alloc; + node->request_count = 0; while (requested) { max_req = MAPLE_ALLOC_SLOTS; - if (node->slot[0]) { - unsigned int offset = node->node_count + 1; + if (node->node_count) { + unsigned int offset = node->node_count;
slots = (void **)&node->slot[offset]; max_req -= offset; @@ -1259,15 +1260,13 @@ static inline void mas_alloc_nodes(struc goto nomem_bulk;
node->node_count += count; - /* zero indexed. */ - if (slots == (void **)&node->slot) - node->node_count--; - - success += count; + allocated += count; node = node->slot[0]; + node->node_count = 0; + node->request_count = 0; requested -= count; } - mas->alloc->total = success; + mas->alloc->total = allocated; return;
nomem_bulk: @@ -1276,7 +1275,7 @@ nomem_bulk: nomem_one: mas_set_alloc_req(mas, requested); if (mas->alloc && !(((unsigned long)mas->alloc & 0x1))) - mas->alloc->total = success; + mas->alloc->total = allocated; mas_set_err(mas, -ENOMEM); return;
@@ -5725,6 +5724,7 @@ int mas_preallocate(struct ma_state *mas void mas_destroy(struct ma_state *mas) { struct maple_alloc *node; + unsigned long total;
/* * When using mas_for_each() to insert an expected number of elements, @@ -5747,14 +5747,20 @@ void mas_destroy(struct ma_state *mas) } mas->mas_flags &= ~(MA_STATE_BULK|MA_STATE_PREALLOC);
- while (mas->alloc && !((unsigned long)mas->alloc & 0x1)) { + total = mas_allocated(mas); + while (total) { node = mas->alloc; mas->alloc = node->slot[0]; - if (node->node_count > 0) - mt_free_bulk(node->node_count, - (void __rcu **)&node->slot[1]); + if (node->node_count > 1) { + size_t count = node->node_count - 1; + + mt_free_bulk(count, (void __rcu **)&node->slot[1]); + total -= count; + } kmem_cache_free(maple_node_cache, node); + total--; } + mas->alloc = NULL; } EXPORT_SYMBOL_GPL(mas_destroy); --- a/tools/testing/radix-tree/maple.c +++ b/tools/testing/radix-tree/maple.c @@ -173,11 +173,11 @@ static noinline void check_new_node(stru
if (!MAPLE_32BIT) { if (i >= 35) - e = i - 35; + e = i - 34; else if (i >= 5) - e = i - 5; + e = i - 4; else if (i >= 2) - e = i - 2; + e = i - 1; } else { if (i >= 4) e = i - 4; @@ -305,17 +305,17 @@ static noinline void check_new_node(stru MT_BUG_ON(mt, mas.node != MA_ERROR(-ENOMEM)); MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 1); - MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS - 1); + MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS);
mn = mas_pop_node(&mas); /* get the next node. */ MT_BUG_ON(mt, mn == NULL); MT_BUG_ON(mt, not_empty(mn)); MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS); - MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS - 2); + MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS - 1);
mas_push_node(&mas, mn); MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 1); - MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS - 1); + MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS);
/* Check the limit of pop/push/pop */ mas_node_count(&mas, MAPLE_ALLOC_SLOTS + 2); /* Request */ @@ -323,14 +323,14 @@ static noinline void check_new_node(stru MT_BUG_ON(mt, mas.node != MA_ERROR(-ENOMEM)); MT_BUG_ON(mt, !mas_nomem(&mas, GFP_KERNEL)); MT_BUG_ON(mt, mas_alloc_req(&mas)); - MT_BUG_ON(mt, mas.alloc->node_count); + MT_BUG_ON(mt, mas.alloc->node_count != 1); MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 2); mn = mas_pop_node(&mas); MT_BUG_ON(mt, not_empty(mn)); MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 1); - MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS - 1); + MT_BUG_ON(mt, mas.alloc->node_count != MAPLE_ALLOC_SLOTS); mas_push_node(&mas, mn); - MT_BUG_ON(mt, mas.alloc->node_count); + MT_BUG_ON(mt, mas.alloc->node_count != 1); MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 2); mn = mas_pop_node(&mas); MT_BUG_ON(mt, not_empty(mn));
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 65be6f058b0eba98dc6c6f197ea9f62c9b6a519f upstream.
Ensure the node isn't dead after reading the node end.
Link: https://lkml.kernel.org/r/20230120162650.984577-3-Liam.Howlett@oracle.com Cc: Stable@vger.kernel.org Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -4655,13 +4655,13 @@ static inline void *mas_next_nentry(stru pivots = ma_pivots(node, type); slots = ma_slots(node, type); mas->index = mas_safe_min(mas, pivots, mas->offset); + count = ma_data_end(node, type, pivots, mas->max); if (ma_dead_node(node)) return NULL;
if (mas->index > max) return NULL;
- count = ma_data_end(node, type, pivots, mas->max); if (mas->offset > count) return NULL;
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 50e81c82ad947045c7ed26ddc9acb17276b653b6 upstream.
When iterating, a user may operate on the tree and cause the maple state to be altered and left in an unintuitive state. Detect this scenario, and correct it by setting the index to the limit and invalidating the state.
Link: https://lkml.kernel.org/r/20230120162650.984577-4-Liam.Howlett@oracle.com Cc: Stable@vger.kernel.org Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
--- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -4736,6 +4736,11 @@ static inline void *mas_next_entry(struc unsigned long last; enum maple_type mt;
+ if (mas->index > limit) { + mas->index = mas->last = limit; + mas_pause(mas); + return NULL; + } last = mas->last; retry: offset = mas->offset; @@ -4842,6 +4847,11 @@ static inline void *mas_prev_entry(struc { void *entry;
+ if (mas->index < min) { + mas->index = mas->last = min; + mas_pause(mas); + return NULL; + } retry: while (likely(!mas_is_none(mas))) { entry = mas_prev_nentry(mas, min, mas->index);
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 1202700c3f8cc5f7e4646c3cf05ee6f7c8bc6ccf upstream.
If an invalidated maple state is encountered during write, reset the maple state to MAS_START. This will result in a re-walk of the tree to the correct location for the write.
Link: https://lore.kernel.org/all/20230107020126.1627-1-sj@kernel.org/ Link: https://lkml.kernel.org/r/20230120162650.984577-6-Liam.Howlett@oracle.com Cc: Stable@vger.kernel.org Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com Reported-by: SeongJae Park sj@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -5599,6 +5599,9 @@ static inline void mte_destroy_walk(stru
static void mas_wr_store_setup(struct ma_wr_state *wr_mas) { + if (unlikely(mas_is_paused(wr_mas->mas))) + mas_reset(wr_mas->mas); + if (!mas_is_start(wr_mas->mas)) { if (mas_is_none(wr_mas->mas)) { mas_reset(wr_mas->mas);
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 17dc622c7b0f94e49bed030726df4db12ecaa6b5 upstream.
When mas_prev() does not find anything, set the state to MAS_NONE.
Handle the MAS_NONE in mas_find() like a MAS_START.
Link: https://lkml.kernel.org/r/20230120162650.984577-7-Liam.Howlett@oracle.com Cc: Stable@vger.kernel.org Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com Reported-by: syzbot+502859d610c661e56545@syzkaller.appspotmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -4849,7 +4849,7 @@ static inline void *mas_prev_entry(struc
if (mas->index < min) { mas->index = mas->last = min; - mas_pause(mas); + mas->node = MAS_NONE; return NULL; } retry: @@ -5911,6 +5911,7 @@ void *mas_prev(struct ma_state *mas, uns if (!mas->index) { /* Nothing comes before 0 */ mas->last = 0; + mas->node = MAS_NONE; return NULL; }
@@ -6001,6 +6002,9 @@ void *mas_find(struct ma_state *mas, uns mas->index = ++mas->last; }
+ if (unlikely(mas_is_none(mas))) + mas->node = MAS_START; + if (unlikely(mas_is_start(mas))) { /* First run or continue */ void *entry;
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 39d0bd86c499ecd6abae42a9b7112056c5560691 upstream.
ma_pivots() and ma_data_end() may be called with a dead node. Ensure that the node isn't dead before using the returned values.
This is necessary for RCU mode of the maple tree.
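A loose userspace analogue of the read-then-revalidate pattern the hunks below apply (the kernel version relies on RCU plus smp_rmb()-based dead-node checks; the fence and flag here are simplified): compute the derived value, then re-check that the node is still alive before trusting it.

#include <stdatomic.h>

#define NR_PIVOTS 8

struct node {
	atomic_bool dead;
	unsigned long pivots[NR_PIVOTS];
};

/* returns the data end, or -1 if the node died under us */
static int data_end_checked(struct node *n)
{
	int end = 0;

	while (end < NR_PIVOTS && n->pivots[end])
		end++;

	/* order the pivot reads before the liveness check, like smp_rmb() */
	atomic_thread_fence(memory_order_acquire);
	if (atomic_load_explicit(&n->dead, memory_order_relaxed))
		return -1;	/* caller restarts the walk */
	return end;
}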
Link: https://lkml.kernel.org/r/20230227173632.3292573-1-surenb@google.com Link: https://lkml.kernel.org/r/20230227173632.3292573-2-surenb@google.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Cc: Stable@vger.kernel.org Signed-off-by: Liam Howlett Liam.Howlett@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 52 +++++++++++++++++++++++++++++++++++++++++++--------- 1 file changed, 43 insertions(+), 9 deletions(-)
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -539,6 +539,7 @@ static inline bool ma_dead_node(const st
 	return (parent == node);
 }
+
 /*
  * mte_dead_node() - check if the @enode is dead.
  * @enode: The encoded maple node
@@ -620,6 +621,8 @@ static inline unsigned int mas_alloc_req
  * @node - the maple node
  * @type - the node type
  *
+ * In the event of a dead node, this array may be %NULL
+ *
  * Return: A pointer to the maple node pivots
  */
 static inline unsigned long *ma_pivots(struct maple_node *node,
@@ -1091,8 +1094,11 @@ static int mas_ascend(struct ma_state *m
 	a_type = mas_parent_enum(mas, p_enode);
 	a_node = mte_parent(p_enode);
 	a_slot = mte_parent_slot(p_enode);
-	pivots = ma_pivots(a_node, a_type);
 	a_enode = mt_mk_node(a_node, a_type);
+	pivots = ma_pivots(a_node, a_type);
+
+	if (unlikely(ma_dead_node(a_node)))
+		return 1;
 
 	if (!set_min && a_slot) {
 		set_min = true;
@@ -1398,6 +1404,9 @@ static inline unsigned char ma_data_end(
 {
 	unsigned char offset;
 
+	if (!pivots)
+		return 0;
+
 	if (type == maple_arange_64)
 		return ma_meta_end(node, type);
 
@@ -1433,6 +1442,9 @@ static inline unsigned char mas_data_end
 		return ma_meta_end(node, type);
 
 	pivots = ma_pivots(node, type);
+	if (unlikely(ma_dead_node(node)))
+		return 0;
+
 	offset = mt_pivots[type] - 1;
 	if (likely(!pivots[offset]))
 		return ma_meta_end(node, type);
@@ -4498,6 +4510,9 @@ static inline int mas_prev_node(struct m
 	node = mas_mn(mas);
 	slots = ma_slots(node, mt);
 	pivots = ma_pivots(node, mt);
+	if (unlikely(ma_dead_node(node)))
+		return 1;
+
 	mas->max = pivots[offset];
 	if (offset)
 		mas->min = pivots[offset - 1] + 1;
@@ -4519,6 +4534,9 @@
 		slots = ma_slots(node, mt);
 		pivots = ma_pivots(node, mt);
 		offset = ma_data_end(node, mt, pivots, mas->max);
+		if (unlikely(ma_dead_node(node)))
+			return 1;
+
 		if (offset)
 			mas->min = pivots[offset - 1] + 1;
 
@@ -4567,6 +4585,7 @@ static inline int mas_next_node(struct m
 	struct maple_enode *enode;
 	int level = 0;
 	unsigned char offset;
+	unsigned char node_end;
 	enum maple_type mt;
 	void __rcu **slots;
 
@@ -4590,7 +4609,11 @@ static inline int mas_next_node(struct m
 		node = mas_mn(mas);
 		mt = mte_node_type(mas->node);
 		pivots = ma_pivots(node, mt);
-	} while (unlikely(offset == ma_data_end(node, mt, pivots, mas->max)));
+		node_end = ma_data_end(node, mt, pivots, mas->max);
+		if (unlikely(ma_dead_node(node)))
+			return 1;
+
+	} while (unlikely(offset == node_end));
 
 	slots = ma_slots(node, mt);
 	pivot = mas_safe_pivot(mas, pivots, ++offset, mt);
@@ -4606,6 +4629,9 @@ static inline int mas_next_node(struct m
 		mt = mte_node_type(mas->node);
 		slots = ma_slots(node, mt);
 		pivots = ma_pivots(node, mt);
+		if (unlikely(ma_dead_node(node)))
+			return 1;
+
 		offset = 0;
 		pivot = pivots[0];
 	}
@@ -4652,11 +4678,14 @@ static inline void *mas_next_nentry(stru
 		return NULL;
 	}
 
-	pivots = ma_pivots(node, type);
 	slots = ma_slots(node, type);
-	mas->index = mas_safe_min(mas, pivots, mas->offset);
+	pivots = ma_pivots(node, type);
 	count = ma_data_end(node, type, pivots, mas->max);
-	if (ma_dead_node(node))
+	if (unlikely(ma_dead_node(node)))
+		return NULL;
+
+	mas->index = mas_safe_min(mas, pivots, mas->offset);
+	if (unlikely(ma_dead_node(node)))
 		return NULL;
 
 	if (mas->index > max)
@@ -4814,6 +4843,11 @@ retry:
 	slots = ma_slots(mn, mt);
 	pivots = ma_pivots(mn, mt);
+	if (unlikely(ma_dead_node(mn))) {
+		mas_rewalk(mas, index);
+		goto retry;
+	}
+
 	if (offset == mt_pivots[mt])
 		pivot = mas->max;
 	else
@@ -6616,11 +6650,11 @@ static inline void *mas_first_entry(stru
 	while (likely(!ma_is_leaf(mt))) {
 		MT_BUG_ON(mas->tree, mte_dead_node(mas->node));
 		slots = ma_slots(mn, mt);
-		pivots = ma_pivots(mn, mt);
-		max = pivots[0];
 		entry = mas_slot(mas, slots, 0);
+		pivots = ma_pivots(mn, mt);
 		if (unlikely(ma_dead_node(mn)))
 			return NULL;
+		max = pivots[0];
 		mas->node = entry;
 		mn = mas_mn(mas);
 		mt = mte_node_type(mas->node);
@@ -6640,13 +6674,13 @@ static inline void *mas_first_entry(stru
 	if (likely(entry))
 		return entry;
 
-	pivots = ma_pivots(mn, mt);
-	mas->index = pivots[0] + 1;
 	mas->offset = 1;
 	entry = mas_slot(mas, slots, 1);
+	pivots = ma_pivots(mn, mt);
 	if (unlikely(ma_dead_node(mn)))
 		return NULL;
 
+	mas->index = pivots[0] + 1;
 	if (mas->index > limit)
 		goto none;
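To make the pattern concrete: read the data first, then re-check that the node is still alive before trusting what was read. A minimal sketch of that validate-after-read idiom follows; struct demo_node, demo_dead() and demo_data_end() are invented stand-ins, where the real code uses ma_pivots(), ma_data_end() and ma_dead_node().

#include <linux/compiler.h>
#include <linux/errno.h>

/* Invented node layout; like a maple node, it is dead once its
 * parent pointer points back at itself. */
struct demo_node {
	struct demo_node *parent;
	unsigned long pivot[16];
};

static bool demo_dead(const struct demo_node *n)
{
	return READ_ONCE(n->parent) == n;
}

/* Read first, then validate: the pivots may belong to a node that was
 * freed and reused while we were walking under RCU. */
static int demo_data_end(const struct demo_node *n, unsigned long max)
{
	int end = 0;

	while (end < 15 && n->pivot[end] && n->pivot[end] < max)
		end++;

	if (unlikely(demo_dead(n)))
		return -EAGAIN;	/* caller restarts the walk */

	return end;
}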
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 46b345848261009477552d654cb2f65000c30e4d upstream.
If mas->node is MAS_START, there are three cases, each of which assigns different values to mas->node and mas->offset. So there is no need to set them to defaults before updating them.
Assign them directly instead, which is easier to understand and more readable.
Link: https://lkml.kernel.org/r/20221221060058.609003-7-vernon2gm@gmail.com
Cc: Stable@vger.kernel.org
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Vernon Yang vernon2gm@gmail.com
Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 lib/maple_tree.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1339,7 +1339,7 @@ static void mas_node_count(struct ma_sta
  * mas_start() - Sets up maple state for operations.
  * @mas: The maple state.
  *
- * If mas->node == MAS_START, then set the min, max, depth, and offset to
+ * If mas->node == MAS_START, then set the min, max and depth to
  * defaults.
  *
  * Return:
@@ -1353,22 +1353,22 @@ static inline struct maple_enode *mas_st
 	if (likely(mas_is_start(mas))) {
 		struct maple_enode *root;
 
-		mas->node = MAS_NONE;
 		mas->min = 0;
 		mas->max = ULONG_MAX;
 		mas->depth = 0;
-		mas->offset = 0;
 
 		root = mas_root(mas);
 		/* Tree with nodes */
 		if (likely(xa_is_node(root))) {
 			mas->depth = 1;
 			mas->node = mte_safe_root(root);
+			mas->offset = 0;
 			return NULL;
 		}
 
 		/* empty tree */
 		if (unlikely(!root)) {
+			mas->node = MAS_NONE;
 			mas->offset = MAPLE_NODE_SLOTS;
 			return NULL;
 		}
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit a7b92d59c885018cb7bb88539892278e4fd64b29 upstream.
When initially starting a search, the root node may already be in the process of being replaced in RCU mode. Detect and restart the walk if this is the case. This is necessary for RCU mode of the maple tree.
Link: https://lkml.kernel.org/r/20230227173632.3292573-3-surenb@google.com
Cc: Stable@vger.kernel.org
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Liam Howlett Liam.Howlett@oracle.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 lib/maple_tree.c | 4 ++++
 1 file changed, 4 insertions(+)
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1357,12 +1357,16 @@ static inline struct maple_enode *mas_st
 		mas->max = ULONG_MAX;
 		mas->depth = 0;
 
+retry:
 		root = mas_root(mas);
 		/* Tree with nodes */
 		if (likely(xa_is_node(root))) {
 			mas->depth = 1;
 			mas->node = mte_safe_root(root);
 			mas->offset = 0;
+			if (mte_dead_node(mas->node))
+				goto retry;
+
 			return NULL;
 		}
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 2e5b4921f8efc9e845f4f04741797d16f36847eb upstream.
The walk to destroy the nodes did not always set the node type, so a destroy method could mistakenly treat values as nodes. Avoid this by setting the correct node types. This is necessary for the RCU mode of the maple tree.
Link: https://lkml.kernel.org/r/20230227173632.3292573-4-surenb@google.com
Cc: Stable@vger.kernel.org
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Liam Howlett Liam.Howlett@oracle.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 lib/maple_tree.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 62 insertions(+), 11 deletions(-)
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -898,6 +898,44 @@ static inline void ma_set_meta(struct ma
 }
 
 /*
+ * mas_clear_meta() - clear the metadata information of a node, if it exists
+ * @mas: The maple state
+ * @mn: The maple node
+ * @mt: The maple node type
+ * @offset: The offset of the highest sub-gap in this node.
+ * @end: The end of the data in this node.
+ */
+static inline void mas_clear_meta(struct ma_state *mas, struct maple_node *mn,
+				  enum maple_type mt)
+{
+	struct maple_metadata *meta;
+	unsigned long *pivots;
+	void __rcu **slots;
+	void *next;
+
+	switch (mt) {
+	case maple_range_64:
+		pivots = mn->mr64.pivot;
+		if (unlikely(pivots[MAPLE_RANGE64_SLOTS - 2])) {
+			slots = mn->mr64.slot;
+			next = mas_slot_locked(mas, slots,
+					       MAPLE_RANGE64_SLOTS - 1);
+			if (unlikely((mte_to_node(next) && mte_node_type(next))))
+				return; /* The last slot is a node, no metadata */
+		}
+		fallthrough;
+	case maple_arange_64:
+		meta = ma_meta(mn, mt);
+		break;
+	default:
+		return;
+	}
+
+	meta->gap = 0;
+	meta->end = 0;
+}
+
+/*
  * ma_meta_end() - Get the data end of a node from the metadata
  * @mn: The maple node
  * @mt: The maple node type
@@ -5438,20 +5476,22 @@ no_gap:
  * mas_dead_leaves() - Mark all leaves of a node as dead.
  * @mas: The maple state
  * @slots: Pointer to the slot array
+ * @type: The maple node type
 *
 * Must hold the write lock.
 *
 * Return: The number of leaves marked as dead.
 */
 static inline
-unsigned char mas_dead_leaves(struct ma_state *mas, void __rcu **slots)
+unsigned char mas_dead_leaves(struct ma_state *mas, void __rcu **slots,
+			      enum maple_type mt)
 {
 	struct maple_node *node;
 	enum maple_type type;
 	void *entry;
 	int offset;
 
-	for (offset = 0; offset < mt_slot_count(mas->node); offset++) {
+	for (offset = 0; offset < mt_slots[mt]; offset++) {
 		entry = mas_slot_locked(mas, slots, offset);
 		type = mte_node_type(entry);
 		node = mte_to_node(entry);
@@ -5470,14 +5510,13 @@ unsigned char mas_dead_leaves(struct ma_
 
 static void __rcu **mas_dead_walk(struct ma_state *mas, unsigned char offset)
 {
-	struct maple_node *node, *next;
+	struct maple_node *next;
 	void __rcu **slots = NULL;
 
 	next = mas_mn(mas);
 	do {
-		mas->node = ma_enode_ptr(next);
-		node = mas_mn(mas);
-		slots = ma_slots(node, node->type);
+		mas->node = mt_mk_node(next, next->type);
+		slots = ma_slots(next, next->type);
 		next = mas_slot_locked(mas, slots, offset);
 		offset = 0;
 	} while (!ma_is_leaf(next->type));
@@ -5541,11 +5580,14 @@ static inline void __rcu **mas_destroy_d
 		node = mas_mn(mas);
 		slots = ma_slots(node, mte_node_type(mas->node));
 		next = mas_slot_locked(mas, slots, 0);
-		if ((mte_dead_node(next)))
+		if ((mte_dead_node(next))) {
+			mte_to_node(next)->type = mte_node_type(next);
 			next = mas_slot_locked(mas, slots, 1);
+		}
 
 		mte_set_node_dead(mas->node);
 		node->type = mte_node_type(mas->node);
+		mas_clear_meta(mas, node, node->type);
 		node->piv_parent = prev;
 		node->parent_slot = offset;
 		offset = 0;
@@ -5565,13 +5607,18 @@ static void mt_destroy_walk(struct maple
 
 	MA_STATE(mas, &mt, 0, 0);
 
-	if (mte_is_leaf(enode))
+	mas.node = enode;
+	if (mte_is_leaf(enode)) {
+		node->type = mte_node_type(enode);
 		goto free_leaf;
+	}
 
+	ma_flags &= ~MT_FLAGS_LOCK_MASK;
 	mt_init_flags(&mt, ma_flags);
 	mas_lock(&mas);
 
-	mas.node = start = enode;
+	mte_to_node(enode)->ma_flags = ma_flags;
+	start = enode;
 	slots = mas_destroy_descend(&mas, start, 0);
 	node = mas_mn(&mas);
 	do {
@@ -5579,7 +5626,8 @@ static void mt_destroy_walk(struct maple
 		unsigned char offset;
 		struct maple_enode *parent, *tmp;
 
-		node->slot_len = mas_dead_leaves(&mas, slots);
+		node->type = mte_node_type(mas.node);
+		node->slot_len = mas_dead_leaves(&mas, slots, node->type);
 		if (free)
 			mt_free_bulk(node->slot_len, slots);
 		offset = node->parent_slot + 1;
@@ -5603,7 +5651,8 @@ next:
 	} while (start != mas.node);
 
 	node = mas_mn(&mas);
-	node->slot_len = mas_dead_leaves(&mas, slots);
+	node->type = mte_node_type(mas.node);
+	node->slot_len = mas_dead_leaves(&mas, slots, node->type);
 	if (free)
 		mt_free_bulk(node->slot_len, slots);
 
@@ -5613,6 +5662,8 @@ start_slots_free:
 free_leaf:
 	if (free)
 		mt_free_rcu(&node->rcu);
+	else
+		mas_clear_meta(&mas, node, node->type);
 }
 
 /*
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 8372f4d83f96f35915106093cde4565836587123 upstream.
The call to mte_set_node_dead() just before the smp_wmb() already issues an smp_wmb() itself, so the explicit barrier is redundant. Removing it is an optimization for the RCU mode of the maple tree; the helper is sketched after the diff for reference.
Link: https://lkml.kernel.org/r/20230227173632.3292573-5-surenb@google.com
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Cc: stable@vger.kernel.org
Signed-off-by: Liam Howlett Liam.Howlett@oracle.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 lib/maple_tree.c | 1 -
 1 file changed, 1 deletion(-)
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5500,7 +5500,6 @@ unsigned char mas_dead_leaves(struct ma_
 			break;
 
 		mte_set_node_dead(entry);
-		smp_wmb(); /* Needed for RCU */
 		node->type = type;
 		rcu_assign_pointer(slots[offset], node);
 	}
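For context, the dead-node marking helper already contains the barrier. From memory of lib/maple_tree.c, it reads roughly as below; treat this as a sketch, not the authoritative source.

static inline void mte_set_node_dead(struct maple_enode *mn)
{
	/* Point the parent at the node itself: the dead-node marker. */
	mte_to_node(mn)->parent = ma_parent_ptr(mte_to_node(mn));
	smp_wmb(); /* Needed for RCU */
}

Because the helper issues the barrier itself, the explicit smp_wmb() right after the call bought nothing.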
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 0a2b18d948838e16912b3b627b504ab062b7d02a upstream.
Add an smp_rmb() before reading the parent pointer to ensure that anything read from the node prior to the parent pointer hasn't been reordered ahead of this check.
This is necessary for RCU mode; a schematic of the barrier pairing follows the diff.
Link: https://lkml.kernel.org/r/20230227173632.3292573-7-surenb@google.com
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Cc: stable@vger.kernel.org
Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 lib/maple_tree.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -534,9 +534,11 @@ static inline struct maple_node *mte_par
  */
 static inline bool ma_dead_node(const struct maple_node *node)
 {
-	struct maple_node *parent = (void *)((unsigned long)
-					     node->parent & ~MAPLE_NODE_MASK);
+	struct maple_node *parent;
 
+	/* Do not reorder reads from the node prior to the parent check */
+	smp_rmb();
+	parent = (void *)((unsigned long) node->parent & ~MAPLE_NODE_MASK);
 	return (parent == node);
 }
 
@@ -551,6 +553,8 @@ static inline bool mte_dead_node(const s
 	struct maple_node *parent, *node;
 
 	node = mte_to_node(enode);
+	/* Do not reorder reads from the node prior to the parent check */
+	smp_rmb();
 	parent = mte_parent(enode);
 	return (parent == node);
 }
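Schematically, the new smp_rmb() pairs with the smp_wmb() issued when a node is marked dead. A hedged sketch with invented names (pub_node, pub_mark_dead(), pub_node_dead()) of how the pair lines up:

#include <linux/compiler.h>
#include <asm/barrier.h>

/* Invented stand-in for a maple node; dead means parent == self. */
struct pub_node {
	struct pub_node *parent;
	unsigned long data;
};

/* Writer (teardown): publish the dead mark before the node is reused. */
static void pub_mark_dead(struct pub_node *n)
{
	WRITE_ONCE(n->parent, n);
	smp_wmb();	/* pairs with the smp_rmb() in pub_node_dead() */
}

/* Reader (tree walker): reads of n->data issued before this check must
 * not drift past the read of n->parent, or stale data could be used
 * from a node that the check would have reported as dead. */
static bool pub_node_dead(const struct pub_node *n)
{
	smp_rmb();	/* pairs with the smp_wmb() in pub_mark_dead() */
	return READ_ONCE(n->parent) == n;
}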
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 790e1fa86b340c2bd4a327e01c161f7a1ad885f6 upstream.
Dereferencing RCU objects within the RCU callback without the RCU check has caused lockdep to complain. Fix the RCU dereferencing by using the RCU callback lock to ensure the operation is safe.
Also stop creating a new lock to use for dereferencing during destruction of the tree or subtree. Instead, pass through a pointer to the tree that holds the lock used for the RCU dereference checks. It also does not make sense to use the maple state in the freeing scenario, as that tree walk is a special case in which the tree no longer has the normal encodings and parent pointers. A sketch of the rcu_callback_map pattern follows the diff.
Link: https://lkml.kernel.org/r/20230227173632.3292573-8-surenb@google.com
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Cc: stable@vger.kernel.org
Reported-by: Suren Baghdasaryan surenb@google.com
Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 lib/maple_tree.c | 188 ++++++++++++++++++++++++++++---------------------------
 1 file changed, 96 insertions(+), 92 deletions(-)
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -819,6 +819,11 @@ static inline void *mt_slot(const struct
 	return rcu_dereference_check(slots[offset], mt_locked(mt));
 }
 
+static inline void *mt_slot_locked(struct maple_tree *mt, void __rcu **slots,
+				   unsigned char offset)
+{
+	return rcu_dereference_protected(slots[offset], mt_locked(mt));
+}
 /*
  * mas_slot_locked() - Get the slot value when holding the maple tree lock.
  * @mas: The maple state
@@ -830,7 +835,7 @@ static inline void *mt_slot(const struct
 static inline void *mas_slot_locked(struct ma_state *mas, void __rcu **slots,
 				    unsigned char offset)
 {
-	return rcu_dereference_protected(slots[offset], mt_locked(mas->tree));
+	return mt_slot_locked(mas->tree, slots, offset);
 }
 
 /*
@@ -902,34 +907,35 @@ static inline void ma_set_meta(struct ma
 }
 
 /*
- * mas_clear_meta() - clear the metadata information of a node, if it exists
- * @mas: The maple state
+ * mt_clear_meta() - clear the metadata information of a node, if it exists
+ * @mt: The maple tree
  * @mn: The maple node
- * @mt: The maple node type
+ * @type: The maple node type
  * @offset: The offset of the highest sub-gap in this node.
  * @end: The end of the data in this node.
  */
-static inline void mas_clear_meta(struct ma_state *mas, struct maple_node *mn,
-				  enum maple_type mt)
+static inline void mt_clear_meta(struct maple_tree *mt, struct maple_node *mn,
+				 enum maple_type type)
 {
 	struct maple_metadata *meta;
 	unsigned long *pivots;
 	void __rcu **slots;
 	void *next;
 
-	switch (mt) {
+	switch (type) {
 	case maple_range_64:
 		pivots = mn->mr64.pivot;
 		if (unlikely(pivots[MAPLE_RANGE64_SLOTS - 2])) {
 			slots = mn->mr64.slot;
-			next = mas_slot_locked(mas, slots,
-					       MAPLE_RANGE64_SLOTS - 1);
-			if (unlikely((mte_to_node(next) && mte_node_type(next))))
-				return; /* The last slot is a node, no metadata */
+			next = mt_slot_locked(mt, slots,
+					      MAPLE_RANGE64_SLOTS - 1);
+			if (unlikely((mte_to_node(next) &&
+				      mte_node_type(next))))
+				return; /* no metadata, could be node */
 		}
 		fallthrough;
 	case maple_arange_64:
-		meta = ma_meta(mn, mt);
+		meta = ma_meta(mn, type);
 		break;
 	default:
 		return;
@@ -5477,7 +5483,7 @@ no_gap:
 }
 
 /*
- * mas_dead_leaves() - Mark all leaves of a node as dead.
+ * mte_dead_leaves() - Mark all leaves of a node as dead.
  * @mas: The maple state
  * @slots: Pointer to the slot array
  * @type: The maple node type
@@ -5487,16 +5493,16 @@ no_gap:
  * Return: The number of leaves marked as dead.
  */
 static inline
-unsigned char mas_dead_leaves(struct ma_state *mas, void __rcu **slots,
-			      enum maple_type mt)
+unsigned char mte_dead_leaves(struct maple_enode *enode, struct maple_tree *mt,
+			      void __rcu **slots)
 {
 	struct maple_node *node;
 	enum maple_type type;
 	void *entry;
 	int offset;
 
-	for (offset = 0; offset < mt_slots[mt]; offset++) {
-		entry = mas_slot_locked(mas, slots, offset);
+	for (offset = 0; offset < mt_slot_count(enode); offset++) {
+		entry = mt_slot(mt, slots, offset);
 		type = mte_node_type(entry);
 		node = mte_to_node(entry);
 		/* Use both node and type to catch LE & BE metadata */
@@ -5511,162 +5517,160 @@ unsigned char mas_dead_leaves(struct ma_
 	return offset;
 }
 
-static void __rcu **mas_dead_walk(struct ma_state *mas, unsigned char offset)
+/**
+ * mte_dead_walk() - Walk down a dead tree to just before the leaves
+ * @enode: The maple encoded node
+ * @offset: The starting offset
+ *
+ * Note: This can only be used from the RCU callback context.
+ */
+static void __rcu **mte_dead_walk(struct maple_enode **enode, unsigned char offset)
 {
-	struct maple_node *next;
+	struct maple_node *node, *next;
 	void __rcu **slots = NULL;
 
-	next = mas_mn(mas);
+	next = mte_to_node(*enode);
 	do {
-		mas->node = mt_mk_node(next, next->type);
-		slots = ma_slots(next, next->type);
-		next = mas_slot_locked(mas, slots, offset);
+		*enode = ma_enode_ptr(next);
+		node = mte_to_node(*enode);
+		slots = ma_slots(node, node->type);
+		next = rcu_dereference_protected(slots[offset],
+					lock_is_held(&rcu_callback_map));
 		offset = 0;
 	} while (!ma_is_leaf(next->type));
 
 	return slots;
 }
 
+/**
+ * mt_free_walk() - Walk & free a tree in the RCU callback context
+ * @head: The RCU head that's within the node.
+ *
+ * Note: This can only be used from the RCU callback context.
+ */
 static void mt_free_walk(struct rcu_head *head)
 {
 	void __rcu **slots;
 	struct maple_node *node, *start;
-	struct maple_tree mt;
+	struct maple_enode *enode;
 	unsigned char offset;
 	enum maple_type type;
-	MA_STATE(mas, &mt, 0, 0);
 
 	node = container_of(head, struct maple_node, rcu);
 
 	if (ma_is_leaf(node->type))
 		goto free_leaf;
 
-	mt_init_flags(&mt, node->ma_flags);
-	mas_lock(&mas);
 	start = node;
-	mas.node = mt_mk_node(node, node->type);
-	slots = mas_dead_walk(&mas, 0);
-	node = mas_mn(&mas);
+	enode = mt_mk_node(node, node->type);
+	slots = mte_dead_walk(&enode, 0);
+	node = mte_to_node(enode);
 	do {
 		mt_free_bulk(node->slot_len, slots);
 		offset = node->parent_slot + 1;
-		mas.node = node->piv_parent;
-		if (mas_mn(&mas) == node)
-			goto start_slots_free;
-
-		type = mte_node_type(mas.node);
-		slots = ma_slots(mte_to_node(mas.node), type);
-		if ((offset < mt_slots[type]) && (slots[offset]))
-			slots = mas_dead_walk(&mas, offset);
-
-		node = mas_mn(&mas);
+		enode = node->piv_parent;
+		if (mte_to_node(enode) == node)
+			goto free_leaf;
+
+		type = mte_node_type(enode);
+		slots = ma_slots(mte_to_node(enode), type);
+		if ((offset < mt_slots[type]) &&
+		    rcu_dereference_protected(slots[offset],
+					      lock_is_held(&rcu_callback_map)))
+			slots = mte_dead_walk(&enode, offset);
+		node = mte_to_node(enode);
 	} while ((node != start) || (node->slot_len < offset));
 
 	slots = ma_slots(node, node->type);
 	mt_free_bulk(node->slot_len, slots);
 
-start_slots_free:
-	mas_unlock(&mas);
 free_leaf:
 	mt_free_rcu(&node->rcu);
 }
 
-static inline void __rcu **mas_destroy_descend(struct ma_state *mas,
-		struct maple_enode *prev, unsigned char offset)
+static inline void __rcu **mte_destroy_descend(struct maple_enode **enode,
+	struct maple_tree *mt, struct maple_enode *prev, unsigned char offset)
 {
 	struct maple_node *node;
-	struct maple_enode *next = mas->node;
+	struct maple_enode *next = *enode;
 	void __rcu **slots = NULL;
+	enum maple_type type;
+	unsigned char next_offset = 0;
 
 	do {
-		mas->node = next;
-		node = mas_mn(mas);
-		slots = ma_slots(node, mte_node_type(mas->node));
-		next = mas_slot_locked(mas, slots, 0);
-		if ((mte_dead_node(next))) {
-			mte_to_node(next)->type = mte_node_type(next);
-			next = mas_slot_locked(mas, slots, 1);
-		}
+		*enode = next;
+		node = mte_to_node(*enode);
+		type = mte_node_type(*enode);
+		slots = ma_slots(node, type);
+		next = mt_slot_locked(mt, slots, next_offset);
+		if ((mte_dead_node(next)))
+			next = mt_slot_locked(mt, slots, ++next_offset);
 
-		mte_set_node_dead(mas->node);
-		node->type = mte_node_type(mas->node);
-		mas_clear_meta(mas, node, node->type);
+		mte_set_node_dead(*enode);
+		node->type = type;
 		node->piv_parent = prev;
 		node->parent_slot = offset;
-		offset = 0;
-		prev = mas->node;
+		offset = next_offset;
+		next_offset = 0;
+		prev = *enode;
 	} while (!mte_is_leaf(next));
 
 	return slots;
 }
 
-static void mt_destroy_walk(struct maple_enode *enode, unsigned char ma_flags,
+static void mt_destroy_walk(struct maple_enode *enode, struct maple_tree *mt,
 		bool free)
 {
 	void __rcu **slots;
 	struct maple_node *node = mte_to_node(enode);
 	struct maple_enode *start;
-	struct maple_tree mt;
 
-	MA_STATE(mas, &mt, 0, 0);
-
-	mas.node = enode;
 	if (mte_is_leaf(enode)) {
 		node->type = mte_node_type(enode);
 		goto free_leaf;
 	}
 
-	ma_flags &= ~MT_FLAGS_LOCK_MASK;
-	mt_init_flags(&mt, ma_flags);
-	mas_lock(&mas);
-
-	mte_to_node(enode)->ma_flags = ma_flags;
 	start = enode;
-	slots = mas_destroy_descend(&mas, start, 0);
-	node = mas_mn(&mas);
+	slots = mte_destroy_descend(&enode, mt, start, 0);
+	node = mte_to_node(enode); // Updated in the above call.
 	do {
 		enum maple_type type;
 		unsigned char offset;
 		struct maple_enode *parent, *tmp;
 
-		node->type = mte_node_type(mas.node);
-		node->slot_len = mas_dead_leaves(&mas, slots, node->type);
+		node->slot_len = mte_dead_leaves(enode, mt, slots);
 		if (free)
 			mt_free_bulk(node->slot_len, slots);
 		offset = node->parent_slot + 1;
-		mas.node = node->piv_parent;
-		if (mas_mn(&mas) == node)
-			goto start_slots_free;
+		enode = node->piv_parent;
+		if (mte_to_node(enode) == node)
+			goto free_leaf;
 
-		type = mte_node_type(mas.node);
-		slots = ma_slots(mte_to_node(mas.node), type);
+		type = mte_node_type(enode);
+		slots = ma_slots(mte_to_node(enode), type);
 		if (offset >= mt_slots[type])
 			goto next;
 
-		tmp = mas_slot_locked(&mas, slots, offset);
+		tmp = mt_slot_locked(mt, slots, offset);
 		if (mte_node_type(tmp) && mte_to_node(tmp)) {
-			parent = mas.node;
-			mas.node = tmp;
-			slots = mas_destroy_descend(&mas, parent, offset);
+			parent = enode;
+			enode = tmp;
+			slots = mte_destroy_descend(&enode, mt, parent, offset);
 		}
 next:
-		node = mas_mn(&mas);
-	} while (start != mas.node);
+		node = mte_to_node(enode);
+	} while (start != enode);
 
-	node = mas_mn(&mas);
-	node->type = mte_node_type(mas.node);
-	node->slot_len = mas_dead_leaves(&mas, slots, node->type);
+	node = mte_to_node(enode);
+	node->slot_len = mte_dead_leaves(enode, mt, slots);
 	if (free)
 		mt_free_bulk(node->slot_len, slots);
 
-start_slots_free:
-	mas_unlock(&mas);
-
 free_leaf:
 	if (free)
 		mt_free_rcu(&node->rcu);
 	else
-		mas_clear_meta(&mas, node, node->type);
+		mt_clear_meta(mt, node, node->type);
 }
 
 /*
@@ -5682,10 +5686,10 @@ static inline void mte_destroy_walk(stru
 	struct maple_node *node = mte_to_node(enode);
 
 	if (mt_in_rcu(mt)) {
-		mt_destroy_walk(enode, mt->ma_flags, false);
+		mt_destroy_walk(enode, mt, false);
 		call_rcu(&node->rcu, mt_free_walk);
 	} else {
-		mt_destroy_walk(enode, mt->ma_flags, true);
+		mt_destroy_walk(enode, mt, true);
 	}
 }
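The lockdep expression used above, lock_is_held(&rcu_callback_map), is true while an RCU callback executes, which is what lets rcu_dereference_protected() run without taking the tree lock. A minimal sketch of the pattern, with demo_rcu_node and demo_free_walk() invented for illustration:

#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Invented structure for illustration only. */
struct demo_rcu_node {
	void __rcu *slot[8];
	struct rcu_head rcu;
};

/* Runs via call_rcu(&node->rcu, demo_free_walk); lockdep treats
 * rcu_callback_map as held here, so the dereference is seen as safe. */
static void demo_free_walk(struct rcu_head *head)
{
	struct demo_rcu_node *node =
		container_of(head, struct demo_rcu_node, rcu);
	void *child;

	child = rcu_dereference_protected(node->slot[0],
					  lock_is_held(&rcu_callback_map));
	kfree(child);
	kfree(node);
}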
From: "Liam R. Howlett" Liam.Howlett@Oracle.com
commit 3dd4432549415f3c65dd52d5c687629efbf4ece1 upstream.
Use the maple tree in RCU mode for VMA tracking.
The maple tree tracks the stack and is able to update the pivot (lower/upper boundary) in-place to allow the page fault handler to write to the tree while holding just the mmap read lock. This is safe as the writes to the stack have a guard VMA which ensures there will always be a NULL in the direction of the growth and thus will only update a pivot.
It is possible, but not recommended, to have VMAs that grow up/down without guard VMAs. syzbot has constructed a testcase which sets up a VMA to grow and consume the empty space. Overwriting the entire NULL entry causes the tree to be altered in a way that is not safe for concurrent readers; the readers may see a node being rewritten or one that does not match the maple state they are using.
Enabling RCU mode ensures that concurrent readers see a stable node and get the expected result; a minimal usage sketch follows the diff.
Link: https://lkml.kernel.org/r/20230227173632.3292573-9-surenb@google.com
Cc: stable@vger.kernel.org
Fixes: d4af56c5c7c6 ("mm: start tracking VMAs with maple tree")
Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com
Reported-by: syzbot+8d95422d3537159ca390@syzkaller.appspotmail.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 include/linux/mm_types.h | 3 ++-
 kernel/fork.c            | 3 +++
 mm/mmap.c                | 3 ++-
 3 files changed, 7 insertions(+), 2 deletions(-)
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -810,7 +810,8 @@ struct mm_struct {
 		unsigned long cpu_bitmap[];
 };
 
-#define MM_MT_FLAGS	(MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN)
+#define MM_MT_FLAGS	(MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN | \
+			 MT_FLAGS_USE_RCU)
 extern struct mm_struct init_mm;
 
 /* Pointer magic because the dynamic array size confuses some compilers. */
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -617,6 +617,7 @@ static __latent_entropy int dup_mmap(str
 	if (retval)
 		goto out;
 
+	mt_clear_in_rcu(mas.tree);
 	mas_for_each(&old_mas, mpnt, ULONG_MAX) {
 		struct file *file;
 
@@ -703,6 +704,8 @@ static __latent_entropy int dup_mmap(str
 	retval = arch_dup_mmap(oldmm, mm);
 loop_out:
 	mas_destroy(&mas);
+	if (!retval)
+		mt_set_in_rcu(mas.tree);
 out:
 	mmap_write_unlock(mm);
 	flush_tlb_mm(oldmm);
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2308,7 +2308,7 @@ do_mas_align_munmap(struct ma_state *mas
 	int count = 0;
 	int error = -ENOMEM;
 	MA_STATE(mas_detach, &mt_detach, 0, 0);
-	mt_init_flags(&mt_detach, MT_FLAGS_LOCK_EXTERN);
+	mt_init_flags(&mt_detach, mas->tree->ma_flags & MT_FLAGS_LOCK_MASK);
 	mt_set_external_lock(&mt_detach, &mm->mmap_lock);
 
 	if (mas_preallocate(mas, vma, GFP_KERNEL))
@@ -3095,6 +3095,7 @@ void exit_mmap(struct mm_struct *mm)
 	 */
 	set_bit(MMF_OOM_SKIP, &mm->flags);
 	mmap_write_lock(mm);
+	mt_clear_in_rcu(&mm->mm_mt);
 	free_pgtables(&tlb, &mm->mm_mt, vma, FIRST_USER_ADDRESS,
 		      USER_PGTABLES_CEILING);
 	tlb_finish_mmu(&tlb);
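For reference, a minimal sketch of what MT_FLAGS_USE_RCU buys a standalone tree; demo_mt, the ranges and the stored value are invented, and mm's tree opts in through MM_MT_FLAGS as in the hunk above rather than like this:

#include <linux/maple_tree.h>
#include <linux/xarray.h>
#include <linux/errno.h>

/* Invented example tree. */
static struct maple_tree demo_mt;

static int demo_rcu_tree(void)
{
	void *entry;

	mt_init_flags(&demo_mt, MT_FLAGS_USE_RCU);

	/* Writers still serialize on the tree's internal lock. */
	if (mtree_store_range(&demo_mt, 0x1000, 0x1fff,
			      xa_mk_value(42), GFP_KERNEL))
		return -ENOMEM;

	/* Lockless readers now see either the old node or the new one,
	 * never a node being rewritten in place. */
	entry = mtree_load(&demo_mt, 0x1500);

	return entry ? 0 : -ENOENT;
}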
On Wed, Apr 12, 2023 at 10:32:06AM +0200, Greg Kroah-Hartman wrote:
Tested-by: Conor Dooley conor.dooley@microchip.com
Thanks, Conor.
* Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
Hi Greg
6.2.11-rc1
compiles, boots and runs here on x86_64 (AMD Ryzen 5 PRO 4650G, Slackware64-15.0)
Tested-by: Markus Reichelt lkt+2023@mareichelt.com
On Wed, Apr 12, 2023 at 10:32:06AM +0200, Greg Kroah-Hartman wrote:
Tested rc1 against the Fedora build system (aarch64, armv7, ppc64le, s390x, x86_64), and boot tested x86_64. No regressions noted.
Tested-by: Justin M. Forbes jforbes@fedoraproject.org
On 4/12/23 01:32, Greg Kroah-Hartman wrote:
On ARCH_BRCMSTB using 32-bit and 64-bit ARM kernels, build tested on BMIPS_GENERIC:
Tested-by: Florian Fainelli f.fainelli@gmail.com
On 4/12/23 02:32, Greg Kroah-Hartman wrote:
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks,
-- Shuah
On Wed, Apr 12, 2023 at 10:32:06AM +0200, Greg Kroah-Hartman wrote:
Build results:
	total: 155 pass: 155 fail: 0
Qemu test results:
	total: 520 pass: 520 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
Guenter
On 4/12/23 1:32 AM, Greg Kroah-Hartman wrote:
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
On Apr 12, 2023, at 4:32 AM, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
Compiled & booted on two of my x86_64 test systems, no errors or regressions.
Tested-by: Slade Watkins srw@sladewatkins.net
Slade
On Wed, Apr 12, 2023 at 10:32:06AM +0200, Greg Kroah-Hartman wrote:
Successfully cross-compiled for arm64 (bcm2711_defconfig, GCC 10.2.0) and powerpc (ps3_defconfig, GCC 12.2.0).
Tested-by: Bagas Sanjaya bagasdotme@gmail.com
On Wed, 12 Apr 2023 at 14:17, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
## Build
* kernel: 6.2.11-rc1
* git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
* git branch: linux-6.2.y
* git commit: 5f50ce97de71b5278626756de07d906a3a4882d0
* git describe: v6.2.9-361-g5f50ce97de71
* test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.2.y/build/v6.2.9-...
## Test Regressions (compared to v6.2.9)
## Metric Regressions (compared to v6.2.9)
## Test Fixes (compared to v6.2.9)
## Metric Fixes (compared to v6.2.9)
## Test result summary
total: 186207, pass: 158993, fail: 4143, skip: 22756, xfail: 315
## Build Summary
* arc: 5 total, 5 passed, 0 failed
* arm: 145 total, 142 passed, 3 failed
* arm64: 54 total, 53 passed, 1 failed
* i386: 41 total, 38 passed, 3 failed
* mips: 30 total, 28 passed, 2 failed
* parisc: 8 total, 8 passed, 0 failed
* powerpc: 38 total, 36 passed, 2 failed
* riscv: 26 total, 25 passed, 1 failed
* s390: 16 total, 16 passed, 0 failed
* sh: 14 total, 12 passed, 2 failed
* sparc: 8 total, 7 passed, 1 failed
* x86_64: 46 total, 46 passed, 0 failed
## Test suites summary
* boot
* fwts
* igt-gpu-tools
* kselftest-android
* kselftest-arm64
* kselftest-breakpoints
* kselftest-capabilities
* kselftest-cgroup
* kselftest-clone3
* kselftest-core
* kselftest-cpu-hotplug
* kselftest-cpufreq
* kselftest-drivers-dma-buf
* kselftest-efivarfs
* kselftest-exec
* kselftest-filesystems
* kselftest-filesystems-binderfs
* kselftest-firmware
* kselftest-fpu
* kselftest-ftrace
* kselftest-futex
* kselftest-gpio
* kselftest-intel_pstate
* kselftest-ipc
* kselftest-ir
* kselftest-kcmp
* kselftest-kexec
* kselftest-kvm
* kselftest-lib
* kselftest-livepatch
* kselftest-membarrier
* kselftest-memfd
* kselftest-memory-hotplug
* kselftest-mincore
* kselftest-mount
* kselftest-mqueue
* kselftest-net
* kselftest-net-forwarding
* kselftest-net-mptcp
* kselftest-netfilter
* kselftest-nsfs
* kselftest-openat2
* kselftest-pid_namespace
* kselftest-pidfd
* kselftest-proc
* kselftest-pstore
* kselftest-ptrace
* kselftest-rseq
* kselftest-rtc
* kselftest-seccomp
* kselftest-sigaltstack
* kselftest-size
* kselftest-splice
* kselftest-static_keys
* kselftest-sync
* kselftest-sysctl
* kselftest-tc-testing
* kselftest-timens
* kselftest-timers
* kselftest-tmpfs
* kselftest-tpm2
* kselftest-user
* kselftest-user_events
* kselftest-vDSO
* kselftest-vm
* kselftest-watchdog
* kselftest-x86
* kselftest-zram
* kunit
* kvm-unit-tests
* libhugetlbfs
* log-parser-boot
* log-parser-test
* ltp-cap_bounds
* ltp-commands
* ltp-containers
* ltp-controllers
* ltp-cpuhotplug
* ltp-crypto
* ltp-cve
* ltp-dio
* ltp-fcntl-locktests
* ltp-filecaps
* ltp-fs
* ltp-fs_bind
* ltp-fs_perms_simple
* ltp-fsx
* ltp-hugetlb
* ltp-io
* ltp-ipc
* ltp-math
* ltp-mm
* ltp-nptl
* ltp-pty
* ltp-sched
* ltp-securebits
* ltp-smoke
* ltp-syscalls
* ltp-tracing
* network-basic-tests
* perf
* rcutorture
* v4l2-compliance
* vdso
--
Linaro LKFT
https://lkft.linaro.org