diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst index 25f3b253219859a6fa120850b9fcd99bd775b1bd..e05e581af5cfe617f38112907aadea8d03f2a9d6 100644 --- a/Documentation/admin-guide/hw-vuln/spectre.rst +++ b/Documentation/admin-guide/hw-vuln/spectre.rst @@ -41,10 +41,11 @@ Related CVEs The following CVE entries describe Spectre variants: - ============= ======================= ================= + ============= ======================= ========================== CVE-2017-5753 Bounds check bypass Spectre variant 1 CVE-2017-5715 Branch target injection Spectre variant 2 - ============= ======================= ================= + CVE-2019-1125 Spectre v1 swapgs Spectre variant 1 (swapgs) + ============= ======================= ========================== Problem ------- @@ -78,6 +79,13 @@ There are some extensions of Spectre variant 1 attacks for reading data over the network, see :ref:`[12] `. However such attacks are difficult, low bandwidth, fragile, and are considered low risk. +Note that, despite "Bounds Check Bypass" name, Spectre variant 1 is not +only about user-controlled array bounds checks. It can affect any +conditional checks. The kernel entry code interrupt, exception, and NMI +handlers all have conditional swapgs checks. Those may be problematic +in the context of Spectre v1, as kernel code can speculatively run with +a user GS. + Spectre variant 2 (Branch Target Injection) ------------------------------------------- @@ -132,6 +140,9 @@ not cover all possible attack vectors. 1. A user process attacking the kernel ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Spectre variant 1 +~~~~~~~~~~~~~~~~~ + The attacker passes a parameter to the kernel via a register or via a known address in memory during a syscall. Such parameter may be used later by the kernel as an index to an array or to derive @@ -144,7 +155,40 @@ not cover all possible attack vectors. potentially be influenced for Spectre attacks, new "nospec" accessor macros are used to prevent speculative loading of data. - Spectre variant 2 attacker can :ref:`poison ` the branch +Spectre variant 1 (swapgs) +~~~~~~~~~~~~~~~~~~~~~~~~~~ + + An attacker can train the branch predictor to speculatively skip the + swapgs path for an interrupt or exception. If they initialize + the GS register to a user-space value, if the swapgs is speculatively + skipped, subsequent GS-related percpu accesses in the speculation + window will be done with the attacker-controlled GS value. This + could cause privileged memory to be accessed and leaked. + + For example: + + :: + + if (coming from user space) + swapgs + mov %gs:, %reg + mov (%reg), %reg1 + + When coming from user space, the CPU can speculatively skip the + swapgs, and then do a speculative percpu load using the user GS + value. So the user can speculatively force a read of any kernel + value. If a gadget exists which uses the percpu value as an address + in another load/store, then the contents of the kernel value may + become visible via an L1 side channel attack. + + A similar attack exists when coming from kernel space. The CPU can + speculatively do the swapgs, causing the user GS to get used for the + rest of the speculative window. + +Spectre variant 2 +~~~~~~~~~~~~~~~~~ + + A spectre variant 2 attacker can :ref:`poison ` the branch target buffer (BTB) before issuing syscall to launch an attack. 
After entering the kernel, the kernel could use the poisoned branch target buffer on indirect jump and jump to gadget code in speculative @@ -280,11 +324,18 @@ The sysfs file showing Spectre variant 1 mitigation status is: The possible values in this file are: - ======================================= ================================= - 'Mitigation: __user pointer sanitation' Protection in kernel on a case by - case base with explicit pointer - sanitation. - ======================================= ================================= + .. list-table:: + + * - 'Not affected' + - The processor is not vulnerable. + * - 'Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers' + - The swapgs protections are disabled; otherwise it has + protection in the kernel on a case by case base with explicit + pointer sanitation and usercopy LFENCE barriers. + * - 'Mitigation: usercopy/swapgs barriers and __user pointer sanitization' + - Protection in the kernel on a case by case base with explicit + pointer sanitation, usercopy LFENCE barriers, and swapgs LFENCE + barriers. However, the protections are put in place on a case by case basis, and there is no guarantee that all possible attack vectors for Spectre @@ -366,12 +417,27 @@ Turning on mitigation for Spectre variant 1 and Spectre variant 2 1. Kernel mitigation ^^^^^^^^^^^^^^^^^^^^ +Spectre variant 1 +~~~~~~~~~~~~~~~~~ + For the Spectre variant 1, vulnerable kernel code (as determined by code audit or scanning tools) is annotated on a case by case basis to use nospec accessor macros for bounds clipping :ref:`[2] ` to avoid any usable disclosure gadgets. However, it may not cover all attack vectors for Spectre variant 1. + Copy-from-user code has an LFENCE barrier to prevent the access_ok() + check from being mis-speculated. The barrier is done by the + barrier_nospec() macro. + + For the swapgs variant of Spectre variant 1, LFENCE barriers are + added to interrupt, exception and NMI entry where needed. These + barriers are done by the FENCE_SWAPGS_KERNEL_ENTRY and + FENCE_SWAPGS_USER_ENTRY macros. + +Spectre variant 2 +~~~~~~~~~~~~~~~~~ + For Spectre variant 2 mitigation, the compiler turns indirect calls or jumps in the kernel into equivalent return trampolines (retpolines) :ref:`[3] ` :ref:`[9] ` to go to the target @@ -473,6 +539,12 @@ Mitigation control on the kernel command line Spectre variant 2 mitigation can be disabled or force enabled at the kernel command line. + nospectre_v1 + + [X86,PPC] Disable mitigations for Spectre Variant 1 + (bounds check bypass). With this option data leaks are + possible in the system. + nospectre_v2 [X86] Disable all mitigations for the Spectre variant 2 diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 1cee1174cde6e5cf1aa36cfde763ae2a87bb10c1..e8ddf0ef232e3a43eae5204e5c515fcc7ed8c2ef 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2515,6 +2515,7 @@ Equivalent to: nopti [X86,PPC] nospectre_v1 [PPC] nobp=0 [S390] + nospectre_v1 [X86] nospectre_v2 [X86,PPC,S390] spectre_v2_user=off [X86] spec_store_bypass_disable=off [X86,PPC] @@ -2861,9 +2862,9 @@ nosmt=force: Force disable SMT, cannot be undone via the sysfs control file. - nospectre_v1 [PPC] Disable mitigations for Spectre Variant 1 (bounds - check bypass). With this option data leaks are possible - in the system. 
+ nospectre_v1 [X86,PPC] Disable mitigations for Spectre Variant 1 + (bounds check bypass). With this option data leaks + are possible in the system. nospectre_v2 [X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2 (indirect branch prediction) vulnerability. System may allow data leaks with this option. @@ -3947,6 +3948,13 @@ Run specified binary instead of /init from the ramdisk, used for early userspace startup. See initrd. + rdrand= [X86] + force - Override the decision by the kernel to hide the + advertisement of RDRAND support (this affects + certain AMD processors because of buggy BIOS + support, specifically around the suspend/resume + path). + rdt= [HW,X86,RDT] Turn on/off individual RDT features. List is: cmt, mbmtotal, mbmlocal, l3cat, l3cdp, l2cat, l2cdp, diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt index 913396ac582431cb3acbdc96a1bd7f4293661d0b..ed0d814df7e06a25daf0b381f9929d83e152a57c 100644 --- a/Documentation/atomic_t.txt +++ b/Documentation/atomic_t.txt @@ -177,6 +177,9 @@ These helper barriers exist because architectures have varying implicit ordering on their SMP atomic primitives. For example our TSO architectures provide full ordered atomics and these barriers are no-ops. +NOTE: when the atomic RmW ops are fully ordered, they should also imply a +compiler barrier. + Thus: atomic_fetch_add(); diff --git a/Documentation/devicetree/bindings/media/i2c/imx219.txt b/Documentation/devicetree/bindings/media/i2c/imx219.txt new file mode 100644 index 0000000000000000000000000000000000000000..a02f1ce1e1204c9de49f44c9ccec7780137e853d --- /dev/null +++ b/Documentation/devicetree/bindings/media/i2c/imx219.txt @@ -0,0 +1,59 @@ +* Sony 1/4.0-Inch 8Mpixel CMOS Digital Image Sensor + +The Sony imx219 is a 1/4.0-inch CMOS active pixel digital image sensor with +an active array size of 3280H x 2464V. It is programmable through I2C +interface. The I2C address is fixed to 0x10 as per sensor data sheet. +Image data is sent through MIPI CSI-2, which is configured as either 2 or 4 +data lanes. + +Required Properties: +- compatible: value should be "sony,imx219" for imx219 sensor +- reg: I2C bus address of the device +- clocks: reference to the xclk input clock. +- clock-names: should be "xclk". +- DOVDD-supply: Digital I/O voltage supply, 1.8 volts +- AVDD-supply: Analog voltage supply, 2.8 volts +- DVDD-supply: Digital core voltage supply, 1.2 volts + +Optional Properties: +- xclr-gpios: reference to the GPIO connected to the xclr pin, if any. Must be + released after all supplies are applied. + This is an active high signal to the imx219. + +The imx219 device node should contain one 'port' child node with +an 'endpoint' subnode. For further reading on port node refer to +Documentation/devicetree/bindings/media/video-interfaces.txt. + +Endpoint node required properties for CSI-2 connection are: +- remote-endpoint: a phandle to the bus receiver's endpoint node. 
+- clock-lanes: should be set to <0> (clock lane on hardware lane 0) +- data-lanes: should be set to <1 2>, or <1 2 3 4> (two or four lane CSI-2 + supported) + +Example: + sensor@10 { + compatible = "sony,imx219"; + reg = <0x10>; + #address-cells = <1>; + #size-cells = <0>; + clocks = <&imx219_clk>; + clock-names = "xclk"; + xclr-gpios = <&gpio_sensor 0 0>; + DOVDD-supply = <&vgen4_reg>; /* 1.8v */ + AVDD-supply = <&vgen3_reg>; /* 2.8v */ + DVDD-supply = <&vgen2_reg>; /* 1.2v */ + + imx219_clk: camera-clk { + compatible = "fixed-clock"; + #clock-cells = <0>; + clock-frequency = <24000000>; + }; + + port { + sensor_out: endpoint { + remote-endpoint = <&csiss_in>; + clock-lanes = <0>; + data-lanes = <1 2>; + }; + }; + }; diff --git a/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt b/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt index 42cd81090a2c77f959020328aee5f953b02703db..3f3cfc1d8d4d855485c048c85b330b52970cae11 100644 --- a/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt +++ b/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt @@ -16,7 +16,7 @@ Required properties: Optional properties: - interrupts: interrupt line number for the SMI error/done interrupt -- clocks: phandle for up to three required clocks for the MDIO instance +- clocks: phandle for up to four required clocks for the MDIO instance The child nodes of the MDIO driver are the individual PHY devices connected to this MDIO bus. They must have a "reg" property given the diff --git a/Documentation/fb/modedb.txt b/Documentation/fb/modedb.txt index 16aa08453911cba57f0616f5a472e9c6ecc5e14e..1dd5a52f9390fe85dfeae72c2722fc3759939799 100644 --- a/Documentation/fb/modedb.txt +++ b/Documentation/fb/modedb.txt @@ -51,6 +51,20 @@ To force the VGA output to be enabled and drive a specific mode say: Specifying the option multiple times for different ports is possible, e.g.: video=LVDS-1:d video=HDMI-1:D +Options can also be passed after the mode, using commas as separator. + + Sample usage: 720x480,rotate=180 - 720x480 mode, rotated by 180 degrees + +Valid options are: + + - margin_top, margin_bottom, margin_left, margin_right (integer): + Number of pixels in the margins, typically to deal with overscan on TVs + - reflect_x (boolean): Perform an axial symmetry on the X axis + - reflect_y (boolean): Perform an axial symmetry on the Y axis + - rotate (integer): Rotate the initial framebuffer by x + degrees. Valid values are 0, 90, 180 and 270. + + ***** oOo ***** oOo ***** oOo ***** oOo ***** oOo ***** oOo ***** oOo ***** What is the VESA(TM) Coordinated Video Timings (CVT)? diff --git a/Documentation/scheduler/sched-pelt.c b/Documentation/scheduler/sched-pelt.c index e4219139386ae805575a7c1cae5dc83f5b3e52ca..7238b355919c757cc7b8bc38b905470b1a69a805 100644 --- a/Documentation/scheduler/sched-pelt.c +++ b/Documentation/scheduler/sched-pelt.c @@ -20,7 +20,8 @@ void calc_runnable_avg_yN_inv(void) int i; unsigned int x; - printf("static const u32 runnable_avg_yN_inv[] = {"); + /* To silence -Wunused-but-set-variable warnings. 
*/ + printf("static const u32 runnable_avg_yN_inv[] __maybe_unused = {"); for (i = 0; i < HALFLIFE; i++) { x = ((1UL<<32)-1)*pow(y, i); diff --git a/Makefile b/Makefile index 38f2150457fddee2490d2c15fa0970145c940332..f6c9d5757470eb047316c96d9ce9920d10577205 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 4 PATCHLEVEL = 19 -SUBLEVEL = 59 +SUBLEVEL = 71 EXTRAVERSION = NAME = "People's Front" @@ -430,6 +430,7 @@ KBUILD_CFLAGS_MODULE := -DMODULE KBUILD_LDFLAGS_MODULE := -T $(srctree)/scripts/module-common.lds KBUILD_LDFLAGS := GCC_PLUGINS_CFLAGS := +CLANG_FLAGS := export ARCH SRCARCH CONFIG_SHELL HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE AS LD CC export CPP AR NM STRIP OBJCOPY OBJDUMP KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS @@ -482,7 +483,7 @@ endif ifeq ($(cc-name),clang) ifneq ($(CROSS_COMPILE),) -CLANG_FLAGS := --target=$(notdir $(CROSS_COMPILE:%-=%)) +CLANG_FLAGS += --target=$(notdir $(CROSS_COMPILE:%-=%)) GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)elfedit)) CLANG_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR) GCC_TOOLCHAIN := $(realpath $(GCC_TOOLCHAIN_DIR)/..) @@ -491,6 +492,7 @@ ifneq ($(GCC_TOOLCHAIN),) CLANG_FLAGS += --gcc-toolchain=$(GCC_TOOLCHAIN) endif CLANG_FLAGS += -no-integrated-as +CLANG_FLAGS += -Werror=unknown-warning-option KBUILD_CFLAGS += $(CLANG_FLAGS) KBUILD_AFLAGS += $(CLANG_FLAGS) export CLANG_FLAGS diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index 74953e76a57d581a795227d0570974f217161e05..0cce54182cc578119084c96f1dab8c9158bfc96c 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -199,7 +199,6 @@ config NR_CPUS config ARC_SMP_HALT_ON_RESET bool "Enable Halt-on-reset boot mode" - default y if ARC_UBOOT_SUPPORT help In SMP configuration cores can be configured as Halt-on-reset or they could all start at same time. For Halt-on-reset, non @@ -539,18 +538,6 @@ config ARC_DBG_TLB_PARANOIA endif -config ARC_UBOOT_SUPPORT - bool "Support uboot arg Handling" - default n - help - ARC Linux by default checks for uboot provided args as pointers to - external cmdline or DTB. This however breaks in absence of uboot, - when booting from Metaware debugger directly, as the registers are - not zeroed out on reset by mdb and/or ARCv2 based cores. The bogus - registers look like uboot args to kernel which then chokes. - So only enable the uboot arg checking/processing if users are sure - of uboot being in play. 
- config ARC_BUILTIN_DTB_NAME string "Built in DTB" help diff --git a/arch/arc/configs/nps_defconfig b/arch/arc/configs/nps_defconfig index 6e84060e7c90a2cbba081a46f87ab607aee1d22e..621f59407d7693057f642d64cfe31dbdb7cd7d9d 100644 --- a/arch/arc/configs/nps_defconfig +++ b/arch/arc/configs/nps_defconfig @@ -31,7 +31,6 @@ CONFIG_ARC_CACHE_LINE_SHIFT=5 # CONFIG_ARC_HAS_LLSC is not set CONFIG_ARC_KVADDR_SIZE=402 CONFIG_ARC_EMUL_UNALIGNED=y -CONFIG_ARC_UBOOT_SUPPORT=y CONFIG_PREEMPT=y CONFIG_NET=y CONFIG_UNIX=y diff --git a/arch/arc/configs/vdk_hs38_defconfig b/arch/arc/configs/vdk_hs38_defconfig index 1e59a2e9c602fa2736cfc0d6fdd439b07a11105b..e447ace6fa1cab14f00f6ebadb7b15d2812c616a 100644 --- a/arch/arc/configs/vdk_hs38_defconfig +++ b/arch/arc/configs/vdk_hs38_defconfig @@ -13,7 +13,6 @@ CONFIG_PARTITION_ADVANCED=y CONFIG_ARC_PLAT_AXS10X=y CONFIG_AXS103=y CONFIG_ISA_ARCV2=y -CONFIG_ARC_UBOOT_SUPPORT=y CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38" CONFIG_PREEMPT=y CONFIG_NET=y diff --git a/arch/arc/configs/vdk_hs38_smp_defconfig b/arch/arc/configs/vdk_hs38_smp_defconfig index b5c3f6c54b032d2a84510737272cacbe1ec89b1c..c82cdb10aaf4fba577b43188809a395298ee3c5e 100644 --- a/arch/arc/configs/vdk_hs38_smp_defconfig +++ b/arch/arc/configs/vdk_hs38_smp_defconfig @@ -15,8 +15,6 @@ CONFIG_AXS103=y CONFIG_ISA_ARCV2=y CONFIG_SMP=y # CONFIG_ARC_TIMERS_64BIT is not set -# CONFIG_ARC_SMP_HALT_ON_RESET is not set -CONFIG_ARC_UBOOT_SUPPORT=y CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38_smp" CONFIG_PREEMPT=y CONFIG_NET=y diff --git a/arch/arc/kernel/head.S b/arch/arc/kernel/head.S index 208bf2c9e7b0d98b97e1778addd3bfea3e3424ce..a72bbda2f7aad0099860ef3f5e99af12722defae 100644 --- a/arch/arc/kernel/head.S +++ b/arch/arc/kernel/head.S @@ -100,7 +100,6 @@ ENTRY(stext) st.ab 0, [r5, 4] 1: -#ifdef CONFIG_ARC_UBOOT_SUPPORT ; Uboot - kernel ABI ; r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2 ; r1 = magic number (always zero as of now) @@ -109,7 +108,6 @@ ENTRY(stext) st r0, [@uboot_tag] st r1, [@uboot_magic] st r2, [@uboot_arg] -#endif ; setup "current" tsk and optionally cache it in dedicated r25 mov r9, @init_task diff --git a/arch/arc/kernel/setup.c b/arch/arc/kernel/setup.c index a1218937abd68ac1ba3905762cde0be5a1bcf027..89c97dcfa3602b3f5e1de180c9a9b7f47a76f23b 100644 --- a/arch/arc/kernel/setup.c +++ b/arch/arc/kernel/setup.c @@ -493,7 +493,6 @@ void __init handle_uboot_args(void) bool use_embedded_dtb = true; bool append_cmdline = false; -#ifdef CONFIG_ARC_UBOOT_SUPPORT /* check that we know this tag */ if (uboot_tag != UBOOT_TAG_NONE && uboot_tag != UBOOT_TAG_CMDLINE && @@ -525,7 +524,6 @@ void __init handle_uboot_args(void) append_cmdline = true; ignore_uboot_args: -#endif if (use_embedded_dtb) { machine_desc = setup_machine_fdt(__dtb_start); diff --git a/arch/arc/kernel/unwind.c b/arch/arc/kernel/unwind.c index 183391d4d33a4138d04418da4b90d61efe38c6c4..9cf2ee8b434937e14e03f4ef29090d56dd0cbdbb 100644 --- a/arch/arc/kernel/unwind.c +++ b/arch/arc/kernel/unwind.c @@ -185,11 +185,6 @@ static void *__init unw_hdr_alloc_early(unsigned long sz) MAX_DMA_ADDRESS); } -static void *unw_hdr_alloc(unsigned long sz) -{ - return kmalloc(sz, GFP_KERNEL); -} - static void init_unwind_table(struct unwind_table *table, const char *name, const void *core_start, unsigned long core_size, const void *init_start, unsigned long init_size, @@ -370,6 +365,10 @@ ret_err: } #ifdef CONFIG_MODULES +static void *unw_hdr_alloc(unsigned long sz) +{ + return kmalloc(sz, GFP_KERNEL); +} static struct unwind_table *last_table; diff --git 
a/arch/arm/boot/dts/bcm2708-rpi-b-plus.dts b/arch/arm/boot/dts/bcm2708-rpi-b-plus.dts index b2b8e5c9d5823e80413817856efbbd03a5bcc0e5..b800699a03fbd61c0e8240a4635d96b3cac444bb 100644 --- a/arch/arm/boot/dts/bcm2708-rpi-b-plus.dts +++ b/arch/arm/boot/dts/bcm2708-rpi-b-plus.dts @@ -6,6 +6,7 @@ #include "bcm283x-rpi-csi1-2lane.dtsi" / { + compatible = "raspberrypi,model-b-plus", "brcm,bcm2835"; model = "Raspberry Pi Model B+"; }; diff --git a/arch/arm/boot/dts/bcm2708-rpi-b.dts b/arch/arm/boot/dts/bcm2708-rpi-b.dts index 3f9a8e4cfe8f5a503bfa6e94384b3e29685fc88a..ef47775692ceaf6dc13c20ab5f5fbe8e641405ad 100644 --- a/arch/arm/boot/dts/bcm2708-rpi-b.dts +++ b/arch/arm/boot/dts/bcm2708-rpi-b.dts @@ -6,6 +6,7 @@ #include "bcm283x-rpi-csi1-2lane.dtsi" / { + compatible = "raspberrypi,model-b", "brcm,bcm2835"; model = "Raspberry Pi Model B"; }; diff --git a/arch/arm/boot/dts/bcm2708-rpi-cm.dts b/arch/arm/boot/dts/bcm2708-rpi-cm.dts index 1a3975b3563098cb9b96759dc9d3458ae2921903..64809aee5c0ca7fd32f30b5e20c1eaa80c3c8eb3 100644 --- a/arch/arm/boot/dts/bcm2708-rpi-cm.dts +++ b/arch/arm/boot/dts/bcm2708-rpi-cm.dts @@ -5,6 +5,7 @@ #include "bcm283x-rpi-csi1-4lane.dtsi" / { + compatible = "raspberrypi,compute-module", "brcm,bcm2835"; model = "Raspberry Pi Compute Module"; }; diff --git a/arch/arm/boot/dts/bcm2710-rpi-cm3.dts b/arch/arm/boot/dts/bcm2710-rpi-cm3.dts index 14a37ffda8557f27bfc58e7eaac09d1ec6835e70..addebe448e32c8e7b810746e36a72cfb286077eb 100644 --- a/arch/arm/boot/dts/bcm2710-rpi-cm3.dts +++ b/arch/arm/boot/dts/bcm2710-rpi-cm3.dts @@ -6,6 +6,7 @@ #include "bcm283x-rpi-csi1-4lane.dtsi" / { + compatible = "raspberrypi,3-compute-module", "brcm,bcm2837"; model = "Raspberry Pi Compute Module 3"; }; diff --git a/arch/arm/boot/dts/bcm2711-rpi-4-b.dts b/arch/arm/boot/dts/bcm2711-rpi-4-b.dts index 9b5c4f8a4d1707126c945fbb1c35a1149cad6c7e..18c0f9599d3aaac4108f9008afc1cc17306a9190 100644 --- a/arch/arm/boot/dts/bcm2711-rpi-4-b.dts +++ b/arch/arm/boot/dts/bcm2711-rpi-4-b.dts @@ -23,6 +23,10 @@ mmc0 = &emmc2; mmc1 = &mmcnr; mmc2 = &sdhost; + i2c3 = &i2c3; + i2c4 = &i2c4; + i2c5 = &i2c5; + i2c6 = &i2c6; /delete-property/ ethernet; /delete-property/ intc; ethernet0 = &genet; @@ -207,31 +211,37 @@ i2c0_pins: i2c0 { brcm,pins = <0 1>; brcm,function = ; + brcm,pull = ; }; i2c1_pins: i2c1 { brcm,pins = <2 3>; brcm,function = ; + brcm,pull = ; }; i2c3_pins: i2c3 { brcm,pins = <4 5>; brcm,function = ; + brcm,pull = ; }; i2c4_pins: i2c4 { brcm,pins = <8 9>; brcm,function = ; + brcm,pull = ; }; i2c5_pins: i2c5 { brcm,pins = <12 13>; brcm,function = ; + brcm,pull = ; }; i2c6_pins: i2c6 { brcm,pins = <22 23>; brcm,function = ; + brcm,pull = ; }; i2s_pins: i2s { diff --git a/arch/arm/boot/dts/bcm2838.dtsi b/arch/arm/boot/dts/bcm2838.dtsi index 3cecfa62af428f9db9910f23ae2de57a95794669..1d321ad6640a66d56d96cf43dea51c5ec450d11d 100644 --- a/arch/arm/boot/dts/bcm2838.dtsi +++ b/arch/arm/boot/dts/bcm2838.dtsi @@ -35,6 +35,8 @@ <0x40042000 0x2000>, <0x40044000 0x2000>, <0x40046000 0x2000>; + interrupts = ; }; thermal: thermal@7d5d2200 { @@ -222,15 +224,12 @@ }; arm-pmu { - /* - * N.B. the A72 PMU support only exists in arch/arm64, hence - * the fallback to the A53 version. 
- */ - compatible = "arm,cortex-a72-pmu", "arm,cortex-a53-pmu"; + compatible = "arm,cortex-a72-pmu"; interrupts = , , , ; + interrupt-affinity = <&cpu0>, <&cpu1>, <&cpu2>, <&cpu3>; }; timer { @@ -410,26 +409,26 @@ }; hevc-decoder@7eb00000 { - compatible = "raspberrypi,argon-hevc-decoder"; + compatible = "raspberrypi,rpivid-hevc-decoder"; reg = <0x0 0x7eb00000 0x10000>; status = "okay"; }; - argon-local-intc@7eb10000 { - compatible = "raspberrypi,argon-local-intc"; + rpivid-local-intc@7eb10000 { + compatible = "raspberrypi,rpivid-local-intc"; reg = <0x0 0x7eb10000 0x1000>; status = "okay"; interrupts = ; }; h264-decoder@7eb20000 { - compatible = "raspberrypi,argon-h264-decoder"; + compatible = "raspberrypi,rpivid-h264-decoder"; reg = <0x0 0x7eb20000 0x10000>; status = "okay"; }; vp9-decoder@7eb30000 { - compatible = "raspberrypi,argon-vp9-decoder"; + compatible = "raspberrypi,rpivid-vp9-decoder"; reg = <0x0 0x7eb30000 0x10000>; status = "okay"; }; diff --git a/arch/arm/boot/dts/bcm47094-linksys-panamera.dts b/arch/arm/boot/dts/bcm47094-linksys-panamera.dts index 36efe410dcd71a54ae6f802e405fb9b60296b58f..9e33c41f541125f1d5d4a497f0f7ef06bd1d8e06 100644 --- a/arch/arm/boot/dts/bcm47094-linksys-panamera.dts +++ b/arch/arm/boot/dts/bcm47094-linksys-panamera.dts @@ -125,6 +125,9 @@ }; mdio-bus-mux { + #address-cells = <1>; + #size-cells = <0>; + /* BIT(9) = 1 => external mdio */ mdio_ext: mdio@200 { reg = <0x200>; diff --git a/arch/arm/boot/dts/gemini-dlink-dns-313.dts b/arch/arm/boot/dts/gemini-dlink-dns-313.dts index d1329322b968540825459b040c96006b9f21eea8..361dccd6c7eeeecce84399a7553c6b43d4089110 100644 --- a/arch/arm/boot/dts/gemini-dlink-dns-313.dts +++ b/arch/arm/boot/dts/gemini-dlink-dns-313.dts @@ -11,7 +11,7 @@ / { model = "D-Link DNS-313 1-Bay Network Storage Enclosure"; - compatible = "dlink,dir-313", "cortina,gemini"; + compatible = "dlink,dns-313", "cortina,gemini"; #address-cells = <1>; #size-cells = <1>; diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi index 2366f093cc76d822426fcff6f95fd0f7eb8bd999..336cdead3da54cd137eeaa45897685b489ebb5a0 100644 --- a/arch/arm/boot/dts/imx6ul.dtsi +++ b/arch/arm/boot/dts/imx6ul.dtsi @@ -359,7 +359,7 @@ pwm1: pwm@2080000 { compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm"; reg = <0x02080000 0x4000>; - interrupts = ; + interrupts = ; clocks = <&clks IMX6UL_CLK_PWM1>, <&clks IMX6UL_CLK_PWM1>; clock-names = "ipg", "per"; @@ -370,7 +370,7 @@ pwm2: pwm@2084000 { compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm"; reg = <0x02084000 0x4000>; - interrupts = ; + interrupts = ; clocks = <&clks IMX6UL_CLK_PWM2>, <&clks IMX6UL_CLK_PWM2>; clock-names = "ipg", "per"; @@ -381,7 +381,7 @@ pwm3: pwm@2088000 { compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm"; reg = <0x02088000 0x4000>; - interrupts = ; + interrupts = ; clocks = <&clks IMX6UL_CLK_PWM3>, <&clks IMX6UL_CLK_PWM3>; clock-names = "ipg", "per"; @@ -392,7 +392,7 @@ pwm4: pwm@208c000 { compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm"; reg = <0x0208c000 0x4000>; - interrupts = ; + interrupts = ; clocks = <&clks IMX6UL_CLK_PWM4>, <&clks IMX6UL_CLK_PWM4>; clock-names = "ipg", "per"; diff --git a/arch/arm/boot/dts/overlays/Makefile b/arch/arm/boot/dts/overlays/Makefile index 216403c69c5580983c27aae3e94e80ed792c0f32..87082d64302daf3470c60fc39120cacc481aa0c3 100644 --- a/arch/arm/boot/dts/overlays/Makefile +++ b/arch/arm/boot/dts/overlays/Makefile @@ -53,6 +53,7 @@ dtbo-$(CONFIG_ARCH_BCM2835) += \ hifiberry-dac.dtbo \ hifiberry-dacplus.dtbo \ hifiberry-dacplusadc.dtbo \ + 
hifiberry-dacplusadcpro.dtbo \ hifiberry-digi.dtbo \ hifiberry-digi-pro.dtbo \ hy28a.dtbo \ @@ -76,6 +77,7 @@ dtbo-$(CONFIG_ARCH_BCM2835) += \ i2c6.dtbo \ i2s-gpio28-31.dtbo \ ilitek251x.dtbo \ + imx219.dtbo \ iqaudio-codec.dtbo \ iqaudio-dac.dtbo \ iqaudio-dacplus.dtbo \ diff --git a/arch/arm/boot/dts/overlays/README b/arch/arm/boot/dts/overlays/README index 568e7117b456b58c2216bb3b6e7e1cd2e4e3098c..915461238b02546a330d53595d6ed650c6ee3966 100644 --- a/arch/arm/boot/dts/overlays/README +++ b/arch/arm/boot/dts/overlays/README @@ -475,12 +475,14 @@ Params: Name: audremap -Info: Switches PWM sound output to pins 12 (Right) & 13 (Left) +Info: Switches PWM sound output to GPIOs on the 40-pin header Load: dtoverlay=audremap,= Params: swap_lr Reverse the channel allocation, which will also swap the audio jack outputs (default off) enable_jack Don't switch off the audio jack output (default off) + pins_12_13 Select GPIOs 12 & 13 (default) + pins_18_19 Select GPIOs 18 & 19 Name: balena-fin @@ -881,6 +883,27 @@ Params: 24db_digital_gain Allow gain to be applied via the PCM512x codec master for bit clock and frame clock. +Name: hifiberry-dacplusadcpro +Info: Configures the HifiBerry DAC+ADC PRO audio card +Load: dtoverlay=hifiberry-dacplusadcpro,= +Params: 24db_digital_gain Allow gain to be applied via the PCM512x codec + Digital volume control. Enable with + "dtoverlay=hifiberry-dacplusadcpro,24db_digital_gain" + (The default behaviour is that the Digital + volume control is limited to a maximum of + 0dB. ie. it can attenuate but not provide + gain. For most users, this will be desired + as it will prevent clipping. By appending + the 24dB_digital_gain parameter, the Digital + volume control will allow up to 24dB of + gain. If this parameter is enabled, it is the + responsibility of the user to ensure that + the Digital volume control is set to a value + that does not result in clipping/distortion!) + slave Force DAC+ADC Pro into slave mode, using Pi as + master for bit clock and frame clock. + + Name: hifiberry-digi Info: Configures the HifiBerry Digi and Digi+ audio card Load: dtoverlay=hifiberry-digi @@ -1198,6 +1221,8 @@ Info: Enable the i2c3 bus Load: dtoverlay=i2c3, Params: pins_2_3 Use GPIOs 2 and 3 pins_4_5 Use GPIOs 4 and 5 (default) + baudrate Set the baudrate for the interface (default + "100000") Name: i2c4 @@ -1205,6 +1230,8 @@ Info: Enable the i2c4 bus Load: dtoverlay=i2c4, Params: pins_6_7 Use GPIOs 6 and 7 pins_8_9 Use GPIOs 8 and 9 (default) + baudrate Set the baudrate for the interface (default + "100000") Name: i2c5 @@ -1212,6 +1239,8 @@ Info: Enable the i2c5 bus Load: dtoverlay=i2c5, Params: pins_10_11 Use GPIOs 10 and 11 pins_12_13 Use GPIOs 12 and 13 (default) + baudrate Set the baudrate for the interface (default + "100000") Name: i2c6 @@ -1219,6 +1248,8 @@ Info: Enable the i2c6 bus Load: dtoverlay=i2c6, Params: pins_0_1 Use GPIOs 0 and 1 pins_22_23 Use GPIOs 22 and 23 (default) + baudrate Set the baudrate for the interface (default + "100000") Name: i2s-gpio28-31 @@ -1238,6 +1269,18 @@ Params: interrupt GPIO used for interrupt (default 4) touchscreen (in pixels) +Name: imx219 +Info: Sony IMX219 camera module. + Uses Unicam 1, which is the standard camera connector on most Pi + variants. +Load: dtoverlay=imx219,= +Params: i2c_pins_0_1 Use pins 0&1 for the I2C instead of 44&45. + Useful on Compute Modules. + + i2c_pins_28_29 Use pins 28&29 for the I2C instead of 44&45. + This is required for Pi B+, 2, 0, and 0W. 
+ + Name: iqaudio-codec Info: Configures the IQaudio Codec audio card Load: dtoverlay=iqaudio-codec @@ -1384,6 +1427,7 @@ Params: gpiopin Gpio pin connected to the INTA output of the addr I2C address of the MCP23017 (default: 0x20) mcp23008 Configure an MCP23008 instead. + noints Disable the interrupt GPIO line. Name: mcp23s17 @@ -2457,6 +2501,7 @@ Params: cma-256 CMA is 256MB (needs 1GB) cma-128 CMA is 128MB cma-96 CMA is 96MB cma-64 CMA is 64MB + audio Enable or disable audio over HDMI (default "on") Name: vga666 diff --git a/arch/arm/boot/dts/overlays/audremap-overlay.dts b/arch/arm/boot/dts/overlays/audremap-overlay.dts index 4a818206a8912380fa9871489063a115b4474d8c..d624bb3a3feaf1f9abf90a5e0eb44b7a03806945 100644 --- a/arch/arm/boot/dts/overlays/audremap-overlay.dts +++ b/arch/arm/boot/dts/overlays/audremap-overlay.dts @@ -7,13 +7,29 @@ fragment@0 { target = <&audio_pins>; frag0: __overlay__ { + }; + }; + + fragment@1 { + target = <&audio_pins>; + __overlay__ { brcm,pins = < 12 13 >; brcm,function = < 4 >; /* alt0 alt0 */ }; }; + fragment@2 { + target = <&audio_pins>; + __dormant__ { + brcm,pins = < 18 19 >; + brcm,function = < 2 >; /* alt5 alt5 */ + }; + }; + __overrides__ { swap_lr = <&frag0>, "swap_lr?"; enable_jack = <&frag0>, "enable_jack?"; + pins_12_13 = <0>,"+1-2"; + pins_18_19 = <0>,"-1+2"; }; }; diff --git a/arch/arm/boot/dts/overlays/hifiberry-dacplusadcpro-overlay.dts b/arch/arm/boot/dts/overlays/hifiberry-dacplusadcpro-overlay.dts new file mode 100644 index 0000000000000000000000000000000000000000..00e5d450a88b1d6813ba067242bab44b049a5ea6 --- /dev/null +++ b/arch/arm/boot/dts/overlays/hifiberry-dacplusadcpro-overlay.dts @@ -0,0 +1,64 @@ +// Definitions for HiFiBerry DAC+ADC PRO +/dts-v1/; +/plugin/; + +/ { + compatible = "brcm,bcm2708"; + + fragment@0 { + target-path = "/clocks"; + __overlay__ { + dacpro_osc: dacpro_osc { + compatible = "hifiberry,dacpro-clk"; + #clock-cells = <0>; + }; + }; + }; + + fragment@1 { + target = <&i2s>; + __overlay__ { + status = "okay"; + }; + }; + + fragment@2 { + target = <&i2c1>; + __overlay__ { + #address-cells = <1>; + #size-cells = <0>; + status = "okay"; + + hb_dac: pcm5122@4d { + #sound-dai-cells = <0>; + compatible = "ti,pcm5122"; + reg = <0x4d>; + clocks = <&dacpro_osc>; + status = "okay"; + }; + hb_adc: pcm186x@4a { + #sound-dai-cells = <0>; + compatible = "ti,pcm1863"; + reg = <0x4a>; + clocks = <&dacpro_osc>; + status = "okay"; + }; + }; + }; + + fragment@3 { + target = <&sound>; + hifiberry_dacplusadcpro: __overlay__ { + compatible = "hifiberry,hifiberry-dacplusadcpro"; + audio-codec = <&hb_dac &hb_adc>; + i2s-controller = <&i2s>; + status = "okay"; + }; + }; + + __overrides__ { + 24db_digital_gain = + <&hifiberry_dacplusadcpro>,"hifiberry-dacplusadcpro,24db_digital_gain?"; + slave = <&hifiberry_dacplusadcpro>,"hifiberry-dacplusadcpro,slave?"; + }; +}; diff --git a/arch/arm/boot/dts/overlays/hifiberry-dacplusdsp-overlay.dts b/arch/arm/boot/dts/overlays/hifiberry-dacplusdsp-overlay.dts new file mode 100644 index 0000000000000000000000000000000000000000..63432e8b983fe878821f3ecf6a5b87846c175c10 --- /dev/null +++ b/arch/arm/boot/dts/overlays/hifiberry-dacplusdsp-overlay.dts @@ -0,0 +1,34 @@ +// Definitions for hifiberry DAC+DSP soundcard overlay +/dts-v1/; +/plugin/; + +/ { + compatible = "brcm,bcm2835"; + + fragment@0 { + target = <&i2s>; + __overlay__ { + status = "okay"; + }; + }; + + fragment@1 { + target-path = "/"; + __overlay__ { + dacplusdsp-codec { + #sound-dai-cells = <0>; + compatible = "hifiberry,dacplusdsp"; + status = 
"okay"; + }; + }; + }; + + fragment@2 { + target = <&sound>; + __overlay__ { + compatible = "hifiberrydacplusdsp,hifiberrydacplusdsp-soundcard"; + i2s-controller = <&i2s>; + status = "okay"; + }; + }; +}; diff --git a/arch/arm/boot/dts/overlays/i2c3-overlay.dts b/arch/arm/boot/dts/overlays/i2c3-overlay.dts index 79d0da0ca6f2afdb5e6f6f327a113975edd19a6f..f651f3541d25b81af047510a50d24416a65df04b 100644 --- a/arch/arm/boot/dts/overlays/i2c3-overlay.dts +++ b/arch/arm/boot/dts/overlays/i2c3-overlay.dts @@ -6,10 +6,11 @@ fragment@0 { target = <&i2c3>; - __overlay__ { + frag0: __overlay__ { status = "okay"; pinctrl-names = "default"; pinctrl-0 = <&i2c3_pins>; + clock-frequency = <100000>; }; }; @@ -20,8 +21,16 @@ }; }; + fragment@2 { + target = <&i2c3_pins>; + __overlay__ { + brcm,pins = <4 5>; + }; + }; + __overrides__ { - pins_2_3 = <0>,"=1"; - pins_4_5 = <0>,"!1"; + pins_2_3 = <0>,"=1!2"; + pins_4_5 = <0>,"!1=2"; + baudrate = <&frag0>, "clock-frequency:0"; }; }; diff --git a/arch/arm/boot/dts/overlays/i2c4-overlay.dts b/arch/arm/boot/dts/overlays/i2c4-overlay.dts index ffad91763f076e35b02a877358e4ee32ea933da4..3cfd87913918ede72258ebc3ccf30f6abaabb29c 100644 --- a/arch/arm/boot/dts/overlays/i2c4-overlay.dts +++ b/arch/arm/boot/dts/overlays/i2c4-overlay.dts @@ -6,10 +6,11 @@ fragment@0 { target = <&i2c4>; - __overlay__ { + frag0: __overlay__ { status = "okay"; pinctrl-names = "default"; pinctrl-0 = <&i2c4_pins>; + clock-frequency = <100000>; }; }; @@ -20,8 +21,16 @@ }; }; + fragment@2 { + target = <&i2c4_pins>; + __overlay__ { + brcm,pins = <8 9>; + }; + }; + __overrides__ { - pins_6_7 = <0>,"=1"; - pins_8_9 = <0>,"!1"; + pins_6_7 = <0>,"=1!2"; + pins_8_9 = <0>,"!1=2"; + baudrate = <&frag0>, "clock-frequency:0"; }; }; diff --git a/arch/arm/boot/dts/overlays/i2c5-overlay.dts b/arch/arm/boot/dts/overlays/i2c5-overlay.dts index 551ce154f877a42f1b37a4ffc763b25b2e0c5485..c71f511d90cf27e76ffd85046bf5e6f283842b0f 100644 --- a/arch/arm/boot/dts/overlays/i2c5-overlay.dts +++ b/arch/arm/boot/dts/overlays/i2c5-overlay.dts @@ -6,10 +6,11 @@ fragment@0 { target = <&i2c5>; - __overlay__ { + frag0: __overlay__ { status = "okay"; pinctrl-names = "default"; pinctrl-0 = <&i2c5_pins>; + clock-frequency = <100000>; }; }; @@ -20,8 +21,16 @@ }; }; + fragment@2 { + target = <&i2c5_pins>; + __overlay__ { + brcm,pins = <12 13>; + }; + }; + __overrides__ { - pins_10_11 = <0>,"=1"; - pins_12_13 = <0>,"!1"; + pins_10_11 = <0>,"=1!2"; + pins_12_13 = <0>,"!1=2"; + baudrate = <&frag0>, "clock-frequency:0"; }; }; diff --git a/arch/arm/boot/dts/overlays/i2c6-overlay.dts b/arch/arm/boot/dts/overlays/i2c6-overlay.dts index 0a67f7da6ac00fccc1b5203a0694c7fbaf6aaa1e..17861ef00c5b4fdb599b07d7a5f064a5bc6880b6 100644 --- a/arch/arm/boot/dts/overlays/i2c6-overlay.dts +++ b/arch/arm/boot/dts/overlays/i2c6-overlay.dts @@ -6,10 +6,11 @@ fragment@0 { target = <&i2c6>; - __overlay__ { + frag0: __overlay__ { status = "okay"; pinctrl-names = "default"; pinctrl-0 = <&i2c6_pins>; + clock-frequency = <100000>; }; }; @@ -20,8 +21,16 @@ }; }; + fragment@2 { + target = <&i2c6_pins>; + __overlay__ { + brcm,pins = <22 23>; + }; + }; + __overrides__ { - pins_0_1 = <0>,"=1"; - pins_22_23 = <0>,"!1"; + pins_0_1 = <0>,"=1!2"; + pins_22_23 = <0>,"!1=2"; + baudrate = <&frag0>, "clock-frequency:0"; }; }; diff --git a/arch/arm/boot/dts/overlays/imx219-overlay.dts b/arch/arm/boot/dts/overlays/imx219-overlay.dts new file mode 100644 index 0000000000000000000000000000000000000000..2a1500d07b680027c3c9db01fa798d71a630c103 --- /dev/null +++ 
b/arch/arm/boot/dts/overlays/imx219-overlay.dts @@ -0,0 +1,129 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Definitions for IMX219 camera module on VC I2C bus +/dts-v1/; +/plugin/; + +#include + +/{ + compatible = "brcm,bcm2835"; + + fragment@0 { + target = <&i2c_vc>; + __overlay__ { + #address-cells = <1>; + #size-cells = <0>; + status = "okay"; + + imx219: imx219@10 { + compatible = "sony,imx219"; + reg = <0x10>; + status = "okay"; + + clocks = <&imx219_clk>; + clock-names = "xclk"; + + VANA-supply = <&imx219_vana>; /* 2.8v */ + VDIG-supply = <&imx219_vdig>; /* 1.8v */ + VDDL-supply = <&imx219_vddl>; /* 1.2v */ + + imx219_clk: camera-clk { + compatible = "fixed-clock"; + #clock-cells = <0>; + clock-frequency = <24000000>; + }; + + port { + imx219_0: endpoint { + remote-endpoint = <&csi1_ep>; + clock-lanes = <0>; + data-lanes = <1 2>; + clock-noncontinuous; + link-frequencies = + /bits/ 64 <297000000>; + }; + }; + }; + }; + }; + + fragment@1 { + target = <&csi1>; + __overlay__ { + status = "okay"; + + port { + csi1_ep: endpoint { + remote-endpoint = <&imx219_0>; + }; + }; + }; + }; + + fragment@2 { + target = <&i2c0_pins>; + __dormant__ { + brcm,pins = <28 29>; + brcm,function = <4>; /* alt0 */ + }; + }; + fragment@3 { + target = <&i2c0_pins>; + __overlay__ { + brcm,pins = <44 45>; + brcm,function = <5>; /* alt1 */ + }; + }; + fragment@4 { + target = <&i2c0_pins>; + __dormant__ { + brcm,pins = <0 1>; + brcm,function = <4>; /* alt0 */ + }; + }; + fragment@5 { + target = <&i2c_vc>; + __overlay__ { + status = "okay"; + }; + }; + + fragment@6 { + target-path="/"; + __overlay__ { + imx219_vana: fixedregulator@0 { + compatible = "regulator-fixed"; + regulator-name = "imx219_vana"; + regulator-min-microvolt = <2800000>; + regulator-max-microvolt = <2800000>; + gpio = <&gpio 41 GPIO_ACTIVE_HIGH>; + enable-active-high; + }; + imx219_vdig: fixedregulator@1 { + compatible = "regulator-fixed"; + regulator-name = "imx219_vdig"; + regulator-min-microvolt = <1800000>; + regulator-max-microvolt = <1800000>; + }; + imx219_vddl: fixedregulator@2 { + compatible = "regulator-fixed"; + regulator-name = "imx219_vddl"; + regulator-min-microvolt = <1200000>; + regulator-max-microvolt = <1200000>; + }; + }; + }; + + fragment@7 { + target-path="/__overrides__"; + __overlay__ { + cam0-pwdn-ctrl = <&imx219_vana>,"gpio:0"; + cam0-pwdn = <&imx219_vana>,"gpio:4"; + }; + }; + + __overrides__ { + i2c_pins_0_1 = <0>,"-2-3+4"; + i2c_pins_28_29 = <0>,"+2-3-4"; + }; +}; diff --git a/arch/arm/boot/dts/overlays/mcp23017-overlay.dts b/arch/arm/boot/dts/overlays/mcp23017-overlay.dts index f6421bd476f9c0502c3368b3b4a1f8e999d80c36..16af971c3bdb79922cb4de3a5ff7b0d2676cd2b3 100644 --- a/arch/arm/boot/dts/overlays/mcp23017-overlay.dts +++ b/arch/arm/boot/dts/overlays/mcp23017-overlay.dts @@ -16,7 +16,7 @@ fragment@1 { target = <&gpio>; __overlay__ { - mcp23017_pins: mcp23017_pins { + mcp23017_pins: mcp23017_pins@20 { brcm,pins = <4>; brcm,function = <0>; }; @@ -34,11 +34,6 @@ reg = <0x20>; gpio-controller; #gpio-cells = <2>; - #interrupt-cells=<2>; - interrupt-parent = <&gpio>; - interrupts = <4 2>; - interrupt-controller; - microchip,irq-mirror; status = "okay"; }; @@ -52,11 +47,25 @@ }; }; + fragment@4 { + target = <&i2c1>; + __overlay__ { + mcp23017_irq: mcp@20 { + #interrupt-cells=<2>; + interrupt-parent = <&gpio>; + interrupts = <4 2>; + interrupt-controller; + microchip,irq-mirror; + }; + }; + }; + __overrides__ { gpiopin = <&mcp23017_pins>,"brcm,pins:0", - <&mcp23017>,"interrupts:0"; - addr = <&mcp23017>,"reg:0"; + 
<&mcp23017_irq>,"interrupts:0"; + addr = <&mcp23017>,"reg:0", <&mcp23017_pins>,"reg:0"; mcp23008 = <0>,"=3"; + noints = <0>,"!1!4"; }; }; diff --git a/arch/arm/boot/dts/overlays/sc16is752-i2c-overlay.dts b/arch/arm/boot/dts/overlays/sc16is752-i2c-overlay.dts index 355eaac656c8c768c03719d1d705b2898c002270..57ae35c3844259fa105fe5ae604e292523592788 100644 --- a/arch/arm/boot/dts/overlays/sc16is752-i2c-overlay.dts +++ b/arch/arm/boot/dts/overlays/sc16is752-i2c-overlay.dts @@ -35,6 +35,6 @@ __overrides__ { int_pin = <&sc16is752>,"interrupts:0"; addr = <&sc16is752>,"reg:0",<&sc16is752_clk>,"name"; - xtal = <&sc16is752>,"clock-frequency:0"; + xtal = <&sc16is752_clk>,"clock-frequency:0"; }; }; diff --git a/arch/arm/boot/dts/overlays/upstream-overlay.dts b/arch/arm/boot/dts/overlays/upstream-overlay.dts index 48e0ef61952c6d9018c03e6cfe21c7c77fe3a3da..6112640837fc0f6f4a277aaa03f54c2785ee9609 100644 --- a/arch/arm/boot/dts/overlays/upstream-overlay.dts +++ b/arch/arm/boot/dts/overlays/upstream-overlay.dts @@ -110,6 +110,12 @@ }; }; fragment@17 { + target = <&hdmi>; + __dormant__ { + dmas; + }; + }; + fragment@18 { target = <&usb>; #address-cells = <1>; #size-cells = <1>; diff --git a/arch/arm/boot/dts/overlays/vc4-kms-v3d-overlay.dts b/arch/arm/boot/dts/overlays/vc4-kms-v3d-overlay.dts index 19e1d2548e7bbc6ca989de7520fb7d1f91560dcf..c5f687e8bcb9a881ee75015692e2cfe8ff8dbc61 100644 --- a/arch/arm/boot/dts/overlays/vc4-kms-v3d-overlay.dts +++ b/arch/arm/boot/dts/overlays/vc4-kms-v3d-overlay.dts @@ -134,11 +134,19 @@ }; }; + fragment@17 { + target = <&hdmi>; + __dormant__ { + dmas; + }; + }; + __overrides__ { cma-256 = <0>,"+0-1-2-3-4"; cma-192 = <0>,"-0+1-2-3-4"; cma-128 = <0>,"-0-1+2-3-4"; cma-96 = <0>,"-0-1-2+3-4"; cma-64 = <0>,"-0-1-2-3+4"; + audio = <0>,"!17"; }; }; diff --git a/arch/arm/boot/dts/rk3288-veyron-mickey.dts b/arch/arm/boot/dts/rk3288-veyron-mickey.dts index 1e0158acf895d99f8bde234891011e019e172a92..a593d0a998fc8bcb7cbf75960af21b4a2bac44ca 100644 --- a/arch/arm/boot/dts/rk3288-veyron-mickey.dts +++ b/arch/arm/boot/dts/rk3288-veyron-mickey.dts @@ -124,10 +124,6 @@ }; }; -&emmc { - /delete-property/mmc-hs200-1_8v; -}; - &i2c2 { status = "disabled"; }; diff --git a/arch/arm/boot/dts/rk3288-veyron-minnie.dts b/arch/arm/boot/dts/rk3288-veyron-minnie.dts index f95d0c5fcf71263f044cb84a7efd6599878895be..6e8946052c78b12d688d4ec4a7621576f0cb4ab7 100644 --- a/arch/arm/boot/dts/rk3288-veyron-minnie.dts +++ b/arch/arm/boot/dts/rk3288-veyron-minnie.dts @@ -90,10 +90,6 @@ pwm-off-delay-ms = <200>; }; -&emmc { - /delete-property/mmc-hs200-1_8v; -}; - &gpio_keys { pinctrl-0 = <&pwr_key_l &ap_lid_int_l &volum_down_l &volum_up_l>; diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi index c706adf4aed2f893e12805b899ac43a7ce91d0f6..440d6783faca55ad6073bfd2fdaae5fb3a2c9957 100644 --- a/arch/arm/boot/dts/rk3288.dtsi +++ b/arch/arm/boot/dts/rk3288.dtsi @@ -227,6 +227,7 @@ , ; clock-frequency = <24000000>; + arm,no-tick-in-suspend; }; timer: timer@ff810000 { diff --git a/arch/arm/configs/bcm2709_defconfig b/arch/arm/configs/bcm2709_defconfig index 596e39034bed6042e9e5d0130f58fb61e7d47c52..26addac66b6415bc3130166decf2b36a53a6aa06 100644 --- a/arch/arm/configs/bcm2709_defconfig +++ b/arch/arm/configs/bcm2709_defconfig @@ -16,11 +16,11 @@ CONFIG_IKCONFIG=m CONFIG_IKCONFIG_PROC=y CONFIG_MEMCG=y CONFIG_BLK_CGROUP=y +CONFIG_CGROUP_PIDS=y CONFIG_CGROUP_FREEZER=y CONFIG_CPUSETS=y CONFIG_CGROUP_DEVICE=y CONFIG_CGROUP_CPUACCT=y -CONFIG_CGROUP_PIDS=y CONFIG_NAMESPACES=y CONFIG_USER_NS=y 
CONFIG_SCHED_AUTOGROUP=y @@ -33,13 +33,13 @@ CONFIG_ARCH_BCM2835=y # CONFIG_CACHE_L2X0 is not set CONFIG_SMP=y CONFIG_VMSPLIT_2G=y -CONFIG_PREEMPT_RT_FULL=y # CONFIG_CPU_SW_DOMAIN_PAN is not set CONFIG_UACCESS_WITH_MEMCPY=y CONFIG_SECCOMP=y # CONFIG_ATAGS is not set CONFIG_ZBOOT_ROM_TEXT=0x0 CONFIG_ZBOOT_ROM_BSS=0x0 +CONFIG_PREEMPT_RT_FULL=y CONFIG_CMDLINE="console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait" CONFIG_CPU_FREQ=y CONFIG_CPU_FREQ_STAT=y @@ -687,6 +687,7 @@ CONFIG_PPS_CLIENT_GPIO=m CONFIG_PINCTRL_MCP23S08=m CONFIG_GPIO_BCM_VIRT=y CONFIG_GPIO_MOCKUP=m +CONFIG_GPIO_PCA953X=m CONFIG_GPIO_PCF857X=m CONFIG_GPIO_ARIZONA=m CONFIG_GPIO_STMPE=y @@ -707,6 +708,7 @@ CONFIG_W1_SLAVE_DS2438=m CONFIG_W1_SLAVE_DS2780=m CONFIG_W1_SLAVE_DS2781=m CONFIG_W1_SLAVE_DS28E04=m +CONFIG_W1_SLAVE_DS28E17=m CONFIG_POWER_RESET=y CONFIG_POWER_RESET_GPIO=y CONFIG_BATTERY_DS2760=m @@ -914,6 +916,7 @@ CONFIG_VIDEO_TVP5150=m CONFIG_VIDEO_TW2804=m CONFIG_VIDEO_TW9903=m CONFIG_VIDEO_TW9906=m +CONFIG_VIDEO_IMX219=m CONFIG_VIDEO_OV5647=m CONFIG_VIDEO_OV7640=m CONFIG_VIDEO_MT9V011=m @@ -924,8 +927,12 @@ CONFIG_DRM_PANEL_SIMPLE=m CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN=m CONFIG_DRM_VC4=m CONFIG_DRM_TINYDRM=m +CONFIG_TINYDRM_ILI9225=m +CONFIG_TINYDRM_ILI9341=m CONFIG_TINYDRM_MI0283QT=m CONFIG_TINYDRM_REPAPER=m +CONFIG_TINYDRM_ST7586=m +CONFIG_TINYDRM_ST7735R=m CONFIG_FB=y CONFIG_FB_BCM2708=y CONFIG_FB_UDL=m @@ -936,6 +943,7 @@ CONFIG_FB_RPISENSE=m CONFIG_BACKLIGHT_RPI=m CONFIG_BACKLIGHT_GPIO=m CONFIG_FRAMEBUFFER_CONSOLE=y +CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y CONFIG_LOGO=y # CONFIG_LOGO_LINUX_MONO is not set # CONFIG_LOGO_LINUX_VGA16 is not set @@ -962,6 +970,7 @@ CONFIG_SND_BCM2708_SOC_GOOGLEVOICEHAT_SOUNDCARD=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUSADC=m +CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUSADCPRO=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_AMP=m CONFIG_SND_BCM2708_SOC_RPI_CIRRUS=m @@ -1181,7 +1190,9 @@ CONFIG_MMC_SDHCI=y CONFIG_MMC_SDHCI_PLTFM=y CONFIG_MMC_SPI=m CONFIG_LEDS_CLASS=y +CONFIG_LEDS_PCA9532=m CONFIG_LEDS_GPIO=y +CONFIG_LEDS_PCA955X=m CONFIG_LEDS_PCA963X=m CONFIG_LEDS_IS31FL32XX=m CONFIG_LEDS_TRIGGER_TIMER=y diff --git a/arch/arm/configs/bcm2711_defconfig b/arch/arm/configs/bcm2711_defconfig index 9866d606135ba16adb227d30fe4d86dd0aceda11..421cd4eec0a51ea91071734680d2dab1997799c2 100644 --- a/arch/arm/configs/bcm2711_defconfig +++ b/arch/arm/configs/bcm2711_defconfig @@ -41,10 +41,10 @@ CONFIG_SMP=y CONFIG_HIGHMEM=y CONFIG_UACCESS_WITH_MEMCPY=y CONFIG_SECCOMP=y -CONFIG_PREEMPT_RT_FULL=y # CONFIG_ATAGS is not set CONFIG_ZBOOT_ROM_TEXT=0x0 CONFIG_ZBOOT_ROM_BSS=0x0 +CONFIG_PREEMPT_RT_FULL=y CONFIG_CMDLINE="console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait" CONFIG_CPU_FREQ=y CONFIG_CPU_FREQ_STAT=y @@ -651,7 +651,7 @@ CONFIG_BRCM_CHAR_DRIVERS=y CONFIG_BCM_VCIO=y CONFIG_BCM_VC_SM=y CONFIG_BCM2835_DEVGPIOMEM=y -CONFIG_ARGON_MEM=m +CONFIG_RPIVID_MEM=m # CONFIG_LEGACY_PTYS is not set CONFIG_SERIAL_8250=y # CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set @@ -695,6 +695,7 @@ CONFIG_PPS_CLIENT_GPIO=m CONFIG_PINCTRL_MCP23S08=m CONFIG_GPIO_BCM_VIRT=y CONFIG_GPIO_MOCKUP=m +CONFIG_GPIO_PCA953X=m CONFIG_GPIO_PCF857X=m CONFIG_GPIO_ARIZONA=m CONFIG_GPIO_STMPE=y @@ -715,6 +716,7 @@ CONFIG_W1_SLAVE_DS2438=m CONFIG_W1_SLAVE_DS2780=m CONFIG_W1_SLAVE_DS2781=m CONFIG_W1_SLAVE_DS28E04=m +CONFIG_W1_SLAVE_DS28E17=m CONFIG_POWER_RESET=y 
CONFIG_POWER_RESET_GPIO=y CONFIG_BATTERY_DS2760=m @@ -924,6 +926,7 @@ CONFIG_VIDEO_TVP5150=m CONFIG_VIDEO_TW2804=m CONFIG_VIDEO_TW9903=m CONFIG_VIDEO_TW9906=m +CONFIG_VIDEO_IMX219=m CONFIG_VIDEO_OV5647=m CONFIG_VIDEO_OV7640=m CONFIG_VIDEO_MT9V011=m @@ -935,8 +938,12 @@ CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN=m CONFIG_DRM_V3D=m CONFIG_DRM_VC4=m CONFIG_DRM_TINYDRM=m +CONFIG_TINYDRM_ILI9225=m +CONFIG_TINYDRM_ILI9341=m CONFIG_TINYDRM_MI0283QT=m CONFIG_TINYDRM_REPAPER=m +CONFIG_TINYDRM_ST7586=m +CONFIG_TINYDRM_ST7735R=m CONFIG_FB=y CONFIG_FB_BCM2708=y CONFIG_FB_UDL=m @@ -947,6 +954,7 @@ CONFIG_FB_RPISENSE=m CONFIG_BACKLIGHT_RPI=m CONFIG_BACKLIGHT_GPIO=m CONFIG_FRAMEBUFFER_CONSOLE=y +CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y CONFIG_LOGO=y # CONFIG_LOGO_LINUX_MONO is not set # CONFIG_LOGO_LINUX_VGA16 is not set @@ -973,6 +981,7 @@ CONFIG_SND_BCM2708_SOC_GOOGLEVOICEHAT_SOUNDCARD=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUSADC=m +CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUSADCPRO=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_AMP=m CONFIG_SND_BCM2708_SOC_RPI_CIRRUS=m @@ -1213,7 +1222,9 @@ CONFIG_MMC_SDHCI_PLTFM=y CONFIG_MMC_SDHCI_IPROC=y CONFIG_MMC_SPI=m CONFIG_LEDS_CLASS=y +CONFIG_LEDS_PCA9532=m CONFIG_LEDS_GPIO=y +CONFIG_LEDS_PCA955X=m CONFIG_LEDS_PCA963X=m CONFIG_LEDS_IS31FL32XX=m CONFIG_LEDS_TRIGGER_TIMER=y diff --git a/arch/arm/configs/bcmrpi_defconfig b/arch/arm/configs/bcmrpi_defconfig index 070c6b0488b60053af926a37cb69d13371660c08..36085375107a9e6e3244471328da3b7229200d7c 100644 --- a/arch/arm/configs/bcmrpi_defconfig +++ b/arch/arm/configs/bcmrpi_defconfig @@ -29,7 +29,6 @@ CONFIG_ARCH_MULTI_V6=y # CONFIG_ARCH_MULTI_V7 is not set CONFIG_ARCH_BCM=y CONFIG_ARCH_BCM2835=y -CONFIG_PREEMPT_RT_FULL=y # CONFIG_CACHE_L2X0 is not set # CONFIG_CPU_SW_DOMAIN_PAN is not set CONFIG_UACCESS_WITH_MEMCPY=y @@ -37,6 +36,7 @@ CONFIG_SECCOMP=y # CONFIG_ATAGS is not set CONFIG_ZBOOT_ROM_TEXT=0x0 CONFIG_ZBOOT_ROM_BSS=0x0 +CONFIG_PREEMPT_RT_FULL=y CONFIG_CMDLINE="console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait" CONFIG_CPU_FREQ=y CONFIG_CPU_FREQ_STAT=y @@ -679,6 +679,7 @@ CONFIG_PPS_CLIENT_LDISC=m CONFIG_PPS_CLIENT_GPIO=m CONFIG_PINCTRL_MCP23S08=m CONFIG_GPIO_MOCKUP=m +CONFIG_GPIO_PCA953X=m CONFIG_GPIO_PCF857X=m CONFIG_GPIO_ARIZONA=m CONFIG_GPIO_STMPE=y @@ -699,6 +700,7 @@ CONFIG_W1_SLAVE_DS2438=m CONFIG_W1_SLAVE_DS2780=m CONFIG_W1_SLAVE_DS2781=m CONFIG_W1_SLAVE_DS28E04=m +CONFIG_W1_SLAVE_DS28E17=m CONFIG_POWER_RESET=y CONFIG_POWER_RESET_GPIO=y CONFIG_BATTERY_DS2760=m @@ -906,6 +908,7 @@ CONFIG_VIDEO_TVP5150=m CONFIG_VIDEO_TW2804=m CONFIG_VIDEO_TW9903=m CONFIG_VIDEO_TW9906=m +CONFIG_VIDEO_IMX219=m CONFIG_VIDEO_OV5647=m CONFIG_VIDEO_OV7640=m CONFIG_VIDEO_MT9V011=m @@ -916,8 +919,12 @@ CONFIG_DRM_PANEL_SIMPLE=m CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN=m CONFIG_DRM_VC4=m CONFIG_DRM_TINYDRM=m +CONFIG_TINYDRM_ILI9225=m +CONFIG_TINYDRM_ILI9341=m CONFIG_TINYDRM_MI0283QT=m CONFIG_TINYDRM_REPAPER=m +CONFIG_TINYDRM_ST7586=m +CONFIG_TINYDRM_ST7735R=m CONFIG_FB=y CONFIG_FB_BCM2708=y CONFIG_FB_UDL=m @@ -928,6 +935,7 @@ CONFIG_FB_RPISENSE=m CONFIG_BACKLIGHT_RPI=m CONFIG_BACKLIGHT_GPIO=m CONFIG_FRAMEBUFFER_CONSOLE=y +CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y CONFIG_LOGO=y # CONFIG_LOGO_LINUX_MONO is not set # CONFIG_LOGO_LINUX_VGA16 is not set @@ -954,6 +962,7 @@ CONFIG_SND_BCM2708_SOC_GOOGLEVOICEHAT_SOUNDCARD=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC=m 
CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUSADC=m +CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUSADCPRO=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI=m CONFIG_SND_BCM2708_SOC_HIFIBERRY_AMP=m CONFIG_SND_BCM2708_SOC_RPI_CIRRUS=m @@ -1191,7 +1200,9 @@ CONFIG_MMC_SDHCI=y CONFIG_MMC_SDHCI_PLTFM=y CONFIG_MMC_SPI=m CONFIG_LEDS_CLASS=y +CONFIG_LEDS_PCA9532=m CONFIG_LEDS_GPIO=y +CONFIG_LEDS_PCA955X=m CONFIG_LEDS_PCA963X=m CONFIG_LEDS_IS31FL32XX=m CONFIG_LEDS_TRIGGER_TIMER=y diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c index 7bbaa293a38ce959367811f7fa59c4b27db7cef8..ece345886790e2e7fa6b1b2cb3957b2d4e3336c8 100644 --- a/arch/arm/kernel/setup.c +++ b/arch/arm/kernel/setup.c @@ -1238,6 +1238,8 @@ static int c_show(struct seq_file *m, void *v) { int i, j; u32 cpuid; + struct device_node *np; + const char *model; for_each_online_cpu(i) { /* @@ -1297,6 +1299,14 @@ static int c_show(struct seq_file *m, void *v) seq_printf(m, "Revision\t: %04x\n", system_rev); seq_printf(m, "Serial\t\t: %s\n", system_serial); + np = of_find_node_by_path("/"); + if (np) { + if (!of_property_read_string(np, "model", + &model)) + seq_printf(m, "Model\t\t: %s\n", model); + of_node_put(np); + } + return 0; } diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c index fd6cde23bb5d0dd4f5080dbe1845ead9e446ac80..871fa50a09f19ef4b291d7609ff911f297494f91 100644 --- a/arch/arm/kvm/coproc.c +++ b/arch/arm/kvm/coproc.c @@ -658,13 +658,22 @@ int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run) } static void reset_coproc_regs(struct kvm_vcpu *vcpu, - const struct coproc_reg *table, size_t num) + const struct coproc_reg *table, size_t num, + unsigned long *bmap) { unsigned long i; for (i = 0; i < num; i++) - if (table[i].reset) + if (table[i].reset) { + int reg = table[i].reg; + table[i].reset(vcpu, &table[i]); + if (reg > 0 && reg < NR_CP15_REGS) { + set_bit(reg, bmap); + if (table[i].is_64bit) + set_bit(reg + 1, bmap); + } + } } static struct coproc_params decode_32bit_hsr(struct kvm_vcpu *vcpu) @@ -1439,17 +1448,15 @@ void kvm_reset_coprocs(struct kvm_vcpu *vcpu) { size_t num; const struct coproc_reg *table; - - /* Catch someone adding a register without putting in reset entry. */ - memset(vcpu->arch.ctxt.cp15, 0x42, sizeof(vcpu->arch.ctxt.cp15)); + DECLARE_BITMAP(bmap, NR_CP15_REGS) = { 0, }; /* Generic chip reset first (so target could override). 
*/ - reset_coproc_regs(vcpu, cp15_regs, ARRAY_SIZE(cp15_regs)); + reset_coproc_regs(vcpu, cp15_regs, ARRAY_SIZE(cp15_regs), bmap); table = get_target_table(vcpu->arch.target, &num); - reset_coproc_regs(vcpu, table, num); + reset_coproc_regs(vcpu, table, num, bmap); for (num = 1; num < NR_CP15_REGS; num++) - WARN(vcpu_cp15(vcpu, num) == 0x42424242, + WARN(!test_bit(num, bmap), "Didn't reset vcpu_cp15(vcpu, %zi)", num); } diff --git a/arch/arm/mach-davinci/sleep.S b/arch/arm/mach-davinci/sleep.S index cd350dee4df376a3452299df86ba53815b50649c..efcd400b2abb3a876d9b36a7918ff6d0d3bf93cd 100644 --- a/arch/arm/mach-davinci/sleep.S +++ b/arch/arm/mach-davinci/sleep.S @@ -37,6 +37,7 @@ #define DEEPSLEEP_SLEEPENABLE_BIT BIT(31) .text + .arch armv5te /* * Move DaVinci into deep sleep state * diff --git a/arch/arm/mach-omap2/prm3xxx.c b/arch/arm/mach-omap2/prm3xxx.c index 05858f966f7d9443f776fe2ee924c05e3e7b99a9..dfa65fc2c82bc14dbf69de2ccd27b9de9e83f037 100644 --- a/arch/arm/mach-omap2/prm3xxx.c +++ b/arch/arm/mach-omap2/prm3xxx.c @@ -433,7 +433,7 @@ static void omap3_prm_reconfigure_io_chain(void) * registers, and omap3xxx_prm_reconfigure_io_chain() must be called. * No return value. */ -static void __init omap3xxx_prm_enable_io_wakeup(void) +static void omap3xxx_prm_enable_io_wakeup(void) { if (prm_features & PRM_HAS_IO_WAKEUP) omap2_prm_set_mod_reg_bits(OMAP3430_EN_IO_MASK, WKUP_MOD, diff --git a/arch/arm/mach-rpc/dma.c b/arch/arm/mach-rpc/dma.c index fb48f3141fb4d7cd2403aaece876ae338a308bf2..c4c96661eb89ae2d60ba3eecf4c84e17a401243f 100644 --- a/arch/arm/mach-rpc/dma.c +++ b/arch/arm/mach-rpc/dma.c @@ -131,7 +131,7 @@ static irqreturn_t iomd_dma_handle(int irq, void *dev_id) } while (1); idma->state = ~DMA_ST_AB; - disable_irq(irq); + disable_irq_nosync(irq); return IRQ_HANDLED; } @@ -174,6 +174,9 @@ static void iomd_enable_dma(unsigned int chan, dma_t *dma) DMA_FROM_DEVICE : DMA_TO_DEVICE); } + idma->dma_addr = idma->dma.sg->dma_address; + idma->dma_len = idma->dma.sg->length; + iomd_writeb(DMA_CR_C, dma_base + CR); idma->state = DMA_ST_AB; } diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 4a4db69c5e9a0b31efdb5952e4339e9db4f6e9e6..8f1d890d44cde1798f392572f6ccd9d0395237f0 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -252,7 +252,8 @@ config GENERIC_CALIBRATE_DELAY def_bool y config ZONE_DMA32 - def_bool y + bool "Support DMA32 zone" if EXPERT + default y config HAVE_GENERIC_GUP def_bool y diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi index 212e6634c9baa5173efd128eb9e37c28a6714468..7398ae8856dc0ecf425d8eaac7f753c1855800c5 100644 --- a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi @@ -330,7 +330,8 @@ regulator-max-microvolt = <1320000>; enable-gpios = <&pmic 6 GPIO_ACTIVE_HIGH>; regulator-ramp-delay = <80>; - regulator-enable-ramp-delay = <1000>; + regulator-enable-ramp-delay = <2000>; + regulator-settling-time-us = <160>; }; }; }; diff --git a/arch/arm64/boot/dts/nvidia/tegra210.dtsi b/arch/arm64/boot/dts/nvidia/tegra210.dtsi index 3be920efee823a2913f7695bbd09a4ca10f03eb8..6597c0894137a471546ab9a6b02d44ec04d32211 100644 --- a/arch/arm64/boot/dts/nvidia/tegra210.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra210.dtsi @@ -1119,7 +1119,7 @@ compatible = "nvidia,tegra210-agic"; #interrupt-cells = <3>; interrupt-controller; - reg = <0x702f9000 0x2000>, + reg = <0x702f9000 0x1000>, <0x702fa000 0x2000>; interrupts = ; clocks = <&tegra_car TEGRA210_CLK_APE>; diff --git 
a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi index df7e62d9a670842d4f9a89ec114e7beddab921b2..cea44a7c7cf998566f2c9f99d3b400d1f11acc5a 100644 --- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi +++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi @@ -1643,11 +1643,11 @@ reg = <0x0 0xff914000 0x0 0x100>, <0x0 0xff915000 0x0 0x100>; interrupts = ; interrupt-names = "isp0_mmu"; - clocks = <&cru ACLK_ISP0_NOC>, <&cru HCLK_ISP0_NOC>; + clocks = <&cru ACLK_ISP0_WRAPPER>, <&cru HCLK_ISP0_WRAPPER>; clock-names = "aclk", "iface"; #iommu-cells = <0>; + power-domains = <&power RK3399_PD_ISP0>; rockchip,disable-mmu-reset; - status = "disabled"; }; isp1_mmu: iommu@ff924000 { @@ -1655,11 +1655,11 @@ reg = <0x0 0xff924000 0x0 0x100>, <0x0 0xff925000 0x0 0x100>; interrupts = ; interrupt-names = "isp1_mmu"; - clocks = <&cru ACLK_ISP1_NOC>, <&cru HCLK_ISP1_NOC>; + clocks = <&cru ACLK_ISP1_WRAPPER>, <&cru HCLK_ISP1_WRAPPER>; clock-names = "aclk", "iface"; #iommu-cells = <0>; + power-domains = <&power RK3399_PD_ISP1>; rockchip,disable-mmu-reset; - status = "disabled"; }; hdmi_sound: hdmi-sound { diff --git a/arch/arm64/configs/bcm2711_defconfig b/arch/arm64/configs/bcm2711_defconfig index 4bb420d57eb8a4a7598b01236f3f97f23552e6c4..5652b9333d6d5ac390c1b1bafa44d748bd451343 100644 --- a/arch/arm64/configs/bcm2711_defconfig +++ b/arch/arm64/configs/bcm2711_defconfig @@ -33,12 +33,12 @@ CONFIG_PCIE_BRCMSTB=y # CONFIG_CAVIUM_ERRATUM_22375 is not set # CONFIG_CAVIUM_ERRATUM_23154 is not set # CONFIG_CAVIUM_ERRATUM_27456 is not set -CONFIG_PREEMPT_RT_FULL=y CONFIG_SECCOMP=y CONFIG_ARMV8_DEPRECATED=y CONFIG_SWP_EMULATION=y CONFIG_CP15_BARRIER_EMULATION=y CONFIG_SETEND_EMULATION=y +CONFIG_PREEMPT_RT_FULL=y CONFIG_CMDLINE="console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait" CONFIG_COMPAT=y # CONFIG_SUSPEND is not set @@ -625,6 +625,7 @@ CONFIG_PPS_CLIENT_GPIO=m CONFIG_PINCTRL_MCP23S08=m CONFIG_GPIO_BCM_VIRT=y CONFIG_GPIO_MOCKUP=m +CONFIG_GPIO_PCA953X=m CONFIG_GPIO_PCF857X=m CONFIG_GPIO_ARIZONA=m CONFIG_GPIO_STMPE=y @@ -645,6 +646,7 @@ CONFIG_W1_SLAVE_DS2438=m CONFIG_W1_SLAVE_DS2780=m CONFIG_W1_SLAVE_DS2781=m CONFIG_W1_SLAVE_DS28E04=m +CONFIG_W1_SLAVE_DS28E17=m CONFIG_POWER_RESET_GPIO=y CONFIG_BATTERY_DS2760=m CONFIG_BATTERY_GAUGE_LTC2941=m @@ -679,6 +681,7 @@ CONFIG_MEDIA_ANALOG_TV_SUPPORT=y CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y CONFIG_MEDIA_RADIO_SUPPORT=y CONFIG_MEDIA_CONTROLLER=y +CONFIG_VIDEO_V4L2_SUBDEV_API=y CONFIG_MEDIA_USB_SUPPORT=y CONFIG_USB_VIDEO_CLASS=m CONFIG_USB_M5602=m @@ -784,6 +787,7 @@ CONFIG_VIDEO_TVP5150=m CONFIG_VIDEO_TW2804=m CONFIG_VIDEO_TW9903=m CONFIG_VIDEO_TW9906=m +CONFIG_VIDEO_IMX219=m CONFIG_VIDEO_OV7640=m CONFIG_VIDEO_MT9V011=m CONFIG_DRM=m @@ -791,6 +795,7 @@ CONFIG_DRM_LOAD_EDID_FIRMWARE=y CONFIG_DRM_UDL=m CONFIG_DRM_PANEL_SIMPLE=m CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN=m +CONFIG_DRM_V3D=m CONFIG_DRM_VC4=m CONFIG_DRM_TINYDRM=m CONFIG_TINYDRM_MI0283QT=m @@ -855,6 +860,7 @@ CONFIG_SND_PISOUND=m CONFIG_SND_SOC_ADAU1701=m CONFIG_SND_SOC_ADAU7002=m CONFIG_SND_SOC_AK4554=m +CONFIG_SND_SOC_CS4265=m CONFIG_SND_SOC_CS4271_I2C=m CONFIG_SND_SOC_SPDIF=m CONFIG_SND_SOC_WM8804_I2C=m @@ -1040,7 +1046,9 @@ CONFIG_MMC_SDHCI_PLTFM=y CONFIG_MMC_SDHCI_IPROC=y CONFIG_MMC_SPI=m CONFIG_LEDS_CLASS=y +CONFIG_LEDS_PCA9532=m CONFIG_LEDS_GPIO=y +CONFIG_LEDS_PCA955X=m CONFIG_LEDS_TRIGGER_TIMER=y CONFIG_LEDS_TRIGGER_ONESHOT=y CONFIG_LEDS_TRIGGER_HEARTBEAT=y @@ -1135,6 +1143,7 @@ CONFIG_VIDEO_BCM2835=m CONFIG_MAILBOX=y CONFIG_BCM2835_MBOX=y # 
CONFIG_IOMMU_SUPPORT is not set +CONFIG_BCM2835_POWER=y CONFIG_RASPBERRYPI_POWER=y CONFIG_EXTCON=m CONFIG_EXTCON_ARIZONA=m diff --git a/arch/arm64/configs/bcmrpi3_defconfig b/arch/arm64/configs/bcmrpi3_defconfig index 6ff6e49482eeae6368cd759fa7a874f1b6b271ff..0ca2b57ec87f8aa0f030da64c38975975649a8ff 100644 --- a/arch/arm64/configs/bcmrpi3_defconfig +++ b/arch/arm64/configs/bcmrpi3_defconfig @@ -32,7 +32,6 @@ CONFIG_ARCH_BCM2835=y # CONFIG_CAVIUM_ERRATUM_27456 is not set CONFIG_SCHED_MC=y CONFIG_NR_CPUS=4 -CONFIG_PREEMPT_RT_FULL=y CONFIG_HZ_1000=y CONFIG_SECCOMP=y CONFIG_ARMV8_DEPRECATED=y @@ -40,6 +39,7 @@ CONFIG_SWP_EMULATION=y CONFIG_CP15_BARRIER_EMULATION=y CONFIG_SETEND_EMULATION=y CONFIG_RANDOMIZE_BASE=y +CONFIG_PREEMPT_RT_FULL=y CONFIG_CMDLINE="console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait" CONFIG_COMPAT=y # CONFIG_SUSPEND is not set @@ -611,6 +611,7 @@ CONFIG_PPS_CLIENT_LDISC=m CONFIG_PPS_CLIENT_GPIO=m CONFIG_GPIO_SYSFS=y CONFIG_GPIO_BCM_VIRT=y +CONFIG_GPIO_PCA953X=m CONFIG_GPIO_ARIZONA=m CONFIG_GPIO_STMPE=y CONFIG_W1=m @@ -629,6 +630,7 @@ CONFIG_W1_SLAVE_DS2433=m CONFIG_W1_SLAVE_DS2780=m CONFIG_W1_SLAVE_DS2781=m CONFIG_W1_SLAVE_DS28E04=m +CONFIG_W1_SLAVE_DS28E17=m CONFIG_POWER_RESET_GPIO=y CONFIG_BATTERY_DS2760=m CONFIG_BATTERY_MAX17040=m @@ -761,6 +763,7 @@ CONFIG_VIDEO_TVP5150=m CONFIG_VIDEO_TW2804=m CONFIG_VIDEO_TW9903=m CONFIG_VIDEO_TW9906=m +CONFIG_VIDEO_IMX219=m CONFIG_VIDEO_OV5647=m CONFIG_VIDEO_OV7640=m CONFIG_VIDEO_MT9V011=m @@ -823,6 +826,7 @@ CONFIG_SND_SOC_AD193X_SPI=m CONFIG_SND_SOC_AD193X_I2C=m CONFIG_SND_SOC_ADAU1701=m CONFIG_SND_SOC_AK4554=m +CONFIG_SND_SOC_CS4265=m CONFIG_SND_SOC_CS4271_I2C=m CONFIG_SND_SOC_ICS43432=m CONFIG_SND_SOC_WM8804_I2C=m @@ -992,7 +996,9 @@ CONFIG_MMC_SDHCI_PLTFM=y CONFIG_MMC_SDHCI_IPROC=m CONFIG_MMC_SPI=m CONFIG_LEDS_CLASS=y +CONFIG_LEDS_PCA9532=m CONFIG_LEDS_GPIO=y +CONFIG_LEDS_PCA955X=m CONFIG_LEDS_PCA963X=m CONFIG_LEDS_IS31FL32XX=m CONFIG_LEDS_TRIGGER_TIMER=y diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c index 17fac2889f56ea99dbcca0c87e8c6c6b45934235..d8c521c757e836717b5a80b70e1e83f293c9bc2c 100644 --- a/arch/arm64/crypto/sha1-ce-glue.c +++ b/arch/arm64/crypto/sha1-ce-glue.c @@ -54,7 +54,7 @@ static int sha1_ce_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct sha1_ce_state *sctx = shash_desc_ctx(desc); - bool finalize = !sctx->sst.count && !(len % SHA1_BLOCK_SIZE); + bool finalize = !sctx->sst.count && !(len % SHA1_BLOCK_SIZE) && len; if (!may_use_simd()) return crypto_sha1_finup(desc, data, len, out); diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c index 261f5195cab74b2952ee2158daf08ab968939701..c47d1a28ff6bb6274180c6063e2034be1b991eca 100644 --- a/arch/arm64/crypto/sha2-ce-glue.c +++ b/arch/arm64/crypto/sha2-ce-glue.c @@ -59,7 +59,7 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct sha256_ce_state *sctx = shash_desc_ctx(desc); - bool finalize = !sctx->sst.count && !(len % SHA256_BLOCK_SIZE); + bool finalize = !sctx->sst.count && !(len % SHA256_BLOCK_SIZE) && len; if (!may_use_simd()) { if (len) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index f90f5d83b228aebab81b73a01f066b050a201c7c..5a97ac853168222a0525f3a3d5b75aa02d5bb0cf 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -112,7 +112,11 @@ * RAS Error Synchronization barrier */ .macro esb +#ifdef 
CONFIG_ARM64_RAS_EXTN hint #16 +#else + nop +#endif .endm /* diff --git a/arch/arm64/include/asm/compat.h b/arch/arm64/include/asm/compat.h index 1a037b94eba10d481866063bfcc8c5f59adf2e35..cee28a05ee98f0a63dabac43b939f46e457214a6 100644 --- a/arch/arm64/include/asm/compat.h +++ b/arch/arm64/include/asm/compat.h @@ -159,6 +159,7 @@ static inline compat_uptr_t ptr_to_compat(void __user *uptr) } #define compat_user_stack_pointer() (user_stack_pointer(task_pt_regs(current))) +#define COMPAT_MINSIGSTKSZ 2048 static inline void __user *arch_compat_alloc_user_space(long len) { diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 1717ba1db35ddb935720c20ec46c318d59ca9b83..510f687d269a15db9386bf145e8c804ea177ead1 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -45,9 +45,10 @@ */ enum ftr_type { - FTR_EXACT, /* Use a predefined safe value */ - FTR_LOWER_SAFE, /* Smaller value is safe */ - FTR_HIGHER_SAFE,/* Bigger value is safe */ + FTR_EXACT, /* Use a predefined safe value */ + FTR_LOWER_SAFE, /* Smaller value is safe */ + FTR_HIGHER_SAFE, /* Bigger value is safe */ + FTR_HIGHER_OR_ZERO_SAFE, /* Bigger value is safe, but 0 is biggest */ }; #define FTR_STRICT true /* SANITY check strict matching required */ diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h index b7847eb8a7bb76d8602d7a328b478220e7e66120..9195e524cb0823a96965873308601585814d710d 100644 --- a/arch/arm64/include/asm/dma-mapping.h +++ b/arch/arm64/include/asm/dma-mapping.h @@ -24,6 +24,27 @@ #include #include +extern void *arm64_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, + gfp_t gfp, unsigned long attrs); +extern void arm64_dma_free(struct device *dev, size_t size, void *cpu_addr, + dma_addr_t handle, unsigned long attrs); +extern int arm64_dma_mmap(struct device *dev, struct vm_area_struct *vma, + void *cpu_addr, dma_addr_t dma_addr, size_t size, + unsigned long attrs); +extern int arm64_dma_get_sgtable(struct device *dev, struct sg_table *sgt, + void *cpu_addr, dma_addr_t dma_addr, size_t size, + unsigned long attrs); +extern int arm64_dma_map_sg(struct device *dev, struct scatterlist *sgl, int nelems, + enum dma_data_direction dir, unsigned long attrs); +extern void arm64_dma_unmap_sg(struct device *dev, struct scatterlist *sgl, int, + enum dma_data_direction dir, unsigned long attrs); +extern void arm64_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl, int nelems, + enum dma_data_direction dir); +extern void arm64_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl, int nelems, + enum dma_data_direction dir); + + + extern const struct dma_map_ops dummy_dma_ops; static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h index 7ed320895d1f463d1e95cd9ec6328a49eed765ae..f52a2968a3b696270bc73adbb7adfdca1ce01744 100644 --- a/arch/arm64/include/asm/efi.h +++ b/arch/arm64/include/asm/efi.h @@ -94,7 +94,11 @@ static inline unsigned long efi_get_max_initrd_addr(unsigned long dram_base, ((protocol##_t *)instance)->f(instance, ##__VA_ARGS__) #define alloc_screen_info(x...) &screen_info -#define free_screen_info(x...) 
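/*
 * [Editor's note: illustrative sketch, not part of the patch series.]
 * In the sha1-ce/sha2-ce finup hunks earlier, "finalize" selects the
 * fast path where the Crypto Extensions code pads and finishes the
 * digest itself while processing the data. That only works when there
 * is at least one full block to hand it, so a zero-length request must
 * fall back to the normal update + final path or the empty-message
 * digest comes out wrong. Hence the extra "&& len":
 */
bool finalize = !sctx->sst.count &&		/* nothing buffered yet   */
		!(len % SHA1_BLOCK_SIZE) &&	/* whole blocks only      */
		len;				/* and at least one block */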
+ +static inline void free_screen_info(efi_system_table_t *sys_table_arg, + struct screen_info *si) +{ +} /* redeclare as 'hidden' so the compiler will generate relative references */ extern struct screen_info screen_info __attribute__((__visibility__("hidden"))); diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index ea423db39364456585192cee8eebf3e7432f2d35..2214a403f39b924316b97a907bd8c89ac493f788 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -419,8 +419,8 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn, PMD_TYPE_SECT) #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3 -#define pud_sect(pud) (0) -#define pud_table(pud) (1) +static inline bool pud_sect(pud_t pud) { return false; } +static inline bool pud_table(pud_t pud) { return true; } #else #define pud_sect(pud) ((pud_val(pud) & PUD_TYPE_MASK) == \ PUD_TYPE_SECT) diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c index ed46dc188b225d2d0aec587f435b5b2ce774c5cd..970f15c76bace5bebf097e550c3b8b5b32de3fd4 100644 --- a/arch/arm64/kernel/acpi.c +++ b/arch/arm64/kernel/acpi.c @@ -154,10 +154,14 @@ static int __init acpi_fadt_sanity_check(void) */ if (table->revision < 5 || (table->revision == 5 && fadt->minor_revision < 1)) { - pr_err("Unsupported FADT revision %d.%d, should be 5.1+\n", + pr_err(FW_BUG "Unsupported FADT revision %d.%d, should be 5.1+\n", table->revision, fadt->minor_revision); - ret = -EINVAL; - goto out; + + if (!fadt->arm_boot_flags) { + ret = -EINVAL; + goto out; + } + pr_err("FADT has ARM boot flags set, assuming 5.1\n"); } if (!(fadt->flags & ACPI_FADT_HW_REDUCED)) { diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 93f69d82225de1b59cd03e63710ed8031c5b4c76..94babc3d0ec2c72b0ac70cb12169590e0202b13c 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -165,9 +165,17 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = { }; static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = { - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI), - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI), - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI), + /* + * We already refuse to boot CPUs that don't support our configured + * page size, so we can only detect mismatches for a page size other + * than the one we're currently using. Unfortunately, SoCs like this + * exist in the wild so, even though we don't like it, we'll have to go + * along with it and treat them as non-strict. 
+ */ + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI), + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0), /* Linux shouldn't care about secure memory */ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0), @@ -206,8 +214,8 @@ static const struct arm64_ftr_bits ftr_ctr[] = { ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_SAFE, CTR_CWG_SHIFT, 4, 0), - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_SAFE, CTR_ERG_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1), /* * Linux can handle differing I-cache policies. Userspace JITs will @@ -449,6 +457,10 @@ static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, case FTR_LOWER_SAFE: ret = new < cur ? new : cur; break; + case FTR_HIGHER_OR_ZERO_SAFE: + if (!cur || !new) + break; + /* Fallthrough */ case FTR_HIGHER_SAFE: ret = new > cur ? new : cur; break; diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c index e9ab7b3ed31765e2a915c9841d515679f8e8749a..71f41de52d09dd01500249e0bc96f8f2c64d2eaa 100644 --- a/arch/arm64/kernel/cpuinfo.c +++ b/arch/arm64/kernel/cpuinfo.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -125,6 +126,10 @@ static int c_show(struct seq_file *m, void *v) { int i, j; bool compat = personality(current->personality) == PER_LINUX32; + struct device_node *np; + const char *model; + const char *serial; + u32 revision; for_each_online_cpu(i) { struct cpuinfo_arm64 *cpuinfo = &per_cpu(cpu_data, i); @@ -176,6 +181,26 @@ static int c_show(struct seq_file *m, void *v) seq_printf(m, "CPU revision\t: %d\n\n", MIDR_REVISION(midr)); } + seq_printf(m, "Hardware\t: BCM2835\n"); + + np = of_find_node_by_path("/system"); + if (np) { + if (!of_property_read_u32(np, "linux,revision", &revision)) + seq_printf(m, "Revision\t: %04x\n", revision); + of_node_put(np); + } + + np = of_find_node_by_path("/"); + if (np) { + if (!of_property_read_string(np, "serial-number", + &serial)) + seq_printf(m, "Serial\t\t: %s\n", serial); + if (!of_property_read_string(np, "model", + &model)) + seq_printf(m, "Model\t\t: %s\n", model); + of_node_put(np); + } + return 0; } diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index d30ca1b304cdd086c82d8a72238e4f0f653b7e9d..b582580c8c4ca319c9be443f1742d2d3d26aed6f 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -830,7 +830,7 @@ el0_dbg: mov x1, x25 mov x2, sp bl do_debug_exception - enable_daif + enable_da_f ct_user_exit b ret_to_user el0_inv: @@ -882,7 +882,7 @@ el0_error_naked: enable_dbg mov x0, sp bl do_serror - enable_daif + enable_da_f ct_user_exit b ret_to_user ENDPROC(el0_error) diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index 
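/*
 * [Editor's note: illustrative sketch, not part of the patch series.]
 * FTR_HIGHER_OR_ZERO_SAFE above covers CTR_EL0.CWG/ERG, where a larger
 * value is safer but 0 means "size not reported". If any CPU reports 0,
 * the sanitised value collapses to 0 so the kernel keeps using its
 * conservative default; otherwise the maximum wins, as in the
 * arm64_ftr_safe_value() hunk. Roughly:
 */
static s64 higher_or_zero_safe(s64 cur, s64 new)
{
	if (!cur || !new)
		return 0;		/* "unknown": force the fallback */
	return new > cur ? new : cur;
}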
57e962290df3a0aee4aaeddbb6a8c9369b5c7931..7eff8afa035fdbca60383a946558ca06befde450 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -76,7 +76,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) if (offset < -SZ_128M || offset >= SZ_128M) { #ifdef CONFIG_ARM64_MODULE_PLTS - struct plt_entry trampoline; + struct plt_entry trampoline, *dst; struct module *mod; /* @@ -104,24 +104,27 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) * is added in the future, but for now, the pr_err() below * deals with a theoretical issue only. */ + dst = mod->arch.ftrace_trampoline; trampoline = get_plt_entry(addr); - if (!plt_entries_equal(mod->arch.ftrace_trampoline, - &trampoline)) { - if (!plt_entries_equal(mod->arch.ftrace_trampoline, - &(struct plt_entry){})) { + if (!plt_entries_equal(dst, &trampoline)) { + if (!plt_entries_equal(dst, &(struct plt_entry){})) { pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n"); return -EINVAL; } /* point the trampoline to our ftrace entry point */ module_disable_ro(mod); - *mod->arch.ftrace_trampoline = trampoline; + *dst = trampoline; module_enable_ro(mod, true); - /* update trampoline before patching in the branch */ - smp_wmb(); + /* + * Ensure updated trampoline is visible to instruction + * fetch before we patch in the branch. + */ + __flush_icache_range((unsigned long)&dst[0], + (unsigned long)&dst[1]); } - addr = (unsigned long)(void *)mod->arch.ftrace_trampoline; + addr = (unsigned long)dst; #else /* CONFIG_ARM64_MODULE_PLTS */ return -EINVAL; #endif /* CONFIG_ARM64_MODULE_PLTS */ diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c index 8c9644376326fe96f05645d298c1b4fcd383d652..7c0611f5d2ce7594bd33b8e967db8aa9210befa3 100644 --- a/arch/arm64/kernel/hw_breakpoint.c +++ b/arch/arm64/kernel/hw_breakpoint.c @@ -547,13 +547,14 @@ int hw_breakpoint_arch_parse(struct perf_event *bp, /* Aligned */ break; case 1: - /* Allow single byte watchpoint. */ - if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1) - break; case 2: /* Allow halfword watchpoints and breakpoints. */ if (hw->ctrl.len == ARM_BREAKPOINT_LEN_2) break; + case 3: + /* Allow single byte watchpoint. 
*/ + if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1) + break; default: return -EINVAL; } diff --git a/arch/arm64/kernel/image.h b/arch/arm64/kernel/image.h index 8da289dc843a0aeeb6f8a27b67325c3de8678d8d..eff6a564ab8081c5726b4dce0258ff8d350dc9f1 100644 --- a/arch/arm64/kernel/image.h +++ b/arch/arm64/kernel/image.h @@ -73,7 +73,11 @@ #ifdef CONFIG_EFI -__efistub_stext_offset = stext - _text; +/* + * Use ABSOLUTE() to avoid ld.lld treating this as a relative symbol: + * https://github.com/ClangBuiltLinux/linux/issues/561 + */ +__efistub_stext_offset = ABSOLUTE(stext - _text); /* * The EFI stub has its own symbol namespace prefixed by __efistub_, to diff --git a/arch/arm64/kernel/return_address.c b/arch/arm64/kernel/return_address.c index 933adbc0f654d84e2e7331a93455ee74c747a091..0311fe52c8ffb5be933f0546dd95cfe702cec6db 100644 --- a/arch/arm64/kernel/return_address.c +++ b/arch/arm64/kernel/return_address.c @@ -11,6 +11,7 @@ #include #include +#include #include #include @@ -32,6 +33,7 @@ static int save_return_addr(struct stackframe *frame, void *d) return 0; } } +NOKPROBE_SYMBOL(save_return_addr); void *return_address(unsigned int level) { @@ -55,3 +57,4 @@ void *return_address(unsigned int level) return NULL; } EXPORT_SYMBOL_GPL(return_address); +NOKPROBE_SYMBOL(return_address); diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c index 4989f7ea1e59925ff9eda5ac68c7ac3cedb3d4fe..bb482ec044b61d43fe5071605b6a039ba8985c28 100644 --- a/arch/arm64/kernel/stacktrace.c +++ b/arch/arm64/kernel/stacktrace.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -85,6 +86,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame) return 0; } +NOKPROBE_SYMBOL(unwind_frame); void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame, int (*fn)(struct stackframe *, void *), void *data) @@ -99,6 +101,7 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame, break; } } +NOKPROBE_SYMBOL(walk_stackframe); #ifdef CONFIG_STACKTRACE struct stack_trace_data { diff --git a/arch/arm64/kvm/regmap.c b/arch/arm64/kvm/regmap.c index 7a5173ea227648b4ec534ee60d557c8e5894cf73..4c2e96ef306ed9e81cabdb6dd6eb49303d668128 100644 --- a/arch/arm64/kvm/regmap.c +++ b/arch/arm64/kvm/regmap.c @@ -189,13 +189,18 @@ void vcpu_write_spsr32(struct kvm_vcpu *vcpu, unsigned long v) switch (spsr_idx) { case KVM_SPSR_SVC: write_sysreg_el1(v, spsr); + break; case KVM_SPSR_ABT: write_sysreg(v, spsr_abt); + break; case KVM_SPSR_UND: write_sysreg(v, spsr_und); + break; case KVM_SPSR_IRQ: write_sysreg(v, spsr_irq); + break; case KVM_SPSR_FIQ: write_sysreg(v, spsr_fiq); + break; } } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index d112af75680bbdf7c409c2aadb824da7672d915d..6da2bbdb9648fa4944920a2e12c6760048f61f09 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -626,7 +626,7 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) */ val = ((pmcr & ~ARMV8_PMU_PMCR_MASK) | (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E); - __vcpu_sys_reg(vcpu, PMCR_EL0) = val; + __vcpu_sys_reg(vcpu, r->reg) = val; } static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags) @@ -968,13 +968,13 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ #define DBG_BCR_BVR_WCR_WVR_EL1(n) \ { SYS_DESC(SYS_DBGBVRn_EL1(n)), \ - trap_bvr, reset_bvr, n, 0, get_bvr, 
set_bvr }, \ + trap_bvr, reset_bvr, 0, 0, get_bvr, set_bvr }, \ { SYS_DESC(SYS_DBGBCRn_EL1(n)), \ - trap_bcr, reset_bcr, n, 0, get_bcr, set_bcr }, \ + trap_bcr, reset_bcr, 0, 0, get_bcr, set_bcr }, \ { SYS_DESC(SYS_DBGWVRn_EL1(n)), \ - trap_wvr, reset_wvr, n, 0, get_wvr, set_wvr }, \ + trap_wvr, reset_wvr, 0, 0, get_wvr, set_wvr }, \ { SYS_DESC(SYS_DBGWCRn_EL1(n)), \ - trap_wcr, reset_wcr, n, 0, get_wcr, set_wcr } + trap_wcr, reset_wcr, 0, 0, get_wcr, set_wcr } /* Macro to expand the PMEVCNTRn_EL0 register */ #define PMU_PMEVCNTR_EL0(n) \ @@ -1359,7 +1359,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_CSSELR_EL1), NULL, reset_unknown, CSSELR_EL1 }, - { SYS_DESC(SYS_PMCR_EL0), access_pmcr, reset_pmcr, }, + { SYS_DESC(SYS_PMCR_EL0), access_pmcr, reset_pmcr, PMCR_EL0 }, { SYS_DESC(SYS_PMCNTENSET_EL0), access_pmcnten, reset_unknown, PMCNTENSET_EL0 }, { SYS_DESC(SYS_PMCNTENCLR_EL0), access_pmcnten, NULL, PMCNTENSET_EL0 }, { SYS_DESC(SYS_PMOVSCLR_EL0), access_pmovs, NULL, PMOVSSET_EL0 }, @@ -2072,13 +2072,19 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu, } static void reset_sys_reg_descs(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *table, size_t num) + const struct sys_reg_desc *table, size_t num, + unsigned long *bmap) { unsigned long i; for (i = 0; i < num; i++) - if (table[i].reset) + if (table[i].reset) { + int reg = table[i].reg; + table[i].reset(vcpu, &table[i]); + if (reg > 0 && reg < NR_SYS_REGS) + set_bit(reg, bmap); + } } /** @@ -2576,18 +2582,16 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu) { size_t num; const struct sys_reg_desc *table; - - /* Catch someone adding a register without putting in reset entry. */ - memset(&vcpu->arch.ctxt.sys_regs, 0x42, sizeof(vcpu->arch.ctxt.sys_regs)); + DECLARE_BITMAP(bmap, NR_SYS_REGS) = { 0, }; /* Generic chip reset first (so target could override). 
*/ - reset_sys_reg_descs(vcpu, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); + reset_sys_reg_descs(vcpu, sys_reg_descs, ARRAY_SIZE(sys_reg_descs), bmap); table = get_target_table(vcpu->arch.target, true, &num); - reset_sys_reg_descs(vcpu, table, num); + reset_sys_reg_descs(vcpu, table, num, bmap); for (num = 1; num < NR_SYS_REGS; num++) { - if (WARN(__vcpu_sys_reg(vcpu, num) == 0x4242424242424242, + if (WARN(!test_bit(num, bmap), "Didn't reset __vcpu_sys_reg(%zi)\n", num)) break; } diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c index d3a5bb16f0b2312cd60477d9f60b433a1d80ada4..b1f6d4e1f46455f2b5a4bb15958867c1cdc73713 100644 --- a/arch/arm64/mm/dma-mapping.c +++ b/arch/arm64/mm/dma-mapping.c @@ -138,6 +138,12 @@ no_mem: return NULL; } +void *arm64_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, + gfp_t gfp, unsigned long attrs) +{ + return __dma_alloc(dev, size, handle, gfp, attrs); +} + static void __dma_free(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle, unsigned long attrs) @@ -154,6 +160,12 @@ static void __dma_free(struct device *dev, size_t size, swiotlb_free(dev, size, swiotlb_addr, dma_handle, attrs); } +void arm64_dma_free(struct device *dev, size_t size, void *cpu_addr, + dma_addr_t handle, unsigned long attrs) +{ + __dma_free(dev, size, cpu_addr, handle, attrs); +} + static dma_addr_t __swiotlb_map_page(struct device *dev, struct page *page, unsigned long offset, size_t size, enum dma_data_direction dir, @@ -197,6 +209,12 @@ static int __swiotlb_map_sg_attrs(struct device *dev, struct scatterlist *sgl, return ret; } +int arm64_dma_map_sg(struct device *dev, struct scatterlist *sgl, int nelems, + enum dma_data_direction dir, unsigned long attrs) +{ + return __swiotlb_map_sg_attrs(dev, sgl, nelems, dir, attrs); +} + static void __swiotlb_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl, int nelems, enum dma_data_direction dir, @@ -213,6 +231,12 @@ static void __swiotlb_unmap_sg_attrs(struct device *dev, swiotlb_unmap_sg_attrs(dev, sgl, nelems, dir, attrs); } +void arm64_dma_unmap_sg(struct device *dev, struct scatterlist *sgl, int nelems, + enum dma_data_direction dir, unsigned long attrs) +{ + __swiotlb_unmap_sg_attrs(dev, sgl, nelems, dir, attrs); +} + static void __swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dev_addr, size_t size, enum dma_data_direction dir) @@ -245,6 +269,12 @@ static void __swiotlb_sync_sg_for_cpu(struct device *dev, swiotlb_sync_sg_for_cpu(dev, sgl, nelems, dir); } +void arm64_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl, int nelems, + enum dma_data_direction dir) +{ + __swiotlb_sync_sg_for_cpu(dev, sgl, nelems, dir); +} + static void __swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl, int nelems, enum dma_data_direction dir) @@ -259,6 +289,12 @@ static void __swiotlb_sync_sg_for_device(struct device *dev, sg->length, dir); } +void arm64_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl, int nelems, + enum dma_data_direction dir) +{ + __swiotlb_sync_sg_for_device(dev, sgl, nelems, dir); +} + static int __swiotlb_mmap_pfn(struct vm_area_struct *vma, unsigned long pfn, size_t size) { @@ -294,6 +330,13 @@ static int __swiotlb_mmap(struct device *dev, return __swiotlb_mmap_pfn(vma, pfn, size); } +int arm64_dma_mmap(struct device *dev, struct vm_area_struct *vma, + void *cpu_addr, dma_addr_t dma_addr, size_t size, + unsigned long attrs) +{ + return __swiotlb_mmap(dev, vma, cpu_addr, dma_addr, size, attrs); +} + static int 
__swiotlb_get_sgtable_page(struct sg_table *sgt, struct page *page, size_t size) { @@ -314,6 +357,13 @@ static int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt, return __swiotlb_get_sgtable_page(sgt, page, size); } +int arm64_dma_get_sgtable(struct device *dev, struct sg_table *sgt, + void *cpu_addr, dma_addr_t dma_addr, size_t size, + unsigned long attrs) +{ + return __swiotlb_get_sgtable(dev, sgt, cpu_addr, dma_addr, size, attrs); +} + static int __swiotlb_dma_supported(struct device *hwdev, u64 mask) { if (swiotlb) diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 229e6f6cbdfebf863dad90bd0d3aa42a36282f85..bf39be95c30590cff943cd7f02bf1715e2be6913 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -233,8 +233,9 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max) { unsigned long max_zone_pfns[MAX_NR_ZONES] = {0}; - if (IS_ENABLED(CONFIG_ZONE_DMA32)) - max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys()); +#ifdef CONFIG_ZONE_DMA32 + max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys()); +#endif max_zone_pfns[ZONE_NORMAL] = max; free_area_init_nodes(max_zone_pfns); diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile index 3c453a1f1ff10c218e809ec1f5efe2bb70b1ef7d..172801ed35b89994f6a52e492a7c486cb517d821 100644 --- a/arch/mips/boot/compressed/Makefile +++ b/arch/mips/boot/compressed/Makefile @@ -78,6 +78,8 @@ OBJCOPYFLAGS_piggy.o := --add-section=.image=$(obj)/vmlinux.bin.z \ $(obj)/piggy.o: $(obj)/dummy.o $(obj)/vmlinux.bin.z FORCE $(call if_changed,objcopy) +HOSTCFLAGS_calc_vmlinuz_load_addr.o += $(LINUXINCLUDE) + # Calculate the load address of the compressed kernel image hostprogs-y := calc_vmlinuz_load_addr diff --git a/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c b/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c index 542c3ede97222faec0e38d8b88c280609068fd38..d14f75ec827323702d61d6ef6eb2871ecfda3513 100644 --- a/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c +++ b/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c @@ -13,7 +13,7 @@ #include #include #include -#include "../../../../include/linux/sizes.h" +#include int main(int argc, char *argv[]) { diff --git a/arch/mips/include/asm/mach-ath79/ar933x_uart.h b/arch/mips/include/asm/mach-ath79/ar933x_uart.h index c2917b39966bf18797b6e4885d5f713b52e2b11d..bba2c883795163a43056c61946285eddd80cf282 100644 --- a/arch/mips/include/asm/mach-ath79/ar933x_uart.h +++ b/arch/mips/include/asm/mach-ath79/ar933x_uart.h @@ -27,8 +27,8 @@ #define AR933X_UART_CS_PARITY_S 0 #define AR933X_UART_CS_PARITY_M 0x3 #define AR933X_UART_CS_PARITY_NONE 0 -#define AR933X_UART_CS_PARITY_ODD 1 -#define AR933X_UART_CS_PARITY_EVEN 2 +#define AR933X_UART_CS_PARITY_ODD 2 +#define AR933X_UART_CS_PARITY_EVEN 3 #define AR933X_UART_CS_IF_MODE_S 2 #define AR933X_UART_CS_IF_MODE_M 0x3 #define AR933X_UART_CS_IF_MODE_NONE 0 diff --git a/arch/mips/jz4740/board-qi_lb60.c b/arch/mips/jz4740/board-qi_lb60.c index 705593d40d120d24a3c4ac3c56fe3859a4ba7a9c..05c60fa4fa06b1c22a25546b2fd3c9a263631193 100644 --- a/arch/mips/jz4740/board-qi_lb60.c +++ b/arch/mips/jz4740/board-qi_lb60.c @@ -471,27 +471,27 @@ static unsigned long pin_cfg_bias_disable[] = { static struct pinctrl_map pin_map[] __initdata = { /* NAND pin configuration */ PIN_MAP_MUX_GROUP_DEFAULT("jz4740-nand", - "10010000.jz4740-pinctrl", "nand", "nand-cs1"), + "10010000.pin-controller", "nand-cs1", "nand"), /* fbdev pin configuration */ PIN_MAP_MUX_GROUP("jz4740-fb", PINCTRL_STATE_DEFAULT, - 
"10010000.jz4740-pinctrl", "lcd", "lcd-8bit"), + "10010000.pin-controller", "lcd-8bit", "lcd"), PIN_MAP_MUX_GROUP("jz4740-fb", PINCTRL_STATE_SLEEP, - "10010000.jz4740-pinctrl", "lcd", "lcd-no-pins"), + "10010000.pin-controller", "lcd-no-pins", "lcd"), /* MMC pin configuration */ PIN_MAP_MUX_GROUP_DEFAULT("jz4740-mmc.0", - "10010000.jz4740-pinctrl", "mmc", "mmc-1bit"), + "10010000.pin-controller", "mmc-1bit", "mmc"), PIN_MAP_MUX_GROUP_DEFAULT("jz4740-mmc.0", - "10010000.jz4740-pinctrl", "mmc", "mmc-4bit"), + "10010000.pin-controller", "mmc-4bit", "mmc"), PIN_MAP_CONFIGS_PIN_DEFAULT("jz4740-mmc.0", - "10010000.jz4740-pinctrl", "PD0", pin_cfg_bias_disable), + "10010000.pin-controller", "PD0", pin_cfg_bias_disable), PIN_MAP_CONFIGS_PIN_DEFAULT("jz4740-mmc.0", - "10010000.jz4740-pinctrl", "PD2", pin_cfg_bias_disable), + "10010000.pin-controller", "PD2", pin_cfg_bias_disable), /* PWM pin configuration */ PIN_MAP_MUX_GROUP_DEFAULT("jz4740-pwm", - "10010000.jz4740-pinctrl", "pwm4", "pwm4"), + "10010000.pin-controller", "pwm4", "pwm4"), }; diff --git a/arch/mips/kernel/cacheinfo.c b/arch/mips/kernel/cacheinfo.c index 97d5239ca47baef7602b7500f46dc3d4cc8681f1..428ef218920398c6162b82e5688ab61dfe311580 100644 --- a/arch/mips/kernel/cacheinfo.c +++ b/arch/mips/kernel/cacheinfo.c @@ -80,6 +80,8 @@ static int __populate_cache_leaves(unsigned int cpu) if (c->tcache.waysize) populate_cache(tcache, this_leaf, 3, CACHE_TYPE_UNIFIED); + this_cpu_ci->cpu_map_populated = true; + return 0; } diff --git a/arch/mips/kernel/i8253.c b/arch/mips/kernel/i8253.c index 5f209f111e59e3ad9c6e31bee8352623cacc3e68..df7ddd246eaac730333bc24cc361f1893be100aa 100644 --- a/arch/mips/kernel/i8253.c +++ b/arch/mips/kernel/i8253.c @@ -32,7 +32,8 @@ void __init setup_pit_timer(void) static int __init init_pit_clocksource(void) { - if (num_possible_cpus() > 1) /* PIT does not scale! */ + if (num_possible_cpus() > 1 || /* PIT does not scale! */ + !clockevent_state_periodic(&i8253_clockevent)) return 0; return clocksource_i8253_init(); diff --git a/arch/mips/lantiq/irq.c b/arch/mips/lantiq/irq.c index c4ef1c31e0c4f03ca22725668b2931d34a7cd749..37caeadb2964c956256040ff201743c140de310c 100644 --- a/arch/mips/lantiq/irq.c +++ b/arch/mips/lantiq/irq.c @@ -156,8 +156,9 @@ static int ltq_eiu_settype(struct irq_data *d, unsigned int type) if (edge) irq_set_handler(d->hwirq, handle_edge_irq); - ltq_eiu_w32(ltq_eiu_r32(LTQ_EIU_EXIN_C) | - (val << (i * 4)), LTQ_EIU_EXIN_C); + ltq_eiu_w32((ltq_eiu_r32(LTQ_EIU_EXIN_C) & + (~(7 << (i * 4)))) | (val << (i * 4)), + LTQ_EIU_EXIN_C); } } diff --git a/arch/parisc/boot/compressed/vmlinux.lds.S b/arch/parisc/boot/compressed/vmlinux.lds.S index 4ebd4e65524cd3cf6347a2948dede9163a4807b9..41ebe97fad1097f85a0c5d602db8bb74f474b946 100644 --- a/arch/parisc/boot/compressed/vmlinux.lds.S +++ b/arch/parisc/boot/compressed/vmlinux.lds.S @@ -42,8 +42,8 @@ SECTIONS #endif _startcode_end = .; - /* bootloader code and data starts behind area of extracted kernel */ - . = (SZ_end - SZparisc_kernel_start + KERNEL_BINARY_TEXT_START); + /* bootloader code and data starts at least behind area of extracted kernel */ + . = MAX(ABSOLUTE(.), (SZ_end - SZparisc_kernel_start + KERNEL_BINARY_TEXT_START)); /* align on next page boundary */ . 
= ALIGN(4096); diff --git a/arch/parisc/kernel/ptrace.c b/arch/parisc/kernel/ptrace.c index 0964c236e3e5a711056e058a8a5ee63343f9a496..de2998cb189e8814d4cc55121cd7e9fb03476807 100644 --- a/arch/parisc/kernel/ptrace.c +++ b/arch/parisc/kernel/ptrace.c @@ -167,6 +167,9 @@ long arch_ptrace(struct task_struct *child, long request, if ((addr & (sizeof(unsigned long)-1)) || addr >= sizeof(struct pt_regs)) break; + if (addr == PT_IAOQ0 || addr == PT_IAOQ1) { + data |= 3; /* ensure userspace privilege */ + } if ((addr >= PT_GR1 && addr <= PT_GR31) || addr == PT_IAOQ0 || addr == PT_IAOQ1 || (addr >= PT_FR0 && addr <= PT_FR31 + 4) || @@ -228,16 +231,18 @@ long arch_ptrace(struct task_struct *child, long request, static compat_ulong_t translate_usr_offset(compat_ulong_t offset) { - if (offset < 0) - return sizeof(struct pt_regs); - else if (offset <= 32*4) /* gr[0..31] */ - return offset * 2 + 4; - else if (offset <= 32*4+32*8) /* gr[0..31] + fr[0..31] */ - return offset + 32*4; - else if (offset < sizeof(struct pt_regs)/2 + 32*4) - return offset * 2 + 4 - 32*8; + compat_ulong_t pos; + + if (offset < 32*4) /* gr[0..31] */ + pos = offset * 2 + 4; + else if (offset < 32*4+32*8) /* fr[0] ... fr[31] */ + pos = (offset - 32*4) + PT_FR0; + else if (offset < sizeof(struct pt_regs)/2 + 32*4) /* sr[0] ... ipsw */ + pos = (offset - 32*4 - 32*8) * 2 + PT_SR0 + 4; else - return sizeof(struct pt_regs); + pos = sizeof(struct pt_regs); + + return pos; } long compat_arch_ptrace(struct task_struct *child, compat_long_t request, @@ -281,9 +286,12 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, addr = translate_usr_offset(addr); if (addr >= sizeof(struct pt_regs)) break; + if (addr == PT_IAOQ0+4 || addr == PT_IAOQ1+4) { + data |= 3; /* ensure userspace privilege */ + } if (addr >= PT_FR0 && addr <= PT_FR31 + 4) { /* Special case, fp regs are 64 bits anyway */ - *(__u64 *) ((char *) task_regs(child) + addr) = data; + *(__u32 *) ((char *) task_regs(child) + addr) = data; ret = 0; } else if ((addr >= PT_GR1+4 && addr <= PT_GR31+4) || @@ -496,7 +504,8 @@ static void set_reg(struct pt_regs *regs, int num, unsigned long val) return; case RI(iaoq[0]): case RI(iaoq[1]): - regs->iaoq[num - RI(iaoq[0])] = val; + /* set 2 lowest bits to ensure userspace privilege: */ + regs->iaoq[num - RI(iaoq[0])] = val | 3; return; case RI(sar): regs->sar = val; return; diff --git a/arch/powerpc/boot/xz_config.h b/arch/powerpc/boot/xz_config.h index e22e5b3770ddcc32b4ee37fecc63893b96522535..ebfadd39e1924a357bdb818a5a60d3353c1dfa6c 100644 --- a/arch/powerpc/boot/xz_config.h +++ b/arch/powerpc/boot/xz_config.h @@ -20,10 +20,30 @@ static inline uint32_t swab32p(void *p) #ifdef __LITTLE_ENDIAN__ #define get_le32(p) (*((uint32_t *) (p))) +#define cpu_to_be32(x) swab32(x) +static inline u32 be32_to_cpup(const u32 *p) +{ + return swab32p((u32 *)p); +} #else #define get_le32(p) swab32p(p) +#define cpu_to_be32(x) (x) +static inline u32 be32_to_cpup(const u32 *p) +{ + return *p; +} #endif +static inline uint32_t get_unaligned_be32(const void *p) +{ + return be32_to_cpup(p); +} + +static inline void put_unaligned_be32(u32 val, void *p) +{ + *((u32 *)p) = cpu_to_be32(val); +} + #define memeq(a, b, size) (memcmp(a, b, size) == 0) #define memzero(buf, size) memset(buf, 0, size) diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h index d5a8d7bf07594b0e0e127db45bf03d6129c1bfbc..b189f7aee222e3277e73c98e0a16ce704f8e8797 100644 --- a/arch/powerpc/include/asm/cacheflush.h +++ 
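/*
 * [Editor's note: illustrative sketch, not part of the patch series.]
 * On PA-RISC the two low bits of each IAOQ (instruction address offset
 * queue) entry encode the privilege level, with 3 being user mode. The
 * ptrace hunks above OR those bits in whenever a tracer writes
 * IAOQ0/IAOQ1, so a debugger can never point the traced task at a
 * kernel privilege level. Hypothetical helper showing the pattern:
 */
static void set_user_iaoq(struct pt_regs *regs, int idx, unsigned long val)
{
	regs->iaoq[idx] = val | 3;	/* clamp to lowest privilege */
}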
b/arch/powerpc/include/asm/cacheflush.h @@ -32,9 +32,12 @@ * not expect this type of fault. flush_cache_vmap is not exactly the right * place to put this, but it seems to work well enough. */ -#define flush_cache_vmap(start, end) do { asm volatile("ptesync" ::: "memory"); } while (0) +static inline void flush_cache_vmap(unsigned long start, unsigned long end) +{ + asm volatile("ptesync" ::: "memory"); +} #else -#define flush_cache_vmap(start, end) do { } while (0) +static inline void flush_cache_vmap(unsigned long start, unsigned long end) { } #endif #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c index c72767a5327ad63d5de84d5e3787ad0b24f6b7e0..fe3c6f3bd3b6226727fffc12b0ec0e3bef1fb4b4 100644 --- a/arch/powerpc/kernel/eeh.c +++ b/arch/powerpc/kernel/eeh.c @@ -360,10 +360,19 @@ static inline unsigned long eeh_token_to_phys(unsigned long token) ptep = find_init_mm_pte(token, &hugepage_shift); if (!ptep) return token; - WARN_ON(hugepage_shift); - pa = pte_pfn(*ptep) << PAGE_SHIFT; - return pa | (token & (PAGE_SIZE-1)); + pa = pte_pfn(*ptep); + + /* On radix we can do hugepage mappings for io, so handle that */ + if (hugepage_shift) { + pa <<= hugepage_shift; + pa |= token & ((1ul << hugepage_shift) - 1); + } else { + pa <<= PAGE_SHIFT; + pa |= token & (PAGE_SIZE - 1); + } + + return pa; } /* diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 2d8fc8c9da7a1f210816bd9734c3d8453d8fc04e..06cc77813dbb7bc73e2bfb8f8f30f62d6a83ea30 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1745,7 +1745,7 @@ handle_page_fault: addi r3,r1,STACK_FRAME_OVERHEAD bl do_page_fault cmpdi r3,0 - beq+ 12f + beq+ ret_from_except_lite bl save_nvgprs mr r5,r3 addi r3,r1,STACK_FRAME_OVERHEAD @@ -1760,7 +1760,12 @@ handle_dabr_fault: ld r5,_DSISR(r1) addi r3,r1,STACK_FRAME_OVERHEAD bl do_break -12: b ret_from_except_lite + /* + * do_break() may have changed the NV GPRS while handling a breakpoint. + * If so, we need to restore them with their updated values. Don't use + * ret_from_except_lite here. + */ + b ret_from_except #ifdef CONFIG_PPC_BOOK3S_64 diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S index 4935ef9a142e1715422541a4601ce4d62f735da7..52c7298f583a3774fe7d035aa1ace0dad5d8ae7a 100644 --- a/arch/powerpc/kernel/misc_64.S +++ b/arch/powerpc/kernel/misc_64.S @@ -137,7 +137,7 @@ _GLOBAL_TOC(flush_dcache_range) subf r8,r6,r4 /* compute length */ add r8,r8,r5 /* ensure we get enough */ lwz r9,DCACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of dcache block size */ - srw. r8,r8,r9 /* compute line count */ + srd. r8,r8,r9 /* compute line count */ beqlr /* nothing to do? */ mtctr r8 0: dcbst 0,r6 @@ -155,7 +155,7 @@ _GLOBAL(flush_inval_dcache_range) subf r8,r6,r4 /* compute length */ add r8,r8,r5 /* ensure we get enough */ lwz r9,DCACHEL1LOGBLOCKSIZE(r10)/* Get log-2 of dcache block size */ - srw. r8,r8,r9 /* compute line count */ + srd. r8,r8,r9 /* compute line count */ beqlr /* nothing to do? 
*/ sync isync diff --git a/arch/powerpc/kernel/pci_of_scan.c b/arch/powerpc/kernel/pci_of_scan.c index 98f04725def75f09dc924f7861d795c986ca98f0..c101b321dece8480f3474d709e7d10fcc17bc1e6 100644 --- a/arch/powerpc/kernel/pci_of_scan.c +++ b/arch/powerpc/kernel/pci_of_scan.c @@ -45,6 +45,8 @@ unsigned int pci_parse_of_flags(u32 addr0, int bridge) if (addr0 & 0x02000000) { flags = IORESOURCE_MEM | PCI_BASE_ADDRESS_SPACE_MEMORY; flags |= (addr0 >> 22) & PCI_BASE_ADDRESS_MEM_TYPE_64; + if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) + flags |= IORESOURCE_MEM_64; flags |= (addr0 >> 28) & PCI_BASE_ADDRESS_MEM_TYPE_1M; if (addr0 & 0x40000000) flags |= IORESOURCE_PREFETCH diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c index fd59fef9931bf1ca4721ec2729d482514616febb..906b05c2adae3f2d0770d7c6ae6a9603168e5776 100644 --- a/arch/powerpc/kernel/signal_32.c +++ b/arch/powerpc/kernel/signal_32.c @@ -1202,6 +1202,9 @@ SYSCALL_DEFINE0(rt_sigreturn) goto bad; if (MSR_TM_ACTIVE(msr_hi<<32)) { + /* Trying to start TM on non TM system */ + if (!cpu_has_feature(CPU_FTR_TM)) + goto bad; /* We only recheckpoint on return if we're * transaction. */ diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c index 14b0f5b6a373da21f567c5e5cd4e27ca66430c4a..b5933d7219db60623868286ee386891c63b2cd4f 100644 --- a/arch/powerpc/kernel/signal_64.c +++ b/arch/powerpc/kernel/signal_64.c @@ -750,6 +750,11 @@ SYSCALL_DEFINE0(rt_sigreturn) if (MSR_TM_ACTIVE(msr)) { /* We recheckpoint on return. */ struct ucontext __user *uc_transact; + + /* Trying to start TM on non TM system */ + if (!cpu_has_feature(CPU_FTR_TM)) + goto badframe; + if (__get_user(uc_transact, &uc->uc_link)) goto badframe; if (restore_tm_sigcontexts(current, &uc->uc_mcontext, diff --git a/arch/powerpc/kernel/swsusp_32.S b/arch/powerpc/kernel/swsusp_32.S index 7a919e9a3400bb9a41cc98b42875b1285e9f5f02..cbdf86228eaaa7c0d64082979d265e332de42bbc 100644 --- a/arch/powerpc/kernel/swsusp_32.S +++ b/arch/powerpc/kernel/swsusp_32.S @@ -25,11 +25,19 @@ #define SL_IBAT2 0x48 #define SL_DBAT3 0x50 #define SL_IBAT3 0x58 -#define SL_TB 0x60 -#define SL_R2 0x68 -#define SL_CR 0x6c -#define SL_LR 0x70 -#define SL_R12 0x74 /* r12 to r31 */ +#define SL_DBAT4 0x60 +#define SL_IBAT4 0x68 +#define SL_DBAT5 0x70 +#define SL_IBAT5 0x78 +#define SL_DBAT6 0x80 +#define SL_IBAT6 0x88 +#define SL_DBAT7 0x90 +#define SL_IBAT7 0x98 +#define SL_TB 0xa0 +#define SL_R2 0xa8 +#define SL_CR 0xac +#define SL_LR 0xb0 +#define SL_R12 0xb4 /* r12 to r31 */ #define SL_SIZE (SL_R12 + 80) .section .data @@ -114,6 +122,41 @@ _GLOBAL(swsusp_arch_suspend) mfibatl r4,3 stw r4,SL_IBAT3+4(r11) +BEGIN_MMU_FTR_SECTION + mfspr r4,SPRN_DBAT4U + stw r4,SL_DBAT4(r11) + mfspr r4,SPRN_DBAT4L + stw r4,SL_DBAT4+4(r11) + mfspr r4,SPRN_DBAT5U + stw r4,SL_DBAT5(r11) + mfspr r4,SPRN_DBAT5L + stw r4,SL_DBAT5+4(r11) + mfspr r4,SPRN_DBAT6U + stw r4,SL_DBAT6(r11) + mfspr r4,SPRN_DBAT6L + stw r4,SL_DBAT6+4(r11) + mfspr r4,SPRN_DBAT7U + stw r4,SL_DBAT7(r11) + mfspr r4,SPRN_DBAT7L + stw r4,SL_DBAT7+4(r11) + mfspr r4,SPRN_IBAT4U + stw r4,SL_IBAT4(r11) + mfspr r4,SPRN_IBAT4L + stw r4,SL_IBAT4+4(r11) + mfspr r4,SPRN_IBAT5U + stw r4,SL_IBAT5(r11) + mfspr r4,SPRN_IBAT5L + stw r4,SL_IBAT5+4(r11) + mfspr r4,SPRN_IBAT6U + stw r4,SL_IBAT6(r11) + mfspr r4,SPRN_IBAT6L + stw r4,SL_IBAT6+4(r11) + mfspr r4,SPRN_IBAT7U + stw r4,SL_IBAT7(r11) + mfspr r4,SPRN_IBAT7L + stw r4,SL_IBAT7+4(r11) +END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) + #if 0 /* Backup various CPU config stuffs */ bl __save_cpu_setup 
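/*
 * [Editor's note: illustrative sketch, not part of the patch series.]
 * The flush_dcache_range()/flush_inval_dcache_range() hunks above
 * replace srw. with srd. when turning the byte length into a cache-line
 * count. srw only operates on the low 32 bits of the register, so on
 * 64-bit kernels a range of 4 GiB or more yielded a truncated count and
 * part of the range was never flushed. In C terms, hypothetically:
 */
static unsigned long dcache_line_count(unsigned long len, unsigned int shift)
{
	/* old: ((u32)len) >> shift   (srw., truncated input)	*/
	return len >> shift;	/* new: srd., full 64-bit shift */
}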
@@ -279,27 +322,41 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) mtibatu 3,r4 lwz r4,SL_IBAT3+4(r11) mtibatl 3,r4 -#endif - BEGIN_MMU_FTR_SECTION - li r4,0 + lwz r4,SL_DBAT4(r11) mtspr SPRN_DBAT4U,r4 + lwz r4,SL_DBAT4+4(r11) mtspr SPRN_DBAT4L,r4 + lwz r4,SL_DBAT5(r11) mtspr SPRN_DBAT5U,r4 + lwz r4,SL_DBAT5+4(r11) mtspr SPRN_DBAT5L,r4 + lwz r4,SL_DBAT6(r11) mtspr SPRN_DBAT6U,r4 + lwz r4,SL_DBAT6+4(r11) mtspr SPRN_DBAT6L,r4 + lwz r4,SL_DBAT7(r11) mtspr SPRN_DBAT7U,r4 + lwz r4,SL_DBAT7+4(r11) mtspr SPRN_DBAT7L,r4 + lwz r4,SL_IBAT4(r11) mtspr SPRN_IBAT4U,r4 + lwz r4,SL_IBAT4+4(r11) mtspr SPRN_IBAT4L,r4 + lwz r4,SL_IBAT5(r11) mtspr SPRN_IBAT5U,r4 + lwz r4,SL_IBAT5+4(r11) mtspr SPRN_IBAT5L,r4 + lwz r4,SL_IBAT6(r11) mtspr SPRN_IBAT6U,r4 + lwz r4,SL_IBAT6+4(r11) mtspr SPRN_IBAT6L,r4 + lwz r4,SL_IBAT7(r11) mtspr SPRN_IBAT7U,r4 + lwz r4,SL_IBAT7+4(r11) mtspr SPRN_IBAT7L,r4 END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) +#endif /* Flush all TLBs */ lis r4,0x1000 diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c index 9a3f2646ecc7e87bd24c03c02198eb9f45763725..07a8004c3c237520954136dfd5409a80a692dfe9 100644 --- a/arch/powerpc/kvm/book3s_64_vio.c +++ b/arch/powerpc/kvm/book3s_64_vio.c @@ -602,8 +602,10 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu, if (kvmppc_gpa_to_ua(vcpu->kvm, tce & ~(TCE_PCI_READ | TCE_PCI_WRITE), - &ua, NULL)) - return H_PARAMETER; + &ua, NULL)) { + ret = H_PARAMETER; + goto unlock_exit; + } list_for_each_entry_lockless(stit, &stt->iommu_tables, next) { ret = kvmppc_tce_iommu_map(vcpu->kvm, stt, diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c index 6821ead4b4ebc128a9a772a712feefb213b1bad0..eb8b11515a7ffe8439c26995aa6eb4d9183fde59 100644 --- a/arch/powerpc/kvm/book3s_64_vio_hv.c +++ b/arch/powerpc/kvm/book3s_64_vio_hv.c @@ -528,8 +528,10 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu, ua = 0; if (kvmppc_gpa_to_ua(vcpu->kvm, tce & ~(TCE_PCI_READ | TCE_PCI_WRITE), - &ua, NULL)) - return H_PARAMETER; + &ua, NULL)) { + ret = H_PARAMETER; + goto unlock_exit; + } list_for_each_entry_lockless(stit, &stt->iommu_tables, next) { ret = kvmppc_rm_tce_iommu_map(vcpu->kvm, stt, diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c index 578174a33d229732c0fef060ef95d3a1f0176b81..51cd66dc1bb09955a0b4a7b42698f69911272b98 100644 --- a/arch/powerpc/kvm/powerpc.c +++ b/arch/powerpc/kvm/powerpc.c @@ -61,6 +61,11 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *v) return !!(v->arch.pending_exceptions) || kvm_request_pending(v); } +bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu) +{ + return kvm_arch_vcpu_runnable(vcpu); +} + bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu) { return false; diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c index 7296a42eb62e42f3ad6561f5833d638c990d11ab..cef0b7ee1024646cdc0edeb4d9c5cbbc0a4837fc 100644 --- a/arch/powerpc/mm/hugetlbpage.c +++ b/arch/powerpc/mm/hugetlbpage.c @@ -150,6 +150,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz } else { pdshift = PUD_SHIFT; pu = pud_alloc(mm, pg, addr); + if (!pu) + return NULL; if (pshift == PUD_SHIFT) return (pte_t *)pu; else if (pshift > PMD_SHIFT) { @@ -158,6 +160,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz } else { pdshift = PMD_SHIFT; pm = pmd_alloc(mm, pu, addr); + if (!pm) + return NULL; if (pshift == PMD_SHIFT) /* 16MB hugepage */ return (pte_t *)pm; @@ -174,12 +178,16 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, 
unsigned long addr, unsigned long sz } else { pdshift = PUD_SHIFT; pu = pud_alloc(mm, pg, addr); + if (!pu) + return NULL; if (pshift >= PUD_SHIFT) { ptl = pud_lockptr(mm, pu); hpdp = (hugepd_t *)pu; } else { pdshift = PMD_SHIFT; pm = pmd_alloc(mm, pu, addr); + if (!pm) + return NULL; ptl = pmd_lockptr(mm, pm); hpdp = (hugepd_t *)pm; } diff --git a/arch/powerpc/platforms/4xx/uic.c b/arch/powerpc/platforms/4xx/uic.c index 8b4dd0da08395556950c50029af05bcfad41085c..9e27cfe2702686fe1dc718e05341dd025c3bc3e6 100644 --- a/arch/powerpc/platforms/4xx/uic.c +++ b/arch/powerpc/platforms/4xx/uic.c @@ -158,6 +158,7 @@ static int uic_set_irq_type(struct irq_data *d, unsigned int flow_type) mtdcr(uic->dcrbase + UIC_PR, pr); mtdcr(uic->dcrbase + UIC_TR, tr); + mtdcr(uic->dcrbase + UIC_SR, ~mask); raw_spin_unlock_irqrestore(&uic->lock, flags); diff --git a/arch/powerpc/platforms/powermac/sleep.S b/arch/powerpc/platforms/powermac/sleep.S index f89808b9713d0671d51d7a38ac0e8ae320b4d575..b0660ef691779407ec1296d80586093683aa8dc2 100644 --- a/arch/powerpc/platforms/powermac/sleep.S +++ b/arch/powerpc/platforms/powermac/sleep.S @@ -38,10 +38,18 @@ #define SL_IBAT2 0x48 #define SL_DBAT3 0x50 #define SL_IBAT3 0x58 -#define SL_TB 0x60 -#define SL_R2 0x68 -#define SL_CR 0x6c -#define SL_R12 0x70 /* r12 to r31 */ +#define SL_DBAT4 0x60 +#define SL_IBAT4 0x68 +#define SL_DBAT5 0x70 +#define SL_IBAT5 0x78 +#define SL_DBAT6 0x80 +#define SL_IBAT6 0x88 +#define SL_DBAT7 0x90 +#define SL_IBAT7 0x98 +#define SL_TB 0xa0 +#define SL_R2 0xa8 +#define SL_CR 0xac +#define SL_R12 0xb0 /* r12 to r31 */ #define SL_SIZE (SL_R12 + 80) .section .text @@ -126,6 +134,41 @@ _GLOBAL(low_sleep_handler) mfibatl r4,3 stw r4,SL_IBAT3+4(r1) +BEGIN_MMU_FTR_SECTION + mfspr r4,SPRN_DBAT4U + stw r4,SL_DBAT4(r1) + mfspr r4,SPRN_DBAT4L + stw r4,SL_DBAT4+4(r1) + mfspr r4,SPRN_DBAT5U + stw r4,SL_DBAT5(r1) + mfspr r4,SPRN_DBAT5L + stw r4,SL_DBAT5+4(r1) + mfspr r4,SPRN_DBAT6U + stw r4,SL_DBAT6(r1) + mfspr r4,SPRN_DBAT6L + stw r4,SL_DBAT6+4(r1) + mfspr r4,SPRN_DBAT7U + stw r4,SL_DBAT7(r1) + mfspr r4,SPRN_DBAT7L + stw r4,SL_DBAT7+4(r1) + mfspr r4,SPRN_IBAT4U + stw r4,SL_IBAT4(r1) + mfspr r4,SPRN_IBAT4L + stw r4,SL_IBAT4+4(r1) + mfspr r4,SPRN_IBAT5U + stw r4,SL_IBAT5(r1) + mfspr r4,SPRN_IBAT5L + stw r4,SL_IBAT5+4(r1) + mfspr r4,SPRN_IBAT6U + stw r4,SL_IBAT6(r1) + mfspr r4,SPRN_IBAT6L + stw r4,SL_IBAT6+4(r1) + mfspr r4,SPRN_IBAT7U + stw r4,SL_IBAT7(r1) + mfspr r4,SPRN_IBAT7L + stw r4,SL_IBAT7+4(r1) +END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) + /* Backup various CPU config stuffs */ bl __save_cpu_setup @@ -326,22 +369,37 @@ grackle_wake_up: mtibatl 3,r4 BEGIN_MMU_FTR_SECTION - li r4,0 + lwz r4,SL_DBAT4(r1) mtspr SPRN_DBAT4U,r4 + lwz r4,SL_DBAT4+4(r1) mtspr SPRN_DBAT4L,r4 + lwz r4,SL_DBAT5(r1) mtspr SPRN_DBAT5U,r4 + lwz r4,SL_DBAT5+4(r1) mtspr SPRN_DBAT5L,r4 + lwz r4,SL_DBAT6(r1) mtspr SPRN_DBAT6U,r4 + lwz r4,SL_DBAT6+4(r1) mtspr SPRN_DBAT6L,r4 + lwz r4,SL_DBAT7(r1) mtspr SPRN_DBAT7U,r4 + lwz r4,SL_DBAT7+4(r1) mtspr SPRN_DBAT7L,r4 + lwz r4,SL_IBAT4(r1) mtspr SPRN_IBAT4U,r4 + lwz r4,SL_IBAT4+4(r1) mtspr SPRN_IBAT4L,r4 + lwz r4,SL_IBAT5(r1) mtspr SPRN_IBAT5U,r4 + lwz r4,SL_IBAT5+4(r1) mtspr SPRN_IBAT5L,r4 + lwz r4,SL_IBAT6(r1) mtspr SPRN_IBAT6U,r4 + lwz r4,SL_IBAT6+4(r1) mtspr SPRN_IBAT6L,r4 + lwz r4,SL_IBAT7(r1) mtspr SPRN_IBAT7U,r4 + lwz r4,SL_IBAT7+4(r1) mtspr SPRN_IBAT7L,r4 END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c index 
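/*
 * [Editor's note: illustrative sketch, not part of the patch series.]
 * The huge_pte_alloc() hunks above add the missing error handling:
 * pud_alloc()/pmd_alloc() return NULL when a page-table page cannot be
 * allocated, and the old code dereferenced the result unconditionally.
 * Using the identifiers from the surrounding function, the pattern is:
 */
pu = pud_alloc(mm, pg, addr);
if (!pu)
	return NULL;		/* report the allocation failure upward */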
8006c54a91e3fb6c08fa22e87631b9297e5dca5e..fd8166ffbffa74c12d8aec8561631bedfba5af69 100644 --- a/arch/powerpc/platforms/powernv/npu-dma.c +++ b/arch/powerpc/platforms/powernv/npu-dma.c @@ -56,9 +56,22 @@ static struct dentry *atsd_threshold_dentry; static struct pci_dev *get_pci_dev(struct device_node *dn) { struct pci_dn *pdn = PCI_DN(dn); + struct pci_dev *pdev; - return pci_get_domain_bus_and_slot(pci_domain_nr(pdn->phb->bus), + pdev = pci_get_domain_bus_and_slot(pci_domain_nr(pdn->phb->bus), pdn->busno, pdn->devfn); + + /* + * pci_get_domain_bus_and_slot() increased the reference count of + * the PCI device, but callers don't need that actually as the PE + * already holds a reference to the device. Since callers aren't + * aware of the reference count change, call pci_dev_put() now to + * avoid leaks. + */ + if (pdev) + pci_dev_put(pdev); + + return pdev; } /* Given a NPU device get the associated PCI device. */ diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c index e4c658cda3a732e489c931d5985bd5d84fca354e..f99cd31b6fd1a96ed5a6886ba650412f6fe4d6df 100644 --- a/arch/powerpc/platforms/pseries/hotplug-memory.c +++ b/arch/powerpc/platforms/pseries/hotplug-memory.c @@ -1012,6 +1012,9 @@ static int pseries_update_drconf_memory(struct of_reconfig_data *pr) if (!memblock_size) return -EINVAL; + if (!pr->old_prop) + return 0; + p = (__be32 *) pr->old_prop->value; if (!p) return -EINVAL; diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c index f0e30dc949888a578ecfd6939c65b1eecc210604..7b60fcf04dc476f2b1ae39606dfccfaae388923a 100644 --- a/arch/powerpc/platforms/pseries/mobility.c +++ b/arch/powerpc/platforms/pseries/mobility.c @@ -9,6 +9,7 @@ * 2 as published by the Free Software Foundation. */ +#include #include #include #include @@ -344,11 +345,19 @@ void post_mobility_fixup(void) if (rc) printk(KERN_ERR "Post-mobility activate-fw failed: %d\n", rc); + /* + * We don't want CPUs to go online/offline while the device + * tree is being updated. + */ + cpus_read_lock(); + rc = pseries_devicetree_update(MIGRATION_SCOPE); if (rc) printk(KERN_ERR "Post-mobility device tree update " "failed: %d\n", rc); + cpus_read_unlock(); + /* Possibly switch to a new RFI flush type */ pseries_setup_rfi_flush(); diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c index 959a2a62f23329775beba45473e944395005192a..0b24b10312213e60d7aff0d1575cb075dab1f79e 100644 --- a/arch/powerpc/sysdev/xive/common.c +++ b/arch/powerpc/sysdev/xive/common.c @@ -483,7 +483,7 @@ static int xive_find_target_in_mask(const struct cpumask *mask, * Now go through the entire mask until we find a valid * target. 
*/ - for (;;) { + do { /* * We re-check online as the fallback case passes us * an untested affinity mask @@ -491,12 +491,11 @@ static int xive_find_target_in_mask(const struct cpumask *mask, if (cpu_online(cpu) && xive_try_pick_target(cpu)) return cpu; cpu = cpumask_next(cpu, mask); - if (cpu == first) - break; /* Wrap around */ if (cpu >= nr_cpu_ids) cpu = cpumask_first(mask); - } + } while (cpu != first); + return -1; } diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c index dd6badc31f458488b1f0a37209adbf7a5536dc43..74cfc1be04d6e52308b1ab56445331837c010128 100644 --- a/arch/powerpc/xmon/xmon.c +++ b/arch/powerpc/xmon/xmon.c @@ -466,8 +466,10 @@ static int xmon_core(struct pt_regs *regs, int fromipi) local_irq_save(flags); hard_irq_disable(); - tracing_enabled = tracing_is_on(); - tracing_off(); + if (!fromipi) { + tracing_enabled = tracing_is_on(); + tracing_off(); + } bp = in_breakpoint_table(regs->nip, &offset); if (bp != NULL) { diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h index dd6b05bff75b6fd41b3fd12239ace6edcf022bad..d911a8c2314d209871c8cb29ac9f1a81d4236f53 100644 --- a/arch/riscv/include/asm/switch_to.h +++ b/arch/riscv/include/asm/switch_to.h @@ -23,7 +23,7 @@ extern void __fstate_restore(struct task_struct *restore_from); static inline void __fstate_clean(struct pt_regs *regs) { - regs->sstatus |= (regs->sstatus & ~(SR_FS)) | SR_FS_CLEAN; + regs->sstatus = (regs->sstatus & ~SR_FS) | SR_FS_CLEAN; } static inline void fstate_save(struct task_struct *task, diff --git a/arch/s390/include/asm/facility.h b/arch/s390/include/asm/facility.h index 99c8ce30b3cd1a4f70540ffa0d03cbb99e35df4d..7ffbc5d7ccf380fde85b0c13df4c019d50a87a88 100644 --- a/arch/s390/include/asm/facility.h +++ b/arch/s390/include/asm/facility.h @@ -59,6 +59,18 @@ static inline int test_facility(unsigned long nr) return __test_facility(nr, &S390_lowcore.stfle_fac_list); } +static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size) +{ + register unsigned long reg0 asm("0") = size - 1; + + asm volatile( + ".insn s,0xb2b00000,0(%1)" /* stfle */ + : "+d" (reg0) + : "a" (stfle_fac_list) + : "memory", "cc"); + return reg0; +} + /** * stfle - Store facility list extended * @stfle_fac_list: array where facility list can be stored @@ -76,13 +88,8 @@ static inline void stfle(u64 *stfle_fac_list, int size) memcpy(stfle_fac_list, &S390_lowcore.stfl_fac_list, 4); if (S390_lowcore.stfl_fac_list & 0x01000000) { /* More facility bits available with stfle */ - register unsigned long reg0 asm("0") = size - 1; - - asm volatile(".insn s,0xb2b00000,0(%1)" /* stfle */ - : "+d" (reg0) - : "a" (stfle_fac_list) - : "memory", "cc"); - nr = (reg0 + 1) * 8; /* # bytes stored by stfle */ + nr = __stfle_asm(stfle_fac_list, size); + nr = min_t(unsigned long, (nr + 1) * 8, size * 8); } memset((char *) stfle_fac_list + nr, 0, size * 8 - nr); preempt_enable(); diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h index 41e3908b397f8f2faa5bab59266fec25c635a6a7..0d753291c43c0f2427a2234b2fde9074b290ae3c 100644 --- a/arch/s390/include/asm/page.h +++ b/arch/s390/include/asm/page.h @@ -176,6 +176,8 @@ static inline int devmem_is_allowed(unsigned long pfn) #define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | \ VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) +#define ARCH_ZONE_DMA_BITS 31 + #include #include diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S index b43f8d33a3697de32e7c9f7dae4cbf2e7cf3bc46..18ede6e806b917ce8a302bf864219de236282b4a 100644 --- 
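/*
 * [Editor's note: illustrative sketch, not part of the patch series.]
 * In the s390 facility.h hunk above, stfle reports how many double-words
 * of facility bits the machine provides, which can be more than the
 * caller's buffer holds. Clamping the byte count to the buffer size
 * keeps the trailing memset() from being handed a negative (i.e. huge
 * unsigned) length. Using the names from that hunk:
 */
nr = __stfle_asm(stfle_fac_list, size);		   /* double-words available - 1 */
nr = min_t(unsigned long, (nr + 1) * 8, size * 8); /* never past the buffer     */
memset((char *)stfle_fac_list + nr, 0, size * 8 - nr);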
a/arch/s390/kernel/vmlinux.lds.S +++ b/arch/s390/kernel/vmlinux.lds.S @@ -31,10 +31,9 @@ PHDRS { SECTIONS { . = 0x100000; - _stext = .; /* Start of text section */ .text : { - /* Text and read-only data */ - _text = .; + _stext = .; /* Start of text section */ + _text = .; /* Text and read-only data */ HEAD_TEXT TEXT_TEXT SCHED_TEXT @@ -46,11 +45,10 @@ SECTIONS *(.text.*_indirect_*) *(.fixup) *(.gnu.warning) + . = ALIGN(PAGE_SIZE); + _etext = .; /* End of text section */ } :text = 0x0700 - . = ALIGN(PAGE_SIZE); - _etext = .; /* End of text section */ - NOTES :text :note .dummy : { *(.dummy) } :data diff --git a/arch/sh/boards/Kconfig b/arch/sh/boards/Kconfig index 6394b4f0a69be95d4a44b1c347ceb4f09dfdb166..f42feab25dcf1deddf7deeb1e54daf2d50eb9f14 100644 --- a/arch/sh/boards/Kconfig +++ b/arch/sh/boards/Kconfig @@ -8,27 +8,19 @@ config SH_ALPHA_BOARD bool config SH_DEVICE_TREE - bool "Board Described by Device Tree" + bool select OF select OF_EARLY_FLATTREE select TIMER_OF select COMMON_CLK select GENERIC_CALIBRATE_DELAY - help - Select Board Described by Device Tree to build a kernel that - does not hard-code any board-specific knowledge but instead uses - a device tree blob provided by the boot-loader. You must enable - drivers for any hardware you want to use separately. At this - time, only boards based on the open-hardware J-Core processors - have sufficient driver coverage to use this option; do not - select it if you are using original SuperH hardware. config SH_JCORE_SOC bool "J-Core SoC" - depends on SH_DEVICE_TREE && (CPU_SH2 || CPU_J2) + select SH_DEVICE_TREE select CLKSRC_JCORE_PIT select JCORE_AIC - default y if CPU_J2 + depends on CPU_J2 help Select this option to include drivers core components of the J-Core SoC, including interrupt controllers and timers. diff --git a/arch/sh/include/asm/io.h b/arch/sh/include/asm/io.h index 98cb8c802b1a8cccafb1cd52d4717a149490792c..0ae60d6800004f332b25f6e447077b2ee8227184 100644 --- a/arch/sh/include/asm/io.h +++ b/arch/sh/include/asm/io.h @@ -371,7 +371,11 @@ static inline int iounmap_fixed(void __iomem *addr) { return -EINVAL; } #define ioremap_nocache ioremap #define ioremap_uc ioremap -#define iounmap __iounmap + +static inline void iounmap(void __iomem *addr) +{ + __iounmap(addr); +} /* * Convert a physical pointer to a virtual kernel pointer for /dev/mem diff --git a/arch/sh/kernel/hw_breakpoint.c b/arch/sh/kernel/hw_breakpoint.c index d9ff3b42da7cb11a3e6d62ec39dcfc2c35ce40d2..2569ffc061f9c69e7896ef47eb0bc53ed9947768 100644 --- a/arch/sh/kernel/hw_breakpoint.c +++ b/arch/sh/kernel/hw_breakpoint.c @@ -160,6 +160,7 @@ int arch_bp_generic_fields(int sh_len, int sh_type, switch (sh_type) { case SH_BREAKPOINT_READ: *gen_type = HW_BREAKPOINT_R; + break; case SH_BREAKPOINT_WRITE: *gen_type = HW_BREAKPOINT_W; break; diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h index fca34b2177e28a055663055d01c4fb7d78420285..129fb1d1f1c5b346b06a2f5da43ce8eae858e940 100644 --- a/arch/um/include/asm/mmu_context.h +++ b/arch/um/include/asm/mmu_context.h @@ -53,7 +53,7 @@ static inline void activate_mm(struct mm_struct *old, struct mm_struct *new) * when the new ->mm is used for the first time. 
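/*
 * [Editor's note: illustrative sketch, not part of the patch series.]
 * The sh hw_breakpoint hunk above adds a missing break. Without it,
 * SH_BREAKPOINT_READ fell straight through, so every read breakpoint
 * was silently registered as a write breakpoint. Fixed shape:
 */
switch (sh_type) {
case SH_BREAKPOINT_READ:
	*gen_type = HW_BREAKPOINT_R;
	break;		/* previously missing: fell through to ..._W */
case SH_BREAKPOINT_WRITE:
	*gen_type = HW_BREAKPOINT_W;
	break;
}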
*/ __switch_mm(&new->context.id); - down_write(&new->mmap_sem); + down_write_nested(&new->mmap_sem, 1); uml_setup_stubs(new); up_write(&new->mmap_sem); } diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c index 8dd1d5ccae58023fb7cb1e3c0cf83255b6b1988b..0387d7a96c842b335ba836dfe4a44c1af5120f19 100644 --- a/arch/x86/boot/compressed/misc.c +++ b/arch/x86/boot/compressed/misc.c @@ -17,6 +17,7 @@ #include "pgtable.h" #include "../string.h" #include "../voffset.h" +#include /* * WARNING!! diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h index a423bdb426862dc25c7df6ab8309b3d0d6108a38..47fd18db6b3bf54fa40e20f4856df7d07e8f15e0 100644 --- a/arch/x86/boot/compressed/misc.h +++ b/arch/x86/boot/compressed/misc.h @@ -22,7 +22,6 @@ #include #include #include -#include #define BOOT_BOOT_H #include "../ctype.h" diff --git a/arch/x86/boot/string.c b/arch/x86/boot/string.c index c4428a176973311950429e0ba00c4ce13e9f6fe6..2622c0742c92d0e4a6b515d37c2a2f6b1a461758 100644 --- a/arch/x86/boot/string.c +++ b/arch/x86/boot/string.c @@ -34,6 +34,14 @@ int memcmp(const void *s1, const void *s2, size_t len) return diff; } +/* + * Clang may lower `memcmp == 0` to `bcmp == 0`. + */ +int bcmp(const void *s1, const void *s2, size_t len) +{ + return memcmp(s1, s2, len); +} + int strcmp(const char *str1, const char *str2) { const unsigned char *s1 = (const unsigned char *)str1; diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h index e699b2041665360024b4904809c514bf481ff801..578b5455334f01cbd7daba2c8acc3196caac2461 100644 --- a/arch/x86/entry/calling.h +++ b/arch/x86/entry/calling.h @@ -329,6 +329,23 @@ For 32-bit we have the following conventions - kernel is built with #endif +/* + * Mitigate Spectre v1 for conditional swapgs code paths. + * + * FENCE_SWAPGS_USER_ENTRY is used in the user entry swapgs code path, to + * prevent a speculative swapgs when coming from kernel space. + * + * FENCE_SWAPGS_KERNEL_ENTRY is used in the kernel entry non-swapgs code path, + * to prevent the swapgs from getting speculatively skipped when coming from + * user space. 
+ */ +.macro FENCE_SWAPGS_USER_ENTRY + ALTERNATIVE "", "lfence", X86_FEATURE_FENCE_SWAPGS_USER +.endm +.macro FENCE_SWAPGS_KERNEL_ENTRY + ALTERNATIVE "", "lfence", X86_FEATURE_FENCE_SWAPGS_KERNEL +.endm + #endif /* CONFIG_X86_64 */ /* diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index 0b25d2efdb876e202ec5f1c5bd3342c5e66fdc8b..d880352e410ca86965041e93f530930f1597f415 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -1115,6 +1115,30 @@ ENTRY(irq_entries_start) .endr END(irq_entries_start) +#ifdef CONFIG_X86_LOCAL_APIC + .align 8 +ENTRY(spurious_entries_start) + vector=FIRST_SYSTEM_VECTOR + .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR) + pushl $(~vector+0x80) /* Note: always in signed byte range */ + vector=vector+1 + jmp common_spurious + .align 8 + .endr +END(spurious_entries_start) + +common_spurious: + ASM_CLAC + addl $-0x80, (%esp) /* Adjust vector into the [-256, -1] range */ + SAVE_ALL switch_stacks=1 + ENCODE_FRAME_POINTER + TRACE_IRQS_OFF + movl %esp, %eax + call smp_spurious_interrupt + jmp ret_from_intr +ENDPROC(common_spurious) +#endif + /* * the CPU automatically disables interrupts when executing an IRQ vector, * so IRQ-flags tracing has to follow that: diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 23dda6f4a69f879927655618ab1206f2a104bbd2..663a99f6320f21e2593ebc5355a28af6ad547f1f 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -438,6 +438,18 @@ ENTRY(irq_entries_start) .endr END(irq_entries_start) + .align 8 +ENTRY(spurious_entries_start) + vector=FIRST_SYSTEM_VECTOR + .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR) + UNWIND_HINT_IRET_REGS + pushq $(~vector+0x80) /* Note: always in signed byte range */ + jmp common_spurious + .align 8 + vector=vector+1 + .endr +END(spurious_entries_start) + .macro DEBUG_ENTRY_ASSERT_IRQS_OFF #ifdef CONFIG_DEBUG_ENTRY pushq %rax @@ -570,7 +582,7 @@ ENTRY(interrupt_entry) testb $3, CS-ORIG_RAX+8(%rsp) jz 1f SWAPGS - + FENCE_SWAPGS_USER_ENTRY /* * Switch to the thread stack. The IRET frame and orig_ax are * on the stack, as well as the return address. RDI..R12 are @@ -600,8 +612,10 @@ ENTRY(interrupt_entry) UNWIND_HINT_FUNC movq (%rdi), %rdi + jmp 2f 1: - + FENCE_SWAPGS_KERNEL_ENTRY +2: PUSH_AND_CLEAR_REGS save_ret=1 ENCODE_FRAME_POINTER 8 @@ -634,10 +648,20 @@ _ASM_NOKPROBE(interrupt_entry) /* Interrupt entry/exit. */ - /* - * The interrupt stubs push (~vector+0x80) onto the stack and - * then jump to common_interrupt. - */ +/* + * The interrupt stubs push (~vector+0x80) onto the stack and + * then jump to common_spurious/interrupt. + */ +common_spurious: + addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */ + call interrupt_entry + UNWIND_HINT_REGS indirect=1 + call smp_spurious_interrupt /* rdi points to pt_regs */ + jmp ret_from_intr +END(common_spurious) +_ASM_NOKPROBE(common_spurious) + +/* common_interrupt is a hotpath. 
Align it */ .p2align CONFIG_X86_L1_CACHE_SHIFT common_interrupt: addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */ @@ -1192,7 +1216,6 @@ idtentry stack_segment do_stack_segment has_error_code=1 #ifdef CONFIG_XEN idtentry xennmi do_nmi has_error_code=0 idtentry xendebug do_debug has_error_code=0 -idtentry xenint3 do_int3 has_error_code=0 #endif idtentry general_protection do_general_protection has_error_code=1 @@ -1237,6 +1260,13 @@ ENTRY(paranoid_entry) */ SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14 + /* + * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an + * unconditional CR3 write, even in the PTI case. So do an lfence + * to prevent GS speculation, regardless of whether PTI is enabled. + */ + FENCE_SWAPGS_KERNEL_ENTRY + ret END(paranoid_entry) @@ -1287,6 +1317,7 @@ ENTRY(error_entry) * from user mode due to an IRET fault. */ SWAPGS + FENCE_SWAPGS_USER_ENTRY /* We have user CR3. Change to kernel CR3. */ SWITCH_TO_KERNEL_CR3 scratch_reg=%rax @@ -1308,6 +1339,8 @@ ENTRY(error_entry) CALL_enter_from_user_mode ret +.Lerror_entry_done_lfence: + FENCE_SWAPGS_KERNEL_ENTRY .Lerror_entry_done: TRACE_IRQS_OFF ret @@ -1326,7 +1359,7 @@ ENTRY(error_entry) cmpq %rax, RIP+8(%rsp) je .Lbstep_iret cmpq $.Lgs_change, RIP+8(%rsp) - jne .Lerror_entry_done + jne .Lerror_entry_done_lfence /* * hack: .Lgs_change can fail with user gsbase. If this happens, fix up @@ -1334,6 +1367,7 @@ ENTRY(error_entry) * .Lgs_change's error handler with kernel gsbase. */ SWAPGS + FENCE_SWAPGS_USER_ENTRY SWITCH_TO_KERNEL_CR3 scratch_reg=%rax jmp .Lerror_entry_done @@ -1348,6 +1382,7 @@ ENTRY(error_entry) * gsbase and CR3. Switch to kernel gsbase and CR3: */ SWAPGS + FENCE_SWAPGS_USER_ENTRY SWITCH_TO_KERNEL_CR3 scratch_reg=%rax /* @@ -1439,6 +1474,7 @@ ENTRY(nmi) swapgs cld + FENCE_SWAPGS_USER_ENTRY SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx movq %rsp, %rdx movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c index e48ca3afa0912cc8bb03bd6dba84b0999abe1982..8a88e738f87db7d49265814cfc99e0d262677fa5 100644 --- a/arch/x86/entry/vdso/vclock_gettime.c +++ b/arch/x86/entry/vdso/vclock_gettime.c @@ -29,12 +29,12 @@ extern int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz); extern time_t __vdso_time(time_t *t); #ifdef CONFIG_PARAVIRT_CLOCK -extern u8 pvclock_page +extern u8 pvclock_page[PAGE_SIZE] __attribute__((visibility("hidden"))); #endif #ifdef CONFIG_HYPERV_TSCPAGE -extern u8 hvclock_page +extern u8 hvclock_page[PAGE_SIZE] __attribute__((visibility("hidden"))); #endif @@ -191,13 +191,24 @@ notrace static inline u64 vgetsns(int *mode) if (gtod->vclock_mode == VCLOCK_TSC) cycles = vread_tsc(); + + /* + * For any memory-mapped vclock type, we need to make sure that gcc + * doesn't cleverly hoist a load before the mode check. Otherwise we + * might end up touching the memory-mapped page even if the vclock in + * question isn't enabled, which will segfault. Hence the barriers. 
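The hazard described in the comment above is purely a compiler-level one: because the clock page is a fixed-address symbol, GCC may hoist the load above the vclock_mode check and touch a page that is not mapped. A minimal hedged sketch of the pattern, with names that are illustrative rather than the kernel's::

	/* Sketch only -- not part of the patch above. */
	#define compiler_barrier()	asm volatile("" ::: "memory")

	/* Mimics pvclock_page/hvclock_page: a fixed-address page that may
	 * not actually be mapped at run time. */
	extern unsigned long long clock_page[512];

	static unsigned long long read_cycles(int mode)
	{
		unsigned long long cycles = 0;

		if (mode == 1 /* e.g. VCLOCK_PVCLOCK */) {
			/*
			 * Because clock_page has a known, fixed address, the
			 * compiler could otherwise hoist this load above the
			 * mode check and fault on an unmapped page.
			 */
			compiler_barrier();
			cycles = clock_page[0];
		}
		return cycles;
	}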
+ */ #ifdef CONFIG_PARAVIRT_CLOCK - else if (gtod->vclock_mode == VCLOCK_PVCLOCK) + else if (gtod->vclock_mode == VCLOCK_PVCLOCK) { + barrier(); cycles = vread_pvclock(mode); + } #endif #ifdef CONFIG_HYPERV_TSCPAGE - else if (gtod->vclock_mode == VCLOCK_HVCLOCK) + else if (gtod->vclock_mode == VCLOCK_HVCLOCK) { + barrier(); cycles = vread_hvclock(mode); + } #endif else return 0; diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c index 8671de126eac09e0a63358d72305ce0a5e9f4f31..baa7e36073f907b19a983459aa15e66a682b9f95 100644 --- a/arch/x86/events/amd/uncore.c +++ b/arch/x86/events/amd/uncore.c @@ -210,15 +210,22 @@ static int amd_uncore_event_init(struct perf_event *event) hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB; hwc->idx = -1; + if (event->cpu < 0) + return -EINVAL; + /* * SliceMask and ThreadMask need to be set for certain L3 events in * Family 17h. For other events, the two fields do not affect the count. */ - if (l3_mask) - hwc->config |= (AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK); + if (l3_mask && is_llc_event(event)) { + int thread = 2 * (cpu_data(event->cpu).cpu_core_id % 4); - if (event->cpu < 0) - return -EINVAL; + if (smp_num_siblings > 1) + thread += cpu_data(event->cpu).apicid & 1; + + hwc->config |= (1ULL << (AMD64_L3_THREAD_SHIFT + thread) & + AMD64_L3_THREAD_MASK) | AMD64_L3_SLICE_MASK; + } uncore = event_to_amd_uncore(event); if (!uncore) diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c index c8b0bf2b0d5e40b4eeae55f0437283080c1fe947..db5a2ba6175366ed6eb60dd38c46fd275e79e635 100644 --- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -2074,12 +2074,10 @@ static void intel_pmu_disable_event(struct perf_event *event) cpuc->intel_ctrl_host_mask &= ~(1ull << hwc->idx); cpuc->intel_cp_status &= ~(1ull << hwc->idx); - if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) { + if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) intel_pmu_disable_fixed(hwc); - return; - } - - x86_pmu_disable_event(event); + else + x86_pmu_disable_event(event); /* * Needs to be called after x86_pmu_disable_event, diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h index cc6dd4f78158b91a7b56f960ef8b5acebe4c8b34..42fa3974c421c01ec79ccb9809bf023689919519 100644 --- a/arch/x86/events/intel/uncore.h +++ b/arch/x86/events/intel/uncore.h @@ -402,6 +402,16 @@ static inline bool is_freerunning_event(struct perf_event *event) (((cfg >> 8) & 0xff) >= UNCORE_FREERUNNING_UMASK_START); } +/* Check and reject invalid config */ +static inline int uncore_freerunning_hw_config(struct intel_uncore_box *box, + struct perf_event *event) +{ + if (is_freerunning_event(event)) + return 0; + + return -EINVAL; +} + static inline void uncore_disable_box(struct intel_uncore_box *box) { if (box->pmu->type->ops->disable_box) diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c index b10e04387f380342c9b9427012b09b9534a347d1..8e4e8e423839a1dc77e4e18279ce98f07fc9f2cc 100644 --- a/arch/x86/events/intel/uncore_snbep.c +++ b/arch/x86/events/intel/uncore_snbep.c @@ -3585,6 +3585,7 @@ static struct uncore_event_desc skx_uncore_iio_freerunning_events[] = { static struct intel_uncore_ops skx_uncore_iio_freerunning_ops = { .read_counter = uncore_msr_read_counter, + .hw_config = uncore_freerunning_hw_config, }; static struct attribute *skx_uncore_iio_freerunning_formats_attr[] = { diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h index 
130e81e10fc7cfae6dd6e193965f65648a0d1a79..050368db9d35763d3afae17bf2a28cb2343f4548 100644 --- a/arch/x86/include/asm/apic.h +++ b/arch/x86/include/asm/apic.h @@ -48,7 +48,7 @@ static inline void generic_apic_probe(void) #ifdef CONFIG_X86_LOCAL_APIC -extern unsigned int apic_verbosity; +extern int apic_verbosity; extern int local_apic_timer_c2_ok; extern int disable_apic; diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h index ce84388e540c918a01e31599bc78881a5b9d2954..d266a4066289364e7af0733d42dcfecd6045802d 100644 --- a/arch/x86/include/asm/atomic.h +++ b/arch/x86/include/asm/atomic.h @@ -54,7 +54,7 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v) { asm volatile(LOCK_PREFIX "addl %1,%0" : "+m" (v->counter) - : "ir" (i)); + : "ir" (i) : "memory"); } /** @@ -68,7 +68,7 @@ static __always_inline void arch_atomic_sub(int i, atomic_t *v) { asm volatile(LOCK_PREFIX "subl %1,%0" : "+m" (v->counter) - : "ir" (i)); + : "ir" (i) : "memory"); } /** @@ -95,7 +95,7 @@ static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v) static __always_inline void arch_atomic_inc(atomic_t *v) { asm volatile(LOCK_PREFIX "incl %0" - : "+m" (v->counter)); + : "+m" (v->counter) :: "memory"); } #define arch_atomic_inc arch_atomic_inc @@ -108,7 +108,7 @@ static __always_inline void arch_atomic_inc(atomic_t *v) static __always_inline void arch_atomic_dec(atomic_t *v) { asm volatile(LOCK_PREFIX "decl %0" - : "+m" (v->counter)); + : "+m" (v->counter) :: "memory"); } #define arch_atomic_dec arch_atomic_dec diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index 5f851d92eecd9ee8eaad86c6b002937633e9144f..55ca027f8c1c3c6c2a03ae17877085530b977f89 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -45,7 +45,7 @@ static __always_inline void arch_atomic64_add(long i, atomic64_t *v) { asm volatile(LOCK_PREFIX "addq %1,%0" : "=m" (v->counter) - : "er" (i), "m" (v->counter)); + : "er" (i), "m" (v->counter) : "memory"); } /** @@ -59,7 +59,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v) { asm volatile(LOCK_PREFIX "subq %1,%0" : "=m" (v->counter) - : "er" (i), "m" (v->counter)); + : "er" (i), "m" (v->counter) : "memory"); } /** @@ -87,7 +87,7 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v) { asm volatile(LOCK_PREFIX "incq %0" : "=m" (v->counter) - : "m" (v->counter)); + : "m" (v->counter) : "memory"); } #define arch_atomic64_inc arch_atomic64_inc @@ -101,7 +101,7 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v) { asm volatile(LOCK_PREFIX "decq %0" : "=m" (v->counter) - : "m" (v->counter)); + : "m" (v->counter) : "memory"); } #define arch_atomic64_dec arch_atomic64_dec diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h index 14de0432d288414bd1437e44b8cb13facc6f12e9..84f848c2541a6e5febb218fc64d209270c80e9bc 100644 --- a/arch/x86/include/asm/barrier.h +++ b/arch/x86/include/asm/barrier.h @@ -80,8 +80,8 @@ do { \ }) /* Atomic operations are already serializing on x86 */ -#define __smp_mb__before_atomic() barrier() -#define __smp_mb__after_atomic() barrier() +#define __smp_mb__before_atomic() do { } while (0) +#define __smp_mb__after_atomic() do { } while (0) #include diff --git a/arch/x86/include/asm/bootparam_utils.h b/arch/x86/include/asm/bootparam_utils.h index a07ffd23e4dd67d3e182bd803eb868eaef1bcdf5..d3983fdf1012160dff7ed5a48b7793124dfd8f6c 100644 --- a/arch/x86/include/asm/bootparam_utils.h +++ 
b/arch/x86/include/asm/bootparam_utils.h @@ -18,6 +18,20 @@ * Note: efi_info is commonly left uninitialized, but that field has a * private magic, so it is better to leave it unchanged. */ + +#define sizeof_mbr(type, member) ({ sizeof(((type *)0)->member); }) + +#define BOOT_PARAM_PRESERVE(struct_member) \ + { \ + .start = offsetof(struct boot_params, struct_member), \ + .len = sizeof_mbr(struct boot_params, struct_member), \ + } + +struct boot_params_to_save { + unsigned int start; + unsigned int len; +}; + static void sanitize_boot_params(struct boot_params *boot_params) { /* @@ -36,19 +50,40 @@ static void sanitize_boot_params(struct boot_params *boot_params) */ if (boot_params->sentinel) { /* fields in boot_params are left uninitialized, clear them */ - memset(&boot_params->ext_ramdisk_image, 0, - (char *)&boot_params->efi_info - - (char *)&boot_params->ext_ramdisk_image); - memset(&boot_params->kbd_status, 0, - (char *)&boot_params->hdr - - (char *)&boot_params->kbd_status); - memset(&boot_params->_pad7[0], 0, - (char *)&boot_params->edd_mbr_sig_buffer[0] - - (char *)&boot_params->_pad7[0]); - memset(&boot_params->_pad8[0], 0, - (char *)&boot_params->eddbuf[0] - - (char *)&boot_params->_pad8[0]); - memset(&boot_params->_pad9[0], 0, sizeof(boot_params->_pad9)); + static struct boot_params scratch; + char *bp_base = (char *)boot_params; + char *save_base = (char *)&scratch; + int i; + + const struct boot_params_to_save to_save[] = { + BOOT_PARAM_PRESERVE(screen_info), + BOOT_PARAM_PRESERVE(apm_bios_info), + BOOT_PARAM_PRESERVE(tboot_addr), + BOOT_PARAM_PRESERVE(ist_info), + BOOT_PARAM_PRESERVE(hd0_info), + BOOT_PARAM_PRESERVE(hd1_info), + BOOT_PARAM_PRESERVE(sys_desc_table), + BOOT_PARAM_PRESERVE(olpc_ofw_header), + BOOT_PARAM_PRESERVE(efi_info), + BOOT_PARAM_PRESERVE(alt_mem_k), + BOOT_PARAM_PRESERVE(scratch), + BOOT_PARAM_PRESERVE(e820_entries), + BOOT_PARAM_PRESERVE(eddbuf_entries), + BOOT_PARAM_PRESERVE(edd_mbr_sig_buf_entries), + BOOT_PARAM_PRESERVE(edd_mbr_sig_buffer), + BOOT_PARAM_PRESERVE(hdr), + BOOT_PARAM_PRESERVE(e820_table), + BOOT_PARAM_PRESERVE(eddbuf), + }; + + memset(&scratch, 0, sizeof(scratch)); + + for (i = 0; i < ARRAY_SIZE(to_save); i++) { + memcpy(save_base + to_save[i].start, + bp_base + to_save[i].start, to_save[i].len); + } + + memcpy(boot_params, save_base, sizeof(*boot_params)); } } diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h index ce95b8cbd2296b1e33de2e0f520a00f3981e3f23..68889ace9c4c6b9f8e12752f8d66039ef02e375a 100644 --- a/arch/x86/include/asm/cpufeature.h +++ b/arch/x86/include/asm/cpufeature.h @@ -22,8 +22,8 @@ enum cpuid_leafs CPUID_LNX_3, CPUID_7_0_EBX, CPUID_D_1_EAX, - CPUID_F_0_EDX, - CPUID_F_1_EDX, + CPUID_LNX_4, + CPUID_DUMMY, CPUID_8000_0008_EBX, CPUID_6_EAX, CPUID_8000_000A_EDX, diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 69037da75ea024602c66d4eb7602abfd4eec0141..759f0a1766124513cf015cf2f2a547710477b9f8 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -239,12 +239,14 @@ #define X86_FEATURE_BMI1 ( 9*32+ 3) /* 1st group bit manipulation extensions */ #define X86_FEATURE_HLE ( 9*32+ 4) /* Hardware Lock Elision */ #define X86_FEATURE_AVX2 ( 9*32+ 5) /* AVX2 instructions */ +#define X86_FEATURE_FDP_EXCPTN_ONLY ( 9*32+ 6) /* "" FPU data pointer updated only on x87 exceptions */ #define X86_FEATURE_SMEP ( 9*32+ 7) /* Supervisor Mode Execution Protection */ #define X86_FEATURE_BMI2 ( 9*32+ 8) /* 2nd group bit manipulation 
extensions */ #define X86_FEATURE_ERMS ( 9*32+ 9) /* Enhanced REP MOVSB/STOSB instructions */ #define X86_FEATURE_INVPCID ( 9*32+10) /* Invalidate Processor Context ID */ #define X86_FEATURE_RTM ( 9*32+11) /* Restricted Transactional Memory */ #define X86_FEATURE_CQM ( 9*32+12) /* Cache QoS Monitoring */ +#define X86_FEATURE_ZERO_FCS_FDS ( 9*32+13) /* "" Zero out FPU CS and FPU DS */ #define X86_FEATURE_MPX ( 9*32+14) /* Memory Protection Extension */ #define X86_FEATURE_RDT_A ( 9*32+15) /* Resource Director Technology Allocation */ #define X86_FEATURE_AVX512F ( 9*32+16) /* AVX-512 Foundation */ @@ -269,13 +271,18 @@ #define X86_FEATURE_XGETBV1 (10*32+ 2) /* XGETBV with ECX = 1 instruction */ #define X86_FEATURE_XSAVES (10*32+ 3) /* XSAVES/XRSTORS instructions */ -/* Intel-defined CPU QoS Sub-leaf, CPUID level 0x0000000F:0 (EDX), word 11 */ -#define X86_FEATURE_CQM_LLC (11*32+ 1) /* LLC QoS if 1 */ - -/* Intel-defined CPU QoS Sub-leaf, CPUID level 0x0000000F:1 (EDX), word 12 */ -#define X86_FEATURE_CQM_OCCUP_LLC (12*32+ 0) /* LLC occupancy monitoring */ -#define X86_FEATURE_CQM_MBM_TOTAL (12*32+ 1) /* LLC Total MBM monitoring */ -#define X86_FEATURE_CQM_MBM_LOCAL (12*32+ 2) /* LLC Local MBM monitoring */ +/* + * Extended auxiliary flags: Linux defined - for features scattered in various + * CPUID levels like 0xf, etc. + * + * Reuse free bits when adding new feature flags! + */ +#define X86_FEATURE_CQM_LLC (11*32+ 0) /* LLC QoS if 1 */ +#define X86_FEATURE_CQM_OCCUP_LLC (11*32+ 1) /* LLC occupancy monitoring */ +#define X86_FEATURE_CQM_MBM_TOTAL (11*32+ 2) /* LLC Total MBM monitoring */ +#define X86_FEATURE_CQM_MBM_LOCAL (11*32+ 3) /* LLC Local MBM monitoring */ +#define X86_FEATURE_FENCE_SWAPGS_USER (11*32+ 4) /* "" LFENCE in user entry SWAPGS path */ +#define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */ /* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */ #define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */ @@ -381,5 +388,6 @@ #define X86_BUG_L1TF X86_BUG(18) /* CPU is affected by L1 Terminal Fault */ #define X86_BUG_MDS X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */ #define X86_BUG_MSBDS_ONLY X86_BUG(20) /* CPU is only affected by the MSDBS variant of BUG_MDS */ +#define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */ #endif /* _ASM_X86_CPUFEATURES_H */ diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h index 32e666e1231e77f128252b2f9354f708081cafdb..cbd97e22d2f3197053f785dd45f59e08448e7d6a 100644 --- a/arch/x86/include/asm/hw_irq.h +++ b/arch/x86/include/asm/hw_irq.h @@ -150,8 +150,11 @@ extern char irq_entries_start[]; #define trace_irq_entries_start irq_entries_start #endif +extern char spurious_entries_start[]; + #define VECTOR_UNUSED NULL -#define VECTOR_RETRIGGERED ((void *)~0UL) +#define VECTOR_SHUTDOWN ((void *)~0UL) +#define VECTOR_RETRIGGERED ((void *)~1UL) typedef struct irq_desc* vector_irq_t[NR_VECTORS]; DECLARE_PER_CPU(vector_irq_t, vector_irq); diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h index 2e38fb82b91d1286f1c4adc13a8741ab6dc828dd..aebedbaf5260747788d22e9a7b154104e5fa9ea6 100644 --- a/arch/x86/include/asm/intel-family.h +++ b/arch/x86/include/asm/intel-family.h @@ -56,6 +56,7 @@ #define INTEL_FAM6_ICELAKE_XEON_D 0x6C #define INTEL_FAM6_ICELAKE_DESKTOP 0x7D #define INTEL_FAM6_ICELAKE_MOBILE 0x7E +#define INTEL_FAM6_ICELAKE_NNPI 0x9D /* "Small Core" Processors (Atom) */ diff --git 
a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 7014dba23d20ceda17a760ad6dc9ee4f8ec020ac..3245b95ad2d97e06842b6561769347793c5d0161 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1113,6 +1113,7 @@ struct kvm_x86_ops { int (*update_pi_irte)(struct kvm *kvm, unsigned int host_irq, uint32_t guest_irq, bool set); void (*apicv_post_state_restore)(struct kvm_vcpu *vcpu); + bool (*dy_apicv_has_pending_interrupt)(struct kvm_vcpu *vcpu); int (*set_hv_timer)(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc); void (*cancel_hv_timer)(struct kvm_vcpu *vcpu); @@ -1427,25 +1428,29 @@ enum { #define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0) #define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm) +asmlinkage void __noreturn kvm_spurious_fault(void); + /* * Hardware virtualization extension instructions may fault if a * reboot turns off virtualization while processes are running. - * Trap the fault and ignore the instruction if that happens. + * Usually after catching the fault we just panic; during reboot + * instead the instruction is ignored. */ -asmlinkage void kvm_spurious_fault(void); - -#define ____kvm_handle_fault_on_reboot(insn, cleanup_insn) \ - "666: " insn "\n\t" \ - "668: \n\t" \ - ".pushsection .fixup, \"ax\" \n" \ - "667: \n\t" \ - cleanup_insn "\n\t" \ - "cmpb $0, kvm_rebooting \n\t" \ - "jne 668b \n\t" \ - __ASM_SIZE(push) " $666b \n\t" \ - "jmp kvm_spurious_fault \n\t" \ - ".popsection \n\t" \ - _ASM_EXTABLE(666b, 667b) +#define ____kvm_handle_fault_on_reboot(insn, cleanup_insn) \ + "666: \n\t" \ + insn "\n\t" \ + "jmp 668f \n\t" \ + "667: \n\t" \ + "call kvm_spurious_fault \n\t" \ + "668: \n\t" \ + ".pushsection .fixup, \"ax\" \n\t" \ + "700: \n\t" \ + cleanup_insn "\n\t" \ + "cmpb $0, kvm_rebooting\n\t" \ + "je 667b \n\t" \ + "jmp 668b \n\t" \ + ".popsection \n\t" \ + _ASM_EXTABLE(666b, 700b) #define __kvm_handle_fault_on_reboot(insn) \ ____kvm_handle_fault_on_reboot(insn, "") diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index f85f43db922545d1a61344679c0a135e51aee483..a1d22e4428f63732330c4d68b146a58228dd63b2 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -334,6 +334,7 @@ #define MSR_AMD64_PATCH_LEVEL 0x0000008b #define MSR_AMD64_TSC_RATIO 0xc0000104 #define MSR_AMD64_NB_CFG 0xc001001f +#define MSR_AMD64_CPUID_FN_1 0xc0011004 #define MSR_AMD64_PATCH_LOADER 0xc0010020 #define MSR_AMD64_OSVW_ID_LENGTH 0xc0010140 #define MSR_AMD64_OSVW_STATUS 0xc0010141 diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 599c273f5d006a904e70e5639c906b482413d37d..28cb2b31527a3c5fe5e45f29fff540acef65c73c 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -202,7 +202,7 @@ " lfence;\n" \ " jmp 902b;\n" \ " .align 16\n" \ - "903: addl $4, %%esp;\n" \ + "903: lea 4(%%esp), %%esp;\n" \ " pushl %[thunk_target];\n" \ " ret;\n" \ " .align 16\n" \ diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h index e375d4266b53e35bbe2383d7633d478f29871c4f..a04677038872c89f0cf8c0f00d4703630d8a4080 100644 --- a/arch/x86/include/asm/paravirt.h +++ b/arch/x86/include/asm/paravirt.h @@ -768,6 +768,7 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu) PV_RESTORE_ALL_CALLER_REGS \ FRAME_END \ "ret;" \ + ".size " PV_THUNK_NAME(func) ", .-" PV_THUNK_NAME(func) ";" \ ".popsection") /* Get a reference to a callee-save function 
*/ diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h index afbc87206886e76d73995fe4be47c3568485ea76..b771bb3d159bc8ce27f7897a01f3af4d8be2a78a 100644 --- a/arch/x86/include/asm/traps.h +++ b/arch/x86/include/asm/traps.h @@ -40,7 +40,7 @@ asmlinkage void simd_coprocessor_error(void); asmlinkage void xen_divide_error(void); asmlinkage void xen_xennmi(void); asmlinkage void xen_xendebug(void); -asmlinkage void xen_xenint3(void); +asmlinkage void xen_int3(void); asmlinkage void xen_overflow(void); asmlinkage void xen_bounds(void); asmlinkage void xen_invalid_op(void); diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c index 84132eddb5a858ff97b4c83a7ac6708eb49f4ab1..90be3a1506d3f0067559d0956e3f29d474c7c6bf 100644 --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -181,7 +181,7 @@ EXPORT_SYMBOL_GPL(local_apic_timer_c2_ok); /* * Debug level, exported for io_apic.c */ -unsigned int apic_verbosity; +int apic_verbosity; int pic_mode; @@ -715,7 +715,7 @@ static __initdata unsigned long lapic_cal_pm1, lapic_cal_pm2; static __initdata unsigned long lapic_cal_j1, lapic_cal_j2; /* - * Temporary interrupt handler. + * Temporary interrupt handler and polled calibration function. */ static void __init lapic_cal_handler(struct clock_event_device *dev) { @@ -799,7 +799,8 @@ calibrate_by_pmtimer(long deltapm, long *delta, long *deltatsc) static int __init calibrate_APIC_clock(void) { struct clock_event_device *levt = this_cpu_ptr(&lapic_events); - void (*real_handler)(struct clock_event_device *dev); + u64 tsc_perj = 0, tsc_start = 0; + unsigned long jif_start; unsigned long deltaj; long delta, deltatsc; int pm_referenced = 0; @@ -830,28 +831,64 @@ static int __init calibrate_APIC_clock(void) apic_printk(APIC_VERBOSE, "Using local APIC timer interrupts.\n" "calibrating APIC timer ...\n"); + /* + * There are platforms w/o global clockevent devices. Instead of + * making the calibration conditional on that, use a polling based + * approach everywhere. + */ local_irq_disable(); - /* Replace the global interrupt handler */ - real_handler = global_clock_event->event_handler; - global_clock_event->event_handler = lapic_cal_handler; - /* * Setup the APIC counter to maximum. 
There is no way the lapic * can underflow in the 100ms detection time frame */ __setup_APIC_LVTT(0xffffffff, 0, 0); - /* Let the interrupts run */ + /* + * Methods to terminate the calibration loop: + * 1) Global clockevent if available (jiffies) + * 2) TSC if available and frequency is known + */ + jif_start = READ_ONCE(jiffies); + + if (tsc_khz) { + tsc_start = rdtsc(); + tsc_perj = div_u64((u64)tsc_khz * 1000, HZ); + } + + /* + * Enable interrupts so the tick can fire, if a global + * clockevent device is available + */ local_irq_enable(); - while (lapic_cal_loops <= LAPIC_CAL_LOOPS) - cpu_relax(); + while (lapic_cal_loops <= LAPIC_CAL_LOOPS) { + /* Wait for a tick to elapse */ + while (1) { + if (tsc_khz) { + u64 tsc_now = rdtsc(); + if ((tsc_now - tsc_start) >= tsc_perj) { + tsc_start += tsc_perj; + break; + } + } else { + unsigned long jif_now = READ_ONCE(jiffies); - local_irq_disable(); + if (time_after(jif_now, jif_start)) { + jif_start = jif_now; + break; + } + } + cpu_relax(); + } + + /* Invoke the calibration routine */ + local_irq_disable(); + lapic_cal_handler(NULL); + local_irq_enable(); + } - /* Restore the real event handler */ - global_clock_event->event_handler = real_handler; + local_irq_disable(); /* Build delta t1-t2 as apic timer counts down */ delta = lapic_cal_t1 - lapic_cal_t2; @@ -904,10 +941,11 @@ static int __init calibrate_APIC_clock(void) levt->features &= ~CLOCK_EVT_FEAT_DUMMY; /* - * PM timer calibration failed or not turned on - * so lets try APIC timer based calibration + * PM timer calibration failed or not turned on so lets try APIC + * timer based calibration, if a global clockevent device is + * available. */ - if (!pm_referenced) { + if (!pm_referenced && global_clock_event) { apic_printk(APIC_VERBOSE, "... verify APIC timer\n"); /* @@ -1102,6 +1140,10 @@ void clear_local_APIC(void) apic_write(APIC_LVT0, v | APIC_LVT_MASKED); v = apic_read(APIC_LVT1); apic_write(APIC_LVT1, v | APIC_LVT_MASKED); + if (!x2apic_enabled()) { + v = apic_read(APIC_LDR) & ~APIC_LDR_MASK; + apic_write(APIC_LDR, v); + } if (maxlvt >= 4) { v = apic_read(APIC_LVTPC); apic_write(APIC_LVTPC, v | APIC_LVT_MASKED); @@ -1452,7 +1494,8 @@ static void apic_pending_intr_clear(void) if (queued) { if (boot_cpu_has(X86_FEATURE_TSC) && cpu_khz) { ntsc = rdtsc(); - max_loops = (cpu_khz << 10) - (ntsc - tsc); + max_loops = (long long)cpu_khz << 10; + max_loops -= ntsc - tsc; } else { max_loops--; } @@ -2026,21 +2069,32 @@ __visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs) entering_irq(); trace_spurious_apic_entry(vector); + inc_irq_stat(irq_spurious_count); + /* - * Check if this really is a spurious interrupt and ACK it - * if it is a vectored one. Just in case... - * Spurious interrupts should not be ACKed. + * If this is a spurious interrupt then do not acknowledge + */ + if (vector == SPURIOUS_APIC_VECTOR) { + /* See SDM vol 3 */ + pr_info("Spurious APIC interrupt (vector 0xFF) on CPU#%d, should never happen.\n", + smp_processor_id()); + goto out; + } + + /* + * If it is a vectored one, verify it's set in the ISR. If set, + * acknowledge it. */ v = apic_read(APIC_ISR + ((vector & ~0x1f) >> 1)); - if (v & (1 << (vector & 0x1f))) + if (v & (1 << (vector & 0x1f))) { + pr_info("Spurious interrupt (vector 0x%02x) on CPU#%d. 
Acked\n", + vector, smp_processor_id()); ack_APIC_irq(); - - inc_irq_stat(irq_spurious_count); - - /* see sw-dev-man vol 3, chapter 7.4.13.5 */ - pr_info("spurious APIC interrupt through vector %02x on CPU#%d, " - "should never happen.\n", vector, smp_processor_id()); - + } else { + pr_info("Spurious interrupt (vector 0x%02x) on CPU#%d. Not pending!\n", + vector, smp_processor_id()); + } +out: trace_spurious_apic_exit(vector); exiting_irq(); } diff --git a/arch/x86/kernel/apic/bigsmp_32.c b/arch/x86/kernel/apic/bigsmp_32.c index afee386ff711e95132dd5c01e79317ac22bd9a56..caedd8d60d3610b8b9bc11f5669de16521b3ac2a 100644 --- a/arch/x86/kernel/apic/bigsmp_32.c +++ b/arch/x86/kernel/apic/bigsmp_32.c @@ -38,32 +38,12 @@ static int bigsmp_early_logical_apicid(int cpu) return early_per_cpu(x86_cpu_to_apicid, cpu); } -static inline unsigned long calculate_ldr(int cpu) -{ - unsigned long val, id; - - val = apic_read(APIC_LDR) & ~APIC_LDR_MASK; - id = per_cpu(x86_bios_cpu_apicid, cpu); - val |= SET_APIC_LOGICAL_ID(id); - - return val; -} - /* - * Set up the logical destination ID. - * - * Intel recommends to set DFR, LDR and TPR before enabling - * an APIC. See e.g. "AP-388 82489DX User's Manual" (Intel - * document number 292116). So here it goes... + * bigsmp enables physical destination mode + * and doesn't use LDR and DFR */ static void bigsmp_init_apic_ldr(void) { - unsigned long val; - int cpu = smp_processor_id(); - - apic_write(APIC_DFR, APIC_DFR_FLAT); - val = calculate_ldr(cpu); - apic_write(APIC_LDR, val); } static void bigsmp_setup_apic_routing(void) diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c index c2bd6e0433f8bad85f99587dcf42e9b258853156..7fb88c22b9a1503cc533c2107c580b26cc286fc2 100644 --- a/arch/x86/kernel/apic/io_apic.c +++ b/arch/x86/kernel/apic/io_apic.c @@ -1894,6 +1894,50 @@ static int ioapic_set_affinity(struct irq_data *irq_data, return ret; } +/* + * Interrupt shutdown masks the ioapic pin, but the interrupt might already + * be in flight, but not yet serviced by the target CPU. That means + * __synchronize_hardirq() would return and claim that everything is calmed + * down. So free_irq() would proceed and deactivate the interrupt and free + * resources. + * + * Once the target CPU comes around to service it it will find a cleared + * vector and complain. While the spurious interrupt is harmless, the full + * release of resources might prevent the interrupt from being acknowledged + * which keeps the hardware in a weird state. + * + * Verify that the corresponding Remote-IRR bits are clear. + */ +static int ioapic_irq_get_chip_state(struct irq_data *irqd, + enum irqchip_irq_state which, + bool *state) +{ + struct mp_chip_data *mcd = irqd->chip_data; + struct IO_APIC_route_entry rentry; + struct irq_pin_list *p; + + if (which != IRQCHIP_STATE_ACTIVE) + return -EINVAL; + + *state = false; + raw_spin_lock(&ioapic_lock); + for_each_irq_pin(p, mcd->irq_2_pin) { + rentry = __ioapic_read_entry(p->apic, p->pin); + /* + * The remote IRR is only valid in level trigger mode. It's + * meaning is undefined for edge triggered interrupts and + * irrelevant because the IO-APIC treats them as fire and + * forget. 
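To make the intent of the new callback concrete, here is a hedged usage sketch (the helper is hypothetical; in practice the generic interrupt shutdown code is the consumer): a caller can poll IRQCHIP_STATE_ACTIVE until an in-flight, not-yet-serviced interrupt has been handled::

	#include <linux/delay.h>
	#include <linux/interrupt.h>
	#include <linux/irq.h>

	/* Sketch only: wait until no interrupt for 'irq' is still in flight. */
	static void wait_for_irq_inactive(unsigned int irq)
	{
		bool inflight;

		do {
			/* Non-zero return: the chip cannot report the state. */
			if (irq_get_irqchip_state(irq, IRQCHIP_STATE_ACTIVE,
						  &inflight))
				return;
			udelay(1);
		} while (inflight);
	}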
+ */ + if (rentry.irr && rentry.trigger) { + *state = true; + break; + } + } + raw_spin_unlock(&ioapic_lock); + return 0; +} + static struct irq_chip ioapic_chip __read_mostly = { .name = "IO-APIC", .irq_startup = startup_ioapic_irq, @@ -1903,6 +1947,7 @@ static struct irq_chip ioapic_chip __read_mostly = { .irq_eoi = ioapic_ack_level, .irq_set_affinity = ioapic_set_affinity, .irq_retrigger = irq_chip_retrigger_hierarchy, + .irq_get_irqchip_state = ioapic_irq_get_chip_state, .flags = IRQCHIP_SKIP_SET_WAKE, }; @@ -1915,6 +1960,7 @@ static struct irq_chip ioapic_ir_chip __read_mostly = { .irq_eoi = ioapic_ir_ack_level, .irq_set_affinity = ioapic_set_affinity, .irq_retrigger = irq_chip_retrigger_hierarchy, + .irq_get_irqchip_state = ioapic_irq_get_chip_state, .flags = IRQCHIP_SKIP_SET_WAKE, }; diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c index 652e7ffa9b9de5d616b8da67624466c00ad14722..10e1d17aa060809cad9f884a75b0c1ff879df41e 100644 --- a/arch/x86/kernel/apic/vector.c +++ b/arch/x86/kernel/apic/vector.c @@ -342,7 +342,7 @@ static void clear_irq_vector(struct irq_data *irqd) trace_vector_clear(irqd->irq, vector, apicd->cpu, apicd->prev_vector, apicd->prev_cpu); - per_cpu(vector_irq, apicd->cpu)[vector] = VECTOR_UNUSED; + per_cpu(vector_irq, apicd->cpu)[vector] = VECTOR_SHUTDOWN; irq_matrix_free(vector_matrix, apicd->cpu, vector, managed); apicd->vector = 0; @@ -351,7 +351,7 @@ static void clear_irq_vector(struct irq_data *irqd) if (!vector) return; - per_cpu(vector_irq, apicd->prev_cpu)[vector] = VECTOR_UNUSED; + per_cpu(vector_irq, apicd->prev_cpu)[vector] = VECTOR_SHUTDOWN; irq_matrix_free(vector_matrix, apicd->prev_cpu, vector, managed); apicd->prev_vector = 0; apicd->move_in_progress = 0; diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index da1f5e78363e91f4d9e53fd59e4344531f372553..f86f912ce215867cfb54a5c827c1947cc4bba4d3 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -799,6 +799,64 @@ static void init_amd_ln(struct cpuinfo_x86 *c) msr_set_bit(MSR_AMD64_DE_CFG, 31); } +static bool rdrand_force; + +static int __init rdrand_cmdline(char *str) +{ + if (!str) + return -EINVAL; + + if (!strcmp(str, "force")) + rdrand_force = true; + else + return -EINVAL; + + return 0; +} +early_param("rdrand", rdrand_cmdline); + +static void clear_rdrand_cpuid_bit(struct cpuinfo_x86 *c) +{ + /* + * Saving of the MSR used to hide the RDRAND support during + * suspend/resume is done by arch/x86/power/cpu.c, which is + * dependent on CONFIG_PM_SLEEP. + */ + if (!IS_ENABLED(CONFIG_PM_SLEEP)) + return; + + /* + * The nordrand option can clear X86_FEATURE_RDRAND, so check for + * RDRAND support using the CPUID function directly. + */ + if (!(cpuid_ecx(1) & BIT(30)) || rdrand_force) + return; + + msr_clear_bit(MSR_AMD64_CPUID_FN_1, 62); + + /* + * Verify that the CPUID change has occurred in case the kernel is + * running virtualized and the hypervisor doesn't support the MSR. + */ + if (cpuid_ecx(1) & BIT(30)) { + pr_info_once("BIOS may not properly restore RDRAND after suspend, but hypervisor does not support hiding RDRAND via CPUID.\n"); + return; + } + + clear_cpu_cap(c, X86_FEATURE_RDRAND); + pr_info_once("BIOS may not properly restore RDRAND after suspend, hiding RDRAND via CPUID. Use rdrand=force to reenable.\n"); +} + +static void init_amd_jg(struct cpuinfo_x86 *c) +{ + /* + * Some BIOS implementations do not restore proper RDRAND support + * across suspend and resume. 
Check on whether to hide the RDRAND + * instruction support via CPUID. + */ + clear_rdrand_cpuid_bit(c); +} + static void init_amd_bd(struct cpuinfo_x86 *c) { u64 value; @@ -813,6 +871,13 @@ static void init_amd_bd(struct cpuinfo_x86 *c) wrmsrl_safe(MSR_F15H_IC_CFG, value); } } + + /* + * Some BIOS implementations do not restore proper RDRAND support + * across suspend and resume. Check on whether to hide the RDRAND + * instruction support via CPUID. + */ + clear_rdrand_cpuid_bit(c); } static void init_amd_zn(struct cpuinfo_x86 *c) @@ -855,6 +920,7 @@ static void init_amd(struct cpuinfo_x86 *c) case 0x10: init_amd_gh(c); break; case 0x12: init_amd_ln(c); break; case 0x15: init_amd_bd(c); break; + case 0x16: init_amd_jg(c); break; case 0x17: init_amd_zn(c); break; } diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index a5cde748cf76bd39948f6d90c39b7a7c09b5c1e2..ee7d17611ead47f87774400982503b65553f209a 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -32,6 +32,7 @@ #include #include +static void __init spectre_v1_select_mitigation(void); static void __init spectre_v2_select_mitigation(void); static void __init ssb_select_mitigation(void); static void __init l1tf_select_mitigation(void); @@ -96,17 +97,11 @@ void __init check_bugs(void) if (boot_cpu_has(X86_FEATURE_STIBP)) x86_spec_ctrl_mask |= SPEC_CTRL_STIBP; - /* Select the proper spectre mitigation before patching alternatives */ + /* Select the proper CPU mitigations before patching alternatives: */ + spectre_v1_select_mitigation(); spectre_v2_select_mitigation(); - - /* - * Select proper mitigation for any exposure to the Speculative Store - * Bypass vulnerability. - */ ssb_select_mitigation(); - l1tf_select_mitigation(); - mds_select_mitigation(); arch_smt_update(); @@ -271,6 +266,98 @@ static int __init mds_cmdline(char *str) } early_param("mds", mds_cmdline); +#undef pr_fmt +#define pr_fmt(fmt) "Spectre V1 : " fmt + +enum spectre_v1_mitigation { + SPECTRE_V1_MITIGATION_NONE, + SPECTRE_V1_MITIGATION_AUTO, +}; + +static enum spectre_v1_mitigation spectre_v1_mitigation __ro_after_init = + SPECTRE_V1_MITIGATION_AUTO; + +static const char * const spectre_v1_strings[] = { + [SPECTRE_V1_MITIGATION_NONE] = "Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers", + [SPECTRE_V1_MITIGATION_AUTO] = "Mitigation: usercopy/swapgs barriers and __user pointer sanitization", +}; + +/* + * Does SMAP provide full mitigation against speculative kernel access to + * userspace? + */ +static bool smap_works_speculatively(void) +{ + if (!boot_cpu_has(X86_FEATURE_SMAP)) + return false; + + /* + * On CPUs which are vulnerable to Meltdown, SMAP does not + * prevent speculative access to user data in the L1 cache. + * Consider SMAP to be non-functional as a mitigation on these + * CPUs. + */ + if (boot_cpu_has(X86_BUG_CPU_MELTDOWN)) + return false; + + return true; +} + +static void __init spectre_v1_select_mitigation(void) +{ + if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) { + spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE; + return; + } + + if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) { + /* + * With Spectre v1, a user can speculatively control either + * path of a conditional swapgs with a user-controlled GS + * value. The mitigation is to add lfences to both code paths. + * + * If FSGSBASE is enabled, the user can put a kernel address in + * GS, in which case SMAP provides no protection. 
+ * + * [ NOTE: Don't check for X86_FEATURE_FSGSBASE until the + * FSGSBASE enablement patches have been merged. ] + * + * If FSGSBASE is disabled, the user can only put a user space + * address in GS. That makes an attack harder, but still + * possible if there's no SMAP protection. + */ + if (!smap_works_speculatively()) { + /* + * Mitigation can be provided from SWAPGS itself or + * PTI as the CR3 write in the Meltdown mitigation + * is serializing. + * + * If neither is there, mitigate with an LFENCE to + * stop speculation through swapgs. + */ + if (boot_cpu_has_bug(X86_BUG_SWAPGS) && + !boot_cpu_has(X86_FEATURE_PTI)) + setup_force_cpu_cap(X86_FEATURE_FENCE_SWAPGS_USER); + + /* + * Enable lfences in the kernel entry (non-swapgs) + * paths, to prevent user entry from speculatively + * skipping swapgs. + */ + setup_force_cpu_cap(X86_FEATURE_FENCE_SWAPGS_KERNEL); + } + } + + pr_info("%s\n", spectre_v1_strings[spectre_v1_mitigation]); +} + +static int __init nospectre_v1_cmdline(char *str) +{ + spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE; + return 0; +} +early_param("nospectre_v1", nospectre_v1_cmdline); + #undef pr_fmt #define pr_fmt(fmt) "Spectre V2 : " fmt @@ -1196,7 +1283,7 @@ static ssize_t l1tf_show_state(char *buf) static ssize_t mds_show_state(char *buf) { - if (!hypervisor_is_type(X86_HYPER_NATIVE)) { + if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) { return sprintf(buf, "%s; SMT Host state unknown\n", mds_strings[mds_mitigation]); } @@ -1258,7 +1345,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr break; case X86_BUG_SPECTRE_V1: - return sprintf(buf, "Mitigation: __user pointer sanitization\n"); + return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]); case X86_BUG_SPECTRE_V2: return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled], diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c index 0c5fcbd998cf11badefad906a2122400a3512d58..9d863e8f9b3f22b7119a7e98556a722c819c2e17 100644 --- a/arch/x86/kernel/cpu/cacheinfo.c +++ b/arch/x86/kernel/cpu/cacheinfo.c @@ -651,8 +651,7 @@ void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id) if (c->x86 < 0x17) { /* LLC is at the node level. */ per_cpu(cpu_llc_id, cpu) = node_id; - } else if (c->x86 == 0x17 && - c->x86_model >= 0 && c->x86_model <= 0x1F) { + } else if (c->x86 == 0x17 && c->x86_model <= 0x1F) { /* * LLC is at the core complex level. * Core complex ID is ApicId[3] for these processors. 
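As a sketch of what the cacheinfo comment describes (the exact derivation is an assumption inferred from "ApicId[3]", not quoted from this hunk), the per-CPU LLC id for these models is the APIC id with the low three bits dropped::

	/* Sketch: core-complex-level LLC id; APIC id bit 3 selects the CCX. */
	per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;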
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 1073118b9bf03b43fad3bd523b1ca72fdedeabcc..b33fdfa0ff49e6f9e70754051216eee0d08eed16 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -808,6 +808,30 @@ static void init_speculation_control(struct cpuinfo_x86 *c) } } +static void init_cqm(struct cpuinfo_x86 *c) +{ + if (!cpu_has(c, X86_FEATURE_CQM_LLC)) { + c->x86_cache_max_rmid = -1; + c->x86_cache_occ_scale = -1; + return; + } + + /* will be overridden if occupancy monitoring exists */ + c->x86_cache_max_rmid = cpuid_ebx(0xf); + + if (cpu_has(c, X86_FEATURE_CQM_OCCUP_LLC) || + cpu_has(c, X86_FEATURE_CQM_MBM_TOTAL) || + cpu_has(c, X86_FEATURE_CQM_MBM_LOCAL)) { + u32 eax, ebx, ecx, edx; + + /* QoS sub-leaf, EAX=0Fh, ECX=1 */ + cpuid_count(0xf, 1, &eax, &ebx, &ecx, &edx); + + c->x86_cache_max_rmid = ecx; + c->x86_cache_occ_scale = ebx; + } +} + void get_cpu_cap(struct cpuinfo_x86 *c) { u32 eax, ebx, ecx, edx; @@ -839,33 +863,6 @@ void get_cpu_cap(struct cpuinfo_x86 *c) c->x86_capability[CPUID_D_1_EAX] = eax; } - /* Additional Intel-defined flags: level 0x0000000F */ - if (c->cpuid_level >= 0x0000000F) { - - /* QoS sub-leaf, EAX=0Fh, ECX=0 */ - cpuid_count(0x0000000F, 0, &eax, &ebx, &ecx, &edx); - c->x86_capability[CPUID_F_0_EDX] = edx; - - if (cpu_has(c, X86_FEATURE_CQM_LLC)) { - /* will be overridden if occupancy monitoring exists */ - c->x86_cache_max_rmid = ebx; - - /* QoS sub-leaf, EAX=0Fh, ECX=1 */ - cpuid_count(0x0000000F, 1, &eax, &ebx, &ecx, &edx); - c->x86_capability[CPUID_F_1_EDX] = edx; - - if ((cpu_has(c, X86_FEATURE_CQM_OCCUP_LLC)) || - ((cpu_has(c, X86_FEATURE_CQM_MBM_TOTAL)) || - (cpu_has(c, X86_FEATURE_CQM_MBM_LOCAL)))) { - c->x86_cache_max_rmid = ecx; - c->x86_cache_occ_scale = ebx; - } - } else { - c->x86_cache_max_rmid = -1; - c->x86_cache_occ_scale = -1; - } - } - /* AMD-defined flags: level 0x80000001 */ eax = cpuid_eax(0x80000000); c->extended_cpuid_level = eax; @@ -896,6 +893,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c) init_scattered_cpuid_features(c); init_speculation_control(c); + init_cqm(c); /* * Clear/Set all flags overridden by options, after probe. 
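Consumers of the relocated CQM bits are unchanged by this move; a small hedged sketch of a typical check (the printout is illustrative only)::

	if (boot_cpu_has(X86_FEATURE_CQM_LLC))
		pr_info("CQM: max RMID %d, occupancy scale %d bytes\n",
			boot_cpu_data.x86_cache_max_rmid,
			boot_cpu_data.x86_cache_occ_scale);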
@@ -954,6 +952,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c) #define NO_L1TF BIT(3) #define NO_MDS BIT(4) #define MSBDS_ONLY BIT(5) +#define NO_SWAPGS BIT(6) #define VULNWL(_vendor, _family, _model, _whitelist) \ { X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist } @@ -977,29 +976,37 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { VULNWL_INTEL(ATOM_BONNELL, NO_SPECULATION), VULNWL_INTEL(ATOM_BONNELL_MID, NO_SPECULATION), - VULNWL_INTEL(ATOM_SILVERMONT, NO_SSB | NO_L1TF | MSBDS_ONLY), - VULNWL_INTEL(ATOM_SILVERMONT_X, NO_SSB | NO_L1TF | MSBDS_ONLY), - VULNWL_INTEL(ATOM_SILVERMONT_MID, NO_SSB | NO_L1TF | MSBDS_ONLY), - VULNWL_INTEL(ATOM_AIRMONT, NO_SSB | NO_L1TF | MSBDS_ONLY), - VULNWL_INTEL(XEON_PHI_KNL, NO_SSB | NO_L1TF | MSBDS_ONLY), - VULNWL_INTEL(XEON_PHI_KNM, NO_SSB | NO_L1TF | MSBDS_ONLY), + VULNWL_INTEL(ATOM_SILVERMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), + VULNWL_INTEL(ATOM_SILVERMONT_X, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), + VULNWL_INTEL(ATOM_SILVERMONT_MID, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), + VULNWL_INTEL(ATOM_AIRMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), + VULNWL_INTEL(XEON_PHI_KNL, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), + VULNWL_INTEL(XEON_PHI_KNM, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), VULNWL_INTEL(CORE_YONAH, NO_SSB), - VULNWL_INTEL(ATOM_AIRMONT_MID, NO_L1TF | MSBDS_ONLY), + VULNWL_INTEL(ATOM_AIRMONT_MID, NO_L1TF | MSBDS_ONLY | NO_SWAPGS), - VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF), - VULNWL_INTEL(ATOM_GOLDMONT_X, NO_MDS | NO_L1TF), - VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF), + VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS), + VULNWL_INTEL(ATOM_GOLDMONT_X, NO_MDS | NO_L1TF | NO_SWAPGS), + VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS), + + /* + * Technically, swapgs isn't serializing on AMD (despite it previously + * being documented as such in the APM). But according to AMD, %gs is + * updated non-speculatively, and the issuing of %gs-relative memory + * operands will be blocked until the %gs update completes, which is + * good enough for our purposes. 
+ */ /* AMD Family 0xf - 0x12 */ - VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS), - VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS), - VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS), - VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS), + VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS), + VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS), + VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS), + VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS), /* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */ - VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS), + VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS), {} }; @@ -1036,6 +1043,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) setup_force_cpu_bug(X86_BUG_MSBDS_ONLY); } + if (!cpu_matches(NO_SWAPGS)) + setup_force_cpu_bug(X86_BUG_SWAPGS); + if (cpu_matches(NO_MELTDOWN)) return; diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c index 2c0bd38a44ab125c6a6f9627eea0cc69c3d882f5..fa07a224e7b98187469161cdfd86644e673acb82 100644 --- a/arch/x86/kernel/cpu/cpuid-deps.c +++ b/arch/x86/kernel/cpu/cpuid-deps.c @@ -59,6 +59,9 @@ static const struct cpuid_dep cpuid_deps[] = { { X86_FEATURE_AVX512_4VNNIW, X86_FEATURE_AVX512F }, { X86_FEATURE_AVX512_4FMAPS, X86_FEATURE_AVX512F }, { X86_FEATURE_AVX512_VPOPCNTDQ, X86_FEATURE_AVX512F }, + { X86_FEATURE_CQM_OCCUP_LLC, X86_FEATURE_CQM_LLC }, + { X86_FEATURE_CQM_MBM_TOTAL, X86_FEATURE_CQM_LLC }, + { X86_FEATURE_CQM_MBM_LOCAL, X86_FEATURE_CQM_LLC }, {} }; diff --git a/arch/x86/kernel/cpu/mkcapflags.sh b/arch/x86/kernel/cpu/mkcapflags.sh index d0dfb892c72fe7e3648fba753e0d7351d19006f9..aed45b8895d5b5f5c293ae4f6a47499e580f5825 100644 --- a/arch/x86/kernel/cpu/mkcapflags.sh +++ b/arch/x86/kernel/cpu/mkcapflags.sh @@ -4,6 +4,8 @@ # Generate the x86_cap/bug_flags[] arrays from include/asm/cpufeatures.h # +set -e + IN=$1 OUT=$2 diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c index 772c219b688989926eac0e082d9dbdba1b4109f1..5a52672e3f8bada2e92305f1a83722f57e8a8088 100644 --- a/arch/x86/kernel/cpu/scattered.c +++ b/arch/x86/kernel/cpu/scattered.c @@ -21,6 +21,10 @@ struct cpuid_bit { static const struct cpuid_bit cpuid_bits[] = { { X86_FEATURE_APERFMPERF, CPUID_ECX, 0, 0x00000006, 0 }, { X86_FEATURE_EPB, CPUID_ECX, 3, 0x00000006, 0 }, + { X86_FEATURE_CQM_LLC, CPUID_EDX, 1, 0x0000000f, 0 }, + { X86_FEATURE_CQM_OCCUP_LLC, CPUID_EDX, 0, 0x0000000f, 1 }, + { X86_FEATURE_CQM_MBM_TOTAL, CPUID_EDX, 1, 0x0000000f, 1 }, + { X86_FEATURE_CQM_MBM_LOCAL, CPUID_EDX, 2, 0x0000000f, 1 }, { X86_FEATURE_CAT_L3, CPUID_EBX, 1, 0x00000010, 0 }, { X86_FEATURE_CAT_L2, CPUID_EBX, 2, 0x00000010, 0 }, { X86_FEATURE_CDP_L3, CPUID_ECX, 2, 0x00000010, 1 }, diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index ddee1f0870c4b091cae110f35d92a2d87a05d8b6..250cfa85b6330835d86b9cad9665c412f4f62792 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -184,24 +184,25 @@ unsigned long __head __startup_64(unsigned long physaddr, pgtable_flags = _KERNPG_TABLE_NOENC + sme_get_me_mask(); if (la57) { - p4d = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr); + p4d = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], + physaddr); i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD; pgd[i + 0] = (pgdval_t)p4d + pgtable_flags; pgd[i + 1] = (pgdval_t)p4d + pgtable_flags; - i = (physaddr >> 
P4D_SHIFT) % PTRS_PER_P4D; - p4d[i + 0] = (pgdval_t)pud + pgtable_flags; - p4d[i + 1] = (pgdval_t)pud + pgtable_flags; + i = physaddr >> P4D_SHIFT; + p4d[(i + 0) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags; + p4d[(i + 1) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags; } else { i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD; pgd[i + 0] = (pgdval_t)pud + pgtable_flags; pgd[i + 1] = (pgdval_t)pud + pgtable_flags; } - i = (physaddr >> PUD_SHIFT) % PTRS_PER_PUD; - pud[i + 0] = (pudval_t)pmd + pgtable_flags; - pud[i + 1] = (pudval_t)pmd + pgtable_flags; + i = physaddr >> PUD_SHIFT; + pud[(i + 0) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags; + pud[(i + 1) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags; pmd_entry = __PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL; /* Filter out unsupported __PAGE_KERNEL_* bits: */ @@ -211,8 +212,9 @@ unsigned long __head __startup_64(unsigned long physaddr, pmd_entry += physaddr; for (i = 0; i < DIV_ROUND_UP(_end - _text, PMD_SIZE); i++) { - int idx = i + (physaddr >> PMD_SHIFT) % PTRS_PER_PMD; - pmd[idx] = pmd_entry + i * PMD_SIZE; + int idx = i + (physaddr >> PMD_SHIFT); + + pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE; } /* diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c index 01adea278a71024076137a2571794fcd5ab9d246..a7e0e975043fd05cf6d7510c82f827728ac555fa 100644 --- a/arch/x86/kernel/idt.c +++ b/arch/x86/kernel/idt.c @@ -321,7 +321,8 @@ void __init idt_setup_apic_and_irq_gates(void) #ifdef CONFIG_X86_LOCAL_APIC for_each_clear_bit_from(i, system_vectors, NR_VECTORS) { set_bit(i, system_vectors); - set_intr_gate(i, spurious_interrupt); + entry = spurious_entries_start + 8 * (i - FIRST_SYSTEM_VECTOR); + set_intr_gate(i, entry); } #endif } diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c index 59b5f2ea7c2f32d8c02181be432262b2df97ea1a..a975246074b5c571e7346228553467e8f8244895 100644 --- a/arch/x86/kernel/irq.c +++ b/arch/x86/kernel/irq.c @@ -246,7 +246,7 @@ __visible unsigned int __irq_entry do_IRQ(struct pt_regs *regs) if (!handle_irq(desc, regs)) { ack_APIC_irq(); - if (desc != VECTOR_RETRIGGERED) { + if (desc != VECTOR_RETRIGGERED && desc != VECTOR_SHUTDOWN) { pr_emerg_ratelimited("%s: %d.%d No irq handler for vector\n", __func__, smp_processor_id(), vector); diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index 7f89d609095acddecc45419deaebcb896c01a021..cee45d46e67dccdaf4382554450bca412d7ed1e1 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -830,6 +830,7 @@ asm( "cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);" "setne %al;" "ret;" +".size __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_vcpu_is_preempted;" ".popsection"); #endif diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c index ddb1ca6923b1bccda07a52fb5e98256d0c2a744f..5b4c3279909474aa8443e828dadfb580633361e5 100644 --- a/arch/x86/kernel/mpparse.c +++ b/arch/x86/kernel/mpparse.c @@ -547,17 +547,15 @@ void __init default_get_smp_config(unsigned int early) * local APIC has default address */ mp_lapic_addr = APIC_DEFAULT_PHYS_BASE; - return; + goto out; } pr_info("Default MP configuration #%d\n", mpf->feature1); construct_default_ISA_mptable(mpf->feature1); } else if (mpf->physptr) { - if (check_physptr(mpf, early)) { - early_memunmap(mpf, sizeof(*mpf)); - return; - } + if (check_physptr(mpf, early)) + goto out; } else BUG(); @@ -566,7 +564,7 @@ void __init default_get_smp_config(unsigned int early) /* * Only use the first configuration found. 
*/ - +out: early_memunmap(mpf, sizeof(*mpf)); } diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c index aeba77881d85476a2aa9a71306c757b3eb014fe1..516ec7586a5fbd9ce820b93f5d66a0816ab6d449 100644 --- a/arch/x86/kernel/ptrace.c +++ b/arch/x86/kernel/ptrace.c @@ -652,11 +652,10 @@ static unsigned long ptrace_get_debugreg(struct task_struct *tsk, int n) { struct thread_struct *thread = &tsk->thread; unsigned long val = 0; - int index = n; if (n < HBP_NUM) { + int index = array_index_nospec(n, HBP_NUM); struct perf_event *bp = thread->ptrace_bps[index]; - index = array_index_nospec(index, HBP_NUM); if (bp) val = bp->hw.info.address; diff --git a/arch/x86/kernel/sysfb_efi.c b/arch/x86/kernel/sysfb_efi.c index 623965e86b65eda431b8e5fdbc2204c47bcb8b33..897da526e40e66027a34b9657c73aea3bdf78c51 100644 --- a/arch/x86/kernel/sysfb_efi.c +++ b/arch/x86/kernel/sysfb_efi.c @@ -231,9 +231,55 @@ static const struct dmi_system_id efifb_dmi_system_table[] __initconst = { {}, }; +/* + * Some devices have a portrait LCD but advertise a landscape resolution (and + * pitch). We simply swap width and height for these devices so that we can + * correctly deal with some of them coming with multiple resolutions. + */ +static const struct dmi_system_id efifb_dmi_swap_width_height[] __initconst = { + { + /* + * Lenovo MIIX310-10ICR, only some batches have the troublesome + * 800x1280 portrait screen. Luckily the portrait version has + * its own BIOS version, so we match on that. + */ + .matches = { + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "MIIX 310-10ICR"), + DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1HCN44WW"), + }, + }, + { + /* Lenovo MIIX 320-10ICR with 800x1280 portrait screen */ + .matches = { + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, + "Lenovo MIIX 320-10ICR"), + }, + }, + { + /* Lenovo D330 with 800x1280 or 1200x1920 portrait screen */ + .matches = { + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, + "Lenovo ideapad D330-10IGM"), + }, + }, + {}, +}; + __init void sysfb_apply_efi_quirks(void) { if (screen_info.orig_video_isVGA != VIDEO_TYPE_EFI || !(screen_info.capabilities & VIDEO_CAPABILITY_SKIP_QUIRKS)) dmi_check_system(efifb_dmi_system_table); + + if (screen_info.orig_video_isVGA == VIDEO_TYPE_EFI && + dmi_check_system(efifb_dmi_swap_width_height)) { + u16 temp = screen_info.lfb_width; + + screen_info.lfb_width = screen_info.lfb_height; + screen_info.lfb_height = temp; + screen_info.lfb_linelength = 4 * screen_info.lfb_width; + } } diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c index deb576b23b7cf49817533d00555d0dc976c42486..9119859ba78714d882a09fe49b263d2e085a5329 100644 --- a/arch/x86/kernel/uprobes.c +++ b/arch/x86/kernel/uprobes.c @@ -521,9 +521,12 @@ struct uprobe_xol_ops { void (*abort)(struct arch_uprobe *, struct pt_regs *); }; -static inline int sizeof_long(void) +static inline int sizeof_long(struct pt_regs *regs) { - return in_ia32_syscall() ? 4 : 8; + /* + * Check registers for mode as in_xxx_syscall() does not apply here. + */ + return user_64bit_mode(regs) ? 
8 : 4; } static int default_pre_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs) @@ -534,9 +537,9 @@ static int default_pre_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs) static int emulate_push_stack(struct pt_regs *regs, unsigned long val) { - unsigned long new_sp = regs->sp - sizeof_long(); + unsigned long new_sp = regs->sp - sizeof_long(regs); - if (copy_to_user((void __user *)new_sp, &val, sizeof_long())) + if (copy_to_user((void __user *)new_sp, &val, sizeof_long(regs))) return -EFAULT; regs->sp = new_sp; @@ -569,7 +572,7 @@ static int default_post_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs long correction = utask->vaddr - utask->xol_vaddr; regs->ip += correction; } else if (auprobe->defparam.fixups & UPROBE_FIX_CALL) { - regs->sp += sizeof_long(); /* Pop incorrect return address */ + regs->sp += sizeof_long(regs); /* Pop incorrect return address */ if (emulate_push_stack(regs, utask->vaddr + auprobe->defparam.ilen)) return -ERESTART; } @@ -688,7 +691,7 @@ static int branch_post_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs) * "call" insn was executed out-of-line. Just restore ->sp and restart. * We could also restore ->ip and try to call branch_emulate_op() again. */ - regs->sp += sizeof_long(); + regs->sp += sizeof_long(regs); return -ERESTART; } @@ -1068,7 +1071,7 @@ bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs) unsigned long arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs *regs) { - int rasize = sizeof_long(), nleft; + int rasize = sizeof_long(regs), nleft; unsigned long orig_ret_vaddr = 0; /* clear high bits for 32-bit apps */ if (copy_from_user(&orig_ret_vaddr, (void __user *)regs->sp, rasize)) diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index 9a327d5b6d1f5bf420c7f15c22cefcdddc921f73..d78a61408243f20e8369bc5acaa3562d7c34d0aa 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -47,8 +47,6 @@ static const struct cpuid_reg reverse_cpuid[] = { [CPUID_8000_0001_ECX] = {0x80000001, 0, CPUID_ECX}, [CPUID_7_0_EBX] = { 7, 0, CPUID_EBX}, [CPUID_D_1_EAX] = { 0xd, 1, CPUID_EAX}, - [CPUID_F_0_EDX] = { 0xf, 0, CPUID_EDX}, - [CPUID_F_1_EDX] = { 0xf, 1, CPUID_EDX}, [CPUID_8000_0008_EBX] = {0x80000008, 0, CPUID_EBX}, [CPUID_6_EAX] = { 6, 0, CPUID_EAX}, [CPUID_8000_000A_EDX] = {0x8000000a, 0, CPUID_EDX}, diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index 4b2a399f1df5015760cb540d2eed668d0be83fdf..62f3cc7ea7ad0ae5b6cc1cd0d9c98471a4916c60 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -209,6 +209,9 @@ static void recalculate_apic_map(struct kvm *kvm) if (!apic_x2apic_mode(apic) && !new->phys_map[xapic_id]) new->phys_map[xapic_id] = apic; + if (!kvm_apic_sw_enabled(apic)) + continue; + ldr = kvm_lapic_get_reg(apic, APIC_LDR); if (apic_x2apic_mode(apic)) { @@ -252,6 +255,8 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val) recalculate_apic_map(apic->vcpu->kvm); } else static_key_slow_inc(&apic_sw_disabled.key); + + recalculate_apic_map(apic->vcpu->kvm); } } diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index e0f982e35c96b8ac9a286d6472884709fa4350a4..cdc0c460950f3d0ed2d3d9106bd7acaaf87057ae 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -4532,11 +4532,11 @@ static void update_permission_bitmask(struct kvm_vcpu *vcpu, */ /* Faults from writes to non-writable pages */ - u8 wf = (pfec & PFERR_WRITE_MASK) ? ~w : 0; + u8 wf = (pfec & PFERR_WRITE_MASK) ? 
(u8)~w : 0; /* Faults from user mode accesses to supervisor pages */ - u8 uf = (pfec & PFERR_USER_MASK) ? ~u : 0; + u8 uf = (pfec & PFERR_USER_MASK) ? (u8)~u : 0; /* Faults from fetches of non-executable pages*/ - u8 ff = (pfec & PFERR_FETCH_MASK) ? ~x : 0; + u8 ff = (pfec & PFERR_FETCH_MASK) ? (u8)~x : 0; /* Faults from kernel mode fetches of user pages */ u8 smepf = 0; /* Faults from kernel mode accesses of user pages */ diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 952aebd0a8a34fa750993a3d6b7b626be31192a0..acc8d217f6565cfa877f5a72bd24b644f97bd198 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -131,8 +131,8 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, intr ? kvm_perf_overflow_intr : kvm_perf_overflow, pmc); if (IS_ERR(event)) { - printk_once("kvm_pmu: event creation failed %ld\n", - PTR_ERR(event)); + pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n", + PTR_ERR(event), pmc->idx); return; } diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index ea454d3f7763f260d798d3d76a99185f2aca505c..0f33f00aa4dfe010bf2734059f5661fa2f2dced5 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -5146,6 +5146,11 @@ static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec) kvm_vcpu_wake_up(vcpu); } +static bool svm_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu) +{ + return false; +} + static void svm_ir_list_del(struct vcpu_svm *svm, struct amd_iommu_pi_data *pi) { unsigned long flags; @@ -7203,6 +7208,7 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = { .pmu_ops = &amd_pmu_ops, .deliver_posted_interrupt = svm_deliver_avic_intr, + .dy_apicv_has_pending_interrupt = svm_dy_apicv_has_pending_interrupt, .update_pi_irte = svm_update_pi_irte, .setup_mce = svm_setup_mce, diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 73d6d585dd66d8567d62abe0d226a0ed9722bb13..2e310ea62d609e53a2c9442a376093ea593c69c8 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -8457,6 +8457,7 @@ static void vmx_disable_shadow_vmcs(struct vcpu_vmx *vmx) { vmcs_clear_bits(SECONDARY_VM_EXEC_CONTROL, SECONDARY_EXEC_SHADOW_VMCS); vmcs_write64(VMCS_LINK_POINTER, -1ull); + vmx->nested.sync_shadow_vmcs = false; } static inline void nested_release_vmcs12(struct vcpu_vmx *vmx) @@ -8468,7 +8469,6 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx) /* copy to memory all shadowed fields in case they were modified */ copy_shadow_to_vmcs12(vmx); - vmx->nested.sync_shadow_vmcs = false; vmx_disable_shadow_vmcs(vmx); } vmx->nested.posted_intr_nv = -1; @@ -8490,6 +8490,8 @@ static void free_nested(struct vcpu_vmx *vmx) if (!vmx->nested.vmxon && !vmx->nested.smm.vmxon) return; + kvm_clear_request(KVM_REQ_GET_VMCS12_PAGES, &vmx->vcpu); + hrtimer_cancel(&vmx->nested.preemption_timer); vmx->nested.vmxon = false; vmx->nested.smm.vmxon = false; @@ -8668,6 +8670,9 @@ static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx) u64 field_value; struct vmcs *shadow_vmcs = vmx->vmcs01.shadow_vmcs; + if (WARN_ON(!shadow_vmcs)) + return; + preempt_disable(); vmcs_load(shadow_vmcs); @@ -8706,6 +8711,9 @@ static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx) u64 field_value = 0; struct vmcs *shadow_vmcs = vmx->vmcs01.shadow_vmcs; + if (WARN_ON(!shadow_vmcs)) + return; + vmcs_load(shadow_vmcs); for (q = 0; q < ARRAY_SIZE(fields); q++) { @@ -10403,6 +10411,11 @@ static u8 vmx_has_apicv_interrupt(struct kvm_vcpu *vcpu) return ((rvi & 0xf0) > (vppr & 0xf0)); } +static bool vmx_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu) +{ + 
return pi_test_on(vcpu_to_pi_desc(vcpu)); +} + static void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap) { if (!kvm_vcpu_apicv_active(vcpu)) @@ -14379,6 +14392,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = { .guest_apic_has_interrupt = vmx_guest_apic_has_interrupt, .sync_pir_to_irr = vmx_sync_pir_to_irr, .deliver_posted_interrupt = vmx_deliver_posted_interrupt, + .dy_apicv_has_pending_interrupt = vmx_dy_apicv_has_pending_interrupt, .set_tss_addr = vmx_set_tss_addr, .set_identity_map_addr = vmx_set_identity_map_addr, diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index c90545667fd6df0d462389960910fda88eb05ed0..12cf6dc9cac5d8e409cf8464930968e29cd4b893 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -6308,12 +6308,13 @@ restart: unsigned long rflags = kvm_x86_ops->get_rflags(vcpu); toggle_interruptibility(vcpu, ctxt->interruptibility); vcpu->arch.emulate_regs_need_sync_to_vcpu = false; - kvm_rip_write(vcpu, ctxt->eip); - if (r == EMULATE_DONE && ctxt->tf) - kvm_vcpu_do_singlestep(vcpu, &r); if (!ctxt->have_exception || - exception_type(ctxt->exception.vector) == EXCPT_TRAP) + exception_type(ctxt->exception.vector) == EXCPT_TRAP) { + kvm_rip_write(vcpu, ctxt->eip); + if (r == EMULATE_DONE && ctxt->tf) + kvm_vcpu_do_singlestep(vcpu, &r); __kvm_set_rflags(vcpu, ctxt->eflags); + } /* * For STI, interrupts are shadowed; so KVM_REQ_EVENT will @@ -9343,6 +9344,22 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) return kvm_vcpu_running(vcpu) || kvm_vcpu_has_events(vcpu); } +bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu) +{ + if (READ_ONCE(vcpu->arch.pv.pv_unhalted)) + return true; + + if (kvm_test_request(KVM_REQ_NMI, vcpu) || + kvm_test_request(KVM_REQ_SMI, vcpu) || + kvm_test_request(KVM_REQ_EVENT, vcpu)) + return true; + + if (vcpu->arch.apicv_active && kvm_x86_ops->dy_apicv_has_pending_interrupt(vcpu)) + return true; + + return false; +} + bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu) { return vcpu->arch.preempted_in_kernel; diff --git a/arch/x86/lib/cpu.c b/arch/x86/lib/cpu.c index 2dd1fe13a37b36aacfeca12733178f62a89ba309..19f707992db22b3d2fdc5f8c9be85f63bf7aeb93 100644 --- a/arch/x86/lib/cpu.c +++ b/arch/x86/lib/cpu.c @@ -1,5 +1,6 @@ #include #include +#include unsigned int x86_family(unsigned int sig) { diff --git a/arch/x86/math-emu/fpu_emu.h b/arch/x86/math-emu/fpu_emu.h index a5a41ec5807211d8cb634f2ef009a9ad867da4a4..0c122226ca56f5b2e98549868939ea2457ef5adf 100644 --- a/arch/x86/math-emu/fpu_emu.h +++ b/arch/x86/math-emu/fpu_emu.h @@ -177,7 +177,7 @@ static inline void reg_copy(FPU_REG const *x, FPU_REG *y) #define setexponentpos(x,y) { (*(short *)&((x)->exp)) = \ ((y) + EXTENDED_Ebias) & 0x7fff; } #define exponent16(x) (*(short *)&((x)->exp)) -#define setexponent16(x,y) { (*(short *)&((x)->exp)) = (y); } +#define setexponent16(x,y) { (*(short *)&((x)->exp)) = (u16)(y); } #define addexponent(x,y) { (*(short *)&((x)->exp)) += (y); } #define stdexp(x) { (*(short *)&((x)->exp)) += EXTENDED_Ebias; } diff --git a/arch/x86/math-emu/reg_constant.c b/arch/x86/math-emu/reg_constant.c index 8dc9095bab224daab73338b7551d26b71319b9a2..742619e94bdf281a0dc19a36b1fe2b5a63a19a7b 100644 --- a/arch/x86/math-emu/reg_constant.c +++ b/arch/x86/math-emu/reg_constant.c @@ -18,7 +18,7 @@ #include "control_w.h" #define MAKE_REG(s, e, l, h) { l, h, \ - ((EXTENDED_Ebias+(e)) | ((SIGN_##s != 0)*0x8000)) } + (u16)((EXTENDED_Ebias+(e)) | ((SIGN_##s != 0)*0x8000)) } FPU_REG const CONST_1 = MAKE_REG(POS, 0, 0x00000000, 0x80000000); #if 0 diff --git 
a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index 9d9765e4d1ef19661a0c3765755340f0814408b5..1bcb7242ad79a099880e055cb378df1ee695e9d9 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -261,13 +261,14 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address) pmd = pmd_offset(pud, address); pmd_k = pmd_offset(pud_k, address); - if (!pmd_present(*pmd_k)) - return NULL; - if (!pmd_present(*pmd)) + if (pmd_present(*pmd) != pmd_present(*pmd_k)) set_pmd(pmd, *pmd_k); + + if (!pmd_present(*pmd_k)) + return NULL; else - BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k)); + BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k)); return pmd_k; } @@ -287,17 +288,13 @@ void vmalloc_sync_all(void) spin_lock(&pgd_lock); list_for_each_entry(page, &pgd_list, lru) { spinlock_t *pgt_lock; - pmd_t *ret; /* the pgt_lock only for Xen */ pgt_lock = &pgd_page_get_mm(page)->page_table_lock; spin_lock(pgt_lock); - ret = vmalloc_sync_one(page_address(page), address); + vmalloc_sync_one(page_address(page), address); spin_unlock(pgt_lock); - - if (!ret) - break; } spin_unlock(&pgd_lock); } diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c index 513ce09e99504368eaac0c39f6a1d1346bad1d36..3aa3149df07f9d5e596867f4afd11604ec62dcd0 100644 --- a/arch/x86/power/cpu.c +++ b/arch/x86/power/cpu.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include @@ -24,7 +25,7 @@ #include #include #include -#include +#include #ifdef CONFIG_X86_32 __visible unsigned long saved_context_ebx; @@ -398,15 +399,14 @@ static int __init bsp_pm_check_init(void) core_initcall(bsp_pm_check_init); -static int msr_init_context(const u32 *msr_id, const int total_num) +static int msr_build_context(const u32 *msr_id, const int num) { - int i = 0; + struct saved_msrs *saved_msrs = &saved_context.saved_msrs; struct saved_msr *msr_array; + int total_num; + int i, j; - if (saved_context.saved_msrs.array || saved_context.saved_msrs.num > 0) { - pr_err("x86/pm: MSR quirk already applied, please check your DMI match table.\n"); - return -EINVAL; - } + total_num = saved_msrs->num + num; msr_array = kmalloc_array(total_num, sizeof(struct saved_msr), GFP_KERNEL); if (!msr_array) { @@ -414,19 +414,30 @@ static int msr_init_context(const u32 *msr_id, const int total_num) return -ENOMEM; } - for (i = 0; i < total_num; i++) { - msr_array[i].info.msr_no = msr_id[i]; + if (saved_msrs->array) { + /* + * Multiple callbacks can invoke this function, so copy any + * MSR save requests from previous invocations. + */ + memcpy(msr_array, saved_msrs->array, + sizeof(struct saved_msr) * saved_msrs->num); + + kfree(saved_msrs->array); + } + + for (i = saved_msrs->num, j = 0; i < total_num; i++, j++) { + msr_array[i].info.msr_no = msr_id[j]; msr_array[i].valid = false; msr_array[i].info.reg.q = 0; } - saved_context.saved_msrs.num = total_num; - saved_context.saved_msrs.array = msr_array; + saved_msrs->num = total_num; + saved_msrs->array = msr_array; return 0; } /* - * The following section is a quirk framework for problematic BIOSen: + * The following sections are a quirk framework for problematic BIOSen: * Sometimes MSRs are modified by the BIOSen after suspended to * RAM, this might cause unexpected behavior after wakeup. 
* Thus we save/restore these specified MSRs across suspend/resume @@ -441,7 +452,7 @@ static int msr_initialize_bdw(const struct dmi_system_id *d) u32 bdw_msr_id[] = { MSR_IA32_THERM_CONTROL }; pr_info("x86/pm: %s detected, MSR saving is needed during suspending.\n", d->ident); - return msr_init_context(bdw_msr_id, ARRAY_SIZE(bdw_msr_id)); + return msr_build_context(bdw_msr_id, ARRAY_SIZE(bdw_msr_id)); } static const struct dmi_system_id msr_save_dmi_table[] = { @@ -456,9 +467,58 @@ static const struct dmi_system_id msr_save_dmi_table[] = { {} }; +static int msr_save_cpuid_features(const struct x86_cpu_id *c) +{ + u32 cpuid_msr_id[] = { + MSR_AMD64_CPUID_FN_1, + }; + + pr_info("x86/pm: family %#hx cpu detected, MSR saving is needed during suspending.\n", + c->family); + + return msr_build_context(cpuid_msr_id, ARRAY_SIZE(cpuid_msr_id)); +} + +static const struct x86_cpu_id msr_save_cpu_table[] = { + { + .vendor = X86_VENDOR_AMD, + .family = 0x15, + .model = X86_MODEL_ANY, + .feature = X86_FEATURE_ANY, + .driver_data = (kernel_ulong_t)msr_save_cpuid_features, + }, + { + .vendor = X86_VENDOR_AMD, + .family = 0x16, + .model = X86_MODEL_ANY, + .feature = X86_FEATURE_ANY, + .driver_data = (kernel_ulong_t)msr_save_cpuid_features, + }, + {} +}; + +typedef int (*pm_cpu_match_t)(const struct x86_cpu_id *); +static int pm_cpu_check(const struct x86_cpu_id *c) +{ + const struct x86_cpu_id *m; + int ret = 0; + + m = x86_match_cpu(msr_save_cpu_table); + if (m) { + pm_cpu_match_t fn; + + fn = (pm_cpu_match_t)m->driver_data; + ret = fn(m); + } + + return ret; +} + static int pm_check_save_msr(void) { dmi_check_system(msr_save_dmi_table); + pm_cpu_check(msr_save_cpu_table); + return 0; } diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile index 3cf302b2633222ff45323b2528f3499233b650c5..8901a1f89cf57bd16cbf79d6625a804274c6afe4 100644 --- a/arch/x86/purgatory/Makefile +++ b/arch/x86/purgatory/Makefile @@ -6,6 +6,9 @@ purgatory-y := purgatory.o stack.o setup-x86_$(BITS).o sha256.o entry64.o string targets += $(purgatory-y) PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y)) +$(obj)/string.o: $(srctree)/arch/x86/boot/compressed/string.c FORCE + $(call if_changed_rule,cc_o_c) + $(obj)/sha256.o: $(srctree)/lib/sha256.c FORCE $(call if_changed_rule,cc_o_c) @@ -17,11 +20,34 @@ KCOV_INSTRUMENT := n # Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That # in turn leaves some undefined symbols like __fentry__ in purgatory and not -# sure how to relocate those. Like kexec-tools, use custom flags. - -KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes -fno-zero-initialized-in-bss -fno-builtin -ffreestanding -c -Os -mcmodel=large -KBUILD_CFLAGS += -m$(BITS) -KBUILD_CFLAGS += $(call cc-option,-fno-PIE) +# sure how to relocate those. 
+ifdef CONFIG_FUNCTION_TRACER +CFLAGS_REMOVE_sha256.o += $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_purgatory.o += $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_string.o += $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_kexec-purgatory.o += $(CC_FLAGS_FTRACE) +endif + +ifdef CONFIG_STACKPROTECTOR +CFLAGS_REMOVE_sha256.o += -fstack-protector +CFLAGS_REMOVE_purgatory.o += -fstack-protector +CFLAGS_REMOVE_string.o += -fstack-protector +CFLAGS_REMOVE_kexec-purgatory.o += -fstack-protector +endif + +ifdef CONFIG_STACKPROTECTOR_STRONG +CFLAGS_REMOVE_sha256.o += -fstack-protector-strong +CFLAGS_REMOVE_purgatory.o += -fstack-protector-strong +CFLAGS_REMOVE_string.o += -fstack-protector-strong +CFLAGS_REMOVE_kexec-purgatory.o += -fstack-protector-strong +endif + +ifdef CONFIG_RETPOLINE +CFLAGS_REMOVE_sha256.o += $(RETPOLINE_CFLAGS) +CFLAGS_REMOVE_purgatory.o += $(RETPOLINE_CFLAGS) +CFLAGS_REMOVE_string.o += $(RETPOLINE_CFLAGS) +CFLAGS_REMOVE_kexec-purgatory.o += $(RETPOLINE_CFLAGS) +endif $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE $(call if_changed,ld) diff --git a/arch/x86/purgatory/purgatory.c b/arch/x86/purgatory/purgatory.c index 025c34ac0d848f642a4b4b1c2c9577c52aa72614..7971f7a8af59f1aa84bec3e9ea2129c8b0c629ee 100644 --- a/arch/x86/purgatory/purgatory.c +++ b/arch/x86/purgatory/purgatory.c @@ -70,3 +70,9 @@ void purgatory(void) } copy_backup_region(); } + +/* + * Defined in order to reuse memcpy() and memset() from + * arch/x86/boot/compressed/string.c + */ +void warn(const char *msg) {} diff --git a/arch/x86/purgatory/string.c b/arch/x86/purgatory/string.c deleted file mode 100644 index 795ca4f2cb3c912e37f214802cde76fc4fea7985..0000000000000000000000000000000000000000 --- a/arch/x86/purgatory/string.c +++ /dev/null @@ -1,25 +0,0 @@ -/* - * Simple string functions. - * - * Copyright (C) 2014 Red Hat Inc. - * - * Author: - * Vivek Goyal - * - * This source code is licensed under the GNU General Public License, - * Version 2. See the file COPYING for more details. 
- */ - -#include - -#include "../boot/string.c" - -void *memcpy(void *dst, const void *src, size_t len) -{ - return __builtin_memcpy(dst, src, len); -} - -void *memset(void *dst, int c, size_t len) -{ - return __builtin_memset(dst, c, len); -} diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c index 782f98b332f05b9632bd156163a719a5f7cd745f..1730a26ff6abcc2a1f4a682748b88c1ff0de8752 100644 --- a/arch/x86/xen/enlighten_pv.c +++ b/arch/x86/xen/enlighten_pv.c @@ -597,12 +597,12 @@ struct trap_array_entry { static struct trap_array_entry trap_array[] = { { debug, xen_xendebug, true }, - { int3, xen_xenint3, true }, { double_fault, xen_double_fault, true }, #ifdef CONFIG_X86_MCE { machine_check, xen_machine_check, true }, #endif { nmi, xen_xennmi, true }, + { int3, xen_int3, false }, { overflow, xen_overflow, false }, #ifdef CONFIG_IA32_EMULATION { entry_INT80_compat, xen_entry_INT80_compat, false }, diff --git a/arch/x86/xen/xen-asm_64.S b/arch/x86/xen/xen-asm_64.S index 417b339e5c8e1aadedd20231c9be82ac93dbe728..3a6feed76dfc167736984315390e9b8b0cb81e6c 100644 --- a/arch/x86/xen/xen-asm_64.S +++ b/arch/x86/xen/xen-asm_64.S @@ -30,7 +30,6 @@ xen_pv_trap divide_error xen_pv_trap debug xen_pv_trap xendebug xen_pv_trap int3 -xen_pv_trap xenint3 xen_pv_trap xennmi xen_pv_trap overflow xen_pv_trap bounds diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c index a285fbd0fd9be96660260c10f7420be687ad1b20..15580e4fc766a24191fe93dadf47590c424a1818 100644 --- a/arch/xtensa/kernel/setup.c +++ b/arch/xtensa/kernel/setup.c @@ -515,6 +515,7 @@ void cpu_reset(void) "add %2, %2, %7\n\t" "addi %0, %0, -1\n\t" "bnez %0, 1b\n\t" + "isync\n\t" /* Jump to identity mapping */ "jx %3\n" "2:\n\t" diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index becd793a258c81b19db969b2b34404a35c29efd9..d8d2ac294b0c092ddae80eac160f27ae5357b5c6 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -1886,9 +1886,14 @@ static void bfq_request_merged(struct request_queue *q, struct request *req, blk_rq_pos(container_of(rb_prev(&req->rb_node), struct request, rb_node))) { struct bfq_queue *bfqq = bfq_init_rq(req); - struct bfq_data *bfqd = bfqq->bfqd; + struct bfq_data *bfqd; struct request *prev, *next_rq; + if (!bfqq) + return; + + bfqd = bfqq->bfqd; + /* Reposition request in its sort_list */ elv_rb_del(&bfqq->sort_list, req); elv_rb_add(&bfqq->sort_list, req); @@ -1930,6 +1935,9 @@ static void bfq_requests_merged(struct request_queue *q, struct request *rq, struct bfq_queue *bfqq = bfq_init_rq(rq), *next_bfqq = bfq_init_rq(next); + if (!bfqq) + return; + /* * If next and rq belong to the same bfq_queue and next is older * than rq, then reposition rq in the fifo (by substituting next @@ -4590,12 +4598,12 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, spin_lock_irq(&bfqd->lock); bfqq = bfq_init_rq(rq); - if (at_head || blk_rq_is_passthrough(rq)) { + if (!bfqq || at_head || blk_rq_is_passthrough(rq)) { if (at_head) list_add(&rq->queuelist, &bfqd->dispatch); else list_add_tail(&rq->queuelist, &bfqd->dispatch); - } else { /* bfqq is assumed to be non null here */ + } else { idle_timer_disabled = __bfq_insert_request(bfqd, rq); /* * Update bfqq, because, if a queue merge has occurred diff --git a/block/bio-integrity.c b/block/bio-integrity.c index 67b5fb861a5100c5294e572668881a187e478a6b..5bd90cd4b51e3c3251cf794395e96a9ab0963395 100644 --- a/block/bio-integrity.c +++ b/block/bio-integrity.c @@ -291,8 +291,12 @@ bool bio_integrity_prep(struct bio *bio) ret 
= bio_integrity_add_page(bio, virt_to_page(buf), bytes, offset); - if (ret == 0) - return false; + if (ret == 0) { + printk(KERN_ERR "could not attach integrity payload\n"); + kfree(buf); + status = BLK_STS_RESOURCE; + goto err_end_io; + } if (ret < bytes) break; diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index c630e02836a80d7d406778208c659aebda8fcf06..527524134693005624d508b6720f8dfeefbcc47b 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -1016,8 +1016,12 @@ static int blkcg_print_stat(struct seq_file *sf, void *v) } next: if (has_stats) { - off += scnprintf(buf+off, size-off, "\n"); - seq_commit(sf, off); + if (off < size - 1) { + off += scnprintf(buf+off, size-off, "\n"); + seq_commit(sf, off); + } else { + seq_commit(sf, -1); + } } } diff --git a/block/blk-core.c b/block/blk-core.c index 38b1bd4931650f4ba0b6fe11feede4aadbad0497..f7e16b4466f0f8b4857f62b979538b0d548553a4 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -201,6 +201,7 @@ void blk_rq_init(struct request_queue *q, struct request *rq) rq->internal_tag = -1; rq->start_time_ns = ktime_get_ns(); rq->part = NULL; + refcount_set(&rq->ref, 1); } EXPORT_SYMBOL(blk_rq_init); @@ -423,24 +424,25 @@ void blk_sync_queue(struct request_queue *q) EXPORT_SYMBOL(blk_sync_queue); /** - * blk_set_preempt_only - set QUEUE_FLAG_PREEMPT_ONLY + * blk_set_pm_only - increment pm_only counter * @q: request queue pointer - * - * Returns the previous value of the PREEMPT_ONLY flag - 0 if the flag was not - * set and 1 if the flag was already set. */ -int blk_set_preempt_only(struct request_queue *q) +void blk_set_pm_only(struct request_queue *q) { - return blk_queue_flag_test_and_set(QUEUE_FLAG_PREEMPT_ONLY, q); + atomic_inc(&q->pm_only); } -EXPORT_SYMBOL_GPL(blk_set_preempt_only); +EXPORT_SYMBOL_GPL(blk_set_pm_only); -void blk_clear_preempt_only(struct request_queue *q) +void blk_clear_pm_only(struct request_queue *q) { - blk_queue_flag_clear(QUEUE_FLAG_PREEMPT_ONLY, q); - wake_up_all(&q->mq_freeze_wq); + int pm_only; + + pm_only = atomic_dec_return(&q->pm_only); + WARN_ON_ONCE(pm_only < 0); + if (pm_only == 0) + wake_up_all(&q->mq_freeze_wq); } -EXPORT_SYMBOL_GPL(blk_clear_preempt_only); +EXPORT_SYMBOL_GPL(blk_clear_pm_only); /** * __blk_run_queue_uncond - run a queue whether or not it has been stopped @@ -918,7 +920,7 @@ EXPORT_SYMBOL(blk_alloc_queue); */ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) { - const bool preempt = flags & BLK_MQ_REQ_PREEMPT; + const bool pm = flags & BLK_MQ_REQ_PREEMPT; while (true) { bool success = false; @@ -926,11 +928,11 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) rcu_read_lock(); if (percpu_ref_tryget_live(&q->q_usage_counter)) { /* - * The code that sets the PREEMPT_ONLY flag is - * responsible for ensuring that that flag is globally - * visible before the queue is unfrozen. + * The code that increments the pm_only counter is + * responsible for ensuring that that counter is + * globally visible before the queue is unfrozen. 
*/ - if (preempt || !blk_queue_preempt_only(q)) { + if (pm || !blk_queue_pm_only(q)) { success = true; } else { percpu_ref_put(&q->q_usage_counter); @@ -955,7 +957,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) wait_event(q->mq_freeze_wq, (atomic_read(&q->mq_freeze_depth) == 0 && - (preempt || !blk_queue_preempt_only(q))) || + (pm || !blk_queue_pm_only(q))) || blk_queue_dying(q)); if (blk_queue_dying(q)) return -ENODEV; diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c index 6b8396ccb5c44b70cb4205d584665bde592f15fc..f4f7c73fb8284a7f6c1189f2a63b23ba7040b889 100644 --- a/block/blk-iolatency.c +++ b/block/blk-iolatency.c @@ -565,6 +565,10 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio) if (!blkg) return; + /* We didn't actually submit this bio, don't account it. */ + if (bio->bi_status == BLK_STS_AGAIN) + return; + iolat = blkg_to_lat(bio->bi_blkg); if (!iolat) return; @@ -742,8 +746,10 @@ static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val) if (!oldval && val) return 1; - if (oldval && !val) + if (oldval && !val) { + blkcg_clear_delay(blkg); return -1; + } return 0; } diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c index cb1e6cf7ac48f4e187e915896376cdfec79c9d2a..a5ea86835fcb2dc2bb7f1a1593637709bce62b2a 100644 --- a/block/blk-mq-debugfs.c +++ b/block/blk-mq-debugfs.c @@ -102,6 +102,14 @@ static int blk_flags_show(struct seq_file *m, const unsigned long flags, return 0; } +static int queue_pm_only_show(void *data, struct seq_file *m) +{ + struct request_queue *q = data; + + seq_printf(m, "%d\n", atomic_read(&q->pm_only)); + return 0; +} + #define QUEUE_FLAG_NAME(name) [QUEUE_FLAG_##name] = #name static const char *const blk_queue_flag_name[] = { QUEUE_FLAG_NAME(QUEUED), @@ -132,7 +140,6 @@ static const char *const blk_queue_flag_name[] = { QUEUE_FLAG_NAME(REGISTERED), QUEUE_FLAG_NAME(SCSI_PASSTHROUGH), QUEUE_FLAG_NAME(QUIESCED), - QUEUE_FLAG_NAME(PREEMPT_ONLY), }; #undef QUEUE_FLAG_NAME @@ -209,6 +216,7 @@ static ssize_t queue_write_hint_store(void *data, const char __user *buf, static const struct blk_mq_debugfs_attr blk_mq_debugfs_queue_attrs[] = { { "poll_stat", 0400, queue_poll_stat_show }, { "requeue_list", 0400, .seq_ops = &queue_requeue_list_seq_ops }, + { "pm_only", 0600, queue_pm_only_show, NULL }, { "state", 0600, queue_state_show, queue_state_write }, { "write_hints", 0600, queue_write_hint_show, queue_write_hint_store }, { "zone_wlock", 0400, queue_zone_wlock_show, NULL }, diff --git a/block/blk-throttle.c b/block/blk-throttle.c index 01d0620a4e4a5e829c9de5c2dec0ac5e0b2b3ab3..caee658609d7385937bf7d08d4258aec70346692 100644 --- a/block/blk-throttle.c +++ b/block/blk-throttle.c @@ -892,13 +892,10 @@ static bool tg_with_in_iops_limit(struct throtl_grp *tg, struct bio *bio, unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd; u64 tmp; - jiffy_elapsed = jiffy_elapsed_rnd = jiffies - tg->slice_start[rw]; - - /* Slice has just started. 
Consider one slice interval */ - if (!jiffy_elapsed) - jiffy_elapsed_rnd = tg->td->throtl_slice; + jiffy_elapsed = jiffies - tg->slice_start[rw]; - jiffy_elapsed_rnd = roundup(jiffy_elapsed_rnd, tg->td->throtl_slice); + /* Round up to the next throttle slice, wait time must be nonzero */ + jiffy_elapsed_rnd = roundup(jiffy_elapsed + 1, tg->td->throtl_slice); /* * jiffy_elapsed_rnd should not be a big value as minimum iops can be diff --git a/crypto/asymmetric_keys/Kconfig b/crypto/asymmetric_keys/Kconfig index f3702e533ff41044694625aad813abc58b8af4dd..d8a73d94bb309a524e47cca954a66a55f385b1f4 100644 --- a/crypto/asymmetric_keys/Kconfig +++ b/crypto/asymmetric_keys/Kconfig @@ -15,6 +15,7 @@ config ASYMMETRIC_PUBLIC_KEY_SUBTYPE select MPILIB select CRYPTO_HASH_INFO select CRYPTO_AKCIPHER + select CRYPTO_HASH help This option provides support for asymmetric public key type handling. If signature generation and/or verification are to be used, @@ -34,6 +35,7 @@ config X509_CERTIFICATE_PARSER config PKCS7_MESSAGE_PARSER tristate "PKCS#7 message parser" depends on X509_CERTIFICATE_PARSER + select CRYPTO_HASH select ASN1 select OID_REGISTRY help @@ -56,6 +58,7 @@ config SIGNED_PE_FILE_VERIFICATION bool "Support for PE file signature verification" depends on PKCS7_MESSAGE_PARSER=y depends on SYSTEM_DATA_VERIFICATION + select CRYPTO_HASH select ASN1 select OID_REGISTRY help diff --git a/crypto/chacha20poly1305.c b/crypto/chacha20poly1305.c index 4d6f51bcdfabe903c804845ecaa6182f21a3c702..af8afe5c06ea9a827df39c65b0bda11176943043 100644 --- a/crypto/chacha20poly1305.c +++ b/crypto/chacha20poly1305.c @@ -67,6 +67,8 @@ struct chachapoly_req_ctx { unsigned int cryptlen; /* Actual AD, excluding IV */ unsigned int assoclen; + /* request flags, with MAY_SLEEP cleared if needed */ + u32 flags; union { struct poly_req poly; struct chacha_req chacha; @@ -76,8 +78,12 @@ struct chachapoly_req_ctx { static inline void async_done_continue(struct aead_request *req, int err, int (*cont)(struct aead_request *)) { - if (!err) + if (!err) { + struct chachapoly_req_ctx *rctx = aead_request_ctx(req); + + rctx->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; err = cont(req); + } if (err != -EINPROGRESS && err != -EBUSY) aead_request_complete(req, err); @@ -144,7 +150,7 @@ static int chacha_decrypt(struct aead_request *req) dst = scatterwalk_ffwd(rctx->dst, req->dst, req->assoclen); } - skcipher_request_set_callback(&creq->req, aead_request_flags(req), + skcipher_request_set_callback(&creq->req, rctx->flags, chacha_decrypt_done, req); skcipher_request_set_tfm(&creq->req, ctx->chacha); skcipher_request_set_crypt(&creq->req, src, dst, @@ -188,7 +194,7 @@ static int poly_tail(struct aead_request *req) memcpy(&preq->tail.cryptlen, &len, sizeof(len)); sg_set_buf(preq->src, &preq->tail, sizeof(preq->tail)); - ahash_request_set_callback(&preq->req, aead_request_flags(req), + ahash_request_set_callback(&preq->req, rctx->flags, poly_tail_done, req); ahash_request_set_tfm(&preq->req, ctx->poly); ahash_request_set_crypt(&preq->req, preq->src, @@ -219,7 +225,7 @@ static int poly_cipherpad(struct aead_request *req) sg_init_table(preq->src, 1); sg_set_buf(preq->src, &preq->pad, padlen); - ahash_request_set_callback(&preq->req, aead_request_flags(req), + ahash_request_set_callback(&preq->req, rctx->flags, poly_cipherpad_done, req); ahash_request_set_tfm(&preq->req, ctx->poly); ahash_request_set_crypt(&preq->req, preq->src, NULL, padlen); @@ -250,7 +256,7 @@ static int poly_cipher(struct aead_request *req) sg_init_table(rctx->src, 2); crypt = 
scatterwalk_ffwd(rctx->src, crypt, req->assoclen); - ahash_request_set_callback(&preq->req, aead_request_flags(req), + ahash_request_set_callback(&preq->req, rctx->flags, poly_cipher_done, req); ahash_request_set_tfm(&preq->req, ctx->poly); ahash_request_set_crypt(&preq->req, crypt, NULL, rctx->cryptlen); @@ -280,7 +286,7 @@ static int poly_adpad(struct aead_request *req) sg_init_table(preq->src, 1); sg_set_buf(preq->src, preq->pad, padlen); - ahash_request_set_callback(&preq->req, aead_request_flags(req), + ahash_request_set_callback(&preq->req, rctx->flags, poly_adpad_done, req); ahash_request_set_tfm(&preq->req, ctx->poly); ahash_request_set_crypt(&preq->req, preq->src, NULL, padlen); @@ -304,7 +310,7 @@ static int poly_ad(struct aead_request *req) struct poly_req *preq = &rctx->u.poly; int err; - ahash_request_set_callback(&preq->req, aead_request_flags(req), + ahash_request_set_callback(&preq->req, rctx->flags, poly_ad_done, req); ahash_request_set_tfm(&preq->req, ctx->poly); ahash_request_set_crypt(&preq->req, req->src, NULL, rctx->assoclen); @@ -331,7 +337,7 @@ static int poly_setkey(struct aead_request *req) sg_init_table(preq->src, 1); sg_set_buf(preq->src, rctx->key, sizeof(rctx->key)); - ahash_request_set_callback(&preq->req, aead_request_flags(req), + ahash_request_set_callback(&preq->req, rctx->flags, poly_setkey_done, req); ahash_request_set_tfm(&preq->req, ctx->poly); ahash_request_set_crypt(&preq->req, preq->src, NULL, sizeof(rctx->key)); @@ -355,7 +361,7 @@ static int poly_init(struct aead_request *req) struct poly_req *preq = &rctx->u.poly; int err; - ahash_request_set_callback(&preq->req, aead_request_flags(req), + ahash_request_set_callback(&preq->req, rctx->flags, poly_init_done, req); ahash_request_set_tfm(&preq->req, ctx->poly); @@ -393,7 +399,7 @@ static int poly_genkey(struct aead_request *req) chacha_iv(creq->iv, req, 0); - skcipher_request_set_callback(&creq->req, aead_request_flags(req), + skcipher_request_set_callback(&creq->req, rctx->flags, poly_genkey_done, req); skcipher_request_set_tfm(&creq->req, ctx->chacha); skcipher_request_set_crypt(&creq->req, creq->src, creq->src, @@ -433,7 +439,7 @@ static int chacha_encrypt(struct aead_request *req) dst = scatterwalk_ffwd(rctx->dst, req->dst, req->assoclen); } - skcipher_request_set_callback(&creq->req, aead_request_flags(req), + skcipher_request_set_callback(&creq->req, rctx->flags, chacha_encrypt_done, req); skcipher_request_set_tfm(&creq->req, ctx->chacha); skcipher_request_set_crypt(&creq->req, src, dst, @@ -451,6 +457,7 @@ static int chachapoly_encrypt(struct aead_request *req) struct chachapoly_req_ctx *rctx = aead_request_ctx(req); rctx->cryptlen = req->cryptlen; + rctx->flags = aead_request_flags(req); /* encrypt call chain: * - chacha_encrypt/done() @@ -472,6 +479,7 @@ static int chachapoly_decrypt(struct aead_request *req) struct chachapoly_req_ctx *rctx = aead_request_ctx(req); rctx->cryptlen = req->cryptlen - POLY1305_DIGEST_SIZE; + rctx->flags = aead_request_flags(req); /* decrypt call chain: * - poly_genkey/done() diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c index d9f192b953b22b06ea4adc6b921a1d59a1dde1f2..591b52d3bdca31f4546c4f6b7dd1e1cd6727ed95 100644 --- a/crypto/ghash-generic.c +++ b/crypto/ghash-generic.c @@ -34,6 +34,7 @@ static int ghash_setkey(struct crypto_shash *tfm, const u8 *key, unsigned int keylen) { struct ghash_ctx *ctx = crypto_shash_ctx(tfm); + be128 k; if (keylen != GHASH_BLOCK_SIZE) { crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); @@ -42,7 +43,12 @@ static 
int ghash_setkey(struct crypto_shash *tfm, if (ctx->gf128) gf128mul_free_4k(ctx->gf128); - ctx->gf128 = gf128mul_init_4k_lle((be128 *)key); + + BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE); + memcpy(&k, key, GHASH_BLOCK_SIZE); /* avoid violating alignment rules */ + ctx->gf128 = gf128mul_init_4k_lle(&k); + memzero_explicit(&k, GHASH_BLOCK_SIZE); + if (!ctx->gf128) return -ENOMEM; diff --git a/crypto/serpent_generic.c b/crypto/serpent_generic.c index 7c3382facc82e8bb706a48029d90875cafb6a156..600bd288881ddd69c6bb5d5487eb73eaea5c1789 100644 --- a/crypto/serpent_generic.c +++ b/crypto/serpent_generic.c @@ -229,7 +229,13 @@ x4 ^= x2; \ }) -static void __serpent_setkey_sbox(u32 r0, u32 r1, u32 r2, u32 r3, u32 r4, u32 *k) +/* + * both gcc and clang have misoptimized this function in the past, + * producing horrible object code from spilling temporary variables + * on the stack. Forcing this part out of line avoids that. + */ +static noinline void __serpent_setkey_sbox(u32 r0, u32 r1, u32 r2, + u32 r3, u32 r4, u32 *k) { k += 100; S3(r3, r4, r0, r1, r2); store_and_load_keys(r1, r2, r4, r3, 28, 24); diff --git a/drivers/acpi/acpica/acevents.h b/drivers/acpi/acpica/acevents.h index 704bebbd35b06adc0d9faeb98f22535586330d61..298180bf7e3c16d73ac3d71d2d4d208633ba44f5 100644 --- a/drivers/acpi/acpica/acevents.h +++ b/drivers/acpi/acpica/acevents.h @@ -69,7 +69,8 @@ acpi_status acpi_ev_mask_gpe(struct acpi_gpe_event_info *gpe_event_info, u8 is_masked); acpi_status -acpi_ev_add_gpe_reference(struct acpi_gpe_event_info *gpe_event_info); +acpi_ev_add_gpe_reference(struct acpi_gpe_event_info *gpe_event_info, + u8 clear_on_enable); acpi_status acpi_ev_remove_gpe_reference(struct acpi_gpe_event_info *gpe_event_info); diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c index e10fec99a182eca363167e27e1a6338e42586c09..4b5d3b4c627a723f931bc94713f3cacbb799e1bb 100644 --- a/drivers/acpi/acpica/evgpe.c +++ b/drivers/acpi/acpica/evgpe.c @@ -146,6 +146,7 @@ acpi_ev_mask_gpe(struct acpi_gpe_event_info *gpe_event_info, u8 is_masked) * FUNCTION: acpi_ev_add_gpe_reference * * PARAMETERS: gpe_event_info - Add a reference to this GPE + * clear_on_enable - Clear GPE status before enabling it * * RETURN: Status * @@ -155,7 +156,8 @@ acpi_ev_mask_gpe(struct acpi_gpe_event_info *gpe_event_info, u8 is_masked) ******************************************************************************/ acpi_status -acpi_ev_add_gpe_reference(struct acpi_gpe_event_info *gpe_event_info) +acpi_ev_add_gpe_reference(struct acpi_gpe_event_info *gpe_event_info, + u8 clear_on_enable) { acpi_status status = AE_OK; @@ -170,6 +172,10 @@ acpi_ev_add_gpe_reference(struct acpi_gpe_event_info *gpe_event_info) /* Enable on first reference */ + if (clear_on_enable) { + (void)acpi_hw_clear_gpe(gpe_event_info); + } + status = acpi_ev_update_gpe_enable_mask(gpe_event_info); if (ACPI_SUCCESS(status)) { status = acpi_ev_enable_gpe(gpe_event_info); diff --git a/drivers/acpi/acpica/evgpeblk.c b/drivers/acpi/acpica/evgpeblk.c index b253063b09d39c1c3c5bb3c81a375c8cec8756cf..8d96270ed8c738e6376280877562745238a801cf 100644 --- a/drivers/acpi/acpica/evgpeblk.c +++ b/drivers/acpi/acpica/evgpeblk.c @@ -453,7 +453,7 @@ acpi_ev_initialize_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info, continue; } - status = acpi_ev_add_gpe_reference(gpe_event_info); + status = acpi_ev_add_gpe_reference(gpe_event_info, FALSE); if (ACPI_FAILURE(status)) { ACPI_EXCEPTION((AE_INFO, status, "Could not enable GPE 0x%02X", diff --git a/drivers/acpi/acpica/evxface.c 
b/drivers/acpi/acpica/evxface.c index febc332b00ac1313716b96e01c6fc3cc977e3ac3..841557bda64191602f2766425ee19d85a6d8f88b 100644 --- a/drivers/acpi/acpica/evxface.c +++ b/drivers/acpi/acpica/evxface.c @@ -971,7 +971,7 @@ acpi_remove_gpe_handler(acpi_handle gpe_device, ACPI_GPE_DISPATCH_METHOD) || (ACPI_GPE_DISPATCH_TYPE(handler->original_flags) == ACPI_GPE_DISPATCH_NOTIFY)) && handler->originally_enabled) { - (void)acpi_ev_add_gpe_reference(gpe_event_info); + (void)acpi_ev_add_gpe_reference(gpe_event_info, FALSE); if (ACPI_GPE_IS_POLLING_NEEDED(gpe_event_info)) { /* Poll edge triggered GPEs to handle existing events */ diff --git a/drivers/acpi/acpica/evxfgpe.c b/drivers/acpi/acpica/evxfgpe.c index b2d5f66cc1b055863492567007fee2444bfa1150..4188731e7c406ea20d4be04c65eb3a478e121d40 100644 --- a/drivers/acpi/acpica/evxfgpe.c +++ b/drivers/acpi/acpica/evxfgpe.c @@ -108,7 +108,7 @@ acpi_status acpi_enable_gpe(acpi_handle gpe_device, u32 gpe_number) if (gpe_event_info) { if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) != ACPI_GPE_DISPATCH_NONE) { - status = acpi_ev_add_gpe_reference(gpe_event_info); + status = acpi_ev_add_gpe_reference(gpe_event_info, TRUE); if (ACPI_SUCCESS(status) && ACPI_GPE_IS_POLLING_NEEDED(gpe_event_info)) { diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index 43c2615434b48b8a6576d70662f9fd5af4e6f79b..e11b5da6f828f3bd8fffa23094b1a31e797d7c9c 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -616,8 +616,8 @@ static int iort_dev_find_its_id(struct device *dev, u32 req_id, /* Move to ITS specific data */ its = (struct acpi_iort_its_group *)node->node_data; - if (idx > its->its_count) { - dev_err(dev, "requested ITS ID index [%d] is greater than available [%d]\n", + if (idx >= its->its_count) { + dev_err(dev, "requested ITS ID index [%d] overruns ITS entries [%d]\n", idx, its->its_count); return -ENXIO; } diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c index 995c4d8922b12eef963a9cc1cab591ee7b404b1d..761f0c19a451266856838ad221afae623f7f6610 100644 --- a/drivers/acpi/blacklist.c +++ b/drivers/acpi/blacklist.c @@ -30,7 +30,9 @@ #include "internal.h" +#ifdef CONFIG_DMI static const struct dmi_system_id acpi_rev_dmi_table[] __initconst; +#endif /* * POLICY: If *anything* doesn't work, put it on the blacklist. @@ -74,7 +76,9 @@ int __init acpi_blacklisted(void) } (void)early_acpi_osi_init(); +#ifdef CONFIG_DMI dmi_check_system(acpi_rev_dmi_table); +#endif return blacklisted; } diff --git a/drivers/android/binder.c b/drivers/android/binder.c index 5d67f5fec6c1bf82bff8fa90113a812d4292a1f2..6e04e7a707a12c3ab5d4bb7fd7eef6172e588a4e 100644 --- a/drivers/android/binder.c +++ b/drivers/android/binder.c @@ -1960,8 +1960,18 @@ static struct binder_thread *binder_get_txn_from_and_acq_inner( static void binder_free_transaction(struct binder_transaction *t) { - if (t->buffer) - t->buffer->transaction = NULL; + struct binder_proc *target_proc = t->to_proc; + + if (target_proc) { + binder_inner_proc_lock(target_proc); + if (t->buffer) + t->buffer->transaction = NULL; + binder_inner_proc_unlock(target_proc); + } + /* + * If the transaction has no target_proc, then + * t->buffer->transaction has already been cleared. 
+ */ kfree(t); binder_stats_deleted(BINDER_STAT_TRANSACTION); } @@ -2838,7 +2848,7 @@ static void binder_transaction(struct binder_proc *proc, else return_error = BR_DEAD_REPLY; mutex_unlock(&context->context_mgr_node_lock); - if (target_node && target_proc == proc) { + if (target_node && target_proc->pid == proc->pid) { binder_user_error("%d:%d got transaction to context manager from process owning it\n", proc->pid, thread->pid); return_error = BR_FAILED_REPLY; @@ -3484,10 +3494,12 @@ static int binder_thread_write(struct binder_proc *proc, buffer->debug_id, buffer->transaction ? "active" : "finished"); + binder_inner_proc_lock(proc); if (buffer->transaction) { buffer->transaction->buffer = NULL; buffer->transaction = NULL; } + binder_inner_proc_unlock(proc); if (buffer->async_transaction && buffer->target_node) { struct binder_node *buf_node; struct binder_work *w; diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c index c92c10d553746da95702677b9b380a0da099b242..5bece9752ed6892f69bf54d61d94316d84aba9d0 100644 --- a/drivers/ata/libahci_platform.c +++ b/drivers/ata/libahci_platform.c @@ -313,6 +313,9 @@ static int ahci_platform_get_phy(struct ahci_host_priv *hpriv, u32 port, hpriv->phys[port] = NULL; rc = 0; break; + case -EPROBE_DEFER: + /* Do not complain yet */ + break; default: dev_err(dev, diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c index 01306c018398fa16583cab46bd1e51b9ccf86309..ccc80ff57eb2018adf9657aaa284a5aa3b1e52a4 100644 --- a/drivers/ata/libata-eh.c +++ b/drivers/ata/libata-eh.c @@ -1490,7 +1490,7 @@ static int ata_eh_read_log_10h(struct ata_device *dev, tf->hob_lbah = buf[10]; tf->nsect = buf[12]; tf->hob_nsect = buf[13]; - if (ata_id_has_ncq_autosense(dev->id)) + if (dev->class == ATA_DEV_ZAC && ata_id_has_ncq_autosense(dev->id)) tf->auxiliary = buf[14] << 16 | buf[15] << 8 | buf[16]; return 0; @@ -1737,7 +1737,8 @@ void ata_eh_analyze_ncq_error(struct ata_link *link) memcpy(&qc->result_tf, &tf, sizeof(tf)); qc->result_tf.flags = ATA_TFLAG_ISADDR | ATA_TFLAG_LBA | ATA_TFLAG_LBA48; qc->err_mask |= AC_ERR_DEV | AC_ERR_NCQ; - if ((qc->result_tf.command & ATA_SENSE) || qc->result_tf.auxiliary) { + if (dev->class == ATA_DEV_ZAC && + ((qc->result_tf.command & ATA_SENSE) || qc->result_tf.auxiliary)) { char sense_key, asc, ascq; sense_key = (qc->result_tf.auxiliary >> 16) & 0xff; @@ -1791,10 +1792,11 @@ static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc, } switch (qc->dev->class) { - case ATA_DEV_ATA: case ATA_DEV_ZAC: if (stat & ATA_SENSE) ata_eh_request_sense(qc, qc->scsicmd); + /* fall through */ + case ATA_DEV_ATA: if (err & ATA_ICRC) qc->err_mask |= AC_ERR_ATA_BUS; if (err & (ATA_UNC | ATA_AMNF)) diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c index 1984fc78c750b42505a5178761366dce33fa4089..3a64fa4aaf7e34257a30b0141b40b589fbbd3005 100644 --- a/drivers/ata/libata-scsi.c +++ b/drivers/ata/libata-scsi.c @@ -1803,6 +1803,21 @@ nothing_to_do: return 1; } +static bool ata_check_nblocks(struct scsi_cmnd *scmd, u32 n_blocks) +{ + struct request *rq = scmd->request; + u32 req_blocks; + + if (!blk_rq_is_passthrough(rq)) + return true; + + req_blocks = blk_rq_bytes(rq) / scmd->device->sector_size; + if (n_blocks > req_blocks) + return false; + + return true; +} + /** * ata_scsi_rw_xlat - Translate SCSI r/w command into an ATA one * @qc: Storage for translated ATA taskfile @@ -1847,6 +1862,8 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc) scsi_10_lba_len(cdb, &block, &n_block); if (cdb[1] & (1 << 
3)) tf_flags |= ATA_TFLAG_FUA; + if (!ata_check_nblocks(scmd, n_block)) + goto invalid_fld; break; case READ_6: case WRITE_6: @@ -1861,6 +1878,8 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc) */ if (!n_block) n_block = 256; + if (!ata_check_nblocks(scmd, n_block)) + goto invalid_fld; break; case READ_16: case WRITE_16: @@ -1871,6 +1890,8 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc) scsi_16_lba_len(cdb, &block, &n_block); if (cdb[1] & (1 << 3)) tf_flags |= ATA_TFLAG_FUA; + if (!ata_check_nblocks(scmd, n_block)) + goto invalid_fld; break; default: DPRINTK("no-byte command\n"); diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c index c5ea0fc635e54eb800cb12d8812ea5e508c388cc..873cc0906055129eff641428db31c118cbdbeb21 100644 --- a/drivers/ata/libata-sff.c +++ b/drivers/ata/libata-sff.c @@ -674,6 +674,10 @@ static void ata_pio_sector(struct ata_queued_cmd *qc) unsigned int offset; unsigned char *buf; + if (!qc->cursg) { + qc->curbytes = qc->nbytes; + return; + } if (qc->curbytes == qc->nbytes - qc->sect_size) ap->hsm_task_state = HSM_ST_LAST; @@ -699,6 +703,8 @@ static void ata_pio_sector(struct ata_queued_cmd *qc) if (qc->cursg_ofs == qc->cursg->length) { qc->cursg = sg_next(qc->cursg); + if (!qc->cursg) + ap->hsm_task_state = HSM_ST_LAST; qc->cursg_ofs = 0; } } diff --git a/drivers/ata/libata-zpodd.c b/drivers/ata/libata-zpodd.c index 173e6f2dd9af0f12afdc1fee7e372cfa4291e0aa..eefda51f97d351bda5d8437e2d83aaeae16b30f6 100644 --- a/drivers/ata/libata-zpodd.c +++ b/drivers/ata/libata-zpodd.c @@ -56,7 +56,7 @@ static enum odd_mech_type zpodd_get_mech_type(struct ata_device *dev) unsigned int ret; struct rm_feature_desc *desc; struct ata_taskfile tf; - static const char cdb[] = { GPCMD_GET_CONFIGURATION, + static const char cdb[ATAPI_CDB_LEN] = { GPCMD_GET_CONFIGURATION, 2, /* only 1 feature descriptor requested */ 0, 3, /* 3, removable medium feature */ 0, 0, 0,/* reserved */ diff --git a/drivers/atm/iphase.c b/drivers/atm/iphase.c index 82532c299bb5964a429e81353b9c5f94d9bb5ed2..008905d4152a39e843ac339f59cbfb989ab1fd4e 100644 --- a/drivers/atm/iphase.c +++ b/drivers/atm/iphase.c @@ -63,6 +63,7 @@ #include #include #include +#include #include "iphase.h" #include "suni.h" #define swap_byte_order(x) (((x & 0xff) << 8) | ((x & 0xff00) >> 8)) @@ -2760,8 +2761,11 @@ static int ia_ioctl(struct atm_dev *dev, unsigned int cmd, void __user *arg) } if (copy_from_user(&ia_cmds, arg, sizeof ia_cmds)) return -EFAULT; board = ia_cmds.status; - if ((board < 0) || (board > iadev_count)) - board = 0; + + if ((board < 0) || (board > iadev_count)) + board = 0; + board = array_index_nospec(board, iadev_count + 1); + iadev = ia_dev[board]; switch (ia_cmds.cmd) { case MEMDUMP: diff --git a/drivers/auxdisplay/panel.c b/drivers/auxdisplay/panel.c index 3b25a643058c9dde38511d646da8700e53c46837..0b8e2a7d6e9344009c48d78115ebf853f8ce924e 100644 --- a/drivers/auxdisplay/panel.c +++ b/drivers/auxdisplay/panel.c @@ -1618,6 +1618,8 @@ static void panel_attach(struct parport *port) return; err_lcd_unreg: + if (scan_timer.function) + del_timer_sync(&scan_timer); if (lcd.enabled) charlcd_unregister(lcd.charlcd); err_unreg_device: diff --git a/drivers/base/base.h b/drivers/base/base.h index 7a419a7a6235b166625bcc4216de79a66249188c..559b047de9f757db3788874d7cfe05b59402b4d7 100644 --- a/drivers/base/base.h +++ b/drivers/base/base.h @@ -66,6 +66,9 @@ struct driver_private { * probed first. * @device - pointer back to the struct device that this structure is * associated with. 
+ * @dead - This device is currently either in the process of or has been + * removed from the system. Any asynchronous events scheduled for this + * device should exit without taking any action. * * Nothing outside of the driver core should ever touch these fields. */ @@ -76,6 +79,7 @@ struct device_private { struct klist_node knode_bus; struct list_head deferred_probe; struct device *device; + u8 dead:1; }; #define to_device_private_parent(obj) \ container_of(obj, struct device_private, knode_parent) diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c index dd6a6850cb450d5029474bc805dcd9624a65e529..ce015ce2977c47a898b6c1d0de66ac2e9b2d89c4 100644 --- a/drivers/base/cacheinfo.c +++ b/drivers/base/cacheinfo.c @@ -653,7 +653,8 @@ static int cacheinfo_cpu_pre_down(unsigned int cpu) static int __init cacheinfo_sysfs_init(void) { - return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "base/cacheinfo:online", + return cpuhp_setup_state(CPUHP_AP_BASE_CACHEINFO_ONLINE, + "base/cacheinfo:online", cacheinfo_cpu_online, cacheinfo_cpu_pre_down); } device_initcall(cacheinfo_sysfs_init); diff --git a/drivers/base/core.c b/drivers/base/core.c index 92e2c32c2227015140ecad6def65e765b54942ed..e1a8d5c06f65e0a3b0a97649b7290439a14af275 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -2031,6 +2031,24 @@ void put_device(struct device *dev) } EXPORT_SYMBOL_GPL(put_device); +bool kill_device(struct device *dev) +{ + /* + * Require the device lock and set the "dead" flag to guarantee that + * the update behavior is consistent with the other bitfields near + * it and that we cannot have an asynchronous probe routine trying + * to run while we are tearing out the bus/class/sysfs from + * underneath the device. + */ + lockdep_assert_held(&dev->mutex); + + if (dev->p->dead) + return false; + dev->p->dead = true; + return true; +} +EXPORT_SYMBOL_GPL(kill_device); + /** * device_del - delete device from system. * @dev: device. @@ -2050,6 +2068,10 @@ void device_del(struct device *dev) struct kobject *glue_dir = NULL; struct class_interface *class_intf; + device_lock(dev); + kill_device(dev); + device_unlock(dev); + /* Notify clients of device removal. This call must come * before dpm_sysfs_remove(). */ diff --git a/drivers/base/dd.c b/drivers/base/dd.c index d48b310c4760377408863f5ac017cb4901ba2f15..11d24a552ee499103a35b9ef18d858d98cf4798b 100644 --- a/drivers/base/dd.c +++ b/drivers/base/dd.c @@ -725,15 +725,6 @@ static int __device_attach_driver(struct device_driver *drv, void *_data) bool async_allowed; int ret; - /* - * Check if device has already been claimed. This may - * happen with driver loading, device discovery/registration, - * and deferred probe processing happens all at once with - * multiple threads. - */ - if (dev->driver) - return -EBUSY; - ret = driver_match_device(drv, dev); if (ret == 0) { /* no match */ @@ -768,6 +759,15 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie) device_lock(dev); + /* + * Check if device has already been removed or claimed. This may + * happen with driver loading, device discovery/registration, + * and deferred probe processing happens all at once with + * multiple threads. 
+ */ + if (dev->p->dead || dev->driver) + goto out_unlock; + if (dev->parent) pm_runtime_get_sync(dev->parent); @@ -778,7 +778,7 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie) if (dev->parent) pm_runtime_put(dev->parent); - +out_unlock: device_unlock(dev); put_device(dev); @@ -891,7 +891,7 @@ static int __driver_attach(struct device *dev, void *data) if (dev->parent && dev->bus->need_parent_lock) device_lock(dev->parent); device_lock(dev); - if (!dev->driver) + if (!dev->p->dead && !dev->driver) driver_probe_device(drv, dev); device_unlock(dev); if (dev->parent && dev->bus->need_parent_lock) diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c index b5c865fe263b25303b72bc42a5715e4fba0e746d..818d8c37d70a9b7dbf7d93fb1383574bcc275c00 100644 --- a/drivers/base/firmware_loader/fallback.c +++ b/drivers/base/firmware_loader/fallback.c @@ -659,7 +659,7 @@ static bool fw_run_sysfs_fallback(enum fw_opt opt_flags) /* Also permit LSMs and IMA to fail firmware sysfs fallback */ ret = security_kernel_load_data(LOADING_FIRMWARE); if (ret < 0) - return ret; + return false; return fw_force_sysfs_fallback(opt_flags); } diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c index 87b562e49a435ffedac0f71b4652b669611e9a0e..c9687c8b23478138fe5824be4a7bf1737a384d88 100644 --- a/drivers/base/regmap/regmap-debugfs.c +++ b/drivers/base/regmap/regmap-debugfs.c @@ -575,6 +575,8 @@ void regmap_debugfs_init(struct regmap *map, const char *name) } if (!strcmp(name, "dummy")) { + kfree(map->debugfs_name); + map->debugfs_name = kasprintf(GFP_KERNEL, "dummy%d", dummy_index); name = map->debugfs_name; diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c index 429ca8ed7e518087bc1ddaebd9f4c8393eb5179e..982c7ac311b8524eb2b5bbda1275d4a3c81aab50 100644 --- a/drivers/base/regmap/regmap-irq.c +++ b/drivers/base/regmap/regmap-irq.c @@ -91,6 +91,9 @@ static void regmap_irq_sync_unlock(struct irq_data *data) * suppress pointless writes. */ for (i = 0; i < d->chip->num_regs; i++) { + if (!d->chip->mask_base) + continue; + reg = d->chip->mask_base + (i * map->reg_stride * d->irq_reg_stride); if (d->chip->mask_invert) { @@ -526,6 +529,9 @@ int regmap_add_irq_chip(struct regmap *map, int irq, int irq_flags, /* Mask all the interrupts by default */ for (i = 0; i < chip->num_regs; i++) { d->mask_buf[i] = d->mask_buf_def[i]; + if (!chip->mask_base) + continue; + reg = chip->mask_base + (i * map->reg_stride * d->irq_reg_stride); if (chip->mask_invert) diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c index 0360a90ad6b623530f0053dbc3b3d29748266aae..6c9f6988bc09331ca12054da6d73f959d5f40de0 100644 --- a/drivers/base/regmap/regmap.c +++ b/drivers/base/regmap/regmap.c @@ -1618,6 +1618,8 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg, map->format.reg_bytes + map->format.pad_bytes, val, val_len); + else + ret = -ENOTSUPP; /* If that didn't work fall back on linearising by hand. 
*/ if (ret == -ENOTSUPP) { diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c index cb919b964066022f9099cead35bb443f07a073ae..3cdadf75c82da1835a81953dfd3e1daf349c3aad 100644 --- a/drivers/block/drbd/drbd_receiver.c +++ b/drivers/block/drbd/drbd_receiver.c @@ -5240,7 +5240,7 @@ static int drbd_do_auth(struct drbd_connection *connection) unsigned int key_len; char secret[SHARED_SECRET_MAX]; /* 64 byte */ unsigned int resp_size; - SHASH_DESC_ON_STACK(desc, connection->cram_hmac_tfm); + struct shash_desc *desc; struct packet_info pi; struct net_conf *nc; int err, rv; @@ -5253,6 +5253,13 @@ static int drbd_do_auth(struct drbd_connection *connection) memcpy(secret, nc->shared_secret, key_len); rcu_read_unlock(); + desc = kmalloc(sizeof(struct shash_desc) + + crypto_shash_descsize(connection->cram_hmac_tfm), + GFP_KERNEL); + if (!desc) { + rv = -1; + goto fail; + } desc->tfm = connection->cram_hmac_tfm; desc->flags = 0; @@ -5395,7 +5402,10 @@ static int drbd_do_auth(struct drbd_connection *connection) kfree(peers_ch); kfree(response); kfree(right_response); - shash_desc_zero(desc); + if (desc) { + shash_desc_zero(desc); + kfree(desc); + } return rv; } diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c index a8de56f1936db166919c6bb1afa0c0e910be3a43..4a9a4d12721ae8d5a18f87bf1cff12e98efcfb26 100644 --- a/drivers/block/floppy.c +++ b/drivers/block/floppy.c @@ -2119,6 +2119,9 @@ static void setup_format_params(int track) raw_cmd->kernel_data = floppy_track_buffer; raw_cmd->length = 4 * F_SECT_PER_TRACK; + if (!F_SECT_PER_TRACK) + return; + /* allow for about 30ms for data transport per track */ head_shift = (F_SECT_PER_TRACK + 5) / 6; @@ -3241,8 +3244,12 @@ static int set_geometry(unsigned int cmd, struct floppy_struct *g, int cnt; /* sanity checking for parameters. 
*/ - if (g->sect <= 0 || - g->head <= 0 || + if ((int)g->sect <= 0 || + (int)g->head <= 0 || + /* check for overflow in max_sector */ + (int)(g->sect * g->head) <= 0 || + /* check for zero in F_SECT_PER_TRACK */ + (unsigned char)((g->sect << 2) >> FD_SIZECODE(g)) == 0 || g->track <= 0 || g->track > UDP->tracks >> STRETCH(g) || /* check if reserved bits are set */ (g->stretch & ~(FD_STRETCH | FD_SWAPSIDES | FD_SECTBASEMASK)) != 0) @@ -3386,6 +3393,24 @@ static int fd_getgeo(struct block_device *bdev, struct hd_geometry *geo) return 0; } +static bool valid_floppy_drive_params(const short autodetect[8], + int native_format) +{ + size_t floppy_type_size = ARRAY_SIZE(floppy_type); + size_t i = 0; + + for (i = 0; i < 8; ++i) { + if (autodetect[i] < 0 || + autodetect[i] >= floppy_type_size) + return false; + } + + if (native_format < 0 || native_format >= floppy_type_size) + return false; + + return true; +} + static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd, unsigned long param) { @@ -3512,6 +3537,9 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int SUPBOUND(size, strlen((const char *)outparam) + 1); break; case FDSETDRVPRM: + if (!valid_floppy_drive_params(inparam.dp.autodetect, + inparam.dp.native_format)) + return -EINVAL; *UDP = inparam.dp; break; case FDGETDRVPRM: @@ -3709,6 +3737,8 @@ static int compat_setdrvprm(int drive, return -EPERM; if (copy_from_user(&v, arg, sizeof(struct compat_floppy_drive_params))) return -EFAULT; + if (!valid_floppy_drive_params(v.autodetect, v.native_format)) + return -EINVAL; mutex_lock(&floppy_mutex); UDP->cmos = v.cmos; UDP->max_dtr = v.max_dtr; diff --git a/drivers/block/loop.c b/drivers/block/loop.c index aa76c816dbb496440d8dc2785f51b5994b678ebf..298c96b6befa0ef5bae86ac493b34bdbeedf8ed1 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -886,7 +886,7 @@ static void loop_unprepare_queue(struct loop_device *lo) static int loop_kthread_worker_fn(void *worker_ptr) { - current->flags |= PF_LESS_THROTTLE; + current->flags |= PF_LESS_THROTTLE | PF_MEMALLOC_NOIO; return kthread_worker_fn(worker_ptr); } diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index c13a6d1796a776938be1d9e4c002be471e8a0431..fa60f265ee506264f89adff027b5c61a1197b71a 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -1218,7 +1218,7 @@ static void nbd_clear_sock_ioctl(struct nbd_device *nbd, struct block_device *bdev) { sock_shutdown(nbd); - kill_bdev(bdev); + __invalidate_device(bdev, true); nbd_bdev_reset(bdev); if (test_and_clear_bit(NBD_HAS_CONFIG_REF, &nbd->config->runtime_flags)) diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c index 093b614d652445a337db00ea8beaded767073415..c5c0b7c89481555074bda218c8a545a2539d1ee4 100644 --- a/drivers/block/null_blk_main.c +++ b/drivers/block/null_blk_main.c @@ -321,11 +321,12 @@ static ssize_t nullb_device_power_store(struct config_item *item, set_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags); dev->power = newp; } else if (dev->power && !newp) { - mutex_lock(&lock); - dev->power = newp; - null_del_dev(dev->nullb); - mutex_unlock(&lock); - clear_bit(NULLB_DEV_FL_UP, &dev->flags); + if (test_and_clear_bit(NULLB_DEV_FL_UP, &dev->flags)) { + mutex_lock(&lock); + dev->power = newp; + null_del_dev(dev->nullb); + mutex_unlock(&lock); + } clear_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags); } diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c index 
a4bc74e72c394965f31dcbe7b55c8e5cd0fc6cd5..55869b362fdfb7c500177890443a6617e2107081 100644 --- a/drivers/block/xen-blkback/xenbus.c +++ b/drivers/block/xen-blkback/xenbus.c @@ -974,6 +974,7 @@ static int read_per_ring_refs(struct xen_blkif_ring *ring, const char *dir) } blkif->nr_ring_pages = nr_grefs; + err = -ENOMEM; for (i = 0; i < nr_grefs * XEN_BLKIF_REQS_PER_PAGE; i++) { req = kzalloc(sizeof(*req), GFP_KERNEL); if (!req) @@ -996,7 +997,7 @@ static int read_per_ring_refs(struct xen_blkif_ring *ring, const char *dir) err = xen_blkif_map(ring, ring_ref, nr_grefs, evtchn); if (err) { xenbus_dev_fatal(dev, err, "mapping ring-ref port %u", evtchn); - return err; + goto fail; } return 0; @@ -1016,8 +1017,7 @@ fail: } kfree(req); } - return -ENOMEM; - + return err; } static int connect_ring(struct backend_info *be) diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index 40a4f95f6178158a3212fd7424c1813ef13a2aba..75cf605f54e5e4eba67def24d2e054aed3a98dc6 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -277,7 +277,9 @@ static const struct usb_device_id blacklist_table[] = { { USB_DEVICE(0x04ca, 0x3015), .driver_info = BTUSB_QCA_ROME }, { USB_DEVICE(0x04ca, 0x3016), .driver_info = BTUSB_QCA_ROME }, { USB_DEVICE(0x04ca, 0x301a), .driver_info = BTUSB_QCA_ROME }, + { USB_DEVICE(0x13d3, 0x3491), .driver_info = BTUSB_QCA_ROME }, { USB_DEVICE(0x13d3, 0x3496), .driver_info = BTUSB_QCA_ROME }, + { USB_DEVICE(0x13d3, 0x3501), .driver_info = BTUSB_QCA_ROME }, /* Broadcom BCM2035 */ { USB_DEVICE(0x0a5c, 0x2009), .driver_info = BTUSB_BCM92035 }, diff --git a/drivers/bluetooth/hci_ath.c b/drivers/bluetooth/hci_ath.c index d568fbd94d6c84cc0045be68204f46b995e76cdb..20235925344dd224327a12ededac2589906be334 100644 --- a/drivers/bluetooth/hci_ath.c +++ b/drivers/bluetooth/hci_ath.c @@ -112,6 +112,9 @@ static int ath_open(struct hci_uart *hu) BT_DBG("hu %p", hu); + if (!hci_uart_has_flow_control(hu)) + return -EOPNOTSUPP; + ath = kzalloc(sizeof(*ath), GFP_KERNEL); if (!ath) return -ENOMEM; diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c index 800132369134973122122a07e4de434f87462d3d..aa6b7ed9fdf12552c3bdbe4fe95e24ae4589bf4f 100644 --- a/drivers/bluetooth/hci_bcm.c +++ b/drivers/bluetooth/hci_bcm.c @@ -369,6 +369,9 @@ static int bcm_open(struct hci_uart *hu) bt_dev_dbg(hu->hdev, "hu %p", hu); + if (!hci_uart_has_flow_control(hu)) + return -EOPNOTSUPP; + bcm = kzalloc(sizeof(*bcm), GFP_KERNEL); if (!bcm) return -ENOMEM; diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c index 1a7f0c82fb362ec2e200c2b6428b923fb24aa3ab..66fe1e6dc631feaf401b9f9c900262f570e7a68d 100644 --- a/drivers/bluetooth/hci_bcsp.c +++ b/drivers/bluetooth/hci_bcsp.c @@ -759,6 +759,11 @@ static int bcsp_close(struct hci_uart *hu) skb_queue_purge(&bcsp->rel); skb_queue_purge(&bcsp->unrel); + if (bcsp->rx_skb) { + kfree_skb(bcsp->rx_skb); + bcsp->rx_skb = NULL; + } + kfree(bcsp); return 0; } diff --git a/drivers/bluetooth/hci_intel.c b/drivers/bluetooth/hci_intel.c index 46ace321bf60ebb1a1cfa3446838765cb025d812..e9228520e4c7af163b9ddeefedd200a01d4ca46c 100644 --- a/drivers/bluetooth/hci_intel.c +++ b/drivers/bluetooth/hci_intel.c @@ -406,6 +406,9 @@ static int intel_open(struct hci_uart *hu) BT_DBG("hu %p", hu); + if (!hci_uart_has_flow_control(hu)) + return -EOPNOTSUPP; + intel = kzalloc(sizeof(*intel), GFP_KERNEL); if (!intel) return -ENOMEM; diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c index 
c915daf01a89d22475f615717a7ea51b92bc2aad..efeb8137ec67fdaac80c6c14660df7acb3e508b2 100644 --- a/drivers/bluetooth/hci_ldisc.c +++ b/drivers/bluetooth/hci_ldisc.c @@ -299,6 +299,19 @@ static int hci_uart_send_frame(struct hci_dev *hdev, struct sk_buff *skb) return 0; } +/* Check the underlying device or tty has flow control support */ +bool hci_uart_has_flow_control(struct hci_uart *hu) +{ + /* serdev nodes check if the needed operations are present */ + if (hu->serdev) + return true; + + if (hu->tty->driver->ops->tiocmget && hu->tty->driver->ops->tiocmset) + return true; + + return false; +} + /* Flow control or un-flow control the device */ void hci_uart_set_flow_control(struct hci_uart *hu, bool enable) { diff --git a/drivers/bluetooth/hci_mrvl.c b/drivers/bluetooth/hci_mrvl.c index ffb00669346f0ee70045043e82af0651e80591ba..23791df081bab0935fed1a91ec3e36f67da9f3e7 100644 --- a/drivers/bluetooth/hci_mrvl.c +++ b/drivers/bluetooth/hci_mrvl.c @@ -66,6 +66,9 @@ static int mrvl_open(struct hci_uart *hu) BT_DBG("hu %p", hu); + if (!hci_uart_has_flow_control(hu)) + return -EOPNOTSUPP; + mrvl = kzalloc(sizeof(*mrvl), GFP_KERNEL); if (!mrvl) return -ENOMEM; diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c index 77004c29da089f9d3a21423af5f024b19bebfdcd..f96e58de049b3b98ad46420695d7b9db2487e80f 100644 --- a/drivers/bluetooth/hci_qca.c +++ b/drivers/bluetooth/hci_qca.c @@ -450,6 +450,9 @@ static int qca_open(struct hci_uart *hu) BT_DBG("hu %p qca_open", hu); + if (!hci_uart_has_flow_control(hu)) + return -EOPNOTSUPP; + qca = kzalloc(sizeof(struct qca_data), GFP_KERNEL); if (!qca) return -ENOMEM; diff --git a/drivers/bluetooth/hci_uart.h b/drivers/bluetooth/hci_uart.h index 00cab2fd7a1b8302ef164940a784485a9198784d..067a610f1372a4b171a9e0446f76d3eaf52fbfc8 100644 --- a/drivers/bluetooth/hci_uart.h +++ b/drivers/bluetooth/hci_uart.h @@ -118,6 +118,7 @@ int hci_uart_tx_wakeup(struct hci_uart *hu); int hci_uart_init_ready(struct hci_uart *hu); void hci_uart_init_work(struct work_struct *work); void hci_uart_set_baudrate(struct hci_uart *hu, unsigned int speed); +bool hci_uart_has_flow_control(struct hci_uart *hu); void hci_uart_set_flow_control(struct hci_uart *hu, bool enable); void hci_uart_set_speeds(struct hci_uart *hu, unsigned int init_speed, unsigned int oper_speed); diff --git a/drivers/bus/hisi_lpc.c b/drivers/bus/hisi_lpc.c index d5f85455fa6216fc2c7660e5f543f5aab71f50f5..e31c02dc777098ca99901e033a34a5b66f63dbb0 100644 --- a/drivers/bus/hisi_lpc.c +++ b/drivers/bus/hisi_lpc.c @@ -456,6 +456,17 @@ struct hisi_lpc_acpi_cell { size_t pdata_size; }; +static void hisi_lpc_acpi_remove(struct device *hostdev) +{ + struct acpi_device *adev = ACPI_COMPANION(hostdev); + struct acpi_device *child; + + device_for_each_child(hostdev, NULL, hisi_lpc_acpi_remove_subdev); + + list_for_each_entry(child, &adev->children, node) + acpi_device_clear_enumerated(child); +} + /* * hisi_lpc_acpi_probe - probe children for ACPI FW * @hostdev: LPC host device pointer @@ -556,8 +567,7 @@ static int hisi_lpc_acpi_probe(struct device *hostdev) return 0; fail: - device_for_each_child(hostdev, NULL, - hisi_lpc_acpi_remove_subdev); + hisi_lpc_acpi_remove(hostdev); return ret; } @@ -570,6 +580,10 @@ static int hisi_lpc_acpi_probe(struct device *dev) { return -ENODEV; } + +static void hisi_lpc_acpi_remove(struct device *hostdev) +{ +} #endif // CONFIG_ACPI /* @@ -607,24 +621,27 @@ static int hisi_lpc_probe(struct platform_device *pdev) range->fwnode = dev->fwnode; range->flags = LOGIC_PIO_INDIRECT; 
range->size = PIO_INDIRECT_SIZE; + range->hostdata = lpcdev; + range->ops = &hisi_lpc_ops; + lpcdev->io_host = range; ret = logic_pio_register_range(range); if (ret) { dev_err(dev, "register IO range failed (%d)!\n", ret); return ret; } - lpcdev->io_host = range; /* register the LPC host PIO resources */ if (acpi_device) ret = hisi_lpc_acpi_probe(dev); else ret = of_platform_populate(dev->of_node, NULL, NULL, dev); - if (ret) + if (ret) { + logic_pio_unregister_range(range); return ret; + } - lpcdev->io_host->hostdata = lpcdev; - lpcdev->io_host->ops = &hisi_lpc_ops; + dev_set_drvdata(dev, lpcdev); io_end = lpcdev->io_host->io_start + lpcdev->io_host->size; dev_info(dev, "registered range [%pa - %pa]\n", @@ -633,6 +650,23 @@ static int hisi_lpc_probe(struct platform_device *pdev) return ret; } +static int hisi_lpc_remove(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct acpi_device *acpi_device = ACPI_COMPANION(dev); + struct hisi_lpc_dev *lpcdev = dev_get_drvdata(dev); + struct logic_pio_hwaddr *range = lpcdev->io_host; + + if (acpi_device) + hisi_lpc_acpi_remove(dev); + else + of_platform_depopulate(dev); + + logic_pio_unregister_range(range); + + return 0; +} + static const struct of_device_id hisi_lpc_of_match[] = { { .compatible = "hisilicon,hip06-lpc", }, { .compatible = "hisilicon,hip07-lpc", }, @@ -646,5 +680,6 @@ static struct platform_driver hisi_lpc_driver = { .acpi_match_table = ACPI_PTR(hisi_lpc_acpi_match), }, .probe = hisi_lpc_probe, + .remove = hisi_lpc_remove, }; builtin_platform_driver(hisi_lpc_driver); diff --git a/drivers/char/broadcom/Kconfig b/drivers/char/broadcom/Kconfig index 4ba3fb4049eb0078c318440adfbdde88474039de..6f9b37bf6b8a9628c738e8377f1043022789a130 100644 --- a/drivers/char/broadcom/Kconfig +++ b/drivers/char/broadcom/Kconfig @@ -50,10 +50,10 @@ config BCM2835_SMI_DEV Broadcom's Secondary Memory interface. The low-level functionality is provided by the SMI driver itself. -config ARGON_MEM - tristate "Character device driver for the Argon decoder hardware" +config RPIVID_MEM + tristate "Character device driver for the Raspberry Pi RPIVid video decoder hardware" default n help This driver provides a character device interface for memory-map operations - so userspace tools can access the control and status registers of the Argon - video decoder hardware. + so userspace tools can access the control and status registers of the + Raspberry Pi RPiVid video decoder hardware. diff --git a/drivers/char/broadcom/Makefile b/drivers/char/broadcom/Makefile index 50767fc1b56900766e84c3b54f02ccb958b7e28d..e060265397193d4d91dac6f34d51a18e5a3912fa 100644 --- a/drivers/char/broadcom/Makefile +++ b/drivers/char/broadcom/Makefile @@ -4,4 +4,4 @@ obj-$(CONFIG_BCM_VC_SM) += vc_sm/ obj-$(CONFIG_BCM2835_DEVGPIOMEM)+= bcm2835-gpiomem.o obj-$(CONFIG_BCM2835_SMI_DEV) += bcm2835_smi_dev.o -obj-$(CONFIG_ARGON_MEM) += argon-mem.o +obj-$(CONFIG_RPIVID_MEM) += rpivid-mem.o diff --git a/drivers/char/broadcom/rpivid-mem.c b/drivers/char/broadcom/rpivid-mem.c new file mode 100644 index 0000000000000000000000000000000000000000..e4e5fb1fb8209c0bce24d46028c07d2c1317424f --- /dev/null +++ b/drivers/char/broadcom/rpivid-mem.c @@ -0,0 +1,286 @@ +/** + * rpivid-mem.c - character device access to the RPiVid decoder registers + * + * Based on bcm2835-gpiomem.c. Provides IO memory access to the decoder + * register blocks such that ffmpeg plugins can access the hardware. + * + * Jonathan Bell + * Copyright (c) 2019, Raspberry Pi (Trading) Ltd. 
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 3. The names of the above-listed copyright holders may not be used + * to endorse or promote products derived from this software without + * specific prior written permission. + * + * ALTERNATIVELY, this software may be distributed under the terms of the + * GNU General Public License ("GPL") version 2, as published by the Free + * Software Foundation. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRIVER_NAME "rpivid-mem" +#define DEVICE_MINOR 0 + +struct rpivid_mem_priv { + dev_t devid; + struct class *class; + struct cdev rpivid_mem_cdev; + unsigned long regs_phys; + unsigned long mem_window_len; + struct device *dev; + const char *name; +}; + +static int rpivid_mem_open(struct inode *inode, struct file *file) +{ + int dev = iminor(inode); + int ret = 0; + struct rpivid_mem_priv *priv; + if (dev != DEVICE_MINOR && dev != DEVICE_MINOR + 1) + ret = -ENXIO; + + priv = container_of(inode->i_cdev, struct rpivid_mem_priv, + rpivid_mem_cdev); + if (!priv) + return -EINVAL; + file->private_data = priv; + return ret; +} + +static int rpivid_mem_release(struct inode *inode, struct file *file) +{ + int dev = iminor(inode); + int ret = 0; + + if (dev != DEVICE_MINOR && dev != DEVICE_MINOR + 1) + ret = -ENXIO; + + return ret; +} + +static const struct vm_operations_struct rpivid_mem_vm_ops = { +#ifdef CONFIG_HAVE_IOREMAP_PROT + .access = generic_access_phys +#endif +}; + +static int rpivid_mem_mmap(struct file *file, struct vm_area_struct *vma) +{ + struct rpivid_mem_priv *priv; + unsigned long pages; + + priv = file->private_data; + pages = priv->regs_phys >> PAGE_SHIFT; + /* + * The address decode is far larger than the actual number of registers. + * Just map the whole lot in. 
+ */ + vma->vm_page_prot = phys_mem_access_prot(file, pages, + priv->mem_window_len, + vma->vm_page_prot); + vma->vm_ops = &rpivid_mem_vm_ops; + if (remap_pfn_range(vma, vma->vm_start, + pages, + priv->mem_window_len, + vma->vm_page_prot)) { + return -EAGAIN; + } + return 0; +} + +static const struct file_operations +rpivid_mem_fops = { + .owner = THIS_MODULE, + .open = rpivid_mem_open, + .release = rpivid_mem_release, + .mmap = rpivid_mem_mmap, +}; + +static const struct of_device_id rpivid_mem_of_match[]; +static int rpivid_mem_probe(struct platform_device *pdev) +{ + int err; + void *ptr_err; + const struct of_device_id *id; + struct device *dev = &pdev->dev; + struct device *rpivid_mem_dev; + struct resource *ioresource; + struct rpivid_mem_priv *priv; + + + /* Allocate buffers and instance data */ + + priv = kzalloc(sizeof(struct rpivid_mem_priv), GFP_KERNEL); + + if (!priv) { + err = -ENOMEM; + goto failed_inst_alloc; + } + platform_set_drvdata(pdev, priv); + + priv->dev = dev; + id = of_match_device(rpivid_mem_of_match, dev); + if (!id) + return -EINVAL; + priv->name = id->data; + + ioresource = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (ioresource) { + priv->regs_phys = ioresource->start; + priv->mem_window_len = ioresource->end - ioresource->start; + } else { + dev_err(priv->dev, "failed to get IO resource"); + err = -ENOENT; + goto failed_get_resource; + } + + /* Create character device entries */ + + err = alloc_chrdev_region(&priv->devid, + DEVICE_MINOR, 2, priv->name); + if (err != 0) { + dev_err(priv->dev, "unable to allocate device number"); + goto failed_alloc_chrdev; + } + cdev_init(&priv->rpivid_mem_cdev, &rpivid_mem_fops); + priv->rpivid_mem_cdev.owner = THIS_MODULE; + err = cdev_add(&priv->rpivid_mem_cdev, priv->devid, 2); + if (err != 0) { + dev_err(priv->dev, "unable to register device"); + goto failed_cdev_add; + } + + /* Create sysfs entries */ + + priv->class = class_create(THIS_MODULE, priv->name); + ptr_err = priv->class; + if (IS_ERR(ptr_err)) + goto failed_class_create; + + rpivid_mem_dev = device_create(priv->class, NULL, + priv->devid, NULL, + priv->name); + ptr_err = rpivid_mem_dev; + if (IS_ERR(ptr_err)) + goto failed_device_create; + + /* Legacy alias */ + { + char *oldname = kstrdup(priv->name, GFP_KERNEL); + + oldname[1] = 'a'; + oldname[2] = 'r'; + oldname[3] = 'g'; + oldname[4] = 'o'; + oldname[5] = 'n'; + (void)device_create(priv->class, NULL, priv->devid + 1, NULL, + oldname + 1); + kfree(oldname); + } + + dev_info(priv->dev, "%s initialised: Registers at 0x%08lx length 0x%08lx", + priv->name, priv->regs_phys, priv->mem_window_len); + + return 0; + +failed_device_create: + class_destroy(priv->class); +failed_class_create: + cdev_del(&priv->rpivid_mem_cdev); + err = PTR_ERR(ptr_err); +failed_cdev_add: + unregister_chrdev_region(priv->devid, 1); +failed_alloc_chrdev: +failed_get_resource: + kfree(priv); +failed_inst_alloc: + dev_err(priv->dev, "could not load rpivid_mem"); + return err; +} + +static int rpivid_mem_remove(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct rpivid_mem_priv *priv = platform_get_drvdata(pdev); + + device_destroy(priv->class, priv->devid); + class_destroy(priv->class); + cdev_del(&priv->rpivid_mem_cdev); + unregister_chrdev_region(priv->devid, 1); + kfree(priv); + + dev_info(dev, "%s driver removed - OK", priv->name); + return 0; +} + +static const struct of_device_id rpivid_mem_of_match[] = { + { + .compatible = "raspberrypi,rpivid-hevc-decoder", + .data = "rpivid-hevcmem", + }, + { + 
.compatible = "raspberrypi,rpivid-h264-decoder", + .data = "rpivid-h264mem", + }, + { + .compatible = "raspberrypi,rpivid-vp9-decoder", + .data = "rpivid-vp9mem", + }, + /* The "intc" is included as this block of hardware contains the + * "frame done" status flags. + */ + { + .compatible = "raspberrypi,rpivid-local-intc", + .data = "rpivid-intcmem", + }, + { /* sentinel */ }, +}; + +MODULE_DEVICE_TABLE(of, rpivid_mem_of_match); + +static struct platform_driver rpivid_mem_driver = { + .probe = rpivid_mem_probe, + .remove = rpivid_mem_remove, + .driver = { + .name = DRIVER_NAME, + .owner = THIS_MODULE, + .of_match_table = rpivid_mem_of_match, + }, +}; + +module_platform_driver(rpivid_mem_driver); + +MODULE_ALIAS("platform:rpivid-mem"); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Driver for accessing RPiVid decoder registers from userspace"); +MODULE_AUTHOR("Jonathan Bell "); diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c index 9bffcd37cc7bd65b93fbac59f996b62d118d084f..c0732f032248479fb6ea2386ec76bb46e3800798 100644 --- a/drivers/char/hpet.c +++ b/drivers/char/hpet.c @@ -570,8 +570,7 @@ static inline unsigned long hpet_time_div(struct hpets *hpets, unsigned long long m; m = hpets->hp_tick_freq + (dis >> 1); - do_div(m, dis); - return (unsigned long)m; + return div64_ul(m, dis); } static int diff --git a/drivers/clk/at91/clk-generated.c b/drivers/clk/at91/clk-generated.c index 33481368740e7dc038fe1e32065e2659d8cec730..113152425a95dc102d1affc016706a6866aeefd8 100644 --- a/drivers/clk/at91/clk-generated.c +++ b/drivers/clk/at91/clk-generated.c @@ -153,6 +153,8 @@ static int clk_generated_determine_rate(struct clk_hw *hw, continue; div = DIV_ROUND_CLOSEST(parent_rate, req->rate); + if (div > GENERATED_MAX_DIV + 1) + div = GENERATED_MAX_DIV + 1; clk_generated_best_diff(req, parent, parent_rate, div, &best_diff, &best_rate); diff --git a/drivers/clk/renesas/renesas-cpg-mssr.c b/drivers/clk/renesas/renesas-cpg-mssr.c index f4b013e9352d9efca6260c5ca763e8fcc53eb6f5..24485bee9b49e949277bb9370634db8b9dc620fb 100644 --- a/drivers/clk/renesas/renesas-cpg-mssr.c +++ b/drivers/clk/renesas/renesas-cpg-mssr.c @@ -535,17 +535,11 @@ static int cpg_mssr_reset(struct reset_controller_dev *rcdev, unsigned int reg = id / 32; unsigned int bit = id % 32; u32 bitmask = BIT(bit); - unsigned long flags; - u32 value; dev_dbg(priv->dev, "reset %u%02u\n", reg, bit); /* Reset module */ - spin_lock_irqsave(&priv->rmw_lock, flags); - value = readl(priv->base + SRCR(reg)); - value |= bitmask; - writel(value, priv->base + SRCR(reg)); - spin_unlock_irqrestore(&priv->rmw_lock, flags); + writel(bitmask, priv->base + SRCR(reg)); /* Wait for at least one cycle of the RCLK clock (@ ca. 
32 kHz) */ udelay(35); @@ -562,16 +556,10 @@ static int cpg_mssr_assert(struct reset_controller_dev *rcdev, unsigned long id) unsigned int reg = id / 32; unsigned int bit = id % 32; u32 bitmask = BIT(bit); - unsigned long flags; - u32 value; dev_dbg(priv->dev, "assert %u%02u\n", reg, bit); - spin_lock_irqsave(&priv->rmw_lock, flags); - value = readl(priv->base + SRCR(reg)); - value |= bitmask; - writel(value, priv->base + SRCR(reg)); - spin_unlock_irqrestore(&priv->rmw_lock, flags); + writel(bitmask, priv->base + SRCR(reg)); return 0; } diff --git a/drivers/clk/socfpga/clk-periph-s10.c b/drivers/clk/socfpga/clk-periph-s10.c index 568f59b58ddfa94852462ed8fd3531d48fff151c..e7c877d354c7bff5777fec95eef40a48b3879188 100644 --- a/drivers/clk/socfpga/clk-periph-s10.c +++ b/drivers/clk/socfpga/clk-periph-s10.c @@ -37,7 +37,7 @@ static unsigned long clk_peri_cnt_clk_recalc_rate(struct clk_hw *hwclk, if (socfpgaclk->fixed_div) { div = socfpgaclk->fixed_div; } else { - if (!socfpgaclk->bypass_reg) + if (socfpgaclk->hw.reg) div = ((readl(socfpgaclk->hw.reg) & 0x7ff) + 1); } diff --git a/drivers/clk/sprd/Kconfig b/drivers/clk/sprd/Kconfig index 87892471eb96c3549ced203976eee9cbf7cfbb43..bad8099832d4806cd1a300549b2df66efc1df6f8 100644 --- a/drivers/clk/sprd/Kconfig +++ b/drivers/clk/sprd/Kconfig @@ -2,6 +2,7 @@ config SPRD_COMMON_CLK tristate "Clock support for Spreadtrum SoCs" depends on ARCH_SPRD || COMPILE_TEST default ARCH_SPRD + select REGMAP_MMIO if SPRD_COMMON_CLK diff --git a/drivers/clk/sprd/sc9860-clk.c b/drivers/clk/sprd/sc9860-clk.c index 9980ab55271ba2f71f7ad09ba04d2bbd23468eb7..f76305b4bc8df9488bf099a3fa2e5e2d7f09a56d 100644 --- a/drivers/clk/sprd/sc9860-clk.c +++ b/drivers/clk/sprd/sc9860-clk.c @@ -2023,6 +2023,7 @@ static int sc9860_clk_probe(struct platform_device *pdev) { const struct of_device_id *match; const struct sprd_clk_desc *desc; + int ret; match = of_match_node(sprd_sc9860_clk_ids, pdev->dev.of_node); if (!match) { @@ -2031,7 +2032,9 @@ static int sc9860_clk_probe(struct platform_device *pdev) } desc = match->data; - sprd_clk_regmap_init(pdev, desc); + ret = sprd_clk_regmap_init(pdev, desc); + if (ret) + return ret; return sprd_clk_probe(&pdev->dev, desc->hw_clks); } diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c index 9eb1cb14fce11ca5dd0ddd725102a343712d3b24..4e1bc23c98655aa0fdbece0cb5a3e2c0206cd2be 100644 --- a/drivers/clk/tegra/clk-tegra210.c +++ b/drivers/clk/tegra/clk-tegra210.c @@ -2214,9 +2214,9 @@ static struct div_nmp pllu_nmp = { }; static struct tegra_clk_pll_freq_table pll_u_freq_table[] = { - { 12000000, 480000000, 40, 1, 0, 0 }, - { 13000000, 480000000, 36, 1, 0, 0 }, /* actual: 468.0 MHz */ - { 38400000, 480000000, 25, 2, 0, 0 }, + { 12000000, 480000000, 40, 1, 1, 0 }, + { 13000000, 480000000, 36, 1, 1, 0 }, /* actual: 468.0 MHz */ + { 38400000, 480000000, 25, 2, 1, 0 }, { 0, 0, 0, 0, 0, 0 }, }; @@ -3343,6 +3343,7 @@ static struct tegra_clk_init_table init_table[] __initdata = { { TEGRA210_CLK_DFLL_REF, TEGRA210_CLK_PLL_P, 51000000, 1 }, { TEGRA210_CLK_SBC4, TEGRA210_CLK_PLL_P, 12000000, 1 }, { TEGRA210_CLK_PLL_RE_VCO, TEGRA210_CLK_CLK_MAX, 672000000, 1 }, + { TEGRA210_CLK_PLL_U_OUT1, TEGRA210_CLK_CLK_MAX, 48000000, 1 }, { TEGRA210_CLK_XUSB_GATE, TEGRA210_CLK_CLK_MAX, 0, 1 }, { TEGRA210_CLK_XUSB_SS_SRC, TEGRA210_CLK_PLL_U_480M, 120000000, 0 }, { TEGRA210_CLK_XUSB_FS_SRC, TEGRA210_CLK_PLL_U_48M, 48000000, 0 }, @@ -3367,7 +3368,6 @@ static struct tegra_clk_init_table init_table[] __initdata = { { TEGRA210_CLK_PLL_DP, 
TEGRA210_CLK_CLK_MAX, 270000000, 0 }, { TEGRA210_CLK_SOC_THERM, TEGRA210_CLK_PLL_P, 51000000, 0 }, { TEGRA210_CLK_CCLK_G, TEGRA210_CLK_CLK_MAX, 0, 1 }, - { TEGRA210_CLK_PLL_U_OUT1, TEGRA210_CLK_CLK_MAX, 48000000, 1 }, { TEGRA210_CLK_PLL_U_OUT2, TEGRA210_CLK_CLK_MAX, 60000000, 1 }, /* This MUST be the last entry. */ { TEGRA210_CLK_CLK_MAX, TEGRA210_CLK_CLK_MAX, 0, 0 }, diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c index ca3218337fd74f93148c762c30e44c2f3dacb74a..dfaa5aad06927bed680c3799d2c986769ff984db 100644 --- a/drivers/clk/ti/clkctrl.c +++ b/drivers/clk/ti/clkctrl.c @@ -229,6 +229,7 @@ static struct clk_hw *_ti_omap4_clkctrl_xlate(struct of_phandle_args *clkspec, { struct omap_clkctrl_provider *provider = data; struct omap_clkctrl_clk *entry; + bool found = false; if (clkspec->args_count != 2) return ERR_PTR(-EINVAL); @@ -238,11 +239,13 @@ static struct clk_hw *_ti_omap4_clkctrl_xlate(struct of_phandle_args *clkspec, list_for_each_entry(entry, &provider->clocks, node) { if (entry->reg_offset == clkspec->args[0] && - entry->bit_offset == clkspec->args[1]) + entry->bit_offset == clkspec->args[1]) { + found = true; break; + } } - if (!entry) + if (!found) return ERR_PTR(-EINVAL); return entry->clk; diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c index d55c30f6981dcc377b7ad6a2bb76b2b37a182ce1..aaf5bfa9bd9c915efdca1b2ac5fba58d86199a01 100644 --- a/drivers/clocksource/exynos_mct.c +++ b/drivers/clocksource/exynos_mct.c @@ -211,7 +211,7 @@ static void exynos4_frc_resume(struct clocksource *cs) static struct clocksource mct_frc = { .name = "mct-frc", - .rating = 400, + .rating = 450, /* use value higher than ARM arch timer */ .read = exynos4_frc_read, .mask = CLOCKSOURCE_MASK(32), .flags = CLOCK_SOURCE_IS_CONTINUOUS, @@ -466,7 +466,7 @@ static int exynos4_mct_starting_cpu(unsigned int cpu) evt->set_state_oneshot_stopped = set_state_shutdown; evt->tick_resume = set_state_shutdown; evt->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT; - evt->rating = 450; + evt->rating = 500; /* use value higher than ARM arch timer */ exynos4_mct_write(TICK_BASE_CNT, mevt->base + MCT_L_TCNTB_OFFSET); diff --git a/drivers/cpufreq/pasemi-cpufreq.c b/drivers/cpufreq/pasemi-cpufreq.c index c7710c149de85a57fc6fa9e784dee83a0b6cf11d..a0620c9ec064957c963f31a665a92106def6c929 100644 --- a/drivers/cpufreq/pasemi-cpufreq.c +++ b/drivers/cpufreq/pasemi-cpufreq.c @@ -145,10 +145,18 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy) int err = -ENODEV; cpu = of_get_cpu_node(policy->cpu, NULL); + if (!cpu) + goto out; + max_freqp = of_get_property(cpu, "clock-frequency", NULL); of_node_put(cpu); - if (!cpu) + if (!max_freqp) { + err = -EINVAL; goto out; + } + + /* we need the freq in kHz */ + max_freq = *max_freqp / 1000; dn = of_find_compatible_node(NULL, NULL, "1682m-sdc"); if (!dn) @@ -185,16 +193,6 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy) } pr_debug("init cpufreq on CPU %d\n", policy->cpu); - - max_freqp = of_get_property(cpu, "clock-frequency", NULL); - if (!max_freqp) { - err = -EINVAL; - goto out_unmap_sdcpwr; - } - - /* we need the freq in kHz */ - max_freq = *max_freqp / 1000; - pr_debug("max clock-frequency is at %u kHz\n", max_freq); pr_debug("initializing frequency table\n"); @@ -212,9 +210,6 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy) return cpufreq_generic_init(policy, pas_freqs, get_gizmo_latency()); -out_unmap_sdcpwr: - iounmap(sdcpwr_mapbase); - out_unmap_sdcasr: iounmap(sdcasr_mapbase); 
out: diff --git a/drivers/crypto/amcc/crypto4xx_alg.c b/drivers/crypto/amcc/crypto4xx_alg.c index 0c85a5123f85d976d6b808d24e78b12b5508b852..1d87deca32ed55be2290e186073b6ed183804566 100644 --- a/drivers/crypto/amcc/crypto4xx_alg.c +++ b/drivers/crypto/amcc/crypto4xx_alg.c @@ -76,12 +76,16 @@ static void set_dynamic_sa_command_1(struct dynamic_sa_ctl *sa, u32 cm, } static inline int crypto4xx_crypt(struct skcipher_request *req, - const unsigned int ivlen, bool decrypt) + const unsigned int ivlen, bool decrypt, + bool check_blocksize) { struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req); struct crypto4xx_ctx *ctx = crypto_skcipher_ctx(cipher); __le32 iv[AES_IV_SIZE]; + if (check_blocksize && !IS_ALIGNED(req->cryptlen, AES_BLOCK_SIZE)) + return -EINVAL; + if (ivlen) crypto4xx_memcpy_to_le32(iv, req->iv, ivlen); @@ -90,24 +94,34 @@ static inline int crypto4xx_crypt(struct skcipher_request *req, ctx->sa_len, 0, NULL); } -int crypto4xx_encrypt_noiv(struct skcipher_request *req) +int crypto4xx_encrypt_noiv_block(struct skcipher_request *req) +{ + return crypto4xx_crypt(req, 0, false, true); +} + +int crypto4xx_encrypt_iv_stream(struct skcipher_request *req) +{ + return crypto4xx_crypt(req, AES_IV_SIZE, false, false); +} + +int crypto4xx_decrypt_noiv_block(struct skcipher_request *req) { - return crypto4xx_crypt(req, 0, false); + return crypto4xx_crypt(req, 0, true, true); } -int crypto4xx_encrypt_iv(struct skcipher_request *req) +int crypto4xx_decrypt_iv_stream(struct skcipher_request *req) { - return crypto4xx_crypt(req, AES_IV_SIZE, false); + return crypto4xx_crypt(req, AES_IV_SIZE, true, false); } -int crypto4xx_decrypt_noiv(struct skcipher_request *req) +int crypto4xx_encrypt_iv_block(struct skcipher_request *req) { - return crypto4xx_crypt(req, 0, true); + return crypto4xx_crypt(req, AES_IV_SIZE, false, true); } -int crypto4xx_decrypt_iv(struct skcipher_request *req) +int crypto4xx_decrypt_iv_block(struct skcipher_request *req) { - return crypto4xx_crypt(req, AES_IV_SIZE, true); + return crypto4xx_crypt(req, AES_IV_SIZE, true, true); } /** @@ -278,8 +292,8 @@ crypto4xx_ctr_crypt(struct skcipher_request *req, bool encrypt) return ret; } - return encrypt ? crypto4xx_encrypt_iv(req) - : crypto4xx_decrypt_iv(req); + return encrypt ? 
crypto4xx_encrypt_iv_stream(req) + : crypto4xx_decrypt_iv_stream(req); } static int crypto4xx_sk_setup_fallback(struct crypto4xx_ctx *ctx, diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c index d2ec9fd1b8bb03f5e869b8a1122ccd5196ed4fff..6386e1784fe4198b385c063a93e4f5c3e2ca0ad4 100644 --- a/drivers/crypto/amcc/crypto4xx_core.c +++ b/drivers/crypto/amcc/crypto4xx_core.c @@ -1153,8 +1153,8 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = { .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_IV_SIZE, .setkey = crypto4xx_setkey_aes_cbc, - .encrypt = crypto4xx_encrypt_iv, - .decrypt = crypto4xx_decrypt_iv, + .encrypt = crypto4xx_encrypt_iv_block, + .decrypt = crypto4xx_decrypt_iv_block, .init = crypto4xx_sk_init, .exit = crypto4xx_sk_exit, } }, @@ -1173,8 +1173,8 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = { .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_IV_SIZE, .setkey = crypto4xx_setkey_aes_cfb, - .encrypt = crypto4xx_encrypt_iv, - .decrypt = crypto4xx_decrypt_iv, + .encrypt = crypto4xx_encrypt_iv_stream, + .decrypt = crypto4xx_decrypt_iv_stream, .init = crypto4xx_sk_init, .exit = crypto4xx_sk_exit, } }, @@ -1186,7 +1186,7 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = { .cra_flags = CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = AES_BLOCK_SIZE, + .cra_blocksize = 1, .cra_ctxsize = sizeof(struct crypto4xx_ctx), .cra_module = THIS_MODULE, }, @@ -1206,7 +1206,7 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = { .cra_priority = CRYPTO4XX_CRYPTO_PRIORITY, .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = AES_BLOCK_SIZE, + .cra_blocksize = 1, .cra_ctxsize = sizeof(struct crypto4xx_ctx), .cra_module = THIS_MODULE, }, @@ -1226,15 +1226,15 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = { .cra_priority = CRYPTO4XX_CRYPTO_PRIORITY, .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = AES_BLOCK_SIZE, + .cra_blocksize = 1, .cra_ctxsize = sizeof(struct crypto4xx_ctx), .cra_module = THIS_MODULE, }, .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .setkey = crypto4xx_setkey_aes_ecb, - .encrypt = crypto4xx_encrypt_noiv, - .decrypt = crypto4xx_decrypt_noiv, + .encrypt = crypto4xx_encrypt_noiv_block, + .decrypt = crypto4xx_decrypt_noiv_block, .init = crypto4xx_sk_init, .exit = crypto4xx_sk_exit, } }, @@ -1245,7 +1245,7 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = { .cra_priority = CRYPTO4XX_CRYPTO_PRIORITY, .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = AES_BLOCK_SIZE, + .cra_blocksize = 1, .cra_ctxsize = sizeof(struct crypto4xx_ctx), .cra_module = THIS_MODULE, }, @@ -1253,8 +1253,8 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = { .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_IV_SIZE, .setkey = crypto4xx_setkey_aes_ofb, - .encrypt = crypto4xx_encrypt_iv, - .decrypt = crypto4xx_decrypt_iv, + .encrypt = crypto4xx_encrypt_iv_stream, + .decrypt = crypto4xx_decrypt_iv_stream, .init = crypto4xx_sk_init, .exit = crypto4xx_sk_exit, } }, diff --git a/drivers/crypto/amcc/crypto4xx_core.h b/drivers/crypto/amcc/crypto4xx_core.h index e2ca56722f077242a0a356d14e15638db48ab49c..21a6bbcedc55db7741b770b91ab33e3a4750d86b 100644 --- a/drivers/crypto/amcc/crypto4xx_core.h +++ b/drivers/crypto/amcc/crypto4xx_core.h @@ -179,10 +179,12 @@ int crypto4xx_setkey_rfc3686(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen); int crypto4xx_encrypt_ctr(struct skcipher_request 
*req); int crypto4xx_decrypt_ctr(struct skcipher_request *req); -int crypto4xx_encrypt_iv(struct skcipher_request *req); -int crypto4xx_decrypt_iv(struct skcipher_request *req); -int crypto4xx_encrypt_noiv(struct skcipher_request *req); -int crypto4xx_decrypt_noiv(struct skcipher_request *req); +int crypto4xx_encrypt_iv_stream(struct skcipher_request *req); +int crypto4xx_decrypt_iv_stream(struct skcipher_request *req); +int crypto4xx_encrypt_iv_block(struct skcipher_request *req); +int crypto4xx_decrypt_iv_block(struct skcipher_request *req); +int crypto4xx_encrypt_noiv_block(struct skcipher_request *req); +int crypto4xx_decrypt_noiv_block(struct skcipher_request *req); int crypto4xx_rfc3686_encrypt(struct skcipher_request *req); int crypto4xx_rfc3686_decrypt(struct skcipher_request *req); int crypto4xx_sha1_alg_init(struct crypto_tfm *tfm); diff --git a/drivers/crypto/amcc/crypto4xx_trng.c b/drivers/crypto/amcc/crypto4xx_trng.c index 53ab1f140a2658ca521b60de0be0ce69b4185134..8a3ed40312061f9e877c2219602a177a3491d5a0 100644 --- a/drivers/crypto/amcc/crypto4xx_trng.c +++ b/drivers/crypto/amcc/crypto4xx_trng.c @@ -111,7 +111,6 @@ void ppc4xx_trng_probe(struct crypto4xx_core_device *core_dev) return; err_out: - of_node_put(trng); iounmap(dev->trng_base); kfree(rng); dev->trng_base = NULL; diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index 9bc54c3c2cb97d9b539cffe29b7ece48d07214f9..1907945f82b787bf4baa49fa4aaf818d458b4839 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -887,6 +887,7 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err, struct ablkcipher_request *req = context; struct ablkcipher_edesc *edesc; struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req); + struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher); int ivsize = crypto_ablkcipher_ivsize(ablkcipher); #ifdef DEBUG @@ -911,10 +912,11 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err, /* * The crypto API expects us to set the IV (req->info) to the last - * ciphertext block. This is used e.g. by the CTS mode. + * ciphertext block when running in CBC mode. */ - scatterwalk_map_and_copy(req->info, req->dst, req->nbytes - ivsize, - ivsize, 0); + if ((ctx->cdata.algtype & OP_ALG_AAI_MASK) == OP_ALG_AAI_CBC) + scatterwalk_map_and_copy(req->info, req->dst, req->nbytes - + ivsize, ivsize, 0); /* In case initial IV was generated, copy it in GIVCIPHER request */ if (edesc->iv_dir == DMA_FROM_DEVICE) { @@ -1651,10 +1653,11 @@ static int ablkcipher_decrypt(struct ablkcipher_request *req) /* * The crypto API expects us to set the IV (req->info) to the last - * ciphertext block. + * ciphertext block when running in CBC mode. 
*/ - scatterwalk_map_and_copy(req->info, req->src, req->nbytes - ivsize, - ivsize, 0); + if ((ctx->cdata.algtype & OP_ALG_AAI_MASK) == OP_ALG_AAI_CBC) + scatterwalk_map_and_copy(req->info, req->src, req->nbytes - + ivsize, ivsize, 0); /* Create and submit job descriptor*/ init_ablkcipher_job(ctx->sh_desc_dec, ctx->sh_desc_dec_dma, edesc, req); diff --git a/drivers/crypto/ccp/ccp-crypto-aes-galois.c b/drivers/crypto/ccp/ccp-crypto-aes-galois.c index ca1f0d780b61cee959cb326d60bbde7c43782a78..e5dcb29b687f6324621405218ed20ab2d781894e 100644 --- a/drivers/crypto/ccp/ccp-crypto-aes-galois.c +++ b/drivers/crypto/ccp/ccp-crypto-aes-galois.c @@ -61,6 +61,19 @@ static int ccp_aes_gcm_setkey(struct crypto_aead *tfm, const u8 *key, static int ccp_aes_gcm_setauthsize(struct crypto_aead *tfm, unsigned int authsize) { + switch (authsize) { + case 16: + case 15: + case 14: + case 13: + case 12: + case 8: + case 4: + break; + default: + return -EINVAL; + } + return 0; } @@ -107,6 +120,7 @@ static int ccp_aes_gcm_crypt(struct aead_request *req, bool encrypt) memset(&rctx->cmd, 0, sizeof(rctx->cmd)); INIT_LIST_HEAD(&rctx->cmd.entry); rctx->cmd.engine = CCP_ENGINE_AES; + rctx->cmd.u.aes.authsize = crypto_aead_authsize(tfm); rctx->cmd.u.aes.type = ctx->u.aes.type; rctx->cmd.u.aes.mode = ctx->u.aes.mode; rctx->cmd.u.aes.action = encrypt; diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c index 1b5035d562880a6d0b66458fae9fa15adda33c1c..b8c94a01cfc941c0808b24d88c639216086cdd8c 100644 --- a/drivers/crypto/ccp/ccp-dev.c +++ b/drivers/crypto/ccp/ccp-dev.c @@ -35,56 +35,62 @@ struct ccp_tasklet_data { }; /* Human-readable error strings */ +#define CCP_MAX_ERROR_CODE 64 static char *ccp_error_codes[] = { "", - "ERR 01: ILLEGAL_ENGINE", - "ERR 02: ILLEGAL_KEY_ID", - "ERR 03: ILLEGAL_FUNCTION_TYPE", - "ERR 04: ILLEGAL_FUNCTION_MODE", - "ERR 05: ILLEGAL_FUNCTION_ENCRYPT", - "ERR 06: ILLEGAL_FUNCTION_SIZE", - "ERR 07: Zlib_MISSING_INIT_EOM", - "ERR 08: ILLEGAL_FUNCTION_RSVD", - "ERR 09: ILLEGAL_BUFFER_LENGTH", - "ERR 10: VLSB_FAULT", - "ERR 11: ILLEGAL_MEM_ADDR", - "ERR 12: ILLEGAL_MEM_SEL", - "ERR 13: ILLEGAL_CONTEXT_ID", - "ERR 14: ILLEGAL_KEY_ADDR", - "ERR 15: 0xF Reserved", - "ERR 16: Zlib_ILLEGAL_MULTI_QUEUE", - "ERR 17: Zlib_ILLEGAL_JOBID_CHANGE", - "ERR 18: CMD_TIMEOUT", - "ERR 19: IDMA0_AXI_SLVERR", - "ERR 20: IDMA0_AXI_DECERR", - "ERR 21: 0x15 Reserved", - "ERR 22: IDMA1_AXI_SLAVE_FAULT", - "ERR 23: IDMA1_AIXI_DECERR", - "ERR 24: 0x18 Reserved", - "ERR 25: ZLIBVHB_AXI_SLVERR", - "ERR 26: ZLIBVHB_AXI_DECERR", - "ERR 27: 0x1B Reserved", - "ERR 27: ZLIB_UNEXPECTED_EOM", - "ERR 27: ZLIB_EXTRA_DATA", - "ERR 30: ZLIB_BTYPE", - "ERR 31: ZLIB_UNDEFINED_SYMBOL", - "ERR 32: ZLIB_UNDEFINED_DISTANCE_S", - "ERR 33: ZLIB_CODE_LENGTH_SYMBOL", - "ERR 34: ZLIB _VHB_ILLEGAL_FETCH", - "ERR 35: ZLIB_UNCOMPRESSED_LEN", - "ERR 36: ZLIB_LIMIT_REACHED", - "ERR 37: ZLIB_CHECKSUM_MISMATCH0", - "ERR 38: ODMA0_AXI_SLVERR", - "ERR 39: ODMA0_AXI_DECERR", - "ERR 40: 0x28 Reserved", - "ERR 41: ODMA1_AXI_SLVERR", - "ERR 42: ODMA1_AXI_DECERR", - "ERR 43: LSB_PARITY_ERR", + "ILLEGAL_ENGINE", + "ILLEGAL_KEY_ID", + "ILLEGAL_FUNCTION_TYPE", + "ILLEGAL_FUNCTION_MODE", + "ILLEGAL_FUNCTION_ENCRYPT", + "ILLEGAL_FUNCTION_SIZE", + "Zlib_MISSING_INIT_EOM", + "ILLEGAL_FUNCTION_RSVD", + "ILLEGAL_BUFFER_LENGTH", + "VLSB_FAULT", + "ILLEGAL_MEM_ADDR", + "ILLEGAL_MEM_SEL", + "ILLEGAL_CONTEXT_ID", + "ILLEGAL_KEY_ADDR", + "0xF Reserved", + "Zlib_ILLEGAL_MULTI_QUEUE", + "Zlib_ILLEGAL_JOBID_CHANGE", + "CMD_TIMEOUT", + "IDMA0_AXI_SLVERR", + 
"IDMA0_AXI_DECERR", + "0x15 Reserved", + "IDMA1_AXI_SLAVE_FAULT", + "IDMA1_AIXI_DECERR", + "0x18 Reserved", + "ZLIBVHB_AXI_SLVERR", + "ZLIBVHB_AXI_DECERR", + "0x1B Reserved", + "ZLIB_UNEXPECTED_EOM", + "ZLIB_EXTRA_DATA", + "ZLIB_BTYPE", + "ZLIB_UNDEFINED_SYMBOL", + "ZLIB_UNDEFINED_DISTANCE_S", + "ZLIB_CODE_LENGTH_SYMBOL", + "ZLIB _VHB_ILLEGAL_FETCH", + "ZLIB_UNCOMPRESSED_LEN", + "ZLIB_LIMIT_REACHED", + "ZLIB_CHECKSUM_MISMATCH0", + "ODMA0_AXI_SLVERR", + "ODMA0_AXI_DECERR", + "0x28 Reserved", + "ODMA1_AXI_SLVERR", + "ODMA1_AXI_DECERR", }; -void ccp_log_error(struct ccp_device *d, int e) +void ccp_log_error(struct ccp_device *d, unsigned int e) { - dev_err(d->dev, "CCP error: %s (0x%x)\n", ccp_error_codes[e], e); + if (WARN_ON(e >= CCP_MAX_ERROR_CODE)) + return; + + if (e < ARRAY_SIZE(ccp_error_codes)) + dev_err(d->dev, "CCP error %d: %s\n", e, ccp_error_codes[e]); + else + dev_err(d->dev, "CCP error %d: Unknown Error\n", e); } /* List of CCPs, CCP count, read-write access lock, and access functions @@ -537,6 +543,10 @@ int ccp_dev_suspend(struct sp_device *sp, pm_message_t state) unsigned long flags; unsigned int i; + /* If there's no device there's nothing to do */ + if (!ccp) + return 0; + spin_lock_irqsave(&ccp->cmd_lock, flags); ccp->suspending = 1; @@ -561,6 +571,10 @@ int ccp_dev_resume(struct sp_device *sp) unsigned long flags; unsigned int i; + /* If there's no device there's nothing to do */ + if (!ccp) + return 0; + spin_lock_irqsave(&ccp->cmd_lock, flags); ccp->suspending = 0; diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h index 6810b65c1939c88abee1448f06e9d9c2b71460df..7442b0422f8ac032f3c0c7ee87f352b522671927 100644 --- a/drivers/crypto/ccp/ccp-dev.h +++ b/drivers/crypto/ccp/ccp-dev.h @@ -632,7 +632,7 @@ struct ccp5_desc { void ccp_add_device(struct ccp_device *ccp); void ccp_del_device(struct ccp_device *ccp); -extern void ccp_log_error(struct ccp_device *, int); +extern void ccp_log_error(struct ccp_device *, unsigned int); struct ccp_device *ccp_alloc_struct(struct sp_device *sp); bool ccp_queues_suspended(struct ccp_device *ccp); diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c index 0ea43cdeb05f0f4c5c38a332b30faf1cc67f4c6e..1e2e42106dee070d92ac543c8e1448327caffb8a 100644 --- a/drivers/crypto/ccp/ccp-ops.c +++ b/drivers/crypto/ccp/ccp-ops.c @@ -625,6 +625,8 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, unsigned long long *final; unsigned int dm_offset; + unsigned int authsize; + unsigned int jobid; unsigned int ilen; bool in_place = true; /* Default value */ int ret; @@ -645,6 +647,21 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, if (!aes->key) /* Gotta have a key SGL */ return -EINVAL; + /* Zero defaults to 16 bytes, the maximum size */ + authsize = aes->authsize ? aes->authsize : AES_BLOCK_SIZE; + switch (authsize) { + case 16: + case 15: + case 14: + case 13: + case 12: + case 8: + case 4: + break; + default: + return -EINVAL; + } + /* First, decompose the source buffer into AAD & PT, * and the destination buffer into AAD, CT & tag, or * the input into CT & tag. 
@@ -659,13 +676,15 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, p_tag = scatterwalk_ffwd(sg_tag, p_outp, ilen); } else { /* Input length for decryption includes tag */ - ilen = aes->src_len - AES_BLOCK_SIZE; + ilen = aes->src_len - authsize; p_tag = scatterwalk_ffwd(sg_tag, p_inp, ilen); } + jobid = CCP_NEW_JOBID(cmd_q->ccp); + memset(&op, 0, sizeof(op)); op.cmd_q = cmd_q; - op.jobid = CCP_NEW_JOBID(cmd_q->ccp); + op.jobid = jobid; op.sb_key = cmd_q->sb_key; /* Pre-allocated */ op.sb_ctx = cmd_q->sb_ctx; /* Pre-allocated */ op.init = 1; @@ -766,8 +785,7 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, while (src.sg_wa.bytes_left) { ccp_prepare_data(&src, &dst, &op, AES_BLOCK_SIZE, true); if (!src.sg_wa.bytes_left) { - unsigned int nbytes = aes->src_len - % AES_BLOCK_SIZE; + unsigned int nbytes = ilen % AES_BLOCK_SIZE; if (nbytes) { op.eom = 1; @@ -816,6 +834,13 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, final[0] = cpu_to_be64(aes->aad_len * 8); final[1] = cpu_to_be64(ilen * 8); + memset(&op, 0, sizeof(op)); + op.cmd_q = cmd_q; + op.jobid = jobid; + op.sb_key = cmd_q->sb_key; /* Pre-allocated */ + op.sb_ctx = cmd_q->sb_ctx; /* Pre-allocated */ + op.init = 1; + op.u.aes.type = aes->type; op.u.aes.mode = CCP_AES_MODE_GHASH; op.u.aes.action = CCP_AES_GHASHFINAL; op.src.type = CCP_MEMTYPE_SYSTEM; @@ -832,18 +857,19 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, if (aes->action == CCP_AES_ACTION_ENCRYPT) { /* Put the ciphered tag after the ciphertext. */ - ccp_get_dm_area(&final_wa, 0, p_tag, 0, AES_BLOCK_SIZE); + ccp_get_dm_area(&final_wa, 0, p_tag, 0, authsize); } else { /* Does this ciphered tag match the input? */ - ret = ccp_init_dm_workarea(&tag, cmd_q, AES_BLOCK_SIZE, + ret = ccp_init_dm_workarea(&tag, cmd_q, authsize, DMA_BIDIRECTIONAL); if (ret) goto e_tag; - ret = ccp_set_dm_area(&tag, 0, p_tag, 0, AES_BLOCK_SIZE); + ret = ccp_set_dm_area(&tag, 0, p_tag, 0, authsize); if (ret) goto e_tag; - ret = memcmp(tag.address, final_wa.address, AES_BLOCK_SIZE); + ret = crypto_memneq(tag.address, final_wa.address, + authsize) ? 
-EBADMSG : 0; ccp_dm_free(&tag); } @@ -851,11 +877,11 @@ e_tag: ccp_dm_free(&final_wa); e_dst: - if (aes->src_len && !in_place) + if (ilen > 0 && !in_place) ccp_free_data(&dst, cmd_q); e_src: - if (aes->src_len) + if (ilen > 0) ccp_free_data(&src, cmd_q); e_aad: diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c index 3aef1d43e43510130dc359d61c911e51209e9a7f..42a3830fbd1901f02529d10071a9a6453bf42cbe 100644 --- a/drivers/crypto/inside-secure/safexcel_cipher.c +++ b/drivers/crypto/inside-secure/safexcel_cipher.c @@ -51,6 +51,8 @@ struct safexcel_cipher_ctx { struct safexcel_cipher_req { enum safexcel_cipher_direction direction; + /* Number of result descriptors associated to the request */ + unsigned int rdescs; bool needs_inv; }; @@ -333,7 +335,10 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin *ret = 0; - do { + if (unlikely(!sreq->rdescs)) + return 0; + + while (sreq->rdescs--) { rdesc = safexcel_ring_next_rptr(priv, &priv->ring[ring].rdr); if (IS_ERR(rdesc)) { dev_err(priv->dev, @@ -346,7 +351,7 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin *ret = safexcel_rdesc_check_errors(priv, rdesc); ndesc++; - } while (!rdesc->last_seg); + } safexcel_complete(priv, ring); @@ -501,6 +506,7 @@ cdesc_rollback: static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv, int ring, struct crypto_async_request *base, + struct safexcel_cipher_req *sreq, bool *should_complete, int *ret) { struct safexcel_cipher_ctx *ctx = crypto_tfm_ctx(base->tfm); @@ -509,7 +515,10 @@ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv, *ret = 0; - do { + if (unlikely(!sreq->rdescs)) + return 0; + + while (sreq->rdescs--) { rdesc = safexcel_ring_next_rptr(priv, &priv->ring[ring].rdr); if (IS_ERR(rdesc)) { dev_err(priv->dev, @@ -522,7 +531,7 @@ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv, *ret = safexcel_rdesc_check_errors(priv, rdesc); ndesc++; - } while (!rdesc->last_seg); + } safexcel_complete(priv, ring); @@ -564,7 +573,7 @@ static int safexcel_skcipher_handle_result(struct safexcel_crypto_priv *priv, if (sreq->needs_inv) { sreq->needs_inv = false; - err = safexcel_handle_inv_result(priv, ring, async, + err = safexcel_handle_inv_result(priv, ring, async, sreq, should_complete, ret); } else { err = safexcel_handle_req_result(priv, ring, async, req->src, @@ -587,7 +596,7 @@ static int safexcel_aead_handle_result(struct safexcel_crypto_priv *priv, if (sreq->needs_inv) { sreq->needs_inv = false; - err = safexcel_handle_inv_result(priv, ring, async, + err = safexcel_handle_inv_result(priv, ring, async, sreq, should_complete, ret); } else { err = safexcel_handle_req_result(priv, ring, async, req->src, @@ -633,6 +642,8 @@ static int safexcel_skcipher_send(struct crypto_async_request *async, int ring, ret = safexcel_send_req(async, ring, sreq, req->src, req->dst, req->cryptlen, 0, 0, req->iv, commands, results); + + sreq->rdescs = *results; return ret; } @@ -655,6 +666,7 @@ static int safexcel_aead_send(struct crypto_async_request *async, int ring, req->cryptlen, req->assoclen, crypto_aead_authsize(tfm), req->iv, commands, results); + sreq->rdescs = *results; return ret; } diff --git a/drivers/crypto/nx/nx-842-powernv.c b/drivers/crypto/nx/nx-842-powernv.c index c68df7e8bee185487cd6e9544fd6c4f31ab0a9c2..7ce2467c771eb63d7a2b932329670904ec40b027 100644 --- a/drivers/crypto/nx/nx-842-powernv.c +++ b/drivers/crypto/nx/nx-842-powernv.c @@ 
-36,8 +36,6 @@ MODULE_ALIAS_CRYPTO("842-nx"); #define WORKMEM_ALIGN (CRB_ALIGN) #define CSB_WAIT_MAX (5000) /* ms */ #define VAS_RETRIES (10) -/* # of requests allowed per RxFIFO at a time. 0 for unlimited */ -#define MAX_CREDITS_PER_RXFIFO (1024) struct nx842_workmem { /* Below fields must be properly aligned */ @@ -821,7 +819,11 @@ static int __init vas_cfg_coproc_info(struct device_node *dn, int chip_id, rxattr.lnotify_lpid = lpid; rxattr.lnotify_pid = pid; rxattr.lnotify_tid = tid; - rxattr.wcreds_max = MAX_CREDITS_PER_RXFIFO; + /* + * Maximum RX window credits can not be more than #CRBs in + * RxFIFO. Otherwise, can get checkstop if RxFIFO overruns. + */ + rxattr.wcreds_max = fifo_size / CRB_SIZE; /* * Open a VAS receice window which is used to configure RxFIFO diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c index c5859d3cb82547d18992daf2d5bc1951ff2e0096..41b288bdcdbfe078cd3dc1e2d08fe0304fd3ef90 100644 --- a/drivers/crypto/talitos.c +++ b/drivers/crypto/talitos.c @@ -334,6 +334,21 @@ int talitos_submit(struct device *dev, int ch, struct talitos_desc *desc, } EXPORT_SYMBOL(talitos_submit); +static __be32 get_request_hdr(struct talitos_request *request, bool is_sec1) +{ + struct talitos_edesc *edesc; + + if (!is_sec1) + return request->desc->hdr; + + if (!request->desc->next_desc) + return request->desc->hdr1; + + edesc = container_of(request->desc, struct talitos_edesc, desc); + + return ((struct talitos_desc *)(edesc->buf + edesc->dma_len))->hdr1; +} + /* * process what was done, notify callback of error if not */ @@ -355,12 +370,7 @@ static void flush_channel(struct device *dev, int ch, int error, int reset_ch) /* descriptors with their done bits set don't get the error */ rmb(); - if (!is_sec1) - hdr = request->desc->hdr; - else if (request->desc->next_desc) - hdr = (request->desc + 1)->hdr1; - else - hdr = request->desc->hdr1; + hdr = get_request_hdr(request, is_sec1); if ((hdr & DESC_HDR_DONE) == DESC_HDR_DONE) status = 0; @@ -490,8 +500,14 @@ static u32 current_desc_hdr(struct device *dev, int ch) } } - if (priv->chan[ch].fifo[iter].desc->next_desc == cur_desc) - return (priv->chan[ch].fifo[iter].desc + 1)->hdr; + if (priv->chan[ch].fifo[iter].desc->next_desc == cur_desc) { + struct talitos_edesc *edesc; + + edesc = container_of(priv->chan[ch].fifo[iter].desc, + struct talitos_edesc, desc); + return ((struct talitos_desc *) + (edesc->buf + edesc->dma_len))->hdr; + } return priv->chan[ch].fifo[iter].desc->hdr; } @@ -913,36 +929,6 @@ badkey: return -EINVAL; } -/* - * talitos_edesc - s/w-extended descriptor - * @src_nents: number of segments in input scatterlist - * @dst_nents: number of segments in output scatterlist - * @icv_ool: whether ICV is out-of-line - * @iv_dma: dma address of iv for checking continuity and link table - * @dma_len: length of dma mapped link_tbl space - * @dma_link_tbl: bus physical address of link_tbl/buf - * @desc: h/w descriptor - * @link_tbl: input and output h/w link tables (if {src,dst}_nents > 1) (SEC2) - * @buf: input and output buffeur (if {src,dst}_nents > 1) (SEC1) - * - * if decrypting (with authcheck), or either one of src_nents or dst_nents - * is greater than 1, an integrity check value is concatenated to the end - * of link_tbl data - */ -struct talitos_edesc { - int src_nents; - int dst_nents; - bool icv_ool; - dma_addr_t iv_dma; - int dma_len; - dma_addr_t dma_link_tbl; - struct talitos_desc desc; - union { - struct talitos_ptr link_tbl[0]; - u8 buf[0]; - }; -}; - static void talitos_sg_unmap(struct device *dev, struct 
talitos_edesc *edesc, struct scatterlist *src, @@ -1015,7 +1001,6 @@ static void ipsec_esp_encrypt_done(struct device *dev, unsigned int authsize = crypto_aead_authsize(authenc); unsigned int ivsize = crypto_aead_ivsize(authenc); struct talitos_edesc *edesc; - struct scatterlist *sg; void *icvdata; edesc = container_of(desc, struct talitos_edesc, desc); @@ -1029,9 +1014,8 @@ static void ipsec_esp_encrypt_done(struct device *dev, else icvdata = &edesc->link_tbl[edesc->src_nents + edesc->dst_nents + 2]; - sg = sg_last(areq->dst, edesc->dst_nents); - memcpy((char *)sg_virt(sg) + sg->length - authsize, - icvdata, authsize); + sg_pcopy_from_buffer(areq->dst, edesc->dst_nents ? : 1, icvdata, + authsize, areq->assoclen + areq->cryptlen); } dma_unmap_single(dev, edesc->iv_dma, ivsize, DMA_TO_DEVICE); @@ -1049,7 +1033,6 @@ static void ipsec_esp_decrypt_swauth_done(struct device *dev, struct crypto_aead *authenc = crypto_aead_reqtfm(req); unsigned int authsize = crypto_aead_authsize(authenc); struct talitos_edesc *edesc; - struct scatterlist *sg; char *oicv, *icv; struct talitos_private *priv = dev_get_drvdata(dev); bool is_sec1 = has_ftr_sec1(priv); @@ -1059,9 +1042,18 @@ static void ipsec_esp_decrypt_swauth_done(struct device *dev, ipsec_esp_unmap(dev, edesc, req); if (!err) { + char icvdata[SHA512_DIGEST_SIZE]; + int nents = edesc->dst_nents ? : 1; + unsigned int len = req->assoclen + req->cryptlen; + /* auth check */ - sg = sg_last(req->dst, edesc->dst_nents ? : 1); - icv = (char *)sg_virt(sg) + sg->length - authsize; + if (nents > 1) { + sg_pcopy_to_buffer(req->dst, nents, icvdata, authsize, + len - authsize); + icv = icvdata; + } else { + icv = (char *)sg_virt(req->dst) + len - authsize; + } if (edesc->dma_len) { if (is_sec1) @@ -1431,15 +1423,11 @@ static struct talitos_edesc *talitos_edesc_alloc(struct device *dev, edesc->dst_nents = dst_nents; edesc->iv_dma = iv_dma; edesc->dma_len = dma_len; - if (dma_len) { - void *addr = &edesc->link_tbl[0]; - - if (is_sec1 && !dst) - addr += sizeof(struct talitos_desc); - edesc->dma_link_tbl = dma_map_single(dev, addr, + if (dma_len) + edesc->dma_link_tbl = dma_map_single(dev, &edesc->link_tbl[0], edesc->dma_len, DMA_BIDIRECTIONAL); - } + return edesc; } @@ -1481,7 +1469,6 @@ static int aead_decrypt(struct aead_request *req) struct talitos_ctx *ctx = crypto_aead_ctx(authenc); struct talitos_private *priv = dev_get_drvdata(ctx->dev); struct talitos_edesc *edesc; - struct scatterlist *sg; void *icvdata; req->cryptlen -= authsize; @@ -1515,9 +1502,8 @@ static int aead_decrypt(struct aead_request *req) else icvdata = &edesc->link_tbl[0]; - sg = sg_last(req->src, edesc->src_nents ? : 1); - - memcpy(icvdata, (char *)sg_virt(sg) + sg->length - authsize, authsize); + sg_pcopy_to_buffer(req->src, edesc->src_nents ? 
: 1, icvdata, authsize, + req->assoclen + req->cryptlen - authsize); return ipsec_esp(edesc, req, ipsec_esp_decrypt_swauth_done); } @@ -1571,11 +1557,15 @@ static void ablkcipher_done(struct device *dev, int err) { struct ablkcipher_request *areq = context; + struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); + struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher); + unsigned int ivsize = crypto_ablkcipher_ivsize(cipher); struct talitos_edesc *edesc; edesc = container_of(desc, struct talitos_edesc, desc); common_nonsnoop_unmap(dev, edesc, areq); + memcpy(areq->info, ctx->iv, ivsize); kfree(edesc); @@ -1706,14 +1696,16 @@ static void common_nonsnoop_hash_unmap(struct device *dev, struct talitos_private *priv = dev_get_drvdata(dev); bool is_sec1 = has_ftr_sec1(priv); struct talitos_desc *desc = &edesc->desc; - struct talitos_desc *desc2 = desc + 1; + struct talitos_desc *desc2 = (struct talitos_desc *) + (edesc->buf + edesc->dma_len); unmap_single_talitos_ptr(dev, &edesc->desc.ptr[5], DMA_FROM_DEVICE); if (desc->next_desc && desc->ptr[5].ptr != desc2->ptr[5].ptr) unmap_single_talitos_ptr(dev, &desc2->ptr[5], DMA_FROM_DEVICE); - talitos_sg_unmap(dev, edesc, req_ctx->psrc, NULL, 0, 0); + if (req_ctx->psrc) + talitos_sg_unmap(dev, edesc, req_ctx->psrc, NULL, 0, 0); /* When using hashctx-in, must unmap it. */ if (from_talitos_ptr_len(&edesc->desc.ptr[1], is_sec1)) @@ -1780,7 +1772,6 @@ static void talitos_handle_buggy_hash(struct talitos_ctx *ctx, static int common_nonsnoop_hash(struct talitos_edesc *edesc, struct ahash_request *areq, unsigned int length, - unsigned int offset, void (*callback) (struct device *dev, struct talitos_desc *desc, void *context, int error)) @@ -1819,9 +1810,7 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc, sg_count = edesc->src_nents ?: 1; if (is_sec1 && sg_count > 1) - sg_pcopy_to_buffer(req_ctx->psrc, sg_count, - edesc->buf + sizeof(struct talitos_desc), - length, req_ctx->nbuf); + sg_copy_to_buffer(req_ctx->psrc, sg_count, edesc->buf, length); else if (length) sg_count = dma_map_sg(dev, req_ctx->psrc, sg_count, DMA_TO_DEVICE); @@ -1834,7 +1823,7 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc, DMA_TO_DEVICE); } else { sg_count = talitos_sg_map(dev, req_ctx->psrc, length, edesc, - &desc->ptr[3], sg_count, offset, 0); + &desc->ptr[3], sg_count, 0, 0); if (sg_count > 1) sync_needed = true; } @@ -1858,7 +1847,8 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc, talitos_handle_buggy_hash(ctx, edesc, &desc->ptr[3]); if (is_sec1 && req_ctx->nbuf && length) { - struct talitos_desc *desc2 = desc + 1; + struct talitos_desc *desc2 = (struct talitos_desc *) + (edesc->buf + edesc->dma_len); dma_addr_t next_desc; memset(desc2, 0, sizeof(*desc2)); @@ -1879,7 +1869,7 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc, DMA_TO_DEVICE); copy_talitos_ptr(&desc2->ptr[2], &desc->ptr[2], is_sec1); sg_count = talitos_sg_map(dev, req_ctx->psrc, length, edesc, - &desc2->ptr[3], sg_count, offset, 0); + &desc2->ptr[3], sg_count, 0, 0); if (sg_count > 1) sync_needed = true; copy_talitos_ptr(&desc2->ptr[5], &desc->ptr[5], is_sec1); @@ -1990,7 +1980,6 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes) struct device *dev = ctx->dev; struct talitos_private *priv = dev_get_drvdata(dev); bool is_sec1 = has_ftr_sec1(priv); - int offset = 0; u8 *ctx_buf = req_ctx->buf[req_ctx->buf_idx]; if (!req_ctx->last && (nbytes + req_ctx->nbuf <= blocksize)) { @@ -2030,6 +2019,8 @@ static int 
ahash_process_req(struct ahash_request *areq, unsigned int nbytes) sg_chain(req_ctx->bufsl, 2, areq->src); req_ctx->psrc = req_ctx->bufsl; } else if (is_sec1 && req_ctx->nbuf && req_ctx->nbuf < blocksize) { + int offset; + if (nbytes_to_hash > blocksize) offset = blocksize - req_ctx->nbuf; else @@ -2042,7 +2033,8 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes) sg_copy_to_buffer(areq->src, nents, ctx_buf + req_ctx->nbuf, offset); req_ctx->nbuf += offset; - req_ctx->psrc = areq->src; + req_ctx->psrc = scatterwalk_ffwd(req_ctx->bufsl, areq->src, + offset); } else req_ctx->psrc = areq->src; @@ -2082,8 +2074,7 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes) if (ctx->keylen && (req_ctx->first || req_ctx->last)) edesc->desc.hdr |= DESC_HDR_MODE0_MDEU_HMAC; - return common_nonsnoop_hash(edesc, areq, nbytes_to_hash, offset, - ahash_done); + return common_nonsnoop_hash(edesc, areq, nbytes_to_hash, ahash_done); } static int ahash_update(struct ahash_request *areq) @@ -3202,7 +3193,10 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev, alg->cra_priority = t_alg->algt.priority; else alg->cra_priority = TALITOS_CRA_PRIORITY; - alg->cra_alignmask = 0; + if (has_ftr_sec1(priv)) + alg->cra_alignmask = 3; + else + alg->cra_alignmask = 0; alg->cra_ctxsize = sizeof(struct talitos_ctx); alg->cra_flags |= CRYPTO_ALG_KERN_DRIVER_ONLY; diff --git a/drivers/crypto/talitos.h b/drivers/crypto/talitos.h index a65a63e0d6c10e7d31c0010a97344802896f0679..979f6a61e545f1aa54d9d895932b838572f3fd86 100644 --- a/drivers/crypto/talitos.h +++ b/drivers/crypto/talitos.h @@ -65,6 +65,36 @@ struct talitos_desc { #define TALITOS_DESC_SIZE (sizeof(struct talitos_desc) - sizeof(__be32)) +/* + * talitos_edesc - s/w-extended descriptor + * @src_nents: number of segments in input scatterlist + * @dst_nents: number of segments in output scatterlist + * @icv_ool: whether ICV is out-of-line + * @iv_dma: dma address of iv for checking continuity and link table + * @dma_len: length of dma mapped link_tbl space + * @dma_link_tbl: bus physical address of link_tbl/buf + * @desc: h/w descriptor + * @link_tbl: input and output h/w link tables (if {src,dst}_nents > 1) (SEC2) + * @buf: input and output buffeur (if {src,dst}_nents > 1) (SEC1) + * + * if decrypting (with authcheck), or either one of src_nents or dst_nents + * is greater than 1, an integrity check value is concatenated to the end + * of link_tbl data + */ +struct talitos_edesc { + int src_nents; + int dst_nents; + bool icv_ool; + dma_addr_t iv_dma; + int dma_len; + dma_addr_t dma_link_tbl; + struct talitos_desc desc; + union { + struct talitos_ptr link_tbl[0]; + u8 buf[0]; + }; +}; + /** * talitos_request - descriptor submission request * @desc: descriptor pointer (kernel virtual) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 13884474d1588f7a086154d3b06dbd09ff28e881..69842145c223fd73f8228f37ab668774ac55c112 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1069,6 +1069,7 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused) fence->ops->get_driver_name(fence), fence->ops->get_timeline_name(fence), dma_fence_is_signaled(fence) ? 
"" : "un"); + dma_fence_put(fence); } rcu_read_unlock(); diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c index 6c95f61a32e73d54ed70f461e676826075419f45..49ab09468ba15c12ec80e3e34b1656e450f1fa70 100644 --- a/drivers/dma-buf/reservation.c +++ b/drivers/dma-buf/reservation.c @@ -416,6 +416,10 @@ int reservation_object_get_fences_rcu(struct reservation_object *obj, GFP_NOWAIT | __GFP_NOWARN); if (!nshared) { rcu_read_unlock(); + + dma_fence_put(fence_excl); + fence_excl = NULL; + nshared = krealloc(shared, sz, GFP_KERNEL); if (nshared) { shared = nshared; diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c index 1c658ec3cbf42d5e515067d48a2bb943948d8158..3f5a01cb4ab45b05c37aaac65512be88587615db 100644 --- a/drivers/dma/imx-sdma.c +++ b/drivers/dma/imx-sdma.c @@ -2039,27 +2039,6 @@ static int sdma_probe(struct platform_device *pdev) if (pdata && pdata->script_addrs) sdma_add_scripts(sdma, pdata->script_addrs); - if (pdata) { - ret = sdma_get_firmware(sdma, pdata->fw_name); - if (ret) - dev_warn(&pdev->dev, "failed to get firmware from platform data\n"); - } else { - /* - * Because that device tree does not encode ROM script address, - * the RAM script in firmware is mandatory for device tree - * probe, otherwise it fails. - */ - ret = of_property_read_string(np, "fsl,sdma-ram-script-name", - &fw_name); - if (ret) - dev_warn(&pdev->dev, "failed to get firmware name\n"); - else { - ret = sdma_get_firmware(sdma, fw_name); - if (ret) - dev_warn(&pdev->dev, "failed to get firmware from device tree\n"); - } - } - sdma->dma_device.dev = &pdev->dev; sdma->dma_device.device_alloc_chan_resources = sdma_alloc_chan_resources; @@ -2103,6 +2082,33 @@ static int sdma_probe(struct platform_device *pdev) of_node_put(spba_bus); } + /* + * Kick off firmware loading as the very last step: + * attempt to load firmware only if we're not on the error path, because + * the firmware callback requires a fully functional and allocated sdma + * instance. + */ + if (pdata) { + ret = sdma_get_firmware(sdma, pdata->fw_name); + if (ret) + dev_warn(&pdev->dev, "failed to get firmware from platform data\n"); + } else { + /* + * Because that device tree does not encode ROM script address, + * the RAM script in firmware is mandatory for device tree + * probe, otherwise it fails. + */ + ret = of_property_read_string(np, "fsl,sdma-ram-script-name", + &fw_name); + if (ret) { + dev_warn(&pdev->dev, "failed to get firmware name\n"); + } else { + ret = sdma_get_firmware(sdma, fw_name); + if (ret) + dev_warn(&pdev->dev, "failed to get firmware from device tree\n"); + } + } + return 0; err_register: diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c index 0b05a1e08d213a649dc45219e1a3d28df4a85385..041ce864097e49804f6b9f8dcdd4cd3cf344cb6a 100644 --- a/drivers/dma/sh/rcar-dmac.c +++ b/drivers/dma/sh/rcar-dmac.c @@ -1164,7 +1164,7 @@ rcar_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan); /* Someone calling slave DMA on a generic channel? 
*/ - if (rchan->mid_rid < 0 || !sg_len) { + if (rchan->mid_rid < 0 || !sg_len || !sg_dma_len(sgl)) { dev_warn(chan->device->dev, "%s: bad parameter: len=%d, id=%d\n", __func__, sg_len, rchan->mid_rid); diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c index f4edfc56f34ef65dc34e50e38fd3c9aa258364fd..3d55405c49cacc409c937a0048f31d0de18edd64 100644 --- a/drivers/dma/ste_dma40.c +++ b/drivers/dma/ste_dma40.c @@ -142,7 +142,7 @@ enum d40_events { * when the DMA hw is powered off. * TODO: Add save/restore of D40_DREG_GCC on dma40 v3 or later, if that works. */ -static u32 d40_backup_regs[] = { +static __maybe_unused u32 d40_backup_regs[] = { D40_DREG_LCPA, D40_DREG_LCLA, D40_DREG_PRMSE, @@ -211,7 +211,7 @@ static u32 d40_backup_regs_v4b[] = { #define BACKUP_REGS_SZ_V4B ARRAY_SIZE(d40_backup_regs_v4b) -static u32 d40_backup_regs_chan[] = { +static __maybe_unused u32 d40_backup_regs_chan[] = { D40_CHAN_REG_SSCFG, D40_CHAN_REG_SSELT, D40_CHAN_REG_SSPTR, diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c index 06dd1725375e514710c245305069d61586ac2961..8c3c3e5b812a85d66c1b6b04797cdd2bf5ff6e64 100644 --- a/drivers/dma/stm32-mdma.c +++ b/drivers/dma/stm32-mdma.c @@ -1376,7 +1376,7 @@ static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid) chan = &dmadev->chan[id]; if (!chan) { - dev_err(chan2dev(chan), "MDMA channel not initialized\n"); + dev_dbg(mdma2dev(dmadev), "MDMA channel not initialized\n"); goto exit; } diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c index 8219ab88a507cddbca62a89f2465dd82eea1ee69..fb23993430d31490b66da870954187f0b80e05f7 100644 --- a/drivers/dma/tegra20-apb-dma.c +++ b/drivers/dma/tegra20-apb-dma.c @@ -981,8 +981,12 @@ static struct dma_async_tx_descriptor *tegra_dma_prep_slave_sg( csr |= tdc->slave_id << TEGRA_APBDMA_CSR_REQ_SEL_SHIFT; } - if (flags & DMA_PREP_INTERRUPT) + if (flags & DMA_PREP_INTERRUPT) { csr |= TEGRA_APBDMA_CSR_IE_EOC; + } else { + WARN_ON_ONCE(1); + return NULL; + } apb_seq |= TEGRA_APBDMA_APBSEQ_WRAP_WORD_1; @@ -1124,8 +1128,12 @@ static struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic( csr |= tdc->slave_id << TEGRA_APBDMA_CSR_REQ_SEL_SHIFT; } - if (flags & DMA_PREP_INTERRUPT) + if (flags & DMA_PREP_INTERRUPT) { csr |= TEGRA_APBDMA_CSR_IE_EOC; + } else { + WARN_ON_ONCE(1); + return NULL; + } apb_seq |= TEGRA_APBDMA_APBSEQ_WRAP_WORD_1; diff --git a/drivers/dma/ti/omap-dma.c b/drivers/dma/ti/omap-dma.c index a4a931ddf6f695fa21a25a359a94ee0c57f92beb..aeb9c29e52554d4f715de00ab51dff46c038eed9 100644 --- a/drivers/dma/ti/omap-dma.c +++ b/drivers/dma/ti/omap-dma.c @@ -1237,7 +1237,7 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_interleaved( if (src_icg) { d->ccr |= CCR_SRC_AMODE_DBLIDX; d->ei = 1; - d->fi = src_icg; + d->fi = src_icg + 1; } else if (xt->src_inc) { d->ccr |= CCR_SRC_AMODE_POSTINC; d->fi = 0; @@ -1252,7 +1252,7 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_interleaved( if (dst_icg) { d->ccr |= CCR_DST_AMODE_DBLIDX; sg->ei = 1; - sg->fi = dst_icg; + sg->fi = dst_icg + 1; } else if (xt->dst_inc) { d->ccr |= CCR_DST_AMODE_POSTINC; sg->fi = 0; diff --git a/drivers/edac/edac_mc_sysfs.c b/drivers/edac/edac_mc_sysfs.c index 20374b8248f087e343738bee152a8b34e746d6d6..d4545a9222a07c6684938521d8395385b00ac2ed 100644 --- a/drivers/edac/edac_mc_sysfs.c +++ b/drivers/edac/edac_mc_sysfs.c @@ -26,7 +26,7 @@ static int edac_mc_log_ue = 1; static int edac_mc_log_ce = 1; static int edac_mc_panic_on_ue; -static int edac_mc_poll_msec = 1000; +static unsigned int 
edac_mc_poll_msec = 1000; /* Getter functions for above */ int edac_mc_get_log_ue(void) @@ -45,30 +45,30 @@ int edac_mc_get_panic_on_ue(void) } /* this is temporary */ -int edac_mc_get_poll_msec(void) +unsigned int edac_mc_get_poll_msec(void) { return edac_mc_poll_msec; } static int edac_set_poll_msec(const char *val, const struct kernel_param *kp) { - unsigned long l; + unsigned int i; int ret; if (!val) return -EINVAL; - ret = kstrtoul(val, 0, &l); + ret = kstrtouint(val, 0, &i); if (ret) return ret; - if (l < 1000) + if (i < 1000) return -EINVAL; - *((unsigned long *)kp->arg) = l; + *((unsigned int *)kp->arg) = i; /* notify edac_mc engine to reset the poll period */ - edac_mc_reset_delay_period(l); + edac_mc_reset_delay_period(i); return 0; } @@ -82,7 +82,7 @@ MODULE_PARM_DESC(edac_mc_log_ue, module_param(edac_mc_log_ce, int, 0644); MODULE_PARM_DESC(edac_mc_log_ce, "Log correctable error to console: 0=off 1=on"); -module_param_call(edac_mc_poll_msec, edac_set_poll_msec, param_get_int, +module_param_call(edac_mc_poll_msec, edac_set_poll_msec, param_get_uint, &edac_mc_poll_msec, 0644); MODULE_PARM_DESC(edac_mc_poll_msec, "Polling period in milliseconds"); @@ -404,6 +404,8 @@ static inline int nr_pages_per_csrow(struct csrow_info *csrow) static int edac_create_csrow_object(struct mem_ctl_info *mci, struct csrow_info *csrow, int index) { + int err; + csrow->dev.type = &csrow_attr_type; csrow->dev.bus = mci->bus; csrow->dev.groups = csrow_dev_groups; @@ -416,7 +418,11 @@ static int edac_create_csrow_object(struct mem_ctl_info *mci, edac_dbg(0, "creating (virtual) csrow node %s\n", dev_name(&csrow->dev)); - return device_add(&csrow->dev); + err = device_add(&csrow->dev); + if (err) + put_device(&csrow->dev); + + return err; } /* Create a CSROW object under specifed edac_mc_device */ diff --git a/drivers/edac/edac_module.h b/drivers/edac/edac_module.h index dec88dcea036f5b7672409fb2f2cfd78bc863609..c9f0e73872a6445b66e7669937fe5e014f8cff4a 100644 --- a/drivers/edac/edac_module.h +++ b/drivers/edac/edac_module.h @@ -36,7 +36,7 @@ extern int edac_mc_get_log_ue(void); extern int edac_mc_get_log_ce(void); extern int edac_mc_get_panic_on_ue(void); extern int edac_get_poll_msec(void); -extern int edac_mc_get_poll_msec(void); +extern unsigned int edac_mc_get_poll_msec(void); unsigned edac_dimm_info_location(struct dimm_info *dimm, char *buf, unsigned len); diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig index 6e83880046d787d978cddaa1fcf2c5b930843186..ed212c8b4108370152fda28275843e65fc4d50c9 100644 --- a/drivers/firmware/Kconfig +++ b/drivers/firmware/Kconfig @@ -198,7 +198,7 @@ config DMI_SCAN_MACHINE_NON_EFI_FALLBACK config ISCSI_IBFT_FIND bool "iSCSI Boot Firmware Table Attributes" - depends on X86 && ACPI + depends on X86 && ISCSI_IBFT default n help This option enables the kernel to find the region of memory @@ -209,7 +209,8 @@ config ISCSI_IBFT_FIND config ISCSI_IBFT tristate "iSCSI Boot Firmware Table Attributes module" select ISCSI_BOOT_SYSFS - depends on ISCSI_IBFT_FIND && SCSI && SCSI_LOWLEVEL + select ISCSI_IBFT_FIND if X86 + depends on ACPI && SCSI && SCSI_LOWLEVEL default n help This option enables support for detection and exposing of iSCSI diff --git a/drivers/firmware/efi/efi-bgrt.c b/drivers/firmware/efi/efi-bgrt.c index b22ccfb0c991bde8c6d222d0a44ce527e304c7ae..2bf4d31f4967566e2cb5e72853ddad0876e3aad1 100644 --- a/drivers/firmware/efi/efi-bgrt.c +++ b/drivers/firmware/efi/efi-bgrt.c @@ -50,11 +50,6 @@ void __init efi_bgrt_init(struct acpi_table_header *table) 
bgrt->version); goto out; } - if (bgrt->status & 0xfe) { - pr_notice("Ignoring BGRT: reserved status bits are non-zero %u\n", - bgrt->status); - goto out; - } if (bgrt->image_type != 0) { pr_notice("Ignoring BGRT: invalid image type %u (expected 0)\n", bgrt->image_type); diff --git a/drivers/firmware/iscsi_ibft.c b/drivers/firmware/iscsi_ibft.c index c51462f5aa1e4f52d01cb7c88db42bdc006f7767..966aef334c420f3015e475c349dae84adb151f41 100644 --- a/drivers/firmware/iscsi_ibft.c +++ b/drivers/firmware/iscsi_ibft.c @@ -93,6 +93,10 @@ MODULE_DESCRIPTION("sysfs interface to BIOS iBFT information"); MODULE_LICENSE("GPL"); MODULE_VERSION(IBFT_ISCSI_VERSION); +#ifndef CONFIG_ISCSI_IBFT_FIND +struct acpi_table_ibft *ibft_addr; +#endif + struct ibft_hdr { u8 id; u8 version; diff --git a/drivers/firmware/psci_checker.c b/drivers/firmware/psci_checker.c index 3469436579622b957fcf805aff1edb0da41adc1c..cbd53cb1b2d4783bbacca41f4ada309f24af642f 100644 --- a/drivers/firmware/psci_checker.c +++ b/drivers/firmware/psci_checker.c @@ -366,16 +366,16 @@ static int suspend_test_thread(void *arg) for (;;) { /* Needs to be set first to avoid missing a wakeup. */ set_current_state(TASK_INTERRUPTIBLE); - if (kthread_should_stop()) { - __set_current_state(TASK_RUNNING); + if (kthread_should_park()) break; - } schedule(); } pr_info("CPU %d suspend test results: success %d, shallow states %d, errors %d\n", cpu, nb_suspend, nb_shallow_sleep, nb_err); + kthread_parkme(); + return nb_err; } @@ -440,8 +440,10 @@ static int suspend_tests(void) /* Stop and destroy all threads, get return status. */ - for (i = 0; i < nb_threads; ++i) + for (i = 0; i < nb_threads; ++i) { + err += kthread_park(threads[i]); err += kthread_stop(threads[i]); + } out: cpuidle_resume_and_unlock(); kfree(threads); diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig index 1ebcef4bab5b8f52b4676ab63d732b42ffd8d349..87337fcfbc0d2bd862127bea2208b8afa143ee81 100644 --- a/drivers/fpga/Kconfig +++ b/drivers/fpga/Kconfig @@ -39,6 +39,7 @@ config ALTERA_PR_IP_CORE_PLAT config FPGA_MGR_ALTERA_PS_SPI tristate "Altera FPGA Passive Serial over SPI" depends on SPI + select BITREVERSE help FPGA manager driver support for Altera Arria/Cyclone/Stratix using the passive serial interface over SPI. 
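The psci_checker hunk above switches the suspend test threads from a plain kthread_should_stop() loop to the kthread park/unpark protocol, so the main thread can quiesce each worker with kthread_park() before collecting its return code with kthread_stop(). The sketch below is only an illustration of that protocol under hypothetical names (demo_worker, demo_run); it is not the driver's actual code.

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

/* Hypothetical worker: sleeps until asked to park, then reports a result. */
static int demo_worker(void *arg)
{
	int errors = 0;

	for (;;) {
		/* Set the state first so a park request cannot be missed. */
		set_current_state(TASK_INTERRUPTIBLE);
		if (kthread_should_park())
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);

	/* Block here until the thread is unparked by kthread_stop(). */
	kthread_parkme();

	return errors;
}

/* Hypothetical controller: park first, then stop to fetch the return value. */
static int demo_run(void)
{
	struct task_struct *t;
	int err;

	t = kthread_run(demo_worker, NULL, "park_demo");
	if (IS_ERR(t))
		return PTR_ERR(t);

	err = kthread_park(t);   /* returns once demo_worker() has parked */
	err += kthread_stop(t);  /* unparks the thread and returns its exit code */

	return err;
}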
diff --git a/drivers/fsi/fsi-scom.c b/drivers/fsi/fsi-scom.c index df94021dd9d12bc32b18873076151d3fccbae5c7..fdc0e458dbaaf9c2b59488e6b1a4d7102926657a 100644 --- a/drivers/fsi/fsi-scom.c +++ b/drivers/fsi/fsi-scom.c @@ -47,8 +47,7 @@ #define SCOM_STATUS_PIB_RESP_MASK 0x00007000 #define SCOM_STATUS_PIB_RESP_SHIFT 12 -#define SCOM_STATUS_ANY_ERR (SCOM_STATUS_ERR_SUMMARY | \ - SCOM_STATUS_PROTECTION | \ +#define SCOM_STATUS_ANY_ERR (SCOM_STATUS_PROTECTION | \ SCOM_STATUS_PARITY | \ SCOM_STATUS_PIB_ABORT | \ SCOM_STATUS_PIB_RESP_MASK) @@ -260,11 +259,6 @@ static int handle_fsi2pib_status(struct scom_device *scom, uint32_t status) /* Return -EBUSY on PIB abort to force a retry */ if (status & SCOM_STATUS_PIB_ABORT) return -EBUSY; - if (status & SCOM_STATUS_ERR_SUMMARY) { - fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy, - sizeof(uint32_t)); - return -EIO; - } return 0; } diff --git a/drivers/gpio/gpio-davinci.c b/drivers/gpio/gpio-davinci.c index a5ece8ea79bc83837838760dd449e238e9570f3a..abb332d15a1317a4e962b9152e4f7caf6356ef44 100644 --- a/drivers/gpio/gpio-davinci.c +++ b/drivers/gpio/gpio-davinci.c @@ -222,8 +222,9 @@ static int davinci_gpio_probe(struct platform_device *pdev) for (i = 0; i < nirq; i++) { chips->irqs[i] = platform_get_irq(pdev, i); if (chips->irqs[i] < 0) { - dev_info(dev, "IRQ not populated, err = %d\n", - chips->irqs[i]); + if (chips->irqs[i] != -EPROBE_DEFER) + dev_info(dev, "IRQ not populated, err = %d\n", + chips->irqs[i]); return chips->irqs[i]; } } diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c index 6fa430d98517cc2fbf3dd6711b5c00bd23e686fd..feabac40743ee04acdcc8f62b68ccd068f1e4a96 100644 --- a/drivers/gpio/gpio-omap.c +++ b/drivers/gpio/gpio-omap.c @@ -837,9 +837,9 @@ static void omap_gpio_irq_shutdown(struct irq_data *d) raw_spin_lock_irqsave(&bank->lock, flags); bank->irq_usage &= ~(BIT(offset)); - omap_set_gpio_irqenable(bank, offset, 0); - omap_clear_gpio_irqstatus(bank, offset); omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE); + omap_clear_gpio_irqstatus(bank, offset); + omap_set_gpio_irqenable(bank, offset, 0); if (!LINE_USED(bank->mod_usage, offset)) omap_clear_gpio_debounce(bank, offset); omap_disable_gpio_module(bank, offset); @@ -881,8 +881,8 @@ static void omap_gpio_mask_irq(struct irq_data *d) unsigned long flags; raw_spin_lock_irqsave(&bank->lock, flags); - omap_set_gpio_irqenable(bank, offset, 0); omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE); + omap_set_gpio_irqenable(bank, offset, 0); raw_spin_unlock_irqrestore(&bank->lock, flags); } @@ -894,9 +894,6 @@ static void omap_gpio_unmask_irq(struct irq_data *d) unsigned long flags; raw_spin_lock_irqsave(&bank->lock, flags); - if (trigger) - omap_set_gpio_triggering(bank, offset, trigger); - omap_set_gpio_irqenable(bank, offset, 1); /* @@ -904,9 +901,13 @@ static void omap_gpio_unmask_irq(struct irq_data *d) * is cleared, thus after the handler has run. OMAP4 needs this done * after enabing the interrupt to clear the wakeup status. 
*/ - if (bank->level_mask & BIT(offset)) + if (bank->regs->leveldetect0 && bank->regs->wkup_en && + trigger & (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW)) omap_clear_gpio_irqstatus(bank, offset); + if (trigger) + omap_set_gpio_triggering(bank, offset, trigger); + raw_spin_unlock_irqrestore(&bank->lock, flags); } @@ -1687,6 +1688,8 @@ static struct omap_gpio_reg_offs omap4_gpio_regs = { .clr_dataout = OMAP4_GPIO_CLEARDATAOUT, .irqstatus = OMAP4_GPIO_IRQSTATUS0, .irqstatus2 = OMAP4_GPIO_IRQSTATUS1, + .irqstatus_raw0 = OMAP4_GPIO_IRQSTATUSRAW0, + .irqstatus_raw1 = OMAP4_GPIO_IRQSTATUSRAW1, .irqenable = OMAP4_GPIO_IRQSTATUSSET0, .irqenable2 = OMAP4_GPIO_IRQSTATUSSET1, .set_irqenable = OMAP4_GPIO_IRQSTATUSSET0, diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c index f54b2b7a82c6f53434fd0dd6b728706a6baf4dd7..9bd29c27c07f1808148421c62a7d769364303dc5 100644 --- a/drivers/gpio/gpiolib.c +++ b/drivers/gpio/gpiolib.c @@ -948,9 +948,11 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip) } if (eflags & GPIOEVENT_REQUEST_RISING_EDGE) - irqflags |= IRQF_TRIGGER_RISING; + irqflags |= test_bit(FLAG_ACTIVE_LOW, &desc->flags) ? + IRQF_TRIGGER_FALLING : IRQF_TRIGGER_RISING; if (eflags & GPIOEVENT_REQUEST_FALLING_EDGE) - irqflags |= IRQF_TRIGGER_FALLING; + irqflags |= test_bit(FLAG_ACTIVE_LOW, &desc->flags) ? + IRQF_TRIGGER_RISING : IRQF_TRIGGER_FALLING; irqflags |= IRQF_ONESHOT; irqflags |= IRQF_SHARED; @@ -1082,9 +1084,11 @@ static long gpio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) if (test_bit(FLAG_ACTIVE_LOW, &desc->flags)) lineinfo.flags |= GPIOLINE_FLAG_ACTIVE_LOW; if (test_bit(FLAG_OPEN_DRAIN, &desc->flags)) - lineinfo.flags |= GPIOLINE_FLAG_OPEN_DRAIN; + lineinfo.flags |= (GPIOLINE_FLAG_OPEN_DRAIN | + GPIOLINE_FLAG_IS_OUT); if (test_bit(FLAG_OPEN_SOURCE, &desc->flags)) - lineinfo.flags |= GPIOLINE_FLAG_OPEN_SOURCE; + lineinfo.flags |= (GPIOLINE_FLAG_OPEN_SOURCE | + GPIOLINE_FLAG_IS_OUT); if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) return -EFAULT; @@ -2879,7 +2883,7 @@ int gpiod_get_array_value_complex(bool raw, bool can_sleep, int gpiod_get_raw_value(const struct gpio_desc *desc) { VALIDATE_DESC(desc); - /* Should be using gpio_get_value_cansleep() */ + /* Should be using gpiod_get_raw_value_cansleep() */ WARN_ON(desc->gdev->chip->can_sleep); return gpiod_get_raw_value_commit(desc); } @@ -2900,7 +2904,7 @@ int gpiod_get_value(const struct gpio_desc *desc) int value; VALIDATE_DESC(desc); - /* Should be using gpio_get_value_cansleep() */ + /* Should be using gpiod_get_value_cansleep() */ WARN_ON(desc->gdev->chip->can_sleep); value = gpiod_get_raw_value_commit(desc); @@ -3125,7 +3129,7 @@ int gpiod_set_array_value_complex(bool raw, bool can_sleep, void gpiod_set_raw_value(struct gpio_desc *desc, int value) { VALIDATE_DESC_VOID(desc); - /* Should be using gpiod_set_value_cansleep() */ + /* Should be using gpiod_set_raw_value_cansleep() */ WARN_ON(desc->gdev->chip->can_sleep); gpiod_set_raw_value_commit(desc, value); } @@ -3166,6 +3170,7 @@ static void gpiod_set_value_nocheck(struct gpio_desc *desc, int value) void gpiod_set_value(struct gpio_desc *desc, int value) { VALIDATE_DESC_VOID(desc); + /* Should be using gpiod_set_value_cansleep() */ WARN_ON(desc->gdev->chip->can_sleep); gpiod_set_value_nocheck(desc, value); } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c index 92b11de1958132c28e4ffd68e1fd782a8e2e5771..354c8b6106dc273b52f1d9e898d060038e4d6214 100644 --- 
a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c @@ -575,6 +575,7 @@ static const struct amdgpu_px_quirk amdgpu_px_quirk_list[] = { { 0x1002, 0x6900, 0x1002, 0x0124, AMDGPU_PX_QUIRK_FORCE_ATPX }, { 0x1002, 0x6900, 0x1028, 0x0812, AMDGPU_PX_QUIRK_FORCE_ATPX }, { 0x1002, 0x6900, 0x1028, 0x0813, AMDGPU_PX_QUIRK_FORCE_ATPX }, + { 0x1002, 0x699f, 0x1028, 0x0814, AMDGPU_PX_QUIRK_FORCE_ATPX }, { 0x1002, 0x6900, 0x1025, 0x125A, AMDGPU_PX_QUIRK_FORCE_ATPX }, { 0x1002, 0x6900, 0x17AA, 0x3806, AMDGPU_PX_QUIRK_FORCE_ATPX }, { 0, 0, 0, 0, 0 }, diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c index f5fb93795a69a8955cd0327db6e6eddac37a7c1e..65cecfdd9b454f344416d14115ef85574579f98d 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c @@ -707,7 +707,7 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf, thread = (*pos & GENMASK_ULL(59, 52)) >> 52; bank = (*pos & GENMASK_ULL(61, 60)) >> 60; - data = kmalloc_array(1024, sizeof(*data), GFP_KERNEL); + data = kcalloc(1024, sizeof(*data), GFP_KERNEL); if (!data) return -ENOMEM; diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c index 72f8018fa2a836572b9c898785bb99deecc1ca91..ede27dab675facf137f2a80b2b738b485c864874 100644 --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c @@ -1037,6 +1037,9 @@ static int gmc_v9_0_gart_enable(struct amdgpu_device *adev) tmp = RREG32_SOC15(HDP, 0, mmHDP_HOST_PATH_CNTL); WREG32_SOC15(HDP, 0, mmHDP_HOST_PATH_CNTL, tmp); + WREG32_SOC15(HDP, 0, mmHDP_NONSURFACE_BASE, (adev->gmc.vram_start >> 8)); + WREG32_SOC15(HDP, 0, mmHDP_NONSURFACE_BASE_HI, (adev->gmc.vram_start >> 40)); + /* After HDP is initialized, flush HDP.*/ adev->nbio_funcs->hdp_flush(adev, NULL); diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c index 4f22e745df51b4c2aad4ec842f04ac96b4070f68..189212cb3547585b4c5f6f6e64309953a4982a5e 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c @@ -1268,12 +1268,17 @@ int amdkfd_fence_wait_timeout(unsigned int *fence_addr, return 0; } -static int unmap_sdma_queues(struct device_queue_manager *dqm, - unsigned int sdma_engine) +static int unmap_sdma_queues(struct device_queue_manager *dqm) { - return pm_send_unmap_queue(&dqm->packets, KFD_QUEUE_TYPE_SDMA, - KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0, false, - sdma_engine); + int i, retval = 0; + + for (i = 0; i < dqm->dev->device_info->num_sdma_engines; i++) { + retval = pm_send_unmap_queue(&dqm->packets, KFD_QUEUE_TYPE_SDMA, + KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0, false, i); + if (retval) + return retval; + } + return retval; } /* dqm->lock mutex has to be locked before calling this function */ @@ -1312,10 +1317,8 @@ static int unmap_queues_cpsch(struct device_queue_manager *dqm, pr_debug("Before destroying queues, sdma queue count is : %u\n", dqm->sdma_queue_count); - if (dqm->sdma_queue_count > 0) { - unmap_sdma_queues(dqm, 0); - unmap_sdma_queues(dqm, 1); - } + if (dqm->sdma_queue_count > 0) + unmap_sdma_queues(dqm); retval = pm_send_unmap_queue(&dqm->packets, KFD_QUEUE_TYPE_COMPUTE, filter, filter_param, false, 0); diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c index 
0cedb37cf513563dc6fea50e6b40ef0889c3bb61..985bebde5a343a04e473711e975be7726176acdb 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c @@ -75,6 +75,7 @@ static int init_mqd(struct mqd_manager *mm, void **mqd, struct v9_mqd *m; struct kfd_dev *kfd = mm->dev; + *mqd_mem_obj = NULL; /* From V9, for CWSR, the control stack is located on the next page * boundary after the mqd, we will use the gtt allocation function * instead of sub-allocation function. @@ -92,8 +93,10 @@ static int init_mqd(struct mqd_manager *mm, void **mqd, } else retval = kfd_gtt_sa_allocate(mm->dev, sizeof(struct v9_mqd), mqd_mem_obj); - if (retval != 0) + if (retval) { + kfree(*mqd_mem_obj); return -ENOMEM; + } m = (struct v9_mqd *) (*mqd_mem_obj)->cpu_ptr; addr = (*mqd_mem_obj)->gpu_addr; diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index dac7978f5ee1f91b32b72464189daaf52147c9ff..221de241535a53d43a1db695ba492dedb2fe5d93 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -3644,6 +3644,13 @@ void amdgpu_dm_connector_init_helper(struct amdgpu_display_manager *dm, { struct amdgpu_device *adev = dm->ddev->dev_private; + /* + * Some of the properties below require access to state, like bpc. + * Allocate some default initial connector state with our reset helper. + */ + if (aconnector->base.funcs->reset) + aconnector->base.funcs->reset(&aconnector->base); + aconnector->connector_id = link_index; aconnector->dc_link = link; aconnector->base.interlace_allowed = false; @@ -3811,9 +3818,6 @@ static int amdgpu_dm_connector_init(struct amdgpu_display_manager *dm, &aconnector->base, &amdgpu_dm_connector_helper_funcs); - if (aconnector->base.funcs->reset) - aconnector->base.funcs->reset(&aconnector->base); - amdgpu_dm_connector_init_helper( dm, aconnector, diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c index e3f5e5d6f0c18ece19bf88a3a0ee30fdd041318a..f4b89d1ea6f6f76a03c6628f63e9e34c2d7b6aad 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c @@ -462,8 +462,10 @@ void dc_link_set_test_pattern(struct dc_link *link, static void destruct(struct dc *dc) { - dc_release_state(dc->current_state); - dc->current_state = NULL; + if (dc->current_state) { + dc_release_state(dc->current_state); + dc->current_state = NULL; + } destroy_links(dc); diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c index e0a96abb3c46c7196c4c09770a038ac864c10849..f0d68aa7c8fccb3dfb75a21632acc832a46e2458 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c @@ -222,7 +222,7 @@ bool resource_construct( * PORT_CONNECTIVITY == 1 (as instructed by HW team). 
*/ update_num_audio(&straps, &num_audio, &pool->audio_support); - for (i = 0; i < pool->pipe_count && i < num_audio; i++) { + for (i = 0; i < caps->num_audio; i++) { struct audio *aud = create_funcs->create_audio(ctx, i); if (aud == NULL) { @@ -1713,6 +1713,12 @@ static struct audio *find_first_free_audio( return pool->audios[i]; } } + + /* use engine id to find free audio */ + if ((id < pool->audio_count) && (res_ctx->is_audio_acquired[id] == false)) { + return pool->audios[id]; + } + /*not found the matching one, first come first serve*/ for (i = 0; i < pool->audio_count; i++) { if (res_ctx->is_audio_acquired[i] == false) { @@ -1866,6 +1872,7 @@ static int get_norm_pix_clk(const struct dc_crtc_timing *timing) pix_clk /= 2; if (timing->pixel_encoding != PIXEL_ENCODING_YCBCR422) { switch (timing->display_color_depth) { + case COLOR_DEPTH_666: case COLOR_DEPTH_888: normalized_pix_clk = pix_clk; break; @@ -1949,7 +1956,7 @@ enum dc_status resource_map_pool_resources( /* TODO: Add check if ASIC support and EDID audio */ if (!stream->sink->converter_disable_audio && dc_is_audio_capable_signal(pipe_ctx->stream->signal) && - stream->audio_info.mode_count) { + stream->audio_info.mode_count && stream->audio_info.flags.all) { pipe_ctx->stream_res.audio = find_first_free_audio( &context->res_ctx, pool, pipe_ctx->stream_res.stream_enc->id); diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c index 29294db1a96b775013635d080725e392cda4766b..da8b198538e5fdea8ba6dadbab666589ab28c1da 100644 --- a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c +++ b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c @@ -242,6 +242,10 @@ static void dmcu_set_backlight_level( s2 |= (level << ATOM_S2_CURRENT_BL_LEVEL_SHIFT); REG_WRITE(BIOS_SCRATCH_2, s2); + + /* waitDMCUReadyForCmd */ + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, + 0, 1, 80000); } static void dce_abm_init(struct abm *abm) @@ -474,6 +478,8 @@ void dce_abm_destroy(struct abm **abm) { struct dce_abm *abm_dce = TO_DCE_ABM(*abm); + abm_dce->base.funcs->set_abm_immediate_disable(*abm); + kfree(abm_dce); *abm = NULL; } diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c index 53ccacf99eca416d129045add530367d41634685..c3ad2bbec1a5278beb92a46c5c84204455f8ee01 100644 --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c @@ -242,6 +242,9 @@ static void build_prescale_params(struct ipp_prescale_params *prescale_params, prescale_params->mode = IPP_PRESCALE_MODE_FIXED_UNSIGNED; switch (plane_state->format) { + case SURFACE_PIXEL_FORMAT_GRPH_RGB565: + prescale_params->scale = 0x2082; + break; case SURFACE_PIXEL_FORMAT_GRPH_ARGB8888: case SURFACE_PIXEL_FORMAT_GRPH_ABGR8888: prescale_params->scale = 0x2020; diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c index 7736ef123e9be097836e217e9e9b6d55747e19b6..ead221ccb93e0aa78f84c5152a1716c13ca85983 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c @@ -23,6 +23,7 @@ * */ +#include #include "dm_services.h" #include "core_types.h" #include "resource.h" diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_types.h b/drivers/gpu/drm/amd/display/dc/inc/core_types.h index c0b9ca13393b61a502902b4950dc43e462b8c718..f4469fa5afb553e9828c5c25e412926bb6c9508f 100644 --- 
a/drivers/gpu/drm/amd/display/dc/inc/core_types.h +++ b/drivers/gpu/drm/amd/display/dc/inc/core_types.h @@ -159,7 +159,7 @@ struct resource_pool { struct clock_source *clock_sources[MAX_CLOCK_SOURCES]; unsigned int clk_src_count; - struct audio *audios[MAX_PIPES]; + struct audio *audios[MAX_AUDIOS]; unsigned int audio_count; struct audio_support audio_support; diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/hw_shared.h b/drivers/gpu/drm/amd/display/dc/inc/hw/hw_shared.h index cf7433ebf91a07557b1f4328980ba4bdcf15ab07..71901743a9387b705e5e54b465f2ffbea1afa333 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw/hw_shared.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/hw_shared.h @@ -34,6 +34,7 @@ * Data types shared between different Virtual HW blocks ******************************************************************************/ +#define MAX_AUDIOS 7 #define MAX_PIPES 6 struct gamma_curve { diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c index 373700c05a00f9f3890d48fee79317edf77cedd8..224fa1ef87ff92125c5ada667f41c0defac43e86 100644 --- a/drivers/gpu/drm/ast/ast_main.c +++ b/drivers/gpu/drm/ast/ast_main.c @@ -131,8 +131,8 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post) /* Enable extended register access */ - ast_enable_mmio(dev); ast_open_key(ast); + ast_enable_mmio(dev); /* Find out whether P2A works or whether to use device-tree */ ast_detect_config_mode(dev, &scu_rev); @@ -576,6 +576,9 @@ void ast_driver_unload(struct drm_device *dev) { struct ast_private *ast = dev->dev_private; + /* enable standard VGA decode */ + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x04); + ast_release_firmware(dev); kfree(ast->dp501_fw_addr); ast_mode_fini(dev); diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c index 8bb355d5d43d80169fbfeb73954008b772616e15..9d92d2d2fcfc7c1ccb85fa6ffce1485fc874f979 100644 --- a/drivers/gpu/drm/ast/ast_mode.c +++ b/drivers/gpu/drm/ast/ast_mode.c @@ -600,7 +600,7 @@ static int ast_crtc_mode_set(struct drm_crtc *crtc, return -EINVAL; ast_open_key(ast); - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa1, 0xff, 0x04); + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x06); ast_set_std_reg(crtc, adjusted_mode, &vbios_mode); ast_set_crtc_reg(crtc, adjusted_mode, &vbios_mode); diff --git a/drivers/gpu/drm/ast/ast_post.c b/drivers/gpu/drm/ast/ast_post.c index f7d421359d564756ff86d78c64293e37eea74c7e..c1d1ac51d1c207c0cb0b2f08825aa19ca7761bde 100644 --- a/drivers/gpu/drm/ast/ast_post.c +++ b/drivers/gpu/drm/ast/ast_post.c @@ -46,7 +46,7 @@ void ast_enable_mmio(struct drm_device *dev) { struct ast_private *ast = dev->dev_private; - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa1, 0xff, 0x04); + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x06); } diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig index bf6cad6c9178b10eda1e639ac0d785631d1f6518..7a3e5a8f6439b547ea7f978921fef43bae557328 100644 --- a/drivers/gpu/drm/bridge/Kconfig +++ b/drivers/gpu/drm/bridge/Kconfig @@ -46,6 +46,7 @@ config DRM_DUMB_VGA_DAC config DRM_LVDS_ENCODER tristate "Transparent parallel to LVDS encoder support" depends on OF + select DRM_KMS_HELPER select DRM_PANEL_BRIDGE help Support for transparent parallel to LVDS encoders that don't require diff --git a/drivers/gpu/drm/bridge/sii902x.c b/drivers/gpu/drm/bridge/sii902x.c index e59a135423336bd187f0038956f06ac4574d94dc..0cc6dbbcddcf5630f2abf9871d7ce205790c91e9 100644 --- a/drivers/gpu/drm/bridge/sii902x.c +++ b/drivers/gpu/drm/bridge/sii902x.c @@ 
-261,10 +261,11 @@ static void sii902x_bridge_mode_set(struct drm_bridge *bridge, struct regmap *regmap = sii902x->regmap; u8 buf[HDMI_INFOFRAME_SIZE(AVI)]; struct hdmi_avi_infoframe frame; + u16 pixel_clock_10kHz = adj->clock / 10; int ret; - buf[0] = adj->clock; - buf[1] = adj->clock >> 8; + buf[0] = pixel_clock_10kHz & 0xff; + buf[1] = pixel_clock_10kHz >> 8; buf[2] = adj->vrefresh; buf[3] = 0x00; buf[4] = adj->hdisplay; diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c index 39154735875693d1bb152964d9634c194aaf5364..aaca5248da07017965af4ee2ca8198ece0bd1c5c 100644 --- a/drivers/gpu/drm/bridge/tc358767.c +++ b/drivers/gpu/drm/bridge/tc358767.c @@ -1149,6 +1149,13 @@ static int tc_connector_get_modes(struct drm_connector *connector) struct tc_data *tc = connector_to_tc(connector); struct edid *edid; unsigned int count; + int ret; + + ret = tc_get_display_props(tc); + if (ret < 0) { + dev_err(tc->dev, "failed to read display props: %d\n", ret); + return 0; + } if (tc->panel && tc->panel->funcs && tc->panel->funcs->get_modes) { count = tc->panel->funcs->get_modes(tc->panel); diff --git a/drivers/gpu/drm/bridge/ti-tfp410.c b/drivers/gpu/drm/bridge/ti-tfp410.c index c3e32138c6bb08c5cdb6c75e7d6624984914e0c0..9dc109df0808c70d193377b8217ff37f9d93bd35 100644 --- a/drivers/gpu/drm/bridge/ti-tfp410.c +++ b/drivers/gpu/drm/bridge/ti-tfp410.c @@ -64,7 +64,12 @@ static int tfp410_get_modes(struct drm_connector *connector) drm_connector_update_edid_property(connector, edid); - return drm_add_edid_modes(connector, edid); + ret = drm_add_edid_modes(connector, edid); + + kfree(edid); + + return ret; + fallback: /* No EDID, fallback on the XGA standard modes */ ret = drm_add_modes_noedid(connector, 1920, 1200); diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c index c22062cc99923f8a04202de3ab879908a324be35..2f416b283ef8d710f622e673b913c653e4321122 100644 --- a/drivers/gpu/drm/drm_atomic_helper.c +++ b/drivers/gpu/drm/drm_atomic_helper.c @@ -3736,6 +3736,24 @@ void drm_atomic_helper_connector_reset(struct drm_connector *connector) } EXPORT_SYMBOL(drm_atomic_helper_connector_reset); +/** + * drm_atomic_helper_connector_tv_reset - Resets TV connector properties + * @connector: DRM connector + * + * Resets the TV-related properties attached to a connector. + */ +void drm_atomic_helper_connector_tv_reset(struct drm_connector *connector) +{ + struct drm_cmdline_mode *cmdline = &connector->cmdline_mode; + struct drm_connector_state *state = connector->state; + + state->tv.margins.left = cmdline->tv_margins.left; + state->tv.margins.right = cmdline->tv_margins.right; + state->tv.margins.top = cmdline->tv_margins.top; + state->tv.margins.bottom = cmdline->tv_margins.bottom; +} +EXPORT_SYMBOL(drm_atomic_helper_connector_tv_reset); + /** * __drm_atomic_helper_connector_duplicate_state - copy atomic connector state * @connector: connector object diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c index 6011d769d50bb51197bfd935f3e6fff660878a1b..ff0996f49b211d1f56a748f585c4d85780229c15 100644 --- a/drivers/gpu/drm/drm_connector.c +++ b/drivers/gpu/drm/drm_connector.c @@ -135,8 +135,9 @@ static void drm_connector_get_cmdline_mode(struct drm_connector *connector) connector->force = mode->force; } - DRM_DEBUG_KMS("cmdline mode for connector %s %dx%d@%dHz%s%s%s\n", + DRM_DEBUG_KMS("cmdline mode for connector %s %s %dx%d@%dHz%s%s%s\n", connector->name, + mode->name ? 
mode->name : "", mode->xres, mode->yres, mode->refresh_specified ? mode->refresh : 60, mode->rb ? " reduced blanking" : "", @@ -1110,7 +1111,71 @@ void drm_hdmi_avi_infoframe_content_type(struct hdmi_avi_infoframe *frame, EXPORT_SYMBOL(drm_hdmi_avi_infoframe_content_type); /** - * drm_create_tv_properties - create TV specific connector properties + * drm_mode_attach_tv_margin_properties - attach TV connector margin properties + * @connector: DRM connector + * + * Called by a driver when it needs to attach TV margin props to a connector. + * Typically used on SDTV and HDMI connectors. + */ +void drm_connector_attach_tv_margin_properties(struct drm_connector *connector) +{ + struct drm_device *dev = connector->dev; + + drm_object_attach_property(&connector->base, + dev->mode_config.tv_left_margin_property, + 0); + drm_object_attach_property(&connector->base, + dev->mode_config.tv_right_margin_property, + 0); + drm_object_attach_property(&connector->base, + dev->mode_config.tv_top_margin_property, + 0); + drm_object_attach_property(&connector->base, + dev->mode_config.tv_bottom_margin_property, + 0); +} +EXPORT_SYMBOL(drm_connector_attach_tv_margin_properties); + +/** + * drm_mode_create_tv_margin_properties - create TV connector margin properties + * @dev: DRM device + * + * Called by a driver's HDMI connector initialization routine, this function + * creates the TV margin properties for a given device. No need to call this + * function for an SDTV connector, it's already called from + * drm_mode_create_tv_properties(). + */ +int drm_mode_create_tv_margin_properties(struct drm_device *dev) +{ + if (dev->mode_config.tv_left_margin_property) + return 0; + + dev->mode_config.tv_left_margin_property = + drm_property_create_range(dev, 0, "left margin", 0, 100); + if (!dev->mode_config.tv_left_margin_property) + return -ENOMEM; + + dev->mode_config.tv_right_margin_property = + drm_property_create_range(dev, 0, "right margin", 0, 100); + if (!dev->mode_config.tv_right_margin_property) + return -ENOMEM; + + dev->mode_config.tv_top_margin_property = + drm_property_create_range(dev, 0, "top margin", 0, 100); + if (!dev->mode_config.tv_top_margin_property) + return -ENOMEM; + + dev->mode_config.tv_bottom_margin_property = + drm_property_create_range(dev, 0, "bottom margin", 0, 100); + if (!dev->mode_config.tv_bottom_margin_property) + return -ENOMEM; + + return 0; +} +EXPORT_SYMBOL(drm_mode_create_tv_margin_properties); + +/** + * drm_mode_create_tv_properties - create TV specific connector properties * @dev: DRM device * @num_modes: number of different TV formats (modes) supported * @modes: array of pointers to strings containing name of each format @@ -1155,24 +1220,7 @@ int drm_mode_create_tv_properties(struct drm_device *dev, /* * Other, TV specific properties: margins & TV modes. 
*/ - dev->mode_config.tv_left_margin_property = - drm_property_create_range(dev, 0, "left margin", 0, 100); - if (!dev->mode_config.tv_left_margin_property) - goto nomem; - - dev->mode_config.tv_right_margin_property = - drm_property_create_range(dev, 0, "right margin", 0, 100); - if (!dev->mode_config.tv_right_margin_property) - goto nomem; - - dev->mode_config.tv_top_margin_property = - drm_property_create_range(dev, 0, "top margin", 0, 100); - if (!dev->mode_config.tv_top_margin_property) - goto nomem; - - dev->mode_config.tv_bottom_margin_property = - drm_property_create_range(dev, 0, "bottom margin", 0, 100); - if (!dev->mode_config.tv_bottom_margin_property) + if (drm_mode_create_tv_margin_properties(dev)) goto nomem; dev->mode_config.tv_mode_property = diff --git a/drivers/gpu/drm/drm_debugfs_crc.c b/drivers/gpu/drm/drm_debugfs_crc.c index 99961192bf034f893cbac5521c996dc98aa49887..c88e5ff41add6a4898186672070b03293c3e9ffa 100644 --- a/drivers/gpu/drm/drm_debugfs_crc.c +++ b/drivers/gpu/drm/drm_debugfs_crc.c @@ -379,12 +379,13 @@ int drm_crtc_add_crc_entry(struct drm_crtc *crtc, bool has_frame, struct drm_crtc_crc *crc = &crtc->crc; struct drm_crtc_crc_entry *entry; int head, tail; + unsigned long flags; - spin_lock(&crc->lock); + spin_lock_irqsave(&crc->lock, flags); /* Caller may not have noticed yet that userspace has stopped reading */ if (!crc->entries) { - spin_unlock(&crc->lock); + spin_unlock_irqrestore(&crc->lock, flags); return -EINVAL; } @@ -395,7 +396,7 @@ int drm_crtc_add_crc_entry(struct drm_crtc *crtc, bool has_frame, bool was_overflow = crc->overflow; crc->overflow = true; - spin_unlock(&crc->lock); + spin_unlock_irqrestore(&crc->lock, flags); if (!was_overflow) DRM_ERROR("Overflow of CRC buffer, userspace reads too slow.\n"); @@ -411,7 +412,7 @@ int drm_crtc_add_crc_entry(struct drm_crtc *crtc, bool has_frame, head = (head + 1) & (DRM_CRC_ENTRIES_NR - 1); crc->head = head; - spin_unlock(&crc->lock); + spin_unlock_irqrestore(&crc->lock, flags); wake_up_interruptible(&crc->wq); diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c index 5965f6383ada343708ec64a37280d43dc719f008..e5e7e65934da96ee5fa78e5c9a8f63f8f49841de 100644 --- a/drivers/gpu/drm/drm_edid.c +++ b/drivers/gpu/drm/drm_edid.c @@ -1349,6 +1349,7 @@ MODULE_PARM_DESC(edid_fixup, static void drm_get_displayid(struct drm_connector *connector, struct edid *edid); +static int validate_displayid(u8 *displayid, int length, int idx); static int drm_edid_block_checksum(const u8 *raw_edid) { @@ -2932,16 +2933,46 @@ static u8 *drm_find_edid_extension(const struct edid *edid, int ext_id) return edid_ext; } -static u8 *drm_find_cea_extension(const struct edid *edid) -{ - return drm_find_edid_extension(edid, CEA_EXT); -} static u8 *drm_find_displayid_extension(const struct edid *edid) { return drm_find_edid_extension(edid, DISPLAYID_EXT); } +static u8 *drm_find_cea_extension(const struct edid *edid) +{ + int ret; + int idx = 1; + int length = EDID_LENGTH; + struct displayid_block *block; + u8 *cea; + u8 *displayid; + + /* Look for a top level CEA extension block */ + cea = drm_find_edid_extension(edid, CEA_EXT); + if (cea) + return cea; + + /* CEA blocks can also be found embedded in a DisplayID block */ + displayid = drm_find_displayid_extension(edid); + if (!displayid) + return NULL; + + ret = validate_displayid(displayid, length, idx); + if (ret) + return NULL; + + idx += sizeof(struct displayid_hdr); + for_each_displayid_db(displayid, block, idx, length) { + if (block->tag == DATA_BLOCK_CTA) { + cea = 
(u8 *)block; + break; + } + } + + return cea; +} + /* * Calculate the alternate clock for the CEA mode * (60Hz vs. 59.94Hz etc.) @@ -3665,13 +3696,38 @@ cea_revision(const u8 *cea) static int cea_db_offsets(const u8 *cea, int *start, int *end) { - /* Data block offset in CEA extension block */ - *start = 4; - *end = cea[2]; - if (*end == 0) - *end = 127; - if (*end < 4 || *end > 127) - return -ERANGE; + /* DisplayID CTA extension blocks and top-level CEA EDID + * block header definitions differ in the following bytes: + * 1) Byte 2 of the header specifies length differently, + * 2) Byte 3 is only present in the CEA top level block. + * + * The different definitions for byte 2 follow. + * + * DisplayID CTA extension block defines byte 2 as: + * Number of payload bytes + * + * CEA EDID block defines byte 2 as: + * Byte number (decimal) within this block where the 18-byte + * DTDs begin. If no non-DTD data is present in this extension + * block, the value should be set to 04h (the byte after next). + * If set to 00h, there are no DTDs present in this block and + * no non-DTD data. + */ + if (cea[0] == DATA_BLOCK_CTA) { + *start = 3; + *end = *start + cea[2]; + } else if (cea[0] == CEA_EXT) { + /* Data block offset in CEA extension block */ + *start = 4; + *end = cea[2]; + if (*end == 0) + *end = 127; + if (*end < 4 || *end > 127) + return -ERANGE; + } else { + return -ENOTSUPP; + } + return 0; } @@ -5218,6 +5274,9 @@ static int drm_parse_display_id(struct drm_connector *connector, case DATA_BLOCK_TYPE_1_DETAILED_TIMING: /* handled in mode gathering code. */ break; + case DATA_BLOCK_CTA: + /* handled in the cea parser code. */ + break; default: DRM_DEBUG_KMS("found DisplayID tag 0x%x, unhandled\n", block->tag); break; diff --git a/drivers/gpu/drm/drm_edid_load.c b/drivers/gpu/drm/drm_edid_load.c index a4915099aaa99223b56de4181c26510303769966..a0e107abc40d722c7d5bf06c74c5c4bbb509dd68 100644 --- a/drivers/gpu/drm/drm_edid_load.c +++ b/drivers/gpu/drm/drm_edid_load.c @@ -290,6 +290,8 @@ struct edid *drm_load_edid_firmware(struct drm_connector *connector) * the last one found one as a fallback. */ fwstr = kstrdup(edid_firmware, GFP_KERNEL); + if (!fwstr) + return ERR_PTR(-ENOMEM); edidstr = fwstr; while ((edidname = strsep(&edidstr, ","))) { diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index 8b546fde139d611179ebb6f1ce83c8ef4e77a828..a0b217f93a0d225b8fc351b8056f866852c40332 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -2099,6 +2099,10 @@ struct drm_display_mode *drm_pick_cmdline_mode(struct drm_fb_helper_connector *f prefer_non_interlace = !cmdline_mode->interlace; again: list_for_each_entry(mode, &fb_helper_conn->connector->modes, head) { + /* Check (optional) mode name first */ + if (!strcmp(mode->name, cmdline_mode->name)) + return mode; + /* check width/height */ if (mode->hdisplay != cmdline_mode->xres || mode->vdisplay != cmdline_mode->yres) @@ -2460,6 +2464,7 @@ static void drm_setup_crtc_rotation(struct drm_fb_helper *fb_helper, struct drm_connector *connector) { struct drm_plane *plane = fb_crtc->mode_set.crtc->primary; + struct drm_cmdline_mode *cmdline; uint64_t valid_mask = 0; int i, rotation; @@ -2479,6 +2484,35 @@ static void drm_setup_crtc_rotation(struct drm_fb_helper *fb_helper, rotation = DRM_MODE_ROTATE_0; } + /** + * The panel already defined the default rotation + * through its orientation. Whatever has been provided + * on the command line needs to be added to that. 
+ * + * Unfortunately, the rotations are at different bit + * indices, so the math to add them up is not as + * trivial as it could be. + * + * Reflections on the other hand are pretty trivial to deal with, a + * simple XOR between the two handles the addition nicely. + */ + cmdline = &connector->cmdline_mode; + if (cmdline->specified && cmdline->rotation_reflection) { + unsigned int cmdline_rest, panel_rest; + unsigned int cmdline_rot, panel_rot; + unsigned int sum_rot, sum_rest; + + panel_rot = ilog2(rotation & DRM_MODE_ROTATE_MASK); + cmdline_rot = ilog2(cmdline->rotation_reflection & DRM_MODE_ROTATE_MASK); + sum_rot = (panel_rot + cmdline_rot) % 4; + + panel_rest = rotation & ~DRM_MODE_ROTATE_MASK; + cmdline_rest = cmdline->rotation_reflection & ~DRM_MODE_ROTATE_MASK; + sum_rest = panel_rest ^ cmdline_rest; + + rotation = (1 << sum_rot) | sum_rest; + } + /* * TODO: support 90 / 270 degree hardware rotation, * depending on the hardware this may require the framebuffer diff --git a/drivers/gpu/drm/drm_framebuffer.c b/drivers/gpu/drm/drm_framebuffer.c index 781af1d42d766bf63db12801ace4703132db84fa..b64a6ffc0aed72c6429e81b68527bb60b4a447a7 100644 --- a/drivers/gpu/drm/drm_framebuffer.c +++ b/drivers/gpu/drm/drm_framebuffer.c @@ -793,7 +793,7 @@ static int atomic_remove_fb(struct drm_framebuffer *fb) struct drm_device *dev = fb->dev; struct drm_atomic_state *state; struct drm_plane *plane; - struct drm_connector *conn; + struct drm_connector *conn __maybe_unused; struct drm_connector_state *conn_state; int i, ret; unsigned plane_mask; diff --git a/drivers/gpu/drm/drm_modes.c b/drivers/gpu/drm/drm_modes.c index a3104d79b48f07d229264d050f993e515d78e396..243de91ee9ee112b1cf09a9a8c98fca79a845b37 100644 --- a/drivers/gpu/drm/drm_modes.c +++ b/drivers/gpu/drm/drm_modes.c @@ -30,6 +30,7 @@ * authorization from the copyright holder(s) and author(s).
*/ +#include #include #include #include @@ -1414,6 +1415,260 @@ void drm_connector_list_update(struct drm_connector *connector) } EXPORT_SYMBOL(drm_connector_list_update); +static int drm_mode_parse_cmdline_bpp(const char *str, char **end_ptr, + struct drm_cmdline_mode *mode) +{ + unsigned int bpp; + + if (str[0] != '-') + return -EINVAL; + + str++; + bpp = simple_strtol(str, end_ptr, 10); + if (*end_ptr == str) + return -EINVAL; + + mode->bpp = bpp; + mode->bpp_specified = true; + + return 0; +} + +static int drm_mode_parse_cmdline_refresh(const char *str, char **end_ptr, + struct drm_cmdline_mode *mode) +{ + unsigned int refresh; + + if (str[0] != '@') + return -EINVAL; + + str++; + refresh = simple_strtol(str, end_ptr, 10); + if (*end_ptr == str) + return -EINVAL; + + mode->refresh = refresh; + mode->refresh_specified = true; + + return 0; +} + +static int drm_mode_parse_cmdline_extra(const char *str, int length, + struct drm_connector *connector, + struct drm_cmdline_mode *mode) +{ + int i; + + for (i = 0; i < length; i++) { + switch (str[i]) { + case 'i': + mode->interlace = true; + break; + case 'm': + mode->margins = true; + break; + case 'D': + if (mode->force != DRM_FORCE_UNSPECIFIED) + return -EINVAL; + + if ((connector->connector_type != DRM_MODE_CONNECTOR_DVII) && + (connector->connector_type != DRM_MODE_CONNECTOR_HDMIB)) + mode->force = DRM_FORCE_ON; + else + mode->force = DRM_FORCE_ON_DIGITAL; + break; + case 'd': + if (mode->force != DRM_FORCE_UNSPECIFIED) + return -EINVAL; + + mode->force = DRM_FORCE_OFF; + break; + case 'e': + if (mode->force != DRM_FORCE_UNSPECIFIED) + return -EINVAL; + + mode->force = DRM_FORCE_ON; + break; + default: + return -EINVAL; + } + } + + return 0; +} + +static int drm_mode_parse_cmdline_res_mode(const char *str, unsigned int length, + bool extras, + struct drm_connector *connector, + struct drm_cmdline_mode *mode) +{ + const char *str_start = str; + bool rb = false, cvt = false; + int xres = 0, yres = 0; + int remaining, i; + char *end_ptr; + + xres = simple_strtol(str, &end_ptr, 10); + if (end_ptr == str) + return -EINVAL; + + if (end_ptr[0] != 'x') + return -EINVAL; + end_ptr++; + + str = end_ptr; + yres = simple_strtol(str, &end_ptr, 10); + if (end_ptr == str) + return -EINVAL; + + remaining = length - (end_ptr - str_start); + if (remaining < 0) + return -EINVAL; + + for (i = 0; i < remaining; i++) { + switch (end_ptr[i]) { + case 'M': + cvt = true; + break; + case 'R': + rb = true; + break; + default: + /* + * Try to pass that to our extras parsing + * function to handle the case where the + * extras are directly after the resolution + */ + if (extras) { + int ret = drm_mode_parse_cmdline_extra(end_ptr + i, + 1, + connector, + mode); + if (ret) + return ret; + } else { + return -EINVAL; + } + } + } + + mode->xres = xres; + mode->yres = yres; + mode->cvt = cvt; + mode->rb = rb; + + return 0; +} + +static int drm_mode_parse_cmdline_options(char *str, size_t len, + struct drm_connector *connector, + struct drm_cmdline_mode *mode) +{ + unsigned int rotation = 0; + char *sep = str; + + while ((sep = strchr(sep, ','))) { + char *delim, *option; + + option = sep + 1; + delim = strchr(option, '='); + if (!delim) { + delim = strchr(option, ','); + + if (!delim) + delim = str + len; + } + + if (!strncmp(option, "rotate", delim - option)) { + const char *value = delim + 1; + unsigned int deg; + + deg = simple_strtol(value, &sep, 10); + + /* Make sure we have parsed something */ + if (sep == value) + return -EINVAL; + + switch (deg) { + case 0: + 
rotation |= DRM_MODE_ROTATE_0; + break; + + case 90: + rotation |= DRM_MODE_ROTATE_90; + break; + + case 180: + rotation |= DRM_MODE_ROTATE_180; + break; + + case 270: + rotation |= DRM_MODE_ROTATE_270; + break; + + default: + return -EINVAL; + } + } else if (!strncmp(option, "reflect_x", delim - option)) { + rotation |= DRM_MODE_REFLECT_X; + sep = delim; + } else if (!strncmp(option, "reflect_y", delim - option)) { + rotation |= DRM_MODE_REFLECT_Y; + sep = delim; + } else if (!strncmp(option, "margin_right", delim - option)) { + const char *value = delim + 1; + unsigned int margin; + + margin = simple_strtol(value, &sep, 10); + + /* Make sure we have parsed something */ + if (sep == value) + return -EINVAL; + + mode->tv_margins.right = margin; + } else if (!strncmp(option, "margin_left", delim - option)) { + const char *value = delim + 1; + unsigned int margin; + + margin = simple_strtol(value, &sep, 10); + + /* Make sure we have parsed something */ + if (sep == value) + return -EINVAL; + + mode->tv_margins.left = margin; + } else if (!strncmp(option, "margin_top", delim - option)) { + const char *value = delim + 1; + unsigned int margin; + + margin = simple_strtol(value, &sep, 10); + + /* Make sure we have parsed something */ + if (sep == value) + return -EINVAL; + + mode->tv_margins.top = margin; + } else if (!strncmp(option, "margin_bottom", delim - option)) { + const char *value = delim + 1; + unsigned int margin; + + margin = simple_strtol(value, &sep, 10); + + /* Make sure we have parsed something */ + if (sep == value) + return -EINVAL; + + mode->tv_margins.bottom = margin; + } else { + return -EINVAL; + } + } + + mode->rotation_reflection = rotation; + + return 0; +} + /** * drm_mode_parse_command_line_for_connector - parse command line modeline for connector * @mode_option: optional per connector mode option @@ -1429,6 +1684,10 @@ EXPORT_SYMBOL(drm_connector_list_update); * * x[M][R][-][@][i][m][eDd] * + * Additionals options can be provided following the mode, using a comma to + * separate each option. Valid options can be found in + * Documentation/fb/modedb.txt. + * * The intermediate drm_cmdline_mode structure is required to store additional * options from the command line modline like the force-enable/disable flag. 
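As a usage illustration (the connector name is only an example): a kernel command line of

    video=HDMI-A-1:720x480M-24@60,rotate=180,margin_left=18,margin_right=18

requests a 720x480 CVT mode at 24 bpp and 60 Hz on HDMI-A-1, rotated by 180 degrees, with 18 pixel overscan margins on the left and right edges. The parser splits the string at the first '-', '@' and ',' respectively, parses the leading resolution (or named mode), and hands everything after the first comma to drm_mode_parse_cmdline_options().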
* @@ -1440,13 +1699,13 @@ bool drm_mode_parse_command_line_for_connector(const char *mode_option, struct drm_cmdline_mode *mode) { const char *name; - unsigned int namelen; - bool res_specified = false, bpp_specified = false, refresh_specified = false; - unsigned int xres = 0, yres = 0, bpp = 32, refresh = 0; - bool yres_specified = false, cvt = false, rb = false; - bool interlace = false, margins = false, was_digit = false; - int i; - enum drm_connector_force force = DRM_FORCE_UNSPECIFIED; + bool named_mode = false, parse_extras = false; + unsigned int bpp_off = 0, refresh_off = 0, options_off = 0; + unsigned int mode_end = 0; + char *bpp_ptr = NULL, *refresh_ptr = NULL, *extra_ptr = NULL; + char *options_ptr = NULL; + char *bpp_end_ptr = NULL, *refresh_end_ptr = NULL; + int ret; #ifdef CONFIG_FB if (!mode_option) @@ -1459,127 +1718,111 @@ bool drm_mode_parse_command_line_for_connector(const char *mode_option, } name = mode_option; - namelen = strlen(name); - for (i = namelen-1; i >= 0; i--) { - switch (name[i]) { - case '@': - if (!refresh_specified && !bpp_specified && - !yres_specified && !cvt && !rb && was_digit) { - refresh = simple_strtol(&name[i+1], NULL, 10); - refresh_specified = true; - was_digit = false; - } else - goto done; - break; - case '-': - if (!bpp_specified && !yres_specified && !cvt && - !rb && was_digit) { - bpp = simple_strtol(&name[i+1], NULL, 10); - bpp_specified = true; - was_digit = false; - } else - goto done; - break; - case 'x': - if (!yres_specified && was_digit) { - yres = simple_strtol(&name[i+1], NULL, 10); - yres_specified = true; - was_digit = false; - } else - goto done; - break; - case '0' ... '9': - was_digit = true; - break; - case 'M': - if (yres_specified || cvt || was_digit) - goto done; - cvt = true; - break; - case 'R': - if (yres_specified || cvt || rb || was_digit) - goto done; - rb = true; - break; - case 'm': - if (cvt || yres_specified || was_digit) - goto done; - margins = true; - break; - case 'i': - if (cvt || yres_specified || was_digit) - goto done; - interlace = true; - break; - case 'e': - if (yres_specified || bpp_specified || refresh_specified || - was_digit || (force != DRM_FORCE_UNSPECIFIED)) - goto done; - force = DRM_FORCE_ON; - break; - case 'D': - if (yres_specified || bpp_specified || refresh_specified || - was_digit || (force != DRM_FORCE_UNSPECIFIED)) - goto done; + /* + * This is a bit convoluted. To differentiate between the + * named modes and poorly formatted resolutions, we need a + * bunch of things: + * - We need to make sure that the first character (which + * would be our resolution in X) is a digit. + * - However, if the X resolution is missing, then we end up + * with something like x, with our first character + * being an alpha-numerical character, which would be + * considered a named mode. + * + * If this isn't enough, we should add more heuristics here, + * and matching unit-tests. 
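For instance, with the check below, "1920x1080" starts with a digit and is parsed as a <xres>x<yres> resolution, "NTSC" starts with a letter and is copied verbatim into mode->name as a named mode, while a malformed resolution with a missing X component such as "x480" is still sent down (and rejected by) the resolution parser thanks to the name[0] != 'x' exception.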
+ */ + if (!isdigit(name[0]) && name[0] != 'x') + named_mode = true; - if ((connector->connector_type != DRM_MODE_CONNECTOR_DVII) && - (connector->connector_type != DRM_MODE_CONNECTOR_HDMIB)) - force = DRM_FORCE_ON; - else - force = DRM_FORCE_ON_DIGITAL; - break; - case 'd': - if (yres_specified || bpp_specified || refresh_specified || - was_digit || (force != DRM_FORCE_UNSPECIFIED)) - goto done; + /* Try to locate the bpp and refresh specifiers, if any */ + bpp_ptr = strchr(name, '-'); + if (bpp_ptr) { + bpp_off = bpp_ptr - name; + mode->bpp_specified = true; + } - force = DRM_FORCE_OFF; - break; - default: - goto done; - } + refresh_ptr = strchr(name, '@'); + if (refresh_ptr) { + if (named_mode) + return false; + + refresh_off = refresh_ptr - name; + mode->refresh_specified = true; } - if (i < 0 && yres_specified) { - char *ch; - xres = simple_strtol(name, &ch, 10); - if ((ch != NULL) && (*ch == 'x')) - res_specified = true; - else - i = ch - name; - } else if (!yres_specified && was_digit) { - /* catch mode that begins with digits but has no 'x' */ - i = 0; + /* Locate the start of named options */ + options_ptr = strchr(name, ','); + if (options_ptr) + options_off = options_ptr - name; + + /* Locate the end of the name / resolution, and parse it */ + if (bpp_ptr) { + mode_end = bpp_off; + } else if (refresh_ptr) { + mode_end = refresh_off; + } else if (options_ptr) { + mode_end = options_off; + } else { + mode_end = strlen(name); + parse_extras = true; } -done: - if (i >= 0) { - pr_warn("[drm] parse error at position %i in video mode '%s'\n", - i, name); - mode->specified = false; - return false; + + if (named_mode) { + strncpy(mode->name, name, mode_end); + } else { + ret = drm_mode_parse_cmdline_res_mode(name, mode_end, + parse_extras, + connector, + mode); + if (ret) + return false; } + mode->specified = true; - if (res_specified) { - mode->specified = true; - mode->xres = xres; - mode->yres = yres; + if (bpp_ptr) { + ret = drm_mode_parse_cmdline_bpp(bpp_ptr, &bpp_end_ptr, mode); + if (ret) + return false; } - if (refresh_specified) { - mode->refresh_specified = true; - mode->refresh = refresh; + if (refresh_ptr) { + ret = drm_mode_parse_cmdline_refresh(refresh_ptr, + &refresh_end_ptr, mode); + if (ret) + return false; } - if (bpp_specified) { - mode->bpp_specified = true; - mode->bpp = bpp; + /* + * Locate the end of the bpp / refresh, and parse the extras + * if relevant + */ + if (bpp_ptr && refresh_ptr) + extra_ptr = max(bpp_end_ptr, refresh_end_ptr); + else if (bpp_ptr) + extra_ptr = bpp_end_ptr; + else if (refresh_ptr) + extra_ptr = refresh_end_ptr; + + if (extra_ptr && + extra_ptr != options_ptr) { + int len = strlen(name) - (extra_ptr - name); + + ret = drm_mode_parse_cmdline_extra(extra_ptr, len, + connector, mode); + if (ret) + return false; + } + + if (options_ptr) { + int len = strlen(name) - (options_ptr - name); + + ret = drm_mode_parse_cmdline_options(options_ptr, len, + connector, mode); + if (ret) + return false; } - mode->rb = rb; - mode->cvt = cvt; - mode->interlace = interlace; - mode->margins = margins; - mode->force = force; return true; } diff --git a/drivers/gpu/drm/exynos/exynos_drm_scaler.c b/drivers/gpu/drm/exynos/exynos_drm_scaler.c index 0ddb6eec7b113ea306fea4bde563e8ecb9945495..df228436a03d92965d65178598df9c923a21f7c6 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_scaler.c +++ b/drivers/gpu/drm/exynos/exynos_drm_scaler.c @@ -108,12 +108,12 @@ static inline int scaler_reset(struct scaler_context *scaler) scaler_write(SCALER_CFG_SOFT_RESET, SCALER_CFG); 
do { cpu_relax(); - } while (retry > 1 && + } while (--retry > 1 && scaler_read(SCALER_CFG) & SCALER_CFG_SOFT_RESET); do { cpu_relax(); scaler_write(1, SCALER_INT_EN); - } while (retry > 0 && scaler_read(SCALER_INT_EN) != 1); + } while (--retry > 0 && scaler_read(SCALER_INT_EN) != 1); return retry ? 0 : -EIO; } diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c index 12e4203c06dbd1e817e4afece3eee0f6c232c57b..66abe061f07b09397c228d76a177bea184ac2329 100644 --- a/drivers/gpu/drm/i915/gvt/kvmgt.c +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c @@ -1741,6 +1741,18 @@ int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn, entry = __gvt_cache_find_gfn(info->vgpu, gfn); if (!entry) { + ret = gvt_dma_map_page(vgpu, gfn, dma_addr, size); + if (ret) + goto err_unlock; + + ret = __gvt_cache_add(info->vgpu, gfn, *dma_addr, size); + if (ret) + goto err_unmap; + } else if (entry->size != size) { + /* the same gfn with different size: unmap and re-map */ + gvt_dma_unmap_page(vgpu, gfn, entry->dma_addr, entry->size); + __gvt_cache_remove_entry(vgpu, entry); + ret = gvt_dma_map_page(vgpu, gfn, dma_addr, size); if (ret) goto err_unlock; diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c index f8cfd16be534cf3eece97c4a59a456e5d8769bde..a4b4ab7b9f8ef323592b9e8f162de009c21673b3 100644 --- a/drivers/gpu/drm/i915/i915_drv.c +++ b/drivers/gpu/drm/i915/i915_drv.c @@ -1120,6 +1120,12 @@ static int i915_driver_init_hw(struct drm_i915_private *dev_priv) pci_set_master(pdev); + /* + * We don't have a max segment size, so set it to the max so sg's + * debugging layer doesn't complain + */ + dma_set_max_seg_size(&pdev->dev, UINT_MAX); + /* overlay on gen2 is broken and can't address above 1G */ if (IS_GEN2(dev_priv)) { ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(30)); diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c index 869cf4a3b6de75fee593c0f66c953cc1035434a6..a6cb3e034dd5a72bfb388c7e414ba6005f97e507 100644 --- a/drivers/gpu/drm/i915/i915_vgpu.c +++ b/drivers/gpu/drm/i915/i915_vgpu.c @@ -100,6 +100,9 @@ static struct _balloon_info_ bl_info; static void vgt_deballoon_space(struct i915_ggtt *ggtt, struct drm_mm_node *node) { + if (!drm_mm_node_allocated(node)) + return; + DRM_DEBUG_DRIVER("deballoon space: range [0x%llx - 0x%llx] %llu KiB.\n", node->start, node->start + node->size, diff --git a/drivers/gpu/drm/i915/intel_device_info.c b/drivers/gpu/drm/i915/intel_device_info.c index 0ef0c6448d53a835fbdf5319a8010c64d613bd0f..01fa98299bae65a125862e57c307cdbce07c3d32 100644 --- a/drivers/gpu/drm/i915/intel_device_info.c +++ b/drivers/gpu/drm/i915/intel_device_info.c @@ -474,7 +474,7 @@ static void broadwell_sseu_info_init(struct drm_i915_private *dev_priv) u8 eu_disabled_mask; u32 n_disabled; - if (!(sseu->subslice_mask[ss] & BIT(ss))) + if (!(sseu->subslice_mask[s] & BIT(ss))) /* skip disabled subslice */ continue; diff --git a/drivers/gpu/drm/i915/vlv_dsi_pll.c b/drivers/gpu/drm/i915/vlv_dsi_pll.c index a132a8037ecc6b2a317229918e8d2471cdbcb9dd..77df7903e071e3e74550ee1567fdfa970b44a650 100644 --- a/drivers/gpu/drm/i915/vlv_dsi_pll.c +++ b/drivers/gpu/drm/i915/vlv_dsi_pll.c @@ -413,8 +413,8 @@ static void glk_dsi_program_esc_clock(struct drm_device *dev, else txesc2_div = 10; - I915_WRITE(MIPIO_TXESC_CLK_DIV1, txesc1_div & GLK_TX_ESC_CLK_DIV1_MASK); - I915_WRITE(MIPIO_TXESC_CLK_DIV2, txesc2_div & GLK_TX_ESC_CLK_DIV2_MASK); + I915_WRITE(MIPIO_TXESC_CLK_DIV1, (1 << (txesc1_div - 1)) & GLK_TX_ESC_CLK_DIV1_MASK); + 
I915_WRITE(MIPIO_TXESC_CLK_DIV2, (1 << (txesc2_div - 1)) & GLK_TX_ESC_CLK_DIV2_MASK); } /* Program BXT Mipi clocks and dividers */ diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index c1abad8a8612683a237dfae4fec766c806c9caf8..dbfd2c006f7406ed31614dc3de1b412c8ee37b09 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -1284,7 +1284,8 @@ static int add_gpu_components(struct device *dev, if (!np) return 0; - drm_of_component_match_add(dev, matchptr, compare_of, np); + if (of_device_is_available(np)) + drm_of_component_match_add(dev, matchptr, compare_of, np); of_node_put(np); @@ -1321,16 +1322,24 @@ static int msm_pdev_probe(struct platform_device *pdev) ret = add_gpu_components(&pdev->dev, &match); if (ret) - return ret; + goto fail; /* on all devices that I am aware of, iommu's which can map * any address the cpu can see are used: */ ret = dma_set_mask_and_coherent(&pdev->dev, ~0); if (ret) - return ret; + goto fail; + + ret = component_master_add_with_match(&pdev->dev, &msm_drm_ops, match); + if (ret) + goto fail; - return component_master_add_with_match(&pdev->dev, &msm_drm_ops, match); + return 0; + +fail: + of_platform_depopulate(&pdev->dev); + return ret; } static int msm_pdev_remove(struct platform_device *pdev) diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c index 247f72cc4d10a4547309effb6b8904e9c0d60d49..fb0094fc55834aa7ba69090a517d38ad2b332001 100644 --- a/drivers/gpu/drm/nouveau/nouveau_connector.c +++ b/drivers/gpu/drm/nouveau/nouveau_connector.c @@ -251,7 +251,7 @@ nouveau_conn_reset(struct drm_connector *connector) return; if (connector->state) - __drm_atomic_helper_connector_destroy_state(connector->state); + nouveau_conn_atomic_destroy_state(connector, connector->state); __drm_atomic_helper_connector_reset(connector, &asyc->state); asyc->dither.mode = DITHERING_MODE_AUTO; asyc->dither.depth = DITHERING_DEPTH_AUTO; diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.c index b4e7404fe660e24937bde1e518533feecad376d9..a11637b0f6ccf43cc39c417dc64a8fb349815a22 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.c @@ -40,8 +40,7 @@ nvkm_i2c_aux_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num) u8 *ptr = msg->buf; while (remaining) { - u8 cnt = (remaining > 16) ? 
16 : remaining; - u8 cmd; + u8 cnt, retries, cmd; if (msg->flags & I2C_M_RD) cmd = 1; @@ -51,10 +50,19 @@ nvkm_i2c_aux_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num) if (mcnt || remaining > 16) cmd |= 4; /* MOT */ - ret = aux->func->xfer(aux, true, cmd, msg->addr, ptr, &cnt); - if (ret < 0) { - nvkm_i2c_aux_release(aux); - return ret; + for (retries = 0, cnt = 0; + retries < 32 && !cnt; + retries++) { + cnt = min_t(u8, remaining, 16); + ret = aux->func->xfer(aux, true, cmd, + msg->addr, ptr, &cnt); + if (ret < 0) + goto out; + } + if (!cnt) { + AUX_TRACE(aux, "no data after 32 retries"); + ret = -EIO; + goto out; } ptr += cnt; @@ -64,8 +72,10 @@ nvkm_i2c_aux_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num) msg++; } + ret = num; +out: nvkm_i2c_aux_release(aux); - return num; + return ret; } static u32 diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c index ecacb22834d76d24c9ee656748715ca552ed10ae..719345074711144d13b85b7e75ccac2c8eeb0c5b 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c @@ -184,6 +184,25 @@ nvkm_i2c_fini(struct nvkm_subdev *subdev, bool suspend) return 0; } +static int +nvkm_i2c_preinit(struct nvkm_subdev *subdev) +{ + struct nvkm_i2c *i2c = nvkm_i2c(subdev); + struct nvkm_i2c_bus *bus; + struct nvkm_i2c_pad *pad; + + /* + * We init our i2c busses as early as possible, since they may be + * needed by the vbios init scripts on some cards + */ + list_for_each_entry(pad, &i2c->pad, head) + nvkm_i2c_pad_init(pad); + list_for_each_entry(bus, &i2c->bus, head) + nvkm_i2c_bus_init(bus); + + return 0; +} + static int nvkm_i2c_init(struct nvkm_subdev *subdev) { @@ -238,6 +257,7 @@ nvkm_i2c_dtor(struct nvkm_subdev *subdev) static const struct nvkm_subdev_func nvkm_i2c = { .dtor = nvkm_i2c_dtor, + .preinit = nvkm_i2c_preinit, .init = nvkm_i2c_init, .fini = nvkm_i2c_fini, .intr = nvkm_i2c_intr, diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c index 97964f7f2acee08350101947a4ccd9a717f5f199..b1d41c4921dd5765e61eb93d941de36d12a675d1 100644 --- a/drivers/gpu/drm/panel/panel-simple.c +++ b/drivers/gpu/drm/panel/panel-simple.c @@ -2803,7 +2803,14 @@ static int panel_simple_dsi_probe(struct mipi_dsi_device *dsi) dsi->format = desc->format; dsi->lanes = desc->lanes; - return mipi_dsi_attach(dsi); + err = mipi_dsi_attach(dsi); + if (err) { + struct panel_simple *panel = dev_get_drvdata(&dsi->dev); + + drm_panel_remove(&panel->base); + } + + return err; } static int panel_simple_dsi_remove(struct mipi_dsi_device *dsi) diff --git a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c index 080f0535219502306774ea799c10cc30793006ca..6a4da3a0ff1c3f8c4fb765d6d03fcf3c0428200c 100644 --- a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c +++ b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c @@ -436,7 +436,7 @@ static int rockchip_dp_resume(struct device *dev) static const struct dev_pm_ops rockchip_dp_pm_ops = { #ifdef CONFIG_PM_SLEEP - .suspend = rockchip_dp_suspend, + .suspend_late = rockchip_dp_suspend, .resume_early = rockchip_dp_resume, #endif }; diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c index f8f9ae6622eb539783be0c8a02ef3b616bdea1b3..873624a11ce8805c702e3544eae227af957edb27 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c +++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c @@ -880,7 +880,8 @@ 
static bool vop_crtc_mode_fixup(struct drm_crtc *crtc, struct vop *vop = to_vop(crtc); adjusted_mode->clock = - clk_round_rate(vop->dclk, mode->clock * 1000) / 1000; + DIV_ROUND_UP(clk_round_rate(vop->dclk, mode->clock * 1000), + 1000); return true; } diff --git a/drivers/gpu/drm/tilcdc/tilcdc_drv.c b/drivers/gpu/drm/tilcdc/tilcdc_drv.c index 0fb300d41a09c02508e70e89678695e09a8ff0f9..e1868776da2524be333c33f89cb4864ccdf11043 100644 --- a/drivers/gpu/drm/tilcdc/tilcdc_drv.c +++ b/drivers/gpu/drm/tilcdc/tilcdc_drv.c @@ -184,6 +184,12 @@ static void tilcdc_fini(struct drm_device *dev) { struct tilcdc_drm_private *priv = dev->dev_private; +#ifdef CONFIG_CPU_FREQ + if (priv->freq_transition.notifier_call) + cpufreq_unregister_notifier(&priv->freq_transition, + CPUFREQ_TRANSITION_NOTIFIER); +#endif + if (priv->crtc) tilcdc_crtc_shutdown(priv->crtc); @@ -198,12 +204,6 @@ static void tilcdc_fini(struct drm_device *dev) drm_mode_config_cleanup(dev); tilcdc_remove_external_device(dev); -#ifdef CONFIG_CPU_FREQ - if (priv->freq_transition.notifier_call) - cpufreq_unregister_notifier(&priv->freq_transition, - CPUFREQ_TRANSITION_NOTIFIER); -#endif - if (priv->clk) clk_put(priv->clk); @@ -274,17 +274,6 @@ static int tilcdc_init(struct drm_driver *ddrv, struct device *dev) goto init_failed; } -#ifdef CONFIG_CPU_FREQ - priv->freq_transition.notifier_call = cpufreq_transition; - ret = cpufreq_register_notifier(&priv->freq_transition, - CPUFREQ_TRANSITION_NOTIFIER); - if (ret) { - dev_err(dev, "failed to register cpufreq notifier\n"); - priv->freq_transition.notifier_call = NULL; - goto init_failed; - } -#endif - if (of_property_read_u32(node, "max-bandwidth", &priv->max_bandwidth)) priv->max_bandwidth = TILCDC_DEFAULT_MAX_BANDWIDTH; @@ -361,6 +350,17 @@ static int tilcdc_init(struct drm_driver *ddrv, struct device *dev) } modeset_init(ddev); +#ifdef CONFIG_CPU_FREQ + priv->freq_transition.notifier_call = cpufreq_transition; + ret = cpufreq_register_notifier(&priv->freq_transition, + CPUFREQ_TRANSITION_NOTIFIER); + if (ret) { + dev_err(dev, "failed to register cpufreq notifier\n"); + priv->freq_transition.notifier_call = NULL; + goto init_failed; + } +#endif + if (priv->is_componentized) { ret = component_bind_all(dev, ddev); if (ret < 0) diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c index 54e767bd5ddb7c5fc282f18cde0a10dd65094682..f28703db8dbd65a4f68b2f7d960a3157858b72e5 100644 --- a/drivers/gpu/drm/udl/udl_drv.c +++ b/drivers/gpu/drm/udl/udl_drv.c @@ -47,10 +47,16 @@ static const struct file_operations udl_driver_fops = { .llseek = noop_llseek, }; +static void udl_driver_release(struct drm_device *dev) +{ + udl_fini(dev); + udl_modeset_cleanup(dev); + drm_dev_fini(dev); + kfree(dev); +} + static struct drm_driver driver = { .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME, - .load = udl_driver_load, - .unload = udl_driver_unload, .release = udl_driver_release, /* gem hooks */ @@ -74,28 +80,56 @@ static struct drm_driver driver = { .patchlevel = DRIVER_PATCHLEVEL, }; +static struct udl_device *udl_driver_create(struct usb_interface *interface) +{ + struct usb_device *udev = interface_to_usbdev(interface); + struct udl_device *udl; + int r; + + udl = kzalloc(sizeof(*udl), GFP_KERNEL); + if (!udl) + return ERR_PTR(-ENOMEM); + + r = drm_dev_init(&udl->drm, &driver, &interface->dev); + if (r) { + kfree(udl); + return ERR_PTR(r); + } + + udl->udev = udev; + udl->drm.dev_private = udl; + + r = udl_init(udl); + if (r) { + drm_dev_fini(&udl->drm); + kfree(udl); + return 
ERR_PTR(r); + } + + usb_set_intfdata(interface, udl); + return udl; +} + static int udl_usb_probe(struct usb_interface *interface, const struct usb_device_id *id) { - struct usb_device *udev = interface_to_usbdev(interface); - struct drm_device *dev; int r; + struct udl_device *udl; - dev = drm_dev_alloc(&driver, &interface->dev); - if (IS_ERR(dev)) - return PTR_ERR(dev); + udl = udl_driver_create(interface); + if (IS_ERR(udl)) + return PTR_ERR(udl); - r = drm_dev_register(dev, (unsigned long)udev); + r = drm_dev_register(&udl->drm, 0); if (r) goto err_free; - usb_set_intfdata(interface, dev); - DRM_INFO("Initialized udl on minor %d\n", dev->primary->index); + DRM_INFO("Initialized udl on minor %d\n", udl->drm.primary->index); return 0; err_free: - drm_dev_unref(dev); + drm_dev_put(&udl->drm); return r; } diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h index 4ae67d882eae928e6b39fb4f240a46bfc272ed15..35c1f33fbc1a0b455c9d2dcf3d23d638923bc863 100644 --- a/drivers/gpu/drm/udl/udl_drv.h +++ b/drivers/gpu/drm/udl/udl_drv.h @@ -50,8 +50,8 @@ struct urb_list { struct udl_fbdev; struct udl_device { + struct drm_device drm; struct device *dev; - struct drm_device *ddev; struct usb_device *udev; struct drm_crtc *crtc; @@ -71,6 +71,8 @@ struct udl_device { atomic_t cpu_kcycles_used; /* transpired during pixel processing */ }; +#define to_udl(x) container_of(x, struct udl_device, drm) + struct udl_gem_object { struct drm_gem_object base; struct page **pages; @@ -102,9 +104,8 @@ struct urb *udl_get_urb(struct drm_device *dev); int udl_submit_urb(struct drm_device *dev, struct urb *urb, size_t len); void udl_urb_completion(struct urb *urb); -int udl_driver_load(struct drm_device *dev, unsigned long flags); -void udl_driver_unload(struct drm_device *dev); -void udl_driver_release(struct drm_device *dev); +int udl_init(struct udl_device *udl); +void udl_fini(struct drm_device *dev); int udl_fbdev_init(struct drm_device *dev); void udl_fbdev_cleanup(struct drm_device *dev); diff --git a/drivers/gpu/drm/udl/udl_fb.c b/drivers/gpu/drm/udl/udl_fb.c index dd9ffded223b5fb09c025d518d5090b27716b560..4ab101bf1df010b58c377079e892fc8142c7064d 100644 --- a/drivers/gpu/drm/udl/udl_fb.c +++ b/drivers/gpu/drm/udl/udl_fb.c @@ -82,7 +82,7 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y, int width, int height) { struct drm_device *dev = fb->base.dev; - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); int i, ret; char *cmd; cycles_t start_cycles, end_cycles; @@ -210,10 +210,10 @@ static int udl_fb_open(struct fb_info *info, int user) { struct udl_fbdev *ufbdev = info->par; struct drm_device *dev = ufbdev->ufb.base.dev; - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); /* If the USB device is gone, we don't accept new opens */ - if (drm_dev_is_unplugged(udl->ddev)) + if (drm_dev_is_unplugged(&udl->drm)) return -ENODEV; ufbdev->fb_count++; @@ -441,7 +441,7 @@ static void udl_fbdev_destroy(struct drm_device *dev, int udl_fbdev_init(struct drm_device *dev) { - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); int bpp_sel = fb_bpp; struct udl_fbdev *ufbdev; int ret; @@ -480,7 +480,7 @@ free: void udl_fbdev_cleanup(struct drm_device *dev) { - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); if (!udl->fbdev) return; @@ -491,7 +491,7 @@ void udl_fbdev_cleanup(struct drm_device *dev) void udl_fbdev_unplug(struct drm_device *dev) { - struct udl_device *udl 
= dev->dev_private; + struct udl_device *udl = to_udl(dev); struct udl_fbdev *ufbdev; if (!udl->fbdev) return; diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c index bb7b58407039bbbb099a371b9a432dc12983f886..3b3e17652bb20f341e68fb33302c8635a8db18c9 100644 --- a/drivers/gpu/drm/udl/udl_gem.c +++ b/drivers/gpu/drm/udl/udl_gem.c @@ -203,7 +203,7 @@ int udl_gem_mmap(struct drm_file *file, struct drm_device *dev, { struct udl_gem_object *gobj; struct drm_gem_object *obj; - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); int ret = 0; mutex_lock(&udl->gem_lock); diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c index 19055dda31409d0d1bcc50084a37b4caf5fc7b85..8d22b6cd524123f60d2220bc32441cbf5ff8c285 100644 --- a/drivers/gpu/drm/udl/udl_main.c +++ b/drivers/gpu/drm/udl/udl_main.c @@ -29,7 +29,7 @@ static int udl_parse_vendor_descriptor(struct drm_device *dev, struct usb_device *usbdev) { - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); char *desc; char *buf; char *desc_end; @@ -165,7 +165,7 @@ void udl_urb_completion(struct urb *urb) static void udl_free_urb_list(struct drm_device *dev) { - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); int count = udl->urbs.count; struct list_head *node; struct urb_node *unode; @@ -198,7 +198,7 @@ static void udl_free_urb_list(struct drm_device *dev) static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size) { - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); struct urb *urb; struct urb_node *unode; char *buf; @@ -262,7 +262,7 @@ retry: struct urb *udl_get_urb(struct drm_device *dev) { - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); int ret = 0; struct list_head *entry; struct urb_node *unode; @@ -295,7 +295,7 @@ error: int udl_submit_urb(struct drm_device *dev, struct urb *urb, size_t len) { - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); int ret; BUG_ON(len > udl->urbs.size); @@ -310,20 +310,12 @@ int udl_submit_urb(struct drm_device *dev, struct urb *urb, size_t len) return ret; } -int udl_driver_load(struct drm_device *dev, unsigned long flags) +int udl_init(struct udl_device *udl) { - struct usb_device *udev = (void*)flags; - struct udl_device *udl; + struct drm_device *dev = &udl->drm; int ret = -ENOMEM; DRM_DEBUG("\n"); - udl = kzalloc(sizeof(struct udl_device), GFP_KERNEL); - if (!udl) - return -ENOMEM; - - udl->udev = udev; - udl->ddev = dev; - dev->dev_private = udl; mutex_init(&udl->gem_lock); @@ -357,7 +349,6 @@ int udl_driver_load(struct drm_device *dev, unsigned long flags) err: if (udl->urbs.count) udl_free_urb_list(dev); - kfree(udl); DRM_ERROR("%d\n", ret); return ret; } @@ -368,9 +359,9 @@ int udl_drop_usb(struct drm_device *dev) return 0; } -void udl_driver_unload(struct drm_device *dev) +void udl_fini(struct drm_device *dev) { - struct udl_device *udl = dev->dev_private; + struct udl_device *udl = to_udl(dev); drm_kms_helper_poll_fini(dev); @@ -378,12 +369,4 @@ void udl_driver_unload(struct drm_device *dev) udl_free_urb_list(dev); udl_fbdev_cleanup(dev); - kfree(udl); -} - -void udl_driver_release(struct drm_device *dev) -{ - udl_modeset_cleanup(dev); - drm_dev_fini(dev); - kfree(dev); } diff --git a/drivers/gpu/drm/v3d/Kconfig b/drivers/gpu/drm/v3d/Kconfig index 1552bf552c942eb3d3c516052c017f7b743c884d..2967459b8fb0fb051b0715ca6935f7d38752877b 100644 --- 
a/drivers/gpu/drm/v3d/Kconfig +++ b/drivers/gpu/drm/v3d/Kconfig @@ -1,6 +1,6 @@ config DRM_V3D tristate "Broadcom V3D 3.x and newer" - depends on ARCH_BCM || ARCH_BCMSTB || COMPILE_TEST + depends on ARCH_BCM || ARCH_BCMSTB || ARCH_BCM2835 || COMPILE_TEST depends on DRM depends on COMMON_CLK depends on MMU diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c index 5615ceb15708f331911dae4cf81750d2c076e5cd..193fe880df7c59330ed587715758f3dde70bcf48 100644 --- a/drivers/gpu/drm/vc4/vc4_crtc.c +++ b/drivers/gpu/drm/vc4/vc4_crtc.c @@ -48,6 +48,13 @@ struct vc4_crtc_state { struct drm_mm_node mm; bool feed_txp; bool txp_armed; + + struct { + unsigned int left; + unsigned int right; + unsigned int top; + unsigned int bottom; + } margins; }; static inline struct vc4_crtc_state * @@ -623,6 +630,37 @@ static enum drm_mode_status vc4_crtc_mode_valid(struct drm_crtc *crtc, return MODE_OK; } +void vc4_crtc_get_margins(struct drm_crtc_state *state, + unsigned int *left, unsigned int *right, + unsigned int *top, unsigned int *bottom) +{ + struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(state); + struct drm_connector_state *conn_state; + struct drm_connector *conn; + int i; + + *left = vc4_state->margins.left; + *right = vc4_state->margins.right; + *top = vc4_state->margins.top; + *bottom = vc4_state->margins.bottom; + + /* We have to interate over all new connector states because + * vc4_crtc_get_margins() might be called before + * vc4_crtc_atomic_check() which means margins info in vc4_crtc_state + * might be outdated. + */ + for_each_new_connector_in_state(state->state, conn, conn_state, i) { + if (conn_state->crtc != state->crtc) + continue; + + *left = conn_state->tv.margins.left; + *right = conn_state->tv.margins.right; + *top = conn_state->tv.margins.top; + *bottom = conn_state->tv.margins.bottom; + break; + } +} + static int vc4_crtc_atomic_check(struct drm_crtc *crtc, struct drm_crtc_state *state) { @@ -670,6 +708,10 @@ static int vc4_crtc_atomic_check(struct drm_crtc *crtc, vc4_state->feed_txp = false; } + vc4_state->margins.left = conn_state->tv.margins.left; + vc4_state->margins.right = conn_state->tv.margins.right; + vc4_state->margins.top = conn_state->tv.margins.top; + vc4_state->margins.bottom = conn_state->tv.margins.bottom; break; } @@ -971,6 +1013,7 @@ static struct drm_crtc_state *vc4_crtc_duplicate_state(struct drm_crtc *crtc) old_vc4_state = to_vc4_crtc_state(crtc->state); vc4_state->feed_txp = old_vc4_state->feed_txp; + vc4_state->margins = old_vc4_state->margins; __drm_atomic_helper_crtc_duplicate_state(crtc, &vc4_state->base); return &vc4_state->base; diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h index e29895d47aff20e10bb7f4b7cfa324c46d59ad57..67553b440d20826902ce7d5f996bc7700d8b14b7 100644 --- a/drivers/gpu/drm/vc4/vc4_drv.h +++ b/drivers/gpu/drm/vc4/vc4_drv.h @@ -705,6 +705,9 @@ bool vc4_crtc_get_scanoutpos(struct drm_device *dev, unsigned int crtc_id, const struct drm_display_mode *mode); void vc4_crtc_handle_vblank(struct vc4_crtc *crtc); void vc4_crtc_txp_armed(struct drm_crtc_state *state); +void vc4_crtc_get_margins(struct drm_crtc_state *state, + unsigned int *right, unsigned int *left, + unsigned int *top, unsigned int *bottom); /* vc4_debugfs.c */ int vc4_debugfs_init(struct drm_minor *minor); diff --git a/drivers/gpu/drm/vc4/vc4_firmware_kms.c b/drivers/gpu/drm/vc4/vc4_firmware_kms.c index 7a6c45e1b02c89a892953561ae93601488b635cf..89ad71cff2bdfd35432398b728c8cf626eaa2152 100644 --- 
a/drivers/gpu/drm/vc4/vc4_firmware_kms.c +++ b/drivers/gpu/drm/vc4/vc4_firmware_kms.c @@ -256,6 +256,23 @@ static inline struct vc4_crtc *to_vc4_crtc(struct drm_crtc *crtc) return container_of(crtc, struct vc4_crtc, base); } +struct vc4_crtc_state { + struct drm_crtc_state base; + + struct { + unsigned int left; + unsigned int right; + unsigned int top; + unsigned int bottom; + } margins; +}; + +static inline struct vc4_crtc_state * +to_vc4_crtc_state(struct drm_crtc_state *crtc_state) +{ + return (struct vc4_crtc_state *)crtc_state; +} + struct vc4_fkms_encoder { struct drm_encoder base; bool hdmi_monitor; @@ -268,6 +285,13 @@ to_vc4_fkms_encoder(struct drm_encoder *encoder) return container_of(encoder, struct vc4_fkms_encoder, base); } +/* "Broadcast RGB" property. + * Allows overriding of HDMI full or limited range RGB + */ +#define VC4_BROADCAST_RGB_AUTO 0 +#define VC4_BROADCAST_RGB_FULL 1 +#define VC4_BROADCAST_RGB_LIMITED 2 + /* VC4 FKMS connector KMS struct */ struct vc4_fkms_connector { struct drm_connector base; @@ -280,6 +304,8 @@ struct vc4_fkms_connector { struct vc4_dev *vc4_dev; u32 display_number; u32 display_type; + + struct drm_property *broadcast_rgb_property; }; static inline struct vc4_fkms_connector * @@ -288,6 +314,16 @@ to_vc4_fkms_connector(struct drm_connector *connector) return container_of(connector, struct vc4_fkms_connector, base); } +/* VC4 FKMS connector state */ +struct vc4_fkms_connector_state { + struct drm_connector_state base; + + int broadcast_rgb; +}; + +#define to_vc4_fkms_connector_state(x) \ + container_of(x, struct vc4_fkms_connector_state, base) + static u32 vc4_get_display_type(u32 display_number) { const u32 display_types[] = { @@ -365,19 +401,128 @@ static int vc4_plane_set_blank(struct drm_plane *plane, bool blank) return ret; } +static void vc4_fkms_crtc_get_margins(struct drm_crtc_state *state, + unsigned int *left, unsigned int *right, + unsigned int *top, unsigned int *bottom) +{ + struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(state); + struct drm_connector_state *conn_state; + struct drm_connector *conn; + int i; + + *left = vc4_state->margins.left; + *right = vc4_state->margins.right; + *top = vc4_state->margins.top; + *bottom = vc4_state->margins.bottom; + + /* We have to interate over all new connector states because + * vc4_fkms_crtc_get_margins() might be called before + * vc4_fkms_crtc_atomic_check() which means margins info in + * vc4_crtc_state might be outdated. 
+ */ + for_each_new_connector_in_state(state->state, conn, conn_state, i) { + if (conn_state->crtc != state->crtc) + continue; + + *left = conn_state->tv.margins.left; + *right = conn_state->tv.margins.right; + *top = conn_state->tv.margins.top; + *bottom = conn_state->tv.margins.bottom; + break; + } +} + +static int vc4_fkms_margins_adj(struct drm_plane_state *pstate, + struct set_plane *plane) +{ + unsigned int left, right, top, bottom; + int adjhdisplay, adjvdisplay; + struct drm_crtc_state *crtc_state; + + crtc_state = drm_atomic_get_new_crtc_state(pstate->state, + pstate->crtc); + + vc4_fkms_crtc_get_margins(crtc_state, &left, &right, &top, &bottom); + + if (!left && !right && !top && !bottom) + return 0; + + if (left + right >= crtc_state->mode.hdisplay || + top + bottom >= crtc_state->mode.vdisplay) + return -EINVAL; + + adjhdisplay = crtc_state->mode.hdisplay - (left + right); + plane->dst_x = DIV_ROUND_CLOSEST(plane->dst_x * adjhdisplay, + (int)crtc_state->mode.hdisplay); + plane->dst_x += left; + if (plane->dst_x > (int)(crtc_state->mode.hdisplay - left)) + plane->dst_x = crtc_state->mode.hdisplay - left; + + adjvdisplay = crtc_state->mode.vdisplay - (top + bottom); + plane->dst_y = DIV_ROUND_CLOSEST(plane->dst_y * adjvdisplay, + (int)crtc_state->mode.vdisplay); + plane->dst_y += top; + if (plane->dst_y > (int)(crtc_state->mode.vdisplay - top)) + plane->dst_y = crtc_state->mode.vdisplay - top; + + plane->dst_w = DIV_ROUND_CLOSEST(plane->dst_w * adjhdisplay, + crtc_state->mode.hdisplay); + plane->dst_h = DIV_ROUND_CLOSEST(plane->dst_h * adjvdisplay, + crtc_state->mode.vdisplay); + + if (!plane->dst_w || !plane->dst_h) + return -EINVAL; + + return 0; +} + static void vc4_plane_atomic_update(struct drm_plane *plane, struct drm_plane_state *old_state) { struct drm_plane_state *state = plane->state; + + /* + * Do NOT set now, as we haven't checked if the crtc is active or not. + * Set from vc4_plane_set_blank instead. + * + * If the CRTC is on (or going to be on) and we're enabled, + * then unblank. Otherwise, stay blank until CRTC enable. 
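A worked example of the margin scaling above, with illustrative numbers: for a 1920x1080 mode with 48 pixel margins on every edge, adjhdisplay = 1920 - (48 + 48) = 1824 and adjvdisplay = 1080 - 96 = 984, so a full-screen plane at 0,0 1920x1080 becomes 48,48 1824x984. The whole frame is scaled into the overscan-safe area, and planes that cover only part of the screen keep their relative position and size.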
+ */ + if (state->crtc->state->active) + vc4_plane_set_blank(plane, false); +} + +static void vc4_plane_atomic_disable(struct drm_plane *plane, + struct drm_plane_state *old_state) +{ + struct drm_plane_state *state = plane->state; + struct vc4_fkms_plane *vc4_plane = to_vc4_fkms_plane(plane); + + DRM_DEBUG_ATOMIC("[PLANE:%d:%s] plane disable %dx%d@%d +%d,%d\n", + plane->base.id, plane->name, + state->crtc_w, + state->crtc_h, + vc4_plane->mb.plane.vc_image_type, + state->crtc_x, + state->crtc_y); + vc4_plane_set_blank(plane, true); +} + +static bool plane_enabled(struct drm_plane_state *state) +{ + return state->fb && state->crtc; +} + +static int vc4_plane_to_mb(struct drm_plane *plane, + struct mailbox_set_plane *mb, + struct drm_plane_state *state) +{ struct drm_framebuffer *fb = state->fb; struct drm_gem_cma_object *bo = drm_fb_cma_get_gem_obj(fb, 0); const struct drm_format_info *drm_fmt = fb->format; const struct vc_image_format *vc_fmt = vc4_get_vc_image_fmt(drm_fmt->format); - struct vc4_fkms_plane *vc4_plane = to_vc4_fkms_plane(plane); - struct mailbox_set_plane *mb = &vc4_plane->mb; int num_planes = fb->format->num_planes; - struct drm_display_mode *mode = &state->crtc->mode; unsigned int rotation = SUPPORTED_ROTATIONS; mb->plane.vc_image_type = vc_fmt->vc_image; @@ -417,25 +562,7 @@ static void vc4_plane_atomic_update(struct drm_plane *plane, break; } - /* FIXME: If the dest rect goes off screen then clip the src rect so we - * don't have off-screen pixels. - */ - if (plane->type == DRM_PLANE_TYPE_CURSOR) { - /* There is no scaling on the cursor plane, therefore the calcs - * to alter the source crop as the cursor goes off the screen - * are simple. - */ - if (mb->plane.dst_x + mb->plane.dst_w > mode->hdisplay) { - mb->plane.dst_w = mode->hdisplay - mb->plane.dst_x; - mb->plane.src_w = (mode->hdisplay - mb->plane.dst_x) - << 16; - } - if (mb->plane.dst_y + mb->plane.dst_h > mode->vdisplay) { - mb->plane.dst_h = mode->vdisplay - mb->plane.dst_y; - mb->plane.src_h = (mode->vdisplay - mb->plane.dst_y) - << 16; - } - } + vc4_fkms_margins_adj(state, &mb->plane); if (num_planes > 1) { /* Assume this must be YUV */ @@ -525,38 +652,19 @@ static void vc4_plane_atomic_update(struct drm_plane *plane, state->alpha, state->normalized_zpos); - /* - * Do NOT set now, as we haven't checked if the crtc is active or not. - * Set from vc4_plane_set_blank instead. - * - * If the CRTC is on (or going to be on) and we're enabled, - * then unblank. Otherwise, stay blank until CRTC enable. 
- */ - if (state->crtc->state->active) - vc4_plane_set_blank(plane, false); + return 0; } -static void vc4_plane_atomic_disable(struct drm_plane *plane, - struct drm_plane_state *old_state) +static int vc4_plane_atomic_check(struct drm_plane *plane, + struct drm_plane_state *state) { - //struct vc4_dev *vc4 = to_vc4_dev(plane->dev); - struct drm_plane_state *state = plane->state; struct vc4_fkms_plane *vc4_plane = to_vc4_fkms_plane(plane); - DRM_DEBUG_ATOMIC("[PLANE:%d:%s] plane disable %dx%d@%d +%d,%d\n", - plane->base.id, plane->name, - state->crtc_w, - state->crtc_h, - vc4_plane->mb.plane.vc_image_type, - state->crtc_x, - state->crtc_y); - vc4_plane_set_blank(plane, true); -} + if (!plane_enabled(state)) + return 0; + + return vc4_plane_to_mb(plane, &vc4_plane->mb, state); -static int vc4_plane_atomic_check(struct drm_plane *plane, - struct drm_plane_state *state) -{ - return 0; } static void vc4_plane_destroy(struct drm_plane *plane) @@ -683,6 +791,7 @@ static struct drm_plane *vc4_fkms_plane_init(struct drm_device *dev, * other layers as requested by KMS. */ switch (type) { + default: case DRM_PLANE_TYPE_PRIMARY: default_zpos = 0; break; @@ -741,8 +850,6 @@ static void vc4_crtc_mode_set_nofb(struct drm_crtc *crtc) mode->picture_aspect_ratio, mode->flags); mb.timings.display = vc4_crtc->display_number; - mb.timings.video_id_code = frame.avi.video_code; - mb.timings.clock = mode->clock; mb.timings.hdisplay = mode->hdisplay; mb.timings.hsync_start = mode->hsync_start; @@ -780,11 +887,30 @@ static void vc4_crtc_mode_set_nofb(struct drm_crtc *crtc) break; } - if (!vc4_encoder->hdmi_monitor) + if (!vc4_encoder->hdmi_monitor) { mb.timings.flags |= TIMINGS_FLAGS_DVI; - else if (drm_default_rgb_quant_range(mode) == + mb.timings.video_id_code = frame.avi.video_code; + } else { + struct vc4_fkms_connector_state *conn_state = + to_vc4_fkms_connector_state(vc4_crtc->connector->state); + + /* Do not provide a VIC as the HDMI spec requires that we do not + * signal the opposite of the defined range in the AVI + * infoframe. + */ + mb.timings.video_id_code = 0; + + if (conn_state->broadcast_rgb == VC4_BROADCAST_RGB_AUTO) { + /* See CEA-861-E - 5.1 Default Encoding Parameters */ + if (drm_default_rgb_quant_range(mode) == HDMI_QUANTIZATION_RANGE_LIMITED) - mb.timings.flags |= TIMINGS_FLAGS_RGB_LIMITED; + mb.timings.flags |= TIMINGS_FLAGS_RGB_LIMITED; + } else { + if (conn_state->broadcast_rgb == + VC4_BROADCAST_RGB_LIMITED) + mb.timings.flags |= TIMINGS_FLAGS_RGB_LIMITED; + } + } /* FIXME: To implement @@ -806,6 +932,7 @@ static void vc4_crtc_mode_set_nofb(struct drm_crtc *crtc) static void vc4_crtc_disable(struct drm_crtc *crtc, struct drm_crtc_state *old_state) { + struct drm_device *dev = crtc->dev; struct drm_plane *plane; DRM_DEBUG_KMS("[CRTC:%d] vblanks off.\n", @@ -821,6 +948,38 @@ static void vc4_crtc_disable(struct drm_crtc *crtc, struct drm_crtc_state *old_s drm_atomic_crtc_for_each_plane(plane, crtc) vc4_plane_atomic_disable(plane, plane->state); + + /* + * Make sure we issue a vblank event after disabling the CRTC if + * someone was waiting it. 
+ */ + if (crtc->state->event) { + unsigned long flags; + + spin_lock_irqsave(&dev->event_lock, flags); + drm_crtc_send_vblank_event(crtc, crtc->state->event); + crtc->state->event = NULL; + spin_unlock_irqrestore(&dev->event_lock, flags); + } +} + +static void vc4_crtc_consume_event(struct drm_crtc *crtc) +{ + struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); + struct drm_device *dev = crtc->dev; + unsigned long flags; + + if (!crtc->state->event) + return; + + crtc->state->event->pipe = drm_crtc_index(crtc); + + WARN_ON(drm_crtc_vblank_get(crtc) != 0); + + spin_lock_irqsave(&dev->event_lock, flags); + vc4_crtc->event = crtc->state->event; + crtc->state->event = NULL; + spin_unlock_irqrestore(&dev->event_lock, flags); } static void vc4_crtc_enable(struct drm_crtc *crtc, struct drm_crtc_state *old_state) @@ -830,6 +989,7 @@ static void vc4_crtc_enable(struct drm_crtc *crtc, struct drm_crtc_state *old_st DRM_DEBUG_KMS("[CRTC:%d] vblanks on.\n", crtc->base.id); drm_crtc_vblank_on(crtc); + vc4_crtc_consume_event(crtc); /* Unblank the planes (if they're supposed to be displayed). */ drm_atomic_crtc_for_each_plane(plane, crtc) @@ -878,31 +1038,33 @@ vc4_crtc_mode_valid(struct drm_crtc *crtc, const struct drm_display_mode *mode) static int vc4_crtc_atomic_check(struct drm_crtc *crtc, struct drm_crtc_state *state) { - DRM_DEBUG_KMS("[CRTC:%d] crtc_atomic_check.\n", - crtc->base.id); + struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(state); + struct drm_connector *conn; + struct drm_connector_state *conn_state; + int i; + + DRM_DEBUG_KMS("[CRTC:%d] crtc_atomic_check.\n", crtc->base.id); + + for_each_new_connector_in_state(state->state, conn, conn_state, i) { + if (conn_state->crtc != crtc) + continue; + + vc4_state->margins.left = conn_state->tv.margins.left; + vc4_state->margins.right = conn_state->tv.margins.right; + vc4_state->margins.top = conn_state->tv.margins.top; + vc4_state->margins.bottom = conn_state->tv.margins.bottom; + break; + } return 0; } static void vc4_crtc_atomic_flush(struct drm_crtc *crtc, struct drm_crtc_state *old_state) { - struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); - struct drm_device *dev = crtc->dev; - DRM_DEBUG_KMS("[CRTC:%d] crtc_atomic_flush.\n", crtc->base.id); - if (crtc->state->event) { - unsigned long flags; - - crtc->state->event->pipe = drm_crtc_index(crtc); - - WARN_ON(drm_crtc_vblank_get(crtc) != 0); - - spin_lock_irqsave(&dev->event_lock, flags); - vc4_crtc->event = crtc->state->event; - crtc->state->event = NULL; - spin_unlock_irqrestore(&dev->event_lock, flags); - } + if (crtc->state->active && old_state->active && crtc->state->event) + vc4_crtc_consume_event(crtc); } static void vc4_crtc_handle_page_flip(struct vc4_crtc *vc4_crtc) @@ -950,14 +1112,17 @@ static irqreturn_t vc4_crtc_irq_handler(int irq, void *data) vc4_crtc_handle_page_flip(crtc_list[0]); } - /* Check for the secondary display too */ - chan = readl(crtc_list[0]->regs + SMIDSW1); + if (crtc_list[1]) { + /* Check for the secondary display too */ + chan = readl(crtc_list[0]->regs + SMIDSW1); - if (chan & 1) { - writel(SMI_NEW, crtc_list[0]->regs + SMIDSW1); - if (crtc_list[1]->vblank_enabled) - drm_crtc_handle_vblank(&crtc_list[1]->base); - vc4_crtc_handle_page_flip(crtc_list[1]); + if (chan & 1) { + writel(SMI_NEW, crtc_list[0]->regs + SMIDSW1); + + if (crtc_list[1]->vblank_enabled) + drm_crtc_handle_vblank(&crtc_list[1]->base); + vc4_crtc_handle_page_flip(crtc_list[1]); + } } } @@ -980,6 +1145,33 @@ static int vc4_page_flip(struct drm_crtc *crtc, return 
drm_atomic_helper_page_flip(crtc, fb, event, flags, ctx); } +static struct drm_crtc_state * +vc4_crtc_duplicate_state(struct drm_crtc *crtc) +{ + struct vc4_crtc_state *vc4_state, *old_vc4_state; + + vc4_state = kzalloc(sizeof(*vc4_state), GFP_KERNEL); + if (!vc4_state) + return NULL; + + old_vc4_state = to_vc4_crtc_state(crtc->state); + vc4_state->margins = old_vc4_state->margins; + + __drm_atomic_helper_crtc_duplicate_state(crtc, &vc4_state->base); + return &vc4_state->base; +} + +static void +vc4_crtc_reset(struct drm_crtc *crtc) +{ + if (crtc->state) + __drm_atomic_helper_crtc_destroy_state(crtc->state); + + crtc->state = kzalloc(sizeof(*crtc->state), GFP_KERNEL); + if (crtc->state) + crtc->state->crtc = crtc; +} + static int vc4_fkms_enable_vblank(struct drm_crtc *crtc) { struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); @@ -1007,8 +1199,8 @@ static const struct drm_crtc_funcs vc4_crtc_funcs = { .set_property = NULL, .cursor_set = NULL, /* handled by drm_mode_cursor_universal */ .cursor_move = NULL, /* handled by drm_mode_cursor_universal */ - .reset = drm_atomic_helper_crtc_reset, - .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state, + .reset = vc4_crtc_reset, + .atomic_duplicate_state = vc4_crtc_duplicate_state, .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state, .enable_vblank = vc4_fkms_enable_vblank, .disable_vblank = vc4_fkms_disable_vblank, @@ -1204,13 +1396,102 @@ static void vc4_fkms_connector_destroy(struct drm_connector *connector) drm_connector_cleanup(connector); } +/** + * vc4_connector_duplicate_state - duplicate connector state + * @connector: digital connector + * + * Allocates and returns a copy of the connector state (both common and + * digital connector specific) for the specified connector. + * + * Returns: The newly allocated connector state, or NULL on failure. + */ +struct drm_connector_state * +vc4_connector_duplicate_state(struct drm_connector *connector) +{ + struct vc4_fkms_connector_state *state; + + state = kmemdup(connector->state, sizeof(*state), GFP_KERNEL); + if (!state) + return NULL; + + __drm_atomic_helper_connector_duplicate_state(connector, &state->base); + return &state->base; +} + +/** + * vc4_connector_atomic_get_property - hook for connector->atomic_get_property. + * @connector: Connector to get the property for. + * @state: Connector state to retrieve the property from. + * @property: Property to retrieve. + * @val: Return value for the property. + * + * Returns the atomic property value for a digital connector. + */ +int vc4_connector_atomic_get_property(struct drm_connector *connector, + const struct drm_connector_state *state, + struct drm_property *property, + uint64_t *val) +{ + struct vc4_fkms_connector *fkms_connector = + to_vc4_fkms_connector(connector); + struct vc4_fkms_connector_state *vc4_conn_state = + to_vc4_fkms_connector_state(state); + + if (property == fkms_connector->broadcast_rgb_property) { + *val = vc4_conn_state->broadcast_rgb; + } else { + DRM_DEBUG_ATOMIC("Unknown property [PROP:%d:%s]\n", + property->base.id, property->name); + return -EINVAL; + } + + return 0; +} + +/** + * vc4_connector_atomic_set_property - hook for connector->atomic_set_property. + * @connector: Connector to set the property for. + * @state: Connector state to set the property on. + * @property: Property to set. + * @val: New value for the property. + * + * Sets the atomic property value for a digital connector. 
+ */ +int vc4_connector_atomic_set_property(struct drm_connector *connector, + struct drm_connector_state *state, + struct drm_property *property, + uint64_t val) +{ + struct vc4_fkms_connector *fkms_connector = + to_vc4_fkms_connector(connector); + struct vc4_fkms_connector_state *vc4_conn_state = + to_vc4_fkms_connector_state(state); + + if (property == fkms_connector->broadcast_rgb_property) { + vc4_conn_state->broadcast_rgb = val; + return 0; + } + + DRM_DEBUG_ATOMIC("Unknown property [PROP:%d:%s]\n", + property->base.id, property->name); + return -EINVAL; +} + +static void vc4_hdmi_connector_reset(struct drm_connector *connector) +{ + drm_atomic_helper_connector_reset(connector); + drm_atomic_helper_connector_tv_reset(connector); +} + static const struct drm_connector_funcs vc4_fkms_connector_funcs = { .detect = vc4_fkms_connector_detect, .fill_modes = drm_helper_probe_single_connector_modes, .destroy = vc4_fkms_connector_destroy, - .reset = drm_atomic_helper_connector_reset, - .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, + .reset = vc4_hdmi_connector_reset, + .atomic_duplicate_state = vc4_connector_duplicate_state, .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, + .atomic_get_property = vc4_connector_atomic_get_property, + .atomic_set_property = vc4_connector_atomic_set_property, }; static const struct drm_connector_helper_funcs vc4_fkms_connector_helper_funcs = { @@ -1223,12 +1504,40 @@ static const struct drm_connector_helper_funcs vc4_fkms_lcd_conn_helper_funcs = .best_encoder = vc4_fkms_connector_best_encoder, }; +static const struct drm_prop_enum_list broadcast_rgb_names[] = { + { VC4_BROADCAST_RGB_AUTO, "Automatic" }, + { VC4_BROADCAST_RGB_FULL, "Full" }, + { VC4_BROADCAST_RGB_LIMITED, "Limited 16:235" }, +}; + +static void +vc4_attach_broadcast_rgb_property(struct vc4_fkms_connector *fkms_connector) +{ + struct drm_device *dev = fkms_connector->base.dev; + struct drm_property *prop; + + prop = fkms_connector->broadcast_rgb_property; + if (!prop) { + prop = drm_property_create_enum(dev, DRM_MODE_PROP_ENUM, + "Broadcast RGB", + broadcast_rgb_names, + ARRAY_SIZE(broadcast_rgb_names)); + if (!prop) + return; + + fkms_connector->broadcast_rgb_property = prop; + } + + drm_object_attach_property(&fkms_connector->base.base, prop, 0); +} + static struct drm_connector * vc4_fkms_connector_init(struct drm_device *dev, struct drm_encoder *encoder, u32 display_num) { struct drm_connector *connector = NULL; struct vc4_fkms_connector *fkms_connector; + struct vc4_fkms_connector_state *conn_state = NULL; struct vc4_dev *vc4_dev = to_vc4_dev(dev); int ret = 0; @@ -1237,9 +1546,18 @@ vc4_fkms_connector_init(struct drm_device *dev, struct drm_encoder *encoder, fkms_connector = devm_kzalloc(dev->dev, sizeof(*fkms_connector), GFP_KERNEL); if (!fkms_connector) { - ret = -ENOMEM; - goto fail; + return ERR_PTR(-ENOMEM); } + + /* + * Allocate enough memory to hold vc4_fkms_connector_state, + */ + conn_state = kzalloc(sizeof(*conn_state), GFP_KERNEL); + if (!conn_state) { + kfree(fkms_connector); + return ERR_PTR(-ENOMEM); + } + connector = &fkms_connector->base; fkms_connector->encoder = encoder; @@ -1247,6 +1565,9 @@ vc4_fkms_connector_init(struct drm_device *dev, struct drm_encoder *encoder, fkms_connector->display_type = vc4_get_display_type(display_num); fkms_connector->vc4_dev = vc4_dev; + __drm_atomic_helper_connector_reset(connector, + &conn_state->base); + if (fkms_connector->display_type == DRM_MODE_ENCODER_DSI) { drm_connector_init(dev, connector, 
&vc4_fkms_connector_funcs, DRM_MODE_CONNECTOR_DSI); @@ -1267,11 +1588,24 @@ vc4_fkms_connector_init(struct drm_device *dev, struct drm_encoder *encoder, connector->interlace_allowed = 0; } + /* Create and attach TV margin props to this connector. + * Already done for SDTV outputs. + */ + if (fkms_connector->display_type != DRM_MODE_ENCODER_TVDAC) { + ret = drm_mode_create_tv_margin_properties(dev); + if (ret) + goto fail; + } + + drm_connector_attach_tv_margin_properties(connector); + connector->polled = (DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT); connector->doublescan_allowed = 0; + vc4_attach_broadcast_rgb_property(fkms_connector); + drm_connector_attach_encoder(connector, encoder); return connector; diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c index fd5522fd179e56399c63ce819e0f2a3580bc9ec1..9dc0fc6b8a2a8194bdb068d9115c3468fca101c6 100644 --- a/drivers/gpu/drm/vc4/vc4_hdmi.c +++ b/drivers/gpu/drm/vc4/vc4_hdmi.c @@ -292,11 +292,17 @@ static int vc4_hdmi_connector_get_modes(struct drm_connector *connector) return ret; } +static void vc4_hdmi_connector_reset(struct drm_connector *connector) +{ + drm_atomic_helper_connector_reset(connector); + drm_atomic_helper_connector_tv_reset(connector); +} + static const struct drm_connector_funcs vc4_hdmi_connector_funcs = { .detect = vc4_hdmi_connector_detect, .fill_modes = drm_helper_probe_single_connector_modes, .destroy = vc4_hdmi_connector_destroy, - .reset = drm_atomic_helper_connector_reset, + .reset = vc4_hdmi_connector_reset, .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, }; @@ -310,6 +316,7 @@ static struct drm_connector *vc4_hdmi_connector_init(struct drm_device *dev, { struct drm_connector *connector; struct vc4_hdmi_connector *hdmi_connector; + int ret; hdmi_connector = devm_kzalloc(dev->dev, sizeof(*hdmi_connector), GFP_KERNEL); @@ -323,6 +330,13 @@ static struct drm_connector *vc4_hdmi_connector_init(struct drm_device *dev, DRM_MODE_CONNECTOR_HDMIA); drm_connector_helper_add(connector, &vc4_hdmi_connector_helper_funcs); + /* Create and attach TV margin props to this connector. 
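A userspace usage sketch, assuming libdrm's modetest tool and a hypothetical connector object id of 32: 'modetest -M vc4 -w 32:"top margin":20 -w 32:"bottom margin":20' programs 20 pixel vertical overscan margins through the TV margin properties attached above, and 'modetest -M vc4 -w 32:"Broadcast RGB":2' selects the "Limited 16:235" range, i.e. VC4_BROADCAST_RGB_LIMITED.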
*/ + ret = drm_mode_create_tv_margin_properties(dev); + if (ret) + return ERR_PTR(ret); + + drm_connector_attach_tv_margin_properties(connector); + connector->polled = (DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT); @@ -408,6 +422,9 @@ static void vc4_hdmi_write_infoframe(struct drm_encoder *encoder, static void vc4_hdmi_set_avi_infoframe(struct drm_encoder *encoder) { struct vc4_hdmi_encoder *vc4_encoder = to_vc4_hdmi_encoder(encoder); + struct vc4_dev *vc4 = encoder->dev->dev_private; + struct vc4_hdmi *hdmi = vc4->hdmi; + struct drm_connector_state *cstate = hdmi->connector->state; struct drm_crtc *crtc = encoder->crtc; const struct drm_display_mode *mode = &crtc->state->adjusted_mode; union hdmi_infoframe frame; @@ -426,6 +443,11 @@ static void vc4_hdmi_set_avi_infoframe(struct drm_encoder *encoder) vc4_encoder->rgb_range_selectable, false); + frame.avi.right_bar = cstate->tv.margins.right; + frame.avi.left_bar = cstate->tv.margins.left; + frame.avi.top_bar = cstate->tv.margins.top; + frame.avi.bottom_bar = cstate->tv.margins.bottom; + vc4_hdmi_write_infoframe(encoder, &frame); } @@ -1071,10 +1093,12 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *hdmi) struct device *dev = &hdmi->pdev->dev; const __be32 *addr; int ret; + int len; - if (!of_find_property(dev->of_node, "dmas", NULL)) { + if (!of_find_property(dev->of_node, "dmas", &len) || + len == 0) { dev_warn(dev, - "'dmas' DT property is missing, no HDMI audio\n"); + "'dmas' DT property is missing or empty, no HDMI audio\n"); return 0; } diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c index 39e608271263dbde1454757018e5cfb655e1cd66..ce2749d90dd00e3db26d4a515b6b7a1832d46ea5 100644 --- a/drivers/gpu/drm/vc4/vc4_plane.c +++ b/drivers/gpu/drm/vc4/vc4_plane.c @@ -258,6 +258,52 @@ static u32 vc4_get_scl_field(struct drm_plane_state *state, int plane) } } +static int vc4_plane_margins_adj(struct drm_plane_state *pstate) +{ + struct vc4_plane_state *vc4_pstate = to_vc4_plane_state(pstate); + unsigned int left, right, top, bottom, adjhdisplay, adjvdisplay; + struct drm_crtc_state *crtc_state; + + crtc_state = drm_atomic_get_new_crtc_state(pstate->state, + pstate->crtc); + + vc4_crtc_get_margins(crtc_state, &left, &right, &top, &bottom); + if (!left && !right && !top && !bottom) + return 0; + + if (left + right >= crtc_state->mode.hdisplay || + top + bottom >= crtc_state->mode.vdisplay) + return -EINVAL; + + adjhdisplay = crtc_state->mode.hdisplay - (left + right); + vc4_pstate->crtc_x = DIV_ROUND_CLOSEST(vc4_pstate->crtc_x * + adjhdisplay, + crtc_state->mode.hdisplay); + vc4_pstate->crtc_x += left; + if (vc4_pstate->crtc_x > crtc_state->mode.hdisplay - left) + vc4_pstate->crtc_x = crtc_state->mode.hdisplay - left; + + adjvdisplay = crtc_state->mode.vdisplay - (top + bottom); + vc4_pstate->crtc_y = DIV_ROUND_CLOSEST(vc4_pstate->crtc_y * + adjvdisplay, + crtc_state->mode.vdisplay); + vc4_pstate->crtc_y += top; + if (vc4_pstate->crtc_y > crtc_state->mode.vdisplay - top) + vc4_pstate->crtc_y = crtc_state->mode.vdisplay - top; + + vc4_pstate->crtc_w = DIV_ROUND_CLOSEST(vc4_pstate->crtc_w * + adjhdisplay, + crtc_state->mode.hdisplay); + vc4_pstate->crtc_h = DIV_ROUND_CLOSEST(vc4_pstate->crtc_h * + adjvdisplay, + crtc_state->mode.vdisplay); + + if (!vc4_pstate->crtc_w || !vc4_pstate->crtc_h) + return -EINVAL; + + return 0; +} + static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state) { struct drm_plane *plane = state->plane; @@ -267,30 +313,63 @@ static int 
vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state) u32 subpixel_src_mask = (1 << 16) - 1; u32 format = fb->format->format; int num_planes = fb->format->num_planes; - u32 h_subsample = 1; - u32 v_subsample = 1; - int i; + int min_scale = 1, max_scale = INT_MAX; + struct drm_crtc_state *crtc_state; + u32 h_subsample, v_subsample; + int i, ret; + + crtc_state = drm_atomic_get_existing_crtc_state(state->state, + state->crtc); + if (!crtc_state) { + DRM_DEBUG_KMS("Invalid crtc state\n"); + return -EINVAL; + } + + /* No configuring scaling on the cursor plane, since it gets + * non-vblank-synced updates, and scaling requires LBM changes which + * have to be vblank-synced. + */ + if (plane->type == DRM_PLANE_TYPE_CURSOR) { + min_scale = DRM_PLANE_HELPER_NO_SCALING; + max_scale = DRM_PLANE_HELPER_NO_SCALING; + } else { + min_scale = 1; + max_scale = INT_MAX; + } + + ret = drm_atomic_helper_check_plane_state(state, crtc_state, + min_scale, max_scale, + true, true); + if (ret) + return ret; + + h_subsample = drm_format_horz_chroma_subsampling(format); + v_subsample = drm_format_vert_chroma_subsampling(format); for (i = 0; i < num_planes; i++) vc4_state->offsets[i] = bo->paddr + fb->offsets[i]; /* We don't support subpixel source positioning for scaling. */ - if ((state->src_x & subpixel_src_mask) || - (state->src_y & subpixel_src_mask) || - (state->src_w & subpixel_src_mask) || - (state->src_h & subpixel_src_mask)) { + if ((state->src.x1 & subpixel_src_mask) || + (state->src.x2 & subpixel_src_mask) || + (state->src.y1 & subpixel_src_mask) || + (state->src.y2 & subpixel_src_mask)) { return -EINVAL; } - vc4_state->src_x = state->src_x >> 16; - vc4_state->src_y = state->src_y >> 16; - vc4_state->src_w[0] = state->src_w >> 16; - vc4_state->src_h[0] = state->src_h >> 16; + vc4_state->src_x = state->src.x1 >> 16; + vc4_state->src_y = state->src.y1 >> 16; + vc4_state->src_w[0] = (state->src.x2 - state->src.x1) >> 16; + vc4_state->src_h[0] = (state->src.y2 - state->src.y1) >> 16; - vc4_state->crtc_x = state->crtc_x; - vc4_state->crtc_y = state->crtc_y; - vc4_state->crtc_w = state->crtc_w; - vc4_state->crtc_h = state->crtc_h; + vc4_state->crtc_x = state->dst.x1; + vc4_state->crtc_y = state->dst.y1; + vc4_state->crtc_w = state->dst.x2 - state->dst.x1; + vc4_state->crtc_h = state->dst.y2 - state->dst.y1; + + ret = vc4_plane_margins_adj(state); + if (ret) + return ret; vc4_state->x_scaling[0] = vc4_get_scaling_mode(vc4_state->src_w[0], vc4_state->crtc_w); @@ -303,8 +382,6 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state) if (num_planes > 1) { vc4_state->is_yuv = true; - h_subsample = drm_format_horz_chroma_subsampling(format); - v_subsample = drm_format_vert_chroma_subsampling(format); vc4_state->src_w[1] = vc4_state->src_w[0] / h_subsample; vc4_state->src_h[1] = vc4_state->src_h[0] / v_subsample; @@ -329,41 +406,6 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state) vc4_state->y_scaling[1] = VC4_SCALING_NONE; } - /* No configuring scaling on the cursor plane, since it gets - non-vblank-synced updates, and scaling requires requires - LBM changes which have to be vblank-synced. - */ - if (plane->type == DRM_PLANE_TYPE_CURSOR && !vc4_state->is_unity) - return -EINVAL; - - /* Clamp the on-screen start x/y to 0. The hardware doesn't - * support negative y, and negative x wastes bandwidth. - */ - if (vc4_state->crtc_x < 0) { - for (i = 0; i < num_planes; i++) { - u32 cpp = fb->format->cpp[i]; - u32 subs = ((i == 0) ? 
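The source rectangle in a drm_plane_state is kept in 16.16 fixed point, which is why the code above masks with (1 << 16) - 1 and shifts right by 16. A small self-contained illustration, with arbitrary values:

    #include <stdio.h>

    int main(void)
    {
            const unsigned int subpixel_mask = (1 << 16) - 1;
            /* src coordinates as the atomic core stores them: pixels in 16.16 */
            unsigned int x1 = 64 << 16, x2 = (64 + 1280) << 16;

            if ((x1 & subpixel_mask) || (x2 & subpixel_mask)) {
                    fprintf(stderr, "subpixel source positions are rejected\n");
                    return 1;
            }

            /* integer pixel position and width, as src_x / src_w[0] above */
            printf("src_x=%u src_w=%u\n", x1 >> 16, (x2 - x1) >> 16);
            return 0;
    }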
1 : h_subsample); - - vc4_state->offsets[i] += (cpp * - (-vc4_state->crtc_x) / subs); - } - vc4_state->src_w[0] += vc4_state->crtc_x; - vc4_state->src_w[1] += vc4_state->crtc_x / h_subsample; - vc4_state->crtc_x = 0; - } - - if (vc4_state->crtc_y < 0) { - for (i = 0; i < num_planes; i++) { - u32 subs = ((i == 0) ? 1 : v_subsample); - - vc4_state->offsets[i] += (fb->pitches[i] * - (-vc4_state->crtc_y) / subs); - } - vc4_state->src_h[0] += vc4_state->crtc_y; - vc4_state->src_h[1] += vc4_state->crtc_y / v_subsample; - vc4_state->crtc_y = 0; - } - return 0; } @@ -471,6 +513,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane, const struct hvs_format *format = vc4_get_hvs_format(fb->format->format); u64 base_format_mod = fourcc_mod_broadcom_mod(fb->modifier); int num_planes = drm_format_num_planes(format->drm); + u32 h_subsample, v_subsample; bool mix_plane_alpha; bool covers_screen; u32 scl0, scl1, pitch0; @@ -516,26 +559,77 @@ static int vc4_plane_mode_set(struct drm_plane *plane, scl1 = vc4_get_scl_field(state, 0); } + h_subsample = drm_format_horz_chroma_subsampling(format->drm); + v_subsample = drm_format_vert_chroma_subsampling(format->drm); + switch (base_format_mod) { case DRM_FORMAT_MOD_LINEAR: tiling = SCALER_CTL0_TILING_LINEAR; pitch0 = VC4_SET_FIELD(fb->pitches[0], SCALER_SRC_PITCH); + + /* Adjust the base pointer to the first pixel to be scanned + * out. + */ + for (i = 0; i < num_planes; i++) { + vc4_state->offsets[i] += vc4_state->src_y / + (i ? v_subsample : 1) * + fb->pitches[i]; + vc4_state->offsets[i] += vc4_state->src_x / + (i ? h_subsample : 1) * + fb->format->cpp[i]; + } + break; case DRM_FORMAT_MOD_BROADCOM_VC4_T_TILED: { - /* For T-tiled, the FB pitch is "how many bytes from - * one row to the next, such that pitch * tile_h == - * tile_size * tiles_per_row." - */ u32 tile_size_shift = 12; /* T tiles are 4kb */ + /* Whole-tile offsets, mostly for setting the pitch. */ + u32 tile_w_shift = fb->format->cpp[0] == 2 ? 6 : 5; u32 tile_h_shift = 5; /* 16 and 32bpp are 32 pixels high */ + u32 tile_w_mask = (1 << tile_w_shift) - 1; + /* The height mask on 32-bit-per-pixel tiles is 63, i.e. twice + * the height (in pixels) of a 4k tile. + */ + u32 tile_h_mask = (2 << tile_h_shift) - 1; + /* For T-tiled, the FB pitch is "how many bytes from one row to + * the next, such that + * + * pitch * tile_h == tile_size * tiles_per_row + */ u32 tiles_w = fb->pitches[0] >> (tile_size_shift - tile_h_shift); + u32 tiles_l = vc4_state->src_x >> tile_w_shift; + u32 tiles_r = tiles_w - tiles_l; + u32 tiles_t = vc4_state->src_y >> tile_h_shift; + /* Intra-tile offsets, which modify the base address (the + * SCALER_PITCH0_TILE_Y_OFFSET tells HVS how to walk from that + * base address). + */ + u32 tile_y = (vc4_state->src_y >> 4) & 1; + u32 subtile_y = (vc4_state->src_y >> 2) & 3; + u32 utile_y = vc4_state->src_y & 3; + u32 x_off = vc4_state->src_x & tile_w_mask; + u32 y_off = vc4_state->src_y & tile_h_mask; tiling = SCALER_CTL0_TILING_256B_OR_T; + pitch0 = (VC4_SET_FIELD(x_off, SCALER_PITCH0_SINK_PIX) | + VC4_SET_FIELD(y_off, SCALER_PITCH0_TILE_Y_OFFSET) | + VC4_SET_FIELD(tiles_l, SCALER_PITCH0_TILE_WIDTH_L) | + VC4_SET_FIELD(tiles_r, SCALER_PITCH0_TILE_WIDTH_R)); + vc4_state->offsets[0] += tiles_t * (tiles_w << tile_size_shift); + vc4_state->offsets[0] += subtile_y << 8; + vc4_state->offsets[0] += utile_y << 4; + + /* Rows of tiles alternate left-to-right and right-to-left. 
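The DRM_FORMAT_MOD_LINEAR branch above advances each plane's base offset to the first scanned-out pixel, dividing by the chroma subsampling factors for the non-luma planes. A user-space sketch with NV12-like parameters (the numbers are made up):

    #include <stdio.h>

    int main(void)
    {
            unsigned int src_x = 100, src_y = 50;          /* made-up crop */
            unsigned int pitches[2] = { 1920, 1920 };      /* Y, CbCr rows */
            unsigned int cpp[2] = { 1, 2 };                /* bytes per sample */
            unsigned int h_sub = 2, v_sub = 2;             /* 2x2 subsampling */
            unsigned long offsets[2] = { 0, 1920UL * 1080 };
            int i;

            for (i = 0; i < 2; i++) {
                    /* rows above the crop, then samples left of it */
                    offsets[i] += src_y / (i ? v_sub : 1) * pitches[i];
                    offsets[i] += src_x / (i ? h_sub : 1) * cpp[i];
                    printf("plane %d scanout offset: %lu\n", i, offsets[i]);
            }
            return 0;
    }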
*/ + if (tiles_t & 1) { + pitch0 |= SCALER_PITCH0_TILE_INITIAL_LINE_DIR; + vc4_state->offsets[0] += (tiles_w - tiles_l) << + tile_size_shift; + vc4_state->offsets[0] -= (1 + !tile_y) << 10; + } else { + vc4_state->offsets[0] += tiles_l << tile_size_shift; + vc4_state->offsets[0] += tile_y << 10; + } - pitch0 = (VC4_SET_FIELD(0, SCALER_PITCH0_TILE_Y_OFFSET) | - VC4_SET_FIELD(0, SCALER_PITCH0_TILE_WIDTH_L) | - VC4_SET_FIELD(tiles_w, SCALER_PITCH0_TILE_WIDTH_R)); break; } @@ -811,7 +905,7 @@ void vc4_plane_async_set_fb(struct drm_plane *plane, struct drm_framebuffer *fb) static void vc4_plane_atomic_async_update(struct drm_plane *plane, struct drm_plane_state *state) { - struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state); + struct vc4_plane_state *vc4_state, *new_vc4_state; if (plane->state->fb != state->fb) { vc4_plane_async_set_fb(plane, state->fb); @@ -833,7 +927,18 @@ static void vc4_plane_atomic_async_update(struct drm_plane *plane, plane->state->src_y = state->src_y; /* Update the display list based on the new crtc_x/y. */ - vc4_plane_atomic_check(plane, plane->state); + vc4_plane_atomic_check(plane, state); + + new_vc4_state = to_vc4_plane_state(state); + vc4_state = to_vc4_plane_state(plane->state); + + /* Update the current vc4_state pos0, pos2 and ptr0 dlist entries. */ + vc4_state->dlist[vc4_state->pos0_offset] = + new_vc4_state->dlist[vc4_state->pos0_offset]; + vc4_state->dlist[vc4_state->pos2_offset] = + new_vc4_state->dlist[vc4_state->pos2_offset]; + vc4_state->dlist[vc4_state->ptr0_offset] = + new_vc4_state->dlist[vc4_state->ptr0_offset]; /* Note that we can't just call vc4_plane_write_dlist() * because that would smash the context data that the HVS is diff --git a/drivers/gpu/drm/vc4/vc4_regs.h b/drivers/gpu/drm/vc4/vc4_regs.h index d6864fa4bd141d032a4cfb730841e16e8321d192..931088014272438e2a15293543c0eb2dcbbef08f 100644 --- a/drivers/gpu/drm/vc4/vc4_regs.h +++ b/drivers/gpu/drm/vc4/vc4_regs.h @@ -1037,14 +1037,18 @@ enum hvs_pixel_format { #define SCALER_TILE_HEIGHT_MASK VC4_MASK(15, 0) #define SCALER_TILE_HEIGHT_SHIFT 0 +/* Common PITCH0 fields */ +#define SCALER_PITCH0_SINK_PIX_MASK VC4_MASK(31, 26) +#define SCALER_PITCH0_SINK_PIX_SHIFT 26 + /* PITCH0 fields for T-tiled. */ #define SCALER_PITCH0_TILE_WIDTH_L_MASK VC4_MASK(22, 16) #define SCALER_PITCH0_TILE_WIDTH_L_SHIFT 16 #define SCALER_PITCH0_TILE_LINE_DIR BIT(15) #define SCALER_PITCH0_TILE_INITIAL_LINE_DIR BIT(14) /* Y offset within a tile. */ -#define SCALER_PITCH0_TILE_Y_OFFSET_MASK VC4_MASK(13, 7) -#define SCALER_PITCH0_TILE_Y_OFFSET_SHIFT 7 +#define SCALER_PITCH0_TILE_Y_OFFSET_MASK VC4_MASK(13, 8) +#define SCALER_PITCH0_TILE_Y_OFFSET_SHIFT 8 #define SCALER_PITCH0_TILE_WIDTH_R_MASK VC4_MASK(6, 0) #define SCALER_PITCH0_TILE_WIDTH_R_SHIFT 0 diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c index 7bdf6f0e58a5343a880470e61e69ee2292b7bd42..8d2f5ded86d66e19241528c51e7474e86283bbc2 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -528,6 +528,9 @@ static int virtio_gpu_get_caps_ioctl(struct drm_device *dev, if (!ret) return -EBUSY; + /* is_valid check must proceed before copy of the cache entry. 
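The T-tiled branch packs several coordinate decompositions into one hunk, so a standalone transcription of its arithmetic may help when reading it. The tile geometry below matches the 32 bpp constants in the hunk; the pitch and crop values are invented:

    #include <stdio.h>

    int main(void)
    {
            const unsigned int tile_size_shift = 12;  /* 4 KiB tiles */
            const unsigned int tile_w_shift = 5;      /* 32 pixels wide at 32 bpp */
            const unsigned int tile_h_shift = 5;      /* 32 pixels high */
            const unsigned int tile_h_mask = (2 << tile_h_shift) - 1;

            unsigned int pitch0 = 1920 * 4;           /* made-up fb pitch */
            unsigned int src_x = 70, src_y = 45;      /* made-up crop origin */
            unsigned long offset = 0;                 /* plane base address */

            unsigned int tiles_w = pitch0 >> (tile_size_shift - tile_h_shift);
            unsigned int tiles_l = src_x >> tile_w_shift;
            unsigned int tiles_t = src_y >> tile_h_shift;
            unsigned int tile_y = (src_y >> 4) & 1;   /* 1 KiB half-tile */
            unsigned int subtile_y = (src_y >> 2) & 3;/* 256 B sub-tile */
            unsigned int utile_y = src_y & 3;         /* 16 B micro-tile row */

            /* whole rows of tiles above the crop, plus intra-tile rows */
            offset += (unsigned long)tiles_t * (tiles_w << tile_size_shift);
            offset += subtile_y << 8;
            offset += utile_y << 4;

            /* odd tile rows are stored right-to-left, as in the hunk */
            if (tiles_t & 1) {
                    offset += (unsigned long)(tiles_w - tiles_l) << tile_size_shift;
                    offset -= (1 + !tile_y) << 10;
            } else {
                    offset += (unsigned long)tiles_l << tile_size_shift;
                    offset += tile_y << 10;
            }

            /* the x/y remainders go into the SCALER_PITCH0 fields */
            printf("byte offset %lu, x_off %u, y_off %u\n", offset,
                   src_x & ((1u << tile_w_shift) - 1), src_y & tile_h_mask);
            return 0;
    }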
*/ + smp_rmb(); + ptr = cache_ent->caps_cache; copy_exit: diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index 020070d483d350a58695c7fa9f21f7c2be4db463..c8a581b1f4c40355a4bcb0eacc58d8043712bbbb 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -588,6 +588,8 @@ static void virtio_gpu_cmd_capset_cb(struct virtio_gpu_device *vgdev, cache_ent->id == le32_to_cpu(cmd->capset_id)) { memcpy(cache_ent->caps_cache, resp->capset_data, cache_ent->size); + /* Copy must occur before is_valid is signalled. */ + smp_wmb(); atomic_set(&cache_ent->is_valid, 1); break; } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c index e4e09d47c5c0e001934a14a2df103e13a7a328aa..59e9d05ab928b49f2c6ca1f3a0cd5fbecdd56a1b 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c @@ -389,8 +389,10 @@ static int vmw_recv_msg(struct rpc_channel *channel, void **msg, break; } - if (retries == RETRIES) + if (retries == RETRIES) { + kfree(reply); return -EINVAL; + } *msg_len = reply_len; *msg = reply; diff --git a/drivers/gpu/host1x/bus.c b/drivers/gpu/host1x/bus.c index 815bdb42e3f0368f78c8397f4b2fbda3e94f8ccf..0121fe7a4548dbbfbc6a24ba21667db8f0ff93e7 100644 --- a/drivers/gpu/host1x/bus.c +++ b/drivers/gpu/host1x/bus.c @@ -423,6 +423,9 @@ static int host1x_device_add(struct host1x *host1x, of_dma_configure(&device->dev, host1x->dev->of_node, true); + device->dev.dma_parms = &device->dma_parms; + dma_set_max_seg_size(&device->dev, SZ_4M); + err = host1x_device_parse_dt(device, driver); if (err < 0) { kfree(device); diff --git a/drivers/gpu/ipu-v3/ipu-ic.c b/drivers/gpu/ipu-v3/ipu-ic.c index 67cc820253a99b84341825437e9e0bfce3cd16d6..fb79e118f26c8759cc37ec1b222b89f609a53fea 100644 --- a/drivers/gpu/ipu-v3/ipu-ic.c +++ b/drivers/gpu/ipu-v3/ipu-ic.c @@ -257,7 +257,7 @@ static int init_csc(struct ipu_ic *ic, writel(param, base++); param = ((a[0] & 0x1fe0) >> 5) | (params->scale << 8) | - (params->sat << 9); + (params->sat << 10); writel(param, base++); param = ((a[1] & 0x1f) << 27) | ((c[0][1] & 0x1ff) << 18) | diff --git a/drivers/hid/hid-a4tech.c b/drivers/hid/hid-a4tech.c index 9428ea7cdf8a00dc686e00c6da77be5cb60215b7..c52bd163abb3e1a93714f0ddc64a418858f52881 100644 --- a/drivers/hid/hid-a4tech.c +++ b/drivers/hid/hid-a4tech.c @@ -26,12 +26,36 @@ #define A4_2WHEEL_MOUSE_HACK_7 0x01 #define A4_2WHEEL_MOUSE_HACK_B8 0x02 +#define A4_WHEEL_ORIENTATION (HID_UP_GENDESK | 0x000000b8) + struct a4tech_sc { unsigned long quirks; unsigned int hw_wheel; __s32 delayed_value; }; +static int a4_input_mapping(struct hid_device *hdev, struct hid_input *hi, + struct hid_field *field, struct hid_usage *usage, + unsigned long **bit, int *max) +{ + struct a4tech_sc *a4 = hid_get_drvdata(hdev); + + if (a4->quirks & A4_2WHEEL_MOUSE_HACK_B8 && + usage->hid == A4_WHEEL_ORIENTATION) { + /* + * We do not want to have this usage mapped to anything as it's + * nonstandard and doesn't really behave like an HID report. + * It's only selecting the orientation (vertical/horizontal) of + * the previous mouse wheel report. The input_events will be + * generated once both reports are recorded in a4_event(). 
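The two virtio-gpu hunks above form a publish/consume pair: the response handler must make the copied capset data visible before it sets is_valid, and the ioctl path must not read the data until after it has observed the flag. Condensed (schematic only, not a compilable unit):

    /* Writer, virtio_gpu_cmd_capset_cb(): fill the entry, then publish. */
    memcpy(cache_ent->caps_cache, resp->capset_data, cache_ent->size);
    smp_wmb();                              /* data before flag */
    atomic_set(&cache_ent->is_valid, 1);

    /* Reader, virtio_gpu_get_caps_ioctl(), after seeing is_valid != 0: */
    smp_rmb();                              /* pairs with the smp_wmb() */
    ptr = cache_ent->caps_cache;            /* now safe to copy out */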
+ */ + return -1; + } + + return 0; + +} + static int a4_input_mapped(struct hid_device *hdev, struct hid_input *hi, struct hid_field *field, struct hid_usage *usage, unsigned long **bit, int *max) @@ -53,8 +77,7 @@ static int a4_event(struct hid_device *hdev, struct hid_field *field, struct a4tech_sc *a4 = hid_get_drvdata(hdev); struct input_dev *input; - if (!(hdev->claimed & HID_CLAIMED_INPUT) || !field->hidinput || - !usage->type) + if (!(hdev->claimed & HID_CLAIMED_INPUT) || !field->hidinput) return 0; input = field->hidinput->input; @@ -65,7 +88,7 @@ static int a4_event(struct hid_device *hdev, struct hid_field *field, return 1; } - if (usage->hid == 0x000100b8) { + if (usage->hid == A4_WHEEL_ORIENTATION) { input_event(input, EV_REL, value ? REL_HWHEEL : REL_WHEEL, a4->delayed_value); return 1; @@ -129,6 +152,7 @@ MODULE_DEVICE_TABLE(hid, a4_devices); static struct hid_driver a4_driver = { .name = "a4tech", .id_table = a4_devices, + .input_mapping = a4_input_mapping, .input_mapped = a4_input_mapped, .event = a4_event, .probe = a4_probe, diff --git a/drivers/hid/hid-holtek-kbd.c b/drivers/hid/hid-holtek-kbd.c index 6e1a4a4fc0c109f7bf60083963d8c239a10939cf..ab9da597106fa5f49d4a12c07cbfff76ad33088d 100644 --- a/drivers/hid/hid-holtek-kbd.c +++ b/drivers/hid/hid-holtek-kbd.c @@ -126,9 +126,14 @@ static int holtek_kbd_input_event(struct input_dev *dev, unsigned int type, /* Locate the boot interface, to receive the LED change events */ struct usb_interface *boot_interface = usb_ifnum_to_if(usb_dev, 0); + struct hid_device *boot_hid; + struct hid_input *boot_hid_input; - struct hid_device *boot_hid = usb_get_intfdata(boot_interface); - struct hid_input *boot_hid_input = list_first_entry(&boot_hid->inputs, + if (unlikely(boot_interface == NULL)) + return -ENODEV; + + boot_hid = usb_get_intfdata(boot_interface); + boot_hid_input = list_first_entry(&boot_hid->inputs, struct hid_input, list); return boot_hid_input->input->event(boot_hid_input->input, type, code, diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 3007555435b25fee2f9e9592a000d52563771e5e..4d9ef60e3bce94368cdb7f8a0ae6f3835beeaa27 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -82,6 +82,7 @@ #define HID_DEVICE_ID_ALPS_U1_DUAL_3BTN_PTP 0x1220 #define HID_DEVICE_ID_ALPS_U1 0x1215 #define HID_DEVICE_ID_ALPS_T4_BTNLESS 0x120C +#define HID_DEVICE_ID_ALPS_1222 0x1222 #define USB_VENDOR_ID_AMI 0x046b @@ -221,6 +222,9 @@ #define USB_VENDOR_ID_BAANTO 0x2453 #define USB_DEVICE_ID_BAANTO_MT_190W2 0x0100 +#define USB_VENDOR_ID_BEKEN 0x25a7 +#define USB_DEVICE_ID_AIRMOUSE_T3 0x2402 + #define USB_VENDOR_ID_BELKIN 0x050d #define USB_DEVICE_ID_FLIP_KVM 0x3201 @@ -268,6 +272,7 @@ #define USB_DEVICE_ID_CHICONY_MULTI_TOUCH 0xb19d #define USB_DEVICE_ID_CHICONY_WIRELESS 0x0618 #define USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE 0x1053 +#define USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE2 0x0939 #define USB_DEVICE_ID_CHICONY_WIRELESS2 0x1123 #define USB_DEVICE_ID_ASUS_AK1D 0x1125 #define USB_DEVICE_ID_CHICONY_ACER_SWITCH12 0x1421 @@ -560,6 +565,7 @@ #define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A 0x0b4a #define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE 0x134a #define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_094A 0x094a +#define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_0641 0x0641 #define USB_VENDOR_ID_HUION 0x256c #define USB_DEVICE_ID_HUION_TABLET 0x006e @@ -971,6 +977,7 @@ #define USB_DEVICE_ID_SAITEK_RAT7 0x0cd7 #define USB_DEVICE_ID_SAITEK_RAT9 0x0cfa #define 
USB_DEVICE_ID_SAITEK_MMO7 0x0cd0 +#define USB_DEVICE_ID_SAITEK_X52 0x075c #define USB_VENDOR_ID_SAMSUNG 0x0419 #define USB_DEVICE_ID_SAMSUNG_IR_REMOTE 0x0001 @@ -1184,6 +1191,9 @@ #define USB_VENDOR_ID_XAT 0x2505 #define USB_DEVICE_ID_XAT_CSR 0x0220 +#define USB_VENDOR_ID_XENTA 0x1d57 +#define USB_DEVICE_ID_AIRMOUSE_MX3 0xad03 + #define USB_VENDOR_ID_XIN_MO 0x16c0 #define USB_DEVICE_ID_XIN_MO_DUAL_ARCADE 0x05e1 #define USB_DEVICE_ID_THT_2P_ARCADE 0x75e1 diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c index 184e49036e1dce14c332aa5a071bc3d43e22b63a..f9167d0e095ceff56b6aec3422692ae328be57a4 100644 --- a/drivers/hid/hid-multitouch.c +++ b/drivers/hid/hid-multitouch.c @@ -1788,6 +1788,10 @@ static const struct hid_device_id mt_devices[] = { HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, USB_VENDOR_ID_ALPS_JP, HID_DEVICE_ID_ALPS_U1_DUAL_3BTN_PTP) }, + { .driver_data = MT_CLS_WIN_8_DUAL, + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, + USB_VENDOR_ID_ALPS_JP, + HID_DEVICE_ID_ALPS_1222) }, /* Lenovo X1 TAB Gen 2 */ { .driver_data = MT_CLS_WIN_8_DUAL, diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c index 5892f1bd037ec4f4aba27f609aac246e33e1cc7f..206dd921e58fad45d669e677c1009c8b417345ba 100644 --- a/drivers/hid/hid-quirks.c +++ b/drivers/hid/hid-quirks.c @@ -43,8 +43,10 @@ static const struct hid_device_id hid_quirks[] = { { HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS682), HID_QUIRK_NOGET }, { HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS692), HID_QUIRK_NOGET }, { HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_UC100KM), HID_QUIRK_NOGET }, + { HID_USB_DEVICE(USB_VENDOR_ID_BEKEN, USB_DEVICE_ID_AIRMOUSE_T3), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_MULTI_TOUCH), HID_QUIRK_MULTI_INPUT }, { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL }, + { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE2), HID_QUIRK_ALWAYS_POLL }, { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_WIRELESS), HID_QUIRK_MULTI_INPUT }, { HID_USB_DEVICE(USB_VENDOR_ID_CHIC, USB_DEVICE_ID_CHIC_GAMEPAD), HID_QUIRK_BADPAD }, { HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_3AXIS_5BUTTON_STICK), HID_QUIRK_NOGET }, @@ -93,6 +95,7 @@ static const struct hid_device_id hid_quirks[] = { { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A), HID_QUIRK_ALWAYS_POLL }, { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL }, { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_094A), HID_QUIRK_ALWAYS_POLL }, + { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE_0641), HID_QUIRK_ALWAYS_POLL }, { HID_USB_DEVICE(USB_VENDOR_ID_IDEACOM, USB_DEVICE_ID_IDEACOM_IDC6680), HID_QUIRK_MULTI_INPUT }, { HID_USB_DEVICE(USB_VENDOR_ID_INNOMEDIA, USB_DEVICE_ID_INNEX_GENESIS_ATARI), HID_QUIRK_MULTI_INPUT }, { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M610X), HID_QUIRK_MULTI_INPUT }, @@ -141,6 +144,7 @@ static const struct hid_device_id hid_quirks[] = { { HID_USB_DEVICE(USB_VENDOR_ID_RETROUSB, USB_DEVICE_ID_RETROUSB_SNES_RETROPAD), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, { HID_USB_DEVICE(USB_VENDOR_ID_RETROUSB, USB_DEVICE_ID_RETROUSB_SNES_RETROPORT), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, 
USB_DEVICE_ID_SAITEK_RUMBLEPAD), HID_QUIRK_BADPAD }, + { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD2), HID_QUIRK_NO_INIT_REPORTS }, { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD), HID_QUIRK_NO_INIT_REPORTS }, { HID_USB_DEVICE(USB_VENDOR_ID_SENNHEISER, USB_DEVICE_ID_SENNHEISER_BTD500USB), HID_QUIRK_NOGET }, @@ -170,6 +174,7 @@ static const struct hid_device_id hid_quirks[] = { { HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_SIRIUS_BATTERY_FREE_TABLET), HID_QUIRK_MULTI_INPUT }, { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP_LTD2, USB_DEVICE_ID_SMARTJOY_DUAL_PLUS), HID_QUIRK_NOGET | HID_QUIRK_MULTI_INPUT }, { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_QUAD_USB_JOYPAD), HID_QUIRK_NOGET | HID_QUIRK_MULTI_INPUT }, + { HID_USB_DEVICE(USB_VENDOR_ID_XENTA, USB_DEVICE_ID_AIRMOUSE_MX3), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, { 0 } }; diff --git a/drivers/hid/hid-sony.c b/drivers/hid/hid-sony.c index 9671a4bad64392b2348d977c424027b559994c6d..31f1023214d3681bc566df2288f932032903ae4a 100644 --- a/drivers/hid/hid-sony.c +++ b/drivers/hid/hid-sony.c @@ -587,10 +587,14 @@ static void sony_set_leds(struct sony_sc *sc); static inline void sony_schedule_work(struct sony_sc *sc, enum sony_worker which) { + unsigned long flags; + switch (which) { case SONY_WORKER_STATE: - if (!sc->defer_initialization) + spin_lock_irqsave(&sc->lock, flags); + if (!sc->defer_initialization && sc->state_worker_initialized) schedule_work(&sc->state_worker); + spin_unlock_irqrestore(&sc->lock, flags); break; case SONY_WORKER_HOTPLUG: if (sc->hotplug_worker_initialized) @@ -2553,13 +2557,18 @@ static inline void sony_init_output_report(struct sony_sc *sc, static inline void sony_cancel_work_sync(struct sony_sc *sc) { + unsigned long flags; + if (sc->hotplug_worker_initialized) cancel_work_sync(&sc->hotplug_worker); - if (sc->state_worker_initialized) + if (sc->state_worker_initialized) { + spin_lock_irqsave(&sc->lock, flags); + sc->state_worker_initialized = 0; + spin_unlock_irqrestore(&sc->lock, flags); cancel_work_sync(&sc->state_worker); + } } - static int sony_input_configured(struct hid_device *hdev, struct hid_input *hidinput) { diff --git a/drivers/hid/hid-tmff.c b/drivers/hid/hid-tmff.c index bea8def64f437ed13ed5576a479634cee0a27ca4..30b8c3256c9917e88f6dcff081ee7a9306bc00f9 100644 --- a/drivers/hid/hid-tmff.c +++ b/drivers/hid/hid-tmff.c @@ -34,6 +34,8 @@ #include "hid-ids.h" +#define THRUSTMASTER_DEVICE_ID_2_IN_1_DT 0xb320 + static const signed short ff_rumble[] = { FF_RUMBLE, -1 @@ -88,6 +90,7 @@ static int tmff_play(struct input_dev *dev, void *data, struct hid_field *ff_field = tmff->ff_field; int x, y; int left, right; /* Rumbling */ + int motor_swap; switch (effect->type) { case FF_CONSTANT: @@ -112,6 +115,13 @@ static int tmff_play(struct input_dev *dev, void *data, ff_field->logical_minimum, ff_field->logical_maximum); + /* 2-in-1 strong motor is left */ + if (hid->product == THRUSTMASTER_DEVICE_ID_2_IN_1_DT) { + motor_swap = left; + left = right; + right = motor_swap; + } + dbg_hid("(left,right)=(%08x, %08x)\n", left, right); ff_field->value[0] = left; ff_field->value[1] = right; @@ -238,6 +248,8 @@ static const struct hid_device_id tm_devices[] = { .driver_data = (unsigned long)ff_rumble }, { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb304), /* FireStorm Dual Power 2 (and 3) */ .driver_data = (unsigned long)ff_rumble }, + { 
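The hid-sony changes above close a race between queuing the state worker and tearing it down. Condensed, the ordering they establish looks like this (schematic, field names as in the driver):

    /* Teardown: forbid further queuing under the lock, then flush. */
    spin_lock_irqsave(&sc->lock, flags);
    sc->state_worker_initialized = 0;
    spin_unlock_irqrestore(&sc->lock, flags);
    cancel_work_sync(&sc->state_worker);

    /* Scheduling: only queue while the flag is still set, under the lock. */
    spin_lock_irqsave(&sc->lock, flags);
    if (!sc->defer_initialization && sc->state_worker_initialized)
            schedule_work(&sc->state_worker);
    spin_unlock_irqrestore(&sc->lock, flags);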
HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, THRUSTMASTER_DEVICE_ID_2_IN_1_DT), /* Dual Trigger 2-in-1 */ + .driver_data = (unsigned long)ff_rumble }, { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb323), /* Dual Trigger 3-in-1 (PC Mode) */ .driver_data = (unsigned long)ff_rumble }, { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb324), /* Dual Trigger 3-in-1 (PS3 Mode) */ diff --git a/drivers/hid/usbhid/hiddev.c b/drivers/hid/usbhid/hiddev.c index a746017fac170ca15895435fd4df4fbe3a04d51f..5a949ca42b1d06c4279e08cbe0674f7fa85ef676 100644 --- a/drivers/hid/usbhid/hiddev.c +++ b/drivers/hid/usbhid/hiddev.c @@ -297,6 +297,14 @@ static int hiddev_open(struct inode *inode, struct file *file) spin_unlock_irq(&list->hiddev->list_lock); mutex_lock(&hiddev->existancelock); + /* + * recheck exist with existance lock held to + * avoid opening a disconnected device + */ + if (!list->hiddev->exist) { + res = -ENODEV; + goto bail_unlock; + } if (!list->hiddev->open++) if (list->hiddev->exist) { struct hid_device *hid = hiddev->hid; @@ -313,6 +321,10 @@ bail_normal_power: hid_hw_power(hid, PM_HINT_NORMAL); bail_unlock: mutex_unlock(&hiddev->existancelock); + + spin_lock_irq(&list->hiddev->list_lock); + list_del(&list->node); + spin_unlock_irq(&list->hiddev->list_lock); bail: file->private_data = NULL; vfree(list); diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c index 0bdd85d486feeb2fec33f7651f201df146c8cd71..9cd4705b74bda0d57609efb2ba8fe1ac15fbdf2c 100644 --- a/drivers/hid/wacom_sys.c +++ b/drivers/hid/wacom_sys.c @@ -275,6 +275,9 @@ static void wacom_feature_mapping(struct hid_device *hdev, wacom_hid_usage_quirk(hdev, field, usage); switch (equivalent_usage) { + case WACOM_HID_WD_TOUCH_RING_SETTING: + wacom->generic_has_leds = true; + break; case HID_DG_CONTACTMAX: /* leave touch_max as is if predefined */ if (!features->touch_max) { diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c index d7c3f4ac2c045d230f81eaa9880cdba7606f0243..50ef7b6cd195761153c07cd2fceda7025dc0dfd0 100644 --- a/drivers/hid/wacom_wac.c +++ b/drivers/hid/wacom_wac.c @@ -537,14 +537,14 @@ static int wacom_intuos_pad(struct wacom_wac *wacom) */ buttons = (data[4] << 1) | (data[3] & 0x01); } else if (features->type == CINTIQ_COMPANION_2) { - /* d-pad right -> data[4] & 0x10 - * d-pad up -> data[4] & 0x20 - * d-pad left -> data[4] & 0x40 - * d-pad down -> data[4] & 0x80 - * d-pad center -> data[3] & 0x01 + /* d-pad right -> data[2] & 0x10 + * d-pad up -> data[2] & 0x20 + * d-pad left -> data[2] & 0x40 + * d-pad down -> data[2] & 0x80 + * d-pad center -> data[1] & 0x01 */ buttons = ((data[2] >> 4) << 7) | - ((data[1] & 0x04) << 6) | + ((data[1] & 0x04) << 4) | ((data[2] & 0x0F) << 2) | (data[1] & 0x03); } else if (features->type >= INTUOS5S && features->type <= INTUOSPL) { @@ -848,6 +848,8 @@ static int wacom_intuos_general(struct wacom_wac *wacom) y >>= 1; distance >>= 1; } + if (features->type == INTUOSHT2) + distance = features->distance_max - distance; input_report_abs(input, ABS_X, x); input_report_abs(input, ABS_Y, y); input_report_abs(input, ABS_DISTANCE, distance); @@ -1061,7 +1063,7 @@ static int wacom_remote_irq(struct wacom_wac *wacom_wac, size_t len) input_report_key(input, BTN_BASE2, (data[11] & 0x02)); if (data[12] & 0x80) - input_report_abs(input, ABS_WHEEL, (data[12] & 0x7f)); + input_report_abs(input, ABS_WHEEL, (data[12] & 0x7f) - 1); else input_report_abs(input, ABS_WHEEL, 0); @@ -1928,8 +1930,6 @@ static void wacom_wac_pad_usage_mapping(struct hid_device *hdev, features->device_type |= 
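The hiddev_open() fix above is an instance of the check-after-lock idiom: the exist flag is re-read once existancelock is held, and the earlier list insertion is undone on the bail path so no stale reader is left on the device list. Roughly (schematic):

    mutex_lock(&hiddev->existancelock);
    if (!list->hiddev->exist) {             /* recheck under the lock */
            res = -ENODEV;
            goto bail_unlock;
    }
    ...
    bail_unlock:
            mutex_unlock(&hiddev->existancelock);
            /* drop the entry added earlier so no stale reader remains */
            spin_lock_irq(&list->hiddev->list_lock);
            list_del(&list->node);
            spin_unlock_irq(&list->hiddev->list_lock);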
WACOM_DEVICETYPE_PAD; break; case WACOM_HID_WD_BUTTONCENTER: - wacom->generic_has_leds = true; - /* fall through */ case WACOM_HID_WD_BUTTONHOME: case WACOM_HID_WD_BUTTONUP: case WACOM_HID_WD_BUTTONDOWN: @@ -2121,14 +2121,12 @@ static void wacom_wac_pad_report(struct hid_device *hdev, bool active = wacom_wac->hid_data.inrange_state != 0; /* report prox for expresskey events */ - if ((wacom_equivalent_usage(field->physical) == HID_DG_TABLETFUNCTIONKEY) && - wacom_wac->hid_data.pad_input_event_flag) { + if (wacom_wac->hid_data.pad_input_event_flag) { input_event(input, EV_ABS, ABS_MISC, active ? PAD_DEVICE_ID : 0); input_sync(input); if (!active) wacom_wac->hid_data.pad_input_event_flag = false; } - } static void wacom_wac_pen_usage_mapping(struct hid_device *hdev, @@ -2725,9 +2723,7 @@ static int wacom_wac_collection(struct hid_device *hdev, struct hid_report *repo if (report->type != HID_INPUT_REPORT) return -1; - if (WACOM_PAD_FIELD(field) && wacom->wacom_wac.pad_input) - wacom_wac_pad_report(hdev, report, field); - else if (WACOM_PEN_FIELD(field) && wacom->wacom_wac.pen_input) + if (WACOM_PEN_FIELD(field) && wacom->wacom_wac.pen_input) wacom_wac_pen_report(hdev, report); else if (WACOM_FINGER_FIELD(field) && wacom->wacom_wac.touch_input) wacom_wac_finger_report(hdev, report); @@ -2741,7 +2737,7 @@ void wacom_wac_report(struct hid_device *hdev, struct hid_report *report) struct wacom_wac *wacom_wac = &wacom->wacom_wac; struct hid_field *field; bool pad_in_hid_field = false, pen_in_hid_field = false, - finger_in_hid_field = false; + finger_in_hid_field = false, true_pad = false; int r; int prev_collection = -1; @@ -2757,6 +2753,8 @@ void wacom_wac_report(struct hid_device *hdev, struct hid_report *report) pen_in_hid_field = true; if (WACOM_FINGER_FIELD(field)) finger_in_hid_field = true; + if (wacom_equivalent_usage(field->physical) == HID_DG_TABLETFUNCTIONKEY) + true_pad = true; } wacom_wac_battery_pre_report(hdev, report); @@ -2780,6 +2778,9 @@ void wacom_wac_report(struct hid_device *hdev, struct hid_report *report) } wacom_wac_battery_report(hdev, report); + + if (true_pad && wacom->wacom_wac.pad_input) + wacom_wac_pad_report(hdev, report, field); } static int wacom_bpt_touch(struct wacom_wac *wacom) @@ -3735,7 +3736,7 @@ int wacom_setup_touch_input_capabilities(struct input_dev *input_dev, 0, 5920, 4, 0); } input_abs_set_res(input_dev, ABS_MT_POSITION_X, 40); - input_abs_set_res(input_dev, ABS_MT_POSITION_X, 40); + input_abs_set_res(input_dev, ABS_MT_POSITION_Y, 40); /* fall through */ diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h index 295fd3718caa0bed767683775c852d47de92fd7b..f67d871841c0c62859a963e2566b23835b7e2b27 100644 --- a/drivers/hid/wacom_wac.h +++ b/drivers/hid/wacom_wac.h @@ -145,6 +145,7 @@ #define WACOM_HID_WD_OFFSETBOTTOM (WACOM_HID_UP_WACOMDIGITIZER | 0x0d33) #define WACOM_HID_WD_DATAMODE (WACOM_HID_UP_WACOMDIGITIZER | 0x1002) #define WACOM_HID_WD_DIGITIZERINFO (WACOM_HID_UP_WACOMDIGITIZER | 0x1013) +#define WACOM_HID_WD_TOUCH_RING_SETTING (WACOM_HID_UP_WACOMDIGITIZER | 0x1032) #define WACOM_HID_UP_G9 0xff090000 #define WACOM_HID_G9_PEN (WACOM_HID_UP_G9 | 0x02) #define WACOM_HID_G9_TOUCHSCREEN (WACOM_HID_UP_G9 | 0x11) diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c index 2f164bd746874549709a7c51122217d52ad05111..fdb0f832fadefe33aff3642406ae7fc380657078 100644 --- a/drivers/hv/channel.c +++ b/drivers/hv/channel.c @@ -38,7 +38,7 @@ static unsigned long virt_to_hvpfn(void *addr) { - unsigned long paddr; + phys_addr_t paddr; if 
(is_vmalloc_addr(addr)) paddr = page_to_phys(vmalloc_to_page(addr)) + diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c index 78603b78cf410de903aa22d55147e6b600ab0398..eba692cddbdee721ece32e4b943d1a244575e991 100644 --- a/drivers/hwmon/nct6775.c +++ b/drivers/hwmon/nct6775.c @@ -818,7 +818,7 @@ static const u16 NCT6106_REG_TARGET[] = { 0x111, 0x121, 0x131 }; static const u16 NCT6106_REG_WEIGHT_TEMP_SEL[] = { 0x168, 0x178, 0x188 }; static const u16 NCT6106_REG_WEIGHT_TEMP_STEP[] = { 0x169, 0x179, 0x189 }; static const u16 NCT6106_REG_WEIGHT_TEMP_STEP_TOL[] = { 0x16a, 0x17a, 0x18a }; -static const u16 NCT6106_REG_WEIGHT_DUTY_STEP[] = { 0x16b, 0x17b, 0x17c }; +static const u16 NCT6106_REG_WEIGHT_DUTY_STEP[] = { 0x16b, 0x17b, 0x18b }; static const u16 NCT6106_REG_WEIGHT_TEMP_BASE[] = { 0x16c, 0x17c, 0x18c }; static const u16 NCT6106_REG_WEIGHT_DUTY_BASE[] = { 0x16d, 0x17d, 0x18d }; @@ -3673,6 +3673,7 @@ static int nct6775_probe(struct platform_device *pdev) data->REG_FAN_TIME[0] = NCT6106_REG_FAN_STOP_TIME; data->REG_FAN_TIME[1] = NCT6106_REG_FAN_STEP_UP_TIME; data->REG_FAN_TIME[2] = NCT6106_REG_FAN_STEP_DOWN_TIME; + data->REG_TOLERANCE_H = NCT6106_REG_TOLERANCE_H; data->REG_PWM[0] = NCT6106_REG_PWM; data->REG_PWM[1] = NCT6106_REG_FAN_START_OUTPUT; data->REG_PWM[2] = NCT6106_REG_FAN_STOP_OUTPUT; diff --git a/drivers/hwmon/nct7802.c b/drivers/hwmon/nct7802.c index 2876c18ed84115dbd97e7eb2c4f9b899ebf2cfe7..38ffbdb0a85fba288cdcac8e76cc7ff3eadec65d 100644 --- a/drivers/hwmon/nct7802.c +++ b/drivers/hwmon/nct7802.c @@ -768,7 +768,7 @@ static struct attribute *nct7802_in_attrs[] = { &sensor_dev_attr_in3_alarm.dev_attr.attr, &sensor_dev_attr_in3_beep.dev_attr.attr, - &sensor_dev_attr_in4_input.dev_attr.attr, /* 17 */ + &sensor_dev_attr_in4_input.dev_attr.attr, /* 16 */ &sensor_dev_attr_in4_min.dev_attr.attr, &sensor_dev_attr_in4_max.dev_attr.attr, &sensor_dev_attr_in4_alarm.dev_attr.attr, @@ -794,9 +794,9 @@ static umode_t nct7802_in_is_visible(struct kobject *kobj, if (index >= 6 && index < 11 && (reg & 0x03) != 0x03) /* VSEN1 */ return 0; - if (index >= 11 && index < 17 && (reg & 0x0c) != 0x0c) /* VSEN2 */ + if (index >= 11 && index < 16 && (reg & 0x0c) != 0x0c) /* VSEN2 */ return 0; - if (index >= 17 && (reg & 0x30) != 0x30) /* VSEN3 */ + if (index >= 16 && (reg & 0x30) != 0x30) /* VSEN3 */ return 0; return attr->mode; diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c index 8ff326c0c406c9ac2a5f58815d0057d39978c90d..3cdf85b1ce4fee2366bef46a53aa4a92319aa889 100644 --- a/drivers/hwtracing/intel_th/msu.c +++ b/drivers/hwtracing/intel_th/msu.c @@ -632,7 +632,7 @@ static int msc_buffer_contig_alloc(struct msc *msc, unsigned long size) goto err_out; ret = -ENOMEM; - page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order); + page = alloc_pages(GFP_KERNEL | __GFP_ZERO | GFP_DMA32, order); if (!page) goto err_free_sgt; diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c index 70f2cb90adc5eb2e15441dcc7f189f9e5495f5aa..968319f4e5f101e5f1371d7701bc1493630e3d0e 100644 --- a/drivers/hwtracing/intel_th/pci.c +++ b/drivers/hwtracing/intel_th/pci.c @@ -140,6 +140,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa1a6), .driver_data = (kernel_ulong_t)0, }, + { + /* Lewisburg PCH */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa226), + .driver_data = (kernel_ulong_t)0, + }, { /* Gemini Lake */ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x318e), @@ -170,6 +175,16 @@ static const struct pci_device_id 
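The virt_to_hvpfn() change above (unsigned long to phys_addr_t) matters on 32-bit builds with PAE, where a physical address can exceed what unsigned long holds. A small user-space illustration of the truncation; the address is an arbitrary example:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t paddr = 0x1234567890ULL;        /* >4 GiB physical address */
            unsigned long truncated = (unsigned long)paddr;

            /* On a 32-bit build sizeof(unsigned long) == 4 and the upper
             * bits are silently dropped; a 64-bit type (phys_addr_t with
             * PAE) keeps them, so the computed PFN stays correct.
             */
            printf("sizeof(unsigned long)=%zu full=%" PRIx64 " truncated=%lx\n",
                   sizeof(unsigned long), paddr, truncated);
            return 0;
    }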
intel_th_pci_id_table[] = { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x02a6), .driver_data = (kernel_ulong_t)&intel_th_2x, }, + { + /* Ice Lake NNPI */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x45c5), + .driver_data = (kernel_ulong_t)&intel_th_2x, + }, + { + /* Tiger Lake PCH */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa0a6), + .driver_data = (kernel_ulong_t)&intel_th_2x, + }, { 0 }, }; diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c index 9ec9197edffaf64bd041e9e6071bcd59e6a3ea48..eeba421dc823d71f1864697cd861eb6e875fe593 100644 --- a/drivers/hwtracing/stm/core.c +++ b/drivers/hwtracing/stm/core.c @@ -1098,7 +1098,6 @@ int stm_source_register_device(struct device *parent, err: put_device(&src->dev); - kfree(src); return err; } diff --git a/drivers/i2c/busses/i2c-emev2.c b/drivers/i2c/busses/i2c-emev2.c index 35b302d983e0d93d74b255d57f113226f9c799f9..959d4912ec0d5ce1082c4713cee86e13b10a228d 100644 --- a/drivers/i2c/busses/i2c-emev2.c +++ b/drivers/i2c/busses/i2c-emev2.c @@ -69,6 +69,7 @@ struct em_i2c_device { struct completion msg_done; struct clk *sclk; struct i2c_client *slave; + int irq; }; static inline void em_clear_set_bit(struct em_i2c_device *priv, u8 clear, u8 set, u8 reg) @@ -339,6 +340,12 @@ static int em_i2c_unreg_slave(struct i2c_client *slave) writeb(0, priv->base + I2C_OFS_SVA0); + /* + * Wait for interrupt to finish. New slave irqs cannot happen because we + * cleared the slave address and, thus, only extension codes will be + * detected which do not use the slave ptr. + */ + synchronize_irq(priv->irq); priv->slave = NULL; return 0; @@ -355,7 +362,7 @@ static int em_i2c_probe(struct platform_device *pdev) { struct em_i2c_device *priv; struct resource *r; - int irq, ret; + int ret; priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); if (!priv) @@ -390,8 +397,8 @@ static int em_i2c_probe(struct platform_device *pdev) em_i2c_reset(&priv->adap); - irq = platform_get_irq(pdev, 0); - ret = devm_request_irq(&pdev->dev, irq, em_i2c_irq_handler, 0, + priv->irq = platform_get_irq(pdev, 0); + ret = devm_request_irq(&pdev->dev, priv->irq, em_i2c_irq_handler, 0, "em_i2c", priv); if (ret) goto err_clk; @@ -401,7 +408,8 @@ static int em_i2c_probe(struct platform_device *pdev) if (ret) goto err_clk; - dev_info(&pdev->dev, "Added i2c controller %d, irq %d\n", priv->adap.nr, irq); + dev_info(&pdev->dev, "Added i2c controller %d, irq %d\n", priv->adap.nr, + priv->irq); return 0; diff --git a/drivers/i2c/busses/i2c-piix4.c b/drivers/i2c/busses/i2c-piix4.c index 90946a8b9a75a94a8eadc4d50071b166a9d44c81..9ff3371ec385da0221ec530b0bfadfb4233d0a33 100644 --- a/drivers/i2c/busses/i2c-piix4.c +++ b/drivers/i2c/busses/i2c-piix4.c @@ -98,7 +98,7 @@ #define SB800_PIIX4_PORT_IDX_MASK 0x06 #define SB800_PIIX4_PORT_IDX_SHIFT 1 -/* On kerncz, SmBus0Sel is at bit 20:19 of PMx00 DecodeEn */ +/* On kerncz and Hudson2, SmBus0Sel is at bit 20:19 of PMx00 DecodeEn */ #define SB800_PIIX4_PORT_IDX_KERNCZ 0x02 #define SB800_PIIX4_PORT_IDX_MASK_KERNCZ 0x18 #define SB800_PIIX4_PORT_IDX_SHIFT_KERNCZ 3 @@ -362,18 +362,16 @@ static int piix4_setup_sb800(struct pci_dev *PIIX4_dev, /* Find which register is used for port selection */ if (PIIX4_dev->vendor == PCI_VENDOR_ID_AMD) { - switch (PIIX4_dev->device) { - case PCI_DEVICE_ID_AMD_KERNCZ_SMBUS: + if (PIIX4_dev->device == PCI_DEVICE_ID_AMD_KERNCZ_SMBUS || + (PIIX4_dev->device == PCI_DEVICE_ID_AMD_HUDSON2_SMBUS && + PIIX4_dev->revision >= 0x1F)) { piix4_port_sel_sb800 = SB800_PIIX4_PORT_IDX_KERNCZ; piix4_port_mask_sb800 = SB800_PIIX4_PORT_IDX_MASK_KERNCZ; 
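Both I2C slave fixes in this area (em_i2c above, rcar further on) follow the same quiescing rule: disable the interrupt source, wait for any handler still running with synchronize_irq(), and only then clear the pointer the handler dereferences. A sketch for a hypothetical driver; the struct, register offset and names are placeholders:

    #include <linux/i2c.h>
    #include <linux/interrupt.h>
    #include <linux/io.h>

    #define FOO_SLAVE_ADDR_REG      0x10    /* placeholder register */

    struct foo_i2c {                        /* hypothetical driver state */
            void __iomem *base;
            int irq;
            struct i2c_client *slave;
    };

    static int foo_i2c_unreg_slave(struct i2c_client *slave)
    {
            struct foo_i2c *priv = i2c_get_adapdata(slave->adapter);

            /* stop the hardware from raising new slave interrupts */
            writeb(0, priv->base + FOO_SLAVE_ADDR_REG);

            /* wait for a handler that may already be running */
            synchronize_irq(priv->irq);

            /* only now is it safe to drop what the handler dereferences */
            priv->slave = NULL;

            return 0;
    }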
piix4_port_shift_sb800 = SB800_PIIX4_PORT_IDX_SHIFT_KERNCZ; - break; - case PCI_DEVICE_ID_AMD_HUDSON2_SMBUS: - default: + } else { piix4_port_sel_sb800 = SB800_PIIX4_PORT_IDX_ALT; piix4_port_mask_sb800 = SB800_PIIX4_PORT_IDX_MASK; piix4_port_shift_sb800 = SB800_PIIX4_PORT_IDX_SHIFT; - break; } } else { if (!request_muxed_region(SB800_PIIX4_SMB_IDX, 2, diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c index 254e6219e5389f17114185c57470914562c2bed6..2c29f901d30908b3f0b4f9b5c29b6639ea46b398 100644 --- a/drivers/i2c/busses/i2c-rcar.c +++ b/drivers/i2c/busses/i2c-rcar.c @@ -139,6 +139,7 @@ struct rcar_i2c_priv { enum dma_data_direction dma_direction; struct reset_control *rstc; + int irq; }; #define rcar_i2c_priv_to_dev(p) ((p)->adap.dev.parent) @@ -859,9 +860,11 @@ static int rcar_unreg_slave(struct i2c_client *slave) WARN_ON(!priv->slave); + /* disable irqs and ensure none is running before clearing ptr */ rcar_i2c_write(priv, ICSIER, 0); rcar_i2c_write(priv, ICSCR, 0); + synchronize_irq(priv->irq); priv->slave = NULL; pm_runtime_put(rcar_i2c_priv_to_dev(priv)); @@ -916,7 +919,7 @@ static int rcar_i2c_probe(struct platform_device *pdev) struct i2c_adapter *adap; struct device *dev = &pdev->dev; struct i2c_timings i2c_t; - int irq, ret; + int ret; priv = devm_kzalloc(dev, sizeof(struct rcar_i2c_priv), GFP_KERNEL); if (!priv) @@ -979,10 +982,10 @@ static int rcar_i2c_probe(struct platform_device *pdev) pm_runtime_put(dev); - irq = platform_get_irq(pdev, 0); - ret = devm_request_irq(dev, irq, rcar_i2c_irq, 0, dev_name(dev), priv); + priv->irq = platform_get_irq(pdev, 0); + ret = devm_request_irq(dev, priv->irq, rcar_i2c_irq, 0, dev_name(dev), priv); if (ret < 0) { - dev_err(dev, "cannot get irq %d\n", irq); + dev_err(dev, "cannot get irq %d\n", priv->irq); goto out_pm_disable; } diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c index a492da9fd0d326db0128ada9d2f940985b2851c8..ac9c9486b834cc78cb71ab7be34dfcc4b68f1daa 100644 --- a/drivers/i2c/busses/i2c-stm32f7.c +++ b/drivers/i2c/busses/i2c-stm32f7.c @@ -24,7 +24,6 @@ #include #include #include -#include #include #include #include @@ -1782,15 +1781,14 @@ static struct i2c_algorithm stm32f7_i2c_algo = { static int stm32f7_i2c_probe(struct platform_device *pdev) { - struct device_node *np = pdev->dev.of_node; struct stm32f7_i2c_dev *i2c_dev; const struct stm32f7_i2c_setup *setup; struct resource *res; - u32 irq_error, irq_event, clk_rate, rise_time, fall_time; + u32 clk_rate, rise_time, fall_time; struct i2c_adapter *adap; struct reset_control *rst; dma_addr_t phy_addr; - int ret; + int irq_error, irq_event, ret; i2c_dev = devm_kzalloc(&pdev->dev, sizeof(*i2c_dev), GFP_KERNEL); if (!i2c_dev) @@ -1802,16 +1800,20 @@ static int stm32f7_i2c_probe(struct platform_device *pdev) return PTR_ERR(i2c_dev->base); phy_addr = (dma_addr_t)res->start; - irq_event = irq_of_parse_and_map(np, 0); - if (!irq_event) { - dev_err(&pdev->dev, "IRQ event missing or invalid\n"); - return -EINVAL; + irq_event = platform_get_irq(pdev, 0); + if (irq_event <= 0) { + if (irq_event != -EPROBE_DEFER) + dev_err(&pdev->dev, "Failed to get IRQ event: %d\n", + irq_event); + return irq_event ? : -ENOENT; } - irq_error = irq_of_parse_and_map(np, 1); - if (!irq_error) { - dev_err(&pdev->dev, "IRQ error missing or invalid\n"); - return -EINVAL; + irq_error = platform_get_irq(pdev, 1); + if (irq_error <= 0) { + if (irq_error != -EPROBE_DEFER) + dev_err(&pdev->dev, "Failed to get IRQ error: %d\n", + irq_error); + return irq_error ? 
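The i2c-stm32f7 conversion in this hunk switches to the preferred probe pattern: take the interrupt from platform_get_irq(), propagate negative errors, and stay quiet on -EPROBE_DEFER. A minimal sketch for a hypothetical platform driver:

    #include <linux/interrupt.h>
    #include <linux/platform_device.h>

    static irqreturn_t foo_irq(int irq, void *data)   /* placeholder handler */
    {
            return IRQ_HANDLED;
    }

    static int foo_probe(struct platform_device *pdev)
    {
            int irq;

            irq = platform_get_irq(pdev, 0);
            if (irq < 0) {
                    /* real failures are logged, probe deferral is not */
                    if (irq != -EPROBE_DEFER)
                            dev_err(&pdev->dev, "Failed to get IRQ: %d\n", irq);
                    return irq;
            }

            return devm_request_irq(&pdev->dev, irq, foo_irq, 0,
                                    dev_name(&pdev->dev), pdev);
    }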
: -ENOENT; } i2c_dev->clk = devm_clk_get(&pdev->dev, NULL); diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c index 5b0e1d9e5adc053c2831397f8dfd2128bc89d777..1de10e5c70d7b661809bf20a02f4d8d057b02d1e 100644 --- a/drivers/i2c/i2c-core-base.c +++ b/drivers/i2c/i2c-core-base.c @@ -185,7 +185,7 @@ static int i2c_generic_bus_free(struct i2c_adapter *adap) int i2c_generic_scl_recovery(struct i2c_adapter *adap) { struct i2c_bus_recovery_info *bri = adap->bus_recovery_info; - int i = 0, scl = 1, ret; + int i = 0, scl = 1, ret = 0; if (bri->prepare_recovery) bri->prepare_recovery(adap); diff --git a/drivers/iio/accel/cros_ec_accel_legacy.c b/drivers/iio/accel/cros_ec_accel_legacy.c index 063e89eff791a7de06ec2abfa5f814d9ab0193ee..c776a3509a7173720435d70e3611f287c550fad2 100644 --- a/drivers/iio/accel/cros_ec_accel_legacy.c +++ b/drivers/iio/accel/cros_ec_accel_legacy.c @@ -328,7 +328,6 @@ static const struct iio_chan_spec_ext_info cros_ec_accel_legacy_ext_info[] = { .modified = 1, \ .info_mask_separate = \ BIT(IIO_CHAN_INFO_RAW) | \ - BIT(IIO_CHAN_INFO_SCALE) | \ BIT(IIO_CHAN_INFO_CALIBBIAS), \ .info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SCALE), \ .ext_info = cros_ec_accel_legacy_ext_info, \ diff --git a/drivers/iio/adc/max9611.c b/drivers/iio/adc/max9611.c index 0538ff8c4ac1d2f1e242898daf644d3b69345c4d..49c1956e6a6742f6189e3ce7dcb19cd9af4fb039 100644 --- a/drivers/iio/adc/max9611.c +++ b/drivers/iio/adc/max9611.c @@ -86,7 +86,7 @@ #define MAX9611_TEMP_MAX_POS 0x7f80 #define MAX9611_TEMP_MAX_NEG 0xff80 #define MAX9611_TEMP_MIN_NEG 0xd980 -#define MAX9611_TEMP_MASK GENMASK(7, 15) +#define MAX9611_TEMP_MASK GENMASK(15, 7) #define MAX9611_TEMP_SHIFT 0x07 #define MAX9611_TEMP_RAW(_r) ((_r) >> MAX9611_TEMP_SHIFT) #define MAX9611_TEMP_SCALE_NUM 1000000 @@ -483,7 +483,7 @@ static int max9611_init(struct max9611_dev *max9611) if (ret) return ret; - regval = ret & MAX9611_TEMP_MASK; + regval &= MAX9611_TEMP_MASK; if ((regval > MAX9611_TEMP_MAX_POS && regval < MAX9611_TEMP_MIN_NEG) || diff --git a/drivers/iio/adc/stm32-dfsdm-adc.c b/drivers/iio/adc/stm32-dfsdm-adc.c index fcd4a1c00ca0574d02326602f68da118a499fca4..15a115210108d58765628a8a3ee4b158f26894f0 100644 --- a/drivers/iio/adc/stm32-dfsdm-adc.c +++ b/drivers/iio/adc/stm32-dfsdm-adc.c @@ -1144,6 +1144,12 @@ static int stm32_dfsdm_adc_probe(struct platform_device *pdev) * So IRQ associated to filter instance 0 is dedicated to the Filter 0. */ irq = platform_get_irq(pdev, 0); + if (irq < 0) { + if (irq != -EPROBE_DEFER) + dev_err(dev, "Failed to get IRQ: %d\n", irq); + return irq; + } + ret = devm_request_irq(dev, irq, stm32_dfsdm_irq, 0, pdev->name, adc); if (ret < 0) { diff --git a/drivers/iio/adc/stm32-dfsdm-core.c b/drivers/iio/adc/stm32-dfsdm-core.c index bf089f5d622532740885146c0be6c513882d95a4..941630615e88535be61e6bb097b9af77ca8d2ac5 100644 --- a/drivers/iio/adc/stm32-dfsdm-core.c +++ b/drivers/iio/adc/stm32-dfsdm-core.c @@ -213,6 +213,8 @@ static int stm32_dfsdm_parse_of(struct platform_device *pdev, } priv->dfsdm.phys_base = res->start; priv->dfsdm.base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(priv->dfsdm.base)) + return PTR_ERR(priv->dfsdm.base); /* * "dfsdm" clock is mandatory for DFSDM peripheral clocking. 
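The max9611 fix above swaps the GENMASK() arguments: the macro takes the high bit first, so GENMASK(15, 7) yields the intended 0xff80 while GENMASK(7, 15) collapses to zero. A quick user-space check; the macro below is a simplified but equivalent form of the kernel's definition for in-range arguments:

    #include <stdio.h>

    #define BITS_PER_LONG   (8 * sizeof(unsigned long))
    /* simplified GENMASK(), matching the kernel's result for sane args */
    #define GENMASK(h, l) \
            (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

    int main(void)
    {
            printf("GENMASK(15, 7) = %#lx\n", GENMASK(15, 7));  /* 0xff80 */
            printf("GENMASK(7, 15) = %#lx\n", GENMASK(7, 15));  /* 0      */
            return 0;
    }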
@@ -222,8 +224,10 @@ static int stm32_dfsdm_parse_of(struct platform_device *pdev, */ priv->clk = devm_clk_get(&pdev->dev, "dfsdm"); if (IS_ERR(priv->clk)) { - dev_err(&pdev->dev, "No stm32_dfsdm_clk clock found\n"); - return -EINVAL; + ret = PTR_ERR(priv->clk); + if (ret != -EPROBE_DEFER) + dev_err(&pdev->dev, "Failed to get clock (%d)\n", ret); + return ret; } priv->aclk = devm_clk_get(&pdev->dev, "audio"); diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c index ef459f2f2eeb859c5a7c7a4b4501e00adedd7ae1..7586c1dd73f19e167b2bfa62336eaf0c9ac14687 100644 --- a/drivers/infiniband/core/mad.c +++ b/drivers/infiniband/core/mad.c @@ -3182,18 +3182,18 @@ static int ib_mad_port_open(struct ib_device *device, if (has_smi) cq_size *= 2; + port_priv->pd = ib_alloc_pd(device, 0); + if (IS_ERR(port_priv->pd)) { + dev_err(&device->dev, "Couldn't create ib_mad PD\n"); + ret = PTR_ERR(port_priv->pd); + goto error3; + } + port_priv->cq = ib_alloc_cq(port_priv->device, port_priv, cq_size, 0, IB_POLL_WORKQUEUE); if (IS_ERR(port_priv->cq)) { dev_err(&device->dev, "Couldn't create ib_mad CQ\n"); ret = PTR_ERR(port_priv->cq); - goto error3; - } - - port_priv->pd = ib_alloc_pd(device, 0); - if (IS_ERR(port_priv->pd)) { - dev_err(&device->dev, "Couldn't create ib_mad PD\n"); - ret = PTR_ERR(port_priv->pd); goto error4; } @@ -3236,11 +3236,11 @@ error8: error7: destroy_mad_qp(&port_priv->qp_info[0]); error6: - ib_dealloc_pd(port_priv->pd); -error4: ib_free_cq(port_priv->cq); cleanup_recv_queue(&port_priv->qp_info[1]); cleanup_recv_queue(&port_priv->qp_info[0]); +error4: + ib_dealloc_pd(port_priv->pd); error3: kfree(port_priv); @@ -3270,8 +3270,8 @@ static int ib_mad_port_close(struct ib_device *device, int port_num) destroy_workqueue(port_priv->wq); destroy_mad_qp(&port_priv->qp_info[1]); destroy_mad_qp(&port_priv->qp_info[0]); - ib_dealloc_pd(port_priv->pd); ib_free_cq(port_priv->cq); + ib_dealloc_pd(port_priv->pd); cleanup_recv_queue(&port_priv->qp_info[1]); cleanup_recv_queue(&port_priv->qp_info[0]); /* XXX: Handle deallocation of MAD registration tables */ diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c index 7b794a14d6e851fbc4afc82711ecdb6ecf336c61..8be082edf986fd1e4d763c69bbd06ef3b3bd8cdc 100644 --- a/drivers/infiniband/core/sa_query.c +++ b/drivers/infiniband/core/sa_query.c @@ -1232,7 +1232,6 @@ static int roce_resolve_route_from_path(struct sa_path_rec *rec, { struct rdma_dev_addr dev_addr = {}; union { - struct sockaddr _sockaddr; struct sockaddr_in _sockaddr_in; struct sockaddr_in6 _sockaddr_in6; } sgid_addr, dgid_addr; @@ -1249,12 +1248,12 @@ static int roce_resolve_route_from_path(struct sa_path_rec *rec, */ dev_addr.net = &init_net; - rdma_gid2ip(&sgid_addr._sockaddr, &rec->sgid); - rdma_gid2ip(&dgid_addr._sockaddr, &rec->dgid); + rdma_gid2ip((struct sockaddr *)&sgid_addr, &rec->sgid); + rdma_gid2ip((struct sockaddr *)&dgid_addr, &rec->dgid); /* validate the route */ - ret = rdma_resolve_ip_route(&sgid_addr._sockaddr, - &dgid_addr._sockaddr, &dev_addr); + ret = rdma_resolve_ip_route((struct sockaddr *)&sgid_addr, + (struct sockaddr *)&dgid_addr, &dev_addr); if (ret) return ret; diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c index c34a6852d691f666fb1d8deac965335c4f7c0840..a18f3f8ad77fe91944b54157a7003b84565d8f7d 100644 --- a/drivers/infiniband/core/user_mad.c +++ b/drivers/infiniband/core/user_mad.c @@ -49,6 +49,7 @@ #include #include #include +#include #include @@ -868,11 +869,14 @@ static int 
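The ib_mad reordering above restores the usual unwinding rule: resources are released in the reverse order of acquisition, and each error label undoes exactly what was set up before the jump. In outline (placeholder names):

    a = acquire_a();
    if (IS_ERR(a))
            goto err;
    b = acquire_b();                /* acquired after a ... */
    if (IS_ERR(b))
            goto err_a;
    ...
    err_b:
            release_b(b);           /* ... so it is released before a */
    err_a:
            release_a(a);
    err:
            return ret;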
ib_umad_unreg_agent(struct ib_umad_file *file, u32 __user *arg) if (get_user(id, arg)) return -EFAULT; + if (id >= IB_UMAD_MAX_AGENTS) + return -EINVAL; mutex_lock(&file->port->file_mutex); mutex_lock(&file->mutex); - if (id >= IB_UMAD_MAX_AGENTS || !__get_agent(file, id)) { + id = array_index_nospec(id, IB_UMAD_MAX_AGENTS); + if (!__get_agent(file, id)) { ret = -EINVAL; goto out; } diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c index d8eb4dc04d69035c050c7c48a0dc1d1177709192..6aa5a8a242ffddb2cd9494932076a995dbb78310 100644 --- a/drivers/infiniband/hw/hfi1/chip.c +++ b/drivers/infiniband/hw/hfi1/chip.c @@ -14586,7 +14586,7 @@ void hfi1_deinit_vnic_rsm(struct hfi1_devdata *dd) clear_rcvctrl(dd, RCV_CTRL_RCV_RSM_ENABLE_SMASK); } -static void init_rxe(struct hfi1_devdata *dd) +static int init_rxe(struct hfi1_devdata *dd) { struct rsm_map_table *rmt; u64 val; @@ -14595,6 +14595,9 @@ static void init_rxe(struct hfi1_devdata *dd) write_csr(dd, RCV_ERR_MASK, ~0ull); rmt = alloc_rsm_map_table(dd); + if (!rmt) + return -ENOMEM; + /* set up QOS, including the QPN map table */ init_qos(dd, rmt); init_user_fecn_handling(dd, rmt); @@ -14621,6 +14624,7 @@ static void init_rxe(struct hfi1_devdata *dd) val |= ((4ull & RCV_BYPASS_HDR_SIZE_MASK) << RCV_BYPASS_HDR_SIZE_SHIFT); write_csr(dd, RCV_BYPASS, val); + return 0; } static void init_other(struct hfi1_devdata *dd) @@ -15163,7 +15167,10 @@ struct hfi1_devdata *hfi1_init_dd(struct pci_dev *pdev, goto bail_cleanup; /* set initial RXE CSRs */ - init_rxe(dd); + ret = init_rxe(dd); + if (ret) + goto bail_cleanup; + /* set initial TXE CSRs */ init_txe(dd); /* set initial non-RXE, non-TXE CSRs */ diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c index 27d9c4cefdc7a515ad5b183373652939ab9a09eb..1ad38c8c1ef97e2250f231b10f56c6d16f42abd6 100644 --- a/drivers/infiniband/hw/hfi1/verbs.c +++ b/drivers/infiniband/hw/hfi1/verbs.c @@ -54,6 +54,7 @@ #include #include #include +#include #include "hfi.h" #include "common.h" @@ -1596,6 +1597,7 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr) sl = rdma_ah_get_sl(ah_attr); if (sl >= ARRAY_SIZE(ibp->sl_to_sc)) return -EINVAL; + sl = array_index_nospec(sl, ARRAY_SIZE(ibp->sl_to_sc)); sc5 = ibp->sl_to_sc[sl]; if (sc_to_vlt(dd, sc5) > num_vls && sc_to_vlt(dd, sc5) != 0xf) diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c index e2e6c74a74522598e2fc02a7c5d78bb68939d746..a5e3349b8a7c31acf5e682306f4e483ae7ac5284 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c +++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c @@ -806,6 +806,8 @@ static int i40iw_query_qp(struct ib_qp *ibqp, struct i40iw_qp *iwqp = to_iwqp(ibqp); struct i40iw_sc_qp *qp = &iwqp->sc_qp; + attr->qp_state = iwqp->ibqp_state; + attr->cur_qp_state = attr->qp_state; attr->qp_access_flags = 0; attr->cap.max_send_wr = qp->qp_uk.sq_size; attr->cap.max_recv_wr = qp->qp_uk.rq_size; diff --git a/drivers/infiniband/hw/mlx5/mad.c b/drivers/infiniband/hw/mlx5/mad.c index 32a9e9228b13554c2d1d5db057a5a484efc4f2bf..cdf6e26ebc87da8ca5e4747f9835ea959d8885db 100644 --- a/drivers/infiniband/hw/mlx5/mad.c +++ b/drivers/infiniband/hw/mlx5/mad.c @@ -197,19 +197,33 @@ static void pma_cnt_assign(struct ib_pma_portcounters *pma_cnt, vl_15_dropped); } -static int process_pma_cmd(struct mlx5_core_dev *mdev, u8 port_num, +static int process_pma_cmd(struct mlx5_ib_dev *dev, u8 port_num, const struct ib_mad *in_mad, struct ib_mad *out_mad) { - int 
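The ib_umad and hfi1 hunks above apply the standard Spectre v1 hardening: bounds-check first, then clamp the index with array_index_nospec() so a mispredicted branch cannot index out of range. The pattern, for a hypothetical table:

    #include <linux/nospec.h>

    #define FOO_MAX_ENTRIES 256             /* hypothetical table size */

    struct foo_entry;                       /* opaque for this sketch */
    static struct foo_entry *foo_table[FOO_MAX_ENTRIES];

    static struct foo_entry *foo_lookup(unsigned int id)
    {
            if (id >= FOO_MAX_ENTRIES)
                    return NULL;
            /* clamp so speculation cannot run with an out-of-bounds index */
            id = array_index_nospec(id, FOO_MAX_ENTRIES);
            return foo_table[id];
    }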
err; + struct mlx5_core_dev *mdev; + bool native_port = true; + u8 mdev_port_num; void *out_cnt; + int err; + mdev = mlx5_ib_get_native_port_mdev(dev, port_num, &mdev_port_num); + if (!mdev) { + /* Fail to get the native port, likely due to 2nd port is still + * unaffiliated. In such case default to 1st port and attached + * PF device. + */ + native_port = false; + mdev = dev->mdev; + mdev_port_num = 1; + } /* Declaring support of extended counters */ if (in_mad->mad_hdr.attr_id == IB_PMA_CLASS_PORT_INFO) { struct ib_class_port_info cpi = {}; cpi.capability_mask = IB_PMA_CLASS_CAP_EXT_WIDTH; memcpy((out_mad->data + 40), &cpi, sizeof(cpi)); - return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY; + err = IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY; + goto done; } if (in_mad->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS_EXT) { @@ -218,11 +232,13 @@ static int process_pma_cmd(struct mlx5_core_dev *mdev, u8 port_num, int sz = MLX5_ST_SZ_BYTES(query_vport_counter_out); out_cnt = kvzalloc(sz, GFP_KERNEL); - if (!out_cnt) - return IB_MAD_RESULT_FAILURE; + if (!out_cnt) { + err = IB_MAD_RESULT_FAILURE; + goto done; + } err = mlx5_core_query_vport_counter(mdev, 0, 0, - port_num, out_cnt, sz); + mdev_port_num, out_cnt, sz); if (!err) pma_cnt_ext_assign(pma_cnt_ext, out_cnt); } else { @@ -231,20 +247,23 @@ static int process_pma_cmd(struct mlx5_core_dev *mdev, u8 port_num, int sz = MLX5_ST_SZ_BYTES(ppcnt_reg); out_cnt = kvzalloc(sz, GFP_KERNEL); - if (!out_cnt) - return IB_MAD_RESULT_FAILURE; + if (!out_cnt) { + err = IB_MAD_RESULT_FAILURE; + goto done; + } - err = mlx5_core_query_ib_ppcnt(mdev, port_num, + err = mlx5_core_query_ib_ppcnt(mdev, mdev_port_num, out_cnt, sz); if (!err) pma_cnt_assign(pma_cnt, out_cnt); - } - + } kvfree(out_cnt); - if (err) - return IB_MAD_RESULT_FAILURE; - - return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY; + err = err ? 
IB_MAD_RESULT_FAILURE : + IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY; +done: + if (native_port) + mlx5_ib_put_native_port_mdev(dev, port_num); + return err; } int mlx5_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num, @@ -256,8 +275,6 @@ int mlx5_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num, struct mlx5_ib_dev *dev = to_mdev(ibdev); const struct ib_mad *in_mad = (const struct ib_mad *)in; struct ib_mad *out_mad = (struct ib_mad *)out; - struct mlx5_core_dev *mdev; - u8 mdev_port_num; int ret; if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) || @@ -266,19 +283,14 @@ int mlx5_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num, memset(out_mad->data, 0, sizeof(out_mad->data)); - mdev = mlx5_ib_get_native_port_mdev(dev, port_num, &mdev_port_num); - if (!mdev) - return IB_MAD_RESULT_FAILURE; - - if (MLX5_CAP_GEN(mdev, vport_counters) && + if (MLX5_CAP_GEN(dev->mdev, vport_counters) && in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT && in_mad->mad_hdr.method == IB_MGMT_METHOD_GET) { - ret = process_pma_cmd(mdev, mdev_port_num, in_mad, out_mad); + ret = process_pma_cmd(dev, port_num, in_mad, out_mad); } else { ret = process_mad(ibdev, mad_flags, port_num, in_wc, in_grh, in_mad, out_mad); } - mlx5_ib_put_native_port_mdev(dev, port_num); return ret; } diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 8cc4da62f050e860878b29ba0e40ce0527bd9ae7..53eccc0da8fd19a25fd8885cfe0352ec8f1c8c34 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -939,15 +939,19 @@ static int mlx5_ib_query_device(struct ib_device *ibdev, } if (MLX5_CAP_GEN(mdev, tag_matching)) { - props->tm_caps.max_rndv_hdr_size = MLX5_TM_MAX_RNDV_MSG_SIZE; props->tm_caps.max_num_tags = (1 << MLX5_CAP_GEN(mdev, log_tag_matching_list_sz)) - 1; - props->tm_caps.flags = IB_TM_CAP_RC; props->tm_caps.max_ops = 1 << MLX5_CAP_GEN(mdev, log_max_qp_sz); props->tm_caps.max_sge = MLX5_TM_MAX_SGE; } + if (MLX5_CAP_GEN(mdev, tag_matching) && + MLX5_CAP_GEN(mdev, rndv_offload_rc)) { + props->tm_caps.flags = IB_TM_CAP_RNDV_RC; + props->tm_caps.max_rndv_hdr_size = MLX5_TM_MAX_RNDV_MSG_SIZE; + } + if (MLX5_CAP_GEN(dev->mdev, cq_moderation)) { props->cq_caps.max_cq_moderation_count = MLX5_MAX_CQ_COUNT; diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 320d4dfe8c2f419cfdb9f53ddcdcc122eed294f1..941d1df54631afb0e283a83e75e18fec9e734f46 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -467,6 +467,7 @@ struct mlx5_umr_wr { u64 length; int access_flags; u32 mkey; + u8 ignore_free_state:1; }; static inline const struct mlx5_umr_wr *umr_wr(const struct ib_send_wr *wr) diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index 7df4a4fe4af477727c1bdf26ad120a04c0916a7d..bd1fdadf7ba01907078b9cae895be3a14fa4151e 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -51,22 +51,12 @@ static void clean_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr); static void dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr); static int mr_cache_max_order(struct mlx5_ib_dev *dev); static int unreg_umr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr); -static bool umr_can_modify_entity_size(struct mlx5_ib_dev *dev) -{ - return !MLX5_CAP_GEN(dev->mdev, umr_modify_entity_size_disabled); -} static bool umr_can_use_indirect_mkey(struct mlx5_ib_dev *dev) { return !MLX5_CAP_GEN(dev->mdev, 
umr_indirect_mkey_disabled); } -static bool use_umr(struct mlx5_ib_dev *dev, int order) -{ - return order <= mr_cache_max_order(dev) && - umr_can_modify_entity_size(dev); -} - static int destroy_mkey(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr) { int err = mlx5_core_destroy_mkey(dev->mdev, &mr->mmkey); @@ -548,13 +538,16 @@ void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr) return; c = order2idx(dev, mr->order); - if (c < 0 || c >= MAX_MR_CACHE_ENTRIES) { - mlx5_ib_warn(dev, "order %d, cache index %d\n", mr->order, c); - return; - } + WARN_ON(c < 0 || c >= MAX_MR_CACHE_ENTRIES); - if (unreg_umr(dev, mr)) + if (unreg_umr(dev, mr)) { + mr->allocated_from_cache = false; + destroy_mkey(dev, mr); + ent = &cache->ent[c]; + if (ent->cur < ent->limit) + queue_work(cache->wq, &ent->work); return; + } ent = &cache->ent[c]; spin_lock_irq(&ent->lock); @@ -1302,7 +1295,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, { struct mlx5_ib_dev *dev = to_mdev(pd->device); struct mlx5_ib_mr *mr = NULL; - bool populate_mtts = false; + bool use_umr; struct ib_umem *umem; int page_shift; int npages; @@ -1335,29 +1328,30 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, if (err < 0) return ERR_PTR(err); - if (use_umr(dev, order)) { + use_umr = !MLX5_CAP_GEN(dev->mdev, umr_modify_entity_size_disabled) && + (!MLX5_CAP_GEN(dev->mdev, umr_modify_atomic_disabled) || + !MLX5_CAP_GEN(dev->mdev, atomic)); + + if (order <= mr_cache_max_order(dev) && use_umr) { mr = alloc_mr_from_cache(pd, umem, virt_addr, length, ncont, page_shift, order, access_flags); if (PTR_ERR(mr) == -EAGAIN) { mlx5_ib_dbg(dev, "cache empty for order %d\n", order); mr = NULL; } - populate_mtts = false; } else if (!MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset)) { if (access_flags & IB_ACCESS_ON_DEMAND) { err = -EINVAL; pr_err("Got MR registration for ODP MR > 512MB, not supported for Connect-IB\n"); goto error; } - populate_mtts = true; + use_umr = false; } if (!mr) { - if (!umr_can_modify_entity_size(dev)) - populate_mtts = true; mutex_lock(&dev->slow_path_mutex); mr = reg_create(NULL, pd, virt_addr, length, umem, ncont, - page_shift, access_flags, populate_mtts); + page_shift, access_flags, !use_umr); mutex_unlock(&dev->slow_path_mutex); } @@ -1375,7 +1369,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, update_odp_mr(mr); #endif - if (!populate_mtts) { + if (use_umr) { int update_xlt_flags = MLX5_IB_UPD_XLT_ENABLE; if (access_flags & IB_ACCESS_ON_DEMAND) @@ -1408,9 +1402,11 @@ static int unreg_umr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr) return 0; umrwr.wr.send_flags = MLX5_IB_SEND_UMR_DISABLE_MR | - MLX5_IB_SEND_UMR_FAIL_IF_FREE; + MLX5_IB_SEND_UMR_UPDATE_PD_ACCESS; umrwr.wr.opcode = MLX5_IB_WR_UMR; + umrwr.pd = dev->umrc.pd; umrwr.mkey = mr->mmkey.key; + umrwr.ignore_free_state = 1; return mlx5_ib_post_send_wait(dev, &umrwr); } @@ -1615,10 +1611,10 @@ static void clean_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr) mr->sig = NULL; } - mlx5_free_priv_descs(mr); - - if (!allocated_from_cache) + if (!allocated_from_cache) { destroy_mkey(dev, mr); + mlx5_free_priv_descs(mr); + } } static void dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr) diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index 183fe5c8ceb77600b593d0f872e1225e9111c773..77b1f3fd086ad754564627d4f465b155767ef2f3 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -1501,7 +1501,6 @@ 
static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, } MLX5_SET(tirc, tirc, rx_hash_fn, MLX5_RX_HASH_FN_TOEPLITZ); - MLX5_SET(tirc, tirc, rx_hash_symmetric, 1); memcpy(rss_key, ucmd.rx_hash_key, len); break; } @@ -3717,10 +3716,14 @@ static int set_reg_umr_segment(struct mlx5_ib_dev *dev, memset(umr, 0, sizeof(*umr)); - if (wr->send_flags & MLX5_IB_SEND_UMR_FAIL_IF_FREE) - umr->flags = MLX5_UMR_CHECK_FREE; /* fail if free */ - else - umr->flags = MLX5_UMR_CHECK_NOT_FREE; /* fail if not free */ + if (!umrwr->ignore_free_state) { + if (wr->send_flags & MLX5_IB_SEND_UMR_FAIL_IF_FREE) + /* fail if free */ + umr->flags = MLX5_UMR_CHECK_FREE; + else + /* fail if not free */ + umr->flags = MLX5_UMR_CHECK_NOT_FREE; + } umr->xlt_octowords = cpu_to_be16(get_xlt_octo(umrwr->xlt_size)); if (wr->send_flags & MLX5_IB_SEND_UMR_UPDATE_XLT) { diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 4111b798fd3c9ab7e749675140690270b9cac46c..681d8e0913d069c5aca66dbc87d134793e5c3f6d 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -435,6 +435,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp, qp->resp.va = reth_va(pkt); qp->resp.rkey = reth_rkey(pkt); qp->resp.resid = reth_len(pkt); + qp->resp.length = reth_len(pkt); } access = (pkt->mask & RXE_READ_MASK) ? IB_ACCESS_REMOTE_READ : IB_ACCESS_REMOTE_WRITE; @@ -859,7 +860,9 @@ static enum resp_states do_complete(struct rxe_qp *qp, pkt->mask & RXE_WRITE_MASK) ? IB_WC_RECV_RDMA_WITH_IMM : IB_WC_RECV; wc->vendor_err = 0; - wc->byte_len = wqe->dma.length - wqe->dma.resid; + wc->byte_len = (pkt->mask & RXE_IMMDT_MASK && + pkt->mask & RXE_WRITE_MASK) ? + qp->resp.length : wqe->dma.length - wqe->dma.resid; /* fields after byte_len are different between kernel and user * space diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 332a16dad2a7e66e16b892d9c84fbad739377fff..3b731c7682e5bcadf874e2b0f475ca92591beee8 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -212,6 +212,7 @@ struct rxe_resp_info { struct rxe_mem *mr; u32 resid; u32 rkey; + u32 length; u64 atomic_orig; /* SRQ only */ diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c index 30f840f874b3c010ddf373b133ef0bdd8297fb25..78dd36daac00ec29889844f75330ef66dac583ac 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_main.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c @@ -1892,12 +1892,6 @@ static void ipoib_child_init(struct net_device *ndev) struct ipoib_dev_priv *priv = ipoib_priv(ndev); struct ipoib_dev_priv *ppriv = ipoib_priv(priv->parent); - dev_hold(priv->parent); - - down_write(&ppriv->vlan_rwsem); - list_add_tail(&priv->list, &ppriv->child_intfs); - up_write(&ppriv->vlan_rwsem); - priv->max_ib_mtu = ppriv->max_ib_mtu; set_bit(IPOIB_FLAG_SUBINTERFACE, &priv->flags); memcpy(priv->dev->dev_addr, ppriv->dev->dev_addr, INFINIBAND_ALEN); @@ -1940,6 +1934,17 @@ static int ipoib_ndo_init(struct net_device *ndev) if (rc) { pr_warn("%s: failed to initialize device: %s port %d (ret = %d)\n", priv->ca->name, priv->dev->name, priv->port, rc); + return rc; + } + + if (priv->parent) { + struct ipoib_dev_priv *ppriv = ipoib_priv(priv->parent); + + dev_hold(priv->parent); + + down_write(&ppriv->vlan_rwsem); + list_add_tail(&priv->list, &ppriv->child_intfs); + up_write(&ppriv->vlan_rwsem); } return 0; @@ -1957,6 +1962,14 @@ static void ipoib_ndo_uninit(struct 
net_device *dev)
 	 */
 	WARN_ON(!list_empty(&priv->child_intfs));
+	if (priv->parent) {
+		struct ipoib_dev_priv *ppriv = ipoib_priv(priv->parent);
+
+		down_write(&ppriv->vlan_rwsem);
+		list_del(&priv->list);
+		up_write(&ppriv->vlan_rwsem);
+	}
+
 	ipoib_neigh_hash_uninit(dev);
 	ipoib_ib_dev_cleanup(dev);
@@ -1968,15 +1981,8 @@ static void ipoib_ndo_uninit(struct net_device *dev)
 		priv->wq = NULL;
 	}
-	if (priv->parent) {
-		struct ipoib_dev_priv *ppriv = ipoib_priv(priv->parent);
-
-		down_write(&ppriv->vlan_rwsem);
-		list_del(&priv->list);
-		up_write(&ppriv->vlan_rwsem);
-
+	if (priv->parent)
 		dev_put(priv->parent);
-	}
 }
 static int ipoib_set_vf_link_state(struct net_device *dev, int vf, int link_state)
@@ -1997,6 +2003,7 @@ static int ipoib_get_vf_config(struct net_device *dev, int vf,
 		return err;
 	ivf->vf = vf;
+	memcpy(ivf->mac, dev->dev_addr, dev->addr_len);
 	return 0;
 }
diff --git a/drivers/input/joystick/iforce/iforce-usb.c b/drivers/input/joystick/iforce/iforce-usb.c
index 78073259c9a1ad3d49575379e675239566ea295c..c431df7401b44bc8767bec6fa88b88209e7ac17d 100644
--- a/drivers/input/joystick/iforce/iforce-usb.c
+++ b/drivers/input/joystick/iforce/iforce-usb.c
@@ -141,7 +141,12 @@ static int iforce_usb_probe(struct usb_interface *intf,
 		return -ENODEV;
 	epirq = &interface->endpoint[0].desc;
+	if (!usb_endpoint_is_int_in(epirq))
+		return -ENODEV;
+
 	epout = &interface->endpoint[1].desc;
+	if (!usb_endpoint_is_int_out(epout))
+		return -ENODEV;
 	if (!(iforce = kzalloc(sizeof(struct iforce) + 32, GFP_KERNEL)))
 		goto fail;
diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
index 0a6f7ca883e7fe816a82d0dae915ab120e6681e9..dd80ff6cc4273079fa8f75806d89df97e914fa01 100644
--- a/drivers/input/mouse/alps.c
+++ b/drivers/input/mouse/alps.c
@@ -24,6 +24,7 @@
 #include "psmouse.h"
 #include "alps.h"
+#include "trackpoint.h"
 /*
  * Definitions for ALPS version 3 and 4 command mode protocol
@@ -2864,6 +2865,23 @@ static const struct alps_protocol_info *alps_match_table(unsigned char *e7,
 	return NULL;
 }
+static bool alps_is_cs19_trackpoint(struct psmouse *psmouse)
+{
+	u8 param[2] = { 0 };
+
+	if (ps2_command(&psmouse->ps2dev,
+			param, MAKE_PS2_CMD(0, 2, TP_READ_ID)))
+		return false;
+
+	/*
+	 * param[0] contains the trackpoint device variant_id while
+	 * param[1] contains the firmware_id. So far all alps
+	 * trackpoint-only devices have their variant_ids equal to
+	 * TP_VARIANT_ALPS and their firmware_ids are in the 0x20~0x2f range.
+	 */
+	return param[0] == TP_VARIANT_ALPS && ((param[1] & 0xf0) == 0x20);
+}
+
 static int alps_identify(struct psmouse *psmouse, struct alps_data *priv)
 {
 	const struct alps_protocol_info *protocol;
@@ -3164,6 +3182,20 @@ int alps_detect(struct psmouse *psmouse, bool set_properties)
 	if (error)
 		return error;
+	/*
+	 * ALPS cs19 is a trackpoint-only device, and uses a different
+	 * protocol than DualPoint ones, so we return -EINVAL here and let
+	 * trackpoint.c drive this device. If the trackpoint driver is not
+	 * enabled, the device will fall back to a bare PS/2 mouse.
+	 * If ps2_command() fails here, we depend on the psmouse_reset()
+	 * that immediately follows to reset the device to normal state.
+ */ + if (alps_is_cs19_trackpoint(psmouse)) { + psmouse_dbg(psmouse, + "ALPS CS19 trackpoint-only device detected, ignoring\n"); + return -EINVAL; + } + /* * Reset the device to make sure it is fully operational: * on some laptops, like certain Dell Latitudes, we may diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c index 8e6077d8e434a30cf76598164aa95cffcf2439fb..06cebde2422ea7589928351da4b117edcd0239e9 100644 --- a/drivers/input/mouse/synaptics.c +++ b/drivers/input/mouse/synaptics.c @@ -176,13 +176,16 @@ static const char * const smbus_pnp_ids[] = { "LEN0072", /* X1 Carbon Gen 5 (2017) - Elan/ALPS trackpoint */ "LEN0073", /* X1 Carbon G5 (Elantech) */ "LEN0092", /* X1 Carbon 6 */ + "LEN0093", /* T480 */ "LEN0096", /* X280 */ "LEN0097", /* X280 -> ALPS trackpoint */ + "LEN009b", /* T580 */ "LEN200f", /* T450s */ "LEN2054", /* E480 */ "LEN2055", /* E580 */ "SYN3052", /* HP EliteBook 840 G4 */ "SYN3221", /* HP 15-ay000 */ + "SYN323d", /* HP Spectre X360 13-w013dx */ NULL }; diff --git a/drivers/input/mouse/trackpoint.h b/drivers/input/mouse/trackpoint.h index 10a0391482343d4ba482336367b204cf64f6e47c..538986e5ac5bcf4f5c38267db393a30fedb0dee6 100644 --- a/drivers/input/mouse/trackpoint.h +++ b/drivers/input/mouse/trackpoint.h @@ -161,7 +161,8 @@ struct trackpoint_data { #ifdef CONFIG_MOUSE_PS2_TRACKPOINT int trackpoint_detect(struct psmouse *psmouse, bool set_properties); #else -inline int trackpoint_detect(struct psmouse *psmouse, bool set_properties) +static inline int trackpoint_detect(struct psmouse *psmouse, + bool set_properties) { return -ENOSYS; } diff --git a/drivers/input/tablet/gtco.c b/drivers/input/tablet/gtco.c index 4b8b9d7aa75e2785991fc5838bf55728c715e6fe..35031228a6d076cf0dd91dab411f049ec679df6b 100644 --- a/drivers/input/tablet/gtco.c +++ b/drivers/input/tablet/gtco.c @@ -78,6 +78,7 @@ Scott Hill shill@gtcocalcomp.com /* Max size of a single report */ #define REPORT_MAX_SIZE 10 +#define MAX_COLLECTION_LEVELS 10 /* Bitmask whether pen is in range */ @@ -223,8 +224,7 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report, char maintype = 'x'; char globtype[12]; int indent = 0; - char indentstr[10] = ""; - + char indentstr[MAX_COLLECTION_LEVELS + 1] = { 0 }; dev_dbg(ddev, "======>>>>>>PARSE<<<<<<======\n"); @@ -350,6 +350,13 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report, case TAG_MAIN_COL_START: maintype = 'S'; + if (indent == MAX_COLLECTION_LEVELS) { + dev_err(ddev, "Collection level %d would exceed limit of %d\n", + indent + 1, + MAX_COLLECTION_LEVELS); + break; + } + if (data == 0) { dev_dbg(ddev, "======>>>>>> Physical\n"); strcpy(globtype, "Physical"); @@ -369,8 +376,15 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report, break; case TAG_MAIN_COL_END: - dev_dbg(ddev, "<<<<<<======\n"); maintype = 'E'; + + if (indent == 0) { + dev_err(ddev, "Collection level already at zero\n"); + break; + } + + dev_dbg(ddev, "<<<<<<======\n"); + indent--; for (x = 0; x < indent; x++) indentstr[x] = '-'; diff --git a/drivers/input/tablet/kbtab.c b/drivers/input/tablet/kbtab.c index 75b500651e4e4051f1ec403e02febed6d59cd72f..b1cf0c9712740dc9b552907160d11b37fedecfb2 100644 --- a/drivers/input/tablet/kbtab.c +++ b/drivers/input/tablet/kbtab.c @@ -116,6 +116,10 @@ static int kbtab_probe(struct usb_interface *intf, const struct usb_device_id *i if (intf->cur_altsetting->desc.bNumEndpoints < 1) return -ENODEV; + endpoint = &intf->cur_altsetting->endpoint[0].desc; + if 
(!usb_endpoint_is_int_in(endpoint)) + return -ENODEV; + kbtab = kzalloc(sizeof(struct kbtab), GFP_KERNEL); input_dev = input_allocate_device(); if (!kbtab || !input_dev) @@ -154,8 +158,6 @@ static int kbtab_probe(struct usb_interface *intf, const struct usb_device_id *i input_set_abs_params(input_dev, ABS_Y, 0, 0x1750, 4, 0); input_set_abs_params(input_dev, ABS_PRESSURE, 0, 0xff, 0, 0); - endpoint = &intf->cur_altsetting->endpoint[0].desc; - usb_fill_int_urb(kbtab->irq, dev, usb_rcvintpipe(dev, endpoint->bEndpointAddress), kbtab->data, 8, diff --git a/drivers/input/touchscreen/usbtouchscreen.c b/drivers/input/touchscreen/usbtouchscreen.c index d61570d64ee76bd8ac161daa6a9dc55b23a5ee40..48304e26f988b15f0947d1ecccfa4e6e5878b995 100644 --- a/drivers/input/touchscreen/usbtouchscreen.c +++ b/drivers/input/touchscreen/usbtouchscreen.c @@ -1672,6 +1672,8 @@ static int usbtouch_probe(struct usb_interface *intf, if (!usbtouch || !input_dev) goto out_free; + mutex_init(&usbtouch->pm_mutex); + type = &usbtouch_dev_info[id->driver_info]; usbtouch->type = type; if (!type->process_pkt) diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c index 3a1d30304f7e9167a0a448b8132b5c93a535abff..66b4800bcdd8b5847032c8ead59a94d57c31a0d8 100644 --- a/drivers/iommu/amd_iommu_init.c +++ b/drivers/iommu/amd_iommu_init.c @@ -1710,7 +1710,7 @@ static const struct attribute_group *amd_iommu_groups[] = { NULL, }; -static int iommu_init_pci(struct amd_iommu *iommu) +static int __init iommu_init_pci(struct amd_iommu *iommu) { int cap_ptr = iommu->cap_ptr; u32 range, misc, low, high; diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 511ff9a1d6d94087bffaaac226055bf892e08d91..f9dbb064f95719640be283979275f669b1462c63 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -675,7 +675,7 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents, * - and wouldn't make the resulting output segment too long */ if (cur_len && !s_iova_off && (dma_addr & seg_mask) && - (cur_len + s_length <= max_len)) { + (max_len - cur_len >= s_length)) { /* ...then concatenate it with the previous one */ cur_len += s_length; } else { diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c index c1439019dd127edcba6a59e3aebf2ff35bdb023d..b9af2419006f8341563551d2f32c1226c081b249 100644 --- a/drivers/iommu/intel-iommu.c +++ b/drivers/iommu/intel-iommu.c @@ -3721,7 +3721,7 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size) freelist = domain_unmap(domain, start_pfn, last_pfn); - if (intel_iommu_strict) { + if (intel_iommu_strict || !has_iova_flush_queue(&domain->iovad)) { iommu_flush_iotlb_psi(iommu, domain, start_pfn, nrpages, !freelist, 0); /* free iova */ diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 8c15c598029934484e26520aa4aa7aa703a04ca3..bc14825edc9cbe508ada87af24dfef388f7b72c7 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -211,18 +211,21 @@ static int iommu_insert_resv_region(struct iommu_resv_region *new, pos = pos->next; } else if ((start >= a) && (end <= b)) { if (new->type == type) - goto done; + return 0; else pos = pos->next; } else { if (new->type == type) { phys_addr_t new_start = min(a, start); phys_addr_t new_end = max(b, end); + int ret; list_del(&entry->list); entry->start = new_start; entry->length = new_end - new_start + 1; - iommu_insert_resv_region(entry, regions); + ret = iommu_insert_resv_region(entry, regions); + kfree(entry); + return ret; } else { pos = 
pos->next;
 		}
@@ -235,7 +238,6 @@ insert:
 		return -ENOMEM;
 	list_add_tail(&region->list, pos);
-done:
 	return 0;
 }
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 83fe2621effe72bc1cbeecd80df4030235d87328..60348d707b9932e98bba82f4c62cd0930b562305 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -65,9 +65,14 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);
+bool has_iova_flush_queue(struct iova_domain *iovad)
+{
+	return !!iovad->fq;
+}
+
 static void free_iova_flush_queue(struct iova_domain *iovad)
 {
-	if (!iovad->fq)
+	if (!has_iova_flush_queue(iovad))
 		return;
 	if (timer_pending(&iovad->fq_timer))
@@ -85,13 +90,14 @@ static void free_iova_flush_queue(struct iova_domain *iovad)
 int init_iova_flush_queue(struct iova_domain *iovad,
 			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor)
 {
+	struct iova_fq __percpu *queue;
 	int cpu;
 	atomic64_set(&iovad->fq_flush_start_cnt, 0);
 	atomic64_set(&iovad->fq_flush_finish_cnt, 0);
-	iovad->fq = alloc_percpu(struct iova_fq);
-	if (!iovad->fq)
+	queue = alloc_percpu(struct iova_fq);
+	if (!queue)
 		return -ENOMEM;
 	iovad->flush_cb = flush_cb;
@@ -100,13 +106,17 @@ int init_iova_flush_queue(struct iova_domain *iovad,
 	for_each_possible_cpu(cpu) {
 		struct iova_fq *fq;
-		fq = per_cpu_ptr(iovad->fq, cpu);
+		fq = per_cpu_ptr(queue, cpu);
 		fq->head = 0;
 		fq->tail = 0;
 		spin_lock_init(&fq->lock);
 	}
+	smp_wmb();
+
+	iovad->fq = queue;
+
 	timer_setup(&iovad->fq_timer, fq_flush_timeout, 0);
 	atomic_set(&iovad->fq_timer_on, 0);
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 21681f0f85f43eccbd38315501149e50ef36f788..a1d9617b625040ac26edcff58c5a7a586a723a88 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -741,32 +741,43 @@ static void its_flush_cmd(struct its_node *its, struct its_cmd_block *cmd)
 }
 static int its_wait_for_range_completion(struct its_node *its,
-					 struct its_cmd_block *from,
+					 u64 prev_idx,
 					 struct its_cmd_block *to)
 {
-	u64 rd_idx, from_idx, to_idx;
+	u64 rd_idx, to_idx, linear_idx;
 	u32 count = 1000000;	/* 1s! */
-	from_idx = its_cmd_ptr_to_offset(its, from);
+	/* Linearize to_idx if the command set has wrapped around */
 	to_idx = its_cmd_ptr_to_offset(its, to);
+	if (to_idx < prev_idx)
+		to_idx += ITS_CMD_QUEUE_SZ;
+
+	linear_idx = prev_idx;
 	while (1) {
+		s64 delta;
+
 		rd_idx = readl_relaxed(its->base + GITS_CREADR);
-		/* Direct case */
-		if (from_idx < to_idx && rd_idx >= to_idx)
-			break;
+		/*
+		 * Compute the read pointer progress, taking the
+		 * potential wrap-around into account.
+ */ + delta = rd_idx - prev_idx; + if (rd_idx < prev_idx) + delta += ITS_CMD_QUEUE_SZ; - /* Wrapped case */ - if (from_idx >= to_idx && rd_idx >= to_idx && rd_idx < from_idx) + linear_idx += delta; + if (linear_idx >= to_idx) break; count--; if (!count) { - pr_err_ratelimited("ITS queue timeout (%llu %llu %llu)\n", - from_idx, to_idx, rd_idx); + pr_err_ratelimited("ITS queue timeout (%llu %llu)\n", + to_idx, linear_idx); return -1; } + prev_idx = rd_idx; cpu_relax(); udelay(1); } @@ -783,6 +794,7 @@ void name(struct its_node *its, \ struct its_cmd_block *cmd, *sync_cmd, *next_cmd; \ synctype *sync_obj; \ unsigned long flags; \ + u64 rd_idx; \ \ raw_spin_lock_irqsave(&its->lock, flags); \ \ @@ -804,10 +816,11 @@ void name(struct its_node *its, \ } \ \ post: \ + rd_idx = readl_relaxed(its->base + GITS_CREADR); \ next_cmd = its_post_commands(its); \ raw_spin_unlock_irqrestore(&its->lock, flags); \ \ - if (its_wait_for_range_completion(its, cmd, next_cmd)) \ + if (its_wait_for_range_completion(its, rd_idx, next_cmd)) \ pr_err_ratelimited("ITS cmd %ps failed\n", builder); \ } @@ -2892,7 +2905,7 @@ static int its_vpe_init(struct its_vpe *vpe) if (!its_alloc_vpe_table(vpe_id)) { its_vpe_id_free(vpe_id); - its_free_pending_table(vpe->vpt_page); + its_free_pending_table(vpt_page); return -ENOMEM; } diff --git a/drivers/irqchip/irq-imx-gpcv2.c b/drivers/irqchip/irq-imx-gpcv2.c index 4760307ab43fc33404b6b2ec07b2c3b49a6f6405..cef8f5e2e8fce9a1bb5e9609ec500fa1a2d9d149 100644 --- a/drivers/irqchip/irq-imx-gpcv2.c +++ b/drivers/irqchip/irq-imx-gpcv2.c @@ -131,6 +131,7 @@ static struct irq_chip gpcv2_irqchip_data_chip = { .irq_unmask = imx_gpcv2_irq_unmask, .irq_set_wake = imx_gpcv2_irq_set_wake, .irq_retrigger = irq_chip_retrigger_hierarchy, + .irq_set_type = irq_chip_set_type_parent, #ifdef CONFIG_SMP .irq_set_affinity = irq_chip_set_affinity_parent, #endif diff --git a/drivers/irqchip/irq-meson-gpio.c b/drivers/irqchip/irq-meson-gpio.c index 7b531fd075b885396c378ef84bf0aaa110dc3c0f..7599b10ecf09d153e8b2b563f2ce515f19eb3358 100644 --- a/drivers/irqchip/irq-meson-gpio.c +++ b/drivers/irqchip/irq-meson-gpio.c @@ -73,6 +73,7 @@ static const struct of_device_id meson_irq_gpio_matches[] = { { .compatible = "amlogic,meson-gxbb-gpio-intc", .data = &gxbb_params }, { .compatible = "amlogic,meson-gxl-gpio-intc", .data = &gxl_params }, { .compatible = "amlogic,meson-axg-gpio-intc", .data = &axg_params }, + { .compatible = "amlogic,meson-g12a-gpio-intc", .data = &axg_params }, { } }; diff --git a/drivers/isdn/hardware/mISDN/hfcsusb.c b/drivers/isdn/hardware/mISDN/hfcsusb.c index 6d05946b445eb039aeb6c9c755e94dbe8b8f1dac..c952002c6301d8c219c51243e22d38c5bee8b2ec 100644 --- a/drivers/isdn/hardware/mISDN/hfcsusb.c +++ b/drivers/isdn/hardware/mISDN/hfcsusb.c @@ -1406,6 +1406,7 @@ start_isoc_chain(struct usb_fifo *fifo, int num_packets_per_urb, printk(KERN_DEBUG "%s: %s: alloc urb for fifo %i failed", hw->name, __func__, fifo->fifonum); + continue; } fifo->iso[i].owner_fifo = (struct usb_fifo *) fifo; fifo->iso[i].indx = i; @@ -1704,13 +1705,23 @@ hfcsusb_stop_endpoint(struct hfcsusb *hw, int channel) static int setup_hfcsusb(struct hfcsusb *hw) { + void *dmabuf = kmalloc(sizeof(u_char), GFP_KERNEL); u_char b; + int ret; if (debug & DBG_HFC_CALL_TRACE) printk(KERN_DEBUG "%s: %s\n", hw->name, __func__); + if (!dmabuf) + return -ENOMEM; + + ret = read_reg_atomic(hw, HFCUSB_CHIP_ID, dmabuf); + + memcpy(&b, dmabuf, sizeof(u_char)); + kfree(dmabuf); + /* check the chip id */ - if (read_reg_atomic(hw, HFCUSB_CHIP_ID, &b) != 1) 
{ + if (ret != 1) { printk(KERN_DEBUG "%s: %s: cannot read chip id\n", hw->name, __func__); return 1; @@ -1967,6 +1978,9 @@ hfcsusb_probe(struct usb_interface *intf, const struct usb_device_id *id) /* get endpoint base */ idx = ((ep_addr & 0x7f) - 1) * 2; + if (idx > 15) + return -EIO; + if (ep_addr & 0x80) idx++; attr = ep->desc.bmAttributes; diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c index 95be6e36c7ddf5f16d4ebf16363ae121ce65ad95..80710c62ac29313073399cc16b5c87d01cc81759 100644 --- a/drivers/lightnvm/pblk-core.c +++ b/drivers/lightnvm/pblk-core.c @@ -288,14 +288,16 @@ void pblk_free_rqd(struct pblk *pblk, struct nvm_rq *rqd, int type) void pblk_bio_free_pages(struct pblk *pblk, struct bio *bio, int off, int nr_pages) { - struct bio_vec bv; - int i; - - WARN_ON(off + nr_pages != bio->bi_vcnt); - - for (i = off; i < nr_pages + off; i++) { - bv = bio->bi_io_vec[i]; - mempool_free(bv.bv_page, &pblk->page_bio_pool); + struct bio_vec *bv; + struct page *page; + int i, e, nbv = 0; + + for (i = 0; i < bio->bi_vcnt; i++) { + bv = &bio->bi_io_vec[i]; + page = bv->bv_page; + for (e = 0; e < bv->bv_len; e += PBLK_EXPOSED_PAGE_SIZE, nbv++) + if (nbv >= off) + mempool_free(page++, &pblk->page_bio_pool); } } diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c index 674b35f402f5e939833bdbde4a39399689d653f9..055c90b8253cbe37749cb94338ef45b01574056b 100644 --- a/drivers/mailbox/mailbox.c +++ b/drivers/mailbox/mailbox.c @@ -391,11 +391,13 @@ struct mbox_chan *mbox_request_channel_byname(struct mbox_client *cl, of_property_for_each_string(np, "mbox-names", prop, mbox_name) { if (!strncmp(name, mbox_name, strlen(name))) - break; + return mbox_request_channel(cl, index); index++; } - return mbox_request_channel(cl, index); + dev_err(cl->dev, "%s() could not locate channel named \"%s\"\n", + __func__, name); + return ERR_PTR(-EINVAL); } EXPORT_SYMBOL_GPL(mbox_request_channel_byname); diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c index de85b3af3b39dd289e117fc655032a757b5b98c4..9c3beb1e382b9ed616cc039662987d99618511e6 100644 --- a/drivers/md/bcache/alloc.c +++ b/drivers/md/bcache/alloc.c @@ -393,6 +393,11 @@ long bch_bucket_alloc(struct cache *ca, unsigned int reserve, bool wait) struct bucket *b; long r; + + /* No allocation if CACHE_SET_IO_DISABLE bit is set */ + if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &ca->set->flags))) + return -1; + /* fastpath */ if (fifo_pop(&ca->free[RESERVE_NONE], r) || fifo_pop(&ca->free[reserve], r)) @@ -484,6 +489,10 @@ int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve, { int i; + /* No allocation if CACHE_SET_IO_DISABLE bit is set */ + if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &c->flags))) + return -1; + lockdep_assert_held(&c->bucket_lock); BUG_ON(!n || n > c->caches_loaded || n > 8); diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h index 954dad29e6e8fca910b0ebd24171591f2acd0831..83f0b91aeb90d48d3e6db32bf3a83cdc06217e92 100644 --- a/drivers/md/bcache/bcache.h +++ b/drivers/md/bcache/bcache.h @@ -708,8 +708,6 @@ struct cache_set { #define BUCKET_HASH_BITS 12 struct hlist_head bucket_hash[1 << BUCKET_HASH_BITS]; - - DECLARE_HEAP(struct btree *, flush_btree); }; struct bbio { diff --git a/drivers/md/bcache/io.c b/drivers/md/bcache/io.c index c250979683194c69f8ea1b9585e13907c877eb3a..4d93f07f63e515c23e0792331d87f3c2cea1c485 100644 --- a/drivers/md/bcache/io.c +++ b/drivers/md/bcache/io.c @@ -58,6 +58,18 @@ void bch_count_backing_io_errors(struct cached_dev *dc, struct bio 
*bio) WARN_ONCE(!dc, "NULL pointer of struct cached_dev"); + /* + * Read-ahead requests on a degrading and recovering md raid + * (e.g. raid6) device might be failured immediately by md + * raid code, which is not a real hardware media failure. So + * we shouldn't count failed REQ_RAHEAD bio to dc->io_errors. + */ + if (bio->bi_opf & REQ_RAHEAD) { + pr_warn_ratelimited("%s: Read-ahead I/O failed on backing device, ignore", + dc->backing_dev_name); + return; + } + errors = atomic_add_return(1, &dc->io_errors); if (errors < dc->error_limit) pr_err("%s: IO error on backing device, unrecoverable", diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c index f880e5eba8dd9a7e9e4030934453a3c2501e472c..ec1e35a62934d1e0d9be5374fb27b9695cf2d75c 100644 --- a/drivers/md/bcache/journal.c +++ b/drivers/md/bcache/journal.c @@ -390,12 +390,6 @@ err: } /* Journalling */ -#define journal_max_cmp(l, r) \ - (fifo_idx(&c->journal.pin, btree_current_write(l)->journal) < \ - fifo_idx(&(c)->journal.pin, btree_current_write(r)->journal)) -#define journal_min_cmp(l, r) \ - (fifo_idx(&c->journal.pin, btree_current_write(l)->journal) > \ - fifo_idx(&(c)->journal.pin, btree_current_write(r)->journal)) static void btree_flush_write(struct cache_set *c) { @@ -403,35 +397,25 @@ static void btree_flush_write(struct cache_set *c) * Try to find the btree node with that references the oldest journal * entry, best is our current candidate and is locked if non NULL: */ - struct btree *b; - int i; + struct btree *b, *best; + unsigned int i; atomic_long_inc(&c->flush_write); - retry: - spin_lock(&c->journal.lock); - if (heap_empty(&c->flush_btree)) { - for_each_cached_btree(b, c, i) - if (btree_current_write(b)->journal) { - if (!heap_full(&c->flush_btree)) - heap_add(&c->flush_btree, b, - journal_max_cmp); - else if (journal_max_cmp(b, - heap_peek(&c->flush_btree))) { - c->flush_btree.data[0] = b; - heap_sift(&c->flush_btree, 0, - journal_max_cmp); - } + best = NULL; + + for_each_cached_btree(b, c, i) + if (btree_current_write(b)->journal) { + if (!best) + best = b; + else if (journal_pin_cmp(c, + btree_current_write(best)->journal, + btree_current_write(b)->journal)) { + best = b; } + } - for (i = c->flush_btree.used / 2 - 1; i >= 0; --i) - heap_sift(&c->flush_btree, i, journal_min_cmp); - } - - b = NULL; - heap_pop(&c->flush_btree, b, journal_min_cmp); - spin_unlock(&c->journal.lock); - + b = best; if (b) { mutex_lock(&b->write_lock); if (!btree_current_write(b)->journal) { @@ -810,6 +794,10 @@ atomic_t *bch_journal(struct cache_set *c, struct journal_write *w; atomic_t *ret; + /* No journaling if CACHE_SET_IO_DISABLE set already */ + if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &c->flags))) + return NULL; + if (!CACHE_SYNC(&c->sb)) return NULL; @@ -854,7 +842,6 @@ void bch_journal_free(struct cache_set *c) free_pages((unsigned long) c->journal.w[1].data, JSET_BITS); free_pages((unsigned long) c->journal.w[0].data, JSET_BITS); free_fifo(&c->journal.pin); - free_heap(&c->flush_btree); } int bch_journal_alloc(struct cache_set *c) @@ -869,8 +856,7 @@ int bch_journal_alloc(struct cache_set *c) j->w[0].c = c; j->w[1].c = c; - if (!(init_heap(&c->flush_btree, 128, GFP_KERNEL)) || - !(init_fifo(&j->pin, JOURNAL_PIN, GFP_KERNEL)) || + if (!(init_fifo(&j->pin, JOURNAL_PIN, GFP_KERNEL)) || !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)) || !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS))) return -ENOMEM; diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c index 
2409507d7bff854595578b161225c86f92a46053..e6c7a84bb1dfd9a97fdef05dbe24c5df50fc653e 100644 --- a/drivers/md/bcache/super.c +++ b/drivers/md/bcache/super.c @@ -1180,18 +1180,16 @@ static void cached_dev_free(struct closure *cl) { struct cached_dev *dc = container_of(cl, struct cached_dev, disk.cl); - mutex_lock(&bch_register_lock); - if (test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) cancel_writeback_rate_update_dwork(dc); if (!IS_ERR_OR_NULL(dc->writeback_thread)) kthread_stop(dc->writeback_thread); - if (dc->writeback_write_wq) - destroy_workqueue(dc->writeback_write_wq); if (!IS_ERR_OR_NULL(dc->status_update_thread)) kthread_stop(dc->status_update_thread); + mutex_lock(&bch_register_lock); + if (atomic_read(&dc->running)) bd_unlink_disk_holder(dc->bdev, dc->disk.disk); bcache_device_free(&dc->disk); @@ -1425,8 +1423,6 @@ int bch_flash_dev_create(struct cache_set *c, uint64_t size) bool bch_cached_dev_error(struct cached_dev *dc) { - struct cache_set *c; - if (!dc || test_bit(BCACHE_DEV_CLOSING, &dc->disk.flags)) return false; @@ -1437,21 +1433,6 @@ bool bch_cached_dev_error(struct cached_dev *dc) pr_err("stop %s: too many IO errors on backing device %s\n", dc->disk.disk->disk_name, dc->backing_dev_name); - /* - * If the cached device is still attached to a cache set, - * even dc->io_disable is true and no more I/O requests - * accepted, cache device internal I/O (writeback scan or - * garbage collection) may still prevent bcache device from - * being stopped. So here CACHE_SET_IO_DISABLE should be - * set to c->flags too, to make the internal I/O to cache - * device rejected and stopped immediately. - * If c is NULL, that means the bcache device is not attached - * to any cache set, then no CACHE_SET_IO_DISABLE bit to set. - */ - c = dc->disk.c; - if (c && test_and_set_bit(CACHE_SET_IO_DISABLE, &c->flags)) - pr_info("CACHE_SET_IO_DISABLE already set"); - bcache_device_stop(&dc->disk); return true; } @@ -1552,7 +1533,7 @@ static void cache_set_flush(struct closure *cl) kobject_put(&c->internal); kobject_del(&c->kobj); - if (c->gc_thread) + if (!IS_ERR_OR_NULL(c->gc_thread)) kthread_stop(c->gc_thread); if (!IS_ERR_OR_NULL(c->root)) diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c index 541454b4f479ca5e8b2f24ce76d8188e5f989b95..5bb81e564ce8827869b12f5da6f6fbf6a2ac5d90 100644 --- a/drivers/md/bcache/sysfs.c +++ b/drivers/md/bcache/sysfs.c @@ -175,7 +175,7 @@ SHOW(__bch_cached_dev) var_print(writeback_percent); sysfs_hprint(writeback_rate, wb ? 
atomic_long_read(&dc->writeback_rate.rate) << 9 : 0); - sysfs_hprint(io_errors, atomic_read(&dc->io_errors)); + sysfs_printf(io_errors, "%i", atomic_read(&dc->io_errors)); sysfs_printf(io_error_limit, "%i", dc->error_limit); sysfs_printf(io_disable, "%i", dc->io_disable); var_print(writeback_rate_update_seconds); @@ -426,7 +426,7 @@ static struct attribute *bch_cached_dev_files[] = { &sysfs_writeback_rate_p_term_inverse, &sysfs_writeback_rate_minimum, &sysfs_writeback_rate_debug, - &sysfs_errors, + &sysfs_io_errors, &sysfs_io_error_limit, &sysfs_io_disable, &sysfs_dirty_data, diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h index 00aab6abcfe4fd3bb4e625780de0badee109dbf0..b1f5b7aea8724e33fa9eb28d51983f6885e369a7 100644 --- a/drivers/md/bcache/util.h +++ b/drivers/md/bcache/util.h @@ -113,8 +113,6 @@ do { \ #define heap_full(h) ((h)->used == (h)->size) -#define heap_empty(h) ((h)->used == 0) - #define DECLARE_FIFO(type, name) \ struct { \ size_t front, back, size, mask; \ diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c index 08c3a9f9676c9e89649b56f765fb378973149a62..ba5395fd386d562ac070158fa03666e9197e5ce7 100644 --- a/drivers/md/bcache/writeback.c +++ b/drivers/md/bcache/writeback.c @@ -708,6 +708,10 @@ static int bch_writeback_thread(void *arg) } } + if (dc->writeback_write_wq) { + flush_workqueue(dc->writeback_write_wq); + destroy_workqueue(dc->writeback_write_wq); + } cached_dev_put(dc); wait_for_kthread_stop(); @@ -803,6 +807,7 @@ int bch_cached_dev_writeback_start(struct cached_dev *dc) "bcache_writeback"); if (IS_ERR(dc->writeback_thread)) { cached_dev_put(dc); + destroy_workqueue(dc->writeback_write_wq); return PTR_ERR(dc->writeback_thread); } diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h index 7d480c930eaf0a1f4fe90238d830ea4ee61daebb..7e426e4d1352823235e621aac3e7be22ca891b83 100644 --- a/drivers/md/dm-core.h +++ b/drivers/md/dm-core.h @@ -130,6 +130,7 @@ struct mapped_device { }; int md_in_flight(struct mapped_device *md); +void disable_discard(struct mapped_device *md); void disable_write_same(struct mapped_device *md); void disable_write_zeroes(struct mapped_device *md); diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c index dbdcc543832dfe10da9dcac2330d91a692888e66..2e22d588f0563ccc5492889f4a13cd6f66905f1e 100644 --- a/drivers/md/dm-integrity.c +++ b/drivers/md/dm-integrity.c @@ -1749,7 +1749,22 @@ offload_to_thread: queue_work(ic->wait_wq, &dio->work); return; } + if (journal_read_pos != NOT_FOUND) + dio->range.n_sectors = ic->sectors_per_block; wait_and_add_new_range(ic, &dio->range); + /* + * wait_and_add_new_range drops the spinlock, so the journal + * may have been changed arbitrarily. We need to recheck. + * To simplify the code, we restrict I/O size to just one block. + */ + if (journal_read_pos != NOT_FOUND) { + sector_t next_sector; + unsigned new_pos = find_journal_node(ic, dio->range.logical_sector, &next_sector); + if (unlikely(new_pos != journal_read_pos)) { + remove_range_unlocked(ic, &dio->range); + goto retry; + } + } } spin_unlock_irq(&ic->endio_wait.lock); diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c index 671c24332802e5702de7353d3a9479c004616657..3f694d9061ec5fd53e2449f601cee75d3ec6023e 100644 --- a/drivers/md/dm-kcopyd.c +++ b/drivers/md/dm-kcopyd.c @@ -548,8 +548,10 @@ static int run_io_job(struct kcopyd_job *job) * no point in continuing. 
*/ if (test_bit(DM_KCOPYD_WRITE_SEQ, &job->flags) && - job->master_job->write_err) + job->master_job->write_err) { + job->write_err = job->master_job->write_err; return -EIO; + } io_job_start(job->kc->throttle); @@ -601,6 +603,7 @@ static int process_jobs(struct list_head *jobs, struct dm_kcopyd_client *kc, else job->read_err = 1; push(&kc->complete_jobs, job); + wake(kc); break; } diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c index c44925e4e4813d246d0d208fef5587019e4afb17..b78a8a4d061caf0b6f396b66e4a7f9a5478a5323 100644 --- a/drivers/md/dm-raid.c +++ b/drivers/md/dm-raid.c @@ -3199,7 +3199,7 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv) */ r = rs_prepare_reshape(rs); if (r) - return r; + goto bad; /* Reshaping ain't recovery, so disable recovery */ rs_setup_recovery(rs, MaxSector); diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c index 29736c7e5f1f73f3f360e3229ef76d1cca7ba6fc..5e4b13675d32104175ed76bb04d53495e64a7a13 100644 --- a/drivers/md/dm-rq.c +++ b/drivers/md/dm-rq.c @@ -295,11 +295,14 @@ static void dm_done(struct request *clone, blk_status_t error, bool mapped) } if (unlikely(error == BLK_STS_TARGET)) { - if (req_op(clone) == REQ_OP_WRITE_SAME && - !clone->q->limits.max_write_same_sectors) + if (req_op(clone) == REQ_OP_DISCARD && + !clone->q->limits.max_discard_sectors) + disable_discard(tio->md); + else if (req_op(clone) == REQ_OP_WRITE_SAME && + !clone->q->limits.max_write_same_sectors) disable_write_same(tio->md); - if (req_op(clone) == REQ_OP_WRITE_ZEROES && - !clone->q->limits.max_write_zeroes_sectors) + else if (req_op(clone) == REQ_OP_WRITE_ZEROES && + !clone->q->limits.max_write_zeroes_sectors) disable_write_zeroes(tio->md); } diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index c7fe4789c40efc6d0892c202773327bc6492e34c..36275c59e4e7b06aa41c45fc36d5d4b30637486b 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -562,7 +562,7 @@ static char **realloc_argv(unsigned *size, char **old_argv) gfp = GFP_NOIO; } argv = kmalloc_array(new_size, sizeof(*argv), gfp); - if (argv) { + if (argv && old_argv) { memcpy(argv, old_argv, *size * sizeof(*argv)); *size = new_size; } @@ -1349,7 +1349,7 @@ void dm_table_event(struct dm_table *t) } EXPORT_SYMBOL(dm_table_event); -sector_t dm_table_get_size(struct dm_table *t) +inline sector_t dm_table_get_size(struct dm_table *t) { return t->num_targets ? 
(t->highs[t->num_targets - 1] + 1) : 0; } @@ -1374,6 +1374,9 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector) unsigned int l, n = 0, k = 0; sector_t *node; + if (unlikely(sector >= dm_table_get_size(t))) + return &t->targets[t->num_targets]; + for (l = 0; l < t->depth; l++) { n = get_child(n, k); node = get_node(t, l, n); diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index fc65f0dedf7f702b31d6adf21b8a2b26238c8b43..e3599b43f9eb984ccb52e8484ec5e7b6c3a8cd54 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -236,8 +236,8 @@ static int verity_handle_err(struct dm_verity *v, enum verity_block_type type, BUG(); } - DMERR("%s: %s block %llu is corrupted", v->data_dev->name, type_str, - block); + DMERR_LIMIT("%s: %s block %llu is corrupted", v->data_dev->name, + type_str, block); if (v->corrupted_errs == DM_VERITY_MAX_CORRUPTED_ERRS) DMERR("%s: reached maximum errors", v->data_dev->name); diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c index d8334cd45d7cb5eb90745b09463cf4daaa231021..7e8d7fc99410dff89b1fbd7380dbed5bb97c4a27 100644 --- a/drivers/md/dm-zoned-metadata.c +++ b/drivers/md/dm-zoned-metadata.c @@ -401,15 +401,18 @@ static struct dmz_mblock *dmz_get_mblock_slow(struct dmz_metadata *zmd, sector_t block = zmd->sb[zmd->mblk_primary].block + mblk_no; struct bio *bio; + if (dmz_bdev_is_dying(zmd->dev)) + return ERR_PTR(-EIO); + /* Get a new block and a BIO to read it */ mblk = dmz_alloc_mblock(zmd, mblk_no); if (!mblk) - return NULL; + return ERR_PTR(-ENOMEM); bio = bio_alloc(GFP_NOIO, 1); if (!bio) { dmz_free_mblock(zmd, mblk); - return NULL; + return ERR_PTR(-ENOMEM); } spin_lock(&zmd->mblk_lock); @@ -540,8 +543,8 @@ static struct dmz_mblock *dmz_get_mblock(struct dmz_metadata *zmd, if (!mblk) { /* Cache miss: read the block from disk */ mblk = dmz_get_mblock_slow(zmd, mblk_no); - if (!mblk) - return ERR_PTR(-ENOMEM); + if (IS_ERR(mblk)) + return mblk; } /* Wait for on-going read I/O and check for error */ @@ -569,16 +572,19 @@ static void dmz_dirty_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk) /* * Issue a metadata block write BIO. 
*/ -static void dmz_write_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk, - unsigned int set) +static int dmz_write_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk, + unsigned int set) { sector_t block = zmd->sb[set].block + mblk->no; struct bio *bio; + if (dmz_bdev_is_dying(zmd->dev)) + return -EIO; + bio = bio_alloc(GFP_NOIO, 1); if (!bio) { set_bit(DMZ_META_ERROR, &mblk->state); - return; + return -ENOMEM; } set_bit(DMZ_META_WRITING, &mblk->state); @@ -590,6 +596,8 @@ static void dmz_write_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk, bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_META | REQ_PRIO); bio_add_page(bio, mblk->page, DMZ_BLOCK_SIZE, 0); submit_bio(bio); + + return 0; } /* @@ -601,6 +609,9 @@ static int dmz_rdwr_block(struct dmz_metadata *zmd, int op, sector_t block, struct bio *bio; int ret; + if (dmz_bdev_is_dying(zmd->dev)) + return -EIO; + bio = bio_alloc(GFP_NOIO, 1); if (!bio) return -ENOMEM; @@ -658,22 +669,29 @@ static int dmz_write_dirty_mblocks(struct dmz_metadata *zmd, { struct dmz_mblock *mblk; struct blk_plug plug; - int ret = 0; + int ret = 0, nr_mblks_submitted = 0; /* Issue writes */ blk_start_plug(&plug); - list_for_each_entry(mblk, write_list, link) - dmz_write_mblock(zmd, mblk, set); + list_for_each_entry(mblk, write_list, link) { + ret = dmz_write_mblock(zmd, mblk, set); + if (ret) + break; + nr_mblks_submitted++; + } blk_finish_plug(&plug); /* Wait for completion */ list_for_each_entry(mblk, write_list, link) { + if (!nr_mblks_submitted) + break; wait_on_bit_io(&mblk->state, DMZ_META_WRITING, TASK_UNINTERRUPTIBLE); if (test_bit(DMZ_META_ERROR, &mblk->state)) { clear_bit(DMZ_META_ERROR, &mblk->state); ret = -EIO; } + nr_mblks_submitted--; } /* Flush drive cache (this will also sync data) */ @@ -735,6 +753,11 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) */ dmz_lock_flush(zmd); + if (dmz_bdev_is_dying(zmd->dev)) { + ret = -EIO; + goto out; + } + /* Get dirty blocks */ spin_lock(&zmd->mblk_lock); list_splice_init(&zmd->mblk_dirty_list, &write_list); @@ -1534,7 +1557,7 @@ static struct dm_zone *dmz_get_rnd_zone_for_reclaim(struct dmz_metadata *zmd) struct dm_zone *zone; if (list_empty(&zmd->map_rnd_list)) - return NULL; + return ERR_PTR(-EBUSY); list_for_each_entry(zone, &zmd->map_rnd_list, link) { if (dmz_is_buf(zone)) @@ -1545,7 +1568,7 @@ static struct dm_zone *dmz_get_rnd_zone_for_reclaim(struct dmz_metadata *zmd) return dzone; } - return NULL; + return ERR_PTR(-EBUSY); } /* @@ -1556,7 +1579,7 @@ static struct dm_zone *dmz_get_seq_zone_for_reclaim(struct dmz_metadata *zmd) struct dm_zone *zone; if (list_empty(&zmd->map_seq_list)) - return NULL; + return ERR_PTR(-EBUSY); list_for_each_entry(zone, &zmd->map_seq_list, link) { if (!zone->bzone) @@ -1565,7 +1588,7 @@ static struct dm_zone *dmz_get_seq_zone_for_reclaim(struct dmz_metadata *zmd) return zone; } - return NULL; + return ERR_PTR(-EBUSY); } /* @@ -1593,30 +1616,6 @@ struct dm_zone *dmz_get_zone_for_reclaim(struct dmz_metadata *zmd) return zone; } -/* - * Activate a zone (increment its reference count). - */ -void dmz_activate_zone(struct dm_zone *zone) -{ - set_bit(DMZ_ACTIVE, &zone->flags); - atomic_inc(&zone->refcount); -} - -/* - * Deactivate a zone. This decrement the zone reference counter - * and clears the active state of the zone once the count reaches 0, - * indicating that all BIOs to the zone have completed. Returns - * true if the zone was deactivated. 
- */ -void dmz_deactivate_zone(struct dm_zone *zone) -{ - if (atomic_dec_and_test(&zone->refcount)) { - WARN_ON(!test_bit(DMZ_ACTIVE, &zone->flags)); - clear_bit_unlock(DMZ_ACTIVE, &zone->flags); - smp_mb__after_atomic(); - } -} - /* * Get the zone mapping a chunk, if the chunk is mapped already. * If no mapping exist and the operation is WRITE, a zone is @@ -1647,6 +1646,10 @@ again: /* Alloate a random zone */ dzone = dmz_alloc_zone(zmd, DMZ_ALLOC_RND); if (!dzone) { + if (dmz_bdev_is_dying(zmd->dev)) { + dzone = ERR_PTR(-EIO); + goto out; + } dmz_wait_for_free_zones(zmd); goto again; } @@ -1744,6 +1747,10 @@ again: /* Alloate a random zone */ bzone = dmz_alloc_zone(zmd, DMZ_ALLOC_RND); if (!bzone) { + if (dmz_bdev_is_dying(zmd->dev)) { + bzone = ERR_PTR(-EIO); + goto out; + } dmz_wait_for_free_zones(zmd); goto again; } diff --git a/drivers/md/dm-zoned-reclaim.c b/drivers/md/dm-zoned-reclaim.c index edf4b95eb0750dc6485513d49c240b2982017114..9470b8f77a337bb22e639ea8e45defc12cb59cb2 100644 --- a/drivers/md/dm-zoned-reclaim.c +++ b/drivers/md/dm-zoned-reclaim.c @@ -37,7 +37,7 @@ enum { /* * Number of seconds of target BIO inactivity to consider the target idle. */ -#define DMZ_IDLE_PERIOD (10UL * HZ) +#define DMZ_IDLE_PERIOD (10UL * HZ) /* * Percentage of unmapped (free) random zones below which reclaim starts @@ -134,6 +134,9 @@ static int dmz_reclaim_copy(struct dmz_reclaim *zrc, set_bit(DM_KCOPYD_WRITE_SEQ, &flags); while (block < end_block) { + if (dev->flags & DMZ_BDEV_DYING) + return -EIO; + /* Get a valid region from the source zone */ ret = dmz_first_valid_block(zmd, src_zone, &block); if (ret <= 0) @@ -215,7 +218,7 @@ static int dmz_reclaim_buf(struct dmz_reclaim *zrc, struct dm_zone *dzone) dmz_unlock_flush(zmd); - return 0; + return ret; } /* @@ -259,7 +262,7 @@ static int dmz_reclaim_seq_data(struct dmz_reclaim *zrc, struct dm_zone *dzone) dmz_unlock_flush(zmd); - return 0; + return ret; } /* @@ -312,7 +315,7 @@ static int dmz_reclaim_rnd_data(struct dmz_reclaim *zrc, struct dm_zone *dzone) dmz_unlock_flush(zmd); - return 0; + return ret; } /* @@ -334,7 +337,7 @@ static void dmz_reclaim_empty(struct dmz_reclaim *zrc, struct dm_zone *dzone) /* * Find a candidate zone for reclaim and process it. */ -static void dmz_reclaim(struct dmz_reclaim *zrc) +static int dmz_do_reclaim(struct dmz_reclaim *zrc) { struct dmz_metadata *zmd = zrc->metadata; struct dm_zone *dzone; @@ -344,8 +347,8 @@ static void dmz_reclaim(struct dmz_reclaim *zrc) /* Get a data zone */ dzone = dmz_get_zone_for_reclaim(zmd); - if (!dzone) - return; + if (IS_ERR(dzone)) + return PTR_ERR(dzone); start = jiffies; @@ -391,13 +394,20 @@ static void dmz_reclaim(struct dmz_reclaim *zrc) out: if (ret) { dmz_unlock_zone_reclaim(dzone); - return; + return ret; } - (void) dmz_flush_metadata(zrc->metadata); + ret = dmz_flush_metadata(zrc->metadata); + if (ret) { + dmz_dev_debug(zrc->dev, + "Metadata flush for zone %u failed, err %d\n", + dmz_id(zmd, rzone), ret); + return ret; + } dmz_dev_debug(zrc->dev, "Reclaimed zone %u in %u ms", dmz_id(zmd, rzone), jiffies_to_msecs(jiffies - start)); + return 0; } /* @@ -442,6 +452,10 @@ static void dmz_reclaim_work(struct work_struct *work) struct dmz_metadata *zmd = zrc->metadata; unsigned int nr_rnd, nr_unmap_rnd; unsigned int p_unmap_rnd; + int ret; + + if (dmz_bdev_is_dying(zrc->dev)) + return; if (!dmz_should_reclaim(zrc)) { mod_delayed_work(zrc->wq, &zrc->work, DMZ_IDLE_PERIOD); @@ -471,7 +485,17 @@ static void dmz_reclaim_work(struct work_struct *work) (dmz_target_idle(zrc) ? 
"Idle" : "Busy"), p_unmap_rnd, nr_unmap_rnd, nr_rnd); - dmz_reclaim(zrc); + ret = dmz_do_reclaim(zrc); + if (ret) { + dmz_dev_debug(zrc->dev, "Reclaim error %d\n", ret); + if (ret == -EIO) + /* + * LLD might be performing some error handling sequence + * at the underlying device. To not interfere, do not + * attempt to schedule the next reclaim run immediately. + */ + return; + } dmz_schedule_reclaim(zrc); } diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c index 85fb2baa8a7fa0addb106936adb2e2719c1dbe40..1030c42add05f7b3005be948c355f64f4472736f 100644 --- a/drivers/md/dm-zoned-target.c +++ b/drivers/md/dm-zoned-target.c @@ -133,6 +133,8 @@ static int dmz_submit_bio(struct dmz_target *dmz, struct dm_zone *zone, atomic_inc(&bioctx->ref); generic_make_request(clone); + if (clone->bi_status == BLK_STS_IOERR) + return -EIO; if (bio_op(bio) == REQ_OP_WRITE && dmz_is_seq(zone)) zone->wp_block += nr_blocks; @@ -277,8 +279,8 @@ static int dmz_handle_buffered_write(struct dmz_target *dmz, /* Get the buffer zone. One will be allocated if needed */ bzone = dmz_get_chunk_buffer(zmd, zone); - if (!bzone) - return -ENOSPC; + if (IS_ERR(bzone)) + return PTR_ERR(bzone); if (dmz_is_readonly(bzone)) return -EROFS; @@ -389,6 +391,11 @@ static void dmz_handle_bio(struct dmz_target *dmz, struct dm_chunk_work *cw, dmz_lock_metadata(zmd); + if (dmz->dev->flags & DMZ_BDEV_DYING) { + ret = -EIO; + goto out; + } + /* * Get the data zone mapping the chunk. There may be no * mapping for read and discard. If a mapping is obtained, @@ -493,6 +500,8 @@ static void dmz_flush_work(struct work_struct *work) /* Flush dirty metadata blocks */ ret = dmz_flush_metadata(dmz->metadata); + if (ret) + dmz_dev_debug(dmz->dev, "Metadata flush failed, rc=%d\n", ret); /* Process queued flush requests */ while (1) { @@ -513,22 +522,24 @@ static void dmz_flush_work(struct work_struct *work) * Get a chunk work and start it to process a new BIO. * If the BIO chunk has no work yet, create one. */ -static void dmz_queue_chunk_work(struct dmz_target *dmz, struct bio *bio) +static int dmz_queue_chunk_work(struct dmz_target *dmz, struct bio *bio) { unsigned int chunk = dmz_bio_chunk(dmz->dev, bio); struct dm_chunk_work *cw; + int ret = 0; mutex_lock(&dmz->chunk_lock); /* Get the BIO chunk work. If one is not active yet, create one */ cw = radix_tree_lookup(&dmz->chunk_rxtree, chunk); if (!cw) { - int ret; /* Create a new chunk work */ cw = kmalloc(sizeof(struct dm_chunk_work), GFP_NOIO); - if (!cw) + if (unlikely(!cw)) { + ret = -ENOMEM; goto out; + } INIT_WORK(&cw->work, dmz_chunk_work); atomic_set(&cw->refcount, 0); @@ -539,7 +550,6 @@ static void dmz_queue_chunk_work(struct dmz_target *dmz, struct bio *bio) ret = radix_tree_insert(&dmz->chunk_rxtree, chunk, cw); if (unlikely(ret)) { kfree(cw); - cw = NULL; goto out; } } @@ -547,10 +557,38 @@ static void dmz_queue_chunk_work(struct dmz_target *dmz, struct bio *bio) bio_list_add(&cw->bio_list, bio); dmz_get_chunk_work(cw); + dmz_reclaim_bio_acc(dmz->reclaim); if (queue_work(dmz->chunk_wq, &cw->work)) dmz_get_chunk_work(cw); out: mutex_unlock(&dmz->chunk_lock); + return ret; +} + +/* + * Check the backing device availability. If it's on the way out, + * start failing I/O. Reclaim and metadata components also call this + * function to cleanly abort operation in the event of such failure. 
+ */
+bool dmz_bdev_is_dying(struct dmz_dev *dmz_dev)
+{
+	struct gendisk *disk;
+
+	if (!(dmz_dev->flags & DMZ_BDEV_DYING)) {
+		disk = dmz_dev->bdev->bd_disk;
+		if (blk_queue_dying(bdev_get_queue(dmz_dev->bdev))) {
+			dmz_dev_warn(dmz_dev, "Backing device queue dying");
+			dmz_dev->flags |= DMZ_BDEV_DYING;
+		} else if (disk->fops->check_events) {
+			if (disk->fops->check_events(disk, 0) &
+					DISK_EVENT_MEDIA_CHANGE) {
+				dmz_dev_warn(dmz_dev, "Backing device offline");
+				dmz_dev->flags |= DMZ_BDEV_DYING;
+			}
+		}
+	}
+
+	return dmz_dev->flags & DMZ_BDEV_DYING;
 }
 /*
@@ -564,6 +602,10 @@ static int dmz_map(struct dm_target *ti, struct bio *bio)
 	sector_t sector = bio->bi_iter.bi_sector;
 	unsigned int nr_sectors = bio_sectors(bio);
 	sector_t chunk_sector;
+	int ret;
+
+	if (dmz_bdev_is_dying(dmz->dev))
+		return DM_MAPIO_KILL;
 	dmz_dev_debug(dev, "BIO op %d sector %llu + %u => chunk %llu, block %llu, %u blocks",
 		      bio_op(bio), (unsigned long long)sector, nr_sectors,
@@ -601,8 +643,14 @@ static int dmz_map(struct dm_target *ti, struct bio *bio)
 		dm_accept_partial_bio(bio, dev->zone_nr_sectors - chunk_sector);
 	/* Now ready to handle this BIO */
-	dmz_reclaim_bio_acc(dmz->reclaim);
-	dmz_queue_chunk_work(dmz, bio);
+	ret = dmz_queue_chunk_work(dmz, bio);
+	if (ret) {
+		dmz_dev_debug(dmz->dev,
+			      "BIO op %d, can't process chunk %llu, err %i\n",
+			      bio_op(bio), (u64)dmz_bio_chunk(dmz->dev, bio),
+			      ret);
+		return DM_MAPIO_REQUEUE;
+	}
 	return DM_MAPIO_SUBMITTED;
 }
@@ -856,6 +904,9 @@ static int dmz_prepare_ioctl(struct dm_target *ti, struct block_device **bdev)
 {
 	struct dmz_target *dmz = ti->private;
+	if (dmz_bdev_is_dying(dmz->dev))
+		return -ENODEV;
+
 	*bdev = dmz->dev->bdev;
 	return 0;
diff --git a/drivers/md/dm-zoned.h b/drivers/md/dm-zoned.h
index 12419f0bfe78ba14fd8e0c29249f26a22a99095b..93a64529f21902968a705c6ea88a292ad6161d24 100644
--- a/drivers/md/dm-zoned.h
+++ b/drivers/md/dm-zoned.h
@@ -56,6 +56,8 @@ struct dmz_dev {
 	unsigned int nr_zones;
+	unsigned int flags;
+
 	sector_t zone_nr_sectors;
 	unsigned int zone_nr_sectors_shift;
@@ -67,6 +69,9 @@ struct dmz_dev {
 					 (dev)->zone_nr_sectors_shift)
 #define dmz_chunk_block(dev, b)	((b) & ((dev)->zone_nr_blocks - 1))
+/* Device flags. */
+#define DMZ_BDEV_DYING		(1 << 0)
+
 /*
  * Zone descriptor.
  */
@@ -115,7 +120,6 @@ enum {
 	DMZ_BUF,
 	/* Zone internal state */
-	DMZ_ACTIVE,
 	DMZ_RECLAIM,
 	DMZ_SEQ_WRITE_ERR,
 };
@@ -128,7 +132,6 @@ enum {
 #define dmz_is_empty(z)		((z)->wp_block == 0)
 #define dmz_is_offline(z)	test_bit(DMZ_OFFLINE, &(z)->flags)
 #define dmz_is_readonly(z)	test_bit(DMZ_READ_ONLY, &(z)->flags)
-#define dmz_is_active(z)	test_bit(DMZ_ACTIVE, &(z)->flags)
 #define dmz_in_reclaim(z)	test_bit(DMZ_RECLAIM, &(z)->flags)
 #define dmz_seq_write_err(z)	test_bit(DMZ_SEQ_WRITE_ERR, &(z)->flags)
@@ -188,8 +191,30 @@ void dmz_unmap_zone(struct dmz_metadata *zmd, struct dm_zone *zone);
 unsigned int dmz_nr_rnd_zones(struct dmz_metadata *zmd);
 unsigned int dmz_nr_unmap_rnd_zones(struct dmz_metadata *zmd);
-void dmz_activate_zone(struct dm_zone *zone);
-void dmz_deactivate_zone(struct dm_zone *zone);
+/*
+ * Activate a zone (increment its reference count).
+ */
+static inline void dmz_activate_zone(struct dm_zone *zone)
+{
+	atomic_inc(&zone->refcount);
+}
+
+/*
+ * Deactivate a zone. This decrements the zone reference counter,
+ * indicating that all BIOs to the zone have completed when the count is 0.
+ */
+static inline void dmz_deactivate_zone(struct dm_zone *zone)
+{
+	atomic_dec(&zone->refcount);
+}
+
+/*
+ * Test if a zone is active, that is, has a refcount > 0.
+ */ +static inline bool dmz_is_active(struct dm_zone *zone) +{ + return atomic_read(&zone->refcount); +} int dmz_lock_zone_reclaim(struct dm_zone *zone); void dmz_unlock_zone_reclaim(struct dm_zone *zone); @@ -225,4 +250,9 @@ void dmz_resume_reclaim(struct dmz_reclaim *zrc); void dmz_reclaim_bio_acc(struct dmz_reclaim *zrc); void dmz_schedule_reclaim(struct dmz_reclaim *zrc); +/* + * Functions defined in dm-zoned-target.c + */ +bool dmz_bdev_is_dying(struct dmz_dev *dmz_dev); + #endif /* DM_ZONED_H */ diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 42768fe92b41b21a4f2cdd02efce6285d9248a6b..c9860e3b04ddf10d5a5c8c9ad9f5f0e048599a36 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -910,6 +910,15 @@ static void dec_pending(struct dm_io *io, blk_status_t error) } } +void disable_discard(struct mapped_device *md) +{ + struct queue_limits *limits = dm_get_queue_limits(md); + + /* device doesn't really support DISCARD, disable it */ + limits->max_discard_sectors = 0; + blk_queue_flag_clear(QUEUE_FLAG_DISCARD, md->queue); +} + void disable_write_same(struct mapped_device *md) { struct queue_limits *limits = dm_get_queue_limits(md); @@ -935,11 +944,14 @@ static void clone_endio(struct bio *bio) dm_endio_fn endio = tio->ti->type->end_io; if (unlikely(error == BLK_STS_TARGET) && md->type != DM_TYPE_NVME_BIO_BASED) { - if (bio_op(bio) == REQ_OP_WRITE_SAME && - !bio->bi_disk->queue->limits.max_write_same_sectors) + if (bio_op(bio) == REQ_OP_DISCARD && + !bio->bi_disk->queue->limits.max_discard_sectors) + disable_discard(md); + else if (bio_op(bio) == REQ_OP_WRITE_SAME && + !bio->bi_disk->queue->limits.max_write_same_sectors) disable_write_same(md); - if (bio_op(bio) == REQ_OP_WRITE_ZEROES && - !bio->bi_disk->queue->limits.max_write_zeroes_sectors) + else if (bio_op(bio) == REQ_OP_WRITE_ZEROES && + !bio->bi_disk->queue->limits.max_write_zeroes_sectors) disable_write_zeroes(md); } diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c index 58b319757b1e5a1c274bd98681d6ce229bbbabbe..8aae0624a2971e939fa79acd656f98035049460d 100644 --- a/drivers/md/persistent-data/dm-btree.c +++ b/drivers/md/persistent-data/dm-btree.c @@ -628,39 +628,40 @@ static int btree_split_beneath(struct shadow_spine *s, uint64_t key) new_parent = shadow_current(s); + pn = dm_block_data(new_parent); + size = le32_to_cpu(pn->header.flags) & INTERNAL_NODE ? 
+ sizeof(__le64) : s->info->value_type.size; + + /* create & init the left block */ r = new_block(s->info, &left); if (r < 0) return r; + ln = dm_block_data(left); + nr_left = le32_to_cpu(pn->header.nr_entries) / 2; + + ln->header.flags = pn->header.flags; + ln->header.nr_entries = cpu_to_le32(nr_left); + ln->header.max_entries = pn->header.max_entries; + ln->header.value_size = pn->header.value_size; + memcpy(ln->keys, pn->keys, nr_left * sizeof(pn->keys[0])); + memcpy(value_ptr(ln, 0), value_ptr(pn, 0), nr_left * size); + + /* create & init the right block */ r = new_block(s->info, &right); if (r < 0) { unlock_block(s->info, left); return r; } - pn = dm_block_data(new_parent); - ln = dm_block_data(left); rn = dm_block_data(right); - - nr_left = le32_to_cpu(pn->header.nr_entries) / 2; nr_right = le32_to_cpu(pn->header.nr_entries) - nr_left; - ln->header.flags = pn->header.flags; - ln->header.nr_entries = cpu_to_le32(nr_left); - ln->header.max_entries = pn->header.max_entries; - ln->header.value_size = pn->header.value_size; - rn->header.flags = pn->header.flags; rn->header.nr_entries = cpu_to_le32(nr_right); rn->header.max_entries = pn->header.max_entries; rn->header.value_size = pn->header.value_size; - - memcpy(ln->keys, pn->keys, nr_left * sizeof(pn->keys[0])); memcpy(rn->keys, pn->keys + nr_left, nr_right * sizeof(pn->keys[0])); - - size = le32_to_cpu(pn->header.flags) & INTERNAL_NODE ? - sizeof(__le64) : s->info->value_type.size; - memcpy(value_ptr(ln, 0), value_ptr(pn, 0), nr_left * size); memcpy(value_ptr(rn, 0), value_ptr(pn, nr_left), nr_right * size); diff --git a/drivers/md/persistent-data/dm-space-map-metadata.c b/drivers/md/persistent-data/dm-space-map-metadata.c index aec44924396622729f64e84abda381d24606e14a..25328582cc4820dc29d0e79a8012eeeb8a7fb2f4 100644 --- a/drivers/md/persistent-data/dm-space-map-metadata.c +++ b/drivers/md/persistent-data/dm-space-map-metadata.c @@ -249,7 +249,7 @@ static int out(struct sm_metadata *smm) } if (smm->recursion_count == 1) - apply_bops(smm); + r = apply_bops(smm); smm->recursion_count--; diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index adec2947c3e191b5dd4c1f7aea90d00758e8a1c2..2bb4f1287cfb1826ceb245e97fc0bbbdce4b9568 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -7672,7 +7672,7 @@ abort: static int raid5_add_disk(struct mddev *mddev, struct md_rdev *rdev) { struct r5conf *conf = mddev->private; - int err = -EEXIST; + int ret, err = -EEXIST; int disk; struct disk_info *p; int first = 0; @@ -7687,7 +7687,14 @@ static int raid5_add_disk(struct mddev *mddev, struct md_rdev *rdev) * The array is in readonly mode if journal is missing, so no * write requests running. We should be safe */ - log_init(conf, rdev, false); + ret = log_init(conf, rdev, false); + if (ret) + return ret; + + ret = r5l_start(conf->log); + if (ret) + return ret; + return 0; } if (mddev->recovery_disabled == conf->recovery_disabled) diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c index b0477136f728fe9beb30e72cd7aa5399044ded50..a31fd6312a37a6f7c500d116838b411b5395b5f7 100644 --- a/drivers/media/common/videobuf2/videobuf2-core.c +++ b/drivers/media/common/videobuf2/videobuf2-core.c @@ -207,6 +207,10 @@ static int __vb2_buf_mem_alloc(struct vb2_buffer *vb) for (plane = 0; plane < vb->num_planes; ++plane) { unsigned long size = PAGE_ALIGN(vb->planes[plane].length); + /* Did it wrap around? 
*/ + if (size < vb->planes[plane].length) + goto free; + mem_priv = call_ptr_memop(vb, alloc, q->alloc_devs[plane] ? : q->dev, q->dma_attrs, size, q->dma_dir, q->gfp_flags); diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c index 015e737095cdd6644b4e0332120aa3ee993eb1d0..e9bfea986cc47e4bba3064d9de1c53e6f59741ee 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c @@ -59,7 +59,7 @@ static int vb2_dma_sg_alloc_compacted(struct vb2_dma_sg_buf *buf, gfp_t gfp_flags) { unsigned int last_page = 0; - int size = buf->size; + unsigned long size = buf->size; while (size > 0) { struct page *pages; diff --git a/drivers/media/dvb-frontends/tua6100.c b/drivers/media/dvb-frontends/tua6100.c index b233b7be0b84aaee1eb621660c02a22614e988fa..e6aaf4973aef47bdf8e84628bf5aa471734b2b96 100644 --- a/drivers/media/dvb-frontends/tua6100.c +++ b/drivers/media/dvb-frontends/tua6100.c @@ -75,8 +75,8 @@ static int tua6100_set_params(struct dvb_frontend *fe) struct i2c_msg msg1 = { .addr = priv->i2c_address, .flags = 0, .buf = reg1, .len = 4 }; struct i2c_msg msg2 = { .addr = priv->i2c_address, .flags = 0, .buf = reg2, .len = 3 }; -#define _R 4 -#define _P 32 +#define _R_VAL 4 +#define _P_VAL 32 #define _ri 4000000 // setup register 0 @@ -91,14 +91,14 @@ static int tua6100_set_params(struct dvb_frontend *fe) else reg1[1] = 0x0c; - if (_P == 64) + if (_P_VAL == 64) reg1[1] |= 0x40; if (c->frequency >= 1525000) reg1[1] |= 0x80; // register 2 - reg2[1] = (_R >> 8) & 0x03; - reg2[2] = _R; + reg2[1] = (_R_VAL >> 8) & 0x03; + reg2[2] = _R_VAL; if (c->frequency < 1455000) reg2[1] |= 0x1c; else if (c->frequency < 1630000) @@ -110,18 +110,18 @@ static int tua6100_set_params(struct dvb_frontend *fe) * The N divisor ratio (note: c->frequency is in kHz, but we * need it in Hz) */ - prediv = (c->frequency * _R) / (_ri / 1000); - div = prediv / _P; + prediv = (c->frequency * _R_VAL) / (_ri / 1000); + div = prediv / _P_VAL; reg1[1] |= (div >> 9) & 0x03; reg1[2] = div >> 1; reg1[3] = (div << 7); - priv->frequency = ((div * _P) * (_ri / 1000)) / _R; + priv->frequency = ((div * _P_VAL) * (_ri / 1000)) / _R_VAL; // Finally, calculate and store the value for A - reg1[3] |= (prediv - (div*_P)) & 0x7f; + reg1[3] |= (prediv - (div*_P_VAL)) & 0x7f; -#undef _R -#undef _P +#undef _R_VAL +#undef _P_VAL #undef _ri if (fe->ops.i2c_gate_ctrl) diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig index 63c9ac2c6a5ff62792a8a3594a7b705e27e11583..bd008e0d1d718a42ba6fc1b682b32da424efabd1 100644 --- a/drivers/media/i2c/Kconfig +++ b/drivers/media/i2c/Kconfig @@ -596,6 +596,17 @@ config VIDEO_APTINA_PLL config VIDEO_SMIAPP_PLL tristate +config VIDEO_IMX219 + tristate "Sony IMX219 sensor support" + depends on I2C && VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API + depends on MEDIA_CAMERA_SUPPORT + help + This is a Video4Linux2 sensor driver for the Sony + IMX219 camera. + + To compile this driver as a module, choose M here: the + module will be called imx219. 
+
 config VIDEO_IMX258
 	tristate "Sony IMX258 sensor support"
 	depends on I2C && VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API
diff --git a/drivers/media/i2c/Makefile b/drivers/media/i2c/Makefile
index a94eb03d10d4e12d278207a1484686b6131df8f6..8b4fe01411b9b957cc57f57facfa39b21e5e15c7 100644
--- a/drivers/media/i2c/Makefile
+++ b/drivers/media/i2c/Makefile
@@ -36,7 +36,7 @@ obj-$(CONFIG_VIDEO_ADV748X) += adv748x/
 obj-$(CONFIG_VIDEO_ADV7604) += adv7604.o
 obj-$(CONFIG_VIDEO_ADV7842) += adv7842.o
 obj-$(CONFIG_VIDEO_AD9389B) += ad9389b.o
-obj-$(CONFIG_VIDEO_ADV7511) += adv7511.o
+obj-$(CONFIG_VIDEO_ADV7511) += adv7511-v4l2.o
 obj-$(CONFIG_VIDEO_VPX3220) += vpx3220.o
 obj-$(CONFIG_VIDEO_VS6624) += vs6624.o
 obj-$(CONFIG_VIDEO_BT819) += bt819.o
@@ -106,6 +106,7 @@ obj-$(CONFIG_VIDEO_I2C) += video-i2c.o
 obj-$(CONFIG_VIDEO_ML86V7667) += ml86v7667.o
 obj-$(CONFIG_VIDEO_OV2659) += ov2659.o
 obj-$(CONFIG_VIDEO_TC358743) += tc358743.o
+obj-$(CONFIG_VIDEO_IMX219) += imx219.o
 obj-$(CONFIG_VIDEO_IMX258) += imx258.o
 obj-$(CONFIG_VIDEO_IMX274) += imx274.o
diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511-v4l2.c
similarity index 99%
rename from drivers/media/i2c/adv7511.c
rename to drivers/media/i2c/adv7511-v4l2.c
index 88349b5053cce0298734ace8ae247171cd439712..6869bb593a68275d36062268bcfc5e0a7af83859 100644
--- a/drivers/media/i2c/adv7511.c
+++ b/drivers/media/i2c/adv7511-v4l2.c
@@ -5,6 +5,11 @@
  * Copyright 2013 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
  */
 
+/*
+ * This file is named adv7511-v4l2.c so it doesn't conflict with the Analog
+ * Device ADV7511 (config fragment CONFIG_DRM_I2C_ADV7511).
+ */
+
 #include <linux/kernel.h>
 #include <linux/module.h>
diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
new file mode 100644
index 0000000000000000000000000000000000000000..682397a484922583bc7969371daff4a0b4768622
--- /dev/null
+++ b/drivers/media/i2c/imx219.c
@@ -0,0 +1,1093 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * A V4L2 driver for Sony IMX219 cameras.
+ * Copyright (C) 2019, Raspberry Pi (Trading) Ltd
+ *
+ * Based on Sony imx258 camera driver
+ * Copyright (C) 2018 Intel Corporation
+ *
+ * DT / fwnode changes, and regulator / GPIO control taken from ov5640.c
+ * Copyright (C) 2011-2013 Freescale Semiconductor, Inc. All Rights Reserved.
+ * Copyright (C) 2014-2017 Mentor Graphics Inc.
+ * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define IMX219_REG_VALUE_08BIT 1 +#define IMX219_REG_VALUE_16BIT 2 + +#define IMX219_REG_MODE_SELECT 0x0100 +#define IMX219_MODE_STANDBY 0x00 +#define IMX219_MODE_STREAMING 0x01 + +/* Chip ID */ +#define IMX219_REG_CHIP_ID 0x0000 +#define IMX219_CHIP_ID 0x0219 + +/* V_TIMING internal */ +#define IMX219_REG_VTS 0x0160 +#define IMX219_VTS_15FPS 0x0dc6 +#define IMX219_VTS_30FPS_1080P 0x06e3 +#define IMX219_VTS_30FPS_BINNED 0x06e3 +#define IMX219_VTS_MAX 0xffff + +/*Frame Length Line*/ +#define IMX219_FLL_MIN 0x08a6 +#define IMX219_FLL_MAX 0xffff +#define IMX219_FLL_STEP 1 +#define IMX219_FLL_DEFAULT 0x0c98 + +/* HBLANK control - read only */ +#define IMX219_PPL_DEFAULT 5352 + +/* Exposure control */ +#define IMX219_REG_EXPOSURE 0x015a +#define IMX219_EXPOSURE_MIN 4 +#define IMX219_EXPOSURE_STEP 1 +#define IMX219_EXPOSURE_DEFAULT 0x640 +#define IMX219_EXPOSURE_MAX 65535 + +/* Analog gain control */ +#define IMX219_REG_ANALOG_GAIN 0x0157 +#define IMX219_ANA_GAIN_MIN 0 +#define IMX219_ANA_GAIN_MAX 232 +#define IMX219_ANA_GAIN_STEP 1 +#define IMX219_ANA_GAIN_DEFAULT 0x0 + +/* Digital gain control */ +#define IMX219_REG_DIGITAL_GAIN 0x0158 +#define IMX219_DGTL_GAIN_MIN 0x0100 +#define IMX219_DGTL_GAIN_MAX 0x0fff +#define IMX219_DGTL_GAIN_DEFAULT 0x0100 +#define IMX219_DGTL_GAIN_STEP 1 + +/* Test Pattern Control */ +#define IMX219_REG_TEST_PATTERN 0x0600 +#define IMX219_TEST_PATTERN_DISABLE 0 +#define IMX219_TEST_PATTERN_SOLID_COLOR 1 +#define IMX219_TEST_PATTERN_COLOR_BARS 2 +#define IMX219_TEST_PATTERN_GREY_COLOR 3 +#define IMX219_TEST_PATTERN_PN9 4 + +struct imx219_reg { + u16 address; + u8 val; +}; + +struct imx219_reg_list { + u32 num_of_regs; + const struct imx219_reg *regs; +}; + +/* Mode : resolution and related config&values */ +struct imx219_mode { + /* Frame width */ + u32 width; + /* Frame height */ + u32 height; + + /* V-timing */ + u32 vts_def; + + /* Default register values */ + struct imx219_reg_list reg_list; +}; + +/* + * Register sets lifted off the i2C interface from the Raspberry Pi firmware + * driver. + * 3280x2464 = mode 2, 1920x1080 = mode 1, and 1640x1232 = mode 4. 
+ */ +static const struct imx219_reg mode_3280x2464_regs[] = { + {0x0100, 0x00}, + {0x30eb, 0x0c}, + {0x30eb, 0x05}, + {0x300a, 0xff}, + {0x300b, 0xff}, + {0x30eb, 0x05}, + {0x30eb, 0x09}, + {0x0114, 0x01}, + {0x0128, 0x00}, + {0x012a, 0x18}, + {0x012b, 0x00}, + {0x0164, 0x00}, + {0x0165, 0x00}, + {0x0166, 0x0c}, + {0x0167, 0xcf}, + {0x0168, 0x00}, + {0x0169, 0x00}, + {0x016a, 0x09}, + {0x016b, 0x9f}, + {0x016c, 0x0c}, + {0x016d, 0xd0}, + {0x016e, 0x09}, + {0x016f, 0xa0}, + {0x0170, 0x01}, + {0x0171, 0x01}, + {0x0174, 0x00}, + {0x0175, 0x00}, + {0x018c, 0x0a}, + {0x018d, 0x0a}, + {0x0301, 0x05}, + {0x0303, 0x01}, + {0x0304, 0x03}, + {0x0305, 0x03}, + {0x0306, 0x00}, + {0x0307, 0x39}, + {0x0309, 0x0a}, + {0x030b, 0x01}, + {0x030c, 0x00}, + {0x030d, 0x72}, + {0x0624, 0x0c}, + {0x0625, 0xd0}, + {0x0626, 0x09}, + {0x0627, 0xa0}, + {0x455e, 0x00}, + {0x471e, 0x4b}, + {0x4767, 0x0f}, + {0x4750, 0x14}, + {0x4540, 0x00}, + {0x47b4, 0x14}, + {0x4713, 0x30}, + {0x478b, 0x10}, + {0x478f, 0x10}, + {0x4793, 0x10}, + {0x4797, 0x0e}, + {0x479b, 0x0e}, + + {0x0172, 0x03}, + {0x0162, 0x0d}, + {0x0163, 0x78}, +}; + +static const struct imx219_reg mode_1920_1080_regs[] = { + {0x0100, 0x00}, + {0x30eb, 0x05}, + {0x30eb, 0x0c}, + {0x300a, 0xff}, + {0x300b, 0xff}, + {0x30eb, 0x05}, + {0x30eb, 0x09}, + {0x0114, 0x01}, + {0x0128, 0x00}, + {0x012a, 0x18}, + {0x012b, 0x00}, + {0x0162, 0x0d}, + {0x0163, 0x78}, + {0x0164, 0x02}, + {0x0165, 0xa8}, + {0x0166, 0x0a}, + {0x0167, 0x27}, + {0x0168, 0x02}, + {0x0169, 0xb4}, + {0x016a, 0x06}, + {0x016b, 0xeb}, + {0x016c, 0x07}, + {0x016d, 0x80}, + {0x016e, 0x04}, + {0x016f, 0x38}, + {0x0170, 0x01}, + {0x0171, 0x01}, + {0x0174, 0x00}, + {0x0175, 0x00}, + {0x018c, 0x0a}, + {0x018d, 0x0a}, + {0x0301, 0x05}, + {0x0303, 0x01}, + {0x0304, 0x03}, + {0x0305, 0x03}, + {0x0306, 0x00}, + {0x0307, 0x39}, + {0x0309, 0x0a}, + {0x030b, 0x01}, + {0x030c, 0x00}, + {0x030d, 0x72}, + {0x455e, 0x00}, + {0x471e, 0x4b}, + {0x4767, 0x0f}, + {0x4750, 0x14}, + {0x4540, 0x00}, + {0x47b4, 0x14}, + {0x4713, 0x30}, + {0x478b, 0x10}, + {0x478f, 0x10}, + {0x4793, 0x10}, + {0x4797, 0x0e}, + {0x479b, 0x0e}, + + {0x0172, 0x03}, + {0x0162, 0x0d}, + {0x0163, 0x78}, +}; + +static const struct imx219_reg mode_1640_1232_regs[] = { + {0x30eb, 0x0c}, + {0x30eb, 0x05}, + {0x300a, 0xff}, + {0x300b, 0xff}, + {0x30eb, 0x05}, + {0x30eb, 0x09}, + {0x0114, 0x01}, + {0x0128, 0x00}, + {0x012a, 0x18}, + {0x012b, 0x00}, + {0x0164, 0x00}, + {0x0165, 0x00}, + {0x0166, 0x0c}, + {0x0167, 0xcf}, + {0x0168, 0x00}, + {0x0169, 0x00}, + {0x016a, 0x09}, + {0x016b, 0x9f}, + {0x016c, 0x06}, + {0x016d, 0x68}, + {0x016e, 0x04}, + {0x016f, 0xd0}, + {0x0170, 0x01}, + {0x0171, 0x01}, + {0x0174, 0x01}, + {0x0175, 0x01}, + {0x018c, 0x0a}, + {0x018d, 0x0a}, + {0x0301, 0x05}, + {0x0303, 0x01}, + {0x0304, 0x03}, + {0x0305, 0x03}, + {0x0306, 0x00}, + {0x0307, 0x39}, + {0x0309, 0x0a}, + {0x030b, 0x01}, + {0x030c, 0x00}, + {0x030d, 0x72}, + {0x455e, 0x00}, + {0x471e, 0x4b}, + {0x4767, 0x0f}, + {0x4750, 0x14}, + {0x4540, 0x00}, + {0x47b4, 0x14}, + {0x4713, 0x30}, + {0x478b, 0x10}, + {0x478f, 0x10}, + {0x4793, 0x10}, + {0x4797, 0x0e}, + {0x479b, 0x0e}, + + {0x0172, 0x03}, + {0x0162, 0x0d}, + {0x0163, 0x78}, +}; + +static const char * const imx219_test_pattern_menu[] = { + "Disabled", + "Color Bars", + "Solid Color", + "Grey Color Bars", + "PN9" +}; + +static const int imx219_test_pattern_val[] = { + IMX219_TEST_PATTERN_DISABLE, + IMX219_TEST_PATTERN_COLOR_BARS, + IMX219_TEST_PATTERN_SOLID_COLOR, + IMX219_TEST_PATTERN_GREY_COLOR, + 
IMX219_TEST_PATTERN_PN9, +}; + +/* regulator supplies */ +static const char * const imx219_supply_name[] = { + /* Supplies can be enabled in any order */ + "VANA", /* Analog (2.8V) supply */ + "VDIG", /* Digital Core (1.8V) supply */ + "VDDL", /* IF (1.2V) supply */ +}; + +#define IMX219_NUM_SUPPLIES ARRAY_SIZE(imx219_supply_name) + +#define IMX219_XCLR_DELAY_MS 10 /* Initialisation delay after XCLR low->high */ + +/* Mode configs */ +static const struct imx219_mode supported_modes[] = { + { + /* 8MPix 15fps mode */ + .width = 3280, + .height = 2464, + .vts_def = IMX219_VTS_15FPS, + .reg_list = { + .num_of_regs = ARRAY_SIZE(mode_3280x2464_regs), + .regs = mode_3280x2464_regs, + }, + }, + { + /* 1080P 30fps cropped */ + .width = 1920, + .height = 1080, + .vts_def = IMX219_VTS_30FPS_1080P, + .reg_list = { + .num_of_regs = ARRAY_SIZE(mode_1920_1080_regs), + .regs = mode_1920_1080_regs, + }, + }, + { + /* 2x2 binned 30fps mode */ + .width = 1640, + .height = 1232, + .vts_def = IMX219_VTS_30FPS_BINNED, + .reg_list = { + .num_of_regs = ARRAY_SIZE(mode_1640_1232_regs), + .regs = mode_1640_1232_regs, + }, + }, +}; + +struct imx219 { + struct v4l2_subdev sd; + struct media_pad pad; + + struct v4l2_fwnode_endpoint ep; /* the parsed DT endpoint info */ + struct clk *xclk; /* system clock to IMX219 */ + u32 xclk_freq; + + struct gpio_desc *xclr_gpio; + struct regulator_bulk_data supplies[IMX219_NUM_SUPPLIES]; + + struct v4l2_ctrl_handler ctrl_handler; + /* V4L2 Controls */ + struct v4l2_ctrl *pixel_rate; + struct v4l2_ctrl *exposure; + + /* Current mode */ + const struct imx219_mode *mode; + + /* + * Mutex for serialized access: + * Protect sensor module set pad format and start/stop streaming safely. + */ + struct mutex mutex; + + int power_count; + /* Streaming on/off */ + bool streaming; +}; + +static inline struct imx219 *to_imx219(struct v4l2_subdev *_sd) +{ + return container_of(_sd, struct imx219, sd); +} + +/* Read registers up to 2 at a time */ +static int imx219_read_reg(struct imx219 *imx219, u16 reg, u32 len, u32 *val) +{ + struct i2c_client *client = v4l2_get_subdevdata(&imx219->sd); + struct i2c_msg msgs[2]; + u8 addr_buf[2] = { reg >> 8, reg & 0xff }; + u8 data_buf[4] = { 0, }; + int ret; + + if (len > 4) + return -EINVAL; + + /* Write register address */ + msgs[0].addr = client->addr; + msgs[0].flags = 0; + msgs[0].len = ARRAY_SIZE(addr_buf); + msgs[0].buf = addr_buf; + + /* Read data from register */ + msgs[1].addr = client->addr; + msgs[1].flags = I2C_M_RD; + msgs[1].len = len; + msgs[1].buf = &data_buf[4 - len]; + + ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs)); + if (ret != ARRAY_SIZE(msgs)) + return -EIO; + + *val = get_unaligned_be32(data_buf); + + return 0; +} + +/* Write registers up to 2 at a time */ +static int imx219_write_reg(struct imx219 *imx219, u16 reg, u32 len, u32 val) +{ + struct i2c_client *client = v4l2_get_subdevdata(&imx219->sd); + u8 buf[6]; + + if (len > 4) + return -EINVAL; + + put_unaligned_be16(reg, buf); + put_unaligned_be32(val << (8 * (4 - len)), buf + 2); + if (i2c_master_send(client, buf, len + 2) != len + 2) + return -EIO; + + return 0; +} + +/* Write a list of registers */ +static int imx219_write_regs(struct imx219 *imx219, + const struct imx219_reg *regs, u32 len) +{ + struct i2c_client *client = v4l2_get_subdevdata(&imx219->sd); + unsigned int i; + int ret; + + for (i = 0; i < len; i++) { + ret = imx219_write_reg(imx219, regs[i].address, 1, regs[i].val); + if (ret) { + dev_err_ratelimited(&client->dev, + "Failed to write reg 0x%4.4x. 
error = %d\n", + regs[i].address, ret); + + return ret; + } + } + + return 0; +} + +/* Power/clock management functions */ +static void imx219_power(struct imx219 *imx219, bool enable) +{ + gpiod_set_value_cansleep(imx219->xclr_gpio, enable ? 1 : 0); +} + +static int imx219_set_power_on(struct imx219 *imx219) +{ + struct i2c_client *client = v4l2_get_subdevdata(&imx219->sd); + int ret; + + ret = clk_prepare_enable(imx219->xclk); + if (ret) { + dev_err(&client->dev, "%s: failed to enable clock\n", + __func__); + return ret; + } + + ret = regulator_bulk_enable(IMX219_NUM_SUPPLIES, + imx219->supplies); + if (ret) { + dev_err(&client->dev, "%s: failed to enable regulators\n", + __func__); + goto xclk_off; + } + + imx219_power(imx219, true); + msleep(IMX219_XCLR_DELAY_MS); + + return 0; +xclk_off: + clk_disable_unprepare(imx219->xclk); + return ret; +} + +static void imx219_set_power_off(struct imx219 *imx219) +{ + imx219_power(imx219, false); + regulator_bulk_disable(IMX219_NUM_SUPPLIES, imx219->supplies); + clk_disable_unprepare(imx219->xclk); +} + +static int imx219_set_power(struct imx219 *imx219, bool on) +{ + int ret = 0; + + if (on) { + ret = imx219_set_power_on(imx219); + if (ret) + return ret; + } else { + imx219_set_power_off(imx219); + } + + return 0; +} + +/* Open sub-device */ +static int imx219_s_power(struct v4l2_subdev *sd, int on) +{ + struct imx219 *imx219 = to_imx219(sd); + int ret = 0; + + mutex_lock(&imx219->mutex); + + /* + * If the power count is modified from 0 to != 0 or from != 0 to 0, + * update the power state. + */ + if (imx219->power_count == !on) { + ret = imx219_set_power(imx219, !!on); + if (ret) + goto out; + } + + /* Update the power count. */ + imx219->power_count += on ? 1 : -1; + WARN_ON(imx219->power_count < 0); +out: + mutex_unlock(&imx219->mutex); + + return ret; +} + +static int imx219_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh) +{ + struct v4l2_mbus_framefmt *try_fmt = + v4l2_subdev_get_try_format(sd, fh->pad, 0); + + /* Initialize try_fmt */ + try_fmt->width = supported_modes[0].width; + try_fmt->height = supported_modes[0].height; + try_fmt->code = MEDIA_BUS_FMT_SBGGR10_1X10; + try_fmt->field = V4L2_FIELD_NONE; + + return 0; +} + +static int imx219_set_ctrl(struct v4l2_ctrl *ctrl) +{ + struct imx219 *imx219 = + container_of(ctrl->handler, struct imx219, ctrl_handler); + struct i2c_client *client = v4l2_get_subdevdata(&imx219->sd); + int ret = 0; + + /* + * Applying V4L2 control value only happens + * when power is up for streaming + */ + if (pm_runtime_get_if_in_use(&client->dev) == 0) + return 0; + + switch (ctrl->id) { + case V4L2_CID_ANALOGUE_GAIN: + ret = imx219_write_reg(imx219, IMX219_REG_ANALOG_GAIN, + IMX219_REG_VALUE_08BIT, ctrl->val); + break; + case V4L2_CID_EXPOSURE: + ret = imx219_write_reg(imx219, IMX219_REG_EXPOSURE, + IMX219_REG_VALUE_16BIT, ctrl->val); + break; + case V4L2_CID_DIGITAL_GAIN: + ret = imx219_write_reg(imx219, IMX219_REG_DIGITAL_GAIN, + IMX219_REG_VALUE_16BIT, ctrl->val); + break; + case V4L2_CID_TEST_PATTERN: + ret = imx219_write_reg(imx219, IMX219_REG_TEST_PATTERN, + IMX219_REG_VALUE_16BIT, + imx219_test_pattern_val[ctrl->val]); + break; + default: + dev_info(&client->dev, + "ctrl(id:0x%x,val:0x%x) is not handled\n", + ctrl->id, ctrl->val); + ret = -EINVAL; + break; + } + + pm_runtime_put(&client->dev); + + return ret; +} + +static const struct v4l2_ctrl_ops imx219_ctrl_ops = { + .s_ctrl = imx219_set_ctrl, +}; + +static int imx219_enum_mbus_code(struct v4l2_subdev *sd, + struct v4l2_subdev_pad_config *cfg, 
+ struct v4l2_subdev_mbus_code_enum *code) +{ + /* Only one bayer order(GRBG) is supported */ + if (code->index > 0) + return -EINVAL; + + code->code = MEDIA_BUS_FMT_SBGGR10_1X10; + + return 0; +} + +static int imx219_enum_frame_size(struct v4l2_subdev *sd, + struct v4l2_subdev_pad_config *cfg, + struct v4l2_subdev_frame_size_enum *fse) +{ + if (fse->index >= ARRAY_SIZE(supported_modes)) + return -EINVAL; + + if (fse->code != MEDIA_BUS_FMT_SBGGR10_1X10) + return -EINVAL; + + fse->min_width = supported_modes[fse->index].width; + fse->max_width = fse->min_width; + fse->min_height = supported_modes[fse->index].height; + fse->max_height = fse->min_height; + + return 0; +} + +static void imx219_update_pad_format(const struct imx219_mode *mode, + struct v4l2_subdev_format *fmt) +{ + fmt->format.width = mode->width; + fmt->format.height = mode->height; + fmt->format.code = MEDIA_BUS_FMT_SBGGR10_1X10; + fmt->format.field = V4L2_FIELD_NONE; +} + +static int __imx219_get_pad_format(struct imx219 *imx219, + struct v4l2_subdev_pad_config *cfg, + struct v4l2_subdev_format *fmt) +{ + if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) + fmt->format = *v4l2_subdev_get_try_format(&imx219->sd, cfg, + fmt->pad); + else + imx219_update_pad_format(imx219->mode, fmt); + + return 0; +} + +static int imx219_get_pad_format(struct v4l2_subdev *sd, + struct v4l2_subdev_pad_config *cfg, + struct v4l2_subdev_format *fmt) +{ + struct imx219 *imx219 = to_imx219(sd); + int ret; + + mutex_lock(&imx219->mutex); + ret = __imx219_get_pad_format(imx219, cfg, fmt); + mutex_unlock(&imx219->mutex); + + return ret; +} + +static int imx219_set_pad_format(struct v4l2_subdev *sd, + struct v4l2_subdev_pad_config *cfg, + struct v4l2_subdev_format *fmt) +{ + struct imx219 *imx219 = to_imx219(sd); + const struct imx219_mode *mode; + struct v4l2_mbus_framefmt *framefmt; + + mutex_lock(&imx219->mutex); + + /* Only one raw bayer(BGGR) order is supported */ + fmt->format.code = MEDIA_BUS_FMT_SBGGR10_1X10; + + mode = v4l2_find_nearest_size(supported_modes, + ARRAY_SIZE(supported_modes), + width, height, + fmt->format.width, fmt->format.height); + imx219_update_pad_format(mode, fmt); + if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) { + framefmt = v4l2_subdev_get_try_format(sd, cfg, fmt->pad); + *framefmt = fmt->format; + } else { + imx219->mode = mode; + } + + mutex_unlock(&imx219->mutex); + + return 0; +} + +/* Start streaming */ +static int imx219_start_streaming(struct imx219 *imx219) +{ + struct i2c_client *client = v4l2_get_subdevdata(&imx219->sd); + const struct imx219_reg_list *reg_list; + int ret; + + /* Apply default values of current mode */ + reg_list = &imx219->mode->reg_list; + ret = imx219_write_regs(imx219, reg_list->regs, reg_list->num_of_regs); + if (ret) { + dev_err(&client->dev, "%s failed to set mode\n", __func__); + return ret; + } + + /* + * Set VTS appropriately for frame rate control. + * Currently fixed per mode. 
+ */ + ret = imx219_write_reg(imx219, IMX219_REG_VTS, + IMX219_REG_VALUE_16BIT, imx219->mode->vts_def); + if (ret) + return ret; + + /* Apply customized values from user */ + ret = __v4l2_ctrl_handler_setup(imx219->sd.ctrl_handler); + if (ret) + return ret; + + /* set stream on register */ + return imx219_write_reg(imx219, IMX219_REG_MODE_SELECT, + IMX219_REG_VALUE_08BIT, IMX219_MODE_STREAMING); +} + +/* Stop streaming */ +static int imx219_stop_streaming(struct imx219 *imx219) +{ + struct i2c_client *client = v4l2_get_subdevdata(&imx219->sd); + int ret; + + /* set stream off register */ + ret = imx219_write_reg(imx219, IMX219_REG_MODE_SELECT, + IMX219_REG_VALUE_08BIT, IMX219_MODE_STANDBY); + if (ret) + dev_err(&client->dev, "%s failed to set stream\n", __func__); + + /* + * Return success even if it was an error, as there is nothing the + * caller can do about it. + */ + return 0; +} + +static int imx219_set_stream(struct v4l2_subdev *sd, int enable) +{ + struct imx219 *imx219 = to_imx219(sd); + struct i2c_client *client = v4l2_get_subdevdata(sd); + int ret = 0; + + mutex_lock(&imx219->mutex); + if (imx219->streaming == enable) { + mutex_unlock(&imx219->mutex); + return 0; + } + + if (enable) { + ret = pm_runtime_get_sync(&client->dev); + if (ret < 0) { + pm_runtime_put_noidle(&client->dev); + goto err_unlock; + } + + /* + * Apply default & customized values + * and then start streaming. + */ + ret = imx219_start_streaming(imx219); + if (ret) { + pm_runtime_put(&client->dev); + goto err_unlock; + } + } else { + imx219_stop_streaming(imx219); + pm_runtime_put(&client->dev); + } + + imx219->streaming = enable; + mutex_unlock(&imx219->mutex); + + return ret; + +err_unlock: + mutex_unlock(&imx219->mutex); + + return ret; +} + +static int __maybe_unused imx219_suspend(struct device *dev) +{ + struct i2c_client *client = to_i2c_client(dev); + struct v4l2_subdev *sd = i2c_get_clientdata(client); + struct imx219 *imx219 = to_imx219(sd); + + if (imx219->streaming) + imx219_stop_streaming(imx219); + + return 0; +} + +static int __maybe_unused imx219_resume(struct device *dev) +{ + struct i2c_client *client = to_i2c_client(dev); + struct v4l2_subdev *sd = i2c_get_clientdata(client); + struct imx219 *imx219 = to_imx219(sd); + int ret; + + if (imx219->streaming) { + ret = imx219_start_streaming(imx219); + if (ret) + goto error; + } + + return 0; + +error: + imx219_stop_streaming(imx219); + imx219->streaming = 0; + return ret; +} + +static int imx219_get_regulators(struct imx219 *imx219) +{ + struct i2c_client *client = v4l2_get_subdevdata(&imx219->sd); + int i; + + for (i = 0; i < IMX219_NUM_SUPPLIES; i++) + imx219->supplies[i].supply = imx219_supply_name[i]; + + return devm_regulator_bulk_get(&client->dev, + IMX219_NUM_SUPPLIES, + imx219->supplies); +} + +/* Verify chip ID */ +static int imx219_identify_module(struct imx219 *imx219) +{ + struct i2c_client *client = v4l2_get_subdevdata(&imx219->sd); + int ret; + u32 val; + + ret = imx219_set_power_on(imx219); + if (ret) + return ret; + + ret = imx219_read_reg(imx219, IMX219_REG_CHIP_ID, + IMX219_REG_VALUE_16BIT, &val); + if (ret) { + dev_err(&client->dev, "failed to read chip id %x\n", + IMX219_CHIP_ID); + goto power_off; + } + + if (val != IMX219_CHIP_ID) { + dev_err(&client->dev, "chip id mismatch: %x!=%x\n", + IMX219_CHIP_ID, val); + ret = -EIO; + } + +power_off: + imx219_set_power_off(imx219); + return ret; +} + +static const struct v4l2_subdev_core_ops imx219_core_ops = { + .s_power = imx219_s_power, +}; + +static const struct 
v4l2_subdev_video_ops imx219_video_ops = { + .s_stream = imx219_set_stream, +}; + +static const struct v4l2_subdev_pad_ops imx219_pad_ops = { + .enum_mbus_code = imx219_enum_mbus_code, + .get_fmt = imx219_get_pad_format, + .set_fmt = imx219_set_pad_format, + .enum_frame_size = imx219_enum_frame_size, +}; + +static const struct v4l2_subdev_ops imx219_subdev_ops = { + .core = &imx219_core_ops, + .video = &imx219_video_ops, + .pad = &imx219_pad_ops, +}; + +static const struct v4l2_subdev_internal_ops imx219_internal_ops = { + .open = imx219_open, +}; + +/* Initialize control handlers */ +static int imx219_init_controls(struct imx219 *imx219) +{ + struct i2c_client *client = v4l2_get_subdevdata(&imx219->sd); + struct v4l2_ctrl_handler *ctrl_hdlr; + int ret; + + ctrl_hdlr = &imx219->ctrl_handler; + ret = v4l2_ctrl_handler_init(ctrl_hdlr, 8); + if (ret) + return ret; + + mutex_init(&imx219->mutex); + ctrl_hdlr->lock = &imx219->mutex; + + imx219->exposure = v4l2_ctrl_new_std(ctrl_hdlr, &imx219_ctrl_ops, + V4L2_CID_EXPOSURE, + IMX219_EXPOSURE_MIN, + IMX219_EXPOSURE_MAX, + IMX219_EXPOSURE_STEP, + IMX219_EXPOSURE_DEFAULT); + + v4l2_ctrl_new_std(ctrl_hdlr, &imx219_ctrl_ops, V4L2_CID_ANALOGUE_GAIN, + IMX219_ANA_GAIN_MIN, IMX219_ANA_GAIN_MAX, + IMX219_ANA_GAIN_STEP, IMX219_ANA_GAIN_DEFAULT); + + v4l2_ctrl_new_std(ctrl_hdlr, &imx219_ctrl_ops, V4L2_CID_DIGITAL_GAIN, + IMX219_DGTL_GAIN_MIN, IMX219_DGTL_GAIN_MAX, + IMX219_DGTL_GAIN_STEP, IMX219_DGTL_GAIN_DEFAULT); + + v4l2_ctrl_new_std_menu_items(ctrl_hdlr, &imx219_ctrl_ops, + V4L2_CID_TEST_PATTERN, + ARRAY_SIZE(imx219_test_pattern_menu) - 1, + 0, 0, imx219_test_pattern_menu); + + if (ctrl_hdlr->error) { + ret = ctrl_hdlr->error; + dev_err(&client->dev, "%s control init failed (%d)\n", + __func__, ret); + goto error; + } + + imx219->sd.ctrl_handler = ctrl_hdlr; + + return 0; + +error: + v4l2_ctrl_handler_free(ctrl_hdlr); + mutex_destroy(&imx219->mutex); + + return ret; +} + +static void imx219_free_controls(struct imx219 *imx219) +{ + v4l2_ctrl_handler_free(imx219->sd.ctrl_handler); + mutex_destroy(&imx219->mutex); +} + +static int imx219_probe(struct i2c_client *client, + const struct i2c_device_id *id) +{ + struct device *dev = &client->dev; + struct fwnode_handle *endpoint; + struct imx219 *imx219; + int ret; + + imx219 = devm_kzalloc(&client->dev, sizeof(*imx219), GFP_KERNEL); + if (!imx219) + return -ENOMEM; + + /* Initialize subdev */ + v4l2_i2c_subdev_init(&imx219->sd, client, &imx219_subdev_ops); + + /* Get CSI2 bus config */ + endpoint = fwnode_graph_get_next_endpoint(dev_fwnode(&client->dev), + NULL); + if (!endpoint) { + dev_err(dev, "endpoint node not found\n"); + return -EINVAL; + } + + ret = v4l2_fwnode_endpoint_parse(endpoint, &imx219->ep); + fwnode_handle_put(endpoint); + if (ret) { + dev_err(dev, "Could not parse endpoint\n"); + return ret; + } + + /* Get system clock (xclk) */ + imx219->xclk = devm_clk_get(dev, "xclk"); + if (IS_ERR(imx219->xclk)) { + dev_err(dev, "failed to get xclk\n"); + return PTR_ERR(imx219->xclk); + } + + imx219->xclk_freq = clk_get_rate(imx219->xclk); + if (imx219->xclk_freq != 24000000) { + dev_err(dev, "xclk frequency not supported: %d Hz\n", + imx219->xclk_freq); + return -EINVAL; + } + + ret = imx219_get_regulators(imx219); + if (ret) + return ret; + + /* request optional power down pin */ + imx219->xclr_gpio = devm_gpiod_get_optional(dev, "xclr", + GPIOD_OUT_HIGH); + + /* Check module identity */ + ret = imx219_identify_module(imx219); + if (ret) + return ret; + + /* Set default mode to max resolution */ + 
imx219->mode = &supported_modes[0];
+
+	ret = imx219_init_controls(imx219);
+	if (ret)
+		return ret;
+
+	/* Initialize subdev */
+	imx219->sd.internal_ops = &imx219_internal_ops;
+	imx219->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+	imx219->sd.entity.function = MEDIA_ENT_F_CAM_SENSOR;
+
+	/* Initialize source pad */
+	imx219->pad.flags = MEDIA_PAD_FL_SOURCE;
+
+	ret = media_entity_pads_init(&imx219->sd.entity, 1, &imx219->pad);
+	if (ret)
+		goto error_handler_free;
+
+	ret = v4l2_async_register_subdev_sensor_common(&imx219->sd);
+	if (ret < 0)
+		goto error_media_entity;
+
+	pm_runtime_set_active(&client->dev);
+	pm_runtime_enable(&client->dev);
+	pm_runtime_idle(&client->dev);
+
+	return 0;
+
+error_media_entity:
+	media_entity_cleanup(&imx219->sd.entity);
+
+error_handler_free:
+	imx219_free_controls(imx219);
+
+	return ret;
+}
+
+static int imx219_remove(struct i2c_client *client)
+{
+	struct v4l2_subdev *sd = i2c_get_clientdata(client);
+	struct imx219 *imx219 = to_imx219(sd);
+
+	v4l2_async_unregister_subdev(sd);
+	media_entity_cleanup(&sd->entity);
+	imx219_free_controls(imx219);
+
+	pm_runtime_disable(&client->dev);
+	pm_runtime_set_suspended(&client->dev);
+
+	return 0;
+}
+
+static const struct of_device_id imx219_dt_ids[] = {
+	{ .compatible = "sony,imx219" },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, imx219_dt_ids);
+
+static struct i2c_driver imx219_i2c_driver = {
+	.driver = {
+		.name = "imx219",
+		.of_match_table = imx219_dt_ids,
+	},
+	.probe = imx219_probe,
+	.remove = imx219_remove,
+};
+
+module_i2c_driver(imx219_i2c_driver);
+
+MODULE_AUTHOR("Dave Stevenson <dave.stevenson@raspberrypi.org>");
+MODULE_DESCRIPTION("Sony IMX219 sensor driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/media/i2c/ov7740.c b/drivers/media/i2c/ov7740.c
+	if (index >= ARRAY_SIZE(ov7740_framesizes)) {
+		fsize = &ov7740_framesizes[0];
+		fmt->width = fsize->width;
+		fmt->height = fsize->height;
+	}
 
 	if (ret_frmsize != NULL)
 		*ret_frmsize = fsize;
diff --git a/drivers/media/media-device.c b/drivers/media/media-device.c
index 3bae24b15eaa4cbd5e25485121e50ddcb5991af2..ed518b1f82e4a941799ee2c0061b5dcbf5fbabbf 100644
--- a/drivers/media/media-device.c
+++ b/drivers/media/media-device.c
@@ -487,6 +487,7 @@ static long media_device_enum_links32(struct media_device *mdev,
 {
 	struct media_links_enum links;
 	compat_uptr_t pads_ptr, links_ptr;
+	int ret;
 
 	memset(&links, 0, sizeof(links));
 
@@ -498,7 +499,14 @@ static long media_device_enum_links32(struct media_device *mdev,
 	links.pads = compat_ptr(pads_ptr);
 	links.links = compat_ptr(links_ptr);
 
-	return media_device_enum_links(mdev, &links);
+	ret = media_device_enum_links(mdev, &links);
+	if (ret)
+		return ret;
+
+	if (copy_to_user(ulinks->reserved, links.reserved,
+			 sizeof(ulinks->reserved)))
+		return -EFAULT;
+	return 0;
 }
 
 #define MEDIA_IOC_ENUM_LINKS32 _IOWR('|', 0x02, struct media_links_enum32)
diff --git a/drivers/media/pci/saa7164/saa7164-core.c b/drivers/media/pci/saa7164/saa7164-core.c
index d697e1ad929c2d8e765feb15f136563b103bef05..5102519df108328cccf15c061b4038dd673d9963 100644
--- a/drivers/media/pci/saa7164/saa7164-core.c
+++ b/drivers/media/pci/saa7164/saa7164-core.c
@@ -1122,16 +1122,25 @@ static int saa7164_proc_show(struct seq_file *m, void *v)
 	return 0;
 }
 
+static struct proc_dir_entry *saa7164_pe;
+
 static int saa7164_proc_create(void)
 {
-	struct proc_dir_entry *pe;
-
-	pe = proc_create_single("saa7164", S_IRUGO, NULL, saa7164_proc_show);
-	if (!pe)
+	saa7164_pe = proc_create_single("saa7164", 0444, NULL, saa7164_proc_show);
+	if (!saa7164_pe)
 		return -ENOMEM;
 
 	return 0;
 }
+
+static void saa7164_proc_destroy(void)
+{
+	if (saa7164_pe)
+		remove_proc_entry("saa7164", NULL);
+}
+#else
+static int saa7164_proc_create(void) { return
0; } +static void saa7164_proc_destroy(void) {} #endif static int saa7164_thread_function(void *data) @@ -1503,19 +1512,21 @@ static struct pci_driver saa7164_pci_driver = { static int __init saa7164_init(void) { - printk(KERN_INFO "saa7164 driver loaded\n"); + int ret = pci_register_driver(&saa7164_pci_driver); + + if (ret) + return ret; -#ifdef CONFIG_PROC_FS saa7164_proc_create(); -#endif - return pci_register_driver(&saa7164_pci_driver); + + pr_info("saa7164 driver loaded\n"); + + return 0; } static void __exit saa7164_fini(void) { -#ifdef CONFIG_PROC_FS - remove_proc_entry("saa7164", NULL); -#endif + saa7164_proc_destroy(); pci_unregister_driver(&saa7164_pci_driver); } diff --git a/drivers/media/platform/bcm2835/Kconfig b/drivers/media/platform/bcm2835/Kconfig index dac4e92bbbb6d95ce6d71f83a34b9701c68be7a8..20e15147ee02f94bca9f664343f8e062188321ff 100644 --- a/drivers/media/platform/bcm2835/Kconfig +++ b/drivers/media/platform/bcm2835/Kconfig @@ -2,7 +2,7 @@ config VIDEO_BCM2835_UNICAM tristate "Broadcom BCM2835 Unicam video capture driver" - depends on VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API + depends on VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API && MEDIA_CONTROLLER depends on ARCH_BCM2835 || COMPILE_TEST select VIDEOBUF2_DMA_CONTIG select V4L2_FWNODE diff --git a/drivers/media/platform/bcm2835/bcm2835-unicam.c b/drivers/media/platform/bcm2835/bcm2835-unicam.c index 5f9375add52072b9176f30d7d476a978156a4967..3f79bd86ac5ff9846e3231b680065c8ddb74527b 100644 --- a/drivers/media/platform/bcm2835/bcm2835-unicam.c +++ b/drivers/media/platform/bcm2835/bcm2835-unicam.c @@ -314,6 +314,9 @@ struct unicam_device { struct clk *clock; /* V4l2 device */ struct v4l2_device v4l2_dev; + struct media_device mdev; + struct media_pad pad; + /* parent device */ struct platform_device *pdev; /* subdevice async Notifier */ @@ -963,7 +966,7 @@ static void unicam_cfg_image_id(struct unicam_device *dev) } } -void unicam_start_rx(struct unicam_device *dev, unsigned long addr) +static void unicam_start_rx(struct unicam_device *dev, unsigned long addr) { struct unicam_cfg *cfg = &dev->cfg; int line_int_freq = dev->v_fmt.fmt.pix.height >> 2; @@ -1912,6 +1915,8 @@ static int unicam_probe_complete(struct unicam_device *unicam) unicam->v4l2_dev.ctrl_handler = NULL; video_set_drvdata(vdev, unicam); + vdev->entity.flags |= MEDIA_ENT_FL_DEFAULT; + ret = video_register_device(vdev, VFL_TYPE_GRABBER, -1); if (ret) { unicam_err(unicam, "Unable to register video device.\n"); @@ -1953,6 +1958,16 @@ static int unicam_probe_complete(struct unicam_device *unicam) return ret; } + ret = media_create_pad_link(&unicam->sensor->entity, 0, + &unicam->video_dev.entity, 0, + MEDIA_LNK_FL_ENABLED | + MEDIA_LNK_FL_IMMUTABLE); + if (ret) { + unicam_err(unicam, "Unable to create pad links.\n"); + video_unregister_device(&unicam->video_dev); + return ret; + } + return 0; } @@ -2155,18 +2170,38 @@ static int unicam_probe(struct platform_device *pdev) return -EINVAL; } + unicam->mdev.dev = &pdev->dev; + strscpy(unicam->mdev.model, UNICAM_MODULE_NAME, + sizeof(unicam->mdev.model)); + strscpy(unicam->mdev.serial, "", sizeof(unicam->mdev.serial)); + snprintf(unicam->mdev.bus_info, sizeof(unicam->mdev.bus_info), + "platform:%s", pdev->name); + unicam->mdev.hw_revision = 1; + + media_entity_pads_init(&unicam->video_dev.entity, 1, &unicam->pad); + media_device_init(&unicam->mdev); + + unicam->v4l2_dev.mdev = &unicam->mdev; + ret = v4l2_device_register(&pdev->dev, &unicam->v4l2_dev); if (ret) { unicam_err(unicam, "Unable to register v4l2 device.\n"); - return 
ret; + goto media_cleanup; + } + + ret = media_device_register(&unicam->mdev); + if (ret < 0) { + unicam_err(unicam, + "Unable to register media-controller device.\n"); + goto probe_out_v4l2_unregister; } /* Reserve space for the controls */ hdl = &unicam->ctrl_handler; ret = v4l2_ctrl_handler_init(hdl, 16); if (ret < 0) - goto probe_out_v4l2_unregister; + goto media_unregister; unicam->v4l2_dev.ctrl_handler = hdl; /* set the driver data in platform device */ @@ -2185,8 +2220,13 @@ static int unicam_probe(struct platform_device *pdev) free_hdl: v4l2_ctrl_handler_free(hdl); +media_unregister: + media_device_unregister(&unicam->mdev); probe_out_v4l2_unregister: v4l2_device_unregister(&unicam->v4l2_dev); +media_cleanup: + media_device_cleanup(&unicam->mdev); + return ret; } @@ -2204,6 +2244,8 @@ static int unicam_remove(struct platform_device *pdev) video_unregister_device(&unicam->video_dev); if (unicam->sensor_config) v4l2_subdev_free_pad_config(unicam->sensor_config); + media_device_unregister(&unicam->mdev); + media_device_cleanup(&unicam->mdev); return 0; } diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c index a3cfefdbee1275a45ff6d45d717a8b376f808049..c3eaddced72143a5c1969c56925d6ffd5de8b728 100644 --- a/drivers/media/platform/coda/coda-bit.c +++ b/drivers/media/platform/coda/coda-bit.c @@ -1728,6 +1728,7 @@ static int __coda_start_decoding(struct coda_ctx *ctx) v4l2_err(&dev->v4l2_dev, "CODA_COMMAND_SEQ_INIT timeout\n"); return ret; } + ctx->sequence_offset = ~0U; ctx->initialized = 1; /* Update kfifo out pointer from coda bitstream read pointer */ @@ -2142,12 +2143,17 @@ static void coda_finish_decode(struct coda_ctx *ctx) else if (ctx->display_idx < 0) ctx->hold = true; } else if (decoded_idx == -2) { + if (ctx->display_idx >= 0 && + ctx->display_idx < ctx->num_internal_frames) + ctx->sequence_offset++; /* no frame was decoded, we still return remaining buffers */ } else if (decoded_idx < 0 || decoded_idx >= ctx->num_internal_frames) { v4l2_err(&dev->v4l2_dev, "decoded frame index out of range: %d\n", decoded_idx); } else { - val = coda_read(dev, CODA_RET_DEC_PIC_FRAME_NUM) - 1; + val = coda_read(dev, CODA_RET_DEC_PIC_FRAME_NUM); + if (ctx->sequence_offset == -1) + ctx->sequence_offset = val; val -= ctx->sequence_offset; spin_lock_irqsave(&ctx->buffer_meta_lock, flags); if (!list_empty(&ctx->buffer_meta_list)) { @@ -2303,7 +2309,6 @@ irqreturn_t coda_irq_handler(int irq, void *data) if (ctx == NULL) { v4l2_err(&dev->v4l2_dev, "Instance released before the end of transaction\n"); - mutex_unlock(&dev->coda_mutex); return IRQ_HANDLED; } diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c index 19d92edcc98113b145427888632f281c72f7f3d6..4b0220f40b4253d0739553e51185fd9dc9d23484 100644 --- a/drivers/media/platform/coda/coda-common.c +++ b/drivers/media/platform/coda/coda-common.c @@ -997,6 +997,8 @@ static int coda_encoder_cmd(struct file *file, void *fh, /* Set the stream-end flag on this context */ ctx->bit_stream_param |= CODA_BIT_STREAM_END_FLAG; + flush_work(&ctx->pic_run_work); + /* If there is no buffer in flight, wake up */ if (!ctx->streamon_out || ctx->qsequence == ctx->osequence) { dst_vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, diff --git a/drivers/media/platform/davinci/vpss.c b/drivers/media/platform/davinci/vpss.c index 19cf6853411e223905e183585753ba2ded590e11..89a86c19579b8ab9d2cbee42a4cc25913dab9b96 100644 --- a/drivers/media/platform/davinci/vpss.c +++ 
b/drivers/media/platform/davinci/vpss.c @@ -518,6 +518,11 @@ static int __init vpss_init(void) return -EBUSY; oper_cfg.vpss_regs_base2 = ioremap(VPSS_CLK_CTRL, 4); + if (unlikely(!oper_cfg.vpss_regs_base2)) { + release_mem_region(VPSS_CLK_CTRL, 4); + return -ENOMEM; + } + writel(VPSS_CLK_CTRL_VENCCLKEN | VPSS_CLK_CTRL_DACCLKEN, oper_cfg.vpss_regs_base2); diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c index dfdbd4354b74cf700305c0b2c70a396938cd039a..eeee15ff007d878a3028d85e0f8a3eac2bcc4743 100644 --- a/drivers/media/platform/marvell-ccic/mcam-core.c +++ b/drivers/media/platform/marvell-ccic/mcam-core.c @@ -200,7 +200,6 @@ struct mcam_vb_buffer { struct list_head queue; struct mcam_dma_desc *dma_desc; /* Descriptor virtual address */ dma_addr_t dma_desc_pa; /* Descriptor physical address */ - int dma_desc_nent; /* Number of mapped descriptors */ }; static inline struct mcam_vb_buffer *vb_to_mvb(struct vb2_v4l2_buffer *vb) @@ -608,9 +607,11 @@ static void mcam_dma_contig_done(struct mcam_camera *cam, int frame) static void mcam_sg_next_buffer(struct mcam_camera *cam) { struct mcam_vb_buffer *buf; + struct sg_table *sg_table; buf = list_first_entry(&cam->buffers, struct mcam_vb_buffer, queue); list_del_init(&buf->queue); + sg_table = vb2_dma_sg_plane_desc(&buf->vb_buf.vb2_buf, 0); /* * Very Bad Not Good Things happen if you don't clear * C1_DESC_ENA before making any descriptor changes. @@ -618,7 +619,7 @@ static void mcam_sg_next_buffer(struct mcam_camera *cam) mcam_reg_clear_bit(cam, REG_CTRL1, C1_DESC_ENA); mcam_reg_write(cam, REG_DMA_DESC_Y, buf->dma_desc_pa); mcam_reg_write(cam, REG_DESC_LEN_Y, - buf->dma_desc_nent*sizeof(struct mcam_dma_desc)); + sg_table->nents * sizeof(struct mcam_dma_desc)); mcam_reg_write(cam, REG_DESC_LEN_U, 0); mcam_reg_write(cam, REG_DESC_LEN_V, 0); mcam_reg_set_bit(cam, REG_CTRL1, C1_DESC_ENA); diff --git a/drivers/media/platform/omap/omap_vout_vrfb.c b/drivers/media/platform/omap/omap_vout_vrfb.c index 29e3f5da59c1ff61137f66d93b75b865521ed5e1..11ec048929e80109e6702249aea0b345e79378d3 100644 --- a/drivers/media/platform/omap/omap_vout_vrfb.c +++ b/drivers/media/platform/omap/omap_vout_vrfb.c @@ -253,8 +253,7 @@ int omap_vout_prepare_vrfb(struct omap_vout_device *vout, */ pixsize = vout->bpp * vout->vrfb_bpp; - dst_icg = ((MAX_PIXELS_PER_LINE * pixsize) - - (vout->pix.width * vout->bpp)) + 1; + dst_icg = MAX_PIXELS_PER_LINE * pixsize - vout->pix.width * vout->bpp; xt->src_start = vout->buf_phy_addr[vb->i]; xt->dst_start = vout->vrfb_context[vb->i].paddr[0]; diff --git a/drivers/media/platform/rcar_fdp1.c b/drivers/media/platform/rcar_fdp1.c index 2a15b7cca338fe6445ae11a8ec275c9e40ac1245..0d1467028811303f57c7192504cbd113443830f7 100644 --- a/drivers/media/platform/rcar_fdp1.c +++ b/drivers/media/platform/rcar_fdp1.c @@ -257,6 +257,8 @@ MODULE_PARM_DESC(debug, "activate debug info"); #define FD1_IP_H3_ES1 0x02010101 #define FD1_IP_M3W 0x02010202 #define FD1_IP_H3 0x02010203 +#define FD1_IP_M3N 0x02010204 +#define FD1_IP_E3 0x02010205 /* LUTs */ #define FD1_LUT_DIF_ADJ 0x1000 @@ -2365,6 +2367,12 @@ static int fdp1_probe(struct platform_device *pdev) case FD1_IP_H3: dprintk(fdp1, "FDP1 Version R-Car H3\n"); break; + case FD1_IP_M3N: + dprintk(fdp1, "FDP1 Version R-Car M3N\n"); + break; + case FD1_IP_E3: + dprintk(fdp1, "FDP1 Version R-Car E3\n"); + break; default: dev_err(fdp1->dev, "FDP1 Unidentifiable (0x%08x)\n", hw_version); diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c 
b/drivers/media/platform/s5p-mfc/s5p_mfc.c index ca11f8a7569dca82883f4a124d881db62151c9d8..4b8516c35bc204700bdc1de88d6faf7503b71249 100644 --- a/drivers/media/platform/s5p-mfc/s5p_mfc.c +++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c @@ -527,7 +527,8 @@ static void s5p_mfc_handle_seq_done(struct s5p_mfc_ctx *ctx, dev); ctx->mv_count = s5p_mfc_hw_call(dev->mfc_ops, get_mv_count, dev); - ctx->scratch_buf_size = s5p_mfc_hw_call(dev->mfc_ops, + if (FW_HAS_E_MIN_SCRATCH_BUF(dev)) + ctx->scratch_buf_size = s5p_mfc_hw_call(dev->mfc_ops, get_min_scratch_buf_size, dev); if (ctx->img_width == 0 || ctx->img_height == 0) ctx->state = MFCINST_ERROR; diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c b/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c index eb85cedc5ef34a66ba1889c416ef7e401921c804..5e080f32b0e8247324ea513151cb97c362edd44a 100644 --- a/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c +++ b/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c @@ -38,6 +38,11 @@ int s5p_mfc_init_pm(struct s5p_mfc_dev *dev) for (i = 0; i < pm->num_clocks; i++) { pm->clocks[i] = devm_clk_get(pm->device, pm->clk_names[i]); if (IS_ERR(pm->clocks[i])) { + /* additional clocks are optional */ + if (i && PTR_ERR(pm->clocks[i]) == -ENOENT) { + pm->clocks[i] = NULL; + continue; + } mfc_err("Failed to get clock: %s\n", pm->clk_names[i]); return PTR_ERR(pm->clocks[i]); diff --git a/drivers/media/platform/vimc/vimc-capture.c b/drivers/media/platform/vimc/vimc-capture.c index 65d657daf66f8e7b0da13aaaea893f733ce7fbec..8e014cc485f002ef8555794a4fa182f1f4e634e0 100644 --- a/drivers/media/platform/vimc/vimc-capture.c +++ b/drivers/media/platform/vimc/vimc-capture.c @@ -132,12 +132,15 @@ static int vimc_cap_s_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f) { struct vimc_cap_device *vcap = video_drvdata(file); + int ret; /* Do not change the format while stream is on */ if (vb2_is_busy(&vcap->queue)) return -EBUSY; - vimc_cap_try_fmt_vid_cap(file, priv, f); + ret = vimc_cap_try_fmt_vid_cap(file, priv, f); + if (ret) + return ret; dev_dbg(vcap->dev, "%s: format update: " "old:%dx%d (0x%x, %d, %d, %d, %d) " diff --git a/drivers/media/radio/radio-raremono.c b/drivers/media/radio/radio-raremono.c index 9a5079d64c4ab1fa96af7f0a22b0ebdb4793701b..729600c4a056b824b4fdfcc7e831ae939e951aa2 100644 --- a/drivers/media/radio/radio-raremono.c +++ b/drivers/media/radio/radio-raremono.c @@ -271,6 +271,14 @@ static int vidioc_g_frequency(struct file *file, void *priv, return 0; } +static void raremono_device_release(struct v4l2_device *v4l2_dev) +{ + struct raremono_device *radio = to_raremono_dev(v4l2_dev); + + kfree(radio->buffer); + kfree(radio); +} + /* File system interface */ static const struct v4l2_file_operations usb_raremono_fops = { .owner = THIS_MODULE, @@ -295,12 +303,14 @@ static int usb_raremono_probe(struct usb_interface *intf, struct raremono_device *radio; int retval = 0; - radio = devm_kzalloc(&intf->dev, sizeof(struct raremono_device), GFP_KERNEL); - if (radio) - radio->buffer = devm_kmalloc(&intf->dev, BUFFER_LENGTH, GFP_KERNEL); - - if (!radio || !radio->buffer) + radio = kzalloc(sizeof(*radio), GFP_KERNEL); + if (!radio) + return -ENOMEM; + radio->buffer = kmalloc(BUFFER_LENGTH, GFP_KERNEL); + if (!radio->buffer) { + kfree(radio); return -ENOMEM; + } radio->usbdev = interface_to_usbdev(intf); radio->intf = intf; @@ -324,7 +334,8 @@ static int usb_raremono_probe(struct usb_interface *intf, if (retval != 3 || (get_unaligned_be16(&radio->buffer[1]) & 0xfff) == 0x0242) { dev_info(&intf->dev, "this is not Thanko's 
Raremono.\n"); - return -ENODEV; + retval = -ENODEV; + goto free_mem; } dev_info(&intf->dev, "Thanko's Raremono connected: (%04X:%04X)\n", @@ -333,7 +344,7 @@ static int usb_raremono_probe(struct usb_interface *intf, retval = v4l2_device_register(&intf->dev, &radio->v4l2_dev); if (retval < 0) { dev_err(&intf->dev, "couldn't register v4l2_device\n"); - return retval; + goto free_mem; } mutex_init(&radio->lock); @@ -345,6 +356,7 @@ static int usb_raremono_probe(struct usb_interface *intf, radio->vdev.ioctl_ops = &usb_raremono_ioctl_ops; radio->vdev.lock = &radio->lock; radio->vdev.release = video_device_release_empty; + radio->v4l2_dev.release = raremono_device_release; usb_set_intfdata(intf, &radio->v4l2_dev); @@ -360,6 +372,10 @@ static int usb_raremono_probe(struct usb_interface *intf, } dev_err(&intf->dev, "could not register video device\n"); v4l2_device_unregister(&radio->v4l2_dev); + +free_mem: + kfree(radio->buffer); + kfree(radio); return retval; } diff --git a/drivers/media/radio/wl128x/fmdrv_v4l2.c b/drivers/media/radio/wl128x/fmdrv_v4l2.c index dccdf6558e6ab7ce4e1e613453acdd5558badd3a..33abc8616ecb8122bd24a6117680e1f22aaf3ef7 100644 --- a/drivers/media/radio/wl128x/fmdrv_v4l2.c +++ b/drivers/media/radio/wl128x/fmdrv_v4l2.c @@ -549,6 +549,7 @@ int fm_v4l2_init_video_device(struct fmdev *fmdev, int radio_nr) /* Register with V4L2 subsystem as RADIO device */ if (video_register_device(&gradio_dev, VFL_TYPE_RADIO, radio_nr)) { + v4l2_device_unregister(&fmdev->v4l2_dev); fmerr("Could not register video device\n"); return -ENOMEM; } @@ -562,6 +563,8 @@ int fm_v4l2_init_video_device(struct fmdev *fmdev, int radio_nr) if (ret < 0) { fmerr("(fmdev): Can't init ctrl handler\n"); v4l2_ctrl_handler_free(&fmdev->ctrl_handler); + video_unregister_device(fmdev->radio_dev); + v4l2_device_unregister(&fmdev->v4l2_dev); return -EBUSY; } diff --git a/drivers/media/rc/ir-spi.c b/drivers/media/rc/ir-spi.c index 66334e8d63baa8c41fb4e4afe742aeb994704148..c58f2d38a4582ec3ab126bd471a1861b7e42f215 100644 --- a/drivers/media/rc/ir-spi.c +++ b/drivers/media/rc/ir-spi.c @@ -161,6 +161,7 @@ static const struct of_device_id ir_spi_of_match[] = { { .compatible = "ir-spi-led" }, {}, }; +MODULE_DEVICE_TABLE(of, ir_spi_of_match); static struct spi_driver ir_spi_driver = { .probe = ir_spi_probe, diff --git a/drivers/media/usb/au0828/au0828-core.c b/drivers/media/usb/au0828/au0828-core.c index 257ae0d8cfe27250d52a6d6ee3523c40d0028320..e3f63299f85c0906a31db16528f7af468b836e68 100644 --- a/drivers/media/usb/au0828/au0828-core.c +++ b/drivers/media/usb/au0828/au0828-core.c @@ -623,6 +623,12 @@ static int au0828_usb_probe(struct usb_interface *interface, /* Setup */ au0828_card_setup(dev); + /* + * Store the pointer to the au0828_dev so it can be accessed in + * au0828_usb_disconnect + */ + usb_set_intfdata(interface, dev); + /* Analog TV */ retval = au0828_analog_register(dev, interface); if (retval) { @@ -641,12 +647,6 @@ static int au0828_usb_probe(struct usb_interface *interface, /* Remote controller */ au0828_rc_register(dev); - /* - * Store the pointer to the au0828_dev so it can be accessed in - * au0828_usb_disconnect - */ - usb_set_intfdata(interface, dev); - pr_info("Registered device AU0828 [%s]\n", dev->board.name == NULL ? 
"Unset" : dev->board.name); diff --git a/drivers/media/usb/cpia2/cpia2_usb.c b/drivers/media/usb/cpia2/cpia2_usb.c index a771e0a52610c84e492f1bafccfebbfe9f73e7c5..f5b04594e209400926b6310c4b389e3a94399907 100644 --- a/drivers/media/usb/cpia2/cpia2_usb.c +++ b/drivers/media/usb/cpia2/cpia2_usb.c @@ -902,7 +902,6 @@ static void cpia2_usb_disconnect(struct usb_interface *intf) cpia2_unregister_camera(cam); v4l2_device_disconnect(&cam->v4l2_dev); mutex_unlock(&cam->v4l2_lock); - v4l2_device_put(&cam->v4l2_dev); if(cam->buffers) { DBG("Wakeup waiting processes\n"); @@ -911,6 +910,8 @@ static void cpia2_usb_disconnect(struct usb_interface *intf) wake_up_interruptible(&cam->wq_stream); } + v4l2_device_put(&cam->v4l2_dev); + LOG("CPiA2 camera disconnected.\n"); } diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c index 40ca4eafb137412edd1bb58f24868da5069e017c..39ac22486bcd91b6823ba0055fc58444afccba59 100644 --- a/drivers/media/usb/dvb-usb/dvb-usb-init.c +++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c @@ -287,12 +287,15 @@ EXPORT_SYMBOL(dvb_usb_device_init); void dvb_usb_device_exit(struct usb_interface *intf) { struct dvb_usb_device *d = usb_get_intfdata(intf); - const char *name = "generic DVB-USB module"; + const char *default_name = "generic DVB-USB module"; + char name[40]; usb_set_intfdata(intf, NULL); if (d != NULL && d->desc != NULL) { - name = d->desc->name; + strscpy(name, d->desc->name, sizeof(name)); dvb_usb_exit(d); + } else { + strscpy(name, default_name, sizeof(name)); } info("%s successfully deinitialized and disconnected.", name); diff --git a/drivers/media/usb/hdpvr/hdpvr-video.c b/drivers/media/usb/hdpvr/hdpvr-video.c index 1b89c77bad6673d03600f2961f4c8fc96667b5dc..0615996572e41a618626795d36822ee466d324d2 100644 --- a/drivers/media/usb/hdpvr/hdpvr-video.c +++ b/drivers/media/usb/hdpvr/hdpvr-video.c @@ -439,7 +439,7 @@ static ssize_t hdpvr_read(struct file *file, char __user *buffer, size_t count, /* wait for the first buffer */ if (!(file->f_flags & O_NONBLOCK)) { if (wait_event_interruptible(dev->wait_data, - hdpvr_get_next_buffer(dev))) + !list_empty_careful(&dev->rec_buff_list))) return -ERESTARTSYS; } @@ -465,10 +465,17 @@ static ssize_t hdpvr_read(struct file *file, char __user *buffer, size_t count, goto err; } if (!err) { - v4l2_dbg(MSG_INFO, hdpvr_debug, &dev->v4l2_dev, - "timeout: restart streaming\n"); + v4l2_info(&dev->v4l2_dev, + "timeout: restart streaming\n"); + mutex_lock(&dev->io_mutex); hdpvr_stop_streaming(dev); - msecs_to_jiffies(4000); + mutex_unlock(&dev->io_mutex); + /* + * The FW needs about 4 seconds after streaming + * stopped before it is ready to restart + * streaming. 
+ */ + msleep(4000); err = hdpvr_start_streaming(dev); if (err) { ret = err; @@ -1133,9 +1140,7 @@ static void hdpvr_device_release(struct video_device *vdev) struct hdpvr_device *dev = video_get_drvdata(vdev); hdpvr_delete(dev); - mutex_lock(&dev->io_mutex); flush_work(&dev->worker); - mutex_unlock(&dev->io_mutex); v4l2_device_unregister(&dev->v4l2_dev); v4l2_ctrl_handler_free(&dev->hdl); diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c index 673fdca8d2dac0fc09fc713b8f87e2ce5879bd5f..fcb201a40920e0281f15d05f55fb8550de245b0e 100644 --- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c +++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c @@ -1680,7 +1680,7 @@ static int pvr2_decoder_enable(struct pvr2_hdw *hdw,int enablefl) } if (!hdw->flag_decoder_missed) { pvr2_trace(PVR2_TRACE_ERROR_LEGS, - "WARNING: No decoder present"); + "***WARNING*** No decoder present"); hdw->flag_decoder_missed = !0; trace_stbit("flag_decoder_missed", hdw->flag_decoder_missed); @@ -2366,7 +2366,7 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf, if (hdw_desc->flag_is_experimental) { pvr2_trace(PVR2_TRACE_INFO, "**********"); pvr2_trace(PVR2_TRACE_INFO, - "WARNING: Support for this device (%s) is experimental.", + "***WARNING*** Support for this device (%s) is experimental.", hdw_desc->description); pvr2_trace(PVR2_TRACE_INFO, "Important functionality might not be entirely working."); diff --git a/drivers/media/usb/pvrusb2/pvrusb2-i2c-core.c b/drivers/media/usb/pvrusb2/pvrusb2-i2c-core.c index f3003ca05f4ba370d3ccbc647b269ca5bd817c9c..922c06279663519e4e4240b229784cefb7702b1c 100644 --- a/drivers/media/usb/pvrusb2/pvrusb2-i2c-core.c +++ b/drivers/media/usb/pvrusb2/pvrusb2-i2c-core.c @@ -343,11 +343,11 @@ static int i2c_hack_cx25840(struct pvr2_hdw *hdw, if ((ret != 0) || (*rdata == 0x04) || (*rdata == 0x0a)) { pvr2_trace(PVR2_TRACE_ERROR_LEGS, - "WARNING: Detected a wedged cx25840 chip; the device will not work."); + "***WARNING*** Detected a wedged cx25840 chip; the device will not work."); pvr2_trace(PVR2_TRACE_ERROR_LEGS, - "WARNING: Try power cycling the pvrusb2 device."); + "***WARNING*** Try power cycling the pvrusb2 device."); pvr2_trace(PVR2_TRACE_ERROR_LEGS, - "WARNING: Disabling further access to the device to prevent other foul-ups."); + "***WARNING*** Disabling further access to the device to prevent other foul-ups."); // This blocks all further communication with the part. 
hdw->i2c_func[0x44] = NULL; pvr2_hdw_render_useless(hdw); diff --git a/drivers/media/usb/pvrusb2/pvrusb2-std.c b/drivers/media/usb/pvrusb2/pvrusb2-std.c index 6b651f8b54df0f7a713d34959b08c7bcac0cec39..37dc299a1ca2682e41913388ff2d11a87880dd58 100644 --- a/drivers/media/usb/pvrusb2/pvrusb2-std.c +++ b/drivers/media/usb/pvrusb2/pvrusb2-std.c @@ -353,7 +353,7 @@ struct v4l2_standard *pvr2_std_create_enum(unsigned int *countptr, bcnt = pvr2_std_id_to_str(buf,sizeof(buf),fmsk); pvr2_trace( PVR2_TRACE_ERROR_LEGS, - "WARNING: Failed to classify the following standard(s): %.*s", + "***WARNING*** Failed to classify the following standard(s): %.*s", bcnt,buf); } diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c index 467b1ddaf4e75ec641129eb16388bba7189de692..f2854337cdcac8073bd59854bfd58eb22d2480fc 100644 --- a/drivers/media/usb/uvc/uvc_ctrl.c +++ b/drivers/media/usb/uvc/uvc_ctrl.c @@ -2350,7 +2350,9 @@ void uvc_ctrl_cleanup_device(struct uvc_device *dev) struct uvc_entity *entity; unsigned int i; - cancel_work_sync(&dev->async_ctrl.work); + /* Can be uninitialized if we are aborting on probe error. */ + if (dev->async_ctrl.work.func) + cancel_work_sync(&dev->async_ctrl.work); /* Free controls and control mappings for all entities. */ list_for_each_entry(entity, &dev->entities, list) { diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c index 6ac5f5d426154c60eddc6662bb544f96d851a3ed..c5b2bd7e5973f0618746bd6054307fd20ba86b5a 100644 --- a/drivers/media/v4l2-core/v4l2-ctrls.c +++ b/drivers/media/v4l2-core/v4l2-ctrls.c @@ -275,6 +275,7 @@ const char * const *v4l2_ctrl_get_menu(u32 id) "Flash", "Cloudy", "Shade", + "Greyworld", NULL, }; static const char * const camera_iso_sensitivity_auto[] = { @@ -2249,16 +2250,15 @@ struct v4l2_ctrl *v4l2_ctrl_new_custom(struct v4l2_ctrl_handler *hdl, v4l2_ctrl_fill(cfg->id, &name, &type, &min, &max, &step, &def, &flags); - is_menu = (cfg->type == V4L2_CTRL_TYPE_MENU || - cfg->type == V4L2_CTRL_TYPE_INTEGER_MENU); + is_menu = (type == V4L2_CTRL_TYPE_MENU || + type == V4L2_CTRL_TYPE_INTEGER_MENU); if (is_menu) WARN_ON(step); else WARN_ON(cfg->menu_skip_mask); - if (cfg->type == V4L2_CTRL_TYPE_MENU && qmenu == NULL) + if (type == V4L2_CTRL_TYPE_MENU && !qmenu) { qmenu = v4l2_ctrl_get_menu(cfg->id); - else if (cfg->type == V4L2_CTRL_TYPE_INTEGER_MENU && - qmenu_int == NULL) { + } else if (type == V4L2_CTRL_TYPE_INTEGER_MENU && !qmenu_int) { handler_set_err(hdl, -EINVAL); return NULL; } diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c index 1246d69ba187422610fd65f68fba60a2dbd181dd..b1564cacd19e1b86dc26fabe945b4c892e5f4e40 100644 --- a/drivers/memstick/core/memstick.c +++ b/drivers/memstick/core/memstick.c @@ -629,13 +629,18 @@ static int __init memstick_init(void) return -ENOMEM; rc = bus_register(&memstick_bus_type); - if (!rc) - rc = class_register(&memstick_host_class); + if (rc) + goto error_destroy_workqueue; - if (!rc) - return 0; + rc = class_register(&memstick_host_class); + if (rc) + goto error_bus_unregister; + + return 0; +error_bus_unregister: bus_unregister(&memstick_bus_type); +error_destroy_workqueue: destroy_workqueue(workqueue); return rc; diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c index 5f1e37d23943a3986bcb416b42f4170fa8a478c5..47d6d40f41cd52fe47cb52462d6672a94e9597b6 100644 --- a/drivers/mfd/arizona-core.c +++ b/drivers/mfd/arizona-core.c @@ -996,7 +996,7 @@ int arizona_dev_init(struct arizona *arizona) unsigned int reg, val; int 
(*apply_patch)(struct arizona *) = NULL; const struct mfd_cell *subdevs = NULL; - int n_subdevs, ret, i; + int n_subdevs = 0, ret, i; dev_set_drvdata(arizona->dev, arizona); mutex_init(&arizona->clk_lock); diff --git a/drivers/mfd/bcm2835-pm.c b/drivers/mfd/bcm2835-pm.c index ab1e9cbc50b1b0d9a12406647ac3742b8eddebab..f66f92fe28c341893026dd26d9e127e3b5dc2a11 100644 --- a/drivers/mfd/bcm2835-pm.c +++ b/drivers/mfd/bcm2835-pm.c @@ -50,14 +50,14 @@ static int bcm2835_pm_probe(struct platform_device *pdev) if (ret) return ret; - /* Map the ARGON ASB regs if present. */ + /* Map the RPiVid ASB regs if present. */ res = platform_get_resource(pdev, IORESOURCE_MEM, 2); if (res) { - pm->arg_asb = devm_ioremap_resource(dev, res); - if (IS_ERR(pm->arg_asb)) { - dev_err(dev, "Failed to map ARGON ASB: %ld\n", - PTR_ERR(pm->arg_asb)); - return PTR_ERR(pm->arg_asb); + pm->rpivid_asb = devm_ioremap_resource(dev, res); + if (IS_ERR(pm->rpivid_asb)) { + dev_err(dev, "Failed to map RPiVid ASB: %ld\n", + PTR_ERR(pm->rpivid_asb)); + return PTR_ERR(pm->rpivid_asb); } } diff --git a/drivers/mfd/hi655x-pmic.c b/drivers/mfd/hi655x-pmic.c index 96c07fa1802adcce7d48f971410096b4b2a4f479..6693f74aa6ab9a41307b1838ce21011e0fc0fc68 100644 --- a/drivers/mfd/hi655x-pmic.c +++ b/drivers/mfd/hi655x-pmic.c @@ -112,6 +112,8 @@ static int hi655x_pmic_probe(struct platform_device *pdev) pmic->regmap = devm_regmap_init_mmio_clk(dev, NULL, base, &hi655x_regmap_config); + if (IS_ERR(pmic->regmap)) + return PTR_ERR(pmic->regmap); regmap_read(pmic->regmap, HI655X_BUS_ADDR(HI655X_VER_REG), &pmic->ver); if ((pmic->ver < PMU_VER_START) || (pmic->ver > PMU_VER_END)) { diff --git a/drivers/mfd/madera-core.c b/drivers/mfd/madera-core.c index 8cfea969b060277dd53633a114b534e3eefdb2e0..45c7d8b9734938625c58890191ab28c806621e4f 100644 --- a/drivers/mfd/madera-core.c +++ b/drivers/mfd/madera-core.c @@ -278,6 +278,7 @@ const struct of_device_id madera_of_match[] = { { .compatible = "cirrus,wm1840", .data = (void *)WM1840 }, {} }; +MODULE_DEVICE_TABLE(of, madera_of_match); EXPORT_SYMBOL_GPL(madera_of_match); static int madera_get_reset_gpio(struct madera *madera) diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c index 94e3f32ce935717e97f2e938b433f3673c38f22d..182973df1aed4b7c5f0ce6766f5fb94a2e675ab1 100644 --- a/drivers/mfd/mfd-core.c +++ b/drivers/mfd/mfd-core.c @@ -179,6 +179,7 @@ static int mfd_add_device(struct device *parent, int id, for_each_child_of_node(parent->of_node, np) { if (of_device_is_compatible(np, cell->of_compatible)) { pdev->dev.of_node = np; + pdev->dev.fwnode = &np->fwnode; break; } } diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c index ddfcf4ade7bf33b46fbc0f5256d32a10240c74c9..dc3537651b807a7d4e1d1ee22a0a81acfccf8c0a 100644 --- a/drivers/misc/eeprom/at24.c +++ b/drivers/misc/eeprom/at24.c @@ -724,7 +724,7 @@ static int at24_probe(struct i2c_client *client) nvmem_config.name = dev_name(dev); nvmem_config.dev = dev; nvmem_config.read_only = !writable; - nvmem_config.root_only = true; + nvmem_config.root_only = !(pdata.flags & AT24_FLAG_IRUGO); nvmem_config.owner = THIS_MODULE; nvmem_config.compat = true; nvmem_config.base_dev = dev; diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h index bb1ee9834a029d28fdd690ed660f6f6355c89b67..cdd7af16d5eee565fd1fbb1447f31ba2558756ff 100644 --- a/drivers/misc/mei/hw-me-regs.h +++ b/drivers/misc/mei/hw-me-regs.h @@ -141,6 +141,11 @@ #define MEI_DEV_ID_ICP_LP 0x34E0 /* Ice Lake Point LP */ +#define MEI_DEV_ID_TGP_LP 0xA0E0 /* Tiger Lake 
Point LP */ + +#define MEI_DEV_ID_MCC 0x4B70 /* Mule Creek Canyon (EHL) */ +#define MEI_DEV_ID_MCC_4 0x4B75 /* Mule Creek Canyon 4 (EHL) */ + /* * MEI HW Section */ diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c index 4299658d48d63ab011271c264cc6af77cd94843c..e41f9e0a3fdf9e264b0a54c477f8065d6f1bc6b9 100644 --- a/drivers/misc/mei/pci-me.c +++ b/drivers/misc/mei/pci-me.c @@ -107,6 +107,11 @@ static const struct pci_device_id mei_me_pci_tbl[] = { {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)}, + {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH12_CFG)}, + + {MEI_PCI_DEVICE(MEI_DEV_ID_MCC, MEI_ME_PCH12_CFG)}, + {MEI_PCI_DEVICE(MEI_DEV_ID_MCC_4, MEI_ME_PCH8_CFG)}, + /* required last entry */ {0, } }; diff --git a/drivers/misc/vmw_vmci/vmci_doorbell.c b/drivers/misc/vmw_vmci/vmci_doorbell.c index b3fa738ae0050b48ba3f07ef3a02460d72898ea4..f005206d9033b53a83bf39955cc9d12fe732e0c3 100644 --- a/drivers/misc/vmw_vmci/vmci_doorbell.c +++ b/drivers/misc/vmw_vmci/vmci_doorbell.c @@ -318,7 +318,8 @@ int vmci_dbell_host_context_notify(u32 src_cid, struct vmci_handle handle) entry = container_of(resource, struct dbell_entry, resource); if (entry->run_delayed) { - schedule_work(&entry->work); + if (!schedule_work(&entry->work)) + vmci_resource_put(resource); } else { entry->notify_cb(entry->client_data); vmci_resource_put(resource); @@ -366,7 +367,8 @@ static void dbell_fire_entries(u32 notify_idx) atomic_read(&dbell->active) == 1) { if (dbell->run_delayed) { vmci_resource_get(&dbell->resource); - schedule_work(&dbell->work); + if (!schedule_work(&dbell->work)) + vmci_resource_put(&dbell->resource); } else { dbell->notify_cb(dbell->client_data); } diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c index cfb8ee24eaba153f00e13016987126e49e382436..04738359ec0292797136344778879999212d8651 100644 --- a/drivers/mmc/core/sd.c +++ b/drivers/mmc/core/sd.c @@ -1277,6 +1277,12 @@ int mmc_attach_sd(struct mmc_host *host) goto err; } + /* + * Some SD cards claims an out of spec VDD voltage range. Let's treat + * these bits as being in-valid and especially also bit7. + */ + ocr &= ~0x7FFF; + rocr = mmc_select_voltage(host, ocr); /* diff --git a/drivers/mmc/host/bcm2835-mmc.c b/drivers/mmc/host/bcm2835-mmc.c index 68daa59d313b47806b8ed83d93600ece1c8be705..1311c8268e83ed7e1a0c1fe4e9b961cd6b16f672 100644 --- a/drivers/mmc/host/bcm2835-mmc.c +++ b/drivers/mmc/host/bcm2835-mmc.c @@ -38,6 +38,7 @@ #include #include #include +#include #include "sdhci.h" @@ -344,16 +345,17 @@ static void bcm2835_mmc_dma_complete(void *param) host->use_dma = false; - if (host->data && !(host->data->flags & MMC_DATA_WRITE)) { - /* otherwise handled in SDHCI IRQ */ + if (host->data) { dma_chan = host->dma_chan_rxtx; - dir_data = DMA_FROM_DEVICE; - + if (host->data->flags & MMC_DATA_WRITE) + dir_data = DMA_TO_DEVICE; + else + dir_data = DMA_FROM_DEVICE; dma_unmap_sg(dma_chan->device->dev, host->data->sg, host->data->sg_len, dir_data); - - bcm2835_mmc_finish_data(host); + if (! 
(host->data->flags & MMC_DATA_WRITE)) + bcm2835_mmc_finish_data(host); } else if (host->wait_for_dma) { host->wait_for_dma = false; tasklet_schedule(&host->finish_tasklet); @@ -539,6 +541,8 @@ static void bcm2835_mmc_transfer_dma(struct bcm2835_host *host) spin_unlock_irqrestore(&host->lock, flags); dmaengine_submit(desc); dma_async_issue_pending(dma_chan); + } else { + dma_unmap_sg(dma_chan->device->dev, host->data->sg, len, dir_data); } } @@ -1374,7 +1378,10 @@ static int bcm2835_mmc_add_host(struct bcm2835_host *host) } #endif mmc->max_segs = 128; - mmc->max_req_size = 524288; + if (swiotlb_max_segment()) + mmc->max_req_size = (1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE; + else + mmc->max_req_size = 524288; mmc->max_seg_size = mmc->max_req_size; mmc->max_blk_size = 512; mmc->max_blk_count = 65535; diff --git a/drivers/mmc/host/cavium.c b/drivers/mmc/host/cavium.c index ed5cefb8376838b401aba0ac394c6a69e37dd818..89deb451e0ac6225c9481dc21ca21373f46c2d7c 100644 --- a/drivers/mmc/host/cavium.c +++ b/drivers/mmc/host/cavium.c @@ -374,6 +374,7 @@ static int finish_dma_single(struct cvm_mmc_host *host, struct mmc_data *data) { data->bytes_xfered = data->blocks * data->blksz; data->error = 0; + dma_unmap_sg(host->dev, data->sg, data->sg_len, get_dma_dir(data)); return 1; } @@ -1046,7 +1047,8 @@ int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host) mmc->max_segs = 1; /* DMA size field can address up to 8 MB */ - mmc->max_seg_size = 8 * 1024 * 1024; + mmc->max_seg_size = min_t(unsigned int, 8 * 1024 * 1024, + dma_get_max_seg_size(host->dev)); mmc->max_req_size = mmc->max_seg_size; /* External DMA is in 512 byte blocks */ mmc->max_blk_size = 512; diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c index 80dc2fd6576cf3f88afd695ad1f36ec1b4f52b41..942da07c9eb87cc4b362bd6eda345b46c7b5428e 100644 --- a/drivers/mmc/host/dw_mmc.c +++ b/drivers/mmc/host/dw_mmc.c @@ -2038,8 +2038,7 @@ static void dw_mci_tasklet_func(unsigned long priv) * delayed. Allowing the transfer to take place * avoids races and keeps things simple. 
*/ - if ((err != -ETIMEDOUT) && - (cmd->opcode == MMC_SEND_TUNING_BLOCK)) { + if (err != -ETIMEDOUT) { state = STATE_SENDING_DATA; continue; } diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c index 9841b447ccde0df5ba98d49782c793eb7b0fa1bf..f6c76be2be0d3fa0d5b93bdd1d79acee793f7a9d 100644 --- a/drivers/mmc/host/meson-mx-sdio.c +++ b/drivers/mmc/host/meson-mx-sdio.c @@ -76,7 +76,7 @@ #define MESON_MX_SDIO_IRQC_IF_CONFIG_MASK GENMASK(7, 6) #define MESON_MX_SDIO_IRQC_FORCE_DATA_CLK BIT(8) #define MESON_MX_SDIO_IRQC_FORCE_DATA_CMD BIT(9) - #define MESON_MX_SDIO_IRQC_FORCE_DATA_DAT_MASK GENMASK(10, 13) + #define MESON_MX_SDIO_IRQC_FORCE_DATA_DAT_MASK GENMASK(13, 10) #define MESON_MX_SDIO_IRQC_SOFT_RESET BIT(15) #define MESON_MX_SDIO_IRQC_FORCE_HALT BIT(30) #define MESON_MX_SDIO_IRQC_HALT_HOLE BIT(31) diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c index 8594659cb59238f70998fce6b8598d5fe86b0ec4..ad0275191d910b4ed73afda896b94adf108c3889 100644 --- a/drivers/mmc/host/sdhci-msm.c +++ b/drivers/mmc/host/sdhci-msm.c @@ -582,11 +582,14 @@ static int msm_init_cm_dll(struct sdhci_host *host) struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); int wait_cnt = 50; - unsigned long flags; + unsigned long flags, xo_clk = 0; u32 config; const struct sdhci_msm_offset *msm_offset = msm_host->offset; + if (msm_host->use_14lpp_dll_reset && !IS_ERR_OR_NULL(msm_host->xo_clk)) + xo_clk = clk_get_rate(msm_host->xo_clk); + spin_lock_irqsave(&host->lock, flags); /* @@ -634,10 +637,10 @@ static int msm_init_cm_dll(struct sdhci_host *host) config &= CORE_FLL_CYCLE_CNT; if (config) mclk_freq = DIV_ROUND_CLOSEST_ULL((host->clock * 8), - clk_get_rate(msm_host->xo_clk)); + xo_clk); else mclk_freq = DIV_ROUND_CLOSEST_ULL((host->clock * 4), - clk_get_rate(msm_host->xo_clk)); + xo_clk); config = readl_relaxed(host->ioaddr + msm_offset->core_dll_config_2); diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c index 7fdac277e382f0de00b8b5d6f4347398a363cb0b..9c77bfe4334f3c67ab61fb1cac75d5adc4ef53bb 100644 --- a/drivers/mmc/host/sdhci-of-arasan.c +++ b/drivers/mmc/host/sdhci-of-arasan.c @@ -788,7 +788,8 @@ static int sdhci_arasan_probe(struct platform_device *pdev) ret = mmc_of_parse(host->mmc); if (ret) { - dev_err(&pdev->dev, "parsing dt failed (%d)\n", ret); + if (ret != -EPROBE_DEFER) + dev_err(&pdev->dev, "parsing dt failed (%d)\n", ret); goto unreg_clk; } diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c index 682c573e20a727e118282b33eae9d982917e138f..e284102c16e97aa690e65198ab3390ecb1cc030c 100644 --- a/drivers/mmc/host/sdhci-of-at91.c +++ b/drivers/mmc/host/sdhci-of-at91.c @@ -365,6 +365,9 @@ static int sdhci_at91_probe(struct platform_device *pdev) pm_runtime_set_autosuspend_delay(&pdev->dev, 50); pm_runtime_use_autosuspend(&pdev->dev); + /* HS200 is broken at this moment */ + host->quirks2 = SDHCI_QUIRK2_BROKEN_HS200; + ret = sdhci_add_host(host); if (ret) goto pm_runtime_disable; diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c index fa8d9da2ab7f6b18dd838ef8fc6d5504d46f7d53..e248d7945c062a561d035a5613c04e0a241721ac 100644 --- a/drivers/mmc/host/sdhci-pci-o2micro.c +++ b/drivers/mmc/host/sdhci-pci-o2micro.c @@ -290,11 +290,21 @@ int sdhci_pci_o2_probe_slot(struct sdhci_pci_slot *slot) { struct sdhci_pci_chip *chip; struct sdhci_host *host; - u32 reg; + u32 reg, caps; int ret; chip = slot->chip; host = 
slot->host; + + caps = sdhci_readl(host, SDHCI_CAPABILITIES); + + /* + * mmc_select_bus_width() will test the bus to determine the actual bus + * width. + */ + if (caps & SDHCI_CAN_DO_8BIT) + host->mmc->caps |= MMC_CAP_8_BIT_DATA; + switch (chip->pdev->device) { case PCI_DEVICE_ID_O2_SDS0: case PCI_DEVICE_ID_O2_SEABIRD0: diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c index 57b5ed1699e386e51865ba77c61c6561ffc1775f..dce5b7e44e7afa25ecacfaa8d9087d1fbb1ffcab 100644 --- a/drivers/mtd/nand/raw/mtk_nand.c +++ b/drivers/mtd/nand/raw/mtk_nand.c @@ -509,7 +509,8 @@ static int mtk_nfc_setup_data_interface(struct mtd_info *mtd, int csline, { struct mtk_nfc *nfc = nand_get_controller_data(mtd_to_nand(mtd)); const struct nand_sdr_timings *timings; - u32 rate, tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt; + u32 rate, tpoecs, tprecs, tc2r, tw2r, twh, twst = 0, trlt = 0; + u32 thold; timings = nand_get_sdr_timings(conf); if (IS_ERR(timings)) @@ -545,11 +546,28 @@ static int mtk_nfc_setup_data_interface(struct mtd_info *mtd, int csline, twh = DIV_ROUND_UP(twh * rate, 1000000) - 1; twh &= 0xf; - twst = timings->tWP_min / 1000; + /* Calculate real WE#/RE# hold time in nanosecond */ + thold = (twh + 1) * 1000000 / rate; + /* nanosecond to picosecond */ + thold *= 1000; + + /* + * WE# low level time should be expanded to meet WE# pulse time + * and WE# cycle time at the same time. + */ + if (thold < timings->tWC_min) + twst = timings->tWC_min - thold; + twst = max(timings->tWP_min, twst) / 1000; twst = DIV_ROUND_UP(twst * rate, 1000000) - 1; twst &= 0xf; - trlt = max(timings->tREA_max, timings->tRP_min) / 1000; + /* + * RE# low level time should be expanded to meet RE# pulse time, + * RE# access time and RE# cycle time at the same time. + */ + if (thold < timings->tRC_min) + trlt = timings->tRC_min - thold; + trlt = max3(trlt, timings->tREA_max, timings->tRP_min) / 1000; trlt = DIV_ROUND_UP(trlt * rate, 1000000) - 1; trlt &= 0xf; diff --git a/drivers/mtd/nand/raw/nand_micron.c b/drivers/mtd/nand/raw/nand_micron.c index f5dc0a7a2456325a92142ce0f4b537a7dcbbe79d..fb401c25732c7abf61119a5830859d5992a4bc0f 100644 --- a/drivers/mtd/nand/raw/nand_micron.c +++ b/drivers/mtd/nand/raw/nand_micron.c @@ -400,6 +400,14 @@ static int micron_supports_on_die_ecc(struct nand_chip *chip) (chip->id.data[4] & MICRON_ID_INTERNAL_ECC_MASK) != 0x2) return MICRON_ON_DIE_UNSUPPORTED; + /* + * It seems that there are devices which do not support ECC officially. + * At least the MT29F2G08ABAGA / MT29F2G08ABBGA devices support + * enabling the ECC feature but don't reflect that to the READ_ID table. + * So we have to guarantee that we disable the ECC feature directly + * after we did the READ_ID table command. Later we can evaluate the + * ECC_ENABLE support. 
+ */ ret = micron_nand_on_die_ecc_setup(chip, true); if (ret) return MICRON_ON_DIE_UNSUPPORTED; @@ -408,13 +416,13 @@ static int micron_supports_on_die_ecc(struct nand_chip *chip) if (ret) return MICRON_ON_DIE_UNSUPPORTED; - if (!(id[4] & MICRON_ID_ECC_ENABLED)) - return MICRON_ON_DIE_UNSUPPORTED; - ret = micron_nand_on_die_ecc_setup(chip, false); if (ret) return MICRON_ON_DIE_UNSUPPORTED; + if (!(id[4] & MICRON_ID_ECC_ENABLED)) + return MICRON_ON_DIE_UNSUPPORTED; + ret = nand_readid_op(chip, 0, id, sizeof(id)); if (ret) return MICRON_ON_DIE_UNSUPPORTED; diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c index 8c7bf91ce4e1d3844e8247b60ecec208dee2e85f..48b3ab26b12492d8b2efbc41e1d8eb4a5eda72d6 100644 --- a/drivers/mtd/nand/spi/core.c +++ b/drivers/mtd/nand/spi/core.c @@ -572,12 +572,12 @@ static int spinand_mtd_read(struct mtd_info *mtd, loff_t from, if (ret == -EBADMSG) { ecc_failed = true; mtd->ecc_stats.failed++; - ret = 0; } else { mtd->ecc_stats.corrected += ret; max_bitflips = max_t(unsigned int, max_bitflips, ret); } + ret = 0; ops->retlen += iter.req.datalen; ops->oobretlen += iter.req.ooblen; } diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index 7e162fff01abbd3371e01dd8bff1d72af065bd83..0d2392c4b625a195c9727adab45c6bc933b5c794 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -1102,6 +1102,8 @@ static void bond_compute_features(struct bonding *bond) done: bond_dev->vlan_features = vlan_features; bond_dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL | + NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_STAG_TX | NETIF_F_GSO_UDP_L4; bond_dev->gso_max_segs = gso_max_segs; netif_set_gso_max_size(bond_dev, gso_max_size); @@ -2188,6 +2190,15 @@ static void bond_miimon_commit(struct bonding *bond) bond_for_each_slave(bond, slave, iter) { switch (slave->new_link) { case BOND_LINK_NOCHANGE: + /* For 802.3ad mode, check current slave speed and + * duplex again in case its port was disabled after + * invalid speed/duplex reporting but recovered before + * link monitoring could make a decision on the actual + * link status + */ + if (BOND_MODE(bond) == BOND_MODE_8023AD && + slave->link == BOND_LINK_UP) + bond_3ad_adapter_speed_duplex_changed(slave); continue; case BOND_LINK_UP: @@ -3852,8 +3863,8 @@ static netdev_tx_t bond_xmit_roundrobin(struct sk_buff *skb, struct net_device *bond_dev) { struct bonding *bond = netdev_priv(bond_dev); - struct iphdr *iph = ip_hdr(skb); struct slave *slave; + int slave_cnt; u32 slave_id; /* Start with the curr_active_slave that joined the bond as the @@ -3862,23 +3873,32 @@ static netdev_tx_t bond_xmit_roundrobin(struct sk_buff *skb, * send the join/membership reports. The curr_active_slave found * will send all of this type of traffic. 
*/ - if (iph->protocol == IPPROTO_IGMP && skb->protocol == htons(ETH_P_IP)) { - slave = rcu_dereference(bond->curr_active_slave); - if (slave) - bond_dev_queue_xmit(bond, skb, slave->dev); - else - bond_xmit_slave_id(bond, skb, 0); - } else { - int slave_cnt = READ_ONCE(bond->slave_cnt); + if (skb->protocol == htons(ETH_P_IP)) { + int noff = skb_network_offset(skb); + struct iphdr *iph; - if (likely(slave_cnt)) { - slave_id = bond_rr_gen_slave_id(bond); - bond_xmit_slave_id(bond, skb, slave_id % slave_cnt); - } else { - bond_tx_drop(bond_dev, skb); + if (unlikely(!pskb_may_pull(skb, noff + sizeof(*iph)))) + goto non_igmp; + + iph = ip_hdr(skb); + if (iph->protocol == IPPROTO_IGMP) { + slave = rcu_dereference(bond->curr_active_slave); + if (slave) + bond_dev_queue_xmit(bond, skb, slave->dev); + else + bond_xmit_slave_id(bond, skb, 0); + return NETDEV_TX_OK; } } +non_igmp: + slave_cnt = READ_ONCE(bond->slave_cnt); + if (likely(slave_cnt)) { + slave_id = bond_rr_gen_slave_id(bond); + bond_xmit_slave_id(bond, skb, slave_id % slave_cnt); + } else { + bond_tx_drop(bond_dev, skb); + } return NETDEV_TX_OK; } diff --git a/drivers/net/caif/caif_hsi.c b/drivers/net/caif/caif_hsi.c index 433a14b9f731bc5a71ac431b2eb71274c31247da..253a1bbe37e8bd48caf8b59ea8de8a6d3797830e 100644 --- a/drivers/net/caif/caif_hsi.c +++ b/drivers/net/caif/caif_hsi.c @@ -1455,7 +1455,7 @@ static void __exit cfhsi_exit_module(void) rtnl_lock(); list_for_each_safe(list_node, n, &cfhsi_list) { cfhsi = list_entry(list_node, struct cfhsi, list); - unregister_netdev(cfhsi->ndev); + unregister_netdevice(cfhsi->ndev); } rtnl_unlock(); } diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c index c05e4d50d43d74a32ef38c656fb85bbb8b06e962..bd127ce3aba245e3bc100538da2aebdcf2b0287e 100644 --- a/drivers/net/can/dev.c +++ b/drivers/net/can/dev.c @@ -1260,6 +1260,8 @@ int register_candev(struct net_device *dev) return -EINVAL; dev->rtnl_link_ops = &can_link_ops; + netif_carrier_off(dev); + return register_netdev(dev); } EXPORT_SYMBOL_GPL(register_candev); diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c index 602c19e23f052ed50bd691408bc9644bf5049cb5..786d852a70d5844f4531637e07a9fb2e040b90cd 100644 --- a/drivers/net/can/rcar/rcar_canfd.c +++ b/drivers/net/can/rcar/rcar_canfd.c @@ -1512,10 +1512,11 @@ static int rcar_canfd_rx_poll(struct napi_struct *napi, int quota) /* All packets processed */ if (num_pkts < quota) { - napi_complete_done(napi, num_pkts); - /* Enable Rx FIFO interrupts */ - rcar_canfd_set_bit(priv->base, RCANFD_RFCC(ridx), - RCANFD_RFCC_RFIE); + if (napi_complete_done(napi, num_pkts)) { + /* Enable Rx FIFO interrupts */ + rcar_canfd_set_bit(priv->base, RCANFD_RFCC(ridx), + RCANFD_RFCC_RFIE); + } } return num_pkts; } diff --git a/drivers/net/can/sja1000/peak_pcmcia.c b/drivers/net/can/sja1000/peak_pcmcia.c index b8c39ede7cd51445b6ed653585811066ab75d7d4..179bfcd541f2f552253848d20342ab7c23409f58 100644 --- a/drivers/net/can/sja1000/peak_pcmcia.c +++ b/drivers/net/can/sja1000/peak_pcmcia.c @@ -487,7 +487,7 @@ static void pcan_free_channels(struct pcan_pccard *card) if (!netdev) continue; - strncpy(name, netdev->name, IFNAMSIZ); + strlcpy(name, netdev->name, IFNAMSIZ); unregister_sja1000dev(netdev); diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c index cae0f5b633d0099c125e6dba55f8142a1da328be..1c66a534bf1ac8114ee90a883b9154e9a236a62e 100644 --- a/drivers/net/can/spi/mcp251x.c +++ b/drivers/net/can/spi/mcp251x.c @@ -628,6 +628,7 @@ static int mcp251x_hw_reset(struct 
spi_device *spi) struct mcp251x_priv *priv = spi_get_drvdata(spi); u8 reg; int ret; + int retries = 10; /* Wait for oscillator startup timer after power up */ mdelay(MCP251X_OST_DELAY_MS); @@ -637,10 +638,18 @@ static int mcp251x_hw_reset(struct spi_device *spi) if (ret) return ret; - /* Wait for oscillator startup timer after reset */ - mdelay(MCP251X_OST_DELAY_MS); + /* + * Wait for oscillator startup timer after reset + * + * Some devices can take longer than the expected 5ms to wake + * up, so allow a few retries. + */ + + do { + mdelay(MCP251X_OST_DELAY_MS); + reg = mcp251x_read_reg(spi, CANSTAT); + } while (!reg && retries--); - reg = mcp251x_read_reg(spi, CANSTAT); if ((reg & CANCTRL_REQOP_MASK) != CANCTRL_REQOP_CONF) return -ENODEV; @@ -678,17 +687,6 @@ static int mcp251x_power_enable(struct regulator *reg, int enable) return regulator_disable(reg); } -static void mcp251x_open_clean(struct net_device *net) -{ - struct mcp251x_priv *priv = netdev_priv(net); - struct spi_device *spi = priv->spi; - - free_irq(spi->irq, priv); - mcp251x_hw_sleep(spi); - mcp251x_power_enable(priv->transceiver, 0); - close_candev(net); -} - static int mcp251x_stop(struct net_device *net) { struct mcp251x_priv *priv = netdev_priv(net); @@ -957,37 +955,43 @@ static int mcp251x_open(struct net_device *net) flags | IRQF_ONESHOT, DEVICE_NAME, priv); if (ret) { dev_err(&spi->dev, "failed to acquire irq %d\n", spi->irq); - mcp251x_power_enable(priv->transceiver, 0); - close_candev(net); - goto open_unlock; + goto out_close; } priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM, 0); + if (!priv->wq) { + ret = -ENOMEM; + goto out_clean; + } INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler); INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler); ret = mcp251x_hw_reset(spi); - if (ret) { - mcp251x_open_clean(net); - goto open_unlock; - } + if (ret) + goto out_free_wq; ret = mcp251x_setup(net, spi); - if (ret) { - mcp251x_open_clean(net); - goto open_unlock; - } + if (ret) + goto out_free_wq; ret = mcp251x_set_normal_mode(spi); - if (ret) { - mcp251x_open_clean(net); - goto open_unlock; - } + if (ret) + goto out_free_wq; can_led_event(net, CAN_LED_EVENT_OPEN); netif_wake_queue(net); + mutex_unlock(&priv->mcp_lock); + + return 0; -open_unlock: +out_free_wq: + destroy_workqueue(priv->wq); +out_clean: + free_irq(spi->irq, priv); + mcp251x_hw_sleep(spi); +out_close: + mcp251x_power_enable(priv->transceiver, 0); + close_candev(net); mutex_unlock(&priv->mcp_lock); return ret; } diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c index 611f9d31be5d0370612fe8d4f9771b88dd9f3d37..43b0fa2b99322e33a0ea0fb81529a4c6844b0050 100644 --- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c +++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c @@ -576,16 +576,16 @@ static int peak_usb_ndo_stop(struct net_device *netdev) dev->state &= ~PCAN_USB_STATE_STARTED; netif_stop_queue(netdev); + close_candev(netdev); + + dev->can.state = CAN_STATE_STOPPED; + /* unlink all pending urbs and free used memory */ peak_usb_unlink_all_urbs(dev); if (dev->adapter->dev_stop) dev->adapter->dev_stop(dev); - close_candev(netdev); - - dev->can.state = CAN_STATE_STOPPED; - /* can set bus off now */ if (dev->adapter->dev_set_bus) { int err = dev->adapter->dev_set_bus(dev, 0); @@ -863,7 +863,7 @@ static void peak_usb_disconnect(struct usb_interface *intf) dev_prev_siblings = dev->prev_siblings; dev->state &= ~PCAN_USB_STATE_CONNECTED; - strncpy(name, netdev->name, IFNAMSIZ); + 
strlcpy(name, netdev->name, IFNAMSIZ); unregister_netdev(netdev); diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c index dd161c5eea8ec7e01aaa7d4c759fe212e5d9b79d..41988358f63c86cdc307fe541e510ef5bcfd1b1a 100644 --- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c +++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c @@ -849,7 +849,7 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev) goto err_out; /* allocate command buffer once for all for the interface */ - pdev->cmd_buffer_addr = kmalloc(PCAN_UFD_CMD_BUFFER_SIZE, + pdev->cmd_buffer_addr = kzalloc(PCAN_UFD_CMD_BUFFER_SIZE, GFP_KERNEL); if (!pdev->cmd_buffer_addr) goto err_out_1; diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_pro.c b/drivers/net/can/usb/peak_usb/pcan_usb_pro.c index d516def846abec6c661dc185da1d64b73ca22441..b304198f0b3af0d670677bb0526549c49394a037 100644 --- a/drivers/net/can/usb/peak_usb/pcan_usb_pro.c +++ b/drivers/net/can/usb/peak_usb/pcan_usb_pro.c @@ -502,7 +502,7 @@ static int pcan_usb_pro_drv_loaded(struct peak_usb_device *dev, int loaded) u8 *buffer; int err; - buffer = kmalloc(PCAN_USBPRO_FCT_DRVLD_REQ_LEN, GFP_KERNEL); + buffer = kzalloc(PCAN_USBPRO_FCT_DRVLD_REQ_LEN, GFP_KERNEL); if (!buffer) return -ENOMEM; diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c index 411cfb806459a6c994d27f2d10bdb8bd17a52755..703e6bdaf0e1fe1ee1b4dd0dba860da1d53155b9 100644 --- a/drivers/net/dsa/mv88e6xxx/chip.c +++ b/drivers/net/dsa/mv88e6xxx/chip.c @@ -4816,6 +4816,8 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev) err = PTR_ERR(chip->reset); goto out; } + if (chip->reset) + usleep_range(1000, 2000); err = mv88e6xxx_detect(chip); if (err) diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c index 5a727d4729da7348075b75101154cca3cf515073..cf01e73d1bcc888e0815a63b294ec2f32df61611 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c @@ -286,6 +286,9 @@ int bnx2x_tx_int(struct bnx2x *bp, struct bnx2x_fp_txdata *txdata) hw_cons = le16_to_cpu(*txdata->tx_cons_sb); sw_cons = txdata->tx_pkt_cons; + /* Ensure subsequent loads occur after hw_cons */ + smp_rmb(); + while (sw_cons != hw_cons) { u16 pkt_cons; @@ -1933,8 +1936,7 @@ u16 bnx2x_select_queue(struct net_device *dev, struct sk_buff *skb, } /* select a non-FCoE queue */ - return fallback(dev, skb, NULL) % - (BNX2X_NUM_ETH_QUEUES(bp) * bp->max_cos); + return fallback(dev, skb, NULL) % (BNX2X_NUM_ETH_QUEUES(bp)); } void bnx2x_set_num_queues(struct bnx2x *bp) @@ -3056,12 +3058,13 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link) /* if VF indicate to PF this function is going down (PF will delete sp * elements and clear initializations */ - if (IS_VF(bp)) + if (IS_VF(bp)) { + bnx2x_clear_vlan_info(bp); bnx2x_vfpf_close_vf(bp); - else if (unload_mode != UNLOAD_RECOVERY) + } else if (unload_mode != UNLOAD_RECOVERY) { /* if this is a normal/close unload need to clean up chip*/ bnx2x_chip_cleanup(bp, unload_mode, keep_link); - else { + } else { /* Send the UNLOAD_REQUEST to the MCP */ bnx2x_send_unload_req(bp, unload_mode); @@ -3858,9 +3861,12 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev) if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) { if (!(bp->flags & TX_TIMESTAMPING_EN)) { + bp->eth_stats.ptp_skip_tx_ts++; BNX2X_ERR("Tx timestamping was not enabled, this packet will not be timestamped\n"); } else if 
(bp->ptp_tx_skb) { - BNX2X_ERR("The device supports only a single outstanding packet to timestamp, this packet will not be timestamped\n"); + bp->eth_stats.ptp_skip_tx_ts++; + netdev_err_once(bp->dev, + "Device supports only a single outstanding packet to timestamp, this packet won't be timestamped\n"); } else { skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; /* schedule check for Tx timestamp */ diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h index 0e508e5defce315f2e5254ca238afe26b523054a..ee5159ef837e38e285b13653d70a5529dcf35310 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h @@ -425,6 +425,8 @@ void bnx2x_set_reset_global(struct bnx2x *bp); void bnx2x_disable_close_the_gate(struct bnx2x *bp); int bnx2x_init_hw_func_cnic(struct bnx2x *bp); +void bnx2x_clear_vlan_info(struct bnx2x *bp); + /** * bnx2x_sp_event - handle ramrods completion. * diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c index c428b0655c26e924dfc8b1c2e2fe52e7017fc8e7..00f9ed93360c6707860c5aa3538d518f45224f59 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c @@ -182,7 +182,9 @@ static const struct { { STATS_OFFSET32(driver_filtered_tx_pkt), 4, false, "driver_filtered_tx_pkt" }, { STATS_OFFSET32(eee_tx_lpi), - 4, true, "Tx LPI entry count"} + 4, true, "Tx LPI entry count"}, + { STATS_OFFSET32(ptp_skip_tx_ts), + 4, false, "ptp_skipped_tx_tstamp" }, }; #define BNX2X_NUM_STATS ARRAY_SIZE(bnx2x_stats_arr) diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c index a585f1025a5802cb71c4ec8cb61bbaa1a513d074..68c62e32e882046c96f7a66d35c2a1ba78da6ede 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c @@ -8488,11 +8488,21 @@ int bnx2x_set_vlan_one(struct bnx2x *bp, u16 vlan, return rc; } +void bnx2x_clear_vlan_info(struct bnx2x *bp) +{ + struct bnx2x_vlan_entry *vlan; + + /* Mark that hw forgot all entries */ + list_for_each_entry(vlan, &bp->vlan_reg, link) + vlan->hw = false; + + bp->vlan_cnt = 0; +} + static int bnx2x_del_all_vlans(struct bnx2x *bp) { struct bnx2x_vlan_mac_obj *vlan_obj = &bp->sp_objs[0].vlan_obj; unsigned long ramrod_flags = 0, vlan_flags = 0; - struct bnx2x_vlan_entry *vlan; int rc; __set_bit(RAMROD_COMP_WAIT, &ramrod_flags); @@ -8501,10 +8511,7 @@ static int bnx2x_del_all_vlans(struct bnx2x *bp) if (rc) return rc; - /* Mark that hw forgot all entries */ - list_for_each_entry(vlan, &bp->vlan_reg, link) - vlan->hw = false; - bp->vlan_cnt = 0; + bnx2x_clear_vlan_info(bp); return 0; } @@ -15244,11 +15251,24 @@ static void bnx2x_ptp_task(struct work_struct *work) u32 val_seq; u64 timestamp, ns; struct skb_shared_hwtstamps shhwtstamps; + bool bail = true; + int i; + + /* FW may take a while to complete timestamping; try a bit and if it's + * still not complete, may indicate an error state - bail out then. + */ + for (i = 0; i < 10; i++) { + /* Read Tx timestamp registers */ + val_seq = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_SEQID : + NIG_REG_P0_TLLH_PTP_BUF_SEQID); + if (val_seq & 0x10000) { + bail = false; + break; + } + msleep(1 << i); + } - /* Read Tx timestamp registers */ - val_seq = REG_RD(bp, port ? 
NIG_REG_P1_TLLH_PTP_BUF_SEQID : - NIG_REG_P0_TLLH_PTP_BUF_SEQID); - if (val_seq & 0x10000) { + if (!bail) { /* There is a valid timestamp value */ timestamp = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_TS_MSB : NIG_REG_P0_TLLH_PTP_BUF_TS_MSB); @@ -15263,16 +15283,18 @@ static void bnx2x_ptp_task(struct work_struct *work) memset(&shhwtstamps, 0, sizeof(shhwtstamps)); shhwtstamps.hwtstamp = ns_to_ktime(ns); skb_tstamp_tx(bp->ptp_tx_skb, &shhwtstamps); - dev_kfree_skb_any(bp->ptp_tx_skb); - bp->ptp_tx_skb = NULL; DP(BNX2X_MSG_PTP, "Tx timestamp, timestamp cycles = %llu, ns = %llu\n", timestamp, ns); } else { - DP(BNX2X_MSG_PTP, "There is no valid Tx timestamp yet\n"); - /* Reschedule to keep checking for a valid timestamp value */ - schedule_work(&bp->ptp_task); + DP(BNX2X_MSG_PTP, + "Tx timestamp is not recorded (register read=%u)\n", + val_seq); + bp->eth_stats.ptp_skip_tx_ts++; } + + dev_kfree_skb_any(bp->ptp_tx_skb); + bp->ptp_tx_skb = NULL; } void bnx2x_set_rx_ts(struct bnx2x *bp, struct sk_buff *skb) diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h index b2644ed13d064eacc3b34cf59d48b156bedac16b..d55e63692cf3bf5ab4c11b8ddc9d800c520d5e78 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h @@ -207,6 +207,9 @@ struct bnx2x_eth_stats { u32 driver_filtered_tx_pkt; /* src: Clear-on-Read register; Will not survive PMF Migration */ u32 eee_tx_lpi; + + /* PTP */ + u32 ptp_skip_tx_ts; }; struct bnx2x_eth_q_stats { diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c index bdd7717c890b1dc000b2cc978460e8302c0ce86a..251166dfb8a83616e120d6ee3da9bea887716b9c 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c @@ -72,6 +72,10 @@ #define GENET_RDMA_REG_OFF (priv->hw_params->rdma_offset + \ TOTAL_DESC * DMA_DESC_SIZE) +static bool skip_umac_reset = true; +module_param(skip_umac_reset, bool, 0444); +MODULE_PARM_DESC(skip_umac_reset, "Skip UMAC reset step"); + static inline void bcmgenet_writel(u32 value, void __iomem *offset) { /* MIPS chips strapped for BE will automagically configure the @@ -1993,6 +1997,11 @@ static void reset_umac(struct bcmgenet_priv *priv) bcmgenet_rbuf_ctrl_set(priv, 0); udelay(10); + if (skip_umac_reset) { + pr_warn("Skipping UMAC reset\n"); + return; + } + /* disable MAC while updating its registers */ bcmgenet_umac_writel(priv, 0, UMAC_CMD); @@ -3086,39 +3095,42 @@ static void bcmgenet_timeout(struct net_device *dev) netif_tx_wake_all_queues(dev); } -#define MAX_MC_COUNT 16 +#define MAX_MDF_FILTER 17 static inline void bcmgenet_set_mdf_addr(struct bcmgenet_priv *priv, unsigned char *addr, - int *i, - int *mc) + int *i) { - u32 reg; - bcmgenet_umac_writel(priv, addr[0] << 8 | addr[1], UMAC_MDF_ADDR + (*i * 4)); bcmgenet_umac_writel(priv, addr[2] << 24 | addr[3] << 16 | addr[4] << 8 | addr[5], UMAC_MDF_ADDR + ((*i + 1) * 4)); - reg = bcmgenet_umac_readl(priv, UMAC_MDF_CTRL); - reg |= (1 << (MAX_MC_COUNT - *mc)); - bcmgenet_umac_writel(priv, reg, UMAC_MDF_CTRL); *i += 2; - (*mc)++; } static void bcmgenet_set_rx_mode(struct net_device *dev) { struct bcmgenet_priv *priv = netdev_priv(dev); struct netdev_hw_addr *ha; - int i, mc; + int i, nfilter; u32 reg; netif_dbg(priv, hw, dev, "%s: %08X\n", __func__, dev->flags); - /* Promiscuous mode */ + /* Number of filters needed */ + nfilter = netdev_uc_count(dev) + netdev_mc_count(dev) + 2; + + /* 
+ * Turn on promiscuous mode for three scenarios + * 1. IFF_PROMISC flag is set + * 2. IFF_ALLMULTI flag is set + * 3. The number of filters needed exceeds the number of filters + * supported by the hardware. + */ reg = bcmgenet_umac_readl(priv, UMAC_CMD); - if (dev->flags & IFF_PROMISC) { + if ((dev->flags & (IFF_PROMISC | IFF_ALLMULTI)) || + (nfilter > MAX_MDF_FILTER)) { reg |= CMD_PROMISC; bcmgenet_umac_writel(priv, reg, UMAC_CMD); bcmgenet_umac_writel(priv, 0, UMAC_MDF_CTRL); @@ -3128,32 +3140,24 @@ static void bcmgenet_set_rx_mode(struct net_device *dev) bcmgenet_umac_writel(priv, reg, UMAC_CMD); } - /* UniMac doesn't support ALLMULTI */ - if (dev->flags & IFF_ALLMULTI) { - netdev_warn(dev, "ALLMULTI is not supported\n"); - return; - } - /* update MDF filter */ i = 0; - mc = 0; /* Broadcast */ - bcmgenet_set_mdf_addr(priv, dev->broadcast, &i, &mc); + bcmgenet_set_mdf_addr(priv, dev->broadcast, &i); /* my own address.*/ - bcmgenet_set_mdf_addr(priv, dev->dev_addr, &i, &mc); - /* Unicast list*/ - if (netdev_uc_count(dev) > (MAX_MC_COUNT - mc)) - return; + bcmgenet_set_mdf_addr(priv, dev->dev_addr, &i); - if (!netdev_uc_empty(dev)) - netdev_for_each_uc_addr(ha, dev) - bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc); - /* Multicast */ - if (netdev_mc_empty(dev) || netdev_mc_count(dev) >= (MAX_MC_COUNT - mc)) - return; + /* Unicast */ + netdev_for_each_uc_addr(ha, dev) + bcmgenet_set_mdf_addr(priv, ha->addr, &i); + /* Multicast */ netdev_for_each_mc_addr(ha, dev) - bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc); + bcmgenet_set_mdf_addr(priv, ha->addr, &i); + + /* Enable filters */ + reg = GENMASK(MAX_MDF_FILTER - 1, MAX_MDF_FILTER - nfilter); + bcmgenet_umac_writel(priv, reg, UMAC_MDF_CTRL); } /* Set the hardware MAC address. */ diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c index c34ea385fe4a5b40d9f2905780ed6224fccfb11c..6be6de0774b61f7cd90e7e607326c1e5ac8052c5 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c +++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c @@ -3270,7 +3270,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent) if (!adapter->regs) { dev_err(&pdev->dev, "cannot map device registers\n"); err = -ENOMEM; - goto out_free_adapter; + goto out_free_adapter_nofail; } adapter->pdev = pdev; @@ -3398,6 +3398,9 @@ out_free_dev: if (adapter->port[i]) free_netdev(adapter->port[i]); +out_free_adapter_nofail: + kfree_skb(adapter->nofail_skb); + out_free_adapter: kfree(adapter); diff --git a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c index d97e0d7e541afde772cf669a98f58d9a5a6eb626..b766362031c32e4be277d13c8f6b0c3397eab97d 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c @@ -1065,14 +1065,12 @@ static void cudbg_t4_fwcache(struct cudbg_init *pdbg_init, } } -static int cudbg_collect_mem_region(struct cudbg_init *pdbg_init, - struct cudbg_buffer *dbg_buff, - struct cudbg_error *cudbg_err, - u8 mem_type) +static unsigned long cudbg_mem_region_size(struct cudbg_init *pdbg_init, + struct cudbg_error *cudbg_err, + u8 mem_type) { struct adapter *padap = pdbg_init->adap; struct cudbg_meminfo mem_info; - unsigned long size; u8 mc_idx; int rc; @@ -1086,7 +1084,16 @@ static int cudbg_collect_mem_region(struct cudbg_init *pdbg_init, if (rc) return rc; - size = mem_info.avail[mc_idx].limit - mem_info.avail[mc_idx].base; + return mem_info.avail[mc_idx].limit - mem_info.avail[mc_idx].base; +} 
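/*
 * Illustrative aside, not part of the patch above: the reworked
 * bcmgenet_set_rx_mode() enables one MDF filter per programmed address by
 * writing GENMASK(MAX_MDF_FILTER - 1, MAX_MDF_FILTER - nfilter) to
 * UMAC_MDF_CTRL, i.e. it sets 'nfilter' bits counting down from bit 16.
 * The minimal user-space sketch below only reproduces that arithmetic; the
 * local genmask() helper mirrors the kernel's GENMASK(h, l) semantics
 * (bits l..h set) and MAX_MDF_FILTER restates the value introduced above.
 */
#include <stdio.h>
#include <stdint.h>

#define MAX_MDF_FILTER 17

static uint32_t genmask(unsigned int h, unsigned int l)
{
	/* Bits l..h inclusive, as with the kernel's GENMASK(h, l). */
	return (~0u >> (31 - h)) & ~((1u << l) - 1);
}

int main(void)
{
	/* Broadcast + own MAC + one unicast address => nfilter = 3. */
	unsigned int nfilter = 3;
	uint32_t reg = genmask(MAX_MDF_FILTER - 1, MAX_MDF_FILTER - nfilter);

	/* Prints UMAC_MDF_CTRL = 0x1c000, i.e. enable bits 16..14. */
	printf("UMAC_MDF_CTRL = 0x%x\n", reg);
	return 0;
}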
+ +static int cudbg_collect_mem_region(struct cudbg_init *pdbg_init, + struct cudbg_buffer *dbg_buff, + struct cudbg_error *cudbg_err, + u8 mem_type) +{ + unsigned long size = cudbg_mem_region_size(pdbg_init, cudbg_err, mem_type); + return cudbg_read_fw_mem(pdbg_init, dbg_buff, mem_type, size, cudbg_err); } diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c index f2aba5b160c2d55fd1e319d2c6c284e92a25d64c..d45c435a599d667c8595f1ed9fc1d1f62c65ec52 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c @@ -67,7 +67,8 @@ static struct ch_tc_pedit_fields pedits[] = { static struct ch_tc_flower_entry *allocate_flower_entry(void) { struct ch_tc_flower_entry *new = kzalloc(sizeof(*new), GFP_KERNEL); - spin_lock_init(&new->lock); + if (new) + spin_lock_init(&new->lock); return new; } diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c index bfb16a4744901736b6db90352a4fdaf0e41b2290..d1905d50c26cb8a58dcee88648f09e1cd709ebd3 100644 --- a/drivers/net/ethernet/emulex/benet/be_ethtool.c +++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c @@ -895,7 +895,7 @@ static void be_self_test(struct net_device *netdev, struct ethtool_test *test, u64 *data) { struct be_adapter *adapter = netdev_priv(netdev); - int status; + int status, cnt; u8 link_status = 0; if (adapter->function_caps & BE_FUNCTION_CAPS_SUPER_NIC) { @@ -906,6 +906,9 @@ static void be_self_test(struct net_device *netdev, struct ethtool_test *test, memset(data, 0, sizeof(u64) * ETHTOOL_TESTS_NUM); + /* check link status before offline tests */ + link_status = netif_carrier_ok(netdev); + if (test->flags & ETH_TEST_FL_OFFLINE) { if (be_loopback_test(adapter, BE_MAC_LOOPBACK, &data[0]) != 0) test->flags |= ETH_TEST_FL_FAILED; @@ -926,13 +929,26 @@ static void be_self_test(struct net_device *netdev, struct ethtool_test *test, test->flags |= ETH_TEST_FL_FAILED; } - status = be_cmd_link_status_query(adapter, NULL, &link_status, 0); - if (status) { - test->flags |= ETH_TEST_FL_FAILED; - data[4] = -1; - } else if (!link_status) { + /* link status was down prior to test */ + if (!link_status) { test->flags |= ETH_TEST_FL_FAILED; data[4] = 1; + return; + } + + for (cnt = 10; cnt; cnt--) { + status = be_cmd_link_status_query(adapter, NULL, &link_status, + 0); + if (status) { + test->flags |= ETH_TEST_FL_FAILED; + data[4] = -1; + break; + } + + if (link_status) + break; + + msleep_interruptible(500); } } diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c index bff74752cef16f4fd55b1702660ca048a236162c..3fe6a28027fe13fcde75b5d7848b3a8246b686ae 100644 --- a/drivers/net/ethernet/emulex/benet/be_main.c +++ b/drivers/net/ethernet/emulex/benet/be_main.c @@ -4700,8 +4700,12 @@ int be_update_queues(struct be_adapter *adapter) struct net_device *netdev = adapter->netdev; int status; - if (netif_running(netdev)) + if (netif_running(netdev)) { + /* device cannot transmit now, avoid dev_watchdog timeouts */ + netif_carrier_off(netdev); + be_close(netdev); + } be_cancel_worker(adapter); diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index bf715a3672736eda537aefc64ebdb00d0835a39f..4cf80de4c471c425905da873c79ba3b5635da378 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -1689,10 +1689,10 @@ static void fec_get_mac(struct 
net_device *ndev) */ if (!is_valid_ether_addr(iap)) { /* Report it and use a random ethernet address instead */ - netdev_err(ndev, "Invalid MAC address: %pM\n", iap); + dev_err(&fep->pdev->dev, "Invalid MAC address: %pM\n", iap); eth_hw_addr_random(ndev); - netdev_info(ndev, "Using random MAC address: %pM\n", - ndev->dev_addr); + dev_info(&fep->pdev->dev, "Using random MAC address: %pM\n", + ndev->dev_addr); return; } diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c index 6127697ede120707fdc92f8d50cc9355487664f3..a91d49dd92ea6c32a308ea61d0e1a02785985aa1 100644 --- a/drivers/net/ethernet/hisilicon/hip04_eth.c +++ b/drivers/net/ethernet/hisilicon/hip04_eth.c @@ -157,6 +157,7 @@ struct hip04_priv { unsigned int reg_inten; struct napi_struct napi; + struct device *dev; struct net_device *ndev; struct tx_desc *tx_desc; @@ -185,7 +186,7 @@ struct hip04_priv { static inline unsigned int tx_count(unsigned int head, unsigned int tail) { - return (head - tail) % (TX_DESC_NUM - 1); + return (head - tail) % TX_DESC_NUM; } static void hip04_config_port(struct net_device *ndev, u32 speed, u32 duplex) @@ -387,7 +388,7 @@ static int hip04_tx_reclaim(struct net_device *ndev, bool force) } if (priv->tx_phys[tx_tail]) { - dma_unmap_single(&ndev->dev, priv->tx_phys[tx_tail], + dma_unmap_single(priv->dev, priv->tx_phys[tx_tail], priv->tx_skb[tx_tail]->len, DMA_TO_DEVICE); priv->tx_phys[tx_tail] = 0; @@ -437,8 +438,8 @@ static int hip04_mac_start_xmit(struct sk_buff *skb, struct net_device *ndev) return NETDEV_TX_BUSY; } - phys = dma_map_single(&ndev->dev, skb->data, skb->len, DMA_TO_DEVICE); - if (dma_mapping_error(&ndev->dev, phys)) { + phys = dma_map_single(priv->dev, skb->data, skb->len, DMA_TO_DEVICE); + if (dma_mapping_error(priv->dev, phys)) { dev_kfree_skb(skb); return NETDEV_TX_OK; } @@ -497,6 +498,9 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget) u16 len; u32 err; + /* clean up tx descriptors */ + tx_remaining = hip04_tx_reclaim(ndev, false); + while (cnt && !last) { buf = priv->rx_buf[priv->rx_head]; skb = build_skb(buf, priv->rx_buf_size); @@ -505,7 +509,7 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget) goto refill; } - dma_unmap_single(&ndev->dev, priv->rx_phys[priv->rx_head], + dma_unmap_single(priv->dev, priv->rx_phys[priv->rx_head], RX_BUF_SIZE, DMA_FROM_DEVICE); priv->rx_phys[priv->rx_head] = 0; @@ -534,9 +538,9 @@ refill: buf = netdev_alloc_frag(priv->rx_buf_size); if (!buf) goto done; - phys = dma_map_single(&ndev->dev, buf, + phys = dma_map_single(priv->dev, buf, RX_BUF_SIZE, DMA_FROM_DEVICE); - if (dma_mapping_error(&ndev->dev, phys)) + if (dma_mapping_error(priv->dev, phys)) goto done; priv->rx_buf[priv->rx_head] = buf; priv->rx_phys[priv->rx_head] = phys; @@ -557,8 +561,7 @@ refill: } napi_complete_done(napi, rx); done: - /* clean up tx descriptors and start a new timer if necessary */ - tx_remaining = hip04_tx_reclaim(ndev, false); + /* start a new timer if necessary */ if (rx < budget && tx_remaining) hip04_start_tx_timer(priv); @@ -640,9 +643,9 @@ static int hip04_mac_open(struct net_device *ndev) for (i = 0; i < RX_DESC_NUM; i++) { dma_addr_t phys; - phys = dma_map_single(&ndev->dev, priv->rx_buf[i], + phys = dma_map_single(priv->dev, priv->rx_buf[i], RX_BUF_SIZE, DMA_FROM_DEVICE); - if (dma_mapping_error(&ndev->dev, phys)) + if (dma_mapping_error(priv->dev, phys)) return -EIO; priv->rx_phys[i] = phys; @@ -676,7 +679,7 @@ static int hip04_mac_stop(struct net_device *ndev) for (i = 0; i < 
RX_DESC_NUM; i++) { if (priv->rx_phys[i]) { - dma_unmap_single(&ndev->dev, priv->rx_phys[i], + dma_unmap_single(priv->dev, priv->rx_phys[i], RX_BUF_SIZE, DMA_FROM_DEVICE); priv->rx_phys[i] = 0; } @@ -820,6 +823,7 @@ static int hip04_mac_probe(struct platform_device *pdev) return -ENOMEM; priv = netdev_priv(ndev); + priv->dev = d; priv->ndev = ndev; platform_set_drvdata(pdev, ndev); SET_NETDEV_DEV(ndev, &pdev->dev); diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c index fff5be8078ac388102456f3e505a5a0d46b630bf..0594a6c3dccda161a9ecc3ac38e55eb456e16f49 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c +++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c @@ -229,6 +229,7 @@ void hnae3_unregister_ae_algo(struct hnae3_ae_algo *ae_algo) ae_algo->ops->uninit_ae_dev(ae_dev); hnae3_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 0); + ae_dev->ops = NULL; } list_del(&ae_algo->node); @@ -316,6 +317,7 @@ void hnae3_unregister_ae_dev(struct hnae3_ae_dev *ae_dev) ae_algo->ops->uninit_ae_dev(ae_dev); hnae3_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 0); + ae_dev->ops = NULL; } list_del(&ae_dev->node); diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c index 9684ad015c429dfdde7a82e614d390945eb80f60..6a3c6b02a77cd5ddb33b5de31dc58cd2442bf114 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c @@ -245,11 +245,13 @@ static int hns3_lp_run_test(struct net_device *ndev, enum hnae3_loop mode) skb_get(skb); tx_ret = hns3_nic_net_xmit(skb, ndev); - if (tx_ret == NETDEV_TX_OK) + if (tx_ret == NETDEV_TX_OK) { good_cnt++; - else + } else { + kfree_skb(skb); netdev_err(ndev, "hns3_lb_run_test xmit failed: %d\n", tx_ret); + } } if (good_cnt != HNS3_NIC_LB_TEST_PKT_NUM) { ret_val = HNS3_NIC_LB_TEST_TX_CNT_ERR; diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c index 4648c6a9d9e8654191060af4c88a7ac5549f71a2..89ca69fa2b97b2ea28647192db6747049dab18f2 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c @@ -663,8 +663,7 @@ static u8 *hclge_comm_get_strings(u32 stringset, return buff; for (i = 0; i < size; i++) { - snprintf(buff, ETH_GSTRING_LEN, - strs[i].desc); + snprintf(buff, ETH_GSTRING_LEN, "%s", strs[i].desc); buff = buff + ETH_GSTRING_LEN; } diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c index 48235dc2dd56f03ce482f10b17a5021fb73ca7fc..11e9259ca0407b255672e85e87f25eee554a9103 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c @@ -54,7 +54,8 @@ static int hclge_shaper_para_calc(u32 ir, u8 shaper_level, u32 tick; /* Calc tick */ - if (shaper_level >= HCLGE_SHAPER_LVL_CNT) + if (shaper_level >= HCLGE_SHAPER_LVL_CNT || + ir > HCLGE_ETHER_MAX_RATE) return -EINVAL; tick = tick_array[shaper_level]; @@ -1057,6 +1058,9 @@ static int hclge_tm_schd_mode_vnet_base_cfg(struct hclge_vport *vport) int ret; u8 i; + if (vport->vport_id >= HNAE3_MAX_TC) + return -EINVAL; + ret = hclge_tm_pri_schd_mode_cfg(hdev, vport->vport_id); if (ret) return ret; diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c index 8cd339c92c1afbe153652459b3f286af84f1fd1a..a7b5a47ab83d5b1ba2eb34d0f328bebd2e809d1d 100644 --- 
a/drivers/net/ethernet/intel/e1000e/netdev.c +++ b/drivers/net/ethernet/intel/e1000e/netdev.c @@ -4208,7 +4208,7 @@ void e1000e_up(struct e1000_adapter *adapter) e1000_configure_msix(adapter); e1000_irq_enable(adapter); - netif_start_queue(adapter->netdev); + /* Tx queue started by watchdog timer when link is up */ e1000e_trigger_lsc(adapter); } @@ -4584,6 +4584,7 @@ int e1000e_open(struct net_device *netdev) pm_runtime_get_sync(&pdev->dev); netif_carrier_off(netdev); + netif_stop_queue(netdev); /* allocate transmit descriptors */ err = e1000e_setup_tx_resources(adapter->tx_ring); @@ -4644,7 +4645,6 @@ int e1000e_open(struct net_device *netdev) e1000_irq_enable(adapter); adapter->tx_hang_recheck = false; - netif_start_queue(netdev); hw->mac.get_link_status = true; pm_runtime_put(&pdev->dev); @@ -5266,6 +5266,7 @@ static void e1000_watchdog_task(struct work_struct *work) if (phy->ops.cfg_on_link_up) phy->ops.cfg_on_link_up(hw); + netif_wake_queue(netdev); netif_carrier_on(netdev); if (!test_bit(__E1000_DOWN, &adapter->state)) @@ -5279,6 +5280,7 @@ static void e1000_watchdog_task(struct work_struct *work) /* Link status message must follow this format */ pr_info("%s NIC Link is Down\n", adapter->netdev->name); netif_carrier_off(netdev); + netif_stop_queue(netdev); if (!test_bit(__E1000_DOWN, &adapter->state)) mod_timer(&adapter->phy_info_timer, round_jiffies(jiffies + 2 * HZ)); @@ -5286,13 +5288,8 @@ static void e1000_watchdog_task(struct work_struct *work) /* 8000ES2LAN requires a Rx packet buffer work-around * on link down event; reset the controller to flush * the Rx packet buffer. - * - * If the link is lost the controller stops DMA, but - * if there is queued Tx work it cannot be done. So - * reset the controller to flush the Tx packet buffers. */ - if ((adapter->flags & FLAG_RX_NEEDS_RESTART) || - e1000_desc_unused(tx_ring) + 1 < tx_ring->count) + if (adapter->flags & FLAG_RX_NEEDS_RESTART) adapter->flags |= FLAG_RESTART_NOW; else pm_schedule_suspend(netdev->dev.parent, @@ -5315,6 +5312,14 @@ link_up: adapter->gotc_old = adapter->stats.gotc; spin_unlock(&adapter->stats64_lock); + /* If the link is lost the controller stops DMA, but + * if there is queued Tx work it cannot be done. So + * reset the controller to flush the Tx packet buffers. + */ + if (!netif_carrier_ok(netdev) && + (e1000_desc_unused(tx_ring) + 1 < tx_ring->count)) + adapter->flags |= FLAG_RESTART_NOW; + /* If reset is necessary, do it outside of interrupt context. 
*/ if (adapter->flags & FLAG_RESTART_NOW) { schedule_work(&adapter->reset_task); diff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c index a9730711e2579da0ed1621f57d99597868ca2c83..b56d22b530a7079ff740c56a490e0e79a2c27822 100644 --- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c @@ -1291,7 +1291,7 @@ static struct sk_buff *i40e_construct_skb(struct i40e_ring *rx_ring, struct i40e_rx_buffer *rx_buffer, unsigned int size) { - void *va = page_address(rx_buffer->page) + rx_buffer->page_offset; + void *va; #if (PAGE_SIZE < 8192) unsigned int truesize = i40e_rx_pg_size(rx_ring) / 2; #else @@ -1301,6 +1301,7 @@ static struct sk_buff *i40e_construct_skb(struct i40e_ring *rx_ring, struct sk_buff *skb; /* prefetch first cache line of first page */ + va = page_address(rx_buffer->page) + rx_buffer->page_offset; prefetch(va); #if L1_CACHE_BYTES < 128 prefetch(va + L1_CACHE_BYTES); @@ -1355,7 +1356,7 @@ static struct sk_buff *i40e_build_skb(struct i40e_ring *rx_ring, struct i40e_rx_buffer *rx_buffer, unsigned int size) { - void *va = page_address(rx_buffer->page) + rx_buffer->page_offset; + void *va; #if (PAGE_SIZE < 8192) unsigned int truesize = i40e_rx_pg_size(rx_ring) / 2; #else @@ -1365,6 +1366,7 @@ static struct sk_buff *i40e_build_skb(struct i40e_ring *rx_ring, struct sk_buff *skb; /* prefetch first cache line of first page */ + va = page_address(rx_buffer->page) + rx_buffer->page_offset; prefetch(va); #if L1_CACHE_BYTES < 128 prefetch(va + L1_CACHE_BYTES); diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 5aa083d9a6c9ac0a604da886913811476b58f52b..ab76a5f77cd0e82f85665fcc7092bc7fde3e21cd 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -5703,6 +5703,7 @@ static void igb_tx_ctxtdesc(struct igb_ring *tx_ring, */ if (tx_ring->launchtime_enable) { ts = ns_to_timespec64(first->skb->tstamp); + first->skb->tstamp = 0; context_desc->seqnum_seed = cpu_to_le32(ts.tv_nsec / 32); } else { context_desc->seqnum_seed = 0; diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c index e5a8461fe6a99bfbf8ab20b85e38c0f0c24e0bb5..8829bd95d0d36557acf0e669f77f6680e45485c5 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c @@ -3223,7 +3223,8 @@ static int ixgbe_get_module_info(struct net_device *dev, page_swap = true; } - if (sff8472_rev == IXGBE_SFF_SFF_8472_UNSUP || page_swap) { + if (sff8472_rev == IXGBE_SFF_SFF_8472_UNSUP || page_swap || + !(addr_mode & IXGBE_SFF_DDM_IMPLEMENTED)) { /* We have a SFP, but it does not support SFF-8472 */ modinfo->type = ETH_MODULE_SFF_8079; modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN; diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h index 64e44e01c973fc4c047f04432c4694ac06271a25..c56baad04ee615067649c521aef12bedb99c150c 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h @@ -45,6 +45,7 @@ #define IXGBE_SFF_SOFT_RS_SELECT_10G 0x8 #define IXGBE_SFF_SOFT_RS_SELECT_1G 0x0 #define IXGBE_SFF_ADDRESSING_MODE 0x4 +#define IXGBE_SFF_DDM_IMPLEMENTED 0x40 #define IXGBE_SFF_QSFP_DA_ACTIVE_CABLE 0x1 #define IXGBE_SFF_QSFP_DA_PASSIVE_CABLE 0x8 #define IXGBE_SFF_QSFP_CONNECTOR_NOT_SEPARABLE 0x23 diff --git a/drivers/net/ethernet/marvell/mvmdio.c 
b/drivers/net/ethernet/marvell/mvmdio.c index c5dac6bd2be4d31b988df0f572deedff15ce18fc..ee7857298361ded94b87e7e715c87610a2a05924 100644 --- a/drivers/net/ethernet/marvell/mvmdio.c +++ b/drivers/net/ethernet/marvell/mvmdio.c @@ -64,7 +64,7 @@ struct orion_mdio_dev { void __iomem *regs; - struct clk *clk[3]; + struct clk *clk[4]; /* * If we have access to the error interrupt pin (which is * somewhat misnamed as it not only reflects internal errors @@ -321,6 +321,10 @@ static int orion_mdio_probe(struct platform_device *pdev) for (i = 0; i < ARRAY_SIZE(dev->clk); i++) { dev->clk[i] = of_clk_get(pdev->dev.of_node, i); + if (PTR_ERR(dev->clk[i]) == -EPROBE_DEFER) { + ret = -EPROBE_DEFER; + goto out_clk; + } if (IS_ERR(dev->clk[i])) break; clk_prepare_enable(dev->clk[i]); @@ -362,6 +366,7 @@ out_mdio: if (dev->err_interrupt > 0) writel(0, dev->regs + MVMDIO_ERR_INT_MASK); +out_clk: for (i = 0; i < ARRAY_SIZE(dev->clk); i++) { if (IS_ERR(dev->clk[i])) break; diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index df5b74f289e192e053cd0191dde687513a882737..9b608d23ff7eeb4f39f5a636105632c6852d1ca6 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -3501,6 +3501,7 @@ static int mvpp2_set_mac_address(struct net_device *dev, void *p) static int mvpp2_change_mtu(struct net_device *dev, int mtu) { struct mvpp2_port *port = netdev_priv(dev); + bool running = netif_running(dev); int err; if (!IS_ALIGNED(MVPP2_RX_PKT_SIZE(mtu), 8)) { @@ -3509,40 +3510,24 @@ static int mvpp2_change_mtu(struct net_device *dev, int mtu) mtu = ALIGN(MVPP2_RX_PKT_SIZE(mtu), 8); } - if (!netif_running(dev)) { - err = mvpp2_bm_update_mtu(dev, mtu); - if (!err) { - port->pkt_size = MVPP2_RX_PKT_SIZE(mtu); - return 0; - } - - /* Reconfigure BM to the original MTU */ - err = mvpp2_bm_update_mtu(dev, dev->mtu); - if (err) - goto log_error; - } - - mvpp2_stop_dev(port); + if (running) + mvpp2_stop_dev(port); err = mvpp2_bm_update_mtu(dev, mtu); - if (!err) { + if (err) { + netdev_err(dev, "failed to change MTU\n"); + /* Reconfigure BM to the original MTU */ + mvpp2_bm_update_mtu(dev, dev->mtu); + } else { port->pkt_size = MVPP2_RX_PKT_SIZE(mtu); - goto out_start; } - /* Reconfigure BM to the original MTU */ - err = mvpp2_bm_update_mtu(dev, dev->mtu); - if (err) - goto log_error; - -out_start: - mvpp2_start_dev(port); - mvpp2_egress_enable(port); - mvpp2_ingress_enable(port); + if (running) { + mvpp2_start_dev(port); + mvpp2_egress_enable(port); + mvpp2_ingress_enable(port); + } - return 0; -log_error: - netdev_err(dev, "failed to change MTU\n"); return err; } @@ -4427,9 +4412,9 @@ static void mvpp2_xlg_config(struct mvpp2_port *port, unsigned int mode, if (state->pause & MLO_PAUSE_RX) ctrl0 |= MVPP22_XLG_CTRL0_RX_FLOW_CTRL_EN; - ctrl4 &= ~MVPP22_XLG_CTRL4_MACMODSELECT_GMAC; - ctrl4 |= MVPP22_XLG_CTRL4_FWD_FC | MVPP22_XLG_CTRL4_FWD_PFC | - MVPP22_XLG_CTRL4_EN_IDLE_CHECK; + ctrl4 &= ~(MVPP22_XLG_CTRL4_MACMODSELECT_GMAC | + MVPP22_XLG_CTRL4_EN_IDLE_CHECK); + ctrl4 |= MVPP22_XLG_CTRL4_FWD_FC | MVPP22_XLG_CTRL4_FWD_PFC; writel(ctrl0, port->base + MVPP22_XLG_CTRL0_REG); writel(ctrl4, port->base + MVPP22_XLG_CTRL4_REG); @@ -5358,9 +5343,6 @@ static int mvpp2_remove(struct platform_device *pdev) mvpp2_dbgfs_cleanup(priv); - flush_workqueue(priv->stats_queue); - destroy_workqueue(priv->stats_queue); - fwnode_for_each_available_child_node(fwnode, port_fwnode) { if (priv->port_list[i]) { 
mutex_destroy(&priv->port_list[i]->gather_stats_lock); @@ -5369,6 +5351,8 @@ static int mvpp2_remove(struct platform_device *pdev) i++; } + destroy_workqueue(priv->stats_queue); + for (i = 0; i < MVPP2_BM_POOLS_NUM; i++) { struct mvpp2_bm_pool *bm_pool = &priv->bm_pools[i]; diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c index ae2240074d8ef8c3ab763af481acb8936db7ff25..5692c6087bbb0781ef473ea5dfe8f6148c4ae4f7 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c @@ -312,7 +312,8 @@ static void mvpp2_prs_sram_shift_set(struct mvpp2_prs_entry *pe, int shift, } /* Set value */ - pe->sram[MVPP2_BIT_TO_WORD(MVPP2_PRS_SRAM_SHIFT_OFFS)] = shift & MVPP2_PRS_SRAM_SHIFT_MASK; + pe->sram[MVPP2_BIT_TO_WORD(MVPP2_PRS_SRAM_SHIFT_OFFS)] |= + shift & MVPP2_PRS_SRAM_SHIFT_MASK; /* Reset and set operation */ mvpp2_prs_sram_bits_clear(pe, MVPP2_PRS_SRAM_OP_SEL_SHIFT_OFFS, diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c index 1485f66cf7b0ca5001f4559e03131a9b4430e0f9..4ade864c8d531f97511e630ba45e97eb8c31f394 100644 --- a/drivers/net/ethernet/marvell/sky2.c +++ b/drivers/net/ethernet/marvell/sky2.c @@ -4947,6 +4947,13 @@ static const struct dmi_system_id msi_blacklist[] = { DMI_MATCH(DMI_PRODUCT_NAME, "P-79"), }, }, + { + .ident = "ASUS P6T", + .matches = { + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."), + DMI_MATCH(DMI_BOARD_NAME, "P6T"), + }, + }, {} }; diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c index f5cd9539980f80384d89b21931ef8739a296f1b5..45d9a5f8fa1bcd11f89900c20c5e53f9e654a9ef 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c @@ -1190,7 +1190,7 @@ int mlx4_en_config_rss_steer(struct mlx4_en_priv *priv) err = mlx4_qp_alloc(mdev->dev, priv->base_qpn, rss_map->indir_qp); if (err) { en_err(priv, "Failed to allocate RSS indirection QP\n"); - goto rss_err; + goto qp_alloc_err; } rss_map->indir_qp->event = mlx4_en_sqp_event; @@ -1244,6 +1244,7 @@ indir_err: MLX4_QP_STATE_RST, NULL, 0, 0, rss_map->indir_qp); mlx4_qp_remove(mdev->dev, rss_map->indir_qp); mlx4_qp_free(mdev->dev, rss_map->indir_qp); +qp_alloc_err: kfree(rss_map->indir_qp); rss_map->indir_qp = NULL; rss_err: diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c index 1c225be9c7db996f58c9033c54279fd84bbba2b0..3692d6a1cce8d6acebda01fa84bf0ce8d5a1fcb4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c @@ -307,7 +307,7 @@ void mlx5_unregister_device(struct mlx5_core_dev *dev) struct mlx5_interface *intf; mutex_lock(&mlx5_intf_mutex); - list_for_each_entry(intf, &intf_list, list) + list_for_each_entry_reverse(intf, &intf_list, list) mlx5_remove_device(intf, priv); list_del(&priv->dev_list); mutex_unlock(&mlx5_intf_mutex); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c index 45cdde694d20049af85391ffcc376438fadb921e..a4be04debe671e142bafd9b34a0dd50ab4dcd8e4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c @@ -437,12 +437,6 @@ arfs_hash_bucket(struct arfs_table *arfs_t, __be16 src_port, return &arfs_t->rules_hash[bucket_idx]; } -static u8 arfs_get_ip_proto(const struct sk_buff *skb) -{ - return (skb->protocol == htons(ETH_P_IP)) ? 
- ip_hdr(skb)->protocol : ipv6_hdr(skb)->nexthdr; -} - static struct arfs_table *arfs_get_table(struct mlx5e_arfs_tables *arfs, u8 ip_proto, __be16 etype) { @@ -599,31 +593,9 @@ out: arfs_may_expire_flow(priv); } -/* return L4 destination port from ip4/6 packets */ -static __be16 arfs_get_dst_port(const struct sk_buff *skb) -{ - char *transport_header; - - transport_header = skb_transport_header(skb); - if (arfs_get_ip_proto(skb) == IPPROTO_TCP) - return ((struct tcphdr *)transport_header)->dest; - return ((struct udphdr *)transport_header)->dest; -} - -/* return L4 source port from ip4/6 packets */ -static __be16 arfs_get_src_port(const struct sk_buff *skb) -{ - char *transport_header; - - transport_header = skb_transport_header(skb); - if (arfs_get_ip_proto(skb) == IPPROTO_TCP) - return ((struct tcphdr *)transport_header)->source; - return ((struct udphdr *)transport_header)->source; -} - static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv, struct arfs_table *arfs_t, - const struct sk_buff *skb, + const struct flow_keys *fk, u16 rxq, u32 flow_id) { struct arfs_rule *rule; @@ -638,19 +610,19 @@ static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv, INIT_WORK(&rule->arfs_work, arfs_handle_work); tuple = &rule->tuple; - tuple->etype = skb->protocol; + tuple->etype = fk->basic.n_proto; + tuple->ip_proto = fk->basic.ip_proto; if (tuple->etype == htons(ETH_P_IP)) { - tuple->src_ipv4 = ip_hdr(skb)->saddr; - tuple->dst_ipv4 = ip_hdr(skb)->daddr; + tuple->src_ipv4 = fk->addrs.v4addrs.src; + tuple->dst_ipv4 = fk->addrs.v4addrs.dst; } else { - memcpy(&tuple->src_ipv6, &ipv6_hdr(skb)->saddr, + memcpy(&tuple->src_ipv6, &fk->addrs.v6addrs.src, sizeof(struct in6_addr)); - memcpy(&tuple->dst_ipv6, &ipv6_hdr(skb)->daddr, + memcpy(&tuple->dst_ipv6, &fk->addrs.v6addrs.dst, sizeof(struct in6_addr)); } - tuple->ip_proto = arfs_get_ip_proto(skb); - tuple->src_port = arfs_get_src_port(skb); - tuple->dst_port = arfs_get_dst_port(skb); + tuple->src_port = fk->ports.src; + tuple->dst_port = fk->ports.dst; rule->flow_id = flow_id; rule->filter_id = priv->fs.arfs.last_filter_id++ % RPS_NO_FILTER; @@ -661,37 +633,33 @@ static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv, return rule; } -static bool arfs_cmp_ips(struct arfs_tuple *tuple, - const struct sk_buff *skb) +static bool arfs_cmp(const struct arfs_tuple *tuple, const struct flow_keys *fk) { - if (tuple->etype == htons(ETH_P_IP) && - tuple->src_ipv4 == ip_hdr(skb)->saddr && - tuple->dst_ipv4 == ip_hdr(skb)->daddr) - return true; - if (tuple->etype == htons(ETH_P_IPV6) && - (!memcmp(&tuple->src_ipv6, &ipv6_hdr(skb)->saddr, - sizeof(struct in6_addr))) && - (!memcmp(&tuple->dst_ipv6, &ipv6_hdr(skb)->daddr, - sizeof(struct in6_addr)))) - return true; + if (tuple->src_port != fk->ports.src || tuple->dst_port != fk->ports.dst) + return false; + if (tuple->etype != fk->basic.n_proto) + return false; + if (tuple->etype == htons(ETH_P_IP)) + return tuple->src_ipv4 == fk->addrs.v4addrs.src && + tuple->dst_ipv4 == fk->addrs.v4addrs.dst; + if (tuple->etype == htons(ETH_P_IPV6)) + return !memcmp(&tuple->src_ipv6, &fk->addrs.v6addrs.src, + sizeof(struct in6_addr)) && + !memcmp(&tuple->dst_ipv6, &fk->addrs.v6addrs.dst, + sizeof(struct in6_addr)); return false; } static struct arfs_rule *arfs_find_rule(struct arfs_table *arfs_t, - const struct sk_buff *skb) + const struct flow_keys *fk) { struct arfs_rule *arfs_rule; struct hlist_head *head; - __be16 src_port = arfs_get_src_port(skb); - __be16 dst_port = arfs_get_dst_port(skb); - head = 
arfs_hash_bucket(arfs_t, src_port, dst_port); + head = arfs_hash_bucket(arfs_t, fk->ports.src, fk->ports.dst); hlist_for_each_entry(arfs_rule, head, hlist) { - if (arfs_rule->tuple.src_port == src_port && - arfs_rule->tuple.dst_port == dst_port && - arfs_cmp_ips(&arfs_rule->tuple, skb)) { + if (arfs_cmp(&arfs_rule->tuple, fk)) return arfs_rule; - } } return NULL; @@ -704,20 +672,24 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb, struct mlx5e_arfs_tables *arfs = &priv->fs.arfs; struct arfs_table *arfs_t; struct arfs_rule *arfs_rule; + struct flow_keys fk; + + if (!skb_flow_dissect_flow_keys(skb, &fk, 0)) + return -EPROTONOSUPPORT; - if (skb->protocol != htons(ETH_P_IP) && - skb->protocol != htons(ETH_P_IPV6)) + if (fk.basic.n_proto != htons(ETH_P_IP) && + fk.basic.n_proto != htons(ETH_P_IPV6)) return -EPROTONOSUPPORT; if (skb->encapsulation) return -EPROTONOSUPPORT; - arfs_t = arfs_get_table(arfs, arfs_get_ip_proto(skb), skb->protocol); + arfs_t = arfs_get_table(arfs, fk.basic.ip_proto, fk.basic.n_proto); if (!arfs_t) return -EPROTONOSUPPORT; spin_lock_bh(&arfs->arfs_lock); - arfs_rule = arfs_find_rule(arfs_t, skb); + arfs_rule = arfs_find_rule(arfs_t, &fk); if (arfs_rule) { if (arfs_rule->rxq == rxq_index) { spin_unlock_bh(&arfs->arfs_lock); @@ -725,8 +697,7 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb, } arfs_rule->rxq = rxq_index; } else { - arfs_rule = arfs_alloc_rule(priv, arfs_t, skb, - rxq_index, flow_id); + arfs_rule = arfs_alloc_rule(priv, arfs_t, &fk, rxq_index, flow_id); if (!arfs_rule) { spin_unlock_bh(&arfs->arfs_lock); return -ENOMEM; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c index 792bb8bc0cd34bda53ab35bdcfd4acd8391a2a18..2b9350f4c7522bd455dfe91ee4771e9ca6f0e4d9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c @@ -1083,6 +1083,9 @@ static int mlx5e_set_pauseparam(struct net_device *netdev, struct mlx5_core_dev *mdev = priv->mdev; int err; + if (!MLX5_CAP_GEN(mdev, vport_group_manager)) + return -EOPNOTSUPP; + if (pauseparam->autoneg) return -EINVAL; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 0f1c296c3ce4ec97cbef95e9cce0b29ffffa148a..83ab2c0e6b61fd90d233a97c18dbdd8e60962727 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -420,12 +420,11 @@ static inline u64 mlx5e_get_mpwqe_offset(struct mlx5e_rq *rq, u16 wqe_ix) static void mlx5e_init_frags_partition(struct mlx5e_rq *rq) { - struct mlx5e_wqe_frag_info next_frag, *prev; + struct mlx5e_wqe_frag_info next_frag = {}; + struct mlx5e_wqe_frag_info *prev = NULL; int i; next_frag.di = &rq->wqe.di[0]; - next_frag.offset = 0; - prev = NULL; for (i = 0; i < mlx5_wq_cyc_get_size(&rq->wqe.wq); i++) { struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0]; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 9f7f8425f6767c94dde33332624123f317cc5959..c8928ce69185f287e3c8a6134229235a5fb013a7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -992,13 +992,13 @@ void mlx5e_tc_encap_flows_del(struct mlx5e_priv *priv, void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe) { struct mlx5e_neigh *m_neigh = &nhe->m_neigh; - u64 bytes, 
packets, lastuse = 0; struct mlx5e_tc_flow *flow; struct mlx5e_encap_entry *e; struct mlx5_fc *counter; struct neigh_table *tbl; bool neigh_used = false; struct neighbour *n; + u64 lastuse; if (m_neigh->family == AF_INET) tbl = &arp_tbl; @@ -1015,7 +1015,7 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe) list_for_each_entry(flow, &e->flows, encap) { if (flow->flags & MLX5E_TC_FLOW_OFFLOADED) { counter = mlx5_flow_rule_counter(flow->rule[0]); - mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse); + lastuse = mlx5_fc_query_lastuse(counter); if (time_after((unsigned long)lastuse, nhe->reported_lastuse)) { neigh_used = true; break; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c index 58af6be13dfa88a2e08d5d85dd54b3fdaeb4e4f6..808ddd732e04473b218dea25c5def432038896d0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c @@ -321,6 +321,11 @@ int mlx5_fc_query(struct mlx5_core_dev *dev, struct mlx5_fc *counter, } EXPORT_SYMBOL(mlx5_fc_query); +u64 mlx5_fc_query_lastuse(struct mlx5_fc *counter) +{ + return counter->cache.lastuse; +} + void mlx5_fc_query_cached(struct mlx5_fc *counter, u64 *bytes, u64 *packets, u64 *lastuse) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c index 5b7fe826414473e51a35560bed7faa7ca048b5f9..db6aafcced0dd4410555e3d6c19fa70a3aff3d49 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c @@ -662,7 +662,9 @@ struct net_device *mlx5_rdma_netdev_alloc(struct mlx5_core_dev *mdev, profile->init(mdev, netdev, profile, ipriv); - mlx5e_attach_netdev(epriv); + err = mlx5e_attach_netdev(epriv); + if (err) + goto detach; netif_carrier_off(netdev); /* set rdma_netdev func pointers */ @@ -678,6 +680,11 @@ struct net_device *mlx5_rdma_netdev_alloc(struct mlx5_core_dev *mdev, return netdev; +detach: + profile->cleanup(epriv); + if (ipriv->sub_interface) + return NULL; + mlx5e_destroy_mdev_resources(mdev); destroy_ht: mlx5i_pkey_qpn_ht_cleanup(netdev); destroy_wq: diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c index 0cab06046e5dc0344391be9d41d0296252021a5e..ee126bcf7c350936e519e3fbd5322d47ac076e03 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c @@ -5032,7 +5032,7 @@ static int __init mlxsw_sp_module_init(void) return 0; err_sp2_pci_driver_register: - mlxsw_pci_driver_unregister(&mlxsw_sp2_pci_driver); + mlxsw_pci_driver_unregister(&mlxsw_sp1_pci_driver); err_sp1_pci_driver_register: mlxsw_core_driver_unregister(&mlxsw_sp2_driver); err_sp2_core_driver_register: diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c index b25048c6c7618e4b2e9fa210bcdfe7516bcb41d2..21296fa7f7fbf52060d51740e9c2247085c4e8d0 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c @@ -408,14 +408,6 @@ static int mlxsw_sp_port_dcb_app_update(struct mlxsw_sp_port *mlxsw_sp_port) have_dscp = mlxsw_sp_port_dcb_app_prio_dscp_map(mlxsw_sp_port, &prio_map); - if (!have_dscp) { - err = mlxsw_sp_port_dcb_toggle_trust(mlxsw_sp_port, - MLXSW_REG_QPTS_TRUST_STATE_PCP); - if (err) - netdev_err(mlxsw_sp_port->dev, "Couldn't switch to trust L2\n"); - 
return err; - } - mlxsw_sp_port_dcb_app_dscp_prio_map(mlxsw_sp_port, default_prio, &dscp_map); err = mlxsw_sp_port_dcb_app_update_qpdpm(mlxsw_sp_port, @@ -432,6 +424,14 @@ static int mlxsw_sp_port_dcb_app_update(struct mlxsw_sp_port *mlxsw_sp_port) return err; } + if (!have_dscp) { + err = mlxsw_sp_port_dcb_toggle_trust(mlxsw_sp_port, + MLXSW_REG_QPTS_TRUST_STATE_PCP); + if (err) + netdev_err(mlxsw_sp_port->dev, "Couldn't switch to trust L2\n"); + return err; + } + err = mlxsw_sp_port_dcb_toggle_trust(mlxsw_sp_port, MLXSW_REG_QPTS_TRUST_STATE_DSCP); if (err) { diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c index 10291198decd61238024769c9e037beabe95db5f..732ba21d3369dcfec7307755889bb562e6e8f809 100644 --- a/drivers/net/ethernet/mscc/ocelot.c +++ b/drivers/net/ethernet/mscc/ocelot.c @@ -1767,6 +1767,7 @@ EXPORT_SYMBOL(ocelot_init); void ocelot_deinit(struct ocelot *ocelot) { + cancel_delayed_work(&ocelot->stats_work); destroy_workqueue(ocelot->stats_queue); mutex_destroy(&ocelot->stats_lock); } diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c index 4dd82a1612aa800a17c665d41d1593a56902722c..a6a9688db307f50750ef5103a3fa612f48b2e94f 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c @@ -3096,6 +3096,7 @@ static void qed_nvm_info_free(struct qed_hwfn *p_hwfn) static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn, void __iomem *p_regview, void __iomem *p_doorbells, + u64 db_phys_addr, enum qed_pci_personality personality) { int rc = 0; @@ -3103,6 +3104,7 @@ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn, /* Split PCI bars evenly between hwfns */ p_hwfn->regview = p_regview; p_hwfn->doorbells = p_doorbells; + p_hwfn->db_phys_addr = db_phys_addr; if (IS_VF(p_hwfn->cdev)) return qed_vf_hw_prepare(p_hwfn); @@ -3198,7 +3200,9 @@ int qed_hw_prepare(struct qed_dev *cdev, /* Initialize the first hwfn - will learn number of hwfns */ rc = qed_hw_prepare_single(p_hwfn, cdev->regview, - cdev->doorbells, personality); + cdev->doorbells, + cdev->db_phys_addr, + personality); if (rc) return rc; @@ -3207,22 +3211,25 @@ int qed_hw_prepare(struct qed_dev *cdev, /* Initialize the rest of the hwfns */ if (cdev->num_hwfns > 1) { void __iomem *p_regview, *p_doorbell; - u8 __iomem *addr; + u64 db_phys_addr; + u32 offset; /* adjust bar offset for second engine */ - addr = cdev->regview + - qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt, - BAR_ID_0) / 2; - p_regview = addr; + offset = qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt, + BAR_ID_0) / 2; + p_regview = cdev->regview + offset; - addr = cdev->doorbells + - qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt, - BAR_ID_1) / 2; - p_doorbell = addr; + offset = qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt, + BAR_ID_1) / 2; + + p_doorbell = cdev->doorbells + offset; + + db_phys_addr = cdev->db_phys_addr + offset; /* prepare second hw function */ rc = qed_hw_prepare_single(&cdev->hwfns[1], p_regview, - p_doorbell, personality); + p_doorbell, db_phys_addr, + personality); /* in case of error, need to free the previously * initiliazed hwfn 0. 
diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.c b/drivers/net/ethernet/qlogic/qed/qed_int.c index b22f464ea3fa770e94327640e32a35972ee35745..f9e475075d3ea249225b47538344403c59294db0 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_int.c +++ b/drivers/net/ethernet/qlogic/qed/qed_int.c @@ -939,7 +939,7 @@ static int qed_int_deassertion(struct qed_hwfn *p_hwfn, snprintf(bit_name, 30, p_aeu->bit_name, num); else - strncpy(bit_name, + strlcpy(bit_name, p_aeu->bit_name, 30); /* We now need to pass bitmask in its diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c index b7471e48db7b19d97628900beceba5e4bc395528..7002a660b6b4c278d130388e4ac59fb7e7cddb93 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c +++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c @@ -2709,6 +2709,8 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn, data.input.rx_num_desc = n_ooo_bufs * 2; data.input.tx_num_desc = data.input.rx_num_desc; data.input.tx_max_bds_per_packet = QED_IWARP_MAX_BDS_PER_FPDU; + data.input.tx_tc = PKT_LB_TC; + data.input.tx_dest = QED_LL2_TX_DEST_LB; data.p_connection_handle = &iwarp_info->ll2_mpa_handle; data.input.secondary_queue = true; data.cbs = &cbs; diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.c b/drivers/net/ethernet/qlogic/qed/qed_rdma.c index 7873d6dfd91f55607a6d60b23b568488edf7d360..909422d9390330c6c133a4d47af5ee2b7a8d5dac 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_rdma.c +++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.c @@ -442,7 +442,7 @@ static void qed_rdma_init_devinfo(struct qed_hwfn *p_hwfn, /* Vendor specific information */ dev->vendor_id = cdev->vendor_id; dev->vendor_part_id = cdev->device_id; - dev->hw_ver = 0; + dev->hw_ver = cdev->chip_rev; dev->fw_ver = (FW_MAJOR_VERSION << 24) | (FW_MINOR_VERSION << 16) | (FW_REVISION_VERSION << 8) | (FW_ENGINEERING_VERSION); @@ -803,7 +803,7 @@ static int qed_rdma_add_user(void *rdma_cxt, dpi_start_offset + ((out_params->dpi) * p_hwfn->dpi_size)); - out_params->dpi_phys_addr = p_hwfn->cdev->db_phys_addr + + out_params->dpi_phys_addr = p_hwfn->db_phys_addr + dpi_start_offset + ((out_params->dpi) * p_hwfn->dpi_size); diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h index 884f1f52dcc25e88713a28978bb9bdaa4bc3a320..70879a3ab567c57004da73656e6c8e49237000de 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h @@ -59,7 +59,7 @@ struct rmnet_map_dl_csum_trailer { struct rmnet_map_ul_csum_header { __be16 csum_start_offset; u16 csum_insert_offset:14; - u16 udp_ip4_ind:1; + u16 udp_ind:1; u16 csum_enabled:1; } __aligned(1); diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c index 57a9c314a665fc8aae9ce94073b9a256fa4973ee..b2090cedd2e965aa1dc901b97b05dae6a537ef30 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c @@ -215,9 +215,9 @@ rmnet_map_ipv4_ul_csum_header(void *iphdr, ul_header->csum_insert_offset = skb->csum_offset; ul_header->csum_enabled = 1; if (ip4h->protocol == IPPROTO_UDP) - ul_header->udp_ip4_ind = 1; + ul_header->udp_ind = 1; else - ul_header->udp_ip4_ind = 0; + ul_header->udp_ind = 0; /* Changing remaining fields to network order */ hdr++; @@ -248,6 +248,7 @@ rmnet_map_ipv6_ul_csum_header(void *ip6hdr, struct rmnet_map_ul_csum_header *ul_header, struct sk_buff *skb) { + struct ipv6hdr *ip6h = (struct 
ipv6hdr *)ip6hdr; __be16 *hdr = (__be16 *)ul_header, offset; offset = htons((__force u16)(skb_transport_header(skb) - @@ -255,7 +256,11 @@ rmnet_map_ipv6_ul_csum_header(void *ip6hdr, ul_header->csum_start_offset = offset; ul_header->csum_insert_offset = skb->csum_offset; ul_header->csum_enabled = 1; - ul_header->udp_ip4_ind = 0; + + if (ip6h->nexthdr == IPPROTO_UDP) + ul_header->udp_ind = 1; + else + ul_header->udp_ind = 0; /* Changing remaining fields to network order */ hdr++; @@ -428,7 +433,7 @@ sw_csum: ul_header->csum_start_offset = 0; ul_header->csum_insert_offset = 0; ul_header->csum_enabled = 0; - ul_header->udp_ip4_ind = 0; + ul_header->udp_ind = 0; priv->stats.csum_sw++; } diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c index 7a50b911b18046c38381ddf0df747693ea432985..0c8b7146637e1537c621108ea40ea5b852c27b67 100644 --- a/drivers/net/ethernet/realtek/r8169.c +++ b/drivers/net/ethernet/realtek/r8169.c @@ -5202,6 +5202,143 @@ static void rtl_hw_start_8411_2(struct rtl8169_private *tp) /* disable aspm and clock request before access ephy */ rtl_hw_aspm_clkreq_enable(tp, false); rtl_ephy_init(tp, e_info_8411_2, ARRAY_SIZE(e_info_8411_2)); + + /* The following Realtek-provided magic fixes an issue with the RX unit + * getting confused after the PHY having been powered-down. + */ + r8168_mac_ocp_write(tp, 0xFC28, 0x0000); + r8168_mac_ocp_write(tp, 0xFC2A, 0x0000); + r8168_mac_ocp_write(tp, 0xFC2C, 0x0000); + r8168_mac_ocp_write(tp, 0xFC2E, 0x0000); + r8168_mac_ocp_write(tp, 0xFC30, 0x0000); + r8168_mac_ocp_write(tp, 0xFC32, 0x0000); + r8168_mac_ocp_write(tp, 0xFC34, 0x0000); + r8168_mac_ocp_write(tp, 0xFC36, 0x0000); + mdelay(3); + r8168_mac_ocp_write(tp, 0xFC26, 0x0000); + + r8168_mac_ocp_write(tp, 0xF800, 0xE008); + r8168_mac_ocp_write(tp, 0xF802, 0xE00A); + r8168_mac_ocp_write(tp, 0xF804, 0xE00C); + r8168_mac_ocp_write(tp, 0xF806, 0xE00E); + r8168_mac_ocp_write(tp, 0xF808, 0xE027); + r8168_mac_ocp_write(tp, 0xF80A, 0xE04F); + r8168_mac_ocp_write(tp, 0xF80C, 0xE05E); + r8168_mac_ocp_write(tp, 0xF80E, 0xE065); + r8168_mac_ocp_write(tp, 0xF810, 0xC602); + r8168_mac_ocp_write(tp, 0xF812, 0xBE00); + r8168_mac_ocp_write(tp, 0xF814, 0x0000); + r8168_mac_ocp_write(tp, 0xF816, 0xC502); + r8168_mac_ocp_write(tp, 0xF818, 0xBD00); + r8168_mac_ocp_write(tp, 0xF81A, 0x074C); + r8168_mac_ocp_write(tp, 0xF81C, 0xC302); + r8168_mac_ocp_write(tp, 0xF81E, 0xBB00); + r8168_mac_ocp_write(tp, 0xF820, 0x080A); + r8168_mac_ocp_write(tp, 0xF822, 0x6420); + r8168_mac_ocp_write(tp, 0xF824, 0x48C2); + r8168_mac_ocp_write(tp, 0xF826, 0x8C20); + r8168_mac_ocp_write(tp, 0xF828, 0xC516); + r8168_mac_ocp_write(tp, 0xF82A, 0x64A4); + r8168_mac_ocp_write(tp, 0xF82C, 0x49C0); + r8168_mac_ocp_write(tp, 0xF82E, 0xF009); + r8168_mac_ocp_write(tp, 0xF830, 0x74A2); + r8168_mac_ocp_write(tp, 0xF832, 0x8CA5); + r8168_mac_ocp_write(tp, 0xF834, 0x74A0); + r8168_mac_ocp_write(tp, 0xF836, 0xC50E); + r8168_mac_ocp_write(tp, 0xF838, 0x9CA2); + r8168_mac_ocp_write(tp, 0xF83A, 0x1C11); + r8168_mac_ocp_write(tp, 0xF83C, 0x9CA0); + r8168_mac_ocp_write(tp, 0xF83E, 0xE006); + r8168_mac_ocp_write(tp, 0xF840, 0x74F8); + r8168_mac_ocp_write(tp, 0xF842, 0x48C4); + r8168_mac_ocp_write(tp, 0xF844, 0x8CF8); + r8168_mac_ocp_write(tp, 0xF846, 0xC404); + r8168_mac_ocp_write(tp, 0xF848, 0xBC00); + r8168_mac_ocp_write(tp, 0xF84A, 0xC403); + r8168_mac_ocp_write(tp, 0xF84C, 0xBC00); + r8168_mac_ocp_write(tp, 0xF84E, 0x0BF2); + r8168_mac_ocp_write(tp, 0xF850, 0x0C0A); + r8168_mac_ocp_write(tp, 0xF852, 0xE434); + 
r8168_mac_ocp_write(tp, 0xF854, 0xD3C0); + r8168_mac_ocp_write(tp, 0xF856, 0x49D9); + r8168_mac_ocp_write(tp, 0xF858, 0xF01F); + r8168_mac_ocp_write(tp, 0xF85A, 0xC526); + r8168_mac_ocp_write(tp, 0xF85C, 0x64A5); + r8168_mac_ocp_write(tp, 0xF85E, 0x1400); + r8168_mac_ocp_write(tp, 0xF860, 0xF007); + r8168_mac_ocp_write(tp, 0xF862, 0x0C01); + r8168_mac_ocp_write(tp, 0xF864, 0x8CA5); + r8168_mac_ocp_write(tp, 0xF866, 0x1C15); + r8168_mac_ocp_write(tp, 0xF868, 0xC51B); + r8168_mac_ocp_write(tp, 0xF86A, 0x9CA0); + r8168_mac_ocp_write(tp, 0xF86C, 0xE013); + r8168_mac_ocp_write(tp, 0xF86E, 0xC519); + r8168_mac_ocp_write(tp, 0xF870, 0x74A0); + r8168_mac_ocp_write(tp, 0xF872, 0x48C4); + r8168_mac_ocp_write(tp, 0xF874, 0x8CA0); + r8168_mac_ocp_write(tp, 0xF876, 0xC516); + r8168_mac_ocp_write(tp, 0xF878, 0x74A4); + r8168_mac_ocp_write(tp, 0xF87A, 0x48C8); + r8168_mac_ocp_write(tp, 0xF87C, 0x48CA); + r8168_mac_ocp_write(tp, 0xF87E, 0x9CA4); + r8168_mac_ocp_write(tp, 0xF880, 0xC512); + r8168_mac_ocp_write(tp, 0xF882, 0x1B00); + r8168_mac_ocp_write(tp, 0xF884, 0x9BA0); + r8168_mac_ocp_write(tp, 0xF886, 0x1B1C); + r8168_mac_ocp_write(tp, 0xF888, 0x483F); + r8168_mac_ocp_write(tp, 0xF88A, 0x9BA2); + r8168_mac_ocp_write(tp, 0xF88C, 0x1B04); + r8168_mac_ocp_write(tp, 0xF88E, 0xC508); + r8168_mac_ocp_write(tp, 0xF890, 0x9BA0); + r8168_mac_ocp_write(tp, 0xF892, 0xC505); + r8168_mac_ocp_write(tp, 0xF894, 0xBD00); + r8168_mac_ocp_write(tp, 0xF896, 0xC502); + r8168_mac_ocp_write(tp, 0xF898, 0xBD00); + r8168_mac_ocp_write(tp, 0xF89A, 0x0300); + r8168_mac_ocp_write(tp, 0xF89C, 0x051E); + r8168_mac_ocp_write(tp, 0xF89E, 0xE434); + r8168_mac_ocp_write(tp, 0xF8A0, 0xE018); + r8168_mac_ocp_write(tp, 0xF8A2, 0xE092); + r8168_mac_ocp_write(tp, 0xF8A4, 0xDE20); + r8168_mac_ocp_write(tp, 0xF8A6, 0xD3C0); + r8168_mac_ocp_write(tp, 0xF8A8, 0xC50F); + r8168_mac_ocp_write(tp, 0xF8AA, 0x76A4); + r8168_mac_ocp_write(tp, 0xF8AC, 0x49E3); + r8168_mac_ocp_write(tp, 0xF8AE, 0xF007); + r8168_mac_ocp_write(tp, 0xF8B0, 0x49C0); + r8168_mac_ocp_write(tp, 0xF8B2, 0xF103); + r8168_mac_ocp_write(tp, 0xF8B4, 0xC607); + r8168_mac_ocp_write(tp, 0xF8B6, 0xBE00); + r8168_mac_ocp_write(tp, 0xF8B8, 0xC606); + r8168_mac_ocp_write(tp, 0xF8BA, 0xBE00); + r8168_mac_ocp_write(tp, 0xF8BC, 0xC602); + r8168_mac_ocp_write(tp, 0xF8BE, 0xBE00); + r8168_mac_ocp_write(tp, 0xF8C0, 0x0C4C); + r8168_mac_ocp_write(tp, 0xF8C2, 0x0C28); + r8168_mac_ocp_write(tp, 0xF8C4, 0x0C2C); + r8168_mac_ocp_write(tp, 0xF8C6, 0xDC00); + r8168_mac_ocp_write(tp, 0xF8C8, 0xC707); + r8168_mac_ocp_write(tp, 0xF8CA, 0x1D00); + r8168_mac_ocp_write(tp, 0xF8CC, 0x8DE2); + r8168_mac_ocp_write(tp, 0xF8CE, 0x48C1); + r8168_mac_ocp_write(tp, 0xF8D0, 0xC502); + r8168_mac_ocp_write(tp, 0xF8D2, 0xBD00); + r8168_mac_ocp_write(tp, 0xF8D4, 0x00AA); + r8168_mac_ocp_write(tp, 0xF8D6, 0xE0C0); + r8168_mac_ocp_write(tp, 0xF8D8, 0xC502); + r8168_mac_ocp_write(tp, 0xF8DA, 0xBD00); + r8168_mac_ocp_write(tp, 0xF8DC, 0x0132); + + r8168_mac_ocp_write(tp, 0xFC26, 0x8000); + + r8168_mac_ocp_write(tp, 0xFC2A, 0x0743); + r8168_mac_ocp_write(tp, 0xFC2C, 0x0801); + r8168_mac_ocp_write(tp, 0xFC2E, 0x0BE9); + r8168_mac_ocp_write(tp, 0xFC30, 0x02FD); + r8168_mac_ocp_write(tp, 0xFC32, 0x0C25); + r8168_mac_ocp_write(tp, 0xFC34, 0x00A9); + r8168_mac_ocp_write(tp, 0xFC36, 0x012D); + rtl_hw_aspm_clkreq_enable(tp, true); } @@ -7102,13 +7239,18 @@ static int rtl_alloc_irq(struct rtl8169_private *tp) { unsigned int flags; - if (tp->mac_version <= RTL_GIGA_MAC_VER_06) { + switch (tp->mac_version) { + case 
RTL_GIGA_MAC_VER_02 ... RTL_GIGA_MAC_VER_06: RTL_W8(tp, Cfg9346, Cfg9346_Unlock); RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable); RTL_W8(tp, Cfg9346, Cfg9346_Lock); + /* fall through */ + case RTL_GIGA_MAC_VER_07 ... RTL_GIGA_MAC_VER_24: flags = PCI_IRQ_LEGACY; - } else { + break; + default: flags = PCI_IRQ_ALL_TYPES; + break; } return pci_alloc_irq_vectors(tp->pci_dev, 1, 1, flags); diff --git a/drivers/net/ethernet/sis/sis900.c b/drivers/net/ethernet/sis/sis900.c index 4bb89f74742c908386410c23640ddbe6462c9cfc..d5bcbc40a55fc1239b20b1c88b044ffdcedcf927 100644 --- a/drivers/net/ethernet/sis/sis900.c +++ b/drivers/net/ethernet/sis/sis900.c @@ -1057,7 +1057,7 @@ sis900_open(struct net_device *net_dev) sis900_set_mode(sis_priv, HW_SPEED_10_MBPS, FDX_CAPABLE_HALF_SELECTED); /* Enable all known interrupts by setting the interrupt mask. */ - sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE); + sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC); sw32(cr, RxENA | sr32(cr)); sw32(ier, IE); @@ -1578,7 +1578,7 @@ static void sis900_tx_timeout(struct net_device *net_dev) sw32(txdp, sis_priv->tx_ring_dma); /* Enable all known interrupts by setting the interrupt mask. */ - sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE); + sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC); } /** @@ -1618,7 +1618,7 @@ sis900_start_xmit(struct sk_buff *skb, struct net_device *net_dev) spin_unlock_irqrestore(&sis_priv->lock, flags); return NETDEV_TX_OK; } - sis_priv->tx_ring[entry].cmdsts = (OWN | skb->len); + sis_priv->tx_ring[entry].cmdsts = (OWN | INTR | skb->len); sw32(cr, TxENA | sr32(cr)); sis_priv->cur_tx ++; @@ -1674,7 +1674,7 @@ static irqreturn_t sis900_interrupt(int irq, void *dev_instance) do { status = sr32(isr); - if ((status & (HIBERR|TxURN|TxERR|TxIDLE|RxORN|RxERR|RxOK)) == 0) + if ((status & (HIBERR|TxURN|TxERR|TxIDLE|TxDESC|RxORN|RxERR|RxOK)) == 0) /* nothing intresting happened */ break; handled = 1; @@ -1684,7 +1684,7 @@ static irqreturn_t sis900_interrupt(int irq, void *dev_instance) /* Rx interrupt */ sis900_rx(net_dev); - if (status & (TxURN | TxERR | TxIDLE)) + if (status & (TxURN | TxERR | TxIDLE | TxDESC)) /* Tx interrupt */ sis900_finish_xmit(net_dev); @@ -1896,8 +1896,8 @@ static void sis900_finish_xmit (struct net_device *net_dev) if (tx_status & OWN) { /* The packet is not transmitted yet (owned by hardware) ! - * Note: the interrupt is generated only when Tx Machine - * is idle, so this is an almost impossible case */ + * Note: this is an almost impossible condition + * in case of TxDESC ('descriptor interrupt') */ break; } @@ -2473,7 +2473,7 @@ static int sis900_resume(struct pci_dev *pci_dev) sis900_set_mode(sis_priv, HW_SPEED_10_MBPS, FDX_CAPABLE_HALF_SELECTED); /* Enable all known interrupts by setting the interrupt mask. 
*/ - sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE); + sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC); sw32(cr, RxENA | sr32(cr)); sw32(ier, IE); diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h index 272b9ca663148f36ccb7ae45363df773f2dd4c4c..b069b3a2453be2330b8790445e88ae32a6e26399 100644 --- a/drivers/net/ethernet/stmicro/stmmac/common.h +++ b/drivers/net/ethernet/stmicro/stmmac/common.h @@ -261,7 +261,7 @@ struct stmmac_safety_stats { #define STMMAC_COAL_TX_TIMER 1000 #define STMMAC_MAX_COAL_TX_TICK 100000 #define STMMAC_TX_MAX_FRAMES 256 -#define STMMAC_TX_FRAMES 25 +#define STMMAC_TX_FRAMES 1 /* Packets types */ enum packets_types { diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c index 49a896a16391948162b9bc748d516dd5d307c5f7..79c91526f3ecce6470a73d0c52fde876508852e2 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c @@ -893,6 +893,11 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv) * address. No need to mask it again. */ reg |= 1 << H3_EPHY_ADDR_SHIFT; + } else { + /* For SoCs without internal PHY the PHY selection bit should be + * set to 0 (external PHY). + */ + reg &= ~H3_EPHY_SELECT; } if (!of_property_read_u32(node, "allwinner,tx-delay-ps", &val)) { diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c index 0877bde6e860b24a4f6003e817ee82a0726f0f11..21d131347e2effb5f94d777a264a72ed79ef813d 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c @@ -216,6 +216,12 @@ static void dwmac1000_set_filter(struct mac_device_info *hw, GMAC_ADDR_LOW(reg)); reg++; } + + while (reg <= perfect_addr_number) { + writel(0, ioaddr + GMAC_ADDR_HIGH(reg)); + writel(0, ioaddr + GMAC_ADDR_LOW(reg)); + reg++; + } } #ifdef FRAME_FILTER_DEBUG diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c index 7e5d5db0d5165b5ff9b65d9e82f229c4ea5c5888..48cf5e2b24417f282fd6be1c7991c9a3a5dccacb 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c @@ -88,6 +88,8 @@ static void dwmac4_rx_queue_priority(struct mac_device_info *hw, u32 value; base_register = (queue < 4) ? GMAC_RXQ_CTRL2 : GMAC_RXQ_CTRL3; + if (queue >= 4) + queue -= 4; value = readl(ioaddr + base_register); @@ -105,6 +107,8 @@ static void dwmac4_tx_queue_priority(struct mac_device_info *hw, u32 value; base_register = (queue < 4) ? 
GMAC_TXQ_PRTY_MAP0 : GMAC_TXQ_PRTY_MAP1; + if (queue >= 4) + queue -= 4; value = readl(ioaddr + base_register); @@ -444,14 +448,20 @@ static void dwmac4_set_filter(struct mac_device_info *hw, * are required */ value |= GMAC_PACKET_FILTER_PR; - } else if (!netdev_uc_empty(dev)) { - int reg = 1; + } else { struct netdev_hw_addr *ha; + int reg = 1; netdev_for_each_uc_addr(ha, dev) { dwmac4_set_umac_addr(hw, ha->addr, reg); reg++; } + + while (reg <= GMAC_MAX_PERFECT_ADDRESSES) { + writel(0, ioaddr + GMAC_ADDR_HIGH(reg)); + writel(0, ioaddr + GMAC_ADDR_LOW(reg)); + reg++; + } } writel(value, ioaddr + GMAC_PACKET_FILTER); @@ -469,8 +479,9 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex, if (fc & FLOW_RX) { pr_debug("\tReceive Flow-Control ON\n"); flow |= GMAC_RX_FLOW_CTRL_RFE; - writel(flow, ioaddr + GMAC_RX_FLOW_CTRL); } + writel(flow, ioaddr + GMAC_RX_FLOW_CTRL); + if (fc & FLOW_TX) { pr_debug("\tTransmit Flow-Control ON\n"); @@ -478,7 +489,7 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex, pr_debug("\tduplex mode: PAUSE %d\n", pause_time); for (queue = 0; queue < tx_cnt; queue++) { - flow |= GMAC_TX_FLOW_CTRL_TFE; + flow = GMAC_TX_FLOW_CTRL_TFE; if (duplex) flow |= @@ -486,6 +497,9 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex, writel(flow, ioaddr + GMAC_QX_TX_FLOW_CTRL(queue)); } + } else { + for (queue = 0; queue < tx_cnt; queue++) + writel(0, ioaddr + GMAC_QX_TX_FLOW_CTRL(queue)); } } diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c index d182f82f7b58608f0aa6e335bf6071562dc8cb6f..870302a7177e2334d19056dd4abc2df066851d16 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c @@ -106,6 +106,8 @@ static void dwxgmac2_rx_queue_prio(struct mac_device_info *hw, u32 prio, u32 value, reg; reg = (queue < 4) ? XGMAC_RXQ_CTRL2 : XGMAC_RXQ_CTRL3; + if (queue >= 4) + queue -= 4; value = readl(ioaddr + reg); value &= ~XGMAC_PSRQ(queue); @@ -169,6 +171,8 @@ static void dwxgmac2_map_mtl_to_dma(struct mac_device_info *hw, u32 queue, u32 value, reg; reg = (queue < 4) ? XGMAC_MTL_RXQ_DMA_MAP0 : XGMAC_MTL_RXQ_DMA_MAP1; + if (queue >= 4) + queue -= 4; value = readl(ioaddr + reg); value &= ~XGMAC_QxMDMACH(queue); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 5c18874614ba1a9bd16e898846c3dacdc544f73f..0101ebaecf028b209fc4bf318bca3d61a5eaf8ab 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -3036,17 +3036,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) /* Manage oversized TCP frames for GMAC4 device */ if (skb_is_gso(skb) && priv->tso) { - if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) { - /* - * There is no way to determine the number of TSO - * capable Queues. Let's use always the Queue 0 - * because if TSO is supported then at least this - * one will be capable. 
- */ - skb_set_queue_mapping(skb, 0); - + if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) return stmmac_tso_xmit(skb, dev); - } } if (unlikely(stmmac_tx_avail(priv, queue) < nfrags + 1)) { @@ -3855,6 +3846,23 @@ static int stmmac_setup_tc(struct net_device *ndev, enum tc_setup_type type, } } +static u16 stmmac_select_queue(struct net_device *dev, struct sk_buff *skb, + struct net_device *sb_dev, + select_queue_fallback_t fallback) +{ + if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) { + /* + * There is no way to determine the number of TSO + * capable Queues. Let's use always the Queue 0 + * because if TSO is supported then at least this + * one will be capable. + */ + return 0; + } + + return fallback(dev, skb, NULL) % dev->real_num_tx_queues; +} + static int stmmac_set_mac_address(struct net_device *ndev, void *addr) { struct stmmac_priv *priv = netdev_priv(ndev); @@ -4097,6 +4105,7 @@ static const struct net_device_ops stmmac_netdev_ops = { .ndo_tx_timeout = stmmac_tx_timeout, .ndo_do_ioctl = stmmac_ioctl, .ndo_setup_tc = stmmac_setup_tc, + .ndo_select_queue = stmmac_select_queue, #ifdef CONFIG_NET_POLL_CONTROLLER .ndo_poll_controller = stmmac_poll_controller, #endif diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c index 58ea18af9813ab950b1252cde08d626abe61c30e..37c0bc699cd9ca80dbfdb77e7893abaad150fe62 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c @@ -37,7 +37,7 @@ static struct stmmac_tc_entry *tc_find_entry(struct stmmac_priv *priv, entry = &priv->tc_entries[i]; if (!entry->in_use && !first && free) first = entry; - if (entry->handle == loc && !free) + if ((entry->handle == loc) && !free && !entry->is_frag) dup = entry; } diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c index 7cfd7ff38e86f04cf56f9465b1865c35a3ee465f..66b30ebd45ee8fcea48636ed19ba5db2a54e9c09 100644 --- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c +++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c @@ -614,6 +614,10 @@ static void axienet_start_xmit_done(struct net_device *ndev) ndev->stats.tx_packets += packets; ndev->stats.tx_bytes += size; + + /* Matches barrier in axienet_start_xmit */ + smp_mb(); + netif_wake_queue(ndev); } @@ -668,9 +672,19 @@ static int axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev) cur_p = &lp->tx_bd_v[lp->tx_bd_tail]; if (axienet_check_tx_bd_space(lp, num_frag)) { - if (!netif_queue_stopped(ndev)) - netif_stop_queue(ndev); - return NETDEV_TX_BUSY; + if (netif_queue_stopped(ndev)) + return NETDEV_TX_BUSY; + + netif_stop_queue(ndev); + + /* Matches barrier in axienet_start_xmit_done */ + smp_mb(); + + /* Space might have just been freed - check again */ + if (axienet_check_tx_bd_space(lp, num_frag)) + return NETDEV_TX_BUSY; + + netif_wake_queue(ndev); } if (skb->ip_summed == CHECKSUM_PARTIAL) { diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c index 7a145172d50385f56f54e4b114ab6194fd672ca5..d178d5bad7e48aa0f8744a213d7660813290f978 100644 --- a/drivers/net/gtp.c +++ b/drivers/net/gtp.c @@ -289,16 +289,29 @@ static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) return gtp_rx(pctx, skb, hdrlen, gtp->role); } -static void gtp_encap_destroy(struct sock *sk) +static void __gtp_encap_destroy(struct sock *sk) { struct gtp_dev *gtp; - gtp = rcu_dereference_sk_user_data(sk); + lock_sock(sk); + gtp = sk->sk_user_data; if (gtp) { + 
if (gtp->sk0 == sk) + gtp->sk0 = NULL; + else + gtp->sk1u = NULL; udp_sk(sk)->encap_type = 0; rcu_assign_sk_user_data(sk, NULL); sock_put(sk); } + release_sock(sk); +} + +static void gtp_encap_destroy(struct sock *sk) +{ + rtnl_lock(); + __gtp_encap_destroy(sk); + rtnl_unlock(); } static void gtp_encap_disable_sock(struct sock *sk) @@ -306,7 +319,7 @@ static void gtp_encap_disable_sock(struct sock *sk) if (!sk) return; - gtp_encap_destroy(sk); + __gtp_encap_destroy(sk); } static void gtp_encap_disable(struct gtp_dev *gtp) @@ -800,7 +813,8 @@ static struct sock *gtp_encap_enable_socket(int fd, int type, goto out_sock; } - if (rcu_dereference_sk_user_data(sock->sk)) { + lock_sock(sock->sk); + if (sock->sk->sk_user_data) { sk = ERR_PTR(-EBUSY); goto out_sock; } @@ -816,6 +830,7 @@ static struct sock *gtp_encap_enable_socket(int fd, int type, setup_udp_tunnel_sock(sock_net(sock->sk), sock, &tuncfg); out_sock: + release_sock(sock->sk); sockfd_put(sock); return sk; } @@ -847,8 +862,13 @@ static int gtp_encap_enable(struct gtp_dev *gtp, struct nlattr *data[]) if (data[IFLA_GTP_ROLE]) { role = nla_get_u32(data[IFLA_GTP_ROLE]); - if (role > GTP_ROLE_SGSN) + if (role > GTP_ROLE_SGSN) { + if (sk0) + gtp_encap_disable_sock(sk0); + if (sk1u) + gtp_encap_disable_sock(sk1u); return -EINVAL; + } } gtp->sk0 = sk0; @@ -949,7 +969,7 @@ static int ipv4_pdp_add(struct gtp_dev *gtp, struct sock *sk, } - pctx = kmalloc(sizeof(struct pdp_ctx), GFP_KERNEL); + pctx = kmalloc(sizeof(*pctx), GFP_ATOMIC); if (pctx == NULL) return -ENOMEM; @@ -1038,6 +1058,7 @@ static int gtp_genl_new_pdp(struct sk_buff *skb, struct genl_info *info) return -EINVAL; } + rtnl_lock(); rcu_read_lock(); gtp = gtp_find_dev(sock_net(skb->sk), info->attrs); @@ -1062,6 +1083,7 @@ static int gtp_genl_new_pdp(struct sk_buff *skb, struct genl_info *info) out_unlock: rcu_read_unlock(); + rtnl_unlock(); return err; } @@ -1363,9 +1385,9 @@ late_initcall(gtp_init); static void __exit gtp_fini(void) { - unregister_pernet_subsys(>p_net_ops); genl_unregister_family(>p_genl_family); rtnl_link_unregister(>p_link_ops); + unregister_pernet_subsys(>p_net_ops); pr_info("GTP module unloaded\n"); } diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c index cf6b9b1771f12cff597648423511b5f146440be5..cc60ef9634db248c2ef52f648a252bcc5bb8daf6 100644 --- a/drivers/net/hyperv/netvsc_drv.c +++ b/drivers/net/hyperv/netvsc_drv.c @@ -847,7 +847,6 @@ int netvsc_recv_callback(struct net_device *net, csum_info, vlan, data, len); if (unlikely(!skb)) { ++net_device_ctx->eth_stats.rx_no_memory; - rcu_read_unlock(); return NVSP_STAT_FAIL; } diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index 7de88b33d5b96d7f18a5f7c242a54c935b587086..2c971357e66cff7e61b8595757cc59d200badb8b 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -869,6 +869,7 @@ static void macsec_reset_skb(struct sk_buff *skb, struct net_device *dev) static void macsec_finalize_skb(struct sk_buff *skb, u8 icv_len, u8 hdr_len) { + skb->ip_summed = CHECKSUM_NONE; memmove(skb->data + hdr_len, skb->data, 2 * ETH_ALEN); skb_pull(skb, hdr_len); pskb_trim_unique(skb, skb->len - icv_len); @@ -1103,10 +1104,9 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb) } skb = skb_unshare(skb, GFP_ATOMIC); - if (!skb) { - *pskb = NULL; + *pskb = skb; + if (!skb) return RX_HANDLER_CONSUMED; - } pulled_sci = pskb_may_pull(skb, macsec_extra_len(true)); if (!pulled_sci) { diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c index 
8a96d985a52fda01eaec0c214a3754dfc0c5dc6f..6144146aec29d01d8fec7ad1149f617da2edcbdc 100644 --- a/drivers/net/phy/phy_device.c +++ b/drivers/net/phy/phy_device.c @@ -757,6 +757,9 @@ int phy_connect_direct(struct net_device *dev, struct phy_device *phydev, { int rc; + if (!dev) + return -EINVAL; + rc = phy_attach_direct(dev, phydev, phydev->dev_flags, interface); if (rc) return rc; @@ -1098,6 +1101,9 @@ struct phy_device *phy_attach(struct net_device *dev, const char *bus_id, struct device *d; int rc; + if (!dev) + return ERR_PTR(-EINVAL); + /* Search the list of PHY devices on the mdio bus for the * PHY with the requested name */ diff --git a/drivers/net/phy/phy_led_triggers.c b/drivers/net/phy/phy_led_triggers.c index 491efc1bf5c4894a3c3a8b8d67d236f68205efa4..7278eca70f9f36db2c837f9640526df6b6bf53db 100644 --- a/drivers/net/phy/phy_led_triggers.c +++ b/drivers/net/phy/phy_led_triggers.c @@ -58,8 +58,9 @@ void phy_led_trigger_change_speed(struct phy_device *phy) if (!phy->last_triggered) led_trigger_event(&phy->led_link_trigger->trigger, LED_FULL); + else + led_trigger_event(&phy->last_triggered->trigger, LED_OFF); - led_trigger_event(&phy->last_triggered->trigger, LED_OFF); led_trigger_event(&plt->trigger, LED_FULL); phy->last_triggered = plt; } diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c index e029c7977a562db9247897b35ae476569cba9f72..2e8056d48f4a0622e68d34494316fb70c885c524 100644 --- a/drivers/net/phy/phylink.c +++ b/drivers/net/phy/phylink.c @@ -226,6 +226,8 @@ static int phylink_parse_fixedlink(struct phylink *pl, __ETHTOOL_LINK_MODE_MASK_NBITS, true); linkmode_zero(pl->supported); phylink_set(pl->supported, MII); + phylink_set(pl->supported, Pause); + phylink_set(pl->supported, Asym_Pause); if (s) { __set_bit(s->bit, pl->supported); } else { diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c index 8807a806cc47f304bb3c5197dba54e2935cdee14..998d08ae7431aef493ae70063268e7d2a0ebc53a 100644 --- a/drivers/net/phy/sfp.c +++ b/drivers/net/phy/sfp.c @@ -185,10 +185,11 @@ struct sfp { struct gpio_desc *gpio[GPIO_MAX]; bool attached; + struct mutex st_mutex; /* Protects state */ unsigned int state; struct delayed_work poll; struct delayed_work timeout; - struct mutex sm_mutex; + struct mutex sm_mutex; /* Protects state machine */ unsigned char sm_mod_state; unsigned char sm_dev_state; unsigned short sm_state; @@ -513,7 +514,7 @@ static int sfp_hwmon_read_sensor(struct sfp *sfp, int reg, long *value) static void sfp_hwmon_to_rx_power(long *value) { - *value = DIV_ROUND_CLOSEST(*value, 100); + *value = DIV_ROUND_CLOSEST(*value, 10); } static void sfp_hwmon_calibrate(struct sfp *sfp, unsigned int slope, int offset, @@ -1718,6 +1719,7 @@ static void sfp_check_state(struct sfp *sfp) { unsigned int state, i, changed; + mutex_lock(&sfp->st_mutex); state = sfp_get_state(sfp); changed = state ^ sfp->state; changed &= SFP_F_PRESENT | SFP_F_LOS | SFP_F_TX_FAULT; @@ -1743,6 +1745,7 @@ static void sfp_check_state(struct sfp *sfp) sfp_sm_event(sfp, state & SFP_F_LOS ? 
SFP_E_LOS_HIGH : SFP_E_LOS_LOW); rtnl_unlock(); + mutex_unlock(&sfp->st_mutex); } static irqreturn_t sfp_irq(int irq, void *data) @@ -1773,6 +1776,7 @@ static struct sfp *sfp_alloc(struct device *dev) sfp->dev = dev; mutex_init(&sfp->sm_mutex); + mutex_init(&sfp->st_mutex); INIT_DELAYED_WORK(&sfp->poll, sfp_poll); INIT_DELAYED_WORK(&sfp->timeout, sfp_timeout); diff --git a/drivers/net/ppp/ppp_mppe.c b/drivers/net/ppp/ppp_mppe.c index a205750b431ba5565df39a3d7d94ac409513fd9e..8609c1a0777b21040f0a8f0127b2bde65fe299ad 100644 --- a/drivers/net/ppp/ppp_mppe.c +++ b/drivers/net/ppp/ppp_mppe.c @@ -63,6 +63,7 @@ MODULE_AUTHOR("Frank Cusack "); MODULE_DESCRIPTION("Point-to-Point Protocol Microsoft Point-to-Point Encryption support"); MODULE_LICENSE("Dual BSD/GPL"); MODULE_ALIAS("ppp-compress-" __stringify(CI_MPPE)); +MODULE_SOFTDEP("pre: arc4"); MODULE_VERSION("1.0.2"); static unsigned int diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c index f22639f0116a4268644e4e8af911d5791b21962f..c04f3dc17d76f1c6d476faae5542975ef1c2707b 100644 --- a/drivers/net/ppp/pppoe.c +++ b/drivers/net/ppp/pppoe.c @@ -1120,6 +1120,9 @@ static const struct proto_ops pppoe_ops = { .recvmsg = pppoe_recvmsg, .mmap = sock_no_mmap, .ioctl = pppox_ioctl, +#ifdef CONFIG_COMPAT + .compat_ioctl = pppox_compat_ioctl, +#endif }; static const struct pppox_proto pppoe_proto = { diff --git a/drivers/net/ppp/pppox.c b/drivers/net/ppp/pppox.c index c0599b3b23c06b179c843562dc7026f3e7d2f8df..9128e42e33e74f5f744fa4b9564cb31d2394d71b 100644 --- a/drivers/net/ppp/pppox.c +++ b/drivers/net/ppp/pppox.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -103,6 +104,18 @@ int pppox_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg) EXPORT_SYMBOL(pppox_ioctl); +#ifdef CONFIG_COMPAT +int pppox_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg) +{ + if (cmd == PPPOEIOCSFWD32) + cmd = PPPOEIOCSFWD; + + return pppox_ioctl(sock, cmd, (unsigned long)compat_ptr(arg)); +} + +EXPORT_SYMBOL(pppox_compat_ioctl); +#endif + static int pppox_create(struct net *net, struct socket *sock, int protocol, int kern) { diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c index 7321a4eca2354005f2704742f0324283f24d21cb..9ad3ff40a563f2428ac710895f8a5b24824b15a2 100644 --- a/drivers/net/ppp/pptp.c +++ b/drivers/net/ppp/pptp.c @@ -633,6 +633,9 @@ static const struct proto_ops pptp_ops = { .recvmsg = sock_no_recvmsg, .mmap = sock_no_mmap, .ioctl = pppox_ioctl, +#ifdef CONFIG_COMPAT + .compat_ioctl = pppox_compat_ioctl, +#endif }; static const struct pppox_proto pppox_pptp_proto = { diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c index dc30f11f47664641704a9af6a9d61760cbf4ef98..3feb49badda9c7e1a792e84b03f6b07f40d71f39 100644 --- a/drivers/net/team/team.c +++ b/drivers/net/team/team.c @@ -1011,6 +1011,8 @@ static void __team_compute_features(struct team *team) team->dev->vlan_features = vlan_features; team->dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL | + NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_STAG_TX | NETIF_F_GSO_UDP_L4; team->dev->hard_header_len = max_hard_header_len; diff --git a/drivers/net/tun.c b/drivers/net/tun.c index b67fee56ec81b6462095ae3480bd4e323b2e92c3..5fa7047ea36187f88897c8cb587c4ee00603b393 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1682,6 +1682,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, skb_reserve(skb, pad - delta); skb_put(skb, len); + skb_set_owner_w(skb, tfile->socket.sk); 
get_page(alloc_frag->page); alloc_frag->offset += buflen; diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c index 3d93993e74da09abfa63252247c680be069401d3..2eca4168af2f0a658fcb2b1b0584653ac82e9efe 100644 --- a/drivers/net/usb/asix_devices.c +++ b/drivers/net/usb/asix_devices.c @@ -238,7 +238,7 @@ static void asix_phy_reset(struct usbnet *dev, unsigned int reset_bits) static int ax88172_bind(struct usbnet *dev, struct usb_interface *intf) { int ret = 0; - u8 buf[ETH_ALEN]; + u8 buf[ETH_ALEN] = {0}; int i; unsigned long gpio_bits = dev->driver_info->data; @@ -689,7 +689,7 @@ static int asix_resume(struct usb_interface *intf) static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf) { int ret, i; - u8 buf[ETH_ALEN], chipcode = 0; + u8 buf[ETH_ALEN] = {0}, chipcode = 0; u32 phyid; struct asix_common_private *priv; @@ -1073,7 +1073,7 @@ static const struct net_device_ops ax88178_netdev_ops = { static int ax88178_bind(struct usbnet *dev, struct usb_interface *intf) { int ret; - u8 buf[ETH_ALEN]; + u8 buf[ETH_ALEN] = {0}; usbnet_get_endpoints(dev,intf); diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c index f4247b275e0901a54ebf0b36d67adaf708bf6950..b7a0df95d4b0fe760b2f8a4976b5e4fc06003668 100644 --- a/drivers/net/usb/pegasus.c +++ b/drivers/net/usb/pegasus.c @@ -285,7 +285,7 @@ static void mdio_write(struct net_device *dev, int phy_id, int loc, int val) static int read_eprom_word(pegasus_t *pegasus, __u8 index, __u16 *retdata) { int i; - __u8 tmp; + __u8 tmp = 0; __le16 retdatai; int ret; diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c index 128c8a327d8ee4ec192ef1b160d8504565c0f287..51017c6bb3bcb6351294ce8a20eb67d89ba9ad01 100644 --- a/drivers/net/usb/qmi_wwan.c +++ b/drivers/net/usb/qmi_wwan.c @@ -1231,6 +1231,7 @@ static const struct usb_device_id products[] = { {QMI_FIXED_INTF(0x2001, 0x7e35, 4)}, /* D-Link DWM-222 */ {QMI_FIXED_INTF(0x2020, 0x2031, 4)}, /* Olicard 600 */ {QMI_FIXED_INTF(0x2020, 0x2033, 4)}, /* BroadMobi BM806U */ + {QMI_FIXED_INTF(0x2020, 0x2060, 4)}, /* BroadMobi BM818 */ {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */ diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c index 449fc52f9a89e9452793383f03b809002a7f1989..9f895083bc0aad15a2cadfca2fba43d3406f12a8 100644 --- a/drivers/net/vrf.c +++ b/drivers/net/vrf.c @@ -169,23 +169,29 @@ static int vrf_ip6_local_out(struct net *net, struct sock *sk, static netdev_tx_t vrf_process_v6_outbound(struct sk_buff *skb, struct net_device *dev) { - const struct ipv6hdr *iph = ipv6_hdr(skb); + const struct ipv6hdr *iph; struct net *net = dev_net(skb->dev); - struct flowi6 fl6 = { - /* needed to match OIF rule */ - .flowi6_oif = dev->ifindex, - .flowi6_iif = LOOPBACK_IFINDEX, - .daddr = iph->daddr, - .saddr = iph->saddr, - .flowlabel = ip6_flowinfo(iph), - .flowi6_mark = skb->mark, - .flowi6_proto = iph->nexthdr, - .flowi6_flags = FLOWI_FLAG_SKIP_NH_OIF, - }; + struct flowi6 fl6; int ret = NET_XMIT_DROP; struct dst_entry *dst; struct dst_entry *dst_null = &net->ipv6.ip6_null_entry->dst; + if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct ipv6hdr))) + goto err; + + iph = ipv6_hdr(skb); + + memset(&fl6, 0, sizeof(fl6)); + /* needed to match OIF rule */ + fl6.flowi6_oif = dev->ifindex; + fl6.flowi6_iif = LOOPBACK_IFINDEX; + fl6.daddr = iph->daddr; + fl6.saddr = iph->saddr; + fl6.flowlabel = 
ip6_flowinfo(iph); + fl6.flowi6_mark = skb->mark; + fl6.flowi6_proto = iph->nexthdr; + fl6.flowi6_flags = FLOWI_FLAG_SKIP_NH_OIF; + dst = ip6_route_output(net, NULL, &fl6); if (dst == dst_null) goto err; @@ -241,21 +247,27 @@ static int vrf_ip_local_out(struct net *net, struct sock *sk, static netdev_tx_t vrf_process_v4_outbound(struct sk_buff *skb, struct net_device *vrf_dev) { - struct iphdr *ip4h = ip_hdr(skb); + struct iphdr *ip4h; int ret = NET_XMIT_DROP; - struct flowi4 fl4 = { - /* needed to match OIF rule */ - .flowi4_oif = vrf_dev->ifindex, - .flowi4_iif = LOOPBACK_IFINDEX, - .flowi4_tos = RT_TOS(ip4h->tos), - .flowi4_flags = FLOWI_FLAG_ANYSRC | FLOWI_FLAG_SKIP_NH_OIF, - .flowi4_proto = ip4h->protocol, - .daddr = ip4h->daddr, - .saddr = ip4h->saddr, - }; + struct flowi4 fl4; struct net *net = dev_net(vrf_dev); struct rtable *rt; + if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct iphdr))) + goto err; + + ip4h = ip_hdr(skb); + + memset(&fl4, 0, sizeof(fl4)); + /* needed to match OIF rule */ + fl4.flowi4_oif = vrf_dev->ifindex; + fl4.flowi4_iif = LOOPBACK_IFINDEX; + fl4.flowi4_tos = RT_TOS(ip4h->tos); + fl4.flowi4_flags = FLOWI_FLAG_ANYSRC | FLOWI_FLAG_SKIP_NH_OIF; + fl4.flowi4_proto = ip4h->protocol; + fl4.daddr = ip4h->daddr; + fl4.saddr = ip4h->saddr; + rt = ip_route_output_flow(net, &fl4, NULL); if (IS_ERR(rt)) goto err; diff --git a/drivers/net/wireless/ath/ath10k/hw.c b/drivers/net/wireless/ath/ath10k/hw.c index 677535b3d2070eea5d20983b89c9aba44be4a829..476e0535f06f0ca6745b0f43bd6c3a5fd78247fe 100644 --- a/drivers/net/wireless/ath/ath10k/hw.c +++ b/drivers/net/wireless/ath/ath10k/hw.c @@ -168,7 +168,7 @@ const struct ath10k_hw_values qca6174_values = { }; const struct ath10k_hw_values qca99x0_values = { - .rtc_state_val_on = 5, + .rtc_state_val_on = 7, .ce_count = 12, .msi_assign_ce_max = 12, .num_target_ce_config_wlan = 10, diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c index f3b1cfacfe9dfcb1a4bfb0ae9438ef3406c2a9cf..1419f9d1505fe2e9291eebdc81a1ac1e6e1176d2 100644 --- a/drivers/net/wireless/ath/ath10k/mac.c +++ b/drivers/net/wireless/ath/ath10k/mac.c @@ -1624,6 +1624,10 @@ static int ath10k_mac_setup_prb_tmpl(struct ath10k_vif *arvif) if (arvif->vdev_type != WMI_VDEV_TYPE_AP) return 0; + /* For mesh, probe response and beacon share the same template */ + if (ieee80211_vif_is_mesh(vif)) + return 0; + prb = ieee80211_proberesp_get(hw, vif); if (!prb) { ath10k_warn(ar, "failed to get probe resp template from mac80211\n"); diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c index 7f61591ce0de6b5c904fde8261c9a6689a587f5a..686759b5613f2c7f05685a8911c1f3a882e55732 100644 --- a/drivers/net/wireless/ath/ath10k/sdio.c +++ b/drivers/net/wireless/ath/ath10k/sdio.c @@ -613,6 +613,10 @@ static int ath10k_sdio_mbox_rx_alloc(struct ath10k *ar, full_len, last_in_bundle, last_in_bundle); + if (ret) { + ath10k_warn(ar, "alloc_rx_pkt error %d\n", ret); + goto err; + } } ar_sdio->n_rx_pkts = i; @@ -2069,6 +2073,9 @@ static void ath10k_sdio_remove(struct sdio_func *func) cancel_work_sync(&ar_sdio->wr_async_work); ath10k_core_unregister(ar); ath10k_core_destroy(ar); + + flush_workqueue(ar_sdio->workqueue); + destroy_workqueue(ar_sdio->workqueue); } static const struct sdio_device_id ath10k_sdio_devices[] = { diff --git a/drivers/net/wireless/ath/ath10k/txrx.c b/drivers/net/wireless/ath/ath10k/txrx.c index cda164f6e9f62f87e36c40c96f5fb7586d87a94c..6f62ddc0494c346b6718cb3bea872feafe0d74e1 100644 --- 
a/drivers/net/wireless/ath/ath10k/txrx.c +++ b/drivers/net/wireless/ath/ath10k/txrx.c @@ -156,6 +156,9 @@ struct ath10k_peer *ath10k_peer_find_by_id(struct ath10k *ar, int peer_id) { struct ath10k_peer *peer; + if (peer_id >= BITS_PER_TYPE(peer->peer_ids)) + return NULL; + lockdep_assert_held(&ar->data_lock); list_for_each_entry(peer, &ar->peers, list) diff --git a/drivers/net/wireless/ath/ath10k/usb.c b/drivers/net/wireless/ath/ath10k/usb.c index d4803ff5a78a75eb7e2d63bc77a1623f146e283e..f09a4ad2e9de71c25973ae553b94d0086606bfe0 100644 --- a/drivers/net/wireless/ath/ath10k/usb.c +++ b/drivers/net/wireless/ath/ath10k/usb.c @@ -1025,7 +1025,7 @@ static int ath10k_usb_probe(struct usb_interface *interface, } /* TODO: remove this once USB support is fully implemented */ - ath10k_warn(ar, "WARNING: ath10k USB support is incomplete, don't expect anything to work!\n"); + ath10k_warn(ar, "Warning: ath10k USB support is incomplete, don't expect anything to work!\n"); return 0; diff --git a/drivers/net/wireless/ath/ath6kl/wmi.c b/drivers/net/wireless/ath/ath6kl/wmi.c index 777acc564ac9917d331de8f2ef87657199ce482a..bc7916f2add0971adb18b7fa3501b43a6a8bd7bf 100644 --- a/drivers/net/wireless/ath/ath6kl/wmi.c +++ b/drivers/net/wireless/ath/ath6kl/wmi.c @@ -1178,6 +1178,10 @@ static int ath6kl_wmi_pstream_timeout_event_rx(struct wmi *wmi, u8 *datap, return -EINVAL; ev = (struct wmi_pstream_timeout_event *) datap; + if (ev->traffic_class >= WMM_NUM_AC) { + ath6kl_err("invalid traffic class: %d\n", ev->traffic_class); + return -EINVAL; + } /* * When the pstream (fat pipe == AC) timesout, it means there were @@ -1519,6 +1523,10 @@ static int ath6kl_wmi_cac_event_rx(struct wmi *wmi, u8 *datap, int len, return -EINVAL; reply = (struct wmi_cac_event *) datap; + if (reply->ac >= WMM_NUM_AC) { + ath6kl_err("invalid AC: %d\n", reply->ac); + return -EINVAL; + } if ((reply->cac_indication == CAC_INDICATION_ADMISSION_RESP) && (reply->status_code != IEEE80211_TSPEC_STATUS_ADMISS_ACCEPTED)) { @@ -2635,7 +2643,7 @@ int ath6kl_wmi_delete_pstream_cmd(struct wmi *wmi, u8 if_idx, u8 traffic_class, u16 active_tsids = 0; int ret; - if (traffic_class > 3) { + if (traffic_class >= WMM_NUM_AC) { ath6kl_err("invalid traffic class: %d\n", traffic_class); return -EINVAL; } diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c index bb319f22761fbe46379837ed21dee4f343475dc5..b4f7ee423d4072f1a3baaf6ac11b548745de7dc8 100644 --- a/drivers/net/wireless/ath/ath9k/hw.c +++ b/drivers/net/wireless/ath/ath9k/hw.c @@ -252,8 +252,9 @@ void ath9k_hw_get_channel_centers(struct ath_hw *ah, /* Chip Revisions */ /******************/ -static void ath9k_hw_read_revisions(struct ath_hw *ah) +static bool ath9k_hw_read_revisions(struct ath_hw *ah) { + u32 srev; u32 val; if (ah->get_mac_revision) @@ -269,25 +270,33 @@ static void ath9k_hw_read_revisions(struct ath_hw *ah) val = REG_READ(ah, AR_SREV); ah->hw_version.macRev = MS(val, AR_SREV_REVISION2); } - return; + return true; case AR9300_DEVID_AR9340: ah->hw_version.macVersion = AR_SREV_VERSION_9340; - return; + return true; case AR9300_DEVID_QCA955X: ah->hw_version.macVersion = AR_SREV_VERSION_9550; - return; + return true; case AR9300_DEVID_AR953X: ah->hw_version.macVersion = AR_SREV_VERSION_9531; - return; + return true; case AR9300_DEVID_QCA956X: ah->hw_version.macVersion = AR_SREV_VERSION_9561; - return; + return true; } - val = REG_READ(ah, AR_SREV) & AR_SREV_ID; + srev = REG_READ(ah, AR_SREV); + + if (srev == -EIO) { + ath_err(ath9k_hw_common(ah), + "Failed to 
read SREV register"); + return false; + } + + val = srev & AR_SREV_ID; if (val == 0xFF) { - val = REG_READ(ah, AR_SREV); + val = srev; ah->hw_version.macVersion = (val & AR_SREV_VERSION2) >> AR_SREV_TYPE2_S; ah->hw_version.macRev = MS(val, AR_SREV_REVISION2); @@ -306,6 +315,8 @@ static void ath9k_hw_read_revisions(struct ath_hw *ah) if (ah->hw_version.macVersion == AR_SREV_VERSION_5416_PCIE) ah->is_pciexpress = true; } + + return true; } /************************************/ @@ -559,7 +570,10 @@ static int __ath9k_hw_init(struct ath_hw *ah) struct ath_common *common = ath9k_hw_common(ah); int r = 0; - ath9k_hw_read_revisions(ah); + if (!ath9k_hw_read_revisions(ah)) { + ath_err(common, "Could not read hardware revisions"); + return -EOPNOTSUPP; + } switch (ah->hw_version.macVersion) { case AR_SREV_VERSION_5416_PCI: diff --git a/drivers/net/wireless/ath/dfs_pattern_detector.c b/drivers/net/wireless/ath/dfs_pattern_detector.c index d52b31b45df7d1fd20f301573e7e5f0379df02b5..a274eb0d19688f8c8604647d03c38483ce07d923 100644 --- a/drivers/net/wireless/ath/dfs_pattern_detector.c +++ b/drivers/net/wireless/ath/dfs_pattern_detector.c @@ -111,7 +111,7 @@ static const struct radar_detector_specs jp_radar_ref_types[] = { JP_PATTERN(0, 0, 1, 1428, 1428, 1, 18, 29, false), JP_PATTERN(1, 2, 3, 3846, 3846, 1, 18, 29, false), JP_PATTERN(2, 0, 1, 1388, 1388, 1, 18, 50, false), - JP_PATTERN(3, 1, 2, 4000, 4000, 1, 18, 50, false), + JP_PATTERN(3, 0, 4, 4000, 4000, 1, 18, 50, false), JP_PATTERN(4, 0, 5, 150, 230, 1, 23, 50, false), JP_PATTERN(5, 6, 10, 200, 500, 1, 16, 50, false), JP_PATTERN(6, 11, 20, 200, 500, 1, 12, 50, false), diff --git a/drivers/net/wireless/ath/wil6210/interrupt.c b/drivers/net/wireless/ath/wil6210/interrupt.c index 5d287a8e1b458a8aca674275a55c007df944e4c3..0655cd8845142c244a260252d01b3d74e003d062 100644 --- a/drivers/net/wireless/ath/wil6210/interrupt.c +++ b/drivers/net/wireless/ath/wil6210/interrupt.c @@ -296,21 +296,24 @@ void wil_configure_interrupt_moderation(struct wil6210_priv *wil) static irqreturn_t wil6210_irq_rx(int irq, void *cookie) { struct wil6210_priv *wil = cookie; - u32 isr = wil_ioread32_and_clear(wil->csr + - HOSTADDR(RGF_DMA_EP_RX_ICR) + - offsetof(struct RGF_ICR, ICR)); + u32 isr; bool need_unmask = true; + wil6210_mask_irq_rx(wil); + + isr = wil_ioread32_and_clear(wil->csr + + HOSTADDR(RGF_DMA_EP_RX_ICR) + + offsetof(struct RGF_ICR, ICR)); + trace_wil6210_irq_rx(isr); wil_dbg_irq(wil, "ISR RX 0x%08x\n", isr); if (unlikely(!isr)) { wil_err_ratelimited(wil, "spurious IRQ: RX\n"); + wil6210_unmask_irq_rx(wil); return IRQ_NONE; } - wil6210_mask_irq_rx(wil); - /* RX_DONE and RX_HTRSH interrupts are the same if interrupt * moderation is not used. Interrupt moderation may cause RX * buffer overflow while RX_DONE is delayed. 
The required @@ -355,21 +358,24 @@ static irqreturn_t wil6210_irq_rx(int irq, void *cookie) static irqreturn_t wil6210_irq_rx_edma(int irq, void *cookie) { struct wil6210_priv *wil = cookie; - u32 isr = wil_ioread32_and_clear(wil->csr + - HOSTADDR(RGF_INT_GEN_RX_ICR) + - offsetof(struct RGF_ICR, ICR)); + u32 isr; bool need_unmask = true; + wil6210_mask_irq_rx_edma(wil); + + isr = wil_ioread32_and_clear(wil->csr + + HOSTADDR(RGF_INT_GEN_RX_ICR) + + offsetof(struct RGF_ICR, ICR)); + trace_wil6210_irq_rx(isr); wil_dbg_irq(wil, "ISR RX 0x%08x\n", isr); if (unlikely(!isr)) { wil_err(wil, "spurious IRQ: RX\n"); + wil6210_unmask_irq_rx_edma(wil); return IRQ_NONE; } - wil6210_mask_irq_rx_edma(wil); - if (likely(isr & BIT_RX_STATUS_IRQ)) { wil_dbg_irq(wil, "RX status ring\n"); isr &= ~BIT_RX_STATUS_IRQ; @@ -403,21 +409,24 @@ static irqreturn_t wil6210_irq_rx_edma(int irq, void *cookie) static irqreturn_t wil6210_irq_tx_edma(int irq, void *cookie) { struct wil6210_priv *wil = cookie; - u32 isr = wil_ioread32_and_clear(wil->csr + - HOSTADDR(RGF_INT_GEN_TX_ICR) + - offsetof(struct RGF_ICR, ICR)); + u32 isr; bool need_unmask = true; + wil6210_mask_irq_tx_edma(wil); + + isr = wil_ioread32_and_clear(wil->csr + + HOSTADDR(RGF_INT_GEN_TX_ICR) + + offsetof(struct RGF_ICR, ICR)); + trace_wil6210_irq_tx(isr); wil_dbg_irq(wil, "ISR TX 0x%08x\n", isr); if (unlikely(!isr)) { wil_err(wil, "spurious IRQ: TX\n"); + wil6210_unmask_irq_tx_edma(wil); return IRQ_NONE; } - wil6210_mask_irq_tx_edma(wil); - if (likely(isr & BIT_TX_STATUS_IRQ)) { wil_dbg_irq(wil, "TX status ring\n"); isr &= ~BIT_TX_STATUS_IRQ; @@ -446,21 +455,24 @@ static irqreturn_t wil6210_irq_tx_edma(int irq, void *cookie) static irqreturn_t wil6210_irq_tx(int irq, void *cookie) { struct wil6210_priv *wil = cookie; - u32 isr = wil_ioread32_and_clear(wil->csr + - HOSTADDR(RGF_DMA_EP_TX_ICR) + - offsetof(struct RGF_ICR, ICR)); + u32 isr; bool need_unmask = true; + wil6210_mask_irq_tx(wil); + + isr = wil_ioread32_and_clear(wil->csr + + HOSTADDR(RGF_DMA_EP_TX_ICR) + + offsetof(struct RGF_ICR, ICR)); + trace_wil6210_irq_tx(isr); wil_dbg_irq(wil, "ISR TX 0x%08x\n", isr); if (unlikely(!isr)) { wil_err_ratelimited(wil, "spurious IRQ: TX\n"); + wil6210_unmask_irq_tx(wil); return IRQ_NONE; } - wil6210_mask_irq_tx(wil); - if (likely(isr & BIT_DMA_EP_TX_ICR_TX_DONE)) { wil_dbg_irq(wil, "TX done\n"); isr &= ~BIT_DMA_EP_TX_ICR_TX_DONE; @@ -532,20 +544,23 @@ static bool wil_validate_mbox_regs(struct wil6210_priv *wil) static irqreturn_t wil6210_irq_misc(int irq, void *cookie) { struct wil6210_priv *wil = cookie; - u32 isr = wil_ioread32_and_clear(wil->csr + - HOSTADDR(RGF_DMA_EP_MISC_ICR) + - offsetof(struct RGF_ICR, ICR)); + u32 isr; + + wil6210_mask_irq_misc(wil, false); + + isr = wil_ioread32_and_clear(wil->csr + + HOSTADDR(RGF_DMA_EP_MISC_ICR) + + offsetof(struct RGF_ICR, ICR)); trace_wil6210_irq_misc(isr); wil_dbg_irq(wil, "ISR MISC 0x%08x\n", isr); if (!isr) { wil_err(wil, "spurious IRQ: MISC\n"); + wil6210_unmask_irq_misc(wil, false); return IRQ_NONE; } - wil6210_mask_irq_misc(wil, false); - if (isr & ISR_MISC_FW_ERROR) { u32 fw_assert_code = wil_r(wil, wil->rgf_fw_assert_code_addr); u32 ucode_assert_code = diff --git a/drivers/net/wireless/ath/wil6210/txrx.c b/drivers/net/wireless/ath/wil6210/txrx.c index 75c8aa297107173362172b1924dbe42524e1eb73..1b1b58e0129a3abc74130f1b7f3524ab0831bc38 100644 --- a/drivers/net/wireless/ath/wil6210/txrx.c +++ b/drivers/net/wireless/ath/wil6210/txrx.c @@ -736,6 +736,7 @@ void wil_netif_rx_any(struct sk_buff *skb, struct 
net_device *ndev) [GRO_HELD] = "GRO_HELD", [GRO_NORMAL] = "GRO_NORMAL", [GRO_DROP] = "GRO_DROP", + [GRO_CONSUMED] = "GRO_CONSUMED", }; wil->txrx_ops.get_netif_rx_params(skb, &cid, &security); diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c index 6e3b3031f29bd742544c775ba737bb1304d00905..2010f771478dfc8ce29526e15615ef38d3d1569f 100644 --- a/drivers/net/wireless/ath/wil6210/wmi.c +++ b/drivers/net/wireless/ath/wil6210/wmi.c @@ -2816,7 +2816,18 @@ static void wmi_event_handle(struct wil6210_priv *wil, /* check if someone waits for this event */ if (wil->reply_id && wil->reply_id == id && wil->reply_mid == mid) { - WARN_ON(wil->reply_buf); + if (wil->reply_buf) { + /* event received while wmi_call is waiting + * with a buffer. Such event should be handled + * in wmi_recv_cmd function. Handling the event + * here means a previous wmi_call was timeout. + * Drop the event and do not handle it. + */ + wil_err(wil, + "Old event (%d, %s) while wmi_call is waiting. Drop it and Continue waiting\n", + id, eventid2name(id)); + return; + } wmi_evt_call_handler(vif, id, evt_data, len - sizeof(*wmi)); diff --git a/drivers/net/wireless/intel/iwlwifi/fw/smem.c b/drivers/net/wireless/intel/iwlwifi/fw/smem.c index ff85d69c2a8cb3703bdc1bebe3148435a4cf2e43..557ee47bffd8c33baddf500fd13aebc337bcfa62 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/smem.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/smem.c @@ -8,7 +8,7 @@ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH * Copyright(c) 2016 - 2017 Intel Deutschland GmbH - * Copyright(c) 2018 Intel Corporation + * Copyright(c) 2018 - 2019 Intel Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of version 2 of the GNU General Public License as @@ -31,7 +31,7 @@ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH * Copyright(c) 2016 - 2017 Intel Deutschland GmbH - * Copyright(c) 2018 Intel Corporation + * Copyright(c) 2018 - 2019 Intel Corporation * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without @@ -134,6 +134,7 @@ void iwl_get_shared_mem_conf(struct iwl_fw_runtime *fwrt) .len = { 0, }, }; struct iwl_rx_packet *pkt; + int ret; if (fw_has_capa(&fwrt->fw->ucode_capa, IWL_UCODE_TLV_CAPA_EXTEND_SHARED_MEM_CFG)) @@ -141,8 +142,13 @@ void iwl_get_shared_mem_conf(struct iwl_fw_runtime *fwrt) else cmd.id = SHARED_MEM_CFG; - if (WARN_ON(iwl_trans_send_cmd(fwrt->trans, &cmd))) + ret = iwl_trans_send_cmd(fwrt->trans, &cmd); + + if (ret) { + WARN(ret != -ERFKILL, + "Could not send the SMEM command: %d\n", ret); return; + } pkt = cmd.resp_pkt; if (fwrt->trans->cfg->device_family >= IWL_DEVICE_FAMILY_22000) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c index 8b7d70e3a379305fa8709de30279cacc4391c9d2..3fe7605a2cca438e9a63767881253a8d1d855705 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c @@ -724,7 +724,7 @@ static int iwl_mvm_sar_get_ewrd_table(struct iwl_mvm *mvm) for (i = 0; i < n_profiles; i++) { /* the tables start at element 3 */ - static int pos = 3; + int pos = 3; /* The EWRD profiles officially go from 2 to 4, but we * save them in sar_profiles[1-3] (because we don't @@ -836,6 +836,22 @@ int iwl_mvm_sar_select_profile(struct iwl_mvm *mvm, int prof_a, int prof_b) return iwl_mvm_send_cmd_pdu(mvm, REDUCE_TX_POWER_CMD, 0, len, &cmd); } +static bool iwl_mvm_sar_geo_support(struct iwl_mvm *mvm) +{ + /* + * The GEO_TX_POWER_LIMIT command is not supported on earlier + * firmware versions. Unfortunately, we don't have a TLV API + * flag to rely on, so rely on the major version which is in + * the first byte of ucode_ver. This was implemented + * initially on version 38 and then backported to 36, 29 and + * 17. + */ + return IWL_UCODE_SERIAL(mvm->fw->ucode_ver) >= 38 || + IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 36 || + IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 29 || + IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 17; +} + int iwl_mvm_get_sar_geo_profile(struct iwl_mvm *mvm) { struct iwl_geo_tx_power_profiles_resp *resp; @@ -851,6 +867,9 @@ int iwl_mvm_get_sar_geo_profile(struct iwl_mvm *mvm) .data = { &geo_cmd }, }; + if (!iwl_mvm_sar_geo_support(mvm)) + return -EOPNOTSUPP; + ret = iwl_mvm_send_cmd(mvm, &cmd); if (ret) { IWL_ERR(mvm, "Failed to get geographic profile info %d\n", ret); @@ -876,13 +895,7 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm) int ret, i, j; u16 cmd_wide_id = WIDE_ID(PHY_OPS_GROUP, GEO_TX_POWER_LIMIT); - /* - * This command is not supported on earlier firmware versions. - * Unfortunately, we don't have a TLV API flag to rely on, so - * rely on the major version which is in the first byte of - * ucode_ver. 
- */ - if (IWL_UCODE_SERIAL(mvm->fw->ucode_ver) < 41) + if (!iwl_mvm_sar_geo_support(mvm)) return 0; ret = iwl_mvm_sar_get_wgds_table(mvm); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c index 2d21f0a1fa006fc2e4eae4ef700e8dab535db6b5..ffae299c34928083b9487e880d913956103e6f82 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c @@ -641,6 +641,9 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb) memcpy(&info, skb->cb, sizeof(info)); + if (WARN_ON_ONCE(skb->len > IEEE80211_MAX_DATA_LEN + hdrlen)) + return -1; + if (WARN_ON_ONCE(info.flags & IEEE80211_TX_CTL_AMPDU)) return -1; diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c index 2146fda8da2fdbdece661ceb3e177ab7c5c2b83e..64d976d872b84ab50de574703d60b081a58ad8ca 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c @@ -164,7 +164,7 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans, memcpy(iml_img, trans->iml, trans->iml_len); - iwl_enable_interrupts(trans); + iwl_enable_fw_load_int_ctx_info(trans); /* kick FW self load */ iwl_write64(trans, CSR_CTXT_INFO_ADDR, diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c index b2cd7ef5fc3a9ba3b37351745d6fdedfda985cf9..6f25fd1bbd8f40fb30181613715e09b826cd530f 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c @@ -206,7 +206,7 @@ int iwl_pcie_ctxt_info_init(struct iwl_trans *trans, trans_pcie->ctxt_info = ctxt_info; - iwl_enable_interrupts(trans); + iwl_enable_fw_load_int_ctx_info(trans); /* Configure debug, if exists */ if (trans->dbg_dest_tlv) diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h index b63d44b7cd7c7be1e6a0a3fe0081edc5ea4820b7..00f9566bcc2136ab588beb05fada23eeea2b60f1 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h +++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h @@ -896,6 +896,33 @@ static inline void iwl_enable_fw_load_int(struct iwl_trans *trans) } } +static inline void iwl_enable_fw_load_int_ctx_info(struct iwl_trans *trans) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + + IWL_DEBUG_ISR(trans, "Enabling ALIVE interrupt only\n"); + + if (!trans_pcie->msix_enabled) { + /* + * When we'll receive the ALIVE interrupt, the ISR will call + * iwl_enable_fw_load_int_ctx_info again to set the ALIVE + * interrupt (which is not really needed anymore) but also the + * RX interrupt which will allow us to receive the ALIVE + * notification (which is Rx) and continue the flow. + */ + trans_pcie->inta_mask = CSR_INT_BIT_ALIVE | CSR_INT_BIT_FH_RX; + iwl_write32(trans, CSR_INT_MASK, trans_pcie->inta_mask); + } else { + iwl_enable_hw_int_msk_msix(trans, + MSIX_HW_INT_CAUSES_REG_ALIVE); + /* + * Leave all the FH causes enabled to get the ALIVE + * notification. 
+ */ + iwl_enable_fh_int_msk_msix(trans, trans_pcie->fh_init_mask); + } +} + static inline u16 iwl_pcie_get_cmd_index(const struct iwl_txq *q, u32 index) { return index & (q->n_window - 1); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c index 6dcd5374d9b4d3b4c0aa60ad15c42428e2d90842..1d144985ea589d7a285c04279a70e5c09b570647 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c @@ -1778,26 +1778,26 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id) goto out; } - if (iwl_have_debug_level(IWL_DL_ISR)) { - /* NIC fires this, but we don't use it, redundant with WAKEUP */ - if (inta & CSR_INT_BIT_SCD) { - IWL_DEBUG_ISR(trans, - "Scheduler finished to transmit the frame/frames.\n"); - isr_stats->sch++; - } + /* NIC fires this, but we don't use it, redundant with WAKEUP */ + if (inta & CSR_INT_BIT_SCD) { + IWL_DEBUG_ISR(trans, + "Scheduler finished to transmit the frame/frames.\n"); + isr_stats->sch++; + } - /* Alive notification via Rx interrupt will do the real work */ - if (inta & CSR_INT_BIT_ALIVE) { - IWL_DEBUG_ISR(trans, "Alive interrupt\n"); - isr_stats->alive++; - if (trans->cfg->gen2) { - /* - * We can restock, since firmware configured - * the RFH - */ - iwl_pcie_rxmq_restock(trans, trans_pcie->rxq); - } + /* Alive notification via Rx interrupt will do the real work */ + if (inta & CSR_INT_BIT_ALIVE) { + IWL_DEBUG_ISR(trans, "Alive interrupt\n"); + isr_stats->alive++; + if (trans->cfg->gen2) { + /* + * We can restock, since firmware configured + * the RFH + */ + iwl_pcie_rxmq_restock(trans, trans_pcie->rxq); } + + handled |= CSR_INT_BIT_ALIVE; } /* Safely ignore these bits for debug checks below */ @@ -1916,6 +1916,9 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id) /* Re-enable RF_KILL if it occurred */ else if (handled & CSR_INT_BIT_RF_KILL) iwl_enable_rfkill_int(trans); + /* Re-enable the ALIVE / Rx interrupt if it occurred */ + else if (handled & (CSR_INT_BIT_ALIVE | CSR_INT_BIT_FH_RX)) + iwl_enable_fw_load_int_ctx_info(trans); spin_unlock(&trans_pcie->irq_lock); out: @@ -2060,10 +2063,18 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id) return IRQ_NONE; } - if (iwl_have_debug_level(IWL_DL_ISR)) - IWL_DEBUG_ISR(trans, "ISR inta_fh 0x%08x, enabled 0x%08x\n", - inta_fh, + if (iwl_have_debug_level(IWL_DL_ISR)) { + IWL_DEBUG_ISR(trans, + "ISR inta_fh 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n", + inta_fh, trans_pcie->fh_mask, iwl_read32(trans, CSR_MSIX_FH_INT_MASK_AD)); + if (inta_fh & ~trans_pcie->fh_mask) + IWL_DEBUG_ISR(trans, + "We got a masked interrupt (0x%08x)\n", + inta_fh & ~trans_pcie->fh_mask); + } + + inta_fh &= trans_pcie->fh_mask; if ((trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_NON_RX) && inta_fh & MSIX_FH_INT_CAUSES_Q0) { @@ -2103,11 +2114,18 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id) } /* After checking FH register check HW register */ - if (iwl_have_debug_level(IWL_DL_ISR)) + if (iwl_have_debug_level(IWL_DL_ISR)) { IWL_DEBUG_ISR(trans, - "ISR inta_hw 0x%08x, enabled 0x%08x\n", - inta_hw, + "ISR inta_hw 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n", + inta_hw, trans_pcie->hw_mask, iwl_read32(trans, CSR_MSIX_HW_INT_MASK_AD)); + if (inta_hw & ~trans_pcie->hw_mask) + IWL_DEBUG_ISR(trans, + "We got a masked interrupt 0x%08x\n", + inta_hw & ~trans_pcie->hw_mask); + } + + inta_hw &= trans_pcie->hw_mask; /* Alive notification via Rx interrupt will do the real work */ if (inta_hw & MSIX_HW_INT_CAUSES_REG_ALIVE) { diff 
--git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c index 2bc67219ed3efadd09597a28fe18e39ad5f1d736..31e72e1ff1e267ea5a57780b5b503e94d9b17cdd 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c @@ -289,6 +289,15 @@ void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr) * paging memory cannot be freed included since FW will still use it */ iwl_pcie_ctxt_info_free(trans); + + /* + * Re-enable all the interrupts, including the RF-Kill one, now that + * the firmware is alive. + */ + iwl_enable_interrupts(trans); + mutex_lock(&trans_pcie->mutex); + iwl_pcie_check_hw_rf_kill(trans); + mutex_unlock(&trans_pcie->mutex); } int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans, diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c index 93f0d387688a1314a54a8f17f4c02153dadd802d..42fdb7970cfdcb6608934ddf25c72087628199dd 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c @@ -403,6 +403,8 @@ static void iwl_pcie_tfd_unmap(struct iwl_trans *trans, DMA_TO_DEVICE); } + meta->tbs = 0; + if (trans->cfg->use_tfh) { struct iwl_tfh_tfd *tfd_fh = (void *)tfd; diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c index 7cd428c0af433a7e3e0ca0cde0825006cea7d0a8..ce2dd06af62e8b987a027d7cb097b08a5147e258 100644 --- a/drivers/net/wireless/mac80211_hwsim.c +++ b/drivers/net/wireless/mac80211_hwsim.c @@ -3502,10 +3502,12 @@ static int hwsim_dump_radio_nl(struct sk_buff *skb, hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, &hwsim_genl_family, NLM_F_MULTI, HWSIM_CMD_GET_RADIO); - if (!hdr) + if (hdr) { + genl_dump_check_consistent(cb, hdr); + genlmsg_end(skb, hdr); + } else { res = -EMSGSIZE; - genl_dump_check_consistent(cb, hdr); - genlmsg_end(skb, hdr); + } } done: diff --git a/drivers/net/wireless/marvell/mwifiex/main.h b/drivers/net/wireless/marvell/mwifiex/main.h index b025ba164412813f6b675b9f096969898c940af2..e39bb5c42c9a540c07ea475d8557cbb7383cf7ff 100644 --- a/drivers/net/wireless/marvell/mwifiex/main.h +++ b/drivers/net/wireless/marvell/mwifiex/main.h @@ -124,6 +124,7 @@ enum { #define MWIFIEX_MAX_TOTAL_SCAN_TIME (MWIFIEX_TIMER_10S - MWIFIEX_TIMER_1S) +#define WPA_GTK_OUI_OFFSET 2 #define RSN_GTK_OUI_OFFSET 2 #define MWIFIEX_OUI_NOT_PRESENT 0 diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c index 6dd771ce68a356f24d695954af79d985b31450fa..ed27147efcb37b9d8318b2aaac2d8620ac7adbd6 100644 --- a/drivers/net/wireless/marvell/mwifiex/scan.c +++ b/drivers/net/wireless/marvell/mwifiex/scan.c @@ -181,7 +181,8 @@ mwifiex_is_wpa_oui_present(struct mwifiex_bssdescriptor *bss_desc, u32 cipher) u8 ret = MWIFIEX_OUI_NOT_PRESENT; if (has_vendor_hdr(bss_desc->bcn_wpa_ie, WLAN_EID_VENDOR_SPECIFIC)) { - iebody = (struct ie_body *) bss_desc->bcn_wpa_ie->data; + iebody = (struct ie_body *)((u8 *)bss_desc->bcn_wpa_ie->data + + WPA_GTK_OUI_OFFSET); oui = &mwifiex_wpa_oui[cipher][0]; ret = mwifiex_search_oui_in_ie(iebody, oui); if (ret) diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/init.c b/drivers/net/wireless/mediatek/mt76/mt76x0/init.c index 0a3e046d78db376d91fe99b603ccffe336e3a51a..da2ba51dec352d498422656e784664a894b81a83 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/init.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/init.c @@ -369,7 
+369,7 @@ static void mt76x0_stop_hardware(struct mt76x0_dev *dev) mt76x0_chip_onoff(dev, false, false); } -int mt76x0_init_hardware(struct mt76x0_dev *dev) +int mt76x0_init_hardware(struct mt76x0_dev *dev, bool reset) { static const u16 beacon_offsets[16] = { /* 512 byte per beacon */ @@ -382,7 +382,7 @@ int mt76x0_init_hardware(struct mt76x0_dev *dev) dev->beacon_offsets = beacon_offsets; - mt76x0_chip_onoff(dev, true, true); + mt76x0_chip_onoff(dev, true, reset); ret = mt76x0_wait_asic_ready(dev); if (ret) diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0.h b/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0.h index fc9857f61771ccbc9d712c94402c455f498b8853..f9dfe5097b099cf27fca0e27c3ae8e0e17e322fc 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0.h @@ -279,7 +279,7 @@ void mt76x0_addr_wr(struct mt76x0_dev *dev, const u32 offset, const u8 *addr); /* Init */ struct mt76x0_dev *mt76x0_alloc_device(struct device *dev); -int mt76x0_init_hardware(struct mt76x0_dev *dev); +int mt76x0_init_hardware(struct mt76x0_dev *dev, bool reset); int mt76x0_register_device(struct mt76x0_dev *dev); void mt76x0_cleanup(struct mt76x0_dev *dev); void mt76x0_chip_onoff(struct mt76x0_dev *dev, bool enable, bool reset); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c index 54ae1f113be23dd51b1ab7fafb79bd3d991af893..5aacb1f6a841d0720b5372112b2f3b325b598597 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c @@ -300,7 +300,7 @@ static int mt76x0_probe(struct usb_interface *usb_intf, if (!(mt76_rr(dev, MT_EFUSE_CTRL) & MT_EFUSE_CTRL_SEL)) dev_warn(dev->mt76.dev, "Warning: eFUSE not present\n"); - ret = mt76x0_init_hardware(dev); + ret = mt76x0_init_hardware(dev, true); if (ret) goto err; @@ -354,7 +354,7 @@ static int mt76x0_resume(struct usb_interface *usb_intf) struct mt76x0_dev *dev = usb_get_intfdata(usb_intf); int ret; - ret = mt76x0_init_hardware(dev); + ret = mt76x0_init_hardware(dev, false); if (ret) return ret; diff --git a/drivers/net/wireless/mediatek/mt7601u/dma.c b/drivers/net/wireless/mediatek/mt7601u/dma.c index 7f3e3983b781d72af83b0fc47006b1a2ff34078c..47cebb2ec05c5a3939771b2c041e5c74ac8ca3f9 100644 --- a/drivers/net/wireless/mediatek/mt7601u/dma.c +++ b/drivers/net/wireless/mediatek/mt7601u/dma.c @@ -193,10 +193,23 @@ static void mt7601u_complete_rx(struct urb *urb) struct mt7601u_rx_queue *q = &dev->rx_q; unsigned long flags; - spin_lock_irqsave(&dev->rx_lock, flags); + /* do no schedule rx tasklet if urb has been unlinked + * or the device has been removed + */ + switch (urb->status) { + case -ECONNRESET: + case -ESHUTDOWN: + case -ENOENT: + return; + default: + dev_err_ratelimited(dev->dev, "rx urb failed: %d\n", + urb->status); + /* fall through */ + case 0: + break; + } - if (mt7601u_urb_has_error(urb)) - dev_err(dev->dev, "Error: RX urb failed:%d\n", urb->status); + spin_lock_irqsave(&dev->rx_lock, flags); if (WARN_ONCE(q->e[q->end].urb != urb, "RX urb mismatch")) goto out; @@ -228,14 +241,25 @@ static void mt7601u_complete_tx(struct urb *urb) struct sk_buff *skb; unsigned long flags; - spin_lock_irqsave(&dev->tx_lock, flags); + switch (urb->status) { + case -ECONNRESET: + case -ESHUTDOWN: + case -ENOENT: + return; + default: + dev_err_ratelimited(dev->dev, "tx urb failed: %d\n", + urb->status); + /* fall through */ + case 0: + break; + } - if (mt7601u_urb_has_error(urb)) - dev_err(dev->dev, 
"Error: TX urb failed:%d\n", urb->status); + spin_lock_irqsave(&dev->tx_lock, flags); if (WARN_ONCE(q->e[q->start].urb != urb, "TX urb mismatch")) goto out; skb = q->e[q->start].skb; + q->e[q->start].skb = NULL; trace_mt_tx_dma_done(dev, skb); __skb_queue_tail(&dev->tx_skb_done, skb); @@ -363,19 +387,9 @@ int mt7601u_dma_enqueue_tx(struct mt7601u_dev *dev, struct sk_buff *skb, static void mt7601u_kill_rx(struct mt7601u_dev *dev) { int i; - unsigned long flags; - - spin_lock_irqsave(&dev->rx_lock, flags); - - for (i = 0; i < dev->rx_q.entries; i++) { - int next = dev->rx_q.end; - spin_unlock_irqrestore(&dev->rx_lock, flags); - usb_poison_urb(dev->rx_q.e[next].urb); - spin_lock_irqsave(&dev->rx_lock, flags); - } - - spin_unlock_irqrestore(&dev->rx_lock, flags); + for (i = 0; i < dev->rx_q.entries; i++) + usb_poison_urb(dev->rx_q.e[i].urb); } static int mt7601u_submit_rx_buf(struct mt7601u_dev *dev, @@ -445,10 +459,10 @@ static void mt7601u_free_tx_queue(struct mt7601u_tx_queue *q) { int i; - WARN_ON(q->used); - for (i = 0; i < q->entries; i++) { usb_poison_urb(q->e[i].urb); + if (q->e[i].skb) + mt7601u_tx_status(q->dev, q->e[i].skb); usb_free_urb(q->e[i].urb); } } diff --git a/drivers/net/wireless/mediatek/mt7601u/tx.c b/drivers/net/wireless/mediatek/mt7601u/tx.c index 3600e911a63e85d23ecc15f238c42db66865472d..4d81c45722fbb756dc3c91fa8559772dc8738fda 100644 --- a/drivers/net/wireless/mediatek/mt7601u/tx.c +++ b/drivers/net/wireless/mediatek/mt7601u/tx.c @@ -117,9 +117,9 @@ void mt7601u_tx_status(struct mt7601u_dev *dev, struct sk_buff *skb) info->status.rates[0].idx = -1; info->flags |= IEEE80211_TX_STAT_ACK; - spin_lock(&dev->mac_lock); + spin_lock_bh(&dev->mac_lock); ieee80211_tx_status(dev->hw, skb); - spin_unlock(&dev->mac_lock); + spin_unlock_bh(&dev->mac_lock); } static int mt7601u_skb_rooms(struct mt7601u_dev *dev, struct sk_buff *skb) diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c index 2ac5004d7a401ab5d1255126c5c0a00a5e233705..5adb939afee8832435c6ed07b408fa23ac8cd65c 100644 --- a/drivers/net/wireless/realtek/rtlwifi/usb.c +++ b/drivers/net/wireless/realtek/rtlwifi/usb.c @@ -1081,13 +1081,13 @@ int rtl_usb_probe(struct usb_interface *intf, rtlpriv->cfg->ops->read_eeprom_info(hw); err = _rtl_usb_init(hw); if (err) - goto error_out; + goto error_out2; rtl_usb_init_sw(hw); /* Init mac80211 sw */ err = rtl_init_core(hw); if (err) { pr_err("Can't allocate sw for mac80211\n"); - goto error_out; + goto error_out2; } if (rtlpriv->cfg->ops->init_sw_vars(hw)) { pr_err("Can't init_sw_vars\n"); @@ -1108,6 +1108,7 @@ int rtl_usb_probe(struct usb_interface *intf, error_out: rtl_deinit_core(hw); +error_out2: _rtl_usb_io_handler_release(hw); usb_put_dev(udev); complete(&rtlpriv->firmware_loading_complete); diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c index d5081ffdc8f035a5ae3f7b1ea7aef53b521b69e3..1c849106b7935274e79390fb7d2f748166b7d12d 100644 --- a/drivers/net/xen-netback/netback.c +++ b/drivers/net/xen-netback/netback.c @@ -925,6 +925,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue, skb_shinfo(skb)->nr_frags = MAX_SKB_FRAGS; nskb = xenvif_alloc_skb(0); if (unlikely(nskb == NULL)) { + skb_shinfo(skb)->nr_frags = 0; kfree_skb(skb); xenvif_tx_err(queue, &txreq, extra_count, idx); if (net_ratelimit()) @@ -940,6 +941,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue, if (xenvif_set_skb_gso(queue->vif, skb, gso)) { /* Failure in xenvif_set_skb_gso is fatal. 
*/ + skb_shinfo(skb)->nr_frags = 0; kfree_skb(skb); kfree_skb(nskb); break; diff --git a/drivers/nfc/nfcmrvl/main.c b/drivers/nfc/nfcmrvl/main.c index e65d027b91fafbbd752970cff0afdc9e8cfb0d7c..529be35ac1782a62e9f18b71905fa597ea52a222 100644 --- a/drivers/nfc/nfcmrvl/main.c +++ b/drivers/nfc/nfcmrvl/main.c @@ -244,7 +244,7 @@ void nfcmrvl_chip_reset(struct nfcmrvl_private *priv) /* Reset possible fault of previous session */ clear_bit(NFCMRVL_PHY_ERROR, &priv->flags); - if (priv->config.reset_n_io) { + if (gpio_is_valid(priv->config.reset_n_io)) { nfc_info(priv->dev, "reset the chip\n"); gpio_set_value(priv->config.reset_n_io, 0); usleep_range(5000, 10000); @@ -255,7 +255,7 @@ void nfcmrvl_chip_reset(struct nfcmrvl_private *priv) void nfcmrvl_chip_halt(struct nfcmrvl_private *priv) { - if (priv->config.reset_n_io) + if (gpio_is_valid(priv->config.reset_n_io)) gpio_set_value(priv->config.reset_n_io, 0); } diff --git a/drivers/nfc/nfcmrvl/uart.c b/drivers/nfc/nfcmrvl/uart.c index 9a22056e8d9eea921b59a588318be8c47b6b1046..e5a622ce4b9517d299170f20d35bde2bd5e003da 100644 --- a/drivers/nfc/nfcmrvl/uart.c +++ b/drivers/nfc/nfcmrvl/uart.c @@ -26,7 +26,7 @@ static unsigned int hci_muxed; static unsigned int flow_control; static unsigned int break_control; -static unsigned int reset_n_io; +static int reset_n_io = -EINVAL; /* ** NFCMRVL NCI OPS @@ -231,5 +231,5 @@ MODULE_PARM_DESC(break_control, "Tell if UART driver must drive break signal."); module_param(hci_muxed, uint, 0); MODULE_PARM_DESC(hci_muxed, "Tell if transport is muxed in HCI one."); -module_param(reset_n_io, uint, 0); +module_param(reset_n_io, int, 0); MODULE_PARM_DESC(reset_n_io, "GPIO that is wired to RESET_N signal."); diff --git a/drivers/nfc/nfcmrvl/usb.c b/drivers/nfc/nfcmrvl/usb.c index 945cc903d8f1123fd63a13c42564a3dfe7465a12..888e298f610b8ed7140c85c5d0355d55007b6cfc 100644 --- a/drivers/nfc/nfcmrvl/usb.c +++ b/drivers/nfc/nfcmrvl/usb.c @@ -305,6 +305,7 @@ static int nfcmrvl_probe(struct usb_interface *intf, /* No configuration for USB */ memset(&config, 0, sizeof(config)); + config.reset_n_io = -EINVAL; nfc_info(&udev->dev, "intf %p id %p\n", intf, id); diff --git a/drivers/nfc/st-nci/se.c b/drivers/nfc/st-nci/se.c index f55d082ace71558c8bf23d1813d70da18c9c5a0d..5d6e7e931bc6cd06bc162a2672acba679d60d782 100644 --- a/drivers/nfc/st-nci/se.c +++ b/drivers/nfc/st-nci/se.c @@ -344,6 +344,8 @@ static int st_nci_hci_connectivity_event_received(struct nci_dev *ndev, transaction = (struct nfc_evt_transaction *)devm_kzalloc(dev, skb->len - 2, GFP_KERNEL); + if (!transaction) + return -ENOMEM; transaction->aid_len = skb->data[1]; memcpy(transaction->aid, &skb->data[2], transaction->aid_len); diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c index 4bed9e842db38126859d74d4d585dee66ea80d33..fd967a38a94a5da4c4687fd9d058120887c3f8d7 100644 --- a/drivers/nfc/st21nfca/se.c +++ b/drivers/nfc/st21nfca/se.c @@ -328,6 +328,8 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host, transaction = (struct nfc_evt_transaction *)devm_kzalloc(dev, skb->len - 2, GFP_KERNEL); + if (!transaction) + return -ENOMEM; transaction->aid_len = skb->data[1]; memcpy(transaction->aid, &skb->data[2], diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c index a3132a9eb91c21b2ecbac00d3d38f0efb966c6a0..2ba22cd1331b0528a44bf28fb313be96a3ad497c 100644 --- a/drivers/nvdimm/bus.c +++ b/drivers/nvdimm/bus.c @@ -86,7 +86,7 @@ static void nvdimm_bus_probe_end(struct nvdimm_bus *nvdimm_bus) { nvdimm_bus_lock(&nvdimm_bus->dev); if 
(--nvdimm_bus->probe_active == 0) - wake_up(&nvdimm_bus->probe_wait); + wake_up(&nvdimm_bus->wait); nvdimm_bus_unlock(&nvdimm_bus->dev); } @@ -348,7 +348,7 @@ struct nvdimm_bus *nvdimm_bus_register(struct device *parent, return NULL; INIT_LIST_HEAD(&nvdimm_bus->list); INIT_LIST_HEAD(&nvdimm_bus->mapping_list); - init_waitqueue_head(&nvdimm_bus->probe_wait); + init_waitqueue_head(&nvdimm_bus->wait); nvdimm_bus->id = ida_simple_get(&nd_ida, 0, 0, GFP_KERNEL); mutex_init(&nvdimm_bus->reconfig_mutex); badrange_init(&nvdimm_bus->badrange); @@ -418,6 +418,9 @@ static int nd_bus_remove(struct device *dev) list_del_init(&nvdimm_bus->list); mutex_unlock(&nvdimm_bus_list_mutex); + wait_event(nvdimm_bus->wait, + atomic_read(&nvdimm_bus->ioctl_active) == 0); + nd_synchronize(); device_for_each_child(&nvdimm_bus->dev, NULL, child_unregister); @@ -525,13 +528,38 @@ EXPORT_SYMBOL(nd_device_register); void nd_device_unregister(struct device *dev, enum nd_async_mode mode) { + bool killed; + switch (mode) { case ND_ASYNC: + /* + * In the async case this is being triggered with the + * device lock held and the unregistration work needs to + * be moved out of line iff this is thread has won the + * race to schedule the deletion. + */ + if (!kill_device(dev)) + return; + get_device(dev); async_schedule_domain(nd_async_device_unregister, dev, &nd_async_domain); break; case ND_SYNC: + /* + * In the sync case the device is being unregistered due + * to a state change of the parent. Claim the kill state + * to synchronize against other unregistration requests, + * or otherwise let the async path handle it if the + * unregistration was already queued. + */ + device_lock(dev); + killed = kill_device(dev); + device_unlock(dev); + + if (!killed) + return; + nd_synchronize(); device_unregister(dev); break; @@ -837,10 +865,12 @@ void wait_nvdimm_bus_probe_idle(struct device *dev) do { if (nvdimm_bus->probe_active == 0) break; - nvdimm_bus_unlock(&nvdimm_bus->dev); - wait_event(nvdimm_bus->probe_wait, + nvdimm_bus_unlock(dev); + device_unlock(dev); + wait_event(nvdimm_bus->wait, nvdimm_bus->probe_active == 0); - nvdimm_bus_lock(&nvdimm_bus->dev); + device_lock(dev); + nvdimm_bus_lock(dev); } while (true); } @@ -923,20 +953,19 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, int read_only, unsigned int ioctl_cmd, unsigned long arg) { struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc; - static char out_env[ND_CMD_MAX_ENVELOPE]; - static char in_env[ND_CMD_MAX_ENVELOPE]; const struct nd_cmd_desc *desc = NULL; unsigned int cmd = _IOC_NR(ioctl_cmd); struct device *dev = &nvdimm_bus->dev; void __user *p = (void __user *) arg; + char *out_env = NULL, *in_env = NULL; const char *cmd_name, *dimm_name; u32 in_len = 0, out_len = 0; unsigned int func = cmd; unsigned long cmd_mask; struct nd_cmd_pkg pkg; int rc, i, cmd_rc; + void *buf = NULL; u64 buf_len = 0; - void *buf; if (nvdimm) { desc = nd_cmd_dimm_desc(cmd); @@ -967,7 +996,7 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, case ND_CMD_ARS_START: case ND_CMD_CLEAR_ERROR: case ND_CMD_CALL: - dev_dbg(&nvdimm_bus->dev, "'%s' command while read-only.\n", + dev_dbg(dev, "'%s' command while read-only.\n", nvdimm ? 
nvdimm_cmd_name(cmd) : nvdimm_bus_cmd_name(cmd)); return -EPERM; @@ -976,6 +1005,9 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, } /* process an input envelope */ + in_env = kzalloc(ND_CMD_MAX_ENVELOPE, GFP_KERNEL); + if (!in_env) + return -ENOMEM; for (i = 0; i < desc->in_num; i++) { u32 in_size, copy; @@ -983,14 +1015,17 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, if (in_size == UINT_MAX) { dev_err(dev, "%s:%s unknown input size cmd: %s field: %d\n", __func__, dimm_name, cmd_name, i); - return -ENXIO; + rc = -ENXIO; + goto out; } - if (in_len < sizeof(in_env)) - copy = min_t(u32, sizeof(in_env) - in_len, in_size); + if (in_len < ND_CMD_MAX_ENVELOPE) + copy = min_t(u32, ND_CMD_MAX_ENVELOPE - in_len, in_size); else copy = 0; - if (copy && copy_from_user(&in_env[in_len], p + in_len, copy)) - return -EFAULT; + if (copy && copy_from_user(&in_env[in_len], p + in_len, copy)) { + rc = -EFAULT; + goto out; + } in_len += in_size; } @@ -1002,6 +1037,12 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, } /* process an output envelope */ + out_env = kzalloc(ND_CMD_MAX_ENVELOPE, GFP_KERNEL); + if (!out_env) { + rc = -ENOMEM; + goto out; + } + for (i = 0; i < desc->out_num; i++) { u32 out_size = nd_cmd_out_size(nvdimm, cmd, desc, i, (u32 *) in_env, (u32 *) out_env, 0); @@ -1010,15 +1051,18 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, if (out_size == UINT_MAX) { dev_dbg(dev, "%s unknown output size cmd: %s field: %d\n", dimm_name, cmd_name, i); - return -EFAULT; + rc = -EFAULT; + goto out; } - if (out_len < sizeof(out_env)) - copy = min_t(u32, sizeof(out_env) - out_len, out_size); + if (out_len < ND_CMD_MAX_ENVELOPE) + copy = min_t(u32, ND_CMD_MAX_ENVELOPE - out_len, out_size); else copy = 0; if (copy && copy_from_user(&out_env[out_len], - p + in_len + out_len, copy)) - return -EFAULT; + p + in_len + out_len, copy)) { + rc = -EFAULT; + goto out; + } out_len += out_size; } @@ -1026,19 +1070,23 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, if (buf_len > ND_IOCTL_MAX_BUFLEN) { dev_dbg(dev, "%s cmd: %s buf_len: %llu > %d\n", dimm_name, cmd_name, buf_len, ND_IOCTL_MAX_BUFLEN); - return -EINVAL; + rc = -EINVAL; + goto out; } buf = vmalloc(buf_len); - if (!buf) - return -ENOMEM; + if (!buf) { + rc = -ENOMEM; + goto out; + } if (copy_from_user(buf, p, buf_len)) { rc = -EFAULT; goto out; } - nvdimm_bus_lock(&nvdimm_bus->dev); + device_lock(dev); + nvdimm_bus_lock(dev); rc = nd_cmd_clear_to_send(nvdimm_bus, nvdimm, func, buf); if (rc) goto out_unlock; @@ -1053,39 +1101,24 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, nvdimm_account_cleared_poison(nvdimm_bus, clear_err->address, clear_err->cleared); } - nvdimm_bus_unlock(&nvdimm_bus->dev); if (copy_to_user(p, buf, buf_len)) rc = -EFAULT; - vfree(buf); - return rc; - - out_unlock: - nvdimm_bus_unlock(&nvdimm_bus->dev); - out: +out_unlock: + nvdimm_bus_unlock(dev); + device_unlock(dev); +out: + kfree(in_env); + kfree(out_env); vfree(buf); return rc; } -static long nd_ioctl(struct file *file, unsigned int cmd, unsigned long arg) -{ - long id = (long) file->private_data; - int rc = -ENXIO, ro; - struct nvdimm_bus *nvdimm_bus; - - ro = ((file->f_flags & O_ACCMODE) == O_RDONLY); - mutex_lock(&nvdimm_bus_list_mutex); - list_for_each_entry(nvdimm_bus, &nvdimm_bus_list, list) { - if (nvdimm_bus->id == id) { - rc = __nd_ioctl(nvdimm_bus, NULL, ro, cmd, arg); - break; - } - } - 
mutex_unlock(&nvdimm_bus_list_mutex); - - return rc; -} +enum nd_ioctl_mode { + BUS_IOCTL, + DIMM_IOCTL, +}; static int match_dimm(struct device *dev, void *data) { @@ -1100,31 +1133,62 @@ static int match_dimm(struct device *dev, void *data) return 0; } -static long nvdimm_ioctl(struct file *file, unsigned int cmd, unsigned long arg) +static long nd_ioctl(struct file *file, unsigned int cmd, unsigned long arg, + enum nd_ioctl_mode mode) + { - int rc = -ENXIO, ro; - struct nvdimm_bus *nvdimm_bus; + struct nvdimm_bus *nvdimm_bus, *found = NULL; + long id = (long) file->private_data; + struct nvdimm *nvdimm = NULL; + int rc, ro; ro = ((file->f_flags & O_ACCMODE) == O_RDONLY); mutex_lock(&nvdimm_bus_list_mutex); list_for_each_entry(nvdimm_bus, &nvdimm_bus_list, list) { - struct device *dev = device_find_child(&nvdimm_bus->dev, - file->private_data, match_dimm); - struct nvdimm *nvdimm; - - if (!dev) - continue; + if (mode == DIMM_IOCTL) { + struct device *dev; + + dev = device_find_child(&nvdimm_bus->dev, + file->private_data, match_dimm); + if (!dev) + continue; + nvdimm = to_nvdimm(dev); + found = nvdimm_bus; + } else if (nvdimm_bus->id == id) { + found = nvdimm_bus; + } - nvdimm = to_nvdimm(dev); - rc = __nd_ioctl(nvdimm_bus, nvdimm, ro, cmd, arg); - put_device(dev); - break; + if (found) { + atomic_inc(&nvdimm_bus->ioctl_active); + break; + } } mutex_unlock(&nvdimm_bus_list_mutex); + if (!found) + return -ENXIO; + + nvdimm_bus = found; + rc = __nd_ioctl(nvdimm_bus, nvdimm, ro, cmd, arg); + + if (nvdimm) + put_device(&nvdimm->dev); + if (atomic_dec_and_test(&nvdimm_bus->ioctl_active)) + wake_up(&nvdimm_bus->wait); + return rc; } +static long bus_ioctl(struct file *file, unsigned int cmd, unsigned long arg) +{ + return nd_ioctl(file, cmd, arg, BUS_IOCTL); +} + +static long dimm_ioctl(struct file *file, unsigned int cmd, unsigned long arg) +{ + return nd_ioctl(file, cmd, arg, DIMM_IOCTL); +} + static int nd_open(struct inode *inode, struct file *file) { long minor = iminor(inode); @@ -1136,16 +1200,16 @@ static int nd_open(struct inode *inode, struct file *file) static const struct file_operations nvdimm_bus_fops = { .owner = THIS_MODULE, .open = nd_open, - .unlocked_ioctl = nd_ioctl, - .compat_ioctl = nd_ioctl, + .unlocked_ioctl = bus_ioctl, + .compat_ioctl = bus_ioctl, .llseek = noop_llseek, }; static const struct file_operations nvdimm_fops = { .owner = THIS_MODULE, .open = nd_open, - .unlocked_ioctl = nvdimm_ioctl, - .compat_ioctl = nvdimm_ioctl, + .unlocked_ioctl = dimm_ioctl, + .compat_ioctl = dimm_ioctl, .llseek = noop_llseek, }; diff --git a/drivers/nvdimm/dax_devs.c b/drivers/nvdimm/dax_devs.c index 0453f49dc70814f35d2e0988f46304777f3e2559..326f02ffca81f7ce638d60f3935469170c959594 100644 --- a/drivers/nvdimm/dax_devs.c +++ b/drivers/nvdimm/dax_devs.c @@ -126,7 +126,7 @@ int nd_dax_probe(struct device *dev, struct nd_namespace_common *ndns) nvdimm_bus_unlock(&ndns->dev); if (!dax_dev) return -ENOMEM; - pfn_sb = devm_kzalloc(dev, sizeof(*pfn_sb), GFP_KERNEL); + pfn_sb = devm_kmalloc(dev, sizeof(*pfn_sb), GFP_KERNEL); nd_pfn->pfn_sb = pfn_sb; rc = nd_pfn_validate(nd_pfn, DAX_SIG); dev_dbg(dev, "dax: %s\n", rc == 0 ? 
dev_name(dax_dev) : ""); diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h index 5ff254dc9b14fb52bc3c9b2dcc6d4d263e900d9c..adf62a6c0fe277cbd4cba1280490259e1031e49d 100644 --- a/drivers/nvdimm/nd-core.h +++ b/drivers/nvdimm/nd-core.h @@ -25,10 +25,11 @@ extern int nvdimm_major; struct nvdimm_bus { struct nvdimm_bus_descriptor *nd_desc; - wait_queue_head_t probe_wait; + wait_queue_head_t wait; struct list_head list; struct device dev; int id, probe_active; + atomic_t ioctl_active; struct list_head mapping_list; struct mutex reconfig_mutex; struct badrange badrange; diff --git a/drivers/nvdimm/pfn.h b/drivers/nvdimm/pfn.h index dde9853453d3c622c42511772ff807bcf1fd70d0..e901e3a3b04c9920007609f64443a8895bb2dca4 100644 --- a/drivers/nvdimm/pfn.h +++ b/drivers/nvdimm/pfn.h @@ -36,6 +36,7 @@ struct nd_pfn_sb { __le32 end_trunc; /* minor-version-2 record the base alignment of the mapping */ __le32 align; + /* minor-version-3 guarantee the padding and flags are zero */ u8 padding[4000]; __le64 checksum; }; diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c index 3ee995a3bfc9bd9327cddc716787368566cb8377..86ed09b2a1929a52dcd16e0a6d85216331cf9b9f 100644 --- a/drivers/nvdimm/pfn_devs.c +++ b/drivers/nvdimm/pfn_devs.c @@ -361,6 +361,15 @@ struct device *nd_pfn_create(struct nd_region *nd_region) return dev; } +/** + * nd_pfn_validate - read and validate info-block + * @nd_pfn: fsdax namespace runtime state / properties + * @sig: 'devdax' or 'fsdax' signature + * + * Upon return the info-block buffer contents (->pfn_sb) are + * indeterminate when validation fails, and a coherent info-block + * otherwise. + */ int nd_pfn_validate(struct nd_pfn *nd_pfn, const char *sig) { u64 checksum, offset; @@ -506,7 +515,7 @@ int nd_pfn_probe(struct device *dev, struct nd_namespace_common *ndns) nvdimm_bus_unlock(&ndns->dev); if (!pfn_dev) return -ENOMEM; - pfn_sb = devm_kzalloc(dev, sizeof(*pfn_sb), GFP_KERNEL); + pfn_sb = devm_kmalloc(dev, sizeof(*pfn_sb), GFP_KERNEL); nd_pfn = to_nd_pfn(pfn_dev); nd_pfn->pfn_sb = pfn_sb; rc = nd_pfn_validate(nd_pfn, PFN_SIG); @@ -638,7 +647,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) u64 checksum; int rc; - pfn_sb = devm_kzalloc(&nd_pfn->dev, sizeof(*pfn_sb), GFP_KERNEL); + pfn_sb = devm_kmalloc(&nd_pfn->dev, sizeof(*pfn_sb), GFP_KERNEL); if (!pfn_sb) return -ENOMEM; @@ -647,11 +656,14 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) sig = DAX_SIG; else sig = PFN_SIG; + rc = nd_pfn_validate(nd_pfn, sig); if (rc != -ENODEV) return rc; /* no info block, do init */; + memset(pfn_sb, 0, sizeof(*pfn_sb)); + nd_region = to_nd_region(nd_pfn->dev.parent); if (nd_region->ro) { dev_info(&nd_pfn->dev, @@ -705,7 +717,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) memcpy(pfn_sb->uuid, nd_pfn->uuid, 16); memcpy(pfn_sb->parent_uuid, nd_dev_to_uuid(&ndns->dev), 16); pfn_sb->version_major = cpu_to_le16(1); - pfn_sb->version_minor = cpu_to_le16(2); + pfn_sb->version_minor = cpu_to_le16(3); pfn_sb->start_pad = cpu_to_le32(start_pad); pfn_sb->end_trunc = cpu_to_le32(end_trunc); pfn_sb->align = cpu_to_le32(nd_pfn->align); diff --git a/drivers/nvdimm/region.c b/drivers/nvdimm/region.c index b9ca0033cc9996b295fe28e96369531ef87ab20a..f9130cc157e8355a7221c10fd15303a864239fd1 100644 --- a/drivers/nvdimm/region.c +++ b/drivers/nvdimm/region.c @@ -42,17 +42,6 @@ static int nd_region_probe(struct device *dev) if (rc) return rc; - rc = nd_region_register_namespaces(nd_region, &err); - if (rc < 0) - return rc; - - ndrd = dev_get_drvdata(dev); - ndrd->ns_active = rc; - 
ndrd->ns_count = rc + err; - - if (rc && err && rc == err) - return -ENODEV; - if (is_nd_pmem(&nd_region->dev)) { struct resource ndr_res; @@ -68,6 +57,17 @@ static int nd_region_probe(struct device *dev) nvdimm_badblocks_populate(nd_region, &nd_region->bb, &ndr_res); } + rc = nd_region_register_namespaces(nd_region, &err); + if (rc < 0) + return rc; + + ndrd = dev_get_drvdata(dev); + ndrd->ns_active = rc; + ndrd->ns_count = rc + err; + + if (rc && err && rc == err) + return -ENODEV; + nd_region->btt_seed = nd_btt_create(nd_region); nd_region->pfn_seed = nd_pfn_create(nd_region); nd_region->dax_seed = nd_dax_create(nd_region); diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c index e7377f1028ef687637a4a9f481899b05cc264b1f..0303296e6d5b6c85e19c18439afc70d58ae7e42b 100644 --- a/drivers/nvdimm/region_devs.c +++ b/drivers/nvdimm/region_devs.c @@ -425,10 +425,12 @@ static ssize_t available_size_show(struct device *dev, * memory nvdimm_bus_lock() is dropped, but that's userspace's * problem to not race itself. */ + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); available = nd_region_available_dpa(nd_region); nvdimm_bus_unlock(dev); + device_unlock(dev); return sprintf(buf, "%llu\n", available); } @@ -440,10 +442,12 @@ static ssize_t max_available_extent_show(struct device *dev, struct nd_region *nd_region = to_nd_region(dev); unsigned long long available = 0; + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); available = nd_region_allocatable_dpa(nd_region); nvdimm_bus_unlock(dev); + device_unlock(dev); return sprintf(buf, "%llu\n", available); } diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index d8869d978c341474051a1f21981aca9f859cf3bc..ae0b01059fc6dfe79f942938c3f9480dac619492 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -1183,6 +1183,9 @@ static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns, */ if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) { mutex_lock(&ctrl->scan_lock); + mutex_lock(&ctrl->subsys->lock); + nvme_mpath_start_freeze(ctrl->subsys); + nvme_mpath_wait_freeze(ctrl->subsys); nvme_start_freeze(ctrl); nvme_wait_freeze(ctrl); } @@ -1213,6 +1216,8 @@ static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects) nvme_update_formats(ctrl); if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) { nvme_unfreeze(ctrl); + nvme_mpath_unfreeze(ctrl->subsys); + mutex_unlock(&ctrl->subsys->lock); mutex_unlock(&ctrl->scan_lock); } if (effects & NVME_CMD_EFFECTS_CCC) @@ -1557,6 +1562,7 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id) if (ns->head->disk) { nvme_update_disk_info(ns->head->disk, ns, id); blk_queue_stack_limits(ns->head->disk->queue, ns->queue); + revalidate_disk(ns->head->disk); } #endif } @@ -3168,6 +3174,14 @@ static void nvme_ns_remove(struct nvme_ns *ns) return; nvme_fault_inject_fini(ns); + + mutex_lock(&ns->ctrl->subsys->lock); + list_del_rcu(&ns->siblings); + mutex_unlock(&ns->ctrl->subsys->lock); + synchronize_rcu(); /* guarantee not available in head->list */ + nvme_mpath_clear_current_path(ns); + synchronize_srcu(&ns->head->srcu); /* wait for concurrent submissions */ + if (ns->disk && ns->disk->flags & GENHD_FL_UP) { sysfs_remove_group(&disk_to_dev(ns->disk)->kobj, &nvme_ns_id_attr_group); @@ -3179,16 +3193,10 @@ static void nvme_ns_remove(struct nvme_ns *ns) blk_integrity_unregister(ns->disk); } - mutex_lock(&ns->ctrl->subsys->lock); - list_del_rcu(&ns->siblings); 
- nvme_mpath_clear_current_path(ns); - mutex_unlock(&ns->ctrl->subsys->lock); - down_write(&ns->ctrl->namespaces_rwsem); list_del_init(&ns->list); up_write(&ns->ctrl->namespaces_rwsem); - synchronize_srcu(&ns->head->srcu); nvme_mpath_check_last_path(ns); nvme_put_ns(ns); } diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index 260248fbb8feb4cbf7c9149c9b9e43939c7a431f..05d6371c7f3858f027b22cef69a754dd9d6dae4c 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -20,9 +20,34 @@ module_param(multipath, bool, 0444); MODULE_PARM_DESC(multipath, "turn on native support for multiple controllers per subsystem"); -inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl) +void nvme_mpath_unfreeze(struct nvme_subsystem *subsys) { - return multipath && ctrl->subsys && (ctrl->subsys->cmic & (1 << 3)); + struct nvme_ns_head *h; + + lockdep_assert_held(&subsys->lock); + list_for_each_entry(h, &subsys->nsheads, entry) + if (h->disk) + blk_mq_unfreeze_queue(h->disk->queue); +} + +void nvme_mpath_wait_freeze(struct nvme_subsystem *subsys) +{ + struct nvme_ns_head *h; + + lockdep_assert_held(&subsys->lock); + list_for_each_entry(h, &subsys->nsheads, entry) + if (h->disk) + blk_mq_freeze_queue_wait(h->disk->queue); +} + +void nvme_mpath_start_freeze(struct nvme_subsystem *subsys) +{ + struct nvme_ns_head *h; + + lockdep_assert_held(&subsys->lock); + list_for_each_entry(h, &subsys->nsheads, entry) + if (h->disk) + blk_freeze_queue_start(h->disk->queue); } /* @@ -516,7 +541,8 @@ int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) { int error; - if (!nvme_ctrl_use_ana(ctrl)) + /* check if multipath is enabled and we have the capability */ + if (!multipath || !ctrl->subsys || !(ctrl->subsys->cmic & (1 << 3))) return 0; ctrl->anacap = id->anacap; diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index e82cdaec81c9c10e90f4f825c35ab4cfcc33d97b..2653e1f4196d508fdf7c11c4d8306fbdf1373f35 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -464,7 +464,14 @@ extern const struct attribute_group nvme_ns_id_attr_group; extern const struct block_device_operations nvme_ns_head_ops; #ifdef CONFIG_NVME_MULTIPATH -bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl); +static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl) +{ + return ctrl->ana_log_buf != NULL; +} + +void nvme_mpath_unfreeze(struct nvme_subsystem *subsys); +void nvme_mpath_wait_freeze(struct nvme_subsystem *subsys); +void nvme_mpath_start_freeze(struct nvme_subsystem *subsys); void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns, struct nvme_ctrl *ctrl, int *flags); void nvme_failover_req(struct request *req); @@ -549,6 +556,15 @@ static inline void nvme_mpath_uninit(struct nvme_ctrl *ctrl) static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl) { } +static inline void nvme_mpath_unfreeze(struct nvme_subsystem *subsys) +{ +} +static inline void nvme_mpath_wait_freeze(struct nvme_subsystem *subsys) +{ +} +static inline void nvme_mpath_start_freeze(struct nvme_subsystem *subsys) +{ +} #endif /* CONFIG_NVME_MULTIPATH */ #ifdef CONFIG_NVM diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index c8eeecc581154269afb726824f8caa17691eede2..a64a8bca0d5b9dcfe9056f9207568f2a3dd1e0b7 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -2253,11 +2253,13 @@ static void nvme_reset_work(struct work_struct *work) struct nvme_dev *dev = container_of(work, struct nvme_dev, ctrl.reset_work); bool was_suspend = !!(dev->ctrl.ctrl_config & 
NVME_CC_SHN_NORMAL); - int result = -ENODEV; + int result; enum nvme_ctrl_state new_state = NVME_CTRL_LIVE; - if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING)) + if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING)) { + result = -ENODEV; goto out; + } /* * If we're called to reset a live controller first shut it down before @@ -2294,6 +2296,7 @@ static void nvme_reset_work(struct work_struct *work) if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_CONNECTING)) { dev_warn(dev->ctrl.device, "failed to mark controller CONNECTING\n"); + result = -EBUSY; goto out; } @@ -2354,6 +2357,7 @@ static void nvme_reset_work(struct work_struct *work) if (!nvme_change_ctrl_state(&dev->ctrl, new_state)) { dev_warn(dev->ctrl.device, "failed to mark controller state %d\n", new_state); + result = -ENODEV; goto out; } @@ -2464,7 +2468,7 @@ static void nvme_async_probe(void *data, async_cookie_t cookie) { struct nvme_dev *dev = data; - nvme_reset_ctrl_sync(&dev->ctrl); + flush_work(&dev->ctrl.reset_work); flush_work(&dev->ctrl.scan_work); nvme_put_ctrl(&dev->ctrl); } @@ -2531,6 +2535,7 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id) dev_info(dev->ctrl.device, "pci function %s\n", dev_name(&pdev->dev)); + nvme_reset_ctrl(&dev->ctrl); nvme_get_ctrl(&dev->ctrl); async_schedule(nvme_async_probe, dev); diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c index 9908082b32c4b42647085f18bd5dcdf1e0184652..137a27fa369cbf8bc6495878deb1127df4820b2b 100644 --- a/drivers/nvme/target/loop.c +++ b/drivers/nvme/target/loop.c @@ -678,6 +678,14 @@ static void nvme_loop_remove_port(struct nvmet_port *port) mutex_lock(&nvme_loop_ports_mutex); list_del_init(&port->entry); mutex_unlock(&nvme_loop_ports_mutex); + + /* + * Ensure any ctrls that are in the process of being + * deleted are in fact deleted before we return + * and free the port. This is to prevent active + * ctrls from using a port after it's freed. 
+ */ + flush_workqueue(nvme_delete_wq); } static const struct nvmet_fabrics_ops nvme_loop_ops = { diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile index c40b7b982ea48a4c83d32587a59f7a8ad17896b1..71be2a0a7e68d7fedd5efedf15a15899c429e40f 100644 --- a/drivers/pci/controller/Makefile +++ b/drivers/pci/controller/Makefile @@ -32,6 +32,9 @@ obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o ifdef CONFIG_ARM obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb-bounce.o endif +ifdef CONFIG_ARM64 +obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb-bounce64.o +endif obj-$(CONFIG_VMD) += vmd.o # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c index a32d6dde7a579f6c21fe3daf79f10c566b48bb8d..412524aa1fdec4e4e97e84d9d0972590d6f95f3d 100644 --- a/drivers/pci/controller/dwc/pci-dra7xx.c +++ b/drivers/pci/controller/dwc/pci-dra7xx.c @@ -26,6 +26,7 @@ #include #include #include +#include #include "../../pci.h" #include "pcie-designware.h" diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c index 4352c1cb926d587532fd57d644b911b739026069..87a8887fd4d3e4936ecc448dc645f9c531ddd391 100644 --- a/drivers/pci/controller/dwc/pcie-qcom.c +++ b/drivers/pci/controller/dwc/pcie-qcom.c @@ -178,6 +178,8 @@ static void qcom_ep_reset_assert(struct qcom_pcie *pcie) static void qcom_ep_reset_deassert(struct qcom_pcie *pcie) { + /* Ensure that PERST has been asserted for at least 100 ms */ + msleep(100); gpiod_set_value_cansleep(pcie->reset, 0); usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); } diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index 808a182830e5cd90648652c572557bbc7322e569..5dadc964ad3b4aee9b845b931079686ad0aa8c6b 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -1880,6 +1880,7 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus, static void hv_eject_device_work(struct work_struct *work) { struct pci_eject_response *ejct_pkt; + struct hv_pcibus_device *hbus; struct hv_pci_dev *hpdev; struct pci_dev *pdev; unsigned long flags; @@ -1890,6 +1891,7 @@ static void hv_eject_device_work(struct work_struct *work) } ctxt; hpdev = container_of(work, struct hv_pci_dev, wrk); + hbus = hpdev->hbus; WARN_ON(hpdev->state != hv_pcichild_ejecting); @@ -1900,8 +1902,7 @@ static void hv_eject_device_work(struct work_struct *work) * because hbus->pci_bus may not exist yet. 
*/ wslot = wslot_to_devfn(hpdev->desc.win_slot.slot); - pdev = pci_get_domain_bus_and_slot(hpdev->hbus->sysdata.domain, 0, - wslot); + pdev = pci_get_domain_bus_and_slot(hbus->sysdata.domain, 0, wslot); if (pdev) { pci_lock_rescan_remove(); pci_stop_and_remove_bus_device(pdev); @@ -1909,9 +1910,9 @@ static void hv_eject_device_work(struct work_struct *work) pci_unlock_rescan_remove(); } - spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags); + spin_lock_irqsave(&hbus->device_list_lock, flags); list_del(&hpdev->list_entry); - spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags); + spin_unlock_irqrestore(&hbus->device_list_lock, flags); if (hpdev->pci_slot) pci_destroy_slot(hpdev->pci_slot); @@ -1920,7 +1921,7 @@ static void hv_eject_device_work(struct work_struct *work) ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message; ejct_pkt->message_type.type = PCI_EJECTION_COMPLETE; ejct_pkt->wslot.slot = hpdev->desc.win_slot.slot; - vmbus_sendpacket(hpdev->hbus->hdev->channel, ejct_pkt, + vmbus_sendpacket(hbus->hdev->channel, ejct_pkt, sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt, VM_PKT_DATA_INBAND, 0); @@ -1929,7 +1930,9 @@ static void hv_eject_device_work(struct work_struct *work) /* For the two refs got in new_pcichild_device() */ put_pcichild(hpdev); put_pcichild(hpdev); - put_hvpcibus(hpdev->hbus); + /* hpdev has been freed. Do not use it any more. */ + + put_hvpcibus(hbus); } /** diff --git a/drivers/pci/controller/pcie-brcmstb-bounce.h b/drivers/pci/controller/pcie-brcmstb-bounce.h index 2fe20a14d03522b950fc256d1023ad109a1269c9..7caa0781329b552333edfbd12265ca90fdf51350 100644 --- a/drivers/pci/controller/pcie-brcmstb-bounce.h +++ b/drivers/pci/controller/pcie-brcmstb-bounce.h @@ -6,7 +6,7 @@ #ifndef _PCIE_BRCMSTB_BOUNCE_H #define _PCIE_BRCMSTB_BOUNCE_H -#ifdef CONFIG_ARM +#if defined(CONFIG_ARM) || defined(CONFIG_ARM64) int brcm_pcie_bounce_init(struct device *dev, unsigned long buffer_size, dma_addr_t threshold); diff --git a/drivers/pci/controller/pcie-brcmstb-bounce64.c b/drivers/pci/controller/pcie-brcmstb-bounce64.c new file mode 100644 index 0000000000000000000000000000000000000000..d9f1a46fc3315fe79ed1028b3d33a8410a3071da --- /dev/null +++ b/drivers/pci/controller/pcie-brcmstb-bounce64.c @@ -0,0 +1,576 @@ +/* + * This code started out as a version of arch/arm/common/dmabounce.c, + * modified to cope with highmem pages. Now it has been changed heavily - + * it now preallocates a large block (currently 4MB) and carves it up + * sequentially in ring fashion, and DMA is used to copy the data - to the + * point where very little of the original remains. + * + * Copyright (C) 2019 Raspberry Pi (Trading) Ltd. + * + * Original version by Brad Parker (brad@heeltoe.com) + * Re-written by Christopher Hoover + * Made generic by Deepak Saxena + * + * Copyright (C) 2002 Hewlett Packard Company. + * Copyright (C) 2004 MontaVista Software, Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. 
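The pci-hyperv hunk above is a use-after-free fix: hv_eject_device_work() now reads hbus out of hpdev once, before the final put_pcichild() can free hpdev, and every later access goes through the cached pointer. A small sketch of that "copy out what you still need, then drop the last reference" pattern, with invented types in place of the hv_pci structures:

#include <stdio.h>
#include <stdlib.h>

struct bus;

struct child {
	struct bus *bus;	/* back-pointer that outlives the child */
	int refcount;
};

struct bus {
	const char *name;
};

static void put_child(struct child *c)
{
	if (--c->refcount == 0)
		free(c);
}

static void eject(struct child *c)
{
	/* Cache the back-pointer first; c may be freed below. */
	struct bus *bus = c->bus;

	put_child(c);		/* may drop the last reference */

	/* Only the cached pointer is used from here on. */
	printf("notify bus %s\n", bus->name);
}

int main(void)
{
	struct bus b = { "pci0" };
	struct child *c = malloc(sizeof(*c));

	if (!c)
		return 1;
	c->bus = &b;
	c->refcount = 1;
	eject(c);
	return 0;
}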
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#define STATS + +#ifdef STATS +#define DO_STATS(X) do { X ; } while (0) +#else +#define DO_STATS(X) do { } while (0) +#endif + +/* ************************************************** */ + +struct safe_buffer { + struct list_head node; + + /* original request */ + size_t size; + int direction; + + struct dmabounce_pool *pool; + void *safe; + dma_addr_t unsafe_dma_addr; + dma_addr_t safe_dma_addr; +}; + +struct dmabounce_pool { + unsigned long pages; + void *virt_addr; + dma_addr_t dma_addr; + unsigned long *alloc_map; + unsigned long alloc_pos; + spinlock_t lock; + struct device *dev; + unsigned long num_pages; +#ifdef STATS + size_t max_size; + unsigned long num_bufs; + unsigned long max_bufs; + unsigned long max_pages; +#endif +}; + +struct dmabounce_device_info { + struct device *dev; + dma_addr_t threshold; + struct list_head safe_buffers; + struct dmabounce_pool pool; + rwlock_t lock; +#ifdef STATS + unsigned long map_count; + unsigned long unmap_count; + unsigned long sync_dev_count; + unsigned long sync_cpu_count; + unsigned long fail_count; + int attr_res; +#endif +}; + +static struct dmabounce_device_info *g_dmabounce_device_info; + +extern int bcm2838_dma40_memcpy_init(void); +extern void bcm2838_dma40_memcpy(dma_addr_t dst, dma_addr_t src, size_t size); + +#ifdef STATS +static ssize_t +bounce_show(struct device *dev, struct device_attribute *attr, char *buf) +{ + struct dmabounce_device_info *device_info = g_dmabounce_device_info; + return sprintf(buf, "m:%lu/%lu s:%lu/%lu f:%lu s:%zu b:%lu/%lu a:%lu/%lu\n", + device_info->map_count, + device_info->unmap_count, + device_info->sync_dev_count, + device_info->sync_cpu_count, + device_info->fail_count, + device_info->pool.max_size, + device_info->pool.num_bufs, + device_info->pool.max_bufs, + device_info->pool.num_pages * PAGE_SIZE, + device_info->pool.max_pages * PAGE_SIZE); +} + +static DEVICE_ATTR(dmabounce_stats, 0444, bounce_show, NULL); +#endif + +static int bounce_create(struct dmabounce_pool *pool, struct device *dev, + unsigned long buffer_size) +{ + int ret = -ENOMEM; + pool->pages = (buffer_size + PAGE_SIZE - 1)/PAGE_SIZE; + pool->alloc_map = bitmap_zalloc(pool->pages, GFP_KERNEL); + if (!pool->alloc_map) + goto err_bitmap; + pool->virt_addr = dma_alloc_coherent(dev, pool->pages * PAGE_SIZE, + &pool->dma_addr, GFP_KERNEL); + if (!pool->virt_addr) + goto err_dmabuf; + + pool->alloc_pos = 0; + spin_lock_init(&pool->lock); + pool->dev = dev; + pool->num_pages = 0; + + DO_STATS(pool->max_size = 0); + DO_STATS(pool->num_bufs = 0); + DO_STATS(pool->max_bufs = 0); + DO_STATS(pool->max_pages = 0); + + return 0; + +err_dmabuf: + bitmap_free(pool->alloc_map); +err_bitmap: + return ret; +} + +static void bounce_destroy(struct dmabounce_pool *pool) +{ + dma_free_coherent(pool->dev, pool->pages * PAGE_SIZE, pool->virt_addr, + pool->dma_addr); + + bitmap_free(pool->alloc_map); +} + +static void *bounce_alloc(struct dmabounce_pool *pool, size_t size, + dma_addr_t *dmaaddrp) +{ + unsigned long pages; + unsigned long flags; + unsigned long pos; + + pages = (size + PAGE_SIZE - 1)/PAGE_SIZE; + + DO_STATS(pool->max_size = max(size, pool->max_size)); + + spin_lock_irqsave(&pool->lock, flags); + pos = bitmap_find_next_zero_area(pool->alloc_map, pool->pages, + pool->alloc_pos, pages, 0); + /* If not found, try from the start */ + if (pos >= pool->pages && pool->alloc_pos) + pos = 
bitmap_find_next_zero_area(pool->alloc_map, pool->pages, + 0, pages, 0); + + if (pos >= pool->pages) { + spin_unlock_irqrestore(&pool->lock, flags); + return NULL; + } + + bitmap_set(pool->alloc_map, pos, pages); + pool->alloc_pos = (pos + pages) % pool->pages; + pool->num_pages += pages; + + DO_STATS(pool->num_bufs++); + DO_STATS(pool->max_bufs = max(pool->num_bufs, pool->max_bufs)); + DO_STATS(pool->max_pages = max(pool->num_pages, pool->max_pages)); + + spin_unlock_irqrestore(&pool->lock, flags); + + *dmaaddrp = pool->dma_addr + pos * PAGE_SIZE; + + return pool->virt_addr + pos * PAGE_SIZE; +} + +static void +bounce_free(struct dmabounce_pool *pool, void *buf, size_t size) +{ + unsigned long pages; + unsigned long flags; + unsigned long pos; + + pages = (size + PAGE_SIZE - 1)/PAGE_SIZE; + pos = (buf - pool->virt_addr)/PAGE_SIZE; + + BUG_ON((buf - pool->virt_addr) & (PAGE_SIZE - 1)); + + spin_lock_irqsave(&pool->lock, flags); + bitmap_clear(pool->alloc_map, pos, pages); + pool->num_pages -= pages; + if (pool->num_pages == 0) + pool->alloc_pos = 0; + DO_STATS(pool->num_bufs--); + spin_unlock_irqrestore(&pool->lock, flags); +} + +/* allocate a 'safe' buffer and keep track of it */ +static struct safe_buffer * +alloc_safe_buffer(struct dmabounce_device_info *device_info, + dma_addr_t dma_addr, size_t size, enum dma_data_direction dir) +{ + struct safe_buffer *buf; + struct dmabounce_pool *pool = &device_info->pool; + struct device *dev = device_info->dev; + unsigned long flags; + + /* + * Although one might expect this to be called in thread context, + * using GFP_KERNEL here leads to hard-to-debug lockups. in_atomic() + * was previously used to select the appropriate allocation mode, + * but this is unsafe. + */ + buf = kmalloc(sizeof(struct safe_buffer), GFP_ATOMIC); + if (!buf) { + dev_warn(dev, "%s: kmalloc failed\n", __func__); + return NULL; + } + + buf->unsafe_dma_addr = dma_addr; + buf->size = size; + buf->direction = dir; + buf->pool = pool; + + buf->safe = bounce_alloc(pool, size, &buf->safe_dma_addr); + + if (!buf->safe) { + dev_warn(dev, + "%s: could not alloc dma memory (size=%d)\n", + __func__, size); + kfree(buf); + return NULL; + } + + write_lock_irqsave(&device_info->lock, flags); + list_add(&buf->node, &device_info->safe_buffers); + write_unlock_irqrestore(&device_info->lock, flags); + + return buf; +} + +/* determine if a buffer is from our "safe" pool */ +static struct safe_buffer * +find_safe_buffer(struct dmabounce_device_info *device_info, + dma_addr_t safe_dma_addr) +{ + struct safe_buffer *b, *rb = NULL; + unsigned long flags; + + read_lock_irqsave(&device_info->lock, flags); + + list_for_each_entry(b, &device_info->safe_buffers, node) + if (b->safe_dma_addr <= safe_dma_addr && + b->safe_dma_addr + b->size > safe_dma_addr) { + rb = b; + break; + } + + read_unlock_irqrestore(&device_info->lock, flags); + return rb; +} + +static void +free_safe_buffer(struct dmabounce_device_info *device_info, + struct safe_buffer *buf) +{ + unsigned long flags; + + write_lock_irqsave(&device_info->lock, flags); + list_del(&buf->node); + write_unlock_irqrestore(&device_info->lock, flags); + + bounce_free(buf->pool, buf->safe, buf->size); + + kfree(buf); +} + +/* ************************************************** */ + +static struct safe_buffer * +find_safe_buffer_dev(struct device *dev, dma_addr_t dma_addr, const char *where) +{ + if (!dev || !g_dmabounce_device_info) + return NULL; + if (dma_mapping_error(dev, dma_addr)) { + dev_err(dev, "Trying to %s invalid mapping\n", where); + 
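bounce_alloc() above hands out page-sized slots from a single preallocated buffer by scanning an allocation bitmap from the previous cursor position and retrying once from the start when the tail is exhausted, which is the "ring fashion" carving described in the file header. A simplified userspace sketch of that rotating first-fit search (a byte per page instead of a real bitmap, and no locking):

#include <stdio.h>
#include <string.h>

#define POOL_PAGES 16

static unsigned char used[POOL_PAGES];	/* 1 = page slot allocated */
static int alloc_pos;			/* where the next search starts */

/* First run of 'pages' free slots at or after 'from'; -1 if none. */
static int find_run(int from, int pages)
{
	int pos, n;

	for (pos = from; pos + pages <= POOL_PAGES; pos++) {
		for (n = 0; n < pages && !used[pos + n]; n++)
			;
		if (n == pages)
			return pos;
	}
	return -1;
}

static int pool_alloc(int pages)
{
	int pos = find_run(alloc_pos, pages);

	/* Nothing after the cursor: retry once from the start (wrap). */
	if (pos < 0 && alloc_pos)
		pos = find_run(0, pages);
	if (pos < 0)
		return -1;

	memset(&used[pos], 1, pages);
	alloc_pos = (pos + pages) % POOL_PAGES;
	return pos;
}

static void pool_free(int pos, int pages)
{
	memset(&used[pos], 0, pages);
}

int main(void)
{
	int a = pool_alloc(12);		/* pages 0..11  */
	int b = pool_alloc(2);		/* pages 12..13 */

	pool_free(a, 12);
	/* Only 2 pages are left after the cursor, so the search wraps. */
	printf("a=%d b=%d c=%d\n", a, b, pool_alloc(4));	/* 0 12 0 */
	return 0;
}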
return NULL; + } + return find_safe_buffer(g_dmabounce_device_info, dma_addr); +} + +static dma_addr_t +map_single(struct device *dev, struct safe_buffer *buf, size_t size, + enum dma_data_direction dir, unsigned long attrs) +{ + BUG_ON(buf->size != size); + BUG_ON(buf->direction != dir); + + dev_dbg(dev, "map: %llx->%llx\n", (u64)buf->unsafe_dma_addr, + (u64)buf->safe_dma_addr); + + if ((dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL) && + !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + bcm2838_dma40_memcpy(buf->safe_dma_addr, buf->unsafe_dma_addr, + size); + + return buf->safe_dma_addr; +} + +static dma_addr_t +unmap_single(struct device *dev, struct safe_buffer *buf, size_t size, + enum dma_data_direction dir, unsigned long attrs) +{ + BUG_ON(buf->size != size); + BUG_ON(buf->direction != dir); + + if ((dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) && + !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) { + dev_dbg(dev, "unmap: %llx->%llx\n", (u64)buf->safe_dma_addr, + (u64)buf->unsafe_dma_addr); + + bcm2838_dma40_memcpy(buf->unsafe_dma_addr, buf->safe_dma_addr, + size); + } + return buf->unsafe_dma_addr; +} + +/* ************************************************** */ + +/* + * see if a buffer address is in an 'unsafe' range. if it is + * allocate a 'safe' buffer and copy the unsafe buffer into it. + * substitute the safe buffer for the unsafe one. + * (basically move the buffer from an unsafe area to a safe one) + */ +static dma_addr_t +dmabounce_map_page(struct device *dev, struct page *page, unsigned long offset, + size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + struct dmabounce_device_info *device_info = g_dmabounce_device_info; + dma_addr_t dma_addr; + + dma_addr = phys_to_dma(dev, page_to_phys(page)) + offset; + + swiotlb_sync_single_for_device(dev, dma_addr, size, dir); + if (!is_device_dma_coherent(dev)) + __dma_map_area(phys_to_virt(dma_to_phys(dev, dma_addr)), size, dir); + + if (device_info && (dma_addr + size) > device_info->threshold) { + struct safe_buffer *buf; + + buf = alloc_safe_buffer(device_info, dma_addr, size, dir); + if (!buf) { + DO_STATS(device_info->fail_count++); + return (~(dma_addr_t)0x0); + } + + DO_STATS(device_info->map_count++); + + dma_addr = map_single(dev, buf, size, dir, attrs); + } + return dma_addr; +} + +/* + * see if a mapped address was really a "safe" buffer and if so, copy + * the data from the safe buffer back to the unsafe buffer and free up + * the safe buffer. (basically return things back to the way they + * should be) + */ +static void +dmabounce_unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size, + enum dma_data_direction dir, unsigned long attrs) +{ + struct safe_buffer *buf; + + buf = find_safe_buffer_dev(dev, dma_addr, __func__); + if (buf) { + DO_STATS(g_dmabounce_device_info->unmap_count++); + dma_addr = unmap_single(dev, buf, size, dir, attrs); + free_safe_buffer(g_dmabounce_device_info, buf); + } + + if (!is_device_dma_coherent(dev)) + __dma_unmap_area(phys_to_virt(dma_to_phys(dev, dma_addr)), size, dir); + swiotlb_sync_single_for_cpu(dev, dma_addr, size, dir); +} + +/* + * A version of dmabounce_map_page that assumes the mapping has already + * been created - intended for streaming operation. 
+ */ +static void +dmabounce_sync_for_device(struct device *dev, dma_addr_t dma_addr, size_t size, + enum dma_data_direction dir) +{ + struct safe_buffer *buf; + + swiotlb_sync_single_for_device(dev, dma_addr, size, dir); + if (!is_device_dma_coherent(dev)) + __dma_map_area(phys_to_virt(dma_to_phys(dev, dma_addr)), size, dir); + + buf = find_safe_buffer_dev(dev, dma_addr, __func__); + if (buf) { + DO_STATS(g_dmabounce_device_info->sync_dev_count++); + map_single(dev, buf, size, dir, 0); + } +} + +/* + * A version of dmabounce_unmap_page that doesn't destroy the mapping - + * intended for streaming operation. + */ +static void +dmabounce_sync_for_cpu(struct device *dev, dma_addr_t dma_addr, + size_t size, enum dma_data_direction dir) +{ + struct safe_buffer *buf; + + buf = find_safe_buffer_dev(dev, dma_addr, __func__); + if (buf) { + DO_STATS(g_dmabounce_device_info->sync_cpu_count++); + dma_addr = unmap_single(dev, buf, size, dir, 0); + } + + if (!is_device_dma_coherent(dev)) + __dma_unmap_area(phys_to_virt(dma_to_phys(dev, dma_addr)), size, dir); + swiotlb_sync_single_for_cpu(dev, dma_addr, size, dir); +} + +static int dmabounce_dma_supported(struct device *dev, u64 dma_mask) +{ + if (g_dmabounce_device_info) + return 0; + + return swiotlb_dma_supported(dev, dma_mask); +} + +static int dmabounce_mapping_error(struct device *dev, dma_addr_t dma_addr) +{ + return swiotlb_dma_mapping_error(dev, dma_addr); +} + +static const struct dma_map_ops dmabounce_ops = { + .alloc = arm64_dma_alloc, + .free = arm64_dma_free, + .mmap = arm64_dma_mmap, + .get_sgtable = arm64_dma_get_sgtable, + .map_page = dmabounce_map_page, + .unmap_page = dmabounce_unmap_page, + .sync_single_for_cpu = dmabounce_sync_for_cpu, + .sync_single_for_device = dmabounce_sync_for_device, + .map_sg = arm64_dma_map_sg, + .unmap_sg = arm64_dma_unmap_sg, + .sync_sg_for_cpu = arm64_dma_sync_sg_for_cpu, + .sync_sg_for_device = arm64_dma_sync_sg_for_device, + .dma_supported = dmabounce_dma_supported, + .mapping_error = dmabounce_mapping_error, +}; + +int brcm_pcie_bounce_init(struct device *dev, + unsigned long buffer_size, + dma_addr_t threshold) +{ + struct dmabounce_device_info *device_info; + int ret; + + /* Only support a single client */ + if (g_dmabounce_device_info) + return -EBUSY; + + ret = bcm2838_dma40_memcpy_init(); + if (ret) + return ret; + + device_info = kmalloc(sizeof(struct dmabounce_device_info), GFP_ATOMIC); + if (!device_info) { + dev_err(dev, + "Could not allocated dmabounce_device_info\n"); + return -ENOMEM; + } + + ret = bounce_create(&device_info->pool, dev, buffer_size); + if (ret) { + dev_err(dev, + "dmabounce: could not allocate %ld byte DMA pool\n", + buffer_size); + goto err_bounce; + } + + device_info->dev = dev; + device_info->threshold = threshold; + INIT_LIST_HEAD(&device_info->safe_buffers); + rwlock_init(&device_info->lock); + + DO_STATS(device_info->map_count = 0); + DO_STATS(device_info->unmap_count = 0); + DO_STATS(device_info->sync_dev_count = 0); + DO_STATS(device_info->sync_cpu_count = 0); + DO_STATS(device_info->fail_count = 0); + DO_STATS(device_info->attr_res = + device_create_file(dev, &dev_attr_dmabounce_stats)); + + g_dmabounce_device_info = device_info; + + dev_err(dev, "dmabounce: initialised - %ld kB, threshold %pad\n", + buffer_size / 1024, &threshold); + + return 0; + + err_bounce: + kfree(device_info); + return ret; +} +EXPORT_SYMBOL(brcm_pcie_bounce_init); + +void brcm_pcie_bounce_uninit(struct device *dev) +{ + struct dmabounce_device_info *device_info = g_dmabounce_device_info; 
+ + g_dmabounce_device_info = NULL; + + if (!device_info) { + dev_warn(dev, + "Never registered with dmabounce but attempting" + "to unregister!\n"); + return; + } + + if (!list_empty(&device_info->safe_buffers)) { + dev_err(dev, + "Removing from dmabounce with pending buffers!\n"); + BUG(); + } + + bounce_destroy(&device_info->pool); + + DO_STATS(if (device_info->attr_res == 0) + device_remove_file(dev, &dev_attr_dmabounce_stats)); + + kfree(device_info); +} +EXPORT_SYMBOL(brcm_pcie_bounce_uninit); + +int brcm_pcie_bounce_register_dev(struct device *dev) +{ + set_dma_ops(dev, &dmabounce_ops); + + return 0; +} +EXPORT_SYMBOL(brcm_pcie_bounce_register_dev); + +MODULE_AUTHOR("Phil Elwell "); +MODULE_DESCRIPTION("Dedicate DMA bounce support for pcie-brcmstb"); +MODULE_LICENSE("GPL"); diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c index f99ab05ab4778a5609a57e335c8ad3747f8d82f9..b1b53ad41f19a1ba057c4096aea2ca8cf91b2a40 100644 --- a/drivers/pci/controller/pcie-brcmstb.c +++ b/drivers/pci/controller/pcie-brcmstb.c @@ -617,28 +617,6 @@ static const struct dma_map_ops brcm_dma_ops = { static void brcm_set_dma_ops(struct device *dev) { - int ret; - - if (IS_ENABLED(CONFIG_ARM64)) { - /* - * We are going to invoke get_dma_ops(). That - * function, at this point in time, invokes - * get_arch_dma_ops(), and for ARM64 that function - * returns a pointer to dummy_dma_ops. So then we'd - * like to call arch_setup_dma_ops(), but that isn't - * exported. Instead, we call of_dma_configure(), - * which is exported, and this calls - * arch_setup_dma_ops(). Once we do this the call to - * get_dma_ops() will work properly because - * dev->dma_ops will be set. - */ - ret = of_dma_configure(dev, dev->of_node, true); - if (ret) { - dev_err(dev, "of_dma_configure() failed: %d\n", ret); - return; - } - } - arch_dma_ops = get_dma_ops(dev); if (!arch_dma_ops) { dev_err(dev, "failed to get arch_dma_ops\n"); @@ -657,12 +635,12 @@ static int brcmstb_platform_notifier(struct notifier_block *nb, extern unsigned long max_pfn; struct device *dev = __dev; const char *rc_name = "0000:00:00.0"; + int ret; switch (event) { case BUS_NOTIFY_ADD_DEVICE: if (max_pfn > (bounce_threshold/PAGE_SIZE) && strcmp(dev->kobj.name, rc_name)) { - int ret; ret = brcm_pcie_bounce_register_dev(dev); if (ret) { @@ -671,6 +649,12 @@ static int brcmstb_platform_notifier(struct notifier_block *nb, ret); return ret; } + } else if (IS_ENABLED(CONFIG_ARM64)) { + ret = of_dma_configure(dev, dev->of_node, true); + if (ret) { + dev_err(dev, "of_dma_configure() failed: %d\n", ret); + return ret; + } } brcm_set_dma_ops(dev); return NOTIFY_OK; diff --git a/drivers/pci/controller/pcie-mobiveil.c b/drivers/pci/controller/pcie-mobiveil.c index a939e8d31735a7bc0cb56b3fdc231dae50751ab0..a2d1e89d48674842e8e0afb0b711ec6370c602fb 100644 --- a/drivers/pci/controller/pcie-mobiveil.c +++ b/drivers/pci/controller/pcie-mobiveil.c @@ -508,6 +508,12 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie) return err; } + /* setup bus numbers */ + value = csr_readl(pcie, PCI_PRIMARY_BUS); + value &= 0xff000000; + value |= 0x00ff0100; + csr_writel(pcie, value, PCI_PRIMARY_BUS); + /* * program Bus Master Enable Bit in Command Register in PAB Config * Space @@ -547,7 +553,7 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie) resource_size(pcie->ob_io_res)); /* memory inbound translation window */ - program_ib_windows(pcie, WIN_NUM_1, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE); + program_ib_windows(pcie, WIN_NUM_0, 0, 
MEM_WINDOW_TYPE, IB_WIN_SIZE); /* Get the I/O and memory ranges from DT */ resource_list_for_each_entry_safe(win, tmp, &pcie->resources) { @@ -559,11 +565,18 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie) if (type) { /* configure outbound translation window */ program_ob_windows(pcie, pcie->ob_wins_configured, - win->res->start, 0, type, - resource_size(win->res)); + win->res->start, + win->res->start - win->offset, + type, resource_size(win->res)); } } + /* fixup for PCIe class register */ + value = csr_readl(pcie, PAB_INTP_AXI_PIO_CLASS); + value &= 0xff; + value |= (PCI_CLASS_BRIDGE_PCI << 16); + csr_writel(pcie, value, PAB_INTP_AXI_PIO_CLASS); + /* setup MSI hardware registers */ mobiveil_pcie_enable_msi(pcie); @@ -804,9 +817,6 @@ static int mobiveil_pcie_probe(struct platform_device *pdev) goto error; } - /* fixup for PCIe class register */ - csr_writel(pcie, 0x060402ab, PAB_INTP_AXI_PIO_CLASS); - /* initialize the IRQ domains */ ret = mobiveil_pcie_init_irq_domain(pcie); if (ret) { diff --git a/drivers/pci/controller/pcie-xilinx-nwl.c b/drivers/pci/controller/pcie-xilinx-nwl.c index fb32840ce8e66ac75f7c164a402ac9a0e0ae094b..4850a1b8eec127628eb390a3cf5401078dbc3e2d 100644 --- a/drivers/pci/controller/pcie-xilinx-nwl.c +++ b/drivers/pci/controller/pcie-xilinx-nwl.c @@ -483,15 +483,13 @@ static int nwl_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, int i; mutex_lock(&msi->lock); - bit = bitmap_find_next_zero_area(msi->bitmap, INT_PCI_MSI_NR, 0, - nr_irqs, 0); - if (bit >= INT_PCI_MSI_NR) { + bit = bitmap_find_free_region(msi->bitmap, INT_PCI_MSI_NR, + get_count_order(nr_irqs)); + if (bit < 0) { mutex_unlock(&msi->lock); return -ENOSPC; } - bitmap_set(msi->bitmap, bit, nr_irqs); - for (i = 0; i < nr_irqs; i++) { irq_domain_set_info(domain, virq + i, bit + i, &nwl_irq_chip, domain->host_data, handle_simple_irq, @@ -509,7 +507,8 @@ static void nwl_irq_domain_free(struct irq_domain *domain, unsigned int virq, struct nwl_msi *msi = &pcie->msi; mutex_lock(&msi->lock); - bitmap_clear(msi->bitmap, data->hwirq, nr_irqs); + bitmap_release_region(msi->bitmap, data->hwirq, + get_count_order(nr_irqs)); mutex_unlock(&msi->lock); } diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c index 33f3f475e5c6bc7bd7288cd17df61af5ff1446ce..956ee7527d2c440dbb39baba4c822950f0f8117f 100644 --- a/drivers/pci/pci-driver.c +++ b/drivers/pci/pci-driver.c @@ -414,6 +414,9 @@ static int pci_device_probe(struct device *dev) struct pci_dev *pci_dev = to_pci_dev(dev); struct pci_driver *drv = to_pci_driver(dev->driver); + if (!pci_device_can_probe(pci_dev)) + return -ENODEV; + pci_assign_irq(pci_dev); error = pcibios_alloc_irq(pci_dev); @@ -421,12 +424,10 @@ static int pci_device_probe(struct device *dev) return error; pci_dev_get(pci_dev); - if (pci_device_can_probe(pci_dev)) { - error = __pci_device_probe(drv, pci_dev); - if (error) { - pcibios_free_irq(pci_dev); - pci_dev_put(pci_dev); - } + error = __pci_device_probe(drv, pci_dev); + if (error) { + pcibios_free_irq(pci_dev); + pci_dev_put(pci_dev); } return error; diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c index 9ecfe13157c09660ec627076b470e118e794c405..1edf5a1836ea92880b74a13c421eadf891dc3404 100644 --- a/drivers/pci/pci-sysfs.c +++ b/drivers/pci/pci-sysfs.c @@ -478,7 +478,7 @@ static ssize_t remove_store(struct device *dev, struct device_attribute *attr, pci_stop_and_remove_bus_device_locked(to_pci_dev(dev)); return count; } -static struct device_attribute dev_remove_attr = __ATTR(remove, +static struct 
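The pcie-xilinx-nwl hunk above replaces bitmap_find_next_zero_area()/bitmap_set() with bitmap_find_free_region() and get_count_order(nr_irqs). The usual reason for this pattern is multi-message MSI: the device selects a vector by modifying the low-order bits of the message data, so a request for N vectors needs a contiguous, power-of-two sized, naturally aligned block. A rough userspace sketch of such an aligned region allocator (toy code, not the kernel bitmap API):

#include <stdio.h>

#define NVEC 32

static unsigned char busy[NVEC];	/* one flag per MSI vector */

/* Smallest 'order' such that (1 << order) >= n, like get_count_order(). */
static int count_order(unsigned int n)
{
	int order = 0;

	while ((1u << order) < n)
		order++;
	return order;
}

/*
 * Allocate 1 << order vectors at a naturally aligned offset, i.e. the
 * start is a multiple of the region size.  Returns the first vector or -1.
 */
static int alloc_region(int order)
{
	unsigned int size = 1u << order, start, i;

	for (start = 0; start + size <= NVEC; start += size) {
		for (i = 0; i < size && !busy[start + i]; i++)
			;
		if (i != size)
			continue;
		for (i = 0; i < size; i++)
			busy[start + i] = 1;
		return (int)start;
	}
	return -1;
}

int main(void)
{
	/*
	 * A request is rounded up to the next power of two and placed at a
	 * multiple of that size, so the device's low-bit vector encoding
	 * always lands inside the allocated block.
	 */
	printf("5 vectors -> region of %d at %d\n",
	       1 << count_order(5), alloc_region(count_order(5)));
	printf("3 vectors -> region of %d at %d\n",
	       1 << count_order(3), alloc_region(count_order(3)));
	return 0;
}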
device_attribute dev_remove_attr = __ATTR_IGNORE_LOCKDEP(remove, (S_IWUSR|S_IWGRP), NULL, remove_store); diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 61f2ef28ea1c73ed154ab0593918828a817e5f25..c65465385d8c81859e22ca3966cc27d1469256b5 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -2004,6 +2004,13 @@ static void pci_pme_list_scan(struct work_struct *work) */ if (bridge && bridge->current_state != PCI_D0) continue; + /* + * If the device is in D3cold it should not be + * polled either. + */ + if (pme_dev->dev->current_state == PCI_D3cold) + continue; + pci_pme_wakeup(pme_dev->dev, NULL); } else { list_del(&pme_dev->list); diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c index d0b7dd8fb184b041446707dceccda87588ba45a2..77995df7fe547f9e3adc00a148bebf1e65c974a1 100644 --- a/drivers/perf/arm_pmu.c +++ b/drivers/perf/arm_pmu.c @@ -730,8 +730,8 @@ static int cpu_pm_pmu_notify(struct notifier_block *b, unsigned long cmd, cpu_pm_pmu_setup(armpmu, cmd); break; case CPU_PM_EXIT: - cpu_pm_pmu_setup(armpmu, cmd); case CPU_PM_ENTER_FAILED: + cpu_pm_pmu_setup(armpmu, cmd); armpmu->start(armpmu); break; default: diff --git a/drivers/phy/renesas/phy-rcar-gen2.c b/drivers/phy/renesas/phy-rcar-gen2.c index 97d4dd6ea9247d8fc3ec36ce4021ed17af1d0ac3..aa02b19b7e0e92ea4b42e40bd570d31a70f13071 100644 --- a/drivers/phy/renesas/phy-rcar-gen2.c +++ b/drivers/phy/renesas/phy-rcar-gen2.c @@ -288,6 +288,7 @@ static int rcar_gen2_phy_probe(struct platform_device *pdev) error = of_property_read_u32(np, "reg", &channel_num); if (error || channel_num > 2) { dev_err(dev, "Invalid \"reg\" property\n"); + of_node_put(np); return error; } channel->select_mask = select_mask[channel_num]; @@ -303,6 +304,7 @@ static int rcar_gen2_phy_probe(struct platform_device *pdev) &rcar_gen2_phy_ops); if (IS_ERR(phy->phy)) { dev_err(dev, "Failed to create PHY\n"); + of_node_put(np); return PTR_ERR(phy->phy); } phy_set_drvdata(phy->phy, phy); diff --git a/drivers/pinctrl/mediatek/mtk-eint.c b/drivers/pinctrl/mediatek/mtk-eint.c index a613e546717a8d52d42d0b9e9872a9d565dc9e96..564cfaee129d2a54f673642e0e406ec90bd089d6 100644 --- a/drivers/pinctrl/mediatek/mtk-eint.c +++ b/drivers/pinctrl/mediatek/mtk-eint.c @@ -113,6 +113,8 @@ static void mtk_eint_mask(struct irq_data *d) void __iomem *reg = mtk_eint_get_offset(eint, d->hwirq, eint->regs->mask_set); + eint->cur_mask[d->hwirq >> 5] &= ~mask; + writel(mask, reg); } @@ -123,6 +125,8 @@ static void mtk_eint_unmask(struct irq_data *d) void __iomem *reg = mtk_eint_get_offset(eint, d->hwirq, eint->regs->mask_clr); + eint->cur_mask[d->hwirq >> 5] |= mask; + writel(mask, reg); if (eint->dual_edge[d->hwirq]) @@ -217,19 +221,6 @@ static void mtk_eint_chip_write_mask(const struct mtk_eint *eint, } } -static void mtk_eint_chip_read_mask(const struct mtk_eint *eint, - void __iomem *base, u32 *buf) -{ - int port; - void __iomem *reg; - - for (port = 0; port < eint->hw->ports; port++) { - reg = base + eint->regs->mask + (port << 2); - buf[port] = ~readl_relaxed(reg); - /* Mask is 0 when irq is enabled, and 1 when disabled. 
*/ - } -} - static int mtk_eint_irq_request_resources(struct irq_data *d) { struct mtk_eint *eint = irq_data_get_irq_chip_data(d); @@ -318,7 +309,7 @@ static void mtk_eint_irq_handler(struct irq_desc *desc) struct irq_chip *chip = irq_desc_get_chip(desc); struct mtk_eint *eint = irq_desc_get_handler_data(desc); unsigned int status, eint_num; - int offset, index, virq; + int offset, mask_offset, index, virq; void __iomem *reg = mtk_eint_get_offset(eint, 0, eint->regs->stat); int dual_edge, start_level, curr_level; @@ -328,10 +319,24 @@ static void mtk_eint_irq_handler(struct irq_desc *desc) status = readl(reg); while (status) { offset = __ffs(status); + mask_offset = eint_num >> 5; index = eint_num + offset; virq = irq_find_mapping(eint->domain, index); status &= ~BIT(offset); + /* + * If we get an interrupt on pin that was only required + * for wake (but no real interrupt requested), mask the + * interrupt (as would mtk_eint_resume do anyway later + * in the resume sequence). + */ + if (eint->wake_mask[mask_offset] & BIT(offset) && + !(eint->cur_mask[mask_offset] & BIT(offset))) { + writel_relaxed(BIT(offset), reg - + eint->regs->stat + + eint->regs->mask_set); + } + dual_edge = eint->dual_edge[index]; if (dual_edge) { /* @@ -370,7 +375,6 @@ static void mtk_eint_irq_handler(struct irq_desc *desc) int mtk_eint_do_suspend(struct mtk_eint *eint) { - mtk_eint_chip_read_mask(eint, eint->base, eint->cur_mask); mtk_eint_chip_write_mask(eint, eint->base, eint->wake_mask); return 0; diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c index cecbce21d01f754ad8b5471d49ad70db19bf35b7..33c3eca0ece97f1c9aec1901554bb4dd5682b15b 100644 --- a/drivers/pinctrl/pinctrl-mcp23s08.c +++ b/drivers/pinctrl/pinctrl-mcp23s08.c @@ -889,6 +889,10 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev, if (ret < 0) goto fail; + ret = devm_gpiochip_add_data(dev, &mcp->chip, mcp); + if (ret < 0) + goto fail; + mcp->irq_controller = device_property_read_bool(dev, "interrupt-controller"); if (mcp->irq && mcp->irq_controller) { @@ -930,10 +934,6 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev, goto fail; } - ret = devm_gpiochip_add_data(dev, &mcp->chip, mcp); - if (ret < 0) - goto fail; - if (one_regmap_config) { mcp->pinctrl_desc.name = devm_kasprintf(dev, GFP_KERNEL, "mcp23xxx-pinctrl.%d", raw_chip_address); diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c index f4a61429e06e7bd28c20a9ff0e54cfbf41cee4fe..8d83817935dae789d8e81dacbb29e0e9f4b89c4c 100644 --- a/drivers/pinctrl/pinctrl-rockchip.c +++ b/drivers/pinctrl/pinctrl-rockchip.c @@ -3172,6 +3172,7 @@ static int rockchip_get_bank_data(struct rockchip_pin_bank *bank, base, &rockchip_regmap_config); } + of_node_put(node); } bank->irq = irq_of_parse_and_map(bank->of_node, 0); diff --git a/drivers/pps/pps.c b/drivers/pps/pps.c index 8febacb8fc54df3965cf53b64d040ea705bb68cf..0951564b6830a7bed1f4ebe1b64617f180428c49 100644 --- a/drivers/pps/pps.c +++ b/drivers/pps/pps.c @@ -166,6 +166,14 @@ static long pps_cdev_ioctl(struct file *file, pps->params.mode |= PPS_CANWAIT; pps->params.api_version = PPS_API_VERS; + /* + * Clear unused fields of pps_kparams to avoid leaking + * uninitialized data of the PPS_SETPARAMS caller via + * PPS_GETPARAMS + */ + pps->params.assert_off_tu.flags = 0; + pps->params.clear_off_tu.flags = 0; + spin_unlock_irq(&pps->lock); break; diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c index 
cbe467ff1aba9d8ca72588f51856f2239dd4d0d9..fa0bbda4b3f2e5fad5e95572fe174ffe631a46cb 100644 --- a/drivers/rapidio/devices/rio_mport_cdev.c +++ b/drivers/rapidio/devices/rio_mport_cdev.c @@ -1688,6 +1688,7 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv, if (copy_from_user(&dev_info, arg, sizeof(dev_info))) return -EFAULT; + dev_info.name[sizeof(dev_info.name) - 1] = '\0'; rmcd_debug(RDEV, "name:%s ct:0x%x did:0x%x hc:0x%x", dev_info.name, dev_info.comptag, dev_info.destid, dev_info.hopcount); @@ -1819,6 +1820,7 @@ static int rio_mport_del_riodev(struct mport_cdev_priv *priv, void __user *arg) if (copy_from_user(&dev_info, arg, sizeof(dev_info))) return -EFAULT; + dev_info.name[sizeof(dev_info.name) - 1] = '\0'; mport = priv->md->mport; diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c index f85d6b7a19848503d4ea8c28f7c2feec64ac3038..5d2b2c02cbbeca68e70bbc5f82665a9288ecc809 100644 --- a/drivers/ras/cec.c +++ b/drivers/ras/cec.c @@ -369,7 +369,9 @@ static int pfn_set(void *data, u64 val) { *(u64 *)data = val; - return cec_add_elem(val); + cec_add_elem(val); + + return 0; } DEFINE_DEBUGFS_ATTRIBUTE(pfn_ops, u64_get, pfn_set, "0x%llx\n"); diff --git a/drivers/regulator/s2mps11.c b/drivers/regulator/s2mps11.c index c584bd1ffa9c1e2654b9e3bf137393d756ad5291..7c598c156d9e17eac95bf06e8e0d9dff2ab33cb3 100644 --- a/drivers/regulator/s2mps11.c +++ b/drivers/regulator/s2mps11.c @@ -373,8 +373,8 @@ static const struct regulator_desc s2mps11_regulators[] = { regulator_desc_s2mps11_buck1_4(4), regulator_desc_s2mps11_buck5, regulator_desc_s2mps11_buck67810(6, MIN_600_MV, STEP_6_25_MV), - regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_12_5_MV), - regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_12_5_MV), + regulator_desc_s2mps11_buck67810(7, MIN_750_MV, STEP_12_5_MV), + regulator_desc_s2mps11_buck67810(8, MIN_750_MV, STEP_12_5_MV), regulator_desc_s2mps11_buck9, regulator_desc_s2mps11_buck67810(10, MIN_750_MV, STEP_12_5_MV), }; diff --git a/drivers/s390/block/dasd_alias.c b/drivers/s390/block/dasd_alias.c index b9ce93e9df89295eb72132fcfc81d0257aaa1723..99f86612f7751ad6d47b7abee8661e8deec2b0b2 100644 --- a/drivers/s390/block/dasd_alias.c +++ b/drivers/s390/block/dasd_alias.c @@ -383,6 +383,20 @@ suborder_not_supported(struct dasd_ccw_req *cqr) char msg_format; char msg_no; + /* + * intrc values ENODEV, ENOLINK and EPERM + * will be optained from sleep_on to indicate that no + * IO operation can be started + */ + if (cqr->intrc == -ENODEV) + return 1; + + if (cqr->intrc == -ENOLINK) + return 1; + + if (cqr->intrc == -EPERM) + return 1; + sense = dasd_get_sense(&cqr->irb); if (!sense) return 0; @@ -447,12 +461,8 @@ static int read_unit_address_configuration(struct dasd_device *device, lcu->flags &= ~NEED_UAC_UPDATE; spin_unlock_irqrestore(&lcu->lock, flags); - do { - rc = dasd_sleep_on(cqr); - if (rc && suborder_not_supported(cqr)) - return -EOPNOTSUPP; - } while (rc && (cqr->retries > 0)); - if (rc) { + rc = dasd_sleep_on(cqr); + if (rc && !suborder_not_supported(cqr)) { spin_lock_irqsave(&lcu->lock, flags); lcu->flags |= NEED_UAC_UPDATE; spin_unlock_irqrestore(&lcu->lock, flags); diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c index 9c7d9da42ba0829692d0d8dadbbd1f42935962f3..4b7cc8d425b1c64c6b27e4f3308d4152362a0e21 100644 --- a/drivers/s390/cio/qdio_main.c +++ b/drivers/s390/cio/qdio_main.c @@ -749,6 +749,7 @@ static int get_outbound_buffer_frontier(struct qdio_q *q) switch (state) { case SLSB_P_OUTPUT_EMPTY: + case SLSB_P_OUTPUT_PENDING: /* the adapter got it 
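Two of the hunks above sanitize data crossing the user/kernel boundary: rio_mport_cdev forces a NUL terminator onto the user-supplied name before it is used as a string, and pps.c zeroes the flags fields it never sets so that uninitialized bytes from a PPS_SETPARAMS caller cannot later be read back via PPS_GETPARAMS. A small userspace illustration of the termination half (the struct layout here is invented for illustration):

#include <stdio.h>
#include <string.h>

/* Loosely modelled on the rio_mport ioctl argument; not the real layout. */
struct dev_info {
	char name[16];
	unsigned int comptag;
};

int main(void)
{
	unsigned char user_buf[sizeof(struct dev_info)];
	struct dev_info info;

	/* A hostile caller can fill the name field with no terminator. */
	memset(user_buf, 'A', sizeof(user_buf));

	/* Stands in for copy_from_user(&info, arg, sizeof(info)). */
	memcpy(&info, user_buf, sizeof(info));

	/*
	 * Force termination before the name is ever used as a string;
	 * otherwise a later "%s" (e.g. in a debug print) reads past the
	 * field into adjacent memory.
	 */
	info.name[sizeof(info.name) - 1] = '\0';
	printf("name: %s\n", info.name);
	return 0;
}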
*/ DBF_DEV_EVENT(DBF_INFO, q->irq_ptr, "out empty:%1d %02x", q->nr, count); @@ -1568,13 +1569,13 @@ static int handle_outbound(struct qdio_q *q, unsigned int callflags, rc = qdio_kick_outbound_q(q, phys_aob); } else if (need_siga_sync(q)) { rc = qdio_siga_sync_q(q); + } else if (count < QDIO_MAX_BUFFERS_PER_Q && + get_buf_state(q, prev_buf(bufnr), &state, 0) > 0 && + state == SLSB_CU_OUTPUT_PRIMED) { + /* The previous buffer is not processed yet, tack on. */ + qperf_inc(q, fast_requeue); } else { - /* try to fast requeue buffers */ - get_buf_state(q, prev_buf(bufnr), &state, 0); - if (state != SLSB_CU_OUTPUT_PRIMED) - rc = qdio_kick_outbound_q(q, 0); - else - qperf_inc(q, fast_requeue); + rc = qdio_kick_outbound_q(q, 0); } /* in case of SIGA errors we must process the error immediately */ diff --git a/drivers/s390/cio/qdio_setup.c b/drivers/s390/cio/qdio_setup.c index 78f1be41b05e3fb2cc5c91b31e0ec00877735851..034528a5453ec4b058b86c65f513037371a4e37d 100644 --- a/drivers/s390/cio/qdio_setup.c +++ b/drivers/s390/cio/qdio_setup.c @@ -151,6 +151,7 @@ static int __qdio_allocate_qs(struct qdio_q **irq_ptr_qs, int nr_queues) return -ENOMEM; } irq_ptr_qs[i] = q; + INIT_LIST_HEAD(&q->entry); } return 0; } @@ -179,6 +180,7 @@ static void setup_queues_misc(struct qdio_q *q, struct qdio_irq *irq_ptr, q->mask = 1 << (31 - i); q->nr = i; q->handler = handler; + INIT_LIST_HEAD(&q->entry); } static void setup_storage_lists(struct qdio_q *q, struct qdio_irq *irq_ptr, diff --git a/drivers/s390/cio/qdio_thinint.c b/drivers/s390/cio/qdio_thinint.c index 07dea602205bdf2a18bfe85fad5888cf2f91b4dd..6628e0c9e70e3596cb8d95abbd9e985bae1ad5e7 100644 --- a/drivers/s390/cio/qdio_thinint.c +++ b/drivers/s390/cio/qdio_thinint.c @@ -79,7 +79,6 @@ void tiqdio_add_input_queues(struct qdio_irq *irq_ptr) mutex_lock(&tiq_list_lock); list_add_rcu(&irq_ptr->input_qs[0]->entry, &tiq_list); mutex_unlock(&tiq_list_lock); - xchg(irq_ptr->dsci, 1 << 7); } void tiqdio_remove_input_queues(struct qdio_irq *irq_ptr) @@ -87,14 +86,14 @@ void tiqdio_remove_input_queues(struct qdio_irq *irq_ptr) struct qdio_q *q; q = irq_ptr->input_qs[0]; - /* if establish triggered an error */ - if (!q || !q->entry.prev || !q->entry.next) + if (!q) return; mutex_lock(&tiq_list_lock); list_del_rcu(&q->entry); mutex_unlock(&tiq_list_lock); synchronize_rcu(); + INIT_LIST_HEAD(&q->entry); } static inline int has_multiple_inq_on_dsci(struct qdio_irq *irq_ptr) diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c index 70a006ba4d050d2d3063f6653e610372e7d2d074..4fe06ff7b2c8bcb680a464504d7e85696497645d 100644 --- a/drivers/s390/cio/vfio_ccw_cp.c +++ b/drivers/s390/cio/vfio_ccw_cp.c @@ -89,8 +89,10 @@ static int pfn_array_alloc_pin(struct pfn_array *pa, struct device *mdev, sizeof(*pa->pa_iova_pfn) + sizeof(*pa->pa_pfn), GFP_KERNEL); - if (unlikely(!pa->pa_iova_pfn)) + if (unlikely(!pa->pa_iova_pfn)) { + pa->pa_nr = 0; return -ENOMEM; + } pa->pa_pfn = pa->pa_iova_pfn + pa->pa_nr; pa->pa_iova_pfn[0] = pa->pa_iova >> PAGE_SHIFT; diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c index ebdbc457003fe50e09db6afe554eeaa47f974363..332701db7379dc16e7143deb608463be500c7ca4 100644 --- a/drivers/s390/scsi/zfcp_erp.c +++ b/drivers/s390/scsi/zfcp_erp.c @@ -11,6 +11,7 @@ #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt #include +#include #include "zfcp_ext.h" #include "zfcp_reqlist.h" @@ -238,6 +239,12 @@ static struct zfcp_erp_action *zfcp_erp_setup_act(int need, u32 act_status, struct zfcp_erp_action *erp_action; struct zfcp_scsi_dev 
*zfcp_sdev; + if (WARN_ON_ONCE(need != ZFCP_ERP_ACTION_REOPEN_LUN && + need != ZFCP_ERP_ACTION_REOPEN_PORT && + need != ZFCP_ERP_ACTION_REOPEN_PORT_FORCED && + need != ZFCP_ERP_ACTION_REOPEN_ADAPTER)) + return NULL; + switch (need) { case ZFCP_ERP_ACTION_REOPEN_LUN: zfcp_sdev = sdev_to_zfcp(sdev); diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c index 90ea0f5d9bdbbfc78da0b474a1490b184f3bde25..5160d6214a36b040aaf5f6b134899350baaea89d 100644 --- a/drivers/scsi/NCR5380.c +++ b/drivers/scsi/NCR5380.c @@ -710,6 +710,8 @@ static void NCR5380_main(struct work_struct *work) NCR5380_information_transfer(instance); done = 0; } + if (!hostdata->connected) + NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); spin_unlock_irq(&hostdata->lock); if (!done) cond_resched(); @@ -984,7 +986,7 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance, if (!hostdata->selecting) { /* Command was aborted */ NCR5380_write(MODE_REG, MR_BASE); - goto out; + return NULL; } if (err < 0) { NCR5380_write(MODE_REG, MR_BASE); @@ -1033,7 +1035,7 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance, if (!hostdata->selecting) { NCR5380_write(MODE_REG, MR_BASE); NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); - goto out; + return NULL; } dsprintk(NDEBUG_ARBITRATION, instance, "won arbitration\n"); @@ -1106,8 +1108,6 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance, spin_lock_irq(&hostdata->lock); NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); NCR5380_reselect(instance); - if (!hostdata->connected) - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); shost_printk(KERN_ERR, instance, "reselection after won arbitration?\n"); goto out; } @@ -1115,14 +1115,16 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance, if (err < 0) { spin_lock_irq(&hostdata->lock); NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); + /* Can't touch cmd if it has been reclaimed by the scsi ML */ - if (hostdata->selecting) { - cmd->result = DID_BAD_TARGET << 16; - complete_cmd(instance, cmd); - dsprintk(NDEBUG_SELECTION, instance, "target did not respond within 250ms\n"); - cmd = NULL; - } + if (!hostdata->selecting) + return NULL; + + cmd->result = DID_BAD_TARGET << 16; + complete_cmd(instance, cmd); + dsprintk(NDEBUG_SELECTION, instance, + "target did not respond within 250ms\n"); + cmd = NULL; goto out; } @@ -1150,12 +1152,11 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance, if (err < 0) { shost_printk(KERN_ERR, instance, "select: REQ timeout\n"); NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); goto out; } if (!hostdata->selecting) { do_abort(instance); - goto out; + return NULL; } dsprintk(NDEBUG_SELECTION, instance, "target %d selected, going into MESSAGE OUT phase.\n", @@ -1817,9 +1818,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance) */ NCR5380_write(TARGET_COMMAND_REG, 0); - /* Enable reselect interrupts */ - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); - maybe_release_dma_irq(instance); return; case MESSAGE_REJECT: @@ -1851,8 +1849,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance) */ NCR5380_write(TARGET_COMMAND_REG, 0); - /* Enable reselect interrupts */ - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); #ifdef SUN3_SCSI_VME dregs->csr |= CSR_DMA_ENABLE; #endif @@ -1954,7 +1950,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance) cmd->result = DID_ERROR << 
16; complete_cmd(instance, cmd); maybe_release_dma_irq(instance); - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); return; } msgout = NOP; diff --git a/drivers/scsi/NCR5380.h b/drivers/scsi/NCR5380.h index 31096a0b0fdd30202068b0376396eaf6b783bf29..8a6d002e67894011183d33de402ec3897acc7e69 100644 --- a/drivers/scsi/NCR5380.h +++ b/drivers/scsi/NCR5380.h @@ -235,7 +235,7 @@ struct NCR5380_cmd { #define NCR5380_PIO_CHUNK_SIZE 256 /* Time limit (ms) to poll registers when IRQs are disabled, e.g. during PDMA */ -#define NCR5380_REG_POLL_TIME 15 +#define NCR5380_REG_POLL_TIME 10 static inline struct scsi_cmnd *NCR5380_to_scmd(struct NCR5380_cmd *ncmd_ptr) { diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c index d1154baa9436a03dac8548153bbacffd503645bf..9c21938ed67ed4e0cadfa4c4a77c4f3c053358c0 100644 --- a/drivers/scsi/device_handler/scsi_dh_alua.c +++ b/drivers/scsi/device_handler/scsi_dh_alua.c @@ -54,6 +54,7 @@ #define ALUA_FAILOVER_TIMEOUT 60 #define ALUA_FAILOVER_RETRIES 5 #define ALUA_RTPG_DELAY_MSECS 5 +#define ALUA_RTPG_RETRY_DELAY 2 /* device handler flags */ #define ALUA_OPTIMIZE_STPG 0x01 @@ -696,7 +697,7 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg) case SCSI_ACCESS_STATE_TRANSITIONING: if (time_before(jiffies, pg->expiry)) { /* State transition, retry */ - pg->interval = 2; + pg->interval = ALUA_RTPG_RETRY_DELAY; err = SCSI_DH_RETRY; } else { struct alua_dh_data *h; @@ -821,6 +822,8 @@ static void alua_rtpg_work(struct work_struct *work) spin_lock_irqsave(&pg->lock, flags); pg->flags &= ~ALUA_PG_RUNNING; pg->flags |= ALUA_PG_RUN_RTPG; + if (!pg->interval) + pg->interval = ALUA_RTPG_RETRY_DELAY; spin_unlock_irqrestore(&pg->lock, flags); queue_delayed_work(kaluad_wq, &pg->rtpg_work, pg->interval * HZ); @@ -832,6 +835,8 @@ static void alua_rtpg_work(struct work_struct *work) spin_lock_irqsave(&pg->lock, flags); if (err == SCSI_DH_RETRY || pg->flags & ALUA_PG_RUN_RTPG) { pg->flags &= ~ALUA_PG_RUNNING; + if (!pg->interval && !(pg->flags & ALUA_PG_RUN_RTPG)) + pg->interval = ALUA_RTPG_RETRY_DELAY; pg->flags |= ALUA_PG_RUN_RTPG; spin_unlock_irqrestore(&pg->lock, flags); queue_delayed_work(kaluad_wq, &pg->rtpg_work, diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c index 4946df66a5ab83443e7066cf9a210c601789bb3c..ccf60801fe9d56f709a45c62b50a174c5ef89d3a 100644 --- a/drivers/scsi/fcoe/fcoe_ctlr.c +++ b/drivers/scsi/fcoe/fcoe_ctlr.c @@ -2017,7 +2017,7 @@ EXPORT_SYMBOL_GPL(fcoe_wwn_from_mac); */ static inline struct fcoe_rport *fcoe_ctlr_rport(struct fc_rport_priv *rdata) { - return (struct fcoe_rport *)(rdata + 1); + return container_of(rdata, struct fcoe_rport, rdata); } /** @@ -2281,7 +2281,7 @@ static void fcoe_ctlr_vn_start(struct fcoe_ctlr *fip) */ static int fcoe_ctlr_vn_parse(struct fcoe_ctlr *fip, struct sk_buff *skb, - struct fc_rport_priv *rdata) + struct fcoe_rport *frport) { struct fip_header *fiph; struct fip_desc *desc = NULL; @@ -2289,16 +2289,12 @@ static int fcoe_ctlr_vn_parse(struct fcoe_ctlr *fip, struct fip_wwn_desc *wwn = NULL; struct fip_vn_desc *vn = NULL; struct fip_size_desc *size = NULL; - struct fcoe_rport *frport; size_t rlen; size_t dlen; u32 desc_mask = 0; u32 dtype; u8 sub; - memset(rdata, 0, sizeof(*rdata) + sizeof(*frport)); - frport = fcoe_ctlr_rport(rdata); - fiph = (struct fip_header *)skb->data; frport->flags = ntohs(fiph->fip_flags); @@ -2361,15 +2357,17 @@ static int fcoe_ctlr_vn_parse(struct fcoe_ctlr *fip, if (dlen != sizeof(struct 
fip_wwn_desc)) goto len_err; wwn = (struct fip_wwn_desc *)desc; - rdata->ids.node_name = get_unaligned_be64(&wwn->fd_wwn); + frport->rdata.ids.node_name = + get_unaligned_be64(&wwn->fd_wwn); break; case FIP_DT_VN_ID: if (dlen != sizeof(struct fip_vn_desc)) goto len_err; vn = (struct fip_vn_desc *)desc; memcpy(frport->vn_mac, vn->fd_mac, ETH_ALEN); - rdata->ids.port_id = ntoh24(vn->fd_fc_id); - rdata->ids.port_name = get_unaligned_be64(&vn->fd_wwpn); + frport->rdata.ids.port_id = ntoh24(vn->fd_fc_id); + frport->rdata.ids.port_name = + get_unaligned_be64(&vn->fd_wwpn); break; case FIP_DT_FC4F: if (dlen != sizeof(struct fip_fc4_feat)) @@ -2750,10 +2748,7 @@ static int fcoe_ctlr_vn_recv(struct fcoe_ctlr *fip, struct sk_buff *skb) { struct fip_header *fiph; enum fip_vn2vn_subcode sub; - struct { - struct fc_rport_priv rdata; - struct fcoe_rport frport; - } buf; + struct fcoe_rport frport = { }; int rc, vlan_id = 0; fiph = (struct fip_header *)skb->data; @@ -2769,7 +2764,7 @@ static int fcoe_ctlr_vn_recv(struct fcoe_ctlr *fip, struct sk_buff *skb) goto drop; } - rc = fcoe_ctlr_vn_parse(fip, skb, &buf.rdata); + rc = fcoe_ctlr_vn_parse(fip, skb, &frport); if (rc) { LIBFCOE_FIP_DBG(fip, "vn_recv vn_parse error %d\n", rc); goto drop; @@ -2778,19 +2773,19 @@ static int fcoe_ctlr_vn_recv(struct fcoe_ctlr *fip, struct sk_buff *skb) mutex_lock(&fip->ctlr_mutex); switch (sub) { case FIP_SC_VN_PROBE_REQ: - fcoe_ctlr_vn_probe_req(fip, &buf.rdata); + fcoe_ctlr_vn_probe_req(fip, &frport.rdata); break; case FIP_SC_VN_PROBE_REP: - fcoe_ctlr_vn_probe_reply(fip, &buf.rdata); + fcoe_ctlr_vn_probe_reply(fip, &frport.rdata); break; case FIP_SC_VN_CLAIM_NOTIFY: - fcoe_ctlr_vn_claim_notify(fip, &buf.rdata); + fcoe_ctlr_vn_claim_notify(fip, &frport.rdata); break; case FIP_SC_VN_CLAIM_REP: - fcoe_ctlr_vn_claim_resp(fip, &buf.rdata); + fcoe_ctlr_vn_claim_resp(fip, &frport.rdata); break; case FIP_SC_VN_BEACON: - fcoe_ctlr_vn_beacon(fip, &buf.rdata); + fcoe_ctlr_vn_beacon(fip, &frport.rdata); break; default: LIBFCOE_FIP_DBG(fip, "vn_recv unknown subcode %d\n", sub); @@ -2814,22 +2809,18 @@ drop: */ static int fcoe_ctlr_vlan_parse(struct fcoe_ctlr *fip, struct sk_buff *skb, - struct fc_rport_priv *rdata) + struct fcoe_rport *frport) { struct fip_header *fiph; struct fip_desc *desc = NULL; struct fip_mac_desc *macd = NULL; struct fip_wwn_desc *wwn = NULL; - struct fcoe_rport *frport; size_t rlen; size_t dlen; u32 desc_mask = 0; u32 dtype; u8 sub; - memset(rdata, 0, sizeof(*rdata) + sizeof(*frport)); - frport = fcoe_ctlr_rport(rdata); - fiph = (struct fip_header *)skb->data; frport->flags = ntohs(fiph->fip_flags); @@ -2883,7 +2874,8 @@ static int fcoe_ctlr_vlan_parse(struct fcoe_ctlr *fip, if (dlen != sizeof(struct fip_wwn_desc)) goto len_err; wwn = (struct fip_wwn_desc *)desc; - rdata->ids.node_name = get_unaligned_be64(&wwn->fd_wwn); + frport->rdata.ids.node_name = + get_unaligned_be64(&wwn->fd_wwn); break; default: LIBFCOE_FIP_DBG(fip, "unexpected descriptor type %x " @@ -2994,22 +2986,19 @@ static int fcoe_ctlr_vlan_recv(struct fcoe_ctlr *fip, struct sk_buff *skb) { struct fip_header *fiph; enum fip_vlan_subcode sub; - struct { - struct fc_rport_priv rdata; - struct fcoe_rport frport; - } buf; + struct fcoe_rport frport = { }; int rc; fiph = (struct fip_header *)skb->data; sub = fiph->fip_subcode; - rc = fcoe_ctlr_vlan_parse(fip, skb, &buf.rdata); + rc = fcoe_ctlr_vlan_parse(fip, skb, &frport); if (rc) { LIBFCOE_FIP_DBG(fip, "vlan_recv vlan_parse error %d\n", rc); goto drop; } mutex_lock(&fip->ctlr_mutex); if (sub == 
FIP_SC_VL_REQ) - fcoe_ctlr_vlan_disc_reply(fip, &buf.rdata); + fcoe_ctlr_vlan_disc_reply(fip, &frport.rdata); mutex_unlock(&fip->ctlr_mutex); drop: diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c index c43eccdea65d2d2c8231265b2abd4cee720889b6..f570b8c5d857cce6e8d3d53e4d9cc861b8214558 100644 --- a/drivers/scsi/hpsa.c +++ b/drivers/scsi/hpsa.c @@ -2320,6 +2320,8 @@ static int handle_ioaccel_mode2_error(struct ctlr_info *h, case IOACCEL2_SERV_RESPONSE_COMPLETE: switch (c2->error_data.status) { case IOACCEL2_STATUS_SR_TASK_COMP_GOOD: + if (cmd) + cmd->result = 0; break; case IOACCEL2_STATUS_SR_TASK_COMP_CHK_COND: cmd->result |= SAM_STAT_CHECK_CONDITION; @@ -2479,8 +2481,10 @@ static void process_ioaccel2_completion(struct ctlr_info *h, /* check for good status */ if (likely(c2->error_data.serv_response == 0 && - c2->error_data.status == 0)) + c2->error_data.status == 0)) { + cmd->result = 0; return hpsa_cmd_free_and_done(h, c, cmd); + } /* * Any RAID offload error results in retry which will use @@ -5617,6 +5621,12 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd) } c = cmd_tagged_alloc(h, cmd); + /* + * This is necessary because the SML doesn't zero out this field during + * error recovery. + */ + cmd->result = 0; + /* * Call alternate submit routine for I/O accelerated commands. * Retries always go down the normal I/O path. diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c index b64ca977825df32111b9dbfcc1c0b13d21f7c317..71d53bb239e25d2eb30b91b7b80d45d70d8dbcd2 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.c +++ b/drivers/scsi/ibmvscsi/ibmvfc.c @@ -4874,8 +4874,8 @@ static int ibmvfc_remove(struct vio_dev *vdev) spin_lock_irqsave(vhost->host->host_lock, flags); ibmvfc_purge_requests(vhost, DID_ERROR); - ibmvfc_free_event_pool(vhost); spin_unlock_irqrestore(vhost->host->host_lock, flags); + ibmvfc_free_event_pool(vhost); ibmvfc_free_mem(vhost); spin_lock(&ibmvfc_driver_lock); diff --git a/drivers/scsi/libfc/fc_rport.c b/drivers/scsi/libfc/fc_rport.c index 3d51a936f6d547597daa94f819829e3291f9caaf..90a748551ede5d98f26ba52ff892f7bc57c2109f 100644 --- a/drivers/scsi/libfc/fc_rport.c +++ b/drivers/scsi/libfc/fc_rport.c @@ -140,6 +140,7 @@ EXPORT_SYMBOL(fc_rport_lookup); struct fc_rport_priv *fc_rport_create(struct fc_lport *lport, u32 port_id) { struct fc_rport_priv *rdata; + size_t rport_priv_size = sizeof(*rdata); lockdep_assert_held(&lport->disc.disc_mutex); @@ -147,7 +148,9 @@ struct fc_rport_priv *fc_rport_create(struct fc_lport *lport, u32 port_id) if (rdata) return rdata; - rdata = kzalloc(sizeof(*rdata) + lport->rport_priv_size, GFP_KERNEL); + if (lport->rport_priv_size > 0) + rport_priv_size = lport->rport_priv_size; + rdata = kzalloc(rport_priv_size, GFP_KERNEL); if (!rdata) return NULL; diff --git a/drivers/scsi/mac_scsi.c b/drivers/scsi/mac_scsi.c index dd6057359d7c6e681544b9f7b2be5393b39c7f1a..643321fc152dd6b1df74fa1c38d05c54b6d9197b 100644 --- a/drivers/scsi/mac_scsi.c +++ b/drivers/scsi/mac_scsi.c @@ -3,6 +3,8 @@ * * Copyright 1998, Michael Schmitz * + * Copyright 2019 Finn Thain + * * derived in part from: */ /* @@ -11,6 +13,7 @@ * Copyright 1995, Russell King */ +#include #include #include #include @@ -52,7 +55,7 @@ static int setup_cmd_per_lun = -1; module_param(setup_cmd_per_lun, int, 0); static int setup_sg_tablesize = -1; module_param(setup_sg_tablesize, int, 0); -static int setup_use_pdma = -1; +static int setup_use_pdma = 512; module_param(setup_use_pdma, int, 0); static int setup_hostid = -1; 
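The libfc/fcoe changes above stop assuming that struct fcoe_rport lives immediately behind struct fc_rport_priv in the same allocation: the generic rdata is now an embedded member, fcoe_ctlr_rport() recovers the wrapper with container_of(), the FIP parsers fill a stack struct fcoe_rport, and fc_rport_create() sizes the allocation from lport->rport_priv_size. A compact sketch of the embed-plus-container_of pattern, using invented type names rather than the real libfc structures:

#include <stdio.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Generic remote-port state, as the transport-independent layer sees it. */
struct rport_priv {
	unsigned int port_id;
};

/* Transport-private wrapper that embeds the generic part. */
struct fcoe_rport_like {
	struct rport_priv rdata;	/* no longer assumed to be "first, then tail" */
	unsigned char vn_mac[6];
};

static struct fcoe_rport_like *to_fcoe_rport(struct rport_priv *rdata)
{
	/*
	 * Old style: (struct fcoe_rport_like *)(rdata + 1), which only works
	 * if the private part was allocated directly behind rdata.
	 * container_of() works whenever rdata is embedded in the wrapper,
	 * including stack-allocated wrappers like the FIP parsers now use.
	 */
	return container_of(rdata, struct fcoe_rport_like, rdata);
}

int main(void)
{
	struct fcoe_rport_like frport = { .rdata = { .port_id = 0x010203 } };

	printf("port_id 0x%06x, wrapper %p\n",
	       to_fcoe_rport(&frport.rdata)->rdata.port_id, (void *)&frport);
	return 0;
}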
module_param(setup_hostid, int, 0); @@ -89,101 +92,217 @@ static int __init mac_scsi_setup(char *str) __setup("mac5380=", mac_scsi_setup); #endif /* !MODULE */ -/* Pseudo DMA asm originally by Ove Edlund */ - -#define CP_IO_TO_MEM(s,d,n) \ -__asm__ __volatile__ \ - (" cmp.w #4,%2\n" \ - " bls 8f\n" \ - " move.w %1,%%d0\n" \ - " neg.b %%d0\n" \ - " and.w #3,%%d0\n" \ - " sub.w %%d0,%2\n" \ - " bra 2f\n" \ - " 1: move.b (%0),(%1)+\n" \ - " 2: dbf %%d0,1b\n" \ - " move.w %2,%%d0\n" \ - " lsr.w #5,%%d0\n" \ - " bra 4f\n" \ - " 3: move.l (%0),(%1)+\n" \ - "31: move.l (%0),(%1)+\n" \ - "32: move.l (%0),(%1)+\n" \ - "33: move.l (%0),(%1)+\n" \ - "34: move.l (%0),(%1)+\n" \ - "35: move.l (%0),(%1)+\n" \ - "36: move.l (%0),(%1)+\n" \ - "37: move.l (%0),(%1)+\n" \ - " 4: dbf %%d0,3b\n" \ - " move.w %2,%%d0\n" \ - " lsr.w #2,%%d0\n" \ - " and.w #7,%%d0\n" \ - " bra 6f\n" \ - " 5: move.l (%0),(%1)+\n" \ - " 6: dbf %%d0,5b\n" \ - " and.w #3,%2\n" \ - " bra 8f\n" \ - " 7: move.b (%0),(%1)+\n" \ - " 8: dbf %2,7b\n" \ - " moveq.l #0, %2\n" \ - " 9: \n" \ - ".section .fixup,\"ax\"\n" \ - " .even\n" \ - "91: moveq.l #1, %2\n" \ - " jra 9b\n" \ - "94: moveq.l #4, %2\n" \ - " jra 9b\n" \ - ".previous\n" \ - ".section __ex_table,\"a\"\n" \ - " .align 4\n" \ - " .long 1b,91b\n" \ - " .long 3b,94b\n" \ - " .long 31b,94b\n" \ - " .long 32b,94b\n" \ - " .long 33b,94b\n" \ - " .long 34b,94b\n" \ - " .long 35b,94b\n" \ - " .long 36b,94b\n" \ - " .long 37b,94b\n" \ - " .long 5b,94b\n" \ - " .long 7b,91b\n" \ - ".previous" \ - : "=a"(s), "=a"(d), "=d"(n) \ - : "0"(s), "1"(d), "2"(n) \ - : "d0") +/* + * According to "Inside Macintosh: Devices", Mac OS requires disk drivers to + * specify the number of bytes between the delays expected from a SCSI target. + * This allows the operating system to "prevent bus errors when a target fails + * to deliver the next byte within the processor bus error timeout period." + * Linux SCSI drivers lack knowledge of the timing behaviour of SCSI targets + * so bus errors are unavoidable. + * + * If a MOVE.B instruction faults, we assume that zero bytes were transferred + * and simply retry. That assumption probably depends on target behaviour but + * seems to hold up okay. The NOP provides synchronization: without it the + * fault can sometimes occur after the program counter has moved past the + * offending instruction. Post-increment addressing can't be used. + */ + +#define MOVE_BYTE(operands) \ + asm volatile ( \ + "1: moveb " operands " \n" \ + "11: nop \n" \ + " addq #1,%0 \n" \ + " subq #1,%1 \n" \ + "40: \n" \ + " \n" \ + ".section .fixup,\"ax\" \n" \ + ".even \n" \ + "90: movel #1, %2 \n" \ + " jra 40b \n" \ + ".previous \n" \ + " \n" \ + ".section __ex_table,\"a\" \n" \ + ".align 4 \n" \ + ".long 1b,90b \n" \ + ".long 11b,90b \n" \ + ".previous \n" \ + : "+a" (addr), "+r" (n), "+r" (result) : "a" (io)) + +/* + * If a MOVE.W (or MOVE.L) instruction faults, it cannot be retried because + * the residual byte count would be uncertain. In that situation the MOVE_WORD + * macro clears n in the fixup section to abort the transfer. 
+ */ + +#define MOVE_WORD(operands) \ + asm volatile ( \ + "1: movew " operands " \n" \ + "11: nop \n" \ + " subq #2,%1 \n" \ + "40: \n" \ + " \n" \ + ".section .fixup,\"ax\" \n" \ + ".even \n" \ + "90: movel #0, %1 \n" \ + " movel #2, %2 \n" \ + " jra 40b \n" \ + ".previous \n" \ + " \n" \ + ".section __ex_table,\"a\" \n" \ + ".align 4 \n" \ + ".long 1b,90b \n" \ + ".long 11b,90b \n" \ + ".previous \n" \ + : "+a" (addr), "+r" (n), "+r" (result) : "a" (io)) + +#define MOVE_16_WORDS(operands) \ + asm volatile ( \ + "1: movew " operands " \n" \ + "2: movew " operands " \n" \ + "3: movew " operands " \n" \ + "4: movew " operands " \n" \ + "5: movew " operands " \n" \ + "6: movew " operands " \n" \ + "7: movew " operands " \n" \ + "8: movew " operands " \n" \ + "9: movew " operands " \n" \ + "10: movew " operands " \n" \ + "11: movew " operands " \n" \ + "12: movew " operands " \n" \ + "13: movew " operands " \n" \ + "14: movew " operands " \n" \ + "15: movew " operands " \n" \ + "16: movew " operands " \n" \ + "17: nop \n" \ + " subl #32,%1 \n" \ + "40: \n" \ + " \n" \ + ".section .fixup,\"ax\" \n" \ + ".even \n" \ + "90: movel #0, %1 \n" \ + " movel #2, %2 \n" \ + " jra 40b \n" \ + ".previous \n" \ + " \n" \ + ".section __ex_table,\"a\" \n" \ + ".align 4 \n" \ + ".long 1b,90b \n" \ + ".long 2b,90b \n" \ + ".long 3b,90b \n" \ + ".long 4b,90b \n" \ + ".long 5b,90b \n" \ + ".long 6b,90b \n" \ + ".long 7b,90b \n" \ + ".long 8b,90b \n" \ + ".long 9b,90b \n" \ + ".long 10b,90b \n" \ + ".long 11b,90b \n" \ + ".long 12b,90b \n" \ + ".long 13b,90b \n" \ + ".long 14b,90b \n" \ + ".long 15b,90b \n" \ + ".long 16b,90b \n" \ + ".long 17b,90b \n" \ + ".previous \n" \ + : "+a" (addr), "+r" (n), "+r" (result) : "a" (io)) + +#define MAC_PDMA_DELAY 32 + +static inline int mac_pdma_recv(void __iomem *io, unsigned char *start, int n) +{ + unsigned char *addr = start; + int result = 0; + + if (n >= 1) { + MOVE_BYTE("%3@,%0@"); + if (result) + goto out; + } + if (n >= 1 && ((unsigned long)addr & 1)) { + MOVE_BYTE("%3@,%0@"); + if (result) + goto out; + } + while (n >= 32) + MOVE_16_WORDS("%3@,%0@+"); + while (n >= 2) + MOVE_WORD("%3@,%0@+"); + if (result) + return start - addr; /* Negated to indicate uncertain length */ + if (n == 1) + MOVE_BYTE("%3@,%0@"); +out: + return addr - start; +} + +static inline int mac_pdma_send(unsigned char *start, void __iomem *io, int n) +{ + unsigned char *addr = start; + int result = 0; + + if (n >= 1) { + MOVE_BYTE("%0@,%3@"); + if (result) + goto out; + } + if (n >= 1 && ((unsigned long)addr & 1)) { + MOVE_BYTE("%0@,%3@"); + if (result) + goto out; + } + while (n >= 32) + MOVE_16_WORDS("%0@+,%3@"); + while (n >= 2) + MOVE_WORD("%0@+,%3@"); + if (result) + return start - addr; /* Negated to indicate uncertain length */ + if (n == 1) + MOVE_BYTE("%0@,%3@"); +out: + return addr - start; +} static inline int macscsi_pread(struct NCR5380_hostdata *hostdata, unsigned char *dst, int len) { u8 __iomem *s = hostdata->pdma_io + (INPUT_DATA_REG << 4); unsigned char *d = dst; - int n = len; - int transferred; + + hostdata->pdma_residual = len; while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG, BASR_DRQ | BASR_PHASE_MATCH, BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) { - CP_IO_TO_MEM(s, d, n); + int bytes; - transferred = d - dst - n; - hostdata->pdma_residual = len - transferred; + bytes = mac_pdma_recv(s, d, min(hostdata->pdma_residual, 512)); - /* No bus error. 
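The mac_pdma_recv()/mac_pdma_send() helpers above split a transfer of n bytes into single-byte moves (to deal with odd addresses), 32-byte MOVE_16_WORDS blocks, word moves, and a final odd byte. The standalone sketch below reproduces only that bookkeeping; the real helpers additionally always begin with one single-byte move, and of course do the copying with m68k move instructions rather than printing a plan:

#include <stdio.h>

static void pdma_plan(unsigned long addr, int n)
{
	int head = 0;

	if (n >= 1 && (addr & 1)) {	/* reach an even address first */
		head = 1;
		addr += 1;
		n -= 1;
	}
	printf("head bytes:     %d\n", head);
	printf("32-byte blocks: %d\n", n / 32);		/* MOVE_16_WORDS each */
	printf("trailing words: %d\n", (n % 32) / 2);	/* MOVE_WORD each */
	printf("trailing byte:  %d\n", n & 1);		/* final MOVE_BYTE */
}

int main(void)
{
	pdma_plan(0x1001, 77);	/* 1 + 2*32 + 6*2 + 0 = 77 bytes */
	return 0;
}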
*/ - if (n == 0) + if (bytes > 0) { + d += bytes; + hostdata->pdma_residual -= bytes; + } + + if (hostdata->pdma_residual == 0) return 0; - /* Target changed phase early? */ if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ, - BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0) - scmd_printk(KERN_ERR, hostdata->connected, + BUS_AND_STATUS_REG, BASR_ACK, + BASR_ACK, HZ / 64) < 0) + scmd_printk(KERN_DEBUG, hostdata->connected, "%s: !REQ and !ACK\n", __func__); if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) return 0; + if (bytes == 0) + udelay(MAC_PDMA_DELAY); + + if (bytes >= 0) + continue; + dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host, - "%s: bus error (%d/%d)\n", __func__, transferred, len); + "%s: bus error (%d/%d)\n", __func__, d - dst, len); NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); - d = dst + transferred; - n = len - transferred; + return -1; } scmd_printk(KERN_ERR, hostdata->connected, @@ -192,93 +311,27 @@ static inline int macscsi_pread(struct NCR5380_hostdata *hostdata, return -1; } - -#define CP_MEM_TO_IO(s,d,n) \ -__asm__ __volatile__ \ - (" cmp.w #4,%2\n" \ - " bls 8f\n" \ - " move.w %0,%%d0\n" \ - " neg.b %%d0\n" \ - " and.w #3,%%d0\n" \ - " sub.w %%d0,%2\n" \ - " bra 2f\n" \ - " 1: move.b (%0)+,(%1)\n" \ - " 2: dbf %%d0,1b\n" \ - " move.w %2,%%d0\n" \ - " lsr.w #5,%%d0\n" \ - " bra 4f\n" \ - " 3: move.l (%0)+,(%1)\n" \ - "31: move.l (%0)+,(%1)\n" \ - "32: move.l (%0)+,(%1)\n" \ - "33: move.l (%0)+,(%1)\n" \ - "34: move.l (%0)+,(%1)\n" \ - "35: move.l (%0)+,(%1)\n" \ - "36: move.l (%0)+,(%1)\n" \ - "37: move.l (%0)+,(%1)\n" \ - " 4: dbf %%d0,3b\n" \ - " move.w %2,%%d0\n" \ - " lsr.w #2,%%d0\n" \ - " and.w #7,%%d0\n" \ - " bra 6f\n" \ - " 5: move.l (%0)+,(%1)\n" \ - " 6: dbf %%d0,5b\n" \ - " and.w #3,%2\n" \ - " bra 8f\n" \ - " 7: move.b (%0)+,(%1)\n" \ - " 8: dbf %2,7b\n" \ - " moveq.l #0, %2\n" \ - " 9: \n" \ - ".section .fixup,\"ax\"\n" \ - " .even\n" \ - "91: moveq.l #1, %2\n" \ - " jra 9b\n" \ - "94: moveq.l #4, %2\n" \ - " jra 9b\n" \ - ".previous\n" \ - ".section __ex_table,\"a\"\n" \ - " .align 4\n" \ - " .long 1b,91b\n" \ - " .long 3b,94b\n" \ - " .long 31b,94b\n" \ - " .long 32b,94b\n" \ - " .long 33b,94b\n" \ - " .long 34b,94b\n" \ - " .long 35b,94b\n" \ - " .long 36b,94b\n" \ - " .long 37b,94b\n" \ - " .long 5b,94b\n" \ - " .long 7b,91b\n" \ - ".previous" \ - : "=a"(s), "=a"(d), "=d"(n) \ - : "0"(s), "1"(d), "2"(n) \ - : "d0") - static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata, unsigned char *src, int len) { unsigned char *s = src; u8 __iomem *d = hostdata->pdma_io + (OUTPUT_DATA_REG << 4); - int n = len; - int transferred; + + hostdata->pdma_residual = len; while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG, BASR_DRQ | BASR_PHASE_MATCH, BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) { - CP_MEM_TO_IO(s, d, n); + int bytes; - transferred = s - src - n; - hostdata->pdma_residual = len - transferred; + bytes = mac_pdma_send(s, d, min(hostdata->pdma_residual, 512)); - /* Target changed phase early? */ - if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ, - BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0) - scmd_printk(KERN_ERR, hostdata->connected, - "%s: !REQ and !ACK\n", __func__); - if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) - return 0; + if (bytes > 0) { + s += bytes; + hostdata->pdma_residual -= bytes; + } - /* No bus error. 
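The rewritten macscsi_pread() loop above interprets the helper's return value three ways: positive means progress was made, zero means a fault with no progress (delay briefly and retry), and negative means a fault with an uncertain residual, which aborts the transfer. A simplified standalone sketch of that policy, with xfer() as a hypothetical stand-in for mac_pdma_recv() and the DRQ/phase polling omitted:

#include <stdio.h>

static int xfer(int residual)
{
	return residual > 512 ? 512 : residual;	/* at most 512 bytes per pass */
}

int main(void)
{
	int residual = 1300;

	while (residual) {
		int bytes = xfer(residual);

		if (bytes > 0)
			residual -= bytes;	/* progress: keep going */
		else if (bytes == 0)
			continue;		/* no progress: the driver delays, then retries */
		else
			return 1;		/* residual unknown: give up */
	}
	printf("transfer complete\n");
	return 0;
}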
*/ - if (n == 0) { + if (hostdata->pdma_residual == 0) { if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG, TCR_LAST_BYTE_SENT, TCR_LAST_BYTE_SENT, HZ / 64) < 0) @@ -287,17 +340,29 @@ static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata, return 0; } + if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ, + BUS_AND_STATUS_REG, BASR_ACK, + BASR_ACK, HZ / 64) < 0) + scmd_printk(KERN_DEBUG, hostdata->connected, + "%s: !REQ and !ACK\n", __func__); + if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) + return 0; + + if (bytes == 0) + udelay(MAC_PDMA_DELAY); + + if (bytes >= 0) + continue; + dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host, - "%s: bus error (%d/%d)\n", __func__, transferred, len); + "%s: bus error (%d/%d)\n", __func__, s - src, len); NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); - s = src + transferred; - n = len - transferred; + return -1; } scmd_printk(KERN_ERR, hostdata->connected, "%s: phase mismatch or !DRQ\n", __func__); NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); - return -1; } @@ -305,7 +370,7 @@ static int macscsi_dma_xfer_len(struct NCR5380_hostdata *hostdata, struct scsi_cmnd *cmd) { if (hostdata->flags & FLAG_NO_PSEUDO_DMA || - cmd->SCp.this_residual < 16) + cmd->SCp.this_residual < setup_use_pdma) return 0; return cmd->SCp.this_residual; diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c index acb503ea8f0c4145bf28cb6abe5334e0b787990f..806ceabcabc3f822e48e91fc6b3366c5337ac0fc 100644 --- a/drivers/scsi/megaraid/megaraid_sas_base.c +++ b/drivers/scsi/megaraid/megaraid_sas_base.c @@ -3025,6 +3025,7 @@ megasas_fw_crash_buffer_show(struct device *cdev, u32 size; unsigned long buff_addr; unsigned long dmachunk = CRASH_DMA_BUF_SIZE; + unsigned long chunk_left_bytes; unsigned long src_addr; unsigned long flags; u32 buff_offset; @@ -3050,6 +3051,8 @@ megasas_fw_crash_buffer_show(struct device *cdev, } size = (instance->fw_crash_buffer_size * dmachunk) - buff_offset; + chunk_left_bytes = dmachunk - (buff_offset % dmachunk); + size = (size > chunk_left_bytes) ? chunk_left_bytes : size; size = (size >= PAGE_SIZE) ? (PAGE_SIZE - 1) : size; src_addr = (unsigned long)instance->crash_buf[buff_offset / dmachunk] + @@ -5862,7 +5865,8 @@ megasas_get_target_prop(struct megasas_instance *instance, int ret; struct megasas_cmd *cmd; struct megasas_dcmd_frame *dcmd; - u16 targetId = (sdev->channel % 2) + sdev->id; + u16 targetId = ((sdev->channel % 2) * MEGASAS_MAX_DEV_PER_CHANNEL) + + sdev->id; cmd = megasas_get_cmd(instance); diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c index 8776330175e343c3a620005e7105140209910922..d2ab52026014f99934989c3a0514f259fd16ceeb 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_base.c +++ b/drivers/scsi/mpt3sas/mpt3sas_base.c @@ -2565,12 +2565,14 @@ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev) { struct sysinfo s; u64 consistent_dma_mask; + /* Set 63 bit DMA mask for all SAS3 and SAS35 controllers */ + int dma_mask = (ioc->hba_mpi_version_belonged > MPI2_VERSION) ? 
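The megaraid_sas change above clamps a sysfs read of the crash buffer so it never crosses a DMA chunk boundary, in addition to the existing page-size clamp. A standalone sketch of the same arithmetic (chunk and page sizes here are example values, not the driver's):

#include <stdio.h>

#define CHUNK	(1024UL * 1024)	/* stand-in for CRASH_DMA_BUF_SIZE */
#define PAGE	4096UL

static unsigned long bytes_to_copy(unsigned long total, unsigned long offset)
{
	unsigned long size = total - offset;
	unsigned long chunk_left = CHUNK - (offset % CHUNK);

	if (size > chunk_left)
		size = chunk_left;	/* stay inside the current chunk */
	if (size >= PAGE)
		size = PAGE - 1;	/* sysfs reads are at most a page */
	return size;
}

int main(void)
{
	printf("%lu\n", bytes_to_copy(8 * CHUNK, CHUNK - 100));	/* 100 */
	printf("%lu\n", bytes_to_copy(8 * CHUNK, 100));		/* 4095 */
	return 0;
}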
63 : 64; if (ioc->is_mcpu_endpoint) goto try_32bit; if (ioc->dma_mask) - consistent_dma_mask = DMA_BIT_MASK(64); + consistent_dma_mask = DMA_BIT_MASK(dma_mask); else consistent_dma_mask = DMA_BIT_MASK(32); @@ -2578,11 +2580,11 @@ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev) const uint64_t required_mask = dma_get_required_mask(&pdev->dev); if ((required_mask > DMA_BIT_MASK(32)) && - !pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) && + !pci_set_dma_mask(pdev, DMA_BIT_MASK(dma_mask)) && !pci_set_consistent_dma_mask(pdev, consistent_dma_mask)) { ioc->base_add_sg_single = &_base_add_sg_single_64; ioc->sge_size = sizeof(Mpi2SGESimple64_t); - ioc->dma_mask = 64; + ioc->dma_mask = dma_mask; goto out; } } @@ -2609,7 +2611,7 @@ static int _base_change_consistent_dma_mask(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev) { - if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64))) { + if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(ioc->dma_mask))) { if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) return -ENODEV; } @@ -4545,7 +4547,7 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc) total_sz += sz; } while (ioc->rdpq_array_enable && (++i < ioc->reply_queue_count)); - if (ioc->dma_mask == 64) { + if (ioc->dma_mask > 32) { if (_base_change_consistent_dma_mask(ioc, ioc->pdev) != 0) { pr_warn(MPT3SAS_FMT "no suitable consistent DMA mask for %s\n", diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c index f84f9bf15027817f89c7d5e4c6695b17a8f90ac2..ddce32fe0513add889fcfec0848973b58cae6354 100644 --- a/drivers/scsi/qla2xxx/qla_init.c +++ b/drivers/scsi/qla2xxx/qla_init.c @@ -4732,7 +4732,7 @@ qla2x00_alloc_fcport(scsi_qla_host_t *vha, gfp_t flags) ql_log(ql_log_warn, vha, 0xd049, "Failed to allocate ct_sns request.\n"); kfree(fcport); - fcport = NULL; + return NULL; } INIT_WORK(&fcport->del_work, qla24xx_delete_sess_fn); INIT_LIST_HEAD(&fcport->gnl_entry); diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index 1fc832751a4fb913205dacc00b6886d77d84b744..75b926e700766db9ba92bd2fd60512f24c3288c6 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -71,11 +71,11 @@ int scsi_init_sense_cache(struct Scsi_Host *shost) struct kmem_cache *cache; int ret = 0; + mutex_lock(&scsi_sense_cache_mutex); cache = scsi_select_sense_cache(shost->unchecked_isa_dma); if (cache) - return 0; + goto exit; - mutex_lock(&scsi_sense_cache_mutex); if (shost->unchecked_isa_dma) { scsi_sense_isadma_cache = kmem_cache_create("scsi_sense_cache(DMA)", @@ -91,7 +91,7 @@ int scsi_init_sense_cache(struct Scsi_Host *shost) if (!scsi_sense_cache) ret = -ENOMEM; } - + exit: mutex_unlock(&scsi_sense_cache_mutex); return ret; } @@ -3059,11 +3059,14 @@ scsi_device_quiesce(struct scsi_device *sdev) */ WARN_ON_ONCE(sdev->quiesced_by && sdev->quiesced_by != current); - blk_set_preempt_only(q); + if (sdev->quiesced_by == current) + return 0; + + blk_set_pm_only(q); blk_mq_freeze_queue(q); /* - * Ensure that the effect of blk_set_preempt_only() will be visible + * Ensure that the effect of blk_set_pm_only() will be visible * for percpu_ref_tryget() callers that occur after the queue * unfreeze even if the queue was already frozen before this function * was called. See also https://lwn.net/Articles/573497/. 
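The scsi_lib change above moves the "does the sense cache already exist?" check under scsi_sense_cache_mutex, so the check and the creation are one atomic step and two hosts registering concurrently cannot both decide to create the cache. A userspace sketch of that create-once-under-the-lock pattern (pthread mutex and malloc() stand in for the kernel primitives):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;
static void *sense_cache;

static int init_sense_cache(size_t size)
{
	int ret = 0;

	pthread_mutex_lock(&cache_mutex);
	if (sense_cache)		/* checked under the lock */
		goto exit;
	sense_cache = malloc(size);	/* created under the same lock */
	if (!sense_cache)
		ret = -1;
exit:
	pthread_mutex_unlock(&cache_mutex);
	return ret;
}

int main(void)
{
	return init_sense_cache(96) ? 1 : 0;
}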
@@ -3076,7 +3079,7 @@ scsi_device_quiesce(struct scsi_device *sdev) if (err == 0) sdev->quiesced_by = current; else - blk_clear_preempt_only(q); + blk_clear_pm_only(q); mutex_unlock(&sdev->state_mutex); return err; @@ -3099,8 +3102,10 @@ void scsi_device_resume(struct scsi_device *sdev) * device deleted during suspend) */ mutex_lock(&sdev->state_mutex); - sdev->quiesced_by = NULL; - blk_clear_preempt_only(sdev->request_queue); + if (sdev->quiesced_by) { + sdev->quiesced_by = NULL; + blk_clear_pm_only(sdev->request_queue); + } if (sdev->sdev_state == SDEV_QUIESCE) scsi_device_set_state(sdev, SDEV_RUNNING); mutex_unlock(&sdev->state_mutex); diff --git a/drivers/scsi/ufs/unipro.h b/drivers/scsi/ufs/unipro.h index 23129d7b2678dfea93b21f010c31a7aece9e9233..c77e365264478f3a74e1a28cdf0b6a881d5b991f 100644 --- a/drivers/scsi/ufs/unipro.h +++ b/drivers/scsi/ufs/unipro.h @@ -52,7 +52,7 @@ #define RX_HS_UNTERMINATED_ENABLE 0x00A6 #define RX_ENTER_HIBERN8 0x00A7 #define RX_BYPASS_8B10B_ENABLE 0x00A8 -#define RX_TERMINATION_FORCE_ENABLE 0x0089 +#define RX_TERMINATION_FORCE_ENABLE 0x00A9 #define RX_MIN_ACTIVATETIME_CAPABILITY 0x008F #define RX_HIBERN8TIME_CAPABILITY 0x0092 #define RX_REFCLKFREQ 0x00EB diff --git a/drivers/soc/bcm/bcm2835-power.c b/drivers/soc/bcm/bcm2835-power.c index 918cf54ab9ef9a1c8abf8e450184ce5a17b84b8a..05d2293e8e5848e217a5e8b6cdd3455bfe563226 100644 --- a/drivers/soc/bcm/bcm2835-power.c +++ b/drivers/soc/bcm/bcm2835-power.c @@ -637,15 +637,15 @@ static int bcm2835_power_probe(struct platform_device *pdev) power->base = pm->base; power->asb = pm->asb; - /* 2711 hack: the new ARGON ASB took over V3D, which is our + /* 2711 hack: the new RPiVid ASB took over V3D, which is our * only consumer of this driver so far. The old ASB seems to * still be present with ISP and H264 bits but no V3D, but I * don't know if that's real or not. The V3D is in the same * place in the new ASB as the old one, so just poke the new * one for now. 
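The unipro.h one-liner above corrects RX_TERMINATION_FORCE_ENABLE so it continues the consecutive run of RX_* attribute IDs visible in the surrounding context (0x00A6..0x00A9) instead of the stray 0x0089. A trivial standalone check of that relationship, with the values copied from the hunk:

#include <assert.h>

#define RX_HS_UNTERMINATED_ENABLE	0x00A6
#define RX_ENTER_HIBERN8		0x00A7
#define RX_BYPASS_8B10B_ENABLE		0x00A8
#define RX_TERMINATION_FORCE_ENABLE	0x00A9

static_assert(RX_TERMINATION_FORCE_ENABLE == RX_BYPASS_8B10B_ENABLE + 1,
	      "RX attribute IDs are consecutive");

int main(void) { return 0; }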
*/ - if (pm->arg_asb) { - power->asb = pm->arg_asb; + if (pm->rpivid_asb) { + power->asb = pm->rpivid_asb; power->is_2711 = true; } diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c index cb6a331f448ab681ed4ef0bd24f8243debf8c232..70f78eda037e805604edbc7b051ee966fc1e5bc6 100644 --- a/drivers/soundwire/cadence_master.c +++ b/drivers/soundwire/cadence_master.c @@ -81,8 +81,8 @@ #define CDNS_MCP_INTSET 0x4C -#define CDNS_SDW_SLAVE_STAT 0x50 -#define CDNS_MCP_SLAVE_STAT_MASK BIT(1, 0) +#define CDNS_MCP_SLAVE_STAT 0x50 +#define CDNS_MCP_SLAVE_STAT_MASK GENMASK(1, 0) #define CDNS_MCP_SLAVE_INTSTAT0 0x54 #define CDNS_MCP_SLAVE_INTSTAT1 0x58 @@ -96,8 +96,8 @@ #define CDNS_MCP_SLAVE_INTMASK0 0x5C #define CDNS_MCP_SLAVE_INTMASK1 0x60 -#define CDNS_MCP_SLAVE_INTMASK0_MASK GENMASK(30, 0) -#define CDNS_MCP_SLAVE_INTMASK1_MASK GENMASK(16, 0) +#define CDNS_MCP_SLAVE_INTMASK0_MASK GENMASK(31, 0) +#define CDNS_MCP_SLAVE_INTMASK1_MASK GENMASK(15, 0) #define CDNS_MCP_PORT_INTSTAT 0x64 #define CDNS_MCP_PDI_STAT 0x6C diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c index 885740ccedfac15e77d6c13febaab1ae1c05a02e..7b3eaccf2aaa66a49299491a47f55952d065b555 100644 --- a/drivers/spi/spi-bcm2835.c +++ b/drivers/spi/spi-bcm2835.c @@ -558,7 +558,8 @@ static int bcm2835_spi_transfer_one(struct spi_master *master, bcm2835_wr(bs, BCM2835_SPI_CLK, cdiv); /* handle all the 3-wire mode */ - if ((spi->mode & SPI_3WIRE) && (tfr->rx_buf)) + if (spi->mode & SPI_3WIRE && tfr->rx_buf && + tfr->rx_buf != master->dummy_rx) cs |= BCM2835_SPI_CS_REN; else cs &= ~BCM2835_SPI_CS_REN; diff --git a/drivers/staging/android/ion/ion_page_pool.c b/drivers/staging/android/ion/ion_page_pool.c index 9bc56eb48d2a894c2f3d760b8b129813d18b5177..890d264ac68798a1c7d8a40243d729fa7d75a6fa 100644 --- a/drivers/staging/android/ion/ion_page_pool.c +++ b/drivers/staging/android/ion/ion_page_pool.c @@ -8,11 +8,14 @@ #include #include #include +#include #include "ion.h" static inline struct page *ion_page_pool_alloc_pages(struct ion_page_pool *pool) { + if (fatal_signal_pending(current)) + return NULL; return alloc_pages(pool->gfp_mask, pool->order); } diff --git a/drivers/staging/comedi/drivers/dt3000.c b/drivers/staging/comedi/drivers/dt3000.c index 2edf3ee91300007c4dff503774bca1f2fe04b57b..caf4d4df4bd3044f68a6e26b0bc5145b20e150ef 100644 --- a/drivers/staging/comedi/drivers/dt3000.c +++ b/drivers/staging/comedi/drivers/dt3000.c @@ -342,9 +342,9 @@ static irqreturn_t dt3k_interrupt(int irq, void *d) static int dt3k_ns_to_timer(unsigned int timer_base, unsigned int *nanosec, unsigned int flags) { - int divider, base, prescale; + unsigned int divider, base, prescale; - /* This function needs improvment */ + /* This function needs improvement */ /* Don't know if divider==0 works. 
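The cadence_master.c fixes above replace a bogus BIT(1, 0) with GENMASK(1, 0) and correct the interrupt-mask widths: BIT() names a single bit, while GENMASK(h, l) is the contiguous bit range h..l. A standalone illustration with local macro definitions (the kernel's real ones live in <linux/bits.h>):

#include <assert.h>

#define BIT(n)		(1ULL << (n))
#define GENMASK(h, l)	((~0ULL >> (63 - (h))) & (~0ULL << (l)))

static_assert(GENMASK(1, 0) == 0x3, "two slave status bits");
static_assert(GENMASK(31, 0) == 0xffffffff, "INTMASK0 covers 32 bits");
static_assert(GENMASK(15, 0) == 0xffff, "INTMASK1 covers 16 bits");
static_assert(BIT(1) == 0x2, "BIT() is a single bit, not a range");

int main(void) { return 0; }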
*/ for (prescale = 0; prescale < 16; prescale++) { @@ -358,7 +358,7 @@ static int dt3k_ns_to_timer(unsigned int timer_base, unsigned int *nanosec, divider = (*nanosec) / base; break; case CMDF_ROUND_UP: - divider = (*nanosec) / base; + divider = DIV_ROUND_UP(*nanosec, base); break; } if (divider < 65536) { @@ -368,7 +368,7 @@ static int dt3k_ns_to_timer(unsigned int timer_base, unsigned int *nanosec, } prescale = 15; - base = timer_base * (1 << prescale); + base = timer_base * (prescale + 1); divider = 65535; *nanosec = divider * base; return (prescale << 16) | (divider); diff --git a/drivers/staging/gasket/apex_driver.c b/drivers/staging/gasket/apex_driver.c index c747e9ca451860309dab8440bad5705492d014a4..0cef1d6d2e2b0c9963621e8a476c6c1fcc23c0b3 100644 --- a/drivers/staging/gasket/apex_driver.c +++ b/drivers/staging/gasket/apex_driver.c @@ -538,7 +538,7 @@ static ssize_t sysfs_show(struct device *device, struct device_attribute *attr, break; case ATTR_KERNEL_HIB_SIMPLE_PAGE_TABLE_SIZE: ret = scnprintf(buf, PAGE_SIZE, "%u\n", - gasket_page_table_num_entries( + gasket_page_table_num_simple_entries( gasket_dev->page_table[0])); break; case ATTR_KERNEL_HIB_NUM_ACTIVE_PAGES: diff --git a/drivers/staging/media/davinci_vpfe/vpfe_video.c b/drivers/staging/media/davinci_vpfe/vpfe_video.c index 1269a983455e57c5434b71dc145f07f1a088623c..13b890b9ef187f959c1decfae7f60169102cb31a 100644 --- a/drivers/staging/media/davinci_vpfe/vpfe_video.c +++ b/drivers/staging/media/davinci_vpfe/vpfe_video.c @@ -422,6 +422,9 @@ static int vpfe_open(struct file *file) /* If decoder is not initialized. initialize it */ if (!video->initialized && vpfe_update_pipe_state(video)) { mutex_unlock(&video->lock); + v4l2_fh_del(&handle->vfh); + v4l2_fh_exit(&handle->vfh); + kfree(handle); return -ENODEV; } /* Increment device users counter */ diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c index 86b921030db7bf2eab7ec97d7cb51b06959bcc96..4d5b99d8ae9dc11c41b2bdae6243da0a7b5f8b4d 100644 --- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c +++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c @@ -79,7 +79,11 @@ static int bcm2835_audio_alsa_newpcm(struct bcm2835_chip *chip, if (err) return err; - err = snd_bcm2835_new_pcm(chip, "bcm2835 IEC958/HDMI", 1, 0, 1, true); + err = snd_bcm2835_new_pcm(chip, "bcm2835 IEC958/HDMI", 1, AUDIO_DEST_HDMI0, 1, true); + if (err) + return err; + + err = snd_bcm2835_new_pcm(chip, "bcm2835 IEC958/HDMI1", 2, AUDIO_DEST_HDMI1, 1, true); if (err) return err; @@ -106,7 +110,7 @@ static struct bcm2835_audio_driver bcm2835_audio_alsa = { .newctl = snd_bcm2835_new_ctl, }; -static struct bcm2835_audio_driver bcm2835_audio_hdmi = { +static struct bcm2835_audio_driver bcm2835_audio_hdmi0 = { .driver = { .name = "bcm2835_hdmi", .owner = THIS_MODULE, @@ -116,7 +120,20 @@ static struct bcm2835_audio_driver bcm2835_audio_hdmi = { .minchannels = 1, .newpcm = bcm2835_audio_simple_newpcm, .newctl = snd_bcm2835_new_hdmi_ctl, - .route = AUDIO_DEST_HDMI + .route = AUDIO_DEST_HDMI0 +}; + +static struct bcm2835_audio_driver bcm2835_audio_hdmi1 = { + .driver = { + .name = "bcm2835_hdmi", + .owner = THIS_MODULE, + }, + .shortname = "bcm2835 HDMI 1", + .longname = "bcm2835 HDMI 1", + .minchannels = 1, + .newpcm = bcm2835_audio_simple_newpcm, + .newctl = snd_bcm2835_new_hdmi_ctl, + .route = AUDIO_DEST_HDMI1 }; static struct bcm2835_audio_driver bcm2835_audio_headphones = { @@ -143,7 +160,11 @@ static struct bcm2835_audio_drivers 
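The dt3000 hunk nearby makes CMDF_ROUND_UP actually round the divisor up instead of truncating. A quick standalone comparison of the three rounding modes; DIV_ROUND_UP matches the kernel macro, and DIV_ROUND_CLOSEST matches it for unsigned operands:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define DIV_ROUND_CLOSEST(n, d)	(((n) + (d) / 2) / (d))

int main(void)
{
	unsigned int nanosec = 1000, base = 300;

	printf("nearest: %u\n", DIV_ROUND_CLOSEST(nanosec, base));	/* 3 */
	printf("down:    %u\n", nanosec / base);			/* 3 */
	printf("up:      %u\n", DIV_ROUND_UP(nanosec, base));		/* 4 */
	return 0;
}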
children_devices[] = { .is_enabled = &enable_compat_alsa, }, { - .audio_driver = &bcm2835_audio_hdmi, + .audio_driver = &bcm2835_audio_hdmi0, + .is_enabled = &enable_hdmi, + }, + { + .audio_driver = &bcm2835_audio_hdmi1, .is_enabled = &enable_hdmi, }, { diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835.h b/drivers/staging/vc04_services/bcm2835-audio/bcm2835.h index 595ad584243f6e699e65893da14d0f5a95b1ff0f..67beb9f17dc5d68876757933ba38ea386e398117 100644 --- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835.h +++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835.h @@ -33,7 +33,9 @@ enum { enum snd_bcm2835_route { AUDIO_DEST_AUTO = 0, AUDIO_DEST_HEADPHONES = 1, - AUDIO_DEST_HDMI = 2, + AUDIO_DEST_HDMI = 2, // for backwards compatibility. + AUDIO_DEST_HDMI0 = 2, + AUDIO_DEST_HDMI1 = 3, AUDIO_DEST_MAX, }; diff --git a/drivers/staging/vc04_services/bcm2835-camera/controls.c b/drivers/staging/vc04_services/bcm2835-camera/controls.c index e49897b958d2c6cee9f14a3f59ad9dd7a4411687..ae724fa5c69af0e65961feb4ee5609f66c4e9185 100644 --- a/drivers/staging/vc04_services/bcm2835-camera/controls.c +++ b/drivers/staging/vc04_services/bcm2835-camera/controls.c @@ -481,6 +481,10 @@ static int ctrl_set_awb_mode(struct bm2835_mmal_dev *dev, case V4L2_WHITE_BALANCE_SHADE: u32_value = MMAL_PARAM_AWBMODE_SHADE; break; + + case V4L2_WHITE_BALANCE_GREYWORLD: + u32_value = MMAL_PARAM_AWBMODE_GREYWORLD; + break; } return vchiq_mmal_port_parameter_set(dev->instance, control, @@ -1008,8 +1012,8 @@ static const struct bm2835_mmal_v4l2_ctrl v4l2_ctrls[V4L2_CTRL_COUNT] = { { V4L2_CID_AUTO_N_PRESET_WHITE_BALANCE, MMAL_CONTROL_TYPE_STD_MENU, - ~0x3ff, V4L2_WHITE_BALANCE_SHADE, V4L2_WHITE_BALANCE_AUTO, 0, - NULL, + ~0x7ff, V4L2_WHITE_BALANCE_GREYWORLD, V4L2_WHITE_BALANCE_AUTO, + 0, NULL, MMAL_PARAMETER_AWB_MODE, &ctrl_set_awb_mode, false diff --git a/drivers/staging/vc04_services/bcm2835-codec/Kconfig b/drivers/staging/vc04_services/bcm2835-codec/Kconfig index 951971b39ce0e6ddb40135f59bc62758f892f4c6..c104be9ad6da5e09f9cd86b78f6e074d996f1e60 100644 --- a/drivers/staging/vc04_services/bcm2835-codec/Kconfig +++ b/drivers/staging/vc04_services/bcm2835-codec/Kconfig @@ -1,6 +1,6 @@ config VIDEO_CODEC_BCM2835 tristate "BCM2835 Video codec support" - depends on MEDIA_SUPPORT + depends on MEDIA_SUPPORT && MEDIA_CONTROLLER depends on VIDEO_V4L2 && (ARCH_BCM2835 || COMPILE_TEST) select BCM2835_VCHIQ_MMAL select VIDEOBUF2_DMA_CONTIG diff --git a/drivers/staging/vc04_services/bcm2835-codec/bcm2835-v4l2-codec.c b/drivers/staging/vc04_services/bcm2835-codec/bcm2835-v4l2-codec.c index 708f76b7aa92bc88ffa31b598847677db40ef8ea..16f8dfb86cb4615cf7547dc3a5ed14735052e8e2 100644 --- a/drivers/staging/vc04_services/bcm2835-codec/bcm2835-v4l2-codec.c +++ b/drivers/staging/vc04_services/bcm2835-codec/bcm2835-v4l2-codec.c @@ -77,7 +77,7 @@ enum bcm2835_codec_role { ISP, }; -const static char *roles[] = { +static const char * const roles[] = { "decode", "encode", "isp" @@ -457,6 +457,9 @@ struct bcm2835_codec_ctx { }; struct bcm2835_codec_driver { + struct platform_device *pdev; + struct media_device mdev; + struct bcm2835_codec_dev *encode; struct bcm2835_codec_dev *decode; struct bcm2835_codec_dev *isp; @@ -504,7 +507,7 @@ static struct bcm2835_codec_fmt *find_format(struct v4l2_format *f, for (k = 0; k < fmts->num_entries; k++) { fmt = &fmts->list[k]; - if (fmt->fourcc == f->fmt.pix.pixelformat) + if (fmt->fourcc == f->fmt.pix_mp.pixelformat) break; } if (k == fmts->num_entries) @@ -522,9 +525,9 @@ static struct 
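The white-balance control in the camera hunk above is a menu control whose mask argument hides menu entries: bits set in the mask are skipped. Adding the greyworld entry therefore widens the visible range by one bit, which is why ~0x3ff becomes ~0x7ff. A quick standalone check, assuming greyworld sits one past shade as menu index 10 (which is what the widened mask implies):

#include <assert.h>

/* entries 0..max visible -> low bits set; the control takes the inverse */
#define VISIBLE_MASK(max)	((1UL << ((max) + 1)) - 1)

static_assert(VISIBLE_MASK(9) == 0x3ff, "shade (9) was the last entry");
static_assert(VISIBLE_MASK(10) == 0x7ff, "greyworld (10) extends it by one");

int main(void) { return 0; }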
bcm2835_codec_q_data *get_q_data(struct bcm2835_codec_ctx *ctx, enum v4l2_buf_type type) { switch (type) { - case V4L2_BUF_TYPE_VIDEO_OUTPUT: + case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE: return &ctx->q_data[V4L2_M2M_SRC]; - case V4L2_BUF_TYPE_VIDEO_CAPTURE: + case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE: return &ctx->q_data[V4L2_M2M_DST]; default: v4l2_err(&ctx->dev->v4l2_dev, "%s: Invalid queue type %u\n", @@ -541,9 +544,9 @@ static struct vchiq_mmal_port *get_port_data(struct bcm2835_codec_ctx *ctx, return NULL; switch (type) { - case V4L2_BUF_TYPE_VIDEO_OUTPUT: + case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE: return &ctx->component->input[0]; - case V4L2_BUF_TYPE_VIDEO_CAPTURE: + case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE: return &ctx->component->output[0]; default: v4l2_err(&ctx->dev->v4l2_dev, "%s: Invalid queue type %u\n", @@ -557,7 +560,7 @@ static struct vchiq_mmal_port *get_port_data(struct bcm2835_codec_ctx *ctx, * mem2mem callbacks */ -/** +/* * job_ready() - check whether an instance is ready to be scheduled to run */ static int job_ready(void *priv) @@ -752,10 +755,16 @@ static void handle_fmt_changed(struct bcm2835_codec_ctx *ctx, format->es.video.crop.width, format->es.video.crop.height, format->es.video.color_space); - q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE); + q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE); + v4l2_dbg(1, debug, &ctx->dev->v4l2_dev, "%s: Format was %ux%u, crop %ux%u\n", + __func__, q_data->bytesperline, q_data->height, + q_data->crop_width, q_data->crop_height); + q_data->crop_width = format->es.video.crop.width; q_data->crop_height = format->es.video.crop.height; - q_data->bytesperline = format->es.video.crop.width; + q_data->bytesperline = get_bytesperline(format->es.video.width, + q_data->fmt); + q_data->height = format->es.video.height; q_data->sizeimage = format->buffer_size_min; if (format->es.video.color_space) @@ -941,12 +950,12 @@ static void device_run(void *priv) static int vidioc_querycap(struct file *file, void *priv, struct v4l2_capability *cap) { + struct bcm2835_codec_dev *dev = video_drvdata(file); + strncpy(cap->driver, MEM2MEM_NAME, sizeof(cap->driver) - 1); - strncpy(cap->card, MEM2MEM_NAME, sizeof(cap->card) - 1); + strncpy(cap->card, dev->vfd.name, sizeof(cap->card) - 1); snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:%s", MEM2MEM_NAME); - cap->device_caps = V4L2_CAP_VIDEO_M2M | V4L2_CAP_STREAMING; - cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS; return 0; } @@ -996,16 +1005,20 @@ static int vidioc_g_fmt(struct bcm2835_codec_ctx *ctx, struct v4l2_format *f) q_data = get_q_data(ctx, f->type); - f->fmt.pix.width = q_data->crop_width; - f->fmt.pix.height = q_data->height; - f->fmt.pix.field = V4L2_FIELD_NONE; - f->fmt.pix.pixelformat = q_data->fmt->fourcc; - f->fmt.pix.bytesperline = q_data->bytesperline; - f->fmt.pix.sizeimage = q_data->sizeimage; - f->fmt.pix.colorspace = ctx->colorspace; - f->fmt.pix.xfer_func = ctx->xfer_func; - f->fmt.pix.ycbcr_enc = ctx->ycbcr_enc; - f->fmt.pix.quantization = ctx->quant; + f->fmt.pix_mp.width = q_data->crop_width; + f->fmt.pix_mp.height = q_data->height; + f->fmt.pix_mp.pixelformat = q_data->fmt->fourcc; + f->fmt.pix_mp.field = V4L2_FIELD_NONE; + f->fmt.pix_mp.colorspace = ctx->colorspace; + f->fmt.pix_mp.plane_fmt[0].sizeimage = q_data->sizeimage; + f->fmt.pix_mp.plane_fmt[0].bytesperline = q_data->bytesperline; + f->fmt.pix_mp.num_planes = 1; + f->fmt.pix_mp.ycbcr_enc = ctx->ycbcr_enc; + f->fmt.pix_mp.quantization = ctx->quant; + f->fmt.pix_mp.xfer_func = 
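The codec conversion above switches the queues from the single-planar buffer types to their _MPLANE counterparts, so every format field moves from f->fmt.pix to f->fmt.pix_mp and the per-plane stride and size live in plane_fmt[]. A minimal userspace sketch of filling the multiplanar variant (assumes the Linux uAPI header <linux/videodev2.h>; the resolution and sizes are made-up example values):

#include <string.h>
#include <linux/videodev2.h>

static void fill_mplane(struct v4l2_format *f)
{
	memset(f, 0, sizeof(*f));
	f->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	f->fmt.pix_mp.width = 1920;
	f->fmt.pix_mp.height = 1088;	/* decoder output aligned to 16 lines */
	f->fmt.pix_mp.pixelformat = V4L2_PIX_FMT_YUV420;
	f->fmt.pix_mp.field = V4L2_FIELD_NONE;
	f->fmt.pix_mp.num_planes = 1;	/* this codec keeps a single plane */
	f->fmt.pix_mp.plane_fmt[0].bytesperline = 1920;
	f->fmt.pix_mp.plane_fmt[0].sizeimage = 1920 * 1088 * 3 / 2;
}

int main(void)
{
	struct v4l2_format f;

	fill_mplane(&f);
	return 0;
}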
ctx->xfer_func; + + memset(f->fmt.pix_mp.plane_fmt[0].reserved, 0, + sizeof(f->fmt.pix_mp.plane_fmt[0].reserved)); return 0; } @@ -1029,35 +1042,37 @@ static int vidioc_try_fmt(struct bcm2835_codec_ctx *ctx, struct v4l2_format *f, * The V4L2 specification requires the driver to correct the format * struct if any of the dimensions is unsupported */ - if (f->fmt.pix.width > MAX_W) - f->fmt.pix.width = MAX_W; - if (f->fmt.pix.height > MAX_H) - f->fmt.pix.height = MAX_H; + if (f->fmt.pix_mp.width > MAX_W) + f->fmt.pix_mp.width = MAX_W; + if (f->fmt.pix_mp.height > MAX_H) + f->fmt.pix_mp.height = MAX_H; if (!fmt->flags & V4L2_FMT_FLAG_COMPRESSED) { /* Only clip min w/h on capture. Treat 0x0 as unknown. */ - if (f->fmt.pix.width < MIN_W) - f->fmt.pix.width = MIN_W; - if (f->fmt.pix.height < MIN_H) - f->fmt.pix.height = MIN_H; + if (f->fmt.pix_mp.width < MIN_W) + f->fmt.pix_mp.width = MIN_W; + if (f->fmt.pix_mp.height < MIN_H) + f->fmt.pix_mp.height = MIN_H; /* - * For codecs the buffer must have a vertical alignment of 16 + * For decoders the buffer must have a vertical alignment of 16 * lines. * The selection will reflect any cropping rectangle when only * some of the pixels are active. */ - if (ctx->dev->role != ISP) - f->fmt.pix.height = ALIGN(f->fmt.pix.height, 16); + if (ctx->dev->role == DECODE) + f->fmt.pix_mp.height = ALIGN(f->fmt.pix_mp.height, 16); } - f->fmt.pix.bytesperline = get_bytesperline(f->fmt.pix.width, - fmt); - f->fmt.pix.sizeimage = get_sizeimage(f->fmt.pix.bytesperline, - f->fmt.pix.width, - f->fmt.pix.height, - fmt); + f->fmt.pix_mp.num_planes = 1; + f->fmt.pix_mp.plane_fmt[0].bytesperline = + get_bytesperline(f->fmt.pix_mp.width, fmt); + f->fmt.pix_mp.plane_fmt[0].sizeimage = + get_sizeimage(f->fmt.pix_mp.plane_fmt[0].bytesperline, + f->fmt.pix_mp.width, f->fmt.pix_mp.height, fmt); + memset(f->fmt.pix_mp.plane_fmt[0].reserved, 0, + sizeof(f->fmt.pix_mp.plane_fmt[0].reserved)); - f->fmt.pix.field = V4L2_FIELD_NONE; + f->fmt.pix_mp.field = V4L2_FIELD_NONE; return 0; } @@ -1070,8 +1085,8 @@ static int vidioc_try_fmt_vid_cap(struct file *file, void *priv, fmt = find_format(f, ctx->dev, true); if (!fmt) { - f->fmt.pix.pixelformat = get_default_format(ctx->dev, - true)->fourcc; + f->fmt.pix_mp.pixelformat = get_default_format(ctx->dev, + true)->fourcc; fmt = find_format(f, ctx->dev, true); } @@ -1086,13 +1101,13 @@ static int vidioc_try_fmt_vid_out(struct file *file, void *priv, fmt = find_format(f, ctx->dev, false); if (!fmt) { - f->fmt.pix.pixelformat = get_default_format(ctx->dev, - false)->fourcc; + f->fmt.pix_mp.pixelformat = get_default_format(ctx->dev, + false)->fourcc; fmt = find_format(f, ctx->dev, false); } - if (!f->fmt.pix.colorspace) - f->fmt.pix.colorspace = ctx->colorspace; + if (!f->fmt.pix_mp.colorspace) + f->fmt.pix_mp.colorspace = ctx->colorspace; return vidioc_try_fmt(ctx, f, fmt); } @@ -1106,9 +1121,10 @@ static int vidioc_s_fmt(struct bcm2835_codec_ctx *ctx, struct v4l2_format *f, bool update_capture_port = false; int ret; - v4l2_dbg(1, debug, &ctx->dev->v4l2_dev, "Setting format for type %d, wxh: %dx%d, fmt: %08x, size %u\n", - f->type, f->fmt.pix.width, f->fmt.pix.height, - f->fmt.pix.pixelformat, f->fmt.pix.sizeimage); + v4l2_dbg(1, debug, &ctx->dev->v4l2_dev, "Setting format for type %d, wxh: %dx%d, fmt: " V4L2_FOURCC_CONV ", size %u\n", + f->type, f->fmt.pix_mp.width, f->fmt.pix_mp.height, + V4L2_FOURCC_CONV_ARGS(f->fmt.pix_mp.pixelformat), + f->fmt.pix_mp.plane_fmt[0].sizeimage); vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type); if (!vq) @@ -1124,9 
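vidioc_try_fmt() above clamps the requested dimensions to the supported range and, for the decoder, rounds the height up to a multiple of 16 lines. A standalone sketch of that clamp-then-align step; ALIGN matches the kernel macro for power-of-two alignment, and the limits are example values:

#include <stdio.h>

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

static unsigned int clamp_height(unsigned int h,
				 unsigned int min, unsigned int max)
{
	if (h > max)
		h = max;
	if (h < min)
		h = min;
	return ALIGN(h, 16);	/* decoder buffers need 16-line alignment */
}

int main(void)
{
	printf("%u\n", clamp_height(1080, 32, 1920));	/* 1088 */
	printf("%u\n", clamp_height(10000, 32, 1920));	/* 1920 */
	return 0;
}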
+1140,9 @@ static int vidioc_s_fmt(struct bcm2835_codec_ctx *ctx, struct v4l2_format *f, } q_data->fmt = find_format(f, ctx->dev, - f->type == V4L2_BUF_TYPE_VIDEO_CAPTURE); - q_data->crop_width = f->fmt.pix.width; - q_data->height = f->fmt.pix.height; + f->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE); + q_data->crop_width = f->fmt.pix_mp.width; + q_data->height = f->fmt.pix_mp.height; if (!q_data->selection_set) q_data->crop_height = requested_height; @@ -1134,21 +1150,21 @@ static int vidioc_s_fmt(struct bcm2835_codec_ctx *ctx, struct v4l2_format *f, * Copying the behaviour of vicodec which retains a single set of * colorspace parameters for both input and output. */ - ctx->colorspace = f->fmt.pix.colorspace; - ctx->xfer_func = f->fmt.pix.xfer_func; - ctx->ycbcr_enc = f->fmt.pix.ycbcr_enc; - ctx->quant = f->fmt.pix.quantization; + ctx->colorspace = f->fmt.pix_mp.colorspace; + ctx->xfer_func = f->fmt.pix_mp.xfer_func; + ctx->ycbcr_enc = f->fmt.pix_mp.ycbcr_enc; + ctx->quant = f->fmt.pix_mp.quantization; /* All parameters should have been set correctly by try_fmt */ - q_data->bytesperline = f->fmt.pix.bytesperline; - q_data->sizeimage = f->fmt.pix.sizeimage; + q_data->bytesperline = f->fmt.pix_mp.plane_fmt[0].bytesperline; + q_data->sizeimage = f->fmt.pix_mp.plane_fmt[0].sizeimage; v4l2_dbg(1, debug, &ctx->dev->v4l2_dev, "Calulated bpl as %u, size %u\n", q_data->bytesperline, q_data->sizeimage); if (ctx->dev->role == DECODE && q_data->fmt->flags & V4L2_FMT_FLAG_COMPRESSED && - f->fmt.pix.width && f->fmt.pix.height) { + q_data->crop_width && q_data->height) { /* * On the decoder, if provided with a resolution on the input * side, then replicate that to the output side. @@ -1165,7 +1181,7 @@ static int vidioc_s_fmt(struct bcm2835_codec_ctx *ctx, struct v4l2_format *f, q_data_dst->height = ALIGN(q_data->crop_height, 16); q_data_dst->bytesperline = - get_bytesperline(f->fmt.pix.width, q_data_dst->fmt); + get_bytesperline(f->fmt.pix_mp.width, q_data_dst->fmt); q_data_dst->sizeimage = get_sizeimage(q_data_dst->bytesperline, q_data_dst->crop_width, q_data_dst->height, @@ -1215,7 +1231,7 @@ static int vidioc_s_fmt(struct bcm2835_codec_ctx *ctx, struct v4l2_format *f, static int vidioc_s_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f) { - unsigned int height = f->fmt.pix.height; + unsigned int height = f->fmt.pix_mp.height; int ret; ret = vidioc_try_fmt_vid_cap(file, priv, f); @@ -1228,7 +1244,7 @@ static int vidioc_s_fmt_vid_cap(struct file *file, void *priv, static int vidioc_s_fmt_vid_out(struct file *file, void *priv, struct v4l2_format *f) { - unsigned int height = f->fmt.pix.height; + unsigned int height = f->fmt.pix_mp.height; int ret; ret = vidioc_try_fmt_vid_out(file, priv, f); @@ -1244,7 +1260,7 @@ static int vidioc_g_selection(struct file *file, void *priv, { struct bcm2835_codec_ctx *ctx = file2ctx(file); struct bcm2835_codec_q_data *q_data; - bool capture_queue = s->type == V4L2_BUF_TYPE_VIDEO_CAPTURE ? + bool capture_queue = s->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE ? true : false; if ((ctx->dev->role == DECODE && !capture_queue) || @@ -1307,7 +1323,7 @@ static int vidioc_s_selection(struct file *file, void *priv, { struct bcm2835_codec_ctx *ctx = file2ctx(file); struct bcm2835_codec_q_data *q_data = NULL; - bool capture_queue = s->type == V4L2_BUF_TYPE_VIDEO_CAPTURE ? + bool capture_queue = s->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE ? 
true : false; v4l2_dbg(1, debug, &ctx->dev->v4l2_dev, "%s: ctx %p, type %d, q_data %p, target %d, rect x/y %d/%d, w/h %ux%u\n", @@ -1368,7 +1384,7 @@ static int vidioc_s_parm(struct file *file, void *priv, { struct bcm2835_codec_ctx *ctx = file2ctx(file); - if (parm->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) + if (parm->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) return -EINVAL; ctx->framerate_num = @@ -1576,6 +1592,20 @@ static int bcm2835_codec_s_ctrl(struct v4l2_ctrl *ctrl) ret = bcm2835_codec_set_level_profile(ctx, ctrl); break; + case V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME: { + u32 mmal_bool = 1; + + if (!ctx->component) + break; + + ret = vchiq_mmal_port_parameter_set(ctx->dev->instance, + &ctx->component->output[0], + MMAL_PARAMETER_VIDEO_REQUEST_I_FRAME, + &mmal_bool, + sizeof(mmal_bool)); + break; + } + default: v4l2_err(&ctx->dev->v4l2_dev, "Invalid control\n"); return -EINVAL; @@ -1738,15 +1768,15 @@ static int vidioc_encoder_cmd(struct file *file, void *priv, static const struct v4l2_ioctl_ops bcm2835_codec_ioctl_ops = { .vidioc_querycap = vidioc_querycap, - .vidioc_enum_fmt_vid_cap = vidioc_enum_fmt_vid_cap, - .vidioc_g_fmt_vid_cap = vidioc_g_fmt_vid_cap, - .vidioc_try_fmt_vid_cap = vidioc_try_fmt_vid_cap, - .vidioc_s_fmt_vid_cap = vidioc_s_fmt_vid_cap, + .vidioc_enum_fmt_vid_cap_mplane = vidioc_enum_fmt_vid_cap, + .vidioc_g_fmt_vid_cap_mplane = vidioc_g_fmt_vid_cap, + .vidioc_try_fmt_vid_cap_mplane = vidioc_try_fmt_vid_cap, + .vidioc_s_fmt_vid_cap_mplane = vidioc_s_fmt_vid_cap, - .vidioc_enum_fmt_vid_out = vidioc_enum_fmt_vid_out, - .vidioc_g_fmt_vid_out = vidioc_g_fmt_vid_out, - .vidioc_try_fmt_vid_out = vidioc_try_fmt_vid_out, - .vidioc_s_fmt_vid_out = vidioc_s_fmt_vid_out, + .vidioc_enum_fmt_vid_out_mplane = vidioc_enum_fmt_vid_out, + .vidioc_g_fmt_vid_out_mplane = vidioc_g_fmt_vid_out, + .vidioc_try_fmt_vid_out_mplane = vidioc_try_fmt_vid_out, + .vidioc_s_fmt_vid_out_mplane = vidioc_s_fmt_vid_out, .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs, .vidioc_querybuf = v4l2_m2m_ioctl_querybuf, @@ -2089,7 +2119,7 @@ static int bcm2835_codec_start_streaming(struct vb2_queue *q, ctx->component_enabled = true; } - if (q->type == V4L2_BUF_TYPE_VIDEO_OUTPUT) { + if (q->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) { /* * Create the EOS buffer. 
* We only need the MMAL part, and want to NOT attach a memory @@ -2216,7 +2246,7 @@ static int queue_init(void *priv, struct vb2_queue *src_vq, struct bcm2835_codec_ctx *ctx = priv; int ret; - src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT; + src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE; src_vq->io_modes = VB2_MMAP | VB2_DMABUF; src_vq->drv_priv = ctx; src_vq->buf_struct_size = sizeof(struct m2m_mmal_buffer); @@ -2230,7 +2260,7 @@ static int queue_init(void *priv, struct vb2_queue *src_vq, if (ret) return ret; - dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE; + dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE; dst_vq->io_modes = VB2_MMAP | VB2_DMABUF; dst_vq->drv_priv = ctx; dst_vq->buf_struct_size = sizeof(struct m2m_mmal_buffer); @@ -2300,7 +2330,7 @@ static int bcm2835_codec_open(struct file *file) hdl = &ctx->hdl; if (dev->role == ENCODE) { /* Encode controls */ - v4l2_ctrl_handler_init(hdl, 6); + v4l2_ctrl_handler_init(hdl, 7); v4l2_ctrl_new_std_menu(hdl, &bcm2835_codec_ctrl_ops, V4L2_CID_MPEG_VIDEO_BITRATE_MODE, @@ -2344,6 +2374,21 @@ static int bcm2835_codec_open(struct file *file) BIT(V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) | BIT(V4L2_MPEG_VIDEO_H264_PROFILE_HIGH)), V4L2_MPEG_VIDEO_H264_PROFILE_HIGH); + v4l2_ctrl_new_std(hdl, &bcm2835_codec_ctrl_ops, + V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME, + 0, 0, 0, 0); + if (hdl->error) { + rc = hdl->error; + goto free_ctrl_handler; + } + ctx->fh.ctrl_handler = hdl; + v4l2_ctrl_handler_setup(hdl); + } else if (dev->role == DECODE) { + v4l2_ctrl_handler_init(hdl, 1); + + v4l2_ctrl_new_std(hdl, &bcm2835_codec_ctrl_ops, + V4L2_CID_MIN_BUFFERS_FOR_CAPTURE, + 1, 1, 1, 1); if (hdl->error) { rc = hdl->error; goto free_ctrl_handler; @@ -2545,12 +2590,14 @@ destroy_component: return ret; } -static int bcm2835_codec_create(struct platform_device *pdev, +static int bcm2835_codec_create(struct bcm2835_codec_driver *drv, struct bcm2835_codec_dev **new_dev, enum bcm2835_codec_role role) { + struct platform_device *pdev = drv->pdev; struct bcm2835_codec_dev *dev; struct video_device *vfd; + int function; int video_nr; int ret; @@ -2570,17 +2617,21 @@ static int bcm2835_codec_create(struct platform_device *pdev, if (ret) goto vchiq_finalise; - ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev); - if (ret) - goto vchiq_finalise; - atomic_set(&dev->num_inst, 0); mutex_init(&dev->dev_mutex); + /* Initialise the video device */ dev->vfd = bcm2835_codec_videodev; + vfd = &dev->vfd; vfd->lock = &dev->dev_mutex; vfd->v4l2_dev = &dev->v4l2_dev; + vfd->device_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING; + vfd->v4l2_dev->mdev = &drv->mdev; + + ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev); + if (ret) + goto vchiq_finalise; switch (role) { case DECODE: @@ -2588,11 +2639,13 @@ static int bcm2835_codec_create(struct platform_device *pdev, v4l2_disable_ioctl(vfd, VIDIOC_TRY_ENCODER_CMD); v4l2_disable_ioctl(vfd, VIDIOC_S_PARM); v4l2_disable_ioctl(vfd, VIDIOC_G_PARM); + function = MEDIA_ENT_F_PROC_VIDEO_DECODER; video_nr = decode_video_nr; break; case ENCODE: v4l2_disable_ioctl(vfd, VIDIOC_DECODER_CMD); v4l2_disable_ioctl(vfd, VIDIOC_TRY_DECODER_CMD); + function = MEDIA_ENT_F_PROC_VIDEO_ENCODER; video_nr = encode_video_nr; break; case ISP: @@ -2602,6 +2655,7 @@ static int bcm2835_codec_create(struct platform_device *pdev, v4l2_disable_ioctl(vfd, VIDIOC_TRY_DECODER_CMD); v4l2_disable_ioctl(vfd, VIDIOC_S_PARM); v4l2_disable_ioctl(vfd, VIDIOC_G_PARM); + function = MEDIA_ENT_F_PROC_VIDEO_SCALER; video_nr = isp_video_nr; break; default: @@ -2616,8 +2670,8 @@ static int 
bcm2835_codec_create(struct platform_device *pdev, } video_set_drvdata(vfd, dev); - snprintf(vfd->name, sizeof(vfd->name), "%s", - bcm2835_codec_videodev.name); + snprintf(vfd->name, sizeof(vfd->name), "%s-%s", + bcm2835_codec_videodev.name, roles[role]); v4l2_info(&dev->v4l2_dev, "Device registered as /dev/video%d\n", vfd->num); @@ -2630,6 +2684,10 @@ static int bcm2835_codec_create(struct platform_device *pdev, goto err_m2m; } + ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd, function); + if (ret) + goto err_m2m; + v4l2_info(&dev->v4l2_dev, "Loaded V4L2 %s\n", roles[role]); return 0; @@ -2651,6 +2709,7 @@ static int bcm2835_codec_destroy(struct bcm2835_codec_dev *dev) v4l2_info(&dev->v4l2_dev, "Removing " MEM2MEM_NAME ", %s\n", roles[dev->role]); + v4l2_m2m_unregister_media_controller(dev->m2m_dev); v4l2_m2m_release(dev->m2m_dev); video_unregister_device(&dev->vfd); v4l2_device_unregister(&dev->v4l2_dev); @@ -2662,24 +2721,42 @@ static int bcm2835_codec_destroy(struct bcm2835_codec_dev *dev) static int bcm2835_codec_probe(struct platform_device *pdev) { struct bcm2835_codec_driver *drv; + struct media_device *mdev; int ret = 0; drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL); if (!drv) return -ENOMEM; - ret = bcm2835_codec_create(pdev, &drv->decode, DECODE); + drv->pdev = pdev; + mdev = &drv->mdev; + mdev->dev = &pdev->dev; + + strscpy(mdev->model, bcm2835_codec_videodev.name, sizeof(mdev->model)); + strscpy(mdev->serial, "0000", sizeof(mdev->serial)); + snprintf(mdev->bus_info, sizeof(mdev->bus_info), "platform:%s", + pdev->name); + + /* This should return the vgencmd version information or such .. */ + mdev->hw_revision = 1; + media_device_init(mdev); + + ret = bcm2835_codec_create(drv, &drv->decode, DECODE); if (ret) goto out; - ret = bcm2835_codec_create(pdev, &drv->encode, ENCODE); + ret = bcm2835_codec_create(drv, &drv->encode, ENCODE); if (ret) goto out; - ret = bcm2835_codec_create(pdev, &drv->isp, ISP); + ret = bcm2835_codec_create(drv, &drv->isp, ISP); if (ret) goto out; + /* Register the media device node */ + if (media_device_register(mdev) < 0) + goto out; + platform_set_drvdata(pdev, drv); return 0; @@ -2700,12 +2777,16 @@ static int bcm2835_codec_remove(struct platform_device *pdev) { struct bcm2835_codec_driver *drv = platform_get_drvdata(pdev); + media_device_unregister(&drv->mdev); + bcm2835_codec_destroy(drv->isp); bcm2835_codec_destroy(drv->encode); bcm2835_codec_destroy(drv->decode); + media_device_cleanup(&drv->mdev); + return 0; } diff --git a/drivers/staging/vc04_services/vc-sm-cma/vc_sm.c b/drivers/staging/vc04_services/vc-sm-cma/vc_sm.c index 1b5bb68b9bcd6a76815da2ced3e5dcc5bb96d11a..caaa52cd03f8dd79a908e24c0567ed85a187e77f 100644 --- a/drivers/staging/vc04_services/vc-sm-cma/vc_sm.c +++ b/drivers/staging/vc04_services/vc-sm-cma/vc_sm.c @@ -1501,11 +1501,10 @@ static long vc_sm_cma_ioctl(struct file *file, unsigned int cmd, return ret; } -#ifndef CONFIG_ARM64 #ifdef CONFIG_COMPAT struct vc_sm_cma_ioctl_clean_invalid2_32 { u32 op_count; - struct vc_sm_cma_ioctl_clean_invalid_block { + struct vc_sm_cma_ioctl_clean_invalid_block_32 { u16 invalidate_mode; u16 block_count; compat_uptr_t start_address; @@ -1516,7 +1515,7 @@ struct vc_sm_cma_ioctl_clean_invalid2_32 { #define VC_SM_CMA_CMD_CLEAN_INVALID2_32\ _IOR(VC_SM_CMA_MAGIC_TYPE, VC_SM_CMA_CMD_CLEAN_INVALID2,\ - struct vc_sm_cma_ioctl_clean_invalid2) + struct vc_sm_cma_ioctl_clean_invalid2_32) static long vc_sm_cma_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg) @@ -1524,23 
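The vc-sm-cma fix above defines the compat ioctl command with the 32-bit struct layout. That matters because the _IO* macros encode sizeof() of their argument into the command number, so a compat definition built from the native struct would never match the number a 32-bit process actually issues. A userspace illustration on Linux; the struct layouts, magic character and command number below are simplified stand-ins, not the driver's real ABI:

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>

struct clean_invalid2 {
	uint32_t op_count;
	uint64_t start_address;		/* native: a 64-bit user pointer */
};

struct clean_invalid2_32 {
	uint32_t op_count;
	uint32_t start_address;		/* compat_uptr_t is 32 bits wide */
};

#define CMD_NATIVE	_IOR('J', 0x5e, struct clean_invalid2)
#define CMD_COMPAT	_IOR('J', 0x5e, struct clean_invalid2_32)

int main(void)
{
	printf("native 0x%lx, compat 0x%lx\n",
	       (unsigned long)CMD_NATIVE, (unsigned long)CMD_COMPAT);
	return 0;
}

Because the encoded sizes differ, the two macros above expand to different command numbers even though the magic and number are identical.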
+1523,20 @@ static long vc_sm_cma_compat_ioctl(struct file *file, unsigned int cmd, switch (cmd) { case VC_SM_CMA_CMD_CLEAN_INVALID2_32: /* FIXME */ - break; + return -EINVAL; default: - return vc_sm_cma_compat_ioctl(file, cmd, arg); + return vc_sm_cma_ioctl(file, cmd, arg); } } #endif -#endif /* Device operations that we managed in this driver. */ static const struct file_operations vc_sm_ops = { .owner = THIS_MODULE, .unlocked_ioctl = vc_sm_cma_ioctl, -#ifndef CONFIG_ARM64 #ifdef CONFIG_COMPAT .compat_ioctl = vc_sm_cma_compat_ioctl, -#endif #endif .open = vc_sm_cma_open, .release = vc_sm_cma_release, diff --git a/drivers/staging/vc04_services/vchiq-mmal/mmal-parameters.h b/drivers/staging/vc04_services/vchiq-mmal/mmal-parameters.h index 416b4e1f0bb82372a082aa6fe87b551256c47360..38702d3063c193e018fc5ec8e815c8397881949f 100644 --- a/drivers/staging/vc04_services/vchiq-mmal/mmal-parameters.h +++ b/drivers/staging/vc04_services/vchiq-mmal/mmal-parameters.h @@ -313,6 +313,7 @@ enum mmal_parameter_awbmode { MMAL_PARAM_AWBMODE_INCANDESCENT, MMAL_PARAM_AWBMODE_FLASH, MMAL_PARAM_AWBMODE_HORIZON, + MMAL_PARAM_AWBMODE_GREYWORLD, }; enum mmal_parameter_imagefx { diff --git a/drivers/staging/vt6656/main_usb.c b/drivers/staging/vt6656/main_usb.c index ccafcc2c87ac980da5f70e52cfa89ad41a0cdf45..70433f756d8e1ff10b8f0e113d167f058019ccbf 100644 --- a/drivers/staging/vt6656/main_usb.c +++ b/drivers/staging/vt6656/main_usb.c @@ -402,16 +402,19 @@ static void vnt_free_int_bufs(struct vnt_private *priv) kfree(priv->int_buf.data_buf); } -static bool vnt_alloc_bufs(struct vnt_private *priv) +static int vnt_alloc_bufs(struct vnt_private *priv) { + int ret = 0; struct vnt_usb_send_context *tx_context; struct vnt_rcb *rcb; int ii; for (ii = 0; ii < priv->num_tx_context; ii++) { tx_context = kmalloc(sizeof(*tx_context), GFP_KERNEL); - if (!tx_context) + if (!tx_context) { + ret = -ENOMEM; goto free_tx; + } priv->tx_context[ii] = tx_context; tx_context->priv = priv; @@ -419,16 +422,20 @@ static bool vnt_alloc_bufs(struct vnt_private *priv) /* allocate URBs */ tx_context->urb = usb_alloc_urb(0, GFP_KERNEL); - if (!tx_context->urb) + if (!tx_context->urb) { + ret = -ENOMEM; goto free_tx; + } tx_context->in_use = false; } for (ii = 0; ii < priv->num_rcb; ii++) { priv->rcb[ii] = kzalloc(sizeof(*priv->rcb[ii]), GFP_KERNEL); - if (!priv->rcb[ii]) + if (!priv->rcb[ii]) { + ret = -ENOMEM; goto free_rx_tx; + } rcb = priv->rcb[ii]; @@ -436,39 +443,46 @@ static bool vnt_alloc_bufs(struct vnt_private *priv) /* allocate URBs */ rcb->urb = usb_alloc_urb(0, GFP_KERNEL); - if (!rcb->urb) + if (!rcb->urb) { + ret = -ENOMEM; goto free_rx_tx; + } rcb->skb = dev_alloc_skb(priv->rx_buf_sz); - if (!rcb->skb) + if (!rcb->skb) { + ret = -ENOMEM; goto free_rx_tx; + } rcb->in_use = false; /* submit rx urb */ - if (vnt_submit_rx_urb(priv, rcb)) + ret = vnt_submit_rx_urb(priv, rcb); + if (ret) goto free_rx_tx; } priv->interrupt_urb = usb_alloc_urb(0, GFP_KERNEL); - if (!priv->interrupt_urb) + if (!priv->interrupt_urb) { + ret = -ENOMEM; goto free_rx_tx; + } priv->int_buf.data_buf = kmalloc(MAX_INTERRUPT_SIZE, GFP_KERNEL); if (!priv->int_buf.data_buf) { - usb_free_urb(priv->interrupt_urb); - goto free_rx_tx; + ret = -ENOMEM; + goto free_rx_tx_urb; } - return true; + return 0; +free_rx_tx_urb: + usb_free_urb(priv->interrupt_urb); free_rx_tx: vnt_free_rx_bufs(priv); - free_tx: vnt_free_tx_bufs(priv); - - return false; + return ret; } static void vnt_tx_80211(struct ieee80211_hw *hw, diff --git a/drivers/target/iscsi/iscsi_target_auth.c 
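The vt6656 conversion above makes vnt_alloc_bufs() return a proper errno-style code and unwind with a goto ladder, freeing only what was already allocated. A self-contained sketch of that pattern with plain malloc()/free() standing in for the URB and buffer allocations (names are illustrative):

#include <errno.h>
#include <stdlib.h>

struct ctx { void *tx; void *rx; void *irq_buf; };

static int alloc_bufs(struct ctx *c)
{
	int ret = 0;

	c->tx = malloc(64);
	if (!c->tx)
		return -ENOMEM;

	c->rx = malloc(64);
	if (!c->rx) {
		ret = -ENOMEM;
		goto free_tx;
	}

	c->irq_buf = malloc(64);
	if (!c->irq_buf) {
		ret = -ENOMEM;
		goto free_rx;
	}
	return 0;

free_rx:
	free(c->rx);
free_tx:
	free(c->tx);
	return ret;
}

int main(void)
{
	struct ctx c = { 0 };
	int ret = alloc_bufs(&c);

	if (!ret) {
		free(c.irq_buf);
		free(c.rx);
		free(c.tx);
	}
	return ret ? 1 : 0;
}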
b/drivers/target/iscsi/iscsi_target_auth.c index 4e680d753941f71ea299d87b6c0cd18fc307b50f..e2fa3a3bc81dffff1af904a6b9c612ee37fe2d3a 100644 --- a/drivers/target/iscsi/iscsi_target_auth.c +++ b/drivers/target/iscsi/iscsi_target_auth.c @@ -89,6 +89,12 @@ out: return CHAP_DIGEST_UNKNOWN; } +static void chap_close(struct iscsi_conn *conn) +{ + kfree(conn->auth_protocol); + conn->auth_protocol = NULL; +} + static struct iscsi_chap *chap_server_open( struct iscsi_conn *conn, struct iscsi_node_auth *auth, @@ -126,7 +132,7 @@ static struct iscsi_chap *chap_server_open( case CHAP_DIGEST_UNKNOWN: default: pr_err("Unsupported CHAP_A value\n"); - kfree(conn->auth_protocol); + chap_close(conn); return NULL; } @@ -141,19 +147,13 @@ static struct iscsi_chap *chap_server_open( * Generate Challenge. */ if (chap_gen_challenge(conn, 1, aic_str, aic_len) < 0) { - kfree(conn->auth_protocol); + chap_close(conn); return NULL; } return chap; } -static void chap_close(struct iscsi_conn *conn) -{ - kfree(conn->auth_protocol); - conn->auth_protocol = NULL; -} - static int chap_server_compute_md5( struct iscsi_conn *conn, struct iscsi_node_auth *auth, diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c index ebc98ce465b8a95022074305c10ba1148e47dfd2..cd49a76be52a569b565947d47152eaa940e9cce3 100644 --- a/drivers/tty/serial/8250/8250_port.c +++ b/drivers/tty/serial/8250/8250_port.c @@ -1875,7 +1875,8 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir) status = serial8250_rx_chars(up, status); } serial8250_modem_status(up); - if ((!up->dma || up->dma->tx_err) && (status & UART_LSR_THRE)) + if ((!up->dma || up->dma->tx_err) && (status & UART_LSR_THRE) && + (up->ier & UART_IER_THRI)) serial8250_tx_chars(up); spin_unlock_irqrestore(&port->lock, flags); diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c index e5389591bb4f1f83a207ee5be7e3577648972599..ad40c75bb58f84d5aceba4ca5340f673b16e0a75 100644 --- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c +++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c @@ -407,7 +407,16 @@ static int cpm_uart_startup(struct uart_port *port) clrbits16(&pinfo->sccp->scc_sccm, UART_SCCM_RX); } cpm_uart_initbd(pinfo); - cpm_line_cr_cmd(pinfo, CPM_CR_INIT_TRX); + if (IS_SMC(pinfo)) { + out_be32(&pinfo->smcup->smc_rstate, 0); + out_be32(&pinfo->smcup->smc_tstate, 0); + out_be16(&pinfo->smcup->smc_rbptr, + in_be16(&pinfo->smcup->smc_rbase)); + out_be16(&pinfo->smcup->smc_tbptr, + in_be16(&pinfo->smcup->smc_tbase)); + } else { + cpm_line_cr_cmd(pinfo, CPM_CR_INIT_TRX); + } } /* Install interrupt handler. */ retval = request_irq(port->irq, cpm_uart_int, 0, "cpm_uart", port); @@ -861,16 +870,14 @@ static void cpm_uart_init_smc(struct uart_cpm_port *pinfo) (u8 __iomem *)pinfo->tx_bd_base - DPRAM_BASE); /* - * In case SMC1 is being relocated... + * In case SMC is being relocated... */ -#if defined (CONFIG_I2C_SPI_SMC1_UCODE_PATCH) out_be16(&up->smc_rbptr, in_be16(&pinfo->smcup->smc_rbase)); out_be16(&up->smc_tbptr, in_be16(&pinfo->smcup->smc_tbase)); out_be32(&up->smc_rstate, 0); out_be32(&up->smc_tstate, 0); out_be16(&up->smc_brkcr, 1); /* number of break chars */ out_be16(&up->smc_brkec, 0); -#endif /* Set up the uart parameters in the * parameter ram. @@ -884,8 +891,6 @@ static void cpm_uart_init_smc(struct uart_cpm_port *pinfo) out_be16(&up->smc_brkec, 0); out_be16(&up->smc_brkcr, 1); - cpm_line_cr_cmd(pinfo, CPM_CR_INIT_TRX); - /* Set UART mode, 8 bit, no parity, one stop. 
* Enable receive and transmit. */ diff --git a/drivers/tty/serial/digicolor-usart.c b/drivers/tty/serial/digicolor-usart.c index f460cca139e239c066b9aacaaf8f0c22500d7ade..13ac36e2da4f0f2ea36e48c7619f369dde3239bd 100644 --- a/drivers/tty/serial/digicolor-usart.c +++ b/drivers/tty/serial/digicolor-usart.c @@ -541,7 +541,11 @@ static int __init digicolor_uart_init(void) if (ret) return ret; - return platform_driver_register(&digicolor_uart_platform); + ret = platform_driver_register(&digicolor_uart_platform); + if (ret) + uart_unregister_driver(&digicolor_uart); + + return ret; } module_init(digicolor_uart_init); diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c index 0f67197a3783ffdd62eccd3b41a813a80d66b6df..105de92b0b3bfa5f48b04b773ecaa1154a51a5c4 100644 --- a/drivers/tty/serial/imx.c +++ b/drivers/tty/serial/imx.c @@ -382,6 +382,7 @@ static void imx_uart_ucrs_restore(struct imx_port *sport, } #endif +/* called with port.lock taken and irqs caller dependent */ static void imx_uart_rts_active(struct imx_port *sport, u32 *ucr2) { *ucr2 &= ~(UCR2_CTSC | UCR2_CTS); @@ -390,6 +391,7 @@ static void imx_uart_rts_active(struct imx_port *sport, u32 *ucr2) mctrl_gpio_set(sport->gpios, sport->port.mctrl); } +/* called with port.lock taken and irqs caller dependent */ static void imx_uart_rts_inactive(struct imx_port *sport, u32 *ucr2) { *ucr2 &= ~UCR2_CTSC; @@ -399,6 +401,7 @@ static void imx_uart_rts_inactive(struct imx_port *sport, u32 *ucr2) mctrl_gpio_set(sport->gpios, sport->port.mctrl); } +/* called with port.lock taken and irqs caller dependent */ static void imx_uart_rts_auto(struct imx_port *sport, u32 *ucr2) { *ucr2 |= UCR2_CTSC; @@ -1554,6 +1557,16 @@ imx_uart_set_termios(struct uart_port *port, struct ktermios *termios, old_csize = CS8; } + del_timer_sync(&sport->timer); + + /* + * Ask the core to calculate the divisor for us. + */ + baud = uart_get_baud_rate(port, termios, old, 50, port->uartclk / 16); + quot = uart_get_divisor(port, baud); + + spin_lock_irqsave(&sport->port.lock, flags); + if ((termios->c_cflag & CSIZE) == CS8) ucr2 = UCR2_WS | UCR2_SRST | UCR2_IRTS; else @@ -1597,16 +1610,6 @@ imx_uart_set_termios(struct uart_port *port, struct ktermios *termios, ucr2 |= UCR2_PROE; } - del_timer_sync(&sport->timer); - - /* - * Ask the core to calculate the divisor for us. - */ - baud = uart_get_baud_rate(port, termios, old, 50, port->uartclk / 16); - quot = uart_get_divisor(port, baud); - - spin_lock_irqsave(&sport->port.lock, flags); - sport->port.read_status_mask = 0; if (termios->c_iflag & INPCK) sport->port.read_status_mask |= (URXD_FRMERR | URXD_PRERR); diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c index 38c48a02b9206eb8748a954ef2962154e5750f7b..bd3e6cf81af5cfb61d82d70e13dd59015a623aee 100644 --- a/drivers/tty/serial/max310x.c +++ b/drivers/tty/serial/max310x.c @@ -491,37 +491,48 @@ static bool max310x_reg_precious(struct device *dev, unsigned int reg) static int max310x_set_baud(struct uart_port *port, int baud) { - unsigned int mode = 0, clk = port->uartclk, div = clk / baud; + unsigned int mode = 0, div = 0, frac = 0, c = 0, F = 0; - /* Check for minimal value for divider */ - if (div < 16) - div = 16; - - if (clk % baud && (div / 16) < 0x8000) { + /* + * Calculate the integer divisor first. Select a proper mode + * in case if the requested baud is too high for the pre-defined + * clocks frequency. 
+ */ + div = port->uartclk / baud; + if (div < 8) { + /* Mode x4 */ + c = 4; + mode = MAX310X_BRGCFG_4XMODE_BIT; + } else if (div < 16) { /* Mode x2 */ + c = 8; mode = MAX310X_BRGCFG_2XMODE_BIT; - clk = port->uartclk * 2; - div = clk / baud; - - if (clk % baud && (div / 16) < 0x8000) { - /* Mode x4 */ - mode = MAX310X_BRGCFG_4XMODE_BIT; - clk = port->uartclk * 4; - div = clk / baud; - } + } else { + c = 16; } - max310x_port_write(port, MAX310X_BRGDIVMSB_REG, (div / 16) >> 8); - max310x_port_write(port, MAX310X_BRGDIVLSB_REG, div / 16); - max310x_port_write(port, MAX310X_BRGCFG_REG, (div % 16) | mode); + /* Calculate the divisor in accordance with the fraction coefficient */ + div /= c; + F = c*baud; + + /* Calculate the baud rate fraction */ + if (div > 0) + frac = (16*(port->uartclk % F)) / F; + else + div = 1; + + max310x_port_write(port, MAX310X_BRGDIVMSB_REG, div >> 8); + max310x_port_write(port, MAX310X_BRGDIVLSB_REG, div); + max310x_port_write(port, MAX310X_BRGCFG_REG, frac | mode); - return DIV_ROUND_CLOSEST(clk, div); + /* Return the actual baud rate we just programmed */ + return (16*port->uartclk) / (c*(16*div + frac)); } static int max310x_update_best_err(unsigned long f, long *besterr) { /* Use baudrate 115200 for calculate error */ - long err = f % (115200 * 16); + long err = f % (460800 * 16); if ((*besterr < 0) || (*besterr > err)) { *besterr = err; diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c index 0f41b936da03202cefdec84117dda5662cf5c0a3..310bbae515b048ce574397798dabc32a2f119318 100644 --- a/drivers/tty/serial/msm_serial.c +++ b/drivers/tty/serial/msm_serial.c @@ -383,10 +383,14 @@ no_rx: static inline void msm_wait_for_xmitr(struct uart_port *port) { + unsigned int timeout = 500000; + while (!(msm_read(port, UART_SR) & UART_SR_TX_EMPTY)) { if (msm_read(port, UART_ISR) & UART_ISR_TX_READY) break; udelay(1); + if (!timeout--) + break; } msm_write(port, UART_CR_CMD_RESET_TX_READY, UART_CR); } diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c index 8dbeb14a1e3ab8d82f09e637b864be43681d4c96..fe9261ffe3dbcfd96703530d6f90e2210b3a7554 100644 --- a/drivers/tty/serial/serial_core.c +++ b/drivers/tty/serial/serial_core.c @@ -1738,6 +1738,7 @@ static int uart_port_activate(struct tty_port *port, struct tty_struct *tty) { struct uart_state *state = container_of(port, struct uart_state, port); struct uart_port *uport; + int ret; uport = uart_port_check(state); if (!uport || uport->flags & UPF_DEAD) @@ -1748,7 +1749,11 @@ static int uart_port_activate(struct tty_port *port, struct tty_struct *tty) /* * Start up the serial port. 
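The max310x hunk above reworks the baud-rate setup: it picks a rate mode (x1/x2/x4) from the raw divisor, then derives an integer divisor plus a 16-step fraction, and reports the rate actually programmed. The standalone function below mirrors that arithmetic so the resulting numbers can be inspected; the clock value in main() is just an example:

#include <stdio.h>

static unsigned int max310x_baud(unsigned int uartclk, unsigned int baud)
{
	unsigned int div = uartclk / baud, frac = 0, c;

	if (div < 8)
		c = 4;			/* x4 rate mode */
	else if (div < 16)
		c = 8;			/* x2 rate mode */
	else
		c = 16;			/* normal mode */

	div /= c;
	if (div > 0)
		frac = (16 * (uartclk % (c * baud))) / (c * baud);
	else
		div = 1;

	/* rate actually produced by integer divisor 'div' and fraction 'frac' */
	return (16 * uartclk) / (c * (16 * div + frac));
}

int main(void)
{
	printf("%u\n", max310x_baud(3686400, 115200));	/* 115200 */
	printf("%u\n", max310x_baud(3686400, 230400));	/* 230400 */
	return 0;
}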
*/ - return uart_startup(tty, state, 0); + ret = uart_startup(tty, state, 0); + if (ret > 0) + tty_port_set_active(port, 1); + + return ret; } static const char *uart_type(struct uart_port *port) diff --git a/drivers/tty/serial/serial_mctrl_gpio.c b/drivers/tty/serial/serial_mctrl_gpio.c index 1c06325beacaeb3a6e19011385fcc379ef16ce8e..07f318603e7401c1b0420fc937a1f66f08afc448 100644 --- a/drivers/tty/serial/serial_mctrl_gpio.c +++ b/drivers/tty/serial/serial_mctrl_gpio.c @@ -12,6 +12,7 @@ #include #include #include +#include #include "serial_mctrl_gpio.h" @@ -115,6 +116,19 @@ struct mctrl_gpios *mctrl_gpio_init_noauto(struct device *dev, unsigned int idx) for (i = 0; i < UART_GPIO_MAX; i++) { enum gpiod_flags flags; + char *gpio_str; + bool present; + + /* Check if GPIO property exists and continue if not */ + gpio_str = kasprintf(GFP_KERNEL, "%s-gpios", + mctrl_gpios_desc[i].name); + if (!gpio_str) + continue; + + present = device_property_present(dev, gpio_str); + kfree(gpio_str); + if (!present) + continue; if (mctrl_gpios_desc[i].dir_out) flags = GPIOD_OUT_LOW; diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c index 040832635a64904e98ec2960e99b074760c487b8..5550289e6678b87e71cdb8bca2930c719c27f94e 100644 --- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -1376,6 +1376,7 @@ static void work_fn_tx(struct work_struct *work) struct circ_buf *xmit = &port->state->xmit; unsigned long flags; dma_addr_t buf; + int head, tail; /* * DMA is idle now. @@ -1385,16 +1386,23 @@ static void work_fn_tx(struct work_struct *work) * consistent xmit buffer state. */ spin_lock_irq(&port->lock); - buf = s->tx_dma_addr + (xmit->tail & (UART_XMIT_SIZE - 1)); + head = xmit->head; + tail = xmit->tail; + buf = s->tx_dma_addr + (tail & (UART_XMIT_SIZE - 1)); s->tx_dma_len = min_t(unsigned int, - CIRC_CNT(xmit->head, xmit->tail, UART_XMIT_SIZE), - CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE)); - spin_unlock_irq(&port->lock); + CIRC_CNT(head, tail, UART_XMIT_SIZE), + CIRC_CNT_TO_END(head, tail, UART_XMIT_SIZE)); + if (!s->tx_dma_len) { + /* Transmit buffer has been flushed */ + spin_unlock_irq(&port->lock); + return; + } desc = dmaengine_prep_slave_single(chan, buf, s->tx_dma_len, DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT | DMA_CTRL_ACK); if (!desc) { + spin_unlock_irq(&port->lock); dev_warn(port->dev, "Failed preparing Tx DMA descriptor\n"); goto switch_to_pio; } @@ -1402,18 +1410,18 @@ static void work_fn_tx(struct work_struct *work) dma_sync_single_for_device(chan->device->dev, buf, s->tx_dma_len, DMA_TO_DEVICE); - spin_lock_irq(&port->lock); desc->callback = sci_dma_tx_complete; desc->callback_param = s; - spin_unlock_irq(&port->lock); s->cookie_tx = dmaengine_submit(desc); if (dma_submit_error(s->cookie_tx)) { + spin_unlock_irq(&port->lock); dev_warn(port->dev, "Failed submitting Tx DMA descriptor\n"); goto switch_to_pio; } + spin_unlock_irq(&port->lock); dev_dbg(port->dev, "%s: %p: %d...%d, cookie %d\n", - __func__, xmit->buf, xmit->tail, xmit->head, s->cookie_tx); + __func__, xmit->buf, tail, head, s->cookie_tx); dma_async_issue_pending(chan); return; @@ -1633,11 +1641,18 @@ static void sci_free_dma(struct uart_port *port) static void sci_flush_buffer(struct uart_port *port) { + struct sci_port *s = to_sci_port(port); + /* * In uart_flush_buffer(), the xmit circular buffer has just been - * cleared, so we have to reset tx_dma_len accordingly. 
+ * cleared, so we have to reset tx_dma_len accordingly, and stop any + * pending transfers */ - to_sci_port(port)->tx_dma_len = 0; + s->tx_dma_len = 0; + if (s->chan_tx) { + dmaengine_terminate_async(s->chan_tx); + s->cookie_tx = -EINVAL; + } } #else /* !CONFIG_SERIAL_SH_SCI_DMA */ static inline void sci_request_dma(struct uart_port *port) diff --git a/drivers/tty/tty_ldsem.c b/drivers/tty/tty_ldsem.c index b989ca26fc78858ea27e3b1c17b4a39c0dab23d5..2f0372976459eb771c830442efe94d24c70ba489 100644 --- a/drivers/tty/tty_ldsem.c +++ b/drivers/tty/tty_ldsem.c @@ -116,8 +116,7 @@ static void __ldsem_wake_readers(struct ld_semaphore *sem) list_for_each_entry_safe(waiter, next, &sem->read_wait, list) { tsk = waiter->task; - smp_mb(); - waiter->task = NULL; + smp_store_release(&waiter->task, NULL); wake_up_process(tsk); put_task_struct(tsk); } @@ -217,7 +216,7 @@ down_read_failed(struct ld_semaphore *sem, long count, long timeout) for (;;) { set_current_state(TASK_UNINTERRUPTIBLE); - if (!waiter.task) + if (!smp_load_acquire(&waiter.task)) break; if (!timeout) break; diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c index cc7c856126df5a792d39804d7f13ff93593c65c4..169ccfacfc7550b145d49adcd97bf121bb093258 100644 --- a/drivers/usb/chipidea/udc.c +++ b/drivers/usb/chipidea/udc.c @@ -708,12 +708,6 @@ static int _gadget_stop_activity(struct usb_gadget *gadget) struct ci_hdrc *ci = container_of(gadget, struct ci_hdrc, gadget); unsigned long flags; - spin_lock_irqsave(&ci->lock, flags); - ci->gadget.speed = USB_SPEED_UNKNOWN; - ci->remote_wakeup = 0; - ci->suspended = 0; - spin_unlock_irqrestore(&ci->lock, flags); - /* flush all endpoints */ gadget_for_each_ep(ep, gadget) { usb_ep_fifo_flush(ep); @@ -731,6 +725,12 @@ static int _gadget_stop_activity(struct usb_gadget *gadget) ci->status = NULL; } + spin_lock_irqsave(&ci->lock, flags); + ci->gadget.speed = USB_SPEED_UNKNOWN; + ci->remote_wakeup = 0; + ci->suspended = 0; + spin_unlock_irqrestore(&ci->lock, flags); + return 0; } @@ -1302,6 +1302,10 @@ static int ep_disable(struct usb_ep *ep) return -EBUSY; spin_lock_irqsave(hwep->lock, flags); + if (hwep->ci->gadget.speed == USB_SPEED_UNKNOWN) { + spin_unlock_irqrestore(hwep->lock, flags); + return 0; + } /* only internal SW should disable ctrl endpts */ @@ -1391,6 +1395,10 @@ static int ep_queue(struct usb_ep *ep, struct usb_request *req, return -EINVAL; spin_lock_irqsave(hwep->lock, flags); + if (hwep->ci->gadget.speed == USB_SPEED_UNKNOWN) { + spin_unlock_irqrestore(hwep->lock, flags); + return 0; + } retval = _ep_queue(ep, req, gfp_flags); spin_unlock_irqrestore(hwep->lock, flags); return retval; @@ -1414,8 +1422,8 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req) return -EINVAL; spin_lock_irqsave(hwep->lock, flags); - - hw_ep_flush(hwep->ci, hwep->num, hwep->dir); + if (hwep->ci->gadget.speed != USB_SPEED_UNKNOWN) + hw_ep_flush(hwep->ci, hwep->num, hwep->dir); list_for_each_entry_safe(node, tmpnode, &hwreq->tds, td) { dma_pool_free(hwep->td_pool, node->ptr, node->dma); @@ -1486,6 +1494,10 @@ static void ep_fifo_flush(struct usb_ep *ep) } spin_lock_irqsave(hwep->lock, flags); + if (hwep->ci->gadget.speed == USB_SPEED_UNKNOWN) { + spin_unlock_irqrestore(hwep->lock, flags); + return; + } hw_ep_flush(hwep->ci, hwep->num, hwep->dir); @@ -1558,6 +1570,10 @@ static int ci_udc_wakeup(struct usb_gadget *_gadget) int ret = 0; spin_lock_irqsave(&ci->lock, flags); + if (ci->gadget.speed == USB_SPEED_UNKNOWN) { + spin_unlock_irqrestore(&ci->lock, flags); + return 0; + } if 
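/*
 * Editor's note -- illustrative sketch only, not part of the diff.
 * The tty_ldsem hunk above replaces an smp_mb() plus plain store with
 * smp_store_release(), paired with smp_load_acquire() on the waiter
 * side.  The same publish/observe pattern in portable C11 atomics;
 * the "granted" flag is a hypothetical stand-in for waiter->task
 * becoming NULL.
 */
#include <stdatomic.h>
#include <stdbool.h>

struct waiter {
	_Atomic(bool) granted;
};

/* Waker: everything written before the release store is visible to
 * any thread whose acquire load observes granted == true. */
static void grant(struct waiter *w)
{
	atomic_store_explicit(&w->granted, true, memory_order_release);
}

/* Waiter: once the acquire load sees true, the data the waker
 * prepared beforehand may be safely read. */
static bool check_granted(struct waiter *w)
{
	return atomic_load_explicit(&w->granted, memory_order_acquire);
}

int main(void)
{
	struct waiter w;

	atomic_init(&w.granted, false);
	grant(&w);
	return check_granted(&w) ? 0 : 1;
}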
(!ci->remote_wakeup) { ret = -EOPNOTSUPP; goto out; diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c index 5b442bc68a767736438e273507925d5be2140276..59675cc7aa017e0bcce341a5367f2d551608ccc0 100644 --- a/drivers/usb/class/cdc-acm.c +++ b/drivers/usb/class/cdc-acm.c @@ -1333,10 +1333,6 @@ made_compressed_probe: tty_port_init(&acm->port); acm->port.ops = &acm_port_ops; - minor = acm_alloc_minor(acm); - if (minor < 0) - goto alloc_fail1; - ctrlsize = usb_endpoint_maxp(epctrl); readsize = usb_endpoint_maxp(epread) * (quirks == SINGLE_RX_URB ? 1 : 2); @@ -1344,6 +1340,13 @@ made_compressed_probe: acm->writesize = usb_endpoint_maxp(epwrite) * 20; acm->control = control_interface; acm->data = data_interface; + + usb_get_intf(acm->control); /* undone in destruct() */ + + minor = acm_alloc_minor(acm); + if (minor < 0) + goto alloc_fail1; + acm->minor = minor; acm->dev = usb_dev; if (h.usb_cdc_acm_descriptor) @@ -1490,7 +1493,6 @@ skip_countries: usb_driver_claim_interface(&acm_driver, data_interface, acm); usb_set_intfdata(data_interface, acm); - usb_get_intf(control_interface); tty_dev = tty_port_register_device(&acm->port, acm_tty_driver, minor, &control_interface->dev); if (IS_ERR(tty_dev)) { diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c index bec581fb7c6361891a81a3a0aa88d3863b686f58..b8a1fdefb515039f2f9dd2f63616900eff5d34c6 100644 --- a/drivers/usb/class/cdc-wdm.c +++ b/drivers/usb/class/cdc-wdm.c @@ -587,10 +587,20 @@ static int wdm_flush(struct file *file, fl_owner_t id) { struct wdm_device *desc = file->private_data; - wait_event(desc->wait, !test_bit(WDM_IN_USE, &desc->flags)); + wait_event(desc->wait, + /* + * needs both flags. We cannot do with one + * because resetting it would cause a race + * with write() yet we need to signal + * a disconnect + */ + !test_bit(WDM_IN_USE, &desc->flags) || + test_bit(WDM_DISCONNECTING, &desc->flags)); /* cannot dereference desc->intf if WDM_DISCONNECTING */ - if (desc->werr < 0 && !test_bit(WDM_DISCONNECTING, &desc->flags)) + if (test_bit(WDM_DISCONNECTING, &desc->flags)) + return -ENODEV; + if (desc->werr < 0) dev_err(&desc->intf->dev, "Error in flush path: %d\n", desc->werr); @@ -974,8 +984,6 @@ static void wdm_disconnect(struct usb_interface *intf) spin_lock_irqsave(&desc->iuspin, flags); set_bit(WDM_DISCONNECTING, &desc->flags); set_bit(WDM_READ, &desc->flags); - /* to terminate pending flushes */ - clear_bit(WDM_IN_USE, &desc->flags); spin_unlock_irqrestore(&desc->iuspin, flags); wake_up_all(&desc->wait); mutex_lock(&desc->rlock); diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c index ffccd40ea67da4c5d70a96ec057d4cc32cb04ed4..29c6414f48f1399f9af1e5c5b1cef045adae34d8 100644 --- a/drivers/usb/core/devio.c +++ b/drivers/usb/core/devio.c @@ -1792,8 +1792,6 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb return 0; error: - if (as && as->usbm) - dec_usb_memory_use_count(as->usbm, &as->usbm->urb_use_count); kfree(isopkt); kfree(dr); if (as) diff --git a/drivers/usb/core/file.c b/drivers/usb/core/file.c index 65de6f73b6725200d8ec5cb2301b7a557cdf8a9c..558890ada0e5bdcf8c573af0cc5be59c52c99ef1 100644 --- a/drivers/usb/core/file.c +++ b/drivers/usb/core/file.c @@ -193,9 +193,10 @@ int usb_register_dev(struct usb_interface *intf, intf->minor = minor; break; } - up_write(&minor_rwsem); - if (intf->minor < 0) + if (intf->minor < 0) { + up_write(&minor_rwsem); return -EXFULL; + } /* create a usb class device for this usb interface */ snprintf(name, sizeof(name), 
class_driver->name, minor - minor_base); @@ -203,12 +204,11 @@ int usb_register_dev(struct usb_interface *intf, MKDEV(USB_MAJOR, minor), class_driver, "%s", kbasename(name)); if (IS_ERR(intf->usb_dev)) { - down_write(&minor_rwsem); usb_minors[minor] = NULL; intf->minor = -1; - up_write(&minor_rwsem); retval = PTR_ERR(intf->usb_dev); } + up_write(&minor_rwsem); return retval; } EXPORT_SYMBOL_GPL(usb_register_dev); @@ -234,12 +234,12 @@ void usb_deregister_dev(struct usb_interface *intf, return; dev_dbg(&intf->dev, "removing %d minor\n", intf->minor); + device_destroy(usb_class->class, MKDEV(USB_MAJOR, intf->minor)); down_write(&minor_rwsem); usb_minors[intf->minor] = NULL; up_write(&minor_rwsem); - device_destroy(usb_class->class, MKDEV(USB_MAJOR, intf->minor)); intf->usb_dev = NULL; intf->minor = -1; destroy_usb_class(); diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c index 03432467b05fb12810d7cac77d954a11ed0a002a..7537681355f6799c00b73fa6062565257e37d8e3 100644 --- a/drivers/usb/core/hcd-pci.c +++ b/drivers/usb/core/hcd-pci.c @@ -216,17 +216,18 @@ int usb_hcd_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) /* EHCI, OHCI */ hcd->rsrc_start = pci_resource_start(dev, 0); hcd->rsrc_len = pci_resource_len(dev, 0); - if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len, - driver->description)) { + if (!devm_request_mem_region(&dev->dev, hcd->rsrc_start, + hcd->rsrc_len, driver->description)) { dev_dbg(&dev->dev, "controller already in use\n"); retval = -EBUSY; goto put_hcd; } - hcd->regs = ioremap_nocache(hcd->rsrc_start, hcd->rsrc_len); + hcd->regs = devm_ioremap_nocache(&dev->dev, hcd->rsrc_start, + hcd->rsrc_len); if (hcd->regs == NULL) { dev_dbg(&dev->dev, "error mapping memory\n"); retval = -EFAULT; - goto release_mem_region; + goto put_hcd; } } else { @@ -240,8 +241,8 @@ int usb_hcd_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) hcd->rsrc_start = pci_resource_start(dev, region); hcd->rsrc_len = pci_resource_len(dev, region); - if (request_region(hcd->rsrc_start, hcd->rsrc_len, - driver->description)) + if (devm_request_region(&dev->dev, hcd->rsrc_start, + hcd->rsrc_len, driver->description)) break; } if (region == PCI_ROM_RESOURCE) { @@ -275,20 +276,13 @@ int usb_hcd_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) } if (retval != 0) - goto unmap_registers; + goto put_hcd; device_wakeup_enable(hcd->self.controller); if (pci_dev_run_wake(dev)) pm_runtime_put_noidle(&dev->dev); return retval; -unmap_registers: - if (driver->flags & HCD_MEMORY) { - iounmap(hcd->regs); -release_mem_region: - release_mem_region(hcd->rsrc_start, hcd->rsrc_len); - } else - release_region(hcd->rsrc_start, hcd->rsrc_len); put_hcd: usb_put_hcd(hcd); disable_pci: @@ -347,14 +341,6 @@ void usb_hcd_pci_remove(struct pci_dev *dev) dev_set_drvdata(&dev->dev, NULL); up_read(&companions_rwsem); } - - if (hcd->driver->flags & HCD_MEMORY) { - iounmap(hcd->regs); - release_mem_region(hcd->rsrc_start, hcd->rsrc_len); - } else { - release_region(hcd->rsrc_start, hcd->rsrc_len); - } - usb_put_hcd(hcd); pci_disable_device(dev); } diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c index 0b658893dcfdd5ebce6d28e8dfea863e74ac6b29..ae0caac90e91bc4506a9ee7621b07e68a89b9560 100644 --- a/drivers/usb/core/hub.c +++ b/drivers/usb/core/hub.c @@ -3575,6 +3575,7 @@ static int hub_handle_remote_wakeup(struct usb_hub *hub, unsigned int port, struct usb_device *hdev; struct usb_device *udev; int connect_change = 0; + u16 link_state; int ret; hdev = hub->hdev; @@ 
-3584,9 +3585,11 @@ static int hub_handle_remote_wakeup(struct usb_hub *hub, unsigned int port, return 0; usb_clear_port_feature(hdev, port, USB_PORT_FEAT_C_SUSPEND); } else { + link_state = portstatus & USB_PORT_STAT_LINK_STATE; if (!udev || udev->state != USB_STATE_SUSPENDED || - (portstatus & USB_PORT_STAT_LINK_STATE) != - USB_SS_PORT_LS_U0) + (link_state != USB_SS_PORT_LS_U0 && + link_state != USB_SS_PORT_LS_U1 && + link_state != USB_SS_PORT_LS_U2)) return 0; } @@ -3958,6 +3961,9 @@ static int usb_set_lpm_timeout(struct usb_device *udev, * control transfers to set the hub timeout or enable device-initiated U1/U2 * will be successful. * + * If the control transfer to enable device-initiated U1/U2 entry fails, then + * hub-initiated U1/U2 will be disabled. + * * If we cannot set the parent hub U1/U2 timeout, we attempt to let the xHCI * driver know about it. If that call fails, it should be harmless, and just * take up more slightly more bus bandwidth for unnecessary U1/U2 exit latency. @@ -4012,23 +4018,24 @@ static void usb_enable_link_state(struct usb_hcd *hcd, struct usb_device *udev, * host know that this link state won't be enabled. */ hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state); - } else { - /* Only a configured device will accept the Set Feature - * U1/U2_ENABLE - */ - if (udev->actconfig) - usb_set_device_initiated_lpm(udev, state, true); + return; + } - /* As soon as usb_set_lpm_timeout(timeout) returns 0, the - * hub-initiated LPM is enabled. Thus, LPM is enabled no - * matter the result of usb_set_device_initiated_lpm(). - * The only difference is whether device is able to initiate - * LPM. - */ + /* Only a configured device will accept the Set Feature + * U1/U2_ENABLE + */ + if (udev->actconfig && + usb_set_device_initiated_lpm(udev, state, true) == 0) { if (state == USB3_LPM_U1) udev->usb3_lpm_u1_enabled = 1; else if (state == USB3_LPM_U2) udev->usb3_lpm_u2_enabled = 1; + } else { + /* Don't request U1/U2 entry if the device + * cannot transition to U1/U2. + */ + usb_set_lpm_timeout(udev, state, 0); + hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state); } } diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c index 28a5c5ed0f2c15f3075337451bf63dd940eb65d5..81439a04f095d64cf5cf9d856b31a4534b611260 100644 --- a/drivers/usb/core/message.c +++ b/drivers/usb/core/message.c @@ -2305,14 +2305,14 @@ int cdc_parse_cdc_header(struct usb_cdc_parsed_header *hdr, (struct usb_cdc_dmm_desc *)buffer; break; case USB_CDC_MDLM_TYPE: - if (elength < sizeof(struct usb_cdc_mdlm_desc *)) + if (elength < sizeof(struct usb_cdc_mdlm_desc)) goto next_desc; if (desc) return -EINVAL; desc = (struct usb_cdc_mdlm_desc *)buffer; break; case USB_CDC_MDLM_DETAIL_TYPE: - if (elength < sizeof(struct usb_cdc_mdlm_detail_desc *)) + if (elength < sizeof(struct usb_cdc_mdlm_detail_desc)) goto next_desc; if (detail) return -EINVAL; diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c index b8a15840b4ffd574430cdc5d0e57e74a3c116242..dfcabadeed01bccfb86c546631d79e8fcc4ce253 100644 --- a/drivers/usb/gadget/composite.c +++ b/drivers/usb/gadget/composite.c @@ -1976,6 +1976,7 @@ void composite_disconnect(struct usb_gadget *gadget) * disconnect callbacks? 
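/*
 * Editor's note -- illustrative sketch, not part of the diff.  The
 * cdc_parse_cdc_header change above fixes the classic sizeof-of-pointer
 * bug: sizeof(struct foo *) is the size of a pointer (4 or 8 bytes),
 * not of the descriptor, so an undersized buffer slipped past the
 * length check.  A hypothetical "struct demo_desc" makes the
 * difference concrete.
 */
#include <stdio.h>

struct demo_desc {
	unsigned char bLength;
	unsigned char bDescriptorType;
	unsigned char bGUID[16];
};

int main(void)
{
	printf("sizeof(struct demo_desc)   = %zu\n", sizeof(struct demo_desc));   /* 18 */
	printf("sizeof(struct demo_desc *) = %zu\n", sizeof(struct demo_desc *)); /* 4 or 8 */
	return 0;
}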
*/ spin_lock_irqsave(&cdev->lock, flags); + cdev->suspended = 0; if (cdev->config) reset_config(cdev); if (cdev->driver->disconnect) diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c index 5e9269cd14fac1bcfa1a14080c4f40d3da32084e..e2ca75a6e2417b5fc032c67d2742413f6d5045f7 100644 --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -1101,11 +1101,12 @@ static ssize_t ffs_epfile_write_iter(struct kiocb *kiocb, struct iov_iter *from) ENTER(); if (!is_sync_kiocb(kiocb)) { - p = kmalloc(sizeof(io_data), GFP_KERNEL); + p = kzalloc(sizeof(io_data), GFP_KERNEL); if (unlikely(!p)) return -ENOMEM; p->aio = true; } else { + memset(p, 0, sizeof(*p)); p->aio = false; } @@ -1137,11 +1138,12 @@ static ssize_t ffs_epfile_read_iter(struct kiocb *kiocb, struct iov_iter *to) ENTER(); if (!is_sync_kiocb(kiocb)) { - p = kmalloc(sizeof(io_data), GFP_KERNEL); + p = kzalloc(sizeof(io_data), GFP_KERNEL); if (unlikely(!p)) return -ENOMEM; p->aio = true; } else { + memset(p, 0, sizeof(*p)); p->aio = false; } diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c index 1074cb82ec172d2ac464d72d9e52c9461715c868..0b7b4d09785b6e22be0d3a22d111bf9402e003ff 100644 --- a/drivers/usb/gadget/function/f_mass_storage.c +++ b/drivers/usb/gadget/function/f_mass_storage.c @@ -261,7 +261,7 @@ struct fsg_common; struct fsg_common { struct usb_gadget *gadget; struct usb_composite_dev *cdev; - struct fsg_dev *fsg, *new_fsg; + struct fsg_dev *fsg; wait_queue_head_t io_wait; wait_queue_head_t fsg_wait; @@ -290,6 +290,7 @@ struct fsg_common { unsigned int bulk_out_maxpacket; enum fsg_state state; /* For exception handling */ unsigned int exception_req_tag; + void *exception_arg; enum data_direction data_dir; u32 data_size; @@ -391,7 +392,8 @@ static int fsg_set_halt(struct fsg_dev *fsg, struct usb_ep *ep) /* These routines may be called in process context or in_irq */ -static void raise_exception(struct fsg_common *common, enum fsg_state new_state) +static void __raise_exception(struct fsg_common *common, enum fsg_state new_state, + void *arg) { unsigned long flags; @@ -404,6 +406,7 @@ static void raise_exception(struct fsg_common *common, enum fsg_state new_state) if (common->state <= new_state) { common->exception_req_tag = common->ep0_req_tag; common->state = new_state; + common->exception_arg = arg; if (common->thread_task) send_sig_info(SIGUSR1, SEND_SIG_FORCED, common->thread_task); @@ -411,6 +414,10 @@ static void raise_exception(struct fsg_common *common, enum fsg_state new_state) spin_unlock_irqrestore(&common->lock, flags); } +static void raise_exception(struct fsg_common *common, enum fsg_state new_state) +{ + __raise_exception(common, new_state, NULL); +} /*-------------------------------------------------------------------------*/ @@ -2285,16 +2292,16 @@ reset: static int fsg_set_alt(struct usb_function *f, unsigned intf, unsigned alt) { struct fsg_dev *fsg = fsg_from_func(f); - fsg->common->new_fsg = fsg; - raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE); + + __raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE, fsg); return USB_GADGET_DELAYED_STATUS; } static void fsg_disable(struct usb_function *f) { struct fsg_dev *fsg = fsg_from_func(f); - fsg->common->new_fsg = NULL; - raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE); + + __raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE, NULL); } @@ -2307,6 +2314,7 @@ static void handle_exception(struct fsg_common *common) enum fsg_state old_state; 
struct fsg_lun *curlun; unsigned int exception_req_tag; + struct fsg_dev *new_fsg; /* * Clear the existing signals. Anything but SIGUSR1 is converted @@ -2360,6 +2368,7 @@ static void handle_exception(struct fsg_common *common) common->next_buffhd_to_fill = &common->buffhds[0]; common->next_buffhd_to_drain = &common->buffhds[0]; exception_req_tag = common->exception_req_tag; + new_fsg = common->exception_arg; old_state = common->state; common->state = FSG_STATE_NORMAL; @@ -2413,8 +2422,8 @@ static void handle_exception(struct fsg_common *common) break; case FSG_STATE_CONFIG_CHANGE: - do_set_interface(common, common->new_fsg); - if (common->new_fsg) + do_set_interface(common, new_fsg); + if (new_fsg) usb_composite_setup_continue(common->cdev); break; @@ -2989,8 +2998,7 @@ static void fsg_unbind(struct usb_configuration *c, struct usb_function *f) DBG(fsg, "unbind\n"); if (fsg->common->fsg == fsg) { - fsg->common->new_fsg = NULL; - raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE); + __raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE, NULL); /* FIXME: make interruptible or killable somehow? */ wait_event(common->fsg_wait, common->fsg != fsg); } diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c index fea02c7ad4f4328901be479a4c2205a69d961d80..a5254e82d628232b8d518668effb89933b0a51e7 100644 --- a/drivers/usb/gadget/udc/renesas_usb3.c +++ b/drivers/usb/gadget/udc/renesas_usb3.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include #include @@ -2378,9 +2379,9 @@ static ssize_t role_store(struct device *dev, struct device_attribute *attr, if (usb3->forced_b_device) return -EBUSY; - if (!strncmp(buf, "host", strlen("host"))) + if (sysfs_streq(buf, "host")) new_mode_is_host = true; - else if (!strncmp(buf, "peripheral", strlen("peripheral"))) + else if (sysfs_streq(buf, "peripheral")) new_mode_is_host = false; else return -EINVAL; diff --git a/drivers/usb/host/dwc_otg/dwc_otg_hcd.c b/drivers/usb/host/dwc_otg/dwc_otg_hcd.c index 72397866066c93c0c9be8834630406f9a4375fa5..2378a841ef17e02f5e62066246518278e6f6fe0a 100644 --- a/drivers/usb/host/dwc_otg/dwc_otg_hcd.c +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd.c @@ -289,6 +289,7 @@ static int32_t dwc_otg_hcd_start_cb(void *p) */ static int32_t dwc_otg_hcd_disconnect_cb(void *p) { + unsigned long flags; gintsts_data_t intr; dwc_otg_hcd_t *dwc_otg_hcd = p; @@ -299,8 +300,7 @@ static int32_t dwc_otg_hcd_disconnect_cb(void *p) dwc_otg_hcd->flags.b.port_connect_status_change = 1; dwc_otg_hcd->flags.b.port_connect_status = 0; if(fiq_enable) { - local_fiq_disable(); - fiq_fsm_spin_lock(&dwc_otg_hcd->fiq_state->lock); + fiq_fsm_spin_lock_irqsave(&dwc_otg_hcd->fiq_state->lock, flags); } /* * Shutdown any transfers in process by clearing the Tx FIFO Empty @@ -379,8 +379,7 @@ static int32_t dwc_otg_hcd_disconnect_cb(void *p) } if(fiq_enable) { - fiq_fsm_spin_unlock(&dwc_otg_hcd->fiq_state->lock); - local_fiq_enable(); + fiq_fsm_spin_unlock_irqrestore(&dwc_otg_hcd->fiq_state->lock, flags); } if (dwc_otg_hcd->fops->disconnect) { @@ -546,6 +545,7 @@ int dwc_otg_hcd_urb_dequeue(dwc_otg_hcd_t * hcd, { dwc_otg_qh_t *qh; dwc_otg_qtd_t *urb_qtd; + unsigned long flags; BUG_ON(!hcd); BUG_ON(!dwc_otg_urb); @@ -600,15 +600,13 @@ int dwc_otg_hcd_urb_dequeue(dwc_otg_hcd_t * hcd, /* In FIQ FSM mode, we need to shut down carefully. 
* The FIQ may attempt to restart a disabled channel */ if (fiq_fsm_enable && (hcd->fiq_state->channel[n].fsm != FIQ_PASSTHROUGH)) { - local_fiq_disable(); - fiq_fsm_spin_lock(&hcd->fiq_state->lock); + fiq_fsm_spin_lock_irqsave(&hcd->fiq_state->lock, flags); qh->channel->halt_status = DWC_OTG_HC_XFER_URB_DEQUEUE; qh->channel->halt_pending = 1; if (hcd->fiq_state->channel[n].fsm == FIQ_HS_ISOC_TURBO || hcd->fiq_state->channel[n].fsm == FIQ_HS_ISOC_SLEEPING) hcd->fiq_state->channel[n].fsm = FIQ_HS_ISOC_ABORTED; - fiq_fsm_spin_unlock(&hcd->fiq_state->lock); - local_fiq_enable(); + fiq_fsm_spin_unlock_irqrestore(&hcd->fiq_state->lock, flags); } else { dwc_otg_hc_halt(hcd->core_if, qh->channel, DWC_OTG_HC_XFER_URB_DEQUEUE); @@ -1182,6 +1180,7 @@ static void assign_and_init_hc(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh) dwc_otg_qtd_t *qtd; dwc_otg_hcd_urb_t *urb; void* ptr = NULL; + uint16_t wLength; uint32_t intr_enable; unsigned long flags; gintmsk_data_t gintmsk = { .d32 = 0, }; @@ -1293,6 +1292,23 @@ static void assign_and_init_hc(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh) break; case DWC_OTG_CONTROL_DATA: DWC_DEBUGPL(DBG_HCDV, " Control data transaction\n"); + /* + * Hardware bug: small IN packets with length < 4 + * cause a 4-byte write to memory. We can only catch + * the case where we know a short packet is going to be + * returned in a control transfer, as the length is + * specified in the setup packet. This is only an issue + * for drivers that insist on packing a device's various + * properties into a struct and querying them one at a + * time (uvcvideo). + * Force the use of align_buf so that the subsequent + * memcpy puts the right number of bytes in the URB's + * buffer. + */ + wLength = ((uint16_t *)urb->setup_packet)[3]; + if (hc->ep_is_in && wLength < 4) + ptr = hc->xfer_buff; + hc->data_pid_start = qtd->data_toggle; break; case DWC_OTG_CONTROL_STATUS: @@ -1413,6 +1429,8 @@ static void assign_and_init_hc(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh) dwc_otg_hc_init(hcd->core_if, hc); + local_irq_save(flags); + if (fiq_enable) fiq_fsm_spin_lock_irqsave(&hcd->fiq_state->lock, flags); else @@ -1430,6 +1448,7 @@ static void assign_and_init_hc(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh) fiq_fsm_spin_unlock_irqrestore(&hcd->fiq_state->lock, flags); else local_irq_restore(flags); + hc->qh = qh; } diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c index e64eb47770c8bb0f2b2c89d4abf9d6e3abf85140..2d5a72c15069e4caee454080045ecae6c8afd8d4 100644 --- a/drivers/usb/host/fotg210-hcd.c +++ b/drivers/usb/host/fotg210-hcd.c @@ -1627,6 +1627,10 @@ static int fotg210_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, /* see what we found out */ temp = check_reset_complete(fotg210, wIndex, status_reg, fotg210_readl(fotg210, status_reg)); + + /* restart schedule */ + fotg210->command |= CMD_RUN; + fotg210_writel(fotg210, fotg210->command, &fotg210->regs->command); } if (!(temp & (PORT_RESUME|PORT_RESET))) { diff --git a/drivers/usb/host/hwa-hc.c b/drivers/usb/host/hwa-hc.c index 09a8ebd955888d6d375d7cee3d0fc53fd1cf0d9c..6968b9f2b76b5865bd70e7366b2701f6b2c50f6f 100644 --- a/drivers/usb/host/hwa-hc.c +++ b/drivers/usb/host/hwa-hc.c @@ -159,7 +159,7 @@ out: return result; error_set_cluster_id: - wusb_cluster_id_put(wusbhc->cluster_id); + wusb_cluster_id_put(addr); error_cluster_id_get: goto out; diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c index 210181fd98d2e9d6e850662becf6ec08f7b901c2..af11887f5f9e4b9619534b431d684f50099b2664 100644 --- 
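/*
 * Editor's note -- illustrative sketch, not part of the diff.  The
 * dwc_otg workaround above reads wLength straight out of the 8-byte
 * USB control SETUP packet (bmRequestType, bRequest, wValue, wIndex,
 * wLength), so index 3 of a uint16_t view is wLength in bytes 6-7.
 * A plain-C view of that layout, mirroring the patch's access pattern;
 * the example request bytes are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

struct setup_packet {
	uint8_t  bmRequestType;
	uint8_t  bRequest;
	uint16_t wValue;
	uint16_t wIndex;
	uint16_t wLength;
};		/* 8 bytes, no padding */

int main(void)
{
	/* GET_STATUS to the device: wLength = 2, i.e. a short IN packet
	 * that would trigger the align_buf workaround above. */
	uint8_t raw[8] = { 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00 };
	uint16_t wLength = ((uint16_t *)raw)[3];

	printf("sizeof(struct setup_packet) = %zu, wLength = %u\n",
	       sizeof(struct setup_packet), wLength);
	return 0;
}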
a/drivers/usb/host/ohci-hcd.c +++ b/drivers/usb/host/ohci-hcd.c @@ -418,8 +418,7 @@ static void ohci_usb_reset (struct ohci_hcd *ohci) * other cases where the next software may expect clean state from the * "firmware". this is bus-neutral, unlike shutdown() methods. */ -static void -ohci_shutdown (struct usb_hcd *hcd) +static void _ohci_shutdown(struct usb_hcd *hcd) { struct ohci_hcd *ohci; @@ -435,6 +434,16 @@ ohci_shutdown (struct usb_hcd *hcd) ohci->rh_state = OHCI_RH_HALTED; } +static void ohci_shutdown(struct usb_hcd *hcd) +{ + struct ohci_hcd *ohci = hcd_to_ohci(hcd); + unsigned long flags; + + spin_lock_irqsave(&ohci->lock, flags); + _ohci_shutdown(hcd); + spin_unlock_irqrestore(&ohci->lock, flags); +} + /*-------------------------------------------------------------------------* * HC functions *-------------------------------------------------------------------------*/ @@ -752,7 +761,7 @@ static void io_watchdog_func(struct timer_list *t) died: usb_hc_died(ohci_to_hcd(ohci)); ohci_dump(ohci); - ohci_shutdown(ohci_to_hcd(ohci)); + _ohci_shutdown(ohci_to_hcd(ohci)); goto done; } else { /* No write back because the done queue was empty */ diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c index 3625a5c1a41b81c88176bcc6a0b9e5c94bf4d59a..070c66f86e67d9ccce855378bdcfd5e9ef2e4204 100644 --- a/drivers/usb/host/pci-quirks.c +++ b/drivers/usb/host/pci-quirks.c @@ -205,7 +205,7 @@ int usb_amd_find_chipset_info(void) { unsigned long flags; struct amd_chipset_info info; - int ret; + int need_pll_quirk = 0; spin_lock_irqsave(&amd_lock, flags); @@ -219,21 +219,28 @@ int usb_amd_find_chipset_info(void) spin_unlock_irqrestore(&amd_lock, flags); if (!amd_chipset_sb_type_init(&info)) { - ret = 0; goto commit; } - /* Below chipset generations needn't enable AMD PLL quirk */ - if (info.sb_type.gen == AMD_CHIPSET_UNKNOWN || - info.sb_type.gen == AMD_CHIPSET_SB600 || - info.sb_type.gen == AMD_CHIPSET_YANGTZE || - (info.sb_type.gen == AMD_CHIPSET_SB700 && - info.sb_type.rev > 0x3b)) { + switch (info.sb_type.gen) { + case AMD_CHIPSET_SB700: + need_pll_quirk = info.sb_type.rev <= 0x3B; + break; + case AMD_CHIPSET_SB800: + case AMD_CHIPSET_HUDSON2: + case AMD_CHIPSET_BOLTON: + need_pll_quirk = 1; + break; + default: + need_pll_quirk = 0; + break; + } + + if (!need_pll_quirk) { if (info.smbus_dev) { pci_dev_put(info.smbus_dev); info.smbus_dev = NULL; } - ret = 0; goto commit; } @@ -252,7 +259,7 @@ int usb_amd_find_chipset_info(void) } } - ret = info.probe_result = 1; + need_pll_quirk = info.probe_result = 1; printk(KERN_DEBUG "QUIRK: Enable AMD PLL fix\n"); commit: @@ -263,7 +270,7 @@ commit: /* Mark that we where here */ amd_chipset.probe_count++; - ret = amd_chipset.probe_result; + need_pll_quirk = amd_chipset.probe_result; spin_unlock_irqrestore(&amd_lock, flags); @@ -277,7 +284,7 @@ commit: spin_unlock_irqrestore(&amd_lock, flags); } - return ret; + return need_pll_quirk; } EXPORT_SYMBOL_GPL(usb_amd_find_chipset_info); diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index b1f27aa38b1008c0edc8f425e0caf4b4a2c06959..7e417b71d24cbd27b534adf373123ae91ef6a207 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -2491,9 +2491,11 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags) * Event ring setup: Allocate a normal ring, but also setup * the event ring segment table (ERST). Section 4.9.3. 
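/*
 * Editor's note -- illustrative sketch, not part of the diff.  The
 * ohci-hcd change above is the usual "locked wrapper around a bare
 * worker" split: _ohci_shutdown() assumes ohci->lock is already held
 * (the io_watchdog path), while ohci_shutdown() takes it for external
 * callers.  Same shape with a pthread mutex and hypothetical names.
 */
#include <pthread.h>

struct ctrl {
	pthread_mutex_t lock;
	int halted;
};

/* Caller must hold ctrl->lock. */
static void _ctrl_shutdown(struct ctrl *c)
{
	c->halted = 1;
}

/* Public entry point: acquires the lock, then reuses the bare worker. */
static void ctrl_shutdown(struct ctrl *c)
{
	pthread_mutex_lock(&c->lock);
	_ctrl_shutdown(c);
	pthread_mutex_unlock(&c->lock);
}

int main(void)
{
	struct ctrl c = { PTHREAD_MUTEX_INITIALIZER, 0 };

	ctrl_shutdown(&c);
	return c.halted ? 0 : 1;
}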
*/ + val2 = 1 << HCS_ERST_MAX(xhci->hcs_params2); + val2 = min_t(unsigned int, ERST_MAX_SEGS, val2); xhci_dbg_trace(xhci, trace_xhci_dbg_init, "// Allocating event ring"); - xhci->event_ring = xhci_ring_alloc(xhci, ERST_NUM_SEGS, 1, TYPE_EVENT, - 0, flags); + xhci->event_ring = xhci_ring_alloc(xhci, val2, 1, TYPE_EVENT, + 0, flags); if (!xhci->event_ring) goto fail; if (xhci_check_trb_in_td_math(xhci) < 0) @@ -2506,7 +2508,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags) /* set ERST count with the number of entries in the segment table */ val = readl(&xhci->ir_set->erst_size); val &= ERST_SIZE_MASK; - val |= ERST_NUM_SEGS; + val |= val2; xhci_dbg_trace(xhci, trace_xhci_dbg_init, "// Write ERST size = %i to ir_set 0 (some bits preserved)", val); diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c index 671bce18782c5a788ad1af896ab9066fd4078839..2b0ccd150209fe3a1f55bb73d29a965fe1cd6f26 100644 --- a/drivers/usb/host/xhci-rcar.c +++ b/drivers/usb/host/xhci-rcar.c @@ -104,7 +104,7 @@ static int xhci_rcar_is_gen2(struct device *dev) return of_device_is_compatible(node, "renesas,xhci-r8a7790") || of_device_is_compatible(node, "renesas,xhci-r8a7791") || of_device_is_compatible(node, "renesas,xhci-r8a7793") || - of_device_is_compatible(node, "renensas,rcar-gen2-xhci"); + of_device_is_compatible(node, "renesas,rcar-gen2-xhci"); } static int xhci_rcar_is_gen3(struct device *dev) @@ -238,10 +238,15 @@ int xhci_rcar_init_quirk(struct usb_hcd *hcd) * pointers. So, this driver clears the AC64 bit of xhci->hcc_params * to call dma_set_coherent_mask(dev, DMA_BIT_MASK(32)) in * xhci_gen_setup(). + * + * And, since the firmware/internal CPU control the USBSTS.STS_HALT + * and the process speed is down when the roothub port enters U3, + * long delay for the handshake of STS_HALT is neeed in xhci_suspend(). */ if (xhci_rcar_is_gen2(hcd->self.controller) || - xhci_rcar_is_gen3(hcd->self.controller)) - xhci->quirks |= XHCI_NO_64BIT_SUPPORT; + xhci_rcar_is_gen3(hcd->self.controller)) { + xhci->quirks |= XHCI_NO_64BIT_SUPPORT | XHCI_SLOW_SUSPEND; + } if (!xhci_rcar_wait_for_pll_active(hcd)) return -ETIMEDOUT; diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index 3a6c451777e7dc531142bd94613ed656fd74017b..ec515c563a37e2361908f17c53f8e033948f4384 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -1643,8 +1643,8 @@ struct urb_priv { * Each segment table entry is 4*32bits long. 1K seems like an ok size: * (1K bytes * 8bytes/bit) / (4*32 bits) = 64 segment entries in the table, * meaning 64 ring segments. 
- * Initial allocated size of the ERST, in number of entries */ -#define ERST_NUM_SEGS 1 + * Maximum number of segments in the ERST */ +#define ERST_MAX_SEGS 8 /* Initial allocated size of the ERST, in number of entries */ #define ERST_SIZE 64 /* Initial number of event segment rings allocated */ diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c index c2991b8a65ce455b9e35c15fbb758108b1012071..55db0fc87927ea3432e939867ac8435e84fe05c4 100644 --- a/drivers/usb/misc/iowarrior.c +++ b/drivers/usb/misc/iowarrior.c @@ -866,19 +866,20 @@ static void iowarrior_disconnect(struct usb_interface *interface) dev = usb_get_intfdata(interface); mutex_lock(&iowarrior_open_disc_lock); usb_set_intfdata(interface, NULL); + /* prevent device read, write and ioctl */ + dev->present = 0; minor = dev->minor; + mutex_unlock(&iowarrior_open_disc_lock); + /* give back our minor - this will call close() locks need to be dropped at this point*/ - /* give back our minor */ usb_deregister_dev(interface, &iowarrior_class); mutex_lock(&dev->mutex); /* prevent device read, write and ioctl */ - dev->present = 0; mutex_unlock(&dev->mutex); - mutex_unlock(&iowarrior_open_disc_lock); if (dev->opened) { /* There is a process that holds a filedescriptor to the device , diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c index 7b306aa22d2589518d696111cb2750bdc3bed4c0..6715a128e6c8b29bd52cb6c132b5eb0ecd459209 100644 --- a/drivers/usb/misc/yurex.c +++ b/drivers/usb/misc/yurex.c @@ -92,7 +92,6 @@ static void yurex_delete(struct kref *kref) dev_dbg(&dev->interface->dev, "%s\n", __func__); - usb_put_dev(dev->udev); if (dev->cntl_urb) { usb_kill_urb(dev->cntl_urb); kfree(dev->cntl_req); @@ -108,6 +107,7 @@ static void yurex_delete(struct kref *kref) dev->int_buffer, dev->urb->transfer_dma); usb_free_urb(dev->urb); } + usb_put_dev(dev->udev); kfree(dev); } diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index e0a4749ba565e0227c27cacba3dbdaf0b8372f07..56f572cb08f8b5df05ab137adea3b694aae42b71 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -968,6 +968,11 @@ static const struct usb_device_id option_ids[] = { { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7B) }, { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7C) }, + /* Motorola devices */ + { USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x2a70, 0xff, 0xff, 0xff) }, /* mdm6600 */ + { USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x2e0a, 0xff, 0xff, 0xff) }, /* mdm9600 */ + { USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x4281, 0x0a, 0x00, 0xfc) }, /* mdm ram dl */ + { USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x900e, 0xff, 0xff, 0xff) }, /* mdm qc dl */ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V640) }, { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V620) }, @@ -1549,6 +1554,7 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G v2 */ .driver_info = RSVD(2) }, { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x1476, 0xff) }, /* GosunCn ZTE WeLink ME3630 (ECM/NCM mode) */ + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1481, 0xff, 0x00, 0x00) }, /* ZTE MF871A */ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) }, @@ -1952,11 +1958,15 @@ static const struct usb_device_id option_ids[] = { 
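/*
 * Editor's note -- illustrative sketch, not part of the diff.  The
 * xhci-mem change above sizes the event ring from the controller's
 * advertised ERST Max (a power-of-two exponent in bits 7:4 of
 * HCSPARAMS2), clamped to the driver's new ERST_MAX_SEGS ceiling,
 * instead of the old fixed single segment.
 */
#include <stdio.h>

#define SKETCH_ERST_MAX_SEGS	8	/* mirrors ERST_MAX_SEGS above */

static unsigned int event_ring_segments(unsigned int hcs_params2)
{
	unsigned int erst_max = (hcs_params2 >> 4) & 0xf;	/* exponent */
	unsigned int segs = 1u << erst_max;

	return segs < SKETCH_ERST_MAX_SEGS ? segs : SKETCH_ERST_MAX_SEGS;
}

int main(void)
{
	/* A controller advertising ERST Max = 15 is still held to 8 segments. */
	printf("%u\n", event_ring_segments(0xf0));
	return 0;
}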
.driver_info = RSVD(4) }, { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e35, 0xff), /* D-Link DWM-222 */ .driver_info = RSVD(4) }, + { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e3d, 0xff), /* D-Link DWM-222 A2 */ + .driver_info = RSVD(4) }, { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */ { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */ { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/A3 */ { USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2031, 0xff), /* Olicard 600 */ .driver_info = RSVD(4) }, + { USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2060, 0xff), /* BroadMobi BM818 */ + .driver_info = RSVD(4) }, { USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) }, /* OLICARD300 - MT6225 */ { USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) }, { USB_DEVICE(VIATELECOM_VENDOR_ID, VIATELECOM_PRODUCT_CDS7) }, diff --git a/drivers/usb/storage/realtek_cr.c b/drivers/usb/storage/realtek_cr.c index cc794e25a0b6ed043149685eb1400492a977b2c3..1d9ce9cbc831d1035a6ad086b10f1f99e3188a63 100644 --- a/drivers/usb/storage/realtek_cr.c +++ b/drivers/usb/storage/realtek_cr.c @@ -38,7 +38,7 @@ MODULE_LICENSE("GPL"); static int auto_delink_en = 1; module_param(auto_delink_en, int, S_IRUGO | S_IWUSR); -MODULE_PARM_DESC(auto_delink_en, "enable auto delink"); +MODULE_PARM_DESC(auto_delink_en, "auto delink mode (0=firmware, 1=software [default])"); #ifdef CONFIG_REALTEK_AUTOPM static int ss_en = 1; @@ -996,12 +996,15 @@ static int init_realtek_cr(struct us_data *us) goto INIT_FAIL; } - if (CHECK_FW_VER(chip, 0x5888) || CHECK_FW_VER(chip, 0x5889) || - CHECK_FW_VER(chip, 0x5901)) - SET_AUTO_DELINK(chip); - if (STATUS_LEN(chip) == 16) { - if (SUPPORT_AUTO_DELINK(chip)) + if (CHECK_PID(chip, 0x0138) || CHECK_PID(chip, 0x0158) || + CHECK_PID(chip, 0x0159)) { + if (CHECK_FW_VER(chip, 0x5888) || CHECK_FW_VER(chip, 0x5889) || + CHECK_FW_VER(chip, 0x5901)) SET_AUTO_DELINK(chip); + if (STATUS_LEN(chip) == 16) { + if (SUPPORT_AUTO_DELINK(chip)) + SET_AUTO_DELINK(chip); + } } #ifdef CONFIG_REALTEK_AUTOPM if (ss_en) diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h index ea0d27a94afe058b3671ad6c66989ecbbdb98568..1cd9b6305b06042fecf75942c4f79af41747feea 100644 --- a/drivers/usb/storage/unusual_devs.h +++ b/drivers/usb/storage/unusual_devs.h @@ -2100,7 +2100,7 @@ UNUSUAL_DEV( 0x14cd, 0x6600, 0x0201, 0x0201, US_FL_IGNORE_RESIDUE ), /* Reported by Michael Büsch */ -UNUSUAL_DEV( 0x152d, 0x0567, 0x0114, 0x0116, +UNUSUAL_DEV( 0x152d, 0x0567, 0x0114, 0x0117, "JMicron", "USB to ATA/ATAPI Bridge", USB_SC_DEVICE, USB_PR_DEVICE, NULL, diff --git a/drivers/usb/typec/tcpm.c b/drivers/usb/typec/tcpm.c index 3457c1fdebd1b2191a97206ab6e92dd35a89e40d..fb20aa974ae12ae40804daf9c154ec83e7120dce 100644 --- a/drivers/usb/typec/tcpm.c +++ b/drivers/usb/typec/tcpm.c @@ -378,7 +378,8 @@ static enum tcpm_state tcpm_default_state(struct tcpm_port *port) return SNK_UNATTACHED; else if (port->try_role == TYPEC_SOURCE) return SRC_UNATTACHED; - else if (port->tcpc->config->default_role == TYPEC_SINK) + else if (port->tcpc->config && + port->tcpc->config->default_role == TYPEC_SINK) return SNK_UNATTACHED; /* Fall through to return SRC_UNATTACHED */ } else if (port->port_type == TYPEC_PORT_SNK) { @@ -585,7 +586,20 @@ static void tcpm_debugfs_init(struct tcpm_port *port) static void tcpm_debugfs_exit(struct tcpm_port *port) { + int i; + + mutex_lock(&port->logbuffer_lock); + for (i = 0; i < LOG_BUFFER_ENTRIES; i++) { + 
kfree(port->logbuffer[i]); + port->logbuffer[i] = NULL; + } + mutex_unlock(&port->logbuffer_lock); + debugfs_remove(port->dentry); + if (list_empty(&rootdir->d_subdirs)) { + debugfs_remove(rootdir); + rootdir = NULL; + } } #else @@ -1094,7 +1108,8 @@ static int tcpm_pd_svdm(struct tcpm_port *port, const __le32 *payload, int cnt, break; case CMD_ATTENTION: /* Attention command does not have response */ - typec_altmode_attention(adev, p[1]); + if (adev) + typec_altmode_attention(adev, p[1]); return 0; default: break; @@ -1146,20 +1161,26 @@ static int tcpm_pd_svdm(struct tcpm_port *port, const __le32 *payload, int cnt, } break; case CMD_ENTER_MODE: - typec_altmode_update_active(pdev, true); - - if (typec_altmode_vdm(adev, p[0], &p[1], cnt)) { - response[0] = VDO(adev->svid, 1, CMD_EXIT_MODE); - response[0] |= VDO_OPOS(adev->mode); - return 1; + if (adev && pdev) { + typec_altmode_update_active(pdev, true); + + if (typec_altmode_vdm(adev, p[0], &p[1], cnt)) { + response[0] = VDO(adev->svid, 1, + CMD_EXIT_MODE); + response[0] |= VDO_OPOS(adev->mode); + return 1; + } } return 0; case CMD_EXIT_MODE: - typec_altmode_update_active(pdev, false); + if (adev && pdev) { + typec_altmode_update_active(pdev, false); - /* Back to USB Operation */ - WARN_ON(typec_altmode_notify(adev, TYPEC_STATE_USB, - NULL)); + /* Back to USB Operation */ + WARN_ON(typec_altmode_notify(adev, + TYPEC_STATE_USB, + NULL)); + } break; default: break; @@ -1169,8 +1190,10 @@ static int tcpm_pd_svdm(struct tcpm_port *port, const __le32 *payload, int cnt, switch (cmd) { case CMD_ENTER_MODE: /* Back to USB Operation */ - WARN_ON(typec_altmode_notify(adev, TYPEC_STATE_USB, - NULL)); + if (adev) + WARN_ON(typec_altmode_notify(adev, + TYPEC_STATE_USB, + NULL)); break; default: break; @@ -1181,7 +1204,8 @@ static int tcpm_pd_svdm(struct tcpm_port *port, const __le32 *payload, int cnt, } /* Informing the alternate mode drivers about everything */ - typec_altmode_vdm(adev, p[0], &p[1], cnt); + if (adev) + typec_altmode_vdm(adev, p[0], &p[1], cnt); return rlen; } @@ -1421,7 +1445,7 @@ static enum pdo_err tcpm_caps_err(struct tcpm_port *port, const u32 *pdo, else if ((pdo_min_voltage(pdo[i]) == pdo_min_voltage(pdo[i - 1])) && (pdo_max_voltage(pdo[i]) == - pdo_min_voltage(pdo[i - 1]))) + pdo_max_voltage(pdo[i - 1]))) return PDO_ERR_DUPE_PDO; break; /* @@ -4083,7 +4107,7 @@ static int tcpm_try_role(const struct typec_capability *cap, int role) mutex_lock(&port->lock); if (tcpc->try_role) ret = tcpc->try_role(tcpc, role); - if (!ret && !tcpc->config->try_role_hw) + if (!ret && (!tcpc->config || !tcpc->config->try_role_hw)) port->try_role = role; port->try_src_count = 0; port->try_snk_count = 0; @@ -4730,7 +4754,7 @@ static int tcpm_copy_caps(struct tcpm_port *port, port->typec_caps.prefer_role = tcfg->default_role; port->typec_caps.type = tcfg->type; port->typec_caps.data = tcfg->data; - port->self_powered = port->tcpc->config->self_powered; + port->self_powered = tcfg->self_powered; return 0; } diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 39155d7cc8946ead49d2a55ebec37006ac0d95a1..124356dc39e14589e6cd94e7e7a078073836d667 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -36,7 +36,7 @@ #include "vhost.h" -static int experimental_zcopytx = 1; +static int experimental_zcopytx = 0; module_param(experimental_zcopytx, int, 0444); MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;" " 1 -Enable; 0 - Disable"); @@ -497,12 +497,6 @@ static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter, 
return iov_iter_count(iter); } -static bool vhost_exceeds_weight(int pkts, int total_len) -{ - return total_len >= VHOST_NET_WEIGHT || - pkts >= VHOST_NET_PKT_WEIGHT; -} - static int get_tx_bufs(struct vhost_net *net, struct vhost_net_virtqueue *nvq, struct msghdr *msg, @@ -557,7 +551,7 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock) int err; int sent_pkts = 0; - for (;;) { + do { bool busyloop_intr = false; head = get_tx_bufs(net, nvq, &msg, &out, &in, &len, @@ -598,11 +592,7 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock) err, len); if (++nvq->done_idx >= VHOST_NET_BATCH) vhost_net_signal_used(nvq); - if (vhost_exceeds_weight(++sent_pkts, total_len)) { - vhost_poll_queue(&vq->poll); - break; - } - } + } while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len))); vhost_net_signal_used(nvq); } @@ -626,7 +616,7 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) bool zcopy_used; int sent_pkts = 0; - for (;;) { + do { bool busyloop_intr; /* Release DMAs done buffers first */ @@ -701,11 +691,7 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) else vhost_zerocopy_signal_used(net, vq); vhost_net_tx_packet(net); - if (unlikely(vhost_exceeds_weight(++sent_pkts, total_len))) { - vhost_poll_queue(&vq->poll); - break; - } - } + } while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len))); } /* Expects to be always run from workqueue - which acts as @@ -941,8 +927,11 @@ static void handle_rx(struct vhost_net *net) vq->log : NULL; mergeable = vhost_has_feature(vq, VIRTIO_NET_F_MRG_RXBUF); - while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk, - &busyloop_intr))) { + do { + sock_len = vhost_net_rx_peek_head_len(net, sock->sk, + &busyloop_intr); + if (!sock_len) + break; sock_len += sock_hlen; vhost_len = sock_len + vhost_hlen; headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx, @@ -1027,14 +1016,11 @@ static void handle_rx(struct vhost_net *net) vhost_log_write(vq, vq_log, log, vhost_len, vq->iov, in); total_len += vhost_len; - if (unlikely(vhost_exceeds_weight(++recv_pkts, total_len))) { - vhost_poll_queue(&vq->poll); - goto out; - } - } + } while (likely(!vhost_exceeds_weight(vq, ++recv_pkts, total_len))); + if (unlikely(busyloop_intr)) vhost_poll_queue(&vq->poll); - else + else if (!sock_len) vhost_net_enable_vq(net, vq); out: vhost_net_signal_used(nvq); @@ -1115,7 +1101,8 @@ static int vhost_net_open(struct inode *inode, struct file *f) vhost_net_buf_init(&n->vqs[i].rxq); } vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX, - UIO_MAXIOV + VHOST_NET_BATCH); + UIO_MAXIOV + VHOST_NET_BATCH, + VHOST_NET_PKT_WEIGHT, VHOST_NET_WEIGHT); vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, EPOLLOUT, dev); vhost_poll_init(n->poll + VHOST_NET_VQ_RX, handle_rx_net, EPOLLIN, dev); diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 0cfa925be4ec62df05d07222a95dad558f76ea90..5e298d9287f1c6dece8471eb38d13daf574ec894 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -57,6 +57,12 @@ #define VHOST_SCSI_PREALLOC_UPAGES 2048 #define VHOST_SCSI_PREALLOC_PROT_SGLS 2048 +/* Max number of requests before requeueing the job. + * Using this limit prevents one virtqueue from starving others with + * request. 
+ */ +#define VHOST_SCSI_WEIGHT 256 + struct vhost_scsi_inflight { /* Wait for the flush operation to finish */ struct completion comp; @@ -811,7 +817,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq) u64 tag; u32 exp_data_len, data_direction; unsigned int out = 0, in = 0; - int head, ret, prot_bytes; + int head, ret, prot_bytes, c = 0; size_t req_size, rsp_size = sizeof(struct virtio_scsi_cmd_resp); size_t out_size, in_size; u16 lun; @@ -830,7 +836,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq) vhost_disable_notify(&vs->dev, vq); - for (;;) { + do { head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), &out, &in, NULL, NULL); @@ -1045,7 +1051,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq) */ INIT_WORK(&cmd->work, vhost_scsi_submission_work); queue_work(vhost_scsi_workqueue, &cmd->work); - } + } while (likely(!vhost_exceeds_weight(vq, ++c, 0))); out: mutex_unlock(&vq->mutex); } @@ -1398,7 +1404,8 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) vqs[i] = &vs->vqs[i].vq; vs->vqs[i].vq.handle_kick = vhost_scsi_handle_kick; } - vhost_dev_init(&vs->dev, vqs, VHOST_SCSI_MAX_VQ, UIO_MAXIOV); + vhost_dev_init(&vs->dev, vqs, VHOST_SCSI_MAX_VQ, UIO_MAXIOV, + VHOST_SCSI_WEIGHT, 0); vhost_scsi_init_inflight(vs, NULL); diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index c163bc15976abd8e0de6380d1fcc446a6a8988ee..0752f8dc47b1249b9bbae80fd044045c3172b39a 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -413,8 +413,24 @@ static void vhost_dev_free_iovecs(struct vhost_dev *dev) vhost_vq_free_iovecs(dev->vqs[i]); } +bool vhost_exceeds_weight(struct vhost_virtqueue *vq, + int pkts, int total_len) +{ + struct vhost_dev *dev = vq->dev; + + if ((dev->byte_weight && total_len >= dev->byte_weight) || + pkts >= dev->weight) { + vhost_poll_queue(&vq->poll); + return true; + } + + return false; +} +EXPORT_SYMBOL_GPL(vhost_exceeds_weight); + void vhost_dev_init(struct vhost_dev *dev, - struct vhost_virtqueue **vqs, int nvqs, int iov_limit) + struct vhost_virtqueue **vqs, int nvqs, + int iov_limit, int weight, int byte_weight) { struct vhost_virtqueue *vq; int i; @@ -428,6 +444,8 @@ void vhost_dev_init(struct vhost_dev *dev, dev->mm = NULL; dev->worker = NULL; dev->iov_limit = iov_limit; + dev->weight = weight; + dev->byte_weight = byte_weight; init_llist_head(&dev->work_list); init_waitqueue_head(&dev->wait); INIT_LIST_HEAD(&dev->read_list); diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h index 9490e7ddb3404891515908cb8e67d74563640ebd..27a78a9b8cc7dc6e21626f1134d34055ca71dfe2 100644 --- a/drivers/vhost/vhost.h +++ b/drivers/vhost/vhost.h @@ -171,10 +171,13 @@ struct vhost_dev { struct list_head pending_list; wait_queue_head_t wait; int iov_limit; + int weight; + int byte_weight; }; +bool vhost_exceeds_weight(struct vhost_virtqueue *vq, int pkts, int total_len); void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs, - int nvqs, int iov_limit); + int nvqs, int iov_limit, int weight, int byte_weight); long vhost_dev_set_owner(struct vhost_dev *dev); bool vhost_dev_has_owner(struct vhost_dev *dev); long vhost_dev_check_owner(struct vhost_dev *); diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c index e440f87ae1d60630ae195d23e6d0b2fac67cbd05..bab495d73195f985ef30b26c36f5ddb0b33a5ec3 100644 --- a/drivers/vhost/vsock.c +++ b/drivers/vhost/vsock.c @@ -21,6 +21,14 @@ #include "vhost.h" #define VHOST_VSOCK_DEFAULT_HOST_CID 2 +/* Max number of bytes 
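/*
 * Editor's note -- illustrative sketch, not part of the diff.  The
 * vhost handlers above are all converted to the same shape: a do/while
 * loop bounded by the new vhost_exceeds_weight(), which stops and
 * requeues the poll work once either a packet-count or byte-count
 * budget is spent, so one busy virtqueue cannot starve the others.
 * Skeleton of that pattern in plain C; process_one() is a hypothetical
 * stand-in for handling a single descriptor, and the budget values
 * mirror the 256-packet / 0x80000-byte weights defined in this patch.
 */
#include <stdbool.h>
#include <stdio.h>

struct budget {
	int weight;		/* max packets per work pass */
	int byte_weight;	/* max bytes per work pass, 0 = unlimited */
};

/* Mirrors vhost_exceeds_weight(): true once either budget is spent. */
static bool exceeds_weight(const struct budget *b, int pkts, int total_len)
{
	return (b->byte_weight && total_len >= b->byte_weight) ||
	       pkts >= b->weight;
}

/* Hypothetical: handle one descriptor and return its length. */
static int process_one(void)
{
	return 1500;
}

int main(void)
{
	struct budget b = { .weight = 256, .byte_weight = 0x80000 };
	int pkts = 0, total_len = 0;

	do {
		total_len += process_one();
	} while (!exceeds_weight(&b, ++pkts, total_len));

	/* With 1500-byte packets the 256-packet budget trips first. */
	printf("handled %d packets, %d bytes\n", pkts, total_len);
	return 0;
}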
transferred before requeueing the job. + * Using this limit prevents one virtqueue from starving others. */ +#define VHOST_VSOCK_WEIGHT 0x80000 +/* Max number of packets transferred before requeueing the job. + * Using this limit prevents one virtqueue from starving others with + * small pkts. + */ +#define VHOST_VSOCK_PKT_WEIGHT 256 enum { VHOST_VSOCK_FEATURES = VHOST_FEATURES, @@ -78,6 +86,7 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock, struct vhost_virtqueue *vq) { struct vhost_virtqueue *tx_vq = &vsock->vqs[VSOCK_VQ_TX]; + int pkts = 0, total_len = 0; bool added = false; bool restart_tx = false; @@ -89,7 +98,7 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock, /* Avoid further vmexits, we're already processing the virtqueue */ vhost_disable_notify(&vsock->dev, vq); - for (;;) { + do { struct virtio_vsock_pkt *pkt; struct iov_iter iov_iter; unsigned out, in; @@ -174,8 +183,9 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock, */ virtio_transport_deliver_tap_pkt(pkt); + total_len += pkt->len; virtio_transport_free_pkt(pkt); - } + } while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len))); if (added) vhost_signal(&vsock->dev, vq); @@ -350,7 +360,7 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work) struct vhost_vsock *vsock = container_of(vq->dev, struct vhost_vsock, dev); struct virtio_vsock_pkt *pkt; - int head; + int head, pkts = 0, total_len = 0; unsigned int out, in; bool added = false; @@ -360,7 +370,7 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work) goto out; vhost_disable_notify(&vsock->dev, vq); - for (;;) { + do { u32 len; if (!vhost_vsock_more_replies(vsock)) { @@ -401,9 +411,11 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work) else virtio_transport_free_pkt(pkt); - vhost_add_used(vq, head, sizeof(pkt->hdr) + len); + len += sizeof(pkt->hdr); + vhost_add_used(vq, head, len); + total_len += len; added = true; - } + } while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len))); no_more_replies: if (added) @@ -531,7 +543,9 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file) vsock->vqs[VSOCK_VQ_TX].handle_kick = vhost_vsock_handle_tx_kick; vsock->vqs[VSOCK_VQ_RX].handle_kick = vhost_vsock_handle_rx_kick; - vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs), UIO_MAXIOV); + vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs), + UIO_MAXIOV, VHOST_VSOCK_PKT_WEIGHT, + VHOST_VSOCK_WEIGHT); file->private_data = vsock; spin_lock_init(&vsock->send_pkt_list_lock); diff --git a/drivers/watchdog/bcm2835_wdt.c b/drivers/watchdog/bcm2835_wdt.c index 4ed73194a64df4a748bdcbe3210b5537b224fb2f..81783f628b4de218ee776c48b73bb4bd503480d9 100644 --- a/drivers/watchdog/bcm2835_wdt.c +++ b/drivers/watchdog/bcm2835_wdt.c @@ -246,6 +246,7 @@ module_param(nowayout, bool, 0); MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default=" __MODULE_STRING(WATCHDOG_NOWAYOUT) ")"); +MODULE_ALIAS("platform:bcm2835-wdt"); MODULE_AUTHOR("Lubomir Rintel "); MODULE_DESCRIPTION("Driver for Broadcom BCM2835 watchdog timer"); MODULE_LICENSE("GPL"); diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c index 7ab6caef599c59c57e1de6e25b03a073652472c7..d4e8b717ce2b2d1b9068ee686023b4de4fb9741e 100644 --- a/drivers/xen/balloon.c +++ b/drivers/xen/balloon.c @@ -527,8 +527,15 @@ static void balloon_process(struct work_struct *work) state = reserve_additional_memory(); } - if (credit < 0) - state = decrease_reservation(-credit, GFP_BALLOON); + if (credit < 0) { + long n_pages; + + n_pages = 
min(-credit, si_mem_available()); + state = decrease_reservation(n_pages, GFP_BALLOON); + if (state == BP_DONE && n_pages != -credit && + n_pages < totalreserve_pages) + state = BP_EAGAIN; + } state = update_schedule(state); @@ -567,6 +574,9 @@ static int add_ballooned_pages(int nr_pages) } } + if (si_mem_available() < nr_pages) + return -ENOMEM; + st = decrease_reservation(nr_pages, GFP_USER); if (st != BP_DONE) return -ENOMEM; @@ -696,7 +706,7 @@ static int __init balloon_init(void) balloon_stats.schedule_delay = 1; balloon_stats.max_schedule_delay = 32; balloon_stats.retry_count = 1; - balloon_stats.max_retry_count = RETRY_UNLIMITED; + balloon_stats.max_retry_count = 4; #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG set_online_page_callback(&xen_online_page); diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c index fe1f16351f9411455b15576cdc83cb6db7d98281..8d49b91d92cd33a343eac0cb9810412cfeaab99b 100644 --- a/drivers/xen/events/events_base.c +++ b/drivers/xen/events/events_base.c @@ -1293,7 +1293,7 @@ void rebind_evtchn_irq(int evtchn, int irq) } /* Rebind an evtchn so that it gets delivered to a specific cpu */ -int xen_rebind_evtchn_to_cpu(int evtchn, unsigned tcpu) +static int xen_rebind_evtchn_to_cpu(int evtchn, unsigned int tcpu) { struct evtchn_bind_vcpu bind_vcpu; int masked; @@ -1327,7 +1327,6 @@ int xen_rebind_evtchn_to_cpu(int evtchn, unsigned tcpu) return 0; } -EXPORT_SYMBOL_GPL(xen_rebind_evtchn_to_cpu); static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest, bool force) @@ -1341,6 +1340,15 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest, return ret; } +/* To be called with desc->lock held. */ +int xen_set_affinity_evtchn(struct irq_desc *desc, unsigned int tcpu) +{ + struct irq_data *d = irq_desc_get_irq_data(desc); + + return set_affinity_irq(d, cpumask_of(tcpu), false); +} +EXPORT_SYMBOL_GPL(xen_set_affinity_evtchn); + static void enable_dynirq(struct irq_data *data) { int evtchn = evtchn_from_irq(data->irq); diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c index 6d1a5e58968ffdfb42a71e5f984a52475ea0ca9c..47c70b826a6abf227960030cb09dd3a45fec4746 100644 --- a/drivers/xen/evtchn.c +++ b/drivers/xen/evtchn.c @@ -447,7 +447,7 @@ static void evtchn_bind_interdom_next_vcpu(int evtchn) this_cpu_write(bind_last_selected_cpu, selected_cpu); /* unmask expects irqs to be disabled */ - xen_rebind_evtchn_to_cpu(evtchn, selected_cpu); + xen_set_affinity_evtchn(desc, selected_cpu); raw_spin_unlock_irqrestore(&desc->lock, flags); } diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index aa081f8067283bece0384cfb01d4bf5d1d2aa6cc..3d9997595d900fb2eaeaf6325494f8ed2d02a39b 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, /* Convert the size to actually allocated. 
*/ size = 1UL << (order + XEN_PAGE_SHIFT); - if (((dev_addr + size - 1 <= dma_mask)) || - range_straddles_page_boundary(phys, size)) + if (!WARN_ON((dev_addr + size - 1 > dma_mask) || + range_straddles_page_boundary(phys, size))) xen_destroy_contiguous_region(phys, order); xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs); diff --git a/drivers/xen/xen-pciback/conf_space_capability.c b/drivers/xen/xen-pciback/conf_space_capability.c index 73427d8e01161ae55fa561599927bb68536bdbd0..e5694133ebe57f37950b134c26c3fbc2a231a221 100644 --- a/drivers/xen/xen-pciback/conf_space_capability.c +++ b/drivers/xen/xen-pciback/conf_space_capability.c @@ -116,13 +116,12 @@ static int pm_ctrl_write(struct pci_dev *dev, int offset, u16 new_value, { int err; u16 old_value; - pci_power_t new_state, old_state; + pci_power_t new_state; err = pci_read_config_word(dev, offset, &old_value); if (err) goto out; - old_state = (pci_power_t)(old_value & PCI_PM_CTRL_STATE_MASK); new_state = (pci_power_t)(new_value & PCI_PM_CTRL_STATE_MASK); new_value &= PM_OK_BITS; diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c index e1cbdfdb7c684fd24fdb6f25ee03f4e253e9ef58..1970693035109f9b20341209954a41f5b4d4496c 100644 --- a/fs/9p/vfs_addr.c +++ b/fs/9p/vfs_addr.c @@ -50,8 +50,9 @@ * @page: structure to page * */ -static int v9fs_fid_readpage(struct p9_fid *fid, struct page *page) +static int v9fs_fid_readpage(void *data, struct page *page) { + struct p9_fid *fid = data; struct inode *inode = page->mapping->host; struct bio_vec bvec = {.bv_page = page, .bv_len = PAGE_SIZE}; struct iov_iter to; @@ -122,7 +123,8 @@ static int v9fs_vfs_readpages(struct file *filp, struct address_space *mapping, if (ret == 0) return ret; - ret = read_cache_pages(mapping, pages, (void *)v9fs_vfs_readpage, filp); + ret = read_cache_pages(mapping, pages, v9fs_fid_readpage, + filp->private_data); p9_debug(P9_DEBUG_VFS, " = %d\n", ret); return ret; } diff --git a/fs/adfs/super.c b/fs/adfs/super.c index 7e099a7a4eb1e50bc9914e73432ecc751d0b21ed..4dc15b26348945cf308301a1c190056b976630ba 100644 --- a/fs/adfs/super.c +++ b/fs/adfs/super.c @@ -369,6 +369,7 @@ static int adfs_fill_super(struct super_block *sb, void *data, int silent) struct buffer_head *bh; struct object_info root_obj; unsigned char *b_data; + unsigned int blocksize; struct adfs_sb_info *asb; struct inode *root; int ret = -EINVAL; @@ -420,8 +421,10 @@ static int adfs_fill_super(struct super_block *sb, void *data, int silent) goto error_free_bh; } + blocksize = 1 << dr->log2secsize; brelse(bh); - if (sb_set_blocksize(sb, 1 << dr->log2secsize)) { + + if (sb_set_blocksize(sb, blocksize)) { bh = sb_bread(sb, ADFS_DISCRECORD / sb->s_blocksize); if (!bh) { adfs_error(sb, "couldn't read superblock on " diff --git a/fs/afs/callback.c b/fs/afs/callback.c index 5f261fbf2182b22a47fc93b7c6fee35f113e0097..4ad70125029988ae29ad81dc7bf0022f57a8c2b8 100644 --- a/fs/afs/callback.c +++ b/fs/afs/callback.c @@ -276,9 +276,9 @@ static void afs_break_one_callback(struct afs_server *server, struct afs_super_info *as = AFS_FS_S(cbi->sb); struct afs_volume *volume = as->volume; - write_lock(&volume->cb_break_lock); + write_lock(&volume->cb_v_break_lock); volume->cb_v_break++; - write_unlock(&volume->cb_break_lock); + write_unlock(&volume->cb_v_break_lock); } else { data.volume = NULL; data.fid = *fid; diff --git a/fs/afs/cmservice.c b/fs/afs/cmservice.c index 9e51d6fe7e8f975f34f877217a28a8e99bcfa5e4..40c6860d4c63226304cb633c9181970ec389ac23 100644 --- a/fs/afs/cmservice.c +++ b/fs/afs/cmservice.c @@ -423,18 
+423,14 @@ static void SRXAFSCB_ProbeUuid(struct work_struct *work) struct afs_call *call = container_of(work, struct afs_call, work); struct afs_uuid *r = call->request; - struct { - __be32 match; - } reply; - _enter(""); if (memcmp(r, &call->net->uuid, sizeof(call->net->uuid)) == 0) - reply.match = htonl(0); + afs_send_empty_reply(call); else - reply.match = htonl(1); + rxrpc_kernel_abort_call(call->net->socket, call->rxcall, + 1, 1, "K-1"); - afs_send_simple_reply(call, &reply, sizeof(reply)); afs_put_call(call); _leave(""); } diff --git a/fs/afs/dir.c b/fs/afs/dir.c index 855bf2b79fed4117559f6f011cacd3b43f74b927..54e7f6f1405e298db38aafae45a499b41519dfd4 100644 --- a/fs/afs/dir.c +++ b/fs/afs/dir.c @@ -937,7 +937,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags) dir_version = (long)dir->status.data_version; de_version = (long)dentry->d_fsdata; if (de_version == dir_version) - goto out_valid; + goto out_valid_noupdate; dir_version = (long)dir->invalid_before; if (de_version - dir_version >= 0) @@ -1001,6 +1001,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags) out_valid: dentry->d_fsdata = (void *)dir_version; +out_valid_noupdate: dput(parent); key_put(key); _leave(" = 1 [valid]"); diff --git a/fs/afs/file.c b/fs/afs/file.c index 7d4f26198573d7f6a4dffb7ff4a82ee0f8fbb573..843d3b970b845050701c02febcc585f00321fd95 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -193,11 +193,13 @@ void afs_put_read(struct afs_read *req) int i; if (refcount_dec_and_test(&req->usage)) { - for (i = 0; i < req->nr_pages; i++) - if (req->pages[i]) - put_page(req->pages[i]); - if (req->pages != req->array) - kfree(req->pages); + if (req->pages) { + for (i = 0; i < req->nr_pages; i++) + if (req->pages[i]) + put_page(req->pages[i]); + if (req->pages != req->array) + kfree(req->pages); + } kfree(req); } } diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 34c02fdcc25f107ccceca1ca26a304eb37f6e247..aea19614c08222f787705a8d60cdcbbd515ef4ff 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -477,7 +477,7 @@ struct afs_volume { unsigned int servers_seq; /* Incremented each time ->servers changes */ unsigned cb_v_break; /* Break-everything counter. 
*/ - rwlock_t cb_break_lock; + rwlock_t cb_v_break_lock; afs_voltype_t type; /* type of volume */ short error; diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c index c3b740813fc719850ca188f892d4f653352e8600..c7dd47eaff29d5f4dfb0be09ccd69289a9fdc7bb 100644 --- a/fs/afs/vlclient.c +++ b/fs/afs/vlclient.c @@ -60,23 +60,24 @@ static int afs_deliver_vl_get_entry_by_name_u(struct afs_call *call) struct afs_uuid__xdr *xdr; struct afs_uuid *uuid; int j; + int n = entry->nr_servers; tmp = ntohl(uvldb->serverFlags[i]); if (tmp & AFS_VLSF_DONTUSE || (new_only && !(tmp & AFS_VLSF_NEWREPSITE))) continue; if (tmp & AFS_VLSF_RWVOL) { - entry->fs_mask[i] |= AFS_VOL_VTM_RW; + entry->fs_mask[n] |= AFS_VOL_VTM_RW; if (vlflags & AFS_VLF_BACKEXISTS) - entry->fs_mask[i] |= AFS_VOL_VTM_BAK; + entry->fs_mask[n] |= AFS_VOL_VTM_BAK; } if (tmp & AFS_VLSF_ROVOL) - entry->fs_mask[i] |= AFS_VOL_VTM_RO; - if (!entry->fs_mask[i]) + entry->fs_mask[n] |= AFS_VOL_VTM_RO; + if (!entry->fs_mask[n]) continue; xdr = &uvldb->serverNumber[i]; - uuid = (struct afs_uuid *)&entry->fs_server[i]; + uuid = (struct afs_uuid *)&entry->fs_server[n]; uuid->time_low = xdr->time_low; uuid->time_mid = htons(ntohl(xdr->time_mid)); uuid->time_hi_and_version = htons(ntohl(xdr->time_hi_and_version)); diff --git a/fs/afs/volume.c b/fs/afs/volume.c index 3037bd01f617d13b1589d823cb6bdc112014bdca..5ec186ec56519ce677e0cddda50f8f59014c3fd6 100644 --- a/fs/afs/volume.c +++ b/fs/afs/volume.c @@ -47,6 +47,7 @@ static struct afs_volume *afs_alloc_volume(struct afs_mount_params *params, atomic_set(&volume->usage, 1); INIT_LIST_HEAD(&volume->proc_link); rwlock_init(&volume->servers_lock); + rwlock_init(&volume->cb_v_break_lock); memcpy(volume->name, vldb->name, vldb->name_len + 1); slist = afs_alloc_server_list(params->cell, params->key, vldb, type_mask); diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c index ac6c383d63140bc395f4e6beef7037d803036259..19855659f65030d57d1e56fb5f93341ab4efe477 100644 --- a/fs/btrfs/backref.c +++ b/fs/btrfs/backref.c @@ -1485,7 +1485,7 @@ int btrfs_check_shared(struct btrfs_root *root, u64 inum, u64 bytenr) goto out; } - trans = btrfs_attach_transaction(root); + trans = btrfs_join_transaction_nostart(root); if (IS_ERR(trans)) { if (PTR_ERR(trans) != -ENOENT && PTR_ERR(trans) != -EROFS) { ret = PTR_ERR(trans); diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c index e24c0a69ff5d43c72ee3e183c230b2adad639331..c84186563c31121c76abf4445f39c7c6f848550d 100644 --- a/fs/btrfs/file.c +++ b/fs/btrfs/file.c @@ -2732,6 +2732,11 @@ out_only_mutex: * for detecting, at fsync time, if the inode isn't yet in the * log tree or it's there but not up to date. 
*/ + struct timespec64 now = current_time(inode); + + inode_inc_iversion(inode); + inode->i_mtime = now; + inode->i_ctime = now; trans = btrfs_start_transaction(root, 1); if (IS_ERR(trans)) { err = PTR_ERR(trans); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index c1cd3fe2b29534938cf2784721616c84e2c6926f..355ff08e9d44ec937ffc07b9ab57fd731f10e47e 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -388,10 +388,31 @@ static noinline int add_async_extent(struct async_cow *cow, return 0; } +/* + * Check if the inode has flags compatible with compression + */ +static inline bool inode_can_compress(struct inode *inode) +{ + if (BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW || + BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM) + return false; + return true; +} + +/* + * Check if the inode needs to be submitted to compression, based on mount + * options, defragmentation, properties or heuristics. + */ static inline int inode_need_compress(struct inode *inode, u64 start, u64 end) { struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); + if (!inode_can_compress(inode)) { + WARN(IS_ENABLED(CONFIG_BTRFS_DEBUG), + KERN_ERR "BTRFS: unexpected compression for ino %llu\n", + btrfs_ino(BTRFS_I(inode))); + return 0; + } /* force compress */ if (btrfs_test_opt(fs_info, FORCE_COMPRESS)) return 1; @@ -1596,7 +1617,8 @@ static int run_delalloc_range(void *private_data, struct page *locked_page, } else if (BTRFS_I(inode)->flags & BTRFS_INODE_PREALLOC && !force_cow) { ret = run_delalloc_nocow(inode, locked_page, start, end, page_started, 0, nr_written); - } else if (!inode_need_compress(inode, start, end)) { + } else if (!inode_can_compress(inode) || + !inode_need_compress(inode, start, end)) { ret = cow_file_range(inode, locked_page, start, end, end, page_started, nr_written, 1, NULL); } else { diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c index e46e83e876001c5f5547228afd6ebfc62373abd1..734866ab519413ad406e2f22df16900b8ff6384d 100644 --- a/fs/btrfs/qgroup.c +++ b/fs/btrfs/qgroup.c @@ -2249,6 +2249,7 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid, int ret = 0; int i; u64 *i_qgroups; + bool committing = false; struct btrfs_fs_info *fs_info = trans->fs_info; struct btrfs_root *quota_root; struct btrfs_qgroup *srcgroup; @@ -2256,7 +2257,25 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid, u32 level_size = 0; u64 nums; - mutex_lock(&fs_info->qgroup_ioctl_lock); + /* + * There are only two callers of this function. + * + * One in create_subvol() in the ioctl context, which needs to hold + * the qgroup_ioctl_lock. + * + * The other one in create_pending_snapshot() where no other qgroup + * code can modify the fs as they all need to either start a new trans + * or hold a trans handler, thus we don't need to hold + * qgroup_ioctl_lock. + * This would avoid long and complex lock chain and make lockdep happy. 
+ */ + spin_lock(&fs_info->trans_lock); + if (trans->transaction->state == TRANS_STATE_COMMIT_DOING) + committing = true; + spin_unlock(&fs_info->trans_lock); + + if (!committing) + mutex_lock(&fs_info->qgroup_ioctl_lock); if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) goto out; @@ -2420,7 +2439,8 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid, unlock: spin_unlock(&fs_info->qgroup_lock); out: - mutex_unlock(&fs_info->qgroup_ioctl_lock); + if (!committing) + mutex_unlock(&fs_info->qgroup_ioctl_lock); return ret; } diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index 258392b75048e5fa22723b7cd1f74c3cd0c00b5d..48ddbc187e58be308a9000a950a631fc72d0fa82 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -6272,68 +6272,21 @@ static int changed_extent(struct send_ctx *sctx, { int ret = 0; - if (sctx->cur_ino != sctx->cmp_key->objectid) { - - if (result == BTRFS_COMPARE_TREE_CHANGED) { - struct extent_buffer *leaf_l; - struct extent_buffer *leaf_r; - struct btrfs_file_extent_item *ei_l; - struct btrfs_file_extent_item *ei_r; - - leaf_l = sctx->left_path->nodes[0]; - leaf_r = sctx->right_path->nodes[0]; - ei_l = btrfs_item_ptr(leaf_l, - sctx->left_path->slots[0], - struct btrfs_file_extent_item); - ei_r = btrfs_item_ptr(leaf_r, - sctx->right_path->slots[0], - struct btrfs_file_extent_item); - - /* - * We may have found an extent item that has changed - * only its disk_bytenr field and the corresponding - * inode item was not updated. This case happens due to - * very specific timings during relocation when a leaf - * that contains file extent items is COWed while - * relocation is ongoing and its in the stage where it - * updates data pointers. So when this happens we can - * safely ignore it since we know it's the same extent, - * but just at different logical and physical locations - * (when an extent is fully replaced with a new one, we - * know the generation number must have changed too, - * since snapshot creation implies committing the current - * transaction, and the inode item must have been updated - * as well). - * This replacement of the disk_bytenr happens at - * relocation.c:replace_file_extents() through - * relocation.c:btrfs_reloc_cow_block(). - */ - if (btrfs_file_extent_generation(leaf_l, ei_l) == - btrfs_file_extent_generation(leaf_r, ei_r) && - btrfs_file_extent_ram_bytes(leaf_l, ei_l) == - btrfs_file_extent_ram_bytes(leaf_r, ei_r) && - btrfs_file_extent_compression(leaf_l, ei_l) == - btrfs_file_extent_compression(leaf_r, ei_r) && - btrfs_file_extent_encryption(leaf_l, ei_l) == - btrfs_file_extent_encryption(leaf_r, ei_r) && - btrfs_file_extent_other_encoding(leaf_l, ei_l) == - btrfs_file_extent_other_encoding(leaf_r, ei_r) && - btrfs_file_extent_type(leaf_l, ei_l) == - btrfs_file_extent_type(leaf_r, ei_r) && - btrfs_file_extent_disk_bytenr(leaf_l, ei_l) != - btrfs_file_extent_disk_bytenr(leaf_r, ei_r) && - btrfs_file_extent_disk_num_bytes(leaf_l, ei_l) == - btrfs_file_extent_disk_num_bytes(leaf_r, ei_r) && - btrfs_file_extent_offset(leaf_l, ei_l) == - btrfs_file_extent_offset(leaf_r, ei_r) && - btrfs_file_extent_num_bytes(leaf_l, ei_l) == - btrfs_file_extent_num_bytes(leaf_r, ei_r)) - return 0; - } - - inconsistent_snapshot_error(sctx, result, "extent"); - return -EIO; - } + /* + * We have found an extent item that changed without the inode item + * having changed. 
This can happen either after relocation (where the + * disk_bytenr of an extent item is replaced at + * relocation.c:replace_file_extents()) or after deduplication into a + * file in both the parent and send snapshots (where an extent item can + * get modified or replaced with a new one). Note that deduplication + * updates the inode item, but it only changes the iversion (sequence + * field in the inode item) of the inode, so if a file is deduplicated + * the same amount of times in both the parent and send snapshots, its + * iversion becames the same in both snapshots, whence the inode item is + * the same on both snapshots. + */ + if (sctx->cur_ino != sctx->cmp_key->objectid) + return 0; if (!sctx->cur_inode_new_gen && !sctx->cur_inode_deleted) { if (result != BTRFS_COMPARE_TREE_DELETED) diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c index bb8f6c020d227c0a66329188e08fb5425d1c86ba..26317bca56499a406492dd84e4a602e2ee00f1eb 100644 --- a/fs/btrfs/transaction.c +++ b/fs/btrfs/transaction.c @@ -28,15 +28,18 @@ static const unsigned int btrfs_blocked_trans_types[TRANS_STATE_MAX] = { [TRANS_STATE_COMMIT_START] = (__TRANS_START | __TRANS_ATTACH), [TRANS_STATE_COMMIT_DOING] = (__TRANS_START | __TRANS_ATTACH | - __TRANS_JOIN), + __TRANS_JOIN | + __TRANS_JOIN_NOSTART), [TRANS_STATE_UNBLOCKED] = (__TRANS_START | __TRANS_ATTACH | __TRANS_JOIN | - __TRANS_JOIN_NOLOCK), + __TRANS_JOIN_NOLOCK | + __TRANS_JOIN_NOSTART), [TRANS_STATE_COMPLETED] = (__TRANS_START | __TRANS_ATTACH | __TRANS_JOIN | - __TRANS_JOIN_NOLOCK), + __TRANS_JOIN_NOLOCK | + __TRANS_JOIN_NOSTART), }; void btrfs_put_transaction(struct btrfs_transaction *transaction) @@ -531,7 +534,8 @@ again: ret = join_transaction(fs_info, type); if (ret == -EBUSY) { wait_current_trans(fs_info); - if (unlikely(type == TRANS_ATTACH)) + if (unlikely(type == TRANS_ATTACH || + type == TRANS_JOIN_NOSTART)) ret = -ENOENT; } } while (ret == -EBUSY); @@ -647,6 +651,16 @@ struct btrfs_trans_handle *btrfs_join_transaction_nolock(struct btrfs_root *root BTRFS_RESERVE_NO_FLUSH, true); } +/* + * Similar to regular join but it never starts a transaction when none is + * running or after waiting for the current one to finish. + */ +struct btrfs_trans_handle *btrfs_join_transaction_nostart(struct btrfs_root *root) +{ + return start_transaction(root, 0, TRANS_JOIN_NOSTART, + BTRFS_RESERVE_NO_FLUSH, true); +} + /* * btrfs_attach_transaction() - catch the running transaction * @@ -2027,6 +2041,16 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans) } } else { spin_unlock(&fs_info->trans_lock); + /* + * The previous transaction was aborted and was already removed + * from the list of transactions at fs_info->trans_list. So we + * abort to prevent writing a new superblock that reflects a + * corrupt state (pointing to trees with unwritten nodes/leafs). 
+ */ + if (test_bit(BTRFS_FS_STATE_TRANS_ABORTED, &fs_info->fs_state)) { + ret = -EROFS; + goto cleanup_transaction; + } } extwriter_counter_dec(cur_trans, trans->type); diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h index 4cbb1b55387dc801894624f939804e253e17f414..c1d34cc70472223a81ca22de9bde27ccfc9276d5 100644 --- a/fs/btrfs/transaction.h +++ b/fs/btrfs/transaction.h @@ -97,11 +97,13 @@ struct btrfs_transaction { #define __TRANS_JOIN (1U << 11) #define __TRANS_JOIN_NOLOCK (1U << 12) #define __TRANS_DUMMY (1U << 13) +#define __TRANS_JOIN_NOSTART (1U << 14) #define TRANS_START (__TRANS_START | __TRANS_FREEZABLE) #define TRANS_ATTACH (__TRANS_ATTACH) #define TRANS_JOIN (__TRANS_JOIN | __TRANS_FREEZABLE) #define TRANS_JOIN_NOLOCK (__TRANS_JOIN_NOLOCK) +#define TRANS_JOIN_NOSTART (__TRANS_JOIN_NOSTART) #define TRANS_EXTWRITERS (__TRANS_START | __TRANS_ATTACH) @@ -187,6 +189,7 @@ struct btrfs_trans_handle *btrfs_start_transaction_fallback_global_rsv( int min_factor); struct btrfs_trans_handle *btrfs_join_transaction(struct btrfs_root *root); struct btrfs_trans_handle *btrfs_join_transaction_nolock(struct btrfs_root *root); +struct btrfs_trans_handle *btrfs_join_transaction_nostart(struct btrfs_root *root); struct btrfs_trans_handle *btrfs_attach_transaction(struct btrfs_root *root); struct btrfs_trans_handle *btrfs_attach_transaction_barrier( struct btrfs_root *root); diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c index 0d5840d20efcb435edfeaf30892984fba4feaf4f..08c5afa06aee0504c3666d8840908cc5aa08f00e 100644 --- a/fs/btrfs/tree-log.c +++ b/fs/btrfs/tree-log.c @@ -3262,6 +3262,30 @@ int btrfs_free_log_root_tree(struct btrfs_trans_handle *trans, return 0; } +/* + * Check if an inode was logged in the current transaction. We can't always rely + * on an inode's logged_trans value, because it's an in-memory only field and + * therefore not persisted. This means that its value is lost if the inode gets + * evicted and loaded again from disk (in which case it has a value of 0, and + * certainly it is smaller then any possible transaction ID), when that happens + * the full_sync flag is set in the inode's runtime flags, so on that case we + * assume eviction happened and ignore the logged_trans value, assuming the + * worst case, that the inode was logged before in the current transaction. + */ +static bool inode_logged(struct btrfs_trans_handle *trans, + struct btrfs_inode *inode) +{ + if (inode->logged_trans == trans->transid) + return true; + + if (inode->last_trans == trans->transid && + test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags) && + !test_bit(BTRFS_FS_LOG_RECOVERING, &trans->fs_info->flags)) + return true; + + return false; +} + /* * If both a file and directory are logged, and unlinks or renames are * mixed in, we have a few interesting corners: @@ -3296,7 +3320,7 @@ int btrfs_del_dir_entries_in_log(struct btrfs_trans_handle *trans, int bytes_del = 0; u64 dir_ino = btrfs_ino(dir); - if (dir->logged_trans < trans->transid) + if (!inode_logged(trans, dir)) return 0; ret = join_running_log_trans(root); @@ -3401,7 +3425,7 @@ int btrfs_del_inode_ref_in_log(struct btrfs_trans_handle *trans, u64 index; int ret; - if (inode->logged_trans < trans->transid) + if (!inode_logged(trans, inode)) return 0; ret = join_running_log_trans(root); @@ -5250,9 +5274,19 @@ log_extents: } } + /* + * Don't update last_log_commit if we logged that an inode exists after + * it was loaded to memory (full_sync bit set). 
+ * This is to prevent data loss when we do a write to the inode, then + * the inode gets evicted after all delalloc was flushed, then we log + * it exists (due to a rename for example) and then fsync it. This last + * fsync would do nothing (not logging the extents previously written). + */ spin_lock(&inode->lock); inode->logged_trans = trans->transid; - inode->last_log_commit = inode->last_sub_trans; + if (inode_only != LOG_INODE_EXISTS || + !test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags)) + inode->last_log_commit = inode->last_sub_trans; spin_unlock(&inode->lock); out_unlock: mutex_unlock(&inode->log_mutex); diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index 2fd000308be763958aab22c11a12bd7995f771e7..6e008bd5c8cd16215eae378ecf15e399683cefbc 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -5040,8 +5040,7 @@ static inline int btrfs_chunk_max_errors(struct map_lookup *map) if (map->type & (BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_RAID10 | - BTRFS_BLOCK_GROUP_RAID5 | - BTRFS_BLOCK_GROUP_DUP)) { + BTRFS_BLOCK_GROUP_RAID5)) { max_errors = 1; } else if (map->type & BTRFS_BLOCK_GROUP_RAID6) { max_errors = 2; diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c index 9c332a6f66678a04cfe4b6cf05a33253c85f30f5..476728bdae8c668d3e3eb3eed90f3a892edae9bf 100644 --- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -913,8 +913,9 @@ get_more_pages: if (page_offset(page) >= ceph_wbc.i_size) { dout("%p page eof %llu\n", page, ceph_wbc.i_size); - if (ceph_wbc.size_stable || - page_offset(page) >= i_size_read(inode)) + if ((ceph_wbc.size_stable || + page_offset(page) >= i_size_read(inode)) && + clear_page_dirty_for_io(page)) mapping->a_ops->invalidatepage(page, 0, PAGE_SIZE); unlock_page(page); diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c index c7542e8dd096c52ff4c6f0111530a0ad45adb99a..a11fa0b6b34d5e92e68c8ca6929b9b8a7907e17a 100644 --- a/fs/ceph/caps.c +++ b/fs/ceph/caps.c @@ -1237,20 +1237,23 @@ static int send_cap_msg(struct cap_msg_args *arg) } /* - * Queue cap releases when an inode is dropped from our cache. Since - * inode is about to be destroyed, there is no need for i_ceph_lock. + * Queue cap releases when an inode is dropped from our cache. */ void ceph_queue_caps_release(struct inode *inode) { struct ceph_inode_info *ci = ceph_inode(inode); struct rb_node *p; + /* lock i_ceph_lock, because ceph_d_revalidate(..., LOOKUP_RCU) + * may call __ceph_caps_issued_mask() on a freeing inode. 
*/ + spin_lock(&ci->i_ceph_lock); p = rb_first(&ci->i_caps); while (p) { struct ceph_cap *cap = rb_entry(p, struct ceph_cap, ci_node); p = rb_next(p); __ceph_remove_cap(cap, true); } + spin_unlock(&ci->i_ceph_lock); } /* diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c index 9dae2ec7e1fa89705f649b1c3349520b8bb23543..6a8f4a99582e57d7f1d08a97d81f49808176fcc3 100644 --- a/fs/ceph/locks.c +++ b/fs/ceph/locks.c @@ -111,8 +111,7 @@ static int ceph_lock_message(u8 lock_type, u16 operation, struct inode *inode, req->r_wait_for_completion = ceph_lock_wait_for_completion; err = ceph_mdsc_do_request(mdsc, inode, req); - - if (operation == CEPH_MDS_OP_GETFILELOCK) { + if (!err && operation == CEPH_MDS_OP_GETFILELOCK) { fl->fl_pid = -le64_to_cpu(req->r_reply_info.filelock_reply->pid); if (CEPH_LOCK_SHARED == req->r_reply_info.filelock_reply->type) fl->fl_type = F_RDLCK; diff --git a/fs/ceph/super.h b/fs/ceph/super.h index 582e28fd1b7bf1c6bda93b4796eb92cc4883fdd2..d8579a56e5dc2f1b335ab2f2c606183ff03bc31f 100644 --- a/fs/ceph/super.h +++ b/fs/ceph/super.h @@ -526,7 +526,12 @@ static inline void __ceph_dir_set_complete(struct ceph_inode_info *ci, long long release_count, long long ordered_count) { - smp_mb__before_atomic(); + /* + * Makes sure operations that setup readdir cache (update page + * cache and i_size) are strongly ordered w.r.t. the following + * atomic64_set() operations. + */ + smp_mb(); atomic64_set(&ci->i_complete_seq[0], release_count); atomic64_set(&ci->i_complete_seq[1], ordered_count); } diff --git a/fs/ceph/xattr.c b/fs/ceph/xattr.c index 5cc8b94f8206972ce6ce988b92b5103504b335b1..0a2d4898ee163321d11bd1a31f8efa3500756476 100644 --- a/fs/ceph/xattr.c +++ b/fs/ceph/xattr.c @@ -79,7 +79,7 @@ static size_t ceph_vxattrcb_layout(struct ceph_inode_info *ci, char *val, const char *ns_field = " pool_namespace="; char buf[128]; size_t len, total_len = 0; - int ret; + ssize_t ret; pool_ns = ceph_try_get_string(ci->i_layout.pool_ns); @@ -103,11 +103,8 @@ static size_t ceph_vxattrcb_layout(struct ceph_inode_info *ci, char *val, if (pool_ns) total_len += strlen(ns_field) + pool_ns->len; - if (!size) { - ret = total_len; - } else if (total_len > size) { - ret = -ERANGE; - } else { + ret = total_len; + if (size >= total_len) { memcpy(val, buf, len); ret = len; if (pool_name) { @@ -817,8 +814,11 @@ ssize_t __ceph_getxattr(struct inode *inode, const char *name, void *value, if (err) return err; err = -ENODATA; - if (!(vxattr->exists_cb && !vxattr->exists_cb(ci))) + if (!(vxattr->exists_cb && !vxattr->exists_cb(ci))) { err = vxattr->getxattr_cb(ci, value, size); + if (size && size < err) + err = -ERANGE; + } return err; } diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c index f31339db45fdb1603dae12137bd322e45e8386a8..c53a2e86ed544b968deef2920060b7c5af4d2702 100644 --- a/fs/cifs/connect.c +++ b/fs/cifs/connect.c @@ -563,10 +563,10 @@ static bool server_unresponsive(struct TCP_Server_Info *server) { /* - * We need to wait 2 echo intervals to make sure we handle such + * We need to wait 3 echo intervals to make sure we handle such * situations right: * 1s client sends a normal SMB request - * 2s client gets a response + * 3s client gets a response * 30s echo workqueue job pops, and decides we got a response recently * and don't need to send another * ... 
@@ -575,9 +575,9 @@ server_unresponsive(struct TCP_Server_Info *server) */ if ((server->tcpStatus == CifsGood || server->tcpStatus == CifsNeedNegotiate) && - time_after(jiffies, server->lstrp + 2 * server->echo_interval)) { + time_after(jiffies, server->lstrp + 3 * server->echo_interval)) { cifs_dbg(VFS, "Server %s has not responded in %lu seconds. Reconnecting...\n", - server->hostname, (2 * server->echo_interval) / HZ); + server->hostname, (3 * server->echo_interval) / HZ); cifs_reconnect(server); wake_up(&server->response_q); return true; diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c index 0ccf8f9b63a2e78336a31e4b8b8b0ec931b17c1c..cc9e846a38658b997453bbe0441569b08b8fb966 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -2545,7 +2545,15 @@ fill_transform_hdr(struct smb2_transform_hdr *tr_hdr, unsigned int orig_len, static inline void smb2_sg_set_buf(struct scatterlist *sg, const void *buf, unsigned int buflen) { - sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf)); + void *addr; + /* + * VMAP_STACK (at least) puts stack into the vmalloc address space + */ + if (is_vmalloc_addr(buf)) + addr = vmalloc_to_page(buf); + else + addr = virt_to_page(buf); + sg_set_page(sg, addr, buflen, offset_in_page(buf)); } /* Assumes the first rqst has a transform header as the first iov. @@ -3121,7 +3129,6 @@ receive_encrypted_standard(struct TCP_Server_Info *server, { int ret, length; char *buf = server->smallbuf; - char *tmpbuf; struct smb2_sync_hdr *shdr; unsigned int pdu_length = server->pdu_size; unsigned int buf_size; @@ -3151,18 +3158,15 @@ receive_encrypted_standard(struct TCP_Server_Info *server, return length; next_is_large = server->large_buf; - one_more: +one_more: shdr = (struct smb2_sync_hdr *)buf; if (shdr->NextCommand) { - if (next_is_large) { - tmpbuf = server->bigbuf; + if (next_is_large) next_buffer = (char *)cifs_buf_get(); - } else { - tmpbuf = server->smallbuf; + else next_buffer = (char *)cifs_small_buf_get(); - } memcpy(next_buffer, - tmpbuf + le32_to_cpu(shdr->NextCommand), + buf + le32_to_cpu(shdr->NextCommand), pdu_length - le32_to_cpu(shdr->NextCommand)); } @@ -3191,12 +3195,21 @@ receive_encrypted_standard(struct TCP_Server_Info *server, pdu_length -= le32_to_cpu(shdr->NextCommand); server->large_buf = next_is_large; if (next_is_large) - server->bigbuf = next_buffer; + server->bigbuf = buf = next_buffer; else - server->smallbuf = next_buffer; - - buf += le32_to_cpu(shdr->NextCommand); + server->smallbuf = buf = next_buffer; goto one_more; + } else if (ret != 0) { + /* + * ret != 0 here means that we didn't get to handle_mid() thus + * server->smallbuf and server->bigbuf are still valid. We need + * to free next_buffer because it is not going to be used + * anywhere. 
+ */ + if (next_is_large) + free_rsp_buf(CIFS_LARGE_BUFFER, next_buffer); + else + free_rsp_buf(CIFS_SMALL_BUFFER, next_buffer); } return ret; diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c index c181f1621e1af09c62685ea023e2f57935fd2293..2bc47eb6215e2a526073e209137869130abb8bdd 100644 --- a/fs/cifs/smb2pdu.c +++ b/fs/cifs/smb2pdu.c @@ -168,7 +168,7 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon) if (tcon == NULL) return 0; - if (smb2_command == SMB2_TREE_CONNECT) + if (smb2_command == SMB2_TREE_CONNECT || smb2_command == SMB2_IOCTL) return 0; if (tcon->tidStatus == CifsExiting) { @@ -1006,7 +1006,12 @@ SMB2_sess_alloc_buffer(struct SMB2_sess_data *sess_data) else req->SecurityMode = 0; +#ifdef CONFIG_CIFS_DFS_UPCALL + req->Capabilities = cpu_to_le32(SMB2_GLOBAL_CAP_DFS); +#else req->Capabilities = 0; +#endif /* DFS_UPCALL */ + req->Channel = 0; /* MBZ */ sess_data->iov[0].iov_base = (char *)req; diff --git a/fs/coda/file.c b/fs/coda/file.c index 1cbc1f2298ee480f283320fe2b9d6ca24ad3f9de..43d371551d2b1f571e0a1a87f9ac340d45e0d189 100644 --- a/fs/coda/file.c +++ b/fs/coda/file.c @@ -27,6 +27,13 @@ #include "coda_linux.h" #include "coda_int.h" +struct coda_vm_ops { + atomic_t refcnt; + struct file *coda_file; + const struct vm_operations_struct *host_vm_ops; + struct vm_operations_struct vm_ops; +}; + static ssize_t coda_file_read_iter(struct kiocb *iocb, struct iov_iter *to) { @@ -61,6 +68,34 @@ coda_file_write_iter(struct kiocb *iocb, struct iov_iter *to) return ret; } +static void +coda_vm_open(struct vm_area_struct *vma) +{ + struct coda_vm_ops *cvm_ops = + container_of(vma->vm_ops, struct coda_vm_ops, vm_ops); + + atomic_inc(&cvm_ops->refcnt); + + if (cvm_ops->host_vm_ops && cvm_ops->host_vm_ops->open) + cvm_ops->host_vm_ops->open(vma); +} + +static void +coda_vm_close(struct vm_area_struct *vma) +{ + struct coda_vm_ops *cvm_ops = + container_of(vma->vm_ops, struct coda_vm_ops, vm_ops); + + if (cvm_ops->host_vm_ops && cvm_ops->host_vm_ops->close) + cvm_ops->host_vm_ops->close(vma); + + if (atomic_dec_and_test(&cvm_ops->refcnt)) { + vma->vm_ops = cvm_ops->host_vm_ops; + fput(cvm_ops->coda_file); + kfree(cvm_ops); + } +} + static int coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) { @@ -68,6 +103,8 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) struct coda_inode_info *cii; struct file *host_file; struct inode *coda_inode, *host_inode; + struct coda_vm_ops *cvm_ops; + int ret; cfi = CODA_FTOC(coda_file); BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); @@ -76,6 +113,13 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) if (!host_file->f_op->mmap) return -ENODEV; + if (WARN_ON(coda_file != vma->vm_file)) + return -EIO; + + cvm_ops = kmalloc(sizeof(struct coda_vm_ops), GFP_KERNEL); + if (!cvm_ops) + return -ENOMEM; + coda_inode = file_inode(coda_file); host_inode = file_inode(host_file); @@ -89,6 +133,7 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) * the container file on us! */ else if (coda_inode->i_mapping != host_inode->i_mapping) { spin_unlock(&cii->c_lock); + kfree(cvm_ops); return -EBUSY; } @@ -97,7 +142,29 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) cfi->cfi_mapcount++; spin_unlock(&cii->c_lock); - return call_mmap(host_file, vma); + vma->vm_file = get_file(host_file); + ret = call_mmap(vma->vm_file, vma); + + if (ret) { + /* if call_mmap fails, our caller will put coda_file so we + * should drop the reference to the host_file that we got. 
+ */ + fput(host_file); + kfree(cvm_ops); + } else { + /* here we add redirects for the open/close vm_operations */ + cvm_ops->host_vm_ops = vma->vm_ops; + if (vma->vm_ops) + cvm_ops->vm_ops = *vma->vm_ops; + + cvm_ops->vm_ops.open = coda_vm_open; + cvm_ops->vm_ops.close = coda_vm_close; + cvm_ops->coda_file = coda_file; + atomic_set(&cvm_ops->refcnt, 1); + + vma->vm_ops = &cvm_ops->vm_ops; + } + return ret; } int coda_open(struct inode *coda_inode, struct file *coda_file) @@ -207,4 +274,3 @@ const struct file_operations coda_file_operations = { .fsync = coda_fsync, .splice_read = generic_file_splice_read, }; - diff --git a/fs/coda/psdev.c b/fs/coda/psdev.c index c5234c21b539405a452417ce4d9cb9e977186637..55824cba324538aad639b6eda68863ad115054c6 100644 --- a/fs/coda/psdev.c +++ b/fs/coda/psdev.c @@ -187,8 +187,11 @@ static ssize_t coda_psdev_write(struct file *file, const char __user *buf, if (req->uc_opcode == CODA_OPEN_BY_FD) { struct coda_open_by_fd_out *outp = (struct coda_open_by_fd_out *)req->uc_data; - if (!outp->oh.result) + if (!outp->oh.result) { outp->fh = fget(outp->fd); + if (!outp->fh) + return -EBADF; + } } wake_up(&req->uc_sleep); diff --git a/fs/compat_ioctl.c b/fs/compat_ioctl.c index a9b00942e87d767fac5bea23999d954e29b0440f..8f08095ee54e98961a04980db41aebf9122514a5 100644 --- a/fs/compat_ioctl.c +++ b/fs/compat_ioctl.c @@ -894,9 +894,6 @@ COMPATIBLE_IOCTL(PPPIOCDISCONN) COMPATIBLE_IOCTL(PPPIOCATTCHAN) COMPATIBLE_IOCTL(PPPIOCGCHAN) COMPATIBLE_IOCTL(PPPIOCGL2TPSTATS) -/* PPPOX */ -COMPATIBLE_IOCTL(PPPOEIOCSFWD) -COMPATIBLE_IOCTL(PPPOEIOCDFWD) /* Big A */ /* sparc only */ /* Big Q for sound/OSS */ diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c index 0f46cf550907fcc4a0675b85df9e3331f010e744..c83ddff3ff4ac4a647fb1f4c29ee34dc8f5fb8c1 100644 --- a/fs/crypto/crypto.c +++ b/fs/crypto/crypto.c @@ -149,7 +149,10 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw, struct crypto_skcipher *tfm = ci->ci_ctfm; int res = 0; - BUG_ON(len == 0); + if (WARN_ON_ONCE(len <= 0)) + return -EINVAL; + if (WARN_ON_ONCE(len % FS_CRYPTO_BLOCK_SIZE != 0)) + return -EINVAL; BUILD_BUG_ON(sizeof(iv) != FS_IV_SIZE); BUILD_BUG_ON(AES_BLOCK_SIZE != FS_IV_SIZE); @@ -241,8 +244,6 @@ struct page *fscrypt_encrypt_page(const struct inode *inode, struct page *ciphertext_page = page; int err; - BUG_ON(len % FS_CRYPTO_BLOCK_SIZE != 0); - if (inode->i_sb->s_cop->flags & FS_CFLG_OWN_PAGES) { /* with inplace-encryption we just encrypt the page */ err = fscrypt_do_page_crypto(inode, FS_ENCRYPT, lblk_num, page, @@ -254,7 +255,8 @@ struct page *fscrypt_encrypt_page(const struct inode *inode, return ciphertext_page; } - BUG_ON(!PageLocked(page)); + if (WARN_ON_ONCE(!PageLocked(page))) + return ERR_PTR(-EINVAL); ctx = fscrypt_get_ctx(inode, gfp_flags); if (IS_ERR(ctx)) @@ -302,8 +304,9 @@ EXPORT_SYMBOL(fscrypt_encrypt_page); int fscrypt_decrypt_page(const struct inode *inode, struct page *page, unsigned int len, unsigned int offs, u64 lblk_num) { - if (!(inode->i_sb->s_cop->flags & FS_CFLG_OWN_PAGES)) - BUG_ON(!PageLocked(page)); + if (WARN_ON_ONCE(!PageLocked(page) && + !(inode->i_sb->s_cop->flags & FS_CFLG_OWN_PAGES))) + return -EINVAL; return fscrypt_do_page_crypto(inode, FS_DECRYPT, lblk_num, page, page, len, offs, GFP_NOFS); diff --git a/fs/dax.c b/fs/dax.c index 75a289c31c7e5dabb4e39c53b2b3cdd26ea7931f..f0d932fa39c20db0367e7db9129d520719fd2660 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -659,7 +659,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping) * guaranteed to 
either see new references or prevent new * references from being established. */ - unmap_mapping_range(mapping, 0, 0, 1); + unmap_mapping_range(mapping, 0, 0, 0); while (index < end && pagevec_lookup_entries(&pvec, mapping, index, min(end - index, (pgoff_t)PAGEVEC_SIZE), diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c index a5e4a221435c04bdf97e1219ca3bd022ca864e6c..a93ebffe84b38e0700324d5eb39b5e5babedf8c4 100644 --- a/fs/dlm/lowcomms.c +++ b/fs/dlm/lowcomms.c @@ -1630,8 +1630,10 @@ static void clean_writequeues(void) static void work_stop(void) { - destroy_workqueue(recv_workqueue); - destroy_workqueue(send_workqueue); + if (recv_workqueue) + destroy_workqueue(recv_workqueue); + if (send_workqueue) + destroy_workqueue(send_workqueue); } static int work_start(void) @@ -1691,13 +1693,17 @@ static void work_flush(void) struct hlist_node *n; struct connection *con; - flush_workqueue(recv_workqueue); - flush_workqueue(send_workqueue); + if (recv_workqueue) + flush_workqueue(recv_workqueue); + if (send_workqueue) + flush_workqueue(send_workqueue); do { ok = 1; foreach_conn(stop_conn); - flush_workqueue(recv_workqueue); - flush_workqueue(send_workqueue); + if (recv_workqueue) + flush_workqueue(recv_workqueue); + if (send_workqueue) + flush_workqueue(send_workqueue); for (i = 0; i < CONN_HASH_SIZE && ok; i++) { hlist_for_each_entry_safe(con, n, &connection_hash[i], list) { diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c index 4dd842f728465591cc7982d635f544c0084b064a..708f931c36f14adace91af82229a6043386c9baf 100644 --- a/fs/ecryptfs/crypto.c +++ b/fs/ecryptfs/crypto.c @@ -1018,8 +1018,10 @@ int ecryptfs_read_and_validate_header_region(struct inode *inode) rc = ecryptfs_read_lower(file_size, 0, ECRYPTFS_SIZE_AND_MARKER_BYTES, inode); - if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES) - return rc >= 0 ? -EINVAL : rc; + if (rc < 0) + return rc; + else if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES) + return -EINVAL; rc = ecryptfs_validate_marker(marker); if (!rc) ecryptfs_i_size_init(file_size, inode); @@ -1381,8 +1383,10 @@ int ecryptfs_read_and_validate_xattr_region(struct dentry *dentry, ecryptfs_inode_to_lower(inode), ECRYPTFS_XATTR_NAME, file_size, ECRYPTFS_SIZE_AND_MARKER_BYTES); - if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES) - return rc >= 0 ? 
-EINVAL : rc; + if (rc < 0) + return rc; + else if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES) + return -EINVAL; rc = ecryptfs_validate_marker(marker); if (!rc) ecryptfs_i_size_init(file_size, inode); diff --git a/fs/exec.c b/fs/exec.c index 352c1a6fa6a97d666bdc48e8d383408afcc7cec9..0d95c6349fb1be8081c781f8c2f8736adad0cac0 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -1828,7 +1828,7 @@ static int __do_execve_file(int fd, struct filename *filename, membarrier_execve(current); rseq_execve(current); acct_update_integrals(current); - task_numa_free(current); + task_numa_free(current, false); free_bprm(bprm); kfree(pathbuf); if (filename) diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c index f93f9881ec184c351e69af715a849d1a01562b95..46d5c40f2835fec84ec26620113231fb619644b7 100644 --- a/fs/ext4/dir.c +++ b/fs/ext4/dir.c @@ -108,7 +108,6 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx) struct inode *inode = file_inode(file); struct super_block *sb = inode->i_sb; struct buffer_head *bh = NULL; - int dir_has_error = 0; struct fscrypt_str fstr = FSTR_INIT(NULL, 0); if (ext4_encrypted_inode(inode)) { @@ -144,8 +143,6 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx) return err; } - offset = ctx->pos & (sb->s_blocksize - 1); - while (ctx->pos < inode->i_size) { struct ext4_map_blocks map; @@ -154,9 +151,18 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx) goto errout; } cond_resched(); + offset = ctx->pos & (sb->s_blocksize - 1); map.m_lblk = ctx->pos >> EXT4_BLOCK_SIZE_BITS(sb); map.m_len = 1; err = ext4_map_blocks(NULL, inode, &map, 0); + if (err == 0) { + /* m_len should never be zero but let's avoid + * an infinite loop if it somehow is */ + if (map.m_len == 0) + map.m_len = 1; + ctx->pos += map.m_len * sb->s_blocksize; + continue; + } if (err > 0) { pgoff_t index = map.m_pblk >> (PAGE_SHIFT - inode->i_blkbits); @@ -175,13 +181,6 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx) } if (!bh) { - if (!dir_has_error) { - EXT4_ERROR_FILE(file, 0, - "directory contains a " - "hole at offset %llu", - (unsigned long long) ctx->pos); - dir_has_error = 1; - } /* corrupt size? 
Maybe no more blocks to read */ if (ctx->pos > inode->i_blocks << 9) break; diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h index df908ef79ccea1ac594d26f8489799d9912afdb1..402dc366117eaa0275a2f9ba902ca2b93be3f08d 100644 --- a/fs/ext4/ext4_jbd2.h +++ b/fs/ext4/ext4_jbd2.h @@ -361,20 +361,20 @@ static inline int ext4_journal_force_commit(journal_t *journal) } static inline int ext4_jbd2_inode_add_write(handle_t *handle, - struct inode *inode) + struct inode *inode, loff_t start_byte, loff_t length) { if (ext4_handle_valid(handle)) - return jbd2_journal_inode_add_write(handle, - EXT4_I(inode)->jinode); + return jbd2_journal_inode_ranged_write(handle, + EXT4_I(inode)->jinode, start_byte, length); return 0; } static inline int ext4_jbd2_inode_add_wait(handle_t *handle, - struct inode *inode) + struct inode *inode, loff_t start_byte, loff_t length) { if (ext4_handle_valid(handle)) - return jbd2_journal_inode_add_wait(handle, - EXT4_I(inode)->jinode); + return jbd2_journal_inode_ranged_wait(handle, + EXT4_I(inode)->jinode, start_byte, length); return 0; } diff --git a/fs/ext4/file.c b/fs/ext4/file.c index 2c5baa5e8291165e07d5609b650c38c13eda587f..f4a24a46245eacebf90e771bf34defc8019bc792 100644 --- a/fs/ext4/file.c +++ b/fs/ext4/file.c @@ -165,6 +165,10 @@ static ssize_t ext4_write_checks(struct kiocb *iocb, struct iov_iter *from) ret = generic_write_checks(iocb, from); if (ret <= 0) return ret; + + if (unlikely(IS_IMMUTABLE(inode))) + return -EPERM; + /* * If we have encountered a bitmap-format file, the size limit * is smaller than s_maxbytes, which is for extent-mapped files. diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 05dc5a4ba481b85677e9f6ca3cb8da19a7f6e46c..e65559bf77281f8b56e6f69e00a0f196b371b071 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -729,10 +729,16 @@ out_sem: !(flags & EXT4_GET_BLOCKS_ZERO) && !ext4_is_quota_file(inode) && ext4_should_order_data(inode)) { + loff_t start_byte = + (loff_t)map->m_lblk << inode->i_blkbits; + loff_t length = (loff_t)map->m_len << inode->i_blkbits; + if (flags & EXT4_GET_BLOCKS_IO_SUBMIT) - ret = ext4_jbd2_inode_add_wait(handle, inode); + ret = ext4_jbd2_inode_add_wait(handle, inode, + start_byte, length); else - ret = ext4_jbd2_inode_add_write(handle, inode); + ret = ext4_jbd2_inode_add_write(handle, inode, + start_byte, length); if (ret) return ret; } @@ -4058,7 +4064,8 @@ static int __ext4_block_zero_page_range(handle_t *handle, err = 0; mark_buffer_dirty(bh); if (ext4_should_order_data(inode)) - err = ext4_jbd2_inode_add_write(handle, inode); + err = ext4_jbd2_inode_add_write(handle, inode, from, + length); } unlock: @@ -5491,6 +5498,14 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr) if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) return -EIO; + if (unlikely(IS_IMMUTABLE(inode))) + return -EPERM; + + if (unlikely(IS_APPEND(inode) && + (ia_valid & (ATTR_MODE | ATTR_UID | + ATTR_GID | ATTR_TIMES_SET)))) + return -EPERM; + error = setattr_prepare(dentry, attr); if (error) return error; @@ -6190,6 +6205,9 @@ int ext4_page_mkwrite(struct vm_fault *vmf) get_block_t *get_block; int retries = 0; + if (unlikely(IS_IMMUTABLE(inode))) + return VM_FAULT_SIGBUS; + sb_start_pagefault(inode->i_sb); file_update_time(vma->vm_file); diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c index 53d57cdf3c4d8876f10095b703f80f799aece343..abb6fcff0a1d3b8bd7b2c1a4cc9c69cc9f500d3a 100644 --- a/fs/ext4/ioctl.c +++ b/fs/ext4/ioctl.c @@ -268,6 +268,29 @@ static int uuid_is_zero(__u8 u[16]) } #endif +/* + * If immutable is set and we are 
not clearing it, we're not allowed to change + * anything else in the inode. Don't error out if we're only trying to set + * immutable on an immutable file. + */ +static int ext4_ioctl_check_immutable(struct inode *inode, __u32 new_projid, + unsigned int flags) +{ + struct ext4_inode_info *ei = EXT4_I(inode); + unsigned int oldflags = ei->i_flags; + + if (!(oldflags & EXT4_IMMUTABLE_FL) || !(flags & EXT4_IMMUTABLE_FL)) + return 0; + + if ((oldflags & ~EXT4_IMMUTABLE_FL) != (flags & ~EXT4_IMMUTABLE_FL)) + return -EPERM; + if (ext4_has_feature_project(inode->i_sb) && + __kprojid_val(ei->i_projid) != new_projid) + return -EPERM; + + return 0; +} + static int ext4_ioctl_setflags(struct inode *inode, unsigned int flags) { @@ -321,6 +344,20 @@ static int ext4_ioctl_setflags(struct inode *inode, goto flags_out; } + /* + * Wait for all pending directio and then flush all the dirty pages + * for this file. The flush marks all the pages readonly, so any + * subsequent attempt to write to the file (particularly mmap pages) + * will come through the filesystem and fail. + */ + if (S_ISREG(inode->i_mode) && !IS_IMMUTABLE(inode) && + (flags & EXT4_IMMUTABLE_FL)) { + inode_dio_wait(inode); + err = filemap_write_and_wait(inode->i_mapping); + if (err) + goto flags_out; + } + handle = ext4_journal_start(inode, EXT4_HT_INODE, 1); if (IS_ERR(handle)) { err = PTR_ERR(handle); @@ -750,7 +787,11 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) return err; inode_lock(inode); - err = ext4_ioctl_setflags(inode, flags); + err = ext4_ioctl_check_immutable(inode, + from_kprojid(&init_user_ns, ei->i_projid), + flags); + if (!err) + err = ext4_ioctl_setflags(inode, flags); inode_unlock(inode); mnt_drop_write_file(filp); return err; @@ -1120,6 +1161,9 @@ resizefs_out: goto out; flags = (ei->i_flags & ~EXT4_FL_XFLAG_VISIBLE) | (flags & EXT4_FL_XFLAG_VISIBLE); + err = ext4_ioctl_check_immutable(inode, fa.fsx_projid, flags); + if (err) + goto out; err = ext4_ioctl_setflags(inode, flags); if (err) goto out; diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c index 2f5be02fc6f6a2194f45adb5429f658c2d0a184a..287631bb09e753ae8c6a008742279bce568dd553 100644 --- a/fs/ext4/move_extent.c +++ b/fs/ext4/move_extent.c @@ -390,7 +390,8 @@ data_copy: /* Even in case of data=writeback it is reasonable to pin * inode to transaction, to prevent unexpected data loss */ - *err = ext4_jbd2_inode_add_write(handle, orig_inode); + *err = ext4_jbd2_inode_add_write(handle, orig_inode, + (loff_t)orig_page_offset << PAGE_SHIFT, replaced_size); unlock_pages: unlock_page(pagep[0]); diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index 4c5aa5df657314f763a659ea5284d7c2a51b9df3..61dc1b0e4465d3e44e3dcf61190f7f1a747320e7 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -81,8 +81,18 @@ static struct buffer_head *ext4_append(handle_t *handle, static int ext4_dx_csum_verify(struct inode *inode, struct ext4_dir_entry *dirent); +/* + * Hints to ext4_read_dirblock regarding whether we expect a directory + * block being read to be an index block, or a block containing + * directory entries (and if the latter, whether it was found via a + * logical block in an htree index block). This is used to control + * what sort of sanity checkinig ext4_read_dirblock() will do on the + * directory block read from the storage device. EITHER will means + * the caller doesn't know what kind of directory block will be read, + * so no specific verification will be done. 
+ */ typedef enum { - EITHER, INDEX, DIRENT + EITHER, INDEX, DIRENT, DIRENT_HTREE } dirblock_type_t; #define ext4_read_dirblock(inode, block, type) \ @@ -108,11 +118,14 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode, return bh; } - if (!bh) { + if (!bh && (type == INDEX || type == DIRENT_HTREE)) { ext4_error_inode(inode, func, line, block, - "Directory hole found"); + "Directory hole found for htree %s block", + (type == INDEX) ? "index" : "leaf"); return ERR_PTR(-EFSCORRUPTED); } + if (!bh) + return NULL; dirent = (struct ext4_dir_entry *) bh->b_data; /* Determine whether or not we have an index block */ if (is_dx(inode)) { @@ -979,7 +992,7 @@ static int htree_dirblock_to_tree(struct file *dir_file, dxtrace(printk(KERN_INFO "In htree dirblock_to_tree: block %lu\n", (unsigned long)block)); - bh = ext4_read_dirblock(dir, block, DIRENT); + bh = ext4_read_dirblock(dir, block, DIRENT_HTREE); if (IS_ERR(bh)) return PTR_ERR(bh); @@ -1509,7 +1522,7 @@ static struct buffer_head * ext4_dx_find_entry(struct inode *dir, return (struct buffer_head *) frame; do { block = dx_get_block(frame->at); - bh = ext4_read_dirblock(dir, block, DIRENT); + bh = ext4_read_dirblock(dir, block, DIRENT_HTREE); if (IS_ERR(bh)) goto errout; @@ -2079,6 +2092,11 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry, blocks = dir->i_size >> sb->s_blocksize_bits; for (block = 0; block < blocks; block++) { bh = ext4_read_dirblock(dir, block, DIRENT); + if (bh == NULL) { + bh = ext4_bread(handle, dir, block, + EXT4_GET_BLOCKS_CREATE); + goto add_to_new_block; + } if (IS_ERR(bh)) { retval = PTR_ERR(bh); bh = NULL; @@ -2099,6 +2117,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry, brelse(bh); } bh = ext4_append(handle, dir, &block); +add_to_new_block: if (IS_ERR(bh)) { retval = PTR_ERR(bh); bh = NULL; @@ -2143,7 +2162,7 @@ again: return PTR_ERR(frame); entries = frame->entries; at = frame->at; - bh = ext4_read_dirblock(dir, dx_get_block(frame->at), DIRENT); + bh = ext4_read_dirblock(dir, dx_get_block(frame->at), DIRENT_HTREE); if (IS_ERR(bh)) { err = PTR_ERR(bh); bh = NULL; @@ -2691,7 +2710,10 @@ bool ext4_empty_dir(struct inode *inode) EXT4_ERROR_INODE(inode, "invalid size"); return true; } - bh = ext4_read_dirblock(inode, 0, EITHER); + /* The first directory block must not be a hole, + * so treat it as DIRENT_HTREE + */ + bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE); if (IS_ERR(bh)) return true; @@ -2713,6 +2735,10 @@ bool ext4_empty_dir(struct inode *inode) brelse(bh); lblock = offset >> EXT4_BLOCK_SIZE_BITS(sb); bh = ext4_read_dirblock(inode, lblock, EITHER); + if (bh == NULL) { + offset += sb->s_blocksize; + continue; + } if (IS_ERR(bh)) return true; de = (struct ext4_dir_entry_2 *) bh->b_data; @@ -3256,7 +3282,10 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle, struct buffer_head *bh; if (!ext4_has_inline_data(inode)) { - bh = ext4_read_dirblock(inode, 0, EITHER); + /* The first directory block must not be a hole, so + * treat it as DIRENT_HTREE + */ + bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE); if (IS_ERR(bh)) { *retval = PTR_ERR(bh); return NULL; diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 8fc3edb6760c2fe97ca1b09c1199b61e9be7cf8e..92f72bb5aff433421987867c681e631dc43447c6 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -3261,6 +3261,11 @@ static int read_compacted_summaries(struct f2fs_sb_info *sbi) seg_i = CURSEG_I(sbi, i); segno = le32_to_cpu(ckpt->cur_data_segno[i]); blk_off = 
le16_to_cpu(ckpt->cur_data_blkoff[i]); + if (blk_off > ENTRIES_IN_SUM) { + f2fs_bug_on(sbi, 1); + f2fs_put_page(page, 1); + return -EFAULT; + } seg_i->next_segno = segno; reset_curseg(sbi, i, 0); seg_i->alloc_type = ckpt->alloc_type[i]; diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index 9544e2f8b79ff814302d05486b3b878ef3f38839..7ee86d8f313d0feb5db522d4de7c491ace364850 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -721,6 +721,7 @@ void wbc_detach_inode(struct writeback_control *wbc) void wbc_account_io(struct writeback_control *wbc, struct page *page, size_t bytes) { + struct cgroup_subsys_state *css; int id; /* @@ -732,7 +733,12 @@ void wbc_account_io(struct writeback_control *wbc, struct page *page, if (!wbc->wb) return; - id = mem_cgroup_css_from_page(page)->id; + css = mem_cgroup_css_from_page(page); + /* dead cgroups shouldn't contribute to inode ownership arbitration */ + if (!(css->flags & CSS_ONLINE)) + return; + + id = css->id; if (id == wbc->wb_id) { wbc->wb_bytes += bytes; diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c index 7f8bb0868c0f8bef5c41c491d2e0f79e826d1b2f..d14d71d8d7eebb11ca6026ad96696aae3794566b 100644 --- a/fs/gfs2/bmap.c +++ b/fs/gfs2/bmap.c @@ -392,6 +392,19 @@ static int fillup_metapath(struct gfs2_inode *ip, struct metapath *mp, int h) return mp->mp_aheight - x - 1; } +static sector_t metapath_to_block(struct gfs2_sbd *sdp, struct metapath *mp) +{ + sector_t factor = 1, block = 0; + int hgt; + + for (hgt = mp->mp_fheight - 1; hgt >= 0; hgt--) { + if (hgt < mp->mp_aheight) + block += mp->mp_list[hgt] * factor; + factor *= sdp->sd_inptrs; + } + return block; +} + static void release_metapath(struct metapath *mp) { int i; @@ -432,60 +445,84 @@ static inline unsigned int gfs2_extent_length(struct buffer_head *bh, __be64 *pt return ptr - first; } -typedef const __be64 *(*gfs2_metadata_walker)( - struct metapath *mp, - const __be64 *start, const __be64 *end, - u64 factor, void *data); +enum walker_status { WALK_STOP, WALK_FOLLOW, WALK_CONTINUE }; -#define WALK_STOP ((__be64 *)0) -#define WALK_NEXT ((__be64 *)1) +/* + * gfs2_metadata_walker - walk an indirect block + * @mp: Metapath to indirect block + * @ptrs: Number of pointers to look at + * + * When returning WALK_FOLLOW, the walker must update @mp to point at the right + * indirect block to follow. + */ +typedef enum walker_status (*gfs2_metadata_walker)(struct metapath *mp, + unsigned int ptrs); + +/* + * gfs2_walk_metadata - walk a tree of indirect blocks + * @inode: The inode + * @mp: Starting point of walk + * @max_len: Maximum number of blocks to walk + * @walker: Called during the walk + * + * Returns 1 if the walk was stopped by @walker, 0 if we went past @max_len or + * past the end of metadata, and a negative error code otherwise. + */ -static int gfs2_walk_metadata(struct inode *inode, sector_t lblock, - u64 len, struct metapath *mp, gfs2_metadata_walker walker, - void *data) +static int gfs2_walk_metadata(struct inode *inode, struct metapath *mp, + u64 max_len, gfs2_metadata_walker walker) { - struct metapath clone; struct gfs2_inode *ip = GFS2_I(inode); struct gfs2_sbd *sdp = GFS2_SB(inode); - const __be64 *start, *end, *ptr; u64 factor = 1; unsigned int hgt; - int ret = 0; + int ret; - for (hgt = ip->i_height - 1; hgt >= mp->mp_aheight; hgt--) + /* + * The walk starts in the lowest allocated indirect block, which may be + * before the position indicated by @mp. Adjust @max_len accordingly + * to avoid a short walk. 
+ */ + for (hgt = mp->mp_fheight - 1; hgt >= mp->mp_aheight; hgt--) { + max_len += mp->mp_list[hgt] * factor; + mp->mp_list[hgt] = 0; factor *= sdp->sd_inptrs; + } for (;;) { - u64 step; + u16 start = mp->mp_list[hgt]; + enum walker_status status; + unsigned int ptrs; + u64 len; /* Walk indirect block. */ - start = metapointer(hgt, mp); - end = metaend(hgt, mp); - - step = (end - start) * factor; - if (step > len) - end = start + DIV_ROUND_UP_ULL(len, factor); - - ptr = walker(mp, start, end, factor, data); - if (ptr == WALK_STOP) + ptrs = (hgt >= 1 ? sdp->sd_inptrs : sdp->sd_diptrs) - start; + len = ptrs * factor; + if (len > max_len) + ptrs = DIV_ROUND_UP_ULL(max_len, factor); + status = walker(mp, ptrs); + switch (status) { + case WALK_STOP: + return 1; + case WALK_FOLLOW: + BUG_ON(mp->mp_aheight == mp->mp_fheight); + ptrs = mp->mp_list[hgt] - start; + len = ptrs * factor; break; - if (step >= len) + case WALK_CONTINUE: break; - len -= step; - if (ptr != WALK_NEXT) { - BUG_ON(!*ptr); - mp->mp_list[hgt] += ptr - start; - goto fill_up_metapath; } + if (len >= max_len) + break; + max_len -= len; + if (status == WALK_FOLLOW) + goto fill_up_metapath; lower_metapath: /* Decrease height of metapath. */ - if (mp != &clone) { - clone_metapath(&clone, mp); - mp = &clone; - } brelse(mp->mp_bh[hgt]); mp->mp_bh[hgt] = NULL; + mp->mp_list[hgt] = 0; if (!hgt) break; hgt--; @@ -493,10 +530,7 @@ lower_metapath: /* Advance in metadata tree. */ (mp->mp_list[hgt])++; - start = metapointer(hgt, mp); - end = metaend(hgt, mp); - if (start >= end) { - mp->mp_list[hgt] = 0; + if (mp->mp_list[hgt] >= sdp->sd_inptrs) { if (!hgt) break; goto lower_metapath; @@ -504,44 +538,36 @@ lower_metapath: fill_up_metapath: /* Increase height of metapath. */ - if (mp != &clone) { - clone_metapath(&clone, mp); - mp = &clone; - } ret = fillup_metapath(ip, mp, ip->i_height - 1); if (ret < 0) - break; + return ret; hgt += ret; for (; ret; ret--) do_div(factor, sdp->sd_inptrs); mp->mp_aheight = hgt + 1; } - if (mp == &clone) - release_metapath(mp); - return ret; + return 0; } -struct gfs2_hole_walker_args { - u64 blocks; -}; - -static const __be64 *gfs2_hole_walker(struct metapath *mp, - const __be64 *start, const __be64 *end, - u64 factor, void *data) +static enum walker_status gfs2_hole_walker(struct metapath *mp, + unsigned int ptrs) { - struct gfs2_hole_walker_args *args = data; - const __be64 *ptr; + const __be64 *start, *ptr, *end; + unsigned int hgt; + + hgt = mp->mp_aheight - 1; + start = metapointer(hgt, mp); + end = start + ptrs; for (ptr = start; ptr < end; ptr++) { if (*ptr) { - args->blocks += (ptr - start) * factor; + mp->mp_list[hgt] += ptr - start; if (mp->mp_aheight == mp->mp_fheight) return WALK_STOP; - return ptr; /* increase height */ + return WALK_FOLLOW; } } - args->blocks += (end - start) * factor; - return WALK_NEXT; + return WALK_CONTINUE; } /** @@ -559,12 +585,24 @@ static const __be64 *gfs2_hole_walker(struct metapath *mp, static int gfs2_hole_size(struct inode *inode, sector_t lblock, u64 len, struct metapath *mp, struct iomap *iomap) { - struct gfs2_hole_walker_args args = { }; - int ret = 0; + struct metapath clone; + u64 hole_size; + int ret; - ret = gfs2_walk_metadata(inode, lblock, len, mp, gfs2_hole_walker, &args); - if (!ret) - iomap->length = args.blocks << inode->i_blkbits; + clone_metapath(&clone, mp); + ret = gfs2_walk_metadata(inode, &clone, len, gfs2_hole_walker); + if (ret < 0) + goto out; + + if (ret == 1) + hole_size = metapath_to_block(GFS2_SB(inode), &clone) - lblock; + else + 
hole_size = len; + iomap->length = hole_size << inode->i_blkbits; + ret = 0; + +out: + release_metapath(&clone); return ret; } diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c index 65ea0355a4f64fe88db9aa7ff22d3a0bb9e5fd2e..24f86ffe11d7477dd2939d4e66ae9d1a1dc22532 100644 --- a/fs/jbd2/commit.c +++ b/fs/jbd2/commit.c @@ -187,14 +187,15 @@ static int journal_wait_on_commit_record(journal_t *journal, * use writepages() because with dealyed allocation we may be doing * block allocation in writepages(). */ -static int journal_submit_inode_data_buffers(struct address_space *mapping) +static int journal_submit_inode_data_buffers(struct address_space *mapping, + loff_t dirty_start, loff_t dirty_end) { int ret; struct writeback_control wbc = { .sync_mode = WB_SYNC_ALL, .nr_to_write = mapping->nrpages * 2, - .range_start = 0, - .range_end = i_size_read(mapping->host), + .range_start = dirty_start, + .range_end = dirty_end, }; ret = generic_writepages(mapping, &wbc); @@ -218,6 +219,9 @@ static int journal_submit_data_buffers(journal_t *journal, spin_lock(&journal->j_list_lock); list_for_each_entry(jinode, &commit_transaction->t_inode_list, i_list) { + loff_t dirty_start = jinode->i_dirty_start; + loff_t dirty_end = jinode->i_dirty_end; + if (!(jinode->i_flags & JI_WRITE_DATA)) continue; mapping = jinode->i_vfs_inode->i_mapping; @@ -230,7 +234,8 @@ static int journal_submit_data_buffers(journal_t *journal, * only allocated blocks here. */ trace_jbd2_submit_inode_data(jinode->i_vfs_inode); - err = journal_submit_inode_data_buffers(mapping); + err = journal_submit_inode_data_buffers(mapping, dirty_start, + dirty_end); if (!ret) ret = err; spin_lock(&journal->j_list_lock); @@ -257,12 +262,16 @@ static int journal_finish_inode_data_buffers(journal_t *journal, /* For locking, see the comment in journal_submit_data_buffers() */ spin_lock(&journal->j_list_lock); list_for_each_entry(jinode, &commit_transaction->t_inode_list, i_list) { + loff_t dirty_start = jinode->i_dirty_start; + loff_t dirty_end = jinode->i_dirty_end; + if (!(jinode->i_flags & JI_WAIT_DATA)) continue; jinode->i_flags |= JI_COMMIT_RUNNING; spin_unlock(&journal->j_list_lock); - err = filemap_fdatawait_keep_errors( - jinode->i_vfs_inode->i_mapping); + err = filemap_fdatawait_range_keep_errors( + jinode->i_vfs_inode->i_mapping, dirty_start, + dirty_end); if (!ret) ret = err; spin_lock(&journal->j_list_lock); @@ -282,6 +291,8 @@ static int journal_finish_inode_data_buffers(journal_t *journal, &jinode->i_transaction->t_inode_list); } else { jinode->i_transaction = NULL; + jinode->i_dirty_start = 0; + jinode->i_dirty_end = 0; } } spin_unlock(&journal->j_list_lock); diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c index e9cf88f0bc291769fd90f7de98b310acccfcf426..df390a69c49a8cdd94401b6fd1e448d71c881ec2 100644 --- a/fs/jbd2/journal.c +++ b/fs/jbd2/journal.c @@ -94,6 +94,8 @@ EXPORT_SYMBOL(jbd2_journal_try_to_free_buffers); EXPORT_SYMBOL(jbd2_journal_force_commit); EXPORT_SYMBOL(jbd2_journal_inode_add_write); EXPORT_SYMBOL(jbd2_journal_inode_add_wait); +EXPORT_SYMBOL(jbd2_journal_inode_ranged_write); +EXPORT_SYMBOL(jbd2_journal_inode_ranged_wait); EXPORT_SYMBOL(jbd2_journal_init_jbd_inode); EXPORT_SYMBOL(jbd2_journal_release_jbd_inode); EXPORT_SYMBOL(jbd2_journal_begin_ordered_truncate); @@ -2588,6 +2590,8 @@ void jbd2_journal_init_jbd_inode(struct jbd2_inode *jinode, struct inode *inode) jinode->i_next_transaction = NULL; jinode->i_vfs_inode = inode; jinode->i_flags = 0; + jinode->i_dirty_start = 0; + jinode->i_dirty_end = 0; 
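The jbd2 hunks above, together with the ranged helpers added below in transaction.c, track a per-inode dirty byte range (i_dirty_start/i_dirty_end) and merge repeated updates with a simple min/max rule, so that commit only needs to write back and wait on that range instead of the whole mapping. A minimal userspace sketch of that merge rule, using illustrative names rather than the real jbd2 structures:

#include <stdio.h>

/* Stand-in for the i_dirty_start/i_dirty_end fields of a journalled inode. */
struct dirty_range {
        long long start;
        long long end;  /* 0 means "no dirty range recorded yet" */
};

/* Mirror of the merge rule in jbd2_journal_file_inode(): only widen the range. */
static void record_dirty(struct dirty_range *r, long long start, long long end)
{
        if (r->end) {
                if (start < r->start)
                        r->start = start;
                if (end > r->end)
                        r->end = end;
        } else {
                r->start = start;
                r->end = end;
        }
}

int main(void)
{
        struct dirty_range r = { 0, 0 };

        record_dirty(&r, 4096, 8191);   /* first buffered write */
        record_dirty(&r, 0, 511);       /* second write below it */
        printf("writeback range: [%lld, %lld]\n", r.start, r.end);
        return 0;
}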
INIT_LIST_HEAD(&jinode->i_list); } diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c index e20a6703531f41168ed15fd6fa2ad7ca9865cdbe..911ff18249b75615e2151a50c4f6d3fa9436ccdc 100644 --- a/fs/jbd2/transaction.c +++ b/fs/jbd2/transaction.c @@ -2500,7 +2500,7 @@ void jbd2_journal_refile_buffer(journal_t *journal, struct journal_head *jh) * File inode in the inode list of the handle's transaction */ static int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *jinode, - unsigned long flags) + unsigned long flags, loff_t start_byte, loff_t end_byte) { transaction_t *transaction = handle->h_transaction; journal_t *journal; @@ -2512,26 +2512,17 @@ static int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *jinode, jbd_debug(4, "Adding inode %lu, tid:%d\n", jinode->i_vfs_inode->i_ino, transaction->t_tid); - /* - * First check whether inode isn't already on the transaction's - * lists without taking the lock. Note that this check is safe - * without the lock as we cannot race with somebody removing inode - * from the transaction. The reason is that we remove inode from the - * transaction only in journal_release_jbd_inode() and when we commit - * the transaction. We are guarded from the first case by holding - * a reference to the inode. We are safe against the second case - * because if jinode->i_transaction == transaction, commit code - * cannot touch the transaction because we hold reference to it, - * and if jinode->i_next_transaction == transaction, commit code - * will only file the inode where we want it. - */ - if ((jinode->i_transaction == transaction || - jinode->i_next_transaction == transaction) && - (jinode->i_flags & flags) == flags) - return 0; - spin_lock(&journal->j_list_lock); jinode->i_flags |= flags; + + if (jinode->i_dirty_end) { + jinode->i_dirty_start = min(jinode->i_dirty_start, start_byte); + jinode->i_dirty_end = max(jinode->i_dirty_end, end_byte); + } else { + jinode->i_dirty_start = start_byte; + jinode->i_dirty_end = end_byte; + } + /* Is inode already attached where we need it? 
*/ if (jinode->i_transaction == transaction || jinode->i_next_transaction == transaction) @@ -2566,12 +2557,28 @@ done: int jbd2_journal_inode_add_write(handle_t *handle, struct jbd2_inode *jinode) { return jbd2_journal_file_inode(handle, jinode, - JI_WRITE_DATA | JI_WAIT_DATA); + JI_WRITE_DATA | JI_WAIT_DATA, 0, LLONG_MAX); } int jbd2_journal_inode_add_wait(handle_t *handle, struct jbd2_inode *jinode) { - return jbd2_journal_file_inode(handle, jinode, JI_WAIT_DATA); + return jbd2_journal_file_inode(handle, jinode, JI_WAIT_DATA, 0, + LLONG_MAX); +} + +int jbd2_journal_inode_ranged_write(handle_t *handle, + struct jbd2_inode *jinode, loff_t start_byte, loff_t length) +{ + return jbd2_journal_file_inode(handle, jinode, + JI_WRITE_DATA | JI_WAIT_DATA, start_byte, + start_byte + length - 1); +} + +int jbd2_journal_inode_ranged_wait(handle_t *handle, struct jbd2_inode *jinode, + loff_t start_byte, loff_t length) +{ + return jbd2_journal_file_inode(handle, jinode, JI_WAIT_DATA, + start_byte, start_byte + length - 1); } /* diff --git a/fs/nfs/client.c b/fs/nfs/client.c index c092661147b3054ee81f85205f61ed674cd1a674..0a2b59c1ecb3d066bbdf8b35bf8cea2676a56bfb 100644 --- a/fs/nfs/client.c +++ b/fs/nfs/client.c @@ -416,10 +416,10 @@ struct nfs_client *nfs_get_client(const struct nfs_client_initdata *cl_init) clp = nfs_match_client(cl_init); if (clp) { spin_unlock(&nn->nfs_client_lock); - if (IS_ERR(clp)) - return clp; if (new) new->rpc_ops->free_client(new); + if (IS_ERR(clp)) + return clp; return nfs_found_client(cl_init, clp); } if (new) { diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c index 9818a5dfb47287ce2d1dc0f230e320e6c00f3e8a..82cb219f571f3e06f93f9363c39e9e211a904aeb 100644 --- a/fs/nfs/dir.c +++ b/fs/nfs/dir.c @@ -1072,6 +1072,100 @@ int nfs_neg_need_reval(struct inode *dir, struct dentry *dentry, return !nfs_check_verifier(dir, dentry, flags & LOOKUP_RCU); } +static int +nfs_lookup_revalidate_done(struct inode *dir, struct dentry *dentry, + struct inode *inode, int error) +{ + switch (error) { + case 1: + dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) is valid\n", + __func__, dentry); + return 1; + case 0: + nfs_mark_for_revalidate(dir); + if (inode && S_ISDIR(inode->i_mode)) { + /* Purge readdir caches. */ + nfs_zap_caches(inode); + /* + * We can't d_drop the root of a disconnected tree: + * its d_hash is on the s_anon list and d_drop() would hide + * it from shrink_dcache_for_unmount(), leading to busy + * inodes on unmount and further oopses. 
+ */ + if (IS_ROOT(dentry)) + return 1; + } + dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) is invalid\n", + __func__, dentry); + return 0; + } + dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) lookup returned error %d\n", + __func__, dentry, error); + return error; +} + +static int +nfs_lookup_revalidate_negative(struct inode *dir, struct dentry *dentry, + unsigned int flags) +{ + int ret = 1; + if (nfs_neg_need_reval(dir, dentry, flags)) { + if (flags & LOOKUP_RCU) + return -ECHILD; + ret = 0; + } + return nfs_lookup_revalidate_done(dir, dentry, NULL, ret); +} + +static int +nfs_lookup_revalidate_delegated(struct inode *dir, struct dentry *dentry, + struct inode *inode) +{ + nfs_set_verifier(dentry, nfs_save_change_attribute(dir)); + return nfs_lookup_revalidate_done(dir, dentry, inode, 1); +} + +static int +nfs_lookup_revalidate_dentry(struct inode *dir, struct dentry *dentry, + struct inode *inode) +{ + struct nfs_fh *fhandle; + struct nfs_fattr *fattr; + struct nfs4_label *label; + int ret; + + ret = -ENOMEM; + fhandle = nfs_alloc_fhandle(); + fattr = nfs_alloc_fattr(); + label = nfs4_label_alloc(NFS_SERVER(inode), GFP_KERNEL); + if (fhandle == NULL || fattr == NULL || IS_ERR(label)) + goto out; + + ret = NFS_PROTO(dir)->lookup(dir, &dentry->d_name, fhandle, fattr, label); + if (ret < 0) { + if (ret == -ESTALE || ret == -ENOENT) + ret = 0; + goto out; + } + ret = 0; + if (nfs_compare_fh(NFS_FH(inode), fhandle)) + goto out; + if (nfs_refresh_inode(inode, fattr) < 0) + goto out; + + nfs_setsecurity(inode, fattr, label); + nfs_set_verifier(dentry, nfs_save_change_attribute(dir)); + + /* set a readdirplus hint that we had a cache miss */ + nfs_force_use_readdirplus(dir); + ret = 1; +out: + nfs_free_fattr(fattr); + nfs_free_fhandle(fhandle); + nfs4_label_free(label); + return nfs_lookup_revalidate_done(dir, dentry, inode, ret); +} + /* * This is called every time the dcache has a lookup hit, * and we should check whether we can really trust that @@ -1083,58 +1177,36 @@ int nfs_neg_need_reval(struct inode *dir, struct dentry *dentry, * If the parent directory is seen to have changed, we throw out the * cached dentry and do a new lookup. 
*/ -static int nfs_lookup_revalidate(struct dentry *dentry, unsigned int flags) +static int +nfs_do_lookup_revalidate(struct inode *dir, struct dentry *dentry, + unsigned int flags) { - struct inode *dir; struct inode *inode; - struct dentry *parent; - struct nfs_fh *fhandle = NULL; - struct nfs_fattr *fattr = NULL; - struct nfs4_label *label = NULL; int error; - if (flags & LOOKUP_RCU) { - parent = READ_ONCE(dentry->d_parent); - dir = d_inode_rcu(parent); - if (!dir) - return -ECHILD; - } else { - parent = dget_parent(dentry); - dir = d_inode(parent); - } nfs_inc_stats(dir, NFSIOS_DENTRYREVALIDATE); inode = d_inode(dentry); - if (!inode) { - if (nfs_neg_need_reval(dir, dentry, flags)) { - if (flags & LOOKUP_RCU) - return -ECHILD; - goto out_bad; - } - goto out_valid; - } + if (!inode) + return nfs_lookup_revalidate_negative(dir, dentry, flags); if (is_bad_inode(inode)) { - if (flags & LOOKUP_RCU) - return -ECHILD; dfprintk(LOOKUPCACHE, "%s: %pd2 has dud inode\n", __func__, dentry); goto out_bad; } if (NFS_PROTO(dir)->have_delegation(inode, FMODE_READ)) - goto out_set_verifier; + return nfs_lookup_revalidate_delegated(dir, dentry, inode); /* Force a full look up iff the parent directory has changed */ if (!(flags & (LOOKUP_EXCL | LOOKUP_REVAL)) && nfs_check_verifier(dir, dentry, flags & LOOKUP_RCU)) { error = nfs_lookup_verify_inode(inode, flags); if (error) { - if (flags & LOOKUP_RCU) - return -ECHILD; if (error == -ESTALE) - goto out_zap_parent; - goto out_error; + nfs_zap_caches(dir); + goto out_bad; } nfs_advise_use_readdirplus(dir); goto out_valid; @@ -1146,81 +1218,45 @@ static int nfs_lookup_revalidate(struct dentry *dentry, unsigned int flags) if (NFS_STALE(inode)) goto out_bad; - error = -ENOMEM; - fhandle = nfs_alloc_fhandle(); - fattr = nfs_alloc_fattr(); - if (fhandle == NULL || fattr == NULL) - goto out_error; - - label = nfs4_label_alloc(NFS_SERVER(inode), GFP_NOWAIT); - if (IS_ERR(label)) - goto out_error; - trace_nfs_lookup_revalidate_enter(dir, dentry, flags); - error = NFS_PROTO(dir)->lookup(dir, &dentry->d_name, fhandle, fattr, label); + error = nfs_lookup_revalidate_dentry(dir, dentry, inode); trace_nfs_lookup_revalidate_exit(dir, dentry, flags, error); - if (error == -ESTALE || error == -ENOENT) - goto out_bad; - if (error) - goto out_error; - if (nfs_compare_fh(NFS_FH(inode), fhandle)) - goto out_bad; - if ((error = nfs_refresh_inode(inode, fattr)) != 0) - goto out_bad; - - nfs_setsecurity(inode, fattr, label); - - nfs_free_fattr(fattr); - nfs_free_fhandle(fhandle); - nfs4_label_free(label); + return error; +out_valid: + return nfs_lookup_revalidate_done(dir, dentry, inode, 1); +out_bad: + if (flags & LOOKUP_RCU) + return -ECHILD; + return nfs_lookup_revalidate_done(dir, dentry, inode, 0); +} - /* set a readdirplus hint that we had a cache miss */ - nfs_force_use_readdirplus(dir); +static int +__nfs_lookup_revalidate(struct dentry *dentry, unsigned int flags, + int (*reval)(struct inode *, struct dentry *, unsigned int)) +{ + struct dentry *parent; + struct inode *dir; + int ret; -out_set_verifier: - nfs_set_verifier(dentry, nfs_save_change_attribute(dir)); - out_valid: if (flags & LOOKUP_RCU) { + parent = READ_ONCE(dentry->d_parent); + dir = d_inode_rcu(parent); + if (!dir) + return -ECHILD; + ret = reval(dir, dentry, flags); if (parent != READ_ONCE(dentry->d_parent)) return -ECHILD; - } else + } else { + parent = dget_parent(dentry); + ret = reval(d_inode(parent), dentry, flags); dput(parent); - dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) is valid\n", - __func__, dentry); 
- return 1; -out_zap_parent: - nfs_zap_caches(dir); - out_bad: - WARN_ON(flags & LOOKUP_RCU); - nfs_free_fattr(fattr); - nfs_free_fhandle(fhandle); - nfs4_label_free(label); - nfs_mark_for_revalidate(dir); - if (inode && S_ISDIR(inode->i_mode)) { - /* Purge readdir caches. */ - nfs_zap_caches(inode); - /* - * We can't d_drop the root of a disconnected tree: - * its d_hash is on the s_anon list and d_drop() would hide - * it from shrink_dcache_for_unmount(), leading to busy - * inodes on unmount and further oopses. - */ - if (IS_ROOT(dentry)) - goto out_valid; } - dput(parent); - dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) is invalid\n", - __func__, dentry); - return 0; -out_error: - WARN_ON(flags & LOOKUP_RCU); - nfs_free_fattr(fattr); - nfs_free_fhandle(fhandle); - nfs4_label_free(label); - dput(parent); - dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) lookup returned error %d\n", - __func__, dentry, error); - return error; + return ret; +} + +static int nfs_lookup_revalidate(struct dentry *dentry, unsigned int flags) +{ + return __nfs_lookup_revalidate(dentry, flags, nfs_do_lookup_revalidate); } /* @@ -1579,62 +1615,55 @@ no_open: } EXPORT_SYMBOL_GPL(nfs_atomic_open); -static int nfs4_lookup_revalidate(struct dentry *dentry, unsigned int flags) +static int +nfs4_do_lookup_revalidate(struct inode *dir, struct dentry *dentry, + unsigned int flags) { struct inode *inode; - int ret = 0; if (!(flags & LOOKUP_OPEN) || (flags & LOOKUP_DIRECTORY)) - goto no_open; + goto full_reval; if (d_mountpoint(dentry)) - goto no_open; - if (NFS_SB(dentry->d_sb)->caps & NFS_CAP_ATOMIC_OPEN_V1) - goto no_open; + goto full_reval; inode = d_inode(dentry); /* We can't create new files in nfs_open_revalidate(), so we * optimize away revalidation of negative dentries. */ - if (inode == NULL) { - struct dentry *parent; - struct inode *dir; - - if (flags & LOOKUP_RCU) { - parent = READ_ONCE(dentry->d_parent); - dir = d_inode_rcu(parent); - if (!dir) - return -ECHILD; - } else { - parent = dget_parent(dentry); - dir = d_inode(parent); - } - if (!nfs_neg_need_reval(dir, dentry, flags)) - ret = 1; - else if (flags & LOOKUP_RCU) - ret = -ECHILD; - if (!(flags & LOOKUP_RCU)) - dput(parent); - else if (parent != READ_ONCE(dentry->d_parent)) - return -ECHILD; - goto out; - } + if (inode == NULL) + goto full_reval; + + if (NFS_PROTO(dir)->have_delegation(inode, FMODE_READ)) + return nfs_lookup_revalidate_delegated(dir, dentry, inode); /* NFS only supports OPEN on regular files */ if (!S_ISREG(inode->i_mode)) - goto no_open; + goto full_reval; + /* We cannot do exclusive creation on a positive dentry */ - if (flags & LOOKUP_EXCL) - goto no_open; + if (flags & (LOOKUP_EXCL | LOOKUP_REVAL)) + goto reval_dentry; + + /* Check if the directory changed */ + if (!nfs_check_verifier(dir, dentry, flags & LOOKUP_RCU)) + goto reval_dentry; /* Let f_op->open() actually open (and revalidate) the file */ - ret = 1; + return 1; +reval_dentry: + if (flags & LOOKUP_RCU) + return -ECHILD; + return nfs_lookup_revalidate_dentry(dir, dentry, inode);; -out: - return ret; +full_reval: + return nfs_do_lookup_revalidate(dir, dentry, flags); +} -no_open: - return nfs_lookup_revalidate(dentry, flags); +static int nfs4_lookup_revalidate(struct dentry *dentry, unsigned int flags) +{ + return __nfs_lookup_revalidate(dentry, flags, + nfs4_do_lookup_revalidate); } #endif /* CONFIG_NFSV4 */ diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c index 33824a0a57bfe5de9e31f4d13e4d2eebc3b7b2df..f516ace8f45d3820bbd00af2a79eded3abb1b5b5 100644 --- a/fs/nfs/direct.c +++ b/fs/nfs/direct.c 
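The fs/nfs/dir.c rework above splits dentry revalidation into small helpers and routes both nfs_lookup_revalidate() and nfs4_lookup_revalidate() through one wrapper that resolves the parent (under RCU or with a reference) and then dispatches to a per-protocol callback. A compile-and-run sketch of that dispatch shape, with made-up types standing in for the VFS and NFS structures:

#include <stdio.h>

struct dentry { const char *name; struct dentry *parent; };

/* Per-protocol revalidation callback, as in nfs_do_lookup_revalidate() and
 * nfs4_do_lookup_revalidate(): 1 = valid, 0 = invalid, <0 = error. */
typedef int (*reval_fn)(struct dentry *parent, struct dentry *dentry, unsigned flags);

static int plain_reval(struct dentry *parent, struct dentry *dentry, unsigned flags)
{
        (void)flags;
        printf("full revalidate of %s under %s\n", dentry->name, parent->name);
        return 1;
}

static int open_reval(struct dentry *parent, struct dentry *dentry, unsigned flags)
{
        /* the v4 path special-cases open intents, then falls back */
        if (flags & 1 /* stand-in for LOOKUP_OPEN */)
                return 1;
        return plain_reval(parent, dentry, flags);
}

/* Common wrapper: resolve the parent once, then dispatch. */
static int do_revalidate(struct dentry *dentry, unsigned flags, reval_fn reval)
{
        return reval(dentry->parent, dentry, flags);
}

int main(void)
{
        struct dentry root = { "/", NULL };
        struct dentry file = { "data.txt", &root };

        printf("v3-style: %d\n", do_revalidate(&file, 0, plain_reval));
        printf("v4-style: %d\n", do_revalidate(&file, 1, open_reval));
        return 0;
}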
@@ -400,15 +400,21 @@ static void nfs_direct_read_completion(struct nfs_pgio_header *hdr) unsigned long bytes = 0; struct nfs_direct_req *dreq = hdr->dreq; - if (test_bit(NFS_IOHDR_REDO, &hdr->flags)) - goto out_put; - spin_lock(&dreq->lock); - if (test_bit(NFS_IOHDR_ERROR, &hdr->flags) && (hdr->good_bytes == 0)) + if (test_bit(NFS_IOHDR_ERROR, &hdr->flags)) dreq->error = hdr->error; - else + + if (test_bit(NFS_IOHDR_REDO, &hdr->flags)) { + spin_unlock(&dreq->lock); + goto out_put; + } + + if (hdr->good_bytes != 0) nfs_direct_good_bytes(dreq, hdr); + if (test_bit(NFS_IOHDR_EOF, &hdr->flags)) + dreq->error = 0; + spin_unlock(&dreq->lock); while (!list_empty(&hdr->pages)) { @@ -428,7 +434,7 @@ out_put: hdr->release(hdr); } -static void nfs_read_sync_pgio_error(struct list_head *head) +static void nfs_read_sync_pgio_error(struct list_head *head, int error) { struct nfs_page *req; @@ -664,8 +670,7 @@ static void nfs_direct_write_reschedule(struct nfs_direct_req *dreq) list_for_each_entry_safe(req, tmp, &reqs, wb_list) { if (!nfs_pageio_add_request(&desc, req)) { - nfs_list_remove_request(req); - nfs_list_add_request(req, &failed); + nfs_list_move_request(req, &failed); spin_lock(&cinfo.inode->i_lock); dreq->flags = 0; if (desc.pg_error < 0) @@ -775,16 +780,19 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr) bool request_commit = false; struct nfs_page *req = nfs_list_entry(hdr->pages.next); - if (test_bit(NFS_IOHDR_REDO, &hdr->flags)) - goto out_put; - nfs_init_cinfo_from_dreq(&cinfo, dreq); spin_lock(&dreq->lock); if (test_bit(NFS_IOHDR_ERROR, &hdr->flags)) dreq->error = hdr->error; - if (dreq->error == 0) { + + if (test_bit(NFS_IOHDR_REDO, &hdr->flags)) { + spin_unlock(&dreq->lock); + goto out_put; + } + + if (hdr->good_bytes != 0) { nfs_direct_good_bytes(dreq, hdr); if (nfs_write_need_commit(hdr)) { if (dreq->flags == NFS_ODIRECT_RESCHED_WRITES) @@ -821,7 +829,7 @@ out_put: hdr->release(hdr); } -static void nfs_write_sync_pgio_error(struct list_head *head) +static void nfs_write_sync_pgio_error(struct list_head *head, int error) { struct nfs_page *req; diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c index 364028c710a8bf605a8a420ecbfd869b9d2c50a5..8da239b6cc16ffb7790015debe6eca4ae505ecd6 100644 --- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c +++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c @@ -307,7 +307,7 @@ int ff_layout_track_ds_error(struct nfs4_flexfile_layout *flo, if (status == 0) return 0; - if (mirror->mirror_ds == NULL) + if (IS_ERR_OR_NULL(mirror->mirror_ds)) return -EINVAL; dserr = kmalloc(sizeof(*dserr), gfp_flags); diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index 4dc887813c71d312fe09df36ddbbd78a0b27024a..a7bc4e0494f92f61ea7989310ffd8ae6ebadc21b 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -118,6 +118,10 @@ void nfs_fscache_get_super_cookie(struct super_block *sb, const char *uniq, int struct rb_node **p, *parent; int diff; + nfss->fscache_key = NULL; + nfss->fscache = NULL; + if (!(nfss->options & NFS_OPTION_FSCACHE)) + return; if (!uniq) { uniq = ""; ulen = 1; @@ -230,10 +234,11 @@ void nfs_fscache_release_super_cookie(struct super_block *sb) void nfs_fscache_init_inode(struct inode *inode) { struct nfs_fscache_inode_auxdata auxdata; + struct nfs_server *nfss = NFS_SERVER(inode); struct nfs_inode *nfsi = NFS_I(inode); nfsi->fscache = NULL; - if (!S_ISREG(inode->i_mode)) + if (!(nfss->fscache && S_ISREG(inode->i_mode))) return; memset(&auxdata, 0, sizeof(auxdata)); diff --git 
a/fs/nfs/fscache.h b/fs/nfs/fscache.h index 161ba2edb9d0410f7722cc4b7ba62f8d4614cc80..6363ea956858124a503a1ae8c14604dfd2726999 100644 --- a/fs/nfs/fscache.h +++ b/fs/nfs/fscache.h @@ -186,7 +186,7 @@ static inline void nfs_fscache_wait_on_invalidate(struct inode *inode) */ static inline const char *nfs_server_fscache_state(struct nfs_server *server) { - if (server->fscache && (server->options & NFS_OPTION_FSCACHE)) + if (server->fscache) return "yes"; return "no "; } diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c index 110ee6f78c31e8459ca06cc2640b84a54fd6e1ac..6f22c8d6576030a15de8266a3a9a021c0b6b8e25 100644 --- a/fs/nfs/inode.c +++ b/fs/nfs/inode.c @@ -1100,6 +1100,7 @@ int nfs_open(struct inode *inode, struct file *filp) nfs_fscache_open_file(inode, filp); return 0; } +EXPORT_SYMBOL_GPL(nfs_open); /* * This function is called whenever some part of NFS notices that diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h index 2ae55eaa4a1ebc5351282ced715ee2de4a55e1b4..2771aafaca19973bc12252cc402b8fc425a4b2d1 100644 --- a/fs/nfs/nfs4_fs.h +++ b/fs/nfs/nfs4_fs.h @@ -469,7 +469,8 @@ static inline void nfs4_schedule_session_recovery(struct nfs4_session *session, extern struct nfs4_state_owner *nfs4_get_state_owner(struct nfs_server *, struct rpc_cred *, gfp_t); extern void nfs4_put_state_owner(struct nfs4_state_owner *); -extern void nfs4_purge_state_owners(struct nfs_server *); +extern void nfs4_purge_state_owners(struct nfs_server *, struct list_head *); +extern void nfs4_free_state_owners(struct list_head *head); extern struct nfs4_state * nfs4_get_open_state(struct inode *, struct nfs4_state_owner *); extern void nfs4_put_open_state(struct nfs4_state *); extern void nfs4_close_state(struct nfs4_state *, fmode_t); diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c index 8f53455c476530998396023edabf72bf3692d5ce..86991bcfbeb129d58c11f3d48cced99669337ca9 100644 --- a/fs/nfs/nfs4client.c +++ b/fs/nfs/nfs4client.c @@ -754,9 +754,12 @@ out: static void nfs4_destroy_server(struct nfs_server *server) { + LIST_HEAD(freeme); + nfs_server_return_all_delegations(server); unset_pnfs_layoutdriver(server); - nfs4_purge_state_owners(server); + nfs4_purge_state_owners(server, &freeme); + nfs4_free_state_owners(&freeme); } /* diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c index 134858507268f15a352f5df2c90762bf41e4b30b..61abbb087ed135e7180e9a5c1eff343f449f0d62 100644 --- a/fs/nfs/nfs4file.c +++ b/fs/nfs/nfs4file.c @@ -49,7 +49,7 @@ nfs4_file_open(struct inode *inode, struct file *filp) return err; if ((openflags & O_ACCMODE) == 3) - openflags--; + return nfs_open(inode, filp); /* We can't create new files here */ openflags &= ~(O_CREAT|O_EXCL); diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 78c3f4359e766bc7eeb324c09c181f327d2c911f..a5f63bf5bf7a0d9f587c58fe1945a67fd2e38406 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -1355,12 +1355,20 @@ static bool nfs4_mode_match_open_stateid(struct nfs4_state *state, return false; } -static int can_open_cached(struct nfs4_state *state, fmode_t mode, int open_mode) +static int can_open_cached(struct nfs4_state *state, fmode_t mode, + int open_mode, enum open_claim_type4 claim) { int ret = 0; if (open_mode & (O_EXCL|O_TRUNC)) goto out; + switch (claim) { + case NFS4_OPEN_CLAIM_NULL: + case NFS4_OPEN_CLAIM_FH: + goto out; + default: + break; + } switch (mode & (FMODE_READ|FMODE_WRITE)) { case FMODE_READ: ret |= test_bit(NFS_O_RDONLY_STATE, &state->flags) != 0 @@ -1753,7 +1761,7 @@ static struct nfs4_state *nfs4_try_open_cached(struct nfs4_opendata 
*opendata) for (;;) { spin_lock(&state->owner->so_lock); - if (can_open_cached(state, fmode, open_mode)) { + if (can_open_cached(state, fmode, open_mode, claim)) { update_open_stateflags(state, fmode); spin_unlock(&state->owner->so_lock); goto out_return_state; @@ -2282,7 +2290,8 @@ static void nfs4_open_prepare(struct rpc_task *task, void *calldata) if (data->state != NULL) { struct nfs_delegation *delegation; - if (can_open_cached(data->state, data->o_arg.fmode, data->o_arg.open_flags)) + if (can_open_cached(data->state, data->o_arg.fmode, + data->o_arg.open_flags, claim)) goto out_no_action; rcu_read_lock(); delegation = rcu_dereference(NFS_I(data->state->inode)->delegation); @@ -3124,7 +3133,7 @@ static int _nfs4_do_setattr(struct inode *inode, if (nfs4_copy_delegation_stateid(inode, FMODE_WRITE, &arg->stateid, &delegation_cred)) { /* Use that stateid */ - } else if (ctx != NULL) { + } else if (ctx != NULL && ctx->state) { struct nfs_lock_context *l_ctx; if (!nfs4_valid_open_stateid(ctx->state)) return -EBADF; diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c index f10952680bd9a76f6a26aa5b0a11626dab4b169c..e86450c8d205d861f856386ea209f1b44c90a9da 100644 --- a/fs/nfs/nfs4state.c +++ b/fs/nfs/nfs4state.c @@ -628,24 +628,39 @@ void nfs4_put_state_owner(struct nfs4_state_owner *sp) /** * nfs4_purge_state_owners - Release all cached state owners * @server: nfs_server with cached state owners to release + * @head: resulting list of state owners * * Called at umount time. Remaining state owners will be on * the LRU with ref count of zero. + * Note that the state owners are not freed, but are added + * to the list @head, which can later be used as an argument + * to nfs4_free_state_owners. */ -void nfs4_purge_state_owners(struct nfs_server *server) +void nfs4_purge_state_owners(struct nfs_server *server, struct list_head *head) { struct nfs_client *clp = server->nfs_client; struct nfs4_state_owner *sp, *tmp; - LIST_HEAD(doomed); spin_lock(&clp->cl_lock); list_for_each_entry_safe(sp, tmp, &server->state_owners_lru, so_lru) { - list_move(&sp->so_lru, &doomed); + list_move(&sp->so_lru, head); nfs4_remove_state_owner_locked(sp); } spin_unlock(&clp->cl_lock); +} - list_for_each_entry_safe(sp, tmp, &doomed, so_lru) { +/** + * nfs4_purge_state_owners - Release all cached state owners + * @head: resulting list of state owners + * + * Frees a list of state owners that was generated by + * nfs4_purge_state_owners + */ +void nfs4_free_state_owners(struct list_head *head) +{ + struct nfs4_state_owner *sp, *tmp; + + list_for_each_entry_safe(sp, tmp, head, so_lru) { list_del(&sp->so_lru); nfs4_free_state_owner(sp); } @@ -1853,12 +1868,13 @@ static int nfs4_do_reclaim(struct nfs_client *clp, const struct nfs4_state_recov struct nfs4_state_owner *sp; struct nfs_server *server; struct rb_node *pos; + LIST_HEAD(freeme); int status = 0; restart: rcu_read_lock(); list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) { - nfs4_purge_state_owners(server); + nfs4_purge_state_owners(server, &freeme); spin_lock(&clp->cl_lock); for (pos = rb_first(&server->state_owners); pos != NULL; @@ -1887,6 +1903,7 @@ restart: spin_unlock(&clp->cl_lock); } rcu_read_unlock(); + nfs4_free_state_owners(&freeme); return 0; } diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c index 0ec6bce3dd692969ac8fb5681a8cdc1af0617fc9..d23ea74b5d20352b3bd2d39502fe7b1fcfa2ea87 100644 --- a/fs/nfs/pagelist.c +++ b/fs/nfs/pagelist.c @@ -769,8 +769,7 @@ int nfs_generic_pgio(struct nfs_pageio_descriptor *desc, pageused = 0; while 
(!list_empty(head)) { req = nfs_list_entry(head->next); - nfs_list_remove_request(req); - nfs_list_add_request(req, &hdr->pages); + nfs_list_move_request(req, &hdr->pages); if (!last_page || last_page != req->wb_page) { pageused++; @@ -962,8 +961,7 @@ static int nfs_pageio_do_add_request(struct nfs_pageio_descriptor *desc, } if (!nfs_can_coalesce_requests(prev, req, desc)) return 0; - nfs_list_remove_request(req); - nfs_list_add_request(req, &mirror->pg_list); + nfs_list_move_request(req, &mirror->pg_list); mirror->pg_count += req->wb_bytes; return 1; } @@ -995,9 +993,8 @@ nfs_pageio_cleanup_request(struct nfs_pageio_descriptor *desc, { LIST_HEAD(head); - nfs_list_remove_request(req); - nfs_list_add_request(req, &head); - desc->pg_completion_ops->error_cleanup(&head); + nfs_list_move_request(req, &head); + desc->pg_completion_ops->error_cleanup(&head, desc->pg_error); } /** @@ -1133,7 +1130,8 @@ static void nfs_pageio_error_cleanup(struct nfs_pageio_descriptor *desc) for (midx = 0; midx < desc->pg_mirror_count; midx++) { mirror = &desc->pg_mirrors[midx]; - desc->pg_completion_ops->error_cleanup(&mirror->pg_list); + desc->pg_completion_ops->error_cleanup(&mirror->pg_list, + desc->pg_error); } } @@ -1235,21 +1233,23 @@ static void nfs_pageio_complete_mirror(struct nfs_pageio_descriptor *desc, int nfs_pageio_resend(struct nfs_pageio_descriptor *desc, struct nfs_pgio_header *hdr) { - LIST_HEAD(failed); + LIST_HEAD(pages); desc->pg_io_completion = hdr->io_completion; desc->pg_dreq = hdr->dreq; - while (!list_empty(&hdr->pages)) { - struct nfs_page *req = nfs_list_entry(hdr->pages.next); + list_splice_init(&hdr->pages, &pages); + while (!list_empty(&pages)) { + struct nfs_page *req = nfs_list_entry(pages.next); - nfs_list_remove_request(req); if (!nfs_pageio_add_request(desc, req)) - nfs_list_add_request(req, &failed); + break; } nfs_pageio_complete(desc); - if (!list_empty(&failed)) { - list_move(&failed, &hdr->pages); - return desc->pg_error < 0 ? desc->pg_error : -EIO; + if (!list_empty(&pages)) { + int err = desc->pg_error < 0 ? 
desc->pg_error : -EIO; + hdr->completion_ops->error_cleanup(&pages, err); + nfs_set_pgio_error(hdr, err, hdr->io_start); + return err; } return 0; } diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c index 7d9a51e6b847c65df159d6632a98ac891370f80f..4931c3a75f038447ffe16b9e9d3cc1612a4a1840 100644 --- a/fs/nfs/pnfs.c +++ b/fs/nfs/pnfs.c @@ -1866,8 +1866,8 @@ lookup_again: atomic_read(&lo->plh_outstanding) != 0) { spin_unlock(&ino->i_lock); lseg = ERR_PTR(wait_var_event_killable(&lo->plh_outstanding, - atomic_read(&lo->plh_outstanding))); - if (IS_ERR(lseg) || !list_empty(&lo->plh_segs)) + !atomic_read(&lo->plh_outstanding))); + if (IS_ERR(lseg)) goto out_put_layout_hdr; pnfs_put_layout_hdr(lo); goto lookup_again; diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 48d7277c60a9793b3684e9587a69d26af410a8fb..09d5c282f50e92ef7c9735162e7cd9f63a922deb 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -205,7 +205,7 @@ static void nfs_initiate_read(struct nfs_pgio_header *hdr, } static void -nfs_async_read_error(struct list_head *head) +nfs_async_read_error(struct list_head *head, int error) { struct nfs_page *req; diff --git a/fs/nfs/super.c b/fs/nfs/super.c index 6df9b85caf20560b8bafd83777ccabb7ec2f8760..d90efdea9fbd6fba6058d38c1e6221368eac6007 100644 --- a/fs/nfs/super.c +++ b/fs/nfs/super.c @@ -2239,6 +2239,7 @@ nfs_compare_remount_data(struct nfs_server *nfss, data->acdirmin != nfss->acdirmin / HZ || data->acdirmax != nfss->acdirmax / HZ || data->timeo != (10U * nfss->client->cl_timeout->to_initval / HZ) || + (data->options & NFS_OPTION_FSCACHE) != (nfss->options & NFS_OPTION_FSCACHE) || data->nfs_server.port != nfss->port || data->nfs_server.addrlen != nfss->nfs_client->cl_addrlen || !rpc_cmp_addr((struct sockaddr *)&data->nfs_server.address, diff --git a/fs/nfs/write.c b/fs/nfs/write.c index 51d0b7913c04cbdde0d1567e70c02345ad0180a7..5ab997912d8d5c45984fdb0a6c68e55aa29e14fe 100644 --- a/fs/nfs/write.c +++ b/fs/nfs/write.c @@ -1394,20 +1394,27 @@ static void nfs_redirty_request(struct nfs_page *req) nfs_release_request(req); } -static void nfs_async_write_error(struct list_head *head) +static void nfs_async_write_error(struct list_head *head, int error) { struct nfs_page *req; while (!list_empty(head)) { req = nfs_list_entry(head->next); nfs_list_remove_request(req); + if (nfs_error_is_fatal(error)) { + nfs_context_set_write_error(req->wb_context, error); + if (nfs_error_is_fatal_on_server(error)) { + nfs_write_error_remove_page(req); + continue; + } + } nfs_redirty_request(req); } } static void nfs_async_write_reschedule_io(struct nfs_pgio_header *hdr) { - nfs_async_write_error(&hdr->pages); + nfs_async_write_error(&hdr->pages, 0); filemap_fdatawrite_range(hdr->inode->i_mapping, hdr->args.offset, hdr->args.offset + hdr->args.count - 1); } diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c index 29dee9630eeccfce780719f1faf62c437eb52e7d..a18b8d7a30759b468f0afb33322cdd97703ff63c 100644 --- a/fs/notify/fanotify/fanotify.c +++ b/fs/notify/fanotify/fanotify.c @@ -148,10 +148,13 @@ struct fanotify_event_info *fanotify_alloc_event(struct fsnotify_group *group, /* * For queues with unlimited length lost events are not expected and * can possibly have security implications. Avoid losing events when - * memory is short. + * memory is short. For the limited size queues, avoid OOM killer in the + * target monitoring memcg as it may have security repercussion. 
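The fanotify and inotify hunks that follow choose allocation behaviour by queue type: an unlimited queue must never lose events (so __GFP_NOFAIL), while a bounded queue prefers dropping an event over triggering the OOM killer inside the monitored memcg (so __GFP_RETRY_MAYFAIL). A small sketch of that flag-selection policy, with stand-in macros instead of the real gfp_t values:

#include <stdio.h>
#include <limits.h>

/* Stand-in bit values; the real flags live in <linux/gfp.h>. */
#define SK_GFP_KERNEL           0x01u
#define SK_GFP_NOFAIL           0x02u
#define SK_GFP_RETRY_MAYFAIL    0x04u

static unsigned int event_gfp_flags(unsigned int max_events)
{
        unsigned int gfp = SK_GFP_KERNEL;

        if (max_events == UINT_MAX)     /* unlimited queue: never lose events */
                gfp |= SK_GFP_NOFAIL;
        else                            /* bounded queue: fail before OOM-killing the memcg */
                gfp |= SK_GFP_RETRY_MAYFAIL;
        return gfp;
}

int main(void)
{
        printf("unlimited queue -> %#x\n", event_gfp_flags(UINT_MAX));
        printf("bounded queue   -> %#x\n", event_gfp_flags(16384));
        return 0;
}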
*/ if (group->max_events == UINT_MAX) gfp |= __GFP_NOFAIL; + else + gfp |= __GFP_RETRY_MAYFAIL; /* Whoever is interested in the event, pays for the allocation. */ memalloc_use_memcg(group->memcg); diff --git a/fs/notify/inotify/inotify_fsnotify.c b/fs/notify/inotify/inotify_fsnotify.c index f4184b4f38154443816a61e89d461f352e83d76f..16b8702af0e7f443cd81465a555b3f64ec0878d3 100644 --- a/fs/notify/inotify/inotify_fsnotify.c +++ b/fs/notify/inotify/inotify_fsnotify.c @@ -99,9 +99,13 @@ int inotify_handle_event(struct fsnotify_group *group, i_mark = container_of(inode_mark, struct inotify_inode_mark, fsn_mark); - /* Whoever is interested in the event, pays for the allocation. */ + /* + * Whoever is interested in the event, pays for the allocation. Do not + * trigger OOM killer in the target monitoring memcg as it may have + * security repercussion. + */ memalloc_use_memcg(group->memcg); - event = kmalloc(alloc_len, GFP_KERNEL_ACCOUNT); + event = kmalloc(alloc_len, GFP_KERNEL_ACCOUNT | __GFP_RETRY_MAYFAIL); memalloc_unuse_memcg(); if (unlikely(!event)) { diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c index 3a24ce3deb01306d3027cdec670a7f852164f5fe..c146e12a8601fe0b11e1478997a04dc8c5708781 100644 --- a/fs/ocfs2/xattr.c +++ b/fs/ocfs2/xattr.c @@ -3833,7 +3833,6 @@ static int ocfs2_xattr_bucket_find(struct inode *inode, u16 blk_per_bucket = ocfs2_blocks_per_xattr_bucket(inode->i_sb); int low_bucket = 0, bucket, high_bucket; struct ocfs2_xattr_bucket *search; - u32 last_hash; u64 blkno, lower_blkno = 0; search = ocfs2_xattr_bucket_new(inode); @@ -3877,8 +3876,6 @@ static int ocfs2_xattr_bucket_find(struct inode *inode, if (xh->xh_count) xe = &xh->xh_entries[le16_to_cpu(xh->xh_count) - 1]; - last_hash = le32_to_cpu(xe->xe_name_hash); - /* record lower_blkno which may be the insert place. */ lower_blkno = blkno; diff --git a/fs/open.c b/fs/open.c index a00350018a4792e758d5749e6294902cf32f2174..878478745924b8ac6e324ab0bb7b2624a4d6684d 100644 --- a/fs/open.c +++ b/fs/open.c @@ -373,6 +373,25 @@ long do_faccessat(int dfd, const char __user *filename, int mode) override_cred->cap_permitted; } + /* + * The new set of credentials can *only* be used in + * task-synchronous circumstances, and does not need + * RCU freeing, unless somebody then takes a separate + * reference to it. + * + * NOTE! This is _only_ true because this credential + * is used purely for override_creds() that installs + * it as the subjective cred. Other threads will be + * accessing ->real_cred, not the subjective cred. + * + * If somebody _does_ make a copy of this (using the + * 'get_current_cred()' function), that will clear the + * non_rcu field, because now that other user may be + * expecting RCU freeing. But normal thread-synchronous + * cred accesses will keep things non-RCY. + */ + override_cred->non_rcu = 1; + old_cred = override_creds(override_cred); retry: res = user_path_at(dfd, filename, lookup_flags, &path); diff --git a/fs/proc/base.c b/fs/proc/base.c index bf9476600c7362a2cb45a4978e9eb9c045260bf5..a45d4d640f01f023c78b9b2926d45c73780d4ce9 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c @@ -205,12 +205,53 @@ static int proc_root_link(struct dentry *dentry, struct path *path) return result; } +/* + * If the user used setproctitle(), we just get the string from + * user space at arg_start, and limit it to a maximum of one page. 
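get_mm_proctitle() exists because a process may rewrite its argv area in place, the usual setproctitle(3) idiom, and /proc/<pid>/cmdline should then report the new NUL-terminated string (limited to one page) rather than only the original [arg_start, arg_end) bytes. A self-contained userspace demonstration of the effect (Linux-only; this overwrites argv[0] from the program itself, it is not a kernel API):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        char buf[256];
        ssize_t n;
        int fd;

        (void)argc;
        /* Overwrite argv[0] in place without growing it, so nothing is
         * written past the original string. */
        strncpy(argv[0], "demo: new title", strlen(argv[0]));

        fd = open("/proc/self/cmdline", O_RDONLY);
        if (fd < 0)
                return 1;
        n = read(fd, buf, sizeof(buf) - 1);
        close(fd);
        if (n < 0)
                return 1;
        buf[n] = '\0';
        /* Prints the rewritten argv[0] (possibly truncated to its original
         * length). The kernel change above additionally follows rewrites
         * that spill past the original arg_end, up to one page. */
        printf("cmdline now reads: %s\n", buf);
        return 0;
}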
+ */ +static ssize_t get_mm_proctitle(struct mm_struct *mm, char __user *buf, + size_t count, unsigned long pos, + unsigned long arg_start) +{ + char *page; + int ret, got; + + if (pos >= PAGE_SIZE) + return 0; + + page = (char *)__get_free_page(GFP_KERNEL); + if (!page) + return -ENOMEM; + + ret = 0; + got = access_remote_vm(mm, arg_start, page, PAGE_SIZE, FOLL_ANON); + if (got > 0) { + int len = strnlen(page, got); + + /* Include the NUL character if it was found */ + if (len < got) + len++; + + if (len > pos) { + len -= pos; + if (len > count) + len = count; + len -= copy_to_user(buf, page+pos, len); + if (!len) + len = -EFAULT; + ret = len; + } + } + free_page((unsigned long)page); + return ret; +} + static ssize_t get_mm_cmdline(struct mm_struct *mm, char __user *buf, size_t count, loff_t *ppos) { unsigned long arg_start, arg_end, env_start, env_end; unsigned long pos, len; - char *page; + char *page, c; /* Check if process spawned far enough to have cmdline. */ if (!mm->env_end) @@ -227,28 +268,42 @@ static ssize_t get_mm_cmdline(struct mm_struct *mm, char __user *buf, return 0; /* - * We have traditionally allowed the user to re-write - * the argument strings and overflow the end result - * into the environment section. But only do that if - * the environment area is contiguous to the arguments. + * We allow setproctitle() to overwrite the argument + * strings, and overflow past the original end. But + * only when it overflows into the environment area. */ - if (env_start != arg_end || env_start >= env_end) + if (env_start != arg_end || env_end < env_start) env_start = env_end = arg_end; - - /* .. and limit it to a maximum of one page of slop */ - if (env_end >= arg_end + PAGE_SIZE) - env_end = arg_end + PAGE_SIZE - 1; + len = env_end - arg_start; /* We're not going to care if "*ppos" has high bits set */ - pos = arg_start + *ppos; - - /* .. but we do check the result is in the proper range */ - if (pos < arg_start || pos >= env_end) + pos = *ppos; + if (pos >= len) + return 0; + if (count > len - pos) + count = len - pos; + if (!count) return 0; - /* .. and we never go past env_end */ - if (env_end - pos < count) - count = env_end - pos; + /* + * Magical special case: if the argv[] end byte is not + * zero, the user has overwritten it with setproctitle(3). + * + * Possible future enhancement: do this only once when + * pos is 0, and set a flag in the 'struct file'. + */ + if (access_remote_vm(mm, arg_end-1, &c, 1, FOLL_ANON) == 1 && c) + return get_mm_proctitle(mm, buf, count, pos, arg_start); + + /* + * For the non-setproctitle() case we limit things strictly + * to the [arg_start, arg_end[ range. + */ + pos += arg_start; + if (pos < arg_start || pos >= arg_end) + return 0; + if (count > arg_end - pos) + count = arg_end - pos; page = (char *)__get_free_page(GFP_KERNEL); if (!page) @@ -258,48 +313,11 @@ static ssize_t get_mm_cmdline(struct mm_struct *mm, char __user *buf, while (count) { int got; size_t size = min_t(size_t, PAGE_SIZE, count); - long offset; - /* - * Are we already starting past the official end? - * We always include the last byte that is *supposed* - * to be NUL - */ - offset = (pos >= arg_end) ? 
pos - arg_end + 1 : 0; - - got = access_remote_vm(mm, pos - offset, page, size + offset, FOLL_ANON); - if (got <= offset) + got = access_remote_vm(mm, pos, page, size, FOLL_ANON); + if (got <= 0) break; - got -= offset; - - /* Don't walk past a NUL character once you hit arg_end */ - if (pos + got >= arg_end) { - int n = 0; - - /* - * If we started before 'arg_end' but ended up - * at or after it, we start the NUL character - * check at arg_end-1 (where we expect the normal - * EOF to be). - * - * NOTE! This is smaller than 'got', because - * pos + got >= arg_end - */ - if (pos < arg_end) - n = arg_end - pos - 1; - - /* Cut off at first NUL after 'n' */ - got = n + strnlen(page+n, offset+got-n); - if (got < offset) - break; - got -= offset; - - /* Include the NUL if it existed */ - if (got < size) - got++; - } - - got -= copy_to_user(buf, page+offset, got); + got -= copy_to_user(buf, page, got); if (unlikely(!got)) { if (!len) len = -EFAULT; @@ -1960,9 +1978,12 @@ static int map_files_d_revalidate(struct dentry *dentry, unsigned int flags) goto out; if (!dname_to_vma_addr(dentry, &vm_start, &vm_end)) { - down_read(&mm->mmap_sem); - exact_vma_exists = !!find_exact_vma(mm, vm_start, vm_end); - up_read(&mm->mmap_sem); + status = down_read_killable(&mm->mmap_sem); + if (!status) { + exact_vma_exists = !!find_exact_vma(mm, vm_start, + vm_end); + up_read(&mm->mmap_sem); + } } mmput(mm); @@ -2008,8 +2029,11 @@ static int map_files_get_link(struct dentry *dentry, struct path *path) if (rc) goto out_mmput; + rc = down_read_killable(&mm->mmap_sem); + if (rc) + goto out_mmput; + rc = -ENOENT; - down_read(&mm->mmap_sem); vma = find_exact_vma(mm, vm_start, vm_end); if (vma && vma->vm_file) { *path = vma->vm_file->f_path; @@ -2105,7 +2129,11 @@ static struct dentry *proc_map_files_lookup(struct inode *dir, if (!mm) goto out_put_task; - down_read(&mm->mmap_sem); + result = ERR_PTR(-EINTR); + if (down_read_killable(&mm->mmap_sem)) + goto out_put_mm; + + result = ERR_PTR(-ENOENT); vma = find_exact_vma(mm, vm_start, vm_end); if (!vma) goto out_no_vma; @@ -2116,6 +2144,7 @@ static struct dentry *proc_map_files_lookup(struct inode *dir, out_no_vma: up_read(&mm->mmap_sem); +out_put_mm: mmput(mm); out_put_task: put_task_struct(task); @@ -2157,7 +2186,12 @@ proc_map_files_readdir(struct file *file, struct dir_context *ctx) mm = get_task_mm(task); if (!mm) goto out_put_task; - down_read(&mm->mmap_sem); + + ret = down_read_killable(&mm->mmap_sem); + if (ret) { + mmput(mm); + goto out_put_task; + } nr_files = 0; diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c index 31f25ff3999ff235af69460881996faec97a9fd2..75f500cb7e745f818f697f7f1d71d17804529cc8 100644 --- a/fs/proc/proc_sysctl.c +++ b/fs/proc/proc_sysctl.c @@ -498,6 +498,10 @@ static struct inode *proc_sys_make_inode(struct super_block *sb, if (root->set_ownership) root->set_ownership(head, table, &inode->i_uid, &inode->i_gid); + else { + inode->i_uid = GLOBAL_ROOT_UID; + inode->i_gid = GLOBAL_ROOT_GID; + } return inode; } diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index c5819baee35c0c7f73ea91a45afcdbd378f74e24..71aba44c4fa6ddacb6cf42f709ea5d14da80ec96 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -166,7 +166,11 @@ static void *m_start(struct seq_file *m, loff_t *ppos) if (!mm || !mmget_not_zero(mm)) return NULL; - down_read(&mm->mmap_sem); + if (down_read_killable(&mm->mmap_sem)) { + mmput(mm); + return ERR_PTR(-EINTR); + } + hold_task_mempolicy(priv); priv->tail_vma = get_gate_vma(mm); @@ -826,7 +830,10 @@ static int 
show_smaps_rollup(struct seq_file *m, void *v) memset(&mss, 0, sizeof(mss)); - down_read(&mm->mmap_sem); + ret = down_read_killable(&mm->mmap_sem); + if (ret) + goto out_put_mm; + hold_task_mempolicy(priv); for (vma = priv->mm->mmap; vma; vma = vma->vm_next) { @@ -843,8 +850,9 @@ static int show_smaps_rollup(struct seq_file *m, void *v) release_task_mempolicy(priv); up_read(&mm->mmap_sem); - mmput(mm); +out_put_mm: + mmput(mm); out_put_task: put_task_struct(priv->task); priv->task = NULL; @@ -1127,7 +1135,10 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf, goto out_mm; } - down_read(&mm->mmap_sem); + if (down_read_killable(&mm->mmap_sem)) { + count = -EINTR; + goto out_mm; + } tlb_gather_mmu(&tlb, mm, 0, -1); if (type == CLEAR_REFS_SOFT_DIRTY) { for (vma = mm->mmap; vma; vma = vma->vm_next) { @@ -1531,7 +1542,9 @@ static ssize_t pagemap_read(struct file *file, char __user *buf, /* overflow ? */ if (end < start_vaddr || end > end_vaddr) end = end_vaddr; - down_read(&mm->mmap_sem); + ret = down_read_killable(&mm->mmap_sem); + if (ret) + goto out_free; ret = walk_page_range(start_vaddr, end, &pagemap_walk); up_read(&mm->mmap_sem); start_vaddr = end; diff --git a/fs/proc/task_nommu.c b/fs/proc/task_nommu.c index 0b63d68dedb2a018e716aa4cb4e93663a8b46992..5161894a6d623370b2b9bb2275f572cdd50dd08a 100644 --- a/fs/proc/task_nommu.c +++ b/fs/proc/task_nommu.c @@ -211,7 +211,11 @@ static void *m_start(struct seq_file *m, loff_t *pos) if (!mm || !mmget_not_zero(mm)) return NULL; - down_read(&mm->mmap_sem); + if (down_read_killable(&mm->mmap_sem)) { + mmput(mm); + return ERR_PTR(-EINTR); + } + /* start from the Nth VMA */ for (p = rb_first(&mm->mm_rb); p; p = rb_next(p)) if (n-- == 0) diff --git a/fs/seq_file.c b/fs/seq_file.c index 1dea7a8a52550e234ce3c05565ad8cfa56eeb1f2..05e58b56f6202f6f2297b269f81d79e6be453e00 100644 --- a/fs/seq_file.c +++ b/fs/seq_file.c @@ -119,6 +119,7 @@ static int traverse(struct seq_file *m, loff_t offset) } if (seq_has_overflowed(m)) goto Eoverflow; + p = m->op->next(m, p, &m->index); if (pos + m->count > offset) { m->from = offset - pos; m->count -= m->from; @@ -126,7 +127,6 @@ static int traverse(struct seq_file *m, loff_t offset) } pos += m->count; m->count = 0; - p = m->op->next(m, p, &m->index); if (pos == offset) break; } diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index e1ebdbe40032e35e1d1dbd0bb05a33687aa161ba..9c2955f67f708af5922a9f6421576b67f8a43f48 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -881,6 +881,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file) /* len == 0 means wake all */ struct userfaultfd_wake_range range = { .len = 0, }; unsigned long new_flags; + bool still_valid; WRITE_ONCE(ctx->released, true); @@ -896,8 +897,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file) * taking the mmap_sem for writing. 
*/ down_write(&mm->mmap_sem); - if (!mmget_still_valid(mm)) - goto skip_mm; + still_valid = mmget_still_valid(mm); prev = NULL; for (vma = mm->mmap; vma; vma = vma->vm_next) { cond_resched(); @@ -908,19 +908,20 @@ static int userfaultfd_release(struct inode *inode, struct file *file) continue; } new_flags = vma->vm_flags & ~(VM_UFFD_MISSING | VM_UFFD_WP); - prev = vma_merge(mm, prev, vma->vm_start, vma->vm_end, - new_flags, vma->anon_vma, - vma->vm_file, vma->vm_pgoff, - vma_policy(vma), - NULL_VM_UFFD_CTX); - if (prev) - vma = prev; - else - prev = vma; + if (still_valid) { + prev = vma_merge(mm, prev, vma->vm_start, vma->vm_end, + new_flags, vma->anon_vma, + vma->vm_file, vma->vm_pgoff, + vma_policy(vma), + NULL_VM_UFFD_CTX); + if (prev) + vma = prev; + else + prev = vma; + } vma->vm_flags = new_flags; vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX; } -skip_mm: up_write(&mm->mmap_sem); mmput(mm); wakeup: diff --git a/fs/xfs/libxfs/xfs_ag_resv.c b/fs/xfs/libxfs/xfs_ag_resv.c index e701ebc36c069f5696c5b6287474bf37fad4b05c..e2ba2a3b63b20a6378283e35e1c58c939f1d2476 100644 --- a/fs/xfs/libxfs/xfs_ag_resv.c +++ b/fs/xfs/libxfs/xfs_ag_resv.c @@ -281,7 +281,7 @@ xfs_ag_resv_init( */ ask = used = 0; - mp->m_inotbt_nores = true; + mp->m_finobt_nores = true; error = xfs_refcountbt_calc_reserves(mp, tp, agno, &ask, &used); diff --git a/fs/xfs/libxfs/xfs_attr.c b/fs/xfs/libxfs/xfs_attr.c index c6299f82a6e496ac00b1ef27953a5ca5313cb9f8..6410d3e00ce07dfcf1f3422a04ce977c2b3446dd 100644 --- a/fs/xfs/libxfs/xfs_attr.c +++ b/fs/xfs/libxfs/xfs_attr.c @@ -191,6 +191,121 @@ xfs_attr_calc_size( return nblks; } +STATIC int +xfs_attr_try_sf_addname( + struct xfs_inode *dp, + struct xfs_da_args *args) +{ + + struct xfs_mount *mp = dp->i_mount; + int error, error2; + + error = xfs_attr_shortform_addname(args); + if (error == -ENOSPC) + return error; + + /* + * Commit the shortform mods, and we're done. + * NOTE: this is also the error path (EEXIST, etc). + */ + if (!error && (args->flags & ATTR_KERNOTIME) == 0) + xfs_trans_ichgtime(args->trans, dp, XFS_ICHGTIME_CHG); + + if (mp->m_flags & XFS_MOUNT_WSYNC) + xfs_trans_set_sync(args->trans); + + error2 = xfs_trans_commit(args->trans); + args->trans = NULL; + return error ? error : error2; +} + +/* + * Set the attribute specified in @args. + */ +int +xfs_attr_set_args( + struct xfs_da_args *args) +{ + struct xfs_inode *dp = args->dp; + struct xfs_buf *leaf_bp = NULL; + int error; + + /* + * If the attribute list is non-existent or a shortform list, + * upgrade it to a single-leaf-block attribute list. + */ + if (dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL || + (dp->i_d.di_aformat == XFS_DINODE_FMT_EXTENTS && + dp->i_d.di_anextents == 0)) { + + /* + * Build initial attribute list (if required). + */ + if (dp->i_d.di_aformat == XFS_DINODE_FMT_EXTENTS) + xfs_attr_shortform_create(args); + + /* + * Try to add the attr to the attribute list in the inode. + */ + error = xfs_attr_try_sf_addname(dp, args); + if (error != -ENOSPC) + return error; + + /* + * It won't fit in the shortform, transform to a leaf block. + * GROT: another possible req'mt for a double-split btree op. + */ + error = xfs_attr_shortform_to_leaf(args, &leaf_bp); + if (error) + return error; + + /* + * Prevent the leaf buffer from being unlocked so that a + * concurrent AIL push cannot grab the half-baked leaf + * buffer and run into problems with the write verifier. + * Once we're done rolling the transaction we can release + * the hold and add the attr to the leaf. 
+ */ + xfs_trans_bhold(args->trans, leaf_bp); + error = xfs_defer_finish(&args->trans); + xfs_trans_bhold_release(args->trans, leaf_bp); + if (error) { + xfs_trans_brelse(args->trans, leaf_bp); + return error; + } + } + + if (xfs_bmap_one_block(dp, XFS_ATTR_FORK)) + error = xfs_attr_leaf_addname(args); + else + error = xfs_attr_node_addname(args); + return error; +} + +/* + * Remove the attribute specified in @args. + */ +int +xfs_attr_remove_args( + struct xfs_da_args *args) +{ + struct xfs_inode *dp = args->dp; + int error; + + if (!xfs_inode_hasattr(dp)) { + error = -ENOATTR; + } else if (dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) { + ASSERT(dp->i_afp->if_flags & XFS_IFINLINE); + error = xfs_attr_shortform_remove(args); + } else if (xfs_bmap_one_block(dp, XFS_ATTR_FORK)) { + error = xfs_attr_leaf_removename(args); + } else { + error = xfs_attr_node_removename(args); + } + + return error; +} + int xfs_attr_set( struct xfs_inode *dp, @@ -200,11 +315,10 @@ xfs_attr_set( int flags) { struct xfs_mount *mp = dp->i_mount; - struct xfs_buf *leaf_bp = NULL; struct xfs_da_args args; struct xfs_trans_res tres; int rsvd = (flags & ATTR_ROOT) != 0; - int error, err2, local; + int error, local; XFS_STATS_INC(mp, xs_attr_set); @@ -255,93 +369,17 @@ xfs_attr_set( error = xfs_trans_reserve_quota_nblks(args.trans, dp, args.total, 0, rsvd ? XFS_QMOPT_RES_REGBLKS | XFS_QMOPT_FORCE_RES : XFS_QMOPT_RES_REGBLKS); - if (error) { - xfs_iunlock(dp, XFS_ILOCK_EXCL); - xfs_trans_cancel(args.trans); - return error; - } + if (error) + goto out_trans_cancel; xfs_trans_ijoin(args.trans, dp, 0); - - /* - * If the attribute list is non-existent or a shortform list, - * upgrade it to a single-leaf-block attribute list. - */ - if (dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL || - (dp->i_d.di_aformat == XFS_DINODE_FMT_EXTENTS && - dp->i_d.di_anextents == 0)) { - - /* - * Build initial attribute list (if required). - */ - if (dp->i_d.di_aformat == XFS_DINODE_FMT_EXTENTS) - xfs_attr_shortform_create(&args); - - /* - * Try to add the attr to the attribute list in - * the inode. - */ - error = xfs_attr_shortform_addname(&args); - if (error != -ENOSPC) { - /* - * Commit the shortform mods, and we're done. - * NOTE: this is also the error path (EEXIST, etc). - */ - ASSERT(args.trans != NULL); - - /* - * If this is a synchronous mount, make sure that - * the transaction goes to disk before returning - * to the user. - */ - if (mp->m_flags & XFS_MOUNT_WSYNC) - xfs_trans_set_sync(args.trans); - - if (!error && (flags & ATTR_KERNOTIME) == 0) { - xfs_trans_ichgtime(args.trans, dp, - XFS_ICHGTIME_CHG); - } - err2 = xfs_trans_commit(args.trans); - xfs_iunlock(dp, XFS_ILOCK_EXCL); - - return error ? error : err2; - } - - /* - * It won't fit in the shortform, transform to a leaf block. - * GROT: another possible req'mt for a double-split btree op. - */ - error = xfs_attr_shortform_to_leaf(&args, &leaf_bp); - if (error) - goto out; - /* - * Prevent the leaf buffer from being unlocked so that a - * concurrent AIL push cannot grab the half-baked leaf - * buffer and run into problems with the write verifier. - */ - xfs_trans_bhold(args.trans, leaf_bp); - error = xfs_defer_finish(&args.trans); - if (error) - goto out; - - /* - * Commit the leaf transformation. We'll need another (linked) - * transaction to add the new attribute to the leaf, which - * means that we have to hold & join the leaf buffer here too. 
- */ - error = xfs_trans_roll_inode(&args.trans, dp); - if (error) - goto out; - xfs_trans_bjoin(args.trans, leaf_bp); - leaf_bp = NULL; - } - - if (xfs_bmap_one_block(dp, XFS_ATTR_FORK)) - error = xfs_attr_leaf_addname(&args); - else - error = xfs_attr_node_addname(&args); + error = xfs_attr_set_args(&args); if (error) - goto out; + goto out_trans_cancel; + if (!args.trans) { + /* shortform attribute has already been committed */ + goto out_unlock; + } /* * If this is a synchronous mount, make sure that the @@ -358,17 +396,14 @@ xfs_attr_set( */ xfs_trans_log_inode(args.trans, dp, XFS_ILOG_CORE); error = xfs_trans_commit(args.trans); +out_unlock: xfs_iunlock(dp, XFS_ILOCK_EXCL); - return error; -out: - if (leaf_bp) - xfs_trans_brelse(args.trans, leaf_bp); +out_trans_cancel: if (args.trans) xfs_trans_cancel(args.trans); - xfs_iunlock(dp, XFS_ILOCK_EXCL); - return error; + goto out_unlock; } /* @@ -423,17 +458,7 @@ xfs_attr_remove( */ xfs_trans_ijoin(args.trans, dp, 0); - if (!xfs_inode_hasattr(dp)) { - error = -ENOATTR; - } else if (dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) { - ASSERT(dp->i_afp->if_flags & XFS_IFINLINE); - error = xfs_attr_shortform_remove(&args); - } else if (xfs_bmap_one_block(dp, XFS_ATTR_FORK)) { - error = xfs_attr_leaf_removename(&args); - } else { - error = xfs_attr_node_removename(&args); - } - + error = xfs_attr_remove_args(&args); if (error) goto out; diff --git a/fs/xfs/xfs_attr.h b/fs/xfs/libxfs/xfs_attr.h similarity index 98% rename from fs/xfs/xfs_attr.h rename to fs/xfs/libxfs/xfs_attr.h index 033ff8c478e2e5d1af9b4656422f02bb278c6dcc..cc04ee0aacfbea408923a6db8fad74aa08e8a84e 100644 --- a/fs/xfs/xfs_attr.h +++ b/fs/xfs/libxfs/xfs_attr.h @@ -140,7 +140,9 @@ int xfs_attr_get(struct xfs_inode *ip, const unsigned char *name, unsigned char *value, int *valuelenp, int flags); int xfs_attr_set(struct xfs_inode *dp, const unsigned char *name, unsigned char *value, int valuelen, int flags); +int xfs_attr_set_args(struct xfs_da_args *args); int xfs_attr_remove(struct xfs_inode *dp, const unsigned char *name, int flags); +int xfs_attr_remove_args(struct xfs_da_args *args); int xfs_attr_list(struct xfs_inode *dp, char *buffer, int bufsize, int flags, struct attrlist_cursor_kern *cursor); diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index 3a496ffe6551c727ad883387fb1b2a08ec386c9d..06a7da8dbda5cb0563052c6ac5fe00badc764a29 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -1019,6 +1019,34 @@ xfs_bmap_add_attrfork_local( return -EFSCORRUPTED; } +/* Set an inode attr fork off based on the format */ +int +xfs_bmap_set_attrforkoff( + struct xfs_inode *ip, + int size, + int *version) +{ + switch (ip->i_d.di_format) { + case XFS_DINODE_FMT_DEV: + ip->i_d.di_forkoff = roundup(sizeof(xfs_dev_t), 8) >> 3; + break; + case XFS_DINODE_FMT_LOCAL: + case XFS_DINODE_FMT_EXTENTS: + case XFS_DINODE_FMT_BTREE: + ip->i_d.di_forkoff = xfs_attr_shortform_bytesfit(ip, size); + if (!ip->i_d.di_forkoff) + ip->i_d.di_forkoff = xfs_default_attroffset(ip) >> 3; + else if ((ip->i_mount->m_flags & XFS_MOUNT_ATTR2) && version) + *version = 2; + break; + default: + ASSERT(0); + return -EINVAL; + } + + return 0; +} + /* * Convert inode from non-attributed to attributed. * Must not be in a transaction, ip must not be locked. 
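In xfs_bmap_set_attrforkoff() above, di_forkoff is stored in 8-byte units, so a byte count is rounded up to the next multiple of 8 and then shifted right by 3. A tiny sketch of that conversion for the xfs_dev_t case in the switch (xfs_dev_t is a 32-bit on-disk device encoding on current kernels):

#include <stdio.h>

/* Mirror of "roundup(sizeof(xfs_dev_t), 8) >> 3": round bytes up to a
 * multiple of 8, then convert bytes to 8-byte units. */
static unsigned int forkoff_from_bytes(size_t bytes)
{
        return (unsigned int)(((bytes + 7) & ~(size_t)7) >> 3);
}

int main(void)
{
        printf("4 bytes  -> di_forkoff %u\n", forkoff_from_bytes(4));   /* 1 */
        printf("20 bytes -> di_forkoff %u\n", forkoff_from_bytes(20));  /* 3 */
        return 0;
}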
@@ -1070,26 +1098,9 @@ xfs_bmap_add_attrfork( xfs_trans_ijoin(tp, ip, 0); xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE); - - switch (ip->i_d.di_format) { - case XFS_DINODE_FMT_DEV: - ip->i_d.di_forkoff = roundup(sizeof(xfs_dev_t), 8) >> 3; - break; - case XFS_DINODE_FMT_LOCAL: - case XFS_DINODE_FMT_EXTENTS: - case XFS_DINODE_FMT_BTREE: - ip->i_d.di_forkoff = xfs_attr_shortform_bytesfit(ip, size); - if (!ip->i_d.di_forkoff) - ip->i_d.di_forkoff = xfs_default_attroffset(ip) >> 3; - else if (mp->m_flags & XFS_MOUNT_ATTR2) - version = 2; - break; - default: - ASSERT(0); - error = -EINVAL; + error = xfs_bmap_set_attrforkoff(ip, size, &version); + if (error) goto trans_cancel; - } - ASSERT(ip->i_afp == NULL); ip->i_afp = kmem_zone_zalloc(xfs_ifork_zone, KM_SLEEP); ip->i_afp->if_flags = XFS_IFEXTENTS; @@ -1178,7 +1189,10 @@ xfs_iread_extents( * Root level must use BMAP_BROOT_PTR_ADDR macro to get ptr out. */ level = be16_to_cpu(block->bb_level); - ASSERT(level > 0); + if (unlikely(level == 0)) { + XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp); + return -EFSCORRUPTED; + } pp = XFS_BMAP_BROOT_PTR_ADDR(mp, block, 1, ifp->if_broot_bytes); bno = be64_to_cpu(*pp); diff --git a/fs/xfs/libxfs/xfs_bmap.h b/fs/xfs/libxfs/xfs_bmap.h index b6e9b639e731a1fafd1116b9decb22872dc765af..488dc8860fd7c551b02e21eeae3462b96e268eb8 100644 --- a/fs/xfs/libxfs/xfs_bmap.h +++ b/fs/xfs/libxfs/xfs_bmap.h @@ -183,6 +183,7 @@ void xfs_trim_extent(struct xfs_bmbt_irec *irec, xfs_fileoff_t bno, xfs_filblks_t len); void xfs_trim_extent_eof(struct xfs_bmbt_irec *, struct xfs_inode *); int xfs_bmap_add_attrfork(struct xfs_inode *ip, int size, int rsvd); +int xfs_bmap_set_attrforkoff(struct xfs_inode *ip, int size, int *version); void xfs_bmap_local_to_extents_empty(struct xfs_inode *ip, int whichfork); void __xfs_bmap_add_free(struct xfs_trans *tp, xfs_fsblock_t bno, xfs_filblks_t len, struct xfs_owner_info *oinfo, diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c index e792b167150a025ec67dc9c231264792bfdd0923..c52beee31836ac794ecb482524dd27251ef9c5de 100644 --- a/fs/xfs/libxfs/xfs_defer.c +++ b/fs/xfs/libxfs/xfs_defer.c @@ -266,13 +266,15 @@ xfs_defer_trans_roll( trace_xfs_defer_trans_roll(tp, _RET_IP_); - /* Roll the transaction. */ + /* + * Roll the transaction. Rolling always given a new transaction (even + * if committing the old one fails!) to hand back to the caller, so we + * join the held resources to the new transaction so that we always + * return with the held resources joined to @tpp, no matter what + * happened. + */ error = xfs_trans_roll(tpp); tp = *tpp; - if (error) { - trace_xfs_defer_trans_roll_error(tp, error); - return error; - } /* Rejoin the joined inodes. 
*/ for (i = 0; i < ipcount; i++) @@ -284,6 +286,8 @@ xfs_defer_trans_roll( xfs_trans_bhold(tp, bplist[i]); } + if (error) + trace_xfs_defer_trans_roll_error(tp, error); return error; } diff --git a/fs/xfs/libxfs/xfs_ialloc_btree.c b/fs/xfs/libxfs/xfs_ialloc_btree.c index 86c50208a14374e2a1588b5686df8d30dc677c57..adb2f6df5a1141c0a41d6328959668e751da366a 100644 --- a/fs/xfs/libxfs/xfs_ialloc_btree.c +++ b/fs/xfs/libxfs/xfs_ialloc_btree.c @@ -124,7 +124,7 @@ xfs_finobt_alloc_block( union xfs_btree_ptr *new, int *stat) { - if (cur->bc_mp->m_inotbt_nores) + if (cur->bc_mp->m_finobt_nores) return xfs_inobt_alloc_block(cur, start, new, stat); return __xfs_inobt_alloc_block(cur, start, new, stat, XFS_AG_RESV_METADATA); @@ -157,7 +157,7 @@ xfs_finobt_free_block( struct xfs_btree_cur *cur, struct xfs_buf *bp) { - if (cur->bc_mp->m_inotbt_nores) + if (cur->bc_mp->m_finobt_nores) return xfs_inobt_free_block(cur, bp); return __xfs_inobt_free_block(cur, bp, XFS_AG_RESV_METADATA); } diff --git a/fs/xfs/xfs_attr_list.c b/fs/xfs/xfs_attr_list.c index a58034049995b4c6a7db190164ea886ec5113dff..3d213a7394c5b747dfb5cffc17dfb3d44d66cf03 100644 --- a/fs/xfs/xfs_attr_list.c +++ b/fs/xfs/xfs_attr_list.c @@ -555,6 +555,7 @@ xfs_attr_put_listent( attrlist_ent_t *aep; int arraytop; + ASSERT(!context->seen_enough); ASSERT(!(context->flags & ATTR_KERNOVAL)); ASSERT(context->count >= 0); ASSERT(context->count < (ATTR_MAX_VALUELEN/8)); diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c index 211b06e4702e77cdb58162c0098b48e09e870a93..41ad9eaab6ce9d8284bcd58c8e2e94963d52c5d5 100644 --- a/fs/xfs/xfs_bmap_util.c +++ b/fs/xfs/xfs_bmap_util.c @@ -1080,7 +1080,7 @@ xfs_adjust_extent_unmap_boundaries( return 0; } -static int +int xfs_flush_unmap_range( struct xfs_inode *ip, xfs_off_t offset, diff --git a/fs/xfs/xfs_bmap_util.h b/fs/xfs/xfs_bmap_util.h index 87363d136bb618145c396223da5b4755bdd572c8..9c73d012f56ab91053bbbe9ea687bd2136568d18 100644 --- a/fs/xfs/xfs_bmap_util.h +++ b/fs/xfs/xfs_bmap_util.h @@ -76,6 +76,8 @@ int xfs_swap_extents(struct xfs_inode *ip, struct xfs_inode *tip, xfs_daddr_t xfs_fsb_to_db(struct xfs_inode *ip, xfs_fsblock_t fsb); xfs_extnum_t xfs_bmap_count_leaves(struct xfs_ifork *ifp, xfs_filblks_t *count); +int xfs_flush_unmap_range(struct xfs_inode *ip, xfs_off_t offset, + xfs_off_t len); int xfs_bmap_count_blocks(struct xfs_trans *tp, struct xfs_inode *ip, int whichfork, xfs_extnum_t *nextents, xfs_filblks_t *count); diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index 87e6dd5326d5daf8bfa3b883b129b350d150877e..a1af984e4913e94e88b0eac261c5479728d8b24e 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -277,7 +277,8 @@ xfs_dquot_set_prealloc_limits(struct xfs_dquot *dqp) /* * Ensure that the given in-core dquot has a buffer on disk backing it, and - * return the buffer. This is called when the bmapi finds a hole. + * return the buffer locked and held. This is called when the bmapi finds a + * hole. */ STATIC int xfs_dquot_disk_alloc( @@ -355,13 +356,14 @@ xfs_dquot_disk_alloc( * If everything succeeds, the caller of this function is returned a * buffer that is locked and held to the transaction. The caller * is responsible for unlocking any buffer passed back, either - * manually or by committing the transaction. + * manually or by committing the transaction. On error, the buffer is + * released and not passed back. 
*/ xfs_trans_bhold(tp, bp); error = xfs_defer_finish(tpp); - tp = *tpp; if (error) { - xfs_buf_relse(bp); + xfs_trans_bhold_release(*tpp, bp); + xfs_trans_brelse(*tpp, bp); return error; } *bpp = bp; @@ -521,7 +523,6 @@ xfs_qm_dqread_alloc( struct xfs_buf **bpp) { struct xfs_trans *tp; - struct xfs_buf *bp; int error; error = xfs_trans_alloc(mp, &M_RES(mp)->tr_qm_dqalloc, @@ -529,7 +530,7 @@ xfs_qm_dqread_alloc( if (error) goto err; - error = xfs_dquot_disk_alloc(&tp, dqp, &bp); + error = xfs_dquot_disk_alloc(&tp, dqp, bpp); if (error) goto err_cancel; @@ -539,10 +540,10 @@ xfs_qm_dqread_alloc( * Buffer was held to the transaction, so we have to unlock it * manually here because we're not passing it back. */ - xfs_buf_relse(bp); + xfs_buf_relse(*bpp); + *bpp = NULL; goto err; } - *bpp = bp; return 0; err_cancel: diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index 61a5ad2600e865a6b11a8345f956eba10203abd2..259549698ba7e607b105360f36ececb7e92fff51 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -517,6 +517,9 @@ xfs_file_dio_aio_write( } if (iocb->ki_flags & IOCB_NOWAIT) { + /* unaligned dio always waits, bail */ + if (unaligned_io) + return -EAGAIN; if (!xfs_ilock_nowait(ip, iolock)) return -EAGAIN; } else { @@ -529,18 +532,14 @@ xfs_file_dio_aio_write( count = iov_iter_count(from); /* - * If we are doing unaligned IO, wait for all other IO to drain, - * otherwise demote the lock if we had to take the exclusive lock - * for other reasons in xfs_file_aio_write_checks. + * If we are doing unaligned IO, we can't allow any other overlapping IO + * in-flight at the same time or we risk data corruption. Wait for all + * other IO to drain before we submit. If the IO is aligned, demote the + * iolock if we had to take the exclusive lock in + * xfs_file_aio_write_checks() for other reasons. */ if (unaligned_io) { - /* If we are going to wait for other DIO to finish, bail */ - if (iocb->ki_flags & IOCB_NOWAIT) { - if (atomic_read(&inode->i_dio_count)) - return -EAGAIN; - } else { - inode_dio_wait(inode); - } + inode_dio_wait(inode); } else if (iolock == XFS_IOLOCK_EXCL) { xfs_ilock_demote(ip, XFS_IOLOCK_EXCL); iolock = XFS_IOLOCK_SHARED; @@ -548,6 +547,14 @@ xfs_file_dio_aio_write( trace_xfs_file_direct_write(ip, count, iocb->ki_pos); ret = iomap_dio_rw(iocb, from, &xfs_iomap_ops, xfs_dio_write_end_io); + + /* + * If unaligned, this is the only IO in-flight. If it has not yet + * completed, wait on it before we release the iolock to prevent + * subsequent overlapping IO. 
+ */ + if (ret == -EIOCBQUEUED && unaligned_io) + inode_dio_wait(inode); out: xfs_iunlock(ip, iolock); diff --git a/fs/xfs/xfs_fsops.c b/fs/xfs/xfs_fsops.c index 7c00b8bedfe358ae508f8e10ff25771618529a1d..09fd602507effcddf57492e0a410691880d4acba 100644 --- a/fs/xfs/xfs_fsops.c +++ b/fs/xfs/xfs_fsops.c @@ -534,6 +534,7 @@ xfs_fs_reserve_ag_blocks( int error = 0; int err2; + mp->m_finobt_nores = false; for (agno = 0; agno < mp->m_sb.sb_agcount; agno++) { pag = xfs_perag_get(mp, agno); err2 = xfs_ag_resv_init(pag, NULL); diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index 05db9540e4597536211446475d301eac834e972e..5ed84d6c70597f02e0accd3bc907e1773d1e2293 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -1332,7 +1332,7 @@ xfs_create_tmpfile( if (error) goto out_trans_cancel; - error = xfs_dir_ialloc(&tp, dp, mode, 1, 0, prid, &ip); + error = xfs_dir_ialloc(&tp, dp, mode, 0, 0, prid, &ip); if (error) goto out_trans_cancel; @@ -1754,7 +1754,7 @@ xfs_inactive_ifree( * now remains allocated and sits on the unlinked list until the fs is * repaired. */ - if (unlikely(mp->m_inotbt_nores)) { + if (unlikely(mp->m_finobt_nores)) { error = xfs_trans_alloc(mp, &M_RES(mp)->tr_ifree, XFS_IFREE_SPACE_RES(mp), 0, XFS_TRANS_RESERVE, &tp); @@ -1907,11 +1907,8 @@ xfs_inactive( } /* - * This is called when the inode's link count goes to 0 or we are creating a - * tmpfile via O_TMPFILE. In the case of a tmpfile, @ignore_linkcount will be - * set to true as the link count is dropped to zero by the VFS after we've - * created the file successfully, so we have to add it to the unlinked list - * while the link count is non-zero. + * This is called when the inode's link count has gone to 0 or we are creating + * a tmpfile via O_TMPFILE. The inode @ip must have nlink == 0. * * We place the on-disk inode on a list in the AGI. It will be pulled from this * list when the inode is freed. @@ -1931,6 +1928,7 @@ xfs_iunlink( int offset; int error; + ASSERT(VFS_I(ip)->i_nlink == 0); ASSERT(VFS_I(ip)->i_mode != 0); /* @@ -2837,11 +2835,9 @@ xfs_rename_alloc_whiteout( /* * Prepare the tmpfile inode as if it were created through the VFS. - * Otherwise, the link increment paths will complain about nlink 0->1. - * Drop the link count as done by d_tmpfile(), complete the inode setup - * and flag it as linkable. + * Complete the inode setup and flag it as linkable. nlink is already + * zero, so we can skip the drop_nlink. */ - drop_nlink(VFS_I(tmpfile)); xfs_setup_iops(tmpfile); xfs_finish_inode_setup(tmpfile); VFS_I(tmpfile)->i_state |= I_LINKABLE; diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c index f48ffd7a8d3e491d76defe66961194a635276115..e427ad097e2eeb7d14bc381309efcd94c838f72d 100644 --- a/fs/xfs/xfs_iops.c +++ b/fs/xfs/xfs_iops.c @@ -191,9 +191,18 @@ xfs_generic_create( xfs_setup_iops(ip); - if (tmpfile) + if (tmpfile) { + /* + * The VFS requires that any inode fed to d_tmpfile must have + * nlink == 1 so that it can decrement the nlink in d_tmpfile. + * However, we created the temp file with nlink == 0 because + * we're not allowed to put an inode with nlink > 0 on the + * unlinked list. Therefore we have to set nlink to 1 so that + * d_tmpfile can immediately set it back to zero. + */ + set_nlink(inode, 1); d_tmpfile(dentry, inode); - else + } else d_instantiate(dentry, inode); xfs_finish_inode_setup(ip); @@ -522,6 +531,10 @@ xfs_vn_getattr( } } + /* + * Note: If you add another clause to set an attribute flag, please + * update attributes_mask below. 
+ */ if (ip->i_d.di_flags & XFS_DIFLAG_IMMUTABLE) stat->attributes |= STATX_ATTR_IMMUTABLE; if (ip->i_d.di_flags & XFS_DIFLAG_APPEND) @@ -529,6 +542,10 @@ xfs_vn_getattr( if (ip->i_d.di_flags & XFS_DIFLAG_NODUMP) stat->attributes |= STATX_ATTR_NODUMP; + stat->attributes_mask |= (STATX_ATTR_IMMUTABLE | + STATX_ATTR_APPEND | + STATX_ATTR_NODUMP); + switch (inode->i_mode & S_IFMT) { case S_IFBLK: case S_IFCHR: @@ -786,6 +803,7 @@ xfs_setattr_nonsize( out_cancel: xfs_trans_cancel(tp); + xfs_iunlock(ip, XFS_ILOCK_EXCL); out_dqrele: xfs_qm_dqrele(udqp); xfs_qm_dqrele(gdqp); diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h index 7964513c3128159744a6edc486155c8ef41364a2..7e0bf952e087d9ce8ae854081190c9bf55a1bc51 100644 --- a/fs/xfs/xfs_mount.h +++ b/fs/xfs/xfs_mount.h @@ -127,7 +127,7 @@ typedef struct xfs_mount { struct mutex m_growlock; /* growfs mutex */ int m_fixedfsid[2]; /* unchanged for life of FS */ uint64_t m_flags; /* global mount flags */ - bool m_inotbt_nores; /* no per-AG finobt resv. */ + bool m_finobt_nores; /* no per-AG finobt resv. */ int m_ialloc_inos; /* inodes in inode allocation */ int m_ialloc_blks; /* blocks in inode allocation */ int m_ialloc_min_blks;/* min blocks in sparse inode diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c index 7088f44c0c5947d4fccf5df4315d56158e8da5c3..f3c393f309e19c8b99e6f2c683c84766efa6c1a8 100644 --- a/fs/xfs/xfs_reflink.c +++ b/fs/xfs/xfs_reflink.c @@ -1368,9 +1368,19 @@ xfs_reflink_remap_prep( if (ret) goto out_unlock; - /* Zap any page cache for the destination file's range. */ - truncate_inode_pages_range(&inode_out->i_data, pos_out, - PAGE_ALIGN(pos_out + *len) - 1); + /* + * If pos_out > EOF, we may have dirtied blocks between EOF and + * pos_out. In that case, we need to extend the flush and unmap to cover + * from EOF to the end of the copy length. + */ + if (pos_out > XFS_ISIZE(dest)) { + loff_t flen = *len + (pos_out - XFS_ISIZE(dest)); + ret = xfs_flush_unmap_range(dest, XFS_ISIZE(dest), flen); + } else { + ret = xfs_flush_unmap_range(dest, pos_out, *len); + } + if (ret) + goto out_unlock; /* If we're altering the file contents... */ if (!is_dedupe) { diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 207ee302b1bb9f4039b8963b7ae1a68ce8ac8487..dce8114e3198c349b2f36b312a71adc92236cf48 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1561,6 +1561,13 @@ xfs_mount_alloc( INIT_DELAYED_WORK(&mp->m_eofblocks_work, xfs_eofblocks_worker); INIT_DELAYED_WORK(&mp->m_cowblocks_work, xfs_cowblocks_worker); mp->m_kobj.kobject.kset = xfs_kset; + /* + * We don't create the finobt per-ag space reservation until after log + * recovery, so we must set this to true so that an ifree transaction + * started during log recovery will not depend on space reservations + * for finobt expansion. 
+ */ + mp->m_finobt_nores = true; return mp; } diff --git a/fs/xfs/xfs_xattr.c b/fs/xfs/xfs_xattr.c index 63ee1d5bf1d77a33d7f760f0f10d722266488cb1..9a63016009a1394f41beaff8323a5568b6ceab22 100644 --- a/fs/xfs/xfs_xattr.c +++ b/fs/xfs/xfs_xattr.c @@ -129,6 +129,9 @@ __xfs_xattr_put_listent( char *offset; int arraytop; + if (context->count < 0 || context->seen_enough) + return; + if (!context->alist) goto compute_size; diff --git a/include/asm-generic/bug.h b/include/asm-generic/bug.h index 20561a60db9c4f077c68ee1a182eac38ff96d4dd..d4fb510a4fbe9e4a6dbe6d7ef85eec45dec65647 100644 --- a/include/asm-generic/bug.h +++ b/include/asm-generic/bug.h @@ -104,8 +104,10 @@ extern void warn_slowpath_null(const char *file, const int line); warn_slowpath_fmt_taint(__FILE__, __LINE__, taint, arg) #else extern __printf(1, 2) void __warn_printk(const char *fmt, ...); -#define __WARN() __WARN_TAINT(TAINT_WARN) -#define __WARN_printf(arg...) do { __warn_printk(arg); __WARN(); } while (0) +#define __WARN() do { \ + printk(KERN_WARNING CUT_HERE); __WARN_TAINT(TAINT_WARN); \ +} while (0) +#define __WARN_printf(arg...) __WARN_printf_taint(TAINT_WARN, arg) #define __WARN_printf_taint(taint, arg...) \ do { __warn_printk(arg); __WARN_TAINT(taint); } while (0) #endif diff --git a/include/asm-generic/getorder.h b/include/asm-generic/getorder.h index c64bea7a52bebd5c7332203e1e3a78bfa69b84c3..e9f20b813a699a3fb7630fe76aba3d1ab60c892e 100644 --- a/include/asm-generic/getorder.h +++ b/include/asm-generic/getorder.h @@ -7,24 +7,6 @@ #include #include -/* - * Runtime evaluation of get_order() - */ -static inline __attribute_const__ -int __get_order(unsigned long size) -{ - int order; - - size--; - size >>= PAGE_SHIFT; -#if BITS_PER_LONG == 32 - order = fls(size); -#else - order = fls64(size); -#endif - return order; -} - /** * get_order - Determine the allocation order of a memory size * @size: The size for which to get the order @@ -43,19 +25,27 @@ int __get_order(unsigned long size) * to hold an object of the specified size. * * The result is undefined if the size is 0. - * - * This function may be used to initialise variables with compile time - * evaluations of constants. */ -#define get_order(n) \ -( \ - __builtin_constant_p(n) ? ( \ - ((n) == 0UL) ? BITS_PER_LONG - PAGE_SHIFT : \ - (((n) < (1UL << PAGE_SHIFT)) ? 
0 : \ - ilog2((n) - 1) - PAGE_SHIFT + 1) \ - ) : \ - __get_order(n) \ -) +static inline __attribute_const__ int get_order(unsigned long size) +{ + if (__builtin_constant_p(size)) { + if (!size) + return BITS_PER_LONG - PAGE_SHIFT; + + if (size < (1UL << PAGE_SHIFT)) + return 0; + + return ilog2((size) - 1) - PAGE_SHIFT + 1; + } + + size--; + size >>= PAGE_SHIFT; +#if BITS_PER_LONG == 32 + return fls(size); +#else + return fls64(size); +#endif +} #endif /* __ASSEMBLY__ */ diff --git a/include/drm/drm_atomic_helper.h b/include/drm/drm_atomic_helper.h index 99e2a5297c697cf79d0abb500bea997487c5bade..f88752489d0c0dbc019c80b627b792fe81d28db7 100644 --- a/include/drm/drm_atomic_helper.h +++ b/include/drm/drm_atomic_helper.h @@ -168,6 +168,7 @@ void drm_atomic_helper_plane_destroy_state(struct drm_plane *plane, void __drm_atomic_helper_connector_reset(struct drm_connector *connector, struct drm_connector_state *conn_state); void drm_atomic_helper_connector_reset(struct drm_connector *connector); +void drm_atomic_helper_connector_tv_reset(struct drm_connector *connector); void __drm_atomic_helper_connector_duplicate_state(struct drm_connector *connector, struct drm_connector_state *state); diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h index 97ea41dc678fe64e8bcb41928677028faf89a639..6712a934c051bcdc33bd6ef3419688d706b01cd3 100644 --- a/include/drm/drm_connector.h +++ b/include/drm/drm_connector.h @@ -343,14 +343,38 @@ int drm_display_info_set_bus_formats(struct drm_display_info *info, const u32 *formats, unsigned int num_formats); +/** + * struct drm_connector_tv_margins - TV connector related margins + * + * Describes the margins in pixels to put around the image on TV + * connectors to deal with overscan. + */ +struct drm_connector_tv_margins { + /** + * @bottom: Bottom margin in pixels. + */ + unsigned int bottom; + + /** + * @left: Left margin in pixels. + */ + unsigned int left; + + /** + * @right: Right margin in pixels. + */ + unsigned int right; + + /** + * @top: Top margin in pixels. + */ + unsigned int top; +}; + /** * struct drm_tv_connector_state - TV connector related states * @subconnector: selected subconnector - * @margins: margins - * @margins.left: left margin - * @margins.right: right margin - * @margins.top: top margin - * @margins.bottom: bottom margin + * @margins: TV margins * @mode: TV mode * @brightness: brightness in percent * @contrast: contrast in percent @@ -361,12 +385,7 @@ int drm_display_info_set_bus_formats(struct drm_display_info *info, */ struct drm_tv_connector_state { enum drm_mode_subconnector subconnector; - struct { - unsigned int left; - unsigned int right; - unsigned int top; - unsigned int bottom; - } margins; + struct drm_connector_tv_margins margins; unsigned int mode; unsigned int brightness; unsigned int contrast; @@ -755,19 +774,123 @@ struct drm_connector_funcs { const struct drm_connector_state *state); }; -/* mode specified on the command line */ +/** + * struct drm_cmdline_mode - DRM Mode passed through the kernel command-line + * + * Each connector can have an initial mode with additional options + * passed through the kernel command line. This structure allows to + * express those parameters and will be filled by the command-line + * parser. + */ struct drm_cmdline_mode { + /** + * @name: + * + * Name of the mode. + */ + char name[DRM_DISPLAY_MODE_LEN]; + + /** + * @specified: + * + * Has a mode been read from the command-line? 
+ */ bool specified; + + /** + * @refresh_specified: + * + * Did the mode have a preferred refresh rate? + */ bool refresh_specified; + + /** + * @bpp_specified: + * + * Did the mode have a preferred BPP? + */ bool bpp_specified; - int xres, yres; + + /** + * @xres: + * + * Active resolution on the X axis, in pixels. + */ + int xres; + + /** + * @yres: + * + * Active resolution on the Y axis, in pixels. + */ + int yres; + + /** + * @bpp: + * + * Bits per pixels for the mode. + */ int bpp; + + /** + * @refresh: + * + * Refresh rate, in Hertz. + */ int refresh; + + /** + * @rb: + * + * Do we need to use reduced blanking? + */ bool rb; + + /** + * @interlace: + * + * The mode is interlaced. + */ bool interlace; + + /** + * @cvt: + * + * The timings will be calculated using the VESA Coordinated + * Video Timings instead of looking up the mode from a table. + */ bool cvt; + + /** + * @margins: + * + * Add margins to the mode calculation (1.8% of xres rounded + * down to 8 pixels and 1.8% of yres). + */ bool margins; + + /** + * @force: + * + * Ignore the hotplug state of the connector, and force its + * state to one of the DRM_FORCE_* values. + */ enum drm_connector_force force; + + /** + * @rotation_reflection: + * + * Initial rotation and reflection of the mode setup from the + * command line. See DRM_MODE_ROTATE_* and + * DRM_MODE_REFLECT_*. The only rotations supported are + * DRM_MODE_ROTATE_0 and DRM_MODE_ROTATE_180. + */ + unsigned int rotation_reflection; + + /** + * @tv_margins: TV margins to apply to the mode. + */ + struct drm_connector_tv_margins tv_margins; }; /** @@ -1175,9 +1298,11 @@ const char *drm_get_tv_select_name(int val); const char *drm_get_content_protection_name(int val); int drm_mode_create_dvi_i_properties(struct drm_device *dev); +int drm_mode_create_tv_margin_properties(struct drm_device *dev); int drm_mode_create_tv_properties(struct drm_device *dev, unsigned int num_modes, const char * const modes[]); +void drm_connector_attach_tv_margin_properties(struct drm_connector *conn); int drm_mode_create_scaling_mode_property(struct drm_device *dev); int drm_connector_attach_content_type_property(struct drm_connector *dev); int drm_connector_attach_scaling_mode_property(struct drm_connector *connector, diff --git a/include/drm/drm_displayid.h b/include/drm/drm_displayid.h index c0d4df6a606fc00bbc0809321777c154738f574b..9d3b745c3107763a84269eb5a48484a698bb4bdd 100644 --- a/include/drm/drm_displayid.h +++ b/include/drm/drm_displayid.h @@ -40,6 +40,7 @@ #define DATA_BLOCK_DISPLAY_INTERFACE 0x0f #define DATA_BLOCK_STEREO_DISPLAY_INTERFACE 0x10 #define DATA_BLOCK_TILED_DISPLAY 0x12 +#define DATA_BLOCK_CTA 0x81 #define DATA_BLOCK_VENDOR_SPECIFIC 0x7f @@ -90,4 +91,13 @@ struct displayid_detailed_timing_block { struct displayid_block base; struct displayid_detailed_timings_1 timings[0]; }; + +#define for_each_displayid_db(displayid, block, idx, length) \ + for ((block) = (struct displayid_block *)&(displayid)[idx]; \ + (idx) + sizeof(struct displayid_block) <= (length) && \ + (idx) + sizeof(struct displayid_block) + (block)->num_bytes <= (length) && \ + (block)->num_bytes > 0; \ + (idx) += (block)->num_bytes + sizeof(struct displayid_block), \ + (block) = (struct displayid_block *)&(displayid)[idx]) + #endif diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h index a0b202e1d69a08363c9cf1d39dbc3377f9d07ef1..238c7d2242076327803baaab9e7d768937427f77 100644 --- a/include/drm/drm_mode_config.h +++ b/include/drm/drm_mode_config.h @@ -668,22 +668,22 @@ struct 
drm_mode_config { struct drm_property *tv_mode_property; /** * @tv_left_margin_property: Optional TV property to set the left - * margin. + * margin (expressed in pixels). */ struct drm_property *tv_left_margin_property; /** * @tv_right_margin_property: Optional TV property to set the right - * margin. + * margin (expressed in pixels). */ struct drm_property *tv_right_margin_property; /** * @tv_top_margin_property: Optional TV property to set the right - * margin. + * margin (expressed in pixels). */ struct drm_property *tv_top_margin_property; /** * @tv_bottom_margin_property: Optional TV property to set the right - * margin. + * margin (expressed in pixels). */ struct drm_property *tv_bottom_margin_property; /** diff --git a/include/drm/i915_pciids.h b/include/drm/i915_pciids.h index fbf5cfc9b352f7a005071909479624e46253f516..fd965ffbb92e33fcef415033b1a78a09849bda05 100644 --- a/include/drm/i915_pciids.h +++ b/include/drm/i915_pciids.h @@ -386,6 +386,7 @@ INTEL_VGA_DEVICE(0x3E91, info), /* SRV GT2 */ \ INTEL_VGA_DEVICE(0x3E92, info), /* SRV GT2 */ \ INTEL_VGA_DEVICE(0x3E96, info), /* SRV GT2 */ \ + INTEL_VGA_DEVICE(0x3E98, info), /* SRV GT2 */ \ INTEL_VGA_DEVICE(0x3E9A, info) /* SRV GT2 */ /* CFL H */ diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h index 90ac450745f1847f9bb9e5b36ba6a0753d0eca05..561fefc2a9801856b71c23725c369ad0f37961e7 100644 --- a/include/kvm/arm_vgic.h +++ b/include/kvm/arm_vgic.h @@ -361,6 +361,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu); void kvm_vgic_load(struct kvm_vcpu *vcpu); void kvm_vgic_put(struct kvm_vcpu *vcpu); +void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu); #define irqchip_in_kernel(k) (!!((k)->arch.vgic.in_kernel)) #define vgic_initialized(k) ((k)->arch.vgic.initialized) diff --git a/include/linux/acpi.h b/include/linux/acpi.h index de8d3d3fa6512e3e9382e23cf4fcd03c36b77097..b4d23b3a2ef2daf7cf93777ec409f44ebbec9b66 100644 --- a/include/linux/acpi.h +++ b/include/linux/acpi.h @@ -326,7 +326,10 @@ void acpi_set_irq_model(enum acpi_irq_model_id model, #ifdef CONFIG_X86_IO_APIC extern int acpi_get_override_irq(u32 gsi, int *trigger, int *polarity); #else -#define acpi_get_override_irq(gsi, trigger, polarity) (-1) +static inline int acpi_get_override_irq(u32 gsi, int *trigger, int *polarity) +{ + return -1; +} #endif /* * This function undoes the effect of one call to acpi_register_gsi(). diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 940c794042aea2ddc69e06ff3ac9b84d9a9d8e05..7b7c0bc6a514830c4d4091eb78236bc06be30e62 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -508,6 +508,12 @@ struct request_queue { * various queue flags, see QUEUE_* below */ unsigned long queue_flags; + /* + * Number of contexts that have called blk_set_pm_only(). If this + * counter is above zero then only RQF_PM and RQF_PREEMPT requests are + * processed. + */ + atomic_t pm_only; /* * ida allocated id for this queue. 
Used to index queues from @@ -703,7 +709,6 @@ struct request_queue { #define QUEUE_FLAG_REGISTERED 26 /* queue has been registered to a disk */ #define QUEUE_FLAG_SCSI_PASSTHROUGH 27 /* queue supports SCSI commands */ #define QUEUE_FLAG_QUIESCED 28 /* queue has been quiesced */ -#define QUEUE_FLAG_PREEMPT_ONLY 29 /* only process REQ_PREEMPT requests */ #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ (1 << QUEUE_FLAG_SAME_COMP) | \ @@ -741,12 +746,11 @@ bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q); ((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \ REQ_FAILFAST_DRIVER)) #define blk_queue_quiesced(q) test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags) -#define blk_queue_preempt_only(q) \ - test_bit(QUEUE_FLAG_PREEMPT_ONLY, &(q)->queue_flags) +#define blk_queue_pm_only(q) atomic_read(&(q)->pm_only) #define blk_queue_fua(q) test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags) -extern int blk_set_preempt_only(struct request_queue *q); -extern void blk_clear_preempt_only(struct request_queue *q); +extern void blk_set_pm_only(struct request_queue *q); +extern void blk_clear_pm_only(struct request_queue *q); static inline int queue_in_flight(struct request_queue *q) { diff --git a/include/linux/ccp.h b/include/linux/ccp.h index 7e9c991c95e03fc1df55553461d95f9153d37cd7..43ed9e77cf81a6dec3fac4a8de3648d63f4d454d 100644 --- a/include/linux/ccp.h +++ b/include/linux/ccp.h @@ -173,6 +173,8 @@ struct ccp_aes_engine { enum ccp_aes_mode mode; enum ccp_aes_action action; + u32 authsize; + struct scatterlist *key; u32 key_len; /* In bytes */ diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h index 46a706e2ba3542abd9ba0148928685d1c1e5f9e6..34fb541e90be472d554e31861d58bdadc00f0d13 100644 --- a/include/linux/cgroup-defs.h +++ b/include/linux/cgroup-defs.h @@ -209,6 +209,7 @@ struct css_set { */ struct list_head tasks; struct list_head mg_tasks; + struct list_head dying_tasks; /* all css_task_iters currently walking this cset */ struct list_head task_iters; diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h index 8937d48a5389d1c9eb3d2da2676559760a4ba323..b4854b48a4f3ddbfbf115b716ce60feb70a80ad2 100644 --- a/include/linux/cgroup.h +++ b/include/linux/cgroup.h @@ -43,6 +43,9 @@ /* walk all threaded css_sets in the domain */ #define CSS_TASK_ITER_THREADED (1U << 1) +/* internal flags */ +#define CSS_TASK_ITER_SKIPPED (1U << 16) + /* a css_task_iter should be treated as an opaque object */ struct css_task_iter { struct cgroup_subsys *ss; @@ -57,6 +60,7 @@ struct css_task_iter { struct list_head *task_pos; struct list_head *tasks_head; struct list_head *mg_tasks_head; + struct list_head *dying_tasks_head; struct css_set *cur_cset; struct css_set *cur_dcset; diff --git a/include/linux/coda.h b/include/linux/coda.h index d30209b9cef81773b26e62f79b67de402e56ff68..0ca0c83fdb1c4864037d3162939d4a38b655581c 100644 --- a/include/linux/coda.h +++ b/include/linux/coda.h @@ -58,8 +58,7 @@ Mellon the rights to redistribute these changes without encumbrance. 
#ifndef _CODA_HEADER_ #define _CODA_HEADER_ -#if defined(__linux__) typedef unsigned long long u_quad_t; -#endif + #include #endif diff --git a/include/linux/coda_psdev.h b/include/linux/coda_psdev.h index 15170954aa2b3d35b1f817c782e30febee6349f0..57d2b2faf6a3e13fce381dc963d57fba2d6e4f71 100644 --- a/include/linux/coda_psdev.h +++ b/include/linux/coda_psdev.h @@ -19,6 +19,17 @@ struct venus_comm { struct mutex vc_mutex; }; +/* messages between coda filesystem in kernel and Venus */ +struct upc_req { + struct list_head uc_chain; + caddr_t uc_data; + u_short uc_flags; + u_short uc_inSize; /* Size is at most 5000 bytes */ + u_short uc_outSize; + u_short uc_opcode; /* copied from data to save lookup */ + int uc_unique; + wait_queue_head_t uc_sleep; /* process' wait queue */ +}; static inline struct venus_comm *coda_vcp(struct super_block *sb) { diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h index d64d8c2bbdabc5be1fd29d9f91247b654d3f86b0..d67c0035165c2a90273d8fbc90aa7529ec729846 100644 --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -116,10 +116,10 @@ enum cpuhp_state { CPUHP_AP_PERF_ARM_ACPI_STARTING, CPUHP_AP_PERF_ARM_STARTING, CPUHP_AP_ARM_L2X0_STARTING, + CPUHP_AP_EXYNOS4_MCT_TIMER_STARTING, CPUHP_AP_ARM_ARCH_TIMER_STARTING, CPUHP_AP_ARM_GLOBAL_TIMER_STARTING, CPUHP_AP_JCORE_TIMER_STARTING, - CPUHP_AP_EXYNOS4_MCT_TIMER_STARTING, CPUHP_AP_ARM_TWD_STARTING, CPUHP_AP_QCOM_TIMER_STARTING, CPUHP_AP_ARMADA_TIMER_STARTING, @@ -170,6 +170,7 @@ enum cpuhp_state { CPUHP_AP_WATCHDOG_ONLINE, CPUHP_AP_WORKQUEUE_ONLINE, CPUHP_AP_RCUTREE_ONLINE, + CPUHP_AP_BASE_CACHEINFO_ONLINE, CPUHP_AP_ONLINE_DYN, CPUHP_AP_ONLINE_DYN_END = CPUHP_AP_ONLINE_DYN + 30, CPUHP_AP_X86_HPET_ONLINE, diff --git a/include/linux/cred.h b/include/linux/cred.h index 7eed6101c7914a63dcec50a7f287a6618eb2cd08..1dc351d8548bfaa9f6afa4024910277fb84d4f2f 100644 --- a/include/linux/cred.h +++ b/include/linux/cred.h @@ -150,7 +150,11 @@ struct cred { struct user_struct *user; /* real user ID subscription */ struct user_namespace *user_ns; /* user_ns the caps and keyrings are relative to. */ struct group_info *group_info; /* supplementary groups for euid/fsgid */ - struct rcu_head rcu; /* RCU deletion hook */ + /* RCU deletion */ + union { + int non_rcu; /* Can we skip RCU deletion? 
*/ + struct rcu_head rcu; /* RCU deletion hook */ + }; } __randomize_layout; extern void __put_cred(struct cred *); @@ -248,6 +252,7 @@ static inline const struct cred *get_cred(const struct cred *cred) { struct cred *nonconst_cred = (struct cred *) cred; validate_creds(cred); + nonconst_cred->non_rcu = 0; return get_new_cred(nonconst_cred); } diff --git a/include/linux/device.h b/include/linux/device.h index 3f1066a9e1c3a6f87e9f19ba3f3949d8b81e0c20..19dd8852602c4b1a8453a8f5673c60a5b17dc7bd 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -1332,6 +1332,7 @@ extern int (*platform_notify_remove)(struct device *dev); */ extern struct device *get_device(struct device *dev); extern void put_device(struct device *dev); +extern bool kill_device(struct device *dev); #ifdef CONFIG_DEVTMPFS extern int devtmpfs_create_node(struct device *dev); diff --git a/include/linux/fs.h b/include/linux/fs.h index 72749feed0e3364f9909893e48cb3eefbda46828..9b2b707e911266a3e798dd8f76e7d3bc5fb05000 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -2651,6 +2651,8 @@ extern int filemap_flush(struct address_space *); extern int filemap_fdatawait_keep_errors(struct address_space *mapping); extern int filemap_fdatawait_range(struct address_space *, loff_t lstart, loff_t lend); +extern int filemap_fdatawait_range_keep_errors(struct address_space *mapping, + loff_t start_byte, loff_t end_byte); static inline int filemap_fdatawait(struct address_space *mapping) { diff --git a/include/linux/host1x.h b/include/linux/host1x.h index 89110d896d72d7573df5140b7613f48d9b9ea5e4..aef6e2f738027a03d81bb08769f9f5951aeba7de 100644 --- a/include/linux/host1x.h +++ b/include/linux/host1x.h @@ -310,6 +310,8 @@ struct host1x_device { struct list_head clients; bool registered; + + struct device_dma_parameters dma_parms; }; static inline struct host1x_device *to_host1x_device(struct device *dev) diff --git a/include/linux/if_pppox.h b/include/linux/if_pppox.h index ba7a9b0c7c57e64ef794a55f62330705adbe7267..24e9b360da659f66746eca60091078ace1e894e5 100644 --- a/include/linux/if_pppox.h +++ b/include/linux/if_pppox.h @@ -84,6 +84,9 @@ extern int register_pppox_proto(int proto_num, const struct pppox_proto *pp); extern void unregister_pppox_proto(int proto_num); extern void pppox_unbind_sock(struct sock *sk);/* delete ppp-channel binding */ extern int pppox_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); +extern int pppox_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); + +#define PPPOEIOCSFWD32 _IOW(0xB1 ,0, compat_size_t) /* PPPoX socket states */ enum { diff --git a/include/linux/iova.h b/include/linux/iova.h index 928442dda565f147b501dca93601731a92581b2d..84fbe73d2ec08adabd746c2918733ecb7c1ccba3 100644 --- a/include/linux/iova.h +++ b/include/linux/iova.h @@ -156,6 +156,7 @@ struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo, void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to); void init_iova_domain(struct iova_domain *iovad, unsigned long granule, unsigned long start_pfn); +bool has_iova_flush_queue(struct iova_domain *iovad); int init_iova_flush_queue(struct iova_domain *iovad, iova_flush_cb flush_cb, iova_entry_dtor entry_dtor); struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn); @@ -236,6 +237,11 @@ static inline void init_iova_domain(struct iova_domain *iovad, { } +static inline bool has_iova_flush_queue(struct iova_domain *iovad) +{ + return false; +} + static inline int init_iova_flush_queue(struct 
iova_domain *iovad, iova_flush_cb flush_cb, iova_entry_dtor entry_dtor) diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h index 57f4ad8d45a5a64c897bed0200ab8e03d23c9189..2e32667360947b94ee22aeb57e158ebc063c14f9 100644 --- a/include/linux/jbd2.h +++ b/include/linux/jbd2.h @@ -478,6 +478,22 @@ struct jbd2_inode { * @i_flags: Flags of inode [j_list_lock] */ unsigned long i_flags; + + /** + * @i_dirty_start: + * + * Offset in bytes where the dirty range for this inode starts. + * [j_list_lock] + */ + loff_t i_dirty_start; + + /** + * @i_dirty_end: + * + * Inclusive offset in bytes where the dirty range for this inode + * ends. [j_list_lock] + */ + loff_t i_dirty_end; }; struct jbd2_revoke_table_s; @@ -1423,6 +1439,12 @@ extern int jbd2_journal_force_commit(journal_t *); extern int jbd2_journal_force_commit_nested(journal_t *); extern int jbd2_journal_inode_add_write(handle_t *handle, struct jbd2_inode *inode); extern int jbd2_journal_inode_add_wait(handle_t *handle, struct jbd2_inode *inode); +extern int jbd2_journal_inode_ranged_write(handle_t *handle, + struct jbd2_inode *inode, loff_t start_byte, + loff_t length); +extern int jbd2_journal_inode_ranged_wait(handle_t *handle, + struct jbd2_inode *inode, loff_t start_byte, + loff_t length); extern int jbd2_journal_begin_ordered_truncate(journal_t *journal, struct jbd2_inode *inode, loff_t new_size); extern void jbd2_journal_init_jbd_inode(struct jbd2_inode *jinode, struct inode *inode); diff --git a/include/linux/kernel.h b/include/linux/kernel.h index d81a153df451cdcb4aed0a7ade903a96137c7a8f..78f30d5530374038964ecb701d0c114d023da91a 100644 --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@ -118,7 +118,8 @@ #define DIV_ROUND_DOWN_ULL(ll, d) \ ({ unsigned long long _tmp = (ll); do_div(_tmp, d); _tmp; }) -#define DIV_ROUND_UP_ULL(ll, d) DIV_ROUND_DOWN_ULL((ll) + (d) - 1, (d)) +#define DIV_ROUND_UP_ULL(ll, d) \ + DIV_ROUND_DOWN_ULL((unsigned long long)(ll) + (d) - 1, (d)) #if BITS_PER_LONG == 32 # define DIV_ROUND_UP_SECTOR_T(ll,d) DIV_ROUND_UP_ULL(ll, d) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 30efb3663892346d3d323d74c6443f7ce6f09fd8..d42a36e4e6c24ed97f7ceebbc844b382128b1276 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -818,6 +818,7 @@ void kvm_arch_check_processor_compat(void *rtn); int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu); bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu); int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu); +bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu); #ifndef __KVM_HAVE_ARCH_VM_ALLOC /* diff --git a/include/linux/logic_pio.h b/include/linux/logic_pio.h index cbd9d8495690212fc689d3ce3a7caf0812d45523..88e1e6304a7193bca45a14500f7ade4632dbd335 100644 --- a/include/linux/logic_pio.h +++ b/include/linux/logic_pio.h @@ -117,6 +117,7 @@ struct logic_pio_hwaddr *find_io_range_by_fwnode(struct fwnode_handle *fwnode); unsigned long logic_pio_trans_hwaddr(struct fwnode_handle *fwnode, resource_size_t hw_addr, resource_size_t size); int logic_pio_register_range(struct logic_pio_hwaddr *newrange); +void logic_pio_unregister_range(struct logic_pio_hwaddr *range); resource_size_t logic_pio_to_hwaddr(unsigned long pio); unsigned long logic_pio_trans_cpuaddr(resource_size_t hw_addr); diff --git a/include/linux/mfd/bcm2835-pm.h b/include/linux/mfd/bcm2835-pm.h index b2d157091e12bdb4cb30f751867946e63d99b632..f70a810c55f7d5e886df22cd027c48c50fdf7a16 100644 --- a/include/linux/mfd/bcm2835-pm.h +++ b/include/linux/mfd/bcm2835-pm.h @@ -9,7 +9,7 @@ struct 
bcm2835_pm { struct device *dev; void __iomem *base; void __iomem *asb; - void __iomem *arg_asb; + void __iomem *rpivid_asb; }; #endif /* BCM2835_MFD_PM_H */ diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h index 804516e4f483286e973ec2127f6980a0b98533de..3386399feadc82483bd44bab1b09285e05bb8c07 100644 --- a/include/linux/mlx5/fs.h +++ b/include/linux/mlx5/fs.h @@ -188,6 +188,7 @@ int mlx5_modify_rule_destination(struct mlx5_flow_handle *handler, struct mlx5_fc *mlx5_flow_rule_counter(struct mlx5_flow_handle *handler); struct mlx5_fc *mlx5_fc_create(struct mlx5_core_dev *dev, bool aging); void mlx5_fc_destroy(struct mlx5_core_dev *dev, struct mlx5_fc *counter); +u64 mlx5_fc_query_lastuse(struct mlx5_fc *counter); void mlx5_fc_query_cached(struct mlx5_fc *counter, u64 *bytes, u64 *packets, u64 *lastuse); int mlx5_fc_query(struct mlx5_core_dev *dev, struct mlx5_fc *counter, diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index f043d65b9bac2d65b0b8c8053e22a9051d03769f..177f11c96187b7ff8f06af56589bfe59060e0eae 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -5623,7 +5623,12 @@ struct mlx5_ifc_modify_cq_in_bits { struct mlx5_ifc_cqc_bits cq_context; - u8 reserved_at_280[0x600]; + u8 reserved_at_280[0x60]; + + u8 cq_umem_valid[0x1]; + u8 reserved_at_2e1[0x1f]; + + u8 reserved_at_300[0x580]; u8 pas[0][0x40]; }; diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h index e27572d30d97751ba70a9c5d753997faaac49d51..ad69430fd0eb5a9123727e2054682971de3feca3 100644 --- a/include/linux/nfs_page.h +++ b/include/linux/nfs_page.h @@ -164,6 +164,16 @@ nfs_list_add_request(struct nfs_page *req, struct list_head *head) list_add_tail(&req->wb_list, head); } +/** + * nfs_list_move_request - Move a request to a new list + * @req: request + * @head: head of list into which to insert the request. 
+ */ +static inline void +nfs_list_move_request(struct nfs_page *req, struct list_head *head) +{ + list_move_tail(&req->wb_list, head); +} /** * nfs_list_remove_request - Remove a request from its wb_list diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h index 1fc27eb1f0213fa420a8367e3d80a000c6f6d996..73b0d19ef0d9435ceb17162a7f9da693a79a0b46 100644 --- a/include/linux/nfs_xdr.h +++ b/include/linux/nfs_xdr.h @@ -1539,7 +1539,7 @@ struct nfs_commit_data { }; struct nfs_pgio_completion_ops { - void (*error_cleanup)(struct list_head *head); + void (*error_cleanup)(struct list_head *head, int); void (*init_hdr)(struct nfs_pgio_header *hdr); void (*completion)(struct nfs_pgio_header *hdr); void (*reschedule_io)(struct nfs_pgio_header *hdr); diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index 42fc852bf51237f07a909d61dec66f792060fd14..b22bc81f3669ad0d56f21ff8280d4c143fc09766 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -1030,6 +1030,11 @@ static inline int in_software_context(struct perf_event *event) return event->ctx->pmu->task_ctx_nr == perf_sw_context; } +static inline int is_exclusive_pmu(struct pmu *pmu) +{ + return pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE; +} + extern struct static_key perf_swevent_enabled[PERF_COUNT_SW_MAX]; extern void ___perf_sw_event(u32, u64, struct pt_regs *, u64); diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 241a4a9577a0f28915e054645aad067842fa6246..08d64e5713fc997540d8a7ae8a7d97536247464d 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -591,7 +591,7 @@ static inline void rcu_preempt_sleep_check(void) { } * read-side critical sections may be preempted and they may also block, but * only when acquiring spinlocks that are subject to priority inheritance. 
*/ -static inline void rcu_read_lock(void) +static __always_inline void rcu_read_lock(void) { __rcu_read_lock(); __acquire(RCU); diff --git a/include/linux/sched.h b/include/linux/sched.h index c342fa06ab9921ba02680184696ae6514d7b8983..3c213ec3d3b51e74c51cdeee50bac2be0f4c1f4d 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1057,7 +1057,15 @@ struct task_struct { u64 last_sum_exec_runtime; struct callback_head numa_work; - struct numa_group *numa_group; + /* + * This pointer is only modified for current in syscall and + * pagefault context (and for tasks being destroyed), so it can be read + * from any of the following contexts: + * - RCU read-side critical section + * - current->numa_group from everywhere + * - task's runqueue locked, task not running + */ + struct numa_group __rcu *numa_group; /* * numa_faults is an array split into four regions: diff --git a/include/linux/sched/numa_balancing.h b/include/linux/sched/numa_balancing.h index e7dd04a84ba89d79507d461cee5b839269ff0b8a..3988762efe15c0e5a80602e2c9acb6a5820a740e 100644 --- a/include/linux/sched/numa_balancing.h +++ b/include/linux/sched/numa_balancing.h @@ -19,7 +19,7 @@ extern void task_numa_fault(int last_node, int node, int pages, int flags); extern pid_t task_numa_group_id(struct task_struct *p); extern void set_numabalancing_state(bool enabled); -extern void task_numa_free(struct task_struct *p); +extern void task_numa_free(struct task_struct *p, bool final); extern bool should_numa_migrate_memory(struct task_struct *p, struct page *page, int src_nid, int dst_cpu); #else @@ -34,7 +34,7 @@ static inline pid_t task_numa_group_id(struct task_struct *p) static inline void set_numabalancing_state(bool enabled) { } -static inline void task_numa_free(struct task_struct *p) +static inline void task_numa_free(struct task_struct *p, bool final) { } static inline bool should_numa_migrate_memory(struct task_struct *p, diff --git a/include/net/dst.h b/include/net/dst.h index 6cf0870414c783720026a151ba94307a2bb4ad30..ffc8ee0ea5e5b19afdb507681c322056b176c563 100644 --- a/include/net/dst.h +++ b/include/net/dst.h @@ -313,8 +313,9 @@ static inline bool dst_hold_safe(struct dst_entry *dst) * @skb: buffer * * If dst is not yet refcounted and not destroyed, grab a ref on it. + * Returns true if dst is refcounted. 
*/ -static inline void skb_dst_force(struct sk_buff *skb) +static inline bool skb_dst_force(struct sk_buff *skb) { if (skb_dst_is_noref(skb)) { struct dst_entry *dst = skb_dst(skb); @@ -325,6 +326,8 @@ static inline void skb_dst_force(struct sk_buff *skb) skb->_skb_refdst = (unsigned long)dst; } + + return skb->_skb_refdst != 0UL; } diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h index a0d2e0bb9a94b90da24b22a327253dc6e9c0e8ff..0e3c0d83bd9913a22ac0b0ea201d14c6af63ec02 100644 --- a/include/net/ip_vs.h +++ b/include/net/ip_vs.h @@ -806,11 +806,12 @@ struct ipvs_master_sync_state { struct ip_vs_sync_buff *sync_buff; unsigned long sync_queue_len; unsigned int sync_queue_delay; - struct task_struct *master_thread; struct delayed_work master_wakeup_work; struct netns_ipvs *ipvs; }; +struct ip_vs_sync_thread_data; + /* How much time to keep dests in trash */ #define IP_VS_DEST_TRASH_PERIOD (120 * HZ) @@ -941,7 +942,8 @@ struct netns_ipvs { spinlock_t sync_lock; struct ipvs_master_sync_state *ms; spinlock_t sync_buff_lock; - struct task_struct **backup_threads; + struct ip_vs_sync_thread_data *master_tinfo; + struct ip_vs_sync_thread_data *backup_tinfo; int threads_mask; volatile int sync_state; struct mutex sync_mutex; diff --git a/include/net/tcp.h b/include/net/tcp.h index e75661f92daaaa861a5c416d73674e38dccd1f9e..abcf53a6db045974a7a130b4e4e8035854709b2e 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1054,7 +1054,8 @@ void tcp_get_default_congestion_control(struct net *net, char *name); void tcp_get_available_congestion_control(char *buf, size_t len); void tcp_get_allowed_congestion_control(char *buf, size_t len); int tcp_set_allowed_congestion_control(char *allowed); -int tcp_set_congestion_control(struct sock *sk, const char *name, bool load, bool reinit); +int tcp_set_congestion_control(struct sock *sk, const char *name, bool load, + bool reinit, bool cap_net_admin); u32 tcp_slow_start(struct tcp_sock *tp, u32 acked); void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked); @@ -1646,6 +1647,11 @@ static inline struct sk_buff *tcp_rtx_queue_head(const struct sock *sk) return skb_rb_first(&sk->tcp_rtx_queue); } +static inline struct sk_buff *tcp_rtx_queue_tail(const struct sock *sk) +{ + return skb_rb_last(&sk->tcp_rtx_queue); +} + static inline struct sk_buff *tcp_write_queue_head(const struct sock *sk) { return skb_peek(&sk->sk_write_queue); diff --git a/include/net/tls.h b/include/net/tls.h index 95411057589146c810f56b65c3c2d148c672055c..98f5ad0319a2b538d0a76cc90f9480e4ef2f700d 100644 --- a/include/net/tls.h +++ b/include/net/tls.h @@ -234,6 +234,7 @@ struct tls_offload_context_rx { (ALIGN(sizeof(struct tls_offload_context_rx), sizeof(void *)) + \ TLS_DRIVER_STATE_SIZE) +void tls_ctx_free(struct tls_context *ctx); int wait_on_pending_writer(struct sock *sk, long *timeo); int tls_sk_query(struct sock *sk, int optname, char __user *optval, int __user *optlen); diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index ec299fcf55f7aff42e5e0fcc330865a664ceb28e..412c2820626dae8157e25bb2cc546eee2aef9c34 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -290,8 +290,8 @@ struct ib_rss_caps { }; enum ib_tm_cap_flags { - /* Support tag matching on RC transport */ - IB_TM_CAP_RC = 1 << 0, + /* Support tag matching with rendezvous offload for RC transport */ + IB_TM_CAP_RNDV_RC = 1 << 0, }; struct ib_tm_caps { diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h index 
bb8092fa1e364114fc60458e86c7bc73f3beeb93..58507c7783cf01cf64ea2987af9472fd75e44428 100644 --- a/include/scsi/libfcoe.h +++ b/include/scsi/libfcoe.h @@ -241,6 +241,7 @@ struct fcoe_fcf { * @vn_mac: VN_Node assigned MAC address for data */ struct fcoe_rport { + struct fc_rport_priv rdata; unsigned long time; u16 fcoe_len; u16 flags; diff --git a/include/sound/compress_driver.h b/include/sound/compress_driver.h index e87f2d5b3cc656c852d0892ff37f3d723dd3b46d..127c2713b543a9652b25201e30426b4058ae25b4 100644 --- a/include/sound/compress_driver.h +++ b/include/sound/compress_driver.h @@ -171,10 +171,7 @@ static inline void snd_compr_drain_notify(struct snd_compr_stream *stream) if (snd_BUG_ON(!stream)) return; - if (stream->direction == SND_COMPRESS_PLAYBACK) - stream->runtime->state = SNDRV_PCM_STATE_SETUP; - else - stream->runtime->state = SNDRV_PCM_STATE_PREPARED; + stream->runtime->state = SNDRV_PCM_STATE_SETUP; wake_up(&stream->runtime->sleep); } diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 6d182746afab36a486183b9a378f51f114f344fc..815dcfa64743000ce5fcbac4887df6de88b3fe57 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -500,10 +500,10 @@ rxrpc_tx_points; #define E_(a, b) { a, b } TRACE_EVENT(rxrpc_local, - TP_PROTO(struct rxrpc_local *local, enum rxrpc_local_trace op, + TP_PROTO(unsigned int local_debug_id, enum rxrpc_local_trace op, int usage, const void *where), - TP_ARGS(local, op, usage, where), + TP_ARGS(local_debug_id, op, usage, where), TP_STRUCT__entry( __field(unsigned int, local ) @@ -513,7 +513,7 @@ TRACE_EVENT(rxrpc_local, ), TP_fast_assign( - __entry->local = local->debug_id; + __entry->local = local_debug_id; __entry->op = op; __entry->usage = usage; __entry->where = where; @@ -1381,7 +1381,7 @@ TRACE_EVENT(rxrpc_rx_eproto, ), TP_fast_assign( - __entry->call = call->debug_id; + __entry->call = call ? 
call->debug_id : 0; __entry->serial = serial; __entry->why = why; ), diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 2932600ce271cfd1908ac123b435e419a3e04e4e..d143e277cdaf25739e69ef718b123f2a8bdd653a 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -2486,6 +2486,7 @@ struct bpf_prog_info { char name[BPF_OBJ_NAME_LEN]; __u32 ifindex; __u32 gpl_compatible:1; + __u32 :31; /* alignment pad */ __u64 netns_dev; __u64 netns_ino; __u32 nr_jited_ksyms; diff --git a/include/uapi/linux/coda_psdev.h b/include/uapi/linux/coda_psdev.h index aa6623efd2dd085e50b899285e1bc2c9a732a03e..d50d51a57fe4ec27ae11846548c7dd739014fa70 100644 --- a/include/uapi/linux/coda_psdev.h +++ b/include/uapi/linux/coda_psdev.h @@ -7,19 +7,6 @@ #define CODA_PSDEV_MAJOR 67 #define MAX_CODADEVS 5 /* how many do we allow */ - -/* messages between coda filesystem in kernel and Venus */ -struct upc_req { - struct list_head uc_chain; - caddr_t uc_data; - u_short uc_flags; - u_short uc_inSize; /* Size is at most 5000 bytes */ - u_short uc_outSize; - u_short uc_opcode; /* copied from data to save lookup */ - int uc_unique; - wait_queue_head_t uc_sleep; /* process' wait queue */ -}; - #define CODA_REQ_ASYNC 0x1 #define CODA_REQ_READ 0x2 #define CODA_REQ_WRITE 0x4 diff --git a/include/uapi/linux/nilfs2_ondisk.h b/include/uapi/linux/nilfs2_ondisk.h index a7e66ab11d1d656d8fb9f861f4b46949a7ac124b..c23f91ae5fe8b17c6176c91027b63840dd1c2a0b 100644 --- a/include/uapi/linux/nilfs2_ondisk.h +++ b/include/uapi/linux/nilfs2_ondisk.h @@ -29,7 +29,7 @@ #include #include - +#include #define NILFS_INODE_BMAP_SIZE 7 @@ -533,19 +533,19 @@ enum { static inline void \ nilfs_checkpoint_set_##name(struct nilfs_checkpoint *cp) \ { \ - cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) | \ - (1UL << NILFS_CHECKPOINT_##flag)); \ + cp->cp_flags = __cpu_to_le32(__le32_to_cpu(cp->cp_flags) | \ + (1UL << NILFS_CHECKPOINT_##flag)); \ } \ static inline void \ nilfs_checkpoint_clear_##name(struct nilfs_checkpoint *cp) \ { \ - cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) & \ + cp->cp_flags = __cpu_to_le32(__le32_to_cpu(cp->cp_flags) & \ ~(1UL << NILFS_CHECKPOINT_##flag)); \ } \ static inline int \ nilfs_checkpoint_##name(const struct nilfs_checkpoint *cp) \ { \ - return !!(le32_to_cpu(cp->cp_flags) & \ + return !!(__le32_to_cpu(cp->cp_flags) & \ (1UL << NILFS_CHECKPOINT_##flag)); \ } @@ -595,20 +595,20 @@ enum { static inline void \ nilfs_segment_usage_set_##name(struct nilfs_segment_usage *su) \ { \ - su->su_flags = cpu_to_le32(le32_to_cpu(su->su_flags) | \ + su->su_flags = __cpu_to_le32(__le32_to_cpu(su->su_flags) | \ (1UL << NILFS_SEGMENT_USAGE_##flag));\ } \ static inline void \ nilfs_segment_usage_clear_##name(struct nilfs_segment_usage *su) \ { \ su->su_flags = \ - cpu_to_le32(le32_to_cpu(su->su_flags) & \ + __cpu_to_le32(__le32_to_cpu(su->su_flags) & \ ~(1UL << NILFS_SEGMENT_USAGE_##flag)); \ } \ static inline int \ nilfs_segment_usage_##name(const struct nilfs_segment_usage *su) \ { \ - return !!(le32_to_cpu(su->su_flags) & \ + return !!(__le32_to_cpu(su->su_flags) & \ (1UL << NILFS_SEGMENT_USAGE_##flag)); \ } @@ -619,15 +619,15 @@ NILFS_SEGMENT_USAGE_FNS(ERROR, error) static inline void nilfs_segment_usage_set_clean(struct nilfs_segment_usage *su) { - su->su_lastmod = cpu_to_le64(0); - su->su_nblocks = cpu_to_le32(0); - su->su_flags = cpu_to_le32(0); + su->su_lastmod = __cpu_to_le64(0); + su->su_nblocks = __cpu_to_le32(0); + su->su_flags = __cpu_to_le32(0); } static inline int 
nilfs_segment_usage_clean(const struct nilfs_segment_usage *su) { - return !le32_to_cpu(su->su_flags); + return !__le32_to_cpu(su->su_flags); } /** diff --git a/include/uapi/linux/nl80211.h b/include/uapi/linux/nl80211.h index 7acc16f349427a772f507e6a25b66b5b95b210d7..fa43dd5a7b3dcc84d29ecf441e28986e2a6bcec4 100644 --- a/include/uapi/linux/nl80211.h +++ b/include/uapi/linux/nl80211.h @@ -2732,7 +2732,7 @@ enum nl80211_attrs { #define NL80211_HT_CAPABILITY_LEN 26 #define NL80211_VHT_CAPABILITY_LEN 12 #define NL80211_HE_MIN_CAPABILITY_LEN 16 -#define NL80211_HE_MAX_CAPABILITY_LEN 51 +#define NL80211_HE_MAX_CAPABILITY_LEN 54 #define NL80211_MAX_NR_CIPHER_SUITES 5 #define NL80211_MAX_NR_AKM_SUITES 2 diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h index e4ee10ee917de1e12dbc3d732120711a394f4dae..baa768253fc30a80859c4c84da4e2fad9260d3c0 100644 --- a/include/uapi/linux/v4l2-controls.h +++ b/include/uapi/linux/v4l2-controls.h @@ -815,6 +815,7 @@ enum v4l2_auto_n_preset_white_balance { V4L2_WHITE_BALANCE_FLASH = 7, V4L2_WHITE_BALANCE_CLOUDY = 8, V4L2_WHITE_BALANCE_SHADE = 9, + V4L2_WHITE_BALANCE_GREYWORLD = 10, }; #define V4L2_CID_WIDE_DYNAMIC_RANGE (V4L2_CID_CAMERA_CLASS_BASE+21) diff --git a/include/xen/events.h b/include/xen/events.h index c3e6bc643a7bde4bf770f1c191a472d84c4af945..1650d39decaec96869b194bfdf4a0a910316d6f4 100644 --- a/include/xen/events.h +++ b/include/xen/events.h @@ -3,6 +3,7 @@ #define _XEN_EVENTS_H #include +#include #ifdef CONFIG_PCI_MSI #include #endif @@ -59,7 +60,7 @@ void evtchn_put(unsigned int evtchn); void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector); void rebind_evtchn_irq(int evtchn, int irq); -int xen_rebind_evtchn_to_cpu(int evtchn, unsigned tcpu); +int xen_set_affinity_evtchn(struct irq_desc *desc, unsigned int tcpu); static inline void notify_remote_via_evtchn(int port) { diff --git a/ipc/mqueue.c b/ipc/mqueue.c index bce7af1546d9c5b07532388b72ab8e4e599f523f..de4070d5472f2d6ce2dd52b0f1a08fda5b89a255 100644 --- a/ipc/mqueue.c +++ b/ipc/mqueue.c @@ -389,7 +389,6 @@ static void mqueue_evict_inode(struct inode *inode) { struct mqueue_inode_info *info; struct user_struct *user; - unsigned long mq_bytes, mq_treesize; struct ipc_namespace *ipc_ns; struct msg_msg *msg, *nmsg; LIST_HEAD(tmp_msg); @@ -412,16 +411,18 @@ static void mqueue_evict_inode(struct inode *inode) free_msg(msg); } - /* Total amount of bytes accounted for the mqueue */ - mq_treesize = info->attr.mq_maxmsg * sizeof(struct msg_msg) + - min_t(unsigned int, info->attr.mq_maxmsg, MQ_PRIO_MAX) * - sizeof(struct posix_msg_tree_node); - - mq_bytes = mq_treesize + (info->attr.mq_maxmsg * - info->attr.mq_msgsize); - user = info->user; if (user) { + unsigned long mq_bytes, mq_treesize; + + /* Total amount of bytes accounted for the mqueue */ + mq_treesize = info->attr.mq_maxmsg * sizeof(struct msg_msg) + + min_t(unsigned int, info->attr.mq_maxmsg, MQ_PRIO_MAX) * + sizeof(struct posix_msg_tree_node); + + mq_bytes = mq_treesize + (info->attr.mq_maxmsg * + info->attr.mq_msgsize); + spin_lock(&mq_lock); user->mq_bytes -= mq_bytes; /* diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile index 0488b8258321257aac395f9ece7bef5e04004918..ffc39a7e028d7e62705e5beb8e966201bc869014 100644 --- a/kernel/bpf/Makefile +++ b/kernel/bpf/Makefile @@ -1,5 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 obj-y := core.o +CFLAGS_core.o += $(call cc-disable-warning, override-init) obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o obj-$(CONFIG_BPF_SYSCALL) += 
hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index 7b536796daf8768e68979fb993bf4c3c80c1f610..30dd5754e62e310637216b67a9f745e5bf0c9cef 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -212,7 +212,8 @@ static struct cftype cgroup_base_files[]; static int cgroup_apply_control(struct cgroup *cgrp); static void cgroup_finalize_control(struct cgroup *cgrp, int ret); -static void css_task_iter_advance(struct css_task_iter *it); +static void css_task_iter_skip(struct css_task_iter *it, + struct task_struct *task); static int cgroup_destroy_locked(struct cgroup *cgrp); static struct cgroup_subsys_state *css_create(struct cgroup *cgrp, struct cgroup_subsys *ss); @@ -672,6 +673,7 @@ struct css_set init_css_set = { .dom_cset = &init_css_set, .tasks = LIST_HEAD_INIT(init_css_set.tasks), .mg_tasks = LIST_HEAD_INIT(init_css_set.mg_tasks), + .dying_tasks = LIST_HEAD_INIT(init_css_set.dying_tasks), .task_iters = LIST_HEAD_INIT(init_css_set.task_iters), .threaded_csets = LIST_HEAD_INIT(init_css_set.threaded_csets), .cgrp_links = LIST_HEAD_INIT(init_css_set.cgrp_links), @@ -775,6 +777,21 @@ static void css_set_update_populated(struct css_set *cset, bool populated) cgroup_update_populated(link->cgrp, populated); } +/* + * @task is leaving, advance task iterators which are pointing to it so + * that they can resume at the next position. Advancing an iterator might + * remove it from the list, use safe walk. See css_task_iter_skip() for + * details. + */ +static void css_set_skip_task_iters(struct css_set *cset, + struct task_struct *task) +{ + struct css_task_iter *it, *pos; + + list_for_each_entry_safe(it, pos, &cset->task_iters, iters_node) + css_task_iter_skip(it, task); +} + /** * css_set_move_task - move a task from one css_set to another * @task: task being moved @@ -800,22 +817,9 @@ static void css_set_move_task(struct task_struct *task, css_set_update_populated(to_cset, true); if (from_cset) { - struct css_task_iter *it, *pos; - WARN_ON_ONCE(list_empty(&task->cg_list)); - /* - * @task is leaving, advance task iterators which are - * pointing to it so that they can resume at the next - * position. Advancing an iterator might remove it from - * the list, use safe walk. See css_task_iter_advance*() - * for details. 
- */ - list_for_each_entry_safe(it, pos, &from_cset->task_iters, - iters_node) - if (it->task_pos == &task->cg_list) - css_task_iter_advance(it); - + css_set_skip_task_iters(from_cset, task); list_del_init(&task->cg_list); if (!css_set_populated(from_cset)) css_set_update_populated(from_cset, false); @@ -1142,6 +1146,7 @@ static struct css_set *find_css_set(struct css_set *old_cset, cset->dom_cset = cset; INIT_LIST_HEAD(&cset->tasks); INIT_LIST_HEAD(&cset->mg_tasks); + INIT_LIST_HEAD(&cset->dying_tasks); INIT_LIST_HEAD(&cset->task_iters); INIT_LIST_HEAD(&cset->threaded_csets); INIT_HLIST_NODE(&cset->hlist); @@ -4149,15 +4154,18 @@ static void css_task_iter_advance_css_set(struct css_task_iter *it) it->task_pos = NULL; return; } - } while (!css_set_populated(cset)); + } while (!css_set_populated(cset) && list_empty(&cset->dying_tasks)); if (!list_empty(&cset->tasks)) it->task_pos = cset->tasks.next; - else + else if (!list_empty(&cset->mg_tasks)) it->task_pos = cset->mg_tasks.next; + else + it->task_pos = cset->dying_tasks.next; it->tasks_head = &cset->tasks; it->mg_tasks_head = &cset->mg_tasks; + it->dying_tasks_head = &cset->dying_tasks; /* * We don't keep css_sets locked across iteration steps and thus @@ -4183,9 +4191,20 @@ static void css_task_iter_advance_css_set(struct css_task_iter *it) list_add(&it->iters_node, &cset->task_iters); } +static void css_task_iter_skip(struct css_task_iter *it, + struct task_struct *task) +{ + lockdep_assert_held(&css_set_lock); + + if (it->task_pos == &task->cg_list) { + it->task_pos = it->task_pos->next; + it->flags |= CSS_TASK_ITER_SKIPPED; + } +} + static void css_task_iter_advance(struct css_task_iter *it) { - struct list_head *next; + struct task_struct *task; lockdep_assert_held(&css_set_lock); repeat: @@ -4195,25 +4214,40 @@ repeat: * consumed first and then ->mg_tasks. After ->mg_tasks, * we move onto the next cset. 
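/*
 * Illustrative sketch, not part of the cgroup patch above: why the
 * css_set_skip_task_iters() helper introduced above walks the iterator list
 * with list_for_each_entry_safe().  The _safe variant caches the next node
 * first, so the loop body may unlink the entry it is currently visiting.
 * 'struct item', item_list and reap_dead_items() are hypothetical names,
 * not kernel APIs.
 */
#include <linux/list.h>
#include <linux/slab.h>

struct item {
        struct list_head node;
        bool dead;
};

static LIST_HEAD(item_list);

static void reap_dead_items(void)
{
        struct item *it, *tmp;

        list_for_each_entry_safe(it, tmp, &item_list, node) {
                if (it->dead) {
                        /* Safe to unlink: 'tmp' already points past 'it'. */
                        list_del(&it->node);
                        kfree(it);
                }
        }
}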
*/ - next = it->task_pos->next; - - if (next == it->tasks_head) - next = it->mg_tasks_head->next; + if (it->flags & CSS_TASK_ITER_SKIPPED) + it->flags &= ~CSS_TASK_ITER_SKIPPED; + else + it->task_pos = it->task_pos->next; - if (next == it->mg_tasks_head) + if (it->task_pos == it->tasks_head) + it->task_pos = it->mg_tasks_head->next; + if (it->task_pos == it->mg_tasks_head) + it->task_pos = it->dying_tasks_head->next; + if (it->task_pos == it->dying_tasks_head) css_task_iter_advance_css_set(it); - else - it->task_pos = next; } else { /* called from start, proceed to the first cset */ css_task_iter_advance_css_set(it); } - /* if PROCS, skip over tasks which aren't group leaders */ - if ((it->flags & CSS_TASK_ITER_PROCS) && it->task_pos && - !thread_group_leader(list_entry(it->task_pos, struct task_struct, - cg_list))) - goto repeat; + if (!it->task_pos) + return; + + task = list_entry(it->task_pos, struct task_struct, cg_list); + + if (it->flags & CSS_TASK_ITER_PROCS) { + /* if PROCS, skip over tasks which aren't group leaders */ + if (!thread_group_leader(task)) + goto repeat; + + /* and dying leaders w/o live member threads */ + if (!atomic_read(&task->signal->live)) + goto repeat; + } else { + /* skip all dying ones */ + if (task->flags & PF_EXITING) + goto repeat; + } } /** @@ -4269,6 +4303,10 @@ struct task_struct *css_task_iter_next(struct css_task_iter *it) spin_lock_irq(&css_set_lock); + /* @it may be half-advanced by skips, finish advancing */ + if (it->flags & CSS_TASK_ITER_SKIPPED) + css_task_iter_advance(it); + if (it->task_pos) { it->cur_task = list_entry(it->task_pos, struct task_struct, cg_list); @@ -5671,6 +5709,7 @@ void cgroup_exit(struct task_struct *tsk) if (!list_empty(&tsk->cg_list)) { spin_lock_irq(&css_set_lock); css_set_move_task(tsk, cset, NULL, false); + list_add_tail(&tsk->cg_list, &cset->dying_tasks); cset->nr_tasks--; spin_unlock_irq(&css_set_lock); } else { @@ -5691,6 +5730,13 @@ void cgroup_release(struct task_struct *task) do_each_subsys_mask(ss, ssid, have_release_callback) { ss->release(task); } while_each_subsys_mask(); + + if (use_task_css_set_links) { + spin_lock_irq(&css_set_lock); + css_set_skip_task_iters(task_css_set(task), task); + list_del_init(&task->cg_list); + spin_unlock_irq(&css_set_lock); + } } void cgroup_free(struct task_struct *task) diff --git a/kernel/cpu.c b/kernel/cpu.c index 3857a0afdfbfdce34ced815aa724453469588d4b..efa687c2c17021b5b5fd2a06bda559be0adfee6a 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -1992,6 +1992,9 @@ static ssize_t write_cpuhp_fail(struct device *dev, if (ret) return ret; + if (fail < CPUHP_OFFLINE || fail > CPUHP_ONLINE) + return -EINVAL; + /* * Cannot fail STARTING/DYING callbacks. */ diff --git a/kernel/cred.c b/kernel/cred.c index efd04b2ec84c2c4675940a688cf7c084541efa9f..5ab1f7ec946e13a5848ced77d0ecec559ea4d53e 100644 --- a/kernel/cred.c +++ b/kernel/cred.c @@ -147,7 +147,10 @@ void __put_cred(struct cred *cred) BUG_ON(cred == current->cred); BUG_ON(cred == current->real_cred); - call_rcu(&cred->rcu, put_cred_rcu); + if (cred->non_rcu) + put_cred_rcu(&cred->rcu); + else + call_rcu(&cred->rcu, put_cred_rcu); } EXPORT_SYMBOL(__put_cred); @@ -258,6 +261,7 @@ struct cred *prepare_creds(void) old = task->cred; memcpy(new, old, sizeof(struct cred)); + new->non_rcu = 0; atomic_set(&new->usage, 1); set_cred_subscribers(new, 0); get_group_info(new->group_info); @@ -537,7 +541,19 @@ const struct cred *override_creds(const struct cred *new) validate_creds(old); validate_creds(new); - get_cred(new); + + /* + * NOTE! 
This uses 'get_new_cred()' rather than 'get_cred()'. + * + * That means that we do not clear the 'non_rcu' flag, since + * we are only installing the cred into the thread-synchronous + * '->cred' pointer, not the '->real_cred' pointer that is + * visible to other threads under RCU. + * + * Also note that we did validate_creds() manually, not depending + * on the validation in 'get_cred()'. + */ + get_new_cred((struct cred *)new); alter_cred_subscribers(new, 1); rcu_assign_pointer(current->cred, new); alter_cred_subscribers(old, -1); @@ -620,6 +636,7 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon) validate_creds(old); *new = *old; + new->non_rcu = 0; atomic_set(&new->usage, 1); set_cred_subscribers(new, 0); get_uid(new->user); diff --git a/kernel/events/core.c b/kernel/events/core.c index a7807c609c220413ff67cdca48b7b6c799c48eb4..d67e0d90b04e6e9755bd35eb85cddab1ce5e8b3b 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -2541,6 +2541,9 @@ unlock: return ret; } +static bool exclusive_event_installable(struct perf_event *event, + struct perf_event_context *ctx); + /* * Attach a performance event to a context. * @@ -2555,6 +2558,8 @@ perf_install_in_context(struct perf_event_context *ctx, lockdep_assert_held(&ctx->mutex); + WARN_ON_ONCE(!exclusive_event_installable(event, ctx)); + if (event->cpu != -1) event->cpu = cpu; @@ -4341,7 +4346,7 @@ static int exclusive_event_init(struct perf_event *event) { struct pmu *pmu = event->pmu; - if (!(pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE)) + if (!is_exclusive_pmu(pmu)) return 0; /* @@ -4372,7 +4377,7 @@ static void exclusive_event_destroy(struct perf_event *event) { struct pmu *pmu = event->pmu; - if (!(pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE)) + if (!is_exclusive_pmu(pmu)) return; /* see comment in exclusive_event_init() */ @@ -4392,14 +4397,15 @@ static bool exclusive_event_match(struct perf_event *e1, struct perf_event *e2) return false; } -/* Called under the same ctx::mutex as perf_install_in_context() */ static bool exclusive_event_installable(struct perf_event *event, struct perf_event_context *ctx) { struct perf_event *iter_event; struct pmu *pmu = event->pmu; - if (!(pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE)) + lockdep_assert_held(&ctx->mutex); + + if (!is_exclusive_pmu(pmu)) return true; list_for_each_entry(iter_event, &ctx->event_list, event_entry) { @@ -4446,12 +4452,20 @@ static void _free_event(struct perf_event *event) if (event->destroy) event->destroy(event); - if (event->ctx) - put_ctx(event->ctx); - + /* + * Must be after ->destroy(), due to uprobe_perf_close() using + * hw.target. + */ if (event->hw.target) put_task_struct(event->hw.target); + /* + * perf_event_free_task() relies on put_ctx() being 'last', in particular + * all task references must be cleaned up. + */ + if (event->ctx) + put_ctx(event->ctx); + exclusive_event_destroy(event); module_put(event->pmu->module); @@ -4631,8 +4645,17 @@ again: mutex_unlock(&event->child_mutex); list_for_each_entry_safe(child, tmp, &free_list, child_list) { + void *var = &child->ctx->refcount; + list_del(&child->child_list); free_event(child); + + /* + * Wake any perf_event_free_task() waiting for this event to be + * freed. 
+ */ + smp_mb(); /* pairs with wait_var_event() */ + wake_up_var(var); } no_ctx: @@ -5906,7 +5929,7 @@ static void perf_sample_regs_user(struct perf_regs *regs_user, if (user_mode(regs)) { regs_user->abi = perf_reg_abi(current); regs_user->regs = regs; - } else if (current->mm) { + } else if (!(current->flags & PF_KTHREAD)) { perf_get_regs_user(regs_user, regs, regs_user_copy); } else { regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE; @@ -10613,11 +10636,6 @@ SYSCALL_DEFINE5(perf_event_open, goto err_alloc; } - if ((pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE) && group_leader) { - err = -EBUSY; - goto err_context; - } - /* * Look up the group leader (we will attach this event to it): */ @@ -10705,6 +10723,18 @@ SYSCALL_DEFINE5(perf_event_open, move_group = 0; } } + + /* + * Failure to create exclusive events returns -EBUSY. + */ + err = -EBUSY; + if (!exclusive_event_installable(group_leader, ctx)) + goto err_locked; + + for_each_sibling_event(sibling, group_leader) { + if (!exclusive_event_installable(sibling, ctx)) + goto err_locked; + } } else { mutex_lock(&ctx->mutex); } @@ -10741,9 +10771,6 @@ SYSCALL_DEFINE5(perf_event_open, * because we need to serialize with concurrent event creation. */ if (!exclusive_event_installable(event, ctx)) { - /* exclusive and group stuff are assumed mutually exclusive */ - WARN_ON_ONCE(move_group); - err = -EBUSY; goto err_locked; } @@ -10930,7 +10957,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu, goto err_unlock; } - perf_install_in_context(ctx, event, cpu); + perf_install_in_context(ctx, event, event->cpu); perf_unpin_context(ctx); mutex_unlock(&ctx->mutex); @@ -11210,11 +11237,11 @@ static void perf_free_event(struct perf_event *event, } /* - * Free an unexposed, unused context as created by inheritance by - * perf_event_init_task below, used by fork() in case of fail. + * Free a context as created by inheritance by perf_event_init_task() below, + * used by fork() in case of fail. * - * Not all locks are strictly required, but take them anyway to be nice and - * help out with the lockdep assertions. + * Even though the task has never lived, the context and events have been + * exposed through the child_list, so we must take care tearing it all down. */ void perf_event_free_task(struct task_struct *task) { @@ -11244,7 +11271,23 @@ void perf_event_free_task(struct task_struct *task) perf_free_event(event, ctx); mutex_unlock(&ctx->mutex); - put_ctx(ctx); + + /* + * perf_event_release_kernel() could've stolen some of our + * child events and still have them on its free_list. In that + * case we must wait for these events to have been freed (in + * particular all their references to this task must've been + * dropped). + * + * Without this copy_process() will unconditionally free this + * task (irrespective of its reference count) and + * _free_event()'s put_task_struct(event->hw.target) will be a + * use-after-free. + * + * Wait for all events to drop their context reference. 
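/*
 * Illustrative sketch, not part of the perf patch above: the generic
 * wait_var_event()/wake_up_var() pairing that the fix relies on, with the
 * waker on one side and the sleeping waiter on the other.  'struct my_obj',
 * my_obj_put() and my_obj_wait_for_last_ref() are hypothetical names.
 */
#include <linux/atomic.h>
#include <linux/wait_bit.h>

struct my_obj {
        atomic_t refcount;
};

static void my_obj_put(struct my_obj *obj)
{
        /*
         * atomic_dec_return() is fully ordered, so a waiter re-checking the
         * condition in wait_var_event() is guaranteed to see the new count.
         */
        if (atomic_dec_return(&obj->refcount) == 1)
                wake_up_var(&obj->refcount);
}

static void my_obj_wait_for_last_ref(struct my_obj *obj)
{
        /* Sleep until only the caller's reference remains. */
        wait_var_event(&obj->refcount, atomic_read(&obj->refcount) == 1);
}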
+ */ + wait_var_event(&ctx->refcount, atomic_read(&ctx->refcount) == 1); + put_ctx(ctx); /* must be last */ } } diff --git a/kernel/exit.c b/kernel/exit.c index 47d4161d110499211bfa302fa2731c2a64f0991e..ac4ca468db840b4b3ff2dad42a2b0905876686d5 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -194,6 +194,7 @@ repeat: rcu_read_unlock(); proc_flush_task(p); + cgroup_release(p); write_lock_irq(&tasklist_lock); ptrace_release_task(p); @@ -219,7 +220,6 @@ repeat: } write_unlock_irq(&tasklist_lock); - cgroup_release(p); release_thread(p); call_rcu(&p->rcu, delayed_put_task_struct); diff --git a/kernel/fork.c b/kernel/fork.c index aa4905338ff472c0f5eccdc30edd56d01ba46f1f..e8845d14bcd364931afdd0017bc3d33c147f25d0 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -704,7 +704,7 @@ void __put_task_struct(struct task_struct *tsk) put_task_stack(tsk); cgroup_free(tsk); - task_numa_free(tsk); + task_numa_free(tsk, true); security_task_free(tsk); exit_creds(tsk); delayacct_tsk_free(tsk); diff --git a/kernel/irq/autoprobe.c b/kernel/irq/autoprobe.c index 16cbf6beb276844a532f9fc886cf6dd0f5c8f91e..ae60cae24e9aa3db1a6ac950c474e83854f73b37 100644 --- a/kernel/irq/autoprobe.c +++ b/kernel/irq/autoprobe.c @@ -90,7 +90,7 @@ unsigned long probe_irq_on(void) /* It triggered already - consider it spurious. */ if (!(desc->istate & IRQS_WAITING)) { desc->istate &= ~IRQS_AUTODETECT; - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); } else if (i < 32) mask |= 1 << i; @@ -127,7 +127,7 @@ unsigned int probe_irq_mask(unsigned long val) mask |= 1 << i; desc->istate &= ~IRQS_AUTODETECT; - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); } raw_spin_unlock_irq(&desc->lock); } @@ -169,7 +169,7 @@ int probe_irq_off(unsigned long val) nr_of_irqs++; } desc->istate &= ~IRQS_AUTODETECT; - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); } raw_spin_unlock_irq(&desc->lock); } diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c index 379e89c706c9afbae8ee220051bed22092b04d48..09d914e486a2d32d84b0e002560b834da64427fa 100644 --- a/kernel/irq/chip.c +++ b/kernel/irq/chip.c @@ -314,6 +314,12 @@ void irq_shutdown(struct irq_desc *desc) } irq_state_clr_started(desc); } +} + + +void irq_shutdown_and_deactivate(struct irq_desc *desc) +{ + irq_shutdown(desc); /* * This must be called even if the interrupt was never started up, * because the activation can happen before the interrupt is diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c index 5b1072e394b26d069bcbd44c55b68dc642bd30c9..6c7ca2e983a595ff561396e3d3b9400072426c56 100644 --- a/kernel/irq/cpuhotplug.c +++ b/kernel/irq/cpuhotplug.c @@ -116,7 +116,7 @@ static bool migrate_one_irq(struct irq_desc *desc) */ if (irqd_affinity_is_managed(d)) { irqd_set_managed_shutdown(d); - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); return false; } affinity = cpu_online_mask; diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h index e74e7eea76cf1996e11bfd31bf71dc59903dce7c..ea57f3d397fe35fcaacb227b0be9387f062c8c1f 100644 --- a/kernel/irq/internals.h +++ b/kernel/irq/internals.h @@ -80,6 +80,7 @@ extern int irq_activate_and_startup(struct irq_desc *desc, bool resend); extern int irq_startup(struct irq_desc *desc, bool resend, bool force); extern void irq_shutdown(struct irq_desc *desc); +extern void irq_shutdown_and_deactivate(struct irq_desc *desc); extern void irq_enable(struct irq_desc *desc); extern void irq_disable(struct irq_desc *desc); extern void irq_percpu_enable(struct irq_desc *desc, unsigned int cpu); @@ -94,6 +95,10 @@ static inline void 
irq_mark_irq(unsigned int irq) { } extern void irq_mark_irq(unsigned int irq); #endif +extern int __irq_get_irqchip_state(struct irq_data *data, + enum irqchip_irq_state which, + bool *state); + extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr); irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags); diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c index 8e009cee651742946174c1872f83b60b4dba21f8..26814a14013cbf5826c5a94b981a9d0202f8d92b 100644 --- a/kernel/irq/irqdesc.c +++ b/kernel/irq/irqdesc.c @@ -294,6 +294,18 @@ static void irq_sysfs_add(int irq, struct irq_desc *desc) } } +static void irq_sysfs_del(struct irq_desc *desc) +{ + /* + * If irq_sysfs_init() has not yet been invoked (early boot), then + * irq_kobj_base is NULL and the descriptor was never added. + * kobject_del() complains about a object with no parent, so make + * it conditional. + */ + if (irq_kobj_base) + kobject_del(&desc->kobj); +} + static int __init irq_sysfs_init(void) { struct irq_desc *desc; @@ -324,6 +336,7 @@ static struct kobj_type irq_kobj_type = { }; static void irq_sysfs_add(int irq, struct irq_desc *desc) {} +static void irq_sysfs_del(struct irq_desc *desc) {} #endif /* CONFIG_SYSFS */ @@ -437,7 +450,7 @@ static void free_desc(unsigned int irq) * The sysfs entry must be serialized against a concurrent * irq_sysfs_init() as well. */ - kobject_del(&desc->kobj); + irq_sysfs_del(desc); delete_irq_desc(irq); /* diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index b2736d7d863b4921c18b686ab4940eb9371a31be..290cd520dba1ad7474f9ebc1f262ca34f0f9fee4 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -36,8 +37,9 @@ early_param("threadirqs", setup_forced_irqthreads); # endif #endif -static void __synchronize_hardirq(struct irq_desc *desc) +static void __synchronize_hardirq(struct irq_desc *desc, bool sync_chip) { + struct irq_data *irqd = irq_desc_get_irq_data(desc); bool inprogress; do { @@ -53,6 +55,20 @@ static void __synchronize_hardirq(struct irq_desc *desc) /* Ok, that indicated we're done: double-check carefully. */ raw_spin_lock_irqsave(&desc->lock, flags); inprogress = irqd_irq_inprogress(&desc->irq_data); + + /* + * If requested and supported, check at the chip whether it + * is in flight at the hardware level, i.e. already pending + * in a CPU and waiting for service and acknowledge. + */ + if (!inprogress && sync_chip) { + /* + * Ignore the return code. inprogress is only updated + * when the chip supports it. + */ + __irq_get_irqchip_state(irqd, IRQCHIP_STATE_ACTIVE, + &inprogress); + } raw_spin_unlock_irqrestore(&desc->lock, flags); /* Oops, that failed? */ @@ -75,13 +91,18 @@ static void __synchronize_hardirq(struct irq_desc *desc) * Returns: false if a threaded handler is active. * * This function may be called - with care - from IRQ context. + * + * It does not check whether there is an interrupt in flight at the + * hardware level, but not serviced yet, as this might deadlock when + * called with interrupts disabled and the target CPU of the interrupt + * is the current CPU. */ bool synchronize_hardirq(unsigned int irq) { struct irq_desc *desc = irq_to_desc(irq); if (desc) { - __synchronize_hardirq(desc); + __synchronize_hardirq(desc, false); return !atomic_read(&desc->threads_active); } @@ -97,14 +118,19 @@ EXPORT_SYMBOL(synchronize_hardirq); * to complete before returning. 
If you use this function while * holding a resource the IRQ handler may need you will deadlock. * - * This function may be called - with care - from IRQ context. + * Can only be called from preemptible code as it might sleep when + * an interrupt thread is associated to @irq. + * + * It optionally makes sure (when the irq chip supports that method) + * that the interrupt is not pending in any CPU and waiting for + * service. */ void synchronize_irq(unsigned int irq) { struct irq_desc *desc = irq_to_desc(irq); if (desc) { - __synchronize_hardirq(desc); + __synchronize_hardirq(desc, true); /* * We made sure that no hardirq handler is * running. Now verify that no threaded handlers are @@ -1668,6 +1694,7 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id) /* If this was the last handler, shut down the IRQ line: */ if (!desc->action) { irq_settings_clr_disable_unlazy(desc); + /* Only shutdown. Deactivate after synchronize_hardirq() */ irq_shutdown(desc); } @@ -1696,8 +1723,12 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id) unregister_handler_proc(irq, action); - /* Make sure it's not being used on another CPU: */ - synchronize_hardirq(irq); + /* + * Make sure it's not being used on another CPU and if the chip + * supports it also make sure that there is no (not yet serviced) + * interrupt in flight at the hardware level. + */ + __synchronize_hardirq(desc, true); #ifdef CONFIG_DEBUG_SHIRQ /* @@ -1737,6 +1768,14 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id) * require it to deallocate resources over the slow bus. */ chip_bus_lock(desc); + /* + * There is no interrupt on the fly anymore. Deactivate it + * completely. + */ + raw_spin_lock_irqsave(&desc->lock, flags); + irq_domain_deactivate_irq(&desc->irq_data); + raw_spin_unlock_irqrestore(&desc->lock, flags); + irq_release_resources(desc); chip_bus_sync_unlock(desc); irq_remove_timings(desc); @@ -2222,6 +2261,28 @@ int __request_percpu_irq(unsigned int irq, irq_handler_t handler, } EXPORT_SYMBOL_GPL(__request_percpu_irq); +int __irq_get_irqchip_state(struct irq_data *data, enum irqchip_irq_state which, + bool *state) +{ + struct irq_chip *chip; + int err = -EINVAL; + + do { + chip = irq_data_get_irq_chip(data); + if (chip->irq_get_irqchip_state) + break; +#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY + data = data->parent_data; +#else + data = NULL; +#endif + } while (data); + + if (data) + err = chip->irq_get_irqchip_state(data, which, state); + return err; +} + /** * irq_get_irqchip_state - returns the irqchip state of a interrupt. 
* @irq: Interrupt line that is forwarded to a VM @@ -2240,7 +2301,6 @@ int irq_get_irqchip_state(unsigned int irq, enum irqchip_irq_state which, { struct irq_desc *desc; struct irq_data *data; - struct irq_chip *chip; unsigned long flags; int err = -EINVAL; @@ -2250,19 +2310,7 @@ int irq_get_irqchip_state(unsigned int irq, enum irqchip_irq_state which, data = irq_desc_get_irq_data(desc); - do { - chip = irq_data_get_irq_chip(data); - if (chip->irq_get_irqchip_state) - break; -#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY - data = data->parent_data; -#else - data = NULL; -#endif - } while (data); - - if (data) - err = chip->irq_get_irqchip_state(data, which, state); + err = __irq_get_irqchip_state(data, which, state); irq_put_desc_busunlock(desc, flags); return err; diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index 6daeb369f6916e97cd094e87107e3ff88e812faa..a67ca3182f72a56af0327ae5ed8cfc5fcc9ecf0a 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -3326,17 +3326,17 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass, if (depth) { hlock = curr->held_locks + depth - 1; if (hlock->class_idx == class_idx && nest_lock) { - if (hlock->references) { - /* - * Check: unsigned int references:12, overflow. - */ - if (DEBUG_LOCKS_WARN_ON(hlock->references == (1 << 12)-1)) - return 0; + if (!references) + references++; + if (!hlock->references) hlock->references++; - } else { - hlock->references = 2; - } + + hlock->references += references; + + /* Overflow */ + if (DEBUG_LOCKS_WARN_ON(hlock->references < references)) + return 0; return 1; } diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c index 3dd980dfba2de3cefba6562e9301c1951a238907..6fcc4650f0c489bb51ee97caf8aab44ea819f7e8 100644 --- a/kernel/locking/lockdep_proc.c +++ b/kernel/locking/lockdep_proc.c @@ -200,7 +200,6 @@ static void lockdep_stats_debug_show(struct seq_file *m) static int lockdep_stats_show(struct seq_file *m, void *v) { - struct lock_class *class; unsigned long nr_unused = 0, nr_uncategorized = 0, nr_irq_safe = 0, nr_irq_unsafe = 0, nr_softirq_safe = 0, nr_softirq_unsafe = 0, @@ -210,6 +209,9 @@ static int lockdep_stats_show(struct seq_file *m, void *v) nr_hardirq_read_safe = 0, nr_hardirq_read_unsafe = 0, sum_forward_deps = 0; +#ifdef CONFIG_PROVE_LOCKING + struct lock_class *class; + list_for_each_entry(class, &all_lock_classes, lock_entry) { if (class->usage_mask == 0) @@ -241,12 +243,12 @@ static int lockdep_stats_show(struct seq_file *m, void *v) if (class->usage_mask & LOCKF_ENABLED_HARDIRQ_READ) nr_hardirq_read_unsafe++; -#ifdef CONFIG_PROVE_LOCKING sum_forward_deps += lockdep_count_forward_deps(class); -#endif } #ifdef CONFIG_DEBUG_LOCKDEP DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused); +#endif + #endif seq_printf(m, " lock-classes: %11lu [max: %lu]\n", nr_lock_classes, MAX_LOCKDEP_KEYS); diff --git a/kernel/module.c b/kernel/module.c index b8f37376856bdfaddd0d367a4478b4c2726d238f..3fda10c549a25608646ac31ff6f732066d4d2c30 100644 --- a/kernel/module.c +++ b/kernel/module.c @@ -3388,8 +3388,7 @@ static bool finished_loading(const char *name) sched_annotate_sleep(); mutex_lock(&module_mutex); mod = find_module_all(name, strlen(name), true); - ret = !mod || mod->state == MODULE_STATE_LIVE - || mod->state == MODULE_STATE_GOING; + ret = !mod || mod->state == MODULE_STATE_LIVE; mutex_unlock(&module_mutex); return ret; @@ -3559,8 +3558,7 @@ again: mutex_lock(&module_mutex); old = find_module_all(mod->name, strlen(mod->name), true); 
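/*
 * Illustrative sketch, not part of the module.c patch above: the pattern
 * used around finished_loading() is "re-check a state under a lock, then
 * sleep on a waitqueue until it changes" (see the wait_event_interruptible()
 * call just below).  state, state_lock, state_wq and the two helpers are
 * hypothetical names.
 */
#include <linux/mutex.h>
#include <linux/wait.h>

static DEFINE_MUTEX(state_lock);
static DECLARE_WAIT_QUEUE_HEAD(state_wq);
static int state;      /* 0 = still loading, 1 = live */

static bool state_is_live(void)
{
        bool ret;

        mutex_lock(&state_lock);
        ret = (state == 1);
        mutex_unlock(&state_lock);

        return ret;
}

static int wait_until_live(void)
{
        /* Returns 0 once state_is_live() is true, -ERESTARTSYS on a signal. */
        return wait_event_interruptible(state_wq, state_is_live());
}

/* The writer sets state = 1 under state_lock and then calls wake_up_all(&state_wq). */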
if (old != NULL) { - if (old->state == MODULE_STATE_COMING - || old->state == MODULE_STATE_UNFORMED) { + if (old->state != MODULE_STATE_LIVE) { /* Wait in case it fails to load. */ mutex_unlock(&module_mutex); err = wait_event_interruptible(module_wq, diff --git a/kernel/padata.c b/kernel/padata.c index d568cc56405f8835d499df7b6555a406e99c9e96..6c06b3039faed01d992965f73c95bc218880e286 100644 --- a/kernel/padata.c +++ b/kernel/padata.c @@ -267,7 +267,12 @@ static void padata_reorder(struct parallel_data *pd) * The next object that needs serialization might have arrived to * the reorder queues in the meantime, we will be called again * from the timer function if no one else cares for it. + * + * Ensure reorder_objects is read after pd->lock is dropped so we see + * an increment from another task in padata_do_serial. Pairs with + * smp_mb__after_atomic in padata_do_serial. */ + smp_mb(); if (atomic_read(&pd->reorder_objects) && !(pinst->flags & PADATA_RESET)) mod_timer(&pd->timer, jiffies + HZ); @@ -387,6 +392,13 @@ void padata_do_serial(struct padata_priv *padata) list_add_tail(&padata->list, &pqueue->reorder.list); spin_unlock(&pqueue->reorder.lock); + /* + * Ensure the atomic_inc of reorder_objects above is ordered correctly + * with the trylock of pd->lock in padata_reorder. Pairs with smp_mb + * in padata_reorder. + */ + smp_mb__after_atomic(); + put_cpu(); /* If we're running on the wrong CPU, call padata_reorder() via a diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c index 2a2ac53d8b8bb845f25581b549a1ea4136564692..95271f180687e0367943aac2077fba2ff6cf7a61 100644 --- a/kernel/pid_namespace.c +++ b/kernel/pid_namespace.c @@ -325,7 +325,7 @@ int reboot_pid_ns(struct pid_namespace *pid_ns, int cmd) } read_lock(&tasklist_lock); - force_sig(SIGKILL, pid_ns->child_reaper); + send_sig(SIGKILL, pid_ns->child_reaper, 1); read_unlock(&tasklist_lock); do_exit(0); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 1b2503b87473a3365cf4dd3c7b35dca4323921ae..34e786c60d7972c9182a6412db51cf38f6894f5d 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -5250,7 +5250,7 @@ long __sched io_schedule_timeout(long timeout) } EXPORT_SYMBOL(io_schedule_timeout); -void io_schedule(void) +void __sched io_schedule(void) { int token; diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c index 4e3625109b28d8db76c8912248fd145c3af118bf..64d54acc992829d98c65e7371b9967203e1b08c8 100644 --- a/kernel/sched/cpufreq_schedutil.c +++ b/kernel/sched/cpufreq_schedutil.c @@ -40,6 +40,7 @@ struct sugov_policy { struct task_struct *thread; bool work_in_progress; + bool limits_changed; bool need_freq_update; }; @@ -90,8 +91,11 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time) !cpufreq_this_cpu_can_update(sg_policy->policy)) return false; - if (unlikely(sg_policy->need_freq_update)) + if (unlikely(sg_policy->limits_changed)) { + sg_policy->limits_changed = false; + sg_policy->need_freq_update = true; return true; + } delta_ns = time - sg_policy->last_freq_update_time; @@ -405,7 +409,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; } static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu, struct sugov_policy *sg_policy) { if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl) - sg_policy->need_freq_update = true; + sg_policy->limits_changed = true; } static void sugov_update_single(struct update_util_data *hook, u64 time, @@ -425,7 +429,8 @@ static void sugov_update_single(struct update_util_data *hook, 
u64 time, if (!sugov_should_update_freq(sg_policy, time)) return; - busy = sugov_cpu_is_busy(sg_cpu); + /* Limits may have changed, don't skip frequency update */ + busy = !sg_policy->need_freq_update && sugov_cpu_is_busy(sg_cpu); util = sugov_get_util(sg_cpu); max = sg_cpu->max; @@ -798,6 +803,7 @@ static int sugov_start(struct cpufreq_policy *policy) sg_policy->last_freq_update_time = 0; sg_policy->next_freq = 0; sg_policy->work_in_progress = false; + sg_policy->limits_changed = false; sg_policy->need_freq_update = false; sg_policy->cached_raw_freq = 0; @@ -849,7 +855,7 @@ static void sugov_limits(struct cpufreq_policy *policy) mutex_unlock(&sg_policy->work_lock); } - sg_policy->need_freq_update = true; + sg_policy->limits_changed = true; } static struct cpufreq_governor schedutil_gov = { diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 2cca09d59019c417ed7d43b96b97f3108015b087..38ff12cc67a31d285c6e123f8db61603d33a84ae 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -1053,6 +1053,21 @@ struct numa_group { unsigned long faults[0]; }; +/* + * For functions that can be called in multiple contexts that permit reading + * ->numa_group (see struct task_struct for locking rules). + */ +static struct numa_group *deref_task_numa_group(struct task_struct *p) +{ + return rcu_dereference_check(p->numa_group, p == current || + (lockdep_is_held(&task_rq(p)->lock) && !READ_ONCE(p->on_cpu))); +} + +static struct numa_group *deref_curr_numa_group(struct task_struct *p) +{ + return rcu_dereference_protected(p->numa_group, p == current); +} + static inline unsigned long group_faults_priv(struct numa_group *ng); static inline unsigned long group_faults_shared(struct numa_group *ng); @@ -1096,10 +1111,12 @@ static unsigned int task_scan_start(struct task_struct *p) { unsigned long smin = task_scan_min(p); unsigned long period = smin; + struct numa_group *ng; /* Scale the maximum scan period with the amount of shared memory. */ - if (p->numa_group) { - struct numa_group *ng = p->numa_group; + rcu_read_lock(); + ng = rcu_dereference(p->numa_group); + if (ng) { unsigned long shared = group_faults_shared(ng); unsigned long private = group_faults_priv(ng); @@ -1107,6 +1124,7 @@ static unsigned int task_scan_start(struct task_struct *p) period *= shared + 1; period /= private + shared + 1; } + rcu_read_unlock(); return max(smin, period); } @@ -1115,13 +1133,14 @@ static unsigned int task_scan_max(struct task_struct *p) { unsigned long smin = task_scan_min(p); unsigned long smax; + struct numa_group *ng; /* Watch for min being lower than max due to floor calculations */ smax = sysctl_numa_balancing_scan_period_max / task_nr_scan_windows(p); /* Scale the maximum scan period with the amount of shared memory. */ - if (p->numa_group) { - struct numa_group *ng = p->numa_group; + ng = deref_curr_numa_group(p); + if (ng) { unsigned long shared = group_faults_shared(ng); unsigned long private = group_faults_priv(ng); unsigned long period = smax; @@ -1153,7 +1172,7 @@ void init_numa_balancing(unsigned long clone_flags, struct task_struct *p) p->numa_scan_period = sysctl_numa_balancing_scan_delay; p->numa_work.next = &p->numa_work; p->numa_faults = NULL; - p->numa_group = NULL; + RCU_INIT_POINTER(p->numa_group, NULL); p->last_task_numa_placement = 0; p->last_sum_exec_runtime = 0; @@ -1200,7 +1219,16 @@ static void account_numa_dequeue(struct rq *rq, struct task_struct *p) pid_t task_numa_group_id(struct task_struct *p) { - return p->numa_group ? 
p->numa_group->gid : 0; + struct numa_group *ng; + pid_t gid = 0; + + rcu_read_lock(); + ng = rcu_dereference(p->numa_group); + if (ng) + gid = ng->gid; + rcu_read_unlock(); + + return gid; } /* @@ -1225,11 +1253,13 @@ static inline unsigned long task_faults(struct task_struct *p, int nid) static inline unsigned long group_faults(struct task_struct *p, int nid) { - if (!p->numa_group) + struct numa_group *ng = deref_task_numa_group(p); + + if (!ng) return 0; - return p->numa_group->faults[task_faults_idx(NUMA_MEM, nid, 0)] + - p->numa_group->faults[task_faults_idx(NUMA_MEM, nid, 1)]; + return ng->faults[task_faults_idx(NUMA_MEM, nid, 0)] + + ng->faults[task_faults_idx(NUMA_MEM, nid, 1)]; } static inline unsigned long group_faults_cpu(struct numa_group *group, int nid) @@ -1367,12 +1397,13 @@ static inline unsigned long task_weight(struct task_struct *p, int nid, static inline unsigned long group_weight(struct task_struct *p, int nid, int dist) { + struct numa_group *ng = deref_task_numa_group(p); unsigned long faults, total_faults; - if (!p->numa_group) + if (!ng) return 0; - total_faults = p->numa_group->total_faults; + total_faults = ng->total_faults; if (!total_faults) return 0; @@ -1386,7 +1417,7 @@ static inline unsigned long group_weight(struct task_struct *p, int nid, bool should_numa_migrate_memory(struct task_struct *p, struct page * page, int src_nid, int dst_cpu) { - struct numa_group *ng = p->numa_group; + struct numa_group *ng = deref_curr_numa_group(p); int dst_nid = cpu_to_node(dst_cpu); int last_cpupid, this_cpupid; @@ -1592,13 +1623,14 @@ static bool load_too_imbalanced(long src_load, long dst_load, static void task_numa_compare(struct task_numa_env *env, long taskimp, long groupimp, bool maymove) { + struct numa_group *cur_ng, *p_ng = deref_curr_numa_group(env->p); struct rq *dst_rq = cpu_rq(env->dst_cpu); + long imp = p_ng ? groupimp : taskimp; struct task_struct *cur; long src_load, dst_load; - long load; - long imp = env->p->numa_group ? groupimp : taskimp; - long moveimp = imp; int dist = env->dist; + long moveimp = imp; + long load; if (READ_ONCE(dst_rq->numa_migrate_on)) return; @@ -1637,21 +1669,22 @@ static void task_numa_compare(struct task_numa_env *env, * If dst and source tasks are in the same NUMA group, or not * in any group then look only at task weights. */ - if (cur->numa_group == env->p->numa_group) { + cur_ng = rcu_dereference(cur->numa_group); + if (cur_ng == p_ng) { imp = taskimp + task_weight(cur, env->src_nid, dist) - task_weight(cur, env->dst_nid, dist); /* * Add some hysteresis to prevent swapping the * tasks within a group over tiny differences. */ - if (cur->numa_group) + if (cur_ng) imp -= imp / 16; } else { /* * Compare the group weights. If a task is all by itself * (not part of a group), use the task weight instead. */ - if (cur->numa_group && env->p->numa_group) + if (cur_ng && p_ng) imp += group_weight(cur, env->src_nid, dist) - group_weight(cur, env->dst_nid, dist); else @@ -1749,11 +1782,12 @@ static int task_numa_migrate(struct task_struct *p) .best_imp = 0, .best_cpu = -1, }; + unsigned long taskweight, groupweight; struct sched_domain *sd; + long taskimp, groupimp; + struct numa_group *ng; struct rq *best_rq; - unsigned long taskweight, groupweight; int nid, ret, dist; - long taskimp, groupimp; /* * Pick the lowest SD_NUMA domain, as that would have the smallest @@ -1799,7 +1833,8 @@ static int task_numa_migrate(struct task_struct *p) * multiple NUMA nodes; in order to better consolidate the group, * we need to check other locations. 
*/ - if (env.best_cpu == -1 || (p->numa_group && p->numa_group->active_nodes > 1)) { + ng = deref_curr_numa_group(p); + if (env.best_cpu == -1 || (ng && ng->active_nodes > 1)) { for_each_online_node(nid) { if (nid == env.src_nid || nid == p->numa_preferred_nid) continue; @@ -1832,7 +1867,7 @@ static int task_numa_migrate(struct task_struct *p) * A task that migrated to a second choice node will be better off * trying for a better one later. Do not set the preferred node here. */ - if (p->numa_group) { + if (ng) { if (env.best_cpu == -1) nid = env.src_nid; else @@ -2127,6 +2162,7 @@ static void task_numa_placement(struct task_struct *p) unsigned long total_faults; u64 runtime, period; spinlock_t *group_lock = NULL; + struct numa_group *ng; /* * The p->mm->numa_scan_seq field gets updated without @@ -2144,8 +2180,9 @@ static void task_numa_placement(struct task_struct *p) runtime = numa_get_avg_runtime(p, &period); /* If the task is part of a group prevent parallel updates to group stats */ - if (p->numa_group) { - group_lock = &p->numa_group->lock; + ng = deref_curr_numa_group(p); + if (ng) { + group_lock = &ng->lock; spin_lock_irq(group_lock); } @@ -2186,7 +2223,7 @@ static void task_numa_placement(struct task_struct *p) p->numa_faults[cpu_idx] += f_diff; faults += p->numa_faults[mem_idx]; p->total_numa_faults += diff; - if (p->numa_group) { + if (ng) { /* * safe because we can only change our own group * @@ -2194,14 +2231,14 @@ static void task_numa_placement(struct task_struct *p) * nid and priv in a specific region because it * is at the beginning of the numa_faults array. */ - p->numa_group->faults[mem_idx] += diff; - p->numa_group->faults_cpu[mem_idx] += f_diff; - p->numa_group->total_faults += diff; - group_faults += p->numa_group->faults[mem_idx]; + ng->faults[mem_idx] += diff; + ng->faults_cpu[mem_idx] += f_diff; + ng->total_faults += diff; + group_faults += ng->faults[mem_idx]; } } - if (!p->numa_group) { + if (!ng) { if (faults > max_faults) { max_faults = faults; max_nid = nid; @@ -2212,8 +2249,8 @@ static void task_numa_placement(struct task_struct *p) } } - if (p->numa_group) { - numa_group_count_active_nodes(p->numa_group); + if (ng) { + numa_group_count_active_nodes(ng); spin_unlock_irq(group_lock); max_nid = preferred_group_nid(p, max_nid); } @@ -2247,7 +2284,7 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags, int cpu = cpupid_to_cpu(cpupid); int i; - if (unlikely(!p->numa_group)) { + if (unlikely(!deref_curr_numa_group(p))) { unsigned int size = sizeof(struct numa_group) + 4*nr_node_ids*sizeof(unsigned long); @@ -2283,7 +2320,7 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags, if (!grp) goto no_join; - my_grp = p->numa_group; + my_grp = deref_curr_numa_group(p); if (grp == my_grp) goto no_join; @@ -2345,13 +2382,24 @@ no_join: return; } -void task_numa_free(struct task_struct *p) +/* + * Get rid of NUMA statistics associated with a task (either current or dead). + * If @final is set, the task is dead and has reached refcount zero, so we can + * safely free all relevant data structures. Otherwise, there might be + * concurrent reads from places like load balancing and procfs, and we should + * reset the data back to default state without freeing ->numa_faults.
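/*
 * Illustrative sketch, not part of the sched/fair.c patch above: the basic
 * RCU publish/read pattern behind the new deref_task_numa_group() and
 * deref_curr_numa_group() accessors.  'struct grp', cur_grp, read_gid() and
 * set_grp() are hypothetical names, not kernel APIs.
 */
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct grp {
        int gid;
        struct rcu_head rcu;
};

static struct grp __rcu *cur_grp;
static DEFINE_SPINLOCK(grp_lock);

static int read_gid(void)
{
        struct grp *g;
        int gid = 0;

        rcu_read_lock();
        g = rcu_dereference(cur_grp);   /* may be NULL, only stable until unlock */
        if (g)
                gid = g->gid;
        rcu_read_unlock();

        return gid;
}

static void set_grp(struct grp *new)
{
        struct grp *old;

        spin_lock(&grp_lock);
        old = rcu_dereference_protected(cur_grp, lockdep_is_held(&grp_lock));
        rcu_assign_pointer(cur_grp, new);
        spin_unlock(&grp_lock);

        if (old)
                kfree_rcu(old, rcu);    /* freed only after readers drain */
}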
+ */ +void task_numa_free(struct task_struct *p, bool final) { - struct numa_group *grp = p->numa_group; - void *numa_faults = p->numa_faults; + /* safe: p either is current or is being freed by current */ + struct numa_group *grp = rcu_dereference_raw(p->numa_group); + unsigned long *numa_faults = p->numa_faults; unsigned long flags; int i; + if (!numa_faults) + return; + if (grp) { spin_lock_irqsave(&grp->lock, flags); for (i = 0; i < NR_NUMA_HINT_FAULT_STATS * nr_node_ids; i++) @@ -2364,8 +2412,14 @@ void task_numa_free(struct task_struct *p) put_numa_group(grp); } - p->numa_faults = NULL; - kfree(numa_faults); + if (final) { + p->numa_faults = NULL; + kfree(numa_faults); + } else { + p->total_numa_faults = 0; + for (i = 0; i < NR_NUMA_HINT_FAULT_STATS * nr_node_ids; i++) + numa_faults[i] = 0; + } } /* @@ -2418,7 +2472,7 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags) * actively using should be counted as local. This allows the * scan rate to slow down when a workload has settled down. */ - ng = p->numa_group; + ng = deref_curr_numa_group(p); if (!priv && !local && ng && ng->active_nodes > 1 && numa_is_active_node(cpu_node, ng) && numa_is_active_node(mem_node, ng)) @@ -10220,18 +10274,22 @@ void show_numa_stats(struct task_struct *p, struct seq_file *m) { int node; unsigned long tsf = 0, tpf = 0, gsf = 0, gpf = 0; + struct numa_group *ng; + rcu_read_lock(); + ng = rcu_dereference(p->numa_group); for_each_online_node(node) { if (p->numa_faults) { tsf = p->numa_faults[task_faults_idx(NUMA_MEM, node, 0)]; tpf = p->numa_faults[task_faults_idx(NUMA_MEM, node, 1)]; } - if (p->numa_group) { - gsf = p->numa_group->faults[task_faults_idx(NUMA_MEM, node, 0)], - gpf = p->numa_group->faults[task_faults_idx(NUMA_MEM, node, 1)]; + if (ng) { + gsf = ng->faults[task_faults_idx(NUMA_MEM, node, 0)], + gpf = ng->faults[task_faults_idx(NUMA_MEM, node, 1)]; } print_numa_stats(m, node, tsf, tpf, gsf, gpf); } + rcu_read_unlock(); } #endif /* CONFIG_NUMA_BALANCING */ #endif /* CONFIG_SCHED_DEBUG */ diff --git a/kernel/sched/sched-pelt.h b/kernel/sched/sched-pelt.h index a26473674fb797411b7c3b1e8790c64dcfd7ed51..c529706bed11531eea6e4301f126a0aaadefce15 100644 --- a/kernel/sched/sched-pelt.h +++ b/kernel/sched/sched-pelt.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* Generated by Documentation/scheduler/sched-pelt; do not modify. 
*/ -static const u32 runnable_avg_yN_inv[] = { +static const u32 runnable_avg_yN_inv[] __maybe_unused = { 0xffffffff, 0xfa83b2da, 0xf5257d14, 0xefe4b99a, 0xeac0c6e6, 0xe5b906e6, 0xe0ccdeeb, 0xdbfbb796, 0xd744fcc9, 0xd2a81d91, 0xce248c14, 0xc9b9bd85, 0xc5672a10, 0xc12c4cc9, 0xbd08a39e, 0xb8fbaf46, 0xb504f333, 0xb123f581, diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c index 6b23cd584295f57787e7689191550a252140dd7e..e1110a7bd3e64afefcd20eacb45df112d16ac69e 100644 --- a/kernel/time/ntp.c +++ b/kernel/time/ntp.c @@ -43,6 +43,7 @@ static u64 tick_length_base; #define MAX_TICKADJ 500LL /* usecs */ #define MAX_TICKADJ_SCALED \ (((MAX_TICKADJ * NSEC_PER_USEC) << NTP_SCALE_SHIFT) / NTP_INTERVAL_FREQ) +#define MAX_TAI_OFFSET 100000 /* * phase-lock loop variables @@ -698,7 +699,8 @@ static inline void process_adjtimex_modes(const struct timex *txc, s32 *time_tai time_constant = max(time_constant, 0l); } - if (txc->modes & ADJ_TAI && txc->constant >= 0) + if (txc->modes & ADJ_TAI && + txc->constant >= 0 && txc->constant <= MAX_TAI_OFFSET) *time_tai = txc->constant; if (txc->modes & ADJ_OFFSET) diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c index d647dabdac97a3b592b25cdc929909df92b04b8a..07afcfe2a61b7c02ec6d6e43687fae1344775373 100644 --- a/kernel/time/timer_list.c +++ b/kernel/time/timer_list.c @@ -287,23 +287,6 @@ static inline void timer_list_header(struct seq_file *m, u64 now) SEQ_printf(m, "\n"); } -static int timer_list_show(struct seq_file *m, void *v) -{ - struct timer_list_iter *iter = v; - - if (iter->cpu == -1 && !iter->second_pass) - timer_list_header(m, iter->now); - else if (!iter->second_pass) - print_cpu(m, iter->cpu, iter->now); -#ifdef CONFIG_GENERIC_CLOCKEVENTS - else if (iter->cpu == -1 && iter->second_pass) - timer_list_show_tickdevices_header(m); - else - print_tickdevice(m, tick_get_device(iter->cpu), iter->cpu); -#endif - return 0; -} - void sysrq_timer_list_show(void) { u64 now = ktime_to_ns(ktime_get()); @@ -322,6 +305,24 @@ void sysrq_timer_list_show(void) return; } +#ifdef CONFIG_PROC_FS +static int timer_list_show(struct seq_file *m, void *v) +{ + struct timer_list_iter *iter = v; + + if (iter->cpu == -1 && !iter->second_pass) + timer_list_header(m, iter->now); + else if (!iter->second_pass) + print_cpu(m, iter->cpu, iter->now); +#ifdef CONFIG_GENERIC_CLOCKEVENTS + else if (iter->cpu == -1 && iter->second_pass) + timer_list_show_tickdevices_header(m); + else + print_tickdevice(m, tick_get_device(iter->cpu), iter->cpu); +#endif + return 0; +} + static void *move_iter(struct timer_list_iter *iter, loff_t offset) { for (; offset; offset--) { @@ -381,3 +382,4 @@ static int __init init_timer_list_procfs(void) return 0; } __initcall(init_timer_list_procfs); +#endif diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 118ecce143866bf8034093ce15baaef38124a709..7e215dac969330ed3b7a96a903372b0a365184e3 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -1647,6 +1647,11 @@ static bool test_rec_ops_needs_regs(struct dyn_ftrace *rec) return keep_regs; } +static struct ftrace_ops * +ftrace_find_tramp_ops_any(struct dyn_ftrace *rec); +static struct ftrace_ops * +ftrace_find_tramp_ops_next(struct dyn_ftrace *rec, struct ftrace_ops *ops); + static bool __ftrace_hash_rec_update(struct ftrace_ops *ops, int filter_hash, bool inc) @@ -1775,15 +1780,17 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops, } /* - * If the rec had TRAMP enabled, then it needs to - * be cleared. 
As TRAMP can only be enabled iff - * there is only a single ops attached to it. - * In otherwords, always disable it on decrementing. - * In the future, we may set it if rec count is - * decremented to one, and the ops that is left - * has a trampoline. + * The TRAMP needs to be set only if rec count + * is decremented to one, and the ops that is + * left has a trampoline. As TRAMP can only be + * enabled if there is only a single ops attached + * to it. */ - rec->flags &= ~FTRACE_FL_TRAMP; + if (ftrace_rec_count(rec) == 1 && + ftrace_find_tramp_ops_any(rec)) + rec->flags |= FTRACE_FL_TRAMP; + else + rec->flags &= ~FTRACE_FL_TRAMP; /* * flags will be cleared in ftrace_check_record() @@ -1976,11 +1983,6 @@ static void print_ip_ins(const char *fmt, const unsigned char *p) printk(KERN_CONT "%s%02x", i ? ":" : "", p[i]); } -static struct ftrace_ops * -ftrace_find_tramp_ops_any(struct dyn_ftrace *rec); -static struct ftrace_ops * -ftrace_find_tramp_ops_next(struct dyn_ftrace *rec, struct ftrace_ops *ops); - enum ftrace_bug_type ftrace_bug_type; const void *ftrace_expected; @@ -3110,6 +3112,14 @@ t_probe_next(struct seq_file *m, loff_t *pos) hnd = &iter->probe_entry->hlist; hash = iter->probe->ops.func_hash->filter_hash; + + /* + * A probe being registered may temporarily have an empty hash + * and it's at the end of the func_probes list. + */ + if (!hash || hash == EMPTY_HASH) + return NULL; + size = 1 << hash->size_bits; retry: @@ -4305,12 +4315,21 @@ register_ftrace_function_probe(char *glob, struct trace_array *tr, mutex_unlock(&ftrace_lock); + /* + * Note, there's a small window here that the func_hash->filter_hash + * may be NULL or empty. Need to be careful when reading the loop. + */ mutex_lock(&probe->ops.func_hash->regex_lock); orig_hash = &probe->ops.func_hash->filter_hash; old_hash = *orig_hash; hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, old_hash); + if (!hash) { + ret = -ENOMEM; + goto out; + } + ret = ftrace_match_records(hash, glob, strlen(glob)); /* Nothing found?
*/ diff --git a/lib/logic_pio.c b/lib/logic_pio.c index feea48fd1a0dd6ae7913b6fad19cc52e20664785..905027574e5d8041d1394dc4e5582061ed084e2f 100644 --- a/lib/logic_pio.c +++ b/lib/logic_pio.c @@ -35,7 +35,7 @@ int logic_pio_register_range(struct logic_pio_hwaddr *new_range) struct logic_pio_hwaddr *range; resource_size_t start; resource_size_t end; - resource_size_t mmio_sz = 0; + resource_size_t mmio_end = 0; resource_size_t iio_sz = MMIO_UPPER_LIMIT; int ret = 0; @@ -46,7 +46,7 @@ int logic_pio_register_range(struct logic_pio_hwaddr *new_range) end = new_range->hw_start + new_range->size; mutex_lock(&io_range_mutex); - list_for_each_entry_rcu(range, &io_range_list, list) { + list_for_each_entry(range, &io_range_list, list) { if (range->fwnode == new_range->fwnode) { /* range already there */ goto end_register; @@ -56,7 +56,7 @@ int logic_pio_register_range(struct logic_pio_hwaddr *new_range) /* for MMIO ranges we need to check for overlap */ if (start >= range->hw_start + range->size || end < range->hw_start) { - mmio_sz += range->size; + mmio_end = range->io_start + range->size; } else { ret = -EFAULT; goto end_register; @@ -69,16 +69,16 @@ int logic_pio_register_range(struct logic_pio_hwaddr *new_range) /* range not registered yet, check for available space */ if (new_range->flags == LOGIC_PIO_CPU_MMIO) { - if (mmio_sz + new_range->size - 1 > MMIO_UPPER_LIMIT) { + if (mmio_end + new_range->size - 1 > MMIO_UPPER_LIMIT) { /* if it's too big check if 64K space can be reserved */ - if (mmio_sz + SZ_64K - 1 > MMIO_UPPER_LIMIT) { + if (mmio_end + SZ_64K - 1 > MMIO_UPPER_LIMIT) { ret = -E2BIG; goto end_register; } new_range->size = SZ_64K; pr_warn("Requested IO range too big, new size set to 64K\n"); } - new_range->io_start = mmio_sz; + new_range->io_start = mmio_end; } else if (new_range->flags == LOGIC_PIO_INDIRECT) { if (iio_sz + new_range->size - 1 > IO_SPACE_LIMIT) { ret = -E2BIG; @@ -98,6 +98,20 @@ end_register: return ret; } +/** + * logic_pio_unregister_range - unregister a logical PIO range for a host + * @range: pointer to the IO range which has been already registered. + * + * Unregister a previously-registered IO range node. 
+ */ +void logic_pio_unregister_range(struct logic_pio_hwaddr *range) +{ + mutex_lock(&io_range_mutex); + list_del_rcu(&range->list); + mutex_unlock(&io_range_mutex); + synchronize_rcu(); +} + /** * find_io_range_by_fwnode - find logical PIO range for given FW node * @fwnode: FW node handle associated with logical PIO range @@ -108,26 +122,38 @@ end_register: */ struct logic_pio_hwaddr *find_io_range_by_fwnode(struct fwnode_handle *fwnode) { - struct logic_pio_hwaddr *range; + struct logic_pio_hwaddr *range, *found_range = NULL; + rcu_read_lock(); list_for_each_entry_rcu(range, &io_range_list, list) { - if (range->fwnode == fwnode) - return range; + if (range->fwnode == fwnode) { + found_range = range; + break; + } } - return NULL; + rcu_read_unlock(); + + return found_range; } /* Return a registered range given an input PIO token */ static struct logic_pio_hwaddr *find_io_range(unsigned long pio) { - struct logic_pio_hwaddr *range; + struct logic_pio_hwaddr *range, *found_range = NULL; + rcu_read_lock(); list_for_each_entry_rcu(range, &io_range_list, list) { - if (in_range(pio, range->io_start, range->size)) - return range; + if (in_range(pio, range->io_start, range->size)) { + found_range = range; + break; + } } - pr_err("PIO entry token %lx invalid\n", pio); - return NULL; + rcu_read_unlock(); + + if (!found_range) + pr_err("PIO entry token 0x%lx invalid\n", pio); + + return found_range; } /** @@ -180,14 +206,23 @@ unsigned long logic_pio_trans_cpuaddr(resource_size_t addr) { struct logic_pio_hwaddr *range; + rcu_read_lock(); list_for_each_entry_rcu(range, &io_range_list, list) { if (range->flags != LOGIC_PIO_CPU_MMIO) continue; - if (in_range(addr, range->hw_start, range->size)) - return addr - range->hw_start + range->io_start; + if (in_range(addr, range->hw_start, range->size)) { + unsigned long cpuaddr; + + cpuaddr = addr - range->hw_start + range->io_start; + + rcu_read_unlock(); + return cpuaddr; + } } - pr_err("addr %llx not registered in io_range_list\n", - (unsigned long long) addr); + rcu_read_unlock(); + + pr_err("addr %pa not registered in io_range_list\n", &addr); + return ~0UL; } diff --git a/lib/reed_solomon/decode_rs.c b/lib/reed_solomon/decode_rs.c index 1db74eb098d0eb6e8123d07ed4be795a09bf82ce..121beb2f09307d7471aa0ce6238483d3a8d6cb30 100644 --- a/lib/reed_solomon/decode_rs.c +++ b/lib/reed_solomon/decode_rs.c @@ -42,8 +42,18 @@ BUG_ON(pad < 0 || pad >= nn); /* Does the caller provide the syndrome ? 
*/ - if (s != NULL) - goto decode; + if (s != NULL) { + for (i = 0; i < nroots; i++) { + /* The syndrome is in index form, + * so nn represents zero + */ + if (s[i] != nn) + goto decode; + } + + /* syndrome is zero, no errors to correct */ + return 0; + } /* form the syndromes; i.e., evaluate data(x) at roots of * g(x) */ @@ -99,9 +109,9 @@ if (no_eras > 0) { /* Init lambda to be the erasure locator polynomial */ lambda[1] = alpha_to[rs_modnn(rs, - prim * (nn - 1 - eras_pos[0]))]; + prim * (nn - 1 - (eras_pos[0] + pad)))]; for (i = 1; i < no_eras; i++) { - u = rs_modnn(rs, prim * (nn - 1 - eras_pos[i])); + u = rs_modnn(rs, prim * (nn - 1 - (eras_pos[i] + pad))); for (j = i + 1; j > 0; j--) { tmp = index_of[lambda[j - 1]]; if (tmp != nn) { diff --git a/lib/scatterlist.c b/lib/scatterlist.c index 5c2c689627099ff556cc976bb9cf612edb7782f6..336162c2813fc6d31eec4eb2103942d7e83ba9c3 100644 --- a/lib/scatterlist.c +++ b/lib/scatterlist.c @@ -652,17 +652,18 @@ static bool sg_miter_get_next_page(struct sg_mapping_iter *miter) { if (!miter->__remaining) { struct scatterlist *sg; - unsigned long pgoffset; if (!__sg_page_iter_next(&miter->piter)) return false; sg = miter->piter.sg; - pgoffset = miter->piter.sg_pgoffset; - miter->__offset = pgoffset ? 0 : sg->offset; + miter->__offset = miter->piter.sg_pgoffset ? 0 : sg->offset; + miter->piter.sg_pgoffset += miter->__offset >> PAGE_SHIFT; + miter->__offset &= PAGE_SIZE - 1; miter->__remaining = sg->offset + sg->length - - (pgoffset << PAGE_SHIFT) - miter->__offset; + (miter->piter.sg_pgoffset << PAGE_SHIFT) - + miter->__offset; miter->__remaining = min_t(unsigned long, miter->__remaining, PAGE_SIZE - miter->__offset); } diff --git a/lib/test_firmware.c b/lib/test_firmware.c index fd48a15a0710c7c28a0e6fd7b2b7c4ad89a5312d..a74b1aae74618b8a6954d5d531f9a7767a5165b0 100644 --- a/lib/test_firmware.c +++ b/lib/test_firmware.c @@ -894,8 +894,11 @@ static int __init test_firmware_init(void) return -ENOMEM; rc = __test_firmware_config_init(); - if (rc) + if (rc) { + kfree(test_fw_config); + pr_err("could not init firmware test config: %d\n", rc); return rc; + } rc = misc_register(&test_fw_misc_device); if (rc) { diff --git a/lib/test_overflow.c b/lib/test_overflow.c index fc680562d8b69fc80cf2dbe9fab71d9e65303662..7a4b6f6c5473c7cf43e6f86a21ff795dc2389486 100644 --- a/lib/test_overflow.c +++ b/lib/test_overflow.c @@ -486,16 +486,17 @@ static int __init test_overflow_shift(void) * Deal with the various forms of allocator arguments. See comments above * the DEFINE_TEST_ALLOC() instances for mapping of the "bits". 
*/ -#define alloc010(alloc, arg, sz) alloc(sz, GFP_KERNEL) -#define alloc011(alloc, arg, sz) alloc(sz, GFP_KERNEL, NUMA_NO_NODE) +#define alloc_GFP (GFP_KERNEL | __GFP_NOWARN) +#define alloc010(alloc, arg, sz) alloc(sz, alloc_GFP) +#define alloc011(alloc, arg, sz) alloc(sz, alloc_GFP, NUMA_NO_NODE) #define alloc000(alloc, arg, sz) alloc(sz) #define alloc001(alloc, arg, sz) alloc(sz, NUMA_NO_NODE) -#define alloc110(alloc, arg, sz) alloc(arg, sz, GFP_KERNEL) +#define alloc110(alloc, arg, sz) alloc(arg, sz, alloc_GFP) #define free0(free, arg, ptr) free(ptr) #define free1(free, arg, ptr) free(arg, ptr) -/* Wrap around to 8K */ -#define TEST_SIZE (9 << PAGE_SHIFT) +/* Wrap around to 16K */ +#define TEST_SIZE (5 * 4096) #define DEFINE_TEST_ALLOC(func, free_func, want_arg, want_gfp, want_node)\ static int __init test_ ## func (void *arg) \ diff --git a/lib/test_string.c b/lib/test_string.c index 0fcdb82dca8667604a0a0756803b7c7914fc3aa1..98a787e7a1fd6426cebd738e0e766c0dc8f502bc 100644 --- a/lib/test_string.c +++ b/lib/test_string.c @@ -35,7 +35,7 @@ static __init int memset16_selftest(void) fail: kfree(p); if (i < 256) - return (i << 24) | (j << 16) | k; + return (i << 24) | (j << 16) | k | 0x8000; return 0; } @@ -71,7 +71,7 @@ static __init int memset32_selftest(void) fail: kfree(p); if (i < 256) - return (i << 24) | (j << 16) | k; + return (i << 24) | (j << 16) | k | 0x8000; return 0; } @@ -107,7 +107,7 @@ static __init int memset64_selftest(void) fail: kfree(p); if (i < 256) - return (i << 24) | (j << 16) | k; + return (i << 24) | (j << 16) | k | 0x8000; return 0; } diff --git a/mm/cma.c b/mm/cma.c index 476dfe13a701f76d4e2d29c30a6ce0e944e12bb8..4c2864270a39b0239f7a38d025763ba7d988a2ee 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -282,6 +282,12 @@ int __init cma_declare_contiguous(phys_addr_t base, */ alignment = max(alignment, (phys_addr_t)PAGE_SIZE << max_t(unsigned long, MAX_ORDER - 1, pageblock_order)); + if (fixed && base & (alignment - 1)) { + ret = -EINVAL; + pr_err("Region at %pa must be aligned to %pa bytes\n", + &base, &alignment); + goto err; + } base = ALIGN(base, alignment); size = ALIGN(size, alignment); limit &= ~(alignment - 1); @@ -312,6 +318,13 @@ int __init cma_declare_contiguous(phys_addr_t base, if (limit == 0 || limit > memblock_end) limit = memblock_end; + if (base + size > limit) { + ret = -EINVAL; + pr_err("Size (%pa) of region at %pa exceeds limit (%pa)\n", + &size, &base, &limit); + goto err; + } + /* Reserve memory */ if (fixed) { if (memblock_is_region_reserved(base, size) || diff --git a/mm/filemap.c b/mm/filemap.c index 52517f28e6f4a69020cfc60867f5a57461c89fb8..287f3fa02e5ee84bcd86847c575d3180da94ac8a 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -561,6 +561,28 @@ int filemap_fdatawait_range(struct address_space *mapping, loff_t start_byte, } EXPORT_SYMBOL(filemap_fdatawait_range); +/** + * filemap_fdatawait_range_keep_errors - wait for writeback to complete + * @mapping: address space structure to wait for + * @start_byte: offset in bytes where the range starts + * @end_byte: offset in bytes where the range ends (inclusive) + * + * Walk the list of under-writeback pages of the given address space in the + * given range and wait for all of them. Unlike filemap_fdatawait_range(), + * this function does not clear error status of the address space. + * + * Use this function if callers don't handle errors themselves. Expected + * call sites are system-wide / filesystem-wide data flushers: e.g. 
sync(2), + * fsfreeze(8) + */ +int filemap_fdatawait_range_keep_errors(struct address_space *mapping, + loff_t start_byte, loff_t end_byte) +{ + __filemap_fdatawait_range(mapping, start_byte, end_byte); + return filemap_check_and_keep_errors(mapping); +} +EXPORT_SYMBOL(filemap_fdatawait_range_keep_errors); + /** * file_fdatawait_range - wait for writeback to complete * @file: file pointing to address space structure to wait for diff --git a/mm/gup.c b/mm/gup.c index caadd31714a544411a915babd954867453576f2b..f3088d25bd926278c439726359567dbd95e3c861 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -458,11 +458,14 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address, pgd = pgd_offset_k(address); else pgd = pgd_offset_gate(mm, address); - BUG_ON(pgd_none(*pgd)); + if (pgd_none(*pgd)) + return -EFAULT; p4d = p4d_offset(pgd, address); - BUG_ON(p4d_none(*p4d)); + if (p4d_none(*p4d)) + return -EFAULT; pud = pud_offset(p4d, address); - BUG_ON(pud_none(*pud)); + if (pud_none(*pud)) + return -EFAULT; pmd = pmd_offset(pud, address); if (!pmd_present(*pmd)) return -EFAULT; @@ -1367,7 +1370,8 @@ static inline pte_t gup_get_pte(pte_t *ptep) } #endif -static void undo_dev_pagemap(int *nr, int nr_start, struct page **pages) +static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start, + struct page **pages) { while ((*nr) - nr_start) { struct page *page = pages[--(*nr)]; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 6fad1864ba03bc1f0dd6be5f77107d288d490934..09ce8528bbdd9012420e93de87c28c7cd547c5de 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include @@ -2477,6 +2478,9 @@ static void __split_huge_page(struct page *page, struct list_head *list, } ClearPageCompound(head); + + split_page_owner(head, HPAGE_PMD_ORDER); + /* See comment in __split_huge_page_tail() */ if (PageAnon(head)) { /* Additional pin to radix tree of swap cache */ diff --git a/mm/kmemleak.c b/mm/kmemleak.c index 0ed54904507477552e4175fd91eb0f3617b4796d..92ce99b15f2bae3581729a3cc1276d593e488914 100644 --- a/mm/kmemleak.c +++ b/mm/kmemleak.c @@ -126,7 +126,7 @@ /* GFP bitmask for kmemleak internal allocations */ #define gfp_kmemleak_mask(gfp) (((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \ __GFP_NORETRY | __GFP_NOMEMALLOC | \ - __GFP_NOWARN | __GFP_NOFAIL) + __GFP_NOWARN) /* scanning area inside a memory block */ struct kmemleak_scan_area { @@ -576,7 +576,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size, if (in_irq()) { object->pid = 0; strncpy(object->comm, "hardirq", sizeof(object->comm)); - } else if (in_softirq()) { + } else if (in_serving_softirq()) { object->pid = 0; strncpy(object->comm, "softirq", sizeof(object->comm)); } else { diff --git a/mm/memcontrol.c b/mm/memcontrol.c index d0f245d80f93cc5a5d477e8c3c009e3762fb29ef..3af6a4f0990bdc994f829690cc966735caeecfd8 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1040,26 +1040,45 @@ void mem_cgroup_iter_break(struct mem_cgroup *root, css_put(&prev->css); } -static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg) +static void __invalidate_reclaim_iterators(struct mem_cgroup *from, + struct mem_cgroup *dead_memcg) { - struct mem_cgroup *memcg = dead_memcg; struct mem_cgroup_reclaim_iter *iter; struct mem_cgroup_per_node *mz; int nid; int i; - for (; memcg; memcg = parent_mem_cgroup(memcg)) { - for_each_node(nid) { - mz = mem_cgroup_nodeinfo(memcg, nid); - for (i = 0; i <= DEF_PRIORITY; i++) { - iter = &mz->iter[i]; - cmpxchg(&iter->position, - 
dead_memcg, NULL); - } + for_each_node(nid) { + mz = mem_cgroup_nodeinfo(from, nid); + for (i = 0; i <= DEF_PRIORITY; i++) { + iter = &mz->iter[i]; + cmpxchg(&iter->position, + dead_memcg, NULL); } } } +static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg) +{ + struct mem_cgroup *memcg = dead_memcg; + struct mem_cgroup *last; + + do { + __invalidate_reclaim_iterators(memcg, dead_memcg); + last = memcg; + } while ((memcg = parent_mem_cgroup(memcg))); + + /* + * When cgruop1 non-hierarchy mode is used, + * parent_mem_cgroup() does not walk all the way up to the + * cgroup root (root_mem_cgroup). So we have to handle + * dead_memcg from cgroup root separately. + */ + if (last != root_mem_cgroup) + __invalidate_reclaim_iterators(root_mem_cgroup, + dead_memcg); +} + /** * mem_cgroup_scan_tasks - iterate over tasks of a memory cgroup hierarchy * @memcg: hierarchy root diff --git a/mm/memory.c b/mm/memory.c index e0010cb870e050ff40f21ef55d2b9a5651910781..fb5655b518c99cd38cfbc6302ec9ec8220d868a7 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4491,7 +4491,9 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, void *old_buf = buf; int write = gup_flags & FOLL_WRITE; - down_read(&mm->mmap_sem); + if (down_read_killable(&mm->mmap_sem)) + return 0; + /* ignore errors, just check how much was successfully transferred */ while (len) { int bytes, ret, offset; diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 62f945ea3e362f109fb0dce4221c73618e33d681..70298b635b59349d816e8a29503d63490e0c1108 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -403,7 +403,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = { }, }; -static void migrate_page_add(struct page *page, struct list_head *pagelist, +static int migrate_page_add(struct page *page, struct list_head *pagelist, unsigned long flags); struct queue_pages { @@ -429,11 +429,14 @@ static inline bool queue_pages_required(struct page *page, } /* - * queue_pages_pmd() has three possible return values: - * 1 - pages are placed on the right node or queued successfully. - * 0 - THP was split. - * -EIO - is migration entry or MPOL_MF_STRICT was specified and an existing - * page was already on a node that does not follow the policy. + * queue_pages_pmd() has four possible return values: + * 0 - pages are placed on the right node or queued successfully. + * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were + * specified. + * 2 - THP was split. + * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an + * existing page was already on a node that does not follow the + * policy. 
*/ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr, unsigned long end, struct mm_walk *walk) @@ -451,23 +454,20 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr, if (is_huge_zero_page(page)) { spin_unlock(ptl); __split_huge_pmd(walk->vma, pmd, addr, false, NULL); + ret = 2; goto out; } - if (!queue_pages_required(page, qp)) { - ret = 1; + if (!queue_pages_required(page, qp)) goto unlock; - } - ret = 1; flags = qp->flags; /* go to thp migration */ if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) { - if (!vma_migratable(walk->vma)) { - ret = -EIO; + if (!vma_migratable(walk->vma) || + migrate_page_add(page, qp->pagelist, flags)) { + ret = 1; goto unlock; } - - migrate_page_add(page, qp->pagelist, flags); } else ret = -EIO; unlock: @@ -479,6 +479,13 @@ out: /* * Scan through pages checking if pages follow certain conditions, * and move them to the pagelist if they do. + * + * queue_pages_pte_range() has three possible return values: + * 0 - pages are placed on the right node or queued successfully. + * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were + * specified. + * -EIO - only MPOL_MF_STRICT was specified and an existing page was already + * on a node that does not follow the policy. */ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, struct mm_walk *walk) @@ -488,17 +495,17 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr, struct queue_pages *qp = walk->private; unsigned long flags = qp->flags; int ret; + bool has_unmovable = false; pte_t *pte; spinlock_t *ptl; ptl = pmd_trans_huge_lock(pmd, vma); if (ptl) { ret = queue_pages_pmd(pmd, ptl, addr, end, walk); - if (ret > 0) - return 0; - else if (ret < 0) + if (ret != 2) return ret; } + /* THP was split, fall through to pte walk */ if (pmd_trans_unstable(pmd)) return 0; @@ -519,14 +526,28 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr, if (!queue_pages_required(page, qp)) continue; if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) { - if (!vma_migratable(vma)) + /* MPOL_MF_STRICT must be specified if we get here */ + if (!vma_migratable(vma)) { + has_unmovable = true; break; - migrate_page_add(page, qp->pagelist, flags); + } + + /* + * Do not abort immediately since there may be + * temporary off LRU pages in the range. Still + * need migrate other LRU pages. + */ + if (migrate_page_add(page, qp->pagelist, flags)) + has_unmovable = true; } else break; } pte_unmap_unlock(pte - 1, ptl); cond_resched(); + + if (has_unmovable) + return 1; + return addr != end ? -EIO : 0; } @@ -639,7 +660,13 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end, * * If pages found in a given range are on a set of nodes (determined by * @nodes and @flags,) it's isolated and queued to the pagelist which is - * passed via @private.) + * passed via @private. + * + * queue_pages_range() has three possible return values: + * 1 - there is unmovable page, but MPOL_MF_MOVE* & MPOL_MF_STRICT were + * specified. + * 0 - queue pages successfully or no misplaced page. + * -EIO - there is misplaced page and only MPOL_MF_STRICT was specified. */ static int queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end, @@ -926,7 +953,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask, /* * page migration, thp tail pages can be passed. 
*/ -static void migrate_page_add(struct page *page, struct list_head *pagelist, +static int migrate_page_add(struct page *page, struct list_head *pagelist, unsigned long flags) { struct page *head = compound_head(page); @@ -939,8 +966,19 @@ static void migrate_page_add(struct page *page, struct list_head *pagelist, mod_node_page_state(page_pgdat(head), NR_ISOLATED_ANON + page_is_file_cache(head), hpage_nr_pages(head)); + } else if (flags & MPOL_MF_STRICT) { + /* + * Non-movable page may reach here. And, there may be + * temporary off LRU pages or non-LRU movable pages. + * Treat them as unmovable pages since they can't be + * isolated, so they can't be moved at the moment. It + * should return -EIO for this case too. + */ + return -EIO; } } + + return 0; } /* page allocation callback for NUMA node migration */ @@ -1143,9 +1181,10 @@ static struct page *new_page(struct page *page, unsigned long start) } #else -static void migrate_page_add(struct page *page, struct list_head *pagelist, +static int migrate_page_add(struct page *page, struct list_head *pagelist, unsigned long flags) { + return -EIO; } int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, @@ -1168,6 +1207,7 @@ static long do_mbind(unsigned long start, unsigned long len, struct mempolicy *new; unsigned long end; int err; + int ret; LIST_HEAD(pagelist); if (flags & ~(unsigned long)MPOL_MF_VALID) @@ -1229,10 +1269,15 @@ static long do_mbind(unsigned long start, unsigned long len, if (err) goto mpol_out; - err = queue_pages_range(mm, start, end, nmask, + ret = queue_pages_range(mm, start, end, nmask, flags | MPOL_MF_INVERT, &pagelist); - if (!err) - err = mbind_range(mm, start, end, new); + + if (ret < 0) { + err = -EIO; + goto up_out; + } + + err = mbind_range(mm, start, end, new); if (!err) { int nr_failed = 0; @@ -1245,13 +1290,14 @@ static long do_mbind(unsigned long start, unsigned long len, putback_movable_pages(&pagelist); } - if (nr_failed && (flags & MPOL_MF_STRICT)) + if ((ret > 0) || (nr_failed && (flags & MPOL_MF_STRICT))) err = -EIO; } else putback_movable_pages(&pagelist); +up_out: up_write(&mm->mmap_sem); - mpol_out: +mpol_out: mpol_put(new); return err; } diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c index 82bb1a939c0e496affc2849c1e1af795fe3d5d96..06dedb1755727fe0bf36ed0be0fb3cca9dbee3ea 100644 --- a/mm/mmu_notifier.c +++ b/mm/mmu_notifier.c @@ -316,7 +316,7 @@ static int do_mmu_notifier_register(struct mmu_notifier *mn, * thanks to mm_take_all_locks(). 
*/ spin_lock(&mm->mmu_notifier_mm->lock); - hlist_add_head(&mn->hlist, &mm->mmu_notifier_mm->list); + hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list); spin_unlock(&mm->mmu_notifier_mm->lock); mm_drop_all_locks(mm); diff --git a/mm/nommu.c b/mm/nommu.c index e4aac33216aec0f05022f17578b6dee03e25a4e4..1d63ecfc98c5d874ed7bb56b7a9c5ab1dac26d71 100644 --- a/mm/nommu.c +++ b/mm/nommu.c @@ -1779,7 +1779,8 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, struct vm_area_struct *vma; int write = gup_flags & FOLL_WRITE; - down_read(&mm->mmap_sem); + if (down_read_killable(&mm->mmap_sem)) + return 0; /* the access must start within one of the target process's mappings */ vma = find_vma(mm, addr); diff --git a/mm/rmap.c b/mm/rmap.c index f048c2651954b672a5c16dfa85aef544bcdb4c0d..1bd94ea62f7f1fcbb33949a78a31f8555c4f27fd 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1467,7 +1467,15 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, /* * No need to invalidate here it will synchronize on * against the special swap migration pte. + * + * The assignment to subpage above was computed from a + * swap PTE which results in an invalid pointer. + * Since only PAGE_SIZE pages can currently be + * migrated, just set it to page. This will need to be + * changed when hugepage migrations to device private + * memory are supported. */ + subpage = page; goto discard; } diff --git a/mm/swap.c b/mm/swap.c index 0457927d3f0cd45be885eb649117b9524f56a907..3885645a45cea4a2100030b18d0d3b08a03bc01f 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -770,15 +770,20 @@ void release_pages(struct page **pages, int nr) if (is_huge_zero_page(page)) continue; - /* Device public page can not be huge page */ - if (is_device_public_page(page)) { + if (is_zone_device_page(page)) { if (locked_pgdat) { spin_unlock_irqrestore(&locked_pgdat->lru_lock, flags); locked_pgdat = NULL; } - put_devmap_managed_page(page); - continue; + /* + * ZONE_DEVICE pages that return 'false' from + * put_devmap_managed_page() do not require special + * processing, and instead, expect a call to + * put_page_testzero(). + */ + if (put_devmap_managed_page(page)) + continue; } page = compound_head(page); diff --git a/mm/usercopy.c b/mm/usercopy.c index 14faadcedd06cb01611a4c8b9897fecc3109af87..51411f9c4068263048db4a039091eb055b4ad569 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -151,7 +151,7 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n, bool to_user) { /* Reject if object wraps past end of memory. */ - if (ptr + n < ptr) + if (ptr + (n - 1) < ptr) usercopy_abort("wrapped address", NULL, to_user, 0, ptr + n); /* Reject if NULL or ZERO-allocation. */ diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 5c6939cc28b78e5c47b3c2ef644bb06def7df70c..9b7cf993cada861f90e35f53d5078f8dee5b0ab4 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -1754,6 +1754,12 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align, if (!addr) return NULL; + /* + * First make sure the mappings are removed from all page-tables + * before they are freed. + */ + vmalloc_sync_all(); + /* * In this function, newly allocated vm_struct has VM_UNINITIALIZED * flag. It means that vm_struct is not fully initialized. @@ -2299,6 +2305,9 @@ EXPORT_SYMBOL(remap_vmalloc_range); /* * Implement a stub for vmalloc_sync_all() if the architecture chose not to * have one. + * + * The purpose of this function is to make sure the vmalloc area + * mappings are identical in all page-tables in the system. 
*/ void __weak vmalloc_sync_all(void) { diff --git a/mm/vmscan.c b/mm/vmscan.c index e42f44cf7b435dd04d932adef3bca6449aa01ed4..b37610c0eac626158bae78a2b2c202d3dfbe793d 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -670,7 +670,14 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid, unsigned long ret, freed = 0; struct shrinker *shrinker; - if (!mem_cgroup_is_root(memcg)) + /* + * The root memcg might be allocated even though memcg is disabled + * via "cgroup_disable=memory" boot parameter. This could make + * mem_cgroup_is_root() return false, then just run memcg slab + * shrink, but skip global shrink. This may result in premature + * oom. + */ + if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg)) return shrink_slab_memcg(gfp_mask, nid, memcg, priority); if (!down_read_trylock(&shrinker_rwsem)) @@ -2190,7 +2197,7 @@ static void shrink_active_list(unsigned long nr_to_scan, * 10TB 320 32GB */ static bool inactive_list_is_low(struct lruvec *lruvec, bool file, - struct scan_control *sc, bool actual_reclaim) + struct scan_control *sc, bool trace) { enum lru_list active_lru = file * LRU_FILE + LRU_ACTIVE; struct pglist_data *pgdat = lruvec_pgdat(lruvec); @@ -2216,7 +2223,7 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file, * rid of the stale workingset quickly. */ refaults = lruvec_page_state(lruvec, WORKINGSET_ACTIVATE); - if (file && actual_reclaim && lruvec->refaults != refaults) { + if (file && lruvec->refaults != refaults) { inactive_ratio = 0; } else { gb = (inactive + active) >> (30 - PAGE_SHIFT); @@ -2226,7 +2233,7 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file, inactive_ratio = 1; } - if (actual_reclaim) + if (trace) trace_mm_vmscan_inactive_list_is_low(pgdat->node_id, sc->reclaim_idx, lruvec_lru_size(lruvec, inactive_lru, MAX_NR_ZONES), inactive, lruvec_lru_size(lruvec, active_lru, MAX_NR_ZONES), active, diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 63c193c1ff96e32f4859d623235d5ca83a08a13c..1a2f6c13acbdc1c6cb08c3a5d5c806e64ee3e479 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -53,6 +53,7 @@ #include #include #include +#include #include #include #include @@ -281,6 +282,10 @@ struct zs_pool { #ifdef CONFIG_COMPACTION struct inode *inode; struct work_struct free_work; + /* A wait queue for when migration races with async_free_zspage() */ + struct wait_queue_head migration_wait; + atomic_long_t isolated_pages; + bool destroying; #endif }; @@ -1950,6 +1955,31 @@ static void dec_zspage_isolation(struct zspage *zspage) zspage->isolated--; } +static void putback_zspage_deferred(struct zs_pool *pool, + struct size_class *class, + struct zspage *zspage) +{ + enum fullness_group fg; + + fg = putback_zspage(class, zspage); + if (fg == ZS_EMPTY) + schedule_work(&pool->free_work); + +} + +static inline void zs_pool_dec_isolated(struct zs_pool *pool) +{ + VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0); + atomic_long_dec(&pool->isolated_pages); + /* + * There's no possibility of racing, since wait_for_isolated_drain() + * checks the isolated count under &class->lock after enqueuing + * on migration_wait. 
+ */ + if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying) + wake_up_all(&pool->migration_wait); +} + static void replace_sub_page(struct size_class *class, struct zspage *zspage, struct page *newpage, struct page *oldpage) { @@ -2019,6 +2049,7 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode) */ if (!list_empty(&zspage->list) && !is_zspage_isolated(zspage)) { get_zspage_mapping(zspage, &class_idx, &fullness); + atomic_long_inc(&pool->isolated_pages); remove_zspage(class, zspage, fullness); } @@ -2118,8 +2149,16 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage, * Page migration is done so let's putback isolated zspage to * the list if @page is final isolated subpage in the zspage. */ - if (!is_zspage_isolated(zspage)) - putback_zspage(class, zspage); + if (!is_zspage_isolated(zspage)) { + /* + * We cannot race with zs_destroy_pool() here because we wait + * for isolation to hit zero before we start destroying. + * Also, we ensure that everyone can see pool->destroying before + * we start waiting. + */ + putback_zspage_deferred(pool, class, zspage); + zs_pool_dec_isolated(pool); + } reset_page(page); put_page(page); @@ -2165,13 +2204,12 @@ static void zs_page_putback(struct page *page) spin_lock(&class->lock); dec_zspage_isolation(zspage); if (!is_zspage_isolated(zspage)) { - fg = putback_zspage(class, zspage); /* * Due to page_lock, we cannot free zspage immediately * so let's defer. */ - if (fg == ZS_EMPTY) - schedule_work(&pool->free_work); + putback_zspage_deferred(pool, class, zspage); + zs_pool_dec_isolated(pool); } spin_unlock(&class->lock); } @@ -2195,8 +2233,36 @@ static int zs_register_migration(struct zs_pool *pool) return 0; } +static bool pool_isolated_are_drained(struct zs_pool *pool) +{ + return atomic_long_read(&pool->isolated_pages) == 0; +} + +/* Function for resolving migration */ +static void wait_for_isolated_drain(struct zs_pool *pool) +{ + + /* + * We're in the process of destroying the pool, so there are no + * active allocations. zs_page_isolate() fails for completely free + * zspages, so we need only wait for the zs_pool's isolated + * count to hit zero. + */ + wait_event(pool->migration_wait, + pool_isolated_are_drained(pool)); +} + static void zs_unregister_migration(struct zs_pool *pool) { + pool->destroying = true; + /* + * We need a memory barrier here to ensure global visibility of + * pool->destroying. Thus pool->isolated pages will either be 0 in which + * case we don't care, or it will be > 0 and pool->destroying will + * ensure that we wake up once isolation hits 0. 
+ */ + smp_mb(); + wait_for_isolated_drain(pool); /* This can block */ flush_work(&pool->free_work); iput(pool->inode); } @@ -2434,6 +2500,10 @@ struct zs_pool *zs_create_pool(const char *name) if (!pool->name) goto err; +#ifdef CONFIG_COMPACTION + init_waitqueue_head(&pool->migration_wait); +#endif + if (create_cache(pool)) goto err; diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c index eb596c2ed546ca5555a2100426eebb67205d99b1..849336211c79b99c4c6b691becf955e598d80ed5 100644 --- a/net/9p/trans_virtio.c +++ b/net/9p/trans_virtio.c @@ -782,10 +782,16 @@ static struct p9_trans_module p9_virtio_trans = { /* The standard init function */ static int __init p9_virtio_init(void) { + int rc; + INIT_LIST_HEAD(&virtio_chan_list); v9fs_register_trans(&p9_virtio_trans); - return register_virtio_driver(&p9_virtio_drv); + rc = register_virtio_driver(&p9_virtio_drv); + if (rc) + v9fs_unregister_trans(&p9_virtio_trans); + + return rc; } static void __exit p9_virtio_cleanup(void) diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c index e2fbf3677b9baf3fa99ba98485f73caf0118f249..9daab0dd833b31cbb663b7756970a545cc59f3cf 100644 --- a/net/9p/trans_xen.c +++ b/net/9p/trans_xen.c @@ -530,13 +530,19 @@ static struct xenbus_driver xen_9pfs_front_driver = { static int p9_trans_xen_init(void) { + int rc; + if (!xen_domain()) return -ENODEV; pr_info("Initialising Xen transport for 9pfs\n"); v9fs_register_trans(&p9_xen_trans); - return xenbus_register_frontend(&xen_9pfs_front_driver); + rc = xenbus_register_frontend(&xen_9pfs_front_driver); + if (rc) + v9fs_unregister_trans(&p9_xen_trans); + + return rc; } module_init(p9_trans_xen_init); diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c index 73bf6a93a3cf1141a34657bf1284893199e04db9..0b7b36fa0d5cd440ddef141ad27acfe7b20aee43 100644 --- a/net/batman-adv/bat_iv_ogm.c +++ b/net/batman-adv/bat_iv_ogm.c @@ -2485,7 +2485,7 @@ batadv_iv_ogm_neigh_is_sob(struct batadv_neigh_node *neigh1, return ret; } -static void batadv_iv_iface_activate(struct batadv_hard_iface *hard_iface) +static void batadv_iv_iface_enabled(struct batadv_hard_iface *hard_iface) { /* begin scheduling originator messages on that interface */ batadv_iv_ogm_schedule(hard_iface); @@ -2825,8 +2825,8 @@ unlock: static struct batadv_algo_ops batadv_batman_iv __read_mostly = { .name = "BATMAN_IV", .iface = { - .activate = batadv_iv_iface_activate, .enable = batadv_iv_ogm_iface_enable, + .enabled = batadv_iv_iface_enabled, .disable = batadv_iv_ogm_iface_disable, .update_mac = batadv_iv_ogm_iface_update_mac, .primary_set = batadv_iv_ogm_primary_iface_set, diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c index 08690d06b7be2b25ca3f009394763c7083c70644..36f0962040d16af4f9ed82629ff03ce85c83ed57 100644 --- a/net/batman-adv/hard-interface.c +++ b/net/batman-adv/hard-interface.c @@ -821,6 +821,9 @@ int batadv_hardif_enable_interface(struct batadv_hard_iface *hard_iface, batadv_hardif_recalc_extra_skbroom(soft_iface); + if (bat_priv->algo_ops->iface.enabled) + bat_priv->algo_ops->iface.enabled(hard_iface); + out: return 0; diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c index 359ec1a6e822ffd9781edc7897e457f96c50244b..9fa5389ea244c37cf02d57693f90b74d5c741911 100644 --- a/net/batman-adv/translation-table.c +++ b/net/batman-adv/translation-table.c @@ -3821,6 +3821,8 @@ static void batadv_tt_purge(struct work_struct *work) */ void batadv_tt_free(struct batadv_priv *bat_priv) { + batadv_tvlv_handler_unregister(bat_priv, 
BATADV_TVLV_ROAM, 1); + batadv_tvlv_container_unregister(bat_priv, BATADV_TVLV_TT, 1); batadv_tvlv_handler_unregister(bat_priv, BATADV_TVLV_TT, 1); diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h index eeee3e61c625df5b38a69290406031e348387fa4..fdba8a144d7373862ac8028dfd3e97cba591320a 100644 --- a/net/batman-adv/types.h +++ b/net/batman-adv/types.h @@ -2130,6 +2130,9 @@ struct batadv_algo_iface_ops { /** @enable: init routing info when hard-interface is enabled */ int (*enable)(struct batadv_hard_iface *hard_iface); + /** @enabled: notification when hard-interface was enabled (optional) */ + void (*enabled)(struct batadv_hard_iface *hard_iface); + /** @disable: de-init routing info when hard-interface is disabled */ void (*disable)(struct batadv_hard_iface *hard_iface); diff --git a/net/bluetooth/6lowpan.c b/net/bluetooth/6lowpan.c index 4e2576fc0c59932cbb1f3c98c150764b91bb52c1..357475cceec61ba264fd49621e75de624f4afff4 100644 --- a/net/bluetooth/6lowpan.c +++ b/net/bluetooth/6lowpan.c @@ -187,10 +187,16 @@ static inline struct lowpan_peer *peer_lookup_dst(struct lowpan_btle_dev *dev, } if (!rt) { - nexthop = &lowpan_cb(skb)->gw; - - if (ipv6_addr_any(nexthop)) - return NULL; + if (ipv6_addr_any(&lowpan_cb(skb)->gw)) { + /* There is neither route nor gateway, + * probably the destination is a direct peer. + */ + nexthop = daddr; + } else { + /* There is a known gateway + */ + nexthop = &lowpan_cb(skb)->gw; + } } else { nexthop = rt6_nexthop(rt, daddr); diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 3e7badb3ac2d50fd8516bd87d08a3c424e9e6cb9..0adcddb211fa5a53374fa9ccb854686f51493531 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -5545,6 +5545,11 @@ static void hci_le_remote_conn_param_req_evt(struct hci_dev *hdev, return send_conn_param_neg_reply(hdev, handle, HCI_ERROR_UNKNOWN_CONN_ID); + if (min < hcon->le_conn_min_interval || + max > hcon->le_conn_max_interval) + return send_conn_param_neg_reply(hdev, handle, + HCI_ERROR_INVALID_LL_PARAMS); + if (hci_check_conn_params(min, max, latency, timeout)) return send_conn_param_neg_reply(hdev, handle, HCI_ERROR_INVALID_LL_PARAMS); diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index 879d5432bf7794d4c4183aae2ca2d092281417b8..a54dadf4a6ca0f96f83ebb0a446acbaa4f4a6a78 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -4384,6 +4384,12 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn, l2cap_chan_lock(chan); + if (chan->state != BT_DISCONN) { + l2cap_chan_unlock(chan); + mutex_unlock(&conn->chan_lock); + return 0; + } + l2cap_chan_hold(chan); l2cap_chan_del(chan, 0); @@ -5281,7 +5287,14 @@ static inline int l2cap_conn_param_update_req(struct l2cap_conn *conn, memset(&rsp, 0, sizeof(rsp)); - err = hci_check_conn_params(min, max, latency, to_multiplier); + if (min < hcon->le_conn_min_interval || + max > hcon->le_conn_max_interval) { + BT_DBG("requested connection interval exceeds current bounds."); + err = -EINVAL; + } else { + err = hci_check_conn_params(min, max, latency, to_multiplier); + } + if (err) rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED); else diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c index a1c1b7e8a45ca6d6c44de507d7a5ff232d776edc..cc2f7ca91ccdd32136ea82f5fc6bbcff87206d1d 100644 --- a/net/bluetooth/smp.c +++ b/net/bluetooth/smp.c @@ -2580,6 +2580,19 @@ static int smp_cmd_ident_addr_info(struct l2cap_conn *conn, goto distribute; } + /* Drop IRK if peer is using identity address during pairing but is + 
* providing different address as identity information. + * + * Microsoft Surface Precision Mouse is known to have this bug. + */ + if (hci_is_identity_address(&hcon->dst, hcon->dst_type) && + (bacmp(&info->bdaddr, &hcon->dst) || + info->addr_type != hcon->dst_type)) { + bt_dev_err(hcon->hdev, + "ignoring IRK with invalid identity address"); + goto distribute; + } + bacpy(&smp->id_addr, &info->bdaddr); smp->id_addr_type = info->addr_type; diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c index fed0ff446abb3ab59159a990f7635cf609dbe18f..2532c1a19645c7b13623df9f0e9581e29774919e 100644 --- a/net/bridge/br_input.c +++ b/net/bridge/br_input.c @@ -79,7 +79,6 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb struct net_bridge_fdb_entry *dst = NULL; struct net_bridge_mdb_entry *mdst; bool local_rcv, mcast_hit = false; - const unsigned char *dest; struct net_bridge *br; u16 vid = 0; @@ -97,10 +96,9 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb br_fdb_update(br, p, eth_hdr(skb)->h_source, vid, false); local_rcv = !!(br->dev->flags & IFF_PROMISC); - dest = eth_hdr(skb)->h_dest; - if (is_multicast_ether_addr(dest)) { + if (is_multicast_ether_addr(eth_hdr(skb)->h_dest)) { /* by definition the broadcast is also a multicast address */ - if (is_broadcast_ether_addr(dest)) { + if (is_broadcast_ether_addr(eth_hdr(skb)->h_dest)) { pkt_type = BR_PKT_BROADCAST; local_rcv = true; } else { @@ -150,7 +148,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb } break; case BR_PKT_UNICAST: - dst = br_fdb_find_rcu(br, dest, vid); + dst = br_fdb_find_rcu(br, eth_hdr(skb)->h_dest, vid); default: break; } diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 75901c4641b1ced8308066c5b94ecbe13f268f09..6a362da211e1b45aa5e7b94a8085217ab707023f 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -1147,6 +1147,7 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br, int type; int err = 0; __be32 group; + u16 nsrcs; ih = igmpv3_report_hdr(skb); num = ntohs(ih->ngrec); @@ -1160,8 +1161,9 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br, grec = (void *)(skb->data + len - sizeof(*grec)); group = grec->grec_mca; type = grec->grec_type; + nsrcs = ntohs(grec->grec_nsrcs); - len += ntohs(grec->grec_nsrcs) * 4; + len += nsrcs * 4; if (!pskb_may_pull(skb, len)) return -EINVAL; @@ -1182,7 +1184,7 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br, src = eth_hdr(skb)->h_source; if ((type == IGMPV3_CHANGE_TO_INCLUDE || type == IGMPV3_MODE_IS_INCLUDE) && - ntohs(grec->grec_nsrcs) == 0) { + nsrcs == 0) { br_ip4_multicast_leave_group(br, port, group, vid, src); } else { err = br_ip4_multicast_add_group(br, port, group, vid, @@ -1217,23 +1219,26 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br, len = skb_transport_offset(skb) + sizeof(*icmp6h); for (i = 0; i < num; i++) { - __be16 *nsrcs, _nsrcs; - - nsrcs = skb_header_pointer(skb, - len + offsetof(struct mld2_grec, - grec_nsrcs), - sizeof(_nsrcs), &_nsrcs); - if (!nsrcs) + __be16 *_nsrcs, __nsrcs; + u16 nsrcs; + + _nsrcs = skb_header_pointer(skb, + len + offsetof(struct mld2_grec, + grec_nsrcs), + sizeof(__nsrcs), &__nsrcs); + if (!_nsrcs) return -EINVAL; + nsrcs = ntohs(*_nsrcs); + if (!pskb_may_pull(skb, len + sizeof(*grec) + - sizeof(struct in6_addr) * ntohs(*nsrcs))) + sizeof(struct in6_addr) * nsrcs)) return -EINVAL; grec = (struct mld2_grec *)(skb->data + len); len += 
sizeof(*grec) + - sizeof(struct in6_addr) * ntohs(*nsrcs); + sizeof(struct in6_addr) * nsrcs; /* We treat these as MLDv1 reports for now. */ switch (grec->grec_type) { @@ -1252,7 +1257,7 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br, src = eth_hdr(skb)->h_source; if ((grec->grec_type == MLD2_CHANGE_TO_INCLUDE || grec->grec_type == MLD2_MODE_IS_INCLUDE) && - ntohs(*nsrcs) == 0) { + nsrcs == 0) { br_ip6_multicast_leave_group(br, port, &grec->grec_mca, vid, src); } else { @@ -1505,7 +1510,6 @@ static int br_ip6_multicast_query(struct net_bridge *br, struct sk_buff *skb, u16 vid) { - const struct ipv6hdr *ip6h = ipv6_hdr(skb); struct mld_msg *mld; struct net_bridge_mdb_entry *mp; struct mld2_query *mld2q; @@ -1549,7 +1553,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, if (is_general_query) { saddr.proto = htons(ETH_P_IPV6); - saddr.u.ip6 = ip6h->saddr; + saddr.u.ip6 = ipv6_hdr(skb)->saddr; br_multicast_query_received(br, port, &br->ip6_other_query, &saddr, max_delay); @@ -1617,6 +1621,9 @@ br_multicast_leave_group(struct net_bridge *br, if (!br_port_group_equal(p, port, src)) continue; + if (p->flags & MDB_PG_FLAGS_PERMANENT) + break; + rcu_assign_pointer(*pp, p->next); hlist_del_init(&p->mglist); del_timer(&p->timer); diff --git a/net/bridge/br_stp_bpdu.c b/net/bridge/br_stp_bpdu.c index 1b75d6bf12bd9cbe4411c6dadf85a273bbdcbdbc..37ddcea3fc96bbdb55b5194d910231eab738d8a5 100644 --- a/net/bridge/br_stp_bpdu.c +++ b/net/bridge/br_stp_bpdu.c @@ -147,7 +147,6 @@ void br_send_tcn_bpdu(struct net_bridge_port *p) void br_stp_rcv(const struct stp_proto *proto, struct sk_buff *skb, struct net_device *dev) { - const unsigned char *dest = eth_hdr(skb)->h_dest; struct net_bridge_port *p; struct net_bridge *br; const unsigned char *buf; @@ -176,7 +175,7 @@ void br_stp_rcv(const struct stp_proto *proto, struct sk_buff *skb, if (p->state == BR_STATE_DISABLED) goto out; - if (!ether_addr_equal(dest, br->group_addr)) + if (!ether_addr_equal(eth_hdr(skb)->h_dest, br->group_addr)) goto out; if (p->flags & BR_BPDU_GUARD) { diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index 7df269092103c3e00dc79ecc7904c4f859f62cce..5f3950f00f73b1395f68c3a66c7ca73dde376b3d 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -677,6 +677,11 @@ void br_vlan_flush(struct net_bridge *br) ASSERT_RTNL(); + /* delete auto-added default pvid local fdb before flushing vlans + * otherwise it will be leaked on bridge device init failure + */ + br_fdb_delete_by_port(br, NULL, 0, 1); + vg = br_vlan_group(br); __vlan_flush(vg); RCU_INIT_POINTER(br->vlgrp, NULL); diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c index 0bb4d712b80cbec65cec48cbc3531be4c9c6dea1..62ffc989a44a2176bd436771e217b7cd248463b6 100644 --- a/net/bridge/netfilter/ebtables.c +++ b/net/bridge/netfilter/ebtables.c @@ -1779,20 +1779,28 @@ static int compat_calc_entry(const struct ebt_entry *e, return 0; } +static int ebt_compat_init_offsets(unsigned int number) +{ + if (number > INT_MAX) + return -EINVAL; + + /* also count the base chain policies */ + number += NF_BR_NUMHOOKS; + + return xt_compat_init_offsets(NFPROTO_BRIDGE, number); +} static int compat_table_info(const struct ebt_table_info *info, struct compat_ebt_replace *newinfo) { unsigned int size = info->entries_size; const void *entries = info->entries; + int ret; newinfo->entries_size = size; - if (info->nentries) { - int ret = xt_compat_init_offsets(NFPROTO_BRIDGE, - info->nentries); - if (ret) - return ret; - } + ret = 
ebt_compat_init_offsets(info->nentries); + if (ret) + return ret; return EBT_ENTRY_ITERATE(entries, size, compat_calc_entry, info, entries, newinfo); @@ -2241,11 +2249,9 @@ static int compat_do_replace(struct net *net, void __user *user, xt_compat_lock(NFPROTO_BRIDGE); - if (tmp.nentries) { - ret = xt_compat_init_offsets(NFPROTO_BRIDGE, tmp.nentries); - if (ret < 0) - goto out_unlock; - } + ret = ebt_compat_init_offsets(tmp.nentries); + if (ret < 0) + goto out_unlock; ret = compat_copy_entries(entries_tmp, tmp.entries_size, &state); if (ret < 0) @@ -2268,8 +2274,10 @@ static int compat_do_replace(struct net *net, void __user *user, state.buf_kern_len = size64; ret = compat_copy_entries(entries_tmp, tmp.entries_size, &state); - if (WARN_ON(ret < 0)) + if (WARN_ON(ret < 0)) { + vfree(entries_tmp); goto out_unlock; + } vfree(entries_tmp); tmp.entries_size = size64; diff --git a/net/can/gw.c b/net/can/gw.c index 53859346dc9a92f76daf35e4daf68a15ef1c2c9b..bd2161470e456e185eca2b134ce9c05674fec56f 100644 --- a/net/can/gw.c +++ b/net/can/gw.c @@ -1046,32 +1046,50 @@ static __init int cgw_module_init(void) pr_info("can: netlink gateway (rev " CAN_GW_VERSION ") max_hops=%d\n", max_hops); - register_pernet_subsys(&cangw_pernet_ops); + ret = register_pernet_subsys(&cangw_pernet_ops); + if (ret) + return ret; + + ret = -ENOMEM; cgw_cache = kmem_cache_create("can_gw", sizeof(struct cgw_job), 0, 0, NULL); - if (!cgw_cache) - return -ENOMEM; + goto out_cache_create; /* set notifier */ notifier.notifier_call = cgw_notifier; - register_netdevice_notifier(¬ifier); + ret = register_netdevice_notifier(¬ifier); + if (ret) + goto out_register_notifier; ret = rtnl_register_module(THIS_MODULE, PF_CAN, RTM_GETROUTE, NULL, cgw_dump_jobs, 0); - if (ret) { - unregister_netdevice_notifier(¬ifier); - kmem_cache_destroy(cgw_cache); - return -ENOBUFS; - } - - /* Only the first call to rtnl_register_module can fail */ - rtnl_register_module(THIS_MODULE, PF_CAN, RTM_NEWROUTE, - cgw_create_job, NULL, 0); - rtnl_register_module(THIS_MODULE, PF_CAN, RTM_DELROUTE, - cgw_remove_job, NULL, 0); + if (ret) + goto out_rtnl_register1; + + ret = rtnl_register_module(THIS_MODULE, PF_CAN, RTM_NEWROUTE, + cgw_create_job, NULL, 0); + if (ret) + goto out_rtnl_register2; + ret = rtnl_register_module(THIS_MODULE, PF_CAN, RTM_DELROUTE, + cgw_remove_job, NULL, 0); + if (ret) + goto out_rtnl_register3; return 0; + +out_rtnl_register3: + rtnl_unregister(PF_CAN, RTM_NEWROUTE); +out_rtnl_register2: + rtnl_unregister(PF_CAN, RTM_GETROUTE); +out_rtnl_register1: + unregister_netdevice_notifier(¬ifier); +out_register_notifier: + kmem_cache_destroy(cgw_cache); +out_cache_create: + unregister_pernet_subsys(&cangw_pernet_ops); + + return ret; } static __exit void cgw_module_exit(void) diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c index 60934bd8796c53c5ec96477e1c29fb9e8234e804..76c41a84550e76459aa6dd5ec8677c8275127fd0 100644 --- a/net/ceph/osd_client.c +++ b/net/ceph/osd_client.c @@ -1423,7 +1423,7 @@ static enum calc_target_result calc_target(struct ceph_osd_client *osdc, struct ceph_osds up, acting; bool force_resend = false; bool unpaused = false; - bool legacy_change; + bool legacy_change = false; bool split = false; bool sort_bitwise = ceph_osdmap_flag(osdc, CEPH_OSDMAP_SORTBITWISE); bool recovery_deletes = ceph_osdmap_flag(osdc, @@ -1511,15 +1511,14 @@ static enum calc_target_result calc_target(struct ceph_osd_client *osdc, t->osd = acting.primary; } - if (unpaused || legacy_change || force_resend || - (split && con && 
CEPH_HAVE_FEATURE(con->peer_features, - RESEND_ON_SPLIT))) + if (unpaused || legacy_change || force_resend || split) ct_res = CALC_TARGET_NEED_RESEND; else ct_res = CALC_TARGET_NO_ACTION; out: - dout("%s t %p -> ct_res %d osd %d\n", __func__, t, ct_res, t->osd); + dout("%s t %p -> %d%d%d%d ct_res %d osd%d\n", __func__, t, unpaused, + legacy_change, force_resend, split, ct_res, t->osd); return ct_res; } diff --git a/net/core/dev.c b/net/core/dev.c index 63b3058dd172d2d7872ea397857e3ca7727de8c0..ee5b23fbd530d4563f755a9e0299d6d3b55ec7ef 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -9548,6 +9548,8 @@ static void __net_exit default_device_exit(struct net *net) /* Push remaining network devices to init_net */ snprintf(fb_name, IFNAMSIZ, "dev%d", dev->ifindex); + if (__dev_get_by_name(&init_net, fb_name)) + snprintf(fb_name, IFNAMSIZ, "dev%%d"); err = dev_change_net_namespace(dev, &init_net, fb_name); if (err) { pr_emerg("%s: failed to move %s to init_net: %d\n", diff --git a/net/core/filter.c b/net/core/filter.c index 03925960fb5c0a8d4e4d2127c444c4aa09636178..54d37e97a8d2bfc48cfd23dcd9179820dd4139c1 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -3991,7 +3991,7 @@ BPF_CALL_5(bpf_setsockopt, struct bpf_sock_ops_kern *, bpf_sock, TCP_CA_NAME_MAX-1)); name[TCP_CA_NAME_MAX-1] = 0; ret = tcp_set_congestion_control(sk, name, false, - reinit); + reinit, true); } else { struct tcp_sock *tp = tcp_sk(sk); diff --git a/net/core/neighbour.c b/net/core/neighbour.c index cd9e991f21d734b0e83ed4ab8aa9ab993c499315..c52d6e6b341cfcaf655efeaed8ac27a8509290ce 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -1021,6 +1021,7 @@ int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb) atomic_set(&neigh->probes, NEIGH_VAR(neigh->parms, UCAST_PROBES)); + neigh_del_timer(neigh); neigh->nud_state = NUD_INCOMPLETE; neigh->updated = now; next = now + max(NEIGH_VAR(neigh->parms, RETRANS_TIME), @@ -1037,6 +1038,7 @@ int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb) } } else if (neigh->nud_state & NUD_STALE) { neigh_dbg(2, "neigh %p is delayed\n", neigh); + neigh_del_timer(neigh); neigh->nud_state = NUD_DELAY; neigh->updated = jiffies; neigh_add_timer(neigh, jiffies + diff --git a/net/core/stream.c b/net/core/stream.c index 7d329fb1f553a832e277e12989b8c49ede21ea0e..7f5eaa95a6756073164a36b64e001a31cdfb1e58 100644 --- a/net/core/stream.c +++ b/net/core/stream.c @@ -120,7 +120,6 @@ int sk_stream_wait_memory(struct sock *sk, long *timeo_p) int err = 0; long vm_wait = 0; long current_timeo = *timeo_p; - bool noblock = (*timeo_p ? false : true); DEFINE_WAIT_FUNC(wait, woken_wake_function); if (sk_stream_memory_free(sk)) @@ -133,11 +132,8 @@ int sk_stream_wait_memory(struct sock *sk, long *timeo_p) if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN)) goto do_error; - if (!*timeo_p) { - if (noblock) - set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); - goto do_nonblock; - } + if (!*timeo_p) + goto do_eagain; if (signal_pending(current)) goto do_interrupted; sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk); @@ -169,7 +165,13 @@ out: do_error: err = -EPIPE; goto out; -do_nonblock: +do_eagain: + /* Make sure that whenever EAGAIN is returned, EPOLLOUT event can + * be generated later. + * When TCP receives ACK packets that make room, tcp_check_space() + * only calls tcp_new_space() if SOCK_NOSPACE is set. 
+ */ + set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); err = -EAGAIN; goto out; do_interrupted: diff --git a/net/dsa/switch.c b/net/dsa/switch.c index 142b294d34468508f80457a6e3b5ffc35c680f07..b0b9413fa5bf9c150c0339fe8ca97b581ba52c10 100644 --- a/net/dsa/switch.c +++ b/net/dsa/switch.c @@ -127,6 +127,9 @@ static void dsa_switch_mdb_add_bitmap(struct dsa_switch *ds, { int port; + if (!ds->ops->port_mdb_add) + return; + for_each_set_bit(port, bitmap, ds->num_ports) ds->ops->port_mdb_add(ds, port, mdb); } diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c index ea4bd8a52422e75c98e30cb3ab537e97b2170520..d23746143cd2b0e9a07543c3e9154b92c70f233c 100644 --- a/net/ipv4/devinet.c +++ b/net/ipv4/devinet.c @@ -66,6 +66,11 @@ #include #include +#define IPV6ONLY_FLAGS \ + (IFA_F_NODAD | IFA_F_OPTIMISTIC | IFA_F_DADFAILED | \ + IFA_F_HOMEADDRESS | IFA_F_TENTATIVE | \ + IFA_F_MANAGETEMPADDR | IFA_F_STABLE_PRIVACY) + static struct ipv4_devconf ipv4_devconf = { .data = { [IPV4_DEVCONF_ACCEPT_REDIRECTS - 1] = 1, @@ -462,6 +467,9 @@ static int __inet_insert_ifa(struct in_ifaddr *ifa, struct nlmsghdr *nlh, ifa->ifa_flags &= ~IFA_F_SECONDARY; last_primary = &in_dev->ifa_list; + /* Don't set IPv6 only flags to IPv4 addresses */ + ifa->ifa_flags &= ~IPV6ONLY_FLAGS; + for (ifap = &in_dev->ifa_list; (ifa1 = *ifap) != NULL; ifap = &ifa1->ifa_next) { if (!(ifa1->ifa_flags & IFA_F_SECONDARY) && diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index 1770ff1638bcdc88e5918e7dd92577e89c1a607f..acec420899c59b6f79c55832fa6a90b858f1c8c1 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -595,7 +595,13 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info, if (!rt) goto out; - net = dev_net(rt->dst.dev); + + if (rt->dst.dev) + net = dev_net(rt->dst.dev); + else if (skb_in->dev) + net = dev_net(skb_in->dev); + else + goto out; /* * Find the original header. It is expected to be valid, of course. 
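The net/ipv4/devinet.c hunk above introduces IPV6ONLY_FLAGS and clears those bits from ifa_flags so that IPv6-only address flags never end up attached to an IPv4 address. Below is a minimal, stand-alone user-space sketch of that same masking pattern; the DEMO_F_* values are illustrative placeholders invented for this sketch and are not the kernel's real IFA_F_* constants.

    #include <stdio.h>

    /* Illustrative stand-ins for a few address flags; values are not the kernel's. */
    #define DEMO_F_PERMANENT   0x01u
    #define DEMO_F_NODAD       0x02u   /* only meaningful for IPv6 in this demo */
    #define DEMO_F_OPTIMISTIC  0x04u   /* only meaningful for IPv6 in this demo */
    #define DEMO_F_SECONDARY   0x08u

    /* Bits that make no sense on an IPv4 address in this demo. */
    #define DEMO_V6ONLY_FLAGS  (DEMO_F_NODAD | DEMO_F_OPTIMISTIC)

    /* Same idea as "ifa->ifa_flags &= ~IPV6ONLY_FLAGS": drop the
     * inapplicable bits, keep everything else untouched.
     */
    static unsigned int sanitize_v4_flags(unsigned int flags)
    {
            return flags & ~DEMO_V6ONLY_FLAGS;
    }

    int main(void)
    {
            unsigned int requested = DEMO_F_PERMANENT | DEMO_F_NODAD;

            printf("requested 0x%02x -> stored 0x%02x\n",
                   requested, sanitize_v4_flags(requested));
            return 0;
    }

The point of the pattern is that the caller's request is accepted but silently narrowed to the bits that are valid in this address family, rather than rejected outright; the sketch only demonstrates the masking, not the kernel's surrounding ifa_list handling.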
diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index d187ee8156a19f4f8590938c66b4f1424884cec6..b2240b7f225d545e9508abf554fe162257b2a9cc 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -1218,12 +1218,8 @@ static void igmpv3_del_delrec(struct in_device *in_dev, struct ip_mc_list *im) if (pmc) { im->interface = pmc->interface; if (im->sfmode == MCAST_INCLUDE) { - im->tomb = pmc->tomb; - pmc->tomb = NULL; - - im->sources = pmc->sources; - pmc->sources = NULL; - + swap(im->tomb, pmc->tomb); + swap(im->sources, pmc->sources); for (psf = im->sources; psf; psf = psf->sf_next) psf->sf_crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; } else { diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c index c248e0dccbe17afa397910b0d68260daf2a9eb3f..67ef9d853d906a6018ca7ccade6ebe9af3cd18ce 100644 --- a/net/ipv4/ip_tunnel_core.c +++ b/net/ipv4/ip_tunnel_core.c @@ -89,9 +89,12 @@ void iptunnel_xmit(struct sock *sk, struct rtable *rt, struct sk_buff *skb, __ip_select_ident(net, iph, skb_shinfo(skb)->gso_segs ?: 1); err = ip_local_out(net, sk, skb); - if (unlikely(net_xmit_eval(err))) - pkt_len = 0; - iptunnel_xmit_stats(dev, pkt_len); + + if (dev) { + if (unlikely(net_xmit_eval(err))) + pkt_len = 0; + iptunnel_xmit_stats(dev, pkt_len); + } } EXPORT_SYMBOL_GPL(iptunnel_xmit); diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c index c891235b4966cc4994d81acf39070d27c3060f13..4368282eb6f8fb38bcd979e8967e3f5d105cc81c 100644 --- a/net/ipv4/ipip.c +++ b/net/ipv4/ipip.c @@ -281,6 +281,9 @@ static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb, const struct iphdr *tiph = &tunnel->parms.iph; u8 ipproto; + if (!pskb_inet_may_pull(skb)) + goto tx_error; + switch (skb->protocol) { case htons(ETH_P_IP): ipproto = IPPROTO_IPIP; diff --git a/net/ipv4/netfilter/ipt_rpfilter.c b/net/ipv4/netfilter/ipt_rpfilter.c index 12843c9ef1421d204fba6bea42a85615e2e69cc7..74b19a5c572e9fa570e834448eec0152665e2a2a 100644 --- a/net/ipv4/netfilter/ipt_rpfilter.c +++ b/net/ipv4/netfilter/ipt_rpfilter.c @@ -96,6 +96,7 @@ static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par) flow.flowi4_mark = info->flags & XT_RPFILTER_VALID_MARK ? skb->mark : 0; flow.flowi4_tos = RT_TOS(iph->tos); flow.flowi4_scope = RT_SCOPE_UNIVERSE; + flow.flowi4_oif = l3mdev_master_ifindex_rcu(xt_in(par)); return rpfilter_lookup_reverse(xt_net(par), &flow, xt_in(par), info->flags) ^ invert; } diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 364e6fdaa38f50545fc41d8780bd8de7f7478dce..b7ef367fe6a1785dba51cdba58f3e7102183df26 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -2594,6 +2594,8 @@ int tcp_disconnect(struct sock *sk, int flags) tcp_saved_syn_free(tp); tp->compressed_ack = 0; tp->bytes_sent = 0; + tp->bytes_acked = 0; + tp->bytes_received = 0; tp->bytes_retrans = 0; tp->dsack_dups = 0; tp->reord_seen = 0; @@ -2729,7 +2731,9 @@ static int do_tcp_setsockopt(struct sock *sk, int level, name[val] = 0; lock_sock(sk); - err = tcp_set_congestion_control(sk, name, true, true); + err = tcp_set_congestion_control(sk, name, true, true, + ns_capable(sock_net(sk)->user_ns, + CAP_NET_ADMIN)); release_sock(sk); return err; } diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c index bc6c02f1624383043147101cd4ad2157b3bc9289..48f79db446a02a57db9afc5a9e01fa1cd069316f 100644 --- a/net/ipv4/tcp_cong.c +++ b/net/ipv4/tcp_cong.c @@ -332,7 +332,8 @@ out: * tcp_reinit_congestion_control (if the current congestion control was * already initialized. 
*/ -int tcp_set_congestion_control(struct sock *sk, const char *name, bool load, bool reinit) +int tcp_set_congestion_control(struct sock *sk, const char *name, bool load, + bool reinit, bool cap_net_admin) { struct inet_connection_sock *icsk = inet_csk(sk); const struct tcp_congestion_ops *ca; @@ -368,8 +369,7 @@ int tcp_set_congestion_control(struct sock *sk, const char *name, bool load, boo } else { err = -EBUSY; } - } else if (!((ca->flags & TCP_CONG_NON_RESTRICTED) || - ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN))) { + } else if (!((ca->flags & TCP_CONG_NON_RESTRICTED) || cap_net_admin)) { err = -EPERM; } else if (!try_module_get(ca->owner)) { err = -EBUSY; diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 221d9b72423b961d948cda2659f7b86b9727438d..88c7e821fd1162e1e8849fc53d4f84d41977b4e2 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -1289,6 +1289,7 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue, struct tcp_sock *tp = tcp_sk(sk); struct sk_buff *buff; int nsize, old_factor; + long limit; int nlen; u8 flags; @@ -1299,8 +1300,16 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue, if (nsize < 0) nsize = 0; - if (unlikely((sk->sk_wmem_queued >> 1) > sk->sk_sndbuf && - tcp_queue != TCP_FRAG_IN_WRITE_QUEUE)) { + /* tcp_sendmsg() can overshoot sk_wmem_queued by one full size skb. + * We need some allowance to not penalize applications setting small + * SO_SNDBUF values. + * Also allow first and last skb in retransmit queue to be split. + */ + limit = sk->sk_sndbuf + 2 * SKB_TRUESIZE(GSO_MAX_SIZE); + if (unlikely((sk->sk_wmem_queued >> 1) > limit && + tcp_queue != TCP_FRAG_IN_WRITE_QUEUE && + skb != tcp_rtx_queue_head(sk) && + skb != tcp_rtx_queue_tail(sk))) { NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG); return -ENOMEM; } diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index c57efd5c5b387e17c7e99835f47b7fcb84e9acb5..49e2f6dac6462b96fde878df24b937fd2f401a9f 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -995,7 +995,8 @@ ipv6_add_addr(struct inet6_dev *idev, struct ifa6_config *cfg, int err = 0; if (addr_type == IPV6_ADDR_ANY || - addr_type & IPV6_ADDR_MULTICAST || + (addr_type & IPV6_ADDR_MULTICAST && + !(cfg->ifa_flags & IFA_F_MCAUTOJOIN)) || (!(idev->dev->flags & IFF_LOOPBACK) && addr_type & IPV6_ADDR_LOOPBACK)) return ERR_PTR(-EADDRNOTAVAIL); diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c index a6c0479c1d55f8f52b71449895ff56b5903f2ce2..bbb5ffb3397d8b98227c81023cac218603822619 100644 --- a/net/ipv6/ip6_fib.c +++ b/net/ipv6/ip6_fib.c @@ -1081,8 +1081,24 @@ add: err = call_fib6_entry_notifiers(info->nl_net, FIB_EVENT_ENTRY_ADD, rt, extack); - if (err) + if (err) { + struct fib6_info *sibling, *next_sibling; + + /* If the route has siblings, then it first + * needs to be unlinked from them. 
+		 */
+		if (!rt->fib6_nsiblings)
+			return err;
+
+		list_for_each_entry_safe(sibling, next_sibling,
+					 &rt->fib6_siblings,
+					 fib6_siblings)
+			sibling->fib6_nsiblings--;
+		rt->fib6_nsiblings = 0;
+		list_del_init(&rt->fib6_siblings);
+		rt6_multipath_rebalance(next_sibling);
 			return err;
+		}
 	rcu_assign_pointer(rt->fib6_next, iter);
 	atomic_inc(&rt->fib6_ref);
diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
index 01ecd510014f2a03f9e064bf320912450cb6f468..a53ef079a53947d3983785e823bc63acf69d1383 100644
--- a/net/ipv6/ip6_gre.c
+++ b/net/ipv6/ip6_gre.c
@@ -680,12 +680,13 @@ static int prepare_ip6gre_xmit_ipv6(struct sk_buff *skb,
 				    struct flowi6 *fl6, __u8 *dsfield,
 				    int *encap_limit)
 {
-	struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+	struct ipv6hdr *ipv6h;
 	struct ip6_tnl *t = netdev_priv(dev);
 	__u16 offset;

 	offset = ip6_tnl_parse_tlv_enc_lim(skb, skb_network_header(skb));
 	/* ip6_tnl_parse_tlv_enc_lim() might have reallocated skb->head */
+	ipv6h = ipv6_hdr(skb);
 	if (offset > 0) {
 		struct ipv6_tlv_tnl_enc_lim *tel;
diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
index ade1390c63488a60b405ca70052b3493fecc67d5..d0ad85b8650da7a63956038ad3ea80c6c25625b9 100644
--- a/net/ipv6/ip6_tunnel.c
+++ b/net/ipv6/ip6_tunnel.c
@@ -1283,12 +1283,11 @@ ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
 	}

 	fl6.flowi6_uid = sock_net_uid(dev_net(dev), NULL);
+	dsfield = INET_ECN_encapsulate(dsfield, ipv4_get_dsfield(iph));

 	if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6))
 		return -1;

-	dsfield = INET_ECN_encapsulate(dsfield, ipv4_get_dsfield(iph));
-
 	skb_set_inner_ipproto(skb, IPPROTO_IPIP);

 	err = ip6_tnl_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu,
@@ -1372,12 +1371,11 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
 	}

 	fl6.flowi6_uid = sock_net_uid(dev_net(dev), NULL);
+	dsfield = INET_ECN_encapsulate(dsfield, ipv6_get_dsfield(ipv6h));

 	if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6))
 		return -1;

-	dsfield = INET_ECN_encapsulate(dsfield, ipv6_get_dsfield(ipv6h));
-
 	skb_set_inner_ipproto(skb, IPPROTO_IPV6);

 	err = ip6_tnl_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu,
diff --git a/net/ipv6/netfilter/ip6t_rpfilter.c b/net/ipv6/netfilter/ip6t_rpfilter.c
index c3c6b09acdc4fcb76f923b057a864ae06fa5be13..0f3407f2851ed7fb003451d484c6fe7666d58a60 100644
--- a/net/ipv6/netfilter/ip6t_rpfilter.c
+++ b/net/ipv6/netfilter/ip6t_rpfilter.c
@@ -58,7 +58,9 @@ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
 	if (rpfilter_addr_linklocal(&iph->saddr)) {
 		lookup_flags |= RT6_LOOKUP_F_IFACE;
 		fl6.flowi6_oif = dev->ifindex;
-	} else if ((flags & XT_RPFILTER_LOOSE) == 0)
+	/* Set flowi6_oif for vrf devices to lookup route in l3mdev domain. */
+	} else if (netif_is_l3_master(dev) || netif_is_l3_slave(dev) ||
+		   (flags & XT_RPFILTER_LOOSE) == 0)
 		fl6.flowi6_oif = dev->ifindex;

 	rt = (void *)ip6_route_lookup(net, &fl6, skb, lookup_flags);
@@ -73,7 +75,9 @@ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
 		goto out;
 	}

-	if (rt->rt6i_idev->dev == dev || (flags & XT_RPFILTER_LOOSE))
+	if (rt->rt6i_idev->dev == dev ||
+	    l3mdev_master_ifindex_rcu(rt->rt6i_idev->dev) == dev->ifindex ||
+	    (flags & XT_RPFILTER_LOOSE))
 		ret = true;
 out:
 	ip6_rt_put(rt);
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 24f7b2cf504b2dc2ab2f6467f9a5bfd969ed3ba7..c8858638013400a2c1cc38be299af56d7620142c 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -2214,7 +2214,7 @@ static struct dst_entry *rt6_check(struct rt6_info *rt,
 {
 	u32 rt_cookie = 0;

-	if ((from && !fib6_get_cookie_safe(from, &rt_cookie)) ||
+	if (!from || !fib6_get_cookie_safe(from, &rt_cookie) ||
 	    rt_cookie != cookie)
 		return NULL;
@@ -3109,7 +3109,7 @@ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
 	rt->fib6_metric = cfg->fc_metric;
 	rt->fib6_nh.nh_weight = 1;

-	rt->fib6_type = cfg->fc_type;
+	rt->fib6_type = cfg->fc_type ? : RTN_UNICAST;

 	/* We cannot add true routes via loopback here,
 	   they would result in kernel looping; promote them to reject routes
diff --git a/net/key/af_key.c b/net/key/af_key.c
index 0b79c9aa8eb1f2184afbfe38cb29fa4b66b49ba6..1982f9f31debb26c615a91fffdc45887d01cbcba 100644
--- a/net/key/af_key.c
+++ b/net/key/af_key.c
@@ -2442,8 +2442,10 @@ static int key_pol_get_resp(struct sock *sk, struct xfrm_policy *xp, const struc
 		goto out;
 	}
 	err = pfkey_xfrm_policy2msg(out_skb, xp, dir);
-	if (err < 0)
+	if (err < 0) {
+		kfree_skb(out_skb);
 		goto out;
+	}

 	out_hdr = (struct sadb_msg *) out_skb->data;
 	out_hdr->sadb_msg_version = hdr->sadb_msg_version;
@@ -2694,8 +2696,10 @@ static int dump_sp(struct xfrm_policy *xp, int dir, int count, void *ptr)
 		return PTR_ERR(out_skb);

 	err = pfkey_xfrm_policy2msg(out_skb, xp, dir);
-	if (err < 0)
+	if (err < 0) {
+		kfree_skb(out_skb);
 		return err;
+	}

 	out_hdr = (struct sadb_msg *) out_skb->data;
 	out_hdr->sadb_msg_version = pfk->dump.msg_version;
diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
index 04d9946dcdba648f8e4dce9c0c0211f7036ae135..c0956781665e154c89a4409d52cacc2b35e0cd36 100644
--- a/net/l2tp/l2tp_ppp.c
+++ b/net/l2tp/l2tp_ppp.c
@@ -1686,6 +1686,9 @@ static const struct proto_ops pppol2tp_ops = {
 	.recvmsg = pppol2tp_recvmsg,
 	.mmap = sock_no_mmap,
 	.ioctl = pppox_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl = pppox_compat_ioctl,
+#endif
 };

 static const struct pppox_proto pppol2tp_proto = {
diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
index 40c51022346790073a5d328e80d7fd9022c2563d..a48e83b19cfa7b0250bfd38ca6ed0fce4351d0bd 100644
--- a/net/mac80211/cfg.c
+++ b/net/mac80211/cfg.c
@@ -1471,6 +1471,11 @@ static int ieee80211_add_station(struct wiphy *wiphy, struct net_device *dev,
 	if (is_multicast_ether_addr(mac))
 		return -EINVAL;

+	if (params->sta_flags_set & BIT(NL80211_STA_FLAG_TDLS_PEER) &&
+	    sdata->vif.type == NL80211_IFTYPE_STATION &&
+	    !sdata->u.mgd.associated)
+		return -EINVAL;
+
 	sta = sta_info_alloc(sdata, mac, GFP_KERNEL);
 	if (!sta)
 		return -ENOMEM;
@@ -1478,10 +1483,6 @@ static int ieee80211_add_station(struct wiphy *wiphy, struct net_device *dev,
 	if (params->sta_flags_set & BIT(NL80211_STA_FLAG_TDLS_PEER))
 		sta->sta.tdls = true;

-	if (sta->sta.tdls && sdata->vif.type == NL80211_IFTYPE_STATION &&
-	    !sdata->u.mgd.associated)
-		return -EINVAL;
-
 	err = sta_apply_parameters(local, sta, params);
 	if (err) {
 		sta_info_free(local, sta);
diff --git a/net/mac80211/driver-ops.c b/net/mac80211/driver-ops.c
index bb886e7db47f1dcf409a4746d7ee47be8c679462..f783d1377d9a8205eea3218354083debc3c5ecdb 100644
--- a/net/mac80211/driver-ops.c
+++ b/net/mac80211/driver-ops.c
@@ -169,11 +169,16 @@ int drv_conf_tx(struct ieee80211_local *local,
 	if (!check_sdata_in_driver(sdata))
 		return -EIO;

-	if (WARN_ONCE(params->cw_min == 0 ||
-		      params->cw_min > params->cw_max,
-		      "%s: invalid CW_min/CW_max: %d/%d\n",
-		      sdata->name, params->cw_min, params->cw_max))
+	if (params->cw_min == 0 || params->cw_min > params->cw_max) {
+		/*
+		 * If we can't configure hardware anyway, don't warn. We may
+		 * never have initialized the CW parameters.
+		 */
+		WARN_ONCE(local->ops->conf_tx,
+			  "%s: invalid CW_min/CW_max: %d/%d\n",
+			  sdata->name, params->cw_min, params->cw_max);
 		return -EINVAL;
+	}

 	trace_drv_conf_tx(local, sdata, ac, params);
 	if (local->ops->conf_tx)
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
index 1aaa73fa308e655e013970e2abeb0d5da7cb4bb5..b5c06242a92e5b9667760b3bfada218256e63cc8 100644
--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -1967,6 +1967,16 @@ ieee80211_sta_wmm_params(struct ieee80211_local *local,
 		ieee80211_regulatory_limit_wmm_params(sdata, &params[ac], ac);
 	}

+	/* WMM specification requires all 4 ACIs. */
+	for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
+		if (params[ac].cw_min == 0) {
+			sdata_info(sdata,
+				   "AP has invalid WMM params (missing AC %d), using defaults\n",
+				   ac);
+			return false;
+		}
+	}
+
 	for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
 		mlme_dbg(sdata,
 			 "WMM AC=%d acm=%d aifs=%d cWmin=%d cWmax=%d txop=%d uapsd=%d, downgraded=%d\n",
diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
index 7523d995ea8abe0d26973f04e181898ca3f5cc7f..b12f23c996f4e5e0584b8046d14a460a0335bdfa 100644
--- a/net/mac80211/rx.c
+++ b/net/mac80211/rx.c
@@ -2372,11 +2372,13 @@ static void ieee80211_deliver_skb_to_local_stack(struct sk_buff *skb,
 		      skb->protocol == cpu_to_be16(ETH_P_PREAUTH)) &&
 		     sdata->control_port_over_nl80211)) {
 		struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
-		bool noencrypt = status->flag & RX_FLAG_DECRYPTED;
+		bool noencrypt = !(status->flag & RX_FLAG_DECRYPTED);

 		cfg80211_rx_control_port(dev, skb, noencrypt);
 		dev_kfree_skb(skb);
 	} else {
+		memset(skb->cb, 0, sizeof(skb->cb));
+
 		/* deliver to local stack */
 		if (rx->napi)
 			napi_gro_receive(rx->napi, skb);
@@ -2470,8 +2472,6 @@ ieee80211_deliver_skb(struct ieee80211_rx_data *rx)
 	if (skb) {
 		skb->protocol = eth_type_trans(skb, dev);
-		memset(skb->cb, 0, sizeof(skb->cb));
-
 		ieee80211_deliver_skb_to_local_stack(skb, rx);
 	}
diff --git a/net/netfilter/ipset/ip_set_bitmap_ipmac.c b/net/netfilter/ipset/ip_set_bitmap_ipmac.c
index 13ade5782847bf8e4fe73eae31bdc0f37f51098b..4f01321e793ce120fa20d827506470c1c21ba4f4 100644
--- a/net/netfilter/ipset/ip_set_bitmap_ipmac.c
+++ b/net/netfilter/ipset/ip_set_bitmap_ipmac.c
@@ -230,7 +230,7 @@ bitmap_ipmac_kadt(struct ip_set *set, const struct sk_buff *skb,
 	e.id = ip_to_id(map, ip);

-	if (opt->flags & IPSET_DIM_ONE_SRC)
+	if (opt->flags & IPSET_DIM_TWO_SRC)
 		ether_addr_copy(e.ether, eth_hdr(skb)->h_source);
 	else
 		ether_addr_copy(e.ether, eth_hdr(skb)->h_dest);
diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
index 1577f2f76060dcd816f94078412f52943568ce40..e2538c5786714ff097f28530af99da893fb61dad 100644
--- a/net/netfilter/ipset/ip_set_core.c
+++ b/net/netfilter/ipset/ip_set_core.c
@@ -1157,7 +1157,7 @@ static int ip_set_rename(struct net *net, struct sock *ctnl,
 		return -ENOENT;

 	write_lock_bh(&ip_set_ref_lock);
-	if (set->ref != 0) {
+	if (set->ref != 0 || set->ref_netlink != 0) {
 		ret = -IPSET_ERR_REFERENCED;
 		goto out;
 	}
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index 8a33dac4e8058b0ec07281814f5c90507421eb4e..ddfe06d7530bae18cabb850f825a1e1b9bf6d72a 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -625,7 +625,7 @@ retry:
 			goto cleanup;
 		}
 		m->size = AHASH_INIT_SIZE;
-		extsize = ext_size(AHASH_INIT_SIZE, dsize);
+		extsize += ext_size(AHASH_INIT_SIZE, dsize);
 		RCU_INIT_POINTER(hbucket(t, key), m);
 	} else if (m->pos >= m->size) {
 		struct hbucket *ht;
diff --git a/net/netfilter/ipset/ip_set_hash_ipmac.c b/net/netfilter/ipset/ip_set_hash_ipmac.c
index fd87de3ed55b33a1d38035211001ba11b81b808e..16ec822e40447447095b9da3f50541d6da6bf990 100644
--- a/net/netfilter/ipset/ip_set_hash_ipmac.c
+++ b/net/netfilter/ipset/ip_set_hash_ipmac.c
@@ -95,15 +95,11 @@ hash_ipmac4_kadt(struct ip_set *set, const struct sk_buff *skb,
 	struct hash_ipmac4_elem e = { .ip = 0, { .foo[0] = 0, .foo[1] = 0 } };
 	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);

-	/* MAC can be src only */
-	if (!(opt->flags & IPSET_DIM_TWO_SRC))
-		return 0;
-
 	if (skb_mac_header(skb) < skb->head ||
 	    (skb_mac_header(skb) + ETH_HLEN) > skb->data)
 		return -EINVAL;

-	if (opt->flags & IPSET_DIM_ONE_SRC)
+	if (opt->flags & IPSET_DIM_TWO_SRC)
 		ether_addr_copy(e.ether, eth_hdr(skb)->h_source);
 	else
 		ether_addr_copy(e.ether, eth_hdr(skb)->h_dest);
diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
index 62c0e80dcd7198e668e12e15beed6447027f762c..a71f777d1353abf16509c8389fd3863860ba0562 100644
--- a/net/netfilter/ipvs/ip_vs_core.c
+++ b/net/netfilter/ipvs/ip_vs_core.c
@@ -2218,7 +2218,6 @@ static const struct nf_hook_ops ip_vs_ops[] = {
 static int __net_init __ip_vs_init(struct net *net)
 {
 	struct netns_ipvs *ipvs;
-	int ret;

 	ipvs = net_generic(net, ip_vs_net_id);
 	if (ipvs == NULL)
@@ -2250,17 +2249,11 @@ static int __net_init __ip_vs_init(struct net *net)
 	if (ip_vs_sync_net_init(ipvs) < 0)
 		goto sync_fail;

-	ret = nf_register_net_hooks(net, ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
-	if (ret < 0)
-		goto hook_fail;
-
 	return 0;
 /*
  * Error handling
  */
-hook_fail:
-	ip_vs_sync_net_cleanup(ipvs);
 sync_fail:
 	ip_vs_conn_net_cleanup(ipvs);
 conn_fail:
@@ -2290,6 +2283,19 @@ static void __net_exit __ip_vs_cleanup(struct net *net)
 	net->ipvs = NULL;
 }

+static int __net_init __ip_vs_dev_init(struct net *net)
+{
+	int ret;
+
+	ret = nf_register_net_hooks(net, ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
+	if (ret < 0)
+		goto hook_fail;
+	return 0;
+
+hook_fail:
+	return ret;
+}
+
 static void __net_exit __ip_vs_dev_cleanup(struct net *net)
 {
 	struct netns_ipvs *ipvs = net_ipvs(net);
@@ -2309,6 +2315,7 @@ static struct pernet_operations ipvs_core_ops = {
 };

 static struct pernet_operations ipvs_core_dev_ops = {
+	.init = __ip_vs_dev_init,
 	.exit = __ip_vs_dev_cleanup,
 };
diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
index 2d4e048762f6ddb85e7785586ea6c1d4f9d26bba..3df94a49912663cfb2314b3132b96751df088f42 100644
--- a/net/netfilter/ipvs/ip_vs_ctl.c
+++ b/net/netfilter/ipvs/ip_vs_ctl.c
@@ -2382,9 +2382,7 @@ do_ip_vs_set_ctl(struct sock *sk, int cmd, void __user *user, unsigned int len)
 			cfg.syncid = dm->syncid;
 			ret = start_sync_thread(ipvs, &cfg, dm->state);
 		} else {
-			mutex_lock(&ipvs->sync_mutex);
 			ret = stop_sync_thread(ipvs, dm->state);
-
mutex_unlock(&ipvs->sync_mutex); } goto out_dec; } @@ -3492,10 +3490,8 @@ static int ip_vs_genl_del_daemon(struct netns_ipvs *ipvs, struct nlattr **attrs) if (!attrs[IPVS_DAEMON_ATTR_STATE]) return -EINVAL; - mutex_lock(&ipvs->sync_mutex); ret = stop_sync_thread(ipvs, nla_get_u32(attrs[IPVS_DAEMON_ATTR_STATE])); - mutex_unlock(&ipvs->sync_mutex); return ret; } diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c index d4020c5e831d3020a6e412ead6d1895f81b5a124..ecb71062fcb3cbfe230de9ddd5c4b86dbd41f051 100644 --- a/net/netfilter/ipvs/ip_vs_sync.c +++ b/net/netfilter/ipvs/ip_vs_sync.c @@ -195,6 +195,7 @@ union ip_vs_sync_conn { #define IPVS_OPT_F_PARAM (1 << (IPVS_OPT_PARAM-1)) struct ip_vs_sync_thread_data { + struct task_struct *task; struct netns_ipvs *ipvs; struct socket *sock; char *buf; @@ -374,8 +375,11 @@ static inline void sb_queue_tail(struct netns_ipvs *ipvs, max(IPVS_SYNC_SEND_DELAY, 1)); ms->sync_queue_len++; list_add_tail(&sb->list, &ms->sync_queue); - if ((++ms->sync_queue_delay) == IPVS_SYNC_WAKEUP_RATE) - wake_up_process(ms->master_thread); + if ((++ms->sync_queue_delay) == IPVS_SYNC_WAKEUP_RATE) { + int id = (int)(ms - ipvs->ms); + + wake_up_process(ipvs->master_tinfo[id].task); + } } else ip_vs_sync_buff_release(sb); spin_unlock(&ipvs->sync_lock); @@ -1636,8 +1640,10 @@ static void master_wakeup_work_handler(struct work_struct *work) spin_lock_bh(&ipvs->sync_lock); if (ms->sync_queue_len && ms->sync_queue_delay < IPVS_SYNC_WAKEUP_RATE) { + int id = (int)(ms - ipvs->ms); + ms->sync_queue_delay = IPVS_SYNC_WAKEUP_RATE; - wake_up_process(ms->master_thread); + wake_up_process(ipvs->master_tinfo[id].task); } spin_unlock_bh(&ipvs->sync_lock); } @@ -1703,10 +1709,6 @@ done: if (sb) ip_vs_sync_buff_release(sb); - /* release the sending multicast socket */ - sock_release(tinfo->sock); - kfree(tinfo); - return 0; } @@ -1740,11 +1742,6 @@ static int sync_thread_backup(void *data) } } - /* release the sending multicast socket */ - sock_release(tinfo->sock); - kfree(tinfo->buf); - kfree(tinfo); - return 0; } @@ -1752,8 +1749,8 @@ static int sync_thread_backup(void *data) int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c, int state) { - struct ip_vs_sync_thread_data *tinfo = NULL; - struct task_struct **array = NULL, *task; + struct ip_vs_sync_thread_data *ti = NULL, *tinfo; + struct task_struct *task; struct net_device *dev; char *name; int (*threadfn)(void *data); @@ -1822,7 +1819,7 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c, threadfn = sync_thread_master; } else if (state == IP_VS_STATE_BACKUP) { result = -EEXIST; - if (ipvs->backup_threads) + if (ipvs->backup_tinfo) goto out_early; ipvs->bcfg = *c; @@ -1849,28 +1846,22 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c, master_wakeup_work_handler); ms->ipvs = ipvs; } - } else { - array = kcalloc(count, sizeof(struct task_struct *), - GFP_KERNEL); - result = -ENOMEM; - if (!array) - goto out; } + result = -ENOMEM; + ti = kcalloc(count, sizeof(struct ip_vs_sync_thread_data), + GFP_KERNEL); + if (!ti) + goto out; for (id = 0; id < count; id++) { - result = -ENOMEM; - tinfo = kmalloc(sizeof(*tinfo), GFP_KERNEL); - if (!tinfo) - goto out; + tinfo = &ti[id]; tinfo->ipvs = ipvs; - tinfo->sock = NULL; if (state == IP_VS_STATE_BACKUP) { + result = -ENOMEM; tinfo->buf = kmalloc(ipvs->bcfg.sync_maxlen, GFP_KERNEL); if (!tinfo->buf) goto out; - } else { - tinfo->buf = NULL; } tinfo->id = id; if (state == IP_VS_STATE_MASTER) @@ 
-1885,17 +1876,15 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c, result = PTR_ERR(task); goto out; } - tinfo = NULL; - if (state == IP_VS_STATE_MASTER) - ipvs->ms[id].master_thread = task; - else - array[id] = task; + tinfo->task = task; } /* mark as active */ - if (state == IP_VS_STATE_BACKUP) - ipvs->backup_threads = array; + if (state == IP_VS_STATE_MASTER) + ipvs->master_tinfo = ti; + else + ipvs->backup_tinfo = ti; spin_lock_bh(&ipvs->sync_buff_lock); ipvs->sync_state |= state; spin_unlock_bh(&ipvs->sync_buff_lock); @@ -1910,29 +1899,31 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c, out: /* We do not need RTNL lock anymore, release it here so that - * sock_release below and in the kthreads can use rtnl_lock - * to leave the mcast group. + * sock_release below can use rtnl_lock to leave the mcast group. */ rtnl_unlock(); - count = id; - while (count-- > 0) { - if (state == IP_VS_STATE_MASTER) - kthread_stop(ipvs->ms[count].master_thread); - else - kthread_stop(array[count]); + id = min(id, count - 1); + if (ti) { + for (tinfo = ti + id; tinfo >= ti; tinfo--) { + if (tinfo->task) + kthread_stop(tinfo->task); + } } if (!(ipvs->sync_state & IP_VS_STATE_MASTER)) { kfree(ipvs->ms); ipvs->ms = NULL; } mutex_unlock(&ipvs->sync_mutex); - if (tinfo) { - if (tinfo->sock) - sock_release(tinfo->sock); - kfree(tinfo->buf); - kfree(tinfo); + + /* No more mutexes, release socks */ + if (ti) { + for (tinfo = ti + id; tinfo >= ti; tinfo--) { + if (tinfo->sock) + sock_release(tinfo->sock); + kfree(tinfo->buf); + } + kfree(ti); } - kfree(array); return result; out_early: @@ -1944,15 +1935,18 @@ out_early: int stop_sync_thread(struct netns_ipvs *ipvs, int state) { - struct task_struct **array; + struct ip_vs_sync_thread_data *ti, *tinfo; int id; int retc = -EINVAL; IP_VS_DBG(7, "%s(): pid %d\n", __func__, task_pid_nr(current)); + mutex_lock(&ipvs->sync_mutex); if (state == IP_VS_STATE_MASTER) { + retc = -ESRCH; if (!ipvs->ms) - return -ESRCH; + goto err; + ti = ipvs->master_tinfo; /* * The lock synchronizes with sb_queue_tail(), so that we don't @@ -1971,38 +1965,56 @@ int stop_sync_thread(struct netns_ipvs *ipvs, int state) struct ipvs_master_sync_state *ms = &ipvs->ms[id]; int ret; + tinfo = &ti[id]; pr_info("stopping master sync thread %d ...\n", - task_pid_nr(ms->master_thread)); + task_pid_nr(tinfo->task)); cancel_delayed_work_sync(&ms->master_wakeup_work); - ret = kthread_stop(ms->master_thread); + ret = kthread_stop(tinfo->task); if (retc >= 0) retc = ret; } kfree(ipvs->ms); ipvs->ms = NULL; + ipvs->master_tinfo = NULL; } else if (state == IP_VS_STATE_BACKUP) { - if (!ipvs->backup_threads) - return -ESRCH; + retc = -ESRCH; + if (!ipvs->backup_tinfo) + goto err; + ti = ipvs->backup_tinfo; ipvs->sync_state &= ~IP_VS_STATE_BACKUP; - array = ipvs->backup_threads; retc = 0; for (id = ipvs->threads_mask; id >= 0; id--) { int ret; + tinfo = &ti[id]; pr_info("stopping backup sync thread %d ...\n", - task_pid_nr(array[id])); - ret = kthread_stop(array[id]); + task_pid_nr(tinfo->task)); + ret = kthread_stop(tinfo->task); if (retc >= 0) retc = ret; } - kfree(array); - ipvs->backup_threads = NULL; + ipvs->backup_tinfo = NULL; + } else { + goto err; } + id = ipvs->threads_mask; + mutex_unlock(&ipvs->sync_mutex); + + /* No more mutexes, release socks */ + for (tinfo = ti + id; tinfo >= ti; tinfo--) { + if (tinfo->sock) + sock_release(tinfo->sock); + kfree(tinfo->buf); + } + kfree(ti); /* decrease the module use count */ ip_vs_use_count_dec(); 
+ return retc; +err: + mutex_unlock(&ipvs->sync_mutex); return retc; } @@ -2021,7 +2033,6 @@ void ip_vs_sync_net_cleanup(struct netns_ipvs *ipvs) { int retc; - mutex_lock(&ipvs->sync_mutex); retc = stop_sync_thread(ipvs, IP_VS_STATE_MASTER); if (retc && retc != -ESRCH) pr_err("Failed to stop Master Daemon\n"); @@ -2029,5 +2040,4 @@ void ip_vs_sync_net_cleanup(struct netns_ipvs *ipvs) retc = stop_sync_thread(ipvs, IP_VS_STATE_BACKUP); if (retc && retc != -ESRCH) pr_err("Failed to stop Backup Daemon\n"); - mutex_unlock(&ipvs->sync_mutex); } diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c index 27eff89fad01c3ccc3a8e9593d4aca94c216758d..c6073d17c324430404775a76b9ef9d1bdbe174ab 100644 --- a/net/netfilter/nf_conntrack_core.c +++ b/net/netfilter/nf_conntrack_core.c @@ -431,13 +431,12 @@ EXPORT_SYMBOL_GPL(nf_ct_invert_tuple); * table location, we assume id gets exposed to userspace. * * Following nf_conn items do not change throughout lifetime - * of the nf_conn after it has been committed to main hash table: + * of the nf_conn: * * 1. nf_conn address - * 2. nf_conn->ext address - * 3. nf_conn->master address (normally NULL) - * 4. tuple - * 5. the associated net namespace + * 2. nf_conn->master address (normally NULL) + * 3. the associated net namespace + * 4. the original direction tuple */ u32 nf_ct_get_id(const struct nf_conn *ct) { @@ -447,9 +446,10 @@ u32 nf_ct_get_id(const struct nf_conn *ct) net_get_random_once(&ct_id_seed, sizeof(ct_id_seed)); a = (unsigned long)ct; - b = (unsigned long)ct->master ^ net_hash_mix(nf_ct_net(ct)); - c = (unsigned long)ct->ext; - d = (unsigned long)siphash(&ct->tuplehash, sizeof(ct->tuplehash), + b = (unsigned long)ct->master; + c = (unsigned long)nf_ct_net(ct); + d = (unsigned long)siphash(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, + sizeof(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple), &ct_id_seed); #ifdef CONFIG_64BIT return siphash_4u64((u64)a, (u64)b, (u64)c, (u64)d, &ct_id_seed); diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c index 842f3f86fb2e7af1a7f75af55cfdd8bad5e92c81..7011ab27c4371bbedcac3c15795173eaa074ab46 100644 --- a/net/netfilter/nf_conntrack_proto_tcp.c +++ b/net/netfilter/nf_conntrack_proto_tcp.c @@ -480,6 +480,7 @@ static bool tcp_in_window(const struct nf_conn *ct, struct ip_ct_tcp_state *receiver = &state->seen[!dir]; const struct nf_conntrack_tuple *tuple = &ct->tuplehash[dir].tuple; __u32 seq, ack, sack, end, win, swin; + u16 win_raw; s32 receiver_offset; bool res, in_recv_win; @@ -488,7 +489,8 @@ static bool tcp_in_window(const struct nf_conn *ct, */ seq = ntohl(tcph->seq); ack = sack = ntohl(tcph->ack_seq); - win = ntohs(tcph->window); + win_raw = ntohs(tcph->window); + win = win_raw; end = segment_seq_plus_len(seq, skb->len, dataoff, tcph); if (receiver->flags & IP_CT_TCP_FLAG_SACK_PERM) @@ -663,14 +665,14 @@ static bool tcp_in_window(const struct nf_conn *ct, && state->last_seq == seq && state->last_ack == ack && state->last_end == end - && state->last_win == win) + && state->last_win == win_raw) state->retrans++; else { state->last_dir = dir; state->last_seq = seq; state->last_ack = ack; state->last_end = end; - state->last_win = win; + state->last_win = win_raw; state->retrans = 0; } } diff --git a/net/netfilter/nf_queue.c b/net/netfilter/nf_queue.c index 7569ba00e732843958f616efc29458de65c65ecf..a96a8c16baf99fda44c5bdd6ddfd8581f360f9a4 100644 --- a/net/netfilter/nf_queue.c +++ b/net/netfilter/nf_queue.c @@ -174,6 +174,11 @@ static int __nf_queue(struct 
sk_buff *skb, const struct nf_hook_state *state, goto err; } + if (!skb_dst_force(skb) && state->hook != NF_INET_PRE_ROUTING) { + status = -ENETDOWN; + goto err; + } + *entry = (struct nf_queue_entry) { .skb = skb, .state = *state, @@ -182,7 +187,6 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state, }; nf_queue_entry_get_refs(entry); - skb_dst_force(skb); switch (entry->state.pf) { case AF_INET: diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c index 916913454624f2740212c62d1c0ce61bc8ae6f73..7f2c1915763f8caec6e2b2b5a460e6db5767d094 100644 --- a/net/netfilter/nfnetlink.c +++ b/net/netfilter/nfnetlink.c @@ -575,7 +575,7 @@ static int nfnetlink_bind(struct net *net, int group) ss = nfnetlink_get_subsys(type << 8); rcu_read_unlock(); if (!ss) - request_module("nfnetlink-subsys-%d", type); + request_module_nowait("nfnetlink-subsys-%d", type); return 0; } #endif diff --git a/net/netfilter/nft_hash.c b/net/netfilter/nft_hash.c index c2d237144f747c4e4938ca4807668a400d1018c5..b8f23f75aea6cbe96bf627c030d39e50539596ad 100644 --- a/net/netfilter/nft_hash.c +++ b/net/netfilter/nft_hash.c @@ -196,7 +196,7 @@ static int nft_symhash_init(const struct nft_ctx *ctx, priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]); priv->modulus = ntohl(nla_get_be32(tb[NFTA_HASH_MODULUS])); - if (priv->modulus <= 1) + if (priv->modulus < 1) return -ERANGE; if (priv->offset + priv->modulus - 1 < priv->offset) diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c index 71ffd1a6dc7c6063c00f4c82f985fe9fc0d80dc0..43910e50752cb3f1fabb69f61ed124edf7c926ef 100644 --- a/net/netrom/af_netrom.c +++ b/net/netrom/af_netrom.c @@ -872,7 +872,7 @@ int nr_rx_frame(struct sk_buff *skb, struct net_device *dev) unsigned short frametype, flags, window, timeout; int ret; - skb->sk = NULL; /* Initially we don't know who it's for */ + skb_orphan(skb); /* * skb->data points to the netrom frame start @@ -970,7 +970,9 @@ int nr_rx_frame(struct sk_buff *skb, struct net_device *dev) window = skb->data[20]; + sock_hold(make); skb->sk = make; + skb->destructor = sock_efree; make->sk_state = TCP_ESTABLISHED; /* Fill in his circuit details */ diff --git a/net/nfc/nci/data.c b/net/nfc/nci/data.c index 908f25e3773e5b494342228fc2e94f6ed4195659..5405d073804c6569988069252db5cd099025915e 100644 --- a/net/nfc/nci/data.c +++ b/net/nfc/nci/data.c @@ -119,7 +119,7 @@ static int nci_queue_tx_data_frags(struct nci_dev *ndev, conn_info = nci_get_conn_info_by_conn_id(ndev, conn_id); if (!conn_info) { rc = -EPROTO; - goto free_exit; + goto exit; } __skb_queue_head_init(&frags_q); diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c index 85ae53d8fd098b80e22a4b3ccc26f83be7b110c5..8211e8e97c96462837bcf96bb707330489e97fb4 100644 --- a/net/openvswitch/actions.c +++ b/net/openvswitch/actions.c @@ -175,8 +175,7 @@ static void update_ethertype(struct sk_buff *skb, struct ethhdr *hdr, if (skb->ip_summed == CHECKSUM_COMPLETE) { __be16 diff[] = { ~(hdr->h_proto), ethertype }; - skb->csum = ~csum_partial((char *)diff, sizeof(diff), - ~skb->csum); + skb->csum = csum_partial((char *)diff, sizeof(diff), skb->csum); } hdr->h_proto = ethertype; @@ -268,8 +267,7 @@ static int set_mpls(struct sk_buff *skb, struct sw_flow_key *flow_key, if (skb->ip_summed == CHECKSUM_COMPLETE) { __be32 diff[] = { ~(stack->label_stack_entry), lse }; - skb->csum = ~csum_partial((char *)diff, sizeof(diff), - ~skb->csum); + skb->csum = csum_partial((char *)diff, sizeof(diff), skb->csum); } stack->label_stack_entry = lse; diff --git 
a/net/packet/af_packet.c b/net/packet/af_packet.c index 47009dddb7402b9a7419e4bafe42e7745c055a13..2c22fa8cf9bfe89ad8c5c0945a2379a7bf2442b2 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -2617,6 +2617,13 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg) mutex_lock(&po->pg_vec_lock); + /* packet_sendmsg() check on tx_ring.pg_vec was lockless, + * we need to confirm it under protection of pg_vec_lock. + */ + if (unlikely(!po->tx_ring.pg_vec)) { + err = -EBUSY; + goto out; + } if (likely(saddr == NULL)) { dev = packet_cached_dev_get(po); proto = po->num; diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index 3c39b8805d01f2460df022edfad47f3bae6aaed5..7319d3ca30e9494e31ff2a1202ac22e93ad6f56b 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -195,7 +195,7 @@ static int rxrpc_bind(struct socket *sock, struct sockaddr *saddr, int len) service_in_use: write_unlock(&local->services_lock); - rxrpc_put_local(local); + rxrpc_unuse_local(local); ret = -EADDRINUSE; error_unlock: release_sock(&rx->sk); @@ -552,6 +552,7 @@ static int rxrpc_sendmsg(struct socket *sock, struct msghdr *m, size_t len) switch (rx->sk.sk_state) { case RXRPC_UNBOUND: + case RXRPC_CLIENT_UNBOUND: rx->srx.srx_family = AF_RXRPC; rx->srx.srx_service = 0; rx->srx.transport_type = SOCK_DGRAM; @@ -576,10 +577,9 @@ static int rxrpc_sendmsg(struct socket *sock, struct msghdr *m, size_t len) } rx->local = local; - rx->sk.sk_state = RXRPC_CLIENT_UNBOUND; + rx->sk.sk_state = RXRPC_CLIENT_BOUND; /* Fall through */ - case RXRPC_CLIENT_UNBOUND: case RXRPC_CLIENT_BOUND: if (!m->msg_name && test_bit(RXRPC_SOCK_CONNECTED, &rx->flags)) { @@ -908,7 +908,7 @@ static int rxrpc_release_sock(struct sock *sk) rxrpc_queue_work(&rxnet->service_conn_reaper); rxrpc_queue_work(&rxnet->client_conn_reaper); - rxrpc_put_local(rx->local); + rxrpc_unuse_local(rx->local); rx->local = NULL; key_put(rx->key); rx->key = NULL; diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 03e0fc8c183f0f1e1cbe32eee2cc35390b132d77..dfd9eab77cc8a883b55a4e40c2c456c261804efc 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -258,7 +258,8 @@ struct rxrpc_security { */ struct rxrpc_local { struct rcu_head rcu; - atomic_t usage; + atomic_t active_users; /* Number of users of the local endpoint */ + atomic_t usage; /* Number of references to the structure */ struct rxrpc_net *rxnet; /* The network ns in which this resides */ struct list_head link; struct socket *socket; /* my UDP socket */ @@ -998,6 +999,8 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *, const struct sockaddr_rxrpc struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *); struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *); void rxrpc_put_local(struct rxrpc_local *); +struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *); +void rxrpc_unuse_local(struct rxrpc_local *); void rxrpc_queue_local(struct rxrpc_local *); void rxrpc_destroy_all_locals(struct rxrpc_net *); @@ -1057,6 +1060,7 @@ void rxrpc_destroy_all_peers(struct rxrpc_net *); struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *); struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *); void rxrpc_put_peer(struct rxrpc_peer *); +void rxrpc_put_peer_locked(struct rxrpc_peer *); /* * proc.c diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index d591f54cb91fb615d67ec96566566dcef39efd18..7965600ee5dec068a8830366e510a7acd9644eb6 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -1106,8 +1106,12 @@ static void 
rxrpc_post_packet_to_local(struct rxrpc_local *local, { _enter("%p,%p", local, skb); - skb_queue_tail(&local->event_queue, skb); - rxrpc_queue_local(local); + if (rxrpc_get_local_maybe(local)) { + skb_queue_tail(&local->event_queue, skb); + rxrpc_queue_local(local); + } else { + rxrpc_free_skb(skb, rxrpc_skb_rx_freed); + } } /* @@ -1117,8 +1121,12 @@ static void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb) { CHECK_SLAB_OKAY(&local->usage); - skb_queue_tail(&local->reject_queue, skb); - rxrpc_queue_local(local); + if (rxrpc_get_local_maybe(local)) { + skb_queue_tail(&local->reject_queue, skb); + rxrpc_queue_local(local); + } else { + rxrpc_free_skb(skb, rxrpc_skb_rx_freed); + } } /* diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 10317dbdab5f4ba0ddd4c3a7f3a775764a81a06b..c752ad4870678dcdd7c04e42692ef7c586fc014b 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -83,6 +83,7 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet, local = kzalloc(sizeof(struct rxrpc_local), GFP_KERNEL); if (local) { atomic_set(&local->usage, 1); + atomic_set(&local->active_users, 1); local->rxnet = rxnet; INIT_LIST_HEAD(&local->link); INIT_WORK(&local->processor, rxrpc_local_processor); @@ -96,7 +97,7 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet, local->debug_id = atomic_inc_return(&rxrpc_debug_id); memcpy(&local->srx, srx, sizeof(*srx)); local->srx.srx_service = 0; - trace_rxrpc_local(local, rxrpc_local_new, 1, NULL); + trace_rxrpc_local(local->debug_id, rxrpc_local_new, 1, NULL); } _leave(" = %p", local); @@ -270,11 +271,8 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net, * bind the transport socket may still fail if we're attempting * to use a local address that the dying object is still using. */ - if (!rxrpc_get_local_maybe(local)) { - cursor = cursor->next; - list_del_init(&local->link); + if (!rxrpc_use_local(local)) break; - } age = "old"; goto found; @@ -288,7 +286,10 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net, if (ret < 0) goto sock_error; - list_add_tail(&local->link, cursor); + if (cursor != &rxnet->local_endpoints) + list_replace_init(cursor, &local->link); + else + list_add_tail(&local->link, cursor); age = "new"; found: @@ -324,7 +325,7 @@ struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *local) int n; n = atomic_inc_return(&local->usage); - trace_rxrpc_local(local, rxrpc_local_got, n, here); + trace_rxrpc_local(local->debug_id, rxrpc_local_got, n, here); return local; } @@ -338,7 +339,8 @@ struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local) if (local) { int n = atomic_fetch_add_unless(&local->usage, 1, 0); if (n > 0) - trace_rxrpc_local(local, rxrpc_local_got, n + 1, here); + trace_rxrpc_local(local->debug_id, rxrpc_local_got, + n + 1, here); else local = NULL; } @@ -346,24 +348,18 @@ struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local) } /* - * Queue a local endpoint. + * Queue a local endpoint and pass the caller's reference to the work item. */ void rxrpc_queue_local(struct rxrpc_local *local) { const void *here = __builtin_return_address(0); + unsigned int debug_id = local->debug_id; + int n = atomic_read(&local->usage); if (rxrpc_queue_work(&local->processor)) - trace_rxrpc_local(local, rxrpc_local_queued, - atomic_read(&local->usage), here); -} - -/* - * A local endpoint reached its end of life. 
- */ -static void __rxrpc_put_local(struct rxrpc_local *local) -{ - _enter("%d", local->debug_id); - rxrpc_queue_work(&local->processor); + trace_rxrpc_local(debug_id, rxrpc_local_queued, n, here); + else + rxrpc_put_local(local); } /* @@ -376,10 +372,47 @@ void rxrpc_put_local(struct rxrpc_local *local) if (local) { n = atomic_dec_return(&local->usage); - trace_rxrpc_local(local, rxrpc_local_put, n, here); + trace_rxrpc_local(local->debug_id, rxrpc_local_put, n, here); if (n == 0) - __rxrpc_put_local(local); + call_rcu(&local->rcu, rxrpc_local_rcu); + } +} + +/* + * Start using a local endpoint. + */ +struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local) +{ + unsigned int au; + + local = rxrpc_get_local_maybe(local); + if (!local) + return NULL; + + au = atomic_fetch_add_unless(&local->active_users, 1, 0); + if (au == 0) { + rxrpc_put_local(local); + return NULL; + } + + return local; +} + +/* + * Cease using a local endpoint. Once the number of active users reaches 0, we + * start the closure of the transport in the work processor. + */ +void rxrpc_unuse_local(struct rxrpc_local *local) +{ + unsigned int au; + + if (local) { + au = atomic_dec_return(&local->active_users); + if (au == 0) + rxrpc_queue_local(local); + else + rxrpc_put_local(local); } } @@ -397,16 +430,6 @@ static void rxrpc_local_destroyer(struct rxrpc_local *local) _enter("%d", local->debug_id); - /* We can get a race between an incoming call packet queueing the - * processor again and the work processor starting the destruction - * process which will shut down the UDP socket. - */ - if (local->dead) { - _leave(" [already dead]"); - return; - } - local->dead = true; - mutex_lock(&rxnet->local_mutex); list_del_init(&local->link); mutex_unlock(&rxnet->local_mutex); @@ -426,13 +449,11 @@ static void rxrpc_local_destroyer(struct rxrpc_local *local) */ rxrpc_purge_queue(&local->reject_queue); rxrpc_purge_queue(&local->event_queue); - - _debug("rcu local %d", local->debug_id); - call_rcu(&local->rcu, rxrpc_local_rcu); } /* - * Process events on an endpoint + * Process events on an endpoint. The work item carries a ref which + * we must release. 
*/ static void rxrpc_local_processor(struct work_struct *work) { @@ -440,13 +461,15 @@ static void rxrpc_local_processor(struct work_struct *work) container_of(work, struct rxrpc_local, processor); bool again; - trace_rxrpc_local(local, rxrpc_local_processing, + trace_rxrpc_local(local->debug_id, rxrpc_local_processing, atomic_read(&local->usage), NULL); do { again = false; - if (atomic_read(&local->usage) == 0) - return rxrpc_local_destroyer(local); + if (atomic_read(&local->active_users) == 0) { + rxrpc_local_destroyer(local); + break; + } if (!skb_queue_empty(&local->reject_queue)) { rxrpc_reject_packets(local); @@ -458,6 +481,8 @@ static void rxrpc_local_processor(struct work_struct *work) again = true; } } while (again); + + rxrpc_put_local(local); } /* diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c index bd2fa3b7caa7e08ccec66979380f01423aca7f7b..dc7fdaf20445b117237f4a33466f52ec0be346ff 100644 --- a/net/rxrpc/peer_event.c +++ b/net/rxrpc/peer_event.c @@ -375,7 +375,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet, spin_lock_bh(&rxnet->peer_hash_lock); list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive[slot & mask]); - rxrpc_put_peer(peer); + rxrpc_put_peer_locked(peer); } spin_unlock_bh(&rxnet->peer_hash_lock); diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c index 5691b7d266ca0aaef3a5b2da30d64891e644f0f5..71547e8673b99d19f40a036c9530336643cad324 100644 --- a/net/rxrpc/peer_object.c +++ b/net/rxrpc/peer_object.c @@ -440,6 +440,24 @@ void rxrpc_put_peer(struct rxrpc_peer *peer) } } +/* + * Drop a ref on a peer record where the caller already holds the + * peer_hash_lock. + */ +void rxrpc_put_peer_locked(struct rxrpc_peer *peer) +{ + const void *here = __builtin_return_address(0); + int n; + + n = atomic_dec_return(&peer->usage); + trace_rxrpc_peer(peer, rxrpc_peer_put, n, here); + if (n == 0) { + hash_del_rcu(&peer->hash_link); + list_del_init(&peer->keepalive_link); + kfree_rcu(peer, rcu); + } +} + /* * Make sure all peer records have been discarded. 
*/ diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index be01f9c5d963ddfc766fac811ace9a381b89a7f7..5d6ab4f6fd7abb86ece09271665d6d8f70583f91 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -230,6 +230,7 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, 0, ret); + rxrpc_notify_socket(call); goto out; } _debug("need instant resend %d", ret); diff --git a/net/sched/act_bpf.c b/net/sched/act_bpf.c index 0c68bc9cf0b4df540a223e14dfa8ff569f96a40c..20fae5ca87faa8d6515b6958f8ba66892ad61ae8 100644 --- a/net/sched/act_bpf.c +++ b/net/sched/act_bpf.c @@ -287,6 +287,7 @@ static int tcf_bpf_init(struct net *net, struct nlattr *nla, struct tcf_bpf *prog; bool is_bpf, is_ebpf; int ret, res = 0; + u32 index; if (!nla) return -EINVAL; @@ -299,13 +300,13 @@ static int tcf_bpf_init(struct net *net, struct nlattr *nla, return -EINVAL; parm = nla_data(tb[TCA_ACT_BPF_PARMS]); - - ret = tcf_idr_check_alloc(tn, &parm->index, act, bind); + index = parm->index; + ret = tcf_idr_check_alloc(tn, &index, act, bind); if (!ret) { - ret = tcf_idr_create(tn, parm->index, est, act, + ret = tcf_idr_create(tn, index, est, act, &act_bpf_ops, bind, true); if (ret < 0) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } diff --git a/net/sched/act_connmark.c b/net/sched/act_connmark.c index 6f0f273f1139f83ef1a45f017c37d200c36fcae1..6054367479784adbe89431940db155a5c7816080 100644 --- a/net/sched/act_connmark.c +++ b/net/sched/act_connmark.c @@ -104,6 +104,7 @@ static int tcf_connmark_init(struct net *net, struct nlattr *nla, struct tcf_connmark_info *ci; struct tc_connmark *parm; int ret = 0; + u32 index; if (!nla) return -EINVAL; @@ -117,13 +118,13 @@ static int tcf_connmark_init(struct net *net, struct nlattr *nla, return -EINVAL; parm = nla_data(tb[TCA_CONNMARK_PARMS]); - - ret = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + ret = tcf_idr_check_alloc(tn, &index, a, bind); if (!ret) { - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_connmark_ops, bind, false); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } diff --git a/net/sched/act_csum.c b/net/sched/act_csum.c index b8a67ae3105ad10f645bfcb65a503dafcbe5cbb2..40437197e053e61bb836fe2d1dd2fa65598511b7 100644 --- a/net/sched/act_csum.c +++ b/net/sched/act_csum.c @@ -55,6 +55,7 @@ static int tcf_csum_init(struct net *net, struct nlattr *nla, struct tc_csum *parm; struct tcf_csum *p; int ret = 0, err; + u32 index; if (nla == NULL) return -EINVAL; @@ -66,13 +67,13 @@ static int tcf_csum_init(struct net *net, struct nlattr *nla, if (tb[TCA_CSUM_PARMS] == NULL) return -EINVAL; parm = nla_data(tb[TCA_CSUM_PARMS]); - - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (!err) { - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_csum_ops, bind, true); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } ret = ACT_P_CREATED; diff --git a/net/sched/act_gact.c b/net/sched/act_gact.c index cd1d9bd32ef9af4c5789e0331b6d1c1b7e6820f3..72d3347bdd41c0443bae0e78327e0b6d31742110 100644 --- a/net/sched/act_gact.c +++ b/net/sched/act_gact.c @@ -64,6 +64,7 @@ static int tcf_gact_init(struct net *net, struct nlattr *nla, struct tc_gact *parm; struct tcf_gact *gact; int ret = 0; + u32 index; int 
err; #ifdef CONFIG_GACT_PROB struct tc_gact_p *p_parm = NULL; @@ -79,6 +80,7 @@ static int tcf_gact_init(struct net *net, struct nlattr *nla, if (tb[TCA_GACT_PARMS] == NULL) return -EINVAL; parm = nla_data(tb[TCA_GACT_PARMS]); + index = parm->index; #ifndef CONFIG_GACT_PROB if (tb[TCA_GACT_PROB] != NULL) @@ -91,12 +93,12 @@ static int tcf_gact_init(struct net *net, struct nlattr *nla, } #endif - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + err = tcf_idr_check_alloc(tn, &index, a, bind); if (!err) { - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_gact_ops, bind, true); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } ret = ACT_P_CREATED; diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c index 06a3d48018782e5d35981fdcfc3208a5f11ad276..24047e0e5db01b381b34554247e753449cd3f9cb 100644 --- a/net/sched/act_ife.c +++ b/net/sched/act_ife.c @@ -482,8 +482,14 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla, u8 *saddr = NULL; bool exists = false; int ret = 0; + u32 index; int err; + if (!nla) { + NL_SET_ERR_MSG_MOD(extack, "IFE requires attributes to be passed"); + return -EINVAL; + } + err = nla_parse_nested(tb, TCA_IFE_MAX, nla, ife_policy, NULL); if (err < 0) return err; @@ -504,7 +510,8 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla, if (!p) return -ENOMEM; - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (err < 0) { kfree(p); return err; @@ -516,10 +523,10 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla, } if (!exists) { - ret = tcf_idr_create(tn, parm->index, est, a, &act_ife_ops, + ret = tcf_idr_create(tn, index, est, a, &act_ife_ops, bind, true); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); kfree(p); return ret; } diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c index f767e78e38c9878add2d3371d7b974005255b81d..548614bd9366c37841fd17db96367a849f6e7646 100644 --- a/net/sched/act_mirred.c +++ b/net/sched/act_mirred.c @@ -104,6 +104,7 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla, struct net_device *dev; bool exists = false; int ret, err; + u32 index; if (!nla) { NL_SET_ERR_MSG_MOD(extack, "Mirred requires attributes to be passed"); @@ -117,8 +118,8 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla, return -EINVAL; } parm = nla_data(tb[TCA_MIRRED_PARMS]); - - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (err < 0) return err; exists = err; @@ -135,21 +136,21 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla, if (exists) tcf_idr_release(*a, bind); else - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); NL_SET_ERR_MSG_MOD(extack, "Unknown mirred option"); return -EINVAL; } if (!exists) { if (!parm->ifindex) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); NL_SET_ERR_MSG_MOD(extack, "Specified device does not exist"); return -EINVAL; } - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_mirred_ops, bind, true); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } ret = ACT_P_CREATED; diff --git a/net/sched/act_nat.c b/net/sched/act_nat.c index 4313aa102440e9b55fb0ba200406801217d65284..619828920b97b8b5669fd54f283f57f211e9534f 100644 --- a/net/sched/act_nat.c +++ b/net/sched/act_nat.c 
@@ -45,6 +45,7 @@ static int tcf_nat_init(struct net *net, struct nlattr *nla, struct nlattr *est, struct tc_nat *parm; int ret = 0, err; struct tcf_nat *p; + u32 index; if (nla == NULL) return -EINVAL; @@ -56,13 +57,13 @@ static int tcf_nat_init(struct net *net, struct nlattr *nla, struct nlattr *est, if (tb[TCA_NAT_PARMS] == NULL) return -EINVAL; parm = nla_data(tb[TCA_NAT_PARMS]); - - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (!err) { - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_nat_ops, bind, false); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } ret = ACT_P_CREATED; diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c index ca535a8585bc893fb4df0ae1b2dbad85c0fd9052..82d258b2a75ab89262adda2f0d7c57a523f0b07a 100644 --- a/net/sched/act_pedit.c +++ b/net/sched/act_pedit.c @@ -149,6 +149,7 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla, struct tcf_pedit *p; int ret = 0, err; int ksize; + u32 index; if (!nla) { NL_SET_ERR_MSG_MOD(extack, "Pedit requires attributes to be passed"); @@ -178,18 +179,19 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla, if (IS_ERR(keys_ex)) return PTR_ERR(keys_ex); - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (!err) { if (!parm->nkeys) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); NL_SET_ERR_MSG_MOD(extack, "Pedit requires keys to be passed"); ret = -EINVAL; goto out_free; } - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_pedit_ops, bind, false); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); goto out_free; } ret = ACT_P_CREATED; diff --git a/net/sched/act_police.c b/net/sched/act_police.c index 5d8bfa878477e8e738be55d2c9b818423c3c8ccc..997c34db1491995326256666f7ffda6d708842f9 100644 --- a/net/sched/act_police.c +++ b/net/sched/act_police.c @@ -85,6 +85,7 @@ static int tcf_police_init(struct net *net, struct nlattr *nla, struct qdisc_rate_table *R_tab = NULL, *P_tab = NULL; struct tc_action_net *tn = net_generic(net, police_net_id); bool exists = false; + u32 index; int size; if (nla == NULL) @@ -101,7 +102,8 @@ static int tcf_police_init(struct net *net, struct nlattr *nla, return -EINVAL; parm = nla_data(tb[TCA_POLICE_TBF]); - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (err < 0) return err; exists = err; @@ -109,10 +111,10 @@ static int tcf_police_init(struct net *net, struct nlattr *nla, return 0; if (!exists) { - ret = tcf_idr_create(tn, parm->index, NULL, a, + ret = tcf_idr_create(tn, index, NULL, a, &act_police_ops, bind, false); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } ret = ACT_P_CREATED; diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c index c7f5d630d97cff5580c4d803cd1eac3f923849f8..ac37654ca2922dbcd80daa30f7447c7f54784020 100644 --- a/net/sched/act_sample.c +++ b/net/sched/act_sample.c @@ -43,7 +43,7 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla, struct tc_action_net *tn = net_generic(net, sample_net_id); struct nlattr *tb[TCA_SAMPLE_MAX + 1]; struct psample_group *psample_group; - u32 psample_group_num, rate; + u32 psample_group_num, rate, index; struct tc_sample *parm; struct tcf_sample 
*s; bool exists = false; @@ -59,8 +59,8 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla, return -EINVAL; parm = nla_data(tb[TCA_SAMPLE_PARMS]); - - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (err < 0) return err; exists = err; @@ -68,10 +68,10 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla, return 0; if (!exists) { - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_sample_ops, bind, true); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } ret = ACT_P_CREATED; diff --git a/net/sched/act_simple.c b/net/sched/act_simple.c index 52400d49f81f233a572de8cdffff9c94099e614a..658efae71a09de6f8194d9adb0d7e3c0e103044a 100644 --- a/net/sched/act_simple.c +++ b/net/sched/act_simple.c @@ -88,6 +88,7 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla, struct tcf_defact *d; bool exists = false; int ret = 0, err; + u32 index; if (nla == NULL) return -EINVAL; @@ -100,7 +101,8 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla, return -EINVAL; parm = nla_data(tb[TCA_DEF_PARMS]); - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (err < 0) return err; exists = err; @@ -111,15 +113,15 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla, if (exists) tcf_idr_release(*a, bind); else - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return -EINVAL; } if (!exists) { - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_simp_ops, bind, false); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c index 86d90fc5e97ea351aad3bbae6c03332163496cbc..7709710a41f7270241dc0306525745691a9a9e33 100644 --- a/net/sched/act_skbedit.c +++ b/net/sched/act_skbedit.c @@ -107,6 +107,7 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla, u16 *queue_mapping = NULL, *ptype = NULL; bool exists = false; int ret = 0, err; + u32 index; if (nla == NULL) return -EINVAL; @@ -153,8 +154,8 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla, } parm = nla_data(tb[TCA_SKBEDIT_PARMS]); - - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (err < 0) return err; exists = err; @@ -165,15 +166,15 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla, if (exists) tcf_idr_release(*a, bind); else - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return -EINVAL; } if (!exists) { - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_skbedit_ops, bind, true); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } diff --git a/net/sched/act_skbmod.c b/net/sched/act_skbmod.c index 588077fafd6cc58473b1d0de85b1aa3103b3bf69..3038493d18ca197e871e890a807d0fd07687baa3 100644 --- a/net/sched/act_skbmod.c +++ b/net/sched/act_skbmod.c @@ -88,12 +88,12 @@ static int tcf_skbmod_init(struct net *net, struct nlattr *nla, struct nlattr *tb[TCA_SKBMOD_MAX + 1]; struct tcf_skbmod_params *p, *p_old; struct tc_skbmod *parm; + u32 lflags = 0, index; struct tcf_skbmod *d; bool exists = false; u8 *daddr = NULL; u8 *saddr = NULL; u16 eth_type = 0; - u32 lflags = 0; int ret = 0, err; if (!nla) @@ 
-122,10 +122,11 @@ static int tcf_skbmod_init(struct net *net, struct nlattr *nla, } parm = nla_data(tb[TCA_SKBMOD_PARMS]); + index = parm->index; if (parm->flags & SKBMOD_F_SWAPMAC) lflags = SKBMOD_F_SWAPMAC; - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + err = tcf_idr_check_alloc(tn, &index, a, bind); if (err < 0) return err; exists = err; @@ -136,15 +137,15 @@ static int tcf_skbmod_init(struct net *net, struct nlattr *nla, if (exists) tcf_idr_release(*a, bind); else - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return -EINVAL; } if (!exists) { - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_skbmod_ops, bind, true); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c index 72d9c432e8b42693094bd11d524c86d34fd10bf5..66bfe57e74ae0553e64dcd92a8da00e30bda26fa 100644 --- a/net/sched/act_tunnel_key.c +++ b/net/sched/act_tunnel_key.c @@ -224,6 +224,7 @@ static int tunnel_key_init(struct net *net, struct nlattr *nla, __be16 flags; u8 tos, ttl; int ret = 0; + u32 index; int err; if (!nla) { @@ -244,7 +245,8 @@ static int tunnel_key_init(struct net *net, struct nlattr *nla, } parm = nla_data(tb[TCA_TUNNEL_KEY_PARMS]); - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (err < 0) return err; exists = err; @@ -338,7 +340,7 @@ static int tunnel_key_init(struct net *net, struct nlattr *nla, } if (!exists) { - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_tunnel_key_ops, bind, true); if (ret) { NL_SET_ERR_MSG(extack, "Cannot create TC IDR"); @@ -384,7 +386,7 @@ err_out: if (exists) tcf_idr_release(*a, bind); else - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } diff --git a/net/sched/act_vlan.c b/net/sched/act_vlan.c index 033d273afe50236a090fd4caddd0a8328ce82157..da993edd2e40b6612ec9f97cb60ab08d57d8a730 100644 --- a/net/sched/act_vlan.c +++ b/net/sched/act_vlan.c @@ -118,6 +118,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla, u8 push_prio = 0; bool exists = false; int ret = 0, err; + u32 index; if (!nla) return -EINVAL; @@ -129,7 +130,8 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla, if (!tb[TCA_VLAN_PARMS]) return -EINVAL; parm = nla_data(tb[TCA_VLAN_PARMS]); - err = tcf_idr_check_alloc(tn, &parm->index, a, bind); + index = parm->index; + err = tcf_idr_check_alloc(tn, &index, a, bind); if (err < 0) return err; exists = err; @@ -145,7 +147,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla, if (exists) tcf_idr_release(*a, bind); else - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return -EINVAL; } push_vid = nla_get_u16(tb[TCA_VLAN_PUSH_VLAN_ID]); @@ -153,7 +155,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla, if (exists) tcf_idr_release(*a, bind); else - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return -ERANGE; } @@ -167,7 +169,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla, if (exists) tcf_idr_release(*a, bind); else - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return -EPROTONOSUPPORT; } } else { @@ -181,16 +183,16 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla, if (exists) tcf_idr_release(*a, bind); else - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return -EINVAL; } action = 
parm->v_action; if (!exists) { - ret = tcf_idr_create(tn, parm->index, est, a, + ret = tcf_idr_create(tn, index, est, a, &act_vlan_ops, bind, true); if (ret) { - tcf_idr_cleanup(tn, parm->index); + tcf_idr_cleanup(tn, index); return ret; } @@ -296,6 +298,14 @@ static int tcf_vlan_search(struct net *net, struct tc_action **a, u32 index, return tcf_idr_search(tn, a, index); } +static size_t tcf_vlan_get_fill_size(const struct tc_action *act) +{ + return nla_total_size(sizeof(struct tc_vlan)) + + nla_total_size(sizeof(u16)) /* TCA_VLAN_PUSH_VLAN_ID */ + + nla_total_size(sizeof(u16)) /* TCA_VLAN_PUSH_VLAN_PROTOCOL */ + + nla_total_size(sizeof(u8)); /* TCA_VLAN_PUSH_VLAN_PRIORITY */ +} + static struct tc_action_ops act_vlan_ops = { .kind = "vlan", .type = TCA_ACT_VLAN, @@ -305,6 +315,7 @@ static struct tc_action_ops act_vlan_ops = { .init = tcf_vlan_init, .cleanup = tcf_vlan_cleanup, .walk = tcf_vlan_walker, + .get_fill_size = tcf_vlan_get_fill_size, .lookup = tcf_vlan_search, .size = sizeof(struct tcf_vlan), }; diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c index 2167c6ca55e3e1143aff352c65ec15ca92d5f80d..4159bcb479c6b1668d179226cbd36e70593f4efa 100644 --- a/net/sched/cls_api.c +++ b/net/sched/cls_api.c @@ -1325,6 +1325,9 @@ replay: tcf_chain_tp_insert(chain, &chain_info, tp); tfilter_notify(net, skb, n, tp, block, q, parent, fh, RTM_NEWTFILTER, false); + /* q pointer is NULL for shared blocks */ + if (q) + q->flags &= ~TCQ_F_CAN_BYPASS; } else { if (tp_created) tcf_proto_destroy(tp, NULL); diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c index 17cd81f84b5de9d9f53acf2c9078a7618927ddc5..77fae0b7c6ee1c04afd842fe04781c32b87c30f6 100644 --- a/net/sched/sch_codel.c +++ b/net/sched/sch_codel.c @@ -71,10 +71,10 @@ static struct sk_buff *dequeue_func(struct codel_vars *vars, void *ctx) struct Qdisc *sch = ctx; struct sk_buff *skb = __qdisc_dequeue_head(&sch->q); - if (skb) + if (skb) { sch->qstats.backlog -= qdisc_pkt_len(skb); - - prefetch(&skb->end); /* we'll need skb_shinfo() */ + prefetch(&skb->end); /* we'll need skb_shinfo() */ + } return skb; } diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c index 6c0a9d5dbf9441d00a832915e23d6b82bd8ab313..137692cb8b4f9b40f6d328ce370fe4b62a560e5b 100644 --- a/net/sched/sch_fq_codel.c +++ b/net/sched/sch_fq_codel.c @@ -600,8 +600,6 @@ static unsigned long fq_codel_find(struct Qdisc *sch, u32 classid) static unsigned long fq_codel_bind(struct Qdisc *sch, unsigned long parent, u32 classid) { - /* we cannot bypass queue discipline anymore */ - sch->flags &= ~TCQ_F_CAN_BYPASS; return 0; } diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c index 2f2678197760ab476c6f817d0429d4b7da1a56d8..650f2146385377944d192f9aa3aac3b26d05c441 100644 --- a/net/sched/sch_sfq.c +++ b/net/sched/sch_sfq.c @@ -828,8 +828,6 @@ static unsigned long sfq_find(struct Qdisc *sch, u32 classid) static unsigned long sfq_bind(struct Qdisc *sch, unsigned long parent, u32 classid) { - /* we cannot bypass queue discipline anymore */ - sch->flags &= ~TCQ_F_CAN_BYPASS; return 0; } diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c index 3131b4154c74d0d666698a9fa529003bdd0df280..28adac31f0ff0130f35d3e620877e25c2f39bbc9 100644 --- a/net/sctp/sm_sideeffect.c +++ b/net/sctp/sm_sideeffect.c @@ -561,7 +561,7 @@ static void sctp_do_8_2_transport_strike(struct sctp_cmd_seq *commands, */ if (net->sctp.pf_enable && (transport->state == SCTP_ACTIVE) && - (asoc->pf_retrans < transport->pathmaxrxt) && + (transport->error_count < transport->pathmaxrxt) && 
(transport->error_count > asoc->pf_retrans)) { sctp_assoc_control_transport(asoc, transport, diff --git a/net/sctp/socket.c b/net/sctp/socket.c index 8c00a7ef1bcda261018b9025672abd8b50749cd3..9f5b4e547b6312e9a9e4b339888f8e5242de5569 100644 --- a/net/sctp/socket.c +++ b/net/sctp/socket.c @@ -4507,34 +4507,18 @@ out_nounlock: static int sctp_connect(struct sock *sk, struct sockaddr *addr, int addr_len, int flags) { - struct inet_sock *inet = inet_sk(sk); struct sctp_af *af; - int err = 0; + int err = -EINVAL; lock_sock(sk); pr_debug("%s: sk:%p, sockaddr:%p, addr_len:%d\n", __func__, sk, addr, addr_len); - /* We may need to bind the socket. */ - if (!inet->inet_num) { - if (sk->sk_prot->get_port(sk, 0)) { - release_sock(sk); - return -EAGAIN; - } - inet->inet_sport = htons(inet->inet_num); - } - /* Validate addr_len before calling common connect/connectx routine. */ af = sctp_get_af_specific(addr->sa_family); - if (!af || addr_len < af->sockaddr_len) { - err = -EINVAL; - } else { - /* Pass correct addr len to common routine (so it knows there - * is only one address being passed. - */ + if (af && addr_len >= af->sockaddr_len) err = __sctp_connect(sk, addr, af->sockaddr_len, flags, NULL); - } release_sock(sk); return err; diff --git a/net/sctp/stream.c b/net/sctp/stream.c index 3b47457862ccdac668a8947f6191507d1b83ee68..87061a4bb44b6a9d4bf80de1ce4201d64f34192b 100644 --- a/net/sctp/stream.c +++ b/net/sctp/stream.c @@ -253,13 +253,20 @@ out: int sctp_stream_init_ext(struct sctp_stream *stream, __u16 sid) { struct sctp_stream_out_ext *soute; + int ret; soute = kzalloc(sizeof(*soute), GFP_KERNEL); if (!soute) return -ENOMEM; SCTP_SO(stream, sid)->ext = soute; - return sctp_sched_init_sid(stream, sid, GFP_KERNEL); + ret = sctp_sched_init_sid(stream, sid, GFP_KERNEL); + if (ret) { + kfree(SCTP_SO(stream, sid)->ext); + SCTP_SO(stream, sid)->ext = NULL; + } + + return ret; } void sctp_stream_free(struct sctp_stream *stream) @@ -409,6 +416,7 @@ int sctp_send_reset_streams(struct sctp_association *asoc, nstr_list[i] = htons(str_list[i]); if (out && !sctp_stream_outq_is_empty(stream, str_nums, nstr_list)) { + kfree(nstr_list); retval = -EAGAIN; goto out; } diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c index 9bbab6ba2dab0d9c2d685453371233576ba7e1a9..26dcd02b2d0cefbd2874edb1132f2e0d64dab97c 100644 --- a/net/smc/af_smc.c +++ b/net/smc/af_smc.c @@ -1680,14 +1680,18 @@ static int smc_setsockopt(struct socket *sock, int level, int optname, } break; case TCP_NODELAY: - if (sk->sk_state != SMC_INIT && sk->sk_state != SMC_LISTEN) { + if (sk->sk_state != SMC_INIT && + sk->sk_state != SMC_LISTEN && + sk->sk_state != SMC_CLOSED) { if (val && !smc->use_fallback) mod_delayed_work(system_wq, &smc->conn.tx_work, 0); } break; case TCP_CORK: - if (sk->sk_state != SMC_INIT && sk->sk_state != SMC_LISTEN) { + if (sk->sk_state != SMC_INIT && + sk->sk_state != SMC_LISTEN && + sk->sk_state != SMC_CLOSED) { if (!val && !smc->use_fallback) mod_delayed_work(system_wq, &smc->conn.tx_work, 0); diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c index d8366ed517576ba7f258ad74f0cf91f7fd92d7e0..28361aef998256a327a73f41fd0218492b205646 100644 --- a/net/smc/smc_tx.c +++ b/net/smc/smc_tx.c @@ -75,13 +75,11 @@ static int smc_tx_wait(struct smc_sock *smc, int flags) DEFINE_WAIT_FUNC(wait, woken_wake_function); struct smc_connection *conn = &smc->conn; struct sock *sk = &smc->sk; - bool noblock; long timeo; int rc = 0; /* similar to sk_stream_wait_memory */ timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT); - noblock = timeo ? 
false : true; add_wait_queue(sk_sleep(sk), &wait); while (1) { sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); @@ -96,8 +94,8 @@ static int smc_tx_wait(struct smc_sock *smc, int flags) break; } if (!timeo) { - if (noblock) - set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); + /* ensure EPOLLOUT is subsequently generated */ + set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); rc = -EAGAIN; break; } diff --git a/net/tipc/addr.c b/net/tipc/addr.c index b88d48d009130985db69993bc115c8cae80a71a9..0f1eaed1bd1b310833443cfb930e57d85dfa51a9 100644 --- a/net/tipc/addr.c +++ b/net/tipc/addr.c @@ -75,6 +75,7 @@ void tipc_set_node_addr(struct net *net, u32 addr) tipc_set_node_id(net, node_id); } tn->trial_addr = addr; + tn->addr_trial_end = jiffies; pr_info("32-bit node address hash set to %x\n", addr); } diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c index 85ebb675600c59981e4d847e5c5446e9630a43d6..318c541970ecd3a50e8dc9f7d9e300ce374be410 100644 --- a/net/tipc/netlink_compat.c +++ b/net/tipc/netlink_compat.c @@ -55,6 +55,7 @@ struct tipc_nl_compat_msg { int rep_type; int rep_size; int req_type; + int req_size; struct net *net; struct sk_buff *rep; struct tlv_desc *req; @@ -257,7 +258,8 @@ static int tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd, int err; struct sk_buff *arg; - if (msg->req_type && !TLV_CHECK_TYPE(msg->req, msg->req_type)) + if (msg->req_type && (!msg->req_size || + !TLV_CHECK_TYPE(msg->req, msg->req_type))) return -EINVAL; msg->rep = tipc_tlv_alloc(msg->rep_size); @@ -354,7 +356,8 @@ static int tipc_nl_compat_doit(struct tipc_nl_compat_cmd_doit *cmd, { int err; - if (msg->req_type && !TLV_CHECK_TYPE(msg->req, msg->req_type)) + if (msg->req_type && (!msg->req_size || + !TLV_CHECK_TYPE(msg->req, msg->req_type))) return -EINVAL; err = __tipc_nl_compat_doit(cmd, msg); @@ -1276,8 +1279,8 @@ static int tipc_nl_compat_recv(struct sk_buff *skb, struct genl_info *info) goto send; } - len = nlmsg_attrlen(req_nlh, GENL_HDRLEN + TIPC_GENL_HDRLEN); - if (!len || !TLV_OK(msg.req, len)) { + msg.req_size = nlmsg_attrlen(req_nlh, GENL_HDRLEN + TIPC_GENL_HDRLEN); + if (msg.req_size && !TLV_OK(msg.req, msg.req_size)) { msg.rep = tipc_get_err_tlv(TIPC_CFG_NOT_SUPPORTED); err = -EOPNOTSUPP; goto send; diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c index ead29c2aefa762b8aff13eaab72b55c2671e276e..0a613e0ef3bf9908c38575cb1137c700dd61aefd 100644 --- a/net/tls/tls_device.c +++ b/net/tls/tls_device.c @@ -61,7 +61,7 @@ static void tls_device_free_ctx(struct tls_context *ctx) if (ctx->rx_conf == TLS_HW) kfree(tls_offload_ctx_rx(ctx)); - kfree(ctx); + tls_ctx_free(ctx); } static void tls_device_gc_task(struct work_struct *work) diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c index 25b3fb585777b5aeb0f9c73788da814a709d556c..3288bdff9889457bf9c9caf5de6226c4ecd5a105 100644 --- a/net/tls/tls_main.c +++ b/net/tls/tls_main.c @@ -241,7 +241,7 @@ static void tls_write_space(struct sock *sk) ctx->sk_write_space(sk); } -static void tls_ctx_free(struct tls_context *ctx) +void tls_ctx_free(struct tls_context *ctx) { if (!ctx) return; @@ -301,6 +301,8 @@ static void tls_sk_proto_close(struct sock *sk, long timeout) #else { #endif + if (sk->sk_write_space == tls_write_space) + sk->sk_write_space = ctx->sk_write_space; tls_ctx_free(ctx); ctx = NULL; } diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c index 6848a81967118e5c4d905e1959f1f2efc49191f4..bbb2da70e8701a1231374c84532632ff0d3b181d 100644 --- a/net/tls/tls_sw.c +++ b/net/tls/tls_sw.c @@ -354,7 +354,7 @@ int tls_sw_sendmsg(struct sock *sk, struct 
msghdr *msg, size_t size) { struct tls_context *tls_ctx = tls_get_ctx(sk); struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx); - int ret = 0; + int ret; int required_size; long timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT); bool eor = !(msg->msg_flags & MSG_MORE); @@ -370,7 +370,8 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size) lock_sock(sk); - if (tls_complete_pending_work(sk, tls_ctx, msg->msg_flags, &timeo)) + ret = tls_complete_pending_work(sk, tls_ctx, msg->msg_flags, &timeo); + if (ret) goto send_end; if (unlikely(msg->msg_controllen)) { @@ -505,7 +506,7 @@ int tls_sw_sendpage(struct sock *sk, struct page *page, { struct tls_context *tls_ctx = tls_get_ctx(sk); struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx); - int ret = 0; + int ret; long timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT); bool eor; size_t orig_size = size; @@ -525,7 +526,8 @@ int tls_sw_sendpage(struct sock *sk, struct page *page, sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk); - if (tls_complete_pending_work(sk, tls_ctx, flags, &timeo)) + ret = tls_complete_pending_work(sk, tls_ctx, flags, &timeo); + if (ret) goto sendpage_end; /* Call the sk_stream functions to manage the sndbuf mem. */ diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c index ab27a2872935774d41fb1f2c2f9341eb67c8cc0a..2e30bf197583526d75a242188168c4e9f3ad7388 100644 --- a/net/vmw_vsock/af_vsock.c +++ b/net/vmw_vsock/af_vsock.c @@ -281,7 +281,8 @@ EXPORT_SYMBOL_GPL(vsock_insert_connected); void vsock_remove_bound(struct vsock_sock *vsk) { spin_lock_bh(&vsock_table_lock); - __vsock_remove_bound(vsk); + if (__vsock_in_bound_table(vsk)) + __vsock_remove_bound(vsk); spin_unlock_bh(&vsock_table_lock); } EXPORT_SYMBOL_GPL(vsock_remove_bound); @@ -289,7 +290,8 @@ EXPORT_SYMBOL_GPL(vsock_remove_bound); void vsock_remove_connected(struct vsock_sock *vsk) { spin_lock_bh(&vsock_table_lock); - __vsock_remove_connected(vsk); + if (__vsock_in_connected_table(vsk)) + __vsock_remove_connected(vsk); spin_unlock_bh(&vsock_table_lock); } EXPORT_SYMBOL_GPL(vsock_remove_connected); @@ -325,35 +327,10 @@ struct sock *vsock_find_connected_socket(struct sockaddr_vm *src, } EXPORT_SYMBOL_GPL(vsock_find_connected_socket); -static bool vsock_in_bound_table(struct vsock_sock *vsk) -{ - bool ret; - - spin_lock_bh(&vsock_table_lock); - ret = __vsock_in_bound_table(vsk); - spin_unlock_bh(&vsock_table_lock); - - return ret; -} - -static bool vsock_in_connected_table(struct vsock_sock *vsk) -{ - bool ret; - - spin_lock_bh(&vsock_table_lock); - ret = __vsock_in_connected_table(vsk); - spin_unlock_bh(&vsock_table_lock); - - return ret; -} - void vsock_remove_sock(struct vsock_sock *vsk) { - if (vsock_in_bound_table(vsk)) - vsock_remove_bound(vsk); - - if (vsock_in_connected_table(vsk)) - vsock_remove_connected(vsk); + vsock_remove_bound(vsk); + vsock_remove_connected(vsk); } EXPORT_SYMBOL_GPL(vsock_remove_sock); @@ -484,8 +461,7 @@ static void vsock_pending_work(struct work_struct *work) * incoming packets can't find this socket, and to reduce the reference * count. 
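Aside: the af_vsock.c hunks above fold the bound/connected-table membership check under vsock_table_lock itself, so vsock_remove_bound() and vsock_remove_connected() become idempotent and the separate check-then-remove helpers can go away. The following is a minimal userspace sketch of that check-and-remove-under-one-lock pattern; the struct, field, and function names are invented for illustration, and a pthread mutex stands in for vsock_table_lock.

#include <pthread.h>
#include <stdbool.h>

struct example_sock {
	bool in_bound_table;		/* stands in for list membership */
};

static pthread_mutex_t example_table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Safe to call repeatedly and from concurrent release paths. */
static void example_remove_bound(struct example_sock *vsk)
{
	pthread_mutex_lock(&example_table_lock);
	if (vsk->in_bound_table)	/* check and update under the same lock */
		vsk->in_bound_table = false;
	pthread_mutex_unlock(&example_table_lock);
}

Checking membership outside the lock, as the removed vsock_in_bound_table()-style helpers did, leaves a window in which another thread can change the table between the check and the removal.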
*/ - if (vsock_in_connected_table(vsk)) - vsock_remove_connected(vsk); + vsock_remove_connected(vsk); sk->sk_state = TCP_CLOSE; diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c index a827547aa102be4f3cf46a267c8c684e815e02d6..9c7da811d130f84800567ed13bb89ba0af4481b4 100644 --- a/net/vmw_vsock/hyperv_transport.c +++ b/net/vmw_vsock/hyperv_transport.c @@ -35,6 +35,9 @@ /* The MTU is 16KB per the host side's design */ #define HVS_MTU_SIZE (1024 * 16) +/* How long to wait for graceful shutdown of a connection */ +#define HVS_CLOSE_TIMEOUT (8 * HZ) + struct vmpipe_proto_header { u32 pkt_type; u32 data_size; @@ -217,18 +220,6 @@ static void hvs_set_channel_pending_send_size(struct vmbus_channel *chan) set_channel_pending_send_size(chan, HVS_PKT_LEN(HVS_SEND_BUF_SIZE)); - /* See hvs_stream_has_space(): we must make sure the host has seen - * the new pending send size, before we can re-check the writable - * bytes. - */ - virt_mb(); -} - -static void hvs_clear_channel_pending_send_size(struct vmbus_channel *chan) -{ - set_channel_pending_send_size(chan, 0); - - /* Ditto */ virt_mb(); } @@ -298,26 +289,36 @@ static void hvs_channel_cb(void *ctx) if (hvs_channel_readable(chan)) sk->sk_data_ready(sk); - /* See hvs_stream_has_space(): when we reach here, the writable bytes - * may be already less than HVS_PKT_LEN(HVS_SEND_BUF_SIZE). - */ if (hv_get_bytes_to_write(&chan->outbound) > 0) sk->sk_write_space(sk); } -static void hvs_close_connection(struct vmbus_channel *chan) +static void hvs_do_close_lock_held(struct vsock_sock *vsk, + bool cancel_timeout) { - struct sock *sk = get_per_channel_state(chan); - struct vsock_sock *vsk = vsock_sk(sk); - - lock_sock(sk); + struct sock *sk = sk_vsock(vsk); - sk->sk_state = TCP_CLOSE; sock_set_flag(sk, SOCK_DONE); - vsk->peer_shutdown |= SEND_SHUTDOWN | RCV_SHUTDOWN; - + vsk->peer_shutdown = SHUTDOWN_MASK; + if (vsock_stream_has_data(vsk) <= 0) + sk->sk_state = TCP_CLOSING; sk->sk_state_change(sk); + if (vsk->close_work_scheduled && + (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) { + vsk->close_work_scheduled = false; + vsock_remove_sock(vsk); + + /* Release the reference taken while scheduling the timeout */ + sock_put(sk); + } +} +static void hvs_close_connection(struct vmbus_channel *chan) +{ + struct sock *sk = get_per_channel_state(chan); + + lock_sock(sk); + hvs_do_close_lock_held(vsock_sk(sk), true); release_sock(sk); } @@ -328,8 +329,9 @@ static void hvs_open_connection(struct vmbus_channel *chan) struct sockaddr_vm addr; struct sock *sk, *new = NULL; - struct vsock_sock *vnew; - struct hvsock *hvs, *hvs_new; + struct vsock_sock *vnew = NULL; + struct hvsock *hvs = NULL; + struct hvsock *hvs_new = NULL; int ret; if_type = &chan->offermsg.offer.if_type; @@ -388,6 +390,13 @@ static void hvs_open_connection(struct vmbus_channel *chan) set_per_channel_state(chan, conn_from_host ? new : sk); vmbus_set_chn_rescind_callback(chan, hvs_close_connection); + /* Set the pending send size to max packet size to always get + * notifications from the host when there is enough writable space. + * The host is optimized to send notifications only when the pending + * size boundary is crossed, and not always. 
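Aside: the hyperv_transport.c change above programs the channel's pending-send-size watermark once at connection setup, instead of setting and clearing it from the space-query path. The toy sketch below mirrors that split on an invented ring-buffer type; all names here are made up for the example and do not come from the VMBus API.

#include <stddef.h>

struct example_ring {
	size_t size;			/* total bytes in the ring */
	size_t used;			/* bytes currently queued */
	size_t pending_send_size;	/* host notifies writer once this much is free */
};

/* Done once when the connection is opened, as hvs_open_connection() now does. */
static void example_open(struct example_ring *r, size_t max_pkt_len)
{
	r->pending_send_size = max_pkt_len;
}

/* The space query no longer has to toggle the watermark and re-check. */
static size_t example_writable_bytes(const struct example_ring *r)
{
	return r->size - r->used;
}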
+ */ + hvs_set_channel_pending_send_size(chan); + if (conn_from_host) { new->sk_state = TCP_ESTABLISHED; sk->sk_ack_backlog++; @@ -452,50 +461,80 @@ static int hvs_connect(struct vsock_sock *vsk) return vmbus_send_tl_connect_request(&h->vm_srv_id, &h->host_srv_id); } +static void hvs_shutdown_lock_held(struct hvsock *hvs, int mode) +{ + struct vmpipe_proto_header hdr; + + if (hvs->fin_sent || !hvs->chan) + return; + + /* It can't fail: see hvs_channel_writable_bytes(). */ + (void)hvs_send_data(hvs->chan, (struct hvs_send_buf *)&hdr, 0); + hvs->fin_sent = true; +} + static int hvs_shutdown(struct vsock_sock *vsk, int mode) { struct sock *sk = sk_vsock(vsk); - struct vmpipe_proto_header hdr; - struct hvs_send_buf *send_buf; - struct hvsock *hvs; if (!(mode & SEND_SHUTDOWN)) return 0; lock_sock(sk); + hvs_shutdown_lock_held(vsk->trans, mode); + release_sock(sk); + return 0; +} - hvs = vsk->trans; - if (hvs->fin_sent) - goto out; - - send_buf = (struct hvs_send_buf *)&hdr; +static void hvs_close_timeout(struct work_struct *work) +{ + struct vsock_sock *vsk = + container_of(work, struct vsock_sock, close_work.work); + struct sock *sk = sk_vsock(vsk); - /* It can't fail: see hvs_channel_writable_bytes(). */ - (void)hvs_send_data(hvs->chan, send_buf, 0); + sock_hold(sk); + lock_sock(sk); + if (!sock_flag(sk, SOCK_DONE)) + hvs_do_close_lock_held(vsk, false); - hvs->fin_sent = true; -out: + vsk->close_work_scheduled = false; release_sock(sk); - return 0; + sock_put(sk); } -static void hvs_release(struct vsock_sock *vsk) +/* Returns true, if it is safe to remove socket; false otherwise */ +static bool hvs_close_lock_held(struct vsock_sock *vsk) { struct sock *sk = sk_vsock(vsk); - struct hvsock *hvs = vsk->trans; - struct vmbus_channel *chan; - lock_sock(sk); + if (!(sk->sk_state == TCP_ESTABLISHED || + sk->sk_state == TCP_CLOSING)) + return true; - sk->sk_state = TCP_CLOSING; - vsock_remove_sock(vsk); + if ((sk->sk_shutdown & SHUTDOWN_MASK) != SHUTDOWN_MASK) + hvs_shutdown_lock_held(vsk->trans, SHUTDOWN_MASK); - release_sock(sk); + if (sock_flag(sk, SOCK_DONE)) + return true; - chan = hvs->chan; - if (chan) - hvs_shutdown(vsk, RCV_SHUTDOWN | SEND_SHUTDOWN); + /* This reference will be dropped by the delayed close routine */ + sock_hold(sk); + INIT_DELAYED_WORK(&vsk->close_work, hvs_close_timeout); + vsk->close_work_scheduled = true; + schedule_delayed_work(&vsk->close_work, HVS_CLOSE_TIMEOUT); + return false; +} + +static void hvs_release(struct vsock_sock *vsk) +{ + struct sock *sk = sk_vsock(vsk); + bool remove_sock; + lock_sock(sk); + remove_sock = hvs_close_lock_held(vsk); + release_sock(sk); + if (remove_sock) + vsock_remove_sock(vsk); } static void hvs_destruct(struct vsock_sock *vsk) @@ -651,23 +690,8 @@ static s64 hvs_stream_has_data(struct vsock_sock *vsk) static s64 hvs_stream_has_space(struct vsock_sock *vsk) { struct hvsock *hvs = vsk->trans; - struct vmbus_channel *chan = hvs->chan; - s64 ret; - ret = hvs_channel_writable_bytes(chan); - if (ret > 0) { - hvs_clear_channel_pending_send_size(chan); - } else { - /* See hvs_channel_cb() */ - hvs_set_channel_pending_send_size(chan); - - /* Re-check the writable bytes to avoid race */ - ret = hvs_channel_writable_bytes(chan); - if (ret > 0) - hvs_clear_channel_pending_send_size(chan); - } - - return ret; + return hvs_channel_writable_bytes(hvs->chan); } static u64 hvs_stream_rcvhiwat(struct vsock_sock *vsk) diff --git a/net/wireless/reg.c b/net/wireless/reg.c index 
8a47297ff206d20385ed95b54abb3ca2532edb33..d8ebf4f0ef6e240c882eca2715f6d6646e34edea 100644
--- a/net/wireless/reg.c
+++ b/net/wireless/reg.c
@@ -2777,7 +2777,7 @@ static void reg_process_pending_hints(void)
 
 	/* When last_request->processed becomes true this will be rescheduled */
 	if (lr && !lr->processed) {
-		reg_process_hint(lr);
+		pr_debug("Pending regulatory request, waiting for it to be processed...\n");
 		return;
 	}
 
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 8a64b150be546d2d5903bbc62e1b32525e73d992..fe96c0d039f2fae806584c3a389d6d4b7dca0a57 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -239,7 +239,7 @@ static inline void xskq_produce_flush_desc(struct xsk_queue *q)
 	/* Order producer and data */
 	smp_wmb();
 
-	q->prod_tail = q->prod_head,
+	q->prod_tail = q->prod_head;
 	WRITE_ONCE(q->ring->producer, q->prod_tail);
 }
 
diff --git a/net/xfrm/Kconfig b/net/xfrm/Kconfig
index 4a9ee2d83158ba87a4da985af1020faae8c440b7..372c91faa2834d520857e4bd94a52f7ee8547a7a 100644
--- a/net/xfrm/Kconfig
+++ b/net/xfrm/Kconfig
@@ -14,6 +14,8 @@ config XFRM_ALGO
 	tristate
 	select XFRM
 	select CRYPTO
+	select CRYPTO_HASH
+	select CRYPTO_BLKCIPHER
 
 config XFRM_USER
 	tristate "Transformation user configuration interface"
diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index 2122f89f615550dc7c38bf5e545b8989340c55f7..1484bc99a53756d2ac306d87445c7ddf2b8c3179 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -150,6 +150,25 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
 
 	err = -EINVAL;
 	switch (p->family) {
+	case AF_INET:
+		break;
+
+	case AF_INET6:
+#if IS_ENABLED(CONFIG_IPV6)
+		break;
+#else
+		err = -EAFNOSUPPORT;
+		goto out;
+#endif
+
+	default:
+		goto out;
+	}
+
+	switch (p->sel.family) {
+	case AF_UNSPEC:
+		break;
+
 	case AF_INET:
 		if (p->sel.prefixlen_d > 32 || p->sel.prefixlen_s > 32)
 			goto out;
diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include
index dad5583451afba96b2de211309fa898c9e69eb3e..3b2861f47709b46aa6f50218a8cf644e9423bf82 100644
--- a/scripts/Kconfig.include
+++ b/scripts/Kconfig.include
@@ -20,7 +20,7 @@ success = $(if-success,$(1),y,n)
 
 # $(cc-option,<flag>)
 # Return y if the compiler supports <flag>, n otherwise
-cc-option = $(success,$(CC) -Werror $(1) -E -x c /dev/null -o /dev/null)
+cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /dev/null)
 
 # $(ld-option,<flag>)
 # Return y if the linker supports <flag>, n otherwise
diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
index 7d4af0d0accb34a990098a80062d289168c8fe5e..51884c7b80697911c56fb81ba168d2d2c93a7d8b 100644
--- a/scripts/Makefile.modpost
+++ b/scripts/Makefile.modpost
@@ -75,7 +75,7 @@ modpost = scripts/mod/modpost \
 	$(if $(CONFIG_MODULE_SRCVERSION_ALL),-a,) \
 	$(if $(KBUILD_EXTMOD),-i,-o) $(kernelsymfile) \
 	$(if $(KBUILD_EXTMOD),-I $(modulesymfile)) \
-	$(if $(KBUILD_EXTRA_SYMBOLS), $(patsubst %, -e %,$(KBUILD_EXTRA_SYMBOLS))) \
+	$(if $(KBUILD_EXTMOD),$(addprefix -e ,$(KBUILD_EXTRA_SYMBOLS))) \
 	$(if $(KBUILD_EXTMOD),-o $(modulesymfile)) \
 	$(if $(CONFIG_DEBUG_SECTION_MISMATCH),,-S) \
 	$(if $(CONFIG_SECTION_MISMATCH_WARN_ONLY),,-E) \
diff --git a/scripts/genksyms/keywords.c b/scripts/genksyms/keywords.c
index 9f40bcd17d07f5bbeb1ab2fa0793d632116ae06e..f6956aa41366885d9f2ef370a7090433a6ba81fb 100644
--- a/scripts/genksyms/keywords.c
+++ b/scripts/genksyms/keywords.c
@@ -24,6 +24,10 @@ static struct resword {
 	{ "__volatile__", VOLATILE_KEYW },
 	{ "__builtin_va_list", VA_LIST_KEYW },
 
+	{ "__int128", BUILTIN_INT_KEYW },
+	{ "__int128_t", BUILTIN_INT_KEYW },
+	{ 
"__uint128_t", BUILTIN_INT_KEYW }, + // According to rth, c99 defines "_Bool", __restrict", __restrict__", "restrict". KAO { "_Bool", BOOL_KEYW }, { "_restrict", RESTRICT_KEYW }, diff --git a/scripts/genksyms/parse.y b/scripts/genksyms/parse.y index 00a6d7e5497126147dc0d9a564c4a6525fff6079..1ebcf52cd0f9e0402624b748d3818c27d40384ee 100644 --- a/scripts/genksyms/parse.y +++ b/scripts/genksyms/parse.y @@ -76,6 +76,7 @@ static void record_compound(struct string_list **keyw, %token ATTRIBUTE_KEYW %token AUTO_KEYW %token BOOL_KEYW +%token BUILTIN_INT_KEYW %token CHAR_KEYW %token CONST_KEYW %token DOUBLE_KEYW @@ -263,6 +264,7 @@ simple_type_specifier: | VOID_KEYW | BOOL_KEYW | VA_LIST_KEYW + | BUILTIN_INT_KEYW | TYPE { (*$1)->tag = SYM_TYPEDEF; $$ = $1; } ; diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c index 0c9c54b57515e0e6dd1417a251c19ffd79d44cd0..31ed7f3f0e1574c230aca5ccbef9d18a54667272 100644 --- a/scripts/kallsyms.c +++ b/scripts/kallsyms.c @@ -152,6 +152,9 @@ static int read_symbol(FILE *in, struct sym_entry *s) /* exclude debugging symbols */ else if (stype == 'N' || stype == 'n') return -1; + /* exclude s390 kasan local symbols */ + else if (!strncmp(sym, ".LASANPC", 8)) + return -1; /* include the type field in the symbol name, so that it gets * compressed together */ diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c index 91d0a5c014acd3a17cecad8f486ff90f59e1211c..0dde19cf748655b174b89321423eab09463e5e7f 100644 --- a/scripts/kconfig/confdata.c +++ b/scripts/kconfig/confdata.c @@ -784,6 +784,7 @@ int conf_write(const char *name) const char *str; char dirname[PATH_MAX+1], tmpname[PATH_MAX+22], newname[PATH_MAX+8]; char *env; + int i; dirname[0] = 0; if (name && name[0]) { @@ -834,11 +835,12 @@ int conf_write(const char *name) "#\n" "# %s\n" "#\n", str); - } else if (!(sym->flags & SYMBOL_CHOICE)) { + } else if (!(sym->flags & SYMBOL_CHOICE) && + !(sym->flags & SYMBOL_WRITTEN)) { sym_calc_value(sym); if (!(sym->flags & SYMBOL_WRITE)) goto next; - sym->flags &= ~SYMBOL_WRITE; + sym->flags |= SYMBOL_WRITTEN; conf_write_symbol(out, sym, &kconfig_printer_cb, NULL); } @@ -859,6 +861,9 @@ next: } fclose(out); + for_all_symbols(i, sym) + sym->flags &= ~SYMBOL_WRITTEN; + if (*tmpname) { strcat(dirname, basename); strcat(dirname, ".old"); @@ -1024,8 +1029,6 @@ int conf_write_autoconf(int overwrite) if (!overwrite && is_present(autoconf_name)) return 0; - sym_clear_all_valid(); - conf_write_dep("include/config/auto.conf.cmd"); if (conf_split_config()) diff --git a/scripts/kconfig/expr.h b/scripts/kconfig/expr.h index 7c329e179007a1e7cf92f012eee74f9ccb2f85d1..43a87f8ea738cbb90ec68d0a8c541a1c636c687f 100644 --- a/scripts/kconfig/expr.h +++ b/scripts/kconfig/expr.h @@ -141,6 +141,7 @@ struct symbol { #define SYMBOL_OPTIONAL 0x0100 /* choice is optional - values can be 'n' */ #define SYMBOL_WRITE 0x0200 /* write symbol to file (KCONFIG_CONFIG) */ #define SYMBOL_CHANGED 0x0400 /* ? 
*/ +#define SYMBOL_WRITTEN 0x0800 /* track info to avoid double-write to .config */ #define SYMBOL_NO_WRITE 0x1000 /* Symbol for internal use only; it will not be written */ #define SYMBOL_CHECKED 0x2000 /* used during dependency checking */ #define SYMBOL_WARNED 0x8000 /* warning has been issued */ diff --git a/scripts/recordmcount.h b/scripts/recordmcount.h index 2e7793735e145157d5e3cf849679ed70ed2924a7..ccfbfde615563a7e6738c7d5b33fc725f79a1d51 100644 --- a/scripts/recordmcount.h +++ b/scripts/recordmcount.h @@ -326,7 +326,8 @@ static uint_t *sift_rel_mcount(uint_t *mlocp, if (!mcountsym) mcountsym = get_mcountsym(sym0, relp, str0); - if (mcountsym == Elf_r_sym(relp) && !is_fake_mcount(relp)) { + if (mcountsym && mcountsym == Elf_r_sym(relp) && + !is_fake_mcount(relp)) { uint_t const addend = _w(_w(relp->r_offset) - recval + mcount_adjust); mrelp->r_offset = _w(offbase diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install index 067459760a7b048d4b53c28ac476189ac5884c21..3524dbc313163e70d05a5bda0b0a31d41668502d 100755 --- a/scripts/sphinx-pre-install +++ b/scripts/sphinx-pre-install @@ -301,7 +301,7 @@ sub give_redhat_hints() # # Checks valid for RHEL/CentOS version 7.x. # - if (! $system_release =~ /Fedora/) { + if (!($system_release =~ /Fedora/)) { $map{"virtualenv"} = "python-virtualenv"; } diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c index 70bad15ed7a0f5a1572797718ab5d8a437552b7c..109ab510bdb13bc9d21753f2ea100f0411a54df5 100644 --- a/security/selinux/hooks.c +++ b/security/selinux/hooks.c @@ -6550,11 +6550,12 @@ static int selinux_setprocattr(const char *name, void *value, size_t size) } else if (!strcmp(name, "fscreate")) { tsec->create_sid = sid; } else if (!strcmp(name, "keycreate")) { - error = avc_has_perm(&selinux_state, - mysid, sid, SECCLASS_KEY, KEY__CREATE, - NULL); - if (error) - goto abort_change; + if (sid) { + error = avc_has_perm(&selinux_state, mysid, sid, + SECCLASS_KEY, KEY__CREATE, NULL); + if (error) + goto abort_change; + } tsec->keycreate_sid = sid; } else if (!strcmp(name, "sockcreate")) { tsec->sockcreate_sid = sid; diff --git a/security/selinux/ss/policydb.c b/security/selinux/ss/policydb.c index d31a52e56b9ecaa97b3dfd37de21a8e7728f49df..91d259c87d10c2135869937c58cad361745eda85 100644 --- a/security/selinux/ss/policydb.c +++ b/security/selinux/ss/policydb.c @@ -275,6 +275,8 @@ static int rangetr_cmp(struct hashtab *h, const void *k1, const void *k2) return v; } +static int (*destroy_f[SYM_NUM]) (void *key, void *datum, void *datap); + /* * Initialize a policy database structure. 
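Aside: the destroy_f[] forward declaration above lets the policydb_init() error path (next hunk) run each symbol table's destructor over the entries inserted so far before freeing the tables themselves. A generic sketch of that "destroy what was built, then free the container" cleanup follows, with invented names and a trivial array-backed table standing in for hashtab.

#include <stdlib.h>

#define EXAMPLE_NTABLES 3

typedef void (*example_destroy_fn)(void *entry);

struct example_table {
	void **slots;
	unsigned int nr;
};

static void example_cleanup(struct example_table *tabs,
			    const example_destroy_fn *destroy)
{
	unsigned int i, j;

	for (i = 0; i < EXAMPLE_NTABLES; i++) {
		/* run the per-type destructor on every inserted entry... */
		for (j = 0; j < tabs[i].nr; j++)
			destroy[i](tabs[i].slots[j]);
		/* ...then release the table storage itself */
		free(tabs[i].slots);
		tabs[i].slots = NULL;
		tabs[i].nr = 0;
	}
}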
*/ @@ -322,8 +324,10 @@ static int policydb_init(struct policydb *p) out: hashtab_destroy(p->filename_trans); hashtab_destroy(p->range_tr); - for (i = 0; i < SYM_NUM; i++) + for (i = 0; i < SYM_NUM; i++) { + hashtab_map(p->symtab[i].table, destroy_f[i], NULL); hashtab_destroy(p->symtab[i].table); + } return rc; } diff --git a/sound/ac97/bus.c b/sound/ac97/bus.c index 9cbf6927abe96a735f0e188cfd735710c1d989ad..ca50ff4447964e59ff556650c18d4345f47af2a3 100644 --- a/sound/ac97/bus.c +++ b/sound/ac97/bus.c @@ -125,17 +125,12 @@ static int ac97_codec_add(struct ac97_controller *ac97_ctrl, int idx, vendor_id); ret = device_add(&codec->dev); - if (ret) - goto err_free_codec; + if (ret) { + put_device(&codec->dev); + return ret; + } return 0; -err_free_codec: - of_node_put(codec->dev.of_node); - put_device(&codec->dev); - kfree(codec); - ac97_ctrl->codecs[idx] = NULL; - - return ret; } unsigned int snd_ac97_bus_scan_one(struct ac97_controller *adrv, diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c index 8b78ddffa509ab2598f17219c11792307cc50357..516ec35873256e4e598c150473fc5affabe78a20 100644 --- a/sound/core/compress_offload.c +++ b/sound/core/compress_offload.c @@ -575,10 +575,7 @@ snd_compr_set_params(struct snd_compr_stream *stream, unsigned long arg) stream->metadata_set = false; stream->next_track = false; - if (stream->direction == SND_COMPRESS_PLAYBACK) - stream->runtime->state = SNDRV_PCM_STATE_SETUP; - else - stream->runtime->state = SNDRV_PCM_STATE_PREPARED; + stream->runtime->state = SNDRV_PCM_STATE_SETUP; } else { return -EPERM; } @@ -694,8 +691,17 @@ static int snd_compr_start(struct snd_compr_stream *stream) { int retval; - if (stream->runtime->state != SNDRV_PCM_STATE_PREPARED) + switch (stream->runtime->state) { + case SNDRV_PCM_STATE_SETUP: + if (stream->direction != SND_COMPRESS_CAPTURE) + return -EPERM; + break; + case SNDRV_PCM_STATE_PREPARED: + break; + default: return -EPERM; + } + retval = stream->ops->trigger(stream, SNDRV_PCM_TRIGGER_START); if (!retval) stream->runtime->state = SNDRV_PCM_STATE_RUNNING; @@ -706,9 +712,15 @@ static int snd_compr_stop(struct snd_compr_stream *stream) { int retval; - if (stream->runtime->state == SNDRV_PCM_STATE_PREPARED || - stream->runtime->state == SNDRV_PCM_STATE_SETUP) + switch (stream->runtime->state) { + case SNDRV_PCM_STATE_OPEN: + case SNDRV_PCM_STATE_SETUP: + case SNDRV_PCM_STATE_PREPARED: return -EPERM; + default: + break; + } + retval = stream->ops->trigger(stream, SNDRV_PCM_TRIGGER_STOP); if (!retval) { snd_compr_drain_notify(stream); @@ -796,9 +808,17 @@ static int snd_compr_drain(struct snd_compr_stream *stream) { int retval; - if (stream->runtime->state == SNDRV_PCM_STATE_PREPARED || - stream->runtime->state == SNDRV_PCM_STATE_SETUP) + switch (stream->runtime->state) { + case SNDRV_PCM_STATE_OPEN: + case SNDRV_PCM_STATE_SETUP: + case SNDRV_PCM_STATE_PREPARED: + case SNDRV_PCM_STATE_PAUSED: return -EPERM; + case SNDRV_PCM_STATE_XRUN: + return -EPIPE; + default: + break; + } retval = stream->ops->trigger(stream, SND_COMPR_TRIGGER_DRAIN); if (retval) { @@ -818,6 +838,10 @@ static int snd_compr_next_track(struct snd_compr_stream *stream) if (stream->runtime->state != SNDRV_PCM_STATE_RUNNING) return -EPERM; + /* next track doesn't have any meaning for capture streams */ + if (stream->direction == SND_COMPRESS_CAPTURE) + return -EPERM; + /* you can signal next track if this is intended to be a gapless stream * and current track metadata is set */ @@ -835,9 +859,23 @@ static int snd_compr_next_track(struct 
snd_compr_stream *stream) static int snd_compr_partial_drain(struct snd_compr_stream *stream) { int retval; - if (stream->runtime->state == SNDRV_PCM_STATE_PREPARED || - stream->runtime->state == SNDRV_PCM_STATE_SETUP) + + switch (stream->runtime->state) { + case SNDRV_PCM_STATE_OPEN: + case SNDRV_PCM_STATE_SETUP: + case SNDRV_PCM_STATE_PREPARED: + case SNDRV_PCM_STATE_PAUSED: + return -EPERM; + case SNDRV_PCM_STATE_XRUN: + return -EPIPE; + default: + break; + } + + /* partial drain doesn't have any meaning for capture streams */ + if (stream->direction == SND_COMPRESS_CAPTURE) return -EPERM; + /* stream can be drained only when next track has been signalled */ if (stream->next_track == false) return -EPERM; diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c index f59e13c1d84a8c8ab3ac478087bec2adc66b247e..bd3d68e0489dd9d7ef28643f40d41ddec49c4659 100644 --- a/sound/core/seq/seq_clientmgr.c +++ b/sound/core/seq/seq_clientmgr.c @@ -1004,7 +1004,7 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf, { struct snd_seq_client *client = file->private_data; int written = 0, len; - int err; + int err, handled; struct snd_seq_event event; if (!(snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_OUTPUT)) @@ -1017,6 +1017,8 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf, if (!client->accept_output || client->pool == NULL) return -ENXIO; + repeat: + handled = 0; /* allocate the pool now if the pool is not allocated yet */ mutex_lock(&client->ioctl_mutex); if (client->pool->size > 0 && !snd_seq_write_pool_allocated(client)) { @@ -1076,12 +1078,19 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf, 0, 0, &client->ioctl_mutex); if (err < 0) break; + handled++; __skip_event: /* Update pointers and counts */ count -= len; buf += len; written += len; + + /* let's have a coffee break if too many events are queued */ + if (++handled >= 200) { + mutex_unlock(&client->ioctl_mutex); + goto repeat; + } } out: @@ -1809,8 +1818,7 @@ static int snd_seq_ioctl_get_client_pool(struct snd_seq_client *client, if (cptr->type == USER_CLIENT) { info->input_pool = cptr->data.user.fifo_pool_size; info->input_free = info->input_pool; - if (cptr->data.user.fifo) - info->input_free = snd_seq_unused_cells(cptr->data.user.fifo->pool); + info->input_free = snd_seq_fifo_unused_cells(cptr->data.user.fifo); } else { info->input_pool = 0; info->input_free = 0; diff --git a/sound/core/seq/seq_fifo.c b/sound/core/seq/seq_fifo.c index 72c0302a55d23c05720d6062bef600b40fec6971..6a24732704fcf980b7c580bcf3ceadd3052304e2 100644 --- a/sound/core/seq/seq_fifo.c +++ b/sound/core/seq/seq_fifo.c @@ -280,3 +280,20 @@ int snd_seq_fifo_resize(struct snd_seq_fifo *f, int poolsize) return 0; } + +/* get the number of unused cells safely */ +int snd_seq_fifo_unused_cells(struct snd_seq_fifo *f) +{ + unsigned long flags; + int cells; + + if (!f) + return 0; + + snd_use_lock_use(&f->use_lock); + spin_lock_irqsave(&f->lock, flags); + cells = snd_seq_unused_cells(f->pool); + spin_unlock_irqrestore(&f->lock, flags); + snd_use_lock_free(&f->use_lock); + return cells; +} diff --git a/sound/core/seq/seq_fifo.h b/sound/core/seq/seq_fifo.h index 062c446e786722de5aa1936a4c187c74be660d94..5d38a0d7f0cd673a46becc883261ef25adf6005d 100644 --- a/sound/core/seq/seq_fifo.h +++ b/sound/core/seq/seq_fifo.h @@ -68,5 +68,7 @@ int snd_seq_fifo_poll_wait(struct snd_seq_fifo *f, struct file *file, poll_table /* resize pool in fifo */ int snd_seq_fifo_resize(struct snd_seq_fifo *f, int 
poolsize); +/* get the number of unused cells safely */ +int snd_seq_fifo_unused_cells(struct snd_seq_fifo *f); #endif diff --git a/sound/firewire/packets-buffer.c b/sound/firewire/packets-buffer.c index 1ebf00c83409666579b2a35d9b05836dd2f745bf..715cd99f28de8030e693ba0602827d1eadbe40a5 100644 --- a/sound/firewire/packets-buffer.c +++ b/sound/firewire/packets-buffer.c @@ -37,7 +37,7 @@ int iso_packets_buffer_init(struct iso_packets_buffer *b, struct fw_unit *unit, packets_per_page = PAGE_SIZE / packet_size; if (WARN_ON(!packets_per_page)) { err = -EINVAL; - goto error; + goto err_packets; } pages = DIV_ROUND_UP(count, packets_per_page); diff --git a/sound/hda/hdac_i915.c b/sound/hda/hdac_i915.c index 27eb0270a711f4db55511d6924db13656bc67276..3847fe841d33bd85d7c3084d2d58a11bc5bc4329 100644 --- a/sound/hda/hdac_i915.c +++ b/sound/hda/hdac_i915.c @@ -143,10 +143,12 @@ int snd_hdac_i915_init(struct hdac_bus *bus) if (!acomp) return -ENODEV; if (!acomp->ops) { - request_module("i915"); - /* 60s timeout */ - wait_for_completion_timeout(&bind_complete, - msecs_to_jiffies(60 * 1000)); + if (!IS_ENABLED(CONFIG_MODULES) || + !request_module("i915")) { + /* 60s timeout */ + wait_for_completion_timeout(&bind_complete, + msecs_to_jiffies(60 * 1000)); + } } if (!acomp->ops) { dev_info(bus->dev, "couldn't bind with audio component\n"); diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c index a12e594d4e3b3a23d78cc0b75531b845c7b6e331..a41c1bec7c88cf308de2e3ed0cd070f395fdad5d 100644 --- a/sound/pci/hda/hda_controller.c +++ b/sound/pci/hda/hda_controller.c @@ -609,11 +609,9 @@ static int azx_pcm_open(struct snd_pcm_substream *substream) } runtime->private_data = azx_dev; - if (chip->gts_present) - azx_pcm_hw.info = azx_pcm_hw.info | - SNDRV_PCM_INFO_HAS_LINK_SYNCHRONIZED_ATIME; - runtime->hw = azx_pcm_hw; + if (chip->gts_present) + runtime->hw.info |= SNDRV_PCM_INFO_HAS_LINK_SYNCHRONIZED_ATIME; runtime->hw.channels_min = hinfo->channels_min; runtime->hw.channels_max = hinfo->channels_max; runtime->hw.formats = hinfo->formats; @@ -626,6 +624,13 @@ static int azx_pcm_open(struct snd_pcm_substream *substream) 20, 178000000); + /* by some reason, the playback stream stalls on PulseAudio with + * tsched=1 when a capture stream triggers. Until we figure out the + * real cause, disable tsched mode by telling the PCM info flag. + */ + if (chip->driver_caps & AZX_DCAPS_AMD_WORKAROUND) + runtime->hw.info |= SNDRV_PCM_INFO_BATCH; + if (chip->align_buffer_size) /* constrain buffer sizes to be multiple of 128 bytes. 
This is more efficient in terms of memory diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h index 53c3cd28bc9952be2bc898df3e9faec7b3811171..8a9dd4767b1ecb65c4ab4e801c29e1ff9f188b9e 100644 --- a/sound/pci/hda/hda_controller.h +++ b/sound/pci/hda/hda_controller.h @@ -40,7 +40,7 @@ /* 14 unused */ #define AZX_DCAPS_CTX_WORKAROUND (1 << 15) /* X-Fi workaround */ #define AZX_DCAPS_POSFIX_LPIB (1 << 16) /* Use LPIB as default */ -/* 17 unused */ +#define AZX_DCAPS_AMD_WORKAROUND (1 << 17) /* AMD-specific workaround */ #define AZX_DCAPS_NO_64BIT (1 << 18) /* No 64bit address */ #define AZX_DCAPS_SYNC_WRITE (1 << 19) /* sync each cmd write */ #define AZX_DCAPS_OLD_SSYNC (1 << 20) /* Old SSYNC reg for ICH */ diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c index 579984ecdec301b077512093b01f912261e92a2b..bb2bd33b00ec3c5d904e121857fbbae98978e2f2 100644 --- a/sound/pci/hda/hda_generic.c +++ b/sound/pci/hda/hda_generic.c @@ -6033,6 +6033,24 @@ void snd_hda_gen_free(struct hda_codec *codec) } EXPORT_SYMBOL_GPL(snd_hda_gen_free); +/** + * snd_hda_gen_reboot_notify - Make codec enter D3 before rebooting + * @codec: the HDA codec + * + * This can be put as patch_ops reboot_notify function. + */ +void snd_hda_gen_reboot_notify(struct hda_codec *codec) +{ + /* Make the codec enter D3 to avoid spurious noises from the internal + * speaker during (and after) reboot + */ + snd_hda_codec_set_power_to_all(codec, codec->core.afg, AC_PWRST_D3); + snd_hda_codec_write(codec, codec->core.afg, 0, + AC_VERB_SET_POWER_STATE, AC_PWRST_D3); + msleep(10); +} +EXPORT_SYMBOL_GPL(snd_hda_gen_reboot_notify); + #ifdef CONFIG_PM /** * snd_hda_gen_check_power_status - check the loopback power save state @@ -6060,6 +6078,7 @@ static const struct hda_codec_ops generic_patch_ops = { .init = snd_hda_gen_init, .free = snd_hda_gen_free, .unsol_event = snd_hda_jack_unsol_event, + .reboot_notify = snd_hda_gen_reboot_notify, #ifdef CONFIG_PM .check_power_status = snd_hda_gen_check_power_status, #endif @@ -6082,7 +6101,7 @@ static int snd_hda_parse_generic_codec(struct hda_codec *codec) err = snd_hda_parse_pin_defcfg(codec, &spec->autocfg, NULL, 0); if (err < 0) - return err; + goto error; err = snd_hda_gen_parse_auto_config(codec, &spec->autocfg); if (err < 0) diff --git a/sound/pci/hda/hda_generic.h b/sound/pci/hda/hda_generic.h index 10123664fa619af70a3b9060685060d541106d2e..ce9c293717b96acfbc99f2a5fdc71ebafa3b9797 100644 --- a/sound/pci/hda/hda_generic.h +++ b/sound/pci/hda/hda_generic.h @@ -336,6 +336,7 @@ int snd_hda_gen_parse_auto_config(struct hda_codec *codec, struct auto_pin_cfg *cfg); int snd_hda_gen_build_controls(struct hda_codec *codec); int snd_hda_gen_build_pcms(struct hda_codec *codec); +void snd_hda_gen_reboot_notify(struct hda_codec *codec); /* standard jack event callbacks */ void snd_hda_gen_hp_automute(struct hda_codec *codec, diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c index 308ce76149ccac014269910a384d1984c53d3b5e..7a3e34b120b33038aeb86353ab67fefa7a0b7c78 100644 --- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -78,6 +78,7 @@ enum { POS_FIX_VIACOMBO, POS_FIX_COMBO, POS_FIX_SKL, + POS_FIX_FIFO, }; /* Defines for ATI HD Audio support in SB450 south bridge */ @@ -149,7 +150,7 @@ module_param_array(model, charp, NULL, 0444); MODULE_PARM_DESC(model, "Use the given board model."); module_param_array(position_fix, int, NULL, 0444); MODULE_PARM_DESC(position_fix, "DMA pointer read method." 
- "(-1 = system default, 0 = auto, 1 = LPIB, 2 = POSBUF, 3 = VIACOMBO, 4 = COMBO, 5 = SKL+)."); + "(-1 = system default, 0 = auto, 1 = LPIB, 2 = POSBUF, 3 = VIACOMBO, 4 = COMBO, 5 = SKL+, 6 = FIFO)."); module_param_array(bdl_pos_adj, int, NULL, 0644); MODULE_PARM_DESC(bdl_pos_adj, "BDL position adjustment offset."); module_param_array(probe_mask, int, NULL, 0444); @@ -350,6 +351,11 @@ enum { #define AZX_DCAPS_PRESET_ATI_HDMI_NS \ (AZX_DCAPS_PRESET_ATI_HDMI | AZX_DCAPS_SNOOP_OFF) +/* quirks for AMD SB */ +#define AZX_DCAPS_PRESET_AMD_SB \ + (AZX_DCAPS_NO_TCSEL | AZX_DCAPS_SYNC_WRITE | AZX_DCAPS_AMD_WORKAROUND |\ + AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME) + /* quirks for Nvidia */ #define AZX_DCAPS_PRESET_NVIDIA \ (AZX_DCAPS_NO_MSI | AZX_DCAPS_CORBRP_SELF_CLEAR |\ @@ -920,6 +926,49 @@ static unsigned int azx_via_get_position(struct azx *chip, return bound_pos + mod_dma_pos; } +#define AMD_FIFO_SIZE 32 + +/* get the current DMA position with FIFO size correction */ +static unsigned int azx_get_pos_fifo(struct azx *chip, struct azx_dev *azx_dev) +{ + struct snd_pcm_substream *substream = azx_dev->core.substream; + struct snd_pcm_runtime *runtime = substream->runtime; + unsigned int pos, delay; + + pos = snd_hdac_stream_get_pos_lpib(azx_stream(azx_dev)); + if (!runtime) + return pos; + + runtime->delay = AMD_FIFO_SIZE; + delay = frames_to_bytes(runtime, AMD_FIFO_SIZE); + if (azx_dev->insufficient) { + if (pos < delay) { + delay = pos; + runtime->delay = bytes_to_frames(runtime, pos); + } else { + azx_dev->insufficient = 0; + } + } + + /* correct the DMA position for capture stream */ + if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) { + if (pos < delay) + pos += azx_dev->core.bufsize; + pos -= delay; + } + + return pos; +} + +static int azx_get_delay_from_fifo(struct azx *chip, struct azx_dev *azx_dev, + unsigned int pos) +{ + struct snd_pcm_substream *substream = azx_dev->core.substream; + + /* just read back the calculated value in the above */ + return substream->runtime->delay; +} + static unsigned int azx_skl_get_dpib_pos(struct azx *chip, struct azx_dev *azx_dev) { @@ -1528,6 +1577,7 @@ static int check_position_fix(struct azx *chip, int fix) case POS_FIX_VIACOMBO: case POS_FIX_COMBO: case POS_FIX_SKL: + case POS_FIX_FIFO: return fix; } @@ -1544,6 +1594,10 @@ static int check_position_fix(struct azx *chip, int fix) dev_dbg(chip->card->dev, "Using VIACOMBO position fix\n"); return POS_FIX_VIACOMBO; } + if (chip->driver_caps & AZX_DCAPS_AMD_WORKAROUND) { + dev_dbg(chip->card->dev, "Using FIFO position fix\n"); + return POS_FIX_FIFO; + } if (chip->driver_caps & AZX_DCAPS_POSFIX_LPIB) { dev_dbg(chip->card->dev, "Using LPIB position fix\n"); return POS_FIX_LPIB; @@ -1564,6 +1618,7 @@ static void assign_position_fix(struct azx *chip, int fix) [POS_FIX_VIACOMBO] = azx_via_get_position, [POS_FIX_COMBO] = azx_get_pos_lpib, [POS_FIX_SKL] = azx_get_pos_skl, + [POS_FIX_FIFO] = azx_get_pos_fifo, }; chip->get_position[0] = chip->get_position[1] = callbacks[fix]; @@ -1578,6 +1633,9 @@ static void assign_position_fix(struct azx *chip, int fix) azx_get_delay_from_lpib; } + if (fix == POS_FIX_FIFO) + chip->get_delay[0] = chip->get_delay[1] = + azx_get_delay_from_fifo; } /* @@ -2594,6 +2652,12 @@ static const struct pci_device_id azx_ids[] = { /* AMD Hudson */ { PCI_DEVICE(0x1022, 0x780d), .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB }, + /* AMD, X370 & co */ + { PCI_DEVICE(0x1022, 0x1457), + .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_AMD_SB }, + /* AMD, X570 & co */ + { 
PCI_DEVICE(0x1022, 0x1487), + .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_AMD_SB }, /* AMD Stoney */ { PCI_DEVICE(0x1022, 0x157a), .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB | diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c index 3cbd2119e14893064b943afe32c7f5f6af87eb22..ae8fde4c1a1254d7f9899951e0ed4f56879cc5fb 100644 --- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -176,23 +176,10 @@ static void cx_auto_reboot_notify(struct hda_codec *codec) { struct conexant_spec *spec = codec->spec; - switch (codec->core.vendor_id) { - case 0x14f12008: /* CX8200 */ - case 0x14f150f2: /* CX20722 */ - case 0x14f150f4: /* CX20724 */ - break; - default: - return; - } - /* Turn the problematic codec into D3 to avoid spurious noises from the internal speaker during (and after) reboot */ cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, false); - - snd_hda_codec_set_power_to_all(codec, codec->core.afg, AC_PWRST_D3); - snd_hda_codec_write(codec, codec->core.afg, 0, - AC_VERB_SET_POWER_STATE, AC_PWRST_D3); - msleep(10); + snd_hda_gen_reboot_notify(codec); } static void cx_auto_free(struct hda_codec *codec) @@ -637,18 +624,20 @@ static void cxt_fixup_hp_gate_mic_jack(struct hda_codec *codec, /* update LED status via GPIO */ static void cxt_update_gpio_led(struct hda_codec *codec, unsigned int mask, - bool enabled) + bool led_on) { struct conexant_spec *spec = codec->spec; unsigned int oldval = spec->gpio_led; if (spec->mute_led_polarity) - enabled = !enabled; + led_on = !led_on; - if (enabled) - spec->gpio_led &= ~mask; - else + if (led_on) spec->gpio_led |= mask; + else + spec->gpio_led &= ~mask; + codec_dbg(codec, "mask:%d enabled:%d gpio_led:%d\n", + mask, led_on, spec->gpio_led); if (spec->gpio_led != oldval) snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DATA, spec->gpio_led); @@ -659,8 +648,8 @@ static void cxt_fixup_gpio_mute_hook(void *private_data, int enabled) { struct hda_codec *codec = private_data; struct conexant_spec *spec = codec->spec; - - cxt_update_gpio_led(codec, spec->gpio_mute_led_mask, enabled); + /* muted -> LED on */ + cxt_update_gpio_led(codec, spec->gpio_mute_led_mask, !enabled); } /* turn on/off mic-mute LED via GPIO per capture hook */ @@ -682,7 +671,6 @@ static void cxt_fixup_mute_led_gpio(struct hda_codec *codec, { 0x01, AC_VERB_SET_GPIO_DIRECTION, 0x03 }, {} }; - codec_info(codec, "action: %d gpio_led: %d\n", action, spec->gpio_led); if (action == HDA_FIXUP_ACT_PRE_PROBE) { spec->gen.vmaster_mute.hook = cxt_fixup_gpio_mute_hook; @@ -1096,6 +1084,7 @@ static int patch_conexant_auto(struct hda_codec *codec) */ static const struct hda_device_id snd_hda_id_conexant[] = { + HDA_CODEC_ENTRY(0x14f11f86, "CX8070", patch_conexant_auto), HDA_CODEC_ENTRY(0x14f12008, "CX8200", patch_conexant_auto), HDA_CODEC_ENTRY(0x14f15045, "CX20549 (Venice)", patch_conexant_auto), HDA_CODEC_ENTRY(0x14f15047, "CX20551 (Waikiki)", patch_conexant_auto), diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 98cfdcfce5b3624387dcf82c615cda3b6f9106ca..9b5caf099bfbf735325f2e2b044d966825e921da 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -868,15 +868,6 @@ static void alc_reboot_notify(struct hda_codec *codec) alc_shutup(codec); } -/* power down codec to D3 at reboot/shutdown; set as reboot_notify ops */ -static void alc_d3_at_reboot(struct hda_codec *codec) -{ - snd_hda_codec_set_power_to_all(codec, codec->core.afg, AC_PWRST_D3); - snd_hda_codec_write(codec, 
codec->core.afg, 0, - AC_VERB_SET_POWER_STATE, AC_PWRST_D3); - msleep(10); -} - #define alc_free snd_hda_gen_free #ifdef CONFIG_PM @@ -5111,7 +5102,7 @@ static void alc_fixup_tpt440_dock(struct hda_codec *codec, struct alc_spec *spec = codec->spec; if (action == HDA_FIXUP_ACT_PRE_PROBE) { - spec->reboot_notify = alc_d3_at_reboot; /* reduce noise */ + spec->reboot_notify = snd_hda_gen_reboot_notify; /* reduce noise */ spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP; codec->power_save_node = 0; /* avoid click noises */ snd_hda_apply_pincfgs(codec, pincfgs); @@ -6851,6 +6842,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x103c, 0x82bf, "HP G3 mini", ALC221_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3), + SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3), SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), @@ -7518,9 +7510,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = { {0x12, 0x90a60130}, {0x17, 0x90170110}, {0x21, 0x03211020}), - SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, + SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, {0x14, 0x90170110}, {0x21, 0x04211020}), + SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, + {0x14, 0x90170110}, + {0x21, 0x04211030}), SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, ALC295_STANDARD_PINS, {0x17, 0x21014020}, @@ -8654,6 +8649,11 @@ static const struct snd_hda_pin_quirk alc662_pin_fixup_tbl[] = { {0x18, 0x01a19030}, {0x1a, 0x01813040}, {0x21, 0x01014020}), + SND_HDA_PIN_QUIRK(0x10ec0867, 0x1028, "Dell", ALC891_FIXUP_DELL_MIC_NO_PRESENCE, + {0x16, 0x01813030}, + {0x17, 0x02211010}, + {0x18, 0x01a19040}, + {0x21, 0x01014020}), SND_HDA_PIN_QUIRK(0x10ec0662, 0x1028, "Dell", ALC662_FIXUP_DELL_MIC_NO_PRESENCE, {0x14, 0x01014010}, {0x18, 0x01a19020}, diff --git a/sound/soc/bcm/Kconfig b/sound/soc/bcm/Kconfig index 75786ab06a53b8551df43a1c8e5d158f4dafcfe7..a40042489b6cdf2f916d42272eac219787e88247 100644 --- a/sound/soc/bcm/Kconfig +++ b/sound/soc/bcm/Kconfig @@ -48,6 +48,14 @@ config SND_BCM2708_SOC_HIFIBERRY_DACPLUSADC help Say Y or M if you want to add support for HifiBerry DAC+ADC. +config SND_BCM2708_SOC_HIFIBERRY_DACPLUSADCPRO + tristate "Support for HifiBerry DAC+ADC PRO" + depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S + select SND_SOC_PCM512x_I2C + select SND_SOC_PCM186X_I2C + help + Say Y or M if you want to add support for HifiBerry DAC+ADC PRO. 
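Aside: the patch_realtek.c and patch_conexant.c hunks above drop their private copies of the "enter D3 before reboot" sequence and point at the shared snd_hda_gen_reboot_notify() helper instead. The sketch below shows the general shape of that consolidation with invented names; it is not the HDA codec API itself, only the one-implementation-many-hooks pattern.

struct example_codec;

struct example_codec_ops {
	void (*reboot_notify)(struct example_codec *codec);
};

/* One shared implementation of the power-down-at-reboot sequence... */
static void example_gen_reboot_notify(struct example_codec *codec)
{
	(void)codec;
	/* put the codec into its lowest power state, then settle briefly */
}

/* ...and each driver merely wires its hook to it instead of duplicating it. */
static const struct example_codec_ops example_patch_ops = {
	.reboot_notify = example_gen_reboot_notify,
};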
+ config SND_BCM2708_SOC_HIFIBERRY_DIGI tristate "Support for HifiBerry Digi" depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S diff --git a/sound/soc/bcm/Makefile b/sound/soc/bcm/Makefile index 4f588a29d8253d4537fa5c5c766e66d964a733a3..78a96fa577d728ad34aad3597739bd88070a9be2 100644 --- a/sound/soc/bcm/Makefile +++ b/sound/soc/bcm/Makefile @@ -14,6 +14,7 @@ snd-soc-googlevoicehat-codec-objs := googlevoicehat-codec.o # BCM2708 Machine Support snd-soc-hifiberry-dacplus-objs := hifiberry_dacplus.o snd-soc-hifiberry-dacplusadc-objs := hifiberry_dacplusadc.o +snd-soc-hifiberry-dacplusadcpro-objs := hifiberry_dacplusadcpro.o snd-soc-justboom-dac-objs := justboom-dac.o snd-soc-rpi-cirrus-objs := rpi-cirrus.o snd-soc-rpi-proto-objs := rpi-proto.o @@ -38,6 +39,7 @@ snd-soc-rpi-wm8804-soundcard-objs := rpi-wm8804-soundcard.o obj-$(CONFIG_SND_BCM2708_SOC_GOOGLEVOICEHAT_SOUNDCARD) += snd-soc-googlevoicehat-codec.o obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS) += snd-soc-hifiberry-dacplus.o obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUSADC) += snd-soc-hifiberry-dacplusadc.o +obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUSADCPRO) += snd-soc-hifiberry-dacplusadcpro.o obj-$(CONFIG_SND_BCM2708_SOC_JUSTBOOM_DAC) += snd-soc-justboom-dac.o obj-$(CONFIG_SND_BCM2708_SOC_RPI_CIRRUS) += snd-soc-rpi-cirrus.o obj-$(CONFIG_SND_BCM2708_SOC_RPI_PROTO) += snd-soc-rpi-proto.o diff --git a/sound/soc/bcm/allo-katana-codec.c b/sound/soc/bcm/allo-katana-codec.c index 228531dd785ee6d12cc2bde0229e25b80e89e725..b0aebd40fe5eac656bc4050362e34a16e5378eb2 100644 --- a/sound/soc/bcm/allo-katana-codec.c +++ b/sound/soc/bcm/allo-katana-codec.c @@ -126,7 +126,7 @@ static SOC_VALUE_ENUM_SINGLE_DECL(katana_codec_deemphasis, katana_codec_deemphasis_texts, katana_codec_deemphasis_values); -static const SNDRV_CTL_TLVD_DECLARE_DB_MINMAX(master_tlv, -12700, 0); +static const SNDRV_CTL_TLVD_DECLARE_DB_MINMAX(master_tlv, -12750, 0); static const struct snd_kcontrol_new katana_codec_controls[] = { SOC_DOUBLE_R_TLV("Master Playback Volume", KATANA_CODEC_VOLUME_1, diff --git a/sound/soc/bcm/hifiberry_dacplusadcpro.c b/sound/soc/bcm/hifiberry_dacplusadcpro.c new file mode 100644 index 0000000000000000000000000000000000000000..ed080b24eb4947833e1442a39f6c103137f74193 --- /dev/null +++ b/sound/soc/bcm/hifiberry_dacplusadcpro.c @@ -0,0 +1,538 @@ +/* + * ASoC Driver for HiFiBerry DAC+ / DAC Pro with ADC PRO Version (SW control) + * + * Author: Daniel Matuschek, Stuart MacLean + * Copyright 2014-2015 + * based on code by Florian Meier + * ADC added by Joerg Schambacher + * Copyright 2018-19 + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
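Aside: the allo-katana-codec.c one-liner above widens the master volume TLV floor from -127.00 dB to -127.50 dB. Assuming the volume register spans 256 steps of 0.5 dB with 0 dB at the top (an assumption, not stated in the patch), the minimum must sit 255 * 0.5 = 127.5 dB below full scale, i.e. -12750 in the 0.01 dB units the TLV macros use. A tiny standalone check of that arithmetic:

#include <assert.h>

int main(void)
{
	const int steps = 256;			/* register values 0..255 */
	const int step_centidb = 50;		/* 0.5 dB per step, in 0.01 dB units */
	const int max_centidb = 0;		/* top of the scale is 0 dB */
	int min_centidb = max_centidb - (steps - 1) * step_centidb;

	assert(min_centidb == -12750);		/* matches the corrected TLV */
	return 0;
}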
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include "../codecs/pcm512x.h" +#include "../codecs/pcm186x.h" + +#define HIFIBERRY_DACPRO_NOCLOCK 0 +#define HIFIBERRY_DACPRO_CLK44EN 1 +#define HIFIBERRY_DACPRO_CLK48EN 2 + +struct pcm512x_priv { + struct regmap *regmap; + struct clk *sclk; +}; + +/* Clock rate of CLK44EN attached to GPIO6 pin */ +#define CLK_44EN_RATE 22579200UL +/* Clock rate of CLK48EN attached to GPIO3 pin */ +#define CLK_48EN_RATE 24576000UL + +static bool slave; +static bool snd_rpi_hifiberry_is_dacpro; +static bool digital_gain_0db_limit = true; + +static const unsigned int pcm186x_adc_input_channel_sel_value[] = { + 0x00, 0x01, 0x02, 0x03, 0x10 +}; + +static const char * const pcm186x_adcl_input_channel_sel_text[] = { + "No Select", + "VINL1[SE]", /* Default for ADCL */ + "VINL2[SE]", + "VINL2[SE] + VINL1[SE]", + "{VIN1P, VIN1M}[DIFF]" +}; + +static const char * const pcm186x_adcr_input_channel_sel_text[] = { + "No Select", + "VINR1[SE]", /* Default for ADCR */ + "VINR2[SE]", + "VINR2[SE] + VINR1[SE]", + "{VIN2P, VIN2M}[DIFF]" +}; + +static const struct soc_enum pcm186x_adc_input_channel_sel[] = { + SOC_VALUE_ENUM_SINGLE(PCM186X_ADC1_INPUT_SEL_L, 0, + PCM186X_ADC_INPUT_SEL_MASK, + ARRAY_SIZE(pcm186x_adcl_input_channel_sel_text), + pcm186x_adcl_input_channel_sel_text, + pcm186x_adc_input_channel_sel_value), + SOC_VALUE_ENUM_SINGLE(PCM186X_ADC1_INPUT_SEL_R, 0, + PCM186X_ADC_INPUT_SEL_MASK, + ARRAY_SIZE(pcm186x_adcr_input_channel_sel_text), + pcm186x_adcr_input_channel_sel_text, + pcm186x_adc_input_channel_sel_value), +}; + +static const unsigned int pcm186x_mic_bias_sel_value[] = { + 0x00, 0x01, 0x11 +}; + +static const char * const pcm186x_mic_bias_sel_text[] = { + "Mic Bias off", + "Mic Bias on", + "Mic Bias with Bypass Resistor" +}; + +static const struct soc_enum pcm186x_mic_bias_sel[] = { + SOC_VALUE_ENUM_SINGLE(PCM186X_MIC_BIAS_CTRL, 0, + GENMASK(4, 0), + ARRAY_SIZE(pcm186x_mic_bias_sel_text), + pcm186x_mic_bias_sel_text, + pcm186x_mic_bias_sel_value), +}; + +static const unsigned int pcm186x_gain_sel_value[] = { + 0xe8, 0xe9, 0xea, 0xeb, 0xec, 0xed, 0xee, 0xef, + 0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, + 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff, + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, + 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, + 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, + 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, + 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f, + 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, + 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, + 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, + 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, + 0x50 +}; + +static const char * const pcm186x_gain_sel_text[] = { + "-12.0dB", "-11.5dB", "-11.0dB", "-10.5dB", "-10.0dB", "-9.5dB", + "-9.0dB", "-8.5dB", "-8.0dB", "-7.5dB", "-7.0dB", "-6.5dB", + "-6.0dB", "-5.5dB", "-5.0dB", "-4.5dB", "-4.0dB", "-3.5dB", + "-3.0dB", "-2.5dB", "-2.0dB", "-1.5dB", "-1.0dB", "-0.5dB", + "0.0dB", "0.5dB", "1.0dB", "1.5dB", "2.0dB", "2.5dB", + "3.0dB", "3.5dB", "4.0dB", "4.5dB", "5.0dB", "5.5dB", + "6.0dB", "6.5dB", "7.0dB", "7.5dB", "8.0dB", "8.5dB", + "9.0dB", "9.5dB", "10.0dB", "10.5dB", "11.0dB", "11.5dB", + "12.0dB", "12.5dB", "13.0dB", "13.5dB", "14.0dB", "14.5dB", + "15.0dB", "15.5dB", "16.0dB", "16.5dB", "17.0dB", "17.5dB", + "18.0dB", "18.5dB", "19.0dB", 
"19.5dB", "20.0dB", "20.5dB", + "21.0dB", "21.5dB", "22.0dB", "22.5dB", "23.0dB", "23.5dB", + "24.0dB", "24.5dB", "25.0dB", "25.5dB", "26.0dB", "26.5dB", + "27.0dB", "27.5dB", "28.0dB", "28.5dB", "29.0dB", "29.5dB", + "30.0dB", "30.5dB", "31.0dB", "31.5dB", "32.0dB", "32.5dB", + "33.0dB", "33.5dB", "34.0dB", "34.5dB", "35.0dB", "35.5dB", + "36.0dB", "36.5dB", "37.0dB", "37.5dB", "38.0dB", "38.5dB", + "39.0dB", "39.5dB", "40.0dB"}; + +static const struct soc_enum pcm186x_gain_sel[] = { + SOC_VALUE_ENUM_SINGLE(PCM186X_PGA_VAL_CH1_L, 0, + 0xff, + ARRAY_SIZE(pcm186x_gain_sel_text), + pcm186x_gain_sel_text, + pcm186x_gain_sel_value), + SOC_VALUE_ENUM_SINGLE(PCM186X_PGA_VAL_CH1_R, 0, + 0xff, + ARRAY_SIZE(pcm186x_gain_sel_text), + pcm186x_gain_sel_text, + pcm186x_gain_sel_value), +}; + +static const struct snd_kcontrol_new pcm1863_snd_controls_card[] = { + SOC_ENUM("ADC Left Input", pcm186x_adc_input_channel_sel[0]), + SOC_ENUM("ADC Right Input", pcm186x_adc_input_channel_sel[1]), + SOC_ENUM("ADC Mic Bias", pcm186x_mic_bias_sel), + SOC_ENUM("PGA Gain Left", pcm186x_gain_sel[0]), + SOC_ENUM("PGA Gain Right", pcm186x_gain_sel[1]), +}; + +static int pcm1863_add_controls(struct snd_soc_component *component) +{ + snd_soc_add_component_controls(component, + pcm1863_snd_controls_card, + ARRAY_SIZE(pcm1863_snd_controls_card)); + return 0; +} + +static void snd_rpi_hifiberry_dacplusadcpro_select_clk( + struct snd_soc_component *component, int clk_id) +{ + switch (clk_id) { + case HIFIBERRY_DACPRO_NOCLOCK: + snd_soc_component_update_bits(component, + PCM512x_GPIO_CONTROL_1, 0x24, 0x00); + break; + case HIFIBERRY_DACPRO_CLK44EN: + snd_soc_component_update_bits(component, + PCM512x_GPIO_CONTROL_1, 0x24, 0x20); + break; + case HIFIBERRY_DACPRO_CLK48EN: + snd_soc_component_update_bits(component, + PCM512x_GPIO_CONTROL_1, 0x24, 0x04); + break; + } +} + +static void snd_rpi_hifiberry_dacplusadcpro_clk_gpio(struct snd_soc_component *component) +{ + snd_soc_component_update_bits(component, PCM512x_GPIO_EN, 0x24, 0x24); + snd_soc_component_update_bits(component, PCM512x_GPIO_OUTPUT_3, 0x0f, 0x02); + snd_soc_component_update_bits(component, PCM512x_GPIO_OUTPUT_6, 0x0f, 0x02); +} + +static bool snd_rpi_hifiberry_dacplusadcpro_is_sclk(struct snd_soc_component *component) +{ + unsigned int sck; + + snd_soc_component_read(component, PCM512x_RATE_DET_4, &sck); + return (!(sck & 0x40)); +} + +static bool snd_rpi_hifiberry_dacplusadcpro_is_sclk_sleep( + struct snd_soc_component *component) +{ + msleep(2); + return snd_rpi_hifiberry_dacplusadcpro_is_sclk(component); +} + +static bool snd_rpi_hifiberry_dacplusadcpro_is_pro_card(struct snd_soc_component *component) +{ + bool isClk44EN, isClk48En, isNoClk; + + snd_rpi_hifiberry_dacplusadcpro_clk_gpio(component); + + snd_rpi_hifiberry_dacplusadcpro_select_clk(component, HIFIBERRY_DACPRO_CLK44EN); + isClk44EN = snd_rpi_hifiberry_dacplusadcpro_is_sclk_sleep(component); + + snd_rpi_hifiberry_dacplusadcpro_select_clk(component, HIFIBERRY_DACPRO_NOCLOCK); + isNoClk = snd_rpi_hifiberry_dacplusadcpro_is_sclk_sleep(component); + + snd_rpi_hifiberry_dacplusadcpro_select_clk(component, HIFIBERRY_DACPRO_CLK48EN); + isClk48En = snd_rpi_hifiberry_dacplusadcpro_is_sclk_sleep(component); + + return (isClk44EN && isClk48En && !isNoClk); +} + +static int snd_rpi_hifiberry_dacplusadcpro_clk_for_rate(int sample_rate) +{ + int type; + + switch (sample_rate) { + case 11025: + case 22050: + case 44100: + case 88200: + case 176400: + case 352800: + type = HIFIBERRY_DACPRO_CLK44EN; + break; + default: 
+ type = HIFIBERRY_DACPRO_CLK48EN; + break; + } + return type; +} + +static void snd_rpi_hifiberry_dacplusadcpro_set_sclk(struct snd_soc_component *component, + int sample_rate) +{ + struct pcm512x_priv *pcm512x = snd_soc_component_get_drvdata(component); + + if (!IS_ERR(pcm512x->sclk)) { + int ctype; + + ctype = snd_rpi_hifiberry_dacplusadcpro_clk_for_rate(sample_rate); + clk_set_rate(pcm512x->sclk, (ctype == HIFIBERRY_DACPRO_CLK44EN) + ? CLK_44EN_RATE : CLK_48EN_RATE); + snd_rpi_hifiberry_dacplusadcpro_select_clk(component, ctype); + } +} + +static int snd_rpi_hifiberry_dacplusadcpro_init(struct snd_soc_pcm_runtime *rtd) +{ + struct snd_soc_component *dac = rtd->codec_dais[0]->component; + struct snd_soc_component *adc = rtd->codec_dais[1]->component; + struct snd_soc_dai_driver *adc_driver = rtd->codec_dais[1]->driver; + struct pcm512x_priv *priv; + int ret; + + if (slave) + snd_rpi_hifiberry_is_dacpro = false; + else + snd_rpi_hifiberry_is_dacpro = + snd_rpi_hifiberry_dacplusadcpro_is_pro_card(dac); + + if (snd_rpi_hifiberry_is_dacpro) { + struct snd_soc_dai_link *dai = rtd->dai_link; + + dai->name = "HiFiBerry DAC+ADC Pro"; + dai->stream_name = "HiFiBerry DAC+ADC Pro HiFi"; + + // set DAC DAI configuration + ret = snd_soc_dai_set_fmt(rtd->codec_dais[0], + SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF + | SND_SOC_DAIFMT_CBM_CFM); + if (ret < 0) + return ret; + + // set ADC DAI configuration + ret = snd_soc_dai_set_fmt(rtd->codec_dais[1], + SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF + | SND_SOC_DAIFMT_CBS_CFS); + if (ret < 0) + return ret; + + // set CPU DAI configuration + ret = snd_soc_dai_set_fmt(rtd->cpu_dai, + SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF | SND_SOC_DAIFMT_CBS_CFS); + if (ret < 0) + return ret; + + snd_soc_component_update_bits(dac, PCM512x_BCLK_LRCLK_CFG, 0x31, 0x11); + snd_soc_component_update_bits(dac, PCM512x_MASTER_MODE, 0x03, 0x03); + snd_soc_component_update_bits(dac, PCM512x_MASTER_CLKDIV_2, 0x7f, 63); + } else { + priv = snd_soc_component_get_drvdata(dac); + priv->sclk = ERR_PTR(-ENOENT); + } + + /* disable 24bit mode as long as I2S module does not have sign extension fixed */ + adc_driver->capture.formats = SNDRV_PCM_FMTBIT_S32_LE | SNDRV_PCM_FMTBIT_S16_LE; + + snd_soc_component_update_bits(dac, PCM512x_GPIO_EN, 0x08, 0x08); + snd_soc_component_update_bits(dac, PCM512x_GPIO_OUTPUT_4, 0x0f, 0x02); + snd_soc_component_update_bits(dac, PCM512x_GPIO_CONTROL_1, 0x08, 0x08); + + ret = pcm1863_add_controls(adc); + if (ret < 0) + dev_warn(rtd->dev, "Failed to add pcm1863 controls: %d\n", + ret); + + /* set GPIO2 to output, GPIO3 input */ + snd_soc_component_write(adc, PCM186X_GPIO3_2_CTRL, 0x00); + snd_soc_component_write(adc, PCM186X_GPIO3_2_DIR_CTRL, 0x04); + snd_soc_component_update_bits(adc, PCM186X_GPIO_IN_OUT, 0x40, 0x40); + + if (digital_gain_0db_limit) { + int ret; + struct snd_soc_card *card = rtd->card; + + ret = snd_soc_limit_volume(card, "Digital Playback Volume", 207); + if (ret < 0) + dev_warn(card->dev, "Failed to set volume limit: %d\n", ret); + } + + return 0; +} + +static int snd_rpi_hifiberry_dacplusadcpro_update_rate_den( + struct snd_pcm_substream *substream, struct snd_pcm_hw_params *params) +{ + struct snd_soc_pcm_runtime *rtd = substream->private_data; + struct snd_soc_component *component = rtd->codec_dais[0]->component; /* only use DAC */ + struct pcm512x_priv *pcm512x = snd_soc_component_get_drvdata(component); + struct snd_ratnum *rats_no_pll; + unsigned int num = 0, den = 0; + int err; + + rats_no_pll = devm_kzalloc(rtd->dev, 
sizeof(*rats_no_pll), GFP_KERNEL); + if (!rats_no_pll) + return -ENOMEM; + + rats_no_pll->num = clk_get_rate(pcm512x->sclk) / 64; + rats_no_pll->den_min = 1; + rats_no_pll->den_max = 128; + rats_no_pll->den_step = 1; + + err = snd_interval_ratnum(hw_param_interval(params, + SNDRV_PCM_HW_PARAM_RATE), 1, rats_no_pll, &num, &den); + if (err >= 0 && den) { + params->rate_num = num; + params->rate_den = den; + } + + devm_kfree(rtd->dev, rats_no_pll); + return 0; +} + +static int snd_rpi_hifiberry_dacplusadcpro_hw_params( + struct snd_pcm_substream *substream, struct snd_pcm_hw_params *params) +{ + int ret = 0; + struct snd_soc_pcm_runtime *rtd = substream->private_data; + int channels = params_channels(params); + int width = 32; + struct snd_soc_component *dac = rtd->codec_dais[0]->component; + + if (snd_rpi_hifiberry_is_dacpro) { + + width = snd_pcm_format_physical_width(params_format(params)); + + snd_rpi_hifiberry_dacplusadcpro_set_sclk(dac, + params_rate(params)); + + ret = snd_rpi_hifiberry_dacplusadcpro_update_rate_den( + substream, params); + if (ret) + return ret; + } + + ret = snd_soc_dai_set_tdm_slot(rtd->cpu_dai, 0x03, 0x03, + channels, width); + if (ret) + return ret; + ret = snd_soc_dai_set_tdm_slot(rtd->codec_dais[0], 0x03, 0x03, + channels, width); + if (ret) + return ret; + ret = snd_soc_dai_set_tdm_slot(rtd->codec_dais[1], 0x03, 0x03, + channels, width); + return ret; +} + +static int snd_rpi_hifiberry_dacplusadcpro_startup( + struct snd_pcm_substream *substream) +{ + struct snd_soc_pcm_runtime *rtd = substream->private_data; + struct snd_soc_component *dac = rtd->codec_dais[0]->component; + struct snd_soc_component *adc = rtd->codec_dais[1]->component; + + /* switch on respective LED */ + if (!substream->stream) + snd_soc_component_update_bits(dac, PCM512x_GPIO_CONTROL_1, 0x08, 0x08); + else + snd_soc_component_update_bits(adc, PCM186X_GPIO_IN_OUT, 0x40, 0x40); + return 0; +} + +static void snd_rpi_hifiberry_dacplusadcpro_shutdown( + struct snd_pcm_substream *substream) +{ + struct snd_soc_pcm_runtime *rtd = substream->private_data; + struct snd_soc_component *dac = rtd->codec_dais[0]->component; + struct snd_soc_component *adc = rtd->codec_dais[1]->component; + + /* switch off respective LED */ + if (!substream->stream) + snd_soc_component_update_bits(dac, PCM512x_GPIO_CONTROL_1, 0x08, 0x00); + else + snd_soc_component_update_bits(adc, PCM186X_GPIO_IN_OUT, 0x40, 0x00); +} + + +/* machine stream operations */ +static struct snd_soc_ops snd_rpi_hifiberry_dacplusadcpro_ops = { + .hw_params = snd_rpi_hifiberry_dacplusadcpro_hw_params, + .startup = snd_rpi_hifiberry_dacplusadcpro_startup, + .shutdown = snd_rpi_hifiberry_dacplusadcpro_shutdown, +}; + +static struct snd_soc_dai_link_component snd_rpi_hifiberry_dacplusadcpro_codecs[] = { + { + .name = "pcm512x.1-004d", + .dai_name = "pcm512x-hifi", + }, + { + .name = "pcm186x.1-004a", + .dai_name = "pcm1863-aif", + }, +}; + +static struct snd_soc_dai_link snd_rpi_hifiberry_dacplusadcpro_dai[] = { +{ + .name = "HiFiBerry DAC+ADC PRO", + .stream_name = "HiFiBerry DAC+ADC PRO HiFi", + .cpu_dai_name = "bcm2708-i2s.0", + .platform_name = "bcm2708-i2s.0", + .codecs = snd_rpi_hifiberry_dacplusadcpro_codecs, + .num_codecs = 2, + .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF | + SND_SOC_DAIFMT_CBS_CFS, + .ops = &snd_rpi_hifiberry_dacplusadcpro_ops, + .init = snd_rpi_hifiberry_dacplusadcpro_init, +}, +}; + +/* audio machine driver */ +static struct snd_soc_card snd_rpi_hifiberry_dacplusadcpro = { + .name = 
"snd_rpi_hifiberry_dacplusadcpro", + .driver_name = "HifiberryDacpAdcPro", + .owner = THIS_MODULE, + .dai_link = snd_rpi_hifiberry_dacplusadcpro_dai, + .num_links = ARRAY_SIZE(snd_rpi_hifiberry_dacplusadcpro_dai), +}; + +static int snd_rpi_hifiberry_dacplusadcpro_probe(struct platform_device *pdev) +{ + int ret = 0, i = 0; + struct snd_soc_card *card = &snd_rpi_hifiberry_dacplusadcpro; + + snd_rpi_hifiberry_dacplusadcpro.dev = &pdev->dev; + if (pdev->dev.of_node) { + struct device_node *i2s_node; + struct snd_soc_dai_link *dai; + + dai = &snd_rpi_hifiberry_dacplusadcpro_dai[0]; + i2s_node = of_parse_phandle(pdev->dev.of_node, + "i2s-controller", 0); + if (i2s_node) { + for (i = 0; i < card->num_links; i++) { + dai->cpu_dai_name = NULL; + dai->cpu_of_node = i2s_node; + dai->platform_name = NULL; + dai->platform_of_node = i2s_node; + } + } + } + digital_gain_0db_limit = !of_property_read_bool( + pdev->dev.of_node, "hifiberry-dacplusadcpro,24db_digital_gain"); + slave = of_property_read_bool(pdev->dev.of_node, + "hifiberry-dacplusadcpro,slave"); + ret = snd_soc_register_card(&snd_rpi_hifiberry_dacplusadcpro); + if (ret && ret != -EPROBE_DEFER) + dev_err(&pdev->dev, + "snd_soc_register_card() failed: %d\n", ret); + + return ret; +} + +static const struct of_device_id snd_rpi_hifiberry_dacplusadcpro_of_match[] = { + { .compatible = "hifiberry,hifiberry-dacplusadcpro", }, + {}, +}; + +MODULE_DEVICE_TABLE(of, snd_rpi_hifiberry_dacplusadcpro_of_match); + +static struct platform_driver snd_rpi_hifiberry_dacplusadcpro_driver = { + .driver = { + .name = "snd-rpi-hifiberry-dacplusadcpro", + .owner = THIS_MODULE, + .of_match_table = snd_rpi_hifiberry_dacplusadcpro_of_match, + }, + .probe = snd_rpi_hifiberry_dacplusadcpro_probe, +}; + +module_platform_driver(snd_rpi_hifiberry_dacplusadcpro_driver); + +MODULE_AUTHOR("Joerg Schambacher "); +MODULE_AUTHOR("Daniel Matuschek "); +MODULE_DESCRIPTION("ASoC Driver for HiFiBerry DAC+ADC"); +MODULE_LICENSE("GPL v2"); diff --git a/sound/soc/bcm/hifiberry_dacplusdsp.c b/sound/soc/bcm/hifiberry_dacplusdsp.c new file mode 100644 index 0000000000000000000000000000000000000000..cda7ee519093095348d02d6d6438cba203727369 --- /dev/null +++ b/sound/soc/bcm/hifiberry_dacplusdsp.c @@ -0,0 +1,90 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * ASoC Driver for HiFiBerry DAC + DSP + * + * Author: Joerg Schambacher + * Copyright 2018 + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#include +#include +#include +#include +#include + +static struct snd_soc_component_driver dacplusdsp_component_driver; + +static struct snd_soc_dai_driver dacplusdsp_dai = { + .name = "dacplusdsp-hifi", + .capture = { + .stream_name = "DAC+DSP Capture", + .channels_min = 2, + .channels_max = 2, + .rates = SNDRV_PCM_RATE_CONTINUOUS, + .formats = SNDRV_PCM_FMTBIT_S16_LE | + SNDRV_PCM_FMTBIT_S24_LE | + SNDRV_PCM_FMTBIT_S32_LE, + }, + .playback = { + .stream_name = "DACP+DSP Playback", + .channels_min = 2, + .channels_max = 2, + .rates = SNDRV_PCM_RATE_CONTINUOUS, + .formats = SNDRV_PCM_FMTBIT_S16_LE | + SNDRV_PCM_FMTBIT_S24_LE | + SNDRV_PCM_FMTBIT_S32_LE, + }, + .symmetric_rates = 1}; + +#ifdef CONFIG_OF +static const struct of_device_id dacplusdsp_ids[] = { + { + .compatible = "hifiberry,dacplusdsp", + }, + {} }; +MODULE_DEVICE_TABLE(of, dacplusdsp_ids); +#endif + +static int dacplusdsp_platform_probe(struct platform_device *pdev) +{ + int ret; + + ret = snd_soc_register_component(&pdev->dev, + &dacplusdsp_component_driver, &dacplusdsp_dai, 1); + if (ret) { + pr_alert("snd_soc_register_component failed\n"); + return ret; + } + + return 0; +} + +static int dacplusdsp_platform_remove(struct platform_device *pdev) +{ + snd_soc_unregister_component(&pdev->dev); + return 0; +} + +static struct platform_driver dacplusdsp_driver = { + .driver = { + .name = "hifiberry-dacplusdsp-codec", + .of_match_table = of_match_ptr(dacplusdsp_ids), + }, + .probe = dacplusdsp_platform_probe, + .remove = dacplusdsp_platform_remove, +}; + +module_platform_driver(dacplusdsp_driver); + +MODULE_AUTHOR("Joerg Schambacher "); +MODULE_DESCRIPTION("ASoC Driver for HiFiBerry DAC+DSP"); +MODULE_LICENSE("GPL v2"); diff --git a/sound/soc/codecs/hdac_hdmi.c b/sound/soc/codecs/hdac_hdmi.c index 63487240b61e40c1c011e6e0ca6a290a10466083..098196610542a08ba757944cc76f6d3318c97de4 100644 --- a/sound/soc/codecs/hdac_hdmi.c +++ b/sound/soc/codecs/hdac_hdmi.c @@ -1854,6 +1854,12 @@ static void hdmi_codec_remove(struct snd_soc_component *component) { struct hdac_hdmi_priv *hdmi = snd_soc_component_get_drvdata(component); struct hdac_device *hdev = hdmi->hdev; + int ret; + + ret = snd_hdac_acomp_register_notifier(hdev->bus, NULL); + if (ret < 0) + dev_err(&hdev->dev, "notifier unregister failed: err: %d\n", + ret); pm_runtime_disable(&hdev->dev); } diff --git a/sound/soc/davinci/davinci-mcasp.c b/sound/soc/davinci/davinci-mcasp.c index 160b2764b2ad8924ea286caeb02207be3eeb6d4d..6a8c279a4b20b472a2c245b8d7c6aacc794c3d31 100644 --- a/sound/soc/davinci/davinci-mcasp.c +++ b/sound/soc/davinci/davinci-mcasp.c @@ -1150,6 +1150,28 @@ static int davinci_mcasp_trigger(struct snd_pcm_substream *substream, return ret; } +static int davinci_mcasp_hw_rule_slot_width(struct snd_pcm_hw_params *params, + struct snd_pcm_hw_rule *rule) +{ + struct davinci_mcasp_ruledata *rd = rule->private; + struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT); + struct snd_mask nfmt; + int i, slot_width; + + snd_mask_none(&nfmt); + slot_width = rd->mcasp->slot_width; + + for (i = 0; i <= SNDRV_PCM_FORMAT_LAST; i++) { + if (snd_mask_test(fmt, i)) { + if (snd_pcm_format_width(i) <= slot_width) { + snd_mask_set(&nfmt, i); + } + } + } + + return snd_mask_refine(fmt, &nfmt); +} + static const unsigned int davinci_mcasp_dai_rates[] = { 8000, 11025, 16000, 22050, 32000, 44100, 48000, 64000, 88200, 96000, 176400, 192000, @@ -1257,7 +1279,7 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream, struct davinci_mcasp_ruledata *ruledata = 
&mcasp->ruledata[substream->stream]; u32 max_channels = 0; - int i, dir; + int i, dir, ret; int tdm_slots = mcasp->tdm_slots; /* Do not allow more then one stream per direction */ @@ -1286,6 +1308,7 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream, max_channels++; } ruledata->serializers = max_channels; + ruledata->mcasp = mcasp; max_channels *= tdm_slots; /* * If the already active stream has less channels than the calculated @@ -1311,20 +1334,22 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream, 0, SNDRV_PCM_HW_PARAM_CHANNELS, &mcasp->chconstr[substream->stream]); - if (mcasp->slot_width) - snd_pcm_hw_constraint_minmax(substream->runtime, - SNDRV_PCM_HW_PARAM_SAMPLE_BITS, - 8, mcasp->slot_width); + if (mcasp->slot_width) { + /* Only allow formats require <= slot_width bits on the bus */ + ret = snd_pcm_hw_rule_add(substream->runtime, 0, + SNDRV_PCM_HW_PARAM_FORMAT, + davinci_mcasp_hw_rule_slot_width, + ruledata, + SNDRV_PCM_HW_PARAM_FORMAT, -1); + if (ret) + return ret; + } /* * If we rely on implicit BCLK divider setting we should * set constraints based on what we can provide. */ if (mcasp->bclk_master && mcasp->bclk_div == 0 && mcasp->sysclk_freq) { - int ret; - - ruledata->mcasp = mcasp; - ret = snd_pcm_hw_rule_add(substream->runtime, 0, SNDRV_PCM_HW_PARAM_RATE, davinci_mcasp_hw_rule_rate, diff --git a/sound/soc/meson/axg-tdm.h b/sound/soc/meson/axg-tdm.h index e578b6f40a07b778ccd16486d7dc9882177e25d5..5774ce0916d403a4fb59116a05c661dd7343198b 100644 --- a/sound/soc/meson/axg-tdm.h +++ b/sound/soc/meson/axg-tdm.h @@ -40,7 +40,7 @@ struct axg_tdm_iface { static inline bool axg_tdm_lrclk_invert(unsigned int fmt) { - return (fmt & SND_SOC_DAIFMT_I2S) ^ + return ((fmt & SND_SOC_DAIFMT_FORMAT_MASK) == SND_SOC_DAIFMT_I2S) ^ !!(fmt & (SND_SOC_DAIFMT_IB_IF | SND_SOC_DAIFMT_NB_IF)); } diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c index 60d43d53a8f5e8fb00166085531eba66aaef1b79..11399f81c92f973b81cd6c2f037bda7143318d30 100644 --- a/sound/soc/rockchip/rockchip_i2s.c +++ b/sound/soc/rockchip/rockchip_i2s.c @@ -329,7 +329,6 @@ static int rockchip_i2s_hw_params(struct snd_pcm_substream *substream, val |= I2S_CHN_4; break; case 2: - case 1: val |= I2S_CHN_2; break; default: @@ -462,7 +461,7 @@ static struct snd_soc_dai_driver rockchip_i2s_dai = { }, .capture = { .stream_name = "Capture", - .channels_min = 1, + .channels_min = 2, .channels_max = 2, .rates = SNDRV_PCM_RATE_8000_192000, .formats = (SNDRV_PCM_FMTBIT_S8 | @@ -662,7 +661,7 @@ static int rockchip_i2s_probe(struct platform_device *pdev) } if (!of_property_read_u32(node, "rockchip,capture-channels", &val)) { - if (val >= 1 && val <= 8) + if (val >= 2 && val <= 8) soc_dai->capture.channels_max = val; } diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c index 2257b1b0151c41b9acd6987c53b1a5bc15649fe9..4ce57510b6236cf6398ed0c5829c651e2c8b24f6 100644 --- a/sound/soc/soc-dapm.c +++ b/sound/soc/soc-dapm.c @@ -1145,8 +1145,8 @@ static __always_inline int is_connected_ep(struct snd_soc_dapm_widget *widget, list_add_tail(&widget->work_list, list); if (custom_stop_condition && custom_stop_condition(widget, dir)) { - widget->endpoints[dir] = 1; - return widget->endpoints[dir]; + list = NULL; + custom_stop_condition = NULL; } if ((widget->is_ep & SND_SOC_DAPM_DIR_TO_EP(dir)) && widget->connected) { @@ -1183,8 +1183,8 @@ static __always_inline int is_connected_ep(struct snd_soc_dapm_widget *widget, * * Optionally, can be supplied with a function acting as a stopping 
condition. * This function takes the dapm widget currently being examined and the walk - * direction as an arguments, it should return true if the walk should be - * stopped and false otherwise. + * direction as an arguments, it should return true if widgets from that point + * in the graph onwards should not be added to the widget list. */ static int is_connected_output_ep(struct snd_soc_dapm_widget *widget, struct list_head *list, @@ -2139,23 +2139,25 @@ void snd_soc_dapm_debugfs_init(struct snd_soc_dapm_context *dapm, { struct dentry *d; - if (!parent) + if (!parent || IS_ERR(parent)) return; dapm->debugfs_dapm = debugfs_create_dir("dapm", parent); - if (!dapm->debugfs_dapm) { + if (IS_ERR(dapm->debugfs_dapm)) { dev_warn(dapm->dev, - "ASoC: Failed to create DAPM debugfs directory\n"); + "ASoC: Failed to create DAPM debugfs directory %ld\n", + PTR_ERR(dapm->debugfs_dapm)); return; } d = debugfs_create_file("bias_level", 0444, dapm->debugfs_dapm, dapm, &dapm_bias_fops); - if (!d) + if (IS_ERR(d)) dev_warn(dapm->dev, - "ASoC: Failed to create bias level debugfs file\n"); + "ASoC: Failed to create bias level debugfs file: %ld\n", + PTR_ERR(d)); } static void dapm_debugfs_add_widget(struct snd_soc_dapm_widget *w) @@ -2169,10 +2171,10 @@ static void dapm_debugfs_add_widget(struct snd_soc_dapm_widget *w) d = debugfs_create_file(w->name, 0444, dapm->debugfs_dapm, w, &dapm_widget_power_fops); - if (!d) + if (IS_ERR(d)) dev_warn(w->dapm->dev, - "ASoC: Failed to create %s debugfs file\n", - w->name); + "ASoC: Failed to create %s debugfs file: %ld\n", + w->name, PTR_ERR(d)); } static void dapm_debugfs_cleanup(struct snd_soc_dapm_context *dapm) diff --git a/sound/sound_core.c b/sound/sound_core.c index 40ad000c2e3ca88f1e028a6c0acf84094b05c1a2..dd64c4b19f23935423c15f3151bb2bf84e1e9550 100644 --- a/sound/sound_core.c +++ b/sound/sound_core.c @@ -280,7 +280,8 @@ retry: goto retry; } spin_unlock(&sound_loader_lock); - return -EBUSY; + r = -EBUSY; + goto fail; } } diff --git a/sound/usb/hiface/pcm.c b/sound/usb/hiface/pcm.c index e1fbb9cc9ea7679f93da3b683833678826d54008..a197fc3b9ab08197f1d8344a2f32d9dc47320f9b 100644 --- a/sound/usb/hiface/pcm.c +++ b/sound/usb/hiface/pcm.c @@ -604,14 +604,13 @@ int hiface_pcm_init(struct hiface_chip *chip, u8 extra_freq) ret = hiface_pcm_init_urb(&rt->out_urbs[i], chip, OUT_EP, hiface_pcm_out_urb_handler); if (ret < 0) - return ret; + goto error; } ret = snd_pcm_new(chip->card, "USB-SPDIF Audio", 0, 1, 0, &pcm); if (ret < 0) { - kfree(rt); dev_err(&chip->dev->dev, "Cannot create pcm instance\n"); - return ret; + goto error; } pcm->private_data = rt; @@ -624,4 +623,10 @@ int hiface_pcm_init(struct hiface_chip *chip, u8 extra_freq) chip->pcm = rt; return 0; + +error: + for (i = 0; i < PCM_N_URBS; i++) + kfree(rt->out_urbs[i].buffer); + kfree(rt); + return ret; } diff --git a/sound/usb/line6/pcm.c b/sound/usb/line6/pcm.c index 78c2d6cab3b52f7341e7f02ee1e7e0d502f8679b..531564269444e9ff9f148eddd6ad466b9dde4c70 100644 --- a/sound/usb/line6/pcm.c +++ b/sound/usb/line6/pcm.c @@ -554,6 +554,15 @@ int line6_init_pcm(struct usb_line6 *line6, line6pcm->volume_monitor = 255; line6pcm->line6 = line6; + spin_lock_init(&line6pcm->out.lock); + spin_lock_init(&line6pcm->in.lock); + line6pcm->impulse_period = LINE6_IMPULSE_DEFAULT_PERIOD; + + line6->line6pcm = line6pcm; + + pcm->private_data = line6pcm; + pcm->private_free = line6_cleanup_pcm; + line6pcm->max_packet_size_in = usb_maxpacket(line6->usbdev, usb_rcvisocpipe(line6->usbdev, ep_read), 0); @@ -566,15 +575,6 @@ int 
line6_init_pcm(struct usb_line6 *line6, return -EINVAL; } - spin_lock_init(&line6pcm->out.lock); - spin_lock_init(&line6pcm->in.lock); - line6pcm->impulse_period = LINE6_IMPULSE_DEFAULT_PERIOD; - - line6->line6pcm = line6pcm; - - pcm->private_data = line6pcm; - pcm->private_free = line6_cleanup_pcm; - err = line6_create_audio_out_urbs(line6pcm); if (err < 0) return err; diff --git a/sound/usb/line6/podhd.c b/sound/usb/line6/podhd.c index 5f3c87264e66776049f436b45ce7e44f368db4e8..da627b015b32b9c9e14a5ab6c615dc46b53b062d 100644 --- a/sound/usb/line6/podhd.c +++ b/sound/usb/line6/podhd.c @@ -417,7 +417,7 @@ static const struct line6_properties podhd_properties_table[] = { .name = "POD HD500", .capabilities = LINE6_CAP_PCM | LINE6_CAP_HWMON, - .altsetting = 1, + .altsetting = 0, .ep_ctrl_r = 0x81, .ep_ctrl_w = 0x01, .ep_audio_r = 0x86, diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c index 7e1c6c2dc99e80096c006bf3861d8a01cd3a9d30..b0c5d4ef613740c0cc015e2cc96f9f9a8b723b6d 100644 --- a/sound/usb/mixer.c +++ b/sound/usb/mixer.c @@ -83,6 +83,7 @@ struct mixer_build { unsigned char *buffer; unsigned int buflen; DECLARE_BITMAP(unitbitmap, MAX_ID_ELEMS); + DECLARE_BITMAP(termbitmap, MAX_ID_ELEMS); struct usb_audio_term oterm; const struct usbmix_name_map *map; const struct usbmix_selector_map *selector_map; @@ -753,12 +754,13 @@ static int uac_mixer_unit_get_channels(struct mixer_build *state, struct uac_mixer_unit_descriptor *desc) { int mu_channels; - void *c; if (desc->bLength < sizeof(*desc)) return -EINVAL; if (!desc->bNrInPins) return -EINVAL; + if (desc->bLength < sizeof(*desc) + desc->bNrInPins) + return -EINVAL; switch (state->mixer->protocol) { case UAC_VERSION_1: @@ -774,13 +776,6 @@ static int uac_mixer_unit_get_channels(struct mixer_build *state, break; } - if (!mu_channels) - return 0; - - c = uac_mixer_unit_bmControls(desc, state->mixer->protocol); - if (c - (void *)desc + (mu_channels - 1) / 8 >= desc->bLength) - return 0; /* no bmControls -> skip */ - return mu_channels; } @@ -788,16 +783,25 @@ static int uac_mixer_unit_get_channels(struct mixer_build *state, * parse the source unit recursively until it reaches to a terminal * or a branched unit. */ -static int check_input_term(struct mixer_build *state, int id, +static int __check_input_term(struct mixer_build *state, int id, struct usb_audio_term *term) { int protocol = state->mixer->protocol; int err; void *p1; + unsigned char *hdr; memset(term, 0, sizeof(*term)); - while ((p1 = find_audio_control_unit(state, id)) != NULL) { - unsigned char *hdr = p1; + for (;;) { + /* a loop in the terminal chain? 
*/ + if (test_and_set_bit(id, state->termbitmap)) + return -EINVAL; + + p1 = find_audio_control_unit(state, id); + if (!p1) + break; + + hdr = p1; term->id = id; if (protocol == UAC_VERSION_1 || protocol == UAC_VERSION_2) { @@ -815,7 +819,7 @@ static int check_input_term(struct mixer_build *state, int id, /* call recursively to verify that the * referenced clock entity is valid */ - err = check_input_term(state, d->bCSourceID, term); + err = __check_input_term(state, d->bCSourceID, term); if (err < 0) return err; @@ -849,7 +853,7 @@ static int check_input_term(struct mixer_build *state, int id, case UAC2_CLOCK_SELECTOR: { struct uac_selector_unit_descriptor *d = p1; /* call recursively to retrieve the channel info */ - err = check_input_term(state, d->baSourceID[0], term); + err = __check_input_term(state, d->baSourceID[0], term); if (err < 0) return err; term->type = UAC3_SELECTOR_UNIT << 16; /* virtual type */ @@ -912,7 +916,7 @@ static int check_input_term(struct mixer_build *state, int id, /* call recursively to verify that the * referenced clock entity is valid */ - err = check_input_term(state, d->bCSourceID, term); + err = __check_input_term(state, d->bCSourceID, term); if (err < 0) return err; @@ -963,7 +967,7 @@ static int check_input_term(struct mixer_build *state, int id, case UAC3_CLOCK_SELECTOR: { struct uac_selector_unit_descriptor *d = p1; /* call recursively to retrieve the channel info */ - err = check_input_term(state, d->baSourceID[0], term); + err = __check_input_term(state, d->baSourceID[0], term); if (err < 0) return err; term->type = UAC3_SELECTOR_UNIT << 16; /* virtual type */ @@ -979,7 +983,7 @@ static int check_input_term(struct mixer_build *state, int id, return -EINVAL; /* call recursively to retrieve the channel info */ - err = check_input_term(state, d->baSourceID[0], term); + err = __check_input_term(state, d->baSourceID[0], term); if (err < 0) return err; @@ -997,6 +1001,15 @@ static int check_input_term(struct mixer_build *state, int id, return -ENODEV; } + +static int check_input_term(struct mixer_build *state, int id, + struct usb_audio_term *term) +{ + memset(term, 0, sizeof(*term)); + memset(state->termbitmap, 0, sizeof(state->termbitmap)); + return __check_input_term(state, id, term); +} + /* * Feature Unit */ @@ -2007,6 +2020,31 @@ static int parse_audio_feature_unit(struct mixer_build *state, int unitid, * Mixer Unit */ +/* check whether the given in/out overflows bmMixerControls matrix */ +static bool mixer_bitmap_overflow(struct uac_mixer_unit_descriptor *desc, + int protocol, int num_ins, int num_outs) +{ + u8 *hdr = (u8 *)desc; + u8 *c = uac_mixer_unit_bmControls(desc, protocol); + size_t rest; /* remaining bytes after bmMixerControls */ + + switch (protocol) { + case UAC_VERSION_1: + default: + rest = 1; /* iMixer */ + break; + case UAC_VERSION_2: + rest = 2; /* bmControls + iMixer */ + break; + case UAC_VERSION_3: + rest = 6; /* bmControls + wMixerDescrStr */ + break; + } + + /* overflow? 
*/ + return c + (num_ins * num_outs + 7) / 8 + rest > hdr + hdr[0]; +} + /* * build a mixer unit control * @@ -2135,6 +2173,9 @@ static int parse_audio_mixer_unit(struct mixer_build *state, int unitid, if (err < 0) return err; num_ins += iterm.channels; + if (mixer_bitmap_overflow(desc, state->mixer->protocol, + num_ins, num_outs)) + break; for (; ich < num_ins; ich++) { int och, ich_has_controls = 0; diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c index 5b342fe30c7516b4053035471130e90ab6f96e1e..10c6971cf4772c931272f37cc165573457f24ea8 100644 --- a/sound/usb/mixer_quirks.c +++ b/sound/usb/mixer_quirks.c @@ -1167,17 +1167,17 @@ void snd_emuusb_set_samplerate(struct snd_usb_audio *chip, { struct usb_mixer_interface *mixer; struct usb_mixer_elem_info *cval; - int unitid = 12; /* SamleRate ExtensionUnit ID */ + int unitid = 12; /* SampleRate ExtensionUnit ID */ list_for_each_entry(mixer, &chip->mixer_list, list) { - cval = mixer_elem_list_to_info(mixer->id_elems[unitid]); - if (cval) { + if (mixer->id_elems[unitid]) { + cval = mixer_elem_list_to_info(mixer->id_elems[unitid]); snd_usb_mixer_set_ctl_value(cval, UAC_SET_CUR, cval->control << 8, samplerate_id); snd_usb_mixer_notify_id(mixer, unitid); + break; } - break; } } diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c index db114f3977e0fb4044e88c028f9dea9bad6382af..35c57a4204a8a4552b8820deafa7e16cc84dc1e7 100644 --- a/sound/usb/pcm.c +++ b/sound/usb/pcm.c @@ -350,6 +350,7 @@ static int set_sync_ep_implicit_fb_quirk(struct snd_usb_substream *subs, ep = 0x81; ifnum = 2; goto add_sync_ep_from_ifnum; + case USB_ID(0x1397, 0x0001): /* Behringer UFX1604 */ case USB_ID(0x1397, 0x0002): /* Behringer UFX1204 */ ep = 0x81; ifnum = 1; diff --git a/sound/usb/stream.c b/sound/usb/stream.c index d9e3de495c163a3bbca99164fe25b047c055d882..bc582202bd10199cf465324d84f3a1a132019cb2 100644 --- a/sound/usb/stream.c +++ b/sound/usb/stream.c @@ -1053,6 +1053,7 @@ found_clock: pd = kzalloc(sizeof(*pd), GFP_KERNEL); if (!pd) { + kfree(fp->chmap); kfree(fp->rate_table); kfree(fp); return NULL; diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c index 87439320ef70e1ce5f17d437ff1d882897ccab78..73d7252729fad65bb2cdbda96ad625156bfd516d 100644 --- a/tools/bpf/bpftool/jit_disasm.c +++ b/tools/bpf/bpftool/jit_disasm.c @@ -10,6 +10,8 @@ * Licensed under the GNU General Public License, version 2.0 (GPLv2) */ +#define _GNU_SOURCE +#include #include #include #include @@ -51,11 +53,13 @@ static int fprintf_json(void *out, const char *fmt, ...) char *s; va_start(ap, fmt); + if (vasprintf(&s, fmt, ap) < 0) + return -1; + va_end(ap); + if (!oper_count) { int i; - s = va_arg(ap, char *); - /* Strip trailing spaces */ i = strlen(s) - 1; while (s[i] == ' ') @@ -68,11 +72,10 @@ static int fprintf_json(void *out, const char *fmt, ...) 
} else if (!strcmp(fmt, ",")) { /* Skip */ } else { - s = va_arg(ap, char *); jsonw_string(json_wtr, s); oper_count++; } - va_end(ap); + free(s); return 0; } diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c index d7e06fe0270eefb29343b82ce4971765c2a981d5..0ce50c319cfd644acf387c5588764d6c31cd207c 100644 --- a/tools/hv/hv_kvp_daemon.c +++ b/tools/hv/hv_kvp_daemon.c @@ -1386,6 +1386,8 @@ int main(int argc, char *argv[]) daemonize = 0; break; case 'h': + print_usage(argv); + exit(0); default: print_usage(argv); exit(EXIT_FAILURE); diff --git a/tools/hv/hv_vss_daemon.c b/tools/hv/hv_vss_daemon.c index b1330017276236b91ca2a5be84a3ce48dc07e4f9..c2bb8a360177724659e3d66619a9a21707b96602 100644 --- a/tools/hv/hv_vss_daemon.c +++ b/tools/hv/hv_vss_daemon.c @@ -229,6 +229,8 @@ int main(int argc, char *argv[]) daemonize = 0; break; case 'h': + print_usage(argv); + exit(0); default: print_usage(argv); exit(EXIT_FAILURE); diff --git a/tools/hv/lsvmbus b/tools/hv/lsvmbus index 55e7374bade0d5c581fbee36e3183b77b057858d..099f2c44dbed26e9b6fb8fa9e242fe8ed24ce825 100644 --- a/tools/hv/lsvmbus +++ b/tools/hv/lsvmbus @@ -4,10 +4,10 @@ import os from optparse import OptionParser +help_msg = "print verbose messages. Try -vv, -vvv for more verbose messages" parser = OptionParser() -parser.add_option("-v", "--verbose", dest="verbose", - help="print verbose messages. Try -vv, -vvv for \ - more verbose messages", action="count") +parser.add_option( + "-v", "--verbose", dest="verbose", help=help_msg, action="count") (options, args) = parser.parse_args() @@ -21,27 +21,28 @@ if not os.path.isdir(vmbus_sys_path): exit(-1) vmbus_dev_dict = { - '{0e0b6031-5213-4934-818b-38d90ced39db}' : '[Operating system shutdown]', - '{9527e630-d0ae-497b-adce-e80ab0175caf}' : '[Time Synchronization]', - '{57164f39-9115-4e78-ab55-382f3bd5422d}' : '[Heartbeat]', - '{a9a0f4e7-5a45-4d96-b827-8a841e8c03e6}' : '[Data Exchange]', - '{35fa2e29-ea23-4236-96ae-3a6ebacba440}' : '[Backup (volume checkpoint)]', - '{34d14be3-dee4-41c8-9ae7-6b174977c192}' : '[Guest services]', - '{525074dc-8985-46e2-8057-a307dc18a502}' : '[Dynamic Memory]', - '{cfa8b69e-5b4a-4cc0-b98b-8ba1a1f3f95a}' : 'Synthetic mouse', - '{f912ad6d-2b17-48ea-bd65-f927a61c7684}' : 'Synthetic keyboard', - '{da0a7802-e377-4aac-8e77-0558eb1073f8}' : 'Synthetic framebuffer adapter', - '{f8615163-df3e-46c5-913f-f2d2f965ed0e}' : 'Synthetic network adapter', - '{32412632-86cb-44a2-9b5c-50d1417354f5}' : 'Synthetic IDE Controller', - '{ba6163d9-04a1-4d29-b605-72e2ffb1dc7f}' : 'Synthetic SCSI Controller', - '{2f9bcc4a-0069-4af3-b76b-6fd0be528cda}' : 'Synthetic fiber channel adapter', - '{8c2eaf3d-32a7-4b09-ab99-bd1f1c86b501}' : 'Synthetic RDMA adapter', - '{44c4f61d-4444-4400-9d52-802e27ede19f}' : 'PCI Express pass-through', - '{276aacf4-ac15-426c-98dd-7521ad3f01fe}' : '[Reserved system device]', - '{f8e65716-3cb3-4a06-9a60-1889c5cccab5}' : '[Reserved system device]', - '{3375baf4-9e15-4b30-b765-67acb10d607b}' : '[Reserved system device]', + '{0e0b6031-5213-4934-818b-38d90ced39db}': '[Operating system shutdown]', + '{9527e630-d0ae-497b-adce-e80ab0175caf}': '[Time Synchronization]', + '{57164f39-9115-4e78-ab55-382f3bd5422d}': '[Heartbeat]', + '{a9a0f4e7-5a45-4d96-b827-8a841e8c03e6}': '[Data Exchange]', + '{35fa2e29-ea23-4236-96ae-3a6ebacba440}': '[Backup (volume checkpoint)]', + '{34d14be3-dee4-41c8-9ae7-6b174977c192}': '[Guest services]', + '{525074dc-8985-46e2-8057-a307dc18a502}': '[Dynamic Memory]', + '{cfa8b69e-5b4a-4cc0-b98b-8ba1a1f3f95a}': 'Synthetic mouse', + 
'{f912ad6d-2b17-48ea-bd65-f927a61c7684}': 'Synthetic keyboard', + '{da0a7802-e377-4aac-8e77-0558eb1073f8}': 'Synthetic framebuffer adapter', + '{f8615163-df3e-46c5-913f-f2d2f965ed0e}': 'Synthetic network adapter', + '{32412632-86cb-44a2-9b5c-50d1417354f5}': 'Synthetic IDE Controller', + '{ba6163d9-04a1-4d29-b605-72e2ffb1dc7f}': 'Synthetic SCSI Controller', + '{2f9bcc4a-0069-4af3-b76b-6fd0be528cda}': 'Synthetic fiber channel adapter', + '{8c2eaf3d-32a7-4b09-ab99-bd1f1c86b501}': 'Synthetic RDMA adapter', + '{44c4f61d-4444-4400-9d52-802e27ede19f}': 'PCI Express pass-through', + '{276aacf4-ac15-426c-98dd-7521ad3f01fe}': '[Reserved system device]', + '{f8e65716-3cb3-4a06-9a60-1889c5cccab5}': '[Reserved system device]', + '{3375baf4-9e15-4b30-b765-67acb10d607b}': '[Reserved system device]', } + def get_vmbus_dev_attr(dev_name, attr): try: f = open('%s/%s/%s' % (vmbus_sys_path, dev_name, attr), 'r') @@ -52,6 +53,7 @@ def get_vmbus_dev_attr(dev_name, attr): return lines + class VMBus_Dev: pass @@ -66,12 +68,13 @@ for f in os.listdir(vmbus_sys_path): chn_vp_mapping = get_vmbus_dev_attr(f, 'channel_vp_mapping') chn_vp_mapping = [c.strip() for c in chn_vp_mapping] - chn_vp_mapping = sorted(chn_vp_mapping, - key = lambda c : int(c.split(':')[0])) + chn_vp_mapping = sorted( + chn_vp_mapping, key=lambda c: int(c.split(':')[0])) - chn_vp_mapping = ['\tRel_ID=%s, target_cpu=%s' % - (c.split(':')[0], c.split(':')[1]) - for c in chn_vp_mapping] + chn_vp_mapping = [ + '\tRel_ID=%s, target_cpu=%s' % + (c.split(':')[0], c.split(':')[1]) for c in chn_vp_mapping + ] d = VMBus_Dev() d.sysfs_path = '%s/%s' % (vmbus_sys_path, f) d.vmbus_id = vmbus_id @@ -85,7 +88,7 @@ for f in os.listdir(vmbus_sys_path): vmbus_dev_list.append(d) -vmbus_dev_list = sorted(vmbus_dev_list, key = lambda d : int(d.vmbus_id)) +vmbus_dev_list = sorted(vmbus_dev_list, key=lambda d: int(d.vmbus_id)) format0 = '%2s: %s' format1 = '%2s: Class_ID = %s - %s\n%s' @@ -95,9 +98,15 @@ for d in vmbus_dev_list: if verbose == 0: print(('VMBUS ID ' + format0) % (d.vmbus_id, d.dev_desc)) elif verbose == 1: - print (('VMBUS ID ' + format1) % \ - (d.vmbus_id, d.class_id, d.dev_desc, d.chn_vp_mapping)) + print( + ('VMBUS ID ' + format1) % + (d.vmbus_id, d.class_id, d.dev_desc, d.chn_vp_mapping) + ) else: - print (('VMBUS ID ' + format2) % \ - (d.vmbus_id, d.class_id, d.dev_desc, \ - d.device_id, d.sysfs_path, d.chn_vp_mapping)) + print( + ('VMBUS ID ' + format2) % + ( + d.vmbus_id, d.class_id, d.dev_desc, + d.device_id, d.sysfs_path, d.chn_vp_mapping + ) + ) diff --git a/tools/iio/iio_utils.c b/tools/iio/iio_utils.c index 7a6d61c6c0121f9c9cab0021602021c1eb44b8e6..55272fef3b508ef5bd016f643b138e41d4c27f67 100644 --- a/tools/iio/iio_utils.c +++ b/tools/iio/iio_utils.c @@ -159,9 +159,9 @@ int iioutils_get_type(unsigned *is_signed, unsigned *bytes, unsigned *bits_used, *be = (endianchar == 'b'); *bytes = padint / 8; if (*bits_used == 64) - *mask = ~0; + *mask = ~(0ULL); else - *mask = (1ULL << *bits_used) - 1; + *mask = (1ULL << *bits_used) - 1ULL; *is_signed = (signchar == 's'); if (fclose(sysfsfp)) { diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 66917a4eba2716ebdef75a78e854f4e9874c88b9..bf4cd924aed555320859dfebddd1427cd4b2a4a9 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -2484,6 +2484,7 @@ struct bpf_prog_info { char name[BPF_OBJ_NAME_LEN]; __u32 ifindex; __u32 gpl_compatible:1; + __u32 :31; /* alignment pad */ __u64 netns_dev; __u64 netns_ino; __u32 nr_jited_ksyms; diff --git 
a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index bdb94939fd602750f882dd0baac261f22da899a6..a350f97e3a1a4eb571c3355e01b77cc976787577 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -2293,10 +2293,7 @@ int bpf_prog_load(const char *file, enum bpf_prog_type type,
 int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
 			struct bpf_object **pobj, int *prog_fd)
 {
-	struct bpf_object_open_attr open_attr = {
-		.file = attr->file,
-		.prog_type = attr->prog_type,
-	};
+	struct bpf_object_open_attr open_attr = {};
 	struct bpf_program *prog, *first_prog = NULL;
 	enum bpf_attach_type expected_attach_type;
 	enum bpf_prog_type prog_type;
@@ -2309,6 +2306,9 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
 	if (!attr->file)
 		return -EINVAL;
 
+	open_attr.file = attr->file;
+	open_attr.prog_type = attr->prog_type;
+
 	obj = bpf_object__open_xattr(&open_attr);
 	if (IS_ERR_OR_NULL(obj))
 		return -ENOENT;
diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index abed594a9653b0de7a200344a03ac9c92bcfe686..b8f3cca8e58b4ec327876c7fd3173a4a3ae6c31d 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -305,7 +305,7 @@ static int read_symbols(struct elf *elf)
 		if (sym->type != STT_FUNC)
 			continue;
 		sym->pfunc = sym->cfunc = sym;
-		coldstr = strstr(sym->name, ".cold.");
+		coldstr = strstr(sym->name, ".cold");
 		if (!coldstr)
 			continue;
diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c
index 2f595cd73da662be982a71f130f045f734c29fce..16af6c3b1365b3af0760e9dda3663a9102d7ba5a 100644
--- a/tools/perf/arch/arm/util/cs-etm.c
+++ b/tools/perf/arch/arm/util/cs-etm.c
@@ -32,6 +32,8 @@ struct cs_etm_recording {
 	struct auxtrace_record	itr;
 	struct perf_pmu		*cs_etm_pmu;
 	struct perf_evlist	*evlist;
+	int			wrapped_cnt;
+	bool			*wrapped;
 	bool			snapshot_mode;
 	size_t			snapshot_size;
 };
@@ -495,16 +497,131 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
 	return 0;
 }
 
-static int cs_etm_find_snapshot(struct auxtrace_record *itr __maybe_unused,
+static int cs_etm_alloc_wrapped_array(struct cs_etm_recording *ptr, int idx)
+{
+	bool *wrapped;
+	int cnt = ptr->wrapped_cnt;
+
+	/* Make @ptr->wrapped as big as @idx */
+	while (cnt <= idx)
+		cnt++;
+
+	/*
+	 * Free'ed in cs_etm_recording_free().  Using realloc() to avoid
+	 * cross compilation problems where the host's system supports
+	 * reallocarray() but not the target.
+	 */
+	wrapped = realloc(ptr->wrapped, cnt * sizeof(bool));
+	if (!wrapped)
+		return -ENOMEM;
+
+	wrapped[cnt - 1] = false;
+	ptr->wrapped_cnt = cnt;
+	ptr->wrapped = wrapped;
+
+	return 0;
+}
+
+static bool cs_etm_buffer_has_wrapped(unsigned char *buffer,
+				      size_t buffer_size, u64 head)
+{
+	u64 i, watermark;
+	u64 *buf = (u64 *)buffer;
+	size_t buf_size = buffer_size;
+
+	/*
+	 * We want to look at the very last 512 bytes (chosen arbitrarily) in
+	 * the ring buffer.
+	 */
+	watermark = buf_size - 512;
+
+	/*
+	 * @head is continuously increasing - if its value is equal or greater
+	 * than the size of the ring buffer, it has wrapped around.
+	 */
+	if (head >= buffer_size)
+		return true;
+
+	/*
+	 * The value of @head is somewhere within the size of the ring buffer.
+	 * This can mean that there hasn't been enough data to fill the ring
+	 * buffer yet or the trace time was so long that @head has numerically
+	 * wrapped around.  To find out, we need to check if we have data at
+	 * the very end of the ring buffer.  We can reliably do this because
+	 * mmap'ed pages are zeroed out and there is a fresh mapping with
+	 * every new session.
+	 */
+
+	/* @head is less than 512 bytes from the end of the ring buffer */
+	if (head > watermark)
+		watermark = head;
+
+	/*
+	 * Speed things up by using 64 bit transactions (see "u64 *buf" above)
+	 */
+	watermark >>= 3;
+	buf_size >>= 3;
+
+	/*
+	 * If we find trace data at the end of the ring buffer, @head has
+	 * been there and has numerically wrapped around at least once.
+	 */
+	for (i = watermark; i < buf_size; i++)
+		if (buf[i])
+			return true;
+
+	return false;
+}
+
+static int cs_etm_find_snapshot(struct auxtrace_record *itr,
 				int idx, struct auxtrace_mmap *mm,
-				unsigned char *data __maybe_unused,
+				unsigned char *data,
 				u64 *head, u64 *old)
 {
+	int err;
+	bool wrapped;
+	struct cs_etm_recording *ptr =
+			container_of(itr, struct cs_etm_recording, itr);
+
+	/*
+	 * Allocate memory to keep track of wrapping if this is the first
+	 * time we deal with this *mm.
+	 */
+	if (idx >= ptr->wrapped_cnt) {
+		err = cs_etm_alloc_wrapped_array(ptr, idx);
+		if (err)
+			return err;
+	}
+
+	/*
+	 * Check to see if *head has wrapped around.  If it hasn't, only the
+	 * amount of data between *head and *old is snapshot'ed to avoid
+	 * bloating the perf.data file with zeros.  But as soon as *head has
+	 * wrapped around, the entire size of the AUX ring buffer is taken.
+	 */
+	wrapped = ptr->wrapped[idx];
+	if (!wrapped && cs_etm_buffer_has_wrapped(data, mm->len, *head)) {
+		wrapped = true;
+		ptr->wrapped[idx] = true;
+	}
+
 	pr_debug3("%s: mmap index %d old head %zu new head %zu size %zu\n",
 		  __func__, idx, (size_t)*old, (size_t)*head, mm->len);
 
-	*old = *head;
-	*head += mm->len;
+	/* No wrap has occurred, we can just use *head and *old. */
+	if (!wrapped)
+		return 0;
+
+	/*
+	 * *head has wrapped around - adjust *head and *old to pick up the
+	 * entire content of the AUX buffer.
+	 */
+	if (*head >= mm->len) {
+		*old = *head - mm->len;
+	} else {
+		*head += mm->len;
+		*old = *head - mm->len;
+	}
 
 	return 0;
 }
@@ -545,6 +662,8 @@ static void cs_etm_recording_free(struct auxtrace_record *itr)
 {
 	struct cs_etm_recording *ptr =
 			container_of(itr, struct cs_etm_recording, itr);
+
+	zfree(&ptr->wrapped);
 	free(ptr);
 }
diff --git a/tools/perf/arch/s390/util/machine.c b/tools/perf/arch/s390/util/machine.c
index a19690a17291ebb52a9f0bccf97a356f4af6ade0..c8c86a0c9b793d6f5173d9bc2fa7bb60f7d23f6e 100644
--- a/tools/perf/arch/s390/util/machine.c
+++ b/tools/perf/arch/s390/util/machine.c
@@ -6,8 +6,9 @@
 #include "machine.h"
 #include "api/fs/fs.h"
 #include "debug.h"
+#include "symbol.h"
 
-int arch__fix_module_text_start(u64 *start, const char *name)
+int arch__fix_module_text_start(u64 *start, u64 *size, const char *name)
 {
 	u64 m_start = *start;
 	char path[PATH_MAX];
@@ -17,7 +18,35 @@ int arch__fix_module_text_start(u64 *start, const char *name)
 	if (sysfs__read_ull(path, (unsigned long long *)start) < 0) {
 		pr_debug2("Using module %s start:%#lx\n", path, m_start);
 		*start = m_start;
+	} else {
+		/* Successful read of the module's segment text start address.
+		 * Calculate difference between module start address
+		 * in memory and module text segment start address.
+		 * For example module load address is 0x3ff8011b000
+		 * (from /proc/modules) and module text segment start
+		 * address is 0x3ff8011b870 (from file above).
+		 *
+		 * Adjust the module size and subtract the GOT table
+		 * size located at the beginning of the module.
+		 */
+		*size -= (*start - m_start);
 	}
 
 	return 0;
 }
+
+/* On s390 kernel text segment start is located at very low memory addresses,
+ * for example 0x10000.
Modules are located at very high memory addresses, + * for example 0x3ff xxxx xxxx. The gap between end of kernel text segment + * and beginning of first module's text segment is very big. + * Therefore do not fill this gap and do not assign it to the kernel dso map. + */ +void arch__symbols__fixup_end(struct symbol *p, struct symbol *c) +{ + if (strchr(p->name, '[') == NULL && strchr(c->name, '[')) + /* Last kernel symbol mapped to end of page */ + p->end = roundup(p->end, page_size); + else + p->end = c->start; + pr_debug4("%s sym:%s end:%#lx\n", __func__, p->name, p->end); +} diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c index fa56fde6e8d8036921e8b012bc69ae9afb70581a..91c0a4434da2767a6eedef5d0b3d0d90d274b368 100644 --- a/tools/perf/bench/numa.c +++ b/tools/perf/bench/numa.c @@ -378,8 +378,10 @@ static u8 *alloc_data(ssize_t bytes0, int map_flags, /* Allocate and initialize all memory on CPU#0: */ if (init_cpu0) { - orig_mask = bind_to_node(0); - bind_to_memnode(0); + int node = numa_node_of_cpu(0); + + orig_mask = bind_to_node(node); + bind_to_memnode(node); } bytes = bytes0 + HPSIZE; diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c index f42f228e88992bd037474ae4c20df6934bfd8674..137955197ba8dd0bb4490b651986c8bc24ffd326 100644 --- a/tools/perf/builtin-ftrace.c +++ b/tools/perf/builtin-ftrace.c @@ -174,7 +174,7 @@ static int set_tracing_cpumask(struct cpu_map *cpumap) int last_cpu; last_cpu = cpu_map__cpu(cpumap, cpumap->nr - 1); - mask_size = (last_cpu + 3) / 4 + 1; + mask_size = last_cpu / 4 + 2; /* one more byte for EOS */ mask_size += last_cpu / 32; /* ',' is needed for every 32th cpus */ cpumask = malloc(mask_size); diff --git a/tools/perf/builtin-probe.c b/tools/perf/builtin-probe.c index 99de91698de1e544a68a607ba11c1bc3fc08ba73..0bdb34fee9d8187c6de233cab830d92f72b60cbc 100644 --- a/tools/perf/builtin-probe.c +++ b/tools/perf/builtin-probe.c @@ -711,6 +711,16 @@ __cmd_probe(int argc, const char **argv) ret = perf_add_probe_events(params.events, params.nevents); if (ret < 0) { + + /* + * When perf_add_probe_events() fails it calls + * cleanup_perf_probe_events(pevs, npevs), i.e. + * cleanup_perf_probe_events(params.events, params.nevents), which + * will call clear_perf_probe_event(), so set nevents to zero + * to avoid cleanup_params() to call clear_perf_probe_event() again + * on the same pevs. 
+ */ + params.nevents = 0; pr_err_with_code(" Error: Failed to add events.", ret); return ret; } diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c index 40720150ccd80670e49285d3a0a49c2d184b45e3..789962565c9c5aa40cc545a8dc770b838df3cdff 100644 --- a/tools/perf/builtin-stat.c +++ b/tools/perf/builtin-stat.c @@ -2497,8 +2497,8 @@ static int add_default_attributes(void) fprintf(stderr, "Cannot set up top down events %s: %d\n", str, err); - free(str); parse_events_print_error(&errinfo, str); + free(str); return -1; } } else { diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c index 33eefc33e0ea9fc49a7d9b1568c3853a29653f9e..d0733251a386e75ac86981303936e1075bc5c898 100644 --- a/tools/perf/builtin-top.c +++ b/tools/perf/builtin-top.c @@ -99,7 +99,7 @@ static void perf_top__resize(struct perf_top *top) static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he) { - struct perf_evsel *evsel = hists_to_evsel(he->hists); + struct perf_evsel *evsel; struct symbol *sym; struct annotation *notes; struct map *map; @@ -108,6 +108,8 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he) if (!he || !he->ms.sym) return -1; + evsel = hists_to_evsel(he->hists); + sym = he->ms.sym; map = he->ms.map; @@ -224,7 +226,7 @@ static void perf_top__record_precise_ip(struct perf_top *top, static void perf_top__show_details(struct perf_top *top) { struct hist_entry *he = top->sym_filter_entry; - struct perf_evsel *evsel = hists_to_evsel(he->hists); + struct perf_evsel *evsel; struct annotation *notes; struct symbol *symbol; int more; @@ -232,6 +234,8 @@ static void perf_top__show_details(struct perf_top *top) if (!he) return; + evsel = hists_to_evsel(he->hists); + symbol = he->ms.sym; notes = symbol__annotation(symbol); diff --git a/tools/perf/builtin-version.c b/tools/perf/builtin-version.c index 50df168be326d84cba4e5cfbc26ea8a119d392cf..b02c961046403ef3763891e5c7f622489637f8e2 100644 --- a/tools/perf/builtin-version.c +++ b/tools/perf/builtin-version.c @@ -19,6 +19,7 @@ static struct version version; static struct option version_options[] = { OPT_BOOLEAN(0, "build-options", &version.build_options, "display the build options"), + OPT_END(), }; static const char * const version_usage[] = { diff --git a/tools/perf/jvmti/libjvmti.c b/tools/perf/jvmti/libjvmti.c index 6add3e9826141346a34a93b9e0a970a22720e361..3361d98a4edd6783a5fef158f6a1c495e28c37fb 100644 --- a/tools/perf/jvmti/libjvmti.c +++ b/tools/perf/jvmti/libjvmti.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 #include +#include #include #include #include @@ -150,8 +151,7 @@ copy_class_filename(const char * class_sign, const char * file_name, char * resu result[i] = '\0'; } else { /* fallback case */ - size_t file_name_len = strlen(file_name); - strncpy(result, file_name, file_name_len < max_length ? 
file_name_len : max_length);
+		strlcpy(result, file_name, max_length);
 	}
 }
diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index 21bf7f5a3cf51a1a42e3169daa738c8e7e0a8d83..19d435a9623b542cc6b53411eb91b92276731e3b 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -26,7 +26,7 @@ static inline unsigned long long rdclock(void)
 }
 
 #ifndef MAX_NR_CPUS
-#define MAX_NR_CPUS 1024
+#define MAX_NR_CPUS 2048
 #endif
 
 extern const char *input_name;
diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
index 68c92bb599eef708228287e0d1c6728429a67ced..6b36b7110669569475e32fc7281abbaac7a980ad 100644
--- a/tools/perf/pmu-events/jevents.c
+++ b/tools/perf/pmu-events/jevents.c
@@ -450,6 +450,7 @@ static struct fixed {
 	{ "inst_retired.any_p", "event=0xc0" },
 	{ "cpu_clk_unhalted.ref", "event=0x0,umask=0x03" },
 	{ "cpu_clk_unhalted.thread", "event=0x3c" },
+	{ "cpu_clk_unhalted.core", "event=0x3c" },
 	{ "cpu_clk_unhalted.thread_any", "event=0x3c,any=1" },
 	{ NULL, NULL},
 };
diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c
index b1af2499a3c972a97ddbe766f0fe272716a9fca0..7a9b123c7bfcfe6376bdda54c5b7ba57e76d5c25 100644
--- a/tools/perf/tests/mmap-thread-lookup.c
+++ b/tools/perf/tests/mmap-thread-lookup.c
@@ -52,7 +52,7 @@ static void *thread_fn(void *arg)
 {
 	struct thread_data *td = arg;
 	ssize_t ret;
-	int go;
+	int go = 0;
 
 	if (thread_init(td))
 		return NULL;
diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c
index 3b97ac018d5aac5d6505924e508a5451888c6cd7..532c95e8fa6b492937618490fa0dd883a2556c9b 100644
--- a/tools/perf/tests/parse-events.c
+++ b/tools/perf/tests/parse-events.c
@@ -18,6 +18,32 @@
 #define PERF_TP_SAMPLE_TYPE (PERF_SAMPLE_RAW | PERF_SAMPLE_TIME | \
 			     PERF_SAMPLE_CPU | PERF_SAMPLE_PERIOD)
 
+#if defined(__s390x__)
+/* Return true if kvm module is available and loaded. Test this
+ * and return success when trace point kvm_s390_create_vm
+ * exists. Otherwise this test always fails.
+ */ +static bool kvm_s390_create_vm_valid(void) +{ + char *eventfile; + bool rc = false; + + eventfile = get_events_file("kvm-s390"); + + if (eventfile) { + DIR *mydir = opendir(eventfile); + + if (mydir) { + rc = true; + closedir(mydir); + } + put_events_file(eventfile); + } + + return rc; +} +#endif + static int test__checkevent_tracepoint(struct perf_evlist *evlist) { struct perf_evsel *evsel = perf_evlist__first(evlist); @@ -1622,6 +1648,7 @@ static struct evlist_test test__events[] = { { .name = "kvm-s390:kvm_s390_create_vm", .check = test__checkevent_tracepoint, + .valid = kvm_s390_create_vm_valid, .id = 100, }, #endif diff --git a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh index cab7b0aea6eabbec7575db5bcf6a2cffbe93a762..f5837f28f3af4875ecc0bf127f7a99238c1feaad 100755 --- a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh +++ b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh @@ -43,7 +43,7 @@ trace_libc_inet_pton_backtrace() { eventattr='max-stack=4' echo "gaih_inet.*\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected - echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected + echo ".*(\+0x[[:xdigit:]]+|\[unknown\])[[:space:]]\(.*/bin/ping.*\)$" >> $expected ;; *) eventattr='max-stack=3' diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c index 1d00e5ec7906ebaf23b57a5d9e1471ed1cbab465..a3c255228d62469e485fd17589b651bc0d33e85f 100644 --- a/tools/perf/ui/browsers/annotate.c +++ b/tools/perf/ui/browsers/annotate.c @@ -96,11 +96,12 @@ static void annotate_browser__write(struct ui_browser *browser, void *entry, int struct annotate_browser *ab = container_of(browser, struct annotate_browser, b); struct annotation *notes = browser__annotation(browser); struct annotation_line *al = list_entry(entry, struct annotation_line, node); + const bool is_current_entry = ui_browser__is_current_entry(browser, row); struct annotation_write_ops ops = { .first_line = row == 0, - .current_entry = ui_browser__is_current_entry(browser, row), + .current_entry = is_current_entry, .change_color = (!notes->options->hide_src_code && - (!ops.current_entry || + (!is_current_entry || (browser->use_navkeypressed && !browser->navkeypressed))), .width = browser->width, diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c index a96f62ca984ac8077fe4171bc481ec48fcc2ed43..692d2fa31c351b0ea681ffc1b285b05f591bf31c 100644 --- a/tools/perf/ui/browsers/hists.c +++ b/tools/perf/ui/browsers/hists.c @@ -633,7 +633,11 @@ int hist_browser__run(struct hist_browser *browser, const char *help, switch (key) { case K_TIMER: { u64 nr_entries; - hbt->timer(hbt->arg); + + WARN_ON_ONCE(!hbt); + + if (hbt) + hbt->timer(hbt->arg); if (hist_browser__has_filter(browser) || symbol_conf.report_hierarchy) @@ -2707,7 +2711,7 @@ static int perf_evsel__hists_browse(struct perf_evsel *evsel, int nr_events, { struct hists *hists = evsel__hists(evsel); struct hist_browser *browser = perf_evsel_browser__new(evsel, hbt, env, annotation_opts); - struct branch_info *bi; + struct branch_info *bi = NULL; #define MAX_OPTIONS 16 char *options[MAX_OPTIONS]; struct popup_action actions[MAX_OPTIONS]; @@ -2973,7 +2977,9 @@ static int perf_evsel__hists_browse(struct perf_evsel *evsel, int nr_events, goto skip_annotation; if (sort__mode == SORT_MODE__BRANCH) { - bi = browser->he_selection->branch_info; + + if (browser->he_selection) + bi = 
browser->he_selection->branch_info; if (bi == NULL) goto skip_annotation; @@ -3144,7 +3150,8 @@ static int perf_evsel_menu__run(struct perf_evsel_menu *menu, switch (key) { case K_TIMER: - hbt->timer(hbt->arg); + if (hbt) + hbt->timer(hbt->arg); if (!menu->lost_events_warned && menu->lost_events && diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c index dfee110b3a589d998287b3ec1bdd713dbd65a40c..daea1fdf738566891175747d9b54c7903a9f3f2e 100644 --- a/tools/perf/util/annotate.c +++ b/tools/perf/util/annotate.c @@ -911,9 +911,8 @@ static int symbol__inc_addr_samples(struct symbol *sym, struct map *map, if (sym == NULL) return 0; src = symbol__hists(sym, evsel->evlist->nr_entries); - if (src == NULL) - return -ENOMEM; - return __symbol__inc_addr_samples(sym, map, src, evsel->idx, addr, sample); + return (src) ? __symbol__inc_addr_samples(sym, map, src, evsel->idx, + addr, sample) : 0; } static int symbol__account_cycles(u64 addr, u64 start, @@ -1080,16 +1079,14 @@ static int disasm_line__parse(char *line, const char **namep, char **rawp) *namep = strdup(name); if (*namep == NULL) - goto out_free_name; + goto out; (*rawp)[0] = tmp; *rawp = ltrim(*rawp); return 0; -out_free_name: - free((void *)namep); - *namep = NULL; +out: return -1; } diff --git a/tools/perf/util/cpumap.c b/tools/perf/util/cpumap.c index 383674f448fcd67997afe9f07ae0a7fbb48f2f45..f93846edc1e0d463a685f5114e5b798af02a0608 100644 --- a/tools/perf/util/cpumap.c +++ b/tools/perf/util/cpumap.c @@ -701,7 +701,10 @@ size_t cpu_map__snprint_mask(struct cpu_map *map, char *buf, size_t size) unsigned char *bitmap; int last_cpu = cpu_map__cpu(map, map->nr - 1); - bitmap = zalloc((last_cpu + 7) / 8); + if (buf == NULL) + return 0; + + bitmap = zalloc(last_cpu / 8 + 1); if (bitmap == NULL) { buf[0] = '\0'; return 0; diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c index b65ad5a273eb1ff1f2579a926e826ba872b9a747..4fad92213609f3923a12daf74df70caf412765e3 100644 --- a/tools/perf/util/evsel.c +++ b/tools/perf/util/evsel.c @@ -590,6 +590,9 @@ const char *perf_evsel__name(struct perf_evsel *evsel) { char bf[128]; + if (!evsel) + goto out_unknown; + if (evsel->name) return evsel->name; @@ -626,7 +629,10 @@ const char *perf_evsel__name(struct perf_evsel *evsel) evsel->name = strdup(bf); - return evsel->name ?: "unknown"; + if (evsel->name) + return evsel->name; +out_unknown: + return "unknown"; } const char *perf_evsel__group_name(struct perf_evsel *evsel) diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c index b9a82598e2ac3970fd298b3c3c44bb875f1c58a2..54c34c107cab5cde892f1c47e4fd5570af6fc8fc 100644 --- a/tools/perf/util/header.c +++ b/tools/perf/util/header.c @@ -1173,7 +1173,7 @@ static int build_caches(struct cpu_cache_level caches[], u32 size, u32 *cntp) return 0; } -#define MAX_CACHES 2000 +#define MAX_CACHES (MAX_NR_CPUS * 4) static int write_cache(struct feat_fd *ff, struct perf_evlist *evlist __maybe_unused) @@ -3285,6 +3285,13 @@ int perf_session__read_header(struct perf_session *session) data->file.path); } + if (f_header.attr_size == 0) { + pr_err("ERROR: The %s file's attr size field is 0 which is unexpected.\n" + "Was the 'perf record' command properly terminated?\n", + data->file.path); + return -EINVAL; + } + nr_attrs = f_header.attrs.size / f_header.attr_size; lseek(fd, f_header.attrs.offset, SEEK_SET); @@ -3365,7 +3372,7 @@ int perf_event__synthesize_attr(struct perf_tool *tool, size += sizeof(struct perf_event_header); size += ids * sizeof(u64); - ev = malloc(size); + ev = zalloc(size); 
if (ev == NULL) return -ENOMEM; @@ -3472,7 +3479,7 @@ int perf_event__process_feature(struct perf_tool *tool, return 0; ff.buf = (void *)fe->data; - ff.size = event->header.size - sizeof(event->header); + ff.size = event->header.size - sizeof(*fe); ff.ph = &session->header; if (feat_ops[feat].process(&ff, NULL)) diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index 076718a7b3eaa20c6e9bfdde07d03b5e77f71d64..003b70daf0bfc9310feff31bfc377e822aa7e25a 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -1295,6 +1295,7 @@ static int machine__set_modules_path(struct machine *machine) return map_groups__set_modules_path_dir(&machine->kmaps, modules_path, 0); } int __weak arch__fix_module_text_start(u64 *start __maybe_unused, + u64 *size __maybe_unused, const char *name __maybe_unused) { return 0; @@ -1306,7 +1307,7 @@ static int machine__create_module(void *arg, const char *name, u64 start, struct machine *machine = arg; struct map *map; - if (arch__fix_module_text_start(&start, name) < 0) + if (arch__fix_module_text_start(&start, &size, name) < 0) return -1; map = machine__findnew_module_map(machine, start, name); diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h index ebde3ea70225b04fc74b20781a1fd6a93ff0d762..6f3767808bd92a81b076f645e97daf646921c7a6 100644 --- a/tools/perf/util/machine.h +++ b/tools/perf/util/machine.h @@ -219,7 +219,7 @@ struct symbol *machine__find_kernel_symbol_by_name(struct machine *machine, struct map *machine__findnew_module_map(struct machine *machine, u64 start, const char *filename); -int arch__fix_module_text_start(u64 *start, const char *name); +int arch__fix_module_text_start(u64 *start, u64 *size, const char *name); int machine__load_kallsyms(struct machine *machine, const char *filename); diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c index a28f9b5cc4ffed9c2a4a1a131e2d258b693284da..8b3dafe3fac3a7fa5b70ab7d075491f6c6e3b2c1 100644 --- a/tools/perf/util/metricgroup.c +++ b/tools/perf/util/metricgroup.c @@ -94,26 +94,49 @@ struct egroup { const char *metric_expr; }; -static struct perf_evsel *find_evsel(struct perf_evlist *perf_evlist, - const char **ids, - int idnum, - struct perf_evsel **metric_events) +static bool record_evsel(int *ind, struct perf_evsel **start, + int idnum, + struct perf_evsel **metric_events, + struct perf_evsel *ev) +{ + metric_events[*ind] = ev; + if (*ind == 0) + *start = ev; + if (++*ind == idnum) { + metric_events[*ind] = NULL; + return true; + } + return false; +} + +static struct perf_evsel *find_evsel_group(struct perf_evlist *perf_evlist, + const char **ids, + int idnum, + struct perf_evsel **metric_events) { struct perf_evsel *ev, *start = NULL; int ind = 0; evlist__for_each_entry (perf_evlist, ev) { + if (ev->collect_stat) + continue; if (!strcmp(ev->name, ids[ind])) { - metric_events[ind] = ev; - if (ind == 0) - start = ev; - if (++ind == idnum) { - metric_events[ind] = NULL; + if (record_evsel(&ind, &start, idnum, + metric_events, ev)) return start; - } } else { + /* + * We saw some other event that is not + * in our list of events. Discard + * the whole match and start again. 
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 076718a7b3eaa20c6e9bfdde07d03b5e77f71d64..003b70daf0bfc9310feff31bfc377e822aa7e25a 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -1295,6 +1295,7 @@ static int machine__set_modules_path(struct machine *machine)
 	return map_groups__set_modules_path_dir(&machine->kmaps, modules_path, 0);
 }
 
 int __weak arch__fix_module_text_start(u64 *start __maybe_unused,
+				       u64 *size __maybe_unused,
 				       const char *name __maybe_unused)
 {
 	return 0;
@@ -1306,7 +1307,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
 	struct machine *machine = arg;
 	struct map *map;
 
-	if (arch__fix_module_text_start(&start, name) < 0)
+	if (arch__fix_module_text_start(&start, &size, name) < 0)
 		return -1;
 
 	map = machine__findnew_module_map(machine, start, name);
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index ebde3ea70225b04fc74b20781a1fd6a93ff0d762..6f3767808bd92a81b076f645e97daf646921c7a6 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -219,7 +219,7 @@ struct symbol *machine__find_kernel_symbol_by_name(struct machine *machine,
 
 struct map *machine__findnew_module_map(struct machine *machine, u64 start,
 					const char *filename);
-int arch__fix_module_text_start(u64 *start, const char *name);
+int arch__fix_module_text_start(u64 *start, u64 *size, const char *name);
 
 int machine__load_kallsyms(struct machine *machine, const char *filename);
diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
index a28f9b5cc4ffed9c2a4a1a131e2d258b693284da..8b3dafe3fac3a7fa5b70ab7d075491f6c6e3b2c1 100644
--- a/tools/perf/util/metricgroup.c
+++ b/tools/perf/util/metricgroup.c
@@ -94,26 +94,49 @@ struct egroup {
 	const char *metric_expr;
 };
 
-static struct perf_evsel *find_evsel(struct perf_evlist *perf_evlist,
-				     const char **ids,
-				     int idnum,
-				     struct perf_evsel **metric_events)
+static bool record_evsel(int *ind, struct perf_evsel **start,
+			 int idnum,
+			 struct perf_evsel **metric_events,
+			 struct perf_evsel *ev)
+{
+	metric_events[*ind] = ev;
+	if (*ind == 0)
+		*start = ev;
+	if (++*ind == idnum) {
+		metric_events[*ind] = NULL;
+		return true;
+	}
+	return false;
+}
+
+static struct perf_evsel *find_evsel_group(struct perf_evlist *perf_evlist,
+					   const char **ids,
+					   int idnum,
+					   struct perf_evsel **metric_events)
 {
 	struct perf_evsel *ev, *start = NULL;
 	int ind = 0;
 
 	evlist__for_each_entry (perf_evlist, ev) {
+		if (ev->collect_stat)
+			continue;
 		if (!strcmp(ev->name, ids[ind])) {
-			metric_events[ind] = ev;
-			if (ind == 0)
-				start = ev;
-			if (++ind == idnum) {
-				metric_events[ind] = NULL;
+			if (record_evsel(&ind, &start, idnum,
+					 metric_events, ev))
 				return start;
-			}
 		} else {
+			/*
+			 * We saw some other event that is not
+			 * in our list of events. Discard
+			 * the whole match and start again.
+			 */
 			ind = 0;
 			start = NULL;
+			if (!strcmp(ev->name, ids[ind])) {
+				if (record_evsel(&ind, &start, idnum,
+						 metric_events, ev))
+					return start;
+			}
 		}
 	}
 	/*
@@ -143,8 +166,8 @@ static int metricgroup__setup_events(struct list_head *groups,
 			ret = -ENOMEM;
 			break;
 		}
-		evsel = find_evsel(perf_evlist, eg->ids, eg->idnum,
-				   metric_events);
+		evsel = find_evsel_group(perf_evlist, eg->ids, eg->idnum,
+					 metric_events);
 		if (!evsel) {
 			pr_debug("Cannot resolve %s: %s\n",
 					eg->metric_name, eg->metric_expr);
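The find_evsel_group() rework above changes what happens when a partial match is broken: the event that broke it is re-tested against ids[0] instead of being skipped, so an event list such as "cycles, cycles, instructions" can still satisfy the group {cycles, instructions}. A simplified stand-alone sketch of that matching strategy on plain string arrays (illustration only, not the perf data structures):

    #include <assert.h>
    #include <string.h>

    /* find ids[0..idnum-1] appearing consecutively in names[]; return start index or -1 */
    static int find_group(const char **names, int n, const char **ids, int idnum)
    {
        int i, ind = 0, start = -1;

        for (i = 0; i < n; i++) {
            if (!strcmp(names[i], ids[ind])) {
                if (ind == 0)
                    start = i;
                if (++ind == idnum)
                    return start;
            } else {
                /* discard the partial match, then retry this element */
                ind = 0;
                start = -1;
                if (!strcmp(names[i], ids[0])) {
                    start = i;
                    if (++ind == idnum)
                        return start;
                }
            }
        }
        return -1;
    }

    int main(void)
    {
        const char *names[] = { "cycles", "cycles", "instructions" };
        const char *ids[]   = { "cycles", "instructions" };

        /* without the retry, the second "cycles" would break the match for good */
        assert(find_group(names, 3, ids, 2) == 1);
        return 0;
    }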
diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index 11086097fc9f88f34ef01f28287b8a687361da4f..f016d1b330e543b875d83f27ed803e30e28d21b2 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -1141,6 +1141,9 @@ static void dump_read(struct perf_evsel *evsel, union perf_event *event)
 	       evsel ? perf_evsel__name(evsel) : "FAIL",
 	       event->read.value);
 
+	if (!evsel)
+		return;
+
 	read_format = evsel->attr.read_format;
 
 	if (read_format & PERF_FORMAT_TOTAL_TIME_ENABLED)
diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
index 99990f5f2512acbe59b0a51ab450fb92c0c6b446..bbb0e042d8e5802a8de01f50e1c2eebbcf02bfe2 100644
--- a/tools/perf/util/stat-shadow.c
+++ b/tools/perf/util/stat-shadow.c
@@ -303,7 +303,7 @@ static struct perf_evsel *perf_stat__find_event(struct perf_evlist *evsel_list,
 	struct perf_evsel *c2;
 
 	evlist__for_each_entry (evsel_list, c2) {
-		if (!strcasecmp(c2->name, name))
+		if (!strcasecmp(c2->name, name) && !c2->collect_stat)
 			return c2;
 	}
 	return NULL;
@@ -342,7 +342,8 @@ void perf_stat__collect_metric_expr(struct perf_evlist *evsel_list)
 			if (leader) {
 				/* Search in group */
 				for_each_group_member (oc, leader) {
-					if (!strcasecmp(oc->name, metric_names[i])) {
+					if (!strcasecmp(oc->name, metric_names[i]) &&
+					    !oc->collect_stat) {
 						found = true;
 						break;
 					}
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 0715f972a275c56a7f6091da5eddd16c9da13275..91404bacc3df81a6faddf606e807265dc7579757 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -86,6 +86,11 @@ static int prefix_underscores_count(const char *str)
 	return tail - str;
 }
 
+void __weak arch__symbols__fixup_end(struct symbol *p, struct symbol *c)
+{
+	p->end = c->start;
+}
+
 const char * __weak arch__normalize_symbol_name(const char *name)
 {
 	return name;
@@ -212,7 +217,7 @@ void symbols__fixup_end(struct rb_root *symbols)
 		curr = rb_entry(nd, struct symbol, rb_node);
 
 		if (prev->end == prev->start && prev->end != curr->start)
-			prev->end = curr->start;
+			arch__symbols__fixup_end(prev, curr);
 	}
 
 	/* Last entry */
diff --git a/tools/perf/util/symbol.h b/tools/perf/util/symbol.h
index f25fae4b5743c76bdc50f515110a0fa448fe7fcc..76ef2facd934591826914abc497cf6a7acf63766 100644
--- a/tools/perf/util/symbol.h
+++ b/tools/perf/util/symbol.h
@@ -349,6 +349,7 @@ const char *arch__normalize_symbol_name(const char *name);
 #define SYMBOL_A 0
 #define SYMBOL_B 1
 
+void arch__symbols__fixup_end(struct symbol *p, struct symbol *c);
 int arch__compare_symbol_names(const char *namea, const char *nameb);
 int arch__compare_symbol_names_n(const char *namea, const char *nameb,
 				 unsigned int n);
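Both arch__fix_module_text_start() and the new arch__symbols__fixup_end() above follow the same convention: generic code supplies a __weak default, and an architecture may ship a strong definition that the linker picks instead. A minimal user-space sketch of the mechanism; arch__example_hook() is a made-up name, and in the perf sources the attribute is spelled through the __weak macro:

    #include <stdio.h>

    /*
     * Generic default.  If another object file linked into the binary
     * provides a non-weak arch__example_hook(), the linker selects that
     * one and this definition is dropped.
     */
    __attribute__((weak)) void arch__example_hook(unsigned long *end,
                                                  unsigned long next_start)
    {
        *end = next_start;   /* generic behaviour, no arch-specific gap handling */
    }

    int main(void)
    {
        unsigned long end = 0;

        arch__example_hook(&end, 0x1000);
        printf("end = %#lx\n", end);
        return 0;
    }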
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index 56007a7e0b4d7f6aadd8300e2c8c2023354c29bc..2c146d0c217bea33694a128e43b40249976afe90 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -192,14 +192,24 @@ struct comm *thread__comm(const struct thread *thread)
 
 struct comm *thread__exec_comm(const struct thread *thread)
 {
-	struct comm *comm, *last = NULL;
+	struct comm *comm, *last = NULL, *second_last = NULL;
 
 	list_for_each_entry(comm, &thread->comm_list, list) {
 		if (comm->exec)
 			return comm;
+		second_last = last;
 		last = comm;
 	}
 
+	/*
+	 * 'last' with no start time might be the parent's comm of a synthesized
+	 * thread (created by processing a synthesized fork event). For a main
+	 * thread, that is very probably wrong. Prefer a later comm to avoid
+	 * that case.
+	 */
+	if (second_last && !last->start && thread->pid_ == thread->tid)
+		return second_last;
+
 	return last;
 }
 
diff --git a/tools/power/cpupower/utils/cpufreq-set.c b/tools/power/cpupower/utils/cpufreq-set.c
index 1eef0aed64239509795229d522cada4e80edf5a2..08a405593a791ed6c52f921824c7f7925a23cf17 100644
--- a/tools/power/cpupower/utils/cpufreq-set.c
+++ b/tools/power/cpupower/utils/cpufreq-set.c
@@ -306,6 +306,8 @@ int cmd_freq_set(int argc, char **argv)
 				bitmask_setbit(cpus_chosen, cpus->cpu);
 				cpus = cpus->next;
 			}
+			/* Set the last cpu in related cpus list */
+			bitmask_setbit(cpus_chosen, cpus->cpu);
 			cpufreq_put_related_cpus(cpus);
 		}
 	}
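In the cpufreq-set.c hunk above, the `while (cpus->next)` walk stops at the last node of the related-CPUs list without marking it, so the final CPU has to be set once more after the loop. A minimal sketch of that loop shape with a hypothetical list type (not the cpupower API):

    #include <assert.h>

    struct cpu_node {
        unsigned int cpu;
        struct cpu_node *next;
    };

    static unsigned int mark_all(struct cpu_node *cpus, unsigned int *mask)
    {
        unsigned int marked = 0;

        while (cpus->next) {
            *mask |= 1u << cpus->cpu;
            marked++;
            cpus = cpus->next;
        }
        /* the loop exits with 'cpus' pointing at the last node: mark it too */
        *mask |= 1u << cpus->cpu;
        return marked + 1;
    }

    int main(void)
    {
        struct cpu_node c2 = { 2, NULL };
        struct cpu_node c1 = { 1, &c2 };
        struct cpu_node c0 = { 0, &c1 };
        unsigned int mask = 0;

        assert(mark_all(&c0, &mask) == 3);
        assert(mask == 0x7);
        return 0;
    }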
diff --git a/tools/testing/selftests/bpf/sendmsg6_prog.c b/tools/testing/selftests/bpf/sendmsg6_prog.c
index 5aeaa284fc47493decf0fc55b31de7c8a734ac6d..a680628204108b41d0c046a5371cec9f2cfedc0b 100644
--- a/tools/testing/selftests/bpf/sendmsg6_prog.c
+++ b/tools/testing/selftests/bpf/sendmsg6_prog.c
@@ -41,8 +41,7 @@ int sendmsg_v6_prog(struct bpf_sock_addr *ctx)
 	}
 
 	/* Rewrite destination. */
-	if ((ctx->user_ip6[0] & 0xFFFF) == bpf_htons(0xFACE) &&
-	     ctx->user_ip6[0] >> 16 == bpf_htons(0xB00C)) {
+	if (ctx->user_ip6[0] == bpf_htonl(0xFACEB00C)) {
 		ctx->user_ip6[0] = bpf_htonl(DST_REWRITE_IP6_0);
 		ctx->user_ip6[1] = bpf_htonl(DST_REWRITE_IP6_1);
 		ctx->user_ip6[2] = bpf_htonl(DST_REWRITE_IP6_2);
diff --git a/tools/testing/selftests/bpf/test_lwt_seg6local.c b/tools/testing/selftests/bpf/test_lwt_seg6local.c
index 0575751bc1bc4020535a238bda9d4be9a2503eb8..e2f6ed0a583de8ed793b0095eade70827b33507d 100644
--- a/tools/testing/selftests/bpf/test_lwt_seg6local.c
+++ b/tools/testing/selftests/bpf/test_lwt_seg6local.c
@@ -61,7 +61,7 @@ struct sr6_tlv_t {
 	unsigned char value[0];
 } BPF_PACKET_HEADER;
 
-__attribute__((always_inline)) struct ip6_srh_t *get_srh(struct __sk_buff *skb)
+static __always_inline struct ip6_srh_t *get_srh(struct __sk_buff *skb)
 {
 	void *cursor, *data_end;
 	struct ip6_srh_t *srh;
@@ -95,7 +95,7 @@ __attribute__((always_inline)) struct ip6_srh_t *get_srh(struct __sk_buff *skb)
 	return srh;
 }
 
-__attribute__((always_inline))
+static __always_inline
 int update_tlv_pad(struct __sk_buff *skb, uint32_t new_pad,
 		   uint32_t old_pad, uint32_t pad_off)
 {
@@ -125,7 +125,7 @@ int update_tlv_pad(struct __sk_buff *skb, uint32_t new_pad,
 	return 0;
 }
 
-__attribute__((always_inline))
+static __always_inline
 int is_valid_tlv_boundary(struct __sk_buff *skb, struct ip6_srh_t *srh,
 			  uint32_t *tlv_off, uint32_t *pad_size,
 			  uint32_t *pad_off)
@@ -184,7 +184,7 @@ int is_valid_tlv_boundary(struct __sk_buff *skb, struct ip6_srh_t *srh,
 	return 0;
 }
 
-__attribute__((always_inline))
+static __always_inline
 int add_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh, uint32_t tlv_off,
 	    struct sr6_tlv_t *itlv, uint8_t tlv_size)
 {
@@ -228,7 +228,7 @@ int add_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh, uint32_t tlv_off,
 	return update_tlv_pad(skb, new_pad, pad_size, pad_off);
 }
 
-__attribute__((always_inline))
+static __always_inline
 int delete_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh,
 	       uint32_t tlv_off)
 {
@@ -266,7 +266,7 @@ int delete_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh,
 	return update_tlv_pad(skb, new_pad, pad_size, pad_off);
 }
 
-__attribute__((always_inline))
+static __always_inline
 int has_egr_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh)
 {
 	int tlv_offset = sizeof(struct ip6_t) + sizeof(struct ip6_srh_t) +
diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
index 14c9fe2848062f0c2a8c37004f087d07b1d976f6..075cb0c730149907414f0aa7b1f2957d7e8361e9 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/cgroup_util.c
@@ -181,8 +181,7 @@ int cg_find_unified_root(char *root, size_t len)
 		strtok(NULL, delim);
 		strtok(NULL, delim);
 
-		if (strcmp(fs, "cgroup") == 0 &&
-		    strcmp(type, "cgroup2") == 0) {
+		if (strcmp(type, "cgroup2") == 0) {
 			strncpy(root, mount, len);
 			return 0;
 		}
diff --git a/tools/testing/selftests/kvm/config b/tools/testing/selftests/kvm/config
new file mode 100644
index 0000000000000000000000000000000000000000..63ed533f73d6e82419b281b512cf32cbec100eb2
--- /dev/null
+++ b/tools/testing/selftests/kvm/config
@@ -0,0 +1,3 @@
+CONFIG_KVM=y
+CONFIG_KVM_INTEL=y
+CONFIG_KVM_AMD=y
diff --git a/tools/testing/selftests/net/forwarding/gre_multipath.sh b/tools/testing/selftests/net/forwarding/gre_multipath.sh
index cca2baa03fb81b12f393a1df566dbe0494cabe27..a8d8e8b3dc819f3abc90060eaa0d312632c2145f 100755
--- a/tools/testing/selftests/net/forwarding/gre_multipath.sh
+++ b/tools/testing/selftests/net/forwarding/gre_multipath.sh
@@ -93,18 +93,10 @@ sw1_create()
 	ip route add vrf v$ol1 192.0.2.16/28 \
 	   nexthop dev g1a \
 	   nexthop dev g1b
-
-	tc qdisc add dev $ul1 clsact
-	tc filter add dev $ul1 egress pref 111 prot ipv4 \
-	   flower dst_ip 192.0.2.66 action pass
-	tc filter add dev $ul1 egress pref 222 prot ipv4 \
-	   flower dst_ip 192.0.2.82 action pass
 }
 
 sw1_destroy()
 {
-	tc qdisc del dev $ul1 clsact
-
 	ip route del vrf v$ol1 192.0.2.16/28
 
 	ip route del vrf v$ol1 192.0.2.82/32 via 192.0.2.146
@@ -139,10 +131,18 @@ sw2_create()
 	ip route add vrf v$ol2 192.0.2.0/28 \
 	   nexthop dev g2a \
 	   nexthop dev g2b
+
+	tc qdisc add dev $ul2 clsact
+	tc filter add dev $ul2 ingress pref 111 prot 802.1Q \
+	   flower vlan_id 111 action pass
+	tc filter add dev $ul2 ingress pref 222 prot 802.1Q \
+	   flower vlan_id 222 action pass
 }
 
 sw2_destroy()
 {
+	tc qdisc del dev $ul2 clsact
+
 	ip route del vrf v$ol2 192.0.2.0/28
 
 	ip route del vrf v$ol2 192.0.2.81/32 via 192.0.2.145
@@ -187,12 +187,16 @@ setup_prepare()
 	sw1_create
 	sw2_create
 	h2_create
+
+	forwarding_enable
 }
 
 cleanup()
 {
 	pre_cleanup
 
+	forwarding_restore
+
 	h2_destroy
 	sw2_destroy
 	sw1_destroy
@@ -211,15 +215,15 @@ multipath4_test()
 	   nexthop dev g1a weight $weight1 \
 	   nexthop dev g1b weight $weight2
 
-	local t0_111=$(tc_rule_stats_get $ul1 111 egress)
-	local t0_222=$(tc_rule_stats_get $ul1 222 egress)
+	local t0_111=$(tc_rule_stats_get $ul2 111 ingress)
+	local t0_222=$(tc_rule_stats_get $ul2 222 ingress)
 
 	ip vrf exec v$h1 \
 	   $MZ $h1 -q -p 64 -A 192.0.2.1 -B 192.0.2.18 \
 	       -d 1msec -t udp "sp=1024,dp=0-32768"
 
-	local t1_111=$(tc_rule_stats_get $ul1 111 egress)
-	local t1_222=$(tc_rule_stats_get $ul1 222 egress)
+	local t1_111=$(tc_rule_stats_get $ul2 111 ingress)
+	local t1_222=$(tc_rule_stats_get $ul2 222 ingress)
 
 	local d111=$((t1_111 - t0_111))
 	local d222=$((t1_222 - t0_222))
%d\n", + irq->intid)) + return; + val |= (src - 1) << GICH_LR_PHYSID_CPUID_SHIFT; irq->source &= ~(1 << (src - 1)); if (irq->source) { @@ -674,12 +677,17 @@ void vgic_v3_load(struct kvm_vcpu *vcpu) __vgic_v3_activate_traps(vcpu); } -void vgic_v3_put(struct kvm_vcpu *vcpu) +void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; if (likely(cpu_if->vgic_sre)) cpu_if->vgic_vmcr = kvm_call_hyp(__vgic_v3_read_vmcr); +} + +void vgic_v3_put(struct kvm_vcpu *vcpu) +{ + vgic_v3_vmcr_sync(vcpu); kvm_call_hyp(__vgic_v3_save_aprs, vcpu); diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c index c5165e3b80cbea92fe813f2416d3a8b1d84ba39f..4040a33cdc9028482dc83864896d66db3d63ff69 100644 --- a/virt/kvm/arm/vgic/vgic.c +++ b/virt/kvm/arm/vgic/vgic.c @@ -244,6 +244,13 @@ static int vgic_irq_cmp(void *priv, struct list_head *a, struct list_head *b) bool penda, pendb; int ret; + /* + * list_sort may call this function with the same element when + * the list is fairly long. + */ + if (unlikely(irqa == irqb)) + return 0; + spin_lock(&irqa->irq_lock); spin_lock_nested(&irqb->irq_lock, SINGLE_DEPTH_NESTING); @@ -902,6 +909,17 @@ void kvm_vgic_put(struct kvm_vcpu *vcpu) vgic_v3_put(vcpu); } +void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu) +{ + if (unlikely(!irqchip_in_kernel(vcpu->kvm))) + return; + + if (kvm_vgic_global_state.type == VGIC_V2) + vgic_v2_vmcr_sync(vcpu); + else + vgic_v3_vmcr_sync(vcpu); +} + int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu) { struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h index a90024718ca44b941e8b4d7aa0f6bf20233c4596..d5e4542799252d35496c7373a209903ff448c59d 100644 --- a/virt/kvm/arm/vgic/vgic.h +++ b/virt/kvm/arm/vgic/vgic.h @@ -204,6 +204,7 @@ int vgic_register_dist_iodev(struct kvm *kvm, gpa_t dist_base_address, void vgic_v2_init_lrs(void); void vgic_v2_load(struct kvm_vcpu *vcpu); void vgic_v2_put(struct kvm_vcpu *vcpu); +void vgic_v2_vmcr_sync(struct kvm_vcpu *vcpu); void vgic_v2_save_state(struct kvm_vcpu *vcpu); void vgic_v2_restore_state(struct kvm_vcpu *vcpu); @@ -234,6 +235,7 @@ bool vgic_v3_check_base(struct kvm *kvm); void vgic_v3_load(struct kvm_vcpu *vcpu); void vgic_v3_put(struct kvm_vcpu *vcpu); +void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu); bool vgic_has_its(struct kvm *kvm); int kvm_vgic_register_its_device(void); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 2b36a51afb5764f485ca7918c67675b58ad6883f..4a584a57522161ff6e389f8a55e21b600a50694f 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2317,6 +2317,29 @@ static bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu) #endif } +/* + * Unlike kvm_arch_vcpu_runnable, this function is called outside + * a vcpu_load/vcpu_put pair. However, for most architectures + * kvm_arch_vcpu_runnable does not require vcpu_load. 
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index c5165e3b80cbea92fe813f2416d3a8b1d84ba39f..4040a33cdc9028482dc83864896d66db3d63ff69 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -244,6 +244,13 @@ static int vgic_irq_cmp(void *priv, struct list_head *a, struct list_head *b)
 	bool penda, pendb;
 	int ret;
 
+	/*
+	 * list_sort may call this function with the same element when
+	 * the list is fairly long.
+	 */
+	if (unlikely(irqa == irqb))
+		return 0;
+
 	spin_lock(&irqa->irq_lock);
 	spin_lock_nested(&irqb->irq_lock, SINGLE_DEPTH_NESTING);
 
@@ -902,6 +909,17 @@ void kvm_vgic_put(struct kvm_vcpu *vcpu)
 		vgic_v3_put(vcpu);
 }
 
+void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu)
+{
+	if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+		return;
+
+	if (kvm_vgic_global_state.type == VGIC_V2)
+		vgic_v2_vmcr_sync(vcpu);
+	else
+		vgic_v3_vmcr_sync(vcpu);
+}
+
 int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
index a90024718ca44b941e8b4d7aa0f6bf20233c4596..d5e4542799252d35496c7373a209903ff448c59d 100644
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -204,6 +204,7 @@ int vgic_register_dist_iodev(struct kvm *kvm, gpa_t dist_base_address,
 void vgic_v2_init_lrs(void);
 void vgic_v2_load(struct kvm_vcpu *vcpu);
 void vgic_v2_put(struct kvm_vcpu *vcpu);
+void vgic_v2_vmcr_sync(struct kvm_vcpu *vcpu);
 
 void vgic_v2_save_state(struct kvm_vcpu *vcpu);
 void vgic_v2_restore_state(struct kvm_vcpu *vcpu);
@@ -234,6 +235,7 @@ bool vgic_v3_check_base(struct kvm *kvm);
 
 void vgic_v3_load(struct kvm_vcpu *vcpu);
 void vgic_v3_put(struct kvm_vcpu *vcpu);
+void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu);
 
 bool vgic_has_its(struct kvm *kvm);
 int kvm_vgic_register_its_device(void);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2b36a51afb5764f485ca7918c67675b58ad6883f..4a584a57522161ff6e389f8a55e21b600a50694f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2317,6 +2317,29 @@ static bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
 #endif
 }
 
+/*
+ * Unlike kvm_arch_vcpu_runnable, this function is called outside
+ * a vcpu_load/vcpu_put pair.  However, for most architectures
+ * kvm_arch_vcpu_runnable does not require vcpu_load.
+ */
+bool __weak kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
+{
+	return kvm_arch_vcpu_runnable(vcpu);
+}
+
+static bool vcpu_dy_runnable(struct kvm_vcpu *vcpu)
+{
+	if (kvm_arch_dy_runnable(vcpu))
+		return true;
+
+#ifdef CONFIG_KVM_ASYNC_PF
+	if (!list_empty_careful(&vcpu->async_pf.done))
+		return true;
+#endif
+
+	return false;
+}
+
 void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 {
 	struct kvm *kvm = me->kvm;
@@ -2346,7 +2369,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 			continue;
 		if (vcpu == me)
 			continue;
-		if (swait_active(&vcpu->wq) && !kvm_arch_vcpu_runnable(vcpu))
+		if (swait_active(&vcpu->wq) && !vcpu_dy_runnable(vcpu))
 			continue;
 		if (yield_to_kernel_mode && !kvm_arch_vcpu_in_kernel(vcpu))
 			continue;
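The vgic_irq_cmp() hunk above returns early when list_sort() hands it the same element for both arguments; without that, the comparator would try to take the same irq_lock twice (spin_lock() followed by spin_lock_nested()) and deadlock. A user-space sketch of the same guard, using a pthread mutex as a stand-in for the kernel spinlock (lock ordering between two distinct elements is not addressed here):

    #include <pthread.h>
    #include <stdio.h>

    struct item {
        pthread_mutex_t lock;
        int key;
    };

    static int item_cmp(struct item *a, struct item *b)
    {
        int ret;

        if (a == b)             /* same element: nothing to compare, take no locks */
            return 0;

        pthread_mutex_lock(&a->lock);
        pthread_mutex_lock(&b->lock);
        ret = (a->key > b->key) - (a->key < b->key);
        pthread_mutex_unlock(&b->lock);
        pthread_mutex_unlock(&a->lock);
        return ret;
    }

    int main(void)
    {
        struct item x = { PTHREAD_MUTEX_INITIALIZER, 1 };

        /* without the guard this call would self-deadlock on x.lock */
        printf("%d\n", item_cmp(&x, &x));
        return 0;
    }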