S: Stanford, California 94305
S: USA
+N: Carlos Chinea
+E: carlos.chinea@nokia.com
+E: cch.devel@gmail.com
+D: Author of HSI Subsystem
+
N: Randolph Chung
E: tausq@debian.org
D: Linux/PA-RISC hacker
format. For the single-planar API, applications must set <structfield> plane
</structfield> to zero. Additional flags may be posted in the <structfield>
flags </structfield> field. Refer to the manual of open() for details.
-Currently only O_CLOEXEC is supported. All other fields must be set to zero.
+Currently only O_CLOEXEC, O_RDONLY, O_WRONLY, and O_RDWR are supported. All
+other fields must be set to zero.
In the case of multi-planar API, every plane is exported separately using
multiple <constant> VIDIOC_EXPBUF </constant> calls. </para>
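A minimal user-space sketch of the export flow described above (the
device descriptor vfd and prior VIDIOC_REQBUFS setup are assumed, and
error handling is elided):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Export buffer 0 of a single-planar capture queue as a DMABUF fd. */
static int export_buffer(int vfd)
{
	struct v4l2_exportbuffer expbuf;

	memset(&expbuf, 0, sizeof(expbuf));	/* unused fields must be zero */
	expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	expbuf.index = 0;
	expbuf.plane = 0;	/* single-planar API: plane must be zero */
	expbuf.flags = O_CLOEXEC | O_RDWR;
	if (ioctl(vfd, VIDIOC_EXPBUF, &expbuf) == -1)
		return -1;	/* exporting failed */
	return expbuf.fd;	/* DMABUF file descriptor */
}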
<entry>__u32</entry>
<entry><structfield>flags</structfield></entry>
<entry>Flags for the newly created file, currently only <constant>
-O_CLOEXEC </constant> is supported, refer to the manual of open() for more
-details.</entry>
+O_CLOEXEC </constant>, <constant>O_RDONLY</constant>, <constant>O_WRONLY
+</constant>, and <constant>O_RDWR</constant> are supported; refer to the manual
+of open() for more details.</entry>
</row>
<row>
<entry>__s32</entry>
(4) Diff the index keys of two objects.
- int (*diff_objects)(const void *a, const void *b);
+ int (*diff_objects)(const void *object, const void *index_key);
- Return the bit position at which the index keys of two objects differ or
- -1 if they are the same.
+ Return the bit position at which the index key of the specified object
+ differs from the given index key or -1 if they are the same.
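
As an illustration only (struct example_object and its single-word key
are hypothetical, not part of the API), a diff_objects implementation
might look like:

#include <linux/bitops.h>	/* __ffs() */
#include <linux/types.h>

struct example_object {
	u32 key;
	/* payload follows */
};

/* Return the lowest bit position at which the object's key differs
 * from the given index key, or -1 if they are identical.
 */
static int example_diff_objects(const void *object, const void *index_key)
{
	const struct example_object *obj = object;
	u32 diff = obj->key ^ *(const u32 *)index_key;

	return diff ? __ffs(diff) : -1;
}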
(5) Free an object.
Invalidation is removing an entry from the cache without writing it
back. Cache blocks can be invalidated via the invalidate_cblocks
message, which takes an arbitrary number of cblock ranges. Each cblock
-must be expressed as a decimal value, in the future a variant message
-that takes cblock ranges expressed in hexidecimal may be needed to
-better support efficient invalidation of larger caches. The cache must
-be in passthrough mode when invalidate_cblocks is used.
+range's end value is "one past the end", meaning 5-10 expresses a range
+of values from 5 to 9. Each cblock must be expressed as a decimal
+value; in the future, a variant message that takes cblock ranges
+expressed in hexadecimal may be needed to better support efficient
+invalidation of larger caches. The cache must be in passthrough mode
+when invalidate_cblocks is used.
invalidate_cblocks [<cblock>|<cblock begin>-<cblock end>]*
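
For example, with a hypothetical cache device named "my-cache" that is
already in passthrough mode, cblocks 5 through 9 plus cblock 100 could
be invalidated with:

    dmsetup message my-cache 0 invalidate_cblocks 5-10 100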
Required properties:
- compatible : Should be "ti,omap3-mpu" for OMAP3
Should be "ti,omap4-mpu" for OMAP4
+ Should be "ti,omap5-mpu" for OMAP5
- ti,hwmods: "mpu"
Examples:
+- For an OMAP5 SMP system:
+
+mpu {
+ compatible = "ti,omap5-mpu";
+    ti,hwmods = "mpu";
+};
+
- For an OMAP4 SMP system:
mpu {
Required properties:
- compatible : should be one of
+ "arm,armv8-pmuv3"
"arm,cortex-a15-pmu"
"arm,cortex-a9-pmu"
"arm,cortex-a8-pmu"
/* NTC thermistor is a hwmon device */
ncp15wb473@0 {
compatible = "ntc,ncp15wb473";
- pullup-uV = <1800000>;
+ pullup-uv = <1800000>;
pullup-ohm = <47000>;
pulldown-ohm = <0>;
io-channels = <&adc 4>;
Required Properties:
-- comptible: should be one of the following.
+- compatible: should be one of the following.
- "samsung,exynos4210-clock" - controller compatible with Exynos4210 SoC.
- "samsung,exynos4412-clock" - controller compatible with Exynos4412 SoC.
Required Properties:
-- comptible: should be one of the following.
+- compatible: should be one of the following.
- "samsung,exynos5250-clock" - controller compatible with Exynos5250 SoC.
- reg: physical base address of the controller and length of memory mapped
Required Properties:
-- comptible: should be one of the following.
+- compatible: should be one of the following.
- "samsung,exynos5420-clock" - controller compatible with Exynos5420 SoC.
- reg: physical base address of the controller and length of memory mapped
Required Properties:
-- comptible: should be "samsung,exynos5440-clock".
+- compatible: should be "samsung,exynos5440-clock".
- reg: physical base address of the controller and length of memory mapped
region.
Every GPIO controller node must have #gpio-cells property defined,
this information will be used to translate gpio-specifiers.
+See bindings/gpio/gpio.txt for details of how to specify GPIO
+information for devices.
+
+The GPIO module usually is connected to the SoC's internal interrupt
+controller, see bindings/interrupt-controller/interrupts.txt (the
+interrupt client nodes section) for details how to specify this GPIO
+module's interrupt.
+
+The GPIO module may serve as another interrupt controller (cascaded to
+the SoC's internal interrupt controller). See the interrupt controller
+nodes section in bindings/interrupt-controller/interrupts.txt for
+details.
Required properties:
-- compatible : "fsl,<CHIP>-gpio" followed by "fsl,mpc8349-gpio" for
- 83xx, "fsl,mpc8572-gpio" for 85xx and "fsl,mpc8610-gpio" for 86xx.
-- #gpio-cells : Should be two. The first cell is the pin number and the
- second cell is used to specify optional parameters (currently unused).
- - interrupts : Interrupt mapping for GPIO IRQ.
- - interrupt-parent : Phandle for the interrupt controller that
- services interrupts for this device.
-- gpio-controller : Marks the port as GPIO controller.
+- compatible: "fsl,<chip>-gpio" followed by "fsl,mpc8349-gpio"
+ for 83xx, "fsl,mpc8572-gpio" for 85xx, or
+ "fsl,mpc8610-gpio" for 86xx.
+- #gpio-cells: Should be two. The first cell is the pin number
+ and the second cell is used to specify optional
+ parameters (currently unused).
+- interrupt-parent: Phandle for the interrupt controller that
+ services interrupts for this device.
+- interrupts: Interrupt mapping for GPIO IRQ.
+- gpio-controller: Marks the port as GPIO controller.
+
+Optional properties:
+- interrupt-controller: Empty boolean property which marks the GPIO
+ module as an IRQ controller.
+- #interrupt-cells: Should be two. Defines the number of integer
+ cells required to specify an interrupt within
+ this interrupt controller. The first cell
+ defines the pin number, the second cell
+ defines additional flags (trigger type,
+ trigger polarity). Note that the available
+ set of trigger conditions supported by the
+ GPIO module depends on the actual SoC.
Example of gpio-controller nodes for a MPC8347 SoC:
#gpio-cells = <2>;
compatible = "fsl,mpc8347-gpio", "fsl,mpc8349-gpio";
reg = <0xc00 0x100>;
- interrupts = <74 0x8>;
interrupt-parent = <&ipic>;
+ interrupts = <74 0x8>;
gpio-controller;
+ interrupt-controller;
+ #interrupt-cells = <2>;
};
gpio2: gpio-controller@d00 {
#gpio-cells = <2>;
compatible = "fsl,mpc8347-gpio", "fsl,mpc8349-gpio";
reg = <0xd00 0x100>;
- interrupts = <75 0x8>;
interrupt-parent = <&ipic>;
+ interrupts = <75 0x8>;
gpio-controller;
};
-See booting-without-of.txt for details of how to specify GPIO
-information for devices.
-
-To use GPIO pins as interrupt sources for peripherals, specify the
-GPIO controller as the interrupt parent and define GPIO number +
-trigger mode using the interrupts property, which is defined like
-this:
-
-interrupts = <number trigger>, where:
- - number: GPIO pin (0..31)
- - trigger: trigger mode:
- 2 = trigger on falling edge
- 3 = trigger on both edges
-
-Example of device using this is:
+Example of a peripheral using the GPIO module as an IRQ controller:
funkyfpga@0 {
compatible = "funky-fpga";
...
- interrupts = <4 3>;
interrupt-parent = <&gpio1>;
+ interrupts = <4 3>;
};
--- /dev/null
+* TI MMC host controller for OMAP1 and 2420
+
+The MMC host controller on the TI OMAP1 and 2420 families provides
+an interface for MMC, SD, and SDIO types of memory cards.
+
+This file documents differences between the core properties described
+by mmc.txt and the properties used by the omap mmc driver.
+
+Note that this driver will not work with omap2430 or later omaps;
+please see the omap hsmmc driver for the current omaps.
+
+Required properties:
+- compatible: Must be "ti,omap2420-mmc" for OMAP2420 controllers
+- ti,hwmods: For 2420, must be "msdi<n>", where n is the controller
+             instance, starting at 1
+
+Examples:
+
+ msdi1: mmc@4809c000 {
+ compatible = "ti,omap2420-mmc";
+ ti,hwmods = "msdi1";
+ reg = <0x4809c000 0x80>;
+ interrupts = <83>;
+ dmas = <&sdma 61 &sdma 62>;
+ dma-names = "tx", "rx";
+ };
+
only if the property "phy-reset-gpios" is present. If the property is
missing, the duration defaults to 1 millisecond. Values greater than
1000 are invalid; 1 millisecond will be used instead.
+- phy-supply: regulator that powers the Ethernet PHY.
Example:
phy-mode = "mii";
phy-reset-gpios = <&gpio2 14 0>; /* GPIO2_14 */
local-mac-address = [00 04 9F 01 1B B9];
+	phy-supply = <&reg_fec_supply>;
};
+++ /dev/null
-NVIDIA Tegra 2 SPI device
-
-Required properties:
-- compatible : should be "nvidia,tegra20-spi".
-- gpios : should specify GPIOs used for chipselect.
fsl Freescale Semiconductor
GEFanuc GE Fanuc Intelligent Platforms Embedded Systems, Inc.
gef GE Fanuc Intelligent Platforms Embedded Systems, Inc.
+gmt Global Mixed-mode Technology, Inc.
hisilicon Hisilicon Limited.
hp Hewlett Packard
ibm International Business Machines (IBM)
idt Integrated Device Technologies, Inc.
img Imagination Technologies Ltd.
intercontrol Inter Control Group
+lg LG Corporation
linux Linux-specific binding
lsi LSI Corp. (LSI Logic)
marvell Marvell Technology Group Ltd.
--- /dev/null
+00-INDEX
+ - This file
+gpio.txt
+ - Introduction to GPIOs and their kernel interfaces
+consumer.txt
+ - How to obtain and use GPIOs in a driver
+driver.txt
+ - How to write a GPIO driver
+board.txt
+ - How to assign GPIOs to a consumer device and a function
+sysfs.txt
+ - Information about the GPIO sysfs interface
+gpio-legacy.txt
+ - Historical documentation of the deprecated GPIO integer interface
int i;
void *dp = get_dp(mic, type);
- for (i = mic_aligned_size(struct mic_bootparam); i < PAGE_SIZE;
+ for (i = sizeof(struct mic_bootparam); i < PAGE_SIZE;
i += mic_total_desc_size(d)) {
d = dp + i;
__func__, mic->name, vr0->va, vr0->info, vr_size,
vring_size(MIC_VRING_ENTRIES, MIC_VIRTIO_RING_ALIGN));
mpsslog("magic 0x%x expected 0x%x\n",
- vr0->info->magic, MIC_MAGIC + type);
- assert(vr0->info->magic == MIC_MAGIC + type);
+ le32toh(vr0->info->magic), MIC_MAGIC + type);
+ assert(le32toh(vr0->info->magic) == MIC_MAGIC + type);
if (vr1) {
vr1->va = (struct mic_vring *)
&va[MIC_DEVICE_PAGE_END + vr_size];
__func__, mic->name, vr1->va, vr1->info, vr_size,
vring_size(MIC_VRING_ENTRIES, MIC_VIRTIO_RING_ALIGN));
mpsslog("magic 0x%x expected 0x%x\n",
- vr1->info->magic, MIC_MAGIC + type + 1);
- assert(vr1->info->magic == MIC_MAGIC + type + 1);
+ le32toh(vr1->info->magic), MIC_MAGIC + type + 1);
+ assert(le32toh(vr1->info->magic) == MIC_MAGIC + type + 1);
}
done:
return va;
virtio_net(void *arg)
{
static __u8 vnet_hdr[2][sizeof(struct virtio_net_hdr)];
- static __u8 vnet_buf[2][MAX_NET_PKT_SIZE] __aligned(64);
+ static __u8 vnet_buf[2][MAX_NET_PKT_SIZE] __attribute__ ((aligned(64)));
struct iovec vnet_iov[2][2] = {
{ { .iov_base = vnet_hdr[0], .iov_len = sizeof(vnet_hdr[0]) },
{ .iov_base = vnet_buf[0], .iov_len = sizeof(vnet_buf[0]) } },
}
do {
+ ret = lseek(fd, 0, SEEK_SET);
+ if (ret < 0) {
+ mpsslog("%s: Failed to seek to file start '%s': %s\n",
+ mic->name, pathname, strerror(errno));
+ goto close_error1;
+ }
ret = read(fd, value, sizeof(value));
if (ret < 0) {
mpsslog("%s: Failed to read sysfs entry '%s': %s\n",
F: arch/arm/mach-footbridge/
ARM/FREESCALE IMX / MXC ARM ARCHITECTURE
+M: Shawn Guo <shawn.guo@linaro.org>
M: Sascha Hauer <kernel@pengutronix.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-T: git git://git.pengutronix.de/git/imx/linux-2.6.git
+T: git git://git.linaro.org/people/shawnguo/linux-2.6.git
F: arch/arm/mach-imx/
+F: arch/arm/boot/dts/imx*
F: arch/arm/configs/imx*_defconfig
-ARM/FREESCALE IMX6
-M: Shawn Guo <shawn.guo@linaro.org>
-L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S: Maintained
-T: git git://git.linaro.org/people/shawnguo/linux-2.6.git
-F: arch/arm/mach-imx/*imx6*
-
ARM/FREESCALE MXS ARM ARCHITECTURE
M: Shawn Guo <shawn.guo@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
F: drivers/gpio/gpio-bt8xx.c
BTRFS FILE SYSTEM
-M: Chris Mason <chris.mason@fusionio.com>
+M: Chris Mason <clm@fb.com>
+M: Josef Bacik <jbacik@fb.com>
L: linux-btrfs@vger.kernel.org
W: http://btrfs.wiki.kernel.org/
Q: http://patchwork.kernel.org/project/linux-btrfs/list/
F: Documentation/zh_CN/
CHIPIDEA USB HIGH SPEED DUAL ROLE CONTROLLER
-M: Alexander Shishkin <alexander.shishkin@linux.intel.com>
+M: Peter Chen <Peter.Chen@freescale.com>
+T: git://github.com/hzpeterchen/linux-usb.git
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/usb/chipidea/
S: Maintained
F: fs/hpfs/
+HSI SUBSYSTEM
+M: Sebastian Reichel <sre@debian.org>
+S: Maintained
+F: Documentation/ABI/testing/sysfs-bus-hsi
+F: drivers/hsi/
+F: include/linux/hsi/
+F: include/uapi/linux/hsi/
+
HSO 3G MODEM DRIVER
M: Jan Dumon <j.dumon@option.com>
W: http://www.pharscape.org
S: Maintained
F: drivers/net/usb/hso.c
+HSR NETWORK PROTOCOL
+M: Arvid Brodin <arvid.brodin@alten.se>
+L: netdev@vger.kernel.org
+S: Maintained
+F: net/hsr/
+
HTCPEN TOUCHSCREEN DRIVER
M: Pau Oliva Fora <pof@eslack.org>
L: linux-input@vger.kernel.org
F: Documentation/lockdep*.txt
F: Documentation/lockstat.txt
F: include/linux/lockdep.h
-F: kernel/lockdep*
+F: kernel/locking/
LOGICAL DISK MANAGER SUPPORT (LDM, Windows 2000/XP/Vista Dynamic Disks)
M: "Richard Russon (FlatCap)" <ldm@flatcap.org>
F: include/linux/platform_data/pn544.h
NFS, SUNRPC, AND LOCKD CLIENTS
-M: Trond Myklebust <Trond.Myklebust@netapp.com>
+M: Trond Myklebust <trond.myklebust@primarydata.com>
L: linux-nfs@vger.kernel.org
W: http://client.linux-nfs.org
-T: git git://git.linux-nfs.org/pub/linux/nfs-2.6.git
+T: git git://git.linux-nfs.org/projects/trondmy/linux-nfs.git
S: Maintained
F: fs/lockd/
F: fs/nfs/
M: Rob Herring <rob.herring@calxeda.com>
M: Pawel Moll <pawel.moll@arm.com>
M: Mark Rutland <mark.rutland@arm.com>
-M: Stephen Warren <swarren@wwwdotorg.org>
M: Ian Campbell <ijc+devicetree@hellion.org.uk>
+M: Kumar Gala <galak@codeaurora.org>
L: devicetree@vger.kernel.org
S: Maintained
F: Documentation/devicetree/
F: kernel/sched/
F: include/linux/sched.h
F: include/uapi/linux/sched.h
-F: kernel/wait.c
F: include/linux/wait.h
SCORE ARCHITECTURE
VERSION = 3
PATCHLEVEL = 13
SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
NAME = One Giant Leap for Frogkind
# *DOCUMENTATION*
config ARC
def_bool y
+ select BUILDTIME_EXTABLE_SORT
select CLONE_BACKWARDS
# ARC Busybox based initramfs absolutely relies on DEVTMPFS for /dev
select DEVTMPFS if !INITRAMFS_SOURCE=""
/******** no-legacy-syscalls-ABI *******/
+#ifndef _UAPI_ASM_ARC_UNISTD_H
+#define _UAPI_ASM_ARC_UNISTD_H
+
#define __ARCH_WANT_SYS_EXECVE
#define __ARCH_WANT_SYS_CLONE
#define __ARCH_WANT_SYS_VFORK
/* Generic syscall (fs/filesystems.c - lost in asm-generic/unistd.h */
#define __NR_sysfs (__NR_arch_specific_syscall + 3)
__SYSCALL(__NR_sysfs, sys_sysfs)
+
+#endif
cache_result = (config >> 16) & 0xff;
if (cache_type >= PERF_COUNT_HW_CACHE_MAX)
return -EINVAL;
- if (cache_type >= PERF_COUNT_HW_CACHE_OP_MAX)
+ if (cache_op >= PERF_COUNT_HW_CACHE_OP_MAX)
return -EINVAL;
- if (cache_type >= PERF_COUNT_HW_CACHE_RESULT_MAX)
+ if (cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
return -EINVAL;
ret = arc_pmu_cache_map[cache_type][cache_op][cache_result];
/ {
model = "IGEP COM AM335x on AQUILA Expansion";
compatible = "isee,am335x-base0033", "isee,am335x-igep0033", "ti,am33xx";
+
+ hdmi {
+ compatible = "ti,tilcdc,slave";
+ i2c = <&i2c0>;
+ pinctrl-names = "default", "off";
+ pinctrl-0 = <&nxp_hdmi_pins>;
+ pinctrl-1 = <&nxp_hdmi_off_pins>;
+ status = "okay";
+ };
+
+ leds_base {
+ pinctrl-names = "default";
+ pinctrl-0 = <&leds_base_pins>;
+
+ compatible = "gpio-leds";
+
+ led@0 {
+ label = "base:red:user";
+ gpios = <&gpio1 21 GPIO_ACTIVE_HIGH>; /* gpio1_21 */
+ default-state = "off";
+ };
+
+ led@1 {
+ label = "base:green:user";
+ gpios = <&gpio2 0 GPIO_ACTIVE_HIGH>; /* gpio2_0 */
+ default-state = "off";
+ };
+ };
+};
+
+&am33xx_pinmux {
+ nxp_hdmi_pins: pinmux_nxp_hdmi_pins {
+ pinctrl-single,pins = <
+ 0x1b0 (PIN_OUTPUT | MUX_MODE3) /* xdma_event_intr0.clkout1 */
+ 0xa0 (PIN_OUTPUT | MUX_MODE0) /* lcd_data0 */
+ 0xa4 (PIN_OUTPUT | MUX_MODE0) /* lcd_data1 */
+ 0xa8 (PIN_OUTPUT | MUX_MODE0) /* lcd_data2 */
+ 0xac (PIN_OUTPUT | MUX_MODE0) /* lcd_data3 */
+ 0xb0 (PIN_OUTPUT | MUX_MODE0) /* lcd_data4 */
+ 0xb4 (PIN_OUTPUT | MUX_MODE0) /* lcd_data5 */
+ 0xb8 (PIN_OUTPUT | MUX_MODE0) /* lcd_data6 */
+ 0xbc (PIN_OUTPUT | MUX_MODE0) /* lcd_data7 */
+ 0xc0 (PIN_OUTPUT | MUX_MODE0) /* lcd_data8 */
+ 0xc4 (PIN_OUTPUT | MUX_MODE0) /* lcd_data9 */
+ 0xc8 (PIN_OUTPUT | MUX_MODE0) /* lcd_data10 */
+ 0xcc (PIN_OUTPUT | MUX_MODE0) /* lcd_data11 */
+ 0xd0 (PIN_OUTPUT | MUX_MODE0) /* lcd_data12 */
+ 0xd4 (PIN_OUTPUT | MUX_MODE0) /* lcd_data13 */
+ 0xd8 (PIN_OUTPUT | MUX_MODE0) /* lcd_data14 */
+ 0xdc (PIN_OUTPUT | MUX_MODE0) /* lcd_data15 */
+ 0xe0 (PIN_OUTPUT | MUX_MODE0) /* lcd_vsync */
+ 0xe4 (PIN_OUTPUT | MUX_MODE0) /* lcd_hsync */
+ 0xe8 (PIN_OUTPUT | MUX_MODE0) /* lcd_pclk */
+ 0xec (PIN_OUTPUT | MUX_MODE0) /* lcd_ac_bias_en */
+ >;
+ };
+ nxp_hdmi_off_pins: pinmux_nxp_hdmi_off_pins {
+ pinctrl-single,pins = <
+ 0x1b0 (PIN_OUTPUT | MUX_MODE3) /* xdma_event_intr0.clkout1 */
+ >;
+ };
+
+ leds_base_pins: pinmux_leds_base_pins {
+ pinctrl-single,pins = <
+ 0x54 (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_a5.gpio1_21 */
+ 0x88 (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_csn3.gpio2_0 */
+ >;
+ };
+};
+
+&lcdc {
+ status = "okay";
+};
+
+&i2c0 {
+ eeprom: eeprom@50 {
+ compatible = "at,24c256";
+ reg = <0x50>;
+ };
};
pinctrl-0 = <&uart0_pins>;
};
+&usb {
+ status = "okay";
+
+ control@44e10000 {
+ status = "okay";
+ };
+
+ usb-phy@47401300 {
+ status = "okay";
+ };
+
+ usb-phy@47401b00 {
+ status = "okay";
+ };
+
+ usb@47401000 {
+ status = "okay";
+ };
+
+ usb@47401800 {
+ status = "okay";
+ dr_mode = "host";
+ };
+
+ dma-controller@07402000 {
+ status = "okay";
+ };
+};
+
#include "tps65910.dtsi"
&tps {
*/
/dts-v1/;
-#include "omap34xx.dtsi"
+#include "am3517.dtsi"
/ {
- model = "TI AM3517 EVM (AM3517/05)";
- compatible = "ti,am3517-evm", "ti,omap3";
+ model = "TI AM3517 EVM (AM3517/05 TMDSEVM3517)";
+ compatible = "ti,am3517-evm", "ti,am3517", "ti,omap3";
memory {
device_type = "memory";
--- /dev/null
+/*
+ * Device Tree Source for am3517 SoC
+ *
+ * Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include "omap3.dtsi"
+
+/ {
+ aliases {
+ serial3 = &uart4;
+ };
+
+ ocp {
+ am35x_otg_hs: am35x_otg_hs@5c040000 {
+ compatible = "ti,omap3-musb";
+ ti,hwmods = "am35x_otg_hs";
+ status = "disabled";
+ reg = <0x5c040000 0x1000>;
+ interrupts = <71>;
+ interrupt-names = "mc";
+ };
+
+ davinci_emac: ethernet@0x5c000000 {
+ compatible = "ti,am3517-emac";
+ ti,hwmods = "davinci_emac";
+ status = "disabled";
+ reg = <0x5c000000 0x30000>;
+ interrupts = <67 68 69 70>;
+ ti,davinci-ctrl-reg-offset = <0x10000>;
+ ti,davinci-ctrl-mod-reg-offset = <0>;
+ ti,davinci-ctrl-ram-offset = <0x20000>;
+ ti,davinci-ctrl-ram-size = <0x2000>;
+ ti,davinci-rmii-en = /bits/ 8 <1>;
+ local-mac-address = [ 00 00 00 00 00 00 ];
+ };
+
+ davinci_mdio: ethernet@0x5c030000 {
+ compatible = "ti,davinci_mdio";
+ ti,hwmods = "davinci_mdio";
+ status = "disabled";
+ reg = <0x5c030000 0x1000>;
+ bus_freq = <1000000>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ uart4: serial@4809e000 {
+ compatible = "ti,omap3-uart";
+ ti,hwmods = "uart4";
+ status = "disabled";
+ reg = <0x4809e000 0x400>;
+ interrupts = <84>;
+ dmas = <&sdma 55 &sdma 54>;
+ dma-names = "tx", "rx";
+ clock-frequency = <48000000>;
+ };
+ };
+};
spi-max-frequency = <50000000>;
};
};
+ };
- pcie-controller {
+ pcie-controller {
+ status = "okay";
+ /*
+ * The two PCIe units are accessible through
+ * both standard PCIe slots and mini-PCIe
+ * slots on the board.
+ */
+ pcie@1,0 {
+ /* Port 0, Lane 0 */
+ status = "okay";
+ };
+ pcie@2,0 {
+ /* Port 1, Lane 0 */
status = "okay";
- /*
- * The two PCIe units are accessible through
- * both standard PCIe slots and mini-PCIe
- * slots on the board.
- */
- pcie@1,0 {
- /* Port 0, Lane 0 */
- status = "okay";
- };
- pcie@2,0 {
- /* Port 1, Lane 0 */
- status = "okay";
- };
};
};
};
coherency-fabric@20200 {
compatible = "marvell,coherency-fabric";
- reg = <0x20200 0xb0>, <0x21810 0x1c>;
+ reg = <0x20200 0xb0>, <0x21010 0x1c>;
};
serial@12000 {
/*
* MV78230 has 2 PCIe units Gen2.0: One unit can be
* configured as x4 or quad x1 lanes. One unit is
- * x4/x1.
+ * x1 only.
*/
pcie-controller {
compatible = "marvell,armada-xp-pcie";
ranges =
<0x82000000 0 0x40000 MBUS_ID(0xf0, 0x01) 0x40000 0 0x00002000 /* Port 0.0 registers */
- 0x82000000 0 0x42000 MBUS_ID(0xf0, 0x01) 0x42000 0 0x00002000 /* Port 2.0 registers */
0x82000000 0 0x44000 MBUS_ID(0xf0, 0x01) 0x44000 0 0x00002000 /* Port 0.1 registers */
0x82000000 0 0x48000 MBUS_ID(0xf0, 0x01) 0x48000 0 0x00002000 /* Port 0.2 registers */
0x82000000 0 0x4c000 MBUS_ID(0xf0, 0x01) 0x4c000 0 0x00002000 /* Port 0.3 registers */
+ 0x82000000 0 0x80000 MBUS_ID(0xf0, 0x01) 0x80000 0 0x00002000 /* Port 1.0 registers */
0x82000000 0x1 0 MBUS_ID(0x04, 0xe8) 0 1 0 /* Port 0.0 MEM */
0x81000000 0x1 0 MBUS_ID(0x04, 0xe0) 0 1 0 /* Port 0.0 IO */
0x82000000 0x2 0 MBUS_ID(0x04, 0xd8) 0 1 0 /* Port 0.1 MEM */
0x81000000 0x3 0 MBUS_ID(0x04, 0xb0) 0 1 0 /* Port 0.2 IO */
0x82000000 0x4 0 MBUS_ID(0x04, 0x78) 0 1 0 /* Port 0.3 MEM */
0x81000000 0x4 0 MBUS_ID(0x04, 0x70) 0 1 0 /* Port 0.3 IO */
- 0x82000000 0x9 0 MBUS_ID(0x04, 0xf8) 0 1 0 /* Port 2.0 MEM */
- 0x81000000 0x9 0 MBUS_ID(0x04, 0xf0) 0 1 0 /* Port 2.0 IO */>;
+ 0x82000000 0x5 0 MBUS_ID(0x08, 0xe8) 0 1 0 /* Port 1.0 MEM */
+ 0x81000000 0x5 0 MBUS_ID(0x08, 0xe0) 0 1 0 /* Port 1.0 IO */>;
pcie@1,0 {
device_type = "pci";
status = "disabled";
};
- pcie@9,0 {
+ pcie@5,0 {
device_type = "pci";
- assigned-addresses = <0x82000800 0 0x42000 0 0x2000>;
- reg = <0x4800 0 0 0 0>;
+ assigned-addresses = <0x82000800 0 0x80000 0 0x2000>;
+ reg = <0x2800 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
- ranges = <0x82000000 0 0 0x82000000 0x9 0 1 0
- 0x81000000 0 0 0x81000000 0x9 0 1 0>;
+ ranges = <0x82000000 0 0 0x82000000 0x5 0 1 0
+ 0x81000000 0 0 0x81000000 0x5 0 1 0>;
interrupt-map-mask = <0 0 0 0>;
- interrupt-map = <0 0 0 0 &mpic 99>;
- marvell,pcie-port = <2>;
+ interrupt-map = <0 0 0 0 &mpic 62>;
+ marvell,pcie-port = <1>;
marvell,pcie-lane = <0>;
- clocks = <&gateclk 26>;
+ clocks = <&gateclk 9>;
status = "disabled";
};
};
/*
* MV78260 has 3 PCIe units Gen2.0: Two units can be
* configured as x4 or quad x1 lanes. One unit is
- * x4/x1.
+ * x4 only.
*/
pcie-controller {
compatible = "marvell,armada-xp-pcie";
0x82000000 0 0x48000 MBUS_ID(0xf0, 0x01) 0x48000 0 0x00002000 /* Port 0.2 registers */
0x82000000 0 0x4c000 MBUS_ID(0xf0, 0x01) 0x4c000 0 0x00002000 /* Port 0.3 registers */
0x82000000 0 0x80000 MBUS_ID(0xf0, 0x01) 0x80000 0 0x00002000 /* Port 1.0 registers */
- 0x82000000 0 0x82000 MBUS_ID(0xf0, 0x01) 0x82000 0 0x00002000 /* Port 3.0 registers */
+ 0x82000000 0 0x84000 MBUS_ID(0xf0, 0x01) 0x84000 0 0x00002000 /* Port 1.1 registers */
+ 0x82000000 0 0x88000 MBUS_ID(0xf0, 0x01) 0x88000 0 0x00002000 /* Port 1.2 registers */
+ 0x82000000 0 0x8c000 MBUS_ID(0xf0, 0x01) 0x8c000 0 0x00002000 /* Port 1.3 registers */
0x82000000 0x1 0 MBUS_ID(0x04, 0xe8) 0 1 0 /* Port 0.0 MEM */
0x81000000 0x1 0 MBUS_ID(0x04, 0xe0) 0 1 0 /* Port 0.0 IO */
0x82000000 0x2 0 MBUS_ID(0x04, 0xd8) 0 1 0 /* Port 0.1 MEM */
0x81000000 0x3 0 MBUS_ID(0x04, 0xb0) 0 1 0 /* Port 0.2 IO */
0x82000000 0x4 0 MBUS_ID(0x04, 0x78) 0 1 0 /* Port 0.3 MEM */
0x81000000 0x4 0 MBUS_ID(0x04, 0x70) 0 1 0 /* Port 0.3 IO */
- 0x82000000 0x9 0 MBUS_ID(0x08, 0xe8) 0 1 0 /* Port 1.0 MEM */
- 0x81000000 0x9 0 MBUS_ID(0x08, 0xe0) 0 1 0 /* Port 1.0 IO */
- 0x82000000 0xa 0 MBUS_ID(0x08, 0xf8) 0 1 0 /* Port 3.0 MEM */
- 0x81000000 0xa 0 MBUS_ID(0x08, 0xf0) 0 1 0 /* Port 3.0 IO */>;
+
+ 0x82000000 0x5 0 MBUS_ID(0x08, 0xe8) 0 1 0 /* Port 1.0 MEM */
+ 0x81000000 0x5 0 MBUS_ID(0x08, 0xe0) 0 1 0 /* Port 1.0 IO */
+ 0x82000000 0x6 0 MBUS_ID(0x08, 0xd8) 0 1 0 /* Port 1.1 MEM */
+ 0x81000000 0x6 0 MBUS_ID(0x08, 0xd0) 0 1 0 /* Port 1.1 IO */
+ 0x82000000 0x7 0 MBUS_ID(0x08, 0xb8) 0 1 0 /* Port 1.2 MEM */
+ 0x81000000 0x7 0 MBUS_ID(0x08, 0xb0) 0 1 0 /* Port 1.2 IO */
+ 0x82000000 0x8 0 MBUS_ID(0x08, 0x78) 0 1 0 /* Port 1.3 MEM */
+ 0x81000000 0x8 0 MBUS_ID(0x08, 0x70) 0 1 0 /* Port 1.3 IO */
+
+ 0x82000000 0x9 0 MBUS_ID(0x04, 0xf8) 0 1 0 /* Port 2.0 MEM */
+ 0x81000000 0x9 0 MBUS_ID(0x04, 0xf0) 0 1 0 /* Port 2.0 IO */>;
pcie@1,0 {
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
- ranges = <0x82000000 0 0 0x82000000 0x2 0 1 0
- 0x81000000 0 0 0x81000000 0x2 0 1 0>;
+ ranges = <0x82000000 0 0 0x82000000 0x2 0 1 0
+ 0x81000000 0 0 0x81000000 0x2 0 1 0>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &mpic 59>;
marvell,pcie-port = <0>;
status = "disabled";
};
- pcie@9,0 {
+ pcie@5,0 {
device_type = "pci";
- assigned-addresses = <0x82000800 0 0x42000 0 0x2000>;
- reg = <0x4800 0 0 0 0>;
+ assigned-addresses = <0x82000800 0 0x80000 0 0x2000>;
+ reg = <0x2800 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
- ranges = <0x82000000 0 0 0x82000000 0x9 0 1 0
- 0x81000000 0 0 0x81000000 0x9 0 1 0>;
+ ranges = <0x82000000 0 0 0x82000000 0x5 0 1 0
+ 0x81000000 0 0 0x81000000 0x5 0 1 0>;
interrupt-map-mask = <0 0 0 0>;
- interrupt-map = <0 0 0 0 &mpic 99>;
- marvell,pcie-port = <2>;
+ interrupt-map = <0 0 0 0 &mpic 62>;
+ marvell,pcie-port = <1>;
marvell,pcie-lane = <0>;
- clocks = <&gateclk 26>;
+ clocks = <&gateclk 9>;
status = "disabled";
};
- pcie@10,0 {
+ pcie@6,0 {
device_type = "pci";
- assigned-addresses = <0x82000800 0 0x82000 0 0x2000>;
- reg = <0x5000 0 0 0 0>;
+ assigned-addresses = <0x82000800 0 0x84000 0 0x2000>;
+ reg = <0x3000 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
- ranges = <0x82000000 0 0 0x82000000 0xa 0 1 0
- 0x81000000 0 0 0x81000000 0xa 0 1 0>;
+ ranges = <0x82000000 0 0 0x82000000 0x6 0 1 0
+ 0x81000000 0 0 0x81000000 0x6 0 1 0>;
interrupt-map-mask = <0 0 0 0>;
- interrupt-map = <0 0 0 0 &mpic 103>;
- marvell,pcie-port = <3>;
+ interrupt-map = <0 0 0 0 &mpic 63>;
+ marvell,pcie-port = <1>;
+ marvell,pcie-lane = <1>;
+ clocks = <&gateclk 10>;
+ status = "disabled";
+ };
+
+ pcie@7,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82000800 0 0x88000 0 0x2000>;
+ reg = <0x3800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ #interrupt-cells = <1>;
+ ranges = <0x82000000 0 0 0x82000000 0x7 0 1 0
+ 0x81000000 0 0 0x81000000 0x7 0 1 0>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &mpic 64>;
+ marvell,pcie-port = <1>;
+ marvell,pcie-lane = <2>;
+ clocks = <&gateclk 11>;
+ status = "disabled";
+ };
+
+ pcie@8,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82000800 0 0x8c000 0 0x2000>;
+ reg = <0x4000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ #interrupt-cells = <1>;
+ ranges = <0x82000000 0 0 0x82000000 0x8 0 1 0
+ 0x81000000 0 0 0x81000000 0x8 0 1 0>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &mpic 65>;
+ marvell,pcie-port = <1>;
+ marvell,pcie-lane = <3>;
+ clocks = <&gateclk 12>;
+ status = "disabled";
+ };
+
+ pcie@9,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82000800 0 0x42000 0 0x2000>;
+ reg = <0x4800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ #interrupt-cells = <1>;
+ ranges = <0x82000000 0 0 0x82000000 0x9 0 1 0
+ 0x81000000 0 0 0x81000000 0x9 0 1 0>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &mpic 99>;
+ marvell,pcie-port = <2>;
marvell,pcie-lane = <0>;
- clocks = <&gateclk 27>;
+ clocks = <&gateclk 26>;
status = "disabled";
};
};
#include <dt-bindings/interrupt-controller/irq.h>
/ {
+ aliases {
+ serial4 = &usart3;
+ };
+
ahb {
apb {
pinctrl@fffff400 {
gpmc,wr-access-ns = <186>;
gpmc,cycle2cycle-samecsen;
gpmc,cycle2cycle-diffcsen;
- vmmc-supply = <&vddvario>;
- vmmc_aux-supply = <&vdd33a>;
+ vddvario-supply = <&vddvario>;
+ vdd33a-supply = <&vdd33a>;
reg-io-width = <4>;
smsc,save-mac-address;
};
&usbhsehci {
phys = <0 &hsusb2_phy>;
};
+
+&vaux2 {
+ regulator-name = "usb_1v8";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+};
vcc-supply = <&hsusb2_power>;
};
+ sound {
+ compatible = "ti,omap-twl4030";
+ ti,model = "omap3beagle";
+
+ ti,mcbsp = <&mcbsp2>;
+ ti,codec = <&twl_audio>;
+ };
+
gpio_keys {
compatible = "gpio-keys";
reg = <0x48>;
interrupts = <7>; /* SYS_NIRQ cascaded to intc */
interrupt-parent = <&intc>;
+
+ twl_audio: audio {
+ compatible = "ti,twl4030-audio";
+ codec {
+ };
+ };
};
};
mode = <3>;
power = <50>;
};
+
+&vaux2 {
+ regulator-name = "vdd_ehci";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+};
/*
- * Device Tree Source for IGEP Technology devices
+ * Common device tree for IGEP boards based on AM/DM37x
*
* Copyright (C) 2012 Javier Martinez Canillas <javier@collabora.co.uk>
* Copyright (C) 2012 Enric Balletbo i Serra <eballetbo@gmail.com>
*/
/dts-v1/;
-#include "omap34xx.dtsi"
+#include "omap36xx.dtsi"
/ {
memory {
ti,mcbsp = <&mcbsp2>;
ti,codec = <&twl_audio>;
};
+
+ vdd33: regulator-vdd33 {
+ compatible = "regulator-fixed";
+ regulator-name = "vdd33";
+ regulator-always-on;
+ };
+
+ lbee1usjyc_vmmc: lbee1usjyc_vmmc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&lbee1usjyc_pins>;
+ compatible = "regulator-fixed";
+ regulator-name = "regulator-lbee1usjyc";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ gpio = <&gpio5 10 GPIO_ACTIVE_HIGH>; /* gpio_138 WIFI_PDN */
+ startup-delay-us = <10000>;
+ enable-active-high;
+ vin-supply = <&vdd33>;
+ };
};
&omap3_pmx_core {
>;
};
+ /* WiFi/BT combo */
+ lbee1usjyc_pins: pinmux_lbee1usjyc_pins {
+ pinctrl-single,pins = <
+ 0x136 (PIN_OUTPUT | MUX_MODE4) /* sdmmc2_dat5.gpio_137 */
+ 0x138 (PIN_OUTPUT | MUX_MODE4) /* sdmmc2_dat6.gpio_138 */
+ 0x13a (PIN_OUTPUT | MUX_MODE4) /* sdmmc2_dat7.gpio_139 */
+ >;
+ };
+
mcbsp2_pins: pinmux_mcbsp2_pins {
pinctrl-single,pins = <
0x10c (PIN_INPUT | MUX_MODE0) /* mcbsp2_fsx.mcbsp2_fsx */
0x11a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc1_dat1.sdmmc1_dat1 */
0x11c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc1_dat2.sdmmc1_dat2 */
0x11e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc1_dat3.sdmmc1_dat3 */
- 0x120 (PIN_INPUT | MUX_MODE0) /* sdmmc1_dat4.sdmmc1_dat4 */
- 0x122 (PIN_INPUT | MUX_MODE0) /* sdmmc1_dat5.sdmmc1_dat5 */
- 0x124 (PIN_INPUT | MUX_MODE0) /* sdmmc1_dat6.sdmmc1_dat6 */
- 0x126 (PIN_INPUT | MUX_MODE0) /* sdmmc1_dat7.sdmmc1_dat7 */
+ >;
+ };
+
+ mmc2_pins: pinmux_mmc2_pins {
+ pinctrl-single,pins = <
+ 0x128 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_clk.sdmmc2_clk */
+ 0x12a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_cmd.sdmmc2_cmd */
+ 0x12c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat0.sdmmc2_dat0 */
+ 0x12e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat1.sdmmc2_dat1 */
+ 0x130 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat2.sdmmc2_dat2 */
+ 0x132 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat3.sdmmc2_dat3 */
>;
};
>;
};
+ i2c1_pins: pinmux_i2c1_pins {
+ pinctrl-single,pins = <
+ 0x18a (PIN_INPUT | MUX_MODE0) /* i2c1_scl.i2c1_scl */
+ 0x18c (PIN_INPUT | MUX_MODE0) /* i2c1_sda.i2c1_sda */
+ >;
+ };
+
+ i2c2_pins: pinmux_i2c2_pins {
+ pinctrl-single,pins = <
+ 0x18e (PIN_INPUT | MUX_MODE0) /* i2c2_scl.i2c2_scl */
+ 0x190 (PIN_INPUT | MUX_MODE0) /* i2c2_sda.i2c2_sda */
+ >;
+ };
+
+ i2c3_pins: pinmux_i2c3_pins {
+ pinctrl-single,pins = <
+ 0x192 (PIN_INPUT | MUX_MODE0) /* i2c3_scl.i2c3_scl */
+ 0x194 (PIN_INPUT | MUX_MODE0) /* i2c3_sda.i2c3_sda */
+ >;
+ };
+
leds_pins: pinmux_leds_pins { };
};
&i2c1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c1_pins>;
clock-frequency = <2600000>;
twl: twl@48 {
#include "twl4030_omap3.dtsi"
&i2c2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2_pins>;
clock-frequency = <400000>;
};
+&i2c3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c3_pins>;
+};
+
&mcbsp2 {
pinctrl-names = "default";
pinctrl-0 = <&mcbsp2_pins>;
pinctrl-0 = <&mmc1_pins>;
vmmc-supply = <&vmmc1>;
vmmc_aux-supply = <&vsim>;
- bus-width = <8>;
+ bus-width = <4>;
};
&mmc2 {
- status = "disabled";
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc2_pins>;
+ vmmc-supply = <&lbee1usjyc_vmmc>;
+ bus-width = <4>;
+ non-removable;
};
&mmc3 {
/*
- * Device Tree Source for IGEPv2 board
+ * Device Tree Source for IGEPv2 Rev. (TI OMAP AM/DM37x)
*
* Copyright (C) 2012 Javier Martinez Canillas <javier@collabora.co.uk>
* Copyright (C) 2012 Enric Balletbo i Serra <eballetbo@gmail.com>
#include "omap-gpmc-smsc911x.dtsi"
/ {
- model = "IGEPv2";
+ model = "IGEPv2 (TI OMAP AM/DM37x)";
compatible = "isee,omap3-igep0020", "ti,omap3";
leds {
pinctrl-names = "default";
pinctrl-0 = <
&hsusbb1_pins
+ &tfp410_pins
+ &dss_pins
>;
hsusbb1_pins: pinmux_hsusbb1_pins {
0x5ba (PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d7.hsusb1_data3 */
>;
};
+
+ tfp410_pins: tfp410_dvi_pins {
+ pinctrl-single,pins = <
+ 0x196 (PIN_OUTPUT | MUX_MODE4) /* hdq_sio.gpio_170 */
+ >;
+ };
+
+ dss_pins: pinmux_dss_dvi_pins {
+ pinctrl-single,pins = <
+ 0x0a4 (PIN_OUTPUT | MUX_MODE0) /* dss_pclk.dss_pclk */
+ 0x0a6 (PIN_OUTPUT | MUX_MODE0) /* dss_hsync.dss_hsync */
+ 0x0a8 (PIN_OUTPUT | MUX_MODE0) /* dss_vsync.dss_vsync */
+ 0x0aa (PIN_OUTPUT | MUX_MODE0) /* dss_acbias.dss_acbias */
+ 0x0ac (PIN_OUTPUT | MUX_MODE0) /* dss_data0.dss_data0 */
+ 0x0ae (PIN_OUTPUT | MUX_MODE0) /* dss_data1.dss_data1 */
+ 0x0b0 (PIN_OUTPUT | MUX_MODE0) /* dss_data2.dss_data2 */
+ 0x0b2 (PIN_OUTPUT | MUX_MODE0) /* dss_data3.dss_data3 */
+ 0x0b4 (PIN_OUTPUT | MUX_MODE0) /* dss_data4.dss_data4 */
+ 0x0b6 (PIN_OUTPUT | MUX_MODE0) /* dss_data5.dss_data5 */
+ 0x0b8 (PIN_OUTPUT | MUX_MODE0) /* dss_data6.dss_data6 */
+ 0x0ba (PIN_OUTPUT | MUX_MODE0) /* dss_data7.dss_data7 */
+ 0x0bc (PIN_OUTPUT | MUX_MODE0) /* dss_data8.dss_data8 */
+ 0x0be (PIN_OUTPUT | MUX_MODE0) /* dss_data9.dss_data9 */
+ 0x0c0 (PIN_OUTPUT | MUX_MODE0) /* dss_data10.dss_data10 */
+ 0x0c2 (PIN_OUTPUT | MUX_MODE0) /* dss_data11.dss_data11 */
+ 0x0c4 (PIN_OUTPUT | MUX_MODE0) /* dss_data12.dss_data12 */
+ 0x0c6 (PIN_OUTPUT | MUX_MODE0) /* dss_data13.dss_data13 */
+ 0x0c8 (PIN_OUTPUT | MUX_MODE0) /* dss_data14.dss_data14 */
+ 0x0ca (PIN_OUTPUT | MUX_MODE0) /* dss_data15.dss_data15 */
+ 0x0cc (PIN_OUTPUT | MUX_MODE0) /* dss_data16.dss_data16 */
+ 0x0ce (PIN_OUTPUT | MUX_MODE0) /* dss_data17.dss_data17 */
+ 0x0d0 (PIN_OUTPUT | MUX_MODE0) /* dss_data18.dss_data18 */
+ 0x0d2 (PIN_OUTPUT | MUX_MODE0) /* dss_data19.dss_data19 */
+ 0x0d4 (PIN_OUTPUT | MUX_MODE0) /* dss_data20.dss_data20 */
+ 0x0d6 (PIN_OUTPUT | MUX_MODE0) /* dss_data21.dss_data21 */
+ 0x0d8 (PIN_OUTPUT | MUX_MODE0) /* dss_data22.dss_data22 */
+ 0x0da (PIN_OUTPUT | MUX_MODE0) /* dss_data23.dss_data23 */
+ >;
+ };
};
&leds_pins {
&usbhsehci {
phys = <&hsusb1_phy>;
};
+
+&vpll2 {
+ /* Needed for DSS */
+ regulator-name = "vdds_dsi";
+};
/*
- * Device Tree Source for IGEP COM Module
+ * Device Tree Source for IGEP COM MODULE (TI OMAP AM/DM37x)
*
* Copyright (C) 2012 Javier Martinez Canillas <javier@collabora.co.uk>
* Copyright (C) 2012 Enric Balletbo i Serra <eballetbo@gmail.com>
#include "omap3-igep.dtsi"
/ {
- model = "IGEP COM Module";
+ model = "IGEP COM MODULE (TI OMAP AM/DM37x)";
compatible = "isee,omap3-igep0030", "ti,omap3";
leds {
/dts-v1/;
-#include "omap34xx.dtsi"
+#include "omap34xx-hs.dtsi"
/ {
model = "Nokia N900";
>;
};
+ mmc2_pins: pinmux_mmc2_pins {
+ pinctrl-single,pins = <
+ 0x128 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_clk */
+ 0x12a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_cmd */
+ 0x12c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat0 */
+ 0x12e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat1 */
+ 0x130 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat2 */
+ 0x132 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat3 */
+ 0x134 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat4 */
+ 0x136 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat5 */
+ 0x138 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat6 */
+ 0x13a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat7 */
+ >;
+ };
+
display_pins: pinmux_display_pins {
pinctrl-single,pins = <
0x0d4 (PIN_OUTPUT | MUX_MODE4) /* RX51_LCD_RESET_GPIO */
cd-gpios = <&gpio6 0 GPIO_ACTIVE_HIGH>; /* 160 */
};
+/* most boards use vaux3; only some old versions use vmmc2 instead */
&mmc2 {
- status = "disabled";
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc2_pins>;
+ vmmc-supply = <&vaux3>;
+ vmmc_aux-supply = <&vsim>;
+ bus-width = <8>;
+ non-removable;
};
&mmc3 {
* published by the Free Software Foundation.
*/
-#include "omap36xx.dtsi"
+#include "omap36xx-hs.dtsi"
/ {
cpus {
ranges;
ti,hwmods = "l3_main";
+ aes: aes@480c5000 {
+ compatible = "ti,omap3-aes";
+ ti,hwmods = "aes";
+ reg = <0x480c5000 0x50>;
+ interrupts = <0>;
+ };
+
counter32k: counter@48320000 {
compatible = "ti,omap-counter32k";
reg = <0x48320000 0x20>;
ti,hwmods = "i2c3";
};
+ mailbox: mailbox@48094000 {
+ compatible = "ti,omap3-mailbox";
+ ti,hwmods = "mailbox";
+ reg = <0x48094000 0x200>;
+ interrupts = <26>;
+ };
+
mcspi1: spi@48098000 {
compatible = "ti,omap2-mcspi";
reg = <0x48098000 0x100>;
dma-names = "tx", "rx";
};
+ mmu_isp: mmu@480bd400 {
+ compatible = "ti,omap3-mmu-isp";
+ ti,hwmods = "mmu_isp";
+ reg = <0x480bd400 0x80>;
+ interrupts = <8>;
+ };
+
wdt2: wdt@48314000 {
compatible = "ti,omap3-wdt";
reg = <0x48314000 0x80>;
dma-names = "tx", "rx";
};
+ sham: sham@480c3000 {
+ compatible = "ti,omap3-sham";
+ ti,hwmods = "sham";
+ reg = <0x480c3000 0x64>;
+ interrupts = <49>;
+ };
+
+ smartreflex_core: smartreflex@480cb000 {
+ compatible = "ti,omap3-smartreflex-core";
+ ti,hwmods = "smartreflex_core";
+ reg = <0x480cb000 0x400>;
+ interrupts = <19>;
+ };
+
+ smartreflex_mpu_iva: smartreflex@480c9000 {
+ compatible = "ti,omap3-smartreflex-iva";
+ ti,hwmods = "smartreflex_mpu_iva";
+ reg = <0x480c9000 0x400>;
+ interrupts = <18>;
+ };
+
timer1: timer@48318000 {
compatible = "ti,omap3430-timer";
reg = <0x48318000 0x400>;
--- /dev/null
+/* Disabled modules for secure omaps */
+
+#include "omap34xx.dtsi"
+
+/* Secure omaps have some devices inaccessible depending on the firmware */
+&aes {
+ status = "disabled";
+};
+
+&sham {
+ status = "disabled";
+};
+
+&timer12 {
+ status = "disabled";
+};
--- /dev/null
+/* Disabled modules for secure omaps */
+
+#include "omap36xx.dtsi"
+
+/* Secure omaps have some devices inaccessible depending on the firmware */
+&aes {
+ status = "disabled";
+};
+
+&sham {
+ status = "disabled";
+};
+
+&timer12 {
+ status = "disabled";
+};
0xf0 (PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_sda */
>;
};
-};
-
-&omap4_pmx_wkup {
- led_wkgpio_pins: pinmux_leds_wkpins {
- pinctrl-single,pins = <
- 0x1a (PIN_OUTPUT | MUX_MODE3) /* gpio_wk7 */
- 0x1c (PIN_OUTPUT | MUX_MODE3) /* gpio_wk8 */
- >;
- };
/*
* wl12xx GPIO outputs for WLAN_EN, BT_EN, FM_EN, BT_WAKEUP
pinctrl-single,pins = <
0x38 (PIN_INPUT | MUX_MODE3) /* gpmc_ncs2.gpio_52 */
0x3a (PIN_INPUT | MUX_MODE3) /* gpmc_ncs3.gpio_53 */
- 0x108 (PIN_OUTPUT | MUX_MODE0) /* sdmmc5_clk.sdmmc5_clk */
+ 0x108 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_clk.sdmmc5_clk */
0x10a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_cmd.sdmmc5_cmd */
0x10c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat0.sdmmc5_dat0 */
0x10e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat1.sdmmc5_dat1 */
};
};
+&omap4_pmx_wkup {
+ led_wkgpio_pins: pinmux_leds_wkpins {
+ pinctrl-single,pins = <
+ 0x1a (PIN_OUTPUT | MUX_MODE3) /* gpio_wk7 */
+ 0x1c (PIN_OUTPUT | MUX_MODE3) /* gpio_wk8 */
+ >;
+ };
+};
+
&i2c1 {
pinctrl-names = "default";
pinctrl-0 = <&i2c1_pins>;
wl12xx_pins: pinmux_wl12xx_pins {
pinctrl-single,pins = <
0x3a (PIN_INPUT | MUX_MODE3) /* gpmc_ncs3.gpio_53 */
- 0x108 (PIN_OUTPUT | MUX_MODE3) /* sdmmc5_clk.sdmmc5_clk */
- 0x10a (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_cmd.sdmmc5_cmd */
- 0x10c (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat0.sdmmc5_dat0 */
- 0x10e (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat1.sdmmc5_dat1 */
- 0x110 (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat2.sdmmc5_dat2 */
- 0x112 (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat3.sdmmc5_dat3 */
+ 0x108 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_clk.sdmmc5_clk */
+ 0x10a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_cmd.sdmmc5_cmd */
+ 0x10c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat0.sdmmc5_dat0 */
+ 0x10e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat1.sdmmc5_dat1 */
+ 0x110 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat2.sdmmc5_dat2 */
+ 0x112 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat3.sdmmc5_dat3 */
>;
};
};
mpu_periph_clk: mpu_periph_clk {
#clock-cells = <0>;
- compatible = "altr,socfpga-gate-clk";
+ compatible = "altr,socfpga-perip-clk";
clocks = <&mpuclk>;
fixed-divider = <4>;
};
mpu_l2_ram_clk: mpu_l2_ram_clk {
#clock-cells = <0>;
- compatible = "altr,socfpga-gate-clk";
+ compatible = "altr,socfpga-perip-clk";
clocks = <&mpuclk>;
fixed-divider = <2>;
};
l3_main_clk: l3_main_clk {
#clock-cells = <0>;
- compatible = "altr,socfpga-gate-clk";
+ compatible = "altr,socfpga-perip-clk";
clocks = <&mainclk>;
+ fixed-divider = <1>;
};
l3_mp_clk: l3_mp_clk {
pio: pinctrl@01c20800 {
compatible = "allwinner,sun6i-a31-pinctrl";
reg = <0x01c20800 0x400>;
- interrupts = <0 11 1>, <0 15 1>, <0 16 1>, <0 17 1>;
+ interrupts = <0 11 4>,
+ <0 15 4>,
+ <0 16 4>,
+ <0 17 4>;
clocks = <&apb1_gates 5>;
gpio-controller;
interrupt-controller;
timer@01c20c00 {
compatible = "allwinner,sun4i-timer";
reg = <0x01c20c00 0xa0>;
- interrupts = <0 18 1>,
- <0 19 1>,
- <0 20 1>,
- <0 21 1>,
- <0 22 1>;
+ interrupts = <0 18 4>,
+ <0 19 4>,
+ <0 20 4>,
+ <0 21 4>,
+ <0 22 4>;
clocks = <&osc24M>;
};
uart0: serial@01c28000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28000 0x400>;
- interrupts = <0 0 1>;
+ interrupts = <0 0 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 16>;
uart1: serial@01c28400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28400 0x400>;
- interrupts = <0 1 1>;
+ interrupts = <0 1 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 17>;
uart2: serial@01c28800 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28800 0x400>;
- interrupts = <0 2 1>;
+ interrupts = <0 2 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 18>;
uart3: serial@01c28c00 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28c00 0x400>;
- interrupts = <0 3 1>;
+ interrupts = <0 3 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 19>;
uart4: serial@01c29000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29000 0x400>;
- interrupts = <0 4 1>;
+ interrupts = <0 4 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 20>;
uart5: serial@01c29400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29400 0x400>;
- interrupts = <0 5 1>;
+ interrupts = <0 5 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 21>;
emac: ethernet@01c0b000 {
compatible = "allwinner,sun4i-emac";
reg = <0x01c0b000 0x1000>;
- interrupts = <0 55 1>;
+ interrupts = <0 55 4>;
clocks = <&ahb_gates 17>;
status = "disabled";
};
pio: pinctrl@01c20800 {
compatible = "allwinner,sun7i-a20-pinctrl";
reg = <0x01c20800 0x400>;
- interrupts = <0 28 1>;
+ interrupts = <0 28 4>;
clocks = <&apb0_gates 5>;
gpio-controller;
interrupt-controller;
timer@01c20c00 {
compatible = "allwinner,sun4i-timer";
reg = <0x01c20c00 0x90>;
- interrupts = <0 22 1>,
- <0 23 1>,
- <0 24 1>,
- <0 25 1>,
- <0 67 1>,
- <0 68 1>;
+ interrupts = <0 22 4>,
+ <0 23 4>,
+ <0 24 4>,
+ <0 25 4>,
+ <0 67 4>,
+ <0 68 4>;
clocks = <&osc24M>;
};
uart0: serial@01c28000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28000 0x400>;
- interrupts = <0 1 1>;
+ interrupts = <0 1 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 16>;
uart1: serial@01c28400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28400 0x400>;
- interrupts = <0 2 1>;
+ interrupts = <0 2 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 17>;
uart2: serial@01c28800 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28800 0x400>;
- interrupts = <0 3 1>;
+ interrupts = <0 3 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 18>;
uart3: serial@01c28c00 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28c00 0x400>;
- interrupts = <0 4 1>;
+ interrupts = <0 4 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 19>;
uart4: serial@01c29000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29000 0x400>;
- interrupts = <0 17 1>;
+ interrupts = <0 17 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 20>;
uart5: serial@01c29400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29400 0x400>;
- interrupts = <0 18 1>;
+ interrupts = <0 18 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 21>;
uart6: serial@01c29800 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29800 0x400>;
- interrupts = <0 19 1>;
+ interrupts = <0 19 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 22>;
uart7: serial@01c29c00 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29c00 0x400>;
- interrupts = <0 20 1>;
+ interrupts = <0 20 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 23>;
i2c0: i2c@01c2ac00 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2ac00 0x400>;
- interrupts = <0 7 1>;
+ interrupts = <0 7 4>;
clocks = <&apb1_gates 0>;
clock-frequency = <100000>;
status = "disabled";
i2c1: i2c@01c2b000 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2b000 0x400>;
- interrupts = <0 8 1>;
+ interrupts = <0 8 4>;
clocks = <&apb1_gates 1>;
clock-frequency = <100000>;
status = "disabled";
i2c2: i2c@01c2b400 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2b400 0x400>;
- interrupts = <0 9 1>;
+ interrupts = <0 9 4>;
clocks = <&apb1_gates 2>;
clock-frequency = <100000>;
status = "disabled";
i2c3: i2c@01c2b800 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2b800 0x400>;
- interrupts = <0 88 1>;
+ interrupts = <0 88 4>;
clocks = <&apb1_gates 3>;
clock-frequency = <100000>;
status = "disabled";
i2c4: i2c@01c2bc00 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2bc00 0x400>;
- interrupts = <0 89 1>;
+ interrupts = <0 89 4>;
clocks = <&apb1_gates 15>;
clock-frequency = <100000>;
status = "disabled";
CONFIG_SMSC911X=y
CONFIG_STMMAC_ETH=y
CONFIG_MDIO_SUN4I=y
+CONFIG_TI_CPSW=y
CONFIG_KEYBOARD_SPEAR=y
CONFIG_SERIO_AMBAKMI=y
CONFIG_SERIAL_8250=y
CONFIG_USB_ISP1301=y
CONFIG_USB_MXS_PHY=y
CONFIG_MMC=y
+CONFIG_MMC_BLOCK_MINORS=16
CONFIG_MMC_ARMMMCI=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_ESDHC_IMX=y
CONFIG_MMC_SDHCI_TEGRA=y
CONFIG_MMC_SDHCI_SPEAR=y
+CONFIG_MMC_SDHCI_BCM_KONA=y
CONFIG_MMC_OMAP=y
CONFIG_MMC_OMAP_HS=y
CONFIG_EDAC=y
CONFIG_MFD_TPS65217=y
CONFIG_MFD_TPS65910=y
CONFIG_TWL6040_CORE=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_PALMAS=y
CONFIG_REGULATOR_TPS65023=y
CONFIG_REGULATOR_TPS6507X=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
CONFIG_COMMON_CLK_DEBUG=y
# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_TMPFS=y
+CONFIG_NFS_FS=y
+CONFIG_ROOT_NFS=y
CONFIG_NLS=y
+CONFIG_PRINTK_TIME=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_IDLE=y
+CONFIG_ARM_U8500_CPUIDLE=y
CONFIG_VFP=y
CONFIG_NEON=y
CONFIG_PM_RUNTIME=y
CONFIG_EXT3_FS=y
CONFIG_EXT4_FS=y
CONFIG_VFAT_FS=y
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
# CONFIG_MISC_FILESYSTEMS is not set
#define TASK_UNMAPPED_BASE UL(0x00000000)
#endif
-#ifndef PHYS_OFFSET
-#define PHYS_OFFSET UL(CONFIG_DRAM_BASE)
-#endif
-
#ifndef END_MEM
#define END_MEM (UL(CONFIG_DRAM_BASE) + CONFIG_DRAM_SIZE)
#endif
#ifndef PAGE_OFFSET
-#define PAGE_OFFSET (PHYS_OFFSET)
+#define PAGE_OFFSET PLAT_PHYS_OFFSET
#endif
/*
* The module can be at any place in ram in nommu mode.
*/
#define MODULES_END (END_MEM)
-#define MODULES_VADDR (PHYS_OFFSET)
+#define MODULES_VADDR PAGE_OFFSET
#define XIP_VIRT_ADDR(physaddr) (physaddr)
#endif
#define ARCH_PGD_MASK ((1 << ARCH_PGD_SHIFT) - 1)
+/*
+ * PLAT_PHYS_OFFSET is the offset (from zero) of the start of physical
+ * memory. This is used for XIP and NoMMU kernels, or by kernels which
+ * have their own mach/memory.h. Assembly code must always use
+ * PLAT_PHYS_OFFSET and not PHYS_OFFSET.
+ */
+#ifndef PLAT_PHYS_OFFSET
+#define PLAT_PHYS_OFFSET UL(CONFIG_PHYS_OFFSET)
+#endif
+
#ifndef __ASSEMBLY__
/*
#else
+#define PHYS_OFFSET PLAT_PHYS_OFFSET
+
static inline phys_addr_t __virt_to_phys(unsigned long x)
{
return (phys_addr_t)x - PAGE_OFFSET + PHYS_OFFSET;
#endif
#endif
-#endif /* __ASSEMBLY__ */
-
-#ifndef PHYS_OFFSET
-#ifdef PLAT_PHYS_OFFSET
-#define PHYS_OFFSET PLAT_PHYS_OFFSET
-#else
-#define PHYS_OFFSET UL(CONFIG_PHYS_OFFSET)
-#endif
-#endif
-
-#ifndef __ASSEMBLY__
/*
* PFNs are used to describe any physical page; this means
* mapping to be mapped at. This is particularly important for
* non-high vector CPUs.
*/
-#define FIRST_USER_ADDRESS PAGE_SIZE
+#define FIRST_USER_ADDRESS (PAGE_SIZE * 2)
/*
* Use TASK_SIZE as the ceiling argument for free_pgtables() and
#ifdef CONFIG_ARM_MPU
/* Calculate the size of a region covering just the kernel */
- ldr r5, =PHYS_OFFSET @ Region start: PHYS_OFFSET
+ ldr r5, =PLAT_PHYS_OFFSET @ Region start: PHYS_OFFSET
ldr r6, =(_end) @ Cover whole kernel
sub r6, r6, r5 @ Minimum size of region to map
clz r6, r6 @ Region size must be 2^N...
set_region_nr r0, #MPU_RAM_REGION
isb
/* Full access from PL0, PL1, shared for CONFIG_SMP, cacheable */
- ldr r0, =PHYS_OFFSET @ RAM starts at PHYS_OFFSET
+ ldr r0, =PLAT_PHYS_OFFSET @ RAM starts at PHYS_OFFSET
ldr r5,=(MPU_AP_PL1RW_PL0RW | MPU_RGN_NORMAL)
setup_region r0, r5, r6, MPU_DATA_SIDE @ PHYS_OFFSET, shared, enabled
sub r4, r3, r4 @ (PHYS_OFFSET - PAGE_OFFSET)
add r8, r8, r4 @ PHYS_OFFSET
#else
- ldr r8, =PHYS_OFFSET @ always constant in this case
+ ldr r8, =PLAT_PHYS_OFFSET @ always constant in this case
#endif
/*
#include <asm/pgalloc.h>
#include <asm/mmu_context.h>
#include <asm/cacheflush.h>
+#include <asm/fncpy.h>
#include <asm/mach-types.h>
#include <asm/smp_plat.h>
#include <asm/system_misc.h>
-extern const unsigned char relocate_new_kernel[];
+extern void relocate_new_kernel(void);
extern const unsigned int relocate_new_kernel_size;
extern unsigned long kexec_start_address;
{
unsigned long page_list;
unsigned long reboot_code_buffer_phys;
+ unsigned long reboot_entry = (unsigned long)relocate_new_kernel;
+ unsigned long reboot_entry_phys;
void *reboot_code_buffer;
/*
/* copy our kernel relocation code to the control code page */
- memcpy(reboot_code_buffer,
- relocate_new_kernel, relocate_new_kernel_size);
+ reboot_entry = fncpy(reboot_code_buffer,
+ reboot_entry,
+ relocate_new_kernel_size);
+ reboot_entry_phys = (unsigned long)reboot_entry +
+ (reboot_code_buffer_phys - (unsigned long)reboot_code_buffer);
-
- flush_icache_range((unsigned long) reboot_code_buffer,
- (unsigned long) reboot_code_buffer + KEXEC_CONTROL_PAGE_SIZE);
printk(KERN_INFO "Bye!\n");
if (kexec_reinit)
kexec_reinit();
- soft_restart(reboot_code_buffer_phys);
+ soft_restart(reboot_entry_phys);
}
unsigned long get_wchan(struct task_struct *p)
{
struct stackframe frame;
+ unsigned long stack_page;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
return 0;
frame.sp = thread_saved_sp(p);
frame.lr = 0; /* recovered from the stack */
frame.pc = thread_saved_pc(p);
+ stack_page = (unsigned long)task_stack_page(p);
do {
- int ret = unwind_frame(&frame);
- if (ret < 0)
+ if (frame.sp < stack_page ||
+ frame.sp >= stack_page + THREAD_SIZE ||
+ unwind_frame(&frame) < 0)
return 0;
if (!in_sched_functions(frame.pc))
return frame.pc;
* relocate_kernel.S - put the kernel image in place to boot
*/
+#include <linux/linkage.h>
#include <asm/kexec.h>
- .globl relocate_new_kernel
-relocate_new_kernel:
+ .align 3 /* not needed for this code, but keeps fncpy() happy */
+
+ENTRY(relocate_new_kernel)
ldr r0,kexec_indirection_page
ldr r1,kexec_start_address
kexec_boot_atags:
.long 0x0
+ENDPROC(relocate_new_kernel)
+
relocate_new_kernel_end:
.globl relocate_new_kernel_size
machine_desc = mdesc;
machine_name = mdesc->name;
- setup_dma_zone(mdesc);
-
if (mdesc->reboot_mode != REBOOT_HARD)
reboot_mode = mdesc->reboot_mode;
sort(&meminfo.bank, meminfo.nr_banks, sizeof(meminfo.bank[0]), meminfo_cmp, NULL);
early_paging_init(mdesc, lookup_processor_type(read_cpuid_id()));
+ setup_dma_zone(mdesc);
sanity_check_meminfo();
arm_memblock_init(&meminfo, mdesc);
* snippets.
*/
+/*
+ * In CPU_THUMBONLY case kernel arm opcodes are not allowed.
+ * In the CPU_THUMBONLY case, ARM kernel opcodes are not allowed.
+ * Note that in this case the code skips those instructions but uses the
+ * .org directive to keep the correct layout of the sigreturn_codes array.
+#ifndef CONFIG_CPU_THUMBONLY
+#define ARM_OK(code...) code
+#else
+#define ARM_OK(code...)
+#endif
+
+ .macro arm_slot n
+ .org sigreturn_codes + 12 * (\n)
+ARM_OK( .arm )
+ .endm
+
+ .macro thumb_slot n
+ .org sigreturn_codes + 12 * (\n) + 8
+ .thumb
+ .endm
+
#if __LINUX_ARM_ARCH__ <= 4
/*
* Note we manually set minimally required arch that supports
.global sigreturn_codes
.type sigreturn_codes, #object
- .arm
+ .align
sigreturn_codes:
/* ARM sigreturn syscall code snippet */
- mov r7, #(__NR_sigreturn - __NR_SYSCALL_BASE)
- swi #(__NR_sigreturn)|(__NR_OABI_SYSCALL_BASE)
+ arm_slot 0
+ARM_OK( mov r7, #(__NR_sigreturn - __NR_SYSCALL_BASE) )
+ARM_OK( swi #(__NR_sigreturn)|(__NR_OABI_SYSCALL_BASE) )
/* Thumb sigreturn syscall code snippet */
- .thumb
+ thumb_slot 0
movs r7, #(__NR_sigreturn - __NR_SYSCALL_BASE)
swi #0
/* ARM sigreturn_rt syscall code snippet */
- .arm
- mov r7, #(__NR_rt_sigreturn - __NR_SYSCALL_BASE)
- swi #(__NR_rt_sigreturn)|(__NR_OABI_SYSCALL_BASE)
+ arm_slot 1
+ARM_OK( mov r7, #(__NR_rt_sigreturn - __NR_SYSCALL_BASE) )
+ARM_OK( swi #(__NR_rt_sigreturn)|(__NR_OABI_SYSCALL_BASE) )
/* Thumb sigreturn_rt syscall code snippet */
- .thumb
+ thumb_slot 1
movs r7, #(__NR_rt_sigreturn - __NR_SYSCALL_BASE)
swi #0
* it is thumb case or not, so we need additional
* word after real last entry.
*/
- .arm
+ arm_slot 2
.space 4
.size sigreturn_codes, . - sigreturn_codes
high = ALIGN(low, THREAD_SIZE);
/* check current frame pointer is within bounds */
- if (fp < (low + 12) || fp + 4 >= high)
+ if (fp < low + 12 || fp > high - 4)
return -EINVAL;
/* restore the registers from the stack frame */
__do_cache_op(unsigned long start, unsigned long end)
{
int ret;
- unsigned long chunk = PAGE_SIZE;
do {
+ unsigned long chunk = min(PAGE_SIZE, end - start);
+
if (signal_pending(current)) {
struct thread_info *ti = current_thread_info();
/*
* loops = r0 * HZ * loops_per_jiffy / 1000000
*/
+ .align 3
@ Delay routine
ENTRY(__loop_delay)
static struct clock_event_device clkevt = {
.name = "at91_tick",
.features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
- .shift = 32,
.rating = 150,
.set_next_event = clkevt32k_next_event,
.set_mode = clkevt32k_mode,
at91_st_write(AT91_ST_RTMR, 1);
/* Setup timer clockevent, with minimum of two ticks (important!!) */
- clkevt.mult = div_sc(AT91_SLOW_CLOCK, NSEC_PER_SEC, clkevt.shift);
- clkevt.max_delta_ns = clockevent_delta2ns(AT91_ST_ALMV, &clkevt);
- clkevt.min_delta_ns = clockevent_delta2ns(2, &clkevt) + 1;
clkevt.cpumask = cpumask_of(0);
- clockevents_register_device(&clkevt);
+ clockevents_config_and_register(&clkevt, AT91_SLOW_CLOCK,
+ 2, AT91_ST_ALMV);
/* register clocksource */
clocksource_register_hz(&clk32k, AT91_SLOW_CLOCK);
#include <mach/at91_ramc.h>
#include <mach/at91rm9200_sdramc.h>
+#ifdef CONFIG_PM
extern void at91_pm_set_standby(void (*at91_standby)(void));
+#else
+static inline void at91_pm_set_standby(void (*at91_standby)(void)) { }
+#endif
/*
* The AT91RM9200 goes into self-refresh mode with this command, and will
.name = "twi0_clk",
.pid = SAMA5D3_ID_TWI0,
.type = CLK_TYPE_PERIPHERAL,
- .div = AT91_PMC_PCR_DIV2,
+ .div = AT91_PMC_PCR_DIV8,
};
static struct clk twi1_clk = {
.name = "twi1_clk",
.pid = SAMA5D3_ID_TWI1,
.type = CLK_TYPE_PERIPHERAL,
- .div = AT91_PMC_PCR_DIV2,
+ .div = AT91_PMC_PCR_DIV8,
};
static struct clk twi2_clk = {
.name = "twi2_clk",
.pid = SAMA5D3_ID_TWI2,
.type = CLK_TYPE_PERIPHERAL,
- .div = AT91_PMC_PCR_DIV2,
+ .div = AT91_PMC_PCR_DIV8,
};
static struct clk mmc0_clk = {
.name = "mci0_clk",
static struct resource da830_mcasp1_resources[] = {
{
- .name = "mcasp1",
+ .name = "mpu",
.start = DAVINCI_DA830_MCASP1_REG_BASE,
.end = DAVINCI_DA830_MCASP1_REG_BASE + (SZ_1K * 12) - 1,
.flags = IORESOURCE_MEM,
static struct resource da850_mcasp_resources[] = {
{
- .name = "mcasp",
+ .name = "mpu",
.start = DAVINCI_DA8XX_MCASP0_REG_BASE,
.end = DAVINCI_DA8XX_MCASP0_REG_BASE + (SZ_1K * 12) - 1,
.flags = IORESOURCE_MEM,
static struct resource dm355_asp1_resources[] = {
{
+ .name = "mpu",
.start = DAVINCI_ASP1_BASE,
.end = DAVINCI_ASP1_BASE + SZ_8K - 1,
.flags = IORESOURCE_MEM,
int __init dm355_gpio_register(void)
{
return davinci_gpio_register(dm355_gpio_resources,
- sizeof(dm355_gpio_resources),
+ ARRAY_SIZE(dm355_gpio_resources),
&dm355_gpio_platform_data);
}
/*----------------------------------------------------------------------*/
int __init dm365_gpio_register(void)
{
return davinci_gpio_register(dm365_gpio_resources,
- sizeof(dm365_gpio_resources),
+ ARRAY_SIZE(dm365_gpio_resources),
&dm365_gpio_platform_data);
}
static struct resource dm365_asp_resources[] = {
{
+ .name = "mpu",
.start = DAVINCI_DM365_ASP0_BASE,
.end = DAVINCI_DM365_ASP0_BASE + SZ_8K - 1,
.flags = IORESOURCE_MEM,
/* DM6446 EVM uses ASP0; line-out is a pair of RCA jacks */
static struct resource dm644x_asp_resources[] = {
{
+ .name = "mpu",
.start = DAVINCI_ASP0_BASE,
.end = DAVINCI_ASP0_BASE + SZ_8K - 1,
.flags = IORESOURCE_MEM,
int __init dm644x_gpio_register(void)
{
return davinci_gpio_register(dm644_gpio_resources,
- sizeof(dm644_gpio_resources),
+ ARRAY_SIZE(dm644_gpio_resources),
&dm644_gpio_platform_data);
}
/*----------------------------------------------------------------------*/
static struct resource dm646x_mcasp0_resources[] = {
{
- .name = "mcasp0",
+ .name = "mpu",
.start = DAVINCI_DM646X_MCASP0_REG_BASE,
.end = DAVINCI_DM646X_MCASP0_REG_BASE + (SZ_1K << 1) - 1,
.flags = IORESOURCE_MEM,
static struct resource dm646x_mcasp1_resources[] = {
{
- .name = "mcasp1",
+ .name = "mpu",
.start = DAVINCI_DM646X_MCASP1_REG_BASE,
.end = DAVINCI_DM646X_MCASP1_REG_BASE + (SZ_1K << 1) - 1,
.flags = IORESOURCE_MEM,
int __init dm646x_gpio_register(void)
{
return davinci_gpio_register(dm646x_gpio_resources,
- sizeof(dm646x_gpio_resources),
+ ARRAY_SIZE(dm646x_gpio_resources),
&dm646x_gpio_platform_data);
}
/*----------------------------------------------------------------------*/
#include <linux/init.h>
#include <linux/io.h>
#include <linux/spinlock.h>
+#include <video/vga.h>
#include <asm/pgtable.h>
#include <asm/page.h>
iotable_init(ebsa285_host_io_desc, ARRAY_SIZE(ebsa285_host_io_desc));
pci_map_io_early(__phys_to_pfn(DC21285_PCI_IO));
}
+
+ vga_base = PCIMEM_BASE;
}
void footbridge_restart(enum reboot_mode mode, const char *cmd)
#include <linux/irq.h>
#include <linux/io.h>
#include <linux/spinlock.h>
-#include <video/vga.h>
#include <asm/irq.h>
#include <asm/mach/pci.h>
int cfn_mode;
pcibios_min_mem = 0x81000000;
- vga_base = PCIMEM_BASE;
mem_size = (unsigned int)high_memory - PAGE_OFFSET;
for (mem_mask = 0x00100000; mem_mask < 0x10000000; mem_mask <<= 1)
const char *name;
const char *trigger;
} ebsa285_leds[] = {
- { "ebsa285:amber", "heartbeat", },
- { "ebsa285:green", "cpu0", },
+ { "ebsa285:amber", "cpu0", },
+ { "ebsa285:green", "heartbeat", },
{ "ebsa285:red",},
};
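+/* shadow of the (write-only) XBUS LED register; the LEDs are active-low */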
+static unsigned char hw_led_state;
+
static void ebsa285_led_set(struct led_classdev *cdev,
enum led_brightness b)
{
struct ebsa285_led *led = container_of(cdev,
struct ebsa285_led, cdev);
- if (b != LED_OFF)
- *XBUS_LEDS |= led->mask;
+ if (b == LED_OFF)
+ hw_led_state |= led->mask;
else
- *XBUS_LEDS &= ~led->mask;
+ hw_led_state &= ~led->mask;
+ *XBUS_LEDS = hw_led_state;
}
static enum led_brightness ebsa285_led_get(struct led_classdev *cdev)
struct ebsa285_led *led = container_of(cdev,
struct ebsa285_led, cdev);
- return (*XBUS_LEDS & led->mask) ? LED_FULL : LED_OFF;
+ return hw_led_state & led->mask ? LED_OFF : LED_FULL;
}
static int __init ebsa285_leds_init(void)
{
int i;
- if (machine_is_ebsa285())
+ if (!machine_is_ebsa285())
return -ENODEV;
- /* 3 LEDS All ON */
- *XBUS_LEDS |= XBUS_LED_AMBER | XBUS_LED_GREEN | XBUS_LED_RED;
+ /* 3 LEDS all off */
+ hw_led_state = XBUS_LED_AMBER | XBUS_LED_GREEN | XBUS_LED_RED;
+ *XBUS_LEDS = hw_led_state;
for (i = 0; i < ARRAY_SIZE(ebsa285_leds); i++) {
struct ebsa285_led *led;
#include <linux/clkdev.h>
#include <linux/clocksource.h>
#include <linux/dma-mapping.h>
+#include <linux/input.h>
#include <linux/io.h>
#include <linux/irqchip.h>
+#include <linux/mailbox.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/of_address.h>
+#include <linux/reboot.h>
#include <linux/amba/bus.h>
#include <linux/platform_device.h>
.name = "cpuidle-calxeda",
};
+static int hb_keys_notifier(struct notifier_block *nb, unsigned long event, void *data)
+{
+ u32 key = *(u32 *)data;
+
+ if (event != 0x1000)
+ return 0;
+
+ if (key == KEY_POWER)
+ orderly_poweroff(false);
+ else if (key == 0xffff)
+ ctrl_alt_del();
+
+ return 0;
+}
+static struct notifier_block hb_keys_nb = {
+ .notifier_call = hb_keys_notifier,
+};
+
static void __init highbank_init(void)
{
struct device_node *np;
bus_register_notifier(&platform_bus_type, &highbank_platform_nb);
bus_register_notifier(&amba_bustype, &highbank_amba_nb);
+ pl320_ipc_register_notifier(&hb_keys_nb);
+
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
if (psci_ops.cpu_suspend)
.dt_compat = omap3_gp_boards_compat,
.restart = omap3xxx_restart,
MACHINE_END
+
+static const char *am3517_boards_compat[] __initdata = {
+ "ti,am3517",
+ NULL,
+};
+
+DT_MACHINE_START(AM3517_DT, "Generic AM3517 (Flattened Device Tree)")
+ .reserve = omap_reserve,
+ .map_io = omap3_map_io,
+ .init_early = am35xx_init_early,
+ .init_irq = omap_intc_of_init,
+ .handle_irq = omap3_intc_handle_irq,
+ .init_machine = omap_generic_init,
+ .init_late = omap3_init_late,
+ .init_time = omap3_gptimer_timer_init,
+ .dt_compat = am3517_boards_compat,
+ .restart = omap3xxx_restart,
+MACHINE_END
#endif
#ifdef CONFIG_SOC_AM33XX
static struct connector_dvi_platform_data omap3_igep2_dvi_connector_pdata = {
.name = "dvi",
.source = "tfp410.0",
- .i2c_bus_num = 3,
+ .i2c_bus_num = 2,
};
static struct platform_device omap3_igep2_dvi_connector_device = {
odbfd_exit1:
kfree(hwmods);
odbfd_exit:
+	/* if the data (or we) are at fault, load up a fail handler */
+ if (ret)
+ pdev->dev.pm_domain = &omap_device_fail_pm_domain;
+
return ret;
}
return pm_generic_runtime_resume(dev);
}
+
+static int _od_fail_runtime_suspend(struct device *dev)
+{
+ dev_warn(dev, "%s: FIXME: missing hwmod/omap_dev info\n", __func__);
+ return -ENODEV;
+}
+
+static int _od_fail_runtime_resume(struct device *dev)
+{
+ dev_warn(dev, "%s: FIXME: missing hwmod/omap_dev info\n", __func__);
+ return -ENODEV;
+}
+
#endif
#ifdef CONFIG_SUSPEND
#define _od_resume_noirq NULL
#endif
+struct dev_pm_domain omap_device_fail_pm_domain = {
+ .ops = {
+ SET_RUNTIME_PM_OPS(_od_fail_runtime_suspend,
+ _od_fail_runtime_resume, NULL)
+ }
+};
+
struct dev_pm_domain omap_device_pm_domain = {
.ops = {
SET_RUNTIME_PM_OPS(_od_runtime_suspend, _od_runtime_resume,
#include "omap_hwmod.h"
extern struct dev_pm_domain omap_device_pm_domain;
+extern struct dev_pm_domain omap_device_fail_pm_domain;
/* omap_device._state values */
#define OMAP_DEVICE_STATE_UNKNOWN 0
}
/**
- * _set_softreset: set OCP_SYSCONFIG.CLOCKACTIVITY bits in @v
+ * _set_softreset: set OCP_SYSCONFIG.SOFTRESET bit in @v
* @oh: struct omap_hwmod *
* @v: pointer to register contents to modify
*
return 0;
}
+/**
+ * _clear_softreset: clear OCP_SYSCONFIG.SOFTRESET bit in @v
+ * @oh: struct omap_hwmod *
+ * @v: pointer to register contents to modify
+ *
+ * Clear the SOFTRESET bit in @v for hwmod @oh. Returns -EINVAL upon
+ * error or 0 upon success.
+ */
+static int _clear_softreset(struct omap_hwmod *oh, u32 *v)
+{
+ u32 softrst_mask;
+
+ if (!oh->class->sysc ||
+ !(oh->class->sysc->sysc_flags & SYSC_HAS_SOFTRESET))
+ return -EINVAL;
+
+ if (!oh->class->sysc->sysc_fields) {
+ WARN(1,
+ "omap_hwmod: %s: sysc_fields absent for sysconfig class\n",
+ oh->name);
+ return -EINVAL;
+ }
+
+ softrst_mask = (0x1 << oh->class->sysc->sysc_fields->srst_shift);
+
+ *v &= ~softrst_mask;
+
+ return 0;
+}
+
/**
* _wait_softreset_complete - wait for an OCP softreset to complete
* @oh: struct omap_hwmod * to wait on
pr_warning("omap_hwmod: %s: cannot clk_get interface_clk %s\n",
oh->name, os->clk);
ret = -EINVAL;
+ continue;
}
os->_clk = c;
/*
pr_warning("omap_hwmod: %s: cannot clk_get opt_clk %s\n",
oh->name, oc->clk);
ret = -EINVAL;
+ continue;
}
oc->_clk = c;
/*
ret = _set_softreset(oh, &v);
if (ret)
goto dis_opt_clks;
+
+ _write_sysconfig(v, oh);
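+	/* the write above asserted SOFTRESET; clear it again to finish the pulse */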
+ ret = _clear_softreset(oh, &v);
+ if (ret)
+ goto dis_opt_clks;
+
_write_sysconfig(v, oh);
if (oh->class->sysc->srst_udelay)
return 0;
}
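+/*
+ * Scan the "ti,hwmods" string list of @np for @oh's name. Returns the
+ * matching index, or -ENODEV if the property is absent or has no match.
+ */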
+static int of_dev_find_hwmod(struct device_node *np,
+ struct omap_hwmod *oh)
+{
+ int count, i, res;
+ const char *p;
+
+ count = of_property_count_strings(np, "ti,hwmods");
+ if (count < 1)
+ return -ENODEV;
+
+ for (i = 0; i < count; i++) {
+ res = of_property_read_string_index(np, "ti,hwmods",
+ i, &p);
+ if (res)
+ continue;
+ if (!strcmp(p, oh->name)) {
+ pr_debug("omap_hwmod: dt %s[%i] uses hwmod %s\n",
+ np->name, i, oh->name);
+ return i;
+ }
+ }
+
+ return -ENODEV;
+}
+
/**
* of_dev_hwmod_lookup - look up needed hwmod from dt blob
* @np: struct device_node *
* @oh: struct omap_hwmod *
+ * @index: index of the entry found
+ * @found: struct device_node * found or NULL
*
 * Parse the dt blob and find the needed hwmod. A recursive function is
 * used to take care of hierarchical dt blob parsing.
- * Return: The device node on success or NULL on failure.
+ * Return: Returns 0 on success, -ENODEV when not found.
*/
-static struct device_node *of_dev_hwmod_lookup(struct device_node *np,
- struct omap_hwmod *oh)
+static int of_dev_hwmod_lookup(struct device_node *np,
+ struct omap_hwmod *oh,
+ int *index,
+ struct device_node **found)
{
- struct device_node *np0 = NULL, *np1 = NULL;
- const char *p;
+ struct device_node *np0 = NULL;
+ int res;
+
+ res = of_dev_find_hwmod(np, oh);
+ if (res >= 0) {
+ *found = np;
+ *index = res;
+ return 0;
+ }
for_each_child_of_node(np, np0) {
- if (of_find_property(np0, "ti,hwmods", NULL)) {
- p = of_get_property(np0, "ti,hwmods", NULL);
- if (!strcmp(p, oh->name))
- return np0;
- np1 = of_dev_hwmod_lookup(np0, oh);
- if (np1)
- return np1;
+ struct device_node *fc;
+ int i;
+
+ res = of_dev_hwmod_lookup(np0, oh, &i, &fc);
+ if (res == 0) {
+ *found = fc;
+ *index = i;
+ return 0;
}
}
- return NULL;
+
+ *found = NULL;
+ *index = 0;
+
+ return -ENODEV;
}
/**
* _init_mpu_rt_base - populate the virtual address for a hwmod
* @oh: struct omap_hwmod * to locate the virtual address
* @data: (unused, caller should pass NULL)
+ * @index: index of the reg entry iospace in device tree
* @np: struct device_node * of the IP block's device node in the DT data
*
* Cache the virtual address used by the MPU to access this IP block's
* -ENXIO on absent or invalid register target address space.
*/
static int __init _init_mpu_rt_base(struct omap_hwmod *oh, void *data,
- struct device_node *np)
+ int index, struct device_node *np)
{
struct omap_hwmod_addr_space *mem;
void __iomem *va_start = NULL;
if (!np)
return -ENXIO;
- va_start = of_iomap(np, oh->mpu_rt_idx);
+ va_start = of_iomap(np, index + oh->mpu_rt_idx);
} else {
va_start = ioremap(mem->pa_start, mem->pa_end - mem->pa_start);
}
if (!va_start) {
- pr_err("omap_hwmod: %s: Could not ioremap\n", oh->name);
+ if (mem)
+ pr_err("omap_hwmod: %s: Could not ioremap\n", oh->name);
+ else
+ pr_err("omap_hwmod: %s: Missing dt reg%i for %s\n",
+ oh->name, index, np->full_name);
return -ENXIO;
}
*/
static int __init _init(struct omap_hwmod *oh, void *data)
{
- int r;
+ int r, index;
struct device_node *np = NULL;
if (oh->_state != _HWMOD_STATE_REGISTERED)
return 0;
- if (of_have_populated_dt())
- np = of_dev_hwmod_lookup(of_find_node_by_name(NULL, "ocp"), oh);
+ if (of_have_populated_dt()) {
+ struct device_node *bus;
+
+ bus = of_find_node_by_name(NULL, "ocp");
+ if (!bus)
+ return -ENODEV;
+
+ r = of_dev_hwmod_lookup(bus, oh, &index, &np);
+ if (r)
+ pr_debug("omap_hwmod: %s missing dt data\n", oh->name);
+ else if (np && index)
+ pr_warn("omap_hwmod: %s using broken dt data from %s\n",
+ oh->name, np->name);
+ }
if (oh->class->sysc) {
- r = _init_mpu_rt_base(oh, NULL, np);
+ r = _init_mpu_rt_base(oh, NULL, index, np);
if (r < 0) {
WARN(1, "omap_hwmod: %s: doesn't have mpu register target base\n",
oh->name);
goto error;
_write_sysconfig(v, oh);
+ ret = _clear_softreset(oh, &v);
+ if (ret)
+ goto error;
+ _write_sysconfig(v, oh);
+
error:
return ret;
}
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_CLOCKACTIVITY |
SYSC_HAS_SIDLEMODE | SYSC_HAS_ENAWAKEUP |
- SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE),
+ SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE |
+ SYSS_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART),
.sysc_fields = &omap_hwmod_sysc_type1,
* hence HWMOD_SWSUP_MSTANDBY
*/
- /*
- * During system boot; If the hwmod framework resets the module
- * the module will have smart idle settings; which can lead to deadlock
- * (above Errata Id:i660); so, dont reset the module during boot;
- * Use HWMOD_INIT_NO_RESET.
- */
-
- .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY |
- HWMOD_INIT_NO_RESET,
+ .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY,
};
/*
.sysc_offs = 0x0010,
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_SIDLEMODE |
- SYSC_HAS_SOFTRESET),
+ SYSC_HAS_SOFTRESET | SYSC_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO |
MSTANDBY_SMART | MSTANDBY_SMART_WKUP),
* hence HWMOD_SWSUP_MSTANDBY
*/
- /*
- * During system boot; If the hwmod framework resets the module
- * the module will have smart idle settings; which can lead to deadlock
- * (above Errata Id:i660); so, dont reset the module during boot;
- * Use HWMOD_INIT_NO_RESET.
- */
-
- .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY |
- HWMOD_INIT_NO_RESET,
+ .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY,
};
/*
.rev_offs = 0x0000,
.sysc_offs = 0x0010,
.sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_RESET_STATUS |
- SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET),
+ SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET |
+ SYSC_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO |
MSTANDBY_SMART | MSTANDBY_SMART_WKUP),
* hence HWMOD_SWSUP_MSTANDBY
*/
- /*
- * During system boot; If the hwmod framework resets the module
- * the module will have smart idle settings; which can lead to deadlock
- * (above Errata Id:i660); so, dont reset the module during boot;
- * Use HWMOD_INIT_NO_RESET.
- */
-
- .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY |
- HWMOD_INIT_NO_RESET,
+ .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY,
.main_clk = "l3init_60m_fclk",
.prcm = {
.omap4 = {
static struct pdata_init pdata_quirks[] __initdata = {
#ifdef CONFIG_ARCH_OMAP3
+ { "nokia,omap3-n900", hsmmc2_internal_input_clk, },
{ "nokia,omap3-n9", hsmmc2_internal_input_clk, },
{ "nokia,omap3-n950", hsmmc2_internal_input_clk, },
{ "isee,omap3-igep0020", omap3_igep0020_legacy_init, },
for (i = 0; i < pwrdm->banks; i++)
pwrdm->ret_mem_off_counter[i] = 0;
- arch_pwrdm->pwrdm_wait_transition(pwrdm);
+ if (arch_pwrdm && arch_pwrdm->pwrdm_wait_transition)
+ arch_pwrdm->pwrdm_wait_transition(pwrdm);
pwrdm->state = pwrdm_read_pwrst(pwrdm);
pwrdm->state_counter[pwrdm->state] = 1;
#include <mach/regs-ost.h>
#include <mach/reset.h>
+#include <mach/smemc.h>
unsigned int reset_status;
EXPORT_SYMBOL(reset_status);
writel_relaxed(OSSR_M3, OSSR);
/* ... in 100 ms */
writel_relaxed(readl_relaxed(OSCR) + 368640, OSMR3);
+	/*
+	 * SDRAM hangs on watchdog reset on Marvell PXA270 (erratum 71),
+	 * so we put the SDRAM into self-refresh to prevent that; keep
+	 * issuing the command until the watchdog reset takes effect.
+	 */
+ while (1)
+ writel_relaxed(MDREFR_SLFRSH, MDREFR);
}
void pxa_restart(enum reboot_mode mode, const char *cmd)
break;
}
}
-
* Tosa Keyboard
*/
static const uint32_t tosakbd_keymap[] = {
- KEY(0, 2, KEY_W),
- KEY(0, 6, KEY_K),
- KEY(0, 7, KEY_BACKSPACE),
- KEY(0, 8, KEY_P),
- KEY(1, 1, KEY_Q),
- KEY(1, 2, KEY_E),
- KEY(1, 3, KEY_T),
- KEY(1, 4, KEY_Y),
- KEY(1, 6, KEY_O),
- KEY(1, 7, KEY_I),
- KEY(1, 8, KEY_COMMA),
- KEY(2, 1, KEY_A),
- KEY(2, 2, KEY_D),
- KEY(2, 3, KEY_G),
- KEY(2, 4, KEY_U),
- KEY(2, 6, KEY_L),
- KEY(2, 7, KEY_ENTER),
- KEY(2, 8, KEY_DOT),
- KEY(3, 1, KEY_Z),
- KEY(3, 2, KEY_C),
- KEY(3, 3, KEY_V),
- KEY(3, 4, KEY_J),
- KEY(3, 5, TOSA_KEY_ADDRESSBOOK),
- KEY(3, 6, TOSA_KEY_CANCEL),
- KEY(3, 7, TOSA_KEY_CENTER),
- KEY(3, 8, TOSA_KEY_OK),
- KEY(3, 9, KEY_LEFTSHIFT),
- KEY(4, 1, KEY_S),
- KEY(4, 2, KEY_R),
- KEY(4, 3, KEY_B),
- KEY(4, 4, KEY_N),
- KEY(4, 5, TOSA_KEY_CALENDAR),
- KEY(4, 6, TOSA_KEY_HOMEPAGE),
- KEY(4, 7, KEY_LEFTCTRL),
- KEY(4, 8, TOSA_KEY_LIGHT),
- KEY(4, 10, KEY_RIGHTSHIFT),
- KEY(5, 1, KEY_TAB),
- KEY(5, 2, KEY_SLASH),
- KEY(5, 3, KEY_H),
- KEY(5, 4, KEY_M),
- KEY(5, 5, TOSA_KEY_MENU),
- KEY(5, 7, KEY_UP),
- KEY(5, 11, TOSA_KEY_FN),
- KEY(6, 1, KEY_X),
- KEY(6, 2, KEY_F),
- KEY(6, 3, KEY_SPACE),
- KEY(6, 4, KEY_APOSTROPHE),
- KEY(6, 5, TOSA_KEY_MAIL),
- KEY(6, 6, KEY_LEFT),
- KEY(6, 7, KEY_DOWN),
- KEY(6, 8, KEY_RIGHT),
+ KEY(0, 1, KEY_W),
+ KEY(0, 5, KEY_K),
+ KEY(0, 6, KEY_BACKSPACE),
+ KEY(0, 7, KEY_P),
+ KEY(1, 0, KEY_Q),
+ KEY(1, 1, KEY_E),
+ KEY(1, 2, KEY_T),
+ KEY(1, 3, KEY_Y),
+ KEY(1, 5, KEY_O),
+ KEY(1, 6, KEY_I),
+ KEY(1, 7, KEY_COMMA),
+ KEY(2, 0, KEY_A),
+ KEY(2, 1, KEY_D),
+ KEY(2, 2, KEY_G),
+ KEY(2, 3, KEY_U),
+ KEY(2, 5, KEY_L),
+ KEY(2, 6, KEY_ENTER),
+ KEY(2, 7, KEY_DOT),
+ KEY(3, 0, KEY_Z),
+ KEY(3, 1, KEY_C),
+ KEY(3, 2, KEY_V),
+ KEY(3, 3, KEY_J),
+ KEY(3, 4, TOSA_KEY_ADDRESSBOOK),
+ KEY(3, 5, TOSA_KEY_CANCEL),
+ KEY(3, 6, TOSA_KEY_CENTER),
+ KEY(3, 7, TOSA_KEY_OK),
+ KEY(3, 8, KEY_LEFTSHIFT),
+ KEY(4, 0, KEY_S),
+ KEY(4, 1, KEY_R),
+ KEY(4, 2, KEY_B),
+ KEY(4, 3, KEY_N),
+ KEY(4, 4, TOSA_KEY_CALENDAR),
+ KEY(4, 5, TOSA_KEY_HOMEPAGE),
+ KEY(4, 6, KEY_LEFTCTRL),
+ KEY(4, 7, TOSA_KEY_LIGHT),
+ KEY(4, 9, KEY_RIGHTSHIFT),
+ KEY(5, 0, KEY_TAB),
+ KEY(5, 1, KEY_SLASH),
+ KEY(5, 2, KEY_H),
+ KEY(5, 3, KEY_M),
+ KEY(5, 4, TOSA_KEY_MENU),
+ KEY(5, 6, KEY_UP),
+ KEY(5, 10, TOSA_KEY_FN),
+ KEY(6, 0, KEY_X),
+ KEY(6, 1, KEY_F),
+ KEY(6, 2, KEY_SPACE),
+ KEY(6, 3, KEY_APOSTROPHE),
+ KEY(6, 4, TOSA_KEY_MAIL),
+ KEY(6, 5, KEY_LEFT),
+ KEY(6, 6, KEY_DOWN),
+ KEY(6, 7, KEY_RIGHT),
};
static struct matrix_keymap_data tosakbd_keymap_data = {
select GENERIC_CLOCKEVENTS
select GPIO_PL061 if GPIOLIB
select HAVE_ARM_SCU
+ select HAVE_ARM_TWD if SMP
select HAVE_SMP
select MFD_SYSCON
select SPARSE_IRQ
switch (tegra_chip_id) {
case TEGRA20:
tegra20_fuse_init_randomness();
+ break;
case TEGRA30:
case TEGRA114:
default:
tegra30_fuse_init_randomness();
+ break;
}
pr_info("Tegra Revision: %s SKU: %d CPU Process: %d Core Process: %d\n",
/* Requires call-back bindings. */
OF_DEV_AUXDATA("arm,cortex-a9-pmu", 0, "arm-pmu", &db8500_pmu_platdata),
/* Requires DMA bindings. */
+ OF_DEV_AUXDATA("arm,pl18x", 0x80126000, "sdi0", &mop500_sdi0_data),
+ OF_DEV_AUXDATA("arm,pl18x", 0x80118000, "sdi1", &mop500_sdi1_data),
+ OF_DEV_AUXDATA("arm,pl18x", 0x80005000, "sdi2", &mop500_sdi2_data),
+ OF_DEV_AUXDATA("arm,pl18x", 0x80114000, "sdi4", &mop500_sdi4_data),
OF_DEV_AUXDATA("stericsson,ux500-msp-i2s", 0x80123000,
"ux500-msp-i2s.0", &msp0_platform_data),
OF_DEV_AUXDATA("stericsson,ux500-msp-i2s", 0x80124000,
*
* DMA uncached mapping support.
*/
+#include <linux/bootmem.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/gfp.h>
};
EXPORT_SYMBOL(arm_coherent_dma_ops);
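+/*
+ * Common mask check used by both get_coherent_dma_mask() (warn == true,
+ * since the allocation path wants diagnostics) and dma_supported()
+ * (warn == false, a plain capability query); see the hunks below.
+ */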
+static int __dma_supported(struct device *dev, u64 mask, bool warn)
+{
+ unsigned long max_dma_pfn;
+
+ /*
+ * If the mask allows for more memory than we can address,
+ * and we actually have that much memory, then we must
+ * indicate that DMA to this device is not supported.
+ */
+ if (sizeof(mask) != sizeof(dma_addr_t) &&
+ mask > (dma_addr_t)~0 &&
+ dma_to_pfn(dev, ~0) < max_pfn) {
+ if (warn) {
+ dev_warn(dev, "Coherent DMA mask %#llx is larger than dma_addr_t allows\n",
+ mask);
+ dev_warn(dev, "Driver did not use or check the return value from dma_set_coherent_mask()?\n");
+ }
+ return 0;
+ }
+
+ max_dma_pfn = min(max_pfn, arm_dma_pfn_limit);
+
+ /*
+ * Translate the device's DMA mask to a PFN limit. This
+ * PFN number includes the page which we can DMA to.
+ */
+ if (dma_to_pfn(dev, mask) < max_dma_pfn) {
+ if (warn)
+ dev_warn(dev, "Coherent DMA mask %#llx (pfn %#lx-%#lx) covers a smaller range of system memory than the DMA zone pfn 0x0-%#lx\n",
+ mask,
+ dma_to_pfn(dev, 0), dma_to_pfn(dev, mask) + 1,
+ max_dma_pfn + 1);
+ return 0;
+ }
+
+ return 1;
+}
+
static u64 get_coherent_dma_mask(struct device *dev)
{
u64 mask = (u64)DMA_BIT_MASK(32);
return 0;
}
- /*
- * If the mask allows for more memory than we can address,
- * and we actually have that much memory, then fail the
- * allocation.
- */
- if (sizeof(mask) != sizeof(dma_addr_t) &&
- mask > (dma_addr_t)~0 &&
- dma_to_pfn(dev, ~0) > arm_dma_pfn_limit) {
- dev_warn(dev, "Coherent DMA mask %#llx is larger than dma_addr_t allows\n",
- mask);
- dev_warn(dev, "Driver did not use or check the return value from dma_set_coherent_mask()?\n");
- return 0;
- }
-
- /*
- * Now check that the mask, when translated to a PFN,
- * fits within the allowable addresses which we can
- * allocate.
- */
- if (dma_to_pfn(dev, mask) < arm_dma_pfn_limit) {
- dev_warn(dev, "Coherent DMA mask %#llx (pfn %#lx-%#lx) covers a smaller range of system memory than the DMA zone pfn 0x0-%#lx\n",
- mask,
- dma_to_pfn(dev, 0), dma_to_pfn(dev, mask) + 1,
- arm_dma_pfn_limit + 1);
+ if (!__dma_supported(dev, mask, true))
return 0;
- }
}
return mask;
*/
int dma_supported(struct device *dev, u64 mask)
{
- unsigned long limit;
-
- /*
- * If the mask allows for more memory than we can address,
- * and we actually have that much memory, then we must
- * indicate that DMA to this device is not supported.
- */
- if (sizeof(mask) != sizeof(dma_addr_t) &&
- mask > (dma_addr_t)~0 &&
- dma_to_pfn(dev, ~0) > arm_dma_pfn_limit)
- return 0;
-
- /*
- * Translate the device's DMA mask to a PFN limit. This
- * PFN number includes the page which we can DMA to.
- */
- limit = dma_to_pfn(dev, mask);
-
- if (limit < arm_dma_pfn_limit)
- return 0;
-
- return 1;
+ return __dma_supported(dev, mask, false);
}
EXPORT_SYMBOL(dma_supported);
#ifdef CONFIG_ZONE_DMA
if (mdesc->dma_zone_size) {
arm_dma_zone_size = mdesc->dma_zone_size;
- arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
+ arm_dma_limit = __pv_phys_offset + arm_dma_zone_size - 1;
} else
arm_dma_limit = 0xffffffff;
arm_dma_pfn_limit = arm_dma_limit >> PAGE_SHIFT;
info.flags = VM_UNMAPPED_AREA_TOPDOWN;
info.length = len;
- info.low_limit = PAGE_SIZE;
+ info.low_limit = FIRST_USER_ADDRESS;
info.high_limit = mm->mmap_base;
info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
init_pud = pud_offset(init_pgd, 0);
init_pmd = pmd_offset(init_pud, 0);
init_pte = pte_offset_map(init_pmd, 0);
- set_pte_ext(new_pte, *init_pte, 0);
+ set_pte_ext(new_pte + 0, init_pte[0], 0);
+ set_pte_ext(new_pte + 1, init_pte[1], 0);
pte_unmap(init_pte);
pte_unmap(new_pte);
}
if (timer->posted)
return;
- if (timer->errata & OMAP_TIMER_ERRATA_I103_I767)
+ if (timer->errata & OMAP_TIMER_ERRATA_I103_I767) {
+ timer->posted = OMAP_TIMER_NONPOSTED;
+ __omap_dm_timer_write(timer, OMAP_TIMER_IF_CTRL_REG, 0, 0);
return;
+ }
__omap_dm_timer_write(timer, OMAP_TIMER_IF_CTRL_REG,
OMAP_TIMER_CTRL_POSTED, 0);
struct rb_node rbnode_phys;
};
-rwlock_t p2m_lock;
+static rwlock_t p2m_lock;
struct rb_root phys_to_mach = RB_ROOT;
+EXPORT_SYMBOL_GPL(phys_to_mach);
static struct rb_root mach_to_phys = RB_ROOT;
static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new)
}
EXPORT_SYMBOL_GPL(__set_phys_to_machine);
-int p2m_init(void)
+static int p2m_init(void)
{
rwlock_init(&p2m_lock);
return 0;
range 2 32
depends on SMP
# These have to remain sorted largest to smallest
- default "8" if ARCH_XGENE
- default "4"
+ default "8"
config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
extern void __iounmap(volatile void __iomem *addr);
extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
-#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_DIRTY)
+#define PROT_DEFAULT (pgprot_default | PTE_DIRTY)
#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_DEVICE_nGnRE))
#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL_NC))
#define PROT_NORMAL (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
* Section
*/
#define PMD_SECT_VALID (_AT(pmdval_t, 1) << 0)
-#define PMD_SECT_PROT_NONE (_AT(pmdval_t, 1) << 2)
+#define PMD_SECT_PROT_NONE (_AT(pmdval_t, 1) << 58)
#define PMD_SECT_USER (_AT(pmdval_t, 1) << 6) /* AP[1] */
#define PMD_SECT_RDONLY (_AT(pmdval_t, 1) << 7) /* AP[2] */
#define PMD_SECT_S (_AT(pmdval_t, 3) << 8)
* be used where CPUs are brought online dynamically by the kernel.
*/
ENTRY(secondary_entry)
- bl __calc_phys_offset // x2=phys offset
bl el2_setup // Drop to EL1
+ bl __calc_phys_offset // x24=PHYS_OFFSET, x28=PHYS_OFFSET-PAGE_OFFSET
+ bl set_cpu_boot_mode_flag
b secondary_startup
ENDPROC(secondary_entry)
bl __flush_dcache_all
mov lr, x28
ic iallu // I+BTB cache invalidate
+ tlbi vmalle1is // invalidate I + D TLBs
dsb sy
mov x0, #3 << 20
msr cpacr_el1, x0 // Enable FP/ASIMD
msr mdscr_el1, xzr // Reset mdscr_el1
- tlbi vmalle1is // invalidate I + D TLBs
/*
* Memory region attributes for LPAE:
*
*/
retval = clk_round_rate(pll1,
CONFIG_BOARD_FAVR32_ABDAC_RATE * 256 * 16);
- if (retval < 0)
+ if (retval <= 0) {
+ retval = -EINVAL;
goto out_abdac;
+ }
retval = clk_set_rate(pll1, retval);
if (retval != 0)
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_CFI=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
static struct irqaction timer_irqaction = {
.handler = timer_interrupt,
/* Oprofile uses the same irq as the timer, so allow it to be shared */
- .flags = IRQF_TIMER | IRQF_DISABLED | IRQF_SHARED,
+ .flags = IRQF_TIMER | IRQF_SHARED,
.name = "avr32_comparator",
};
.enter = avr32_pm_enter,
};
-static unsigned long avr32_pm_offset(void *symbol)
+static unsigned long __init avr32_pm_offset(void *symbol)
{
extern u8 pm_exception[];
CONFIG_IDE=y
CONFIG_BLK_DEV_IDECD=y
CONFIG_BLK_DEV_NS87415=y
-CONFIG_BLK_DEV_SIIMAGE=m
+CONFIG_PATA_SIL680=m
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=y
CONFIG_MODVERSIONS=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_PA8X00=y
-CONFIG_MLONGCALLS=y
CONFIG_64BIT=y
CONFIG_SMP=y
CONFIG_PREEMPT=y
CONFIG_BLK_DEV_IDECD=y
CONFIG_BLK_DEV_PLATFORM=y
CONFIG_BLK_DEV_GENERIC=y
-CONFIG_BLK_DEV_SIIMAGE=y
-CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
+CONFIG_ATA=y
+CONFIG_PATA_SIL680=y
CONFIG_FUSION=y
CONFIG_FUSION_SPI=y
CONFIG_FUSION_SAS=y
# CONFIG_KEYBOARD_ATKBD is not set
# CONFIG_KEYBOARD_HIL_OLD is not set
# CONFIG_KEYBOARD_HIL is not set
-CONFIG_MOUSE_PS2=m
+# CONFIG_MOUSE_PS2 is not set
CONFIG_INPUT_MISC=y
-CONFIG_INPUT_CM109=m
CONFIG_SERIO_SERPORT=m
CONFIG_SERIO_PARKBD=m
CONFIG_SERIO_GSCPS2=m
CONFIG_SND_AD1889=m
# CONFIG_SND_USB is not set
# CONFIG_SND_GSC is not set
-CONFIG_HID_A4TECH=m
-CONFIG_HID_APPLE=m
-CONFIG_HID_BELKIN=m
-CONFIG_HID_CHERRY=m
-CONFIG_HID_CHICONY=m
-CONFIG_HID_CYPRESS=m
-CONFIG_HID_DRAGONRISE=m
-CONFIG_HID_EZKEY=m
-CONFIG_HID_KYE=m
-CONFIG_HID_GYRATION=m
-CONFIG_HID_TWINHAN=m
-CONFIG_HID_KENSINGTON=m
-CONFIG_HID_LOGITECH=m
-CONFIG_HID_LOGITECH_DJ=m
-CONFIG_HID_MICROSOFT=m
-CONFIG_HID_MONTEREY=m
-CONFIG_HID_NTRIG=m
-CONFIG_HID_ORTEK=m
-CONFIG_HID_PANTHERLORD=m
-CONFIG_HID_PETALYNX=m
-CONFIG_HID_SAMSUNG=m
-CONFIG_HID_SUNPLUS=m
-CONFIG_HID_GREENASIA=m
-CONFIG_HID_SMARTJOYPLUS=m
-CONFIG_HID_TOPSEED=m
-CONFIG_HID_THRUSTMASTER=m
-CONFIG_HID_ZEROPLUS=m
-CONFIG_USB_HID=m
CONFIG_USB=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_STORAGE=y
CONFIG_BLK_DEV_INTEGRITY=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_PA8X00=y
-CONFIG_MLONGCALLS=y
CONFIG_64BIT=y
CONFIG_SMP=y
# CONFIG_COMPACTION is not set
CONFIG_IDE_GD_ATAPI=y
CONFIG_BLK_DEV_IDECD=m
CONFIG_BLK_DEV_NS87415=y
-CONFIG_BLK_DEV_SIIMAGE=y
# CONFIG_SCSI_PROC_FS is not set
CONFIG_BLK_DEV_SD=y
CONFIG_BLK_DEV_SR=y
CONFIG_SCSI_QLA_ISCSI=m
CONFIG_SCSI_DH=y
CONFIG_ATA=y
+CONFIG_PATA_SIL680=y
CONFIG_ATA_GENERIC=y
CONFIG_MD=y
CONFIG_MD_LINEAR=m
CONFIG_INPUT_EVDEV=y
# CONFIG_KEYBOARD_HIL_OLD is not set
# CONFIG_KEYBOARD_HIL is not set
-# CONFIG_INPUT_MOUSE is not set
+# CONFIG_MOUSE_PS2 is not set
CONFIG_INPUT_MISC=y
CONFIG_SERIO_SERPORT=m
# CONFIG_HP_SDC is not set
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
-CONFIG_HID=m
CONFIG_HIDRAW=y
-CONFIG_HID_DRAGONRISE=m
-CONFIG_DRAGONRISE_FF=y
-CONFIG_HID_KYE=m
-CONFIG_HID_GYRATION=m
-CONFIG_HID_TWINHAN=m
-CONFIG_LOGITECH_FF=y
-CONFIG_LOGIRUMBLEPAD2_FF=y
-CONFIG_HID_NTRIG=m
-CONFIG_HID_PANTHERLORD=m
-CONFIG_PANTHERLORD_FF=y
-CONFIG_HID_PETALYNX=m
-CONFIG_HID_SAMSUNG=m
-CONFIG_HID_SONY=m
-CONFIG_HID_SUNPLUS=m
-CONFIG_HID_GREENASIA=m
-CONFIG_GREENASIA_FF=y
-CONFIG_HID_SMARTJOYPLUS=m
-CONFIG_SMARTJOYPLUS_FF=y
-CONFIG_HID_TOPSEED=m
-CONFIG_HID_THRUSTMASTER=m
-CONFIG_THRUSTMASTER_FF=y
-CONFIG_HID_ZEROPLUS=m
-CONFIG_ZEROPLUS_FF=y
-CONFIG_USB_HID=m
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y
CONFIG_USB=y
CONFIG_USB_MON=m
CONFIG_USB_WUSB_CBAF=m
CONFIG_USB_XHCI_HCD=m
-CONFIG_USB_EHCI_HCD=m
-CONFIG_USB_OHCI_HCD=m
-CONFIG_USB_R8A66597_HCD=m
-CONFIG_USB_ACM=m
-CONFIG_USB_PRINTER=m
-CONFIG_USB_WDM=m
-CONFIG_USB_TMC=m
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_OHCI_HCD=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_TRIGGERS=y
* This is used for 16550-compatible UARTs
*/
#define BASE_BAUD ( 1843200 / 16 )
-
-#define SERIAL_PORT_DFNS
* HP PARISC Hardware Database
* Access to this database is only possible during bootup
* so don't reference this table after starting the init process
+ *
+ * NOTE: Product names listed here that end with a '?' are guesses.
+ * If you know the correct name, please let us know.
*/
static struct hp_hardware hp_hardware_list[] = {
{HPHW_NPROC,0x5DD,0x4,0x81,"Duet W2"},
{HPHW_NPROC,0x5DE,0x4,0x81,"Piccolo W+"},
{HPHW_NPROC,0x5DF,0x4,0x81,"Cantata W2"},
- {HPHW_NPROC,0x5DF,0x0,0x00,"Marcato W+? (rp5470)"},
+ {HPHW_NPROC,0x5DF,0x0,0x00,"Marcato W+ (rp5470)?"},
{HPHW_NPROC,0x5E0,0x4,0x91,"Cantata DC- W2"},
{HPHW_NPROC,0x5E1,0x4,0x91,"Crescendo DC- W2"},
{HPHW_NPROC,0x5E2,0x4,0x91,"Crescendo 650 W2"},
{HPHW_NPROC,0x888,0x4,0x91,"Storm Peak Fast DC-"},
{HPHW_NPROC,0x889,0x4,0x91,"Storm Peak Fast"},
{HPHW_NPROC,0x88A,0x4,0x91,"Crestone Peak Slow"},
+ {HPHW_NPROC,0x88B,0x4,0x91,"Crestone Peak Fast?"},
{HPHW_NPROC,0x88C,0x4,0x91,"Orca Mako+"},
{HPHW_NPROC,0x88D,0x4,0x91,"Rainier/Medel Mako+ Slow"},
{HPHW_NPROC,0x88E,0x4,0x91,"Rainier/Medel Mako+ Fast"},
+ {HPHW_NPROC,0x892,0x4,0x91,"Mt. Hamilton Slow Mako+?"},
{HPHW_NPROC,0x894,0x4,0x91,"Mt. Hamilton Fast Mako+"},
{HPHW_NPROC,0x895,0x4,0x91,"Storm Peak Slow Mako+"},
{HPHW_NPROC,0x896,0x4,0x91,"Storm Peak Fast Mako+"},
.import fault_vector_11,code /* IVA parisc 1.1 32 bit */
.import $global$ /* forward declaration */
#endif /*!CONFIG_64BIT*/
- .export _stext,data /* Kernel want it this way! */
-_stext:
-ENTRY(stext)
+ENTRY(parisc_kernel_start)
.proc
.callinfo
.procend
#endif /* CONFIG_SMP */
-ENDPROC(stext)
+ENDPROC(parisc_kernel_start)
#ifndef CONFIG_64BIT
.section .data..read_mostly
return (unsigned long) mapping >> 8;
}
-static unsigned long get_shared_area(struct address_space *mapping,
- unsigned long addr, unsigned long len, unsigned long pgoff)
+static unsigned long shared_align_offset(struct file *filp, unsigned long pgoff)
+{
+ struct address_space *mapping = filp ? filp->f_mapping : NULL;
+
+ return (get_offset(mapping) + pgoff) << PAGE_SHIFT;
+}
+
+static unsigned long get_shared_area(struct file *filp, unsigned long addr,
+ unsigned long len, unsigned long pgoff)
{
struct vm_unmapped_area_info info;
info.low_limit = PAGE_ALIGN(addr);
info.high_limit = TASK_SIZE;
info.align_mask = PAGE_MASK & (SHMLBA - 1);
- info.align_offset = (get_offset(mapping) + pgoff) << PAGE_SHIFT;
+ info.align_offset = shared_align_offset(filp, pgoff);
return vm_unmapped_area(&info);
}
return -ENOMEM;
if (flags & MAP_FIXED) {
if ((flags & MAP_SHARED) &&
- (addr - (pgoff << PAGE_SHIFT)) & (SHMLBA - 1))
+ (addr - shared_align_offset(filp, pgoff)) & (SHMLBA - 1))
return -EINVAL;
return addr;
}
if (!addr)
addr = TASK_UNMAPPED_BASE;
- if (filp) {
- addr = get_shared_area(filp->f_mapping, addr, len, pgoff);
- } else if(flags & MAP_SHARED) {
- addr = get_shared_area(NULL, addr, len, pgoff);
- } else {
+ if (filp || (flags & MAP_SHARED))
+ addr = get_shared_area(filp, addr, len, pgoff);
+ else
addr = get_unshared_area(addr, len);
- }
+
return addr;
}
}
/* Called from setup_arch to import the kernel unwind info */
-int unwind_init(void)
+int __init unwind_init(void)
{
long start, stop;
register unsigned long gp __asm__ ("r27");
e = find_unwind_entry(info->ip);
if (e == NULL) {
unsigned long sp;
- extern char _stext[], _etext[];
dbg("Cannot find unwind entry for 0x%lx; forced unwinding\n", info->ip);
break;
info->prev_ip = tmp;
sp = info->prev_sp;
- } while (info->prev_ip < (unsigned long)_stext ||
- info->prev_ip > (unsigned long)_etext);
+ } while (!kernel_text_address(info->prev_ip));
info->rp = 0;
do {
if (unwind_once(&info) < 0 || info.ip == 0)
return 0;
- if (!__kernel_text_address(info.ip)) {
+ if (!kernel_text_address(info.ip))
return 0;
- }
} while (info.ip && level--);
return info.ip;
* Copyright (C) 2000 Michael Ang <mang with subcarrier.org>
* Copyright (C) 2002 Randolph Chung <tausq with parisc-linux.org>
* Copyright (C) 2003 James Bottomley <jejb with parisc-linux.org>
- * Copyright (C) 2006 Helge Deller <deller@gmx.de>
- *
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ * Copyright (C) 2006-2013 Helge Deller <deller@gmx.de>
+ */
+
+/*
+ * Put page table entries (swapper_pg_dir) as the first thing in .bss. This
+ * will ensure that it has .bss alignment (PAGE_SIZE).
*/
+#define BSS_FIRST_SECTIONS *(.data..vm0.pmd) \
+ *(.data..vm0.pgd) \
+ *(.data..vm0.pte)
+
#include <asm-generic/vmlinux.lds.h>
+
/* needed for the processor specific cache alignment size */
#include <asm/cache.h>
#include <asm/page.h>
OUTPUT_ARCH(hppa:hppa2.0w)
#endif
-ENTRY(_stext)
+ENTRY(parisc_kernel_start)
#ifndef CONFIG_64BIT
jiffies = jiffies_64 + 4;
#else
{
. = KERNEL_BINARY_TEXT_START;
+ __init_begin = .;
+ HEAD_TEXT_SECTION
+ INIT_TEXT_SECTION(8)
+
+ . = ALIGN(PAGE_SIZE);
+ INIT_DATA_SECTION(PAGE_SIZE)
+ /* we have to discard exit text and such at runtime, not link time */
+ .exit.text :
+ {
+ EXIT_TEXT
+ }
+ .exit.data :
+ {
+ EXIT_DATA
+ }
+ PERCPU_SECTION(8)
+ . = ALIGN(PAGE_SIZE);
+ __init_end = .;
+ /* freed after init ends here */
+
_text = .; /* Text and read-only data */
- .head ALIGN(16) : {
- HEAD_TEXT
- } = 0
- .text ALIGN(16) : {
+ _stext = .;
+ .text ALIGN(PAGE_SIZE) : {
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
*(.lock.text) /* out-of-line lock text */
*(.gnu.warning)
}
- /* End of text section */
+ . = ALIGN(PAGE_SIZE);
_etext = .;
+ /* End of text section */
/* Start of data section */
_sdata = .;
- RODATA
+ RO_DATA_SECTION(8)
- /* writeable */
- /* Make sure this is page aligned so
- * that we can properly leave these
- * as writable
- */
- . = ALIGN(PAGE_SIZE);
- data_start = .;
+#ifdef CONFIG_64BIT
+ . = ALIGN(16);
+ /* Linkage tables */
+ .opd : {
+ *(.opd)
+ } PROVIDE (__gp = .);
+ .plt : {
+ *(.plt)
+ }
+ .dlt : {
+ *(.dlt)
+ }
+#endif
/* unwind info */
.PARISC.unwind : {
__stop___unwind = .;
}
- EXCEPTION_TABLE(16)
+ /* writeable */
+ /* Make sure this is page aligned so
+ * that we can properly leave these
+ * as writable
+ */
+ . = ALIGN(PAGE_SIZE);
+ data_start = .;
+
+ EXCEPTION_TABLE(8)
NOTES
/* Data */
_edata = .;
/* BSS */
- __bss_start = .;
- /* page table entries need to be PAGE_SIZE aligned */
- . = ALIGN(PAGE_SIZE);
- .data..vmpages : {
- *(.data..vm0.pmd)
- *(.data..vm0.pgd)
- *(.data..vm0.pte)
- }
- .bss : {
- *(.bss)
- *(COMMON)
- }
- __bss_stop = .;
-
-#ifdef CONFIG_64BIT
- . = ALIGN(16);
- /* Linkage tables */
- .opd : {
- *(.opd)
- } PROVIDE (__gp = .);
- .plt : {
- *(.plt)
- }
- .dlt : {
- *(.dlt)
- }
-#endif
+ BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 8)
- /* reserve space for interrupt stack by aligning __init* to 16k */
- . = ALIGN(16384);
- __init_begin = .;
- INIT_TEXT_SECTION(16384)
- . = ALIGN(PAGE_SIZE);
- INIT_DATA_SECTION(16)
- /* we have to discard exit text and such at runtime, not link time */
- .exit.text :
- {
- EXIT_TEXT
- }
- .exit.data :
- {
- EXIT_DATA
- }
-
- PERCPU_SECTION(L1_CACHE_BYTES)
- . = ALIGN(PAGE_SIZE);
- __init_end = .;
- /* freed after init ends here */
_end = . ;
STABS_DEBUG
#include <asm/sections.h>
extern int data_start;
+extern void parisc_kernel_start(void); /* Kernel entry point in head.S */
#if PT_NLEVELS == 3
/* NOTE: This layout exactly conforms to the hybrid L2/L3 page table layout
reserve_bootmem_node(NODE_DATA(0), 0UL,
(unsigned long)(PAGE0->mem_free +
PDC_CONSOLE_IO_IODC_SIZE), BOOTMEM_DEFAULT);
- reserve_bootmem_node(NODE_DATA(0), __pa((unsigned long)_text),
- (unsigned long)(_end - _text), BOOTMEM_DEFAULT);
+ reserve_bootmem_node(NODE_DATA(0), __pa(KERNEL_BINARY_TEXT_START),
+ (unsigned long)(_end - KERNEL_BINARY_TEXT_START),
+ BOOTMEM_DEFAULT);
reserve_bootmem_node(NODE_DATA(0), (bootmap_start_pfn << PAGE_SHIFT),
((bootmap_pfn - bootmap_start_pfn) << PAGE_SHIFT),
BOOTMEM_DEFAULT);
request_resource(&sysram_resources[0], &pdcdata_resource);
}
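+/*
+ * The kernel entry point (parisc_kernel_start in head.S) sits outside
+ * the ranges that core_kernel_text() knows about, so treat its page as
+ * kernel text as well when setting up the page tables.
+ */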
+static int __init parisc_text_address(unsigned long vaddr)
+{
+ static unsigned long head_ptr __initdata;
+
+ if (!head_ptr)
+ head_ptr = PAGE_MASK & (unsigned long)
+ dereference_function_descriptor(&parisc_kernel_start);
+
+ return core_kernel_text(vaddr) || vaddr == head_ptr;
+}
+
static void __init map_pages(unsigned long start_vaddr,
unsigned long start_paddr, unsigned long size,
pgprot_t pgprot, int force)
*/
if (force)
pte = __mk_pte(address, pgprot);
- else if (core_kernel_text(vaddr) &&
+ else if (parisc_text_address(vaddr) &&
address != fv_addr)
pte = __mk_pte(address, PAGE_KERNEL_EXEC);
else
compatible = "fsl,mpc5121-immr";
#address-cells = <1>;
#size-cells = <1>;
- #interrupt-cells = <2>;
ranges = <0x0 0x80000000 0x400000>;
reg = <0x80000000 0x400000>;
bus-frequency = <66000000>; /* 66 MHz ips bus */
CONFIG_PPC_MPC52xx=y
CONFIG_PPC_MPC5200_SIMPLE=y
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_SPARSE_IRQ=y
CONFIG_PM=y
# CONFIG_PCI is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PPC_OF_BE=y
CONFIG_USB_STORAGE=y
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_PPC_MPC5200_SIMPLE=y
CONFIG_PPC_LITE5200=y
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SPARSE_IRQ=y
CONFIG_I2C_MPC=y
# CONFIG_HWMON is not set
CONFIG_VIDEO_OUTPUT_CONTROL=m
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_PPC_MPC52xx=y
CONFIG_PPC_MPC5200_SIMPLE=y
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_SPARSE_IRQ=y
CONFIG_PM=y
# CONFIG_PCI is not set
CONFIG_LEDS_TRIGGER_TIMER=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_DS1307=y
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_PPC_MPC52xx=y
CONFIG_PPC_MPC5200_SIMPLE=y
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_HZ_100=y
CONFIG_USB_STORAGE=m
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_PCF8563=m
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=m
CONFIG_EXT3_FS=m
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_PPC_MPC5200_SIMPLE=y
CONFIG_PPC_MPC5200_BUGFIX=y
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_PM=y
# CONFIG_PCI is not set
CONFIG_NET=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_DS1307=y
CONFIG_RTC_DRV_DS1374=y
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_PPC_MPC5200_BUGFIX=y
CONFIG_PPC_MPC5200_LPBFIFO=m
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_SIMPLE_GPIO=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_RTC_DRV_DS1307=y
CONFIG_RTC_DRV_DS1374=y
CONFIG_RTC_DRV_PCF8563=m
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_ALTIVEC=y
CONFIG_SMP=y
CONFIG_NR_CPUS=2
-CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_INET_ESP=y
# CONFIG_IPV6 is not set
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
CONFIG_MTD=y
-CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_SLRAM=y
CONFIG_MTD_PHRAM=y
CONFIG_DM_CRYPT=y
CONFIG_NETDEVICES=y
CONFIG_DUMMY=y
-CONFIG_MII=y
CONFIG_TIGON3=y
CONFIG_E1000=y
CONFIG_PASEMI_MAC=y
CONFIG_NLS_ISO8859_1=y
CONFIG_CRC_CCITT=y
CONFIG_PRINTK_TIME=y
-CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_FS=y
+CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DETECT_HUNG_TASK=y
# CONFIG_SCHED_DEBUG is not set
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
unsigned long address)
{
- struct page *page = page_address(table);
-
tlb_flush_pgtable(tlb, address);
- pgtable_page_dtor(page);
- pgtable_free_tlb(tlb, page, 0);
+ pgtable_page_dtor(table);
+ pgtable_free_tlb(tlb, page_address(table), 0);
}
#endif /* _ASM_POWERPC_PGALLOC_32_H */
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
unsigned long address)
{
- struct page *page = page_address(table);
-
tlb_flush_pgtable(tlb, address);
- pgtable_page_dtor(page);
- pgtable_free_tlb(tlb, page, 0);
+ pgtable_page_dtor(table);
+ pgtable_free_tlb(tlb, page_address(table), 0);
}
#else /* if CONFIG_PPC_64K_PAGES */
* a small SLB (128MB) since the crash kernel needs to place
* itself and some stacks to be in the first segment.
*/
- crashk_res.start = min(0x80000000ULL, (ppc64_rma_size / 2));
+ crashk_res.start = min(0x8000000ULL, (ppc64_rma_size / 2));
#else
crashk_res.start = KDUMP_KERNELBASE;
#endif
or r3,r7,r9
blr
-#if defined(CONFIG_PPC_PMAC) || defined(CONFIG_PPC_MAPLE)
+#ifdef CONFIG_PPC_EARLY_DEBUG_BOOTX
_GLOBAL(rmci_on)
sync
isync
isync
sync
blr
+#endif /* CONFIG_PPC_EARLY_DEBUG_BOOTX */
+
+#if defined(CONFIG_PPC_PMAC) || defined(CONFIG_PPC_MAPLE)
/*
* Do an IO access in real mode
tbl->it_type = TCE_PCI_SWINV_CREATE | TCE_PCI_SWINV_FREE;
}
iommu_init_table(tbl, phb->hose->node);
+ iommu_register_group(tbl, pci_domain_nr(pe->pbus), pe->pe_number);
if (pe->pdev)
set_iommu_table_base(&pe->pdev->dev, tbl);
if (IS_ERR_VALUE(offset))
continue;
- ocm_blk = kzalloc(sizeof(struct ocm_block *), GFP_KERNEL);
+ ocm_blk = kzalloc(sizeof(struct ocm_block), GFP_KERNEL);
if (!ocm_blk) {
			printk(KERN_ERR "PPC4XX OCM: could not allocate ocm block\n");
rh_free(ocm_reg->rh, offset);
Even if you don't know what to do here, say Y.
config NR_CPUS
- int "Maximum number of CPUs (2-64)"
- range 2 64
+ int "Maximum number of CPUs (2-256)"
+ range 2 256
depends on SMP
default "32" if !64BIT
default "64" if 64BIT
help
This allows you to specify the maximum number of CPUs which this
- kernel will support. The maximum supported value is 64 and the
+ kernel will support. The maximum supported value is 256 and the
minimum value which makes sense is 2.
This is purely to save memory - each supported CPU adds
struct s390_xts_ctx {
u8 key[32];
- u8 xts_param[16];
- struct pcc_param pcc;
+ u8 pcc_key[32];
long enc;
long dec;
int key_len;
xts_ctx->enc = KM_XTS_128_ENCRYPT;
xts_ctx->dec = KM_XTS_128_DECRYPT;
memcpy(xts_ctx->key + 16, in_key, 16);
- memcpy(xts_ctx->pcc.key + 16, in_key + 16, 16);
+ memcpy(xts_ctx->pcc_key + 16, in_key + 16, 16);
break;
case 48:
xts_ctx->enc = 0;
xts_ctx->enc = KM_XTS_256_ENCRYPT;
xts_ctx->dec = KM_XTS_256_DECRYPT;
memcpy(xts_ctx->key, in_key, 32);
- memcpy(xts_ctx->pcc.key, in_key + 32, 32);
+ memcpy(xts_ctx->pcc_key, in_key + 32, 32);
break;
default:
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
unsigned int nbytes = walk->nbytes;
unsigned int n;
u8 *in, *out;
- void *param;
+ struct pcc_param pcc_param;
+ struct {
+ u8 key[32];
+ u8 init[16];
+ } xts_param;
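+	/*
+	 * The parameter blocks now live on the stack, so concurrent
+	 * requests sharing one tfm no longer clobber each other.
+	 */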
if (!nbytes)
goto out;
- memset(xts_ctx->pcc.block, 0, sizeof(xts_ctx->pcc.block));
- memset(xts_ctx->pcc.bit, 0, sizeof(xts_ctx->pcc.bit));
- memset(xts_ctx->pcc.xts, 0, sizeof(xts_ctx->pcc.xts));
- memcpy(xts_ctx->pcc.tweak, walk->iv, sizeof(xts_ctx->pcc.tweak));
- param = xts_ctx->pcc.key + offset;
- ret = crypt_s390_pcc(func, param);
+ memset(pcc_param.block, 0, sizeof(pcc_param.block));
+ memset(pcc_param.bit, 0, sizeof(pcc_param.bit));
+ memset(pcc_param.xts, 0, sizeof(pcc_param.xts));
+ memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
+ memcpy(pcc_param.key, xts_ctx->pcc_key, 32);
+ ret = crypt_s390_pcc(func, &pcc_param.key[offset]);
if (ret < 0)
return -EIO;
- memcpy(xts_ctx->xts_param, xts_ctx->pcc.xts, 16);
- param = xts_ctx->key + offset;
+ memcpy(xts_param.key, xts_ctx->key, 32);
+ memcpy(xts_param.init, pcc_param.xts, 16);
do {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
out = walk->dst.virt.addr;
in = walk->src.virt.addr;
- ret = crypt_s390_km(func, param, out, in, n);
+ ret = crypt_s390_km(func, &xts_param.key[offset], out, in, n);
if (ret < 0 || ret != n)
return -EIO;
#include <linux/types.h>
#include <asm/chpid.h>
+#include <asm/cpu.h>
#define SCLP_CHP_INFO_MASK_SIZE 32
unsigned int standby;
unsigned int combined;
int has_cpu_type;
- struct sclp_cpu_entry cpu[255];
+ struct sclp_cpu_entry cpu[MAX_CPU_ADDRESS + 1];
};
int sclp_get_cpu_info(struct sclp_cpu_info *info);
/* constants used by the vdso */
DEFINE(__CLOCK_REALTIME, CLOCK_REALTIME);
DEFINE(__CLOCK_MONOTONIC, CLOCK_MONOTONIC);
+ DEFINE(__CLOCK_THREAD_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID);
DEFINE(__CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC);
BLANK();
/* idle data offsets */
psal[i] = 0x80000000;
lowcore->paste[4] = (u32)(addr_t) psal;
- psal[0] = 0x20000000;
+ psal[0] = 0x02000000;
psal[2] = (u32)(addr_t) aste;
*(unsigned long *) (aste + 2) = segment_table +
_ASCE_TABLE_LENGTH + _ASCE_USER_BITS + _ASCE_TYPE_SEGMENT;
jnm 3f
a %r0,__VDSO_TK_MULT(%r5)
3: alr %r0,%r2
- al %r0,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */
- al %r1,__VDSO_XTIME_NSEC+4(%r5)
- brc 12,4f
- ahi %r0,1
-4: al %r0,__VDSO_WTOM_NSEC(%r5) /* + wall_to_monotonic.nsec */
+ al %r0,__VDSO_WTOM_NSEC(%r5)
al %r1,__VDSO_WTOM_NSEC+4(%r5)
brc 12,5f
ahi %r0,1
5: l %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */
srdl %r0,0(%r2) /* >> tk->shift */
- l %r2,__VDSO_XTIME_SEC+4(%r5)
- al %r2,__VDSO_WTOM_SEC+4(%r5)
+ l %r2,__VDSO_WTOM_SEC+4(%r5)
cl %r4,__VDSO_UPD_COUNT+4(%r5) /* check update counter */
jne 1b
basr %r5,0
je 0f
cghi %r2,__CLOCK_MONOTONIC
je 0f
- cghi %r2,-2 /* CLOCK_THREAD_CPUTIME_ID for this thread */
+ cghi %r2,__CLOCK_THREAD_CPUTIME_ID
+ je 0f
+ cghi %r2,-2 /* Per-thread CPUCLOCK with PID=0, VIRT=1 */
jne 2f
larl %r5,_vdso_data
icm %r0,15,__LC_ECTG_OK(%r5)
larl %r5,_vdso_data
cghi %r2,__CLOCK_REALTIME
je 4f
- cghi %r2,-2 /* CLOCK_THREAD_CPUTIME_ID for this thread */
+ cghi %r2,__CLOCK_THREAD_CPUTIME_ID
+ je 9f
+ cghi %r2,-2 /* Per-thread CPUCLOCK with PID=0, VIRT=1 */
je 9f
cghi %r2,__CLOCK_MONOTONIC
jne 12f
jnz 0b
stck 48(%r15) /* Store TOD clock */
lgf %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */
- lg %r0,__VDSO_XTIME_SEC(%r5) /* tk->xtime_sec */
- alg %r0,__VDSO_WTOM_SEC(%r5) /* + wall_to_monotonic.sec */
+ lg %r0,__VDSO_WTOM_SEC(%r5)
lg %r1,48(%r15)
sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */
msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */
- alg %r1,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */
- alg %r1,__VDSO_WTOM_NSEC(%r5) /* + wall_to_monotonic.nsec */
+ alg %r1,__VDSO_WTOM_NSEC(%r5)
srlg %r1,%r1,0(%r2) /* >> tk->shift */
clg %r4,__VDSO_UPD_COUNT(%r5) /* check update counter */
jne 0b
HEADER_ARCH := $(SUBARCH)
-# Additional ARCH settings for x86
-ifeq ($(SUBARCH),i386)
- HEADER_ARCH := x86
+ifneq ($(filter $(SUBARCH),x86 x86_64 i386),)
+ HEADER_ARCH := x86
endif
-ifeq ($(SUBARCH),x86_64)
- HEADER_ARCH := x86
+
+ifdef CONFIG_64BIT
KBUILD_CFLAGS += -mcmodel=large
endif
unsigned long return_address;
};
-static void print_stack_trace(unsigned long *sp, unsigned long bp)
+static void do_stack_trace(unsigned long *sp, unsigned long bp)
{
int reliable;
unsigned long addr;
}
printk(KERN_CONT "\n");
- print_stack_trace(sp, bp);
+ do_stack_trace(sp, bp);
}
KBUILD_CFLAGS += -msoft-float -mregparm=3 -freg-struct-return
+ # Don't autogenerate SSE instructions
+ KBUILD_CFLAGS += -mno-sse
+
# Never want PIC in a 32-bit kernel, prevent breakage with GCC built
# with nonstandard options
KBUILD_CFLAGS += -fno-pic
KBUILD_AFLAGS += -m64
KBUILD_CFLAGS += -m64
+ # Don't autogenerate SSE instructions
+ KBUILD_CFLAGS += -mno-sse
+
# Use -mpreferred-stack-boundary=3 if supported.
- KBUILD_CFLAGS += $(call cc-option,-mno-sse -mpreferred-stack-boundary=3)
+ KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
# FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
*/
static inline int atomic_sub_and_test(int i, atomic_t *v)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, i, "%0", "e");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", "e");
}
/**
*/
static inline int atomic_add_negative(int i, atomic_t *v)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, i, "%0", "s");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", "s");
}
/**
*/
static inline int atomic64_sub_and_test(long i, atomic64_t *v)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, i, "%0", "e");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", "e");
}
/**
*/
static inline int atomic64_add_negative(long i, atomic64_t *v)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, i, "%0", "s");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", "s");
}
/**
*/
static inline int test_and_set_bit(long nr, volatile unsigned long *addr)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, nr, "%0", "c");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", "c");
}
/**
*/
static inline int test_and_clear_bit(long nr, volatile unsigned long *addr)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "btr", *addr, nr, "%0", "c");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "btr", *addr, "Ir", nr, "%0", "c");
}
/**
*/
static inline int test_and_change_bit(long nr, volatile unsigned long *addr)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "btc", *addr, nr, "%0", "c");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "btc", *addr, "Ir", nr, "%0", "c");
}
static __always_inline int constant_test_bit(long nr, const volatile unsigned long *addr)
*/
static inline int local_sub_and_test(long i, local_t *l)
{
- GEN_BINARY_RMWcc(_ASM_SUB, l->a.counter, i, "%0", "e");
+ GEN_BINARY_RMWcc(_ASM_SUB, l->a.counter, "er", i, "%0", "e");
}
/**
*/
static inline int local_add_negative(long i, local_t *l)
{
- GEN_BINARY_RMWcc(_ASM_ADD, l->a.counter, i, "%0", "s");
+ GEN_BINARY_RMWcc(_ASM_ADD, l->a.counter, "er", i, "%0", "s");
}
/**
#define GEN_UNARY_RMWcc(op, var, arg0, cc) \
__GEN_RMWcc(op " " arg0, var, cc)
-#define GEN_BINARY_RMWcc(op, var, val, arg0, cc) \
- __GEN_RMWcc(op " %1, " arg0, var, cc, "er" (val))
+#define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
+ __GEN_RMWcc(op " %1, " arg0, var, cc, vcon (val))
#else /* !CC_HAVE_ASM_GOTO */
#define GEN_UNARY_RMWcc(op, var, arg0, cc) \
__GEN_RMWcc(op " " arg0, var, cc)
-#define GEN_BINARY_RMWcc(op, var, val, arg0, cc) \
- __GEN_RMWcc(op " %2, " arg0, var, cc, "er" (val))
+#define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
+ __GEN_RMWcc(op " %2, " arg0, var, cc, vcon (val))
#endif /* CC_HAVE_ASM_GOTO */
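The new vcon argument lets each call site pick the immediate-operand
constraint that matches its instruction. A minimal sketch of the two
patterns, taken from the hunks above:

	/* arithmetic ops pass "er": sign-extended 32-bit immediate or register */
	GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", "e");
	/* bit ops pass "Ir" instead, matching the bts/btr/btc immediate form */
	GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", "c");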
*/
DEFINE_IRQ_VECTOR_EVENT(irq_work);
+/*
+ * We must disallow sampling irq_work_exit() because perf event sampling
+ * itself can cause irq_work, which would lead to an infinite loop:
+ *
+ * 1) irq_work_exit happens
+ * 2) generates perf sample
+ * 3) generates irq_work
+ * 4) goto 1
+ */
+TRACE_EVENT_PERF_PERM(irq_work_exit, is_sampling_event(p_event) ? -EPERM : 0);
+
/*
* call_function - called when entering/exiting a call function interrupt
* vector handler
{
/* Stop the cpus and apics */
#ifdef CONFIG_X86_IO_APIC
+ /*
+ * Disabling IO APIC before local APIC is a workaround for
+ * erratum AVR31 in "Intel Atom Processor C2000 Product Family
+ * Specification Update". In this situation, interrupts that target
+ * a Logical Processor whose Local APIC is either in the process of
+ * being hardware disabled or software disabled are neither delivered
+ * nor discarded. When this erratum occurs, the processor may hang.
+ *
+ * Even without the erratum, it still makes sense to quiet IO APIC
+ * before disabling Local APIC.
+ */
disable_IO_APIC();
#endif
return (kvm_apic_get_reg(apic, APIC_ID) >> 24) & 0xff;
}
+#define KVM_X2APIC_CID_BITS 0
+
static void recalculate_apic_map(struct kvm *kvm)
{
struct kvm_apic_map *new, *old = NULL;
if (apic_x2apic_mode(apic)) {
new->ldr_bits = 32;
new->cid_shift = 16;
- new->cid_mask = new->lid_mask = 0xffff;
+ new->cid_mask = (1 << KVM_X2APIC_CID_BITS) - 1;
+ new->lid_mask = 0xffff;
} else if (kvm_apic_sw_enabled(apic) &&
!new->cid_mask /* flat mode */ &&
kvm_apic_get_reg(apic, APIC_DFR) == APIC_DFR_CLUSTER) {
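
With KVM_X2APIC_CID_BITS defined as 0, the new mask works out to (1 << 0) - 1 == 0, so every x2APIC logical ID is mapped to cluster 0; the apparent intent is that no guest-programmed LDR value can push the computed cluster index past the bounds of the fixed-size APIC map.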
ASSERT(apic != NULL);
/* if initial count is 0, current count should also be 0 */
- if (kvm_apic_get_reg(apic, APIC_TMICT) == 0)
+ if (kvm_apic_get_reg(apic, APIC_TMICT) == 0 ||
+ apic->lapic_timer.period == 0)
return 0;
remaining = hrtimer_get_remaining(&apic->lapic_timer.timer);
void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu)
{
u32 data;
- void *vapic;
if (test_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention))
apic_sync_pv_eoi_from_guest(vcpu, vcpu->arch.apic);
if (!test_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention))
return;
- vapic = kmap_atomic(vcpu->arch.apic->vapic_page);
- data = *(u32 *)(vapic + offset_in_page(vcpu->arch.apic->vapic_addr));
- kunmap_atomic(vapic);
+ kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache, &data,
+ sizeof(u32));
apic_set_tpr(vcpu->arch.apic, data & 0xff);
}
u32 data, tpr;
int max_irr, max_isr;
struct kvm_lapic *apic = vcpu->arch.apic;
- void *vapic;
apic_sync_pv_eoi_to_guest(vcpu, apic);
max_isr = 0;
data = (tpr & 0xff) | ((max_isr & 0xf0) << 8) | (max_irr << 24);
- vapic = kmap_atomic(vcpu->arch.apic->vapic_page);
- *(u32 *)(vapic + offset_in_page(vcpu->arch.apic->vapic_addr)) = data;
- kunmap_atomic(vapic);
+ kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache, &data,
+ sizeof(u32));
}
-void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr)
+int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr)
{
- vcpu->arch.apic->vapic_addr = vapic_addr;
- if (vapic_addr)
+ if (vapic_addr) {
+ if (kvm_gfn_to_hva_cache_init(vcpu->kvm,
+ &vcpu->arch.apic->vapic_cache,
+ vapic_addr, sizeof(u32)))
+ return -EINVAL;
__set_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention);
- else
+ } else {
__clear_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention);
+ }
+
+ vcpu->arch.apic->vapic_addr = vapic_addr;
+ return 0;
}
int kvm_x2apic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data)
*/
void *regs;
gpa_t vapic_addr;
- struct page *vapic_page;
+ struct gfn_to_hva_cache vapic_cache;
unsigned long pending_events;
unsigned int sipi_vector;
};
void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset);
void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector);
-void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
+int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
void kvm_lapic_sync_to_vapic(struct kvm_vcpu *vcpu);
r = -EFAULT;
if (copy_from_user(&va, argp, sizeof va))
goto out;
- r = 0;
- kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr);
+ r = kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr);
break;
}
case KVM_X86_SETUP_MCE: {
!kvm_event_needs_reinjection(vcpu);
}
-static int vapic_enter(struct kvm_vcpu *vcpu)
-{
- struct kvm_lapic *apic = vcpu->arch.apic;
- struct page *page;
-
- if (!apic || !apic->vapic_addr)
- return 0;
-
- page = gfn_to_page(vcpu->kvm, apic->vapic_addr >> PAGE_SHIFT);
- if (is_error_page(page))
- return -EFAULT;
-
- vcpu->arch.apic->vapic_page = page;
- return 0;
-}
-
-static void vapic_exit(struct kvm_vcpu *vcpu)
-{
- struct kvm_lapic *apic = vcpu->arch.apic;
- int idx;
-
- if (!apic || !apic->vapic_addr)
- return;
-
- idx = srcu_read_lock(&vcpu->kvm->srcu);
- kvm_release_page_dirty(apic->vapic_page);
- mark_page_dirty(vcpu->kvm, apic->vapic_addr >> PAGE_SHIFT);
- srcu_read_unlock(&vcpu->kvm->srcu, idx);
-}
-
static void update_cr8_intercept(struct kvm_vcpu *vcpu)
{
int max_irr, tpr;
struct kvm *kvm = vcpu->kvm;
vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
- r = vapic_enter(vcpu);
- if (r) {
- srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
- return r;
- }
r = 1;
while (r > 0) {
srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
- vapic_exit(vcpu);
-
return r;
}
efi_y += font->height;
}
- if (efi_y + font->height >= si->lfb_height) {
+ if (efi_y + font->height > si->lfb_height) {
u32 i;
efi_y -= font->height;
uint64_t v;
do {
- start = u64_stats_fetch_begin(&stat->syncp);
+ start = u64_stats_fetch_begin_bh(&stat->syncp);
v = stat->cnt;
- } while (u64_stats_fetch_retry(&stat->syncp, start));
+ } while (u64_stats_fetch_retry_bh(&stat->syncp, start));
return v;
}
struct blkg_rwstat tmp;
do {
- start = u64_stats_fetch_begin(&rwstat->syncp);
+ start = u64_stats_fetch_begin_bh(&rwstat->syncp);
tmp = *rwstat;
- } while (u64_stats_fetch_retry(&rwstat->syncp, start));
+ } while (u64_stats_fetch_retry_bh(&rwstat->syncp, start));
return tmp;
}
}
}
-static void bio_end_flush(struct bio *bio, int err)
-{
- if (err)
- clear_bit(BIO_UPTODATE, &bio->bi_flags);
- if (bio->bi_private)
- complete(bio->bi_private);
- bio_put(bio);
-}
-
/**
* blkdev_issue_flush - queue a flush
* @bdev: blockdev to issue flush for
int blkdev_issue_flush(struct block_device *bdev, gfp_t gfp_mask,
sector_t *error_sector)
{
- DECLARE_COMPLETION_ONSTACK(wait);
struct request_queue *q;
struct bio *bio;
int ret = 0;
return -ENXIO;
bio = bio_alloc(gfp_mask, 0);
- bio->bi_end_io = bio_end_flush;
bio->bi_bdev = bdev;
- bio->bi_private = &wait;
- bio_get(bio);
- submit_bio(WRITE_FLUSH, bio);
- wait_for_completion_io(&wait);
+ ret = submit_bio_wait(WRITE_FLUSH, bio);
/*
* The driver must store the error location in ->bi_sector, if
if (error_sector)
*error_sector = bio->bi_sector;
- if (!bio_flagged(bio, BIO_UPTODATE))
- ret = -EIO;
-
bio_put(bio);
return ret;
}
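
The rewrite leans on submit_bio_wait(), which supplies its own on-stack completion and returns the I/O error directly; that is what makes the open-coded bio_end_flush() helper, the DECLARE_COMPLETION_ONSTACK()/bio_get() dance, and the explicit BIO_UPTODATE check all redundant.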
if (rq) {
blk_mq_rq_ctx_init(q, ctx, rq, rw);
break;
- } else if (!(gfp & __GFP_WAIT))
- break;
+ }
blk_mq_put_ctx(ctx);
+ if (!(gfp & __GFP_WAIT))
+ break;
+
__blk_mq_run_hw_queue(hctx);
blk_mq_wait_for_tags(hctx->tags);
} while (1);
return NULL;
rq = blk_mq_alloc_request_pinned(q, rw, gfp, reserved);
- blk_mq_put_ctx(rq->mq_ctx);
+ if (rq)
+ blk_mq_put_ctx(rq->mq_ctx);
return rq;
}
return NULL;
rq = blk_mq_alloc_request_pinned(q, rw, gfp, true);
- blk_mq_put_ctx(rq->mq_ctx);
+ if (rq)
+ blk_mq_put_ctx(rq->mq_ctx);
return rq;
}
EXPORT_SYMBOL(blk_mq_alloc_reserved_request);
blk_account_io_completion(rq, bytes);
+ blk_account_io_done(rq);
+
if (rq->end_io)
rq->end_io(rq, error);
else
blk_mq_free_request(rq);
-
- blk_account_io_done(rq);
}
void __blk_mq_end_io(struct request *rq, int error)
struct hash_ctx *ctx = ask->private;
int err;
+ if (flags & MSG_SENDPAGE_NOTLAST)
+ flags |= MSG_MORE;
+
lock_sock(sk);
sg_init_table(ctx->sgl.sg, 1);
sg_set_page(ctx->sgl.sg, page, size, offset);
struct skcipher_sg_list *sgl;
int err = -EINVAL;
+ if (flags & MSG_SENDPAGE_NOTLAST)
+ flags |= MSG_MORE;
+
lock_sock(sk);
if (!ctx->more && ctx->used)
goto unlock;
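
Both hunks above translate MSG_SENDPAGE_NOTLAST into MSG_MORE. The sendpage path (splice/sendfile into the algif socket) sets MSG_SENDPAGE_NOTLAST on every page except the final one, so without this translation each intermediate page would look like the end of a message and the hash or cipher operation would be finalized too early.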
if (!err) {
struct crypto_aead *authenc = crypto_aead_reqtfm(areq);
struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
- struct ablkcipher_request *abreq = aead_request_ctx(areq);
- u8 *iv = (u8 *)(abreq + 1) +
- crypto_ablkcipher_reqsize(ctx->enc);
+ struct authenc_request_ctx *areq_ctx = aead_request_ctx(areq);
+ struct ablkcipher_request *abreq = (void *)(areq_ctx->tail
+ + ctx->reqoff);
+ u8 *iv = (u8 *)abreq - crypto_ablkcipher_ivsize(ctx->enc);
err = crypto_authenc_genicv(areq, iv, 0);
}
}
/* compute plaintext into mac */
- get_data_to_compute(cipher, pctx, plain, cryptlen);
+ if (cryptlen)
+ get_data_to_compute(cipher, pctx, plain, cryptlen);
out:
return err;
ret += tcrypt_test("cmac(des3_ede)");
break;
+ case 155:
+ ret += tcrypt_test("authenc(hmac(sha1),cbc(aes))");
+ break;
+
case 200:
test_cipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0,
speed_template_16_24_32);
goto out;
}
- sg_init_one(&sg[0], input,
- template[i].ilen + (enc ? authsize : 0));
-
if (diff_dst) {
output = xoutbuf[0];
output += align_offset;
+ sg_init_one(&sg[0], input, template[i].ilen);
sg_init_one(&sgout[0], output,
+ template[i].rlen);
+ } else {
+ sg_init_one(&sg[0], input,
template[i].ilen +
(enc ? authsize : 0));
- } else {
output = input;
}
memcpy(q, template[i].input + temp,
template[i].tap[k]);
- n = template[i].tap[k];
- if (k == template[i].np - 1 && enc)
- n += authsize;
- if (offset_in_page(q) + n < PAGE_SIZE)
- q[n] = 0;
-
sg_set_buf(&sg[k], q, template[i].tap[k]);
if (diff_dst) {
offset_in_page(IDX[k]);
memset(q, 0, template[i].tap[k]);
- if (offset_in_page(q) + n < PAGE_SIZE)
- q[n] = 0;
sg_set_buf(&sgout[k], q,
template[i].tap[k]);
}
+ n = template[i].tap[k];
+ if (k == template[i].np - 1 && enc)
+ n += authsize;
+ if (offset_in_page(q) + n < PAGE_SIZE)
+ q[n] = 0;
+
temp += template[i].tap[k];
}
goto out;
}
- sg[k - 1].length += authsize;
-
if (diff_dst)
sgout[k - 1].length += authsize;
+ else
+ sg[k - 1].length += authsize;
}
sg_init_table(asg, template[i].anp);
shost->max_lun = 1;
shost->max_channel = 1;
shost->max_cmd_len = 16;
+ shost->no_write_same = 1;
/* Schedule policy is determined by ->qc_defer()
* callback and it needs to see every deferred qc.
BUG_ON(reg_size != 4);
- if (ctx->clk) {
+ if (!IS_ERR(ctx->clk)) {
ret = clk_enable(ctx->clk);
if (ret < 0)
return ret;
offset += ctx->val_bytes;
}
- if (ctx->clk)
+ if (!IS_ERR(ctx->clk))
clk_disable(ctx->clk);
return 0;
BUG_ON(reg_size != 4);
- if (ctx->clk) {
+ if (!IS_ERR(ctx->clk)) {
ret = clk_enable(ctx->clk);
if (ret < 0)
return ret;
offset += ctx->val_bytes;
}
- if (ctx->clk)
+ if (!IS_ERR(ctx->clk))
clk_disable(ctx->clk);
return 0;
{
struct regmap_mmio_context *ctx = context;
- if (ctx->clk) {
+ if (!IS_ERR(ctx->clk)) {
clk_unprepare(ctx->clk);
clk_put(ctx->clk);
}
ctx->regs = regs;
ctx->val_bytes = config->val_bits / 8;
+ ctx->clk = ERR_PTR(-ENODEV);
if (clk_id == NULL)
return ctx;
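
Seeding ctx->clk with ERR_PTR(-ENODEV) and testing IS_ERR() rather than NULL gives a sentinel that can never collide with a real clock handle: clk_get() reports failure through ERR_PTR() values, not NULL, so a plain NULL test cannot reliably distinguish "no clock supplied" from a failed or valid lookup.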
val + (i * val_bytes),
val_bytes);
if (ret != 0)
- return ret;
+ goto out;
}
} else {
ret = _regmap_raw_write(map, reg, wval, val_bytes * val_count);
/**
* regmap_read(): Read a value from a single register
*
- * @map: Register map to write to
+ * @map: Register map to read from
* @reg: Register to be read from
* @val: Pointer to store read value
*
/**
* regmap_raw_read(): Read raw data from the device
*
- * @map: Register map to write to
+ * @map: Register map to read from
* @reg: First register to be read from
* @val: Pointer to store read value
* @val_len: Size of data to read
/**
* regmap_bulk_read(): Read multiple registers from the device
*
- * @map: Register map to write to
+ * @map: Register map to read from
* @reg: First register to be read from
* @val: Pointer to store read value, in native register size for device
* @val_count: Number of registers to read
if ((ring_req->operation == BLKIF_OP_INDIRECT) &&
(i % SEGS_PER_INDIRECT_FRAME == 0)) {
- unsigned long pfn;
+ unsigned long uninitialized_var(pfn);
if (segments)
kunmap_atomic(segments);
bdev = bdget_disk(disk, 0);
+ if (!bdev) {
+ WARN(1, "Block device %s yanked out from us!\n", disk->disk_name);
+ goto out_mutex;
+ }
if (bdev->bd_openers)
goto out;
out:
bdput(bdev);
+out_mutex:
mutex_unlock(&blkfront_mutex);
}
DMI_MATCH(DMI_PRODUCT_NAME, "Vostro"),
},
},
+ {
+ .ident = "Dell XPS421",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "XPS L421X"),
+ },
+ },
{ }
};
config ARM_ARCH_TIMER_EVTSTREAM
bool "Support for ARM architected timer event stream generation"
default y if ARM_ARCH_TIMER
+ depends on ARM_ARCH_TIMER
help
This option enables support for event stream generation based on
the ARM architected timer. It is used for waking up CPUs executing
goto err1;
}
- return sh_mtu2_register(p, (char *)dev_name(&p->pdev->dev),
- cfg->clockevent_rating);
+ ret = clk_prepare(p->clk);
+ if (ret < 0)
+ goto err2;
+
+ ret = sh_mtu2_register(p, (char *)dev_name(&p->pdev->dev),
+ cfg->clockevent_rating);
+ if (ret < 0)
+ goto err3;
+
+ return 0;
+ err3:
+ clk_unprepare(p->clk);
+ err2:
+ clk_put(p->clk);
err1:
iounmap(p->mapbase);
err0:
ret = PTR_ERR(p->clk);
goto err1;
}
+
+ ret = clk_prepare(p->clk);
+ if (ret < 0)
+ goto err2;
+
p->cs_enabled = false;
p->enable_count = 0;
- return sh_tmu_register(p, (char *)dev_name(&p->pdev->dev),
- cfg->clockevent_rating,
- cfg->clocksource_rating);
+ ret = sh_tmu_register(p, (char *)dev_name(&p->pdev->dev),
+ cfg->clockevent_rating,
+ cfg->clocksource_rating);
+ if (ret < 0)
+ goto err3;
+
+ return 0;
+
+ err3:
+ clk_unprepare(p->clk);
+ err2:
+ clk_put(p->clk);
err1:
iounmap(p->mapbase);
err0:
return 0;
}
-static int __init at32_cpufreq_driver_init(struct cpufreq_policy *policy)
+static int at32_cpufreq_driver_init(struct cpufreq_policy *policy)
{
unsigned int frequency, rate, min_freq;
int retval, steps, i;
*/
void cpuidle_unregister_device(struct cpuidle_device *dev)
{
- if (dev->registered == 0)
+ if (!dev || dev->registered == 0)
return;
cpuidle_pause_and_lock();
ivsize, 1);
print_hex_dump(KERN_ERR, "dst @"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, sg_virt(req->dst),
- req->cryptlen, 1);
+ req->cryptlen - ctx->authsize, 1);
#endif
if (err) {
(edesc->src_nents ? : 1);
in_options = LDST_SGF;
}
- if (encrypt)
- append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize +
- req->cryptlen - authsize, in_options);
- else
- append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize +
- req->cryptlen, in_options);
+
+ append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize + req->cryptlen,
+ in_options);
if (likely(req->src == req->dst)) {
if (all_contig) {
}
}
if (encrypt)
- append_seq_out_ptr(desc, dst_dma, req->cryptlen, out_options);
+ append_seq_out_ptr(desc, dst_dma, req->cryptlen + authsize,
+ out_options);
else
append_seq_out_ptr(desc, dst_dma, req->cryptlen - authsize,
out_options);
sec4_sg_index += edesc->assoc_nents + 1 + edesc->src_nents;
in_options = LDST_SGF;
}
- append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize +
- req->cryptlen - authsize, in_options);
+ append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize + req->cryptlen,
+ in_options);
if (contig & GIV_DST_CONTIG) {
dst_dma = edesc->iv_dma;
}
}
- append_seq_out_ptr(desc, dst_dma, ivsize + req->cryptlen, out_options);
+ append_seq_out_ptr(desc, dst_dma, ivsize + req->cryptlen + authsize,
+ out_options);
}
/*
* allocate and map the aead extended descriptor
*/
static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
- int desc_bytes, bool *all_contig_ptr)
+ int desc_bytes, bool *all_contig_ptr,
+ bool encrypt)
{
struct crypto_aead *aead = crypto_aead_reqtfm(req);
struct caam_ctx *ctx = crypto_aead_ctx(aead);
bool assoc_chained = false, src_chained = false, dst_chained = false;
int ivsize = crypto_aead_ivsize(aead);
int sec4_sg_index, sec4_sg_len = 0, sec4_sg_bytes;
+ unsigned int authsize = ctx->authsize;
assoc_nents = sg_count(req->assoc, req->assoclen, &assoc_chained);
- src_nents = sg_count(req->src, req->cryptlen, &src_chained);
- if (unlikely(req->dst != req->src))
- dst_nents = sg_count(req->dst, req->cryptlen, &dst_chained);
+ if (unlikely(req->dst != req->src)) {
+ src_nents = sg_count(req->src, req->cryptlen, &src_chained);
+ dst_nents = sg_count(req->dst,
+ req->cryptlen +
+ (encrypt ? authsize : (-authsize)),
+ &dst_chained);
+ } else {
+ src_nents = sg_count(req->src,
+ req->cryptlen +
+ (encrypt ? authsize : 0),
+ &src_chained);
+ }
sgc = dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
DMA_TO_DEVICE, assoc_chained);
u32 *desc;
int ret = 0;
- req->cryptlen += ctx->authsize;
-
/* allocate extended descriptor */
edesc = aead_edesc_alloc(req, DESC_JOB_IO_LEN *
- CAAM_CMD_SZ, &all_contig);
+ CAAM_CMD_SZ, &all_contig, true);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
/* allocate extended descriptor */
edesc = aead_edesc_alloc(req, DESC_JOB_IO_LEN *
- CAAM_CMD_SZ, &all_contig);
+ CAAM_CMD_SZ, &all_contig, false);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
src_nents = sg_count(req->src, req->cryptlen, &src_chained);
if (unlikely(req->dst != req->src))
- dst_nents = sg_count(req->dst, req->cryptlen, &dst_chained);
+ dst_nents = sg_count(req->dst, req->cryptlen + ctx->authsize,
+ &dst_chained);
sgc = dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
DMA_TO_DEVICE, assoc_chained);
u32 *desc;
int ret = 0;
- req->cryptlen += ctx->authsize;
-
/* allocate extended descriptor */
edesc = aead_giv_edesc_alloc(areq, DESC_JOB_IO_LEN *
CAAM_CMD_SZ, &contig);
*/
#include <linux/of_irq.h>
+#include <linux/of_address.h>
#include "compat.h"
#include "regs.h"
if (edesc->assoc_chained)
talitos_unmap_sg_chain(dev, areq->assoc, DMA_TO_DEVICE);
- else
+ else if (areq->assoclen)
/* assoc_nents counts also for IV in non-contiguous cases */
dma_unmap_sg(dev, areq->assoc,
edesc->assoc_nents ? edesc->assoc_nents - 1 : 1,
dma_sync_single_for_device(dev, edesc->dma_link_tbl,
edesc->dma_len, DMA_BIDIRECTIONAL);
} else {
- to_talitos_ptr(&desc->ptr[1], sg_dma_address(areq->assoc));
+ if (areq->assoclen)
+ to_talitos_ptr(&desc->ptr[1],
+ sg_dma_address(areq->assoc));
+ else
+ to_talitos_ptr(&desc->ptr[1], edesc->iv_dma);
desc->ptr[1].j_extent = 0;
}
unsigned int authsize,
unsigned int ivsize,
int icv_stashing,
- u32 cryptoflags)
+ u32 cryptoflags,
+ bool encrypt)
{
struct talitos_edesc *edesc;
int assoc_nents = 0, src_nents, dst_nents, alloc_len, dma_len;
return ERR_PTR(-EINVAL);
}
- if (iv)
+ if (ivsize)
iv_dma = dma_map_single(dev, iv, ivsize, DMA_TO_DEVICE);
- if (assoc) {
+ if (assoclen) {
/*
* Currently it is assumed that iv is provided whenever assoc
* is.
assoc_nents = assoc_nents ? assoc_nents + 1 : 2;
}
- src_nents = sg_count(src, cryptlen + authsize, &src_chained);
- src_nents = (src_nents == 1) ? 0 : src_nents;
-
- if (!dst) {
- dst_nents = 0;
- } else {
- if (dst == src) {
- dst_nents = src_nents;
- } else {
- dst_nents = sg_count(dst, cryptlen + authsize,
- &dst_chained);
- dst_nents = (dst_nents == 1) ? 0 : dst_nents;
- }
+ if (!dst || dst == src) {
+ src_nents = sg_count(src, cryptlen + authsize, &src_chained);
+ src_nents = (src_nents == 1) ? 0 : src_nents;
+ dst_nents = dst ? src_nents : 0;
+	} else { /* dst && dst != src */
+ src_nents = sg_count(src, cryptlen + (encrypt ? 0 : authsize),
+ &src_chained);
+ src_nents = (src_nents == 1) ? 0 : src_nents;
+ dst_nents = sg_count(dst, cryptlen + (encrypt ? authsize : 0),
+ &dst_chained);
+ dst_nents = (dst_nents == 1) ? 0 : dst_nents;
}
/*
edesc = kmalloc(alloc_len, GFP_DMA | flags);
if (!edesc) {
- talitos_unmap_sg_chain(dev, assoc, DMA_TO_DEVICE);
+ if (assoc_chained)
+ talitos_unmap_sg_chain(dev, assoc, DMA_TO_DEVICE);
+ else if (assoclen)
+ dma_unmap_sg(dev, assoc,
+ assoc_nents ? assoc_nents - 1 : 1,
+ DMA_TO_DEVICE);
+
if (iv_dma)
dma_unmap_single(dev, iv_dma, ivsize, DMA_TO_DEVICE);
+
dev_err(dev, "could not allocate edescriptor\n");
return ERR_PTR(-ENOMEM);
}
}
static struct talitos_edesc *aead_edesc_alloc(struct aead_request *areq, u8 *iv,
- int icv_stashing)
+ int icv_stashing, bool encrypt)
{
struct crypto_aead *authenc = crypto_aead_reqtfm(areq);
struct talitos_ctx *ctx = crypto_aead_ctx(authenc);
return talitos_edesc_alloc(ctx->dev, areq->assoc, areq->src, areq->dst,
iv, areq->assoclen, areq->cryptlen,
ctx->authsize, ivsize, icv_stashing,
- areq->base.flags);
+ areq->base.flags, encrypt);
}
static int aead_encrypt(struct aead_request *req)
struct talitos_edesc *edesc;
/* allocate extended descriptor */
- edesc = aead_edesc_alloc(req, req->iv, 0);
+ edesc = aead_edesc_alloc(req, req->iv, 0, true);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
req->cryptlen -= authsize;
/* allocate extended descriptor */
- edesc = aead_edesc_alloc(req, req->iv, 1);
+ edesc = aead_edesc_alloc(req, req->iv, 1, false);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
struct talitos_edesc *edesc;
/* allocate extended descriptor */
- edesc = aead_edesc_alloc(areq, req->giv, 0);
+ edesc = aead_edesc_alloc(areq, req->giv, 0, true);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
}
static struct talitos_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request *
- areq)
+ areq, bool encrypt)
{
struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq);
struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher);
return talitos_edesc_alloc(ctx->dev, NULL, areq->src, areq->dst,
areq->info, 0, areq->nbytes, 0, ivsize, 0,
- areq->base.flags);
+ areq->base.flags, encrypt);
}
static int ablkcipher_encrypt(struct ablkcipher_request *areq)
struct talitos_edesc *edesc;
/* allocate extended descriptor */
- edesc = ablkcipher_edesc_alloc(areq);
+ edesc = ablkcipher_edesc_alloc(areq, true);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
struct talitos_edesc *edesc;
/* allocate extended descriptor */
- edesc = ablkcipher_edesc_alloc(areq);
+ edesc = ablkcipher_edesc_alloc(areq, false);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
return talitos_edesc_alloc(ctx->dev, NULL, req_ctx->psrc, NULL, NULL, 0,
- nbytes, 0, 0, 0, areq->base.flags);
+ nbytes, 0, 0, 0, areq->base.flags, false);
}
static int ahash_init(struct ahash_request *areq)
struct pl08x_txd *txd = to_pl08x_txd(&vd->tx);
struct pl08x_dma_chan *plchan = to_pl08x_chan(vd->tx.chan);
- dma_descriptor_unmap(txd);
+ dma_descriptor_unmap(&vd->tx);
if (!txd->done)
pl08x_release_mux(plchan);
}
}
+ platform_set_drvdata(op, pdev);
dev_info(pdev->device.dev, "initialized %d channels\n", dma_channels);
return 0;
}
s3cchan->state = S3C24XX_DMA_CHAN_IDLE;
}
-static void s3c24xx_dma_unmap_buffers(struct s3c24xx_txd *txd)
-{
- struct device *dev = txd->vd.tx.chan->device->dev;
- struct s3c24xx_sg *dsg;
-
- if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- if (txd->vd.tx.flags & DMA_COMPL_SRC_UNMAP_SINGLE)
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_single(dev, dsg->src_addr, dsg->len,
- DMA_TO_DEVICE);
- else {
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_page(dev, dsg->src_addr, dsg->len,
- DMA_TO_DEVICE);
- }
- }
-
- if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- if (txd->vd.tx.flags & DMA_COMPL_DEST_UNMAP_SINGLE)
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_single(dev, dsg->dst_addr, dsg->len,
- DMA_FROM_DEVICE);
- else
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_page(dev, dsg->dst_addr, dsg->len,
- DMA_FROM_DEVICE);
- }
-}
-
static void s3c24xx_dma_desc_free(struct virt_dma_desc *vd)
{
struct s3c24xx_txd *txd = to_s3c24xx_txd(&vd->tx);
struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(vd->tx.chan);
if (!s3cchan->slave)
- s3c24xx_dma_unmap_buffers(txd);
+ dma_descriptor_unmap(&vd->tx);
s3c24xx_dma_free_txd(txd);
}
spin_lock_irqsave(&s3cchan->vc.lock, flags);
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS) {
+ if (ret == DMA_COMPLETE) {
spin_unlock_irqrestore(&s3cchan->vc.lock, flags);
return ret;
}
#define HPB_DMAE_DSTPR_DMSTP BIT(0)
/* DMA status register (DSTSR) bits */
+#define HPB_DMAE_DSTSR_DQSTS BIT(2)
#define HPB_DMAE_DSTSR_DMSTS BIT(0)
/* DMA common registers */
ch_reg_write(chan, HPB_DMAE_DCMDR_DQEND, HPB_DMAE_DCMDR);
ch_reg_write(chan, HPB_DMAE_DSTPR_DMSTP, HPB_DMAE_DSTPR);
+
+ chan->plane_idx = 0;
+ chan->first_desc = true;
}
static const struct hpb_dmae_slave_config *
struct hpb_dmae_chan *chan = to_chan(schan);
u32 dstsr = ch_reg_read(chan, HPB_DMAE_DSTSR);
- return (dstsr & HPB_DMAE_DSTSR_DMSTS) == HPB_DMAE_DSTSR_DMSTS;
+ if (chan->xfer_mode == XFER_DOUBLE)
+ return dstsr & HPB_DMAE_DSTSR_DQSTS;
+ else
+ return dstsr & HPB_DMAE_DSTSR_DMSTS;
}
static int
}
schan = &new_hpb_chan->shdma_chan;
+ schan->max_xfer_len = HPB_DMA_TCR_MAX;
+
shdma_chan_probe(sdev, schan, id);
if (pdev->id >= 0)
static int arizona_extcon_probe(struct platform_device *pdev)
{
struct arizona *arizona = dev_get_drvdata(pdev->dev.parent);
- struct arizona_pdata *pdata;
+ struct arizona_pdata *pdata = &arizona->pdata;
struct arizona_extcon_info *info;
unsigned int val;
int jack_irq_fall, jack_irq_rise;
if (!arizona->dapm || !arizona->dapm->card)
return -EPROBE_DEFER;
- pdata = dev_get_platdata(arizona->dev);
-
info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
if (!info) {
dev_err(&pdev->dev, "Failed to allocate memory\n");
return;
}
+ device_unregister(&edev->dev);
+
if (edev->mutually_exclusive && edev->max_supported) {
for (index = 0; edev->mutually_exclusive[index];
index++)
if (switch_class)
class_compat_remove_link(switch_class, &edev->dev, NULL);
#endif
- device_unregister(&edev->dev);
put_device(&edev->dev);
}
EXPORT_SYMBOL_GPL(extcon_dev_unregister);
.cmd_per_lun = 1,
.can_queue = 1,
.sdev_attrs = sbp2_scsi_sysfs_attrs,
+ .no_write_same = 1,
};
MODULE_AUTHOR("Kristian Hoegsberg <krh@bitplanet.net>");
static int efi_pstore_open(struct pstore_info *psi)
{
- efivar_entry_iter_begin();
psi->data = NULL;
return 0;
}
static int efi_pstore_close(struct pstore_info *psi)
{
- efivar_entry_iter_end();
psi->data = NULL;
return 0;
}
char **buf;
};
+static inline u64 generic_id(unsigned long timestamp,
+ unsigned int part, int count)
+{
+ return (timestamp * 100 + part) * 1000 + count;
+}
+
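
The id returned to pstore therefore encodes all three fields, and the erase path further down recovers "part" with two do_div() calls. A minimal standalone sketch of the round-trip (userspace C, not kernel code; it assumes part < 100 and count < 1000, which the dump-type name format implies):

	#include <stdio.h>
	#include <stdint.h>

	/* same packing as the generic_id() helper above */
	static uint64_t generic_id(unsigned long timestamp, unsigned int part,
				   int count)
	{
		return ((uint64_t)timestamp * 100 + part) * 1000 + count;
	}

	int main(void)
	{
		uint64_t id = generic_id(1386000000UL, 7, 42);
		unsigned int count, part;

		/* do_div(x, d) divides x in place and returns the remainder */
		count = id % 1000;
		id /= 1000;
		part = id % 100;
		id /= 100;

		/* prints: timestamp=1386000000 part=7 count=42 */
		printf("timestamp=%llu part=%u count=%u\n",
		       (unsigned long long)id, part, count);
		return 0;
	}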
static int efi_pstore_read_func(struct efivar_entry *entry, void *data)
{
efi_guid_t vendor = LINUX_EFI_CRASH_GUID;
if (sscanf(name, "dump-type%u-%u-%d-%lu-%c",
cb_data->type, &part, &cnt, &time, &data_type) == 5) {
- *cb_data->id = part;
+ *cb_data->id = generic_id(time, part, cnt);
*cb_data->count = cnt;
cb_data->timespec->tv_sec = time;
cb_data->timespec->tv_nsec = 0;
*cb_data->compressed = false;
} else if (sscanf(name, "dump-type%u-%u-%d-%lu",
cb_data->type, &part, &cnt, &time) == 4) {
- *cb_data->id = part;
+ *cb_data->id = generic_id(time, part, cnt);
*cb_data->count = cnt;
cb_data->timespec->tv_sec = time;
cb_data->timespec->tv_nsec = 0;
* which doesn't support holding
* multiple logs, remains.
*/
- *cb_data->id = part;
+ *cb_data->id = generic_id(time, part, 0);
*cb_data->count = 0;
cb_data->timespec->tv_sec = time;
cb_data->timespec->tv_nsec = 0;
__efivar_entry_get(entry, &entry->var.Attributes,
&entry->var.DataSize, entry->var.Data);
size = entry->var.DataSize;
-	*cb_data->buf = kmemdup(entry->var.Data, size, GFP_KERNEL);
-	if (*cb_data->buf == NULL)
-		return -ENOMEM;
+	memcpy(*cb_data->buf, entry->var.Data,
+	       (size_t)min_t(unsigned long, EFIVARS_DATA_SIZE_MAX, size));
return size;
}
+/**
+ * efi_pstore_scan_sysfs_enter
+ * @pos: scanning entry
+ * @next: next entry
+ * @head: list head
+ */
+static void efi_pstore_scan_sysfs_enter(struct efivar_entry *pos,
+ struct efivar_entry *next,
+ struct list_head *head)
+{
+ pos->scanning = true;
+ if (&next->list != head)
+ next->scanning = true;
+}
+
+/**
+ * __efi_pstore_scan_sysfs_exit
+ * @entry: deleting entry
+ * @turn_off_scanning: whether the scanning flag should be cleared
+ */
+static inline void __efi_pstore_scan_sysfs_exit(struct efivar_entry *entry,
+ bool turn_off_scanning)
+{
+ if (entry->deleting) {
+ list_del(&entry->list);
+ efivar_entry_iter_end();
+ efivar_unregister(entry);
+ efivar_entry_iter_begin();
+ } else if (turn_off_scanning)
+ entry->scanning = false;
+}
+
+/**
+ * efi_pstore_scan_sysfs_exit
+ * @pos: scanning entry
+ * @next: next entry
+ * @head: list head
+ * @stop: whether scanning is about to stop
+ */
+static void efi_pstore_scan_sysfs_exit(struct efivar_entry *pos,
+ struct efivar_entry *next,
+ struct list_head *head, bool stop)
+{
+ __efi_pstore_scan_sysfs_exit(pos, true);
+ if (stop)
+ __efi_pstore_scan_sysfs_exit(next, &next->list != head);
+}
+
+/**
+ * efi_pstore_sysfs_entry_iter
+ *
+ * @data: function-specific data to pass to callback
+ * @pos: entry to begin iterating from
+ *
+ * You MUST call efivar_entry_iter_begin() before this function, and
+ * efivar_entry_iter_end() afterwards.
+ *
+ * It is possible to begin iteration from an arbitrary entry within
+ * the list by passing @pos. On return, @pos points to the entry
+ * following the last one passed to efi_pstore_read_func().
+ * To begin iterating from the beginning of the list, @pos must be %NULL.
+ */
+static int efi_pstore_sysfs_entry_iter(void *data, struct efivar_entry **pos)
+{
+ struct efivar_entry *entry, *n;
+ struct list_head *head = &efivar_sysfs_list;
+ int size = 0;
+
+ if (!*pos) {
+ list_for_each_entry_safe(entry, n, head, list) {
+ efi_pstore_scan_sysfs_enter(entry, n, head);
+
+ size = efi_pstore_read_func(entry, data);
+ efi_pstore_scan_sysfs_exit(entry, n, head, size < 0);
+ if (size)
+ break;
+ }
+ *pos = n;
+ return size;
+ }
+
+ list_for_each_entry_safe_from((*pos), n, head, list) {
+ efi_pstore_scan_sysfs_enter((*pos), n, head);
+
+ size = efi_pstore_read_func((*pos), data);
+ efi_pstore_scan_sysfs_exit((*pos), n, head, size < 0);
+ if (size)
+ break;
+ }
+ *pos = n;
+ return size;
+}
+
+/**
+ * efi_pstore_read
+ *
+ * This function returns the size of an NVRAM entry logged via
+ * efi_pstore_write(). The meaning and behavior of efi_pstore/pstore are
+ * as follows.
+ *
+ * size > 0: Got the data of an entry logged via efi_pstore_write(), and
+ *           the pstore filesystem will continue reading subsequent entries.
+ * size == 0: The entry was not logged via efi_pstore_write(), and the
+ *           efi_pstore driver will continue reading subsequent entries.
+ * size < 0: Failed to get the data of an entry logged via
+ *           efi_pstore_write(), and pstore will stop reading entries.
+ */
static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type,
int *count, struct timespec *timespec,
char **buf, bool *compressed,
struct pstore_info *psi)
{
struct pstore_read_data data;
+ ssize_t size;
data.id = id;
data.type = type;
data.compressed = compressed;
data.buf = buf;
- return __efivar_entry_iter(efi_pstore_read_func, &efivar_sysfs_list, &data,
- (struct efivar_entry **)&psi->data);
+ *data.buf = kzalloc(EFIVARS_DATA_SIZE_MAX, GFP_KERNEL);
+ if (!*data.buf)
+ return -ENOMEM;
+
+ efivar_entry_iter_begin();
+ size = efi_pstore_sysfs_entry_iter(&data,
+ (struct efivar_entry **)&psi->data);
+ efivar_entry_iter_end();
+ if (size <= 0)
+ kfree(*data.buf);
+ return size;
}
static int efi_pstore_write(enum pstore_type_id type,
return 0;
}
+ if (entry->scanning) {
+ /*
+ * Skip deletion because this entry will be deleted
+ * after scanning is completed.
+ */
+ entry->deleting = true;
+ } else
+ list_del(&entry->list);
+
/* found */
__efivar_entry_delete(entry);
- list_del(&entry->list);
return 1;
}
char name[DUMP_NAME_LEN];
efi_char16_t efi_name[DUMP_NAME_LEN];
int found, i;
+ unsigned int part;
- sprintf(name, "dump-type%u-%u-%d-%lu", type, (unsigned int)id, count,
- time.tv_sec);
+ do_div(id, 1000);
+ part = do_div(id, 100);
+ sprintf(name, "dump-type%u-%u-%d-%lu", type, part, count, time.tv_sec);
for (i = 0; i < DUMP_NAME_LEN; i++)
efi_name[i] = name[i];
- edata.id = id;
+ edata.id = part;
edata.type = type;
edata.count = count;
edata.time = time;
efivar_entry_iter_begin();
found = __efivar_entry_iter(efi_pstore_erase_func, &efivar_sysfs_list, &edata, &entry);
- efivar_entry_iter_end();
- if (found)
+ if (found && !entry->scanning) {
+ efivar_entry_iter_end();
efivar_unregister(entry);
+ } else
+ efivar_entry_iter_end();
return 0;
}
else if (__efivar_entry_delete(entry))
err = -EIO;
- efivar_entry_iter_end();
-
- if (err)
+ if (err) {
+ efivar_entry_iter_end();
return err;
+ }
- efivar_unregister(entry);
+ if (!entry->scanning) {
+ efivar_entry_iter_end();
+ efivar_unregister(entry);
+ } else
+ efivar_entry_iter_end();
/* It's dead Jim.... */
return count;
if (!found)
return NULL;
- if (remove)
- list_del(&entry->list);
+ if (remove) {
+ if (entry->scanning) {
+ /*
+ * The entry will be deleted
+ * after scanning is completed.
+ */
+ entry->deleting = true;
+ } else
+ list_del(&entry->list);
+ }
return entry;
}
* NOTE: we assume for now that only irqs in the first gpio_chip
* can provide direct-mapped IRQs to AINTC (up to 32 GPIOs).
*/
- if (offset < d->irq_base)
+ if (offset < d->gpio_unbanked)
return d->gpio_irq + offset;
else
return -ENODEV;
/* pass "bank 0" GPIO IRQs to AINTC */
chips[0].chip.to_irq = gpio_to_irq_unbanked;
+ chips[0].gpio_irq = bank_irq;
+ chips[0].gpio_unbanked = pdata->gpio_unbanked;
binten = BIT(0);
/* AINTC handles mask/unmask; GPIO handles triggering */
u32 val;
struct of_mm_gpio_chip *mm = to_of_mm_gpio_chip(gc);
struct mpc8xxx_gpio_chip *mpc8xxx_gc = to_mpc8xxx_gpio_chip(mm);
+ u32 out_mask, out_shadow;
-	val = in_be32(mm->regs + GPIO_DAT) & ~in_be32(mm->regs + GPIO_DIR);
-	return (val | mpc8xxx_gc->data) & mpc8xxx_gpio2mask(gpio);
+	out_mask = in_be32(mm->regs + GPIO_DIR);
+	val = in_be32(mm->regs + GPIO_DAT) & ~out_mask;
+	out_shadow = mpc8xxx_gc->data & out_mask;
+
+ return (val | out_shadow) & mpc8xxx_gpio2mask(gpio);
}
static int mpc8xxx_gpio_get(struct gpio_chip *gc, unsigned int gpio)
continue;
}
- if (chip->ngpio >= p->chip_hwnum) {
+ if (chip->ngpio <= p->chip_hwnum) {
dev_warn(dev, "GPIO chip %s has %d GPIOs\n",
chip->label, chip->ngpio);
continue;
const char *con_id,
unsigned int idx)
{
- struct gpio_desc *desc;
+ struct gpio_desc *desc = NULL;
int status;
enum gpio_lookup_flags flags = 0;
} else if (IS_ENABLED(CONFIG_ACPI) && dev && ACPI_HANDLE(dev)) {
dev_dbg(dev, "using ACPI for GPIO lookup\n");
desc = acpi_find_gpio(dev, con_id, idx, &flags);
- } else {
+ }
+
+ /*
+ * Either we are not using DT or ACPI, or their lookup did not return
+ * a result. In that case, use platform lookup as a fallback.
+ */
+ if (!desc || IS_ERR(desc)) {
+ struct gpio_desc *pdesc;
dev_dbg(dev, "using lookup tables for GPIO lookup");
- desc = gpiod_find(dev, con_id, idx, &flags);
+ pdesc = gpiod_find(dev, con_id, idx, &flags);
+ /* If used as fallback, do not replace the previous error */
+ if (!IS_ERR(pdesc) || !desc)
+ desc = pdesc;
}
if (IS_ERR(desc)) {
- dev_warn(dev, "lookup for GPIO %s failed\n", con_id);
+ dev_dbg(dev, "lookup for GPIO %s failed\n", con_id);
return desc;
}
int modes = 0;
u8 cea_mode;
- if (video_db == NULL || video_index > video_len)
+ if (video_db == NULL || video_index >= video_len)
return 0;
/* CEA modes are numbered 1..127 */
if (structure & (1 << 8)) {
newmode = drm_mode_duplicate(dev, &edid_cea_modes[cea_mode]);
if (newmode) {
- newmode->flags = DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF;
+ newmode->flags |= DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF;
drm_mode_probed_add(connector, newmode);
modes++;
}
static void exynos_drm_preclose(struct drm_device *dev,
struct drm_file *file)
+{
+ exynos_drm_subdrv_close(dev, file);
+}
+
+static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file)
{
struct exynos_drm_private *private = dev->dev_private;
- struct drm_pending_vblank_event *e, *t;
+ struct drm_pending_vblank_event *v, *vt;
+ struct drm_pending_event *e, *et;
unsigned long flags;
- /* release events of current file */
+ if (!file->driver_priv)
+ return;
+
+	/* Release all events not yet handled by the page flip handler. */
spin_lock_irqsave(&dev->event_lock, flags);
- list_for_each_entry_safe(e, t, &private->pageflip_event_list,
+ list_for_each_entry_safe(v, vt, &private->pageflip_event_list,
base.link) {
- if (e->base.file_priv == file) {
- list_del(&e->base.link);
- e->base.destroy(&e->base);
+ if (v->base.file_priv == file) {
+ list_del(&v->base.link);
+ drm_vblank_put(dev, v->pipe);
+ v->base.destroy(&v->base);
}
}
- spin_unlock_irqrestore(&dev->event_lock, flags);
- exynos_drm_subdrv_close(dev, file);
-}
+ /* Release all events handled by page flip handler but not freed. */
+ list_for_each_entry_safe(e, et, &file->event_list, link) {
+ list_del(&e->link);
+ e->destroy(e);
+ }
+ spin_unlock_irqrestore(&dev->event_lock, flags);
-static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file)
-{
- if (!file->driver_priv)
- return;
kfree(file->driver_priv);
file->driver_priv = NULL;
#include "exynos_drm_iommu.h"
/*
- * FIMD is stand for Fully Interactive Mobile Display and
+ * FIMD stands for Fully Interactive Mobile Display and
* as a display controller, it transfers contents drawn on memory
 * to an LCD Panel through Display Interfaces such as RGB or
* CPU Interface.
* Disable CRTCs directly since we want to preserve sw state
* for _thaw.
*/
+ mutex_lock(&dev->mode_config.mutex);
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
dev_priv->display.crtc_disable(crtc);
+ mutex_unlock(&dev->mode_config.mutex);
intel_modeset_suspend_hw(dev);
}
if (dev_priv->ellc_size)
I915_WRITE(HSW_IDICR, I915_READ(HSW_IDICR) | IDIHASHMSK(0xf));
- if (IS_HSW_GT3(dev))
- I915_WRITE(MI_PREDICATE_RESULT_2, LOWER_SLICE_ENABLED);
- else
- I915_WRITE(MI_PREDICATE_RESULT_2, LOWER_SLICE_DISABLED);
+ if (IS_HASWELL(dev))
+ I915_WRITE(MI_PREDICATE_RESULT_2, IS_HSW_GT3(dev) ?
+ LOWER_SLICE_ENABLED : LOWER_SLICE_DISABLED);
if (HAS_PCH_NOP(dev)) {
u32 temp = I915_READ(GEN7_MSG_CTL);
ret = i915_gem_object_get_pages(obj);
if (ret)
- goto error;
+ goto err;
+
+ i915_gem_object_pin_pages(obj);
ret = -ENOMEM;
pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
if (pages == NULL)
- goto error;
+ goto err_unpin;
i = 0;
for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
drm_free_large(pages);
if (!obj->dma_buf_vmapping)
- goto error;
+ goto err_unpin;
obj->vmapping_count = 1;
- i915_gem_object_pin_pages(obj);
out_unlock:
mutex_unlock(&dev->struct_mutex);
return obj->dma_buf_vmapping;
-error:
+err_unpin:
+ i915_gem_object_unpin_pages(obj);
+err:
mutex_unlock(&dev->struct_mutex);
return ERR_PTR(ret);
}
#include "intel_drv.h"
#include <linux/dma_remapping.h>
+#define __EXEC_OBJECT_HAS_PIN (1<<31)
+#define __EXEC_OBJECT_HAS_FENCE (1<<30)
+
struct eb_vmas {
struct list_head vmas;
int and;
}
}
-static void eb_destroy(struct eb_vmas *eb) {
+static void
+i915_gem_execbuffer_unreserve_vma(struct i915_vma *vma)
+{
+ struct drm_i915_gem_exec_object2 *entry;
+ struct drm_i915_gem_object *obj = vma->obj;
+
+ if (!drm_mm_node_allocated(&vma->node))
+ return;
+
+ entry = vma->exec_entry;
+
+ if (entry->flags & __EXEC_OBJECT_HAS_FENCE)
+ i915_gem_object_unpin_fence(obj);
+
+ if (entry->flags & __EXEC_OBJECT_HAS_PIN)
+ i915_gem_object_unpin(obj);
+
+ entry->flags &= ~(__EXEC_OBJECT_HAS_FENCE | __EXEC_OBJECT_HAS_PIN);
+}
+
+static void eb_destroy(struct eb_vmas *eb)
+{
while (!list_empty(&eb->vmas)) {
struct i915_vma *vma;
struct i915_vma,
exec_list);
list_del_init(&vma->exec_list);
+ i915_gem_execbuffer_unreserve_vma(vma);
drm_gem_object_unreference(&vma->obj->base);
}
kfree(eb);
return ret;
}
-#define __EXEC_OBJECT_HAS_PIN (1<<31)
-#define __EXEC_OBJECT_HAS_FENCE (1<<30)
-
static int
need_reloc_mappable(struct i915_vma *vma)
{
return 0;
}
-static void
-i915_gem_execbuffer_unreserve_vma(struct i915_vma *vma)
-{
- struct drm_i915_gem_exec_object2 *entry;
- struct drm_i915_gem_object *obj = vma->obj;
-
- if (!drm_mm_node_allocated(&vma->node))
- return;
-
- entry = vma->exec_entry;
-
- if (entry->flags & __EXEC_OBJECT_HAS_FENCE)
- i915_gem_object_unpin_fence(obj);
-
- if (entry->flags & __EXEC_OBJECT_HAS_PIN)
- i915_gem_object_unpin(obj);
-
- entry->flags &= ~(__EXEC_OBJECT_HAS_FENCE | __EXEC_OBJECT_HAS_PIN);
-}
-
static int
i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
struct list_head *vmas,
goto err;
}
-err: /* Decrement pin count for bound objects */
- list_for_each_entry(vma, vmas, exec_list)
- i915_gem_execbuffer_unreserve_vma(vma);
-
+err:
if (ret != -ENOSPC || retry++)
return ret;
+ /* Decrement pin count for bound objects */
+ list_for_each_entry(vma, vmas, exec_list)
+ i915_gem_execbuffer_unreserve_vma(vma);
+
ret = i915_gem_evict_vm(vm, true);
if (ret)
return ret;
while (!list_empty(&eb->vmas)) {
vma = list_first_entry(&eb->vmas, struct i915_vma, exec_list);
list_del_init(&vma->exec_list);
+ i915_gem_execbuffer_unreserve_vma(vma);
drm_gem_object_unreference(&vma->obj->base);
}
#define HSW_WB_LLC_AGE3 HSW_CACHEABILITY_CONTROL(0x2)
#define HSW_WB_LLC_AGE0 HSW_CACHEABILITY_CONTROL(0x3)
#define HSW_WB_ELLC_LLC_AGE0 HSW_CACHEABILITY_CONTROL(0xb)
+#define HSW_WB_ELLC_LLC_AGE3 HSW_CACHEABILITY_CONTROL(0x8)
#define HSW_WT_ELLC_LLC_AGE0 HSW_CACHEABILITY_CONTROL(0x6)
+#define HSW_WT_ELLC_LLC_AGE3 HSW_CACHEABILITY_CONTROL(0x7)
#define GEN8_PTES_PER_PAGE (PAGE_SIZE / sizeof(gen8_gtt_pte_t))
#define GEN8_PDES_PER_PAGE (PAGE_SIZE / sizeof(gen8_ppgtt_pde_t))
case I915_CACHE_NONE:
break;
case I915_CACHE_WT:
- pte |= HSW_WT_ELLC_LLC_AGE0;
+ pte |= HSW_WT_ELLC_LLC_AGE3;
break;
default:
- pte |= HSW_WB_ELLC_LLC_AGE0;
+ pte |= HSW_WB_ELLC_LLC_AGE3;
break;
}
*/
#define MI_LOAD_REGISTER_IMM(x) MI_INSTR(0x22, 2*x-1)
#define MI_STORE_REGISTER_MEM(x) MI_INSTR(0x24, 2*x-1)
+#define MI_SRM_LRM_GLOBAL_GTT (1<<22)
#define MI_FLUSH_DW MI_INSTR(0x26, 1) /* for GEN6 */
#define MI_FLUSH_DW_STORE_INDEX (1<<21)
#define MI_INVALIDATE_TLB (1<<18)
ddi_translations = ddi_translations_dp;
break;
case PORT_D:
- if (intel_dpd_is_edp(dev))
+ if (intel_dp_is_edp(dev, PORT_D))
ddi_translations = ddi_translations_edp;
else
ddi_translations = ddi_translations_dp;
if (wait)
intel_wait_ddi_buf_idle(dev_priv, port);
- if (type == INTEL_OUTPUT_EDP) {
+ if (type == INTEL_OUTPUT_DISPLAYPORT || type == INTEL_OUTPUT_EDP) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
ironlake_edp_panel_vdd_on(intel_dp);
+ intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
ironlake_edp_panel_off(intel_dp);
}
uint16_t postoff = 0;
if (intel_crtc->config.limited_color_range)
- postoff = (16 * (1 << 13) / 255) & 0x1fff;
+ postoff = (16 * (1 << 12) / 255) & 0x1fff;
I915_WRITE(PIPE_CSC_POSTOFF_HI(pipe), postoff);
I915_WRITE(PIPE_CSC_POSTOFF_ME(pipe), postoff);
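
The arithmetic behind the change: 16 * (1 << 12) / 255 = 257 (0x101), where the previous 1 << 13 scale produced 514. Presumably the post-offset field carries a 12-bit binary point, so 257 is the representation of the +16/255 black-level offset that limited-range output needs, and the old value doubled it.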
/* Make sure we're not on PC8 state before disabling PC8, otherwise
* we'll hang the machine! */
- dev_priv->uncore.funcs.force_wake_get(dev_priv);
+ gen6_gt_force_wake_get(dev_priv);
if (val & LCPLL_POWER_DOWN_ALLOW) {
val &= ~LCPLL_POWER_DOWN_ALLOW;
DRM_ERROR("Switching back to LCPLL failed\n");
}
- dev_priv->uncore.funcs.force_wake_put(dev_priv);
+ gen6_gt_force_wake_put(dev_priv);
}
void hsw_enable_pc8_work(struct work_struct *__work)
intel_ring_emit(ring, ~(DERRMR_PIPEA_PRI_FLIP_DONE |
DERRMR_PIPEB_PRI_FLIP_DONE |
DERRMR_PIPEC_PRI_FLIP_DONE));
- intel_ring_emit(ring, MI_STORE_REGISTER_MEM(1));
+ intel_ring_emit(ring, MI_STORE_REGISTER_MEM(1) |
+ MI_SRM_LRM_GLOBAL_GTT);
intel_ring_emit(ring, DERRMR);
intel_ring_emit(ring, ring->scratch.gtt_offset + 256);
}
intel_ddi_init(dev, PORT_D);
} else if (HAS_PCH_SPLIT(dev)) {
int found;
- dpd_is_edp = intel_dpd_is_edp(dev);
+ dpd_is_edp = intel_dp_is_edp(dev, PORT_D);
if (has_edp_a(dev))
intel_dp_init(dev, DP_A, PORT_A);
intel_hdmi_init(dev, VLV_DISPLAY_BASE + GEN4_HDMIC,
PORT_C);
if (I915_READ(VLV_DISPLAY_BASE + DP_C) & DP_DETECTED)
- intel_dp_init(dev, VLV_DISPLAY_BASE + DP_C,
- PORT_C);
+ intel_dp_init(dev, VLV_DISPLAY_BASE + DP_C, PORT_C);
}
intel_dsi_init(dev);
}
/* check the VBT to see whether the eDP is on DP-D port */
-bool intel_dpd_is_edp(struct drm_device *dev)
+bool intel_dp_is_edp(struct drm_device *dev, enum port port)
{
struct drm_i915_private *dev_priv = dev->dev_private;
union child_device_config *p_child;
int i;
+ static const short port_mapping[] = {
+ [PORT_B] = PORT_IDPB,
+ [PORT_C] = PORT_IDPC,
+ [PORT_D] = PORT_IDPD,
+ };
+
+ if (port == PORT_A)
+ return true;
if (!dev_priv->vbt.child_dev_num)
return false;
for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
p_child = dev_priv->vbt.child_dev + i;
- if (p_child->common.dvo_port == PORT_IDPD &&
+ if (p_child->common.dvo_port == port_mapping[port] &&
(p_child->common.device_type & DEVICE_TYPE_eDP_BITS) ==
(DEVICE_TYPE_eDP & DEVICE_TYPE_eDP_BITS))
return true;
intel_dp->DP = I915_READ(intel_dp->output_reg);
intel_dp->attached_connector = intel_connector;
- type = DRM_MODE_CONNECTOR_DisplayPort;
- /*
- * FIXME : We need to initialize built-in panels before external panels.
- * For X0, DP_C is fixed as eDP. Revisit this as part of VLV eDP cleanup
- */
- switch (port) {
- case PORT_A:
+ if (intel_dp_is_edp(dev, port))
type = DRM_MODE_CONNECTOR_eDP;
- break;
- case PORT_C:
- if (IS_VALLEYVIEW(dev))
- type = DRM_MODE_CONNECTOR_eDP;
- break;
- case PORT_D:
- if (HAS_PCH_SPLIT(dev) && intel_dpd_is_edp(dev))
- type = DRM_MODE_CONNECTOR_eDP;
- break;
- default: /* silence GCC warning */
- break;
- }
+ else
+ type = DRM_MODE_CONNECTOR_DisplayPort;
/*
* For eDP we always set the encoder type to INTEL_OUTPUT_EDP, but
void intel_dp_check_link_status(struct intel_dp *intel_dp);
bool intel_dp_compute_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config);
-bool intel_dpd_is_edp(struct drm_device *dev);
+bool intel_dp_is_edp(struct drm_device *dev, enum port port);
void ironlake_edp_backlight_on(struct intel_dp *intel_dp);
void ironlake_edp_backlight_off(struct intel_dp *intel_dp);
void ironlake_edp_panel_on(struct intel_dp *intel_dp);
adjusted_mode = &to_intel_crtc(crtc)->config.adjusted_mode;
clock = adjusted_mode->crtc_clock;
- htotal = adjusted_mode->htotal;
+ htotal = adjusted_mode->crtc_htotal;
hdisplay = to_intel_crtc(crtc)->config.pipe_src_w;
pixel_size = crtc->fb->bits_per_pixel / 8;
crtc = intel_get_crtc_for_plane(dev, plane);
adjusted_mode = &to_intel_crtc(crtc)->config.adjusted_mode;
clock = adjusted_mode->crtc_clock;
- htotal = adjusted_mode->htotal;
+ htotal = adjusted_mode->crtc_htotal;
hdisplay = to_intel_crtc(crtc)->config.pipe_src_w;
pixel_size = crtc->fb->bits_per_pixel / 8;
const struct drm_display_mode *adjusted_mode =
&to_intel_crtc(crtc)->config.adjusted_mode;
int clock = adjusted_mode->crtc_clock;
- int htotal = adjusted_mode->htotal;
+ int htotal = adjusted_mode->crtc_htotal;
int hdisplay = to_intel_crtc(crtc)->config.pipe_src_w;
int pixel_size = crtc->fb->bits_per_pixel / 8;
unsigned long line_time_us;
const struct drm_display_mode *adjusted_mode =
&to_intel_crtc(enabled)->config.adjusted_mode;
int clock = adjusted_mode->crtc_clock;
- int htotal = adjusted_mode->htotal;
+ int htotal = adjusted_mode->crtc_htotal;
int hdisplay = to_intel_crtc(enabled)->config.pipe_src_w;
int pixel_size = enabled->fb->bits_per_pixel / 8;
unsigned long line_time_us;
crtc = intel_get_crtc_for_plane(dev, plane);
adjusted_mode = &to_intel_crtc(crtc)->config.adjusted_mode;
clock = adjusted_mode->crtc_clock;
- htotal = adjusted_mode->htotal;
+ htotal = adjusted_mode->crtc_htotal;
hdisplay = to_intel_crtc(crtc)->config.pipe_src_w;
pixel_size = crtc->fb->bits_per_pixel / 8;
	/* The WM values are computed based on how long it takes to fill a
	 * single row at the given clock rate, multiplied by 8.
	 */
- linetime = DIV_ROUND_CLOSEST(mode->htotal * 1000 * 8, mode->clock);
- ips_linetime = DIV_ROUND_CLOSEST(mode->htotal * 1000 * 8,
+ linetime = DIV_ROUND_CLOSEST(mode->crtc_htotal * 1000 * 8,
+ mode->crtc_clock);
+ ips_linetime = DIV_ROUND_CLOSEST(mode->crtc_htotal * 1000 * 8,
intel_ddi_get_cdclk_freq(dev_priv));
return PIPE_WM_LINETIME_IPS_LINETIME(ips_linetime) |
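
A worked example of the corrected formula: with crtc_htotal = 2200 and crtc_clock = 148500 (kHz, a typical 1080p timing), DIV_ROUND_CLOSEST(2200 * 1000 * 8, 148500) = 119, which is the ~14.8 us line time expressed in 1/8 us units. Using the logical htotal/clock fields instead of the crtc_* values miscomputes this whenever the two differ, e.g. for double-clocked modes.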
nouveau-y += core/subdev/clock/nv50.o
nouveau-y += core/subdev/clock/nv84.o
nouveau-y += core/subdev/clock/nva3.o
+nouveau-y += core/subdev/clock/nvaa.o
nouveau-y += core/subdev/clock/nvc0.o
nouveau-y += core/subdev/clock/nve0.o
nouveau-y += core/subdev/clock/pllnv04.o
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = &nv50_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = &nv94_i2c_oclass;
- device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
+ device->oclass[NVDEV_SUBDEV_CLOCK ] = nvaa_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_DEVINIT] = &nv50_devinit_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = &nv50_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = &nv94_i2c_oclass;
- device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
+ device->oclass[NVDEV_SUBDEV_CLOCK ] = nvaa_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_DEVINIT] = &nv50_devinit_oclass;
#include <engine/dmaobj.h>
#include <engine/fifo.h>
+#include "nv04.h"
#include "nv50.h"
/*******************************************************************************
nv_subdev(priv)->intr = nv04_fifo_intr;
nv_engine(priv)->cclass = &nv50_fifo_cclass;
nv_engine(priv)->sclass = nv50_fifo_sclass;
+ priv->base.pause = nv04_fifo_pause;
+ priv->base.start = nv04_fifo_start;
return 0;
}
#include <engine/dmaobj.h>
#include <engine/fifo.h>
+#include "nv04.h"
#include "nv50.h"
/*******************************************************************************
nv_subdev(priv)->intr = nv04_fifo_intr;
nv_engine(priv)->cclass = &nv84_fifo_cclass;
nv_engine(priv)->sclass = nv84_fifo_sclass;
+ priv->base.pause = nv04_fifo_pause;
+ priv->base.start = nv04_fifo_start;
return 0;
}
if (ret)
return ret;
- chan->vblank.nr_event = pdisp->vblank->index_nr;
+ chan->vblank.nr_event = pdisp ? pdisp->vblank->index_nr : 0;
chan->vblank.event = kzalloc(chan->vblank.nr_event *
sizeof(*chan->vblank.event), GFP_KERNEL);
if (!chan->vblank.event)
nv_clk_src_hclk,
nv_clk_src_hclkm3,
nv_clk_src_hclkm3d2,
+ nv_clk_src_hclkm2d3, /* NVAA */
+ nv_clk_src_hclkm4, /* NVAA */
+ nv_clk_src_cclk, /* NVAA */
nv_clk_src_host,
extern struct nouveau_oclass nv40_clock_oclass;
extern struct nouveau_oclass *nv50_clock_oclass;
extern struct nouveau_oclass *nv84_clock_oclass;
+extern struct nouveau_oclass *nvaa_clock_oclass;
extern struct nouveau_oclass nva3_clock_oclass;
extern struct nouveau_oclass nvc0_clock_oclass;
extern struct nouveau_oclass nve0_clock_oclass;
return 0;
}
+static struct nouveau_clocks
+nv04_domain[] = {
+ { nv_clk_src_max }
+};
+
static int
nv04_clock_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nv04_clock_priv *priv;
int ret;
- ret = nouveau_clock_create(parent, engine, oclass, NULL, &priv);
+ ret = nouveau_clock_create(parent, engine, oclass, nv04_domain, &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
--- /dev/null
+/*
+ * Copyright 2012 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs
+ */
+
+#include <engine/fifo.h>
+#include <subdev/bios.h>
+#include <subdev/bios/pll.h>
+#include <subdev/timer.h>
+#include <subdev/clock.h>
+
+#include "pll.h"
+
+struct nvaa_clock_priv {
+ struct nouveau_clock base;
+ enum nv_clk_src csrc, ssrc, vsrc;
+ u32 cctrl, sctrl;
+ u32 ccoef, scoef;
+ u32 cpost, spost;
+ u32 vdiv;
+};
+
+static u32
+read_div(struct nouveau_clock *clk)
+{
+ return nv_rd32(clk, 0x004600);
+}
+
+static u32
+read_pll(struct nouveau_clock *clk, u32 base)
+{
+ u32 ctrl = nv_rd32(clk, base + 0);
+ u32 coef = nv_rd32(clk, base + 4);
+ u32 ref = clk->read(clk, nv_clk_src_href);
+ u32 post_div = 0;
+ u32 clock = 0;
+ int N1, M1;
+
+	switch (base) {
+ case 0x4020:
+ post_div = 1 << ((nv_rd32(clk, 0x4070) & 0x000f0000) >> 16);
+ break;
+ case 0x4028:
+ post_div = (nv_rd32(clk, 0x4040) & 0x000f0000) >> 16;
+ break;
+ default:
+ break;
+ }
+
+ N1 = (coef & 0x0000ff00) >> 8;
+ M1 = (coef & 0x000000ff);
+ if ((ctrl & 0x80000000) && M1) {
+ clock = ref * N1 / M1;
+ clock = clock / post_div;
+ }
+
+ return clock;
+}
+
+static int
+nvaa_clock_read(struct nouveau_clock *clk, enum nv_clk_src src)
+{
+ struct nvaa_clock_priv *priv = (void *)clk;
+ u32 mast = nv_rd32(clk, 0x00c054);
+ u32 P = 0;
+
+ switch (src) {
+ case nv_clk_src_crystal:
+ return nv_device(priv)->crystal;
+ case nv_clk_src_href:
+ return 100000; /* PCIE reference clock */
+ case nv_clk_src_hclkm4:
+ return clk->read(clk, nv_clk_src_href) * 4;
+ case nv_clk_src_hclkm2d3:
+ return clk->read(clk, nv_clk_src_href) * 2 / 3;
+ case nv_clk_src_host:
+ switch (mast & 0x000c0000) {
+ case 0x00000000: return clk->read(clk, nv_clk_src_hclkm2d3);
+ case 0x00040000: break;
+ case 0x00080000: return clk->read(clk, nv_clk_src_hclkm4);
+ case 0x000c0000: return clk->read(clk, nv_clk_src_cclk);
+ }
+ break;
+ case nv_clk_src_core:
+ P = (nv_rd32(clk, 0x004028) & 0x00070000) >> 16;
+
+ switch (mast & 0x00000003) {
+ case 0x00000000: return clk->read(clk, nv_clk_src_crystal) >> P;
+ case 0x00000001: return 0;
+ case 0x00000002: return clk->read(clk, nv_clk_src_hclkm4) >> P;
+ case 0x00000003: return read_pll(clk, 0x004028) >> P;
+ }
+ break;
+ case nv_clk_src_cclk:
+ if ((mast & 0x03000000) != 0x03000000)
+ return clk->read(clk, nv_clk_src_core);
+
+ if ((mast & 0x00000200) == 0x00000000)
+ return clk->read(clk, nv_clk_src_core);
+
+ switch (mast & 0x00000c00) {
+ case 0x00000000: return clk->read(clk, nv_clk_src_href);
+ case 0x00000400: return clk->read(clk, nv_clk_src_hclkm4);
+ case 0x00000800: return clk->read(clk, nv_clk_src_hclkm2d3);
+ default: return 0;
+ }
+ case nv_clk_src_shader:
+ P = (nv_rd32(clk, 0x004020) & 0x00070000) >> 16;
+ switch (mast & 0x00000030) {
+ case 0x00000000:
+ if (mast & 0x00000040)
+ return clk->read(clk, nv_clk_src_href) >> P;
+ return clk->read(clk, nv_clk_src_crystal) >> P;
+ case 0x00000010: break;
+ case 0x00000020: return read_pll(clk, 0x004028) >> P;
+ case 0x00000030: return read_pll(clk, 0x004020) >> P;
+ }
+ break;
+	case nv_clk_src_mem:
+		return 0;
+ case nv_clk_src_vdec:
+ P = (read_div(clk) & 0x00000700) >> 8;
+
+		switch (mast & 0x00400000) {
+		case 0x00400000:
+			return clk->read(clk, nv_clk_src_core) >> P;
+		default:
+			return 500000 >> P;
+		}
+ default:
+ break;
+ }
+
+ nv_debug(priv, "unknown clock source %d 0x%08x\n", src, mast);
+ return 0;
+}
+
+static u32
+calc_pll(struct nvaa_clock_priv *priv, u32 reg,
+ u32 clock, int *N, int *M, int *P)
+{
+ struct nouveau_bios *bios = nouveau_bios(priv);
+ struct nvbios_pll pll;
+ struct nouveau_clock *clk = &priv->base;
+ int ret;
+
+ ret = nvbios_pll_parse(bios, reg, &pll);
+ if (ret)
+ return 0;
+
+ pll.vco2.max_freq = 0;
+ pll.refclk = clk->read(clk, nv_clk_src_href);
+ if (!pll.refclk)
+ return 0;
+
+ return nv04_pll_calc(nv_subdev(priv), &pll, clock, N, M, NULL, NULL, P);
+}
+
+static inline u32
+calc_P(u32 src, u32 target, int *div)
+{
+ u32 clk0 = src, clk1 = src;
+ for (*div = 0; *div <= 7; (*div)++) {
+ if (clk0 <= target) {
+ clk1 = clk0 << (*div ? 1 : 0);
+ break;
+ }
+ clk0 >>= 1;
+ }
+
+ if (target - clk0 <= clk1 - target)
+ return clk0;
+ (*div)--;
+ return clk1;
+}
+
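
To see what calc_P() above picks, trace calc_P(100000, 30000, &div): the source is halved to 50000 and then 25000, which at *div == 2 is the first value at or below the target, and clk1 is left at the previous step's 50000. Since 30000 - 25000 <= 50000 - 30000, the function returns 25000 with *div == 2, i.e. the closest achievable divided clock and its shift.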
+static int
+nvaa_clock_calc(struct nouveau_clock *clk, struct nouveau_cstate *cstate)
+{
+ struct nvaa_clock_priv *priv = (void *)clk;
+ const int shader = cstate->domain[nv_clk_src_shader];
+ const int core = cstate->domain[nv_clk_src_core];
+ const int vdec = cstate->domain[nv_clk_src_vdec];
+ u32 out = 0, clock = 0;
+ int N, M, P1, P2 = 0;
+ int divs = 0;
+
+ /* cclk: find suitable source, disable PLL if we can */
+ if (core < clk->read(clk, nv_clk_src_hclkm4))
+ out = calc_P(clk->read(clk, nv_clk_src_hclkm4), core, &divs);
+
+ /* Calculate clock * 2, so shader clock can use it too */
+ clock = calc_pll(priv, 0x4028, (core << 1), &N, &M, &P1);
+
+	if (abs(core - out) <= abs(core - (clock >> 1))) {
+ priv->csrc = nv_clk_src_hclkm4;
+ priv->cctrl = divs << 16;
+ } else {
+ /* NVCTRL is actually used _after_ NVPOST, and after what we
+ * call NVPLL. To make matters worse, NVPOST is an integer
+ * divider instead of a right-shift number. */
+		if (P1 > 2) {
+ P2 = P1 - 2;
+ P1 = 2;
+ }
+
+ priv->csrc = nv_clk_src_core;
+ priv->ccoef = (N << 8) | M;
+
+ priv->cctrl = (P2 + 1) << 16;
+ priv->cpost = (1 << P1) << 16;
+ }
+
+ /* sclk: nvpll + divisor, href or spll */
+ out = 0;
+ if (shader == clk->read(clk, nv_clk_src_href)) {
+ priv->ssrc = nv_clk_src_href;
+ } else {
+ clock = calc_pll(priv, 0x4020, shader, &N, &M, &P1);
+ if (priv->csrc == nv_clk_src_core) {
+ out = calc_P((core << 1), shader, &divs);
+ }
+
+		if (abs(shader - out) <= abs(shader - clock) &&
+		    (divs + P2) <= 7) {
+ priv->ssrc = nv_clk_src_core;
+ priv->sctrl = (divs + P2) << 16;
+ } else {
+ priv->ssrc = nv_clk_src_shader;
+ priv->scoef = (N << 8) | M;
+ priv->sctrl = P1 << 16;
+ }
+ }
+
+ /* vclk */
+ out = calc_P(core, vdec, &divs);
+ clock = calc_P(500000, vdec, &P1);
+	if (abs(vdec - out) <= abs(vdec - clock)) {
+ priv->vsrc = nv_clk_src_cclk;
+ priv->vdiv = divs << 16;
+ } else {
+ priv->vsrc = nv_clk_src_vdec;
+ priv->vdiv = P1 << 16;
+ }
+
+ /* Print strategy! */
+ nv_debug(priv, "nvpll: %08x %08x %08x\n",
+ priv->ccoef, priv->cpost, priv->cctrl);
+ nv_debug(priv, " spll: %08x %08x %08x\n",
+ priv->scoef, priv->spost, priv->sctrl);
+ nv_debug(priv, " vdiv: %08x\n", priv->vdiv);
+ if (priv->csrc == nv_clk_src_hclkm4)
+ nv_debug(priv, "core: hrefm4\n");
+ else
+ nv_debug(priv, "core: nvpll\n");
+
+	if (priv->ssrc == nv_clk_src_href)
+		nv_debug(priv, "shader: href\n");
+ else if (priv->ssrc == nv_clk_src_core)
+ nv_debug(priv, "shader: nvpll\n");
+ else
+ nv_debug(priv, "shader: spll\n");
+
+	if (priv->vsrc == nv_clk_src_vdec)
+ nv_debug(priv, "vdec: 500MHz\n");
+ else
+ nv_debug(priv, "vdec: core\n");
+
+ return 0;
+}
+
+static int
+nvaa_clock_prog(struct nouveau_clock *clk)
+{
+ struct nvaa_clock_priv *priv = (void *)clk;
+ struct nouveau_fifo *pfifo = nouveau_fifo(clk);
+ unsigned long flags;
+ u32 pllmask = 0, mast, ptherm_gate;
+ int ret = -EBUSY;
+
+ /* halt and idle execution engines */
+ ptherm_gate = nv_mask(clk, 0x020060, 0x00070000, 0x00000000);
+ nv_mask(clk, 0x002504, 0x00000001, 0x00000001);
+ /* Wait until the interrupt handler is finished */
+ if (!nv_wait(clk, 0x000100, 0xffffffff, 0x00000000))
+ goto resume;
+
+ if (pfifo)
+ pfifo->pause(pfifo, &flags);
+
+ if (!nv_wait(clk, 0x002504, 0x00000010, 0x00000010))
+ goto resume;
+ if (!nv_wait(clk, 0x00251c, 0x0000003f, 0x0000003f))
+ goto resume;
+
+ /* First switch to safe clocks: href */
+ mast = nv_mask(clk, 0xc054, 0x03400e70, 0x03400640);
+ mast &= ~0x00400e73;
+ mast |= 0x03000000;
+
+ switch (priv->csrc) {
+ case nv_clk_src_hclkm4:
+ nv_mask(clk, 0x4028, 0x00070000, priv->cctrl);
+ mast |= 0x00000002;
+ break;
+ case nv_clk_src_core:
+ nv_wr32(clk, 0x402c, priv->ccoef);
+ nv_wr32(clk, 0x4028, 0x80000000 | priv->cctrl);
+ nv_wr32(clk, 0x4040, priv->cpost);
+ pllmask |= (0x3 << 8);
+ mast |= 0x00000003;
+ break;
+ default:
+		nv_warn(priv, "Reclocking failed: unknown core clock\n");
+ goto resume;
+ }
+
+ switch (priv->ssrc) {
+ case nv_clk_src_href:
+ nv_mask(clk, 0x4020, 0x00070000, 0x00000000);
+ /* mast |= 0x00000000; */
+ break;
+ case nv_clk_src_core:
+ nv_mask(clk, 0x4020, 0x00070000, priv->sctrl);
+ mast |= 0x00000020;
+ break;
+ case nv_clk_src_shader:
+ nv_wr32(clk, 0x4024, priv->scoef);
+ nv_wr32(clk, 0x4020, 0x80000000 | priv->sctrl);
+ nv_wr32(clk, 0x4070, priv->spost);
+ pllmask |= (0x3 << 12);
+ mast |= 0x00000030;
+ break;
+ default:
+		nv_warn(priv, "Reclocking failed: unknown sclk clock\n");
+ goto resume;
+ }
+
+ if (!nv_wait(clk, 0x004080, pllmask, pllmask)) {
+		nv_warn(priv, "Reclocking failed: unstable PLLs\n");
+ goto resume;
+ }
+
+ switch (priv->vsrc) {
+ case nv_clk_src_cclk:
+ mast |= 0x00400000;
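+		/* fall through */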
+ default:
+ nv_wr32(clk, 0x4600, priv->vdiv);
+ }
+
+ nv_wr32(clk, 0xc054, mast);
+ ret = 0;
+
+resume:
+ if (pfifo)
+ pfifo->start(pfifo, &flags);
+
+ nv_mask(clk, 0x002504, 0x00000001, 0x00000000);
+ nv_wr32(clk, 0x020060, ptherm_gate);
+
+ /* Disable some PLLs and dividers when unused */
+ if (priv->csrc != nv_clk_src_core) {
+ nv_wr32(clk, 0x4040, 0x00000000);
+ nv_mask(clk, 0x4028, 0x80000000, 0x00000000);
+ }
+
+ if (priv->ssrc != nv_clk_src_shader) {
+ nv_wr32(clk, 0x4070, 0x00000000);
+ nv_mask(clk, 0x4020, 0x80000000, 0x00000000);
+ }
+
+ return ret;
+}
+
+static void
+nvaa_clock_tidy(struct nouveau_clock *clk)
+{
+}
+
+static struct nouveau_clocks
+nvaa_domains[] = {
+ { nv_clk_src_crystal, 0xff },
+ { nv_clk_src_href , 0xff },
+ { nv_clk_src_core , 0xff, 0, "core", 1000 },
+ { nv_clk_src_shader , 0xff, 0, "shader", 1000 },
+ { nv_clk_src_vdec , 0xff, 0, "vdec", 1000 },
+ { nv_clk_src_max }
+};
+
+static int
+nvaa_clock_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
+ struct nouveau_oclass *oclass, void *data, u32 size,
+ struct nouveau_object **pobject)
+{
+ struct nvaa_clock_priv *priv;
+ int ret;
+
+ ret = nouveau_clock_create(parent, engine, oclass, nvaa_domains, &priv);
+ *pobject = nv_object(priv);
+ if (ret)
+ return ret;
+
+ priv->base.read = nvaa_clock_read;
+ priv->base.calc = nvaa_clock_calc;
+ priv->base.prog = nvaa_clock_prog;
+ priv->base.tidy = nvaa_clock_tidy;
+ return 0;
+}
+
+struct nouveau_oclass *
+nvaa_clock_oclass = &(struct nouveau_oclass) {
+ .handle = NV_SUBDEV(CLOCK, 0xaa),
+ .ofuncs = &(struct nouveau_ofuncs) {
+ .ctor = nvaa_clock_ctor,
+ .dtor = _nouveau_clock_dtor,
+ .init = _nouveau_clock_init,
+ .fini = _nouveau_clock_fini,
+ },
+};
};
static uint32_t formats[] = {
- DRM_FORMAT_NV12,
DRM_FORMAT_UYVY,
+ DRM_FORMAT_NV12,
};
/* Sine can be approximated with
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
struct nouveau_bo *cur = nv_plane->cur;
bool flip = nv_plane->flip;
- int format = ALIGN(src_w * 4, 0x100);
int soff = NV_PCRTC0_SIZE * nv_crtc->index;
int soff2 = NV_PCRTC0_SIZE * !nv_crtc->index;
- int ret;
+ int format, ret;
+
+ /* Source parameters given in 16.16 fixed point, ignore fractional. */
+ src_x >>= 16;
+ src_y >>= 16;
+ src_w >>= 16;
+ src_h >>= 16;
+
+ format = ALIGN(src_w * 4, 0x100);
if (format > 0xffff)
- return -EINVAL;
+ return -ERANGE;
+
+ if (dev->chipset >= 0x30) {
+ if (crtc_w < (src_w >> 1) || crtc_h < (src_h >> 1))
+ return -ERANGE;
+ } else {
+ if (crtc_w < (src_w >> 3) || crtc_h < (src_h >> 3))
+ return -ERANGE;
+ }
ret = nouveau_bo_pin(nv_fb->nvbo, TTM_PL_FLAG_VRAM);
if (ret)
nv_plane->cur = nv_fb->nvbo;
- /* Source parameters given in 16.16 fixed point, ignore fractional. */
- src_x = src_x >> 16;
- src_y = src_y >> 16;
- src_w = src_w >> 16;
- src_h = src_h >> 16;
-
nv_mask(dev, NV_PCRTC_ENGINE_CTRL + soff, NV_CRTC_FSEL_OVERLAY, NV_CRTC_FSEL_OVERLAY);
nv_mask(dev, NV_PCRTC_ENGINE_CTRL + soff2, NV_CRTC_FSEL_OVERLAY, 0);
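As a quick aside on the 16.16 fixed-point convention referred to in the comment above (the width value here is made up for illustration):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* 16.16 fixed point: integer part in the high 16 bits. A width
         * of 1280.5 pixels arrives as 1280 * 65536 + 32768. */
        uint32_t src_w = (1280 << 16) | 0x8000;

        printf("integer: %u\n", src_w >> 16);           /* 1280 */
        printf("fraction: %u/65536\n", src_w & 0xffff); /* 32768 */
        return 0;
    }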
{
struct nouveau_device *dev = nouveau_dev(device);
struct nouveau_plane *plane = kzalloc(sizeof(struct nouveau_plane), GFP_KERNEL);
+ int num_formats = ARRAY_SIZE(formats);
int ret;
if (!plane)
return;
+ switch (dev->chipset) {
+ case 0x10:
+ case 0x11:
+ case 0x15:
+ case 0x1a:
+ case 0x20:
+ num_formats = 1;
+ break;
+ }
+
ret = drm_plane_init(device, &plane->base, 3 /* both crtc's */,
&nv10_plane_funcs,
- formats, ARRAY_SIZE(formats), false);
+ formats, num_formats, false);
if (ret)
goto err;
fence = nouveau_fence_ref(new_bo->bo.sync_obj);
spin_unlock(&new_bo->bo.bdev->fence_lock);
ret = nouveau_fence_sync(fence, chan);
+ nouveau_fence_unref(&fence);
if (ret)
return ret;
s = list_first_entry(&fctx->flip, struct nouveau_page_flip_state, head);
if (s->event)
- drm_send_vblank_event(dev, -1, s->event);
+ drm_send_vblank_event(dev, s->crtc, s->event);
list_del(&s->head);
if (ps)
uint32_t start, uint32_t size)
{
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
- u32 end = max(start + size, (u32)256);
+ u32 end = min_t(u32, start + size, 256);
u32 i;
for (i = start; i < end; i++) {
PROCESS_I2C_CHANNEL_TRANSACTION_PS_ALLOCATION args;
int index = GetIndexIntoMasterTable(COMMAND, ProcessI2cChannelTransaction);
unsigned char *base;
- u16 out;
+ u16 out = cpu_to_le16(0);
memset(&args, 0, sizeof(args));
DRM_ERROR("hw i2c: tried to write too many bytes (%d vs 3)\n", num);
return -EINVAL;
}
- args.ucRegIndex = buf[0];
- if (num > 1) {
+ if (buf == NULL)
+ args.ucRegIndex = 0;
+ else
+ args.ucRegIndex = buf[0];
+ if (num)
num--;
+ if (num)
memcpy(&out, &buf[1], num);
- }
args.lpI2CDataOut = cpu_to_le16(out);
} else {
if (num > ATOM_MAX_HW_I2C_READ) {
struct radeon_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
struct i2c_msg *p;
int i, remaining, current_count, buffer_offset, max_bytes, ret;
- u8 buf = 0, flags;
+ u8 flags;
/* check for bus probe */
p = &msgs[0];
if ((num == 1) && (p->len == 0)) {
ret = radeon_process_i2c_ch(i2c,
p->addr, HW_I2C_WRITE,
- &buf, 1);
+ NULL, 0);
if (ret)
return ret;
else
struct radeon_device *rdev = encoder->dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
- u32 offset = dig->afmt->offset;
+ u32 offset;
- if (!dig->afmt->pin)
+ if (!dig || !dig->afmt || !dig->afmt->pin)
return;
+ offset = dig->afmt->offset;
+
WREG32(AFMT_AUDIO_SRC_CONTROL + offset,
AFMT_AUDIO_SRC_SELECT(dig->afmt->pin->id));
}
struct radeon_connector *radeon_connector = NULL;
u32 tmp = 0, offset;
- if (!dig->afmt->pin)
+ if (!dig || !dig->afmt || !dig->afmt->pin)
return;
offset = dig->afmt->pin->offset;
u8 *sadb;
int sad_count;
- if (!dig->afmt->pin)
+ if (!dig || !dig->afmt || !dig->afmt->pin)
return;
offset = dig->afmt->pin->offset;
{ AZ_F0_CODEC_PIN_CONTROL_AUDIO_DESCRIPTOR13, HDMI_AUDIO_CODING_TYPE_WMA_PRO },
};
- if (!dig->afmt->pin)
+ if (!dig || !dig->afmt || !dig->afmt->pin)
return;
offset = dig->afmt->pin->offset;
struct ni_ps *ps = ni_get_ps(rps);
struct radeon_clock_and_voltage_limits *max_limits;
bool disable_mclk_switching;
- u32 mclk, sclk;
- u16 vddc, vddci;
+ u32 mclk;
+ u16 vddci;
u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
int i;
/* XXX validate the min clocks required for display */
+ /* adjust low state */
if (disable_mclk_switching) {
- mclk = ps->performance_levels[ps->performance_level_count - 1].mclk;
- sclk = ps->performance_levels[0].sclk;
- vddc = ps->performance_levels[0].vddc;
- vddci = ps->performance_levels[ps->performance_level_count - 1].vddci;
- } else {
- sclk = ps->performance_levels[0].sclk;
- mclk = ps->performance_levels[0].mclk;
- vddc = ps->performance_levels[0].vddc;
- vddci = ps->performance_levels[0].vddci;
+ ps->performance_levels[0].mclk =
+ ps->performance_levels[ps->performance_level_count - 1].mclk;
+ ps->performance_levels[0].vddci =
+ ps->performance_levels[ps->performance_level_count - 1].vddci;
}
- /* adjusted low state */
- ps->performance_levels[0].sclk = sclk;
- ps->performance_levels[0].mclk = mclk;
- ps->performance_levels[0].vddc = vddc;
- ps->performance_levels[0].vddci = vddci;
-
btc_skip_blacklist_clocks(rdev, max_limits->sclk, max_limits->mclk,
&ps->performance_levels[0].sclk,
&ps->performance_levels[0].mclk);
ps->performance_levels[i].vddc = ps->performance_levels[i - 1].vddc;
}
+ /* adjust remaining states */
if (disable_mclk_switching) {
mclk = ps->performance_levels[0].mclk;
+ vddci = ps->performance_levels[0].vddci;
for (i = 1; i < ps->performance_level_count; i++) {
if (mclk < ps->performance_levels[i].mclk)
mclk = ps->performance_levels[i].mclk;
+ if (vddci < ps->performance_levels[i].vddci)
+ vddci = ps->performance_levels[i].vddci;
}
for (i = 0; i < ps->performance_level_count; i++) {
ps->performance_levels[i].mclk = mclk;
WREG32(DCCG_AUDIO_DTO1_MODULE, dto_modulo);
WREG32(DCCG_AUDIO_DTO_SELECT, 1); /* select DTO1 */
}
- } else if (ASIC_IS_DCE3(rdev)) {
+ } else {
 		/* according to the reg specs, this should be DCE3.2 only, but in
- * practice it seems to cover DCE3.0/3.1 as well.
+ * practice it seems to cover DCE2.0/3.0/3.1 as well.
*/
if (dig->dig_encoder == 0) {
WREG32(DCCG_AUDIO_DTO0_PHASE, base_rate * 100);
WREG32(DCCG_AUDIO_DTO1_MODULE, clock * 100);
WREG32(DCCG_AUDIO_DTO_SELECT, 1); /* select DTO1 */
}
- } else {
- /* according to the reg specs, this should be DCE2.0 and DCE3.0/3.1 */
- WREG32(AUDIO_DTO, AUDIO_DTO_PHASE(base_rate / 10) |
- AUDIO_DTO_MODULE(clock / 10));
}
}
struct radeon_vm *vm,
struct radeon_fence *fence);
uint64_t radeon_vm_map_gart(struct radeon_device *rdev, uint64_t addr);
-int radeon_vm_bo_update_pte(struct radeon_device *rdev,
- struct radeon_vm *vm,
- struct radeon_bo *bo,
- struct ttm_mem_reg *mem);
+int radeon_vm_bo_update(struct radeon_device *rdev,
+ struct radeon_vm *vm,
+ struct radeon_bo *bo,
+ struct ttm_mem_reg *mem);
void radeon_vm_bo_invalidate(struct radeon_device *rdev,
struct radeon_bo *bo);
struct radeon_bo_va *radeon_vm_bo_find(struct radeon_vm *vm,
mpll_param->dll_speed = args.ucDllSpeed;
mpll_param->bwcntl = args.ucBWCntl;
mpll_param->vco_mode =
- (args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK) ? 1 : 0;
+ (args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
mpll_param->yclk_sel =
(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
mpll_param->qdr =
struct radeon_bo *bo;
int r;
- r = radeon_vm_bo_update_pte(rdev, vm, rdev->ring_tmp_bo.bo, &rdev->ring_tmp_bo.bo->tbo.mem);
+ r = radeon_vm_bo_update(rdev, vm, rdev->ring_tmp_bo.bo, &rdev->ring_tmp_bo.bo->tbo.mem);
if (r) {
return r;
}
list_for_each_entry(lobj, &parser->validated, tv.head) {
bo = lobj->bo;
- r = radeon_vm_bo_update_pte(parser->rdev, vm, bo, &bo->tbo.mem);
+ r = radeon_vm_bo_update(parser->rdev, vm, bo, &bo->tbo.mem);
if (r) {
return r;
}
* 1.31- Add support for num Z pipes from GET_PARAM
* 1.32- fixes for rv740 setup
* 1.33- Add r6xx/r7xx const buffer support
+ * 1.34- fix evergreen/cayman GS register
*/
#define DRIVER_MAJOR 1
-#define DRIVER_MINOR 33
+#define DRIVER_MINOR 34
#define DRIVER_PATCHLEVEL 0
long radeon_drm_ioctl(struct file *filp,
#include <drm/radeon_drm.h>
#include "radeon.h"
#include "radeon_reg.h"
+#include "radeon_trace.h"
/*
* GART
for (i = 0; i < 2; ++i) {
if (choices[i]) {
vm->id = choices[i];
+ trace_radeon_vm_grab_id(vm->id, ring);
return rdev->vm_manager.active[choices[i]];
}
}
}
/**
- * radeon_vm_bo_update_pte - map a bo into the vm page table
+ * radeon_vm_bo_update - map a bo into the vm page table
*
* @rdev: radeon_device pointer
* @vm: requested vm
*
* Object have to be reserved & global and local mutex must be locked!
*/
-int radeon_vm_bo_update_pte(struct radeon_device *rdev,
- struct radeon_vm *vm,
- struct radeon_bo *bo,
- struct ttm_mem_reg *mem)
+int radeon_vm_bo_update(struct radeon_device *rdev,
+ struct radeon_vm *vm,
+ struct radeon_bo *bo,
+ struct ttm_mem_reg *mem)
{
struct radeon_ib ib;
struct radeon_bo_va *bo_va;
bo_va->valid = false;
}
+ trace_radeon_vm_bo_update(bo_va);
+
nptes = radeon_bo_ngpu_pages(bo);
/* assume two extra pdes in case the mapping overlaps the borders */
mutex_lock(&rdev->vm_manager.lock);
mutex_lock(&bo_va->vm->mutex);
if (bo_va->soffset) {
- r = radeon_vm_bo_update_pte(rdev, bo_va->vm, bo_va->bo, NULL);
+ r = radeon_vm_bo_update(rdev, bo_va->vm, bo_va->bo, NULL);
}
mutex_unlock(&rdev->vm_manager.lock);
list_del(&bo_va->vm_list);
struct device_attribute *attr,
char *buf)
{
- struct drm_device *ddev = dev_get_drvdata(dev);
- struct radeon_device *rdev = ddev->dev_private;
+ struct radeon_device *rdev = dev_get_drvdata(dev);
int temp;
if (rdev->asic->pm.get_temperature)
return snprintf(buf, PAGE_SIZE, "%d\n", temp);
}
-static ssize_t radeon_hwmon_show_name(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- return sprintf(buf, "radeon\n");
-}
-
static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, radeon_hwmon_show_temp, NULL, 0);
static SENSOR_DEVICE_ATTR(temp1_crit, S_IRUGO, radeon_hwmon_show_temp_thresh, NULL, 0);
static SENSOR_DEVICE_ATTR(temp1_crit_hyst, S_IRUGO, radeon_hwmon_show_temp_thresh, NULL, 1);
-static SENSOR_DEVICE_ATTR(name, S_IRUGO, radeon_hwmon_show_name, NULL, 0);
static struct attribute *hwmon_attributes[] = {
&sensor_dev_attr_temp1_input.dev_attr.attr,
&sensor_dev_attr_temp1_crit.dev_attr.attr,
&sensor_dev_attr_temp1_crit_hyst.dev_attr.attr,
- &sensor_dev_attr_name.dev_attr.attr,
NULL
};
.is_visible = hwmon_attributes_visible,
};
+static const struct attribute_group *hwmon_groups[] = {
+ &hwmon_attrgroup,
+ NULL
+};
+
static int radeon_hwmon_init(struct radeon_device *rdev)
{
int err = 0;
-
- rdev->pm.int_hwmon_dev = NULL;
+ struct device *hwmon_dev;
switch (rdev->pm.int_thermal_type) {
case THERMAL_TYPE_RV6XX:
case THERMAL_TYPE_KV:
if (rdev->asic->pm.get_temperature == NULL)
return err;
- rdev->pm.int_hwmon_dev = hwmon_device_register(rdev->dev);
- if (IS_ERR(rdev->pm.int_hwmon_dev)) {
- err = PTR_ERR(rdev->pm.int_hwmon_dev);
+ hwmon_dev = hwmon_device_register_with_groups(rdev->dev,
+ "radeon", rdev,
+ hwmon_groups);
+ if (IS_ERR(hwmon_dev)) {
+ err = PTR_ERR(hwmon_dev);
dev_err(rdev->dev,
"Unable to register hwmon device: %d\n", err);
- break;
- }
- dev_set_drvdata(rdev->pm.int_hwmon_dev, rdev->ddev);
- err = sysfs_create_group(&rdev->pm.int_hwmon_dev->kobj,
- &hwmon_attrgroup);
- if (err) {
- dev_err(rdev->dev,
- "Unable to create hwmon sysfs file: %d\n", err);
- hwmon_device_unregister(rdev->dev);
}
break;
default:
return err;
}
-static void radeon_hwmon_fini(struct radeon_device *rdev)
-{
- if (rdev->pm.int_hwmon_dev) {
- sysfs_remove_group(&rdev->pm.int_hwmon_dev->kobj, &hwmon_attrgroup);
- hwmon_device_unregister(rdev->pm.int_hwmon_dev);
- }
-}
-
static void radeon_dpm_thermal_work_handler(struct work_struct *work)
{
struct radeon_device *rdev =
if (rdev->pm.power_state)
kfree(rdev->pm.power_state);
-
- radeon_hwmon_fini(rdev);
}
static void radeon_pm_fini_dpm(struct radeon_device *rdev)
if (rdev->pm.power_state)
kfree(rdev->pm.power_state);
-
- radeon_hwmon_fini(rdev);
}
void radeon_pm_fini(struct radeon_device *rdev)
__entry->fences)
);
+TRACE_EVENT(radeon_vm_grab_id,
+ TP_PROTO(unsigned vmid, int ring),
+ TP_ARGS(vmid, ring),
+ TP_STRUCT__entry(
+ __field(u32, vmid)
+ __field(u32, ring)
+ ),
+
+ TP_fast_assign(
+ __entry->vmid = vmid;
+ __entry->ring = ring;
+ ),
+ TP_printk("vmid=%u, ring=%u", __entry->vmid, __entry->ring)
+);
+
+TRACE_EVENT(radeon_vm_bo_update,
+ TP_PROTO(struct radeon_bo_va *bo_va),
+ TP_ARGS(bo_va),
+ TP_STRUCT__entry(
+ __field(u64, soffset)
+ __field(u64, eoffset)
+ __field(u32, flags)
+ ),
+
+ TP_fast_assign(
+ __entry->soffset = bo_va->soffset;
+ __entry->eoffset = bo_va->eoffset;
+ __entry->flags = bo_va->flags;
+ ),
+ TP_printk("soffs=%010llx, eoffs=%010llx, flags=%08x",
+ __entry->soffset, __entry->eoffset, __entry->flags)
+);
+
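Once these tracepoints are compiled in and enabled through the usual tracefs event interface, each event is rendered by its TP_printk() template above, producing lines of the form "vmid=1, ring=0" and "soffs=..., eoffs=..., flags=..." (values illustrative, not captured output).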
TRACE_EVENT(radeon_vm_set_page,
TP_PROTO(uint64_t pe, uint64_t addr, unsigned count,
uint32_t incr, uint32_t flags),
0x000089AC VGT_COMPUTE_THREAD_GOURP_SIZE
0x000089B0 VGT_HS_OFFCHIP_PARAM
0x00008A14 PA_CL_ENHANCE
-0x00008A60 PA_SC_LINE_STIPPLE_VALUE
+0x00008A60 PA_SU_LINE_STIPPLE_VALUE
0x00008B10 PA_SC_LINE_STIPPLE_STATE
0x00008BF0 PA_SC_ENHANCE
0x00008D8C SQ_DYN_GPR_CNTL_PS_FLUSH_REQ
0x00028B84 PA_SU_POLY_OFFSET_FRONT_OFFSET
0x00028B88 PA_SU_POLY_OFFSET_BACK_SCALE
0x00028B8C PA_SU_POLY_OFFSET_BACK_OFFSET
-0x00028B74 VGT_GS_INSTANCE_CNT
+0x00028B90 VGT_GS_INSTANCE_CNT
0x00028BD4 PA_SC_CENTROID_PRIORITY_0
0x00028BD8 PA_SC_CENTROID_PRIORITY_1
0x00028BDC PA_SC_LINE_CNTL
0x000089A4 VGT_COMPUTE_START_Z
0x000089AC VGT_COMPUTE_THREAD_GOURP_SIZE
0x00008A14 PA_CL_ENHANCE
-0x00008A60 PA_SC_LINE_STIPPLE_VALUE
+0x00008A60 PA_SU_LINE_STIPPLE_VALUE
0x00008B10 PA_SC_LINE_STIPPLE_STATE
0x00008BF0 PA_SC_ENHANCE
0x00008D8C SQ_DYN_GPR_CNTL_PS_FLUSH_REQ
0x00028B84 PA_SU_POLY_OFFSET_FRONT_OFFSET
0x00028B88 PA_SU_POLY_OFFSET_BACK_SCALE
0x00028B8C PA_SU_POLY_OFFSET_BACK_OFFSET
-0x00028B74 VGT_GS_INSTANCE_CNT
+0x00028B90 VGT_GS_INSTANCE_CNT
0x00028C00 PA_SC_LINE_CNTL
0x00028C08 PA_SU_VTX_CNTL
0x00028C0C PA_CL_GB_VERT_CLIP_ADJ
rdev->mc.aper_base = pci_resource_start(rdev->pdev, 0);
rdev->mc.aper_size = pci_resource_len(rdev->pdev, 0);
/* size in MB on si */
- rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE) * 1024ULL * 1024ULL;
- rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024ULL * 1024ULL;
+ tmp = RREG32(CONFIG_MEMSIZE);
+ /* some boards may have garbage in the upper 16 bits */
+ if (tmp & 0xffff0000) {
+ DRM_INFO("Probable bad vram size: 0x%08x\n", tmp);
+ if (tmp & 0xffff)
+ tmp &= 0xffff;
+ }
+ rdev->mc.mc_vram_size = tmp * 1024ULL * 1024ULL;
+ rdev->mc.real_vram_size = rdev->mc.mc_vram_size;
rdev->mc.visible_vram_size = rdev->mc.aper_size;
si_vram_gtt_location(rdev, &rdev->mc);
radeon_update_bandwidth_info(rdev);
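A worked sketch of the garbage check (the raw readback value is invented): 0x00100400 has junk in the upper half but a plausible 0x0400 (1024 MB) in the lower half, so it is masked; a value like 0x04000000 would only be reported, since masking it would leave zero.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t tmp = 0x00100400; /* hypothetical CONFIG_MEMSIZE readback */

        if (tmp & 0xffff0000) {         /* garbage in the upper half? */
            printf("Probable bad vram size: 0x%08x\n", tmp);
            if (tmp & 0xffff)           /* keep the lower half if non-zero */
                tmp &= 0xffff;
        }
        printf("%u MB = %llu bytes\n", tmp,
               (unsigned long long)tmp * 1024 * 1024);
        return 0;
    }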
unsigned int num_relocs = args->num_relocs;
unsigned int num_waitchks = args->num_waitchks;
struct drm_tegra_cmdbuf __user *cmdbufs =
- (void * __user)(uintptr_t)args->cmdbufs;
+ (void __user *)(uintptr_t)args->cmdbufs;
struct drm_tegra_reloc __user *relocs =
- (void * __user)(uintptr_t)args->relocs;
+ (void __user *)(uintptr_t)args->relocs;
struct drm_tegra_waitchk __user *waitchks =
- (void * __user)(uintptr_t)args->waitchks;
+ (void __user *)(uintptr_t)args->waitchks;
struct drm_tegra_syncpt syncpt;
struct host1x_job *job;
int err;
struct drm_tegra_cmdbuf cmdbuf;
struct host1x_bo *bo;
- err = copy_from_user(&cmdbuf, cmdbufs, sizeof(cmdbuf));
- if (err)
+ if (copy_from_user(&cmdbuf, cmdbufs, sizeof(cmdbuf))) {
+ err = -EFAULT;
goto fail;
+ }
bo = host1x_bo_lookup(drm, file, cmdbuf.handle);
if (!bo) {
cmdbufs++;
}
- err = copy_from_user(job->relocarray, relocs,
- sizeof(*relocs) * num_relocs);
- if (err)
+ if (copy_from_user(job->relocarray, relocs,
+ sizeof(*relocs) * num_relocs)) {
+ err = -EFAULT;
goto fail;
+ }
while (num_relocs--) {
struct host1x_reloc *reloc = &job->relocarray[num_relocs];
}
}
- err = copy_from_user(job->waitchk, waitchks,
- sizeof(*waitchks) * num_waitchks);
- if (err)
+ if (copy_from_user(job->waitchk, waitchks,
+ sizeof(*waitchks) * num_waitchks)) {
+ err = -EFAULT;
goto fail;
+ }
- err = copy_from_user(&syncpt, (void * __user)(uintptr_t)args->syncpts,
- sizeof(syncpt));
- if (err)
+ if (copy_from_user(&syncpt, (void __user *)(uintptr_t)args->syncpts,
+ sizeof(syncpt))) {
+ err = -EFAULT;
goto fail;
+ }
job->is_addr_reg = context->client->ops->is_addr_reg;
job->syncpt_incrs = syncpt.incrs;
}
#endif
-struct drm_driver tegra_drm_driver = {
+static struct drm_driver tegra_drm_driver = {
.driver_features = DRIVER_MODESET | DRIVER_GEM,
.load = tegra_drm_load,
.unload = tegra_drm_unload,
static inline struct tegra_dc *to_tegra_dc(struct drm_crtc *crtc)
{
- return container_of(crtc, struct tegra_dc, base);
+ return crtc ? container_of(crtc, struct tegra_dc, base) : NULL;
}
static inline void tegra_dc_writel(struct tegra_dc *dc, unsigned long value,
info->var.yoffset * fb->pitches[0];
drm->mode_config.fb_base = (resource_size_t)bo->paddr;
- info->screen_base = bo->vaddr + offset;
+ info->screen_base = (void __iomem *)bo->vaddr + offset;
info->screen_size = size;
info->fix.smem_start = (unsigned long)(bo->paddr + offset);
info->fix.smem_len = size;
struct tegra_rgb {
struct tegra_output output;
+ struct tegra_dc *dc;
+
struct clk *clk_parent;
struct clk *clk;
};
static int tegra_output_rgb_enable(struct tegra_output *output)
{
- struct tegra_dc *dc = to_tegra_dc(output->encoder.crtc);
+ struct tegra_rgb *rgb = to_rgb(output);
- tegra_dc_write_regs(dc, rgb_enable, ARRAY_SIZE(rgb_enable));
+ tegra_dc_write_regs(rgb->dc, rgb_enable, ARRAY_SIZE(rgb_enable));
return 0;
}
static int tegra_output_rgb_disable(struct tegra_output *output)
{
- struct tegra_dc *dc = to_tegra_dc(output->encoder.crtc);
+ struct tegra_rgb *rgb = to_rgb(output);
- tegra_dc_write_regs(dc, rgb_disable, ARRAY_SIZE(rgb_disable));
+ tegra_dc_write_regs(rgb->dc, rgb_disable, ARRAY_SIZE(rgb_disable));
return 0;
}
rgb->output.dev = dc->dev;
rgb->output.of_node = np;
+ rgb->dc = dc;
err = tegra_output_probe(&rgb->output);
if (err < 0)
static void udl_gem_put_pages(struct udl_gem_object *obj)
{
+ if (obj->base.import_attach) {
+ drm_free_large(obj->pages);
+ obj->pages = NULL;
+ return;
+ }
+
drm_gem_put_pages(&obj->base, obj->pages, false, false);
obj->pages = NULL;
}
bool mapped;
};
+const size_t vmw_tt_size = sizeof(struct vmw_ttm_tt);
+
/**
* Helper functions to advance a struct vmw_piter iterator.
*
* TTM buffer object driver - vmwgfx_buffer.c
*/
+extern const size_t vmw_tt_size;
extern struct ttm_placement vmw_vram_placement;
extern struct ttm_placement vmw_vram_ne_placement;
extern struct ttm_placement vmw_vram_sys_placement;
vmw_surface_unreference(&du->cursor_surface);
if (du->cursor_dmabuf)
vmw_dmabuf_unreference(&du->cursor_dmabuf);
+ drm_sysfs_connector_remove(&du->connector);
drm_crtc_cleanup(&du->crtc);
drm_encoder_cleanup(&du->encoder);
drm_connector_cleanup(&du->connector);
connector->encoder = NULL;
encoder->crtc = NULL;
crtc->fb = NULL;
+ crtc->enabled = false;
vmw_ldu_del_active(dev_priv, ldu);
crtc->x = set->x;
crtc->y = set->y;
crtc->mode = *mode;
+ crtc->enabled = true;
vmw_ldu_add_active(dev_priv, ldu, vfb);
encoder->possible_crtcs = (1 << unit);
encoder->possible_clones = 0;
+ (void) drm_sysfs_connector_add(connector);
+
drm_crtc_init(dev, crtc, &vmw_legacy_crtc_funcs);
drm_mode_crtc_set_gamma_size(crtc, 256);
/**
* Buffer management.
*/
+
+/**
+ * vmw_dmabuf_acc_size - Calculate the pinned memory usage of buffers
+ *
+ * @dev_priv: Pointer to a struct vmw_private identifying the device.
+ * @size: The requested buffer size.
+ * @user: Whether this is an ordinary dma buffer or a user dma buffer.
+ */
+static size_t vmw_dmabuf_acc_size(struct vmw_private *dev_priv, size_t size,
+ bool user)
+{
+ static size_t struct_size, user_struct_size;
+ size_t num_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ size_t page_array_size = ttm_round_pot(num_pages * sizeof(void *));
+
+ if (unlikely(struct_size == 0)) {
+ size_t backend_size = ttm_round_pot(vmw_tt_size);
+
+ struct_size = backend_size +
+ ttm_round_pot(sizeof(struct vmw_dma_buffer));
+ user_struct_size = backend_size +
+ ttm_round_pot(sizeof(struct vmw_user_dma_buffer));
+ }
+
+ if (dev_priv->map_mode == vmw_dma_alloc_coherent)
+ page_array_size +=
+ ttm_round_pot(num_pages * sizeof(dma_addr_t));
+
+ return ((user) ? user_struct_size : struct_size) +
+ page_array_size;
+}
+
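For a rough feel of the numbers (hypothetical request, ignoring the ttm_round_pot() rounding and the fixed struct sizes): a 1 MiB buffer on a 64-bit host with 4 KiB pages needs 256 page pointers, about 2 KiB of page-array accounting, plus another 256 dma_addr_t entries (2 KiB more) when map_mode is vmw_dma_alloc_coherent.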
void vmw_dmabuf_bo_free(struct ttm_buffer_object *bo)
{
struct vmw_dma_buffer *vmw_bo = vmw_dma_buffer(bo);
kfree(vmw_bo);
}
+static void vmw_user_dmabuf_destroy(struct ttm_buffer_object *bo)
+{
+ struct vmw_user_dma_buffer *vmw_user_bo = vmw_user_dma_buffer(bo);
+
+ ttm_prime_object_kfree(vmw_user_bo, prime);
+}
+
int vmw_dmabuf_init(struct vmw_private *dev_priv,
struct vmw_dma_buffer *vmw_bo,
size_t size, struct ttm_placement *placement,
struct ttm_bo_device *bdev = &dev_priv->bdev;
size_t acc_size;
int ret;
+ bool user = (bo_free == &vmw_user_dmabuf_destroy);
- BUG_ON(!bo_free);
+ BUG_ON(!bo_free && (!user && (bo_free != vmw_dmabuf_bo_free)));
- acc_size = ttm_bo_acc_size(bdev, size, sizeof(struct vmw_dma_buffer));
+ acc_size = vmw_dmabuf_acc_size(dev_priv, size, user);
memset(vmw_bo, 0, sizeof(*vmw_bo));
INIT_LIST_HEAD(&vmw_bo->res_list);
ret = ttm_bo_init(bdev, &vmw_bo->base, size,
- ttm_bo_type_device, placement,
+ (user) ? ttm_bo_type_device :
+ ttm_bo_type_kernel, placement,
0, interruptible,
NULL, acc_size, NULL, bo_free);
return ret;
}
-static void vmw_user_dmabuf_destroy(struct ttm_buffer_object *bo)
-{
- struct vmw_user_dma_buffer *vmw_user_bo = vmw_user_dma_buffer(bo);
-
- ttm_prime_object_kfree(vmw_user_bo, prime);
-}
-
static void vmw_user_dmabuf_release(struct ttm_base_object **p_base)
{
struct vmw_user_dma_buffer *vmw_user_bo;
}
+/**
+ * vmw_dumb_create - Create a dumb kms buffer
+ *
+ * @file_priv: Pointer to a struct drm_file identifying the caller.
+ * @dev: Pointer to the drm device.
+ * @args: Pointer to a struct drm_mode_create_dumb structure
+ *
+ * This is a driver callback for the core drm create_dumb functionality.
+ * Note that this is very similar to the vmw_dmabuf_alloc ioctl, except
+ * that the arguments have a different format.
+ */
int vmw_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
struct vmw_private *dev_priv = vmw_priv(dev);
struct vmw_master *vmaster = vmw_master(file_priv->master);
- struct vmw_user_dma_buffer *vmw_user_bo;
- struct ttm_buffer_object *tmp;
+ struct vmw_dma_buffer *dma_buf;
int ret;
args->pitch = args->width * ((args->bpp + 7) / 8);
args->size = args->pitch * args->height;
- vmw_user_bo = kzalloc(sizeof(*vmw_user_bo), GFP_KERNEL);
- if (vmw_user_bo == NULL)
- return -ENOMEM;
-
ret = ttm_read_lock(&vmaster->lock, true);
- if (ret != 0) {
- kfree(vmw_user_bo);
+ if (unlikely(ret != 0))
return ret;
- }
- ret = vmw_dmabuf_init(dev_priv, &vmw_user_bo->dma, args->size,
- &vmw_vram_sys_placement, true,
- &vmw_user_dmabuf_destroy);
- if (ret != 0)
- goto out_no_dmabuf;
-
- tmp = ttm_bo_reference(&vmw_user_bo->dma.base);
- ret = ttm_prime_object_init(vmw_fpriv(file_priv)->tfile,
- args->size,
- &vmw_user_bo->prime,
- false,
- ttm_buffer_type,
- &vmw_user_dmabuf_release, NULL);
+ ret = vmw_user_dmabuf_alloc(dev_priv, vmw_fpriv(file_priv)->tfile,
+ args->size, false, &args->handle,
+ &dma_buf);
if (unlikely(ret != 0))
- goto out_no_base_object;
-
- args->handle = vmw_user_bo->prime.base.hash.key;
+ goto out_no_dmabuf;
-out_no_base_object:
- ttm_bo_unref(&tmp);
+ vmw_dmabuf_unreference(&dma_buf);
out_no_dmabuf:
ttm_read_unlock(&vmaster->lock);
return ret;
}
+/**
+ * vmw_dumb_map_offset - Return the address space offset of a dumb buffer
+ *
+ * @file_priv: Pointer to a struct drm_file identifying the caller.
+ * @dev: Pointer to the drm device.
+ * @handle: Handle identifying the dumb buffer.
+ * @offset: The address space offset returned.
+ *
+ * This is a driver callback for the core drm dumb_map_offset functionality.
+ */
int vmw_dumb_map_offset(struct drm_file *file_priv,
struct drm_device *dev, uint32_t handle,
uint64_t *offset)
return 0;
}
+/**
+ * vmw_dumb_destroy - Destroy a dumb buffer
+ *
+ * @file_priv: Pointer to a struct drm_file identifying the caller.
+ * @dev: Pointer to the drm device.
+ * @handle: Handle identifying the dumb buffer.
+ *
+ * This is a driver callback for the core drm dumb_destroy functionality.
+ */
int vmw_dumb_destroy(struct drm_file *file_priv,
struct drm_device *dev,
uint32_t handle)
crtc->fb = NULL;
crtc->x = 0;
crtc->y = 0;
+ crtc->enabled = false;
vmw_sou_del_active(dev_priv, sou);
crtc->fb = NULL;
crtc->x = 0;
crtc->y = 0;
+ crtc->enabled = false;
return ret;
}
crtc->fb = fb;
crtc->x = set->x;
crtc->y = set->y;
+ crtc->enabled = true;
return 0;
}
encoder->possible_crtcs = (1 << unit);
encoder->possible_clones = 0;
+ (void) drm_sysfs_connector_add(connector);
+
drm_crtc_init(dev, crtc, &vmw_screen_object_crtc_funcs);
drm_mode_crtc_set_gamma_size(crtc, 256);
#include <linux/of.h>
#include <linux/slab.h>
+#include "bus.h"
#include "dev.h"
static DEFINE_MUTEX(clients_lock);
return -ENODEV;
}
-struct bus_type host1x_bus_type = {
+static struct bus_type host1x_bus_type = {
.name = "host1x",
};
device->dev.coherent_dma_mask = host1x->dev->coherent_dma_mask;
device->dev.dma_mask = &device->dev.coherent_dma_mask;
device->dev.release = host1x_device_release;
- dev_set_name(&device->dev, driver->name);
+ dev_set_name(&device->dev, "%s", driver->name);
device->dev.bus = &host1x_bus_type;
device->dev.parent = host1x->dev;
u32 *p = (u32 *)((u32)pb->mapped + getptr);
*(p++) = HOST1X_OPCODE_NOP;
*(p++) = HOST1X_OPCODE_NOP;
- dev_dbg(host1x->dev, "%s: NOP at 0x%x\n", __func__,
- pb->phys + getptr);
+ dev_dbg(host1x->dev, "%s: NOP at %#llx\n", __func__,
+ (u64)pb->phys + getptr);
getptr = (getptr + 8) & (pb->size_bytes - 1);
}
wmb();
continue;
}
- host1x_debug_output(o, " GATHER at %08x+%04x, %d words\n",
- g->base, g->offset, g->words);
+ host1x_debug_output(o, " GATHER at %#llx+%04x, %d words\n",
+ (u64)g->base, g->offset, g->words);
show_gather(o, g->base + g->offset, g->words, cdma,
g->base, mapped);
case USB_DEVICE_ID_GENIUS_GX_IMPERATOR:
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 83,
"Genius Gx Imperator Keyboard");
+ break;
case USB_DEVICE_ID_GENIUS_MANTICORE:
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104,
"Genius Manticore Keyboard");
static void sensor_hub_fill_attr_info(
struct hid_sensor_hub_attribute_info *info,
- s32 index, s32 report_id, s32 units, s32 unit_expo, s32 size)
+ s32 index, s32 report_id, struct hid_field *field)
{
info->index = index;
info->report_id = report_id;
- info->units = units;
- info->unit_expo = unit_expo;
- info->size = size/8;
+ info->units = field->unit;
+ info->unit_expo = field->unit_exponent;
+ info->size = (field->report_size * field->report_count)/8;
+ info->logical_minimum = field->logical_minimum;
+ info->logical_maximum = field->logical_maximum;
}
static struct hid_sensor_hub_callbacks *sensor_hub_get_callback(
if (field->physical == usage_id &&
field->logical == attr_usage_id) {
sensor_hub_fill_attr_info(info, i, report->id,
- field->unit, field->unit_exponent,
- field->report_size *
- field->report_count);
+ field);
ret = 0;
} else {
for (j = 0; j < field->maxusage; ++j) {
field->usage[j].collection_index ==
collection_index) {
sensor_hub_fill_attr_info(info,
- i, report->id,
- field->unit,
- field->unit_exponent,
- field->report_size *
- field->report_count);
+ i, report->id, field);
ret = 0;
break;
}
ret = -ENOMEM;
goto err_free_names;
}
+ sd->hid_sensor_hub_client_devs[
+ sd->hid_sensor_client_cnt].id = PLATFORM_DEVID_AUTO;
sd->hid_sensor_hub_client_devs[
sd->hid_sensor_client_cnt].name = name;
sd->hid_sensor_hub_client_devs[
* @last_update: time of last update (jiffies)
* @temperature: cached temperature measurement value
* @humidity: cached humidity measurement value
+ * @write_length: length for I2C measurement request
*/
struct hih6130 {
struct device *hwmon_dev;
unsigned long last_update;
int temperature;
int humidity;
+ size_t write_length;
};
/**
*/
if (time_after(jiffies, hih6130->last_update + HZ) || !hih6130->valid) {
- /* write to slave address, no data, to request a measurement */
- ret = i2c_master_send(client, tmp, 0);
+ /*
+ * Write to slave address to request a measurement.
+	 * According to the datasheet it should carry no data, but
+ * for systems with I2C bus drivers that do not allow zero
+ * length packets we write one dummy byte to allow sensor
+ * measurements on them.
+ */
+ tmp[0] = 0;
+ ret = i2c_master_send(client, tmp, hih6130->write_length);
if (ret < 0)
goto out;
goto fail_remove_sysfs;
}
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_QUICK))
+ hih6130->write_length = 1;
+
return 0;
fail_remove_sysfs:
{
if (rpm <= 0)
return 255;
+ if (rpm > 1350000)
+ return 1;
return clamp_val((1350000 + rpm * div / 2) / (rpm * div), 1, 254);
}
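A worked sketch of the conversion and the new upper-bound guard (standalone copy; the RPM values are made up). Any reading above 1350000 RPM would clamp to the minimum register value anyway, and returning early keeps rpm * div from overflowing on absurd user input:

    #include <stdio.h>

    /* Standalone copy of the conversion above: the tach register holds
     * 1350000 / (rpm * div), rounded to nearest and clamped to 1..254. */
    static unsigned char fan_to_reg(long rpm, int div)
    {
        long val;

        if (rpm <= 0)
            return 255;
        if (rpm > 1350000)      /* guard against overflow of rpm * div */
            return 1;
        val = (1350000 + rpm * div / 2) / (rpm * div);
        return val < 1 ? 1 : val > 254 ? 254 : val;
    }

    int main(void)
    {
        printf("%u\n", fan_to_reg(2812, 2));    /* 1350000/5624 -> 240 */
        printf("%u\n", fan_to_reg(2000000, 4)); /* early return: 1 */
        return 0;
    }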
"lm90", client);
if (err < 0) {
dev_err(dev, "cannot request IRQ %d\n", client->irq);
- goto exit_remove_files;
+ goto exit_unregister;
}
}
return 0;
+exit_unregister:
+ hwmon_device_unregister(data->hwmon_dev);
exit_remove_files:
lm90_remove_files(client, data);
exit_restore:
{
if (rpm <= 0)
return 255;
+ if (rpm > 1350000)
+ return 1;
return clamp_val((1350000 + rpm * div / 2) / (rpm * div), 1, 254);
}
*/
static inline u8 FAN_TO_REG(long rpm, int div)
{
- if (rpm == 0)
+ if (rpm <= 0 || rpm > 1310720)
return 0;
return clamp_val(1310720 / (rpm * div), 1, 255);
}
if (err)
return err;
val = clamp_val(val, 0, 255);
+ val = DIV_ROUND_CLOSEST(val, 0x11);
mutex_lock(&data->update_lock);
- data->pwm[nr] = val;
+ data->pwm[nr] = val * 0x11;
+ val |= w83l786ng_read_value(client, W83L786NG_REG_PWM[nr]) & 0xf0;
w83l786ng_write_value(client, W83L786NG_REG_PWM[nr], val);
mutex_unlock(&data->update_lock);
return count;
mutex_lock(&data->update_lock);
reg = w83l786ng_read_value(client, W83L786NG_REG_FAN_CFG);
data->pwm_enable[nr] = val;
- reg &= ~(0x02 << W83L786NG_PWM_ENABLE_SHIFT[nr]);
+ reg &= ~(0x03 << W83L786NG_PWM_ENABLE_SHIFT[nr]);
reg |= (val - 1) << W83L786NG_PWM_ENABLE_SHIFT[nr];
w83l786ng_write_value(client, W83L786NG_REG_FAN_CFG, reg);
mutex_unlock(&data->update_lock);
((pwmcfg >> W83L786NG_PWM_MODE_SHIFT[i]) & 1)
? 0 : 1;
data->pwm_enable[i] =
- ((pwmcfg >> W83L786NG_PWM_ENABLE_SHIFT[i]) & 2) + 1;
- data->pwm[i] = w83l786ng_read_value(client,
- W83L786NG_REG_PWM[i]);
+ ((pwmcfg >> W83L786NG_PWM_ENABLE_SHIFT[i]) & 3) + 1;
+ data->pwm[i] =
+ (w83l786ng_read_value(client, W83L786NG_REG_PWM[i])
+ & 0x0f) * 0x11;
}
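The 0x11 scaling above maps the chip's 4-bit duty field onto the 0-255 sysfs range (0x11 = 17, and 15 * 17 = 255). A quick round-trip demonstration:

    #include <stdio.h>

    int main(void)
    {
        int val;

        /* Writes round the 0-255 user value to the nearest 4-bit step
         * (DIV_ROUND_CLOSEST(val, 0x11)); reads scale back up by 0x11. */
        for (val = 0; val <= 255; val += 51) {
            int nibble = (val + 0x11 / 2) / 0x11;
            printf("user %3d -> reg 0x%x -> readback %3d\n",
                   val, nibble, nibble * 0x11);
        }
        return 0;
    }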
dev_dbg(&i2c_imx->adapter.dev, "<%s>\n", __func__);
- clk_prepare_enable(i2c_imx->clk);
+ result = clk_prepare_enable(i2c_imx->clk);
+ if (result)
+ return result;
imx_i2c_write_reg(i2c_imx->ifdr, i2c_imx, IMX_I2C_IFDR);
/* Enable I2C controller */
imx_i2c_write_reg(i2c_imx->hwdata->i2sr_clr_opcode, i2c_imx, IMX_I2C_I2SR);
priv->adap.algo = &priv->algo;
priv->adap.algo_data = priv;
priv->adap.dev.parent = &parent->dev;
+ priv->adap.retries = parent->retries;
+ priv->adap.timeout = parent->timeout;
/* Sanity check on class */
if (i2c_mux_parent_classes(parent) & class)
{
.enter = NULL }
};
-static struct cpuidle_state avn_cstates[CPUIDLE_STATE_MAX] = {
+static struct cpuidle_state avn_cstates[] __initdata = {
{
.name = "C1-AVN",
.desc = "MWAIT 0x00",
{
.name = "C6-AVN",
.desc = "MWAIT 0x51",
- .flags = MWAIT2flg(0x58) | CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED,
+ .flags = MWAIT2flg(0x51) | CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED,
.exit_latency = 15,
.target_residency = 45,
.enter = &intel_idle },
If this driver is compiled as a module, it will be named
hid-sensor-trigger.
-config HID_SENSOR_ENUM_BASE_QUIRKS
- bool "ENUM base quirks for HID Sensor IIO drivers"
- depends on HID_SENSOR_IIO_COMMON
- help
- Say yes here to build support for sensor hub FW using
- enumeration, which is using 1 as base instead of 0.
- Since logical minimum is still set 0 instead of 1,
- there is no easy way to differentiate.
-
endmenu
{
struct hid_sensor_common *st = iio_trigger_get_drvdata(trig);
int state_val;
+ int report_val;
if (state) {
if (sensor_hub_device_open(st->hsdev))
return -EIO;
- } else
+ state_val =
+ HID_USAGE_SENSOR_PROP_POWER_STATE_D0_FULL_POWER_ENUM;
+ report_val =
+ HID_USAGE_SENSOR_PROP_REPORTING_STATE_ALL_EVENTS_ENUM;
+
+ } else {
sensor_hub_device_close(st->hsdev);
+ state_val =
+ HID_USAGE_SENSOR_PROP_POWER_STATE_D4_POWER_OFF_ENUM;
+ report_val =
+ HID_USAGE_SENSOR_PROP_REPORTING_STATE_NO_EVENTS_ENUM;
+ }
- state_val = state ? 1 : 0;
- if (IS_ENABLED(CONFIG_HID_SENSOR_ENUM_BASE_QUIRKS))
- ++state_val;
st->data_ready = state;
+ state_val += st->power_state.logical_minimum;
+ report_val += st->report_state.logical_minimum;
sensor_hub_set_feature(st->hsdev, st->power_state.report_id,
st->power_state.index,
(s32)state_val);
sensor_hub_set_feature(st->hsdev, st->report_state.report_id,
st->report_state.index,
- (s32)state_val);
+ (s32)report_val);
return 0;
}
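The rebasing above replaces the removed HID_SENSOR_ENUM_BASE_QUIRKS option: instead of a compile-time guess, the enum value is offset by the logical_minimum reported for each field, so firmware using 1-based power/reporting-state enums (logical_minimum = 1) gets the +1 automatically while 0-based firmware is left alone.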
depends on I2C
select IIO_BUFFER
select IIO_TRIGGERED_BUFFER
+ select IRQ_WORK
help
Say Y here if you have a Sharp GP2AP020A00F proximity/ALS combo-chip
hooked to an I2C bus.
__set_bit(EV_REP, input->evbit);
for (i = 0; i < input->keycodemax; i++)
- __set_bit(kpad->keycode[i] & KEY_MAX, input->keybit);
+ if (kpad->keycode[i] <= KEY_MAX)
+ __set_bit(kpad->keycode[i], input->keybit);
__clear_bit(KEY_RESERVED, input->keybit);
if (kpad->gpimapsize)
__set_bit(EV_REP, input->evbit);
for (i = 0; i < input->keycodemax; i++)
- __set_bit(kpad->keycode[i] & KEY_MAX, input->keybit);
+ if (kpad->keycode[i] <= KEY_MAX)
+ __set_bit(kpad->keycode[i], input->keybit);
__clear_bit(KEY_RESERVED, input->keybit);
if (kpad->gpimapsize)
__set_bit(EV_REP, input->evbit);
for (i = 0; i < input->keycodemax; i++)
- __set_bit(bf54x_kpad->keycode[i] & KEY_MAX, input->keybit);
+ if (bf54x_kpad->keycode[i] <= KEY_MAX)
+ __set_bit(bf54x_kpad->keycode[i], input->keybit);
__clear_bit(KEY_RESERVED, input->keybit);
error = input_register_device(input);
/* ORIENT ADXL346 only */
#define ADXL346_2D_VALID (1 << 6)
-#define ADXL346_2D_ORIENT(x) (((x) & 0x3) >> 4)
+#define ADXL346_2D_ORIENT(x) (((x) & 0x30) >> 4)
#define ADXL346_3D_VALID (1 << 3)
#define ADXL346_3D_ORIENT(x) ((x) & 0x7)
#define ADXL346_2D_PORTRAIT_POS 0 /* +X */
idev->keycodemax = ARRAY_SIZE(lp->btncode);
for (i = 0; i < ARRAY_SIZE(pcf8574_kp_btncode); i++) {
- lp->btncode[i] = pcf8574_kp_btncode[i];
- __set_bit(lp->btncode[i] & KEY_MAX, idev->keybit);
+		if (pcf8574_kp_btncode[i] <= KEY_MAX) {
+ lp->btncode[i] = pcf8574_kp_btncode[i];
+ __set_bit(lp->btncode[i], idev->keybit);
+ }
}
+ __clear_bit(KEY_RESERVED, idev->keybit);
sprintf(lp->name, DRV_NAME);
sprintf(lp->phys, "kp_data/input0");
{ PSMOUSE_CMD_SETSCALE11, 0x00 }, /* f */
};
+static const struct alps_nibble_commands alps_v6_nibble_commands[] = {
+ { PSMOUSE_CMD_ENABLE, 0x00 }, /* 0 */
+ { PSMOUSE_CMD_SETRATE, 0x0a }, /* 1 */
+ { PSMOUSE_CMD_SETRATE, 0x14 }, /* 2 */
+ { PSMOUSE_CMD_SETRATE, 0x28 }, /* 3 */
+ { PSMOUSE_CMD_SETRATE, 0x3c }, /* 4 */
+ { PSMOUSE_CMD_SETRATE, 0x50 }, /* 5 */
+ { PSMOUSE_CMD_SETRATE, 0x64 }, /* 6 */
+ { PSMOUSE_CMD_SETRATE, 0xc8 }, /* 7 */
+ { PSMOUSE_CMD_GETID, 0x00 }, /* 8 */
+ { PSMOUSE_CMD_GETINFO, 0x00 }, /* 9 */
+ { PSMOUSE_CMD_SETRES, 0x00 }, /* a */
+ { PSMOUSE_CMD_SETRES, 0x01 }, /* b */
+ { PSMOUSE_CMD_SETRES, 0x02 }, /* c */
+ { PSMOUSE_CMD_SETRES, 0x03 }, /* d */
+ { PSMOUSE_CMD_SETSCALE21, 0x00 }, /* e */
+ { PSMOUSE_CMD_SETSCALE11, 0x00 }, /* f */
+};
+
#define ALPS_DUALPOINT 0x02 /* touchpad has trackstick */
#define ALPS_PASS 0x04 /* device has a pass-through port */
/* Dell Latitude E5500, E6400, E6500, Precision M4400 */
{ { 0x62, 0x02, 0x14 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf,
ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED },
+ { { 0x73, 0x00, 0x14 }, 0x00, ALPS_PROTO_V6, 0xff, 0xff, ALPS_DUALPOINT }, /* Dell XT2 */
{ { 0x73, 0x02, 0x50 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf, ALPS_FOUR_BUTTONS }, /* Dell Vostro 1400 */
{ { 0x52, 0x01, 0x14 }, 0x00, ALPS_PROTO_V2, 0xff, 0xff,
ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED }, /* Toshiba Tecra A11-11L */
alps_process_touchpad_packet_v3(psmouse);
}
+static void alps_process_packet_v6(struct psmouse *psmouse)
+{
+ struct alps_data *priv = psmouse->private;
+ unsigned char *packet = psmouse->packet;
+ struct input_dev *dev = psmouse->dev;
+ struct input_dev *dev2 = priv->dev2;
+ int x, y, z, left, right, middle;
+
+ /*
+	 * Byte 5 distinguishes whether the packet comes from the
+	 * touchpad or the trackpoint.
+ * Touchpad: 0 - 0x7E
+ * Trackpoint: 0x7F
+ */
+ if (packet[5] == 0x7F) {
+		/* Only DualPoint devices should send trackpoint packets */
+ if (!(priv->flags & ALPS_DUALPOINT))
+ return;
+
+ /* Trackpoint packet */
+ x = packet[1] | ((packet[3] & 0x20) << 2);
+ y = packet[2] | ((packet[3] & 0x40) << 1);
+ z = packet[4];
+ left = packet[3] & 0x01;
+ right = packet[3] & 0x02;
+ middle = packet[3] & 0x04;
+
+		/* Prevent the cursor from jumping when the finger is lifted */
+ if (x == 0x7F && y == 0x7F && z == 0x7F)
+ x = y = z = 0;
+
+		/* Divide by 4 since the trackpoint is otherwise too fast */
+ input_report_rel(dev2, REL_X, (char)x / 4);
+ input_report_rel(dev2, REL_Y, -((char)y / 4));
+
+ input_report_key(dev2, BTN_LEFT, left);
+ input_report_key(dev2, BTN_RIGHT, right);
+ input_report_key(dev2, BTN_MIDDLE, middle);
+
+ input_sync(dev2);
+ return;
+ }
+
+ /* Touchpad packet */
+ x = packet[1] | ((packet[3] & 0x78) << 4);
+ y = packet[2] | ((packet[4] & 0x78) << 4);
+ z = packet[5];
+ left = packet[3] & 0x01;
+ right = packet[3] & 0x02;
+
+ if (z > 30)
+ input_report_key(dev, BTN_TOUCH, 1);
+ if (z < 25)
+ input_report_key(dev, BTN_TOUCH, 0);
+
+ if (z > 0) {
+ input_report_abs(dev, ABS_X, x);
+ input_report_abs(dev, ABS_Y, y);
+ }
+
+ input_report_abs(dev, ABS_PRESSURE, z);
+ input_report_key(dev, BTN_TOOL_FINGER, z > 0);
+
+ /* v6 touchpad does not have middle button */
+ input_report_key(dev, BTN_LEFT, left);
+ input_report_key(dev, BTN_RIGHT, right);
+
+ input_sync(dev);
+}
+
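To make the bit shuffling concrete, here is the touchpad branch applied to a made-up 6-byte packet (values are illustrative only):

    #include <stdio.h>

    int main(void)
    {
        /* Made-up v6 packet; byte 5 below 0x7f marks it as a touchpad
         * (not trackpoint) packet, per the decode above. */
        unsigned char packet[6] = { 0x00, 0x34, 0x56, 0x1a, 0x28, 0x30 };

        int x = packet[1] | ((packet[3] & 0x78) << 4);
        int y = packet[2] | ((packet[4] & 0x78) << 4);
        int z = packet[5];
        int left = !!(packet[3] & 0x01);
        int right = !!(packet[3] & 0x02);

        /* prints: x=436 y=726 z=48 left=0 right=1 */
        printf("x=%d y=%d z=%d left=%d right=%d\n", x, y, z, left, right);
        return 0;
    }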
static void alps_process_packet_v4(struct psmouse *psmouse)
{
struct alps_data *priv = psmouse->private;
}
/* Bytes 2 - pktsize should have 0 in the highest bit */
- if (priv->proto_version != ALPS_PROTO_V5 &&
+ if ((priv->proto_version < ALPS_PROTO_V5) &&
psmouse->pktcnt >= 2 && psmouse->pktcnt <= psmouse->pktsize &&
(psmouse->packet[psmouse->pktcnt - 1] & 0x80)) {
psmouse_dbg(psmouse, "refusing packet[%i] = %x\n",
return ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETPOLL);
}
+static int alps_monitor_mode_send_word(struct psmouse *psmouse, u16 word)
+{
+ int i, nibble;
+
+ /*
+	 * Only bits b0-b11 are valid and nibbles are sent low nibble
+	 * first, e.g. word = 0x0123 goes out as the nibble sequence 3, 2, 1.
+ */
+ for (i = 0; i <= 8; i += 4) {
+ nibble = (word >> i) & 0xf;
+ if (alps_command_mode_send_nibble(psmouse, nibble))
+ return -1;
+ }
+
+ return 0;
+}
+
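A small demonstration of the nibble ordering the comment describes:

    #include <stdio.h>

    int main(void)
    {
        unsigned short word = 0x0123;
        int i;

        /* Low nibble first, three nibbles for the valid b0-b11 range:
         * prints "3 2 1". */
        for (i = 0; i <= 8; i += 4)
            printf("%x ", (word >> i) & 0xf);
        printf("\n");
        return 0;
    }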
+static int alps_monitor_mode_write_reg(struct psmouse *psmouse,
+ u16 addr, u16 value)
+{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+
+ /* 0x0A0 is the command to write the word */
+ if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_ENABLE) ||
+ alps_monitor_mode_send_word(psmouse, 0x0A0) ||
+ alps_monitor_mode_send_word(psmouse, addr) ||
+ alps_monitor_mode_send_word(psmouse, value) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE))
+ return -1;
+
+ return 0;
+}
+
+static int alps_monitor_mode(struct psmouse *psmouse, bool enable)
+{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+
+ if (enable) {
+ /* EC E9 F5 F5 E7 E6 E7 E9 to enter monitor mode */
+ if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_RESET_WRAP) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_GETINFO) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE21) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE21) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_GETINFO))
+ return -1;
+ } else {
+ /* EC to exit monitor mode */
+ if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_RESET_WRAP))
+ return -1;
+ }
+
+ return 0;
+}
+
+static int alps_absolute_mode_v6(struct psmouse *psmouse)
+{
+ u16 reg_val = 0x181;
+ int ret = -1;
+
+ /* enter monitor mode, to write the register */
+ if (alps_monitor_mode(psmouse, true))
+ return -1;
+
+ ret = alps_monitor_mode_write_reg(psmouse, 0x000, reg_val);
+
+ if (alps_monitor_mode(psmouse, false))
+ ret = -1;
+
+ return ret;
+}
+
static int alps_get_status(struct psmouse *psmouse, char *param)
{
/* Get status: 0xF5 0xF5 0xF5 0xE9 */
return 0;
}
+static int alps_hw_init_v6(struct psmouse *psmouse)
+{
+ unsigned char param[2] = {0xC8, 0x14};
+
+	/* Enter passthrough mode to let trackpoint enter 6-byte raw mode */
+ if (alps_passthrough_mode_v2(psmouse, true))
+ return -1;
+
+ if (ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
+ ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
+ ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
+	    ps2_command(&psmouse->ps2dev, &param[0], PSMOUSE_CMD_SETRATE) ||
+	    ps2_command(&psmouse->ps2dev, &param[1], PSMOUSE_CMD_SETRATE))
+ return -1;
+
+ if (alps_passthrough_mode_v2(psmouse, false))
+ return -1;
+
+ if (alps_absolute_mode_v6(psmouse)) {
+ psmouse_err(psmouse, "Failed to enable absolute mode\n");
+ return -1;
+ }
+
+ return 0;
+}
+
/*
* Enable or disable passthrough mode to the trackstick.
*/
priv->hw_init = alps_hw_init_v1_v2;
priv->process_packet = alps_process_packet_v1_v2;
priv->set_abs_params = alps_set_abs_params_st;
+ priv->x_max = 1023;
+ priv->y_max = 767;
break;
case ALPS_PROTO_V3:
priv->hw_init = alps_hw_init_v3;
priv->x_bits = 23;
priv->y_bits = 12;
break;
+ case ALPS_PROTO_V6:
+ priv->hw_init = alps_hw_init_v6;
+ priv->process_packet = alps_process_packet_v6;
+ priv->set_abs_params = alps_set_abs_params_st;
+ priv->nibble_commands = alps_v6_nibble_commands;
+ priv->x_max = 2047;
+ priv->y_max = 1535;
+ break;
}
}
static void alps_set_abs_params_st(struct alps_data *priv,
struct input_dev *dev1)
{
- input_set_abs_params(dev1, ABS_X, 0, 1023, 0, 0);
- input_set_abs_params(dev1, ABS_Y, 0, 767, 0, 0);
+ input_set_abs_params(dev1, ABS_X, 0, priv->x_max, 0, 0);
+ input_set_abs_params(dev1, ABS_Y, 0, priv->y_max, 0, 0);
}
static void alps_set_abs_params_mt(struct alps_data *priv,
#define ALPS_PROTO_V3 3
#define ALPS_PROTO_V4 4
#define ALPS_PROTO_V5 5
+#define ALPS_PROTO_V6 6
/**
* struct alps_model_info - touchpad ID table
break;
case 6:
case 7:
+ case 8:
etd->hw_version = 4;
break;
default:
static DEVICE_ATTR_RO(proto);
static DEVICE_ATTR_RO(id);
static DEVICE_ATTR_RO(extra);
-static DEVICE_ATTR_RO(modalias);
-static DEVICE_ATTR_WO(drvctl);
-static DEVICE_ATTR(description, S_IRUGO, serio_show_description, NULL);
-static DEVICE_ATTR(bind_mode, S_IWUSR | S_IRUGO, serio_show_bind_mode, serio_set_bind_mode);
static struct attribute *serio_device_id_attrs[] = {
&dev_attr_type.attr,
&dev_attr_proto.attr,
&dev_attr_id.attr,
&dev_attr_extra.attr,
+ NULL
+};
+
+static struct attribute_group serio_id_attr_group = {
+ .name = "id",
+ .attrs = serio_device_id_attrs,
+};
+
+static DEVICE_ATTR_RO(modalias);
+static DEVICE_ATTR_WO(drvctl);
+static DEVICE_ATTR(description, S_IRUGO, serio_show_description, NULL);
+static DEVICE_ATTR(bind_mode, S_IWUSR | S_IRUGO, serio_show_bind_mode, serio_set_bind_mode);
+
+static struct attribute *serio_device_attrs[] = {
&dev_attr_modalias.attr,
&dev_attr_description.attr,
&dev_attr_drvctl.attr,
NULL
};
-static struct attribute_group serio_id_attr_group = {
- .name = "id",
- .attrs = serio_device_id_attrs,
+static struct attribute_group serio_device_attr_group = {
+ .attrs = serio_device_attrs,
};
static const struct attribute_group *serio_device_attr_groups[] = {
&serio_id_attr_group,
+ &serio_device_attr_group,
NULL
};
struct sur40_state *sur40 = polldev->private;
struct input_dev *input = polldev->input;
int result, bulk_read, need_blobs, packet_blobs, i;
- u32 packet_id;
+ u32 uninitialized_var(packet_id);
struct sur40_header *header = &sur40->bulk_in_buffer->header;
struct sur40_blob *inblob = &sur40->bulk_in_buffer->blobs[0];
if (need_blobs == -1) {
need_blobs = le16_to_cpu(header->count);
dev_dbg(sur40->dev, "need %d blobs\n", need_blobs);
- packet_id = header->packet_id;
+ packet_id = le32_to_cpu(header->packet_id);
}
/*
struct usbtouch_usb {
unsigned char *data;
dma_addr_t data_dma;
+ int data_size;
unsigned char *buffer;
int buf_len;
struct urb *irq;
static void usbtouch_free_buffers(struct usb_device *udev,
struct usbtouch_usb *usbtouch)
{
- usb_free_coherent(udev, usbtouch->type->rept_size,
+ usb_free_coherent(udev, usbtouch->data_size,
usbtouch->data, usbtouch->data_dma);
kfree(usbtouch->buffer);
}
if (!type->process_pkt)
type->process_pkt = usbtouch_process_pkt;
- usbtouch->data = usb_alloc_coherent(udev, type->rept_size,
+ usbtouch->data_size = type->rept_size;
+ if (type->get_pkt_len) {
+ /*
+ * When dealing with variable-length packets we should
+ * not request more than wMaxPacketSize bytes at once
+ * as we do not know if there is more data coming or
+ * we filled exactly wMaxPacketSize bytes and there is
+ * nothing else.
+ */
+ usbtouch->data_size = min(usbtouch->data_size,
+ usb_endpoint_maxp(endpoint));
+ }
+
+ usbtouch->data = usb_alloc_coherent(udev, usbtouch->data_size,
GFP_KERNEL, &usbtouch->data_dma);
if (!usbtouch->data)
goto out_free;
if (usb_endpoint_type(endpoint) == USB_ENDPOINT_XFER_INT)
usb_fill_int_urb(usbtouch->irq, udev,
usb_rcvintpipe(udev, endpoint->bEndpointAddress),
- usbtouch->data, type->rept_size,
+ usbtouch->data, usbtouch->data_size,
usbtouch_irq, usbtouch, endpoint->bInterval);
else
usb_fill_bulk_urb(usbtouch->irq, udev,
usb_rcvbulkpipe(udev, endpoint->bEndpointAddress),
- usbtouch->data, type->rept_size,
+ usbtouch->data, usbtouch->data_size,
usbtouch_irq, usbtouch);
usbtouch->irq->dev = udev;
struct arm_smmu_cfg root_cfg;
phys_addr_t output_mask;
- spinlock_t lock;
+ struct mutex lock;
};
static DEFINE_SPINLOCK(arm_smmu_devices_lock);
goto out_free_domain;
smmu_domain->root_cfg.pgd = pgd;
- spin_lock_init(&smmu_domain->lock);
+ mutex_init(&smmu_domain->lock);
domain->priv = smmu_domain;
return 0;
* Sanity check the domain. We don't currently support domains
* that cross between different SMMU chains.
*/
- spin_lock(&smmu_domain->lock);
+ mutex_lock(&smmu_domain->lock);
if (!smmu_domain->leaf_smmu) {
/* Now that we have a master, we can finalise the domain */
ret = arm_smmu_init_domain_context(domain, dev);
dev_name(device_smmu->dev));
goto err_unlock;
}
- spin_unlock(&smmu_domain->lock);
+ mutex_unlock(&smmu_domain->lock);
/* Looks ok, so add the device to the domain */
master = find_smmu_master(smmu_domain->leaf_smmu, dev->of_node);
return arm_smmu_domain_add_master(smmu_domain, master);
err_unlock:
- spin_unlock(&smmu_domain->lock);
+ mutex_unlock(&smmu_domain->lock);
return ret;
}
if (paddr & ~output_mask)
return -ERANGE;
- spin_lock(&smmu_domain->lock);
+ mutex_lock(&smmu_domain->lock);
pgd += pgd_index(iova);
end = iova + size;
do {
} while (pgd++, iova != end);
out_unlock:
- spin_unlock(&smmu_domain->lock);
+ mutex_unlock(&smmu_domain->lock);
/* Ensure new page tables are visible to the hardware walker */
if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
phys_addr_t paddr, size_t size, int flags)
{
struct arm_smmu_domain *smmu_domain = domain->priv;
- struct arm_smmu_device *smmu = smmu_domain->leaf_smmu;
- if (!smmu_domain || !smmu)
+ if (!smmu_domain)
return -ENODEV;
/* Check for silent address truncation up the SMMU chain. */
static phys_addr_t arm_smmu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- pte_t *pte;
+ pgd_t *pgdp, pgd;
+ pud_t pud;
+ pmd_t pmd;
+ pte_t pte;
struct arm_smmu_domain *smmu_domain = domain->priv;
struct arm_smmu_cfg *root_cfg = &smmu_domain->root_cfg;
- struct arm_smmu_device *smmu = root_cfg->smmu;
- spin_lock(&smmu_domain->lock);
- pgd = root_cfg->pgd;
- if (!pgd)
- goto err_unlock;
+ pgdp = root_cfg->pgd;
+ if (!pgdp)
+ return 0;
- pgd += pgd_index(iova);
- if (pgd_none_or_clear_bad(pgd))
- goto err_unlock;
+ pgd = *(pgdp + pgd_index(iova));
+ if (pgd_none(pgd))
+ return 0;
- pud = pud_offset(pgd, iova);
- if (pud_none_or_clear_bad(pud))
- goto err_unlock;
+ pud = *pud_offset(&pgd, iova);
+ if (pud_none(pud))
+ return 0;
- pmd = pmd_offset(pud, iova);
- if (pmd_none_or_clear_bad(pmd))
- goto err_unlock;
+ pmd = *pmd_offset(&pud, iova);
+ if (pmd_none(pmd))
+ return 0;
- pte = pmd_page_vaddr(*pmd) + pte_index(iova);
+ pte = *(pmd_page_vaddr(pmd) + pte_index(iova));
if (pte_none(pte))
- goto err_unlock;
-
- spin_unlock(&smmu_domain->lock);
- return __pfn_to_phys(pte_pfn(*pte)) | (iova & ~PAGE_MASK);
+ return 0;
-err_unlock:
- spin_unlock(&smmu_domain->lock);
- dev_warn(smmu->dev,
- "invalid (corrupt?) page tables detected for iova 0x%llx\n",
- (unsigned long long)iova);
- return -EINVAL;
+ return __pfn_to_phys(pte_pfn(pte)) | (iova & ~PAGE_MASK);
}
static int arm_smmu_domain_has_cap(struct iommu_domain *domain,
dev_err(dev,
"found only %d context interrupt(s) but %d required\n",
smmu->num_context_irqs, smmu->num_context_banks);
+ err = -ENODEV;
goto out_put_parent;
}
if (WARN_ON(!gic->domain))
return;
+ if (gic_nr == 0) {
#ifdef CONFIG_SMP
- set_smp_cross_call(gic_raise_softirq);
- register_cpu_notifier(&gic_cpu_notifier);
+ set_smp_cross_call(gic_raise_softirq);
+ register_cpu_notifier(&gic_cpu_notifier);
#endif
-
- set_handle_irq(gic_handle_irq);
+ set_handle_irq(gic_handle_irq);
+ }
gic_chip.flags |= gic_arch_extn.flags;
gic_dist_init(gic);
(sizeof(struct led_pwm_data) * num_leds);
}
-static struct led_pwm_priv *led_pwm_create_of(struct platform_device *pdev)
+static int led_pwm_create_of(struct platform_device *pdev,
+ struct led_pwm_priv *priv)
{
struct device_node *node = pdev->dev.of_node;
struct device_node *child;
- struct led_pwm_priv *priv;
- int count, ret;
-
- /* count LEDs in this device, so we know how much to allocate */
- count = of_get_child_count(node);
- if (!count)
- return NULL;
-
- priv = devm_kzalloc(&pdev->dev, sizeof_pwm_leds_priv(count),
- GFP_KERNEL);
- if (!priv)
- return NULL;
+ int ret;
for_each_child_of_node(node, child) {
struct led_pwm_data *led_dat = &priv->leds[priv->num_leds];
if (IS_ERR(led_dat->pwm)) {
dev_err(&pdev->dev, "unable to request PWM for %s\n",
led_dat->cdev.name);
+ ret = PTR_ERR(led_dat->pwm);
goto err;
}
 		/* Get the period from the PWM core */
priv->num_leds++;
}
- return priv;
+ return 0;
err:
while (priv->num_leds--)
led_classdev_unregister(&priv->leds[priv->num_leds].cdev);
- return NULL;
+ return ret;
}
static int led_pwm_probe(struct platform_device *pdev)
{
struct led_pwm_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct led_pwm_priv *priv;
- int i, ret = 0;
+ int count, i;
+ int ret = 0;
+
+ if (pdata)
+ count = pdata->num_leds;
+ else
+ count = of_get_child_count(pdev->dev.of_node);
+
+ if (!count)
+ return -EINVAL;
- if (pdata && pdata->num_leds) {
- priv = devm_kzalloc(&pdev->dev,
- sizeof_pwm_leds_priv(pdata->num_leds),
- GFP_KERNEL);
- if (!priv)
- return -ENOMEM;
+ priv = devm_kzalloc(&pdev->dev, sizeof_pwm_leds_priv(count),
+ GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
- for (i = 0; i < pdata->num_leds; i++) {
+ if (pdata) {
+ for (i = 0; i < count; i++) {
struct led_pwm *cur_led = &pdata->leds[i];
struct led_pwm_data *led_dat = &priv->leds[i];
if (ret < 0)
goto err;
}
- priv->num_leds = pdata->num_leds;
+ priv->num_leds = count;
} else {
- priv = led_pwm_create_of(pdev);
- if (!priv)
- return -ENODEV;
+ ret = led_pwm_create_of(pdev, priv);
+ if (ret)
+ return ret;
}
platform_set_drvdata(pdev, priv);
{
__u64 mem;
+ dm_bufio_allocated_kmem_cache = 0;
+ dm_bufio_allocated_get_free_pages = 0;
+ dm_bufio_allocated_vmalloc = 0;
+ dm_bufio_current_allocated = 0;
+
memset(&dm_bufio_caches, 0, sizeof dm_bufio_caches);
memset(&dm_bufio_cache_names, 0, sizeof dm_bufio_cache_names);
int r = 0;
bool updated = updated_this_tick(mq, e);
- requeue_and_update_tick(mq, e);
-
if ((!discarded_oblock && updated) ||
- !should_promote(mq, e, discarded_oblock, data_dir))
+ !should_promote(mq, e, discarded_oblock, data_dir)) {
+ requeue_and_update_tick(mq, e);
result->op = POLICY_MISS;
- else if (!can_migrate)
+
+ } else if (!can_migrate)
r = -EWOULDBLOCK;
- else
+
+ else {
+ requeue_and_update_tick(mq, e);
r = pre_cache_to_cache(mq, e, result);
+ }
return r;
}
{
int r;
- r = dm_cache_resize(cache->cmd, cache->cache_size);
+ r = dm_cache_resize(cache->cmd, new_size);
if (r) {
DMERR("could not resize cache metadata");
return r;
struct delay_c {
struct timer_list delay_timer;
struct mutex timer_lock;
+ struct workqueue_struct *kdelayd_wq;
struct work_struct flush_expired_bios;
struct list_head delayed_bios;
atomic_t may_delay;
static DEFINE_MUTEX(delayed_bios_lock);
-static struct workqueue_struct *kdelayd_wq;
static struct kmem_cache *delayed_cache;
static void handle_delayed_timer(unsigned long data)
{
struct delay_c *dc = (struct delay_c *)data;
- queue_work(kdelayd_wq, &dc->flush_expired_bios);
+ queue_work(dc->kdelayd_wq, &dc->flush_expired_bios);
}
static void queue_timeout(struct delay_c *dc, unsigned long expires)
goto bad_dev_write;
}
+ dc->kdelayd_wq = alloc_workqueue("kdelayd", WQ_MEM_RECLAIM, 0);
+ if (!dc->kdelayd_wq) {
+ DMERR("Couldn't start kdelayd");
+ goto bad_queue;
+ }
+
setup_timer(&dc->delay_timer, handle_delayed_timer, (unsigned long)dc);
INIT_WORK(&dc->flush_expired_bios, flush_expired_bios);
ti->private = dc;
return 0;
+bad_queue:
+ mempool_destroy(dc->delayed_pool);
bad_dev_write:
if (dc->dev_write)
dm_put_device(ti, dc->dev_write);
{
struct delay_c *dc = ti->private;
- flush_workqueue(kdelayd_wq);
+ destroy_workqueue(dc->kdelayd_wq);
dm_put_device(ti, dc->dev_read);
{
int r = -ENOMEM;
- kdelayd_wq = alloc_workqueue("kdelayd", WQ_MEM_RECLAIM, 0);
- if (!kdelayd_wq) {
- DMERR("Couldn't start kdelayd");
- goto bad_queue;
- }
-
delayed_cache = KMEM_CACHE(dm_delay_info, 0);
if (!delayed_cache) {
DMERR("Couldn't create delayed bio cache.");
bad_register:
kmem_cache_destroy(delayed_cache);
bad_memcache:
- destroy_workqueue(kdelayd_wq);
-bad_queue:
return r;
}
{
dm_unregister_target(&delay_target);
kmem_cache_destroy(delayed_cache);
- destroy_workqueue(kdelayd_wq);
}
/* Module hooks */
atomic_t pending_exceptions_count;
+ /* Protected by "lock" */
+ sector_t exception_start_sequence;
+
+ /* Protected by kcopyd single-threaded callback */
+ sector_t exception_complete_sequence;
+
+ /*
+ * A list of pending exceptions that completed out of order.
+ * Protected by kcopyd single-threaded callback.
+ */
+ struct list_head out_of_order_list;
+
mempool_t *pending_pool;
struct dm_exception_table pending;
*/
int started;
+ /* There was copying error. */
+ int copy_error;
+
+ /* A sequence number, it is used for in-order completion. */
+ sector_t exception_sequence;
+
+ struct list_head out_of_order_entry;
+
/*
* For writing a complete chunk, bypassing the copy.
*/
s->valid = 1;
s->active = 0;
atomic_set(&s->pending_exceptions_count, 0);
+ s->exception_start_sequence = 0;
+ s->exception_complete_sequence = 0;
+ INIT_LIST_HEAD(&s->out_of_order_list);
init_rwsem(&s->lock);
INIT_LIST_HEAD(&s->list);
spin_lock_init(&s->pe_lock);
pending_complete(pe, success);
}
+static void complete_exception(struct dm_snap_pending_exception *pe)
+{
+ struct dm_snapshot *s = pe->snap;
+
+ if (unlikely(pe->copy_error))
+ pending_complete(pe, 0);
+
+ else
+ /* Update the metadata if we are persistent */
+ s->store->type->commit_exception(s->store, &pe->e,
+ commit_callback, pe);
+}
+
/*
* Called when the copy I/O has finished. kcopyd actually runs
* this code so don't block.
struct dm_snap_pending_exception *pe = context;
struct dm_snapshot *s = pe->snap;
- if (read_err || write_err)
- pending_complete(pe, 0);
+ pe->copy_error = read_err || write_err;
- else
- /* Update the metadata if we are persistent */
- s->store->type->commit_exception(s->store, &pe->e,
- commit_callback, pe);
+ if (pe->exception_sequence == s->exception_complete_sequence) {
+ s->exception_complete_sequence++;
+ complete_exception(pe);
+
+ while (!list_empty(&s->out_of_order_list)) {
+ pe = list_entry(s->out_of_order_list.next,
+ struct dm_snap_pending_exception, out_of_order_entry);
+ if (pe->exception_sequence != s->exception_complete_sequence)
+ break;
+ s->exception_complete_sequence++;
+ list_del(&pe->out_of_order_entry);
+ complete_exception(pe);
+ }
+ } else {
+ struct list_head *lh;
+ struct dm_snap_pending_exception *pe2;
+
+ list_for_each_prev(lh, &s->out_of_order_list) {
+ pe2 = list_entry(lh, struct dm_snap_pending_exception, out_of_order_entry);
+ if (pe2->exception_sequence < pe->exception_sequence)
+ break;
+ }
+ list_add(&pe->out_of_order_entry, lh);
+ }
}
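The copy_callback() change above retires pending exceptions strictly in sequence-number order: a completion that arrives early is parked on out_of_order_list and drained once all of its predecessors have finished. A compact userspace C model of that in-order retirement scheme, using an array in place of the kernel list; names are illustrative:

#include <stdio.h>

#define MAX_SEQ 8

static unsigned int complete_seq;	/* next sequence allowed to retire */
static int parked[MAX_SEQ];		/* parked[s] set: seq s finished early */

static void retire(unsigned int seq)
{
	printf("retiring exception %u\n", seq);
}

static void on_copy_complete(unsigned int seq)
{
	if (seq != complete_seq) {
		parked[seq] = 1;	/* completed out of order: park it */
		return;
	}
	retire(complete_seq++);
	/* drain parked completions that are now in order */
	while (complete_seq < MAX_SEQ && parked[complete_seq]) {
		parked[complete_seq] = 0;
		retire(complete_seq++);
	}
}

int main(void)
{
	static const unsigned int order[] = { 0, 2, 3, 1, 4 };

	for (unsigned int i = 0; i < sizeof(order) / sizeof(order[0]); i++)
		on_copy_complete(order[i]);
	return 0;
}

Running it retires exceptions 0 through 4 in order even though they complete as 0, 2, 3, 1, 4.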
/*
return NULL;
}
+ pe->exception_sequence = s->exception_start_sequence++;
+
dm_insert_exception(&s->pending, &pe->e);
return pe;
static struct target_type snapshot_target = {
.name = "snapshot",
- .version = {1, 11, 1},
+ .version = {1, 12, 0},
.module = THIS_MODULE,
.ctr = snapshot_ctr,
.dtr = snapshot_dtr,
int __init dm_statistics_init(void)
{
+ shared_memory_amount = 0;
dm_stat_need_rcu_barrier = 0;
return 0;
}
num_targets = dm_round_up(num_targets, KEYS_PER_NODE);
+ if (!num_targets) {
+ kfree(t);
+ return -ENOMEM;
+ }
+
if (alloc_targets(t, num_targets)) {
kfree(t);
return -ENOMEM;
up_write(&pmd->root_lock);
}
+void dm_pool_metadata_read_write(struct dm_pool_metadata *pmd)
+{
+ down_write(&pmd->root_lock);
+ pmd->read_only = false;
+ dm_bm_set_read_write(pmd->bm);
+ up_write(&pmd->root_lock);
+}
+
int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd,
dm_block_t threshold,
dm_sm_threshold_fn fn,
* that nothing is changing.
*/
void dm_pool_metadata_read_only(struct dm_pool_metadata *pmd);
+void dm_pool_metadata_read_write(struct dm_pool_metadata *pmd);
int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd,
dm_block_t threshold,
*/
r = dm_thin_insert_block(tc->td, m->virt_block, m->data_block);
if (r) {
- DMERR_LIMIT("dm_thin_insert_block() failed");
+ DMERR_LIMIT("%s: dm_thin_insert_block() failed: error = %d",
+ dm_device_name(pool->pool_md), r);
+ set_pool_mode(pool, PM_READ_ONLY);
cell_error(pool, m->cell);
goto out;
}
}
}
-static int commit(struct pool *pool)
-{
- int r;
-
- r = dm_pool_commit_metadata(pool->pmd);
- if (r)
- DMERR_LIMIT("%s: commit failed: error = %d",
- dm_device_name(pool->pool_md), r);
-
- return r;
-}
-
/*
* A non-zero return indicates read_only or fail_io mode.
* Many callers don't care about the return value.
*/
-static int commit_or_fallback(struct pool *pool)
+static int commit(struct pool *pool)
{
int r;
if (get_pool_mode(pool) != PM_WRITE)
return -EINVAL;
- r = commit(pool);
- if (r)
+ r = dm_pool_commit_metadata(pool->pmd);
+ if (r) {
+ DMERR_LIMIT("%s: dm_pool_commit_metadata failed: error = %d",
+ dm_device_name(pool->pool_md), r);
set_pool_mode(pool, PM_READ_ONLY);
+ }
return r;
}
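With commit_or_fallback() folded into commit(), every metadata commit now degrades the pool to read-only on failure. A userspace sketch of this commit-or-degrade pattern, assuming a fake backend flag to simulate an I/O error; nothing here is dm-thin's real API:

#include <errno.h>
#include <stdio.h>

enum pool_mode { PM_WRITE, PM_READ_ONLY, PM_FAIL };

static enum pool_mode mode = PM_WRITE;
static int backend_ok = 1;	/* flip to 0 to simulate a metadata I/O error */

static int commit(void)
{
	int r;

	if (mode != PM_WRITE)
		return -EINVAL;	/* degraded pools refuse to commit */

	r = backend_ok ? 0 : -EIO;
	if (r) {
		fprintf(stderr, "commit failed (%d), switching to read-only\n", r);
		mode = PM_READ_ONLY;	/* degrade rather than risk corruption */
	}
	return r;
}

int main(void)
{
	for (int i = 1; i <= 3; i++) {
		int r = commit();

		printf("commit #%d -> %d, mode %d\n", i, r, (int)mode);
		backend_ok = 0;	/* fail from the second attempt onward */
	}
	return 0;
}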
* Try to commit to see if that will free up some
* more space.
*/
- (void) commit_or_fallback(pool);
+ r = commit(pool);
+ if (r)
+ return r;
r = dm_pool_get_free_block_count(pool->pmd, &free_blocks);
if (r)
* table reload).
*/
if (!free_blocks) {
- DMWARN("%s: no free space available.",
+ DMWARN("%s: no free data space available.",
dm_device_name(pool->pool_md));
spin_lock_irqsave(&pool->lock, flags);
pool->no_free_space = 1;
}
r = dm_pool_alloc_data_block(pool->pmd, result);
- if (r)
+ if (r) {
+ if (r == -ENOSPC &&
+ !dm_pool_get_free_metadata_block_count(pool->pmd, &free_blocks) &&
+ !free_blocks) {
+ DMWARN("%s: no free metadata space available.",
+ dm_device_name(pool->pool_md));
+ set_pool_mode(pool, PM_READ_ONLY);
+ }
return r;
+ }
return 0;
}
if (bio_list_empty(&bios) && !need_commit_due_to_time(pool))
return;
- if (commit_or_fallback(pool)) {
+ if (commit(pool)) {
while ((bio = bio_list_pop(&bios)))
bio_io_error(bio);
return;
case PM_FAIL:
DMERR("%s: switching pool to failure mode",
dm_device_name(pool->pool_md));
+ dm_pool_metadata_read_only(pool->pmd);
pool->process_bio = process_bio_fail;
pool->process_discard = process_bio_fail;
pool->process_prepared_mapping = process_prepared_mapping_fail;
break;
case PM_WRITE:
+ dm_pool_metadata_read_write(pool->pmd);
pool->process_bio = process_bio;
pool->process_discard = process_discard;
pool->process_prepared_mapping = process_prepared_mapping;
struct pool_c *pt = ti->private;
/*
- * We want to make sure that degraded pools are never upgraded.
+ * We want to make sure that a pool in PM_FAIL mode is never upgraded.
*/
enum pool_mode old_mode = pool->pf.mode;
enum pool_mode new_mode = pt->adjusted_pf.mode;
- if (old_mode > new_mode)
+ /*
+ * If we were in PM_FAIL mode, rollback of metadata failed. We're
+ * not going to recover without a thin_repair. So we never let the
+ * pool move out of the old mode. On the other hand a PM_READ_ONLY
+ * may have been due to a lack of metadata or data space, and may
+ * now work (ie. if the underlying devices have been resized).
+ */
+ if (old_mode == PM_FAIL)
new_mode = old_mode;
pool->ti = ti;
return r;
if (need_commit1 || need_commit2)
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
return 0;
}
cancel_delayed_work(&pool->waker);
flush_workqueue(pool->wq);
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
}
static int check_arg_count(unsigned argc, unsigned args_required)
if (r)
return r;
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
r = dm_pool_reserve_metadata_snap(pool->pmd);
if (r)
DMWARN("Unrecognised thin pool target message received: %s", argv[0]);
if (!r)
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
return r;
}
/* Commit to ensure statistics aren't out-of-date */
if (!(status_flags & DM_STATUS_NOFLUSH_FLAG) && !dm_suspended(ti))
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
r = dm_pool_get_metadata_transaction_id(pool->pmd, &transaction_id);
if (r) {
finish_wait(&mddev->sb_wait, &wq);
}
-static void bi_complete(struct bio *bio, int error)
-{
- complete((struct completion*)bio->bi_private);
-}
-
int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
struct page *page, int rw, bool metadata_op)
{
struct bio *bio = bio_alloc_mddev(GFP_NOIO, 1, rdev->mddev);
- struct completion event;
int ret;
rw |= REQ_SYNC;
else
bio->bi_sector = sector + rdev->data_offset;
bio_add_page(bio, page, size, 0);
- init_completion(&event);
- bio->bi_private = &event;
- bio->bi_end_io = bi_complete;
- submit_bio(rw, bio);
- wait_for_completion(&event);
+ submit_bio_wait(rw, bio);
ret = test_bit(BIO_UPTODATE, &bio->bi_flags);
bio_put(bio);
* The shadow op will often be a noop. Only insert if it really
* copied data.
*/
- if (dm_block_location(*block) != b)
+ if (dm_block_location(*block) != b) {
+ /*
+ * dm_tm_shadow_block will have already decremented the old
+ * block, but it is still referenced by the btree. We
+ * increment to stop the insert decrementing it below zero
+ * when overwriting the old value.
+ */
+ dm_tm_inc(info->btree_info.tm, b);
r = insert_ablock(info, index, *block, root);
+ }
return r;
}
}
EXPORT_SYMBOL_GPL(dm_bm_set_read_only);
+void dm_bm_set_read_write(struct dm_block_manager *bm)
+{
+ bm->read_only = false;
+}
+EXPORT_SYMBOL_GPL(dm_bm_set_read_write);
+
u32 dm_bm_checksum(const void *data, size_t len, u32 init_xor)
{
return crc32c(~(u32) 0, data, len) ^ init_xor;
int dm_bm_flush_and_unlock(struct dm_block_manager *bm,
struct dm_block *superblock);
- /*
- * Request data be prefetched into the cache.
- */
+/*
+ * Request data is prefetched into the cache.
+ */
void dm_bm_prefetch(struct dm_block_manager *bm, dm_block_t b);
/*
* be returned if you do.
*/
void dm_bm_set_read_only(struct dm_block_manager *bm);
+void dm_bm_set_read_write(struct dm_block_manager *bm);
u32 dm_bm_checksum(const void *data, size_t len, u32 init_xor);
}
static int sm_ll_mutate(struct ll_disk *ll, dm_block_t b,
- uint32_t (*mutator)(void *context, uint32_t old),
+ int (*mutator)(void *context, uint32_t old, uint32_t *new),
void *context, enum allocation_event *ev)
{
int r;
if (old > 2) {
r = sm_ll_lookup_big_ref_count(ll, b, &old);
- if (r < 0)
+ if (r < 0) {
+ dm_tm_unlock(ll->tm, nb);
return r;
+ }
}
- ref_count = mutator(context, old);
+ r = mutator(context, old, &ref_count);
+ if (r) {
+ dm_tm_unlock(ll->tm, nb);
+ return r;
+ }
if (ref_count <= 2) {
sm_set_bitmap(bm_le, bit, ref_count);
return ll->save_ie(ll, index, &ie_disk);
}
-static uint32_t set_ref_count(void *context, uint32_t old)
+static int set_ref_count(void *context, uint32_t old, uint32_t *new)
{
- return *((uint32_t *) context);
+ *new = *((uint32_t *) context);
+ return 0;
}
int sm_ll_insert(struct ll_disk *ll, dm_block_t b,
return sm_ll_mutate(ll, b, set_ref_count, &ref_count, ev);
}
-static uint32_t inc_ref_count(void *context, uint32_t old)
+static int inc_ref_count(void *context, uint32_t old, uint32_t *new)
{
- return old + 1;
+ *new = old + 1;
+ return 0;
}
int sm_ll_inc(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev)
return sm_ll_mutate(ll, b, inc_ref_count, NULL, ev);
}
-static uint32_t dec_ref_count(void *context, uint32_t old)
+static int dec_ref_count(void *context, uint32_t old, uint32_t *new)
{
- return old - 1;
+ if (!old) {
+ DMERR_LIMIT("unable to decrement a reference count below 0");
+ return -EINVAL;
+ }
+
+ *new = old - 1;
+ return 0;
}
int sm_ll_dec(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev)
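The sm_ll_mutate() change above converts the mutator callback from returning the new count to returning an error code and writing the count through an out-parameter, which is what lets dec_ref_count() reject an underflow. A self-contained userspace C sketch of the same callback shape, with illustrative names only:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* new-style mutator: status as return value, new count via out-parameter */
static int dec_ref_count(void *context, uint32_t old, uint32_t *new_count)
{
	(void)context;
	if (!old)
		return -EINVAL;	/* refuse to decrement below zero */
	*new_count = old - 1;
	return 0;
}

static int mutate(uint32_t *count,
		  int (*mutator)(void *, uint32_t, uint32_t *),
		  void *context)
{
	uint32_t new_count;
	int r = mutator(context, *count, &new_count);

	if (r)
		return r;	/* propagate instead of storing garbage */
	*count = new_count;
	return 0;
}

int main(void)
{
	uint32_t count = 1;

	for (int i = 0; i < 2; i++) {
		int r = mutate(&count, dec_ref_count, NULL);

		printf("dec -> %d, count %u\n", r, count);
	}
	return 0;
}

The second decrement fails with -EINVAL and leaves the count untouched, mirroring the DMERR_LIMIT path above.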
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
int r = sm_metadata_new_block_(sm, b);
- if (r)
+ if (r) {
DMERR("unable to allocate new metadata block");
+ return r;
+ }
r = sm_metadata_get_nr_free(sm, &count);
- if (r)
+ if (r) {
DMERR("couldn't get free block count");
+ return r;
+ }
check_threshold(&smm->threshold, count);
u32 modem_state; /* from SMSHOSTLIB_DVB_MODEM_STATE_ET */
s32 SNR; /* dB */
u32 ber; /* Post Viterbi ber [1E-5] */
- u32 ber_error_count; /* Number of erronous SYNC bits. */
+ u32 ber_error_count; /* Number of erroneous SYNC bits. */
u32 ber_bit_count; /* Total number of SYNC bits. */
u32 ts_per; /* Transport stream PER,
				0xFFFFFFFF indicates N/A */
u32 modem_state; /* from SMSHOSTLIB_DVB_MODEM_STATE_ET */
s32 SNR; /* dB */
u32 ber; /* Post Viterbi ber [1E-5] */
- u32 ber_error_count; /* Number of erronous SYNC bits. */
+ u32 ber_error_count; /* Number of erroneous SYNC bits. */
u32 ber_bit_count; /* Total number of SYNC bits. */
u32 ts_per; /* Transport stream PER,
				0xFFFFFFFF indicates N/A */
u32 is_demod_locked; /* 0 - not locked, 1 - locked */
u32 ber_bit_count; /* Total number of SYNC bits. */
- u32 ber_error_count; /* Number of erronous SYNC bits. */
+ u32 ber_error_count; /* Number of erroneous SYNC bits. */
s32 MRC_SNR; /* dB */
s32 mrc_in_band_pwr; /* In band power in dBM */
dprintk_tscheck("TEI detected. "
"PID=0x%x data1=0x%x\n",
pid, buf[1]);
- /* data in this packet cant be trusted - drop it unless
+ /* data in this packet can't be trusted - drop it unless
* module option dvb_demux_feed_err_pkts is set */
if (!dvb_demux_feed_err_pkts)
return;
return -EINVAL;
}
- if (feed->is_filtering)
+ if (feed->is_filtering) {
+		/* release dvbdmx->mutex since it is
+		   acquired by stop_filtering() itself */
+ mutex_unlock(&dvbdmx->mutex);
feed->stop_filtering(feed);
+ mutex_lock(&dvbdmx->mutex);
+ }
spin_lock_irq(&dvbdmx->lock);
f = dvbdmxfeed->filter;
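The hunk above drops dvbdmx->mutex around the stop_filtering() call because the callee acquires the same mutex itself and kernel mutexes are not recursive. A userspace pthread model of the unlock/call/relock dance, with hypothetical function names:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t demux_mutex = PTHREAD_MUTEX_INITIALIZER;

/* like feed->stop_filtering(): takes the demux mutex itself */
static void stop_filtering(void)
{
	pthread_mutex_lock(&demux_mutex);
	puts("filtering stopped");
	pthread_mutex_unlock(&demux_mutex);
}

static void release_feed(int is_filtering)
{
	pthread_mutex_lock(&demux_mutex);
	if (is_filtering) {
		/* drop the mutex: the callee will take it itself */
		pthread_mutex_unlock(&demux_mutex);
		stop_filtering();
		pthread_mutex_lock(&demux_mutex);
	}
	/* ... remainder of the teardown runs under the mutex ... */
	pthread_mutex_unlock(&demux_mutex);
}

int main(void)
{
	release_feed(1);
	return 0;
}

Without the inner unlock, the second lock attempt in stop_filtering() would deadlock on itself.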
static int af9033_wr_reg_val_tab(struct af9033_state *state,
const struct reg_val *tab, int tab_len)
{
+#define MAX_TAB_LEN 212
int ret, i, j;
- u8 buf[MAX_XFER_SIZE];
+ u8 buf[1 + MAX_TAB_LEN];
+
+ dev_dbg(&state->i2c->dev, "%s: tab_len=%d\n", __func__, tab_len);
if (tab_len > sizeof(buf)) {
- dev_warn(&state->i2c->dev,
- "%s: i2c wr len=%d is too big!\n",
- KBUILD_MODNAME, tab_len);
+ dev_warn(&state->i2c->dev, "%s: tab len %d is too big\n",
+ KBUILD_MODNAME, tab_len);
return -EINVAL;
}
- dev_dbg(&state->i2c->dev, "%s: tab_len=%d\n", __func__, tab_len);
-
for (i = 0, j = 0; i < tab_len; i++) {
buf[j] = tab[i].val;
num = if_freq / 1000; /* Hz => kHz */
num *= 0x4000;
- if_ctl = cxd2820r_div_u64_round_closest(num, 41000);
+ if_ctl = 0x4000 - cxd2820r_div_u64_round_closest(num, 41000);
buf[0] = (if_ctl >> 8) & 0x3f;
buf[1] = (if_ctl >> 0) & 0xff;
dib8000_set_diversity_in(state->fe[0], state->diversity_onoff);
locks = (dib8000_read_word(state, 180) >> 6) & 0x3f; /* P_coff_winlen ? */
- /* coff should lock over P_coff_winlen ofdm symbols : give 3 times this lenght to lock */
+ /* coff should lock over P_coff_winlen ofdm symbols : give 3 times this length to lock */
*timeout = dib8000_get_timeout(state, 2 * locks, SYMBOL_DEPENDENT_ON);
*tune_state = CT_DEMOD_STEP_5;
break;
case CT_DEMOD_STEP_9: /* 39 */
if ((state->revision == 0x8090) || ((dib8000_read_word(state, 1291) >> 9) & 0x1)) { /* fe capable of deinterleaving : esram */
- /* defines timeout for mpeg lock depending on interleaver lenght of longest layer */
+ /* defines timeout for mpeg lock depending on interleaver length of longest layer */
for (i = 0; i < 3; i++) {
if (c->layer[i].interleaving >= deeper_interleaver) {
dprintk("layer%i: time interleaver = %d ", i, c->layer[i].interleaving);
goto error;
if (state->m_enable_parallel == true) {
- /* paralel -> enable MD1 to MD7 */
+ /* parallel -> enable MD1 to MD7 */
status = write16(state, SIO_PDR_MD1_CFG__A,
sio_pdr_mdx_cfg);
if (status < 0)
dprintk(1, "\n");
- /* Gracefull shutdown (byte boundaries) */
+ /* Graceful shutdown (byte boundaries) */
status = read16(state, FEC_OC_SNC_MODE__A, &fec_oc_snc_mode);
if (status < 0)
goto error;
fec_oc_dto_burst_len = 204;
}
- /* Check serial or parrallel output */
+ /* Check serial or parallel output */
fec_oc_reg_ipr_mode &= (~(FEC_OC_IPR_MODE_SERIAL__M));
if (state->m_enable_parallel == false) {
/* MPEG data output is serial -> set ipr_mode[0] */
goto error;
if (count == 1) {
- /* Try sampling on a diffrent edge */
+ /* Try sampling on a different edge */
u16 clk_neg = 0;
status = read16(state, IQM_AF_CLKNEG__A, &clk_neg);
if (status < 0)
goto error;
- /* Retreive results parameters from SC */
+ /* Retrieve results parameters from SC */
switch (cmd) {
/* All commands yielding 5 results */
/* All commands yielding 4 results */
break;
}
#if 0
- /* No hierachical channels support in BDA */
+ /* No hierarchical channels support in BDA */
/* Priority (only for hierarchical channels) */
switch (channel->priority) {
case DRX_PRIORITY_LOW:
/*============================================================================*/
/**
-* \brief Retreive lock status .
+* \brief Retrieve lock status.
* \param demod Pointer to demodulator instance.
* \param lockStat Pointer to lock status structure.
* \return DRXStatus_t.
goto error;
/* Stamp driver version number in SCU data RAM in BCD code
- Done to enable field application engineers to retreive drxdriver version
+ Done to enable field application engineers to retrieve drxdriver version
via I2C from SCU RAM.
Not using SCU command interface for SCU register access since no
microcode may be present.
fe->ops.tuner_ops.get_if_frequency(fe, &IF);
start(state, 0, IF);
- /* After set_frontend, stats aren't avaliable */
+ /* After set_frontend, stats aren't available */
p->strength.stat[0].scale = FE_SCALE_RELATIVE;
p->cnr.stat[0].scale = FE_SCALE_NOT_AVAILABLE;
p->block_error.stat[0].scale = FE_SCALE_NOT_AVAILABLE;
sizeof(priv->tuner_i2c_adapter.name));
priv->tuner_i2c_adapter.algo = &rtl2830_tuner_i2c_algo;
priv->tuner_i2c_adapter.algo_data = NULL;
+ priv->tuner_i2c_adapter.dev.parent = &i2c->dev;
i2c_set_adapdata(&priv->tuner_i2c_adapter, priv);
if (i2c_add_adapter(&priv->tuner_i2c_adapter) < 0) {
dev_err(&i2c->dev,
#define ADV7183_VS_FIELD_CTRL_1 0x31 /* Vsync field control 1 */
#define ADV7183_VS_FIELD_CTRL_2 0x32 /* Vsync field control 2 */
#define ADV7183_VS_FIELD_CTRL_3 0x33 /* Vsync field control 3 */
-#define ADV7183_HS_POS_CTRL_1 0x34 /* Hsync positon control 1 */
-#define ADV7183_HS_POS_CTRL_2 0x35 /* Hsync positon control 2 */
-#define ADV7183_HS_POS_CTRL_3 0x36 /* Hsync positon control 3 */
+#define ADV7183_HS_POS_CTRL_1 0x34 /* Hsync position control 1 */
+#define ADV7183_HS_POS_CTRL_2 0x35 /* Hsync position control 2 */
+#define ADV7183_HS_POS_CTRL_3 0x36 /* Hsync position control 3 */
#define ADV7183_POLARITY 0x37 /* Polarity */
#define ADV7183_NTSC_COMB_CTRL 0x38 /* NTSC comb control */
#define ADV7183_PAL_COMB_CTRL 0x39 /* PAL comb control */
break;
case ADV7604_MODE_HDMI:
/* set default prim_mode/vid_std for HDMI
- accoring to [REF_03, c. 4.2] */
+ according to [REF_03, c. 4.2] */
io_write(sd, 0x00, 0x02); /* video std */
io_write(sd, 0x01, 0x06); /* prim mode */
break;
break;
case ADV7842_MODE_HDMI:
/* set default prim_mode/vid_std for HDMI
- accoring to [REF_03, c. 4.2] */
+ according to [REF_03, c. 4.2] */
io_write(sd, 0x00, 0x02); /* video std */
io_write(sd, 0x01, 0x06); /* prim mode */
break;
if (!rc) {
/*
- * If platform_data doesn't specify rc_dev, initilize it
+ * If platform_data doesn't specify rc_dev, initialize it
* internally
*/
rc = rc_allocate_device();
u16 zoom_step;
int ret;
- /* Determine the firmware dependant control range and step values */
+ /* Determine the firmware dependent control range and step values */
ret = m5mols_read_u16(sd, AE_MAX_GAIN_MON, &exposure_max);
if (ret < 0)
return ret;
#include <linux/i2c.h>
#include <linux/log2.h>
#include <linux/module.h>
+#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/pm.h>
#include <linux/regulator/consumer.h>
mutex_unlock(&state->lock);
v4l2_dbg(1, s5c73m3_dbg, sd, "%s: Booting %s (%d)\n",
- __func__, ret ? "failed" : "succeded", ret);
+ __func__, ret ? "failed" : "succeeded", ret);
return ret;
}
/* External master clock frequency */
u32 mclk_frequency;
- /* Video bus type - MIPI-CSI2/paralell */
+ /* Video bus type - MIPI-CSI2/parallel */
enum v4l2_mbus_type bus_type;
const struct s5c73m3_frame_size *sensor_pix_size[2];
* the analog demod.
* If the tuner is not found, it returns -ENODEV.
* If auto-detection is disabled and the tuner doesn't match what it was
- * requred, it returns -EINVAL and fills 'name'.
+ * required, it returns -EINVAL and fills 'name'.
* If the chip is found, it returns the chip ID and fills 'name'.
*/
static int saa711x_detect_chip(struct i2c_client *client,
static int reg_read(struct i2c_client *client, u16 reg, u8 *val)
{
int ret;
- /* We have 16-bit i2c addresses - care for endianess */
+ /* We have 16-bit i2c addresses - care for endianness */
unsigned char data[2] = { reg >> 8, reg & 0xff };
ret = i2c_master_send(client, data, 2);
}
/* following function is used to set ths7303 */
-int ths7303_setval(struct v4l2_subdev *sd, enum ths7303_filter_mode mode)
+static int ths7303_setval(struct v4l2_subdev *sd,
+ enum ths7303_filter_mode mode)
{
struct i2c_client *client = v4l2_get_subdevdata(sd);
struct ths7303_state *state = to_state(sd);
return -EINVAL;
}
state->input = input;
- if (!v4l2_ctrl_g_ctrl(state->mute))
+ if (v4l2_ctrl_g_ctrl(state->mute))
return 0;
if (!v4l2_ctrl_g_ctrl(state->vol))
return 0;
- if (!v4l2_ctrl_g_ctrl(state->bal))
- return 0;
wm8775_set_audio(sd, 1);
return 0;
}
}
btv->std = V4L2_STD_PAL;
init_irqreg(btv);
- v4l2_ctrl_handler_setup(hdl);
+ if (!bttv_tvcards[btv->c.type].no_video)
+ v4l2_ctrl_handler_setup(hdl);
if (hdl->error) {
result = hdl->error;
goto fail2;
};
/* per-mdl bit flags */
-#define CX18_F_M_NEED_SWAP 0 /* mdl buffer data must be endianess swapped */
+#define CX18_F_M_NEED_SWAP 0 /* mdl buffer data must be endianness swapped */
/* per-stream, s_flags */
#define CX18_F_S_CLAIMED 3 /* this stream is claimed */
cx_write(MC417_RWD, regval);
/* Transition RD to effect read transaction across bus.
- * Transtion 0x5000 -> 0x9000 correct (RD/RDY -> WR/RDY)?
+ * Transition 0x5000 -> 0x9000 correct (RD/RDY -> WR/RDY)?
	 * Should it be 0x9000 -> 0xF000 (also why is RDY being set, it's
* input only...)
*/
/* set automatic LED control by FPGA */
pluto_rw(pluto, REG_MISC, MISC_ALED, MISC_ALED);
- /* set data endianess */
+ /* set data endianness */
#ifdef __LITTLE_ENDIAN
pluto_rw(pluto, REG_PIDn(0), PID0_END, PID0_END);
#else
if (fw_debug) {
dev->kthread = kthread_run(saa7164_thread_function, dev,
"saa7164 debug");
- if (!dev->kthread)
+ if (IS_ERR(dev->kthread)) {
+ dev->kthread = NULL;
printk(KERN_ERR "%s() Failed to create "
"debug kernel thread\n", __func__);
+ }
}
} /* != BOARD_UNKNOWN */
if (q_data->fourcc == V4L2_PIX_FMT_H264 &&
vb->vb2_queue->type == V4L2_BUF_TYPE_VIDEO_OUTPUT) {
/*
- * For backwards compatiblity, queuing an empty buffer marks
+ * For backwards compatibility, queuing an empty buffer marks
* the stream end
*/
if (vb2_get_plane_payload(vb, 0) == 0)
dbg("fimc%d: state: 0x%lx", fimc->id, fimc->state);
- /* Enable clocks and perform basic initalization */
+ /* Enable clocks and perform basic initialization */
clk_enable(fimc->clock[CLK_GATE]);
fimc_hw_reset(fimc);
goto dev_unlock;
drvdata = dev_get_drvdata(dev);
- /* Some subdev didn't probe succesfully id drvdata is NULL */
+	/* Some subdev didn't probe successfully if drvdata is NULL */
if (drvdata) {
switch (plat_entity) {
case IDX_FIMC:
struct mmp_camera *cam = mcam_to_cam(mcam);
struct mmp_camera_platform_data *pdata;
- if (mcam->bus_type == V4L2_MBUS_CSI2) {
- cam->mipi_clk = devm_clk_get(mcam->dev, "mipi");
- if ((IS_ERR(cam->mipi_clk) && mcam->dphy[2] == 0))
- return PTR_ERR(cam->mipi_clk);
- }
-
/*
* Turn on power and clocks to the controller.
*/
gpio_set_value(pdata->sensor_power_gpio, 0);
gpio_set_value(pdata->sensor_reset_gpio, 0);
- if (mcam->bus_type == V4L2_MBUS_CSI2 && !IS_ERR(cam->mipi_clk)) {
- if (cam->mipi_clk)
- devm_clk_put(mcam->dev, cam->mipi_clk);
- cam->mipi_clk = NULL;
- }
-
mcam_clk_disable(mcam);
}
return;
/* get the escape clk, this is hard coded */
+ clk_prepare_enable(cam->mipi_clk);
tx_clk_esc = (clk_get_rate(cam->mipi_clk) / 1000000) / 12;
-
+ clk_disable_unprepare(cam->mipi_clk);
/*
* dphy[2] - CSI2_DPHY6:
* bit 0 ~ bit 7: CK Term Enable
return IRQ_RETVAL(handled);
}
-static void mcam_deinit_clk(struct mcam_camera *mcam)
-{
- unsigned int i;
-
- for (i = 0; i < NR_MCAM_CLK; i++) {
- if (!IS_ERR(mcam->clk[i])) {
- if (mcam->clk[i])
- devm_clk_put(mcam->dev, mcam->clk[i]);
- }
- mcam->clk[i] = NULL;
- }
-}
-
static void mcam_init_clk(struct mcam_camera *mcam)
{
unsigned int i;
if (cam == NULL)
return -ENOMEM;
cam->pdev = pdev;
- cam->mipi_clk = NULL;
INIT_LIST_HEAD(&cam->devlist);
mcam = &cam->mcam;
mcam->mclk_div = pdata->mclk_div;
mcam->bus_type = pdata->bus_type;
mcam->dphy = pdata->dphy;
+ if (mcam->bus_type == V4L2_MBUS_CSI2) {
+ cam->mipi_clk = devm_clk_get(mcam->dev, "mipi");
+ if ((IS_ERR(cam->mipi_clk) && mcam->dphy[2] == 0))
+ return PTR_ERR(cam->mipi_clk);
+ }
mcam->mipi_enabled = false;
mcam->lane = pdata->lane;
mcam->chip_id = MCAM_ARMADA610;
*/
ret = mmpcam_power_up(mcam);
if (ret)
- goto out_deinit_clk;
+ return ret;
ret = mccic_register(mcam);
if (ret)
goto out_power_down;
mccic_shutdown(mcam);
out_power_down:
mmpcam_power_down(mcam);
-out_deinit_clk:
- mcam_deinit_clk(mcam);
return ret;
}
static int mmpcam_remove(struct mmp_camera *cam)
{
struct mcam_camera *mcam = &cam->mcam;
- struct mmp_camera_platform_data *pdata;
mmpcam_remove_device(cam);
mccic_shutdown(mcam);
mmpcam_power_down(mcam);
- pdata = cam->pdev->dev.platform_data;
- gpio_free(pdata->sensor_reset_gpio);
- gpio_free(pdata->sensor_power_gpio);
- mcam_deinit_clk(mcam);
- iounmap(cam->power_regs);
- iounmap(mcam->regs);
- kfree(cam);
return 0;
}
* ISP clocks get disabled in suspend(). Similarly, the clocks are reenabled in
 * resume(), and the pipelines are restarted in complete().
*
- * TODO: PM dependencies between the ISP and sensors are not modeled explicitly
+ * TODO: PM dependencies between the ISP and sensors are not modelled explicitly
* yet.
*/
static int isp_pm_prepare(struct device *dev)
if (subdev == NULL)
return -EINVAL;
- mutex_lock(&video->mutex);
-
fmt.pad = pad;
fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
- ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &fmt);
- if (ret == -ENOIOCTLCMD)
- ret = -EINVAL;
+ mutex_lock(&video->mutex);
+ ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &fmt);
mutex_unlock(&video->mutex);
if (ret)
#define S5P_FIMV_R2H_CMD_EDFU_INIT_RET 16
#define S5P_FIMV_R2H_CMD_ERR_RET 32
-/* Dummy definition for MFCv6 compatibilty */
+/* Dummy definition for MFCv6 compatibility */
#define S5P_FIMV_CODEC_H264_MVC_DEC -1
#define S5P_FIMV_R2H_CMD_FIELD_DONE_RET -1
#define S5P_FIMV_MFC_RESET -1
frame_type = s5p_mfc_hw_call(dev->mfc_ops, get_dec_frame_type, dev);
/* Copy timestamp / timecode from decoded src to dst and set
- appropraite flags */
+ appropriate flags */
src_buf = list_entry(ctx->src_queue.next, struct s5p_mfc_buf, list);
list_for_each_entry(dst_buf, &ctx->dst_queue, list) {
if (vb2_dma_contig_plane_dma_addr(dst_buf->b, 0) == dec_y_addr) {
case MFCINST_FINISHING:
case MFCINST_FINISHED:
case MFCINST_RUNNING:
- /* It is higly probable that an error occured
+ /* It is highly probable that an error occurred
* while decoding a frame */
clear_work_bit(ctx);
ctx->state = MFCINST_ERROR;
mfc_debug(1, "Int reason: %d (err: %08x)\n", reason, err);
switch (reason) {
case S5P_MFC_R2H_CMD_ERR_RET:
- /* An error has occured */
+ /* An error has occurred */
if (ctx->state == MFCINST_RUNNING &&
s5p_mfc_hw_call(dev->mfc_ops, err_dec, err) >=
dev->warn_start)
mutex_unlock(&dev->mfc_mutex);
mfc_debug_leave();
return ret;
- /* Deinit when failure occured */
+ /* Deinit when failure occurred */
err_queue_init:
if (dev->num_inst == 1)
s5p_mfc_deinit_hw(dev);
/* Mark context as idle */
clear_work_bit_irqsave(ctx);
/* If instance was initialised then
- * return instance and free reosurces */
+ * return instance and free resources */
if (ctx->inst_no != MFC_NO_INSTANCE_SET) {
mfc_debug(2, "Has to free instance\n");
ctx->state = MFCINST_RETURN_INST;
set_work_bit_irqsave(ctx);
s5p_mfc_clean_ctx_int_flags(ctx);
s5p_mfc_hw_call(dev->mfc_ops, try_run, dev);
- /* Wait until instance is returned or timeout occured */
+ /* Wait until instance is returned or timeout occurred */
if (s5p_mfc_wait_for_done_ctx
(ctx, S5P_MFC_R2H_CMD_CLOSE_INSTANCE_RET, 0)) {
s5p_mfc_clock_off();
} else {
/* In this case bank2 can point to the same address as bank1.
- * Firmware will always occupy the beggining of this area so it is
+ * Firmware will always occupy the beginning of this area so it is
* impossible having a video frame buffer with zero address. */
dev->bank2 = dev->bank1;
}
int num_subframes;
/** specifies to which subframe belong given plane */
int plane2subframe[MXR_MAX_PLANES];
- /** internal code, driver dependant */
+ /** internal code, driver dependent */
unsigned long cookie;
};
mutex_lock(&mdev->mutex);
/* timings change cannot be done while there is an entity
- * dependant on output configuration
+ * dependent on output configuration
*/
if (mdev->n_output > 0) {
mutex_unlock(&mdev->mutex);
mutex_lock(&mdev->mutex);
/* standard change cannot be done while there is an entity
- * dependant on output configuration
+ * dependent on output configuration
*/
if (mdev->n_output > 0) {
mutex_unlock(&mdev->mutex);
if (ctrlclock & LCLK_EN)
CAM_WRITE(pcdev, CTRLCLOCK, ctrlclock);
- /* select bus endianess */
+ /* select bus endianness */
xlate = soc_camera_xlate_by_fourcc(icd, pixfmt);
fmt = xlate->host_fmt;
return 0;
}
-/* timeperframe is arbitrary and continous */
+/* timeperframe is arbitrary and continuous */
static int vidioc_enum_frameintervals(struct file *file, void *priv,
struct v4l2_frmivalenum *fival)
{
fival->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
- /* fill in stepwise (step=1.0 is requred by V4L2 spec) */
+ /* fill in stepwise (step=1.0 is required by V4L2 spec) */
fival->stepwise.min = tpf_min;
fival->stepwise.max = tpf_max;
fival->stepwise.step = (struct v4l2_fract) {1, 1};
* Increment the VSP1 reference count and initialize the device if the first
* reference is taken.
*
- * Return a pointer to the VSP1 device or NULL if an error occured.
+ * Return a pointer to the VSP1 device or NULL if an error occurred.
*/
struct vsp1_device *vsp1_device_get(struct vsp1_device *vsp1)
{
/* ... and the buffers queue... */
video->alloc_ctx = vb2_dma_contig_init_ctx(video->vsp1->dev);
- if (IS_ERR(video->alloc_ctx))
+ if (IS_ERR(video->alloc_ctx)) {
+ ret = PTR_ERR(video->alloc_ctx);
goto error;
+ }
video->queue.type = video->type;
video->queue.io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
cancel_work_sync(&shark->led_work);
}
-#ifdef CONFIG_PM
-static void shark_resume_leds(struct shark_device *shark)
+static inline void shark_resume_leds(struct shark_device *shark)
{
if (test_bit(BLUE_IS_PULSE, &shark->brightness_new))
set_bit(BLUE_PULSE_LED, &shark->brightness_new);
set_bit(RED_LED, &shark->brightness_new);
schedule_work(&shark->led_work);
}
-#endif
#else
static int shark_register_leds(struct shark_device *shark, struct device *dev)
{
cancel_work_sync(&shark->led_work);
}
-#ifdef CONFIG_PM
-static void shark_resume_leds(struct shark_device *shark)
+static inline void shark_resume_leds(struct shark_device *shark)
{
int i;
schedule_work(&shark->led_work);
}
-#endif
#else
static int shark_register_leds(struct shark_device *shark, struct device *dev)
{
*
* @tune_freq: Tune chip to a specific frequency
* @seek_start: Star station seeking
- * @rsq_status: Get Recieved Signal Quality(RSQ) status
- * @rds_blckcnt: Get recived RDS blocks count
+ * @rsq_status: Get Received Signal Quality(RSQ) status
+ * @rds_blckcnt: Get received RDS blocks count
* @phase_diversity: Change phase diversity mode of the tuner
* @phase_div_status: Get phase diversity mode status
* @acf_status: Get the status of Automatically Controlled
So we keep it as-is. */
return -EINVAL;
}
- clamp(freq, FREQ_MIN * FREQ_MUL, FREQ_MAX * FREQ_MUL);
+ freq = clamp(freq, FREQ_MIN * FREQ_MUL, FREQ_MAX * FREQ_MUL);
tea5764_power_up(radio);
tea5764_tune(radio, (freq * 125) / 2);
return 0;
if (f->tuner != 0)
return -EINVAL;
- clamp(freq, TEF6862_LO_FREQ, TEF6862_HI_FREQ);
+ freq = clamp(freq, TEF6862_LO_FREQ, TEF6862_HI_FREQ);
pll = 1964 + ((freq - TEF6862_LO_FREQ) * 20) / FREQ_MUL;
i2cmsg[0] = (MSA_MODE_PRESET << MSA_MODE_SHIFT) | WM_SUB_PLLM;
i2cmsg[1] = (pll >> 8) & 0xff;
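The two clamp() fixes above share one root cause: clamp() returns the clamped value instead of modifying its argument, so discarding the result leaves the frequency unchecked. A small C demonstration using an equivalent macro (not the kernel's clamp(), which additionally type-checks its arguments):

#include <stdio.h>

/* same shape as the kernel macro, minus its type checking */
#define clamp(val, lo, hi) \
	((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

int main(void)
{
	const unsigned int lo = 87500, hi = 108000;	/* kHz, illustrative */
	unsigned int freq = 999999;

	clamp(freq, lo, hi);		/* BUG: return value discarded */
	printf("without assignment: %u\n", freq);

	freq = clamp(freq, lo, hi);	/* FIX: assign the result */
	printf("with assignment:    %u\n", freq);
	return 0;
}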
* 0x68nnnnB7 to 0x6AnnnnB7, the left mouse button generates
* 0x688301b7 and the right one 0x688481b7. All other keys generate
* 0x2nnnnnnn. Position coordinate is encoded in buf[1] and buf[2] with
- * reversed endianess. Extract direction from buffer, rotate endianess,
+ * reversed endianness. Extract direction from buffer, rotate endianness,
* adjust sign and feed the values into stabilize(). The resulting codes
* will be 0x01008000, 0x01007F00, which match the newer devices.
*/
#define RR3_IR_IO_LENGTH_FUZZ 0x04
/* Timeout for end of signal detection */
#define RR3_IR_IO_SIG_TIMEOUT 0x05
-/* Minumum value for pause recognition. */
+/* Minimum value for pause recognition. */
#define RR3_IR_IO_MIN_PAUSE 0x06
/* Clock freq. of EZ-USB chip */
* DNC Output is selected, the other is always off)
*
* @state: ptr to mt2063_state structure
- * @Mode: desired reciever delivery system
+ * @Mode: desired receiver delivery system
*
* Note: Register cache must be valid for it to work
*/
/*
* As defined on EN 300 429, the DVB-C roll-off factor is 0.15.
- * So, the amount of the needed bandwith is given by:
+ * So, the amount of the needed bandwidth is given by:
* Bw = Symbol_rate * (1 + 0.15)
* As such, the maximum symbol rate supported by 6 MHz is given by:
* max_symbol_rate = 6 MHz / 1.15 = 5217391 Bauds
#define V4L2_STD_A2 (V4L2_STD_A2_A | V4L2_STD_A2_B)
#define V4L2_STD_NICAM (V4L2_STD_NICAM_A | V4L2_STD_NICAM_B)
-/* To preserve backward compatibilty,
+/* To preserve backward compatibility,
(std & V4L2_STD_AUDIO) = 0 means that ALL audio stds are supported
*/
usb_set_intfdata(interface, NULL);
err_if:
usb_put_dev(udev);
- kfree(dev);
clear_bit(dev->devno, &cx231xx_devused);
+ kfree(dev);
return retval;
}
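The reordering above fixes a use-after-free: clear_bit() read dev->devno after kfree(dev) had already released the structure. A userspace C model of the rule that every field access must precede the free; names are illustrative:

#include <stdlib.h>

struct device_state {
	int devno;
};

static unsigned long devused;	/* bitmap of device numbers in use */

int main(void)
{
	struct device_state *dev = calloc(1, sizeof(*dev));

	if (!dev)
		return 1;
	dev->devno = 3;
	devused |= 1UL << dev->devno;

	/* teardown: read dev->devno while dev is still valid... */
	devused &= ~(1UL << dev->devno);
	/* ...and only then free; dev must not be touched afterwards */
	free(dev);
	return 0;
}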
{
u8 wbuf[MAX_XFER_SIZE];
u8 mbox = (reg >> 16) & 0xff;
- struct usb_req req = { CMD_MEM_WR, mbox, sizeof(wbuf), wbuf, 0, NULL };
+ struct usb_req req = { CMD_MEM_WR, mbox, 6 + len, wbuf, 0, NULL };
if (6 + len > sizeof(wbuf)) {
dev_warn(&d->udev->dev, "%s: i2c wr: len=%d is too big!\n",
} else {
/* I2C */
u8 buf[MAX_XFER_SIZE];
- struct usb_req req = { CMD_I2C_RD, 0, sizeof(buf),
+ struct usb_req req = { CMD_I2C_RD, 0, 5 + msg[0].len,
buf, msg[1].len, msg[1].buf };
if (5 + msg[0].len > sizeof(buf)) {
dev_warn(&d->udev->dev,
"%s: i2c xfer: len=%d is too big!\n",
KBUILD_MODNAME, msg[0].len);
- return -EOPNOTSUPP;
+ ret = -EOPNOTSUPP;
+ goto unlock;
}
req.mbox |= ((msg[0].addr & 0x80) >> 3);
buf[0] = msg[1].len;
} else {
/* I2C */
u8 buf[MAX_XFER_SIZE];
- struct usb_req req = { CMD_I2C_WR, 0, sizeof(buf), buf,
- 0, NULL };
+ struct usb_req req = { CMD_I2C_WR, 0, 5 + msg[0].len,
+ buf, 0, NULL };
if (5 + msg[0].len > sizeof(buf)) {
dev_warn(&d->udev->dev,
"%s: i2c xfer: len=%d is too big!\n",
KBUILD_MODNAME, msg[0].len);
- return -EOPNOTSUPP;
+ ret = -EOPNOTSUPP;
+ goto unlock;
}
req.mbox |= ((msg[0].addr & 0x80) >> 3);
buf[0] = msg[0].len;
ret = -EOPNOTSUPP;
}
+unlock:
mutex_unlock(&d->i2c_mutex);
if (ret < 0)
/* XXX: that same ID [0ccd:0099] is used by af9015 driver too */
{ DVB_USB_DEVICE(USB_VID_TERRATEC, 0x0099,
&af9035_props, "TerraTec Cinergy T Stick Dual RC (rev. 2)", NULL) },
+ { DVB_USB_DEVICE(USB_VID_LEADTEK, 0x6a05,
+ &af9035_props, "Leadtek WinFast DTV Dongle Dual", NULL) },
{ }
};
MODULE_DEVICE_TABLE(usb, af9035_id_table);
struct mxl111sf_adap_state *adap_state = &state->adap_state[fe->id];
int err;
- /* exit if we didnt initialize the driver yet */
+ /* exit if we didn't initialize the driver yet */
if (!state->chip_id) {
mxl_debug("driver not yet initialized, exit.");
goto fail;
struct mxl111sf_adap_state *adap_state = &state->adap_state[fe->id];
int err;
- /* exit if we didnt initialize the driver yet */
+ /* exit if we didn't initialize the driver yet */
if (!state->chip_id) {
mxl_debug("driver not yet initialized, exit.");
goto fail;
if (rxlen > 62) {
err("i2c RX buffer can't exceed 62 bytes (dev 0x%02x)",
device_addr);
- txlen = 62;
+ rxlen = 62;
}
b[0] = I2C_SPEED_100KHZ_BIT;
em28xx_videodbg("users=%d\n", dev->users);
- mutex_lock(&dev->lock);
vb2_fop_release(filp);
+ mutex_lock(&dev->lock);
if (dev->users == 1) {
/* the device is already disconnect,
s32 nToSkip =
sd->swapRB * (gspca_dev->cam.cam_mode[mode].bytesperline + 1);
- /* Test only against 0202h, so endianess does not matter */
+ /* Test only against 0202h, so endianness does not matter */
switch (*(s16 *) data) {
case 0x0202: /* End of frame, start a new one */
gspca_frame_add(gspca_dev, LAST_PACKET, NULL, 0);
#if IS_ENABLED(CONFIG_INPUT)
static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
u8 *data, /* interrupt packet data */
- int len) /* interrput packet length */
+ int len) /* interrupt packet length */
{
int ret = -EINVAL;
#if IS_ENABLED(CONFIG_INPUT)
static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
u8 *data, /* interrupt packet data */
- int len) /* interrput packet length */
+ int len) /* interrupt packet length */
{
int ret = -EINVAL;
u8 data0, data1;
	/* set serial interface clock divider (30 MHz / (0x1f * 16 + 2) = 60240 Hz) */
reg_w(gspca_dev, STK1135_REG_SICTL + 2, 0x1f);
+
+ /* wait a while for sensor to catch up */
+ udelay(1000);
}
static void stk1135_camera_disable(struct gspca_dev *gspca_dev)
struct sd *sd = (struct sd *) gspca_dev;
struct cam *cam = &gspca_dev->cam;
- /* Give the camera some time to settle, otherwise initalization will
+ /* Give the camera some time to settle, otherwise initialization will
fail on hotplug, and yes it really needs a full second. */
msleep(1000);
{USB_DEVICE(0x055f, 0xc650), BS(SPCA533, 0)},
{USB_DEVICE(0x05da, 0x1018), BS(SPCA504B, 0)},
{USB_DEVICE(0x06d6, 0x0031), BS(SPCA533, 0)},
+ {USB_DEVICE(0x06d6, 0x0041), BS(SPCA504B, 0)},
{USB_DEVICE(0x0733, 0x1311), BS(SPCA533, 0)},
{USB_DEVICE(0x0733, 0x1314), BS(SPCA533, 0)},
{USB_DEVICE(0x0733, 0x2211), BS(SPCA533, 0)},
#if IS_ENABLED(CONFIG_INPUT)
static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
u8 *data, /* interrupt packet data */
- int len) /* interrput packet length */
+ int len) /* interrupt packet length */
{
if (len == 8 && data[4] == 1) {
input_report_key(gspca_dev->input_dev, KEY_CAMERA, 1);
/* Set the leds off */
pwc_set_leds(pdev, 0, 0);
- /* Setup intial videomode */
+ /* Setup initial videomode */
rc = pwc_set_video_mode(pdev, MAX_WIDTH, MAX_HEIGHT,
V4L2_PIX_FMT_YUV420, 30, &compression, 1);
if (rc)
#define USBTV_ISOC_TRANSFERS 16
#define USBTV_ISOC_PACKETS 8
-#define USBTV_WIDTH 720
-#define USBTV_HEIGHT 480
-
#define USBTV_CHUNK_SIZE 256
#define USBTV_CHUNK 240
-#define USBTV_CHUNKS (USBTV_WIDTH * USBTV_HEIGHT \
- / 4 / USBTV_CHUNK)
/* Chunk header. */
#define USBTV_MAGIC_OK(chunk) ((be32_to_cpu(chunk[0]) & 0xff000000) \
#define USBTV_ODD(chunk) ((be32_to_cpu(chunk[0]) & 0x0000f000) >> 15)
#define USBTV_CHUNK_NO(chunk) (be32_to_cpu(chunk[0]) & 0x00000fff)
+#define USBTV_TV_STD (V4L2_STD_525_60 | V4L2_STD_PAL)
+
+/* parameters for supported TV norms */
+struct usbtv_norm_params {
+ v4l2_std_id norm;
+ int cap_width, cap_height;
+};
+
+static struct usbtv_norm_params norm_params[] = {
+ {
+ .norm = V4L2_STD_525_60,
+ .cap_width = 720,
+ .cap_height = 480,
+ },
+ {
+ .norm = V4L2_STD_PAL,
+ .cap_width = 720,
+ .cap_height = 576,
+ }
+};
+
/* A single videobuf2 frame buffer. */
struct usbtv_buf {
struct vb2_buffer vb;
USBTV_COMPOSITE_INPUT,
USBTV_SVIDEO_INPUT,
} input;
+ v4l2_std_id norm;
+ int width, height;
+ int n_chunks;
int iso_size;
unsigned int sequence;
struct urb *isoc_urbs[USBTV_ISOC_TRANSFERS];
};
+static int usbtv_configure_for_norm(struct usbtv *usbtv, v4l2_std_id norm)
+{
+ int i, ret = 0;
+ struct usbtv_norm_params *params = NULL;
+
+ for (i = 0; i < ARRAY_SIZE(norm_params); i++) {
+ if (norm_params[i].norm & norm) {
+ params = &norm_params[i];
+ break;
+ }
+ }
+
+ if (params) {
+ usbtv->width = params->cap_width;
+ usbtv->height = params->cap_height;
+ usbtv->n_chunks = usbtv->width * usbtv->height
+ / 4 / USBTV_CHUNK;
+ usbtv->norm = params->norm;
+ } else
+ ret = -EINVAL;
+
+ return ret;
+}
+
static int usbtv_set_regs(struct usbtv *usbtv, const u16 regs[][2], int size)
{
int ret;
return ret;
}
+static int usbtv_select_norm(struct usbtv *usbtv, v4l2_std_id norm)
+{
+ int ret;
+ static const u16 pal[][2] = {
+ { USBTV_BASE + 0x001a, 0x0068 },
+ { USBTV_BASE + 0x010e, 0x0072 },
+ { USBTV_BASE + 0x010f, 0x00a2 },
+ { USBTV_BASE + 0x0112, 0x00b0 },
+ { USBTV_BASE + 0x0117, 0x0001 },
+ { USBTV_BASE + 0x0118, 0x002c },
+ { USBTV_BASE + 0x012d, 0x0010 },
+ { USBTV_BASE + 0x012f, 0x0020 },
+ { USBTV_BASE + 0x024f, 0x0002 },
+ { USBTV_BASE + 0x0254, 0x0059 },
+ { USBTV_BASE + 0x025a, 0x0016 },
+ { USBTV_BASE + 0x025b, 0x0035 },
+ { USBTV_BASE + 0x0263, 0x0017 },
+ { USBTV_BASE + 0x0266, 0x0016 },
+ { USBTV_BASE + 0x0267, 0x0036 }
+ };
+
+ static const u16 ntsc[][2] = {
+ { USBTV_BASE + 0x001a, 0x0079 },
+ { USBTV_BASE + 0x010e, 0x0068 },
+ { USBTV_BASE + 0x010f, 0x009c },
+ { USBTV_BASE + 0x0112, 0x00f0 },
+ { USBTV_BASE + 0x0117, 0x0000 },
+ { USBTV_BASE + 0x0118, 0x00fc },
+ { USBTV_BASE + 0x012d, 0x0004 },
+ { USBTV_BASE + 0x012f, 0x0008 },
+ { USBTV_BASE + 0x024f, 0x0001 },
+ { USBTV_BASE + 0x0254, 0x005f },
+ { USBTV_BASE + 0x025a, 0x0012 },
+ { USBTV_BASE + 0x025b, 0x0001 },
+ { USBTV_BASE + 0x0263, 0x001c },
+ { USBTV_BASE + 0x0266, 0x0011 },
+ { USBTV_BASE + 0x0267, 0x0005 }
+ };
+
+ ret = usbtv_configure_for_norm(usbtv, norm);
+
+ if (!ret) {
+ if (norm & V4L2_STD_525_60)
+ ret = usbtv_set_regs(usbtv, ntsc, ARRAY_SIZE(ntsc));
+ else if (norm & V4L2_STD_PAL)
+ ret = usbtv_set_regs(usbtv, pal, ARRAY_SIZE(pal));
+ }
+
+ return ret;
+}
+
static int usbtv_setup_capture(struct usbtv *usbtv)
{
int ret;
{ USBTV_BASE + 0x0284, 0x0088 },
{ USBTV_BASE + 0x0003, 0x0004 },
- { USBTV_BASE + 0x001a, 0x0079 },
{ USBTV_BASE + 0x0100, 0x00d3 },
- { USBTV_BASE + 0x010e, 0x0068 },
- { USBTV_BASE + 0x010f, 0x009c },
- { USBTV_BASE + 0x0112, 0x00f0 },
{ USBTV_BASE + 0x0115, 0x0015 },
- { USBTV_BASE + 0x0117, 0x0000 },
- { USBTV_BASE + 0x0118, 0x00fc },
- { USBTV_BASE + 0x012d, 0x0004 },
- { USBTV_BASE + 0x012f, 0x0008 },
{ USBTV_BASE + 0x0220, 0x002e },
{ USBTV_BASE + 0x0225, 0x0008 },
{ USBTV_BASE + 0x024e, 0x0002 },
- { USBTV_BASE + 0x024f, 0x0001 },
- { USBTV_BASE + 0x0254, 0x005f },
- { USBTV_BASE + 0x025a, 0x0012 },
- { USBTV_BASE + 0x025b, 0x0001 },
- { USBTV_BASE + 0x0263, 0x001c },
- { USBTV_BASE + 0x0266, 0x0011 },
- { USBTV_BASE + 0x0267, 0x0005 },
{ USBTV_BASE + 0x024e, 0x0002 },
{ USBTV_BASE + 0x024f, 0x0002 },
};
if (ret)
return ret;
+ ret = usbtv_select_norm(usbtv, usbtv->norm);
+ if (ret)
+ return ret;
+
ret = usbtv_select_input(usbtv, usbtv->input);
if (ret)
return ret;
frame_id = USBTV_FRAME_ID(chunk);
odd = USBTV_ODD(chunk);
chunk_no = USBTV_CHUNK_NO(chunk);
- if (chunk_no >= USBTV_CHUNKS)
+ if (chunk_no >= usbtv->n_chunks)
return;
/* Beginning of a frame. */
usbtv->chunks_done++;
/* Last chunk in a frame, signalling an end */
- if (odd && chunk_no == USBTV_CHUNKS-1) {
+ if (odd && chunk_no == usbtv->n_chunks-1) {
int size = vb2_plane_size(&buf->vb, 0);
enum vb2_buffer_state state = usbtv->chunks_done ==
- USBTV_CHUNKS ?
+ usbtv->n_chunks ?
VB2_BUF_STATE_DONE :
VB2_BUF_STATE_ERROR;
static int usbtv_enum_input(struct file *file, void *priv,
struct v4l2_input *i)
{
+ struct usbtv *dev = video_drvdata(file);
+
switch (i->index) {
case USBTV_COMPOSITE_INPUT:
strlcpy(i->name, "Composite", sizeof(i->name));
}
i->type = V4L2_INPUT_TYPE_CAMERA;
- i->std = V4L2_STD_525_60;
+ i->std = dev->vdev.tvnorms;
return 0;
}
static int usbtv_fmt_vid_cap(struct file *file, void *priv,
struct v4l2_format *f)
{
- f->fmt.pix.width = USBTV_WIDTH;
- f->fmt.pix.height = USBTV_HEIGHT;
+ struct usbtv *usbtv = video_drvdata(file);
+
+ f->fmt.pix.width = usbtv->width;
+ f->fmt.pix.height = usbtv->height;
f->fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
f->fmt.pix.field = V4L2_FIELD_INTERLACED;
- f->fmt.pix.bytesperline = USBTV_WIDTH * 2;
+ f->fmt.pix.bytesperline = usbtv->width * 2;
f->fmt.pix.sizeimage = (f->fmt.pix.bytesperline * f->fmt.pix.height);
f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
- f->fmt.pix.priv = 0;
+
return 0;
}
static int usbtv_g_std(struct file *file, void *priv, v4l2_std_id *norm)
{
- *norm = V4L2_STD_525_60;
+ struct usbtv *usbtv = video_drvdata(file);
+ *norm = usbtv->norm;
return 0;
}
+static int usbtv_s_std(struct file *file, void *priv, v4l2_std_id norm)
+{
+ int ret = -EINVAL;
+ struct usbtv *usbtv = video_drvdata(file);
+
+ if ((norm & V4L2_STD_525_60) || (norm & V4L2_STD_PAL))
+ ret = usbtv_select_norm(usbtv, norm);
+
+ return ret;
+}
+
static int usbtv_g_input(struct file *file, void *priv, unsigned int *i)
{
struct usbtv *usbtv = video_drvdata(file);
return usbtv_select_input(usbtv, i);
}
-static int usbtv_s_std(struct file *file, void *priv, v4l2_std_id norm)
-{
- if (norm & V4L2_STD_525_60)
- return 0;
- return -EINVAL;
-}
-
struct v4l2_ioctl_ops usbtv_ioctl_ops = {
.vidioc_querycap = usbtv_querycap,
.vidioc_enum_input = usbtv_enum_input,
const struct v4l2_format *v4l_fmt, unsigned int *nbuffers,
unsigned int *nplanes, unsigned int sizes[], void *alloc_ctxs[])
{
+ struct usbtv *usbtv = vb2_get_drv_priv(vq);
+
if (*nbuffers < 2)
*nbuffers = 2;
*nplanes = 1;
- sizes[0] = USBTV_WIDTH * USBTV_HEIGHT / 2 * sizeof(u32);
+ sizes[0] = USBTV_CHUNK * usbtv->n_chunks * 2 * sizeof(u32);
return 0;
}
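queue_setup() above now derives the vb2 plane size from the per-norm chunk count rather than fixed NTSC constants. A quick userspace check of the arithmetic for both supported norms, using the same constants and modelling sizeof(u32) as 4 bytes:

#include <stdio.h>

#define USBTV_CHUNK 240	/* u32 words of pixel data per chunk */

struct norm_params {
	const char *name;
	int cap_width, cap_height;
};

int main(void)
{
	static const struct norm_params norms[] = {
		{ "NTSC (V4L2_STD_525_60)", 720, 480 },
		{ "PAL", 720, 576 },
	};

	for (int i = 0; i < 2; i++) {
		int n_chunks = norms[i].cap_width * norms[i].cap_height
			       / 4 / USBTV_CHUNK;
		long plane_size = (long)USBTV_CHUNK * n_chunks * 2 * 4;

		printf("%s: %d chunks, plane size %ld bytes\n",
		       norms[i].name, n_chunks, plane_size);
	}
	return 0;
}

For NTSC this reproduces 720 x 480 x 2 = 691200 bytes and for PAL 720 x 576 x 2 = 829440 bytes, matching a full interlaced YUYV frame per norm.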
return -ENOMEM;
usbtv->dev = dev;
usbtv->udev = usb_get_dev(interface_to_usbdev(intf));
+
usbtv->iso_size = size;
+
+ (void)usbtv_configure_for_norm(usbtv, V4L2_STD_525_60);
+
spin_lock_init(&usbtv->buflock);
mutex_init(&usbtv->v4l2_lock);
mutex_init(&usbtv->vb2q_lock);
usbtv->vdev.release = video_device_release_empty;
usbtv->vdev.fops = &usbtv_fops;
usbtv->vdev.ioctl_ops = &usbtv_ioctl_ops;
- usbtv->vdev.tvnorms = V4L2_STD_525_60;
+ usbtv->vdev.tvnorms = USBTV_TV_STD;
usbtv->vdev.queue = &usbtv->vb2q;
usbtv->vdev.lock = &usbtv->v4l2_lock;
set_bit(V4L2_FL_USE_FH_PRIO, &usbtv->vdev.flags);
*
* SOF = ((SOF2 - SOF1) * PTS + SOF1 * STC2 - SOF2 * STC1) / (STC2 - STC1) (1)
*
- * to avoid loosing precision in the division. Similarly, the host timestamp is
+ * to avoid losing precision in the division. Similarly, the host timestamp is
* computed with
*
* TS = ((TS2 - TS1) * PTS + TS1 * SOF2 - TS2 * SOF1) / (SOF2 - SOF1) (2)
"Advanced Simple",
"Core",
"Simple Scalable",
- "Advanced Coding Efficency",
+ "Advanced Coding Efficiency",
NULL,
};
__vb2_plane_dmabuf_put(q, &vb->planes[plane]);
}
+/**
+ * __setup_lengths() - setup initial lengths for every plane in
+ * every buffer on the queue
+ */
+static void __setup_lengths(struct vb2_queue *q, unsigned int n)
+{
+ unsigned int buffer, plane;
+ struct vb2_buffer *vb;
+
+ for (buffer = q->num_buffers; buffer < q->num_buffers + n; ++buffer) {
+ vb = q->bufs[buffer];
+ if (!vb)
+ continue;
+
+ for (plane = 0; plane < vb->num_planes; ++plane)
+ vb->v4l2_planes[plane].length = q->plane_sizes[plane];
+ }
+}
+
/**
* __setup_offsets() - setup unique offsets ("cookies") for every plane in
* every buffer on the queue
continue;
for (plane = 0; plane < vb->num_planes; ++plane) {
- vb->v4l2_planes[plane].length = q->plane_sizes[plane];
vb->v4l2_planes[plane].m.mem_offset = off;
dprintk(3, "Buffer %d, plane %d offset 0x%08lx\n",
q->bufs[q->num_buffers + buffer] = vb;
}
+ __setup_lengths(q, buffer);
if (memory == V4L2_MEMORY_MMAP)
__setup_offsets(q, buffer);
return -EINVAL;
}
- if (eb->flags & ~O_CLOEXEC) {
- dprintk(1, "Queue does support only O_CLOEXEC flag\n");
+ if (eb->flags & ~(O_CLOEXEC | O_ACCMODE)) {
+ dprintk(1, "Queue does support only O_CLOEXEC and access mode flags\n");
return -EINVAL;
}
vb_plane = &vb->planes[eb->plane];
- dbuf = call_memop(q, get_dmabuf, vb_plane->mem_priv);
+ dbuf = call_memop(q, get_dmabuf, vb_plane->mem_priv, eb->flags & O_ACCMODE);
if (IS_ERR_OR_NULL(dbuf)) {
dprintk(1, "Failed to export buffer %d, plane %d\n",
eb->index, eb->plane);
return -EINVAL;
}
- ret = dma_buf_fd(dbuf, eb->flags);
+ ret = dma_buf_fd(dbuf, eb->flags & ~O_ACCMODE);
if (ret < 0) {
dprintk(3, "buffer %d, plane %d failed to export (%d)\n",
eb->index, eb->plane, ret);
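The EXPBUF changes above split the user-supplied flags: the access-mode bits travel to the allocator's get_dmabuf op while the remaining bits (O_CLOEXEC) go to dma_buf_fd(). A userspace sketch of that flag partitioning; export_buf() is a stand-in, not the V4L2 entry point:

#include <fcntl.h>
#include <stdio.h>

/* stand-in for the VIDIOC_EXPBUF handler's flag validation */
static int export_buf(int flags)
{
	if (flags & ~(O_CLOEXEC | O_ACCMODE)) {
		fprintf(stderr, "only O_CLOEXEC and an access mode are allowed\n");
		return -1;
	}

	int accmode = flags & O_ACCMODE;	/* handed to the exporter */
	int fd_flags = flags & ~O_ACCMODE;	/* handed to fd installation */

	printf("accmode %#x, fd flags %#x\n", accmode, fd_flags);
	return 0;
}

int main(void)
{
	export_buf(O_CLOEXEC | O_RDWR);		/* accepted */
	export_buf(O_CLOEXEC | O_APPEND);	/* rejected */
	return 0;
}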
return sgt;
}
-static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv)
+static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv, unsigned long flags)
{
struct vb2_dc_buf *buf = buf_priv;
struct dma_buf *dbuf;
if (WARN_ON(!buf->sgt_base))
return NULL;
- dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, 0);
+ dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags);
if (IS_ERR(dbuf))
return NULL;
buf->pages = kzalloc(buf->num_pages * sizeof(struct page *),
GFP_KERNEL);
if (!buf->pages)
- return NULL;
+ goto userptr_fail_alloc_pages;
num_pages_from_user = get_user_pages(current, current->mm,
vaddr & PAGE_MASK,
while (--num_pages_from_user >= 0)
put_page(buf->pages[num_pages_from_user]);
kfree(buf->pages);
+userptr_fail_alloc_pages:
kfree(buf);
return NULL;
}
select MFD_CORE
select REGMAP_I2C
select REGMAP_IRQ
- depends on I2C && OF
+ depends on I2C=y && OF
help
The ams AS3722 is a compact system PMU suitable for mobile phones,
tablets etc. It has 4 DC/DC step-down regulators, 3 DC/DC step-down
.iTCO_version = 2,
},
[LPC_WPT_LP] = {
- .name = "Lynx Point_LP",
+ .name = "Wildcat Point_LP",
.iTCO_version = 2,
},
};
int sec_reg_read(struct sec_pmic_dev *sec_pmic, u8 reg, void *dest)
{
- return regmap_read(sec_pmic->regmap, reg, dest);
+ return regmap_read(sec_pmic->regmap_pmic, reg, dest);
}
EXPORT_SYMBOL_GPL(sec_reg_read);
int sec_bulk_read(struct sec_pmic_dev *sec_pmic, u8 reg, int count, u8 *buf)
{
- return regmap_bulk_read(sec_pmic->regmap, reg, buf, count);
+ return regmap_bulk_read(sec_pmic->regmap_pmic, reg, buf, count);
}
EXPORT_SYMBOL_GPL(sec_bulk_read);
int sec_reg_write(struct sec_pmic_dev *sec_pmic, u8 reg, u8 value)
{
- return regmap_write(sec_pmic->regmap, reg, value);
+ return regmap_write(sec_pmic->regmap_pmic, reg, value);
}
EXPORT_SYMBOL_GPL(sec_reg_write);
int sec_bulk_write(struct sec_pmic_dev *sec_pmic, u8 reg, int count, u8 *buf)
{
- return regmap_raw_write(sec_pmic->regmap, reg, buf, count);
+ return regmap_raw_write(sec_pmic->regmap_pmic, reg, buf, count);
}
EXPORT_SYMBOL_GPL(sec_bulk_write);
int sec_reg_update(struct sec_pmic_dev *sec_pmic, u8 reg, u8 val, u8 mask)
{
- return regmap_update_bits(sec_pmic->regmap, reg, mask, val);
+ return regmap_update_bits(sec_pmic->regmap_pmic, reg, mask, val);
}
EXPORT_SYMBOL_GPL(sec_reg_update);
.cache_type = REGCACHE_FLAT,
};
+static const struct regmap_config sec_rtc_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+};
+
#ifdef CONFIG_OF
/*
* Only the common platform data elements for s5m8767 are parsed here from the
break;
}
- sec_pmic->regmap = devm_regmap_init_i2c(i2c, regmap);
- if (IS_ERR(sec_pmic->regmap)) {
- ret = PTR_ERR(sec_pmic->regmap);
+ sec_pmic->regmap_pmic = devm_regmap_init_i2c(i2c, regmap);
+ if (IS_ERR(sec_pmic->regmap_pmic)) {
+ ret = PTR_ERR(sec_pmic->regmap_pmic);
dev_err(&i2c->dev, "Failed to allocate register map: %d\n",
ret);
return ret;
sec_pmic->rtc = i2c_new_dummy(i2c->adapter, RTC_I2C_ADDR);
i2c_set_clientdata(sec_pmic->rtc, sec_pmic);
+ sec_pmic->regmap_rtc = devm_regmap_init_i2c(sec_pmic->rtc,
+ &sec_rtc_regmap_config);
+ if (IS_ERR(sec_pmic->regmap_rtc)) {
+ ret = PTR_ERR(sec_pmic->regmap_rtc);
+ dev_err(&i2c->dev, "Failed to allocate RTC register map: %d\n",
+ ret);
+ return ret;
+ }
+
if (pdata && pdata->cfg_pmic_irq)
pdata->cfg_pmic_irq();
switch (type) {
case S5M8763X:
- ret = regmap_add_irq_chip(sec_pmic->regmap, sec_pmic->irq,
+ ret = regmap_add_irq_chip(sec_pmic->regmap_pmic, sec_pmic->irq,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
sec_pmic->irq_base, &s5m8763_irq_chip,
&sec_pmic->irq_data);
break;
case S5M8767X:
- ret = regmap_add_irq_chip(sec_pmic->regmap, sec_pmic->irq,
+ ret = regmap_add_irq_chip(sec_pmic->regmap_pmic, sec_pmic->irq,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
sec_pmic->irq_base, &s5m8767_irq_chip,
&sec_pmic->irq_data);
break;
case S2MPS11X:
- ret = regmap_add_irq_chip(sec_pmic->regmap, sec_pmic->irq,
+ ret = regmap_add_irq_chip(sec_pmic->regmap_pmic, sec_pmic->irq,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
sec_pmic->irq_base, &s2mps11_irq_chip,
&sec_pmic->irq_data);
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/io.h>
+#include <linux/sched.h>
#include <linux/mfd/core.h>
#include <linux/mfd/ti_ssp.h>
cells[id].id = id;
cells[id].name = data->dev_name;
cells[id].platform_data = data->pdata;
- cells[id].data_size = data->pdata_size;
}
error = mfd_add_devices(dev, 0, cells, 2, NULL, 0, NULL);
{
char name[ENCLOSURE_NAME_SIZE];
+ /*
+ * In odd circumstances, like multipath devices, something else may
+ * already have removed the links, so check for this condition first.
+ */
+ if (!cdev->dev->kobj.sd)
+ return;
+
enclosure_link_name(cdev, name);
sysfs_remove_link(&cdev->dev->kobj, name);
sysfs_remove_link(&cdev->cdev.kobj, "device");
#define MEI_DEV_ID_PPT_2 0x1CBA /* Panther Point */
#define MEI_DEV_ID_PPT_3 0x1DBA /* Panther Point */
-#define MEI_DEV_ID_LPT 0x8C3A /* Lynx Point */
+#define MEI_DEV_ID_LPT_H 0x8C3A /* Lynx Point H */
#define MEI_DEV_ID_LPT_W 0x8D3A /* Lynx Point - Wellsburg */
#define MEI_DEV_ID_LPT_LP 0x9C3A /* Lynx Point LP */
+#define MEI_DEV_ID_LPT_HR 0x8CBA /* Lynx Point H Refresh */
+
+#define MEI_DEV_ID_WPT_LP 0x9CBA /* Wildcat Point LP */
/*
* MEI HW Section
*/
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_PPT_1)},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_PPT_2)},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_PPT_3)},
- {PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_LPT)},
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_LPT_H)},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_LPT_W)},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_LPT_LP)},
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_LPT_HR)},
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_WPT_LP)},
/* required last entry */
{0, }
{
struct mic_vdev *mvdev = to_micvdev(vdev);
struct mic_device_ctrl __iomem *dc = mvdev->dc;
- int retry = 100, i;
+ int retry;
iowrite8(0, &dc->host_ack);
iowrite8(1, &dc->vdev_reset);
mic_send_intr(mvdev->mdev, mvdev->c2h_vdev_db);
/* Wait till host completes all card accesses and acks the reset */
- for (i = retry; i--;) {
+ for (retry = 100; retry--;) {
if (ioread8(&dc->host_ack))
break;
msleep(100);
/*
* The virtio_ring code calls this API when it wants to notify the Host.
*/
-static void mic_notify(struct virtqueue *vq)
+static bool mic_notify(struct virtqueue *vq)
{
struct mic_vdev *mvdev = vq->priv;
mic_send_intr(mvdev->mdev, mvdev->c2h_vdev_db);
+ return true;
}
static void mic_del_vq(struct virtqueue *vq, int n)
/* First assign the vring's allocated in host memory */
vqconfig = mic_vq_config(mvdev->desc) + index;
memcpy_fromio(&config, vqconfig, sizeof(config));
- _vr_size = vring_size(config.num, MIC_VIRTIO_RING_ALIGN);
+ _vr_size = vring_size(le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN);
vr_size = PAGE_ALIGN(_vr_size + sizeof(struct _mic_vring_info));
- va = mic_card_map(mvdev->mdev, config.address, vr_size);
+ va = mic_card_map(mvdev->mdev, le64_to_cpu(config.address), vr_size);
if (!va)
return ERR_PTR(-ENOMEM);
mvdev->vr[index] = va;
memset_io(va, 0x0, _vr_size);
- vq = vring_new_virtqueue(index,
- config.num, MIC_VIRTIO_RING_ALIGN, vdev,
- false,
- va, mic_notify, callback, name);
+ vq = vring_new_virtqueue(index, le16_to_cpu(config.num),
+ MIC_VIRTIO_RING_ALIGN, vdev, false,
+ (void __force *)va, mic_notify, callback,
+ name);
if (!vq) {
err = -ENOMEM;
goto unmap;
/* Allocate and reassign used ring now */
mvdev->used_size[index] = PAGE_ALIGN(sizeof(__u16) * 3 +
- sizeof(struct vring_used_elem) * config.num);
+ sizeof(struct vring_used_elem) *
+ le16_to_cpu(config.num));
used = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
get_order(mvdev->used_size[index]));
if (!used) {
{
struct mic_vdev *mvdev = to_micvdev(vdev);
struct mic_device_ctrl __iomem *dc = mvdev->dc;
- int i, err, retry = 100;
+ int i, err, retry;
/* We must have this many virtqueues. */
if (nvqs > ioread8(&mvdev->desc->num_vq))
* rings have been re-assigned.
*/
mic_send_intr(mvdev->mdev, mvdev->c2h_vdev_db);
- for (i = retry; i--;) {
+ for (retry = 100; retry--;) {
if (!ioread8(&dc->used_address_updated))
break;
msleep(100);
struct device *dev;
int ret;
- for (i = mic_aligned_size(struct mic_bootparam);
- i < MIC_DP_SIZE; i += mic_total_desc_size(d)) {
+ for (i = sizeof(struct mic_bootparam); i < MIC_DP_SIZE;
+ i += mic_total_desc_size(d)) {
d = mdrv->dp + i;
dc = (void __iomem *)d + mic_aligned_desc_size(d);
/*
continue;
/* device already exists */
- dev = device_find_child(mdrv->dev, d, mic_match_desc);
+ dev = device_find_child(mdrv->dev, (void __force *)d,
+ mic_match_desc);
if (dev) {
if (remove)
iowrite8(MIC_VIRTIO_PARAM_DEV_REMOVE,
static inline unsigned mic_desc_size(struct mic_device_desc __iomem *desc)
{
- return mic_aligned_size(*desc)
- + ioread8(&desc->num_vq) * mic_aligned_size(struct mic_vqconfig)
+ return sizeof(*desc)
+ + ioread8(&desc->num_vq) * sizeof(struct mic_vqconfig)
+ ioread8(&desc->feature_len) * 2
+ ioread8(&desc->config_len);
}
}
static inline unsigned mic_total_desc_size(struct mic_device_desc __iomem *desc)
{
- return mic_aligned_desc_size(desc) +
- mic_aligned_size(struct mic_device_ctrl);
+ return mic_aligned_desc_size(desc) + sizeof(struct mic_device_ctrl);
}
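
A worked instance of the sizing above may help; the concrete field values here are illustrative only:

/*
 * Illustrative mic_desc_size() arithmetic: for a descriptor with
 * num_vq = 2, feature_len = 4 and config_len = 8, the hunk above sums
 *
 *     sizeof(struct mic_device_desc)       descriptor header
 *   + 2 * sizeof(struct mic_vqconfig)      one vqconfig per vring
 *   + 4 * 2                                device + guest feature bitmaps
 *   + 8                                    device config space
 *
 * mic_total_desc_size() then appends sizeof(struct mic_device_ctrl).
 */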
int mic_devices_init(struct mic_driver *mdrv);
{
struct mic_bootparam *bootparam = mdev->dp;
- bootparam->magic = MIC_MAGIC;
+ bootparam->magic = cpu_to_le32(MIC_MAGIC);
bootparam->c2h_shutdown_db = mdev->shutdown_db;
bootparam->h2c_shutdown_db = -1;
bootparam->h2c_config_db = -1;
* We are copying from IO below and should ideally use something
* like copy_to_user_fromio(..) if it existed.
*/
- if (copy_to_user(ubuf, dbuf, len)) {
+ if (copy_to_user(ubuf, (void __force *)dbuf, len)) {
err = -EFAULT;
dev_err(mic_dev(mvdev), "%s %d err %d\n",
__func__, __LINE__, err);
* We are copying to IO below and should ideally use something
* like copy_from_user_toio(..) if it existed.
*/
- if (copy_from_user(dbuf, ubuf, len)) {
+ if (copy_from_user((void __force *)dbuf, ubuf, len)) {
err = -EFAULT;
dev_err(mic_dev(mvdev), "%s %d err %d\n",
__func__, __LINE__, err);
continue;
}
mvdev->mvr[i].vrh.vring.used =
- mvdev->mdev->aper.va +
+ (void __force *)mvdev->mdev->aper.va +
le64_to_cpu(vqconfig[i].used_address);
}
void __user *argp)
{
DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wake);
- int ret = 0, retry = 100, i;
+ int ret = 0, retry, i;
struct mic_bootparam *bootparam = mvdev->mdev->dp;
s8 db = bootparam->h2c_config_db;
mvdev->dc->config_change = MIC_VIRTIO_PARAM_CONFIG_CHANGED;
mvdev->mdev->ops->send_intr(mvdev->mdev, db);
- for (i = retry; i--;) {
+ for (retry = 100; retry--;) {
ret = wait_event_timeout(wake,
mvdev->dc->guest_ack, msecs_to_jiffies(100));
if (ret)
}
/* Find the first free device page entry */
- for (i = mic_aligned_size(struct mic_bootparam);
+ for (i = sizeof(struct mic_bootparam);
i < MIC_DP_SIZE - mic_total_desc_size(dd_config);
i += mic_total_desc_size(devp)) {
devp = mdev->dp + i;
char irqname[10];
struct mic_bootparam *bootparam = mdev->dp;
u16 num;
+ dma_addr_t vr_addr;
mutex_lock(&mdev->mic_mutex);
}
vr->len = vr_size;
vr->info = vr->va + vring_size(num, MIC_VIRTIO_RING_ALIGN);
- vr->info->magic = MIC_MAGIC + mvdev->virtio_id + i;
- vqconfig[i].address = mic_map_single(mdev,
- vr->va, vr_size);
- if (mic_map_error(vqconfig[i].address)) {
+ vr->info->magic = cpu_to_le32(MIC_MAGIC + mvdev->virtio_id + i);
+ vr_addr = mic_map_single(mdev, vr->va, vr_size);
+ if (mic_map_error(vr_addr)) {
free_pages((unsigned long)vr->va, get_order(vr_size));
ret = -ENOMEM;
dev_err(mic_dev(mvdev), "%s %d err %d\n",
__func__, __LINE__, ret);
goto err;
}
- vqconfig[i].address = cpu_to_le64(vqconfig[i].address);
+ vqconfig[i].address = cpu_to_le64(vr_addr);
vring_init(&vr->vr, num, vr->va, MIC_VIRTIO_RING_ALIGN);
ret = vringh_init_kern(&mvr->vrh,
struct mic_vdev *tmp_mvdev;
struct mic_device *mdev = mvdev->mdev;
DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wake);
- int i, ret, retry = 100;
+ int i, ret, retry;
struct mic_vqconfig *vqconfig;
struct mic_bootparam *bootparam = mdev->dp;
s8 db;
"Requesting hot remove id %d\n", mvdev->virtio_id);
mvdev->dc->config_change = MIC_VIRTIO_PARAM_DEV_REMOVE;
mdev->ops->send_intr(mdev, db);
- for (i = retry; i--;) {
+ for (retry = 100; retry--;) {
ret = wait_event_timeout(wake,
mvdev->dc->guest_ack, msecs_to_jiffies(100));
if (ret)
break;
}
dev_dbg(mdev->sdev->parent,
- "Device id %d config_change %d guest_ack %d\n",
+ "Device id %d config_change %d guest_ack %d retry %d\n",
mvdev->virtio_id, mvdev->dc->config_change,
- mvdev->dc->guest_ack);
+ mvdev->dc->guest_ack, retry);
mvdev->dc->config_change = 0;
mvdev->dc->guest_ack = 0;
skip_hot_remove:
* so copy over the ramdisk @ 128M.
*/
memcpy_toio(mdev->aper.va + (mdev->bootaddr << 1), fw->data, fw->size);
- iowrite32(cpu_to_le32(mdev->bootaddr << 1), &bp->hdr.ramdisk_image);
- iowrite32(cpu_to_le32(fw->size), &bp->hdr.ramdisk_size);
+ iowrite32(mdev->bootaddr << 1, &bp->hdr.ramdisk_image);
+ iowrite32(fw->size, &bp->hdr.ramdisk_size);
release_firmware(fw);
error:
return rc;
#include <linux/delay.h>
#include <linux/spinlock.h>
#include <linux/timer.h>
+#include <linux/of.h>
#include <linux/omap-dma.h>
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#define OMAP_MMC_CMDTYPE_AC 2
#define OMAP_MMC_CMDTYPE_ADTC 3
-#define OMAP_DMA_MMC_TX 21
-#define OMAP_DMA_MMC_RX 22
-#define OMAP_DMA_MMC2_TX 54
-#define OMAP_DMA_MMC2_RX 55
-
-#define OMAP24XX_DMA_MMC2_TX 47
-#define OMAP24XX_DMA_MMC2_RX 48
-#define OMAP24XX_DMA_MMC1_TX 61
-#define OMAP24XX_DMA_MMC1_RX 62
-
-
#define DRIVER_NAME "mmci-omap"
/* Specifies how often in millisecs to poll for card status changes
struct mmc_omap_host *host = NULL;
struct resource *res;
dma_cap_mask_t mask;
- unsigned sig;
+ unsigned sig = 0;
int i, ret = 0;
int irq;
}
if (pdata->nr_slots == 0) {
dev_err(&pdev->dev, "no slots\n");
- return -ENXIO;
+ return -EPROBE_DEFER;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
host->dma_tx_burst = -1;
host->dma_rx_burst = -1;
- if (mmc_omap2())
- sig = host->id == 0 ? OMAP24XX_DMA_MMC1_TX : OMAP24XX_DMA_MMC2_TX;
- else
- sig = host->id == 0 ? OMAP_DMA_MMC_TX : OMAP_DMA_MMC2_TX;
- host->dma_tx = dma_request_channel(mask, omap_dma_filter_fn, &sig);
+ res = platform_get_resource_byname(pdev, IORESOURCE_DMA, "tx");
+ if (res)
+ sig = res->start;
+ host->dma_tx = dma_request_slave_channel_compat(mask,
+ omap_dma_filter_fn, &sig, &pdev->dev, "tx");
if (!host->dma_tx)
dev_warn(host->dev, "unable to obtain TX DMA engine channel %u\n",
sig);
- if (mmc_omap2())
- sig = host->id == 0 ? OMAP24XX_DMA_MMC1_RX : OMAP24XX_DMA_MMC2_RX;
- else
- sig = host->id == 0 ? OMAP_DMA_MMC_RX : OMAP_DMA_MMC2_RX;
- host->dma_rx = dma_request_channel(mask, omap_dma_filter_fn, &sig);
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_DMA, "rx");
+ if (res)
+ sig = res->start;
+ host->dma_rx = dma_request_slave_channel_compat(mask,
+ omap_dma_filter_fn, &sig, &pdev->dev, "rx");
if (!host->dma_rx)
dev_warn(host->dev, "unable to obtain RX DMA engine channel %u\n",
sig);
return 0;
}
+#if IS_BUILTIN(CONFIG_OF)
+static const struct of_device_id mmc_omap_match[] = {
+ { .compatible = "ti,omap2420-mmc", },
+ { },
+};
+#endif
+
static struct platform_driver mmc_omap_driver = {
.probe = mmc_omap_probe,
.remove = mmc_omap_remove,
.driver = {
.name = DRIVER_NAME,
.owner = THIS_MODULE,
+ .of_match_table = of_match_ptr(mmc_omap_match),
},
};
static void pxa3xx_nand_free_buff(struct pxa3xx_nand_info *info)
{
struct platform_device *pdev = info->pdev;
- if (use_dma) {
+ if (info->use_dma) {
pxa_free_dma(info->data_dma_ch);
dma_free_coherent(&pdev->dev, info->buf_size,
info->data_buff, info->data_buff_phys);
.compatible = "marvell,pxa3xx-nand",
.data = (void *)PXA3XX_NAND_VARIANT_PXA,
},
- {
- .compatible = "marvell,armada370-nand",
- .data = (void *)PXA3XX_NAND_VARIANT_ARMADA370,
- },
{}
};
MODULE_DEVICE_TABLE(of, pxa3xx_nand_dt_ids);
if (!miimon) {
pr_warning("Warning: miimon must be specified, otherwise bonding will not detect link failure, speed and duplex which are essential for 802.3ad operation\n");
pr_warning("Forcing miimon to 100msec\n");
- miimon = 100;
+ miimon = BOND_DEFAULT_MIIMON;
}
}
if (!miimon) {
pr_warning("Warning: miimon must be specified, otherwise bonding will not detect link failure and link speed which are essential for TLB/ALB load balancing\n");
pr_warning("Forcing miimon to 100msec\n");
- miimon = 100;
+ miimon = BOND_DEFAULT_MIIMON;
}
}
return -EPERM;
}
- if (BOND_MODE_IS_LB(mode) && bond->params.arp_interval) {
- pr_err("%s: %s mode is incompatible with arp monitoring.\n",
- bond->dev->name, bond_mode_tbl[mode].modename);
- return -EINVAL;
+ if (BOND_NO_USES_ARP(mode) && bond->params.arp_interval) {
+ pr_info("%s: %s mode is incompatible with arp monitoring, start mii monitoring\n",
+ bond->dev->name, bond_mode_tbl[mode].modename);
+ /* disable arp monitoring */
+ bond->params.arp_interval = 0;
+ /* set miimon to default value */
+ bond->params.miimon = BOND_DEFAULT_MIIMON;
+ pr_info("%s: Setting MII monitoring interval to %d.\n",
+ bond->dev->name, bond->params.miimon);
}
/* don't cache arp_validate between modes */
ret = -EINVAL;
goto out;
}
- if (bond->params.mode == BOND_MODE_ALB ||
- bond->params.mode == BOND_MODE_TLB ||
- bond->params.mode == BOND_MODE_8023AD) {
+ if (BOND_NO_USES_ARP(bond->params.mode)) {
pr_info("%s: ARP monitoring cannot be used with ALB/TLB/802.3ad. Only MII monitoring is supported on %s.\n",
bond->dev->name, bond->dev->name);
ret = -EINVAL;
#define BOND_MAX_ARP_TARGETS 16
+#define BOND_DEFAULT_MIIMON 100
+
#define IS_UP(dev) \
((((dev)->flags & IFF_UP) == IFF_UP) && \
netif_running(dev) && \
((mode) == BOND_MODE_TLB) || \
((mode) == BOND_MODE_ALB))
+#define BOND_NO_USES_ARP(mode) \
+ (((mode) == BOND_MODE_8023AD) || \
+ ((mode) == BOND_MODE_TLB) || \
+ ((mode) == BOND_MODE_ALB))
+
#define TX_QUEUE_OVERRIDE(mode) \
(((mode) == BOND_MODE_ACTIVEBACKUP) || \
((mode) == BOND_MODE_ROUNDROBIN))
return 0;
}
-static int c_can_get_berr_counter(const struct net_device *dev,
- struct can_berr_counter *bec)
+static int __c_can_get_berr_counter(const struct net_device *dev,
+ struct can_berr_counter *bec)
{
unsigned int reg_err_counter;
struct c_can_priv *priv = netdev_priv(dev);
- c_can_pm_runtime_get_sync(priv);
-
reg_err_counter = priv->read_reg(priv, C_CAN_ERR_CNT_REG);
bec->rxerr = (reg_err_counter & ERR_CNT_REC_MASK) >>
ERR_CNT_REC_SHIFT;
bec->txerr = reg_err_counter & ERR_CNT_TEC_MASK;
+ return 0;
+}
+
+static int c_can_get_berr_counter(const struct net_device *dev,
+ struct can_berr_counter *bec)
+{
+ struct c_can_priv *priv = netdev_priv(dev);
+ int err;
+
+ c_can_pm_runtime_get_sync(priv);
+ err = __c_can_get_berr_counter(dev, bec);
c_can_pm_runtime_put_sync(priv);
- return 0;
+ return err;
}
/*
if (!(val & (1 << (msg_obj_no - 1)))) {
can_get_echo_skb(dev,
msg_obj_no - C_CAN_MSG_OBJ_TX_FIRST);
+ c_can_object_get(dev, 0, msg_obj_no, IF_COMM_ALL);
stats->tx_bytes += priv->read_reg(priv,
C_CAN_IFACE(MSGCTRL_REG, 0))
& IF_MCONT_DLC_MASK;
if (unlikely(!skb))
return 0;
- c_can_get_berr_counter(dev, &bec);
+ __c_can_get_berr_counter(dev, &bec);
reg_err_counter = priv->read_reg(priv, C_CAN_ERR_CNT_REG);
rx_err_passive = (reg_err_counter & ERR_CNT_RP_MASK) >>
ERR_CNT_RP_SHIFT;
dev_err(&pdev->dev, "no ipg clock defined\n");
return PTR_ERR(clk_ipg);
}
- clock_freq = clk_get_rate(clk_ipg);
clk_per = devm_clk_get(&pdev->dev, "per");
if (IS_ERR(clk_per)) {
dev_err(&pdev->dev, "no per clock defined\n");
return PTR_ERR(clk_per);
}
+ clock_freq = clk_get_rate(clk_per);
}
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
uint8_t isrc, status;
int n = 0;
- /* Shared interrupts and IRQ off? */
- if (priv->read_reg(priv, SJA1000_IER) == IRQ_OFF)
- return IRQ_NONE;
-
if (priv->pre_irq)
priv->pre_irq(priv);
+ /* Shared interrupts and IRQ off? */
+ if (priv->read_reg(priv, SJA1000_IER) == IRQ_OFF)
+ goto out;
+
while ((isrc = priv->read_reg(priv, SJA1000_IR)) &&
(n < SJA1000_MAX_IRQ)) {
- n++;
+
status = priv->read_reg(priv, SJA1000_SR);
/* check for absent controller due to hw unplug */
if (status == 0xFF && sja1000_is_absent(priv))
- return IRQ_NONE;
+ goto out;
if (isrc & IRQ_WUI)
netdev_warn(dev, "wakeup interrupt\n");
status = priv->read_reg(priv, SJA1000_SR);
/* check for absent controller */
if (status == 0xFF && sja1000_is_absent(priv))
- return IRQ_NONE;
+ goto out;
}
}
if (isrc & (IRQ_DOI | IRQ_EI | IRQ_BEI | IRQ_EPI | IRQ_ALI)) {
if (sja1000_err(dev, isrc, status))
break;
}
+ n++;
}
-
+out:
if (priv->post_irq)
priv->post_irq(priv);
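
In miniature, the sja1000 rework above replaces the early `return IRQ_NONE` paths with `goto out` so the pre/post hooks stay balanced; a hedged sketch (the absence check and return handling are simplified, not the driver's exact code):

static irqreturn_t demo_isr(struct sja1000_priv *priv)
{
	irqreturn_t ret = IRQ_HANDLED;

	if (priv->pre_irq)
		priv->pre_irq(priv);

	if (priv->read_reg(priv, SJA1000_IER) == IRQ_OFF) {
		ret = IRQ_NONE;
		goto out;		/* never skip post_irq */
	}
	/* ... normal interrupt servicing ... */
out:
	if (priv->post_irq)
		priv->post_irq(priv);
	return ret;
}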
static ssize_t tg3_show_temp(struct device *dev,
struct device_attribute *devattr, char *buf)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct tg3 *tp = netdev_priv(netdev);
struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
+ struct tg3 *tp = dev_get_drvdata(dev);
u32 temperature;
spin_lock_bh(&tp->lock);
static SENSOR_DEVICE_ATTR(temp1_max, S_IRUGO, tg3_show_temp, NULL,
TG3_TEMP_MAX_OFFSET);
-static struct attribute *tg3_attributes[] = {
+static struct attribute *tg3_attrs[] = {
&sensor_dev_attr_temp1_input.dev_attr.attr,
&sensor_dev_attr_temp1_crit.dev_attr.attr,
&sensor_dev_attr_temp1_max.dev_attr.attr,
NULL
};
-
-static const struct attribute_group tg3_group = {
- .attrs = tg3_attributes,
-};
+ATTRIBUTE_GROUPS(tg3);
static void tg3_hwmon_close(struct tg3 *tp)
{
if (tp->hwmon_dev) {
hwmon_device_unregister(tp->hwmon_dev);
tp->hwmon_dev = NULL;
- sysfs_remove_group(&tp->pdev->dev.kobj, &tg3_group);
}
}
static void tg3_hwmon_open(struct tg3 *tp)
{
- int i, err;
+ int i;
u32 size = 0;
struct pci_dev *pdev = tp->pdev;
struct tg3_ocir ocirs[TG3_SD_NUM_RECS];
if (!size)
return;
- /* Register hwmon sysfs hooks */
- err = sysfs_create_group(&pdev->dev.kobj, &tg3_group);
- if (err) {
- dev_err(&pdev->dev, "Cannot create sysfs group, aborting\n");
- return;
- }
-
- tp->hwmon_dev = hwmon_device_register(&pdev->dev);
+ tp->hwmon_dev = hwmon_device_register_with_groups(&pdev->dev, "tg3",
+ tp, tg3_groups);
if (IS_ERR(tp->hwmon_dev)) {
tp->hwmon_dev = NULL;
dev_err(&pdev->dev, "Cannot register hwmon device, aborting\n");
- sysfs_remove_group(&pdev->dev.kobj, &tg3_group);
}
}
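
The two tg3 hunks above migrate to the groups-based hwmon registration API; a minimal sketch of the resulting flow (the function name is illustrative):

static int demo_hwmon_attach(struct device *parent, struct tg3 *tp)
{
	struct device *hw;

	/* The hwmon core creates the files and the "name" attribute itself. */
	hw = hwmon_device_register_with_groups(parent, "tg3", tp, tg3_groups);
	if (IS_ERR(hw))
		return PTR_ERR(hw);

	/* drvdata is now tp, so tg3_show_temp() can use dev_get_drvdata(). */
	tp->hwmon_dev = hw;
	return 0;
}

Unregistering with hwmon_device_unregister() removes the files as well, which is why the sysfs_remove_group() calls disappear from tg3_hwmon_close().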
};
#define be_physfn(adapter) (!adapter->virtfn)
+#define be_virtfn(adapter) (adapter->virtfn)
#define sriov_enabled(adapter) (adapter->num_vfs > 0)
#define sriov_want(adapter) (be_physfn(adapter) && \
(num_vfs || pci_num_vf(adapter->pdev)))
} else {
req->hdr.version = 2;
req->page_size = 1; /* 1 for 4K */
+
+ /* coalesce-wm field in this cmd is not relevant to Lancer.
+ * Lancer uses COMMON_MODIFY_CQ to set this field.
+ */
+ if (!lancer_chip(adapter))
+ AMAP_SET_BITS(struct amap_cq_context_v2, coalescwm,
+ ctxt, coalesce_wm);
AMAP_SET_BITS(struct amap_cq_context_v2, nodelay, ctxt,
no_delay);
AMAP_SET_BITS(struct amap_cq_context_v2, count, ctxt,
be_roce_dev_close(adapter);
- for_all_evt_queues(adapter, eqo, i) {
- if (adapter->flags & BE_FLAGS_NAPI_ENABLED) {
+ if (adapter->flags & BE_FLAGS_NAPI_ENABLED) {
+ for_all_evt_queues(adapter, eqo, i) {
napi_disable(&eqo->napi);
be_disable_busy_poll(eqo);
}
memcpy(mac, adapter->netdev->dev_addr, ETH_ALEN);
}
- /* On BE3 VFs this cmd may fail due to lack of privilege.
- * Ignore the failure as in this case pmac_id is fetched
- * in the IFACE_CREATE cmd.
- */
- be_cmd_pmac_add(adapter, mac, adapter->if_handle,
- &adapter->pmac_id[0], 0);
+ /* For BE3-R VFs, the PF programs the initial MAC address */
+ if (!(BEx_chip(adapter) && be_virtfn(adapter)))
+ be_cmd_pmac_add(adapter, mac, adapter->if_handle,
+ &adapter->pmac_id[0], 0);
return 0;
}
if (adapter->wol)
be_setup_wol(adapter, true);
+ be_intr_set(adapter, false);
cancel_delayed_work_sync(&adapter->func_recovery_work);
netif_device_detach(netdev);
if (status)
return status;
+ be_intr_set(adapter, true);
/* tell fw we're ready to fire cmds */
status = be_cmd_fw_init(adapter);
if (status)
#define E1000_MAX_INTR 10
+/*
+ * Count for polling __E1000_RESET condition every 10-20msec.
+ */
+#define E1000_CHECK_RESET_COUNT 50
+
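
A quick bound for the constant just added, sketched from the consuming loop that appears in the e1000_close()/suspend hunks below:

	int count = E1000_CHECK_RESET_COUNT;

	/* 50 polls x usleep_range(10000, 20000) => ~0.5-1.0s upper bound */
	while (test_bit(__E1000_RESETTING, &adapter->flags) && count--)
		usleep_range(10000, 20000);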
/* TX/RX descriptor defines */
#define E1000_DEFAULT_TXD 256
#define E1000_MAX_TXD 256
struct delayed_work watchdog_task;
struct delayed_work fifo_stall_task;
struct delayed_work phy_info_task;
-
- struct mutex mutex;
};
enum e1000_state_t {
{
set_bit(__E1000_DOWN, &adapter->flags);
- /* Only kill reset task if adapter is not resetting */
- if (!test_bit(__E1000_RESETTING, &adapter->flags))
- cancel_work_sync(&adapter->reset_task);
-
cancel_delayed_work_sync(&adapter->watchdog_task);
+
+ /*
+ * Since the watchdog task can reschedule other tasks, we should cancel
+ * it first; otherwise we can run into a situation where a work item is
+ * still running after the adapter has been turned down.
+ */
+
cancel_delayed_work_sync(&adapter->phy_info_task);
cancel_delayed_work_sync(&adapter->fifo_stall_task);
+
+ /* Only kill reset task if adapter is not resetting */
+ if (!test_bit(__E1000_RESETTING, &adapter->flags))
+ cancel_work_sync(&adapter->reset_task);
}
void e1000_down(struct e1000_adapter *adapter)
e1000_clean_all_rx_rings(adapter);
}
-static void e1000_reinit_safe(struct e1000_adapter *adapter)
-{
- while (test_and_set_bit(__E1000_RESETTING, &adapter->flags))
- msleep(1);
- mutex_lock(&adapter->mutex);
- e1000_down(adapter);
- e1000_up(adapter);
- mutex_unlock(&adapter->mutex);
- clear_bit(__E1000_RESETTING, &adapter->flags);
-}
-
void e1000_reinit_locked(struct e1000_adapter *adapter)
{
- /* if rtnl_lock is not held the call path is bogus */
- ASSERT_RTNL();
WARN_ON(in_interrupt());
while (test_and_set_bit(__E1000_RESETTING, &adapter->flags))
msleep(1);
e1000_irq_disable(adapter);
spin_lock_init(&adapter->stats_lock);
- mutex_init(&adapter->mutex);
set_bit(__E1000_DOWN, &adapter->flags);
{
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
+ int count = E1000_CHECK_RESET_COUNT;
+
+ while (test_bit(__E1000_RESETTING, &adapter->flags) && count--)
+ usleep_range(10000, 20000);
WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
e1000_down(adapter);
struct e1000_adapter *adapter = container_of(work,
struct e1000_adapter,
phy_info_task.work);
- if (test_bit(__E1000_DOWN, &adapter->flags))
- return;
- mutex_lock(&adapter->mutex);
+
e1000_phy_get_info(&adapter->hw, &adapter->phy_info);
- mutex_unlock(&adapter->mutex);
}
/**
struct net_device *netdev = adapter->netdev;
u32 tctl;
- if (test_bit(__E1000_DOWN, &adapter->flags))
- return;
- mutex_lock(&adapter->mutex);
if (atomic_read(&adapter->tx_fifo_stall)) {
if ((er32(TDT) == er32(TDH)) &&
(er32(TDFT) == er32(TDFH)) &&
schedule_delayed_work(&adapter->fifo_stall_task, 1);
}
}
- mutex_unlock(&adapter->mutex);
}
bool e1000_has_link(struct e1000_adapter *adapter)
struct e1000_tx_ring *txdr = adapter->tx_ring;
u32 link, tctl;
- if (test_bit(__E1000_DOWN, &adapter->flags))
- return;
-
- mutex_lock(&adapter->mutex);
link = e1000_has_link(adapter);
if ((netif_carrier_ok(netdev)) && link)
goto link_up;
adapter->tx_timeout_count++;
schedule_work(&adapter->reset_task);
/* exit immediately since reset is imminent */
- goto unlock;
+ return;
}
}
/* Reschedule the task */
if (!test_bit(__E1000_DOWN, &adapter->flags))
schedule_delayed_work(&adapter->watchdog_task, 2 * HZ);
-
-unlock:
- mutex_unlock(&adapter->mutex);
}
enum latency_range {
struct e1000_adapter *adapter =
container_of(work, struct e1000_adapter, reset_task);
- if (test_bit(__E1000_DOWN, &adapter->flags))
- return;
e_err(drv, "Reset adapter\n");
- e1000_reinit_safe(adapter);
+ e1000_reinit_locked(adapter);
}
/**
netif_device_detach(netdev);
if (netif_running(netdev)) {
+ int count = E1000_CHECK_RESET_COUNT;
+
+ while (test_bit(__E1000_RESETTING, &adapter->flags) && count--)
+ usleep_range(10000, 20000);
+
WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
e1000_down(adapter);
}
{
struct igb_adapter *adapter = netdev_priv(netdev);
- wol->supported = WAKE_UCAST | WAKE_MCAST |
- WAKE_BCAST | WAKE_MAGIC |
- WAKE_PHY;
wol->wolopts = 0;
if (!(adapter->flags & IGB_FLAG_WOL_SUPPORTED))
return;
+ wol->supported = WAKE_UCAST | WAKE_MCAST |
+ WAKE_BCAST | WAKE_MAGIC |
+ WAKE_PHY;
+
/* apply any specific unsupported masks here */
switch (adapter->hw.device_id) {
default:
rx_ring->l2_accel_priv = NULL;
}
-int ixgbe_fwd_ring_down(struct net_device *vdev,
- struct ixgbe_fwd_adapter *accel)
+static int ixgbe_fwd_ring_down(struct net_device *vdev,
+ struct ixgbe_fwd_adapter *accel)
{
struct ixgbe_adapter *adapter = accel->real_adapter;
unsigned int rxbase = accel->rx_base_queue;
NETIF_F_TSO |
NETIF_F_TSO6 |
NETIF_F_RXHASH |
- NETIF_F_RXCSUM |
- NETIF_F_HW_L2FW_DOFFLOAD;
+ NETIF_F_RXCSUM;
- netdev->hw_features = netdev->features;
+ netdev->hw_features = netdev->features | NETIF_F_HW_L2FW_DOFFLOAD;
switch (adapter->hw.mac.type) {
case ixgbe_mac_82599EB:
static void ixgbe_i2c_bus_clear(struct ixgbe_hw *hw);
static enum ixgbe_phy_type ixgbe_get_phy_type_from_id(u32 phy_id);
static s32 ixgbe_get_phy_id(struct ixgbe_hw *hw);
+static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw);
/**
* ixgbe_identify_phy_generic - Get physical layer module
*
* Searches for and identifies the QSFP module and assigns appropriate PHY type
**/
-s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
{
struct ixgbe_adapter *adapter = hw->back;
s32 status = IXGBE_ERR_PHY_ADDR_INVALID;
s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw);
s32 ixgbe_identify_module_generic(struct ixgbe_hw *hw);
s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw);
-s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw);
s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
u16 *list_offset,
u16 *data_offset);
{
struct mlx4_en_priv *priv = netdev_priv(dev);
struct mlx4_en_dev *mdev = priv->mdev;
- struct mlx4_en_tx_ring *tx_ring;
int i, carrier_ok;
memset(buf, 0, sizeof(u64) * MLX4_EN_NUM_SELF_TEST);
carrier_ok = netif_carrier_ok(dev);
netif_carrier_off(dev);
-retry_tx:
/* Wait until all tx queues are empty.
* There should not be any additional incoming traffic
* since we turned the carrier off */
msleep(200);
- for (i = 0; i < priv->tx_ring_num && carrier_ok; i++) {
- tx_ring = priv->tx_ring[i];
- if (tx_ring->prod != (tx_ring->cons + tx_ring->last_nr_txbb))
- goto retry_tx;
- }
if (priv->mdev->dev->caps.flags &
MLX4_DEV_CAP_FLAG_UC_LOOPBACK) {
le32_to_cpu(txd->opts1) & 0xffff,
PCI_DMA_TODEVICE);
- bytes_compl += skb->len;
- pkts_compl++;
-
if (status & LastFrag) {
if (status & (TxError | TxFIFOUnder)) {
netif_dbg(cp, tx_err, cp->dev,
netif_dbg(cp, tx_done, cp->dev,
"tx done, slot %d\n", tx_tail);
}
+ bytes_compl += skb->len;
+ pkts_compl++;
dev_kfree_skb_irq(skb);
}
rtl_writephy(tp, 0x14, 0x9065);
rtl_writephy(tp, 0x14, 0x1065);
+ /* Check ALDPS bit, disable it if enabled */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ if (rtl_readphy(tp, 0x10) & 0x0004)
+ rtl_w1w0_phy(tp, 0x10, 0x0000, 0x0004);
+
rtl_writephy(tp, 0x1f, 0x0000);
}
unsigned long last_update;
struct device *device;
struct efx_mcdi_mon_attribute *attrs;
+ struct attribute_group group;
+ const struct attribute_group *groups[2];
unsigned int n_attrs;
};
return rc;
}
-static ssize_t efx_mcdi_mon_show_name(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- return sprintf(buf, "%s\n", KBUILD_MODNAME);
-}
-
static int efx_mcdi_mon_get_entry(struct device *dev, unsigned int index,
efx_dword_t *entry)
{
- struct efx_nic *efx = dev_get_drvdata(dev);
+ struct efx_nic *efx = dev_get_drvdata(dev->parent);
struct efx_mcdi_mon *hwmon = efx_mcdi_mon(efx);
int rc;
efx_mcdi_sensor_type[mon_attr->type].label);
}
-static int
+static void
efx_mcdi_mon_add_attr(struct efx_nic *efx, const char *name,
ssize_t (*reader)(struct device *,
struct device_attribute *, char *),
{
struct efx_mcdi_mon *hwmon = efx_mcdi_mon(efx);
struct efx_mcdi_mon_attribute *attr = &hwmon->attrs[hwmon->n_attrs];
- int rc;
strlcpy(attr->name, name, sizeof(attr->name));
attr->index = index;
attr->dev_attr.attr.name = attr->name;
attr->dev_attr.attr.mode = S_IRUGO;
attr->dev_attr.show = reader;
- rc = device_create_file(&efx->pci_dev->dev, &attr->dev_attr);
- if (rc == 0)
- ++hwmon->n_attrs;
- return rc;
+ hwmon->group.attrs[hwmon->n_attrs++] = &attr->dev_attr.attr;
}
int efx_mcdi_mon_probe(struct efx_nic *efx)
efx_mcdi_mon_update(efx);
/* Allocate space for the maximum possible number of
- * attributes for this set of sensors: name of the driver plus
+ * attributes for this set of sensors:
* value, min, max, crit, alarm and label for each sensor.
*/
- n_attrs = 1 + 6 * n_sensors;
+ n_attrs = 6 * n_sensors;
hwmon->attrs = kcalloc(n_attrs, sizeof(*hwmon->attrs), GFP_KERNEL);
if (!hwmon->attrs) {
rc = -ENOMEM;
goto fail;
}
-
- hwmon->device = hwmon_device_register(&efx->pci_dev->dev);
- if (IS_ERR(hwmon->device)) {
- rc = PTR_ERR(hwmon->device);
+ hwmon->group.attrs = kcalloc(n_attrs + 1, sizeof(struct attribute *),
+ GFP_KERNEL);
+ if (!hwmon->group.attrs) {
+ rc = -ENOMEM;
goto fail;
}
- rc = efx_mcdi_mon_add_attr(efx, "name", efx_mcdi_mon_show_name, 0, 0, 0);
- if (rc)
- goto fail;
-
for (i = 0, j = -1, type = -1; ; i++) {
enum efx_hwmon_type hwmon_type;
const char *hwmon_prefix;
page = type / 32;
j = -1;
if (page == n_pages)
- return 0;
+ goto hwmon_register;
MCDI_SET_DWORD(inbuf, SENSOR_INFO_EXT_IN_PAGE,
page);
if (min1 != max1) {
snprintf(name, sizeof(name), "%s%u_input",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_value, i, type, 0);
- if (rc)
- goto fail;
if (hwmon_type != EFX_HWMON_POWER) {
snprintf(name, sizeof(name), "%s%u_min",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_limit,
i, type, min1);
- if (rc)
- goto fail;
}
snprintf(name, sizeof(name), "%s%u_max",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_limit,
i, type, max1);
- if (rc)
- goto fail;
if (min2 != max2) {
/* Assume max2 is critical value.
*/
snprintf(name, sizeof(name), "%s%u_crit",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_limit,
i, type, max2);
- if (rc)
- goto fail;
}
}
snprintf(name, sizeof(name), "%s%u_alarm",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_alarm, i, type, 0);
- if (rc)
- goto fail;
if (type < ARRAY_SIZE(efx_mcdi_sensor_type) &&
efx_mcdi_sensor_type[type].label) {
snprintf(name, sizeof(name), "%s%u_label",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_label, i, type, 0);
- if (rc)
- goto fail;
}
}
+hwmon_register:
+ hwmon->groups[0] = &hwmon->group;
+ hwmon->device = hwmon_device_register_with_groups(&efx->pci_dev->dev,
+ KBUILD_MODNAME, NULL,
+ hwmon->groups);
+ if (IS_ERR(hwmon->device)) {
+ rc = PTR_ERR(hwmon->device);
+ goto fail;
+ }
+
+ return 0;
+
fail:
efx_mcdi_mon_remove(efx);
return rc;
void efx_mcdi_mon_remove(struct efx_nic *efx)
{
struct efx_mcdi_mon *hwmon = efx_mcdi_mon(efx);
- unsigned int i;
- for (i = 0; i < hwmon->n_attrs; i++)
- device_remove_file(&efx->pci_dev->dev,
- &hwmon->attrs[i].dev_attr);
- kfree(hwmon->attrs);
if (hwmon->device)
hwmon_device_unregister(hwmon->device);
+ kfree(hwmon->attrs);
+ kfree(hwmon->group.attrs);
efx_nic_free_buffer(efx, &hwmon->dma_buf);
}
defined(CONFIG_MACH_LITTLETON) ||\
defined(CONFIG_MACH_ZYLONITE2) ||\
defined(CONFIG_ARCH_VIPER) ||\
- defined(CONFIG_MACH_STARGATE2)
+ defined(CONFIG_MACH_STARGATE2) ||\
+ defined(CONFIG_ARCH_VERSATILE)
#include <asm/mach-types.h>
#define SMC_outl(v, a, r) writel(v, (a) + (r))
#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)
#define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l)
+#define SMC_insw(a, r, p, l) readsw((a) + (r), p, l)
+#define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l)
#define SMC_IRQ_FLAGS (-1) /* from resource */
/* We actually can't write halfwords properly if not word aligned */
#define RPC_LSA_DEFAULT RPC_LED_TX_RX
#define RPC_LSB_DEFAULT RPC_LED_100_10
-#elif defined(CONFIG_ARCH_VERSATILE)
-
-#define SMC_CAN_USE_8BIT 1
-#define SMC_CAN_USE_16BIT 1
-#define SMC_CAN_USE_32BIT 1
-#define SMC_NOWAIT 1
-
-#define SMC_inb(a, r) readb((a) + (r))
-#define SMC_inw(a, r) readw((a) + (r))
-#define SMC_inl(a, r) readl((a) + (r))
-#define SMC_outb(v, a, r) writeb(v, (a) + (r))
-#define SMC_outw(v, a, r) writew(v, (a) + (r))
-#define SMC_outl(v, a, r) writel(v, (a) + (r))
-#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)
-#define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l)
-#define SMC_IRQ_FLAGS (-1) /* from resource */
-
#elif defined(CONFIG_MN10300)
/*
unsigned int rx_done;
unsigned long flags;
- spin_lock_irqsave(&vptr->lock, flags);
/*
* Do rx and tx twice for performance (taken from the VIA
* out-of-tree driver).
*/
- rx_done = velocity_rx_srv(vptr, budget / 2);
- velocity_tx_srv(vptr);
- rx_done += velocity_rx_srv(vptr, budget - rx_done);
+ rx_done = velocity_rx_srv(vptr, budget);
+ spin_lock_irqsave(&vptr->lock, flags);
velocity_tx_srv(vptr);
-
/* If budget not fully consumed, exit the polling mode */
if (rx_done < budget) {
napi_complete(napi);
if (ret < 0)
goto out_free_tmp_vptr_1;
+ napi_disable(&vptr->napi);
+
spin_lock_irqsave(&vptr->lock, flags);
netif_stop_queue(dev);
velocity_give_many_rx_descs(vptr);
+ napi_enable(&vptr->napi);
+
mac_enable_int(vptr->mac_regs);
netif_start_queue(dev);
rcu_read_lock();
vlan = rcu_dereference(q->vlan);
if (vlan)
- vlan->dev->stats.tx_dropped++;
+ this_cpu_inc(vlan->pcpu_stats->tx_dropped);
rcu_read_unlock();
return err;
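
The macvtap hunk above swaps a shared counter bump for the per-CPU stats idiom; a self-contained sketch of both halves (struct and function names are illustrative, assuming <linux/percpu.h>):

struct pcpu_drop_stats {
	u64 tx_dropped;
};

/* Writer: bumps only the local CPU's copy; no atomics, no shared cacheline. */
static void count_drop(struct pcpu_drop_stats __percpu *stats)
{
	this_cpu_inc(stats->tx_dropped);
}

/* Reader: folds every CPU's copy when the total is actually needed. */
static u64 total_drops(struct pcpu_drop_stats __percpu *stats)
{
	u64 sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu_ptr(stats, cpu)->tx_dropped;
	return sum;
}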
const struct sk_buff *skb,
const struct iovec *iv, int len)
{
- struct macvlan_dev *vlan;
int ret;
int vnet_hdr_len = 0;
int vlan_offset = 0;
copied += len;
done:
- rcu_read_lock();
- vlan = rcu_dereference(q->vlan);
- if (vlan) {
- preempt_disable();
- macvlan_count_rx(vlan, copied - vnet_hdr_len, ret == 0, 0);
- preempt_enable();
- }
- rcu_read_unlock();
-
return ret ? ret : copied;
}
#define PHY_ID_VSC8234 0x000fc620
#define PHY_ID_VSC8244 0x000fc6c0
+#define PHY_ID_VSC8514 0x00070670
#define PHY_ID_VSC8574 0x000704a0
#define PHY_ID_VSC8662 0x00070660
#define PHY_ID_VSC8221 0x000fc550
err = phy_write(phydev, MII_VSC8244_IMASK,
(phydev->drv->phy_id == PHY_ID_VSC8234 ||
phydev->drv->phy_id == PHY_ID_VSC8244 ||
+ phydev->drv->phy_id == PHY_ID_VSC8514 ||
phydev->drv->phy_id == PHY_ID_VSC8574) ?
MII_VSC8244_IMASK_MASK :
MII_VSC8221_IMASK_MASK);
.ack_interrupt = &vsc824x_ack_interrupt,
.config_intr = &vsc82xx_config_intr,
.driver = { .owner = THIS_MODULE,},
+}, {
+ .phy_id = PHY_ID_VSC8514,
+ .name = "Vitesse VSC8514",
+ .phy_id_mask = 0x000ffff0,
+ .features = PHY_GBIT_FEATURES,
+ .flags = PHY_HAS_INTERRUPT,
+ .config_init = &vsc824x_config_init,
+ .config_aneg = &vsc82x4_config_aneg,
+ .read_status = &genphy_read_status,
+ .ack_interrupt = &vsc824x_ack_interrupt,
+ .config_intr = &vsc82xx_config_intr,
+ .driver = { .owner = THIS_MODULE,},
}, {
.phy_id = PHY_ID_VSC8574,
.name = "Vitesse VSC8574",
static struct mdio_device_id __maybe_unused vitesse_tbl[] = {
{ PHY_ID_VSC8234, 0x000ffff0 },
{ PHY_ID_VSC8244, 0x000fffc0 },
+ { PHY_ID_VSC8514, 0x000ffff0 },
{ PHY_ID_VSC8574, 0x000ffff0 },
{ PHY_ID_VSC8662, 0x000ffff0 },
{ PHY_ID_VSC8221, 0x000ffff0 },
return 0;
}
+static void __team_carrier_check(struct team *team);
+
static int team_user_linkup_option_set(struct team *team,
struct team_gsetter_ctx *ctx)
{
port->user.linkup = ctx->data.bool_val;
team_refresh_port_linkup(port);
+ __team_carrier_check(port->team);
return 0;
}
port->user.linkup_enabled = ctx->data.bool_val;
team_refresh_port_linkup(port);
+ __team_carrier_check(port->team);
return 0;
}
return skb;
}
-static int receive_mergeable(struct receive_queue *rq, struct sk_buff *head_skb)
+static struct sk_buff *receive_small(void *buf, unsigned int len)
{
- struct skb_vnet_hdr *hdr = skb_vnet_hdr(head_skb);
+ struct sk_buff *skb = buf;
+
+ len -= sizeof(struct virtio_net_hdr);
+ skb_trim(skb, len);
+
+ return skb;
+}
+
+static struct sk_buff *receive_big(struct net_device *dev,
+ struct receive_queue *rq,
+ void *buf,
+ unsigned int len)
+{
+ struct page *page = buf;
+ struct sk_buff *skb = page_to_skb(rq, page, 0, len, PAGE_SIZE);
+
+ if (unlikely(!skb))
+ goto err;
+
+ return skb;
+
+err:
+ dev->stats.rx_dropped++;
+ give_pages(rq, page);
+ return NULL;
+}
+
+static struct sk_buff *receive_mergeable(struct net_device *dev,
+ struct receive_queue *rq,
+ void *buf,
+ unsigned int len)
+{
+ struct skb_vnet_hdr *hdr = buf;
+ int num_buf = hdr->mhdr.num_buffers;
+ struct page *page = virt_to_head_page(buf);
+ int offset = buf - page_address(page);
+ struct sk_buff *head_skb = page_to_skb(rq, page, offset, len,
+ MERGE_BUFFER_LEN);
struct sk_buff *curr_skb = head_skb;
- char *buf;
- struct page *page;
- int num_buf, len, offset;
- num_buf = hdr->mhdr.num_buffers;
+ if (unlikely(!curr_skb))
+ goto err_skb;
+
while (--num_buf) {
- int num_skb_frags = skb_shinfo(curr_skb)->nr_frags;
+ int num_skb_frags;
+
buf = virtqueue_get_buf(rq->vq, &len);
if (unlikely(!buf)) {
- pr_debug("%s: rx error: %d buffers missing\n",
- head_skb->dev->name, hdr->mhdr.num_buffers);
- head_skb->dev->stats.rx_length_errors++;
- return -EINVAL;
+ pr_debug("%s: rx error: %d buffers out of %d missing\n",
+ dev->name, num_buf, hdr->mhdr.num_buffers);
+ dev->stats.rx_length_errors++;
+ goto err_buf;
}
if (unlikely(len > MERGE_BUFFER_LEN)) {
pr_debug("%s: rx error: merge buffer too long\n",
- head_skb->dev->name);
+ dev->name);
len = MERGE_BUFFER_LEN;
}
+
+ page = virt_to_head_page(buf);
+ --rq->num;
+
+ num_skb_frags = skb_shinfo(curr_skb)->nr_frags;
if (unlikely(num_skb_frags == MAX_SKB_FRAGS)) {
struct sk_buff *nskb = alloc_skb(0, GFP_ATOMIC);
- if (unlikely(!nskb)) {
- head_skb->dev->stats.rx_dropped++;
- return -ENOMEM;
- }
+
+ if (unlikely(!nskb))
+ goto err_skb;
if (curr_skb == head_skb)
skb_shinfo(curr_skb)->frag_list = nskb;
else
head_skb->len += len;
head_skb->truesize += MERGE_BUFFER_LEN;
}
- page = virt_to_head_page(buf);
- offset = buf - (char *)page_address(page);
+ offset = buf - page_address(page);
if (skb_can_coalesce(curr_skb, num_skb_frags, page, offset)) {
put_page(page);
skb_coalesce_rx_frag(curr_skb, num_skb_frags - 1,
skb_add_rx_frag(curr_skb, num_skb_frags, page,
offset, len, MERGE_BUFFER_LEN);
}
+ }
+
+ return head_skb;
+
+err_skb:
+ put_page(page);
+ while (--num_buf) {
+ buf = virtqueue_get_buf(rq->vq, &len);
+ if (unlikely(!buf)) {
+ pr_debug("%s: rx error: %d buffers missing\n",
+ dev->name, num_buf);
+ dev->stats.rx_length_errors++;
+ break;
+ }
+ page = virt_to_head_page(buf);
+ put_page(page);
--rq->num;
}
- return 0;
+err_buf:
+ dev->stats.rx_dropped++;
+ dev_kfree_skb(head_skb);
+ return NULL;
}
static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
struct net_device *dev = vi->dev;
struct virtnet_stats *stats = this_cpu_ptr(vi->stats);
struct sk_buff *skb;
- struct page *page;
struct skb_vnet_hdr *hdr;
if (unlikely(len < sizeof(struct virtio_net_hdr) + ETH_HLEN)) {
return;
}
- if (!vi->mergeable_rx_bufs && !vi->big_packets) {
- skb = buf;
- len -= sizeof(struct virtio_net_hdr);
- skb_trim(skb, len);
- } else if (vi->mergeable_rx_bufs) {
- struct page *page = virt_to_head_page(buf);
- skb = page_to_skb(rq, page,
- (char *)buf - (char *)page_address(page),
- len, MERGE_BUFFER_LEN);
- if (unlikely(!skb)) {
- dev->stats.rx_dropped++;
- put_page(page);
- return;
- }
- if (receive_mergeable(rq, skb)) {
- dev_kfree_skb(skb);
- return;
- }
- } else {
- page = buf;
- skb = page_to_skb(rq, page, 0, len, PAGE_SIZE);
- if (unlikely(!skb)) {
- dev->stats.rx_dropped++;
- give_pages(rq, page);
- return;
- }
- }
+ if (vi->mergeable_rx_bufs)
+ skb = receive_mergeable(dev, rq, buf, len);
+ else if (vi->big_packets)
+ skb = receive_big(dev, rq, buf, len);
+ else
+ skb = receive_small(buf, len);
+
+ if (unlikely(!skb))
+ return;
hdr = skb_vnet_hdr(skb);
if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC,
VIRTIO_NET_CTRL_MAC_TABLE_SET,
sg, NULL))
- dev_warn(&dev->dev, "Failed to set MAC fitler table.\n");
+ dev_warn(&dev->dev, "Failed to set MAC filter table.\n");
kfree(buf);
}
#include <linux/udp.h>
#include <net/tcp.h>
+#include <net/ip6_checksum.h>
#include <xen/xen.h>
#include <xen/events.h>
ret = abx500_gpio_set_bits(chip,
AB8500_GPIO_ALTFUN_REG,
af.alt_bit1,
- !!(af.alta_val && BIT(0)));
+ !!(af.alta_val & BIT(0)));
if (ret < 0)
goto out;
goto out;
ret = abx500_gpio_set_bits(chip, AB8500_GPIO_ALTFUN_REG,
- af.alt_bit1, !!(af.altb_val && BIT(0)));
+ af.alt_bit1, !!(af.altb_val & BIT(0)));
if (ret < 0)
goto out;
goto out;
ret = abx500_gpio_set_bits(chip, AB8500_GPIO_ALTFUN_REG,
- af.alt_bit2, !!(af.altc_val && BIT(1)));
+ af.alt_bit2, !!(af.altc_val & BIT(1)));
break;
default:
-#ifndef PINCTRL_PINCTRL_ABx5O0_H
+#ifndef PINCTRL_PINCTRL_ABx500_H
#define PINCTRL_PINCTRL_ABx500_H
/* Package definitions */
data |= (3 << bit);
break;
default:
+ spin_unlock_irqrestore(&bank->slock, flags);
dev_err(info->dev, "unsupported pull setting %d\n",
pull);
return -EINVAL;
if (ctrl->type == RK3188) {
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
info->reg_pull = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(info->reg_base))
- return PTR_ERR(info->reg_base);
+ if (IS_ERR(info->reg_pull))
+ return PTR_ERR(info->reg_pull);
}
ret = rockchip_gpiolib_register(pdev, info);
const struct r8a7740_portcr_group *group =
&r8a7740_portcr_offsets[i];
- if (i <= group->end_pin)
+ if (pin <= group->end_pin)
return pfc->window->virt + group->offset + pin;
}
const struct sh7372_portcr_group *group =
&sh7372_portcr_offsets[i];
- if (i <= group->end_pin)
+ if (pin <= group->end_pin)
return pfc->window->virt + group->offset + pin;
}
return __pnp_bus_suspend(dev, PMSG_FREEZE);
}
+static int pnp_bus_poweroff(struct device *dev)
+{
+ return __pnp_bus_suspend(dev, PMSG_HIBERNATE);
+}
+
static int pnp_bus_resume(struct device *dev)
{
struct pnp_dev *pnp_dev = to_pnp_dev(dev);
}
static const struct dev_pm_ops pnp_bus_dev_pm_ops = {
+ /* Suspend callbacks */
.suspend = pnp_bus_suspend,
- .freeze = pnp_bus_freeze,
.resume = pnp_bus_resume,
+ /* Hibernate callbacks */
+ .freeze = pnp_bus_freeze,
+ .thaw = pnp_bus_resume,
+ .poweroff = pnp_bus_poweroff,
+ .restore = pnp_bus_resume,
};
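
For reference, the callback mapping the pnp hunk above establishes, sketched as the PM core sees it (assuming standard dev_pm_ops semantics):

/*
 * System sleep:  .suspend  -> pnp_bus_suspend    .resume  -> pnp_bus_resume
 * Hibernation:   .freeze   -> pnp_bus_freeze     .thaw    -> pnp_bus_resume
 *                .poweroff -> pnp_bus_poweroff   .restore -> pnp_bus_resume
 *
 * pnp_bus_poweroff() forwards PMSG_HIBERNATE so protocol code can tell it
 * apart from a plain suspend (PMSG_SUSPEND) or a freeze (PMSG_FREEZE).
 */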
struct bus_type pnp_bus_type = {
if (power_zone->ops->get_max_energy_range_uj)
power_zone->zone_dev_attrs[count++] =
&dev_attr_max_energy_range_uj.attr;
- if (power_zone->ops->get_energy_uj)
+ if (power_zone->ops->get_energy_uj) {
+ if (power_zone->ops->reset_energy_uj)
+ dev_attr_energy_uj.attr.mode = S_IWUSR | S_IRUGO;
+ else
+ dev_attr_energy_uj.attr.mode = S_IRUGO;
power_zone->zone_dev_attrs[count++] =
&dev_attr_energy_uj.attr;
+ }
if (power_zone->ops->get_power_uw)
power_zone->zone_dev_attrs[count++] =
&dev_attr_power_uw.attr;
default:
return -EINVAL;
}
+ ret <<= ffs(mask) - 1;
val = ret & mask;
- val <<= ffs(mask) - 1;
return as3722_update_bits(as3722, reg, mask, val);
}
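
The as3722 fix above reorders a field insertion; a worked case shows why the order matters (the helper name is hypothetical, not from the driver):

/* Insert a value into the field selected by mask: shift first, then mask. */
static inline u32 field_shift_mask(u32 mask, u32 ret)
{
	return (ret << (ffs(mask) - 1)) & mask;
}

/*
 * With mask = 0x70 (bits 6:4) and ret = 0x5:
 *   shift-then-mask:  (0x5 << 4) & 0x70 = 0x50   (correct)
 *   mask-then-shift:  (0x5 & 0x70) << 4 = 0x00   (the old bug)
 */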
return "";
}
+static bool have_full_constraints(void)
+{
+ return has_full_constraints || of_have_populated_dt();
+}
+
/**
* of_get_regulator - get a regulator device node based on supply name
* @dev: Device pointer for the consumer (of regulator) device
* Assume that a regulator is physically present and enabled
* even if it isn't hooked up and just provide a dummy.
*/
- if (has_full_constraints && allow_dummy) {
+ if (have_full_constraints() && allow_dummy) {
pr_warn("%s supply %s not found, using dummy regulator\n",
devname, id);
if (error)
ret = error;
} else {
- if (!has_full_constraints)
+ if (!have_full_constraints())
goto unlock;
if (!ops->disable)
goto unlock;
if (!enabled)
goto unlock;
- if (has_full_constraints) {
+ if (have_full_constraints()) {
/* We log since this may kill the system if it
* goes wrong. */
rdev_info(rdev, "disabling\n");
#define PFUZE100_DEVICEID 0x0
#define PFUZE100_REVID 0x3
-#define PFUZE100_FABID 0x3
+#define PFUZE100_FABID 0x4
#define PFUZE100_SW1ABVOL 0x20
#define PFUZE100_SW1CVOL 0x2e
config.dev = s5m8767->dev;
config.init_data = pdata->regulators[i].initdata;
config.driver_data = s5m8767;
- config.regmap = iodev->regmap;
+ config.regmap = iodev->regmap_pmic;
config.of_node = pdata->regulators[i].reg_node;
rdev[i] = devm_regulator_register(&pdev->dev, ®ulators[id],
at91_alarm_year = tm.tm_year;
+ tm.tm_mon = alrm->time.tm_mon;
+ tm.tm_mday = alrm->time.tm_mday;
tm.tm_hour = alrm->time.tm_hour;
tm.tm_min = alrm->time.tm_min;
tm.tm_sec = alrm->time.tm_sec;
#include <linux/mfd/samsung/irq.h>
#include <linux/mfd/samsung/rtc.h>
+/*
+ * Maximum number of retries for checking changes in the UDR field
+ * of the SEC_RTC_UDR_CON register (to limit a possible endless loop).
+ *
+ * After writing to the RTC registers (setting time or alarm), read the UDR
+ * field in the SEC_RTC_UDR_CON register. UDR is auto-cleared when the data
+ * have been transferred.
+ */
+#define UDR_READ_RETRY_CNT 5
+
struct s5m_rtc_info {
struct device *dev;
struct sec_pmic_dev *s5m87xx;
- struct regmap *rtc;
+ struct regmap *regmap;
struct rtc_device *rtc_dev;
int irq;
int device_type;
}
}
+/*
+ * Read the RTC_UDR_CON register and wait until the UDR field is cleared.
+ * This indicates that the time/alarm update has finished.
+ */
+static inline int s5m8767_wait_for_udr_update(struct s5m_rtc_info *info)
+{
+ int ret, retry = UDR_READ_RETRY_CNT;
+ unsigned int data;
+
+ do {
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &data);
+ } while (--retry && (data & RTC_UDR_MASK) && !ret);
+
+ if (!retry)
+ dev_err(info->dev, "waiting for UDR update, reached max number of retries\n");
+
+ return ret;
+}
+
static inline int s5m8767_rtc_set_time_reg(struct s5m_rtc_info *info)
{
int ret;
unsigned int data;
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &data);
if (ret < 0) {
dev_err(info->dev, "failed to read update reg(%d)\n", ret);
return ret;
data |= RTC_TIME_EN_MASK;
data |= RTC_UDR_MASK;
- ret = regmap_write(info->rtc, SEC_RTC_UDR_CON, data);
+ ret = regmap_write(info->regmap, SEC_RTC_UDR_CON, data);
if (ret < 0) {
dev_err(info->dev, "failed to write update reg(%d)\n", ret);
return ret;
}
- do {
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
- } while ((data & RTC_UDR_MASK) && !ret);
+ ret = s5m8767_wait_for_udr_update(info);
return ret;
}
int ret;
unsigned int data;
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &data);
if (ret < 0) {
dev_err(info->dev, "%s: fail to read update reg(%d)\n",
__func__, ret);
data &= ~RTC_TIME_EN_MASK;
data |= RTC_UDR_MASK;
- ret = regmap_write(info->rtc, SEC_RTC_UDR_CON, data);
+ ret = regmap_write(info->regmap, SEC_RTC_UDR_CON, data);
if (ret < 0) {
dev_err(info->dev, "%s: fail to write update reg(%d)\n",
__func__, ret);
return ret;
}
- do {
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
- } while ((data & RTC_UDR_MASK) && !ret);
+ ret = s5m8767_wait_for_udr_update(info);
return ret;
}
u8 data[8];
int ret;
- ret = regmap_bulk_read(info->rtc, SEC_RTC_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_RTC_SEC, data, 8);
if (ret < 0)
return ret;
1900 + tm->tm_year, 1 + tm->tm_mon, tm->tm_mday,
tm->tm_hour, tm->tm_min, tm->tm_sec, tm->tm_wday);
- ret = regmap_raw_write(info->rtc, SEC_RTC_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_RTC_SEC, data, 8);
if (ret < 0)
return ret;
unsigned int val;
int ret, i;
- ret = regmap_bulk_read(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
switch (info->device_type) {
case S5M8763X:
s5m8763_data_to_tm(data, &alrm->time);
- ret = regmap_read(info->rtc, SEC_ALARM0_CONF, &val);
+ ret = regmap_read(info->regmap, SEC_ALARM0_CONF, &val);
if (ret < 0)
return ret;
alrm->enabled = !!val;
- ret = regmap_read(info->rtc, SEC_RTC_STATUS, &val);
+ ret = regmap_read(info->regmap, SEC_RTC_STATUS, &val);
if (ret < 0)
return ret;
}
alrm->pending = 0;
- ret = regmap_read(info->rtc, SEC_RTC_STATUS, &val);
+ ret = regmap_read(info->regmap, SEC_RTC_STATUS, &val);
if (ret < 0)
return ret;
break;
int ret, i;
struct rtc_time tm;
- ret = regmap_bulk_read(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
switch (info->device_type) {
case S5M8763X:
- ret = regmap_write(info->rtc, SEC_ALARM0_CONF, 0);
+ ret = regmap_write(info->regmap, SEC_ALARM0_CONF, 0);
break;
case S5M8767X:
for (i = 0; i < 7; i++)
data[i] &= ~ALARM_ENABLE_MASK;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
u8 alarm0_conf;
struct rtc_time tm;
- ret = regmap_bulk_read(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
switch (info->device_type) {
case S5M8763X:
alarm0_conf = 0x77;
- ret = regmap_write(info->rtc, SEC_ALARM0_CONF, alarm0_conf);
+ ret = regmap_write(info->regmap, SEC_ALARM0_CONF, alarm0_conf);
break;
case S5M8767X:
if (data[RTC_YEAR1] & 0x7f)
data[RTC_YEAR1] |= ALARM_ENABLE_MASK;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
ret = s5m8767_rtc_set_alarm_reg(info);
if (ret < 0)
return ret;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
static void s5m_rtc_enable_wtsr(struct s5m_rtc_info *info, bool enable)
{
int ret;
- ret = regmap_update_bits(info->rtc, SEC_WTSR_SMPL_CNTL,
+ ret = regmap_update_bits(info->regmap, SEC_WTSR_SMPL_CNTL,
WTSR_ENABLE_MASK,
enable ? WTSR_ENABLE_MASK : 0);
if (ret < 0)
static void s5m_rtc_enable_smpl(struct s5m_rtc_info *info, bool enable)
{
int ret;
- ret = regmap_update_bits(info->rtc, SEC_WTSR_SMPL_CNTL,
+ ret = regmap_update_bits(info->regmap, SEC_WTSR_SMPL_CNTL,
SMPL_ENABLE_MASK,
enable ? SMPL_ENABLE_MASK : 0);
if (ret < 0)
int ret;
struct rtc_time tm;
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &tp_read);
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &tp_read);
if (ret < 0) {
dev_err(info->dev, "%s: fail to read control reg(%d)\n",
__func__, ret);
data[1] = (0 << BCD_EN_SHIFT) | (1 << MODEL24_SHIFT);
info->rtc_24hr_mode = 1;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_CONF, data, 2);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_CONF, data, 2);
if (ret < 0) {
dev_err(info->dev, "%s: fail to write controlm reg(%d)\n",
__func__, ret);
ret = s5m_rtc_set_time(info->dev, &tm);
}
- ret = regmap_update_bits(info->rtc, SEC_RTC_UDR_CON,
+ ret = regmap_update_bits(info->regmap, SEC_RTC_UDR_CON,
RTC_TCON_MASK, tp_read | RTC_TCON_MASK);
if (ret < 0)
dev_err(info->dev, "%s: fail to update TCON reg(%d)\n",
info->dev = &pdev->dev;
info->s5m87xx = s5m87xx;
- info->rtc = s5m87xx->rtc;
+ info->regmap = s5m87xx->regmap_rtc;
info->device_type = s5m87xx->device_type;
info->wtsr_smpl = s5m87xx->wtsr_smpl;
switch (pdata->device_type) {
case S5M8763X:
- info->irq = s5m87xx->irq_base + S5M8763_IRQ_ALARM0;
+ info->irq = regmap_irq_get_virq(s5m87xx->irq_data,
+ S5M8763_IRQ_ALARM0);
break;
case S5M8767X:
- info->irq = s5m87xx->irq_base + S5M8767_IRQ_RTCA1;
+ info->irq = regmap_irq_get_virq(s5m87xx->irq_data,
+ S5M8767_IRQ_RTCA1);
break;
default:
if (info->wtsr_smpl) {
for (i = 0; i < 3; i++) {
s5m_rtc_enable_wtsr(info, false);
- regmap_read(info->rtc, SEC_WTSR_SMPL_CNTL, &val);
+ regmap_read(info->regmap, SEC_WTSR_SMPL_CNTL, &val);
pr_debug("%s: WTSR_SMPL reg(0x%02x)\n", __func__, val);
if (val & WTSR_ENABLE_MASK)
pr_emerg("%s: fail to disable WTSR\n",
s5m_rtc_enable_smpl(info, false);
}
+static int s5m_rtc_resume(struct device *dev)
+{
+ struct s5m_rtc_info *info = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (device_may_wakeup(dev))
+ ret = disable_irq_wake(info->irq);
+
+ return ret;
+}
+
+static int s5m_rtc_suspend(struct device *dev)
+{
+ struct s5m_rtc_info *info = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (device_may_wakeup(dev))
+ ret = enable_irq_wake(info->irq);
+
+ return ret;
+}
+
+static SIMPLE_DEV_PM_OPS(s5m_rtc_pm_ops, s5m_rtc_suspend, s5m_rtc_resume);
+
static const struct platform_device_id s5m_rtc_id[] = {
{ "s5m-rtc", 0 },
};
.driver = {
.name = "s5m-rtc",
.owner = THIS_MODULE,
+ .pm = &s5m_rtc_pm_ops,
},
.probe = s5m_rtc_probe,
.shutdown = s5m_rtc_shutdown,
{
if (block->gdp) {
del_gendisk(block->gdp);
- block->gdp->queue = NULL;
block->gdp->private_data = NULL;
put_disk(block->gdp);
block->gdp = NULL;
u8 _reserved5[4096 - 112]; /* 112-4095 */
} __packed __aligned(PAGE_SIZE);
-static __initdata struct init_sccb early_event_mask_sccb __aligned(PAGE_SIZE);
static __initdata struct read_info_sccb early_read_info_sccb;
static __initdata char sccb_early[PAGE_SIZE] __aligned(PAGE_SIZE);
static unsigned long sclp_hsa_size;
bool __init sclp_has_linemode(void)
{
- struct init_sccb *sccb = &early_event_mask_sccb;
+ struct init_sccb *sccb = (void *) &sccb_early;
if (sccb->header.response_code != 0x20)
return 0;
bool __init sclp_has_vt220(void)
{
- struct init_sccb *sccb = &early_event_mask_sccb;
+ struct init_sccb *sccb = (void *) &sccb_early;
if (sccb->header.response_code != 0x20)
return 0;
.cmd_per_lun = TW_MAX_CMDS_PER_LUN,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = twa_host_attrs,
- .emulated = 1
+ .emulated = 1,
+ .no_write_same = 1,
};
/* This function will probe and initialize a card */
.cmd_per_lun = TW_MAX_CMDS_PER_LUN,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = twl_host_attrs,
- .emulated = 1
+ .emulated = 1,
+ .no_write_same = 1,
};
/* This function will probe and initialize a card */
.cmd_per_lun = TW_MAX_CMDS_PER_LUN,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = tw_host_attrs,
- .emulated = 1
+ .emulated = 1,
+ .no_write_same = 1,
};
/* This function will probe and initialize a card */
#endif
.use_clustering = ENABLE_CLUSTERING,
.emulated = 1,
+ .no_write_same = 1,
};
static void __aac_shutdown(struct aac_dev * aac)
.cmd_per_lun = ARCMSR_MAX_CMD_PERLUN,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = arcmsr_host_attrs,
+ .no_write_same = 1,
};
static struct pci_device_id arcmsr_device_id_table[] = {
{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1110)},
struct bfa_fcs_lport_s *bfa_fcs_lookup_port(struct bfa_fcs_s *fcs,
u16 vf_id, wwn_t lpwwn);
+void bfa_fcs_lport_set_symname(struct bfa_fcs_lport_s *port, char *symname);
void bfa_fcs_lport_get_info(struct bfa_fcs_lport_s *port,
struct bfa_lport_info_s *port_info);
void bfa_fcs_lport_get_attr(struct bfa_fcs_lport_s *port,
bfa_sm_send_event(lport, BFA_FCS_PORT_SM_CREATE);
}
+void
+bfa_fcs_lport_set_symname(struct bfa_fcs_lport_s *port,
+ char *symname)
+{
+ strcpy(port->port_cfg.sym_name.symname, symname);
+
+ if (bfa_sm_cmp_state(port, bfa_fcs_lport_sm_online))
+ bfa_fcs_lport_ns_util_send_rspn_id(
+ BFA_FCS_GET_NS_FROM_PORT(port), NULL);
+}
+
/*
* fcs_lport_api
*/
u8 *psymbl = &symbl[0];
int len;
- if (!bfa_sm_cmp_state(port, bfa_fcs_lport_sm_online))
- return;
-
/* Avoid sending RSPN in the following states. */
if (bfa_sm_cmp_state(ns, bfa_fcs_lport_ns_sm_offline) ||
bfa_sm_cmp_state(ns, bfa_fcs_lport_ns_sm_plogi_sending) ||
return;
spin_lock_irqsave(&bfad->bfad_lock, flags);
- if (strlen(sym_name) > 0) {
- strcpy(fcs_vport->lport.port_cfg.sym_name.symname, sym_name);
- bfa_fcs_lport_ns_util_send_rspn_id(
- BFA_FCS_GET_NS_FROM_PORT((&fcs_vport->lport)), NULL);
- }
+ if (strlen(sym_name) > 0)
+ bfa_fcs_lport_set_symname(&fcs_vport->lport, sym_name);
spin_unlock_irqrestore(&bfad->bfad_lock, flags);
}
.cmd_per_lun = GDTH_MAXC_P_L,
.unchecked_isa_dma = 1,
.use_clustering = ENABLE_CLUSTERING,
+ .no_write_same = 1,
};
#ifdef CONFIG_ISA
shost->use_clustering = sht->use_clustering;
shost->ordered_tag = sht->ordered_tag;
shost->eh_deadline = shost_eh_deadline * HZ;
+ shost->no_write_same = sht->no_write_same;
if (sht->supported_mode == MODE_UNKNOWN)
/* means we didn't set it ... default to INITIATOR */
.sdev_attrs = hpsa_sdev_attrs,
.shost_attrs = hpsa_shost_attrs,
.max_sectors = 8192,
+ .no_write_same = 1,
};
"has check condition: aborted command: "
"ASC: 0x%x, ASCQ: 0x%x\n",
cp, asc, ascq);
- cmd->result = DID_SOFT_ERROR << 16;
+ cmd->result |= DID_SOFT_ERROR << 16;
break;
}
/* Must be some other type of check condition */
hpsa_hba_inquiry(h);
hpsa_register_scsi(h); /* hook ourselves into SCSI subsystem */
start_controller_lockup_detector(h);
- return 1;
+ return 0;
clean4:
hpsa_free_sg_chain_blocks(h);
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = ipr_ioa_attrs,
.sdev_attrs = ipr_dev_attrs,
- .proc_name = IPR_NAME
+ .proc_name = IPR_NAME,
+ .no_write_same = 1,
};
/**
.sg_tablesize = IPS_MAX_SG,
.cmd_per_lun = 3,
.use_clustering = ENABLE_CLUSTERING,
+ .no_write_same = 1,
};
qc->tf.nsect = 0;
}
- ata_tf_to_fis(&qc->tf, 1, 0, (u8*)&task->ata_task.fis);
+ ata_tf_to_fis(&qc->tf, qc->dev->link->pmp, 1, (u8 *)&task->ata_task.fis);
task->uldd_task = qc;
if (ata_is_atapi(qc->tf.protocol)) {
memcpy(task->ata_task.atapi_packet, qc->cdb, qc->dev->cdb_len);
.eh_device_reset_handler = megaraid_reset,
.eh_bus_reset_handler = megaraid_reset,
.eh_host_reset_handler = megaraid_reset,
+ .no_write_same = 1,
};
static int
.eh_host_reset_handler = megaraid_reset_handler,
.change_queue_depth = megaraid_change_queue_depth,
.use_clustering = ENABLE_CLUSTERING,
+ .no_write_same = 1,
.sdev_attrs = megaraid_sdev_attrs,
.shost_attrs = megaraid_shost_attrs,
};
.bios_param = megasas_bios_param,
.use_clustering = ENABLE_CLUSTERING,
.change_queue_depth = megasas_change_queue_depth,
+ .no_write_same = 1,
};
/**
unsigned long flags;
u8 deviceType = pPayload->sas_identify.dev_type;
port->port_state = portstate;
+ phy->phy_state = PHY_STATE_LINK_UP_SPC;
PM8001_MSG_DBG(pm8001_ha,
pm8001_printk("HW_EVENT_SAS_PHY_UP port id = %d, phy id = %d\n",
port_id, phy_id));
pm8001_printk("HW_EVENT_SATA_PHY_UP port id = %d,"
" phy id = %d\n", port_id, phy_id));
port->port_state = portstate;
+ phy->phy_state = PHY_STATE_LINK_UP_SPC;
port->port_attached = 1;
pm8001_get_lrate_mode(phy, link_rate);
phy->phy_type |= PORT_TYPE_SATA;
#define LINKRATE_30 (0x02 << 8)
#define LINKRATE_60 (0x04 << 8)
+/* for phy state */
+
+#define PHY_STATE_LINK_UP_SPC 0x1
+
/* for new SPC controllers MEMBASE III is shared between BIOS and DATA */
#define GSM_SM_BASE 0x4F0000
struct mpi_msg_hdr{
static void pm8001_tasklet(unsigned long opaque)
{
struct pm8001_hba_info *pm8001_ha;
- u32 vec;
- pm8001_ha = (struct pm8001_hba_info *)opaque;
+ struct isr_param *irq_vector;
+
+ irq_vector = (struct isr_param *)opaque;
+ pm8001_ha = irq_vector->drv_inst;
if (unlikely(!pm8001_ha))
BUG_ON(1);
- vec = pm8001_ha->int_vector;
- PM8001_CHIP_DISP->isr(pm8001_ha, vec);
+ PM8001_CHIP_DISP->isr(pm8001_ha, irq_vector->irq_id);
}
#endif
-static struct pm8001_hba_info *outq_to_hba(u8 *outq)
-{
- return container_of((outq - *outq), struct pm8001_hba_info, outq[0]);
-}
-
/**
* pm8001_interrupt_handler_msix - main MSIX interrupt handler.
* It obtains the vector number and calls the equivalent bottom
*/
static irqreturn_t pm8001_interrupt_handler_msix(int irq, void *opaque)
{
- struct pm8001_hba_info *pm8001_ha = outq_to_hba(opaque);
- u8 outq = *(u8 *)opaque;
+ struct isr_param *irq_vector;
+ struct pm8001_hba_info *pm8001_ha;
irqreturn_t ret = IRQ_HANDLED;
+ irq_vector = (struct isr_param *)opaque;
+ pm8001_ha = irq_vector->drv_inst;
+
if (unlikely(!pm8001_ha))
return IRQ_NONE;
if (!PM8001_CHIP_DISP->is_our_interupt(pm8001_ha))
return IRQ_NONE;
- pm8001_ha->int_vector = outq;
#ifdef PM8001_USE_TASKLET
- tasklet_schedule(&pm8001_ha->tasklet);
+ tasklet_schedule(&pm8001_ha->tasklet[irq_vector->irq_id]);
#else
- ret = PM8001_CHIP_DISP->isr(pm8001_ha, outq);
+ ret = PM8001_CHIP_DISP->isr(pm8001_ha, irq_vector->irq_id);
#endif
return ret;
}
if (!PM8001_CHIP_DISP->is_our_interupt(pm8001_ha))
return IRQ_NONE;
- pm8001_ha->int_vector = 0;
#ifdef PM8001_USE_TASKLET
- tasklet_schedule(&pm8001_ha->tasklet);
+ tasklet_schedule(&pm8001_ha->tasklet[0]);
#else
ret = PM8001_CHIP_DISP->isr(pm8001_ha, 0);
#endif
{
struct pm8001_hba_info *pm8001_ha;
struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
-
+ int j;
pm8001_ha = sha->lldd_ha;
if (!pm8001_ha)
pm8001_ha->iomb_size = IOMB_SIZE_SPC;
#ifdef PM8001_USE_TASKLET
- /**
- * default tasklet for non msi-x interrupt handler/first msi-x
- * interrupt handler
- **/
- tasklet_init(&pm8001_ha->tasklet, pm8001_tasklet,
- (unsigned long)pm8001_ha);
+ /* Tasklet for non msi-x interrupt handler */
+ if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
+ tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet,
+ (unsigned long)&(pm8001_ha->irq_vector[0]));
+ else
+ for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
+ tasklet_init(&pm8001_ha->tasklet[j], pm8001_tasklet,
+ (unsigned long)&(pm8001_ha->irq_vector[j]));
#endif
pm8001_ioremap(pm8001_ha);
if (!pm8001_alloc(pm8001_ha, ent))
"pci_enable_msix request ret:%d no of intr %d\n",
rc, pm8001_ha->number_of_intr));
- for (i = 0; i < number_of_intr; i++)
- pm8001_ha->outq[i] = i;
for (i = 0; i < number_of_intr; i++) {
snprintf(intr_drvname[i], sizeof(intr_drvname[0]),
DRV_NAME"%d", i);
+ pm8001_ha->irq_vector[i].irq_id = i;
+ pm8001_ha->irq_vector[i].drv_inst = pm8001_ha;
+
if (request_irq(pm8001_ha->msix_entries[i].vector,
pm8001_interrupt_handler_msix, flag,
- intr_drvname[i], &pm8001_ha->outq[i])) {
+ intr_drvname[i], &(pm8001_ha->irq_vector[i]))) {
for (j = 0; j < i; j++)
free_irq(
pm8001_ha->msix_entries[j].vector,
- &pm8001_ha->outq[j]);
+				&(pm8001_ha->irq_vector[j]));
pci_disable_msix(pm8001_ha->pdev);
break;
}
{
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
struct pm8001_hba_info *pm8001_ha;
- int i;
+ int i, j;
pm8001_ha = sha->lldd_ha;
sas_unregister_ha(sha);
sas_remove_host(pm8001_ha->shost);
synchronize_irq(pm8001_ha->msix_entries[i].vector);
for (i = 0; i < pm8001_ha->number_of_intr; i++)
free_irq(pm8001_ha->msix_entries[i].vector,
- &pm8001_ha->outq[i]);
+ &(pm8001_ha->irq_vector[i]));
pci_disable_msix(pdev);
#else
free_irq(pm8001_ha->irq, sha);
#endif
#ifdef PM8001_USE_TASKLET
- tasklet_kill(&pm8001_ha->tasklet);
+ /* For non-msix and msix interrupts */
+ if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
+ tasklet_kill(&pm8001_ha->tasklet[0]);
+ else
+ for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
+ tasklet_kill(&pm8001_ha->tasklet[j]);
#endif
pm8001_free(pm8001_ha);
kfree(sha->sas_phy);
{
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
struct pm8001_hba_info *pm8001_ha;
- int i;
+ int i, j;
u32 device_state;
pm8001_ha = sha->lldd_ha;
flush_workqueue(pm8001_wq);
synchronize_irq(pm8001_ha->msix_entries[i].vector);
for (i = 0; i < pm8001_ha->number_of_intr; i++)
free_irq(pm8001_ha->msix_entries[i].vector,
- &pm8001_ha->outq[i]);
+ &(pm8001_ha->irq_vector[i]));
pci_disable_msix(pdev);
#else
free_irq(pm8001_ha->irq, sha);
#endif
#ifdef PM8001_USE_TASKLET
- tasklet_kill(&pm8001_ha->tasklet);
+ /* For non-msix and msix interrupts */
+ if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
+ tasklet_kill(&pm8001_ha->tasklet[0]);
+ else
+ for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
+ tasklet_kill(&pm8001_ha->tasklet[j]);
#endif
device_state = pci_choose_state(pdev, state);
pm8001_printk("pdev=0x%p, slot=%s, entering "
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
struct pm8001_hba_info *pm8001_ha;
int rc;
- u8 i = 0;
+ u8 i = 0, j;
u32 device_state;
pm8001_ha = sha->lldd_ha;
device_state = pdev->current_state;
if (rc)
goto err_out_disable;
#ifdef PM8001_USE_TASKLET
- /* default tasklet for non msi-x interrupt handler/first msi-x
- * interrupt handler */
- tasklet_init(&pm8001_ha->tasklet, pm8001_tasklet,
- (unsigned long)pm8001_ha);
+ /* Tasklet for non msi-x interrupt handler */
+ if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
+ tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet,
+ (unsigned long)&(pm8001_ha->irq_vector[0]));
+ else
+ for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
+ tasklet_init(&pm8001_ha->tasklet[j], pm8001_tasklet,
+ (unsigned long)&(pm8001_ha->irq_vector[j]));
#endif
PM8001_CHIP_DISP->interrupt_enable(pm8001_ha, 0);
if (pm8001_ha->chip_id != chip_8001) {
MODULE_AUTHOR("Jack Wang <jack_wang@usish.com>");
MODULE_AUTHOR("Anand Kumar Santhanam <AnandKumar.Santhanam@pmcs.com>");
MODULE_AUTHOR("Sangeetha Gnanasekaran <Sangeetha.Gnanasekaran@pmcs.com>");
+MODULE_AUTHOR("Nikith Ganigarakoppal <Nikith.Ganigarakoppal@pmcs.com>");
MODULE_DESCRIPTION(
"PMC-Sierra PM8001/8081/8088/8089/8074/8076/8077 "
"SAS/SATA controller driver");
struct pm8001_tmf_task tmf_task;
struct pm8001_device *pm8001_dev = dev->lldd_dev;
struct pm8001_hba_info *pm8001_ha = pm8001_find_ha_by_dev(dev);
+ DECLARE_COMPLETION_ONSTACK(completion_setstate);
if (dev_is_sata(dev)) {
struct sas_phy *phy = sas_get_local_phy(dev);
rc = pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev ,
dev, 1, 0);
rc = sas_phy_reset(phy, 1);
sas_put_local_phy(phy);
+ pm8001_dev->setds_completion = &completion_setstate;
rc = PM8001_CHIP_DISP->set_dev_state_req(pm8001_ha,
pm8001_dev, 0x01);
- msleep(2000);
+ wait_for_completion(&completion_setstate);
} else {
tmf_task.tmf = TMF_LU_RESET;
rc = pm8001_issue_ssp_tmf(dev, lun, &tmf_task);
u64 membase;
u32 memsize;
};
+struct isr_param {
+ struct pm8001_hba_info *drv_inst;
+ u32 irq_id;
+};
struct pm8001_hba_info {
char name[PM8001_NAME_LENGTH];
struct list_head list;
int number_of_intr;/*will be used in remove()*/
#endif
#ifdef PM8001_USE_TASKLET
- struct tasklet_struct tasklet;
+ struct tasklet_struct tasklet[PM8001_MAX_MSIX_VEC];
#endif
u32 logging_level;
u32 fw_status;
u32 smp_exp_mode;
- u32 int_vector;
const struct firmware *fw_image;
- u8 outq[PM8001_MAX_MSIX_VEC];
+ struct isr_param irq_vector[PM8001_MAX_MSIX_VEC];
};
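
A standalone sketch of the per-vector cookie pattern the hunks above introduce (all names here are illustrative, not the driver's): each MSI-X vector gets its own descriptor pairing the adapter with the vector id, so neither the handler nor its tasklet has to reconstruct the vector from a queue byte.

struct my_isr_param {
	struct my_hba *drv_inst;	/* adapter that owns this vector */
	u32 irq_id;			/* MSI-X vector number */
};

static irqreturn_t my_msix_handler(int irq, void *opaque)
{
	struct my_isr_param *p = opaque;

	/* one cookie per vector was handed to request_irq() */
	return my_service_outbound_queue(p->drv_inst, p->irq_id);
}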
struct pm8001_work {
unsigned long flags;
u8 deviceType = pPayload->sas_identify.dev_type;
port->port_state = portstate;
+ phy->phy_state = PHY_STATE_LINK_UP_SPCV;
PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
"portid:%d; phyid:%d; linkrate:%d; "
"portstate:%x; devicetype:%x\n",
port_id, phy_id, link_rate, portstate));
port->port_state = portstate;
+ phy->phy_state = PHY_STATE_LINK_UP_SPCV;
port->port_attached = 1;
pm8001_get_lrate_mode(phy, link_rate);
phy->phy_type |= PORT_TYPE_SATA;
#define SAS_DOPNRJT_RTRY_TMO 128
#define SAS_COPNRJT_RTRY_TMO 128
+/* for phy state */
+#define PHY_STATE_LINK_UP_SPCV 0x2
/*
Making ORR bigger than IT NEXUS LOSS which is 2000000us = 2 second.
Assuming a bigger value 3 second, 3000000/128 = 23437.5 where 128
};
#define PMCRAID_AEN_CMD_MAX (__PMCRAID_AEN_CMD_MAX - 1)
+static struct genl_multicast_group pmcraid_mcgrps[] = {
+ { .name = "events", /* not really used - see ID discussion below */ },
+};
+
static struct genl_family pmcraid_event_family = {
- .id = GENL_ID_GENERATE,
+ /*
+ * Due to prior multicast group abuse (the code having assumed that
+ * the family ID can be used as a multicast group ID) we need to
+ * statically allocate a family (and thus group) ID.
+ */
+ .id = GENL_ID_PMCRAID,
.name = "pmcraid",
.version = 1,
- .maxattr = PMCRAID_AEN_ATTR_MAX
+ .maxattr = PMCRAID_AEN_ATTR_MAX,
+ .mcgrps = pmcraid_mcgrps,
+ .n_mcgrps = ARRAY_SIZE(pmcraid_mcgrps),
};
/**
return result;
}
- result =
- genlmsg_multicast(&pmcraid_event_family, skb, 0,
- pmcraid_event_family.id, GFP_ATOMIC);
+ result = genlmsg_multicast(&pmcraid_event_family, skb,
+ 0, 0, GFP_ATOMIC);
/* If there are no listeners, genlmsg_multicast may return non-zero
* value.
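
A hedged, minimal sketch of the same registration shape (names invented): once .mcgrps/.n_mcgrps are set, the group argument of genlmsg_multicast() is an index into the family's own group array rather than a global group id, which is why the call site above now passes 0 instead of the family id.

static struct genl_multicast_group my_mcgrps[] = {
	{ .name = "events", },
};

static struct genl_family my_family = {
	.id       = GENL_ID_GENERATE,	/* fine when no legacy userspace */
	.name     = "my_family",
	.version  = 1,
	.mcgrps   = my_mcgrps,
	.n_mcgrps = ARRAY_SIZE(my_mcgrps),
};

static int my_notify(struct sk_buff *skb)
{
	/* the group argument (0) indexes my_mcgrps, i.e. "events" */
	return genlmsg_multicast(&my_family, skb, 0, 0, GFP_ATOMIC);
}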
.this_id = -1,
.sg_tablesize = PMCRAID_MAX_IOADLS,
.max_sectors = PMCRAID_IOA_MAX_SECTORS,
+ .no_write_same = 1,
.cmd_per_lun = PMCRAID_MAX_CMD_PER_LUN,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = pmcraid_host_attrs,
{
struct scsi_device *sdev = sdkp->device;
+ if (sdev->host->no_write_same) {
+ sdev->no_write_same = 1;
+
+ return;
+ }
+
if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, INQUIRY) < 0) {
/* too large values might cause issues with arcmsr */
int vpd_buf_len = 64;
.use_clustering = DISABLE_CLUSTERING,
/* Make sure we dont get a sg segment crosses a page boundary */
.dma_boundary = PAGE_SIZE-1,
+ .no_write_same = 1,
};
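
How the pieces above fit together, as a minimal sketch with an invented host template (only sd_read_write_same() is real here): a low-level driver that cannot handle WRITE SAME opts out once, and sd propagates that per device.

static struct scsi_host_template my_sht = {
	.name		= "my_hba",
	/* firmware cannot handle WRITE SAME; sd will see this ... */
	.no_write_same	= 1,
};

/* ... via the sd hunk above:
 *	if (sdev->host->no_write_same)
 *		sdev->no_write_same = 1;
 * so WRITE SAME (10/16) commands are never issued to the device.
 */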
enum {
static int bcm2835_spi_remove(struct platform_device *pdev)
{
- struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
+ struct spi_master *master = platform_get_drvdata(pdev);
struct bcm2835_spi *bs = spi_master_get_devdata(master);
free_irq(bs->irq, master);
static int bcm63xx_spi_remove(struct platform_device *pdev)
{
- struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
+ struct spi_master *master = platform_get_drvdata(pdev);
struct bcm63xx_spi *bs = spi_master_get_devdata(master);
/* reset spi block */
static int mpc512x_psc_spi_do_remove(struct device *dev)
{
- struct spi_master *master = spi_master_get(dev_get_drvdata(dev));
+ struct spi_master *master = dev_get_drvdata(dev);
struct mpc512x_psc_spi *mps = spi_master_get_devdata(master);
clk_disable_unprepare(mps->clk_mclk);
struct mxs_spi *spi;
struct mxs_ssp *ssp;
- master = spi_master_get(platform_get_drvdata(pdev));
+ master = platform_get_drvdata(pdev);
spi = spi_master_get_devdata(master);
ssp = &spi->ssp;
static struct acpi_device_id pxa2xx_spi_acpi_match[] = {
{ "INT33C0", 0 },
{ "INT33C1", 0 },
+ { "INT3430", 0 },
+ { "INT3431", 0 },
{ "80860F0E", 0 },
{ },
};
/* Enable the SSP clock */
clk_prepare_enable(ssp->clk);
+ /* Restore LPSS private register bits */
+ lpss_ssp_setup(drv_data);
+
/* Start the queue running */
status = spi_master_resume(drv_data->master);
if (status != 0) {
static int rspi_remove(struct platform_device *pdev)
{
- struct rspi_data *rspi = spi_master_get(platform_get_drvdata(pdev));
+ struct rspi_data *rspi = platform_get_drvdata(pdev);
spi_unregister_master(rspi->master);
rspi_release_dma(rspi);
free_irq(platform_get_irq(pdev, 0), rspi);
clk_put(rspi->clk);
iounmap(rspi->addr);
- spi_master_put(rspi->master);
return 0;
}
qspi->spi_max_frequency, clk_div);
ret = pm_runtime_get_sync(qspi->dev);
- if (ret) {
+ if (ret < 0) {
dev_err(qspi->dev, "pm_runtime_get_sync() failed\n");
return ret;
}
if (!of_property_read_u32(np, "num-cs", &num_cs))
master->num_chipselect = num_cs;
- platform_set_drvdata(pdev, master);
-
qspi = spi_master_get_devdata(master);
qspi->master = master;
qspi->dev = &pdev->dev;
+ platform_set_drvdata(pdev, qspi);
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
static int ti_qspi_remove(struct platform_device *pdev)
{
- struct ti_qspi *qspi = platform_get_drvdata(pdev);
+ struct spi_master *master;
+ struct ti_qspi *qspi;
+ int ret;
+
+ master = platform_get_drvdata(pdev);
+ qspi = spi_master_get_devdata(master);
+
+ ret = pm_runtime_get_sync(qspi->dev);
+ if (ret < 0) {
+ dev_err(qspi->dev, "pm_runtime_get_sync() failed\n");
+ return ret;
+ }
ti_qspi_write(qspi, QSPI_WC_INT_DISABLE, QSPI_INTR_ENABLE_CLEAR_REG);
+ pm_runtime_put(qspi->dev);
+ pm_runtime_disable(&pdev->dev);
+
+ spi_unregister_master(master);
+
return 0;
}
static int txx9spi_remove(struct platform_device *dev)
{
- struct spi_master *master = spi_master_get(platform_get_drvdata(dev));
+ struct spi_master *master = platform_get_drvdata(dev);
struct txx9spi *c = spi_master_get_devdata(master);
destroy_workqueue(c->workqueue);
return -ENOMEM;
ret = spi_register_master(master);
- if (ret != 0) {
+ if (!ret) {
*ptr = master;
devres_add(dev, ptr);
} else {
/* This function maps kernel space memory to user space memory. */
static int bridge_mmap(struct file *filp, struct vm_area_struct *vma)
{
- u32 status;
+ struct omap_dsp_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
/* VM_IO | VM_DONTEXPAND | VM_DONTDUMP are set by remap_pfn_range() */
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
vma->vm_start, vma->vm_end, vma->vm_page_prot,
vma->vm_flags);
- status = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
- vma->vm_end - vma->vm_start,
- vma->vm_page_prot);
- if (status != 0)
- status = -EAGAIN;
-
- return status;
+ return vm_iomap_memory(vma,
+ pdata->phys_mempool_base,
+ pdata->phys_mempool_size);
}
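
/*
 * Note on the API swap above: vm_iomap_memory() derives the page frame
 * from vma->vm_pgoff itself and refuses requests that fall outside
 * [phys_mempool_base, phys_mempool_base + phys_mempool_size) - bounds
 * checks the removed open-coded remap_pfn_range() call never made.
 */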
static const struct file_operations bridge_fops = {
struct n_tty_data *ldata = tty->disc_data;
size_t echoed;
- if (!L_ECHO(tty) || ldata->echo_commit == ldata->echo_tail)
+ if ((!L_ECHO(tty) && !L_ECHONL(tty)) ||
+ ldata->echo_commit == ldata->echo_tail)
return;
mutex_lock(&ldata->output_lock);
{
struct n_tty_data *ldata = tty->disc_data;
- if (!L_ECHO(tty) || ldata->echo_commit == ldata->echo_head)
+ if ((!L_ECHO(tty) && !L_ECHONL(tty)) ||
+ ldata->echo_commit == ldata->echo_head)
return;
mutex_lock(&ldata->output_lock);
return -EINVAL;
mem = idev->info->mem + mi;
+ if (mem->addr & ~PAGE_MASK)
+ return -ENODEV;
if (vma->vm_end - vma->vm_start > mem->size)
return -EINVAL;
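
/*
 * Worked example for the alignment check above, assuming 4 KiB pages:
 * a UIO region registered at 0x10000200 has (addr & ~PAGE_MASK) ==
 * 0x200, so the mmap is refused. remap_pfn_range() maps whole page
 * frames only; silently rounding down would expose the 0x200 bytes
 * in front of the region.
 */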
static const struct usb_device_id acm_ids[] = {
/* quirky and broken devices */
+ { USB_DEVICE(0x17ef, 0x7000), /* Lenovo USB modem */
+	.driver_info = NO_UNION_NORMAL, }, /* has no union descriptor */
{ USB_DEVICE(0x0870, 0x0001), /* Metricom GS Modem */
.driver_info = NO_UNION_NORMAL, /* has no union descriptor */
},
hub->ports[i - 1]->child;
dev_dbg(hub_dev, "warm reset port %d\n", i);
- if (!udev || !(portstatus &
- USB_PORT_STAT_CONNECTION)) {
+ if (!udev ||
+ !(portstatus & USB_PORT_STAT_CONNECTION) ||
+ udev->state == USB_STATE_NOTATTACHED) {
status = hub_port_reset(hub, i,
NULL, HUB_BH_RESET_TIME,
true);
dep = dwc3_wIndex_to_dep(dwc, wIndex);
if (!dep)
return -EINVAL;
+ if (set == 0 && (dep->flags & DWC3_EP_WEDGE))
+ break;
ret = __dwc3_gadget_ep_set_halt(dep, set);
if (ret)
return -EINVAL;
else
dep->flags |= DWC3_EP_STALL;
} else {
- if (dep->flags & DWC3_EP_WEDGE)
- return 0;
-
ret = dwc3_send_gadget_ep_cmd(dwc, dep->number,
DWC3_DEPCMD_CLEARSTALL, &params);
if (ret)
value ? "set" : "clear",
dep->name);
else
- dep->flags &= ~DWC3_EP_STALL;
+ dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
}
return ret;
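
Schematically, the wedge rule the two hunks above implement is the following (a sketch with invented names, not dwc3 code): a host-initiated ClearFeature(ENDPOINT_HALT) must leave a wedged endpoint stalled, while the function driver's own clear removes both the stall and the wedge.

static int ep_feature_halt(struct my_ep *ep, int set)
{
	/* the host may not unstick a wedged endpoint */
	if (!set && ep->wedged)
		return 0;
	return my_ep_set_halt(ep, set);	/* driver path clears wedge too */
}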
config USB_CONFIGFS_MASS_STORAGE
boolean "Mass storage"
depends on USB_CONFIGFS
+ depends on BLOCK
select USB_F_MASS_STORAGE
help
The Mass Storage Gadget acts as a USB Mass Storage disk drive.
bitmap_zero(f->endpoints, 32);
}
cdev->config = NULL;
+ cdev->delayed_status = 0;
}
static int set_config(struct usb_composite_dev *cdev,
{
struct ffs_data *ffs = kzalloc(sizeof *ffs, GFP_KERNEL);
if (unlikely(!ffs))
- return 0;
+ return NULL;
ENTER();
*/
DBG(fsg, "bulk reset request\n");
raise_exception(fsg->common, FSG_STATE_RESET);
- return DELAYED_STATUS;
+ return USB_GADGET_DELAYED_STATUS;
case US_BULK_GET_MAX_LUN:
if (ctrl->bRequestType !=
return true;
}
-static int sleep_thread(struct fsg_common *common)
+static int sleep_thread(struct fsg_common *common, bool can_freeze)
{
int rc = 0;
/* Wait until a signal arrives or we are woken up */
for (;;) {
- try_to_freeze();
+ if (can_freeze)
+ try_to_freeze();
set_current_state(TASK_INTERRUPTIBLE);
if (signal_pending(current)) {
rc = -EINTR;
/* Wait for the next buffer to become available */
bh = common->next_buffhd_to_fill;
while (bh->state != BUF_STATE_EMPTY) {
- rc = sleep_thread(common);
+ rc = sleep_thread(common, false);
if (rc)
return rc;
}
}
/* Wait for something to happen */
- rc = sleep_thread(common);
+ rc = sleep_thread(common, false);
if (rc)
return rc;
}
}
/* Otherwise wait for something to happen */
- rc = sleep_thread(common);
+ rc = sleep_thread(common, true);
if (rc)
return rc;
}
/* Wait for the next buffer to become available */
bh = common->next_buffhd_to_fill;
while (bh->state != BUF_STATE_EMPTY) {
- rc = sleep_thread(common);
+ rc = sleep_thread(common, true);
if (rc)
return rc;
}
bh = common->next_buffhd_to_fill;
common->next_buffhd_to_drain = bh;
while (bh->state != BUF_STATE_EMPTY) {
- rc = sleep_thread(common);
+ rc = sleep_thread(common, true);
if (rc)
return rc;
}
/* Wait for the next buffer to become available */
bh = common->next_buffhd_to_fill;
while (bh->state != BUF_STATE_EMPTY) {
- rc = sleep_thread(common);
+ rc = sleep_thread(common, true);
if (rc)
return rc;
}
/* Wait for the CBW to arrive */
while (bh->state != BUF_STATE_FULL) {
- rc = sleep_thread(common);
+ rc = sleep_thread(common, true);
if (rc)
return rc;
}
}
if (num_active == 0)
break;
- if (sleep_thread(common))
+ if (sleep_thread(common, true))
return;
}
}
if (!common->running) {
- sleep_thread(common);
+ sleep_thread(common, true);
continue;
}
fsg->common->can_stall);
if (ret)
return ret;
- fsg_common_set_inquiry_string(fsg->common, 0, 0);
+ fsg_common_set_inquiry_string(fsg->common, NULL, NULL);
ret = fsg_common_run_thread(fsg->common);
if (ret)
return ret;
*/
#ifdef CONFIG_ARCH_PXA
#include <mach/pxa25x-udc.h>
+#include <mach/hardware.h>
#endif
#ifdef CONFIG_ARCH_LUBBOCK
}
static void s3c_hsotg_enqueue_setup(struct s3c_hsotg *hsotg);
+static void s3c_hsotg_disconnect(struct s3c_hsotg *hsotg);
/**
* s3c_hsotg_process_control - process a control request
if ((ctrl->bRequestType & USB_TYPE_MASK) == USB_TYPE_STANDARD) {
switch (ctrl->bRequest) {
case USB_REQ_SET_ADDRESS:
+ s3c_hsotg_disconnect(hsotg);
dcfg = readl(hsotg->regs + DCFG);
dcfg &= ~DCFG_DevAddr_MASK;
dcfg |= ctrl->wValue << DCFG_DevAddr_SHIFT;
/* as a fallback, try delivering it to the driver to deal with */
if (ret == 0 && hsotg->driver) {
+ spin_unlock(&hsotg->lock);
ret = hsotg->driver->setup(&hsotg->gadget, ctrl);
+ spin_lock(&hsotg->lock);
if (ret < 0)
dev_dbg(hsotg->dev, "driver->setup() ret %d\n", ret);
}
return;
}
+ spin_lock(&hsotg->lock);
if (req->actual == 0)
s3c_hsotg_enqueue_setup(hsotg);
else
s3c_hsotg_process_control(hsotg, req->buf);
+ spin_unlock(&hsotg->lock);
}
/**
writel(GINTSTS_USBSusp, hsotg->regs + GINTSTS);
call_gadget(hsotg, suspend);
- s3c_hsotg_disconnect(hsotg);
}
if (gintsts & GINTSTS_WkUpInt) {
return curlun->filp != NULL;
}
-/* Big enough to hold our biggest descriptor */
-#define EP0_BUFSIZE 256
-#define DELAYED_STATUS (EP0_BUFSIZE + 999) /* An impossibly large value */
-
/* Default size of buffer length. */
#define FSG_BUFLEN ((u32)16384)
return -ENOMEM;
}
-void bot_cleanup_old_alt(struct f_uas *fu)
+static void bot_cleanup_old_alt(struct f_uas *fu)
{
if (!(fu->flags & USBG_ENABLED))
return;
* functional coverage for the "USBCV" test harness from USB-IF.
* It's always set if OTG mode is enabled.
*/
-unsigned autoresume = DEFAULT_AUTORESUME;
+static unsigned autoresume = DEFAULT_AUTORESUME;
module_param(autoresume, uint, S_IRUGO);
MODULE_PARM_DESC(autoresume, "zero, or seconds before remote wakeup");
/* Maximum Autoresume time */
-unsigned max_autoresume;
+static unsigned max_autoresume;
module_param(max_autoresume, uint, S_IRUGO);
MODULE_PARM_DESC(max_autoresume, "maximum seconds before remote wakeup");
/* Interval between two remote wakeups */
-unsigned autoresume_interval_ms;
+static unsigned autoresume_interval_ms;
module_param(autoresume_interval_ms, uint, S_IRUGO);
MODULE_PARM_DESC(autoresume_interval_ms,
"milliseconds to increase successive wakeup delays");
#include <linux/clk.h>
#include <linux/device.h>
+#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
}
while (1) {
- if (room_on_ring(xhci, ep_ring, num_trbs))
- break;
+ if (room_on_ring(xhci, ep_ring, num_trbs)) {
+ union xhci_trb *trb = ep_ring->enqueue;
+ unsigned int usable = ep_ring->enq_seg->trbs +
+ TRBS_PER_SEGMENT - 1 - trb;
+ u32 nop_cmd;
+
+ /*
+ * Section 4.11.7.1 TD Fragments states that a link
+ * TRB must only occur at the boundary between
+			 * data bursts (e.g. 512 bytes for 480M).
+ * While it is possible to split a large fragment
+ * we don't know the size yet.
+ * Simplest solution is to fill the trb before the
+ * LINK with nop commands.
+ */
+ if (num_trbs == 1 || num_trbs <= usable || usable == 0)
+ break;
+
+ if (ep_ring->type != TYPE_BULK)
+ /*
+ * While isoc transfers might have a buffer that
+				 * crosses a 64k boundary, it is unlikely.
+ * Since we can't add NOPs without generating
+ * gaps in the traffic just hope it never
+ * happens at the end of the ring.
+ * This could be fixed by writing a LINK TRB
+ * instead of the first NOP - however the
+ * TRB_TYPE_LINK_LE32() calls would all need
+ * changing to check the ring length.
+ */
+ break;
+
+ if (num_trbs >= TRBS_PER_SEGMENT) {
+ xhci_err(xhci, "Too many fragments %d, max %d\n",
+ num_trbs, TRBS_PER_SEGMENT - 1);
+ return -ENOMEM;
+ }
+
+ nop_cmd = cpu_to_le32(TRB_TYPE(TRB_TR_NOOP) |
+ ep_ring->cycle_state);
+ ep_ring->num_trbs_free -= usable;
+ do {
+ trb->generic.field[0] = 0;
+ trb->generic.field[1] = 0;
+ trb->generic.field[2] = 0;
+ trb->generic.field[3] = nop_cmd;
+ trb++;
+ } while (--usable);
+ ep_ring->enqueue = trb;
+ if (room_on_ring(xhci, ep_ring, num_trbs))
+ break;
+ }
if (ep_ring == xhci->cmd_ring) {
xhci_err(xhci, "Do not support expand command ring\n");
disable_irq_wake(musb->nIrq);
free_irq(musb->nIrq, musb);
}
- cancel_work_sync(&musb->irq_work);
musb_host_free(musb);
}
musb_platform_disable(musb);
musb_generic_disable(musb);
+ /* Init IRQ workqueue before request_irq */
+ INIT_WORK(&musb->irq_work, musb_irq_work);
+
/* setup musb parts of the core (especially endpoints) */
status = musb_core_init(plat->config->multipoint
? MUSB_CONTROLLER_MHDRC
setup_timer(&musb->otg_timer, musb_otg_timer_func, (unsigned long) musb);
- /* Init IRQ workqueue before request_irq */
- INIT_WORK(&musb->irq_work, musb_irq_work);
-
/* attach to the IRQ */
if (request_irq(nIrq, musb->isr, 0, dev_name(dev), musb)) {
dev_err(dev, "request_irq %d failed!\n", nIrq);
musb_host_cleanup(musb);
fail3:
+ cancel_work_sync(&musb->irq_work);
if (musb->dma_controller)
dma_controller_destroy(musb->dma_controller);
fail2_5:
if (musb->dma_controller)
dma_controller_destroy(musb->dma_controller);
+ cancel_work_sync(&musb->irq_work);
musb_free(musb);
device_init_wakeup(dev, 0);
return 0;
u32 prog_len;
u32 transferred;
u32 packet_sz;
+ struct list_head tx_check;
};
#define MUSB_DMA_NUM_CHANNELS 15
struct cppi41_dma_channel rx_channel[MUSB_DMA_NUM_CHANNELS];
struct cppi41_dma_channel tx_channel[MUSB_DMA_NUM_CHANNELS];
struct musb *musb;
+ struct hrtimer early_tx;
+ struct list_head early_tx_list;
u32 rx_mode;
u32 tx_mode;
u32 auto_req;
cppi41_channel->usb_toggle = toggle;
}
-static void cppi41_dma_callback(void *private_data)
+static bool musb_is_tx_fifo_empty(struct musb_hw_ep *hw_ep)
{
- struct dma_channel *channel = private_data;
- struct cppi41_dma_channel *cppi41_channel = channel->private_data;
- struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
- struct musb *musb = hw_ep->musb;
- unsigned long flags;
- struct dma_tx_state txstate;
- u32 transferred;
+ u8 epnum = hw_ep->epnum;
+ struct musb *musb = hw_ep->musb;
+ void __iomem *epio = musb->endpoints[epnum].regs;
+ u16 csr;
- spin_lock_irqsave(&musb->lock, flags);
+ csr = musb_readw(epio, MUSB_TXCSR);
+ if (csr & MUSB_TXCSR_TXPKTRDY)
+ return false;
+ return true;
+}
- dmaengine_tx_status(cppi41_channel->dc, cppi41_channel->cookie,
- &txstate);
- transferred = cppi41_channel->prog_len - txstate.residue;
- cppi41_channel->transferred += transferred;
+static void cppi41_dma_callback(void *private_data);
- dev_dbg(musb->controller, "DMA transfer done on hw_ep=%d bytes=%d/%d\n",
- hw_ep->epnum, cppi41_channel->transferred,
- cppi41_channel->total_len);
+static void cppi41_trans_done(struct cppi41_dma_channel *cppi41_channel)
+{
+ struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
+ struct musb *musb = hw_ep->musb;
- update_rx_toggle(cppi41_channel);
-
- if (cppi41_channel->transferred == cppi41_channel->total_len ||
- transferred < cppi41_channel->packet_sz) {
+ if (!cppi41_channel->prog_len) {
/* done, complete */
cppi41_channel->channel.actual_len =
remain_bytes,
direction,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (WARN_ON(!dma_desc)) {
- spin_unlock_irqrestore(&musb->lock, flags);
+ if (WARN_ON(!dma_desc))
return;
- }
dma_desc->callback = cppi41_dma_callback;
- dma_desc->callback_param = channel;
+ dma_desc->callback_param = &cppi41_channel->channel;
cppi41_channel->cookie = dma_desc->tx_submit(dma_desc);
dma_async_issue_pending(dc);
musb_writew(epio, MUSB_RXCSR, csr);
}
}
+}
+
+static enum hrtimer_restart cppi41_recheck_tx_req(struct hrtimer *timer)
+{
+ struct cppi41_dma_controller *controller;
+ struct cppi41_dma_channel *cppi41_channel, *n;
+ struct musb *musb;
+ unsigned long flags;
+ enum hrtimer_restart ret = HRTIMER_NORESTART;
+
+ controller = container_of(timer, struct cppi41_dma_controller,
+ early_tx);
+ musb = controller->musb;
+
+ spin_lock_irqsave(&musb->lock, flags);
+ list_for_each_entry_safe(cppi41_channel, n, &controller->early_tx_list,
+ tx_check) {
+ bool empty;
+ struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
+
+ empty = musb_is_tx_fifo_empty(hw_ep);
+ if (empty) {
+ list_del_init(&cppi41_channel->tx_check);
+ cppi41_trans_done(cppi41_channel);
+ }
+ }
+
+ if (!list_empty(&controller->early_tx_list)) {
+ ret = HRTIMER_RESTART;
+ hrtimer_forward_now(&controller->early_tx,
+ ktime_set(0, 150 * NSEC_PER_USEC));
+ }
+
+ spin_unlock_irqrestore(&musb->lock, flags);
+ return ret;
+}
+
+static void cppi41_dma_callback(void *private_data)
+{
+ struct dma_channel *channel = private_data;
+ struct cppi41_dma_channel *cppi41_channel = channel->private_data;
+ struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
+ struct musb *musb = hw_ep->musb;
+ unsigned long flags;
+ struct dma_tx_state txstate;
+ u32 transferred;
+ bool empty;
+
+ spin_lock_irqsave(&musb->lock, flags);
+
+ dmaengine_tx_status(cppi41_channel->dc, cppi41_channel->cookie,
+ &txstate);
+ transferred = cppi41_channel->prog_len - txstate.residue;
+ cppi41_channel->transferred += transferred;
+
+ dev_dbg(musb->controller, "DMA transfer done on hw_ep=%d bytes=%d/%d\n",
+ hw_ep->epnum, cppi41_channel->transferred,
+ cppi41_channel->total_len);
+
+ update_rx_toggle(cppi41_channel);
+
+ if (cppi41_channel->transferred == cppi41_channel->total_len ||
+ transferred < cppi41_channel->packet_sz)
+ cppi41_channel->prog_len = 0;
+
+ empty = musb_is_tx_fifo_empty(hw_ep);
+ if (empty) {
+ cppi41_trans_done(cppi41_channel);
+ } else {
+ struct cppi41_dma_controller *controller;
+ /*
+		 * On AM335x it has been observed that the TX interrupt fires
+		 * too early, meaning the TXFIFO is not yet empty but the DMA
+		 * engine says that it is done with the transfer. We don't
+		 * receive a FIFO empty interrupt, so the only thing we can do
+		 * is to poll for the bit. On HS it usually takes 2us, on FS
+		 * around 110us - 150us depending on the transfer size.
+		 * We spin on HS (no longer than 25us) and set up a timer on
+		 * FS to check for the bit and complete the transfer.
+ */
+ controller = cppi41_channel->controller;
+
+ if (musb->g.speed == USB_SPEED_HIGH) {
+ unsigned wait = 25;
+
+ do {
+ empty = musb_is_tx_fifo_empty(hw_ep);
+ if (empty)
+ break;
+ wait--;
+ if (!wait)
+ break;
+ udelay(1);
+ } while (1);
+
+ empty = musb_is_tx_fifo_empty(hw_ep);
+ if (empty) {
+ cppi41_trans_done(cppi41_channel);
+ goto out;
+ }
+ }
+ list_add_tail(&cppi41_channel->tx_check,
+ &controller->early_tx_list);
+ if (!hrtimer_active(&controller->early_tx)) {
+ hrtimer_start_range_ns(&controller->early_tx,
+ ktime_set(0, 140 * NSEC_PER_USEC),
+ 40 * NSEC_PER_USEC,
+ HRTIMER_MODE_REL);
+ }
+ }
+out:
spin_unlock_irqrestore(&musb->lock, flags);
}
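
The timer half of the workaround reduces to a common self-rearming hrtimer shape; a minimal sketch under invented names (fifo_empty() stands in for musb_is_tx_fifo_empty()):

static enum hrtimer_restart poll_cb(struct hrtimer *t)
{
	if (fifo_empty())
		return HRTIMER_NORESTART;	/* done, stop polling */

	/* not empty yet: check again in ~150us */
	hrtimer_forward_now(t, ktime_set(0, 150 * NSEC_PER_USEC));
	return HRTIMER_RESTART;
}

/* armed once, with 40us of slack, as in the hunk above:
 *	hrtimer_start_range_ns(&timer, ktime_set(0, 140 * NSEC_PER_USEC),
 *			       40 * NSEC_PER_USEC, HRTIMER_MODE_REL);
 */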
WARN_ON(1);
return 1;
}
+ if (cppi41_channel->hw_ep->ep_in.type != USB_ENDPOINT_XFER_BULK)
+ return 0;
if (cppi41_channel->is_tx)
return 1;
/* AM335x Advisory 1.0.13. No workaround for device RX mode */
if (cppi41_channel->channel.status == MUSB_DMA_STATUS_FREE)
return 0;
+ list_del_init(&cppi41_channel->tx_check);
if (is_tx) {
csr = musb_readw(epio, MUSB_TXCSR);
csr &= ~MUSB_TXCSR_DMAENAB;
cppi41_channel->controller = controller;
cppi41_channel->port_num = port;
cppi41_channel->is_tx = is_tx;
+ INIT_LIST_HEAD(&cppi41_channel->tx_check);
musb_dma = &cppi41_channel->channel;
musb_dma->private_data = cppi41_channel;
struct cppi41_dma_controller *controller = container_of(c,
struct cppi41_dma_controller, controller);
+ hrtimer_cancel(&controller->early_tx);
cppi41_dma_controller_stop(controller);
kfree(controller);
}
if (!controller)
goto kzalloc_fail;
+ hrtimer_init(&controller->early_tx, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ controller->early_tx.function = cppi41_recheck_tx_req;
+ INIT_LIST_HEAD(&controller->early_tx_list);
controller->musb = musb;
controller->controller.channel_alloc = cppi41_dma_channel_allocate;
/* this "gadget" abstracts/virtualizes the controller */
musb->g.name = musb_driver_name;
+#if IS_ENABLED(CONFIG_USB_MUSB_DUAL_ROLE)
musb->g.is_otg = 1;
+#elif IS_ENABLED(CONFIG_USB_MUSB_GADGET)
+ musb->g.is_otg = 0;
+#endif
musb_g_init_endpoints(musb);
return am_phy->id;
}
- ret = usb_phy_gen_create_phy(dev, &am_phy->usb_phy_gen,
- USB_PHY_TYPE_USB2, 0, false);
+ ret = usb_phy_gen_create_phy(dev, &am_phy->usb_phy_gen, NULL);
if (ret)
return ret;
platform_set_drvdata(pdev, am_phy);
return 0;
-
- return ret;
}
static int am335x_phy_remove(struct platform_device *pdev)
if (pd)
return;
pd = platform_device_register_simple("usb_phy_gen_xceiv", -1, NULL, 0);
- if (!pd) {
+ if (IS_ERR(pd)) {
pr_err("Unable to register generic usb transceiver\n");
+ pd = NULL;
return;
}
}
}
int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_gen_xceiv *nop,
- enum usb_phy_type type, u32 clk_rate, bool needs_vcc)
+ struct usb_phy_gen_xceiv_platform_data *pdata)
{
+ enum usb_phy_type type = USB_PHY_TYPE_USB2;
int err;
+ u32 clk_rate = 0;
+ bool needs_vcc = false;
+
+ nop->reset_active_low = true; /* default behaviour */
+
+ if (dev->of_node) {
+ struct device_node *node = dev->of_node;
+ enum of_gpio_flags flags = 0;
+
+ if (of_property_read_u32(node, "clock-frequency", &clk_rate))
+ clk_rate = 0;
+
+ needs_vcc = of_property_read_bool(node, "vcc-supply");
+ nop->gpio_reset = of_get_named_gpio_flags(node, "reset-gpios",
+ 0, &flags);
+ if (nop->gpio_reset == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+
+ nop->reset_active_low = flags & OF_GPIO_ACTIVE_LOW;
+
+ } else if (pdata) {
+ type = pdata->type;
+ clk_rate = pdata->clk_rate;
+ needs_vcc = pdata->needs_vcc;
+ nop->gpio_reset = pdata->gpio_reset;
+ } else {
+ nop->gpio_reset = -1;
+ }
+
nop->phy.otg = devm_kzalloc(dev, sizeof(*nop->phy.otg),
GFP_KERNEL);
if (!nop->phy.otg)
static int usb_phy_gen_xceiv_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
- struct usb_phy_gen_xceiv_platform_data *pdata =
- dev_get_platdata(&pdev->dev);
struct usb_phy_gen_xceiv *nop;
- enum usb_phy_type type = USB_PHY_TYPE_USB2;
int err;
- u32 clk_rate = 0;
- bool needs_vcc = false;
nop = devm_kzalloc(dev, sizeof(*nop), GFP_KERNEL);
if (!nop)
return -ENOMEM;
- nop->reset_active_low = true; /* default behaviour */
-
- if (dev->of_node) {
- struct device_node *node = dev->of_node;
- enum of_gpio_flags flags;
-
- if (of_property_read_u32(node, "clock-frequency", &clk_rate))
- clk_rate = 0;
-
- needs_vcc = of_property_read_bool(node, "vcc-supply");
- nop->gpio_reset = of_get_named_gpio_flags(node, "reset-gpios",
- 0, &flags);
- if (nop->gpio_reset == -EPROBE_DEFER)
- return -EPROBE_DEFER;
-
- nop->reset_active_low = flags & OF_GPIO_ACTIVE_LOW;
-
- } else if (pdata) {
- type = pdata->type;
- clk_rate = pdata->clk_rate;
- needs_vcc = pdata->needs_vcc;
- nop->gpio_reset = pdata->gpio_reset;
- }
-
- err = usb_phy_gen_create_phy(dev, nop, type, clk_rate, needs_vcc);
+ err = usb_phy_gen_create_phy(dev, nop, dev_get_platdata(&pdev->dev));
if (err)
return err;
platform_set_drvdata(pdev, nop);
return 0;
-
- return err;
}
static int usb_phy_gen_xceiv_remove(struct platform_device *pdev)
#ifndef _PHY_GENERIC_H_
#define _PHY_GENERIC_H_
+#include <linux/usb/usb_phy_gen_xceiv.h>
+
struct usb_phy_gen_xceiv {
struct usb_phy phy;
struct device *dev;
void usb_gen_phy_shutdown(struct usb_phy *phy);
int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_gen_xceiv *nop,
- enum usb_phy_type type, u32 clk_rate, bool needs_vcc);
+ struct usb_phy_gen_xceiv_platform_data *pdata);
#endif
mxs_phy->clk = clk;
- platform_set_drvdata(pdev, &mxs_phy->phy);
+ platform_set_drvdata(pdev, mxs_phy);
ret = usb_add_phy_dev(&mxs_phy->phy);
if (ret)
clk_prepare_enable(priv->clk);
/* Set USB channels in the USBHS UGCTRL2 register */
- val = ioread32(priv->base);
+ val = ioread32(priv->base + USBHS_UGCTRL2_REG);
val &= ~(USBHS_UGCTRL2_USB0_HS | USBHS_UGCTRL2_USB2_SS);
val |= priv->ugctrl2;
- iowrite32(val, priv->base);
+ iowrite32(val, priv->base + USBHS_UGCTRL2_REG);
}
/* Shutdown USB channels */
termios->c_cflag |= CRTSCTS;
}
+ /*
+ * All FTDI UART chips are limited to CS7/8. We won't pretend to
+ * support CS5/6 and revert the CSIZE setting instead.
+ */
+ if ((C_CSIZE(tty) != CS8) && (C_CSIZE(tty) != CS7)) {
+ dev_warn(ddev, "requested CSIZE setting not supported\n");
+
+ termios->c_cflag &= ~CSIZE;
+ if (old_termios)
+ termios->c_cflag |= old_termios->c_cflag & CSIZE;
+ else
+ termios->c_cflag |= CS8;
+ }
+
cflag = termios->c_cflag;
if (!old_termios)
} else {
urb_value |= FTDI_SIO_SET_DATA_PARITY_NONE;
}
- if (cflag & CSIZE) {
- switch (cflag & CSIZE) {
- case CS7:
- urb_value |= 7;
- dev_dbg(ddev, "Setting CS7\n");
- break;
- case CS8:
- urb_value |= 8;
- dev_dbg(ddev, "Setting CS8\n");
- break;
- default:
- dev_err(ddev, "CSIZE was set but not CS7-CS8\n");
- }
+ switch (cflag & CSIZE) {
+ case CS7:
+ urb_value |= 7;
+ dev_dbg(ddev, "Setting CS7\n");
+ break;
+ default:
+ case CS8:
+ urb_value |= 8;
+ dev_dbg(ddev, "Setting CS8\n");
+ break;
}
/* This is needed by the break command since it uses the same command
clear_bit_unlock(USB_SERIAL_WRITE_BUSY, &port->flags);
return result;
}
- /*
- * Try sending off another urb, unless called from completion handler
- * (in which case there will be no free urb or no data).
- */
- if (mem_flags != GFP_ATOMIC)
- goto retry;
- clear_bit_unlock(USB_SERIAL_WRITE_BUSY, &port->flags);
-
- return 0;
+ goto retry; /* try sending off another urb */
}
EXPORT_SYMBOL_GPL(usb_serial_generic_write_start);
return 0;
count = kfifo_in_locked(&port->write_fifo, buf, count, &port->lock);
- result = usb_serial_generic_write_start(port, GFP_KERNEL);
+ result = usb_serial_generic_write_start(port, GFP_ATOMIC);
if (result)
return result;
iflag = tty->termios.c_iflag;
/* Change the number of bits */
- if (cflag & CSIZE) {
- switch (cflag & CSIZE) {
- case CS5:
- lData = LCR_BITS_5;
- break;
+ switch (cflag & CSIZE) {
+ case CS5:
+ lData = LCR_BITS_5;
+ break;
- case CS6:
- lData = LCR_BITS_6;
- break;
+ case CS6:
+ lData = LCR_BITS_6;
+ break;
- case CS7:
- lData = LCR_BITS_7;
- break;
- default:
- case CS8:
- lData = LCR_BITS_8;
- break;
- }
+ case CS7:
+ lData = LCR_BITS_7;
+ break;
+
+ default:
+ case CS8:
+ lData = LCR_BITS_8;
+ break;
}
+
/* Change the Parity bit */
if (cflag & PARENB) {
if (cflag & PARODD) {
#define HUAWEI_PRODUCT_K4505 0x1464
#define HUAWEI_PRODUCT_K3765 0x1465
#define HUAWEI_PRODUCT_K4605 0x14C6
+#define HUAWEI_PRODUCT_E173S6 0x1C07
#define QUANTA_VENDOR_ID 0x0408
#define QUANTA_PRODUCT_Q101 0xEA02
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1c23, USB_CLASS_COMM, 0x02, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E173, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t) &net_intf1_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E173S6, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t) &net_intf1_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E1750, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t) &net_intf2_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1441, USB_CLASS_COMM, 0x02, 0xff) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7A) },
0, 0, buf, 7, 100);
dev_dbg(&port->dev, "0xa1:0x21:0:0 %d - %7ph\n", i, buf);
- if (C_CSIZE(tty)) {
- switch (C_CSIZE(tty)) {
- case CS5:
- buf[6] = 5;
- break;
- case CS6:
- buf[6] = 6;
- break;
- case CS7:
- buf[6] = 7;
- break;
- default:
- case CS8:
- buf[6] = 8;
- }
- dev_dbg(&port->dev, "data bits = %d\n", buf[6]);
+ switch (C_CSIZE(tty)) {
+ case CS5:
+ buf[6] = 5;
+ break;
+ case CS6:
+ buf[6] = 6;
+ break;
+ case CS7:
+ buf[6] = 7;
+ break;
+ default:
+ case CS8:
+ buf[6] = 8;
}
+ dev_dbg(&port->dev, "data bits = %d\n", buf[6]);
/* For reference buf[0]:buf[3] baud rate value */
pl2303_encode_baudrate(tty, port, &buf[0]);
}
/* Set Data Length : 00:5bit, 01:6bit, 10:7bit, 11:8bit */
- if (cflag & CSIZE) {
- switch (cflag & CSIZE) {
- case CS5:
- buf[1] |= SET_UART_FORMAT_SIZE_5;
- break;
- case CS6:
- buf[1] |= SET_UART_FORMAT_SIZE_6;
- break;
- case CS7:
- buf[1] |= SET_UART_FORMAT_SIZE_7;
- break;
- default:
- case CS8:
- buf[1] |= SET_UART_FORMAT_SIZE_8;
- break;
- }
+ switch (cflag & CSIZE) {
+ case CS5:
+ buf[1] |= SET_UART_FORMAT_SIZE_5;
+ break;
+ case CS6:
+ buf[1] |= SET_UART_FORMAT_SIZE_6;
+ break;
+ case CS7:
+ buf[1] |= SET_UART_FORMAT_SIZE_7;
+ break;
+ default:
+ case CS8:
+ buf[1] |= SET_UART_FORMAT_SIZE_8;
+ break;
}
/* Set Stop bit2 : 0:1bit 1:2bit */
static void wusb_dev_free(struct wusb_dev *wusb_dev)
{
- if (wusb_dev) {
- kfree(wusb_dev->set_gtk_req);
- usb_free_urb(wusb_dev->set_gtk_urb);
- kfree(wusb_dev);
- }
+ kfree(wusb_dev);
}
static struct wusb_dev *wusb_dev_alloc(struct wusbhc *wusbhc)
{
struct wusb_dev *wusb_dev;
- struct urb *urb;
- struct usb_ctrlrequest *req;
wusb_dev = kzalloc(sizeof(*wusb_dev), GFP_KERNEL);
if (wusb_dev == NULL)
INIT_WORK(&wusb_dev->devconnect_acked_work, wusbhc_devconnect_acked_work);
- urb = usb_alloc_urb(0, GFP_KERNEL);
- if (urb == NULL)
- goto err;
- wusb_dev->set_gtk_urb = urb;
-
- req = kmalloc(sizeof(*req), GFP_KERNEL);
- if (req == NULL)
- goto err;
- wusb_dev->set_gtk_req = req;
-
- req->bRequestType = USB_DIR_OUT | USB_TYPE_STANDARD | USB_RECIP_DEVICE;
- req->bRequest = USB_REQ_SET_DESCRIPTOR;
- req->wValue = cpu_to_le16(USB_DT_KEY << 8 | wusbhc->gtk_index);
- req->wIndex = 0;
- req->wLength = cpu_to_le16(wusbhc->gtk.descr.bLength);
-
return wusb_dev;
err:
wusb_dev_free(wusb_dev);
/*
* Refresh the list of keep alives to emit in the MMC
*
- * Some devices don't respond to keep alives unless they've been
- * authenticated, so skip unauthenticated devices.
- *
* We only publish the first four devices that have a coming timeout
* condition. Then when we are done processing those, we go for the
* next ones. We ignore the ones that have timed out already (they'll
if (wusb_dev == NULL)
continue;
- if (wusb_dev->usb_dev == NULL || !wusb_dev->usb_dev->authenticated)
+ if (wusb_dev->usb_dev == NULL)
continue;
if (time_after(jiffies, wusb_dev->entry_ts + tt)) {
*
* @wusbhc shall be referenced and unlocked
*/
-static void wusbhc_handle_dn_alive(struct wusbhc *wusbhc, struct wusb_dev *wusb_dev)
+static void wusbhc_handle_dn_alive(struct wusbhc *wusbhc, u8 srcaddr)
{
+ struct wusb_dev *wusb_dev;
+
mutex_lock(&wusbhc->mutex);
- wusb_dev->entry_ts = jiffies;
- __wusbhc_keep_alive(wusbhc);
+ wusb_dev = wusbhc_find_dev_by_addr(wusbhc, srcaddr);
+ if (wusb_dev == NULL) {
+ dev_dbg(wusbhc->dev, "ignoring DN_Alive from unconnected device %02x\n",
+ srcaddr);
+ } else {
+ wusb_dev->entry_ts = jiffies;
+ __wusbhc_keep_alive(wusbhc);
+ }
mutex_unlock(&wusbhc->mutex);
}
*
* @wusbhc shall be referenced and unlocked
*/
-static void wusbhc_handle_dn_disconnect(struct wusbhc *wusbhc, struct wusb_dev *wusb_dev)
+static void wusbhc_handle_dn_disconnect(struct wusbhc *wusbhc, u8 srcaddr)
{
struct device *dev = wusbhc->dev;
-
- dev_info(dev, "DN DISCONNECT: device 0x%02x going down\n", wusb_dev->addr);
+ struct wusb_dev *wusb_dev;
mutex_lock(&wusbhc->mutex);
- __wusbhc_dev_disconnect(wusbhc, wusb_port_by_idx(wusbhc, wusb_dev->port_idx));
+ wusb_dev = wusbhc_find_dev_by_addr(wusbhc, srcaddr);
+ if (wusb_dev == NULL) {
+ dev_dbg(dev, "ignoring DN DISCONNECT from unconnected device %02x\n",
+ srcaddr);
+ } else {
+ dev_info(dev, "DN DISCONNECT: device 0x%02x going down\n",
+ wusb_dev->addr);
+ __wusbhc_dev_disconnect(wusbhc, wusb_port_by_idx(wusbhc,
+ wusb_dev->port_idx));
+ }
mutex_unlock(&wusbhc->mutex);
}
struct wusb_dn_hdr *dn_hdr, size_t size)
{
struct device *dev = wusbhc->dev;
- struct wusb_dev *wusb_dev;
if (size < sizeof(struct wusb_dn_hdr)) {
dev_err(dev, "DN data shorter than DN header (%d < %d)\n",
(int)size, (int)sizeof(struct wusb_dn_hdr));
return;
}
-
- wusb_dev = wusbhc_find_dev_by_addr(wusbhc, srcaddr);
- if (wusb_dev == NULL && dn_hdr->bType != WUSB_DN_CONNECT) {
- dev_dbg(dev, "ignoring DN %d from unconnected device %02x\n",
- dn_hdr->bType, srcaddr);
- return;
- }
-
switch (dn_hdr->bType) {
case WUSB_DN_CONNECT:
wusbhc_handle_dn_connect(wusbhc, dn_hdr, size);
break;
case WUSB_DN_ALIVE:
- wusbhc_handle_dn_alive(wusbhc, wusb_dev);
+ wusbhc_handle_dn_alive(wusbhc, srcaddr);
break;
case WUSB_DN_DISCONNECT:
- wusbhc_handle_dn_disconnect(wusbhc, wusb_dev);
+ wusbhc_handle_dn_disconnect(wusbhc, srcaddr);
break;
case WUSB_DN_MASAVAILCHANGED:
case WUSB_DN_RWAKE:
#include <linux/export.h>
#include "wusbhc.h"
-static void wusbhc_set_gtk_callback(struct urb *urb);
-static void wusbhc_gtk_rekey_done_work(struct work_struct *work);
+static void wusbhc_gtk_rekey_work(struct work_struct *work);
int wusbhc_sec_create(struct wusbhc *wusbhc)
{
wusbhc->gtk.descr.bLength = sizeof(wusbhc->gtk.descr) + sizeof(wusbhc->gtk.data);
wusbhc->gtk.descr.bDescriptorType = USB_DT_KEY;
wusbhc->gtk.descr.bReserved = 0;
+ wusbhc->gtk_index = 0;
- wusbhc->gtk_index = wusb_key_index(0, WUSB_KEY_INDEX_TYPE_GTK,
- WUSB_KEY_INDEX_ORIGINATOR_HOST);
-
- INIT_WORK(&wusbhc->gtk_rekey_done_work, wusbhc_gtk_rekey_done_work);
+ INIT_WORK(&wusbhc->gtk_rekey_work, wusbhc_gtk_rekey_work);
return 0;
}
wusbhc_generate_gtk(wusbhc);
result = wusbhc->set_gtk(wusbhc, wusbhc->gtk_tkid,
- &wusbhc->gtk.descr.bKeyData, key_size);
+ &wusbhc->gtk.descr.bKeyData, key_size);
if (result < 0)
dev_err(wusbhc->dev, "cannot set GTK for the host: %d\n",
result);
*/
void wusbhc_sec_stop(struct wusbhc *wusbhc)
{
- cancel_work_sync(&wusbhc->gtk_rekey_done_work);
+ cancel_work_sync(&wusbhc->gtk_rekey_work);
}
static int wusb_dev_set_gtk(struct wusbhc *wusbhc, struct wusb_dev *wusb_dev)
{
struct usb_device *usb_dev = wusb_dev->usb_dev;
+ u8 key_index = wusb_key_index(wusbhc->gtk_index,
+ WUSB_KEY_INDEX_TYPE_GTK, WUSB_KEY_INDEX_ORIGINATOR_HOST);
return usb_control_msg(
usb_dev, usb_sndctrlpipe(usb_dev, 0),
USB_REQ_SET_DESCRIPTOR,
USB_DIR_OUT | USB_TYPE_STANDARD | USB_RECIP_DEVICE,
- USB_DT_KEY << 8 | wusbhc->gtk_index, 0,
+ USB_DT_KEY << 8 | key_index, 0,
&wusbhc->gtk.descr, wusbhc->gtk.descr.bLength,
1000);
}
* Once all connected and authenticated devices have received the new
* GTK, switch the host to using it.
*/
-static void wusbhc_gtk_rekey_done_work(struct work_struct *work)
+static void wusbhc_gtk_rekey_work(struct work_struct *work)
{
- struct wusbhc *wusbhc = container_of(work, struct wusbhc, gtk_rekey_done_work);
+ struct wusbhc *wusbhc = container_of(work,
+ struct wusbhc, gtk_rekey_work);
size_t key_size = sizeof(wusbhc->gtk.data);
+ int port_idx;
+ struct wusb_dev *wusb_dev, *wusb_dev_next;
+ LIST_HEAD(rekey_list);
mutex_lock(&wusbhc->mutex);
+ /* generate the new key */
+ wusbhc_generate_gtk(wusbhc);
+ /* roll the gtk index. */
+ wusbhc->gtk_index = (wusbhc->gtk_index + 1) % (WUSB_KEY_INDEX_MAX + 1);
+ /*
+ * Save all connected devices on a list while holding wusbhc->mutex and
+ * take a reference to each one. Then submit the set key request to
+ * them after releasing the lock in order to avoid a deadlock.
+ */
+ for (port_idx = 0; port_idx < wusbhc->ports_max; port_idx++) {
+ wusb_dev = wusbhc->port[port_idx].wusb_dev;
+ if (!wusb_dev || !wusb_dev->usb_dev
+ || !wusb_dev->usb_dev->authenticated)
+ continue;
- if (--wusbhc->pending_set_gtks == 0)
- wusbhc->set_gtk(wusbhc, wusbhc->gtk_tkid, &wusbhc->gtk.descr.bKeyData, key_size);
-
+ wusb_dev_get(wusb_dev);
+ list_add_tail(&wusb_dev->rekey_node, &rekey_list);
+ }
mutex_unlock(&wusbhc->mutex);
-}
-static void wusbhc_set_gtk_callback(struct urb *urb)
-{
- struct wusbhc *wusbhc = urb->context;
+ /* Submit the rekey requests without holding wusbhc->mutex. */
+ list_for_each_entry_safe(wusb_dev, wusb_dev_next, &rekey_list,
+ rekey_node) {
+ list_del_init(&wusb_dev->rekey_node);
+ dev_dbg(&wusb_dev->usb_dev->dev, "%s: rekey device at port %d\n",
+ __func__, wusb_dev->port_idx);
+
+ if (wusb_dev_set_gtk(wusbhc, wusb_dev) < 0) {
+ dev_err(&wusb_dev->usb_dev->dev, "%s: rekey device at port %d failed\n",
+ __func__, wusb_dev->port_idx);
+ }
+ wusb_dev_put(wusb_dev);
+ }
- queue_work(wusbd, &wusbhc->gtk_rekey_done_work);
+ /* Switch the host controller to use the new GTK. */
+ mutex_lock(&wusbhc->mutex);
+ wusbhc->set_gtk(wusbhc, wusbhc->gtk_tkid,
+ &wusbhc->gtk.descr.bKeyData, key_size);
+ mutex_unlock(&wusbhc->mutex);
}
/**
*/
void wusbhc_gtk_rekey(struct wusbhc *wusbhc)
{
- static const size_t key_size = sizeof(wusbhc->gtk.data);
- int p;
-
- wusbhc_generate_gtk(wusbhc);
-
- for (p = 0; p < wusbhc->ports_max; p++) {
- struct wusb_dev *wusb_dev;
-
- wusb_dev = wusbhc->port[p].wusb_dev;
- if (!wusb_dev || !wusb_dev->usb_dev || !wusb_dev->usb_dev->authenticated)
- continue;
-
- usb_fill_control_urb(wusb_dev->set_gtk_urb, wusb_dev->usb_dev,
- usb_sndctrlpipe(wusb_dev->usb_dev, 0),
- (void *)wusb_dev->set_gtk_req,
- &wusbhc->gtk.descr, wusbhc->gtk.descr.bLength,
- wusbhc_set_gtk_callback, wusbhc);
- if (usb_submit_urb(wusb_dev->set_gtk_urb, GFP_KERNEL) == 0)
- wusbhc->pending_set_gtks++;
- }
- if (wusbhc->pending_set_gtks == 0)
- wusbhc->set_gtk(wusbhc, wusbhc->gtk_tkid, &wusbhc->gtk.descr.bKeyData, key_size);
+ /*
+ * We need to submit a URB to the downstream WUSB devices in order to
+ * change the group key. This can't be done while holding the
+ * wusbhc->mutex since that is also taken in the urb_enqueue routine
+ * and will cause a deadlock. Instead, queue a work item to do
+ * it when the lock is not held
+ */
+ queue_work(wusbd, &wusbhc->gtk_rekey_work);
}
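
The rekey work above is an instance of a collect-under-lock, act-outside-lock pattern; stripped to its skeleton (all names invented):

	LIST_HEAD(todo);

	mutex_lock(&hc->mutex);
	list_for_each_entry(dev, &hc->devs, node) {
		dev_get(dev);			/* hold a reference */
		list_add_tail(&dev->todo_node, &todo);
	}
	mutex_unlock(&hc->mutex);

	list_for_each_entry_safe(dev, tmp, &todo, todo_node) {
		list_del_init(&dev->todo_node);
		submit_request(dev);		/* may take hc->mutex itself */
		dev_put(dev);
	}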
struct kref refcnt;
struct wusbhc *wusbhc;
struct list_head cack_node; /* Connect-Ack list */
+ struct list_head rekey_node; /* GTK rekey list */
u8 port_idx;
u8 addr;
u8 beacon_type:4;
struct usb_wireless_cap_descriptor *wusb_cap_descr;
struct uwb_mas_bm availability;
struct work_struct devconnect_acked_work;
- struct urb *set_gtk_urb;
- struct usb_ctrlrequest *set_gtk_req;
struct usb_device *usb_dev;
};
} __attribute__((packed)) gtk;
u8 gtk_index;
u32 gtk_tkid;
- struct work_struct gtk_rekey_done_work;
- int pending_set_gtks;
+ struct work_struct gtk_rekey_work;
struct usb_encryption_descriptor *ccm1_etd;
};
/* terminator */
}
};
+MODULE_DEVICE_TABLE(platform, atmel_lcdfb_devtypes);
static struct atmel_lcdfb_config *
atmel_lcdfb_get_config(struct platform_device *pdev)
return -EINVAL;
}
case KYRO_IOCTL_UVSTRIDE:
- if (copy_to_user(argp, &deviceInfo.ulOverlayUVStride, sizeof(unsigned long)))
+ if (copy_to_user(argp, &deviceInfo.ulOverlayUVStride, sizeof(deviceInfo.ulOverlayUVStride)))
return -EFAULT;
break;
case KYRO_IOCTL_STRIDE:
- if (copy_to_user(argp, &deviceInfo.ulOverlayStride, sizeof(unsigned long)))
+ if (copy_to_user(argp, &deviceInfo.ulOverlayStride, sizeof(deviceInfo.ulOverlayStride)))
return -EFAULT;
break;
case KYRO_IOCTL_OVERLAY_OFFSET:
- if (copy_to_user(argp, &deviceInfo.ulOverlayOffset, sizeof(unsigned long)))
+ if (copy_to_user(argp, &deviceInfo.ulOverlayOffset, sizeof(deviceInfo.ulOverlayOffset)))
return -EFAULT;
break;
}
#define AVIVO_DC_LUTB_WHITE_OFFSET_GREEN 0x6cd4
#define AVIVO_DC_LUTB_WHITE_OFFSET_RED 0x6cd8
+#define FB_RIGHT_POS(p, bpp) (fb_be_math(p) ? 0 : (32 - (bpp)))
+
+static inline u32 offb_cmap_byteswap(struct fb_info *info, u32 value)
+{
+ u32 bpp = info->var.bits_per_pixel;
+
+ return cpu_to_be32(value) >> FB_RIGHT_POS(info, bpp);
+}
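
/*
 * Worked example: on little-endian with bits_per_pixel = 16,
 * FB_RIGHT_POS gives 32 - 16 = 16 and cpu_to_be32(0x0000abcd) is
 * 0xcdab0000, so the helper returns 0x0000cdab - the palette entry
 * with its two bytes swapped for the big-endian aperture. On
 * big-endian (fb_be_math() true) both the swap and the shift are
 * no-ops.
 */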
+
/*
* Set a single color register. The values supplied are already
* rounded down to the hardware's capabilities (according to the
mask <<= info->var.transp.offset;
value |= mask;
}
- pal[regno] = value;
+ pal[regno] = offb_cmap_byteswap(info, value);
return 0;
}
static void __iomem *offb_map_reg(struct device_node *np, int index,
unsigned long offset, unsigned long size)
{
- const u32 *addrp;
+ const __be32 *addrp;
u64 asize, taddr;
unsigned int flags;
}
of_node_put(pciparent);
} else if (dp && of_device_is_compatible(dp, "qemu,std-vga")) {
- const u32 io_of_addr[3] = { 0x01000000, 0x0, 0x0 };
+#ifdef __BIG_ENDIAN
+ const __be32 io_of_addr[3] = { 0x01000000, 0x0, 0x0 };
+#else
+ const __be32 io_of_addr[3] = { 0x00000001, 0x0, 0x0 };
+#endif
u64 io_addr = of_translate_address(dp, io_of_addr);
if (io_addr != OF_BAD_ADDR) {
par->cmap_adr = ioremap(io_addr + 0x3c8, 2);
unsigned int flags, rsize, addr_prop = 0;
unsigned long max_size = 0;
u64 rstart, address = OF_BAD_ADDR;
- const u32 *pp, *addrp, *up;
+ const __be32 *pp, *addrp, *up;
u64 asize;
int foreign_endian = 0;
if (pp == NULL)
pp = of_get_property(dp, "depth", &len);
if (pp && len == sizeof(u32))
- depth = *pp;
+ depth = be32_to_cpup(pp);
pp = of_get_property(dp, "linux,bootx-width", &len);
if (pp == NULL)
pp = of_get_property(dp, "width", &len);
if (pp && len == sizeof(u32))
- width = *pp;
+ width = be32_to_cpup(pp);
pp = of_get_property(dp, "linux,bootx-height", &len);
if (pp == NULL)
pp = of_get_property(dp, "height", &len);
if (pp && len == sizeof(u32))
- height = *pp;
+ height = be32_to_cpup(pp);
pp = of_get_property(dp, "linux,bootx-linebytes", &len);
if (pp == NULL)
pp = of_get_property(dp, "linebytes", &len);
if (pp && len == sizeof(u32) && (*pp != 0xffffffffu))
- pitch = *pp;
+ pitch = be32_to_cpup(pp);
else
pitch = width * ((depth + 7) / 8);
struct omap_dss_device *in = ddata->in;
int r;
+ mutex_lock(&ddata->mutex);
+
dev_dbg(&ddata->spi->dev, "%s\n", __func__);
in->ops.sdi->set_timings(in, &ddata->videomode);
if (omapdss_device_is_enabled(dssdev))
return 0;
- mutex_lock(&ddata->mutex);
r = acx565akm_panel_power_on(dssdev);
- mutex_unlock(&ddata->mutex);
-
if (r)
return r;
* Power management
*/
+#if defined(CONFIG_PM_SLEEP) || defined(CONFIG_PM_RUNTIME)
static int sh_mobile_meram_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
meram_write_reg(priv->base, common_regs[i], priv->regs[i]);
return 0;
}
+#endif /* CONFIG_PM_SLEEP || CONFIG_PM_RUNTIME */
static UNIVERSAL_DEV_PM_OPS(sh_mobile_meram_dev_pm_ops,
sh_mobile_meram_suspend,
+ sizeof(u32) * 16, GFP_KERNEL);
if (!fbi) {
dev_err(&pdev->dev, "Failed to initialize framebuffer device\n");
- ret = -ENOMEM;
- goto failed;
+ return -ENOMEM;
}
strcpy(fbi->fb.fix.id, "VT8500 LCD");
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (res == NULL) {
dev_err(&pdev->dev, "no I/O memory resource defined\n");
- ret = -ENODEV;
- goto failed_fbi;
+ return -ENODEV;
}
res = request_mem_region(res->start, resource_size(res), "vt8500lcd");
if (res == NULL) {
dev_err(&pdev->dev, "failed to request I/O memory\n");
- ret = -EBUSY;
- goto failed_fbi;
+ return -EBUSY;
}
fbi->regbase = ioremap(res->start, resource_size(res));
}
disp_timing = of_get_display_timings(pdev->dev.of_node);
- if (!disp_timing)
- return -EINVAL;
+ if (!disp_timing) {
+ ret = -EINVAL;
+ goto failed_free_io;
+ }
ret = of_get_fb_videomode(pdev->dev.of_node, &of_mode,
OF_USE_NATIVE_MODE);
if (ret)
- return ret;
+ goto failed_free_io;
ret = of_property_read_u32(pdev->dev.of_node, "bits-per-pixel", &bpp);
if (ret)
- return ret;
+ goto failed_free_io;
/* try allocating the framebuffer */
fb_mem_len = of_mode.xres * of_mode.yres * 2 * (bpp / 8);
GFP_KERNEL);
if (!fb_mem_virt) {
pr_err("%s: Failed to allocate framebuffer\n", __func__);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto failed_free_io;
}
fbi->fb.fix.smem_start = fb_mem_phys;
iounmap(fbi->regbase);
failed_free_res:
release_mem_region(res->start, resource_size(res));
-failed_fbi:
- kfree(fbi);
-failed:
return ret;
}
#include <linux/watchdog.h>
#include <linux/platform_device.h>
#include <linux/of_address.h>
-#include <linux/miscdevice.h>
#define PM_RSTC 0x1c
#define PM_WDOG 0x24
#include <linux/platform_device.h>
#include <linux/module.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/timer.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/watchdog.h>
-#include <linux/miscdevice.h>
#include <linux/seq_file.h>
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
-#include <linux/miscdevice.h>
#include <linux/uaccess.h>
#include <linux/watchdog.h>
#include <linux/platform_device.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/bitops.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/platform_device.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/watchdog.h>
-#include <linux/miscdevice.h>
#include <linux/moduleparam.h>
#include <linux/platform_device.h>
#if defined CONFIG_PNP
/* now that the user has specified an IO port and we haven't detected
* any devices, disable pnp support */
+ if (isapnp)
+ pnp_unregister_driver(&scl200wdt_pnp_driver);
isapnp = 0;
- pnp_unregister_driver(&scl200wdt_pnp_driver);
#endif
if (!request_region(io, io_len, SC1200_MODULE_NAME)) {
#include <linux/init.h>
#include <linux/types.h>
#include <linux/spinlock.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/pm_runtime.h>
#include <linux/fs.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/timer.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/notifier.h>
#include <linux/reboot.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/platform_device.h>
#include <linux/stmp3xxx_rtc_wdt.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/moduleparam.h>
-#include <linux/miscdevice.h>
#include <linux/err.h>
#include <linux/uaccess.h>
#include <linux/watchdog.h>
ret = m2p_add_override(mfn, pages[i], kmap_ops ?
&kmap_ops[i] : NULL);
if (ret)
- return ret;
+ goto out;
}
+ out:
if (lazy)
arch_leave_lazy_mmu_mode();
ret = m2p_remove_override(pages[i], kmap_ops ?
&kmap_ops[i] : NULL);
if (ret)
- return ret;
+ goto out;
}
+ out:
if (lazy)
arch_leave_lazy_mmu_mode();
sg_dma_len(sgl) = 0;
return 0;
}
+ xen_dma_map_page(hwdev, pfn_to_page(map >> PAGE_SHIFT),
+ map & ~PAGE_MASK,
+ sg->length,
+ dir,
+ attrs);
sg->dma_address = xen_phys_to_bus(map);
} else {
/* we are not interested in the dma_addr returned by
if (nr_pages > AIO_RING_PAGES) {
ctx->ring_pages = kcalloc(nr_pages, sizeof(struct page *),
GFP_KERNEL);
- if (!ctx->ring_pages)
+ if (!ctx->ring_pages) {
+ put_aio_ring_file(ctx);
return -ENOMEM;
+ }
}
ctx->mmap_size = nr_pages * PAGE_SIZE;
aio_nr + nr_events < aio_nr) {
spin_unlock(&aio_nr_lock);
err = -EAGAIN;
- goto err;
+ goto err_ctx;
}
aio_nr += ctx->max_reqs;
spin_unlock(&aio_nr_lock);
err_cleanup:
aio_nr_sub(ctx->max_reqs);
+err_ctx:
+ aio_free_ring(ctx);
err:
free_percpu(ctx->cpu);
free_percpu(ctx->reqs.pcpu_count);
static int btrfsic_read_block(struct btrfsic_state *state,
struct btrfsic_block_data_ctx *block_ctx);
static void btrfsic_dump_database(struct btrfsic_state *state);
-static void btrfsic_complete_bio_end_io(struct bio *bio, int err);
static int btrfsic_test_for_metadata(struct btrfsic_state *state,
char **datav, unsigned int num_pages);
static void btrfsic_process_written_block(struct btrfsic_dev_state *dev_state,
for (i = 0; i < num_pages;) {
struct bio *bio;
unsigned int j;
- DECLARE_COMPLETION_ONSTACK(complete);
bio = btrfs_io_bio_alloc(GFP_NOFS, num_pages - i);
if (!bio) {
}
bio->bi_bdev = block_ctx->dev->bdev;
bio->bi_sector = dev_bytenr >> 9;
- bio->bi_end_io = btrfsic_complete_bio_end_io;
- bio->bi_private = &complete;
for (j = i; j < num_pages; j++) {
ret = bio_add_page(bio, block_ctx->pagev[j],
"btrfsic: error, failed to add a single page!\n");
return -1;
}
- submit_bio(READ, bio);
-
- /* this will also unplug the queue */
- wait_for_completion(&complete);
-
- if (!test_bit(BIO_UPTODATE, &bio->bi_flags)) {
+ if (submit_bio_wait(READ, bio)) {
printk(KERN_INFO
"btrfsic: read error at logical %llu dev %s!\n",
block_ctx->start, block_ctx->dev->name);
return block_ctx->len;
}
-static void btrfsic_complete_bio_end_io(struct bio *bio, int err)
-{
- complete((struct completion *)bio->bi_private);
-}
-
static void btrfsic_dump_database(struct btrfsic_state *state)
{
struct list_head *elem_all;
return submit_bh(rw, bh);
}
-void btrfsic_submit_bio(int rw, struct bio *bio)
+static void __btrfsic_submit_bio(int rw, struct bio *bio)
{
struct btrfsic_dev_state *dev_state;
- if (!btrfsic_is_initialized) {
- submit_bio(rw, bio);
+ if (!btrfsic_is_initialized)
return;
- }
mutex_lock(&btrfsic_mutex);
/* since btrfsic_submit_bio() is also called before
}
leave:
mutex_unlock(&btrfsic_mutex);
+}
+void btrfsic_submit_bio(int rw, struct bio *bio)
+{
+ __btrfsic_submit_bio(rw, bio);
submit_bio(rw, bio);
}
+int btrfsic_submit_bio_wait(int rw, struct bio *bio)
+{
+ __btrfsic_submit_bio(rw, bio);
+ return submit_bio_wait(rw, bio);
+}
+
int btrfsic_mount(struct btrfs_root *root,
struct btrfs_fs_devices *fs_devices,
int including_extent_data, u32 print_mask)
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
int btrfsic_submit_bh(int rw, struct buffer_head *bh);
void btrfsic_submit_bio(int rw, struct bio *bio);
+int btrfsic_submit_bio_wait(int rw, struct bio *bio);
#else
#define btrfsic_submit_bh submit_bh
#define btrfsic_submit_bio submit_bio
+#define btrfsic_submit_bio_wait submit_bio_wait
#endif
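
Every open-coded bio wait converted in this series has the same before/after shape; a minimal sketch, with the handler and variable names made up for illustration:

	/* before: wait for the bio by hand */
	static void my_end_io(struct bio *bio, int err)
	{
		complete(bio->bi_private);
	}

		DECLARE_COMPLETION_ONSTACK(done);
		bio->bi_end_io = my_end_io;
		bio->bi_private = &done;
		submit_bio(READ, bio);
		wait_for_completion(&done);
		err = test_bit(BIO_UPTODATE, &bio->bi_flags) ? 0 : -EIO;

	/* after: the block layer supplies the completion and returns
	 * 0 on success or a negative error such as -EIO */
		err = submit_bio_wait(READ, bio);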
int btrfsic_mount(struct btrfs_root *root,
if (!path)
return -ENOMEM;
- if (metadata) {
- key.objectid = bytenr;
- key.type = BTRFS_METADATA_ITEM_KEY;
- key.offset = offset;
- } else {
- key.objectid = bytenr;
- key.type = BTRFS_EXTENT_ITEM_KEY;
- key.offset = offset;
- }
-
if (!trans) {
path->skip_locking = 1;
path->search_commit_root = 1;
}
+
+search_again:
+ key.objectid = bytenr;
+ key.offset = offset;
+ if (metadata)
+ key.type = BTRFS_METADATA_ITEM_KEY;
+ else
+ key.type = BTRFS_EXTENT_ITEM_KEY;
+
again:
ret = btrfs_search_slot(trans, root->fs_info->extent_root,
&key, path, 0, 0);
goto out_free;
if (ret > 0 && metadata && key.type == BTRFS_METADATA_ITEM_KEY) {
- metadata = 0;
if (path->slots[0]) {
path->slots[0]--;
btrfs_item_key_to_cpu(path->nodes[0], &key,
mutex_lock(&head->mutex);
mutex_unlock(&head->mutex);
btrfs_put_delayed_ref(&head->node);
- goto again;
+ goto search_again;
}
if (head->extent_op && head->extent_op->update_flags)
extent_flags |= head->extent_op->flags_to_set;
return err;
}
-static void repair_io_failure_callback(struct bio *bio, int err)
-{
- complete(bio->bi_private);
-}
-
/*
* this bypasses the standard btrfs submit functions deliberately, as
* the standard behavior is to write all copies in a raid setup. here we only
{
struct bio *bio;
struct btrfs_device *dev;
- DECLARE_COMPLETION_ONSTACK(compl);
u64 map_length = 0;
u64 sector;
struct btrfs_bio *bbio = NULL;
bio = btrfs_io_bio_alloc(GFP_NOFS, 1);
if (!bio)
return -EIO;
- bio->bi_private = &compl;
- bio->bi_end_io = repair_io_failure_callback;
bio->bi_size = 0;
map_length = length;
}
bio->bi_bdev = dev->bdev;
bio_add_page(bio, page, length, start - page_offset(page));
- btrfsic_submit_bio(WRITE_SYNC, bio);
- wait_for_completion(&compl);
- if (!test_bit(BIO_UPTODATE, &bio->bi_flags)) {
+ if (btrfsic_submit_bio_wait(WRITE_SYNC, bio)) {
/* try to remap that extent elsewhere? */
bio_put(bio);
btrfs_dev_stat_inc_and_print(dev, BTRFS_DEV_STAT_WRITE_ERRS);
err = mutex_lock_killable_nested(&dir->i_mutex, I_MUTEX_PARENT);
if (err == -EINTR)
- goto out;
+ goto out_drop_write;
dentry = lookup_one_len(vol_args->name, parent, namelen);
if (IS_ERR(dentry)) {
err = PTR_ERR(dentry);
dput(dentry);
out_unlock_dir:
mutex_unlock(&dir->i_mutex);
+out_drop_write:
mnt_drop_write_file(file);
out:
kfree(vol_args);
root_objectid == BTRFS_CHUNK_TREE_OBJECTID ||
root_objectid == BTRFS_DEV_TREE_OBJECTID ||
root_objectid == BTRFS_TREE_LOG_OBJECTID ||
- root_objectid == BTRFS_CSUM_TREE_OBJECTID)
+ root_objectid == BTRFS_CSUM_TREE_OBJECTID ||
+ root_objectid == BTRFS_UUID_TREE_OBJECTID ||
+ root_objectid == BTRFS_QUOTA_TREE_OBJECTID)
return 1;
return 0;
}
}
/*
- * helper to update/delete the 'address of tree root -> reloc tree'
+ * helper to delete the 'address of tree root -> reloc tree'
* mapping
*/
-static int __update_reloc_root(struct btrfs_root *root, int del)
+static void __del_reloc_root(struct btrfs_root *root)
{
struct rb_node *rb_node;
struct mapping_node *node = NULL;
spin_lock(&rc->reloc_root_tree.lock);
rb_node = tree_search(&rc->reloc_root_tree.rb_root,
- root->commit_root->start);
+ root->node->start);
if (rb_node) {
node = rb_entry(rb_node, struct mapping_node, rb_node);
rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
spin_unlock(&rc->reloc_root_tree.lock);
if (!node)
- return 0;
+ return;
BUG_ON((struct btrfs_root *)node->data != root);
- if (!del) {
- spin_lock(&rc->reloc_root_tree.lock);
- node->bytenr = root->node->start;
- rb_node = tree_insert(&rc->reloc_root_tree.rb_root,
- node->bytenr, &node->rb_node);
- spin_unlock(&rc->reloc_root_tree.lock);
- if (rb_node)
- backref_tree_panic(rb_node, -EEXIST, node->bytenr);
- } else {
- spin_lock(&root->fs_info->trans_lock);
- list_del_init(&root->root_list);
- spin_unlock(&root->fs_info->trans_lock);
- kfree(node);
+ spin_lock(&root->fs_info->trans_lock);
+ list_del_init(&root->root_list);
+ spin_unlock(&root->fs_info->trans_lock);
+ kfree(node);
+}
+
+/*
+ * helper to update the 'address of tree root -> reloc tree'
+ * mapping
+ */
+static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr)
+{
+ struct rb_node *rb_node;
+ struct mapping_node *node = NULL;
+ struct reloc_control *rc = root->fs_info->reloc_ctl;
+
+ spin_lock(&rc->reloc_root_tree.lock);
+ rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+ root->node->start);
+ if (rb_node) {
+ node = rb_entry(rb_node, struct mapping_node, rb_node);
+ rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
}
+ spin_unlock(&rc->reloc_root_tree.lock);
+
+ if (!node)
+ return 0;
+ BUG_ON((struct btrfs_root *)node->data != root);
+
+ spin_lock(&rc->reloc_root_tree.lock);
+ node->bytenr = new_bytenr;
+ rb_node = tree_insert(&rc->reloc_root_tree.rb_root,
+ node->bytenr, &node->rb_node);
+ spin_unlock(&rc->reloc_root_tree.lock);
+ if (rb_node)
+ backref_tree_panic(rb_node, -EEXIST, node->bytenr);
return 0;
}
{
struct btrfs_root *reloc_root;
struct btrfs_root_item *root_item;
- int del = 0;
int ret;
if (!root->reloc_root)
if (root->fs_info->reloc_ctl->merge_reloc_tree &&
btrfs_root_refs(root_item) == 0) {
root->reloc_root = NULL;
- del = 1;
+ __del_reloc_root(reloc_root);
}
- __update_reloc_root(reloc_root, del);
-
if (reloc_root->commit_root != reloc_root->node) {
btrfs_set_root_node(root_item, reloc_root->node);
free_extent_buffer(reloc_root->commit_root);
while (!list_empty(list)) {
reloc_root = list_entry(list->next, struct btrfs_root,
root_list);
- __update_reloc_root(reloc_root, 1);
+ __del_reloc_root(reloc_root);
free_extent_buffer(reloc_root->node);
free_extent_buffer(reloc_root->commit_root);
kfree(reloc_root);
ret = merge_reloc_root(rc, root);
if (ret) {
- __update_reloc_root(reloc_root, 1);
+ __del_reloc_root(reloc_root);
free_extent_buffer(reloc_root->node);
free_extent_buffer(reloc_root->commit_root);
kfree(reloc_root);
btrfs_std_error(root->fs_info, ret);
if (!list_empty(&reloc_roots))
free_reloc_roots(&reloc_roots);
+
+ /* new reloc root may be added */
+ mutex_lock(&root->fs_info->reloc_mutex);
+ list_splice_init(&rc->reloc_roots, &reloc_roots);
+ mutex_unlock(&root->fs_info->reloc_mutex);
+ if (!list_empty(&reloc_roots))
+ free_reloc_roots(&reloc_roots);
}
BUG_ON(!RB_EMPTY_ROOT(&rc->reloc_root_tree.rb_root));
BUG_ON(rc->stage == UPDATE_DATA_PTRS &&
root->root_key.objectid == BTRFS_DATA_RELOC_TREE_OBJECTID);
+ if (root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) {
+ if (buf == root->node)
+ __update_reloc_root(root, cow->start);
+ }
+
level = btrfs_header_level(buf);
if (btrfs_header_generation(buf) <=
btrfs_root_last_snapshot(&root->root_item))
int is_metadata, int have_csum,
const u8 *csum, u64 generation,
u16 csum_size);
-static void scrub_complete_bio_end_io(struct bio *bio, int err);
static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
struct scrub_block *sblock_good,
int force_write);
for (page_num = 0; page_num < sblock->page_count; page_num++) {
struct bio *bio;
struct scrub_page *page = sblock->pagev[page_num];
- DECLARE_COMPLETION_ONSTACK(complete);
if (page->dev->bdev == NULL) {
page->io_error = 1;
}
bio->bi_bdev = page->dev->bdev;
bio->bi_sector = page->physical >> 9;
- bio->bi_end_io = scrub_complete_bio_end_io;
- bio->bi_private = &complete;
bio_add_page(bio, page->page, PAGE_SIZE, 0);
- btrfsic_submit_bio(READ, bio);
-
- /* this will also unplug the queue */
- wait_for_completion(&complete);
-
- page->io_error = !test_bit(BIO_UPTODATE, &bio->bi_flags);
- if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
+ if (btrfsic_submit_bio_wait(READ, bio))
sblock->no_io_error_seen = 0;
+
bio_put(bio);
}
sblock->checksum_error = 1;
}
-static void scrub_complete_bio_end_io(struct bio *bio, int err)
-{
- complete((struct completion *)bio->bi_private);
-}
-
static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
struct scrub_block *sblock_good,
int force_write)
sblock_bad->checksum_error || page_bad->io_error) {
struct bio *bio;
int ret;
- DECLARE_COMPLETION_ONSTACK(complete);
if (!page_bad->dev->bdev) {
printk_ratelimited(KERN_WARNING
return -EIO;
bio->bi_bdev = page_bad->dev->bdev;
bio->bi_sector = page_bad->physical >> 9;
- bio->bi_end_io = scrub_complete_bio_end_io;
- bio->bi_private = &complete;
ret = bio_add_page(bio, page_good->page, PAGE_SIZE, 0);
if (PAGE_SIZE != ret) {
bio_put(bio);
return -EIO;
}
- btrfsic_submit_bio(WRITE, bio);
- /* this will also unplug the queue */
- wait_for_completion(&complete);
- if (!bio_flagged(bio, BIO_UPTODATE)) {
+ if (btrfsic_submit_bio_wait(WRITE, bio)) {
btrfs_dev_stat_inc_and_print(page_bad->dev,
BTRFS_DEV_STAT_WRITE_ERRS);
btrfs_dev_replace_stats_inc(
struct bio *bio;
struct btrfs_device *dev;
int ret;
- DECLARE_COMPLETION_ONSTACK(compl);
dev = sctx->wr_ctx.tgtdev;
if (!dev)
spin_unlock(&sctx->stat_lock);
return -ENOMEM;
}
- bio->bi_private = &compl;
- bio->bi_end_io = scrub_complete_bio_end_io;
bio->bi_size = 0;
bio->bi_sector = physical_for_dev_replace >> 9;
bio->bi_bdev = dev->bdev;
btrfs_dev_stat_inc_and_print(dev, BTRFS_DEV_STAT_WRITE_ERRS);
return -EIO;
}
- btrfsic_submit_bio(WRITE_SYNC, bio);
- wait_for_completion(&compl);
- if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
+ if (btrfsic_submit_bio_wait(WRITE_SYNC, bio))
goto leave_with_eio;
bio_put(bio);
}
if (!access_ok(VERIFY_READ, arg->clone_sources,
- sizeof(*arg->clone_sources *
- arg->clone_sources_count))) {
+ sizeof(*arg->clone_sources) *
+ arg->clone_sources_count)) {
ret = -EFAULT;
goto out;
}
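
The parenthesis placement is the whole bug here: the old code applied sizeof to a multiplication rather than multiplying by the element count, so the validated length was 8 bytes no matter how many clone sources userspace passed:

	sizeof(*arg->clone_sources * arg->clone_sources_count)
		/* sizeof(__u64 * __u64) == sizeof(__u64) == 8, always */
	sizeof(*arg->clone_sources) * arg->clone_sources_count
		/* 8 * clone_sources_count, the intended array length  */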
} else {
printk(KERN_INFO "btrfs: setting nodatacow\n");
}
- info->compress_type = BTRFS_COMPRESS_NONE;
btrfs_clear_opt(info->mount_opt, COMPRESS);
btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
btrfs_set_opt(info->mount_opt, NODATACOW);
btrfs_set_fs_incompat(info, COMPRESS_LZO);
} else if (strncmp(args[0].from, "no", 2) == 0) {
compress_type = "no";
- info->compress_type = BTRFS_COMPRESS_NONE;
btrfs_clear_opt(info->mount_opt, COMPRESS);
btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
compress_force = false;
btrfs_set_opt(info->mount_opt, FORCE_COMPRESS);
pr_info("btrfs: force %s compression\n",
compress_type);
- } else
+ } else if (btrfs_test_opt(root, COMPRESS)) {
pr_info("btrfs: use %s compression\n",
compress_type);
+ }
break;
case Opt_ssd:
printk(KERN_INFO "btrfs: use ssd allocation scheme\n");
if (!tcount)
return 0;
}
- mask = ~(~0ul << tcount*8);
+ mask = bytemask_from_count(tcount);
return unlikely(!!((a ^ b) & mask));
}
goto error_tgt_fput;
/* Check if EPOLLWAKEUP is allowed */
- if ((epds.events & EPOLLWAKEUP) && !capable(CAP_BLOCK_SUSPEND))
- epds.events &= ~EPOLLWAKEUP;
+ ep_take_care_of_epollwakeup(&epds);
/*
* We have to check that the file structure underneath the file descriptor
u16 embed_count;
};
-static void hfsplus_end_io_sync(struct bio *bio, int err)
-{
- if (err)
- clear_bit(BIO_UPTODATE, &bio->bi_flags);
- complete(bio->bi_private);
-}
-
/*
* hfsplus_submit_bio - Perform block I/O
* @sb: super block of volume for I/O
int hfsplus_submit_bio(struct super_block *sb, sector_t sector,
void *buf, void **data, int rw)
{
- DECLARE_COMPLETION_ONSTACK(wait);
struct bio *bio;
int ret = 0;
u64 io_size;
bio = bio_alloc(GFP_NOIO, 1);
bio->bi_sector = sector;
bio->bi_bdev = sb->s_bdev;
- bio->bi_end_io = hfsplus_end_io_sync;
- bio->bi_private = &wait;
if (!(rw & WRITE) && data)
*data = (u8 *)buf + offset;
buf = (u8 *)buf + len;
}
- submit_bio(rw, bio);
- wait_for_completion(&wait);
-
- if (!bio_flagged(bio, BIO_UPTODATE))
- ret = -EIO;
-
+ ret = submit_bio_wait(rw, bio);
out:
bio_put(bio);
return ret < 0 ? ret : 0;
#define PAGE_OFS(ofs) ((ofs) & (PAGE_SIZE-1))
-static void request_complete(struct bio *bio, int err)
-{
- complete((struct completion *)bio->bi_private);
-}
-
static int sync_request(struct page *page, struct block_device *bdev, int rw)
{
struct bio bio;
struct bio_vec bio_vec;
- struct completion complete;
bio_init(&bio);
bio.bi_max_vecs = 1;
bio.bi_size = PAGE_SIZE;
bio.bi_bdev = bdev;
bio.bi_sector = page->index * (PAGE_SIZE >> 9);
- init_completion(&complete);
- bio.bi_private = &complete;
- bio.bi_end_io = request_complete;
- submit_bio(rw, &bio);
- wait_for_completion(&complete);
- return test_bit(BIO_UPTODATE, &bio.bi_flags) ? 0 : -EIO;
+ return submit_bio_wait(rw, &bio);
}
static int bdev_readpage(void *_sb, struct page *page)
* do a "get_unaligned()" if this helps and is sufficiently
* fast.
*
- * - Little-endian machines (so that we can generate the mask
- * of low bytes efficiently). Again, we *could* do a byte
- * swapping load on big-endian architectures if that is not
- * expensive enough to make the optimization worthless.
- *
* - non-CONFIG_DEBUG_PAGEALLOC configurations (so that we
* do not trap on the (extremely unlikely) case of a page
* crossing operation.
if (!len)
goto done;
}
- mask = ~(~0ul << len*8);
+ mask = bytemask_from_count(len);
hash += mask & a;
done:
return fold_hash(hash);
#include <linux/nfs_fs.h>
#include <linux/sunrpc/rpc_pipe_fs.h>
+#include "../nfs4_fs.h"
#include "../pnfs.h"
#include "../netns.h"
static inline sector_t normalize(sector_t s, int base)
{
sector_t tmp = s; /* Since do_div modifies its argument */
- return s - do_div(tmp, base);
+ return s - sector_div(tmp, base);
}
static inline sector_t normalize_up(sector_t s, int base)
#include <linux/sunrpc/cache.h>
#include <linux/sunrpc/svcauth.h>
#include <linux/sunrpc/rpc_pipe_fs.h>
+#include <linux/nfs_fs.h>
+#include "nfs4_fs.h"
#include "dns_resolve.h"
#include "cache_lib.h"
#include "netns.h"
}
EXPORT_SYMBOL_GPL(nfs4_label_alloc);
#else
-void inline nfs_setsecurity(struct inode *inode, struct nfs_fattr *fattr,
+void nfs_setsecurity(struct inode *inode, struct nfs_fattr *fattr,
struct nfs4_label *label)
{
}
extern struct rpc_procinfo nfs4_procedures[];
#endif
+#ifdef CONFIG_NFS_V4_SECURITY_LABEL
+extern struct nfs4_label *nfs4_label_alloc(struct nfs_server *server, gfp_t flags);
+static inline void nfs4_label_free(struct nfs4_label *label)
+{
+ if (label) {
+ kfree(label->label);
+ kfree(label);
+ }
+ return;
+}
+#else
+static inline struct nfs4_label *nfs4_label_alloc(struct nfs_server *server, gfp_t flags) { return NULL; }
+static inline void nfs4_label_free(void *label) {}
+#endif /* CONFIG_NFS_V4_SECURITY_LABEL */
+
/* proc.c */
void nfs_close_context(struct nfs_open_context *ctx, int is_sync);
extern struct nfs_client *nfs_init_client(struct nfs_client *clp,
#ifndef __LINUX_FS_NFS_NFS4_FS_H
#define __LINUX_FS_NFS_NFS4_FS_H
+#if defined(CONFIG_NFS_V4_2)
+#define NFS4_MAX_MINOR_VERSION 2
+#elif defined(CONFIG_NFS_V4_1)
+#define NFS4_MAX_MINOR_VERSION 1
+#else
+#define NFS4_MAX_MINOR_VERSION 0
+#endif
+
#if IS_ENABLED(CONFIG_NFS_V4)
#define NFS4_MAX_LOOP_ON_RECOVER (10)
calldata->roc_barrier);
nfs_set_open_stateid(state, &calldata->res.stateid, 0);
renew_lease(server, calldata->timestamp);
- nfs4_close_clear_stateid_flags(state,
- calldata->arg.fmode);
break;
+ case -NFS4ERR_ADMIN_REVOKED:
case -NFS4ERR_STALE_STATEID:
case -NFS4ERR_OLD_STATEID:
case -NFS4ERR_BAD_STATEID:
if (calldata->arg.fmode == 0)
break;
default:
- if (nfs4_async_handle_error(task, server, state) == -EAGAIN)
+ if (nfs4_async_handle_error(task, server, state) == -EAGAIN) {
rpc_restart_call_prepare(task);
+ goto out_release;
+ }
}
+ nfs4_close_clear_stateid_flags(state, calldata->arg.fmode);
+out_release:
nfs_release_seqid(calldata->arg.seqid);
nfs_refresh_inode(calldata->inode, calldata->res.fattr);
dprintk("%s: done, ret = %d!\n", __func__, task->tk_status);
dprintk("%s ERROR %d, Reset session\n", __func__,
task->tk_status);
nfs4_schedule_session_recovery(clp->cl_session, task->tk_status);
- goto restart_call;
+ goto wait_on_recovery;
#endif /* CONFIG_NFS_V4_1 */
case -NFS4ERR_DELAY:
nfs_inc_server_stats(server, NFSIOS_DELAY);
trace_nfs4_delegreturn_exit(&data->args, &data->res, task->tk_status);
switch (task->tk_status) {
- case -NFS4ERR_STALE_STATEID:
- case -NFS4ERR_EXPIRED:
case 0:
renew_lease(data->res.server, data->timestamp);
break;
+ case -NFS4ERR_ADMIN_REVOKED:
+ case -NFS4ERR_DELEG_REVOKED:
+ case -NFS4ERR_BAD_STATEID:
+ case -NFS4ERR_OLD_STATEID:
+ case -NFS4ERR_STALE_STATEID:
+ case -NFS4ERR_EXPIRED:
+ task->tk_status = 0;
+ break;
default:
if (nfs4_async_handle_error(task, data->res.server, NULL) ==
-EAGAIN) {
return;
server = NFS_SERVER(lrp->args.inode);
- if (nfs4_async_handle_error(task, server, NULL) == -EAGAIN) {
+ switch (task->tk_status) {
+ default:
+ task->tk_status = 0;
+ case 0:
+ break;
+ case -NFS4ERR_DELAY:
+ if (nfs4_async_handle_error(task, server, NULL) != -EAGAIN)
+ break;
rpc_restart_call_prepare(task);
return;
}
return rp;
}
+static void
+nfsd_reply_cache_unhash(struct svc_cacherep *rp)
+{
+ hlist_del_init(&rp->c_hash);
+ list_del_init(&rp->c_lru);
+}
+
static void
nfsd_reply_cache_free_locked(struct svc_cacherep *rp)
{
rp = list_first_entry(&lru_head, struct svc_cacherep, c_lru);
if (nfsd_cache_entry_expired(rp) ||
num_drc_entries >= max_drc_entries) {
- lru_put_end(rp);
+ nfsd_reply_cache_unhash(rp);
prune_cache_entries();
goto search_cache;
}
return mask;
}
+static void put_pipe_info(struct inode *inode, struct pipe_inode_info *pipe)
+{
+ int kill = 0;
+
+ spin_lock(&inode->i_lock);
+ if (!--pipe->files) {
+ inode->i_pipe = NULL;
+ kill = 1;
+ }
+ spin_unlock(&inode->i_lock);
+
+ if (kill)
+ free_pipe_info(pipe);
+}
+
static int
pipe_release(struct inode *inode, struct file *file)
{
- struct pipe_inode_info *pipe = inode->i_pipe;
- int kill = 0;
+ struct pipe_inode_info *pipe = file->private_data;
__pipe_lock(pipe);
if (file->f_mode & FMODE_READ)
kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
}
- spin_lock(&inode->i_lock);
- if (!--pipe->files) {
- inode->i_pipe = NULL;
- kill = 1;
- }
- spin_unlock(&inode->i_lock);
__pipe_unlock(pipe);
- if (kill)
- free_pipe_info(pipe);
-
+ put_pipe_info(inode, pipe);
return 0;
}
{
struct pipe_inode_info *pipe;
bool is_pipe = inode->i_sb->s_magic == PIPEFS_MAGIC;
- int kill = 0;
int ret;
filp->f_version = 0;
goto err;
err:
- spin_lock(&inode->i_lock);
- if (!--pipe->files) {
- inode->i_pipe = NULL;
- kill = 1;
- }
- spin_unlock(&inode->i_lock);
__pipe_unlock(pipe);
- if (kill)
- free_pipe_info(pipe);
+
+ put_pipe_info(inode, pipe);
return ret;
}
{
struct proc_dir_entry *pde = PDE(file_inode(file));
unsigned long rv = -EIO;
- unsigned long (*get_area)(struct file *, unsigned long, unsigned long,
- unsigned long, unsigned long) = NULL;
+
if (use_pde(pde)) {
+ typeof(proc_reg_get_unmapped_area) *get_area;
+
+ get_area = pde->proc_fops->get_unmapped_area;
#ifdef CONFIG_MMU
- get_area = current->mm->get_unmapped_area;
+ if (!get_area)
+ get_area = current->mm->get_unmapped_area;
#endif
- if (pde->proc_fops->get_unmapped_area)
- get_area = pde->proc_fops->get_unmapped_area;
+
if (get_area)
rv = get_area(file, orig_addr, len, pgoff, flags);
+ else
+ rv = orig_addr;
unuse_pde(pde);
}
return rv;
*/
res = squashfs_read_cache(target_page, block, bsize, pages,
page);
+ if (res < 0)
+ goto mark_errored;
+
goto out;
}
* dealt with by the caller
*/
for (i = 0; i < pages; i++) {
- if (page[i] == target_page)
+ if (page[i] == NULL || page[i] == target_page)
continue;
flush_dcache_page(page[i]);
SetPageError(page[i]);
struct xfs_mount *mp,
struct fstrim_range __user *urange)
{
- struct request_queue *q = mp->m_ddev_targp->bt_bdev->bd_disk->queue;
+ struct request_queue *q = bdev_get_queue(mp->m_ddev_targp->bt_bdev);
unsigned int granularity = q->limits.discard_granularity;
struct fstrim_range range;
xfs_daddr_t start, end, minlen;
* matter as trimming blocks is an advisory interface.
*/
if (range.start >= XFS_FSB_TO_B(mp, mp->m_sb.sb_dblocks) ||
- range.minlen > XFS_FSB_TO_B(mp, XFS_ALLOC_AG_MAX_USABLE(mp)))
+ range.minlen > XFS_FSB_TO_B(mp, XFS_ALLOC_AG_MAX_USABLE(mp)) ||
+ range.len < mp->m_sb.sb_blocksize)
return -XFS_ERROR(EINVAL);
start = BTOBB(range.start);
*/
nfree = 0;
for (agno = nagcount - 1; agno >= oagcount; agno--, new -= agsize) {
+ __be32 *agfl_bno;
+
/*
* AG freespace header block
*/
agfl->agfl_seqno = cpu_to_be32(agno);
uuid_copy(&agfl->agfl_uuid, &mp->m_sb.sb_uuid);
}
+
+ agfl_bno = XFS_BUF_TO_AGFL_BNO(mp, bp);
for (bucket = 0; bucket < XFS_AGFL_SIZE(mp); bucket++)
- agfl->agfl_bno[bucket] = cpu_to_be32(NULLAGBLOCK);
+ agfl_bno[bucket] = cpu_to_be32(NULLAGBLOCK);
error = xfs_bwrite(bp);
xfs_buf_relse(bp);
return -XFS_ERROR(EPERM);
if (copy_from_user(&al_hreq, arg, sizeof(xfs_fsop_attrlist_handlereq_t)))
return -XFS_ERROR(EFAULT);
- if (al_hreq.buflen > XATTR_LIST_MAX)
+ if (al_hreq.buflen < sizeof(struct attrlist) ||
+ al_hreq.buflen > XATTR_LIST_MAX)
return -XFS_ERROR(EINVAL);
/*
if (copy_from_user(&al_hreq, arg,
sizeof(compat_xfs_fsop_attrlist_handlereq_t)))
return -XFS_ERROR(EFAULT);
- if (al_hreq.buflen > XATTR_LIST_MAX)
+ if (al_hreq.buflen < sizeof(struct attrlist) ||
+ al_hreq.buflen > XATTR_LIST_MAX)
return -XFS_ERROR(EINVAL);
/*
return (val + c->high_bits) & ~rhs;
}
+#ifndef zero_bytemask
+#ifdef CONFIG_64BIT
+#define zero_bytemask(mask) (~0ul << fls64(mask))
+#else
+#define zero_bytemask(mask) (~0ul << fls(mask))
+#endif /* CONFIG_64BIT */
+#endif /* zero_bytemask */
+
#endif /* _ASM_WORD_AT_A_TIME_H */
{
sg_set_page(&sg1[num - 1], (void *)sg2, 0, 0);
sg1[num - 1].page_link &= ~0x02;
+ sg1[num - 1].page_link |= 0x01;
}
static inline struct scatterlist *scatterwalk_sg_next(struct scatterlist *sg)
if (sg_is_last(sg))
return NULL;
- return (++sg)->length ? sg : (void *)sg_page(sg);
+ return (++sg)->length ? sg : sg_chain_ptr(sg);
}
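
Background for the two scatterwalk changes, quoted as an assumption about this kernel's scatterlist encoding: the low two bits of page_link are markers, and the old chain helper cleared the end marker without ever setting the chain bit, so sg_chain_ptr() could not recover the pointer:

	/* from <linux/scatterlist.h> of this era: */
	#define sg_is_chain(sg)		((sg)->page_link & 0x01)
	#define sg_is_last(sg)		((sg)->page_link & 0x02)
	#define sg_chain_ptr(sg)	\
		((struct scatterlist *) ((sg)->page_link & ~0x03))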
static inline void scatterwalk_crypto_chain(struct scatterlist *head,
/* Is this the object we're looking for? */
bool (*compare_object)(const void *object, const void *index_key);
- /* How different are two objects, to a bit position in their keys? (or
- * -1 if they're the same)
+ /* How different is an object from an index key, to a bit position in
+ * their keys? (or -1 if they're the same)
*/
- int (*diff_objects)(const void *a, const void *b);
+ int (*diff_objects)(const void *object, const void *index_key);
/* Method to free an object. */
void (*free_object)(void *object);
/* The hash is always the low bits of hash_len */
#ifdef __LITTLE_ENDIAN
#define HASH_LEN_DECLARE u32 hash; u32 len;
+ #define bytemask_from_count(cnt) (~(~0ul << (cnt)*8))
#else
#define HASH_LEN_DECLARE u32 len; u32 hash;
+ #define bytemask_from_count(cnt) (~(~0ul >> (cnt)*8))
#endif
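
A worked example of the new helper on a 64-bit machine: the mask selects the first cnt bytes of a word, at whichever end of the register those bytes live:

	bytemask_from_count(3)
		/* little-endian: ~(~0ul << 24) == 0x0000000000ffffff */
		/* big-endian:    ~(~0ul >> 24) == 0xffffff0000000000 */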
/*
struct efi_variable var;
struct list_head list;
struct kobject kobj;
+ bool scanning;
+ bool deleting;
};
#if defined(CONFIG_EFI_VARS) || defined(CONFIG_EFI_VARS_MODULE)
int efivars_sysfs_init(void);
+#define EFIVARS_DATA_SIZE_MAX 1024
+
#endif /* CONFIG_EFI_VARS */
#endif /* _LINUX_EFI_H */
#ifdef CONFIG_PERF_EVENTS
int perf_refcount;
struct hlist_head __percpu *perf_events;
+
+ int (*perf_perm)(struct ftrace_event_call *,
+ struct perf_event *);
#endif
};
} \
early_initcall(trace_init_flags_##name);
+#define __TRACE_EVENT_PERF_PERM(name, expr...) \
+ static int perf_perm_##name(struct ftrace_event_call *tp_event, \
+ struct perf_event *p_event) \
+ { \
+ return ({ expr; }); \
+ } \
+ static int __init trace_init_perf_perm_##name(void) \
+ { \
+ event_##name.perf_perm = &perf_perm_##name; \
+ return 0; \
+ } \
+ early_initcall(trace_init_perf_perm_##name);
+
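
How a trace header would use this (the event name is hypothetical; the expression may reference the tp_event and p_event parameters of the generated perf_perm_##name() function, and is_sampling_event() is an existing perf helper):

	/* deny perf sampling on this event, but still allow counting: */
	TRACE_EVENT_PERF_PERM(my_event,
		is_sampling_event(p_event) ? -EPERM : 0);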
#define PERF_MAX_TRACE_SIZE 2048
#define MAX_FILTER_STR_VAL 256 /* Should handle KSYM_SYMBOL_LEN */
#define __LINUX_GPIO_DRIVER_H
#include <linux/types.h>
+#include <linux/module.h>
struct device;
struct gpio_desc;
+struct of_phandle_args;
+struct device_node;
struct seq_file;
/**
s32 units;
s32 unit_expo;
s32 size;
+ s32 logical_minimum;
+ s32 logical_maximum;
};
/**
#define HID_USAGE_SENSOR_PROP_REPORT_STATE 0x200316
#define HID_USAGE_SENSOR_PROY_POWER_STATE 0x200319
+/* Power state enumerations */
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_UNDEFINED_ENUM 0x00
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_D0_FULL_POWER_ENUM 0x01
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_D1_LOW_POWER_ENUM 0x02
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_D2_STANDBY_WITH_WAKE_ENUM 0x03
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_D3_SLEEP_WITH_WAKE_ENUM 0x04
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_D4_POWER_OFF_ENUM 0x05
+
+/* Report State enumerations */
+#define HID_USAGE_SENSOR_PROP_REPORTING_STATE_NO_EVENTS_ENUM 0x00
+#define HID_USAGE_SENSOR_PROP_REPORTING_STATE_ALL_EVENTS_ENUM 0x01
+
#endif
return 0;
}
-#define isolate_huge_page(p, l) false
+static inline bool isolate_huge_page(struct page *page, struct list_head *list)
+{
+ return false;
+}
#define putback_active_hugepage(p) do {} while (0)
#define is_hugepage_active(x) false
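
The macro-to-inline conversion is the usual stub-hygiene fix: a macro that expands to plain false never evaluates or type-checks its arguments, so callers' variables look unused and outright type errors compile when CONFIG_HUGETLB_PAGE is off. A hypothetical caller shows what the old form let through:

	/* compiled without complaint under the old #define: */
	isolate_huge_page(not_even_a_page, 42);
	/* the static inline now enforces (struct page *, struct list_head *) */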
};
typedef enum irqreturn irqreturn_t;
-#define IRQ_RETVAL(x) ((x) != IRQ_NONE)
+#define IRQ_RETVAL(x) ((x) ? IRQ_HANDLED : IRQ_NONE)
#endif
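
The old IRQ_RETVAL expanded to the plain int 0 or 1, which only happens to match because IRQ_NONE is 0 and IRQ_HANDLED is 1; the new form returns the enum constants themselves, so it stays correct regardless of their values:

	IRQ_RETVAL(nbytes > 0);	/* IRQ_HANDLED if any work was done, else IRQ_NONE */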
(__x < 0) ? -__x : __x; \
})
-#if defined(CONFIG_PROVE_LOCKING) || defined(CONFIG_DEBUG_ATOMIC_SLEEP)
+#if defined(CONFIG_MMU) && \
+ (defined(CONFIG_PROVE_LOCKING) || defined(CONFIG_DEBUG_ATOMIC_SLEEP))
void might_fault(void);
#else
static inline void might_fault(void) { }
struct sec_pmic_dev {
struct device *dev;
struct sec_platform_data *pdata;
- struct regmap *regmap;
+ struct regmap *regmap_pmic;
+ struct regmap *regmap_rtc;
struct i2c_client *i2c;
struct i2c_client *rtc;
#define NFS4_VERSION 4
#define NFS4_MINOR_VERSION 0
-#if defined(CONFIG_NFS_V4_2)
-#define NFS4_MAX_MINOR_VERSION 2
-#else
-#if defined(CONFIG_NFS_V4_1)
-#define NFS4_MAX_MINOR_VERSION 1
-#else
-#define NFS4_MAX_MINOR_VERSION 0
-#endif /* CONFIG_NFS_V4_1 */
-#endif /* CONFIG_NFS_V4_2 */
-
#define NFS4_DEBUG 1
/* Index of predefined Linux client operations */
extern int nfs_mountpoint_expiry_timeout;
extern void nfs_release_automount_timer(void);
-/*
- * linux/fs/nfs/nfs4proc.c
- */
-#ifdef CONFIG_NFS_V4_SECURITY_LABEL
-extern struct nfs4_label *nfs4_label_alloc(struct nfs_server *server, gfp_t flags);
-static inline void nfs4_label_free(struct nfs4_label *label)
-{
- if (label) {
- kfree(label->label);
- kfree(label);
- }
- return;
-}
-#else
-static inline struct nfs4_label *nfs4_label_alloc(struct nfs_server *server, gfp_t flags) { return NULL; }
-static inline void nfs4_label_free(void *label) {}
-#endif
-
/*
* linux/fs/nfs/unlink.c
*/
unsigned int balance_interval; /* initialise to 1. units in ms. */
unsigned int nr_balance_failed; /* initialise to 0 */
- u64 last_update;
-
/* idle_balance() stats */
u64 max_newidle_lb_cost;
unsigned long next_decay_max_lb_cost;
extern int shmem_fill_super(struct super_block *sb, void *data, int silent);
extern struct file *shmem_file_setup(const char *name,
loff_t size, unsigned long flags);
+extern struct file *shmem_kernel_file_setup(const char *name, loff_t size,
+ unsigned long flags);
extern int shmem_zero_setup(struct vm_area_struct *);
extern int shmem_lock(struct file *file, int lock, struct user_struct *user);
extern void shmem_unlock_mapping(struct address_space *mapping);
#define TRACE_EVENT_FLAGS(event, flag)
+#define TRACE_EVENT_PERF_PERM(event, expr...)
+
#endif /* DECLARE_TRACE */
#ifndef TRACE_EVENT
#define TRACE_EVENT_FLAGS(event, flag)
+#define TRACE_EVENT_PERF_PERM(event, expr...)
+
#endif /* ifdef TRACE_EVENT (see note above) */
* @sg: scatter gather buffer list, the buffer size of each element in
* the list (except the last) must be divisible by the endpoint's
* max packet size if no_sg_constraint isn't set in 'struct usb_bus'
+ * (FIXME: scatter-gather under xHCI is broken for periodic transfers.
+ * Do not use urb->sg for interrupt endpoints for now, only bulk.)
* @num_mapped_sgs: (internal) number of mapped sg entries
* @num_sgs: number of entries in the sg list
* @transfer_buffer_length: How big is transfer_buffer. The transfer may
#define WUSB_KEY_INDEX_TYPE_GTK 2
#define WUSB_KEY_INDEX_ORIGINATOR_HOST 0
#define WUSB_KEY_INDEX_ORIGINATOR_DEVICE 1
+/* bits 0-3 used for the key index. */
+#define WUSB_KEY_INDEX_MAX 15
/* A CCM Nonce, defined in WUSB1.0[6.4.1] */
struct aes_ccm_nonce {
struct vb2_mem_ops {
void *(*alloc)(void *alloc_ctx, unsigned long size, gfp_t gfp_flags);
void (*put)(void *buf_priv);
- struct dma_buf *(*get_dmabuf)(void *buf_priv);
+ struct dma_buf *(*get_dmabuf)(void *buf_priv, unsigned long flags);
void *(*get_userptr)(void *alloc_ctx, unsigned long vaddr,
unsigned long size, int write);
int ip_ra_control(struct sock *sk, unsigned char on,
void (*destructor)(struct sock *));
-int ip_recv_error(struct sock *sk, struct msghdr *msg, int len);
+int ip_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len);
void ip_icmp_error(struct sock *sk, struct sk_buff *skb, int err, __be16 port,
u32 info, u8 *payload);
void ip_local_error(struct sock *sk, int err, __be32 daddr, __be16 dport,
int ip6_datagram_connect(struct sock *sk, struct sockaddr *addr, int addr_len);
-int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len);
-int ipv6_recv_rxpmtu(struct sock *sk, struct msghdr *msg, int len);
+int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
+ int *addr_len);
+int ipv6_recv_rxpmtu(struct sock *sk, struct msghdr *msg, int len,
+ int *addr_len);
void ipv6_icmp_error(struct sock *sk, struct sk_buff *skb, int err, __be16 port,
u32 info, u8 *payload);
void ipv6_local_error(struct sock *sk, int err, struct flowi6 *fl6, u32 info);
/* Compatibility glue so we can support IPv6 when it's compiled as a module */
struct pingv6_ops {
- int (*ipv6_recv_error)(struct sock *sk, struct msghdr *msg, int len);
+ int (*ipv6_recv_error)(struct sock *sk, struct msghdr *msg, int len,
+ int *addr_len);
int (*ip6_datagram_recv_ctl)(struct sock *sk, struct msghdr *msg,
struct sk_buff *skb);
int (*icmpv6_err_convert)(u8 type, u8 code, int *err);
#define SCTP_NEED_FRTX 0x1
#define SCTP_DONT_FRTX 0x2
__u16 rtt_in_progress:1, /* This chunk used for RTT calc? */
+ resent:1, /* Has this chunk ever been resent. */
has_tsn:1, /* Does this chunk have a TSN yet? */
has_ssn:1, /* Does this chunk have a SSN yet? */
singleton:1, /* Only chunk in the packet? */
*/
unsigned ordered_tag:1;
+ /* True if the controller does not support WRITE SAME */
+ unsigned no_write_same:1;
+
/*
* Countdown for host blocking with no commands outstanding.
*/
/* Don't resume host in EH */
unsigned eh_noresume:1;
+ /* The controller does not support WRITE SAME */
+ unsigned no_write_same:1;
+
/*
* Optional work queue to be utilized by the transport
*/
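
A hedged sketch of how a SCSI LLD would opt out (the template name is illustrative); sd is then expected to report zero WRITE SAME capability for devices behind that host:

	static struct scsi_host_template my_hba_template = {
		.name		= "my_hba",
		.no_write_same	= 1,	/* controller cannot do WRITE SAME */
	};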
{
struct snd_sg_buf *sgbuf = dmab->private_data;
dma_addr_t addr = sgbuf->table[offset >> PAGE_SHIFT].addr;
- addr &= PAGE_MASK;
+ addr &= ~((dma_addr_t)PAGE_SIZE - 1);
return addr + offset % PAGE_SIZE;
}
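
The one-line change above matters when dma_addr_t is wider than unsigned long (a 32-bit kernel with LPAE/PAE, for instance): PAGE_MASK is an unsigned long, so masking a 64-bit address with it zero-extends the mask and discards the upper 32 bits:

	/* PAGE_SIZE 4096, 32-bit unsigned long, 64-bit dma_addr_t: */
	dma_addr_t addr = 0x123456789000ULL;
	addr & PAGE_MASK;			/* 0x0000000056789000 - truncated */
	addr & ~((dma_addr_t)PAGE_SIZE - 1);	/* 0x0000123456789000 - intact    */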
SND_SOC_DAPM_INIT_REG_VAL(wreg, wshift, winvert), \
.kcontrol_news = wcontrols, .num_kcontrols = 1}
#define SND_SOC_DAPM_MUX(wname, wreg, wshift, winvert, wcontrols) \
-{ .id = snd_soc_dapm_mux, .name = wname, .reg = wreg, \
+{ .id = snd_soc_dapm_mux, .name = wname, \
+ SND_SOC_DAPM_INIT_REG_VAL(wreg, wshift, winvert), \
.kcontrol_news = wcontrols, .num_kcontrols = 1}
#define SND_SOC_DAPM_VIRT_MUX(wname, wreg, wshift, winvert, wcontrols) \
{ .id = snd_soc_dapm_virt_mux, .name = wname, \
#define TRACE_EVENT_FLAGS(name, value) \
__TRACE_EVENT_FLAGS(name, value)
+#undef TRACE_EVENT_PERF_PERM
+#define TRACE_EVENT_PERF_PERM(name, expr...) \
+ __TRACE_EVENT_PERF_PERM(name, expr)
+
#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
#undef TRACE_EVENT_FLAGS
#define TRACE_EVENT_FLAGS(event, flag)
+#undef TRACE_EVENT_PERF_PERM
+#define TRACE_EVENT_PERF_PERM(event, expr...)
+
#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
/*
__u64 data;
} EPOLL_PACKED;
-
+#ifdef CONFIG_PM_SLEEP
+static inline void ep_take_care_of_epollwakeup(struct epoll_event *epev)
+{
+ if ((epev->events & EPOLLWAKEUP) && !capable(CAP_BLOCK_SUSPEND))
+ epev->events &= ~EPOLLWAKEUP;
+}
+#else
+static inline void ep_take_care_of_epollwakeup(struct epoll_event *epev)
+{
+ epev->events &= ~EPOLLWAKEUP;
+}
+#endif
#endif /* _UAPI_LINUX_EVENTPOLL_H */
#define GENL_ID_GENERATE 0
#define GENL_ID_CTRL NLMSG_MIN_TYPE
#define GENL_ID_VFS_DQUOT (NLMSG_MIN_TYPE + 1)
+#define GENL_ID_PMCRAID (NLMSG_MIN_TYPE + 2)
/**************************************************************************
* Controller
IFLA_HSR_UNSPEC,
IFLA_HSR_SLAVE1,
IFLA_HSR_SLAVE2,
- IFLA_HSR_MULTICAST_SPEC,
+ IFLA_HSR_MULTICAST_SPEC, /* Last byte of supervision addr */
+ IFLA_HSR_SUPERVISION_ADDR, /* Supervision frame multicast addr */
+ IFLA_HSR_SEQ_NR,
__IFLA_HSR_MAX,
};
#define BTN_DPAD_LEFT 0x222
#define BTN_DPAD_RIGHT 0x223
+#define KEY_ALS_TOGGLE 0x230 /* Ambient light sensor */
+
#define BTN_TRIGGER_HAPPY 0x2c0
#define BTN_TRIGGER_HAPPY1 0x2c0
#define BTN_TRIGGER_HAPPY2 0x2c1
#define SW_FRONT_PROXIMITY 0x0b /* set = front proximity sensor active */
#define SW_ROTATE_LOCK 0x0c /* set = rotate locked/disabled */
#define SW_LINEIN_INSERT 0x0d /* set = inserted */
+#define SW_MUTE_DEVICE 0x0e /* set = device disabled */
#define SW_MAX 0x0f
#define SW_CNT (SW_MAX+1)
#include <linux/virtio_ring.h>
-#ifndef __KERNEL__
-#define ALIGN(a, x) (((a) + (x) - 1) & ~((x) - 1))
-#define __aligned(x) __attribute__ ((aligned(x)))
-#endif
-
-#define mic_aligned_size(x) ALIGN(sizeof(x), 8)
+#define __mic_align(a, x) (((a) + (x) - 1) & ~((x) - 1))
/**
* struct mic_device_desc: Virtio device information shared between the
__u8 feature_len;
__u8 config_len;
__u8 status;
- __u64 config[0];
-} __aligned(8);
+ __le64 config[0];
+} __attribute__ ((aligned(8)));
/**
* struct mic_device_ctrl: Per virtio device information in the device page
* @h2c_vdev_db: The doorbell number to be used by host. Set by guest.
*/
struct mic_device_ctrl {
- __u64 vdev;
+ __le64 vdev;
__u8 config_change;
__u8 vdev_reset;
__u8 guest_ack;
__u8 used_address_updated;
__s8 c2h_vdev_db;
__s8 h2c_vdev_db;
-} __aligned(8);
+} __attribute__ ((aligned(8)));
/**
* struct mic_bootparam: Virtio device independent information in device page
* @shutdown_card: Set to 1 by the host when a card shutdown is initiated
*/
struct mic_bootparam {
- __u32 magic;
+ __le32 magic;
__s8 c2h_shutdown_db;
__s8 h2c_shutdown_db;
__s8 h2c_config_db;
__u8 shutdown_status;
__u8 shutdown_card;
-} __aligned(8);
+} __attribute__ ((aligned(8)));
/**
* struct mic_device_page: High level representation of the device page
* @num: The number of entries in the virtio_ring
*/
struct mic_vqconfig {
- __u64 address;
- __u64 used_address;
- __u16 num;
-} __aligned(8);
+ __le64 address;
+ __le64 used_address;
+ __le16 num;
+} __attribute__ ((aligned(8)));
/*
* The alignment to use between consumer and producer parts of vring.
*/
struct _mic_vring_info {
__u16 avail_idx;
- int magic;
+ __le32 magic;
};
/**
int len;
};
-#define mic_aligned_desc_size(d) ALIGN(mic_desc_size(d), 8)
+#define mic_aligned_desc_size(d) __mic_align(mic_desc_size(d), 8)
#ifndef INTEL_MIC_CARD
static inline unsigned mic_desc_size(const struct mic_device_desc *desc)
{
- return mic_aligned_size(*desc)
- + desc->num_vq * mic_aligned_size(struct mic_vqconfig)
- + desc->feature_len * 2
- + desc->config_len;
+ return sizeof(*desc) + desc->num_vq * sizeof(struct mic_vqconfig)
+ + desc->feature_len * 2 + desc->config_len;
}
static inline struct mic_vqconfig *
}
static inline unsigned mic_total_desc_size(struct mic_device_desc *desc)
{
- return mic_aligned_desc_size(desc) +
- mic_aligned_size(struct mic_device_ctrl);
+ return mic_aligned_desc_size(desc) + sizeof(struct mic_device_ctrl);
}
#endif
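
The __mic_align() helper introduced above is the standard round-up-to-a-power-of-two idiom, which mic_aligned_desc_size() uses to keep descriptors 8-byte aligned:

	__mic_align(13, 8);	/* (13 + 7) & ~7 == 16 */
	__mic_align(16, 8);	/* already aligned: 16 */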
};
enum {
+ /* NETLINK_DIAG_NONE, standard nl API requires this attribute! */
NETLINK_DIAG_MEMINFO,
NETLINK_DIAG_GROUPS,
NETLINK_DIAG_RX_RING,
};
enum {
+ /* PACKET_DIAG_NONE, standard nl API requires this attribute! */
PACKET_DIAG_INFO,
PACKET_DIAG_MCLIST,
PACKET_DIAG_RX_RING,
};
enum {
+ /* UNIX_DIAG_NONE, standard nl API requires this attribute! */
UNIX_DIAG_NAME,
UNIX_DIAG_VFS,
UNIX_DIAG_PEER,
#include <sound/compress_params.h>
-#define SNDRV_COMPRESS_VERSION SNDRV_PROTOCOL_VERSION(0, 1, 1)
+#define SNDRV_COMPRESS_VERSION SNDRV_PROTOCOL_VERSION(0, 1, 2)
/**
* struct snd_compressed_buffer: compressed buffer
* @fragment_size: size of buffer fragment in bytes
struct snd_compr_tstamp {
__u32 byte_offset;
__u32 copied_total;
- snd_pcm_uframes_t pcm_frames;
- snd_pcm_uframes_t pcm_io_frames;
+ __u32 pcm_frames;
+ __u32 pcm_io_frames;
__u32 sampling_rate;
};
config_data.gz
timeconst.h
hz.bc
+x509_certificate_list
{
int cpu;
- if (event->cpu != -1) {
- swevent_hlist_put_cpu(event, event->cpu);
- return;
- }
-
for_each_possible_cpu(cpu)
swevent_hlist_put_cpu(event, cpu);
}
int err;
int cpu, failed_cpu;
- if (event->cpu != -1)
- return swevent_hlist_get_cpu(event, event->cpu);
-
get_online_cpus();
for_each_possible_cpu(cpu) {
err = swevent_hlist_get_cpu(event, cpu);
return -EINVAL;
address -= key->both.offset;
+ if (unlikely(!access_ok(rw, uaddr, sizeof(u32))))
+ return -EFAULT;
+
/*
* PROCESS_PRIVATE futexes are fast.
* As the mm cannot disappear under us and the 'key' only needs
* but access_ok() should be faster than find_vma()
*/
if (!fshared) {
- if (unlikely(!access_ok(VERIFY_WRITE, uaddr, sizeof(u32))))
- return -EFAULT;
key->private.mm = mm;
key->private.address = address;
get_futex_key_refs(key);
put_page(page);
/* serialize against __split_huge_page_splitting() */
local_irq_disable();
- if (likely(__get_user_pages_fast(address, 1, 1, &page) == 1)) {
+ if (likely(__get_user_pages_fast(address, 1, !ro, &page) == 1)) {
page_head = compound_head(page);
/*
* page_head is valid pointer but we must pin
bool is_early = desc->action &&
desc->action->flags & IRQF_EARLY_RESUME;
- if (is_early != want_early)
+ if (!is_early && want_early)
continue;
raw_spin_lock_irqsave(&desc->lock, flags);
static int rcu_idle_lazy_gp_delay = RCU_IDLE_LAZY_GP_DELAY;
module_param(rcu_idle_lazy_gp_delay, int, 0644);
-extern int tick_nohz_enabled;
+extern int tick_nohz_active;
/*
* Try to advance callbacks for all flavors of RCU on the current CPU, but
int tne;
/* Handle nohz enablement switches conservatively. */
- tne = ACCESS_ONCE(tick_nohz_enabled);
+ tne = ACCESS_ONCE(tick_nohz_active);
if (tne != rdtp->tick_nohz_enabled_snap) {
if (rcu_cpu_has_callbacks(cpu, NULL))
invoke_rcu_core(); /* force nohz to see update. */
} while (need_resched());
}
EXPORT_SYMBOL(preempt_schedule);
+#endif /* CONFIG_PREEMPT */
/*
* this is the entry point to schedule() from kernel preemption
exception_exit(prev_state);
}
-#endif /* CONFIG_PREEMPT */
-
int default_wake_function(wait_queue_t *curr, unsigned mode, int wake_flags,
void *key)
{
cpumask_clear_cpu(rq->cpu, old_rd->span);
/*
- * If we dont want to free the old_rt yet then
+ * If we don't want to free the old_rd yet then
* set old_rd to NULL to skip the freeing later
* in this function:
*/
if (sd) {
id = cpumask_first(sched_domain_span(sd));
size = cpumask_weight(sched_domain_span(sd));
- rcu_assign_pointer(per_cpu(sd_busy, cpu), sd->parent);
+ sd = sd->parent; /* sd_busy */
}
+ rcu_assign_pointer(per_cpu(sd_busy, cpu), sd);
rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
per_cpu(sd_llc_size, cpu) = size;
*/
for_each_cpu(cpu, sched_group_cpus(sdg)) {
- struct sched_group *sg = cpu_rq(cpu)->sd->groups;
+ struct sched_group_power *sgp;
+ struct rq *rq = cpu_rq(cpu);
- power_orig += sg->sgp->power_orig;
- power += sg->sgp->power;
+ /*
+ * build_sched_domains() -> init_sched_groups_power()
+ * gets here before we've attached the domains to the
+ * runqueues.
+ *
+ * Use power_of(), which is set irrespective of domains
+ * in update_cpu_power().
+ *
+ * This avoids power/power_orig from being 0 and
+ * causing divide-by-zero issues on boot.
+ *
+ * Runtime updates will correct power_orig.
+ */
+ if (unlikely(!rq->sd)) {
+ power_orig += power_of(cpu);
+ power += power_of(cpu);
+ continue;
+ }
+
+ sgp = rq->sd->groups->sgp;
+ power_orig += sgp->power_orig;
+ power += sgp->power;
}
} else {
/*
__INITRODATA
+ .align 8
.globl VMLINUX_SYMBOL(system_certificate_list)
VMLINUX_SYMBOL(system_certificate_list):
+__cert_list_start:
.incbin "kernel/x509_certificate_list"
- .globl VMLINUX_SYMBOL(system_certificate_list_end)
-VMLINUX_SYMBOL(system_certificate_list_end):
+__cert_list_end:
+
+ .align 8
+ .globl VMLINUX_SYMBOL(system_certificate_list_size)
+VMLINUX_SYMBOL(system_certificate_list_size):
+#ifdef CONFIG_64BIT
+ .quad __cert_list_end - __cert_list_start
+#else
+ .long __cert_list_end - __cert_list_start
+#endif
EXPORT_SYMBOL_GPL(system_trusted_keyring);
extern __initconst const u8 system_certificate_list[];
-extern __initconst const u8 system_certificate_list_end[];
+extern __initconst const unsigned long system_certificate_list_size;
/*
* Load the compiled-in keys
pr_notice("Loading compiled-in X.509 certificates\n");
- end = system_certificate_list_end;
p = system_certificate_list;
+ end = p + system_certificate_list_size;
while (p < end) {
/* Each cert begins with an ASN.1 SEQUENCE tag and must be more
* than 256 bytes in size.
*/
ktime_t tick_next_period;
ktime_t tick_period;
+
+/*
+ * tick_do_timer_cpu is a timer core internal variable which holds the CPU NR
+ * which is responsible for calling do_timer(), i.e. the timekeeping stuff. This
+ * variable has two functions:
+ *
+ * 1) Prevent a thundering herd issue of a gazillion of CPUs trying to grab the
+ * timekeeping lock all at once. Only the CPU which is assigned to do the
+ * update is handling it.
+ *
+ * 2) Hand off the duty in the NOHZ idle case by setting the value to
+ * TICK_DO_TIMER_NONE, i.e. a non existing CPU. So the next cpu which looks
+ * at it will take over and keep the time keeping alive. The handover
+ * procedure also covers cpu hotplug.
+ */
int tick_do_timer_cpu __read_mostly = TICK_DO_TIMER_BOOT;
/*
/*
* NO HZ enabled ?
*/
-int tick_nohz_enabled __read_mostly = 1;
-
+static int tick_nohz_enabled __read_mostly = 1;
+int tick_nohz_active __read_mostly;
/*
* Enable / Disable tickless mode
*/
struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
ktime_t now, idle;
- if (!tick_nohz_enabled)
+ if (!tick_nohz_active)
return -1;
now = ktime_get();
struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
ktime_t now, iowait;
- if (!tick_nohz_enabled)
+ if (!tick_nohz_active)
return -1;
now = ktime_get();
return false;
}
- if (unlikely(ts->nohz_mode == NOHZ_MODE_INACTIVE))
+ if (unlikely(ts->nohz_mode == NOHZ_MODE_INACTIVE)) {
+ ts->sleep_length = (ktime_t) { .tv64 = NSEC_PER_SEC/HZ };
return false;
+ }
if (need_resched())
return false;
local_irq_disable();
ts = &__get_cpu_var(tick_cpu_sched);
- /*
- * set ts->inidle unconditionally. even if the system did not
- * switch to nohz mode the cpu frequency governers rely on the
- * update of the idle time accounting in tick_nohz_start_idle().
- */
ts->inidle = 1;
__tick_nohz_idle_enter(ts);
struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);
ktime_t next;
- if (!tick_nohz_enabled)
+ if (!tick_nohz_active)
return;
local_irq_disable();
local_irq_enable();
return;
}
-
+ tick_nohz_active = 1;
ts->nohz_mode = NOHZ_MODE_LOWRES;
/*
}
#ifdef CONFIG_NO_HZ_COMMON
- if (tick_nohz_enabled)
+ if (tick_nohz_enabled) {
ts->nohz_mode = NOHZ_MODE_HIGHRES;
+ tick_nohz_active = 1;
+ }
#endif
}
#endif /* HIGH_RES_TIMERS */
tk->xtime_nsec -= remainder;
tk->xtime_nsec += 1ULL << tk->shift;
tk->ntp_error += remainder << tk->ntp_error_shift;
-
+ tk->ntp_error -= (1ULL << tk->shift) << tk->ntp_error_shift;
}
#else
#define old_vsyscall_fixup(tk)
/*
* The APs use this path later in boot
*/
- base = kmalloc_node(sizeof(*base),
- GFP_KERNEL | __GFP_ZERO,
- cpu_to_node(cpu));
+ base = kzalloc_node(sizeof(*base), GFP_KERNEL,
+ cpu_to_node(cpu));
if (!base)
return -ENOMEM;
static int perf_trace_event_perm(struct ftrace_event_call *tp_event,
struct perf_event *p_event)
{
+ if (tp_event->perf_perm) {
+ int ret = tp_event->perf_perm(tp_event, p_event);
+ if (ret)
+ return ret;
+ }
+
/* The ftrace function trace is allowed only for root. */
if (ftrace_event_is_function(tp_event) &&
perf_paranoid_tracepoint_raw() && !capable(CAP_SYS_ADMIN))
int perf_trace_init(struct perf_event *p_event)
{
struct ftrace_event_call *tp_event;
- int event_id = p_event->attr.config;
+ u64 event_id = p_event->attr.config;
int ret = -EINVAL;
mutex_lock(&event_mutex);
/* Disable any running events */
__ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0);
+ /* Access to events are within rcu_read_lock_sched() */
+ synchronize_sched();
+
down_write(&trace_event_sem);
__trace_remove_event_dirs(tr);
debugfs_remove_recursive(tr->event_dir);
if (!tr->sys_refcount_enter)
unregister_trace_sys_enter(ftrace_syscall_enter, tr);
mutex_unlock(&syscall_trace_lock);
- /*
- * Callers expect the event to be completely disabled on
- * return, so wait for current handlers to finish.
- */
- synchronize_sched();
}
static int reg_event_syscall_exit(struct ftrace_event_file *file,
if (!tr->sys_refcount_exit)
unregister_trace_sys_exit(ftrace_syscall_exit, tr);
mutex_unlock(&syscall_trace_lock);
- /*
- * Callers expect the event to be completely disabled on
- * return, so wait for current handlers to finish.
- */
- synchronize_sched();
}
static int __init init_syscall_trace(struct ftrace_event_call *call)
pr_devel("all leaves cluster together\n");
diff = INT_MAX;
for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
- int x = ops->diff_objects(assoc_array_ptr_to_leaf(edit->leaf),
- assoc_array_ptr_to_leaf(node->slots[i]));
+ int x = ops->diff_objects(assoc_array_ptr_to_leaf(node->slots[i]),
+ index_key);
if (x < diff) {
BUG_ON(x < 0);
diff = x;
pmd = pmdp_get_and_clear(mm, old_addr, old_pmd);
VM_BUG_ON(!pmd_none(*new_pmd));
set_pmd_at(mm, new_addr, new_pmd, pmd_mksoft_dirty(pmd));
- if (new_ptl != old_ptl)
+ if (new_ptl != old_ptl) {
+ pgtable_t pgtable;
+
+ /*
+ * Move preallocated PTE page table if new_pmd is on
+ * different PMD page table.
+ */
+ pgtable = pgtable_trans_huge_withdraw(mm, old_pmd);
+ pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
+
spin_unlock(new_ptl);
+ }
spin_unlock(old_ptl);
}
out:
goto bypass;
if (unlikely(task_in_memcg_oom(current)))
- goto bypass;
+ goto nomem;
+
+ if (gfp_mask & __GFP_NOFAIL)
+ oom = false;
/*
* We always charge the cgroup the mm_struct belongs to.
static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ /*
+ * XXX: css_offline() would be where we should reparent all
+ * memory to prepare the cgroup for destruction. However,
+ * memcg does not do css_tryget() and res_counter charging
+ * under the same RCU lock region, which means that charging
+ * could race with offlining. Offlining only happens to
+ * cgroups with no tasks in them but charges can show up
+ * without any tasks from the swapin path when the target
+ * memcg is looked up from the swapout record and not from the
+ * current task as it usually is. A race like this can leak
+ * charges and put pages with stale cgroup pointers into
+ * circulation:
+ *
+ * #0 #1
+ * lookup_swap_cgroup_id()
+ * rcu_read_lock()
+ * mem_cgroup_lookup()
+ * css_tryget()
+ * rcu_read_unlock()
+ * disable css_tryget()
+ * call_rcu()
+ * offline_css()
+ * reparent_charges()
+ * res_counter_charge()
+ * css_put()
+ * css_free()
+ * pc->mem_cgroup = dead memcg
+ * add page to lru
+ *
+ * The bulk of the charges are still moved in offline_css() to
+ * avoid pinning a lot of pages in case a long-term reference
+ * like a swapout record is deferring the css_free() to long
+ * after offlining. But this makes sure we catch any charges
+ * made after offlining:
+ */
+ mem_cgroup_reparent_charges(memcg);
memcg_destroy_kmem(memcg);
__mem_cgroup_free(memcg);
.d_dname = simple_dname
};
-/**
- * shmem_file_setup - get an unlinked file living in tmpfs
- * @name: name for dentry (to be seen in /proc/<pid>/maps
- * @size: size to be set for the file
- * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
- */
-struct file *shmem_file_setup(const char *name, loff_t size, unsigned long flags)
+static struct file *__shmem_file_setup(const char *name, loff_t size,
+ unsigned long flags, unsigned int i_flags)
{
struct file *res;
struct inode *inode;
if (!inode)
goto put_dentry;
+ inode->i_flags |= i_flags;
d_instantiate(path.dentry, inode);
inode->i_size = size;
clear_nlink(inode); /* It is unlinked */
shmem_unacct_size(flags, size);
return res;
}
+
+/**
+ * shmem_kernel_file_setup - get an unlinked file living in tmpfs which must be
+ * kernel internal. There will be NO LSM permission checks against the
+ * underlying inode. So users of this interface must do LSM checks at a
+ * higher layer. The one user is the big_key implementation. LSM checks
+ * are provided at the key level rather than the inode level.
+ * @name: name for dentry (to be seen in /proc/<pid>/maps)
+ * @size: size to be set for the file
+ * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
+ */
+struct file *shmem_kernel_file_setup(const char *name, loff_t size, unsigned long flags)
+{
+ return __shmem_file_setup(name, size, flags, S_PRIVATE);
+}
+
+/**
+ * shmem_file_setup - get an unlinked file living in tmpfs
+ * @name: name for dentry (to be seen in /proc/<pid>/maps)
+ * @size: size to be set for the file
+ * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
+ */
+struct file *shmem_file_setup(const char *name, loff_t size, unsigned long flags)
+{
+ return __shmem_file_setup(name, size, flags, 0);
+}
EXPORT_SYMBOL_GPL(shmem_file_setup);
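
A minimal sketch of the new interface (the file name is illustrative; per the comment above, the in-tree user is the big_key key type, which does its own LSM checks at the key level):

	struct file *f = shmem_kernel_file_setup("my-kernel-buf", size, 0);
	if (IS_ERR(f))
		return PTR_ERR(f);	/* inode carries S_PRIVATE: no LSM inode checks */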
/**
__get_user(kmsg->msg_flags, &umsg->msg_flags))
return -EFAULT;
if (kmsg->msg_namelen > sizeof(struct sockaddr_storage))
- return -EINVAL;
+ kmsg->msg_namelen = sizeof(struct sockaddr_storage);
kmsg->msg_name = compat_ptr(tmp1);
kmsg->msg_iov = compat_ptr(tmp2);
kmsg->msg_control = compat_ptr(tmp3);
if (x) {
int ret;
__u8 *eth;
+ struct iphdr *iph;
+
nhead = x->props.header_len - skb_headroom(skb);
if (nhead > 0) {
ret = pskb_expand_head(skb, nhead, 0, GFP_ATOMIC);
eth = (__u8 *) skb_push(skb, ETH_HLEN);
memcpy(eth, pkt_dev->hh, 12);
*(u16 *) ð[12] = protocol;
+
+ /* Update IPv4 header len as well as checksum value */
+ iph = ip_hdr(skb);
+ iph->tot_len = htons(skb->len - ETH_HLEN);
+ ip_send_check(iph);
}
}
return 1;
static bool seq_nr_after(u16 a, u16 b)
{
/* Remove inconsistency where
- * seq_nr_after(a, b) == seq_nr_before(a, b) */
+ * seq_nr_after(a, b) == seq_nr_before(a, b)
+ */
if ((int) b - a == 32768)
return false;
[IFLA_HSR_SLAVE1] = { .type = NLA_U32 },
[IFLA_HSR_SLAVE2] = { .type = NLA_U32 },
[IFLA_HSR_MULTICAST_SPEC] = { .type = NLA_U8 },
+ [IFLA_HSR_SUPERVISION_ADDR] = { .type = NLA_BINARY, .len = ETH_ALEN },
+ [IFLA_HSR_SEQ_NR] = { .type = NLA_U16 },
};
return hsr_dev_finalize(dev, link, multicast_spec);
}
+static int hsr_fill_info(struct sk_buff *skb, const struct net_device *dev)
+{
+ struct hsr_priv *hsr_priv;
+
+ hsr_priv = netdev_priv(dev);
+
+ if (hsr_priv->slave[0])
+ if (nla_put_u32(skb, IFLA_HSR_SLAVE1, hsr_priv->slave[0]->ifindex))
+ goto nla_put_failure;
+
+ if (hsr_priv->slave[1])
+ if (nla_put_u32(skb, IFLA_HSR_SLAVE2, hsr_priv->slave[1]->ifindex))
+ goto nla_put_failure;
+
+ if (nla_put(skb, IFLA_HSR_SUPERVISION_ADDR, ETH_ALEN,
+ hsr_priv->sup_multicast_addr) ||
+ nla_put_u16(skb, IFLA_HSR_SEQ_NR, hsr_priv->sequence_nr))
+ goto nla_put_failure;
+
+ return 0;
+
+nla_put_failure:
+ return -EMSGSIZE;
+}
+
static struct rtnl_link_ops hsr_link_ops __read_mostly = {
.kind = "hsr",
.maxtype = IFLA_HSR_MAX,
.priv_size = sizeof(struct hsr_priv),
.setup = hsr_dev_setup,
.newlink = hsr_newlink,
+ .fill_info = hsr_fill_info,
};
/*
* Handle MSG_ERRQUEUE
*/
-int ip_recv_error(struct sock *sk, struct msghdr *msg, int len)
+int ip_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
{
struct sock_exterr_skb *serr;
struct sk_buff *skb, *skb2;
serr->addr_offset);
sin->sin_port = serr->port;
memset(&sin->sin_zero, 0, sizeof(sin->sin_zero));
+ *addr_len = sizeof(*sin);
}
memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err));
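
The new addr_len out-parameter closes a hole: MSG_ERRQUEUE receives previously left msg_namelen untouched, so recvmsg() could hand userspace an uninitialized length and, with it, uninitialized kernel stack bytes. Every *_recv_error() and recv_rxpmtu() variant in this series now reports how much of msg_name it actually filled; the caller side is visible in the udp_recvmsg() hunk later on:

	if (flags & MSG_ERRQUEUE)
		return ip_recv_error(sk, msg, len, addr_len);	/* sets *addr_len */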
err = PTR_ERR(rt);
rt = NULL;
if (err == -ENETUNREACH)
- IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
goto out;
}
if (flags & MSG_ERRQUEUE) {
if (family == AF_INET) {
- return ip_recv_error(sk, msg, len);
+ return ip_recv_error(sk, msg, len, addr_len);
#if IS_ENABLED(CONFIG_IPV6)
} else if (family == AF_INET6) {
- return pingv6_ops.ipv6_recv_error(sk, msg, len);
+ return pingv6_ops.ipv6_recv_error(sk, msg, len,
+ addr_len);
#endif
}
}
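The common thread in these signature changes: whichever handler fills msg->msg_name must also report how many bytes it wrote, so recvmsg() never copies an uninitialized msg_namelen back to userspace. A minimal sketch of the convention (names illustrative):

static int sketch_recv_error(struct sock *sk, struct msghdr *msg,
			     int len, int *addr_len)
{
	struct sockaddr_in *sin = (struct sockaddr_in *)msg->msg_name;

	if (sin) {
		/* ... fill sin_family, sin_addr, sin_port ... */
		*addr_len = sizeof(*sin);	/* report what was written */
	}
	return 0;
}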
const struct net_protocol __rcu *inet_protos[MAX_INET_PROTOS] __read_mostly;
const struct net_offload __rcu *inet_offloads[MAX_INET_PROTOS] __read_mostly;
-/*
- * Add a protocol handler to the hash tables
- */
-
int inet_add_protocol(const struct net_protocol *prot, unsigned char protocol)
{
if (!prot->netns_ok) {
}
EXPORT_SYMBOL(inet_add_offload);
-/*
- * Remove a protocol from the hash tables.
- */
-
int inet_del_protocol(const struct net_protocol *prot, unsigned char protocol)
{
int ret;
goto out;
if (flags & MSG_ERRQUEUE) {
- err = ip_recv_error(sk, msg, len);
+ err = ip_recv_error(sk, msg, len, addr_len);
goto out;
}
if (IS_ERR(rt)) {
err = PTR_ERR(rt);
if (err == -ENETUNREACH)
- IP_INC_STATS_BH(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
return err;
}
static int tcp_update_limit(struct mem_cgroup *memcg, u64 val)
{
struct cg_proto *cg_proto;
- u64 old_lim;
int i;
int ret;
if (val > RES_COUNTER_MAX)
val = RES_COUNTER_MAX;
- old_lim = res_counter_read_u64(&cg_proto->memory_allocated, RES_LIMIT);
ret = res_counter_set_limit(&cg_proto->memory_allocated, val);
if (ret)
return ret;
{
const struct iphdr *iph = skb_gro_network_header(skb);
__wsum wsum;
- __sum16 sum;
+
+ /* Don't bother verifying checksum if we're going to flush anyway. */
+ if (NAPI_GRO_CB(skb)->flush)
+ goto skip_csum;
+
+ wsum = skb->csum;
switch (skb->ip_summed) {
+ case CHECKSUM_NONE:
+ wsum = skb_checksum(skb, skb_gro_offset(skb), skb_gro_len(skb),
+ 0);
+
+ /* fall through */
+
case CHECKSUM_COMPLETE:
if (!tcp_v4_check(skb_gro_len(skb), iph->saddr, iph->daddr,
- skb->csum)) {
+ wsum)) {
skb->ip_summed = CHECKSUM_UNNECESSARY;
break;
}
-flush:
+
NAPI_GRO_CB(skb)->flush = 1;
return NULL;
-
- case CHECKSUM_NONE:
- wsum = csum_tcpudp_nofold(iph->saddr, iph->daddr,
- skb_gro_len(skb), IPPROTO_TCP, 0);
- sum = csum_fold(skb_checksum(skb,
- skb_gro_offset(skb),
- skb_gro_len(skb),
- wsum));
- if (sum)
- goto flush;
-
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- break;
}
+skip_csum:
return tcp_gro_receive(head, skb);
}
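Both branches converge on the same invariant: wsum must cover the TCP segment so that tcp_v4_check(), which adds the pseudo-header, folds to zero exactly when the checksum is valid. CHECKSUM_COMPLETE already has such a sum in skb->csum; CHECKSUM_NONE now derives one with skb_checksum() and falls through, rather than duplicating the validation in a second branch.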
err = PTR_ERR(rt);
rt = NULL;
if (err == -ENETUNREACH)
- IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
goto out;
}
struct udp_sock *up = udp_sk(sk);
int ret;
+ if (flags & MSG_SENDPAGE_NOTLAST)
+ flags |= MSG_MORE;
+
if (!up->pending) {
struct msghdr msg = { .msg_flags = flags|MSG_MORE };
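(Context for the MSG_SENDPAGE_NOTLAST mapping: sendfile() on a UDP socket issues one ->sendpage() call per page and flags every call but the last with MSG_SENDPAGE_NOTLAST; translating it to MSG_MORE lets the pages accumulate in the one pending datagram instead of flushing a datagram per page.)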
bool slow;
if (flags & MSG_ERRQUEUE)
- return ip_recv_error(sk, msg, len);
+ return ip_recv_error(sk, msg, len, addr_len);
try_again:
skb = __skb_recv_datagram(sk, flags | (noblock ? MSG_DONTWAIT : 0),
/*
* Handle MSG_ERRQUEUE
*/
-int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len)
+int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
{
struct ipv6_pinfo *np = inet6_sk(sk);
struct sock_exterr_skb *serr;
&sin->sin6_addr);
sin->sin6_scope_id = 0;
}
+ *addr_len = sizeof(*sin);
}
memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err));
if (serr->ee.ee_origin != SO_EE_ORIGIN_LOCAL) {
sin->sin6_family = AF_INET6;
sin->sin6_flowinfo = 0;
+ sin->sin6_port = 0;
if (skb->protocol == htons(ETH_P_IPV6)) {
sin->sin6_addr = ipv6_hdr(skb)->saddr;
if (np->rxopt.all)
/*
* Handle IPV6_RECVPATHMTU
*/
-int ipv6_recv_rxpmtu(struct sock *sk, struct msghdr *msg, int len)
+int ipv6_recv_rxpmtu(struct sock *sk, struct msghdr *msg, int len,
+ int *addr_len)
{
struct ipv6_pinfo *np = inet6_sk(sk);
struct sk_buff *skb;
sin->sin6_port = 0;
sin->sin6_scope_id = mtu_info.ip6m_addr.sin6_scope_id;
sin->sin6_addr = mtu_info.ip6m_addr.sin6_addr;
+ *addr_len = sizeof(*sin);
}
put_cmsg(msg, SOL_IPV6, IPV6_PATHMTU, sizeof(mtu_info), &mtu_info);
}
rcu_read_unlock_bh();
- IP6_INC_STATS_BH(dev_net(dst->dev),
- ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
+ IP6_INC_STATS(dev_net(dst->dev),
+ ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
kfree_skb(skb);
return -EINVAL;
}
/* Compatibility glue so we can support IPv6 when it's compiled as a module */
-static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len)
+static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
+ int *addr_len)
{
return -EAFNOSUPPORT;
}
}
EXPORT_SYMBOL(inet6_add_protocol);
-/*
- * Remove a protocol from the hash tables.
- */
-
int inet6_del_protocol(const struct inet6_protocol *prot, unsigned char protocol)
{
int ret;
return -EOPNOTSUPP;
if (flags & MSG_ERRQUEUE)
- return ipv6_recv_error(sk, msg, len);
+ return ipv6_recv_error(sk, msg, len, addr_len);
if (np->rxpmtu && np->rxopt.bits.rxpmtu)
- return ipv6_recv_rxpmtu(sk, msg, len);
+ return ipv6_recv_rxpmtu(sk, msg, len, addr_len);
skb = skb_recv_datagram(sk, flags, noblock, &err);
if (!skb)
dev_put(dev);
}
+/* Generate icmpv6 with type/code ICMPV6_DEST_UNREACH/ICMPV6_ADDR_UNREACH
+ * if sufficient data bytes are available
+ */
+static int ipip6_err_gen_icmpv6_unreach(struct sk_buff *skb)
+{
+ const struct iphdr *iph = (const struct iphdr *) skb->data;
+ struct rt6_info *rt;
+ struct sk_buff *skb2;
+
+ if (!pskb_may_pull(skb, iph->ihl * 4 + sizeof(struct ipv6hdr) + 8))
+ return 1;
+
+ skb2 = skb_clone(skb, GFP_ATOMIC);
+
+ if (!skb2)
+ return 1;
+
+ skb_dst_drop(skb2);
+ skb_pull(skb2, iph->ihl * 4);
+ skb_reset_network_header(skb2);
+
+ rt = rt6_lookup(dev_net(skb->dev), &ipv6_hdr(skb2)->saddr, NULL, 0, 0);
+
+ if (rt && rt->dst.dev)
+ skb2->dev = rt->dst.dev;
+
+ icmpv6_send(skb2, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0);
+
+ if (rt)
+ ip6_rt_put(rt);
+
+ kfree_skb(skb2);
+
+ return 0;
+}
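The pskb_may_pull() bound is the minimum useful quote: the outer IPv4 header (iph->ihl * 4) plus a complete inner IPv6 header plus 8 bytes of its payload, which is what the generated ICMPv6 error needs to carry for the receiver to demultiplex it.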
static int ipip6_err(struct sk_buff *skb, u32 info)
{
-
-/* All the routers (except for Linux) return only
- 8 bytes of packet payload. It means, that precise relaying of
- ICMP in the real Internet is absolutely infeasible.
- */
const struct iphdr *iph = (const struct iphdr *)skb->data;
const int type = icmp_hdr(skb)->type;
const int code = icmp_hdr(skb)->code;
case ICMP_DEST_UNREACH:
switch (code) {
case ICMP_SR_FAILED:
- case ICMP_PORT_UNREACH:
/* Impossible event. */
return 0;
default:
goto out;
err = 0;
+ if (!ipip6_err_gen_icmpv6_unreach(skb))
+ goto out;
+
if (t->parms.iph.ttl == 0 && type == ICMP_TIME_EXCEEDED)
goto out;
if (!new_skb) {
ip_rt_put(rt);
dev->stats.tx_dropped++;
- dev_kfree_skb(skb);
+ kfree_skb(skb);
return NETDEV_TX_OK;
}
if (skb->sk)
tx_error_icmp:
dst_link_failure(skb);
tx_error:
- dev_kfree_skb(skb);
+ kfree_skb(skb);
out:
dev->stats.tx_errors++;
return NETDEV_TX_OK;
tx_err:
dev->stats.tx_errors++;
- dev_kfree_skb(skb);
+ kfree_skb(skb);
return NETDEV_TX_OK;
}
{
const struct ipv6hdr *iph = skb_gro_network_header(skb);
__wsum wsum;
- __sum16 sum;
+
+ /* Don't bother verifying checksum if we're going to flush anyway. */
+ if (NAPI_GRO_CB(skb)->flush)
+ goto skip_csum;
+
+ wsum = skb->csum;
switch (skb->ip_summed) {
+ case CHECKSUM_NONE:
+ wsum = skb_checksum(skb, skb_gro_offset(skb), skb_gro_len(skb),
+ wsum);
+
+ /* fall through */
+
case CHECKSUM_COMPLETE:
if (!tcp_v6_check(skb_gro_len(skb), &iph->saddr, &iph->daddr,
- skb->csum)) {
+ wsum)) {
skb->ip_summed = CHECKSUM_UNNECESSARY;
break;
}
-flush:
+
NAPI_GRO_CB(skb)->flush = 1;
return NULL;
-
- case CHECKSUM_NONE:
- wsum = ~csum_unfold(csum_ipv6_magic(&iph->saddr, &iph->daddr,
- skb_gro_len(skb),
- IPPROTO_TCP, 0));
- sum = csum_fold(skb_checksum(skb,
- skb_gro_offset(skb),
- skb_gro_len(skb),
- wsum));
- if (sum)
- goto flush;
-
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- break;
}
+skip_csum:
return tcp_gro_receive(head, skb);
}
bool slow;
if (flags & MSG_ERRQUEUE)
- return ipv6_recv_error(sk, msg, len);
+ return ipv6_recv_error(sk, msg, len, addr_len);
if (np->rxpmtu && np->rxopt.bits.rxpmtu)
- return ipv6_recv_rxpmtu(sk, msg, len);
+ return ipv6_recv_rxpmtu(sk, msg, len, addr_len);
try_again:
skb = __skb_recv_datagram(sk, flags | (noblock ? MSG_DONTWAIT : 0),
*addr_len = sizeof(*lsa);
if (flags & MSG_ERRQUEUE)
- return ipv6_recv_error(sk, msg, len);
+ return ipv6_recv_error(sk, msg, len, addr_len);
skb = skb_recv_datagram(sk, flags, noblock, &err);
if (!skb)
* Bit 17 is marked as already used since the VFS quota code
	 * also abused this API and relied on family == group ID; we
* cater to that by giving it a static family and group ID.
+ * Bit 18 is marked as already used since the PMCRAID driver
+ * did the same thing as the VFS quota code (maybe copied?)
*/
static unsigned long mc_group_start = 0x3 | BIT(GENL_ID_CTRL) |
- BIT(GENL_ID_VFS_DQUOT);
+ BIT(GENL_ID_VFS_DQUOT) |
+ BIT(GENL_ID_PMCRAID);
static unsigned long *mc_groups = &mc_group_start;
static unsigned long mc_groups_longs = 1;
for (i = 0; i <= GENL_MAX_ID - GENL_MIN_ID; i++) {
if (id_gen_idx != GENL_ID_VFS_DQUOT &&
+ id_gen_idx != GENL_ID_PMCRAID &&
!genl_family_find_byid(id_gen_idx))
return id_gen_idx;
if (++id_gen_idx > GENL_MAX_ID)
{
int first_id;
int n_groups = family->n_mcgrps;
- int err, i;
+ int err = 0, i;
bool groups_allocated = false;
if (!n_groups)
} else if (strcmp(family->name, "NET_DM") == 0) {
first_id = 1;
BUG_ON(n_groups != 1);
- } else if (strcmp(family->name, "VFS_DQUOT") == 0) {
+ } else if (family->id == GENL_ID_VFS_DQUOT) {
first_id = GENL_ID_VFS_DQUOT;
BUG_ON(n_groups != 1);
+ } else if (family->id == GENL_ID_PMCRAID) {
+ first_id = GENL_ID_PMCRAID;
+ BUG_ON(n_groups != 1);
} else {
groups_allocated = true;
err = genl_allocate_reserve_groups(n_groups, &first_id);
pkc = tx_ring ? &po->tx_ring.prb_bdqc : &po->rx_ring.prb_bdqc;
- spin_lock(&rb_queue->lock);
+ spin_lock_bh(&rb_queue->lock);
pkc->delete_blk_timer = 1;
- spin_unlock(&rb_queue->lock);
+ spin_unlock_bh(&rb_queue->lock);
prb_del_retire_blk_timer(pkc);
}
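(The switch to the _bh lock variants matches the retire-block timer running from softirq context: holding rb_queue->lock with bottom halves enabled could deadlock against the timer firing on the same CPU. Rationale inferred from the code; the hunk itself carries no changelog.)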
if (rnd < clg->a4) {
clg->state = 4;
return true;
- } else if (clg->a4 < rnd && rnd < clg->a1) {
+ } else if (clg->a4 < rnd && rnd < clg->a1 + clg->a4) {
clg->state = 3;
return true;
- } else if (clg->a1 < rnd)
+ } else if (clg->a1 + clg->a4 < rnd)
clg->state = 1;
break;
clg->state = 2;
if (net_random() < clg->a4)
return true;
+ break;
case 2:
if (net_random() < clg->a2)
clg->state = 1;
- if (clg->a3 > net_random())
+ if (net_random() > clg->a3)
return true;
}
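The 4-state loss model draws one uniform random value per packet and carves it into consecutive bands. The corrected thresholds make the state-1 bands line up end to end (a4, then a1, then the remainder) instead of overlapping. A self-contained sketch of the corrected banding, with threshold names as in the code above:

static int loss_4state_state1_sketch(u32 rnd, u32 a1, u32 a4)
{
	if (rnd < a4)
		return 4;	/* independent loss */
	if (rnd < a1 + a4)
		return 3;	/* burst loss */
	return 1;		/* stay in the good state, no loss */
}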
#include <net/netlink.h>
#include <net/sch_generic.h>
#include <net/pkt_sched.h>
+#include <net/tcp.h>
/* Simple Token Bucket Filter.
};
+/*
+ * Return length of individual segments of a gso packet,
+ * including all headers (MAC, IP, TCP/UDP)
+ */
+static unsigned int skb_gso_seglen(const struct sk_buff *skb)
+{
+ unsigned int hdr_len = skb_transport_header(skb) - skb_mac_header(skb);
+ const struct skb_shared_info *shinfo = skb_shinfo(skb);
+
+ if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)))
+ hdr_len += tcp_hdrlen(skb);
+ else
+ hdr_len += sizeof(struct udphdr);
+ return hdr_len + shinfo->gso_size;
+}
+
/* GSO packet is too big, segment it so that tbf can transmit
* each segment in time
*/
while (segs) {
nskb = segs->next;
segs->next = NULL;
- if (likely(segs->len <= q->max_size)) {
- qdisc_skb_cb(segs)->pkt_len = segs->len;
- ret = qdisc_enqueue(segs, q->qdisc);
- } else {
- ret = qdisc_reshape_fail(skb, sch);
- }
+ qdisc_skb_cb(segs)->pkt_len = segs->len;
+ ret = qdisc_enqueue(segs, q->qdisc);
if (ret != NET_XMIT_SUCCESS) {
if (net_xmit_drop_count(ret))
sch->qstats.drops++;
int ret;
if (qdisc_pkt_len(skb) > q->max_size) {
- if (skb_is_gso(skb))
+ if (skb_is_gso(skb) && skb_gso_seglen(skb) <= q->max_size)
return tbf_segment(skb, sch);
return qdisc_reshape_fail(skb, sch);
}
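A worked example with illustrative numbers: gso_size 1448 plus 14 + 20 + 20 bytes of MAC/IP/TCP headers gives skb_gso_seglen() = 1502. If the configured burst admits max_size >= 1502, the super-packet is split and every segment enqueued unconditionally (the per-segment length check became redundant); if even one segment would exceed max_size, segmentation cannot help and the packet is dropped through qdisc_reshape_fail() as before.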
if (max_size < 0)
goto done;
+ if (max_size < psched_mtu(qdisc_dev(sch)))
+ pr_warn_ratelimited("sch_tbf: burst %u is lower than device %s mtu (%u) !\n",
+ max_size, qdisc_dev(sch)->name,
+ psched_mtu(qdisc_dev(sch)));
+
if (q->qdisc != &noop_qdisc) {
err = fifo_set_limit(q->qdisc, qopt->limit);
if (err)
* for a given destination transport address.
*/
- if (!tp->rto_pending) {
+ if (!chunk->resent && !tp->rto_pending) {
chunk->rtt_in_progress = 1;
tp->rto_pending = 1;
}
+
has_data = 1;
}
transport->rto_pending = 0;
}
+ chunk->resent = 1;
+
/* Move the chunk to the retransmit queue. The chunks
* on the retransmit queue are always kept in order.
*/
* instance).
*/
if (!tchunk->tsn_gap_acked &&
+ !tchunk->resent &&
tchunk->rtt_in_progress) {
tchunk->rtt_in_progress = 0;
rtt = jiffies - tchunk->sent_at;
*/
if (!tchunk->tsn_gap_acked) {
tchunk->tsn_gap_acked = 1;
- *highest_new_tsn_in_sack = tsn;
+ if (TSN_lt(*highest_new_tsn_in_sack, tsn))
+ *highest_new_tsn_in_sack = tsn;
bytes_acked += sctp_data_size(tchunk);
if (!tchunk->transport)
migrate_bytes += sctp_data_size(tchunk);
if (copy_from_user(kmsg, umsg, sizeof(struct msghdr)))
return -EFAULT;
if (kmsg->msg_namelen > sizeof(struct sockaddr_storage))
- return -EINVAL;
+ kmsg->msg_namelen = sizeof(struct sockaddr_storage);
return 0;
}
static int
gss_refresh_null(struct rpc_task *task)
{
- return -EACCES;
+ return 0;
}
static __be32 *
} elsif ($arch eq "blackfin") {
$mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s__mcount\$";
$mcount_adjust = -4;
-} elsif ($arch eq "tilegx") {
+} elsif ($arch eq "tilegx" || $arch eq "tile") {
+ # Default to the newer TILE-Gx architecture if only "tile" is given.
$mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s__mcount\$";
$type = ".quad";
$alignment = 8;
#include <tools/be_byteshift.h>
#include <tools/le_byteshift.h>
+#ifndef EM_ARCOMPACT
+#define EM_ARCOMPACT 93
+#endif
+
#ifndef EM_AARCH64
#define EM_AARCH64 183
#endif
case EM_S390:
custom_sort = sort_relative_table;
break;
+ case EM_ARCOMPACT:
case EM_ARM:
case EM_AARCH64:
case EM_MIPS:
int xattr_len, struct ima_template_entry **entry);
int ima_store_template(struct ima_template_entry *entry, int violation,
struct inode *inode, const unsigned char *filename);
+void ima_free_template_entry(struct ima_template_entry *entry);
const char *ima_d_path(struct path *path, char **pathbuf);
/* rbtree tree calls to lookup, insert, delete
#include <crypto/hash_info.h>
#include "ima.h"
+/*
+ * ima_free_template_entry - free an existing template entry
+ */
+void ima_free_template_entry(struct ima_template_entry *entry)
+{
+ int i;
+
+ for (i = 0; i < entry->template_desc->num_fields; i++)
+ kfree(entry->template_data[i].data);
+
+ kfree(entry);
+}
+
/*
* ima_alloc_init_template - create and initialize a new template entry
*/
if (!*entry)
return -ENOMEM;
+ (*entry)->template_desc = template_desc;
for (i = 0; i < template_desc->num_fields; i++) {
struct ima_template_field *field = template_desc->fields[i];
u32 len;
(*entry)->template_data_len += sizeof(len);
(*entry)->template_data_len += len;
}
- (*entry)->template_desc = template_desc;
return 0;
out:
- kfree(*entry);
+ ima_free_template_entry(*entry);
*entry = NULL;
return result;
}
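Note the ordering change above: ima_free_template_entry() walks entry->template_desc->num_fields, so the descriptor must be assigned before the field loop, whose first error path can now reach the new helper.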
}
result = ima_store_template(entry, violation, inode, filename);
if (result < 0)
- kfree(entry);
+ ima_free_template_entry(entry);
err_out:
integrity_audit_msg(AUDIT_INTEGRITY_PCR, inode, filename,
op, cause, result, 0);
if (!result || result == -EEXIST)
iint->flags |= IMA_MEASURED;
if (result < 0)
- kfree(entry);
+ ima_free_template_entry(entry);
}
void ima_audit_measurement(struct integrity_iint_cache *iint,
result = ima_calc_boot_aggregate(&hash.hdr);
if (result < 0) {
audit_cause = "hashing_error";
- kfree(entry);
goto err_out;
}
}
result = ima_store_template(entry, violation, NULL,
boot_aggregate_name);
if (result < 0)
- kfree(entry);
+ ima_free_template_entry(entry);
return;
err_out:
integrity_audit_msg(AUDIT_INTEGRITY_PCR, NULL, boot_aggregate_name, op,
struct ima_template_field ***fields,
int *num_fields)
{
- char *c, *template_fmt_copy;
+ char *c, *template_fmt_copy, *template_fmt_ptr;
int template_num_fields = template_fmt_size(template_fmt);
int i, result = 0;
result = -ENOMEM;
goto out;
}
- for (i = 0; (c = strsep(&template_fmt_copy, "|")) != NULL &&
+
+ template_fmt_ptr = template_fmt_copy;
+ for (i = 0; (c = strsep(&template_fmt_ptr, "|")) != NULL &&
i < template_num_fields; i++) {
struct ima_template_field *f = lookup_template_field(c);
*
* TODO: Encrypt the stored data with a temporary key.
*/
- file = shmem_file_setup("", datalen, 0);
+ file = shmem_kernel_file_setup("", datalen, 0);
if (IS_ERR(file)) {
ret = PTR_ERR(file);
goto err_quota;
}
/* allocate and initialise the key and its description */
- key = kmem_cache_alloc(key_jar, GFP_KERNEL);
+ key = kmem_cache_zalloc(key_jar, GFP_KERNEL);
if (!key)
goto no_memory_2;
key->uid = uid;
key->gid = gid;
key->perm = perm;
- key->flags = 0;
- key->expiry = 0;
- key->payload.data = NULL;
- key->security = NULL;
if (!(flags & KEY_ALLOC_NOT_IN_QUOTA))
key->flags |= 1 << KEY_FLAG_IN_QUOTA;
if (flags & KEY_ALLOC_TRUSTED)
key->flags |= 1 << KEY_FLAG_TRUSTED;
- memset(&key->type_data, 0, sizeof(key->type_data));
-
#ifdef KEY_DEBUGGING
key->magic = KEY_DEBUG_MAGIC;
#endif
static unsigned long hash_key_type_and_desc(const struct keyring_index_key *index_key)
{
const unsigned level_shift = ASSOC_ARRAY_LEVEL_STEP;
- const unsigned long level_mask = ASSOC_ARRAY_LEVEL_STEP_MASK;
+ const unsigned long fan_mask = ASSOC_ARRAY_FAN_MASK;
const char *description = index_key->description;
unsigned long hash, type;
u32 piece;
* ordinary keys by making sure the lowest level segment in the hash is
* zero for keyrings and non-zero otherwise.
*/
- if (index_key->type != &key_type_keyring && (hash & level_mask) == 0)
+ if (index_key->type != &key_type_keyring && (hash & fan_mask) == 0)
return hash | (hash >> (ASSOC_ARRAY_KEY_CHUNK_SIZE - level_shift)) | 1;
- if (index_key->type == &key_type_keyring && (hash & level_mask) != 0)
- return (hash + (hash << level_shift)) & ~level_mask;
+ if (index_key->type == &key_type_keyring && (hash & fan_mask) != 0)
+ return (hash + (hash << level_shift)) & ~fan_mask;
return hash;
}
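ASSOC_ARRAY_LEVEL_STEP_MASK masks a shift step count, not a slot index; ASSOC_ARRAY_FAN_MASK covers the hash bits that actually select a slot within a node. Using the fan mask means the keyring/non-keyring distinction occupies a node's full fan-out segment, as the surrounding comment intends.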
* Compare the index keys of a pair of objects and determine the bit position
* at which they differ - if they differ.
*/
-static int keyring_diff_objects(const void *_a, const void *_b)
+static int keyring_diff_objects(const void *object, const void *data)
{
- const struct key *key_a = keyring_ptr_to_key(_a);
- const struct key *key_b = keyring_ptr_to_key(_b);
+ const struct key *key_a = keyring_ptr_to_key(object);
const struct keyring_index_key *a = &key_a->index_key;
- const struct keyring_index_key *b = &key_b->index_key;
+ const struct keyring_index_key *b = data;
unsigned long seg_a, seg_b;
int level, i;
smp_read_barrier_depends();
ptr = ACCESS_ONCE(shortcut->next_node);
BUG_ON(!assoc_array_ptr_is_node(ptr));
- node = assoc_array_ptr_to_node(ptr);
}
+ node = assoc_array_ptr_to_node(ptr);
begin_node:
kdebug("begin_node");
if (new_rate < 0)
break;
/* make sure we are below the ABDAC clock */
- if (new_rate <= clk_get_rate(dac->pclk)) {
+ if (index < MAX_NUM_RATES &&
+ new_rate <= clk_get_rate(dac->pclk)) {
dac->rates[index] = new_rate / 256;
index++;
}
if (dice_proc_read_mem(dice, &tx_rx_header, sections[2], 2) < 0)
return;
- quadlets = min_t(u32, tx_rx_header.size, sizeof(buf.tx));
+ quadlets = min_t(u32, tx_rx_header.size, sizeof(buf.tx) / 4);
for (stream = 0; stream < tx_rx_header.number; ++stream) {
if (dice_proc_read_mem(dice, &buf.tx, sections[2] + 2 +
stream * tx_rx_header.size,
if (dice_proc_read_mem(dice, &tx_rx_header, sections[4], 2) < 0)
return;
- quadlets = min_t(u32, tx_rx_header.size, sizeof(buf.rx));
+ quadlets = min_t(u32, tx_rx_header.size, sizeof(buf.rx) / 4);
for (stream = 0; stream < tx_rx_header.number; ++stream) {
if (dice_proc_read_mem(dice, &buf.rx, sections[4] + 2 +
stream * tx_rx_header.size,
memset(path, 0, sizeof(*path));
}
+/* return a DAC if paired to the given pin by codec driver */
+static hda_nid_t get_preferred_dac(struct hda_codec *codec, hda_nid_t pin)
+{
+ struct hda_gen_spec *spec = codec->spec;
+ const hda_nid_t *list = spec->preferred_dacs;
+
+ if (!list)
+ return 0;
+ for (; *list; list += 2)
+ if (*list == pin)
+ return list[1];
+ return 0;
+}
+
/* look for an empty DAC slot */
static hda_nid_t look_for_dac(struct hda_codec *codec, hda_nid_t pin,
bool is_digital)
continue;
}
- dacs[i] = look_for_dac(codec, pin, false);
+ dacs[i] = get_preferred_dac(codec, pin);
+ if (dacs[i]) {
+ if (is_dac_already_used(codec, dacs[i]))
+ badness += bad->shared_primary;
+ }
+
+ if (!dacs[i])
+ dacs[i] = look_for_dac(codec, pin, false);
if (!dacs[i] && !i) {
/* try to steal the DAC of surrounds for the front */
for (j = 1; j < num_outs; j++) {
return AC_PWRST_D3;
}
+/* mute all aamix inputs initially; parse up to the first leaves */
+static void mute_all_mixer_nid(struct hda_codec *codec, hda_nid_t mix)
+{
+ int i, nums;
+ const hda_nid_t *conn;
+ bool has_amp;
+
+ nums = snd_hda_get_conn_list(codec, mix, &conn);
+ has_amp = nid_has_mute(codec, mix, HDA_INPUT);
+ for (i = 0; i < nums; i++) {
+ if (has_amp)
+ snd_hda_codec_amp_stereo(codec, mix,
+ HDA_INPUT, i,
+ 0xff, HDA_AMP_MUTE);
+ else if (nid_has_volume(codec, conn[i], HDA_OUTPUT))
+ snd_hda_codec_amp_stereo(codec, conn[i],
+ HDA_OUTPUT, 0,
+ 0xff, HDA_AMP_MUTE);
+ }
+}
/*
* Parse the given BIOS configuration and set up the hda_gen_spec
}
}
+	/* mute all aamix inputs initially */
+ if (spec->mixer_nid)
+ mute_all_mixer_nid(codec, spec->mixer_nid);
+
dig_only:
parse_digital(codec);
const struct badness_table *main_out_badness;
const struct badness_table *extra_out_badness;
+ /* preferred pin/DAC pairs; an array of paired NIDs */
+ const hda_nid_t *preferred_dacs;
+
/* loopback mixing mode */
bool aamix_mode;
}
dev++;
- complete_all(&chip->probe_wait);
+ if (chip->disabled)
+ complete_all(&chip->probe_wait);
return 0;
out_free:
if ((chip->driver_caps & AZX_DCAPS_PM_RUNTIME) || chip->use_vga_switcheroo)
pm_runtime_put_noidle(&pci->dev);
- return 0;
-
out_free:
- chip->init_failed = 1;
+ if (err < 0)
+ chip->init_failed = 1;
+ complete_all(&chip->probe_wait);
return err;
}
if (!spec->eapd_nid)
return;
+ if (codec->inv_eapd)
+ enabled = !enabled;
snd_hda_codec_update_cache(codec, spec->eapd_nid, 0,
AC_VERB_SET_EAPD_BTLENABLE,
enabled ? 0x02 : 0x00);
{
int err;
struct ad198x_spec *spec;
+ static hda_nid_t preferred_pairs[] = {
+ 0x1a, 0x03,
+ 0x1b, 0x03,
+ 0x1c, 0x04,
+ 0x1d, 0x05,
+ 0x1e, 0x03,
+ 0
+ };
err = alloc_ad_spec(codec);
if (err < 0)
* So, let's disable the shared stream.
*/
spec->gen.multiout.no_share_stream = 1;
+ /* give fixed DAC/pin pairs */
+ spec->gen.preferred_dacs = preferred_pairs;
+
+ /* AD1986A can't manage the dynamic pin on/off smoothly */
+ spec->gen.auto_mute_via_amp = 1;
snd_hda_pick_fixup(codec, ad1986a_fixup_models, ad1986a_fixup_tbl,
ad1986a_fixups);
switch (action) {
case HDA_FIXUP_ACT_PRE_PROBE:
spec->gen.vmaster_mute.hook = ad1884_vmaster_hp_gpio_hook;
+ spec->gen.own_eapd_ctl = 1;
snd_hda_sequence_write_cache(codec, gpio_init_verbs);
break;
case HDA_FIXUP_ACT_PROBE:
SND_PCI_QUIRK(0x1028, 0x0401, "Dell Vostro 1014", CXT5066_DELL_VOSTRO),
SND_PCI_QUIRK(0x1028, 0x0408, "Dell Inspiron One 19T", CXT5066_IDEAPAD),
SND_PCI_QUIRK(0x1028, 0x050f, "Dell Inspiron", CXT5066_IDEAPAD),
- SND_PCI_QUIRK(0x1028, 0x0510, "Dell Vostro", CXT5066_IDEAPAD),
SND_PCI_QUIRK(0x103c, 0x360b, "HP G60", CXT5066_HP_LAPTOP),
SND_PCI_QUIRK(0x1043, 0x13f3, "Asus A52J", CXT5066_ASUS),
SND_PCI_QUIRK(0x1043, 0x1643, "Asus K52JU", CXT5066_ASUS),
static bool hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll);
-static void hdmi_intrinsic_event(struct hda_codec *codec, unsigned int res)
+static void jack_callback(struct hda_codec *codec, struct hda_jack_tbl *jack)
{
struct hdmi_spec *spec = codec->spec;
+ int pin_idx = pin_nid_to_pin_index(spec, jack->nid);
+ if (pin_idx < 0)
+ return;
+
+ if (hdmi_present_sense(get_pin(spec, pin_idx), 1))
+ snd_hda_jack_report_sync(codec);
+}
+
+static void hdmi_intrinsic_event(struct hda_codec *codec, unsigned int res)
+{
int tag = res >> AC_UNSOL_RES_TAG_SHIFT;
- int pin_nid;
- int pin_idx;
struct hda_jack_tbl *jack;
int dev_entry = (res & AC_UNSOL_RES_DE) >> AC_UNSOL_RES_DE_SHIFT;
jack = snd_hda_jack_tbl_get_from_tag(codec, tag);
if (!jack)
return;
- pin_nid = jack->nid;
jack->jack_dirty = 1;
_snd_printd(SND_PR_VERBOSE,
"HDMI hot plug event: Codec=%d Pin=%d Device=%d Inactive=%d Presence_Detect=%d ELD_Valid=%d\n",
- codec->addr, pin_nid, dev_entry, !!(res & AC_UNSOL_RES_IA),
+ codec->addr, jack->nid, dev_entry, !!(res & AC_UNSOL_RES_IA),
!!(res & AC_UNSOL_RES_PD), !!(res & AC_UNSOL_RES_ELDV));
- pin_idx = pin_nid_to_pin_index(spec, pin_nid);
- if (pin_idx < 0)
- return;
-
- if (hdmi_present_sense(get_pin(spec, pin_idx), 1))
- snd_hda_jack_report_sync(codec);
+ jack_callback(codec, jack);
}
static void hdmi_non_intrinsic_event(struct hda_codec *codec, unsigned int res)
hda_nid_t pin_nid = per_pin->pin_nid;
hdmi_init_pin(codec, pin_nid);
- snd_hda_jack_detect_enable(codec, pin_nid, pin_nid);
+ snd_hda_jack_detect_enable_callback(codec, pin_nid, pin_nid,
+ codec->jackpoll_interval > 0 ? jack_callback : NULL);
}
return 0;
}
int err;
per_cvt = get_cvt(spec, 0);
- err = snd_hda_create_spdif_out_ctls(codec, per_cvt->cvt_nid,
- per_cvt->cvt_nid);
+ err = snd_hda_create_dig_out_ctls(codec, per_cvt->cvt_nid,
+ per_cvt->cvt_nid,
+ HDA_PCM_TYPE_HDMI);
if (err < 0)
return err;
return simple_hdmi_build_jack(codec, 0);
ALC889_FIXUP_DAC_ROUTE,
ALC889_FIXUP_MBP_VREF,
ALC889_FIXUP_IMAC91_VREF,
+ ALC889_FIXUP_MBA21_VREF,
ALC882_FIXUP_INV_DMIC,
ALC882_FIXUP_NO_PRIMARY_HP,
ALC887_FIXUP_ASUS_BASS,
}
}
-/* Set VREF on speaker pins on imac91 */
-static void alc889_fixup_imac91_vref(struct hda_codec *codec,
- const struct hda_fixup *fix, int action)
+static void alc889_fixup_mac_pins(struct hda_codec *codec,
+ const hda_nid_t *nids, int num_nids)
{
struct alc_spec *spec = codec->spec;
- static hda_nid_t nids[2] = { 0x18, 0x1a };
int i;
- if (action != HDA_FIXUP_ACT_INIT)
- return;
- for (i = 0; i < ARRAY_SIZE(nids); i++) {
+ for (i = 0; i < num_nids; i++) {
unsigned int val;
val = snd_hda_codec_get_pin_target(codec, nids[i]);
val |= AC_PINCTL_VREF_50;
spec->gen.keep_vref_in_automute = 1;
}
+/* Set VREF on speaker pins on imac91 */
+static void alc889_fixup_imac91_vref(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ static hda_nid_t nids[2] = { 0x18, 0x1a };
+
+ if (action == HDA_FIXUP_ACT_INIT)
+ alc889_fixup_mac_pins(codec, nids, ARRAY_SIZE(nids));
+}
+
+/* Set VREF on speaker pins on mba21 */
+static void alc889_fixup_mba21_vref(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ static hda_nid_t nids[2] = { 0x18, 0x19 };
+
+ if (action == HDA_FIXUP_ACT_INIT)
+ alc889_fixup_mac_pins(codec, nids, ARRAY_SIZE(nids));
+}
+
/* Don't take HP output as primary
* Strangely, the speaker output doesn't work on Vaio Z and some Vaio
* all-in-one desktop PCs (for example VGC-LN51JGB) through DAC 0x05
.chained = true,
.chain_id = ALC882_FIXUP_GPIO1,
},
+ [ALC889_FIXUP_MBA21_VREF] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc889_fixup_mba21_vref,
+ .chained = true,
+ .chain_id = ALC889_FIXUP_MBP_VREF,
+ },
[ALC882_FIXUP_INV_DMIC] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc_fixup_inv_dmic_0x12,
SND_PCI_QUIRK(0x106b, 0x3000, "iMac", ALC889_FIXUP_MBP_VREF),
SND_PCI_QUIRK(0x106b, 0x3200, "iMac 7,1 Aluminum", ALC882_FIXUP_EAPD),
SND_PCI_QUIRK(0x106b, 0x3400, "MacBookAir 1,1", ALC889_FIXUP_MBP_VREF),
- SND_PCI_QUIRK(0x106b, 0x3500, "MacBookAir 2,1", ALC889_FIXUP_MBP_VREF),
+ SND_PCI_QUIRK(0x106b, 0x3500, "MacBookAir 2,1", ALC889_FIXUP_MBA21_VREF),
SND_PCI_QUIRK(0x106b, 0x3600, "Macbook 3,1", ALC889_FIXUP_MBP_VREF),
SND_PCI_QUIRK(0x106b, 0x3800, "MacbookPro 4,1", ALC889_FIXUP_MBP_VREF),
SND_PCI_QUIRK(0x106b, 0x3e00, "iMac 24 Aluminum", ALC885_FIXUP_MACPRO_GPIO),
alc_write_coef_idx(codec, 0x18, 0x7388);
break;
case 0x10ec0668:
+ alc_write_coef_idx(codec, 0x11, 0x0001);
alc_write_coef_idx(codec, 0x15, 0x0d60);
alc_write_coef_idx(codec, 0xc3, 0x0000);
break;
alc_write_coef_idx(codec, 0x18, 0x7388);
break;
case 0x10ec0668:
+ alc_write_coef_idx(codec, 0x11, 0x0001);
alc_write_coef_idx(codec, 0x15, 0x0d50);
alc_write_coef_idx(codec, 0xc3, 0x0000);
break;
vref);
}
-static void alc283_chromebook_caps(struct hda_codec *codec)
-{
- snd_hda_override_wcaps(codec, 0x03, 0);
-}
-
static void alc283_fixup_chromebook(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
switch (action) {
case HDA_FIXUP_ACT_PRE_PROBE:
- alc283_chromebook_caps(codec);
+ snd_hda_override_wcaps(codec, 0x03, 0);
/* Disable AA-loopback as it causes white noise */
spec->gen.mixer_nid = 0;
+ break;
+ case HDA_FIXUP_ACT_INIT:
+ /* Enable Line1 input control by verb */
+ val = alc_read_coef_idx(codec, 0x1a);
+ alc_write_coef_idx(codec, 0x1a, val | (1 << 4));
+ break;
+ }
+}
+
+static void alc283_fixup_sense_combo_jack(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ struct alc_spec *spec = codec->spec;
+ int val;
+
+ switch (action) {
+ case HDA_FIXUP_ACT_PRE_PROBE:
spec->gen.hp_automute_hook = alc283_hp_automute_hook;
break;
case HDA_FIXUP_ACT_INIT:
/* Set to manual mode */
val = alc_read_coef_idx(codec, 0x06);
alc_write_coef_idx(codec, 0x06, val & ~0x000c);
- /* Enable Line1 input control by verb */
- val = alc_read_coef_idx(codec, 0x1a);
- alc_write_coef_idx(codec, 0x1a, val | (1 << 4));
break;
}
}
ALC269_FIXUP_ASUS_X101,
ALC271_FIXUP_AMIC_MIC2,
ALC271_FIXUP_HP_GATE_MIC_JACK,
+ ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572,
ALC269_FIXUP_ACER_AC700,
ALC269_FIXUP_LIMIT_INT_MIC_BOOST,
ALC269VB_FIXUP_ASUS_ZENBOOK,
ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED,
ALC269VB_FIXUP_ORDISSIMO_EVE2,
ALC283_FIXUP_CHROME_BOOK,
+ ALC283_FIXUP_SENSE_COMBO_JACK,
ALC282_FIXUP_ASUS_TX300,
ALC283_FIXUP_INT_MIC,
ALC290_FIXUP_MONO_SPEAKERS,
.chained = true,
.chain_id = ALC271_FIXUP_AMIC_MIC2,
},
+ [ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_limit_int_mic_boost,
+ .chained = true,
+ .chain_id = ALC271_FIXUP_HP_GATE_MIC_JACK,
+ },
[ALC269_FIXUP_ACER_AC700] = {
.type = HDA_FIXUP_PINS,
.v.pins = (const struct hda_pintbl[]) {
.type = HDA_FIXUP_FUNC,
.v.func = alc283_fixup_chromebook,
},
+ [ALC283_FIXUP_SENSE_COMBO_JACK] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc283_fixup_sense_combo_jack,
+ .chained = true,
+ .chain_id = ALC283_FIXUP_CHROME_BOOK,
+ },
[ALC282_FIXUP_ASUS_TX300] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc282_fixup_asus_tx300,
SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK_VENDOR(0x1025, "Acer Aspire", ALC271_FIXUP_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
SND_PCI_QUIRK(0x1028, 0x05bd, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x05be, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0614, "Dell Inspiron 3135", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_MONO_SPEAKERS),
SND_PCI_QUIRK(0x1028, 0x061f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS),
SND_PCI_QUIRK(0x1028, 0x063f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x15cc, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x15cd, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1973, "HP Pavilion", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x1983, "HP Pavilion", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x218b, "HP", ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED),
- SND_PCI_QUIRK(0x103c, 0x21ed, "HP Falco Chromebook", ALC283_FIXUP_CHROME_BOOK),
SND_PCI_QUIRK_VENDOR(0x103c, "HP", ALC269_FIXUP_HP_MUTE_LED),
SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
{.id = ALC269_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"},
{.id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "dell-headset-multi"},
{.id = ALC269_FIXUP_DELL2_MIC_NO_PRESENCE, .name = "dell-headset-dock"},
+ {.id = ALC283_FIXUP_CHROME_BOOK, .name = "alc283-chrome"},
+ {.id = ALC283_FIXUP_SENSE_COMBO_JACK, .name = "alc283-sense-combo"},
{}
};
ALC861_FIXUP_AMP_VREF_0F,
ALC861_FIXUP_NO_JACK_DETECT,
ALC861_FIXUP_ASUS_A6RP,
+ ALC660_FIXUP_ASUS_W7J,
};
/* On some laptops, VREF of pin 0x0f is abused for controlling the main amp */
.v.func = alc861_fixup_asus_amp_vref_0f,
.chained = true,
.chain_id = ALC861_FIXUP_NO_JACK_DETECT,
+ },
+ [ALC660_FIXUP_ASUS_W7J] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ /* ASUS W7J needs a magic pin setup on unused NID 0x10
+ * for enabling outputs
+ */
+ {0x10, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x24},
+ { }
+ },
}
};
static const struct snd_pci_quirk alc861_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1253, "ASUS W7J", ALC660_FIXUP_ASUS_W7J),
+ SND_PCI_QUIRK(0x1043, 0x1263, "ASUS Z35HL", ALC660_FIXUP_ASUS_W7J),
SND_PCI_QUIRK(0x1043, 0x1393, "ASUS A6Rp", ALC861_FIXUP_ASUS_A6RP),
SND_PCI_QUIRK_VENDOR(0x1043, "ASUS laptop", ALC861_FIXUP_AMP_VREF_0F),
SND_PCI_QUIRK(0x1462, 0x7254, "HP DX2200", ALC861_FIXUP_NO_JACK_DETECT),
SND_PCI_QUIRK(0x1025, 0x038b, "Acer Aspire 8943G", ALC662_FIXUP_ASPIRE),
SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x05db, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0623, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0624, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0625, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0626, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0628, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_BASS_1A_CHMAP),
SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_CHMAP),
goto out;
}
+ snd_soc_card_set_drvdata(card, priv);
+
card->dev = &pdev->dev;
card->owner = THIS_MODULE;
card->dai_link = dai;
ARIZONA_MIXER_CONTROLS("SPKDAT2L", ARIZONA_OUT6LMIX_INPUT_1_SOURCE),
ARIZONA_MIXER_CONTROLS("SPKDAT2R", ARIZONA_OUT6RMIX_INPUT_1_SOURCE),
-SOC_SINGLE("HPOUT1 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_1L,
- ARIZONA_OUT1_OSR_SHIFT, 1, 0),
-SOC_SINGLE("HPOUT2 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_2L,
- ARIZONA_OUT2_OSR_SHIFT, 1, 0),
-SOC_SINGLE("HPOUT3 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_3L,
- ARIZONA_OUT3_OSR_SHIFT, 1, 0),
-SOC_SINGLE("Speaker High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_4L,
- ARIZONA_OUT4_OSR_SHIFT, 1, 0),
-SOC_SINGLE("SPKDAT1 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_5L,
- ARIZONA_OUT5_OSR_SHIFT, 1, 0),
-SOC_SINGLE("SPKDAT2 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_6L,
- ARIZONA_OUT6_OSR_SHIFT, 1, 0),
-
SOC_DOUBLE_R("HPOUT1 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_1L,
ARIZONA_DAC_DIGITAL_VOLUME_1R, ARIZONA_OUT1L_MUTE_SHIFT, 1, 1),
SOC_DOUBLE_R("HPOUT2 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_2L,
ARIZONA_DAC_DIGITAL_VOLUME_6R, ARIZONA_OUT6L_VOL_SHIFT,
0xbf, 0, digital_tlv),
-SOC_DOUBLE_R_RANGE_TLV("HPOUT1 Volume", ARIZONA_OUTPUT_PATH_CONFIG_1L,
- ARIZONA_OUTPUT_PATH_CONFIG_1R,
- ARIZONA_OUT1L_PGA_VOL_SHIFT,
- 0x34, 0x40, 0, ana_tlv),
-SOC_DOUBLE_R_RANGE_TLV("HPOUT2 Volume", ARIZONA_OUTPUT_PATH_CONFIG_2L,
- ARIZONA_OUTPUT_PATH_CONFIG_2R,
- ARIZONA_OUT2L_PGA_VOL_SHIFT,
- 0x34, 0x40, 0, ana_tlv),
-SOC_DOUBLE_R_RANGE_TLV("HPOUT3 Volume", ARIZONA_OUTPUT_PATH_CONFIG_3L,
- ARIZONA_OUTPUT_PATH_CONFIG_3R,
- ARIZONA_OUT3L_PGA_VOL_SHIFT, 0x34, 0x40, 0, ana_tlv),
-
SOC_DOUBLE("SPKDAT1 Switch", ARIZONA_PDM_SPK1_CTRL_1, ARIZONA_SPK1L_MUTE_SHIFT,
ARIZONA_SPK1R_MUTE_SHIFT, 1, 1),
SOC_DOUBLE("SPKDAT2 Switch", ARIZONA_PDM_SPK2_CTRL_1, ARIZONA_SPK2L_MUTE_SHIFT,
iface |= 0x0001;
break;
case SND_SOC_DAIFMT_DSP_A:
- iface |= 0x0003;
+ iface |= 0x0013;
break;
case SND_SOC_DAIFMT_DSP_B:
- iface |= 0x0013;
+ iface |= 0x0003;
break;
default:
return -EINVAL;
/* disable POBCTRL, SOFT_ST and BUFDCOPEN */
snd_soc_write(codec, WM8990_ANTIPOP2, 0x0);
+
+ codec->cache_sync = 1;
break;
}
return -ENOMEM;
card->dev = &op->dev;
- platform_set_drvdata(op, pdata);
pdata->card = card;
if (ret)
dev_err(&op->dev, "snd_soc_register_card() failed: %d\n", ret);
+ platform_set_drvdata(op, pdata);
+
return ret;
}
SNDRV_PCM_FMTBIT_S24_LE | \
SNDRV_PCM_FMTBIT_S32_LE)
+#define KIRKWOOD_SPDIF_FORMATS \
+ (SNDRV_PCM_FMTBIT_S16_LE | \
+ SNDRV_PCM_FMTBIT_S24_LE)
+
static int kirkwood_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
unsigned int fmt)
{
ctl);
}
- if (dai->id == 0)
- ctl &= ~KIRKWOOD_PLAYCTL_SPDIF_EN; /* i2s */
- else
- ctl &= ~KIRKWOOD_PLAYCTL_I2S_EN; /* spdif */
-
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
/* configure */
ctl = priv->ctl_play;
+ if (dai->id == 0)
+ ctl &= ~KIRKWOOD_PLAYCTL_SPDIF_EN; /* i2s */
+ else
+ ctl &= ~KIRKWOOD_PLAYCTL_I2S_EN; /* spdif */
+
value = ctl & ~KIRKWOOD_PLAYCTL_ENABLE_MASK;
writel(value, priv->io + KIRKWOOD_PLAYCTL);
.channels_max = 2,
.rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000 |
SNDRV_PCM_RATE_96000,
- .formats = KIRKWOOD_I2S_FORMATS,
+ .formats = KIRKWOOD_SPDIF_FORMATS,
},
.capture = {
.channels_min = 1,
.channels_max = 2,
.rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000 |
SNDRV_PCM_RATE_96000,
- .formats = KIRKWOOD_I2S_FORMATS,
+ .formats = KIRKWOOD_SPDIF_FORMATS,
},
.ops = &kirkwood_i2s_dai_ops,
},
.rates = SNDRV_PCM_RATE_8000_192000 |
SNDRV_PCM_RATE_CONTINUOUS |
SNDRV_PCM_RATE_KNOT,
- .formats = KIRKWOOD_I2S_FORMATS,
+ .formats = KIRKWOOD_SPDIF_FORMATS,
},
.capture = {
.channels_min = 1,
.rates = SNDRV_PCM_RATE_8000_192000 |
SNDRV_PCM_RATE_CONTINUOUS |
SNDRV_PCM_RATE_KNOT,
- .formats = KIRKWOOD_I2S_FORMATS,
+ .formats = KIRKWOOD_SPDIF_FORMATS,
},
.ops = &kirkwood_i2s_dai_ops,
},
SNDRV_PCM_HW_PARAM_CHANNELS, 2, 2);
n810_ext_control(&codec->dapm);
- return clk_enable(sys_clkout2);
+ return clk_prepare_enable(sys_clkout2);
}
static void n810_shutdown(struct snd_pcm_substream *substream)
{
- clk_disable(sys_clkout2);
+ clk_disable_unprepare(sys_clkout2);
}
static int n810_hw_params(struct snd_pcm_substream *substream,
config SND_SOC_RCAR
tristate "R-Car series SRU/SCU/SSIU/SSI support"
select SND_SIMPLE_CARD
+ select REGMAP
help
This option enables R-Car SUR/SCU/SSIU/SSI sound support
break;
case 2:
((u16 *)(&ucontrol->value.bytes.data))[0]
- &= ~params->mask;
+ &= cpu_to_be16(~params->mask);
break;
case 4:
((u32 *)(&ucontrol->value.bytes.data))[0]
- &= ~params->mask;
+ &= cpu_to_be32(~params->mask);
break;
default:
return -EINVAL;
*/
int devm_snd_soc_register_card(struct device *dev, struct snd_soc_card *card)
{
- struct device **ptr;
+ struct snd_soc_card **ptr;
int ret;
ptr = devres_alloc(devm_card_release, sizeof(*ptr), GFP_KERNEL);
ret = snd_soc_register_card(card);
if (ret == 0) {
- *ptr = dev;
+ *ptr = card;
devres_add(dev, ptr);
} else {
devres_free(ptr);
}
}
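For context, the matching devres release callback presumably looks like the sketch below (it is not shown in this hunk); it dereferences the stored payload, which is why the payload must hold the card pointer rather than the device:

static void devm_card_release(struct device *dev, void *res)
{
	/* res is the devres payload allocated above: a snd_soc_card ** */
	snd_soc_unregister_card(*(struct snd_soc_card **)res);
}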
-static void soc_pcm_init_runtime_hw(struct snd_pcm_hardware *hw,
+static void soc_pcm_init_runtime_hw(struct snd_pcm_runtime *runtime,
struct snd_soc_pcm_stream *codec_stream,
struct snd_soc_pcm_stream *cpu_stream)
{
- hw->rate_min = max(codec_stream->rate_min, cpu_stream->rate_min);
- hw->rate_max = max(codec_stream->rate_max, cpu_stream->rate_max);
+ struct snd_pcm_hardware *hw = &runtime->hw;
+
hw->channels_min = max(codec_stream->channels_min,
cpu_stream->channels_min);
hw->channels_max = min(codec_stream->channels_max,
if (cpu_stream->rates
& (SNDRV_PCM_RATE_KNOT | SNDRV_PCM_RATE_CONTINUOUS))
hw->rates |= codec_stream->rates;
+
+ snd_pcm_limit_hw_rates(runtime);
+
+ hw->rate_min = max(hw->rate_min, cpu_stream->rate_min);
+ hw->rate_min = max(hw->rate_min, codec_stream->rate_min);
+ hw->rate_max = min_not_zero(hw->rate_max, cpu_stream->rate_max);
+ hw->rate_max = min_not_zero(hw->rate_max, codec_stream->rate_max);
}
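Worked example: a CPU DAI advertising SNDRV_PCM_RATE_CONTINUOUS up to 192000 Hz paired with a codec fixed at 8000-48000 Hz now resolves to 8000-48000. The old max()-of-both-maxima would have reported 192000, and min_not_zero() keeps an unset (zero) rate_max from clobbering a real limit.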
/*
/* Check that the codec and cpu DAIs are compatible */
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
- soc_pcm_init_runtime_hw(&runtime->hw, &codec_dai_drv->playback,
+ soc_pcm_init_runtime_hw(runtime, &codec_dai_drv->playback,
&cpu_dai_drv->playback);
} else {
- soc_pcm_init_runtime_hw(&runtime->hw, &codec_dai_drv->capture,
+ soc_pcm_init_runtime_hw(runtime, &codec_dai_drv->capture,
&cpu_dai_drv->capture);
}
ret = -EINVAL;
- snd_pcm_limit_hw_rates(runtime);
if (!runtime->hw.rates) {
printk(KERN_ERR "ASoC: %s <-> %s No matching rates\n",
codec_dai->name, cpu_dai->name);
return err;
}
- return err;
+ return 0;
}
int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
static enum event_type
process_op(struct event_format *event, struct print_arg *arg, char **tok);
+/*
+ * For __print_symbolic() and __print_flags, we need to completely
+ * evaluate the first argument, which defines what to print next.
+ */
+static enum event_type
+process_field_arg(struct event_format *event, struct print_arg *arg, char **tok)
+{
+ enum event_type type;
+
+ type = process_arg(event, arg, tok);
+
+ while (type == EVENT_OP) {
+ type = process_op(event, arg, tok);
+ }
+
+ return type;
+}
+
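Example: a TP_printk() such as __print_symbolic(REC->flags & 0x0f, ...) carries an '&' operator inside its first argument; a lone process_arg() returns with that operator pending, so the helper keeps folding EVENT_OP tokens until the argument is complete before the symbol list is parsed.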
static enum event_type
process_cond(struct event_format *event, struct print_arg *top, char **tok)
{
goto out_free;
}
- type = process_arg(event, field, &token);
+ type = process_field_arg(event, field, &token);
/* Handle operations in the first argument */
while (type == EVENT_OP)
goto out_free;
}
- type = process_arg(event, field, &token);
+ type = process_field_arg(event, field, &token);
+
if (test_type_token(type, token, EVENT_DELIM, ","))
goto out_free_field;
* is in the bottom half of the 32 bit field.
*/
offset &= 0xffff;
- val = (unsigned long long)(data + offset);
+ val = (unsigned long long)((unsigned long)data + offset);
break;
default: /* not sure what to do there */
return 0;
if (evsel->idx == (int) desc[i].leader_idx) {
evsel->leader = evsel;
/* {anon_group} is a dummy name */
- if (strcmp(desc[i].name, "{anon_group}"))
+ if (strcmp(desc[i].name, "{anon_group}")) {
evsel->group_name = desc[i].name;
+ desc[i].name = NULL;
+ }
evsel->nr_members = desc[i].nr_members;
if (i >= nr_groups || nr > 0) {
ret = 0;
out_free:
- while ((int) --i >= 0)
+ for (i = 0; i < nr_groups; i++)
free(desc[i].name);
free(desc);
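Two fixes interlock here: ownership of desc[i].name moves to evsel->group_name, so the pointer is NULLed to keep the cleanup loop from freeing it twice, and the loop itself now visits all nr_groups entries unconditionally, which is safe because free(NULL) is a no-op.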
/* Override latest entry if it had no specific time coverage */
if (!curr->start) {
comm__override(curr, str, timestamp);
- return 0;
+ } else {
+ new = comm__new(str, timestamp);
+ if (!new)
+ return -ENOMEM;
+ list_add(&new->list, &thread->comm_list);
}
- new = comm__new(str, timestamp);
- if (!new)
- return -ENOMEM;
-
- list_add(&new->list, &thread->comm_list);
thread->comm_set = true;
return 0;
CC = $(CROSS_COMPILE)gcc
PTHREAD_LIBS = -lpthread
WARNINGS = -Wall -Wextra
-CFLAGS = $(WARNINGS) -g $(PTHREAD_LIBS) -I../include
+CFLAGS = $(WARNINGS) -g -I../include
+LDFLAGS = $(PTHREAD_LIBS)
all: testusb ffs-test
%: %.c
- $(CC) $(CFLAGS) -o $@ $^
+ $(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS)
clean:
$(RM) testusb ffs-test
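(Besides keeping preprocessor and linker flags in their conventional variables, the reorder matters when the toolchain links with --as-needed: a library named before the objects that reference it can be dropped, so -lpthread must follow $^ on the link line.)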
int r;
struct kvm_vcpu *vcpu, *v;
+ if (id >= KVM_MAX_VCPUS)
+ return -EINVAL;
+
vcpu = kvm_arch_vcpu_create(kvm, id);
if (IS_ERR(vcpu))
return PTR_ERR(vcpu);