<title>Device Registration</title>
<para>
A number of functions are provided to help with device registration.
- The functions deal with PCI, USB and platform devices, respectively.
+ The functions deal with PCI and platform devices, respectively.
</para>
!Edrivers/gpu/drm/drm_pci.c
-!Edrivers/gpu/drm/drm_usb.c
!Edrivers/gpu/drm/drm_platform.c
<para>
New drivers that no longer rely on the services provided by the
by scheduling a timer. The delay is accessible through the vblankoffdelay
module parameter or the <varname>drm_vblank_offdelay</varname> global
variable and expressed in milliseconds. Its default value is 5000 ms.
+ Zero means never disable, and a negative value means disable immediately.
+ Drivers may override the behaviour by setting the
+ <structname>drm_device</structname>
+ <structfield>vblank_disable_immediate</structfield> flag, which when set
+ causes vblank interrupts to be disabled immediately regardless of the
+ drm_vblank_offdelay value. The flag should only be set if there's a
+ properly working hardware vblank counter present.
</para>
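<para>
As a minimal sketch (the <function>foo_driver_load</function> name is
hypothetical), a driver with a reliable hardware vblank counter could
opt in from its load callback:
</para>
<programlisting>
static int foo_driver_load(struct drm_device *dev, unsigned long flags)
{
	int ret;

	ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
	if (ret)
		return ret;

	/* Hardware counter is reliable: disable vblank irqs immediately. */
	dev->vblank_disable_immediate = true;

	return 0;
}
</programlisting>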
<para>
When a vertical blanking interrupt occurs, drivers only need to call the
<sect2>
<title>Vertical Blanking and Interrupt Handling Functions Reference</title>
!Edrivers/gpu/drm/drm_irq.c
+!Finclude/drm/drmP.h drm_crtc_vblank_waitqueue
</sect2>
</sect1>
!Pdrivers/gpu/drm/i915/i915_cmd_parser.c batch buffer command parser
!Idrivers/gpu/drm/i915/i915_cmd_parser.c
</sect2>
+ <sect2>
+ <title>Logical Rings, Logical Ring Contexts and Execlists</title>
+!Pdrivers/gpu/drm/i915/intel_lrc.c Logical Rings, Logical Ring Contexts and Execlists
+!Idrivers/gpu/drm/i915/intel_lrc.c
+ </sect2>
</sect1>
</chapter>
</part>
<http://www.kroah.com/log/linux/maintainer-03.html>
<http://www.kroah.com/log/linux/maintainer-04.html>
<http://www.kroah.com/log/linux/maintainer-05.html>
+ <http://www.kroah.com/log/linux/maintainer-06.html>
NO!!!! No more huge patch bombs to linux-kernel@vger.kernel.org people!
<https://lkml.org/lkml/2005/7/11/336>
* DMA client
Required properties:
-- dmas: a list of <[DMA multiplexer phandle] [SRS/DRS value]> pairs,
- where SRS/DRS values are fixed handles, specified in the SoC
- manual as the value that would be written into the PDMACHCR.
+- dmas: a list of <[DMA multiplexer phandle] [SRS << 8 | DRS]> pairs,
+  where SRS/DRS are the values specified in the SoC manual; the combined
+  value is written into the upper 16 bits of the PDMACHCR register (see
+  the encoding example below).
- dma-names: a list of DMA channel names, one per "dmas" entry
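
For instance, with made-up values SRS = 0x21 and DRS = 0x02, the second cell
of the pair would be (0x21 << 8) | 0x02 = 0x2102.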
Example:
Documentation/devicetree/bindings/video/display-timing.txt for display
timing binding details.
+Optional properties:
+- backlight: phandle of the backlight device attached to the panel
+- enable-gpios: GPIO pin to enable or disable the panel
+
Recommended properties:
- pinctrl-names, pinctrl-0: the pincontrol settings to configure
muxing properly for pins that connect to TFP410 device
compatible = "ti,tilcdc,panel";
pinctrl-names = "default";
pinctrl-0 = <&bone_lcd3_cape_lcd_pins>;
+ backlight = <&backlight>;
+ enable-gpios = <&gpio3 19 0>;
+
panel-info {
ac-bias = <255>;
ac-bias-intrpt = <0>;
keycode generated by each GPIO. Linux keycodes are defined in
<dt-bindings/input/input.h>.
+- linux,gpio-keymap: When enabled, the SPT_GPIOPWM_T19 object sends messages
+ on GPIO bit changes. An array of up to 8 entries can be provided
+ indicating the Linux keycode mapped to each bit of the status byte,
+ starting at the LSB. Linux keycodes are defined in
+ <dt-bindings/input/input.h>.
+
+ Note: the numbering of the GPIOs and the bit they start at varies between
+ maXTouch devices. You must either refer to the documentation, or
+ experiment to determine which bit corresponds to which input. Use
+ KEY_RESERVED for unused padding values.
+
Example:
touch@4b {
1) Interrupt client nodes
-------------------------
-Nodes that describe devices which generate interrupts must contain an either an
-"interrupts" property or an "interrupts-extended" property. These properties
-contain a list of interrupt specifiers, one per output interrupt. The format of
-the interrupt specifier is determined by the interrupt controller to which the
-interrupts are routed; see section 2 below for details.
+Nodes that describe devices which generate interrupts must contain an
+"interrupts" property, an "interrupts-extended" property, or both. If both are
+present, the latter should take precedence; the former may be provided simply
+for compatibility with software that does not recognize the latter. These
+properties contain a list of interrupt specifiers, one per output interrupt. The
+format of the interrupt specifier is determined by the interrupt controller to
+which the interrupts are routed; see section 2 below for details.
Example:
interrupt-parent = <&intc1>;
--- /dev/null
+* Toshiba TC3589x multi-purpose expander
+
+The Toshiba TC3589x series are I2C-based MFD devices which may expose the
+following built-in devices: gpio, keypad, rotator (vibrator), PWM (for
+e.g. LEDs or vibrators). The included models are:
+
+- TC35890
+- TC35892
+- TC35893
+- TC35894
+- TC35895
+- TC35896
+
+Required properties:
+ - compatible : must be "toshiba,tc35890", "toshiba,tc35892", "toshiba,tc35893",
+ "toshiba,tc35894", "toshiba,tc35895" or "toshiba,tc35896"
+ - reg : I2C address of the device
+ - interrupt-parent : specifies which IRQ controller we're connected to
+ - interrupts : the interrupt on the parent the controller is connected to
+ - interrupt-controller : marks the device node as an interrupt controller
+ - #interrupt-cells : should be <1>, the first cell is the IRQ offset on this
+ TC3589x interrupt controller.
+
+Optional nodes:
+
+- GPIO
+ This GPIO module inside the TC3589x has 24 (TC35890, TC35892) or 20
+ (other models) GPIO lines.
+ - compatible : must be "toshiba,tc3589x-gpio"
+ - interrupts : interrupt on the parent, which must be the tc3589x MFD device
+ - interrupt-controller : marks the device node as an interrupt controller
+ - #interrupt-cells : should be <2>, the first cell is the IRQ offset on this
+ TC3589x GPIO interrupt controller, the second cell is the interrupt flags
+ in accordance with <dt-bindings/interrupt-controller/irq.h>. The following
+ flags are valid:
+ - IRQ_TYPE_LEVEL_LOW
+ - IRQ_TYPE_LEVEL_HIGH
+ - IRQ_TYPE_EDGE_RISING
+ - IRQ_TYPE_EDGE_FALLING
+ - IRQ_TYPE_EDGE_BOTH
+ - gpio-controller : marks the device node as a GPIO controller
+ - #gpio-cells : should be <2>, the first cell is the GPIO offset on this
+ GPIO controller, the second cell is the flags.
+
+- Keypad
+ This keypad is the same on all variants, supporting up to 96 different
+ keys. The linux-specific properties are modeled on those already existing
+ in other input drivers.
+ - compatible : must be "toshiba,tc3589x-keypad"
+ - debounce-delay-ms : debounce interval in milliseconds
+ - keypad,num-rows : number of rows in the matrix, see
+ bindings/input/matrix-keymap.txt
+ - keypad,num-columns : number of columns in the matrix, see
+ bindings/input/matrix-keymap.txt
+ - linux,keymap: the definition can be found in
+ bindings/input/matrix-keymap.txt
+ - linux,no-autorepeat: do not enable the autorepeat feature.
+ - linux,wakeup: use any event on the keypad as a wakeup event.
+
+Example:
+
+tc35893@44 {
+ compatible = "toshiba,tc35893";
+ reg = <0x44>;
+ interrupt-parent = <&gpio6>;
+ interrupts = <26 IRQ_TYPE_EDGE_RISING>;
+
+ interrupt-controller;
+ #interrupt-cells = <1>;
+
+ tc3589x_gpio {
+ compatible = "toshiba,tc3589x-gpio";
+ interrupts = <0>;
+
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ };
+ tc3589x_keypad {
+ compatible = "toshiba,tc3589x-keypad";
+ interrupts = <6>;
+ debounce-delay-ms = <4>;
+ keypad,num-columns = <8>;
+ keypad,num-rows = <8>;
+ linux,no-autorepeat;
+ linux,wakeup;
+ linux,keymap = <0x0301006b
+ 0x04010066
+ 0x06040072
+ 0x040200d7
+ 0x0303006a
+ 0x0205000e
+ 0x0607008b
+ 0x0500001c
+ 0x0403000b
+ 0x03040034
+ 0x05020067
+ 0x0305006c
+ 0x040500e7
+ 0x0005009e
+ 0x06020073
+ 0x01030039
+ 0x07060069
+ 0x050500d9>;
+ };
+};
width of 8 is assumed.
- ti,nand-ecc-opt: A string setting the ECC layout to use. One of:
- "sw" <deprecated> use "ham1" instead
+ "sw" 1-bit Hamming ecc code via software
"hw" <deprecated> use "ham1" instead
"hw-romcode" <deprecated> use "ham1" instead
"ham1" 1-bit Hamming ecc code
further clocks may be specified in derived bindings.
- clock-names: One name for each entry in the clocks property, the
first one should be "stmmaceth".
+- clk_ptp_ref: the PTP reference clock. If available, this clock is used for
+  programming the Timestamp Addend Register. If not passed, the system clock
+  is used instead, which is fine on some platforms.
Examples:
--- /dev/null
+AU Optronics Corporation 10.1" WXGA TFT LCD panel
+
+Required properties:
+- compatible: should be "auo,b101xtn01"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
Required properties:
- compatible: should contain "snps,dw-pcie" to identify the core.
+- reg: Should contain the configuration address space.
+- reg-names: Must be "config" for the PCIe configuration space.
+ (The old way of getting the configuration address space from "ranges"
+ is deprecated and should be avoided.)
- #address-cells: set to <3>
- #size-cells: set to <2>
- device_type: set to "pci"
--- /dev/null
+TI PCI Controllers
+
+PCIe Designware Controller
+ - compatible: Should be "ti,dra7-pcie"
+ - reg : Three register ranges as listed in the reg-names property
+ - reg-names : Must contain "ti_conf" for the TI specific registers,
+ "rc_dbics" for the DesignWare PCIe registers and "config" for the
+ PCIe configuration space
+ - phys : list of PHY specifiers (used by generic PHY framework)
+ - phy-names : must be "pcie-phy0", "pcie-phy1", ..., "pcie-phyN", based on
+ the number of PHYs as specified in the *phys* property.
+ - ti,hwmods : Name of the hwmod associated to the pcie, "pcie<X>",
+ where <X> is the instance number of the pcie from the HW spec.
+ - interrupts : Two interrupt entries must be specified. The first one is for
+ the main interrupt line and the second for the MSI interrupt line.
+ - #address-cells,
+ #size-cells,
+ #interrupt-cells,
+ device_type,
+ ranges,
+ num-lanes,
+ interrupt-map-mask,
+ interrupt-map : as specified in ../designware-pcie.txt
+
+Example:
+axi {
+ compatible = "simple-bus";
+ #size-cells = <1>;
+ #address-cells = <1>;
+ ranges = <0x51000000 0x51000000 0x3000
+ 0x0 0x20000000 0x10000000>;
+ pcie@51000000 {
+ compatible = "ti,dra7-pcie";
+ reg = <0x51000000 0x2000>, <0x51002000 0x14c>, <0x1000 0x2000>;
+ reg-names = "rc_dbics", "ti_conf", "config";
+ interrupts = <0 232 0x4>, <0 233 0x4>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ device_type = "pci";
+ ranges = <0x81000000 0 0 0x03000 0 0x00010000
+ 0x82000000 0 0x20013000 0x13000 0 0xffed000>;
+ #interrupt-cells = <1>;
+ num-lanes = <1>;
+ ti,hwmods = "pcie1";
+ phys = <&pcie1_phy>;
+ phy-names = "pcie-phy0";
+ interrupt-map-mask = <0 0 0 7>;
+ interrupt-map = <0 0 0 1 &pcie_intc 1>,
+ <0 0 0 2 &pcie_intc 2>,
+ <0 0 0 3 &pcie_intc 3>,
+ <0 0 0 4 &pcie_intc 4>;
+ pcie_intc: interrupt-controller {
+ interrupt-controller;
+ #address-cells = <0>;
+ #interrupt-cells = <1>;
+ };
+ };
+};
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <2>;
- interrupts = <0 32 0x4>;
+ interrupts = <0 16 0x4>;
pinctrl-names = "default";
pinctrl-0 = <&gsbi5_uart_default>;
infet5-supply = <&some_reg>;
infet6-supply = <&some_reg>;
infet7-supply = <&some_reg>;
- vsys_l1-supply = <&some_reg>;
- vsys_l2-supply = <&some_reg>;
+ vsys-l1-supply = <&some_reg>;
+ vsys-l2-supply = <&some_reg>;
regulators {
dcdc1 {
ADI AXI-SPDIF controller
Required properties:
- - compatible : Must be "adi,axi-spdif-1.00.a"
+ - compatible : Must be "adi,axi-spdif-tx-1.00.a"
- reg : Must contain SPDIF core's registers location and length
- clocks : Pairs of phandle and specifier referencing the controller's clocks.
The controller expects two clocks, the clock used for the AXI interface and
* "fsl,imx23-usbphy" for imx23 and imx28
* "fsl,imx6q-usbphy" for imx6dq and imx6dl
* "fsl,imx6sl-usbphy" for imx6sl
+ * "fsl,imx6sx-usbphy" for imx6sx
"fsl,imx23-usbphy" is still a fallback for other strings
- reg: Should contain registers location and length
- interrupts: Should contain phy interrupt
mediatek MediaTek Inc.
micrel Micrel Inc.
microchip Microchip Technology Inc.
+mitsubishi Mitsubishi Electric Corporation
mosaixtech Mosaix Technologies, Inc.
moxa Moxa
mpl MPL AG
ste ST-Ericsson
stericsson ST-Ericsson
synology Synology, Inc.
+thine THine Electronics, Inc.
ti Texas Instruments
tlm Trusted Logic Mobility
toradex Toradex AG
--- /dev/null
+Analog Device ADV7123 Video DAC
+-------------------------------
+
+The ADV7123 is a digital-to-analog converter that outputs VGA signals from a
+parallel video input.
+
+Required properties:
+
+- compatible: Should be "adi,adv7123"
+
+Optional properties:
+
+- psave-gpios: Power save control GPIO
+
+Required nodes:
+
+The ADV7123 has two video ports. Their connections are modeled using the OF
+graph bindings specified in Documentation/devicetree/bindings/graph.txt.
+
+- Video port 0 for DPI input
+- Video port 1 for VGA output
+
+
+Example
+-------
+
+ adv7123: encoder@0 {
+ compatible = "adi,adv7123";
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ port@0 {
+ reg = <0>;
+
+ adv7123_in: endpoint@0 {
+ remote-endpoint = <&dpi_out>;
+ };
+ };
+
+ port@1 {
+ reg = <1>;
+
+ adv7123_out: endpoint@0 {
+ remote-endpoint = <&vga_connector_in>;
+ };
+ };
+ };
+ };
===================
Required properties:
-- compatible: "composite-connector" or "svideo-connector"
+- compatible: "composite-video-connector" or "svideo-connector"
Optional properties:
- label: a symbolic name for the connector
-------
tv: connector {
- compatible = "composite-connector";
+ compatible = "composite-video-connector";
label = "tv";
port {
Required properties:
- compatible: value should be one of the following
+ "samsung,exynos3250-mipi-dsi" /* for Exynos3250/3472 SoCs */
"samsung,exynos4210-mipi-dsi" /* for Exynos4 SoCs */
"samsung,exynos5410-mipi-dsi" /* for Exynos5410/5420/5440 SoCs */
- reg: physical base address and length of the registers set for the device
--- /dev/null
+* Renesas R-Car Display Unit (DU)
+
+Required Properties:
+
+ - compatible: must be one of the following.
+ - "renesas,du-r8a7779" for R8A7779 (R-Car H1) compatible DU
+ - "renesas,du-r8a7790" for R8A7790 (R-Car H2) compatible DU
+ - "renesas,du-r8a7791" for R8A7791 (R-Car M2) compatible DU
+
+ - reg: A list of base address and length of each memory resource, one for
+ each entry in the reg-names property.
+ - reg-names: Name of the memory resources. The DU requires one memory
+ resource for the DU core (named "du") and one memory resource for each
+ LVDS encoder (named "lvds.x" with "x" being the LVDS controller numerical
+ index).
+
+ - interrupt-parent: phandle of the parent interrupt controller.
+ - interrupts: Interrupt specifiers for the DU interrupts.
+
+ - clocks: A list of phandles + clock-specifier pairs, one for each entry in
+ the clock-names property.
+ - clock-names: Name of the clocks. This property is model-dependent.
+ - R8A7779 uses a single functional clock. The clock doesn't need to be
+ named.
+ - R8A7790 and R8A7791 use one functional clock per channel and one clock
+ per LVDS encoder. The functional clocks must be named "du.x" with "x"
+ being the channel numerical index. The LVDS clocks must be named
+ "lvds.x" with "x" being the LVDS encoder numerical index.
+
+Required nodes:
+
+The connections to the DU output video ports are modeled using the OF graph
+bindings specified in Documentation/devicetree/bindings/graph.txt.
+
+The following table lists for each supported model the port number
+corresponding to each DU output.
+
+                Port 0         Port 1         Port 2
+-----------------------------------------------------------------------------
+ R8A7779 (H1)   DPAD 0         DPAD 1         -
+ R8A7790 (H2)   DPAD           LVDS 0         LVDS 1
+ R8A7791 (M2)   DPAD           LVDS 0         -
+
+
+Example: R8A7790 (R-Car H2) DU
+
+ du: du@feb00000 {
+ compatible = "renesas,du-r8a7790";
+ reg = <0 0xfeb00000 0 0x70000>,
+ <0 0xfeb90000 0 0x1c>,
+ <0 0xfeb94000 0 0x1c>;
+ reg-names = "du", "lvds.0", "lvds.1";
+ interrupt-parent = <&gic>;
+ interrupts = <0 256 IRQ_TYPE_LEVEL_HIGH>,
+ <0 268 IRQ_TYPE_LEVEL_HIGH>,
+ <0 269 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp7_clks R8A7790_CLK_DU0>,
+ <&mstp7_clks R8A7790_CLK_DU1>,
+ <&mstp7_clks R8A7790_CLK_DU2>,
+ <&mstp7_clks R8A7790_CLK_LVDS0>,
+ <&mstp7_clks R8A7790_CLK_LVDS1>;
+ clock-names = "du.0", "du.1", "du.2", "lvds.0", "lvds.1";
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ port@0 {
+ reg = <0>;
+ du_out_rgb: endpoint {
+ };
+ };
+ port@1 {
+ reg = <1>;
+ du_out_lvds0: endpoint {
+ };
+ };
+ port@2 {
+ reg = <2>;
+ du_out_lvds1: endpoint {
+ };
+ };
+ };
+ };
"samsung,s3c2443-fimd"; /* for S3C24XX SoCs */
"samsung,s3c6400-fimd"; /* for S3C64XX SoCs */
"samsung,s5pv210-fimd"; /* for S5PV210 SoC */
+ "samsung,exynos3250-fimd"; /* for Exynos3250/3472 SoCs */
"samsung,exynos4210-fimd"; /* for Exynos4 SoCs */
"samsung,exynos5250-fimd"; /* for Exynos5 SoCs */
--- /dev/null
+THine Electronics THC63LVDM83D LVDS serializer
+----------------------------------------------
+
+The THC63LVDM83D is an LVDS serializer designed to support pixel data
+transmission between a host and a flat panel.
+
+Required properties:
+
+- compatible: Should be "thine,thc63lvdm83d"
+
+Optional properties:
+
+- pwdn-gpios: Power down control GPIO
+
+Required nodes:
+
+The THC63LVDM83D has two video ports. Their connections are modeled using the
+OF graph bindings specified in Documentation/devicetree/bindings/graph.txt.
+
+- Video port 0 for CMOS/TTL input
+- Video port 1 for LVDS output
+
+
+Example
+-------
+
+ lvds_enc: encoder@0 {
+ compatible = "thine,thc63lvdm83d";
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ port@0 {
+ reg = <0>;
+
+ lvds_enc_in: endpoint@0 {
+ remote-endpoint = <&rgb_out>;
+ };
+ };
+
+ port@1 {
+ reg = <1>;
+
+ lvds_enc_out: endpoint@0 {
+ remote-endpoint = <&panel_in>;
+ };
+ };
+ };
+ };
--- /dev/null
+VGA Connector
+=============
+
+Required properties:
+
+- compatible: "vga-connector"
+
+Optional properties:
+
+- label: a symbolic name for the connector corresponding to a hardware label
+- ddc-i2c-bus: phandle to the I2C bus that is connected to VGA DDC
+
+Required nodes:
+
+The VGA connector internal connections are modeled using the OF graph bindings
+specified in Documentation/devicetree/bindings/graph.txt.
+
+The VGA connector has a single port that must be connected to a video source
+port.
+
+
+Example
+-------
+
+vga0: connector@0 {
+ compatible = "vga-connector";
+ label = "vga";
+
+ ddc-i2c-bus = <&i2c3>;
+
+ port {
+ vga_connector_in: endpoint {
+ remote-endpoint = <&adv7123_out>;
+ };
+ };
+};
size_t size, int flags,
const char *exp_name)
- If this succeeds, dma_buf_export allocates a dma_buf structure, and returns a
- pointer to the same. It also associates an anonymous file with this buffer,
- so it can be exported. On failure to allocate the dma_buf object, it returns
- NULL.
+ If this succeeds, dma_buf_export_named allocates a dma_buf structure and
+ returns a pointer to it. It also associates an anonymous file with this
+ buffer, so it can be exported. On failure to allocate the dma_buf object,
+ it returns NULL.
'exp_name' is the name of the exporter; it is meant to aid debugging.
drivers and/or processes.
Interface:
- int dma_buf_fd(struct dma_buf *dmabuf)
+ int dma_buf_fd(struct dma_buf *dmabuf, int flags)
This API installs an fd for the anonymous file associated with this buffer;
it returns the new fd, or a negative error code on failure.
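
   A minimal usage sketch (error handling left to the caller; O_CLOEXEC is
   just one valid choice of flags):

      int fd = dma_buf_fd(dmabuf, O_CLOEXEC);
      if (fd < 0)
              return fd;      /* error code from the core */
      /* hand fd back to userspace, e.g. as an ioctl result */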
"dma_buf->ops->" indirection from the users of this interface.
In struct dma_buf_ops, unmap_dma_buf is defined as
- void (*unmap_dma_buf)(struct dma_buf_attachment *, struct sg_table *);
+ void (*unmap_dma_buf)(struct dma_buf_attachment *,
+ struct sg_table *,
+ enum dma_data_direction);
unmap_dma_buf signifies the end-of-DMA for the attachment provided. Like
map_dma_buf, this API also must be implemented by the exporter.
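
   As a minimal sketch (hypothetical exporter; assumes map_dma_buf kmalloc'ed
   the sg_table and mapped it with dma_map_sg), an implementation could look
   like:

      static void my_unmap_dma_buf(struct dma_buf_attachment *attach,
                                   struct sg_table *sgt,
                                   enum dma_data_direction dir)
      {
              dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
              sg_free_table(sgt);
              kfree(sgt);
      }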
- Build, install, reboot
The NFS/RDMA code will be enabled automatically if NFS and RDMA
- are turned on. The NFS/RDMA client and server are configured via the hidden
- SUNRPC_XPRT_RDMA config option that depends on SUNRPC and INFINIBAND. The
- value of SUNRPC_XPRT_RDMA will be:
+ are turned on. The NFS/RDMA client and server are configured via the
+ SUNRPC_XPRT_RDMA_CLIENT and SUNRPC_XPRT_RDMA_SERVER config options that both
+ depend on SUNRPC and INFINIBAND. The default value of both options will be:
- N if either SUNRPC or INFINIBAND are N, in this case the NFS/RDMA client
and server will not be built
- Start the NFS server
- If the NFS/RDMA server was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
- kernel config), load the RDMA transport module:
+ If the NFS/RDMA server was built as a module
+ (CONFIG_SUNRPC_XPRT_RDMA_SERVER=m in kernel config), load the RDMA
+ transport module:
$ modprobe svcrdma
- On the client system
- If the NFS/RDMA client was built as a module (CONFIG_SUNRPC_XPRT_RDMA=m in
- kernel config), load the RDMA client module:
+ If the NFS/RDMA client was built as a module
+ (CONFIG_SUNRPC_XPRT_RDMA_CLIENT=m in kernel config), load the RDMA client
+ module:
$ modprobe xprtrdma
private field of the seq_file structure; that value can then be retrieved
by the iterator functions.
+There is also a wrapper function to seq_open() called seq_open_private(). It
+kmallocs a zero filled block of memory and stores a pointer to it in the
+private field of the seq_file structure, returning 0 on success. The
+block size is specified in a third parameter to the function, e.g.:
+
+ static int ct_open(struct inode *inode, struct file *file)
+ {
+ return seq_open_private(file, &ct_seq_ops,
+ sizeof(struct mystruct));
+ }
+
+There is also a variant function, __seq_open_private(), which is functionally
+identical except that, if successful, it returns the pointer to the allocated
+memory block, allowing further initialisation e.g.:
+
+ static int ct_open(struct inode *inode, struct file *file)
+ {
+ struct mystruct *p =
+ __seq_open_private(file, &ct_seq_ops, sizeof(*p));
+
+ if (!p)
+ return -ENOMEM;
+
+ p->foo = bar; /* initialize my stuff */
+ ...
+ p->baz = true;
+
+ return 0;
+ }
+
+A corresponding close function, seq_release_private() is available which
+frees the memory allocated in the corresponding open.
+
The other operations of interest - read(), llseek(), and release() - are
all implemented by the seq_file code itself. So a virtual file's
file_operations structure will look like:
if and only if no GPIO has been assigned to the device/function/index triplet,
other error codes are used for cases where a GPIO has been assigned but an error
occurred while trying to acquire it. This is useful to discriminate between mere
-errors and an absence of GPIO for optional GPIO parameters.
+errors and an absence of GPIO for optional GPIO parameters. For the common
+pattern where a GPIO is optional, the gpiod_get_optional() and
+gpiod_get_index_optional() functions can be used. These functions return NULL
+instead of -ENOENT if no GPIO has been assigned to the requested function:
+
+
+ struct gpio_desc *gpiod_get_optional(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
+
+ struct gpio_desc *gpiod_get_index_optional(struct device *dev,
+ const char *con_id,
+ unsigned int index,
+ enum gpiod_flags flags)
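
For example (a minimal sketch; the "reset" function name is only an
illustration):

	struct gpio_desc *reset;

	reset = gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
	if (IS_ERR(reset))
		return PTR_ERR(reset);	/* a real error occurred */
	if (reset)			/* NULL means no GPIO was assigned */
		gpiod_set_value(reset, 1);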
Device-managed variants of these functions are also defined:
unsigned int idx,
enum gpiod_flags flags)
+ struct gpio_desc *devm_gpiod_get_optional(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
+
+ struct gpio_desc *devm_gpiod_get_index_optional(struct device *dev,
+ const char *con_id,
+ unsigned int index,
+ enum gpiod_flags flags)
+
A GPIO descriptor can be disposed of using the gpiod_put() function:
void gpiod_put(struct gpio_desc *desc)
I2C to communicate with your device. SMBus commands are preferred if
the device supports them. Both are illustrated below.
- __u8 register = 0x10; /* Device register to access */
+ __u8 reg = 0x10; /* Device register to access */
__s32 res;
char buf[10];
/* Using SMBus commands */
- res = i2c_smbus_read_word_data(file, register);
+ res = i2c_smbus_read_word_data(file, reg);
if (res < 0) {
/* ERROR HANDLING: i2c transaction failed */
} else {
}
/* Using I2C Write, equivalent of
- i2c_smbus_write_word_data(file, register, 0x6543) */
- buf[0] = register;
+ i2c_smbus_write_word_data(file, reg, 0x6543) */
+ buf[0] = reg;
buf[1] = 0x43;
buf[2] = 0x65;
- if (write(file, buf, 3) ! =3) {
+ if (write(file, buf, 3) != 3) {
/* ERROR HANDLING: i2c transaction failed */
}
a remote system.
Kdump and kexec are currently supported on the x86, x86_64, ppc64, ia64,
-and s390x architectures.
+s390x and arm architectures.
When the system kernel boots, it reserves a small section of memory for
the dump-capture kernel. This ensures that ongoing Direct Memory Access
2) Or use the system kernel binary itself as dump-capture kernel and there is
no need to build a separate dump-capture kernel. This is possible
only with the architectures which support a relocatable kernel. As
- of today, i386, x86_64, ppc64 and ia64 architectures support relocatable
+ of today, i386, x86_64, ppc64, ia64 and arm architectures support relocatable
kernel.
Building a relocatable kernel is advantageous from the point of view that
kernel will be aligned to 64Mb, so if the start address is not then
any space below the alignment point will be wasted.
+Dump-capture kernel config options (Arch Dependent, arm)
+----------------------------------------------------------
+
+- To use a relocatable kernel,
+ Enable "AUTO_ZRELADDR" support under "Boot" options:
+
+ AUTO_ZRELADDR=y
Extended crashkernel syntax
===========================
crashkernel=<range1>:<size1>[,<range2>:<size2>,...][@offset]
range=start-[end]
+Please note, on arm, the offset is required.
+ crashkernel=<range1>:<size1>[,<range2>:<size2>,...]@offset
+ range=start-[end]
+
'start' is inclusive and 'end' is exclusive.
For example:
on the memory consumption of the kdump system. In general this is not
dependent on the memory size of the production system.
+ On arm, use "crashkernel=Y@X". Note that the start address of the kernel
+ will be aligned to 128MiB (0x08000000), so if the start address is not
+ aligned, any space below the alignment point may be overwritten by the
+ dump-capture kernel, which means the vmcore may not be as precise as
+ expected.
+
+
Load the Dump-capture Kernel
============================
- Use vmlinux or vmlinuz.gz
For s390x:
- Use image or bzImage
-
+For arm:
+ - Use zImage
If you are using an uncompressed vmlinux image, then use the following command
to load the dump-capture kernel.
--initrd=<initrd-for-dump-capture-kernel> \
--append="root=<root-dev> <arch-specific-options>"
+If you are using a compressed zImage, then use the following command
+to load the dump-capture kernel.
+
+ kexec --type zImage -p <dump-capture-kernel-zImage> \
+ --initrd=<initrd-for-dump-capture-kernel> \
+ --dtb=<dtb-for-dump-capture-kernel> \
+ --append="root=<root-dev> <arch-specific-options>"
+
+
Please note that --args-linux does not need to be specified for ia64.
It is planned to make this a no-op on that architecture, but for now
it should be omitted.
For s390x:
"1 maxcpus=1 cgroup_disable=memory"
+For arm:
+ "1 maxcpus=1 reset_devices"
+
Notes on loading the dump-capture kernel:
* By default, the ELF headers are stored in ELF64 format to support
bogus residue values);
s = SINGLE_LUN (the device has only one
Logical Unit);
+ u = IGNORE_UAS (don't bind to the uas driver);
w = NO_WP_DETECT (don't test whether the
medium is write-protected).
Example: quirks=0419:aaf5:rl,0421:0433:rc
from the device. It supports blocking operations, poll/select and
fasync operation modes. You must read 1 byte from the device. The
result is number of free-fall interrupts since the last successful
-read (or 255 if number of interrupts would not fit). See the hpfall.c
+read (or 255 if number of interrupts would not fit). See the freefall.c
file for an example on using the device.
on all its consumers) and change operating mode (if necessary and permitted)
to best match the current operating load.
-The load_uA value can be determined from the consumers datasheet. e.g.most
-datasheets have tables showing the max current consumed in certain situations.
+The load_uA value can be determined from the consumer's datasheet. e.g. most
+datasheets have tables showing the maximum current consumed in certain
+situations.
Most consumers will use indirect operating mode control since they have no
knowledge of the regulator or whether the regulator is shared with other
int regulator_register_notifier(struct regulator *regulator,
struct notifier_block *nb);
-Consumers can uregister interest by calling :-
+Consumers can unregister interest by calling :-
int regulator_unregister_notifier(struct regulator *regulator,
struct notifier_block *nb);
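
As a minimal sketch (hypothetical handler and consumer names), a consumer
could react to over-temperature events like this :-

	static int my_supply_event(struct notifier_block *nb,
				   unsigned long event, void *data)
	{
		if (event & REGULATOR_EVENT_OVER_TEMP) {
			/* e.g. throttle the consumer's activity */
		}
		return NOTIFY_OK;
	}

	static struct notifier_block my_supply_nb = {
		.notifier_call = my_supply_event,
	};

	ret = regulator_register_notifier(regulator, &my_supply_nb);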
- Errors in regulator configuration can have very serious consequences
for the system, potentially including lasting hardware damage.
- - It is not possible to automatically determine the power confugration
+ - It is not possible to automatically determine the power configuration
of the system - software-equivalent variants of the same chip may
- have different power requirments, and not all components with power
+ have different power requirements, and not all components with power
requirements are visible to software.
=> The API should make no changes to the hardware state unless it has
- specific knowledge that these changes are safe to do perform on
- this particular system.
+ specific knowledge that these changes are safe to perform on this
+ particular system.
Consumer use cases
------------------
+-> [Consumer B @ 3.3V]
The drivers for consumers A & B must be mapped to the correct regulator in
-order to control their power supply. This mapping can be achieved in machine
+order to control their power supplies. This mapping can be achieved in machine
initialisation code by creating a struct regulator_consumer_supply for
each regulator.
Constraints can now be registered by defining a struct regulator_init_data
for each regulator power domain. This structure also maps the consumers
-to their supply regulator :-
+to their supply regulators :-
static struct regulator_init_data regulator1_data = {
.constraints = {
Consumers can be classified into two types:-
Static: consumer does not change its supply voltage or
- current limit. It only needs to enable or disable it's
+ current limit. It only needs to enable or disable its
power supply. Its supply voltage is set by the hardware,
bootloader, firmware or kernel board initialisation code.
- Dynamic: consumer needs to change it's supply voltage or
+ Dynamic: consumer needs to change its supply voltage or
current limit to meet operation demands.
This interface is for machine specific code and allows the creation of
voltage/current domains (with constraints) for each regulator. It can
provide regulator constraints that will prevent device damage through
- overvoltage or over current caused by buggy client drivers. It also
+ overvoltage or overcurrent caused by buggy client drivers. It also
allows the creation of a regulator tree whereby some regulators are
supplied by others (similar to a clock tree).
struct regulator_dev *regulator_register(struct regulator_desc *regulator_desc,
const struct regulator_config *config);
-This will register the regulators capabilities and operations to the regulator
+This will register the regulator's capabilities and operations to the regulator
core.
Regulators can be unregistered by calling :-
Regulator Events
================
-Regulators can send events (e.g. over temp, under voltage, etc) to consumer
-drivers by calling :-
+Regulators can send events (e.g. overtemperature, undervoltage, etc) to
+consumer drivers by calling :-
int regulator_notifier_call_chain(struct regulator_dev *rdev,
unsigned long event, void *data);
-------------------
this_cpu operations are a way of optimizing access to per cpu
-variables associated with the *currently* executing processor through
-the use of segment registers (or a dedicated register where the cpu
-permanently stored the beginning of the per cpu area for a specific
-processor).
+variables associated with the *currently* executing processor. This is
+done through the use of segment registers (or a dedicated register where
+the cpu permanently stored the beginning of the per cpu area for a
+specific processor).
-The this_cpu operations add a per cpu variable offset to the processor
-specific percpu base and encode that operation in the instruction
+this_cpu operations add a per cpu variable offset to the processor
+specific per cpu base and encode that operation in the instruction
operating on the per cpu variable.
-This means there are no atomicity issues between the calculation of
+This means that there are no atomicity issues between the calculation of
the offset and the operation on the data. Therefore it is not
-necessary to disable preempt or interrupts to ensure that the
+necessary to disable preemption or interrupts to ensure that the
processor is not changed between the calculation of the address and
the operation on the data.
Read-modify-write operations are of particular interest. Frequently
processors have special lower latency instructions that can operate
-without the typical synchronization overhead but still provide some
-sort of relaxed atomicity guarantee. The x86 for example can execute
-RMV (Read Modify Write) instructions like inc/dec/cmpxchg without the
+without the typical synchronization overhead, but still provide some
+sort of relaxed atomicity guarantees. The x86, for example, can execute
+RMW (Read Modify Write) instructions like inc/dec/cmpxchg without the
lock prefix and the associated latency penalty.
Access to the variable without the lock prefix is not synchronized but
processor should be accessing that variable and therefore there are no
concurrency issues with other processors in the system.
+Please note that accesses by remote processors to a per cpu area are
+exceptional situations: they may impact the performance of local RMW
+operations done via this_cpu_*, and remote writes may even affect their
+correctness.
+
+The main use of the this_cpu operations has been to optimize counter
+operations.
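
As a sketch of that counter pattern (the nr_events name is made up for
illustration), events are counted locally with this_cpu_inc() and the
per cpu counters are added up when a total is needed:

	static DEFINE_PER_CPU(unsigned long, nr_events);

	static void record_event(void)
	{
		this_cpu_inc(nr_events);	/* preemption/irq safe */
	}

	static unsigned long total_events(void)
	{
		unsigned long sum = 0;
		int cpu;

		for_each_possible_cpu(cpu)
			sum += per_cpu(nr_events, cpu);
		return sum;
	}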
+
+The following this_cpu() operations with implied preemption protection
+are defined. These operations can be used without worrying about
+preemption and interrupts.
+
+ this_cpu_read(pcp)
+ this_cpu_write(pcp, val)
+ this_cpu_add(pcp, val)
+ this_cpu_and(pcp, val)
+ this_cpu_or(pcp, val)
+ this_cpu_add_return(pcp, val)
+ this_cpu_xchg(pcp, nval)
+ this_cpu_cmpxchg(pcp, oval, nval)
+ this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
+ this_cpu_sub(pcp, val)
+ this_cpu_inc(pcp)
+ this_cpu_dec(pcp)
+ this_cpu_sub_return(pcp, val)
+ this_cpu_inc_return(pcp)
+ this_cpu_dec_return(pcp)
+
+
+Inner workings of this_cpu operations
+-------------------------------------
+
On x86 the fs: or the gs: segment registers contain the base of the
per cpu area. It is then possible to simply use the segment override
to relocate a per cpu relative address to the proper per cpu area for
the processor. A this_cpu read then compiles down to a single instruction
such as
mov ax, gs:[x]
instead of a sequence of calculation of the address and then a fetch
-from that address which occurs with the percpu operations. Before
+from that address which occurs with the per cpu operations. Before
this_cpu_ops such a sequence also required preempt disable/enable to
prevent the kernel from moving the thread to a different processor
while the calculation is performed.
-The main use of the this_cpu operations has been to optimize counter
-operations.
+Consider the following this_cpu operation:
this_cpu_inc(x)
-results in the following single instruction (no lock prefix!)
+The above results in the following single instruction (no lock prefix!)
inc gs:[x]
instead of the following operations required if there is no segment
-register.
+register:
int *y;
int cpu;

cpu = get_cpu();
y = per_cpu_ptr(&x, cpu);
(*y)++;
put_cpu();
-Note that these operations can only be used on percpu data that is
+Note that these operations can only be used on per cpu data that is
reserved for a specific processor. Without disabling preemption in the
surrounding code this_cpu_inc() will only guarantee that one of the
-percpu counters is correctly incremented. However, there is no
+per cpu counters is correctly incremented. However, there is no
guarantee that the OS will not move the process directly before or
after the this_cpu instruction is executed. In general this means that
the value of the individual counters for each processor are
Per cpu variables are used for performance reasons. Bouncing cache
lines can be avoided if multiple processors concurrently go through
the same code paths. Since each processor has its own per cpu
-variables no concurrent cacheline updates take place. The price that
+variables, no concurrent cache line updates take place. The price that
has to be paid for this optimization is the need to add up the per cpu
-counters when the value of the counter is needed.
+counters when the value of a counter is needed.
Special operations:
of the per cpu variable that belongs to the currently executing
processor. this_cpu_ptr avoids multiple steps that the common
get_cpu/put_cpu sequence requires. No processor number is
-available. Instead the offset of the local per cpu area is simply
-added to the percpu offset.
+available. Instead, the offset of the local per cpu area is simply
+added to the per cpu offset.
+Note that this operation is usually used in a code segment when
+preemption has been disabled. The pointer is then used to
+access local per cpu data in a critical section. When preemption
+is re-enabled this pointer is usually no longer useful since it may
+no longer point to per cpu data of the current processor.
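
For example (a sketch with made-up names), a critical section could look
like this:

	struct data {
		int n;
	};
	static DEFINE_PER_CPU(struct data, my_data);

	static void update_local(void)
	{
		struct data *p;

		preempt_disable();
		p = this_cpu_ptr(&my_data);	/* this CPU's instance */
		p->n++;				/* we cannot migrate here */
		preempt_enable();		/* p is no longer safe to use */
	}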
Per cpu variables and offsets
-----------------------------
-Per cpu variables have *offsets* to the beginning of the percpu
+Per cpu variables have *offsets* to the beginning of the per cpu
area. They do not have addresses although they look like that in the
code. Offsets cannot be directly dereferenced. The offset must be
-added to a base pointer of a percpu area of a processor in order to
+added to a base pointer of a per cpu area of a processor in order to
form a valid address.
Therefore the use of x or &x outside of the context of per cpu
operations is invalid and will generally be treated like a NULL
pointer dereference.
-In the context of per cpu operations
+ DEFINE_PER_CPU(int, x);
- x is a per cpu variable. Most this_cpu operations take a cpu
- variable.
+In the context of per cpu operations the above implies that x is a per
+cpu variable. Most this_cpu operations take a cpu variable.
- &x is the *offset* a per cpu variable. this_cpu_ptr() takes
- the offset of a per cpu variable which makes this look a bit
- strange.
+ int __percpu *p = &x;
+&x and hence p is the *offset* of a per cpu variable. this_cpu_ptr()
+takes the offset of a per cpu variable which makes this look a bit
+strange.
Operations on a field of a per cpu structure
struct s __percpu *ps = &p;
- z = this_cpu_dec(ps->m);
+ this_cpu_dec(ps->m);
z = this_cpu_inc_return(ps->n);
Variants of this_cpu ops
-------------------------
-this_cpu ops are interrupt safe. Some architecture do not support
+this_cpu ops are interrupt safe. Some architectures do not support
these per cpu local operations. In that case the operation must be
replaced by code that disables interrupts, then does the operations
-that are guaranteed to be atomic and then reenable interrupts. Doing
+that are guaranteed to be atomic and then re-enable interrupts. Doing
so is expensive. If there are other reasons why the scheduler cannot
change the processor we are executing on then there is no reason to
-disable interrupts. For that purpose the __this_cpu operations are
-provided. For example.
-
- __this_cpu_inc(x);
-
-Will increment x and will not fallback to code that disables
+disable interrupts. For that purpose the following __this_cpu operations
+are provided.
+
+These operations have no guarantee against concurrent interrupts or
+preemption. If a per cpu variable is not used in an interrupt context
+and the scheduler cannot preempt, then they are safe. If any interrupts
+still occur while an operation is in progress and if the interrupt too
+modifies the variable, then RMW actions can not be guaranteed to be
+safe.
+
+ __this_cpu_read(pcp)
+ __this_cpu_write(pcp, val)
+ __this_cpu_add(pcp, val)
+ __this_cpu_and(pcp, val)
+ __this_cpu_or(pcp, val)
+ __this_cpu_add_return(pcp, val)
+ __this_cpu_xchg(pcp, nval)
+ __this_cpu_cmpxchg(pcp, oval, nval)
+ __this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
+ __this_cpu_sub(pcp, val)
+ __this_cpu_inc(pcp)
+ __this_cpu_dec(pcp)
+ __this_cpu_sub_return(pcp, val)
+ __this_cpu_inc_return(pcp)
+ __this_cpu_dec_return(pcp)
+
+
+For example, __this_cpu_inc(x) will increment x and will not fall back
+to code that disables
interrupts on platforms that cannot accomplish atomicity through
address relocation and a Read-Modify-Write operation in the same
instruction.
-
&this_cpu_ptr(pp)->n vs this_cpu_ptr(&pp->n)
--------------------------------------------
The first operation takes the offset and forms an address and then
-adds the offset of the n field.
+adds the offset of the n field. This may result in two add
+instructions emitted by the compiler.
The second one first adds the two offsets and then does the
relocation. IMHO the second form looks cleaner and is consistent with the way
this_cpu_read() and friends are used.
-Christoph Lameter, April 3rd, 2013
+Remote access to per cpu data
+------------------------------
+
+Per cpu data structures are designed to be used by one cpu exclusively.
+If you use the variables as intended, this_cpu_ops() are guaranteed to
+be "atomic" as no other CPU has access to these data structures.
+
+There are special cases where you might need to access per cpu data
+structures remotely. It is usually safe to do a remote read access
+and that is frequently done to summarize counters. Remote write access
+is something which could be problematic because this_cpu ops do not
+have lock semantics. A remote write may interfere with a this_cpu
+RMW operation.
+
+Remote write accesses to percpu data structures are highly discouraged
+unless absolutely necessary. Please consider using an IPI to wake up
+the remote CPU and perform the update to its per cpu area.
+
+To access per cpu data structures remotely, the per_cpu_ptr() function
+is typically used:
+
+
+ DEFINE_PER_CPU(struct data, datap);
+
+ struct data *p = per_cpu_ptr(&datap, cpu);
+
+This makes it explicit that we are getting ready to access a per cpu
+area remotely.
+
+You can also do the following to convert the datap offset to an address
+
+ struct data *p = this_cpu_ptr(&datap);
+
+but, passing of pointers calculated via this_cpu_ptr to other cpus is
+unusual and should be avoided.
+
+Remote accesses are typically only for reading the status of another cpu's
+per cpu data. Write accesses can cause unique problems due to the
+relaxed synchronization requirements for this_cpu operations.
+
+One example that illustrates some concerns with write operations is
+the following scenario that occurs because two per cpu variables
+share a cache line but the relaxed synchronization is applied to
+only one process updating the cache line.
+
+Consider the following example
+
+
+ struct test {
+ atomic_t a;
+ int b;
+ };
+
+ DEFINE_PER_CPU(struct test, onecacheline);
+
+There is some concern about what would happen if the field 'a' is updated
+remotely from one processor and the local processor would use this_cpu ops
+to update field b. Care should be taken that such simultaneous accesses to
+data within the same cache line are avoided. Also costly synchronization
+may be necessary. IPIs are generally recommended in such scenarios instead
+of a remote write to the per cpu area of another processor.
+
+Even in cases where the remote writes are rare, please bear in
+mind that a remote write will evict the cache line from the processor
+that most likely will access it. If the processor wakes up and finds a
+missing local cache line of a per cpu area, its performance and hence
+the wake up times will be affected.
+
+Christoph Lameter, August 4th, 2014
+Pranith Kumar, Aug 2nd, 2014
profiles. If you believe that individual invalidations are being
called too often, you can lower the tunable:
- /sys/debug/kernel/x86/tlb_single_page_flush_ceiling
+ /sys/kernel/debug/x86/tlb_single_page_flush_ceiling
This will cause us to do the global flush for more cases.
Lowering it to 0 will disable the use of the individual flushes.
ARM/Rockchip SoC support
M: Heiko Stuebner <heiko@sntech.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L: linux-rockchip@lists.infradead.org
S: Maintained
+F: arch/arm/boot/dts/rk3*
F: arch/arm/mach-rockchip/
+F: drivers/clk/rockchip/
+F: drivers/i2c/busses/i2c-rk3x.c
F: drivers/*/*rockchip*
+F: drivers/*/*/*rockchip*
+F: sound/soc/rockchip/
ARM/SAMSUNG ARM ARCHITECTURES
M: Ben Dooks <ben-linux@fluff.org>
F: Documentation/filesystems/befs.txt
F: fs/befs/
+BECKHOFF CX5020 ETHERCAT MASTER DRIVER
+M: Dariusz Marcinkiewicz <reksio@newterm.pl>
+L: netdev@vger.kernel.org
+S: Maintained
+F: drivers/net/ethernet/ec_bhf.c
+
BFS FILE SYSTEM
M: "Tigran A. Aivazian" <tigran@aivazian.fsnet.co.uk>
S: Maintained
F: drivers/scsi/bnx2i/
BROADCOM KONA GPIO DRIVER
-M: Markus Mayer <markus.mayer@linaro.org>
+M: Ray Jui <rjui@broadcom.com>
L: bcm-kernel-feedback-list@broadcom.com
S: Supported
F: drivers/gpio/gpio-bcm-kona.c
F: Documentation/devicetree/bindings/panel/
INTEL DRM DRIVERS (excluding Poulsbo, Moorestown and derivative chipsets)
-M: Daniel Vetter <daniel.vetter@ffwll.ch>
+M: Daniel Vetter <daniel.vetter@intel.com>
M: Jani Nikula <jani.nikula@linux.intel.com>
L: intel-gfx@lists.freedesktop.org
L: dri-devel@lists.freedesktop.org
F: include/uapi/drm/tegra_drm.h
F: Documentation/devicetree/bindings/gpu/nvidia,tegra20-host1x.txt
+DRM DRIVERS FOR RENESAS
+M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+L: dri-devel@lists.freedesktop.org
+L: linux-sh@vger.kernel.org
+T: git git://people.freedesktop.org/~airlied/linux
+S: Supported
+F: drivers/gpu/drm/rcar-du/
+F: drivers/gpu/drm/shmobile/
+F: include/linux/platform_data/rcar-du.h
+F: include/linux/platform_data/shmob_drm.h
+
DSBR100 USB FM RADIO DRIVER
M: Alexey Klimov <klimov.linux@gmail.com>
L: linux-media@vger.kernel.org
FREESCALE SOC SOUND DRIVERS
M: Timur Tabi <timur@tabi.org>
+M: Nicolin Chen <nicoleotsuka@gmail.com>
+M: Xiubo Li <Li.Xiubo@freescale.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
L: linuxppc-dev@lists.ozlabs.org
S: Maintained
F: sound/soc/fsl/fsl*
+F: sound/soc/fsl/imx*
F: sound/soc/fsl/mpc8610_hpcd.c
FREEVXFS FILESYSTEM
F: include/uapi/linux/i2c.h
F: include/uapi/linux/i2c-*.h
+I2C ACPI SUPPORT
+M: Mika Westerberg <mika.westerberg@linux.intel.com>
+L: linux-i2c@vger.kernel.org
+L: linux-acpi@vger.kernel.org
+S: Maintained
+F: drivers/i2c/i2c-acpi.c
+
I2C-TAOS-EVM DRIVER
M: Jean Delvare <jdelvare@suse.de>
L: linux-i2c@vger.kernel.org
S: Maintained
F: drivers/media/radio/radio-mr800.c
+MRF24J40 IEEE 802.15.4 RADIO DRIVER
+M: Alan Ott <alan@signal11.us>
+L: linux-wpan@vger.kernel.org
+S: Maintained
+F: drivers/net/ieee802154/mrf24j40.c
+
MSI LAPTOP SUPPORT
M: "Lee, Chun-Yi" <jlee@suse.com>
L: platform-driver-x86@vger.kernel.org
F: drivers/scsi/nsp32*
NTB DRIVER
-M: Jon Mason <jon.mason@intel.com>
+M: Jon Mason <jdmason@kudzu.us>
+M: Dave Jiang <dave.jiang@intel.com>
S: Supported
W: https://github.com/jonmason/ntb/wiki
T: git git://github.com/jonmason/ntb.git
F: Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
F: drivers/pci/host/pci-tegra.c
+PCI DRIVER FOR TI DRA7XX
+M: Kishon Vijay Abraham I <kishon@ti.com>
+L: linux-omap@vger.kernel.org
+L: linux-pci@vger.kernel.org
+S: Supported
+F: Documentation/devicetree/bindings/pci/ti-pci.txt
+F: drivers/pci/host/pci-dra7xx.c
+
PCI DRIVER FOR RENESAS R-CAR
M: Simon Horman <horms@verge.net.au>
L: linux-pci@vger.kernel.org
F: drivers/pinctrl/sh-pfc/
PIN CONTROLLER - SAMSUNG
-M: Tomasz Figa <t.figa@samsung.com>
+M: Tomasz Figa <tomasz.figa@gmail.com>
M: Thomas Abraham <thomas.abraham@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
F: drivers/media/i2c/s5k5baf.c
SAMSUNG SOC CLOCK DRIVERS
-M: Tomasz Figa <t.figa@samsung.com>
+M: Sylwester Nawrocki <s.nawrocki@samsung.com>
+M: Tomasz Figa <tomasz.figa@gmail.com>
S: Supported
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
F: drivers/clk/samsung/
L: netdev@vger.kernel.org
F: drivers/net/ethernet/samsung/sxgbe/
+SAMSUNG USB2 PHY DRIVER
+M: Kamil Debski <k.debski@samsung.com>
+L: linux-kernel@vger.kernel.org
+S: Supported
+F: Documentation/devicetree/bindings/phy/samsung-phy.txt
+F: Documentation/phy/samsung-usb2.txt
+F: drivers/phy/phy-exynos4210-usb2.c
+F: drivers/phy/phy-exynos4x12-usb2.c
+F: drivers/phy/phy-exynos5250-usb2.c
+F: drivers/phy/phy-s5pv210-usb2.c
+F: drivers/phy/phy-samsung-usb2.c
+F: drivers/phy/phy-samsung-usb2.h
+
SERIAL DRIVERS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-serial@vger.kernel.org
F: Documentation/usb/ohci.txt
F: drivers/usb/host/ohci*
+USB OVER IP DRIVER
+M: Valentina Manea <valentina.manea.m@gmail.com>
+M: Shuah Khan <shuah.kh@samsung.com>
+L: linux-usb@vger.kernel.org
+S: Maintained
+F: drivers/usb/usbip/
+F: tools/usb/usbip/
+
USB PEGASUS DRIVER
M: Petko Manolov <petkan@nucleusys.com>
L: linux-usb@vger.kernel.org
F: arch/x86/
X86 PLATFORM DRIVERS
-M: Matthew Garrett <matthew.garrett@nebula.com>
+M: Darren Hart <dvhart@infradead.org>
L: platform-driver-x86@vger.kernel.org
-T: git git://cavan.codon.org.uk/platform-drivers-x86.git
+T: git git://git.infradead.org/users/dvhart/linux-platform-drivers-x86.git
S: Maintained
F: drivers/platform/x86/
VERSION = 3
PATCHLEVEL = 17
SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc5
NAME = Shuffling Zombie Juror
# *DOCUMENTATION*
#define outb_p outb
#define outw_p outw
#define outl_p outl
-#define readb_relaxed(addr) __raw_readb(addr)
-#define readw_relaxed(addr) __raw_readw(addr)
-#define readl_relaxed(addr) __raw_readl(addr)
-#define readq_relaxed(addr) __raw_readq(addr)
+#define readb_relaxed(addr) __raw_readb(addr)
+#define readw_relaxed(addr) __raw_readw(addr)
+#define readl_relaxed(addr) __raw_readl(addr)
+#define readq_relaxed(addr) __raw_readq(addr)
+#define writeb_relaxed(b, addr) __raw_writeb(b, addr)
+#define writew_relaxed(b, addr) __raw_writew(b, addr)
+#define writel_relaxed(b, addr) __raw_writel(b, addr)
+#define writeq_relaxed(b, addr) __raw_writeq(b, addr)
#define mmiowb()
#include <uapi/asm/unistd.h>
-#define NR_SYSCALLS 508
+#define NR_SYSCALLS 511
#define __ARCH_WANT_OLD_READDIR
#define __ARCH_WANT_STAT64
#define __NR_process_vm_writev 505
#define __NR_kcmp 506
#define __NR_finit_module 507
+#define __NR_sched_setattr 508
+#define __NR_sched_getattr 509
+#define __NR_renameat2 510
#endif /* _UAPI_ALPHA_UNISTD_H */
.quad sys_process_vm_writev /* 505 */
.quad sys_kcmp
.quad sys_finit_module
+ .quad sys_sched_setattr
+ .quad sys_sched_getattr
+ .quad sys_renameat2 /* 510 */
.size sys_call_table, . - sys_call_table
.type sys_call_table, @object
static void __ic_line_inv_vaddr_helper(void *info)
{
- struct ic_inv *ic_inv_args = (struct ic_inv_args *) info;
+ struct ic_inv_args *ic_inv = info;
__ic_line_inv_vaddr_local(ic_inv->paddr, ic_inv->vaddr, ic_inv->sz);
}
tot_sz -= sz;
}
}
+EXPORT_SYMBOL(flush_icache_range);
/*
* General purpose helper to make I and D cache lines consistent.
config KEXEC
bool "Kexec system call (EXPERIMENTAL)"
depends on (!SMP || PM_SLEEP_SMP)
- select CRYPTO
- select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
usb1: usb@48390000 {
compatible = "synopsys,dwc3";
- reg = <0x48390000 0x17000>;
+ reg = <0x48390000 0x10000>;
interrupts = <GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
phys = <&usb2_phy1>;
phy-names = "usb2-phy";
usb2: usb@483d0000 {
compatible = "synopsys,dwc3";
- reg = <0x483d0000 0x17000>;
+ reg = <0x483d0000 0x10000>;
interrupts = <GIC_SPI 174 IRQ_TYPE_LEVEL_HIGH>;
phys = <&usb2_phy2>;
phy-names = "usb2-phy";
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&i2c0_pins>;
- clock-frequency = <400000>;
+ clock-frequency = <100000>;
tps65218: tps65218@24 {
reg = <0x24>;
ranges = <0 0 0 0x01000000>; /* minimum GPMC partition = 16MB */
nand@0,0 {
reg = <0 0 4>; /* device IO registers */
- ti,nand-ecc-opt = "bch8";
+ ti,nand-ecc-opt = "bch16";
ti,elm-id = <&elm>;
nand-bus-width = <8>;
gpmc,device-width = <1>;
gpmc,rd-cycle-ns = <40>;
gpmc,wr-cycle-ns = <40>;
gpmc,wait-pin = <0>;
- gpmc,wait-on-read;
- gpmc,wait-on-write;
gpmc,bus-turnaround-ns = <0>;
gpmc,cycle2cycle-delay-ns = <0>;
gpmc,clk-activation-ns = <0>;
};
&gpmc {
- status = "okay";
+ status = "okay"; /* Disable QSPI when enabling GPMC (NAND) */
pinctrl-names = "default";
pinctrl-0 = <&nand_flash_x8>;
ranges = <0 0 0x08000000 0x10000000>; /* CS0: NAND */
nand@0,0 {
reg = <0 0 0>; /* CS0, offset 0 */
- ti,nand-ecc-opt = "bch8";
+ ti,nand-ecc-opt = "bch16";
ti,elm-id = <&elm>;
nand-bus-width = <8>;
gpmc,device-width = <1>;
gpmc,access-ns = <30>; /* tCEA + 4*/
gpmc,rd-cycle-ns = <40>;
gpmc,wr-cycle-ns = <40>;
- gpmc,wait-on-read = "true";
- gpmc,wait-on-write = "true";
+ gpmc,wait-pin = <0>;
gpmc,bus-turnaround-ns = <0>;
gpmc,cycle2cycle-delay-ns = <0>;
gpmc,clk-activation-ns = <0>;
};
&qspi {
- status = "okay";
+ status = "disabled"; /* Disable GPMC (NAND) when enabling QSPI */
pinctrl-names = "default";
pinctrl-0 = <&qspi1_default>;
usb: usbck {
compatible = "atmel,at91rm9200-clk-usb";
#clock-cells = <0>;
- atmel,clk-divisors = <1 2>;
+ atmel,clk-divisors = <1 2 0 0>;
clocks = <&pllb>;
};
};
pllb: pllbck {
+ compatible = "atmel,at91sam9g20-clk-pllb";
atmel,clk-input-range = <2000000 32000000>;
atmel,pll-clk-output-ranges = <30000000 100000000 0 0>;
};
/dts-v1/;
#include "dra74x.dtsi"
+#include <dt-bindings/gpio/gpio.h>
/ {
model = "TI DRA742";
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
+
+ vtt_fixed: fixedregulator-vtt {
+ compatible = "regulator-fixed";
+ regulator-name = "vtt_fixed";
+ regulator-min-microvolt = <1350000>;
+ regulator-max-microvolt = <1350000>;
+ regulator-always-on;
+ regulator-boot-on;
+ enable-active-high;
+ gpio = <&gpio7 11 GPIO_ACTIVE_HIGH>;
+ };
};
&dra7_pmx_core {
+ pinctrl-names = "default";
+ pinctrl-0 = <&vtt_pin>;
+
+ vtt_pin: pinmux_vtt_pin {
+ pinctrl-single,pins = <
+ 0x3b4 (PIN_OUTPUT | MUX_MODE14) /* spi1_cs1.gpio7_11 */
+ >;
+ };
+
i2c1_pins: pinmux_i2c1_pins {
pinctrl-single,pins = <
0x400 (PIN_INPUT | MUX_MODE0) /* i2c1_sda */
i2c3_pins: pinmux_i2c3_pins {
pinctrl-single,pins = <
- 0x410 (PIN_INPUT | MUX_MODE0) /* i2c3_sda */
- 0x414 (PIN_INPUT | MUX_MODE0) /* i2c3_scl */
+ 0x288 (PIN_INPUT | MUX_MODE9) /* gpio6_14.i2c3_sda */
+ 0x28c (PIN_INPUT | MUX_MODE9) /* gpio6_15.i2c3_scl */
>;
};
mcspi1_pins: pinmux_mcspi1_pins {
pinctrl-single,pins = <
- 0x3a4 (PIN_INPUT | MUX_MODE0) /* spi2_clk */
- 0x3a8 (PIN_INPUT | MUX_MODE0) /* spi2_d1 */
- 0x3ac (PIN_INPUT | MUX_MODE0) /* spi2_d0 */
- 0x3b0 (PIN_INPUT_SLEW | MUX_MODE0) /* spi2_cs0 */
- 0x3b4 (PIN_INPUT_SLEW | MUX_MODE0) /* spi2_cs1 */
- 0x3b8 (PIN_INPUT_SLEW | MUX_MODE6) /* spi2_cs2 */
- 0x3bc (PIN_INPUT_SLEW | MUX_MODE6) /* spi2_cs3 */
+ 0x3a4 (PIN_INPUT | MUX_MODE0) /* spi1_sclk */
+ 0x3a8 (PIN_INPUT | MUX_MODE0) /* spi1_d1 */
+ 0x3ac (PIN_INPUT | MUX_MODE0) /* spi1_d0 */
+ 0x3b0 (PIN_INPUT_SLEW | MUX_MODE0) /* spi1_cs0 */
+ 0x3b8 (PIN_INPUT_SLEW | MUX_MODE6) /* spi1_cs2.hdmi1_hpd */
+ 0x3bc (PIN_INPUT_SLEW | MUX_MODE6) /* spi1_cs3.hdmi1_cec */
>;
};
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&i2c3_pins>;
- clock-frequency = <3400000>;
+ clock-frequency = <400000>;
};
&mcspi1 {
reg = <0x001c0000 0x00020000>;
};
partition@7 {
- label = "NAND.u-boot-env";
+ label = "NAND.u-boot-env.backup1";
reg = <0x001e0000 0x00020000>;
};
partition@8 {
&usb2_phy2 {
phy-supply = <&ldousb_reg>;
};
+
+&gpio7 {
+ ti,no-reset-on-init;
+ ti,no-idle-on-init;
+};
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
- #interrupt-cells = <1>;
+ #interrupt-cells = <2>;
};
gpio2: gpio@48055000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
- #interrupt-cells = <1>;
+ #interrupt-cells = <2>;
};
gpio3: gpio@48057000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
- #interrupt-cells = <1>;
+ #interrupt-cells = <2>;
};
gpio4: gpio@48059000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
- #interrupt-cells = <1>;
+ #interrupt-cells = <2>;
};
gpio5: gpio@4805b000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
- #interrupt-cells = <1>;
+ #interrupt-cells = <2>;
};
gpio6: gpio@4805d000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
- #interrupt-cells = <1>;
+ #interrupt-cells = <2>;
};
gpio7: gpio@48051000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
- #interrupt-cells = <1>;
+ #interrupt-cells = <2>;
};
gpio8: gpio@48053000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
- #interrupt-cells = <1>;
+ #interrupt-cells = <2>;
};
uart1: serial@4806a000 {
reg = <0x10020000 0x4000>;
};
+ mipi_phy: video-phy@10020710 {
+ compatible = "samsung,s5pv210-mipi-video-phy";
+ reg = <0x10020710 8>;
+ #phy-cells = <1>;
+ };
+
pd_cam: cam-power-domain@10023C00 {
compatible = "samsung,exynos4210-pd";
reg = <0x10023C00 0x20>;
interrupts = <0 240 0>;
};
+ fimd: fimd@11c00000 {
+ compatible = "samsung,exynos3250-fimd";
+ reg = <0x11c00000 0x30000>;
+ interrupt-names = "fifo", "vsync", "lcd_sys";
+ interrupts = <0 84 0>, <0 85 0>, <0 86 0>;
+ clocks = <&cmu CLK_SCLK_FIMD0>, <&cmu CLK_FIMD0>;
+ clock-names = "sclk_fimd", "fimd";
+ samsung,power-domain = <&pd_lcd0>;
+ samsung,sysreg = <&sys_reg>;
+ status = "disabled";
+ };
+
+ dsi_0: dsi@11C80000 {
+ compatible = "samsung,exynos3250-mipi-dsi";
+ reg = <0x11C80000 0x10000>;
+ interrupts = <0 83 0>;
+ samsung,phy-type = <0>;
+ samsung,power-domain = <&pd_lcd0>;
+ phys = <&mipi_phy 1>;
+ phy-names = "dsim";
+ clocks = <&cmu CLK_DSIM0>, <&cmu CLK_SCLK_MIPI0>;
+ clock-names = "bus_clk", "pll_clk";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
mshc_0: mshc@12510000 {
compatible = "samsung,exynos5250-dw-mshc";
reg = <0x12510000 0x1000>;
i2c@13860000 {
pinctrl-0 = <&i2c0_bus>;
pinctrl-names = "default";
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <400000>;
status = "okay";
usb3503: usb3503@08 {
max77686: pmic@09 {
compatible = "maxim,max77686";
+ interrupt-parent = <&gpx3>;
+ interrupts = <2 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&max77686_irq>;
reg = <0x09>;
#clock-cells = <1>;
samsung,pins = "gpx1-3";
samsung,pin-pud = <0>;
};
+
+ max77686_irq: max77686-irq {
+ samsung,pins = "gpx3-2";
+ samsung,pin-function = <0>;
+ samsung,pin-pud = <0>;
+ samsung,pin-drv = <0>;
+ };
};
MX53_PAD_CSI0_DAT9__I2C1_SCL 0x400001ec
>;
};
+
+ pinctrl_pmic: pmicgrp {
+ fsl,pins = <
+ MX53_PAD_CSI0_DAT5__GPIO5_23 0x1e4 /* IRQ */
+ >;
+ };
};
};
pmic: mc34708@8 {
compatible = "fsl,mc34708";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pmic>;
reg = <0x08>;
interrupt-parent = <&gpio5>;
interrupts = <23 0x8>;
compatible = "fsl,imx53-vpu";
reg = <0x63ff4000 0x1000>;
interrupts = <9>;
- clocks = <&clks IMX5_CLK_VPU_GATE>,
+ clocks = <&clks IMX5_CLK_VPU_REFERENCE_GATE>,
<&clks IMX5_CLK_VPU_GATE>;
clock-names = "per", "ahb";
resets = <&src 1>;
sound-spdif {
compatible = "fsl,imx-audio-spdif";
- model = "imx-spdif";
+ model = "On-board SPDIF";
/* IMX6 doesn't implement this yet */
spdif-controller = <&spdif>;
spdif-out;
};
&usbh1 {
+ disable-over-current;
vbus-supply = <&reg_usbh1_vbus>;
status = "okay";
};
&usbotg {
+ disable-over-current;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_hummingboard_usbotg_id>;
vbus-supply = <&reg_usbotg_vbus>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_enet>;
phy-mode = "rgmii";
- phy-reset-gpios = <&gpio3 23 0>;
+ phy-reset-gpios = <&gpio1 25 0>;
phy-supply = <&vgen2_1v2_eth>;
status = "okay";
};
MX6QDL_PAD_ENET_REF_CLK__ENET_TX_CLK 0x1b0b0
MX6QDL_PAD_ENET_MDIO__ENET_MDIO 0x1b0b0
MX6QDL_PAD_ENET_MDC__ENET_MDC 0x1b0b0
+ MX6QDL_PAD_ENET_CRS_DV__GPIO1_IO25 0x1b0b0
MX6QDL_PAD_GPIO_16__ENET_REF_CLK 0x4001b0a8
>;
};
sound-spdif {
compatible = "fsl,imx-audio-spdif";
- model = "imx-spdif";
+ model = "Integrated SPDIF";
/* IMX6 doesn't implement this yet */
spdif-controller = <&spdif>;
spdif-out;
fsl,pins = <MX6QDL_PAD_GPIO_17__SPDIF_OUT 0x13091>;
};
+ pinctrl_cubox_i_usbh1: cubox-i-usbh1 {
+ fsl,pins = <MX6QDL_PAD_GPIO_3__USB_H1_OC 0x1b0b0>;
+ };
+
pinctrl_cubox_i_usbh1_vbus: cubox-i-usbh1-vbus {
fsl,pins = <MX6QDL_PAD_GPIO_0__GPIO1_IO00 0x4001b0b0>;
};
- pinctrl_cubox_i_usbotg_id: cubox-i-usbotg-id {
+ pinctrl_cubox_i_usbotg: cubox-i-usbotg {
/*
- * The Cubox-i pulls this low, but as it's pointless
+ * The Cubox-i pulls ID low, but as it's pointless
* leaving it as a pull-up, even if it is just 10uA.
*/
- fsl,pins = <MX6QDL_PAD_GPIO_1__USB_OTG_ID 0x13059>;
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_1__USB_OTG_ID 0x13059
+ MX6QDL_PAD_KEY_COL4__USB_OTG_OC 0x1b0b0
+ >;
};
pinctrl_cubox_i_usbotg_vbus: cubox-i-usbotg-vbus {
};
&usbh1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_cubox_i_usbh1>;
vbus-supply = <&reg_usbh1_vbus>;
status = "okay";
};
&usbotg {
pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_cubox_i_usbotg_id>;
+ pinctrl-0 = <&pinctrl_cubox_i_usbotg>;
vbus-supply = <&reg_usbotg_vbus>;
status = "okay";
};
enet {
pinctrl_microsom_enet_ar8035: microsom-enet-ar8035 {
fsl,pins = <
- MX6QDL_PAD_ENET_MDIO__ENET_MDIO 0x1b0b0
+ MX6QDL_PAD_ENET_MDIO__ENET_MDIO 0x1b8b0
MX6QDL_PAD_ENET_MDC__ENET_MDC 0x1b0b0
/* AR8035 reset */
MX6QDL_PAD_KEY_ROW4__GPIO4_IO15 0x130b0
#define MX6SX_PAD_GPIO1_IO07__USDHC2_WP 0x0030 0x0378 0x0870 0x1 0x1
#define MX6SX_PAD_GPIO1_IO07__ENET2_MDIO 0x0030 0x0378 0x0770 0x2 0x0
#define MX6SX_PAD_GPIO1_IO07__AUDMUX_MCLK 0x0030 0x0378 0x0000 0x3 0x0
-#define MX6SX_PAD_GPIO1_IO07__UART1_CTS_B 0x0030 0x0378 0x082C 0x4 0x1
+#define MX6SX_PAD_GPIO1_IO07__UART1_CTS_B 0x0030 0x0378 0x0000 0x4 0x0
#define MX6SX_PAD_GPIO1_IO07__GPIO1_IO_7 0x0030 0x0378 0x0000 0x5 0x0
#define MX6SX_PAD_GPIO1_IO07__SRC_EARLY_RESET 0x0030 0x0378 0x0000 0x6 0x0
#define MX6SX_PAD_GPIO1_IO07__DCIC2_OUT 0x0030 0x0378 0x0000 0x7 0x0
#define MX6SX_PAD_GPIO1_IO09__WDOG2_WDOG_B 0x0038 0x0380 0x0000 0x1 0x0
#define MX6SX_PAD_GPIO1_IO09__SDMA_EXT_EVENT_1 0x0038 0x0380 0x0820 0x2 0x0
#define MX6SX_PAD_GPIO1_IO09__CCM_OUT0 0x0038 0x0380 0x0000 0x3 0x0
-#define MX6SX_PAD_GPIO1_IO09__UART2_CTS_B 0x0038 0x0380 0x0834 0x4 0x1
+#define MX6SX_PAD_GPIO1_IO09__UART2_CTS_B 0x0038 0x0380 0x0000 0x4 0x0
#define MX6SX_PAD_GPIO1_IO09__GPIO1_IO_9 0x0038 0x0380 0x0000 0x5 0x0
#define MX6SX_PAD_GPIO1_IO09__SRC_INT_BOOT 0x0038 0x0380 0x0000 0x6 0x0
#define MX6SX_PAD_GPIO1_IO09__OBSERVE_MUX_OUT_4 0x0038 0x0380 0x0000 0x7 0x0
#define MX6SX_PAD_CSI_DATA07__ESAI_TX3_RX2 0x0068 0x03B0 0x079C 0x1 0x1
#define MX6SX_PAD_CSI_DATA07__I2C4_SDA 0x0068 0x03B0 0x07C4 0x2 0x2
#define MX6SX_PAD_CSI_DATA07__KPP_ROW_7 0x0068 0x03B0 0x07DC 0x3 0x0
-#define MX6SX_PAD_CSI_DATA07__UART6_CTS_B 0x0068 0x03B0 0x0854 0x4 0x1
+#define MX6SX_PAD_CSI_DATA07__UART6_CTS_B 0x0068 0x03B0 0x0000 0x4 0x0
#define MX6SX_PAD_CSI_DATA07__GPIO1_IO_21 0x0068 0x03B0 0x0000 0x5 0x0
#define MX6SX_PAD_CSI_DATA07__WEIM_DATA_16 0x0068 0x03B0 0x0000 0x6 0x0
#define MX6SX_PAD_CSI_DATA07__DCIC1_OUT 0x0068 0x03B0 0x0000 0x7 0x0
#define MX6SX_PAD_CSI_VSYNC__CSI1_VSYNC 0x0078 0x03C0 0x0708 0x0 0x0
#define MX6SX_PAD_CSI_VSYNC__ESAI_TX5_RX0 0x0078 0x03C0 0x07A4 0x1 0x1
#define MX6SX_PAD_CSI_VSYNC__AUDMUX_AUD6_RXD 0x0078 0x03C0 0x0674 0x2 0x1
-#define MX6SX_PAD_CSI_VSYNC__UART4_CTS_B 0x0078 0x03C0 0x0844 0x3 0x3
+#define MX6SX_PAD_CSI_VSYNC__UART4_CTS_B 0x0078 0x03C0 0x0000 0x3 0x0
#define MX6SX_PAD_CSI_VSYNC__MQS_RIGHT 0x0078 0x03C0 0x0000 0x4 0x0
#define MX6SX_PAD_CSI_VSYNC__GPIO1_IO_25 0x0078 0x03C0 0x0000 0x5 0x0
#define MX6SX_PAD_CSI_VSYNC__WEIM_DATA_24 0x0078 0x03C0 0x0000 0x6 0x0
#define MX6SX_PAD_ENET2_TX_CLK__ENET2_TX_CLK 0x00A0 0x03E8 0x0000 0x0 0x0
#define MX6SX_PAD_ENET2_TX_CLK__ENET2_REF_CLK2 0x00A0 0x03E8 0x076C 0x1 0x1
#define MX6SX_PAD_ENET2_TX_CLK__I2C3_SDA 0x00A0 0x03E8 0x07BC 0x2 0x1
-#define MX6SX_PAD_ENET2_TX_CLK__UART1_CTS_B 0x00A0 0x03E8 0x082C 0x3 0x3
+#define MX6SX_PAD_ENET2_TX_CLK__UART1_CTS_B 0x00A0 0x03E8 0x0000 0x3 0x0
#define MX6SX_PAD_ENET2_TX_CLK__MLB_CLK 0x00A0 0x03E8 0x07E8 0x4 0x1
#define MX6SX_PAD_ENET2_TX_CLK__GPIO2_IO_9 0x00A0 0x03E8 0x0000 0x5 0x0
#define MX6SX_PAD_ENET2_TX_CLK__USB_OTG2_PWR 0x00A0 0x03E8 0x0000 0x6 0x0
#define MX6SX_PAD_KEY_COL4__SAI2_RX_BCLK 0x00B4 0x03FC 0x0808 0x7 0x0
#define MX6SX_PAD_KEY_ROW0__KPP_ROW_0 0x00B8 0x0400 0x0000 0x0 0x0
#define MX6SX_PAD_KEY_ROW0__USDHC3_WP 0x00B8 0x0400 0x0000 0x1 0x0
-#define MX6SX_PAD_KEY_ROW0__UART6_CTS_B 0x00B8 0x0400 0x0854 0x2 0x3
+#define MX6SX_PAD_KEY_ROW0__UART6_CTS_B 0x00B8 0x0400 0x0000 0x2 0x0
#define MX6SX_PAD_KEY_ROW0__ECSPI1_MOSI 0x00B8 0x0400 0x0718 0x3 0x0
#define MX6SX_PAD_KEY_ROW0__AUDMUX_AUD5_TXD 0x00B8 0x0400 0x0660 0x4 0x0
#define MX6SX_PAD_KEY_ROW0__GPIO2_IO_15 0x00B8 0x0400 0x0000 0x5 0x0
#define MX6SX_PAD_KEY_ROW1__M4_NMI 0x00BC 0x0404 0x0000 0x8 0x0
#define MX6SX_PAD_KEY_ROW2__KPP_ROW_2 0x00C0 0x0408 0x0000 0x0 0x0
#define MX6SX_PAD_KEY_ROW2__USDHC4_WP 0x00C0 0x0408 0x0878 0x1 0x1
-#define MX6SX_PAD_KEY_ROW2__UART5_CTS_B 0x00C0 0x0408 0x084C 0x2 0x3
+#define MX6SX_PAD_KEY_ROW2__UART5_CTS_B 0x00C0 0x0408 0x0000 0x2 0x0
#define MX6SX_PAD_KEY_ROW2__CAN1_RX 0x00C0 0x0408 0x068C 0x3 0x1
#define MX6SX_PAD_KEY_ROW2__CANFD_RX1 0x00C0 0x0408 0x0694 0x4 0x1
#define MX6SX_PAD_KEY_ROW2__GPIO2_IO_17 0x00C0 0x0408 0x0000 0x5 0x0
#define MX6SX_PAD_NAND_DATA05__RAWNAND_DATA05 0x0164 0x04AC 0x0000 0x0 0x0
#define MX6SX_PAD_NAND_DATA05__USDHC2_DATA5 0x0164 0x04AC 0x0000 0x1 0x0
#define MX6SX_PAD_NAND_DATA05__QSPI2_B_DQS 0x0164 0x04AC 0x0000 0x2 0x0
-#define MX6SX_PAD_NAND_DATA05__UART3_CTS_B 0x0164 0x04AC 0x083C 0x3 0x1
+#define MX6SX_PAD_NAND_DATA05__UART3_CTS_B 0x0164 0x04AC 0x0000 0x3 0x0
#define MX6SX_PAD_NAND_DATA05__AUDMUX_AUD4_RXC 0x0164 0x04AC 0x064C 0x4 0x0
#define MX6SX_PAD_NAND_DATA05__GPIO4_IO_9 0x0164 0x04AC 0x0000 0x5 0x0
#define MX6SX_PAD_NAND_DATA05__WEIM_AD_5 0x0164 0x04AC 0x0000 0x6 0x0
#define MX6SX_PAD_QSPI1A_SS1_B__SIM_M_HADDR_12 0x019C 0x04E4 0x0000 0x7 0x0
#define MX6SX_PAD_QSPI1A_SS1_B__SDMA_DEBUG_PC_3 0x019C 0x04E4 0x0000 0x9 0x0
#define MX6SX_PAD_QSPI1B_DATA0__QSPI1_B_DATA_0 0x01A0 0x04E8 0x0000 0x0 0x0
-#define MX6SX_PAD_QSPI1B_DATA0__UART3_CTS_B 0x01A0 0x04E8 0x083C 0x1 0x4
+#define MX6SX_PAD_QSPI1B_DATA0__UART3_CTS_B 0x01A0 0x04E8 0x0000 0x1 0x0
#define MX6SX_PAD_QSPI1B_DATA0__ECSPI3_MOSI 0x01A0 0x04E8 0x0738 0x2 0x1
#define MX6SX_PAD_QSPI1B_DATA0__ESAI_RX_FS 0x01A0 0x04E8 0x0778 0x3 0x2
#define MX6SX_PAD_QSPI1B_DATA0__CSI1_DATA_22 0x01A0 0x04E8 0x06F4 0x4 0x1
#define MX6SX_PAD_SD1_DATA2__AUDMUX_AUD5_TXFS 0x0230 0x0578 0x0670 0x1 0x1
#define MX6SX_PAD_SD1_DATA2__PWM3_OUT 0x0230 0x0578 0x0000 0x2 0x0
#define MX6SX_PAD_SD1_DATA2__GPT_COMPARE2 0x0230 0x0578 0x0000 0x3 0x0
-#define MX6SX_PAD_SD1_DATA2__UART2_CTS_B 0x0230 0x0578 0x0834 0x4 0x2
+#define MX6SX_PAD_SD1_DATA2__UART2_CTS_B 0x0230 0x0578 0x0000 0x4 0x0
#define MX6SX_PAD_SD1_DATA2__GPIO6_IO_4 0x0230 0x0578 0x0000 0x5 0x0
#define MX6SX_PAD_SD1_DATA2__ECSPI4_RDY 0x0230 0x0578 0x0000 0x6 0x0
#define MX6SX_PAD_SD1_DATA2__CCM_OUT0 0x0230 0x0578 0x0000 0x7 0x0
#define MX6SX_PAD_SD2_DATA3__VADC_CLAMP_CURRENT_3 0x024C 0x0594 0x0000 0x8 0x0
#define MX6SX_PAD_SD2_DATA3__MMDC_DEBUG_31 0x024C 0x0594 0x0000 0x9 0x0
#define MX6SX_PAD_SD3_CLK__USDHC3_CLK 0x0250 0x0598 0x0000 0x0 0x0
-#define MX6SX_PAD_SD3_CLK__UART4_CTS_B 0x0250 0x0598 0x0844 0x1 0x0
+#define MX6SX_PAD_SD3_CLK__UART4_CTS_B 0x0250 0x0598 0x0000 0x1 0x0
#define MX6SX_PAD_SD3_CLK__ECSPI4_SCLK 0x0250 0x0598 0x0740 0x2 0x0
#define MX6SX_PAD_SD3_CLK__AUDMUX_AUD6_RXFS 0x0250 0x0598 0x0680 0x3 0x0
#define MX6SX_PAD_SD3_CLK__LCDIF2_VSYNC 0x0250 0x0598 0x0000 0x4 0x0
#define MX6SX_PAD_SD3_DATA7__USDHC3_DATA7 0x0274 0x05BC 0x0000 0x0 0x0
#define MX6SX_PAD_SD3_DATA7__CAN1_RX 0x0274 0x05BC 0x068C 0x1 0x0
#define MX6SX_PAD_SD3_DATA7__CANFD_RX1 0x0274 0x05BC 0x0694 0x2 0x0
-#define MX6SX_PAD_SD3_DATA7__UART3_CTS_B 0x0274 0x05BC 0x083C 0x3 0x3
+#define MX6SX_PAD_SD3_DATA7__UART3_CTS_B 0x0274 0x05BC 0x0000 0x3 0x0
#define MX6SX_PAD_SD3_DATA7__LCDIF2_DATA_5 0x0274 0x05BC 0x0000 0x4 0x0
#define MX6SX_PAD_SD3_DATA7__GPIO7_IO_9 0x0274 0x05BC 0x0000 0x5 0x0
#define MX6SX_PAD_SD3_DATA7__ENET1_1588_EVENT0_IN 0x0274 0x05BC 0x0000 0x6 0x0
#define MX6SX_PAD_SD4_DATA6__SDMA_DEBUG_EVENT_CHANNEL_1 0x0298 0x05E0 0x0000 0x9 0x0
#define MX6SX_PAD_SD4_DATA7__USDHC4_DATA7 0x029C 0x05E4 0x0000 0x0 0x0
#define MX6SX_PAD_SD4_DATA7__RAWNAND_DATA08 0x029C 0x05E4 0x0000 0x1 0x0
-#define MX6SX_PAD_SD4_DATA7__UART5_CTS_B 0x029C 0x05E4 0x084C 0x2 0x1
+#define MX6SX_PAD_SD4_DATA7__UART5_CTS_B 0x029C 0x05E4 0x0000 0x2 0x0
#define MX6SX_PAD_SD4_DATA7__ECSPI3_SS0 0x029C 0x05E4 0x073C 0x3 0x0
#define MX6SX_PAD_SD4_DATA7__LCDIF2_DATA_15 0x029C 0x05E4 0x0000 0x4 0x0
#define MX6SX_PAD_SD4_DATA7__GPIO6_IO_21 0x029C 0x05E4 0x0000 0x5 0x0
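The UART CTS_B hunks above all make the same change: the third cell drops to 0x0000 and the fifth to 0x0, meaning no IOMUXC select-input register gets programmed for those mux options. For reference, each MX6SX_PAD_* macro carries five cells in the order used by the fsl,imx pinctrl binding; a minimal C rendering of that layout (an illustrative sketch with assumed field names, not kernel code):

/* Sketch of the five cells behind each MX6SX_PAD_* macro
 * (illustration only; the struct and its field names are assumptions): */
struct imx_pad_cells {
	unsigned int mux_reg;	/* IOMUXC SW_MUX_CTL register offset */
	unsigned int conf_reg;	/* IOMUXC SW_PAD_CTL register offset */
	unsigned int input_reg;	/* select-input offset; 0x0000 = none */
	unsigned int mux_mode;	/* ALT function number for the pad */
	unsigned int input_val;	/* value written to the select input */
};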
&uart3 {
pinctrl-names = "default";
pinctrl-0 = <&uart3_pins>;
+ interrupts-extended = <&intc 74 &omap3_pmx_core OMAP3_UART3_RX>;
};
&gpio1 {
};
tv: connector {
- compatible = "composite-connector";
+ compatible = "composite-video-connector";
label = "tv";
port {
};
twl_power: power {
- compatible = "ti,twl4030-power-n900";
+ compatible = "ti,twl4030-power-n900", "ti,twl4030-power-idle-osc-off";
ti,use_poweroff;
};
};
#address-cells = <1>;
#size-cells = <1>;
reg = <1 0 0x08000000>;
- ti,nand-ecc-opt = "ham1";
+ ti,nand-ecc-opt = "sw";
nand-bus-width = <8>;
gpmc,cs-on-ns = <0>;
gpmc,cs-rd-off-ns = <36>;
ti,bit-shift = <0x1e>;
reg = <0x0d00>;
ti,set-bit-to-disable;
+ ti,set-rate-parent;
};
dpll4_m6_ck: dpll4_m6_ck {
l3_iclk_div: l3_iclk_div {
#clock-cells = <0>;
- compatible = "fixed-factor-clock";
+ compatible = "ti,divider-clock";
+ ti,max-div = <2>;
+ ti,bit-shift = <4>;
+ reg = <0x100>;
clocks = <&dpll_core_h12x2_ck>;
- clock-mult = <1>;
- clock-div = <1>;
+ ti,index-power-of-two;
};
gpu_l3_iclk: gpu_l3_iclk {
l4_root_clk_div: l4_root_clk_div {
#clock-cells = <0>;
- compatible = "fixed-factor-clock";
+ compatible = "ti,divider-clock";
+ ti,max-div = <2>;
+ ti,bit-shift = <8>;
+ reg = <0x100>;
clocks = <&l3_iclk_div>;
- clock-mult = <1>;
- clock-div = <1>;
+ ti,index-power-of-two;
};
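Both dividers above switch from fixed-factor to ti,divider-clock with ti,index-power-of-two, so the register field now selects a power-of-two divisor rather than a linear one. A minimal sketch of the resulting rate calculation (an assumption for illustration, not the TI clock driver itself):

/* With ti,index-power-of-two the divider is 1 << field, and
 * ti,max-div = <2> limits the result to divide-by-1 or divide-by-2. */
static unsigned long ti_pow2_div_rate(unsigned long parent_rate,
				      unsigned int field)
{
	return parent_rate / (1UL << field);
}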
slimbus1_slimbus_clk: slimbus1_slimbus_clk {
renesas,function = "msiof0";
};
- i2c6_pins: i2c6 {
- renesas,groups = "i2c6";
- renesas,function = "i2c6";
- };
-
usb0_pins: usb0 {
renesas,groups = "usb0";
renesas,function = "usb0";
};
&i2c6 {
- pinctrl-names = "default";
- pinctrl-0 = <&i2c6_pins>;
status = "okay";
clock-frequency = <100000>;
&mmc0 { /* sdmmc */
num-slots = <1>;
status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&sd0_clk>, <&sd0_cmd>, <&sd0_cd>, <&sd0_bus4>;
vmmc-supply = <&vcc_sd0>;
slot@0 {
&mmc0 {
num-slots = <1>;
status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&sd0_clk>, <&sd0_cmd>, <&sd0_cd>, <&sd0_bus4>;
vmmc-supply = <&vcc_sd0>;
slot@0 {
msp2: msp@80117000 {
pinctrl-names = "default";
pinctrl-0 = <&msp2_default_mode>;
- status = "okay";
};
msp3: msp@80125000 {
clock-frequency = <100000>;
resets = <&apb2_rst 0>;
status = "disabled";
+ #address-cells = <1>;
+ #size-cells = <0>;
};
i2c1: i2c@01c2b000 {
clock-frequency = <100000>;
resets = <&apb2_rst 1>;
status = "disabled";
+ #address-cells = <1>;
+ #size-cells = <0>;
};
i2c2: i2c@01c2b400 {
clock-frequency = <100000>;
resets = <&apb2_rst 2>;
status = "disabled";
+ #address-cells = <1>;
+ #size-cells = <0>;
};
i2c3: i2c@01c2b800 {
clock-frequency = <100000>;
resets = <&apb2_rst 3>;
status = "disabled";
+ #address-cells = <1>;
+ #size-cells = <0>;
};
gmac: ethernet@01c30000 {
vcc4-supply = <&sys_3v3_reg>;
vcc5-supply = <&sys_3v3_reg>;
vcc6-supply = <&vio_reg>;
- vcc7-supply = <&sys_5v0_reg>;
+ vcc7-supply = <&charge_pump_5v0_reg>;
vccio-supply = <&sys_3v3_reg>;
regulators {
regulator-max-microvolt = <3300000>;
regulator-always-on;
};
+
+ charge_pump_5v0_reg: regulator@101 {
+ compatible = "regulator-fixed";
+ reg = <101>;
+ regulator-name = "5v0";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-always-on;
+ };
};
};
vcc4-supply = <&sys_3v3_reg>;
vcc5-supply = <&sys_3v3_reg>;
vcc6-supply = <&vio_reg>;
- vcc7-supply = <&sys_5v0_reg>;
+ vcc7-supply = <&charge_pump_5v0_reg>;
vccio-supply = <&sys_3v3_reg>;
regulators {
regulator-max-microvolt = <3300000>;
regulator-always-on;
};
+
+ charge_pump_5v0_reg: regulator@101 {
+ compatible = "regulator-fixed";
+ reg = <101>;
+ regulator-name = "5v0";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-always-on;
+ };
};
};
regulator-always-on;
};
- clk32kg: regulator-clk32kg {
- compatible = "ti,twl6030-clk32kg";
- };
-
twl_usb_comparator: usb-comparator {
compatible = "ti,twl6030-usb";
interrupts = <4>, <10>;
};
pinctrl_esdhc1: esdhc1grp {
- fsl,fsl,pins = <
+ fsl,pins = <
VF610_PAD_PTA24__ESDHC1_CLK 0x31ef
VF610_PAD_PTA25__ESDHC1_CMD 0x31ef
VF610_PAD_PTA26__ESDHC1_DAT0 0x31ef
EXPORT_SYMBOL(edma_assign_channel_eventq);
static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
- struct edma *edma_cc)
+ struct edma *edma_cc, int cc_id)
{
int i;
u32 value, cccfg;
s8 (*queue_priority_map)[2];
/* Decode the eDMA3 configuration from CCCFG register */
- cccfg = edma_read(0, EDMA_CCCFG);
+ cccfg = edma_read(cc_id, EDMA_CCCFG);
value = GET_NUM_REGN(cccfg);
edma_cc->num_region = BIT(value);
value = GET_NUM_EVQUE(cccfg);
edma_cc->num_tc = value + 1;
- dev_dbg(dev, "eDMA3 HW configuration (cccfg: 0x%08x):\n", cccfg);
+ dev_dbg(dev, "eDMA3 CC%d HW configuration (cccfg: 0x%08x):\n", cc_id,
+ cccfg);
dev_dbg(dev, "num_region: %u\n", edma_cc->num_region);
dev_dbg(dev, "num_channel: %u\n", edma_cc->num_channels);
dev_dbg(dev, "num_slot: %u\n", edma_cc->num_slots);
return -ENOMEM;
/* Get eDMA3 configuration from IP */
- ret = edma_setup_from_hw(dev, info[j], edma_cc[j]);
+ ret = edma_setup_from_hw(dev, info[j], edma_cc[j], j);
if (ret)
return ret;
"mcr p15, 0, r0, c1, c0, 0 @ set SCTLR \n\t" \
"isb \n\t" \
"bl v7_flush_dcache_"__stringify(level)" \n\t" \
- "clrex \n\t" \
"mrc p15, 0, r0, c1, c0, 1 @ get ACTLR \n\t" \
"bic r0, r0, #(1 << 6) @ disable local coherency \n\t" \
"mcr p15, 0, r0, c1, c0, 1 @ set ACTLR \n\t" \
#define ARM_CPU_PART_CORTEX_A12 0x4100c0d0
#define ARM_CPU_PART_CORTEX_A17 0x4100c0e0
#define ARM_CPU_PART_CORTEX_A15 0x4100c0f0
+#define ARM_CPU_PART_MASK 0xff00fff0
#define ARM_CPU_XSCALE_ARCH_MASK 0xe000
#define ARM_CPU_XSCALE_ARCH_V1 0x2000
*/
static inline unsigned int __attribute_const__ read_cpuid_part(void)
{
- return read_cpuid_id() & 0xff00fff0;
+ return read_cpuid_id() & ARM_CPU_PART_MASK;
}
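Factoring the magic constant into ARM_CPU_PART_MASK makes the intent explicit: MIDR bits [31:24] hold the implementer and [15:4] the part number, while the variant and revision fields are masked off. A usage sketch (cpu_is_cortex_a15() is a hypothetical helper, not part of this patch):

/* Hypothetical helper: compare implementer + part number only,
 * so any Cortex-A15 revision (r0p0, r2p4, ...) matches. */
static inline bool cpu_is_cortex_a15(void)
{
	return read_cpuid_part() == ARM_CPU_PART_CORTEX_A15;
}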
static inline unsigned int __attribute_const__ __deprecated read_cpuid_part_number(void)
#define R_ARM_ABS32 2
#define R_ARM_CALL 28
#define R_ARM_JUMP24 29
+#define R_ARM_TARGET1 38
#define R_ARM_V4BX 40
#define R_ARM_PREL31 42
#define R_ARM_MOVW_ABS_NC 43
#include <linux/cpumask.h>
#include <linux/err.h>
+#include <asm/cpu.h>
#include <asm/cputype.h>
/*
#endif
}
+/**
+ * smp_cpuid_part() - return part id for a given cpu
+ * @cpu: logical cpu id.
+ *
+ * Return: part id of logical cpu passed as argument.
+ */
+static inline unsigned int smp_cpuid_part(int cpu)
+{
+ struct cpuinfo_arm *cpu_info = &per_cpu(cpu_data, cpu);
+
+ return is_smp() ? cpu_info->cpuid & ARM_CPU_PART_MASK :
+ read_cpuid_part();
+}
+
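Because smp_cpuid_part() reads the cached per-CPU MIDR rather than the live register, a caller can classify other CPUs as well. A hypothetical use (illustration only, relying on the <linux/cpumask.h> iterator already included above):

/* Hypothetical caller: does any online CPU need a Cortex-A15
 * erratum workaround?  (Not part of this patch.) */
static bool any_online_cortex_a15(void)
{
	int cpu;

	for_each_online_cpu(cpu)
		if (smp_cpuid_part(cpu) == ARM_CPU_PART_CORTEX_A15)
			return true;
	return false;
}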
/* all SMP configurations have the extended CPUID registers */
#ifndef CONFIG_MMU
#define tlb_ops_need_broadcast() 0
__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
}
-static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
+void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
- struct dma_attrs *attrs)
-{
- if (__generic_dma_ops(hwdev)->unmap_page)
- __generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
-}
+ struct dma_attrs *attrs);
-static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
- dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
- if (__generic_dma_ops(hwdev)->sync_single_for_cpu)
- __generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir);
-}
+void xen_dma_sync_single_for_cpu(struct device *hwdev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir);
+
+void xen_dma_sync_single_for_device(struct device *hwdev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir);
-static inline void xen_dma_sync_single_for_device(struct device *hwdev,
- dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
- if (__generic_dma_ops(hwdev)->sync_single_for_device)
- __generic_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir);
-}
#endif /* _ASM_ARM_XEN_PAGE_COHERENT_H */
#define INVALID_P2M_ENTRY (~0UL)
unsigned long __pfn_to_mfn(unsigned long pfn);
-unsigned long __mfn_to_pfn(unsigned long mfn);
extern struct rb_root phys_to_mach;
static inline unsigned long pfn_to_mfn(unsigned long pfn)
static inline unsigned long mfn_to_pfn(unsigned long mfn)
{
- unsigned long pfn;
-
- if (phys_to_mach.rb_node != NULL) {
- pfn = __mfn_to_pfn(mfn);
- if (pfn != INVALID_P2M_ENTRY)
- return pfn;
- }
-
return mfn;
}
#endif
.endif
msr spsr_cxsf, \rpsr
-#if defined(CONFIG_CPU_V6)
- ldr r0, [sp]
- strex r1, r2, [sp] @ clear the exclusive monitor
- ldmib sp, {r1 - pc}^ @ load r1 - pc, cpsr
-#elif defined(CONFIG_CPU_32v6K)
- clrex @ clear the exclusive monitor
- ldmia sp, {r0 - pc}^ @ load r0 - pc, cpsr
-#else
- ldmia sp, {r0 - pc}^ @ load r0 - pc, cpsr
+#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K)
+ @ We must avoid clrex due to Cortex-A15 erratum #830321
+ sub r0, sp, #4 @ uninhabited address
+ strex r1, r2, [r0] @ clear the exclusive monitor
#endif
+ ldmia sp, {r0 - pc}^ @ load r0 - pc, cpsr
.endm
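These hunks (and the abort-handler ones later in this series) replace CLREX with a dummy STREX, since the comments note CLREX must be avoided on Cortex-A15 due to erratum #830321. The replacement relies on the architectural rule that executing any STREX leaves the local exclusive monitor in the Open state whether or not the store succeeds. A standalone C rendering of the trick (a sketch under that assumption, not kernel code):

/* Clear the local exclusive monitor without CLREX: a dummy STREX to a
 * dead scratch slot.  If the store succeeds it clobbers *scratch, so
 * the caller must pass a location whose value no longer matters --
 * like the word below SP used in the macros above. */
static inline void clear_exclusive_monitor(unsigned long *scratch)
{
	unsigned long status;

	asm volatile("strex %0, %1, [%2]"
		     : "=&r" (status)
		     : "r" (0UL), "r" (scratch)
		     : "memory");
	(void)status;	/* success/failure is irrelevant here */
}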
.macro restore_user_regs, fast = 0, offset = 0
ldr r1, [sp, #\offset + S_PSR] @ get calling cpsr
ldr lr, [sp, #\offset + S_PC]! @ get pc
msr spsr_cxsf, r1 @ save in spsr_svc
-#if defined(CONFIG_CPU_V6)
+#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K)
+ @ We must avoid clrex due to Cortex-A15 erratum #830321
strex r1, r2, [sp] @ clear the exclusive monitor
-#elif defined(CONFIG_CPU_32v6K)
- clrex @ clear the exclusive monitor
#endif
.if \fast
ldmdb sp, {r1 - lr}^ @ get calling r1 - lr
.endif
ldr lr, [sp, #S_SP] @ top of the stack
ldrd r0, r1, [sp, #S_LR] @ calling lr and pc
- clrex @ clear the exclusive monitor
+
+ @ We must avoid clrex due to Cortex-A15 erratum #830321
+ strex r2, r1, [sp, #S_LR] @ clear the exclusive monitor
+
stmdb lr!, {r0, r1, \rpsr} @ calling lr and rfe context
ldmia sp, {r0 - r12}
mov sp, lr
.endm
#else /* ifdef CONFIG_CPU_V7M */
.macro restore_user_regs, fast = 0, offset = 0
- clrex @ clear the exclusive monitor
mov r2, sp
load_user_sp_lr r2, r3, \offset + S_SP @ calling sp, lr
ldr r1, [sp, #\offset + S_PSR] @ get calling cpsr
ldr lr, [sp, #\offset + S_PC] @ get pc
add sp, sp, #\offset + S_SP
msr spsr_cxsf, r1 @ save in spsr_svc
+
+ @ We must avoid clrex due to Cortex-A15 erratum #830321
+ strex r1, r2, [sp] @ clear the exclusive monitor
+
.if \fast
ldmdb sp, {r1 - r12} @ get calling r1 - r12
.else
break;
case R_ARM_ABS32:
+ case R_ARM_TARGET1:
*(u32 *)loc += sym->st_value;
break;
else
kvm_vcpu_block(vcpu);
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+
return 1;
}
mrc p15, 0, r0, c10, c2, 1
mcr p15, 4, r0, c10, c2, 1
+ @ Invalidate the stale TLBs from Bootloader
+ mcr p15, 4, r0, c8, c7, 0 @ TLBIALLH
+ dsb ish
+
@ Set the HSCTLR to:
@ - ARM/THUMB exceptions: Kernel config (Thumb-2 kernel)
@ - Endianness: Kernel config
#include <linux/gpio.h>
#include <linux/of.h>
#include <linux/of_irq.h>
+#include <linux/clk-provider.h>
#include <asm/setup.h>
#include <asm/irq.h>
of_irq_init(irq_of_match);
}
+static void __init at91rm9200_dt_timer_init(void)
+{
+#if defined(CONFIG_COMMON_CLK)
+ of_clk_init(NULL);
+#endif
+ at91rm9200_timer_init();
+}
+
static const char *at91rm9200_dt_board_compat[] __initdata = {
"atmel,at91rm9200",
NULL
};
DT_MACHINE_START(at91rm9200_dt, "Atmel AT91RM9200 (Device Tree)")
- .init_time = at91rm9200_timer_init,
+ .init_time = at91rm9200_dt_timer_init,
.map_io = at91_map_io,
.handle_irq = at91_aic_handle_irq,
.init_early = at91rm9200_dt_initialize,
ifeq ($(CONFIG_ARCH_BRCMSTB),y)
obj-y += brcmstb.o
-obj-$(CONFIG_SMP) += headsmp-brcmstb.o platsmp-brcmstb.o
endif
+++ /dev/null
-/*
- * Copyright (C) 2013-2014 Broadcom Corporation
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation version 2.
- *
- * This program is distributed "as is" WITHOUT ANY WARRANTY of any
- * kind, whether express or implied; without even the implied warranty
- * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#ifndef __BRCMSTB_H__
-#define __BRCMSTB_H__
-
-void brcmstb_secondary_startup(void);
-
-#endif /* __BRCMSTB_H__ */
+++ /dev/null
-/*
- * SMP boot code for secondary CPUs
- * Based on arch/arm/mach-tegra/headsmp.S
- *
- * Copyright (C) 2010 NVIDIA, Inc.
- * Copyright (C) 2013-2014 Broadcom Corporation
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation version 2.
- *
- * This program is distributed "as is" WITHOUT ANY WARRANTY of any
- * kind, whether express or implied; without even the implied warranty
- * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <asm/assembler.h>
-#include <linux/linkage.h>
-#include <linux/init.h>
-
- .section ".text.head", "ax"
-
-ENTRY(brcmstb_secondary_startup)
- /*
- * Ensure CPU is in a sane state by disabling all IRQs and switching
- * into SVC mode.
- */
- setmode PSR_I_BIT | PSR_F_BIT | SVC_MODE, r0
-
- bl v7_invalidate_l1
- b secondary_startup
-ENDPROC(brcmstb_secondary_startup)
+++ /dev/null
-/*
- * Broadcom STB CPU SMP and hotplug support for ARM
- *
- * Copyright (C) 2013-2014 Broadcom Corporation
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation version 2.
- *
- * This program is distributed "as is" WITHOUT ANY WARRANTY of any
- * kind, whether express or implied; without even the implied warranty
- * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/delay.h>
-#include <linux/errno.h>
-#include <linux/init.h>
-#include <linux/io.h>
-#include <linux/of_address.h>
-#include <linux/of_platform.h>
-#include <linux/printk.h>
-#include <linux/regmap.h>
-#include <linux/smp.h>
-#include <linux/mfd/syscon.h>
-#include <linux/spinlock.h>
-
-#include <asm/cacheflush.h>
-#include <asm/cp15.h>
-#include <asm/mach-types.h>
-#include <asm/smp_plat.h>
-
-#include "brcmstb.h"
-
-enum {
- ZONE_MAN_CLKEN_MASK = BIT(0),
- ZONE_MAN_RESET_CNTL_MASK = BIT(1),
- ZONE_MAN_MEM_PWR_MASK = BIT(4),
- ZONE_RESERVED_1_MASK = BIT(5),
- ZONE_MAN_ISO_CNTL_MASK = BIT(6),
- ZONE_MANUAL_CONTROL_MASK = BIT(7),
- ZONE_PWR_DN_REQ_MASK = BIT(9),
- ZONE_PWR_UP_REQ_MASK = BIT(10),
- ZONE_BLK_RST_ASSERT_MASK = BIT(12),
- ZONE_PWR_OFF_STATE_MASK = BIT(25),
- ZONE_PWR_ON_STATE_MASK = BIT(26),
- ZONE_DPG_PWR_STATE_MASK = BIT(28),
- ZONE_MEM_PWR_STATE_MASK = BIT(29),
- ZONE_RESET_STATE_MASK = BIT(31),
- CPU0_PWR_ZONE_CTRL_REG = 1,
- CPU_RESET_CONFIG_REG = 2,
-};
-
-static void __iomem *cpubiuctrl_block;
-static void __iomem *hif_cont_block;
-static u32 cpu0_pwr_zone_ctrl_reg;
-static u32 cpu_rst_cfg_reg;
-static u32 hif_cont_reg;
-
-#ifdef CONFIG_HOTPLUG_CPU
-static DEFINE_PER_CPU_ALIGNED(int, per_cpu_sw_state);
-
-static int per_cpu_sw_state_rd(u32 cpu)
-{
- sync_cache_r(SHIFT_PERCPU_PTR(&per_cpu_sw_state, per_cpu_offset(cpu)));
- return per_cpu(per_cpu_sw_state, cpu);
-}
-
-static void per_cpu_sw_state_wr(u32 cpu, int val)
-{
- per_cpu(per_cpu_sw_state, cpu) = val;
- dmb();
- sync_cache_w(SHIFT_PERCPU_PTR(&per_cpu_sw_state, per_cpu_offset(cpu)));
- dsb_sev();
-}
-#else
-static inline void per_cpu_sw_state_wr(u32 cpu, int val) { }
-#endif
-
-static void __iomem *pwr_ctrl_get_base(u32 cpu)
-{
- void __iomem *base = cpubiuctrl_block + cpu0_pwr_zone_ctrl_reg;
- base += (cpu_logical_map(cpu) * 4);
- return base;
-}
-
-static u32 pwr_ctrl_rd(u32 cpu)
-{
- void __iomem *base = pwr_ctrl_get_base(cpu);
- return readl_relaxed(base);
-}
-
-static void pwr_ctrl_wr(u32 cpu, u32 val)
-{
- void __iomem *base = pwr_ctrl_get_base(cpu);
- writel(val, base);
-}
-
-static void cpu_rst_cfg_set(u32 cpu, int set)
-{
- u32 val;
- val = readl_relaxed(cpubiuctrl_block + cpu_rst_cfg_reg);
- if (set)
- val |= BIT(cpu_logical_map(cpu));
- else
- val &= ~BIT(cpu_logical_map(cpu));
- writel_relaxed(val, cpubiuctrl_block + cpu_rst_cfg_reg);
-}
-
-static void cpu_set_boot_addr(u32 cpu, unsigned long boot_addr)
-{
- const int reg_ofs = cpu_logical_map(cpu) * 8;
- writel_relaxed(0, hif_cont_block + hif_cont_reg + reg_ofs);
- writel_relaxed(boot_addr, hif_cont_block + hif_cont_reg + 4 + reg_ofs);
-}
-
-static void brcmstb_cpu_boot(u32 cpu)
-{
- pr_info("SMP: Booting CPU%d...\n", cpu);
-
- /*
- * set the reset vector to point to the secondary_startup
- * routine
- */
- cpu_set_boot_addr(cpu, virt_to_phys(brcmstb_secondary_startup));
-
- /* unhalt the cpu */
- cpu_rst_cfg_set(cpu, 0);
-}
-
-static void brcmstb_cpu_power_on(u32 cpu)
-{
- /*
- * The secondary cores power was cut, so we must go through
- * power-on initialization.
- */
- u32 tmp;
-
- pr_info("SMP: Powering up CPU%d...\n", cpu);
-
- /* Request zone power up */
- pwr_ctrl_wr(cpu, ZONE_PWR_UP_REQ_MASK);
-
- /* Wait for the power up FSM to complete */
- do {
- tmp = pwr_ctrl_rd(cpu);
- } while (!(tmp & ZONE_PWR_ON_STATE_MASK));
-
- per_cpu_sw_state_wr(cpu, 1);
-}
-
-static int brcmstb_cpu_get_power_state(u32 cpu)
-{
- int tmp = pwr_ctrl_rd(cpu);
- return (tmp & ZONE_RESET_STATE_MASK) ? 0 : 1;
-}
-
-#ifdef CONFIG_HOTPLUG_CPU
-
-static void brcmstb_cpu_die(u32 cpu)
-{
- v7_exit_coherency_flush(all);
-
- /* Prevent all interrupts from reaching this CPU. */
- arch_local_irq_disable();
-
- /*
- * Final full barrier to ensure everything before this instruction has
- * quiesced.
- */
- isb();
- dsb();
-
- per_cpu_sw_state_wr(cpu, 0);
-
- /* Sit and wait to die */
- wfi();
-
- /* We should never get here... */
- panic("Spurious interrupt on CPU %d received!\n", cpu);
-}
-
-static int brcmstb_cpu_kill(u32 cpu)
-{
- u32 tmp;
-
- pr_info("SMP: Powering down CPU%d...\n", cpu);
-
- while (per_cpu_sw_state_rd(cpu))
- ;
-
- /* Program zone reset */
- pwr_ctrl_wr(cpu, ZONE_RESET_STATE_MASK | ZONE_BLK_RST_ASSERT_MASK |
- ZONE_PWR_DN_REQ_MASK);
-
- /* Verify zone reset */
- tmp = pwr_ctrl_rd(cpu);
- if (!(tmp & ZONE_RESET_STATE_MASK))
- pr_err("%s: Zone reset bit for CPU %d not asserted!\n",
- __func__, cpu);
-
- /* Wait for power down */
- do {
- tmp = pwr_ctrl_rd(cpu);
- } while (!(tmp & ZONE_PWR_OFF_STATE_MASK));
-
- /* Settle-time from Broadcom-internal DVT reference code */
- udelay(7);
-
- /* Assert reset on the CPU */
- cpu_rst_cfg_set(cpu, 1);
-
- return 1;
-}
-
-#endif /* CONFIG_HOTPLUG_CPU */
-
-static int __init setup_hifcpubiuctrl_regs(struct device_node *np)
-{
- int rc = 0;
- char *name;
- struct device_node *syscon_np = NULL;
-
- name = "syscon-cpu";
-
- syscon_np = of_parse_phandle(np, name, 0);
- if (!syscon_np) {
- pr_err("can't find phandle %s\n", name);
- rc = -EINVAL;
- goto cleanup;
- }
-
- cpubiuctrl_block = of_iomap(syscon_np, 0);
- if (!cpubiuctrl_block) {
- pr_err("iomap failed for cpubiuctrl_block\n");
- rc = -EINVAL;
- goto cleanup;
- }
-
- rc = of_property_read_u32_index(np, name, CPU0_PWR_ZONE_CTRL_REG,
- &cpu0_pwr_zone_ctrl_reg);
- if (rc) {
- pr_err("failed to read 1st entry from %s property (%d)\n", name,
- rc);
- rc = -EINVAL;
- goto cleanup;
- }
-
- rc = of_property_read_u32_index(np, name, CPU_RESET_CONFIG_REG,
- &cpu_rst_cfg_reg);
- if (rc) {
- pr_err("failed to read 2nd entry from %s property (%d)\n", name,
- rc);
- rc = -EINVAL;
- goto cleanup;
- }
-
-cleanup:
- if (syscon_np)
- of_node_put(syscon_np);
-
- return rc;
-}
-
-static int __init setup_hifcont_regs(struct device_node *np)
-{
- int rc = 0;
- char *name;
- struct device_node *syscon_np = NULL;
-
- name = "syscon-cont";
-
- syscon_np = of_parse_phandle(np, name, 0);
- if (!syscon_np) {
- pr_err("can't find phandle %s\n", name);
- rc = -EINVAL;
- goto cleanup;
- }
-
- hif_cont_block = of_iomap(syscon_np, 0);
- if (!hif_cont_block) {
- pr_err("iomap failed for hif_cont_block\n");
- rc = -EINVAL;
- goto cleanup;
- }
-
- /* offset is at top of hif_cont_block */
- hif_cont_reg = 0;
-
-cleanup:
- if (syscon_np)
- of_node_put(syscon_np);
-
- return rc;
-}
-
-static void __init brcmstb_cpu_ctrl_setup(unsigned int max_cpus)
-{
- int rc;
- struct device_node *np;
- char *name;
-
- name = "brcm,brcmstb-smpboot";
- np = of_find_compatible_node(NULL, NULL, name);
- if (!np) {
- pr_err("can't find compatible node %s\n", name);
- return;
- }
-
- rc = setup_hifcpubiuctrl_regs(np);
- if (rc)
- return;
-
- rc = setup_hifcont_regs(np);
- if (rc)
- return;
-}
-
-static DEFINE_SPINLOCK(boot_lock);
-
-static void brcmstb_secondary_init(unsigned int cpu)
-{
- /*
- * Synchronise with the boot thread.
- */
- spin_lock(&boot_lock);
- spin_unlock(&boot_lock);
-}
-
-static int brcmstb_boot_secondary(unsigned int cpu, struct task_struct *idle)
-{
- /*
- * set synchronisation state between this boot processor
- * and the secondary one
- */
- spin_lock(&boot_lock);
-
- /* Bring up power to the core if necessary */
- if (brcmstb_cpu_get_power_state(cpu) == 0)
- brcmstb_cpu_power_on(cpu);
-
- brcmstb_cpu_boot(cpu);
-
- /*
- * now the secondary core is starting up let it run its
- * calibrations, then wait for it to finish
- */
- spin_unlock(&boot_lock);
-
- return 0;
-}
-
-static struct smp_operations brcmstb_smp_ops __initdata = {
- .smp_prepare_cpus = brcmstb_cpu_ctrl_setup,
- .smp_secondary_init = brcmstb_secondary_init,
- .smp_boot_secondary = brcmstb_boot_secondary,
-#ifdef CONFIG_HOTPLUG_CPU
- .cpu_kill = brcmstb_cpu_kill,
- .cpu_die = brcmstb_cpu_die,
-#endif
-};
-
-CPU_METHOD_OF_DECLARE(brcmstb_smp, "brcm,brahma-b15", &brcmstb_smp_ops);
"mcr p15, 0, r0, c1, c0, 0 @ set SCTLR\n\t" \
"isb\n\t"\
"bl v7_flush_dcache_"__stringify(level)"\n\t" \
- "clrex\n\t"\
"mrc p15, 0, r0, c1, c0, 1 @ get ACTLR\n\t" \
"bic r0, r0, #(1 << 6) @ disable local coherency\n\t" \
/* Dummy Load of a device register to avoid Erratum 799270 */ \
config SOC_IMX27
bool
- select ARCH_HAS_OPP
select CPU_ARM926T
select IMX_HAVE_IOMUX_V1
select MXC_AVIC
config SOC_IMX5
bool
- select ARCH_HAS_OPP
select HAVE_IMX_SRC
select MXC_TZIC
obj-$(CONFIG_HAVE_IMX_GPC) += gpc.o
obj-$(CONFIG_HAVE_IMX_MMDC) += mmdc.o
obj-$(CONFIG_HAVE_IMX_SRC) += src.o
+ifdef CONFIG_SOC_IMX6
AFLAGS_headsmp.o :=-Wa,-march=armv7-a
obj-$(CONFIG_SMP) += headsmp.o platsmp.o
obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o
+endif
obj-$(CONFIG_SOC_IMX6Q) += clk-imx6q.o mach-imx6q.o
obj-$(CONFIG_SOC_IMX6SL) += clk-imx6sl.o mach-imx6sl.o
obj-$(CONFIG_SOC_IMX6SX) += clk-imx6sx.o mach-imx6sx.o
clk[IMX6QDL_CLK_PLL3_80M] = imx_clk_fixed_factor("pll3_80m", "pll3_usb_otg", 1, 6);
clk[IMX6QDL_CLK_PLL3_60M] = imx_clk_fixed_factor("pll3_60m", "pll3_usb_otg", 1, 8);
clk[IMX6QDL_CLK_TWD] = imx_clk_fixed_factor("twd", "arm", 1, 2);
+ if (cpu_is_imx6dl()) {
+ clk[IMX6QDL_CLK_GPU2D_AXI] = imx_clk_fixed_factor("gpu2d_axi", "mmdc_ch0_axi_podf", 1, 1);
+ clk[IMX6QDL_CLK_GPU3D_AXI] = imx_clk_fixed_factor("gpu3d_axi", "mmdc_ch0_axi_podf", 1, 1);
+ }
clk[IMX6QDL_CLK_PLL4_POST_DIV] = clk_register_divider_table(NULL, "pll4_post_div", "pll4_audio", CLK_SET_RATE_PARENT, base + 0x70, 19, 2, 0, post_div_table, &imx_ccm_lock);
clk[IMX6QDL_CLK_PLL4_AUDIO_DIV] = clk_register_divider(NULL, "pll4_audio_div", "pll4_post_div", CLK_SET_RATE_PARENT, base + 0x170, 15, 1, 0, &imx_ccm_lock);
clk[IMX6QDL_CLK_ESAI_SEL] = imx_clk_mux("esai_sel", base + 0x20, 19, 2, audio_sels, ARRAY_SIZE(audio_sels));
clk[IMX6QDL_CLK_ASRC_SEL] = imx_clk_mux("asrc_sel", base + 0x30, 7, 2, audio_sels, ARRAY_SIZE(audio_sels));
clk[IMX6QDL_CLK_SPDIF_SEL] = imx_clk_mux("spdif_sel", base + 0x30, 20, 2, audio_sels, ARRAY_SIZE(audio_sels));
- clk[IMX6QDL_CLK_GPU2D_AXI] = imx_clk_mux("gpu2d_axi", base + 0x18, 0, 1, gpu_axi_sels, ARRAY_SIZE(gpu_axi_sels));
- clk[IMX6QDL_CLK_GPU3D_AXI] = imx_clk_mux("gpu3d_axi", base + 0x18, 1, 1, gpu_axi_sels, ARRAY_SIZE(gpu_axi_sels));
+ if (cpu_is_imx6q()) {
+ clk[IMX6QDL_CLK_GPU2D_AXI] = imx_clk_mux("gpu2d_axi", base + 0x18, 0, 1, gpu_axi_sels, ARRAY_SIZE(gpu_axi_sels));
+ clk[IMX6QDL_CLK_GPU3D_AXI] = imx_clk_mux("gpu3d_axi", base + 0x18, 1, 1, gpu_axi_sels, ARRAY_SIZE(gpu_axi_sels));
+ }
clk[IMX6QDL_CLK_GPU2D_CORE_SEL] = imx_clk_mux("gpu2d_core_sel", base + 0x18, 16, 2, gpu2d_core_sels, ARRAY_SIZE(gpu2d_core_sels));
clk[IMX6QDL_CLK_GPU3D_CORE_SEL] = imx_clk_mux("gpu3d_core_sel", base + 0x18, 4, 2, gpu3d_core_sels, ARRAY_SIZE(gpu3d_core_sels));
clk[IMX6QDL_CLK_GPU3D_SHADER_SEL] = imx_clk_mux("gpu3d_shader_sel", base + 0x18, 8, 2, gpu3d_shader_sels, ARRAY_SIZE(gpu3d_shader_sels));
ldr r6, [r11, #0x0]
ldr r11, [r0, #PM_INFO_MX6Q_GPC_V_OFFSET]
ldr r6, [r11, #0x0]
+ ldr r11, [r0, #PM_INFO_MX6Q_IOMUXC_V_OFFSET]
+ ldr r6, [r11, #0x0]
/* use r11 to store the IO address */
ldr r11, [r0, #PM_INFO_MX6Q_SRC_V_OFFSET]
board_nand_data.nr_parts = nr_parts;
board_nand_data.devsize = nand_type;
- board_nand_data.ecc_opt = OMAP_ECC_HAM1_CODE_HW;
+ board_nand_data.ecc_opt = OMAP_ECC_HAM1_CODE_SW;
gpmc_nand_init(&board_nand_data, gpmc_t);
}
#endif /* CONFIG_MTD_NAND_OMAP2 || CONFIG_MTD_NAND_OMAP2_MODULE */
return 0;
/* legacy platforms support only HAM1 (1-bit Hamming) ECC scheme */
- if (ecc_opt == OMAP_ECC_HAM1_CODE_HW)
+ if (ecc_opt == OMAP_ECC_HAM1_CODE_HW ||
+ ecc_opt == OMAP_ECC_HAM1_CODE_SW)
return 1;
else
return 0;
}
}
- if ((p->wait_on_read || p->wait_on_write) &&
- (p->wait_pin > gpmc_nr_waitpins)) {
+ if (p->wait_pin > gpmc_nr_waitpins) {
pr_err("%s: invalid wait-pin (%d)\n", __func__, p->wait_pin);
return -EINVAL;
}
p->wait_on_write = of_property_read_bool(np,
"gpmc,wait-on-write");
if (!p->wait_on_read && !p->wait_on_write)
- pr_warn("%s: read/write wait monitoring not enabled!\n",
- __func__);
+ pr_debug("%s: rd/wr wait monitoring not enabled!\n",
+ __func__);
}
}
pr_err("%s: ti,nand-ecc-opt not found\n", __func__);
return -ENODEV;
}
- if (!strcmp(s, "ham1") || !strcmp(s, "sw") ||
- !strcmp(s, "hw") || !strcmp(s, "hw-romcode"))
+
+ if (!strcmp(s, "sw"))
+ gpmc_nand_data->ecc_opt = OMAP_ECC_HAM1_CODE_SW;
+ else if (!strcmp(s, "ham1") ||
+ !strcmp(s, "hw") || !strcmp(s, "hw-romcode"))
gpmc_nand_data->ecc_opt =
OMAP_ECC_HAM1_CODE_HW;
else if (!strcmp(s, "bch4"))
default:
/* Unknown: default to latest silicon rev */
- pr_warn("%s: unknown idcode=0x%08x (hawkeye=0x%08x,rev=0x%d)\n",
+ pr_warn("%s: unknown idcode=0x%08x (hawkeye=0x%08x,rev=0x%x)\n",
__func__, idcode, hawkeye, rev);
omap_revision = DRA752_REV_ES1_1;
}
r = clk_get_sys(dev_name(&od->pdev->dev), clk_alias);
if (!IS_ERR(r)) {
- dev_warn(&od->pdev->dev,
+ dev_dbg(&od->pdev->dev,
"alias %s already exists\n", clk_alias);
clk_put(r);
return;
oh->mux->pads_dynamic))) {
omap_hwmod_mux(oh->mux, _HWMOD_STATE_ENABLED);
_reconfigure_io_chain();
+ } else if (oh->flags & HWMOD_FORCE_MSTANDBY) {
+ _reconfigure_io_chain();
}
_add_initiator_dep(oh, mpu_oh);
if (oh->mux && oh->mux->pads_dynamic) {
omap_hwmod_mux(oh->mux, _HWMOD_STATE_IDLE);
_reconfigure_io_chain();
+ } else if (oh->flags & HWMOD_FORCE_MSTANDBY) {
+ _reconfigure_io_chain();
}
oh->_state = _HWMOD_STATE_IDLE;
if (!ois)
return 0;
+ if (ois[0] == NULL) /* Empty list */
+ return 0;
+
if (!linkspace) {
if (_alloc_linkspace(ois)) {
pr_err("omap_hwmod: could not allocate link space\n");
#include "i2c.h"
#include "mmc.h"
#include "wd_timer.h"
+#include "soc.h"
/* Base offset for all DRA7XX interrupts external to MPUSS */
#define DRA7XX_IRQ_GIC_START 32
&dra7xx_l4_per3__usb_otg_ss1,
&dra7xx_l4_per3__usb_otg_ss2,
&dra7xx_l4_per3__usb_otg_ss3,
- &dra7xx_l4_per3__usb_otg_ss4,
&dra7xx_l3_main_1__vcp1,
&dra7xx_l4_per2__vcp1,
&dra7xx_l3_main_1__vcp2,
NULL,
};
+static struct omap_hwmod_ocp_if *dra74x_hwmod_ocp_ifs[] __initdata = {
+ &dra7xx_l4_per3__usb_otg_ss4,
+ NULL,
+};
+
+static struct omap_hwmod_ocp_if *dra72x_hwmod_ocp_ifs[] __initdata = {
+ NULL,
+};
+
int __init dra7xx_hwmod_init(void)
{
+ int ret;
+
omap_hwmod_init();
- return omap_hwmod_register_links(dra7xx_hwmod_ocp_ifs);
+ ret = omap_hwmod_register_links(dra7xx_hwmod_ocp_ifs);
+
+ if (!ret && soc_is_dra74x())
+ return omap_hwmod_register_links(dra74x_hwmod_ocp_ifs);
+ else if (!ret && soc_is_dra72x())
+ return omap_hwmod_register_links(dra72x_hwmod_ocp_ifs);
+
+ return ret;
}
#define soc_is_omap54xx() 0
#define soc_is_omap543x() 0
#define soc_is_dra7xx() 0
+#define soc_is_dra74x() 0
+#define soc_is_dra72x() 0
#if defined(MULTI_OMAP2)
# if defined(CONFIG_ARCH_OMAP2)
#if defined(CONFIG_SOC_DRA7XX)
#undef soc_is_dra7xx
+#undef soc_is_dra74x
+#undef soc_is_dra72x
#define soc_is_dra7xx() (of_machine_is_compatible("ti,dra7"))
+#define soc_is_dra74x() (of_machine_is_compatible("ti,dra74"))
+#define soc_is_dra72x() (of_machine_is_compatible("ti,dra72"))
#endif
/* Various silicon revisions for omap2 */
select ARM_CPU_SUSPEND if PM || CPU_IDLE
select CPU_V7
select SH_CLK_CPG
+ select SH_INTC
select SYS_SUPPORTS_SH_CMT
select SYS_SUPPORTS_SH_TMU
select CPU_V7
select I2C
select SH_CLK_CPG
+ select SH_INTC
select RENESAS_INTC_IRQPIN
select SYS_SUPPORTS_SH_CMT
select SYS_SUPPORTS_SH_TMU
.width_mm = 210,
.height_mm = 158,
.mode = {
- .clock = 65000,
- .hdisplay = 1024,
- .hsync_start = 1048,
- .hsync_end = 1184,
- .htotal = 1344,
- .vdisplay = 768,
- .vsync_start = 771,
- .vsync_end = 777,
- .vtotal = 806,
- .flags = 0,
+ .pixelclock = 65000000,
+ .hactive = 1024,
+ .hfront_porch = 20,
+ .hback_porch = 160,
+ .hsync_len = 136,
+ .vactive = 768,
+ .vfront_porch = 3,
+ .vback_porch = 29,
+ .vsync_len = 6,
},
},
},
.width_mm = 210,
.height_mm = 158,
.mode = {
- .clock = 65000,
- .hdisplay = 1024,
- .hsync_start = 1048,
- .hsync_end = 1184,
- .htotal = 1344,
- .vdisplay = 768,
- .vsync_start = 771,
- .vsync_end = 777,
- .vtotal = 806,
- .flags = 0,
+ .pixelclock = 65000000,
+ .hactive = 1024,
+ .hfront_porch = 20,
+ .hback_porch = 160,
+ .hsync_len = 136,
+ .vactive = 768,
+ .vfront_porch = 3,
+ .vback_porch = 29,
+ .vsync_len = 6,
},
},
},
.width_mm = 210,
.height_mm = 158,
.mode = {
- .clock = 65000,
- .hdisplay = 1024,
- .hsync_start = 1048,
- .hsync_end = 1184,
- .htotal = 1344,
- .vdisplay = 768,
- .vsync_start = 771,
- .vsync_end = 777,
- .vtotal = 806,
- .flags = 0,
+ .pixelclock = 65000000,
+ .hactive = 1024,
+ .hfront_porch = 20,
+ .hback_porch = 160,
+ .hsync_len = 136,
+ .vactive = 768,
+ .vfront_porch = 3,
+ .vback_porch = 29,
+ .vsync_len = 6,
},
},
},
.width_mm = 210,
.height_mm = 158,
.mode = {
- .clock = 65000,
- .hdisplay = 1024,
- .hsync_start = 1048,
- .hsync_end = 1184,
- .htotal = 1344,
- .vdisplay = 768,
- .vsync_start = 771,
- .vsync_end = 777,
- .vtotal = 806,
- .flags = 0,
+ .pixelclock = 65000000,
+ .hactive = 1024,
+ .hfront_porch = 20,
+ .hback_porch = 160,
+ .hsync_len = 136,
+ .vactive = 768,
+ .vfront_porch = 3,
+ .vback_porch = 29,
+ .vsync_len = 6,
},
},
},
.width_mm = 210,
.height_mm = 158,
.mode = {
- .clock = 65000,
- .hdisplay = 1024,
- .hsync_start = 1048,
- .hsync_end = 1184,
- .htotal = 1344,
- .vdisplay = 768,
- .vsync_start = 771,
- .vsync_end = 777,
- .vtotal = 806,
- .flags = 0,
+ .pixelclock = 65000000,
+ .hactive = 1024,
+ .hfront_porch = 20,
+ .hback_porch = 160,
+ .hsync_len = 136,
+ .vactive = 768,
+ .vfront_porch = 3,
+ .vback_porch = 29,
+ .vsync_len = 6,
},
},
},
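Each of these panels is converted from DRM-style absolute timings to struct videomode porch/sync fields. The general relationship is sketched below (not code from the patch); note that with this panel's numbers the vertical fields work out exactly (771-768=3, 777-771=6, 806-777=29) while the patch's horizontal front porch (20) is slightly tighter than the raw delta (1048-1024=24), so the horizontal timings were evidently touched up rather than mechanically converted.

/* General drm_display_mode -> videomode conversion (illustrative
 * sketch with simplified types): porches and sync lengths are deltas
 * between the old absolute positions; the clock moves from kHz to Hz. */
struct simple_mode { int clock, hdisplay, hsync_start, hsync_end, htotal,
		     vdisplay, vsync_start, vsync_end, vtotal; };

struct simple_vm { unsigned long pixelclock;
		   int hactive, hfront_porch, hsync_len, hback_porch;
		   int vactive, vfront_porch, vsync_len, vback_porch; };

static struct simple_vm to_videomode(const struct simple_mode *m)
{
	struct simple_vm vm = {
		.pixelclock   = m->clock * 1000UL,
		.hactive      = m->hdisplay,
		.hfront_porch = m->hsync_start - m->hdisplay,
		.hsync_len    = m->hsync_end - m->hsync_start,
		.hback_porch  = m->htotal - m->hsync_end,
		.vactive      = m->vdisplay,
		.vfront_porch = m->vsync_start - m->vdisplay,
		.vsync_len    = m->vsync_end - m->vsync_start,
		.vback_porch  = m->vtotal - m->vsync_end,
	};
	return vm;
}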
static struct clk div4_clks[DIV4_NR] = {
[DIV4_SDH] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 8, 0x0dff, CLK_ENABLE_ON_INIT),
- [DIV4_SD0] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 4, 0x1de0, CLK_ENABLE_ON_INIT),
- [DIV4_SD1] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 0, 0x1de0, CLK_ENABLE_ON_INIT),
+ [DIV4_SD0] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 4, 0x1df0, CLK_ENABLE_ON_INIT),
+ [DIV4_SD1] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 0, 0x1df0, CLK_ENABLE_ON_INIT),
};
/* DIV6 clocks */
static struct clk div4_clks[DIV4_NR] = {
[DIV4_SDH] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 8, 0x0dff, CLK_ENABLE_ON_INIT),
- [DIV4_SD0] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 4, 0x1de0, CLK_ENABLE_ON_INIT),
+ [DIV4_SD0] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 4, 0x1df0, CLK_ENABLE_ON_INIT),
};
/* DIV6 clocks */
CLKDEV_DEV_ID("sh-sci.5", &mstp_clks[MSTP207]), /* SCIFA5 */
CLKDEV_DEV_ID("e6cb0000.serial", &mstp_clks[MSTP207]), /* SCIFA5 */
CLKDEV_DEV_ID("sh-sci.8", &mstp_clks[MSTP206]), /* SCIFB */
- CLKDEV_DEV_ID("0xe6c3000.serial", &mstp_clks[MSTP206]), /* SCIFB */
+ CLKDEV_DEV_ID("e6c3000.serial", &mstp_clks[MSTP206]), /* SCIFB */
CLKDEV_DEV_ID("sh-sci.0", &mstp_clks[MSTP204]), /* SCIFA0 */
CLKDEV_DEV_ID("e6c40000.serial", &mstp_clks[MSTP204]), /* SCIFA0 */
CLKDEV_DEV_ID("sh-sci.1", &mstp_clks[MSTP203]), /* SCIFA1 */
static int ve_init_opp_table(struct device *cpu_dev)
{
- int cluster = topology_physical_package_id(cpu_dev->id);
- int idx, ret = 0, max_opp = info->num_opps[cluster];
- struct ve_spc_opp *opps = info->opps[cluster];
+ int cluster;
+ int idx, ret = 0, max_opp;
+ struct ve_spc_opp *opps;
+
+ cluster = topology_physical_package_id(cpu_dev->id);
+ cluster = cluster < 0 ? 0 : cluster;
+
+ max_opp = info->num_opps[cluster];
+ opps = info->opps[cluster];
for (idx = 0; idx < max_opp; idx++, opps++) {
ret = dev_pm_opp_add(cpu_dev, opps->freq * 1000, opps->u_volt);
spc->hw.init = &init;
spc->cluster = topology_physical_package_id(cpu_dev->id);
+ spc->cluster = spc->cluster < 0 ? 0 : spc->cluster;
+
init.name = dev_name(cpu_dev);
init.ops = &clk_spc_ops;
init.flags = CLK_IS_ROOT | CLK_GET_RATE_NOCACHE;
*/
.align 5
ENTRY(v6_early_abort)
-#ifdef CONFIG_CPU_V6
- sub r1, sp, #4 @ Get unused stack location
- strex r0, r1, [r1] @ Clear the exclusive monitor
-#elif defined(CONFIG_CPU_32v6K)
- clrex
-#endif
mrc p15, 0, r1, c5, c0, 0 @ get FSR
mrc p15, 0, r0, c6, c0, 0 @ get FAR
/*
*/
.align 5
ENTRY(v7_early_abort)
- /*
- * The effect of data aborts on on the exclusive access monitor are
- * UNPREDICTABLE. Do a CLREX to clear the state
- */
- clrex
-
mrc p15, 0, r1, c5, c0, 0 @ get FSR
mrc p15, 0, r0, c6, c0, 0 @ get FAR
-obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o
+obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o mm32.o
xen_domain_type = XEN_HVM_DOMAIN;
xen_setup_features();
+
+ if (!xen_feature(XENFEAT_grant_map_identity)) {
+ pr_warn("Please upgrade your Xen.\n"
+ "If your platform has any non-coherent DMA devices, they won't work properly.\n");
+ }
+
if (xen_feature(XENFEAT_dom0))
xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
else
--- /dev/null
+#include <linux/cpu.h>
+#include <linux/dma-mapping.h>
+#include <linux/gfp.h>
+#include <linux/highmem.h>
+
+#include <xen/features.h>
+
+static DEFINE_PER_CPU(unsigned long, xen_mm32_scratch_virt);
+static DEFINE_PER_CPU(pte_t *, xen_mm32_scratch_ptep);
+
+static int alloc_xen_mm32_scratch_page(int cpu)
+{
+ struct page *page;
+ unsigned long virt;
+ pmd_t *pmdp;
+ pte_t *ptep;
+
+ if (per_cpu(xen_mm32_scratch_ptep, cpu) != NULL)
+ return 0;
+
+ page = alloc_page(GFP_KERNEL);
+ if (page == NULL) {
+ pr_warn("Failed to allocate xen_mm32_scratch_page for cpu %d\n", cpu);
+ return -ENOMEM;
+ }
+
+ virt = (unsigned long)__va(page_to_phys(page));
+ pmdp = pmd_offset(pud_offset(pgd_offset_k(virt), virt), virt);
+ ptep = pte_offset_kernel(pmdp, virt);
+
+ per_cpu(xen_mm32_scratch_virt, cpu) = virt;
+ per_cpu(xen_mm32_scratch_ptep, cpu) = ptep;
+
+ return 0;
+}
+
+static int xen_mm32_cpu_notify(struct notifier_block *self,
+ unsigned long action, void *hcpu)
+{
+ int cpu = (long)hcpu;
+ switch (action) {
+ case CPU_UP_PREPARE:
+ if (alloc_xen_mm32_scratch_page(cpu))
+ return NOTIFY_BAD;
+ break;
+ default:
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static struct notifier_block xen_mm32_cpu_notifier = {
+ .notifier_call = xen_mm32_cpu_notify,
+};
+
+static void* xen_mm32_remap_page(dma_addr_t handle)
+{
+ unsigned long virt = get_cpu_var(xen_mm32_scratch_virt);
+ pte_t *ptep = __get_cpu_var(xen_mm32_scratch_ptep);
+
+ *ptep = pfn_pte(handle >> PAGE_SHIFT, PAGE_KERNEL);
+ local_flush_tlb_kernel_page(virt);
+
+ return (void*)virt;
+}
+
+static void xen_mm32_unmap(void *vaddr)
+{
+ put_cpu_var(xen_mm32_scratch_virt);
+}
+
+
+/* functions called by SWIOTLB */
+
+static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
+ size_t size, enum dma_data_direction dir,
+ void (*op)(const void *, size_t, int))
+{
+ unsigned long pfn;
+ size_t left = size;
+
+ pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;
+ offset %= PAGE_SIZE;
+
+ do {
+ size_t len = left;
+ void *vaddr;
+
+ if (!pfn_valid(pfn))
+ {
+ /* Cannot map the page, we don't know its physical address.
+ * Return and hope for the best */
+ if (!xen_feature(XENFEAT_grant_map_identity))
+ return;
+ vaddr = xen_mm32_remap_page(handle) + offset;
+ op(vaddr, len, dir);
+ xen_mm32_unmap(vaddr - offset);
+ } else {
+ struct page *page = pfn_to_page(pfn);
+
+ if (PageHighMem(page)) {
+ if (len + offset > PAGE_SIZE)
+ len = PAGE_SIZE - offset;
+
+ if (cache_is_vipt_nonaliasing()) {
+ vaddr = kmap_atomic(page);
+ op(vaddr + offset, len, dir);
+ kunmap_atomic(vaddr);
+ } else {
+ vaddr = kmap_high_get(page);
+ if (vaddr) {
+ op(vaddr + offset, len, dir);
+ kunmap_high(page);
+ }
+ }
+ } else {
+ vaddr = page_address(page) + offset;
+ op(vaddr, len, dir);
+ }
+ }
+
+ offset = 0;
+ pfn++;
+ left -= len;
+ } while (left);
+}
+
+static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle,
+ size_t size, enum dma_data_direction dir)
+{
+ /* Cannot use __dma_page_dev_to_cpu because we don't have a
+ * struct page for handle */
+
+ if (dir != DMA_TO_DEVICE)
+ outer_inv_range(handle, handle + size);
+
+ dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_unmap_area);
+}
+
+static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
+ size_t size, enum dma_data_direction dir)
+{
+
+ dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_map_area);
+
+ if (dir == DMA_FROM_DEVICE) {
+ outer_inv_range(handle, handle + size);
+ } else {
+ outer_clean_range(handle, handle + size);
+ }
+}
+
+void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+
+{
+ if (!__generic_dma_ops(hwdev)->unmap_page)
+ return;
+ if (dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ return;
+
+ __xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
+}
+
+void xen_dma_sync_single_for_cpu(struct device *hwdev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+ if (!__generic_dma_ops(hwdev)->sync_single_for_cpu)
+ return;
+ __xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
+}
+
+void xen_dma_sync_single_for_device(struct device *hwdev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+ if (!__generic_dma_ops(hwdev)->sync_single_for_device)
+ return;
+ __xen_dma_page_cpu_to_dev(hwdev, handle, size, dir);
+}
+
+int __init xen_mm32_init(void)
+{
+ int cpu;
+
+ if (!xen_initial_domain())
+ return 0;
+
+ register_cpu_notifier(&xen_mm32_cpu_notifier);
+ get_online_cpus();
+ for_each_online_cpu(cpu) {
+ if (alloc_xen_mm32_scratch_page(cpu)) {
+ put_online_cpus();
+ unregister_cpu_notifier(&xen_mm32_cpu_notifier);
+ return -ENOMEM;
+ }
+ }
+ put_online_cpus();
+
+ return 0;
+}
+arch_initcall(xen_mm32_init);
unsigned long pfn;
unsigned long mfn;
unsigned long nr_pages;
- struct rb_node rbnode_mach;
struct rb_node rbnode_phys;
};
static rwlock_t p2m_lock;
struct rb_root phys_to_mach = RB_ROOT;
EXPORT_SYMBOL_GPL(phys_to_mach);
-static struct rb_root mach_to_phys = RB_ROOT;
static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new)
{
parent = *link;
entry = rb_entry(parent, struct xen_p2m_entry, rbnode_phys);
- if (new->mfn == entry->mfn)
- goto err_out;
if (new->pfn == entry->pfn)
goto err_out;
}
EXPORT_SYMBOL_GPL(__pfn_to_mfn);
-static int xen_add_mach_to_phys_entry(struct xen_p2m_entry *new)
-{
- struct rb_node **link = &mach_to_phys.rb_node;
- struct rb_node *parent = NULL;
- struct xen_p2m_entry *entry;
- int rc = 0;
-
- while (*link) {
- parent = *link;
- entry = rb_entry(parent, struct xen_p2m_entry, rbnode_mach);
-
- if (new->mfn == entry->mfn)
- goto err_out;
- if (new->pfn == entry->pfn)
- goto err_out;
-
- if (new->mfn < entry->mfn)
- link = &(*link)->rb_left;
- else
- link = &(*link)->rb_right;
- }
- rb_link_node(&new->rbnode_mach, parent, link);
- rb_insert_color(&new->rbnode_mach, &mach_to_phys);
- goto out;
-
-err_out:
- rc = -EINVAL;
- pr_warn("%s: cannot add pfn=%pa -> mfn=%pa: pfn=%pa -> mfn=%pa already exists\n",
- __func__, &new->pfn, &new->mfn, &entry->pfn, &entry->mfn);
-out:
- return rc;
-}
-
-unsigned long __mfn_to_pfn(unsigned long mfn)
-{
- struct rb_node *n = mach_to_phys.rb_node;
- struct xen_p2m_entry *entry;
- unsigned long irqflags;
-
- read_lock_irqsave(&p2m_lock, irqflags);
- while (n) {
- entry = rb_entry(n, struct xen_p2m_entry, rbnode_mach);
- if (entry->mfn <= mfn &&
- entry->mfn + entry->nr_pages > mfn) {
- read_unlock_irqrestore(&p2m_lock, irqflags);
- return entry->pfn + (mfn - entry->mfn);
- }
- if (mfn < entry->mfn)
- n = n->rb_left;
- else
- n = n->rb_right;
- }
- read_unlock_irqrestore(&p2m_lock, irqflags);
-
- return INVALID_P2M_ENTRY;
-}
-EXPORT_SYMBOL_GPL(__mfn_to_pfn);
-
int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
struct gnttab_map_grant_ref *kmap_ops,
struct page **pages, unsigned int count)
p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
if (p2m_entry->pfn <= pfn &&
p2m_entry->pfn + p2m_entry->nr_pages > pfn) {
- rb_erase(&p2m_entry->rbnode_mach, &mach_to_phys);
rb_erase(&p2m_entry->rbnode_phys, &phys_to_mach);
write_unlock_irqrestore(&p2m_lock, irqflags);
kfree(p2m_entry);
p2m_entry->mfn = mfn;
write_lock_irqsave(&p2m_lock, irqflags);
- if ((rc = xen_add_phys_to_mach_entry(p2m_entry) < 0) ||
- (rc = xen_add_mach_to_phys_entry(p2m_entry) < 0)) {
+ if ((rc = xen_add_phys_to_mach_entry(p2m_entry)) < 0) {
write_unlock_irqrestore(&p2m_lock, irqflags);
return false;
}
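Beyond dropping the mach_to_phys tree, this hunk repairs an operator-precedence bug: in C, '<' binds tighter than '=', so the old condition assigned the comparison result (0 or 1) to rc instead of the function's return value. A standalone demonstration (illustrative only; might_fail() is a made-up stand-in):

/* Precedence pitfall demo: '<' binds tighter than '='. */
#include <stdio.h>

static int might_fail(void) { return -22; }	/* e.g. -EINVAL */

int main(void)
{
	int rc;

	if ((rc = might_fail() < 0))	/* buggy: rc becomes 1 */
		printf("buggy rc = %d\n", rc);
	if ((rc = might_fail()) < 0)	/* fixed: rc becomes -22 */
		printf("fixed rc = %d\n", rc);
	return 0;
}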
# The byte offset of the kernel image in RAM from the start of RAM.
ifeq ($(CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET), y)
-TEXT_OFFSET := $(shell awk 'BEGIN {srand(); printf "0x%04x0\n", int(65535 * rand())}')
+TEXT_OFFSET := $(shell awk 'BEGIN {srand(); printf "0x%03x000\n", int(512 * rand())}')
else
TEXT_OFFSET := 0x00080000
endif
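The randomized TEXT_OFFSET is now drawn from 512 page-aligned values below 2MB instead of arbitrary 16-byte-aligned ones, matching the relaxed head.S sanity checks later in this section. An equivalent of the new awk expression in C (a sketch for illustration; the build really uses the awk one-liner above):

/* Prints one of 512 possible 4KB-aligned offsets in [0, 0x200000),
 * e.g. "0x1ff000" -- mirroring:
 * awk 'BEGIN { srand(); printf "0x%03x000\n", int(512 * rand()) }' */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
	srand((unsigned int)time(NULL));
	printf("0x%03x000\n", rand() % 512);
	return 0;
}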
CONFIG_BLK_DEV_SD=y
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_ATA=y
+CONFIG_AHCI_XGENE=y
+CONFIG_PHY_XGENE=y
CONFIG_PATA_PLATFORM=y
CONFIG_PATA_OF_PLATFORM=y
CONFIG_NETDEVICES=y
CONFIG_VIRTIO_NET=y
CONFIG_SMC91X=y
CONFIG_SMSC911X=y
+CONFIG_NET_XGENE=y
# CONFIG_WLAN is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_SERIO_SERPORT is not set
kernel_neon_begin_partial(28);
sha2_ce_transform(blocks, data, sctx->state, NULL, len);
kernel_neon_end();
- data += blocks * SHA256_BLOCK_SIZE;
}
static int sha224_finup(struct shash_desc *desc, const u8 *data,
*/
#define ARM_MAX_BRP 16
#define ARM_MAX_WRP 16
-#define ARM_MAX_HBP_SLOTS (ARM_MAX_BRP + ARM_MAX_WRP)
/* Virtual debug register bases. */
#define AARCH64_DBG_REG_BVR 0
((struct pt_regs *)(THREAD_START_SP + task_stack_page(p)) - 1)
#define KSTK_EIP(tsk) ((unsigned long)task_pt_regs(tsk)->pc)
-#define KSTK_ESP(tsk) ((unsigned long)task_pt_regs(tsk)->sp)
+#define KSTK_ESP(tsk) user_stack_pointer(task_pt_regs(tsk))
/*
* Prefetching support
(!((regs)->pstate & PSR_F_BIT))
#define user_stack_pointer(regs) \
- (!compat_user_mode(regs)) ? ((regs)->sp) : ((regs)->compat_sp)
+ (!compat_user_mode(regs) ? (regs)->sp : (regs)->compat_sp)
static inline unsigned long regs_return_value(struct pt_regs *regs)
{
#define __ASM_SPARSEMEM_H
#ifdef CONFIG_SPARSEMEM
-#define MAX_PHYSMEM_BITS 40
+#define MAX_PHYSMEM_BITS 48
#define SECTION_SIZE_BITS 30
#endif
#define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE+2)
#define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE+5)
-#define __NR_compat_syscalls 383
+#define __NR_compat_syscalls 386
#endif
#define __ARCH_WANT_SYS_CLONE
__SYSCALL(__NR_sched_getattr, sys_sched_getattr)
#define __NR_renameat2 382
__SYSCALL(__NR_renameat2, sys_renameat2)
+ /* 383 for seccomp */
+#define __NR_getrandom 384
+__SYSCALL(__NR_getrandom, sys_getrandom)
+#define __NR_memfd_create 385
+__SYSCALL(__NR_memfd_create, sys_memfd_create)
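Wiring a compat syscall on arm64 takes three coordinated edits: reserve the number, add the __SYSCALL table entry, and bump __NR_compat_syscalls past the highest number in use (383 is deliberately left as a hole for seccomp here). The same pattern for a later, purely hypothetical addition would be:

/* Hypothetical follow-up syscall (number and name are assumptions,
 * shown only to illustrate the pattern): */
#define __NR_example_call 386
__SYSCALL(__NR_example_call, sys_example_call)
/* ...and __NR_compat_syscalls above must then become 387. */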
if (l1ip != ICACHE_POLICY_PIPT)
set_bit(ICACHEF_ALIASING, &__icache_flags);
- if (l1ip == ICACHE_POLICY_AIVIVT);
+ if (l1ip == ICACHE_POLICY_AIVIVT)
set_bit(ICACHEF_AIVIVT, &__icache_flags);
pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu);
if (uefi_debug)
pr_cont("\n");
}
+
+ set_bit(EFI_MEMMAP, &efi.flags);
}
efi_native_runtime_setup();
set_bit(EFI_RUNTIME_SERVICES, &efi.flags);
+ efi.runtime_version = efi.systab->hdr.revision;
+
return 0;
err_unmap:
case CPU_PM_ENTER:
if (current->mm && !test_thread_flag(TIF_FOREIGN_FPSTATE))
fpsimd_save_state(¤t->thread.fpsimd_state);
+ this_cpu_write(fpsimd_last_state, NULL);
break;
case CPU_PM_EXIT:
if (current->mm)
#define KERNEL_RAM_VADDR (PAGE_OFFSET + TEXT_OFFSET)
-#if (TEXT_OFFSET & 0xf) != 0
-#error TEXT_OFFSET must be at least 16B aligned
-#elif (PAGE_OFFSET & 0xfffff) != 0
+#if (TEXT_OFFSET & 0xfff) != 0
+#error TEXT_OFFSET must be at least 4KB aligned
+#elif (PAGE_OFFSET & 0x1fffff) != 0
#error PAGE_OFFSET must be at least 2MB aligned
-#elif TEXT_OFFSET > 0xfffff
+#elif TEXT_OFFSET > 0x1fffff
#error TEXT_OFFSET must be less than 2MB
#endif
.long 0
.popsection
- .align 3
-2: .quad .
- .quad PAGE_OFFSET
-
#ifdef CONFIG_SMP
.align 3
1: .quad .
if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
return false;
- if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids)
+ if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
+ affinity = cpu_online_mask;
ret = true;
+ }
- /*
- * when using forced irq_set_affinity we must ensure that the cpu
- * being offlined is not present in the affinity mask, it may be
- * selected as the target CPU otherwise
- */
- affinity = cpu_online_mask;
c = irq_data_get_irq_chip(d);
if (!c->irq_set_affinity)
pr_debug("IRQ%u: unable to set affinity\n", d->irq);
- else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret)
+ else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret)
cpumask_copy(d->affinity, affinity);
return ret;
return regs->compat_lr;
}
+ if ((u32)idx == PERF_REG_ARM64_SP)
+ return regs->sp;
+
+ if ((u32)idx == PERF_REG_ARM64_PC)
+ return regs->pc;
+
return regs->regs[idx];
}
{
}
+static void tls_thread_flush(void)
+{
+ asm ("msr tpidr_el0, xzr");
+
+ if (is_compat_task()) {
+ current->thread.tp_value = 0;
+
+ /*
+ * We need to ensure ordering between the shadow state and the
+ * hardware state, so that we don't corrupt the hardware state
+ * with a stale shadow state during context switch.
+ */
+ barrier();
+ asm ("msr tpidrro_el0, xzr");
+ }
+}
+
void flush_thread(void)
{
fpsimd_flush_thread();
+ tls_thread_flush();
flush_ptrace_hw_breakpoint(current);
}
break;
}
}
- for (i = ARM_MAX_BRP; i < ARM_MAX_HBP_SLOTS && !bp; ++i) {
+
+ for (i = 0; i < ARM_MAX_WRP; ++i) {
if (current->thread.debug.hbp_watch[i] == bp) {
info.si_errno = -((i << 1) + 1);
break;
kbuf += sizeof(reg);
} else {
ret = copy_to_user(ubuf, &reg, sizeof(reg));
- if (ret)
+ if (ret) {
+ ret = -EFAULT;
break;
+ }
ubuf += sizeof(reg);
}
kbuf += sizeof(reg);
} else {
ret = copy_from_user(®, ubuf, sizeof(reg));
- if (ret)
- return ret;
+ if (ret) {
+ ret = -EFAULT;
+ break;
+ }
ubuf += sizeof(reg);
}
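
The subtlety both hunks address: copy_to_user()/copy_from_user() return the number of bytes left uncopied, never a negative errno, so the raw count must be translated to -EFAULT before it reaches the caller. A standalone sketch of the idiom, with a stand-in for copy_to_user():

#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for copy_to_user(): returns the number of bytes NOT copied
 * (0 on success) - that is the kernel contract being modeled here. */
static size_t fake_copy_to_user(void *dst, const void *src, size_t n)
{
	if (!dst)
		return n;	/* nothing copied */
	memcpy(dst, src, n);
	return 0;
}

static int export_reg(void *ubuf, unsigned long reg)
{
	/* Wrong: returning the raw count hands the caller a positive
	 * "error". Right: translate any shortfall into -EFAULT. */
	if (fake_copy_to_user(ubuf, &reg, sizeof(reg)))
		return -EFAULT;
	return 0;
}

int main(void)
{
	unsigned long dst;
	printf("ok=%d fault=%d\n",
	       export_reg(&dst, 42UL), export_reg(NULL, 42UL));
	return 0;
}
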
if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
trace_sys_enter(regs, regs->syscallno);
-#ifdef CONFIG_AUDITSYSCALL
audit_syscall_entry(syscall_get_arch(), regs->syscallno,
regs->orig_x0, regs->regs[1], regs->regs[2], regs->regs[3]);
-#endif
return regs->syscallno;
}
asmlinkage void syscall_trace_exit(struct pt_regs *regs)
{
-#ifdef CONFIG_AUDITSYSCALL
audit_syscall_exit(regs);
-#endif
if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
trace_sys_exit(regs, regs_return_value(regs));
#endif
static const char *cpu_name;
+static const char *machine_name;
phys_addr_t __fdt_pointer __initdata;
/*
while (true)
cpu_relax();
}
+
+ machine_name = of_flat_dt_get_machine_name();
}
/*
{
int i;
- /*
- * Dump out the common processor features in a single line. Userspace
- * should read the hwcaps with getauxval(AT_HWCAP) rather than
- * attempting to parse this.
- */
- seq_puts(m, "features\t:");
- for (i = 0; hwcap_str[i]; i++)
- if (elf_hwcap & (1 << i))
- seq_printf(m, " %s", hwcap_str[i]);
- seq_puts(m, "\n\n");
+ seq_printf(m, "Processor\t: %s rev %d (%s)\n",
+ cpu_name, read_cpuid_id() & 15, ELF_PLATFORM);
for_each_online_cpu(i) {
- struct cpuinfo_arm64 *cpuinfo = &per_cpu(cpu_data, i);
- u32 midr = cpuinfo->reg_midr;
-
/*
* glibc reads /proc/cpuinfo to determine the number of
* online processors, looking for lines beginning with
#ifdef CONFIG_SMP
seq_printf(m, "processor\t: %d\n", i);
#endif
- seq_printf(m, "implementer\t: 0x%02x\n",
- MIDR_IMPLEMENTOR(midr));
- seq_printf(m, "variant\t\t: 0x%x\n", MIDR_VARIANT(midr));
- seq_printf(m, "partnum\t\t: 0x%03x\n", MIDR_PARTNUM(midr));
- seq_printf(m, "revision\t: 0x%x\n\n", MIDR_REVISION(midr));
}
+ /* dump out the processor features */
+ seq_puts(m, "Features\t: ");
+
+ for (i = 0; hwcap_str[i]; i++)
+ if (elf_hwcap & (1 << i))
+ seq_printf(m, "%s ", hwcap_str[i]);
+
+ seq_printf(m, "\nCPU implementer\t: 0x%02x\n", read_cpuid_id() >> 24);
+ seq_printf(m, "CPU architecture: AArch64\n");
+ seq_printf(m, "CPU variant\t: 0x%x\n", (read_cpuid_id() >> 20) & 15);
+ seq_printf(m, "CPU part\t: 0x%03x\n", (read_cpuid_id() >> 4) & 0xfff);
+ seq_printf(m, "CPU revision\t: %d\n", read_cpuid_id() & 15);
+
+ seq_puts(m, "\n");
+
+ seq_printf(m, "Hardware\t: %s\n", machine_name);
+
return 0;
}
case __ARM_NR_compat_set_tls:
current->thread.tp_value = regs->regs[0];
+
+ /*
+ * Protect against register corruption from context switch.
+ * See comment in tls_thread_flush.
+ */
+ barrier();
asm ("msr tpidrro_el0, %0" : : "r" (regs->regs[0]));
return 0;
else
kvm_vcpu_block(vcpu);
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+
return 1;
}
msr mair_el2, x4
isb
+ /* Invalidate the stale TLBs from Bootloader */
+ tlbi alle2
+ dsb sy
+
mrs x4, sctlr_el2
and x4, x4, #SCTLR_EL2_EE // preserve endianness of EL2
ldr x5, =SCTLR_EL2_FLAGS
#include <linux/of_fdt.h>
#include <linux/dma-mapping.h>
#include <linux/dma-contiguous.h>
+#include <linux/efi.h>
#include <asm/fixmap.h>
#include <asm/sections.h>
memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start);
#endif
- early_init_fdt_scan_reserved_mem();
+ if (!efi_enabled(EFI_MEMMAP))
+ early_init_fdt_scan_reserved_mem();
/* 4GB maximum for 32-bit only capable devices */
if (IS_ENABLED(CONFIG_ZONE_DMA))
#define KSTK_EIP(tsk) ((tsk)->thread.frame0->pc)
#define KSTK_ESP(tsk) ((tsk)->thread.frame0->sp)
-#define cpu_relax() barrier()
+#define cpu_relax() barrier()
+#define cpu_relax_lowlatency() cpu_relax()
/* data cache prefetch */
#define ARCH_HAS_PREFETCH
);
local_irq_restore(flags);
}
+EXPORT_SYMBOL(flush_icache_range);
void hexagon_clean_dcache_range(unsigned long start, unsigned long end)
{
config KEXEC
bool "kexec system call"
depends on !IA64_HP_SIM && (!SMP || HOTPLUG_CPU)
- select CRYPTO
- select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
-#define NR_syscalls 316 /* length of syscall table */
+#define NR_syscalls 317 /* length of syscall table */
/*
* The following defines stop scripts/checksyscalls.sh from complaining about
#define __NR_sched_getattr 1337
#define __NR_renameat2 1338
#define __NR_getrandom 1339
+#define __NR_memfd_create 1340
#endif /* _UAPI_ASM_IA64_UNISTD_H */
data8 sys_sched_getattr
data8 sys_renameat2
data8 sys_getrandom
+ data8 sys_memfd_create // 1340
.org sys_call_table + 8*NR_syscalls // guard against failures to increase NR_syscalls
#endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */
config KEXEC
bool "kexec system call"
depends on M68KCLASSIC
- select CRYPTO
- select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
#include <uapi/asm/unistd.h>
-#define NR_syscalls 352
+#define NR_syscalls 354
#define __ARCH_WANT_OLD_READDIR
#define __ARCH_WANT_OLD_STAT
#define __NR_sched_setattr 349
#define __NR_sched_getattr 350
#define __NR_renameat2 351
+#define __NR_getrandom 352
+#define __NR_memfd_create 353
#endif /* _UAPI_ASM_M68K_UNISTD_H_ */
.long sys_sched_setattr
.long sys_sched_getattr /* 350 */
.long sys_renameat2
+ .long sys_getrandom
+ .long sys_memfd_create
endmenu
-menu "Advanced setup"
+menu "Kernel features"
config ADVANCED_OPTIONS
bool "Prompt for advanced kernel configuration options"
endchoice
-endmenu
-
source "mm/Kconfig"
+endmenu
+
menu "Executable file formats"
source "fs/Kconfig.binfmt"
#include <asm/percpu.h>
#include <asm/ptrace.h>
+#include <linux/linkage.h>
/*
* These are per-cpu variables required in entry.S, among other
if ((get_fs().seg < ((unsigned long)addr)) ||
(get_fs().seg < ((unsigned long)addr + size - 1))) {
- pr_debug("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
+ pr_devel("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
type ? "WRITE" : "READ ", (__force u32)addr, (u32)size,
(u32)get_fs().seg);
return 0;
}
ok:
- pr_debug("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
+ pr_devel("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
type ? "WRITE" : "READ ", (__force u32)addr, (u32)size,
(u32)get_fs().seg);
return 1;
#endif /* __ASSEMBLY__ */
-#define __NR_syscalls 381
+#define __NR_syscalls 387
#endif /* _ASM_MICROBLAZE_UNISTD_H */
#define __NR_sched_setattr 381
#define __NR_sched_getattr 382
#define __NR_renameat2 383
+#define __NR_seccomp 384
+#define __NR_getrandom 385
+#define __NR_memfd_create 386
#endif /* _UAPI_ASM_MICROBLAZE_UNISTD_H */
.long sys_sched_setattr
.long sys_sched_getattr
.long sys_renameat2
+ .long sys_seccomp
+ .long sys_getrandom /* 385 */
+ .long sys_memfd_create
config KEXEC
bool "Kexec system call"
- select CRYPTO
- select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
pr_warn("DB1200: cant get I2C close to 50MHz\n");
else
clk_set_rate(c, pfc);
+ clk_prepare_enable(c);
clk_put(c);
}
}
/* Audio PSC clock is supplied externally. (FIXME: platdata!!) */
- c = clk_get(NULL, "psc1_intclk");
- if (!IS_ERR(c)) {
- clk_prepare_enable(c);
- clk_put(c);
- }
__raw_writel(PSC_SEL_CLK_SERCLK,
(void __iomem *)KSEG1ADDR(AU1550_PSC1_PHYS_ADDR) + PSC_SEL_OFFSET);
wmb();
switch (bcm47xx_bus_type) {
#ifdef CONFIG_BCM47XX_SSB
case BCM47XX_BUS_TYPE_SSB:
- ssb_watchdog_timer_set(&bcm47xx_bus.ssb, 3);
+ if (bcm47xx_bus.ssb.chip_id == 0x4785)
+ write_c0_diag4(1 << 22);
+ ssb_watchdog_timer_set(&bcm47xx_bus.ssb, 1);
+ if (bcm47xx_bus.ssb.chip_id == 0x4785) {
+ __asm__ __volatile__(
+ ".set\tmips3\n\t"
+ "sync\n\t"
+ "wait\n\t"
+ ".set\tmips0");
+ }
break;
#endif
#ifdef CONFIG_BCM47XX_BCMA
case BCM47XX_BUS_TYPE_BCMA:
- bcma_chipco_watchdog_timer_set(&bcm47xx_bus.bcma.bus.drv_cc, 3);
+ bcma_chipco_watchdog_timer_set(&bcm47xx_bus.bcma.bus.drv_cc, 1);
break;
#endif
}
static int octeon_uart;
extern asmlinkage void handle_int(void);
-extern asmlinkage void plat_irq_dispatch(void);
/**
* Return non zero if we are currently running in the Octeon simulator
octeon_kill_core(NULL);
}
+static char __read_mostly octeon_system_type[80];
+
+static int __init init_octeon_system_type(void)
+{
+ snprintf(octeon_system_type, sizeof(octeon_system_type), "%s (%s)",
+ cvmx_board_type_to_string(octeon_bootinfo->board_type),
+ octeon_model_get_string(read_c0_prid()));
+
+ return 0;
+}
+early_initcall(init_octeon_system_type);
+
/**
* Return a string representing the system type
*
*/
const char *octeon_board_type_string(void)
{
- static char name[80];
- sprintf(name, "%s (%s)",
- cvmx_board_type_to_string(octeon_bootinfo->board_type),
- octeon_model_get_string(read_c0_prid()));
- return name;
+ return octeon_system_type;
}
const char *get_system_type(void)
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2014, Imagination Technologies Ltd.
+ *
+ * EVA functions for generic code
+ */
+
+#ifndef _ASM_EVA_H
+#define _ASM_EVA_H
+
+#include <kernel-entry-init.h>
+
+#ifdef __ASSEMBLY__
+
+#ifdef CONFIG_EVA
+
+/*
+ * EVA early init code
+ *
+ * Platforms must define their own 'platform_eva_init' macro in
+ * their kernel-entry-init.h header. This macro usually does the
+ * platform specific configuration of the segmentation registers,
+ * and it is normally called from assembly code.
+ *
+ */
+
+.macro eva_init
+platform_eva_init
+.endm
+
+#else
+
+.macro eva_init
+.endm
+
+#endif /* CONFIG_EVA */
+
+#endif /* __ASSEMBLY__ */
+
+#endif
#endif
#define GICBIS(reg, mask, bits) \
do { u32 data; \
- GICREAD((reg), data); \
+ GICREAD(reg, data); \
data &= ~(mask); \
data |= ((bits) & (mask)); \
GICWRITE((reg), data); \
#define irq_canonicalize(irq) (irq) /* Sane hardware, sane code ... */
#endif
+asmlinkage void plat_irq_dispatch(void);
+
extern void do_IRQ(unsigned int irq);
extern void arch_init_irq(void);
#ifndef __ASM_MACH_MIPS_KERNEL_ENTRY_INIT_H
#define __ASM_MACH_MIPS_KERNEL_ENTRY_INIT_H
+#include <asm/regdef.h>
+#include <asm/mipsregs.h>
+
/*
* Prepare segments for EVA boot:
*
* This is in case the processor boots in legacy configuration
* (SI_EVAReset is de-asserted and CONFIG5.K == 0)
*
- * On entry, t1 is loaded with CP0_CONFIG
- *
* ========================= Mappings =============================
* Virtual memory Physical memory Mapping
* 0x00000000 - 0x7fffffff 0x80000000 - 0xffffffff MUSUK (kuseg)
*
*
* Lowmem is expanded to 2GB
+ *
+ * The following code uses the t0, t1, t2 and ra registers without
+ * previously preserving them.
+ *
*/
- .macro eva_entry
+ .macro platform_eva_init
+
+ .set push
+ .set reorder
/*
* Get Config.K0 value and use it to program
* the segmentation registers
*/
+ mfc0 t1, CP0_CONFIG
andi t1, 0x7 /* CCA */
move t2, t1
ins t2, t1, 16, 3
mtc0 t0, $16, 5
sync
jal mips_ihb
+
+ .set pop
.endm
.macro kernel_entry_setup
sll t0, t0, 6 /* SC bit */
bgez t0, 9f
- eva_entry
+ platform_eva_init
b 0f
9:
/* Assume we came from YAMON... */
#ifdef CONFIG_EVA
sync
ehb
- mfc0 t1, CP0_CONFIG
- eva_entry
+ platform_eva_init
#endif
.endm
#include <asm/mach-netlogic/multi-node.h>
-#ifdef CONFIG_SMP
-#define topology_physical_package_id(cpu) cpu_to_node(cpu)
-#define topology_core_id(cpu) (cpu_logical_map(cpu) / NLM_THREADS_PER_CORE)
-#define topology_thread_cpumask(cpu) (&cpu_sibling_map[cpu])
-#define topology_core_cpumask(cpu) cpumask_of_node(cpu_to_node(cpu))
-#endif
-
#include <asm-generic/topology.h>
#endif /* _ASM_MACH_NETLOGIC_TOPOLOGY_H */
} \
} while(0)
+extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+ pte_t pteval);
+
#if defined(CONFIG_64BIT_PHYS_ADDR) && defined(CONFIG_CPU_MIPS32)
#define pte_none(pte) (!(((pte).pte_low | (pte).pte_high) & ~_PAGE_GLOBAL))
}
}
}
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
}
#endif
}
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
extern void __update_tlb(struct vm_area_struct *vma, unsigned long address,
pte_t pte);
-extern void __update_cache(struct vm_area_struct *vma, unsigned long address,
- pte_t pte);
static inline void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t *ptep)
{
pte_t pte = *ptep;
__update_tlb(vma, address, pte);
- __update_cache(vma, address, pte);
}
static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
{
int arch = EM_MIPS;
#ifdef CONFIG_64BIT
- if (!test_thread_flag(TIF_32BIT_REGS))
+ if (!test_thread_flag(TIF_32BIT_REGS)) {
arch |= __AUDIT_ARCH_64BIT;
- if (test_thread_flag(TIF_32BIT_ADDR))
- arch |= __AUDIT_ARCH_CONVENTION_MIPS64_N32;
+ /* N32 sets only TIF_32BIT_ADDR */
+ if (test_thread_flag(TIF_32BIT_ADDR))
+ arch |= __AUDIT_ARCH_CONVENTION_MIPS64_N32;
+ }
#endif
#if defined(__LITTLE_ENDIAN)
arch |= __AUDIT_ARCH_LE;
#include <asm/asm-offsets.h>
#include <asm/asmmacro.h>
#include <asm/cacheops.h>
+#include <asm/eva.h>
#include <asm/mipsregs.h>
#include <asm/mipsmtregs.h>
#include <asm/pm.h>
1: jal mips_cps_core_init
nop
+ /* Do any EVA initialization if necessary */
+ eva_init
+
/*
* Boot any other VPEs within this core that should be online, and
* deactivate this VPE if it should be offline.
if (mipspmu.irq >= 0) {
/* Request my own irq handler. */
err = request_irq(mipspmu.irq, mipsxx_pmu_handle_irq,
- IRQF_PERCPU | IRQF_NOBALANCING,
+ IRQF_PERCPU | IRQF_NOBALANCING | IRQF_NO_THREAD,
"mips_perf_pmu", NULL);
if (err) {
pr_warning("Unable to request IRQ%d for MIPS "
move s0, t2 # Save syscall pointer
move a0, sp
/*
- * syscall number is in v0 unless we called syscall(__NR_###)
+ * absolute syscall number is in v0 unless we called syscall(__NR_###)
* where the real syscall number is in a0
* note: NR_syscall is the first O32 syscall but the macro is
* only defined when compiling with -mabi=32 (CONFIG_32BIT)
* therefore __NR_O32_Linux is used (4000)
*/
- addiu a1, v0, __NR_O32_Linux
- bnez v0, 1f /* __NR_syscall at offset 0 */
- lw a1, PT_R4(sp)
+ .set push
+ .set reorder
+ subu t1, v0, __NR_O32_Linux
+ move a1, v0
+ bnez t1, 1f /* __NR_syscall at offset 0 */
+ lw a1, PT_R4(sp) /* Arg1 for __NR_syscall case */
+ .set pop
1: jal syscall_trace_enter
static int loongson_cu2_call(struct notifier_block *nfb, unsigned long action,
void *data)
{
- int fpu_enabled;
+ int fpu_owned;
int fr = !test_thread_flag(TIF_32BIT_FPREGS);
switch (action) {
case CU2_EXCEPTION:
preempt_disable();
- fpu_enabled = read_c0_status() & ST0_CU1;
+ fpu_owned = __is_fpu_owner();
if (!fr)
set_c0_status(ST0_CU1 | ST0_CU2);
else
KSTK_STATUS(current) |= ST0_FR;
else
KSTK_STATUS(current) &= ~ST0_FR;
- /* If FPU is enabled, we needn't init or restore fp */
- if(!fpu_enabled) {
+ /* If FPU is owned, we needn't init or restore fp */
+ if (!fpu_owned) {
set_thread_flag(TIF_USEDFPU);
if (!used_math()) {
_init_fpu();
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/sections.h>
-#include <linux/bootmem.h>
-#include <linux/init.h>
#include <linux/irq.h>
#include <asm/bootinfo.h>
#include <asm/mc146818-time.h>
EXPORT_SYMBOL(__flush_anon_page);
-void __update_cache(struct vm_area_struct *vma, unsigned long address,
- pte_t pte)
+static void mips_flush_dcache_from_pte(pte_t pteval, unsigned long address)
{
struct page *page;
- unsigned long pfn, addr;
- int exec = (vma->vm_flags & VM_EXEC) && !cpu_has_ic_fills_f_dc;
+ unsigned long pfn = pte_pfn(pteval);
- pfn = pte_pfn(pte);
if (unlikely(!pfn_valid(pfn)))
return;
+
page = pfn_to_page(pfn);
if (page_mapping(page) && Page_dcache_dirty(page)) {
- addr = (unsigned long) page_address(page);
- if (exec || pages_do_alias(addr, address & PAGE_MASK))
- flush_data_cache_page(addr);
+ unsigned long page_addr = (unsigned long) page_address(page);
+
+ if (!cpu_has_ic_fills_f_dc ||
+ pages_do_alias(page_addr, address & PAGE_MASK))
+ flush_data_cache_page(page_addr);
ClearPageDcacheDirty(page);
}
}
+void set_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pteval)
+{
+ if (cpu_has_dc_aliases || !cpu_has_ic_fills_f_dc) {
+ if (pte_present(pteval))
+ mips_flush_dcache_from_pte(pteval, addr);
+ }
+
+ set_pte(ptep, pteval);
+}
+
unsigned long _page_cachable_default;
EXPORT_SYMBOL(_page_cachable_default);
/* otherwise look in the environment */
memsize_str = fw_getenv("memsize");
- if (memsize_str)
- tmp = kstrtol(memsize_str, 0, &memsize);
+ if (memsize_str) {
+ tmp = kstrtoul(memsize_str, 0, &memsize);
+ if (tmp)
+ pr_warn("Failed to read the 'memsize' env variable.\n");
+ }
if (eva) {
/* Look for ememsize for EVA */
ememsize_str = fw_getenv("ememsize");
- if (ememsize_str)
- tmp = kstrtol(ememsize_str, 0, &ememsize);
+ if (ememsize_str) {
+ tmp = kstrtoul(ememsize_str, 0, &ememsize);
+ if (tmp)
+ pr_warn("Failed to read the 'ememsize' env variable.\n");
+ }
}
if (!memsize && !ememsize) {
pr_warn("memsize not set in YAMON, set to default (32Mb)\n");
* the range 40-71.
*/
-asmlinkage void plat_irq_dispatch(struct pt_regs *regs)
+asmlinkage void plat_irq_dispatch(void)
{
u32 pending;
source "arch/parisc/Kconfig.debug"
+config SECCOMP
+ def_bool y
+ prompt "Enable seccomp to safely compute untrusted bytecode"
+ ---help---
+ This kernel feature is useful for number crunching applications
+ that may need to compute untrusted bytecode during their
+ execution. By using pipes or other transports made available to
+ the process as file descriptors supporting the read/write
+ syscalls, it's possible to isolate those applications in
+ their own address space using seccomp. Once seccomp is
+ enabled via prctl(PR_SET_SECCOMP), it cannot be disabled
+ and the task is only allowed to execute a few safe syscalls
+ defined by each seccomp mode.
+
+ If unsure, say Y. Only embedded should say N here.
+
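The help text above describes classic strict-mode seccomp. A minimal userspace sketch of enabling it via prctl(2); after the call only read, write, exit and sigreturn remain permitted, and any other syscall kills the task:

#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void)
{
	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) < 0) {
		perror("prctl(PR_SET_SECCOMP)");
		return 1;
	}
	write(1, "sandboxed\n", 10);	/* still allowed */
	_exit(0);			/* exit is allowed, too */
}
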
source "security/Kconfig"
source "crypto/Kconfig"
}
/* String could be altered by userspace after strlen_user() */
- fsname[len] = '\0';
+ fsname[len - 1] = '\0';
printk(KERN_DEBUG "that is '%s' as (char *)\n", fsname);
if ( !strcmp(fsname, "hfs") ) {
--- /dev/null
+#ifndef _ASM_PARISC_SECCOMP_H
+#define _ASM_PARISC_SECCOMP_H
+
+#include <linux/unistd.h>
+
+#define __NR_seccomp_read __NR_read
+#define __NR_seccomp_write __NR_write
+#define __NR_seccomp_exit __NR_exit
+#define __NR_seccomp_sigreturn __NR_rt_sigreturn
+
+#define __NR_seccomp_read_32 __NR_read
+#define __NR_seccomp_write_32 __NR_write
+#define __NR_seccomp_exit_32 __NR_exit
+#define __NR_seccomp_sigreturn_32 __NR_rt_sigreturn
+
+#endif /* _ASM_PARISC_SECCOMP_H */
#define TIF_NOTIFY_RESUME 8 /* callback before returning to user */
#define TIF_SINGLESTEP 9 /* single stepping? */
#define TIF_BLOCKSTEP 10 /* branch stepping? */
+#define TIF_SECCOMP 11 /* secure computing */
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP)
+#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | \
_TIF_NEED_RESCHED)
#define _TIF_SYSCALL_TRACE_MASK (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP | \
- _TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT)
+ _TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT | \
+ _TIF_SECCOMP)
#ifdef CONFIG_64BIT
# ifdef CONFIG_COMPAT
#define __NR_sched_getattr (__NR_Linux + 335)
#define __NR_utimes (__NR_Linux + 336)
#define __NR_renameat2 (__NR_Linux + 337)
+#define __NR_seccomp (__NR_Linux + 338)
+#define __NR_getrandom (__NR_Linux + 339)
+#define __NR_memfd_create (__NR_Linux + 340)
-#define __NR_Linux_syscalls (__NR_renameat2 + 1)
+#define __NR_Linux_syscalls (__NR_memfd_create + 1)
#define __IGNORE_select /* newselect */
{
long ret = 0;
+ /* Do the secure computing check first. */
+ if (secure_computing(regs->gr[20])) {
+ /* seccomp failures shouldn't expose any additional code. */
+ return -1;
+ }
+
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
tracehook_report_syscall_entry(regs))
ret = -1L;
/* ADDRESS 0xb0 to 0xb8, lws uses two insns for entry */
/* Light-weight-syscall entry must always be located at 0xb0 */
/* WARNING: Keep this number updated with table size changes */
-#define __NR_lws_entries (2)
+#define __NR_lws_entries (3)
lws_entry:
gate lws_start, %r0 /* increase privilege */
/***************************************************
- Implementing CAS as an atomic operation:
+ Implementing 32bit CAS as an atomic operation:
%r26 - Address to examine
%r25 - Old value to check (old)
ASM_EXCEPTIONTABLE_ENTRY(2b-linux_gateway_page, 3b-linux_gateway_page)
+ /***************************************************
+ New CAS implementation which uses pointers and variable size
+ information. The values pointed to by old and new MUST NOT change
+ while performing CAS. The lock only protects the value at %r26.
+
+ %r26 - Address to examine
+ %r25 - Pointer to the value to check (old)
+ %r24 - Pointer to the value to set (new)
+ %r23 - Size of the variable (0/1/2/3 for 8/16/32/64 bit)
+ %r28 - Return non-zero on failure
+ %r21 - Kernel error code
+
+ %r21 has the following meanings:
+
+ EAGAIN - CAS is busy, ldcw failed, try again.
+ EFAULT - Read or write failed.
+
+ Scratch: r20, r22, r28, r29, r1, fr4 (32bit for 64bit CAS only)
+
+ ****************************************************/
+
+ /* ELF32 Process entry path */
+lws_compare_and_swap_2:
+#ifdef CONFIG_64BIT
+ /* Clip the input registers */
+ depdi 0, 31, 32, %r26
+ depdi 0, 31, 32, %r25
+ depdi 0, 31, 32, %r24
+ depdi 0, 31, 32, %r23
+#endif
+
+ /* Check the validity of the size argument */
+ subi,>>= 4, %r23, %r0
+ b,n lws_exit_nosys
+
+ /* Jump to the functions which will load the old and new values into
+ registers depending on their size */
+ shlw %r23, 2, %r29
+ blr %r29, %r0
+ nop
+
+ /* 8bit load */
+4: ldb 0(%sr3,%r25), %r25
+ b cas2_lock_start
+5: ldb 0(%sr3,%r24), %r24
+ nop
+ nop
+ nop
+ nop
+ nop
+
+ /* 16bit load */
+6: ldh 0(%sr3,%r25), %r25
+ b cas2_lock_start
+7: ldh 0(%sr3,%r24), %r24
+ nop
+ nop
+ nop
+ nop
+ nop
+
+ /* 32bit load */
+8: ldw 0(%sr3,%r25), %r25
+ b cas2_lock_start
+9: ldw 0(%sr3,%r24), %r24
+ nop
+ nop
+ nop
+ nop
+ nop
+
+ /* 64bit load */
+#ifdef CONFIG_64BIT
+10: ldd 0(%sr3,%r25), %r25
+11: ldd 0(%sr3,%r24), %r24
+#else
+ /* Load old value into r22/r23 - high/low */
+10: ldw 0(%sr3,%r25), %r22
+11: ldw 4(%sr3,%r25), %r23
+ /* Load new value into fr4 for atomic store later */
+12: flddx 0(%sr3,%r24), %fr4
+#endif
+
+cas2_lock_start:
+ /* Load start of lock table */
+ ldil L%lws_lock_start, %r20
+ ldo R%lws_lock_start(%r20), %r28
+
+ /* Extract four bits from r26 and hash lock (Bits 4-7) */
+ extru %r26, 27, 4, %r20
+
+ /* Find the lock to use: the hash is one of 0 to
+ 15, multiplied by 16 (keeping it 16-byte aligned)
+ and added to the lock table offset. */
+ shlw %r20, 4, %r20
+ add %r20, %r28, %r20
+
+ rsm PSW_SM_I, %r0 /* Disable interrupts */
+ /* COW breaks can cause contention on UP systems */
+ LDCW 0(%sr2,%r20), %r28 /* Try to acquire the lock */
+ cmpb,<>,n %r0, %r28, cas2_action /* Did we get it? */
+cas2_wouldblock:
+ ldo 2(%r0), %r28 /* 2nd case */
+ ssm PSW_SM_I, %r0
+ b lws_exit /* Contended... */
+ ldo -EAGAIN(%r0), %r21 /* Spin in userspace */
+
+ /*
+ prev = *addr;
+ if ( prev == old )
+ *addr = new;
+ return prev;
+ */
+
+ /* NOTES:
+ This all works because intr_do_signal
+ and schedule both check the return iasq
+ and see that we are on the kernel page,
+ so this process is never scheduled off
+ nor sent any signal of any sort,
+ thus it is wholly atomic from userspace's
+ perspective
+ */
+cas2_action:
+ /* Jump to the correct function */
+ blr %r29, %r0
+ /* Set %r28 as non-zero for now */
+ ldo 1(%r0),%r28
+
+ /* 8bit CAS */
+13: ldb,ma 0(%sr3,%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+14: stb,ma %r24, 0(%sr3,%r26)
+ b cas2_end
+ copy %r0, %r28
+ nop
+ nop
+
+ /* 16bit CAS */
+15: ldh,ma 0(%sr3,%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+16: sth,ma %r24, 0(%sr3,%r26)
+ b cas2_end
+ copy %r0, %r28
+ nop
+ nop
+
+ /* 32bit CAS */
+17: ldw,ma 0(%sr3,%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+18: stw,ma %r24, 0(%sr3,%r26)
+ b cas2_end
+ copy %r0, %r28
+ nop
+ nop
+
+ /* 64bit CAS */
+#ifdef CONFIG_64BIT
+19: ldd,ma 0(%sr3,%r26), %r29
+ sub,= %r29, %r25, %r0
+ b,n cas2_end
+20: std,ma %r24, 0(%sr3,%r26)
+ copy %r0, %r28
+#else
+ /* Compare first word */
+19: ldw,ma 0(%sr3,%r26), %r29
+ sub,= %r29, %r22, %r0
+ b,n cas2_end
+ /* Compare second word */
+20: ldw,ma 4(%sr3,%r26), %r29
+ sub,= %r29, %r23, %r0
+ b,n cas2_end
+ /* Perform the store */
+21: fstdx %fr4, 0(%sr3,%r26)
+ copy %r0, %r28
+#endif
+
+cas2_end:
+ /* Free lock */
+ stw,ma %r20, 0(%sr2,%r20)
+ /* Enable interrupts */
+ ssm PSW_SM_I, %r0
+ /* Return to userspace, set no error */
+ b lws_exit
+ copy %r0, %r21
+
+22:
+ /* Error occurred on load or store */
+ /* Free lock */
+ stw %r20, 0(%sr2,%r20)
+ ssm PSW_SM_I, %r0
+ ldo 1(%r0),%r28
+ b lws_exit
+ ldo -EFAULT(%r0),%r21 /* set errno */
+ nop
+ nop
+ nop
+
+ /* Exception table entries, for the load and store, return EFAULT.
+ Each of the entries must be relocated. */
+ ASM_EXCEPTIONTABLE_ENTRY(4b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(5b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(6b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(7b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(8b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(9b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(10b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(11b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(13b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(14b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(15b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(16b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(17b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(18b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(19b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(20b-linux_gateway_page, 22b-linux_gateway_page)
+#ifndef CONFIG_64BIT
+ ASM_EXCEPTIONTABLE_ENTRY(12b-linux_gateway_page, 22b-linux_gateway_page)
+ ASM_EXCEPTIONTABLE_ENTRY(21b-linux_gateway_page, 22b-linux_gateway_page)
+#endif
+
/* Make sure nothing else is placed on this page */
.align PAGE_SIZE
END(linux_gateway_page)
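
The register protocol documented above is easier to follow next to a plain C model of what the cas2 path computes. This is a sketch of the semantics only (cas2_model is a hypothetical name); the real gateway additionally takes the hashed lws lock and disables interrupts around the compare-and-store window:

#include <stdint.h>
#include <string.h>

/* Compare the value at addr with *old and, if equal, store *new_.
 * Returns 0 on success, 1 when the comparison fails, -1 for a bad
 * size (the lws_exit_nosys case). The size argument uses the same
 * 0/1/2/3 encoding for 8/16/32/64-bit operands. */
static int cas2_model(void *addr, const void *old, const void *new_,
		      unsigned int size_log2)
{
	if (size_log2 > 3)
		return -1;
	size_t n = (size_t)1 << size_log2;

	if (memcmp(addr, old, n) != 0)
		return 1;		/* comparison failed */
	memcpy(addr, new_, n);
	return 0;			/* store performed */
}

int main(void)
{
	uint64_t v = 5, old = 5, new_ = 9;
	return (cas2_model(&v, &old, &new_, 3) == 0 && v == 9) ? 0 : 1;
}
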
/* Light-weight-syscall table */
/* Start of lws table. */
ENTRY(lws_table)
- LWS_ENTRY(compare_and_swap32) /* 0 - ELF32 Atomic compare and swap */
- LWS_ENTRY(compare_and_swap64) /* 1 - ELF64 Atomic compare and swap */
+ LWS_ENTRY(compare_and_swap32) /* 0 - ELF32 Atomic 32bit CAS */
+ LWS_ENTRY(compare_and_swap64) /* 1 - ELF64 Atomic 32bit CAS */
+ LWS_ENTRY(compare_and_swap_2) /* 2 - ELF32 Atomic 64bit CAS */
END(lws_table)
/* End of lws table */
ENTRY_SAME(sched_getattr) /* 335 */
ENTRY_COMP(utimes)
ENTRY_SAME(renameat2)
+ ENTRY_SAME(seccomp)
+ ENTRY_SAME(getrandom)
+ ENTRY_SAME(memfd_create) /* 340 */
/* Nothing yet */
config KEXEC
bool "kexec system call"
depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP))
- select CRYPTO
- select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
CONFIG_NR_CPUS=4
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=15
CONFIG_NR_CPUS=4
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=15
CONFIG_SMP=y
CONFIG_NR_CPUS=24
CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
CONFIG_IRQ_DOMAIN_DEBUG=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SMP=y
CONFIG_NR_CPUS=2
CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_SMP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_IRQ_DOMAIN_DEBUG=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_TASKSTATS=y
CONFIG_NR_CPUS=2
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_LZMA=y
CONFIG_NR_CPUS=2048
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_IRQ_DOMAIN_DEBUG=y
CONFIG_CPU_LITTLE_ENDIAN=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_IRQ_DOMAIN_DEBUG=y
STACK_FRAME_OVERHEAD + KERNEL_REDZONE_SIZE)
#define STACK_FRAME_MARKER 12
+#if defined(_CALL_ELF) && _CALL_ELF == 2
+#define STACK_FRAME_MIN_SIZE 32
+#else
+#define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD
+#endif
+
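The ELFv2 ABI shrinks the mandatory stack-frame header to four doublewords, which is what the new STACK_FRAME_MIN_SIZE of 32 accounts for. A sketch of that header layout as a C struct (illustrative of the ABI, not a kernel definition; the doubleword at offset 8 holds the CR save word plus a reserved word):

#include <assert.h>

struct ppc64_elfv2_frame {
	unsigned long back_chain;	/* SP of the previous frame   */
	unsigned long cr_save;		/* CR save word + reserved    */
	unsigned long lr_save;		/* saved link register        */
	unsigned long toc_save;		/* saved TOC pointer (r2)     */
};

int main(void)
{
	assert(sizeof(struct ppc64_elfv2_frame) == 32);
	return 0;
}
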
/* Size of dummy stack frame allocated when calling signal handler. */
#define __SIGNAL_FRAMESIZE 128
#define __SIGNAL_FRAMESIZE32 64
#define STACK_FRAME_REGS_MARKER ASM_CONST(0x72656773)
#define STACK_INT_FRAME_SIZE (sizeof(struct pt_regs) + STACK_FRAME_OVERHEAD)
#define STACK_FRAME_MARKER 2
+#define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD
/* Size of stack frame allocated when calling signal handler. */
#define __SIGNAL_FRAMESIZE 64
SYSCALL_SPU(sched_setattr)
SYSCALL_SPU(sched_getattr)
SYSCALL_SPU(renameat2)
+SYSCALL_SPU(seccomp)
+SYSCALL_SPU(getrandom)
+SYSCALL_SPU(memfd_create)
#include <uapi/asm/unistd.h>
-#define __NR_syscalls 358
+#define __NR_syscalls 361
#define __NR__exit __NR_exit
#define NR_syscalls __NR_syscalls
#define __NR_sched_setattr 355
#define __NR_sched_getattr 356
#define __NR_renameat2 357
+#define __NR_seccomp 358
+#define __NR_getrandom 359
+#define __NR_memfd_create 360
#endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
}
kvm->arch.hpt_cma_alloc = 0;
- page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
+ page = kvm_alloc_hpt(1ul << (order - PAGE_SHIFT));
if (page) {
hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
- memset((void *)hpt, 0, (1 << order));
+ memset((void *)hpt, 0, (1ul << order));
kvm->arch.hpt_cma_alloc = 1;
}
ri = kmalloc(sizeof(struct kvm_rma_info), GFP_KERNEL);
if (!ri)
return NULL;
- page = cma_alloc(kvm_cma, kvm_rma_pages, get_order(kvm_rma_pages));
+ page = cma_alloc(kvm_cma, kvm_rma_pages, order_base_2(kvm_rma_pages));
if (!page)
goto err_out;
atomic_set(&ri->use_count, 1);
{
unsigned long align_pages = HPT_ALIGN_PAGES;
- VM_BUG_ON(get_order(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
+ VM_BUG_ON(order_base_2(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
/* Old CPUs require HPT aligned on a multiple of its size */
if (!cpu_has_feature(CPU_FTR_ARCH_206))
align_pages = nr_pages;
- return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
+ return cma_alloc(kvm_cma, nr_pages, order_base_2(align_pages));
}
EXPORT_SYMBOL_GPL(kvm_alloc_hpt);
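
The bug being fixed: cma_alloc()'s alignment argument is an order over a page *count*, while get_order() converts a *byte* size to an order, so feeding it a page count yields far too small an alignment. A worked standalone example, assuming 4KB pages and simplified reimplementations of the two helpers:

#include <assert.h>

#define PAGE_SHIFT 12	/* assume 4KB pages for the arithmetic below */

/* get_order(size): page order for a byte size (kernel semantics). */
static unsigned int get_order_bytes(unsigned long size)
{
	unsigned int order = 0;
	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

/* order_base_2(n): ceil(log2(n)) of a plain count. */
static unsigned int order_base_2_count(unsigned long n)
{
	unsigned int order = 0;
	while ((1UL << order) < n)
		order++;
	return order;
}

int main(void)
{
	unsigned long nr_pages = 4096;	/* e.g. a 16MB HPT */

	/* Treating the page count as bytes collapses the order to 0 ... */
	assert(get_order_bytes(nr_pages) == 0);
	/* ... while the alignment actually wanted is order 12. */
	assert(order_base_2_count(nr_pages) == 12);
	return 0;
}
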
return 0; /* must be 16-byte aligned */
if (!validate_sp(sp, current, STACK_FRAME_OVERHEAD))
return 0;
- if (sp >= prev_sp + STACK_FRAME_OVERHEAD)
+ if (sp >= prev_sp + STACK_FRAME_MIN_SIZE)
return 1;
/*
* sp could decrease when we jump off an interrupt stack
#include <asm/opal.h>
#include <asm/cputable.h>
+#include <asm/machdep.h>
static int opal_hmi_handler_nb_init;
struct OpalHmiEvtNode {
}
return 0;
}
-subsys_initcall(opal_hmi_handler_init);
+machine_subsys_initcall(powernv, opal_hmi_handler_init);
static int pseries_remove_mem_node(struct device_node *np)
{
const char *type;
- const unsigned int *regs;
+ const __be32 *regs;
unsigned long base;
unsigned int lmb_size;
int ret = -EINVAL;
if (!regs)
return ret;
- base = *(unsigned long *)regs;
- lmb_size = regs[3];
+ base = be64_to_cpu(*(unsigned long *)regs);
+ lmb_size = be32_to_cpu(regs[3]);
pseries_remove_memblock(base, lmb_size);
return 0;
static int pseries_add_mem_node(struct device_node *np)
{
const char *type;
- const unsigned int *regs;
+ const __be32 *regs;
unsigned long base;
unsigned int lmb_size;
int ret = -EINVAL;
if (!regs)
return ret;
- base = *(unsigned long *)regs;
- lmb_size = regs[3];
+ base = be64_to_cpu(*(unsigned long *)regs);
+ lmb_size = be32_to_cpu(regs[3]);
/*
* Update memory region to represent the memory add
struct of_drconf_cell *new_drmem, *old_drmem;
unsigned long memblock_size;
u32 entries;
- u32 *p;
+ __be32 *p;
int i, rc = -EINVAL;
memblock_size = pseries_memory_block_size();
if (!memblock_size)
return -EINVAL;
- p = (u32 *) pr->old_prop->value;
+ p = (__be32 *) pr->old_prop->value;
if (!p)
return -EINVAL;
* entries. Get the number of entries and skip to the array of
* of_drconf_cell's.
*/
- entries = *p++;
+ entries = be32_to_cpu(*p++);
old_drmem = (struct of_drconf_cell *)p;
- p = (u32 *)pr->prop->value;
+ p = (__be32 *)pr->prop->value;
p++;
new_drmem = (struct of_drconf_cell *)p;
for (i = 0; i < entries; i++) {
- if ((old_drmem[i].flags & DRCONF_MEM_ASSIGNED) &&
- (!(new_drmem[i].flags & DRCONF_MEM_ASSIGNED))) {
- rc = pseries_remove_memblock(old_drmem[i].base_addr,
+ if ((be32_to_cpu(old_drmem[i].flags) & DRCONF_MEM_ASSIGNED) &&
+ (!(be32_to_cpu(new_drmem[i].flags) & DRCONF_MEM_ASSIGNED))) {
+ rc = pseries_remove_memblock(
+ be64_to_cpu(old_drmem[i].base_addr),
memblock_size);
break;
- } else if ((!(old_drmem[i].flags & DRCONF_MEM_ASSIGNED)) &&
- (new_drmem[i].flags & DRCONF_MEM_ASSIGNED)) {
- rc = memblock_add(old_drmem[i].base_addr,
+ } else if ((!(be32_to_cpu(old_drmem[i].flags) &
+ DRCONF_MEM_ASSIGNED)) &&
+ (be32_to_cpu(new_drmem[i].flags) &
+ DRCONF_MEM_ASSIGNED)) {
+ rc = memblock_add(be64_to_cpu(old_drmem[i].base_addr),
memblock_size);
rc = (rc < 0) ? -EINVAL : 0;
break;
}
}
-
return rc;
}
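
Device-tree property data is stored big-endian, so on little-endian hosts every cell needs an explicit conversion like the be32_to_cpu()/be64_to_cpu() calls added here. A userspace sketch of the same decoding over a hypothetical "reg" buffer (the helpers are local stand-ins for the kernel macros):

#include <stdint.h>
#include <stdio.h>

static uint32_t be32_to_cpu_(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

static uint64_t be64_to_cpu_(const uint8_t *p)
{
	return ((uint64_t)be32_to_cpu_(p) << 32) | be32_to_cpu_(p + 4);
}

int main(void)
{
	/* Hypothetical cells: base = 0x80000000, lmb_size = 0x10000000 */
	const uint8_t regs[12] = { 0x00, 0x00, 0x00, 0x00,
				   0x80, 0x00, 0x00, 0x00,
				   0x10, 0x00, 0x00, 0x00 };

	printf("base = 0x%llx, lmb_size = 0x%x\n",
	       (unsigned long long)be64_to_cpu_(regs),
	       be32_to_cpu_(regs + 8));
	return 0;
}
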
config KEXEC
def_bool y
- select CRYPTO
- select CRYPTO_SHA256
config AUDIT_ARCH
def_bool y
#define IPL_PARM_BLK_FCP_LEN (sizeof(struct ipl_list_hdr) + \
sizeof(struct ipl_block_fcp))
-#define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 8)
+#define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 16)
#define IPL_PARM_BLK_CCW_LEN (sizeof(struct ipl_list_hdr) + \
sizeof(struct ipl_block_ccw))
-#define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 8)
+#define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 16)
#define IPL_MAX_SUPPORTED_VERSION (0)
u8 pbt;
u8 flags;
u16 reserved2;
+ u8 loadparm[8];
} __attribute__((packed));
struct ipl_block_fcp {
- u8 reserved1[313-1];
+ u8 reserved1[305-1];
u8 opt;
u8 reserved2[3];
u16 reserved3;
offsetof(struct ipl_block_fcp, scp_data)))
struct ipl_block_ccw {
- u8 load_parm[8];
u8 reserved1[84];
u8 reserved2[2];
u16 devno;
unsigned long addr, pte_t *ptep)
{
pgste_t pgste;
- pte_t pte;
+ pte_t pte, oldpte;
int young;
if (mm_has_pgste(vma->vm_mm)) {
pgste = pgste_ipte_notify(vma->vm_mm, ptep, pgste);
}
- pte = *ptep;
+ oldpte = pte = *ptep;
ptep_flush_direct(vma->vm_mm, addr, ptep);
young = pte_young(pte);
pte = pte_mkold(pte);
if (mm_has_pgste(vma->vm_mm)) {
+ pgste = pgste_update_all(&oldpte, pgste, vma->vm_mm);
pgste = pgste_set_pte(ptep, pgste, pte);
pgste_set_unlock(ptep, pgste);
} else
ptep_flush_direct(vma->vm_mm, address, ptep);
if (mm_has_pgste(vma->vm_mm)) {
+ pgste_set_key(ptep, pgste, entry, vma->vm_mm);
pgste = pgste_set_pte(ptep, pgste, entry);
pgste_set_unlock(ptep, pgste);
} else
#define __NR_sched_setattr 345
#define __NR_sched_getattr 346
#define __NR_renameat2 347
-#define NR_syscalls 348
+#define __NR_seccomp 348
+#define __NR_getrandom 349
+#define __NR_memfd_create 350
+#define NR_syscalls 351
/*
* There are some system calls that are not present on 64 bit, some
COMPAT_SYSCALL_WRAP3(sched_setattr, pid_t, pid, struct sched_attr __user *, attr, unsigned int, flags);
COMPAT_SYSCALL_WRAP4(sched_getattr, pid_t, pid, struct sched_attr __user *, attr, unsigned int, size, unsigned int, flags);
COMPAT_SYSCALL_WRAP5(renameat2, int, olddfd, const char __user *, oldname, int, newdfd, const char __user *, newname, unsigned int, flags);
+COMPAT_SYSCALL_WRAP3(seccomp, unsigned int, op, unsigned int, flags, const char __user *, uargs)
+COMPAT_SYSCALL_WRAP3(getrandom, char __user *, buf, size_t, count, unsigned int, flags)
+COMPAT_SYSCALL_WRAP2(memfd_create, const char __user *, uname, unsigned int, flags)
DEFINE_IPL_ATTR_RO(ipl_fcp, br_lba, "%lld\n", (unsigned long long)
IPL_PARMBLOCK_START->ipl_info.fcp.br_lba);
-static struct attribute *ipl_fcp_attrs[] = {
- &sys_ipl_type_attr.attr,
- &sys_ipl_device_attr.attr,
- &sys_ipl_fcp_wwpn_attr.attr,
- &sys_ipl_fcp_lun_attr.attr,
- &sys_ipl_fcp_bootprog_attr.attr,
- &sys_ipl_fcp_br_lba_attr.attr,
- NULL,
-};
-
-static struct attribute_group ipl_fcp_attr_group = {
- .attrs = ipl_fcp_attrs,
-};
-
-/* CCW ipl device attributes */
-
static ssize_t ipl_ccw_loadparm_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
static struct kobj_attribute sys_ipl_ccw_loadparm_attr =
__ATTR(loadparm, 0444, ipl_ccw_loadparm_show, NULL);
+static struct attribute *ipl_fcp_attrs[] = {
+ &sys_ipl_type_attr.attr,
+ &sys_ipl_device_attr.attr,
+ &sys_ipl_fcp_wwpn_attr.attr,
+ &sys_ipl_fcp_lun_attr.attr,
+ &sys_ipl_fcp_bootprog_attr.attr,
+ &sys_ipl_fcp_br_lba_attr.attr,
+ &sys_ipl_ccw_loadparm_attr.attr,
+ NULL,
+};
+
+static struct attribute_group ipl_fcp_attr_group = {
+ .attrs = ipl_fcp_attrs,
+};
+
+/* CCW ipl device attributes */
+
static struct attribute *ipl_ccw_attrs_vm[] = {
&sys_ipl_type_attr.attr,
&sys_ipl_device_attr.attr,
DEFINE_IPL_ATTR_RW(reipl_fcp, device, "0.0.%04llx\n", "0.0.%llx\n",
reipl_block_fcp->ipl_info.fcp.devno);
-static struct attribute *reipl_fcp_attrs[] = {
- &sys_reipl_fcp_device_attr.attr,
- &sys_reipl_fcp_wwpn_attr.attr,
- &sys_reipl_fcp_lun_attr.attr,
- &sys_reipl_fcp_bootprog_attr.attr,
- &sys_reipl_fcp_br_lba_attr.attr,
- NULL,
-};
-
-static struct attribute_group reipl_fcp_attr_group = {
- .attrs = reipl_fcp_attrs,
-};
-
-/* CCW reipl device attributes */
-
-DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n",
- reipl_block_ccw->ipl_info.ccw.devno);
-
static void reipl_get_ascii_loadparm(char *loadparm,
struct ipl_parameter_block *ibp)
{
- memcpy(loadparm, ibp->ipl_info.ccw.load_parm, LOADPARM_LEN);
+ memcpy(loadparm, ibp->hdr.loadparm, LOADPARM_LEN);
EBCASC(loadparm, LOADPARM_LEN);
loadparm[LOADPARM_LEN] = 0;
strim(loadparm);
return -EINVAL;
}
/* initialize loadparm with blanks */
- memset(ipb->ipl_info.ccw.load_parm, ' ', LOADPARM_LEN);
+ memset(ipb->hdr.loadparm, ' ', LOADPARM_LEN);
/* copy and convert to ebcdic */
- memcpy(ipb->ipl_info.ccw.load_parm, buf, lp_len);
- ASCEBC(ipb->ipl_info.ccw.load_parm, LOADPARM_LEN);
+ memcpy(ipb->hdr.loadparm, buf, lp_len);
+ ASCEBC(ipb->hdr.loadparm, LOADPARM_LEN);
return len;
}
+/* FCP wrapper */
+static ssize_t reipl_fcp_loadparm_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *page)
+{
+ return reipl_generic_loadparm_show(reipl_block_fcp, page);
+}
+
+static ssize_t reipl_fcp_loadparm_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t len)
+{
+ return reipl_generic_loadparm_store(reipl_block_fcp, buf, len);
+}
+
+static struct kobj_attribute sys_reipl_fcp_loadparm_attr =
+ __ATTR(loadparm, S_IRUGO | S_IWUSR, reipl_fcp_loadparm_show,
+ reipl_fcp_loadparm_store);
+
+static struct attribute *reipl_fcp_attrs[] = {
+ &sys_reipl_fcp_device_attr.attr,
+ &sys_reipl_fcp_wwpn_attr.attr,
+ &sys_reipl_fcp_lun_attr.attr,
+ &sys_reipl_fcp_bootprog_attr.attr,
+ &sys_reipl_fcp_br_lba_attr.attr,
+ &sys_reipl_fcp_loadparm_attr.attr,
+ NULL,
+};
+
+static struct attribute_group reipl_fcp_attr_group = {
+ .attrs = reipl_fcp_attrs,
+};
+
+/* CCW reipl device attributes */
+
+DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n",
+ reipl_block_ccw->ipl_info.ccw.devno);
+
/* NSS wrapper */
static ssize_t reipl_nss_loadparm_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
/* LOADPARM */
/* check if read scp info worked and set loadparm */
if (sclp_ipl_info.is_valid)
- memcpy(ipb->ipl_info.ccw.load_parm,
- &sclp_ipl_info.loadparm, LOADPARM_LEN);
+ memcpy(ipb->hdr.loadparm, &sclp_ipl_info.loadparm, LOADPARM_LEN);
else
/* read scp info failed: set empty loadparm (EBCDIC blanks) */
- memset(ipb->ipl_info.ccw.load_parm, 0x40, LOADPARM_LEN);
+ memset(ipb->hdr.loadparm, 0x40, LOADPARM_LEN);
ipb->hdr.flags = DIAG308_FLAGS_LP_VALID;
/* VM PARM */
return rc;
}
- if (ipl_info.type == IPL_TYPE_FCP)
+ if (ipl_info.type == IPL_TYPE_FCP) {
memcpy(reipl_block_fcp, IPL_PARMBLOCK_START, PAGE_SIZE);
- else {
+ /*
+ * Fix loadparm: There are systems where the (SCSI) LOADPARM
+ * is invalid in the SCSI IPL parameter block, so always take
+ * it from sclp_ipl_info.
+ */
+ memcpy(reipl_block_fcp->hdr.loadparm, sclp_ipl_info.loadparm,
+ LOADPARM_LEN);
+ } else {
reipl_block_fcp->hdr.len = IPL_PARM_BLK_FCP_LEN;
reipl_block_fcp->hdr.version = IPL_PARM_BLOCK_VERSION;
reipl_block_fcp->hdr.blk0_len = IPL_PARM_BLK0_FCP_LEN;
static int __init s390_ipl_init(void)
{
+ char str[8] = {0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40};
+
sclp_get_ipl_info(&sclp_ipl_info);
+ /*
+ * Fix loadparm: There are systems where the (SCSI) LOADPARM
+ * returned by read SCP info is invalid (contains EBCDIC blanks)
+ * when the system has been booted via diag308. In that case we use
+ * the value from diag308, if available.
+ *
+ * There are also systems where diag308 store does not work in
+ * case the system is booted from HMC. Fortunately in this case
+ * READ SCP info provides the correct value.
+ */
+ if (memcmp(sclp_ipl_info.loadparm, str, sizeof(str)) == 0 &&
+ diag308_set_works)
+ memcpy(sclp_ipl_info.loadparm, ipl_block.hdr.loadparm,
+ LOADPARM_LEN);
shutdown_actions_init();
shutdown_triggers_init();
return 0;
S390_lowcore.program_new_psw.addr =
PSW_ADDR_AMODE | (unsigned long) s390_base_pgm_handler;
+ /*
+ * Clear subchannel ID and number to signal new kernel that no CCW or
+ * SCSI IPL has been done (for kexec and kdump)
+ */
+ S390_lowcore.subchannel_id = 0;
+ S390_lowcore.subchannel_nr = 0;
+
/* Store status at absolute zero */
store_status();
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/ptrace.h>
+#include <linux/random.h>
#include <linux/user.h>
#include <linux/tty.h>
#include <linux/ioport.h>
#include <asm/diag.h>
#include <asm/os_info.h>
#include <asm/sclp.h>
+#include <asm/sysinfo.h>
#include "entry.h"
/*
#endif
get_cpu_id(&cpu_id);
+ add_device_randomness(&cpu_id, sizeof(cpu_id));
switch (cpu_id.machine) {
case 0x9672:
#if !defined(CONFIG_64BIT)
}
}
+/*
+ * Add system information as device randomness
+ */
+static void __init setup_randomness(void)
+{
+ struct sysinfo_3_2_2 *vmms;
+
+ vmms = (struct sysinfo_3_2_2 *) __get_free_page(GFP_KERNEL);
+ if (vmms && stsi(vmms, 3, 2, 2) == 0 && vmms->count)
+ add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
+ free_page((unsigned long) vmms);
+}
+
/*
* Setup function called from init/main.c just after the banner
* was printed.
/* Setup zfcpdump support */
setup_zfcpdump();
+
+ /* Add system specific data to the random pool */
+ setup_randomness();
}
#ifdef CONFIG_32BIT
SYSCALL(sys_sched_setattr,sys_sched_setattr,compat_sys_sched_setattr) /* 345 */
SYSCALL(sys_sched_getattr,sys_sched_getattr,compat_sys_sched_getattr)
SYSCALL(sys_renameat2,sys_renameat2,compat_sys_renameat2)
+SYSCALL(sys_seccomp,sys_seccomp,compat_sys_seccomp)
+SYSCALL(sys_getrandom,sys_getrandom,compat_sys_getrandom)
+SYSCALL(sys_memfd_create,sys_memfd_create,compat_sys_memfd_create) /* 350 */
basr %r5,0
0: al %r5,21f-0b(%r5) /* get &_vdso_data */
chi %r2,__CLOCK_REALTIME
- je 10f
+ je 11f
chi %r2,__CLOCK_MONOTONIC
jne 19f
/* CLOCK_MONOTONIC */
- ltr %r3,%r3
- jz 9f /* tp == NULL */
1: l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */
tml %r4,0x0001 /* pending update ? loop */
jnz 1b
j 6b
8: st %r2,0(%r3) /* store tp->tv_sec */
st %r1,4(%r3) /* store tp->tv_nsec */
-9: lhi %r2,0
+ lhi %r2,0
br %r14
/* CLOCK_REALTIME */
-10: ltr %r3,%r3 /* tp == NULL */
- jz 18f
11: l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */
tml %r4,0x0001 /* pending update ? loop */
jnz 11b
j 15b
17: st %r2,0(%r3) /* store tp->tv_sec */
st %r1,4(%r3) /* store tp->tv_nsec */
-18: lhi %r2,0
+ lhi %r2,0
br %r14
/* Fallback to system call */
.cfi_startproc
larl %r5,_vdso_data
cghi %r2,__CLOCK_REALTIME
- je 4f
+ je 5f
cghi %r2,__CLOCK_THREAD_CPUTIME_ID
je 9f
cghi %r2,-2 /* Per-thread CPUCLOCK with PID=0, VIRT=1 */
jne 12f
/* CLOCK_MONOTONIC */
- ltgr %r3,%r3
- jz 3f /* tp == NULL */
0: lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */
tmll %r4,0x0001 /* pending update ? loop */
jnz 0b
j 1b
2: stg %r0,0(%r3) /* store tp->tv_sec */
stg %r1,8(%r3) /* store tp->tv_nsec */
-3: lghi %r2,0
+ lghi %r2,0
br %r14
/* CLOCK_REALTIME */
-4: ltr %r3,%r3 /* tp == NULL */
- jz 8f
5: lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */
tmll %r4,0x0001 /* pending update ? loop */
jnz 5b
j 6b
7: stg %r0,0(%r3) /* store tp->tv_sec */
stg %r1,8(%r3) /* store tp->tv_nsec */
-8: lghi %r2,0
+ lghi %r2,0
br %r14
/* CLOCK_THREAD_CPUTIME_ID for this thread */
return -EINVAL;
}
- switch (kvm_run->exit_reason) {
- case KVM_EXIT_S390_SIEIC:
- case KVM_EXIT_UNKNOWN:
- case KVM_EXIT_INTR:
- case KVM_EXIT_S390_RESET:
- case KVM_EXIT_S390_UCONTROL:
- case KVM_EXIT_S390_TSCH:
- case KVM_EXIT_DEBUG:
- break;
- default:
- BUG();
- }
-
vcpu->arch.sie_block->gpsw.mask = kvm_run->psw_mask;
vcpu->arch.sie_block->gpsw.addr = kvm_run->psw_addr;
if (kvm_run->kvm_dirty_regs & KVM_SYNC_PREFIX) {
pte_t *ptep;
down_read(&mm->mmap_sem);
+retry:
ptep = get_locked_pte(current->mm, addr, &ptl);
if (unlikely(!ptep)) {
up_read(&mm->mmap_sem);
return -EFAULT;
}
+ if (!(pte_val(*ptep) & _PAGE_INVALID) &&
+ (pte_val(*ptep) & _PAGE_PROTECT)) {
+ pte_unmap_unlock(*ptep, ptl);
+ if (fixup_user_fault(current, mm, addr, FAULT_FLAG_WRITE)) {
+ up_read(&mm->mmap_sem);
+ return -EFAULT;
+ }
+ goto retry;
+ }
new = old = pgste_get_lock(ptep);
pgste_val(new) &= ~(PGSTE_GR_BIT | PGSTE_GC_BIT |
#
config CPU_SH2
bool
+ select SH_INTC
config CPU_SH2A
bool
bool
select CPU_HAS_INTEVT
select CPU_HAS_SR_RB
+ select SH_INTC
select SYS_SUPPORTS_SH_TMU
config CPU_SH4
select CPU_HAS_INTEVT
select CPU_HAS_SR_RB
select CPU_HAS_FPU if !CPU_SH4AL_DSP
+ select SH_INTC
select SYS_SUPPORTS_SH_TMU
select SYS_SUPPORTS_HUGETLBFS if MMU
config KEXEC
bool "kexec system call (EXPERIMENTAL)"
depends on SUPERH32 && MMU
- select CRYPTO
- select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
cacheop_on_each_cpu(local_flush_icache_range, (void *)&data, 1);
}
+EXPORT_SYMBOL(flush_icache_range);
void flush_icache_page(struct vm_area_struct *vma, struct page *page)
{
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
page = pte_page(pte);
get_page(page);
+ __flush_anon_page(page, addr);
+ flush_dcache_page(page);
pages[*nr] = page;
(*nr)++;
config KEXEC
bool "kexec system call"
- select CRYPTO
- select CRYPTO_SHA256
---help---
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
preempt_enable();
}
}
+EXPORT_SYMBOL(flush_icache_range);
/* Called when smp_send_reschedule() triggers IRQ_RESCHEDULE. */
err |= setup_sigframe(frame, regs, set);
if (err == 0)
- err |= setup_return(regs, &ksig->ka, frame->retcode, frame, usig);
+ err |= setup_return(regs, &ksig->ka, frame->retcode, frame,
+ ksig->sig);
return err;
}
err |= __save_altstack(&frame->sig.uc.uc_stack, regs->UCreg_sp);
err |= setup_sigframe(&frame->sig, regs, set);
if (err == 0)
- err |= setup_return(regs, &ksig->ka, frame->sig.retcode, frame, usig);
+ err |= setup_return(regs, &ksig->ka, frame->sig.retcode, frame,
+ ksig->sig);
if (err == 0) {
/*
int syscall)
{
struct thread_info *thread = current_thread_info();
- struct task_struct *tsk = current;
sigset_t *oldset = sigmask_to_save();
int usig = ksig->sig;
int ret;
if (!user_mode(regs))
return;
- if (get_signsl(&ksig)) {
+ if (get_signal(&ksig)) {
handle_signal(&ksig, regs, syscall);
return;
}
obj-y += platform/
obj-y += net/
-ifeq ($(CONFIG_X86_64),y)
-obj-$(CONFIG_KEXEC) += purgatory/
-endif
+obj-$(CONFIG_KEXEC_FILE) += purgatory/
def_bool y
select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
select ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS
+ select ARCH_HAS_FAST_MULTIPLIER
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select HAVE_AOUT if X86_32
config KEXEC
bool "kexec system call"
- select BUILD_BIN2C
- select CRYPTO
- select CRYPTO_SHA256
---help---
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
interface is strongly in flux, so no good recommendation can be
made.
+config KEXEC_FILE
+ bool "kexec file based system call"
+ select BUILD_BIN2C
+ depends on KEXEC
+ depends on X86_64
+ depends on CRYPTO=y
+ depends on CRYPTO_SHA256=y
+ ---help---
+ This is the new version of the kexec system call. This system
+ call is file based and takes file descriptors as system call
+ arguments for the kernel and initramfs, as opposed to the list
+ of segments accepted by the previous system call.
+
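Unlike kexec_load(2), the file-based call takes the kernel and initramfs as file descriptors plus a command line. A userspace sketch, assuming <sys/syscall.h> defines SYS_kexec_file_load; the boot file paths are hypothetical, and the caller needs CAP_SYS_BOOT:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	int kernel_fd = open("/boot/vmlinuz", O_RDONLY);
	int initrd_fd = open("/boot/initrd.img", O_RDONLY);
	const char *cmdline = "root=/dev/sda1";

	if (kernel_fd < 0 || initrd_fd < 0) {
		perror("open");
		return 1;
	}
	/* kexec_file_load(kernel_fd, initrd_fd, cmdline_len, cmdline,
	 * flags); cmdline_len must count the trailing NUL. */
	if (syscall(SYS_kexec_file_load, kernel_fd, initrd_fd,
		    strlen(cmdline) + 1, cmdline, 0UL) < 0)
		perror("kexec_file_load");
	return 0;
}
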
config KEXEC_VERIFY_SIG
bool "Verify kernel signature during kexec_file_load() syscall"
- depends on KEXEC
+ depends on KEXEC_FILE
---help---
This option makes kernel signature verification mandatory for
kexec_file_load() syscall. If the kernel signature can not be
$(Q)$(MAKE) $(build)=arch/x86/syscalls all
archprepare:
-ifeq ($(CONFIG_KEXEC),y)
-# Build only for 64bit. No loaders for 32bit yet.
- ifeq ($(CONFIG_X86_64),y)
+ifeq ($(CONFIG_KEXEC_FILE),y)
$(Q)$(MAKE) $(build)=arch/x86/purgatory arch/x86/purgatory/kexec-purgatory.c
- endif
endif
###
$(Q)rm -rf $(objtree)/arch/x86_64
$(Q)$(MAKE) $(clean)=$(boot)
$(Q)$(MAKE) $(clean)=arch/x86/tools
+ $(Q)$(MAKE) $(clean)=arch/x86/purgatory
PHONY += kvmconfig
kvmconfig:
#include <asm-generic/bitops/sched.h>
-#define ARCH_HAS_FAST_MULTIPLIER 1
-
#include <asm/arch_hweight.h>
#include <asm-generic/bitops/const_hweight.h>
extern void io_apic_eoi(unsigned int apic, unsigned int vector);
+extern bool mp_should_keep_irq(struct device *dev);
+
#else /* !CONFIG_X86_IO_APIC */
#define io_apic_assign_pci_irqs 0
#define KVM_REFILL_PAGES 25
#define KVM_MAX_CPUID_ENTRIES 80
#define KVM_NR_FIXED_MTRR_REGION 88
-#define KVM_NR_VAR_MTRR 10
+#define KVM_NR_VAR_MTRR 8
#define ASYNC_PF_PER_VCPU 64
static inline int pte_special(pte_t pte)
{
- return (pte_flags(pte) & (_PAGE_PRESENT|_PAGE_SPECIAL)) ==
- (_PAGE_PRESENT|_PAGE_SPECIAL);
+ /*
+ * See CONFIG_NUMA_BALANCING pte_numa in include/asm-generic/pgtable.h.
+ * On x86 we have _PAGE_BIT_NUMA == _PAGE_BIT_GLOBAL+1 ==
+ * _PAGE_BIT_SOFTW1 == _PAGE_BIT_SPECIAL.
+ */
+ return (pte_flags(pte) & _PAGE_SPECIAL) &&
+ (pte_flags(pte) & (_PAGE_PRESENT|_PAGE_PROTNONE));
}
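
A simplified model of the reasoning in the comment: because the NUMA hint bit shares _PAGE_BIT_SPECIAL, the special bit is only trustworthy while the pte is present (or PROT_NONE). A sketch with illustrative flag values, not the real x86 layout:

#include <assert.h>

#define PAGE_PRESENT  (1u << 0)
#define PAGE_PROTNONE (1u << 1)
#define PAGE_SPECIAL  (1u << 2)	/* aliased with the NUMA hint bit */

static int pte_special_model(unsigned int flags)
{
	return (flags & PAGE_SPECIAL) &&
	       (flags & (PAGE_PRESENT | PAGE_PROTNONE));
}

int main(void)
{
	/* A NUMA-hinting pte: present clear, shared bit set -> not special. */
	assert(!pte_special_model(PAGE_SPECIAL));
	/* A genuine special mapping keeps PRESENT set. */
	assert(pte_special_model(PAGE_SPECIAL | PAGE_PRESENT));
	return 0;
}
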
static inline unsigned long pte_pfn(pte_t pte)
extern pmd_t level2_kernel_pgt[512];
extern pmd_t level2_fixmap_pgt[512];
extern pmd_t level2_ident_pgt[512];
+extern pte_t level1_fixmap_pgt[512];
extern pgd_t init_level4_pgt[];
#define swapper_pg_dir init_level4_pgt
obj-$(CONFIG_X86_TSC) += trace_clock.o
obj-$(CONFIG_KEXEC) += machine_kexec_$(BITS).o
obj-$(CONFIG_KEXEC) += relocate_kernel_$(BITS).o crash.o
+obj-$(CONFIG_KEXEC_FILE) += kexec-bzimage64.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump_$(BITS).o
obj-y += kprobes/
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_PCI_MMCONFIG) += mmconf-fam10h_64.o
obj-y += vsmp_64.o
- obj-$(CONFIG_KEXEC) += kexec-bzimage64.o
endif
}
if (flags & IOAPIC_MAP_ALLOC) {
+ /* special handling for legacy IRQs */
+ if (irq < nr_legacy_irqs() && info->count == 1 &&
+ mp_irqdomain_map(domain, irq, pin) != 0)
+ irq = -1;
+
if (irq > 0)
info->count++;
else if (info->count == 0)
info->polarity = 1;
}
info->node = NUMA_NO_NODE;
- info->set = 1;
+
+ /*
+ * setup_IO_APIC_irqs() programs all legacy IRQs with default
+ * trigger and polarity attributes. Don't set the flag for that
+ * case so the first legacy IRQ user could reprogram the pin
+ * with real trigger and polarity attributes.
+ */
+ if (virq >= nr_legacy_irqs() || info->count)
+ info->set = 1;
}
set_io_apic_irq_attr(&attr, ioapic, hwirq, info->trigger,
info->polarity);
return ret;
}
+bool mp_should_keep_irq(struct device *dev)
+{
+ if (dev->power.is_prepared)
+ return true;
+#ifdef CONFIG_PM_RUNTIME
+ if (dev->power.runtime_status == RPM_SUSPENDING)
+ return true;
+#endif
+
+ return false;
+}
+
/* Enable IOAPIC early just for system timer */
void __init pre_init_apic_IRQ0(void)
{
crash_save_cpu(regs, safe_smp_processor_id());
}
-#ifdef CONFIG_X86_64
-
+#ifdef CONFIG_KEXEC_FILE
static int get_nr_ram_ranges_callback(unsigned long start_pfn,
unsigned long nr_pfn, void *arg)
{
return ret;
}
-
-#endif /* CONFIG_X86_64 */
+#endif /* CONFIG_KEXEC_FILE */
sysenter_badsys:
movl $-ENOSYS,%eax
jmp sysenter_after_call
-END(syscall_badsys)
+END(sysenter_badsys)
CFI_ENDPROC
.macro FIXUP_ESPFIX_STACK
set_intr_gate(i, interrupt[i - FIRST_EXTERNAL_VECTOR]);
}
- if (!acpi_ioapic && !of_ioapic)
+ if (!acpi_ioapic && !of_ioapic && nr_legacy_irqs())
setup_irq(2, &irq2);
#ifdef CONFIG_X86_32
#include <asm/debugreg.h>
#include <asm/kexec-bzimage64.h>
+#ifdef CONFIG_KEXEC_FILE
static struct kexec_file_ops *kexec_file_loaders[] = {
&kexec_bzImage64_ops,
};
+#endif
static void free_transition_pgtable(struct kimage *image)
{
);
}
+#ifdef CONFIG_KEXEC_FILE
/* Update purgatory as needed after various image segments have been prepared */
static int arch_update_purgatory(struct kimage *image)
{
return ret;
}
+#else /* !CONFIG_KEXEC_FILE */
+static inline int arch_update_purgatory(struct kimage *image)
+{
+ return 0;
+}
+#endif /* CONFIG_KEXEC_FILE */
int machine_kexec_prepare(struct kimage *image)
{
/* arch-dependent functionality related to kexec file-based syscall */
+#ifdef CONFIG_KEXEC_FILE
int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
unsigned long buf_len)
{
(int)ELF64_R_TYPE(rel[i].r_info), value);
return -ENOEXEC;
}
+#endif /* CONFIG_KEXEC_FILE */
void __init setup_default_timer_irq(void)
{
+ if (!nr_legacy_irqs())
+ return;
setup_irq(0, &irq0);
}
goto exception;
break;
case VCPU_SREG_CS:
- if (in_task_switch && rpl != dpl)
- goto exception;
-
if (!(seg_desc.type & 8))
goto exception;
ctxt->execute = opcode.u.execute;
+ if (unlikely(ctxt->ud) && likely(!(ctxt->d & EmulateOnUD)))
+ return EMULATION_FAILED;
+
if (unlikely(ctxt->d &
- (NotImpl|EmulateOnUD|Stack|Op3264|Sse|Mmx|Intercept|CheckPerm))) {
+ (NotImpl|Stack|Op3264|Sse|Mmx|Intercept|CheckPerm))) {
/*
* These are copied unconditionally here, and checked unconditionally
* in x86_emulate_insn.
if (ctxt->d & NotImpl)
return EMULATION_FAILED;
- if (!(ctxt->d & EmulateOnUD) && ctxt->ud)
- return EMULATION_FAILED;
-
if (mode == X86EMUL_MODE_PROT64 && (ctxt->d & Stack))
ctxt->op_bytes = 8;
if (cpumask_test_cpu(cpu, mm_cpumask(active_mm))) {
cpumask_clear_cpu(cpu, mm_cpumask(active_mm));
load_cr3(swapper_pg_dir);
- trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
+ /*
+ * This gets called in the idle path where RCU
+ * functions differently. Tracing normally
+ * uses RCU, so we have to call the tracepoint
+ * specially here.
+ */
+ trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
}
}
EXPORT_SYMBOL_GPL(leave_mm);
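For readers unfamiliar with the _rcuidle tracepoint variants, here is a hedged model of what they add (the real macros live in include/linux/tracepoint.h; the helper below is illustrative, not kernel code):

    /* model only: briefly re-enable RCU on an idle CPU around the probe */
    static inline void trace_tlb_flush_rcuidle_model(int reason,
                                                     unsigned long pages)
    {
            rcu_irq_enter();                /* tell RCU this CPU is active */
            trace_tlb_flush(reason, pages); /* probes may now use RCU safely */
            rcu_irq_exit();
    }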
*
* This is in units of pages.
*/
-unsigned long tlb_single_page_flush_ceiling = 33;
+static unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;
void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
unsigned long end, unsigned long vmflag)
static void intel_mid_pci_irq_disable(struct pci_dev *dev)
{
- if (!dev->dev.power.is_prepared && dev->irq > 0)
+ if (!mp_should_keep_irq(&dev->dev) && dev->irq > 0)
mp_unmap_irq(dev->irq);
}
static void pirq_disable_irq(struct pci_dev *dev)
{
- if (io_apic_assign_pci_irqs && !dev->dev.power.is_prepared &&
+ if (io_apic_assign_pci_irqs && !mp_should_keep_irq(&dev->dev) &&
dev->irq) {
mp_unmap_irq(dev->irq);
dev->irq = 0;
# sure how to relocate those. Like kexec-tools, use custom flags.
KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes -fno-zero-initialized-in-bss -fno-builtin -ffreestanding -c -MD -Os -mcmodel=large
+KBUILD_CFLAGS += -m$(BITS)
$(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
$(call if_changed,ld)
$(call if_changed,bin2c)
-# No loaders for 32bits yet.
-ifeq ($(CONFIG_X86_64),y)
- obj-$(CONFIG_KEXEC) += kexec-purgatory.o
-endif
+obj-$(CONFIG_KEXEC_FILE) += kexec-purgatory.o
*
* We can construct this by grafting the Xen provided pagetable into
* head_64.S's preconstructed pagetables. We copy the Xen L2's into
- * level2_ident_pgt, level2_kernel_pgt and level2_fixmap_pgt. This
- * means that only the kernel has a physical mapping to start with -
- * but that's enough to get __va working. We need to fill in the rest
- * of the physical mapping once some sort of allocator has been set
- * up.
- * NOTE: for PVH, the page tables are native.
+ * level2_ident_pgt, and level2_kernel_pgt. This means that only the
+ * kernel has a physical mapping to start with - but that's enough to
+ * get __va working. We need to fill in the rest of the physical
+ * mapping once some sort of allocator has been set up. NOTE: for
+ * PVH, the page tables are native.
*/
void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
{
/* L3_i[0] -> level2_ident_pgt */
convert_pfn_mfn(level3_ident_pgt);
/* L3_k[510] -> level2_kernel_pgt
- * L3_i[511] -> level2_fixmap_pgt */
+ * L3_k[511] -> level2_fixmap_pgt */
convert_pfn_mfn(level3_kernel_pgt);
+
+ /* L3_k[511][506] -> level1_fixmap_pgt */
+ convert_pfn_mfn(level2_fixmap_pgt);
}
/* We get [511][511] and have Xen's version of level2_kernel_pgt */
l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
addr[1] = (unsigned long)l3;
addr[2] = (unsigned long)l2;
/* Graft it onto L4[272][0]. Note that we creating an aliasing problem:
- * Both L4[272][0] and L4[511][511] have entries that point to the same
+ * Both L4[272][0] and L4[511][510] have entries that point to the same
* L2 (PMD) tables. Meaning that if you modify it in __va space
* it will be also modified in the __ka space! (But if you just
* modify the PMD table to point to other PTE's or none, then you
* are OK - which is what cleanup_highmap does) */
copy_page(level2_ident_pgt, l2);
- /* Graft it onto L4[511][511] */
+ /* Graft it onto L4[511][510] */
copy_page(level2_kernel_pgt, l2);
- /* Get [511][510] and graft that in level2_fixmap_pgt */
- l3 = m2v(pgd[pgd_index(__START_KERNEL_map + PMD_SIZE)].pgd);
- l2 = m2v(l3[pud_index(__START_KERNEL_map + PMD_SIZE)].pud);
- copy_page(level2_fixmap_pgt, l2);
- /* Note that we don't do anything with level1_fixmap_pgt which
- * we don't need. */
if (!xen_feature(XENFEAT_auto_translated_physmap)) {
/* Make pagetable pieces RO */
set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
+ set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO);
/* Pin down new L4 */
pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
config XTENSA
def_bool y
select ARCH_WANT_FRAME_POINTERS
- select HAVE_IDE
- select GENERIC_ATOMIC64
- select GENERIC_CLOCKEVENTS
- select VIRT_TO_BUS
- select GENERIC_IRQ_SHOW
- select GENERIC_SCHED_CLOCK
- select MODULES_USE_ELF_RELA
- select GENERIC_PCI_IOMAP
select ARCH_WANT_IPC_PARSE_VERSION
select ARCH_WANT_OPTIONAL_GPIOLIB
select BUILDTIME_EXTABLE_SORT
select CLONE_BACKWARDS
- select IRQ_DOMAIN
- select HAVE_OPROFILE
+ select COMMON_CLK
+ select GENERIC_ATOMIC64
+ select GENERIC_CLOCKEVENTS
+ select GENERIC_IRQ_SHOW
+ select GENERIC_PCI_IOMAP
+ select GENERIC_SCHED_CLOCK
select HAVE_FUNCTION_TRACER
select HAVE_IRQ_TIME_ACCOUNTING
+ select HAVE_OPROFILE
select HAVE_PERF_EVENTS
- select COMMON_CLK
+ select IRQ_DOMAIN
+ select MODULES_USE_ELF_RELA
+ select VIRT_TO_BUS
help
Xtensa processors are 32-bit RISC machines designed by Tensilica
primarily for embedded systems. These processors are both
def_bool y
config MMU
- def_bool n
+ bool
+ default n if !XTENSA_VARIANT_CUSTOM
+ default XTENSA_VARIANT_MMU if XTENSA_VARIANT_CUSTOM
config VARIANT_IRQ_SWITCH
def_bool n
select VARIANT_IRQ_SWITCH
select ARCH_REQUIRE_GPIOLIB
select XTENSA_CALIBRATE_CCOUNT
+
+config XTENSA_VARIANT_CUSTOM
+ bool "Custom Xtensa processor configuration"
+ select MAY_HAVE_SMP
+ select HAVE_XTENSA_GPIO32
+ help
+ Select this variant to use a custom Xtensa processor configuration.
+ You will be prompted for a processor variant CORENAME.
endchoice
+config XTENSA_VARIANT_CUSTOM_NAME
+ string "Xtensa Processor Custom Core Variant Name"
+ depends on XTENSA_VARIANT_CUSTOM
+ help
+ Provide the name of a custom Xtensa processor variant.
+ This CORENAME selects arch/xtensa/variant/CORENAME.
+	  Don't forget you have to select MMU if you have one.
+
+config XTENSA_VARIANT_NAME
+ string
+ default "dc232b" if XTENSA_VARIANT_DC232B
+ default "dc233c" if XTENSA_VARIANT_DC233C
+ default "fsf" if XTENSA_VARIANT_FSF
+ default "s6000" if XTENSA_VARIANT_S6000
+ default XTENSA_VARIANT_CUSTOM_NAME if XTENSA_VARIANT_CUSTOM
+
+config XTENSA_VARIANT_MMU
+ bool "Core variant has a Full MMU (TLB, Pages, Protection, etc)"
+ depends on XTENSA_VARIANT_CUSTOM
+ default y
+ help
+	  Build a conventional kernel with full MMU support,
+	  i.e. one that supports a TLB with auto-loading and page protection.
+
config XTENSA_UNALIGNED_USER
	bool "Unaligned memory access in user space"
help
Say N if you want to disable CPU hotplug.
-config MATH_EMULATION
- bool "Math emulation"
- help
- Can we use information of configuration file?
-
config INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
bool "Initialize Xtensa MMU inside the Linux kernel code"
+ depends on MMU
default y
help
Earlier version initialized the MMU in the exception vector
config HIGHMEM
bool "High Memory Support"
+ depends on MMU
help
Linux can use the full amount of RAM in the system by
default. However, the default MMUv2 setup only maps the
If unsure, say Y.
+config FAST_SYSCALL_XTENSA
+ bool "Enable fast atomic syscalls"
+ default n
+ help
+	  fast_syscall_xtensa is a syscall that performs atomic operations
+	  on UP kernels when the processor has no s32c1i support.
+
+ This syscall is deprecated. It may have issues when called with
+ invalid arguments. It is provided only for backwards compatibility.
+ Only enable it if your userspace software requires it.
+
+ If unsure, say N.
+
+config FAST_SYSCALL_SPILL_REGISTERS
+ bool "Enable spill registers syscall"
+ default n
+ help
+ fast_syscall_spill_registers is a syscall that spills all active
+ register windows of a calling userspace task onto its stack.
+
+ This syscall is deprecated. It may have issues when called with
+ invalid arguments. It is provided only for backwards compatibility.
+ Only enable it if your userspace software requires it.
+
+ If unsure, say N.
+
endmenu
config XTENSA_CALIBRATE_CCOUNT
config XTENSA_PLATFORM_XT2000
bool "XT2000"
+ select HAVE_IDE
help
XT2000 is the name of Tensilica's feature-rich emulation platform.
This hardware is capable of running a full Linux distribution.
config XTENSA_PLATFORM_S6105
bool "S6105"
+ select HAVE_IDE
select SERIAL_CONSOLE
select NO_IOPORT_MAP
# for more details.
#
# Copyright (C) 2001 - 2005 Tensilica Inc.
+# Copyright (C) 2014 Cadence Design Systems Inc.
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies. Remember to do have actions
# Core configuration.
# (Use VAR=<xtensa_config> to use another default compiler.)
-variant-$(CONFIG_XTENSA_VARIANT_FSF) := fsf
-variant-$(CONFIG_XTENSA_VARIANT_DC232B) := dc232b
-variant-$(CONFIG_XTENSA_VARIANT_DC233C) := dc233c
-variant-$(CONFIG_XTENSA_VARIANT_S6000) := s6000
-variant-$(CONFIG_XTENSA_VARIANT_LINUX_CUSTOM) := custom
+variant-y := $(patsubst "%",%,$(CONFIG_XTENSA_VARIANT_NAME))
VARIANT = $(variant-y)
export VARIANT
/ {
compatible = "cdns,xtensa-kc705";
+ chosen {
+ bootargs = "earlycon=uart8250,mmio32,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug memmap=0x38000000";
+ };
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000>;
+ reg = <0x00000000 0x38000000>;
};
};
CONFIG_MMU=y
# CONFIG_XTENSA_UNALIGNED_USER is not set
# CONFIG_PREEMPT is not set
-# CONFIG_MATH_EMULATION is not set
# CONFIG_HIGHMEM is not set
#
# CONFIG_XTENSA_VARIANT_S6000 is not set
# CONFIG_XTENSA_UNALIGNED_USER is not set
# CONFIG_PREEMPT is not set
-# CONFIG_MATH_EMULATION is not set
CONFIG_XTENSA_CALIBRATE_CCOUNT=y
CONFIG_SERIAL_CONSOLE=y
CONFIG_XTENSA_ISS_NETWORK=y
# EEPROM support
#
# CONFIG_EEPROM_93CX6 is not set
-CONFIG_HAVE_IDE=y
+# CONFIG_HAVE_IDE is not set
# CONFIG_IDE is not set
#
CONFIG_XTENSA_VARIANT_S6000=y
# CONFIG_XTENSA_UNALIGNED_USER is not set
CONFIG_PREEMPT=y
-# CONFIG_MATH_EMULATION is not set
# CONFIG_HIGHMEM is not set
CONFIG_XTENSA_CALIBRATE_CCOUNT=y
CONFIG_SERIAL_CONSOLE=y
* specials for cache aliasing:
*
* __flush_invalidate_dcache_page_alias(vaddr,paddr)
+ * __invalidate_dcache_page_alias(vaddr,paddr)
* __invalidate_icache_page_alias(vaddr,paddr)
*/
#if defined(CONFIG_MMU) && (DCACHE_WAY_SIZE > PAGE_SIZE)
extern void __flush_invalidate_dcache_page_alias(unsigned long, unsigned long);
+extern void __invalidate_dcache_page_alias(unsigned long, unsigned long);
#else
static inline void __flush_invalidate_dcache_page_alias(unsigned long virt,
unsigned long phys) { }
* Here we define all the compile-time 'special' virtual
* addresses. The point is to have a constant address at
* compile time, but to set the physical address only
- * in the boot process. We allocate these special addresses
- * from the end of the consistent memory region backwards.
+ * in the boot process. We allocate these special addresses
+ * from the start of the consistent memory region upwards.
* Also this lets us do fail-safe vmalloc(), we
* can guarantee that these special addresses and
* vmalloc()-ed addresses never overlap.
#ifdef CONFIG_HIGHMEM
/* reserved pte's for temporary kernel mappings */
FIX_KMAP_BEGIN,
- FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
+ FIX_KMAP_END = FIX_KMAP_BEGIN +
+ (KM_TYPE_NR * NR_CPUS * DCACHE_N_COLORS) - 1,
#endif
__end_of_fixed_addresses
};
#define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
#define FIXADDR_START ((FIXADDR_TOP - FIXADDR_SIZE) & PMD_MASK)
-#include <asm-generic/fixmap.h>
+#define __fix_to_virt(x) (FIXADDR_START + ((x) << PAGE_SHIFT))
+#define __virt_to_fix(x) (((x) - FIXADDR_START) >> PAGE_SHIFT)
+
+#ifndef __ASSEMBLY__
+/*
+ * 'index to address' translation. If anyone tries to use the idx
+ * directly without translation, we catch the bug with a NULL-dereference
+ * kernel oops. Illegal ranges of incoming indices are caught too.
+ */
+static __always_inline unsigned long fix_to_virt(const unsigned int idx)
+{
+ BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
+ return __fix_to_virt(idx);
+}
+
+static inline unsigned long virt_to_fix(const unsigned long vaddr)
+{
+ BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
+ return __virt_to_fix(vaddr);
+}
+
+#endif
#define kmap_get_fixmap_pte(vaddr) \
pte_offset_kernel( \
#ifndef _XTENSA_HIGHMEM_H
#define _XTENSA_HIGHMEM_H
+#include <linux/wait.h>
#include <asm/cacheflush.h>
#include <asm/fixmap.h>
#include <asm/kmap_types.h>
#include <asm/pgtable.h>
-#define PKMAP_BASE (FIXADDR_START - PMD_SIZE)
-#define LAST_PKMAP PTRS_PER_PTE
+#define PKMAP_BASE ((FIXADDR_START - \
+ (LAST_PKMAP + 1) * PAGE_SIZE) & PMD_MASK)
+#define LAST_PKMAP (PTRS_PER_PTE * DCACHE_N_COLORS)
#define LAST_PKMAP_MASK (LAST_PKMAP - 1)
#define PKMAP_NR(virt) (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
#define kmap_prot PAGE_KERNEL
+#if DCACHE_WAY_SIZE > PAGE_SIZE
+#define get_pkmap_color get_pkmap_color
+static inline int get_pkmap_color(struct page *page)
+{
+ return DCACHE_ALIAS(page_to_phys(page));
+}
+
+extern unsigned int last_pkmap_nr_arr[];
+
+static inline unsigned int get_next_pkmap_nr(unsigned int color)
+{
+ last_pkmap_nr_arr[color] =
+ (last_pkmap_nr_arr[color] + DCACHE_N_COLORS) & LAST_PKMAP_MASK;
+ return last_pkmap_nr_arr[color] + color;
+}
+
+static inline int no_more_pkmaps(unsigned int pkmap_nr, unsigned int color)
+{
+ return pkmap_nr < DCACHE_N_COLORS;
+}
+
+static inline int get_pkmap_entries_count(unsigned int color)
+{
+ return LAST_PKMAP / DCACHE_N_COLORS;
+}
+
+extern wait_queue_head_t pkmap_map_wait_arr[];
+
+static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)
+{
+ return pkmap_map_wait_arr + color;
+}
+#endif
+
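A worked example of the color stepping (assuming DCACHE_N_COLORS == 4 and PTRS_PER_PTE == 1024, so LAST_PKMAP == 4096): the per-color counter advances in whole color strides, and the color is added back on return, keeping each page's virtual alias on its own cache color.

    /* hedged illustration of get_next_pkmap_nr() with 4 colors */
    static unsigned int last[4];            /* stand-in for last_pkmap_nr_arr */

    static unsigned int next_slot(unsigned int color)
    {
            last[color] = (last[color] + 4) & (4096 - 1);  /* LAST_PKMAP_MASK */
            return last[color] + color;     /* color 1 yields 5, 9, 13, ... */
    }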
extern pte_t *pkmap_page_table;
void *kmap_high(struct page *page);
# define DCACHE_ALIAS_EQ(a,b) ((((a) ^ (b)) & DCACHE_ALIAS_MASK) == 0)
#else
# define DCACHE_ALIAS_ORDER 0
+# define DCACHE_ALIAS(a) ((void)(a), 0)
#endif
+#define DCACHE_N_COLORS (1 << DCACHE_ALIAS_ORDER)
#if ICACHE_WAY_SIZE > PAGE_SIZE
# define ICACHE_ALIAS_ORDER (ICACHE_WAY_SHIFT - PAGE_SHIFT)
#endif
struct page;
+struct vm_area_struct;
extern void clear_page(void *page);
extern void copy_page(void *to, void *from);
*/
#if DCACHE_WAY_SIZE > PAGE_SIZE
-extern void clear_user_page(void*, unsigned long, struct page*);
-extern void copy_user_page(void*, void*, unsigned long, struct page*);
+extern void clear_page_alias(void *vaddr, unsigned long paddr);
+extern void copy_page_alias(void *to, void *from,
+ unsigned long to_paddr, unsigned long from_paddr);
+
+#define clear_user_highpage clear_user_highpage
+void clear_user_highpage(struct page *page, unsigned long vaddr);
+#define __HAVE_ARCH_COPY_USER_HIGHPAGE
+void copy_user_highpage(struct page *to, struct page *from,
+ unsigned long vaddr, struct vm_area_struct *vma);
#else
# define clear_user_page(page, vaddr, pg) clear_page(page)
# define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
#define VMALLOC_START 0xC0000000
#define VMALLOC_END 0xC7FEFFFF
#define TLBTEMP_BASE_1 0xC7FF0000
-#define TLBTEMP_BASE_2 0xC7FF8000
+#define TLBTEMP_BASE_2 (TLBTEMP_BASE_1 + DCACHE_WAY_SIZE)
+#if 2 * DCACHE_WAY_SIZE > ICACHE_WAY_SIZE
+#define TLBTEMP_SIZE (2 * DCACHE_WAY_SIZE)
+#else
+#define TLBTEMP_SIZE ICACHE_WAY_SIZE
+#endif
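Read together, the two preprocessor branches simply take a maximum. A hedged restatement (TLBTEMP_SIZE_EQUIV is an illustrative name; the assumption is that TLBTEMP_BASE_1 and TLBTEMP_BASE_2 each provide one D-cache-way-sized window):

    #define TLBTEMP_SIZE_EQUIV \
            (2 * DCACHE_WAY_SIZE > ICACHE_WAY_SIZE ? \
             2 * DCACHE_WAY_SIZE : ICACHE_WAY_SIZE)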
/*
* For the Xtensa architecture, the PTE layout is as follows:
*/
.macro get_fs ad, sp
GET_CURRENT(\ad,\sp)
+#if THREAD_CURRENT_DS > 1020
+ addi \ad, \ad, TASK_THREAD
+ l32i \ad, \ad, THREAD_CURRENT_DS - TASK_THREAD
+#else
l32i \ad, \ad, THREAD_CURRENT_DS
+#endif
.endm
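The 1020 in the guard above is the reach of the l32i immediate: an 8-bit field scaled by 4, so offsets beyond 255 * 4 cannot be encoded and need the addi/l32i pair. As a one-line restatement:

    #define L32I_MAX_OFFSET (255 * 4)   /* == 1020; larger offsets won't encode */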
/*
#define TCSETSW 0x5403
#define TCSETSF 0x5404
-#define TCGETA _IOR('t', 23, struct termio)
-#define TCSETA _IOW('t', 24, struct termio)
-#define TCSETAW _IOW('t', 25, struct termio)
-#define TCSETAF _IOW('t', 28, struct termio)
+#define TCGETA 0x80127417 /* _IOR('t', 23, struct termio) */
+#define TCSETA 0x40127418 /* _IOW('t', 24, struct termio) */
+#define TCSETAW 0x40127419 /* _IOW('t', 25, struct termio) */
+#define TCSETAF 0x4012741C /* _IOW('t', 28, struct termio) */
#define TCSBRK _IO('t', 29)
#define TCXONC _IO('t', 30)
#define TCFLSH _IO('t', 31)
-#define TIOCSWINSZ _IOW('t', 103, struct winsize)
-#define TIOCGWINSZ _IOR('t', 104, struct winsize)
+#define TIOCSWINSZ 0x40087467 /* _IOW('t', 103, struct winsize) */
+#define TIOCGWINSZ 0x80087468 /* _IOR('t', 104, struct winsize) */
#define TIOCSTART _IO('t', 110) /* start output, like ^Q */
#define TIOCSTOP _IO('t', 111) /* stop output, like ^S */
#define TIOCOUTQ _IOR('t', 115, int) /* output queue size */
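The hardcoded numbers can be verified by hand: the generic encoding is dir<<30 | size<<16 | type<<8 | nr, with _IOC_READ == 2 and _IOC_WRITE == 1. A self-contained sketch, assuming sizeof(struct termio) == 18 and sizeof(struct winsize) == 8:

    #include <stdio.h>

    #define IOC(dir, type, nr, size) \
            (((unsigned)(dir) << 30) | ((unsigned)(size) << 16) | \
             ((unsigned)(type) << 8) | (unsigned)(nr))

    int main(void)
    {
            printf("TCGETA     = %#010x\n", IOC(2, 't', 23, 18));  /* 0x80127417 */
            printf("TCSETA     = %#010x\n", IOC(1, 't', 24, 18));  /* 0x40127418 */
            printf("TIOCSWINSZ = %#010x\n", IOC(1, 't', 103, 8));  /* 0x40087467 */
            printf("TIOCGWINSZ = %#010x\n", IOC(2, 't', 104, 8));  /* 0x80087468 */
            return 0;
    }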
#define TIOCSETD _IOW('T', 35, int)
#define TIOCGETD _IOR('T', 36, int)
#define TCSBRKP _IOW('T', 37, int) /* Needed for POSIX tcsendbreak()*/
-#define TIOCTTYGSTRUCT _IOR('T', 38, struct tty_struct) /* For debugging only*/
#define TIOCSBRK _IO('T', 39) /* BSD compatibility */
#define TIOCCBRK _IO('T', 40) /* BSD compatibility */
#define TIOCGSID _IOR('T', 41, pid_t) /* Return the session ID of FD*/
#define TIOCSERGETLSR _IOR('T', 89, unsigned int) /* Get line status reg. */
/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
# define TIOCSER_TEMT 0x01 /* Transmitter physically empty */
-#define TIOCSERGETMULTI _IOR('T', 90, struct serial_multiport_struct) /* Get multiport config */
-#define TIOCSERSETMULTI _IOW('T', 91, struct serial_multiport_struct) /* Set multiport config */
+#define TIOCSERGETMULTI 0x80a8545a /* Get multiport config */
+ /* _IOR('T', 90, struct serial_multiport_struct) */
+#define TIOCSERSETMULTI 0x40a8545b /* Set multiport config */
+ /* _IOW('T', 91, struct serial_multiport_struct) */
#define TIOCMIWAIT _IO('T', 92) /* wait for a change on serial input line(s) */
#define TIOCGICOUNT 0x545D /* read serial port inline interrupt counts */
#define __NR_sched_getattr 335
__SYSCALL(335, sys_sched_getattr, 3)
-#define __NR_syscall_count 336
+#define __NR_renameat2 336
+__SYSCALL(336, sys_renameat2, 5)
+
+#define __NR_syscall_count 337
/*
* sysxtensa syscall handler
* this archive for more details.
*
* Copyright (C) 2001 - 2005 Tensilica, Inc.
+ * Copyright (C) 2014 Cadence Design Systems Inc.
*
* Rewritten by Chris Zankel <chris@zankel.net>
*
s32i a0, a2, PT_AREG2
s32i a3, a2, PT_AREG3
+ rsr a3, excsave1
+ movi a4, fast_unaligned_fixup
+ s32i a4, a3, EXC_TABLE_FIXUP
+
/* Keep value of SAR in a0 */
rsr a0, sar
addx8 a5, a6, a5
jx a5 # jump into table
- /* Invalid instruction, CRITICAL! */
-.Linvalid_instruction_load:
- j .Linvalid_instruction
-
/* Load: Load memory address. */
.Lload: movi a3, ~3
/* Set target register. */
1:
-
-#if XCHAL_HAVE_LOOPS
- rsr a5, lend # check if we reached LEND
- bne a7, a5, 1f
- rsr a5, lcount # and LCOUNT != 0
- beqz a5, 1f
- addi a5, a5, -1 # decrement LCOUNT and set
- rsr a7, lbeg # set PC to LBEGIN
- wsr a5, lcount
-#endif
-
-1: wsr a7, epc1 # skip load instruction
extui a4, a4, INSN_T, 4 # extract target register
movi a5, .Lload_table
addx8 a4, a4, a5
mov a3, a14 ; _j 1f; .align 8
mov a3, a15 ; _j 1f; .align 8
+ /* We cannot handle this exception. */
+
+ .extern _kernel_exception
+.Linvalid_instruction_load:
+.Linvalid_instruction_store:
+
+ movi a4, 0
+ rsr a3, excsave1
+ s32i a4, a3, EXC_TABLE_FIXUP
+
+ /* Restore a4...a8 and SAR, set SP, and jump to default exception. */
+
+ l32i a8, a2, PT_AREG8
+ l32i a7, a2, PT_AREG7
+ l32i a6, a2, PT_AREG6
+ l32i a5, a2, PT_AREG5
+ l32i a4, a2, PT_AREG4
+ wsr a0, sar
+ mov a1, a2
+
+ rsr a0, ps
+ bbsi.l a0, PS_UM_BIT, 2f # jump if user mode
+
+ movi a0, _kernel_exception
+ jx a0
+
+2: movi a0, _user_exception
+ jx a0
+
1: # a7: instruction pointer, a4: instruction, a3: value
movi a6, 0 # mask: ffffffff:00000000
/* Get memory address */
1:
-#if XCHAL_HAVE_LOOPS
- rsr a4, lend # check if we reached LEND
- bne a7, a4, 1f
- rsr a4, lcount # and LCOUNT != 0
- beqz a4, 1f
- addi a4, a4, -1 # decrement LCOUNT and set
- rsr a7, lbeg # set PC to LBEGIN
- wsr a4, lcount
-#endif
-
-1: wsr a7, epc1 # skip store instruction
movi a4, ~3
and a4, a4, a8 # align memory address
#endif
__ssa8r a8
- __src_b a7, a5, a6 # lo-mask F..F0..0 (BE) 0..0F..F (LE)
+ __src_b a8, a5, a6 # lo-mask F..F0..0 (BE) 0..0F..F (LE)
__src_b a6, a6, a5 # hi-mask 0..0F..F (BE) F..F0..0 (LE)
#ifdef UNALIGNED_USER_EXCEPTION
l32e a5, a4, -8
#else
l32i a5, a4, 0 # load lower address word
#endif
- and a5, a5, a7 # mask
- __sh a7, a3 # shift value
- or a5, a5, a7 # or with original value
+ and a5, a5, a8 # mask
+ __sh a8, a3 # shift value
+ or a5, a5, a8 # or with original value
#ifdef UNALIGNED_USER_EXCEPTION
s32e a5, a4, -8
- l32e a7, a4, -4
+ l32e a8, a4, -4
#else
s32i a5, a4, 0 # store
- l32i a7, a4, 4 # same for upper address word
+ l32i a8, a4, 4 # same for upper address word
#endif
__sl a5, a3
- and a6, a7, a6
+ and a6, a8, a6
or a6, a6, a5
#ifdef UNALIGNED_USER_EXCEPTION
s32e a6, a4, -4
s32i a6, a4, 4
#endif
- /* Done. restore stack and return */
-
.Lexit:
+#if XCHAL_HAVE_LOOPS
+ rsr a4, lend # check if we reached LEND
+ bne a7, a4, 1f
+ rsr a4, lcount # and LCOUNT != 0
+ beqz a4, 1f
+ addi a4, a4, -1 # decrement LCOUNT and set
+ rsr a7, lbeg # set PC to LBEGIN
+ wsr a4, lcount
+#endif
+
+1: wsr a7, epc1 # skip emulated instruction
+
+ /* Update icount if we're single-stepping in userspace. */
+ rsr a4, icountlevel
+ beqz a4, 1f
+ bgeui a4, LOCKLEVEL + 1, 1f
+ rsr a4, icount
+ addi a4, a4, 1
+ wsr a4, icount
+1:
movi a4, 0
rsr a3, excsave1
s32i a4, a3, EXC_TABLE_FIXUP
l32i a2, a2, PT_AREG2
rfe
- /* We cannot handle this exception. */
+ENDPROC(fast_unaligned)
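The relocated .Lexit logic reads more easily in C. A hedged model of how the return PC is chosen after the access has been emulated, honouring xtensa zero-overhead loops (names mirror the registers above; this is illustration, not kernel code):

    static unsigned long next_pc(unsigned long insn_end, unsigned long lbeg,
                                 unsigned long lend, unsigned long *lcount)
    {
    #if XCHAL_HAVE_LOOPS
            if (insn_end == lend && *lcount != 0) {
                    (*lcount)--;            /* consume one loop iteration */
                    return lbeg;            /* wrap back to the loop start */
            }
    #endif
            return insn_end;                /* just skip the emulated insn */
    }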
- .extern _kernel_exception
-.Linvalid_instruction_store:
-.Linvalid_instruction:
+ENTRY(fast_unaligned_fixup)
- /* Restore a4...a8 and SAR, set SP, and jump to default exception. */
+ l32i a2, a3, EXC_TABLE_DOUBLE_SAVE
+ wsr a3, excsave1
l32i a8, a2, PT_AREG8
l32i a7, a2, PT_AREG7
l32i a6, a2, PT_AREG6
l32i a5, a2, PT_AREG5
l32i a4, a2, PT_AREG4
+ l32i a0, a2, PT_AREG2
+ xsr a0, depc # restore depc and a0
wsr a0, sar
- mov a1, a2
+
+ rsr a0, exccause
+ s32i a0, a2, PT_DEPC # mark as a regular exception
rsr a0, ps
- bbsi.l a2, PS_UM_BIT, 1f # jump if user mode
+ bbsi.l a0, PS_UM_BIT, 1f # jump if user mode
- movi a0, _kernel_exception
+ rsr a0, exccause
+ addx4 a0, a0, a3 # find entry in table
+ l32i a0, a0, EXC_TABLE_FAST_KERNEL # load handler
+ l32i a3, a2, PT_AREG3
jx a0
-
-1: movi a0, _user_exception
+1:
+ rsr a0, exccause
+ addx4 a0, a0, a3 # find entry in table
+ l32i a0, a0, EXC_TABLE_FAST_USER # load handler
+ l32i a3, a2, PT_AREG3
jx a0
-ENDPROC(fast_unaligned)
+ENDPROC(fast_unaligned_fixup)
#endif /* XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION */
* j done
*/
+#ifdef CONFIG_FAST_SYSCALL_XTENSA
+
#define TRY \
.section __ex_table, "a"; \
.word 66f, 67f; \
movi a7, 4 # sizeof(unsigned int)
access_ok a3, a7, a0, a2, .Leac # a0: scratch reg, a2: sp
- addi a6, a6, -1 # assuming SYS_XTENSA_ATOMIC_SET = 1
- _bgeui a6, SYS_XTENSA_COUNT - 1, .Lill
- _bnei a6, SYS_XTENSA_ATOMIC_CMP_SWP - 1, .Lnswp
+ _bgeui a6, SYS_XTENSA_COUNT, .Lill
+ _bnei a6, SYS_XTENSA_ATOMIC_CMP_SWP, .Lnswp
/* Fall through for ATOMIC_CMP_SWP. */
l32i a7, a2, PT_AREG7 # restore a7
l32i a0, a2, PT_AREG0 # restore a0
movi a2, 1 # and return 1
- addi a6, a6, 1 # restore a6 (really necessary?)
rfe
1: l32i a7, a2, PT_AREG7 # restore a7
l32i a0, a2, PT_AREG0 # restore a0
movi a2, 0 # return 0 (note that we cannot set
- addi a6, a6, 1 # restore a6 (really necessary?)
rfe
.Lnswp: /* Atomic set, add, and exg_add. */
TRY l32i a7, a3, 0 # orig
+ addi a6, a6, -SYS_XTENSA_ATOMIC_SET
add a0, a4, a7 # + arg
moveqz a0, a4, a6 # set
+ addi a6, a6, SYS_XTENSA_ATOMIC_SET
TRY s32i a0, a3, 0 # write new value
mov a0, a2
mov a2, a7
l32i a7, a0, PT_AREG7 # restore a7
l32i a0, a0, PT_AREG0 # restore a0
- addi a6, a6, 1 # restore a6 (really necessary?)
rfe
CATCH
movi a2, -EFAULT
rfe
-.Lill: l32i a7, a2, PT_AREG0 # restore a7
+.Lill: l32i a7, a2, PT_AREG7 # restore a7
l32i a0, a2, PT_AREG0 # restore a0
movi a2, -EINVAL
rfe
ENDPROC(fast_syscall_xtensa)
+#else /* CONFIG_FAST_SYSCALL_XTENSA */
+
+ENTRY(fast_syscall_xtensa)
+
+ l32i a0, a2, PT_AREG0 # restore a0
+ movi a2, -ENOSYS
+ rfe
+
+ENDPROC(fast_syscall_xtensa)
+
+#endif /* CONFIG_FAST_SYSCALL_XTENSA */
+
/* fast_syscall_spill_registers.
*
* Note: We assume the stack pointer is EXC_TABLE_KSTK in the fixup handler.
*/
+#ifdef CONFIG_FAST_SYSCALL_SPILL_REGISTERS
+
ENTRY(fast_syscall_spill_registers)
/* Register a FIXUP handler (pass current wb as a parameter) */
ENDPROC(fast_syscall_spill_registers_fixup_return)
+#else /* CONFIG_FAST_SYSCALL_SPILL_REGISTERS */
+
+ENTRY(fast_syscall_spill_registers)
+
+ l32i a0, a2, PT_AREG0 # restore a0
+ movi a2, -ENOSYS
+ rfe
+
+ENDPROC(fast_syscall_spill_registers)
+
+#endif /* CONFIG_FAST_SYSCALL_SPILL_REGISTERS */
+
#ifdef CONFIG_MMU
/*
* We should never get here. Bail out!
rsr a0, excvaddr
bltu a0, a3, 2f
- addi a1, a0, -(2 << (DCACHE_ALIAS_ORDER + PAGE_SHIFT))
+ addi a1, a0, -TLBTEMP_SIZE
bgeu a1, a3, 2f
/* Check if we have to restore an ITLB mapping. */
entry a1, 16
- mov a10, a2 # preserve 'prev' (a2)
mov a11, a3 # and 'next' (a3)
l32i a4, a2, TASK_THREAD_INFO
save_xtregs_user a4 a6 a8 a9 a12 a13 THREAD_XTREGS_USER
- s32i a0, a10, THREAD_RA # save return address
- s32i a1, a10, THREAD_SP # save stack pointer
+#if THREAD_RA > 1020 || THREAD_SP > 1020
+ addi a10, a2, TASK_THREAD
+ s32i a0, a10, THREAD_RA - TASK_THREAD # save return address
+ s32i a1, a10, THREAD_SP - TASK_THREAD # save stack pointer
+#else
+ s32i a0, a2, THREAD_RA # save return address
+ s32i a1, a2, THREAD_SP # save stack pointer
+#endif
/* Disable ints while we manipulate the stack pointer. */
load_xtregs_user a5 a6 a8 a9 a12 a13 THREAD_XTREGS_USER
wsr a14, ps
- mov a2, a10 # return 'prev'
rsync
retw
/* We currently don't support coherent memory outside KSEG */
- if (ret < XCHAL_KSEG_CACHED_VADDR
- || ret >= XCHAL_KSEG_CACHED_VADDR + XCHAL_KSEG_SIZE)
- BUG();
+ BUG_ON(ret < XCHAL_KSEG_CACHED_VADDR ||
+ ret > XCHAL_KSEG_CACHED_VADDR + XCHAL_KSEG_SIZE - 1);
if (ret != 0) {
void dma_free_coherent(struct device *hwdev, size_t size,
void *vaddr, dma_addr_t dma_handle)
{
- long addr=(long)vaddr+XCHAL_KSEG_CACHED_VADDR-XCHAL_KSEG_BYPASS_VADDR;
+ unsigned long addr = (unsigned long)vaddr +
+ XCHAL_KSEG_CACHED_VADDR - XCHAL_KSEG_BYPASS_VADDR;
- if (addr < 0 || addr >= XCHAL_KSEG_SIZE)
- BUG();
+ BUG_ON(addr < XCHAL_KSEG_CACHED_VADDR ||
+ addr > XCHAL_KSEG_CACHED_VADDR + XCHAL_KSEG_SIZE - 1);
free_pages(addr, get_order(size));
}
};
on_each_cpu(ipi_flush_icache_range, &fd, 1);
}
+EXPORT_SYMBOL(flush_icache_range);
/* ------------------------------------------------------------------------- */
#if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION
#ifdef CONFIG_XTENSA_UNALIGNED_USER
{ EXCCAUSE_UNALIGNED, USER, fast_unaligned },
-#else
-{ EXCCAUSE_UNALIGNED, 0, do_unaligned_user },
#endif
+{ EXCCAUSE_UNALIGNED, 0, do_unaligned_user },
{ EXCCAUSE_UNALIGNED, KRNL, fast_unaligned },
#endif
#ifdef CONFIG_MMU
*/
#if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION
-#ifndef CONFIG_XTENSA_UNALIGNED_USER
void
do_unaligned_user (struct pt_regs *regs)
{
}
#endif
-#endif
void
do_debug(struct pt_regs *regs)
s32i a0, a2, PT_DEPC
_DoubleExceptionVector_handle_exception:
+ addi a0, a0, -EXCCAUSE_UNALIGNED
+ beqz a0, 2f
addx4 a0, a0, a3
- l32i a0, a0, EXC_TABLE_FAST_USER
+ l32i a0, a0, EXC_TABLE_FAST_USER + 4 * EXCCAUSE_UNALIGNED
+ xsr a3, excsave1
+ jx a0
+2:
+ movi a0, user_exception
xsr a3, excsave1
jx a0
.UserExceptionVector.literal)
SECTION_VECTOR (_DoubleExceptionVector_literal,
.DoubleExceptionVector.literal,
- DOUBLEEXC_VECTOR_VADDR - 40,
+ DOUBLEEXC_VECTOR_VADDR - 48,
SIZEOF(.UserExceptionVector.text),
.UserExceptionVector.text)
SECTION_VECTOR (_DoubleExceptionVector_text,
.DoubleExceptionVector.text,
DOUBLEEXC_VECTOR_VADDR,
- 40,
+ 48,
.DoubleExceptionVector.literal)
. = (LOADADDR( .DoubleExceptionVector.text ) + SIZEOF( .DoubleExceptionVector.text ) + 3) & ~ 3;
*
*/
-#if (DCACHE_WAY_SIZE > PAGE_SIZE) && defined(CONFIG_HIGHMEM)
-#error "HIGHMEM is not supported on cores with aliasing cache."
-#endif
+#if (DCACHE_WAY_SIZE > PAGE_SIZE)
+static inline void kmap_invalidate_coherent(struct page *page,
+ unsigned long vaddr)
+{
+ if (!DCACHE_ALIAS_EQ(page_to_phys(page), vaddr)) {
+ unsigned long kvaddr;
+
+ if (!PageHighMem(page)) {
+ kvaddr = (unsigned long)page_to_virt(page);
+
+ __invalidate_dcache_page(kvaddr);
+ } else {
+ kvaddr = TLBTEMP_BASE_1 +
+ (page_to_phys(page) & DCACHE_ALIAS_MASK);
+
+ __invalidate_dcache_page_alias(kvaddr,
+ page_to_phys(page));
+ }
+ }
+}
+
+static inline void *coherent_kvaddr(struct page *page, unsigned long base,
+ unsigned long vaddr, unsigned long *paddr)
+{
+ if (PageHighMem(page) || !DCACHE_ALIAS_EQ(page_to_phys(page), vaddr)) {
+ *paddr = page_to_phys(page);
+ return (void *)(base + (vaddr & DCACHE_ALIAS_MASK));
+ } else {
+ *paddr = 0;
+ return page_to_virt(page);
+ }
+}
+
+void clear_user_highpage(struct page *page, unsigned long vaddr)
+{
+ unsigned long paddr;
+ void *kvaddr = coherent_kvaddr(page, TLBTEMP_BASE_1, vaddr, &paddr);
+
+ pagefault_disable();
+ kmap_invalidate_coherent(page, vaddr);
+ set_bit(PG_arch_1, &page->flags);
+ clear_page_alias(kvaddr, paddr);
+ pagefault_enable();
+}
+
+void copy_user_highpage(struct page *dst, struct page *src,
+ unsigned long vaddr, struct vm_area_struct *vma)
+{
+ unsigned long dst_paddr, src_paddr;
+ void *dst_vaddr = coherent_kvaddr(dst, TLBTEMP_BASE_1, vaddr,
+ &dst_paddr);
+ void *src_vaddr = coherent_kvaddr(src, TLBTEMP_BASE_2, vaddr,
+ &src_paddr);
+
+ pagefault_disable();
+ kmap_invalidate_coherent(dst, vaddr);
+ set_bit(PG_arch_1, &dst->flags);
+ copy_page_alias(dst_vaddr, src_vaddr, dst_paddr, src_paddr);
+ pagefault_enable();
+}
+
+#endif /* DCACHE_WAY_SIZE > PAGE_SIZE */
#if (DCACHE_WAY_SIZE > PAGE_SIZE) && XCHAL_DCACHE_IS_WRITEBACK
if (!alias && !mapping)
return;
- __flush_invalidate_dcache_page((long)page_address(page));
+ virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+ __flush_invalidate_dcache_page_alias(virt, phys);
virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
#if (DCACHE_WAY_SIZE > PAGE_SIZE) && XCHAL_DCACHE_IS_WRITEBACK
if (!PageReserved(page) && test_bit(PG_arch_1, &page->flags)) {
-
- unsigned long paddr = (unsigned long) page_address(page);
unsigned long phys = page_to_phys(page);
- unsigned long tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
-
- __flush_invalidate_dcache_page(paddr);
+ unsigned long tmp;
+ tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+ __flush_invalidate_dcache_page_alias(tmp, phys);
+ tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
__flush_invalidate_dcache_page_alias(tmp, phys);
__invalidate_icache_page_alias(tmp, phys);
static pte_t *kmap_pte;
+#if DCACHE_WAY_SIZE > PAGE_SIZE
+unsigned int last_pkmap_nr_arr[DCACHE_N_COLORS];
+wait_queue_head_t pkmap_map_wait_arr[DCACHE_N_COLORS];
+
+static void __init kmap_waitqueues_init(void)
+{
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(pkmap_map_wait_arr); ++i)
+ init_waitqueue_head(pkmap_map_wait_arr + i);
+}
+#else
+static inline void kmap_waitqueues_init(void)
+{
+}
+#endif
+
+static inline enum fixed_addresses kmap_idx(int type, unsigned long color)
+{
+ return (type + KM_TYPE_NR * smp_processor_id()) * DCACHE_N_COLORS +
+ color;
+}
+
void *kmap_atomic(struct page *page)
{
enum fixed_addresses idx;
unsigned long vaddr;
- int type;
pagefault_disable();
if (!PageHighMem(page))
return page_address(page);
- type = kmap_atomic_idx_push();
- idx = type + KM_TYPE_NR * smp_processor_id();
+ idx = kmap_idx(kmap_atomic_idx_push(),
+ DCACHE_ALIAS(page_to_phys(page)));
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
#ifdef CONFIG_DEBUG_HIGHMEM
- BUG_ON(!pte_none(*(kmap_pte - idx)));
+ BUG_ON(!pte_none(*(kmap_pte + idx)));
#endif
- set_pte(kmap_pte - idx, mk_pte(page, PAGE_KERNEL_EXEC));
+ set_pte(kmap_pte + idx, mk_pte(page, PAGE_KERNEL_EXEC));
return (void *)vaddr;
}
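A worked example of the index math (KM_TYPE_NR == 20 and DCACHE_N_COLORS == 4 are assumed values): each (cpu, type) pair now owns one fixmap slot per cache color, so the temporary mapping can always match the page's own color.

    /* hedged demo of kmap_idx(); the constants are assumptions */
    static inline int kmap_idx_demo(int type, int cpu, unsigned long color)
    {
            return (type + 20 /* KM_TYPE_NR */ * cpu) * 4 /* colors */ + color;
    }
    /* cpu 0: type 0 -> slots 0..3, type 1 -> slots 4..7;
     * cpu 1 starts at 20 * 4 == 80, so per-cpu ranges never overlap. */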
void __kunmap_atomic(void *kvaddr)
{
- int idx, type;
-
if (kvaddr >= (void *)FIXADDR_START &&
kvaddr < (void *)FIXADDR_TOP) {
- type = kmap_atomic_idx();
- idx = type + KM_TYPE_NR * smp_processor_id();
+ int idx = kmap_idx(kmap_atomic_idx(),
+ DCACHE_ALIAS((unsigned long)kvaddr));
/*
* Force other mappings to Oops if they'll try to access this
* is a bad idea also, in case the page changes cacheability
* attributes or becomes a protected page in a hypervisor.
*/
- pte_clear(&init_mm, kvaddr, kmap_pte - idx);
+ pte_clear(&init_mm, kvaddr, kmap_pte + idx);
local_flush_tlb_kernel_range((unsigned long)kvaddr,
(unsigned long)kvaddr + PAGE_SIZE);
/* cache the first kmap pte */
kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
kmap_pte = kmap_get_fixmap_pte(kmap_vstart);
+ kmap_waitqueues_init();
}
#if (DCACHE_WAY_SIZE > PAGE_SIZE)
/*
- * clear_user_page (void *addr, unsigned long vaddr, struct page *page)
- * a2 a3 a4
+ * clear_page_alias(void *addr, unsigned long paddr)
+ * a2 a3
*/
-ENTRY(clear_user_page)
+ENTRY(clear_page_alias)
entry a1, 32
- /* Mark page dirty and determine alias. */
+	/* Skip setting up a temporary DTLB for a non-aliased low page. */
- movi a7, (1 << PG_ARCH_1)
- l32i a5, a4, PAGE_FLAGS
- xor a6, a2, a3
- extui a3, a3, PAGE_SHIFT, DCACHE_ALIAS_ORDER
- extui a6, a6, PAGE_SHIFT, DCACHE_ALIAS_ORDER
- or a5, a5, a7
- slli a3, a3, PAGE_SHIFT
- s32i a5, a4, PAGE_FLAGS
+ movi a5, PAGE_OFFSET
+ movi a6, 0
+ beqz a3, 1f
- /* Skip setting up a temporary DTLB if not aliased. */
-
- beqz a6, 1f
-
- /* Invalidate kernel page. */
-
- mov a10, a2
- call8 __invalidate_dcache_page
-
- /* Setup a temporary DTLB with the color of the VPN */
-
- movi a4, ((PAGE_KERNEL | _PAGE_HW_WRITE) - PAGE_OFFSET) & 0xffffffff
- movi a5, TLBTEMP_BASE_1 # virt
- add a6, a2, a4 # ppn
- add a2, a5, a3 # add 'color'
+ /* Setup a temporary DTLB for the addr. */
+ addi a6, a3, (PAGE_KERNEL | _PAGE_HW_WRITE)
+ mov a4, a2
wdtlb a6, a2
dsync
/* We need to invalidate the temporary idtlb entry, if any. */
-1: addi a2, a2, -PAGE_SIZE
- idtlb a2
+1: idtlb a4
dsync
retw
-ENDPROC(clear_user_page)
+ENDPROC(clear_page_alias)
/*
- * copy_page_user (void *to, void *from, unsigned long vaddr, struct page *page)
- * a2 a3 a4 a5
+ * copy_page_alias(void *to, void *from,
+ * a2 a3
+ * unsigned long to_paddr, unsigned long from_paddr)
+ * a4 a5
*/
-ENTRY(copy_user_page)
+ENTRY(copy_page_alias)
entry a1, 32
- /* Mark page dirty and determine alias for destination. */
-
- movi a8, (1 << PG_ARCH_1)
- l32i a9, a5, PAGE_FLAGS
- xor a6, a2, a4
- xor a7, a3, a4
- extui a4, a4, PAGE_SHIFT, DCACHE_ALIAS_ORDER
- extui a6, a6, PAGE_SHIFT, DCACHE_ALIAS_ORDER
- extui a7, a7, PAGE_SHIFT, DCACHE_ALIAS_ORDER
- or a9, a9, a8
- slli a4, a4, PAGE_SHIFT
- s32i a9, a5, PAGE_FLAGS
- movi a5, ((PAGE_KERNEL | _PAGE_HW_WRITE) - PAGE_OFFSET) & 0xffffffff
-
- beqz a6, 1f
-
- /* Invalidate dcache */
-
- mov a10, a2
- call8 __invalidate_dcache_page
+ /* Skip setting up a temporary DTLB for destination if not aliased. */
- /* Setup a temporary DTLB with a matching color. */
+ movi a6, 0
+ movi a7, 0
+ beqz a4, 1f
- movi a8, TLBTEMP_BASE_1 # base
- add a6, a2, a5 # ppn
- add a2, a8, a4 # add 'color'
+ /* Setup a temporary DTLB for destination. */
+ addi a6, a4, (PAGE_KERNEL | _PAGE_HW_WRITE)
wdtlb a6, a2
dsync
- /* Skip setting up a temporary DTLB for destination if not aliased. */
+ /* Skip setting up a temporary DTLB for source if not aliased. */
-1: beqz a7, 1f
+1: beqz a5, 1f
- /* Setup a temporary DTLB with a matching color. */
+ /* Setup a temporary DTLB for source. */
- movi a8, TLBTEMP_BASE_2 # base
- add a7, a3, a5 # ppn
- add a3, a8, a4
+ addi a7, a5, PAGE_KERNEL
addi a8, a3, 1 # way1
wdtlb a7, a8
retw
-ENDPROC(copy_user_page)
+ENDPROC(copy_page_alias)
#endif
retw
ENDPROC(__flush_invalidate_dcache_page_alias)
+
+/*
+ * void __invalidate_dcache_page_alias (addr, phys)
+ * a2 a3
+ */
+
+ENTRY(__invalidate_dcache_page_alias)
+
+ entry sp, 16
+
+ movi a7, 0 # required for exception handler
+ addi a6, a3, (PAGE_KERNEL | _PAGE_HW_WRITE)
+ mov a4, a2
+ wdtlb a6, a2
+ dsync
+
+ ___invalidate_dcache_page a2 a3
+
+ idtlb a4
+ dsync
+
+ retw
+
+ENDPROC(__invalidate_dcache_page_alias)
#endif
ENTRY(__tlbtemp_mapping_itlb)
#include <asm/io.h>
#if defined(CONFIG_HIGHMEM)
-static void * __init init_pmd(unsigned long vaddr)
+static void * __init init_pmd(unsigned long vaddr, unsigned long n_pages)
{
pgd_t *pgd = pgd_offset_k(vaddr);
pmd_t *pmd = pmd_offset(pgd, vaddr);
+ pte_t *pte;
+ unsigned long i;
- if (pmd_none(*pmd)) {
- unsigned i;
- pte_t *pte = alloc_bootmem_low_pages(PAGE_SIZE);
+ n_pages = ALIGN(n_pages, PTRS_PER_PTE);
- for (i = 0; i < 1024; i++)
- pte_clear(NULL, 0, pte + i);
+ pr_debug("%s: vaddr: 0x%08lx, n_pages: %ld\n",
+ __func__, vaddr, n_pages);
- set_pmd(pmd, __pmd(((unsigned long)pte) & PAGE_MASK));
- BUG_ON(pte != pte_offset_kernel(pmd, 0));
- pr_debug("%s: vaddr: 0x%08lx, pmd: 0x%p, pte: 0x%p\n",
- __func__, vaddr, pmd, pte);
- return pte;
- } else {
- return pte_offset_kernel(pmd, 0);
+ pte = alloc_bootmem_low_pages(n_pages * sizeof(pte_t));
+
+ for (i = 0; i < n_pages; ++i)
+ pte_clear(NULL, 0, pte + i);
+
+ for (i = 0; i < n_pages; i += PTRS_PER_PTE, ++pmd) {
+ pte_t *cur_pte = pte + i;
+
+ BUG_ON(!pmd_none(*pmd));
+ set_pmd(pmd, __pmd(((unsigned long)cur_pte) & PAGE_MASK));
+ BUG_ON(cur_pte != pte_offset_kernel(pmd, 0));
+ pr_debug("%s: pmd: 0x%p, pte: 0x%p\n",
+ __func__, pmd, cur_pte);
}
+ return pte;
}
static void __init fixedrange_init(void)
{
- BUILD_BUG_ON(FIXADDR_SIZE > PMD_SIZE);
- init_pmd(__fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK);
+ init_pmd(__fix_to_virt(0), __end_of_fixed_addresses);
}
#endif
memset(swapper_pg_dir, 0, PAGE_SIZE);
#ifdef CONFIG_HIGHMEM
fixedrange_init();
- pkmap_page_table = init_pmd(PKMAP_BASE);
+ pkmap_page_table = init_pmd(PKMAP_BASE, LAST_PKMAP);
kmap_init();
#endif
}
*/
if (error) {
bio->bi_end_io = bip->bip_end_io;
- bio_endio(bio, error);
+ bio_endio_nodec(bio, error);
return;
}
rq->__sector = (sector_t) -1;
rq->bio = rq->biotail = NULL;
memset(rq->__cmd, 0, sizeof(rq->__cmd));
- rq->cmd = rq->__cmd;
}
EXPORT_SYMBOL(blk_rq_set_block_pc);
#include "blk.h"
static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
- struct bio *bio)
+ struct bio *bio,
+ bool no_sg_merge)
{
struct bio_vec bv, bvprv = { NULL };
- int cluster, high, highprv = 1, no_sg_merge;
+ int cluster, high, highprv = 1;
unsigned int seg_size, nr_phys_segs;
struct bio *fbio, *bbio;
struct bvec_iter iter;
cluster = blk_queue_cluster(q);
seg_size = 0;
nr_phys_segs = 0;
- no_sg_merge = test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags);
high = 0;
for_each_bio(bio) {
bio_for_each_segment(bv, bio, iter) {
void blk_recalc_rq_segments(struct request *rq)
{
- rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio);
+ bool no_sg_merge = !!test_bit(QUEUE_FLAG_NO_SG_MERGE,
+ &rq->q->queue_flags);
+
+ rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio,
+ no_sg_merge);
}
void blk_recount_segments(struct request_queue *q, struct bio *bio)
{
- if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags))
+ if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags) &&
+ bio->bi_vcnt < queue_max_segments(q))
bio->bi_phys_segments = bio->bi_vcnt;
else {
struct bio *nxt = bio->bi_next;
bio->bi_next = NULL;
- bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio);
+ bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, false);
bio->bi_next = nxt;
}
*/
void blk_mq_freeze_queue(struct request_queue *q)
{
+ bool freeze;
+
spin_lock_irq(q->queue_lock);
- q->mq_freeze_depth++;
+ freeze = !q->mq_freeze_depth++;
spin_unlock_irq(q->queue_lock);
- percpu_ref_kill(&q->mq_usage_counter);
- blk_mq_run_queues(q, false);
+ if (freeze) {
+ percpu_ref_kill(&q->mq_usage_counter);
+ blk_mq_run_queues(q, false);
+ }
wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->mq_usage_counter));
}
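The `freeze = !q->mq_freeze_depth++` form is the usual act-only-on-first-transition idiom: with post-increment, `freeze` is true exactly on the 0 -> 1 transition, so nested freezes do not kill the percpu ref twice. A minimal sketch:

    int depth = 0;
    bool first = !depth++;   /* true:  0 -> 1, do the one-time work */
    bool again = !depth++;   /* false: 1 -> 2, already frozen */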
static void blk_mq_unfreeze_queue(struct request_queue *q)
{
- bool wake = false;
+ bool wake;
spin_lock_irq(q->queue_lock);
wake = !--q->mq_freeze_depth;
/* tag was already set */
rq->errors = 0;
+ rq->cmd = rq->__cmd;
+
rq->extra_len = 0;
rq->sense_len = 0;
rq->resid_len = 0;
blk_account_io_start(rq, 1);
}
+static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
+{
+ return (hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
+ !blk_queue_nomerges(hctx->queue);
+}
+
static inline bool blk_mq_merge_queue_io(struct blk_mq_hw_ctx *hctx,
struct blk_mq_ctx *ctx,
struct request *rq, struct bio *bio)
{
- struct request_queue *q = hctx->queue;
-
- if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE)) {
+ if (!hctx_allow_merges(hctx)) {
blk_mq_bio_to_request(rq, bio);
spin_lock(&ctx->lock);
insert_rq:
spin_unlock(&ctx->lock);
return false;
} else {
+ struct request_queue *q = hctx->queue;
+
spin_lock(&ctx->lock);
if (!blk_mq_attempt_merge(q, ctx, bio)) {
blk_mq_bio_to_request(rq, bio);
continue;
set->ops->exit_request(set->driver_data, tags->rqs[i],
hctx_idx, i);
+ tags->rqs[i] = NULL;
}
}
INIT_LIST_HEAD(&tags->page_list);
- tags->rqs = kmalloc_node(set->queue_depth * sizeof(struct request *),
- GFP_KERNEL, set->numa_node);
+ tags->rqs = kzalloc_node(set->queue_depth * sizeof(struct request *),
+ GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY,
+ set->numa_node);
if (!tags->rqs) {
blk_mq_free_tags(tags);
return NULL;
this_order--;
do {
- page = alloc_pages_node(set->numa_node, GFP_KERNEL,
- this_order);
+ page = alloc_pages_node(set->numa_node,
+ GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY,
+ this_order);
if (page)
break;
if (!this_order--)
if (set->ops->init_request) {
if (set->ops->init_request(set->driver_data,
tags->rqs[i], hctx_idx, i,
- set->numa_node))
+ set->numa_node)) {
+ tags->rqs[i] = NULL;
goto fail;
+ }
}
p += rq_size;
return tags;
fail:
- pr_warn("%s: failed to allocate requests\n", __func__);
blk_mq_free_rq_map(set, tags, hctx_idx);
return NULL;
}
hctx->tags = set->tags[i];
/*
- * Allocate space for all possible cpus to avoid allocation in
+ * Allocate space for all possible cpus to avoid allocation at
* runtime
*/
hctx->ctxs = kmalloc_node(nr_cpu_ids * sizeof(void *),
queue_for_each_hw_ctx(q, hctx, i) {
/*
- * If not software queues are mapped to this hardware queue,
- * disable it and free the request entries
+ * If no software queues are mapped to this hardware queue,
+ * disable it and free the request entries.
*/
if (!hctx->nr_ctx) {
struct blk_mq_tag_set *set = q->tag_set;
{
struct blk_mq_tag_set *set = q->tag_set;
- blk_mq_freeze_queue(q);
-
mutex_lock(&set->tag_list_lock);
list_del_init(&q->tag_set_list);
blk_mq_update_tag_set_depth(set);
mutex_unlock(&set->tag_list_lock);
-
- blk_mq_unfreeze_queue(q);
}
static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
return NOTIFY_OK;
}
+static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
+{
+ int i;
+
+ for (i = 0; i < set->nr_hw_queues; i++) {
+ set->tags[i] = blk_mq_init_rq_map(set, i);
+ if (!set->tags[i])
+ goto out_unwind;
+ }
+
+ return 0;
+
+out_unwind:
+ while (--i >= 0)
+ blk_mq_free_rq_map(set, set->tags[i], i);
+
+ set->tags = NULL;
+ return -ENOMEM;
+}
+
+/*
+ * Allocate the request maps associated with this tag_set. Note that this
+ * may reduce the depth asked for, if memory is tight. set->queue_depth
+ * will be updated to reflect the allocated depth.
+ */
+static int blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
+{
+ unsigned int depth;
+ int err;
+
+ depth = set->queue_depth;
+ do {
+ err = __blk_mq_alloc_rq_maps(set);
+ if (!err)
+ break;
+
+ set->queue_depth >>= 1;
+ if (set->queue_depth < set->reserved_tags + BLK_MQ_TAG_MIN) {
+ err = -ENOMEM;
+ break;
+ }
+ } while (set->queue_depth);
+
+ if (!set->queue_depth || err) {
+ pr_err("blk-mq: failed to allocate request map\n");
+ return -ENOMEM;
+ }
+
+ if (depth != set->queue_depth)
+ pr_info("blk-mq: reduced tag depth (%u -> %u)\n",
+ depth, set->queue_depth);
+
+ return 0;
+}
+
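As an example of the fallback (try_alloc() and its arguments are hypothetical stand-ins): a set asking for a depth of 256 that fails twice ends up at 64, and the loop gives up once the depth would no longer cover the reserved tags plus BLK_MQ_TAG_MIN.

    /* hedged sketch of the halving loop above */
    static int alloc_with_fallback(unsigned int depth, unsigned int floor)
    {
            while (try_alloc(depth) != 0) {  /* failure -> halve and retry */
                    depth >>= 1;             /* e.g. 256 -> 128 -> 64 */
                    if (!depth || depth < floor)
                            return -ENOMEM;  /* nothing allocated */
            }
            return 0;
    }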
/*
* Alloc a tag set to be associated with one or more request queues.
* May fail with EINVAL for various error conditions. May adjust the
*/
int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
{
- int i;
-
if (!set->nr_hw_queues)
return -EINVAL;
if (!set->queue_depth)
sizeof(struct blk_mq_tags *),
GFP_KERNEL, set->numa_node);
if (!set->tags)
- goto out;
+ return -ENOMEM;
- for (i = 0; i < set->nr_hw_queues; i++) {
- set->tags[i] = blk_mq_init_rq_map(set, i);
- if (!set->tags[i])
- goto out_unwind;
- }
+ if (blk_mq_alloc_rq_maps(set))
+ goto enomem;
mutex_init(&set->tag_list_lock);
INIT_LIST_HEAD(&set->tag_list);
return 0;
-
-out_unwind:
- while (--i >= 0)
- blk_mq_free_rq_map(set, set->tags[i], i);
-out:
+enomem:
+ kfree(set->tags);
+ set->tags = NULL;
return -ENOMEM;
}
EXPORT_SYMBOL(blk_mq_alloc_tag_set);
}
kfree(set->tags);
+ set->tags = NULL;
}
EXPORT_SYMBOL(blk_mq_free_tag_set);
* Initialization must be complete by now. Finish the initial
* bypass from queue allocation.
*/
- queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q);
- blk_queue_bypass_end(q);
+ if (!blk_queue_init_done(q)) {
+ queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q);
+ blk_queue_bypass_end(q);
+ }
ret = blk_trace_init_sysfs(dev);
if (ret)
rb_insert_color(&cfqg->rb_node, &st->rb);
}
+/*
+ * This has to be called only on activation of cfqg
+ */
static void
cfq_update_group_weight(struct cfq_group *cfqg)
{
- BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node));
-
if (cfqg->new_weight) {
cfqg->weight = cfqg->new_weight;
cfqg->new_weight = 0;
}
+}
+
+static void
+cfq_update_group_leaf_weight(struct cfq_group *cfqg)
+{
+ BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node));
if (cfqg->new_leaf_weight) {
cfqg->leaf_weight = cfqg->new_leaf_weight;
/* add to the service tree */
BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node));
- cfq_update_group_weight(cfqg);
+ /*
+ * Update leaf_weight. We cannot update weight at this point
+ * because cfqg might already have been activated and is
+ * contributing its current weight to the parent's children_weight.
+ */
+ cfq_update_group_leaf_weight(cfqg);
__cfq_group_service_tree_add(st, cfqg);
/*
*/
while ((parent = cfqg_parent(pos))) {
if (propagate) {
+ cfq_update_group_weight(pos);
propagate = !parent->nr_active++;
parent->children_weight += pos->weight;
}
/* for extended dynamic devt allocation, currently only one major is used */
#define NR_EXT_DEVT (1 << MINORBITS)
-/* For extended devt allocation. ext_devt_mutex prevents look up
+/* For extended devt allocation. ext_devt_lock prevents lookup
* results from going away underneath its user.
*/
-static DEFINE_MUTEX(ext_devt_mutex);
+static DEFINE_SPINLOCK(ext_devt_lock);
static DEFINE_IDR(ext_devt_idr);
static struct device_type disk_type;
}
/* allocate ext devt */
- mutex_lock(&ext_devt_mutex);
- idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_KERNEL);
- mutex_unlock(&ext_devt_mutex);
+ idr_preload(GFP_KERNEL);
+
+ spin_lock(&ext_devt_lock);
+ idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT);
+ spin_unlock(&ext_devt_lock);
+
+ idr_preload_end();
if (idx < 0)
return idx == -ENOSPC ? -EBUSY : idx;
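The mutex-to-spinlock change is what forces the preload dance: GFP_KERNEL may sleep, which is illegal under a spinlock, so the sleeping allocation happens up front and the locked idr_alloc() runs with GFP_NOWAIT. The pattern in isolation:

    idr_preload(GFP_KERNEL);                      /* may sleep; fills cache */
    spin_lock(&lock);
    id = idr_alloc(&idr, ptr, 0, 0, GFP_NOWAIT);  /* atomic, uses the cache */
    spin_unlock(&lock);
    idr_preload_end();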
return;
if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
- mutex_lock(&ext_devt_mutex);
+ spin_lock(&ext_devt_lock);
idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
- mutex_unlock(&ext_devt_mutex);
+ spin_unlock(&ext_devt_lock);
}
}
sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
device_del(disk_to_dev(disk));
- blk_free_devt(disk_to_dev(disk)->devt);
}
EXPORT_SYMBOL(del_gendisk);
} else {
struct hd_struct *part;
- mutex_lock(&ext_devt_mutex);
+ spin_lock(&ext_devt_lock);
part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
if (part && get_disk(part_to_disk(part))) {
*partno = part->partno;
disk = part_to_disk(part);
}
- mutex_unlock(&ext_devt_mutex);
+ spin_unlock(&ext_devt_lock);
}
return disk;
{
struct gendisk *disk = dev_to_disk(dev);
+ blk_free_devt(dev->devt);
disk_release_events(disk);
kfree(disk->random);
disk_replace_part_tbl(disk, NULL);
static void part_release(struct device *dev)
{
struct hd_struct *p = dev_to_part(dev);
+ blk_free_devt(dev->devt);
free_part_stats(p);
free_part_info(p);
kfree(p);
rcu_assign_pointer(ptbl->last_lookup, NULL);
kobject_put(part->holder_dir);
device_del(part_to_dev(part));
- blk_free_devt(part_devt(part));
hd_struct_put(part);
}
r = blk_rq_unmap_user(bio);
if (!ret)
ret = r;
- blk_put_request(rq);
return ret;
}
if (hdr->interface_id != 'S')
return -EINVAL;
- if (hdr->cmd_len > BLK_MAX_CDB)
- return -EINVAL;
if (hdr->dxfer_len > (queue_max_hw_sectors(q) << 9))
return -EIO;
if (hdr->flags & SG_FLAG_Q_AT_HEAD)
at_head = 1;
+ ret = -ENOMEM;
rq = blk_get_request(q, writing ? WRITE : READ, GFP_KERNEL);
if (!rq)
- return -ENOMEM;
+ goto out;
blk_rq_set_block_pc(rq);
- if (blk_fill_sghdr_rq(q, rq, hdr, mode)) {
- blk_put_request(rq);
- return -EFAULT;
+ if (hdr->cmd_len > BLK_MAX_CDB) {
+ rq->cmd = kzalloc(hdr->cmd_len, GFP_KERNEL);
+ if (!rq->cmd)
+ goto out_put_request;
}
+ ret = -EFAULT;
+ if (blk_fill_sghdr_rq(q, rq, hdr, mode))
+ goto out_free_cdb;
+
+ ret = 0;
if (hdr->iovec_count) {
size_t iov_data_len;
struct iovec *iov = NULL;
0, NULL, &iov);
if (ret < 0) {
kfree(iov);
- goto out;
+ goto out_free_cdb;
}
iov_data_len = ret;
GFP_KERNEL);
if (ret)
- goto out;
+ goto out_free_cdb;
bio = rq->bio;
memset(sense, 0, sizeof(sense));
hdr->duration = jiffies_to_msecs(jiffies - start_time);
- return blk_complete_sghdr_rq(rq, hdr, bio);
-out:
+ ret = blk_complete_sghdr_rq(rq, hdr, bio);
+
+out_free_cdb:
+ if (rq->cmd != rq->__cmd)
+ kfree(rq->cmd);
+out_put_request:
blk_put_request(rq);
+out:
return ret;
}
}
rq = blk_get_request(q, in_len ? WRITE : READ, __GFP_WAIT);
+ if (!rq) {
+ err = -ENOMEM;
+ goto error;
+ }
+ blk_rq_set_block_pc(rq);
cmdlen = COMMAND_SIZE(opcode);
memset(sense, 0, sizeof(sense));
rq->sense = sense;
rq->sense_len = 0;
- blk_rq_set_block_pc(rq);
blk_execute_rq(q, disk, rq, 0);
error:
kfree(buffer);
- blk_put_request(rq);
+ if (rq)
+ blk_put_request(rq);
return err;
}
EXPORT_SYMBOL_GPL(sg_scsi_ioctl);
struct asymmetric_key_subtype public_key_subtype = {
.owner = THIS_MODULE,
.name = "public_key",
+ .name_len = sizeof("public_key") - 1,
.describe = public_key_describe,
.destroy = public_key_destroy,
.verify_signature = public_key_verify_signature_2,
{
struct win_certificate wrapper;
const u8 *pkcs7;
+ unsigned len;
if (ctx->sig_len < sizeof(wrapper)) {
pr_debug("Signature wrapper too short\n");
return -ENOTSUPP;
}
- /* Looks like actual pkcs signature length is in wrapper->length.
- * size obtained from data dir entries lists the total size of
- * certificate table which is also aligned to octawrod boundary.
- *
- * So set signature length field appropriately.
+ /* It looks like the pkcs signature length in wrapper->length and the
+	 * size obtained from the data dir entries, which lists the total size
+	 * of the certificate table, are both aligned to an octaword boundary, so
+ * we may have to deal with some padding.
*/
ctx->sig_len = wrapper.length;
ctx->sig_offset += sizeof(wrapper);
ctx->sig_len -= sizeof(wrapper);
- if (ctx->sig_len == 0) {
+ if (ctx->sig_len < 4) {
pr_debug("Signature data missing\n");
return -EKEYREJECTED;
}
- /* What's left should a PKCS#7 cert */
+ /* What's left should be a PKCS#7 cert */
pkcs7 = pebuf + ctx->sig_offset;
- if (pkcs7[0] == (ASN1_CONS_BIT | ASN1_SEQ)) {
- if (pkcs7[1] == 0x82 &&
- pkcs7[2] == (((ctx->sig_len - 4) >> 8) & 0xff) &&
- pkcs7[3] == ((ctx->sig_len - 4) & 0xff))
- return 0;
- if (pkcs7[1] == 0x80)
- return 0;
- if (pkcs7[1] > 0x82)
- return -EMSGSIZE;
+ if (pkcs7[0] != (ASN1_CONS_BIT | ASN1_SEQ))
+ goto not_pkcs7;
+
+ switch (pkcs7[1]) {
+ case 0 ... 0x7f:
+ len = pkcs7[1] + 2;
+ goto check_len;
+ case ASN1_INDEFINITE_LENGTH:
+ return 0;
+ case 0x81:
+ len = pkcs7[2] + 3;
+ goto check_len;
+ case 0x82:
+ len = ((pkcs7[2] << 8) | pkcs7[3]) + 4;
+ goto check_len;
+ case 0x83 ... 0xff:
+ return -EMSGSIZE;
+ default:
+ goto not_pkcs7;
}
+check_len:
+ if (len <= ctx->sig_len) {
+ /* There may be padding */
+ ctx->sig_len = len;
+ return 0;
+ }
+not_pkcs7:
pr_debug("Signature data not PKCS#7\n");
return -ELIBBAD;
}
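The switch over pkcs7[1] is plain DER length decoding. A self-contained sketch of the same forms, including the >2-octet lengths the parser rejects (illustrative helper, not kernel code):

    /* returns 0 and sets *total to the full TLV size, or -1 if unsupported */
    static int der_total_length(const unsigned char *p, unsigned int *total)
    {
            if (p[1] < 0x80)                        /* short form */
                    *total = p[1] + 2;
            else if (p[1] == 0x80)                  /* indefinite length */
                    *total = 0;                     /* caller scans for EOC */
            else if (p[1] == 0x81)                  /* long form, 1 octet */
                    *total = p[2] + 3;
            else if (p[1] == 0x82)                  /* long form, 2 octets */
                    *total = ((p[2] << 8) | p[3]) + 4;
            else                                    /* 0x83..0xff: too large */
                    return -1;
            return 0;
    }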
void *handler_context, void *region_context)
{
int i;
- u8 *value = (u8 *)&value64;
+ u8 *value = (u8 *)value64;
if (address > 0xff || !value64)
return AE_BAD_PARAMETER;
.setup = lpss_i2c_setup,
};
+static struct lpss_shared_clock bsw_pwm_clock = {
+ .name = "pwm_clk",
+ .rate = 19200000,
+};
+
+static struct lpss_device_desc bsw_pwm_dev_desc = {
+ .clk_required = true,
+ .save_ctx = true,
+ .shared_clock = &bsw_pwm_clock,
+};
+
#else
#define LPSS_ADDR(desc) (0UL)
{ "INT33B2", },
{ "INT33FC", },
+ /* Braswell LPSS devices */
+ { "80862288", LPSS_ADDR(bsw_pwm_dev_desc) },
+ { "8086228A", LPSS_ADDR(byt_uart_dev_desc) },
+ { "8086228E", LPSS_ADDR(byt_spi_dev_desc) },
+ { "808622C1", LPSS_ADDR(byt_i2c_dev_desc) },
+
{ "INT3430", LPSS_ADDR(lpt_dev_desc) },
{ "INT3431", LPSS_ADDR(lpt_dev_desc) },
{ "INT3432", LPSS_ADDR(lpt_i2c_dev_desc) },
return acpi_dev_suspend_late(dev);
}
-static int acpi_lpss_restore_early(struct device *dev)
+static int acpi_lpss_resume_early(struct device *dev)
{
int ret = acpi_dev_resume_early(dev);
static struct dev_pm_domain acpi_lpss_pm_domain = {
.ops = {
#ifdef CONFIG_PM_SLEEP
- .suspend_late = acpi_lpss_suspend_late,
- .restore_early = acpi_lpss_restore_early,
.prepare = acpi_subsys_prepare,
.complete = acpi_subsys_complete,
.suspend = acpi_subsys_suspend,
- .resume_early = acpi_subsys_resume_early,
+ .suspend_late = acpi_lpss_suspend_late,
+ .resume_early = acpi_lpss_resume_early,
.freeze = acpi_subsys_freeze,
.poweroff = acpi_subsys_suspend,
- .poweroff_late = acpi_subsys_suspend_late,
+ .poweroff_late = acpi_lpss_suspend_late,
+ .restore_early = acpi_lpss_resume_early,
#endif
#ifdef CONFIG_PM_RUNTIME
.runtime_suspend = acpi_lpss_runtime_suspend,
acpi_ns_check_package_list(info, package, elements, count);
break;
+ case ACPI_PTYPE2_UUID_PAIR:
+
+ /* The package must contain pairs of (UUID + type) */
+
+ if (count & 1) {
+ expected_count = count + 1;
+ goto package_too_small;
+ }
+
+ while (count > 0) {
+ status = acpi_ns_check_object_type(info, elements,
+ package->ret_info.
+ object_type1, 0);
+ if (ACPI_FAILURE(status)) {
+ return (status);
+ }
+
+ /* Validate length of the UUID buffer */
+
+ if ((*elements)->buffer.length != 16) {
+ ACPI_WARN_PREDEFINED((AE_INFO,
+ info->full_pathname,
+ info->node_flags,
+ "Invalid length for UUID Buffer"));
+ return (AE_AML_OPERAND_VALUE);
+ }
+
+ status = acpi_ns_check_object_type(info, elements + 1,
+ package->ret_info.
+ object_type2, 0);
+ if (ACPI_FAILURE(status)) {
+ return (status);
+ }
+
+ elements += 2;
+ count -= 2;
+ }
+ break;
+
default:
/* Should not get here if predefined info table is correct */
" invalid.\n");
}
- /*
- * When fully charged, some batteries wrongly report
- * capacity_now = design_capacity instead of = full_charge_capacity
- */
- if (battery->capacity_now > battery->full_charge_capacity
- && battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) {
- if (battery->capacity_now != battery->design_capacity)
- printk_once(KERN_WARNING FW_BUG
- "battery: reported current charge level (%d) "
- "is higher than reported maximum charge level (%d).\n",
- battery->capacity_now, battery->full_charge_capacity);
- battery->capacity_now = battery->full_charge_capacity;
- }
-
if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags)
&& battery->capacity_now >= 0 && battery->capacity_now <= 100)
battery->capacity_now = (battery->capacity_now *
t->rdata[t->ri++] = acpi_ec_read_data(ec);
if (t->rlen == t->ri) {
t->flags |= ACPI_EC_COMMAND_COMPLETE;
+ if (t->command == ACPI_EC_COMMAND_QUERY)
+ pr_debug("hardware QR_EC completion\n");
wakeup = true;
}
} else
}
return wakeup;
} else {
- if ((status & ACPI_EC_FLAG_IBF) == 0) {
+ /*
+	 * Some firmware refuses to respond to QR_EC when SCI_EVT is
+	 * not set; in that case, we complete the QR_EC without
+	 * issuing it to the firmware.
+ * https://bugzilla.kernel.org/show_bug.cgi?id=86211
+ */
+ if (!(status & ACPI_EC_FLAG_SCI) &&
+ (t->command == ACPI_EC_COMMAND_QUERY)) {
+ t->flags |= ACPI_EC_COMMAND_POLL;
+ t->rdata[t->ri++] = 0x00;
+ t->flags |= ACPI_EC_COMMAND_COMPLETE;
+ pr_debug("software QR_EC completion\n");
+ wakeup = true;
+ } else if ((status & ACPI_EC_FLAG_IBF) == 0) {
acpi_ec_write_cmd(ec, t->command);
t->flags |= ACPI_EC_COMMAND_POLL;
} else
/* following two actions should be kept atomic */
ec->curr = t;
start_transaction(ec);
- if (ec->curr->command == ACPI_EC_COMMAND_QUERY)
- clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
spin_unlock_irqrestore(&ec->lock, tmp);
ret = ec_poll(ec);
spin_lock_irqsave(&ec->lock, tmp);
+ if (ec->curr->command == ACPI_EC_COMMAND_QUERY)
+ clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
ec->curr = NULL;
spin_unlock_irqrestore(&ec->lock, tmp);
return ret;
DMI_MATCH(DMI_SYS_VENDOR, "Quanta"),
DMI_MATCH(DMI_PRODUCT_NAME, "TW9/SW9"),}, NULL},
{
+ ec_flag_msi, "Clevo W350etq", {
+ DMI_MATCH(DMI_SYS_VENDOR, "CLEVO CO."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "W35_37ET"),}, NULL},
+ {
ec_validate_ecdt, "ASUS hardware", {
DMI_MATCH(DMI_BIOS_VENDOR, "ASUS") }, NULL},
{
/* Keep IOAPIC pin configuration when suspending */
if (dev->dev.power.is_prepared)
return;
+#ifdef CONFIG_PM_RUNTIME
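+	/* Likewise, keep the IRQ and pin configured while the device is
+	 * merely runtime suspending; it will be needed again on resume. */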
+ if (dev->dev.power.runtime_status == RPM_SUSPENDING)
+ return;
+#endif
entry = acpi_pci_irq_lookup(dev, pin);
if (!entry)
if (pr->id == 0 && cpuidle_get_driver() == &acpi_idle_driver) {
- cpuidle_pause_and_lock();
/* Protect against cpu-hotplug */
get_online_cpus();
+ cpuidle_pause_and_lock();
/* Disable all cpuidle devices */
for_each_online_cpu(cpu) {
cpuidle_enable_device(dev);
}
}
- put_online_cpus();
cpuidle_resume_and_unlock();
+ put_online_cpus();
}
return 0;
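
The hunk above corrects the lock nesting: the CPU-hotplug lock must be
taken before the cpuidle lock and released after it. A minimal sketch of
the intended ordering, using the same kernel APIs:

	/*
	 * Block CPU hotplug first so the set of online CPUs cannot
	 * change while cpuidle devices are toggled, then release the
	 * locks in the reverse order they were taken.
	 */
	get_online_cpus();
	cpuidle_pause_and_lock();
	for_each_online_cpu(cpu) {
		/* ... enable or disable the per-CPU cpuidle device ... */
	}
	cpuidle_resume_and_unlock();
	put_online_cpus();
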
acpi_device_sun_show(struct device *dev, struct device_attribute *attr,
char *buf) {
struct acpi_device *acpi_dev = to_acpi_device(dev);
+ acpi_status status;
+ unsigned long long sun;
+
+ status = acpi_evaluate_integer(acpi_dev->handle, "_SUN", NULL, &sun);
+ if (ACPI_FAILURE(status))
+ return -ENODEV;
- return sprintf(buf, "%lu\n", acpi_dev->pnp.sun);
+ return sprintf(buf, "%llu\n", sun);
}
static DEVICE_ATTR(sun, 0444, acpi_device_sun_show, NULL);
{
struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
acpi_status status;
- unsigned long long sun;
int result = 0;
/*
if (dev->pnp.unique_id)
result = device_create_file(&dev->dev, &dev_attr_uid);
- status = acpi_evaluate_integer(dev->handle, "_SUN", NULL, &sun);
- if (ACPI_SUCCESS(status)) {
- dev->pnp.sun = (unsigned long)sun;
+ if (acpi_has_method(dev->handle, "_SUN")) {
result = device_create_file(&dev->dev, &dev_attr_sun);
if (result)
goto end;
- } else {
- dev->pnp.sun = (unsigned long)-1;
}
if (acpi_has_method(dev->handle, "_STA")) {
device->driver->ops.notify(device, event);
}
-static acpi_status acpi_device_notify_fixed(void *data)
+static void acpi_device_notify_fixed(void *data)
{
struct acpi_device *device = data;
/* Fixed hardware devices have no handles */
acpi_device_notify(NULL, ACPI_FIXED_HARDWARE_EVENT, device);
+}
+
+static acpi_status acpi_device_fixed_event(void *data)
+{
+ acpi_os_execute(OSL_NOTIFY_HANDLER, acpi_device_notify_fixed, data);
return AE_OK;
}
if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON)
status =
acpi_install_fixed_event_handler(ACPI_EVENT_POWER_BUTTON,
- acpi_device_notify_fixed,
+ acpi_device_fixed_event,
device);
else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON)
status =
acpi_install_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON,
- acpi_device_notify_fixed,
+ acpi_device_fixed_event,
device);
else
status = acpi_install_notify_handler(device->handle,
{
if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON)
acpi_remove_fixed_event_handler(ACPI_EVENT_POWER_BUTTON,
- acpi_device_notify_fixed);
+ acpi_device_fixed_event);
else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON)
acpi_remove_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON,
- acpi_device_notify_fixed);
+ acpi_device_fixed_event);
else
acpi_remove_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
acpi_device_notify);
struct acpi_driver *acpi_drv = to_acpi_driver(dev->driver);
int ret;
- if (acpi_dev->handler)
+ if (acpi_dev->handler && !acpi_is_pnp_device(acpi_dev))
return -EINVAL;
if (!acpi_drv->ops.add)
* For Windows 8 systems: used to decide if video module
* should skip registering backlight interface of its own.
*/
-static int use_native_backlight_param = 1;
+static int use_native_backlight_param = -1;
module_param_named(use_native_backlight, use_native_backlight_param, int, 0444);
-static bool use_native_backlight_dmi = false;
+static bool use_native_backlight_dmi = true;
static int register_count;
static struct mutex video_list_lock;
return 0;
}
+static int __init video_disable_native_backlight(const struct dmi_system_id *d)
+{
+ use_native_backlight_dmi = false;
+ return 0;
+}
+
static struct dmi_system_id video_dmi_table[] __initdata = {
/*
* Broken _BQC workaround http://bugzilla.kernel.org/show_bug.cgi?id=13121
DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 8780w"),
},
},
+
+ /*
+ * These models have a working acpi_video backlight control, and using
+ * native backlight causes a regression where backlight does not work
+ * when userspace is not handling brightness key events. Disable
+ * native_backlight on these to fix this:
+ * https://bugzilla.kernel.org/show_bug.cgi?id=81691
+ */
+ {
+ .callback = video_disable_native_backlight,
+ .ident = "ThinkPad T420",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T420"),
+ },
+ },
+ {
+ .callback = video_disable_native_backlight,
+ .ident = "ThinkPad T520",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T520"),
+ },
+ },
+
+ /* The native backlight controls do not work on some older machines */
+ {
+ /* https://bugs.freedesktop.org/show_bug.cgi?id=81515 */
+ .callback = video_disable_native_backlight,
+ .ident = "HP ENVY 15 Notebook",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "HP ENVY 15 Notebook PC"),
+ },
+ },
{}
};
{ PCI_VDEVICE(INTEL, 0x9c85), board_ahci }, /* Wildcat Point-LP RAID */
{ PCI_VDEVICE(INTEL, 0x9c87), board_ahci }, /* Wildcat Point-LP RAID */
{ PCI_VDEVICE(INTEL, 0x9c8f), board_ahci }, /* Wildcat Point-LP RAID */
+ { PCI_VDEVICE(INTEL, 0x8c82), board_ahci }, /* 9 Series AHCI */
+ { PCI_VDEVICE(INTEL, 0x8c83), board_ahci }, /* 9 Series AHCI */
+ { PCI_VDEVICE(INTEL, 0x8c84), board_ahci }, /* 9 Series RAID */
+ { PCI_VDEVICE(INTEL, 0x8c85), board_ahci }, /* 9 Series RAID */
+ { PCI_VDEVICE(INTEL, 0x8c86), board_ahci }, /* 9 Series RAID */
+ { PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series RAID */
+ { PCI_VDEVICE(INTEL, 0x8c8e), board_ahci }, /* 9 Series RAID */
+ { PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series RAID */
/* JMicron 360/1/3/5/6, match class to avoid IDE function */
{ PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x917a),
.driver_data = board_ahci_yes_fbs }, /* 88se9172 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9172),
+ .driver_data = board_ahci_yes_fbs }, /* 88se9182 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9182),
.driver_data = board_ahci_yes_fbs }, /* 88se9172 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9192),
.driver_data = board_ahci_yes_fbs }, /* 88se9172 on some Gigabyte */
else if (pdev->vendor == 0x1c44 && pdev->device == 0x8000)
ahci_pci_bar = AHCI_PCI_BAR_ENMOTUS;
+ /*
+	 * The JMicron 361/363 chips contain one SATA controller and one
+	 * PATA controller. To power on both controllers successfully,
+	 * they must be brought up one after the other; otherwise one of
+	 * them fails to power on. Disable the async suspend method for
+	 * these chips so that ordering is preserved.
+ */
+ if (pdev->vendor == PCI_VENDOR_ID_JMICRON &&
+ (pdev->device == PCI_DEVICE_ID_JMICRON_JMB363 ||
+ pdev->device == PCI_DEVICE_ID_JMICRON_JMB361))
+ device_disable_async_suspend(&pdev->dev);
+
/* acquire resources */
rc = pcim_enable_device(pdev);
if (rc)
*/
#include <linux/ahci_platform.h>
-#include <linux/reset.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
-#include <linux/tegra-powergate.h>
#include <linux/regulator/consumer.h>
+#include <linux/reset.h>
+
+#include <soc/tegra/fuse.h>
+#include <soc/tegra/pmc.h>
+
#include "ahci.h"
#define SATA_CONFIGURATION_0 0x180
/* Pad calibration */
- /* FIXME Always use calibration 0. Change this to read the calibration
- * fuse once the fuse driver has landed. */
- val = 0;
+ ret = tegra_fuse_readl(FUSE_SATA_CALIB, &val);
+ if (ret) {
+ dev_err(&tegra->pdev->dev,
+ "failed to read calibration fuse: %d\n", ret);
+ return ret;
+ }
calib = tegra124_pad_calibration[val & FUSE_SATA_CALIB_MASK];
#define CFG_MEM_RAM_SHUTDOWN 0x00000070
#define BLOCK_MEM_RDY 0x00000074
+/* Max retry for link down */
+#define MAX_LINK_DOWN_RETRY 3
+
struct xgene_ahci_context {
struct ahci_host_priv *hpriv;
struct device *dev;
return rc;
}
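+/*
+ * Check whether the controller's memory RAM has already been released
+ * from shutdown and flagged ready (e.g. by firmware); if so, probe can
+ * skip the clock and PHY initialization.
+ */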
+static bool xgene_ahci_is_memram_inited(struct xgene_ahci_context *ctx)
+{
+ void __iomem *diagcsr = ctx->csr_diag;
+
+ return (readl(diagcsr + CFG_MEM_RAM_SHUTDOWN) == 0 &&
+ readl(diagcsr + BLOCK_MEM_RDY) == 0xFFFFFFFF);
+}
+
/**
* xgene_ahci_read_id - Read ID data from the specified device
* @dev: device
* and Gen1 (1.5Gbps). Otherwise during long IO stress test, the PHY will
* report disparity error and etc. In addition, during COMRESET, there can
* be error reported in the register PORT_SCR_ERR. For SERR_DISPARITY and
- * SERR_10B_8B_ERR, the PHY receiver line must be reseted. The following
- * algorithm is followed to proper configure the hardware PHY during COMRESET:
+ * SERR_10B_8B_ERR, the PHY receiver line must be reset. Also, during
+ * long reboot-cycle regression tests, the PHY sometimes reports link
+ * down even though a device is present, because speed negotiation
+ * failed; the COMRESET must then be retried to bring the link up. The
+ * following algorithm is used to properly configure the hardware PHY
+ * during COMRESET:
*
* Alg Part 1:
* 1. Start the PHY at Gen3 speed (default setting)
* Alg Part 2:
* 1. On link up, if there are any SERR_DISPARITY and SERR_10B_8B_ERR error
* reported in the register PORT_SCR_ERR, then reset the PHY receiver line
- * 2. Go to Alg Part 3
+ * 2. Go to Alg Part 4
*
* Alg Part 3:
+ * 1. Check PORT_SCR_STAT to see whether a device was detected but PHY
+ *    communication could not be established; if so, and fewer than the
+ *    maximum of 3 link-down attempts have been made, go to Alg Part 1.
+ * 2. Go to Alg Part 4.
+ *
+ * Alg Part 4:
* 1. Clear any pending from register PORT_SCR_ERR.
*
* NOTE: For the initial version, we will NOT support Gen1/Gen2. In addition
u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG;
void __iomem *port_mmio = ahci_port_base(ap);
struct ata_taskfile tf;
+ int link_down_retry = 0;
int rc;
- u32 val;
-
- /* clear D2H reception area to properly wait for D2H FIS */
- ata_tf_init(link->device, &tf);
- tf.command = ATA_BUSY;
- ata_tf_to_fis(&tf, 0, 0, d2h_fis);
- rc = sata_link_hardreset(link, timing, deadline, online,
+ u32 val, sstatus;
+
+ do {
+ /* clear D2H reception area to properly wait for D2H FIS */
+ ata_tf_init(link->device, &tf);
+ tf.command = ATA_BUSY;
+ ata_tf_to_fis(&tf, 0, 0, d2h_fis);
+ rc = sata_link_hardreset(link, timing, deadline, online,
ahci_check_ready);
+ if (*online) {
+ val = readl(port_mmio + PORT_SCR_ERR);
+ if (val & (SERR_DISPARITY | SERR_10B_8B_ERR))
+ dev_warn(ctx->dev, "link has error\n");
+ break;
+ }
- val = readl(port_mmio + PORT_SCR_ERR);
- if (val & (SERR_DISPARITY | SERR_10B_8B_ERR))
- dev_warn(ctx->dev, "link has error\n");
+ sata_scr_read(link, SCR_STATUS, &sstatus);
+ } while (link_down_retry++ < MAX_LINK_DOWN_RETRY &&
+ (sstatus & 0xff) == 0x1);
/* clear all errors if any pending */
val = readl(port_mmio + PORT_SCR_ERR);
};
static const struct ata_port_info xgene_ahci_port_info = {
- .flags = AHCI_FLAG_COMMON | ATA_FLAG_NCQ,
+ .flags = AHCI_FLAG_COMMON,
.pio_mask = ATA_PIO4,
.udma_mask = ATA_UDMA6,
.port_ops = &xgene_ahci_ops,
return -ENODEV;
}
+ if (xgene_ahci_is_memram_inited(ctx)) {
+ dev_info(dev, "skip clock and PHY initialization\n");
+ goto skip_clk_phy;
+ }
+
/* Due to errata, HW requires full toggle transition */
rc = ahci_platform_enable_clks(hpriv);
if (rc)
/* Configure the host controller */
xgene_ahci_hw_init(hpriv);
-
- hpriv->flags = AHCI_HFLAG_NO_PMP | AHCI_HFLAG_YES_NCQ;
+skip_clk_phy:
+ hpriv->flags = AHCI_HFLAG_NO_PMP | AHCI_HFLAG_NO_NCQ;
rc = ahci_platform_init_host(pdev, hpriv, &xgene_ahci_port_info);
if (rc)
{ 0x8086, 0x0F21, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_byt },
/* SATA Controller IDE (Coleto Creek) */
{ 0x8086, 0x23a6, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
+ /* SATA Controller IDE (9 Series) */
+ { 0x8086, 0x8c88, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_snb },
+ /* SATA Controller IDE (9 Series) */
+ { 0x8086, 0x8c89, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_snb },
+ /* SATA Controller IDE (9 Series) */
+ { 0x8086, 0x8c80, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
+ /* SATA Controller IDE (9 Series) */
+ { 0x8086, 0x8c81, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
{ } /* terminate list */
};
{ "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
{ "Crucial_CT???M500SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
{ "Micron_M550*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
- { "Crucial_CT???M550SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
+ { "Crucial_CT*M550SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
/*
* Some WD SATA-I drives spin up and down erratically when the link
};
const struct ata_port_info *ppi[] = { &info, NULL };
+ /*
+	 * The JMicron 361/363 chips contain one SATA controller and one
+	 * PATA controller. To power on both controllers successfully,
+	 * they must be brought up one after the other; otherwise one of
+	 * them fails to power on. Disable the async suspend method for
+	 * these chips so that ordering is preserved.
+ */
+ if (pdev->vendor == PCI_VENDOR_ID_JMICRON &&
+ (pdev->device == PCI_DEVICE_ID_JMICRON_JMB363 ||
+ pdev->device == PCI_DEVICE_ID_JMICRON_JMB361))
+ device_disable_async_suspend(&pdev->dev);
+
return ata_pci_bmdma_init_one(pdev, ppi, &jmicron_sht, NULL, 0);
}
/*
* pata_s3c_bus_softreset - PATA device software reset
*/
-static unsigned int pata_s3c_bus_softreset(struct ata_port *ap,
+static int pata_s3c_bus_softreset(struct ata_port *ap,
unsigned long deadline)
{
struct ata_ioports *ioaddr = &ap->ioaddr;
* Note: Original code is ata_bus_softreset().
*/
-static unsigned int scc_bus_softreset(struct ata_port *ap, unsigned int devmask,
+static int scc_bus_softreset(struct ata_port *ap, unsigned int devmask,
unsigned long deadline)
{
struct ata_ioports *ioaddr = &ap->ioaddr;
udelay(20);
out_be32(ioaddr->ctl_addr, ap->ctl);
- scc_wait_after_reset(&ap->link, devmask, deadline);
-
- return 0;
+ return scc_wait_after_reset(&ap->link, devmask, deadline);
}
/**
{
struct ata_port *ap = link->ap;
unsigned int slave_possible = ap->flags & ATA_FLAG_SLAVE_POSS;
- unsigned int devmask = 0, err_mask;
+ unsigned int devmask = 0;
+ int rc;
u8 err;
DPRINTK("ENTER\n");
/* issue bus reset */
DPRINTK("about to softreset, devmask=%x\n", devmask);
- err_mask = scc_bus_softreset(ap, devmask, deadline);
- if (err_mask) {
- ata_port_err(ap, "SRST failed (err_mask=0x%x)\n", err_mask);
+ rc = scc_bus_softreset(ap, devmask, deadline);
+ if (rc) {
+ ata_port_err(ap, "SRST failed (err_mask=0x%x)\n", rc);
return -EIO;
}
enum regcache_type type;
int (*init)(struct regmap *map);
int (*exit)(struct regmap *map);
+#ifdef CONFIG_DEBUG_FS
+ void (*debugfs_init)(struct regmap *map);
+#endif
int (*read)(struct regmap *map, unsigned int reg, unsigned int *value);
int (*write)(struct regmap *map, unsigned int reg, unsigned int value);
int (*sync)(struct regmap *map, unsigned int min, unsigned int max);
{
debugfs_create_file("rbtree", 0400, map->debugfs, map, &rbtree_fops);
}
-#else
-static void rbtree_debugfs_init(struct regmap *map)
-{
-}
#endif
static int regcache_rbtree_init(struct regmap *map)
goto err;
}
- rbtree_debugfs_init(map);
-
return 0;
err:
.name = "rbtree",
.init = regcache_rbtree_init,
.exit = regcache_rbtree_exit,
+#ifdef CONFIG_DEBUG_FS
+ .debugfs_init = rbtree_debugfs_init,
+#endif
.read = regcache_rbtree_read,
.write = regcache_rbtree_write,
.sync = regcache_rbtree_sync,
unsigned int block_base, unsigned int start,
unsigned int end)
{
- if (regmap_can_raw_write(map))
+ if (regmap_can_raw_write(map) && !map->use_single_rw)
return regcache_sync_block_raw(map, block, cache_present,
block_base, start, end);
else
next = rb_next(&range_node->node);
}
+
+ if (map->cache_ops && map->cache_ops->debugfs_init)
+ map->cache_ops->debugfs_init(map);
}
void regmap_debugfs_exit(struct regmap *map)
bool regmap_volatile(struct regmap *map, unsigned int reg)
{
- if (!regmap_readable(map, reg))
+ if (!map->format.format_write && !regmap_readable(map, reg))
return false;
if (map->volatile_reg)
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43a9) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43aa) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4727) },
+ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43227) }, /* 0xA8DB */
{ 0, },
};
MODULE_DEVICE_TABLE(pci, bcma_pci_bridge_tbl);
int rd_size = CONFIG_BLK_DEV_RAM_SIZE;
static int max_part;
static int part_shift;
+static int part_show = 0;
module_param(rd_nr, int, S_IRUGO);
MODULE_PARM_DESC(rd_nr, "Maximum number of brd devices");
module_param(rd_size, int, S_IRUGO);
MODULE_PARM_DESC(rd_size, "Size of each RAM disk in kbytes.");
module_param(max_part, int, S_IRUGO);
MODULE_PARM_DESC(max_part, "Maximum number of partitions per RAM disk");
+module_param(part_show, int, S_IRUGO);
+MODULE_PARM_DESC(part_show, "Control RAM disk visibility in /proc/partitions");
MODULE_LICENSE("GPL");
MODULE_ALIAS_BLOCKDEV_MAJOR(RAMDISK_MAJOR);
MODULE_ALIAS("rd");
disk->fops = &brd_fops;
disk->private_data = brd;
disk->queue = brd->brd_queue;
- disk->flags |= GENHD_FL_SUPPRESS_PARTITION_INFO;
+ if (!part_show)
+ disk->flags |= GENHD_FL_SUPPRESS_PARTITION_INFO;
sprintf(disk->disk_name, "ram%d", i);
set_capacity(disk, rd_size * 2);
if (rv) {
dev_err(&dd->pdev->dev,
"Unable to allocate request queue\n");
- rv = -ENOMEM;
goto block_queue_alloc_init_error;
}
struct gendisk *disk;
struct nullb *nullb;
sector_t size;
+ int rv;
nullb = kzalloc_node(sizeof(*nullb), GFP_KERNEL, home_node);
- if (!nullb)
+ if (!nullb) {
+ rv = -ENOMEM;
goto out;
+ }
spin_lock_init(&nullb->lock);
if (queue_mode == NULL_Q_MQ && use_per_node_hctx)
submit_queues = nr_online_nodes;
- if (setup_queues(nullb))
+ rv = setup_queues(nullb);
+ if (rv)
goto out_free_nullb;
if (queue_mode == NULL_Q_MQ) {
nullb->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
nullb->tag_set.driver_data = nullb;
- if (blk_mq_alloc_tag_set(&nullb->tag_set))
+ rv = blk_mq_alloc_tag_set(&nullb->tag_set);
+ if (rv)
goto out_cleanup_queues;
nullb->q = blk_mq_init_queue(&nullb->tag_set);
- if (!nullb->q)
+ if (!nullb->q) {
+ rv = -ENOMEM;
goto out_cleanup_tags;
+ }
} else if (queue_mode == NULL_Q_BIO) {
nullb->q = blk_alloc_queue_node(GFP_KERNEL, home_node);
- if (!nullb->q)
+ if (!nullb->q) {
+ rv = -ENOMEM;
goto out_cleanup_queues;
+ }
blk_queue_make_request(nullb->q, null_queue_bio);
init_driver_queues(nullb);
} else {
nullb->q = blk_init_queue_node(null_request_fn, &nullb->lock, home_node);
- if (!nullb->q)
+ if (!nullb->q) {
+ rv = -ENOMEM;
goto out_cleanup_queues;
+ }
blk_queue_prep_rq(nullb->q, null_rq_prep_fn);
blk_queue_softirq_done(nullb->q, null_softirq_done_fn);
init_driver_queues(nullb);
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, nullb->q);
disk = nullb->disk = alloc_disk_node(1, home_node);
- if (!disk)
+ if (!disk) {
+ rv = -ENOMEM;
goto out_cleanup_blk_queue;
+ }
mutex_lock(&lock);
list_add_tail(&nullb->list, &nullb_list);
out_free_nullb:
kfree(nullb);
out:
- return -ENOMEM;
+ return rv;
}
static int __init null_init(void)
set_capacity(rbd_dev->disk, rbd_dev->mapping.size / SECTOR_SIZE);
set_disk_ro(rbd_dev->disk, rbd_dev->mapping.read_only);
- rbd_dev->rq_wq = alloc_workqueue(rbd_dev->disk->disk_name, 0, 0);
- if (!rbd_dev->rq_wq)
+ rbd_dev->rq_wq = alloc_workqueue("%s", 0, 0, rbd_dev->disk->disk_name);
+ if (!rbd_dev->rq_wq) {
+ ret = -ENOMEM;
goto err_out_mapping;
+ }
ret = rbd_bus_add_dev(rbd_dev);
if (ret)
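
The rbd fix above also illustrates a general rule: alloc_workqueue()
takes a printf-style format string as its first argument, so a name that
comes from elsewhere must be passed as "%s" data rather than as the
format itself. A hedged sketch (wq is a hypothetical local; the name
follows the hunk):

	struct workqueue_struct *wq;

	/* Safe: the disk name is format data, not the format string,
	 * so a '%' in the name cannot be misinterpreted. */
	wq = alloc_workqueue("%s", 0, 0, rbd_dev->disk->disk_name);
	if (!wq)
		return -ENOMEM;
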
.probe = ace_probe,
.remove = ace_remove,
.driver = {
- .owner = THIS_MODULE,
.name = "xsysace",
.of_match_table = ace_of_match,
},
/* Should NEVER happen. Return bio error if it does. */
if (unlikely(ret)) {
pr_err("Decompression failed! err=%d, page=%u\n", ret, index);
- atomic64_inc(&zram->stats.failed_reads);
return ret;
}
zcomp_strm_release(zram->comp, zstrm);
if (is_partial_io(bvec))
kfree(uncmem);
- if (ret)
- atomic64_inc(&zram->stats.failed_writes);
return ret;
}
ret = zram_bvec_write(zram, bvec, index, offset);
}
+ if (unlikely(ret)) {
+ if (rw == READ)
+ atomic64_inc(&zram->stats.failed_reads);
+ else
+ atomic64_inc(&zram->stats.failed_writes);
+ }
+
return ret;
}
atomic64_t compr_data_size; /* compressed size of pages stored */
atomic64_t num_reads; /* failed + successful */
atomic64_t num_writes; /* --do-- */
- atomic64_t failed_reads; /* should NEVER! happen */
+ atomic64_t failed_reads; /* can happen when memory is too low */
atomic64_t failed_writes; /* can happen when memory is too low */
atomic64_t invalid_io; /* non-page-aligned I/O requests */
atomic64_t notify_free; /* no. of swap slot free notifications */
return 0;
}
+static void arm_ccn_pmu_event_destroy(struct perf_event *event)
+{
+ struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
+ struct hw_perf_event *hw = &event->hw;
+
+ if (hw->idx == CCN_IDX_PMU_CYCLE_COUNTER) {
+ clear_bit(CCN_IDX_PMU_CYCLE_COUNTER, ccn->dt.pmu_counters_mask);
+ } else {
+ struct arm_ccn_component *source =
+ ccn->dt.pmu_counters[hw->idx].source;
+
+ if (CCN_CONFIG_TYPE(event->attr.config) == CCN_TYPE_XP &&
+ CCN_CONFIG_EVENT(event->attr.config) ==
+ CCN_EVENT_WATCHPOINT)
+ clear_bit(hw->config_base, source->xp.dt_cmp_mask);
+ else
+ clear_bit(hw->config_base, source->pmu_events_mask);
+ clear_bit(hw->idx, ccn->dt.pmu_counters_mask);
+ }
+
+ ccn->dt.pmu_counters[hw->idx].source = NULL;
+ ccn->dt.pmu_counters[hw->idx].event = NULL;
+}
+
static int arm_ccn_pmu_event_init(struct perf_event *event)
{
struct arm_ccn *ccn;
return -ENOENT;
ccn = pmu_to_arm_ccn(event->pmu);
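+	/* The perf core invokes ->destroy on teardown even for events
+	 * that were never add()ed, so counter bookkeeping set up during
+	 * init is reliably released there rather than in event_del(). */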
+ event->destroy = arm_ccn_pmu_event_destroy;
if (hw->sample_period) {
dev_warn(ccn->dev, "Sampling not supported!\n");
}
if (e->num_vcs && vc >= e->num_vcs) {
dev_warn(ccn->dev, "Invalid vc %d for node/XP %d!\n",
- port, node_xp);
+ vc, node_xp);
return -EINVAL;
}
valid = 1;
return 0;
}
-static void arm_ccn_pmu_event_free(struct perf_event *event)
-{
- struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
- struct hw_perf_event *hw = &event->hw;
-
- if (hw->idx == CCN_IDX_PMU_CYCLE_COUNTER) {
- clear_bit(CCN_IDX_PMU_CYCLE_COUNTER, ccn->dt.pmu_counters_mask);
- } else {
- struct arm_ccn_component *source =
- ccn->dt.pmu_counters[hw->idx].source;
-
- if (CCN_CONFIG_TYPE(event->attr.config) == CCN_TYPE_XP &&
- CCN_CONFIG_EVENT(event->attr.config) ==
- CCN_EVENT_WATCHPOINT)
- clear_bit(hw->config_base, source->xp.dt_cmp_mask);
- else
- clear_bit(hw->config_base, source->pmu_events_mask);
- clear_bit(hw->idx, ccn->dt.pmu_counters_mask);
- }
-
- ccn->dt.pmu_counters[hw->idx].source = NULL;
- ccn->dt.pmu_counters[hw->idx].event = NULL;
-}
-
static u64 arm_ccn_pmu_read_counter(struct arm_ccn *ccn, int idx)
{
u64 res;
static void arm_ccn_pmu_event_del(struct perf_event *event, int flags)
{
arm_ccn_pmu_event_stop(event, PERF_EF_UPDATE);
-
- arm_ccn_pmu_event_free(event);
}
static void arm_ccn_pmu_event_read(struct perf_event *event)
goto out;
}
- freq_table = kcalloc(sizeof(*freq_table), (max_opps + 1), GFP_ATOMIC);
+ freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_ATOMIC);
if (!freq_table) {
ret = -ENOMEM;
goto out;
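
For reference, kcalloc(n, size, flags) takes the element count first and
the element size second, and returns NULL if n * size would overflow.
Both parameters are size_t, so the swapped form in the removed line
compiled silently; a short sketch of the corrected call:

	/* count first, element size second; the n * size overflow
	 * check is done by kcalloc() itself. */
	freq_table = kcalloc(max_opps + 1, sizeof(*freq_table), GFP_ATOMIC);
	if (!freq_table)
		return -ENOMEM;
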
ICPU(0x3f, core_params),
ICPU(0x45, core_params),
ICPU(0x46, core_params),
+ ICPU(0x4c, byt_params),
ICPU(0x4f, core_params),
ICPU(0x56, core_params),
{}
add_timer_on(&cpu->timer, cpunum);
- pr_info("Intel pstate controlling: cpu %d\n", cpunum);
+ pr_debug("Intel pstate controlling: cpu %d\n", cpunum);
return 0;
}
static int intel_pstate_set_policy(struct cpufreq_policy *policy)
{
- struct cpudata *cpu;
-
- cpu = all_cpu_data[policy->cpu];
-
if (!policy->cpuinfo.max_freq)
return -ENODEV;
return val >> 8;
}
-static int __init s5pv210_cpu_init(struct cpufreq_policy *policy)
+static int s5pv210_cpu_init(struct cpufreq_policy *policy)
{
unsigned long mem_type;
int ret;
return idx;
}
-static int __init bl_idle_driver_init(struct cpuidle_driver *drv, int cpu_id)
+static int __init bl_idle_driver_init(struct cpuidle_driver *drv, int part_id)
{
- struct cpuinfo_arm *cpu_info;
struct cpumask *cpumask;
- unsigned long cpuid;
int cpu;
cpumask = kzalloc(cpumask_size(), GFP_KERNEL);
if (!cpumask)
return -ENOMEM;
- for_each_possible_cpu(cpu) {
- cpu_info = &per_cpu(cpu_data, cpu);
- cpuid = is_smp() ? cpu_info->cpuid : read_cpuid_id();
-
- /* read cpu id part number */
- if ((cpuid & 0xFFF0) == cpu_id)
+ for_each_possible_cpu(cpu)
+ if (smp_cpuid_part(cpu) == part_id)
cpumask_set_cpu(cpu, cpumask);
- }
drv->cpumask = cpumask;
EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
EXPORT_TRACEPOINT_SYMBOL(fence_emit);
-/**
+/*
* fence context counter: each execution context should have its own
* fence context, this allows checking if fences belong to the same
* context or not. One device can have multiple separate contexts,
vchan_cyclic_callback(&chan->desc->vdesc);
} else {
if (chan->next_sg == chan->desc->num_sgs) {
- chan->desc = NULL;
+ list_del(&chan->desc->vdesc.node);
vchan_cookie_complete(&chan->desc->vdesc);
+ chan->desc = NULL;
}
}
}
*/
static void efivar_entry_list_del_unlock(struct efivar_entry *entry)
{
- WARN_ON(!spin_is_locked(&__efivars->lock));
+ lockdep_assert_held(&__efivars->lock);
list_del(&entry->list);
spin_unlock_irq(&__efivars->lock);
const struct efivar_operations *ops = __efivars->ops;
efi_status_t status;
- WARN_ON(!spin_is_locked(&__efivars->lock));
+ lockdep_assert_held(&__efivars->lock);
status = ops->set_variable(entry->var.VariableName,
&entry->var.VendorGuid,
int strsize1, strsize2;
bool found = false;
- WARN_ON(!spin_is_locked(&__efivars->lock));
+ lockdep_assert_held(&__efivars->lock);
list_for_each_entry_safe(entry, n, head, list) {
strsize1 = ucs2_strsize(name, 1024);
const struct efivar_operations *ops = __efivars->ops;
efi_status_t status;
- WARN_ON(!spin_is_locked(&__efivars->lock));
+ lockdep_assert_held(&__efivars->lock);
status = ops->get_variable(entry->var.VariableName,
&entry->var.VendorGuid,
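
The efivars hunks above all make the same substitution, and the
reasoning generalizes: WARN_ON(!spin_is_locked(&lock)) only tests
whether *someone* holds the lock, and spin_is_locked() is always false
on !CONFIG_SMP, so the warning can fire spuriously there.
lockdep_assert_held() instead asserts that the current context holds the
lock and compiles away entirely when lockdep is disabled. A sketch
(hypothetical helper, same lock as in the hunks):

	static void efivar_example_unlink(struct efivar_entry *entry)
	{
		/* Documents and, under CONFIG_LOCKDEP, verifies the
		 * caller's locking contract. */
		lockdep_assert_held(&__efivars->lock);
		list_del(&entry->list);
	}
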
struct gpio_desc **dr;
struct gpio_desc *desc;
- dr = devres_alloc(devm_gpiod_release, sizeof(struct gpiod_desc *),
+ dr = devres_alloc(devm_gpiod_release, sizeof(struct gpio_desc *),
GFP_KERNEL);
if (!dr)
return ERR_PTR(-ENOMEM);
bgwrite(~0x0, BT848_INT_STAT);
bgwrite(0x0, BT848_GPIO_OUT_EN);
- iounmap(bg->mmio);
- release_mem_region(pci_resource_start(pdev, 0),
- pci_resource_len(pdev, 0));
pci_disable_device(pdev);
}
return 0;
}
+static int lp_gpio_resume(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct lp_gpio *lg = platform_get_drvdata(pdev);
+ unsigned long reg;
+ int i;
+
+ /* on some hardware suspend clears input sensing, re-enable it here */
+ for (i = 0; i < lg->chip.ngpio; i++) {
+ if (gpiochip_is_requested(&lg->chip, i) != NULL) {
+ reg = lp_gpio_reg(&lg->chip, i, LP_CONFIG2);
+ outl(inl(reg) & ~GPINDIS_BIT, reg);
+ }
+ }
+ return 0;
+}
+
static const struct dev_pm_ops lp_gpio_pm_ops = {
.runtime_suspend = lp_gpio_runtime_suspend,
.runtime_resume = lp_gpio_runtime_resume,
+ .resume = lp_gpio_resume,
};
static const struct acpi_device_id lynxpoint_gpio_acpi_match[] = {
struct clk *clk;
};
+static struct irq_chip zynq_gpio_level_irqchip;
+static struct irq_chip zynq_gpio_edge_irqchip;
+
/**
* zynq_gpio_get_bank_pin - Get the bank number and pin number within that bank
* for a given pin in the GPIO device
gpio->base_addr + ZYNQ_GPIO_INTPOL_OFFSET(bank_num));
writel_relaxed(int_any,
gpio->base_addr + ZYNQ_GPIO_INTANY_OFFSET(bank_num));
+
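+	/*
+	 * Level IRQs must only be acknowledged once the handler has run
+	 * (fasteoi flow), while edge IRQs are acked up front so that an
+	 * edge arriving during the handler is not lost.
+	 */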
+ if (type & IRQ_TYPE_LEVEL_MASK) {
+ __irq_set_chip_handler_name_locked(irq_data->irq,
+ &zynq_gpio_level_irqchip, handle_fasteoi_irq, NULL);
+ } else {
+ __irq_set_chip_handler_name_locked(irq_data->irq,
+ &zynq_gpio_edge_irqchip, handle_level_irq, NULL);
+ }
+
return 0;
}
}
/* irq chip descriptor */
-static struct irq_chip zynq_gpio_irqchip = {
+static struct irq_chip zynq_gpio_level_irqchip = {
.name = DRIVER_NAME,
.irq_enable = zynq_gpio_irq_enable,
+ .irq_eoi = zynq_gpio_irq_ack,
+ .irq_mask = zynq_gpio_irq_mask,
+ .irq_unmask = zynq_gpio_irq_unmask,
+ .irq_set_type = zynq_gpio_set_irq_type,
+ .irq_set_wake = zynq_gpio_set_wake,
+ .flags = IRQCHIP_EOI_THREADED | IRQCHIP_EOI_IF_HANDLED,
+};
+
+static struct irq_chip zynq_gpio_edge_irqchip = {
+ .name = DRIVER_NAME,
+ .irq_enable = zynq_gpio_irq_enable,
+ .irq_ack = zynq_gpio_irq_ack,
.irq_mask = zynq_gpio_irq_mask,
.irq_unmask = zynq_gpio_irq_unmask,
.irq_set_type = zynq_gpio_set_irq_type,
offset);
generic_handle_irq(gpio_irq);
}
-
- /* clear IRQ in HW */
- writel_relaxed(int_sts, gpio->base_addr +
- ZYNQ_GPIO_INTSTS_OFFSET(bank_num));
}
}
writel_relaxed(ZYNQ_GPIO_IXR_DISABLE_ALL, gpio->base_addr +
ZYNQ_GPIO_INTDIS_OFFSET(bank_num));
- ret = gpiochip_irqchip_add(chip, &zynq_gpio_irqchip, 0,
- handle_simple_irq, IRQ_TYPE_NONE);
+ ret = gpiochip_irqchip_add(chip, &zynq_gpio_edge_irqchip, 0,
+ handle_level_irq, IRQ_TYPE_NONE);
if (ret) {
dev_err(&pdev->dev, "Failed to add irq chip\n");
goto err_rm_gpiochip;
}
- gpiochip_set_chained_irqchip(chip, &zynq_gpio_irqchip, irq,
+ gpiochip_set_chained_irqchip(chip, &zynq_gpio_edge_irqchip, irq,
zynq_gpio_irqhandler);
pm_runtime_set_active(&pdev->dev);
void of_gpiochip_remove(struct gpio_chip *chip)
{
gpiochip_remove_pin_ranges(chip);
-
- if (chip->of_node)
- of_node_put(chip->of_node);
+ of_node_put(chip->of_node);
}
tristate "Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)"
depends on (AGP || AGP=n) && !EMULATED_CMPXCHG && MMU && HAS_DMA
select HDMI
+ select FB_CMDLINE
select I2C
select I2C_ALGOBIT
select DMA_SHARED_BUFFER
bool
depends on DRM
-config DRM_USB
- tristate
- depends on DRM
- depends on USB_SUPPORT && USB_ARCH_HAS_HCD
- select USB
-
config DRM_KMS_HELPER
tristate
depends on DRM
select HWMON
select BACKLIGHT_CLASS_DEVICE
select INTERVAL_TREE
+ select MMU_NOTIFIER
help
Choose this option if you have an ATI Radeon graphics card. There
are both PCI and AGP versions. You don't need to choose this to
ccflags-y := -Iinclude/drm
-drm-y := drm_auth.o drm_buffer.o drm_bufs.o drm_cache.o \
+drm-y := drm_auth.o drm_bufs.o drm_cache.o \
drm_context.o drm_dma.o \
drm_fops.o drm_gem.o drm_ioctl.o drm_irq.o \
drm_lock.o drm_memory.o drm_drv.o drm_vm.o \
drm-$(CONFIG_DRM_PANEL) += drm_panel.o
drm-$(CONFIG_OF) += drm_of.o
-drm-usb-y := drm_usb.o
-
drm_kms_helper-y := drm_crtc_helper.o drm_dp_helper.o drm_probe_helper.o \
drm_plane_helper.o drm_dp_mst_topology.o
drm_kms_helper-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
obj-$(CONFIG_DRM) += drm.o
obj-$(CONFIG_DRM_MIPI_DSI) += drm_mipi_dsi.o
-obj-$(CONFIG_DRM_USB) += drm_usb.o
obj-$(CONFIG_DRM_TTM) += ttm/
obj-$(CONFIG_DRM_TDFX) += tdfx/
obj-$(CONFIG_DRM_R128) += r128/
.postclose = NULL,
.lastclose = armada_drm_lastclose,
.unload = armada_drm_unload,
+ .set_busid = drm_platform_set_busid,
.get_vblank_counter = drm_vblank_count,
.enable_vblank = armada_drm_enable_vblank,
.disable_vblank = armada_drm_disable_vblank,
#ifndef ARMADA_GEM_H
#define ARMADA_GEM_H
+#include <drm/drm_gem.h>
+
/* GEM */
struct armada_gem_object {
struct drm_gem_object obj;
return true;
}
+
+static void ast_init_analog(struct drm_device *dev)
+{
+ struct ast_private *ast = dev->dev_private;
+ u32 data;
+
+ /*
+ * Set DAC source to VGA mode in SCU2C via the P2A
+ * bridge. First configure the P2U to target the SCU
+ * in case it isn't at this stage.
+ */
+ ast_write32(ast, 0xf004, 0x1e6e0000);
+ ast_write32(ast, 0xf000, 0x1);
+
+ /* Then unlock the SCU with the magic password */
+ ast_write32(ast, 0x12000, 0x1688a8a8);
+ ast_write32(ast, 0x12000, 0x1688a8a8);
+ ast_write32(ast, 0x12000, 0x1688a8a8);
+
+ /* Finally, clear bits [17:16] of SCU2c */
+ data = ast_read32(ast, 0x1202c);
+ data &= 0xfffcffff;
+ ast_write32(ast, 0, data);
+
+ /* Disable DVO */
+ ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa3, 0xcf, 0x00);
+}
+
void ast_init_3rdtx(struct drm_device *dev)
{
struct ast_private *ast = dev->dev_private;
u8 jreg;
- u32 data;
+
if (ast->chip == AST2300 || ast->chip == AST2400) {
jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd1, 0xff);
switch (jreg & 0x0e) {
default:
if (ast->tx_chip_type == AST_TX_SIL164)
ast_init_dvo(dev);
- else {
- ast_write32(ast, 0x12000, 0x1688a8a8);
- data = ast_read32(ast, 0x1202c);
- data &= 0xfffcffff;
- ast_write32(ast, 0, data);
- }
+ else
+ ast_init_analog(dev);
}
}
}
.load = ast_driver_load,
.unload = ast_driver_unload,
+ .set_busid = drm_pci_set_busid,
.fops = &ast_fops,
.name = DRIVER_NAME,
#include <drm/ttm/ttm_memory.h>
#include <drm/ttm/ttm_module.h>
+#include <drm/drm_gem.h>
+
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>
#define AST_IO_AR_PORT_WRITE (0x40)
#define AST_IO_MISC_PORT_WRITE (0x42)
+#define AST_IO_VGA_ENABLE_PORT (0x43)
#define AST_IO_SEQ_PORT (0x44)
-#define AST_DAC_INDEX_READ (0x3c7)
+#define AST_IO_DAC_INDEX_READ (0x47)
#define AST_IO_DAC_INDEX_WRITE (0x48)
#define AST_IO_DAC_DATA (0x49)
#define AST_IO_GR_PORT (0x4E)
#define AST_IO_INPUT_STATUS1_READ (0x5A)
#define AST_IO_MISC_PORT_READ (0x4C)
+#define AST_IO_MM_OFFSET (0x380)
+
#define __ast_read(x) \
static inline u##x ast_read##x(struct ast_private *ast, u32 reg) { \
u##x val = 0;\
struct ttm_placement placement;
struct ttm_bo_kmap_obj kmap;
struct drm_gem_object gem;
- u32 placements[3];
+ struct ttm_place placements[3];
int pin_count;
};
#define gem_to_ast_bo(gobj) container_of((gobj), struct ast_bo, gem)
int ast_mmap(struct file *filp, struct vm_area_struct *vma);
/* ast post */
+void ast_enable_vga(struct drm_device *dev);
+void ast_enable_mmio(struct drm_device *dev);
+bool ast_is_vga_enabled(struct drm_device *dev);
void ast_post_gpu(struct drm_device *dev);
u32 ast_mindwm(struct ast_private *ast, u32 r);
void ast_moutdwm(struct ast_private *ast, u32 r, u32 v);
static int astfb_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
- struct ast_fbdev *afbdev = (struct ast_fbdev *)helper;
+ struct ast_fbdev *afbdev =
+ container_of(helper, struct ast_fbdev, helper);
struct drm_device *dev = afbdev->helper.dev;
struct drm_mode_fb_cmd2 mode_cmd;
struct drm_framebuffer *fb;
}
-static int ast_detect_chip(struct drm_device *dev)
+static int ast_detect_chip(struct drm_device *dev, bool *need_post)
{
struct ast_private *ast = dev->dev_private;
uint32_t data, jreg;
+ ast_open_key(ast);
if (dev->pdev->device == PCI_CHIP_AST1180) {
ast->chip = AST1100;
}
ast->vga2_clone = false;
} else {
- ast->chip = 2000;
+ ast->chip = AST2000;
DRM_INFO("AST 2000 detected\n");
}
}
+ /*
+	 * If VGA isn't enabled, we need to enable it now, or subsequent
+ * access to the scratch registers will fail. We also inform
+ * our caller that it needs to POST the chip
+ * (Assumption: VGA not enabled -> need to POST)
+ */
+ if (!ast_is_vga_enabled(dev)) {
+ ast_enable_vga(dev);
+ ast_enable_mmio(dev);
+ DRM_INFO("VGA not enabled on entry, requesting chip POST\n");
+ *need_post = true;
+ } else
+ *need_post = false;
+
+ /* Check if we support wide screen */
switch (ast->chip) {
case AST1180:
ast->support_wide_screen = true;
ast->support_wide_screen = true;
else {
ast->support_wide_screen = false;
+ /* Read SCU7c (silicon revision register) */
ast_write32(ast, 0xf004, 0x1e6e0000);
ast_write32(ast, 0xf000, 0x1);
data = ast_read32(ast, 0x1207c);
break;
}
+ /* Check 3rd Tx option (digital output afaik) */
ast->tx_chip_type = AST_TX_NONE;
- jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa3, 0xff);
- if (jreg & 0x80)
- ast->tx_chip_type = AST_TX_SIL164;
+
+ /*
+	 * VGACRA3 Enhanced Color Mode Register: check if DVO is already
+	 * enabled; in that case, assume we have a SIL164 TMDS transmitter.
+	 *
+	 * Don't make that assumption if the chip wasn't enabled and is
+	 * at power-on reset, otherwise we'll incorrectly "detect" a
+	 * SIL164 when there is none.
+ */
+ if (!*need_post) {
+ jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa3, 0xff);
+ if (jreg & 0x80)
+ ast->tx_chip_type = AST_TX_SIL164;
+ }
+
if ((ast->chip == AST2300) || (ast->chip == AST2400)) {
+ /*
+	 * On AST2300 and 2400, look at the configuration set by the SoC
+	 * in SoC scratch register #1, bits 11:8 (interestingly marked
+	 * as "reserved" in the spec).
+ */
jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd1, 0xff);
switch (jreg) {
case 0x04:
}
}
+ /* Print stuff for diagnostic purposes */
+ switch(ast->tx_chip_type) {
+ case AST_TX_SIL164:
+ DRM_INFO("Using Sil164 TMDS transmitter\n");
+ break;
+ case AST_TX_DP501:
+ DRM_INFO("Using DP501 DisplayPort transmitter\n");
+ break;
+ default:
+ DRM_INFO("Analog VGA only\n");
+ }
return 0;
}
int ast_driver_load(struct drm_device *dev, unsigned long flags)
{
struct ast_private *ast;
+ bool need_post;
int ret = 0;
ast = kzalloc(sizeof(struct ast_private), GFP_KERNEL);
ret = -EIO;
goto out_free;
}
- ast->ioregs = pci_iomap(dev->pdev, 2, 0);
+
+ /*
+ * If we don't have IO space at all, use MMIO now and
+ * assume the chip has MMIO enabled by default (rev 0x20
+ * and higher).
+ */
+ if (!(pci_resource_flags(dev->pdev, 2) & IORESOURCE_IO)) {
+ DRM_INFO("platform has no IO space, trying MMIO\n");
+ ast->ioregs = ast->regs + AST_IO_MM_OFFSET;
+ }
+
+ /* "map" IO regs if the above hasn't done so already */
if (!ast->ioregs) {
- ret = -EIO;
- goto out_free;
+ ast->ioregs = pci_iomap(dev->pdev, 2, 0);
+ if (!ast->ioregs) {
+ ret = -EIO;
+ goto out_free;
+ }
}
- ast_detect_chip(dev);
+ ast_detect_chip(dev, &need_post);
if (ast->chip != AST1180) {
ast_get_dram_info(dev);
DRM_INFO("dram %d %d %d %08x\n", ast->mclk, ast->dram_type, ast->dram_bus_width, ast->vram_size);
}
+ if (need_post)
+ ast_post_gpu(dev);
+
ret = ast_mm_init(ast);
if (ret)
goto out_free;
struct ast_private *ast = crtc->dev->dev_private;
u32 refresh_rate_index = 0, mode_id, color_index, refresh_rate;
u32 hborder, vborder;
+ bool check_sync;
+ struct ast_vbios_enhtable *best = NULL;
switch (crtc->primary->fb->bits_per_pixel) {
case 8:
}
refresh_rate = drm_mode_vrefresh(mode);
- while (vbios_mode->enh_table->refresh_rate < refresh_rate) {
- vbios_mode->enh_table++;
- if ((vbios_mode->enh_table->refresh_rate > refresh_rate) ||
- (vbios_mode->enh_table->refresh_rate == 0xff)) {
- vbios_mode->enh_table--;
- break;
+ check_sync = vbios_mode->enh_table->flags & WideScreenMode;
+ do {
+ struct ast_vbios_enhtable *loop = vbios_mode->enh_table;
+
+ while (loop->refresh_rate != 0xff) {
+ if ((check_sync) &&
+ (((mode->flags & DRM_MODE_FLAG_NVSYNC) &&
+ (loop->flags & PVSync)) ||
+ ((mode->flags & DRM_MODE_FLAG_PVSYNC) &&
+ (loop->flags & NVSync)) ||
+ ((mode->flags & DRM_MODE_FLAG_NHSYNC) &&
+ (loop->flags & PHSync)) ||
+ ((mode->flags & DRM_MODE_FLAG_PHSYNC) &&
+ (loop->flags & NHSync)))) {
+ loop++;
+ continue;
+ }
+ if (loop->refresh_rate <= refresh_rate
+ && (!best || loop->refresh_rate > best->refresh_rate))
+ best = loop;
+ loop++;
}
- }
+ if (best || !check_sync)
+ break;
+ check_sync = 0;
+ } while (1);
+ if (best)
+ vbios_mode->enh_table = best;
hborder = (vbios_mode->enh_table->flags & HBorder) ? 8 : 0;
vborder = (vbios_mode->enh_table->flags & VBorder) ? 8 : 0;
struct ast_private *ast = dev->dev_private;
u8 jreg;
- jreg = ast_io_read8(ast, AST_IO_MISC_PORT_READ);
- jreg |= (vbios_mode->enh_table->flags & SyncNN);
+ jreg = ast_io_read8(ast, AST_IO_MISC_PORT_READ);
+ jreg &= ~0xC0;
+ if (vbios_mode->enh_table->flags & NVSync) jreg |= 0x80;
+ if (vbios_mode->enh_table->flags & NHSync) jreg |= 0x40;
ast_io_write8(ast, AST_IO_MISC_PORT_WRITE, jreg);
}
static void ast_init_dram_2300(struct drm_device *dev);
-static void
-ast_enable_vga(struct drm_device *dev)
+void ast_enable_vga(struct drm_device *dev)
+{
+ struct ast_private *ast = dev->dev_private;
+
+ ast_io_write8(ast, AST_IO_VGA_ENABLE_PORT, 0x01);
+ ast_io_write8(ast, AST_IO_MISC_PORT_WRITE, 0x01);
+}
+
+void ast_enable_mmio(struct drm_device *dev)
{
struct ast_private *ast = dev->dev_private;
- ast_io_write8(ast, 0x43, 0x01);
- ast_io_write8(ast, 0x42, 0x01);
+ ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa1, 0xff, 0x04);
}
-#if 0 /* will use later */
-static bool
-ast_is_vga_enabled(struct drm_device *dev)
+
+bool ast_is_vga_enabled(struct drm_device *dev)
{
struct ast_private *ast = dev->dev_private;
u8 ch;
if (ast->chip == AST1180) {
/* TODO 1180 */
} else {
- ch = ast_io_read8(ast, 0x43);
+ ch = ast_io_read8(ast, AST_IO_VGA_ENABLE_PORT);
if (ch) {
ast_open_key(ast);
ch = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb6, 0xff);
}
return 0;
}
-#endif
static const u8 extreginfo[] = { 0x0f, 0x04, 0x1c, 0xff };
static const u8 extreginfo_ast2300a0[] = { 0x0f, 0x04, 0x1c, 0xff };
pci_write_config_dword(ast->dev->pdev, 0x04, reg);
ast_enable_vga(dev);
+ ast_enable_mmio(dev);
ast_open_key(ast);
ast_set_def_ext_reg(dev);
#define HalfDCLK 0x00000002
#define DoubleScanMode 0x00000004
#define LineCompareOff 0x00000008
-#define SyncPP 0x00000000
-#define SyncPN 0x00000040
-#define SyncNP 0x00000080
-#define SyncNN 0x000000C0
#define HBorder 0x00000020
#define VBorder 0x00000010
#define WideScreenMode 0x00000100
#define NewModeInfo 0x00000200
+#define NHSync 0x00000400
+#define PHSync 0x00000800
+#define NVSync 0x00001000
+#define PVSync 0x00002000
+#define SyncPP (PVSync | PHSync)
+#define SyncPN (PVSync | NHSync)
+#define SyncNP (NVSync | PHSync)
+#define SyncNN (NVSync | NHSync)
/* DCLK Index */
#define VCLK25_175 0x00
#define VCLK119 0x17
#define VCLK85_5 0x18
#define VCLK97_75 0x19
+#define VCLK118_25 0x1A
static struct ast_vbios_dclk_info dclk_table[] = {
{0x2C, 0xE7, 0x03}, /* 00: VCLK25_175 */
{0x25, 0x65, 0x80}, /* 16: VCLK88.75 */
{0x77, 0x58, 0x80}, /* 17: VCLK119 */
{0x32, 0x67, 0x80}, /* 18: VCLK85_5 */
+ {0x6a, 0x6d, 0x80}, /* 19: VCLK97_75 */
+ {0x3b, 0x2c, 0x81}, /* 1A: VCLK118_25 */
};
static struct ast_vbios_stdtable vbios_stdtable[] = {
static struct ast_vbios_enhtable res_1600x900[] = {
{1760, 1600, 48, 32, 926, 900, 3, 5, VCLK97_75, /* 60Hz CVT RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 0x3A },
- {1760, 1600, 48, 32, 926, 900, 3, 5, VCLK97_75, /* end */
- (SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 1, 0x3A }
+ {2112, 1600, 88,168, 934, 900, 3, 5, VCLK118_25, /* 60Hz CVT */
+ (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x3A },
+ {2112, 1600, 88,168, 934, 900, 3, 5, VCLK118_25, /* 60Hz CVT */
+ (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 2, 0x3A },
};
static struct ast_vbios_enhtable res_1920x1080[] = {
/* 16:10 */
static struct ast_vbios_enhtable res_1280x800[] = {
{1440, 1280, 48, 32, 823, 800, 3, 6, VCLK71, /* 60Hz RB */
- (SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 35 },
+ (SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 0x35 },
{1680, 1280, 72,128, 831, 800, 3, 6, VCLK83_5, /* 60Hz */
- (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 0x35 },
+ (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x35 },
{1680, 1280, 72,128, 831, 800, 3, 6, VCLK83_5, /* 60Hz */
- (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 1, 0x35 },
+ (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 2, 0x35 },
};
{1600, 1440, 48, 32, 926, 900, 3, 6, VCLK88_75, /* 60Hz RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 0x36 },
{1904, 1440, 80,152, 934, 900, 3, 6, VCLK106_5, /* 60Hz */
- (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 0x36 },
+ (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x36 },
{1904, 1440, 80,152, 934, 900, 3, 6, VCLK106_5, /* 60Hz */
- (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 1, 0x36 },
+ (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 2, 0x36 },
};
static struct ast_vbios_enhtable res_1680x1050[] = {
{1840, 1680, 48, 32, 1080, 1050, 3, 6, VCLK119, /* 60Hz RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 0x37 },
{2240, 1680,104,176, 1089, 1050, 3, 6, VCLK146_25, /* 60Hz */
- (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 0x37 },
+ (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x37 },
{2240, 1680,104,176, 1089, 1050, 3, 6, VCLK146_25, /* 60Hz */
- (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 1, 0x37 },
+ (SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 2, 0x37 },
};
static struct ast_vbios_enhtable res_1920x1200[] = {
- {2080, 1920, 48, 32, 1235, 1200, 3, 6, VCLK154, /* 60Hz */
+ {2080, 1920, 48, 32, 1235, 1200, 3, 6, VCLK154, /* 60Hz RB*/
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 0x34 },
- {2080, 1920, 48, 32, 1235, 1200, 3, 6, VCLK154, /* 60Hz */
+ {2080, 1920, 48, 32, 1235, 1200, 3, 6, VCLK154, /* 60Hz RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 1, 0x34 },
};
void ast_ttm_placement(struct ast_bo *bo, int domain)
{
u32 c = 0;
- bo->placement.fpfn = 0;
- bo->placement.lpfn = 0;
+ unsigned i;
+
bo->placement.placement = bo->placements;
bo->placement.busy_placement = bo->placements;
if (domain & TTM_PL_FLAG_VRAM)
- bo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
+ bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
if (domain & TTM_PL_FLAG_SYSTEM)
- bo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ bo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM;
if (!c)
- bo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ bo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM;
bo->placement.num_placement = c;
bo->placement.num_busy_placement = c;
+ for (i = 0; i < c; ++i) {
+ bo->placements[i].fpfn = 0;
+ bo->placements[i].lpfn = 0;
+ }
}
int ast_bo_create(struct drm_device *dev, int size, int align,
ret = ttm_bo_init(&ast->ttm.bdev, &astbo->bo, size,
ttm_bo_type_device, &astbo->placement,
align >> PAGE_SHIFT, false, NULL, acc_size,
- NULL, ast_bo_ttm_destroy);
+ NULL, NULL, ast_bo_ttm_destroy);
if (ret)
return ret;
ast_ttm_placement(bo, pl_flag);
for (i = 0; i < bo->placement.num_placement; i++)
- bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
return 0;
for (i = 0; i < bo->placement.num_placement ; i++)
- bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
ast_ttm_placement(bo, TTM_PL_FLAG_SYSTEM);
for (i = 0; i < bo->placement.num_placement ; i++)
- bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret) {
struct ast_private *ast;
if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET))
- return drm_mmap(filp, vma);
+ return -EINVAL;
file_priv = filp->private_data;
ast = file_priv->minor->dev->dev_private;
#include <linux/export.h>
#include <drm/drmP.h>
+#include <drm/ati_pcigart.h>
+
# define ATI_PCIGART_PAGE_SIZE 4096 /**< PCI GART page size */
static int drm_ati_alloc_pcigart_table(struct drm_device *dev,
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
+#include <drm/drm_gem.h>
+
#include <ttm/ttm_bo_driver.h>
#include <ttm/ttm_page_alloc.h>
struct ttm_placement placement;
struct ttm_bo_kmap_obj kmap;
struct drm_gem_object gem;
- u32 placements[3];
+ struct ttm_place placements[3];
int pin_count;
};
.driver_features = DRIVER_GEM | DRIVER_MODESET,
.load = bochs_load,
.unload = bochs_unload,
+ .set_busid = drm_pci_set_busid,
.fops = &bochs_fops,
.name = "bochs-drm",
.desc = "bochs dispi vga interface (qemu stdvga)",
static void bochs_ttm_placement(struct bochs_bo *bo, int domain)
{
+ unsigned i;
u32 c = 0;
- bo->placement.fpfn = 0;
- bo->placement.lpfn = 0;
bo->placement.placement = bo->placements;
bo->placement.busy_placement = bo->placements;
if (domain & TTM_PL_FLAG_VRAM) {
- bo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED
+ bo->placements[c++].flags = TTM_PL_FLAG_WC
+ | TTM_PL_FLAG_UNCACHED
| TTM_PL_FLAG_VRAM;
}
if (domain & TTM_PL_FLAG_SYSTEM) {
- bo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ bo->placements[c++].flags = TTM_PL_MASK_CACHING
+ | TTM_PL_FLAG_SYSTEM;
}
if (!c) {
- bo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ bo->placements[c++].flags = TTM_PL_MASK_CACHING
+ | TTM_PL_FLAG_SYSTEM;
+ }
+ for (i = 0; i < c; ++i) {
+ bo->placements[i].fpfn = 0;
+ bo->placements[i].lpfn = 0;
}
bo->placement.num_placement = c;
bo->placement.num_busy_placement = c;
bochs_ttm_placement(bo, pl_flag);
for (i = 0; i < bo->placement.num_placement; i++)
- bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
return 0;
for (i = 0; i < bo->placement.num_placement; i++)
- bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
struct bochs_device *bochs;
if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET))
- return drm_mmap(filp, vma);
+ return -EINVAL;
file_priv = filp->private_data;
bochs = file_priv->minor->dev->dev_private;
ret = ttm_bo_init(&bochs->ttm.bdev, &bochsbo->bo, size,
ttm_bo_type_device, &bochsbo->placement,
align >> PAGE_SHIFT, false, NULL, acc_size,
- NULL, bochs_bo_ttm_destroy);
+ NULL, NULL, bochs_bo_ttm_destroy);
if (ret)
return ret;
.driver_features = DRIVER_MODESET | DRIVER_GEM,
.load = cirrus_driver_load,
.unload = cirrus_driver_unload,
+ .set_busid = drm_pci_set_busid,
.fops = &cirrus_driver_fops,
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
#include <drm/ttm/ttm_memory.h>
#include <drm/ttm/ttm_module.h>
+#include <drm/drm_gem.h>
+
#define DRIVER_AUTHOR "Matthew Garrett"
#define DRIVER_NAME "cirrus"
struct ttm_placement placement;
struct ttm_bo_kmap_obj kmap;
struct drm_gem_object gem;
- u32 placements[3];
+ struct ttm_place placements[3];
int pin_count;
};
#define gem_to_cirrus_bo(gobj) container_of((gobj), struct cirrus_bo, gem)
static int cirrusfb_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
- struct cirrus_fbdev *gfbdev = (struct cirrus_fbdev *)helper;
+ struct cirrus_fbdev *gfbdev =
+ container_of(helper, struct cirrus_fbdev, helper);
struct drm_device *dev = gfbdev->helper.dev;
struct cirrus_device *cdev = gfbdev->helper.dev->dev_private;
struct fb_info *info;
void cirrus_ttm_placement(struct cirrus_bo *bo, int domain)
{
u32 c = 0;
- bo->placement.fpfn = 0;
- bo->placement.lpfn = 0;
+ unsigned i;
bo->placement.placement = bo->placements;
bo->placement.busy_placement = bo->placements;
if (domain & TTM_PL_FLAG_VRAM)
- bo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
+ bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
if (domain & TTM_PL_FLAG_SYSTEM)
- bo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
if (!c)
- bo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
bo->placement.num_placement = c;
bo->placement.num_busy_placement = c;
+ for (i = 0; i < c; ++i) {
+ bo->placements[i].fpfn = 0;
+ bo->placements[i].lpfn = 0;
+ }
}
int cirrus_bo_create(struct drm_device *dev, int size, int align,
ret = ttm_bo_init(&cirrus->ttm.bdev, &cirrusbo->bo, size,
ttm_bo_type_device, &cirrusbo->placement,
align >> PAGE_SHIFT, false, NULL, acc_size,
- NULL, cirrus_bo_ttm_destroy);
+ NULL, NULL, cirrus_bo_ttm_destroy);
if (ret)
return ret;
cirrus_ttm_placement(bo, pl_flag);
for (i = 0; i < bo->placement.num_placement; i++)
- bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
cirrus_ttm_placement(bo, TTM_PL_FLAG_SYSTEM);
for (i = 0; i < bo->placement.num_placement ; i++)
- bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret) {
struct cirrus_device *cirrus;
if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET))
- return drm_mmap(filp, vma);
+ return -EINVAL;
file_priv = filp->private_data;
cirrus = file_priv->minor->dev->dev_private;
#include <drm/drmP.h>
#include <linux/module.h>
#include <linux/slab.h>
+#include "drm_legacy.h"
#if __OS_HAS_AGP
*/
#include <drm/drmP.h>
+#include "drm_internal.h"
+
+struct drm_magic_entry {
+ struct list_head head;
+ struct drm_hash_item hash_item;
+ struct drm_file *priv;
+};
/**
* Find the file with the given magic number.
-/**
- * \file drm_bufs.c
- * Generic buffer template
- *
- * \author Rickard E. (Rik) Faith <faith@valinux.com>
- * \author Gareth Hughes <gareth@valinux.com>
- */
-
/*
- * Created: Thu Nov 23 03:10:50 2000 by gareth@valinux.com
+ * Legacy: Generic DRM Buffer Management
*
* Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
+ * Author: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Author: Gareth Hughes <gareth@valinux.com>
+ *
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
#include <linux/export.h>
#include <asm/shmparam.h>
#include <drm/drmP.h>
+#include "drm_legacy.h"
static struct drm_map_list *drm_find_matching_map(struct drm_device *dev,
struct drm_local_map *map)
return 0;
}
-int drm_addmap(struct drm_device * dev, resource_size_t offset,
- unsigned int size, enum drm_map_type type,
- enum drm_map_flags flags, struct drm_local_map ** map_ptr)
+int drm_legacy_addmap(struct drm_device * dev, resource_size_t offset,
+ unsigned int size, enum drm_map_type type,
+ enum drm_map_flags flags, struct drm_local_map **map_ptr)
{
struct drm_map_list *list;
int rc;
*map_ptr = list->map;
return rc;
}
-
-EXPORT_SYMBOL(drm_addmap);
+EXPORT_SYMBOL(drm_legacy_addmap);
/**
* Ioctl to specify a range of memory that is available for mapping by a
* \return zero on success or a negative value on error.
*
*/
-int drm_addmap_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
+int drm_legacy_addmap_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
{
struct drm_map *map = data;
struct drm_map_list *maplist;
* it's being used, and free any associated resource (such as MTRRs) if it's not
* in use.
*
- * \sa drm_addmap
+ * \sa drm_legacy_addmap
*/
-int drm_rmmap_locked(struct drm_device *dev, struct drm_local_map *map)
+int drm_legacy_rmmap_locked(struct drm_device *dev, struct drm_local_map *map)
{
struct drm_map_list *r_list = NULL, *list_t;
drm_dma_handle_t dmah;
dmah.vaddr = map->handle;
dmah.busaddr = map->offset;
dmah.size = map->size;
- __drm_pci_free(dev, &dmah);
+ __drm_legacy_pci_free(dev, &dmah);
break;
}
kfree(map);
return 0;
}
-EXPORT_SYMBOL(drm_rmmap_locked);
+EXPORT_SYMBOL(drm_legacy_rmmap_locked);
-int drm_rmmap(struct drm_device *dev, struct drm_local_map *map)
+int drm_legacy_rmmap(struct drm_device *dev, struct drm_local_map *map)
{
int ret;
mutex_lock(&dev->struct_mutex);
- ret = drm_rmmap_locked(dev, map);
+ ret = drm_legacy_rmmap_locked(dev, map);
mutex_unlock(&dev->struct_mutex);
return ret;
}
-EXPORT_SYMBOL(drm_rmmap);
+EXPORT_SYMBOL(drm_legacy_rmmap);
/* The rmmap ioctl appears to be unnecessary. All mappings are torn down on
* the last close of the device, and this is necessary for cleanup when things
* \param arg pointer to a struct drm_map structure.
* \return zero on success or a negative value on error.
*/
-int drm_rmmap_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
+int drm_legacy_rmmap_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
{
struct drm_map *request = data;
struct drm_local_map *map = NULL;
return 0;
}
- ret = drm_rmmap_locked(dev, map);
+ ret = drm_legacy_rmmap_locked(dev, map);
mutex_unlock(&dev->struct_mutex);
* reallocates the buffer list of the same size order to accommodate the new
* buffers.
*/
-int drm_addbufs_agp(struct drm_device * dev, struct drm_buf_desc * request)
+int drm_legacy_addbufs_agp(struct drm_device *dev,
+ struct drm_buf_desc *request)
{
struct drm_device_dma *dma = dev->dma;
struct drm_buf_entry *entry;
atomic_dec(&dev->buf_alloc);
return 0;
}
-EXPORT_SYMBOL(drm_addbufs_agp);
+EXPORT_SYMBOL(drm_legacy_addbufs_agp);
#endif /* __OS_HAS_AGP */
-int drm_addbufs_pci(struct drm_device * dev, struct drm_buf_desc * request)
+int drm_legacy_addbufs_pci(struct drm_device *dev,
+ struct drm_buf_desc *request)
{
struct drm_device_dma *dma = dev->dma;
int count;
return 0;
}
-EXPORT_SYMBOL(drm_addbufs_pci);
+EXPORT_SYMBOL(drm_legacy_addbufs_pci);
-static int drm_addbufs_sg(struct drm_device * dev, struct drm_buf_desc * request)
+static int drm_legacy_addbufs_sg(struct drm_device *dev,
+ struct drm_buf_desc *request)
{
struct drm_device_dma *dma = dev->dma;
struct drm_buf_entry *entry;
* addbufs_sg() or addbufs_pci() for AGP, scatter-gather or consistent
* PCI memory respectively.
*/
-int drm_addbufs(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
+int drm_legacy_addbufs(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
{
struct drm_buf_desc *request = data;
int ret;
#if __OS_HAS_AGP
if (request->flags & _DRM_AGP_BUFFER)
- ret = drm_addbufs_agp(dev, request);
+ ret = drm_legacy_addbufs_agp(dev, request);
else
#endif
if (request->flags & _DRM_SG_BUFFER)
- ret = drm_addbufs_sg(dev, request);
+ ret = drm_legacy_addbufs_sg(dev, request);
else if (request->flags & _DRM_FB_BUFFER)
ret = -EINVAL;
else
- ret = drm_addbufs_pci(dev, request);
+ ret = drm_legacy_addbufs_pci(dev, request);
return ret;
}
* lock, preventing allocation of more buffers after this call. Information
* about each requested buffer is then copied into user space.
*/
-int drm_infobufs(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
+int drm_legacy_infobufs(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
{
struct drm_device_dma *dma = dev->dma;
struct drm_buf_info *request = data;
*
* \note This ioctl is deprecated and mostly never used.
*/
-int drm_markbufs(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
+int drm_legacy_markbufs(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
{
struct drm_device_dma *dma = dev->dma;
struct drm_buf_desc *request = data;
* Calls free_buffer() for each used buffer.
* This function is primarily used for debugging.
*/
-int drm_freebufs(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
+int drm_legacy_freebufs(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
{
struct drm_device_dma *dma = dev->dma;
struct drm_buf_free *request = data;
task_pid_nr(current));
return -EINVAL;
}
- drm_free_buffer(dev, buf);
+ drm_legacy_free_buffer(dev, buf);
}
return 0;
* offset equal to 0, which drm_mmap() interprets as PCI buffers and calls
* drm_mmap_dma().
*/
-int drm_mapbufs(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
+int drm_legacy_mapbufs(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
{
struct drm_device_dma *dma = dev->dma;
int retcode = 0;
return retcode;
}
-int drm_dma_ioctl(struct drm_device *dev, void *data,
+int drm_legacy_dma_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
}
-struct drm_local_map *drm_getsarea(struct drm_device *dev)
+struct drm_local_map *drm_legacy_getsarea(struct drm_device *dev)
{
struct drm_map_list *entry;
}
return NULL;
}
-EXPORT_SYMBOL(drm_getsarea);
+EXPORT_SYMBOL(drm_legacy_getsarea);
#include <drm/drm_modeset_lock.h>
#include "drm_crtc_internal.h"
+#include "drm_internal.h"
static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev,
struct drm_mode_fb_cmd2 *r,
struct drm_file *file_priv);
-/**
- * drm_modeset_lock_all - take all modeset locks
- * @dev: drm device
- *
- * This function takes all modeset locks, suitable where a more fine-grained
- * scheme isn't (yet) implemented. Locks must be dropped with
- * drm_modeset_unlock_all.
- */
-void drm_modeset_lock_all(struct drm_device *dev)
-{
- struct drm_mode_config *config = &dev->mode_config;
- struct drm_modeset_acquire_ctx *ctx;
- int ret;
-
- ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
- if (WARN_ON(!ctx))
- return;
-
- mutex_lock(&config->mutex);
-
- drm_modeset_acquire_init(ctx, 0);
-
-retry:
- ret = drm_modeset_lock(&config->connection_mutex, ctx);
- if (ret)
- goto fail;
- ret = drm_modeset_lock_all_crtcs(dev, ctx);
- if (ret)
- goto fail;
-
- WARN_ON(config->acquire_ctx);
-
- /* now we hold the locks, so now that it is safe, stash the
- * ctx for drm_modeset_unlock_all():
- */
- config->acquire_ctx = ctx;
-
- drm_warn_on_modeset_not_all_locked(dev);
-
- return;
-
-fail:
- if (ret == -EDEADLK) {
- drm_modeset_backoff(ctx);
- goto retry;
- }
-}
-EXPORT_SYMBOL(drm_modeset_lock_all);
-
-/**
- * drm_modeset_unlock_all - drop all modeset locks
- * @dev: device
- *
- * This function drop all modeset locks taken by drm_modeset_lock_all.
- */
-void drm_modeset_unlock_all(struct drm_device *dev)
-{
- struct drm_mode_config *config = &dev->mode_config;
- struct drm_modeset_acquire_ctx *ctx = config->acquire_ctx;
-
- if (WARN_ON(!ctx))
- return;
-
- config->acquire_ctx = NULL;
- drm_modeset_drop_locks(ctx);
- drm_modeset_acquire_fini(ctx);
-
- kfree(ctx);
-
- mutex_unlock(&dev->mode_config.mutex);
-}
-EXPORT_SYMBOL(drm_modeset_unlock_all);
-
-/**
- * drm_warn_on_modeset_not_all_locked - check that all modeset locks are locked
- * @dev: device
- *
- * Useful as a debug assert.
- */
-void drm_warn_on_modeset_not_all_locked(struct drm_device *dev)
-{
- struct drm_crtc *crtc;
-
- /* Locking is currently fubar in the panic handler. */
- if (oops_in_progress)
- return;
-
- list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
- WARN_ON(!drm_modeset_is_locked(&crtc->mutex));
-
- WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
- WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
-}
-EXPORT_SYMBOL(drm_warn_on_modeset_not_all_locked);
-
/* Avoid boilerplate. I'm tired of typing. */
#define DRM_ENUM_NAME_FN(fnname, list) \
const char *fnname(int val) \
if (ret)
goto out;
- /* Grab the idr reference. */
- drm_framebuffer_reference(fb);
-
dev->mode_config.num_fb++;
list_add(&fb->head, &dev->mode_config.fb_list);
out:
}
EXPORT_SYMBOL(drm_framebuffer_init);
+/* dev->mode_config.fb_lock must be held! */
+static void __drm_framebuffer_unregister(struct drm_device *dev,
+ struct drm_framebuffer *fb)
+{
+ mutex_lock(&dev->mode_config.idr_mutex);
+ idr_remove(&dev->mode_config.crtc_idr, fb->base.id);
+ mutex_unlock(&dev->mode_config.idr_mutex);
+
+ fb->base.id = 0;
+}
+
static void drm_framebuffer_free(struct kref *kref)
{
struct drm_framebuffer *fb =
container_of(kref, struct drm_framebuffer, refcount);
+ struct drm_device *dev = fb->dev;
+
+ /*
+ * The lookup idr holds a weak reference, which has not necessarily been
+ * removed at this point. Check for that.
+ */
+ mutex_lock(&dev->mode_config.fb_lock);
+ if (fb->base.id) {
+ /* Mark fb as reaped and drop idr ref. */
+ __drm_framebuffer_unregister(dev, fb);
+ }
+ mutex_unlock(&dev->mode_config.fb_lock);
+
fb->funcs->destroy(fb);
}
mutex_lock(&dev->mode_config.fb_lock);
fb = __drm_framebuffer_lookup(dev, id);
- if (fb)
- drm_framebuffer_reference(fb);
+ if (fb) {
+ if (!kref_get_unless_zero(&fb->refcount))
+ fb = NULL;
+ }
mutex_unlock(&dev->mode_config.fb_lock);
return fb;
kref_put(&fb->refcount, drm_framebuffer_free_bug);
}
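
The lookup above is the classic weak-reference idiom: the idr entry no longer pins the framebuffer, so a lookup can race with the final unreference. kref_get_unless_zero() only succeeds while the count is still non-zero, and since both the lookup and drm_framebuffer_free() take fb_lock, a framebuffer whose count has hit zero can never be handed out again. A minimal sketch of the idiom (hypothetical names, not DRM API):

static struct obj *obj_lookup_weak(u32 id)
{
	struct obj *o;

	mutex_lock(&table_lock);
	o = table_find(id);			/* table holds no reference */
	if (o && !kref_get_unless_zero(&o->ref))
		o = NULL;			/* final put already in flight */
	mutex_unlock(&table_lock);

	return o;
}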
-/* dev->mode_config.fb_lock must be held! */
-static void __drm_framebuffer_unregister(struct drm_device *dev,
- struct drm_framebuffer *fb)
-{
- mutex_lock(&dev->mode_config.idr_mutex);
- idr_remove(&dev->mode_config.crtc_idr, fb->base.id);
- mutex_unlock(&dev->mode_config.idr_mutex);
-
- fb->base.id = 0;
-
- __drm_framebuffer_unreference(fb);
-}
-
/**
* drm_framebuffer_unregister_private - unregister a private fb from the lookup idr
* @fb: fb to unregister
crtc->funcs = funcs;
crtc->invert_dimensions = false;
- drm_modeset_lock_all(dev);
drm_modeset_lock_init(&crtc->mutex);
- /* dropped by _unlock_all(): */
- drm_modeset_lock(&crtc->mutex, config->acquire_ctx);
-
ret = drm_mode_object_get(dev, &crtc->base, DRM_MODE_OBJECT_CRTC);
if (ret)
goto out;
cursor->possible_crtcs = 1 << drm_crtc_index(crtc);
out:
- drm_modeset_unlock_all(dev);
return ret;
}
drm_mode_destroy(connector->dev, mode);
}
+/**
+ * drm_connector_get_cmdline_mode - reads the user's cmdline mode
+ * @connector: connector to query
+ *
+ * The kernel supports per-connector configuration of its consoles through
+ * use of the video= parameter. This function parses that option and
+ * extracts the user's specified mode (or enable/disable status) for a
+ * particular connector. This is typically only used during the early fbdev
+ * setup.
+ */
+static void drm_connector_get_cmdline_mode(struct drm_connector *connector)
+{
+ struct drm_cmdline_mode *mode = &connector->cmdline_mode;
+ char *option = NULL;
+
+ if (fb_get_options(connector->name, &option))
+ return;
+
+ if (!drm_mode_parse_command_line_for_connector(option,
+ connector,
+ mode))
+ return;
+
+ if (mode->force) {
+ const char *s;
+
+ switch (mode->force) {
+ case DRM_FORCE_OFF:
+ s = "OFF";
+ break;
+ case DRM_FORCE_ON_DIGITAL:
+ s = "ON - dig";
+ break;
+ default:
+ case DRM_FORCE_ON:
+ s = "ON";
+ break;
+ }
+
+ DRM_INFO("forcing %s connector %s\n", connector->name, s);
+ connector->force = mode->force;
+ }
+
+ DRM_DEBUG_KMS("cmdline mode for connector %s %dx%d@%dHz%s%s%s\n",
+ connector->name,
+ mode->xres, mode->yres,
+ mode->refresh_specified ? mode->refresh : 60,
+ mode->rb ? " reduced blanking" : "",
+ mode->margins ? " with margins" : "",
+ mode->interlace ? " interlaced" : "");
+}
+
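
The option string consumed here is the connector's entry from the video= kernel parameter, using the same modedb syntax as the fbdev layer. For example, on a hypothetical machine with an HDMI-A-1 connector:

	video=HDMI-A-1:1920x1080@60	(fixed mode)
	video=HDMI-A-1:e		(force the connector on)
	video=HDMI-A-1:d		(force the connector off)

The ON/OFF forcing handled in the function corresponds to the e/D/d suffixes parsed by drm_mode_parse_command_line_for_connector().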
/**
* drm_connector_init - Init a preallocated connector
* @dev: DRM device
connector->edid_blob_ptr = NULL;
connector->status = connector_status_unknown;
+ drm_connector_get_cmdline_mode(connector);
+
list_add_tail(&connector->head, &dev->mode_config.connector_list);
dev->mode_config.num_connector++;
}
EXPORT_SYMBOL(drm_connector_cleanup);
+/**
+ * drm_connector_index - find the index of a registered connector
+ * @connector: connector to find index for
+ *
+ * Given a registered connector, return the index of that connector within a DRM
+ * device's list of connectors.
+ */
+unsigned int drm_connector_index(struct drm_connector *connector)
+{
+ unsigned int index = 0;
+ struct drm_connector *tmp;
+
+ list_for_each_entry(tmp, &connector->dev->mode_config.connector_list, head) {
+ if (tmp == connector)
+ return index;
+
+ index++;
+ }
+
+ BUG();
+}
+EXPORT_SYMBOL(drm_connector_index);
+
/**
* drm_connector_register - register a connector
* @connector: the connector to register
}
EXPORT_SYMBOL(drm_plane_cleanup);
+/**
+ * drm_plane_index - find the index of a registered plane
+ * @plane: plane to find index for
+ *
+ * Given a registered plane, return the index of that plane within a DRM
+ * device's list of planes.
+ */
+unsigned int drm_plane_index(struct drm_plane *plane)
+{
+ unsigned int index = 0;
+ struct drm_plane *tmp;
+
+ list_for_each_entry(tmp, &plane->dev->mode_config.plane_list, head) {
+ if (tmp == plane)
+ return index;
+
+ index++;
+ }
+
+ BUG();
+}
+EXPORT_SYMBOL(drm_plane_index);
+
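
Like drm_crtc_index() (used earlier to fill cursor->possible_crtcs with 1 << drm_crtc_index(crtc)), the returned index is mainly useful for building per-object bitmasks. A sketch under the usual assumption of at most 32 planes, with dev in scope:

	uint32_t mask = 0;
	struct drm_plane *plane;

	list_for_each_entry(plane, &dev->mode_config.plane_list, head)
		if (plane->type != DRM_PLANE_TYPE_PRIMARY)
			mask |= 1 << drm_plane_index(plane);
	/* mask now has one bit set per non-primary plane */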
/**
* drm_plane_force_disable - Forcibly disable a plane
* @plane: plane to disable
*/
void drm_plane_force_disable(struct drm_plane *plane)
{
- struct drm_framebuffer *old_fb = plane->fb;
int ret;
- if (!old_fb)
+ if (!plane->fb)
return;
+ plane->old_fb = plane->fb;
ret = plane->funcs->disable_plane(plane);
if (ret) {
DRM_ERROR("failed to disable plane with busy fb\n");
+ plane->old_fb = NULL;
return;
}
/* disconnect the plane from the fb and crtc: */
- __drm_framebuffer_unreference(old_fb);
+ __drm_framebuffer_unreference(plane->old_fb);
+ plane->old_fb = NULL;
plane->fb = NULL;
plane->crtc = NULL;
}
*
* src_{x,y,w,h} are provided in 16.16 fixed point format
*/
-static int setplane_internal(struct drm_plane *plane,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- int32_t crtc_x, int32_t crtc_y,
- uint32_t crtc_w, uint32_t crtc_h,
- /* src_{x,y,w,h} values are 16.16 fixed point */
- uint32_t src_x, uint32_t src_y,
- uint32_t src_w, uint32_t src_h)
+static int __setplane_internal(struct drm_plane *plane,
+ struct drm_crtc *crtc,
+ struct drm_framebuffer *fb,
+ int32_t crtc_x, int32_t crtc_y,
+ uint32_t crtc_w, uint32_t crtc_h,
+ /* src_{x,y,w,h} values are 16.16 fixed point */
+ uint32_t src_x, uint32_t src_y,
+ uint32_t src_w, uint32_t src_h)
{
- struct drm_device *dev = plane->dev;
- struct drm_framebuffer *old_fb = NULL;
int ret = 0;
unsigned int fb_width, fb_height;
int i;
/* No fb means shut it down */
if (!fb) {
- drm_modeset_lock_all(dev);
- old_fb = plane->fb;
+ plane->old_fb = plane->fb;
ret = plane->funcs->disable_plane(plane);
if (!ret) {
plane->crtc = NULL;
plane->fb = NULL;
} else {
- old_fb = NULL;
+ plane->old_fb = NULL;
}
- drm_modeset_unlock_all(dev);
goto out;
}
goto out;
}
- drm_modeset_lock_all(dev);
- old_fb = plane->fb;
+ plane->old_fb = plane->fb;
ret = plane->funcs->update_plane(plane, crtc, fb,
crtc_x, crtc_y, crtc_w, crtc_h,
src_x, src_y, src_w, src_h);
plane->fb = fb;
fb = NULL;
} else {
- old_fb = NULL;
+ plane->old_fb = NULL;
}
- drm_modeset_unlock_all(dev);
out:
if (fb)
drm_framebuffer_unreference(fb);
- if (old_fb)
- drm_framebuffer_unreference(old_fb);
+ if (plane->old_fb)
+ drm_framebuffer_unreference(plane->old_fb);
+ plane->old_fb = NULL;
return ret;
+}
+
+static int setplane_internal(struct drm_plane *plane,
+ struct drm_crtc *crtc,
+ struct drm_framebuffer *fb,
+ int32_t crtc_x, int32_t crtc_y,
+ uint32_t crtc_w, uint32_t crtc_h,
+ /* src_{x,y,w,h} values are 16.16 fixed point */
+ uint32_t src_x, uint32_t src_y,
+ uint32_t src_w, uint32_t src_h)
+{
+ int ret;
+
+ drm_modeset_lock_all(plane->dev);
+ ret = __setplane_internal(plane, crtc, fb,
+ crtc_x, crtc_y, crtc_w, crtc_h,
+ src_x, src_y, src_w, src_h);
+ drm_modeset_unlock_all(plane->dev);
+ return ret;
}
/**
* crtcs. Atomic modeset will have saner semantics ...
*/
list_for_each_entry(tmp, &crtc->dev->mode_config.crtc_list, head)
- tmp->old_fb = tmp->primary->fb;
+ tmp->primary->old_fb = tmp->primary->fb;
fb = set->fb;
list_for_each_entry(tmp, &crtc->dev->mode_config.crtc_list, head) {
if (tmp->primary->fb)
drm_framebuffer_reference(tmp->primary->fb);
- if (tmp->old_fb)
- drm_framebuffer_unreference(tmp->old_fb);
+ if (tmp->primary->old_fb)
+ drm_framebuffer_unreference(tmp->primary->old_fb);
+ tmp->primary->old_fb = NULL;
}
return ret;
int ret = 0;
BUG_ON(!crtc->cursor);
+ WARN_ON(crtc->cursor->crtc != crtc && crtc->cursor->crtc != NULL);
/*
* Obtain fb we'll be using (either new or existing) and take an extra
fb = NULL;
}
} else {
- mutex_lock(&dev->mode_config.mutex);
fb = crtc->cursor->fb;
if (fb)
drm_framebuffer_reference(fb);
- mutex_unlock(&dev->mode_config.mutex);
}
if (req->flags & DRM_MODE_CURSOR_MOVE) {
* setplane_internal will take care of deref'ing either the old or new
* framebuffer depending on success.
*/
- ret = setplane_internal(crtc->cursor, crtc, fb,
+ ret = __setplane_internal(crtc->cursor, crtc, fb,
crtc_x, crtc_y, crtc_w, crtc_h,
0, 0, src_w, src_h);
* If this crtc has a universal cursor plane, call that plane's update
* handler rather than using legacy cursor handlers.
*/
- if (crtc->cursor)
- return drm_mode_cursor_universal(crtc, req, file_priv);
+ drm_modeset_lock_crtc(crtc);
+ if (crtc->cursor) {
+ ret = drm_mode_cursor_universal(crtc, req, file_priv);
+ goto out;
+ }
- drm_modeset_lock(&crtc->mutex, NULL);
if (req->flags & DRM_MODE_CURSOR_BO) {
if (!crtc->funcs->cursor_set && !crtc->funcs->cursor_set2) {
ret = -ENXIO;
}
}
out:
- drm_modeset_unlock(&crtc->mutex);
+ drm_modeset_unlock_crtc(crtc);
return ret;
struct drm_device *dev = priv->minor->dev;
struct drm_framebuffer *fb, *tfb;
- mutex_lock(&priv->fbs_lock);
+ /*
+ * When the file gets released that means no one else can access the fb
+ * list any more, so no need to grab fpriv->fbs_lock. And we must skip it to
+ * avoid upsetting lockdep since the universal cursor code adds a
+ * framebuffer while holding mutex locks.
+ *
+ * Note that a real deadlock between fpriv->fbs_lock and the modeset
+ * locks is impossible here since no one else but this function can get
+ * at it any more.
+ */
list_for_each_entry_safe(fb, tfb, &priv->fbs, filp_head) {
mutex_lock(&dev->mode_config.fb_lock);
/* This will also drop the fpriv->fbs reference. */
drm_framebuffer_remove(fb);
}
- mutex_unlock(&priv->fbs_lock);
}
/**
* @flags: flags specifying the property type
* @name: name of the property
* @props: enumeration lists with property bitflags
- * @num_values: number of pre-defined values
+ * @num_props: size of the @props array
+ * @supported_bits: bitmask of all supported enumeration values
*
- * This creates a new generic drm property which can then be attached to a drm
+ * This creates a new bitmask drm property which can then be attached to a drm
* object with drm_object_attach_property. The returned property object must be
* freed with drm_property_destroy.
*
return ret;
}
-static int drm_mode_plane_set_obj_prop(struct drm_mode_object *obj,
- struct drm_property *property,
- uint64_t value)
+/**
+ * drm_mode_plane_set_obj_prop - set the value of a property
+ * @plane: drm plane object to set property value for
+ * @property: property to set
+ * @value: value the property should be set to
+ *
+ * This function sets a given property on a given plane object. This function
+ * calls the driver's ->set_property callback and changes the software state of
+ * the property if the callback succeeds.
+ *
+ * Returns:
+ * Zero on success, error code on failure.
+ */
+int drm_mode_plane_set_obj_prop(struct drm_plane *plane,
+ struct drm_property *property,
+ uint64_t value)
{
int ret = -EINVAL;
- struct drm_plane *plane = obj_to_plane(obj);
+ struct drm_mode_object *obj = &plane->base;
if (plane->funcs->set_property)
ret = plane->funcs->set_property(plane, property, value);
return ret;
}
+EXPORT_SYMBOL(drm_mode_plane_set_obj_prop);
/**
* drm_mode_getproperty_ioctl - get the current value of a object's property
ret = drm_mode_crtc_set_obj_prop(arg_obj, property, arg->value);
break;
case DRM_MODE_OBJECT_PLANE:
- ret = drm_mode_plane_set_obj_prop(arg_obj, property, arg->value);
+ ret = drm_mode_plane_set_obj_prop(obj_to_plane(arg_obj),
+ property, arg->value);
break;
}
{
struct drm_mode_crtc_page_flip *page_flip = data;
struct drm_crtc *crtc;
- struct drm_framebuffer *fb = NULL, *old_fb = NULL;
+ struct drm_framebuffer *fb = NULL;
struct drm_pending_vblank_event *e = NULL;
unsigned long flags;
int ret = -EINVAL;
if (!crtc)
return -ENOENT;
- drm_modeset_lock(&crtc->mutex, NULL);
+ drm_modeset_lock_crtc(crtc);
if (crtc->primary->fb == NULL) {
/* The framebuffer is currently unbound, presumably
* due to a hotplug event, that userspace has not
(void (*) (struct drm_pending_event *)) kfree;
}
- old_fb = crtc->primary->fb;
+ crtc->primary->old_fb = crtc->primary->fb;
ret = crtc->funcs->page_flip(crtc, fb, e, page_flip->flags);
if (ret) {
if (page_flip->flags & DRM_MODE_PAGE_FLIP_EVENT) {
kfree(e);
}
/* Keep the old fb, don't unref it. */
- old_fb = NULL;
+ crtc->primary->old_fb = NULL;
} else {
/*
* Warn if the driver hasn't properly updated the crtc->fb
out:
if (fb)
drm_framebuffer_unreference(fb);
- if (old_fb)
- drm_framebuffer_unreference(old_fb);
- drm_modeset_unlock(&crtc->mutex);
+ if (crtc->primary->old_fb)
+ drm_framebuffer_unreference(crtc->primary->old_fb);
+ crtc->primary->old_fb = NULL;
+ drm_modeset_unlock_crtc(crtc);
return ret;
}
void drm_mode_config_reset(struct drm_device *dev)
{
struct drm_crtc *crtc;
+ struct drm_plane *plane;
struct drm_encoder *encoder;
struct drm_connector *connector;
+ list_for_each_entry(plane, &dev->mode_config.plane_list, head)
+ if (plane->funcs->reset)
+ plane->funcs->reset(plane);
+
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
if (crtc->funcs->reset)
crtc->funcs->reset(crtc);
return -EINVAL;
/* overflow checks for 32bit size calculations */
+ /* NOTE: DIV_ROUND_UP() can overflow */
cpp = DIV_ROUND_UP(args->bpp, 8);
- if (cpp > 0xffffffffU / args->width)
+ if (!cpp || cpp > 0xffffffffU / args->width)
return -EINVAL;
stride = cpp * args->width;
if (args->height > 0xffffffffU / stride)
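
Dividing instead of multiplying is what keeps the check itself from overflowing: cpp * width exceeds U32_MAX exactly when cpp > 0xffffffffU / width in integer arithmetic. As a worked example:

	/* width = 65536: 0xffffffff / 65536 = 65535, so any cpp > 65535
	 * would overflow cpp * width in 32 bits.  The added !cpp test
	 * additionally rejects bpp == 0, where DIV_ROUND_UP(0, 8) == 0
	 * and the division check alone would pass. */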
#include <linux/export.h>
#include <drm/drmP.h>
#include <drm/drm_edid.h>
+#include "drm_internal.h"
#if defined(CONFIG_DEBUG_FS)
{"clients", drm_clients_info, 0},
{"bufs", drm_bufs_info, 0},
{"gem_names", drm_gem_name_info, DRIVER_GEM},
-#if DRM_DEBUG_CODE
{"vma", drm_vma_info, 0},
-#endif
};
#define DRM_DEBUGFS_ENTRIES ARRAY_SIZE(drm_debugfs_list)
#include <linux/export.h>
#include <drm/drmP.h>
+#include "drm_legacy.h"
/**
* Initialize the DMA data.
*
* Resets the fields of \p buf.
*/
-void drm_free_buffer(struct drm_device *dev, struct drm_buf * buf)
+void drm_legacy_free_buffer(struct drm_device *dev, struct drm_buf * buf)
{
if (!buf)
return;
*
* Frees each buffer associated with \p file_priv not already on the hardware.
*/
-void drm_core_reclaim_buffers(struct drm_device *dev,
- struct drm_file *file_priv)
+void drm_legacy_reclaim_buffers(struct drm_device *dev,
+ struct drm_file *file_priv)
{
struct drm_device_dma *dma = dev->dma;
int i;
if (dma->buflist[i]->file_priv == file_priv) {
switch (dma->buflist[i]->list) {
case DRM_LIST_NONE:
- drm_free_buffer(dev, dma->buflist[i]);
+ drm_legacy_free_buffer(dev, dma->buflist[i]);
break;
case DRM_LIST_WAIT:
dma->buflist[i]->list = DRM_LIST_RECLAIM;
}
}
}
-
-EXPORT_SYMBOL(drm_core_reclaim_buffers);
case DP_LINK_BW_5_4:
return 10 * dp_link_count;
}
- return 0;
+ BUG();
}
/**
* drm_dp_mst_hpd_irq() - MST hotplug IRQ notify
* @mgr: manager to notify irq for.
* @esi: 4 bytes from SINK_COUNT_ESI
+ * @handled: whether the hpd interrupt was consumed or not
*
* This should be called from the driver when it detects a short IRQ,
* along with the value of the DEVICE_SERVICE_IRQ_VECTOR_ESI0. The
#include <drm/drmP.h>
#include <drm/drm_core.h>
#include "drm_legacy.h"
+#include "drm_internal.h"
unsigned int drm_debug = 0; /* 1 to enable debug output */
EXPORT_SYMBOL(drm_debug);
-unsigned int drm_vblank_offdelay = 5000; /* Default to 5000 msecs. */
-
-unsigned int drm_timestamp_precision = 20; /* Default to 20 usecs. */
-
-/*
- * Default to use monotonic timestamps for wait-for-vblank and page-flip
- * complete events.
- */
-unsigned int drm_timestamp_monotonic = 1;
-
MODULE_AUTHOR(CORE_AUTHOR);
MODULE_DESCRIPTION(CORE_DESC);
MODULE_LICENSE("GPL and additional rights");
MODULE_PARM_DESC(debug, "Enable debug output");
-MODULE_PARM_DESC(vblankoffdelay, "Delay until vblank irq auto-disable [msecs]");
+MODULE_PARM_DESC(vblankoffdelay, "Delay until vblank irq auto-disable [msecs] (0: never disable, <0: disable immediately)");
MODULE_PARM_DESC(timestamp_precision_usec, "Max. error on timestamps [usecs]");
MODULE_PARM_DESC(timestamp_monotonic, "Use monotonic timestamps");
module_param_named(debug, drm_debug, int, 0600);
-module_param_named(vblankoffdelay, drm_vblank_offdelay, int, 0600);
-module_param_named(timestamp_precision_usec, drm_timestamp_precision, int, 0600);
-module_param_named(timestamp_monotonic, drm_timestamp_monotonic, int, 0600);
static DEFINE_SPINLOCK(drm_minor_lock);
static struct idr drm_minors_idr;
struct class *drm_class;
static struct dentry *drm_debugfs_root;
-int drm_err(const char *func, const char *format, ...)
+void drm_err(const char *func, const char *format, ...)
{
struct va_format vaf;
va_list args;
- int r;
va_start(args, format);
vaf.fmt = format;
vaf.va = &args;
- r = printk(KERN_ERR "[" DRM_NAME ":%s] *ERROR* %pV", func, &vaf);
+ printk(KERN_ERR "[" DRM_NAME ":%s] *ERROR* %pV", func, &vaf);
va_end(args);
-
- return r;
}
EXPORT_SYMBOL(drm_err);
}
EXPORT_SYMBOL(drm_ut_debug_printk);
+#define DRM_MAGIC_HASH_ORDER 4 /**< Size of key hash table. Must be power of 2. */
+
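
The order is the log2 of the bucket count, so the magic hash table sized by it has 1 << 4 = 16 buckets. A hedged sketch of how the constant is consumed (assuming the order-based drm_ht_create() API; the error label is hypothetical):

	if (drm_ht_create(&master->magiclist, DRM_MAGIC_HASH_ORDER))
		goto out_free;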
struct drm_master *drm_master_create(struct drm_minor *minor)
{
struct drm_master *master;
static void drm_master_destroy(struct kref *kref)
{
struct drm_master *master = container_of(kref, struct drm_master, refcount);
- struct drm_magic_entry *pt, *next;
struct drm_device *dev = master->minor->dev;
struct drm_map_list *r_list, *list_temp;
list_for_each_entry_safe(r_list, list_temp, &dev->maplist, head) {
if (r_list->master == master) {
- drm_rmmap_locked(dev, r_list->map);
+ drm_legacy_rmmap_locked(dev, r_list->map);
r_list = NULL;
}
}
master->unique_len = 0;
}
- list_for_each_entry_safe(pt, next, &master->magicfree, head) {
- list_del(&pt->head);
- drm_ht_remove_item(&master->magiclist, &pt->hash_item);
- kfree(pt);
- }
-
drm_ht_remove(&master->magiclist);
mutex_unlock(&dev->struct_mutex);
goto err_ht;
}
- if (driver->driver_features & DRIVER_GEM) {
+ if (drm_core_check_feature(dev, DRIVER_GEM)) {
ret = drm_gem_init(dev);
if (ret) {
DRM_ERROR("Cannot initialize graphics execution manager (GEM)\n");
{
struct drm_device *dev = container_of(ref, struct drm_device, ref);
- if (dev->driver->driver_features & DRIVER_GEM)
+ if (drm_core_check_feature(dev, DRIVER_GEM))
drm_gem_destroy(dev);
drm_legacy_ctxbitmap_cleanup(dev);
drm_vblank_cleanup(dev);
list_for_each_entry_safe(r_list, list_temp, &dev->maplist, head)
- drm_rmmap(dev, r_list->map);
+ drm_legacy_rmmap(dev, r_list->map);
drm_minor_unregister(dev, DRM_MINOR_LEGACY);
drm_minor_unregister(dev, DRM_MINOR_RENDER);
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
- /* 6 - 1440x480i@60Hz */
- { DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1478,
- 1602, 1716, 0, 480, 488, 494, 525, 0,
+ /* 6 - 720(1440)x480i@60Hz */
+ { DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 13500, 720, 739,
+ 801, 858, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
- /* 7 - 1440x480i@60Hz */
- { DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1478,
- 1602, 1716, 0, 480, 488, 494, 525, 0,
+ /* 7 - 720(1440)x480i@60Hz */
+ { DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 13500, 720, 739,
+ 801, 858, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
- /* 8 - 1440x240@60Hz */
- { DRM_MODE("1440x240", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1478,
- 1602, 1716, 0, 240, 244, 247, 262, 0,
+ /* 8 - 720(1440)x240@60Hz */
+ { DRM_MODE("720x240", DRM_MODE_TYPE_DRIVER, 13500, 720, 739,
+ 801, 858, 0, 240, 244, 247, 262, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_DBLCLK),
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
- /* 9 - 1440x240@60Hz */
- { DRM_MODE("1440x240", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1478,
- 1602, 1716, 0, 240, 244, 247, 262, 0,
+ /* 9 - 720(1440)x240@60Hz */
+ { DRM_MODE("720x240", DRM_MODE_TYPE_DRIVER, 13500, 720, 739,
+ 801, 858, 0, 240, 244, 247, 262, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_DBLCLK),
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
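
The renamed entries reflect that these CEA-861 modes are pixel-repeated on the wire: the table now stores the native 720-wide timings plus DRM_MODE_FLAG_DBLCLK rather than pre-doubled 1440-wide ones. Doubling the stored values recovers the old numbers exactly, e.g. for modes 6/7:

	/* 13500 kHz * 2 = 27000 kHz (pixel clock)
	 * 720/739/801/858 * 2 = 1440/1478/1602/1716 (horizontal timings) */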
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
- /* 21 - 1440x576i@50Hz */
- { DRM_MODE("1440x576i", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1464,
- 1590, 1728, 0, 576, 580, 586, 625, 0,
+ /* 21 - 720(1440)x576i@50Hz */
+ { DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 13500, 720, 732,
+ 795, 864, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
- /* 22 - 1440x576i@50Hz */
- { DRM_MODE("1440x576i", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1464,
- 1590, 1728, 0, 576, 580, 586, 625, 0,
+ /* 22 - 720(1440)x576i@50Hz */
+ { DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 13500, 720, 732,
+ 795, 864, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
- /* 23 - 1440x288@50Hz */
- { DRM_MODE("1440x288", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1464,
- 1590, 1728, 0, 288, 290, 293, 312, 0,
+ /* 23 - 720(1440)x288@50Hz */
+ { DRM_MODE("720x288", DRM_MODE_TYPE_DRIVER, 13500, 720, 732,
+ 795, 864, 0, 288, 290, 293, 312, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_DBLCLK),
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
- /* 24 - 1440x288@50Hz */
- { DRM_MODE("1440x288", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1464,
- 1590, 1728, 0, 288, 290, 293, 312, 0,
+ /* 24 - 720(1440)x288@50Hz */
+ { DRM_MODE("720x288", DRM_MODE_TYPE_DRIVER, 13500, 720, 732,
+ 795, 864, 0, 288, 290, 293, 312, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_DBLCLK),
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
796, 864, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
- /* 44 - 1440x576i@100Hz */
- { DRM_MODE("1440x576", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1464,
- 1590, 1728, 0, 576, 580, 586, 625, 0,
+ /* 44 - 720(1440)x576i@100Hz */
+ { DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 27000, 720, 732,
+ 795, 864, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
- DRM_MODE_FLAG_DBLCLK),
+ DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
- /* 45 - 1440x576i@100Hz */
- { DRM_MODE("1440x576", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1464,
- 1590, 1728, 0, 576, 580, 586, 625, 0,
+ /* 45 - 720(1440)x576i@100Hz */
+ { DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 27000, 720, 732,
+ 795, 864, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
- DRM_MODE_FLAG_DBLCLK),
+ DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 46 - 1920x1080i@120Hz */
{ DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2008,
798, 858, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
- /* 50 - 1440x480i@120Hz */
- { DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1478,
- 1602, 1716, 0, 480, 488, 494, 525, 0,
+ /* 50 - 720(1440)x480i@120Hz */
+ { DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 27000, 720, 739,
+ 801, 858, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
- /* 51 - 1440x480i@120Hz */
- { DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1478,
- 1602, 1716, 0, 480, 488, 494, 525, 0,
+ /* 51 - 720(1440)x480i@120Hz */
+ { DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 27000, 720, 739,
+ 801, 858, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
796, 864, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
- /* 54 - 1440x576i@200Hz */
- { DRM_MODE("1440x576i", DRM_MODE_TYPE_DRIVER, 108000, 1440, 1464,
- 1590, 1728, 0, 576, 580, 586, 625, 0,
+ /* 54 - 720(1440)x576i@200Hz */
+ { DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 54000, 720, 732,
+ 795, 864, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
- /* 55 - 1440x576i@200Hz */
- { DRM_MODE("1440x576i", DRM_MODE_TYPE_DRIVER, 108000, 1440, 1464,
- 1590, 1728, 0, 576, 580, 586, 625, 0,
+ /* 55 - 720(1440)x576i@200Hz */
+ { DRM_MODE("720x576i", DRM_MODE_TYPE_DRIVER, 54000, 720, 732,
+ 795, 864, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
798, 858, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
- /* 58 - 1440x480i@240 */
- { DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 108000, 1440, 1478,
- 1602, 1716, 0, 480, 488, 494, 525, 0,
+ /* 58 - 720(1440)x480i@240 */
+ { DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 54000, 720, 739,
+ 801, 858, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
- /* 59 - 1440x480i@240 */
- { DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 108000, 1440, 1478,
- 1602, 1716, 0, 480, 488, 494, 525, 0,
+ /* 59 - 720(1440)x480i@240 */
+ { DRM_MODE("720x480i", DRM_MODE_TYPE_DRIVER, 54000, 720, 739,
+ 801, 858, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
add_inferred_modes(struct drm_connector *connector, struct edid *edid)
{
struct detailed_mode_closure closure = {
- connector, edid, 0, 0, 0
+ .connector = connector,
+ .edid = edid,
};
if (version_greater(edid, 1, 0))
((edid->established_timings.mfg_rsvd & 0x80) << 9);
int i, modes = 0;
struct detailed_mode_closure closure = {
- connector, edid, 0, 0, 0
+ .connector = connector,
+ .edid = edid,
};
for (i = 0; i <= EDID_EST_TIMINGS; i++) {
{
int i, modes = 0;
struct detailed_mode_closure closure = {
- connector, edid, 0, 0, 0
+ .connector = connector,
+ .edid = edid,
};
for (i = 0; i < EDID_STD_TIMINGS; i++) {
add_cvt_modes(struct drm_connector *connector, struct edid *edid)
{
struct detailed_mode_closure closure = {
- connector, edid, 0, 0, 0
+ .connector = connector,
+ .edid = edid,
};
if (version_greater(edid, 1, 2))
u32 quirks)
{
struct detailed_mode_closure closure = {
- connector,
- edid,
- 1,
- quirks,
- 0
+ .connector = connector,
+ .edid = edid,
+ .preferred = 1,
+ .quirks = quirks,
};
if (closure.preferred && !version_greater(edid, 1, 3))
/**
* drm_assign_hdmi_deep_color_info - detect whether monitor supports
* hdmi deep color modes and update drm_display_info if so.
- *
* @edid: monitor EDID information
* @info: Updated with maximum supported deep color bpc and color format
* if deep color supported.
+ * @connector: DRM connector, used only for debug output
*
* Parse the CEA extension according to CEA-861-B.
* Return true if HDMI deep color supported, false if not or unknown.
WARN_ON(!mutex_is_locked(&fb_helper->dev->mode_config.mutex));
if (fb_helper->connector_count + 1 > fb_helper->connector_info_alloc_count) {
- temp = krealloc(fb_helper->connector_info, sizeof(struct drm_fb_helper_connector) * (fb_helper->connector_count + 1), GFP_KERNEL);
+ temp = krealloc(fb_helper->connector_info, sizeof(struct drm_fb_helper_connector *) * (fb_helper->connector_count + 1), GFP_KERNEL);
if (!temp)
return -ENOMEM;
}
EXPORT_SYMBOL(drm_fb_helper_remove_one_connector);
-static int drm_fb_helper_parse_command_line(struct drm_fb_helper *fb_helper)
-{
- struct drm_fb_helper_connector *fb_helper_conn;
- int i;
-
- for (i = 0; i < fb_helper->connector_count; i++) {
- struct drm_cmdline_mode *mode;
- struct drm_connector *connector;
- char *option = NULL;
-
- fb_helper_conn = fb_helper->connector_info[i];
- connector = fb_helper_conn->connector;
- mode = &fb_helper_conn->cmdline_mode;
-
- /* do something on return - turn off connector maybe */
- if (fb_get_options(connector->name, &option))
- continue;
-
- if (drm_mode_parse_command_line_for_connector(option,
- connector,
- mode)) {
- if (mode->force) {
- const char *s;
- switch (mode->force) {
- case DRM_FORCE_OFF:
- s = "OFF";
- break;
- case DRM_FORCE_ON_DIGITAL:
- s = "ON - dig";
- break;
- default:
- case DRM_FORCE_ON:
- s = "ON";
- break;
- }
-
- DRM_INFO("forcing %s connector %s\n",
- connector->name, s);
- connector->force = mode->force;
- }
-
- DRM_DEBUG_KMS("cmdline mode for connector %s %dx%d@%dHz%s%s%s\n",
- connector->name,
- mode->xres, mode->yres,
- mode->refresh_specified ? mode->refresh : 60,
- mode->rb ? " reduced blanking" : "",
- mode->margins ? " with margins" : "",
- mode->interlace ? " interlaced" : "");
- }
-
- }
- return 0;
-}
-
static void drm_fb_helper_save_lut_atomic(struct drm_crtc *crtc, struct drm_fb_helper *helper)
{
uint16_t *r_base, *g_base, *b_base;
drm_warn_on_modeset_not_all_locked(dev);
- list_for_each_entry(plane, &dev->mode_config.plane_list, head)
+ list_for_each_entry(plane, &dev->mode_config.plane_list, head) {
if (plane->type != DRM_PLANE_TYPE_PRIMARY)
drm_plane_force_disable(plane);
+ if (dev->mode_config.rotation_property) {
+ drm_mode_plane_set_obj_prop(plane,
+ dev->mode_config.rotation_property,
+ BIT(DRM_ROTATE_0));
+ }
+ }
+
for (i = 0; i < fb_helper->crtc_count; i++) {
struct drm_mode_set *mode_set = &fb_helper->crtc_info[i].mode_set;
struct drm_crtc *crtc = mode_set->crtc;
if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
continue;
- /* NOTE: we use lockless flag below to avoid grabbing other
- * modeset locks. So just trylock the underlying mutex
- * directly:
+ /*
+ * NOTE: Use trylock mode to avoid deadlocks and sleeping in
+ * panic context.
*/
- if (!mutex_trylock(&dev->mode_config.mutex)) {
+ if (__drm_modeset_lock_all(dev, true) != 0) {
error = true;
continue;
}
if (ret)
error = true;
- mutex_unlock(&dev->mode_config.mutex);
+ drm_modeset_unlock_all(dev);
}
return error;
}
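
Replacing the bare mutex_trylock() with __drm_modeset_lock_all(dev, true) keeps the must-not-sleep property (this restore path can run from the panic handler) while now covering the full set of modeset locks the restore touches. The underlying rule, in reduced form with a placeholder lock:

	/* In (possible) panic context, only ever trylock: the lock
	 * holder may be the CPU that crashed and will never release it.
	 * Report failure rather than waiting. */
	if (!mutex_trylock(&some_lock))
		return false;
	/* ...minimal restore work... */
	mutex_unlock(&some_lock);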
struct drm_fb_helper_connector *fb_helper_conn = fb_helper->connector_info[i];
struct drm_cmdline_mode *cmdline_mode;
- cmdline_mode = &fb_helper_conn->cmdline_mode;
+ cmdline_mode = &fb_helper_conn->connector->cmdline_mode;
if (cmdline_mode->bpp_specified) {
switch (cmdline_mode->bpp) {
static bool drm_has_cmdline_mode(struct drm_fb_helper_connector *fb_connector)
{
- struct drm_cmdline_mode *cmdline_mode;
- cmdline_mode = &fb_connector->cmdline_mode;
- return cmdline_mode->specified;
+ return fb_connector->connector->cmdline_mode.specified;
}
struct drm_display_mode *drm_pick_cmdline_mode(struct drm_fb_helper_connector *fb_helper_conn,
struct drm_display_mode *mode = NULL;
bool prefer_non_interlace;
- cmdline_mode = &fb_helper_conn->cmdline_mode;
+ cmdline_mode = &fb_helper_conn->connector->cmdline_mode;
if (cmdline_mode->specified == false)
return mode;
struct drm_device *dev = fb_helper->dev;
int count = 0;
- drm_fb_helper_parse_command_line(fb_helper);
-
mutex_lock(&dev->mode_config.mutex);
count = drm_fb_helper_probe_connector_modes(fb_helper,
dev->mode_config.max_width,
#include <linux/slab.h>
#include <linux/module.h>
#include "drm_legacy.h"
+#include "drm_internal.h"
/* from BKL pushdown */
DEFINE_MUTEX(drm_global_mutex);
-EXPORT_SYMBOL(drm_global_mutex);
static int drm_open_helper(struct file *filp, struct drm_minor *minor);
init_waitqueue_head(&priv->event_wait);
priv->event_space = 4096; /* set aside 4k for event buffer */
- if (dev->driver->driver_features & DRIVER_GEM)
+ if (drm_core_check_feature(dev, DRIVER_GEM))
drm_gem_open(dev, priv);
if (drm_core_check_feature(dev, DRIVER_PRIME))
out_prime_destroy:
if (drm_core_check_feature(dev, DRIVER_PRIME))
drm_prime_destroy_file_private(&priv->prime);
- if (dev->driver->driver_features & DRIVER_GEM)
+ if (drm_core_check_feature(dev, DRIVER_GEM))
drm_gem_release(dev, priv);
put_pid(priv->pid);
kfree(priv);
{
struct drm_file *file_priv = filp->private_data;
- if (drm_i_have_hw_lock(dev, file_priv)) {
+ if (drm_legacy_i_have_hw_lock(dev, file_priv)) {
DRM_DEBUG("File %p released, freeing lock for context %d\n",
filp, _DRM_LOCKING_CONTEXT(file_priv->master->lock.hw_lock->lock));
- drm_lock_free(&file_priv->master->lock,
- _DRM_LOCKING_CONTEXT(file_priv->master->lock.hw_lock->lock));
+ drm_legacy_lock_free(&file_priv->master->lock,
+ _DRM_LOCKING_CONTEXT(file_priv->master->lock.hw_lock->lock));
}
}
*/
int drm_lastclose(struct drm_device * dev)
{
- struct drm_vma_entry *vma, *vma_temp;
-
DRM_DEBUG("\n");
if (dev->driver->lastclose)
drm_agp_clear(dev);
drm_legacy_sg_cleanup(dev);
-
- /* Clear vma list (only built for debugging) */
- list_for_each_entry_safe(vma, vma_temp, &dev->vmalist, head) {
- list_del(&vma->head);
- kfree(vma);
- }
-
+ drm_legacy_vma_flush(dev);
drm_legacy_dma_takedown(dev);
mutex_unlock(&dev->struct_mutex);
drm_master_release(dev, filp);
if (drm_core_check_feature(dev, DRIVER_HAVE_DMA))
- drm_core_reclaim_buffers(dev, file_priv);
+ drm_legacy_reclaim_buffers(dev, file_priv);
drm_events_release(file_priv);
- if (dev->driver->driver_features & DRIVER_MODESET)
+ if (drm_core_check_feature(dev, DRIVER_MODESET))
drm_fb_release(file_priv);
- if (dev->driver->driver_features & DRIVER_GEM)
+ if (drm_core_check_feature(dev, DRIVER_GEM))
drm_gem_release(dev, file_priv);
drm_legacy_ctxbitmap_flush(dev, file_priv);
if (drm_core_check_feature(dev, DRIVER_PRIME))
drm_prime_destroy_file_private(&file_priv->prime);
+ WARN_ON(!list_empty(&file_priv->event_list));
+
put_pid(file_priv->pid);
kfree(file_priv);
#include <linux/dma-buf.h>
#include <drm/drmP.h>
#include <drm/drm_vma_manager.h>
+#include <drm/drm_gem.h>
+#include "drm_internal.h"
/** @file drm_gem.c
*
EXPORT_SYMBOL(drm_gem_object_init);
/**
- * drm_gem_object_init - initialize an allocated private GEM object
+ * drm_gem_private_object_init - initialize an allocated private GEM object
* @dev: drm_device the object should be initialized for
* @obj: drm_gem_object to initialize
* @size: object size
struct drm_gem_close *args = data;
int ret;
- if (!(dev->driver->driver_features & DRIVER_GEM))
+ if (!drm_core_check_feature(dev, DRIVER_GEM))
return -ENODEV;
ret = drm_gem_handle_delete(file_priv, args->handle);
struct drm_gem_object *obj;
int ret;
- if (!(dev->driver->driver_features & DRIVER_GEM))
+ if (!drm_core_check_feature(dev, DRIVER_GEM))
return -ENODEV;
obj = drm_gem_object_lookup(dev, file_priv, args->handle);
int ret;
u32 handle;
- if (!(dev->driver->driver_features & DRIVER_GEM))
+ if (!drm_core_check_feature(dev, DRIVER_GEM))
return -ENODEV;
mutex_lock(&dev->object_name_lock);
vma_pages(vma));
if (!node) {
mutex_unlock(&dev->struct_mutex);
- return drm_mmap(filp, vma);
+ return -EINVAL;
} else if (!drm_vma_node_is_allowed(node, filp)) {
mutex_unlock(&dev->struct_mutex);
return -EACCES;
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_get_sg_table);
struct drm_gem_object *
-drm_gem_cma_prime_import_sg_table(struct drm_device *dev, size_t size,
+drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
+ struct dma_buf_attachment *attach,
struct sg_table *sgt)
{
struct drm_gem_cma_object *cma_obj;
return ERR_PTR(-EINVAL);
/* Create a CMA GEM buffer. */
- cma_obj = __drm_gem_cma_create(dev, size);
+ cma_obj = __drm_gem_cma_create(dev, attach->dmabuf->size);
if (IS_ERR(cma_obj))
return ERR_CAST(cma_obj);
cma_obj->paddr = sg_dma_address(sgt->sgl);
cma_obj->sgt = sgt;
- DRM_DEBUG_PRIME("dma_addr = %pad, size = %zu\n", &cma_obj->paddr, size);
+ DRM_DEBUG_PRIME("dma_addr = %pad, size = %zu\n", &cma_obj->paddr, attach->dmabuf->size);
return &cma_obj->base;
}
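
The explicit size parameter is gone because the authoritative length already lives on the attachment's dma_buf. With the new signature the helper can be wired straight into a driver, sketched here on a hypothetical driver struct (assuming the gem_prime_import_sg_table hook was updated to match):

	static struct drm_driver foo_driver = {
		/* ... */
		.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table,
		/* ... */
	};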
#include <linux/seq_file.h>
#include <drm/drmP.h>
+#include <drm/drm_gem.h>
+
+#include "drm_legacy.h"
/**
* Called when "/proc/dri/.../name" is read.
struct drm_device *dev = node->minor->dev;
struct drm_file *priv;
+ seq_printf(m,
+ "%20s %5s %3s master a %5s %10s\n",
+ "command",
+ "pid",
+ "dev",
+ "uid",
+ "magic");
+
+ /* dev->filelist is sorted youngest first, but we want to present
+ * oldest first (i.e. kernel, servers, clients), so walk backwards.
+ */
mutex_lock(&dev->struct_mutex);
- seq_printf(m, "a dev pid uid magic\n\n");
- list_for_each_entry(priv, &dev->filelist, lhead) {
- seq_printf(m, "%c %3d %5d %5d %10u\n",
- priv->authenticated ? 'y' : 'n',
- priv->minor->index,
+ list_for_each_entry_reverse(priv, &dev->filelist, lhead) {
+ struct task_struct *task;
+
+ rcu_read_lock(); /* locks pid_task()->comm */
+ task = pid_task(priv->pid, PIDTYPE_PID);
+ seq_printf(m, "%20s %5d %3d %c %c %5d %10u\n",
+ task ? task->comm : "<unknown>",
pid_vnr(priv->pid),
+ priv->minor->index,
+ priv->is_master ? 'y' : 'n',
+ priv->authenticated ? 'y' : 'n',
from_kuid_munged(seq_user_ns(m), priv->uid),
priv->magic);
+ rcu_read_unlock();
}
mutex_unlock(&dev->struct_mutex);
return 0;
return 0;
}
-
-#if DRM_DEBUG_CODE
-
-int drm_vma_info(struct seq_file *m, void *data)
-{
- struct drm_info_node *node = (struct drm_info_node *) m->private;
- struct drm_device *dev = node->minor->dev;
- struct drm_vma_entry *pt;
- struct vm_area_struct *vma;
- unsigned long vma_count = 0;
-#if defined(__i386__)
- unsigned int pgprot;
-#endif
-
- mutex_lock(&dev->struct_mutex);
- list_for_each_entry(pt, &dev->vmalist, head)
- vma_count++;
-
- seq_printf(m, "vma use count: %lu, high_memory = %pK, 0x%pK\n",
- vma_count, high_memory,
- (void *)(unsigned long)virt_to_phys(high_memory));
-
- list_for_each_entry(pt, &dev->vmalist, head) {
- vma = pt->vma;
- if (!vma)
- continue;
- seq_printf(m,
- "\n%5d 0x%pK-0x%pK %c%c%c%c%c%c 0x%08lx000",
- pt->pid,
- (void *)vma->vm_start, (void *)vma->vm_end,
- vma->vm_flags & VM_READ ? 'r' : '-',
- vma->vm_flags & VM_WRITE ? 'w' : '-',
- vma->vm_flags & VM_EXEC ? 'x' : '-',
- vma->vm_flags & VM_MAYSHARE ? 's' : 'p',
- vma->vm_flags & VM_LOCKED ? 'l' : '-',
- vma->vm_flags & VM_IO ? 'i' : '-',
- vma->vm_pgoff);
-
-#if defined(__i386__)
- pgprot = pgprot_val(vma->vm_page_prot);
- seq_printf(m, " %c%c%c%c%c%c%c%c%c",
- pgprot & _PAGE_PRESENT ? 'p' : '-',
- pgprot & _PAGE_RW ? 'w' : 'r',
- pgprot & _PAGE_USER ? 'u' : 's',
- pgprot & _PAGE_PWT ? 't' : 'b',
- pgprot & _PAGE_PCD ? 'u' : 'c',
- pgprot & _PAGE_ACCESSED ? 'a' : '-',
- pgprot & _PAGE_DIRTY ? 'd' : '-',
- pgprot & _PAGE_PSE ? 'm' : 'k',
- pgprot & _PAGE_GLOBAL ? 'g' : 'l');
-#endif
- seq_printf(m, "\n");
- }
- mutex_unlock(&dev->struct_mutex);
- return 0;
-}
-
-#endif
-
--- /dev/null
+/*
+ * Copyright © 2014 Intel Corporation
+ * Daniel Vetter <daniel.vetter@ffwll.ch>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/* drm_irq.c */
+extern unsigned int drm_timestamp_monotonic;
+
+/* drm_fops.c */
+extern struct mutex drm_global_mutex;
+int drm_lastclose(struct drm_device *dev);
+
+/* drm_pci.c */
+int drm_pci_set_unique(struct drm_device *dev,
+ struct drm_master *master,
+ struct drm_unique *u);
+int drm_irq_by_busid(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+
+/* drm_vm.c */
+int drm_vma_info(struct seq_file *m, void *data);
+void drm_vm_open_locked(struct drm_device *dev, struct vm_area_struct *vma);
+void drm_vm_close_locked(struct drm_device *dev, struct vm_area_struct *vma);
+
+/* drm_prime.c */
+int drm_prime_handle_to_fd_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+
+void drm_prime_init_file_private(struct drm_prime_file_private *prime_fpriv);
+void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv);
+void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpriv,
+ struct dma_buf *dma_buf);
+
+/* drm_info.c */
+int drm_name_info(struct seq_file *m, void *data);
+int drm_vm_info(struct seq_file *m, void *data);
+int drm_bufs_info(struct seq_file *m, void *data);
+int drm_vblank_info(struct seq_file *m, void *data);
+int drm_clients_info(struct seq_file *m, void *data);
+int drm_gem_name_info(struct seq_file *m, void *data);
+
+/* drm_irq.c */
+int drm_control(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+int drm_modeset_ctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+
+/* drm_auth.c */
+int drm_getmagic(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+int drm_authmagic(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+int drm_remove_magic(struct drm_master *master, drm_magic_t magic);
+
+/* drm_sysfs.c */
+extern struct class *drm_class;
+
+struct class *drm_sysfs_create(struct module *owner, char *name);
+void drm_sysfs_destroy(void);
+struct device *drm_sysfs_minor_alloc(struct drm_minor *minor);
+int drm_sysfs_connector_add(struct drm_connector *connector);
+void drm_sysfs_connector_remove(struct drm_connector *connector);
+
+/* drm_gem.c */
+int drm_gem_init(struct drm_device *dev);
+void drm_gem_destroy(struct drm_device *dev);
+int drm_gem_handle_create_tail(struct drm_file *file_priv,
+ struct drm_gem_object *obj,
+ u32 *handlep);
+int drm_gem_close_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+int drm_gem_flink_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+int drm_gem_open_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+void drm_gem_open(struct drm_device *dev, struct drm_file *file_private);
+void drm_gem_release(struct drm_device *dev, struct drm_file *file_private);
+
+/* drm_drv.c */
+int drm_setmaster_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+int drm_dropmaster_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+struct drm_master *drm_master_create(struct drm_minor *minor);
+
+/* drm_debugfs.c */
+#if defined(CONFIG_DEBUG_FS)
+int drm_debugfs_init(struct drm_minor *minor, int minor_id,
+ struct dentry *root);
+int drm_debugfs_cleanup(struct drm_minor *minor);
+int drm_debugfs_connector_add(struct drm_connector *connector);
+void drm_debugfs_connector_remove(struct drm_connector *connector);
+#else
+static inline int drm_debugfs_init(struct drm_minor *minor, int minor_id,
+ struct dentry *root)
+{
+ return 0;
+}
+
+static inline int drm_debugfs_cleanup(struct drm_minor *minor)
+{
+ return 0;
+}
+
+static inline int drm_debugfs_connector_add(struct drm_connector *connector)
+{
+ return 0;
+}
+static inline void drm_debugfs_connector_remove(struct drm_connector *connector)
+{
+}
+#endif
#include <drm/drmP.h>
#include <drm/drm_core.h>
#include "drm_legacy.h"
+#include "drm_internal.h"
#include <linux/pci.h>
#include <linux/export.h>
static int drm_version(struct drm_device *dev, void *data,
struct drm_file *file_priv);
-#define DRM_IOCTL_DEF(ioctl, _func, _flags) \
- [DRM_IOCTL_NR(ioctl)] = {.cmd = ioctl, .func = _func, .flags = _flags, .cmd_drv = 0, .name = #ioctl}
-
-/** Ioctl table */
-static const struct drm_ioctl_desc drm_ioctls[] = {
- DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version, DRM_UNLOCKED|DRM_RENDER_ALLOW),
- DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, 0),
- DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, drm_getmagic, 0),
- DRM_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_irq_by_busid, DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_GET_MAP, drm_getmap, DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_getclient, DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_getstats, DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_GET_CAP, drm_getcap, DRM_UNLOCKED|DRM_RENDER_ALLOW),
- DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_setclientcap, 0),
- DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_setversion, DRM_MASTER),
-
- DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_setunique, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_BLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_UNBLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AUTH_MAGIC, drm_authmagic, DRM_AUTH|DRM_MASTER),
-
- DRM_IOCTL_DEF(DRM_IOCTL_ADD_MAP, drm_addmap_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_RM_MAP, drm_rmmap_ioctl, DRM_AUTH),
-
- DRM_IOCTL_DEF(DRM_IOCTL_SET_SAREA_CTX, drm_legacy_setsareactx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_GET_SAREA_CTX, drm_legacy_getsareactx, DRM_AUTH),
-
- DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
-
- DRM_IOCTL_DEF(DRM_IOCTL_ADD_CTX, drm_legacy_addctx, DRM_AUTH|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_RM_CTX, drm_legacy_rmctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_MOD_CTX, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_GET_CTX, drm_legacy_getctx, DRM_AUTH),
- DRM_IOCTL_DEF(DRM_IOCTL_SWITCH_CTX, drm_legacy_switchctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_NEW_CTX, drm_legacy_newctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_RES_CTX, drm_legacy_resctx, DRM_AUTH),
-
- DRM_IOCTL_DEF(DRM_IOCTL_ADD_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_RM_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
-
- DRM_IOCTL_DEF(DRM_IOCTL_LOCK, drm_lock, DRM_AUTH),
- DRM_IOCTL_DEF(DRM_IOCTL_UNLOCK, drm_unlock, DRM_AUTH),
-
- DRM_IOCTL_DEF(DRM_IOCTL_FINISH, drm_noop, DRM_AUTH),
-
- DRM_IOCTL_DEF(DRM_IOCTL_ADD_BUFS, drm_addbufs, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_MARK_BUFS, drm_markbufs, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_INFO_BUFS, drm_infobufs, DRM_AUTH),
- DRM_IOCTL_DEF(DRM_IOCTL_MAP_BUFS, drm_mapbufs, DRM_AUTH),
- DRM_IOCTL_DEF(DRM_IOCTL_FREE_BUFS, drm_freebufs, DRM_AUTH),
- DRM_IOCTL_DEF(DRM_IOCTL_DMA, drm_dma_ioctl, DRM_AUTH),
-
- DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
-
-#if __OS_HAS_AGP
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_ACQUIRE, drm_agp_acquire_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_RELEASE, drm_agp_release_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_ENABLE, drm_agp_enable_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_INFO, drm_agp_info_ioctl, DRM_AUTH),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_ALLOC, drm_agp_alloc_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_FREE, drm_agp_free_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_BIND, drm_agp_bind_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_UNBIND, drm_agp_unbind_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
-#endif
-
- DRM_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_sg_alloc, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_SG_FREE, drm_sg_free, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
-
- DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, DRM_UNLOCKED),
-
- DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_modeset_ctl, 0),
-
- DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
-
- DRM_IOCTL_DEF(DRM_IOCTL_GEM_CLOSE, drm_gem_close_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
- DRM_IOCTL_DEF(DRM_IOCTL_GEM_FLINK, drm_gem_flink_ioctl, DRM_AUTH|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_GEM_OPEN, drm_gem_open_ioctl, DRM_AUTH|DRM_UNLOCKED),
-
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETRESOURCES, drm_mode_getresources, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
-
- DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, drm_prime_handle_to_fd_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
- DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, drm_prime_fd_to_handle_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
-
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANERESOURCES, drm_mode_getplane_res, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANE, drm_mode_getplane, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPLANE, drm_mode_setplane, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR, drm_mode_cursor_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETGAMMA, drm_mode_gamma_get_ioctl, DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETGAMMA, drm_mode_gamma_set_ioctl, DRM_MASTER|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETENCODER, drm_mode_getencoder, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCONNECTOR, drm_mode_getconnector, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATTACHMODE, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_DETACHMODE, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPERTY, drm_mode_getproperty_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPROPERTY, drm_mode_connector_property_set_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPBLOB, drm_mode_getblob_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETFB, drm_mode_getfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, drm_mode_addfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, drm_mode_addfb2, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_RMFB, drm_mode_rmfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_PAGE_FLIP, drm_mode_page_flip_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_DIRTYFB, drm_mode_dirtyfb_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_CREATE_DUMB, drm_mode_create_dumb_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_MAP_DUMB, drm_mode_mmap_dumb_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_DESTROY_DUMB, drm_mode_destroy_dumb_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_GETPROPERTIES, drm_mode_obj_get_properties_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_SETPROPERTY, drm_mode_obj_set_property_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR2, drm_mode_cursor2_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
-};
-
-#define DRM_CORE_IOCTL_COUNT ARRAY_SIZE( drm_ioctls )
-
/**
* Get the bus id.
*
*
* Copies the bus id from drm_device::unique into user space.
*/
-int drm_getunique(struct drm_device *dev, void *data,
+static int drm_getunique(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_unique *u = data;
kfree(master->unique);
master->unique = NULL;
master->unique_len = 0;
- master->unique_size = 0;
}
/**
* version 1.1 or greater. Also note that KMS is all version 1.1 and later and
* UMS was only ever supported on pci devices.
*/
-int drm_setunique(struct drm_device *dev, void *data,
+static int drm_setunique(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_unique *u = data;
if (master->unique != NULL)
drm_unset_busid(dev, master);
- if (dev->driver->bus && dev->driver->bus->set_busid) {
- ret = dev->driver->bus->set_busid(dev, master);
+ if (dev->driver->set_busid) {
+ ret = dev->driver->set_busid(dev, master);
if (ret) {
drm_unset_busid(dev, master);
return ret;
}
} else {
if (WARN(dev->unique == NULL,
- "No drm_bus.set_busid() implementation provided by "
+ "No drm_driver.set_busid() implementation provided by "
"%ps. Use drm_dev_set_unique() to set the unique "
"name explicitly.", dev->driver))
return -EINVAL;
* Searches for the mapping with the specified offset and copies its information
* into userspace
*/
-int drm_getmap(struct drm_device *dev, void *data,
+static int drm_getmap(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_map *map = data;
* Searches for the client with the specified index and copies its information
* into userspace
*/
-int drm_getclient(struct drm_device *dev, void *data,
+static int drm_getclient(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_client *client = data;
*
* \return zero on success or a negative number on failure.
*/
-int drm_getstats(struct drm_device *dev, void *data,
+static int drm_getstats(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_stats *stats = data;
/**
* Get device/driver capabilities
*/
-int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_priv)
+static int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_priv)
{
struct drm_get_cap *req = data;
/**
* Set device/driver capabilities
*/
-int
+static int
drm_setclientcap(struct drm_device *dev, void *data, struct drm_file *file_priv)
{
struct drm_set_client_cap *req = data;
*
* Sets the requested interface version
*/
-int drm_setversion(struct drm_device *dev, void *data, struct drm_file *file_priv)
+static int drm_setversion(struct drm_device *dev, void *data, struct drm_file *file_priv)
{
struct drm_set_version *sv = data;
int if_version, retcode = 0;
return 0;
}
+#define DRM_IOCTL_DEF(ioctl, _func, _flags) \
+ [DRM_IOCTL_NR(ioctl)] = {.cmd = ioctl, .func = _func, .flags = _flags, .cmd_drv = 0, .name = #ioctl}
+
+/** Ioctl table */
+static const struct drm_ioctl_desc drm_ioctls[] = {
+ DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version, DRM_UNLOCKED|DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, 0),
+ DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, drm_getmagic, 0),
+ DRM_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_irq_by_busid, DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_GET_MAP, drm_getmap, DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_getclient, DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_getstats, DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_GET_CAP, drm_getcap, DRM_UNLOCKED|DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_setclientcap, 0),
+ DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_setversion, DRM_MASTER),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_setunique, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_BLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_UNBLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_AUTH_MAGIC, drm_authmagic, DRM_AUTH|DRM_MASTER),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_ADD_MAP, drm_legacy_addmap_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_RM_MAP, drm_legacy_rmmap_ioctl, DRM_AUTH),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_SET_SAREA_CTX, drm_legacy_setsareactx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_GET_SAREA_CTX, drm_legacy_getsareactx, DRM_AUTH),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_ADD_CTX, drm_legacy_addctx, DRM_AUTH|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_RM_CTX, drm_legacy_rmctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_MOD_CTX, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_GET_CTX, drm_legacy_getctx, DRM_AUTH),
+ DRM_IOCTL_DEF(DRM_IOCTL_SWITCH_CTX, drm_legacy_switchctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_NEW_CTX, drm_legacy_newctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_RES_CTX, drm_legacy_resctx, DRM_AUTH),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_ADD_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_RM_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_LOCK, drm_legacy_lock, DRM_AUTH),
+ DRM_IOCTL_DEF(DRM_IOCTL_UNLOCK, drm_legacy_unlock, DRM_AUTH),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_FINISH, drm_noop, DRM_AUTH),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_ADD_BUFS, drm_legacy_addbufs, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_MARK_BUFS, drm_legacy_markbufs, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_INFO_BUFS, drm_legacy_infobufs, DRM_AUTH),
+ DRM_IOCTL_DEF(DRM_IOCTL_MAP_BUFS, drm_legacy_mapbufs, DRM_AUTH),
+ DRM_IOCTL_DEF(DRM_IOCTL_FREE_BUFS, drm_legacy_freebufs, DRM_AUTH),
+ DRM_IOCTL_DEF(DRM_IOCTL_DMA, drm_legacy_dma_ioctl, DRM_AUTH),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+
+#if __OS_HAS_AGP
+ DRM_IOCTL_DEF(DRM_IOCTL_AGP_ACQUIRE, drm_agp_acquire_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_AGP_RELEASE, drm_agp_release_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_AGP_ENABLE, drm_agp_enable_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_AGP_INFO, drm_agp_info_ioctl, DRM_AUTH),
+ DRM_IOCTL_DEF(DRM_IOCTL_AGP_ALLOC, drm_agp_alloc_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_AGP_FREE, drm_agp_free_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_AGP_BIND, drm_agp_bind_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_AGP_UNBIND, drm_agp_unbind_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+#endif
+
+ DRM_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_legacy_sg_alloc, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_IOCTL_DEF(DRM_IOCTL_SG_FREE, drm_legacy_sg_free, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, DRM_UNLOCKED),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_modeset_ctl, 0),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_GEM_CLOSE, drm_gem_close_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF(DRM_IOCTL_GEM_FLINK, drm_gem_flink_ioctl, DRM_AUTH|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_GEM_OPEN, drm_gem_open_ioctl, DRM_AUTH|DRM_UNLOCKED),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETRESOURCES, drm_mode_getresources, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, drm_prime_handle_to_fd_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, drm_prime_fd_to_handle_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
+
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANERESOURCES, drm_mode_getplane_res, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANE, drm_mode_getplane, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPLANE, drm_mode_setplane, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR, drm_mode_cursor_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETGAMMA, drm_mode_gamma_get_ioctl, DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETGAMMA, drm_mode_gamma_set_ioctl, DRM_MASTER|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETENCODER, drm_mode_getencoder, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCONNECTOR, drm_mode_getconnector, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATTACHMODE, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_DETACHMODE, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPERTY, drm_mode_getproperty_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPROPERTY, drm_mode_connector_property_set_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPBLOB, drm_mode_getblob_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETFB, drm_mode_getfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, drm_mode_addfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, drm_mode_addfb2, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_RMFB, drm_mode_rmfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_PAGE_FLIP, drm_mode_page_flip_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_DIRTYFB, drm_mode_dirtyfb_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_CREATE_DUMB, drm_mode_create_dumb_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_MAP_DUMB, drm_mode_mmap_dumb_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_DESTROY_DUMB, drm_mode_destroy_dumb_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_GETPROPERTIES, drm_mode_obj_get_properties_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_SETPROPERTY, drm_mode_obj_set_property_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR2, drm_mode_cursor2_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+};
+
+#define DRM_CORE_IOCTL_COUNT ARRAY_SIZE( drm_ioctls )
+
/**
* Called whenever a process performs an ioctl on /dev/drm.
*
#include <drm/drmP.h>
#include "drm_trace.h"
+#include "drm_internal.h"
#include <linux/interrupt.h> /* For task queue support */
#include <linux/slab.h>
*/
#define DRM_REDUNDANT_VBLIRQ_THRESH_NS 1000000
+static bool
+drm_get_last_vbltimestamp(struct drm_device *dev, int crtc,
+ struct timeval *tvblank, unsigned flags);
+
+static unsigned int drm_timestamp_precision = 20; /* Default to 20 usecs. */
+
/*
- * Clear vblank timestamp buffer for a crtc.
+ * Default to use monotonic timestamps for wait-for-vblank and page-flip
+ * complete events.
+ */
+unsigned int drm_timestamp_monotonic = 1;
+
+static int drm_vblank_offdelay = 5000; /* Default to 5000 msecs. */
+
+module_param_named(vblankoffdelay, drm_vblank_offdelay, int, 0600);
+module_param_named(timestamp_precision_usec, drm_timestamp_precision, int, 0600);
+module_param_named(timestamp_monotonic, drm_timestamp_monotonic, int, 0600);
+
+/**
+ * drm_update_vblank_count - update the master vblank counter
+ * @dev: DRM device
+ * @crtc: counter to update
+ *
+ * Call back into the driver to update the appropriate vblank counter
+ * (specified by @crtc). Deal with wraparound, if it occurred, and
+ * update the last read value so we can deal with wraparound on the next
+ * call if necessary.
+ *
+ * Only necessary when going from off->on, to account for frames we
+ * didn't get an interrupt for.
+ *
+ * Note: caller must hold dev->vbl_lock since this reads & writes
+ * device vblank fields.
*/
-static void clear_vblank_timestamps(struct drm_device *dev, int crtc)
+static void drm_update_vblank_count(struct drm_device *dev, int crtc)
{
- memset(dev->vblank[crtc].time, 0, sizeof(dev->vblank[crtc].time));
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
+ u32 cur_vblank, diff, tslot;
+ bool rc;
+ struct timeval t_vblank;
+
+ /*
+ * Interrupts were disabled prior to this call, so deal with counter
+ * wrap if needed.
+ * NOTE! It's possible we lost a full dev->max_vblank_count events
+ * here if the register is small or we had vblank interrupts off for
+ * a long time.
+ *
+ * We repeat the hardware vblank counter & timestamp query until
+ * we get consistent results. This is done to prevent races between the
+ * gpu updating its hardware counter and us retrieving the
+ * corresponding vblank timestamp.
+ */
+ do {
+ cur_vblank = dev->driver->get_vblank_counter(dev, crtc);
+ rc = drm_get_last_vbltimestamp(dev, crtc, &t_vblank, 0);
+ } while (cur_vblank != dev->driver->get_vblank_counter(dev, crtc));
+
+ /* Deal with counter wrap */
+ diff = cur_vblank - vblank->last;
+ if (cur_vblank < vblank->last) {
+ diff += dev->max_vblank_count;
+
+ DRM_DEBUG("last_vblank[%d]=0x%x, cur_vblank=0x%x => diff=0x%x\n",
+ crtc, vblank->last, cur_vblank, diff);
+ }
+
+ DRM_DEBUG("updating vblank count on crtc %d, missed %d\n",
+ crtc, diff);
+
+ if (diff == 0)
+ return;
+
+ /* Reinitialize corresponding vblank timestamp if high-precision query
+ * available. Skip this step if query unsupported or failed. Will
+ * reinitialize delayed at next vblank interrupt in that case.
+ */
+ if (rc) {
+ tslot = atomic_read(&vblank->count) + diff;
+ vblanktimestamp(dev, crtc, tslot) = t_vblank;
+ }
+
+ smp_mb__before_atomic();
+ atomic_add(diff, &vblank->count);
+ smp_mb__after_atomic();
}
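
The wraparound handling above is easiest to see with concrete numbers. A standalone sketch (plain userspace C, not kernel code; the 24-bit hardware counter width is an assumption for illustration):

	#include <stdint.h>
	#include <stdio.h>

	/* Mirrors the diff computation in drm_update_vblank_count():
	 * u32 subtraction wraps mod 2^32, and adding max_vblank_count
	 * compensates for a hardware counter narrower than 32 bits. */
	static uint32_t vblank_diff(uint32_t cur, uint32_t last, uint32_t max)
	{
		uint32_t diff = cur - last;

		if (cur < last)
			diff += max;
		return diff;
	}

	int main(void)
	{
		/* hw counter wrapped from 0xfffffe to 0x000003: 5 frames */
		printf("%u\n", (unsigned)vblank_diff(0x000003, 0xfffffe, 0x1000000));
		return 0;
	}
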
/*
*/
static void vblank_disable_and_save(struct drm_device *dev, int crtc)
{
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
unsigned long irqflags;
u32 vblcount;
s64 diff_ns;
- int vblrc;
+ bool vblrc;
struct timeval tvblank;
int count = DRM_TIMESTAMP_MAXRETRIES;
*/
spin_lock_irqsave(&dev->vblank_time_lock, irqflags);
+ /*
+	 * If the vblank interrupt was already disabled, update the count
+	 * and timestamp to maintain the appearance that the counter
+	 * has been ticking all along until this time. This makes the
+	 * count account for the entire time between drm_vblank_on() and
+	 * drm_vblank_off().
+	 *
+	 * But only do this if precise vblank timestamps are available.
+	 * Otherwise we might read a totally bogus timestamp since drivers
+	 * lacking precise timestamp support rely upon sampling the system
+	 * clock at vblank interrupt time, which obviously won't work out
+	 * well if the vblank interrupt is disabled.
+ */
+ if (!vblank->enabled &&
+ drm_get_last_vbltimestamp(dev, crtc, &tvblank, 0)) {
+ drm_update_vblank_count(dev, crtc);
+ spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
+ return;
+ }
+
dev->driver->disable_vblank(dev, crtc);
- dev->vblank[crtc].enabled = false;
+ vblank->enabled = false;
/* No further vblank irq's will be processed after
* this point. Get current hardware vblank count and
* delayed gpu counter increment.
*/
do {
- dev->vblank[crtc].last = dev->driver->get_vblank_counter(dev, crtc);
+ vblank->last = dev->driver->get_vblank_counter(dev, crtc);
vblrc = drm_get_last_vbltimestamp(dev, crtc, &tvblank, 0);
- } while (dev->vblank[crtc].last != dev->driver->get_vblank_counter(dev, crtc) && (--count) && vblrc);
+ } while (vblank->last != dev->driver->get_vblank_counter(dev, crtc) && (--count) && vblrc);
if (!count)
vblrc = 0;
/* Compute time difference to stored timestamp of last vblank
* as updated by last invocation of drm_handle_vblank() in vblank irq.
*/
- vblcount = atomic_read(&dev->vblank[crtc].count);
+ vblcount = atomic_read(&vblank->count);
diff_ns = timeval_to_ns(&tvblank) -
timeval_to_ns(&vblanktimestamp(dev, crtc, vblcount));
* available. In that case we can't account for this and just
* hope for the best.
*/
- if ((vblrc > 0) && (abs64(diff_ns) > 1000000)) {
- atomic_inc(&dev->vblank[crtc].count);
+ if (vblrc && (abs64(diff_ns) > 1000000)) {
+ /* Store new timestamp in ringbuffer. */
+ vblanktimestamp(dev, crtc, vblcount + 1) = tvblank;
+
+ /* Increment cooked vblank count. This also atomically commits
+ * the timestamp computed above.
+ */
+ smp_mb__before_atomic();
+ atomic_inc(&vblank->count);
smp_mb__after_atomic();
}
- /* Invalidate all timestamps while vblank irq's are off. */
- clear_vblank_timestamps(dev, crtc);
-
spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
}
void drm_vblank_cleanup(struct drm_device *dev)
{
int crtc;
+ unsigned long irqflags;
/* Bail if the driver didn't call drm_vblank_init() */
if (dev->num_crtcs == 0)
return;
for (crtc = 0; crtc < dev->num_crtcs; crtc++) {
- del_timer_sync(&dev->vblank[crtc].disable_timer);
- vblank_disable_fn((unsigned long)&dev->vblank[crtc]);
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
+
+ del_timer_sync(&vblank->disable_timer);
+
+ spin_lock_irqsave(&dev->vbl_lock, irqflags);
+ vblank_disable_and_save(dev, crtc);
+ spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
}
kfree(dev->vblank);
goto err;
for (i = 0; i < num_crtcs; i++) {
- dev->vblank[i].dev = dev;
- dev->vblank[i].crtc = i;
- init_waitqueue_head(&dev->vblank[i].queue);
- setup_timer(&dev->vblank[i].disable_timer, vblank_disable_fn,
- (unsigned long)&dev->vblank[i]);
+ struct drm_vblank_crtc *vblank = &dev->vblank[i];
+
+ vblank->dev = dev;
+ vblank->crtc = i;
+ init_waitqueue_head(&vblank->queue);
+ setup_timer(&vblank->disable_timer, vblank_disable_fn,
+ (unsigned long)vblank);
}
DRM_INFO("Supports vblank timestamp caching Rev 2 (21.10.2013).\n");
return 0;
err:
- drm_vblank_cleanup(dev);
+ dev->num_crtcs = 0;
return ret;
}
EXPORT_SYMBOL(drm_vblank_init);
if (dev->num_crtcs) {
spin_lock_irqsave(&dev->vbl_lock, irqflags);
for (i = 0; i < dev->num_crtcs; i++) {
- wake_up(&dev->vblank[i].queue);
- dev->vblank[i].enabled = false;
- dev->vblank[i].last =
+ struct drm_vblank_crtc *vblank = &dev->vblank[i];
+
+ wake_up(&vblank->queue);
+ vblank->enabled = false;
+ vblank->last =
dev->driver->get_vblank_counter(dev, i);
}
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
* within vblank area, counting down the number of lines until
* start of scanout.
*/
- invbl = vbl_status & DRM_SCANOUTPOS_INVBL;
+ invbl = vbl_status & DRM_SCANOUTPOS_IN_VBLANK;
/* Convert scanout position into elapsed time at raw_time query
* since start of scanout at first display scanline. delta_ns
vbl_status = DRM_VBLANKTIME_SCANOUTPOS_METHOD;
if (invbl)
- vbl_status |= DRM_VBLANKTIME_INVBL;
+ vbl_status |= DRM_VBLANKTIME_IN_VBLANK;
return vbl_status;
}
* call, i.e., it isn't very precisely locked to the true vblank.
*
* Returns:
- * Non-zero if timestamp is considered to be very precise, zero otherwise.
+ * True if timestamp is considered to be very precise, false otherwise.
*/
-u32 drm_get_last_vbltimestamp(struct drm_device *dev, int crtc,
- struct timeval *tvblank, unsigned flags)
+static bool
+drm_get_last_vbltimestamp(struct drm_device *dev, int crtc,
+ struct timeval *tvblank, unsigned flags)
{
int ret;
ret = dev->driver->get_vblank_timestamp(dev, crtc, &max_error,
tvblank, flags);
if (ret > 0)
- return (u32) ret;
+ return true;
}
/* GPU high precision timestamp query unsupported or failed.
*/
*tvblank = get_drm_timestamp();
- return 0;
+ return false;
}
-EXPORT_SYMBOL(drm_get_last_vbltimestamp);
/**
* drm_vblank_count - retrieve "cooked" vblank counter value
*/
u32 drm_vblank_count(struct drm_device *dev, int crtc)
{
- return atomic_read(&dev->vblank[crtc].count);
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
+
+ if (WARN_ON(crtc >= dev->num_crtcs))
+ return 0;
+ return atomic_read(&vblank->count);
}
EXPORT_SYMBOL(drm_vblank_count);
u32 drm_vblank_count_and_time(struct drm_device *dev, int crtc,
struct timeval *vblanktime)
{
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
u32 cur_vblank;
+ if (WARN_ON(crtc >= dev->num_crtcs))
+ return 0;
+
/* Read timestamp from slot of _vblank_time ringbuffer
* that corresponds to current vblank count. Retry if
* count has incremented during readout. This works like
* a seqlock.
*/
do {
- cur_vblank = atomic_read(&dev->vblank[crtc].count);
+ cur_vblank = atomic_read(&vblank->count);
*vblanktime = vblanktimestamp(dev, crtc, cur_vblank);
smp_rmb();
- } while (cur_vblank != atomic_read(&dev->vblank[crtc].count));
+ } while (cur_vblank != atomic_read(&vblank->count));
return cur_vblank;
}
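
The retry loop above works like a seqlock: the counter is read before and after the timestamp fetch, and the fetch is repeated until both reads agree. A hypothetical standalone sketch of the same pattern (C11 atomics standing in for atomic_read()/smp_rmb(); the two-slot ringbuffer is an assumption for brevity):

	#include <stdatomic.h>

	struct vbl_sample {
		atomic_uint count;
		long long time[2];	/* stand-in for the timestamp ringbuffer */
	};

	static unsigned int read_count_and_time(struct vbl_sample *s, long long *t)
	{
		unsigned int cur;

		do {
			cur = atomic_load(&s->count);
			*t = s->time[cur & 1];	/* slot picked by current count */
			/* plays the smp_rmb() role in the kernel version */
			atomic_thread_fence(memory_order_acquire);
		} while (cur != atomic_load(&s->count));

		return cur;
	}
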
}
EXPORT_SYMBOL(drm_send_vblank_event);
-/**
- * drm_update_vblank_count - update the master vblank counter
- * @dev: DRM device
- * @crtc: counter to update
- *
- * Call back into the driver to update the appropriate vblank counter
- * (specified by @crtc). Deal with wraparound, if it occurred, and
- * update the last read value so we can deal with wraparound on the next
- * call if necessary.
- *
- * Only necessary when going from off->on, to account for frames we
- * didn't get an interrupt for.
- *
- * Note: caller must hold dev->vbl_lock since this reads & writes
- * device vblank fields.
- */
-static void drm_update_vblank_count(struct drm_device *dev, int crtc)
-{
- u32 cur_vblank, diff, tslot, rc;
- struct timeval t_vblank;
-
- /*
- * Interrupts were disabled prior to this call, so deal with counter
- * wrap if needed.
- * NOTE! It's possible we lost a full dev->max_vblank_count events
- * here if the register is small or we had vblank interrupts off for
- * a long time.
- *
- * We repeat the hardware vblank counter & timestamp query until
- * we get consistent results. This to prevent races between gpu
- * updating its hardware counter while we are retrieving the
- * corresponding vblank timestamp.
- */
- do {
- cur_vblank = dev->driver->get_vblank_counter(dev, crtc);
- rc = drm_get_last_vbltimestamp(dev, crtc, &t_vblank, 0);
- } while (cur_vblank != dev->driver->get_vblank_counter(dev, crtc));
-
- /* Deal with counter wrap */
- diff = cur_vblank - dev->vblank[crtc].last;
- if (cur_vblank < dev->vblank[crtc].last) {
- diff += dev->max_vblank_count;
-
- DRM_DEBUG("last_vblank[%d]=0x%x, cur_vblank=0x%x => diff=0x%x\n",
- crtc, dev->vblank[crtc].last, cur_vblank, diff);
- }
-
- DRM_DEBUG("enabling vblank interrupts on crtc %d, missed %d\n",
- crtc, diff);
-
- /* Reinitialize corresponding vblank timestamp if high-precision query
- * available. Skip this step if query unsupported or failed. Will
- * reinitialize delayed at next vblank interrupt in that case.
- */
- if (rc) {
- tslot = atomic_read(&dev->vblank[crtc].count) + diff;
- vblanktimestamp(dev, crtc, tslot) = t_vblank;
- }
-
- smp_mb__before_atomic();
- atomic_add(diff, &dev->vblank[crtc].count);
- smp_mb__after_atomic();
-}
-
/**
* drm_vblank_enable - enable the vblank interrupt on a CRTC
* @dev: DRM device
*/
static int drm_vblank_enable(struct drm_device *dev, int crtc)
{
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
int ret = 0;
assert_spin_locked(&dev->vbl_lock);
spin_lock(&dev->vblank_time_lock);
- if (!dev->vblank[crtc].enabled) {
+ if (!vblank->enabled) {
/*
* Enable vblank irqs under vblank_time_lock protection.
* All vblank count & timestamp updates are held off
ret = dev->driver->enable_vblank(dev, crtc);
DRM_DEBUG("enabling vblank on crtc %d, ret: %d\n", crtc, ret);
if (ret)
- atomic_dec(&dev->vblank[crtc].refcount);
+ atomic_dec(&vblank->refcount);
else {
- dev->vblank[crtc].enabled = true;
+ vblank->enabled = true;
drm_update_vblank_count(dev, crtc);
}
}
*/
int drm_vblank_get(struct drm_device *dev, int crtc)
{
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
unsigned long irqflags;
int ret = 0;
+ if (WARN_ON(crtc >= dev->num_crtcs))
+ return -EINVAL;
+
spin_lock_irqsave(&dev->vbl_lock, irqflags);
/* Going from 0->1 means we have to enable interrupts again */
- if (atomic_add_return(1, &dev->vblank[crtc].refcount) == 1) {
+ if (atomic_add_return(1, &vblank->refcount) == 1) {
ret = drm_vblank_enable(dev, crtc);
} else {
- if (!dev->vblank[crtc].enabled) {
- atomic_dec(&dev->vblank[crtc].refcount);
+ if (!vblank->enabled) {
+ atomic_dec(&vblank->refcount);
ret = -EINVAL;
}
}
*/
void drm_vblank_put(struct drm_device *dev, int crtc)
{
- BUG_ON(atomic_read(&dev->vblank[crtc].refcount) == 0);
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
+
+ BUG_ON(atomic_read(&vblank->refcount) == 0);
+
+ if (WARN_ON(crtc >= dev->num_crtcs))
+ return;
/* Last user schedules interrupt disable */
- if (atomic_dec_and_test(&dev->vblank[crtc].refcount) &&
- (drm_vblank_offdelay > 0))
- mod_timer(&dev->vblank[crtc].disable_timer,
- jiffies + ((drm_vblank_offdelay * HZ)/1000));
+ if (atomic_dec_and_test(&vblank->refcount)) {
+ if (drm_vblank_offdelay == 0)
+ return;
+ else if (dev->vblank_disable_immediate || drm_vblank_offdelay < 0)
+ vblank_disable_fn((unsigned long)vblank);
+ else
+ mod_timer(&vblank->disable_timer,
+ jiffies + ((drm_vblank_offdelay * HZ)/1000));
+ }
}
EXPORT_SYMBOL(drm_vblank_put);
}
EXPORT_SYMBOL(drm_crtc_vblank_put);
+/**
+ * drm_wait_one_vblank - wait for one vblank
+ * @dev: DRM device
+ * @crtc: crtc index
+ *
+ * This waits for one vblank to pass on @crtc, using the irq driver interfaces.
+ * It is an error to call this when the vblank irq for @crtc is disabled, e.g.
+ * due to lack of driver support or because the crtc is off.
+ */
+void drm_wait_one_vblank(struct drm_device *dev, int crtc)
+{
+ int ret;
+ u32 last;
+
+ ret = drm_vblank_get(dev, crtc);
+ if (WARN(ret, "vblank not available on crtc %i, ret=%i\n", crtc, ret))
+ return;
+
+ last = drm_vblank_count(dev, crtc);
+
+ ret = wait_event_timeout(dev->vblank[crtc].queue,
+ last != drm_vblank_count(dev, crtc),
+ msecs_to_jiffies(100));
+
+ WARN(ret == 0, "vblank wait timed out on crtc %i\n", crtc);
+
+ drm_vblank_put(dev, crtc);
+}
+EXPORT_SYMBOL(drm_wait_one_vblank);
+
+/**
+ * drm_crtc_wait_one_vblank - wait for one vblank
+ * @crtc: DRM crtc
+ *
+ * This waits for one vblank to pass on @crtc, using the irq driver interfaces.
+ * It is an error to call this when the vblank irq for @crtc is disabled, e.g.
+ * due to lack of driver support or because the crtc is off.
+ */
+void drm_crtc_wait_one_vblank(struct drm_crtc *crtc)
+{
+ drm_wait_one_vblank(crtc->dev, drm_crtc_index(crtc));
+}
+EXPORT_SYMBOL(drm_crtc_wait_one_vblank);
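
drm_wait_one_vblank() above is built from three primitives: drm_vblank_get() to keep the interrupt on, drm_vblank_count() to snapshot the cooked counter, and a wait on the per-crtc queue. A hedged sketch of the same pattern generalized to @n vblanks (hypothetical helper, not part of this patch; the timeout scaling is an assumption):

	static int hypothetical_wait_vblanks(struct drm_device *dev, int crtc,
					     unsigned int n)
	{
		u32 target;
		int ret;

		ret = drm_vblank_get(dev, crtc);
		if (ret)
			return ret;

		target = drm_vblank_count(dev, crtc) + n;
		/* signed compare keeps the test correct across counter wrap */
		ret = wait_event_timeout(dev->vblank[crtc].queue,
					 (int)(drm_vblank_count(dev, crtc) - target) >= 0,
					 msecs_to_jiffies(100 * n));

		drm_vblank_put(dev, crtc);
		return ret ? 0 : -ETIMEDOUT;
	}
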
+
/**
* drm_vblank_off - disable vblank events on a CRTC
* @dev: DRM device
*/
void drm_vblank_off(struct drm_device *dev, int crtc)
{
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
struct drm_pending_vblank_event *e, *t;
struct timeval now;
unsigned long irqflags;
unsigned int seq;
- spin_lock_irqsave(&dev->vbl_lock, irqflags);
+ if (WARN_ON(crtc >= dev->num_crtcs))
+ return;
+
+ spin_lock_irqsave(&dev->event_lock, irqflags);
+
+ spin_lock(&dev->vbl_lock);
vblank_disable_and_save(dev, crtc);
- wake_up(&dev->vblank[crtc].queue);
+ wake_up(&vblank->queue);
+
+ /*
+ * Prevent subsequent drm_vblank_get() from re-enabling
+ * the vblank interrupt by bumping the refcount.
+ */
+ if (!vblank->inmodeset) {
+ atomic_inc(&vblank->refcount);
+ vblank->inmodeset = 1;
+ }
+ spin_unlock(&dev->vbl_lock);
/* Send any queued vblank events, lest the natives grow disquiet */
seq = drm_vblank_count_and_time(dev, crtc, &now);
- spin_lock(&dev->event_lock);
list_for_each_entry_safe(e, t, &dev->vblank_event_list, base.link) {
if (e->pipe != crtc)
continue;
drm_vblank_put(dev, e->pipe);
send_vblank_event(dev, e, seq, &now);
}
- spin_unlock(&dev->event_lock);
-
- spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
+ spin_unlock_irqrestore(&dev->event_lock, irqflags);
}
EXPORT_SYMBOL(drm_vblank_off);
*/
void drm_vblank_on(struct drm_device *dev, int crtc)
{
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
unsigned long irqflags;
+ if (WARN_ON(crtc >= dev->num_crtcs))
+ return;
+
spin_lock_irqsave(&dev->vbl_lock, irqflags);
- /* re-enable interrupts if there's are users left */
- if (atomic_read(&dev->vblank[crtc].refcount) != 0)
+ /* Drop our private "prevent drm_vblank_get" refcount */
+ if (vblank->inmodeset) {
+ atomic_dec(&vblank->refcount);
+ vblank->inmodeset = 0;
+ }
+
+ /*
+ * sample the current counter to avoid random jumps
+ * when drm_vblank_enable() applies the diff
+ *
+	 * the -1 makes sure the user will never see the same
+	 * vblank counter value before and after a modeset
+ */
+ vblank->last =
+ (dev->driver->get_vblank_counter(dev, crtc) - 1) &
+ dev->max_vblank_count;
+ /*
+ * re-enable interrupts if there are users left, or the
+ * user wishes vblank interrupts to be enabled all the time.
+ */
+ if (atomic_read(&vblank->refcount) != 0 ||
+ (!dev->vblank_disable_immediate && drm_vblank_offdelay == 0))
WARN_ON(drm_vblank_enable(dev, crtc));
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
}
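
Taken together, the two hunks above make drm_vblank_off()/drm_vblank_on() a self-contained bracket for modesets: off saves the count and blocks drm_vblank_get(), on resamples the hardware counter before re-enabling. A hedged usage sketch (hypothetical driver fragment):

	drm_vblank_off(dev, crtc);
	/* ... reprogram crtc timings while vblank interrupts are off ... */
	drm_vblank_on(dev, crtc);
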
*/
void drm_vblank_pre_modeset(struct drm_device *dev, int crtc)
{
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
+
/* vblank is not initialized (IRQ not installed ?), or has been freed */
if (!dev->num_crtcs)
return;
+
+ if (WARN_ON(crtc >= dev->num_crtcs))
+ return;
+
/*
* To avoid all the problems that might happen if interrupts
* were enabled/disabled around or between these calls, we just
* to avoid corrupting the count if multiple, mismatch calls occur),
* so that interrupts remain enabled in the interim.
*/
- if (!dev->vblank[crtc].inmodeset) {
- dev->vblank[crtc].inmodeset = 0x1;
+ if (!vblank->inmodeset) {
+ vblank->inmodeset = 0x1;
if (drm_vblank_get(dev, crtc) == 0)
- dev->vblank[crtc].inmodeset |= 0x2;
+ vblank->inmodeset |= 0x2;
}
}
EXPORT_SYMBOL(drm_vblank_pre_modeset);
*/
void drm_vblank_post_modeset(struct drm_device *dev, int crtc)
{
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
unsigned long irqflags;
/* vblank is not initialized (IRQ not installed ?), or has been freed */
if (!dev->num_crtcs)
return;
- if (dev->vblank[crtc].inmodeset) {
+ if (vblank->inmodeset) {
spin_lock_irqsave(&dev->vbl_lock, irqflags);
dev->vblank_disable_allowed = true;
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
- if (dev->vblank[crtc].inmodeset & 0x2)
+ if (vblank->inmodeset & 0x2)
drm_vblank_put(dev, crtc);
- dev->vblank[crtc].inmodeset = 0;
+ vblank->inmodeset = 0;
}
}
EXPORT_SYMBOL(drm_vblank_post_modeset);
union drm_wait_vblank *vblwait,
struct drm_file *file_priv)
{
+ struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
struct drm_pending_vblank_event *e;
struct timeval now;
unsigned long flags;
spin_lock_irqsave(&dev->event_lock, flags);
+ /*
+ * drm_vblank_off() might have been called after we called
+ * drm_vblank_get(). drm_vblank_off() holds event_lock
+ * around the vblank disable, so no need for further locking.
+ * The reference from drm_vblank_get() protects against
+ * vblank disable from another source.
+ */
+ if (!vblank->enabled) {
+ ret = -EINVAL;
+ goto err_unlock;
+ }
+
if (file_priv->event_space < sizeof e->event) {
ret = -EBUSY;
goto err_unlock;
int drm_wait_vblank(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
+ struct drm_vblank_crtc *vblank;
union drm_wait_vblank *vblwait = data;
int ret;
unsigned int flags, seq, crtc, high_crtc;
if (crtc >= dev->num_crtcs)
return -EINVAL;
+ vblank = &dev->vblank[crtc];
+
ret = drm_vblank_get(dev, crtc);
if (ret) {
DRM_DEBUG("failed to acquire vblank counter, %d\n", ret);
DRM_DEBUG("waiting on vblank count %d, crtc %d\n",
vblwait->request.sequence, crtc);
- dev->vblank[crtc].last_wait = vblwait->request.sequence;
- DRM_WAIT_ON(ret, dev->vblank[crtc].queue, 3 * HZ,
+ vblank->last_wait = vblwait->request.sequence;
+ DRM_WAIT_ON(ret, vblank->queue, 3 * HZ,
(((drm_vblank_count(dev, crtc) -
vblwait->request.sequence) <= (1 << 23)) ||
- !dev->vblank[crtc].enabled ||
+ !vblank->enabled ||
!dev->irq_enabled));
if (ret != -EINTR) {
{
struct drm_pending_vblank_event *e, *t;
struct timeval now;
- unsigned long flags;
unsigned int seq;
- seq = drm_vblank_count_and_time(dev, crtc, &now);
+ assert_spin_locked(&dev->event_lock);
- spin_lock_irqsave(&dev->event_lock, flags);
+ seq = drm_vblank_count_and_time(dev, crtc, &now);
list_for_each_entry_safe(e, t, &dev->vblank_event_list, base.link) {
if (e->pipe != crtc)
send_vblank_event(dev, e, seq, &now);
}
- spin_unlock_irqrestore(&dev->event_lock, flags);
-
trace_drm_vblank_event(crtc, seq);
}
*/
bool drm_handle_vblank(struct drm_device *dev, int crtc)
{
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
u32 vblcount;
s64 diff_ns;
struct timeval tvblank;
if (!dev->num_crtcs)
return false;
+ if (WARN_ON(crtc >= dev->num_crtcs))
+ return false;
+
+ spin_lock_irqsave(&dev->event_lock, irqflags);
+
/* Need timestamp lock to prevent concurrent execution with
* vblank enable/disable, as this would cause inconsistent
* or corrupted timestamps and vblank counts.
*/
- spin_lock_irqsave(&dev->vblank_time_lock, irqflags);
+ spin_lock(&dev->vblank_time_lock);
/* Vblank irq handling disabled. Nothing to do. */
- if (!dev->vblank[crtc].enabled) {
- spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
+ if (!vblank->enabled) {
+ spin_unlock(&dev->vblank_time_lock);
+ spin_unlock_irqrestore(&dev->event_lock, irqflags);
return false;
}
*/
/* Get current timestamp and count. */
- vblcount = atomic_read(&dev->vblank[crtc].count);
+ vblcount = atomic_read(&vblank->count);
drm_get_last_vbltimestamp(dev, crtc, &tvblank, DRM_CALLED_FROM_VBLIRQ);
/* Compute time difference to timestamp of last vblank */
* the timestamp computed above.
*/
smp_mb__before_atomic();
- atomic_inc(&dev->vblank[crtc].count);
+ atomic_inc(&vblank->count);
smp_mb__after_atomic();
} else {
DRM_DEBUG("crtc %d: Redundant vblirq ignored. diff_ns = %d\n",
crtc, (int) diff_ns);
}
- wake_up(&dev->vblank[crtc].queue);
+ spin_unlock(&dev->vblank_time_lock);
+
+ wake_up(&vblank->queue);
drm_handle_vblank_events(dev, crtc);
- spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
+ spin_unlock_irqrestore(&dev->event_lock, irqflags);
+
return true;
}
EXPORT_SYMBOL(drm_handle_vblank);
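
For reference, the reworked paths above imply a consistent lock ordering (a reading of the hunks, not a rule stated by this patch):

	/*
	 * outer to inner:
	 *	dev->event_lock -> dev->vbl_lock -> dev->vblank_time_lock
	 */
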
* OTHER DEALINGS IN THE SOFTWARE.
*/
+/*
+ * This file contains legacy interfaces that modern drm drivers
+ * should no longer be using. They cannot be removed as legacy
+ * drivers use them, and removing them would be an API break.
+ */
+#include <linux/list.h>
+#include <drm/drm_legacy.h>
+
+struct agp_memory;
struct drm_device;
struct drm_file;
int drm_legacy_setsareactx(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_getsareactx(struct drm_device *d, void *v, struct drm_file *f);
+/*
+ * Generic Buffer Management
+ */
+
+#define DRM_MAP_HASH_OFFSET 0x10000000
+
+int drm_legacy_addmap_ioctl(struct drm_device *d, void *v, struct drm_file *f);
+int drm_legacy_rmmap_ioctl(struct drm_device *d, void *v, struct drm_file *f);
+int drm_legacy_addbufs(struct drm_device *d, void *v, struct drm_file *f);
+int drm_legacy_infobufs(struct drm_device *d, void *v, struct drm_file *f);
+int drm_legacy_markbufs(struct drm_device *d, void *v, struct drm_file *f);
+int drm_legacy_freebufs(struct drm_device *d, void *v, struct drm_file *f);
+int drm_legacy_mapbufs(struct drm_device *d, void *v, struct drm_file *f);
+int drm_legacy_dma_ioctl(struct drm_device *d, void *v, struct drm_file *f);
+
+void drm_legacy_vma_flush(struct drm_device *d);
+
+/*
+ * AGP Support
+ */
+
+struct drm_agp_mem {
+ unsigned long handle;
+ struct agp_memory *memory;
+ unsigned long bound;
+ int pages;
+ struct list_head head;
+};
+
+/*
+ * Generic Userspace Locking-API
+ */
+
+int drm_legacy_i_have_hw_lock(struct drm_device *d, struct drm_file *f);
+int drm_legacy_lock(struct drm_device *d, void *v, struct drm_file *f);
+int drm_legacy_unlock(struct drm_device *d, void *v, struct drm_file *f);
+int drm_legacy_lock_free(struct drm_lock_data *lock, unsigned int ctx);
+
+/* DMA support */
+int drm_legacy_dma_setup(struct drm_device *dev);
+void drm_legacy_dma_takedown(struct drm_device *dev);
+void drm_legacy_free_buffer(struct drm_device *dev,
+ struct drm_buf * buf);
+void drm_legacy_reclaim_buffers(struct drm_device *dev,
+ struct drm_file *filp);
+
+/* Scatter Gather Support */
+void drm_legacy_sg_cleanup(struct drm_device *dev);
+int drm_legacy_sg_alloc(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+int drm_legacy_sg_free(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+
#endif /* __DRM_LEGACY_H__ */
#include <linux/export.h>
#include <drm/drmP.h>
#include "drm_legacy.h"
+#include "drm_internal.h"
static int drm_notifier(void *priv);
*
* Add the current task to the lock wait queue, and attempt to take to lock.
*/
-int drm_lock(struct drm_device *dev, void *data, struct drm_file *file_priv)
+int drm_legacy_lock(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
{
DECLARE_WAITQUEUE(entry, current);
struct drm_lock *lock = data;
sigaddset(&dev->sigmask, SIGTTOU);
dev->sigdata.context = lock->context;
dev->sigdata.lock = master->lock.hw_lock;
- block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask);
+ block_all_signals(drm_notifier, dev, &dev->sigmask);
}
if (dev->driver->dma_quiescent && (lock->flags & _DRM_LOCK_QUIESCENT))
*
* Transfer and free the lock.
*/
-int drm_unlock(struct drm_device *dev, void *data, struct drm_file *file_priv)
+int drm_legacy_unlock(struct drm_device *dev, void *data, struct drm_file *file_priv)
{
struct drm_lock *lock = data;
struct drm_master *master = file_priv->master;
return -EINVAL;
}
- if (drm_lock_free(&master->lock, lock->context)) {
+ if (drm_legacy_lock_free(&master->lock, lock->context)) {
/* FIXME: Should really bail out here. */
}
* Marks the lock as not held, via the \p cmpxchg instruction. Wakes any task
* waiting on the lock queue.
*/
-int drm_lock_free(struct drm_lock_data *lock_data, unsigned int context)
+int drm_legacy_lock_free(struct drm_lock_data *lock_data, unsigned int context)
{
unsigned int old, new, prev;
volatile unsigned int *lock = &lock_data->hw_lock->lock;
* If the lock is not held, then let the signal proceed as usual. If the lock
* is held, then set the contended flag and keep the signal blocked.
*
- * \param priv pointer to a drm_sigdata structure.
+ * \param priv pointer to a drm_device structure.
* \return one if the signal should be delivered normally, or zero if the
* signal should be blocked.
*/
static int drm_notifier(void *priv)
{
- struct drm_sigdata *s = (struct drm_sigdata *) priv;
+ struct drm_device *dev = priv;
+ struct drm_hw_lock *lock = dev->sigdata.lock;
unsigned int old, new, prev;
/* Allow signal delivery if lock isn't held */
- if (!s->lock || !_DRM_LOCK_IS_HELD(s->lock->lock)
- || _DRM_LOCKING_CONTEXT(s->lock->lock) != s->context)
+ if (!lock || !_DRM_LOCK_IS_HELD(lock->lock)
+ || _DRM_LOCKING_CONTEXT(lock->lock) != dev->sigdata.context)
return 1;
/* Otherwise, set flag to force call to
drmUnlock */
do {
- old = s->lock->lock;
+ old = lock->lock;
new = old | _DRM_LOCK_CONT;
- prev = cmpxchg(&s->lock->lock, old, new);
+ prev = cmpxchg(&lock->lock, old, new);
} while (prev != old);
return 0;
}
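
The loop above is the classic cmpxchg read-modify-write: retry until the OR of the contention flag lands without clobbering a concurrent update of the lock word. A standalone sketch of the same idiom (C11 atomics, userspace; names are hypothetical):

	#include <stdatomic.h>

	static unsigned int set_cont_flag(_Atomic unsigned int *lock,
					  unsigned int flag)
	{
		unsigned int old = atomic_load(lock);

		/* on failure, compare_exchange reloads old with the current value */
		while (!atomic_compare_exchange_weak(lock, &old, old | flag))
			;
		return old | flag;
	}
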
* having to worry about starvation.
*/
-void drm_idlelock_take(struct drm_lock_data *lock_data)
+void drm_legacy_idlelock_take(struct drm_lock_data *lock_data)
{
int ret;
}
spin_unlock_bh(&lock_data->spinlock);
}
-EXPORT_SYMBOL(drm_idlelock_take);
+EXPORT_SYMBOL(drm_legacy_idlelock_take);
-void drm_idlelock_release(struct drm_lock_data *lock_data)
+void drm_legacy_idlelock_release(struct drm_lock_data *lock_data)
{
unsigned int old, prev;
volatile unsigned int *lock = &lock_data->hw_lock->lock;
}
spin_unlock_bh(&lock_data->spinlock);
}
-EXPORT_SYMBOL(drm_idlelock_release);
+EXPORT_SYMBOL(drm_legacy_idlelock_release);
-int drm_i_have_hw_lock(struct drm_device *dev, struct drm_file *file_priv)
+int drm_legacy_i_have_hw_lock(struct drm_device *dev,
+ struct drm_file *file_priv)
{
struct drm_master *master = file_priv->master;
return (file_priv->lock_count && master->lock.hw_lock &&
#include <linux/highmem.h>
#include <linux/export.h>
#include <drm/drmP.h>
+#include "drm_legacy.h"
#if __OS_HAS_AGP
+
+#ifdef HAVE_PAGE_AGP
+# include <asm/agp.h>
+#else
+# ifdef __powerpc__
+# define PAGE_AGP __pgprot(_PAGE_KERNEL | _PAGE_NO_CACHE)
+# else
+# define PAGE_AGP PAGE_KERNEL
+# endif
+#endif
+
static void *agp_remap(unsigned long offset, unsigned long size,
struct drm_device * dev)
{
#endif /* agp */
-void drm_core_ioremap(struct drm_local_map *map, struct drm_device *dev)
+void drm_legacy_ioremap(struct drm_local_map *map, struct drm_device *dev)
{
if (dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
map->handle = agp_remap(map->offset, map->size, dev);
else
map->handle = ioremap(map->offset, map->size);
}
-EXPORT_SYMBOL(drm_core_ioremap);
+EXPORT_SYMBOL(drm_legacy_ioremap);
-void drm_core_ioremap_wc(struct drm_local_map *map, struct drm_device *dev)
+void drm_legacy_ioremap_wc(struct drm_local_map *map, struct drm_device *dev)
{
if (dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
map->handle = agp_remap(map->offset, map->size, dev);
else
map->handle = ioremap_wc(map->offset, map->size);
}
-EXPORT_SYMBOL(drm_core_ioremap_wc);
+EXPORT_SYMBOL(drm_legacy_ioremap_wc);
-void drm_core_ioremapfree(struct drm_local_map *map, struct drm_device *dev)
+void drm_legacy_ioremapfree(struct drm_local_map *map, struct drm_device *dev)
{
if (!map->handle || !map->size)
return;
else
iounmap(map->handle);
}
-EXPORT_SYMBOL(drm_core_ioremapfree);
+EXPORT_SYMBOL(drm_legacy_ioremapfree);
break;
}
+ if (dsi->mode_flags & MIPI_DSI_MODE_LPM)
+ msg.flags = MIPI_DSI_MSG_USE_LPM;
+
return ops->transfer(dsi->host, &msg);
}
EXPORT_SYMBOL(mipi_dsi_dcs_write);
if (!ops || !ops->transfer)
return -ENOSYS;
+ if (dsi->mode_flags & MIPI_DSI_MODE_LPM)
+ msg.flags = MIPI_DSI_MSG_USE_LPM;
+
return ops->transfer(dsi->host, &msg);
}
EXPORT_SYMBOL(mipi_dsi_dcs_read);
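
With the two hunks above, opting a device into low-power transfers is a one-line affair on the panel side; a hedged sketch (hypothetical probe fragment):

	/* set once at probe time; the core then tags every DCS
	 * read/write with MIPI_DSI_MSG_USE_LPM */
	dsi->mode_flags |= MIPI_DSI_MODE_LPM;
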
if (!mode)
return NULL;
+ mode->type |= DRM_MODE_TYPE_USERDEF;
drm_mode_set_crtcinfo(mode, CRTC_INTERLACE_HALVE_V);
return mode;
}
*/
+/**
+ * __drm_modeset_lock_all - internal helper to grab all modeset locks
+ * @dev: DRM device
+ * @trylock: trylock mode for atomic contexts
+ *
+ * This is a special version of drm_modeset_lock_all() which can also be used in
+ * atomic contexts; in that case @trylock must be set to true.
+ *
+ * Returns:
+ * 0 on success or negative error code on failure.
+ */
+int __drm_modeset_lock_all(struct drm_device *dev,
+ bool trylock)
+{
+ struct drm_mode_config *config = &dev->mode_config;
+ struct drm_modeset_acquire_ctx *ctx;
+ int ret;
+
+ ctx = kzalloc(sizeof(*ctx),
+ trylock ? GFP_ATOMIC : GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
+
+ if (trylock) {
+ if (!mutex_trylock(&config->mutex))
+ return -EBUSY;
+ } else {
+ mutex_lock(&config->mutex);
+ }
+
+ drm_modeset_acquire_init(ctx, 0);
+ ctx->trylock_only = trylock;
+
+retry:
+ ret = drm_modeset_lock(&config->connection_mutex, ctx);
+ if (ret)
+ goto fail;
+ ret = drm_modeset_lock_all_crtcs(dev, ctx);
+ if (ret)
+ goto fail;
+
+ WARN_ON(config->acquire_ctx);
+
+	/* now we hold the locks, so it is safe to stash the
+ * ctx for drm_modeset_unlock_all():
+ */
+ config->acquire_ctx = ctx;
+
+ drm_warn_on_modeset_not_all_locked(dev);
+
+ return 0;
+
+fail:
+ if (ret == -EDEADLK) {
+ drm_modeset_backoff(ctx);
+ goto retry;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(__drm_modeset_lock_all);
+
+/**
+ * drm_modeset_lock_all - take all modeset locks
+ * @dev: drm device
+ *
+ * This function takes all modeset locks, suitable where a more fine-grained
+ * scheme isn't (yet) implemented. Locks must be dropped with
+ * drm_modeset_unlock_all().
+ */
+void drm_modeset_lock_all(struct drm_device *dev)
+{
+ WARN_ON(__drm_modeset_lock_all(dev, false) != 0);
+}
+EXPORT_SYMBOL(drm_modeset_lock_all);
+
+/**
+ * drm_modeset_unlock_all - drop all modeset locks
+ * @dev: device
+ *
+ * This function drops all modeset locks taken by drm_modeset_lock_all().
+ */
+void drm_modeset_unlock_all(struct drm_device *dev)
+{
+ struct drm_mode_config *config = &dev->mode_config;
+ struct drm_modeset_acquire_ctx *ctx = config->acquire_ctx;
+
+ if (WARN_ON(!ctx))
+ return;
+
+ config->acquire_ctx = NULL;
+ drm_modeset_drop_locks(ctx);
+ drm_modeset_acquire_fini(ctx);
+
+ kfree(ctx);
+
+ mutex_unlock(&dev->mode_config.mutex);
+}
+EXPORT_SYMBOL(drm_modeset_unlock_all);
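
A hedged usage sketch of the global helpers (hypothetical caller; the resume framing is an assumption):

	static void hypothetical_resume_helper(struct drm_device *dev)
	{
		drm_modeset_lock_all(dev);
		drm_warn_on_modeset_not_all_locked(dev);
		/* ... restore modes and properties wholesale ... */
		drm_modeset_unlock_all(dev);
	}
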
+
+/**
+ * drm_modeset_lock_crtc - lock crtc with hidden acquire ctx
+ * @crtc: drm crtc
+ *
+ * This function locks the given crtc using a hidden acquire context. This is
+ * necessary so that drivers internally using the atomic interfaces can grab
+ * further locks with the lock acquire context.
+ */
+void drm_modeset_lock_crtc(struct drm_crtc *crtc)
+{
+ struct drm_modeset_acquire_ctx *ctx;
+ int ret;
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (WARN_ON(!ctx))
+ return;
+
+ drm_modeset_acquire_init(ctx, 0);
+
+retry:
+ ret = drm_modeset_lock(&crtc->mutex, ctx);
+ if (ret)
+ goto fail;
+
+ WARN_ON(crtc->acquire_ctx);
+
+	/* now we hold the locks, so it is safe to stash the
+ * ctx for drm_modeset_unlock_crtc():
+ */
+ crtc->acquire_ctx = ctx;
+
+ return;
+
+fail:
+ if (ret == -EDEADLK) {
+ drm_modeset_backoff(ctx);
+ goto retry;
+ }
+}
+EXPORT_SYMBOL(drm_modeset_lock_crtc);
+
+/**
+ * drm_modeset_legacy_acquire_ctx - find acquire ctx for legacy ioctls
+ * @crtc: drm crtc
+ *
+ * Legacy ioctl operations like cursor updates or page flips only have per-crtc
+ * locking, and store the acquire ctx in the corresponding crtc. All other
+ * legacy operations take all locks and use a global acquire context. This
+ * function grabs the right one.
+ */
+struct drm_modeset_acquire_ctx *
+drm_modeset_legacy_acquire_ctx(struct drm_crtc *crtc)
+{
+ if (crtc->acquire_ctx)
+ return crtc->acquire_ctx;
+
+ WARN_ON(!crtc->dev->mode_config.acquire_ctx);
+
+ return crtc->dev->mode_config.acquire_ctx;
+}
+EXPORT_SYMBOL(drm_modeset_legacy_acquire_ctx);
+
+/**
+ * drm_modeset_unlock_crtc - drop crtc lock
+ * @crtc: drm crtc
+ *
+ * This drops the crtc lock acquire with drm_modeset_lock_crtc() and all other
+ * locks acquired through the hidden context.
+ */
+void drm_modeset_unlock_crtc(struct drm_crtc *crtc)
+{
+ struct drm_modeset_acquire_ctx *ctx = crtc->acquire_ctx;
+
+ if (WARN_ON(!ctx))
+ return;
+
+ crtc->acquire_ctx = NULL;
+ drm_modeset_drop_locks(ctx);
+ drm_modeset_acquire_fini(ctx);
+
+ kfree(ctx);
+}
+EXPORT_SYMBOL(drm_modeset_unlock_crtc);
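
A hedged sketch of the per-crtc counterpart (hypothetical legacy path, e.g. a cursor update; the names are made up):

	static void hypothetical_cursor_path(struct drm_crtc *crtc)
	{
		struct drm_modeset_acquire_ctx *ctx;

		drm_modeset_lock_crtc(crtc);
		ctx = drm_modeset_legacy_acquire_ctx(crtc);
		/* ... take further locks through ctx, move the cursor ... */
		drm_modeset_unlock_crtc(crtc);
	}
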
+
+/**
+ * drm_warn_on_modeset_not_all_locked - check that all modeset locks are locked
+ * @dev: device
+ *
+ * Useful as a debug assert.
+ */
+void drm_warn_on_modeset_not_all_locked(struct drm_device *dev)
+{
+ struct drm_crtc *crtc;
+
+ /* Locking is currently fubar in the panic handler. */
+ if (oops_in_progress)
+ return;
+
+ list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
+ WARN_ON(!drm_modeset_is_locked(&crtc->mutex));
+
+ WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
+ WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
+}
+EXPORT_SYMBOL(drm_warn_on_modeset_not_all_locked);
+
/**
* drm_modeset_acquire_init - initialize acquire context
* @ctx: the acquire context
WARN_ON(ctx->contended);
- if (interruptible && slow) {
+ if (ctx->trylock_only) {
+ if (!ww_mutex_trylock(&lock->mutex))
+ return -EBUSY;
+ else
+ return 0;
+ } else if (interruptible && slow) {
ret = ww_mutex_lock_slow_interruptible(&lock->mutex, &ctx->ww_ctx);
} else if (interruptible) {
ret = ww_mutex_lock_interruptible(&lock->mutex, &ctx->ww_ctx);
#include <linux/dma-mapping.h>
#include <linux/export.h>
#include <drm/drmP.h>
+#include "drm_legacy.h"
/**
* drm_pci_alloc - Allocate a PCI consistent memory block, for DMA.
*
* This function is for internal use in the Linux-specific DRM core code.
*/
-void __drm_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
+void __drm_legacy_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
{
unsigned long addr;
size_t sz;
*/
void drm_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
{
- __drm_pci_free(dev, dmah);
+ __drm_legacy_pci_free(dev, dmah);
kfree(dmah);
}
return pci_domain_nr(dev->pdev->bus);
}
-static int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master)
+int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master)
{
- int len, ret;
- master->unique_len = 40;
- master->unique_size = master->unique_len;
- master->unique = kmalloc(master->unique_size, GFP_KERNEL);
- if (master->unique == NULL)
+ master->unique = kasprintf(GFP_KERNEL, "pci:%04x:%02x:%02x.%d",
+ drm_get_pci_domain(dev),
+ dev->pdev->bus->number,
+ PCI_SLOT(dev->pdev->devfn),
+ PCI_FUNC(dev->pdev->devfn));
+ if (!master->unique)
return -ENOMEM;
-
- len = snprintf(master->unique, master->unique_len,
- "pci:%04x:%02x:%02x.%d",
- drm_get_pci_domain(dev),
- dev->pdev->bus->number,
- PCI_SLOT(dev->pdev->devfn),
- PCI_FUNC(dev->pdev->devfn));
-
- if (len >= master->unique_len) {
- DRM_ERROR("buffer overflow");
- ret = -EINVAL;
- goto err;
- } else
- master->unique_len = len;
-
+ master->unique_len = strlen(master->unique);
return 0;
-err:
- return ret;
}
+EXPORT_SYMBOL(drm_pci_set_busid);
int drm_pci_set_unique(struct drm_device *dev,
struct drm_master *master,
int domain, bus, slot, func, ret;
master->unique_len = u->unique_len;
- master->unique_size = u->unique_len + 1;
- master->unique = kmalloc(master->unique_size, GFP_KERNEL);
+ master->unique = kmalloc(master->unique_len + 1, GFP_KERNEL);
if (!master->unique) {
ret = -ENOMEM;
goto err;
}
}
-static struct drm_bus drm_pci_bus = {
- .set_busid = drm_pci_set_busid,
-};
-
/**
* drm_get_pci_dev - Register a PCI device with the DRM subsystem
* @pdev: PCI device
DRM_DEBUG("\n");
- driver->bus = &drm_pci_bus;
-
if (driver->driver_features & DRIVER_MODESET)
return pci_register_driver(pdriver);
return ret;
}
-static int drm_platform_set_busid(struct drm_device *dev, struct drm_master *master)
+int drm_platform_set_busid(struct drm_device *dev, struct drm_master *master)
{
- int len, ret, id;
-
- master->unique_len = 13 + strlen(dev->platformdev->name);
- master->unique_size = master->unique_len;
- master->unique = kmalloc(master->unique_len + 1, GFP_KERNEL);
-
- if (master->unique == NULL)
- return -ENOMEM;
+ int id;
id = dev->platformdev->id;
-
- /* if only a single instance of the platform device, id will be
- * set to -1.. use 0 instead to avoid a funny looking bus-id:
- */
- if (id == -1)
+ if (id < 0)
id = 0;
- len = snprintf(master->unique, master->unique_len,
- "platform:%s:%02d", dev->platformdev->name, id);
-
- if (len > master->unique_len) {
- DRM_ERROR("Unique buffer overflowed\n");
- ret = -EINVAL;
- goto err;
- }
+ master->unique = kasprintf(GFP_KERNEL, "platform:%s:%02d",
+ dev->platformdev->name, id);
+ if (!master->unique)
+ return -ENOMEM;
+ master->unique_len = strlen(master->unique);
return 0;
-err:
- return ret;
}
-
-static struct drm_bus drm_platform_bus = {
- .set_busid = drm_platform_set_busid,
-};
+EXPORT_SYMBOL(drm_platform_set_busid);
/**
* drm_platform_init - Register a platform device with the DRM subsystem
{
DRM_DEBUG("\n");
- driver->bus = &drm_platform_bus;
return drm_get_platform_dev(platform_device, driver);
}
EXPORT_SYMBOL(drm_platform_init);
#include <linux/export.h>
#include <linux/dma-buf.h>
#include <drm/drmP.h>
+#include <drm/drm_gem.h>
+
+#include "drm_internal.h"
/*
* DMA-BUF/GEM Object references and lifetime overview:
goto fail_detach;
}
- obj = dev->driver->gem_prime_import_sg_table(dev, dma_buf->size, sgt);
+ obj = dev->driver->gem_prime_import_sg_table(dev, attach, sgt);
if (IS_ERR(obj)) {
ret = PTR_ERR(obj);
goto fail_unmap;
return;
}
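
The signature change above means drivers now receive the dma-buf attachment rather than a size. A hedged sketch of an updated driver callback (hypothetical names; recovering the size via attach->dmabuf->size is an assumption about the intended replacement):

	static struct drm_gem_object *
	hypothetical_import_sg_table(struct drm_device *dev,
				     struct dma_buf_attachment *attach,
				     struct sg_table *sgt)
	{
		size_t size = attach->dmabuf->size;

		/* ... create a gem object of @size backed by @sgt ... */
		return ERR_PTR(-ENOSYS);	/* placeholder */
	}
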
+static int drm_helper_probe_add_cmdline_mode(struct drm_connector *connector)
+{
+ struct drm_display_mode *mode;
+
+ if (!connector->cmdline_mode.specified)
+ return 0;
+
+ mode = drm_mode_create_from_cmdline_mode(connector->dev,
+ &connector->cmdline_mode);
+ if (mode == NULL)
+ return 0;
+
+ drm_mode_probed_add(connector, mode);
+ return 1;
+}
+
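
As a concrete (hypothetical) illustration of what the new probe hook enables: booting with

	video=HDMI-A-1:1280x720@60

now adds a 1280x720 mode to that connector even when no EDID mode matched, and the earlier hunk in drm_mode_create_from_cmdline_mode() flags it DRM_MODE_TYPE_USERDEF.
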
static int drm_helper_probe_single_connector_modes_merge_bits(struct drm_connector *connector,
uint32_t maxX, uint32_t maxY, bool merge_type_bits)
{
if (count == 0 && connector->status == connector_status_connected)
count = drm_add_modes_noedid(connector, 1024, 768);
+ count += drm_helper_probe_add_cmdline_mode(connector);
if (count == 0)
goto prune;
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <drm/drmP.h>
+#include "drm_legacy.h"
#define DEBUG_SCATTER 0
# define ScatterHandle(x) (unsigned int)(x)
#endif
-int drm_sg_alloc(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
+int drm_legacy_sg_alloc(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
{
struct drm_scatter_gather *request = data;
struct drm_sg_mem *entry;
return -ENOMEM;
}
-int drm_sg_free(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
+int drm_legacy_sg_free(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
{
struct drm_scatter_gather *request = data;
struct drm_sg_mem *entry;
#include <drm/drm_sysfs.h>
#include <drm/drm_core.h>
#include <drm/drmP.h>
+#include "drm_internal.h"
#define to_drm_minor(d) dev_get_drvdata(d)
#define to_drm_connector(d) dev_get_drvdata(d)
+++ /dev/null
-#include <drm/drmP.h>
-#include <drm/drm_usb.h>
-#include <linux/usb.h>
-#include <linux/module.h>
-
-int drm_get_usb_dev(struct usb_interface *interface,
- const struct usb_device_id *id,
- struct drm_driver *driver)
-{
- struct drm_device *dev;
- int ret;
-
- DRM_DEBUG("\n");
-
- dev = drm_dev_alloc(driver, &interface->dev);
- if (!dev)
- return -ENOMEM;
-
- dev->usbdev = interface_to_usbdev(interface);
- usb_set_intfdata(interface, dev);
-
- ret = drm_dev_register(dev, 0);
- if (ret)
- goto err_free;
-
- DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
- driver->name, driver->major, driver->minor, driver->patchlevel,
- driver->date, dev->primary->index);
-
- return 0;
-
-err_free:
- drm_dev_unref(dev);
- return ret;
-
-}
-EXPORT_SYMBOL(drm_get_usb_dev);
-
-static int drm_usb_set_busid(struct drm_device *dev,
- struct drm_master *master)
-{
- return 0;
-}
-
-static struct drm_bus drm_usb_bus = {
- .set_busid = drm_usb_set_busid,
-};
-
-/**
- * drm_usb_init - Register matching USB devices with the DRM subsystem
- * @driver: DRM device driver
- * @udriver: USB device driver
- *
- * Registers one or more devices matched by a USB driver with the DRM
- * subsystem.
- *
- * Return: 0 on success or a negative error code on failure.
- */
-int drm_usb_init(struct drm_driver *driver, struct usb_driver *udriver)
-{
- int res;
- DRM_DEBUG("\n");
-
- driver->bus = &drm_usb_bus;
-
- res = usb_register(udriver);
- return res;
-}
-EXPORT_SYMBOL(drm_usb_init);
-
-/**
- * drm_usb_exit - Unregister matching USB devices from the DRM subsystem
- * @driver: DRM device driver
- * @udriver: USB device driver
- *
- * Unregisters one or more devices matched by a USB driver from the DRM
- * subsystem.
- */
-void drm_usb_exit(struct drm_driver *driver,
- struct usb_driver *udriver)
-{
- usb_deregister(udriver);
-}
-EXPORT_SYMBOL(drm_usb_exit);
-
-MODULE_AUTHOR("David Airlie");
-MODULE_DESCRIPTION("USB DRM support");
-MODULE_LICENSE("GPL and additional rights");
#include <drm/drmP.h>
#include <linux/export.h>
+#include <linux/seq_file.h>
#if defined(__ia64__)
#include <linux/efi.h>
#include <linux/slab.h>
#endif
+#include <asm/pgtable.h>
+#include "drm_legacy.h"
+
+struct drm_vma_entry {
+ struct list_head head;
+ struct vm_area_struct *vma;
+ pid_t pid;
+};
static void drm_vm_open(struct vm_area_struct *vma);
static void drm_vm_close(struct vm_area_struct *vma);
{
pgprot_t tmp = vm_get_page_prot(vma->vm_flags);
-#if defined(__i386__) || defined(__x86_64__)
+#if defined(__i386__) || defined(__x86_64__) || defined(__powerpc__)
if (map->type == _DRM_REGISTERS && !(map->flags & _DRM_WRITE_COMBINING))
tmp = pgprot_noncached(tmp);
else
tmp = pgprot_writecombine(tmp);
-#elif defined(__powerpc__)
- pgprot_val(tmp) |= _PAGE_NO_CACHE;
- if (map->type == _DRM_REGISTERS)
- pgprot_val(tmp) |= _PAGE_GUARDED;
#elif defined(__ia64__)
if (efi_range_is_wc(vma->vm_start, vma->vm_end -
vma->vm_start))
dmah.vaddr = map->handle;
dmah.busaddr = map->offset;
dmah.size = map->size;
- __drm_pci_free(dev, &dmah);
+ __drm_legacy_pci_free(dev, &dmah);
break;
}
kfree(map);
list_add(&vma_entry->head, &dev->vmalist);
}
}
-EXPORT_SYMBOL_GPL(drm_vm_open_locked);
static void drm_vm_open(struct vm_area_struct *vma)
{
* according to the mapping type and remaps the pages. Finally sets the file
* pointer and calls vm_open().
*/
-int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
+static int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *priv = filp->private_data;
struct drm_device *dev = priv->minor->dev;
return 0;
}
-int drm_mmap(struct file *filp, struct vm_area_struct *vma)
+int drm_legacy_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *priv = filp->private_data;
struct drm_device *dev = priv->minor->dev;
return ret;
}
-EXPORT_SYMBOL(drm_mmap);
+EXPORT_SYMBOL(drm_legacy_mmap);
+
+void drm_legacy_vma_flush(struct drm_device *dev)
+{
+ struct drm_vma_entry *vma, *vma_temp;
+
+ /* Clear vma list (only needed for legacy drivers) */
+ list_for_each_entry_safe(vma, vma_temp, &dev->vmalist, head) {
+ list_del(&vma->head);
+ kfree(vma);
+ }
+}
+
+int drm_vma_info(struct seq_file *m, void *data)
+{
+ struct drm_info_node *node = (struct drm_info_node *) m->private;
+ struct drm_device *dev = node->minor->dev;
+ struct drm_vma_entry *pt;
+ struct vm_area_struct *vma;
+ unsigned long vma_count = 0;
+#if defined(__i386__)
+ unsigned int pgprot;
+#endif
+
+ mutex_lock(&dev->struct_mutex);
+ list_for_each_entry(pt, &dev->vmalist, head)
+ vma_count++;
+
+ seq_printf(m, "vma use count: %lu, high_memory = %pK, 0x%pK\n",
+ vma_count, high_memory,
+ (void *)(unsigned long)virt_to_phys(high_memory));
+
+ list_for_each_entry(pt, &dev->vmalist, head) {
+ vma = pt->vma;
+ if (!vma)
+ continue;
+ seq_printf(m,
+ "\n%5d 0x%pK-0x%pK %c%c%c%c%c%c 0x%08lx000",
+ pt->pid,
+ (void *)vma->vm_start, (void *)vma->vm_end,
+ vma->vm_flags & VM_READ ? 'r' : '-',
+ vma->vm_flags & VM_WRITE ? 'w' : '-',
+ vma->vm_flags & VM_EXEC ? 'x' : '-',
+ vma->vm_flags & VM_MAYSHARE ? 's' : 'p',
+ vma->vm_flags & VM_LOCKED ? 'l' : '-',
+ vma->vm_flags & VM_IO ? 'i' : '-',
+ vma->vm_pgoff);
+
+#if defined(__i386__)
+ pgprot = pgprot_val(vma->vm_page_prot);
+ seq_printf(m, " %c%c%c%c%c%c%c%c%c",
+ pgprot & _PAGE_PRESENT ? 'p' : '-',
+ pgprot & _PAGE_RW ? 'w' : 'r',
+ pgprot & _PAGE_USER ? 'u' : 's',
+ pgprot & _PAGE_PWT ? 't' : 'b',
+ pgprot & _PAGE_PCD ? 'u' : 'c',
+ pgprot & _PAGE_ACCESSED ? 'a' : '-',
+ pgprot & _PAGE_DIRTY ? 'd' : '-',
+ pgprot & _PAGE_PSE ? 'm' : 'k',
+ pgprot & _PAGE_GLOBAL ? 'g' : 'l');
+#endif
+ seq_printf(m, "\n");
+ }
+ mutex_unlock(&dev->struct_mutex);
+ return 0;
+}
return retval;
for (lane = 0; lane < lane_count; lane++)
- buf[lane] = DP_TRAIN_PRE_EMPHASIS_0 |
- DP_TRAIN_VOLTAGE_SWING_400;
+ buf[lane] = DP_TRAIN_PRE_EMPH_LEVEL_0 |
+ DP_TRAIN_VOLTAGE_SWING_LEVEL_0;
retval = exynos_dp_write_bytes_to_dpcd(dp, DP_TRAINING_LANE0_SET,
lane_count, buf);
static void exynos_dp_connector_destroy(struct drm_connector *connector)
{
+ drm_connector_unregister(connector);
+ drm_connector_cleanup(connector);
}
static struct drm_connector_funcs exynos_dp_connector_funcs = {
exynos_dp_dpms(display, DRM_MODE_DPMS_OFF);
+ exynos_dp_connector_destroy(&dp->connector);
encoder->funcs->destroy(encoder);
- drm_connector_cleanup(&dp->connector);
}
static const struct component_ops exynos_dp_ops = {
* Exynos specific crtc structure.
*
* @drm_crtc: crtc object.
- * @drm_plane: pointer of private plane object for this crtc
* @manager: the manager associated with this crtc
 * @pipe: the crtc index, assigned at load() when the crtc object is
 *	created and stored in the private->crtc array
*/
struct exynos_drm_crtc {
struct drm_crtc drm_crtc;
- struct drm_plane *plane;
struct exynos_drm_manager *manager;
unsigned int pipe;
unsigned int dpms;
exynos_drm_crtc_dpms(crtc, DRM_MODE_DPMS_ON);
- exynos_plane_commit(exynos_crtc->plane);
+ exynos_plane_commit(crtc->primary);
if (manager->ops->commit)
manager->ops->commit(manager);
- exynos_plane_dpms(exynos_crtc->plane, DRM_MODE_DPMS_ON);
+ exynos_plane_dpms(crtc->primary, DRM_MODE_DPMS_ON);
}
static bool
{
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
struct exynos_drm_manager *manager = exynos_crtc->manager;
- struct drm_plane *plane = exynos_crtc->plane;
+ struct drm_framebuffer *fb = crtc->primary->fb;
unsigned int crtc_w;
unsigned int crtc_h;
- int ret;
/*
* copy the mode data adjusted by mode_fixup() into crtc->mode
*/
memcpy(&crtc->mode, adjusted_mode, sizeof(*adjusted_mode));
- crtc_w = crtc->primary->fb->width - x;
- crtc_h = crtc->primary->fb->height - y;
+ crtc_w = fb->width - x;
+ crtc_h = fb->height - y;
if (manager->ops->mode_set)
manager->ops->mode_set(manager, &crtc->mode);
- ret = exynos_plane_mode_set(plane, crtc, crtc->primary->fb, 0, 0, crtc_w, crtc_h,
- x, y, crtc_w, crtc_h);
- if (ret)
- return ret;
-
- plane->crtc = crtc;
- plane->fb = crtc->primary->fb;
- drm_framebuffer_reference(plane->fb);
-
- return 0;
+ return exynos_plane_mode_set(crtc->primary, crtc, fb, 0, 0,
+ crtc_w, crtc_h, x, y, crtc_w, crtc_h);
}
static int exynos_drm_crtc_mode_set_commit(struct drm_crtc *crtc, int x, int y,
struct drm_framebuffer *old_fb)
{
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
- struct drm_plane *plane = exynos_crtc->plane;
+ struct drm_framebuffer *fb = crtc->primary->fb;
unsigned int crtc_w;
unsigned int crtc_h;
int ret;
return -EPERM;
}
- crtc_w = crtc->primary->fb->width - x;
- crtc_h = crtc->primary->fb->height - y;
+ crtc_w = fb->width - x;
+ crtc_h = fb->height - y;
- ret = exynos_plane_mode_set(plane, crtc, crtc->primary->fb, 0, 0, crtc_w, crtc_h,
- x, y, crtc_w, crtc_h);
+ ret = exynos_plane_mode_set(crtc->primary, crtc, fb, 0, 0,
+ crtc_w, crtc_h, x, y, crtc_w, crtc_h);
if (ret)
return ret;
exynos_drm_crtc_commit(crtc);
break;
case CRTC_MODE_BLANK:
- exynos_plane_dpms(exynos_crtc->plane,
- DRM_MODE_DPMS_OFF);
+ exynos_plane_dpms(crtc->primary, DRM_MODE_DPMS_OFF);
break;
default:
break;
int exynos_drm_crtc_create(struct exynos_drm_manager *manager)
{
struct exynos_drm_crtc *exynos_crtc;
+ struct drm_plane *plane;
struct exynos_drm_private *private = manager->drm_dev->dev_private;
struct drm_crtc *crtc;
+ int ret;
exynos_crtc = kzalloc(sizeof(*exynos_crtc), GFP_KERNEL);
if (!exynos_crtc)
exynos_crtc->dpms = DRM_MODE_DPMS_OFF;
exynos_crtc->manager = manager;
exynos_crtc->pipe = manager->pipe;
- exynos_crtc->plane = exynos_plane_init(manager->drm_dev,
- 1 << manager->pipe, true);
- if (!exynos_crtc->plane) {
- kfree(exynos_crtc);
- return -ENOMEM;
+ plane = exynos_plane_init(manager->drm_dev, 1 << manager->pipe,
+ DRM_PLANE_TYPE_PRIMARY);
+ if (IS_ERR(plane)) {
+ ret = PTR_ERR(plane);
+ goto err_plane;
}
manager->crtc = &exynos_crtc->drm_crtc;
private->crtc[manager->pipe] = crtc;
- drm_crtc_init(manager->drm_dev, crtc, &exynos_crtc_funcs);
+ ret = drm_crtc_init_with_planes(manager->drm_dev, crtc, plane, NULL,
+ &exynos_crtc_funcs);
+ if (ret < 0)
+ goto err_crtc;
+
drm_crtc_helper_add(crtc, &exynos_crtc_helper_funcs);
exynos_drm_crtc_attach_mode_property(crtc);
return 0;
+
+err_crtc:
+ plane->funcs->destroy(plane);
+err_plane:
+ kfree(exynos_crtc);
+ return ret;
}
int exynos_drm_crtc_enable_vblank(struct drm_device *dev, int pipe)
struct exynos_dpi *ctx = exynos_dpi_display.ctx;
exynos_dpi_dpms(&exynos_dpi_display, DRM_MODE_DPMS_OFF);
+
+ exynos_dpi_connector_destroy(&ctx->connector);
encoder->funcs->destroy(encoder);
- drm_connector_cleanup(&ctx->connector);
+
+ if (ctx->panel)
+ drm_panel_detach(ctx->panel);
exynos_drm_component_del(dev, EXYNOS_DEVICE_TYPE_CONNECTOR);
#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
-#include <linux/anon_inodes.h>
#include <linux/component.h>
#include <drm/exynos_drm.h>
struct drm_plane *plane;
unsigned long possible_crtcs = (1 << MAX_CRTC) - 1;
- plane = exynos_plane_init(dev, possible_crtcs, false);
- if (!plane)
+ plane = exynos_plane_init(dev, possible_crtcs,
+ DRM_PLANE_TYPE_OVERLAY);
+ if (IS_ERR(plane))
goto err_mode_config_cleanup;
}
/* force connectors detection */
drm_helper_hpd_irq_event(dev);
+	/*
+	 * Enable drm irq mode.
+	 * - with irq_enabled = true, we can use the vblank feature.
+	 *
+	 * Note that we do not use the drm irq handler but a
+	 * driver-specific one instead, because the drm framework
+	 * supports only one irq handler.
+	 */
+ dev->irq_enabled = true;
+
+	/*
+	 * With vblank_disable_allowed = true, the vblank interrupt will be
+	 * disabled by the drm timer once the current process gives up
+	 * ownership of the vblank event (after drm_vblank_put() is called).
+	 */
+ dev->vblank_disable_allowed = true;
+
return 0;
err_unbind_all:
exynos_drm_device_subdrv_remove(dev);
exynos_drm_fbdev_fini(dev);
- drm_vblank_cleanup(dev);
drm_kms_helper_poll_fini(dev);
- drm_mode_config_cleanup(dev);
+ component_unbind_all(dev->dev, dev);
+ drm_vblank_cleanup(dev);
+ drm_mode_config_cleanup(dev);
drm_release_iommu_mapping(dev);
- kfree(dev->dev_private);
- component_unbind_all(dev->dev, dev);
+ kfree(dev->dev_private);
dev->dev_private = NULL;
return 0;
}
-static const struct file_operations exynos_drm_gem_fops = {
- .mmap = exynos_drm_gem_mmap_buffer,
-};
-
static int exynos_drm_suspend(struct drm_device *dev, pm_message_t state)
{
struct drm_connector *connector;
static int exynos_drm_open(struct drm_device *dev, struct drm_file *file)
{
struct drm_exynos_file_private *file_priv;
- struct file *anon_filp;
int ret;
file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
if (ret)
goto err_file_priv_free;
- anon_filp = anon_inode_getfile("exynos_gem", &exynos_drm_gem_fops,
- NULL, 0);
- if (IS_ERR(anon_filp)) {
- ret = PTR_ERR(anon_filp);
- goto err_subdrv_close;
- }
-
- anon_filp->f_mode = FMODE_READ | FMODE_WRITE;
- file_priv->anon_filp = anon_filp;
-
return ret;
-err_subdrv_close:
- exynos_drm_subdrv_close(dev, file);
-
err_file_priv_free:
kfree(file_priv);
file->driver_priv = NULL;
static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file)
{
struct exynos_drm_private *private = dev->dev_private;
- struct drm_exynos_file_private *file_priv;
struct drm_pending_vblank_event *v, *vt;
struct drm_pending_event *e, *et;
unsigned long flags;
}
spin_unlock_irqrestore(&dev->event_lock, flags);
- file_priv = file->driver_priv;
- if (file_priv->anon_filp)
- fput(file_priv->anon_filp);
-
kfree(file->driver_priv);
file->driver_priv = NULL;
}
static const struct drm_ioctl_desc exynos_ioctls[] = {
DRM_IOCTL_DEF_DRV(EXYNOS_GEM_CREATE, exynos_drm_gem_create_ioctl,
DRM_UNLOCKED | DRM_AUTH),
- DRM_IOCTL_DEF_DRV(EXYNOS_GEM_MAP_OFFSET,
- exynos_drm_gem_map_offset_ioctl, DRM_UNLOCKED |
- DRM_AUTH),
- DRM_IOCTL_DEF_DRV(EXYNOS_GEM_MMAP,
- exynos_drm_gem_mmap_ioctl, DRM_UNLOCKED | DRM_AUTH),
DRM_IOCTL_DEF_DRV(EXYNOS_GEM_GET,
exynos_drm_gem_get_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF_DRV(EXYNOS_VIDI_CONNECTION,
.preclose = exynos_drm_preclose,
.lastclose = exynos_drm_lastclose,
.postclose = exynos_drm_postclose,
+ .set_busid = drm_platform_set_busid,
.get_vblank_counter = drm_vblank_count,
.enable_vblank = exynos_drm_crtc_enable_vblank,
.disable_vblank = exynos_drm_crtc_disable_vblank,
mutex_unlock(&drm_component_lock);
}
-static int compare_of(struct device *dev, void *data)
+static int compare_dev(struct device *dev, void *data)
{
return dev == (struct device *)data;
}
-static int exynos_drm_add_components(struct device *dev, struct master *m)
+static struct component_match *exynos_drm_match_add(struct device *dev)
{
+ struct component_match *match = NULL;
struct component_dev *cdev;
unsigned int attach_cnt = 0;
mutex_lock(&drm_component_lock);
list_for_each_entry(cdev, &drm_component_list, list) {
- int ret;
-
/*
* Add components to master only in case that crtc and
* encoder/connector device objects exist.
/*
* fimd and dpi modules have same device object so add
* only crtc device object in this case.
- *
- * TODO. if dpi module follows driver-model driver then
- * below codes can be removed.
*/
if (cdev->crtc_dev == cdev->conn_dev) {
- ret = component_master_add_child(m, compare_of,
- cdev->crtc_dev);
- if (ret < 0)
- return ret;
-
+ component_match_add(dev, &match, compare_dev,
+ cdev->crtc_dev);
goto out_lock;
}
* connector/encoder need pipe number of crtc when they
* are created.
*/
- ret = component_master_add_child(m, compare_of, cdev->crtc_dev);
- ret |= component_master_add_child(m, compare_of,
- cdev->conn_dev);
- if (ret < 0)
- return ret;
+ component_match_add(dev, &match, compare_dev, cdev->crtc_dev);
+ component_match_add(dev, &match, compare_dev, cdev->conn_dev);
out_lock:
mutex_lock(&drm_component_lock);
mutex_unlock(&drm_component_lock);
- return attach_cnt ? 0 : -ENODEV;
+ return attach_cnt ? match : ERR_PTR(-EPROBE_DEFER);
}
static int exynos_drm_bind(struct device *dev)
}
static const struct component_master_ops exynos_drm_ops = {
- .add_components = exynos_drm_add_components,
.bind = exynos_drm_bind,
.unbind = exynos_drm_unbind,
};
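Instead of adding children from a master .add_components hook, the driver now builds a component match table up front and registers it together with the master ops. A condensed sketch of the pattern, reusing compare_dev from above (crtc_dev and conn_dev are hypothetical sub-device pointers):

static int foo_probe(struct platform_device *pdev)	/* sketch */
{
	struct component_match *match = NULL;

	/* one entry per sub-device that must bind before the master */
	component_match_add(&pdev->dev, &match, compare_dev, crtc_dev);
	component_match_add(&pdev->dev, &match, compare_dev, conn_dev);

	return component_master_add_with_match(&pdev->dev,
					       &exynos_drm_ops, match);
}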
static int exynos_drm_platform_probe(struct platform_device *pdev)
{
+ struct component_match *match;
int ret;
pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
goto err_unregister_ipp_drv;
#endif
- ret = component_master_add(&pdev->dev, &exynos_drm_ops);
+ match = exynos_drm_match_add(&pdev->dev);
+ if (IS_ERR(match)) {
+ ret = PTR_ERR(match);
+ goto err_unregister_resources;
+ }
+
+ ret = component_master_add_with_match(&pdev->dev, &exynos_drm_ops,
+ match);
if (ret < 0)
- DRM_DEBUG_KMS("re-tried by last sub driver probed later.\n");
+ goto err_unregister_resources;
- return 0;
+ return ret;
+
+err_unregister_resources:
#ifdef CONFIG_DRM_EXYNOS_IPP
+ exynos_platform_device_ipp_unregister();
err_unregister_ipp_drv:
platform_driver_unregister(&ipp_driver);
err_unregister_gsc_drv:
struct drm_exynos_file_private {
struct exynos_drm_g2d_private *g2d_priv;
struct device *ipp_dev;
- struct file *anon_filp;
};
/*
#define DSIM_SYNC_INFORM (1 << 27)
#define DSIM_EOT_DISABLE (1 << 28)
#define DSIM_MFLUSH_VS (1 << 29)
+/* This flag is valid only for exynos3250/3472/4415/5260/5430 */
+#define DSIM_CLKLANE_STOP (1 << 30)
/* DSIM_ESCMODE */
#define DSIM_TX_TRIGGER_RST (1 << 4)
unsigned int plltmr_reg;
unsigned int has_freqband:1;
+ unsigned int has_clklane_stop:1;
};
struct exynos_dsi {
#define host_to_dsi(host) container_of(host, struct exynos_dsi, dsi_host)
#define connector_to_dsi(c) container_of(c, struct exynos_dsi, connector)
+static struct exynos_dsi_driver_data exynos3_dsi_driver_data = {
+ .plltmr_reg = 0x50,
+ .has_freqband = 1,
+ .has_clklane_stop = 1,
+};
+
static struct exynos_dsi_driver_data exynos4_dsi_driver_data = {
.plltmr_reg = 0x50,
.has_freqband = 1,
+ .has_clklane_stop = 1,
};
static struct exynos_dsi_driver_data exynos5_dsi_driver_data = {
};
static struct of_device_id exynos_dsi_of_match[] = {
+ { .compatible = "samsung,exynos3250-mipi-dsi",
+ .data = &exynos3_dsi_driver_data },
{ .compatible = "samsung,exynos4210-mipi-dsi",
.data = &exynos4_dsi_driver_data },
{ .compatible = "samsung,exynos5410-mipi-dsi",
if (!fout) {
dev_err(dsi->dev,
"failed to find PLL PMS for requested frequency\n");
- return -EFAULT;
+ return 0;
}
dev_dbg(dsi->dev, "PLL freq %lu, (p %d, m %d, s %d)\n", fout, p, m, s);
do {
if (timeout-- == 0) {
dev_err(dsi->dev, "PLL failed to stabilize\n");
- return -EFAULT;
+ return 0;
}
reg = readl(dsi->reg_base + DSIM_STATUS_REG);
} while ((reg & DSIM_PLL_STABLE) == 0);
static int exynos_dsi_init_link(struct exynos_dsi *dsi)
{
+ struct exynos_dsi_driver_data *driver_data = dsi->driver_data;
int timeout;
u32 reg;
u32 lanes_mask;
reg |= DSIM_LANE_EN(lanes_mask);
writel(reg, dsi->reg_base + DSIM_CONFIG_REG);
+	/*
+	 * Use non-continuous clock mode if the peripheral requests it
+	 * and the host controller supports it.
+	 *
+	 * In non-continuous clock mode the host controller turns off
+	 * the HS clock between high-speed transmissions to reduce
+	 * power consumption.
+	 */
+ if (driver_data->has_clklane_stop &&
+ dsi->mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS) {
+ reg |= DSIM_CLKLANE_STOP;
+ writel(reg, dsi->reg_base + DSIM_CONFIG_REG);
+ }
+
	/* Check that the clock and data lanes are in the stop state */
timeout = 100;
do {
static void exynos_dsi_connector_destroy(struct drm_connector *connector)
{
+ drm_connector_unregister(connector);
+ drm_connector_cleanup(connector);
+ connector->dev = NULL;
}
static struct drm_connector_funcs exynos_dsi_connector_funcs = {
exynos_dsi_dpms(&exynos_dsi_display, DRM_MODE_DPMS_OFF);
- mipi_dsi_host_unregister(&dsi->dsi_host);
-
+ exynos_dsi_connector_destroy(&dsi->connector);
encoder->funcs->destroy(encoder);
- drm_connector_cleanup(&dsi->connector);
+
+ mipi_dsi_host_unregister(&dsi->dsi_host);
}
static const struct component_ops exynos_dsi_component_ops = {
ret = drm_framebuffer_init(dev, &exynos_fb->fb, &exynos_drm_fb_funcs);
if (ret) {
+ kfree(exynos_fb);
DRM_ERROR("failed to initialize framebuffer\n");
return ERR_PTR(ret);
}
fbi->screen_base = buffer->kvaddr + offset;
fbi->screen_size = size;
+ fbi->fix.smem_len = size;
return 0;
}
fbdev = to_exynos_fbdev(private->fb_helper);
- if (fbdev->exynos_gem_obj)
- exynos_drm_gem_destroy(fbdev->exynos_gem_obj);
-
exynos_drm_fbdev_destroy(dev, private->fb_helper);
kfree(fbdev);
private->fb_helper = NULL;
fimc_set_bits(ctx, EXYNOS_CIWDOFST,
EXYNOS_CIWDOFST_CLROVFIY | EXYNOS_CIWDOFST_CLROVFICB |
EXYNOS_CIWDOFST_CLROVFICR);
- fimc_clear_bits(ctx, EXYNOS_CIWDOFST,
- EXYNOS_CIWDOFST_CLROVFIY | EXYNOS_CIWDOFST_CLROVFICB |
- EXYNOS_CIWDOFST_CLROVFICR);
dev_err(ippdrv->dev, "occurred overflow at %d, status 0x%x.\n",
ctx->id, status);
case IPP_BUF_ENQUEUE:
config = &property->config[EXYNOS_DRM_OPS_SRC];
fimc_write(ctx, buf_info->base[EXYNOS_DRM_PLANAR_Y],
- EXYNOS_CIIYSA(buf_id));
+ EXYNOS_CIIYSA0);
if (config->fmt == DRM_FORMAT_YVU420) {
fimc_write(ctx, buf_info->base[EXYNOS_DRM_PLANAR_CR],
- EXYNOS_CIICBSA(buf_id));
+ EXYNOS_CIICBSA0);
fimc_write(ctx, buf_info->base[EXYNOS_DRM_PLANAR_CB],
- EXYNOS_CIICRSA(buf_id));
+ EXYNOS_CIICRSA0);
} else {
fimc_write(ctx, buf_info->base[EXYNOS_DRM_PLANAR_CB],
- EXYNOS_CIICBSA(buf_id));
+ EXYNOS_CIICBSA0);
fimc_write(ctx, buf_info->base[EXYNOS_DRM_PLANAR_CR],
- EXYNOS_CIICRSA(buf_id));
+ EXYNOS_CIICRSA0);
}
break;
case IPP_BUF_DEQUEUE:
- fimc_write(ctx, 0x0, EXYNOS_CIIYSA(buf_id));
- fimc_write(ctx, 0x0, EXYNOS_CIICBSA(buf_id));
- fimc_write(ctx, 0x0, EXYNOS_CIICRSA(buf_id));
+ fimc_write(ctx, 0x0, EXYNOS_CIIYSA0);
+ fimc_write(ctx, 0x0, EXYNOS_CIICBSA0);
+ fimc_write(ctx, 0x0, EXYNOS_CIICRSA0);
break;
default:
/* bypass */
return 0;
}
-static int fimc_dst_get_buf_count(struct fimc_context *ctx)
-{
- u32 cfg, buf_num;
-
- cfg = fimc_read(ctx, EXYNOS_CIFCNTSEQ);
-
- buf_num = hweight32(cfg);
-
- DRM_DEBUG_KMS("buf_num[%d]\n", buf_num);
-
- return buf_num;
-}
-
-static int fimc_dst_set_buf_seq(struct fimc_context *ctx, u32 buf_id,
+static void fimc_dst_set_buf_seq(struct fimc_context *ctx, u32 buf_id,
enum drm_exynos_ipp_buf_type buf_type)
{
- struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
- bool enable;
- u32 cfg;
- u32 mask = 0x00000001 << buf_id;
- int ret = 0;
unsigned long flags;
+ u32 buf_num;
+ u32 cfg;
DRM_DEBUG_KMS("buf_id[%d]buf_type[%d]\n", buf_id, buf_type);
spin_lock_irqsave(&ctx->lock, flags);
- /* mask register set */
cfg = fimc_read(ctx, EXYNOS_CIFCNTSEQ);
- switch (buf_type) {
- case IPP_BUF_ENQUEUE:
- enable = true;
- break;
- case IPP_BUF_DEQUEUE:
- enable = false;
- break;
- default:
- dev_err(ippdrv->dev, "invalid buf ctrl parameter.\n");
- ret = -EINVAL;
- goto err_unlock;
- }
+ if (buf_type == IPP_BUF_ENQUEUE)
+ cfg |= (1 << buf_id);
+ else
+ cfg &= ~(1 << buf_id);
- /* sequence id */
- cfg &= ~mask;
- cfg |= (enable << buf_id);
fimc_write(ctx, cfg, EXYNOS_CIFCNTSEQ);
- /* interrupt enable */
- if (buf_type == IPP_BUF_ENQUEUE &&
- fimc_dst_get_buf_count(ctx) >= FIMC_BUF_START)
- fimc_mask_irq(ctx, true);
+ buf_num = hweight32(cfg);
- /* interrupt disable */
- if (buf_type == IPP_BUF_DEQUEUE &&
- fimc_dst_get_buf_count(ctx) <= FIMC_BUF_STOP)
+ if (buf_type == IPP_BUF_ENQUEUE && buf_num >= FIMC_BUF_START)
+ fimc_mask_irq(ctx, true);
+ else if (buf_type == IPP_BUF_DEQUEUE && buf_num <= FIMC_BUF_STOP)
fimc_mask_irq(ctx, false);
-err_unlock:
spin_unlock_irqrestore(&ctx->lock, flags);
- return ret;
}
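The rewritten helper treats the CIFCNTSEQ mask itself as the queue-depth counter: hweight32() counts the set bits, and the count drives a simple enable/disable hysteresis on the interrupt. A rough restatement of the idea, where `enqueue` is a stand-in boolean for the buffer type check:

buf_num = hweight32(cfg);	/* set bits in CIFCNTSEQ == queued buffers */
if (enqueue && buf_num >= FIMC_BUF_START)
	fimc_mask_irq(ctx, true);	/* deep enough: enable the irq */
else if (!enqueue && buf_num <= FIMC_BUF_STOP)
	fimc_mask_irq(ctx, false);	/* nearly drained: disable it  */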
static int fimc_dst_set_addr(struct device *dev,
break;
}
- return fimc_dst_set_buf_seq(ctx, buf_id, buf_type);
+ fimc_dst_set_buf_seq(ctx, buf_id, buf_type);
+
+ return 0;
}
static struct exynos_drm_ipp_ops fimc_dst_ops = {
DRM_DEBUG_KMS("buf_id[%d]\n", buf_id);
- if (fimc_dst_set_buf_seq(ctx, buf_id, IPP_BUF_DEQUEUE) < 0) {
- DRM_ERROR("failed to dequeue.\n");
- return IRQ_HANDLED;
- }
+ fimc_dst_set_buf_seq(ctx, buf_id, IPP_BUF_DEQUEUE);
event_work->ippdrv = ippdrv;
event_work->buf_id[EXYNOS_DRM_OPS_DST] = buf_id;
- queue_work(ippdrv->event_workq, (struct work_struct *)event_work);
+ queue_work(ippdrv->event_workq, &event_work->work);
return IRQ_HANDLED;
}
fimc_clear_bits(ctx, EXYNOS_CIOCTRL, EXYNOS_CIOCTRL_WEAVE_MASK);
- if (cmd == IPP_CMD_M2M) {
- fimc_set_bits(ctx, EXYNOS_MSCTRL, EXYNOS_MSCTRL_ENVID);
-
+ if (cmd == IPP_CMD_M2M)
fimc_set_bits(ctx, EXYNOS_MSCTRL, EXYNOS_MSCTRL_ENVID);
- }
return 0;
}
.has_limited_fmt = 1,
};
+static struct fimd_driver_data exynos3_fimd_driver_data = {
+ .timing_base = 0x20000,
+ .lcdblk_offset = 0x210,
+ .lcdblk_bypass_shift = 1,
+ .has_shadowcon = 1,
+ .has_vidoutcon = 1,
+};
+
static struct fimd_driver_data exynos4_fimd_driver_data = {
.timing_base = 0x0,
.lcdblk_offset = 0x210,
static const struct of_device_id fimd_driver_dt_match[] = {
{ .compatible = "samsung,s3c6400-fimd",
.data = &s3c64xx_fimd_driver_data },
+ { .compatible = "samsung,exynos3250-fimd",
+ .data = &exynos3_fimd_driver_data },
{ .compatible = "samsung,exynos4210-fimd",
.data = &exynos4_fimd_driver_data },
{ .compatible = "samsung,exynos5250-fimd",
DRM_DEBUG_KMS("vblank wait timed out.\n");
}
-
static void fimd_clear_channel(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
/* Check if any channel is enabled. */
for (win = 0; win < WINDOWS_NR; win++) {
- u32 val = readl(ctx->regs + SHADOWCON);
- if (val & SHADOWCON_CHx_ENABLE(win)) {
- val &= ~SHADOWCON_CHx_ENABLE(win);
- writel(val, ctx->regs + SHADOWCON);
+ u32 val = readl(ctx->regs + WINCON(win));
+
+ if (val & WINCONx_ENWIN) {
+ /* wincon */
+ val &= ~WINCONx_ENWIN;
+ writel(val, ctx->regs + WINCON(win));
+
+ /* unprotect windows */
+ if (ctx->driver_data->has_shadowcon) {
+ val = readl(ctx->regs + SHADOWCON);
+ val &= ~SHADOWCON_CHx_ENABLE(win);
+ writel(val, ctx->regs + SHADOWCON);
+ }
ch_enabled = 1;
}
}
	/* Wait for vsync, as disabling a channel takes effect at the next vsync */
- if (ch_enabled)
+ if (ch_enabled) {
+ unsigned int state = ctx->suspended;
+
+ ctx->suspended = 0;
fimd_wait_for_vblank(mgr);
+ ctx->suspended = state;
+ }
}
static int fimd_mgr_initialize(struct exynos_drm_manager *mgr,
mgr->drm_dev = ctx->drm_dev = drm_dev;
mgr->pipe = ctx->pipe = priv->pipe++;
- /*
- * enable drm irq mode.
- * - with irq_enabled = true, we can use the vblank feature.
- *
- * P.S. note that we wouldn't use drm irq handler but
- * just specific driver own one instead because
- * drm framework supports only one irq handler.
- */
- drm_dev->irq_enabled = true;
-
- /*
- * with vblank_disable_allowed = true, vblank interrupt will be disabled
- * by drm timer once a current process gives up ownership of
- * vblank event.(after drm_vblank_put function is called)
- */
- drm_dev->vblank_disable_allowed = true;
-
/* attach this sub driver to iommu mapping if supported. */
if (is_drm_iommu_supported(ctx->drm_dev)) {
/*
{
struct exynos_drm_manager *mgr = dev_get_drvdata(dev);
struct fimd_context *ctx = fimd_manager.ctx;
- struct drm_crtc *crtc = mgr->crtc;
fimd_dpms(mgr, DRM_MODE_DPMS_OFF);
exynos_dpi_remove(dev);
fimd_mgr_remove(mgr);
-
- crtc->funcs->destroy(crtc);
}
static const struct component_ops fimd_component_ops = {
drm_gem_object_unreference_unlocked(obj);
}
-int exynos_drm_gem_map_offset_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
-{
- struct drm_exynos_gem_map_off *args = data;
-
- DRM_DEBUG_KMS("handle = 0x%x, offset = 0x%lx\n",
- args->handle, (unsigned long)args->offset);
-
- if (!(dev->driver->driver_features & DRIVER_GEM)) {
- DRM_ERROR("does not support GEM.\n");
- return -ENODEV;
- }
-
- return exynos_drm_gem_dumb_map_offset(file_priv, dev, args->handle,
- &args->offset);
-}
-
-int exynos_drm_gem_mmap_buffer(struct file *filp,
+int exynos_drm_gem_mmap_buffer(struct exynos_drm_gem_obj *exynos_gem_obj,
struct vm_area_struct *vma)
{
- struct drm_gem_object *obj = filp->private_data;
- struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
- struct drm_device *drm_dev = obj->dev;
+ struct drm_device *drm_dev = exynos_gem_obj->base.dev;
struct exynos_drm_gem_buf *buffer;
unsigned long vm_size;
int ret;
- WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
-
- vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
- vma->vm_private_data = obj;
- vma->vm_ops = drm_dev->driver->gem_vm_ops;
-
- update_vm_cache_attr(exynos_gem_obj, vma);
+ vma->vm_flags &= ~VM_PFNMAP;
+ vma->vm_pgoff = 0;
vm_size = vma->vm_end - vma->vm_start;
return ret;
}
- /*
- * take a reference to this mapping of the object. And this reference
- * is unreferenced by the corresponding vm_close call.
- */
- drm_gem_object_reference(obj);
-
- drm_vm_open_locked(drm_dev, vma);
-
- return 0;
-}
-
-int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
-{
- struct drm_exynos_file_private *exynos_file_priv;
- struct drm_exynos_gem_mmap *args = data;
- struct drm_gem_object *obj;
- struct file *anon_filp;
- unsigned long addr;
-
- if (!(dev->driver->driver_features & DRIVER_GEM)) {
- DRM_ERROR("does not support GEM.\n");
- return -ENODEV;
- }
-
- mutex_lock(&dev->struct_mutex);
-
- obj = drm_gem_object_lookup(dev, file_priv, args->handle);
- if (!obj) {
- DRM_ERROR("failed to lookup gem object.\n");
- mutex_unlock(&dev->struct_mutex);
- return -EINVAL;
- }
-
- exynos_file_priv = file_priv->driver_priv;
- anon_filp = exynos_file_priv->anon_filp;
- anon_filp->private_data = obj;
-
- addr = vm_mmap(anon_filp, 0, args->size, PROT_READ | PROT_WRITE,
- MAP_SHARED, 0);
-
- drm_gem_object_unreference(obj);
-
- if (IS_ERR_VALUE(addr)) {
- mutex_unlock(&dev->struct_mutex);
- return (int)addr;
- }
-
- mutex_unlock(&dev->struct_mutex);
-
- args->mapped = addr;
-
- DRM_DEBUG_KMS("mapped = 0x%lx\n", (unsigned long)args->mapped);
-
return 0;
}
exynos_gem_obj = to_exynos_gem_obj(obj);
ret = check_gem_flags(exynos_gem_obj->flags);
- if (ret) {
- drm_gem_vm_close(vma);
- drm_gem_free_mmap_offset(obj);
- return ret;
- }
-
- vma->vm_flags &= ~VM_PFNMAP;
- vma->vm_flags |= VM_MIXEDMAP;
+ if (ret)
+ goto err_close_vm;
update_vm_cache_attr(exynos_gem_obj, vma);
+ ret = exynos_drm_gem_mmap_buffer(exynos_gem_obj, vma);
+ if (ret)
+ goto err_close_vm;
+
+ return ret;
+
+err_close_vm:
+ drm_gem_vm_close(vma);
+ drm_gem_free_mmap_offset(obj);
+
return ret;
}
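With the private anon-file mmap path gone, buffers are mapped through the standard GEM mmap route: userspace asks the kernel for a fake mmap offset and passes it to mmap() on the DRM fd. A userspace sketch, assuming a dumb-buffer handle and libdrm:

#include <stdint.h>
#include <sys/mman.h>
#include <xf86drm.h>

static void *map_gem(int fd, uint32_t handle, size_t size)
{
	struct drm_mode_map_dumb req = { .handle = handle };

	if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &req))
		return MAP_FAILED;

	/* req.offset is a fake offset dispatched by the driver's mmap */
	return mmap(NULL, size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, (off_t)req.offset);
}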
#ifndef _EXYNOS_DRM_GEM_H_
#define _EXYNOS_DRM_GEM_H_
+#include <drm/drm_gem.h>
+
#define to_exynos_gem_obj(x) container_of(x,\
struct exynos_drm_gem_obj, base)
unsigned int gem_handle,
struct drm_file *filp);
-/* get buffer offset to map to user space. */
-int exynos_drm_gem_map_offset_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-
-/*
- * mmap the physically continuous memory that a gem object contains
- * to user space.
- */
-int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-
-int exynos_drm_gem_mmap_buffer(struct file *filp,
- struct vm_area_struct *vma);
-
/* map user space allocated by malloc to pages. */
int exynos_drm_gem_userptr_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
buf_id[EXYNOS_DRM_OPS_SRC];
event_work->buf_id[EXYNOS_DRM_OPS_DST] =
buf_id[EXYNOS_DRM_OPS_DST];
- queue_work(ippdrv->event_workq,
- (struct work_struct *)event_work);
+ queue_work(ippdrv->event_workq, &event_work->work);
}
return IRQ_HANDLED;
u32 prop_id;
u32 buf_id;
struct drm_exynos_ipp_buf_info buf_info;
- struct drm_file *filp;
};
/*
sz->hsize, sz->vsize, config->flip, config->degree);
}
-static int ipp_find_and_set_property(struct drm_exynos_ipp_property *property)
-{
- struct exynos_drm_ippdrv *ippdrv;
- struct drm_exynos_ipp_cmd_node *c_node;
- u32 prop_id = property->prop_id;
-
- DRM_DEBUG_KMS("prop_id[%d]\n", prop_id);
-
- ippdrv = ipp_find_drv_by_handle(prop_id);
- if (IS_ERR(ippdrv)) {
- DRM_ERROR("failed to get ipp driver.\n");
- return -EINVAL;
- }
-
- /*
- * Find command node using command list in ippdrv.
- * when we find this command no using prop_id.
- * return property information set in this command node.
- */
- mutex_lock(&ippdrv->cmd_lock);
- list_for_each_entry(c_node, &ippdrv->cmd_list, list) {
- if ((c_node->property.prop_id == prop_id) &&
- (c_node->state == IPP_STATE_STOP)) {
- mutex_unlock(&ippdrv->cmd_lock);
- DRM_DEBUG_KMS("found cmd[%d]ippdrv[0x%x]\n",
- property->cmd, (int)ippdrv);
-
- c_node->property = *property;
- return 0;
- }
- }
- mutex_unlock(&ippdrv->cmd_lock);
-
- DRM_ERROR("failed to search property.\n");
-
- return -EINVAL;
-}
-
static struct drm_exynos_ipp_cmd_work *ipp_create_cmd_work(void)
{
struct drm_exynos_ipp_cmd_work *cmd_work;
struct drm_exynos_ipp_property *property = data;
struct exynos_drm_ippdrv *ippdrv;
struct drm_exynos_ipp_cmd_node *c_node;
+ u32 prop_id;
int ret, i;
if (!ctx) {
return -EINVAL;
}
+ prop_id = property->prop_id;
+
/*
* This is log print for user application property.
* user application set various property.
ipp_print_property(property, i);
/*
- * set property ioctl generated new prop_id.
- * but in this case already asigned prop_id using old set property.
- * e.g PAUSE state. this case supports find current prop_id and use it
- * instead of allocation.
+	 * If prop_id is non-zero, try to update the existing property.
*/
- if (property->prop_id) {
- DRM_DEBUG_KMS("prop_id[%d]\n", property->prop_id);
- return ipp_find_and_set_property(property);
+ if (prop_id) {
+ c_node = ipp_find_obj(&ctx->prop_idr, &ctx->prop_lock, prop_id);
+
+ if (!c_node || c_node->filp != file) {
+ DRM_DEBUG_KMS("prop_id[%d] not found\n", prop_id);
+ return -EINVAL;
+ }
+
+ if (c_node->state != IPP_STATE_STOP) {
+ DRM_DEBUG_KMS("prop_id[%d] not stopped\n", prop_id);
+ return -EINVAL;
+ }
+
+ c_node->property = *property;
+
+ return 0;
}
/* find ipp driver using ipp id */
property->prop_id, property->cmd, (int)ippdrv);
/* stored property information and ippdrv in private data */
- c_node->dev = dev;
c_node->property = *property;
c_node->state = IPP_STATE_IDLE;
+ c_node->filp = file;
c_node->start_work = ipp_create_cmd_work();
if (IS_ERR(c_node->start_work)) {
return ret;
}
-static void ipp_clean_cmd_node(struct ipp_context *ctx,
- struct drm_exynos_ipp_cmd_node *c_node)
-{
- /* delete list */
- list_del(&c_node->list);
-
- ipp_remove_id(&ctx->prop_idr, &ctx->prop_lock,
- c_node->property.prop_id);
-
- /* destroy mutex */
- mutex_destroy(&c_node->lock);
- mutex_destroy(&c_node->mem_lock);
- mutex_destroy(&c_node->event_lock);
-
- /* free command node */
- kfree(c_node->start_work);
- kfree(c_node->stop_work);
- kfree(c_node->event_work);
- kfree(c_node);
-}
-
-static bool ipp_check_mem_list(struct drm_exynos_ipp_cmd_node *c_node)
-{
- switch (c_node->property.cmd) {
- case IPP_CMD_WB:
- return !list_empty(&c_node->mem_list[EXYNOS_DRM_OPS_DST]);
- case IPP_CMD_OUTPUT:
- return !list_empty(&c_node->mem_list[EXYNOS_DRM_OPS_SRC]);
- case IPP_CMD_M2M:
- default:
- return !list_empty(&c_node->mem_list[EXYNOS_DRM_OPS_SRC]) &&
- !list_empty(&c_node->mem_list[EXYNOS_DRM_OPS_DST]);
- }
-}
-
-static struct drm_exynos_ipp_mem_node
- *ipp_find_mem_node(struct drm_exynos_ipp_cmd_node *c_node,
- struct drm_exynos_ipp_queue_buf *qbuf)
-{
- struct drm_exynos_ipp_mem_node *m_node;
- struct list_head *head;
- int count = 0;
-
- DRM_DEBUG_KMS("buf_id[%d]\n", qbuf->buf_id);
-
- /* source/destination memory list */
- head = &c_node->mem_list[qbuf->ops_id];
-
- /* find memory node from memory list */
- list_for_each_entry(m_node, head, list) {
- DRM_DEBUG_KMS("count[%d]m_node[0x%x]\n", count++, (int)m_node);
-
- /* compare buffer id */
- if (m_node->buf_id == qbuf->buf_id)
- return m_node;
- }
-
- return NULL;
-}
-
-static int ipp_set_mem_node(struct exynos_drm_ippdrv *ippdrv,
+static int ipp_put_mem_node(struct drm_device *drm_dev,
struct drm_exynos_ipp_cmd_node *c_node,
struct drm_exynos_ipp_mem_node *m_node)
{
- struct exynos_drm_ipp_ops *ops = NULL;
- int ret = 0;
+ int i;
DRM_DEBUG_KMS("node[0x%x]\n", (int)m_node);
if (!m_node) {
- DRM_ERROR("invalid queue node.\n");
+ DRM_ERROR("invalid dequeue node.\n");
return -EFAULT;
}
DRM_DEBUG_KMS("ops_id[%d]\n", m_node->ops_id);
- /* get operations callback */
- ops = ippdrv->ops[m_node->ops_id];
- if (!ops) {
- DRM_ERROR("not support ops.\n");
- return -EFAULT;
+ /* put gem buffer */
+ for_each_ipp_planar(i) {
+ unsigned long handle = m_node->buf_info.handles[i];
+ if (handle)
+ exynos_drm_gem_put_dma_addr(drm_dev, handle,
+ c_node->filp);
}
- /* set address and enable irq */
- if (ops->set_addr) {
- ret = ops->set_addr(ippdrv->dev, &m_node->buf_info,
- m_node->buf_id, IPP_BUF_ENQUEUE);
- if (ret) {
- DRM_ERROR("failed to set addr.\n");
- return ret;
- }
- }
+ list_del(&m_node->list);
+ kfree(m_node);
- return ret;
+ return 0;
}
static struct drm_exynos_ipp_mem_node
*ipp_get_mem_node(struct drm_device *drm_dev,
- struct drm_file *file,
struct drm_exynos_ipp_cmd_node *c_node,
struct drm_exynos_ipp_queue_buf *qbuf)
{
m_node->ops_id = qbuf->ops_id;
m_node->prop_id = qbuf->prop_id;
m_node->buf_id = qbuf->buf_id;
+ INIT_LIST_HEAD(&m_node->list);
DRM_DEBUG_KMS("m_node[0x%x]ops_id[%d]\n", (int)m_node, qbuf->ops_id);
DRM_DEBUG_KMS("prop_id[%d]buf_id[%d]\n", qbuf->prop_id, m_node->buf_id);
dma_addr_t *addr;
addr = exynos_drm_gem_get_dma_addr(drm_dev,
- qbuf->handle[i], file);
+ qbuf->handle[i], c_node->filp);
if (IS_ERR(addr)) {
DRM_ERROR("failed to get addr.\n");
- goto err_clear;
+ ipp_put_mem_node(drm_dev, c_node, m_node);
+ return ERR_PTR(-EFAULT);
}
buf_info->handles[i] = qbuf->handle[i];
}
}
- m_node->filp = file;
mutex_lock(&c_node->mem_lock);
list_add_tail(&m_node->list, &c_node->mem_list[qbuf->ops_id]);
mutex_unlock(&c_node->mem_lock);
return m_node;
-
-err_clear:
- kfree(m_node);
- return ERR_PTR(-EFAULT);
}
-static int ipp_put_mem_node(struct drm_device *drm_dev,
- struct drm_exynos_ipp_cmd_node *c_node,
- struct drm_exynos_ipp_mem_node *m_node)
+static void ipp_clean_mem_nodes(struct drm_device *drm_dev,
+ struct drm_exynos_ipp_cmd_node *c_node, int ops)
{
- int i;
-
- DRM_DEBUG_KMS("node[0x%x]\n", (int)m_node);
+ struct drm_exynos_ipp_mem_node *m_node, *tm_node;
+ struct list_head *head = &c_node->mem_list[ops];
- if (!m_node) {
- DRM_ERROR("invalid dequeue node.\n");
- return -EFAULT;
- }
+ mutex_lock(&c_node->mem_lock);
- DRM_DEBUG_KMS("ops_id[%d]\n", m_node->ops_id);
+ list_for_each_entry_safe(m_node, tm_node, head, list) {
+ int ret;
- /* put gem buffer */
- for_each_ipp_planar(i) {
- unsigned long handle = m_node->buf_info.handles[i];
- if (handle)
- exynos_drm_gem_put_dma_addr(drm_dev, handle,
- m_node->filp);
+ ret = ipp_put_mem_node(drm_dev, c_node, m_node);
+ if (ret)
+ DRM_ERROR("failed to put m_node.\n");
}
- /* delete list in queue */
- list_del(&m_node->list);
- kfree(m_node);
-
- return 0;
+ mutex_unlock(&c_node->mem_lock);
}
static void ipp_free_event(struct drm_pending_event *event)
}
static int ipp_get_event(struct drm_device *drm_dev,
- struct drm_file *file,
struct drm_exynos_ipp_cmd_node *c_node,
struct drm_exynos_ipp_queue_buf *qbuf)
{
e = kzalloc(sizeof(*e), GFP_KERNEL);
if (!e) {
spin_lock_irqsave(&drm_dev->event_lock, flags);
- file->event_space += sizeof(e->event);
+ c_node->filp->event_space += sizeof(e->event);
spin_unlock_irqrestore(&drm_dev->event_lock, flags);
return -ENOMEM;
}
e->event.prop_id = qbuf->prop_id;
e->event.buf_id[EXYNOS_DRM_OPS_DST] = qbuf->buf_id;
e->base.event = &e->event.base;
- e->base.file_priv = file;
+ e->base.file_priv = c_node->filp;
e->base.destroy = ipp_free_event;
mutex_lock(&c_node->event_lock);
list_add_tail(&e->base.link, &c_node->event_list);
return;
}
+static void ipp_clean_cmd_node(struct ipp_context *ctx,
+ struct drm_exynos_ipp_cmd_node *c_node)
+{
+ int i;
+
+ /* cancel works */
+ cancel_work_sync(&c_node->start_work->work);
+ cancel_work_sync(&c_node->stop_work->work);
+ cancel_work_sync(&c_node->event_work->work);
+
+ /* put event */
+ ipp_put_event(c_node, NULL);
+
+ for_each_ipp_ops(i)
+ ipp_clean_mem_nodes(ctx->subdrv.drm_dev, c_node, i);
+
+ /* delete list */
+ list_del(&c_node->list);
+
+ ipp_remove_id(&ctx->prop_idr, &ctx->prop_lock,
+ c_node->property.prop_id);
+
+ /* destroy mutex */
+ mutex_destroy(&c_node->lock);
+ mutex_destroy(&c_node->mem_lock);
+ mutex_destroy(&c_node->event_lock);
+
+ /* free command node */
+ kfree(c_node->start_work);
+ kfree(c_node->stop_work);
+ kfree(c_node->event_work);
+ kfree(c_node);
+}
+
+static bool ipp_check_mem_list(struct drm_exynos_ipp_cmd_node *c_node)
+{
+ switch (c_node->property.cmd) {
+ case IPP_CMD_WB:
+ return !list_empty(&c_node->mem_list[EXYNOS_DRM_OPS_DST]);
+ case IPP_CMD_OUTPUT:
+ return !list_empty(&c_node->mem_list[EXYNOS_DRM_OPS_SRC]);
+ case IPP_CMD_M2M:
+ default:
+ return !list_empty(&c_node->mem_list[EXYNOS_DRM_OPS_SRC]) &&
+ !list_empty(&c_node->mem_list[EXYNOS_DRM_OPS_DST]);
+ }
+}
+
+static struct drm_exynos_ipp_mem_node
+ *ipp_find_mem_node(struct drm_exynos_ipp_cmd_node *c_node,
+ struct drm_exynos_ipp_queue_buf *qbuf)
+{
+ struct drm_exynos_ipp_mem_node *m_node;
+ struct list_head *head;
+ int count = 0;
+
+ DRM_DEBUG_KMS("buf_id[%d]\n", qbuf->buf_id);
+
+ /* source/destination memory list */
+ head = &c_node->mem_list[qbuf->ops_id];
+
+ /* find memory node from memory list */
+ list_for_each_entry(m_node, head, list) {
+ DRM_DEBUG_KMS("count[%d]m_node[0x%x]\n", count++, (int)m_node);
+
+ /* compare buffer id */
+ if (m_node->buf_id == qbuf->buf_id)
+ return m_node;
+ }
+
+ return NULL;
+}
+
+static int ipp_set_mem_node(struct exynos_drm_ippdrv *ippdrv,
+ struct drm_exynos_ipp_cmd_node *c_node,
+ struct drm_exynos_ipp_mem_node *m_node)
+{
+ struct exynos_drm_ipp_ops *ops = NULL;
+ int ret = 0;
+
+ DRM_DEBUG_KMS("node[0x%x]\n", (int)m_node);
+
+ if (!m_node) {
+ DRM_ERROR("invalid queue node.\n");
+ return -EFAULT;
+ }
+
+ DRM_DEBUG_KMS("ops_id[%d]\n", m_node->ops_id);
+
+ /* get operations callback */
+ ops = ippdrv->ops[m_node->ops_id];
+ if (!ops) {
+ DRM_ERROR("not support ops.\n");
+ return -EFAULT;
+ }
+
+ /* set address and enable irq */
+ if (ops->set_addr) {
+ ret = ops->set_addr(ippdrv->dev, &m_node->buf_info,
+ m_node->buf_id, IPP_BUF_ENQUEUE);
+ if (ret) {
+ DRM_ERROR("failed to set addr.\n");
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
static void ipp_handle_cmd_work(struct device *dev,
struct exynos_drm_ippdrv *ippdrv,
struct drm_exynos_ipp_cmd_work *cmd_work,
cmd_work->ippdrv = ippdrv;
cmd_work->c_node = c_node;
- queue_work(ctx->cmd_workq, (struct work_struct *)cmd_work);
+ queue_work(ctx->cmd_workq, &cmd_work->work);
}
static int ipp_queue_buf_with_run(struct device *dev,
/* find command node */
c_node = ipp_find_obj(&ctx->prop_idr, &ctx->prop_lock,
qbuf->prop_id);
- if (!c_node) {
+ if (!c_node || c_node->filp != file) {
DRM_ERROR("failed to get command node.\n");
return -ENODEV;
}
switch (qbuf->buf_type) {
case IPP_BUF_ENQUEUE:
/* get memory node */
- m_node = ipp_get_mem_node(drm_dev, file, c_node, qbuf);
+ m_node = ipp_get_mem_node(drm_dev, c_node, qbuf);
if (IS_ERR(m_node)) {
DRM_ERROR("failed to get m_node.\n");
return PTR_ERR(m_node);
*/
if (qbuf->ops_id == EXYNOS_DRM_OPS_DST) {
/* get event for destination buffer */
- ret = ipp_get_event(drm_dev, file, c_node, qbuf);
+ ret = ipp_get_event(drm_dev, c_node, qbuf);
if (ret) {
DRM_ERROR("failed to get event.\n");
goto err_clean_node;
c_node = ipp_find_obj(&ctx->prop_idr, &ctx->prop_lock,
cmd_ctrl->prop_id);
- if (!c_node) {
+ if (!c_node || c_node->filp != file) {
DRM_ERROR("invalid command node list.\n");
return -ENODEV;
}
struct exynos_drm_ippdrv *ippdrv,
struct drm_exynos_ipp_cmd_node *c_node)
{
- struct drm_exynos_ipp_mem_node *m_node, *tm_node;
struct drm_exynos_ipp_property *property = &c_node->property;
- struct list_head *head;
- int ret = 0, i;
+ int i;
DRM_DEBUG_KMS("prop_id[%d]\n", property->prop_id);
- /* put event */
- ipp_put_event(c_node, NULL);
-
- mutex_lock(&c_node->mem_lock);
+ /* stop operations */
+ if (ippdrv->stop)
+ ippdrv->stop(ippdrv->dev, property->cmd);
/* check command */
switch (property->cmd) {
case IPP_CMD_M2M:
- for_each_ipp_ops(i) {
- /* source/destination memory list */
- head = &c_node->mem_list[i];
-
- list_for_each_entry_safe(m_node, tm_node,
- head, list) {
- ret = ipp_put_mem_node(drm_dev, c_node,
- m_node);
- if (ret) {
- DRM_ERROR("failed to put m_node.\n");
- goto err_clear;
- }
- }
- }
+ for_each_ipp_ops(i)
+ ipp_clean_mem_nodes(drm_dev, c_node, i);
break;
case IPP_CMD_WB:
- /* destination memory list */
- head = &c_node->mem_list[EXYNOS_DRM_OPS_DST];
-
- list_for_each_entry_safe(m_node, tm_node, head, list) {
- ret = ipp_put_mem_node(drm_dev, c_node, m_node);
- if (ret) {
- DRM_ERROR("failed to put m_node.\n");
- goto err_clear;
- }
- }
+ ipp_clean_mem_nodes(drm_dev, c_node, EXYNOS_DRM_OPS_DST);
break;
case IPP_CMD_OUTPUT:
- /* source memory list */
- head = &c_node->mem_list[EXYNOS_DRM_OPS_SRC];
-
- list_for_each_entry_safe(m_node, tm_node, head, list) {
- ret = ipp_put_mem_node(drm_dev, c_node, m_node);
- if (ret) {
- DRM_ERROR("failed to put m_node.\n");
- goto err_clear;
- }
- }
+ ipp_clean_mem_nodes(drm_dev, c_node, EXYNOS_DRM_OPS_SRC);
break;
default:
DRM_ERROR("invalid operations.\n");
- ret = -EINVAL;
- goto err_clear;
+ return -EINVAL;
}
-err_clear:
- mutex_unlock(&c_node->mem_lock);
-
- /* stop operations */
- if (ippdrv->stop)
- ippdrv->stop(ippdrv->dev, property->cmd);
-
- return ret;
+ return 0;
}
void ipp_sched_cmd(struct work_struct *work)
{
struct drm_exynos_ipp_cmd_work *cmd_work =
- (struct drm_exynos_ipp_cmd_work *)work;
+ container_of(work, struct drm_exynos_ipp_cmd_work, work);
struct exynos_drm_ippdrv *ippdrv;
struct drm_exynos_ipp_cmd_node *c_node;
struct drm_exynos_ipp_property *property;
void ipp_sched_event(struct work_struct *work)
{
struct drm_exynos_ipp_event_work *event_work =
- (struct drm_exynos_ipp_event_work *)work;
+ container_of(work, struct drm_exynos_ipp_event_work, work);
struct exynos_drm_ippdrv *ippdrv;
struct drm_exynos_ipp_cmd_node *c_node;
int ret;
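The work handlers previously cast the work_struct pointer to the wrapper type, which is only safe while the work item stays the first member; container_of() computes the enclosing structure regardless of member position. A minimal illustration with a hypothetical wrapper:

struct foo_event_work {		/* hypothetical */
	struct work_struct work;
	int buf_id;
};

static void foo_sched_event(struct work_struct *work)
{
	/* correct even if 'work' were not the first member */
	struct foo_event_work *ew =
		container_of(work, struct foo_event_work, work);

	(void)ew->buf_id;
}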
static void ipp_subdrv_remove(struct drm_device *drm_dev, struct device *dev)
{
- struct exynos_drm_ippdrv *ippdrv;
+ struct exynos_drm_ippdrv *ippdrv, *t;
struct ipp_context *ctx = get_ipp_context(dev);
/* get ipp driver entry */
- list_for_each_entry(ippdrv, &exynos_drm_ippdrv_list, drv_list) {
+ list_for_each_entry_safe(ippdrv, t, &exynos_drm_ippdrv_list, drv_list) {
if (is_drm_iommu_supported(drm_dev))
drm_iommu_detach_device(drm_dev, ippdrv->dev);
static void ipp_subdrv_close(struct drm_device *drm_dev, struct device *dev,
struct drm_file *file)
{
- struct drm_exynos_file_private *file_priv = file->driver_priv;
struct exynos_drm_ippdrv *ippdrv = NULL;
struct ipp_context *ctx = get_ipp_context(dev);
struct drm_exynos_ipp_cmd_node *c_node, *tc_node;
int count = 0;
- DRM_DEBUG_KMS("for priv[0x%x]\n", (int)file_priv->ipp_dev);
-
list_for_each_entry(ippdrv, &exynos_drm_ippdrv_list, drv_list) {
mutex_lock(&ippdrv->cmd_lock);
list_for_each_entry_safe(c_node, tc_node,
DRM_DEBUG_KMS("count[%d]ippdrv[0x%x]\n",
count++, (int)ippdrv);
- if (c_node->dev == file_priv->ipp_dev) {
+ if (c_node->filp == file) {
/*
				 * userland went into an abnormal state (e.g. the
				 * process was killed) and closed the file.
return 0;
}
-static int ipp_power_ctrl(struct ipp_context *ctx, bool enable)
-{
- DRM_DEBUG_KMS("enable[%d]\n", enable);
-
- return 0;
-}
-
-#ifdef CONFIG_PM_SLEEP
-static int ipp_suspend(struct device *dev)
-{
- struct ipp_context *ctx = get_ipp_context(dev);
-
- if (pm_runtime_suspended(dev))
- return 0;
-
- return ipp_power_ctrl(ctx, false);
-}
-
-static int ipp_resume(struct device *dev)
-{
- struct ipp_context *ctx = get_ipp_context(dev);
-
- if (!pm_runtime_suspended(dev))
- return ipp_power_ctrl(ctx, true);
-
- return 0;
-}
-#endif
-
-#ifdef CONFIG_PM_RUNTIME
-static int ipp_runtime_suspend(struct device *dev)
-{
- struct ipp_context *ctx = get_ipp_context(dev);
-
- return ipp_power_ctrl(ctx, false);
-}
-
-static int ipp_runtime_resume(struct device *dev)
-{
- struct ipp_context *ctx = get_ipp_context(dev);
-
- return ipp_power_ctrl(ctx, true);
-}
-#endif
-
-static const struct dev_pm_ops ipp_pm_ops = {
- SET_SYSTEM_SLEEP_PM_OPS(ipp_suspend, ipp_resume)
- SET_RUNTIME_PM_OPS(ipp_runtime_suspend, ipp_runtime_resume, NULL)
-};
-
struct platform_driver ipp_driver = {
.probe = ipp_probe,
.remove = ipp_remove,
.driver = {
.name = "exynos-drm-ipp",
.owner = THIS_MODULE,
- .pm = &ipp_pm_ops,
},
};
/*
* A structure of command node.
*
- * @dev: IPP device.
* @list: list head to command queue information.
* @event_list: list head of event.
* @mem_list: list head to source,destination memory queue information.
* @stop_work: stop command work structure.
* @event_work: event work structure.
* @state: state of command node.
+ * @filp: associated file pointer.
*/
struct drm_exynos_ipp_cmd_node {
- struct device *dev;
struct list_head list;
struct list_head event_list;
struct list_head mem_list[EXYNOS_DRM_OPS_MAX];
struct drm_exynos_ipp_cmd_work *stop_work;
struct drm_exynos_ipp_event_work *event_work;
enum drm_exynos_ipp_state state;
+ struct drm_file *filp;
};
/*
overlay->crtc_x, overlay->crtc_y,
overlay->crtc_width, overlay->crtc_height);
+ plane->crtc = crtc;
+
exynos_drm_crtc_plane_mode_set(crtc, overlay);
return 0;
if (ret < 0)
return ret;
- plane->crtc = crtc;
-
exynos_plane_commit(plane);
exynos_plane_dpms(plane, DRM_MODE_DPMS_ON);
}
struct drm_plane *exynos_plane_init(struct drm_device *dev,
- unsigned long possible_crtcs, bool priv)
+ unsigned long possible_crtcs,
+ enum drm_plane_type type)
{
struct exynos_plane *exynos_plane;
int err;
exynos_plane = kzalloc(sizeof(struct exynos_plane), GFP_KERNEL);
if (!exynos_plane)
- return NULL;
+ return ERR_PTR(-ENOMEM);
- err = drm_plane_init(dev, &exynos_plane->base, possible_crtcs,
- &exynos_plane_funcs, formats, ARRAY_SIZE(formats),
- priv);
+ err = drm_universal_plane_init(dev, &exynos_plane->base, possible_crtcs,
+ &exynos_plane_funcs, formats,
+ ARRAY_SIZE(formats), type);
if (err) {
DRM_ERROR("failed to initialize plane\n");
kfree(exynos_plane);
- return NULL;
+ return ERR_PTR(err);
}
- if (priv)
+ if (type == DRM_PLANE_TYPE_PRIMARY)
exynos_plane->overlay.zpos = DEFAULT_ZPOS;
else
exynos_plane_attach_zpos_property(&exynos_plane->base);
void exynos_plane_commit(struct drm_plane *plane);
void exynos_plane_dpms(struct drm_plane *plane, int mode);
struct drm_plane *exynos_plane_init(struct drm_device *dev,
- unsigned long possible_crtcs, bool priv);
+ unsigned long possible_crtcs,
+ enum drm_plane_type type);
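drm_universal_plane_init() replaces the old boolean "private plane" flag with an explicit plane type, which is what lets the CRTC pass the returned plane to drm_crtc_init_with_planes() as its primary. The call as used above, reduced to a sketch:

err = drm_universal_plane_init(dev, &exynos_plane->base,
			       possible_crtcs, &exynos_plane_funcs,
			       formats, ARRAY_SIZE(formats),
			       DRM_PLANE_TYPE_PRIMARY);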
event_work->ippdrv = ippdrv;
event_work->buf_id[EXYNOS_DRM_OPS_DST] =
rot->cur_buf_id[EXYNOS_DRM_OPS_DST];
- queue_work(ippdrv->event_workq,
- (struct work_struct *)event_work);
+ queue_work(ippdrv->event_workq, &event_work->work);
} else {
DRM_ERROR("the SFR is set illegally\n");
}
mgr->drm_dev = ctx->drm_dev = drm_dev;
mgr->pipe = ctx->pipe = priv->pipe++;
- /*
- * enable drm irq mode.
- * - with irq_enabled = 1, we can use the vblank feature.
- *
- * P.S. note that we wouldn't use drm irq handler but
- * just specific driver own one instead because
- * drm framework supports only one irq handler.
- */
- drm_dev->irq_enabled = 1;
-
- /*
- * with vblank_disable_allowed = 1, vblank interrupt will be disabled
- * by drm timer once a current process gives up ownership of
- * vblank event.(after drm_vblank_put function is called)
- */
- drm_dev->vblank_disable_allowed = 1;
-
return 0;
}
struct exynos_drm_manager *mgr = platform_get_drvdata(pdev);
struct vidi_context *ctx = mgr->ctx;
struct drm_encoder *encoder = ctx->encoder;
- struct drm_crtc *crtc = mgr->crtc;
if (ctx->raw_edid != (struct edid *)fake_edid_info) {
kfree(ctx->raw_edid);
return -EINVAL;
}
- crtc->funcs->destroy(crtc);
encoder->funcs->destroy(encoder);
drm_connector_cleanup(&ctx->connector);
static void hdmi_connector_destroy(struct drm_connector *connector)
{
+ drm_connector_unregister(connector);
+ drm_connector_cleanup(connector);
}
static struct drm_connector_funcs hdmi_connector_funcs = {
struct drm_encoder *encoder = display->encoder;
struct hdmi_context *hdata = display->ctx;
+ hdmi_connector_destroy(&hdata->connector);
encoder->funcs->destroy(encoder);
- drm_connector_cleanup(&hdata->connector);
}
static const struct component_ops hdmi_component_ops = {
static void mixer_unbind(struct device *dev, struct device *master, void *data)
{
struct exynos_drm_manager *mgr = dev_get_drvdata(dev);
- struct drm_crtc *crtc = mgr->crtc;
dev_info(dev, "remove successful\n");
mixer_mgr_remove(mgr);
pm_runtime_disable(dev);
-
- crtc->funcs->destroy(crtc);
}
static const struct component_ops mixer_component_ops = {
};
#endif
-#define CDV_DP_VOLTAGE_MAX DP_TRAIN_VOLTAGE_SWING_1200
+#define CDV_DP_VOLTAGE_MAX DP_TRAIN_VOLTAGE_SWING_LEVEL_3
/*
static uint8_t
cdv_intel_dp_pre_emphasis_max(uint8_t voltage_swing)
cdv_sb_write(dev, ddi_reg->VSwing2, dp_vswing_premph_table[index]);
/* ;gfx_dpio_set_reg(0x814c, 0x40802040) */
- if ((vswing + premph) == DP_TRAIN_VOLTAGE_SWING_1200)
+ if ((vswing + premph) == DP_TRAIN_VOLTAGE_SWING_LEVEL_3)
cdv_sb_write(dev, ddi_reg->VSwing3, 0x70802040);
else
cdv_sb_write(dev, ddi_reg->VSwing3, 0x40802040);
static int psbfb_probe(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
- struct psb_fbdev *psb_fbdev = (struct psb_fbdev *)helper;
+ struct psb_fbdev *psb_fbdev =
+ container_of(helper, struct psb_fbdev, psb_fb_helper);
struct drm_device *dev = psb_fbdev->psb_fb_helper.dev;
struct drm_psb_private *dev_priv = dev->dev_private;
int bytespp;
#define _PSB_GTT_H_
#include <drm/drmP.h>
+#include <drm/drm_gem.h>
/* This wants cleaning up with respect to the psb_dev and un-needed stuff */
struct psb_gtt {
switch (edp_link_params->preemphasis) {
case 0:
- dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_0;
+ dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_0;
break;
case 1:
- dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_3_5;
+ dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_1;
break;
case 2:
- dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_6;
+ dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_2;
break;
case 3:
- dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_9_5;
+ dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_3;
break;
}
switch (edp_link_params->vswing) {
case 0:
- dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_400;
+ dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_0;
break;
case 1:
- dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_600;
+ dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_1;
break;
case 2:
- dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_800;
+ dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
break;
case 3:
- dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_1200;
+ dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
break;
}
DRM_DEBUG_KMS("VBT reports EDP: VSwing %d, Preemph %d\n",
.unload = psb_driver_unload,
.lastclose = psb_driver_lastclose,
.preclose = psb_driver_preclose,
+ .set_busid = drm_pci_set_busid,
.num_ioctls = ARRAY_SIZE(psb_ioctls),
.device_is_agp = psb_driver_device_is_agp,
(drm_i810_private_t *) dev->dev_private;
if (dev_priv->ring.virtual_start)
- drm_core_ioremapfree(&dev_priv->ring.map, dev);
+ drm_legacy_ioremapfree(&dev_priv->ring.map, dev);
if (dev_priv->hw_status_page) {
pci_free_consistent(dev->pdev, PAGE_SIZE,
dev_priv->hw_status_page,
drm_i810_buf_priv_t *buf_priv = buf->dev_private;
if (buf_priv->kernel_virtual && buf->total)
- drm_core_ioremapfree(&buf_priv->map, dev);
+ drm_legacy_ioremapfree(&buf_priv->map, dev);
}
}
return 0;
buf_priv->map.flags = 0;
buf_priv->map.mtrr = 0;
- drm_core_ioremap(&buf_priv->map, dev);
+ drm_legacy_ioremap(&buf_priv->map, dev);
buf_priv->kernel_virtual = buf_priv->map.handle;
}
DRM_ERROR("can not find sarea!\n");
return -EINVAL;
}
- dev_priv->mmio_map = drm_core_findmap(dev, init->mmio_offset);
+ dev_priv->mmio_map = drm_legacy_findmap(dev, init->mmio_offset);
if (!dev_priv->mmio_map) {
dev->dev_private = (void *)dev_priv;
i810_dma_cleanup(dev);
return -EINVAL;
}
dev->agp_buffer_token = init->buffers_offset;
- dev->agp_buffer_map = drm_core_findmap(dev, init->buffers_offset);
+ dev->agp_buffer_map = drm_legacy_findmap(dev, init->buffers_offset);
if (!dev->agp_buffer_map) {
dev->dev_private = (void *)dev_priv;
i810_dma_cleanup(dev);
dev_priv->ring.map.flags = 0;
dev_priv->ring.map.mtrr = 0;
- drm_core_ioremap(&dev_priv->ring.map, dev);
+ drm_legacy_ioremap(&dev_priv->ring.map, dev);
if (dev_priv->ring.map.handle == NULL) {
dev->dev_private = (void *)dev_priv;
}
if (file_priv->master && file_priv->master->lock.hw_lock) {
- drm_idlelock_take(&file_priv->master->lock);
+ drm_legacy_idlelock_take(&file_priv->master->lock);
i810_driver_reclaim_buffers(dev, file_priv);
- drm_idlelock_release(&file_priv->master->lock);
+ drm_legacy_idlelock_release(&file_priv->master->lock);
} else {
/* master disappeared, clean up stuff anyway and hope nothing
* goes wrong */
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
- .mmap = drm_mmap,
+ .mmap = drm_legacy_mmap,
.poll = drm_poll,
#ifdef CONFIG_COMPAT
.compat_ioctl = drm_compat_ioctl,
.load = i810_driver_load,
.lastclose = i810_driver_lastclose,
.preclose = i810_driver_preclose,
+ .set_busid = drm_pci_set_busid,
.device_is_agp = i810_driver_device_is_agp,
.dma_quiescent = i810_driver_dma_quiescent,
.ioctls = i810_ioctls,
#ifndef _I810_DRV_H_
#define _I810_DRV_H_
+#include <drm/drm_legacy.h>
+
/* General customization:
*/
i915_gpu_error.o \
i915_irq.o \
i915_trace_points.o \
+ intel_lrc.o \
intel_ringbuffer.o \
intel_uncore.o
#define NS2501_REGC 0x0c
+enum {
+ MODE_640x480,
+ MODE_800x600,
+ MODE_1024x768,
+};
+
+struct ns2501_reg {
+ uint8_t offset;
+ uint8_t value;
+};
+
+/*
+ * Magic values based on what the BIOS on
+ * Fujitsu-Siemens Lifebook S6010 programs (1024x768 panel).
+ */
+static const struct ns2501_reg regs_1024x768[][86] = {
+ [MODE_640x480] = {
+ [0] = { .offset = 0x0a, .value = 0x81, },
+ [1] = { .offset = 0x18, .value = 0x07, },
+ [2] = { .offset = 0x19, .value = 0x00, },
+ [3] = { .offset = 0x1a, .value = 0x00, },
+ [4] = { .offset = 0x1b, .value = 0x11, },
+ [5] = { .offset = 0x1c, .value = 0x54, },
+ [6] = { .offset = 0x1d, .value = 0x03, },
+ [7] = { .offset = 0x1e, .value = 0x02, },
+ [8] = { .offset = 0xf3, .value = 0x90, },
+ [9] = { .offset = 0xf9, .value = 0x00, },
+ [10] = { .offset = 0xc1, .value = 0x90, },
+ [11] = { .offset = 0xc2, .value = 0x00, },
+ [12] = { .offset = 0xc3, .value = 0x0f, },
+ [13] = { .offset = 0xc4, .value = 0x03, },
+ [14] = { .offset = 0xc5, .value = 0x16, },
+ [15] = { .offset = 0xc6, .value = 0x00, },
+ [16] = { .offset = 0xc7, .value = 0x02, },
+ [17] = { .offset = 0xc8, .value = 0x02, },
+ [18] = { .offset = 0xf4, .value = 0x00, },
+ [19] = { .offset = 0x80, .value = 0xff, },
+ [20] = { .offset = 0x81, .value = 0x07, },
+ [21] = { .offset = 0x82, .value = 0x3d, },
+ [22] = { .offset = 0x83, .value = 0x05, },
+ [23] = { .offset = 0x94, .value = 0x00, },
+ [24] = { .offset = 0x95, .value = 0x00, },
+ [25] = { .offset = 0x96, .value = 0x05, },
+ [26] = { .offset = 0x97, .value = 0x00, },
+ [27] = { .offset = 0x9a, .value = 0x88, },
+ [28] = { .offset = 0x9b, .value = 0x00, },
+ [29] = { .offset = 0x98, .value = 0x00, },
+ [30] = { .offset = 0x99, .value = 0x00, },
+ [31] = { .offset = 0xf7, .value = 0x88, },
+ [32] = { .offset = 0xf8, .value = 0x0a, },
+ [33] = { .offset = 0x9c, .value = 0x24, },
+ [34] = { .offset = 0x9d, .value = 0x00, },
+ [35] = { .offset = 0x9e, .value = 0x25, },
+ [36] = { .offset = 0x9f, .value = 0x03, },
+ [37] = { .offset = 0xa0, .value = 0x28, },
+ [38] = { .offset = 0xa1, .value = 0x01, },
+ [39] = { .offset = 0xa2, .value = 0x28, },
+ [40] = { .offset = 0xa3, .value = 0x05, },
+ [41] = { .offset = 0xb6, .value = 0x09, },
+ [42] = { .offset = 0xb8, .value = 0x00, },
+ [43] = { .offset = 0xb9, .value = 0xa0, },
+ [44] = { .offset = 0xba, .value = 0x00, },
+ [45] = { .offset = 0xbb, .value = 0x20, },
+ [46] = { .offset = 0x10, .value = 0x00, },
+ [47] = { .offset = 0x11, .value = 0xa0, },
+ [48] = { .offset = 0x12, .value = 0x02, },
+ [49] = { .offset = 0x20, .value = 0x00, },
+ [50] = { .offset = 0x22, .value = 0x00, },
+ [51] = { .offset = 0x23, .value = 0x00, },
+ [52] = { .offset = 0x24, .value = 0x00, },
+ [53] = { .offset = 0x25, .value = 0x00, },
+ [54] = { .offset = 0x8c, .value = 0x10, },
+ [55] = { .offset = 0x8d, .value = 0x02, },
+ [56] = { .offset = 0x8e, .value = 0x10, },
+ [57] = { .offset = 0x8f, .value = 0x00, },
+ [58] = { .offset = 0x90, .value = 0xff, },
+ [59] = { .offset = 0x91, .value = 0x07, },
+ [60] = { .offset = 0x92, .value = 0xa0, },
+ [61] = { .offset = 0x93, .value = 0x02, },
+ [62] = { .offset = 0xa5, .value = 0x00, },
+ [63] = { .offset = 0xa6, .value = 0x00, },
+ [64] = { .offset = 0xa7, .value = 0x00, },
+ [65] = { .offset = 0xa8, .value = 0x00, },
+ [66] = { .offset = 0xa9, .value = 0x04, },
+ [67] = { .offset = 0xaa, .value = 0x70, },
+ [68] = { .offset = 0xab, .value = 0x4f, },
+ [69] = { .offset = 0xac, .value = 0x00, },
+ [70] = { .offset = 0xa4, .value = 0x84, },
+ [71] = { .offset = 0x7e, .value = 0x18, },
+ [72] = { .offset = 0x84, .value = 0x00, },
+ [73] = { .offset = 0x85, .value = 0x00, },
+ [74] = { .offset = 0x86, .value = 0x00, },
+ [75] = { .offset = 0x87, .value = 0x00, },
+ [76] = { .offset = 0x88, .value = 0x00, },
+ [77] = { .offset = 0x89, .value = 0x00, },
+ [78] = { .offset = 0x8a, .value = 0x00, },
+ [79] = { .offset = 0x8b, .value = 0x00, },
+ [80] = { .offset = 0x26, .value = 0x00, },
+ [81] = { .offset = 0x27, .value = 0x00, },
+ [82] = { .offset = 0xad, .value = 0x00, },
+ [83] = { .offset = 0x08, .value = 0x30, }, /* 0x31 */
+ [84] = { .offset = 0x41, .value = 0x00, },
+ [85] = { .offset = 0xc0, .value = 0x05, },
+ },
+ [MODE_800x600] = {
+ [0] = { .offset = 0x0a, .value = 0x81, },
+ [1] = { .offset = 0x18, .value = 0x07, },
+ [2] = { .offset = 0x19, .value = 0x00, },
+ [3] = { .offset = 0x1a, .value = 0x00, },
+ [4] = { .offset = 0x1b, .value = 0x19, },
+ [5] = { .offset = 0x1c, .value = 0x64, },
+ [6] = { .offset = 0x1d, .value = 0x02, },
+ [7] = { .offset = 0x1e, .value = 0x02, },
+ [8] = { .offset = 0xf3, .value = 0x90, },
+ [9] = { .offset = 0xf9, .value = 0x00, },
+ [10] = { .offset = 0xc1, .value = 0xd7, },
+ [11] = { .offset = 0xc2, .value = 0x00, },
+ [12] = { .offset = 0xc3, .value = 0xf8, },
+ [13] = { .offset = 0xc4, .value = 0x03, },
+ [14] = { .offset = 0xc5, .value = 0x1a, },
+ [15] = { .offset = 0xc6, .value = 0x00, },
+ [16] = { .offset = 0xc7, .value = 0x73, },
+ [17] = { .offset = 0xc8, .value = 0x02, },
+ [18] = { .offset = 0xf4, .value = 0x00, },
+ [19] = { .offset = 0x80, .value = 0x27, },
+ [20] = { .offset = 0x81, .value = 0x03, },
+ [21] = { .offset = 0x82, .value = 0x41, },
+ [22] = { .offset = 0x83, .value = 0x05, },
+ [23] = { .offset = 0x94, .value = 0x00, },
+ [24] = { .offset = 0x95, .value = 0x00, },
+ [25] = { .offset = 0x96, .value = 0x05, },
+ [26] = { .offset = 0x97, .value = 0x00, },
+ [27] = { .offset = 0x9a, .value = 0x88, },
+ [28] = { .offset = 0x9b, .value = 0x00, },
+ [29] = { .offset = 0x98, .value = 0x00, },
+ [30] = { .offset = 0x99, .value = 0x00, },
+ [31] = { .offset = 0xf7, .value = 0x88, },
+ [32] = { .offset = 0xf8, .value = 0x06, },
+ [33] = { .offset = 0x9c, .value = 0x23, },
+ [34] = { .offset = 0x9d, .value = 0x00, },
+ [35] = { .offset = 0x9e, .value = 0x25, },
+ [36] = { .offset = 0x9f, .value = 0x03, },
+ [37] = { .offset = 0xa0, .value = 0x28, },
+ [38] = { .offset = 0xa1, .value = 0x01, },
+ [39] = { .offset = 0xa2, .value = 0x28, },
+ [40] = { .offset = 0xa3, .value = 0x05, },
+ [41] = { .offset = 0xb6, .value = 0x09, },
+ [42] = { .offset = 0xb8, .value = 0x30, },
+ [43] = { .offset = 0xb9, .value = 0xc8, },
+ [44] = { .offset = 0xba, .value = 0x00, },
+ [45] = { .offset = 0xbb, .value = 0x20, },
+ [46] = { .offset = 0x10, .value = 0x20, },
+ [47] = { .offset = 0x11, .value = 0xc8, },
+ [48] = { .offset = 0x12, .value = 0x02, },
+ [49] = { .offset = 0x20, .value = 0x00, },
+ [50] = { .offset = 0x22, .value = 0x00, },
+ [51] = { .offset = 0x23, .value = 0x00, },
+ [52] = { .offset = 0x24, .value = 0x00, },
+ [53] = { .offset = 0x25, .value = 0x00, },
+ [54] = { .offset = 0x8c, .value = 0x10, },
+ [55] = { .offset = 0x8d, .value = 0x02, },
+ [56] = { .offset = 0x8e, .value = 0x04, },
+ [57] = { .offset = 0x8f, .value = 0x00, },
+ [58] = { .offset = 0x90, .value = 0xff, },
+ [59] = { .offset = 0x91, .value = 0x07, },
+ [60] = { .offset = 0x92, .value = 0xa0, },
+ [61] = { .offset = 0x93, .value = 0x02, },
+ [62] = { .offset = 0xa5, .value = 0x00, },
+ [63] = { .offset = 0xa6, .value = 0x00, },
+ [64] = { .offset = 0xa7, .value = 0x00, },
+ [65] = { .offset = 0xa8, .value = 0x00, },
+ [66] = { .offset = 0xa9, .value = 0x83, },
+ [67] = { .offset = 0xaa, .value = 0x40, },
+ [68] = { .offset = 0xab, .value = 0x32, },
+ [69] = { .offset = 0xac, .value = 0x00, },
+ [70] = { .offset = 0xa4, .value = 0x80, },
+ [71] = { .offset = 0x7e, .value = 0x18, },
+ [72] = { .offset = 0x84, .value = 0x00, },
+ [73] = { .offset = 0x85, .value = 0x00, },
+ [74] = { .offset = 0x86, .value = 0x00, },
+ [75] = { .offset = 0x87, .value = 0x00, },
+ [76] = { .offset = 0x88, .value = 0x00, },
+ [77] = { .offset = 0x89, .value = 0x00, },
+ [78] = { .offset = 0x8a, .value = 0x00, },
+ [79] = { .offset = 0x8b, .value = 0x00, },
+ [80] = { .offset = 0x26, .value = 0x00, },
+ [81] = { .offset = 0x27, .value = 0x00, },
+ [82] = { .offset = 0xad, .value = 0x00, },
+ [83] = { .offset = 0x08, .value = 0x30, }, /* 0x31 */
+ [84] = { .offset = 0x41, .value = 0x00, },
+ [85] = { .offset = 0xc0, .value = 0x07, },
+ },
+ [MODE_1024x768] = {
+ [0] = { .offset = 0x0a, .value = 0x81, },
+ [1] = { .offset = 0x18, .value = 0x07, },
+ [2] = { .offset = 0x19, .value = 0x00, },
+ [3] = { .offset = 0x1a, .value = 0x00, },
+ [4] = { .offset = 0x1b, .value = 0x11, },
+ [5] = { .offset = 0x1c, .value = 0x54, },
+ [6] = { .offset = 0x1d, .value = 0x03, },
+ [7] = { .offset = 0x1e, .value = 0x02, },
+ [8] = { .offset = 0xf3, .value = 0x90, },
+ [9] = { .offset = 0xf9, .value = 0x00, },
+ [10] = { .offset = 0xc1, .value = 0x90, },
+ [11] = { .offset = 0xc2, .value = 0x00, },
+ [12] = { .offset = 0xc3, .value = 0x0f, },
+ [13] = { .offset = 0xc4, .value = 0x03, },
+ [14] = { .offset = 0xc5, .value = 0x16, },
+ [15] = { .offset = 0xc6, .value = 0x00, },
+ [16] = { .offset = 0xc7, .value = 0x02, },
+ [17] = { .offset = 0xc8, .value = 0x02, },
+ [18] = { .offset = 0xf4, .value = 0x00, },
+ [19] = { .offset = 0x80, .value = 0xff, },
+ [20] = { .offset = 0x81, .value = 0x07, },
+ [21] = { .offset = 0x82, .value = 0x3d, },
+ [22] = { .offset = 0x83, .value = 0x05, },
+ [23] = { .offset = 0x94, .value = 0x00, },
+ [24] = { .offset = 0x95, .value = 0x00, },
+ [25] = { .offset = 0x96, .value = 0x05, },
+ [26] = { .offset = 0x97, .value = 0x00, },
+ [27] = { .offset = 0x9a, .value = 0x88, },
+ [28] = { .offset = 0x9b, .value = 0x00, },
+ [29] = { .offset = 0x98, .value = 0x00, },
+ [30] = { .offset = 0x99, .value = 0x00, },
+ [31] = { .offset = 0xf7, .value = 0x88, },
+ [32] = { .offset = 0xf8, .value = 0x0a, },
+ [33] = { .offset = 0x9c, .value = 0x24, },
+ [34] = { .offset = 0x9d, .value = 0x00, },
+ [35] = { .offset = 0x9e, .value = 0x25, },
+ [36] = { .offset = 0x9f, .value = 0x03, },
+ [37] = { .offset = 0xa0, .value = 0x28, },
+ [38] = { .offset = 0xa1, .value = 0x01, },
+ [39] = { .offset = 0xa2, .value = 0x28, },
+ [40] = { .offset = 0xa3, .value = 0x05, },
+ [41] = { .offset = 0xb6, .value = 0x09, },
+ [42] = { .offset = 0xb8, .value = 0x00, },
+ [43] = { .offset = 0xb9, .value = 0xa0, },
+ [44] = { .offset = 0xba, .value = 0x00, },
+ [45] = { .offset = 0xbb, .value = 0x20, },
+ [46] = { .offset = 0x10, .value = 0x00, },
+ [47] = { .offset = 0x11, .value = 0xa0, },
+ [48] = { .offset = 0x12, .value = 0x02, },
+ [49] = { .offset = 0x20, .value = 0x00, },
+ [50] = { .offset = 0x22, .value = 0x00, },
+ [51] = { .offset = 0x23, .value = 0x00, },
+ [52] = { .offset = 0x24, .value = 0x00, },
+ [53] = { .offset = 0x25, .value = 0x00, },
+ [54] = { .offset = 0x8c, .value = 0x10, },
+ [55] = { .offset = 0x8d, .value = 0x02, },
+ [56] = { .offset = 0x8e, .value = 0x10, },
+ [57] = { .offset = 0x8f, .value = 0x00, },
+ [58] = { .offset = 0x90, .value = 0xff, },
+ [59] = { .offset = 0x91, .value = 0x07, },
+ [60] = { .offset = 0x92, .value = 0xa0, },
+ [61] = { .offset = 0x93, .value = 0x02, },
+ [62] = { .offset = 0xa5, .value = 0x00, },
+ [63] = { .offset = 0xa6, .value = 0x00, },
+ [64] = { .offset = 0xa7, .value = 0x00, },
+ [65] = { .offset = 0xa8, .value = 0x00, },
+ [66] = { .offset = 0xa9, .value = 0x04, },
+ [67] = { .offset = 0xaa, .value = 0x70, },
+ [68] = { .offset = 0xab, .value = 0x4f, },
+ [69] = { .offset = 0xac, .value = 0x00, },
+ [70] = { .offset = 0xa4, .value = 0x84, },
+ [71] = { .offset = 0x7e, .value = 0x18, },
+ [72] = { .offset = 0x84, .value = 0x00, },
+ [73] = { .offset = 0x85, .value = 0x00, },
+ [74] = { .offset = 0x86, .value = 0x00, },
+ [75] = { .offset = 0x87, .value = 0x00, },
+ [76] = { .offset = 0x88, .value = 0x00, },
+ [77] = { .offset = 0x89, .value = 0x00, },
+ [78] = { .offset = 0x8a, .value = 0x00, },
+ [79] = { .offset = 0x8b, .value = 0x00, },
+ [80] = { .offset = 0x26, .value = 0x00, },
+ [81] = { .offset = 0x27, .value = 0x00, },
+ [82] = { .offset = 0xad, .value = 0x00, },
+ [83] = { .offset = 0x08, .value = 0x34, }, /* 0x35 */
+ [84] = { .offset = 0x41, .value = 0x00, },
+ [85] = { .offset = 0xc0, .value = 0x01, },
+ },
+};
+
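+/* Register writes issued at the start of every mode set (see ns2501_mode_set()). */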
+static const struct ns2501_reg regs_init[] = {
+ [0] = { .offset = 0x35, .value = 0xff, },
+ [1] = { .offset = 0x34, .value = 0x00, },
+ [2] = { .offset = 0x08, .value = 0x30, },
+};
+
struct ns2501_priv {
- //I2CDevRec d;
bool quiet;
- int reg_8_shadow;
- int reg_8_set;
- // Shadow registers for i915
- int dvoc;
- int pll_a;
- int srcdim;
- int fw_blc;
+ const struct ns2501_reg *regs;
};
#define NSPTR(d) ((NS2501Ptr)(d->DriverPrivate.ptr))
goto out;
}
ns->quiet = false;
- ns->reg_8_set = 0;
- ns->reg_8_shadow =
- NS2501_8_PD | NS2501_8_BPAS | NS2501_8_VEN | NS2501_8_HEN;
DRM_DEBUG_KMS("init ns2501 dvo controller successfully!\n");
+
return true;
out:
* of the panel in here so we could always accept it
* by disabling the scaler.
*/
- if ((mode->hdisplay == 800 && mode->vdisplay == 600) ||
- (mode->hdisplay == 640 && mode->vdisplay == 480) ||
- (mode->hdisplay == 1024 && mode->vdisplay == 768)) {
+ if ((mode->hdisplay == 640 && mode->vdisplay == 480 && mode->clock == 25175) ||
+ (mode->hdisplay == 800 && mode->vdisplay == 600 && mode->clock == 40000) ||
+ (mode->hdisplay == 1024 && mode->vdisplay == 768 && mode->clock == 65000)) {
return MODE_OK;
} else {
return MODE_ONE_SIZE; /* Is this a reasonable error? */
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
- bool ok;
- int retries = 10;
struct ns2501_priv *ns = (struct ns2501_priv *)(dvo->dev_priv);
+ int mode_idx, i;
DRM_DEBUG_KMS
("set mode (hdisplay=%d,htotal=%d,vdisplay=%d,vtotal=%d).\n",
mode->hdisplay, mode->htotal, mode->vdisplay, mode->vtotal);
- /*
- * Where do I find the native resolution for which scaling is not required???
- *
- * First trigger the DVO on as otherwise the chip does not appear on the i2c
- * bus.
- */
- do {
- ok = true;
-
- if (mode->hdisplay == 800 && mode->vdisplay == 600) {
- /* mode 277 */
- ns->reg_8_shadow &= ~NS2501_8_BPAS;
- DRM_DEBUG_KMS("switching to 800x600\n");
-
- /*
- * No, I do not know where this data comes from.
- * It is just what the video bios left in the DVO, so
- * I'm just copying it here over.
- * This also means that I cannot support any other modes
- * except the ones supported by the bios.
- */
- ok &= ns2501_writeb(dvo, 0x11, 0xc8); // 0xc7 also works.
- ok &= ns2501_writeb(dvo, 0x1b, 0x19);
- ok &= ns2501_writeb(dvo, 0x1c, 0x62); // VBIOS left 0x64 here, but 0x62 works nicer
- ok &= ns2501_writeb(dvo, 0x1d, 0x02);
-
- ok &= ns2501_writeb(dvo, 0x34, 0x03);
- ok &= ns2501_writeb(dvo, 0x35, 0xff);
+ if (mode->hdisplay == 640 && mode->vdisplay == 480)
+ mode_idx = MODE_640x480;
+ else if (mode->hdisplay == 800 && mode->vdisplay == 600)
+ mode_idx = MODE_800x600;
+ else if (mode->hdisplay == 1024 && mode->vdisplay == 768)
+ mode_idx = MODE_1024x768;
+ else
+ return;
- ok &= ns2501_writeb(dvo, 0x80, 0x27);
- ok &= ns2501_writeb(dvo, 0x81, 0x03);
- ok &= ns2501_writeb(dvo, 0x82, 0x41);
- ok &= ns2501_writeb(dvo, 0x83, 0x05);
+ /* Hopefully doing it every time won't hurt... */
+ for (i = 0; i < ARRAY_SIZE(regs_init); i++)
+ ns2501_writeb(dvo, regs_init[i].offset, regs_init[i].value);
- ok &= ns2501_writeb(dvo, 0x8d, 0x02);
- ok &= ns2501_writeb(dvo, 0x8e, 0x04);
- ok &= ns2501_writeb(dvo, 0x8f, 0x00);
+ ns->regs = regs_1024x768[mode_idx];
- ok &= ns2501_writeb(dvo, 0x90, 0xfe); /* vertical. VBIOS left 0xff here, but 0xfe works better */
- ok &= ns2501_writeb(dvo, 0x91, 0x07);
- ok &= ns2501_writeb(dvo, 0x94, 0x00);
- ok &= ns2501_writeb(dvo, 0x95, 0x00);
-
- ok &= ns2501_writeb(dvo, 0x96, 0x00);
-
- ok &= ns2501_writeb(dvo, 0x99, 0x00);
- ok &= ns2501_writeb(dvo, 0x9a, 0x88);
-
- ok &= ns2501_writeb(dvo, 0x9c, 0x23); /* Looks like first and last line of the image. */
- ok &= ns2501_writeb(dvo, 0x9d, 0x00);
- ok &= ns2501_writeb(dvo, 0x9e, 0x25);
- ok &= ns2501_writeb(dvo, 0x9f, 0x03);
-
- ok &= ns2501_writeb(dvo, 0xa4, 0x80);
-
- ok &= ns2501_writeb(dvo, 0xb6, 0x00);
-
- ok &= ns2501_writeb(dvo, 0xb9, 0xc8); /* horizontal? */
- ok &= ns2501_writeb(dvo, 0xba, 0x00); /* horizontal? */
-
- ok &= ns2501_writeb(dvo, 0xc0, 0x05); /* horizontal? */
- ok &= ns2501_writeb(dvo, 0xc1, 0xd7);
-
- ok &= ns2501_writeb(dvo, 0xc2, 0x00);
- ok &= ns2501_writeb(dvo, 0xc3, 0xf8);
-
- ok &= ns2501_writeb(dvo, 0xc4, 0x03);
- ok &= ns2501_writeb(dvo, 0xc5, 0x1a);
-
- ok &= ns2501_writeb(dvo, 0xc6, 0x00);
- ok &= ns2501_writeb(dvo, 0xc7, 0x73);
- ok &= ns2501_writeb(dvo, 0xc8, 0x02);
-
- } else if (mode->hdisplay == 640 && mode->vdisplay == 480) {
- /* mode 274 */
- DRM_DEBUG_KMS("switching to 640x480\n");
- /*
- * No, I do not know where this data comes from.
- * It is just what the video bios left in the DVO, so
- * I'm just copying it here over.
- * This also means that I cannot support any other modes
- * except the ones supported by the bios.
- */
- ns->reg_8_shadow &= ~NS2501_8_BPAS;
-
- ok &= ns2501_writeb(dvo, 0x11, 0xa0);
- ok &= ns2501_writeb(dvo, 0x1b, 0x11);
- ok &= ns2501_writeb(dvo, 0x1c, 0x54);
- ok &= ns2501_writeb(dvo, 0x1d, 0x03);
-
- ok &= ns2501_writeb(dvo, 0x34, 0x03);
- ok &= ns2501_writeb(dvo, 0x35, 0xff);
-
- ok &= ns2501_writeb(dvo, 0x80, 0xff);
- ok &= ns2501_writeb(dvo, 0x81, 0x07);
- ok &= ns2501_writeb(dvo, 0x82, 0x3d);
- ok &= ns2501_writeb(dvo, 0x83, 0x05);
-
- ok &= ns2501_writeb(dvo, 0x8d, 0x02);
- ok &= ns2501_writeb(dvo, 0x8e, 0x10);
- ok &= ns2501_writeb(dvo, 0x8f, 0x00);
-
- ok &= ns2501_writeb(dvo, 0x90, 0xff); /* vertical */
- ok &= ns2501_writeb(dvo, 0x91, 0x07);
- ok &= ns2501_writeb(dvo, 0x94, 0x00);
- ok &= ns2501_writeb(dvo, 0x95, 0x00);
-
- ok &= ns2501_writeb(dvo, 0x96, 0x05);
-
- ok &= ns2501_writeb(dvo, 0x99, 0x00);
- ok &= ns2501_writeb(dvo, 0x9a, 0x88);
-
- ok &= ns2501_writeb(dvo, 0x9c, 0x24);
- ok &= ns2501_writeb(dvo, 0x9d, 0x00);
- ok &= ns2501_writeb(dvo, 0x9e, 0x25);
- ok &= ns2501_writeb(dvo, 0x9f, 0x03);
-
- ok &= ns2501_writeb(dvo, 0xa4, 0x84);
-
- ok &= ns2501_writeb(dvo, 0xb6, 0x09);
-
- ok &= ns2501_writeb(dvo, 0xb9, 0xa0); /* horizontal? */
- ok &= ns2501_writeb(dvo, 0xba, 0x00); /* horizontal? */
-
- ok &= ns2501_writeb(dvo, 0xc0, 0x05); /* horizontal? */
- ok &= ns2501_writeb(dvo, 0xc1, 0x90);
-
- ok &= ns2501_writeb(dvo, 0xc2, 0x00);
- ok &= ns2501_writeb(dvo, 0xc3, 0x0f);
-
- ok &= ns2501_writeb(dvo, 0xc4, 0x03);
- ok &= ns2501_writeb(dvo, 0xc5, 0x16);
-
- ok &= ns2501_writeb(dvo, 0xc6, 0x00);
- ok &= ns2501_writeb(dvo, 0xc7, 0x02);
- ok &= ns2501_writeb(dvo, 0xc8, 0x02);
-
- } else if (mode->hdisplay == 1024 && mode->vdisplay == 768) {
- /* mode 280 */
- DRM_DEBUG_KMS("switching to 1024x768\n");
- /*
- * This might or might not work, actually. I'm silently
- * assuming here that the native panel resolution is
- * 1024x768. If not, then this leaves the scaler disabled
- * generating a picture that is likely not the expected.
- *
- * Problem is that I do not know where to take the panel
- * dimensions from.
- *
- * Enable the bypass, scaling not required.
- *
- * The scaler registers are irrelevant here....
- *
- */
- ns->reg_8_shadow |= NS2501_8_BPAS;
- ok &= ns2501_writeb(dvo, 0x37, 0x44);
- } else {
- /*
- * Data not known. Bummer!
- * Hopefully, the code should not go here
- * as mode_OK delivered no other modes.
- */
- ns->reg_8_shadow |= NS2501_8_BPAS;
- }
- ok &= ns2501_writeb(dvo, NS2501_REG8, ns->reg_8_shadow);
- } while (!ok && retries--);
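+	/*
+	 * Program the per-mode register table. The last two entries
+	 * ([84] = 0x41, [85] = 0xc0) are sequenced by ns2501_dpms()
+	 * instead, which also re-programs register 0x08.
+	 */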
+ for (i = 0; i < 84; i++)
+ ns2501_writeb(dvo, ns->regs[i].offset, ns->regs[i].value);
}
/* set the NS2501 power state */
if (!ns2501_readb(dvo, NS2501_REG8, &ch))
return false;
- if (ch & NS2501_8_PD)
- return true;
- else
- return false;
+ return ch & NS2501_8_PD;
}
/* set the NS2501 power state */
static void ns2501_dpms(struct intel_dvo_device *dvo, bool enable)
{
- bool ok;
- int retries = 10;
struct ns2501_priv *ns = (struct ns2501_priv *)(dvo->dev_priv);
- unsigned char ch;
DRM_DEBUG_KMS("Trying set the dpms of the DVO to %i\n", enable);
- ch = ns->reg_8_shadow;
+ if (enable) {
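+		/*
+		 * The power sequence below indexes fixed slots of the
+		 * mode table; sanity check its layout first.
+		 */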
+ if (WARN_ON(ns->regs[83].offset != 0x08 ||
+ ns->regs[84].offset != 0x41 ||
+ ns->regs[85].offset != 0xc0))
+ return;
- if (enable)
- ch |= NS2501_8_PD;
- else
- ch &= ~NS2501_8_PD;
-
- if (ns->reg_8_set == 0 || ns->reg_8_shadow != ch) {
- ns->reg_8_set = 1;
- ns->reg_8_shadow = ch;
-
- do {
- ok = true;
- ok &= ns2501_writeb(dvo, NS2501_REG8, ch);
- ok &=
- ns2501_writeb(dvo, 0x34,
- enable ? 0x03 : 0x00);
- ok &=
- ns2501_writeb(dvo, 0x35,
- enable ? 0xff : 0x00);
- } while (!ok && retries--);
- }
-}
+ ns2501_writeb(dvo, 0xc0, ns->regs[85].value | 0x08);
-static void ns2501_dump_regs(struct intel_dvo_device *dvo)
-{
- uint8_t val;
-
- ns2501_readb(dvo, NS2501_FREQ_LO, &val);
- DRM_DEBUG_KMS("NS2501_FREQ_LO: 0x%02x\n", val);
- ns2501_readb(dvo, NS2501_FREQ_HI, &val);
- DRM_DEBUG_KMS("NS2501_FREQ_HI: 0x%02x\n", val);
- ns2501_readb(dvo, NS2501_REG8, &val);
- DRM_DEBUG_KMS("NS2501_REG8: 0x%02x\n", val);
- ns2501_readb(dvo, NS2501_REG9, &val);
- DRM_DEBUG_KMS("NS2501_REG9: 0x%02x\n", val);
- ns2501_readb(dvo, NS2501_REGC, &val);
- DRM_DEBUG_KMS("NS2501_REGC: 0x%02x\n", val);
+ ns2501_writeb(dvo, 0x41, ns->regs[84].value);
+
+ ns2501_writeb(dvo, 0x34, 0x01);
+ msleep(15);
+
+ ns2501_writeb(dvo, 0x08, 0x35);
+ if (!(ns->regs[83].value & NS2501_8_BPAS))
+ ns2501_writeb(dvo, 0x08, 0x31);
+ msleep(200);
+
+ ns2501_writeb(dvo, 0x34, 0x03);
+
+ ns2501_writeb(dvo, 0xc0, ns->regs[85].value);
+ } else {
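+		/* Power down: roughly the power-up sequence in reverse. */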
+ ns2501_writeb(dvo, 0x34, 0x01);
+ msleep(200);
+
+ ns2501_writeb(dvo, 0x08, 0x34);
+ msleep(15);
+
+ ns2501_writeb(dvo, 0x34, 0x00);
+ }
}
static void ns2501_destroy(struct intel_dvo_device *dvo)
.mode_set = ns2501_mode_set,
.dpms = ns2501_dpms,
.get_hw_state = ns2501_get_hw_state,
- .dump_regs = ns2501_dump_regs,
.destroy = ns2501_destroy,
};
*/
bool i915_needs_cmd_parser(struct intel_engine_cs *ring)
{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
-
if (!ring->needs_cmd_parser)
return false;
* disabled. That will cause all of the parser's PPGTT checks to
* fail. For now, disable parsing when PPGTT is off.
*/
- if (!dev_priv->mm.aliasing_ppgtt)
+ if (USES_PPGTT(ring->dev))
return false;
return (i915.enable_cmd_parser == 1);
obj->last_read_seqno,
obj->last_write_seqno,
obj->last_fenced_seqno,
- i915_cache_level_str(obj->cache_level),
+ i915_cache_level_str(to_i915(obj->base.dev), obj->cache_level),
obj->dirty ? " dirty" : "",
obj->madv == I915_MADV_DONTNEED ? " purgeable" : "");
if (obj->base.name)
}
ppgtt = container_of(vma->vm, struct i915_hw_ppgtt, base);
- if (ppgtt->ctx && ppgtt->ctx->file_priv != stats->file_priv)
+ if (ppgtt->file_priv != stats->file_priv)
continue;
if (obj->ring) /* XXX per-vma statistic */
{
struct drm_info_node *node = m->private;
struct drm_device *dev = node->minor->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
struct intel_crtc *crtc;
int ret;
seq_printf(m, "No flip due on pipe %c (plane %c)\n",
pipe, plane);
} else {
+ u32 addr;
+
if (atomic_read(&work->pending) < INTEL_FLIP_COMPLETE) {
seq_printf(m, "Flip queued on pipe %c (plane %c)\n",
pipe, plane);
seq_printf(m, "Flip pending (waiting for vsync) on pipe %c (plane %c)\n",
pipe, plane);
}
+ if (work->flip_queued_ring) {
+ seq_printf(m, "Flip queued on %s at seqno %u, next seqno %u [current breadcrumb %u], completed? %d\n",
+ work->flip_queued_ring->name,
+ work->flip_queued_seqno,
+ dev_priv->next_seqno,
+ work->flip_queued_ring->get_seqno(work->flip_queued_ring, true),
+ i915_seqno_passed(work->flip_queued_ring->get_seqno(work->flip_queued_ring, true),
+ work->flip_queued_seqno));
+		} else {
+			seq_puts(m, "Flip not associated with any ring\n");
+		}
+		seq_printf(m, "Flip queued on frame %d (was ready on frame %d), now %d\n",
+ work->flip_queued_vblank,
+ work->flip_ready_vblank,
+ drm_vblank_count(dev, crtc->pipe));
if (work->enable_stall_check)
seq_puts(m, "Stall check enabled, ");
else
seq_puts(m, "Stall check waiting for page flip ioctl, ");
seq_printf(m, "%d prepares\n", atomic_read(&work->pending));
- if (work->old_fb_obj) {
- struct drm_i915_gem_object *obj = work->old_fb_obj;
- if (obj)
- seq_printf(m, "Old framebuffer gtt_offset 0x%08lx\n",
- i915_gem_obj_ggtt_offset(obj));
- }
+ if (INTEL_INFO(dev)->gen >= 4)
+ addr = I915_HI_DISPBASE(I915_READ(DSPSURF(crtc->plane)));
+ else
+ addr = I915_READ(DSPADDR(crtc->plane));
+ seq_printf(m, "Current scanout address 0x%08x\n", addr);
+
if (work->pending_flip_obj) {
- struct drm_i915_gem_object *obj = work->pending_flip_obj;
- if (obj)
- seq_printf(m, "New framebuffer gtt_offset 0x%08lx\n",
- i915_gem_obj_ggtt_offset(obj));
+ seq_printf(m, "New framebuffer address 0x%08lx\n", (long)work->gtt_offset);
+ seq_printf(m, "MMIO update completed? %d\n", addr == work->gtt_offset);
}
}
spin_unlock_irqrestore(&dev->event_lock, flags);
intel_runtime_pm_get(dev_priv);
if (IS_CHERRYVIEW(dev)) {
- int i;
seq_printf(m, "Master Interrupt Control:\t%08x\n",
I915_READ(GEN8_MASTER_IRQ));
I915_READ(VLV_IIR_RW));
seq_printf(m, "Display IMR:\t%08x\n",
I915_READ(VLV_IMR));
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
seq_printf(m, "Pipe %c stat:\t%08x\n",
pipe_name(pipe),
I915_READ(PIPESTAT(pipe)));
i, I915_READ(GEN8_GT_IER(i)));
}
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
+ if (!intel_display_power_enabled(dev_priv,
+ POWER_DOMAIN_PIPE(pipe))) {
+ seq_printf(m, "Pipe %c power disabled\n",
+ pipe_name(pipe));
+ continue;
+ }
seq_printf(m, "Pipe %c IMR:\t%08x\n",
pipe_name(pipe),
I915_READ(GEN8_DE_PIPE_IMR(pipe)));
I915_READ(VLV_IIR_RW));
seq_printf(m, "Display IMR:\t%08x\n",
I915_READ(VLV_IMR));
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
seq_printf(m, "Pipe %c stat:\t%08x\n",
pipe_name(pipe),
I915_READ(PIPESTAT(pipe)));
I915_READ(IIR));
seq_printf(m, "Interrupt mask: %08x\n",
I915_READ(IMR));
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
seq_printf(m, "Pipe %c stat: %08x\n",
pipe_name(pipe),
I915_READ(PIPESTAT(pipe)));
ssize_t ret_count = 0;
int ret;
- ret = i915_error_state_buf_init(&error_str, count, *pos);
+ ret = i915_error_state_buf_init(&error_str, to_i915(error_priv->dev), count, *pos);
if (ret)
return ret;
u32 rpstat, cagf, reqf;
u32 rpupei, rpcurup, rpprevup;
u32 rpdownei, rpcurdown, rpprevdown;
+ u32 pm_ier, pm_imr, pm_isr, pm_iir, pm_mask;
int max_freq;
/* RPSTAT1 is in the GT power well */
gen6_gt_force_wake_put(dev_priv, FORCEWAKE_ALL);
mutex_unlock(&dev->struct_mutex);
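+	/*
+	 * Gen6/7 expose the PM interrupt registers as GEN6_PM*; newer
+	 * generations keep them in GT interrupt block 2.
+	 */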
+ if (IS_GEN6(dev) || IS_GEN7(dev)) {
+ pm_ier = I915_READ(GEN6_PMIER);
+ pm_imr = I915_READ(GEN6_PMIMR);
+ pm_isr = I915_READ(GEN6_PMISR);
+ pm_iir = I915_READ(GEN6_PMIIR);
+ pm_mask = I915_READ(GEN6_PMINTRMSK);
+ } else {
+ pm_ier = I915_READ(GEN8_GT_IER(2));
+ pm_imr = I915_READ(GEN8_GT_IMR(2));
+ pm_isr = I915_READ(GEN8_GT_ISR(2));
+ pm_iir = I915_READ(GEN8_GT_IIR(2));
+ pm_mask = I915_READ(GEN6_PMINTRMSK);
+ }
seq_printf(m, "PM IER=0x%08x IMR=0x%08x ISR=0x%08x IIR=0x%08x, MASK=0x%08x\n",
- I915_READ(GEN6_PMIER),
- I915_READ(GEN6_PMIMR),
- I915_READ(GEN6_PMISR),
- I915_READ(GEN6_PMIIR),
- I915_READ(GEN6_PMINTRMSK));
+ pm_ier, pm_imr, pm_isr, pm_iir, pm_mask);
seq_printf(m, "GT_PERF_STATUS: 0x%08x\n", gt_perf_status);
seq_printf(m, "Render p-state ratio: %d\n",
(gt_perf_status & 0xff00) >> 8);
if (IS_VALLEYVIEW(dev))
return vlv_drpc_info(m);
- else if (IS_GEN6(dev) || IS_GEN7(dev))
+ else if (INTEL_INFO(dev)->gen >= 6)
return gen6_drpc_info(m);
else
return ironlake_drpc_info(m);
return 0;
}
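+/* debugfs accessors for the FBC false-color bit in ILK_DPFC_CONTROL (gen7+). */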
+static int i915_fbc_fc_get(void *data, u64 *val)
+{
+ struct drm_device *dev = data;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ if (INTEL_INFO(dev)->gen < 7 || !HAS_FBC(dev))
+ return -ENODEV;
+
+ drm_modeset_lock_all(dev);
+ *val = dev_priv->fbc.false_color;
+ drm_modeset_unlock_all(dev);
+
+ return 0;
+}
+
+static int i915_fbc_fc_set(void *data, u64 val)
+{
+ struct drm_device *dev = data;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ u32 reg;
+
+ if (INTEL_INFO(dev)->gen < 7 || !HAS_FBC(dev))
+ return -ENODEV;
+
+ drm_modeset_lock_all(dev);
+
+ reg = I915_READ(ILK_DPFC_CONTROL);
+ dev_priv->fbc.false_color = val;
+
+ I915_WRITE(ILK_DPFC_CONTROL, val ?
+ (reg | FBC_CTL_FALSE_COLOR) :
+ (reg & ~FBC_CTL_FALSE_COLOR));
+
+ drm_modeset_unlock_all(dev);
+ return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(i915_fbc_fc_fops,
+ i915_fbc_fc_get, i915_fbc_fc_set,
+ "%llu\n");
+
static int i915_ips_status(struct seq_file *m, void *unused)
{
struct drm_info_node *node = m->private;
return 0;
}
+static void describe_ctx_ringbuf(struct seq_file *m,
+ struct intel_ringbuffer *ringbuf)
+{
+ seq_printf(m, " (ringbuffer, space: %d, head: %u, tail: %u, last head: %d)",
+ ringbuf->space, ringbuf->head, ringbuf->tail,
+ ringbuf->last_retired_head);
+}
+
static int i915_context_status(struct seq_file *m, void *unused)
{
struct drm_info_node *node = m->private;
}
list_for_each_entry(ctx, &dev_priv->context_list, link) {
- if (ctx->legacy_hw_ctx.rcs_state == NULL)
+ if (!i915.enable_execlists &&
+ ctx->legacy_hw_ctx.rcs_state == NULL)
continue;
seq_puts(m, "HW context ");
describe_ctx(m, ctx);
- for_each_ring(ring, dev_priv, i)
+ for_each_ring(ring, dev_priv, i) {
+ if (ring->default_context == ctx)
+ seq_printf(m, "(default context %s) ",
+ ring->name);
+ }
+
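+		/* With execlists each ring carries its own context object and ringbuffer. */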
+ if (i915.enable_execlists) {
+ seq_putc(m, '\n');
+ for_each_ring(ring, dev_priv, i) {
+ struct drm_i915_gem_object *ctx_obj =
+ ctx->engine[i].state;
+ struct intel_ringbuffer *ringbuf =
+ ctx->engine[i].ringbuf;
+
+ seq_printf(m, "%s: ", ring->name);
+ if (ctx_obj)
+ describe_obj(m, ctx_obj);
+ if (ringbuf)
+ describe_ctx_ringbuf(m, ringbuf);
+ seq_putc(m, '\n');
+ }
+ } else {
+ describe_obj(m, ctx->legacy_hw_ctx.rcs_state);
+ }
+
+ seq_putc(m, '\n');
+ }
+
+ mutex_unlock(&dev->struct_mutex);
+
+ return 0;
+}
+
+static int i915_dump_lrc(struct seq_file *m, void *unused)
+{
+	struct drm_info_node *node = m->private;
+ struct drm_device *dev = node->minor->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_engine_cs *ring;
+ struct intel_context *ctx;
+ int ret, i;
+
+ if (!i915.enable_execlists) {
+		seq_puts(m, "Logical Ring Contexts are disabled\n");
+ return 0;
+ }
+
+ ret = mutex_lock_interruptible(&dev->struct_mutex);
+ if (ret)
+ return ret;
+
+ list_for_each_entry(ctx, &dev_priv->context_list, link) {
+ for_each_ring(ring, dev_priv, i) {
+ struct drm_i915_gem_object *ctx_obj = ctx->engine[i].state;
+
if (ring->default_context == ctx)
- seq_printf(m, "(default context %s) ", ring->name);
+ continue;
+
+ if (ctx_obj) {
+ struct page *page = i915_gem_object_get_page(ctx_obj, 1);
+ uint32_t *reg_state = kmap_atomic(page);
+ int j;
+
+ seq_printf(m, "CONTEXT: %s %u\n", ring->name,
+ intel_execlists_ctx_id(ctx_obj));
+
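+				/* The register state lives in page 1 of the context object. */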
+ for (j = 0; j < 0x600 / sizeof(u32) / 4; j += 4) {
+ seq_printf(m, "\t[0x%08lx] 0x%08x 0x%08x 0x%08x 0x%08x\n",
+ i915_gem_obj_ggtt_offset(ctx_obj) + 4096 + (j * 4),
+ reg_state[j], reg_state[j + 1],
+ reg_state[j + 2], reg_state[j + 3]);
+ }
+ kunmap_atomic(reg_state);
+
+ seq_putc(m, '\n');
+ }
+ }
+ }
+
+ mutex_unlock(&dev->struct_mutex);
+
+ return 0;
+}
+
+static int i915_execlists(struct seq_file *m, void *data)
+{
+	struct drm_info_node *node = m->private;
+ struct drm_device *dev = node->minor->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_engine_cs *ring;
+ u32 status_pointer;
+ u8 read_pointer;
+ u8 write_pointer;
+ u32 status;
+ u32 ctx_id;
+ struct list_head *cursor;
+ int ring_id, i;
+ int ret;
+
+ if (!i915.enable_execlists) {
+ seq_puts(m, "Logical Ring Contexts are disabled\n");
+ return 0;
+ }
+
+ ret = mutex_lock_interruptible(&dev->struct_mutex);
+ if (ret)
+ return ret;
+
+ for_each_ring(ring, dev_priv, ring_id) {
+ struct intel_ctx_submit_request *head_req = NULL;
+ int count = 0;
+ unsigned long flags;
+
+ seq_printf(m, "%s\n", ring->name);
+
+ status = I915_READ(RING_EXECLIST_STATUS(ring));
+ ctx_id = I915_READ(RING_EXECLIST_STATUS(ring) + 4);
+ seq_printf(m, "\tExeclist status: 0x%08X, context: %u\n",
+ status, ctx_id);
+
+ status_pointer = I915_READ(RING_CONTEXT_STATUS_PTR(ring));
+ seq_printf(m, "\tStatus pointer: 0x%08X\n", status_pointer);
+
+ read_pointer = ring->next_context_status_buffer;
+ write_pointer = status_pointer & 0x07;
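+		/*
+		 * The context status buffer holds six entries; unwrap the
+		 * write pointer once it laps the read pointer.
+		 */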
+ if (read_pointer > write_pointer)
+ write_pointer += 6;
+		seq_printf(m, "\tRead pointer: 0x%08X, write pointer: 0x%08X\n",
+ read_pointer, write_pointer);
+
+ for (i = 0; i < 6; i++) {
+ status = I915_READ(RING_CONTEXT_STATUS_BUF(ring) + 8*i);
+ ctx_id = I915_READ(RING_CONTEXT_STATUS_BUF(ring) + 8*i + 4);
+
+ seq_printf(m, "\tStatus buffer %d: 0x%08X, context: %u\n",
+ i, status, ctx_id);
+ }
+
+ spin_lock_irqsave(&ring->execlist_lock, flags);
+ list_for_each(cursor, &ring->execlist_queue)
+ count++;
+ head_req = list_first_entry_or_null(&ring->execlist_queue,
+ struct intel_ctx_submit_request, execlist_link);
+ spin_unlock_irqrestore(&ring->execlist_lock, flags);
+
+ seq_printf(m, "\t%d requests in queue\n", count);
+ if (head_req) {
+ struct drm_i915_gem_object *ctx_obj;
+
+ ctx_obj = head_req->ctx->engine[ring_id].state;
+ seq_printf(m, "\tHead request id: %u\n",
+ intel_execlists_ctx_id(ctx_obj));
+ seq_printf(m, "\tHead request tail: %u\n",
+ head_req->tail);
+ }
- describe_obj(m, ctx->legacy_hw_ctx.rcs_state);
seq_putc(m, '\n');
}
{
struct intel_context *ctx = ptr;
struct seq_file *m = data;
- struct i915_hw_ppgtt *ppgtt = ctx_to_ppgtt(ctx);
+ struct i915_hw_ppgtt *ppgtt = ctx->ppgtt;
+
+ if (!ppgtt) {
+ seq_printf(m, " no ppgtt for context %d\n",
+ ctx->user_handle);
+ return 0;
+ }
if (i915_gem_context_is_default(ctx))
seq_puts(m, " default context:\n");
seq_printf(m, "pd gtt offset: 0x%08x\n", ppgtt->pd_offset);
ppgtt->debug_dump(ppgtt, m);
- } else
- return;
+ }
list_for_each_entry_reverse(file, &dev->filelist, lhead) {
struct drm_i915_file_private *file_priv = file->driver_priv;
return 0;
}
+static int i915_wa_registers(struct seq_file *m, void *unused)
+{
+ int i;
+ int ret;
+	struct drm_info_node *node = m->private;
+ struct drm_device *dev = node->minor->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ ret = mutex_lock_interruptible(&dev->struct_mutex);
+ if (ret)
+ return ret;
+
+ intel_runtime_pm_get(dev_priv);
+
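+	/* Dump each workaround register along with its WA bit mask. */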
+ seq_printf(m, "Workarounds applied: %d\n", dev_priv->num_wa_regs);
+ for (i = 0; i < dev_priv->num_wa_regs; ++i) {
+ u32 addr, mask;
+
+ addr = dev_priv->intel_wa_regs[i].addr;
+ mask = dev_priv->intel_wa_regs[i].mask;
+ dev_priv->intel_wa_regs[i].value = I915_READ(addr) | mask;
+ if (dev_priv->intel_wa_regs[i].addr)
+ seq_printf(m, "0x%X: 0x%08X, mask: 0x%08X\n",
+ dev_priv->intel_wa_regs[i].addr,
+ dev_priv->intel_wa_regs[i].value,
+ dev_priv->intel_wa_regs[i].mask);
+ }
+
+ intel_runtime_pm_put(dev_priv);
+ mutex_unlock(&dev->struct_mutex);
+
+ return 0;
+}
+
struct pipe_crc_info {
const char *name;
struct drm_device *dev;
*source = INTEL_PIPE_CRC_SOURCE_PIPE;
drm_modeset_lock_all(dev);
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
if (!encoder->base.crtc)
continue;
{
struct drm_device *dev = data;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct drm_i915_gem_object *obj, *next;
- struct i915_address_space *vm;
- struct i915_vma *vma, *x;
int ret;
DRM_DEBUG("Dropping caches: 0x%08llx\n", val);
if (val & (DROP_RETIRE | DROP_ACTIVE))
i915_gem_retire_requests(dev);
- if (val & DROP_BOUND) {
- list_for_each_entry(vm, &dev_priv->vm_list, global_link) {
- list_for_each_entry_safe(vma, x, &vm->inactive_list,
- mm_list) {
- if (vma->pin_count)
- continue;
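+	/* Unpinned bound and unbound objects are reclaimed via the shrinker. */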
+ if (val & DROP_BOUND)
+ i915_gem_shrink(dev_priv, LONG_MAX, I915_SHRINK_BOUND);
- ret = i915_vma_unbind(vma);
- if (ret)
- goto unlock;
- }
- }
- }
-
- if (val & DROP_UNBOUND) {
- list_for_each_entry_safe(obj, next, &dev_priv->mm.unbound_list,
- global_list)
- if (obj->pages_pin_count == 0) {
- ret = i915_gem_object_put_pages(obj);
- if (ret)
- goto unlock;
- }
- }
+ if (val & DROP_UNBOUND)
+ i915_gem_shrink(dev_priv, LONG_MAX, I915_SHRINK_UNBOUND);
unlock:
mutex_unlock(&dev->struct_mutex);
{"i915_opregion", i915_opregion, 0},
{"i915_gem_framebuffer", i915_gem_framebuffer_info, 0},
{"i915_context_status", i915_context_status, 0},
+ {"i915_dump_lrc", i915_dump_lrc, 0},
+ {"i915_execlists", i915_execlists, 0},
{"i915_gen6_forcewake_count", i915_gen6_forcewake_count_info, 0},
{"i915_swizzle_info", i915_swizzle_info, 0},
{"i915_ppgtt_info", i915_ppgtt_info, 0},
{"i915_semaphore_status", i915_semaphore_status, 0},
{"i915_shared_dplls_info", i915_shared_dplls_info, 0},
{"i915_dp_mst_info", i915_dp_mst_info, 0},
+ {"i915_wa_registers", i915_wa_registers, 0},
};
#define I915_DEBUGFS_ENTRIES ARRAY_SIZE(i915_debugfs_list)
{"i915_pri_wm_latency", &i915_pri_wm_latency_fops},
{"i915_spr_wm_latency", &i915_spr_wm_latency_fops},
{"i915_cur_wm_latency", &i915_cur_wm_latency_fops},
+ {"i915_fbc_false_color", &i915_fbc_fc_fops},
};
void intel_display_crc_init(struct drm_device *dev)
struct drm_i915_private *dev_priv = dev->dev_private;
enum pipe pipe;
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
struct intel_pipe_crc *pipe_crc = &dev_priv->pipe_crc[pipe];
pipe_crc->opened = false;
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/async.h>
#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
+#include <drm/drm_legacy.h>
#include "intel_drv.h"
#include <drm/i915_drm.h>
#include "i915_drv.h"
struct drm_i915_master_private *master_priv = dev->primary->master->driver_priv;
int ret;
- master_priv->sarea = drm_getsarea(dev);
+ master_priv->sarea = drm_legacy_getsarea(dev);
if (master_priv->sarea) {
master_priv->sarea_priv = (drm_i915_sarea_t *)
((u8 *)master_priv->sarea->handle + init->sarea_priv_offset);
value = HAS_WT(dev);
break;
case I915_PARAM_HAS_ALIASING_PPGTT:
- value = dev_priv->mm.aliasing_ppgtt || USES_FULL_PPGTT(dev);
+ value = USES_PPGTT(dev);
break;
case I915_PARAM_HAS_WAIT_TIMEOUT:
value = 1;
intel_power_domains_init_hw(dev_priv);
+ /*
+ * We enable some interrupt sources in our postinstall hooks, so mark
+ * interrupts as enabled _before_ actually enabling them to avoid
+ * special cases in our ordering checks.
+ */
+ dev_priv->pm._irqs_disabled = false;
+
ret = drm_irq_install(dev, dev->pdev->irq);
if (ret)
goto cleanup_gem_stolen;
- dev_priv->pm._irqs_disabled = false;
-
/* Important: The output setup functions called by modeset_init need
* working irqs for e.g. gmbus and dp aux transfers. */
intel_modeset_init(dev);
if (ret)
goto cleanup_irq;
- INIT_WORK(&dev_priv->console_resume_work, intel_console_resume);
-
intel_modeset_gem_init(dev);
/* Always safe in the mode setting case. */
* scanning against hotplug events. Hence do this first and ignore the
	 * tiny window where we will lose hotplug notifications.
*/
- intel_fbdev_initial_config(dev);
+ async_schedule(intel_fbdev_initial_config, dev_priv);
drm_kms_helper_poll_init(dev);
i915_gem_cleanup_ringbuffer(dev);
i915_gem_context_fini(dev);
mutex_unlock(&dev->struct_mutex);
- WARN_ON(dev_priv->mm.aliasing_ppgtt);
cleanup_irq:
drm_irq_uninstall(dev);
cleanup_gem_stolen:
info = (struct intel_device_info *)&dev_priv->info;
if (IS_VALLEYVIEW(dev))
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
info->num_sprites[pipe] = 2;
else
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
info->num_sprites[pipe] = 1;
if (i915.disable_display) {
dev->dev_private = dev_priv;
dev_priv->dev = dev;
- /* copy initial configuration to dev_priv->info */
+ /* Setup the write-once "constant" device info */
device_info = (struct intel_device_info *)&dev_priv->info;
- *device_info = *info;
+ memcpy(device_info, info, sizeof(dev_priv->info));
+ device_info->device_id = dev->pdev->device;
spin_lock_init(&dev_priv->irq_lock);
spin_lock_init(&dev_priv->gpu_error.lock);
arch_phys_wc_del(dev_priv->gtt.mtrr);
io_mapping_free(dev_priv->gtt.mappable);
out_gtt:
- dev_priv->gtt.base.cleanup(&dev_priv->gtt.base);
+ i915_global_gtt_cleanup(dev);
out_regs:
intel_uncore_fini(dev);
pci_iounmap(dev->pdev, dev_priv->regs);
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
intel_fbdev_fini(dev);
intel_modeset_cleanup(dev);
- cancel_work_sync(&dev_priv->console_resume_work);
/*
* free the memory space allocated for the child device
mutex_lock(&dev->struct_mutex);
i915_gem_cleanup_ringbuffer(dev);
i915_gem_context_fini(dev);
- WARN_ON(dev_priv->mm.aliasing_ppgtt);
mutex_unlock(&dev->struct_mutex);
i915_gem_cleanup_stolen(dev);
i915_free_hws(dev);
}
- WARN_ON(!list_empty(&dev_priv->vm_list));
-
drm_vblank_cleanup(dev);
intel_teardown_gmbus(dev);
destroy_workqueue(dev_priv->wq);
pm_qos_remove_request(&dev_priv->pm_qos);
- dev_priv->gtt.base.cleanup(&dev_priv->gtt.base);
+ i915_global_gtt_cleanup(dev);
intel_uncore_fini(dev);
if (dev_priv->regs != NULL)
i915_gem_context_close(dev, file);
i915_gem_release(dev, file);
mutex_unlock(&dev->struct_mutex);
+
+ if (drm_core_check_feature(dev, DRIVER_MODESET))
+ intel_modeset_preclose(dev, file);
}
void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
if (i915.semaphores >= 0)
return i915.semaphores;
+ /* TODO: make semaphores and Execlists play nicely together */
+ if (i915.enable_execlists)
+ return false;
+
/* Until we get further testing... */
if (IS_GEN8(dev))
return false;
return true;
}
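+/* Reset stored hotplug state and cancel any outstanding HPD work. */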
+void intel_hpd_cancel_work(struct drm_i915_private *dev_priv)
+{
+ spin_lock_irq(&dev_priv->irq_lock);
+
+ dev_priv->long_hpd_port_mask = 0;
+ dev_priv->short_hpd_port_mask = 0;
+ dev_priv->hpd_event_bits = 0;
+
+ spin_unlock_irq(&dev_priv->irq_lock);
+
+ cancel_work_sync(&dev_priv->dig_port_work);
+ cancel_work_sync(&dev_priv->hotplug_work);
+ cancel_delayed_work_sync(&dev_priv->hotplug_reenable_work);
+}
+
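+/* Invoke the optional ->suspend hook on every encoder, under the modeset locks. */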
+static void intel_suspend_encoders(struct drm_i915_private *dev_priv)
+{
+ struct drm_device *dev = dev_priv->dev;
+ struct drm_encoder *encoder;
+
+ drm_modeset_lock_all(dev);
+ list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+ struct intel_encoder *intel_encoder = to_intel_encoder(encoder);
+
+ if (intel_encoder->suspend)
+ intel_encoder->suspend(intel_encoder);
+ }
+ drm_modeset_unlock_all(dev);
+}
+
+static int intel_suspend_complete(struct drm_i915_private *dev_priv);
+static int intel_resume_prepare(struct drm_i915_private *dev_priv,
+ bool rpm_resume);
+
static int i915_drm_freeze(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
flush_delayed_work(&dev_priv->rps.delayed_resume_work);
intel_runtime_pm_disable_interrupts(dev);
+ intel_hpd_cancel_work(dev_priv);
+
+ intel_suspend_encoders(dev_priv);
intel_suspend_gt_powersave(dev);
intel_uncore_forcewake_reset(dev, false);
intel_opregion_fini(dev);
- console_lock();
- intel_fbdev_set_suspend(dev, FBINFO_STATE_SUSPENDED);
- console_unlock();
+ intel_fbdev_set_suspend(dev, FBINFO_STATE_SUSPENDED, true);
dev_priv->suspend_count++;
return 0;
}
-void intel_console_resume(struct work_struct *work)
-{
- struct drm_i915_private *dev_priv =
- container_of(work, struct drm_i915_private,
- console_resume_work);
- struct drm_device *dev = dev_priv->dev;
-
- console_lock();
- intel_fbdev_set_suspend(dev, FBINFO_STATE_RUNNING);
- console_unlock();
-}
-
static int i915_drm_thaw_early(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ int ret;
- if (IS_HASWELL(dev) || IS_BROADWELL(dev))
- hsw_disable_pc8(dev_priv);
+ ret = intel_resume_prepare(dev_priv, false);
+ if (ret)
+		DRM_ERROR("Resume prepare failed: %d, continuing resume\n", ret);
intel_uncore_early_sanitize(dev, true);
intel_uncore_sanitize(dev);
intel_power_domains_init_hw(dev_priv);
- return 0;
+ return ret;
}
static int __i915_drm_thaw(struct drm_device *dev, bool restore_gtt_mappings)
intel_opregion_init(dev);
- /*
- * The console lock can be pretty contented on resume due
- * to all the printk activity. Try to keep it out of the hot
- * path of resume if possible.
- */
- if (console_trylock()) {
- intel_fbdev_set_suspend(dev, FBINFO_STATE_RUNNING);
- console_unlock();
- } else {
- schedule_work(&dev_priv->console_resume_work);
- }
+ intel_fbdev_set_suspend(dev, FBINFO_STATE_RUNNING, false);
mutex_lock(&dev_priv->modeset_restore_lock);
dev_priv->modeset_restore = MODESET_DONE;
!dev_priv->ums.mm_suspended) {
dev_priv->ums.mm_suspended = 0;
+ /* Used to prevent gem_check_wedged returning -EAGAIN during gpu reset */
+ dev_priv->gpu_error.reload_in_reset = true;
+
ret = i915_gem_init_hw(dev);
+
+ dev_priv->gpu_error.reload_in_reset = false;
+
mutex_unlock(&dev->struct_mutex);
if (ret) {
DRM_ERROR("Failed hw init on reset %d\n", ret);
*/
if (INTEL_INFO(dev)->gen > 5)
intel_reset_gt_powersave(dev);
-
- intel_hpd_init(dev);
} else {
mutex_unlock(&dev->struct_mutex);
}
struct pci_dev *pdev = to_pci_dev(dev);
struct drm_device *drm_dev = pci_get_drvdata(pdev);
struct drm_i915_private *dev_priv = drm_dev->dev_private;
+ int ret;
/*
	 * We have a suspend ordering issue with the snd-hda driver also
if (drm_dev->switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
- if (IS_HASWELL(drm_dev) || IS_BROADWELL(drm_dev))
- hsw_enable_pc8(dev_priv);
+ ret = intel_suspend_complete(dev_priv);
- pci_disable_device(pdev);
- pci_set_power_state(pdev, PCI_D3hot);
+	if (ret) {
+		DRM_ERROR("Suspend complete failed: %d\n", ret);
+	} else {
+ pci_disable_device(pdev);
+ pci_set_power_state(pdev, PCI_D3hot);
+ }
- return 0;
+ return ret;
}
static int i915_pm_resume_early(struct device *dev)
return i915_drm_freeze(drm_dev);
}
-static int hsw_runtime_suspend(struct drm_i915_private *dev_priv)
+static int hsw_suspend_complete(struct drm_i915_private *dev_priv)
{
hsw_enable_pc8(dev_priv);
return 0;
}
-static int snb_runtime_resume(struct drm_i915_private *dev_priv)
+static int snb_resume_prepare(struct drm_i915_private *dev_priv,
+ bool rpm_resume)
{
struct drm_device *dev = dev_priv->dev;
- intel_init_pch_refclk(dev);
+ if (rpm_resume)
+ intel_init_pch_refclk(dev);
return 0;
}
-static int hsw_runtime_resume(struct drm_i915_private *dev_priv)
+static int hsw_resume_prepare(struct drm_i915_private *dev_priv,
+ bool rpm_resume)
{
hsw_disable_pc8(dev_priv);
I915_WRITE(VLV_GTLC_PW_STATUS, VLV_GTLC_ALLOWWAKEERR);
}
-static int vlv_runtime_suspend(struct drm_i915_private *dev_priv)
+static int vlv_suspend_complete(struct drm_i915_private *dev_priv)
{
u32 mask;
int err;
return err;
}
-static int vlv_runtime_resume(struct drm_i915_private *dev_priv)
+static int vlv_resume_prepare(struct drm_i915_private *dev_priv,
+ bool rpm_resume)
{
struct drm_device *dev = dev_priv->dev;
int err;
vlv_check_no_gt_access(dev_priv);
- intel_init_clock_gating(dev);
- i915_gem_restore_fences(dev);
+ if (rpm_resume) {
+ intel_init_clock_gating(dev);
+ i915_gem_restore_fences(dev);
+ }
return ret;
}
if (WARN_ON_ONCE(!(dev_priv->rps.enabled && intel_enable_rc6(dev))))
return -ENODEV;
- WARN_ON(!HAS_RUNTIME_PM(dev));
+ if (WARN_ON_ONCE(!HAS_RUNTIME_PM(dev)))
+ return -ENODEV;
+
assert_force_wake_inactive(dev_priv);
DRM_DEBUG_KMS("Suspending device\n");
cancel_work_sync(&dev_priv->rps.work);
intel_runtime_pm_disable_interrupts(dev);
- if (IS_GEN6(dev)) {
- ret = 0;
- } else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
- ret = hsw_runtime_suspend(dev_priv);
- } else if (IS_VALLEYVIEW(dev)) {
- ret = vlv_runtime_suspend(dev_priv);
- } else {
- ret = -ENODEV;
- WARN_ON(1);
- }
-
+ ret = intel_suspend_complete(dev_priv);
if (ret) {
DRM_ERROR("Runtime suspend failed, disabling it (%d)\n", ret);
intel_runtime_pm_restore_interrupts(dev);
dev_priv->pm.suspended = true;
/*
- * current versions of firmware which depend on this opregion
- * notification have repurposed the D1 definition to mean
- * "runtime suspended" vs. what you would normally expect (D3)
- * to distinguish it from notifications that might be sent
- * via the suspend path.
+ * FIXME: We really should find a document that references the arguments
+ * used below!
*/
- intel_opregion_notify_adapter(dev, PCI_D1);
+ if (IS_HASWELL(dev)) {
+ /*
+ * current versions of firmware which depend on this opregion
+ * notification have repurposed the D1 definition to mean
+ * "runtime suspended" vs. what you would normally expect (D3)
+ * to distinguish it from notifications that might be sent via
+ * the suspend path.
+ */
+ intel_opregion_notify_adapter(dev, PCI_D1);
+ } else {
+ /*
+ * On Broadwell, if we use PCI_D1 the PCH DDI ports will stop
+ * being detected, and the call we do at intel_runtime_resume()
+ * won't be able to restore them. Since PCI_D3hot matches the
+ * actual specification and appears to be working, use it. Let's
+ * assume the other non-Haswell platforms will stay the same as
+ * Broadwell.
+ */
+ intel_opregion_notify_adapter(dev, PCI_D3hot);
+ }
DRM_DEBUG_KMS("Device suspended\n");
return 0;
struct drm_i915_private *dev_priv = dev->dev_private;
int ret;
- WARN_ON(!HAS_RUNTIME_PM(dev));
+ if (WARN_ON_ONCE(!HAS_RUNTIME_PM(dev)))
+ return -ENODEV;
DRM_DEBUG_KMS("Resuming device\n");
intel_opregion_notify_adapter(dev, PCI_D0);
dev_priv->pm.suspended = false;
- if (IS_GEN6(dev)) {
- ret = snb_runtime_resume(dev_priv);
- } else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
- ret = hsw_runtime_resume(dev_priv);
- } else if (IS_VALLEYVIEW(dev)) {
- ret = vlv_runtime_resume(dev_priv);
- } else {
- WARN_ON(1);
- ret = -ENODEV;
- }
-
+ ret = intel_resume_prepare(dev_priv, true);
/*
* No point of rolling back things in case of an error, as the best
* we can do is to hope that things will still work (and disable RPM).
return ret;
}
+/*
+ * This function implements the functionality common to the runtime and
+ * system suspend sequences.
+ */
+static int intel_suspend_complete(struct drm_i915_private *dev_priv)
+{
+ struct drm_device *dev = dev_priv->dev;
+ int ret;
+
+ if (IS_HASWELL(dev) || IS_BROADWELL(dev))
+ ret = hsw_suspend_complete(dev_priv);
+ else if (IS_VALLEYVIEW(dev))
+ ret = vlv_suspend_complete(dev_priv);
+ else
+ ret = 0;
+
+ return ret;
+}
+
+/*
+ * This function implements the functionality common to the runtime and
+ * system resume sequences; the rpm_resume flag selects between the two
+ * code paths.
+ */
+static int intel_resume_prepare(struct drm_i915_private *dev_priv,
+ bool rpm_resume)
+{
+ struct drm_device *dev = dev_priv->dev;
+ int ret;
+
+ if (IS_GEN6(dev))
+ ret = snb_resume_prepare(dev_priv, rpm_resume);
+ else if (IS_HASWELL(dev) || IS_BROADWELL(dev))
+ ret = hsw_resume_prepare(dev_priv, rpm_resume);
+ else if (IS_VALLEYVIEW(dev))
+ ret = vlv_resume_prepare(dev_priv, rpm_resume);
+ else
+ ret = 0;
+
+ return ret;
+}
+
static const struct dev_pm_ops i915_pm_ops = {
.suspend = i915_pm_suspend,
.suspend_late = i915_pm_suspend_late,
.lastclose = i915_driver_lastclose,
.preclose = i915_driver_preclose,
.postclose = i915_driver_postclose,
+ .set_busid = drm_pci_set_busid,
/* Used in place of i915_pm_ops for non-DRIVER_MODESET */
.suspend = i915_suspend,
module_init(i915_init);
module_exit(i915_exit);
-MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_AUTHOR("Tungsten Graphics, Inc.");
+MODULE_AUTHOR("Intel Corporation");
+
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_LICENSE("GPL and additional rights");
#include "i915_reg.h"
#include "intel_bios.h"
#include "intel_ringbuffer.h"
+#include "intel_lrc.h"
#include "i915_gem_gtt.h"
+#include "i915_gem_render_state.h"
#include <linux/io-mapping.h>
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>
#include <drm/intel-gtt.h>
+#include <drm/drm_legacy.h> /* for struct drm_dma_handle */
+#include <drm/drm_gem.h>
#include <linux/backlight.h>
#include <linux/hashtable.h>
#include <linux/intel-iommu.h>
/* General customization:
*/
-#define DRIVER_AUTHOR "Tungsten Graphics, Inc."
-
#define DRIVER_NAME "i915"
#define DRIVER_DESC "Intel Graphics"
-#define DRIVER_DATE "20140725"
+#define DRIVER_DATE "20140905"
enum pipe {
INVALID_PIPE = -1,
I915_GEM_DOMAIN_INSTRUCTION | \
I915_GEM_DOMAIN_VERTEX)
-#define for_each_pipe(p) for ((p) = 0; (p) < INTEL_INFO(dev)->num_pipes; (p)++)
+#define for_each_pipe(__dev_priv, __p) \
+ for ((__p) = 0; (__p) < INTEL_INFO(__dev_priv)->num_pipes; (__p)++)
+#define for_each_plane(pipe, p) \
+ for ((p) = 0; (p) < INTEL_INFO(dev)->num_sprites[(pipe)] + 1; (p)++)
#define for_each_sprite(p, s) for ((s) = 0; (s) < INTEL_INFO(dev)->num_sprites[(p)]; (s)++)
#define for_each_crtc(dev, crtc) \
#define for_each_intel_crtc(dev, intel_crtc) \
list_for_each_entry(intel_crtc, &dev->mode_config.crtc_list, base.head)
+#define for_each_intel_encoder(dev, intel_encoder) \
+ list_for_each_entry(intel_encoder, \
+ &(dev)->mode_config.encoder_list, \
+ base.head)
+
#define for_each_encoder_on_crtc(dev, __crtc, intel_encoder) \
list_for_each_entry((intel_encoder), &(dev)->mode_config.encoder_list, base.head) \
if ((intel_encoder)->base.crtc == (__crtc))
if ((1 << (domain)) & (mask))
struct drm_i915_private;
+struct i915_mm_struct;
struct i915_mmu_object;
enum intel_dpll_id {
#define I915_NUM_PLLS 2
struct intel_dpll_hw_state {
+ /* i9xx, pch plls */
uint32_t dpll;
uint32_t dpll_md;
uint32_t fp0;
uint32_t fp1;
+
+ /* hsw, bdw */
uint32_t wrpll;
};
struct intel_overlay;
struct intel_overlay_error_state;
+struct drm_local_map;
+
struct drm_i915_master_private {
- drm_local_map_t *sarea;
+ struct drm_local_map *sarea;
struct _drm_i915_sarea *sarea_priv;
};
#define I915_FENCE_REG_NONE -1
pid_t pid;
char comm[TASK_COMM_LEN];
} ring[I915_NUM_RINGS];
+
struct drm_i915_error_buffer {
u32 size;
u32 name;
} **active_bo, **pinned_bo;
u32 *active_bo_count, *pinned_bo_count;
+ u32 vm_count;
};
struct intel_connector;
struct intel_device_info {
u32 display_mmio_offset;
+ u16 device_id;
u8 num_pipes:3;
u8 num_sprites[I915_MAX_PIPES];
u8 gen;
uint8_t remap_slice;
struct drm_i915_file_private *file_priv;
struct i915_ctx_hang_stats hang_stats;
- struct i915_address_space *vm;
+ struct i915_hw_ppgtt *ppgtt;
+ /* Legacy ring buffer submission */
struct {
struct drm_i915_gem_object *rcs_state;
bool initialized;
} legacy_hw_ctx;
+ /* Execlists */
+ bool rcs_initialized;
+ struct {
+ struct drm_i915_gem_object *state;
+ struct intel_ringbuffer *ringbuf;
+ } engine[I915_NUM_RINGS];
+
struct list_head link;
};
struct drm_mm_node compressed_fb;
struct drm_mm_node *compressed_llb;
+ bool false_color;
+
struct intel_fbc_work {
struct delayed_work work;
struct drm_crtc *crtc;
#define QUIRK_LVDS_SSC_DISABLE (1<<1)
#define QUIRK_INVERT_BRIGHTNESS (1<<2)
#define QUIRK_BACKLIGHT_PRESENT (1<<3)
+#define QUIRK_PIPEB_FORCE (1<<4)
struct intel_fbdev;
struct intel_fbc_work;
};
struct drm_i915_error_state_buf {
+ struct drm_i915_private *i915;
unsigned bytes;
unsigned size;
int err;
/* For missed irq/seqno simulation. */
unsigned int test_irq_rings;
+
+ /* Used to prevent gem_check_wedged returning -EAGAIN during gpu reset */
+ bool reload_in_reset;
};
enum modeset_restore {
};
struct ddi_vbt_port_info {
+ /*
+ * This is an index in the HDMI/DVI DDI buffer translation table.
+ * The special value HDMI_LEVEL_SHIFT_UNKNOWN means the VBT didn't
+ * populate this field.
+ */
+#define HDMI_LEVEL_SHIFT_UNKNOWN 0xff
uint8_t hdmi_level_shift;
uint8_t supports_dvi:1;
struct drm_i915_gem_object *semaphore_obj;
uint32_t last_seqno, next_seqno;
- drm_dma_handle_t *status_page_dmah;
+ struct drm_dma_handle *status_page_dmah;
struct resource mch_res;
/* protects the irq masks */
} hpd_mark;
} hpd_stats[HPD_NUM_PINS];
u32 hpd_event_bits;
- struct timer_list hotplug_reenable_timer;
+ struct delayed_work hotplug_reenable_work;
struct i915_fbc fbc;
struct i915_drrs drrs;
/* LVDS info */
bool no_aux_handshake;
+ /* protects panel power sequencer state */
+ struct mutex pps_mutex;
+
struct drm_i915_fence_reg fence_regs[I915_MAX_NUM_FENCES]; /* assume 965 */
int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */
int num_fence_regs; /* 8 on pre-965, 16 otherwise */
struct i915_gtt gtt; /* VM representing the global address space */
struct i915_gem_mm mm;
-#if defined(CONFIG_MMU_NOTIFIER)
- DECLARE_HASHTABLE(mmu_notifiers, 7);
-#endif
+ DECLARE_HASHTABLE(mm_structs, 7);
+ struct mutex mm_lock;
/* Kernel Modesetting */
struct intel_shared_dpll shared_dplls[I915_NUM_PLLS];
int dpio_phy_iosf_port[I915_NUM_PHYS_VLV];
+ /*
+ * workarounds are currently applied at different places and
+ * changes are being done to consolidate them so exact count is
+ * not clear at this point, use a max value for now.
+ */
+#define I915_MAX_WA_REGS 16
+ struct {
+ u32 addr;
+ u32 value;
+ /* bitmask representing WA bits */
+ u32 mask;
+ } intel_wa_regs[I915_MAX_WA_REGS];
+ u32 num_wa_regs;
+
/* Reclocking support */
bool render_reclock_avail;
bool lvds_downclock_avail;
#ifdef CONFIG_DRM_I915_FBDEV
/* list of fbdev register on this device */
struct intel_fbdev *fbdev;
+ struct work_struct fbdev_suspend_work;
#endif
- /*
- * The console may be contended at resume, but we don't
- * want it to block on it.
- */
- struct work_struct console_resume_work;
-
struct drm_property *broadcast_rgb_property;
struct drm_property *force_audio_property;
*/
struct workqueue_struct *dp_wq;
+ uint32_t bios_vgacntr;
+
/* Old dri1 support infrastructure, beware the dragons ya fools entering
* here! */
struct i915_dri1_state dri1;
/* Old ums support infrastructure, same warning applies. */
struct i915_ums_state ums;
+ /* Abstract the submission mechanism (legacy ringbuffer or execlists) away */
+ struct {
+ int (*do_execbuf)(struct drm_device *dev, struct drm_file *file,
+ struct intel_engine_cs *ring,
+ struct intel_context *ctx,
+ struct drm_i915_gem_execbuffer2 *args,
+ struct list_head *vmas,
+ struct drm_i915_gem_object *batch_obj,
+ u64 exec_start, u32 flags);
+ int (*init_rings)(struct drm_device *dev);
+ void (*cleanup_ring)(struct intel_engine_cs *ring);
+ void (*stop_ring)(struct intel_engine_cs *ring);
+ } gt;
+
/*
* NOTE: This is the dri1/ums dungeon, don't add stuff here. Your patch
* will be rejected. Instead look for a better place.
* Only honoured if hardware has relevant pte bit
*/
unsigned long gt_ro:1;
-
- /*
- * Is the GPU currently using a fence to access this buffer,
- */
- unsigned int pending_fenced_gpu_access:1;
- unsigned int fenced_gpu_access:1;
-
unsigned int cache_level:3;
unsigned int has_aliasing_ppgtt_mapping:1;
struct drm_file *pin_filp;
/** for phy allocated objects */
- drm_dma_handle_t *phys_handle;
+ struct drm_dma_handle *phys_handle;
union {
struct i915_gem_userptr {
unsigned workers :4;
#define I915_GEM_USERPTR_MAX_WORKERS 15
- struct mm_struct *mm;
- struct i915_mmu_object *mn;
+ struct i915_mm_struct *mm;
+ struct i915_mmu_object *mmu_object;
struct work_struct *work;
} userptr;
};
int count;
};
-#define INTEL_INFO(dev) (&to_i915(dev)->info)
-
-#define IS_I830(dev) ((dev)->pdev->device == 0x3577)
-#define IS_845G(dev) ((dev)->pdev->device == 0x2562)
+/* Note that the (struct drm_i915_private *) cast is just to shut up gcc. */
+#define __I915__(p) ({ \
+ struct drm_i915_private *__p; \
+ if (__builtin_types_compatible_p(typeof(*p), struct drm_i915_private)) \
+ __p = (struct drm_i915_private *)p; \
+ else if (__builtin_types_compatible_p(typeof(*p), struct drm_device)) \
+ __p = to_i915((struct drm_device *)p); \
+ else \
+ BUILD_BUG(); \
+ __p; \
+})
+#define INTEL_INFO(p) (&__I915__(p)->info)
+#define INTEL_DEVID(p) (INTEL_INFO(p)->device_id)
+
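Because __I915__() accepts either pointer type, the feature macros below
can be called with a struct drm_device * or a struct drm_i915_private *
interchangeably; an illustrative (hypothetical) caller:

	static bool example_is_845g(struct drm_device *dev,
				    struct drm_i915_private *dev_priv)
	{
		/* Both calls expand to the same device-ID compare; the
		 * __builtin_types_compatible_p() chain picks the cast at
		 * compile time and BUILD_BUG()s on any other type. */
		return IS_845G(dev) && IS_845G(dev_priv);
	}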
+#define IS_I830(dev) (INTEL_DEVID(dev) == 0x3577)
+#define IS_845G(dev) (INTEL_DEVID(dev) == 0x2562)
#define IS_I85X(dev) (INTEL_INFO(dev)->is_i85x)
-#define IS_I865G(dev) ((dev)->pdev->device == 0x2572)
+#define IS_I865G(dev) (INTEL_DEVID(dev) == 0x2572)
#define IS_I915G(dev) (INTEL_INFO(dev)->is_i915g)
-#define IS_I915GM(dev) ((dev)->pdev->device == 0x2592)
-#define IS_I945G(dev) ((dev)->pdev->device == 0x2772)
+#define IS_I915GM(dev) (INTEL_DEVID(dev) == 0x2592)
+#define IS_I945G(dev) (INTEL_DEVID(dev) == 0x2772)
#define IS_I945GM(dev) (INTEL_INFO(dev)->is_i945gm)
#define IS_BROADWATER(dev) (INTEL_INFO(dev)->is_broadwater)
#define IS_CRESTLINE(dev) (INTEL_INFO(dev)->is_crestline)
-#define IS_GM45(dev) ((dev)->pdev->device == 0x2A42)
+#define IS_GM45(dev) (INTEL_DEVID(dev) == 0x2A42)
#define IS_G4X(dev) (INTEL_INFO(dev)->is_g4x)
-#define IS_PINEVIEW_G(dev) ((dev)->pdev->device == 0xa001)
-#define IS_PINEVIEW_M(dev) ((dev)->pdev->device == 0xa011)
+#define IS_PINEVIEW_G(dev) (INTEL_DEVID(dev) == 0xa001)
+#define IS_PINEVIEW_M(dev) (INTEL_DEVID(dev) == 0xa011)
#define IS_PINEVIEW(dev) (INTEL_INFO(dev)->is_pineview)
#define IS_G33(dev) (INTEL_INFO(dev)->is_g33)
-#define IS_IRONLAKE_M(dev) ((dev)->pdev->device == 0x0046)
+#define IS_IRONLAKE_M(dev) (INTEL_DEVID(dev) == 0x0046)
#define IS_IVYBRIDGE(dev) (INTEL_INFO(dev)->is_ivybridge)
-#define IS_IVB_GT1(dev) ((dev)->pdev->device == 0x0156 || \
- (dev)->pdev->device == 0x0152 || \
- (dev)->pdev->device == 0x015a)
-#define IS_SNB_GT1(dev) ((dev)->pdev->device == 0x0102 || \
- (dev)->pdev->device == 0x0106 || \
- (dev)->pdev->device == 0x010A)
+#define IS_IVB_GT1(dev) (INTEL_DEVID(dev) == 0x0156 || \
+ INTEL_DEVID(dev) == 0x0152 || \
+ INTEL_DEVID(dev) == 0x015a)
+#define IS_SNB_GT1(dev) (INTEL_DEVID(dev) == 0x0102 || \
+ INTEL_DEVID(dev) == 0x0106 || \
+ INTEL_DEVID(dev) == 0x010A)
#define IS_VALLEYVIEW(dev) (INTEL_INFO(dev)->is_valleyview)
#define IS_CHERRYVIEW(dev) (INTEL_INFO(dev)->is_valleyview && IS_GEN8(dev))
#define IS_HASWELL(dev) (INTEL_INFO(dev)->is_haswell)
#define IS_BROADWELL(dev) (!INTEL_INFO(dev)->is_valleyview && IS_GEN8(dev))
#define IS_MOBILE(dev) (INTEL_INFO(dev)->is_mobile)
#define IS_HSW_EARLY_SDV(dev) (IS_HASWELL(dev) && \
- ((dev)->pdev->device & 0xFF00) == 0x0C00)
+ (INTEL_DEVID(dev) & 0xFF00) == 0x0C00)
#define IS_BDW_ULT(dev) (IS_BROADWELL(dev) && \
- (((dev)->pdev->device & 0xf) == 0x2 || \
- ((dev)->pdev->device & 0xf) == 0x6 || \
- ((dev)->pdev->device & 0xf) == 0xe))
+ ((INTEL_DEVID(dev) & 0xf) == 0x2 || \
+ (INTEL_DEVID(dev) & 0xf) == 0x6 || \
+ (INTEL_DEVID(dev) & 0xf) == 0xe))
#define IS_HSW_ULT(dev) (IS_HASWELL(dev) && \
- ((dev)->pdev->device & 0xFF00) == 0x0A00)
+ (INTEL_DEVID(dev) & 0xFF00) == 0x0A00)
#define IS_ULT(dev) (IS_HSW_ULT(dev) || IS_BDW_ULT(dev))
#define IS_HSW_GT3(dev) (IS_HASWELL(dev) && \
- ((dev)->pdev->device & 0x00F0) == 0x0020)
+ (INTEL_DEVID(dev) & 0x00F0) == 0x0020)
/* ULX machines are also considered ULT. */
-#define IS_HSW_ULX(dev) ((dev)->pdev->device == 0x0A0E || \
- (dev)->pdev->device == 0x0A1E)
+#define IS_HSW_ULX(dev) (INTEL_DEVID(dev) == 0x0A0E || \
+ INTEL_DEVID(dev) == 0x0A1E)
#define IS_PRELIMINARY_HW(intel_info) ((intel_info)->is_preliminary)
/*
#define I915_NEED_GFX_HWS(dev) (INTEL_INFO(dev)->need_gfx_hws)
#define HAS_HW_CONTEXTS(dev) (INTEL_INFO(dev)->gen >= 6)
+#define HAS_LOGICAL_RING_CONTEXTS(dev) (INTEL_INFO(dev)->gen >= 8)
#define HAS_ALIASING_PPGTT(dev) (INTEL_INFO(dev)->gen >= 6)
#define HAS_PPGTT(dev) (INTEL_INFO(dev)->gen >= 7 && !IS_GEN8(dev))
-#define USES_PPGTT(dev) intel_enable_ppgtt(dev, false)
-#define USES_FULL_PPGTT(dev) intel_enable_ppgtt(dev, true)
+#define USES_PPGTT(dev) (i915.enable_ppgtt)
+#define USES_FULL_PPGTT(dev) (i915.enable_ppgtt == 2)
#define HAS_OVERLAY(dev) (INTEL_INFO(dev)->has_overlay)
#define OVERLAY_NEEDS_PHYSICAL(dev) (INTEL_INFO(dev)->overlay_needs_physical)
int enable_rc6;
int enable_fbc;
int enable_ppgtt;
+ int enable_execlists;
int enable_psr;
unsigned int preliminary_hw_support;
int disable_power_well;
extern unsigned long i915_gfx_val(struct drm_i915_private *dev_priv);
extern void i915_update_gfx_val(struct drm_i915_private *dev_priv);
int vlv_force_gfx_clock(struct drm_i915_private *dev_priv, bool on);
-
-extern void intel_console_resume(struct work_struct *work);
+void intel_hpd_cancel_work(struct drm_i915_private *dev_priv);
/* i915_irq.c */
void i915_queue_hangcheck(struct drm_device *dev);
struct drm_file *file_priv);
int i915_gem_sw_finish_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
+void i915_gem_execbuffer_move_to_active(struct list_head *vmas,
+ struct intel_engine_cs *ring);
+void i915_gem_execbuffer_retire_commands(struct drm_device *dev,
+ struct drm_file *file,
+ struct intel_engine_cs *ring,
+ struct drm_i915_gem_object *obj);
+int i915_gem_ringbuffer_submission(struct drm_device *dev,
+ struct drm_file *file,
+ struct intel_engine_cs *ring,
+ struct intel_context *ctx,
+ struct drm_i915_gem_execbuffer2 *args,
+ struct list_head *vmas,
+ struct drm_i915_gem_object *batch_obj,
+ u64 exec_start, u32 flags);
int i915_gem_execbuffer(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int i915_gem_execbuffer2(struct drm_device *dev, void *data,
int i915_gem_wait_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
void i915_gem_load(struct drm_device *dev);
+unsigned long i915_gem_shrink(struct drm_i915_private *dev_priv,
+ long target,
+ unsigned flags);
+#define I915_SHRINK_PURGEABLE 0x1
+#define I915_SHRINK_UNBOUND 0x2
+#define I915_SHRINK_BOUND 0x4
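Callers OR the flag bits together to choose which object lists the
shrinker may touch and whether only purgeable objects qualify. The
shrinker's scan callback later in this patch uses exactly this two-pass
sketch (nr_to_scan stands in for the request from the VM):

	/* Cheap pass: only purgeable objects, bound or unbound. */
	freed = i915_gem_shrink(dev_priv, nr_to_scan,
				I915_SHRINK_BOUND |
				I915_SHRINK_UNBOUND |
				I915_SHRINK_PURGEABLE);
	/* If that fell short, allow non-purgeable objects as well. */
	if (freed < nr_to_scan)
		freed += i915_gem_shrink(dev_priv, nr_to_scan - freed,
					 I915_SHRINK_BOUND |
					 I915_SHRINK_UNBOUND);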
void *i915_gem_object_alloc(struct drm_device *dev);
void i915_gem_object_free(struct drm_i915_gem_object *obj);
void i915_gem_object_init(struct drm_i915_gem_object *obj,
bool i915_gem_clflush_object(struct drm_i915_gem_object *obj, bool force);
int __must_check i915_gem_object_finish_gpu(struct drm_i915_gem_object *obj);
int __must_check i915_gem_init(struct drm_device *dev);
+int i915_gem_init_rings(struct drm_device *dev);
int __must_check i915_gem_init_hw(struct drm_device *dev);
int i915_gem_l3_remap(struct intel_engine_cs *ring, int slice);
void i915_gem_init_swizzling(struct drm_device *dev);
}
/* Some GGTT VM helpers */
-#define obj_to_ggtt(obj) \
+#define i915_obj_to_ggtt(obj) \
(&((struct drm_i915_private *)(obj)->base.dev->dev_private)->gtt.base)
static inline bool i915_is_ggtt(struct i915_address_space *vm)
{
return vm == ggtt;
}
+static inline struct i915_hw_ppgtt *
+i915_vm_to_ppgtt(struct i915_address_space *vm)
+{
+ WARN_ON(i915_is_ggtt(vm));
+
+ return container_of(vm, struct i915_hw_ppgtt, base);
+}
+
+
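The helper WARNs on the GGTT because only a real PPGTT embeds its
address space at base; typical use, as in the VMA-destroy path later in
this patch:

	if (!i915_is_ggtt(vma->vm))
		i915_ppgtt_put(i915_vm_to_ppgtt(vma->vm));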
static inline bool i915_gem_obj_ggtt_bound(struct drm_i915_gem_object *obj)
{
- return i915_gem_obj_bound(obj, obj_to_ggtt(obj));
+ return i915_gem_obj_bound(obj, i915_obj_to_ggtt(obj));
}
static inline unsigned long
i915_gem_obj_ggtt_offset(struct drm_i915_gem_object *obj)
{
- return i915_gem_obj_offset(obj, obj_to_ggtt(obj));
+ return i915_gem_obj_offset(obj, i915_obj_to_ggtt(obj));
}
static inline unsigned long
i915_gem_obj_ggtt_size(struct drm_i915_gem_object *obj)
{
- return i915_gem_obj_size(obj, obj_to_ggtt(obj));
+ return i915_gem_obj_size(obj, i915_obj_to_ggtt(obj));
}
static inline int __must_check
uint32_t alignment,
unsigned flags)
{
- return i915_gem_object_pin(obj, obj_to_ggtt(obj), alignment, flags | PIN_GLOBAL);
+ return i915_gem_object_pin(obj, i915_obj_to_ggtt(obj),
+ alignment, flags | PIN_GLOBAL);
}
static inline int
void i915_gem_object_ggtt_unpin(struct drm_i915_gem_object *obj);
/* i915_gem_context.c */
-#define ctx_to_ppgtt(ctx) container_of((ctx)->vm, struct i915_hw_ppgtt, base)
int __must_check i915_gem_context_init(struct drm_device *dev);
void i915_gem_context_fini(struct drm_device *dev);
void i915_gem_context_reset(struct drm_device *dev);
struct intel_context *
i915_gem_context_get(struct drm_i915_file_private *file_priv, u32 id);
void i915_gem_context_free(struct kref *ctx_ref);
+struct drm_i915_gem_object *
+i915_gem_alloc_context_obj(struct drm_device *dev, size_t size);
static inline void i915_gem_context_reference(struct intel_context *ctx)
{
kref_get(&ctx->ref);
int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
-/* i915_gem_render_state.c */
-int i915_gem_render_state_init(struct intel_engine_cs *ring);
/* i915_gem_evict.c */
int __must_check i915_gem_evict_something(struct drm_device *dev,
struct i915_address_space *vm,
int i915_error_state_to_str(struct drm_i915_error_state_buf *estr,
const struct i915_error_state_file_priv *error);
int i915_error_state_buf_init(struct drm_i915_error_state_buf *eb,
+ struct drm_i915_private *i915,
size_t count, loff_t pos);
static inline void i915_error_state_buf_release(
struct drm_i915_error_state_buf *eb)
void i915_destroy_error_state(struct drm_device *dev);
void i915_get_extra_instdone(struct drm_device *dev, uint32_t *instdone);
-const char *i915_cache_level_str(int type);
+const char *i915_cache_level_str(struct drm_i915_private *i915, int type);
/* i915_cmd_parser.c */
int i915_cmd_parser_get_version(void);
extern void i915_redisable_vga(struct drm_device *dev);
extern void i915_redisable_vga_power_on(struct drm_device *dev);
extern bool intel_fbc_enabled(struct drm_device *dev);
+extern void gen8_fbc_sw_flush(struct drm_device *dev, u32 value);
extern void intel_disable_fbc(struct drm_device *dev);
extern bool ironlake_set_drps(struct drm_device *dev, u8 val);
extern void intel_init_pch_refclk(struct drm_device *dev);
static int i915_gem_shrinker_oom(struct notifier_block *nb,
unsigned long event,
void *ptr);
-static unsigned long i915_gem_purge(struct drm_i915_private *dev_priv, long target);
static unsigned long i915_gem_shrink_all(struct drm_i915_private *dev_priv);
static bool cpu_cache_is_coherent(struct drm_device *dev,
if (i915_terminally_wedged(error))
return -EIO;
- return -EAGAIN;
+ /*
+ * Check if GPU Reset is in progress - we need intel_ring_begin
+ * to work properly to reinit the hw state while the gpu is
+ * still marked as reset-in-progress. Handle this with a flag.
+ */
+ if (!error->reload_in_reset)
+ return -EAGAIN;
}
return 0;
out:
switch (ret) {
case -EIO:
- /* If this -EIO is due to a gpu hang, give the reset code a
- * chance to clean up the mess. Otherwise return the proper
- * SIGBUS. */
- if (i915_terminally_wedged(&dev_priv->gpu_error)) {
+ /*
+ * We eat errors when the gpu is terminally wedged to avoid
+ * userspace unduly crashing (gl has no provisions for mmaps to
+ * fail). But any other -EIO isn't ours (e.g. swap in failure)
+ * and so needs to be reported.
+ */
+ if (!i915_terminally_wedged(&dev_priv->gpu_error)) {
ret = VM_FAULT_SIGBUS;
break;
}
* offsets on purgeable objects by truncating it and marking it purged,
* which prevents userspace from ever using that object again.
*/
- i915_gem_purge(dev_priv, obj->base.size >> PAGE_SHIFT);
+ i915_gem_shrink(dev_priv,
+ obj->base.size >> PAGE_SHIFT,
+ I915_SHRINK_BOUND |
+ I915_SHRINK_UNBOUND |
+ I915_SHRINK_PURGEABLE);
ret = drm_gem_create_mmap_offset(&obj->base);
if (ret != -ENOSPC)
goto out;
return 0;
}
-static unsigned long
-__i915_gem_shrink(struct drm_i915_private *dev_priv, long target,
- bool purgeable_only)
+unsigned long
+i915_gem_shrink(struct drm_i915_private *dev_priv,
+ long target, unsigned flags)
{
- struct list_head still_in_list;
- struct drm_i915_gem_object *obj;
+ const bool purgeable_only = flags & I915_SHRINK_PURGEABLE;
unsigned long count = 0;
/*
* dev->struct_mutex and so we won't ever be able to observe an
* object on the bound_list with a reference count equal to 0.
*/
- INIT_LIST_HEAD(&still_in_list);
- while (count < target && !list_empty(&dev_priv->mm.unbound_list)) {
- obj = list_first_entry(&dev_priv->mm.unbound_list,
- typeof(*obj), global_list);
- list_move_tail(&obj->global_list, &still_in_list);
+ if (flags & I915_SHRINK_UNBOUND) {
+ struct list_head still_in_list;
- if (!i915_gem_object_is_purgeable(obj) && purgeable_only)
- continue;
+ INIT_LIST_HEAD(&still_in_list);
+ while (count < target && !list_empty(&dev_priv->mm.unbound_list)) {
+ struct drm_i915_gem_object *obj;
- drm_gem_object_reference(&obj->base);
+ obj = list_first_entry(&dev_priv->mm.unbound_list,
+ typeof(*obj), global_list);
+ list_move_tail(&obj->global_list, &still_in_list);
- if (i915_gem_object_put_pages(obj) == 0)
- count += obj->base.size >> PAGE_SHIFT;
+ if (!i915_gem_object_is_purgeable(obj) && purgeable_only)
+ continue;
+
+ drm_gem_object_reference(&obj->base);
- drm_gem_object_unreference(&obj->base);
+ if (i915_gem_object_put_pages(obj) == 0)
+ count += obj->base.size >> PAGE_SHIFT;
+
+ drm_gem_object_unreference(&obj->base);
+ }
+ list_splice(&still_in_list, &dev_priv->mm.unbound_list);
}
- list_splice(&still_in_list, &dev_priv->mm.unbound_list);
- INIT_LIST_HEAD(&still_in_list);
- while (count < target && !list_empty(&dev_priv->mm.bound_list)) {
- struct i915_vma *vma, *v;
+ if (flags & I915_SHRINK_BOUND) {
+ struct list_head still_in_list;
- obj = list_first_entry(&dev_priv->mm.bound_list,
- typeof(*obj), global_list);
- list_move_tail(&obj->global_list, &still_in_list);
+ INIT_LIST_HEAD(&still_in_list);
+ while (count < target && !list_empty(&dev_priv->mm.bound_list)) {
+ struct drm_i915_gem_object *obj;
+ struct i915_vma *vma, *v;
- if (!i915_gem_object_is_purgeable(obj) && purgeable_only)
- continue;
+ obj = list_first_entry(&dev_priv->mm.bound_list,
+ typeof(*obj), global_list);
+ list_move_tail(&obj->global_list, &still_in_list);
- drm_gem_object_reference(&obj->base);
+ if (!i915_gem_object_is_purgeable(obj) && purgeable_only)
+ continue;
- list_for_each_entry_safe(vma, v, &obj->vma_list, vma_link)
- if (i915_vma_unbind(vma))
- break;
+ drm_gem_object_reference(&obj->base);
- if (i915_gem_object_put_pages(obj) == 0)
- count += obj->base.size >> PAGE_SHIFT;
+ list_for_each_entry_safe(vma, v, &obj->vma_list, vma_link)
+ if (i915_vma_unbind(vma))
+ break;
+
+ if (i915_gem_object_put_pages(obj) == 0)
+ count += obj->base.size >> PAGE_SHIFT;
- drm_gem_object_unreference(&obj->base);
+ drm_gem_object_unreference(&obj->base);
+ }
+ list_splice(&still_in_list, &dev_priv->mm.bound_list);
}
- list_splice(&still_in_list, &dev_priv->mm.bound_list);
return count;
}
-static unsigned long
-i915_gem_purge(struct drm_i915_private *dev_priv, long target)
-{
- return __i915_gem_shrink(dev_priv, target, true);
-}
-
static unsigned long
i915_gem_shrink_all(struct drm_i915_private *dev_priv)
{
i915_gem_evict_everything(dev_priv->dev);
- return __i915_gem_shrink(dev_priv, LONG_MAX, false);
+ return i915_gem_shrink(dev_priv, LONG_MAX,
+ I915_SHRINK_BOUND | I915_SHRINK_UNBOUND);
}
static int
for (i = 0; i < page_count; i++) {
page = shmem_read_mapping_page_gfp(mapping, i, gfp);
if (IS_ERR(page)) {
- i915_gem_purge(dev_priv, page_count);
+ i915_gem_shrink(dev_priv,
+ page_count,
+ I915_SHRINK_BOUND |
+ I915_SHRINK_UNBOUND |
+ I915_SHRINK_PURGEABLE);
page = shmem_read_mapping_page_gfp(mapping, i, gfp);
}
if (IS_ERR(page)) {
i915_gem_object_move_to_active(struct drm_i915_gem_object *obj,
struct intel_engine_cs *ring)
{
- struct drm_device *dev = obj->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
u32 seqno = intel_ring_get_seqno(ring);
BUG_ON(ring == NULL);
list_move_tail(&obj->ring_list, &ring->active_list);
obj->last_read_seqno = seqno;
-
- if (obj->fenced_gpu_access) {
- obj->last_fenced_seqno = seqno;
-
- /* Bump MRU to take account of the delayed flush */
- if (obj->fence_reg != I915_FENCE_REG_NONE) {
- struct drm_i915_fence_reg *reg;
-
- reg = &dev_priv->fence_regs[obj->fence_reg];
- list_move_tail(®->lru_list,
- &dev_priv->mm.fence_list);
- }
- }
}
void i915_vma_move_to_active(struct i915_vma *vma,
obj->base.write_domain = 0;
obj->last_fenced_seqno = 0;
- obj->fenced_gpu_access = false;
obj->active = 0;
drm_gem_object_unreference(&obj->base);
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
struct drm_i915_gem_request *request;
+ struct intel_ringbuffer *ringbuf;
u32 request_ring_position, request_start;
int ret;
- request_start = intel_ring_get_tail(ring->buffer);
+ request = ring->preallocated_lazy_request;
+ if (WARN_ON(request == NULL))
+ return -ENOMEM;
+
+ if (i915.enable_execlists) {
+ struct intel_context *ctx = request->ctx;
+ ringbuf = ctx->engine[ring->id].ringbuf;
+ } else
+ ringbuf = ring->buffer;
+
+ request_start = intel_ring_get_tail(ringbuf);
/*
* Emit any outstanding flushes - execbuf can fail to emit the flush
* after having emitted the batchbuffer command. Hence we need to fix
* is that the flush _must_ happen before the next request, no matter
* what.
*/
- ret = intel_ring_flush_all_caches(ring);
- if (ret)
- return ret;
-
- request = ring->preallocated_lazy_request;
- if (WARN_ON(request == NULL))
- return -ENOMEM;
+ if (i915.enable_execlists) {
+ ret = logical_ring_flush_all_caches(ringbuf);
+ if (ret)
+ return ret;
+ } else {
+ ret = intel_ring_flush_all_caches(ring);
+ if (ret)
+ return ret;
+ }
/* Record the position of the start of the request so that
* should we detect the updated seqno part-way through the
* GPU processing the request, we never over-estimate the
* position of the head.
*/
- request_ring_position = intel_ring_get_tail(ring->buffer);
+ request_ring_position = intel_ring_get_tail(ringbuf);
- ret = ring->add_request(ring);
- if (ret)
- return ret;
+ if (i915.enable_execlists) {
+ ret = ring->emit_request(ringbuf);
+ if (ret)
+ return ret;
+ } else {
+ ret = ring->add_request(ring);
+ if (ret)
+ return ret;
+ }
request->seqno = intel_ring_get_seqno(ring);
request->ring = ring;
*/
request->batch_obj = obj;
- /* Hold a reference to the current context so that we can inspect
- * it later in case a hangcheck error event fires.
- */
- request->ctx = ring->last_context;
- if (request->ctx)
- i915_gem_context_reference(request->ctx);
+ if (!i915.enable_execlists) {
+ /* Hold a reference to the current context so that we can inspect
+ * it later in case a hangcheck error event fires.
+ */
+ request->ctx = ring->last_context;
+ if (request->ctx)
+ i915_gem_context_reference(request->ctx);
+ }
request->emitted_jiffies = jiffies;
list_add_tail(&request->list, &ring->request_list);
i915_gem_free_request(request);
}
+ while (!list_empty(&ring->execlist_queue)) {
+ struct intel_ctx_submit_request *submit_req;
+
+ submit_req = list_first_entry(&ring->execlist_queue,
+ struct intel_ctx_submit_request,
+ execlist_link);
+ list_del(&submit_req->execlist_link);
+ intel_runtime_pm_put(dev_priv);
+ i915_gem_context_unreference(submit_req->ctx);
+ kfree(submit_req);
+ }
+
/* These may not have been flush before the reset, do so now */
kfree(ring->preallocated_lazy_request);
ring->preallocated_lazy_request = NULL;
while (!list_empty(&ring->request_list)) {
struct drm_i915_gem_request *request;
+ struct intel_ringbuffer *ringbuf;
request = list_first_entry(&ring->request_list,
struct drm_i915_gem_request,
break;
trace_i915_gem_request_retire(ring, request->seqno);
+
+ /* This is one of the few common intersection points
+ * between legacy ringbuffer submission and execlists:
+ * we need to tell them apart in order to find the correct
+ * ringbuffer to which the request belongs.
+ */
+ if (i915.enable_execlists) {
+ struct intel_context *ctx = request->ctx;
+ ringbuf = ctx->engine[ring->id].ringbuf;
+ } else
+ ringbuf = ring->buffer;
+
/* We know the GPU must have read the request to have
* sent us the seqno + interrupt, so use the position
* of tail of the request to update the last known position
* of the GPU head.
*/
- ring->buffer->last_retired_head = request->tail;
+ ringbuf->last_retired_head = request->tail;
i915_gem_free_request(request);
}
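The same enable_execlists test picks the backing ringbuffer at both
request-add and request-retire above; a hypothetical helper capturing
the pattern (not part of this patch):

	static struct intel_ringbuffer *
	request_ringbuf(struct drm_i915_gem_request *request,
			struct intel_engine_cs *ring)
	{
		/* In execlists mode every context owns one ringbuffer per
		 * engine; legacy submission has one ringbuffer per engine. */
		if (i915.enable_execlists)
			return request->ctx->engine[ring->id].ringbuf;
		return ring->buffer;
	}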
* cause memory corruption through use-after-free.
*/
+ /* Throw away the active reference before moving to the unbound list */
+ i915_gem_object_retire(obj);
+
if (i915_is_ggtt(vma->vm)) {
i915_gem_object_finish_gtt(obj);
vma->unbind_vma(vma);
list_del_init(&vma->mm_list);
- /* Avoid an unnecessary call to unbind on rebind. */
if (i915_is_ggtt(vma->vm))
- obj->map_and_fenceable = true;
+ obj->map_and_fenceable = false;
drm_mm_remove_node(&vma->node);
i915_gem_vma_destroy(vma);
/* Flush everything onto the inactive list. */
for_each_ring(ring, dev_priv, i) {
- ret = i915_switch_context(ring, ring->default_context);
- if (ret)
- return ret;
+ if (!i915.enable_execlists) {
+ ret = i915_switch_context(ring, ring->default_context);
+ if (ret)
+ return ret;
+ }
ret = intel_ring_idle(ring);
if (ret)
obj->last_fenced_seqno = 0;
}
- obj->fenced_gpu_access = false;
return 0;
}
return 0;
}
} else if (enable) {
+ if (WARN_ON(!obj->map_and_fenceable))
+ return -EINVAL;
+
reg = i915_find_fence_reg(dev);
if (IS_ERR(reg))
return PTR_ERR(reg);
return 0;
}
-static bool i915_gem_valid_gtt_space(struct drm_device *dev,
- struct drm_mm_node *gtt_space,
+static bool i915_gem_valid_gtt_space(struct i915_vma *vma,
unsigned long cache_level)
{
+ struct drm_mm_node *gtt_space = &vma->node;
struct drm_mm_node *other;
- /* On non-LLC machines we have to be careful when putting differing
- * types of snoopable memory together to avoid the prefetcher
- * crossing memory domains and dying.
+ /*
+ * On some machines we have to be careful when putting differing types
+ * of snoopable memory together to avoid the prefetcher crossing memory
+ * domains and dying. During vm initialisation, we decide whether or not
+ * these constraints apply and set the drm_mm.color_adjust
+ * appropriately.
*/
- if (HAS_LLC(dev))
+ if (vma->vm->mm.color_adjust == NULL)
return true;
if (!drm_mm_node_allocated(gtt_space))
goto err_free_vma;
}
- if (WARN_ON(!i915_gem_valid_gtt_space(dev, &vma->node,
- obj->cache_level))) {
+ if (WARN_ON(!i915_gem_valid_gtt_space(vma, obj->cache_level))) {
ret = -EINVAL;
goto err_remove_node;
}
i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write)
{
struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
+ struct i915_vma *vma = i915_gem_obj_to_ggtt(obj);
uint32_t old_write_domain, old_read_domains;
int ret;
/* Not valid to be called on unbound objects. */
- if (!i915_gem_obj_bound_any(obj))
+ if (vma == NULL)
return -EINVAL;
if (obj->base.write_domain == I915_GEM_DOMAIN_GTT)
old_write_domain);
/* And bump the LRU for this access */
- if (i915_gem_object_is_inactive(obj)) {
- struct i915_vma *vma = i915_gem_obj_to_ggtt(obj);
- if (vma)
- list_move_tail(&vma->mm_list,
- &dev_priv->gtt.base.inactive_list);
-
- }
+ if (i915_gem_object_is_inactive(obj))
+ list_move_tail(&vma->mm_list,
+ &dev_priv->gtt.base.inactive_list);
return 0;
}
}
list_for_each_entry_safe(vma, next, &obj->vma_list, vma_link) {
- if (!i915_gem_valid_gtt_space(dev, &vma->node, cache_level)) {
+ if (!i915_gem_valid_gtt_space(vma, cache_level)) {
ret = i915_vma_unbind(vma);
if (ret)
return ret;
{
struct i915_vma *vma;
- if (list_empty(&obj->vma_list))
- return false;
-
vma = i915_gem_obj_to_ggtt(obj);
if (!vma)
return false;
obj->fence_reg = I915_FENCE_REG_NONE;
obj->madv = I915_MADV_WILLNEED;
- /* Avoid an unnecessary call to unbind on the first bind. */
- obj->map_and_fenceable = true;
i915_gem_info_add_obj(obj->base.dev->dev_private, obj->base.size);
}
void i915_gem_vma_destroy(struct i915_vma *vma)
{
+ struct i915_address_space *vm = NULL;
WARN_ON(vma->node.allocated);
/* Keep the vma as a placeholder in the execbuffer reservation lists */
if (!list_empty(&vma->exec_list))
return;
+ vm = vma->vm;
+
+ if (!i915_is_ggtt(vm))
+ i915_ppgtt_put(i915_vm_to_ppgtt(vm));
+
list_del(&vma->vma_link);
kfree(vma);
int i;
for_each_ring(ring, dev_priv, i)
- intel_stop_ring_buffer(ring);
+ dev_priv->gt.stop_ring(ring);
}
int
return true;
}
-static int i915_gem_init_rings(struct drm_device *dev)
+static void init_unused_ring(struct drm_device *dev, u32 base)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ I915_WRITE(RING_CTL(base), 0);
+ I915_WRITE(RING_HEAD(base), 0);
+ I915_WRITE(RING_TAIL(base), 0);
+ I915_WRITE(RING_START(base), 0);
+}
+
+static void init_unused_rings(struct drm_device *dev)
+{
+ if (IS_I830(dev)) {
+ init_unused_ring(dev, PRB1_BASE);
+ init_unused_ring(dev, SRB0_BASE);
+ init_unused_ring(dev, SRB1_BASE);
+ init_unused_ring(dev, SRB2_BASE);
+ init_unused_ring(dev, SRB3_BASE);
+ } else if (IS_GEN2(dev)) {
+ init_unused_ring(dev, SRB0_BASE);
+ init_unused_ring(dev, SRB1_BASE);
+ } else if (IS_GEN3(dev)) {
+ init_unused_ring(dev, PRB1_BASE);
+ init_unused_ring(dev, PRB2_BASE);
+ }
+}
+
+int i915_gem_init_rings(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int ret;
+ /*
+ * At least 830 can leave some of the unused rings
+ * "active" (ie. head != tail) after resume which
+ * will prevent c3 entry. Make sure all unused rings
+ * are totally idle.
+ */
+ init_unused_rings(dev);
+
ret = intel_init_render_ring_buffer(dev);
if (ret)
return ret;
i915_gem_init_swizzling(dev);
- ret = i915_gem_init_rings(dev);
+ ret = dev_priv->gt.init_rings(dev);
if (ret)
return ret;
if (ret && ret != -EIO) {
DRM_ERROR("Context enable failed %d\n", ret);
i915_gem_cleanup_ringbuffer(dev);
+
+ return ret;
+ }
+
+ ret = i915_ppgtt_init_hw(dev);
+ if (ret && ret != -EIO) {
+ DRM_ERROR("PPGTT enable failed %d\n", ret);
+ i915_gem_cleanup_ringbuffer(dev);
}
return ret;
struct drm_i915_private *dev_priv = dev->dev_private;
int ret;
+ i915.enable_execlists = intel_sanitize_enable_execlists(dev,
+ i915.enable_execlists);
+
mutex_lock(&dev->struct_mutex);
if (IS_VALLEYVIEW(dev)) {
DRM_DEBUG_DRIVER("allow wake ack timed out\n");
}
- i915_gem_init_userptr(dev);
+ if (!i915.enable_execlists) {
+ dev_priv->gt.do_execbuf = i915_gem_ringbuffer_submission;
+ dev_priv->gt.init_rings = i915_gem_init_rings;
+ dev_priv->gt.cleanup_ring = intel_cleanup_ring_buffer;
+ dev_priv->gt.stop_ring = intel_stop_ring_buffer;
+ } else {
+ dev_priv->gt.do_execbuf = intel_execlists_submission;
+ dev_priv->gt.init_rings = intel_logical_rings_init;
+ dev_priv->gt.cleanup_ring = intel_logical_ring_cleanup;
+ dev_priv->gt.stop_ring = intel_logical_ring_stop;
+ }
+
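With the vtable filled in once here, call sites stay backend-agnostic
rather than repeating the enable_execlists test; the ring-stop path
earlier in this patch, for example, reduces to:

	for_each_ring(ring, dev_priv, i)
		dev_priv->gt.stop_ring(ring);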
+ ret = i915_gem_init_userptr(dev);
+ if (ret) {
+ mutex_unlock(&dev->struct_mutex);
+ return ret;
+ }
+
i915_gem_init_global_gtt(dev);
ret = i915_gem_context_init(dev);
int i;
for_each_ring(ring, dev_priv, i)
- intel_cleanup_ring_buffer(ring);
+ dev_priv->gt.cleanup_ring(ring);
}
int
struct drm_i915_private *dev_priv = o->base.dev->dev_private;
struct i915_vma *vma;
- if (!dev_priv->mm.aliasing_ppgtt ||
- vm == &dev_priv->mm.aliasing_ppgtt->base)
- vm = &dev_priv->gtt.base;
+ WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);
list_for_each_entry(vma, &o->vma_list, vma_link) {
if (vma->vm == vm)
struct drm_i915_private *dev_priv = o->base.dev->dev_private;
struct i915_vma *vma;
- if (!dev_priv->mm.aliasing_ppgtt ||
- vm == &dev_priv->mm.aliasing_ppgtt->base)
- vm = &dev_priv->gtt.base;
+ WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);
BUG_ON(list_empty(&o->vma_list));
if (!i915_gem_shrinker_lock(dev, &unlock))
return SHRINK_STOP;
- freed = i915_gem_purge(dev_priv, sc->nr_to_scan);
+ freed = i915_gem_shrink(dev_priv,
+ sc->nr_to_scan,
+ I915_SHRINK_BOUND |
+ I915_SHRINK_UNBOUND |
+ I915_SHRINK_PURGEABLE);
if (freed < sc->nr_to_scan)
- freed += __i915_gem_shrink(dev_priv,
- sc->nr_to_scan - freed,
- false);
+ freed += i915_gem_shrink(dev_priv,
+ sc->nr_to_scan - freed,
+ I915_SHRINK_BOUND |
+ I915_SHRINK_UNBOUND);
if (unlock)
mutex_unlock(&dev->struct_mutex);
{
struct i915_vma *vma;
- /* This WARN has probably outlived its usefulness (callers already
- * WARN if they don't find the GGTT vma they expect). When removing,
- * remember to remove the pre-check in is_pin_display() as well */
- if (WARN_ON(list_empty(&obj->vma_list)))
- return NULL;
-
vma = list_first_entry(&obj->vma_list, typeof(*vma), vma_link);
- if (vma->vm != obj_to_ggtt(obj))
+ if (vma->vm != i915_obj_to_ggtt(obj))
return NULL;
return vma;
#define GEN6_CONTEXT_ALIGN (64<<10)
#define GEN7_CONTEXT_ALIGN 4096
-static void do_ppgtt_cleanup(struct i915_hw_ppgtt *ppgtt)
-{
- struct drm_device *dev = ppgtt->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct i915_address_space *vm = &ppgtt->base;
-
- if (ppgtt == dev_priv->mm.aliasing_ppgtt ||
- (list_empty(&vm->active_list) && list_empty(&vm->inactive_list))) {
- ppgtt->base.cleanup(&ppgtt->base);
- return;
- }
-
- /*
- * Make sure vmas are unbound before we take down the drm_mm
- *
- * FIXME: Proper refcounting should take care of this, this shouldn't be
- * needed at all.
- */
- if (!list_empty(&vm->active_list)) {
- struct i915_vma *vma;
-
- list_for_each_entry(vma, &vm->active_list, mm_list)
- if (WARN_ON(list_empty(&vma->vma_link) ||
- list_is_singular(&vma->vma_link)))
- break;
-
- i915_gem_evict_vm(&ppgtt->base, true);
- } else {
- i915_gem_retire_requests(dev);
- i915_gem_evict_vm(&ppgtt->base, false);
- }
-
- ppgtt->base.cleanup(&ppgtt->base);
-}
-
-static void ppgtt_release(struct kref *kref)
-{
- struct i915_hw_ppgtt *ppgtt =
- container_of(kref, struct i915_hw_ppgtt, ref);
-
- do_ppgtt_cleanup(ppgtt);
- kfree(ppgtt);
-}
-
static size_t get_context_alignment(struct drm_device *dev)
{
if (IS_GEN6(dev))
void i915_gem_context_free(struct kref *ctx_ref)
{
struct intel_context *ctx = container_of(ctx_ref,
- typeof(*ctx), ref);
- struct i915_hw_ppgtt *ppgtt = NULL;
+ typeof(*ctx), ref);
- if (ctx->legacy_hw_ctx.rcs_state) {
- /* We refcount even the aliasing PPGTT to keep the code symmetric */
- if (USES_PPGTT(ctx->legacy_hw_ctx.rcs_state->base.dev))
- ppgtt = ctx_to_ppgtt(ctx);
- }
+ if (i915.enable_execlists)
+ intel_lr_context_free(ctx);
+
+ i915_ppgtt_put(ctx->ppgtt);
- if (ppgtt)
- kref_put(&ppgtt->ref, ppgtt_release);
if (ctx->legacy_hw_ctx.rcs_state)
drm_gem_object_unreference(&ctx->legacy_hw_ctx.rcs_state->base);
list_del(&ctx->link);
kfree(ctx);
}
-static struct drm_i915_gem_object *
+struct drm_i915_gem_object *
i915_gem_alloc_context_obj(struct drm_device *dev, size_t size)
{
struct drm_i915_gem_object *obj;
return obj;
}
-static struct i915_hw_ppgtt *
-create_vm_for_ctx(struct drm_device *dev, struct intel_context *ctx)
-{
- struct i915_hw_ppgtt *ppgtt;
- int ret;
-
- ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
- if (!ppgtt)
- return ERR_PTR(-ENOMEM);
-
- ret = i915_gem_init_ppgtt(dev, ppgtt);
- if (ret) {
- kfree(ppgtt);
- return ERR_PTR(ret);
- }
-
- ppgtt->ctx = ctx;
- return ppgtt;
-}
-
static struct intel_context *
__create_hw_context(struct drm_device *dev,
- struct drm_i915_file_private *file_priv)
+ struct drm_i915_file_private *file_priv)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_context *ctx;
*/
static struct intel_context *
i915_gem_create_context(struct drm_device *dev,
- struct drm_i915_file_private *file_priv,
- bool create_vm)
+ struct drm_i915_file_private *file_priv)
{
const bool is_global_default_ctx = file_priv == NULL;
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_context *ctx;
int ret = 0;
}
}
- if (create_vm) {
- struct i915_hw_ppgtt *ppgtt = create_vm_for_ctx(dev, ctx);
+ if (USES_FULL_PPGTT(dev)) {
+ struct i915_hw_ppgtt *ppgtt = i915_ppgtt_create(dev, file_priv);
if (IS_ERR_OR_NULL(ppgtt)) {
DRM_DEBUG_DRIVER("PPGTT setup failed (%ld)\n",
PTR_ERR(ppgtt));
ret = PTR_ERR(ppgtt);
goto err_unpin;
- } else
- ctx->vm = &ppgtt->base;
-
- /* This case is reserved for the global default context and
- * should only happen once. */
- if (is_global_default_ctx) {
- if (WARN_ON(dev_priv->mm.aliasing_ppgtt)) {
- ret = -EEXIST;
- goto err_unpin;
- }
-
- dev_priv->mm.aliasing_ppgtt = ppgtt;
}
- } else if (USES_PPGTT(dev)) {
- /* For platforms which only have aliasing PPGTT, we fake the
- * address space and refcounting. */
- ctx->vm = &dev_priv->mm.aliasing_ppgtt->base;
- kref_get(&dev_priv->mm.aliasing_ppgtt->ref);
- } else
- ctx->vm = &dev_priv->gtt.base;
+
+ ctx->ppgtt = ppgtt;
+ }
return ctx;
struct drm_i915_private *dev_priv = dev->dev_private;
int i;
- /* Prevent the hardware from restoring the last context (which hung) on
- * the next switch */
+ /* In execlists mode we will unreference the context when the execlist
+ * queue is cleared and the requests destroyed.
+ */
+ if (i915.enable_execlists)
+ return;
+
for (i = 0; i < I915_NUM_RINGS; i++) {
struct intel_engine_cs *ring = &dev_priv->ring[i];
- struct intel_context *dctx = ring->default_context;
struct intel_context *lctx = ring->last_context;
- /* Do a fake switch to the default context */
- if (lctx == dctx)
- continue;
-
- if (!lctx)
- continue;
+ if (lctx) {
+ if (lctx->legacy_hw_ctx.rcs_state && i == RCS)
+ i915_gem_object_ggtt_unpin(lctx->legacy_hw_ctx.rcs_state);
- if (dctx->legacy_hw_ctx.rcs_state && i == RCS) {
- WARN_ON(i915_gem_obj_ggtt_pin(dctx->legacy_hw_ctx.rcs_state,
- get_context_alignment(dev), 0));
- /* Fake a finish/inactive */
- dctx->legacy_hw_ctx.rcs_state->base.write_domain = 0;
- dctx->legacy_hw_ctx.rcs_state->active = 0;
+ i915_gem_context_unreference(lctx);
+ ring->last_context = NULL;
}
-
- if (lctx->legacy_hw_ctx.rcs_state && i == RCS)
- i915_gem_object_ggtt_unpin(lctx->legacy_hw_ctx.rcs_state);
-
- i915_gem_context_unreference(lctx);
- i915_gem_context_reference(dctx);
- ring->last_context = dctx;
}
}
if (WARN_ON(dev_priv->ring[RCS].default_context))
return 0;
- if (HAS_HW_CONTEXTS(dev)) {
+ if (i915.enable_execlists) {
+ /* NB: intentionally left blank. We will allocate our own
+ * backing objects as we need them, thank you very much */
+ dev_priv->hw_context_size = 0;
+ } else if (HAS_HW_CONTEXTS(dev)) {
dev_priv->hw_context_size = round_up(get_context_size(dev), 4096);
if (dev_priv->hw_context_size > (1<<20)) {
DRM_DEBUG_DRIVER("Disabling HW Contexts; invalid size %d\n",
}
}
- ctx = i915_gem_create_context(dev, NULL, USES_PPGTT(dev));
+ ctx = i915_gem_create_context(dev, NULL);
if (IS_ERR(ctx)) {
DRM_ERROR("Failed to create default global context (error %ld)\n",
PTR_ERR(ctx));
return PTR_ERR(ctx);
}
- /* NB: RCS will hold a ref for all rings */
- for (i = 0; i < I915_NUM_RINGS; i++)
- dev_priv->ring[i].default_context = ctx;
+ for (i = 0; i < I915_NUM_RINGS; i++) {
+ struct intel_engine_cs *ring = &dev_priv->ring[i];
- DRM_DEBUG_DRIVER("%s context support initialized\n", dev_priv->hw_context_size ? "HW" : "fake");
+ /* NB: RCS will hold a ref for all rings */
+ ring->default_context = ctx;
+ }
+
+ DRM_DEBUG_DRIVER("%s context support initialized\n",
+ i915.enable_execlists ? "LR" :
+ dev_priv->hw_context_size ? "HW" : "fake");
return 0;
}
struct intel_engine_cs *ring;
int ret, i;
- /* This is the only place the aliasing PPGTT gets enabled, which means
- * it has to happen before we bail on reset */
- if (dev_priv->mm.aliasing_ppgtt) {
- struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
- ppgtt->enable(ppgtt);
- }
+ BUG_ON(!dev_priv->ring[RCS].default_context);
- /* FIXME: We should make this work, even in reset */
- if (i915_reset_in_progress(&dev_priv->gpu_error))
+ if (i915.enable_execlists)
return 0;
- BUG_ON(!dev_priv->ring[RCS].default_context);
-
for_each_ring(ring, dev_priv, i) {
ret = i915_switch_context(ring, ring->default_context);
if (ret)
idr_init(&file_priv->context_idr);
mutex_lock(&dev->struct_mutex);
- ctx = i915_gem_create_context(dev, file_priv, USES_FULL_PPGTT(dev));
+ ctx = i915_gem_create_context(dev, file_priv);
mutex_unlock(&dev->struct_mutex);
if (IS_ERR(ctx)) {
struct intel_context *new_context,
u32 hw_flags)
{
+ u32 flags = hw_flags | MI_MM_SPACE_GTT;
int ret;
/* w/a: If Flush TLB Invalidation Mode is enabled, driver must do a TLB
return ret;
}
+ /* On HSW+ these bits are repurposed for the resource streamer,
+ * so only request extended-state save/restore on earlier gens. */
+ if (!IS_HASWELL(ring->dev) && INTEL_INFO(ring->dev)->gen < 8)
+ flags |= (MI_SAVE_EXT_STATE_EN | MI_RESTORE_EXT_STATE_EN);
+
ret = intel_ring_begin(ring, 6);
if (ret)
return ret;
intel_ring_emit(ring, MI_NOOP);
intel_ring_emit(ring, MI_SET_CONTEXT);
intel_ring_emit(ring, i915_gem_obj_ggtt_offset(new_context->legacy_hw_ctx.rcs_state) |
- MI_MM_SPACE_GTT |
- MI_SAVE_EXT_STATE_EN |
- MI_RESTORE_EXT_STATE_EN |
- hw_flags);
+ flags);
/*
* w/a: MI_SET_CONTEXT must always be followed by MI_NOOP
* WaMiSetContext_Hang:snb,ivb,vlv
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
struct intel_context *from = ring->last_context;
- struct i915_hw_ppgtt *ppgtt = ctx_to_ppgtt(to);
u32 hw_flags = 0;
bool uninitialized = false;
int ret, i;
*/
from = ring->last_context;
- if (USES_FULL_PPGTT(ring->dev)) {
- ret = ppgtt->switch_mm(ppgtt, ring, false);
+ if (to->ppgtt) {
+ ret = to->ppgtt->switch_mm(to->ppgtt, ring);
if (ret)
goto unpin_out;
}
ring->last_context = to;
if (uninitialized) {
+ if (ring->init_context) {
+ ret = ring->init_context(ring);
+ if (ret)
+ DRM_ERROR("ring init context: %d\n", ret);
+ }
+
ret = i915_gem_render_state_init(ring);
if (ret)
DRM_ERROR("init render state: %d\n", ret);
*
* The context life cycle is simple. The context refcount is incremented and
* decremented by 1 and create and destroy. If the context is in use by the GPU,
- * it will have a refoucnt > 1. This allows us to destroy the context abstract
+ * it will have a refcount > 1. This allows us to destroy the context abstract
* object while letting the normal object tracking destroy the backing BO.
+ *
+ * This function should not be used in execlists mode. Instead the context is
+ * switched by writing to the ELSP and requests keep a reference to their
+ * context.
*/
int i915_switch_context(struct intel_engine_cs *ring,
struct intel_context *to)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
+ WARN_ON(i915.enable_execlists);
WARN_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex));
if (to->legacy_hw_ctx.rcs_state == NULL) { /* We have the fake context */
return do_switch(ring, to);
}
-static bool hw_context_enabled(struct drm_device *dev)
+static bool contexts_enabled(struct drm_device *dev)
{
- return to_i915(dev)->hw_context_size;
+ return i915.enable_execlists || to_i915(dev)->hw_context_size;
}
int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
struct intel_context *ctx;
int ret;
- if (!hw_context_enabled(dev))
+ if (!contexts_enabled(dev))
return -ENODEV;
ret = i915_mutex_lock_interruptible(dev);
if (ret)
return ret;
- ctx = i915_gem_create_context(dev, file_priv, USES_FULL_PPGTT(dev));
+ ctx = i915_gem_create_context(dev, file_priv);
mutex_unlock(&dev->struct_mutex);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
i915_gem_evict_everything(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- struct i915_address_space *vm;
+ struct i915_address_space *vm, *v;
bool lists_empty = true;
int ret;
i915_gem_retire_requests(dev);
/* Having flushed everything, unbind() should never raise an error */
- list_for_each_entry(vm, &dev_priv->vm_list, global_link)
+ list_for_each_entry_safe(vm, v, &dev_priv->vm_list, global_link)
WARN_ON(i915_gem_evict_vm(vm, false));
return 0;
#define __EXEC_OBJECT_HAS_PIN (1<<31)
#define __EXEC_OBJECT_HAS_FENCE (1<<30)
+#define __EXEC_OBJECT_NEEDS_MAP (1<<29)
#define __EXEC_OBJECT_NEEDS_BIAS (1<<28)
#define BATCH_OFFSET_BIAS (256*1024)
struct i915_address_space *vm,
struct drm_file *file)
{
- struct drm_i915_private *dev_priv = vm->dev->dev_private;
struct drm_i915_gem_object *obj;
struct list_head objects;
int i, ret;
i = 0;
while (!list_empty(&objects)) {
struct i915_vma *vma;
- struct i915_address_space *bind_vm = vm;
-
- if (exec[i].flags & EXEC_OBJECT_NEEDS_GTT &&
- USES_FULL_PPGTT(vm->dev)) {
- ret = -EINVAL;
- goto err;
- }
-
- /* If we have secure dispatch, or the userspace assures us that
- * they know what they're doing, use the GGTT VM.
- */
- if (((args->flags & I915_EXEC_SECURE) &&
- (i == (args->buffer_count - 1))))
- bind_vm = &dev_priv->gtt.base;
obj = list_first_entry(&objects,
struct drm_i915_gem_object,
* from the (obj, vm) we don't run the risk of creating
* duplicated vmas for the same vm.
*/
- vma = i915_gem_obj_lookup_or_create_vma(obj, bind_vm);
+ vma = i915_gem_obj_lookup_or_create_vma(obj, vm);
if (IS_ERR(vma)) {
DRM_DEBUG("Failed to lookup VMA\n");
ret = PTR_ERR(vma);
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
uint64_t delta = reloc->delta + target_offset;
- uint32_t __iomem *reloc_entry;
+ uint64_t offset;
void __iomem *reloc_page;
int ret;
return ret;
/* Map the page containing the relocation we're going to perform. */
- reloc->offset += i915_gem_obj_ggtt_offset(obj);
+ offset = i915_gem_obj_ggtt_offset(obj);
+ offset += reloc->offset;
reloc_page = io_mapping_map_atomic_wc(dev_priv->gtt.mappable,
- reloc->offset & PAGE_MASK);
- reloc_entry = (uint32_t __iomem *)
- (reloc_page + offset_in_page(reloc->offset));
- iowrite32(lower_32_bits(delta), reloc_entry);
+ offset & PAGE_MASK);
+ iowrite32(lower_32_bits(delta), reloc_page + offset_in_page(offset));
if (INTEL_INFO(dev)->gen >= 8) {
- reloc_entry += 1;
+ offset += sizeof(uint32_t);
- if (offset_in_page(reloc->offset + sizeof(uint32_t)) == 0) {
+ if (offset_in_page(offset) == 0) {
io_mapping_unmap_atomic(reloc_page);
- reloc_page = io_mapping_map_atomic_wc(
- dev_priv->gtt.mappable,
- reloc->offset + sizeof(uint32_t));
- reloc_entry = reloc_page;
+ reloc_page =
+ io_mapping_map_atomic_wc(dev_priv->gtt.mappable,
+ offset);
}
- iowrite32(upper_32_bits(delta), reloc_entry);
+ iowrite32(upper_32_bits(delta),
+ reloc_page + offset_in_page(offset));
}
io_mapping_unmap_atomic(reloc_page);
return ret;
}
-static int
-need_reloc_mappable(struct i915_vma *vma)
-{
- struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
- return entry->relocation_count && !use_cpu_reloc(vma->obj) &&
- i915_is_ggtt(vma->vm);
-}
-
static int
i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
struct intel_engine_cs *ring,
{
struct drm_i915_gem_object *obj = vma->obj;
struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
- bool has_fenced_gpu_access = INTEL_INFO(ring->dev)->gen < 4;
- bool need_fence;
uint64_t flags;
int ret;
flags = 0;
-
- need_fence =
- has_fenced_gpu_access &&
- entry->flags & EXEC_OBJECT_NEEDS_FENCE &&
- obj->tiling_mode != I915_TILING_NONE;
- if (need_fence || need_reloc_mappable(vma))
+ if (entry->flags & __EXEC_OBJECT_NEEDS_MAP)
flags |= PIN_MAPPABLE;
-
if (entry->flags & EXEC_OBJECT_NEEDS_GTT)
flags |= PIN_GLOBAL;
if (entry->flags & __EXEC_OBJECT_NEEDS_BIAS)
entry->flags |= __EXEC_OBJECT_HAS_PIN;
- if (has_fenced_gpu_access) {
- if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
- ret = i915_gem_object_get_fence(obj);
- if (ret)
- return ret;
-
- if (i915_gem_object_pin_fence(obj))
- entry->flags |= __EXEC_OBJECT_HAS_FENCE;
+ if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
+ ret = i915_gem_object_get_fence(obj);
+ if (ret)
+ return ret;
- obj->pending_fenced_gpu_access = true;
- }
+ if (i915_gem_object_pin_fence(obj))
+ entry->flags |= __EXEC_OBJECT_HAS_FENCE;
}
if (entry->offset != vma->node.start) {
}
static bool
-eb_vma_misplaced(struct i915_vma *vma, bool has_fenced_gpu_access)
+need_reloc_mappable(struct i915_vma *vma)
{
struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
- struct drm_i915_gem_object *obj = vma->obj;
- bool need_fence, need_mappable;
- need_fence =
- has_fenced_gpu_access &&
- entry->flags & EXEC_OBJECT_NEEDS_FENCE &&
- obj->tiling_mode != I915_TILING_NONE;
- need_mappable = need_fence || need_reloc_mappable(vma);
+ if (entry->relocation_count == 0)
+ return false;
+
+ if (!i915_is_ggtt(vma->vm))
+ return false;
+
+ /* See also use_cpu_reloc() */
+ if (HAS_LLC(vma->obj->base.dev))
+ return false;
- WARN_ON((need_mappable || need_fence) &&
+ if (vma->obj->base.write_domain == I915_GEM_DOMAIN_CPU)
+ return false;
+
+ return true;
+}
+
+static bool
+eb_vma_misplaced(struct i915_vma *vma)
+{
+ struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
+ struct drm_i915_gem_object *obj = vma->obj;
+
+ WARN_ON(entry->flags & __EXEC_OBJECT_NEEDS_MAP &&
!i915_is_ggtt(vma->vm));
if (entry->alignment &&
vma->node.start & (entry->alignment - 1))
return true;
- if (need_mappable && !obj->map_and_fenceable)
+ if (entry->flags & __EXEC_OBJECT_NEEDS_MAP && !obj->map_and_fenceable)
return true;
if (entry->flags & __EXEC_OBJECT_NEEDS_BIAS &&
bool has_fenced_gpu_access = INTEL_INFO(ring->dev)->gen < 4;
int retry;
- if (list_empty(vmas))
- return 0;
-
i915_gem_retire_requests_ring(ring);
vm = list_first_entry(vmas, struct i915_vma, exec_list)->vm;
obj = vma->obj;
entry = vma->exec_entry;
+ if (!has_fenced_gpu_access)
+ entry->flags &= ~EXEC_OBJECT_NEEDS_FENCE;
need_fence =
- has_fenced_gpu_access &&
entry->flags & EXEC_OBJECT_NEEDS_FENCE &&
obj->tiling_mode != I915_TILING_NONE;
need_mappable = need_fence || need_reloc_mappable(vma);
- if (need_mappable)
+ if (need_mappable) {
+ entry->flags |= __EXEC_OBJECT_NEEDS_MAP;
list_move(&vma->exec_list, &ordered_vmas);
- else
+ } else
list_move_tail(&vma->exec_list, &ordered_vmas);
obj->base.pending_read_domains = I915_GEM_GPU_DOMAINS & ~I915_GEM_DOMAIN_COMMAND;
obj->base.pending_write_domain = 0;
- obj->pending_fenced_gpu_access = false;
}
list_splice(&ordered_vmas, vmas);
if (!drm_mm_node_allocated(&vma->node))
continue;
- if (eb_vma_misplaced(vma, has_fenced_gpu_access))
+ if (eb_vma_misplaced(vma))
ret = i915_vma_unbind(vma);
else
ret = i915_gem_execbuffer_reserve_vma(vma, ring, need_relocs);
int i, total, ret;
unsigned count = args->buffer_count;
- if (WARN_ON(list_empty(&eb->vmas)))
- return 0;
-
vm = list_first_entry(&eb->vmas, struct i915_vma, exec_list)->vm;
/* We may process another execbuffer during the unlock... */
}
static int
-validate_exec_list(struct drm_i915_gem_exec_object2 *exec,
+validate_exec_list(struct drm_device *dev,
+ struct drm_i915_gem_exec_object2 *exec,
int count)
{
- int i;
unsigned relocs_total = 0;
unsigned relocs_max = UINT_MAX / sizeof(struct drm_i915_gem_relocation_entry);
+ unsigned invalid_flags;
+ int i;
+
+ invalid_flags = __EXEC_OBJECT_UNKNOWN_FLAGS;
+ if (USES_FULL_PPGTT(dev))
+ invalid_flags |= EXEC_OBJECT_NEEDS_GTT;
for (i = 0; i < count; i++) {
char __user *ptr = to_user_ptr(exec[i].relocs_ptr);
int length; /* limited by fault_in_pages_readable() */
- if (exec[i].flags & __EXEC_OBJECT_UNKNOWN_FLAGS)
+ if (exec[i].flags & invalid_flags)
return -EINVAL;
/* First check for malicious input causing overflow in
return ERR_PTR(-EIO);
}
+ if (i915.enable_execlists && !ctx->engine[ring->id].state) {
+ int ret = intel_lr_context_deferred_create(ctx, ring);
+ if (ret) {
+ DRM_DEBUG("Could not create LRC %u: %d\n", ctx_id, ret);
+ return ERR_PTR(ret);
+ }
+ }
+
return ctx;
}
-static void
+void
i915_gem_execbuffer_move_to_active(struct list_head *vmas,
struct intel_engine_cs *ring)
{
+ u32 seqno = intel_ring_get_seqno(ring);
struct i915_vma *vma;
list_for_each_entry(vma, vmas, exec_list) {
+ struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
struct drm_i915_gem_object *obj = vma->obj;
u32 old_read = obj->base.read_domains;
u32 old_write = obj->base.write_domain;
if (obj->base.write_domain == 0)
obj->base.pending_read_domains |= obj->base.read_domains;
obj->base.read_domains = obj->base.pending_read_domains;
- obj->fenced_gpu_access = obj->pending_fenced_gpu_access;
i915_vma_move_to_active(vma, ring);
if (obj->base.write_domain) {
obj->dirty = 1;
- obj->last_write_seqno = intel_ring_get_seqno(ring);
+ obj->last_write_seqno = seqno;
intel_fb_obj_invalidate(obj, ring);
/* update for the implicit flush after a batch */
obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
}
+ if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
+ obj->last_fenced_seqno = seqno;
+ if (entry->flags & __EXEC_OBJECT_HAS_FENCE) {
+ struct drm_i915_private *dev_priv = to_i915(ring->dev);
+ list_move_tail(&dev_priv->fence_regs[obj->fence_reg].lru_list,
+ &dev_priv->mm.fence_list);
+ }
+ }
trace_i915_gem_object_change_domain(obj, old_read, old_write);
}
}
-static void
+void
i915_gem_execbuffer_retire_commands(struct drm_device *dev,
struct drm_file *file,
struct intel_engine_cs *ring,
return 0;
}
-static int
-legacy_ringbuffer_submission(struct drm_device *dev, struct drm_file *file,
- struct intel_engine_cs *ring,
- struct intel_context *ctx,
- struct drm_i915_gem_execbuffer2 *args,
- struct list_head *vmas,
- struct drm_i915_gem_object *batch_obj,
- u64 exec_start, u32 flags)
+int
+i915_gem_ringbuffer_submission(struct drm_device *dev, struct drm_file *file,
+ struct intel_engine_cs *ring,
+ struct intel_context *ctx,
+ struct drm_i915_gem_execbuffer2 *args,
+ struct list_head *vmas,
+ struct drm_i915_gem_object *batch_obj,
+ u64 exec_start, u32 flags)
{
struct drm_clip_rect *cliprects = NULL;
struct drm_i915_private *dev_priv = dev->dev_private;
if (!i915_gem_check_execbuffer(args))
return -EINVAL;
- ret = validate_exec_list(exec, args->buffer_count);
+ ret = validate_exec_list(dev, exec, args->buffer_count);
if (ret)
return ret;
i915_gem_context_reference(ctx);
- vm = ctx->vm;
- if (!USES_FULL_PPGTT(dev))
+ if (ctx->ppgtt)
+ vm = &ctx->ppgtt->base;
+ else
vm = &dev_priv->gtt.base;
eb = eb_create(args);
/* snb/ivb/vlv conflate the "batch in ppgtt" bit with the "non-secure
* batch" bit. Hence we need to pin secure batches into the global gtt.
* hsw should have this fixed, but bdw mucks it up again. */
- if (flags & I915_DISPATCH_SECURE &&
- !batch_obj->has_global_gtt_mapping) {
- /* When we have multiple VMs, we'll need to make sure that we
- * allocate space first */
- struct i915_vma *vma = i915_gem_obj_to_ggtt(batch_obj);
- BUG_ON(!vma);
- vma->bind_vma(vma, batch_obj->cache_level, GLOBAL_BIND);
- }
+ if (flags & I915_DISPATCH_SECURE) {
+ /*
+ * So at first glance it looks freaky that we pin the batch here
+ * outside of the reservation loop. But:
+ * - The batch is already pinned into the relevant ppgtt, so we
+ * already have the backing storage fully allocated.
+ * - No other BO uses the global gtt (well contexts, but meh),
+ * so we don't really have issues with multiple objects not
+ * fitting due to fragmentation.
+ * So this is actually safe.
+ */
+ ret = i915_gem_obj_ggtt_pin(batch_obj, 0, 0);
+ if (ret)
+ goto err;
- if (flags & I915_DISPATCH_SECURE)
exec_start += i915_gem_obj_ggtt_offset(batch_obj);
- else
+ } else
exec_start += i915_gem_obj_offset(batch_obj, vm);
- ret = legacy_ringbuffer_submission(dev, file, ring, ctx,
- args, &eb->vmas, batch_obj, exec_start, flags);
- if (ret)
- goto err;
+ ret = dev_priv->gt.do_execbuf(dev, file, ring, ctx, args,
+ &eb->vmas, batch_obj, exec_start, flags);
+ /*
+ * FIXME: We crucially rely upon the active tracking for the (ppgtt)
+ * batch vma for correctness. To be less ugly and less fragile, this
+ * needs to be adjusted to also track the ggtt batch vma properly as
+ * active.
+ */
+ if (flags & I915_DISPATCH_SECURE)
+ i915_gem_object_ggtt_unpin(batch_obj);
err:
/* the request owns the ref now */
i915_gem_context_unreference(ctx);
static void bdw_setup_private_ppat(struct drm_i915_private *dev_priv);
static void chv_setup_private_ppat(struct drm_i915_private *dev_priv);
-bool intel_enable_ppgtt(struct drm_device *dev, bool full)
-{
- if (i915.enable_ppgtt == 0)
- return false;
-
- if (i915.enable_ppgtt == 1 && full)
- return false;
-
- return true;
-}
-
static int sanitize_enable_ppgtt(struct drm_device *dev, int enable_ppgtt)
{
if (enable_ppgtt == 0 || !HAS_ALIASING_PPGTT(dev))
enum i915_cache_level cache_level,
u32 flags);
static void ppgtt_unbind_vma(struct i915_vma *vma);
-static int gen8_ppgtt_enable(struct i915_hw_ppgtt *ppgtt);
static inline gen8_gtt_pte_t gen8_pte_encode(dma_addr_t addr,
enum i915_cache_level level,
/* Broadwell Page Directory Pointer Descriptors */
static int gen8_write_pdp(struct intel_engine_cs *ring, unsigned entry,
- uint64_t val, bool synchronous)
+ uint64_t val)
{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
int ret;
BUG_ON(entry >= 4);
- if (synchronous) {
- I915_WRITE(GEN8_RING_PDP_UDW(ring, entry), val >> 32);
- I915_WRITE(GEN8_RING_PDP_LDW(ring, entry), (u32)val);
- return 0;
- }
-
ret = intel_ring_begin(ring, 6);
if (ret)
return ret;
}
static int gen8_mm_switch(struct i915_hw_ppgtt *ppgtt,
- struct intel_engine_cs *ring,
- bool synchronous)
+ struct intel_engine_cs *ring)
{
int i, ret;
for (i = used_pd - 1; i >= 0; i--) {
dma_addr_t addr = ppgtt->pd_dma_addr[i];
- ret = gen8_write_pdp(ring, i, addr, synchronous);
+ ret = gen8_write_pdp(ring, i, addr);
if (ret)
return ret;
}
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
- list_del(&vm->global_link);
- drm_mm_takedown(&vm->mm);
-
gen8_ppgtt_unmap_pages(ppgtt);
gen8_ppgtt_free(ppgtt);
}
kunmap_atomic(pd_vaddr);
}
- ppgtt->enable = gen8_ppgtt_enable;
ppgtt->switch_mm = gen8_mm_switch;
ppgtt->base.clear_range = gen8_ppgtt_clear_range;
ppgtt->base.insert_entries = gen8_ppgtt_insert_entries;
}
static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
- struct intel_engine_cs *ring,
- bool synchronous)
+ struct intel_engine_cs *ring)
{
- struct drm_device *dev = ppgtt->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
int ret;
- /* If we're in reset, we can assume the GPU is sufficiently idle to
- * manually frob these bits. Ideally we could use the ring functions,
- * except our error handling makes it quite difficult (can't use
- * intel_ring_begin, ring->flush, or intel_ring_advance)
- *
- * FIXME: We should try not to special case reset
- */
- if (synchronous ||
- i915_reset_in_progress(&dev_priv->gpu_error)) {
- WARN_ON(ppgtt != dev_priv->mm.aliasing_ppgtt);
- I915_WRITE(RING_PP_DIR_DCLV(ring), PP_DIR_DCLV_2G);
- I915_WRITE(RING_PP_DIR_BASE(ring), get_pd_offset(ppgtt));
- POSTING_READ(RING_PP_DIR_BASE(ring));
- return 0;
- }
-
/* NB: TLBs must be flushed and invalidated before a switch */
ret = ring->flush(ring, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
if (ret)
}
static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
- struct intel_engine_cs *ring,
- bool synchronous)
+ struct intel_engine_cs *ring)
{
- struct drm_device *dev = ppgtt->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
int ret;
- /* If we're in reset, we can assume the GPU is sufficiently idle to
- * manually frob these bits. Ideally we could use the ring functions,
- * except our error handling makes it quite difficult (can't use
- * intel_ring_begin, ring->flush, or intel_ring_advance)
- *
- * FIXME: We should try not to special case reset
- */
- if (synchronous ||
- i915_reset_in_progress(&dev_priv->gpu_error)) {
- WARN_ON(ppgtt != dev_priv->mm.aliasing_ppgtt);
- I915_WRITE(RING_PP_DIR_DCLV(ring), PP_DIR_DCLV_2G);
- I915_WRITE(RING_PP_DIR_BASE(ring), get_pd_offset(ppgtt));
- POSTING_READ(RING_PP_DIR_BASE(ring));
- return 0;
- }
-
/* NB: TLBs must be flushed and invalidated before a switch */
ret = ring->flush(ring, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
if (ret)
}
static int gen6_mm_switch(struct i915_hw_ppgtt *ppgtt,
- struct intel_engine_cs *ring,
- bool synchronous)
+ struct intel_engine_cs *ring)
{
struct drm_device *dev = ppgtt->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- if (!synchronous)
- return 0;
I915_WRITE(RING_PP_DIR_DCLV(ring), PP_DIR_DCLV_2G);
I915_WRITE(RING_PP_DIR_BASE(ring), get_pd_offset(ppgtt));
return 0;
}
-static int gen8_ppgtt_enable(struct i915_hw_ppgtt *ppgtt)
+static void gen8_ppgtt_enable(struct drm_device *dev)
{
- struct drm_device *dev = ppgtt->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *ring;
- int j, ret;
+ int j;
for_each_ring(ring, dev_priv, j) {
I915_WRITE(RING_MODE_GEN7(ring),
_MASKED_BIT_ENABLE(GFX_PPGTT_ENABLE));
-
- /* We promise to do a switch later with FULL PPGTT. If this is
- * aliasing, this is the one and only switch we'll do */
- if (USES_FULL_PPGTT(dev))
- continue;
-
- ret = ppgtt->switch_mm(ppgtt, ring, true);
- if (ret)
- goto err_out;
}
-
- return 0;
-
-err_out:
- for_each_ring(ring, dev_priv, j)
- I915_WRITE(RING_MODE_GEN7(ring),
- _MASKED_BIT_DISABLE(GFX_PPGTT_ENABLE));
- return ret;
}
-static int gen7_ppgtt_enable(struct i915_hw_ppgtt *ppgtt)
+static void gen7_ppgtt_enable(struct drm_device *dev)
{
- struct drm_device *dev = ppgtt->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *ring;
uint32_t ecochk, ecobits;
I915_WRITE(GAM_ECOCHK, ecochk);
for_each_ring(ring, dev_priv, i) {
- int ret;
/* GFX_MODE is per-ring on gen7+ */
I915_WRITE(RING_MODE_GEN7(ring),
_MASKED_BIT_ENABLE(GFX_PPGTT_ENABLE));
-
- /* We promise to do a switch later with FULL PPGTT. If this is
- * aliasing, this is the one and only switch we'll do */
- if (USES_FULL_PPGTT(dev))
- continue;
-
- ret = ppgtt->switch_mm(ppgtt, ring, true);
- if (ret)
- return ret;
}
-
- return 0;
}
-static int gen6_ppgtt_enable(struct i915_hw_ppgtt *ppgtt)
+static void gen6_ppgtt_enable(struct drm_device *dev)
{
- struct drm_device *dev = ppgtt->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_engine_cs *ring;
uint32_t ecochk, gab_ctl, ecobits;
- int i;
ecobits = I915_READ(GAC_ECO_BITS);
I915_WRITE(GAC_ECO_BITS, ecobits | ECOBITS_SNB_BIT |
I915_WRITE(GAM_ECOCHK, ecochk | ECOCHK_SNB_BIT | ECOCHK_PPGTT_CACHE64B);
I915_WRITE(GFX_MODE, _MASKED_BIT_ENABLE(GFX_PPGTT_ENABLE));
-
- for_each_ring(ring, dev_priv, i) {
- int ret = ppgtt->switch_mm(ppgtt, ring, true);
- if (ret)
- return ret;
- }
-
- return 0;
}
/* PPGTT support for Sandybridge/Gen6 and later */
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
- list_del(&vm->global_link);
- drm_mm_takedown(&ppgtt->base.mm);
drm_mm_remove_node(&ppgtt->node);
gen6_ppgtt_unmap_pages(ppgtt);
ppgtt->base.pte_encode = dev_priv->gtt.base.pte_encode;
if (IS_GEN6(dev)) {
- ppgtt->enable = gen6_ppgtt_enable;
ppgtt->switch_mm = gen6_mm_switch;
} else if (IS_HASWELL(dev)) {
- ppgtt->enable = gen7_ppgtt_enable;
ppgtt->switch_mm = hsw_mm_switch;
} else if (IS_GEN7(dev)) {
- ppgtt->enable = gen7_ppgtt_enable;
ppgtt->switch_mm = gen7_mm_switch;
} else
BUG();
ppgtt->node.size >> 20,
ppgtt->node.start / PAGE_SIZE);
+ gen6_write_pdes(ppgtt);
+ DRM_DEBUG("Adding PPGTT at offset %x\n",
+ ppgtt->pd_offset << 10);
+
return 0;
}
-int i915_gem_init_ppgtt(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt)
+static int __hw_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- int ret = 0;
ppgtt->base.dev = dev;
ppgtt->base.scratch = dev_priv->gtt.base.scratch;
if (INTEL_INFO(dev)->gen < 8)
- ret = gen6_ppgtt_init(ppgtt);
+ return gen6_ppgtt_init(ppgtt);
else if (IS_GEN8(dev))
- ret = gen8_ppgtt_init(ppgtt, dev_priv->gtt.base.total);
+ return gen8_ppgtt_init(ppgtt, dev_priv->gtt.base.total);
else
BUG();
+}
+int i915_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int ret = 0;
- if (!ret) {
- struct drm_i915_private *dev_priv = dev->dev_private;
+ ret = __hw_ppgtt_init(dev, ppgtt);
+ if (ret == 0) {
kref_init(&ppgtt->ref);
drm_mm_init(&ppgtt->base.mm, ppgtt->base.start,
ppgtt->base.total);
i915_init_vm(dev_priv, &ppgtt->base);
- if (INTEL_INFO(dev)->gen < 8) {
- gen6_write_pdes(ppgtt);
- DRM_DEBUG("Adding PPGTT at offset %x\n",
- ppgtt->pd_offset << 10);
+ }
+
+ return ret;
+}
+
+int i915_ppgtt_init_hw(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_engine_cs *ring;
+ struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
+ int i, ret = 0;
+
+ /* In the case of execlists, PPGTT is enabled by the context descriptor
+ * and the PDPs are contained within the context itself. We don't
+ * need to do anything here. */
+ if (i915.enable_execlists)
+ return 0;
+
+ if (!USES_PPGTT(dev))
+ return 0;
+
+ if (IS_GEN6(dev))
+ gen6_ppgtt_enable(dev);
+ else if (IS_GEN7(dev))
+ gen7_ppgtt_enable(dev);
+ else if (INTEL_INFO(dev)->gen >= 8)
+ gen8_ppgtt_enable(dev);
+ else
+ WARN_ON(1);
+
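+	/* With aliasing PPGTT there is a single page directory shared by
+	 * all contexts, so load it into every ring here; full PPGTT does
+	 * its switches as part of ordinary context switching instead. */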
+ if (ppgtt) {
+ for_each_ring(ring, dev_priv, i) {
+ ret = ppgtt->switch_mm(ppgtt, ring);
+ if (ret != 0)
+ return ret;
}
}
return ret;
}
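+/*
+ * Allocate and initialise a new full PPGTT for a client. Returns an
+ * ERR_PTR on failure; the result is reference counted and should be
+ * released with i915_ppgtt_put().
+ */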
+struct i915_hw_ppgtt *
+i915_ppgtt_create(struct drm_device *dev, struct drm_i915_file_private *fpriv)
+{
+ struct i915_hw_ppgtt *ppgtt;
+ int ret;
+
+ ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
+ if (!ppgtt)
+ return ERR_PTR(-ENOMEM);
+
+ ret = i915_ppgtt_init(dev, ppgtt);
+ if (ret) {
+ kfree(ppgtt);
+ return ERR_PTR(ret);
+ }
+
+ ppgtt->file_priv = fpriv;
+
+ return ppgtt;
+}
+
+void i915_ppgtt_release(struct kref *kref)
+{
+ struct i915_hw_ppgtt *ppgtt =
+ container_of(kref, struct i915_hw_ppgtt, ref);
+
+ /* vmas should already be unbound */
+ WARN_ON(!list_empty(&ppgtt->base.active_list));
+ WARN_ON(!list_empty(&ppgtt->base.inactive_list));
+
+ list_del(&ppgtt->base.global_link);
+ drm_mm_takedown(&ppgtt->base.mm);
+
+ ppgtt->base.cleanup(&ppgtt->base);
+ kfree(ppgtt);
+}
static void
ppgtt_bind_vma(struct i915_vma *vma,
}
}
-void i915_gem_setup_global_gtt(struct drm_device *dev,
- unsigned long start,
- unsigned long mappable_end,
- unsigned long end)
+int i915_gem_setup_global_gtt(struct drm_device *dev,
+ unsigned long start,
+ unsigned long mappable_end,
+ unsigned long end)
{
	/* Let GEM manage all of the aperture.
*
struct drm_mm_node *entry;
struct drm_i915_gem_object *obj;
unsigned long hole_start, hole_end;
+ int ret;
BUG_ON(mappable_end > end);
/* Mark any preallocated objects as occupied */
list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
struct i915_vma *vma = i915_gem_obj_to_vma(obj, ggtt_vm);
- int ret;
+
DRM_DEBUG_KMS("reserving preallocated space: %lx + %zx\n",
i915_gem_obj_ggtt_offset(obj), obj->base.size);
WARN_ON(i915_gem_obj_ggtt_bound(obj));
ret = drm_mm_reserve_node(&ggtt_vm->mm, &vma->node);
- if (ret)
- DRM_DEBUG_KMS("Reservation failed\n");
+ if (ret) {
+ DRM_DEBUG_KMS("Reservation failed: %i\n", ret);
+ return ret;
+ }
obj->has_global_gtt_mapping = 1;
}
/* And finally clear the reserved guard page */
ggtt_vm->clear_range(ggtt_vm, end - PAGE_SIZE, PAGE_SIZE, true);
+
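+	/* When PPGTT is enabled but not in full mode, set up the single
+	 * aliasing PPGTT that shadows the global GTT. */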
+ if (USES_PPGTT(dev) && !USES_FULL_PPGTT(dev)) {
+ struct i915_hw_ppgtt *ppgtt;
+
+ ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
+ if (!ppgtt)
+ return -ENOMEM;
+
+ ret = __hw_ppgtt_init(dev, ppgtt);
+ if (ret != 0)
+ return ret;
+
+ dev_priv->mm.aliasing_ppgtt = ppgtt;
+ }
+
+ return 0;
}
void i915_gem_init_global_gtt(struct drm_device *dev)
i915_gem_setup_global_gtt(dev, 0, mappable_size, gtt_size);
}
+void i915_global_gtt_cleanup(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct i915_address_space *vm = &dev_priv->gtt.base;
+
+ if (dev_priv->mm.aliasing_ppgtt) {
+ struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
+
+ ppgtt->base.cleanup(&ppgtt->base);
+ }
+
+ if (drm_mm_initialized(&vm->mm)) {
+ drm_mm_takedown(&vm->mm);
+ list_del(&vm->global_link);
+ }
+
+ vm->cleanup(vm);
+}
+
static int setup_scratch_page(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_gtt *gtt = container_of(vm, struct i915_gtt, base);
- if (drm_mm_initialized(&vm->mm)) {
- drm_mm_takedown(&vm->mm);
- list_del(&vm->global_link);
- }
iounmap(gtt->gsm);
teardown_scratch_page(vm->dev);
}
static void i915_gmch_remove(struct i915_address_space *vm)
{
- if (drm_mm_initialized(&vm->mm)) {
- drm_mm_takedown(&vm->mm);
- list_del(&vm->global_link);
- }
intel_gmch_remove();
}
/* Keep GGTT vmas first to make debug easier */
if (i915_is_ggtt(vm))
list_add(&vma->vma_link, &obj->vma_list);
- else
+ else {
list_add_tail(&vma->vma_link, &obj->vma_list);
+ i915_ppgtt_get(i915_vm_to_ppgtt(vm));
+ }
return vma;
}
#ifndef __I915_GEM_GTT_H__
#define __I915_GEM_GTT_H__
+struct drm_i915_file_private;
+
typedef uint32_t gen6_gtt_pte_t;
typedef uint64_t gen8_gtt_pte_t;
typedef gen8_gtt_pte_t gen8_ppgtt_pde_t;
dma_addr_t *gen8_pt_dma_addr[4];
};
- struct intel_context *ctx;
+ struct drm_i915_file_private *file_priv;
int (*enable)(struct i915_hw_ppgtt *ppgtt);
int (*switch_mm)(struct i915_hw_ppgtt *ppgtt,
- struct intel_engine_cs *ring,
- bool synchronous);
+ struct intel_engine_cs *ring);
void (*debug_dump)(struct i915_hw_ppgtt *ppgtt, struct seq_file *m);
};
int i915_gem_gtt_init(struct drm_device *dev);
void i915_gem_init_global_gtt(struct drm_device *dev);
-void i915_gem_setup_global_gtt(struct drm_device *dev, unsigned long start,
- unsigned long mappable_end, unsigned long end);
-
-bool intel_enable_ppgtt(struct drm_device *dev, bool full);
-int i915_gem_init_ppgtt(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt);
+int i915_gem_setup_global_gtt(struct drm_device *dev, unsigned long start,
+ unsigned long mappable_end, unsigned long end);
+void i915_global_gtt_cleanup(struct drm_device *dev);
+
+int i915_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt);
+int i915_ppgtt_init_hw(struct drm_device *dev);
+void i915_ppgtt_release(struct kref *kref);
+struct i915_hw_ppgtt *i915_ppgtt_create(struct drm_device *dev,
+ struct drm_i915_file_private *fpriv);
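+/* NULL-safe reference helpers; the final put frees the PPGTT via
+ * i915_ppgtt_release(). */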
+static inline void i915_ppgtt_get(struct i915_hw_ppgtt *ppgtt)
+{
+ if (ppgtt)
+ kref_get(&ppgtt->ref);
+}
+static inline void i915_ppgtt_put(struct i915_hw_ppgtt *ppgtt)
+{
+ if (ppgtt)
+ kref_put(&ppgtt->ref, i915_ppgtt_release);
+}
void i915_check_and_clear_faults(struct drm_device *dev);
void i915_gem_suspend_gtt_mappings(struct drm_device *dev);
#include "i915_drv.h"
#include "intel_renderstate.h"
-struct render_state {
- const struct intel_renderstate_rodata *rodata;
- struct drm_i915_gem_object *obj;
- u64 ggtt_offset;
- int gen;
-};
-
static const struct intel_renderstate_rodata *
render_state_get_rodata(struct drm_device *dev, const int gen)
{
return 0;
}
-static void render_state_fini(struct render_state *so)
+void i915_gem_render_state_fini(struct render_state *so)
{
i915_gem_object_ggtt_unpin(so->obj);
drm_gem_object_unreference(&so->obj->base);
}
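+/*
+ * Set up the renderstate batch for @ring. On success with a non-NULL
+ * rodata the caller owns the pinned object and must eventually call
+ * i915_gem_render_state_fini().
+ */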
-int i915_gem_render_state_init(struct intel_engine_cs *ring)
+int i915_gem_render_state_prepare(struct intel_engine_cs *ring,
+ struct render_state *so)
{
- struct render_state so;
int ret;
if (WARN_ON(ring->id != RCS))
return -ENOENT;
- ret = render_state_init(&so, ring->dev);
+ ret = render_state_init(so, ring->dev);
if (ret)
return ret;
- if (so.rodata == NULL)
+ if (so->rodata == NULL)
return 0;
- ret = render_state_setup(&so);
+ ret = render_state_setup(so);
+ if (ret) {
+ i915_gem_render_state_fini(so);
+ return ret;
+ }
+
+ return 0;
+}
+
+int i915_gem_render_state_init(struct intel_engine_cs *ring)
+{
+ struct render_state so;
+ int ret;
+
+ ret = i915_gem_render_state_prepare(ring, &so);
if (ret)
- goto out;
+ return ret;
+
+ if (so.rodata == NULL)
+ return 0;
ret = ring->dispatch_execbuffer(ring,
so.ggtt_offset,
ret = __i915_add_request(ring, NULL, so.obj, NULL);
/* __i915_add_request moves object to inactive if it fails */
out:
- render_state_fini(&so);
+ i915_gem_render_state_fini(&so);
return ret;
}
--- /dev/null
+/*
+ * Copyright © 2014 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef _I915_GEM_RENDER_STATE_H_
+#define _I915_GEM_RENDER_STATE_H_
+
+#include <linux/types.h>
+
+struct intel_renderstate_rodata {
+ const u32 *reloc;
+ const u32 *batch;
+ const u32 batch_items;
+};
+
+struct render_state {
+ const struct intel_renderstate_rodata *rodata;
+ struct drm_i915_gem_object *obj;
+ u64 ggtt_offset;
+ int gen;
+};
+
+int i915_gem_render_state_init(struct intel_engine_cs *ring);
+void i915_gem_render_state_fini(struct render_state *so);
+int i915_gem_render_state_prepare(struct intel_engine_cs *ring,
+ struct render_state *so);
+
+#endif /* _I915_GEM_RENDER_STATE_H_ */
int i915_gem_init_stolen(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ u32 tmp;
int bios_reserved = 0;
#ifdef CONFIG_INTEL_IOMMU
DRM_DEBUG_KMS("found %zd bytes of stolen memory at %08lx\n",
dev_priv->gtt.stolen_size, dev_priv->mm.stolen_base);
- if (IS_VALLEYVIEW(dev))
- bios_reserved = 1024*1024; /* top 1M on VLV/BYT */
+ if (INTEL_INFO(dev)->gen >= 8) {
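+		/* On gen8+ the field encodes a power-of-two multiplier:
+		 * the reserved size is 1MB << field (1MB/2MB/4MB/8MB). */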
+ tmp = I915_READ(GEN7_BIOS_RESERVED);
+ tmp >>= GEN8_BIOS_RESERVED_SHIFT;
+ tmp &= GEN8_BIOS_RESERVED_MASK;
+ bios_reserved = (1024*1024) << tmp;
+ } else if (IS_GEN7(dev)) {
+ tmp = I915_READ(GEN7_BIOS_RESERVED);
+ bios_reserved = tmp & GEN7_BIOS_RESERVED_256K ?
+ 256*1024 : 1024*1024;
+ }
if (WARN_ON(bios_reserved > dev_priv->gtt.stolen_size))
return 0;
uint32_t swizzle_x = I915_BIT_6_SWIZZLE_UNKNOWN;
uint32_t swizzle_y = I915_BIT_6_SWIZZLE_UNKNOWN;
- if (IS_VALLEYVIEW(dev)) {
+ if (INTEL_INFO(dev)->gen >= 8 || IS_VALLEYVIEW(dev)) {
+ /*
+ * On BDW+, swizzling is not used. We leave the CPU memory
+ * controller in charge of optimizing memory accesses without
+ * the extra address manipulation GPU side.
+ *
+ * VLV and CHV don't have GPU swizzling.
+ */
swizzle_x = I915_BIT_6_SWIZZLE_NONE;
swizzle_y = I915_BIT_6_SWIZZLE_NONE;
} else if (INTEL_INFO(dev)->gen >= 6) {
if (ret == 0) {
obj->fence_dirty =
- obj->fenced_gpu_access ||
+ obj->last_fenced_seqno ||
obj->fence_reg != I915_FENCE_REG_NONE;
obj->tiling_mode = args->tiling_mode;
#include <linux/mempolicy.h>
#include <linux/swap.h>
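+/*
+ * One i915_mm_struct is kept per client mm, shared between all userptr
+ * objects created against it. It is reference counted, and torn down
+ * from a worker so that the final mmdrop() happens outside struct_mutex.
+ */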
+struct i915_mm_struct {
+ struct mm_struct *mm;
+ struct drm_device *dev;
+ struct i915_mmu_notifier *mn;
+ struct hlist_node node;
+ struct kref kref;
+ struct work_struct work;
+};
+
#if defined(CONFIG_MMU_NOTIFIER)
#include <linux/interval_tree.h>
struct mmu_notifier mn;
struct rb_root objects;
struct list_head linear;
- struct drm_device *dev;
- struct mm_struct *mm;
- struct work_struct work;
- unsigned long count;
unsigned long serial;
bool has_linear;
};
struct i915_mmu_object {
- struct i915_mmu_notifier *mmu;
+ struct i915_mmu_notifier *mn;
struct interval_tree_node it;
struct list_head link;
struct drm_i915_gem_object *obj;
unsigned long start,
unsigned long end)
{
- struct i915_mmu_object *mmu;
+ struct i915_mmu_object *mo;
unsigned long serial;
restart:
serial = mn->serial;
- list_for_each_entry(mmu, &mn->linear, link) {
+ list_for_each_entry(mo, &mn->linear, link) {
struct drm_i915_gem_object *obj;
- if (mmu->it.last < start || mmu->it.start > end)
+ if (mo->it.last < start || mo->it.start > end)
continue;
- obj = mmu->obj;
+ obj = mo->obj;
drm_gem_object_reference(&obj->base);
spin_unlock(&mn->lock);
};
static struct i915_mmu_notifier *
-__i915_mmu_notifier_lookup(struct drm_device *dev, struct mm_struct *mm)
-{
- struct drm_i915_private *dev_priv = to_i915(dev);
- struct i915_mmu_notifier *mmu;
-
- /* Protected by dev->struct_mutex */
- hash_for_each_possible(dev_priv->mmu_notifiers, mmu, node, (unsigned long)mm)
- if (mmu->mm == mm)
- return mmu;
-
- return NULL;
-}
-
-static struct i915_mmu_notifier *
-i915_mmu_notifier_get(struct drm_device *dev, struct mm_struct *mm)
+i915_mmu_notifier_create(struct mm_struct *mm)
{
- struct drm_i915_private *dev_priv = to_i915(dev);
- struct i915_mmu_notifier *mmu;
+ struct i915_mmu_notifier *mn;
int ret;
- lockdep_assert_held(&dev->struct_mutex);
-
- mmu = __i915_mmu_notifier_lookup(dev, mm);
- if (mmu)
- return mmu;
-
- mmu = kmalloc(sizeof(*mmu), GFP_KERNEL);
- if (mmu == NULL)
+ mn = kmalloc(sizeof(*mn), GFP_KERNEL);
+ if (mn == NULL)
return ERR_PTR(-ENOMEM);
- spin_lock_init(&mmu->lock);
- mmu->dev = dev;
- mmu->mn.ops = &i915_gem_userptr_notifier;
- mmu->mm = mm;
- mmu->objects = RB_ROOT;
- mmu->count = 0;
- mmu->serial = 1;
- INIT_LIST_HEAD(&mmu->linear);
- mmu->has_linear = false;
-
- /* Protected by mmap_sem (write-lock) */
- ret = __mmu_notifier_register(&mmu->mn, mm);
+ spin_lock_init(&mn->lock);
+ mn->mn.ops = &i915_gem_userptr_notifier;
+ mn->objects = RB_ROOT;
+ mn->serial = 1;
+ INIT_LIST_HEAD(&mn->linear);
+ mn->has_linear = false;
+
+ /* Protected by mmap_sem (write-lock) */
+ ret = __mmu_notifier_register(&mn->mn, mm);
if (ret) {
- kfree(mmu);
+ kfree(mn);
return ERR_PTR(ret);
}
- /* Protected by dev->struct_mutex */
- hash_add(dev_priv->mmu_notifiers, &mmu->node, (unsigned long)mm);
- return mmu;
-}
-
-static void
-__i915_mmu_notifier_destroy_worker(struct work_struct *work)
-{
- struct i915_mmu_notifier *mmu = container_of(work, typeof(*mmu), work);
- mmu_notifier_unregister(&mmu->mn, mmu->mm);
- kfree(mmu);
-}
-
-static void
-__i915_mmu_notifier_destroy(struct i915_mmu_notifier *mmu)
-{
- lockdep_assert_held(&mmu->dev->struct_mutex);
-
- /* Protected by dev->struct_mutex */
- hash_del(&mmu->node);
-
- /* Our lock ordering is: mmap_sem, mmu_notifier_scru, struct_mutex.
- * We enter the function holding struct_mutex, therefore we need
- * to drop our mutex prior to calling mmu_notifier_unregister in
- * order to prevent lock inversion (and system-wide deadlock)
- * between the mmap_sem and struct-mutex. Hence we defer the
- * unregistration to a workqueue where we hold no locks.
- */
- INIT_WORK(&mmu->work, __i915_mmu_notifier_destroy_worker);
- schedule_work(&mmu->work);
-}
-
-static void __i915_mmu_notifier_update_serial(struct i915_mmu_notifier *mmu)
-{
- if (++mmu->serial == 0)
- mmu->serial = 1;
+ return mn;
}
-static bool i915_mmu_notifier_has_linear(struct i915_mmu_notifier *mmu)
+static void __i915_mmu_notifier_update_serial(struct i915_mmu_notifier *mn)
{
- struct i915_mmu_object *mn;
-
- list_for_each_entry(mn, &mmu->linear, link)
- if (mn->is_linear)
- return true;
-
- return false;
-}
-
-static void
-i915_mmu_notifier_del(struct i915_mmu_notifier *mmu,
- struct i915_mmu_object *mn)
-{
- lockdep_assert_held(&mmu->dev->struct_mutex);
-
- spin_lock(&mmu->lock);
- list_del(&mn->link);
- if (mn->is_linear)
- mmu->has_linear = i915_mmu_notifier_has_linear(mmu);
- else
- interval_tree_remove(&mn->it, &mmu->objects);
- __i915_mmu_notifier_update_serial(mmu);
- spin_unlock(&mmu->lock);
-
- /* Protected against _add() by dev->struct_mutex */
- if (--mmu->count == 0)
- __i915_mmu_notifier_destroy(mmu);
+ if (++mn->serial == 0)
+ mn->serial = 1;
}
static int
-i915_mmu_notifier_add(struct i915_mmu_notifier *mmu,
- struct i915_mmu_object *mn)
+i915_mmu_notifier_add(struct drm_device *dev,
+ struct i915_mmu_notifier *mn,
+ struct i915_mmu_object *mo)
{
struct interval_tree_node *it;
int ret;
- ret = i915_mutex_lock_interruptible(mmu->dev);
+ ret = i915_mutex_lock_interruptible(dev);
if (ret)
return ret;
* remove the objects from the interval tree) before we do
* the check for overlapping objects.
*/
- i915_gem_retire_requests(mmu->dev);
+ i915_gem_retire_requests(dev);
- spin_lock(&mmu->lock);
- it = interval_tree_iter_first(&mmu->objects,
- mn->it.start, mn->it.last);
+ spin_lock(&mn->lock);
+ it = interval_tree_iter_first(&mn->objects,
+ mo->it.start, mo->it.last);
if (it) {
struct drm_i915_gem_object *obj;
obj = container_of(it, struct i915_mmu_object, it)->obj;
if (!obj->userptr.workers)
- mmu->has_linear = mn->is_linear = true;
+ mn->has_linear = mo->is_linear = true;
else
ret = -EAGAIN;
} else
- interval_tree_insert(&mn->it, &mmu->objects);
+ interval_tree_insert(&mo->it, &mn->objects);
if (ret == 0) {
- list_add(&mn->link, &mmu->linear);
- __i915_mmu_notifier_update_serial(mmu);
+ list_add(&mo->link, &mn->linear);
+ __i915_mmu_notifier_update_serial(mn);
}
- spin_unlock(&mmu->lock);
- mutex_unlock(&mmu->dev->struct_mutex);
+ spin_unlock(&mn->lock);
+ mutex_unlock(&dev->struct_mutex);
return ret;
}
+static bool i915_mmu_notifier_has_linear(struct i915_mmu_notifier *mn)
+{
+ struct i915_mmu_object *mo;
+
+ list_for_each_entry(mo, &mn->linear, link)
+ if (mo->is_linear)
+ return true;
+
+ return false;
+}
+
+static void
+i915_mmu_notifier_del(struct i915_mmu_notifier *mn,
+ struct i915_mmu_object *mo)
+{
+ spin_lock(&mn->lock);
+ list_del(&mo->link);
+ if (mo->is_linear)
+ mn->has_linear = i915_mmu_notifier_has_linear(mn);
+ else
+ interval_tree_remove(&mo->it, &mn->objects);
+ __i915_mmu_notifier_update_serial(mn);
+ spin_unlock(&mn->lock);
+}
+
static void
i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
{
- struct i915_mmu_object *mn;
+ struct i915_mmu_object *mo;
- mn = obj->userptr.mn;
- if (mn == NULL)
+ mo = obj->userptr.mmu_object;
+ if (mo == NULL)
return;
- i915_mmu_notifier_del(mn->mmu, mn);
- obj->userptr.mn = NULL;
+ i915_mmu_notifier_del(mo->mn, mo);
+ kfree(mo);
+
+ obj->userptr.mmu_object = NULL;
+}
+
+static struct i915_mmu_notifier *
+i915_mmu_notifier_find(struct i915_mm_struct *mm)
+{
+	struct i915_mmu_notifier *mn = mm->mn;
+
+ if (mn)
+ return mn;
+
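+	/* mmap_sem must be held for write to register the notifier, while
+	 * mm_lock serialises against a concurrent creator. */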
+ down_write(&mm->mm->mmap_sem);
+ mutex_lock(&to_i915(mm->dev)->mm_lock);
+ if ((mn = mm->mn) == NULL) {
+ mn = i915_mmu_notifier_create(mm->mm);
+ if (!IS_ERR(mn))
+ mm->mn = mn;
+ }
+ mutex_unlock(&to_i915(mm->dev)->mm_lock);
+ up_write(&mm->mm->mmap_sem);
+
+ return mn;
}
static int
i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
unsigned flags)
{
- struct i915_mmu_notifier *mmu;
- struct i915_mmu_object *mn;
+ struct i915_mmu_notifier *mn;
+ struct i915_mmu_object *mo;
int ret;
if (flags & I915_USERPTR_UNSYNCHRONIZED)
return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
- down_write(&obj->userptr.mm->mmap_sem);
- ret = i915_mutex_lock_interruptible(obj->base.dev);
- if (ret == 0) {
- mmu = i915_mmu_notifier_get(obj->base.dev, obj->userptr.mm);
- if (!IS_ERR(mmu))
- mmu->count++; /* preemptive add to act as a refcount */
- else
- ret = PTR_ERR(mmu);
- mutex_unlock(&obj->base.dev->struct_mutex);
- }
- up_write(&obj->userptr.mm->mmap_sem);
- if (ret)
- return ret;
+ if (WARN_ON(obj->userptr.mm == NULL))
+ return -EINVAL;
- mn = kzalloc(sizeof(*mn), GFP_KERNEL);
- if (mn == NULL) {
- ret = -ENOMEM;
- goto destroy_mmu;
- }
+ mn = i915_mmu_notifier_find(obj->userptr.mm);
+ if (IS_ERR(mn))
+ return PTR_ERR(mn);
- mn->mmu = mmu;
- mn->it.start = obj->userptr.ptr;
- mn->it.last = mn->it.start + obj->base.size - 1;
- mn->obj = obj;
+ mo = kzalloc(sizeof(*mo), GFP_KERNEL);
+ if (mo == NULL)
+ return -ENOMEM;
- ret = i915_mmu_notifier_add(mmu, mn);
- if (ret)
- goto free_mn;
+ mo->mn = mn;
+ mo->it.start = obj->userptr.ptr;
+ mo->it.last = mo->it.start + obj->base.size - 1;
+ mo->obj = obj;
- obj->userptr.mn = mn;
+ ret = i915_mmu_notifier_add(obj->base.dev, mn, mo);
+ if (ret) {
+ kfree(mo);
+ return ret;
+ }
+
+ obj->userptr.mmu_object = mo;
return 0;
+}
+
+static void
+i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
+ struct mm_struct *mm)
+{
+ if (mn == NULL)
+ return;
-free_mn:
+ mmu_notifier_unregister(&mn->mn, mm);
kfree(mn);
-destroy_mmu:
- mutex_lock(&obj->base.dev->struct_mutex);
- if (--mmu->count == 0)
- __i915_mmu_notifier_destroy(mmu);
- mutex_unlock(&obj->base.dev->struct_mutex);
- return ret;
}
#else
return 0;
}
+
+static void
+i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
+ struct mm_struct *mm)
+{
+}
+
#endif
+static struct i915_mm_struct *
+__i915_mm_struct_find(struct drm_i915_private *dev_priv, struct mm_struct *real)
+{
+ struct i915_mm_struct *mm;
+
+ /* Protected by dev_priv->mm_lock */
+ hash_for_each_possible(dev_priv->mm_structs, mm, node, (unsigned long)real)
+ if (mm->mm == real)
+ return mm;
+
+ return NULL;
+}
+
+static int
+i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj)
+{
+ struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
+ struct i915_mm_struct *mm;
+ int ret = 0;
+
+ /* During release of the GEM object we hold the struct_mutex. This
+ * precludes us from calling mmput() at that time as that may be
+ * the last reference and so call exit_mmap(). exit_mmap() will
+ * attempt to reap the vma, and if we were holding a GTT mmap
+ * would then call drm_gem_vm_close() and attempt to reacquire
+ * the struct mutex. So in order to avoid that recursion, we have
+ * to defer releasing the mm reference until after we drop the
+ * struct_mutex, i.e. we need to schedule a worker to do the clean
+ * up.
+ */
+ mutex_lock(&dev_priv->mm_lock);
+ mm = __i915_mm_struct_find(dev_priv, current->mm);
+ if (mm == NULL) {
+ mm = kmalloc(sizeof(*mm), GFP_KERNEL);
+ if (mm == NULL) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ kref_init(&mm->kref);
+ mm->dev = obj->base.dev;
+
+ mm->mm = current->mm;
+		atomic_inc(&current->mm->mm_count);
+
+ mm->mn = NULL;
+
+ /* Protected by dev_priv->mm_lock */
+ hash_add(dev_priv->mm_structs,
+ &mm->node, (unsigned long)mm->mm);
+ } else
+ kref_get(&mm->kref);
+
+ obj->userptr.mm = mm;
+out:
+ mutex_unlock(&dev_priv->mm_lock);
+ return ret;
+}
+
+static void
+__i915_mm_struct_free__worker(struct work_struct *work)
+{
+ struct i915_mm_struct *mm = container_of(work, typeof(*mm), work);
+ i915_mmu_notifier_free(mm->mn, mm->mm);
+ mmdrop(mm->mm);
+ kfree(mm);
+}
+
+static void
+__i915_mm_struct_free(struct kref *kref)
+{
+ struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref);
+
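+	/* Called via kref_put_mutex(), so mm_lock is already held; unhash
+	 * and drop the lock before deferring the teardown to a worker. */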
+ /* Protected by dev_priv->mm_lock */
+ hash_del(&mm->node);
+ mutex_unlock(&to_i915(mm->dev)->mm_lock);
+
+ INIT_WORK(&mm->work, __i915_mm_struct_free__worker);
+ schedule_work(&mm->work);
+}
+
+static void
+i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj)
+{
+ if (obj->userptr.mm == NULL)
+ return;
+
+ kref_put_mutex(&obj->userptr.mm->kref,
+ __i915_mm_struct_free,
+ &to_i915(obj->base.dev)->mm_lock);
+ obj->userptr.mm = NULL;
+}
+
struct get_pages_work {
struct work_struct work;
struct drm_i915_gem_object *obj;
struct task_struct *task;
};
-
#if IS_ENABLED(CONFIG_SWIOTLB)
#define swiotlb_active() swiotlb_nr_tbl()
#else
if (pvec == NULL)
pvec = drm_malloc_ab(num_pages, sizeof(struct page *));
if (pvec != NULL) {
- struct mm_struct *mm = obj->userptr.mm;
+ struct mm_struct *mm = obj->userptr.mm->mm;
down_read(&mm->mmap_sem);
while (pinned < num_pages) {
pvec = NULL;
pinned = 0;
- if (obj->userptr.mm == current->mm) {
+ if (obj->userptr.mm->mm == current->mm) {
pvec = kmalloc(num_pages*sizeof(struct page *),
GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
if (pvec == NULL) {
static void
i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj)
{
- struct scatterlist *sg;
- int i;
+ struct sg_page_iter sg_iter;
BUG_ON(obj->userptr.work != NULL);
if (obj->madv != I915_MADV_WILLNEED)
obj->dirty = 0;
- for_each_sg(obj->pages->sgl, sg, obj->pages->nents, i) {
- struct page *page = sg_page(sg);
+ for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0) {
+ struct page *page = sg_page_iter_page(&sg_iter);
if (obj->dirty)
set_page_dirty(page);
i915_gem_userptr_release(struct drm_i915_gem_object *obj)
{
i915_gem_userptr_release__mmu_notifier(obj);
-
- if (obj->userptr.mm) {
- mmput(obj->userptr.mm);
- obj->userptr.mm = NULL;
- }
+ i915_gem_userptr_release__mm_struct(obj);
}
static int
i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
{
- if (obj->userptr.mn)
+ if (obj->userptr.mmu_object)
return 0;
return i915_gem_userptr_init__mmu_notifier(obj, 0);
return -ENODEV;
}
- /* Allocate the new object */
obj = i915_gem_object_alloc(dev);
if (obj == NULL)
return -ENOMEM;
* at binding. This means that we need to hook into the mmu_notifier
* in order to detect if the mmu is destroyed.
*/
- ret = -ENOMEM;
- if ((obj->userptr.mm = get_task_mm(current)))
+ ret = i915_gem_userptr_init__mm_struct(obj);
+ if (ret == 0)
ret = i915_gem_userptr_init__mmu_notifier(obj, args->flags);
if (ret == 0)
ret = drm_gem_handle_create(file, &obj->base, &handle);
int
i915_gem_init_userptr(struct drm_device *dev)
{
-#if defined(CONFIG_MMU_NOTIFIER)
struct drm_i915_private *dev_priv = to_i915(dev);
- hash_init(dev_priv->mmu_notifiers);
-#endif
+ mutex_init(&dev_priv->mm_lock);
+ hash_init(dev_priv->mm_structs);
return 0;
}
struct drm_i915_error_buffer *err,
int count)
{
- err_printf(m, "%s [%d]:\n", name, count);
+ err_printf(m, " %s [%d]:\n", name, count);
while (count--) {
- err_printf(m, " %08x %8u %02x %02x %x %x",
+ err_printf(m, " %08x %8u %02x %02x %x %x",
err->gtt_offset,
err->size,
err->read_domains,
err_puts(m, err->userptr ? " userptr" : "");
err_puts(m, err->ring != -1 ? " " : "");
err_puts(m, ring_str(err->ring));
- err_puts(m, i915_cache_level_str(err->cache_level));
+ err_puts(m, i915_cache_level_str(m->i915, err->cache_level));
if (err->name)
err_printf(m, " (name: %d)", err->name);
i915_ring_error_state(m, dev, &error->ring[i]);
}
- if (error->active_bo)
+ for (i = 0; i < error->vm_count; i++) {
+ err_printf(m, "vm[%d]\n", i);
+
print_error_buffers(m, "Active",
- error->active_bo[0],
- error->active_bo_count[0]);
+ error->active_bo[i],
+ error->active_bo_count[i]);
- if (error->pinned_bo)
print_error_buffers(m, "Pinned",
- error->pinned_bo[0],
- error->pinned_bo_count[0]);
+ error->pinned_bo[i],
+ error->pinned_bo_count[i]);
+ }
for (i = 0; i < ARRAY_SIZE(error->ring); i++) {
obj = error->ring[i].batchbuffer;
}
int i915_error_state_buf_init(struct drm_i915_error_state_buf *ebuf,
+ struct drm_i915_private *i915,
size_t count, loff_t pos)
{
memset(ebuf, 0, sizeof(*ebuf));
+ ebuf->i915 = i915;
/* We need to have enough room to store any i915_error_state printf
* so that we can move it to start position.
}
static struct drm_i915_error_object *
-i915_error_object_create_sized(struct drm_i915_private *dev_priv,
- struct drm_i915_gem_object *src,
- struct i915_address_space *vm,
- const int num_pages)
+i915_error_object_create(struct drm_i915_private *dev_priv,
+ struct drm_i915_gem_object *src,
+ struct i915_address_space *vm)
{
struct drm_i915_error_object *dst;
- int i;
+ int num_pages;
+ bool use_ggtt;
+ int i = 0;
u32 reloc_offset;
if (src == NULL || src->pages == NULL)
return NULL;
+ num_pages = src->base.size >> PAGE_SHIFT;
+
dst = kmalloc(sizeof(*dst) + num_pages * sizeof(u32 *), GFP_ATOMIC);
if (dst == NULL)
return NULL;
- reloc_offset = dst->gtt_offset = i915_gem_obj_offset(src, vm);
- for (i = 0; i < num_pages; i++) {
+ if (i915_gem_obj_bound(src, vm))
+ dst->gtt_offset = i915_gem_obj_offset(src, vm);
+ else
+ dst->gtt_offset = -1;
+
+ reloc_offset = dst->gtt_offset;
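+	/* Prefer copying through the mappable aperture: only possible for
+	 * uncached objects bound into the global GTT within the mappable
+	 * window; otherwise fall back to reading the backing pages. */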
+ use_ggtt = (src->cache_level == I915_CACHE_NONE &&
+ i915_is_ggtt(vm) &&
+ src->has_global_gtt_mapping &&
+ reloc_offset + num_pages * PAGE_SIZE <= dev_priv->gtt.mappable_end);
+
+ /* Cannot access stolen address directly, try to use the aperture */
+ if (src->stolen) {
+ use_ggtt = true;
+
+ if (!src->has_global_gtt_mapping)
+ goto unwind;
+
+ reloc_offset = i915_gem_obj_ggtt_offset(src);
+ if (reloc_offset + num_pages * PAGE_SIZE > dev_priv->gtt.mappable_end)
+ goto unwind;
+ }
+
+ /* Cannot access snooped pages through the aperture */
+ if (use_ggtt && src->cache_level != I915_CACHE_NONE && !HAS_LLC(dev_priv->dev))
+ goto unwind;
+
+ dst->page_count = num_pages;
+ while (num_pages--) {
unsigned long flags;
void *d;
goto unwind;
local_irq_save(flags);
- if (src->cache_level == I915_CACHE_NONE &&
- reloc_offset < dev_priv->gtt.mappable_end &&
- src->has_global_gtt_mapping &&
- i915_is_ggtt(vm)) {
+ if (use_ggtt) {
void __iomem *s;
/* Simply ignore tiling or any overlapping fence.
reloc_offset);
memcpy_fromio(d, s, PAGE_SIZE);
io_mapping_unmap_atomic(s);
- } else if (src->stolen) {
- unsigned long offset;
-
- offset = dev_priv->mm.stolen_base;
- offset += src->stolen->start;
- offset += i << PAGE_SHIFT;
-
- memcpy_fromio(d, (void __iomem *) offset, PAGE_SIZE);
} else {
struct page *page;
void *s;
}
local_irq_restore(flags);
- dst->pages[i] = d;
-
+ dst->pages[i++] = d;
reloc_offset += PAGE_SIZE;
}
- dst->page_count = num_pages;
return dst;
kfree(dst);
return NULL;
}
-#define i915_error_object_create(dev_priv, src, vm) \
- i915_error_object_create_sized((dev_priv), (src), (vm), \
- (src)->base.size>>PAGE_SHIFT)
-
#define i915_error_ggtt_object_create(dev_priv, src) \
- i915_error_object_create_sized((dev_priv), (src), &(dev_priv)->gtt.base, \
- (src)->base.size>>PAGE_SHIFT)
+ i915_error_object_create((dev_priv), (src), &(dev_priv)->gtt.base)
static void capture_bo(struct drm_i915_error_buffer *err,
- struct drm_i915_gem_object *obj)
+ struct i915_vma *vma)
{
+ struct drm_i915_gem_object *obj = vma->obj;
+
err->size = obj->base.size;
err->name = obj->base.name;
err->rseqno = obj->last_read_seqno;
err->wseqno = obj->last_write_seqno;
- err->gtt_offset = i915_gem_obj_ggtt_offset(obj);
+ err->gtt_offset = vma->node.start;
err->read_domains = obj->base.read_domains;
err->write_domain = obj->base.write_domain;
err->fence_reg = obj->fence_reg;
int i = 0;
list_for_each_entry(vma, head, mm_list) {
- capture_bo(err++, vma->obj);
+ capture_bo(err++, vma);
if (++i == count)
break;
}
}
static u32 capture_pinned_bo(struct drm_i915_error_buffer *err,
- int count, struct list_head *head)
+ int count, struct list_head *head,
+ struct i915_address_space *vm)
{
struct drm_i915_gem_object *obj;
- int i = 0;
+ struct drm_i915_error_buffer * const first = err;
+ struct drm_i915_error_buffer * const last = err + count;
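+	/* Only capture vmas that are pinned into this address space, and
+	 * stop once the preallocated error buffer is full. */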
list_for_each_entry(obj, head, global_list) {
- if (!i915_gem_obj_is_pinned(obj))
- continue;
+ struct i915_vma *vma;
- capture_bo(err++, obj);
- if (++i == count)
+ if (err == last)
break;
+
+ list_for_each_entry(vma, &obj->vma_list, vma_link)
+ if (vma->vm == vm && vma->pin_count > 0) {
+ capture_bo(err++, vma);
+ break;
+ }
}
- return i;
+ return err - first;
}
/* Generate a semi-unique error code. The code is not meant to have meaning, The
ering->hws = I915_READ(mmio);
}
- ering->cpu_ring_head = ring->buffer->head;
- ering->cpu_ring_tail = ring->buffer->tail;
-
ering->hangcheck_score = ring->hangcheck.score;
ering->hangcheck_action = ring->hangcheck.action;
for (i = 0; i < I915_NUM_RINGS; i++) {
struct intel_engine_cs *ring = &dev_priv->ring[i];
+ struct intel_ringbuffer *rbuf;
error->ring[i].pid = -1;
request = i915_gem_find_active_request(ring);
if (request) {
+ struct i915_address_space *vm;
+
+ vm = request->ctx && request->ctx->ppgtt ?
+ &request->ctx->ppgtt->base :
+ &dev_priv->gtt.base;
+
/* We need to copy these to an anonymous buffer
* as the simplest method to avoid being overwritten
* by userspace.
error->ring[i].batchbuffer =
i915_error_object_create(dev_priv,
request->batch_obj,
- request->ctx ?
- request->ctx->vm :
- &dev_priv->gtt.base);
+ vm);
- if (HAS_BROKEN_CS_TLB(dev_priv->dev) &&
- ring->scratch.obj)
+ if (HAS_BROKEN_CS_TLB(dev_priv->dev))
error->ring[i].wa_batchbuffer =
i915_error_ggtt_object_create(dev_priv,
ring->scratch.obj);
}
}
+ if (i915.enable_execlists) {
+ /* TODO: This is only a small fix to keep basic error
+ * capture working, but we need to add more information
+ * for it to be useful (e.g. dump the context being
+ * executed).
+ */
+ if (request)
+ rbuf = request->ctx->engine[ring->id].ringbuf;
+ else
+ rbuf = ring->default_context->engine[ring->id].ringbuf;
+ } else
+ rbuf = ring->buffer;
+
+ error->ring[i].cpu_ring_head = rbuf->head;
+ error->ring[i].cpu_ring_tail = rbuf->tail;
+
error->ring[i].ringbuffer =
- i915_error_ggtt_object_create(dev_priv, ring->buffer->obj);
+ i915_error_ggtt_object_create(dev_priv, rbuf->obj);
- if (ring->status_page.obj)
- error->ring[i].hws_page =
- i915_error_ggtt_object_create(dev_priv, ring->status_page.obj);
+ error->ring[i].hws_page =
+ i915_error_ggtt_object_create(dev_priv, ring->status_page.obj);
i915_gem_record_active_context(ring, error, &error->ring[i]);
list_for_each_entry(vma, &vm->active_list, mm_list)
i++;
error->active_bo_count[ndx] = i;
- list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list)
- if (i915_gem_obj_is_pinned(obj))
- i++;
+
+ list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+ list_for_each_entry(vma, &obj->vma_list, vma_link)
+ if (vma->vm == vm && vma->pin_count > 0) {
+ i++;
+ break;
+ }
+ }
error->pinned_bo_count[ndx] = i - error->active_bo_count[ndx];
if (i) {
error->pinned_bo_count[ndx] =
capture_pinned_bo(pinned_bo,
error->pinned_bo_count[ndx],
- &dev_priv->mm.bound_list);
+ &dev_priv->mm.bound_list, vm);
error->active_bo[ndx] = active_bo;
error->pinned_bo[ndx] = pinned_bo;
}
error->pinned_bo_count = kcalloc(cnt, sizeof(*error->pinned_bo_count),
GFP_ATOMIC);
- list_for_each_entry(vm, &dev_priv->vm_list, global_link)
- i915_gem_capture_vm(dev_priv, error, vm, i++);
+ if (error->active_bo == NULL ||
+ error->pinned_bo == NULL ||
+ error->active_bo_count == NULL ||
+ error->pinned_bo_count == NULL) {
+ kfree(error->active_bo);
+ kfree(error->active_bo_count);
+ kfree(error->pinned_bo);
+ kfree(error->pinned_bo_count);
+
+ error->active_bo = NULL;
+ error->active_bo_count = NULL;
+ error->pinned_bo = NULL;
+ error->pinned_bo_count = NULL;
+ } else {
+ list_for_each_entry(vm, &dev_priv->vm_list, global_link)
+ i915_gem_capture_vm(dev_priv, error, vm, i++);
+
+ error->vm_count = cnt;
+ }
}
/* Capture all registers which don't fit into another category. */
kref_put(&error->ref, i915_error_state_free);
}
-const char *i915_cache_level_str(int type)
+const char *i915_cache_level_str(struct drm_i915_private *i915, int type)
{
switch (type) {
case I915_CACHE_NONE: return " uncached";
- case I915_CACHE_LLC: return " snooped or LLC";
+ case I915_CACHE_LLC: return HAS_LLC(i915) ? " LLC" : " snooped";
case I915_CACHE_L3_LLC: return " L3+LLC";
case I915_CACHE_WT: return " WT";
default: return "";
{
assert_spin_locked(&dev_priv->irq_lock);
- if (!intel_irqs_enabled(dev_priv))
+ if (WARN_ON(!intel_irqs_enabled(dev_priv)))
return;
if ((dev_priv->irq_mask & mask) != mask) {
assert_spin_locked(&dev_priv->irq_lock);
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
if (crtc->cpu_fifo_underrun_disabled)
assert_spin_locked(&dev_priv->irq_lock);
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
if (crtc->pch_fifo_underrun_disabled)
old = !intel_crtc->cpu_fifo_underrun_disabled;
intel_crtc->cpu_fifo_underrun_disabled = !enable;
- if (INTEL_INFO(dev)->gen < 5 || IS_VALLEYVIEW(dev))
+ if (HAS_GMCH_DISPLAY(dev))
i9xx_set_fifo_underrun_reporting(dev, pipe, enable, old);
else if (IS_GEN5(dev) || IS_GEN6(dev))
ironlake_set_fifo_underrun_reporting(dev, pipe, enable);
/* In vblank? */
if (in_vbl)
- ret |= DRM_SCANOUTPOS_INVBL;
+ ret |= DRM_SCANOUTPOS_IN_VBLANK;
return ret;
}
* some connectors */
if (hpd_disabled) {
drm_kms_helper_poll_enable(dev);
- mod_timer(&dev_priv->hotplug_reenable_timer,
- jiffies + msecs_to_jiffies(I915_REENABLE_HOTPLUG_DELAY));
+ mod_delayed_work(system_wq, &dev_priv->hotplug_reenable_work,
+ msecs_to_jiffies(I915_REENABLE_HOTPLUG_DELAY));
}
spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
drm_kms_helper_hotplug_event(dev);
}
-static void intel_hpd_irq_uninstall(struct drm_i915_private *dev_priv)
-{
- del_timer_sync(&dev_priv->hotplug_reenable_timer);
-}
-
static void ironlake_rps_change_irq_handler(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
* @dev_priv: DRM device private
*
*/
-static u32 vlv_calc_delay_from_C0_counters(struct drm_i915_private *dev_priv)
+static int vlv_calc_delay_from_C0_counters(struct drm_i915_private *dev_priv)
{
u32 residency_C0_up = 0, residency_C0_down = 0;
- u8 new_delay, adj;
+ int new_delay, adj;
dev_priv->rps.ei_interrupt_count++;
struct drm_i915_private *dev_priv,
u32 master_ctl)
{
+ struct intel_engine_cs *ring;
u32 rcs, bcs, vcs;
uint32_t tmp = 0;
irqreturn_t ret = IRQ_NONE;
if (tmp) {
I915_WRITE(GEN8_GT_IIR(0), tmp);
ret = IRQ_HANDLED;
+
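+		/* Each engine's IIR field carries both user interrupts and,
+		 * when execlists are in use, context-switch events. */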
rcs = tmp >> GEN8_RCS_IRQ_SHIFT;
- bcs = tmp >> GEN8_BCS_IRQ_SHIFT;
+ ring = &dev_priv->ring[RCS];
if (rcs & GT_RENDER_USER_INTERRUPT)
- notify_ring(dev, &dev_priv->ring[RCS]);
+ notify_ring(dev, ring);
+ if (rcs & GT_CONTEXT_SWITCH_INTERRUPT)
+ intel_execlists_handle_ctx_events(ring);
+
+ bcs = tmp >> GEN8_BCS_IRQ_SHIFT;
+ ring = &dev_priv->ring[BCS];
if (bcs & GT_RENDER_USER_INTERRUPT)
- notify_ring(dev, &dev_priv->ring[BCS]);
+ notify_ring(dev, ring);
+ if (bcs & GT_CONTEXT_SWITCH_INTERRUPT)
+ intel_execlists_handle_ctx_events(ring);
} else
DRM_ERROR("The master control interrupt lied (GT0)!\n");
}
if (tmp) {
I915_WRITE(GEN8_GT_IIR(1), tmp);
ret = IRQ_HANDLED;
+
vcs = tmp >> GEN8_VCS1_IRQ_SHIFT;
+ ring = &dev_priv->ring[VCS];
if (vcs & GT_RENDER_USER_INTERRUPT)
- notify_ring(dev, &dev_priv->ring[VCS]);
+ notify_ring(dev, ring);
+ if (vcs & GT_CONTEXT_SWITCH_INTERRUPT)
+ intel_execlists_handle_ctx_events(ring);
+
vcs = tmp >> GEN8_VCS2_IRQ_SHIFT;
+ ring = &dev_priv->ring[VCS2];
if (vcs & GT_RENDER_USER_INTERRUPT)
- notify_ring(dev, &dev_priv->ring[VCS2]);
+ notify_ring(dev, ring);
+ if (vcs & GT_CONTEXT_SWITCH_INTERRUPT)
+ intel_execlists_handle_ctx_events(ring);
} else
DRM_ERROR("The master control interrupt lied (GT1)!\n");
}
if (tmp) {
I915_WRITE(GEN8_GT_IIR(3), tmp);
ret = IRQ_HANDLED;
+
vcs = tmp >> GEN8_VECS_IRQ_SHIFT;
+ ring = &dev_priv->ring[VECS];
if (vcs & GT_RENDER_USER_INTERRUPT)
- notify_ring(dev, &dev_priv->ring[VECS]);
+ notify_ring(dev, ring);
+ if (vcs & GT_CONTEXT_SWITCH_INTERRUPT)
+ intel_execlists_handle_ctx_events(ring);
} else
DRM_ERROR("The master control interrupt lied (GT3)!\n");
}
long_hpd = (dig_hotplug_reg >> dig_shift) & PORTB_HOTPLUG_LONG_DETECT;
}
- DRM_DEBUG_DRIVER("digital hpd port %d %d\n", port, long_hpd);
+ DRM_DEBUG_DRIVER("digital hpd port %c - %s\n",
+ port_name(port),
+ long_hpd ? "long" : "short");
/* for long HPD pulses we want to have the digital queue happen,
but we still want HPD storm detection to function. */
if (long_hpd) {
static bool intel_pipe_handle_vblank(struct drm_device *dev, enum pipe pipe)
{
- struct intel_crtc *crtc;
-
if (!drm_handle_vblank(dev, pipe))
return false;
- crtc = to_intel_crtc(intel_get_crtc_for_pipe(dev, pipe));
- wake_up(&crtc->vbl_wait);
-
return true;
}
int pipe;
spin_lock(&dev_priv->irq_lock);
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
int reg;
u32 mask, iir_bit = 0;
}
spin_unlock(&dev_priv->irq_lock);
- for_each_pipe(pipe) {
- if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS)
- intel_pipe_handle_vblank(dev, pipe);
+ for_each_pipe(dev_priv, pipe) {
+ if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS &&
+ intel_pipe_handle_vblank(dev, pipe))
+ intel_check_page_flip(dev, pipe);
if (pipe_stats[pipe] & PLANE_FLIP_DONE_INT_STATUS_VLV) {
intel_prepare_page_flip(dev, pipe);
DRM_ERROR("PCH poison interrupt\n");
if (pch_iir & SDE_FDI_MASK)
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
DRM_DEBUG_DRIVER(" pipe %c FDI IIR: 0x%08x\n",
pipe_name(pipe),
I915_READ(FDI_RX_IIR(pipe)));
if (err_int & ERR_INT_POISON)
DRM_ERROR("Poison interrupt\n");
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
if (err_int & ERR_INT_FIFO_UNDERRUN(pipe)) {
if (intel_set_cpu_fifo_underrun_reporting(dev, pipe,
false))
DRM_DEBUG_DRIVER("Audio CP change interrupt\n");
if (pch_iir & SDE_FDI_MASK_CPT)
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
DRM_DEBUG_DRIVER(" pipe %c FDI IIR: 0x%08x\n",
pipe_name(pipe),
I915_READ(FDI_RX_IIR(pipe)));
if (de_iir & DE_POISON)
DRM_ERROR("Poison interrupt\n");
- for_each_pipe(pipe) {
- if (de_iir & DE_PIPE_VBLANK(pipe))
- intel_pipe_handle_vblank(dev, pipe);
+ for_each_pipe(dev_priv, pipe) {
+ if (de_iir & DE_PIPE_VBLANK(pipe) &&
+ intel_pipe_handle_vblank(dev, pipe))
+ intel_check_page_flip(dev, pipe);
if (de_iir & DE_PIPE_FIFO_UNDERRUN(pipe))
if (intel_set_cpu_fifo_underrun_reporting(dev, pipe, false))
if (de_iir & DE_GSE_IVB)
intel_opregion_asle_intr(dev);
- for_each_pipe(pipe) {
- if (de_iir & (DE_PIPE_VBLANK_IVB(pipe)))
- intel_pipe_handle_vblank(dev, pipe);
+ for_each_pipe(dev_priv, pipe) {
+ if (de_iir & (DE_PIPE_VBLANK_IVB(pipe)) &&
+ intel_pipe_handle_vblank(dev, pipe))
+ intel_check_page_flip(dev, pipe);
/* plane/pipes map 1:1 on ilk+ */
if (de_iir & DE_PLANE_FLIP_DONE_IVB(pipe)) {
DRM_ERROR("The master control interrupt lied (DE PORT)!\n");
}
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
uint32_t pipe_iir;
if (!(master_ctl & GEN8_DE_PIPE_IRQ(pipe)))
if (pipe_iir) {
ret = IRQ_HANDLED;
I915_WRITE(GEN8_DE_PIPE_IIR(pipe), pipe_iir);
- if (pipe_iir & GEN8_PIPE_VBLANK)
- intel_pipe_handle_vblank(dev, pipe);
+ if (pipe_iir & GEN8_PIPE_VBLANK &&
+ intel_pipe_handle_vblank(dev, pipe))
+ intel_check_page_flip(dev, pipe);
if (pipe_iir & GEN8_PIPE_PRIMARY_FLIP_DONE) {
intel_prepare_page_flip(dev, pipe);
if (eir & I915_ERROR_MEMORY_REFRESH) {
pr_err("memory refresh error:\n");
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
pr_err("pipe %c stat: 0x%08x\n",
pipe_name(pipe), I915_READ(PIPESTAT(pipe)));
/* pipestat has already been acked */
schedule_work(&dev_priv->gpu_error.work);
}
-static void __always_unused i915_pageflip_stall_check(struct drm_device *dev, int pipe)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct drm_i915_gem_object *obj;
- struct intel_unpin_work *work;
- unsigned long flags;
- bool stall_detected;
-
- /* Ignore early vblank irqs */
- if (intel_crtc == NULL)
- return;
-
- spin_lock_irqsave(&dev->event_lock, flags);
- work = intel_crtc->unpin_work;
-
- if (work == NULL ||
- atomic_read(&work->pending) >= INTEL_FLIP_COMPLETE ||
- !work->enable_stall_check) {
- /* Either the pending flip IRQ arrived, or we're too early. Don't check */
- spin_unlock_irqrestore(&dev->event_lock, flags);
- return;
- }
-
- /* Potential stall - if we see that the flip has happened, assume a missed interrupt */
- obj = work->pending_flip_obj;
- if (INTEL_INFO(dev)->gen >= 4) {
- int dspsurf = DSPSURF(intel_crtc->plane);
- stall_detected = I915_HI_DISPBASE(I915_READ(dspsurf)) ==
- i915_gem_obj_ggtt_offset(obj);
- } else {
- int dspaddr = DSPADDR(intel_crtc->plane);
- stall_detected = I915_READ(dspaddr) == (i915_gem_obj_ggtt_offset(obj) +
- crtc->y * crtc->primary->fb->pitches[0] +
- crtc->x * crtc->primary->fb->bits_per_pixel/8);
- }
-
- spin_unlock_irqrestore(&dev->event_lock, flags);
-
- if (stall_detected) {
- DRM_DEBUG_DRIVER("Pageflip stall detected\n");
- intel_prepare_page_flip(dev, intel_crtc->plane);
- }
-}
-
/* Called from drm generic code, passed 'crtc' which
* we use as a pipe index
*/
I915_WRITE(PORT_HOTPLUG_EN, 0);
I915_WRITE(PORT_HOTPLUG_STAT, I915_READ(PORT_HOTPLUG_STAT));
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
I915_WRITE(PIPESTAT(pipe), 0xffff);
I915_WRITE(VLV_IIR, 0xffffffff);
I915_WRITE(VLV_IMR, 0xffffffff);
gen8_gt_irq_reset(dev_priv);
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
if (intel_display_power_enabled(dev_priv,
POWER_DOMAIN_PIPE(pipe)))
GEN8_IRQ_RESET_NDX(DE_PIPE, pipe);
I915_WRITE(PORT_HOTPLUG_EN, 0);
I915_WRITE(PORT_HOTPLUG_STAT, I915_READ(PORT_HOTPLUG_STAT));
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
I915_WRITE(PIPESTAT(pipe), 0xffff);
I915_WRITE(VLV_IMR, 0xffffffff);
static void ibx_hpd_irq_setup(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- struct drm_mode_config *mode_config = &dev->mode_config;
struct intel_encoder *intel_encoder;
u32 hotplug_irqs, hotplug, enabled_irqs = 0;
if (HAS_PCH_IBX(dev)) {
hotplug_irqs = SDE_HOTPLUG_MASK;
- list_for_each_entry(intel_encoder, &mode_config->encoder_list, base.head)
+ for_each_intel_encoder(dev, intel_encoder)
if (dev_priv->hpd_stats[intel_encoder->hpd_pin].hpd_mark == HPD_ENABLED)
enabled_irqs |= hpd_ibx[intel_encoder->hpd_pin];
} else {
hotplug_irqs = SDE_HOTPLUG_MASK_CPT;
- list_for_each_entry(intel_encoder, &mode_config->encoder_list, base.head)
+ for_each_intel_encoder(dev, intel_encoder)
if (dev_priv->hpd_stats[intel_encoder->hpd_pin].hpd_mark == HPD_ENABLED)
enabled_irqs |= hpd_cpt[intel_encoder->hpd_pin];
}
static void gen8_gt_irq_postinstall(struct drm_i915_private *dev_priv)
{
- int i;
-
/* These are interrupts we'll toggle with the ring mask register */
uint32_t gt_interrupts[] = {
GT_RENDER_USER_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
+ GT_CONTEXT_SWITCH_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
GT_RENDER_L3_PARITY_ERROR_INTERRUPT |
- GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT,
+ GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT |
+ GT_CONTEXT_SWITCH_INTERRUPT << GEN8_BCS_IRQ_SHIFT,
GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
- GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT,
+ GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
+ GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT |
+ GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT,
0,
- GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT
+ GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT |
+ GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VECS_IRQ_SHIFT
};
- for (i = 0; i < ARRAY_SIZE(gt_interrupts); i++)
- GEN8_IRQ_INIT_NDX(GT, i, ~gt_interrupts[i], gt_interrupts[i]);
-
dev_priv->pm_irq_mask = 0xffffffff;
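+	/* GT index 2 carries the PM/RPS interrupts, so seed it from the
+	 * pm mask instead of the engine interrupt masks. */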
+ GEN8_IRQ_INIT_NDX(GT, 0, ~gt_interrupts[0], gt_interrupts[0]);
+ GEN8_IRQ_INIT_NDX(GT, 1, ~gt_interrupts[1], gt_interrupts[1]);
+ GEN8_IRQ_INIT_NDX(GT, 2, dev_priv->pm_irq_mask, dev_priv->pm_rps_events);
+ GEN8_IRQ_INIT_NDX(GT, 3, ~gt_interrupts[3], gt_interrupts[3]);
}
static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
{
- struct drm_device *dev = dev_priv->dev;
uint32_t de_pipe_masked = GEN8_PIPE_PRIMARY_FLIP_DONE |
GEN8_PIPE_CDCLK_CRC_DONE |
GEN8_DE_PIPE_IRQ_FAULT_ERRORS;
dev_priv->de_irq_mask[PIPE_B] = ~de_pipe_masked;
dev_priv->de_irq_mask[PIPE_C] = ~de_pipe_masked;
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
if (intel_display_power_enabled(dev_priv,
POWER_DOMAIN_PIPE(pipe)))
GEN8_IRQ_INIT_NDX(DE_PIPE, pipe,
*/
dev_priv->irq_mask = ~enable_mask;
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
I915_WRITE(PIPESTAT(pipe), 0xffff);
spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
i915_enable_pipestat(dev_priv, PIPE_A, PIPE_GMBUS_INTERRUPT_STATUS);
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
i915_enable_pipestat(dev_priv, pipe, pipestat_enable);
spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
if (!dev_priv)
return;
- intel_hpd_irq_uninstall(dev_priv);
-
gen8_irq_reset(dev);
}
I915_WRITE(VLV_MASTER_IER, 0);
- intel_hpd_irq_uninstall(dev_priv);
-
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
I915_WRITE(PIPESTAT(pipe), 0xffff);
I915_WRITE(HWSTAM, 0xffffffff);
I915_WRITE(PORT_HOTPLUG_EN, 0);
I915_WRITE(PORT_HOTPLUG_STAT, I915_READ(PORT_HOTPLUG_STAT));
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
I915_WRITE(PIPESTAT(pipe), 0xffff);
I915_WRITE(VLV_IMR, 0xffffffff);
if (!dev_priv)
return;
- intel_hpd_irq_uninstall(dev_priv);
-
ironlake_irq_reset(dev);
}
struct drm_i915_private *dev_priv = dev->dev_private;
int pipe;
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
I915_WRITE(PIPESTAT(pipe), 0);
I915_WRITE16(IMR, 0xffff);
I915_WRITE16(IER, 0x0);
return false;
if ((iir & flip_pending) == 0)
- return false;
+ goto check_page_flip;
intel_prepare_page_flip(dev, plane);
* an interrupt per se, we watch for the change at vblank.
*/
if (I915_READ16(ISR) & flip_pending)
- return false;
+ goto check_page_flip;
intel_finish_page_flip(dev, pipe);
-
return true;
+
+check_page_flip:
+ intel_check_page_flip(dev, pipe);
+ return false;
}
static irqreturn_t i8xx_irq_handler(int irq, void *arg)
"Command parser error, iir 0x%08x",
iir);
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
int reg = PIPESTAT(pipe);
pipe_stats[pipe] = I915_READ(reg);
if (iir & I915_USER_INTERRUPT)
notify_ring(dev, &dev_priv->ring[RCS]);
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
int plane = pipe;
if (HAS_FBC(dev))
plane = !plane;
struct drm_i915_private *dev_priv = dev->dev_private;
int pipe;
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
/* Clear enable bits; then clear status bits */
I915_WRITE(PIPESTAT(pipe), 0);
I915_WRITE(PIPESTAT(pipe), I915_READ(PIPESTAT(pipe)));
}
I915_WRITE16(HWSTAM, 0xeffe);
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
I915_WRITE(PIPESTAT(pipe), 0);
I915_WRITE(IMR, 0xffffffff);
I915_WRITE(IER, 0x0);
return false;
if ((iir & flip_pending) == 0)
- return false;
+ goto check_page_flip;
intel_prepare_page_flip(dev, plane);
* an interrupt per se, we watch for the change at vblank.
*/
if (I915_READ(ISR) & flip_pending)
- return false;
+ goto check_page_flip;
intel_finish_page_flip(dev, pipe);
-
return true;
+
+check_page_flip:
+ intel_check_page_flip(dev, pipe);
+ return false;
}
static irqreturn_t i915_irq_handler(int irq, void *arg)
"Command parser error, iir 0x%08x",
iir);
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
int reg = PIPESTAT(pipe);
pipe_stats[pipe] = I915_READ(reg);
if (iir & I915_USER_INTERRUPT)
notify_ring(dev, &dev_priv->ring[RCS]);
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
int plane = pipe;
if (HAS_FBC(dev))
plane = !plane;
struct drm_i915_private *dev_priv = dev->dev_private;
int pipe;
- intel_hpd_irq_uninstall(dev_priv);
-
if (I915_HAS_HOTPLUG(dev)) {
I915_WRITE(PORT_HOTPLUG_EN, 0);
I915_WRITE(PORT_HOTPLUG_STAT, I915_READ(PORT_HOTPLUG_STAT));
}
I915_WRITE16(HWSTAM, 0xffff);
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
/* Clear enable bits; then clear status bits */
I915_WRITE(PIPESTAT(pipe), 0);
I915_WRITE(PIPESTAT(pipe), I915_READ(PIPESTAT(pipe)));
I915_WRITE(PORT_HOTPLUG_STAT, I915_READ(PORT_HOTPLUG_STAT));
I915_WRITE(HWSTAM, 0xeffe);
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
I915_WRITE(PIPESTAT(pipe), 0);
I915_WRITE(IMR, 0xffffffff);
I915_WRITE(IER, 0x0);
static void i915_hpd_irq_setup(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- struct drm_mode_config *mode_config = &dev->mode_config;
struct intel_encoder *intel_encoder;
u32 hotplug_en;
hotplug_en &= ~HOTPLUG_INT_EN_MASK;
/* Note HDMI and DP share hotplug bits */
/* enable bits are the same for all generations */
- list_for_each_entry(intel_encoder, &mode_config->encoder_list, base.head)
+ for_each_intel_encoder(dev, intel_encoder)
if (dev_priv->hpd_stats[intel_encoder->hpd_pin].hpd_mark == HPD_ENABLED)
hotplug_en |= hpd_mask_i915[intel_encoder->hpd_pin];
/* Programming the CRT detection parameters tends
"Command parser error, iir 0x%08x",
iir);
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
int reg = PIPESTAT(pipe);
pipe_stats[pipe] = I915_READ(reg);
if (iir & I915_BSD_USER_INTERRUPT)
notify_ring(dev, &dev_priv->ring[VCS]);
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS &&
i915_handle_vblank(dev, pipe, pipe, iir))
flip_mask &= ~DISPLAY_PLANE_FLIP_PENDING(pipe);
if (!dev_priv)
return;
- intel_hpd_irq_uninstall(dev_priv);
-
I915_WRITE(PORT_HOTPLUG_EN, 0);
I915_WRITE(PORT_HOTPLUG_STAT, I915_READ(PORT_HOTPLUG_STAT));
I915_WRITE(HWSTAM, 0xffffffff);
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
I915_WRITE(PIPESTAT(pipe), 0);
I915_WRITE(IMR, 0xffffffff);
I915_WRITE(IER, 0x0);
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
I915_WRITE(PIPESTAT(pipe),
I915_READ(PIPESTAT(pipe)) & 0x8000ffff);
I915_WRITE(IIR, I915_READ(IIR));
}
-static void intel_hpd_irq_reenable(unsigned long data)
+static void intel_hpd_irq_reenable(struct work_struct *work)
{
- struct drm_i915_private *dev_priv = (struct drm_i915_private *)data;
+ struct drm_i915_private *dev_priv =
+ container_of(work, typeof(*dev_priv),
+ hotplug_reenable_work.work);
struct drm_device *dev = dev_priv->dev;
struct drm_mode_config *mode_config = &dev->mode_config;
unsigned long irqflags;
int i;
+ intel_runtime_pm_get(dev_priv);
+
spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
for (i = (HPD_NONE + 1); i < HPD_NUM_PINS; i++) {
struct drm_connector *connector;
if (dev_priv->display.hpd_irq_setup)
dev_priv->display.hpd_irq_setup(dev);
spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+
+ intel_runtime_pm_put(dev_priv);
}
void intel_irq_init(struct drm_device *dev)
INIT_WORK(&dev_priv->l3_parity.error_work, ivybridge_parity_work);
/* Let's track the enabled rps events */
- if (IS_VALLEYVIEW(dev))
- /* WaGsvRC0ResidenncyMethod:VLV */
+ if (IS_VALLEYVIEW(dev) && !IS_CHERRYVIEW(dev))
+ /* WaGsvRC0ResidencyMethod:vlv */
dev_priv->pm_rps_events = GEN6_PM_RP_UP_EI_EXPIRED;
else
dev_priv->pm_rps_events = GEN6_PM_RPS_EVENTS;
setup_timer(&dev_priv->gpu_error.hangcheck_timer,
i915_hangcheck_elapsed,
(unsigned long) dev);
- setup_timer(&dev_priv->hotplug_reenable_timer, intel_hpd_irq_reenable,
- (unsigned long) dev_priv);
+ INIT_DELAYED_WORK(&dev_priv->hotplug_reenable_work,
+ intel_hpd_irq_reenable);
pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
dev->max_vblank_count = 0xffffff; /* only 24 bits of frame count */
}
+ /*
+ * Opt out of the vblank disable timer on everything except gen2.
+ * Gen2 doesn't have a hardware frame counter and so depends on
+	 * vblank interrupts to produce sane vblank sequence numbers.
+ */
+ if (!IS_GEN2(dev))
+ dev->vblank_disable_immediate = true;
+
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
dev->driver->get_vblank_timestamp = i915_get_vblank_timestamp;
dev->driver->get_scanout_position = i915_get_crtc_scanoutpos;
.vbt_sdvo_panel_type = -1,
.enable_rc6 = -1,
.enable_fbc = -1,
+ .enable_execlists = 0,
.enable_hangcheck = true,
.enable_ppgtt = -1,
.enable_psr = 0,
"Override PPGTT usage. "
"(-1=auto [default], 0=disabled, 1=aliasing, 2=full)");
+module_param_named(enable_execlists, i915.enable_execlists, int, 0400);
+MODULE_PARM_DESC(enable_execlists,
+ "Override execlists usage. "
+ "(-1=auto, 0=disabled [default], 1=enabled)");
+
module_param_named(enable_psr, i915.enable_psr, int, 0600);
MODULE_PARM_DESC(enable_psr, "Enable PSR (default: false)");
#define GAB_CTL 0x24000
#define GAB_CTL_CONT_AFTER_PAGEFAULT (1<<8)
+#define GEN7_BIOS_RESERVED 0x1082C0
+#define GEN7_BIOS_RESERVED_1M (0 << 5)
+#define GEN7_BIOS_RESERVED_256K (1 << 5)
+#define GEN8_BIOS_RESERVED_SHIFT 7
+#define GEN7_BIOS_RESERVED_MASK 0x1
+#define GEN8_BIOS_RESERVED_MASK 0x3
+
/* VGA stuff */
#define VGA_ST01_MDA 0x3ba
#define MI_SEMAPHORE_POLL (1<<15)
#define MI_SEMAPHORE_SAD_GTE_SDD (1<<12)
#define MI_STORE_DWORD_IMM MI_INSTR(0x20, 1)
+#define MI_STORE_DWORD_IMM_GEN8 MI_INSTR(0x20, 2)
#define MI_MEM_VIRTUAL (1 << 22) /* 965+ only */
#define MI_STORE_DWORD_INDEX MI_INSTR(0x21, 1)
#define MI_STORE_DWORD_INDEX_SHIFT 2
* address/value pairs. Don't overdo it, though, x <= 2^4 must hold!
*/
#define MI_LOAD_REGISTER_IMM(x) MI_INSTR(0x22, 2*(x)-1)
+#define MI_LRI_FORCE_POSTED (1<<12)
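/* (Editor's note) The 2*(x)-1 length works out as: 1 header dword plus 2
 * dwords per pair gives 2x+1 dwords total, and MI length fields encode
 * total dwords minus two. E.g. MI_LOAD_REGISTER_IMM(1) carries a length
 * field of 1.
 */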
#define MI_STORE_REGISTER_MEM(x) MI_INSTR(0x24, 2*(x)-1)
#define MI_STORE_REGISTER_MEM_GEN8(x) MI_INSTR(0x24, 3*(x)-1)
#define MI_SRM_LRM_GLOBAL_GTT (1<<22)
#define GFX_OP_DESTBUFFER_INFO ((0x3<<29)|(0x1d<<24)|(0x8e<<16)|1)
#define GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3))
#define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2)
-#define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4)
+
+#define COLOR_BLT_CMD (2<<29 | 0x40<<22 | (5-2))
+#define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4)
#define XY_SRC_COPY_BLT_CMD ((2<<29)|(0x53<<22)|6)
#define XY_MONO_SRC_COPY_IMM_BLT ((2<<29)|(0x71<<22)|5)
-#define XY_SRC_COPY_BLT_WRITE_ALPHA (1<<21)
-#define XY_SRC_COPY_BLT_WRITE_RGB (1<<20)
+#define BLT_WRITE_A (2<<20)
+#define BLT_WRITE_RGB (1<<20)
+#define BLT_WRITE_RGBA (BLT_WRITE_RGB | BLT_WRITE_A)
#define BLT_DEPTH_8 (0<<24)
#define BLT_DEPTH_16_565 (1<<24)
#define BLT_DEPTH_16_1555 (2<<24)
#define BLT_DEPTH_32 (3<<24)
-#define BLT_ROP_GXCOPY (0xcc<<16)
+#define BLT_ROP_SRC_COPY (0xcc<<16)
+#define BLT_ROP_COLOR_COPY (0xf0<<16)
#define XY_SRC_COPY_BLT_SRC_TILED (1<<15) /* 965+ only */
#define XY_SRC_COPY_BLT_DST_TILED (1<<11) /* 965+ only */
#define CMD_OP_DISPLAYBUFFER_INFO ((0x0<<29)|(0x14<<23)|2)
#define BUNIT_REG_BISOC 0x11
#define PUNIT_REG_DSPFREQ 0x36
+#define DSPFREQSTAT_SHIFT_CHV 24
+#define DSPFREQSTAT_MASK_CHV (0x1f << DSPFREQSTAT_SHIFT_CHV)
+#define DSPFREQGUAR_SHIFT_CHV 8
+#define DSPFREQGUAR_MASK_CHV (0x1f << DSPFREQGUAR_SHIFT_CHV)
#define DSPFREQSTAT_SHIFT 30
#define DSPFREQSTAT_MASK (0x3 << DSPFREQSTAT_SHIFT)
#define DSPFREQGUAR_SHIFT 14
#define DSPFREQGUAR_MASK (0x3 << DSPFREQGUAR_SHIFT)
+#define _DP_SSC(val, pipe) ((val) << (2 * (pipe)))
+#define DP_SSC_MASK(pipe) _DP_SSC(0x3, (pipe))
+#define DP_SSC_PWR_ON(pipe) _DP_SSC(0x0, (pipe))
+#define DP_SSC_CLK_GATE(pipe) _DP_SSC(0x1, (pipe))
+#define DP_SSC_RESET(pipe) _DP_SSC(0x2, (pipe))
+#define DP_SSC_PWR_GATE(pipe) _DP_SSC(0x3, (pipe))
+#define _DP_SSS(val, pipe) ((val) << (2 * (pipe) + 16))
+#define DP_SSS_MASK(pipe) _DP_SSS(0x3, (pipe))
+#define DP_SSS_PWR_ON(pipe) _DP_SSS(0x0, (pipe))
+#define DP_SSS_CLK_GATE(pipe) _DP_SSS(0x1, (pipe))
+#define DP_SSS_RESET(pipe) _DP_SSS(0x2, (pipe))
+#define DP_SSS_PWR_GATE(pipe) _DP_SSS(0x3, (pipe))
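/* (Editor's note) A hedged sketch of how these per-pipe 2-bit fields are
 * used together with the existing punit sideband helpers (illustrative
 * only, assuming they live in PUNIT_REG_DSPFREQ as placed above):
 *
 *	val = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ);
 *	val &= ~DP_SSC_MASK(pipe);
 *	val |= DP_SSC_PWR_GATE(pipe);
 *	vlv_punit_write(dev_priv, PUNIT_REG_DSPFREQ, val);
 *
 * after which the caller would poll until the status half matches:
 * (vlv_punit_read(...) & DP_SSS_MASK(pipe)) == DP_SSS_PWR_GATE(pipe).
 */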
/* See the PUNIT HAS v0.8 for the below bits */
enum punit_power_well {
PUNIT_POWER_WELL_DPIO_TX_C_LANES_23 = 9,
PUNIT_POWER_WELL_DPIO_RX0 = 10,
PUNIT_POWER_WELL_DPIO_RX1 = 11,
+ PUNIT_POWER_WELL_DPIO_CMN_D = 12,
+ /* FIXME: guesswork below */
+ PUNIT_POWER_WELL_DPIO_TX_D_LANES_01 = 13,
+ PUNIT_POWER_WELL_DPIO_TX_D_LANES_23 = 14,
+ PUNIT_POWER_WELL_DPIO_RX2 = 15,
PUNIT_POWER_WELL_NUM,
};
#define _VLV_TX_DW2_CH0 0x8288
#define _VLV_TX_DW2_CH1 0x8488
-#define DPIO_SWING_MARGIN_SHIFT 16
-#define DPIO_SWING_MARGIN_MASK (0xff << DPIO_SWING_MARGIN_SHIFT)
+#define DPIO_SWING_MARGIN000_SHIFT 16
+#define DPIO_SWING_MARGIN000_MASK (0xff << DPIO_SWING_MARGIN000_SHIFT)
#define DPIO_UNIQ_TRANS_SCALE_SHIFT 8
#define VLV_TX_DW2(ch) _PORT(ch, _VLV_TX_DW2_CH0, _VLV_TX_DW2_CH1)
#define _VLV_TX_DW3_CH1 0x848c
/* The following bit for CHV phy */
#define DPIO_TX_UNIQ_TRANS_SCALE_EN (1<<27)
+#define DPIO_SWING_MARGIN101_SHIFT 16
+#define DPIO_SWING_MARGIN101_MASK (0xff << DPIO_SWING_MARGIN101_SHIFT)
#define VLV_TX_DW3(ch) _PORT(ch, _VLV_TX_DW3_CH0, _VLV_TX_DW3_CH1)
#define _VLV_TX_DW4_CH0 0x8290
#define _VLV_TX_DW4_CH1 0x8490
#define DPIO_SWING_DEEMPH9P5_SHIFT 24
#define DPIO_SWING_DEEMPH9P5_MASK (0xff << DPIO_SWING_DEEMPH9P5_SHIFT)
+#define DPIO_SWING_DEEMPH6P0_SHIFT 16
+#define DPIO_SWING_DEEMPH6P0_MASK (0xff << DPIO_SWING_DEEMPH6P0_SHIFT)
#define VLV_TX_DW4(ch) _PORT(ch, _VLV_TX_DW4_CH0, _VLV_TX_DW4_CH1)
#define _VLV_TX3_DW4_CH0 0x690
#define PGTBL_ADDRESS_LO_MASK 0xfffff000 /* bits [31:12] */
#define PGTBL_ADDRESS_HI_MASK 0x000000f0 /* bits [35:32] (gen4) */
#define PGTBL_ER 0x02024
+#define PRB0_BASE (0x2030-0x30)
+#define PRB1_BASE (0x2040-0x30) /* 830,gen3 */
+#define PRB2_BASE (0x2050-0x30) /* gen3 */
+#define SRB0_BASE (0x2100-0x30) /* gen2 */
+#define SRB1_BASE (0x2110-0x30) /* gen2 */
+#define SRB2_BASE (0x2120-0x30) /* 830 */
+#define SRB3_BASE (0x2130-0x30) /* 830 */
#define RENDER_RING_BASE 0x02000
#define BSD_RING_BASE 0x04000
#define GEN6_BSD_RING_BASE 0x12000
#define RING_ACTHD_UDW(base) ((base)+0x5c)
#define RING_NOPID(base) ((base)+0x94)
#define RING_IMR(base) ((base)+0xa8)
+#define RING_HWSTAM(base) ((base)+0x98)
#define RING_TIMESTAMP(base) ((base)+0x358)
#define TAIL_ADDR 0x001FFFF8
#define HEAD_WRAP_COUNT 0xFFE00000
#define INSTPM_TLB_INVALIDATE (1<<9)
#define INSTPM_SYNC_FLUSH (1<<5)
#define ACTHD 0x020c8
+#define MEM_MODE 0x020cc
+#define MEM_DISPLAY_B_TRICKLE_FEED_DISABLE (1<<3) /* 830 only */
+#define MEM_DISPLAY_A_TRICKLE_FEED_DISABLE (1<<2) /* 830/845 only */
+#define MEM_DISPLAY_TRICKLE_FEED_DISABLE (1<<2) /* 85x only */
#define FW_BLC 0x020d8
#define FW_BLC2 0x020dc
#define FW_BLC_SELF 0x020e0 /* 915+ only */
#define GT_BSD_CS_ERROR_INTERRUPT (1 << 15)
#define GT_BSD_USER_INTERRUPT (1 << 12)
#define GT_RENDER_L3_PARITY_ERROR_INTERRUPT_S1 (1 << 11) /* hsw+; rsvd on snb, ivb, vlv */
+#define GT_CONTEXT_SWITCH_INTERRUPT (1 << 8)
#define GT_RENDER_L3_PARITY_ERROR_INTERRUPT (1 << 5) /* !snb */
#define GT_RENDER_PIPECTL_NOTIFY_INTERRUPT (1 << 4)
#define GT_RENDER_CS_MASTER_ERROR_INTERRUPT (1 << 3)
/* Framebuffer compression for Ironlake */
#define ILK_DPFC_CB_BASE 0x43200
#define ILK_DPFC_CONTROL 0x43208
+#define FBC_CTL_FALSE_COLOR (1<<10)
/* The bit 28-8 is reserved */
#define DPFC_RESERVED (0x1FFFFF00)
#define ILK_DPFC_RECOMP_CTL 0x4320c
#define DPIO_PHY_STATUS (VLV_DISPLAY_BASE + 0x6240)
#define DPLL_PORTD_READY_MASK (0xf)
#define DISPLAY_PHY_CONTROL (VLV_DISPLAY_BASE + 0x60100)
-#define PHY_COM_LANE_RESET_DEASSERT(phy, val) \
- ((phy == DPIO_PHY0) ? (val | 1) : (val | 2))
-#define PHY_COM_LANE_RESET_ASSERT(phy, val) \
- ((phy == DPIO_PHY0) ? (val & ~1) : (val & ~2))
+#define PHY_COM_LANE_RESET_DEASSERT(phy) (1 << (phy))
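/* (Editor's note) PHY_COM_LANE_RESET_DEASSERT() is now a plain bit mask
 * (DPIO_PHY0 -> bit 0, DPIO_PHY1 -> bit 1) instead of taking the current
 * register value; callers OR it in or clear it themselves, e.g.
 * I915_WRITE(DISPLAY_PHY_CONTROL, val | PHY_COM_LANE_RESET_DEASSERT(phy)).
 */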
#define DISPLAY_PHY_STATUS (VLV_DISPLAY_BASE + 0x60104)
-#define PHY_POWERGOOD(phy) ((phy == DPIO_PHY0) ? (1<<31) : (1<<30))
+#define PHY_POWERGOOD(phy) (((phy) == DPIO_PHY0) ? (1<<31) : (1<<30))
/*
* The i830 generation, in LVDS mode, defines P1 as the bit number set within
#define _PIPEASRC 0x6001c
#define _BCLRPAT_A 0x60020
#define _VSYNCSHIFT_A 0x60028
+#define _PIPE_MULT_A 0x6002c
/* Pipe B timing regs */
#define _HTOTAL_B 0x61000
#define _PIPEBSRC 0x6101c
#define _BCLRPAT_B 0x61020
#define _VSYNCSHIFT_B 0x61028
+#define _PIPE_MULT_B 0x6102c
#define TRANSCODER_A_OFFSET 0x60000
#define TRANSCODER_B_OFFSET 0x61000
#define BCLRPAT(trans) _TRANSCODER2(trans, _BCLRPAT_A)
#define VSYNCSHIFT(trans) _TRANSCODER2(trans, _VSYNCSHIFT_A)
#define PIPESRC(trans) _TRANSCODER2(trans, _PIPEASRC)
+#define PIPE_MULT(trans) _TRANSCODER2(trans, _PIPE_MULT_A)
/* HSW+ eDP PSR registers */
#define EDP_PSR_BASE(dev) (IS_HASWELL(dev) ? 0x64800 : 0x6f800)
#define DP_LINK_TRAIN_OFF (3 << 28)
#define DP_LINK_TRAIN_MASK (3 << 28)
#define DP_LINK_TRAIN_SHIFT 28
+#define DP_LINK_TRAIN_PAT_3_CHV (1 << 14)
+#define DP_LINK_TRAIN_MASK_CHV ((3 << 28)|(1<<14))
/* CPT Link training mode */
#define DP_LINK_TRAIN_PAT_1_CPT (0 << 8)
#define PIPE_VSYNC_INTERRUPT_STATUS (1UL<<9)
#define PIPE_DISPLAY_LINE_COMPARE_STATUS (1UL<<8)
#define PIPE_DPST_EVENT_STATUS (1UL<<7)
-#define PIPE_LEGACY_BLC_EVENT_STATUS (1UL<<6)
#define PIPE_A_PSR_STATUS_VLV (1UL<<6)
#define PIPE_LEGACY_BLC_EVENT_STATUS (1UL<<6)
#define PIPE_ODD_FIELD_INTERRUPT_STATUS (1UL<<5)
#define DSPARB_BEND_SHIFT 9 /* on 855 */
#define DSPARB_AEND_SHIFT 0
+/* pnv/gen4/g4x/vlv/chv */
#define DSPFW1 (dev_priv->info.display_mmio_offset + 0x70034)
-#define DSPFW_SR_SHIFT 23
-#define DSPFW_SR_MASK (0x1ff<<23)
-#define DSPFW_CURSORB_SHIFT 16
-#define DSPFW_CURSORB_MASK (0x3f<<16)
-#define DSPFW_PLANEB_SHIFT 8
-#define DSPFW_PLANEB_MASK (0x7f<<8)
-#define DSPFW_PLANEA_MASK (0x7f)
+#define DSPFW_SR_SHIFT 23
+#define DSPFW_SR_MASK (0x1ff<<23)
+#define DSPFW_CURSORB_SHIFT 16
+#define DSPFW_CURSORB_MASK (0x3f<<16)
+#define DSPFW_PLANEB_SHIFT 8
+#define DSPFW_PLANEB_MASK (0x7f<<8)
+#define DSPFW_PLANEB_MASK_VLV (0xff<<8) /* vlv/chv */
+#define DSPFW_PLANEA_SHIFT 0
+#define DSPFW_PLANEA_MASK (0x7f<<0)
+#define DSPFW_PLANEA_MASK_VLV (0xff<<0) /* vlv/chv */
#define DSPFW2 (dev_priv->info.display_mmio_offset + 0x70038)
-#define DSPFW_CURSORA_MASK 0x00003f00
-#define DSPFW_CURSORA_SHIFT 8
-#define DSPFW_PLANEC_MASK (0x7f)
+#define DSPFW_FBC_SR_EN (1<<31) /* g4x */
+#define DSPFW_FBC_SR_SHIFT 28
+#define DSPFW_FBC_SR_MASK (0x7<<28) /* g4x */
+#define DSPFW_FBC_HPLL_SR_SHIFT 24
+#define DSPFW_FBC_HPLL_SR_MASK (0xf<<24) /* g4x */
+#define DSPFW_SPRITEB_SHIFT (16)
+#define DSPFW_SPRITEB_MASK (0x7f<<16) /* g4x */
+#define DSPFW_SPRITEB_MASK_VLV (0xff<<16) /* vlv/chv */
+#define DSPFW_CURSORA_SHIFT 8
+#define DSPFW_CURSORA_MASK (0x3f<<8)
+#define DSPFW_PLANEC_SHIFT_OLD 0
+#define DSPFW_PLANEC_MASK_OLD (0x7f<<0) /* pre-gen4 sprite C */
+#define DSPFW_SPRITEA_SHIFT 0
+#define DSPFW_SPRITEA_MASK (0x7f<<0) /* g4x */
+#define DSPFW_SPRITEA_MASK_VLV (0xff<<0) /* vlv/chv */
#define DSPFW3 (dev_priv->info.display_mmio_offset + 0x7003c)
-#define DSPFW_HPLL_SR_EN (1<<31)
-#define DSPFW_CURSOR_SR_SHIFT 24
+#define DSPFW_HPLL_SR_EN (1<<31)
#define PINEVIEW_SELF_REFRESH_EN (1<<30)
+#define DSPFW_CURSOR_SR_SHIFT 24
#define DSPFW_CURSOR_SR_MASK (0x3f<<24)
#define DSPFW_HPLL_CURSOR_SHIFT 16
#define DSPFW_HPLL_CURSOR_MASK (0x3f<<16)
-#define DSPFW_HPLL_SR_MASK (0x1ff)
-#define DSPFW4 (dev_priv->info.display_mmio_offset + 0x70070)
-#define DSPFW7 (dev_priv->info.display_mmio_offset + 0x7007c)
+#define DSPFW_HPLL_SR_SHIFT 0
+#define DSPFW_HPLL_SR_MASK (0x1ff<<0)
+
+/* vlv/chv */
+#define DSPFW4 (VLV_DISPLAY_BASE + 0x70070)
+#define DSPFW_SPRITEB_WM1_SHIFT 16
+#define DSPFW_SPRITEB_WM1_MASK (0xff<<16)
+#define DSPFW_CURSORA_WM1_SHIFT 8
+#define DSPFW_CURSORA_WM1_MASK (0x3f<<8)
+#define DSPFW_SPRITEA_WM1_SHIFT 0
+#define DSPFW_SPRITEA_WM1_MASK (0xff<<0)
+#define DSPFW5 (VLV_DISPLAY_BASE + 0x70074)
+#define DSPFW_PLANEB_WM1_SHIFT 24
+#define DSPFW_PLANEB_WM1_MASK (0xff<<24)
+#define DSPFW_PLANEA_WM1_SHIFT 16
+#define DSPFW_PLANEA_WM1_MASK (0xff<<16)
+#define DSPFW_CURSORB_WM1_SHIFT 8
+#define DSPFW_CURSORB_WM1_MASK (0x3f<<8)
+#define DSPFW_CURSOR_SR_WM1_SHIFT 0
+#define DSPFW_CURSOR_SR_WM1_MASK (0x3f<<0)
+#define DSPFW6 (VLV_DISPLAY_BASE + 0x70078)
+#define DSPFW_SR_WM1_SHIFT 0
+#define DSPFW_SR_WM1_MASK (0x1ff<<0)
+#define DSPFW7 (VLV_DISPLAY_BASE + 0x7007c)
+#define DSPFW7_CHV (VLV_DISPLAY_BASE + 0x700b4) /* wtf #1? */
+#define DSPFW_SPRITED_WM1_SHIFT 24
+#define DSPFW_SPRITED_WM1_MASK (0xff<<24)
+#define DSPFW_SPRITED_SHIFT 16
+#define DSPFW_SPRITED_MASK (0xff<<16)
+#define DSPFW_SPRITEC_WM1_SHIFT 8
+#define DSPFW_SPRITEC_WM1_MASK (0xff<<8)
+#define DSPFW_SPRITEC_SHIFT 0
+#define DSPFW_SPRITEC_MASK (0xff<<0)
+#define DSPFW8_CHV (VLV_DISPLAY_BASE + 0x700b8)
+#define DSPFW_SPRITEF_WM1_SHIFT 24
+#define DSPFW_SPRITEF_WM1_MASK (0xff<<24)
+#define DSPFW_SPRITEF_SHIFT 16
+#define DSPFW_SPRITEF_MASK (0xff<<16)
+#define DSPFW_SPRITEE_WM1_SHIFT 8
+#define DSPFW_SPRITEE_WM1_MASK (0xff<<8)
+#define DSPFW_SPRITEE_SHIFT 0
+#define DSPFW_SPRITEE_MASK (0xff<<0)
+#define DSPFW9_CHV (VLV_DISPLAY_BASE + 0x7007c) /* wtf #2? */
+#define DSPFW_PLANEC_WM1_SHIFT 24
+#define DSPFW_PLANEC_WM1_MASK (0xff<<24)
+#define DSPFW_PLANEC_SHIFT 16
+#define DSPFW_PLANEC_MASK (0xff<<16)
+#define DSPFW_CURSORC_WM1_SHIFT 8
+#define DSPFW_CURSORC_WM1_MASK (0x3f<<8)
+#define DSPFW_CURSORC_SHIFT 0
+#define DSPFW_CURSORC_MASK (0x3f<<0)
+
+/* vlv/chv high order bits */
+#define DSPHOWM (VLV_DISPLAY_BASE + 0x70064)
+#define DSPFW_SR_HI_SHIFT 24
+#define DSPFW_SR_HI_MASK (1<<24)
+#define DSPFW_SPRITEF_HI_SHIFT 23
+#define DSPFW_SPRITEF_HI_MASK (1<<23)
+#define DSPFW_SPRITEE_HI_SHIFT 22
+#define DSPFW_SPRITEE_HI_MASK (1<<22)
+#define DSPFW_PLANEC_HI_SHIFT 21
+#define DSPFW_PLANEC_HI_MASK (1<<21)
+#define DSPFW_SPRITED_HI_SHIFT 20
+#define DSPFW_SPRITED_HI_MASK (1<<20)
+#define DSPFW_SPRITEC_HI_SHIFT 16
+#define DSPFW_SPRITEC_HI_MASK (1<<16)
+#define DSPFW_PLANEB_HI_SHIFT 12
+#define DSPFW_PLANEB_HI_MASK (1<<12)
+#define DSPFW_SPRITEB_HI_SHIFT 8
+#define DSPFW_SPRITEB_HI_MASK (1<<8)
+#define DSPFW_SPRITEA_HI_SHIFT 4
+#define DSPFW_SPRITEA_HI_MASK (1<<4)
+#define DSPFW_PLANEA_HI_SHIFT 0
+#define DSPFW_PLANEA_HI_MASK (1<<0)
+#define DSPHOWM1 (VLV_DISPLAY_BASE + 0x70068)
+#define DSPFW_SR_WM1_HI_SHIFT 24
+#define DSPFW_SR_WM1_HI_MASK (1<<24)
+#define DSPFW_SPRITEF_WM1_HI_SHIFT 23
+#define DSPFW_SPRITEF_WM1_HI_MASK (1<<23)
+#define DSPFW_SPRITEE_WM1_HI_SHIFT 22
+#define DSPFW_SPRITEE_WM1_HI_MASK (1<<22)
+#define DSPFW_PLANEC_WM1_HI_SHIFT 21
+#define DSPFW_PLANEC_WM1_HI_MASK (1<<21)
+#define DSPFW_SPRITED_WM1_HI_SHIFT 20
+#define DSPFW_SPRITED_WM1_HI_MASK (1<<20)
+#define DSPFW_SPRITEC_WM1_HI_SHIFT 16
+#define DSPFW_SPRITEC_WM1_HI_MASK (1<<16)
+#define DSPFW_PLANEB_WM1_HI_SHIFT 12
+#define DSPFW_PLANEB_WM1_HI_MASK (1<<12)
+#define DSPFW_SPRITEB_WM1_HI_SHIFT 8
+#define DSPFW_SPRITEB_WM1_HI_MASK (1<<8)
+#define DSPFW_SPRITEA_WM1_HI_SHIFT 4
+#define DSPFW_SPRITEA_WM1_HI_MASK (1<<4)
+#define DSPFW_PLANEA_WM1_HI_SHIFT 0
+#define DSPFW_PLANEA_WM1_HI_MASK (1<<0)
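/* (Editor's note) Each *_HI bit above is understood to supply one extra
 * high-order bit for the correspondingly named watermark field in the
 * DSPFW registers, extending the 8-bit (9-bit for SR) fields; treat the
 * exact resulting widths as an assumption where the spec is unclear.
 */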
/* drain latency register values*/
#define DRAIN_LATENCY_PRECISION_32 32
#define DRAIN_LATENCY_PRECISION_64 64
-#define VLV_DDL1 (VLV_DISPLAY_BASE + 0x70050)
-#define DDL_CURSORA_PRECISION_64 (1<<31)
-#define DDL_CURSORA_PRECISION_32 (0<<31)
-#define DDL_CURSORA_SHIFT 24
-#define DDL_SPRITEB_PRECISION_64 (1<<23)
-#define DDL_SPRITEB_PRECISION_32 (0<<23)
-#define DDL_SPRITEB_SHIFT 16
-#define DDL_SPRITEA_PRECISION_64 (1<<15)
-#define DDL_SPRITEA_PRECISION_32 (0<<15)
-#define DDL_SPRITEA_SHIFT 8
-#define DDL_PLANEA_PRECISION_64 (1<<7)
-#define DDL_PLANEA_PRECISION_32 (0<<7)
-#define DDL_PLANEA_SHIFT 0
-
-#define VLV_DDL2 (VLV_DISPLAY_BASE + 0x70054)
-#define DDL_CURSORB_PRECISION_64 (1<<31)
-#define DDL_CURSORB_PRECISION_32 (0<<31)
-#define DDL_CURSORB_SHIFT 24
-#define DDL_SPRITED_PRECISION_64 (1<<23)
-#define DDL_SPRITED_PRECISION_32 (0<<23)
-#define DDL_SPRITED_SHIFT 16
-#define DDL_SPRITEC_PRECISION_64 (1<<15)
-#define DDL_SPRITEC_PRECISION_32 (0<<15)
-#define DDL_SPRITEC_SHIFT 8
-#define DDL_PLANEB_PRECISION_64 (1<<7)
-#define DDL_PLANEB_PRECISION_32 (0<<7)
-#define DDL_PLANEB_SHIFT 0
-
-#define VLV_DDL3 (VLV_DISPLAY_BASE + 0x70058)
-#define DDL_CURSORC_PRECISION_64 (1<<31)
-#define DDL_CURSORC_PRECISION_32 (0<<31)
-#define DDL_CURSORC_SHIFT 24
-#define DDL_SPRITEF_PRECISION_64 (1<<23)
-#define DDL_SPRITEF_PRECISION_32 (0<<23)
-#define DDL_SPRITEF_SHIFT 16
-#define DDL_SPRITEE_PRECISION_64 (1<<15)
-#define DDL_SPRITEE_PRECISION_32 (0<<15)
-#define DDL_SPRITEE_SHIFT 8
-#define DDL_PLANEC_PRECISION_64 (1<<7)
-#define DDL_PLANEC_PRECISION_32 (0<<7)
-#define DDL_PLANEC_SHIFT 0
+#define VLV_DDL(pipe) (VLV_DISPLAY_BASE + 0x70050 + 4 * (pipe))
+#define DDL_CURSOR_PRECISION_64 (1<<31)
+#define DDL_CURSOR_PRECISION_32 (0<<31)
+#define DDL_CURSOR_SHIFT 24
+#define DDL_SPRITE_PRECISION_64(sprite) (1<<(15+8*(sprite)))
+#define DDL_SPRITE_PRECISION_32(sprite) (0<<(15+8*(sprite)))
+#define DDL_SPRITE_SHIFT(sprite) (8+8*(sprite))
+#define DDL_PLANE_PRECISION_64 (1<<7)
+#define DDL_PLANE_PRECISION_32 (0<<7)
+#define DDL_PLANE_SHIFT 0
+#define DRAIN_LATENCY_MASK 0x7f
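/* (Editor's note) With the parametrized form, programming e.g. sprite 1 on
 * a pipe reduces to one write; a hedged sketch (drain_latency made up):
 *
 *	I915_WRITE(VLV_DDL(pipe),
 *		   DDL_SPRITE_PRECISION_64(1) |
 *		   (drain_latency << DDL_SPRITE_SHIFT(1)));
 *
 * i.e. the plane, sprite 0, sprite 1 and cursor fields sit at 8-bit
 * intervals within each pipe's DDL register.
 */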
/* FIFO watermark sizes etc */
#define G4X_FIFO_LINE_SIZE 64
/* Old style CUR*CNTR flags (desktop 8xx) */
#define CURSOR_ENABLE 0x80000000
#define CURSOR_GAMMA_ENABLE 0x40000000
-#define CURSOR_STRIDE_MASK 0x30000000
+#define CURSOR_STRIDE_SHIFT 28
+#define CURSOR_STRIDE(x) ((ffs(x)-9) << CURSOR_STRIDE_SHIFT) /* 256,512,1k,2k */
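/* (Editor's note) CURSOR_STRIDE() maps the four legal strides onto the
 * 2-bit field: ffs(256)-9 = 0, ffs(512)-9 = 1, ffs(1024)-9 = 2,
 * ffs(2048)-9 = 3.
 */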
#define CURSOR_PIPE_CSC_ENABLE (1<<24)
#define CURSOR_FORMAT_SHIFT 24
#define CURSOR_FORMAT_MASK (0x07 << CURSOR_FORMAT_SHIFT)
#define DISPPLANE_NO_LINE_DOUBLE 0
#define DISPPLANE_STEREO_POLARITY_FIRST 0
#define DISPPLANE_STEREO_POLARITY_SECOND (1<<18)
+#define DISPPLANE_ROTATE_180 (1<<15)
#define DISPPLANE_TRICKLE_FEED_DISABLE (1<<14) /* Ironlake */
#define DISPPLANE_TILED (1<<10)
#define _DSPAADDR 0x70184
#define DVS_YUV_ORDER_UYVY (1<<16)
#define DVS_YUV_ORDER_YVYU (2<<16)
#define DVS_YUV_ORDER_VYUY (3<<16)
+#define DVS_ROTATE_180 (1<<15)
#define DVS_DEST_KEY (1<<2)
#define DVS_TRICKLE_FEED_DISABLE (1<<14)
#define DVS_TILED (1<<10)
#define SPRITE_YUV_ORDER_UYVY (1<<16)
#define SPRITE_YUV_ORDER_YVYU (2<<16)
#define SPRITE_YUV_ORDER_VYUY (3<<16)
+#define SPRITE_ROTATE_180 (1<<15)
#define SPRITE_TRICKLE_FEED_DISABLE (1<<14)
#define SPRITE_INT_GAMMA_ENABLE (1<<13)
#define SPRITE_TILED (1<<10)
#define SP_YUV_ORDER_UYVY (1<<16)
#define SP_YUV_ORDER_YVYU (2<<16)
#define SP_YUV_ORDER_VYUY (3<<16)
+#define SP_ROTATE_180 (1<<15)
#define SP_TILED (1<<10)
#define _SPALINOFF (VLV_DISPLAY_BASE + 0x72184)
#define _SPASTRIDE (VLV_DISPLAY_BASE + 0x72188)
#define PIPEA_PP_STATUS (VLV_DISPLAY_BASE + 0x61200)
#define PIPEA_PP_CONTROL (VLV_DISPLAY_BASE + 0x61204)
#define PIPEA_PP_ON_DELAYS (VLV_DISPLAY_BASE + 0x61208)
-#define PANEL_PORT_SELECT_DPB_VLV (1 << 30)
-#define PANEL_PORT_SELECT_DPC_VLV (2 << 30)
+#define PANEL_PORT_SELECT_VLV(port) ((port) << 30)
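/* (Editor's note) (port) here is the enum port value, so
 * PANEL_PORT_SELECT_VLV(PORT_B) == (1 << 30) and
 * PANEL_PORT_SELECT_VLV(PORT_C) == (2 << 30), matching the removed
 * DPB/DPC constants.
 */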
#define PIPEA_PP_OFF_DELAYS (VLV_DISPLAY_BASE + 0x6120c)
#define PIPEA_PP_DIVISOR (VLV_DISPLAY_BASE + 0x61210)
#define VLV_GTLC_ALLOWWAKEERR (1 << 1)
#define VLV_GTLC_PW_MEDIA_STATUS_MASK (1 << 5)
#define VLV_GTLC_PW_RENDER_STATUS_MASK (1 << 7)
-#define VLV_GTLC_SURVIVABILITY_REG 0x130098
#define FORCEWAKE_MT 0xa188 /* multi-threaded */
#define FORCEWAKE_KERNEL 0x1
#define FORCEWAKE_USER 0x2
GEN6_PM_RP_DOWN_THRESHOLD | \
GEN6_PM_RP_DOWN_TIMEOUT)
-#define CHV_CZ_CLOCK_FREQ_MODE_200 200
-#define CHV_CZ_CLOCK_FREQ_MODE_267 267
-#define CHV_CZ_CLOCK_FREQ_MODE_320 320
-#define CHV_CZ_CLOCK_FREQ_MODE_333 333
-#define CHV_CZ_CLOCK_FREQ_MODE_400 400
-
#define GEN7_GT_SCRATCH_BASE 0x4F100
#define GEN7_GT_SCRATCH_REG_NUM 8
#define DDI_BUF_CTL_B 0x64100
#define DDI_BUF_CTL(port) _PORT(port, DDI_BUF_CTL_A, DDI_BUF_CTL_B)
#define DDI_BUF_CTL_ENABLE (1<<31)
-#define DDI_BUF_EMP_400MV_0DB_HSW (0<<24) /* Sel0 */
-#define DDI_BUF_EMP_400MV_3_5DB_HSW (1<<24) /* Sel1 */
-#define DDI_BUF_EMP_400MV_6DB_HSW (2<<24) /* Sel2 */
-#define DDI_BUF_EMP_400MV_9_5DB_HSW (3<<24) /* Sel3 */
-#define DDI_BUF_EMP_600MV_0DB_HSW (4<<24) /* Sel4 */
-#define DDI_BUF_EMP_600MV_3_5DB_HSW (5<<24) /* Sel5 */
-#define DDI_BUF_EMP_600MV_6DB_HSW (6<<24) /* Sel6 */
-#define DDI_BUF_EMP_800MV_0DB_HSW (7<<24) /* Sel7 */
-#define DDI_BUF_EMP_800MV_3_5DB_HSW (8<<24) /* Sel8 */
+#define DDI_BUF_TRANS_SELECT(n) ((n) << 24)
#define DDI_BUF_EMP_MASK (0xf<<24)
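/* (Editor's note) DDI_BUF_TRANS_SELECT(n) subsumes the nine removed
 * DDI_BUF_EMP_* values: entry 0 is the old 400mV/0dB setting (Sel0),
 * entry 8 the old 800mV/3.5dB one (Sel8), indexing straight into the
 * buffer translation tables.
 */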
#define DDI_BUF_PORT_REVERSAL (1<<16)
#define DDI_BUF_IS_IDLE (1<<7)
memset(&error_priv, 0, sizeof(error_priv));
- ret = i915_error_state_buf_init(&error_str, count, off);
+ ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
if (ret)
return ret;
switch (edp_link_params->preemphasis) {
case EDP_PREEMPHASIS_NONE:
- dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_0;
+ dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_0;
break;
case EDP_PREEMPHASIS_3_5dB:
- dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_3_5;
+ dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_1;
break;
case EDP_PREEMPHASIS_6dB:
- dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_6;
+ dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_2;
break;
case EDP_PREEMPHASIS_9_5dB:
- dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_9_5;
+ dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_3;
break;
default:
DRM_DEBUG_KMS("VBT has unknown eDP pre-emphasis value %u\n",
switch (edp_link_params->vswing) {
case EDP_VSWING_0_4V:
- dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_400;
+ dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_0;
break;
case EDP_VSWING_0_6V:
- dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_600;
+ dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_1;
break;
case EDP_VSWING_0_8V:
- dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_800;
+ dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
break;
case EDP_VSWING_1_2V:
- dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_1200;
+ dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
break;
default:
DRM_DEBUG_KMS("VBT has unknown eDP voltage swing value %u\n",
if (bdb->version >= 158) {
/* The VBT HDMI level shift values match the table we have. */
hdmi_level_shift = child->raw[7] & 0xF;
- if (hdmi_level_shift < 0xC) {
- DRM_DEBUG_KMS("VBT HDMI level shift for port %c: %d\n",
- port_name(port),
- hdmi_level_shift);
- info->hdmi_level_shift = hdmi_level_shift;
- }
+ DRM_DEBUG_KMS("VBT HDMI level shift for port %c: %d\n",
+ port_name(port),
+ hdmi_level_shift);
+ info->hdmi_level_shift = hdmi_level_shift;
}
}
struct ddi_vbt_port_info *info =
&dev_priv->vbt.ddi_port_info[port];
- /* Recommended BSpec default: 800mV 0dB. */
- info->hdmi_level_shift = 6;
+ info->hdmi_level_shift = HDMI_LEVEL_SHIFT_UNKNOWN;
info->supports_dvi = (port != PORT_A && port != PORT_E);
info->supports_hdmi = info->supports_dvi;
}
}
-static int __init intel_no_opregion_vbt_callback(const struct dmi_system_id *id)
+static int intel_no_opregion_vbt_callback(const struct dmi_system_id *id)
{
DRM_DEBUG_KMS("Falling back to manually reading VBT from "
"VBIOS ROM for %s\n",
u16 rsvd4;
- u8 rsvd5[5];
+ u8 rsvd5;
+ u32 target_burst_mode_freq;
u32 dsi_ddr_clk;
u32 bridge_ref_clk;
goto out;
}
+ drm_modeset_acquire_init(&ctx, 0);
+
/* for pre-945g platforms use load detect */
if (intel_get_load_detect_pipe(connector, NULL, &tmp, &ctx)) {
if (intel_crt_detect_ddc(connector))
status = connector_status_connected;
else
status = intel_crt_load_detect(crt);
- intel_release_load_detect_pipe(connector, &tmp, &ctx);
+ intel_release_load_detect_pipe(connector, &tmp);
} else
status = connector_status_unknown;
+ drm_modeset_drop_locks(&ctx);
+ drm_modeset_acquire_fini(&ctx);
+
out:
intel_display_power_put(dev_priv, power_domain);
return status;
.destroy = intel_encoder_destroy,
};
-static int __init intel_no_crt_dmi_callback(const struct dmi_system_id *id)
+static int intel_no_crt_dmi_callback(const struct dmi_system_id *id)
{
DRM_INFO("Skipping CRT initialization for %s\n", id->ident);
return 1;
#include "i915_drv.h"
#include "intel_drv.h"
+struct ddi_buf_trans {
+ u32 trans1; /* balance leg enable, de-emph level */
+ u32 trans2; /* vref sel, vswing */
+};
+
/* HDMI/DVI modes ignore everything but the last 2 items. So we share
* them for both DP and FDI transports, allowing those ports to
* automatically adapt to HDMI connections as well
*/
-static const u32 hsw_ddi_translations_dp[] = {
- 0x00FFFFFF, 0x0006000E, /* DP parameters */
- 0x00D75FFF, 0x0005000A,
- 0x00C30FFF, 0x00040006,
- 0x80AAAFFF, 0x000B0000,
- 0x00FFFFFF, 0x0005000A,
- 0x00D75FFF, 0x000C0004,
- 0x80C30FFF, 0x000B0000,
- 0x00FFFFFF, 0x00040006,
- 0x80D75FFF, 0x000B0000,
+static const struct ddi_buf_trans hsw_ddi_translations_dp[] = {
+ { 0x00FFFFFF, 0x0006000E },
+ { 0x00D75FFF, 0x0005000A },
+ { 0x00C30FFF, 0x00040006 },
+ { 0x80AAAFFF, 0x000B0000 },
+ { 0x00FFFFFF, 0x0005000A },
+ { 0x00D75FFF, 0x000C0004 },
+ { 0x80C30FFF, 0x000B0000 },
+ { 0x00FFFFFF, 0x00040006 },
+ { 0x80D75FFF, 0x000B0000 },
};
-static const u32 hsw_ddi_translations_fdi[] = {
- 0x00FFFFFF, 0x0007000E, /* FDI parameters */
- 0x00D75FFF, 0x000F000A,
- 0x00C30FFF, 0x00060006,
- 0x00AAAFFF, 0x001E0000,
- 0x00FFFFFF, 0x000F000A,
- 0x00D75FFF, 0x00160004,
- 0x00C30FFF, 0x001E0000,
- 0x00FFFFFF, 0x00060006,
- 0x00D75FFF, 0x001E0000,
+static const struct ddi_buf_trans hsw_ddi_translations_fdi[] = {
+ { 0x00FFFFFF, 0x0007000E },
+ { 0x00D75FFF, 0x000F000A },
+ { 0x00C30FFF, 0x00060006 },
+ { 0x00AAAFFF, 0x001E0000 },
+ { 0x00FFFFFF, 0x000F000A },
+ { 0x00D75FFF, 0x00160004 },
+ { 0x00C30FFF, 0x001E0000 },
+ { 0x00FFFFFF, 0x00060006 },
+ { 0x00D75FFF, 0x001E0000 },
};
-static const u32 hsw_ddi_translations_hdmi[] = {
- /* Idx NT mV diff T mV diff db */
- 0x00FFFFFF, 0x0006000E, /* 0: 400 400 0 */
- 0x00E79FFF, 0x000E000C, /* 1: 400 500 2 */
- 0x00D75FFF, 0x0005000A, /* 2: 400 600 3.5 */
- 0x00FFFFFF, 0x0005000A, /* 3: 600 600 0 */
- 0x00E79FFF, 0x001D0007, /* 4: 600 750 2 */
- 0x00D75FFF, 0x000C0004, /* 5: 600 900 3.5 */
- 0x00FFFFFF, 0x00040006, /* 6: 800 800 0 */
- 0x80E79FFF, 0x00030002, /* 7: 800 1000 2 */
- 0x00FFFFFF, 0x00140005, /* 8: 850 850 0 */
- 0x00FFFFFF, 0x000C0004, /* 9: 900 900 0 */
- 0x00FFFFFF, 0x001C0003, /* 10: 950 950 0 */
- 0x80FFFFFF, 0x00030002, /* 11: 1000 1000 0 */
+static const struct ddi_buf_trans hsw_ddi_translations_hdmi[] = {
+ /* Idx NT mV d T mV d db */
+ { 0x00FFFFFF, 0x0006000E }, /* 0: 400 400 0 */
+ { 0x00E79FFF, 0x000E000C }, /* 1: 400 500 2 */
+ { 0x00D75FFF, 0x0005000A }, /* 2: 400 600 3.5 */
+ { 0x00FFFFFF, 0x0005000A }, /* 3: 600 600 0 */
+ { 0x00E79FFF, 0x001D0007 }, /* 4: 600 750 2 */
+ { 0x00D75FFF, 0x000C0004 }, /* 5: 600 900 3.5 */
+ { 0x00FFFFFF, 0x00040006 }, /* 6: 800 800 0 */
+ { 0x80E79FFF, 0x00030002 }, /* 7: 800 1000 2 */
+ { 0x00FFFFFF, 0x00140005 }, /* 8: 850 850 0 */
+ { 0x00FFFFFF, 0x000C0004 }, /* 9: 900 900 0 */
+ { 0x00FFFFFF, 0x001C0003 }, /* 10: 950 950 0 */
+ { 0x80FFFFFF, 0x00030002 }, /* 11: 1000 1000 0 */
};
-static const u32 bdw_ddi_translations_edp[] = {
- 0x00FFFFFF, 0x00000012, /* eDP parameters */
- 0x00EBAFFF, 0x00020011,
- 0x00C71FFF, 0x0006000F,
- 0x00AAAFFF, 0x000E000A,
- 0x00FFFFFF, 0x00020011,
- 0x00DB6FFF, 0x0005000F,
- 0x00BEEFFF, 0x000A000C,
- 0x00FFFFFF, 0x0005000F,
- 0x00DB6FFF, 0x000A000C,
- 0x00FFFFFF, 0x00140006 /* HDMI parameters 800mV 0dB*/
+static const struct ddi_buf_trans bdw_ddi_translations_edp[] = {
+ { 0x00FFFFFF, 0x00000012 },
+ { 0x00EBAFFF, 0x00020011 },
+ { 0x00C71FFF, 0x0006000F },
+ { 0x00AAAFFF, 0x000E000A },
+ { 0x00FFFFFF, 0x00020011 },
+ { 0x00DB6FFF, 0x0005000F },
+ { 0x00BEEFFF, 0x000A000C },
+ { 0x00FFFFFF, 0x0005000F },
+ { 0x00DB6FFF, 0x000A000C },
};
-static const u32 bdw_ddi_translations_dp[] = {
- 0x00FFFFFF, 0x0007000E, /* DP parameters */
- 0x00D75FFF, 0x000E000A,
- 0x00BEFFFF, 0x00140006,
- 0x80B2CFFF, 0x001B0002,
- 0x00FFFFFF, 0x000E000A,
- 0x00D75FFF, 0x00180004,
- 0x80CB2FFF, 0x001B0002,
- 0x00F7DFFF, 0x00180004,
- 0x80D75FFF, 0x001B0002,
- 0x00FFFFFF, 0x00140006 /* HDMI parameters 800mV 0dB*/
+static const struct ddi_buf_trans bdw_ddi_translations_dp[] = {
+ { 0x00FFFFFF, 0x0007000E },
+ { 0x00D75FFF, 0x000E000A },
+ { 0x00BEFFFF, 0x00140006 },
+ { 0x80B2CFFF, 0x001B0002 },
+ { 0x00FFFFFF, 0x000E000A },
+ { 0x00D75FFF, 0x00180004 },
+ { 0x80CB2FFF, 0x001B0002 },
+ { 0x00F7DFFF, 0x00180004 },
+ { 0x80D75FFF, 0x001B0002 },
};
-static const u32 bdw_ddi_translations_fdi[] = {
- 0x00FFFFFF, 0x0001000E, /* FDI parameters */
- 0x00D75FFF, 0x0004000A,
- 0x00C30FFF, 0x00070006,
- 0x00AAAFFF, 0x000C0000,
- 0x00FFFFFF, 0x0004000A,
- 0x00D75FFF, 0x00090004,
- 0x00C30FFF, 0x000C0000,
- 0x00FFFFFF, 0x00070006,
- 0x00D75FFF, 0x000C0000,
- 0x00FFFFFF, 0x00140006 /* HDMI parameters 800mV 0dB*/
+static const struct ddi_buf_trans bdw_ddi_translations_fdi[] = {
+ { 0x00FFFFFF, 0x0001000E },
+ { 0x00D75FFF, 0x0004000A },
+ { 0x00C30FFF, 0x00070006 },
+ { 0x00AAAFFF, 0x000C0000 },
+ { 0x00FFFFFF, 0x0004000A },
+ { 0x00D75FFF, 0x00090004 },
+ { 0x00C30FFF, 0x000C0000 },
+ { 0x00FFFFFF, 0x00070006 },
+ { 0x00D75FFF, 0x000C0000 },
+};
+
+static const struct ddi_buf_trans bdw_ddi_translations_hdmi[] = {
+ /* Idx NT mV d T mV df db */
+ { 0x00FFFFFF, 0x0007000E }, /* 0: 400 400 0 */
+ { 0x00D75FFF, 0x000E000A }, /* 1: 400 600 3.5 */
+ { 0x00BEFFFF, 0x00140006 }, /* 2: 400 800 6 */
+ { 0x00FFFFFF, 0x0009000D }, /* 3: 450 450 0 */
+ { 0x00FFFFFF, 0x000E000A }, /* 4: 600 600 0 */
+ { 0x00D7FFFF, 0x00140006 }, /* 5: 600 800 2.5 */
+ { 0x80CB2FFF, 0x001B0002 }, /* 6: 600 1000 4.5 */
+ { 0x00FFFFFF, 0x00140006 }, /* 7: 800 800 0 */
+ { 0x80E79FFF, 0x001B0002 }, /* 8: 800 1000 2 */
+ { 0x80FFFFFF, 0x001B0002 }, /* 9: 1000 1000 0 */
};
enum port intel_ddi_get_encoder_port(struct intel_encoder *intel_encoder)
{
struct drm_i915_private *dev_priv = dev->dev_private;
u32 reg;
- int i;
+ int i, n_hdmi_entries, hdmi_800mV_0dB;
int hdmi_level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
- const u32 *ddi_translations_fdi;
- const u32 *ddi_translations_dp;
- const u32 *ddi_translations_edp;
- const u32 *ddi_translations;
+ const struct ddi_buf_trans *ddi_translations_fdi;
+ const struct ddi_buf_trans *ddi_translations_dp;
+ const struct ddi_buf_trans *ddi_translations_edp;
+ const struct ddi_buf_trans *ddi_translations_hdmi;
+ const struct ddi_buf_trans *ddi_translations;
if (IS_BROADWELL(dev)) {
ddi_translations_fdi = bdw_ddi_translations_fdi;
ddi_translations_dp = bdw_ddi_translations_dp;
ddi_translations_edp = bdw_ddi_translations_edp;
+ ddi_translations_hdmi = bdw_ddi_translations_hdmi;
+ n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
+ hdmi_800mV_0dB = 7;
} else if (IS_HASWELL(dev)) {
ddi_translations_fdi = hsw_ddi_translations_fdi;
ddi_translations_dp = hsw_ddi_translations_dp;
ddi_translations_edp = hsw_ddi_translations_dp;
+ ddi_translations_hdmi = hsw_ddi_translations_hdmi;
+ n_hdmi_entries = ARRAY_SIZE(hsw_ddi_translations_hdmi);
+ hdmi_800mV_0dB = 6;
} else {
WARN(1, "ddi translation table missing\n");
ddi_translations_edp = bdw_ddi_translations_dp;
ddi_translations_fdi = bdw_ddi_translations_fdi;
ddi_translations_dp = bdw_ddi_translations_dp;
+ ddi_translations_hdmi = bdw_ddi_translations_hdmi;
+ n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
+ hdmi_800mV_0dB = 7;
}
switch (port) {
for (i = 0, reg = DDI_BUF_TRANS(port);
i < ARRAY_SIZE(hsw_ddi_translations_fdi); i++) {
- I915_WRITE(reg, ddi_translations[i]);
+ I915_WRITE(reg, ddi_translations[i].trans1);
reg += 4;
- }
- /* Entry 9 is for HDMI: */
- for (i = 0; i < 2; i++) {
- I915_WRITE(reg, hsw_ddi_translations_hdmi[hdmi_level * 2 + i]);
+ I915_WRITE(reg, ddi_translations[i].trans2);
reg += 4;
}
+
+ /* Choose a good default if VBT is badly populated */
+ if (hdmi_level == HDMI_LEVEL_SHIFT_UNKNOWN ||
+ hdmi_level >= n_hdmi_entries)
+ hdmi_level = hdmi_800mV_0dB;
+
+ /* Entry 9 is for HDMI: */
+ I915_WRITE(reg, ddi_translations_hdmi[hdmi_level].trans1);
+ reg += 4;
+ I915_WRITE(reg, ddi_translations_hdmi[hdmi_level].trans2);
+ reg += 4;
}
/* Program DDI buffers translations for DP. By default, program ports A-D in DP
intel_prepare_ddi_buffers(dev, port);
}
-static const long hsw_ddi_buf_ctl_values[] = {
- DDI_BUF_EMP_400MV_0DB_HSW,
- DDI_BUF_EMP_400MV_3_5DB_HSW,
- DDI_BUF_EMP_400MV_6DB_HSW,
- DDI_BUF_EMP_400MV_9_5DB_HSW,
- DDI_BUF_EMP_600MV_0DB_HSW,
- DDI_BUF_EMP_600MV_3_5DB_HSW,
- DDI_BUF_EMP_600MV_6DB_HSW,
- DDI_BUF_EMP_800MV_0DB_HSW,
- DDI_BUF_EMP_800MV_3_5DB_HSW
-};
-
static void intel_wait_ddi_buf_idle(struct drm_i915_private *dev_priv,
enum port port)
{
/* Start the training iterating through available voltages and emphasis,
* testing each value twice. */
- for (i = 0; i < ARRAY_SIZE(hsw_ddi_buf_ctl_values) * 2; i++) {
+ for (i = 0; i < ARRAY_SIZE(hsw_ddi_translations_fdi) * 2; i++) {
/* Configure DP_TP_CTL with auto-training */
I915_WRITE(DP_TP_CTL(PORT_E),
DP_TP_CTL_FDI_AUTOTRAIN |
I915_WRITE(DDI_BUF_CTL(PORT_E),
DDI_BUF_CTL_ENABLE |
((intel_crtc->config.fdi_lanes - 1) << 1) |
- hsw_ddi_buf_ctl_values[i / 2]);
+ DDI_BUF_TRANS_SELECT(i / 2));
POSTING_READ(DDI_BUF_CTL(PORT_E));
udelay(600);
enc_to_dig_port(&encoder->base);
intel_dp->DP = intel_dig_port->saved_port_bits |
- DDI_BUF_CTL_ENABLE | DDI_BUF_EMP_400MV_0DB_HSW;
+ DDI_BUF_CTL_ENABLE | DDI_BUF_TRANS_SELECT(0);
intel_dp->DP |= DDI_PORT_WIDTH(intel_dp->lane_count);
}
}
#define LC_FREQ 2700
-#define LC_FREQ_2K (LC_FREQ * 2000)
+#define LC_FREQ_2K U64_C(LC_FREQ * 2000)
#define P_MIN 2
#define P_MAX 64
#define VCO_MIN 2400
#define VCO_MAX 4800
-#define ABS_DIFF(a, b) ((a > b) ? (a - b) : (b - a))
+#define abs_diff(a, b) ({ \
+ typeof(a) __a = (a); \
+ typeof(b) __b = (b); \
+ (void) (&__a == &__b); \
+ __a > __b ? (__a - __b) : (__b - __a); })
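/* (Editor's note) The (void)(&__a == &__b) line is the usual kernel trick
 * (cf. min()/max()) to provoke a compiler warning when a and b have
 * different types, while the temporaries guarantee single evaluation of
 * each argument.
 */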
struct wrpll_rnp {
unsigned p, n2, r2;
*/
a = freq2k * budget * p * r2;
b = freq2k * budget * best->p * best->r2;
- diff = ABS_DIFF((freq2k * p * r2), (LC_FREQ_2K * n2));
- diff_best = ABS_DIFF((freq2k * best->p * best->r2),
- (LC_FREQ_2K * best->n2));
+ diff = abs_diff(freq2k * p * r2, LC_FREQ_2K * n2);
+ diff_best = abs_diff(freq2k * best->p * best->r2,
+ LC_FREQ_2K * best->n2);
c = 1000000 * diff;
d = 1000000 * diff_best;
return (refclk * n * 100) / (p * r);
}
-void intel_ddi_clock_get(struct intel_encoder *encoder,
- struct intel_crtc_config *pipe_config)
+static void hsw_ddi_clock_get(struct intel_encoder *encoder,
+ struct intel_crtc_config *pipe_config)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
int link_clock = 0;
pipe_config->adjusted_mode.crtc_clock = pipe_config->port_clock;
}
+void intel_ddi_clock_get(struct intel_encoder *encoder,
+ struct intel_crtc_config *pipe_config)
+{
+ hsw_ddi_clock_get(encoder, pipe_config);
+}
+
static void
-intel_ddi_calculate_wrpll(int clock /* in Hz */,
- unsigned *r2_out, unsigned *n2_out, unsigned *p_out)
+hsw_ddi_calculate_wrpll(int clock /* in Hz */,
+ unsigned *r2_out, unsigned *n2_out, unsigned *p_out)
{
uint64_t freq2k;
unsigned p, n2, r2;
*r2_out = best.r2;
}
-/*
- * Tries to find a PLL for the CRTC. If it finds, it increases the refcount and
- * stores it in intel_crtc->ddi_pll_sel, so other mode sets won't be able to
- * steal the selected PLL. You need to call intel_ddi_pll_enable to actually
- * enable the PLL.
- */
-bool intel_ddi_pll_select(struct intel_crtc *intel_crtc)
+static bool
+hsw_ddi_pll_select(struct intel_crtc *intel_crtc,
+ struct intel_encoder *intel_encoder,
+ int clock)
{
- struct drm_crtc *crtc = &intel_crtc->base;
- struct intel_encoder *intel_encoder = intel_ddi_get_crtc_encoder(crtc);
- int type = intel_encoder->type;
- int clock = intel_crtc->config.port_clock;
-
- intel_put_shared_dpll(intel_crtc);
-
- if (type == INTEL_OUTPUT_HDMI) {
+ if (intel_encoder->type == INTEL_OUTPUT_HDMI) {
struct intel_shared_dpll *pll;
uint32_t val;
unsigned p, n2, r2;
- intel_ddi_calculate_wrpll(clock * 1000, &r2, &n2, &p);
+ hsw_ddi_calculate_wrpll(clock * 1000, &r2, &n2, &p);
val = WRPLL_PLL_ENABLE | WRPLL_PLL_LCPLL |
WRPLL_DIVIDER_REFERENCE(r2) | WRPLL_DIVIDER_FEEDBACK(n2) |
return true;
}
+
+/*
+ * Tries to find a *shared* PLL for the CRTC and store it in
+ * intel_crtc->ddi_pll_sel.
+ *
+ * For private DPLLs, compute_config() should do the selection for us. This
+ * function should be folded into compute_config() eventually.
+ */
+bool intel_ddi_pll_select(struct intel_crtc *intel_crtc)
+{
+ struct drm_crtc *crtc = &intel_crtc->base;
+ struct intel_encoder *intel_encoder = intel_ddi_get_crtc_encoder(crtc);
+ int clock = intel_crtc->config.port_clock;
+
+ intel_put_shared_dpll(intel_crtc);
+
+ return hsw_ddi_pll_select(intel_crtc, intel_encoder, clock);
+}
+
void intel_ddi_set_pipe_settings(struct drm_crtc *crtc)
{
struct drm_i915_private *dev_priv = crtc->dev->dev_private;
}
}
-int intel_ddi_get_cdclk_freq(struct drm_i915_private *dev_priv)
+static int bdw_get_cdclk_freq(struct drm_i915_private *dev_priv)
+{
+ uint32_t lcpll = I915_READ(LCPLL_CTL);
+ uint32_t freq = lcpll & LCPLL_CLK_FREQ_MASK;
+
+ if (lcpll & LCPLL_CD_SOURCE_FCLK)
+ return 800000;
+ else if (I915_READ(FUSE_STRAP) & HSW_CDCLK_LIMIT)
+ return 450000;
+ else if (freq == LCPLL_CLK_FREQ_450)
+ return 450000;
+ else if (freq == LCPLL_CLK_FREQ_54O_BDW)
+ return 540000;
+ else if (freq == LCPLL_CLK_FREQ_337_5_BDW)
+ return 337500;
+ else
+ return 675000;
+}
+
+static int hsw_get_cdclk_freq(struct drm_i915_private *dev_priv)
{
struct drm_device *dev = dev_priv->dev;
uint32_t lcpll = I915_READ(LCPLL_CTL);
uint32_t freq = lcpll & LCPLL_CLK_FREQ_MASK;
- if (lcpll & LCPLL_CD_SOURCE_FCLK) {
+ if (lcpll & LCPLL_CD_SOURCE_FCLK)
return 800000;
- } else if (I915_READ(FUSE_STRAP) & HSW_CDCLK_LIMIT) {
+ else if (I915_READ(FUSE_STRAP) & HSW_CDCLK_LIMIT)
return 450000;
- } else if (freq == LCPLL_CLK_FREQ_450) {
+ else if (freq == LCPLL_CLK_FREQ_450)
return 450000;
- } else if (IS_HASWELL(dev)) {
- if (IS_ULT(dev))
- return 337500;
- else
- return 540000;
- } else {
- if (freq == LCPLL_CLK_FREQ_54O_BDW)
- return 540000;
- else if (freq == LCPLL_CLK_FREQ_337_5_BDW)
- return 337500;
- else
- return 675000;
- }
+ else if (IS_ULT(dev))
+ return 337500;
+ else
+ return 540000;
+}
+
+int intel_ddi_get_cdclk_freq(struct drm_i915_private *dev_priv)
+{
+ struct drm_device *dev = dev_priv->dev;
+
+ if (IS_BROADWELL(dev))
+ return bdw_get_cdclk_freq(dev_priv);
+
+ /* Haswell */
+ return hsw_get_cdclk_freq(dev_priv);
}
static void hsw_ddi_pll_enable(struct drm_i915_private *dev_priv,
"WRPLL 2",
};
-void intel_ddi_pll_init(struct drm_device *dev)
+static void hsw_shared_dplls_init(struct drm_i915_private *dev_priv)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
- uint32_t val = I915_READ(LCPLL_CTL);
int i;
dev_priv->num_shared_dpll = 2;
dev_priv->shared_dplls[i].get_hw_state =
hsw_ddi_pll_get_hw_state;
}
+}
+
+void intel_ddi_pll_init(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ uint32_t val = I915_READ(LCPLL_CTL);
+
+ hsw_shared_dplls_init(dev_priv);
/* The LCPLL register should be turned on by the BIOS. For now let's
* just check its state and print errors in case something is wrong.
dev_priv->vbt.edp_bpp = pipe_config->pipe_bpp;
}
- intel_ddi_clock_get(encoder, pipe_config);
+ hsw_ddi_clock_get(encoder, pipe_config);
}
static void intel_ddi_destroy(struct drm_encoder *encoder)
struct intel_framebuffer *ifb,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_i915_gem_object *obj);
-static void intel_dp_set_m_n(struct intel_crtc *crtc);
static void i9xx_set_pipeconf(struct intel_crtc *intel_crtc);
static void intel_set_pipe_timings(struct intel_crtc *intel_crtc);
static void intel_cpu_transcoder_set_m_n(struct intel_crtc *crtc,
- struct intel_link_m_n *m_n);
+ struct intel_link_m_n *m_n,
+ struct intel_link_m_n *m2_n2);
static void ironlake_set_pipeconf(struct drm_crtc *crtc);
static void haswell_set_pipeconf(struct drm_crtc *crtc);
static void intel_set_pipe_csc(struct drm_crtc *crtc);
static void vlv_prepare_pll(struct intel_crtc *crtc);
+static void chv_prepare_pll(struct intel_crtc *crtc);
static struct intel_encoder *intel_find_encoder(struct intel_connector *connector, int pipe)
{
frame = I915_READ(frame_reg);
if (wait_for(I915_READ_NOTRACE(frame_reg) != frame, 50))
- WARN(1, "vblank wait timed out\n");
+ WARN(1, "vblank wait on pipe %c timed out\n",
+ pipe_name(pipe));
}
/**
if (wait_for(I915_READ(pipestat_reg) &
PIPE_VBLANK_INTERRUPT_STATUS,
50))
- DRM_DEBUG_KMS("vblank wait timed out\n");
+ DRM_DEBUG_KMS("vblank wait on pipe %c timed out\n",
+ pipe_name(pipe));
}
static bool pipe_dsl_stopped(struct drm_device *dev, enum pipe pipe)
/*
* intel_wait_for_pipe_off - wait for pipe to turn off
- * @dev: drm device
- * @pipe: pipe to wait for
+ * @crtc: crtc whose pipe to wait for
*
* After disabling a pipe, we can't wait for vblank in the usual way,
* spinning on the vblank interrupt status bit, since we won't actually
* ends up stopping at the start of the next frame).
*
*/
-void intel_wait_for_pipe_off(struct drm_device *dev, int pipe)
+static void intel_wait_for_pipe_off(struct intel_crtc *crtc)
{
+ struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv,
- pipe);
+ enum transcoder cpu_transcoder = crtc->config.cpu_transcoder;
+ enum pipe pipe = crtc->pipe;
if (INTEL_INFO(dev)->gen >= 4) {
int reg = PIPECONF(cpu_transcoder);
static void assert_panel_unlocked(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
- int pp_reg, lvds_reg;
+ struct drm_device *dev = dev_priv->dev;
+ int pp_reg;
u32 val;
enum pipe panel_pipe = PIPE_A;
bool locked = true;
- if (HAS_PCH_SPLIT(dev_priv->dev)) {
+ if (WARN_ON(HAS_DDI(dev)))
+ return;
+
+ if (HAS_PCH_SPLIT(dev)) {
+ u32 port_sel;
+
pp_reg = PCH_PP_CONTROL;
- lvds_reg = PCH_LVDS;
+ port_sel = I915_READ(PCH_PP_ON_DELAYS) & PANEL_PORT_SELECT_MASK;
+
+ if (port_sel == PANEL_PORT_SELECT_LVDS &&
+ I915_READ(PCH_LVDS) & LVDS_PIPEB_SELECT)
+ panel_pipe = PIPE_B;
+ /* XXX: else fix for eDP */
+ } else if (IS_VALLEYVIEW(dev)) {
+ /* presumably write lock depends on pipe, not port select */
+ pp_reg = VLV_PIPE_PP_CONTROL(pipe);
+ panel_pipe = pipe;
} else {
pp_reg = PP_CONTROL;
- lvds_reg = LVDS;
+ if (I915_READ(LVDS) & LVDS_PIPEB_SELECT)
+ panel_pipe = PIPE_B;
}
val = I915_READ(pp_reg);
if (!(val & PANEL_POWER_ON) ||
- ((val & PANEL_UNLOCK_REGS) == PANEL_UNLOCK_REGS))
+ ((val & PANEL_UNLOCK_MASK) == PANEL_UNLOCK_REGS))
locked = false;
- if (I915_READ(lvds_reg) & LVDS_PIPEB_SELECT)
- panel_pipe = PIPE_B;
-
WARN(panel_pipe == pipe && locked,
"panel assertion failure, pipe %c regs locked\n",
pipe_name(pipe));
enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv,
pipe);
- /* if we need the pipe A quirk it must be always on */
- if (pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE)
+ /* if we need the pipe quirk it must be always on */
+ if ((pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE) ||
+ (pipe == PIPE_B && dev_priv->quirks & QUIRK_PIPEB_FORCE))
state = true;
if (!intel_display_power_enabled(dev_priv,
}
/* Need to check both planes against the pipe */
- for_each_pipe(i) {
+ for_each_pipe(dev_priv, i) {
reg = DSPCNTR(i);
val = I915_READ(reg);
cur_pipe = (val & DISPPLANE_SEL_PIPE_MASK) >>
}
}
+static void assert_vblank_disabled(struct drm_crtc *crtc)
+{
+ if (WARN_ON(drm_crtc_vblank_get(crtc) == 0))
+ drm_crtc_vblank_put(crtc);
+}
+
static void ibx_assert_pch_refclk_enabled(struct drm_i915_private *dev_priv)
{
u32 val;
}
}
-static void intel_reset_dpio(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (IS_CHERRYVIEW(dev)) {
- enum dpio_phy phy;
- u32 val;
-
- for (phy = DPIO_PHY0; phy < I915_NUM_PHYS_VLV; phy++) {
- /* Poll for phypwrgood signal */
- if (wait_for(I915_READ(DISPLAY_PHY_STATUS) &
- PHY_POWERGOOD(phy), 1))
- DRM_ERROR("Display PHY %d is not power up\n", phy);
-
- /*
- * Deassert common lane reset for PHY.
- *
- * This should only be done on init and resume from S3
- * with both PLLs disabled, or we risk losing DPIO and
- * PLL synchronization.
- */
- val = I915_READ(DISPLAY_PHY_CONTROL);
- I915_WRITE(DISPLAY_PHY_CONTROL,
- PHY_COM_LANE_RESET_DEASSERT(phy, val));
- }
- }
-}
-
static void vlv_enable_pll(struct intel_crtc *crtc)
{
struct drm_device *dev = crtc->base.dev;
BUG_ON(!IS_VALLEYVIEW(dev_priv->dev));
/* PLL is protected by panel, make sure we can write it */
- if (IS_MOBILE(dev_priv->dev) && !IS_I830(dev_priv->dev))
+ if (IS_MOBILE(dev_priv->dev))
assert_panel_unlocked(dev_priv, crtc->pipe);
I915_WRITE(reg, dpll);
mutex_unlock(&dev_priv->dpio_lock);
}
+static int intel_num_dvo_pipes(struct drm_device *dev)
+{
+ struct intel_crtc *crtc;
+ int count = 0;
+
+ for_each_intel_crtc(dev, crtc)
+ count += crtc->active &&
+ intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_DVO);
+
+ return count;
+}
+
static void i9xx_enable_pll(struct intel_crtc *crtc)
{
struct drm_device *dev = crtc->base.dev;
if (IS_MOBILE(dev) && !IS_I830(dev))
assert_panel_unlocked(dev_priv, crtc->pipe);
- I915_WRITE(reg, dpll);
+ /* Enable DVO 2x clock on both PLLs if necessary */
+ if (IS_I830(dev) && intel_num_dvo_pipes(dev) > 0) {
+ /*
+ * It appears to be important that we don't enable this
+ * for the current pipe before otherwise configuring the
+ * PLL. No idea how this should be handled if multiple
+ * DVO outputs are enabled simultaneously.
+ */
+ dpll |= DPLL_DVO_2X_MODE;
+ I915_WRITE(DPLL(!crtc->pipe),
+ I915_READ(DPLL(!crtc->pipe)) | DPLL_DVO_2X_MODE);
+ }
/* Wait for the clocks to stabilize. */
POSTING_READ(reg);
*
* Note! This is for pre-ILK only.
*/
-static void i9xx_disable_pll(struct drm_i915_private *dev_priv, enum pipe pipe)
+static void i9xx_disable_pll(struct intel_crtc *crtc)
{
- /* Don't disable pipe A or pipe A PLLs if needed */
- if (pipe == PIPE_A && (dev_priv->quirks & QUIRK_PIPEA_FORCE))
+ struct drm_device *dev = crtc->base.dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ enum pipe pipe = crtc->pipe;
+
+ /* Disable DVO 2x clock on both PLLs if necessary */
+ if (IS_I830(dev) &&
+ intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_DVO) &&
+ intel_num_dvo_pipes(dev) == 1) {
+ I915_WRITE(DPLL(PIPE_B),
+ I915_READ(DPLL(PIPE_B)) & ~DPLL_DVO_2X_MODE);
+ I915_WRITE(DPLL(PIPE_A),
+ I915_READ(DPLL(PIPE_A)) & ~DPLL_DVO_2X_MODE);
+ }
+
+ /* Don't disable pipe or pipe PLLs if needed */
+ if ((pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE) ||
+ (pipe == PIPE_B && dev_priv->quirks & QUIRK_PIPEB_FORCE))
return;
/* Make sure the pipe isn't still relying on us */
assert_pipe_disabled(dev_priv, pipe);
/* Set PLL en = 0 */
- val = DPLL_SSC_REF_CLOCK_CHV;
+ val = DPLL_SSC_REF_CLOCK_CHV | DPLL_REFA_CLK_ENABLE_VLV;
if (pipe != PIPE_A)
val |= DPLL_INTEGRATED_CRI_CLK_VLV;
I915_WRITE(DPLL(pipe), val);
if (WARN_ON(pll->refcount == 0))
return;
- DRM_DEBUG_KMS("enable %s (active %d, on? %d)for crtc %d\n",
+ DRM_DEBUG_KMS("enable %s (active %d, on? %d) for crtc %d\n",
pll->name, pll->active, pll->on,
crtc->base.base.id);
pll->on = true;
}
-void intel_disable_shared_dpll(struct intel_crtc *crtc)
+static void intel_disable_shared_dpll(struct intel_crtc *crtc)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t reg, val, pipeconf_val;
/* PCH only available on ILK+ */
- BUG_ON(INTEL_INFO(dev)->gen < 5);
+ BUG_ON(!HAS_PCH_SPLIT(dev));
/* Make sure PCH DPLL is enabled */
assert_shared_dpll_enabled(dev_priv,
u32 val, pipeconf_val;
/* PCH only available on ILK+ */
- BUG_ON(INTEL_INFO(dev_priv->dev)->gen < 5);
+ BUG_ON(!HAS_PCH_SPLIT(dev_priv->dev));
/* FDI must be feeding us bits for PCH ports */
assert_fdi_tx_enabled(dev_priv, (enum pipe) cpu_transcoder);
reg = PIPECONF(cpu_transcoder);
val = I915_READ(reg);
if (val & PIPECONF_ENABLE) {
- WARN_ON(!(pipe == PIPE_A &&
- dev_priv->quirks & QUIRK_PIPEA_FORCE));
+ WARN_ON(!((pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE) ||
+ (pipe == PIPE_B && dev_priv->quirks & QUIRK_PIPEB_FORCE)));
return;
}
/**
* intel_disable_pipe - disable a pipe, asserting requirements
- * @dev_priv: i915 private structure
- * @pipe: pipe to disable
- *
- * Disable @pipe, making sure that various hardware specific requirements
- * are met, if applicable, e.g. plane disabled, panel fitter off, etc.
+ * @crtc: crtc whose pipe is to be disabled
*
- * @pipe should be %PIPE_A or %PIPE_B.
+ * Disable the pipe of @crtc, making sure that various hardware
+ * specific requirements are met, if applicable, e.g. plane
+ * disabled, panel fitter off, etc.
*
* Will wait until the pipe has shut down before returning.
*/
-static void intel_disable_pipe(struct drm_i915_private *dev_priv,
- enum pipe pipe)
+static void intel_disable_pipe(struct intel_crtc *crtc)
{
- enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv,
- pipe);
+ struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;
+ enum transcoder cpu_transcoder = crtc->config.cpu_transcoder;
+ enum pipe pipe = crtc->pipe;
int reg;
u32 val;
assert_cursor_disabled(dev_priv, pipe);
assert_sprites_disabled(dev_priv, pipe);
- /* Don't disable pipe A or pipe A PLLs if needed */
- if (pipe == PIPE_A && (dev_priv->quirks & QUIRK_PIPEA_FORCE))
- return;
-
reg = PIPECONF(cpu_transcoder);
val = I915_READ(reg);
if ((val & PIPECONF_ENABLE) == 0)
return;
- I915_WRITE(reg, val & ~PIPECONF_ENABLE);
- intel_wait_for_pipe_off(dev_priv->dev, pipe);
+ /*
+ * Double wide has implications for planes
+ * so best keep it disabled when not needed.
+ */
+ if (crtc->config.double_wide)
+ val &= ~PIPECONF_DOUBLE_WIDE;
+
+ /* Don't disable pipe or pipe PLLs if needed */
+ if (!(pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE) &&
+ !(pipe == PIPE_B && dev_priv->quirks & QUIRK_PIPEB_FORCE))
+ val &= ~PIPECONF_ENABLE;
+
+ I915_WRITE(reg, val);
+ if ((val & PIPECONF_ENABLE) == 0)
+ intel_wait_for_pipe_off(crtc);
}
/*
/**
* intel_enable_primary_hw_plane - enable the primary plane on a given pipe
- * @dev_priv: i915 private structure
- * @plane: plane to enable
- * @pipe: pipe being fed
+ * @plane: plane to be enabled
+ * @crtc: crtc for the plane
*
- * Enable @plane on @pipe, making sure that @pipe is running first.
+ * Enable @plane on @crtc, making sure that the pipe is running first.
*/
-static void intel_enable_primary_hw_plane(struct drm_i915_private *dev_priv,
- enum plane plane, enum pipe pipe)
+static void intel_enable_primary_hw_plane(struct drm_plane *plane,
+ struct drm_crtc *crtc)
{
- struct drm_device *dev = dev_priv->dev;
- struct intel_crtc *intel_crtc =
- to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
- int reg;
- u32 val;
+ struct drm_device *dev = plane->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
/* If the pipe isn't enabled, we can't pump pixels and may hang */
- assert_pipe_enabled(dev_priv, pipe);
+ assert_pipe_enabled(dev_priv, intel_crtc->pipe);
if (intel_crtc->primary_enabled)
return;
intel_crtc->primary_enabled = true;
- reg = DSPCNTR(plane);
- val = I915_READ(reg);
- WARN_ON(val & DISPLAY_PLANE_ENABLE);
-
- I915_WRITE(reg, val | DISPLAY_PLANE_ENABLE);
- intel_flush_primary_plane(dev_priv, plane);
+ dev_priv->display.update_primary_plane(crtc, plane->fb,
+ crtc->x, crtc->y);
/*
* BDW signals flip done immediately if the plane
/**
* intel_disable_primary_hw_plane - disable the primary hardware plane
- * @dev_priv: i915 private structure
- * @plane: plane to disable
- * @pipe: pipe consuming the data
+ * @plane: plane to be disabled
+ * @crtc: crtc for the plane
*
- * Disable @plane; should be an independent operation.
+ * Disable @plane on @crtc, making sure that the pipe is running first.
*/
-static void intel_disable_primary_hw_plane(struct drm_i915_private *dev_priv,
- enum plane plane, enum pipe pipe)
+static void intel_disable_primary_hw_plane(struct drm_plane *plane,
+ struct drm_crtc *crtc)
{
- struct intel_crtc *intel_crtc =
- to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
- int reg;
- u32 val;
+ struct drm_device *dev = plane->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+
+ assert_pipe_enabled(dev_priv, intel_crtc->pipe);
if (!intel_crtc->primary_enabled)
return;
intel_crtc->primary_enabled = false;
- reg = DSPCNTR(plane);
- val = I915_READ(reg);
- WARN_ON((val & DISPLAY_PLANE_ENABLE) == 0);
-
- I915_WRITE(reg, val & ~DISPLAY_PLANE_ENABLE);
- intel_flush_primary_plane(dev_priv, plane);
+ dev_priv->display.update_primary_plane(crtc, plane->fb,
+ crtc->x, crtc->y);
}
static bool need_vtd_wa(struct drm_device *dev)
if (need_vtd_wa(dev) && alignment < 256 * 1024)
alignment = 256 * 1024;
+ /*
+ * Global gtt pte registers are special registers which actually forward
+ * writes to a chunk of system memory. Which means that there is no risk
+ * that the register values disappear as soon as we call
+ * intel_runtime_pm_put(), so it is correct to wrap only the
+ * pin/unpin/fence and not more.
+ */
+ intel_runtime_pm_get(dev_priv);
+
dev_priv->mm.interruptible = false;
ret = i915_gem_object_pin_to_display_plane(obj, alignment, pipelined);
if (ret)
i915_gem_object_pin_fence(obj);
dev_priv->mm.interruptible = true;
+ intel_runtime_pm_put(dev_priv);
return 0;
err_unpin:
i915_gem_object_unpin_from_display_plane(obj);
err_interruptible:
dev_priv->mm.interruptible = true;
+ intel_runtime_pm_put(dev_priv);
return ret;
}
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ struct drm_i915_gem_object *obj;
int plane = intel_crtc->plane;
unsigned long linear_offset;
u32 dspcntr;
- u32 reg;
+ u32 reg = DSPCNTR(plane);
+ int pixel_size;
+
+ if (!intel_crtc->primary_enabled) {
+ I915_WRITE(reg, 0);
+ if (INTEL_INFO(dev)->gen >= 4)
+ I915_WRITE(DSPSURF(plane), 0);
+ else
+ I915_WRITE(DSPADDR(plane), 0);
+ POSTING_READ(reg);
+ return;
+ }
+
+ obj = intel_fb_obj(fb);
+ if (WARN_ON(obj == NULL))
+ return;
+
+ pixel_size = drm_format_plane_cpp(fb->pixel_format, 0);
+
+ dspcntr = DISPPLANE_GAMMA_ENABLE;
+
+ dspcntr |= DISPLAY_PLANE_ENABLE;
+
+ if (INTEL_INFO(dev)->gen < 4) {
+ if (intel_crtc->pipe == PIPE_B)
+ dspcntr |= DISPPLANE_SEL_PIPE_B;
+
+ /* pipesrc and dspsize control the size that is scaled from,
+ * which should always be the user's requested size.
+ */
+ I915_WRITE(DSPSIZE(plane),
+ ((intel_crtc->config.pipe_src_h - 1) << 16) |
+ (intel_crtc->config.pipe_src_w - 1));
+ I915_WRITE(DSPPOS(plane), 0);
+ }
- reg = DSPCNTR(plane);
- dspcntr = I915_READ(reg);
- /* Mask out pixel format bits in case we change it */
- dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (fb->pixel_format) {
case DRM_FORMAT_C8:
dspcntr |= DISPPLANE_8BPP;
BUG();
}
- if (INTEL_INFO(dev)->gen >= 4) {
- if (obj->tiling_mode != I915_TILING_NONE)
- dspcntr |= DISPPLANE_TILED;
- else
- dspcntr &= ~DISPPLANE_TILED;
- }
+ if (INTEL_INFO(dev)->gen >= 4 &&
+ obj->tiling_mode != I915_TILING_NONE)
+ dspcntr |= DISPPLANE_TILED;
if (IS_G4X(dev))
dspcntr |= DISPPLANE_TRICKLE_FEED_DISABLE;
- I915_WRITE(reg, dspcntr);
-
- linear_offset = y * fb->pitches[0] + x * (fb->bits_per_pixel / 8);
+ linear_offset = y * fb->pitches[0] + x * pixel_size;
if (INTEL_INFO(dev)->gen >= 4) {
intel_crtc->dspaddr_offset =
intel_gen4_compute_page_offset(&x, &y, obj->tiling_mode,
- fb->bits_per_pixel / 8,
+ pixel_size,
fb->pitches[0]);
linear_offset -= intel_crtc->dspaddr_offset;
} else {
intel_crtc->dspaddr_offset = linear_offset;
}
+ if (to_intel_plane(crtc->primary)->rotation == BIT(DRM_ROTATE_180)) {
+ dspcntr |= DISPPLANE_ROTATE_180;
+
+ x += (intel_crtc->config.pipe_src_w - 1);
+ y += (intel_crtc->config.pipe_src_h - 1);
+
+ /* Find the last pixel of the last line of the display
+ data and add it to linear_offset */
+ linear_offset +=
+ (intel_crtc->config.pipe_src_h - 1) * fb->pitches[0] +
+ (intel_crtc->config.pipe_src_w - 1) * pixel_size;
+ }
+
+ I915_WRITE(reg, dspcntr);
+
DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n",
i915_gem_obj_ggtt_offset(obj), linear_offset, x, y,
fb->pitches[0]);
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ struct drm_i915_gem_object *obj;
int plane = intel_crtc->plane;
unsigned long linear_offset;
u32 dspcntr;
- u32 reg;
+ u32 reg = DSPCNTR(plane);
+ int pixel_size;
+
+ if (!intel_crtc->primary_enabled) {
+ I915_WRITE(reg, 0);
+ I915_WRITE(DSPSURF(plane), 0);
+ POSTING_READ(reg);
+ return;
+ }
+
+ obj = intel_fb_obj(fb);
+ if (WARN_ON(obj == NULL))
+ return;
+
+ pixel_size = drm_format_plane_cpp(fb->pixel_format, 0);
+
+ dspcntr = DISPPLANE_GAMMA_ENABLE;
+
+ dspcntr |= DISPLAY_PLANE_ENABLE;
+
+ if (IS_HASWELL(dev) || IS_BROADWELL(dev))
+ dspcntr |= DISPPLANE_PIPE_CSC_ENABLE;
- reg = DSPCNTR(plane);
- dspcntr = I915_READ(reg);
- /* Mask out pixel format bits in case we change it */
- dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (fb->pixel_format) {
case DRM_FORMAT_C8:
dspcntr |= DISPPLANE_8BPP;
if (obj->tiling_mode != I915_TILING_NONE)
dspcntr |= DISPPLANE_TILED;
- else
- dspcntr &= ~DISPPLANE_TILED;
- if (IS_HASWELL(dev) || IS_BROADWELL(dev))
- dspcntr &= ~DISPPLANE_TRICKLE_FEED_DISABLE;
- else
+ if (!IS_HASWELL(dev) && !IS_BROADWELL(dev))
dspcntr |= DISPPLANE_TRICKLE_FEED_DISABLE;
- I915_WRITE(reg, dspcntr);
-
- linear_offset = y * fb->pitches[0] + x * (fb->bits_per_pixel / 8);
+ linear_offset = y * fb->pitches[0] + x * pixel_size;
intel_crtc->dspaddr_offset =
intel_gen4_compute_page_offset(&x, &y, obj->tiling_mode,
- fb->bits_per_pixel / 8,
+ pixel_size,
fb->pitches[0]);
linear_offset -= intel_crtc->dspaddr_offset;
+ if (to_intel_plane(crtc->primary)->rotation == BIT(DRM_ROTATE_180)) {
+ dspcntr |= DISPPLANE_ROTATE_180;
+
+ if (!IS_HASWELL(dev) && !IS_BROADWELL(dev)) {
+ x += (intel_crtc->config.pipe_src_w - 1);
+ y += (intel_crtc->config.pipe_src_h - 1);
+
+ /*
+ * Find the last pixel of the last line of the display
+ * data and add it to linear_offset.
+ */
+ linear_offset +=
+ (intel_crtc->config.pipe_src_h - 1) * fb->pitches[0] +
+ (intel_crtc->config.pipe_src_w - 1) * pixel_size;
+ }
+ }
+
+ I915_WRITE(reg, dspcntr);
DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n",
i915_gem_obj_ggtt_offset(obj), linear_offset, x, y,
return false;
}
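+/*
+ * Complete the pending flip on @intel_crtc: send the userspace vblank
+ * event (if one was requested), drop the vblank reference, wake waiters
+ * on pending_flip_queue and queue the unpin work. Called with
+ * dev->event_lock held.
+ */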
+static void page_flip_completed(struct intel_crtc *intel_crtc)
+{
+ struct drm_i915_private *dev_priv = to_i915(intel_crtc->base.dev);
+ struct intel_unpin_work *work = intel_crtc->unpin_work;
+
+ /* ensure that the unpin work is consistent wrt ->pending. */
+ smp_rmb();
+ intel_crtc->unpin_work = NULL;
+
+ if (work->event)
+ drm_send_vblank_event(intel_crtc->base.dev,
+ intel_crtc->pipe,
+ work->event);
+
+ drm_crtc_vblank_put(&intel_crtc->base);
+
+ wake_up_all(&dev_priv->pending_flip_queue);
+ queue_work(dev_priv->wq, &work->work);
+
+ trace_i915_flip_complete(intel_crtc->plane,
+ work->pending_flip_obj);
+}
+
void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- if (crtc->primary->fb == NULL)
- return;
-
WARN_ON(waitqueue_active(&dev_priv->pending_flip_queue));
+ if (WARN_ON(wait_event_timeout(dev_priv->pending_flip_queue,
+ !intel_crtc_has_pending_flip(crtc),
+ 60*HZ) == 0)) {
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ unsigned long flags;
- WARN_ON(wait_event_timeout(dev_priv->pending_flip_queue,
- !intel_crtc_has_pending_flip(crtc),
- 60*HZ) == 0);
+ spin_lock_irqsave(&dev->event_lock, flags);
+ if (intel_crtc->unpin_work) {
+ WARN_ONCE(1, "Removing stuck page flip\n");
+ page_flip_completed(intel_crtc);
+ }
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+ }
- mutex_lock(&dev->struct_mutex);
- intel_finish_fb(crtc->primary->fb);
- mutex_unlock(&dev->struct_mutex);
+ if (crtc->primary->fb) {
+ mutex_lock(&dev->struct_mutex);
+ intel_finish_fb(crtc->primary->fb);
+ mutex_unlock(&dev->struct_mutex);
+ }
}
/* Program iCLKIP clock to the desired frequency */
static void intel_crtc_enable_planes(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
- int plane = intel_crtc->plane;
+
+ assert_vblank_disabled(crtc);
drm_vblank_on(dev, pipe);
- intel_enable_primary_hw_plane(dev_priv, plane, pipe);
+ intel_enable_primary_hw_plane(crtc->primary, crtc);
intel_enable_planes(crtc);
intel_crtc_update_cursor(crtc, true);
intel_crtc_dpms_overlay(intel_crtc, true);
intel_crtc_dpms_overlay(intel_crtc, false);
intel_crtc_update_cursor(crtc, false);
intel_disable_planes(crtc);
- intel_disable_primary_hw_plane(dev_priv, plane, pipe);
+ intel_disable_primary_hw_plane(crtc->primary, crtc);
/*
* FIXME: Once we grow proper nuclear flip support out of this we need
intel_frontbuffer_flip(dev, INTEL_FRONTBUFFER_ALL_MASK(pipe));
drm_vblank_off(dev, pipe);
+
+ assert_vblank_disabled(crtc);
}
static void ironlake_crtc_enable(struct drm_crtc *crtc)
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
- enum plane plane = intel_crtc->plane;
WARN_ON(!crtc->enabled);
if (intel_crtc->config.has_pch_encoder) {
intel_cpu_transcoder_set_m_n(intel_crtc,
- &intel_crtc->config.fdi_m_n);
+ &intel_crtc->config.fdi_m_n, NULL);
}
ironlake_set_pipeconf(crtc);
- /* Set up the display plane register */
- I915_WRITE(DSPCNTR(plane), DISPPLANE_GAMMA_ENABLE);
- POSTING_READ(DSPCNTR(plane));
-
- dev_priv->display.update_primary_plane(crtc, crtc->primary->fb,
- crtc->x, crtc->y);
-
intel_crtc->active = true;
intel_set_cpu_fifo_underrun_reporting(dev, pipe, true);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
- enum plane plane = intel_crtc->plane;
WARN_ON(!crtc->enabled);
intel_set_pipe_timings(intel_crtc);
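+ /* Program the pipe pixel multiplier (skipped for the eDP transcoder). */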
+ if (intel_crtc->config.cpu_transcoder != TRANSCODER_EDP) {
+ I915_WRITE(PIPE_MULT(intel_crtc->config.cpu_transcoder),
+ intel_crtc->config.pixel_multiplier - 1);
+ }
+
if (intel_crtc->config.has_pch_encoder) {
intel_cpu_transcoder_set_m_n(intel_crtc,
- &intel_crtc->config.fdi_m_n);
+ &intel_crtc->config.fdi_m_n, NULL);
}
haswell_set_pipeconf(crtc);
intel_set_pipe_csc(crtc);
- /* Set up the display plane register */
- I915_WRITE(DSPCNTR(plane), DISPPLANE_GAMMA_ENABLE | DISPPLANE_PIPE_CSC_ENABLE);
- POSTING_READ(DSPCNTR(plane));
-
- dev_priv->display.update_primary_plane(crtc, crtc->primary->fb,
- crtc->x, crtc->y);
-
intel_crtc->active = true;
intel_set_cpu_fifo_underrun_reporting(dev, pipe, true);
if (intel_crtc->config.has_pch_encoder)
intel_set_pch_fifo_underrun_reporting(dev, pipe, false);
- intel_disable_pipe(dev_priv, pipe);
-
- if (intel_crtc->config.dp_encoder_is_mst)
- intel_ddi_set_vc_payload_alloc(crtc, false);
+ intel_disable_pipe(intel_crtc);
ironlake_pfit_disable(intel_crtc);
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
- int pipe = intel_crtc->pipe;
enum transcoder cpu_transcoder = intel_crtc->config.cpu_transcoder;
if (!intel_crtc->active)
if (intel_crtc->config.has_pch_encoder)
intel_set_pch_fifo_underrun_reporting(dev, TRANSCODER_A, false);
- intel_disable_pipe(dev_priv, pipe);
+ intel_disable_pipe(intel_crtc);
+
+ if (intel_crtc->config.dp_encoder_is_mst)
+ intel_ddi_set_vc_payload_alloc(crtc, false);
intel_ddi_disable_transcoder_func(dev_priv, cpu_transcoder);
vlv_update_cdclk(dev);
}
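+/*
+ * Request a new CDclk frequency from the Punit via DSPFREQ and wait for
+ * the hardware to acknowledge the change before updating the cached
+ * cdclk value.
+ */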
+static void cherryview_set_cdclk(struct drm_device *dev, int cdclk)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ u32 val, cmd;
+
+ WARN_ON(dev_priv->display.get_display_clock_speed(dev) != dev_priv->vlv_cdclk_freq);
+
+ switch (cdclk) {
+ case 400000:
+ cmd = 3;
+ break;
+ case 333333:
+ case 320000:
+ cmd = 2;
+ break;
+ case 266667:
+ cmd = 1;
+ break;
+ case 200000:
+ cmd = 0;
+ break;
+ default:
+ WARN_ON(1);
+ return;
+ }
+
+ mutex_lock(&dev_priv->rps.hw_lock);
+ val = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ);
+ val &= ~DSPFREQGUAR_MASK_CHV;
+ val |= (cmd << DSPFREQGUAR_SHIFT_CHV);
+ vlv_punit_write(dev_priv, PUNIT_REG_DSPFREQ, val);
+ if (wait_for((vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ) &
+ DSPFREQSTAT_MASK_CHV) == (cmd << DSPFREQSTAT_SHIFT_CHV),
+ 50)) {
+ DRM_ERROR("timed out waiting for CDclk change\n");
+ }
+ mutex_unlock(&dev_priv->rps.hw_lock);
+
+ vlv_update_cdclk(dev);
+}
+
static int valleyview_calc_cdclk(struct drm_i915_private *dev_priv,
int max_pixclk)
{
int vco = valleyview_get_vco(dev_priv);
int freq_320 = (vco << 1) % 320000 != 0 ? 333333 : 320000;
+ /* FIXME: Punit isn't quite ready yet */
+ if (IS_CHERRYVIEW(dev_priv->dev))
+ return 400000;
+
/*
* Really only a few cases to deal with, as only 4 CDclks are supported:
* 200MHz
int max_pixclk = intel_mode_max_pixclk(dev_priv);
int req_cdclk = valleyview_calc_cdclk(dev_priv, max_pixclk);
- if (req_cdclk != dev_priv->vlv_cdclk_freq)
- valleyview_set_cdclk(dev, req_cdclk);
+ if (req_cdclk != dev_priv->vlv_cdclk_freq) {
+ if (IS_CHERRYVIEW(dev))
+ cherryview_set_cdclk(dev, req_cdclk);
+ else
+ valleyview_set_cdclk(dev, req_cdclk);
+ }
+
modeset_update_crtc_power_domains(dev);
}
static void valleyview_crtc_enable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
- int plane = intel_crtc->plane;
bool is_dsi;
- u32 dspcntr;
WARN_ON(!crtc->enabled);
is_dsi = intel_pipe_has_type(crtc, INTEL_OUTPUT_DSI);
- if (!is_dsi && !IS_CHERRYVIEW(dev))
- vlv_prepare_pll(intel_crtc);
-
- /* Set up the display plane register */
- dspcntr = DISPPLANE_GAMMA_ENABLE;
+ if (!is_dsi) {
+ if (IS_CHERRYVIEW(dev))
+ chv_prepare_pll(intel_crtc);
+ else
+ vlv_prepare_pll(intel_crtc);
+ }
if (intel_crtc->config.has_dp_encoder)
intel_dp_set_m_n(intel_crtc);
intel_set_pipe_timings(intel_crtc);
- /* pipesrc and dspsize control the size that is scaled from,
- * which should always be the user's requested size.
- */
- I915_WRITE(DSPSIZE(plane),
- ((intel_crtc->config.pipe_src_h - 1) << 16) |
- (intel_crtc->config.pipe_src_w - 1));
- I915_WRITE(DSPPOS(plane), 0);
-
i9xx_set_pipeconf(intel_crtc);
- I915_WRITE(DSPCNTR(plane), dspcntr);
- POSTING_READ(DSPCNTR(plane));
-
- dev_priv->display.update_primary_plane(crtc, crtc->primary->fb,
- crtc->x, crtc->y);
-
intel_crtc->active = true;
intel_set_cpu_fifo_underrun_reporting(dev, pipe, true);
static void i9xx_crtc_enable(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
- int plane = intel_crtc->plane;
- u32 dspcntr;
WARN_ON(!crtc->enabled);
i9xx_set_pll_dividers(intel_crtc);
- /* Set up the display plane register */
- dspcntr = DISPPLANE_GAMMA_ENABLE;
-
- if (pipe == 0)
- dspcntr &= ~DISPPLANE_SEL_PIPE_MASK;
- else
- dspcntr |= DISPPLANE_SEL_PIPE_B;
-
if (intel_crtc->config.has_dp_encoder)
intel_dp_set_m_n(intel_crtc);
intel_set_pipe_timings(intel_crtc);
- /* pipesrc and dspsize control the size that is scaled from,
- * which should always be the user's requested size.
- */
- I915_WRITE(DSPSIZE(plane),
- ((intel_crtc->config.pipe_src_h - 1) << 16) |
- (intel_crtc->config.pipe_src_w - 1));
- I915_WRITE(DSPPOS(plane), 0);
-
i9xx_set_pipeconf(intel_crtc);
- I915_WRITE(DSPCNTR(plane), dspcntr);
- POSTING_READ(DSPCNTR(plane));
-
- dev_priv->display.update_primary_plane(crtc, crtc->primary->fb,
- crtc->x, crtc->y);
-
intel_crtc->active = true;
if (!IS_GEN2(dev))
*/
intel_wait_for_vblank(dev, pipe);
- intel_disable_pipe(dev_priv, pipe);
+ intel_disable_pipe(intel_crtc);
i9xx_pfit_disable(intel_crtc);
else if (IS_VALLEYVIEW(dev))
vlv_disable_pll(dev_priv, pipe);
else
- i9xx_disable_pll(dev_priv, pipe);
+ i9xx_disable_pll(intel_crtc);
}
if (!IS_GEN2(dev))
u32 val;
int divider;
+ /* FIXME: Punit isn't quite ready yet */
+ if (IS_CHERRYVIEW(dev))
+ return 400000;
+
mutex_lock(&dev_priv->dpio_lock);
val = vlv_cck_read(dev_priv, CCK_DISPLAY_CLOCK_CONTROL);
mutex_unlock(&dev_priv->dpio_lock);
}
static void intel_cpu_transcoder_set_m_n(struct intel_crtc *crtc,
- struct intel_link_m_n *m_n)
+ struct intel_link_m_n *m_n,
+ struct intel_link_m_n *m2_n2)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
I915_WRITE(PIPE_DATA_N1(transcoder), m_n->gmch_n);
I915_WRITE(PIPE_LINK_M1(transcoder), m_n->link_m);
I915_WRITE(PIPE_LINK_N1(transcoder), m_n->link_n);
+ /* Set the M2_N2 registers only on gen < 8, where they exist,
+ * and only if DRRS is supported, so that the registers are
+ * not accessed unnecessarily.
+ */
+ if (m2_n2 && INTEL_INFO(dev)->gen < 8 &&
+ crtc->config.has_drrs) {
+ I915_WRITE(PIPE_DATA_M2(transcoder),
+ TU_SIZE(m2_n2->tu) | m2_n2->gmch_m);
+ I915_WRITE(PIPE_DATA_N2(transcoder), m2_n2->gmch_n);
+ I915_WRITE(PIPE_LINK_M2(transcoder), m2_n2->link_m);
+ I915_WRITE(PIPE_LINK_N2(transcoder), m2_n2->link_n);
+ }
} else {
I915_WRITE(PIPE_DATA_M_G4X(pipe), TU_SIZE(m_n->tu) | m_n->gmch_m);
I915_WRITE(PIPE_DATA_N_G4X(pipe), m_n->gmch_n);
}
}
-static void intel_dp_set_m_n(struct intel_crtc *crtc)
+void intel_dp_set_m_n(struct intel_crtc *crtc)
{
if (crtc->config.has_pch_encoder)
intel_pch_transcoder_set_m_n(crtc, &crtc->config.dp_m_n);
else
- intel_cpu_transcoder_set_m_n(crtc, &crtc->config.dp_m_n);
+ intel_cpu_transcoder_set_m_n(crtc, &crtc->config.dp_m_n,
+ &crtc->config.dp_m2_n2);
}
static void vlv_update_pll(struct intel_crtc *crtc)
}
static void chv_update_pll(struct intel_crtc *crtc)
+{
+ crtc->config.dpll_hw_state.dpll = DPLL_SSC_REF_CLOCK_CHV |
+ DPLL_REFA_CLK_ENABLE_VLV | DPLL_VGA_MODE_DIS |
+ DPLL_VCO_ENABLE;
+ if (crtc->pipe != PIPE_A)
+ crtc->config.dpll_hw_state.dpll |= DPLL_INTEGRATED_CRI_CLK_VLV;
+
+ crtc->config.dpll_hw_state.dpll_md =
+ (crtc->config.pixel_multiplier - 1) << DPLL_MD_UDI_MULTIPLIER_SHIFT;
+}
+
+static void chv_prepare_pll(struct intel_crtc *crtc)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
u32 bestn, bestm1, bestm2, bestp1, bestp2, bestm2_frac;
int refclk;
- crtc->config.dpll_hw_state.dpll = DPLL_SSC_REF_CLOCK_CHV |
- DPLL_REFA_CLK_ENABLE_VLV | DPLL_VGA_MODE_DIS |
- DPLL_VCO_ENABLE;
- if (pipe != PIPE_A)
- crtc->config.dpll_hw_state.dpll |= DPLL_INTEGRATED_CRI_CLK_VLV;
-
- crtc->config.dpll_hw_state.dpll_md =
- (crtc->config.pixel_multiplier - 1) << DPLL_MD_UDI_MULTIPLIER_SHIFT;
-
bestn = crtc->config.dpll.n;
bestm2_frac = crtc->config.dpll.m2 & 0x3fffff;
bestm1 = crtc->config.dpll.m1;
dpll |= PLL_P2_DIVIDE_BY_4;
}
- if (intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_DVO))
+ if (!IS_I830(dev) && intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_DVO))
dpll |= DPLL_DVO_2X_MODE;
if (intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_LVDS) &&
pipeconf = 0;
- if (dev_priv->quirks & QUIRK_PIPEA_FORCE &&
- I915_READ(PIPECONF(intel_crtc->pipe)) & PIPECONF_ENABLE)
- pipeconf |= PIPECONF_ENABLE;
+ if ((intel_crtc->pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE) ||
+ (intel_crtc->pipe == PIPE_B && dev_priv->quirks & QUIRK_PIPEB_FORCE))
+ pipeconf |= I915_READ(PIPECONF(intel_crtc->pipe)) & PIPECONF_ENABLE;
if (intel_crtc->config.double_wide)
pipeconf |= PIPECONF_DOUBLE_WIDE;
crtc->base.primary->fb->height = ((val >> 0) & 0xfff) + 1;
val = I915_READ(DSPSTRIDE(pipe));
- crtc->base.primary->fb->pitches[0] = val & 0xffffff80;
+ crtc->base.primary->fb->pitches[0] = val & 0xffffffc0;
aligned_height = intel_align_height(dev, crtc->base.primary->fb->height,
plane_config->tiled);
}
pipe_config->dpll_hw_state.dpll = I915_READ(DPLL(crtc->pipe));
if (!IS_VALLEYVIEW(dev)) {
+ /*
+ * DPLL_DVO_2X_MODE must be enabled for both DPLLs
+ * on 830. Filter it out here so that we don't
+ * report errors due to that.
+ */
+ if (IS_I830(dev))
+ pipe_config->dpll_hw_state.dpll &= ~DPLL_DVO_2X_MODE;
+
pipe_config->dpll_hw_state.fp0 = I915_READ(FP0(crtc->pipe));
pipe_config->dpll_hw_state.fp1 = I915_READ(FP1(crtc->pipe));
} else {
static void ironlake_init_pch_refclk(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- struct drm_mode_config *mode_config = &dev->mode_config;
struct intel_encoder *encoder;
u32 val, final;
bool has_lvds = false;
bool can_ssc = false;
/* We need to take the global config into account */
- list_for_each_entry(encoder, &mode_config->encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
has_panel = true;
static void lpt_init_pch_refclk(struct drm_device *dev)
{
- struct drm_mode_config *mode_config = &dev->mode_config;
struct intel_encoder *encoder;
bool has_vga = false;
- list_for_each_entry(encoder, &mode_config->encoder_list, base.head) {
+ for_each_intel_encoder(dev, encoder) {
switch (encoder->type) {
case INTEL_OUTPUT_ANALOG:
has_vga = true;
static void intel_cpu_transcoder_get_m_n(struct intel_crtc *crtc,
enum transcoder transcoder,
- struct intel_link_m_n *m_n)
+ struct intel_link_m_n *m_n,
+ struct intel_link_m_n *m2_n2)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
m_n->gmch_n = I915_READ(PIPE_DATA_N1(transcoder));
m_n->tu = ((I915_READ(PIPE_DATA_M1(transcoder))
& TU_SIZE_MASK) >> TU_SIZE_SHIFT) + 1;
+ /* Read the M2_N2 registers only on gen < 8, where they exist,
+ * and only if DRRS is supported, so that the registers are
+ * not read unnecessarily.
+ */
+ if (m2_n2 && INTEL_INFO(dev)->gen < 8 &&
+ crtc->config.has_drrs) {
+ m2_n2->link_m = I915_READ(PIPE_LINK_M2(transcoder));
+ m2_n2->link_n = I915_READ(PIPE_LINK_N2(transcoder));
+ m2_n2->gmch_m = I915_READ(PIPE_DATA_M2(transcoder))
+ & ~TU_SIZE_MASK;
+ m2_n2->gmch_n = I915_READ(PIPE_DATA_N2(transcoder));
+ m2_n2->tu = ((I915_READ(PIPE_DATA_M2(transcoder))
+ & TU_SIZE_MASK) >> TU_SIZE_SHIFT) + 1;
+ }
} else {
m_n->link_m = I915_READ(PIPE_LINK_M_G4X(pipe));
m_n->link_n = I915_READ(PIPE_LINK_N_G4X(pipe));
intel_pch_transcoder_get_m_n(crtc, &pipe_config->dp_m_n);
else
intel_cpu_transcoder_get_m_n(crtc, pipe_config->cpu_transcoder,
- &pipe_config->dp_m_n);
+ &pipe_config->dp_m_n,
+ &pipe_config->dp_m2_n2);
}
static void ironlake_get_fdi_m_n_config(struct intel_crtc *crtc,
struct intel_crtc_config *pipe_config)
{
intel_cpu_transcoder_get_m_n(crtc, pipe_config->cpu_transcoder,
- &pipe_config->fdi_m_n);
+ &pipe_config->fdi_m_n, NULL);
}
static void ironlake_get_pfit_config(struct intel_crtc *crtc,
crtc->base.primary->fb->height = ((val >> 0) & 0xfff) + 1;
val = I915_READ(DSPSTRIDE(pipe));
- crtc->base.primary->fb->pitches[0] = val & 0xffffff80;
+ crtc->base.primary->fb->pitches[0] = val & 0xffffffc0;
aligned_height = intel_align_height(dev, crtc->base.primary->fb->height,
plane_config->tiled);
return 0;
}
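+/*
+ * Read out which shared DPLL (WRPLL1/WRPLL2) the DDI port uses from its
+ * PORT_CLK_SEL register.
+ */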
+static void haswell_get_ddi_pll(struct drm_i915_private *dev_priv,
+ enum port port,
+ struct intel_crtc_config *pipe_config)
+{
+ pipe_config->ddi_pll_sel = I915_READ(PORT_CLK_SEL(port));
+
+ switch (pipe_config->ddi_pll_sel) {
+ case PORT_CLK_SEL_WRPLL1:
+ pipe_config->shared_dpll = DPLL_ID_WRPLL1;
+ break;
+ case PORT_CLK_SEL_WRPLL2:
+ pipe_config->shared_dpll = DPLL_ID_WRPLL2;
+ break;
+ }
+}
+
static void haswell_get_ddi_port_state(struct intel_crtc *crtc,
struct intel_crtc_config *pipe_config)
{
port = (tmp & TRANS_DDI_PORT_MASK) >> TRANS_DDI_PORT_SHIFT;
- pipe_config->ddi_pll_sel = I915_READ(PORT_CLK_SEL(port));
-
- switch (pipe_config->ddi_pll_sel) {
- case PORT_CLK_SEL_WRPLL1:
- pipe_config->shared_dpll = DPLL_ID_WRPLL1;
- break;
- case PORT_CLK_SEL_WRPLL2:
- pipe_config->shared_dpll = DPLL_ID_WRPLL2;
- break;
- }
+ haswell_get_ddi_pll(dev_priv, port, pipe_config);
if (pipe_config->shared_dpll >= 0) {
pll = &dev_priv->shared_dplls[pipe_config->shared_dpll];
pipe_config->ips_enabled = hsw_crtc_supports_ips(crtc) &&
(I915_READ(IPS_CTL) & IPS_ENABLE);
- pipe_config->pixel_multiplier = 1;
+ if (pipe_config->cpu_transcoder != TRANSCODER_EDP) {
+ pipe_config->pixel_multiplier =
+ I915_READ(PIPE_MULT(pipe_config->cpu_transcoder)) + 1;
+ } else {
+ pipe_config->pixel_multiplier = 1;
+ }
return true;
}
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- uint32_t cntl;
+ uint32_t cntl = 0, size = 0;
- if (base != intel_crtc->cursor_base) {
- /* On these chipsets we can only modify the base whilst
- * the cursor is disabled.
- */
- if (intel_crtc->cursor_cntl) {
- I915_WRITE(_CURACNTR, 0);
- POSTING_READ(_CURACNTR);
- intel_crtc->cursor_cntl = 0;
+ if (base) {
+ unsigned int width = intel_crtc->cursor_width;
+ unsigned int height = intel_crtc->cursor_height;
+ unsigned int stride = roundup_pow_of_two(width) * 4;
+
+ switch (stride) {
+ default:
+ WARN_ONCE(1, "Invalid cursor width/stride, width=%u, stride=%u\n",
+ width, stride);
+ stride = 256;
+ /* fallthrough */
+ case 256:
+ case 512:
+ case 1024:
+ case 2048:
+ break;
}
- I915_WRITE(_CURABASE, base);
- POSTING_READ(_CURABASE);
+ cntl |= CURSOR_ENABLE |
+ CURSOR_GAMMA_ENABLE |
+ CURSOR_FORMAT_ARGB |
+ CURSOR_STRIDE(stride);
+
+ size = (height << 12) | width;
}
- /* XXX width must be 64, stride 256 => 0x00 << 28 */
- cntl = 0;
- if (base)
- cntl = (CURSOR_ENABLE |
- CURSOR_GAMMA_ENABLE |
- CURSOR_FORMAT_ARGB);
- if (intel_crtc->cursor_cntl != cntl) {
- I915_WRITE(_CURACNTR, cntl);
+ if (intel_crtc->cursor_cntl != 0 &&
+ (intel_crtc->cursor_base != base ||
+ intel_crtc->cursor_size != size ||
+ intel_crtc->cursor_cntl != cntl)) {
+ /* On these chipsets we can only modify the base/size/stride
+ * whilst the cursor is disabled.
+ */
+ I915_WRITE(_CURACNTR, 0);
POSTING_READ(_CURACNTR);
- intel_crtc->cursor_cntl = cntl;
+ intel_crtc->cursor_cntl = 0;
}
-}
-static void i9xx_update_cursor(struct drm_crtc *crtc, u32 base)
-{
- struct drm_device *dev = crtc->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- int pipe = intel_crtc->pipe;
- uint32_t cntl;
+ if (intel_crtc->cursor_base != base)
+ I915_WRITE(_CURABASE, base);
- cntl = 0;
- if (base) {
- cntl = MCURSOR_GAMMA_ENABLE;
- switch (intel_crtc->cursor_width) {
- case 64:
- cntl |= CURSOR_MODE_64_ARGB_AX;
- break;
- case 128:
- cntl |= CURSOR_MODE_128_ARGB_AX;
- break;
- case 256:
- cntl |= CURSOR_MODE_256_ARGB_AX;
- break;
- default:
- WARN_ON(1);
- return;
- }
- cntl |= pipe << 28; /* Connect to correct pipe */
+ if (intel_crtc->cursor_size != size) {
+ I915_WRITE(CURSIZE, size);
+ intel_crtc->cursor_size = size;
}
+
if (intel_crtc->cursor_cntl != cntl) {
- I915_WRITE(CURCNTR(pipe), cntl);
- POSTING_READ(CURCNTR(pipe));
+ I915_WRITE(_CURACNTR, cntl);
+ POSTING_READ(_CURACNTR);
intel_crtc->cursor_cntl = cntl;
}
-
- /* and commit changes on next vblank */
- I915_WRITE(CURBASE(pipe), base);
- POSTING_READ(CURBASE(pipe));
}
-static void ivb_update_cursor(struct drm_crtc *crtc, u32 base)
+static void i9xx_update_cursor(struct drm_crtc *crtc, u32 base)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
WARN_ON(1);
return;
}
+ cntl |= pipe << 28; /* Connect to correct pipe */
}
if (IS_HASWELL(dev) || IS_BROADWELL(dev))
cntl |= CURSOR_PIPE_CSC_ENABLE;
I915_WRITE(CURPOS(pipe), pos);
- if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev) || IS_BROADWELL(dev))
- ivb_update_cursor(crtc, base);
- else if (IS_845G(dev) || IS_I865G(dev))
+ if (IS_845G(dev) || IS_I865G(dev))
i845_update_cursor(crtc, base);
else
i9xx_update_cursor(crtc, base);
intel_crtc->cursor_base = base;
}
+static bool cursor_size_ok(struct drm_device *dev,
+ uint32_t width, uint32_t height)
+{
+ if (width == 0 || height == 0)
+ return false;
+
+ /*
+ * 845g/865g are special in that they are only limited by
+ * the width of their cursors; the height is arbitrary up to
+ * the precision of the register. Everything else requires
+ * square cursors, limited to a few power-of-two sizes.
+ */
+ if (IS_845G(dev) || IS_I865G(dev)) {
+ if ((width & 63) != 0)
+ return false;
+
+ if (width > (IS_845G(dev) ? 64 : 512))
+ return false;
+
+ if (height > 1023)
+ return false;
+ } else {
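+ /*
+ * width | height only matches one of the case labels below
+ * when the cursor is square, so the square requirement is
+ * enforced implicitly.
+ */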
+ switch (width | height) {
+ case 256:
+ case 128:
+ if (IS_GEN2(dev))
+ return false;
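+ /* fall through */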
+ case 64:
+ break;
+ default:
+ return false;
+ }
+ }
+
+ return true;
+}
+
/*
* intel_crtc_cursor_set_obj - Set cursor to specified GEM object
*
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
enum pipe pipe = intel_crtc->pipe;
- unsigned old_width;
+ unsigned old_width, stride;
uint32_t addr;
int ret;
if (!obj) {
DRM_DEBUG_KMS("cursor off\n");
addr = 0;
- obj = NULL;
mutex_lock(&dev->struct_mutex);
goto finish;
}
/* Check for which cursor types we support */
- if (!((width == 64 && height == 64) ||
- (width == 128 && height == 128 && !IS_GEN2(dev)) ||
- (width == 256 && height == 256 && !IS_GEN2(dev)))) {
+ if (!cursor_size_ok(dev, width, height)) {
DRM_DEBUG("Cursor dimension not supported\n");
return -EINVAL;
}
- if (obj->base.size < width * height * 4) {
+ stride = roundup_pow_of_two(width) * 4;
+ if (obj->base.size < stride * height) {
DRM_DEBUG_KMS("buffer is too small\n");
ret = -ENOMEM;
goto fail;
goto fail_locked;
}
+ /*
+ * Global GTT PTE registers are special registers which actually
+ * forward writes to a chunk of system memory, which means there
+ * is no risk that the register values disappear as soon
+ * as we call intel_runtime_pm_put(), so it is correct to wrap
+ * only the pin/unpin/fence and not more.
+ */
+ intel_runtime_pm_get(dev_priv);
+
/* Note that the w/a also requires 2 PTE of padding following
* the bo. We currently fill all unused PTE with the shadow
* page and so we should always have valid PTE following the
ret = i915_gem_object_pin_to_display_plane(obj, alignment, NULL);
if (ret) {
DRM_DEBUG_KMS("failed to move cursor bo into the GTT\n");
+ intel_runtime_pm_put(dev_priv);
goto fail_locked;
}
ret = i915_gem_object_put_fence(obj);
if (ret) {
DRM_DEBUG_KMS("failed to release fence for cursor");
+ intel_runtime_pm_put(dev_priv);
goto fail_unpin;
}
addr = i915_gem_obj_ggtt_offset(obj);
+
+ intel_runtime_pm_put(dev_priv);
} else {
int align = IS_I830(dev) ? 16 * 1024 : 256;
ret = i915_gem_object_attach_phys(obj, align);
addr = obj->phys_handle->busaddr;
}
- if (IS_GEN2(dev))
- I915_WRITE(CURSIZE, (height << 12) | width);
-
finish:
if (intel_crtc->cursor_bo) {
if (!INTEL_INFO(dev)->cursor_needs_physical)
connector->base.id, connector->name,
encoder->base.id, encoder->name);
- drm_modeset_acquire_init(ctx, 0);
-
retry:
ret = drm_modeset_lock(&config->connection_mutex, ctx);
if (ret)
i++;
if (!(encoder->possible_crtcs & (1 << i)))
continue;
- if (!possible_crtc->enabled) {
- crtc = possible_crtc;
- break;
- }
+ if (possible_crtc->enabled)
+ continue;
+ /* This can occur when applying the pipe A quirk on resume. */
+ if (to_intel_crtc(possible_crtc)->new_enabled)
+ continue;
+
+ crtc = possible_crtc;
+ break;
}
/*
goto retry;
}
- drm_modeset_drop_locks(ctx);
- drm_modeset_acquire_fini(ctx);
-
return false;
}
void intel_release_load_detect_pipe(struct drm_connector *connector,
- struct intel_load_detect_pipe *old,
- struct drm_modeset_acquire_ctx *ctx)
+ struct intel_load_detect_pipe *old)
{
struct intel_encoder *intel_encoder =
intel_attached_encoder(connector);
drm_framebuffer_unreference(old->release_fb);
}
- goto unlock;
return;
}
/* Switch crtc and encoder back off if necessary */
if (old->dpms_mode != DRM_MODE_DPMS_ON)
connector->funcs->dpms(connector, old->dpms_mode);
-
-unlock:
- drm_modeset_drop_locks(ctx);
- drm_modeset_acquire_fini(ctx);
}
static int i9xx_pll_refclk(struct drm_device *dev,
unsigned frontbuffer_bits,
struct intel_engine_cs *ring)
{
+ struct drm_i915_private *dev_priv = dev->dev_private;
enum pipe pipe;
if (!i915.powersave)
return;
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
if (!(frontbuffer_bits & INTEL_FRONTBUFFER_ALL_MASK(pipe)))
continue;
intel_mark_fb_busy(dev, frontbuffer_bits, NULL);
intel_edp_psr_flush(dev, frontbuffer_bits);
+
+ /*
+ * FIXME: Unconditional fbc flushing here is a rather gross hack and
+ * needs to be reworked into a proper frontbuffer tracking scheme like
+ * psr employs.
+ */
+ if (IS_BROADWELL(dev))
+ gen8_fbc_sw_flush(dev, FBC_REND_CACHE_CLEAN);
}
/**
static void do_intel_finish_page_flip(struct drm_device *dev,
struct drm_crtc *crtc)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_unpin_work *work;
unsigned long flags;
return;
}
- /* and that the unpin work is consistent wrt ->pending. */
- smp_rmb();
-
- intel_crtc->unpin_work = NULL;
-
- if (work->event)
- drm_send_vblank_event(dev, intel_crtc->pipe, work->event);
-
- drm_crtc_vblank_put(crtc);
+ page_flip_completed(intel_crtc);
spin_unlock_irqrestore(&dev->event_lock, flags);
-
- wake_up_all(&dev_priv->pending_flip_queue);
-
- queue_work(dev_priv->wq, &work->work);
-
- trace_i915_flip_complete(intel_crtc->plane, work->pending_flip_obj);
}
void intel_finish_page_flip(struct drm_device *dev, int pipe)
return false;
else if (i915.use_mmio_flip > 0)
return true;
+ else if (i915.enable_execlists)
+ return true;
else
return ring != obj->ring;
}
return -ENODEV;
}
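+/*
+ * Decide whether a queued page flip looks stuck: the flip must have been
+ * ready (rendering complete) for at least 3 vblanks, and the display base
+ * address must already point at the new framebuffer, which suggests the
+ * flip-done interrupt was missed.
+ */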
+static bool __intel_pageflip_stall_check(struct drm_device *dev,
+ struct drm_crtc *crtc)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct intel_unpin_work *work = intel_crtc->unpin_work;
+ u32 addr;
+
+ if (atomic_read(&work->pending) >= INTEL_FLIP_COMPLETE)
+ return true;
+
+ if (!work->enable_stall_check)
+ return false;
+
+ if (work->flip_ready_vblank == 0) {
+ if (work->flip_queued_ring &&
+ !i915_seqno_passed(work->flip_queued_ring->get_seqno(work->flip_queued_ring, true),
+ work->flip_queued_seqno))
+ return false;
+
+ work->flip_ready_vblank = drm_vblank_count(dev, intel_crtc->pipe);
+ }
+
+ if (drm_vblank_count(dev, intel_crtc->pipe) - work->flip_ready_vblank < 3)
+ return false;
+
+ /*
+ * Potential stall - if we see that the flip has happened,
+ * assume a missed interrupt.
+ */
+ if (INTEL_INFO(dev)->gen >= 4)
+ addr = I915_HI_DISPBASE(I915_READ(DSPSURF(intel_crtc->plane)));
+ else
+ addr = I915_READ(DSPADDR(intel_crtc->plane));
+
+ /* There is a potential issue here with a false positive after a flip
+ * to the same address. We could address this by checking for a
+ * non-incrementing frame counter.
+ */
+ return addr == work->gtt_offset;
+}
+
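+/*
+ * Check for a stuck page flip on @pipe and, if the hardware shows the
+ * flip has already happened, complete it by hand.
+ */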
+void intel_check_page_flip(struct drm_device *dev, int pipe)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ unsigned long flags;
+
+ if (crtc == NULL)
+ return;
+
+ spin_lock_irqsave(&dev->event_lock, flags);
+ if (intel_crtc->unpin_work && __intel_pageflip_stall_check(dev, crtc)) {
+ WARN_ONCE(1, "Kicking stuck page flip: queued at %d, now %d\n",
+ intel_crtc->unpin_work->flip_queued_vblank, drm_vblank_count(dev, pipe));
+ page_flip_completed(intel_crtc);
+ }
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+}
+
static int intel_crtc_page_flip(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
/* We borrow the event spin lock for protecting unpin_work */
spin_lock_irqsave(&dev->event_lock, flags);
if (intel_crtc->unpin_work) {
- spin_unlock_irqrestore(&dev->event_lock, flags);
- kfree(work);
- drm_crtc_vblank_put(crtc);
+ /* Before declaring the flip queue wedged, check if
+ * the hardware completed the operation behind our backs.
+ */
+ if (__intel_pageflip_stall_check(dev, crtc)) {
+ DRM_DEBUG_DRIVER("flip queue: previous flip completed, continuing\n");
+ page_flip_completed(intel_crtc);
+ } else {
+ DRM_DEBUG_DRIVER("flip queue: crtc already busy\n");
+ spin_unlock_irqrestore(&dev->event_lock, flags);
- DRM_DEBUG_DRIVER("flip queue: crtc already busy\n");
- return -EBUSY;
+ drm_crtc_vblank_put(crtc);
+ kfree(work);
+ return -EBUSY;
+ }
}
intel_crtc->unpin_work = work;
spin_unlock_irqrestore(&dev->event_lock, flags);
work->pending_flip_obj = obj;
- work->enable_stall_check = true;
-
atomic_inc(&intel_crtc->unpin_work_count);
intel_crtc->reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
work->gtt_offset =
i915_gem_obj_ggtt_offset(obj) + intel_crtc->dspaddr_offset;
- if (use_mmio_flip(ring, obj))
+ if (use_mmio_flip(ring, obj)) {
ret = intel_queue_mmio_flip(dev, crtc, fb, obj, ring,
page_flip_flags);
- else
+ if (ret)
+ goto cleanup_unpin;
+
+ work->flip_queued_seqno = obj->last_write_seqno;
+ work->flip_queued_ring = obj->ring;
+ } else {
ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, ring,
- page_flip_flags);
- if (ret)
- goto cleanup_unpin;
+ page_flip_flags);
+ if (ret)
+ goto cleanup_unpin;
+
+ work->flip_queued_seqno = intel_ring_get_seqno(ring);
+ work->flip_queued_ring = ring;
+ }
+
+ work->flip_queued_vblank = drm_vblank_count(dev, intel_crtc->pipe);
+ work->enable_stall_check = true;
i915_gem_track_fb(work->old_fb_obj, obj,
INTEL_FRONTBUFFER_PRIMARY(pipe));
out_hang:
intel_crtc_wait_for_pending_flips(crtc);
ret = intel_pipe_set_base(crtc, crtc->x, crtc->y, fb);
- if (ret == 0 && event)
+ if (ret == 0 && event) {
+ spin_lock_irqsave(&dev->event_lock, flags);
drm_send_vblank_event(dev, pipe, event);
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+ }
}
return ret;
}
to_intel_encoder(connector->base.encoder);
}
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
encoder->new_crtc =
to_intel_crtc(encoder->base.crtc);
}
connector->base.encoder = &connector->new_encoder->base;
}
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
encoder->base.crtc = &encoder->new_crtc->base;
}
pipe_config->dp_m_n.gmch_m, pipe_config->dp_m_n.gmch_n,
pipe_config->dp_m_n.link_m, pipe_config->dp_m_n.link_n,
pipe_config->dp_m_n.tu);
+
+ DRM_DEBUG_KMS("dp: %i, gmch_m2: %u, gmch_n2: %u, link_m2: %u, link_n2: %u, tu2: %u\n",
+ pipe_config->has_dp_encoder,
+ pipe_config->dp_m2_n2.gmch_m,
+ pipe_config->dp_m2_n2.gmch_n,
+ pipe_config->dp_m2_n2.link_m,
+ pipe_config->dp_m2_n2.link_n,
+ pipe_config->dp_m2_n2.tu);
+
DRM_DEBUG_KMS("requested mode:\n");
drm_mode_debug_printmodeline(&pipe_config->requested_mode);
DRM_DEBUG_KMS("adjusted mode:\n");
struct drm_device *dev = crtc->base.dev;
struct intel_encoder *source_encoder;
- list_for_each_entry(source_encoder,
- &dev->mode_config.encoder_list, base.head) {
+ for_each_intel_encoder(dev, source_encoder) {
if (source_encoder->new_crtc != crtc)
continue;
struct drm_device *dev = crtc->base.dev;
struct intel_encoder *encoder;
- list_for_each_entry(encoder,
- &dev->mode_config.encoder_list, base.head) {
+ for_each_intel_encoder(dev, encoder) {
if (encoder->new_crtc != crtc)
continue;
* adjust it according to limitations or connector properties, and also
* a chance to reject the mode entirely.
*/
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
if (&encoder->new_crtc->base != crtc)
continue;
1 << connector->new_encoder->new_crtc->pipe;
}
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
if (encoder->base.crtc == &encoder->new_crtc->base)
continue;
struct intel_crtc *intel_crtc;
struct drm_connector *connector;
- list_for_each_entry(intel_encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, intel_encoder) {
if (!intel_encoder->base.crtc)
continue;
return false; \
}
+/* This is required for BDW+ where there is only one set of registers for
+ * switching between high and low refresh rates.
+ * This macro can be used whenever a comparison has to be made between one
+ * hw state and multiple sw state variables.
+ */
+#define PIPE_CONF_CHECK_I_ALT(name, alt_name) \
+ if ((current_config->name != pipe_config->name) && \
+ (current_config->alt_name != pipe_config->name)) { \
+ DRM_ERROR("mismatch in " #name " " \
+ "(expected %i or %i, found %i)\n", \
+ current_config->name, \
+ current_config->alt_name, \
+ pipe_config->name); \
+ return false; \
+ }
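+/* Used below for the BDW+ DP M/N readout, e.g.
+ * PIPE_CONF_CHECK_I_ALT(dp_m_n.gmch_m, dp_m2_n2.gmch_m).
+ */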
+
#define PIPE_CONF_CHECK_FLAGS(name, mask) \
if ((current_config->name ^ pipe_config->name) & (mask)) { \
DRM_ERROR("mismatch in " #name "(" #mask ") " \
PIPE_CONF_CHECK_I(fdi_m_n.tu);
PIPE_CONF_CHECK_I(has_dp_encoder);
- PIPE_CONF_CHECK_I(dp_m_n.gmch_m);
- PIPE_CONF_CHECK_I(dp_m_n.gmch_n);
- PIPE_CONF_CHECK_I(dp_m_n.link_m);
- PIPE_CONF_CHECK_I(dp_m_n.link_n);
- PIPE_CONF_CHECK_I(dp_m_n.tu);
+
+ if (INTEL_INFO(dev)->gen < 8) {
+ PIPE_CONF_CHECK_I(dp_m_n.gmch_m);
+ PIPE_CONF_CHECK_I(dp_m_n.gmch_n);
+ PIPE_CONF_CHECK_I(dp_m_n.link_m);
+ PIPE_CONF_CHECK_I(dp_m_n.link_n);
+ PIPE_CONF_CHECK_I(dp_m_n.tu);
+
+ if (current_config->has_drrs) {
+ PIPE_CONF_CHECK_I(dp_m2_n2.gmch_m);
+ PIPE_CONF_CHECK_I(dp_m2_n2.gmch_n);
+ PIPE_CONF_CHECK_I(dp_m2_n2.link_m);
+ PIPE_CONF_CHECK_I(dp_m2_n2.link_n);
+ PIPE_CONF_CHECK_I(dp_m2_n2.tu);
+ }
+ } else {
+ PIPE_CONF_CHECK_I_ALT(dp_m_n.gmch_m, dp_m2_n2.gmch_m);
+ PIPE_CONF_CHECK_I_ALT(dp_m_n.gmch_n, dp_m2_n2.gmch_n);
+ PIPE_CONF_CHECK_I_ALT(dp_m_n.link_m, dp_m2_n2.link_m);
+ PIPE_CONF_CHECK_I_ALT(dp_m_n.link_n, dp_m2_n2.link_n);
+ PIPE_CONF_CHECK_I_ALT(dp_m_n.tu, dp_m2_n2.tu);
+ }
PIPE_CONF_CHECK_I(adjusted_mode.crtc_hdisplay);
PIPE_CONF_CHECK_I(adjusted_mode.crtc_htotal);
#undef PIPE_CONF_CHECK_X
#undef PIPE_CONF_CHECK_I
+#undef PIPE_CONF_CHECK_I_ALT
#undef PIPE_CONF_CHECK_FLAGS
#undef PIPE_CONF_CHECK_CLOCK_FUZZY
#undef PIPE_CONF_QUIRK
struct intel_encoder *encoder;
struct intel_connector *connector;
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
bool enabled = false;
bool active = false;
enum pipe pipe, tracked_pipe;
WARN(crtc->active && !crtc->base.enabled,
"active crtc, but not enabled in sw tracking\n");
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
if (encoder->base.crtc != &crtc->base)
continue;
enabled = true;
active = dev_priv->display.get_pipe_config(crtc,
&pipe_config);
- /* hw state is inconsistent with the pipe A quirk */
- if (crtc->pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE)
+ /* hw state is inconsistent with the pipe quirk */
+ if ((crtc->pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE) ||
+ (crtc->pipe == PIPE_B && dev_priv->quirks & QUIRK_PIPEB_FORCE))
active = crtc->active;
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
enum pipe pipe;
if (encoder->base.crtc != &crtc->base)
continue;
}
count = 0;
- list_for_each_entry(encoder, &dev->mode_config.encoder_list, base.head) {
+ for_each_intel_encoder(dev, encoder) {
encoder->new_crtc =
to_intel_crtc(config->save_encoder_crtcs[count++]);
}
}
/* Check for any encoders that needs to be disabled. */
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
int num_connectors = 0;
list_for_each_entry(connector,
&dev->mode_config.connector_list,
for_each_intel_crtc(dev, crtc) {
crtc->new_enabled = false;
- list_for_each_entry(encoder,
- &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
if (encoder->new_crtc == crtc) {
crtc->new_enabled = true;
break;
connector->new_encoder = NULL;
}
- list_for_each_entry(encoder, &dev->mode_config.encoder_list, base.head) {
+ for_each_intel_encoder(dev, encoder) {
if (encoder->new_crtc == crtc)
encoder->new_crtc = NULL;
}
ret = intel_set_mode(set->crtc, set->mode,
set->x, set->y, set->fb);
} else if (config->fb_changed) {
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(set->crtc);
intel_crtc_wait_for_pending_flips(set->crtc);
*/
if (!intel_crtc->primary_enabled && ret == 0) {
WARN_ON(!intel_crtc->active);
- intel_enable_primary_hw_plane(dev_priv, intel_crtc->plane,
- intel_crtc->pipe);
+ intel_enable_primary_hw_plane(set->crtc->primary, set->crtc);
}
/*
intel_primary_plane_disable(struct drm_plane *plane)
{
struct drm_device *dev = plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane = to_intel_plane(plane);
struct intel_crtc *intel_crtc;
if (!plane->fb)
goto disable_unpin;
intel_crtc_wait_for_pending_flips(plane->crtc);
- intel_disable_primary_hw_plane(dev_priv, intel_plane->plane,
- intel_plane->pipe);
+ intel_disable_primary_hw_plane(plane, plane->crtc);
+
disable_unpin:
mutex_lock(&dev->struct_mutex);
i915_gem_track_fb(intel_fb_obj(plane->fb), NULL,
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct intel_plane *intel_plane = to_intel_plane(plane);
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
struct drm_i915_gem_object *old_obj = intel_fb_obj(plane->fb);
struct drm_rect dest = {
.x2 = intel_crtc->active ? intel_crtc->config.pipe_src_w : 0,
.y2 = intel_crtc->active ? intel_crtc->config.pipe_src_h : 0,
};
+ const struct {
+ int crtc_x, crtc_y;
+ unsigned int crtc_w, crtc_h;
+ uint32_t src_x, src_y, src_w, src_h;
+ } orig = {
+ .crtc_x = crtc_x,
+ .crtc_y = crtc_y,
+ .crtc_w = crtc_w,
+ .crtc_h = crtc_h,
+ .src_x = src_x,
+ .src_y = src_y,
+ .src_w = src_w,
+ .src_h = src_h,
+ };
+ struct intel_plane *intel_plane = to_intel_plane(plane);
bool visible;
int ret;
INTEL_FRONTBUFFER_PRIMARY(intel_crtc->pipe));
if (intel_crtc->primary_enabled)
- intel_disable_primary_hw_plane(dev_priv,
- intel_plane->plane,
- intel_plane->pipe);
+ intel_disable_primary_hw_plane(plane, crtc);
if (plane->fb != fb)
mutex_unlock(&dev->struct_mutex);
- return 0;
- }
+ } else {
+ if (intel_crtc && intel_crtc->active &&
+ intel_crtc->primary_enabled) {
+ /*
+ * FBC does not work on some platforms for rotated
+ * planes, so disable it when rotation is not 0 and
+ * update it when rotation is set back to 0.
+ *
+ * FIXME: This is redundant with the fbc update done in
+ * the primary plane enable function except that that
+ * one is done too late. We eventually need to unify
+ * this.
+ */
+ if (INTEL_INFO(dev)->gen <= 4 && !IS_G4X(dev) &&
+ dev_priv->fbc.plane == intel_crtc->plane &&
+ intel_plane->rotation != BIT(DRM_ROTATE_0)) {
+ intel_disable_fbc(dev);
+ }
+ }
+ ret = intel_pipe_set_base(crtc, src.x1, src.y1, fb);
+ if (ret)
+ return ret;
- ret = intel_pipe_set_base(crtc, src.x1, src.y1, fb);
- if (ret)
- return ret;
+ if (!intel_crtc->primary_enabled)
+ intel_enable_primary_hw_plane(plane, crtc);
+ }
- if (!intel_crtc->primary_enabled)
- intel_enable_primary_hw_plane(dev_priv, intel_crtc->plane,
- intel_crtc->pipe);
+ intel_plane->crtc_x = orig.crtc_x;
+ intel_plane->crtc_y = orig.crtc_y;
+ intel_plane->crtc_w = orig.crtc_w;
+ intel_plane->crtc_h = orig.crtc_h;
+ intel_plane->src_x = orig.src_x;
+ intel_plane->src_y = orig.src_y;
+ intel_plane->src_w = orig.src_w;
+ intel_plane->src_h = orig.src_h;
+ intel_plane->obj = obj;
return 0;
}
.update_plane = intel_primary_plane_setplane,
.disable_plane = intel_primary_plane_disable,
.destroy = intel_plane_destroy,
+ .set_property = intel_plane_set_property
};
static struct drm_plane *intel_primary_plane_create(struct drm_device *dev,
primary->max_downscale = 1;
primary->pipe = pipe;
primary->plane = pipe;
+ primary->rotation = BIT(DRM_ROTATE_0);
if (HAS_FBC(dev) && INTEL_INFO(dev)->gen < 4)
primary->plane = !pipe;
&intel_primary_plane_funcs,
intel_primary_formats, num_formats,
DRM_PLANE_TYPE_PRIMARY);
+
+ if (INTEL_INFO(dev)->gen >= 4) {
+ if (!dev->mode_config.rotation_property)
+ dev->mode_config.rotation_property =
+ drm_mode_create_rotation_property(dev,
+ BIT(DRM_ROTATE_0) |
+ BIT(DRM_ROTATE_180));
+ if (dev->mode_config.rotation_property)
+ drm_object_attach_property(&primary->base.base,
+ dev->mode_config.rotation_property,
+ primary->rotation);
+ }
+
return &primary->base;
}
};
const struct drm_rect clip = {
/* integer pixels */
- .x2 = intel_crtc->config.pipe_src_w,
- .y2 = intel_crtc->config.pipe_src_h,
+ .x2 = intel_crtc->active ? intel_crtc->config.pipe_src_w : 0,
+ .y2 = intel_crtc->active ? intel_crtc->config.pipe_src_h : 0,
};
bool visible;
int ret;
return intel_crtc_cursor_set_obj(crtc, obj, crtc_w, crtc_h);
} else {
intel_crtc_update_cursor(crtc, visible);
+
+ intel_frontbuffer_flip(crtc->dev,
+ INTEL_FRONTBUFFER_CURSOR(intel_crtc->pipe));
+
return 0;
}
}
intel_crtc->cursor_base = ~0;
intel_crtc->cursor_cntl = ~0;
-
- init_waitqueue_head(&intel_crtc->vbl_wait);
+ intel_crtc->cursor_size = ~0;
BUG_ON(pipe >= ARRAY_SIZE(dev_priv->plane_to_crtc_mapping) ||
dev_priv->plane_to_crtc_mapping[intel_crtc->plane] != NULL);
int index_mask = 0;
int entry = 0;
- list_for_each_entry(source_encoder,
- &dev->mode_config.encoder_list, base.head) {
+ for_each_intel_encoder(dev, source_encoder) {
if (encoders_cloneable(encoder, source_encoder))
index_mask |= (1 << entry);
intel_edp_psr_init(dev);
- list_for_each_entry(encoder, &dev->mode_config.encoder_list, base.head) {
+ for_each_intel_encoder(dev, encoder) {
encoder->base.possible_crtcs = encoder->crtc_mask;
encoder->base.possible_clones =
intel_encoder_clones(encoder);
dev_priv->display.get_display_clock_speed =
i830_get_display_clock_speed;
- if (HAS_PCH_SPLIT(dev)) {
- if (IS_GEN5(dev)) {
- dev_priv->display.fdi_link_train = ironlake_fdi_link_train;
- dev_priv->display.write_eld = ironlake_write_eld;
- } else if (IS_GEN6(dev)) {
- dev_priv->display.fdi_link_train = gen6_fdi_link_train;
- dev_priv->display.write_eld = ironlake_write_eld;
- dev_priv->display.modeset_global_resources =
- snb_modeset_global_resources;
- } else if (IS_IVYBRIDGE(dev)) {
- /* FIXME: detect B0+ stepping and use auto training */
- dev_priv->display.fdi_link_train = ivb_manual_fdi_link_train;
- dev_priv->display.write_eld = ironlake_write_eld;
- dev_priv->display.modeset_global_resources =
- ivb_modeset_global_resources;
- } else if (IS_HASWELL(dev) || IS_GEN8(dev)) {
- dev_priv->display.fdi_link_train = hsw_fdi_link_train;
- dev_priv->display.write_eld = haswell_write_eld;
- dev_priv->display.modeset_global_resources =
- haswell_modeset_global_resources;
- }
- } else if (IS_G4X(dev)) {
+ if (IS_G4X(dev)) {
dev_priv->display.write_eld = g4x_write_eld;
+ } else if (IS_GEN5(dev)) {
+ dev_priv->display.fdi_link_train = ironlake_fdi_link_train;
+ dev_priv->display.write_eld = ironlake_write_eld;
+ } else if (IS_GEN6(dev)) {
+ dev_priv->display.fdi_link_train = gen6_fdi_link_train;
+ dev_priv->display.write_eld = ironlake_write_eld;
+ dev_priv->display.modeset_global_resources =
+ snb_modeset_global_resources;
+ } else if (IS_IVYBRIDGE(dev)) {
+ /* FIXME: detect B0+ stepping and use auto training */
+ dev_priv->display.fdi_link_train = ivb_manual_fdi_link_train;
+ dev_priv->display.write_eld = ironlake_write_eld;
+ dev_priv->display.modeset_global_resources =
+ ivb_modeset_global_resources;
+ } else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
+ dev_priv->display.fdi_link_train = hsw_fdi_link_train;
+ dev_priv->display.write_eld = haswell_write_eld;
+ dev_priv->display.modeset_global_resources =
+ haswell_modeset_global_resources;
} else if (IS_VALLEYVIEW(dev)) {
dev_priv->display.modeset_global_resources =
valleyview_modeset_global_resources;
}
intel_panel_init_backlight_funcs(dev);
+
+ mutex_init(&dev_priv->pps_mutex);
}
/*
DRM_INFO("applying pipe a force quirk\n");
}
+static void quirk_pipeb_force(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ dev_priv->quirks |= QUIRK_PIPEB_FORCE;
+ DRM_INFO("applying pipe b force quirk\n");
+}
+
/*
* Some machines (Lenovo U160) do not work with SSC on LVDS for some reason
*/
/* ThinkPad T60 needs pipe A force quirk (bug #16494) */
{ 0x2782, 0x17aa, 0x201a, quirk_pipea_force },
+ /* 830 needs to leave pipe A & dpll A up */
+ { 0x3577, PCI_ANY_ID, PCI_ANY_ID, quirk_pipea_force },
+
+ /* 830 needs to leave pipe B & dpll B up */
+ { 0x3577, PCI_ANY_ID, PCI_ANY_ID, quirk_pipeb_force },
+
/* Lenovo U160 cannot use SSC on LVDS */
{ 0x0046, 0x17aa, 0x3920, quirk_ssc_force_disable },
/* Acer C720 and C720P Chromebooks (Celeron 2955U) have backlights */
{ 0x0a06, 0x1025, 0x0a11, quirk_backlight_present },
+ /* Acer C720 Chromebook (Core i3 4005U) */
+ { 0x0a16, 0x1025, 0x0a11, quirk_backlight_present },
+
/* Toshiba CB35 Chromebook (Celeron 2955U) */
{ 0x0a06, 0x1179, 0x0a88, quirk_backlight_present },
vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
udelay(300);
- I915_WRITE(vga_reg, VGA_DISP_DISABLE);
+ /*
+ * Fujitsu-Siemens Lifebook S6010 (830) has problems resuming
+ * from S3 without preserving (some of?) the other bits.
+ */
+ I915_WRITE(vga_reg, dev_priv->bios_vgacntr | VGA_DISP_DISABLE);
POSTING_READ(vga_reg);
}
intel_init_clock_gating(dev);
- intel_reset_dpio(dev);
-
intel_enable_gt_powersave(dev);
}
dev->mode_config.max_height = 8192;
}
- if (IS_GEN2(dev)) {
+ if (IS_845G(dev) || IS_I865G(dev)) {
+ dev->mode_config.cursor_width = IS_845G(dev) ? 64 : 512;
+ dev->mode_config.cursor_height = 1023;
+ } else if (IS_GEN2(dev)) {
dev->mode_config.cursor_width = GEN2_CURSOR_WIDTH;
dev->mode_config.cursor_height = GEN2_CURSOR_HEIGHT;
} else {
INTEL_INFO(dev)->num_pipes,
INTEL_INFO(dev)->num_pipes > 1 ? "s" : "");
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
intel_crtc_init(dev, pipe);
for_each_sprite(pipe, sprite) {
ret = intel_plane_init(dev, pipe, sprite);
}
intel_init_dpio(dev);
- intel_reset_dpio(dev);
intel_shared_dpll_init(dev);
+ /* save the BIOS value before clobbering it */
+ dev_priv->bios_vgacntr = I915_READ(i915_vgacntrl_reg(dev));
/* Just disable it once at startup */
i915_disable_vga(dev);
intel_setup_outputs(dev);
struct intel_connector *connector;
struct drm_connector *crt = NULL;
struct intel_load_detect_pipe load_detect_temp;
- struct drm_modeset_acquire_ctx ctx;
+ struct drm_modeset_acquire_ctx *ctx = dev->mode_config.acquire_ctx;
/* We can't just switch on pipe A; we need to set things up with a
* proper mode and output configuration. As a gross hack, enable pipe A
if (!crt)
return;
- if (intel_get_load_detect_pipe(crt, NULL, &load_detect_temp, &ctx))
- intel_release_load_detect_pipe(crt, &load_detect_temp, &ctx);
-
-
+ if (intel_get_load_detect_pipe(crt, NULL, &load_detect_temp, ctx))
+ intel_release_load_detect_pipe(crt, &load_detect_temp);
}
static bool
I915_WRITE(reg, I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
/* restore vblank interrupts to correct state */
- if (crtc->active)
+ if (crtc->active) {
+ update_scanline_offset(crtc);
drm_vblank_on(dev, crtc->pipe);
- else
+ } else
drm_vblank_off(dev, crtc->pipe);
/* We need to sanitize the plane -> pipe mapping first because this will
}
}
- if (crtc->active || IS_VALLEYVIEW(dev) || INTEL_INFO(dev)->gen < 5) {
+ if (crtc->active || HAS_GMCH_DISPLAY(dev)) {
/*
* We start out with underrun reporting disabled to avoid races.
* For correct bookkeeping mark this on active crtcs.
*/
crtc->cpu_fifo_underrun_disabled = true;
crtc->pch_fifo_underrun_disabled = true;
-
- update_scanline_offset(crtc);
}
}
intel_display_power_get(dev_priv, POWER_DOMAIN_PLLS);
}
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
pipe = 0;
if (encoder->get_hw_state(encoder, &pipe)) {
}
/* HW state is read out, now we need to sanitize this mess. */
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
intel_sanitize_encoder(encoder);
}
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
intel_sanitize_crtc(crtc);
intel_dump_pipe_config(crtc, &crtc->config, "[setup_hw_state]");
* We need to use raw interfaces for restoring state to avoid
* checking (bogus) intermediate states.
*/
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
struct drm_crtc *crtc =
dev_priv->pipe_to_crtc_mapping[pipe];
* experience fancy races otherwise.
*/
drm_irq_uninstall(dev);
- cancel_work_sync(&dev_priv->hotplug_work);
+ intel_hpd_cancel_work(dev_priv);
dev_priv->pm._irqs_disabled = true;
/*
if (IS_HASWELL(dev) || IS_BROADWELL(dev))
error->power_well_driver = I915_READ(HSW_PWR_WELL_DRIVER);
- for_each_pipe(i) {
+ for_each_pipe(dev_priv, i) {
error->pipe[i].power_domain_on =
intel_display_power_enabled_unlocked(dev_priv,
POWER_DOMAIN_PIPE(i));
struct drm_device *dev,
struct intel_display_error_state *error)
{
+ struct drm_i915_private *dev_priv = dev->dev_private;
int i;
if (!error)
if (IS_HASWELL(dev) || IS_BROADWELL(dev))
err_printf(m, "PWR_WELL_CTL2: %08x\n",
error->power_well_driver);
- for_each_pipe(i) {
+ for_each_pipe(dev_priv, i) {
err_printf(m, "Pipe [%d]:\n", i);
err_printf(m, " Power: %s\n",
error->pipe[i].power_domain_on ? "on" : "off");
err_printf(m, " VSYNC: %08x\n", error->transcoder[i].vsync);
}
}
+
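+/*
+ * Drop any pending page flip events that belong to @file so that a
+ * closing client never has an event delivered to its freed file_priv.
+ */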
+void intel_modeset_preclose(struct drm_device *dev, struct drm_file *file)
+{
+ struct intel_crtc *crtc;
+
+ for_each_intel_crtc(dev, crtc) {
+ struct intel_unpin_work *work;
+ unsigned long irqflags;
+
+ spin_lock_irqsave(&dev->event_lock, irqflags);
+
+ work = crtc->unpin_work;
+
+ if (work && work->event &&
+ work->event->base.file_priv == file) {
+ kfree(work->event);
+ work->event = NULL;
+ }
+
+ spin_unlock_irqrestore(&dev->event_lock, irqflags);
+ }
+}
}
static void intel_dp_link_down(struct intel_dp *intel_dp);
-static bool _edp_panel_vdd_on(struct intel_dp *intel_dp);
+static bool edp_panel_vdd_on(struct intel_dp *intel_dp);
static void edp_panel_vdd_off(struct intel_dp *intel_dp, bool sync);
int
struct intel_dp *intel_dp,
struct edp_power_seq *out);
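+/*
+ * pps_lock()/pps_unlock() take the port power domain reference and
+ * pps_mutex in that order, and release them in reverse; see
+ * vlv_power_sequencer_reset() for the ordering rationale.
+ */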
+static void pps_lock(struct intel_dp *intel_dp)
+{
+ struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+ struct intel_encoder *encoder = &intel_dig_port->base;
+ struct drm_device *dev = encoder->base.dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ enum intel_display_power_domain power_domain;
+
+ /*
+ * See vlv_power_sequencer_reset() for why we need
+ * a power domain reference here.
+ */
+ power_domain = intel_display_port_power_domain(encoder);
+ intel_display_power_get(dev_priv, power_domain);
+
+ mutex_lock(&dev_priv->pps_mutex);
+}
+
+static void pps_unlock(struct intel_dp *intel_dp)
+{
+ struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+ struct intel_encoder *encoder = &intel_dig_port->base;
+ struct drm_device *dev = encoder->base.dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ enum intel_display_power_domain power_domain;
+
+ mutex_unlock(&dev_priv->pps_mutex);
+
+ power_domain = intel_display_port_power_domain(encoder);
+ intel_display_power_put(dev_priv, power_domain);
+}
+
static enum pipe
vlv_power_sequencer_pipe(struct intel_dp *intel_dp)
{
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
- struct drm_crtc *crtc = intel_dig_port->base.base.crtc;
struct drm_device *dev = intel_dig_port->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- enum port port = intel_dig_port->port;
- enum pipe pipe;
+ struct intel_encoder *encoder;
+ unsigned int pipes = (1 << PIPE_A) | (1 << PIPE_B);
+ struct edp_power_seq power_seq;
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
+ if (intel_dp->pps_pipe != INVALID_PIPE)
+ return intel_dp->pps_pipe;
+
+ /*
+ * We don't have a power sequencer currently.
+ * Pick one that's not used by other ports.
+ */
+ list_for_each_entry(encoder, &dev->mode_config.encoder_list,
+ base.head) {
+ struct intel_dp *tmp;
+
+ if (encoder->type != INTEL_OUTPUT_EDP)
+ continue;
+
+ tmp = enc_to_intel_dp(&encoder->base);
+
+ if (tmp->pps_pipe != INVALID_PIPE)
+ pipes &= ~(1 << tmp->pps_pipe);
+ }
+
+ /*
+ * Didn't find one. This should not happen since there
+ * are two power sequencers and up to two eDP ports.
+ */
+ if (WARN_ON(pipes == 0))
+ return PIPE_A;
+
+ intel_dp->pps_pipe = ffs(pipes) - 1;
+
+ DRM_DEBUG_KMS("picked pipe %c power sequencer for port %c\n",
+ pipe_name(intel_dp->pps_pipe),
+ port_name(intel_dig_port->port));
+
+ /* init power sequencer on this pipe and port */
+ intel_dp_init_panel_power_sequencer(dev, intel_dp, &power_seq);
+ intel_dp_init_panel_power_sequencer_registers(dev, intel_dp,
+ &power_seq);
+
+ return intel_dp->pps_pipe;
+}
+
+typedef bool (*vlv_pipe_check)(struct drm_i915_private *dev_priv,
+ enum pipe pipe);
- /* modeset should have pipe */
- if (crtc)
- return to_intel_crtc(crtc)->pipe;
+static bool vlv_pipe_has_pp_on(struct drm_i915_private *dev_priv,
+ enum pipe pipe)
+{
+ return I915_READ(VLV_PIPE_PP_STATUS(pipe)) & PP_ON;
+}
+
+static bool vlv_pipe_has_vdd_on(struct drm_i915_private *dev_priv,
+ enum pipe pipe)
+{
+ return I915_READ(VLV_PIPE_PP_CONTROL(pipe)) & EDP_FORCE_VDD;
+}
+
+static bool vlv_pipe_any(struct drm_i915_private *dev_priv,
+ enum pipe pipe)
+{
+ return true;
+}
+
+static enum pipe
+vlv_initial_pps_pipe(struct drm_i915_private *dev_priv,
+ enum port port,
+ vlv_pipe_check pipe_check)
+{
+ enum pipe pipe;
- /* init time, try to find a pipe with this port selected */
for (pipe = PIPE_A; pipe <= PIPE_B; pipe++) {
u32 port_sel = I915_READ(VLV_PIPE_PP_ON_DELAYS(pipe)) &
PANEL_PORT_SELECT_MASK;
- if (port_sel == PANEL_PORT_SELECT_DPB_VLV && port == PORT_B)
- return pipe;
- if (port_sel == PANEL_PORT_SELECT_DPC_VLV && port == PORT_C)
- return pipe;
+
+ if (port_sel != PANEL_PORT_SELECT_VLV(port))
+ continue;
+
+ if (!pipe_check(dev_priv, pipe))
+ continue;
+
+ return pipe;
+ }
+
+ return INVALID_PIPE;
+}
+
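+/*
+ * At driver init, recover which pipe's power sequencer the BIOS left
+ * attached to this eDP port: prefer a pipe whose panel is powered on,
+ * then one with VDD forced on, then any pipe selecting this port.
+ */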
+static void
+vlv_initial_power_sequencer_setup(struct intel_dp *intel_dp)
+{
+ struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+ struct drm_device *dev = intel_dig_port->base.base.dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct edp_power_seq power_seq;
+ enum port port = intel_dig_port->port;
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
+ /* try to find a pipe with this port selected */
+ /* first pick one where the panel is on */
+ intel_dp->pps_pipe = vlv_initial_pps_pipe(dev_priv, port,
+ vlv_pipe_has_pp_on);
+ /* didn't find one? pick one where vdd is on */
+ if (intel_dp->pps_pipe == INVALID_PIPE)
+ intel_dp->pps_pipe = vlv_initial_pps_pipe(dev_priv, port,
+ vlv_pipe_has_vdd_on);
+ /* didn't find one? pick one with just the correct port */
+ if (intel_dp->pps_pipe == INVALID_PIPE)
+ intel_dp->pps_pipe = vlv_initial_pps_pipe(dev_priv, port,
+ vlv_pipe_any);
+
+ /* didn't find one? just let vlv_power_sequencer_pipe() pick one when needed */
+ if (intel_dp->pps_pipe == INVALID_PIPE) {
+ DRM_DEBUG_KMS("no initial power sequencer for port %c\n",
+ port_name(port));
+ return;
}
- /* shrug */
- return PIPE_A;
+ DRM_DEBUG_KMS("initial power sequencer for port %c: pipe %c\n",
+ port_name(port), pipe_name(intel_dp->pps_pipe));
+
+ intel_dp_init_panel_power_sequencer(dev, intel_dp, &power_seq);
+ intel_dp_init_panel_power_sequencer_registers(dev, intel_dp,
+ &power_seq);
+}
+
+void vlv_power_sequencer_reset(struct drm_i915_private *dev_priv)
+{
+ struct drm_device *dev = dev_priv->dev;
+ struct intel_encoder *encoder;
+
+ if (WARN_ON(!IS_VALLEYVIEW(dev)))
+ return;
+
+ /*
+ * We can't grab pps_mutex here due to deadlock with power_domain
+ * mutex when power_domain functions are called while holding pps_mutex.
+ * That also means that in order to use pps_pipe the code needs to
+ * hold both a power domain reference and pps_mutex, and the power domain
+ * reference get/put must be done while _not_ holding pps_mutex.
+	 * pps_{lock,unlock}() do these steps in the correct order, so they
+	 * should always be used.
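+	 *
+	 * Roughly, as a sketch of the assumed shape (not the exact code):
+	 *   pps_lock():   intel_display_power_get(...); mutex_lock(&pps_mutex);
+	 *   pps_unlock(): mutex_unlock(&pps_mutex); intel_display_power_put(...);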
+ */
+
+ list_for_each_entry(encoder, &dev->mode_config.encoder_list, base.head) {
+ struct intel_dp *intel_dp;
+
+ if (encoder->type != INTEL_OUTPUT_EDP)
+ continue;
+
+ intel_dp = enc_to_intel_dp(&encoder->base);
+ intel_dp->pps_pipe = INVALID_PIPE;
+ }
}
static u32 _pp_ctrl_reg(struct intel_dp *intel_dp)
struct drm_i915_private *dev_priv = dev->dev_private;
u32 pp_div;
u32 pp_ctrl_reg, pp_div_reg;
- enum pipe pipe = vlv_power_sequencer_pipe(intel_dp);
if (!is_edp(intel_dp) || code != SYS_RESTART)
return 0;
+ pps_lock(intel_dp);
+
if (IS_VALLEYVIEW(dev)) {
+ enum pipe pipe = vlv_power_sequencer_pipe(intel_dp);
+
pp_ctrl_reg = VLV_PIPE_PP_CONTROL(pipe);
pp_div_reg = VLV_PIPE_PP_DIVISOR(pipe);
pp_div = I915_READ(pp_div_reg);
msleep(intel_dp->panel_power_cycle_delay);
}
+ pps_unlock(intel_dp);
+
return 0;
}
struct drm_device *dev = intel_dp_to_dev(intel_dp);
struct drm_i915_private *dev_priv = dev->dev_private;
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
return (I915_READ(_pp_stat_reg(intel_dp)) & PP_ON) != 0;
}
{
struct drm_device *dev = intel_dp_to_dev(intel_dp);
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
- struct intel_encoder *intel_encoder = &intel_dig_port->base;
- enum intel_display_power_domain power_domain;
- power_domain = intel_display_port_power_domain(intel_encoder);
- return intel_display_power_enabled(dev_priv, power_domain) &&
- (I915_READ(_pp_ctrl_reg(intel_dp)) & EDP_FORCE_VDD) != 0;
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
+ return I915_READ(_pp_ctrl_reg(intel_dp)) & EDP_FORCE_VDD;
}
static void
bool has_aux_irq = HAS_AUX_IRQ(dev);
bool vdd;
- vdd = _edp_panel_vdd_on(intel_dp);
+ pps_lock(intel_dp);
+
+ /*
+ * We will be called with VDD already enabled for dpcd/edid/oui reads.
+	 * In such cases we want to leave VDD enabled and it's up to the upper
+	 * layers to turn it off. But for e.g. i2c-dev access we need to turn
+	 * it on/off ourselves.
+ */
+ vdd = edp_panel_vdd_on(intel_dp);
/* dp aux is extremely sensitive to irq latency, hence request the
* lowest possible wakeup latency and so prevent the cpu from going into
if (vdd)
edp_panel_vdd_off(intel_dp, false);
+ pps_unlock(intel_dp);
+
return ret;
}
}
}
-static void
-intel_dp_set_m2_n2(struct intel_crtc *crtc, struct intel_link_m_n *m_n)
-{
- struct drm_device *dev = crtc->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- enum transcoder transcoder = crtc->config.cpu_transcoder;
-
- I915_WRITE(PIPE_DATA_M2(transcoder),
- TU_SIZE(m_n->tu) | m_n->gmch_m);
- I915_WRITE(PIPE_DATA_N2(transcoder), m_n->gmch_n);
- I915_WRITE(PIPE_LINK_M2(transcoder), m_n->link_m);
- I915_WRITE(PIPE_LINK_N2(transcoder), m_n->link_n);
-}
-
bool
intel_dp_compute_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
pipe_config->has_pch_encoder = true;
pipe_config->has_dp_encoder = true;
+ pipe_config->has_drrs = false;
pipe_config->has_audio = intel_dp->has_audio;
if (is_edp(intel_dp) && intel_connector->panel.fixed_mode) {
bpp = dev_priv->vbt.edp_bpp;
}
- if (IS_BROADWELL(dev)) {
- /* Yes, it's an ugly hack. */
- min_lane_count = max_lane_count;
- DRM_DEBUG_KMS("forcing lane count to max (%u) on BDW\n",
- min_lane_count);
- } else if (dev_priv->vbt.edp_lanes) {
- min_lane_count = min(dev_priv->vbt.edp_lanes,
- max_lane_count);
- DRM_DEBUG_KMS("using min %u lanes per VBT\n",
- min_lane_count);
- }
-
- if (dev_priv->vbt.edp_rate) {
- min_clock = min(dev_priv->vbt.edp_rate >> 3, max_clock);
- DRM_DEBUG_KMS("using min %02x link bw per VBT\n",
- bws[min_clock]);
- }
+ /*
+ * Use the maximum clock and number of lanes the eDP panel
+		 * advertises being capable of. The panels are generally
+ * designed to support only a single clock and lane
+ * configuration, and typically these values correspond to the
+ * native resolution of the panel.
+ */
+ min_lane_count = max_lane_count;
+ min_clock = max_clock;
}
for (; bpp >= 6*3; bpp -= 2*3) {
if (intel_connector->panel.downclock_mode != NULL &&
intel_dp->drrs_state.type == SEAMLESS_DRRS_SUPPORT) {
+ pipe_config->has_drrs = true;
intel_link_compute_m_n(bpp, lane_count,
intel_connector->panel.downclock_mode->clock,
pipe_config->port_clock,
&pipe_config->dp_m2_n2);
}
- if (HAS_DDI(dev))
+ if (IS_HASWELL(dev) || IS_BROADWELL(dev))
hsw_dp_set_ddi_pll_sel(pipe_config, intel_dp->link_bw);
else
intel_dp_set_clock(encoder, pipe_config, intel_dp->link_bw);
struct drm_i915_private *dev_priv = dev->dev_private;
u32 pp_stat_reg, pp_ctrl_reg;
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
pp_stat_reg = _pp_stat_reg(intel_dp);
pp_ctrl_reg = _pp_ctrl_reg(intel_dp);
struct drm_i915_private *dev_priv = dev->dev_private;
u32 control;
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
control = I915_READ(_pp_ctrl_reg(intel_dp));
control &= ~PANEL_UNLOCK_MASK;
control |= PANEL_UNLOCK_REGS;
return control;
}
-static bool _edp_panel_vdd_on(struct intel_dp *intel_dp)
+/*
+ * Must be paired with edp_panel_vdd_off().
+ * Must hold pps_mutex around the whole on/off sequence.
+ * Can be nested with intel_edp_panel_vdd_{on,off}() calls.
+ */
+static bool edp_panel_vdd_on(struct intel_dp *intel_dp)
{
struct drm_device *dev = intel_dp_to_dev(intel_dp);
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
u32 pp_stat_reg, pp_ctrl_reg;
bool need_to_disable = !intel_dp->want_panel_vdd;
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
if (!is_edp(intel_dp))
return false;
return need_to_disable;
}
+/*
+ * Must be paired with intel_edp_panel_vdd_off() or
+ * intel_edp_panel_off().
+ * Nested calls to these functions are not allowed since
+ * we drop the lock. The caller must use some higher-level
+ * locking to prevent nested calls from other threads.
+ */
void intel_edp_panel_vdd_on(struct intel_dp *intel_dp)
{
- if (is_edp(intel_dp)) {
- bool vdd = _edp_panel_vdd_on(intel_dp);
+ bool vdd;
- WARN(!vdd, "eDP VDD already requested on\n");
- }
+ if (!is_edp(intel_dp))
+ return;
+
+ pps_lock(intel_dp);
+ vdd = edp_panel_vdd_on(intel_dp);
+ pps_unlock(intel_dp);
+
+ WARN(!vdd, "eDP VDD already requested on\n");
}
static void edp_panel_vdd_off_sync(struct intel_dp *intel_dp)
{
struct drm_device *dev = intel_dp_to_dev(intel_dp);
struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_digital_port *intel_dig_port =
+ dp_to_dig_port(intel_dp);
+ struct intel_encoder *intel_encoder = &intel_dig_port->base;
+ enum intel_display_power_domain power_domain;
u32 pp;
u32 pp_stat_reg, pp_ctrl_reg;
- WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
+ lockdep_assert_held(&dev_priv->pps_mutex);
- if (!intel_dp->want_panel_vdd && edp_have_panel_vdd(intel_dp)) {
- struct intel_digital_port *intel_dig_port =
- dp_to_dig_port(intel_dp);
- struct intel_encoder *intel_encoder = &intel_dig_port->base;
- enum intel_display_power_domain power_domain;
+ WARN_ON(intel_dp->want_panel_vdd);
- DRM_DEBUG_KMS("Turning eDP VDD off\n");
+ if (!edp_have_panel_vdd(intel_dp))
+ return;
- pp = ironlake_get_pp_control(intel_dp);
- pp &= ~EDP_FORCE_VDD;
+ DRM_DEBUG_KMS("Turning eDP VDD off\n");
- pp_ctrl_reg = _pp_ctrl_reg(intel_dp);
- pp_stat_reg = _pp_stat_reg(intel_dp);
+ pp = ironlake_get_pp_control(intel_dp);
+ pp &= ~EDP_FORCE_VDD;
- I915_WRITE(pp_ctrl_reg, pp);
- POSTING_READ(pp_ctrl_reg);
+ pp_ctrl_reg = _pp_ctrl_reg(intel_dp);
+ pp_stat_reg = _pp_stat_reg(intel_dp);
+
+ I915_WRITE(pp_ctrl_reg, pp);
+ POSTING_READ(pp_ctrl_reg);
- /* Make sure sequencer is idle before allowing subsequent activity */
- DRM_DEBUG_KMS("PP_STATUS: 0x%08x PP_CONTROL: 0x%08x\n",
- I915_READ(pp_stat_reg), I915_READ(pp_ctrl_reg));
+ /* Make sure sequencer is idle before allowing subsequent activity */
+ DRM_DEBUG_KMS("PP_STATUS: 0x%08x PP_CONTROL: 0x%08x\n",
+ I915_READ(pp_stat_reg), I915_READ(pp_ctrl_reg));
- if ((pp & POWER_TARGET_ON) == 0)
- intel_dp->last_power_cycle = jiffies;
+ if ((pp & POWER_TARGET_ON) == 0)
+ intel_dp->last_power_cycle = jiffies;
- power_domain = intel_display_port_power_domain(intel_encoder);
- intel_display_power_put(dev_priv, power_domain);
- }
+ power_domain = intel_display_port_power_domain(intel_encoder);
+ intel_display_power_put(dev_priv, power_domain);
}
static void edp_panel_vdd_work(struct work_struct *__work)
{
struct intel_dp *intel_dp = container_of(to_delayed_work(__work),
struct intel_dp, panel_vdd_work);
- struct drm_device *dev = intel_dp_to_dev(intel_dp);
- drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
- edp_panel_vdd_off_sync(intel_dp);
- drm_modeset_unlock(&dev->mode_config.connection_mutex);
+ pps_lock(intel_dp);
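+	/* vdd may have been re-requested while this work was pending */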
+ if (!intel_dp->want_panel_vdd)
+ edp_panel_vdd_off_sync(intel_dp);
+ pps_unlock(intel_dp);
}
static void edp_panel_vdd_schedule_off(struct intel_dp *intel_dp)
schedule_delayed_work(&intel_dp->panel_vdd_work, delay);
}
+/*
+ * Must be paired with edp_panel_vdd_on().
+ * Must hold pps_mutex around the whole on/off sequence.
+ * Can be nested with intel_edp_panel_vdd_{on,off}() calls.
+ */
static void edp_panel_vdd_off(struct intel_dp *intel_dp, bool sync)
{
+ struct drm_i915_private *dev_priv =
+ intel_dp_to_dev(intel_dp)->dev_private;
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
if (!is_edp(intel_dp))
return;
edp_panel_vdd_schedule_off(intel_dp);
}
+/*
+ * Must be paired with intel_edp_panel_vdd_on().
+ * Nested calls to these functions are not allowed since
+ * we drop the lock. The caller must use some higher-level
+ * locking to prevent nested calls from other threads.
+ */
+static void intel_edp_panel_vdd_off(struct intel_dp *intel_dp, bool sync)
+{
+ if (!is_edp(intel_dp))
+ return;
+
+ pps_lock(intel_dp);
+ edp_panel_vdd_off(intel_dp, sync);
+ pps_unlock(intel_dp);
+}
+
void intel_edp_panel_on(struct intel_dp *intel_dp)
{
struct drm_device *dev = intel_dp_to_dev(intel_dp);
DRM_DEBUG_KMS("Turn eDP power on\n");
+ pps_lock(intel_dp);
+
if (edp_have_panel_power(intel_dp)) {
DRM_DEBUG_KMS("eDP power already on\n");
- return;
+ goto out;
}
wait_panel_power_cycle(intel_dp);
I915_WRITE(pp_ctrl_reg, pp);
POSTING_READ(pp_ctrl_reg);
}
+
+ out:
+ pps_unlock(intel_dp);
}
void intel_edp_panel_off(struct intel_dp *intel_dp)
DRM_DEBUG_KMS("Turn eDP power off\n");
+ pps_lock(intel_dp);
+
WARN(!intel_dp->want_panel_vdd, "Need VDD to turn off panel\n");
pp = ironlake_get_pp_control(intel_dp);
/* We got a reference when we enabled the VDD. */
power_domain = intel_display_port_power_domain(intel_encoder);
intel_display_power_put(dev_priv, power_domain);
+
+ pps_unlock(intel_dp);
}
-void intel_edp_backlight_on(struct intel_dp *intel_dp)
+/* Enable backlight in the panel power control. */
+static void _intel_edp_backlight_on(struct intel_dp *intel_dp)
{
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
struct drm_device *dev = intel_dig_port->base.base.dev;
u32 pp;
u32 pp_ctrl_reg;
- if (!is_edp(intel_dp))
- return;
-
- DRM_DEBUG_KMS("\n");
-
- intel_panel_enable_backlight(intel_dp->attached_connector);
-
/*
* If we enable the backlight right away following a panel power
* on, we may see slight flicker as the panel syncs with the eDP
* allowing it to appear.
*/
wait_backlight_on(intel_dp);
+
+ pps_lock(intel_dp);
+
pp = ironlake_get_pp_control(intel_dp);
pp |= EDP_BLC_ENABLE;
I915_WRITE(pp_ctrl_reg, pp);
POSTING_READ(pp_ctrl_reg);
+
+ pps_unlock(intel_dp);
}
-void intel_edp_backlight_off(struct intel_dp *intel_dp)
+/* Enable backlight PWM and backlight PP control. */
+void intel_edp_backlight_on(struct intel_dp *intel_dp)
+{
+ if (!is_edp(intel_dp))
+ return;
+
+ DRM_DEBUG_KMS("\n");
+
+ intel_panel_enable_backlight(intel_dp->attached_connector);
+ _intel_edp_backlight_on(intel_dp);
+}
+
+/* Disable backlight in the panel power control. */
+static void _intel_edp_backlight_off(struct intel_dp *intel_dp)
{
struct drm_device *dev = intel_dp_to_dev(intel_dp);
struct drm_i915_private *dev_priv = dev->dev_private;
if (!is_edp(intel_dp))
return;
- DRM_DEBUG_KMS("\n");
+ pps_lock(intel_dp);
+
pp = ironlake_get_pp_control(intel_dp);
pp &= ~EDP_BLC_ENABLE;
I915_WRITE(pp_ctrl_reg, pp);
POSTING_READ(pp_ctrl_reg);
- intel_dp->last_backlight_off = jiffies;
+ pps_unlock(intel_dp);
+
+ intel_dp->last_backlight_off = jiffies;
edp_wait_backlight_off(intel_dp);
+}
+
+/* Disable backlight PP control and backlight PWM. */
+void intel_edp_backlight_off(struct intel_dp *intel_dp)
+{
+ if (!is_edp(intel_dp))
+ return;
+ DRM_DEBUG_KMS("\n");
+
+ _intel_edp_backlight_off(intel_dp);
intel_panel_disable_backlight(intel_dp->attached_connector);
}
+/*
+ * Hook for controlling the panel power control backlight through the bl_power
+ * sysfs attribute. Take care to handle multiple calls.
+ */
+static void intel_edp_backlight_power(struct intel_connector *connector,
+ bool enable)
+{
+ struct intel_dp *intel_dp = intel_attached_dp(&connector->base);
+ bool is_enabled;
+
+ pps_lock(intel_dp);
+ is_enabled = ironlake_get_pp_control(intel_dp) & EDP_BLC_ENABLE;
+ pps_unlock(intel_dp);
+
+ if (is_enabled == enable)
+ return;
+
+ DRM_DEBUG_KMS("panel power control backlight %s\n",
+ enable ? "enable" : "disable");
+
+ if (enable)
+ _intel_edp_backlight_on(intel_dp);
+ else
+ _intel_edp_backlight_off(intel_dp);
+}
+
static void ironlake_edp_pll_on(struct intel_dp *intel_dp)
{
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
if (mode != DRM_MODE_DPMS_ON) {
ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER,
DP_SET_POWER_D3);
- if (ret != 1)
- DRM_DEBUG_DRIVER("failed to write sink power state\n");
} else {
/*
* When turning on, we need to retry for 1ms to give the sink
msleep(1);
}
}
+
+ if (ret != 1)
+ DRM_DEBUG_KMS("failed to %s sink power state\n",
+ mode == DRM_MODE_DPMS_ON ? "enable" : "disable");
}
static bool intel_dp_get_hw_state(struct intel_encoder *encoder,
return true;
}
- for_each_pipe(i) {
+ for_each_pipe(dev_priv, i) {
trans_dp = I915_READ(TRANS_DP_CTL(i));
if ((trans_dp & TRANS_DP_PORT_SEL_MASK) == trans_sel) {
*pipe = i;
static void intel_disable_dp(struct intel_encoder *encoder)
{
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
- enum port port = dp_to_dig_port(intel_dp)->port;
struct drm_device *dev = encoder->base.dev;
/* Make sure the panel is off before trying to change the mode. But also
intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
intel_edp_panel_off(intel_dp);
- /* cpu edp my only be disable _after_ the cpu pipe/plane is disabled. */
- if (!(port == PORT_A || IS_VALLEYVIEW(dev)))
+ /* disable the port before the pipe on g4x */
+ if (INTEL_INFO(dev)->gen < 5)
intel_dp_link_down(intel_dp);
}
-static void g4x_post_disable_dp(struct intel_encoder *encoder)
+static void ilk_post_disable_dp(struct intel_encoder *encoder)
{
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
enum port port = dp_to_dig_port(intel_dp)->port;
- if (port != PORT_A)
- return;
-
intel_dp_link_down(intel_dp);
- ironlake_edp_pll_off(intel_dp);
+ if (port == PORT_A)
+ ironlake_edp_pll_off(intel_dp);
}
static void vlv_post_disable_dp(struct intel_encoder *encoder)
mutex_unlock(&dev_priv->dpio_lock);
}
-static void intel_enable_dp(struct intel_encoder *encoder)
+static void
+_intel_dp_set_link_train(struct intel_dp *intel_dp,
+ uint32_t *DP,
+ uint8_t dp_train_pat)
{
- struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
- struct drm_device *dev = encoder->base.dev;
+ struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+ struct drm_device *dev = intel_dig_port->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- uint32_t dp_reg = I915_READ(intel_dp->output_reg);
+ enum port port = intel_dig_port->port;
- if (WARN_ON(dp_reg & DP_PORT_EN))
- return;
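+
+	/*
+	 * Select the training pattern bits in the platform-appropriate
+	 * register: DP_TP_CTL on DDI platforms, the CPT variant of the
+	 * port register on CPT/PPT PCH ports, and the classic port
+	 * register otherwise.
+	 */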
+ if (HAS_DDI(dev)) {
+ uint32_t temp = I915_READ(DP_TP_CTL(port));
- intel_edp_panel_vdd_on(intel_dp);
- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
- intel_dp_start_link_train(intel_dp);
- intel_edp_panel_on(intel_dp);
- edp_panel_vdd_off(intel_dp, true);
- intel_dp_complete_link_train(intel_dp);
- intel_dp_stop_link_train(intel_dp);
-}
+ if (dp_train_pat & DP_LINK_SCRAMBLING_DISABLE)
+ temp |= DP_TP_CTL_SCRAMBLE_DISABLE;
+ else
+ temp &= ~DP_TP_CTL_SCRAMBLE_DISABLE;
-static void g4x_enable_dp(struct intel_encoder *encoder)
-{
- struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
+ temp &= ~DP_TP_CTL_LINK_TRAIN_MASK;
+ switch (dp_train_pat & DP_TRAINING_PATTERN_MASK) {
+ case DP_TRAINING_PATTERN_DISABLE:
+ temp |= DP_TP_CTL_LINK_TRAIN_NORMAL;
+
+ break;
+ case DP_TRAINING_PATTERN_1:
+ temp |= DP_TP_CTL_LINK_TRAIN_PAT1;
+ break;
+ case DP_TRAINING_PATTERN_2:
+ temp |= DP_TP_CTL_LINK_TRAIN_PAT2;
+ break;
+ case DP_TRAINING_PATTERN_3:
+ temp |= DP_TP_CTL_LINK_TRAIN_PAT3;
+ break;
+ }
+ I915_WRITE(DP_TP_CTL(port), temp);
+
+ } else if (HAS_PCH_CPT(dev) && (IS_GEN7(dev) || port != PORT_A)) {
+ *DP &= ~DP_LINK_TRAIN_MASK_CPT;
+
+ switch (dp_train_pat & DP_TRAINING_PATTERN_MASK) {
+ case DP_TRAINING_PATTERN_DISABLE:
+ *DP |= DP_LINK_TRAIN_OFF_CPT;
+ break;
+ case DP_TRAINING_PATTERN_1:
+ *DP |= DP_LINK_TRAIN_PAT_1_CPT;
+ break;
+ case DP_TRAINING_PATTERN_2:
+ *DP |= DP_LINK_TRAIN_PAT_2_CPT;
+ break;
+ case DP_TRAINING_PATTERN_3:
+ DRM_ERROR("DP training pattern 3 not supported\n");
+ *DP |= DP_LINK_TRAIN_PAT_2_CPT;
+ break;
+ }
+
+ } else {
+ if (IS_CHERRYVIEW(dev))
+ *DP &= ~DP_LINK_TRAIN_MASK_CHV;
+ else
+ *DP &= ~DP_LINK_TRAIN_MASK;
+
+ switch (dp_train_pat & DP_TRAINING_PATTERN_MASK) {
+ case DP_TRAINING_PATTERN_DISABLE:
+ *DP |= DP_LINK_TRAIN_OFF;
+ break;
+ case DP_TRAINING_PATTERN_1:
+ *DP |= DP_LINK_TRAIN_PAT_1;
+ break;
+ case DP_TRAINING_PATTERN_2:
+ *DP |= DP_LINK_TRAIN_PAT_2;
+ break;
+ case DP_TRAINING_PATTERN_3:
+ if (IS_CHERRYVIEW(dev)) {
+ *DP |= DP_LINK_TRAIN_PAT_3_CHV;
+ } else {
+ DRM_ERROR("DP training pattern 3 not supported\n");
+ *DP |= DP_LINK_TRAIN_PAT_2;
+ }
+ break;
+ }
+ }
+}
+
+static void intel_dp_enable_port(struct intel_dp *intel_dp)
+{
+ struct drm_device *dev = intel_dp_to_dev(intel_dp);
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ intel_dp->DP |= DP_PORT_EN;
+
+ /* enable with pattern 1 (as per spec) */
+ _intel_dp_set_link_train(intel_dp, &intel_dp->DP,
+ DP_TRAINING_PATTERN_1);
+
+ I915_WRITE(intel_dp->output_reg, intel_dp->DP);
+ POSTING_READ(intel_dp->output_reg);
+}
+
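+/*
+ * Enable the port first (with training pattern 1), then run the eDP
+ * panel power sequence and link training.
+ */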
+static void intel_enable_dp(struct intel_encoder *encoder)
+{
+ struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
+ struct drm_device *dev = encoder->base.dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ uint32_t dp_reg = I915_READ(intel_dp->output_reg);
+
+ if (WARN_ON(dp_reg & DP_PORT_EN))
+ return;
+
+ intel_dp_enable_port(intel_dp);
+ intel_edp_panel_vdd_on(intel_dp);
+ intel_edp_panel_on(intel_dp);
+ intel_edp_panel_vdd_off(intel_dp, true);
+ intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+ intel_dp_start_link_train(intel_dp);
+ intel_dp_complete_link_train(intel_dp);
+ intel_dp_stop_link_train(intel_dp);
+}
+
+static void g4x_enable_dp(struct intel_encoder *encoder)
+{
+ struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
intel_enable_dp(encoder);
intel_edp_backlight_on(intel_dp);
}
}
+static void vlv_steal_power_sequencer(struct drm_device *dev,
+ enum pipe pipe)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_encoder *encoder;
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
+ list_for_each_entry(encoder, &dev->mode_config.encoder_list,
+ base.head) {
+ struct intel_dp *intel_dp;
+ enum port port;
+
+ if (encoder->type != INTEL_OUTPUT_EDP)
+ continue;
+
+ intel_dp = enc_to_intel_dp(&encoder->base);
+ port = dp_to_dig_port(intel_dp)->port;
+
+ if (intel_dp->pps_pipe != pipe)
+ continue;
+
+ DRM_DEBUG_KMS("stealing pipe %c power sequencer from port %c\n",
+ pipe_name(pipe), port_name(port));
+
+ /* make sure vdd is off before we steal it */
+ edp_panel_vdd_off_sync(intel_dp);
+
+ intel_dp->pps_pipe = INVALID_PIPE;
+ }
+}
+
+static void vlv_init_panel_power_sequencer(struct intel_dp *intel_dp)
+{
+ struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+ struct intel_encoder *encoder = &intel_dig_port->base;
+ struct drm_device *dev = encoder->base.dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_crtc *crtc = to_intel_crtc(encoder->base.crtc);
+ struct edp_power_seq power_seq;
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
+ if (intel_dp->pps_pipe == crtc->pipe)
+ return;
+
+ /*
+ * If another power sequencer was being used on this
+	 * port previously, make sure to turn off vdd there while
+ * we still have control of it.
+ */
+ if (intel_dp->pps_pipe != INVALID_PIPE)
+ edp_panel_vdd_off_sync(intel_dp);
+
+ /*
+ * We may be stealing the power
+ * sequencer from another port.
+ */
+ vlv_steal_power_sequencer(dev, crtc->pipe);
+
+ /* now it's all ours */
+ intel_dp->pps_pipe = crtc->pipe;
+
+ DRM_DEBUG_KMS("initializing pipe %c power sequencer for port %c\n",
+ pipe_name(intel_dp->pps_pipe), port_name(intel_dig_port->port));
+
+ /* init power sequencer on this pipe and port */
+ intel_dp_init_panel_power_sequencer(dev, intel_dp, &power_seq);
+ intel_dp_init_panel_power_sequencer_registers(dev, intel_dp,
+ &power_seq);
+}
+
static void vlv_pre_enable_dp(struct intel_encoder *encoder)
{
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc);
enum dpio_channel port = vlv_dport_to_channel(dport);
int pipe = intel_crtc->pipe;
- struct edp_power_seq power_seq;
u32 val;
mutex_lock(&dev_priv->dpio_lock);
mutex_unlock(&dev_priv->dpio_lock);
if (is_edp(intel_dp)) {
- /* init power sequencer on this pipe and port */
- intel_dp_init_panel_power_sequencer(dev, intel_dp, &power_seq);
- intel_dp_init_panel_power_sequencer_registers(dev, intel_dp,
- &power_seq);
+ pps_lock(intel_dp);
+ vlv_init_panel_power_sequencer(intel_dp);
+ pps_unlock(intel_dp);
}
intel_enable_dp(encoder);
struct intel_digital_port *dport = dp_to_dig_port(intel_dp);
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct edp_power_seq power_seq;
struct intel_crtc *intel_crtc =
to_intel_crtc(encoder->base.crtc);
enum dpio_channel ch = vlv_dport_to_channel(dport);
mutex_unlock(&dev_priv->dpio_lock);
if (is_edp(intel_dp)) {
- /* init power sequencer on this pipe and port */
- intel_dp_init_panel_power_sequencer(dev, intel_dp, &power_seq);
- intel_dp_init_panel_power_sequencer_registers(dev, intel_dp,
- &power_seq);
+ pps_lock(intel_dp);
+ vlv_init_panel_power_sequencer(intel_dp);
+ pps_unlock(intel_dp);
}
intel_enable_dp(encoder);
enum pipe pipe = intel_crtc->pipe;
u32 val;
+ intel_dp_prepare(encoder);
+
mutex_lock(&dev_priv->dpio_lock);
/* program left/right clock distribution */
enum port port = dp_to_dig_port(intel_dp)->port;
if (IS_VALLEYVIEW(dev))
- return DP_TRAIN_VOLTAGE_SWING_1200;
+ return DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
else if (IS_GEN7(dev) && port == PORT_A)
- return DP_TRAIN_VOLTAGE_SWING_800;
+ return DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
else if (HAS_PCH_CPT(dev) && port != PORT_A)
- return DP_TRAIN_VOLTAGE_SWING_1200;
+ return DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
else
- return DP_TRAIN_VOLTAGE_SWING_800;
+ return DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
}
static uint8_t
if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
- return DP_TRAIN_PRE_EMPHASIS_9_5;
- case DP_TRAIN_VOLTAGE_SWING_600:
- return DP_TRAIN_PRE_EMPHASIS_6;
- case DP_TRAIN_VOLTAGE_SWING_800:
- return DP_TRAIN_PRE_EMPHASIS_3_5;
- case DP_TRAIN_VOLTAGE_SWING_1200:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
+ return DP_TRAIN_PRE_EMPH_LEVEL_3;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
+ return DP_TRAIN_PRE_EMPH_LEVEL_2;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
+ return DP_TRAIN_PRE_EMPH_LEVEL_1;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_3:
default:
- return DP_TRAIN_PRE_EMPHASIS_0;
+ return DP_TRAIN_PRE_EMPH_LEVEL_0;
}
} else if (IS_VALLEYVIEW(dev)) {
switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
- return DP_TRAIN_PRE_EMPHASIS_9_5;
- case DP_TRAIN_VOLTAGE_SWING_600:
- return DP_TRAIN_PRE_EMPHASIS_6;
- case DP_TRAIN_VOLTAGE_SWING_800:
- return DP_TRAIN_PRE_EMPHASIS_3_5;
- case DP_TRAIN_VOLTAGE_SWING_1200:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
+ return DP_TRAIN_PRE_EMPH_LEVEL_3;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
+ return DP_TRAIN_PRE_EMPH_LEVEL_2;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
+ return DP_TRAIN_PRE_EMPH_LEVEL_1;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_3:
default:
- return DP_TRAIN_PRE_EMPHASIS_0;
+ return DP_TRAIN_PRE_EMPH_LEVEL_0;
}
} else if (IS_GEN7(dev) && port == PORT_A) {
switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
- return DP_TRAIN_PRE_EMPHASIS_6;
- case DP_TRAIN_VOLTAGE_SWING_600:
- case DP_TRAIN_VOLTAGE_SWING_800:
- return DP_TRAIN_PRE_EMPHASIS_3_5;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
+ return DP_TRAIN_PRE_EMPH_LEVEL_2;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
+ return DP_TRAIN_PRE_EMPH_LEVEL_1;
default:
- return DP_TRAIN_PRE_EMPHASIS_0;
+ return DP_TRAIN_PRE_EMPH_LEVEL_0;
}
} else {
switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
- return DP_TRAIN_PRE_EMPHASIS_6;
- case DP_TRAIN_VOLTAGE_SWING_600:
- return DP_TRAIN_PRE_EMPHASIS_6;
- case DP_TRAIN_VOLTAGE_SWING_800:
- return DP_TRAIN_PRE_EMPHASIS_3_5;
- case DP_TRAIN_VOLTAGE_SWING_1200:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
+ return DP_TRAIN_PRE_EMPH_LEVEL_2;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
+ return DP_TRAIN_PRE_EMPH_LEVEL_2;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
+ return DP_TRAIN_PRE_EMPH_LEVEL_1;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_3:
default:
- return DP_TRAIN_PRE_EMPHASIS_0;
+ return DP_TRAIN_PRE_EMPH_LEVEL_0;
}
}
}
int pipe = intel_crtc->pipe;
switch (train_set & DP_TRAIN_PRE_EMPHASIS_MASK) {
- case DP_TRAIN_PRE_EMPHASIS_0:
+ case DP_TRAIN_PRE_EMPH_LEVEL_0:
preemph_reg_value = 0x0004000;
switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
demph_reg_value = 0x2B405555;
uniqtranscale_reg_value = 0x552AB83A;
break;
- case DP_TRAIN_VOLTAGE_SWING_600:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
demph_reg_value = 0x2B404040;
uniqtranscale_reg_value = 0x5548B83A;
break;
- case DP_TRAIN_VOLTAGE_SWING_800:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
demph_reg_value = 0x2B245555;
uniqtranscale_reg_value = 0x5560B83A;
break;
- case DP_TRAIN_VOLTAGE_SWING_1200:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_3:
demph_reg_value = 0x2B405555;
uniqtranscale_reg_value = 0x5598DA3A;
break;
return 0;
}
break;
- case DP_TRAIN_PRE_EMPHASIS_3_5:
+ case DP_TRAIN_PRE_EMPH_LEVEL_1:
preemph_reg_value = 0x0002000;
switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
demph_reg_value = 0x2B404040;
uniqtranscale_reg_value = 0x5552B83A;
break;
- case DP_TRAIN_VOLTAGE_SWING_600:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
demph_reg_value = 0x2B404848;
uniqtranscale_reg_value = 0x5580B83A;
break;
- case DP_TRAIN_VOLTAGE_SWING_800:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
demph_reg_value = 0x2B404040;
uniqtranscale_reg_value = 0x55ADDA3A;
break;
return 0;
}
break;
- case DP_TRAIN_PRE_EMPHASIS_6:
+ case DP_TRAIN_PRE_EMPH_LEVEL_2:
preemph_reg_value = 0x0000000;
switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
demph_reg_value = 0x2B305555;
uniqtranscale_reg_value = 0x5570B83A;
break;
- case DP_TRAIN_VOLTAGE_SWING_600:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
demph_reg_value = 0x2B2B4040;
uniqtranscale_reg_value = 0x55ADDA3A;
break;
return 0;
}
break;
- case DP_TRAIN_PRE_EMPHASIS_9_5:
+ case DP_TRAIN_PRE_EMPH_LEVEL_3:
preemph_reg_value = 0x0006000;
switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
demph_reg_value = 0x1B405555;
uniqtranscale_reg_value = 0x55ADDA3A;
break;
int i;
switch (train_set & DP_TRAIN_PRE_EMPHASIS_MASK) {
- case DP_TRAIN_PRE_EMPHASIS_0:
+ case DP_TRAIN_PRE_EMPH_LEVEL_0:
switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
deemph_reg_value = 128;
margin_reg_value = 52;
break;
- case DP_TRAIN_VOLTAGE_SWING_600:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
deemph_reg_value = 128;
margin_reg_value = 77;
break;
- case DP_TRAIN_VOLTAGE_SWING_800:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
deemph_reg_value = 128;
margin_reg_value = 102;
break;
- case DP_TRAIN_VOLTAGE_SWING_1200:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_3:
deemph_reg_value = 128;
margin_reg_value = 154;
/* FIXME extra to set for 1200 */
return 0;
}
break;
- case DP_TRAIN_PRE_EMPHASIS_3_5:
+ case DP_TRAIN_PRE_EMPH_LEVEL_1:
switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
deemph_reg_value = 85;
margin_reg_value = 78;
break;
- case DP_TRAIN_VOLTAGE_SWING_600:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
deemph_reg_value = 85;
margin_reg_value = 116;
break;
- case DP_TRAIN_VOLTAGE_SWING_800:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
deemph_reg_value = 85;
margin_reg_value = 154;
break;
return 0;
}
break;
- case DP_TRAIN_PRE_EMPHASIS_6:
+ case DP_TRAIN_PRE_EMPH_LEVEL_2:
switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
deemph_reg_value = 64;
margin_reg_value = 104;
break;
- case DP_TRAIN_VOLTAGE_SWING_600:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
deemph_reg_value = 64;
margin_reg_value = 154;
break;
return 0;
}
break;
- case DP_TRAIN_PRE_EMPHASIS_9_5:
+ case DP_TRAIN_PRE_EMPH_LEVEL_3:
switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
deemph_reg_value = 43;
margin_reg_value = 154;
break;
/* Program swing margin */
for (i = 0; i < 4; i++) {
val = vlv_dpio_read(dev_priv, pipe, CHV_TX_DW2(ch, i));
- val &= ~DPIO_SWING_MARGIN_MASK;
- val |= margin_reg_value << DPIO_SWING_MARGIN_SHIFT;
+ val &= ~DPIO_SWING_MARGIN000_MASK;
+ val |= margin_reg_value << DPIO_SWING_MARGIN000_SHIFT;
vlv_dpio_write(dev_priv, pipe, CHV_TX_DW2(ch, i), val);
}
}
if (((train_set & DP_TRAIN_PRE_EMPHASIS_MASK)
- == DP_TRAIN_PRE_EMPHASIS_0) &&
+ == DP_TRAIN_PRE_EMPH_LEVEL_0) &&
((train_set & DP_TRAIN_VOLTAGE_SWING_MASK)
- == DP_TRAIN_VOLTAGE_SWING_1200)) {
+ == DP_TRAIN_VOLTAGE_SWING_LEVEL_3)) {
/*
* The document said it needs to set bit 27 for ch0 and bit 26
uint32_t signal_levels = 0;
switch (train_set & DP_TRAIN_VOLTAGE_SWING_MASK) {
- case DP_TRAIN_VOLTAGE_SWING_400:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0:
default:
signal_levels |= DP_VOLTAGE_0_4;
break;
- case DP_TRAIN_VOLTAGE_SWING_600:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1:
signal_levels |= DP_VOLTAGE_0_6;
break;
- case DP_TRAIN_VOLTAGE_SWING_800:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
signal_levels |= DP_VOLTAGE_0_8;
break;
- case DP_TRAIN_VOLTAGE_SWING_1200:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_3:
signal_levels |= DP_VOLTAGE_1_2;
break;
}
switch (train_set & DP_TRAIN_PRE_EMPHASIS_MASK) {
- case DP_TRAIN_PRE_EMPHASIS_0:
+ case DP_TRAIN_PRE_EMPH_LEVEL_0:
default:
signal_levels |= DP_PRE_EMPHASIS_0;
break;
- case DP_TRAIN_PRE_EMPHASIS_3_5:
+ case DP_TRAIN_PRE_EMPH_LEVEL_1:
signal_levels |= DP_PRE_EMPHASIS_3_5;
break;
- case DP_TRAIN_PRE_EMPHASIS_6:
+ case DP_TRAIN_PRE_EMPH_LEVEL_2:
signal_levels |= DP_PRE_EMPHASIS_6;
break;
- case DP_TRAIN_PRE_EMPHASIS_9_5:
+ case DP_TRAIN_PRE_EMPH_LEVEL_3:
signal_levels |= DP_PRE_EMPHASIS_9_5;
break;
}
int signal_levels = train_set & (DP_TRAIN_VOLTAGE_SWING_MASK |
DP_TRAIN_PRE_EMPHASIS_MASK);
switch (signal_levels) {
- case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_0:
- case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_0:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | DP_TRAIN_PRE_EMPH_LEVEL_0:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1 | DP_TRAIN_PRE_EMPH_LEVEL_0:
return EDP_LINK_TRAIN_400_600MV_0DB_SNB_B;
- case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_3_5:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | DP_TRAIN_PRE_EMPH_LEVEL_1:
return EDP_LINK_TRAIN_400MV_3_5DB_SNB_B;
- case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_6:
- case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_6:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | DP_TRAIN_PRE_EMPH_LEVEL_2:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1 | DP_TRAIN_PRE_EMPH_LEVEL_2:
return EDP_LINK_TRAIN_400_600MV_6DB_SNB_B;
- case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_3_5:
- case DP_TRAIN_VOLTAGE_SWING_800 | DP_TRAIN_PRE_EMPHASIS_3_5:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1 | DP_TRAIN_PRE_EMPH_LEVEL_1:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2 | DP_TRAIN_PRE_EMPH_LEVEL_1:
return EDP_LINK_TRAIN_600_800MV_3_5DB_SNB_B;
- case DP_TRAIN_VOLTAGE_SWING_800 | DP_TRAIN_PRE_EMPHASIS_0:
- case DP_TRAIN_VOLTAGE_SWING_1200 | DP_TRAIN_PRE_EMPHASIS_0:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2 | DP_TRAIN_PRE_EMPH_LEVEL_0:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_3 | DP_TRAIN_PRE_EMPH_LEVEL_0:
return EDP_LINK_TRAIN_800_1200MV_0DB_SNB_B;
default:
DRM_DEBUG_KMS("Unsupported voltage swing/pre-emphasis level:"
int signal_levels = train_set & (DP_TRAIN_VOLTAGE_SWING_MASK |
DP_TRAIN_PRE_EMPHASIS_MASK);
switch (signal_levels) {
- case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_0:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | DP_TRAIN_PRE_EMPH_LEVEL_0:
return EDP_LINK_TRAIN_400MV_0DB_IVB;
- case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_3_5:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | DP_TRAIN_PRE_EMPH_LEVEL_1:
return EDP_LINK_TRAIN_400MV_3_5DB_IVB;
- case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_6:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | DP_TRAIN_PRE_EMPH_LEVEL_2:
return EDP_LINK_TRAIN_400MV_6DB_IVB;
- case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_0:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1 | DP_TRAIN_PRE_EMPH_LEVEL_0:
return EDP_LINK_TRAIN_600MV_0DB_IVB;
- case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_3_5:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1 | DP_TRAIN_PRE_EMPH_LEVEL_1:
return EDP_LINK_TRAIN_600MV_3_5DB_IVB;
- case DP_TRAIN_VOLTAGE_SWING_800 | DP_TRAIN_PRE_EMPHASIS_0:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2 | DP_TRAIN_PRE_EMPH_LEVEL_0:
return EDP_LINK_TRAIN_800MV_0DB_IVB;
- case DP_TRAIN_VOLTAGE_SWING_800 | DP_TRAIN_PRE_EMPHASIS_3_5:
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2 | DP_TRAIN_PRE_EMPH_LEVEL_1:
return EDP_LINK_TRAIN_800MV_3_5DB_IVB;
default:
int signal_levels = train_set & (DP_TRAIN_VOLTAGE_SWING_MASK |
DP_TRAIN_PRE_EMPHASIS_MASK);
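+	/* Map swing/pre-emphasis to a DDI buffer translation table index. */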
switch (signal_levels) {
- case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_0:
- return DDI_BUF_EMP_400MV_0DB_HSW;
- case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_3_5:
- return DDI_BUF_EMP_400MV_3_5DB_HSW;
- case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_6:
- return DDI_BUF_EMP_400MV_6DB_HSW;
- case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_9_5:
- return DDI_BUF_EMP_400MV_9_5DB_HSW;
-
- case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_0:
- return DDI_BUF_EMP_600MV_0DB_HSW;
- case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_3_5:
- return DDI_BUF_EMP_600MV_3_5DB_HSW;
- case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_6:
- return DDI_BUF_EMP_600MV_6DB_HSW;
-
- case DP_TRAIN_VOLTAGE_SWING_800 | DP_TRAIN_PRE_EMPHASIS_0:
- return DDI_BUF_EMP_800MV_0DB_HSW;
- case DP_TRAIN_VOLTAGE_SWING_800 | DP_TRAIN_PRE_EMPHASIS_3_5:
- return DDI_BUF_EMP_800MV_3_5DB_HSW;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | DP_TRAIN_PRE_EMPH_LEVEL_0:
+ return DDI_BUF_TRANS_SELECT(0);
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | DP_TRAIN_PRE_EMPH_LEVEL_1:
+ return DDI_BUF_TRANS_SELECT(1);
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | DP_TRAIN_PRE_EMPH_LEVEL_2:
+ return DDI_BUF_TRANS_SELECT(2);
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | DP_TRAIN_PRE_EMPH_LEVEL_3:
+ return DDI_BUF_TRANS_SELECT(3);
+
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1 | DP_TRAIN_PRE_EMPH_LEVEL_0:
+ return DDI_BUF_TRANS_SELECT(4);
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1 | DP_TRAIN_PRE_EMPH_LEVEL_1:
+ return DDI_BUF_TRANS_SELECT(5);
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_1 | DP_TRAIN_PRE_EMPH_LEVEL_2:
+ return DDI_BUF_TRANS_SELECT(6);
+
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2 | DP_TRAIN_PRE_EMPH_LEVEL_0:
+ return DDI_BUF_TRANS_SELECT(7);
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_2 | DP_TRAIN_PRE_EMPH_LEVEL_1:
+ return DDI_BUF_TRANS_SELECT(8);
default:
DRM_DEBUG_KMS("Unsupported voltage swing/pre-emphasis level:"
"0x%x\n", signal_levels);
- return DDI_BUF_EMP_400MV_0DB_HSW;
+ return DDI_BUF_TRANS_SELECT(0);
}
}
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
struct drm_device *dev = intel_dig_port->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- enum port port = intel_dig_port->port;
uint8_t buf[sizeof(intel_dp->train_set) + 1];
int ret, len;
- if (HAS_DDI(dev)) {
- uint32_t temp = I915_READ(DP_TP_CTL(port));
-
- if (dp_train_pat & DP_LINK_SCRAMBLING_DISABLE)
- temp |= DP_TP_CTL_SCRAMBLE_DISABLE;
- else
- temp &= ~DP_TP_CTL_SCRAMBLE_DISABLE;
-
- temp &= ~DP_TP_CTL_LINK_TRAIN_MASK;
- switch (dp_train_pat & DP_TRAINING_PATTERN_MASK) {
- case DP_TRAINING_PATTERN_DISABLE:
- temp |= DP_TP_CTL_LINK_TRAIN_NORMAL;
-
- break;
- case DP_TRAINING_PATTERN_1:
- temp |= DP_TP_CTL_LINK_TRAIN_PAT1;
- break;
- case DP_TRAINING_PATTERN_2:
- temp |= DP_TP_CTL_LINK_TRAIN_PAT2;
- break;
- case DP_TRAINING_PATTERN_3:
- temp |= DP_TP_CTL_LINK_TRAIN_PAT3;
- break;
- }
- I915_WRITE(DP_TP_CTL(port), temp);
-
- } else if (HAS_PCH_CPT(dev) && (IS_GEN7(dev) || port != PORT_A)) {
- *DP &= ~DP_LINK_TRAIN_MASK_CPT;
-
- switch (dp_train_pat & DP_TRAINING_PATTERN_MASK) {
- case DP_TRAINING_PATTERN_DISABLE:
- *DP |= DP_LINK_TRAIN_OFF_CPT;
- break;
- case DP_TRAINING_PATTERN_1:
- *DP |= DP_LINK_TRAIN_PAT_1_CPT;
- break;
- case DP_TRAINING_PATTERN_2:
- *DP |= DP_LINK_TRAIN_PAT_2_CPT;
- break;
- case DP_TRAINING_PATTERN_3:
- DRM_ERROR("DP training pattern 3 not supported\n");
- *DP |= DP_LINK_TRAIN_PAT_2_CPT;
- break;
- }
-
- } else {
- *DP &= ~DP_LINK_TRAIN_MASK;
-
- switch (dp_train_pat & DP_TRAINING_PATTERN_MASK) {
- case DP_TRAINING_PATTERN_DISABLE:
- *DP |= DP_LINK_TRAIN_OFF;
- break;
- case DP_TRAINING_PATTERN_1:
- *DP |= DP_LINK_TRAIN_PAT_1;
- break;
- case DP_TRAINING_PATTERN_2:
- *DP |= DP_LINK_TRAIN_PAT_2;
- break;
- case DP_TRAINING_PATTERN_3:
- DRM_ERROR("DP training pattern 3 not supported\n");
- *DP |= DP_LINK_TRAIN_PAT_2;
- break;
- }
- }
+ _intel_dp_set_link_train(intel_dp, DP, dp_train_pat);
I915_WRITE(intel_dp->output_reg, *DP);
POSTING_READ(intel_dp->output_reg);
DP &= ~DP_LINK_TRAIN_MASK_CPT;
I915_WRITE(intel_dp->output_reg, DP | DP_LINK_TRAIN_PAT_IDLE_CPT);
} else {
- DP &= ~DP_LINK_TRAIN_MASK;
+ if (IS_CHERRYVIEW(dev))
+ DP &= ~DP_LINK_TRAIN_MASK_CHV;
+ else
+ DP &= ~DP_LINK_TRAIN_MASK;
I915_WRITE(intel_dp->output_reg, DP | DP_LINK_TRAIN_PAT_IDLE);
}
POSTING_READ(intel_dp->output_reg);
struct drm_device *dev = dig_port->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- char dpcd_hex_dump[sizeof(intel_dp->dpcd) * 3];
-
if (intel_dp_dpcd_read_wake(&intel_dp->aux, 0x000, intel_dp->dpcd,
sizeof(intel_dp->dpcd)) < 0)
return false; /* aux transfer failed */
- hex_dump_to_buffer(intel_dp->dpcd, sizeof(intel_dp->dpcd),
- 32, 1, dpcd_hex_dump, sizeof(dpcd_hex_dump), false);
- DRM_DEBUG_KMS("DPCD: %s\n", dpcd_hex_dump);
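+	/* %*ph is the printk extension for hex-dumping a small buffer */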
+ DRM_DEBUG_KMS("DPCD: %*ph\n", (int) sizeof(intel_dp->dpcd), intel_dp->dpcd);
if (intel_dp->dpcd[DP_DPCD_REV] == 0)
return false; /* DPCD not present */
if (intel_dp->dpcd[DP_DPCD_REV] >= 0x12 &&
intel_dp->dpcd[DP_MAX_LANE_COUNT] & DP_TPS3_SUPPORTED) {
intel_dp->use_tps3 = true;
- DRM_DEBUG_KMS("Displayport TPS3 supported");
+ DRM_DEBUG_KMS("Displayport TPS3 supported\n");
} else
intel_dp->use_tps3 = false;
DRM_DEBUG_KMS("Branch OUI: %02hx%02hx%02hx\n",
buf[0], buf[1], buf[2]);
- edp_panel_vdd_off(intel_dp, false);
+ intel_edp_panel_vdd_off(intel_dp, false);
}
static bool
if (intel_dp->dpcd[DP_DPCD_REV] < 0x12)
return false;
- _edp_panel_vdd_on(intel_dp);
+ intel_edp_panel_vdd_on(intel_dp);
if (intel_dp_dpcd_read_wake(&intel_dp->aux, DP_MSTM_CAP, buf, 1)) {
if (buf[0] & DP_MST_CAP) {
DRM_DEBUG_KMS("Sink is MST capable\n");
intel_dp->is_mst = false;
}
}
- edp_panel_vdd_off(intel_dp, false);
+ intel_edp_panel_vdd_off(intel_dp, false);
drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr, intel_dp->is_mst);
return intel_dp->is_mst;
u8 buf[1];
if (drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_SINK_MISC, buf) < 0)
- return -EAGAIN;
+ return -EIO;
if (!(buf[0] & DP_TEST_CRC_SUPPORTED))
return -ENOTTY;
if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_TEST_SINK,
DP_TEST_SINK_START) < 0)
- return -EAGAIN;
+ return -EIO;
/* Wait 2 vblanks to be sure we will have the correct CRC value */
intel_wait_for_vblank(dev, intel_crtc->pipe);
intel_wait_for_vblank(dev, intel_crtc->pipe);
if (drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_CRC_R_CR, crc, 6) < 0)
- return -EAGAIN;
+ return -EIO;
drm_dp_dpcd_writeb(&intel_dp->aux, DP_TEST_SINK, 0);
return 0;
if (WARN_ON(!intel_encoder->base.crtc))
return;
+ if (!to_intel_crtc(intel_encoder->base.crtc)->active)
+ return;
+
/* Try to read receiver status if the link appears to be up */
if (!intel_dp_get_link_status(intel_dp, link_status)) {
return;
return connector_status_disconnected;
}
+static enum drm_connector_status
+edp_detect(struct intel_dp *intel_dp)
+{
+ struct drm_device *dev = intel_dp_to_dev(intel_dp);
+ enum drm_connector_status status;
+
+ status = intel_panel_detect(dev);
+ if (status == connector_status_unknown)
+ status = connector_status_connected;
+
+ return status;
+}
+
static enum drm_connector_status
ironlake_dp_detect(struct intel_dp *intel_dp)
{
struct drm_device *dev = intel_dp_to_dev(intel_dp);
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
- enum drm_connector_status status;
-
- /* Can't disconnect eDP, but you can close the lid... */
- if (is_edp(intel_dp)) {
- status = intel_panel_detect(dev);
- if (status == connector_status_unknown)
- status = connector_status_connected;
- return status;
- }
if (!ibx_digital_port_connected(dev_priv, intel_dig_port))
return connector_status_disconnected;
return intel_dp_detect_dpcd(intel_dp);
}
-static enum drm_connector_status
-g4x_dp_detect(struct intel_dp *intel_dp)
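+/*
+ * Returns 1 if the port's live status bit is set, 0 if it is clear,
+ * and -EINVAL for ports without a known live status bit.
+ */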
+static int g4x_digital_port_connected(struct drm_device *dev,
+ struct intel_digital_port *intel_dig_port)
{
- struct drm_device *dev = intel_dp_to_dev(intel_dp);
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
uint32_t bit;
- /* Can't disconnect eDP, but you can close the lid... */
- if (is_edp(intel_dp)) {
- enum drm_connector_status status;
-
- status = intel_panel_detect(dev);
- if (status == connector_status_unknown)
- status = connector_status_connected;
- return status;
- }
-
if (IS_VALLEYVIEW(dev)) {
switch (intel_dig_port->port) {
case PORT_B:
bit = PORTD_HOTPLUG_LIVE_STATUS_VLV;
break;
default:
- return connector_status_unknown;
+ return -EINVAL;
}
} else {
switch (intel_dig_port->port) {
bit = PORTD_HOTPLUG_LIVE_STATUS_G4X;
break;
default:
- return connector_status_unknown;
+ return -EINVAL;
}
}
if ((I915_READ(PORT_HOTPLUG_STAT) & bit) == 0)
+ return 0;
+ return 1;
+}
+
+static enum drm_connector_status
+g4x_dp_detect(struct intel_dp *intel_dp)
+{
+ struct drm_device *dev = intel_dp_to_dev(intel_dp);
+ struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+ int ret;
+
+ /* Can't disconnect eDP, but you can close the lid... */
+ if (is_edp(intel_dp)) {
+ enum drm_connector_status status;
+
+ status = intel_panel_detect(dev);
+ if (status == connector_status_unknown)
+ status = connector_status_connected;
+ return status;
+ }
+
+ ret = g4x_digital_port_connected(dev, intel_dig_port);
+ if (ret == -EINVAL)
+ return connector_status_unknown;
+ else if (ret == 0)
return connector_status_disconnected;
return intel_dp_detect_dpcd(intel_dp);
}
static struct edid *
-intel_dp_get_edid(struct drm_connector *connector, struct i2c_adapter *adapter)
+intel_dp_get_edid(struct intel_dp *intel_dp)
{
- struct intel_connector *intel_connector = to_intel_connector(connector);
+ struct intel_connector *intel_connector = intel_dp->attached_connector;
/* use cached edid if we have one */
if (intel_connector->edid) {
return NULL;
return drm_edid_duplicate(intel_connector->edid);
- }
+ } else
+ return drm_get_edid(&intel_connector->base,
+ &intel_dp->aux.ddc);
+}
- return drm_get_edid(connector, adapter);
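+/*
+ * Cache the EDID for the duration of the detect cycle and derive
+ * has_audio from it, honouring any forced audio property.
+ */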
+static void
+intel_dp_set_edid(struct intel_dp *intel_dp)
+{
+ struct intel_connector *intel_connector = intel_dp->attached_connector;
+ struct edid *edid;
+
+ edid = intel_dp_get_edid(intel_dp);
+ intel_connector->detect_edid = edid;
+
+ if (intel_dp->force_audio != HDMI_AUDIO_AUTO)
+ intel_dp->has_audio = intel_dp->force_audio == HDMI_AUDIO_ON;
+ else
+ intel_dp->has_audio = drm_detect_monitor_audio(edid);
}
-static int
-intel_dp_get_edid_modes(struct drm_connector *connector, struct i2c_adapter *adapter)
+static void
+intel_dp_unset_edid(struct intel_dp *intel_dp)
{
- struct intel_connector *intel_connector = to_intel_connector(connector);
+ struct intel_connector *intel_connector = intel_dp->attached_connector;
- /* use cached edid if we have one */
- if (intel_connector->edid) {
- /* invalid edid */
- if (IS_ERR(intel_connector->edid))
- return 0;
+ kfree(intel_connector->detect_edid);
+ intel_connector->detect_edid = NULL;
- return intel_connector_update_modes(connector,
- intel_connector->edid);
- }
+ intel_dp->has_audio = false;
+}
+
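+/* Helpers pairing the port power domain get/put around detection. */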
+static enum intel_display_power_domain
+intel_dp_power_get(struct intel_dp *dp)
+{
+ struct intel_encoder *encoder = &dp_to_dig_port(dp)->base;
+ enum intel_display_power_domain power_domain;
+
+ power_domain = intel_display_port_power_domain(encoder);
+ intel_display_power_get(to_i915(encoder->base.dev), power_domain);
- return intel_ddc_get_modes(connector, adapter);
+ return power_domain;
+}
+
+static void
+intel_dp_power_put(struct intel_dp *dp,
+ enum intel_display_power_domain power_domain)
+{
+ struct intel_encoder *encoder = &dp_to_dig_port(dp)->base;
+ intel_display_power_put(to_i915(encoder->base.dev), power_domain);
}
static enum drm_connector_status
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
struct intel_encoder *intel_encoder = &intel_dig_port->base;
struct drm_device *dev = connector->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
enum drm_connector_status status;
enum intel_display_power_domain power_domain;
- struct edid *edid = NULL;
bool ret;
- power_domain = intel_display_port_power_domain(intel_encoder);
- intel_display_power_get(dev_priv, power_domain);
-
DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
connector->base.id, connector->name);
+ intel_dp_unset_edid(intel_dp);
if (intel_dp->is_mst) {
/* MST devices are disconnected from a monitor POV */
if (intel_encoder->type != INTEL_OUTPUT_EDP)
intel_encoder->type = INTEL_OUTPUT_DISPLAYPORT;
- status = connector_status_disconnected;
- goto out;
+ return connector_status_disconnected;
}
- intel_dp->has_audio = false;
+ power_domain = intel_dp_power_get(intel_dp);
- if (HAS_PCH_SPLIT(dev))
+ /* Can't disconnect eDP, but you can close the lid... */
+ if (is_edp(intel_dp))
+ status = edp_detect(intel_dp);
+ else if (HAS_PCH_SPLIT(dev))
status = ironlake_dp_detect(intel_dp);
else
status = g4x_dp_detect(intel_dp);
-
if (status != connector_status_connected)
goto out;
goto out;
}
- if (intel_dp->force_audio != HDMI_AUDIO_AUTO) {
- intel_dp->has_audio = (intel_dp->force_audio == HDMI_AUDIO_ON);
- } else {
- edid = intel_dp_get_edid(connector, &intel_dp->aux.ddc);
- if (edid) {
- intel_dp->has_audio = drm_detect_monitor_audio(edid);
- kfree(edid);
- }
- }
+ intel_dp_set_edid(intel_dp);
if (intel_encoder->type != INTEL_OUTPUT_EDP)
intel_encoder->type = INTEL_OUTPUT_DISPLAYPORT;
status = connector_status_connected;
out:
- intel_display_power_put(dev_priv, power_domain);
+ intel_dp_power_put(intel_dp, power_domain);
return status;
}
-static int intel_dp_get_modes(struct drm_connector *connector)
+static void
+intel_dp_force(struct drm_connector *connector)
{
struct intel_dp *intel_dp = intel_attached_dp(connector);
- struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
- struct intel_encoder *intel_encoder = &intel_dig_port->base;
- struct intel_connector *intel_connector = to_intel_connector(connector);
- struct drm_device *dev = connector->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_encoder *intel_encoder = &dp_to_dig_port(intel_dp)->base;
enum intel_display_power_domain power_domain;
- int ret;
- /* We should parse the EDID data and find out if it has an audio sink
- */
+ DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
+ connector->base.id, connector->name);
+ intel_dp_unset_edid(intel_dp);
- power_domain = intel_display_port_power_domain(intel_encoder);
- intel_display_power_get(dev_priv, power_domain);
+ if (connector->status != connector_status_connected)
+ return;
- ret = intel_dp_get_edid_modes(connector, &intel_dp->aux.ddc);
- intel_display_power_put(dev_priv, power_domain);
- if (ret)
- return ret;
+ power_domain = intel_dp_power_get(intel_dp);
+
+ intel_dp_set_edid(intel_dp);
+
+ intel_dp_power_put(intel_dp, power_domain);
+
+ if (intel_encoder->type != INTEL_OUTPUT_EDP)
+ intel_encoder->type = INTEL_OUTPUT_DISPLAYPORT;
+}
+
+static int intel_dp_get_modes(struct drm_connector *connector)
+{
+ struct intel_connector *intel_connector = to_intel_connector(connector);
+ struct edid *edid;
+
+ edid = intel_connector->detect_edid;
+ if (edid) {
+ int ret = intel_connector_update_modes(connector, edid);
+ if (ret)
+ return ret;
+ }
/* if eDP has no EDID, fall back to fixed mode */
- if (is_edp(intel_dp) && intel_connector->panel.fixed_mode) {
+ if (is_edp(intel_attached_dp(connector)) &&
+ intel_connector->panel.fixed_mode) {
struct drm_display_mode *mode;
- mode = drm_mode_duplicate(dev,
+
+ mode = drm_mode_duplicate(connector->dev,
intel_connector->panel.fixed_mode);
if (mode) {
drm_mode_probed_add(connector, mode);
return 1;
}
}
+
return 0;
}
static bool
intel_dp_detect_audio(struct drm_connector *connector)
{
- struct intel_dp *intel_dp = intel_attached_dp(connector);
- struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
- struct intel_encoder *intel_encoder = &intel_dig_port->base;
- struct drm_device *dev = connector->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- enum intel_display_power_domain power_domain;
- struct edid *edid;
bool has_audio = false;
+ struct edid *edid;
- power_domain = intel_display_port_power_domain(intel_encoder);
- intel_display_power_get(dev_priv, power_domain);
-
- edid = intel_dp_get_edid(connector, &intel_dp->aux.ddc);
- if (edid) {
+ edid = to_intel_connector(connector)->detect_edid;
+ if (edid)
has_audio = drm_detect_monitor_audio(edid);
- kfree(edid);
- }
-
- intel_display_power_put(dev_priv, power_domain);
return has_audio;
}
{
struct intel_connector *intel_connector = to_intel_connector(connector);
+ kfree(intel_connector->detect_edid);
+
if (!IS_ERR_OR_NULL(intel_connector->edid))
kfree(intel_connector->edid);
{
struct intel_digital_port *intel_dig_port = enc_to_dig_port(encoder);
struct intel_dp *intel_dp = &intel_dig_port->dp;
- struct drm_device *dev = intel_dp_to_dev(intel_dp);
drm_dp_aux_unregister(&intel_dp->aux);
intel_dp_mst_encoder_cleanup(intel_dig_port);
drm_encoder_cleanup(encoder);
if (is_edp(intel_dp)) {
cancel_delayed_work_sync(&intel_dp->panel_vdd_work);
- drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
+ /*
+	 * vdd might still be enabled due to the delayed vdd off.
+ * Make sure vdd is actually turned off here.
+ */
+ pps_lock(intel_dp);
edp_panel_vdd_off_sync(intel_dp);
- drm_modeset_unlock(&dev->mode_config.connection_mutex);
+ pps_unlock(intel_dp);
+
if (intel_dp->edp_notifier.notifier_call) {
unregister_reboot_notifier(&intel_dp->edp_notifier);
intel_dp->edp_notifier.notifier_call = NULL;
kfree(intel_dig_port);
}
+static void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder)
+{
+ struct intel_dp *intel_dp = enc_to_intel_dp(&intel_encoder->base);
+
+ if (!is_edp(intel_dp))
+ return;
+
+ /*
+	 * vdd might still be enabled due to the delayed vdd off.
+ * Make sure vdd is actually turned off here.
+ */
+ pps_lock(intel_dp);
+ edp_panel_vdd_off_sync(intel_dp);
+ pps_unlock(intel_dp);
+}
+
static void intel_dp_encoder_reset(struct drm_encoder *encoder)
{
intel_edp_panel_vdd_sanitize(to_intel_encoder(encoder));
static const struct drm_connector_funcs intel_dp_connector_funcs = {
.dpms = intel_connector_dpms,
.detect = intel_dp_detect,
+ .force = intel_dp_force,
.fill_modes = drm_helper_probe_single_connector_modes,
.set_property = intel_dp_set_property,
.destroy = intel_dp_connector_destroy,
intel_dp_hpd_pulse(struct intel_digital_port *intel_dig_port, bool long_hpd)
{
struct intel_dp *intel_dp = &intel_dig_port->dp;
+ struct intel_encoder *intel_encoder = &intel_dig_port->base;
struct drm_device *dev = intel_dig_port->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- int ret;
+ enum intel_display_power_domain power_domain;
+ bool ret = true;
+
if (intel_dig_port->base.type != INTEL_OUTPUT_EDP)
intel_dig_port->base.type = INTEL_OUTPUT_DISPLAYPORT;
- DRM_DEBUG_KMS("got hpd irq on port %d - %s\n", intel_dig_port->port,
+ DRM_DEBUG_KMS("got hpd irq on port %c - %s\n",
+ port_name(intel_dig_port->port),
long_hpd ? "long" : "short");
+ power_domain = intel_display_port_power_domain(intel_encoder);
+ intel_display_power_get(dev_priv, power_domain);
+
if (long_hpd) {
- if (!ibx_digital_port_connected(dev_priv, intel_dig_port))
- goto mst_fail;
+
+ if (HAS_PCH_SPLIT(dev)) {
+ if (!ibx_digital_port_connected(dev_priv, intel_dig_port))
+ goto mst_fail;
+ } else {
+ if (g4x_digital_port_connected(dev, intel_dig_port) != 1)
+ goto mst_fail;
+ }
if (!intel_dp_get_dpcd(intel_dp)) {
goto mst_fail;
} else {
if (intel_dp->is_mst) {
- ret = intel_dp_check_mst_status(intel_dp);
- if (ret == -EINVAL)
+ if (intel_dp_check_mst_status(intel_dp) == -EINVAL)
goto mst_fail;
}
drm_modeset_unlock(&dev->mode_config.connection_mutex);
}
}
- return false;
+ ret = false;
+ goto put_power;
mst_fail:
/* if we were in MST mode, and device is not there get out of MST mode */
if (intel_dp->is_mst) {
intel_dp->is_mst = false;
drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr, intel_dp->is_mst);
}
- return true;
+put_power:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
/* Return which DP Port should be selected for Transcoder DP control */
u32 pp_on, pp_off, pp_div, pp;
int pp_ctrl_reg, pp_on_reg, pp_off_reg, pp_div_reg;
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
if (HAS_PCH_SPLIT(dev)) {
pp_ctrl_reg = PCH_PP_CONTROL;
pp_on_reg = PCH_PP_ON_DELAYS;
u32 pp_on, pp_off, pp_div, port_sel = 0;
int div = HAS_PCH_SPLIT(dev) ? intel_pch_rawclk(dev) : intel_hrawclk(dev);
int pp_on_reg, pp_off_reg, pp_div_reg;
+ enum port port = dp_to_dig_port(intel_dp)->port;
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
if (HAS_PCH_SPLIT(dev)) {
pp_on_reg = PCH_PP_ON_DELAYS;
/* Haswell doesn't have any port selection bits for the panel
* power sequencer any more. */
if (IS_VALLEYVIEW(dev)) {
- if (dp_to_dig_port(intel_dp)->port == PORT_B)
- port_sel = PANEL_PORT_SELECT_DPB_VLV;
- else
- port_sel = PANEL_PORT_SELECT_DPC_VLV;
+ port_sel = PANEL_PORT_SELECT_VLV(port);
} else if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev)) {
- if (dp_to_dig_port(intel_dp)->port == PORT_A)
+ if (port == PORT_A)
port_sel = PANEL_PORT_SELECT_DPA;
else
port_sel = PANEL_PORT_SELECT_DPD;
val = I915_READ(reg);
if (index > DRRS_HIGH_RR) {
val |= PIPECONF_EDP_RR_MODE_SWITCH;
- intel_dp_set_m2_n2(intel_crtc, &config->dp_m2_n2);
+ intel_dp_set_m_n(intel_crtc);
} else {
val &= ~PIPECONF_EDP_RR_MODE_SWITCH;
}
}
if (dev_priv->vbt.drrs_type != SEAMLESS_DRRS_SUPPORT) {
- DRM_INFO("VBT doesn't support DRRS\n");
+ DRM_DEBUG_KMS("VBT doesn't support DRRS\n");
return NULL;
}
(dev, fixed_mode, connector);
if (!downclock_mode) {
- DRM_INFO("DRRS not supported\n");
+ DRM_DEBUG_KMS("DRRS not supported\n");
return NULL;
}
intel_dp->drrs_state.type = dev_priv->vbt.drrs_type;
intel_dp->drrs_state.refresh_rate_type = DRRS_HIGH_RR;
- DRM_INFO("seamless DRRS supported for eDP panel.\n");
+ DRM_DEBUG_KMS("seamless DRRS supported for eDP panel.\n");
return downclock_mode;
}
return;
intel_dp = enc_to_intel_dp(&intel_encoder->base);
+
+ pps_lock(intel_dp);
+
if (!edp_have_panel_vdd(intel_dp))
- return;
+ goto out;
/*
* The VDD bit needs a power domain reference, so if the bit is
* already enabled when we boot or resume, grab this reference and
intel_display_power_get(dev_priv, power_domain);
edp_panel_vdd_schedule_off(intel_dp);
+ out:
+ pps_unlock(intel_dp);
}
static bool intel_edp_init_connector(struct intel_dp *intel_dp,
/* Cache DPCD and EDID for edp. */
intel_edp_panel_vdd_on(intel_dp);
has_dpcd = intel_dp_get_dpcd(intel_dp);
- edp_panel_vdd_off(intel_dp, false);
+ intel_edp_panel_vdd_off(intel_dp, false);
if (has_dpcd) {
if (intel_dp->dpcd[DP_DPCD_REV] >= 0x11)
}
/* We now know it's not a ghost, init power sequence regs. */
+ pps_lock(intel_dp);
intel_dp_init_panel_power_sequencer_registers(dev, intel_dp, power_seq);
+ pps_unlock(intel_dp);
mutex_lock(&dev->mode_config.mutex);
edid = drm_get_edid(connector, &intel_dp->aux.ddc);
}
intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode);
+ intel_connector->panel.backlight_power = intel_edp_backlight_power;
intel_panel_setup_backlight(connector);
return true;
struct edp_power_seq power_seq = { 0 };
int type;
+ intel_dp->pps_pipe = INVALID_PIPE;
+
/* intel_dp vfuncs */
if (IS_VALLEYVIEW(dev))
intel_dp->get_aux_clock_divider = vlv_get_aux_clock_divider;
}
if (is_edp(intel_dp)) {
- intel_dp_init_panel_power_timestamps(intel_dp);
- intel_dp_init_panel_power_sequencer(dev, intel_dp, &power_seq);
+ pps_lock(intel_dp);
+ if (IS_VALLEYVIEW(dev)) {
+ vlv_initial_power_sequencer_setup(intel_dp);
+ } else {
+ intel_dp_init_panel_power_timestamps(intel_dp);
+ intel_dp_init_panel_power_sequencer(dev, intel_dp,
+ &power_seq);
+ }
+ pps_unlock(intel_dp);
}
intel_dp_aux_init(intel_dp, intel_connector);
/* init MST on ports that can support it */
if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
if (port == PORT_B || port == PORT_C || port == PORT_D) {
- intel_dp_mst_encoder_init(intel_dig_port, intel_connector->base.base.id);
+ intel_dp_mst_encoder_init(intel_dig_port,
+ intel_connector->base.base.id);
}
}
drm_dp_aux_unregister(&intel_dp->aux);
if (is_edp(intel_dp)) {
cancel_delayed_work_sync(&intel_dp->panel_vdd_work);
- drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
+ /*
+	 * vdd might still be enabled due to the delayed vdd off.
+ * Make sure vdd is actually turned off here.
+ */
+ pps_lock(intel_dp);
edp_panel_vdd_off_sync(intel_dp);
- drm_modeset_unlock(&dev->mode_config.connection_mutex);
+ pps_unlock(intel_dp);
}
drm_connector_unregister(connector);
drm_connector_cleanup(connector);
intel_encoder->disable = intel_disable_dp;
intel_encoder->get_hw_state = intel_dp_get_hw_state;
intel_encoder->get_config = intel_dp_get_config;
+ intel_encoder->suspend = intel_dp_encoder_suspend;
if (IS_CHERRYVIEW(dev)) {
intel_encoder->pre_pll_enable = chv_dp_pre_pll_enable;
intel_encoder->pre_enable = chv_pre_enable_dp;
} else {
intel_encoder->pre_enable = g4x_pre_enable_dp;
intel_encoder->enable = g4x_enable_dp;
- intel_encoder->post_disable = g4x_post_disable_dp;
+ if (INTEL_INFO(dev)->gen >= 5)
+ intel_encoder->post_disable = ilk_post_disable_dp;
}
intel_dig_port->port = port;
#ifndef __INTEL_DRV_H__
#define __INTEL_DRV_H__
+#include <linux/async.h>
#include <linux/i2c.h>
#include <linux/hdmi.h>
#include <drm/i915_drm.h>
* be set correctly before calling this function. */
void (*get_config)(struct intel_encoder *,
struct intel_crtc_config *pipe_config);
+ /*
+ * Called during system suspend after all pending requests for the
+ * encoder are flushed (for example for DP AUX transactions) and
+ * device interrupts are disabled.
+ */
+ void (*suspend)(struct intel_encoder *);
int crtc_mask;
enum hpd_pin hpd_pin;
};
bool active_low_pwm;
struct backlight_device *device;
} backlight;
+
+ void (*backlight_power)(struct intel_connector *, bool enable);
};
struct intel_connector {
/* Cached EDID for eDP and LVDS. May hold ERR_PTR for invalid EDID. */
struct edid *edid;
+ struct edid *detect_edid;
/* since POLL and HPD connectors may use the same HPD line keep the native
state of connector->polled in case hotplug storm detection changes it */
/* m2_n2 for eDP downclock */
struct intel_link_m_n dp_m2_n2;
+ bool has_drrs;
/*
 * Frequency the dpll for the port should run at. Differs from the
uint32_t cursor_addr;
int16_t cursor_width, cursor_height;
uint32_t cursor_cntl;
+ uint32_t cursor_size;
uint32_t cursor_base;
struct intel_plane_config plane_config;
struct intel_pipe_wm active;
} wm;
- wait_queue_head_t vbl_wait;
-
int scanline_offset;
struct intel_mmio_flip mmio_flip;
};
unsigned int crtc_w, crtc_h;
uint32_t src_x, src_y;
uint32_t src_w, src_h;
+ unsigned int rotation;
/* Since we need to change the watermarks before/after
* enabling/disabling the planes, we need to store the parameters here
struct notifier_block edp_notifier;
+ /*
+ * Pipe whose power sequencer is currently locked into
+ * this port. Only relevant on VLV/CHV.
+ */
+ enum pipe pps_pipe;
+
bool use_tps3;
bool can_mst; /* this port supports mst */
bool is_mst;
#define INTEL_FLIP_COMPLETE 2
u32 flip_count;
u32 gtt_offset;
+ struct intel_engine_cs *flip_queued_ring;
+ u32 flip_queued_seqno;
+ int flip_queued_vblank;
+ int flip_ready_vblank;
bool enable_stall_check;
};
enum transcoder intel_pipe_to_cpu_transcoder(struct drm_i915_private *dev_priv,
enum pipe pipe);
void intel_wait_for_vblank(struct drm_device *dev, int pipe);
-void intel_wait_for_pipe_off(struct drm_device *dev, int pipe);
int ironlake_get_lanes_required(int target_clock, int link_bw, int bpp);
void vlv_wait_port_ready(struct drm_i915_private *dev_priv,
struct intel_digital_port *dport);
struct intel_load_detect_pipe *old,
struct drm_modeset_acquire_ctx *ctx);
void intel_release_load_detect_pipe(struct drm_connector *connector,
- struct intel_load_detect_pipe *old,
- struct drm_modeset_acquire_ctx *ctx);
+ struct intel_load_detect_pipe *old);
int intel_pin_and_fence_fb_obj(struct drm_device *dev,
struct drm_i915_gem_object *obj,
struct intel_engine_cs *pipelined);
void intel_prepare_page_flip(struct drm_device *dev, int plane);
void intel_finish_page_flip(struct drm_device *dev, int pipe);
void intel_finish_page_flip_plane(struct drm_device *dev, int plane);
+void intel_check_page_flip(struct drm_device *dev, int pipe);
/* shared dpll functions */
struct intel_shared_dpll *intel_crtc_to_shared_dpll(struct intel_crtc *crtc);
void hsw_disable_pc8(struct drm_i915_private *dev_priv);
void intel_dp_get_m_n(struct intel_crtc *crtc,
struct intel_crtc_config *pipe_config);
+void intel_dp_set_m_n(struct intel_crtc *crtc);
int intel_dotclock_calculate(int link_freq, const struct intel_link_m_n *m_n);
void
ironlake_check_encoder_dotclock(const struct intel_crtc_config *pipe_config,
struct intel_crtc_config *pipe_config);
int intel_format_to_fourcc(int format);
void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc);
-
+void intel_modeset_preclose(struct drm_device *dev, struct drm_file *file);
/* intel_dp.c */
void intel_dp_init(struct drm_device *dev, int output_reg, enum port port);
void intel_dp_mst_resume(struct drm_device *dev);
int intel_dp_max_link_bw(struct intel_dp *intel_dp);
void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
+void vlv_power_sequencer_reset(struct drm_i915_private *dev_priv);
/* intel_dp_mst.c */
int intel_dp_mst_encoder_init(struct intel_digital_port *intel_dig_port, int conn_id);
void intel_dp_mst_encoder_cleanup(struct intel_digital_port *intel_dig_port);
/* legacy fbdev emulation in intel_fbdev.c */
#ifdef CONFIG_DRM_I915_FBDEV
extern int intel_fbdev_init(struct drm_device *dev);
-extern void intel_fbdev_initial_config(struct drm_device *dev);
+extern void intel_fbdev_initial_config(void *data, async_cookie_t cookie);
extern void intel_fbdev_fini(struct drm_device *dev);
-extern void intel_fbdev_set_suspend(struct drm_device *dev, int state);
+extern void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous);
extern void intel_fbdev_output_poll_changed(struct drm_device *dev);
extern void intel_fbdev_restore_mode(struct drm_device *dev);
#else
return 0;
}
-static inline void intel_fbdev_initial_config(struct drm_device *dev)
+static inline void intel_fbdev_initial_config(void *data, async_cookie_t cookie)
{
}
{
}
-static inline void intel_fbdev_set_suspend(struct drm_device *dev, int state)
+static inline void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous)
{
}
int intel_plane_init(struct drm_device *dev, enum pipe pipe, int plane);
void intel_flush_primary_plane(struct drm_i915_private *dev_priv,
enum plane plane);
-void intel_plane_restore(struct drm_plane *plane);
+int intel_plane_set_property(struct drm_plane *plane,
+ struct drm_property *prop,
+ uint64_t val);
+int intel_plane_restore(struct drm_plane *plane);
void intel_plane_disable(struct drm_plane *plane);
int intel_sprite_set_colorkey(struct drm_device *dev, void *data,
struct drm_file *file_priv);
/* update the hw state for DPLL */
intel_crtc->config.dpll_hw_state.dpll = DPLL_INTEGRATED_CLOCK_VLV |
- DPLL_REFA_CLK_ENABLE_VLV;
+ DPLL_REFA_CLK_ENABLE_VLV;
tmp = I915_READ(DSPCLK_GATE_D);
tmp |= DPOUNIT_CLOCK_GATE_DISABLE;
temp = I915_READ(MIPI_CTRL(pipe));
temp &= ~ESCAPE_CLOCK_DIVIDER_MASK;
I915_WRITE(MIPI_CTRL(pipe), temp |
- intel_dsi->escape_clk_div <<
- ESCAPE_CLOCK_DIVIDER_SHIFT);
+ intel_dsi->escape_clk_div <<
+ ESCAPE_CLOCK_DIVIDER_SHIFT);
I915_WRITE(MIPI_EOT_DISABLE(pipe), CLOCKSTOP);
usleep_range(2000, 2500);
if (wait_for(((I915_READ(MIPI_PORT_CTRL(pipe)) & AFE_LATCHOUT)
- == 0x00000), 30))
+ == 0x00000), 30))
DRM_ERROR("DSI LP not going Low\n");
val = I915_READ(MIPI_PORT_CTRL(pipe));
}
/* return pixels in terms of txbyteclkhs */
-static u16 txbyteclkhs(u16 pixels, int bpp, int lane_count)
+static u16 txbyteclkhs(u16 pixels, int bpp, int lane_count,
+ u16 burst_mode_ratio)
{
- return DIV_ROUND_UP(DIV_ROUND_UP(pixels * bpp, 8), lane_count);
+ return DIV_ROUND_UP(DIV_ROUND_UP(pixels * bpp * burst_mode_ratio,
+ 8 * 100), lane_count);
}
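As a quick sanity check of the conversion above, here is a hypothetical standalone sketch; the mode values are made up for illustration and are not taken from the patch:

#include <stdio.h>

/* same rounding helper the kernel uses */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	/* assumed: 1920 active pixels, RGB888 (24 bpp), 4 lanes, non-burst */
	unsigned int pixels = 1920, bpp = 24, lane_count = 4;
	unsigned int burst_mode_ratio = 100;	/* 100 means no burst scaling */

	/* pixels -> wire bytes (bpp / 8), scaled by ratio / 100, per lane */
	unsigned int byteclks =
		DIV_ROUND_UP(DIV_ROUND_UP(pixels * bpp * burst_mode_ratio,
					  8 * 100), lane_count);

	printf("txbyteclkhs = %u\n", byteclks);	/* 1920 * 3 / 4 = 1440 */
	return 0;
}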
static void set_dsi_timings(struct drm_encoder *encoder,
vbp = mode->vtotal - mode->vsync_end;
/* horizontal values are in terms of high speed byte clock */
- hactive = txbyteclkhs(hactive, bpp, lane_count);
- hfp = txbyteclkhs(hfp, bpp, lane_count);
- hsync = txbyteclkhs(hsync, bpp, lane_count);
- hbp = txbyteclkhs(hbp, bpp, lane_count);
+ hactive = txbyteclkhs(hactive, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+ hfp = txbyteclkhs(hfp, bpp, lane_count, intel_dsi->burst_mode_ratio);
+ hsync = txbyteclkhs(hsync, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+ hbp = txbyteclkhs(hbp, bpp, lane_count, intel_dsi->burst_mode_ratio);
I915_WRITE(MIPI_HACTIVE_AREA_COUNT(pipe), hactive);
I915_WRITE(MIPI_HFP_COUNT(pipe), hfp);
intel_dsi->video_mode_format == VIDEO_MODE_BURST) {
I915_WRITE(MIPI_HS_TX_TIMEOUT(pipe),
txbyteclkhs(adjusted_mode->htotal, bpp,
- intel_dsi->lane_count) + 1);
+ intel_dsi->lane_count,
+ intel_dsi->burst_mode_ratio) + 1);
} else {
I915_WRITE(MIPI_HS_TX_TIMEOUT(pipe),
txbyteclkhs(adjusted_mode->vtotal *
adjusted_mode->htotal,
- bpp, intel_dsi->lane_count) + 1);
+ bpp, intel_dsi->lane_count,
+ intel_dsi->burst_mode_ratio) + 1);
}
I915_WRITE(MIPI_LP_RX_TIMEOUT(pipe), intel_dsi->lp_rx_timeout);
I915_WRITE(MIPI_TURN_AROUND_TIMEOUT(pipe), intel_dsi->turn_arnd_val);
* XXX: write MIPI_STOP_STATE_STALL?
*/
I915_WRITE(MIPI_HIGH_LOW_SWITCH_COUNT(pipe),
- intel_dsi->hs_to_lp_count);
+ intel_dsi->hs_to_lp_count);
/* XXX: low power clock equivalence in terms of byte clock. the number
* of byte clocks occupied in one low power clock. based on txbyteclkhs
* 64 like 1366 x 768. Enable RANDOM resolution support for such
* panels by default */
I915_WRITE(MIPI_VIDEO_MODE_FORMAT(pipe),
- intel_dsi->video_frmt_cfg_bits |
- intel_dsi->video_mode_format |
- IP_TG_CONFIG |
- RANDOM_DPI_DISPLAY_RESOLUTION);
+ intel_dsi->video_frmt_cfg_bits |
+ intel_dsi->video_mode_format |
+ IP_TG_CONFIG |
+ RANDOM_DPI_DISPLAY_RESOLUTION);
}
static void intel_dsi_pre_pll_enable(struct intel_encoder *encoder)
u16 clk_hs_to_lp_count;
u16 init_count;
+ u32 pclk;
+ u16 burst_mode_ratio;
/* all delays in ms */
u16 backlight_off_delay;
u32 mask;
mask = LP_CTRL_FIFO_EMPTY | HS_CTRL_FIFO_EMPTY |
- LP_DATA_FIFO_EMPTY | HS_DATA_FIFO_EMPTY;
+ LP_DATA_FIFO_EMPTY | HS_DATA_FIFO_EMPTY;
if (wait_for((I915_READ(MIPI_GEN_FIFO_STAT(pipe)) & mask) == mask, 100))
DRM_ERROR("DPI FIFOs are not empty\n");
u32 ths_prepare_ns, tclk_trail_ns;
u32 tclk_prepare_clkzero, ths_prepare_hszero;
u32 lp_to_hs_switch, hs_to_lp_switch;
+ u32 pclk, computed_ddr;
+ u16 burst_mode_ratio;
DRM_DEBUG_KMS("\n");
else if (intel_dsi->pixel_format == VID_MODE_FORMAT_RGB565)
bits_per_pixel = 16;
- bitrate = (mode->clock * bits_per_pixel) / intel_dsi->lane_count;
-
intel_dsi->operation_mode = mipi_config->is_cmd_mode;
intel_dsi->video_mode_format = mipi_config->video_transfer_mode;
intel_dsi->escape_clk_div = mipi_config->byte_clk_sel;
intel_dsi->video_frmt_cfg_bits =
mipi_config->bta_enabled ? DISABLE_VIDEO_BTA : 0;
+ pclk = mode->clock;
+
+ /* Burst Mode Ratio
+ * Target ddr frequency from VBT / non burst ddr freq
+ * multiply by 100 to preserve remainder
+ */
+ if (intel_dsi->video_mode_format == VIDEO_MODE_BURST) {
+ if (mipi_config->target_burst_mode_freq) {
+ computed_ddr =
+ (pclk * bits_per_pixel) / intel_dsi->lane_count;
+
+ if (mipi_config->target_burst_mode_freq <
+ computed_ddr) {
+ DRM_ERROR("Burst mode freq is less than computed\n");
+ return false;
+ }
+
+ burst_mode_ratio = DIV_ROUND_UP(
+ mipi_config->target_burst_mode_freq * 100,
+ computed_ddr);
+
+ pclk = DIV_ROUND_UP(pclk * burst_mode_ratio, 100);
+ } else {
+ DRM_ERROR("Burst mode target is not set\n");
+ return false;
+ }
+ } else
+ burst_mode_ratio = 100;
+
+ intel_dsi->burst_mode_ratio = burst_mode_ratio;
+ intel_dsi->pclk = pclk;
+
+ bitrate = (pclk * bits_per_pixel) / intel_dsi->lane_count;
+
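To make the scaling concrete, take some hypothetical numbers (not from any real VBT): pclk = 100000 kHz at 24 bpp over 4 lanes gives computed_ddr = (100000 * 24) / 4 = 600000. A VBT target burst frequency of 660000 then yields burst_mode_ratio = DIV_ROUND_UP(660000 * 100, 600000) = 110, and the adjusted pclk becomes DIV_ROUND_UP(100000 * 110, 100) = 110000 kHz; the same 110/100 ratio is what txbyteclkhs() later applies to the horizontal timings.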
switch (intel_dsi->escape_clk_div) {
case 0:
tlpx_ns = 50;
#else
/* Get DSI clock from pixel clock */
-static u32 dsi_clk_from_pclk(const struct drm_display_mode *mode,
- int pixel_format, int lane_count)
+static u32 dsi_clk_from_pclk(u32 pclk, int pixel_format, int lane_count)
{
u32 dsi_clk_khz;
u32 bpp;
/* DSI data rate = pixel clock * bits per pixel / lane count
pixel clock is converted from KHz to Hz */
- dsi_clk_khz = DIV_ROUND_CLOSEST(mode->clock * bpp, lane_count);
+ dsi_clk_khz = DIV_ROUND_CLOSEST(pclk * bpp, lane_count);
return dsi_clk_khz;
}
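Continuing the hypothetical numbers from the burst-mode example above: dsi_clk_from_pclk(110000, RGB888, 4) evaluates to DIV_ROUND_CLOSEST(110000 * 24, 4) = 660000 kHz, i.e. the DSI PLL is asked for exactly the target burst DDR frequency again.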
for (m = 62; m <= 92; m++) {
for (p = 2; p <= 6; p++) {
/* Find the optimal m and p divisors
- with minimal error +/- the required clock */
+ with minimal error +/- the required clock */
calc_dsi_clk = (m * ref_clk) / p;
if (calc_dsi_clk == target_dsi_clk) {
calc_m = m;
static void vlv_configure_dsi_pll(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
- struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc);
- const struct drm_display_mode *mode = &intel_crtc->config.adjusted_mode;
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
int ret;
struct dsi_mnp dsi_mnp;
u32 dsi_clk;
- dsi_clk = dsi_clk_from_pclk(mode, intel_dsi->pixel_format,
- intel_dsi->lane_count);
+ dsi_clk = dsi_clk_from_pclk(intel_dsi->pclk, intel_dsi->pixel_format,
+ intel_dsi->lane_count);
ret = dsi_calc_mnp(dsi_clk, &dsi_mnp);
if (ret) {
}
WARN(bpp != pipe_bpp,
- "bpp match assertion failure (expected %d, current %d)\n",
- bpp, pipe_bpp);
+ "bpp match assertion failure (expected %d, current %d)\n",
+ bpp, pipe_bpp);
}
u32 vlv_get_dsi_pclk(struct intel_encoder *encoder, int pipe_bpp)
{
.type = INTEL_DVO_CHIP_TMDS,
.name = "ns2501",
- .dvo_reg = DVOC,
+ .dvo_reg = DVOB,
.slave_addr = NS2501_ADDR,
.dev_ops = &ns2501_ops,
}
u32 dvo_reg = intel_dvo->dev.dvo_reg;
u32 temp = I915_READ(dvo_reg);
- I915_WRITE(dvo_reg, temp | DVO_ENABLE);
- I915_READ(dvo_reg);
intel_dvo->dev.dev_ops->mode_set(&intel_dvo->dev,
&crtc->config.requested_mode,
&crtc->config.adjusted_mode);
+ I915_WRITE(dvo_reg, temp | DVO_ENABLE);
+ I915_READ(dvo_reg);
+
intel_dvo->dev.dev_ops->dpms(&intel_dvo->dev, true);
}
intel_crtc_update_dpms(crtc);
- intel_dvo->dev.dev_ops->mode_set(&intel_dvo->dev,
- &config->requested_mode,
- &config->adjusted_mode);
-
intel_dvo->dev.dev_ops->dpms(&intel_dvo->dev, true);
} else {
intel_dvo->dev.dev_ops->dpms(&intel_dvo->dev, false);
* David Airlie
*/
+#include <linux/async.h>
#include <linux/module.h>
#include <linux/kernel.h>
+#include <linux/console.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/mm.h>
int num_connectors_enabled = 0;
int num_connectors_detected = 0;
- /*
- * If the user specified any force options, just bail here
- * and use that config.
- */
- for (i = 0; i < fb_helper->connector_count; i++) {
- struct drm_fb_helper_connector *fb_conn;
- struct drm_connector *connector;
-
- fb_conn = fb_helper->connector_info[i];
- connector = fb_conn->connector;
-
- if (!enabled[i])
- continue;
-
- if (connector->force != DRM_FORCE_UNSPECIFIED)
- return false;
- }
-
save_enabled = kcalloc(dev->mode_config.num_connector, sizeof(bool),
GFP_KERNEL);
if (!save_enabled)
continue;
}
+ if (connector->force == DRM_FORCE_OFF) {
+ DRM_DEBUG_KMS("connector %s is disabled by user, skipping\n",
+ connector->name);
+ enabled[i] = false;
+ continue;
+ }
+
encoder = connector->encoder;
if (!encoder || WARN_ON(!encoder->crtc)) {
+ if (connector->force > DRM_FORCE_OFF)
+ goto bail;
+
DRM_DEBUG_KMS("connector %s has no encoder or crtc, skipping\n",
connector->name);
enabled[i] = false;
for (j = 0; j < fb_helper->connector_count; j++) {
if (crtcs[j] == new_crtc) {
DRM_DEBUG_KMS("fallback: cloned configuration\n");
- fallback = true;
- goto out;
+ goto bail;
}
}
fallback = true;
}
-out:
if (fallback) {
+bail:
DRM_DEBUG_KMS("Not using firmware configuration\n");
memcpy(enabled, save_enabled, dev->mode_config.num_connector);
kfree(save_enabled);
return false;
}
+static void intel_fbdev_suspend_worker(struct work_struct *work)
+{
+ intel_fbdev_set_suspend(container_of(work,
+ struct drm_i915_private,
+ fbdev_suspend_work)->dev,
+ FBINFO_STATE_RUNNING,
+ true);
+}
+
int intel_fbdev_init(struct drm_device *dev)
{
struct intel_fbdev *ifbdev;
}
dev_priv->fbdev = ifbdev;
+ INIT_WORK(&dev_priv->fbdev_suspend_work, intel_fbdev_suspend_worker);
+
drm_fb_helper_single_add_all_connectors(&ifbdev->helper);
return 0;
}
-void intel_fbdev_initial_config(struct drm_device *dev)
+void intel_fbdev_initial_config(void *data, async_cookie_t cookie)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = data;
struct intel_fbdev *ifbdev = dev_priv->fbdev;
/* Due to peculiar init order wrt to hpd handling this is separate. */
if (!dev_priv->fbdev)
return;
+ flush_work(&dev_priv->fbdev_suspend_work);
+
+ async_synchronize_full();
intel_fbdev_destroy(dev, dev_priv->fbdev);
kfree(dev_priv->fbdev);
dev_priv->fbdev = NULL;
}
-void intel_fbdev_set_suspend(struct drm_device *dev, int state)
+void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_fbdev *ifbdev = dev_priv->fbdev;
info = ifbdev->helper.fbdev;
+ if (synchronous) {
+ /* Flush any pending work to turn the console on, and then
+ * wait to turn it off. It must be synchronous as we are
+ * about to suspend or unload the driver.
+ *
+ * Note that from within the work-handler, we cannot flush
+ * ourselves, so only flush outstanding work upon suspend!
+ */
+ if (state != FBINFO_STATE_RUNNING)
+ flush_work(&dev_priv->fbdev_suspend_work);
+ console_lock();
+ } else {
+		 * The console lock can be pretty contended on resume due
+ * The console lock can be pretty contented on resume due
+ * to all the printk activity. Try to keep it out of the hot
+ * path of resume if possible.
+ */
+ WARN_ON(state != FBINFO_STATE_RUNNING);
+ if (!console_trylock()) {
+ /* Don't block our own workqueue as this can
+ * be run in parallel with other i915.ko tasks.
+ */
+ schedule_work(&dev_priv->fbdev_suspend_work);
+ return;
+ }
+ }
+
/* On resume from hibernation: If the object is shmemfs backed, it has
* been restored from swap. If the object is stolen however, it will be
* full of whatever garbage was left in there.
memset_io(info->screen_base, 0, info->screen_size);
fb_set_suspend(info, state);
+ console_unlock();
}
void intel_fbdev_output_poll_changed(struct drm_device *dev)
intel_hdmi_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
- if (mode->clock > hdmi_portclock_limit(intel_attached_hdmi(connector),
- true))
+ int clock = mode->clock;
+
+ if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ clock *= 2;
+
+ if (clock > hdmi_portclock_limit(intel_attached_hdmi(connector),
+ true))
return MODE_CLOCK_HIGH;
- if (mode->clock < 20000)
+ if (clock < 20000)
return MODE_CLOCK_LOW;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
if (HAS_GMCH_DISPLAY(dev))
return false;
- list_for_each_entry(encoder, &dev->mode_config.encoder_list, base.head) {
+ for_each_intel_encoder(dev, encoder) {
if (encoder->new_crtc != crtc)
continue;
intel_hdmi->color_range = 0;
}
+ if (adjusted_mode->flags & DRM_MODE_FLAG_DBLCLK) {
+ pipe_config->pixel_multiplier = 2;
+ }
+
if (intel_hdmi->color_range)
pipe_config->limited_color_range = true;
return true;
}
-static enum drm_connector_status
-intel_hdmi_detect(struct drm_connector *connector, bool force)
+static void
+intel_hdmi_unset_edid(struct drm_connector *connector)
{
- struct drm_device *dev = connector->dev;
struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector);
- struct intel_digital_port *intel_dig_port =
- hdmi_to_dig_port(intel_hdmi);
- struct intel_encoder *intel_encoder = &intel_dig_port->base;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct edid *edid;
- enum intel_display_power_domain power_domain;
- enum drm_connector_status status = connector_status_disconnected;
- DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
- connector->base.id, connector->name);
+ intel_hdmi->has_hdmi_sink = false;
+ intel_hdmi->has_audio = false;
+ intel_hdmi->rgb_quant_range_selectable = false;
+
+ kfree(to_intel_connector(connector)->detect_edid);
+ to_intel_connector(connector)->detect_edid = NULL;
+}
+
+static bool
+intel_hdmi_set_edid(struct drm_connector *connector)
+{
+ struct drm_i915_private *dev_priv = to_i915(connector->dev);
+ struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector);
+ struct intel_encoder *intel_encoder =
+ &hdmi_to_dig_port(intel_hdmi)->base;
+ enum intel_display_power_domain power_domain;
+ struct edid *edid;
+ bool connected = false;
power_domain = intel_display_port_power_domain(intel_encoder);
intel_display_power_get(dev_priv, power_domain);
- intel_hdmi->has_hdmi_sink = false;
- intel_hdmi->has_audio = false;
- intel_hdmi->rgb_quant_range_selectable = false;
edid = drm_get_edid(connector,
intel_gmbus_get_adapter(dev_priv,
intel_hdmi->ddc_bus));
- if (edid) {
- if (edid->input & DRM_EDID_INPUT_DIGITAL) {
- status = connector_status_connected;
- if (intel_hdmi->force_audio != HDMI_AUDIO_OFF_DVI)
- intel_hdmi->has_hdmi_sink =
- drm_detect_hdmi_monitor(edid);
- intel_hdmi->has_audio = drm_detect_monitor_audio(edid);
- intel_hdmi->rgb_quant_range_selectable =
- drm_rgb_quant_range_selectable(edid);
- }
- kfree(edid);
- }
+ intel_display_power_put(dev_priv, power_domain);
+
+ to_intel_connector(connector)->detect_edid = edid;
+ if (edid && edid->input & DRM_EDID_INPUT_DIGITAL) {
+ intel_hdmi->rgb_quant_range_selectable =
+ drm_rgb_quant_range_selectable(edid);
- if (status == connector_status_connected) {
+ intel_hdmi->has_audio = drm_detect_monitor_audio(edid);
if (intel_hdmi->force_audio != HDMI_AUDIO_AUTO)
intel_hdmi->has_audio =
- (intel_hdmi->force_audio == HDMI_AUDIO_ON);
- intel_encoder->type = INTEL_OUTPUT_HDMI;
+ intel_hdmi->force_audio == HDMI_AUDIO_ON;
+
+ if (intel_hdmi->force_audio != HDMI_AUDIO_OFF_DVI)
+ intel_hdmi->has_hdmi_sink =
+ drm_detect_hdmi_monitor(edid);
+
+ connected = true;
}
- intel_display_power_put(dev_priv, power_domain);
+ return connected;
+}
+
+static enum drm_connector_status
+intel_hdmi_detect(struct drm_connector *connector, bool force)
+{
+ enum drm_connector_status status;
+
+ DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
+ connector->base.id, connector->name);
+
+ intel_hdmi_unset_edid(connector);
+
+ if (intel_hdmi_set_edid(connector)) {
+ struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector);
+
+ hdmi_to_dig_port(intel_hdmi)->base.type = INTEL_OUTPUT_HDMI;
+ status = connector_status_connected;
+ } else
+ status = connector_status_disconnected;
return status;
}
-static int intel_hdmi_get_modes(struct drm_connector *connector)
+static void
+intel_hdmi_force(struct drm_connector *connector)
{
- struct intel_encoder *intel_encoder = intel_attached_encoder(connector);
- struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&intel_encoder->base);
- struct drm_i915_private *dev_priv = connector->dev->dev_private;
- enum intel_display_power_domain power_domain;
- int ret;
+ struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector);
- /* We should parse the EDID data and find out if it's an HDMI sink so
- * we can send audio to it.
- */
+ DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
+ connector->base.id, connector->name);
- power_domain = intel_display_port_power_domain(intel_encoder);
- intel_display_power_get(dev_priv, power_domain);
+ intel_hdmi_unset_edid(connector);
- ret = intel_ddc_get_modes(connector,
- intel_gmbus_get_adapter(dev_priv,
- intel_hdmi->ddc_bus));
+ if (connector->status != connector_status_connected)
+ return;
- intel_display_power_put(dev_priv, power_domain);
+ intel_hdmi_set_edid(connector);
+ hdmi_to_dig_port(intel_hdmi)->base.type = INTEL_OUTPUT_HDMI;
+}
- return ret;
+static int intel_hdmi_get_modes(struct drm_connector *connector)
+{
+ struct edid *edid;
+
+ edid = to_intel_connector(connector)->detect_edid;
+ if (edid == NULL)
+ return 0;
+
+ return intel_connector_update_modes(connector, edid);
}
static bool
intel_hdmi_detect_audio(struct drm_connector *connector)
{
- struct intel_encoder *intel_encoder = intel_attached_encoder(connector);
- struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&intel_encoder->base);
- struct drm_i915_private *dev_priv = connector->dev->dev_private;
- enum intel_display_power_domain power_domain;
- struct edid *edid;
bool has_audio = false;
+ struct edid *edid;
- power_domain = intel_display_port_power_domain(intel_encoder);
- intel_display_power_get(dev_priv, power_domain);
-
- edid = drm_get_edid(connector,
- intel_gmbus_get_adapter(dev_priv,
- intel_hdmi->ddc_bus));
- if (edid) {
- if (edid->input & DRM_EDID_INPUT_DIGITAL)
- has_audio = drm_detect_monitor_audio(edid);
- kfree(edid);
- }
-
- intel_display_power_put(dev_priv, power_domain);
+ edid = to_intel_connector(connector)->detect_edid;
+ if (edid && edid->input & DRM_EDID_INPUT_DIGITAL)
+ has_audio = drm_detect_monitor_audio(edid);
return has_audio;
}
enum pipe pipe = intel_crtc->pipe;
u32 val;
+ intel_hdmi_prepare(encoder);
+
mutex_lock(&dev_priv->dpio_lock);
/* program left/right clock distribution */
for (i = 0; i < 4; i++) {
val = vlv_dpio_read(dev_priv, pipe, CHV_TX_DW2(ch, i));
- val &= ~DPIO_SWING_MARGIN_MASK;
- val |= 102 << DPIO_SWING_MARGIN_SHIFT;
+ val &= ~DPIO_SWING_MARGIN000_MASK;
+ val |= 102 << DPIO_SWING_MARGIN000_SHIFT;
vlv_dpio_write(dev_priv, pipe, CHV_TX_DW2(ch, i), val);
}
static void intel_hdmi_destroy(struct drm_connector *connector)
{
+ kfree(to_intel_connector(connector)->detect_edid);
drm_connector_cleanup(connector);
kfree(connector);
}
static const struct drm_connector_funcs intel_hdmi_connector_funcs = {
.dpms = intel_connector_dpms,
.detect = intel_hdmi_detect,
+ .force = intel_hdmi_force,
.fill_modes = drm_helper_probe_single_connector_modes,
.set_property = intel_hdmi_set_property,
.destroy = intel_hdmi_destroy,
--- /dev/null
+/*
+ * Copyright © 2014 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ * Ben Widawsky <ben@bwidawsk.net>
+ * Michel Thierry <michel.thierry@intel.com>
+ * Thomas Daniel <thomas.daniel@intel.com>
+ * Oscar Mateo <oscar.mateo@intel.com>
+ *
+ */
+
+/**
+ * DOC: Logical Rings, Logical Ring Contexts and Execlists
+ *
+ * Motivation:
+ * GEN8 brings an expansion of the HW contexts: "Logical Ring Contexts".
+ * These expanded contexts enable a number of new abilities, especially
+ * "Execlists" (also implemented in this file).
+ *
+ * One of the main differences from the legacy HW contexts is that logical
+ * ring contexts incorporate many more things into the context's state, like
+ * PDPs or ringbuffer control registers:
+ *
+ * The reason why PDPs are included in the context is straightforward: as
+ * PPGTTs (per-process GTTs) are actually per-context, having the PDPs
+ * contained there means you don't need to do a ppgtt->switch_mm yourself;
+ * instead, the GPU will do it for you on the context switch.
+ *
+ * But what about the ringbuffer control registers (head, tail, etc.)?
+ * Shouldn't we just need one set of those per engine command streamer? This is
+ * where the name "Logical Rings" starts to make sense: by virtualizing the
+ * rings, the engine cs shifts to a new "ring buffer" with every context
+ * switch. When you want to submit a workload to the GPU you: A) choose your
+ * context, B) find its appropriate virtualized ring, C) write commands to it
+ * and then, finally, D) tell the GPU to switch to that context.
+ *
+ * Instead of the legacy MI_SET_CONTEXT, the way you tell the GPU to switch
+ * to a context is via a context execution list, ergo "Execlists".
+ *
+ * LRC implementation:
+ * Regarding the creation of contexts, we have:
+ *
+ * - One global default context.
+ * - One local default context for each opened fd.
+ * - One local extra context for each context create ioctl call.
+ *
+ * Now that ringbuffers are per-context (and not per-engine, like before)
+ * and that contexts are uniquely tied to a given engine (and not reusable,
+ * like before) we need:
+ *
+ * - One ringbuffer per-engine inside each context.
+ * - One backing object per-engine inside each context.
+ *
+ * The global default context starts its life with these new objects fully
+ * allocated and populated. The local default context for each opened fd is
+ * more complex, because we don't know at creation time which engine is going
+ * to use them. To handle this, we have implemented a deferred creation of LR
+ * contexts:
+ *
+ * The local context starts its life as a hollow or blank holder, that only
+ * gets populated for a given engine once we receive an execbuffer. If later
+ * on we receive another execbuffer ioctl for the same context but a different
+ * engine, we allocate/populate a new ringbuffer and context backing object and
+ * so on.
+ *
+ * Finally, regarding local contexts created using the ioctl call: as they are
+ * only allowed with the render ring, we can allocate & populate them right
+ * away (no need to defer anything, at least for now).
+ *
+ * Execlists implementation:
+ * Execlists are the new method by which, on gen8+ hardware, workloads are
+ * submitted for execution (as opposed to the legacy ringbuffer-based method).
+ * This method works as follows:
+ *
+ * When a request is committed, its commands (the BB start and any leading or
+ * trailing commands, like the seqno breadcrumbs) are placed in the ringbuffer
+ * for the appropriate context. The tail pointer in the hardware context is not
+ * updated at this time, but instead, kept by the driver in the ringbuffer
+ * structure. A structure representing this request is added to a request queue
+ * for the appropriate engine: this structure contains a copy of the context's
+ * tail after the request was written to the ring buffer and a pointer to the
+ * context itself.
+ *
+ * If the engine's request queue was empty before the request was added, the
+ * queue is processed immediately. Otherwise the queue will be processed during
+ * a context switch interrupt. In any case, elements on the queue will get sent
+ * (in pairs) to the GPU's ExecLists Submit Port (ELSP, for short) with a
+ * globally unique 20-bit submission ID.
+ *
+ * When execution of a request completes, the GPU updates the context status
+ * buffer with a context complete event and generates a context switch interrupt.
+ * During the interrupt handling, the driver examines the events in the buffer:
+ * for each context complete event, if the announced ID matches that on the head
+ * of the request queue, then that request is retired and removed from the queue.
+ *
+ * After processing, if any requests were retired and the queue is not empty
+ * then a new execution list can be submitted. The two requests at the front of
+ * the queue are next to be submitted but since a context may not occur twice in
+ * an execution list, if subsequent requests have the same ID as the first then
+ * the two requests must be combined. This is done simply by discarding requests
+ * at the head of the queue until either only one request is left (in which case
+ * we use a NULL second context) or the first two requests have unique IDs.
+ *
+ * By always executing the first two requests in the queue the driver ensures
+ * that the GPU is kept as busy as possible. In the case where a single context
+ * completes but a second context is still executing, the request for this second
+ * context will be at the head of the queue when we remove the first one. This
+ * request will then be resubmitted along with a new request for a different context,
+ * which will cause the hardware to continue executing the second request and queue
+ * the new request (the GPU detects the condition of a context getting preempted
+ * with the same context and optimizes the context switch flow by not doing
+ * preemption, but just sampling the new tail pointer).
+ *
+ */
+
+#include <drm/drmP.h>
+#include <drm/i915_drm.h>
+#include "i915_drv.h"
+
+#define GEN8_LR_CONTEXT_RENDER_SIZE (20 * PAGE_SIZE)
+#define GEN8_LR_CONTEXT_OTHER_SIZE (2 * PAGE_SIZE)
+
+#define GEN8_LR_CONTEXT_ALIGN 4096
+
+#define RING_EXECLIST_QFULL (1 << 0x2)
+#define RING_EXECLIST1_VALID (1 << 0x3)
+#define RING_EXECLIST0_VALID (1 << 0x4)
+#define RING_EXECLIST_ACTIVE_STATUS (3 << 0xE)
+#define RING_EXECLIST1_ACTIVE (1 << 0x11)
+#define RING_EXECLIST0_ACTIVE (1 << 0x12)
+
+#define GEN8_CTX_STATUS_IDLE_ACTIVE (1 << 0)
+#define GEN8_CTX_STATUS_PREEMPTED (1 << 1)
+#define GEN8_CTX_STATUS_ELEMENT_SWITCH (1 << 2)
+#define GEN8_CTX_STATUS_ACTIVE_IDLE (1 << 3)
+#define GEN8_CTX_STATUS_COMPLETE (1 << 4)
+#define GEN8_CTX_STATUS_LITE_RESTORE (1 << 15)
+
+#define CTX_LRI_HEADER_0 0x01
+#define CTX_CONTEXT_CONTROL 0x02
+#define CTX_RING_HEAD 0x04
+#define CTX_RING_TAIL 0x06
+#define CTX_RING_BUFFER_START 0x08
+#define CTX_RING_BUFFER_CONTROL 0x0a
+#define CTX_BB_HEAD_U 0x0c
+#define CTX_BB_HEAD_L 0x0e
+#define CTX_BB_STATE 0x10
+#define CTX_SECOND_BB_HEAD_U 0x12
+#define CTX_SECOND_BB_HEAD_L 0x14
+#define CTX_SECOND_BB_STATE 0x16
+#define CTX_BB_PER_CTX_PTR 0x18
+#define CTX_RCS_INDIRECT_CTX 0x1a
+#define CTX_RCS_INDIRECT_CTX_OFFSET 0x1c
+#define CTX_LRI_HEADER_1 0x21
+#define CTX_CTX_TIMESTAMP 0x22
+#define CTX_PDP3_UDW 0x24
+#define CTX_PDP3_LDW 0x26
+#define CTX_PDP2_UDW 0x28
+#define CTX_PDP2_LDW 0x2a
+#define CTX_PDP1_UDW 0x2c
+#define CTX_PDP1_LDW 0x2e
+#define CTX_PDP0_UDW 0x30
+#define CTX_PDP0_LDW 0x32
+#define CTX_LRI_HEADER_2 0x41
+#define CTX_R_PWR_CLK_STATE 0x42
+#define CTX_GPGPU_CSR_BASE_ADDRESS 0x44
+
+#define GEN8_CTX_VALID (1<<0)
+#define GEN8_CTX_FORCE_PD_RESTORE (1<<1)
+#define GEN8_CTX_FORCE_RESTORE (1<<2)
+#define GEN8_CTX_L3LLC_COHERENT (1<<5)
+#define GEN8_CTX_PRIVILEGE (1<<8)
+enum {
+ ADVANCED_CONTEXT = 0,
+ LEGACY_CONTEXT,
+ ADVANCED_AD_CONTEXT,
+ LEGACY_64B_CONTEXT
+};
+#define GEN8_CTX_MODE_SHIFT 3
+enum {
+ FAULT_AND_HANG = 0,
+ FAULT_AND_HALT, /* Debug only */
+ FAULT_AND_STREAM,
+ FAULT_AND_CONTINUE /* Unsupported */
+};
+#define GEN8_CTX_ID_SHIFT 32
+
+/**
+ * intel_sanitize_enable_execlists() - sanitize i915.enable_execlists
+ * @dev: DRM device.
+ * @enable_execlists: value of i915.enable_execlists module parameter.
+ *
+ * Only certain platforms support Execlists (the prerequisites being
+ * support for Logical Ring Contexts and Aliasing PPGTT or better),
+ * and only when enabled via module parameter.
+ *
+ * Return: 1 if Execlists is supported and should be enabled.
+ */
+int intel_sanitize_enable_execlists(struct drm_device *dev, int enable_execlists)
+{
+ WARN_ON(i915.enable_ppgtt == -1);
+
+ if (enable_execlists == 0)
+ return 0;
+
+ if (HAS_LOGICAL_RING_CONTEXTS(dev) && USES_PPGTT(dev) &&
+ i915.use_mmio_flip >= 0)
+ return 1;
+
+ return 0;
+}
+
+/**
+ * intel_execlists_ctx_id() - get the Execlists Context ID
+ * @ctx_obj: Logical Ring Context backing object.
+ *
+ * Do not confuse with ctx->id! Unfortunately we have a name overload
+ * here: the old context ID we pass to userspace as a handle so that
+ * they can refer to a context, and the new context ID we pass to the
+ * ELSP so that the GPU can inform us of the context status via
+ * interrupts.
+ *
+ * Return: 20-bit globally unique context ID.
+ */
+u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj)
+{
+ u32 lrca = i915_gem_obj_ggtt_offset(ctx_obj);
+
+	/* LRCA is required to be 4K aligned so the upper 20 bits
+	 * are globally unique */
+ return lrca >> 12;
+}
+
+static uint64_t execlists_ctx_descriptor(struct drm_i915_gem_object *ctx_obj)
+{
+ uint64_t desc;
+ uint64_t lrca = i915_gem_obj_ggtt_offset(ctx_obj);
+
+ WARN_ON(lrca & 0xFFFFFFFF00000FFFULL);
+
+ desc = GEN8_CTX_VALID;
+ desc |= LEGACY_CONTEXT << GEN8_CTX_MODE_SHIFT;
+ desc |= GEN8_CTX_L3LLC_COHERENT;
+ desc |= GEN8_CTX_PRIVILEGE;
+ desc |= lrca;
+ desc |= (u64)intel_execlists_ctx_id(ctx_obj) << GEN8_CTX_ID_SHIFT;
+
+ /* TODO: WaDisableLiteRestore when we start using semaphore
+ * signalling between Command Streamers */
+ /* desc |= GEN8_CTX_FORCE_RESTORE; */
+
+ return desc;
+}
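As a worked example with a made-up (4K-aligned) offset: for lrca = 0x00012000, the descriptor is GEN8_CTX_VALID | LEGACY_CONTEXT << GEN8_CTX_MODE_SHIFT | GEN8_CTX_L3LLC_COHERENT | GEN8_CTX_PRIVILEGE | 0x12000 | (0x12ULL << GEN8_CTX_ID_SHIFT), i.e. 0x0000001200012129: the low dword carries the flags plus the aligned address, and bits 51:32 carry the 20-bit context ID.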
+
+static void execlists_elsp_write(struct intel_engine_cs *ring,
+ struct drm_i915_gem_object *ctx_obj0,
+ struct drm_i915_gem_object *ctx_obj1)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+ uint64_t temp = 0;
+ uint32_t desc[4];
+ unsigned long flags;
+
+ /* XXX: You must always write both descriptors in the order below. */
+ if (ctx_obj1)
+ temp = execlists_ctx_descriptor(ctx_obj1);
+ else
+ temp = 0;
+ desc[1] = (u32)(temp >> 32);
+ desc[0] = (u32)temp;
+
+ temp = execlists_ctx_descriptor(ctx_obj0);
+ desc[3] = (u32)(temp >> 32);
+ desc[2] = (u32)temp;
+
+ /* Set Force Wakeup bit to prevent GT from entering C6 while ELSP writes
+ * are in progress.
+ *
+ * The other problem is that we can't just call gen6_gt_force_wake_get()
+ * because that function calls intel_runtime_pm_get(), which might sleep.
+ * Instead, we do the runtime_pm_get/put when creating/destroying requests.
+ */
+ spin_lock_irqsave(&dev_priv->uncore.lock, flags);
+ if (IS_CHERRYVIEW(dev_priv->dev)) {
+ if (dev_priv->uncore.fw_rendercount++ == 0)
+ dev_priv->uncore.funcs.force_wake_get(dev_priv,
+ FORCEWAKE_RENDER);
+ if (dev_priv->uncore.fw_mediacount++ == 0)
+ dev_priv->uncore.funcs.force_wake_get(dev_priv,
+ FORCEWAKE_MEDIA);
+ } else {
+ if (dev_priv->uncore.forcewake_count++ == 0)
+ dev_priv->uncore.funcs.force_wake_get(dev_priv,
+ FORCEWAKE_ALL);
+ }
+ spin_unlock_irqrestore(&dev_priv->uncore.lock, flags);
+
+ I915_WRITE(RING_ELSP(ring), desc[1]);
+ I915_WRITE(RING_ELSP(ring), desc[0]);
+ I915_WRITE(RING_ELSP(ring), desc[3]);
+ /* The context is automatically loaded after the following */
+ I915_WRITE(RING_ELSP(ring), desc[2]);
+
+	/* ELSP is a write-only register, so use another nearby reg for posting instead */
+ POSTING_READ(RING_EXECLIST_STATUS(ring));
+
+ /* Release Force Wakeup (see the big comment above). */
+ spin_lock_irqsave(&dev_priv->uncore.lock, flags);
+ if (IS_CHERRYVIEW(dev_priv->dev)) {
+ if (--dev_priv->uncore.fw_rendercount == 0)
+ dev_priv->uncore.funcs.force_wake_put(dev_priv,
+ FORCEWAKE_RENDER);
+ if (--dev_priv->uncore.fw_mediacount == 0)
+ dev_priv->uncore.funcs.force_wake_put(dev_priv,
+ FORCEWAKE_MEDIA);
+ } else {
+ if (--dev_priv->uncore.forcewake_count == 0)
+ dev_priv->uncore.funcs.force_wake_put(dev_priv,
+ FORCEWAKE_ALL);
+ }
+
+ spin_unlock_irqrestore(&dev_priv->uncore.lock, flags);
+}
+
+static int execlists_ctx_write_tail(struct drm_i915_gem_object *ctx_obj, u32 tail)
+{
+ struct page *page;
+ uint32_t *reg_state;
+
+ page = i915_gem_object_get_page(ctx_obj, 1);
+ reg_state = kmap_atomic(page);
+
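+	/* CTX_* indices name the register-offset dword of an LRI pair, so +1 is its value slot */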
+ reg_state[CTX_RING_TAIL+1] = tail;
+
+ kunmap_atomic(reg_state);
+
+ return 0;
+}
+
+static int execlists_submit_context(struct intel_engine_cs *ring,
+ struct intel_context *to0, u32 tail0,
+ struct intel_context *to1, u32 tail1)
+{
+ struct drm_i915_gem_object *ctx_obj0;
+ struct drm_i915_gem_object *ctx_obj1 = NULL;
+
+ ctx_obj0 = to0->engine[ring->id].state;
+ BUG_ON(!ctx_obj0);
+ WARN_ON(!i915_gem_obj_is_pinned(ctx_obj0));
+
+ execlists_ctx_write_tail(ctx_obj0, tail0);
+
+ if (to1) {
+ ctx_obj1 = to1->engine[ring->id].state;
+ BUG_ON(!ctx_obj1);
+ WARN_ON(!i915_gem_obj_is_pinned(ctx_obj1));
+
+ execlists_ctx_write_tail(ctx_obj1, tail1);
+ }
+
+ execlists_elsp_write(ring, ctx_obj0, ctx_obj1);
+
+ return 0;
+}
+
+static void execlists_context_unqueue(struct intel_engine_cs *ring)
+{
+ struct intel_ctx_submit_request *req0 = NULL, *req1 = NULL;
+ struct intel_ctx_submit_request *cursor = NULL, *tmp = NULL;
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+
+ assert_spin_locked(&ring->execlist_lock);
+
+ if (list_empty(&ring->execlist_queue))
+ return;
+
+ /* Try to read in pairs */
+ list_for_each_entry_safe(cursor, tmp, &ring->execlist_queue,
+ execlist_link) {
+ if (!req0) {
+ req0 = cursor;
+ } else if (req0->ctx == cursor->ctx) {
+ /* Same ctx: ignore first request, as second request
+ * will update tail past first request's workload */
+ cursor->elsp_submitted = req0->elsp_submitted;
+ list_del(&req0->execlist_link);
+ queue_work(dev_priv->wq, &req0->work);
+ req0 = cursor;
+ } else {
+ req1 = cursor;
+ break;
+ }
+ }
+
+ WARN_ON(req1 && req1->elsp_submitted);
+
+ WARN_ON(execlists_submit_context(ring, req0->ctx, req0->tail,
+ req1 ? req1->ctx : NULL,
+ req1 ? req1->tail : 0));
+
+ req0->elsp_submitted++;
+ if (req1)
+ req1->elsp_submitted++;
+}
+
+static bool execlists_check_remove_request(struct intel_engine_cs *ring,
+ u32 request_id)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+ struct intel_ctx_submit_request *head_req;
+
+ assert_spin_locked(&ring->execlist_lock);
+
+ head_req = list_first_entry_or_null(&ring->execlist_queue,
+ struct intel_ctx_submit_request,
+ execlist_link);
+
+ if (head_req != NULL) {
+ struct drm_i915_gem_object *ctx_obj =
+ head_req->ctx->engine[ring->id].state;
+ if (intel_execlists_ctx_id(ctx_obj) == request_id) {
+ WARN(head_req->elsp_submitted == 0,
+ "Never submitted head request\n");
+
+ if (--head_req->elsp_submitted <= 0) {
+ list_del(&head_req->execlist_link);
+ queue_work(dev_priv->wq, &head_req->work);
+ return true;
+ }
+ }
+ }
+
+ return false;
+}
+
+/**
+ * intel_execlists_handle_ctx_events() - handle Context Switch interrupts
+ * @ring: Engine Command Streamer to handle.
+ *
+ * Check the unread Context Status Buffers and manage the submission of new
+ * contexts to the ELSP accordingly.
+ */
+void intel_execlists_handle_ctx_events(struct intel_engine_cs *ring)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+ u32 status_pointer;
+ u8 read_pointer;
+ u8 write_pointer;
+ u32 status;
+ u32 status_id;
+ u32 submit_contexts = 0;
+
+ status_pointer = I915_READ(RING_CONTEXT_STATUS_PTR(ring));
+
+ read_pointer = ring->next_context_status_buffer;
+ write_pointer = status_pointer & 0x07;
+ if (read_pointer > write_pointer)
+ write_pointer += 6;
+
+ spin_lock(&ring->execlist_lock);
+
+ while (read_pointer < write_pointer) {
+ read_pointer++;
+ status = I915_READ(RING_CONTEXT_STATUS_BUF(ring) +
+ (read_pointer % 6) * 8);
+ status_id = I915_READ(RING_CONTEXT_STATUS_BUF(ring) +
+ (read_pointer % 6) * 8 + 4);
+
+ if (status & GEN8_CTX_STATUS_PREEMPTED) {
+ if (status & GEN8_CTX_STATUS_LITE_RESTORE) {
+ if (execlists_check_remove_request(ring, status_id))
+ WARN(1, "Lite Restored request removed from queue\n");
+ } else
+ WARN(1, "Preemption without Lite Restore\n");
+ }
+
+ if ((status & GEN8_CTX_STATUS_ACTIVE_IDLE) ||
+ (status & GEN8_CTX_STATUS_ELEMENT_SWITCH)) {
+ if (execlists_check_remove_request(ring, status_id))
+ submit_contexts++;
+ }
+ }
+
+ if (submit_contexts != 0)
+ execlists_context_unqueue(ring);
+
+ spin_unlock(&ring->execlist_lock);
+
+ WARN(submit_contexts > 2, "More than two context complete events?\n");
+ ring->next_context_status_buffer = write_pointer % 6;
+
+ I915_WRITE(RING_CONTEXT_STATUS_PTR(ring),
+ ((u32)ring->next_context_status_buffer & 0x07) << 8);
+}
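The modulo-6 pointer arithmetic above is easiest to see with a made-up snapshot: if next_context_status_buffer is 5 and the hardware status pointer reads back as 1, then 1 < 5 triggers the += 6 fixup, write_pointer becomes 7, and the loop visits read_pointer values 6 and 7 — that is, CSB slots 6 % 6 = 0 and 7 % 6 = 1, exactly the two entries written since the last interrupt.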
+
+static void execlists_free_request_task(struct work_struct *work)
+{
+ struct intel_ctx_submit_request *req =
+ container_of(work, struct intel_ctx_submit_request, work);
+ struct drm_device *dev = req->ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ intel_runtime_pm_put(dev_priv);
+
+ mutex_lock(&dev->struct_mutex);
+ i915_gem_context_unreference(req->ctx);
+ mutex_unlock(&dev->struct_mutex);
+
+ kfree(req);
+}
+
+static int execlists_context_queue(struct intel_engine_cs *ring,
+ struct intel_context *to,
+ u32 tail)
+{
+ struct intel_ctx_submit_request *req = NULL, *cursor;
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+ unsigned long flags;
+ int num_elements = 0;
+
+ req = kzalloc(sizeof(*req), GFP_KERNEL);
+ if (req == NULL)
+ return -ENOMEM;
+ req->ctx = to;
+ i915_gem_context_reference(req->ctx);
+ req->ring = ring;
+ req->tail = tail;
+ INIT_WORK(&req->work, execlists_free_request_task);
+
+ intel_runtime_pm_get(dev_priv);
+
+ spin_lock_irqsave(&ring->execlist_lock, flags);
+
+ list_for_each_entry(cursor, &ring->execlist_queue, execlist_link)
+ if (++num_elements > 2)
+ break;
+
+ if (num_elements > 2) {
+ struct intel_ctx_submit_request *tail_req;
+
+ tail_req = list_last_entry(&ring->execlist_queue,
+ struct intel_ctx_submit_request,
+ execlist_link);
+
+ if (to == tail_req->ctx) {
+ WARN(tail_req->elsp_submitted != 0,
+ "More than 2 already-submitted reqs queued\n");
+ list_del(&tail_req->execlist_link);
+ queue_work(dev_priv->wq, &tail_req->work);
+ }
+ }
+
+ list_add_tail(&req->execlist_link, &ring->execlist_queue);
+ if (num_elements == 0)
+ execlists_context_unqueue(ring);
+
+ spin_unlock_irqrestore(&ring->execlist_lock, flags);
+
+ return 0;
+}
+
+static int logical_ring_invalidate_all_caches(struct intel_ringbuffer *ringbuf)
+{
+ struct intel_engine_cs *ring = ringbuf->ring;
+ uint32_t flush_domains;
+ int ret;
+
+ flush_domains = 0;
+ if (ring->gpu_caches_dirty)
+ flush_domains = I915_GEM_GPU_DOMAINS;
+
+ ret = ring->emit_flush(ringbuf, I915_GEM_GPU_DOMAINS, flush_domains);
+ if (ret)
+ return ret;
+
+ ring->gpu_caches_dirty = false;
+ return 0;
+}
+
+static int execlists_move_to_gpu(struct intel_ringbuffer *ringbuf,
+ struct list_head *vmas)
+{
+ struct intel_engine_cs *ring = ringbuf->ring;
+ struct i915_vma *vma;
+ uint32_t flush_domains = 0;
+ bool flush_chipset = false;
+ int ret;
+
+ list_for_each_entry(vma, vmas, exec_list) {
+ struct drm_i915_gem_object *obj = vma->obj;
+
+ ret = i915_gem_object_sync(obj, ring);
+ if (ret)
+ return ret;
+
+ if (obj->base.write_domain & I915_GEM_DOMAIN_CPU)
+ flush_chipset |= i915_gem_clflush_object(obj, false);
+
+ flush_domains |= obj->base.write_domain;
+ }
+
+ if (flush_domains & I915_GEM_DOMAIN_GTT)
+ wmb();
+
+ /* Unconditionally invalidate gpu caches and ensure that we do flush
+ * any residual writes from the previous batch.
+ */
+ return logical_ring_invalidate_all_caches(ringbuf);
+}
+
+/**
+ * execlists_submission() - submit a batchbuffer for execution, Execlists style
+ * @dev: DRM device.
+ * @file: DRM file.
+ * @ring: Engine Command Streamer to submit to.
+ * @ctx: Context to employ for this submission.
+ * @args: execbuffer call arguments.
+ * @vmas: list of vmas.
+ * @batch_obj: the batchbuffer to submit.
+ * @exec_start: batchbuffer start virtual address pointer.
+ * @flags: translated execbuffer call flags.
+ *
+ * This is the evil twin version of i915_gem_ringbuffer_submission. It abstracts
+ * away the submission details of the execbuffer ioctl call.
+ *
+ * Return: non-zero if the submission fails.
+ */
+int intel_execlists_submission(struct drm_device *dev, struct drm_file *file,
+ struct intel_engine_cs *ring,
+ struct intel_context *ctx,
+ struct drm_i915_gem_execbuffer2 *args,
+ struct list_head *vmas,
+ struct drm_i915_gem_object *batch_obj,
+ u64 exec_start, u32 flags)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_ringbuffer *ringbuf = ctx->engine[ring->id].ringbuf;
+ int instp_mode;
+ u32 instp_mask;
+ int ret;
+
+ instp_mode = args->flags & I915_EXEC_CONSTANTS_MASK;
+ instp_mask = I915_EXEC_CONSTANTS_MASK;
+ switch (instp_mode) {
+ case I915_EXEC_CONSTANTS_REL_GENERAL:
+ case I915_EXEC_CONSTANTS_ABSOLUTE:
+ case I915_EXEC_CONSTANTS_REL_SURFACE:
+ if (instp_mode != 0 && ring != &dev_priv->ring[RCS]) {
+ DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
+ return -EINVAL;
+ }
+
+ if (instp_mode != dev_priv->relative_constants_mode) {
+ if (instp_mode == I915_EXEC_CONSTANTS_REL_SURFACE) {
+ DRM_DEBUG("rel surface constants mode invalid on gen5+\n");
+ return -EINVAL;
+ }
+
+ /* The HW changed the meaning on this bit on gen6 */
+ instp_mask &= ~I915_EXEC_CONSTANTS_REL_SURFACE;
+ }
+ break;
+ default:
+ DRM_DEBUG("execbuf with unknown constants: %d\n", instp_mode);
+ return -EINVAL;
+ }
+
+ if (args->num_cliprects != 0) {
+ DRM_DEBUG("clip rectangles are only valid on pre-gen5\n");
+ return -EINVAL;
+ } else {
+ if (args->DR4 == 0xffffffff) {
+ DRM_DEBUG("UXA submitting garbage DR4, fixing up\n");
+ args->DR4 = 0;
+ }
+
+ if (args->DR1 || args->DR4 || args->cliprects_ptr) {
+ DRM_DEBUG("0 cliprects but dirt in cliprects fields\n");
+ return -EINVAL;
+ }
+ }
+
+ if (args->flags & I915_EXEC_GEN7_SOL_RESET) {
+ DRM_DEBUG("sol reset is gen7 only\n");
+ return -EINVAL;
+ }
+
+ ret = execlists_move_to_gpu(ringbuf, vmas);
+ if (ret)
+ return ret;
+
+ if (ring == &dev_priv->ring[RCS] &&
+ instp_mode != dev_priv->relative_constants_mode) {
+ ret = intel_logical_ring_begin(ringbuf, 4);
+ if (ret)
+ return ret;
+
+ intel_logical_ring_emit(ringbuf, MI_NOOP);
+ intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
+ intel_logical_ring_emit(ringbuf, INSTPM);
+ intel_logical_ring_emit(ringbuf, instp_mask << 16 | instp_mode);
+ intel_logical_ring_advance(ringbuf);
+
+ dev_priv->relative_constants_mode = instp_mode;
+ }
+
+ ret = ring->emit_bb_start(ringbuf, exec_start, flags);
+ if (ret)
+ return ret;
+
+ i915_gem_execbuffer_move_to_active(vmas, ring);
+ i915_gem_execbuffer_retire_commands(dev, file, ring, batch_obj);
+
+ return 0;
+}
+
+void intel_logical_ring_stop(struct intel_engine_cs *ring)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+ int ret;
+
+ if (!intel_ring_initialized(ring))
+ return;
+
+ ret = intel_ring_idle(ring);
+ if (ret && !i915_reset_in_progress(&to_i915(ring->dev)->gpu_error))
+ DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
+ ring->name, ret);
+
+ /* TODO: Is this correct with Execlists enabled? */
+ I915_WRITE_MODE(ring, _MASKED_BIT_ENABLE(STOP_RING));
+ if (wait_for_atomic((I915_READ_MODE(ring) & MODE_IDLE) != 0, 1000)) {
+		DRM_ERROR("%s: timed out trying to stop ring\n", ring->name);
+ return;
+ }
+ I915_WRITE_MODE(ring, _MASKED_BIT_DISABLE(STOP_RING));
+}
+
+int logical_ring_flush_all_caches(struct intel_ringbuffer *ringbuf)
+{
+ struct intel_engine_cs *ring = ringbuf->ring;
+ int ret;
+
+ if (!ring->gpu_caches_dirty)
+ return 0;
+
+ ret = ring->emit_flush(ringbuf, 0, I915_GEM_GPU_DOMAINS);
+ if (ret)
+ return ret;
+
+ ring->gpu_caches_dirty = false;
+ return 0;
+}
+
+/**
+ * intel_logical_ring_advance_and_submit() - advance the tail and submit the workload
+ * @ringbuf: Logical Ringbuffer to advance.
+ *
+ * The tail is updated in our logical ringbuffer struct, not in the actual context. What
+ * really happens during submission is that the context and current tail will be placed
+ * on a queue waiting for the ELSP to be ready to accept a new context submission. At that
+ * point, the tail *inside* the context is updated and the ELSP written to.
+ */
+void intel_logical_ring_advance_and_submit(struct intel_ringbuffer *ringbuf)
+{
+ struct intel_engine_cs *ring = ringbuf->ring;
+ struct intel_context *ctx = ringbuf->FIXME_lrc_ctx;
+
+ intel_logical_ring_advance(ringbuf);
+
+ if (intel_ring_stopped(ring))
+ return;
+
+ execlists_context_queue(ring, ctx, ringbuf->tail);
+}
+
+static int logical_ring_alloc_seqno(struct intel_engine_cs *ring,
+ struct intel_context *ctx)
+{
+ if (ring->outstanding_lazy_seqno)
+ return 0;
+
+ if (ring->preallocated_lazy_request == NULL) {
+ struct drm_i915_gem_request *request;
+
+ request = kmalloc(sizeof(*request), GFP_KERNEL);
+ if (request == NULL)
+ return -ENOMEM;
+
+ /* Hold a reference to the context this request belongs to
+ * (we will need it when the time comes to emit/retire the
+ * request).
+ */
+ request->ctx = ctx;
+ i915_gem_context_reference(request->ctx);
+
+ ring->preallocated_lazy_request = request;
+ }
+
+ return i915_gem_get_seqno(ring->dev, &ring->outstanding_lazy_seqno);
+}
+
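+/*
+ * Reclaim ring space by waiting on the oldest outstanding request whose
+ * retirement would free enough room, much like the legacy ring's
+ * wait-for-request path in intel_ringbuffer.c.
+ */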
+static int logical_ring_wait_request(struct intel_ringbuffer *ringbuf,
+ int bytes)
+{
+ struct intel_engine_cs *ring = ringbuf->ring;
+ struct drm_i915_gem_request *request;
+ u32 seqno = 0;
+ int ret;
+
+ if (ringbuf->last_retired_head != -1) {
+ ringbuf->head = ringbuf->last_retired_head;
+ ringbuf->last_retired_head = -1;
+
+ ringbuf->space = intel_ring_space(ringbuf);
+ if (ringbuf->space >= bytes)
+ return 0;
+ }
+
+ list_for_each_entry(request, &ring->request_list, list) {
+ if (__intel_ring_space(request->tail, ringbuf->tail,
+ ringbuf->size) >= bytes) {
+ seqno = request->seqno;
+ break;
+ }
+ }
+
+ if (seqno == 0)
+ return -ENOSPC;
+
+ ret = i915_wait_seqno(ring, seqno);
+ if (ret)
+ return ret;
+
+ i915_gem_retire_requests_ring(ring);
+ ringbuf->head = ringbuf->last_retired_head;
+ ringbuf->last_retired_head = -1;
+
+ ringbuf->space = intel_ring_space(ringbuf);
+ return 0;
+}
+
+static int logical_ring_wait_for_space(struct intel_ringbuffer *ringbuf,
+ int bytes)
+{
+ struct intel_engine_cs *ring = ringbuf->ring;
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ unsigned long end;
+ int ret;
+
+ ret = logical_ring_wait_request(ringbuf, bytes);
+ if (ret != -ENOSPC)
+ return ret;
+
+ /* Force the context submission in case we have been skipping it */
+ intel_logical_ring_advance_and_submit(ringbuf);
+
+ /* With GEM the hangcheck timer should kick us out of the loop,
+ * leaving it early runs the risk of corrupting GEM state (due
+ * to running on almost untested codepaths). But on resume
+ * timers don't work yet, so prevent a complete hang in that
+ * case by choosing an insanely large timeout. */
+ end = jiffies + 60 * HZ;
+
+ do {
+ ringbuf->head = I915_READ_HEAD(ring);
+ ringbuf->space = intel_ring_space(ringbuf);
+ if (ringbuf->space >= bytes) {
+ ret = 0;
+ break;
+ }
+
+ msleep(1);
+
+ if (dev_priv->mm.interruptible && signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+
+ ret = i915_gem_check_wedge(&dev_priv->gpu_error,
+ dev_priv->mm.interruptible);
+ if (ret)
+ break;
+
+ if (time_after(jiffies, end)) {
+ ret = -EBUSY;
+ break;
+ }
+ } while (1);
+
+ return ret;
+}
+
+static int logical_ring_wrap_buffer(struct intel_ringbuffer *ringbuf)
+{
+ uint32_t __iomem *virt;
+ int rem = ringbuf->size - ringbuf->tail;
+
+ if (ringbuf->space < rem) {
+ int ret = logical_ring_wait_for_space(ringbuf, rem);
+
+ if (ret)
+ return ret;
+ }
+
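+ /* Illustrative numbers (a sketch, assuming 4 KiB pages): with the
+ * 32-page (128 KiB) ringbuffer created in
+ * intel_lr_context_deferred_create(), a tail of 131064 leaves
+ * rem == 8 bytes, i.e. two MI_NOOPs below, after which the tail
+ * wraps back to 0. */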
+ virt = ringbuf->virtual_start + ringbuf->tail;
+ rem /= 4;
+ while (rem--)
+ iowrite32(MI_NOOP, virt++);
+
+ ringbuf->tail = 0;
+ ringbuf->space = intel_ring_space(ringbuf);
+
+ return 0;
+}
+
+static int logical_ring_prepare(struct intel_ringbuffer *ringbuf, int bytes)
+{
+ int ret;
+
+ if (unlikely(ringbuf->tail + bytes > ringbuf->effective_size)) {
+ ret = logical_ring_wrap_buffer(ringbuf);
+ if (unlikely(ret))
+ return ret;
+ }
+
+ if (unlikely(ringbuf->space < bytes)) {
+ ret = logical_ring_wait_for_space(ringbuf, bytes);
+ if (unlikely(ret))
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * intel_logical_ring_begin() - prepare the logical ringbuffer to accept some commands
+ *
+ * @ringbuf: Logical ringbuffer.
+ * @num_dwords: number of DWORDs that we plan to write to the ringbuffer.
+ *
+ * The ringbuffer might not be ready to accept the commands right away (maybe it needs to
+ * be wrapped, or to wait a bit for the tail to be updated). This function takes care of that
+ * and also preallocates a request (every workload submission is still mediated through
+ * requests, just as with legacy ringbuffer submission).
+ *
+ * Return: non-zero if the ringbuffer is not ready to be written to.
+ */
+int intel_logical_ring_begin(struct intel_ringbuffer *ringbuf, int num_dwords)
+{
+ struct intel_engine_cs *ring = ringbuf->ring;
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int ret;
+
+ ret = i915_gem_check_wedge(&dev_priv->gpu_error,
+ dev_priv->mm.interruptible);
+ if (ret)
+ return ret;
+
+ ret = logical_ring_prepare(ringbuf, num_dwords * sizeof(uint32_t));
+ if (ret)
+ return ret;
+
+ /* Preallocate the olr before touching the ring */
+ ret = logical_ring_alloc_seqno(ring, ringbuf->FIXME_lrc_ctx);
+ if (ret)
+ return ret;
+
+ ringbuf->space -= num_dwords * sizeof(uint32_t);
+ return 0;
+}
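+
+/*
+ * Illustrative usage (a sketch, not part of this patch): a caller emitting
+ * two dwords and submitting them would do roughly:
+ *
+ *	ret = intel_logical_ring_begin(ringbuf, 2);
+ *	if (ret)
+ *		return ret;
+ *	intel_logical_ring_emit(ringbuf, MI_NOOP);
+ *	intel_logical_ring_emit(ringbuf, MI_NOOP);
+ *	intel_logical_ring_advance_and_submit(ringbuf);
+ *
+ * Only the final call queues the context/tail pair for the ELSP.
+ */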
+
+static int gen8_init_common_ring(struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ I915_WRITE_IMR(ring, ~(ring->irq_enable_mask | ring->irq_keep_mask));
+ I915_WRITE(RING_HWSTAM(ring->mmio_base), 0xffffffff);
+
+ I915_WRITE(RING_MODE_GEN7(ring),
+ _MASKED_BIT_DISABLE(GFX_REPLAY_MODE) |
+ _MASKED_BIT_ENABLE(GFX_RUN_LIST_ENABLE));
+ POSTING_READ(RING_MODE_GEN7(ring));
+ DRM_DEBUG_DRIVER("Execlists enabled for %s\n", ring->name);
+
+ memset(&ring->hangcheck, 0, sizeof(ring->hangcheck));
+
+ return 0;
+}
+
+static int gen8_init_render_ring(struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int ret;
+
+ ret = gen8_init_common_ring(ring);
+ if (ret)
+ return ret;
+
+ /* We need to disable the AsyncFlip performance optimisations in order
+ * to use MI_WAIT_FOR_EVENT within the CS. It should already be
+ * programmed to '1' on all products.
+ *
+ * WaDisableAsyncFlipPerfMode:snb,ivb,hsw,vlv,bdw,chv
+ */
+ I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(ASYNC_FLIP_PERF_DISABLE));
+
+ ret = intel_init_pipe_control(ring);
+ if (ret)
+ return ret;
+
+ I915_WRITE(INSTPM, _MASKED_BIT_ENABLE(INSTPM_FORCE_ORDERING));
+
+ return ret;
+}
+
+static int gen8_emit_bb_start(struct intel_ringbuffer *ringbuf,
+ u64 offset, unsigned flags)
+{
+ bool ppgtt = !(flags & I915_DISPATCH_SECURE);
+ int ret;
+
+ ret = intel_logical_ring_begin(ringbuf, 4);
+ if (ret)
+ return ret;
+
+ /* FIXME(BDW): Address space and security selectors. */
+ intel_logical_ring_emit(ringbuf, MI_BATCH_BUFFER_START_GEN8 | (ppgtt<<8));
+ intel_logical_ring_emit(ringbuf, lower_32_bits(offset));
+ intel_logical_ring_emit(ringbuf, upper_32_bits(offset));
+ intel_logical_ring_emit(ringbuf, MI_NOOP);
+ intel_logical_ring_advance(ringbuf);
+
+ return 0;
+}
+
+static bool gen8_logical_ring_get_irq(struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ unsigned long flags;
+
+ if (!dev->irq_enabled)
+ return false;
+
+ spin_lock_irqsave(&dev_priv->irq_lock, flags);
+ if (ring->irq_refcount++ == 0) {
+ I915_WRITE_IMR(ring, ~(ring->irq_enable_mask | ring->irq_keep_mask));
+ POSTING_READ(RING_IMR(ring->mmio_base));
+ }
+ spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+
+ return true;
+}
+
+static void gen8_logical_ring_put_irq(struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dev_priv->irq_lock, flags);
+ if (--ring->irq_refcount == 0) {
+ I915_WRITE_IMR(ring, ~ring->irq_keep_mask);
+ POSTING_READ(RING_IMR(ring->mmio_base));
+ }
+ spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+}
+
+static int gen8_emit_flush(struct intel_ringbuffer *ringbuf,
+ u32 invalidate_domains,
+ u32 unused)
+{
+ struct intel_engine_cs *ring = ringbuf->ring;
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ uint32_t cmd;
+ int ret;
+
+ ret = intel_logical_ring_begin(ringbuf, 4);
+ if (ret)
+ return ret;
+
+ cmd = MI_FLUSH_DW + 1;
+
+ if (ring == &dev_priv->ring[VCS]) {
+ if (invalidate_domains & I915_GEM_GPU_DOMAINS)
+ cmd |= MI_INVALIDATE_TLB | MI_INVALIDATE_BSD |
+ MI_FLUSH_DW_STORE_INDEX |
+ MI_FLUSH_DW_OP_STOREDW;
+ } else {
+ if (invalidate_domains & I915_GEM_DOMAIN_RENDER)
+ cmd |= MI_INVALIDATE_TLB | MI_FLUSH_DW_STORE_INDEX |
+ MI_FLUSH_DW_OP_STOREDW;
+ }
+
+ intel_logical_ring_emit(ringbuf, cmd);
+ intel_logical_ring_emit(ringbuf,
+ I915_GEM_HWS_SCRATCH_ADDR |
+ MI_FLUSH_DW_USE_GTT);
+ intel_logical_ring_emit(ringbuf, 0); /* upper addr */
+ intel_logical_ring_emit(ringbuf, 0); /* value */
+ intel_logical_ring_advance(ringbuf);
+
+ return 0;
+}
+
+static int gen8_emit_flush_render(struct intel_ringbuffer *ringbuf,
+ u32 invalidate_domains,
+ u32 flush_domains)
+{
+ struct intel_engine_cs *ring = ringbuf->ring;
+ u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+ u32 flags = 0;
+ int ret;
+
+ flags |= PIPE_CONTROL_CS_STALL;
+
+ if (flush_domains) {
+ flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH;
+ flags |= PIPE_CONTROL_DEPTH_CACHE_FLUSH;
+ }
+
+ if (invalidate_domains) {
+ flags |= PIPE_CONTROL_TLB_INVALIDATE;
+ flags |= PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE;
+ flags |= PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE;
+ flags |= PIPE_CONTROL_VF_CACHE_INVALIDATE;
+ flags |= PIPE_CONTROL_CONST_CACHE_INVALIDATE;
+ flags |= PIPE_CONTROL_STATE_CACHE_INVALIDATE;
+ flags |= PIPE_CONTROL_QW_WRITE;
+ flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
+ }
+
+ ret = intel_logical_ring_begin(ringbuf, 6);
+ if (ret)
+ return ret;
+
+ intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
+ intel_logical_ring_emit(ringbuf, flags);
+ intel_logical_ring_emit(ringbuf, scratch_addr);
+ intel_logical_ring_emit(ringbuf, 0);
+ intel_logical_ring_emit(ringbuf, 0);
+ intel_logical_ring_emit(ringbuf, 0);
+ intel_logical_ring_advance(ringbuf);
+
+ return 0;
+}
+
+static u32 gen8_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
+{
+ return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
+}
+
+static void gen8_set_seqno(struct intel_engine_cs *ring, u32 seqno)
+{
+ intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
+}
+
+static int gen8_emit_request(struct intel_ringbuffer *ringbuf)
+{
+ struct intel_engine_cs *ring = ringbuf->ring;
+ u32 cmd;
+ int ret;
+
+ ret = intel_logical_ring_begin(ringbuf, 6);
+ if (ret)
+ return ret;
+
+ cmd = MI_STORE_DWORD_IMM_GEN8;
+ cmd |= MI_GLOBAL_GTT;
+
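+ /* Six dwords follow: a global-GTT MI_STORE_DWORD_IMM of the outstanding
+ * seqno into the status page at I915_GEM_HWS_INDEX (the destination
+ * address is split over two dwords), a user interrupt, and a NOOP pad. */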
+ intel_logical_ring_emit(ringbuf, cmd);
+ intel_logical_ring_emit(ringbuf,
+ (ring->status_page.gfx_addr +
+ (I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT)));
+ intel_logical_ring_emit(ringbuf, 0);
+ intel_logical_ring_emit(ringbuf, ring->outstanding_lazy_seqno);
+ intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
+ intel_logical_ring_emit(ringbuf, MI_NOOP);
+ intel_logical_ring_advance_and_submit(ringbuf);
+
+ return 0;
+}
+
+/**
+ * intel_logical_ring_cleanup() - deallocate the Engine Command Streamer
+ *
+ * @ring: Engine Command Streamer.
+ *
+ */
+void intel_logical_ring_cleanup(struct intel_engine_cs *ring)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+
+ if (!intel_ring_initialized(ring))
+ return;
+
+ intel_logical_ring_stop(ring);
+ WARN_ON((I915_READ_MODE(ring) & MODE_IDLE) == 0);
+ ring->preallocated_lazy_request = NULL;
+ ring->outstanding_lazy_seqno = 0;
+
+ if (ring->cleanup)
+ ring->cleanup(ring);
+
+ i915_cmd_parser_fini_ring(ring);
+
+ if (ring->status_page.obj) {
+ kunmap(sg_page(ring->status_page.obj->pages->sgl));
+ ring->status_page.obj = NULL;
+ }
+}
+
+static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *ring)
+{
+ int ret;
+
+ /* Intentionally left blank. */
+ ring->buffer = NULL;
+
+ ring->dev = dev;
+ INIT_LIST_HEAD(&ring->active_list);
+ INIT_LIST_HEAD(&ring->request_list);
+ init_waitqueue_head(&ring->irq_queue);
+
+ INIT_LIST_HEAD(&ring->execlist_queue);
+ spin_lock_init(&ring->execlist_lock);
+ ring->next_context_status_buffer = 0;
+
+ ret = i915_cmd_parser_init_ring(ring);
+ if (ret)
+ return ret;
+
+ if (ring->init) {
+ ret = ring->init(ring);
+ if (ret)
+ return ret;
+ }
+
+ ret = intel_lr_context_deferred_create(ring->default_context, ring);
+
+ return ret;
+}
+
+static int logical_render_ring_init(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_engine_cs *ring = &dev_priv->ring[RCS];
+
+ ring->name = "render ring";
+ ring->id = RCS;
+ ring->mmio_base = RENDER_RING_BASE;
+ ring->irq_enable_mask =
+ GT_RENDER_USER_INTERRUPT << GEN8_RCS_IRQ_SHIFT;
+ ring->irq_keep_mask =
+ GT_CONTEXT_SWITCH_INTERRUPT << GEN8_RCS_IRQ_SHIFT;
+ if (HAS_L3_DPF(dev))
+ ring->irq_keep_mask |= GT_RENDER_L3_PARITY_ERROR_INTERRUPT;
+
+ ring->init = gen8_init_render_ring;
+ ring->cleanup = intel_fini_pipe_control;
+ ring->get_seqno = gen8_get_seqno;
+ ring->set_seqno = gen8_set_seqno;
+ ring->emit_request = gen8_emit_request;
+ ring->emit_flush = gen8_emit_flush_render;
+ ring->irq_get = gen8_logical_ring_get_irq;
+ ring->irq_put = gen8_logical_ring_put_irq;
+ ring->emit_bb_start = gen8_emit_bb_start;
+
+ return logical_ring_init(dev, ring);
+}
+
+static int logical_bsd_ring_init(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_engine_cs *ring = &dev_priv->ring[VCS];
+
+ ring->name = "bsd ring";
+ ring->id = VCS;
+ ring->mmio_base = GEN6_BSD_RING_BASE;
+ ring->irq_enable_mask =
+ GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT;
+ ring->irq_keep_mask =
+ GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS1_IRQ_SHIFT;
+
+ ring->init = gen8_init_common_ring;
+ ring->get_seqno = gen8_get_seqno;
+ ring->set_seqno = gen8_set_seqno;
+ ring->emit_request = gen8_emit_request;
+ ring->emit_flush = gen8_emit_flush;
+ ring->irq_get = gen8_logical_ring_get_irq;
+ ring->irq_put = gen8_logical_ring_put_irq;
+ ring->emit_bb_start = gen8_emit_bb_start;
+
+ return logical_ring_init(dev, ring);
+}
+
+static int logical_bsd2_ring_init(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_engine_cs *ring = &dev_priv->ring[VCS2];
+
+ ring->name = "bds2 ring";
+ ring->id = VCS2;
+ ring->mmio_base = GEN8_BSD2_RING_BASE;
+ ring->irq_enable_mask =
+ GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT;
+ ring->irq_keep_mask =
+ GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT;
+
+ ring->init = gen8_init_common_ring;
+ ring->get_seqno = gen8_get_seqno;
+ ring->set_seqno = gen8_set_seqno;
+ ring->emit_request = gen8_emit_request;
+ ring->emit_flush = gen8_emit_flush;
+ ring->irq_get = gen8_logical_ring_get_irq;
+ ring->irq_put = gen8_logical_ring_put_irq;
+ ring->emit_bb_start = gen8_emit_bb_start;
+
+ return logical_ring_init(dev, ring);
+}
+
+static int logical_blt_ring_init(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_engine_cs *ring = &dev_priv->ring[BCS];
+
+ ring->name = "blitter ring";
+ ring->id = BCS;
+ ring->mmio_base = BLT_RING_BASE;
+ ring->irq_enable_mask =
+ GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT;
+ ring->irq_keep_mask =
+ GT_CONTEXT_SWITCH_INTERRUPT << GEN8_BCS_IRQ_SHIFT;
+
+ ring->init = gen8_init_common_ring;
+ ring->get_seqno = gen8_get_seqno;
+ ring->set_seqno = gen8_set_seqno;
+ ring->emit_request = gen8_emit_request;
+ ring->emit_flush = gen8_emit_flush;
+ ring->irq_get = gen8_logical_ring_get_irq;
+ ring->irq_put = gen8_logical_ring_put_irq;
+ ring->emit_bb_start = gen8_emit_bb_start;
+
+ return logical_ring_init(dev, ring);
+}
+
+static int logical_vebox_ring_init(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_engine_cs *ring = &dev_priv->ring[VECS];
+
+ ring->name = "video enhancement ring";
+ ring->id = VECS;
+ ring->mmio_base = VEBOX_RING_BASE;
+ ring->irq_enable_mask =
+ GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT;
+ ring->irq_keep_mask =
+ GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VECS_IRQ_SHIFT;
+
+ ring->init = gen8_init_common_ring;
+ ring->get_seqno = gen8_get_seqno;
+ ring->set_seqno = gen8_set_seqno;
+ ring->emit_request = gen8_emit_request;
+ ring->emit_flush = gen8_emit_flush;
+ ring->irq_get = gen8_logical_ring_get_irq;
+ ring->irq_put = gen8_logical_ring_put_irq;
+ ring->emit_bb_start = gen8_emit_bb_start;
+
+ return logical_ring_init(dev, ring);
+}
+
+/**
+ * intel_logical_rings_init() - allocate, populate and init the Engine Command Streamers
+ * @dev: DRM device.
+ *
+ * This function inits the engines for an Execlists submission style (the equivalent in the
+ * legacy ringbuffer submission world would be i915_gem_init_rings). It does so only for
+ * those engines that are present in the hardware.
+ *
+ * Return: non-zero if the initialization failed.
+ */
+int intel_logical_rings_init(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int ret;
+
+ ret = logical_render_ring_init(dev);
+ if (ret)
+ return ret;
+
+ if (HAS_BSD(dev)) {
+ ret = logical_bsd_ring_init(dev);
+ if (ret)
+ goto cleanup_render_ring;
+ }
+
+ if (HAS_BLT(dev)) {
+ ret = logical_blt_ring_init(dev);
+ if (ret)
+ goto cleanup_bsd_ring;
+ }
+
+ if (HAS_VEBOX(dev)) {
+ ret = logical_vebox_ring_init(dev);
+ if (ret)
+ goto cleanup_blt_ring;
+ }
+
+ if (HAS_BSD2(dev)) {
+ ret = logical_bsd2_ring_init(dev);
+ if (ret)
+ goto cleanup_vebox_ring;
+ }
+
+ ret = i915_gem_set_seqno(dev, ((u32)~0 - 0x1000));
+ if (ret)
+ goto cleanup_bsd2_ring;
+
+ return 0;
+
+cleanup_bsd2_ring:
+ intel_logical_ring_cleanup(&dev_priv->ring[VCS2]);
+cleanup_vebox_ring:
+ intel_logical_ring_cleanup(&dev_priv->ring[VECS]);
+cleanup_blt_ring:
+ intel_logical_ring_cleanup(&dev_priv->ring[BCS]);
+cleanup_bsd_ring:
+ intel_logical_ring_cleanup(&dev_priv->ring[VCS]);
+cleanup_render_ring:
+ intel_logical_ring_cleanup(&dev_priv->ring[RCS]);
+
+ return ret;
+}
+
+int intel_lr_context_render_state_init(struct intel_engine_cs *ring,
+ struct intel_context *ctx)
+{
+ struct intel_ringbuffer *ringbuf = ctx->engine[ring->id].ringbuf;
+ struct render_state so;
+ struct drm_i915_file_private *file_priv = ctx->file_priv;
+ struct drm_file *file = file_priv ? file_priv->file : NULL;
+ int ret;
+
+ ret = i915_gem_render_state_prepare(ring, &so);
+ if (ret)
+ return ret;
+
+ if (so.rodata == NULL)
+ return 0;
+
+ ret = ring->emit_bb_start(ringbuf,
+ so.ggtt_offset,
+ I915_DISPATCH_SECURE);
+ if (ret)
+ goto out;
+
+ i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), ring);
+
+ ret = __i915_add_request(ring, file, so.obj, NULL);
+ /* intel_logical_ring_add_request moves object to inactive if it
+ * fails */
+out:
+ i915_gem_render_state_fini(&so);
+ return ret;
+}
+
+static int
+populate_lr_context(struct intel_context *ctx, struct drm_i915_gem_object *ctx_obj,
+ struct intel_engine_cs *ring, struct intel_ringbuffer *ringbuf)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_gem_object *ring_obj = ringbuf->obj;
+ struct i915_hw_ppgtt *ppgtt = ctx->ppgtt;
+ struct page *page;
+ uint32_t *reg_state;
+ int ret;
+
+ if (!ppgtt)
+ ppgtt = dev_priv->mm.aliasing_ppgtt;
+
+ ret = i915_gem_object_set_to_cpu_domain(ctx_obj, true);
+ if (ret) {
+ DRM_DEBUG_DRIVER("Could not set to CPU domain\n");
+ return ret;
+ }
+
+ ret = i915_gem_object_get_pages(ctx_obj);
+ if (ret) {
+ DRM_DEBUG_DRIVER("Could not get object pages\n");
+ return ret;
+ }
+
+ i915_gem_object_pin_pages(ctx_obj);
+
+ /* The second page of the context object contains some fields which must
+ * be set up prior to the first execution. */
+ page = i915_gem_object_get_page(ctx_obj, 1);
+ reg_state = kmap_atomic(page);
+
+ /* A context is actually a big batch buffer with several MI_LOAD_REGISTER_IMM
+ * commands followed by (reg, value) pairs. The values we are setting here are
+ * only for the first context restore: on a subsequent save, the GPU will
+ * recreate this batchbuffer with new values (including all the missing
+ * MI_LOAD_REGISTER_IMM commands that we are not initializing here). */
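+ /* For example: reg_state[CTX_RING_TAIL] below holds the RING_TAIL register
+ * offset for this engine and reg_state[CTX_RING_TAIL+1] its value (zero
+ * until the first submission updates the tail inside the context). */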
+ if (ring->id == RCS)
+ reg_state[CTX_LRI_HEADER_0] = MI_LOAD_REGISTER_IMM(14);
+ else
+ reg_state[CTX_LRI_HEADER_0] = MI_LOAD_REGISTER_IMM(11);
+ reg_state[CTX_LRI_HEADER_0] |= MI_LRI_FORCE_POSTED;
+ reg_state[CTX_CONTEXT_CONTROL] = RING_CONTEXT_CONTROL(ring);
+ reg_state[CTX_CONTEXT_CONTROL+1] =
+ _MASKED_BIT_ENABLE((1<<3) | MI_RESTORE_INHIBIT);
+ reg_state[CTX_RING_HEAD] = RING_HEAD(ring->mmio_base);
+ reg_state[CTX_RING_HEAD+1] = 0;
+ reg_state[CTX_RING_TAIL] = RING_TAIL(ring->mmio_base);
+ reg_state[CTX_RING_TAIL+1] = 0;
+ reg_state[CTX_RING_BUFFER_START] = RING_START(ring->mmio_base);
+ reg_state[CTX_RING_BUFFER_START+1] = i915_gem_obj_ggtt_offset(ring_obj);
+ reg_state[CTX_RING_BUFFER_CONTROL] = RING_CTL(ring->mmio_base);
+ reg_state[CTX_RING_BUFFER_CONTROL+1] =
+ ((ringbuf->size - PAGE_SIZE) & RING_NR_PAGES) | RING_VALID;
+ reg_state[CTX_BB_HEAD_U] = ring->mmio_base + 0x168;
+ reg_state[CTX_BB_HEAD_U+1] = 0;
+ reg_state[CTX_BB_HEAD_L] = ring->mmio_base + 0x140;
+ reg_state[CTX_BB_HEAD_L+1] = 0;
+ reg_state[CTX_BB_STATE] = ring->mmio_base + 0x110;
+ reg_state[CTX_BB_STATE+1] = (1<<5);
+ reg_state[CTX_SECOND_BB_HEAD_U] = ring->mmio_base + 0x11c;
+ reg_state[CTX_SECOND_BB_HEAD_U+1] = 0;
+ reg_state[CTX_SECOND_BB_HEAD_L] = ring->mmio_base + 0x114;
+ reg_state[CTX_SECOND_BB_HEAD_L+1] = 0;
+ reg_state[CTX_SECOND_BB_STATE] = ring->mmio_base + 0x118;
+ reg_state[CTX_SECOND_BB_STATE+1] = 0;
+ if (ring->id == RCS) {
+ /* TODO: according to BSpec, the register state context
+ * for CHV does not have these. OTOH, these registers do
+ * exist in CHV. I'm waiting for a clarification */
+ reg_state[CTX_BB_PER_CTX_PTR] = ring->mmio_base + 0x1c0;
+ reg_state[CTX_BB_PER_CTX_PTR+1] = 0;
+ reg_state[CTX_RCS_INDIRECT_CTX] = ring->mmio_base + 0x1c4;
+ reg_state[CTX_RCS_INDIRECT_CTX+1] = 0;
+ reg_state[CTX_RCS_INDIRECT_CTX_OFFSET] = ring->mmio_base + 0x1c8;
+ reg_state[CTX_RCS_INDIRECT_CTX_OFFSET+1] = 0;
+ }
+ reg_state[CTX_LRI_HEADER_1] = MI_LOAD_REGISTER_IMM(9);
+ reg_state[CTX_LRI_HEADER_1] |= MI_LRI_FORCE_POSTED;
+ reg_state[CTX_CTX_TIMESTAMP] = ring->mmio_base + 0x3a8;
+ reg_state[CTX_CTX_TIMESTAMP+1] = 0;
+ reg_state[CTX_PDP3_UDW] = GEN8_RING_PDP_UDW(ring, 3);
+ reg_state[CTX_PDP3_LDW] = GEN8_RING_PDP_LDW(ring, 3);
+ reg_state[CTX_PDP2_UDW] = GEN8_RING_PDP_UDW(ring, 2);
+ reg_state[CTX_PDP2_LDW] = GEN8_RING_PDP_LDW(ring, 2);
+ reg_state[CTX_PDP1_UDW] = GEN8_RING_PDP_UDW(ring, 1);
+ reg_state[CTX_PDP1_LDW] = GEN8_RING_PDP_LDW(ring, 1);
+ reg_state[CTX_PDP0_UDW] = GEN8_RING_PDP_UDW(ring, 0);
+ reg_state[CTX_PDP0_LDW] = GEN8_RING_PDP_LDW(ring, 0);
+ reg_state[CTX_PDP3_UDW+1] = upper_32_bits(ppgtt->pd_dma_addr[3]);
+ reg_state[CTX_PDP3_LDW+1] = lower_32_bits(ppgtt->pd_dma_addr[3]);
+ reg_state[CTX_PDP2_UDW+1] = upper_32_bits(ppgtt->pd_dma_addr[2]);
+ reg_state[CTX_PDP2_LDW+1] = lower_32_bits(ppgtt->pd_dma_addr[2]);
+ reg_state[CTX_PDP1_UDW+1] = upper_32_bits(ppgtt->pd_dma_addr[1]);
+ reg_state[CTX_PDP1_LDW+1] = lower_32_bits(ppgtt->pd_dma_addr[1]);
+ reg_state[CTX_PDP0_UDW+1] = upper_32_bits(ppgtt->pd_dma_addr[0]);
+ reg_state[CTX_PDP0_LDW+1] = lower_32_bits(ppgtt->pd_dma_addr[0]);
+ if (ring->id == RCS) {
+ reg_state[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
+ reg_state[CTX_R_PWR_CLK_STATE] = 0x20c8;
+ reg_state[CTX_R_PWR_CLK_STATE+1] = 0;
+ }
+
+ kunmap_atomic(reg_state);
+
+ ctx_obj->dirty = 1;
+ set_page_dirty(page);
+ i915_gem_object_unpin_pages(ctx_obj);
+
+ return 0;
+}
+
+/**
+ * intel_lr_context_free() - free the LRC specific bits of a context
+ * @ctx: the LR context to free.
+ *
+ * The real context freeing is done in i915_gem_context_free: this only
+ * takes care of the bits that are LRC related: the per-engine backing
+ * objects and the logical ringbuffer.
+ */
+void intel_lr_context_free(struct intel_context *ctx)
+{
+ int i;
+
+ for (i = 0; i < I915_NUM_RINGS; i++) {
+ struct drm_i915_gem_object *ctx_obj = ctx->engine[i].state;
+ struct intel_ringbuffer *ringbuf = ctx->engine[i].ringbuf;
+
+ if (ctx_obj) {
+ intel_destroy_ringbuffer_obj(ringbuf);
+ kfree(ringbuf);
+ i915_gem_object_ggtt_unpin(ctx_obj);
+ drm_gem_object_unreference(&ctx_obj->base);
+ }
+ }
+}
+
+static uint32_t get_lr_context_size(struct intel_engine_cs *ring)
+{
+ int ret = 0;
+
+ WARN_ON(INTEL_INFO(ring->dev)->gen != 8);
+
+ switch (ring->id) {
+ case RCS:
+ ret = GEN8_LR_CONTEXT_RENDER_SIZE;
+ break;
+ case VCS:
+ case BCS:
+ case VECS:
+ case VCS2:
+ ret = GEN8_LR_CONTEXT_OTHER_SIZE;
+ break;
+ }
+
+ return ret;
+}
+
+/**
+ * intel_lr_context_deferred_create() - create the LRC specific bits of a context
+ * @ctx: LR context to create.
+ * @ring: engine to be used with the context.
+ *
+ * This function can be called more than once, with different engines, if we plan
+ * to use the context with them. The context backing objects and the ringbuffers
+ * (especially the ringbuffer backing objects) suck a lot of memory up, and that's why
+ * the creation is a deferred call: it's better to make sure first that we need to use
+ * a given ring with the context.
+ *
+ * Return: non-zero on error.
+ */
+int intel_lr_context_deferred_create(struct intel_context *ctx,
+ struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_gem_object *ctx_obj;
+ uint32_t context_size;
+ struct intel_ringbuffer *ringbuf;
+ int ret;
+
+ WARN_ON(ctx->legacy_hw_ctx.rcs_state != NULL);
+ if (ctx->engine[ring->id].state)
+ return 0;
+
+ context_size = round_up(get_lr_context_size(ring), 4096);
+
+ ctx_obj = i915_gem_alloc_context_obj(dev, context_size);
+ if (IS_ERR(ctx_obj)) {
+ ret = PTR_ERR(ctx_obj);
+ DRM_DEBUG_DRIVER("Alloc LRC backing obj failed: %d\n", ret);
+ return ret;
+ }
+
+ ret = i915_gem_obj_ggtt_pin(ctx_obj, GEN8_LR_CONTEXT_ALIGN, 0);
+ if (ret) {
+ DRM_DEBUG_DRIVER("Pin LRC backing obj failed: %d\n", ret);
+ drm_gem_object_unreference(&ctx_obj->base);
+ return ret;
+ }
+
+ ringbuf = kzalloc(sizeof(*ringbuf), GFP_KERNEL);
+ if (!ringbuf) {
+ DRM_DEBUG_DRIVER("Failed to allocate ringbuffer %s\n",
+ ring->name);
+ i915_gem_object_ggtt_unpin(ctx_obj);
+ drm_gem_object_unreference(&ctx_obj->base);
+ ret = -ENOMEM;
+ return ret;
+ }
+
+ ringbuf->ring = ring;
+ ringbuf->FIXME_lrc_ctx = ctx;
+
+ ringbuf->size = 32 * PAGE_SIZE;
+ ringbuf->effective_size = ringbuf->size;
+ ringbuf->head = 0;
+ ringbuf->tail = 0;
+ ringbuf->space = ringbuf->size;
+ ringbuf->last_retired_head = -1;
+
+ /* TODO: For now we put this in the mappable region so that we can reuse
+ * the existing ringbuffer code which ioremaps it. When we start
+ * creating many contexts, this will no longer work and we must switch
+ * to a kmapish interface.
+ */
+ ret = intel_alloc_ringbuffer_obj(dev, ringbuf);
+ if (ret) {
+ DRM_DEBUG_DRIVER("Failed to allocate ringbuffer obj %s: %d\n",
+ ring->name, ret);
+ goto error;
+ }
+
+ ret = populate_lr_context(ctx, ctx_obj, ring, ringbuf);
+ if (ret) {
+ DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret);
+ intel_destroy_ringbuffer_obj(ringbuf);
+ goto error;
+ }
+
+ ctx->engine[ring->id].ringbuf = ringbuf;
+ ctx->engine[ring->id].state = ctx_obj;
+
+ if (ctx == ring->default_context) {
+ /* The status page is offset 0 from the default context object
+ * in LRC mode. */
+ ring->status_page.gfx_addr = i915_gem_obj_ggtt_offset(ctx_obj);
+ ring->status_page.page_addr =
+ kmap(sg_page(ctx_obj->pages->sgl));
+ if (ring->status_page.page_addr == NULL)
+ return -ENOMEM;
+ ring->status_page.obj = ctx_obj;
+ }
+
+ if (ring->id == RCS && !ctx->rcs_initialized) {
+ ret = intel_lr_context_render_state_init(ring, ctx);
+ if (ret) {
+ DRM_ERROR("Init render state failed: %d\n", ret);
+ ctx->engine[ring->id].ringbuf = NULL;
+ ctx->engine[ring->id].state = NULL;
+ intel_destroy_ringbuffer_obj(ringbuf);
+ goto error;
+ }
+ ctx->rcs_initialized = true;
+ }
+
+ return 0;
+
+error:
+ kfree(ringbuf);
+ i915_gem_object_ggtt_unpin(ctx_obj);
+ drm_gem_object_unreference(&ctx_obj->base);
+ return ret;
+}
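+
+/*
+ * Illustrative call site (a sketch): the default context is populated
+ * eagerly from logical_ring_init() above; any other context gets its LRC
+ * bits lazily, on first use with a given engine:
+ *
+ *	ret = intel_lr_context_deferred_create(ctx, ring);
+ *	if (ret)
+ *		return ret;
+ *	ringbuf = ctx->engine[ring->id].ringbuf;
+ */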
--- /dev/null
+/*
+ * Copyright © 2014 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef _INTEL_LRC_H_
+#define _INTEL_LRC_H_
+
+/* Execlists regs */
+#define RING_ELSP(ring) ((ring)->mmio_base+0x230)
+#define RING_EXECLIST_STATUS(ring) ((ring)->mmio_base+0x234)
+#define RING_CONTEXT_CONTROL(ring) ((ring)->mmio_base+0x244)
+#define RING_CONTEXT_STATUS_BUF(ring) ((ring)->mmio_base+0x370)
+#define RING_CONTEXT_STATUS_PTR(ring) ((ring)->mmio_base+0x3a0)
+
+/* Logical Rings */
+void intel_logical_ring_stop(struct intel_engine_cs *ring);
+void intel_logical_ring_cleanup(struct intel_engine_cs *ring);
+int intel_logical_rings_init(struct drm_device *dev);
+
+int logical_ring_flush_all_caches(struct intel_ringbuffer *ringbuf);
+void intel_logical_ring_advance_and_submit(struct intel_ringbuffer *ringbuf);
+/**
+ * intel_logical_ring_advance() - advance the ringbuffer tail
+ * @ringbuf: Ringbuffer to advance.
+ *
+ * The tail is only updated in our logical ringbuffer struct.
+ */
+static inline void intel_logical_ring_advance(struct intel_ringbuffer *ringbuf)
+{
+ ringbuf->tail &= ringbuf->size - 1;
+}
+/**
+ * intel_logical_ring_emit() - write a DWORD to the ringbuffer.
+ * @ringbuf: Ringbuffer to write to.
+ * @data: DWORD to write.
+ */
+static inline void intel_logical_ring_emit(struct intel_ringbuffer *ringbuf,
+ u32 data)
+{
+ iowrite32(data, ringbuf->virtual_start + ringbuf->tail);
+ ringbuf->tail += 4;
+}
+int intel_logical_ring_begin(struct intel_ringbuffer *ringbuf, int num_dwords);
+
+/* Logical Ring Contexts */
+int intel_lr_context_render_state_init(struct intel_engine_cs *ring,
+ struct intel_context *ctx);
+void intel_lr_context_free(struct intel_context *ctx);
+int intel_lr_context_deferred_create(struct intel_context *ctx,
+ struct intel_engine_cs *ring);
+
+/* Execlists */
+int intel_sanitize_enable_execlists(struct drm_device *dev, int enable_execlists);
+int intel_execlists_submission(struct drm_device *dev, struct drm_file *file,
+ struct intel_engine_cs *ring,
+ struct intel_context *ctx,
+ struct drm_i915_gem_execbuffer2 *args,
+ struct list_head *vmas,
+ struct drm_i915_gem_object *batch_obj,
+ u64 exec_start, u32 flags);
+u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj);
+
+/**
+ * struct intel_ctx_submit_request - queued context submission request
+ * @ctx: Context to submit to the ELSP.
+ * @ring: Engine to submit it to.
+ * @tail: how far into the context's ringbuffer this request goes.
+ * @execlist_link: link in the submission queue.
+ * @work: workqueue for processing this request in a bottom half.
+ * @elsp_submitted: no. of times this request has been sent to the ELSP.
+ *
+ * The ELSP only accepts two elements at a time, so we queue context/tail
+ * pairs on a given queue (ring->execlist_queue) until the hardware is
+ * available. The queue serves a double purpose: we also use it to keep track
+ * of the up to 2 contexts currently in the hardware (usually one in execution
+ * and the other queued up by the GPU): We only remove elements from the head
+ * of the queue when the hardware informs us that an element has been
+ * completed.
+ *
+ * All accesses to the queue are mediated by a spinlock (ring->execlist_lock).
+ */
+struct intel_ctx_submit_request {
+ struct intel_context *ctx;
+ struct intel_engine_cs *ring;
+ u32 tail;
+
+ struct list_head execlist_link;
+ struct work_struct work;
+
+ int elsp_submitted;
+};
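+
+/*
+ * Illustrative timeline (a sketch): with requests for contexts A and B
+ * queued, both may occupy the two ELSP ports at once; a third request C
+ * waits on ring->execlist_queue until the hardware reports the element at
+ * the head of the queue (A) as completed.
+ */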
+
+void intel_execlists_handle_ctx_events(struct intel_engine_cs *ring);
+
+#endif /* _INTEL_LRC_H_ */
.destroy = intel_encoder_destroy,
};
-static int __init intel_no_lvds_dmi_callback(const struct dmi_system_id *id)
+static int intel_no_lvds_dmi_callback(const struct dmi_system_id *id)
{
DRM_INFO("Skipping LVDS initialization for %s\n", id->ident);
return 1;
struct intel_encoder *encoder;
struct intel_lvds_encoder *lvds_encoder;
- list_for_each_entry(encoder, &dev->mode_config.encoder_list,
- base.head) {
+ for_each_intel_encoder(dev, encoder) {
if (encoder->type == INTEL_OUTPUT_LVDS) {
lvds_encoder = to_lvds_encoder(&encoder->base);
spin_lock_irqsave(&dev_priv->backlight_lock, flags);
+ if (panel->backlight.device)
+ panel->backlight.device->props.power = FB_BLANK_POWERDOWN;
panel->backlight.enabled = false;
dev_priv->display.disable_backlight(connector);
cpu_ctl2 = I915_READ(BLC_PWM_CPU_CTL2);
if (cpu_ctl2 & BLM_PWM_ENABLE) {
- WARN(1, "cpu backlight already enabled\n");
+ DRM_DEBUG_KMS("cpu backlight already enabled\n");
cpu_ctl2 &= ~BLM_PWM_ENABLE;
I915_WRITE(BLC_PWM_CPU_CTL2, cpu_ctl2);
}
ctl = I915_READ(BLC_PWM_CTL);
if (ctl & BACKLIGHT_DUTY_CYCLE_MASK_PNV) {
- WARN(1, "backlight already enabled\n");
+ DRM_DEBUG_KMS("backlight already enabled\n");
I915_WRITE(BLC_PWM_CTL, 0);
}
ctl2 = I915_READ(BLC_PWM_CTL2);
if (ctl2 & BLM_PWM_ENABLE) {
- WARN(1, "backlight already enabled\n");
+ DRM_DEBUG_KMS("backlight already enabled\n");
ctl2 &= ~BLM_PWM_ENABLE;
I915_WRITE(BLC_PWM_CTL2, ctl2);
}
ctl2 = I915_READ(VLV_BLC_PWM_CTL2(pipe));
if (ctl2 & BLM_PWM_ENABLE) {
- WARN(1, "backlight already enabled\n");
+ DRM_DEBUG_KMS("backlight already enabled\n");
ctl2 &= ~BLM_PWM_ENABLE;
I915_WRITE(VLV_BLC_PWM_CTL2(pipe), ctl2);
}
dev_priv->display.enable_backlight(connector);
panel->backlight.enabled = true;
+ if (panel->backlight.device)
+ panel->backlight.device->props.power = FB_BLANK_UNBLANK;
spin_unlock_irqrestore(&dev_priv->backlight_lock, flags);
}
static int intel_backlight_device_update_status(struct backlight_device *bd)
{
struct intel_connector *connector = bl_get_data(bd);
+ struct intel_panel *panel = &connector->panel;
struct drm_device *dev = connector->base.dev;
drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
bd->props.brightness, bd->props.max_brightness);
intel_panel_set_backlight(connector, bd->props.brightness,
bd->props.max_brightness);
+
+ /*
+ * Allow flipping bl_power as a sub-state of enabled. Sadly the
+ * backlight class device does not make it easy to differentiate
+ * between callbacks for brightness and bl_power, so our backlight_power
+ * callback needs to take this into account.
+ */
+ if (panel->backlight.enabled) {
+ if (panel->backlight_power) {
+ bool enable = bd->props.power == FB_BLANK_UNBLANK &&
+ bd->props.brightness != 0;
+ panel->backlight_power(connector, enable);
+ }
+ } else {
+ bd->props.power = FB_BLANK_POWERDOWN;
+ }
+
drm_modeset_unlock(&dev->mode_config.connection_mutex);
return 0;
}
panel->backlight.level,
props.max_brightness);
+ if (panel->backlight.enabled)
+ props.power = FB_BLANK_UNBLANK;
+ else
+ props.power = FB_BLANK_POWERDOWN;
+
/*
* Note: using the same name independent of the connector prevents
* registration of multiple backlight devices in the driver.
enum pipe pipe;
u32 ctl, ctl2, val;
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
u32 cur_val = I915_READ(VLV_BLC_PWM_CTL(pipe));
/* Skip if the modulation freq is already set */
dpfc_ctl |= IVB_DPFC_CTL_FENCE_EN;
+ if (dev_priv->fbc.false_color)
+ dpfc_ctl |= FBC_CTL_FALSE_COLOR;
+
I915_WRITE(ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
if (IS_IVYBRIDGE(dev)) {
return dev_priv->display.fbc_enabled(dev);
}
+void gen8_fbc_sw_flush(struct drm_device *dev, u32 value)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ if (!IS_GEN8(dev))
+ return;
+
+ I915_WRITE(MSG_FBC_REND_STATE, value);
+}
+
static void intel_fbc_work_fn(struct work_struct *__work)
{
struct intel_fbc_work *work =
DRM_DEBUG_KMS("framebuffer not tiled or fenced, disabling compression\n");
goto out_disable;
}
+ if (INTEL_INFO(dev)->gen <= 4 && !IS_G4X(dev) &&
+ to_intel_plane(crtc->primary)->rotation != BIT(DRM_ROTATE_0)) {
+ if (set_no_fbc_reason(dev_priv, FBC_UNSUPPORTED_MODE))
+ DRM_DEBUG_KMS("Rotation unsupported, disabling\n");
+ goto out_disable;
+ }
/* If the kernel debugger is active, always disable compression */
if (in_dbg_master())
* A value of 5us seems to be a good balance; safe for very low end
* platforms but not overly aggressive on lower latency configs.
*/
-static const int latency_ns = 5000;
+static const int pessimal_latency_ns = 5000;
static int i9xx_get_fifo_size(struct drm_device *dev, int plane)
{
.guard_size = 2,
.cacheline_size = I915_FIFO_LINE_SIZE,
};
-static const struct intel_watermark_params i830_wm_info = {
+static const struct intel_watermark_params i830_a_wm_info = {
.fifo_size = I855GM_FIFO_SIZE,
.max_wm = I915_MAX_WM,
.default_wm = 1,
.guard_size = 2,
.cacheline_size = I830_FIFO_LINE_SIZE,
};
+static const struct intel_watermark_params i830_bc_wm_info = {
+ .fifo_size = I855GM_FIFO_SIZE,
+ .max_wm = I915_MAX_WM/2,
+ .default_wm = 1,
+ .guard_size = 2,
+ .cacheline_size = I830_FIFO_LINE_SIZE,
+};
static const struct intel_watermark_params i845_wm_info = {
.fifo_size = I830_FIFO_SIZE,
.max_wm = I915_MAX_WM,
wm_size = wm->max_wm;
if (wm_size <= 0)
wm_size = wm->default_wm;
+
+ /*
+ * Bspec seems to indicate that the value shouldn't be lower than
+ * 'burst size + 1'. Certainly 830 is quite unhappy with low values.
+ * Let's go for 8, which is the burst size, since certain platforms
+ * already use a hardcoded 8 (which is what the spec says should be
+ * done).
+ */
+ if (wm_size <= 8)
+ wm_size = 8;
+
return wm_size;
}
display, cursor);
}
-static bool vlv_compute_drain_latency(struct drm_device *dev,
- int plane,
- int *plane_prec_mult,
- int *plane_dl,
- int *cursor_prec_mult,
- int *cursor_dl)
+static bool vlv_compute_drain_latency(struct drm_crtc *crtc,
+ int pixel_size,
+ int *prec_mult,
+ int *drain_latency)
{
- struct drm_crtc *crtc;
- int clock, pixel_size;
int entries;
+ int clock = to_intel_crtc(crtc)->config.adjusted_mode.crtc_clock;
- crtc = intel_get_crtc_for_plane(dev, plane);
- if (!intel_crtc_active(crtc))
+ if (WARN(clock == 0, "Pixel clock is zero!\n"))
return false;
- clock = to_intel_crtc(crtc)->config.adjusted_mode.crtc_clock;
- pixel_size = crtc->primary->fb->bits_per_pixel / 8; /* BPP */
+ if (WARN(pixel_size == 0, "Pixel size is zero!\n"))
+ return false;
- entries = (clock / 1000) * pixel_size;
- *plane_prec_mult = (entries > 128) ?
- DRAIN_LATENCY_PRECISION_64 : DRAIN_LATENCY_PRECISION_32;
- *plane_dl = (64 * (*plane_prec_mult) * 4) / entries;
+ entries = DIV_ROUND_UP(clock, 1000) * pixel_size;
+ *prec_mult = (entries > 128) ? DRAIN_LATENCY_PRECISION_64 :
+ DRAIN_LATENCY_PRECISION_32;
+ *drain_latency = (64 * (*prec_mult) * 4) / entries;
- entries = (clock / 1000) * 4; /* BPP is always 4 for cursor */
- *cursor_prec_mult = (entries > 128) ?
- DRAIN_LATENCY_PRECISION_64 : DRAIN_LATENCY_PRECISION_32;
- *cursor_dl = (64 * (*cursor_prec_mult) * 4) / entries;
+ if (*drain_latency > DRAIN_LATENCY_MASK)
+ *drain_latency = DRAIN_LATENCY_MASK;
return true;
}
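+
+/*
+ * Worked example (illustrative, assuming DRAIN_LATENCY_PRECISION_64 has the
+ * numeric value 64): with a 148500 kHz pixel clock and 4 bytes per pixel,
+ * entries = DIV_ROUND_UP(148500, 1000) * 4 = 596 > 128, so the 64 precision
+ * multiplier is chosen and drain_latency = (64 * 64 * 4) / 596 = 27.
+ */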
* latency value.
*/
-static void vlv_update_drain_latency(struct drm_device *dev)
+static void vlv_update_drain_latency(struct drm_crtc *crtc)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
- int planea_prec, planea_dl, planeb_prec, planeb_dl;
- int cursora_prec, cursora_dl, cursorb_prec, cursorb_dl;
- int plane_prec_mult, cursor_prec_mult; /* Precision multiplier is
- either 16 or 32 */
+ struct drm_i915_private *dev_priv = crtc->dev->dev_private;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ int pixel_size;
+ int drain_latency;
+ enum pipe pipe = intel_crtc->pipe;
+ int plane_prec, prec_mult, plane_dl;
+
+ plane_dl = I915_READ(VLV_DDL(pipe)) & ~(DDL_PLANE_PRECISION_64 |
+ DRAIN_LATENCY_MASK | DDL_CURSOR_PRECISION_64 |
+ (DRAIN_LATENCY_MASK << DDL_CURSOR_SHIFT));
- /* For plane A, Cursor A */
- if (vlv_compute_drain_latency(dev, 0, &plane_prec_mult, &planea_dl,
- &cursor_prec_mult, &cursora_dl)) {
- cursora_prec = (cursor_prec_mult == DRAIN_LATENCY_PRECISION_32) ?
- DDL_CURSORA_PRECISION_32 : DDL_CURSORA_PRECISION_64;
- planea_prec = (plane_prec_mult == DRAIN_LATENCY_PRECISION_32) ?
- DDL_PLANEA_PRECISION_32 : DDL_PLANEA_PRECISION_64;
+ if (!intel_crtc_active(crtc)) {
+ I915_WRITE(VLV_DDL(pipe), plane_dl);
+ return;
+ }
- I915_WRITE(VLV_DDL1, cursora_prec |
- (cursora_dl << DDL_CURSORA_SHIFT) |
- planea_prec | planea_dl);
+ /* Primary plane Drain Latency */
+ pixel_size = crtc->primary->fb->bits_per_pixel / 8; /* BPP */
+ if (vlv_compute_drain_latency(crtc, pixel_size, &prec_mult, &drain_latency)) {
+ plane_prec = (prec_mult == DRAIN_LATENCY_PRECISION_64) ?
+ DDL_PLANE_PRECISION_64 :
+ DDL_PLANE_PRECISION_32;
+ plane_dl |= plane_prec | drain_latency;
}
- /* For plane B, Cursor B */
- if (vlv_compute_drain_latency(dev, 1, &plane_prec_mult, &planeb_dl,
- &cursor_prec_mult, &cursorb_dl)) {
- cursorb_prec = (cursor_prec_mult == DRAIN_LATENCY_PRECISION_32) ?
- DDL_CURSORB_PRECISION_32 : DDL_CURSORB_PRECISION_64;
- planeb_prec = (plane_prec_mult == DRAIN_LATENCY_PRECISION_32) ?
- DDL_PLANEB_PRECISION_32 : DDL_PLANEB_PRECISION_64;
+ /* Cursor Drain Latency
+ * BPP is always 4 for cursor
+ */
+ pixel_size = 4;
- I915_WRITE(VLV_DDL2, cursorb_prec |
- (cursorb_dl << DDL_CURSORB_SHIFT) |
- planeb_prec | planeb_dl);
+ /* Program cursor DL only if it is enabled */
+ if (intel_crtc->cursor_base &&
+ vlv_compute_drain_latency(crtc, pixel_size, &prec_mult, &drain_latency)) {
+ plane_prec = (prec_mult == DRAIN_LATENCY_PRECISION_64) ?
+ DDL_CURSOR_PRECISION_64 :
+ DDL_CURSOR_PRECISION_32;
+ plane_dl |= plane_prec | (drain_latency << DDL_CURSOR_SHIFT);
}
+
+ I915_WRITE(VLV_DDL(pipe), plane_dl);
}
#define single_plane_enabled(mask) is_power_of_2(mask)
unsigned int enabled = 0;
bool cxsr_enabled;
- vlv_update_drain_latency(dev);
+ vlv_update_drain_latency(crtc);
+
+ if (g4x_compute_wm0(dev, PIPE_A,
+ &valleyview_wm_info, pessimal_latency_ns,
+ &valleyview_cursor_wm_info, pessimal_latency_ns,
+ &planea_wm, &cursora_wm))
+ enabled |= 1 << PIPE_A;
+
+ if (g4x_compute_wm0(dev, PIPE_B,
+ &valleyview_wm_info, pessimal_latency_ns,
+ &valleyview_cursor_wm_info, pessimal_latency_ns,
+ &planeb_wm, &cursorb_wm))
+ enabled |= 1 << PIPE_B;
+
+ if (single_plane_enabled(enabled) &&
+ g4x_compute_srwm(dev, ffs(enabled) - 1,
+ sr_latency_ns,
+ &valleyview_wm_info,
+ &valleyview_cursor_wm_info,
+ &plane_sr, &ignore_cursor_sr) &&
+ g4x_compute_srwm(dev, ffs(enabled) - 1,
+ 2*sr_latency_ns,
+ &valleyview_wm_info,
+ &valleyview_cursor_wm_info,
+ &ignore_plane_sr, &cursor_sr)) {
+ cxsr_enabled = true;
+ } else {
+ cxsr_enabled = false;
+ intel_set_memory_cxsr(dev_priv, false);
+ plane_sr = cursor_sr = 0;
+ }
+
+ DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, "
+ "B: plane=%d, cursor=%d, SR: plane=%d, cursor=%d\n",
+ planea_wm, cursora_wm,
+ planeb_wm, cursorb_wm,
+ plane_sr, cursor_sr);
+
+ I915_WRITE(DSPFW1,
+ (plane_sr << DSPFW_SR_SHIFT) |
+ (cursorb_wm << DSPFW_CURSORB_SHIFT) |
+ (planeb_wm << DSPFW_PLANEB_SHIFT) |
+ (planea_wm << DSPFW_PLANEA_SHIFT));
+ I915_WRITE(DSPFW2,
+ (I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
+ (cursora_wm << DSPFW_CURSORA_SHIFT));
+ I915_WRITE(DSPFW3,
+ (I915_READ(DSPFW3) & ~DSPFW_CURSOR_SR_MASK) |
+ (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
+
+ if (cxsr_enabled)
+ intel_set_memory_cxsr(dev_priv, true);
+}
+
+static void cherryview_update_wm(struct drm_crtc *crtc)
+{
+ struct drm_device *dev = crtc->dev;
+ static const int sr_latency_ns = 12000;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int planea_wm, planeb_wm, planec_wm;
+ int cursora_wm, cursorb_wm, cursorc_wm;
+ int plane_sr, cursor_sr;
+ int ignore_plane_sr, ignore_cursor_sr;
+ unsigned int enabled = 0;
+ bool cxsr_enabled;
+
+ vlv_update_drain_latency(crtc);
if (g4x_compute_wm0(dev, PIPE_A,
- &valleyview_wm_info, latency_ns,
- &valleyview_cursor_wm_info, latency_ns,
+ &valleyview_wm_info, pessimal_latency_ns,
+ &valleyview_cursor_wm_info, pessimal_latency_ns,
&planea_wm, &cursora_wm))
enabled |= 1 << PIPE_A;
if (g4x_compute_wm0(dev, PIPE_B,
- &valleyview_wm_info, latency_ns,
- &valleyview_cursor_wm_info, latency_ns,
+ &valleyview_wm_info, pessimal_latency_ns,
+ &valleyview_cursor_wm_info, pessimal_latency_ns,
&planeb_wm, &cursorb_wm))
enabled |= 1 << PIPE_B;
+ if (g4x_compute_wm0(dev, PIPE_C,
+ &valleyview_wm_info, pessimal_latency_ns,
+ &valleyview_cursor_wm_info, pessimal_latency_ns,
+ &planec_wm, &cursorc_wm))
+ enabled |= 1 << PIPE_C;
+
if (single_plane_enabled(enabled) &&
g4x_compute_srwm(dev, ffs(enabled) - 1,
sr_latency_ns,
plane_sr = cursor_sr = 0;
}
- DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, B: plane=%d, cursor=%d, SR: plane=%d, cursor=%d\n",
+ DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, "
+ "B: plane=%d, cursor=%d, C: plane=%d, cursor=%d, "
+ "SR: plane=%d, cursor=%d\n",
planea_wm, cursora_wm,
planeb_wm, cursorb_wm,
+ planec_wm, cursorc_wm,
plane_sr, cursor_sr);
I915_WRITE(DSPFW1,
(plane_sr << DSPFW_SR_SHIFT) |
(cursorb_wm << DSPFW_CURSORB_SHIFT) |
(planeb_wm << DSPFW_PLANEB_SHIFT) |
- planea_wm);
+ (planea_wm << DSPFW_PLANEA_SHIFT));
I915_WRITE(DSPFW2,
(I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
(cursora_wm << DSPFW_CURSORA_SHIFT));
I915_WRITE(DSPFW3,
(I915_READ(DSPFW3) & ~DSPFW_CURSOR_SR_MASK) |
(cursor_sr << DSPFW_CURSOR_SR_SHIFT));
+ I915_WRITE(DSPFW9_CHV,
+ (I915_READ(DSPFW9_CHV) & ~(DSPFW_PLANEC_MASK |
+ DSPFW_CURSORC_MASK)) |
+ (planec_wm << DSPFW_PLANEC_SHIFT) |
+ (cursorc_wm << DSPFW_CURSORC_SHIFT));
if (cxsr_enabled)
intel_set_memory_cxsr(dev_priv, true);
}
+static void valleyview_update_sprite_wm(struct drm_plane *plane,
+ struct drm_crtc *crtc,
+ uint32_t sprite_width,
+ uint32_t sprite_height,
+ int pixel_size,
+ bool enabled, bool scaled)
+{
+ struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int pipe = to_intel_plane(plane)->pipe;
+ int sprite = to_intel_plane(plane)->plane;
+ int drain_latency;
+ int plane_prec;
+ int sprite_dl;
+ int prec_mult;
+
+ sprite_dl = I915_READ(VLV_DDL(pipe)) & ~(DDL_SPRITE_PRECISION_64(sprite) |
+ (DRAIN_LATENCY_MASK << DDL_SPRITE_SHIFT(sprite)));
+
+ if (enabled && vlv_compute_drain_latency(crtc, pixel_size, &prec_mult,
+ &drain_latency)) {
+ plane_prec = (prec_mult == DRAIN_LATENCY_PRECISION_64) ?
+ DDL_SPRITE_PRECISION_64(sprite) :
+ DDL_SPRITE_PRECISION_32(sprite);
+ sprite_dl |= plane_prec |
+ (drain_latency << DDL_SPRITE_SHIFT(sprite));
+ }
+
+ I915_WRITE(VLV_DDL(pipe), sprite_dl);
+}
+
static void g4x_update_wm(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
bool cxsr_enabled;
if (g4x_compute_wm0(dev, PIPE_A,
- &g4x_wm_info, latency_ns,
- &g4x_cursor_wm_info, latency_ns,
+ &g4x_wm_info, pessimal_latency_ns,
+ &g4x_cursor_wm_info, pessimal_latency_ns,
&planea_wm, &cursora_wm))
enabled |= 1 << PIPE_A;
if (g4x_compute_wm0(dev, PIPE_B,
- &g4x_wm_info, latency_ns,
- &g4x_cursor_wm_info, latency_ns,
+ &g4x_wm_info, pessimal_latency_ns,
+ &g4x_cursor_wm_info, pessimal_latency_ns,
&planeb_wm, &cursorb_wm))
enabled |= 1 << PIPE_B;
plane_sr = cursor_sr = 0;
}
- DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, B: plane=%d, cursor=%d, SR: plane=%d, cursor=%d\n",
+ DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, "
+ "B: plane=%d, cursor=%d, SR: plane=%d, cursor=%d\n",
planea_wm, cursora_wm,
planeb_wm, cursorb_wm,
plane_sr, cursor_sr);
(plane_sr << DSPFW_SR_SHIFT) |
(cursorb_wm << DSPFW_CURSORB_SHIFT) |
(planeb_wm << DSPFW_PLANEB_SHIFT) |
- planea_wm);
+ (planea_wm << DSPFW_PLANEA_SHIFT));
I915_WRITE(DSPFW2,
(I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
(cursora_wm << DSPFW_CURSORA_SHIFT));
/* 965 has limitations... */
I915_WRITE(DSPFW1, (srwm << DSPFW_SR_SHIFT) |
- (8 << 16) | (8 << 8) | (8 << 0));
- I915_WRITE(DSPFW2, (8 << 8) | (8 << 0));
+ (8 << DSPFW_CURSORB_SHIFT) |
+ (8 << DSPFW_PLANEB_SHIFT) |
+ (8 << DSPFW_PLANEA_SHIFT));
+ I915_WRITE(DSPFW2, (8 << DSPFW_CURSORA_SHIFT) |
+ (8 << DSPFW_PLANEC_SHIFT_OLD));
/* update cursor SR watermark */
I915_WRITE(DSPFW3, (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
else if (!IS_GEN2(dev))
wm_info = &i915_wm_info;
else
- wm_info = &i830_wm_info;
+ wm_info = &i830_a_wm_info;
fifo_size = dev_priv->display.get_fifo_size(dev, 0);
crtc = intel_get_crtc_for_plane(dev, 0);
adjusted_mode = &to_intel_crtc(crtc)->config.adjusted_mode;
planea_wm = intel_calculate_wm(adjusted_mode->crtc_clock,
wm_info, fifo_size, cpp,
- latency_ns);
+ pessimal_latency_ns);
enabled = crtc;
- } else
+ } else {
planea_wm = fifo_size - wm_info->guard_size;
+ if (planea_wm > (long)wm_info->max_wm)
+ planea_wm = wm_info->max_wm;
+ }
+
+ if (IS_GEN2(dev))
+ wm_info = &i830_bc_wm_info;
fifo_size = dev_priv->display.get_fifo_size(dev, 1);
crtc = intel_get_crtc_for_plane(dev, 1);
adjusted_mode = &to_intel_crtc(crtc)->config.adjusted_mode;
planeb_wm = intel_calculate_wm(adjusted_mode->crtc_clock,
wm_info, fifo_size, cpp,
- latency_ns);
+ pessimal_latency_ns);
if (enabled == NULL)
enabled = crtc;
else
enabled = NULL;
- } else
+ } else {
planeb_wm = fifo_size - wm_info->guard_size;
+ if (planeb_wm > (long)wm_info->max_wm)
+ planeb_wm = wm_info->max_wm;
+ }
DRM_DEBUG_KMS("FIFO watermarks - A: %d, B: %d\n", planea_wm, planeb_wm);
planea_wm = intel_calculate_wm(adjusted_mode->crtc_clock,
&i845_wm_info,
dev_priv->display.get_fifo_size(dev, 0),
- 4, latency_ns);
+ 4, pessimal_latency_ns);
fwater_lo = I915_READ(FW_BLC) & ~0xfff;
fwater_lo |= (3<<8) | planea_wm;
#define WM_DIRTY_FBC (1 << 24)
#define WM_DIRTY_DDB (1 << 25)
-static unsigned int ilk_compute_wm_dirty(struct drm_device *dev,
+static unsigned int ilk_compute_wm_dirty(struct drm_i915_private *dev_priv,
const struct ilk_wm_values *old,
const struct ilk_wm_values *new)
{
enum pipe pipe;
int wm_lp;
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
if (old->wm_linetime[pipe] != new->wm_linetime[pipe]) {
dirty |= WM_DIRTY_LINETIME(pipe);
/* Must disable LP1+ watermarks too */
unsigned int dirty;
uint32_t val;
- dirty = ilk_compute_wm_dirty(dev, previous, results);
+ dirty = ilk_compute_wm_dirty(dev_priv, previous, results);
if (!dirty)
return;
WARN_ON(val > dev_priv->rps.max_freq_softlimit);
WARN_ON(val < dev_priv->rps.min_freq_softlimit);
- DRM_DEBUG_DRIVER("GPU freq request from %d MHz (%u) to %d MHz (%u)\n",
- vlv_gpu_freq(dev_priv, dev_priv->rps.cur_freq),
- dev_priv->rps.cur_freq,
- vlv_gpu_freq(dev_priv, val), val);
+ if (WARN_ONCE(IS_CHERRYVIEW(dev) && (val & 1),
+ "Odd GPU freq value\n"))
+ val &= ~1;
+
+ if (val != dev_priv->rps.cur_freq) {
+ DRM_DEBUG_DRIVER("GPU freq request from %d MHz (%u) to %d MHz (%u)\n",
+ vlv_gpu_freq(dev_priv, dev_priv->rps.cur_freq),
+ dev_priv->rps.cur_freq,
+ vlv_gpu_freq(dev_priv, val), val);
- if (val != dev_priv->rps.cur_freq)
vlv_punit_write(dev_priv, PUNIT_REG_GPU_FREQ_REQ, val);
+ }
I915_WRITE(GEN6_PMINTRMSK, gen6_rps_pm_mask(dev_priv, val));
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ /* We're doing forcewake before disabling RC6,
+ * which is what the BIOS expects when going into suspend */
+ gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);
+
I915_WRITE(GEN6_RC_CONTROL, 0);
+ gen6_gt_force_wake_put(dev_priv, FORCEWAKE_ALL);
+
gen6_disable_rps_interrupts(dev);
}
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *ring;
u32 rp_state_cap;
- u32 gt_perf_status;
u32 rc6vids, pcu_mbox = 0, rc6_mask = 0;
u32 gtfifodbg;
int rc6_mode;
gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);
rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
- gt_perf_status = I915_READ(GEN6_GT_PERF_STATUS);
parse_rp_state_cap(dev_priv, rp_state_cap);
static void valleyview_init_gt_powersave(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ u32 val;
valleyview_setup_pctx(dev);
mutex_lock(&dev_priv->rps.hw_lock);
+ val = vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS);
+ switch ((val >> 6) & 3) {
+ case 0:
+ case 1:
+ dev_priv->mem_freq = 800;
+ break;
+ case 2:
+ dev_priv->mem_freq = 1066;
+ break;
+ case 3:
+ dev_priv->mem_freq = 1333;
+ break;
+ }
+ DRM_DEBUG_DRIVER("DDR speed: %d MHz", dev_priv->mem_freq);
+
dev_priv->rps.max_freq = valleyview_rps_max_freq(dev_priv);
dev_priv->rps.rp0_freq = dev_priv->rps.max_freq;
DRM_DEBUG_DRIVER("max GPU freq: %d MHz (%u)\n",
static void cherryview_init_gt_powersave(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ u32 val;
cherryview_setup_pctx(dev);
mutex_lock(&dev_priv->rps.hw_lock);
+ val = vlv_punit_read(dev_priv, CCK_FUSE_REG);
+ switch ((val >> 2) & 0x7) {
+ case 0:
+ case 1:
+ dev_priv->rps.cz_freq = 200;
+ dev_priv->mem_freq = 1600;
+ break;
+ case 2:
+ dev_priv->rps.cz_freq = 267;
+ dev_priv->mem_freq = 1600;
+ break;
+ case 3:
+ dev_priv->rps.cz_freq = 333;
+ dev_priv->mem_freq = 2000;
+ break;
+ case 4:
+ dev_priv->rps.cz_freq = 320;
+ dev_priv->mem_freq = 1600;
+ break;
+ case 5:
+ dev_priv->rps.cz_freq = 400;
+ dev_priv->mem_freq = 1600;
+ break;
+ }
+ DRM_DEBUG_DRIVER("DDR speed: %d MHz", dev_priv->mem_freq);
+
dev_priv->rps.max_freq = cherryview_rps_max_freq(dev_priv);
dev_priv->rps.rp0_freq = dev_priv->rps.max_freq;
DRM_DEBUG_DRIVER("max GPU freq: %d MHz (%u)\n",
vlv_gpu_freq(dev_priv, dev_priv->rps.min_freq),
dev_priv->rps.min_freq);
+ WARN_ONCE((dev_priv->rps.max_freq |
+ dev_priv->rps.efficient_freq |
+ dev_priv->rps.rp1_freq |
+ dev_priv->rps.min_freq) & 1,
+ "Odd GPU freq values\n");
+
/* Preserve min/max settings in case of re-init */
if (dev_priv->rps.max_freq_softlimit == 0)
dev_priv->rps.max_freq_softlimit = dev_priv->rps.max_freq;
struct drm_i915_private *dev_priv = dev->dev_private;
int pipe;
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
I915_WRITE(DSPCNTR(pipe),
I915_READ(DSPCNTR(pipe)) |
DISPPLANE_TRICKLE_FEED_DISABLE);
/* The below fixes the weird display corruption, a few pixels shifted
* downward, on (only) LVDS of some HP laptops with IVY.
*/
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
val = I915_READ(TRANS_CHICKEN2(pipe));
val |= TRANS_CHICKEN2_TIMING_OVERRIDE;
val &= ~TRANS_CHICKEN2_FDI_POLARITY_REVERSED;
I915_WRITE(TRANS_CHICKEN2(pipe), val);
}
/* WADP0ClockGatingDisable */
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
I915_WRITE(TRANS_CHICKEN1(pipe),
TRANS_CHICKEN1_DP0UNIT_GC_DISABLE);
}
}
}
-static void gen8_init_clock_gating(struct drm_device *dev)
+static void broadwell_init_clock_gating(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
enum pipe pipe;
/* FIXME(BDW): Check all the w/a, some might only apply to
* pre-production hw. */
- /* WaDisablePartialInstShootdown:bdw */
- I915_WRITE(GEN8_ROW_CHICKEN,
- _MASKED_BIT_ENABLE(PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE));
- /* WaDisableThreadStallDopClockGating:bdw */
- /* FIXME: Unclear whether we really need this on production bdw. */
- I915_WRITE(GEN8_ROW_CHICKEN,
- _MASKED_BIT_ENABLE(STALL_DOP_GATING_DISABLE));
-
- /*
- * This GEN8_CENTROID_PIXEL_OPT_DIS W/A is only needed for
- * pre-production hardware
- */
- I915_WRITE(HALF_SLICE_CHICKEN3,
- _MASKED_BIT_ENABLE(GEN8_CENTROID_PIXEL_OPT_DIS));
- I915_WRITE(HALF_SLICE_CHICKEN3,
- _MASKED_BIT_ENABLE(GEN8_SAMPLER_POWER_BYPASS_DIS));
I915_WRITE(GAMTARBMODE, _MASKED_BIT_ENABLE(ARB_MODE_BWGTLB_DISABLE));
I915_WRITE(_3D_CHICKEN3,
_MASKED_BIT_ENABLE(_3D_CHICKEN_SDE_LIMIT_FIFO_POLY_DEPTH(2)));
- I915_WRITE(COMMON_SLICE_CHICKEN2,
- _MASKED_BIT_ENABLE(GEN8_CSC2_SBE_VUE_CACHE_CONSERVATIVE));
-
- I915_WRITE(GEN7_HALF_SLICE_CHICKEN1,
- _MASKED_BIT_ENABLE(GEN7_SINGLE_SUBSCAN_DISPATCH_ENABLE));
-
- /* WaDisableDopClockGating:bdw May not be needed for production */
- I915_WRITE(GEN7_ROW_CHICKEN2,
- _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
/* WaSwitchSolVfFArbitrationPriority:bdw */
I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) | HSW_ECOCHK_ARB_PRIO_SOL);
I915_READ(CHICKEN_PAR1_1) | DPA_MASK_VBLANK_SRD);
/* WaPsrDPRSUnmaskVBlankInSRD:bdw */
- for_each_pipe(pipe) {
+ for_each_pipe(dev_priv, pipe) {
I915_WRITE(CHICKEN_PIPESL_1(pipe),
I915_READ(CHICKEN_PIPESL_1(pipe)) |
BDW_DPRS_MASK_VBLANK_SRD);
}
- /* Use Force Non-Coherent whenever executing a 3D context. This is a
- * workaround for for a possible hang in the unlikely event a TLB
- * invalidation occurs during a PSD flush.
- */
- I915_WRITE(HDC_CHICKEN0,
- I915_READ(HDC_CHICKEN0) |
- _MASKED_BIT_ENABLE(HDC_FORCE_NON_COHERENT));
-
/* WaVSRefCountFullforceMissDisable:bdw */
/* WaDSRefCountFullforceMissDisable:bdw */
I915_WRITE(GEN7_FF_THREAD_MODE,
I915_READ(GEN7_FF_THREAD_MODE) &
~(GEN8_FF_DS_REF_CNT_FFME | GEN7_FF_VS_REF_CNT_FFME));
- /*
- * BSpec recommends 8x4 when MSAA is used,
- * however in practice 16x4 seems fastest.
- *
- * Note that PS/WM thread counts depend on the WIZ hashing
- * disable bit, which we don't touch here, but it's good
- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
- */
- I915_WRITE(GEN7_GT_MODE,
- GEN6_WIZ_HASHING_MASK | GEN6_WIZ_HASHING_16x4);
-
I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
_MASKED_BIT_ENABLE(GEN8_RC_SEMA_IDLE_MSG_DISABLE));
I915_WRITE(GEN8_UCGCTL6, I915_READ(GEN8_UCGCTL6) |
GEN8_SDEUNIT_CLOCK_GATE_DISABLE);
- /* Wa4x4STCOptimizationDisable:bdw */
- I915_WRITE(CACHE_MODE_1,
- _MASKED_BIT_ENABLE(GEN8_4x4_STC_OPTIMIZATION_DISABLE));
+ lpt_init_clock_gating(dev);
}
static void haswell_init_clock_gating(struct drm_device *dev)
static void valleyview_init_clock_gating(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- u32 val;
-
- mutex_lock(&dev_priv->rps.hw_lock);
- val = vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS);
- mutex_unlock(&dev_priv->rps.hw_lock);
- switch ((val >> 6) & 3) {
- case 0:
- case 1:
- dev_priv->mem_freq = 800;
- break;
- case 2:
- dev_priv->mem_freq = 1066;
- break;
- case 3:
- dev_priv->mem_freq = 1333;
- break;
- }
- DRM_DEBUG_DRIVER("DDR speed: %d MHz", dev_priv->mem_freq);
I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE);
static void cherryview_init_clock_gating(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- u32 val;
-
- mutex_lock(&dev_priv->rps.hw_lock);
- val = vlv_punit_read(dev_priv, CCK_FUSE_REG);
- mutex_unlock(&dev_priv->rps.hw_lock);
- switch ((val >> 2) & 0x7) {
- case 0:
- case 1:
- dev_priv->rps.cz_freq = CHV_CZ_CLOCK_FREQ_MODE_200;
- dev_priv->mem_freq = 1600;
- break;
- case 2:
- dev_priv->rps.cz_freq = CHV_CZ_CLOCK_FREQ_MODE_267;
- dev_priv->mem_freq = 1600;
- break;
- case 3:
- dev_priv->rps.cz_freq = CHV_CZ_CLOCK_FREQ_MODE_333;
- dev_priv->mem_freq = 2000;
- break;
- case 4:
- dev_priv->rps.cz_freq = CHV_CZ_CLOCK_FREQ_MODE_320;
- dev_priv->mem_freq = 1600;
- break;
- case 5:
- dev_priv->rps.cz_freq = CHV_CZ_CLOCK_FREQ_MODE_400;
- dev_priv->mem_freq = 1600;
- break;
- }
- DRM_DEBUG_DRIVER("DDR speed: %d MHz", dev_priv->mem_freq);
I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE);
I915_WRITE(MI_ARB_VLV, MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE);
- /* WaDisablePartialInstShootdown:chv */
- I915_WRITE(GEN8_ROW_CHICKEN,
- _MASKED_BIT_ENABLE(PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE));
-
- /* WaDisableThreadStallDopClockGating:chv */
- I915_WRITE(GEN8_ROW_CHICKEN,
- _MASKED_BIT_ENABLE(STALL_DOP_GATING_DISABLE));
-
/* WaVSRefCountFullforceMissDisable:chv */
/* WaDSRefCountFullforceMissDisable:chv */
I915_WRITE(GEN7_FF_THREAD_MODE,
I915_WRITE(GEN8_UCGCTL6, I915_READ(GEN8_UCGCTL6) |
GEN8_SDEUNIT_CLOCK_GATE_DISABLE);
- /* WaDisableSamplerPowerBypass:chv (pre-production hw) */
- I915_WRITE(HALF_SLICE_CHICKEN3,
- _MASKED_BIT_ENABLE(GEN8_SAMPLER_POWER_BYPASS_DIS));
-
/* WaDisableGunitClockGating:chv (pre-production hw) */
I915_WRITE(VLV_GUNIT_CLOCK_GATE, I915_READ(VLV_GUNIT_CLOCK_GATE) |
GINT_DIS);
_MASKED_BIT_ENABLE(GEN8_FF_DOP_CLOCK_GATE_DISABLE));
/* WaDisableDopClockGating:chv (pre-production hw) */
- I915_WRITE(GEN7_ROW_CHICKEN2,
- _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
I915_WRITE(GEN6_UCGCTL1, I915_READ(GEN6_UCGCTL1) |
GEN6_EU_TCUNIT_CLOCK_GATE_DISABLE);
}
/* On GEN3 we really need to make sure the ARB C3 LP bit is set */
I915_WRITE(MI_ARB_STATE, _MASKED_BIT_ENABLE(MI_ARB_C3_LP_WRITE_ENABLE));
+
+ I915_WRITE(MI_ARB_STATE,
+ _MASKED_BIT_ENABLE(MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE));
}
static void i85x_init_clock_gating(struct drm_device *dev)
/* interrupts should cause a wake up from C3 */
I915_WRITE(MI_STATE, _MASKED_BIT_ENABLE(MI_AGPBUSY_INT_EN) |
_MASKED_BIT_DISABLE(MI_AGPBUSY_830_MODE));
+
+ I915_WRITE(MEM_MODE,
+ _MASKED_BIT_ENABLE(MEM_DISPLAY_TRICKLE_FEED_DISABLE));
}
static void i830_init_clock_gating(struct drm_device *dev)
struct drm_i915_private *dev_priv = dev->dev_private;
I915_WRITE(DSPCLK_GATE_D, OVRUNIT_CLOCK_GATE_DISABLE);
+
+ I915_WRITE(MEM_MODE,
+ _MASKED_BIT_ENABLE(MEM_DISPLAY_A_TRICKLE_FEED_DISABLE) |
+ _MASKED_BIT_ENABLE(MEM_DISPLAY_B_TRICKLE_FEED_DISABLE));
}
void intel_init_clock_gating(struct drm_device *dev)
spin_unlock_irq(&dev_priv->irq_lock);
vlv_set_power_well(dev_priv, power_well, false);
+
+ vlv_power_sequencer_reset(dev_priv);
}
static void vlv_dpio_cmn_power_well_enable(struct drm_i915_private *dev_priv,
static void vlv_dpio_cmn_power_well_disable(struct drm_i915_private *dev_priv,
struct i915_power_well *power_well)
{
- struct drm_device *dev = dev_priv->dev;
enum pipe pipe;
WARN_ON_ONCE(power_well->data != PUNIT_POWER_WELL_DPIO_CMN_BC);
- for_each_pipe(pipe)
+ for_each_pipe(dev_priv, pipe)
assert_pll_disabled(dev_priv, pipe);
/* Assert common reset */
vlv_set_power_well(dev_priv, power_well, false);
}
+static void chv_dpio_cmn_power_well_enable(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ enum dpio_phy phy;
+
+ WARN_ON_ONCE(power_well->data != PUNIT_POWER_WELL_DPIO_CMN_BC &&
+ power_well->data != PUNIT_POWER_WELL_DPIO_CMN_D);
+
+ /*
+ * Enable the CRI clock source so we can get at the
+ * display and the reference clock for VGA
+ * hotplug / manual detection.
+ */
+ if (power_well->data == PUNIT_POWER_WELL_DPIO_CMN_BC) {
+ phy = DPIO_PHY0;
+ I915_WRITE(DPLL(PIPE_B), I915_READ(DPLL(PIPE_B)) |
+ DPLL_REFA_CLK_ENABLE_VLV);
+ I915_WRITE(DPLL(PIPE_B), I915_READ(DPLL(PIPE_B)) |
+ DPLL_REFA_CLK_ENABLE_VLV | DPLL_INTEGRATED_CRI_CLK_VLV);
+ } else {
+ phy = DPIO_PHY1;
+ I915_WRITE(DPLL(PIPE_C), I915_READ(DPLL(PIPE_C)) |
+ DPLL_REFA_CLK_ENABLE_VLV | DPLL_INTEGRATED_CRI_CLK_VLV);
+ }
+ udelay(1); /* >10ns for cmnreset, >0ns for sidereset */
+ vlv_set_power_well(dev_priv, power_well, true);
+
+ /* Poll for phypwrgood signal */
+ if (wait_for(I915_READ(DISPLAY_PHY_STATUS) & PHY_POWERGOOD(phy), 1))
+ DRM_ERROR("Display PHY %d is not power up\n", phy);
+
+ I915_WRITE(DISPLAY_PHY_CONTROL, I915_READ(DISPLAY_PHY_CONTROL) |
+ PHY_COM_LANE_RESET_DEASSERT(phy));
+}
+
+static void chv_dpio_cmn_power_well_disable(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ enum dpio_phy phy;
+
+ WARN_ON_ONCE(power_well->data != PUNIT_POWER_WELL_DPIO_CMN_BC &&
+ power_well->data != PUNIT_POWER_WELL_DPIO_CMN_D);
+
+ if (power_well->data == PUNIT_POWER_WELL_DPIO_CMN_BC) {
+ phy = DPIO_PHY0;
+ assert_pll_disabled(dev_priv, PIPE_A);
+ assert_pll_disabled(dev_priv, PIPE_B);
+ } else {
+ phy = DPIO_PHY1;
+ assert_pll_disabled(dev_priv, PIPE_C);
+ }
+
+ I915_WRITE(DISPLAY_PHY_CONTROL, I915_READ(DISPLAY_PHY_CONTROL) &
+ ~PHY_COM_LANE_RESET_DEASSERT(phy));
+
+ vlv_set_power_well(dev_priv, power_well, false);
+}
+
+static bool chv_pipe_power_well_enabled(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ enum pipe pipe = power_well->data;
+ bool enabled;
+ u32 state, ctrl;
+
+ mutex_lock(&dev_priv->rps.hw_lock);
+
+ state = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ) & DP_SSS_MASK(pipe);
+ /*
+ * We only ever set the power-on and power-gate states, anything
+ * else is unexpected.
+ */
+ WARN_ON(state != DP_SSS_PWR_ON(pipe) && state != DP_SSS_PWR_GATE(pipe));
+ enabled = state == DP_SSS_PWR_ON(pipe);
+
+ /*
+ * A transient state at this point would mean some unexpected party
+ * is poking at the power controls too.
+ */
+ ctrl = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ) & DP_SSC_MASK(pipe);
+ WARN_ON(ctrl << 16 != state);
+
+ mutex_unlock(&dev_priv->rps.hw_lock);
+
+ return enabled;
+}
+
+static void chv_set_pipe_power_well(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well,
+ bool enable)
+{
+ enum pipe pipe = power_well->data;
+ u32 state;
+ u32 ctrl;
+
+ state = enable ? DP_SSS_PWR_ON(pipe) : DP_SSS_PWR_GATE(pipe);
+
+ mutex_lock(&dev_priv->rps.hw_lock);
+
+#define COND \
+ ((vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ) & DP_SSS_MASK(pipe)) == state)
+
+ if (COND)
+ goto out;
+
+ ctrl = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ);
+ ctrl &= ~DP_SSC_MASK(pipe);
+ ctrl |= enable ? DP_SSC_PWR_ON(pipe) : DP_SSC_PWR_GATE(pipe);
+ vlv_punit_write(dev_priv, PUNIT_REG_DSPFREQ, ctrl);
+
+ if (wait_for(COND, 100))
+ DRM_ERROR("timout setting power well state %08x (%08x)\n",
+ state,
+ vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ));
+
+#undef COND
+
+out:
+ mutex_unlock(&dev_priv->rps.hw_lock);
+}
+
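
Two details above are worth spelling out. The per-pipe power status field
(DP_SSS_*) in PUNIT_REG_DSPFREQ sits exactly 16 bits above the matching
control field (DP_SSC_*), which is why chv_pipe_power_well_enabled() can
WARN when ctrl << 16 != state to catch another agent poking the controls.
The local COND macro then simply re-reads the Punit register until the
status field reflects the requested state, with wait_for() bounding the
poll at 100 ms. Schematically (assuming that bit-field relationship):

    /* DP_SSS_MASK(pipe) == DP_SSC_MASK(pipe) << 16, hence: */
    status  = reg & DP_SSS_MASK(pipe);  /* what the Punit reports    */
    control = reg & DP_SSC_MASK(pipe);  /* what the driver requested */
    WARN_ON(control << 16 != status);   /* the check made above      */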
+static void chv_pipe_power_well_sync_hw(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ chv_set_pipe_power_well(dev_priv, power_well, power_well->count > 0);
+}
+
+static void chv_pipe_power_well_enable(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ WARN_ON_ONCE(power_well->data != PIPE_A &&
+ power_well->data != PIPE_B &&
+ power_well->data != PIPE_C);
+
+ chv_set_pipe_power_well(dev_priv, power_well, true);
+}
+
+static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ WARN_ON_ONCE(power_well->data != PIPE_A &&
+ power_well->data != PIPE_B &&
+ power_well->data != PIPE_C);
+
+ chv_set_pipe_power_well(dev_priv, power_well, false);
+}
+
static void check_power_well_state(struct drm_i915_private *dev_priv,
struct i915_power_well *power_well)
{
BIT(POWER_DOMAIN_PORT_DDI_C_4_LANES) | \
BIT(POWER_DOMAIN_INIT))
+#define CHV_PIPE_A_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PIPE_A) | \
+ BIT(POWER_DOMAIN_INIT))
+
+#define CHV_PIPE_B_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PIPE_B) | \
+ BIT(POWER_DOMAIN_INIT))
+
+#define CHV_PIPE_C_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PIPE_C) | \
+ BIT(POWER_DOMAIN_INIT))
+
+#define CHV_DPIO_CMN_BC_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PORT_DDI_B_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_B_4_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_C_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_C_4_LANES) | \
+ BIT(POWER_DOMAIN_INIT))
+
+#define CHV_DPIO_CMN_D_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PORT_DDI_D_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_D_4_LANES) | \
+ BIT(POWER_DOMAIN_INIT))
+
+#define CHV_DPIO_TX_D_LANES_01_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PORT_DDI_D_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_D_4_LANES) | \
+ BIT(POWER_DOMAIN_INIT))
+
+#define CHV_DPIO_TX_D_LANES_23_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PORT_DDI_D_4_LANES) | \
+ BIT(POWER_DOMAIN_INIT))
+
static const struct i915_power_well_ops i9xx_always_on_power_well_ops = {
.sync_hw = i9xx_always_on_power_well_noop,
.enable = i9xx_always_on_power_well_noop,
.is_enabled = i9xx_always_on_power_well_enabled,
};
+static const struct i915_power_well_ops chv_pipe_power_well_ops = {
+ .sync_hw = chv_pipe_power_well_sync_hw,
+ .enable = chv_pipe_power_well_enable,
+ .disable = chv_pipe_power_well_disable,
+ .is_enabled = chv_pipe_power_well_enabled,
+};
+
+static const struct i915_power_well_ops chv_dpio_cmn_power_well_ops = {
+ .sync_hw = vlv_power_well_sync_hw,
+ .enable = chv_dpio_cmn_power_well_enable,
+ .disable = chv_dpio_cmn_power_well_disable,
+ .is_enabled = vlv_power_well_enabled,
+};
+
static struct i915_power_well i9xx_always_on_power_well[] = {
{
.name = "always-on",
},
};
+static struct i915_power_well chv_power_wells[] = {
+ {
+ .name = "always-on",
+ .always_on = 1,
+ .domains = VLV_ALWAYS_ON_POWER_DOMAINS,
+ .ops = &i9xx_always_on_power_well_ops,
+ },
+#if 0
+ {
+ .name = "display",
+ .domains = VLV_DISPLAY_POWER_DOMAINS,
+ .data = PUNIT_POWER_WELL_DISP2D,
+ .ops = &vlv_display_power_well_ops,
+ },
+ {
+ .name = "pipe-a",
+ .domains = CHV_PIPE_A_POWER_DOMAINS,
+ .data = PIPE_A,
+ .ops = &chv_pipe_power_well_ops,
+ },
+ {
+ .name = "pipe-b",
+ .domains = CHV_PIPE_B_POWER_DOMAINS,
+ .data = PIPE_B,
+ .ops = &chv_pipe_power_well_ops,
+ },
+ {
+ .name = "pipe-c",
+ .domains = CHV_PIPE_C_POWER_DOMAINS,
+ .data = PIPE_C,
+ .ops = &chv_pipe_power_well_ops,
+ },
+#endif
+ {
+ .name = "dpio-common-bc",
+ /*
+ * XXX: cmnreset for one PHY seems to disturb the other.
+ * As a workaround keep both powered on at the same
+ * time for now.
+ */
+ .domains = CHV_DPIO_CMN_BC_POWER_DOMAINS | CHV_DPIO_CMN_D_POWER_DOMAINS,
+ .data = PUNIT_POWER_WELL_DPIO_CMN_BC,
+ .ops = &chv_dpio_cmn_power_well_ops,
+ },
+ {
+ .name = "dpio-common-d",
+ /*
+ * XXX: cmnreset for one PHY seems to disturb the other.
+ * As a workaround keep both powered on at the same
+ * time for now.
+ */
+ .domains = CHV_DPIO_CMN_BC_POWER_DOMAINS | CHV_DPIO_CMN_D_POWER_DOMAINS,
+ .data = PUNIT_POWER_WELL_DPIO_CMN_D,
+ .ops = &chv_dpio_cmn_power_well_ops,
+ },
+#if 0
+ {
+ .name = "dpio-tx-b-01",
+ .domains = VLV_DPIO_TX_B_LANES_01_POWER_DOMAINS |
+ VLV_DPIO_TX_B_LANES_23_POWER_DOMAINS,
+ .ops = &vlv_dpio_power_well_ops,
+ .data = PUNIT_POWER_WELL_DPIO_TX_B_LANES_01,
+ },
+ {
+ .name = "dpio-tx-b-23",
+ .domains = VLV_DPIO_TX_B_LANES_01_POWER_DOMAINS |
+ VLV_DPIO_TX_B_LANES_23_POWER_DOMAINS,
+ .ops = &vlv_dpio_power_well_ops,
+ .data = PUNIT_POWER_WELL_DPIO_TX_B_LANES_23,
+ },
+ {
+ .name = "dpio-tx-c-01",
+ .domains = VLV_DPIO_TX_C_LANES_01_POWER_DOMAINS |
+ VLV_DPIO_TX_C_LANES_23_POWER_DOMAINS,
+ .ops = &vlv_dpio_power_well_ops,
+ .data = PUNIT_POWER_WELL_DPIO_TX_C_LANES_01,
+ },
+ {
+ .name = "dpio-tx-c-23",
+ .domains = VLV_DPIO_TX_C_LANES_01_POWER_DOMAINS |
+ VLV_DPIO_TX_C_LANES_23_POWER_DOMAINS,
+ .ops = &vlv_dpio_power_well_ops,
+ .data = PUNIT_POWER_WELL_DPIO_TX_C_LANES_23,
+ },
+ {
+ .name = "dpio-tx-d-01",
+ .domains = CHV_DPIO_TX_D_LANES_01_POWER_DOMAINS |
+ CHV_DPIO_TX_D_LANES_23_POWER_DOMAINS,
+ .ops = &vlv_dpio_power_well_ops,
+ .data = PUNIT_POWER_WELL_DPIO_TX_D_LANES_01,
+ },
+ {
+ .name = "dpio-tx-d-23",
+ .domains = CHV_DPIO_TX_D_LANES_01_POWER_DOMAINS |
+ CHV_DPIO_TX_D_LANES_23_POWER_DOMAINS,
+ .ops = &vlv_dpio_power_well_ops,
+ .data = PUNIT_POWER_WELL_DPIO_TX_D_LANES_23,
+ },
+#endif
+};
+
static struct i915_power_well *lookup_power_well(struct drm_i915_private *dev_priv,
enum punit_power_well power_well_id)
{
} else if (IS_BROADWELL(dev_priv->dev)) {
set_power_wells(power_domains, bdw_power_wells);
hsw_pwr = power_domains;
+ } else if (IS_CHERRYVIEW(dev_priv->dev)) {
+ set_power_wells(power_domains, chv_power_wells);
} else if (IS_VALLEYVIEW(dev_priv->dev)) {
set_power_wells(power_domains, vlv_power_wells);
} else {
else if (IS_HASWELL(dev))
dev_priv->display.init_clock_gating = haswell_init_clock_gating;
else if (INTEL_INFO(dev)->gen == 8)
- dev_priv->display.init_clock_gating = gen8_init_clock_gating;
+ dev_priv->display.init_clock_gating = broadwell_init_clock_gating;
} else if (IS_CHERRYVIEW(dev)) {
- dev_priv->display.update_wm = valleyview_update_wm;
+ dev_priv->display.update_wm = cherryview_update_wm;
+ dev_priv->display.update_sprite_wm = valleyview_update_sprite_wm;
dev_priv->display.init_clock_gating =
cherryview_init_clock_gating;
} else if (IS_VALLEYVIEW(dev)) {
dev_priv->display.update_wm = valleyview_update_wm;
+ dev_priv->display.update_sprite_wm = valleyview_update_sprite_wm;
dev_priv->display.init_clock_gating =
valleyview_init_clock_gating;
} else if (IS_PINEVIEW(dev)) {
return -1;
}
+ /* CHV needs even values */
opcode = (DIV_ROUND_CLOSEST((val * 2 * mul), dev_priv->rps.cz_freq) * 2);
return opcode;
#ifndef _INTEL_RENDERSTATE_H
#define _INTEL_RENDERSTATE_H
-#include <linux/types.h>
-
-struct intel_renderstate_rodata {
- const u32 *reloc;
- const u32 *batch;
- const u32 batch_items;
-};
+#include "i915_drv.h"
extern const struct intel_renderstate_rodata gen6_null_state;
extern const struct intel_renderstate_rodata gen7_null_state;
#include "i915_trace.h"
#include "intel_drv.h"
-/* Early gen2 devices have a cacheline of just 32 bytes, using 64 is overkill,
- * but keeps the logic simple. Indeed, the whole purpose of this macro is just
- * to give some inclination as to some of the magic values used in the various
- * workarounds!
- */
-#define CACHELINE_BYTES 64
+bool
+intel_ring_initialized(struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+
+ if (!dev)
+ return false;
+
+ if (i915.enable_execlists) {
+ struct intel_context *dctx = ring->default_context;
+ struct intel_ringbuffer *ringbuf = dctx->engine[ring->id].ringbuf;
+
+ return ringbuf->obj;
+ } else
+ return ring->buffer && ring->buffer->obj;
+}
+
-static inline int __ring_space(int head, int tail, int size)
+int __intel_ring_space(int head, int tail, int size)
{
int space = head - (tail + I915_RING_FREE_SPACE);
if (space < 0)
space += size;
return space;
}
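
As a worked example of the wrap-around in __intel_ring_space(), assuming the
era's 64-byte I915_RING_FREE_SPACE reservation:

    /* head = 512, tail = 1024, size = 4096:
     *   space = 512 - (1024 + 64) = -576   -> negative, so wrap:
     *   space += 4096                      -> 3520 bytes usable
     */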
-static inline int ring_space(struct intel_ringbuffer *ringbuf)
+int intel_ring_space(struct intel_ringbuffer *ringbuf)
{
- return __ring_space(ringbuf->head & HEAD_ADDR, ringbuf->tail, ringbuf->size);
+ return __intel_ring_space(ringbuf->head & HEAD_ADDR,
+ ringbuf->tail, ringbuf->size);
}
-static bool intel_ring_stopped(struct intel_engine_cs *ring)
+bool intel_ring_stopped(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
return dev_priv->gpu_error.stop_rings & intel_ring_flag(ring);
return ret;
}
- return gen8_emit_pipe_control(ring, flags, scratch_addr);
+ ret = gen8_emit_pipe_control(ring, flags, scratch_addr);
+ if (ret)
+ return ret;
+
+ if (!invalidate_domains && flush_domains)
+ return gen7_ring_fbc_flush(ring, FBC_REND_NUKE);
+
+ return 0;
}
static void ring_write_tail(struct intel_engine_cs *ring,
if (!IS_GEN2(ring->dev)) {
I915_WRITE_MODE(ring, _MASKED_BIT_ENABLE(STOP_RING));
- if (wait_for_atomic((I915_READ_MODE(ring) & MODE_IDLE) != 0, 1000)) {
- DRM_ERROR("%s :timed out trying to stop ring\n", ring->name);
- return false;
+ if (wait_for((I915_READ_MODE(ring) & MODE_IDLE) != 0, 1000)) {
+ DRM_ERROR("%s : timed out trying to stop ring\n", ring->name);
+ /* Sometimes we observe that the idle flag is not
+ * set even though the ring is empty. So double
+ * check before giving up.
+ */
+ if (I915_READ_HEAD(ring) != I915_READ_TAIL(ring))
+ return false;
}
}
* also enforces ordering), otherwise the hw might lose the new ring
* register values. */
I915_WRITE_START(ring, i915_gem_obj_ggtt_offset(obj));
+
+ /* WaClearRingBufHeadRegAtInit:ctg,elk */
+ if (I915_READ_HEAD(ring))
+ DRM_DEBUG("%s initialization failed [head=%08x], fudging\n",
+ ring->name, I915_READ_HEAD(ring));
+ I915_WRITE_HEAD(ring, 0);
+ (void)I915_READ_HEAD(ring);
+
I915_WRITE_CTL(ring,
((ringbuf->size - PAGE_SIZE) & RING_NR_PAGES)
| RING_VALID);
else {
ringbuf->head = I915_READ_HEAD(ring);
ringbuf->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
- ringbuf->space = ring_space(ringbuf);
+ ringbuf->space = intel_ring_space(ringbuf);
ringbuf->last_retired_head = -1;
}
return ret;
}
-static int
-init_pipe_control(struct intel_engine_cs *ring)
+void
+intel_fini_pipe_control(struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+
+ if (ring->scratch.obj == NULL)
+ return;
+
+ if (INTEL_INFO(dev)->gen >= 5) {
+ kunmap(sg_page(ring->scratch.obj->pages->sgl));
+ i915_gem_object_ggtt_unpin(ring->scratch.obj);
+ }
+
+ drm_gem_object_unreference(&ring->scratch.obj->base);
+ ring->scratch.obj = NULL;
+}
+
+int
+intel_init_pipe_control(struct intel_engine_cs *ring)
{
int ret;
return ret;
}
+static inline void intel_ring_emit_wa(struct intel_engine_cs *ring,
+ u32 addr, u32 value)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ if (WARN_ON(dev_priv->num_wa_regs >= I915_MAX_WA_REGS))
+ return;
+
+ intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
+ intel_ring_emit(ring, addr);
+ intel_ring_emit(ring, value);
+
+ dev_priv->intel_wa_regs[dev_priv->num_wa_regs].addr = addr;
+ dev_priv->intel_wa_regs[dev_priv->num_wa_regs].mask = value & 0xFFFF;
+ /* value is updated with the status of the remaining bits of this
+ * register when it is read from the debugfs file
+ */
+ dev_priv->intel_wa_regs[dev_priv->num_wa_regs].value = value;
+ dev_priv->num_wa_regs++;
+
+ return;
+}
+
+static int bdw_init_workarounds(struct intel_engine_cs *ring)
+{
+ int ret;
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ /*
+ * The workarounds applied in this function are part of the register
+ * state context, so they need to be re-initialized following a GPU
+ * reset, suspend/resume or module reload.
+ */
+ dev_priv->num_wa_regs = 0;
+ memset(dev_priv->intel_wa_regs, 0, sizeof(dev_priv->intel_wa_regs));
+
+ /*
+ * update the number of dwords required based on the
+ * actual number of workarounds applied
+ */
+ ret = intel_ring_begin(ring, 18);
+ if (ret)
+ return ret;
+
+ /* WaDisablePartialInstShootdown:bdw */
+ /* WaDisableThreadStallDopClockGating:bdw */
+ /* FIXME: Unclear whether we really need this on production bdw. */
+ intel_ring_emit_wa(ring, GEN8_ROW_CHICKEN,
+ _MASKED_BIT_ENABLE(PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE
+ | STALL_DOP_GATING_DISABLE));
+
+ /* WaDisableDopClockGating:bdw May not be needed for production */
+ intel_ring_emit_wa(ring, GEN7_ROW_CHICKEN2,
+ _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
+
+ intel_ring_emit_wa(ring, HALF_SLICE_CHICKEN3,
+ _MASKED_BIT_ENABLE(GEN8_SAMPLER_POWER_BYPASS_DIS));
+
+ /* Use Force Non-Coherent whenever executing a 3D context. This is a
+ * workaround for a possible hang in the unlikely event a TLB
+ * invalidation occurs during a PSD flush.
+ */
+ intel_ring_emit_wa(ring, HDC_CHICKEN0,
+ _MASKED_BIT_ENABLE(HDC_FORCE_NON_COHERENT));
+
+ /* Wa4x4STCOptimizationDisable:bdw */
+ intel_ring_emit_wa(ring, CACHE_MODE_1,
+ _MASKED_BIT_ENABLE(GEN8_4x4_STC_OPTIMIZATION_DISABLE));
+
+ /*
+ * BSpec recommends 8x4 when MSAA is used,
+ * however in practice 16x4 seems fastest.
+ *
+ * Note that PS/WM thread counts depend on the WIZ hashing
+ * disable bit, which we don't touch here, but it's good
+ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
+ */
+ intel_ring_emit_wa(ring, GEN7_GT_MODE,
+ GEN6_WIZ_HASHING_MASK | GEN6_WIZ_HASHING_16x4);
+
+ intel_ring_advance(ring);
+
+ DRM_DEBUG_DRIVER("Number of Workarounds applied: %d\n",
+ dev_priv->num_wa_regs);
+
+ return 0;
+}
+
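
The intel_ring_begin() budgets follow directly from intel_ring_emit_wa():
each workaround costs three dwords (the MI_LOAD_REGISTER_IMM(1) header, the
register offset and the value). bdw_init_workarounds() above emits six such
writes, hence 6 * 3 = 18 dwords; chv_init_workarounds() below emits four,
hence 4 * 3 = 12.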
+static int chv_init_workarounds(struct intel_engine_cs *ring)
+{
+ int ret;
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ /*
+ * The workarounds applied in this function are part of the register
+ * state context, so they need to be re-initialized following a GPU
+ * reset, suspend/resume or module reload.
+ */
+ dev_priv->num_wa_regs = 0;
+ memset(dev_priv->intel_wa_regs, 0, sizeof(dev_priv->intel_wa_regs));
+
+ ret = intel_ring_begin(ring, 12);
+ if (ret)
+ return ret;
+
+ /* WaDisablePartialInstShootdown:chv */
+ intel_ring_emit_wa(ring, GEN8_ROW_CHICKEN,
+ _MASKED_BIT_ENABLE(PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE));
+
+ /* WaDisableThreadStallDopClockGating:chv */
+ intel_ring_emit_wa(ring, GEN8_ROW_CHICKEN,
+ _MASKED_BIT_ENABLE(STALL_DOP_GATING_DISABLE));
+
+ /* WaDisableDopClockGating:chv (pre-production hw) */
+ intel_ring_emit_wa(ring, GEN7_ROW_CHICKEN2,
+ _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
+
+ /* WaDisableSamplerPowerBypass:chv (pre-production hw) */
+ intel_ring_emit_wa(ring, HALF_SLICE_CHICKEN3,
+ _MASKED_BIT_ENABLE(GEN8_SAMPLER_POWER_BYPASS_DIS));
+
+ intel_ring_advance(ring);
+
+ return 0;
+}
+
static int init_render_ring(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
_MASKED_BIT_ENABLE(GFX_REPLAY_MODE));
if (INTEL_INFO(dev)->gen >= 5) {
- ret = init_pipe_control(ring);
+ ret = intel_init_pipe_control(ring);
if (ret)
return ret;
}
dev_priv->semaphore_obj = NULL;
}
- if (ring->scratch.obj == NULL)
- return;
-
- if (INTEL_INFO(dev)->gen >= 5) {
- kunmap(sg_page(ring->scratch.obj->pages->sgl));
- i915_gem_object_ggtt_unpin(ring->scratch.obj);
- }
-
- drm_gem_object_unreference(&ring->scratch.obj->base);
- ring->scratch.obj = NULL;
+ intel_fini_pipe_control(ring);
}
static int gen8_rcs_signal(struct intel_engine_cs *signaller,
/* Just userspace ABI convention to limit the wa batch bo to a reasonable size */
#define I830_BATCH_LIMIT (256*1024)
+#define I830_TLB_ENTRIES (2)
+#define I830_WA_SIZE max(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT)
static int
i830_dispatch_execbuffer(struct intel_engine_cs *ring,
u64 offset, u32 len,
unsigned flags)
{
+ u32 cs_offset = ring->scratch.gtt_offset;
int ret;
- if (flags & I915_DISPATCH_PINNED) {
- ret = intel_ring_begin(ring, 4);
- if (ret)
- return ret;
+ ret = intel_ring_begin(ring, 6);
+ if (ret)
+ return ret;
- intel_ring_emit(ring, MI_BATCH_BUFFER);
- intel_ring_emit(ring, offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
- intel_ring_emit(ring, offset + len - 8);
- intel_ring_emit(ring, MI_NOOP);
- intel_ring_advance(ring);
- } else {
- u32 cs_offset = ring->scratch.gtt_offset;
+ /* Evict the invalid PTE TLBs */
+ intel_ring_emit(ring, COLOR_BLT_CMD | BLT_WRITE_RGBA);
+ intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | 4096);
+ intel_ring_emit(ring, I830_TLB_ENTRIES << 16 | 4); /* load each page */
+ intel_ring_emit(ring, cs_offset);
+ intel_ring_emit(ring, 0xdeadbeef);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);
+ if ((flags & I915_DISPATCH_PINNED) == 0) {
if (len > I830_BATCH_LIMIT)
return -ENOSPC;
- ret = intel_ring_begin(ring, 9+3);
+ ret = intel_ring_begin(ring, 6 + 2);
if (ret)
return ret;
- /* Blit the batch (which has now all relocs applied) to the stable batch
- * scratch bo area (so that the CS never stumbles over its tlb
- * invalidation bug) ... */
- intel_ring_emit(ring, XY_SRC_COPY_BLT_CMD |
- XY_SRC_COPY_BLT_WRITE_ALPHA |
- XY_SRC_COPY_BLT_WRITE_RGB);
- intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_GXCOPY | 4096);
- intel_ring_emit(ring, 0);
- intel_ring_emit(ring, (DIV_ROUND_UP(len, 4096) << 16) | 1024);
+
+ /* Blit the batch (which has now all relocs applied) to the
+ * stable batch scratch bo area (so that the CS never
+ * stumbles over its tlb invalidation bug) ...
+ */
+ intel_ring_emit(ring, SRC_COPY_BLT_CMD | BLT_WRITE_RGBA);
+ intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_SRC_COPY | 4096);
+ intel_ring_emit(ring, DIV_ROUND_UP(len, 4096) << 16 | 1024);
intel_ring_emit(ring, cs_offset);
- intel_ring_emit(ring, 0);
intel_ring_emit(ring, 4096);
intel_ring_emit(ring, offset);
+
intel_ring_emit(ring, MI_FLUSH);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);
/* ... and execute it. */
- intel_ring_emit(ring, MI_BATCH_BUFFER);
- intel_ring_emit(ring, cs_offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
- intel_ring_emit(ring, cs_offset + len - 8);
- intel_ring_advance(ring);
+ offset = cs_offset;
}
+ ret = intel_ring_begin(ring, 4);
+ if (ret)
+ return ret;
+
+ intel_ring_emit(ring, MI_BATCH_BUFFER);
+ intel_ring_emit(ring, offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
+ intel_ring_emit(ring, offset + len - 8);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);
+
return 0;
}
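
Taken together, the reworked i830 path always primes the TLB first: the
COLOR_BLT fills I830_TLB_ENTRIES pages of the scratch bo with 0xdeadbeef,
forcing the CS to load fresh PTEs. A non-pinned batch is then additionally
blitted into that same stable scratch area and executed from there, so the
CS never fetches through a stale mapping. The scratch bo is sized to cover
both uses: I830_WA_SIZE = max(2 * 4096, 256 KiB) = 256 KiB.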
return 0;
}
-static void intel_destroy_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
+void intel_destroy_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
{
if (!ringbuf->obj)
return;
ringbuf->obj = NULL;
}
-static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
- struct intel_ringbuffer *ringbuf)
+int intel_alloc_ringbuffer_obj(struct drm_device *dev,
+ struct intel_ringbuffer *ringbuf)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_object *obj;
ring->dev = dev;
INIT_LIST_HEAD(&ring->active_list);
INIT_LIST_HEAD(&ring->request_list);
+ INIT_LIST_HEAD(&ring->execlist_queue);
ringbuf->size = 32 * PAGE_SIZE;
+ ringbuf->ring = ring;
memset(ring->semaphore.sync_seqno, 0, sizeof(ring->semaphore.sync_seqno));
init_waitqueue_head(&ring->irq_queue);
ringbuf->head = ringbuf->last_retired_head;
ringbuf->last_retired_head = -1;
- ringbuf->space = ring_space(ringbuf);
+ ringbuf->space = intel_ring_space(ringbuf);
if (ringbuf->space >= n)
return 0;
}
list_for_each_entry(request, &ring->request_list, list) {
- if (__ring_space(request->tail, ringbuf->tail, ringbuf->size) >= n) {
+ if (__intel_ring_space(request->tail, ringbuf->tail,
+ ringbuf->size) >= n) {
seqno = request->seqno;
break;
}
ringbuf->head = ringbuf->last_retired_head;
ringbuf->last_retired_head = -1;
- ringbuf->space = ring_space(ringbuf);
+ ringbuf->space = intel_ring_space(ringbuf);
return 0;
}
trace_i915_ring_wait_begin(ring);
do {
ringbuf->head = I915_READ_HEAD(ring);
- ringbuf->space = ring_space(ringbuf);
+ ringbuf->space = intel_ring_space(ringbuf);
if (ringbuf->space >= n) {
ret = 0;
break;
iowrite32(MI_NOOP, virt++);
ringbuf->tail = 0;
- ringbuf->space = ring_space(ringbuf);
+ ringbuf->space = intel_ring_space(ringbuf);
return 0;
}
u64 offset, u32 len,
unsigned flags)
{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
- bool ppgtt = dev_priv->mm.aliasing_ppgtt != NULL &&
- !(flags & I915_DISPATCH_SECURE);
+ bool ppgtt = USES_PPGTT(ring->dev) && !(flags & I915_DISPATCH_SECURE);
int ret;
ret = intel_ring_begin(ring, 4);
return ret;
intel_ring_emit(ring,
- MI_BATCH_BUFFER_START | MI_BATCH_PPGTT_HSW |
- (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE_HSW));
+ MI_BATCH_BUFFER_START |
+ (flags & I915_DISPATCH_SECURE ?
+ 0 : MI_BATCH_PPGTT_HSW | MI_BATCH_NON_SECURE_HSW));
/* bit0-7 is the length on GEN6+ */
intel_ring_emit(ring, offset);
intel_ring_advance(ring);
dev_priv->semaphore_obj = obj;
}
}
+ if (IS_CHERRYVIEW(dev))
+ ring->init_context = chv_init_workarounds;
+ else
+ ring->init_context = bdw_init_workarounds;
ring->add_request = gen6_add_request;
ring->flush = gen8_render_ring_flush;
ring->irq_get = gen8_ring_get_irq;
/* Workaround batchbuffer to combat CS tlb bug. */
if (HAS_BROKEN_CS_TLB(dev)) {
- obj = i915_gem_alloc_object(dev, I830_BATCH_LIMIT);
+ obj = i915_gem_alloc_object(dev, I830_WA_SIZE);
if (obj == NULL) {
DRM_ERROR("Failed to allocate batch bo\n");
return -ENOMEM;
#define I915_CMD_HASH_ORDER 9
+/* Early gen2 devices have a cacheline of just 32 bytes, using 64 is overkill,
+ * but keeps the logic simple. Indeed, the whole purpose of this macro is just
+ * to give some inclination as to some of the magic values used in the various
+ * workarounds!
+ */
+#define CACHELINE_BYTES 64
+
/*
* Gen2 BSpec "1. Programming Environment" / 1.4.4.6 "Ring Buffer Use"
* Gen3 BSpec "vol1c Memory Interface Functions" / 2.3.4.5 "Ring Buffer Use"
struct drm_i915_gem_object *obj;
void __iomem *virtual_start;
+ struct intel_engine_cs *ring;
+
+ /*
+ * FIXME: This backpointer is an artifact of the history of how the
+ * execlist patches came into being. It will get removed once the basic
+ * code has landed.
+ */
+ struct intel_context *FIXME_lrc_ctx;
+
u32 head;
u32 tail;
int space;
int (*init)(struct intel_engine_cs *ring);
+ int (*init_context)(struct intel_engine_cs *ring);
+
void (*write_tail)(struct intel_engine_cs *ring,
u32 value);
int __must_check (*flush)(struct intel_engine_cs *ring,
unsigned int num_dwords);
} semaphore;
+ /* Execlists */
+ spinlock_t execlist_lock;
+ struct list_head execlist_queue;
+ u8 next_context_status_buffer;
+ u32 irq_keep_mask; /* bitmask for interrupts that should not be masked */
+ int (*emit_request)(struct intel_ringbuffer *ringbuf);
+ int (*emit_flush)(struct intel_ringbuffer *ringbuf,
+ u32 invalidate_domains,
+ u32 flush_domains);
+ int (*emit_bb_start)(struct intel_ringbuffer *ringbuf,
+ u64 offset, unsigned flags);
+
/**
* List of objects currently involved in rendering from the
* ringbuffer.
u32 (*get_cmd_length_mask)(u32 cmd_header);
};
-static inline bool
-intel_ring_initialized(struct intel_engine_cs *ring)
-{
- return ring->buffer && ring->buffer->obj;
-}
+bool intel_ring_initialized(struct intel_engine_cs *ring);
static inline unsigned
intel_ring_flag(struct intel_engine_cs *ring)
#define I915_GEM_HWS_SCRATCH_INDEX 0x30
#define I915_GEM_HWS_SCRATCH_ADDR (I915_GEM_HWS_SCRATCH_INDEX << MI_STORE_DWORD_INDEX_SHIFT)
+void intel_destroy_ringbuffer_obj(struct intel_ringbuffer *ringbuf);
+int intel_alloc_ringbuffer_obj(struct drm_device *dev,
+ struct intel_ringbuffer *ringbuf);
+
void intel_stop_ring_buffer(struct intel_engine_cs *ring);
void intel_cleanup_ring_buffer(struct intel_engine_cs *ring);
struct intel_ringbuffer *ringbuf = ring->buffer;
ringbuf->tail &= ringbuf->size - 1;
}
+int __intel_ring_space(int head, int tail, int size);
+int intel_ring_space(struct intel_ringbuffer *ringbuf);
+bool intel_ring_stopped(struct intel_engine_cs *ring);
void __intel_ring_advance(struct intel_engine_cs *ring);
int __must_check intel_ring_idle(struct intel_engine_cs *ring);
int intel_ring_flush_all_caches(struct intel_engine_cs *ring);
int intel_ring_invalidate_all_caches(struct intel_engine_cs *ring);
+void intel_fini_pipe_control(struct intel_engine_cs *ring);
+int intel_init_pipe_control(struct intel_engine_cs *ring);
+
int intel_init_render_ring_buffer(struct drm_device *dev);
int intel_init_bsd_ring_buffer(struct drm_device *dev);
int intel_init_bsd2_ring_buffer(struct drm_device *dev);
enum pipe pipe = crtc->pipe;
long timeout = msecs_to_jiffies_timeout(1);
int scanline, min, max, vblank_start;
+ wait_queue_head_t *wq = drm_crtc_vblank_waitqueue(&crtc->base);
DEFINE_WAIT(wait);
WARN_ON(!drm_modeset_is_locked(&crtc->base.mutex));
* other CPUs can see the task state update by the time we
* read the scanline.
*/
- prepare_to_wait(&crtc->vbl_wait, &wait, TASK_UNINTERRUPTIBLE);
+ prepare_to_wait(wq, &wait, TASK_UNINTERRUPTIBLE);
scanline = intel_get_crtc_scanline(crtc);
if (scanline < min || scanline > max)
local_irq_disable();
}
- finish_wait(&crtc->vbl_wait, &wait);
+ finish_wait(wq, &wait);
drm_vblank_put(dev, pipe);
sprctl &= ~SP_PIXFORMAT_MASK;
sprctl &= ~SP_YUV_BYTE_ORDER_MASK;
sprctl &= ~SP_TILED;
+ sprctl &= ~SP_ROTATE_180;
switch (fb->pixel_format) {
case DRM_FORMAT_YUYV:
fb->pitches[0]);
linear_offset -= sprsurf_offset;
+ if (intel_plane->rotation == BIT(DRM_ROTATE_180)) {
+ sprctl |= SP_ROTATE_180;
+
+ x += src_w;
+ y += src_h;
+ linear_offset += src_h * fb->pitches[0] + src_w * pixel_size;
+ }
+
atomic_update = intel_pipe_update_start(intel_crtc, &start_vbl_count);
intel_update_primary_plane(intel_crtc);
sprctl &= ~SPRITE_RGB_ORDER_RGBX;
sprctl &= ~SPRITE_YUV_BYTE_ORDER_MASK;
sprctl &= ~SPRITE_TILED;
+ sprctl &= ~SPRITE_ROTATE_180;
switch (fb->pixel_format) {
case DRM_FORMAT_XBGR8888:
pixel_size, fb->pitches[0]);
linear_offset -= sprsurf_offset;
+ if (intel_plane->rotation == BIT(DRM_ROTATE_180)) {
+ sprctl |= SPRITE_ROTATE_180;
+
+ /* HSW and BDW does this automagically in hardware */
+ if (!IS_HASWELL(dev) && !IS_BROADWELL(dev)) {
+ x += src_w;
+ y += src_h;
+ linear_offset += src_h * fb->pitches[0] +
+ src_w * pixel_size;
+ }
+ }
+
atomic_update = intel_pipe_update_start(intel_crtc, &start_vbl_count);
intel_update_primary_plane(intel_crtc);
dvscntr &= ~DVS_RGB_ORDER_XBGR;
dvscntr &= ~DVS_YUV_BYTE_ORDER_MASK;
dvscntr &= ~DVS_TILED;
+ dvscntr &= ~DVS_ROTATE_180;
switch (fb->pixel_format) {
case DRM_FORMAT_XBGR8888:
pixel_size, fb->pitches[0]);
linear_offset -= dvssurf_offset;
+ if (intel_plane->rotation == BIT(DRM_ROTATE_180)) {
+ dvscntr |= DVS_ROTATE_180;
+
+ x += src_w;
+ y += src_h;
+ linear_offset += src_h * fb->pitches[0] + src_w * pixel_size;
+ }
+
atomic_update = intel_pipe_update_start(intel_crtc, &start_vbl_count);
intel_update_primary_plane(intel_crtc);
max_scale = intel_plane->max_downscale << 16;
min_scale = intel_plane->can_scale ? 1 : (1 << 16);
+ drm_rect_rotate(&src, fb->width << 16, fb->height << 16,
+ intel_plane->rotation);
+
hscale = drm_rect_calc_hscale_relaxed(&src, &dst, min_scale, max_scale);
BUG_ON(hscale < 0);
drm_rect_width(&dst) * hscale - drm_rect_width(&src),
drm_rect_height(&dst) * vscale - drm_rect_height(&src));
+ drm_rect_rotate_inv(&src, fb->width << 16, fb->height << 16,
+ intel_plane->rotation);
+
/* sanity check to make sure the src viewport wasn't enlarged */
WARN_ON(src.x1 < (int) src_x ||
src.y1 < (int) src_y ||
return ret;
}
-void intel_plane_restore(struct drm_plane *plane)
+int intel_plane_set_property(struct drm_plane *plane,
+ struct drm_property *prop,
+ uint64_t val)
+{
+ struct drm_device *dev = plane->dev;
+ struct intel_plane *intel_plane = to_intel_plane(plane);
+ uint64_t old_val;
+ int ret = -ENOENT;
+
+ if (prop == dev->mode_config.rotation_property) {
+ /* exactly one rotation angle please */
+ if (hweight32(val & 0xf) != 1)
+ return -EINVAL;
+
+ if (intel_plane->rotation == val)
+ return 0;
+
+ old_val = intel_plane->rotation;
+ intel_plane->rotation = val;
+ ret = intel_plane_restore(plane);
+ if (ret)
+ intel_plane->rotation = old_val;
+ }
+
+ return ret;
+}
+
+int intel_plane_restore(struct drm_plane *plane)
{
struct intel_plane *intel_plane = to_intel_plane(plane);
if (!plane->crtc || !plane->fb)
- return;
+ return 0;
- intel_update_plane(plane, plane->crtc, plane->fb,
- intel_plane->crtc_x, intel_plane->crtc_y,
- intel_plane->crtc_w, intel_plane->crtc_h,
- intel_plane->src_x, intel_plane->src_y,
- intel_plane->src_w, intel_plane->src_h);
+ return plane->funcs->update_plane(plane, plane->crtc, plane->fb,
+ intel_plane->crtc_x, intel_plane->crtc_y,
+ intel_plane->crtc_w, intel_plane->crtc_h,
+ intel_plane->src_x, intel_plane->src_y,
+ intel_plane->src_w, intel_plane->src_h);
}
void intel_plane_disable(struct drm_plane *plane)
.update_plane = intel_update_plane,
.disable_plane = intel_disable_plane,
.destroy = intel_destroy_plane,
+ .set_property = intel_plane_set_property,
};
static uint32_t ilk_plane_formats[] = {
intel_plane->pipe = pipe;
intel_plane->plane = plane;
+ intel_plane->rotation = BIT(DRM_ROTATE_0);
possible_crtcs = (1 << pipe);
- ret = drm_plane_init(dev, &intel_plane->base, possible_crtcs,
- &intel_plane_funcs,
- plane_formats, num_plane_formats,
- false);
- if (ret)
+ ret = drm_universal_plane_init(dev, &intel_plane->base, possible_crtcs,
+ &intel_plane_funcs,
+ plane_formats, num_plane_formats,
+ DRM_PLANE_TYPE_OVERLAY);
+ if (ret) {
kfree(intel_plane);
+ goto out;
+ }
+
+ if (!dev->mode_config.rotation_property)
+ dev->mode_config.rotation_property =
+ drm_mode_create_rotation_property(dev,
+ BIT(DRM_ROTATE_0) |
+ BIT(DRM_ROTATE_180));
+
+ if (dev->mode_config.rotation_property)
+ drm_object_attach_property(&intel_plane->base.base,
+ dev->mode_config.rotation_property,
+ intel_plane->rotation);
+ out:
return ret;
}
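
With the rotation property attached to every sprite plane, generic userspace
can request the 180 degree flip through the standard property interface. A
minimal libdrm-style sketch (plane_id and rotation_prop_id are placeholders;
a real client looks the property up by name via drmModeObjectGetProperties()):

    /* request a 180 degree rotation on one plane (ids are placeholders) */
    drmModeObjectSetProperty(fd, plane_id, DRM_MODE_OBJECT_PLANE,
                             rotation_prop_id, 1 << 2 /* DRM_ROTATE_180 */);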
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ /* Prevents vblank waits from timing out in intel_tv_detect_type() */
+ intel_wait_for_vblank(encoder->base.dev,
+ to_intel_crtc(encoder->base.crtc)->pipe);
+
I915_WRITE(TV_CTL, I915_READ(TV_CTL) | TV_ENC_ENABLE);
}
{
struct drm_display_mode mode;
struct intel_tv *intel_tv = intel_attached_tv(connector);
+ enum drm_connector_status status;
int type;
DRM_DEBUG_KMS("[CONNECTOR:%d:%s] force=%d\n",
struct intel_load_detect_pipe tmp;
struct drm_modeset_acquire_ctx ctx;
+ drm_modeset_acquire_init(&ctx, 0);
+
if (intel_get_load_detect_pipe(connector, &mode, &tmp, &ctx)) {
type = intel_tv_detect_type(intel_tv, connector);
- intel_release_load_detect_pipe(connector, &tmp, &ctx);
+ intel_release_load_detect_pipe(connector, &tmp);
+ status = type < 0 ?
+ connector_status_disconnected :
+ connector_status_connected;
} else
- return connector_status_unknown;
+ status = connector_status_unknown;
+
+ drm_modeset_drop_locks(&ctx);
+ drm_modeset_acquire_fini(&ctx);
} else
return connector->status;
- if (type < 0)
- return connector_status_disconnected;
+ if (status != connector_status_connected)
+ return status;
intel_tv->type = type;
intel_tv_find_better_format(connector);
{
u32 forcewake_ack;
- if (IS_HASWELL(dev_priv->dev) || IS_GEN8(dev_priv->dev))
+ if (IS_HASWELL(dev_priv->dev) || IS_BROADWELL(dev_priv->dev))
forcewake_ack = FORCEWAKE_ACK_HSW;
else
forcewake_ack = FORCEWAKE_MT_ACK;
else if (IS_GEN6(dev) || IS_GEN7(dev))
__gen6_gt_force_wake_reset(dev_priv);
- if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev) || IS_GEN8(dev))
+ if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev) || IS_BROADWELL(dev))
__gen7_gt_force_wake_mt_reset(dev_priv);
if (restore) { /* If reset with a user forcewake, try to restore */
if (IS_VALLEYVIEW(dev)) {
dev_priv->uncore.funcs.force_wake_get = __vlv_force_wake_get;
dev_priv->uncore.funcs.force_wake_put = __vlv_force_wake_put;
- } else if (IS_HASWELL(dev) || IS_GEN8(dev)) {
+ } else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
dev_priv->uncore.funcs.force_wake_get = __gen7_gt_force_wake_mt_get;
dev_priv->uncore.funcs.force_wake_put = __gen7_gt_force_wake_mt_put;
} else if (IS_IVYBRIDGE(dev)) {
return err;
}
- /* Make drm_addbufs happy by not trying to create a mapping for less
- * than a page.
+ /* Make drm_legacy_addbufs happy by not trying to create a mapping for
+ * less than a page.
*/
if (warp_size < PAGE_SIZE)
warp_size = PAGE_SIZE;
offset = 0;
- err = drm_addmap(dev, offset, warp_size,
- _DRM_AGP, _DRM_READ_ONLY, &dev_priv->warp);
+ err = drm_legacy_addmap(dev, offset, warp_size,
+ _DRM_AGP, _DRM_READ_ONLY, &dev_priv->warp);
if (err) {
DRM_ERROR("Unable to map WARP microcode: %d\n", err);
return err;
}
offset += warp_size;
- err = drm_addmap(dev, offset, dma_bs->primary_size,
- _DRM_AGP, _DRM_READ_ONLY, &dev_priv->primary);
+ err = drm_legacy_addmap(dev, offset, dma_bs->primary_size,
+ _DRM_AGP, _DRM_READ_ONLY, &dev_priv->primary);
if (err) {
DRM_ERROR("Unable to map primary DMA region: %d\n", err);
return err;
}
offset += dma_bs->primary_size;
- err = drm_addmap(dev, offset, secondary_size,
- _DRM_AGP, 0, &dev->agp_buffer_map);
+ err = drm_legacy_addmap(dev, offset, secondary_size,
+ _DRM_AGP, 0, &dev->agp_buffer_map);
if (err) {
DRM_ERROR("Unable to map secondary DMA region: %d\n", err);
return err;
req.flags = _DRM_AGP_BUFFER;
req.agp_start = offset;
- err = drm_addbufs_agp(dev, &req);
+ err = drm_legacy_addbufs_agp(dev, &req);
if (err) {
DRM_ERROR("Unable to add secondary DMA buffers: %d\n", err);
return err;
}
offset += secondary_size;
- err = drm_addmap(dev, offset, agp_size - offset,
- _DRM_AGP, 0, &dev_priv->agp_textures);
+ err = drm_legacy_addmap(dev, offset, agp_size - offset,
+ _DRM_AGP, 0, &dev_priv->agp_textures);
if (err) {
DRM_ERROR("Unable to map AGP texture region %d\n", err);
return err;
}
- drm_core_ioremap(dev_priv->warp, dev);
- drm_core_ioremap(dev_priv->primary, dev);
- drm_core_ioremap(dev->agp_buffer_map, dev);
+ drm_legacy_ioremap(dev_priv->warp, dev);
+ drm_legacy_ioremap(dev_priv->primary, dev);
+ drm_legacy_ioremap(dev->agp_buffer_map, dev);
if (!dev_priv->warp->handle ||
!dev_priv->primary->handle || !dev->agp_buffer_map->handle) {
*
* \todo
* Determine whether the maximum address passed to drm_pci_alloc is correct.
- * The same goes for drm_addbufs_pci.
+ * The same goes for drm_legacy_addbufs_pci.
*
* \sa mga_do_dma_bootstrap, mga_do_agp_dma_bootstrap
*/
return -EFAULT;
}
- /* Make drm_addbufs happy by not trying to create a mapping for less
- * than a page.
+ /* Make drm_legacy_addbufs happy by not trying to create a mapping for
+ * less than a page.
*/
if (warp_size < PAGE_SIZE)
warp_size = PAGE_SIZE;
/* The proper alignment is 0x100 for this mapping */
- err = drm_addmap(dev, 0, warp_size, _DRM_CONSISTENT,
- _DRM_READ_ONLY, &dev_priv->warp);
+ err = drm_legacy_addmap(dev, 0, warp_size, _DRM_CONSISTENT,
+ _DRM_READ_ONLY, &dev_priv->warp);
if (err != 0) {
DRM_ERROR("Unable to create mapping for WARP microcode: %d\n",
err);
for (primary_size = dma_bs->primary_size; primary_size != 0;
primary_size >>= 1) {
/* The proper alignment for this mapping is 0x04 */
- err = drm_addmap(dev, 0, primary_size, _DRM_CONSISTENT,
- _DRM_READ_ONLY, &dev_priv->primary);
+ err = drm_legacy_addmap(dev, 0, primary_size, _DRM_CONSISTENT,
+ _DRM_READ_ONLY, &dev_priv->primary);
if (!err)
break;
}
req.count = bin_count;
req.size = dma_bs->secondary_bin_size;
- err = drm_addbufs_pci(dev, &req);
+ err = drm_legacy_addbufs_pci(dev, &req);
if (!err)
break;
}
/* The first steps are the same for both PCI and AGP based DMA. Map
* the cards MMIO registers and map a status page.
*/
- err = drm_addmap(dev, dev_priv->mmio_base, dev_priv->mmio_size,
- _DRM_REGISTERS, _DRM_READ_ONLY, &dev_priv->mmio);
+ err = drm_legacy_addmap(dev, dev_priv->mmio_base, dev_priv->mmio_size,
+ _DRM_REGISTERS, _DRM_READ_ONLY,
+ &dev_priv->mmio);
if (err) {
DRM_ERROR("Unable to map MMIO region: %d\n", err);
return err;
}
- err = drm_addmap(dev, 0, SAREA_MAX, _DRM_SHM,
- _DRM_READ_ONLY | _DRM_LOCKED | _DRM_KERNEL,
+ err = drm_legacy_addmap(dev, 0, SAREA_MAX, _DRM_SHM,
+ _DRM_READ_ONLY | _DRM_LOCKED | _DRM_KERNEL,
&dev_priv->status);
if (err) {
DRM_ERROR("Unable to map status region: %d\n", err);
dev_priv->texture_offset = init->texture_offset[0];
dev_priv->texture_size = init->texture_size[0];
- dev_priv->sarea = drm_getsarea(dev);
+ dev_priv->sarea = drm_legacy_getsarea(dev);
if (!dev_priv->sarea) {
DRM_ERROR("failed to find sarea!\n");
return -EINVAL;
dev_priv->dma_access = MGA_PAGPXFER;
dev_priv->wagp_enable = MGA_WAGP_ENABLE;
- dev_priv->status = drm_core_findmap(dev, init->status_offset);
+ dev_priv->status = drm_legacy_findmap(dev, init->status_offset);
if (!dev_priv->status) {
DRM_ERROR("failed to find status page!\n");
return -EINVAL;
}
- dev_priv->mmio = drm_core_findmap(dev, init->mmio_offset);
+ dev_priv->mmio = drm_legacy_findmap(dev, init->mmio_offset);
if (!dev_priv->mmio) {
DRM_ERROR("failed to find mmio region!\n");
return -EINVAL;
}
- dev_priv->warp = drm_core_findmap(dev, init->warp_offset);
+ dev_priv->warp = drm_legacy_findmap(dev, init->warp_offset);
if (!dev_priv->warp) {
DRM_ERROR("failed to find warp microcode region!\n");
return -EINVAL;
}
- dev_priv->primary = drm_core_findmap(dev, init->primary_offset);
+ dev_priv->primary = drm_legacy_findmap(dev, init->primary_offset);
if (!dev_priv->primary) {
DRM_ERROR("failed to find primary dma region!\n");
return -EINVAL;
}
dev->agp_buffer_token = init->buffers_offset;
dev->agp_buffer_map =
- drm_core_findmap(dev, init->buffers_offset);
+ drm_legacy_findmap(dev, init->buffers_offset);
if (!dev->agp_buffer_map) {
DRM_ERROR("failed to find dma buffer region!\n");
return -EINVAL;
}
- drm_core_ioremap(dev_priv->warp, dev);
- drm_core_ioremap(dev_priv->primary, dev);
- drm_core_ioremap(dev->agp_buffer_map, dev);
+ drm_legacy_ioremap(dev_priv->warp, dev);
+ drm_legacy_ioremap(dev_priv->primary, dev);
+ drm_legacy_ioremap(dev->agp_buffer_map, dev);
}
dev_priv->sarea_priv =
if ((dev_priv->warp != NULL)
&& (dev_priv->warp->type != _DRM_CONSISTENT))
- drm_core_ioremapfree(dev_priv->warp, dev);
+ drm_legacy_ioremapfree(dev_priv->warp, dev);
if ((dev_priv->primary != NULL)
&& (dev_priv->primary->type != _DRM_CONSISTENT))
- drm_core_ioremapfree(dev_priv->primary, dev);
+ drm_legacy_ioremapfree(dev_priv->primary, dev);
if (dev->agp_buffer_map != NULL)
- drm_core_ioremapfree(dev->agp_buffer_map, dev);
+ drm_legacy_ioremapfree(dev->agp_buffer_map, dev);
if (dev_priv->used_new_dma_init) {
#if __OS_HAS_AGP
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
- .mmap = drm_mmap,
+ .mmap = drm_legacy_mmap,
.poll = drm_poll,
#ifdef CONFIG_COMPAT
.compat_ioctl = mga_compat_ioctl,
.load = mga_driver_load,
.unload = mga_driver_unload,
.lastclose = mga_driver_lastclose,
+ .set_busid = drm_pci_set_busid,
.dma_quiescent = mga_driver_dma_quiescent,
.device_is_agp = mga_driver_device_is_agp,
.get_vblank_counter = mga_get_vblank_counter,
#ifndef __MGA_DRV_H__
#define __MGA_DRV_H__
+#include <drm/drm_legacy.h>
+
/* General customization:
*/
.driver_features = DRIVER_GEM | DRIVER_MODESET,
.load = mgag200_driver_load,
.unload = mgag200_driver_unload,
+ .set_busid = drm_pci_set_busid,
.fops = &mgag200_driver_fops,
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
#include <drm/ttm/ttm_memory.h>
#include <drm/ttm/ttm_module.h>
+#include <drm/drm_gem.h>
+
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>
resource_size_t rmmio_size;
void __iomem *rmmio;
- drm_local_map_t *framebuffer;
-
struct mga_mc mc;
struct mga_mode_info mode_info;
struct ttm_placement placement;
struct ttm_bo_kmap_obj kmap;
struct drm_gem_object gem;
- u32 placements[3];
+ struct ttm_place placements[3];
int pin_count;
};
#define gem_to_mga_bo(gobj) container_of((gobj), struct mgag200_bo, gem)
static int mgag200fb_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
- struct mga_fbdev *mfbdev = (struct mga_fbdev *)helper;
+ struct mga_fbdev *mfbdev =
+ container_of(helper, struct mga_fbdev, helper);
struct drm_device *dev = mfbdev->helper.dev;
struct drm_mode_fb_cmd2 mode_cmd;
struct mga_device *mdev = dev->dev_private;
{
struct drm_device *dev = connector->dev;
struct mga_device *mdev = (struct mga_device*)dev->dev_private;
- struct mga_fbdev *mfbdev = mdev->mfbdev;
- struct drm_fb_helper *fb_helper = &mfbdev->helper;
- struct drm_fb_helper_connector *fb_helper_conn = NULL;
int bpp = 32;
- int i = 0;
if (IS_G200_SE(mdev)) {
if (mdev->unique_rev_id == 0x01) {
}
/* Validate the mode input by the user */
- for (i = 0; i < fb_helper->connector_count; i++) {
- if (fb_helper->connector_info[i]->connector == connector) {
- /* Found the helper for this connector */
- fb_helper_conn = fb_helper->connector_info[i];
- if (fb_helper_conn->cmdline_mode.specified) {
- if (fb_helper_conn->cmdline_mode.bpp_specified) {
- bpp = fb_helper_conn->cmdline_mode.bpp;
- }
- }
- }
+ if (connector->cmdline_mode.specified) {
+ if (connector->cmdline_mode.bpp_specified)
+ bpp = connector->cmdline_mode.bpp;
}
if ((mode->hdisplay * mode->vdisplay * (bpp/8)) > mdev->mc.vram_size) {
- if (fb_helper_conn)
- fb_helper_conn->cmdline_mode.specified = false;
+ if (connector->cmdline_mode.specified)
+ connector->cmdline_mode.specified = false;
return MODE_BAD;
}
void mgag200_ttm_placement(struct mgag200_bo *bo, int domain)
{
u32 c = 0;
- bo->placement.fpfn = 0;
- bo->placement.lpfn = 0;
+ unsigned i;
+
bo->placement.placement = bo->placements;
bo->placement.busy_placement = bo->placements;
if (domain & TTM_PL_FLAG_VRAM)
- bo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
+ bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
if (domain & TTM_PL_FLAG_SYSTEM)
- bo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
if (!c)
- bo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
bo->placement.num_placement = c;
bo->placement.num_busy_placement = c;
+ for (i = 0; i < c; ++i) {
+ bo->placements[i].fpfn = 0;
+ bo->placements[i].lpfn = 0;
+ }
}
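
The switch from a bare u32 flags array to struct ttm_place mirrors the TTM
API change that moved the fpfn/lpfn page-range limits out of the single
struct ttm_placement and into each individual placement entry, which is why
the zeroing now happens per entry in the loop above.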
int mgag200_bo_create(struct drm_device *dev, int size, int align,
ret = ttm_bo_init(&mdev->ttm.bdev, &mgabo->bo, size,
ttm_bo_type_device, &mgabo->placement,
align >> PAGE_SHIFT, false, NULL, acc_size,
- NULL, mgag200_bo_ttm_destroy);
+ NULL, NULL, mgag200_bo_ttm_destroy);
if (ret)
return ret;
mgag200_ttm_placement(bo, pl_flag);
for (i = 0; i < bo->placement.num_placement; i++)
- bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
return 0;
for (i = 0; i < bo->placement.num_placement ; i++)
- bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
mgag200_ttm_placement(bo, TTM_PL_FLAG_SYSTEM);
for (i = 0; i < bo->placement.num_placement ; i++)
- bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret) {
struct mga_device *mdev;
if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET))
- return drm_mmap(filp, vma);
+ return -EINVAL;
file_priv = filp->private_data;
mdev = file_priv->minor->dev->dev_private;
depends on DRM
depends on ARCH_QCOM || (ARM && COMPILE_TEST)
select DRM_KMS_HELPER
+ select DRM_PANEL
select SHMEM
select TMPFS
default y
endif
msm-y := \
+ adreno/adreno_device.o \
adreno/adreno_gpu.o \
adreno/a3xx_gpu.o \
hdmi/hdmi.o \
mdp/mdp_kms.o \
mdp/mdp4/mdp4_crtc.o \
mdp/mdp4/mdp4_dtv_encoder.o \
+ mdp/mdp4/mdp4_lcdc_encoder.o \
+ mdp/mdp4/mdp4_lvds_connector.o \
mdp/mdp4/mdp4_irq.o \
mdp/mdp4/mdp4_kms.o \
mdp/mdp4/mdp4_plane.o \
msm_ringbuffer.o
msm-$(CONFIG_DRM_MSM_FBDEV) += msm_fbdev.o
+msm-$(CONFIG_COMMON_CLK) += mdp/mdp4/mdp4_lvds_pll.o
obj-$(CONFIG_DRM_MSM) += msm.o
- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32901 bytes, from 2014-06-02 15:21:30)
- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 9859 bytes, from 2014-06-02 15:21:30)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 14477 bytes, from 2014-05-16 11:51:57)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 58020 bytes, from 2014-06-25 12:57:16)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 26602 bytes, from 2014-06-25 12:57:16)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 14960 bytes, from 2014-07-27 17:22:13)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 58020 bytes, from 2014-08-01 12:22:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 41068 bytes, from 2014-08-01 12:22:48)
Copyright (C) 2013-2014 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32901 bytes, from 2014-06-02 15:21:30)
- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 9859 bytes, from 2014-06-02 15:21:30)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 14477 bytes, from 2014-05-16 11:51:57)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 58020 bytes, from 2014-06-25 12:57:16)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 26602 bytes, from 2014-06-25 12:57:16)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 14960 bytes, from 2014-07-27 17:22:13)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 58020 bytes, from 2014-08-01 12:22:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 41068 bytes, from 2014-08-01 12:22:48)
Copyright (C) 2013-2014 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
#define A3XX_GRAS_SU_POLY_OFFSET_SCALE_VAL__SHIFT 0
static inline uint32_t A3XX_GRAS_SU_POLY_OFFSET_SCALE_VAL(float val)
{
- return ((((uint32_t)(val * 40.0))) << A3XX_GRAS_SU_POLY_OFFSET_SCALE_VAL__SHIFT) & A3XX_GRAS_SU_POLY_OFFSET_SCALE_VAL__MASK;
+ return ((((uint32_t)(val * 28.0))) << A3XX_GRAS_SU_POLY_OFFSET_SCALE_VAL__SHIFT) & A3XX_GRAS_SU_POLY_OFFSET_SCALE_VAL__MASK;
}
#define REG_A3XX_GRAS_SU_POLY_OFFSET_OFFSET 0x0000206d
#define A3XX_GRAS_SU_POLY_OFFSET_OFFSET__SHIFT 0
static inline uint32_t A3XX_GRAS_SU_POLY_OFFSET_OFFSET(float val)
{
- return ((((uint32_t)(val * 44.0))) << A3XX_GRAS_SU_POLY_OFFSET_OFFSET__SHIFT) & A3XX_GRAS_SU_POLY_OFFSET_OFFSET__MASK;
+ return ((((uint32_t)(val * 28.0))) << A3XX_GRAS_SU_POLY_OFFSET_OFFSET__SHIFT) & A3XX_GRAS_SU_POLY_OFFSET_OFFSET__MASK;
}
#define REG_A3XX_GRAS_SU_MODE_CONTROL 0x00002070
{
return ((val) << A3XX_SP_VS_CTRL_REG1_CONSTFOOTPRINT__SHIFT) & A3XX_SP_VS_CTRL_REG1_CONSTFOOTPRINT__MASK;
}
-#define A3XX_SP_VS_CTRL_REG1_INITIALOUTSTANDING__MASK 0x3f000000
+#define A3XX_SP_VS_CTRL_REG1_INITIALOUTSTANDING__MASK 0x7f000000
#define A3XX_SP_VS_CTRL_REG1_INITIALOUTSTANDING__SHIFT 24
static inline uint32_t A3XX_SP_VS_CTRL_REG1_INITIALOUTSTANDING(uint32_t val)
{
A3XX_INT0_CP_AHB_ERROR_HALT | \
A3XX_INT0_UCHE_OOB_ACCESS)
+extern bool hang_debug;
-static bool hang_debug = false;
-MODULE_PARM_DESC(hang_debug, "Dump registers when hang is detected (can be slow!)");
-module_param_named(hang_debug, hang_debug, bool, 0600);
static void a3xx_dump(struct msm_gpu *gpu);
static void a3xx_me_init(struct msm_gpu *gpu)
0x2750, 0x2756, 0x2760, 0x2760, 0x300c, 0x300e, 0x301c, 0x301d,
0x302a, 0x302a, 0x302c, 0x302d, 0x3030, 0x3031, 0x3034, 0x3036,
0x303c, 0x303c, 0x305e, 0x305f,
+ ~0 /* sentinel */
};
#ifdef CONFIG_DEBUG_FS
static void a3xx_show(struct msm_gpu *gpu, struct seq_file *m)
{
- int i;
-
- adreno_show(gpu, m);
-
gpu->funcs->pm_resume(gpu);
-
seq_printf(m, "status: %08x\n",
gpu_read(gpu, REG_A3XX_RBBM_STATUS));
-
- /* dump these out in a form that can be parsed by demsm: */
- seq_printf(m, "IO:region %s 00000000 00020000\n", gpu->name);
- for (i = 0; i < ARRAY_SIZE(a3xx_registers); i += 2) {
- uint32_t start = a3xx_registers[i];
- uint32_t end = a3xx_registers[i+1];
- uint32_t addr;
-
- for (addr = start; addr <= end; addr++) {
- uint32_t val = gpu_read(gpu, addr);
- seq_printf(m, "IO:R %08x %08x\n", addr<<2, val);
- }
- }
-
gpu->funcs->pm_suspend(gpu);
+ adreno_show(gpu, m);
}
#endif
/* would be nice to not have to duplicate the _show() stuff with printk(): */
static void a3xx_dump(struct msm_gpu *gpu)
{
- int i;
-
- adreno_dump(gpu);
printk("status: %08x\n",
gpu_read(gpu, REG_A3XX_RBBM_STATUS));
-
- /* dump these out in a form that can be parsed by demsm: */
- printk("IO:region %s 00000000 00020000\n", gpu->name);
- for (i = 0; i < ARRAY_SIZE(a3xx_registers); i += 2) {
- uint32_t start = a3xx_registers[i];
- uint32_t end = a3xx_registers[i+1];
- uint32_t addr;
-
- for (addr = start; addr <= end; addr++) {
- uint32_t val = gpu_read(gpu, addr);
- printk("IO:R %08x %08x\n", addr<<2, val);
- }
- }
+ adreno_dump(gpu);
}
static const struct adreno_gpu_funcs funcs = {
struct msm_gpu *gpu;
struct msm_drm_private *priv = dev->dev_private;
struct platform_device *pdev = priv->gpu_pdev;
- struct adreno_platform_config *config;
int ret;
if (!pdev) {
goto fail;
}
- config = pdev->dev.platform_data;
-
a3xx_gpu = kzalloc(sizeof(*a3xx_gpu), GFP_KERNEL);
if (!a3xx_gpu) {
ret = -ENOMEM;
a3xx_gpu->pdev = pdev;
- gpu->fast_rate = config->fast_rate;
- gpu->slow_rate = config->slow_rate;
- gpu->bus_freq = config->bus_freq;
-#ifdef CONFIG_MSM_BUS_SCALING
- gpu->bus_scale_table = config->bus_scale_table;
-#endif
-
- DBG("fast_rate=%u, slow_rate=%u, bus_freq=%u",
- gpu->fast_rate, gpu->slow_rate, gpu->bus_freq);
-
gpu->perfcntrs = perfcntrs;
gpu->num_perfcntrs = ARRAY_SIZE(perfcntrs);
- ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, config->rev);
+ adreno_gpu->registers = a3xx_registers;
+
+ ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs);
if (ret)
goto fail;
return ERR_PTR(ret);
}
-
-/*
- * The a3xx device:
- */
-
-#if defined(CONFIG_MSM_BUS_SCALING) && !defined(CONFIG_OF)
-# include <mach/kgsl.h>
-#endif
-
-static void set_gpu_pdev(struct drm_device *dev,
- struct platform_device *pdev)
-{
- struct msm_drm_private *priv = dev->dev_private;
- priv->gpu_pdev = pdev;
-}
-
-static int a3xx_bind(struct device *dev, struct device *master, void *data)
-{
- static struct adreno_platform_config config = {};
-#ifdef CONFIG_OF
- struct device_node *child, *node = dev->of_node;
- u32 val;
- int ret;
-
- ret = of_property_read_u32(node, "qcom,chipid", &val);
- if (ret) {
- dev_err(dev, "could not find chipid: %d\n", ret);
- return ret;
- }
-
- config.rev = ADRENO_REV((val >> 24) & 0xff,
- (val >> 16) & 0xff, (val >> 8) & 0xff, val & 0xff);
-
- /* find clock rates: */
- config.fast_rate = 0;
- config.slow_rate = ~0;
- for_each_child_of_node(node, child) {
- if (of_device_is_compatible(child, "qcom,gpu-pwrlevels")) {
- struct device_node *pwrlvl;
- for_each_child_of_node(child, pwrlvl) {
- ret = of_property_read_u32(pwrlvl, "qcom,gpu-freq", &val);
- if (ret) {
- dev_err(dev, "could not find gpu-freq: %d\n", ret);
- return ret;
- }
- config.fast_rate = max(config.fast_rate, val);
- config.slow_rate = min(config.slow_rate, val);
- }
- }
- }
-
- if (!config.fast_rate) {
- dev_err(dev, "could not find clk rates\n");
- return -ENXIO;
- }
-
-#else
- struct kgsl_device_platform_data *pdata = dev->platform_data;
- uint32_t version = socinfo_get_version();
- if (cpu_is_apq8064ab()) {
- config.fast_rate = 450000000;
- config.slow_rate = 27000000;
- config.bus_freq = 4;
- config.rev = ADRENO_REV(3, 2, 1, 0);
- } else if (cpu_is_apq8064()) {
- config.fast_rate = 400000000;
- config.slow_rate = 27000000;
- config.bus_freq = 4;
-
- if (SOCINFO_VERSION_MAJOR(version) == 2)
- config.rev = ADRENO_REV(3, 2, 0, 2);
- else if ((SOCINFO_VERSION_MAJOR(version) == 1) &&
- (SOCINFO_VERSION_MINOR(version) == 1))
- config.rev = ADRENO_REV(3, 2, 0, 1);
- else
- config.rev = ADRENO_REV(3, 2, 0, 0);
-
- } else if (cpu_is_msm8960ab()) {
- config.fast_rate = 400000000;
- config.slow_rate = 320000000;
- config.bus_freq = 4;
-
- if (SOCINFO_VERSION_MINOR(version) == 0)
- config.rev = ADRENO_REV(3, 2, 1, 0);
- else
- config.rev = ADRENO_REV(3, 2, 1, 1);
-
- } else if (cpu_is_msm8930()) {
- config.fast_rate = 400000000;
- config.slow_rate = 27000000;
- config.bus_freq = 3;
-
- if ((SOCINFO_VERSION_MAJOR(version) == 1) &&
- (SOCINFO_VERSION_MINOR(version) == 2))
- config.rev = ADRENO_REV(3, 0, 5, 2);
- else
- config.rev = ADRENO_REV(3, 0, 5, 0);
-
- }
-# ifdef CONFIG_MSM_BUS_SCALING
- config.bus_scale_table = pdata->bus_scale_table;
-# endif
-#endif
- dev->platform_data = &config;
- set_gpu_pdev(dev_get_drvdata(master), to_platform_device(dev));
- return 0;
-}
-
-static void a3xx_unbind(struct device *dev, struct device *master,
- void *data)
-{
- set_gpu_pdev(dev_get_drvdata(master), NULL);
-}
-
-static const struct component_ops a3xx_ops = {
- .bind = a3xx_bind,
- .unbind = a3xx_unbind,
-};
-
-static int a3xx_probe(struct platform_device *pdev)
-{
- return component_add(&pdev->dev, &a3xx_ops);
-}
-
-static int a3xx_remove(struct platform_device *pdev)
-{
- component_del(&pdev->dev, &a3xx_ops);
- return 0;
-}
-
-static const struct of_device_id dt_match[] = {
- { .compatible = "qcom,adreno-3xx" },
- /* for backwards compat w/ downstream kgsl DT files: */
- { .compatible = "qcom,kgsl-3d0" },
- {}
-};
-
-static struct platform_driver a3xx_driver = {
- .probe = a3xx_probe,
- .remove = a3xx_remove,
- .driver = {
- .name = "kgsl-3d0",
- .of_match_table = dt_match,
- },
-};
-
-void __init a3xx_register(void)
-{
- platform_driver_register(&a3xx_driver);
-}
-
-void __exit a3xx_unregister(void)
-{
- platform_driver_unregister(&a3xx_driver);
-}
- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32901 bytes, from 2014-06-02 15:21:30)
- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 9859 bytes, from 2014-06-02 15:21:30)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 14477 bytes, from 2014-05-16 11:51:57)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 58020 bytes, from 2014-06-25 12:57:16)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 26602 bytes, from 2014-06-25 12:57:16)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 14960 bytes, from 2014-07-27 17:22:13)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 58020 bytes, from 2014-08-01 12:22:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 41068 bytes, from 2014-08-01 12:22:48)
Copyright (C) 2013-2014 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
--- /dev/null
+/*
+ * Copyright (C) 2013-2014 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "adreno_gpu.h"
+
+#if defined(CONFIG_MSM_BUS_SCALING) && !defined(CONFIG_OF)
+# include <mach/kgsl.h>
+#endif
+
+#define ANY_ID 0xff
+
+bool hang_debug = false;
+MODULE_PARM_DESC(hang_debug, "Dump registers when hang is detected (can be slow!)");
+module_param_named(hang_debug, hang_debug, bool, 0600);
+
+struct msm_gpu *a3xx_gpu_init(struct drm_device *dev);
+
+static const struct adreno_info gpulist[] = {
+ {
+ .rev = ADRENO_REV(3, 0, 5, ANY_ID),
+ .revn = 305,
+ .name = "A305",
+ .pm4fw = "a300_pm4.fw",
+ .pfpfw = "a300_pfp.fw",
+ .gmem = SZ_256K,
+ .init = a3xx_gpu_init,
+ }, {
+ .rev = ADRENO_REV(3, 2, ANY_ID, ANY_ID),
+ .revn = 320,
+ .name = "A320",
+ .pm4fw = "a300_pm4.fw",
+ .pfpfw = "a300_pfp.fw",
+ .gmem = SZ_512K,
+ .init = a3xx_gpu_init,
+ }, {
+ .rev = ADRENO_REV(3, 3, 0, ANY_ID),
+ .revn = 330,
+ .name = "A330",
+ .pm4fw = "a330_pm4.fw",
+ .pfpfw = "a330_pfp.fw",
+ .gmem = SZ_1M,
+ .init = a3xx_gpu_init,
+ },
+};
+
+MODULE_FIRMWARE("a300_pm4.fw");
+MODULE_FIRMWARE("a300_pfp.fw");
+MODULE_FIRMWARE("a330_pm4.fw");
+MODULE_FIRMWARE("a330_pfp.fw");
+
+static inline bool _rev_match(uint8_t entry, uint8_t id)
+{
+ return (entry == ANY_ID) || (entry == id);
+}
+
+const struct adreno_info *adreno_info(struct adreno_rev rev)
+{
+ int i;
+
+ /* identify gpu: */
+ for (i = 0; i < ARRAY_SIZE(gpulist); i++) {
+ const struct adreno_info *info = &gpulist[i];
+ if (_rev_match(info->rev.core, rev.core) &&
+ _rev_match(info->rev.major, rev.major) &&
+ _rev_match(info->rev.minor, rev.minor) &&
+ _rev_match(info->rev.patchid, rev.patchid))
+ return info;
+ }
+
+ return NULL;
+}
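
With the table and matcher above, one gpulist entry covers a whole silicon
family: ANY_ID (0xff) fields match any value, so e.g. chip id 3.2.0.2 resolves
to the A320 entry. A standalone sketch of the same wildcard walk (it reproduces
only the match logic, not the kernel types):

#include <stdint.h>
#include <stdio.h>

#define ANY_ID 0xff

static int rev_match(uint8_t entry, uint8_t id)
{
	return (entry == ANY_ID) || (entry == id);
}

int main(void)
{
	const uint8_t a320[4] = { 3, 2, ANY_ID, ANY_ID };  /* gpulist entry */
	const uint8_t chip[4] = { 3, 2, 0, 2 };            /* probed rev */
	int i, match = 1;

	for (i = 0; i < 4; i++)
		match = match && rev_match(a320[i], chip[i]);

	printf("3.2.0.2 -> %s\n", match ? "A320" : "unknown");
	return 0;
}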
+
+struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
+{
+ struct msm_drm_private *priv = dev->dev_private;
+ struct platform_device *pdev = priv->gpu_pdev;
+ struct adreno_platform_config *config;
+ struct adreno_rev rev;
+ const struct adreno_info *info;
+ struct msm_gpu *gpu = NULL;
+
+ if (!pdev) {
+ dev_err(dev->dev, "no adreno device\n");
+ return NULL;
+ }
+
+ config = pdev->dev.platform_data;
+ rev = config->rev;
+ info = adreno_info(config->rev);
+
+ if (!info) {
+ dev_warn(dev->dev, "Unknown GPU revision: %u.%u.%u.%u\n",
+ rev.core, rev.major, rev.minor, rev.patchid);
+ return NULL;
+ }
+
+ DBG("Found GPU: %u.%u.%u.%u", rev.core, rev.major,
+ rev.minor, rev.patchid);
+
+ gpu = info->init(dev);
+ if (IS_ERR(gpu)) {
+ dev_warn(dev->dev, "failed to load adreno gpu\n");
+ gpu = NULL;
+ /* not fatal */
+ }
+
+ if (gpu) {
+ int ret;
+ mutex_lock(&dev->struct_mutex);
+ gpu->funcs->pm_resume(gpu);
+ mutex_unlock(&dev->struct_mutex);
+ ret = gpu->funcs->hw_init(gpu);
+ if (ret) {
+ dev_err(dev->dev, "gpu hw init failed: %d\n", ret);
+ gpu->funcs->destroy(gpu);
+ gpu = NULL;
+ } else {
+ /* give inactive pm a chance to kick in: */
+ msm_gpu_retire(gpu);
+ }
+ }
+
+ return gpu;
+}
+
+static void set_gpu_pdev(struct drm_device *dev,
+ struct platform_device *pdev)
+{
+ struct msm_drm_private *priv = dev->dev_private;
+ priv->gpu_pdev = pdev;
+}
+
+static int adreno_bind(struct device *dev, struct device *master, void *data)
+{
+ static struct adreno_platform_config config = {};
+#ifdef CONFIG_OF
+ struct device_node *child, *node = dev->of_node;
+ u32 val;
+ int ret;
+
+ ret = of_property_read_u32(node, "qcom,chipid", &val);
+ if (ret) {
+ dev_err(dev, "could not find chipid: %d\n", ret);
+ return ret;
+ }
+
+ config.rev = ADRENO_REV((val >> 24) & 0xff,
+ (val >> 16) & 0xff, (val >> 8) & 0xff, val & 0xff);
+
+ /* find clock rates: */
+ config.fast_rate = 0;
+ config.slow_rate = ~0;
+ for_each_child_of_node(node, child) {
+ if (of_device_is_compatible(child, "qcom,gpu-pwrlevels")) {
+ struct device_node *pwrlvl;
+ for_each_child_of_node(child, pwrlvl) {
+ ret = of_property_read_u32(pwrlvl, "qcom,gpu-freq", &val);
+ if (ret) {
+ dev_err(dev, "could not find gpu-freq: %d\n", ret);
+ return ret;
+ }
+ config.fast_rate = max(config.fast_rate, val);
+ config.slow_rate = min(config.slow_rate, val);
+ }
+ }
+ }
+
+ if (!config.fast_rate) {
+ dev_err(dev, "could not find clk rates\n");
+ return -ENXIO;
+ }
+
+#else
+ struct kgsl_device_platform_data *pdata = dev->platform_data;
+ uint32_t version = socinfo_get_version();
+ if (cpu_is_apq8064ab()) {
+ config.fast_rate = 450000000;
+ config.slow_rate = 27000000;
+ config.bus_freq = 4;
+ config.rev = ADRENO_REV(3, 2, 1, 0);
+ } else if (cpu_is_apq8064()) {
+ config.fast_rate = 400000000;
+ config.slow_rate = 27000000;
+ config.bus_freq = 4;
+
+ if (SOCINFO_VERSION_MAJOR(version) == 2)
+ config.rev = ADRENO_REV(3, 2, 0, 2);
+ else if ((SOCINFO_VERSION_MAJOR(version) == 1) &&
+ (SOCINFO_VERSION_MINOR(version) == 1))
+ config.rev = ADRENO_REV(3, 2, 0, 1);
+ else
+ config.rev = ADRENO_REV(3, 2, 0, 0);
+
+ } else if (cpu_is_msm8960ab()) {
+ config.fast_rate = 400000000;
+ config.slow_rate = 320000000;
+ config.bus_freq = 4;
+
+ if (SOCINFO_VERSION_MINOR(version) == 0)
+ config.rev = ADRENO_REV(3, 2, 1, 0);
+ else
+ config.rev = ADRENO_REV(3, 2, 1, 1);
+
+ } else if (cpu_is_msm8930()) {
+ config.fast_rate = 400000000;
+ config.slow_rate = 27000000;
+ config.bus_freq = 3;
+
+ if ((SOCINFO_VERSION_MAJOR(version) == 1) &&
+ (SOCINFO_VERSION_MINOR(version) == 2))
+ config.rev = ADRENO_REV(3, 0, 5, 2);
+ else
+ config.rev = ADRENO_REV(3, 0, 5, 0);
+
+ }
+# ifdef CONFIG_MSM_BUS_SCALING
+ config.bus_scale_table = pdata->bus_scale_table;
+# endif
+#endif
+ dev->platform_data = &config;
+ set_gpu_pdev(dev_get_drvdata(master), to_platform_device(dev));
+ return 0;
+}
+
+static void adreno_unbind(struct device *dev, struct device *master,
+ void *data)
+{
+ set_gpu_pdev(dev_get_drvdata(master), NULL);
+}
+
+static const struct component_ops a3xx_ops = {
+ .bind = adreno_bind,
+ .unbind = adreno_unbind,
+};
+
+static int adreno_probe(struct platform_device *pdev)
+{
+ return component_add(&pdev->dev, &a3xx_ops);
+}
+
+static int adreno_remove(struct platform_device *pdev)
+{
+ component_del(&pdev->dev, &a3xx_ops);
+ return 0;
+}
+
+static const struct of_device_id dt_match[] = {
+ { .compatible = "qcom,adreno-3xx" },
+ /* for backwards compat w/ downstream kgsl DT files: */
+ { .compatible = "qcom,kgsl-3d0" },
+ {}
+};
+
+static struct platform_driver adreno_driver = {
+ .probe = adreno_probe,
+ .remove = adreno_remove,
+ .driver = {
+ .name = "adreno",
+ .of_match_table = dt_match,
+ },
+};
+
+void __init adreno_register(void)
+{
+ platform_driver_register(&adreno_driver);
+}
+
+void __exit adreno_unregister(void)
+{
+ platform_driver_unregister(&adreno_driver);
+}
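
adreno_probe() no longer touches the hardware at all; it only registers a
component. The real work happens when the msm master binds everything
(adreno_bind() stashes the pdev via set_gpu_pdev()) and later calls
adreno_load_gpu(). A rough sketch of the consuming side, under the assumption
that the master treats a missing GPU as non-fatal (the function name here is
illustrative, not the literal msm_drv.c code):

static void msm_master_load_gpu(struct drm_device *ddev)
{
	struct msm_gpu *gpu = adreno_load_gpu(ddev);

	if (!gpu)
		DBG("no GPU, display keeps working without acceleration");
	/* adreno_load_gpu() already ran hw_init and kicked inactive pm */
}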
#include "msm_gem.h"
#include "msm_mmu.h"
-struct adreno_info {
- struct adreno_rev rev;
- uint32_t revn;
- const char *name;
- const char *pm4fw, *pfpfw;
- uint32_t gmem;
-};
-
-#define ANY_ID 0xff
-
-static const struct adreno_info gpulist[] = {
- {
- .rev = ADRENO_REV(3, 0, 5, ANY_ID),
- .revn = 305,
- .name = "A305",
- .pm4fw = "a300_pm4.fw",
- .pfpfw = "a300_pfp.fw",
- .gmem = SZ_256K,
- }, {
- .rev = ADRENO_REV(3, 2, ANY_ID, ANY_ID),
- .revn = 320,
- .name = "A320",
- .pm4fw = "a300_pm4.fw",
- .pfpfw = "a300_pfp.fw",
- .gmem = SZ_512K,
- }, {
- .rev = ADRENO_REV(3, 3, 0, ANY_ID),
- .revn = 330,
- .name = "A330",
- .pm4fw = "a330_pm4.fw",
- .pfpfw = "a330_pfp.fw",
- .gmem = SZ_1M,
- },
-};
-
-MODULE_FIRMWARE("a300_pm4.fw");
-MODULE_FIRMWARE("a300_pfp.fw");
-MODULE_FIRMWARE("a330_pm4.fw");
-MODULE_FIRMWARE("a330_pfp.fw");
-
#define RB_SIZE SZ_32K
#define RB_BLKSIZE 16
void adreno_show(struct msm_gpu *gpu, struct seq_file *m)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ int i;
seq_printf(m, "revision: %d (%d.%d.%d.%d)\n",
adreno_gpu->info->revn, adreno_gpu->rev.core,
seq_printf(m, "rptr: %d\n", adreno_gpu->memptrs->rptr);
seq_printf(m, "wptr: %d\n", adreno_gpu->memptrs->wptr);
seq_printf(m, "rb wptr: %d\n", get_wptr(gpu->rb));
+
+ gpu->funcs->pm_resume(gpu);
+
+ /* dump these out in a form that can be parsed by demsm: */
+ seq_printf(m, "IO:region %s 00000000 00020000\n", gpu->name);
+ for (i = 0; adreno_gpu->registers[i] != ~0; i += 2) {
+ uint32_t start = adreno_gpu->registers[i];
+ uint32_t end = adreno_gpu->registers[i+1];
+ uint32_t addr;
+
+ for (addr = start; addr <= end; addr++) {
+ uint32_t val = gpu_read(gpu, addr);
+ seq_printf(m, "IO:R %08x %08x\n", addr<<2, val);
+ }
+ }
+
+ gpu->funcs->pm_suspend(gpu);
}
#endif
void adreno_dump(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ int i;
printk("revision: %d (%d.%d.%d.%d)\n",
adreno_gpu->info->revn, adreno_gpu->rev.core,
printk("wptr: %d\n", adreno_gpu->memptrs->wptr);
printk("rb wptr: %d\n", get_wptr(gpu->rb));
+ /* dump these out in a form that can be parsed by demsm: */
+ printk("IO:region %s 00000000 00020000\n", gpu->name);
+ for (i = 0; adreno_gpu->registers[i] != ~0; i += 2) {
+ uint32_t start = adreno_gpu->registers[i];
+ uint32_t end = adreno_gpu->registers[i+1];
+ uint32_t addr;
+
+ for (addr = start; addr <= end; addr++) {
+ uint32_t val = gpu_read(gpu, addr);
+ printk("IO:R %08x %08x\n", addr<<2, val);
+ }
+ }
}
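
Both dump paths print one line per register in the demsm trace format, with the
dword offset shifted left by two to form a byte address. A quick standalone
check of the arithmetic (the readback value is made up):

#include <stdio.h>

int main(void)
{
	unsigned int addr = 0x2070;	/* REG_A3XX_GRAS_SU_MODE_CONTROL */
	unsigned int val = 0x00000004;	/* hypothetical readback */

	/* prints "IO:R 000081c0 00000004" (0x2070 dwords == 0x81c0 bytes) */
	printf("IO:R %08x %08x\n", addr << 2, val);
	return 0;
}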
static uint32_t ring_freewords(struct msm_gpu *gpu)
"gfx3d1_user", "gfx3d1_priv",
};
-static inline bool _rev_match(uint8_t entry, uint8_t id)
-{
- return (entry == ANY_ID) || (entry == id);
-}
-
int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
- struct adreno_gpu *gpu, const struct adreno_gpu_funcs *funcs,
- struct adreno_rev rev)
+ struct adreno_gpu *adreno_gpu, const struct adreno_gpu_funcs *funcs)
{
+ struct adreno_platform_config *config = pdev->dev.platform_data;
+ struct msm_gpu *gpu = &adreno_gpu->base;
struct msm_mmu *mmu;
- int i, ret;
-
- /* identify gpu: */
- for (i = 0; i < ARRAY_SIZE(gpulist); i++) {
- const struct adreno_info *info = &gpulist[i];
- if (_rev_match(info->rev.core, rev.core) &&
- _rev_match(info->rev.major, rev.major) &&
- _rev_match(info->rev.minor, rev.minor) &&
- _rev_match(info->rev.patchid, rev.patchid)) {
- gpu->info = info;
- gpu->revn = info->revn;
- break;
- }
- }
-
- if (i == ARRAY_SIZE(gpulist)) {
- dev_err(drm->dev, "Unknown GPU revision: %u.%u.%u.%u\n",
- rev.core, rev.major, rev.minor, rev.patchid);
- return -ENXIO;
- }
+ int ret;
- DBG("Found GPU: %s (%u.%u.%u.%u)", gpu->info->name,
- rev.core, rev.major, rev.minor, rev.patchid);
+ adreno_gpu->funcs = funcs;
+ adreno_gpu->info = adreno_info(config->rev);
+ adreno_gpu->gmem = adreno_gpu->info->gmem;
+ adreno_gpu->revn = adreno_gpu->info->revn;
+ adreno_gpu->rev = config->rev;
+
+ gpu->fast_rate = config->fast_rate;
+ gpu->slow_rate = config->slow_rate;
+ gpu->bus_freq = config->bus_freq;
+#ifdef CONFIG_MSM_BUS_SCALING
+ gpu->bus_scale_table = config->bus_scale_table;
+#endif
- gpu->funcs = funcs;
- gpu->gmem = gpu->info->gmem;
- gpu->rev = rev;
+ DBG("fast_rate=%u, slow_rate=%u, bus_freq=%u",
+ gpu->fast_rate, gpu->slow_rate, gpu->bus_freq);
- ret = request_firmware(&gpu->pm4, gpu->info->pm4fw, drm->dev);
+ ret = request_firmware(&adreno_gpu->pm4, adreno_gpu->info->pm4fw, drm->dev);
if (ret) {
dev_err(drm->dev, "failed to load %s PM4 firmware: %d\n",
- gpu->info->pm4fw, ret);
+ adreno_gpu->info->pm4fw, ret);
return ret;
}
- ret = request_firmware(&gpu->pfp, gpu->info->pfpfw, drm->dev);
+ ret = request_firmware(&adreno_gpu->pfp, adreno_gpu->info->pfpfw, drm->dev);
if (ret) {
dev_err(drm->dev, "failed to load %s PFP firmware: %d\n",
- gpu->info->pfpfw, ret);
+ adreno_gpu->info->pfpfw, ret);
return ret;
}
- ret = msm_gpu_init(drm, pdev, &gpu->base, &funcs->base,
- gpu->info->name, "kgsl_3d0_reg_memory", "kgsl_3d0_irq",
+ ret = msm_gpu_init(drm, pdev, &adreno_gpu->base, &funcs->base,
+ adreno_gpu->info->name, "kgsl_3d0_reg_memory", "kgsl_3d0_irq",
RB_SIZE);
if (ret)
return ret;
- mmu = gpu->base.mmu;
+ mmu = gpu->mmu;
if (mmu) {
ret = mmu->funcs->attach(mmu, iommu_ports,
ARRAY_SIZE(iommu_ports));
}
mutex_lock(&drm->struct_mutex);
- gpu->memptrs_bo = msm_gem_new(drm, sizeof(*gpu->memptrs),
+ adreno_gpu->memptrs_bo = msm_gem_new(drm, sizeof(*adreno_gpu->memptrs),
MSM_BO_UNCACHED);
mutex_unlock(&drm->struct_mutex);
- if (IS_ERR(gpu->memptrs_bo)) {
- ret = PTR_ERR(gpu->memptrs_bo);
- gpu->memptrs_bo = NULL;
+ if (IS_ERR(adreno_gpu->memptrs_bo)) {
+ ret = PTR_ERR(adreno_gpu->memptrs_bo);
+ adreno_gpu->memptrs_bo = NULL;
dev_err(drm->dev, "could not allocate memptrs: %d\n", ret);
return ret;
}
- gpu->memptrs = msm_gem_vaddr(gpu->memptrs_bo);
- if (!gpu->memptrs) {
+ adreno_gpu->memptrs = msm_gem_vaddr(adreno_gpu->memptrs_bo);
+ if (!adreno_gpu->memptrs) {
dev_err(drm->dev, "could not vmap memptrs\n");
return -ENOMEM;
}
- ret = msm_gem_get_iova(gpu->memptrs_bo, gpu->base.id,
- &gpu->memptrs_iova);
+ ret = msm_gem_get_iova(adreno_gpu->memptrs_bo, gpu->id,
+ &adreno_gpu->memptrs_iova);
if (ret) {
dev_err(drm->dev, "could not map memptrs: %d\n", ret);
return ret;
struct msm_gpu_funcs base;
};
-struct adreno_info;
+struct adreno_info {
+ struct adreno_rev rev;
+ uint32_t revn;
+ const char *name;
+ const char *pm4fw, *pfpfw;
+ uint32_t gmem;
+ struct msm_gpu *(*init)(struct drm_device *dev);
+};
+
+const struct adreno_info *adreno_info(struct adreno_rev rev);
struct adreno_rbmemptrs {
volatile uint32_t rptr;
uint32_t revn; /* numeric revision name */
const struct adreno_gpu_funcs *funcs;
+ /* interesting register offsets to dump: */
+ const unsigned int *registers;
+
/* firmware: */
const struct firmware *pm4, *pfp;
void adreno_wait_ring(struct msm_gpu *gpu, uint32_t ndwords);
int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
- struct adreno_gpu *gpu, const struct adreno_gpu_funcs *funcs,
- struct adreno_rev rev);
+ struct adreno_gpu *gpu, const struct adreno_gpu_funcs *funcs);
void adreno_gpu_cleanup(struct adreno_gpu *gpu);
- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
- /home/robclark/src/freedreno/envytools/rnndb/adreno/a2xx.xml ( 32901 bytes, from 2014-06-02 15:21:30)
- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_common.xml ( 9859 bytes, from 2014-06-02 15:21:30)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 14477 bytes, from 2014-05-16 11:51:57)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 58020 bytes, from 2014-06-25 12:57:16)
-- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 26602 bytes, from 2014-06-25 12:57:16)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/adreno_pm4.xml ( 14960 bytes, from 2014-07-27 17:22:13)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a3xx.xml ( 58020 bytes, from 2014-08-01 12:22:48)
+- /home/robclark/src/freedreno/envytools/rnndb/adreno/a4xx.xml ( 41068 bytes, from 2014-08-01 12:22:48)
Copyright (C) 2013-2014 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
CP_INDIRECT_BUFFER_PFE = 63,
CP_SET_BIN = 76,
CP_TEST_TWO_MEMS = 113,
+ CP_REG_WR_NO_CTXT = 120,
+ CP_RECORD_PFP_TIMESTAMP = 17,
CP_WAIT_FOR_ME = 19,
CP_SET_DRAW_STATE = 67,
CP_DRAW_INDX_OFFSET = 56,
CP_DRAW_INDIRECT = 40,
CP_DRAW_INDX_INDIRECT = 41,
CP_DRAW_AUTO = 36,
+ CP_UNKNOWN_1A = 26,
+ CP_WIDE_REG_WRITE = 116,
IN_IB_PREFETCH_END = 23,
IN_SUBBLK_PREFETCH = 31,
IN_INSTR_PREFETCH = 32,
The rules-ng-ng source files this header was generated from are:
- /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 647 bytes, from 2013-11-30 14:45:35)
- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 17996 bytes, from 2013-12-01 19:10:31)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2013-11-30 15:00:52)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-06-25 12:55:02)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20457 bytes, from 2014-08-01 12:22:48)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2014-07-17 15:34:33)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-07-17 15:34:33)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 11712 bytes, from 2013-08-17 17:13:43)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 344 bytes, from 2013-08-11 19:26:32)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1544 bytes, from 2013-08-16 19:17:05)
+- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2014-08-01 12:23:53)
- /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2013-07-05 19:21:12)
-- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-06-25 12:53:44)
+- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-07-17 15:33:30)
Copyright (C) 2013 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
The rules-ng-ng source files this header was generated from are:
- /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 647 bytes, from 2013-11-30 14:45:35)
- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 17996 bytes, from 2013-12-01 19:10:31)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2013-11-30 15:00:52)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-06-25 12:55:02)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20457 bytes, from 2014-08-01 12:22:48)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2014-07-17 15:34:33)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-07-17 15:34:33)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 11712 bytes, from 2013-08-17 17:13:43)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 344 bytes, from 2013-08-11 19:26:32)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1544 bytes, from 2013-08-16 19:17:05)
+- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2014-08-01 12:23:53)
- /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2013-07-05 19:21:12)
-- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-06-25 12:53:44)
+- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-07-17 15:33:30)
-Copyright (C) 2013 by the following authors:
+Copyright (C) 2013-2014 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
Permission is hereby granted, free of charge, to any person obtaining
return ((val) << MMSS_CC_CLK_NS_VAL__SHIFT) & MMSS_CC_CLK_NS_VAL__MASK;
}
+#define REG_MMSS_CC_DSI2_PIXEL_CC 0x00000094
+
+#define REG_MMSS_CC_DSI2_PIXEL_NS 0x000000e4
+
+#define REG_MMSS_CC_DSI2_PIXEL_CC2 0x00000264
+
#endif /* MMSS_CC_XML */
The rules-ng-ng source files this header was generated from are:
- /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 647 bytes, from 2013-11-30 14:45:35)
- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 17996 bytes, from 2013-12-01 19:10:31)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2013-11-30 15:00:52)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-06-25 12:55:02)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20457 bytes, from 2014-08-01 12:22:48)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2014-07-17 15:34:33)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-07-17 15:34:33)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 11712 bytes, from 2013-08-17 17:13:43)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 344 bytes, from 2013-08-11 19:26:32)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1544 bytes, from 2013-08-16 19:17:05)
+- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2014-08-01 12:23:53)
- /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2013-07-05 19:21:12)
-- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-06-25 12:53:44)
+- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-07-17 15:33:30)
Copyright (C) 2013 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
for (i = 0; i < config->hpd_reg_cnt; i++) {
struct regulator *reg;
- reg = devm_regulator_get_exclusive(&pdev->dev,
+ reg = devm_regulator_get(&pdev->dev,
config->hpd_reg_names[i]);
if (IS_ERR(reg)) {
ret = PTR_ERR(reg);
for (i = 0; i < config->pwr_reg_cnt; i++) {
struct regulator *reg;
- reg = devm_regulator_get_exclusive(&pdev->dev,
+ reg = devm_regulator_get(&pdev->dev,
config->pwr_reg_names[i]);
if (IS_ERR(reg)) {
ret = PTR_ERR(reg);
priv->hdmi_pdev = pdev;
}
+#ifdef CONFIG_OF
+static int get_gpio(struct device *dev, struct device_node *of_node, const char *name)
+{
+ int gpio = of_get_named_gpio(of_node, name, 0);
+ if (gpio < 0) {
+ char name2[32];
+ snprintf(name2, sizeof(name2), "%s-gpio", name);
+ gpio = of_get_named_gpio(of_node, name2, 0);
+ if (gpio < 0) {
+ dev_err(dev, "failed to get gpio: %s (%d)\n",
+ name, gpio);
+ gpio = -1;
+ }
+ }
+ return gpio;
+}
+#endif
+
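The helper keeps the downstream property names working while also accepting the
suffixed "-gpio" spelling, and normalizes all failures to -1 so callers can
store the result unconditionally. A hypothetical extra consumer relying on that
convention (this check is illustrative, not part of the patch):

	/* Hypothetical: -1 from get_gpio() simply means "not wired up". */
	if (get_gpio(dev, of_node, "qcom,hdmi-tx-mux-lpm") < 0)
		dev_warn(dev, "no mux-lpm gpio; low-power mux left untouched\n");
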
static int hdmi_bind(struct device *dev, struct device *master, void *data)
{
static struct hdmi_platform_config config = {};
#ifdef CONFIG_OF
struct device_node *of_node = dev->of_node;
- int get_gpio(const char *name)
- {
- int gpio = of_get_named_gpio(of_node, name, 0);
- if (gpio < 0) {
- char name2[32];
- snprintf(name2, sizeof(name2), "%s-gpio", name);
- gpio = of_get_named_gpio(of_node, name2, 0);
- if (gpio < 0) {
- dev_err(dev, "failed to get gpio: %s (%d)\n",
- name, gpio);
- gpio = -1;
- }
- }
- return gpio;
- }
-
if (of_device_is_compatible(of_node, "qcom,hdmi-tx-8074")) {
static const char *hpd_reg_names[] = {"hpd-gdsc", "hpd-5v"};
static const char *pwr_reg_names[] = {"core-vdda", "core-vcc"};
}
config.mmio_name = "core_physical";
- config.ddc_clk_gpio = get_gpio("qcom,hdmi-tx-ddc-clk");
- config.ddc_data_gpio = get_gpio("qcom,hdmi-tx-ddc-data");
- config.hpd_gpio = get_gpio("qcom,hdmi-tx-hpd");
- config.mux_en_gpio = get_gpio("qcom,hdmi-tx-mux-en");
- config.mux_sel_gpio = get_gpio("qcom,hdmi-tx-mux-sel");
- config.mux_lpm_gpio = get_gpio("qcom,hdmi-tx-mux-lpm");
+ config.ddc_clk_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-ddc-clk");
+ config.ddc_data_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-ddc-data");
+ config.hpd_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-hpd");
+ config.mux_en_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-en");
+ config.mux_sel_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-sel");
+ config.mux_lpm_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-lpm");
#else
static const char *hpd_clk_names[] = {
The rules-ng-ng source files this header was generated from are:
- /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 647 bytes, from 2013-11-30 14:45:35)
- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 17996 bytes, from 2013-12-01 19:10:31)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2013-11-30 15:00:52)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-06-25 12:55:02)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20457 bytes, from 2014-08-01 12:22:48)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2014-07-17 15:34:33)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-07-17 15:34:33)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 11712 bytes, from 2013-08-17 17:13:43)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 344 bytes, from 2013-08-11 19:26:32)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1544 bytes, from 2013-08-16 19:17:05)
+- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2014-08-01 12:23:53)
- /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2013-07-05 19:21:12)
-- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-06-25 12:53:44)
+- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-07-17 15:33:30)
Copyright (C) 2013-2014 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
+#ifdef CONFIG_COMMON_CLK
#include <linux/clk.h>
#include <linux/clk-provider.h>
+#endif
#include "hdmi.h"
struct hdmi_phy_8960 {
struct hdmi_phy base;
struct hdmi *hdmi;
+#ifdef CONFIG_COMMON_CLK
struct clk_hw pll_hw;
struct clk *pll;
unsigned long pixclk;
+#endif
};
#define to_hdmi_phy_8960(x) container_of(x, struct hdmi_phy_8960, base)
+
+#ifdef CONFIG_COMMON_CLK
#define clk_to_phy(x) container_of(x, struct hdmi_phy_8960, pll_hw)
/*
.parent_names = hdmi_pll_parents,
.num_parents = ARRAY_SIZE(hdmi_pll_parents),
};
-
+#endif
/*
* HDMI Phy:
{
struct hdmi_phy_8960 *phy_8960;
struct hdmi_phy *phy = NULL;
- int ret, i;
+ int ret;
+#ifdef CONFIG_COMMON_CLK
+ int i;
/* sanity check: */
for (i = 0; i < (ARRAY_SIZE(freqtbl) - 1); i++)
if (WARN_ON(freqtbl[i].rate < freqtbl[i+1].rate))
return ERR_PTR(-EINVAL);
+#endif
phy_8960 = kzalloc(sizeof(*phy_8960), GFP_KERNEL);
if (!phy_8960) {
phy_8960->hdmi = hdmi;
+#ifdef CONFIG_COMMON_CLK
phy_8960->pll_hw.init = &pll_init;
phy_8960->pll = devm_clk_register(hdmi->dev->dev, &phy_8960->pll_hw);
if (IS_ERR(phy_8960->pll)) {
phy_8960->pll = NULL;
goto fail;
}
+#endif
return phy;
The rules-ng-ng source files this header was generated from are:
- /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 647 bytes, from 2013-11-30 14:45:35)
- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 17996 bytes, from 2013-12-01 19:10:31)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2013-11-30 15:00:52)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-06-25 12:55:02)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20457 bytes, from 2014-08-01 12:22:48)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2014-07-17 15:34:33)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-07-17 15:34:33)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 11712 bytes, from 2013-08-17 17:13:43)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 344 bytes, from 2013-08-11 19:26:32)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1544 bytes, from 2013-08-16 19:17:05)
+- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2014-08-01 12:23:53)
- /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2013-07-05 19:21:12)
-- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-06-25 12:53:44)
+- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-07-17 15:33:30)
Copyright (C) 2013 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
The rules-ng-ng source files this header was generated from are:
- /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 647 bytes, from 2013-11-30 14:45:35)
- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 17996 bytes, from 2013-12-01 19:10:31)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2013-11-30 15:00:52)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-06-25 12:55:02)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20457 bytes, from 2014-08-01 12:22:48)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 1615 bytes, from 2014-07-17 15:34:33)
+- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 22517 bytes, from 2014-07-17 15:34:33)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 11712 bytes, from 2013-08-17 17:13:43)
- /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 344 bytes, from 2013-08-11 19:26:32)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1544 bytes, from 2013-08-16 19:17:05)
+- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2014-08-01 12:23:53)
- /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2013-07-05 19:21:12)
-- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-06-25 12:53:44)
+- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 23613 bytes, from 2014-07-17 15:33:30)
-Copyright (C) 2013 by the following authors:
+Copyright (C) 2013-2014 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
Permission is hereby granted, free of charge, to any person obtaining
#define MDP4_LCDC_CTRL_POLARITY_VSYNC_LOW 0x00000002
#define MDP4_LCDC_CTRL_POLARITY_DATA_EN_LOW 0x00000004
+#define REG_MDP4_LCDC_LVDS_INTF_CTL 0x000c2000
+#define MDP4_LCDC_LVDS_INTF_CTL_MODE_SEL 0x00000004
+#define MDP4_LCDC_LVDS_INTF_CTL_RGB_OUT 0x00000008
+#define MDP4_LCDC_LVDS_INTF_CTL_CH_SWAP 0x00000010
+#define MDP4_LCDC_LVDS_INTF_CTL_CH1_RES_BIT 0x00000020
+#define MDP4_LCDC_LVDS_INTF_CTL_CH2_RES_BIT 0x00000040
+#define MDP4_LCDC_LVDS_INTF_CTL_ENABLE 0x00000080
+#define MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE0_EN 0x00000100
+#define MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE1_EN 0x00000200
+#define MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE2_EN 0x00000400
+#define MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE3_EN 0x00000800
+#define MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE0_EN 0x00001000
+#define MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE1_EN 0x00002000
+#define MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE2_EN 0x00004000
+#define MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE3_EN 0x00008000
+#define MDP4_LCDC_LVDS_INTF_CTL_CH1_CLK_LANE_EN 0x00010000
+#define MDP4_LCDC_LVDS_INTF_CTL_CH2_CLK_LANE_EN 0x00020000
+
+static inline uint32_t REG_MDP4_LCDC_LVDS_MUX_CTL(uint32_t i0) { return 0x000c2014 + 0x8*i0; }
+
+static inline uint32_t REG_MDP4_LCDC_LVDS_MUX_CTL_3_TO_0(uint32_t i0) { return 0x000c2014 + 0x8*i0; }
+#define MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0__MASK 0x000000ff
+#define MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0__SHIFT 0
+static inline uint32_t MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0(uint32_t val)
+{
+ return ((val) << MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0__SHIFT) & MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0__MASK;
+}
+#define MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1__MASK 0x0000ff00
+#define MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1__SHIFT 8
+static inline uint32_t MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1(uint32_t val)
+{
+ return ((val) << MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1__SHIFT) & MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1__MASK;
+}
+#define MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2__MASK 0x00ff0000
+#define MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2__SHIFT 16
+static inline uint32_t MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2(uint32_t val)
+{
+ return ((val) << MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2__SHIFT) & MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2__MASK;
+}
+#define MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3__MASK 0xff000000
+#define MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3__SHIFT 24
+static inline uint32_t MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3(uint32_t val)
+{
+ return ((val) << MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3__SHIFT) & MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3__MASK;
+}
+
+static inline uint32_t REG_MDP4_LCDC_LVDS_MUX_CTL_6_TO_4(uint32_t i0) { return 0x000c2018 + 0x8*i0; }
+#define MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4__MASK 0x000000ff
+#define MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4__SHIFT 0
+static inline uint32_t MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4(uint32_t val)
+{
+ return ((val) << MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4__SHIFT) & MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4__MASK;
+}
+#define MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5__MASK 0x0000ff00
+#define MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5__SHIFT 8
+static inline uint32_t MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5(uint32_t val)
+{
+ return ((val) << MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5__SHIFT) & MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5__MASK;
+}
+#define MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6__MASK 0x00ff0000
+#define MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6__SHIFT 16
+static inline uint32_t MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6(uint32_t val)
+{
+ return ((val) << MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6__SHIFT) & MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6__MASK;
+}
+
+#define REG_MDP4_LCDC_LVDS_PHY_RESET 0x000c2034
+
+#define REG_MDP4_LVDS_PHY_PLL_CTRL_0 0x000c3000
+
+#define REG_MDP4_LVDS_PHY_PLL_CTRL_1 0x000c3004
+
+#define REG_MDP4_LVDS_PHY_PLL_CTRL_2 0x000c3008
+
+#define REG_MDP4_LVDS_PHY_PLL_CTRL_3 0x000c300c
+
+#define REG_MDP4_LVDS_PHY_PLL_CTRL_5 0x000c3014
+
+#define REG_MDP4_LVDS_PHY_PLL_CTRL_6 0x000c3018
+
+#define REG_MDP4_LVDS_PHY_PLL_CTRL_7 0x000c301c
+
+#define REG_MDP4_LVDS_PHY_PLL_CTRL_8 0x000c3020
+
+#define REG_MDP4_LVDS_PHY_PLL_CTRL_9 0x000c3024
+
+#define REG_MDP4_LVDS_PHY_PLL_LOCKED 0x000c3080
+
+#define REG_MDP4_LVDS_PHY_CFG2 0x000c3108
+
+#define REG_MDP4_LVDS_PHY_CFG0 0x000c3100
+#define MDP4_LVDS_PHY_CFG0_SERIALIZATION_ENBLE 0x00000010
+#define MDP4_LVDS_PHY_CFG0_CHANNEL0 0x00000040
+#define MDP4_LVDS_PHY_CFG0_CHANNEL1 0x00000080
+
#define REG_MDP4_DTV 0x000d0000
#define REG_MDP4_DTV_ENABLE 0x000d0000
};
bool alpha[4]= { false, false, false, false };
+	/* Don't rely on the value read back from hw, but instead use our
+	 * own shadowed value, since a disable/re-enable possibly loses the
+	 * previous value and reverts to the power-on default.
+	 */
+ mixer_cfg = mdp4_kms->mixer_cfg;
+
mdp4_write(mdp4_kms, REG_MDP4_OVLP_TRANSP_LOW0(ovlp), 0);
mdp4_write(mdp4_kms, REG_MDP4_OVLP_TRANSP_LOW1(ovlp), 0);
mdp4_write(mdp4_kms, REG_MDP4_OVLP_TRANSP_HIGH0(ovlp), 0);
mdp4_write(mdp4_kms, REG_MDP4_OVLP_TRANSP_HIGH1(ovlp), 0);
- /* TODO single register for all CRTCs, so this won't work properly
- * when multiple CRTCs are active..
- */
for (i = 0; i < ARRAY_SIZE(mdp4_crtc->planes); i++) {
struct drm_plane *plane = mdp4_crtc->planes[i];
if (plane) {
to_mdp_format(msm_framebuffer_format(plane->fb));
alpha[idx-1] = format->alpha_enable;
}
- mixer_cfg |= mixercfg(mdp4_crtc->mixer, pipe_id, stages[idx]);
+ mixer_cfg = mixercfg(mixer_cfg, mdp4_crtc->mixer,
+ pipe_id, stages[idx]);
}
}
mdp4_write(mdp4_kms, REG_MDP4_OVLP_STAGE_TRANSP_HIGH1(ovlp, i), 0);
}
+ mdp4_kms->mixer_cfg = mixer_cfg;
mdp4_write(mdp4_kms, REG_MDP4_LAYERMIXER_IN_CFG, mixer_cfg);
}
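
The reworked mixercfg() (see the hunk further below) folds the per-pipe update
into a caller-supplied value instead of starting from zero, which pairs with
the shadow copy kept in mdp4_kms: every CRTC read-modify-writes the cached
value and never trusts a readback that may have reverted to power-on defaults.
The idiom in isolation (the struct and names here are illustrative):

#include <stdint.h>

struct shadowed_reg {
	uint32_t shadow;	/* last value actually written to the hw */
};

/* Clear the field's mask, merge the new bits, hand back what to write. */
static uint32_t shadow_rmw(struct shadowed_reg *r, uint32_t mask,
		uint32_t bits)
{
	r->shadow = (r->shadow & ~mask) | (bits & mask);
	return r->shadow;
}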
struct mdp4_crtc *mdp4_crtc = to_mdp4_crtc(crtc);
DBG("%s", mdp4_crtc->name);
/* make sure we hold a ref to mdp clks while setting up mode: */
+ drm_crtc_vblank_get(crtc);
mdp4_enable(get_kms(crtc));
mdp4_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
}
crtc_flush(crtc);
/* drop the ref to mdp clk's that we got in prepare: */
mdp4_disable(get_kms(crtc));
+ drm_crtc_vblank_put(crtc);
}
static int mdp4_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
}
/* set interface for routing crtc->encoder: */
-void mdp4_crtc_set_intf(struct drm_crtc *crtc, enum mdp4_intf intf)
+void mdp4_crtc_set_intf(struct drm_crtc *crtc, enum mdp4_intf intf, int mixer)
{
struct mdp4_crtc *mdp4_crtc = to_mdp4_crtc(crtc);
struct mdp4_kms *mdp4_kms = get_kms(crtc);
if (intf == INTF_DSI_VIDEO) {
intf_sel &= ~MDP4_DISP_INTF_SEL_DSI_CMD;
intf_sel |= MDP4_DISP_INTF_SEL_DSI_VIDEO;
- mdp4_crtc->mixer = 0;
} else if (intf == INTF_DSI_CMD) {
intf_sel &= ~MDP4_DISP_INTF_SEL_DSI_VIDEO;
intf_sel |= MDP4_DISP_INTF_SEL_DSI_CMD;
- mdp4_crtc->mixer = 0;
- } else if (intf == INTF_LCDC_DTV){
- mdp4_crtc->mixer = 1;
}
+ mdp4_crtc->mixer = mixer;
+
blend_setup(crtc);
DBG("%s: intf_sel=%08x", mdp4_crtc->name, intf_sel);
MDP4_DMA_CONFIG_G_BPC(BPC8) |
MDP4_DMA_CONFIG_B_BPC(BPC8) |
MDP4_DMA_CONFIG_PACK(0x21));
- mdp4_crtc_set_intf(encoder->crtc, INTF_LCDC_DTV);
+ mdp4_crtc_set_intf(encoder->crtc, INTF_LCDC_DTV, 1);
mdp4_dtv_encoder_dpms(encoder, DRM_MODE_DPMS_ON);
}
if (mdp4_kms->rev >= 2)
mdp4_write(mdp4_kms, REG_MDP4_LAYERMIXER_IN_CFG_UPDATE_METHOD, 1);
+ mdp4_write(mdp4_kms, REG_MDP4_LAYERMIXER_IN_CFG, 0);
/* disable CSC matrix / YUV by default: */
mdp4_write(mdp4_kms, REG_MDP4_PIPE_OP_MODE(VG1), 0);
return 0;
}
+#ifdef CONFIG_OF
+static struct drm_panel *detect_panel(struct drm_device *dev, const char *name)
+{
+ struct device_node *n;
+ struct drm_panel *panel = NULL;
+
+ n = of_parse_phandle(dev->dev->of_node, name, 0);
+ if (n) {
+ panel = of_drm_find_panel(n);
+ if (!panel)
+ panel = ERR_PTR(-EPROBE_DEFER);
+ }
+
+ return panel;
+}
+#else
+static struct drm_panel *detect_panel(struct drm_device *dev, const char *name)
+{
+	/* ??? maybe use a module param to specify which panel is attached? */
+	return NULL;	/* no way to detect a panel without OF */
+}
+#endif
+
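detect_panel() distinguishes three cases: no phandle means no panel is wired
up, a phandle without a registered panel driver yields -EPROBE_DEFER so the
whole bind is retried once the panel driver appears, and otherwise the
drm_panel is returned. A hypothetical caller that keeps the distinction
explicit (modeset_init() below folds both error cases together):

	struct drm_panel *panel = detect_panel(dev, "qcom,lvds-panel");

	if (PTR_ERR(panel) == -EPROBE_DEFER)
		return -EPROBE_DEFER;	/* panel driver not bound yet */
	if (!panel)
		DBG("no LVDS panel described in DT");
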
static int modeset_init(struct mdp4_kms *mdp4_kms)
{
struct drm_device *dev = mdp4_kms->dev;
struct drm_plane *plane;
struct drm_crtc *crtc;
struct drm_encoder *encoder;
+ struct drm_connector *connector;
+ struct drm_panel *panel;
struct hdmi *hdmi;
int ret;
- /*
- * NOTE: this is a bit simplistic until we add support
- * for more than just RGB1->DMA_E->DTV->HDMI
- */
-
/* construct non-private planes: */
plane = mdp4_plane_init(dev, VG1, false);
if (IS_ERR(plane)) {
}
priv->planes[priv->num_planes++] = plane;
- /* the CRTCs get constructed with a private plane: */
+ /*
+ * Setup the LCDC/LVDS path: RGB2 -> DMA_P -> LCDC -> LVDS:
+ */
+
+ panel = detect_panel(dev, "qcom,lvds-panel");
+ if (IS_ERR(panel)) {
+ ret = PTR_ERR(panel);
+ dev_err(dev->dev, "failed to detect LVDS panel: %d\n", ret);
+ goto fail;
+ }
+
+ plane = mdp4_plane_init(dev, RGB2, true);
+ if (IS_ERR(plane)) {
+ dev_err(dev->dev, "failed to construct plane for RGB2\n");
+ ret = PTR_ERR(plane);
+ goto fail;
+ }
+
+ crtc = mdp4_crtc_init(dev, plane, priv->num_crtcs, 0, DMA_P);
+ if (IS_ERR(crtc)) {
+ dev_err(dev->dev, "failed to construct crtc for DMA_P\n");
+ ret = PTR_ERR(crtc);
+ goto fail;
+ }
+
+ encoder = mdp4_lcdc_encoder_init(dev, panel);
+ if (IS_ERR(encoder)) {
+ dev_err(dev->dev, "failed to construct LCDC encoder\n");
+ ret = PTR_ERR(encoder);
+ goto fail;
+ }
+
+ /* LCDC can be hooked to DMA_P: */
+ encoder->possible_crtcs = 1 << priv->num_crtcs;
+
+ priv->crtcs[priv->num_crtcs++] = crtc;
+ priv->encoders[priv->num_encoders++] = encoder;
+
+ connector = mdp4_lvds_connector_init(dev, panel, encoder);
+ if (IS_ERR(connector)) {
+ ret = PTR_ERR(connector);
+ dev_err(dev->dev, "failed to initialize LVDS connector: %d\n", ret);
+ goto fail;
+ }
+
+ priv->connectors[priv->num_connectors++] = connector;
+
+ /*
+ * Setup DTV/HDMI path: RGB1 -> DMA_E -> DTV -> HDMI:
+ */
+
plane = mdp4_plane_init(dev, RGB1, true);
if (IS_ERR(plane)) {
dev_err(dev->dev, "failed to construct plane for RGB1\n");
ret = PTR_ERR(crtc);
goto fail;
}
- priv->crtcs[priv->num_crtcs++] = crtc;
encoder = mdp4_dtv_encoder_init(dev);
if (IS_ERR(encoder)) {
ret = PTR_ERR(encoder);
goto fail;
}
- encoder->possible_crtcs = 0x1; /* DTV can be hooked to DMA_E */
+
+ /* DTV can be hooked to DMA_E: */
+ encoder->possible_crtcs = 1 << priv->num_crtcs;
+
+ priv->crtcs[priv->num_crtcs++] = crtc;
priv->encoders[priv->num_encoders++] = encoder;
hdmi = hdmi_init(dev, encoder);
#include "mdp/mdp_kms.h"
#include "mdp4.xml.h"
+#include "drm_panel.h"
+
struct mdp4_kms {
struct mdp_kms base;
int rev;
+ /* Shadow value for MDP4_LAYERMIXER_IN_CFG.. since setup for all
+ * crtcs/encoders is in one shared register, we need to update it
+ * via read/modify/write. But to avoid getting confused by power-
+ * on-default values after resume, use this shadow value instead:
+ */
+ uint32_t mixer_cfg;
+
/* mapper-id used to request GEM buffer mapped for scanout: */
int id;
case VG1: return MDP4_OVERLAY_FLUSH_VG1;
case VG2: return MDP4_OVERLAY_FLUSH_VG2;
case RGB1: return MDP4_OVERLAY_FLUSH_RGB1;
- case RGB2: return MDP4_OVERLAY_FLUSH_RGB1;
+ case RGB2: return MDP4_OVERLAY_FLUSH_RGB2;
default: return 0;
}
}
}
}
-static inline uint32_t mixercfg(int mixer, enum mdp4_pipe pipe,
- enum mdp_mixer_stage_id stage)
+static inline uint32_t mixercfg(uint32_t mixer_cfg, int mixer,
+ enum mdp4_pipe pipe, enum mdp_mixer_stage_id stage)
{
- uint32_t mixer_cfg = 0;
-
switch (pipe) {
case VG1:
- mixer_cfg = MDP4_LAYERMIXER_IN_CFG_PIPE0(stage) |
+ mixer_cfg &= ~(MDP4_LAYERMIXER_IN_CFG_PIPE0__MASK |
+ MDP4_LAYERMIXER_IN_CFG_PIPE0_MIXER1);
+ mixer_cfg |= MDP4_LAYERMIXER_IN_CFG_PIPE0(stage) |
COND(mixer == 1, MDP4_LAYERMIXER_IN_CFG_PIPE0_MIXER1);
break;
case VG2:
- mixer_cfg = MDP4_LAYERMIXER_IN_CFG_PIPE1(stage) |
+ mixer_cfg &= ~(MDP4_LAYERMIXER_IN_CFG_PIPE1__MASK |
+ MDP4_LAYERMIXER_IN_CFG_PIPE1_MIXER1);
+ mixer_cfg |= MDP4_LAYERMIXER_IN_CFG_PIPE1(stage) |
COND(mixer == 1, MDP4_LAYERMIXER_IN_CFG_PIPE1_MIXER1);
break;
case RGB1:
- mixer_cfg = MDP4_LAYERMIXER_IN_CFG_PIPE2(stage) |
+ mixer_cfg &= ~(MDP4_LAYERMIXER_IN_CFG_PIPE2__MASK |
+ MDP4_LAYERMIXER_IN_CFG_PIPE2_MIXER1);
+ mixer_cfg |= MDP4_LAYERMIXER_IN_CFG_PIPE2(stage) |
COND(mixer == 1, MDP4_LAYERMIXER_IN_CFG_PIPE2_MIXER1);
break;
case RGB2:
- mixer_cfg = MDP4_LAYERMIXER_IN_CFG_PIPE3(stage) |
+ mixer_cfg &= ~(MDP4_LAYERMIXER_IN_CFG_PIPE3__MASK |
+ MDP4_LAYERMIXER_IN_CFG_PIPE3_MIXER1);
+ mixer_cfg |= MDP4_LAYERMIXER_IN_CFG_PIPE3(stage) |
COND(mixer == 1, MDP4_LAYERMIXER_IN_CFG_PIPE3_MIXER1);
break;
case RGB3:
- mixer_cfg = MDP4_LAYERMIXER_IN_CFG_PIPE4(stage) |
+ mixer_cfg &= ~(MDP4_LAYERMIXER_IN_CFG_PIPE4__MASK |
+ MDP4_LAYERMIXER_IN_CFG_PIPE4_MIXER1);
+ mixer_cfg |= MDP4_LAYERMIXER_IN_CFG_PIPE4(stage) |
COND(mixer == 1, MDP4_LAYERMIXER_IN_CFG_PIPE4_MIXER1);
break;
case VG3:
- mixer_cfg = MDP4_LAYERMIXER_IN_CFG_PIPE5(stage) |
+ mixer_cfg &= ~(MDP4_LAYERMIXER_IN_CFG_PIPE5__MASK |
+ MDP4_LAYERMIXER_IN_CFG_PIPE5_MIXER1);
+ mixer_cfg |= MDP4_LAYERMIXER_IN_CFG_PIPE5(stage) |
COND(mixer == 1, MDP4_LAYERMIXER_IN_CFG_PIPE5_MIXER1);
break;
case VG4:
- mixer_cfg = MDP4_LAYERMIXER_IN_CFG_PIPE6(stage) |
+ mixer_cfg &= ~(MDP4_LAYERMIXER_IN_CFG_PIPE6__MASK |
+ MDP4_LAYERMIXER_IN_CFG_PIPE6_MIXER1);
+ mixer_cfg |= MDP4_LAYERMIXER_IN_CFG_PIPE6(stage) |
COND(mixer == 1, MDP4_LAYERMIXER_IN_CFG_PIPE6_MIXER1);
break;
default:
uint32_t mdp4_crtc_vblank(struct drm_crtc *crtc);
void mdp4_crtc_cancel_pending_flip(struct drm_crtc *crtc, struct drm_file *file);
void mdp4_crtc_set_config(struct drm_crtc *crtc, uint32_t config);
-void mdp4_crtc_set_intf(struct drm_crtc *crtc, enum mdp4_intf intf);
+void mdp4_crtc_set_intf(struct drm_crtc *crtc, enum mdp4_intf intf, int mixer);
void mdp4_crtc_attach(struct drm_crtc *crtc, struct drm_plane *plane);
void mdp4_crtc_detach(struct drm_crtc *crtc, struct drm_plane *plane);
struct drm_crtc *mdp4_crtc_init(struct drm_device *dev,
long mdp4_dtv_round_pixclk(struct drm_encoder *encoder, unsigned long rate);
struct drm_encoder *mdp4_dtv_encoder_init(struct drm_device *dev);
+long mdp4_lcdc_round_pixclk(struct drm_encoder *encoder, unsigned long rate);
+struct drm_encoder *mdp4_lcdc_encoder_init(struct drm_device *dev,
+ struct drm_panel *panel);
+
+struct drm_connector *mdp4_lvds_connector_init(struct drm_device *dev,
+ struct drm_panel *panel, struct drm_encoder *encoder);
+
+#ifdef CONFIG_COMMON_CLK
+struct clk *mpd4_lvds_pll_init(struct drm_device *dev);
+#else
+static inline struct clk *mpd4_lvds_pll_init(struct drm_device *dev)
+{
+ return ERR_PTR(-ENODEV);
+}
+#endif
+
#ifdef CONFIG_MSM_BUS_SCALING
static inline int match_dev_name(struct device *dev, void *data)
{
--- /dev/null
+/*
+ * Copyright (C) 2014 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ * Author: Vinay Simha <vinaysimha@inforcecomputing.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "mdp4_kms.h"
+
+#include "drm_crtc.h"
+#include "drm_crtc_helper.h"
+
+struct mdp4_lcdc_encoder {
+ struct drm_encoder base;
+ struct drm_panel *panel;
+ struct clk *lcdc_clk;
+	unsigned long pixclock;
+ struct regulator *regs[3];
+ bool enabled;
+ uint32_t bsc;
+};
+#define to_mdp4_lcdc_encoder(x) container_of(x, struct mdp4_lcdc_encoder, base)
+
+static struct mdp4_kms *get_kms(struct drm_encoder *encoder)
+{
+ struct msm_drm_private *priv = encoder->dev->dev_private;
+ return to_mdp4_kms(to_mdp_kms(priv->kms));
+}
+
+#ifdef CONFIG_MSM_BUS_SCALING
+#include <mach/board.h>
+static void bs_init(struct mdp4_lcdc_encoder *mdp4_lcdc_encoder)
+{
+ struct drm_device *dev = mdp4_lcdc_encoder->base.dev;
+ struct lcdc_platform_data *lcdc_pdata = mdp4_find_pdata("lvds.0");
+
+ if (!lcdc_pdata) {
+ dev_err(dev->dev, "could not find lvds pdata\n");
+ return;
+ }
+
+ if (lcdc_pdata->bus_scale_table) {
+ mdp4_lcdc_encoder->bsc = msm_bus_scale_register_client(
+ lcdc_pdata->bus_scale_table);
+ DBG("lvds : bus scale client: %08x", mdp4_lcdc_encoder->bsc);
+ }
+}
+
+static void bs_fini(struct mdp4_lcdc_encoder *mdp4_lcdc_encoder)
+{
+ if (mdp4_lcdc_encoder->bsc) {
+ msm_bus_scale_unregister_client(mdp4_lcdc_encoder->bsc);
+ mdp4_lcdc_encoder->bsc = 0;
+ }
+}
+
+static void bs_set(struct mdp4_lcdc_encoder *mdp4_lcdc_encoder, int idx)
+{
+ if (mdp4_lcdc_encoder->bsc) {
+ DBG("set bus scaling: %d", idx);
+ msm_bus_scale_client_update_request(mdp4_lcdc_encoder->bsc, idx);
+ }
+}
+#else
+static void bs_init(struct mdp4_lcdc_encoder *mdp4_lcdc_encoder) {}
+static void bs_fini(struct mdp4_lcdc_encoder *mdp4_lcdc_encoder) {}
+static void bs_set(struct mdp4_lcdc_encoder *mdp4_lcdc_encoder, int idx) {}
+#endif
+
+static void mdp4_lcdc_encoder_destroy(struct drm_encoder *encoder)
+{
+ struct mdp4_lcdc_encoder *mdp4_lcdc_encoder =
+ to_mdp4_lcdc_encoder(encoder);
+ bs_fini(mdp4_lcdc_encoder);
+ drm_encoder_cleanup(encoder);
+ kfree(mdp4_lcdc_encoder);
+}
+
+static const struct drm_encoder_funcs mdp4_lcdc_encoder_funcs = {
+ .destroy = mdp4_lcdc_encoder_destroy,
+};
+
+/* this should probably be a helper: */
+static struct drm_connector *get_connector(struct drm_encoder *encoder)
+{
+ struct drm_device *dev = encoder->dev;
+ struct drm_connector *connector;
+
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head)
+ if (connector->encoder == encoder)
+ return connector;
+
+ return NULL;
+}
+
+static void setup_phy(struct drm_encoder *encoder)
+{
+ struct drm_device *dev = encoder->dev;
+ struct drm_connector *connector = get_connector(encoder);
+ struct mdp4_kms *mdp4_kms = get_kms(encoder);
+ uint32_t lvds_intf = 0, lvds_phy_cfg0 = 0;
+ int bpp, nchan, swap;
+
+ if (!connector)
+ return;
+
+ bpp = 3 * connector->display_info.bpc;
+
+ if (!bpp)
+ bpp = 18;
+
+ /* TODO, these should come from panel somehow: */
+ nchan = 1;
+ swap = 0;
+
+ switch (bpp) {
+ case 24:
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_3_TO_0(0),
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0(0x08) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1(0x05) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2(0x04) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3(0x03));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_6_TO_4(0),
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4(0x02) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5(0x01) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6(0x00));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_3_TO_0(1),
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0(0x11) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1(0x10) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2(0x0d) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3(0x0c));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_6_TO_4(1),
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4(0x0b) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5(0x0a) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6(0x09));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_3_TO_0(2),
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0(0x1a) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1(0x19) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2(0x18) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3(0x15));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_6_TO_4(2),
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4(0x14) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5(0x13) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6(0x12));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_3_TO_0(3),
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0(0x1b) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1(0x17) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2(0x16) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3(0x0f));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_6_TO_4(3),
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4(0x0e) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5(0x07) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6(0x06));
+ if (nchan == 2) {
+ lvds_intf |= MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE3_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE2_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE1_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE0_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE3_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE2_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE1_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE0_EN;
+ } else {
+ lvds_intf |= MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE3_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE2_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE1_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE0_EN;
+ }
+ break;
+
+ case 18:
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_3_TO_0(0),
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0(0x0a) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1(0x07) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2(0x06) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3(0x05));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_6_TO_4(0),
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4(0x04) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5(0x03) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6(0x02));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_3_TO_0(1),
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0(0x13) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1(0x12) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2(0x0f) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3(0x0e));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_6_TO_4(1),
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4(0x0d) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5(0x0c) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6(0x0b));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_3_TO_0(2),
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT0(0x1a) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT1(0x19) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT2(0x18) |
+ MDP4_LCDC_LVDS_MUX_CTL_3_TO_0_BIT3(0x17));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_MUX_CTL_6_TO_4(2),
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT4(0x16) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT5(0x15) |
+ MDP4_LCDC_LVDS_MUX_CTL_6_TO_4_BIT6(0x14));
+ if (nchan == 2) {
+ lvds_intf |= MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE2_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE1_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH2_DATA_LANE0_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE2_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE1_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE0_EN;
+ } else {
+ lvds_intf |= MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE2_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE1_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_DATA_LANE0_EN;
+ }
+ lvds_intf |= MDP4_LCDC_LVDS_INTF_CTL_RGB_OUT;
+ break;
+
+ default:
+ dev_err(dev->dev, "unknown bpp: %d\n", bpp);
+ return;
+ }
+
+ switch (nchan) {
+ case 1:
+ lvds_phy_cfg0 = MDP4_LVDS_PHY_CFG0_CHANNEL0;
+ lvds_intf |= MDP4_LCDC_LVDS_INTF_CTL_CH1_CLK_LANE_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_MODE_SEL;
+ break;
+ case 2:
+ lvds_phy_cfg0 = MDP4_LVDS_PHY_CFG0_CHANNEL0 |
+ MDP4_LVDS_PHY_CFG0_CHANNEL1;
+ lvds_intf |= MDP4_LCDC_LVDS_INTF_CTL_CH2_CLK_LANE_EN |
+ MDP4_LCDC_LVDS_INTF_CTL_CH1_CLK_LANE_EN;
+ break;
+ default:
+ dev_err(dev->dev, "unknown # of channels: %d\n", nchan);
+ return;
+ }
+
+ if (swap)
+ lvds_intf |= MDP4_LCDC_LVDS_INTF_CTL_CH_SWAP;
+
+ lvds_intf |= MDP4_LCDC_LVDS_INTF_CTL_ENABLE;
+
+ mdp4_write(mdp4_kms, REG_MDP4_LVDS_PHY_CFG0, lvds_phy_cfg0);
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_INTF_CTL, lvds_intf);
+ mdp4_write(mdp4_kms, REG_MDP4_LVDS_PHY_CFG2, 0x30);
+
+ mb();
+ udelay(1);
+ lvds_phy_cfg0 |= MDP4_LVDS_PHY_CFG0_SERIALIZATION_ENBLE;
+ mdp4_write(mdp4_kms, REG_MDP4_LVDS_PHY_CFG0, lvds_phy_cfg0);
+}
+
+static void mdp4_lcdc_encoder_dpms(struct drm_encoder *encoder, int mode)
+{
+ struct drm_device *dev = encoder->dev;
+ struct mdp4_lcdc_encoder *mdp4_lcdc_encoder =
+ to_mdp4_lcdc_encoder(encoder);
+ struct mdp4_kms *mdp4_kms = get_kms(encoder);
+ struct drm_panel *panel = mdp4_lcdc_encoder->panel;
+ bool enabled = (mode == DRM_MODE_DPMS_ON);
+ int i, ret;
+
+ DBG("mode=%d", mode);
+
+ if (enabled == mdp4_lcdc_encoder->enabled)
+ return;
+
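+ /* power up in order: bus scaling (bs_set), regulators, pixel clock, panel, PHY, then LCDC */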
+ if (enabled) {
+ unsigned long pc = mdp4_lcdc_encoder->pixclock;
+
+ bs_set(mdp4_lcdc_encoder, 1);
+
+ for (i = 0; i < ARRAY_SIZE(mdp4_lcdc_encoder->regs); i++) {
+ ret = regulator_enable(mdp4_lcdc_encoder->regs[i]);
+ if (ret)
+ dev_err(dev->dev, "failed to enable regulator: %d\n", ret);
+ }
+
+ DBG("setting lcdc_clk=%lu", pc);
+ ret = clk_set_rate(mdp4_lcdc_encoder->lcdc_clk, pc);
+ if (ret)
+ dev_err(dev->dev, "failed to configure lcdc_clk: %d\n", ret);
+ ret = clk_prepare_enable(mdp4_lcdc_encoder->lcdc_clk);
+ if (ret)
+ dev_err(dev->dev, "failed to enable lcdc_clk: %d\n", ret);
+
+ if (panel)
+ drm_panel_enable(panel);
+
+ setup_phy(encoder);
+
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_ENABLE, 1);
+ } else {
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_ENABLE, 0);
+
+ if (panel)
+ drm_panel_disable(panel);
+
+ /*
+ * Wait for a vsync so we know the ENABLE=0 latched before
+ * the (connector) source of the vsyncs gets disabled,
+ * otherwise we end up in a funny state if we re-enable
+ * before the disable latches, with the result that some of
+ * the settings for the new modeset (such as the new
+ * scanout buffer) don't latch properly.
+ */
+ mdp_irq_wait(&mdp4_kms->base, MDP4_IRQ_PRIMARY_VSYNC);
+
+ clk_disable_unprepare(mdp4_lcdc_encoder->lcdc_clk);
+
+ for (i = 0; i < ARRAY_SIZE(mdp4_lcdc_encoder->regs); i++) {
+ ret = regulator_disable(mdp4_lcdc_encoder->regs[i]);
+ if (ret)
+ dev_err(dev->dev, "failed to disable regulator: %d\n", ret);
+ }
+
+ bs_set(mdp4_lcdc_encoder, 0);
+ }
+
+ mdp4_lcdc_encoder->enabled = enabled;
+}
+
+static bool mdp4_lcdc_encoder_mode_fixup(struct drm_encoder *encoder,
+ const struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ return true;
+}
+
+static void mdp4_lcdc_encoder_mode_set(struct drm_encoder *encoder,
+ struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ struct mdp4_lcdc_encoder *mdp4_lcdc_encoder =
+ to_mdp4_lcdc_encoder(encoder);
+ struct mdp4_kms *mdp4_kms = get_kms(encoder);
+ uint32_t lcdc_hsync_skew, vsync_period, vsync_len, ctrl_pol;
+ uint32_t display_v_start, display_v_end;
+ uint32_t hsync_start_x, hsync_end_x;
+
+ mode = adjusted_mode;
+
+ DBG("set mode: %d:\"%s\" %d %d %d %d %d %d %d %d %d %d 0x%x 0x%x",
+ mode->base.id, mode->name,
+ mode->vrefresh, mode->clock,
+ mode->hdisplay, mode->hsync_start,
+ mode->hsync_end, mode->htotal,
+ mode->vdisplay, mode->vsync_start,
+ mode->vsync_end, mode->vtotal,
+ mode->type, mode->flags);
+
+ mdp4_lcdc_encoder->pixclock = mode->clock * 1000;
+
+ DBG("pixclock=%lu", mdp4_lcdc_encoder->pixclock);
+
+ ctrl_pol = 0;
+ if (mode->flags & DRM_MODE_FLAG_NHSYNC)
+ ctrl_pol |= MDP4_LCDC_CTRL_POLARITY_HSYNC_LOW;
+ if (mode->flags & DRM_MODE_FLAG_NVSYNC)
+ ctrl_pol |= MDP4_LCDC_CTRL_POLARITY_VSYNC_LOW;
+ /* probably need to get DATA_EN polarity from panel.. */
+
+ lcdc_hsync_skew = 0; /* get this from panel? */
+
+ hsync_start_x = (mode->htotal - mode->hsync_start);
+ hsync_end_x = mode->htotal - (mode->hsync_start - mode->hdisplay) - 1;
+
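+ /* vertical timings below are programmed in pixel clocks, i.e. line counts scaled by htotal */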
+ vsync_period = mode->vtotal * mode->htotal;
+ vsync_len = (mode->vsync_end - mode->vsync_start) * mode->htotal;
+ display_v_start = (mode->vtotal - mode->vsync_start) * mode->htotal + lcdc_hsync_skew;
+ display_v_end = vsync_period - ((mode->vsync_start - mode->vdisplay) * mode->htotal) + lcdc_hsync_skew - 1;
+
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_HSYNC_CTRL,
+ MDP4_LCDC_HSYNC_CTRL_PULSEW(mode->hsync_end - mode->hsync_start) |
+ MDP4_LCDC_HSYNC_CTRL_PERIOD(mode->htotal));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_VSYNC_PERIOD, vsync_period);
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_VSYNC_LEN, vsync_len);
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_DISPLAY_HCTRL,
+ MDP4_LCDC_DISPLAY_HCTRL_START(hsync_start_x) |
+ MDP4_LCDC_DISPLAY_HCTRL_END(hsync_end_x));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_DISPLAY_VSTART, display_v_start);
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_DISPLAY_VEND, display_v_end);
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_BORDER_CLR, 0);
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_UNDERFLOW_CLR,
+ MDP4_LCDC_UNDERFLOW_CLR_ENABLE_RECOVERY |
+ MDP4_LCDC_UNDERFLOW_CLR_COLOR(0xff));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_HSYNC_SKEW, lcdc_hsync_skew);
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_CTRL_POLARITY, ctrl_pol);
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_ACTIVE_HCTL,
+ MDP4_LCDC_ACTIVE_HCTL_START(0) |
+ MDP4_LCDC_ACTIVE_HCTL_END(0));
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_ACTIVE_VSTART, 0);
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_ACTIVE_VEND, 0);
+}
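+
+/* For a sense of the units, a worked example with assumed (not taken from
+ * any real panel) 800x480@60 timings -- htotal 928, hsync 840-888,
+ * vtotal 525, vsync 493-496:
+ *
+ *   hsync_start_x   = 928 - 840               =     88
+ *   hsync_end_x     = 928 - (840 - 800) - 1   =    887
+ *   vsync_period    = 525 * 928               = 487200
+ *   vsync_len       = (496 - 493) * 928       =   2784
+ *   display_v_start = (525 - 493) * 928 + 0   =  29696
+ *   display_v_end   = 487200 - (13 * 928) - 1 = 475135
+ */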
+
+static void mdp4_lcdc_encoder_prepare(struct drm_encoder *encoder)
+{
+ mdp4_lcdc_encoder_dpms(encoder, DRM_MODE_DPMS_OFF);
+}
+
+static void mdp4_lcdc_encoder_commit(struct drm_encoder *encoder)
+{
+ /* TODO: hard-coded for 18bpp: */
+ mdp4_crtc_set_config(encoder->crtc,
+ MDP4_DMA_CONFIG_R_BPC(BPC6) |
+ MDP4_DMA_CONFIG_G_BPC(BPC6) |
+ MDP4_DMA_CONFIG_B_BPC(BPC6) |
+ MDP4_DMA_CONFIG_PACK_ALIGN_MSB |
+ MDP4_DMA_CONFIG_PACK(0x21) |
+ MDP4_DMA_CONFIG_DEFLKR_EN |
+ MDP4_DMA_CONFIG_DITHER_EN);
+ mdp4_crtc_set_intf(encoder->crtc, INTF_LCDC_DTV, 0);
+ mdp4_lcdc_encoder_dpms(encoder, DRM_MODE_DPMS_ON);
+}
+
+static const struct drm_encoder_helper_funcs mdp4_lcdc_encoder_helper_funcs = {
+ .dpms = mdp4_lcdc_encoder_dpms,
+ .mode_fixup = mdp4_lcdc_encoder_mode_fixup,
+ .mode_set = mdp4_lcdc_encoder_mode_set,
+ .prepare = mdp4_lcdc_encoder_prepare,
+ .commit = mdp4_lcdc_encoder_commit,
+};
+
+long mdp4_lcdc_round_pixclk(struct drm_encoder *encoder, unsigned long rate)
+{
+ struct mdp4_lcdc_encoder *mdp4_lcdc_encoder =
+ to_mdp4_lcdc_encoder(encoder);
+ return clk_round_rate(mdp4_lcdc_encoder->lcdc_clk, rate);
+}
+
+/* initialize encoder */
+struct drm_encoder *mdp4_lcdc_encoder_init(struct drm_device *dev,
+ struct drm_panel *panel)
+{
+ struct drm_encoder *encoder = NULL;
+ struct mdp4_lcdc_encoder *mdp4_lcdc_encoder;
+ struct regulator *reg;
+ int ret;
+
+ mdp4_lcdc_encoder = kzalloc(sizeof(*mdp4_lcdc_encoder), GFP_KERNEL);
+ if (!mdp4_lcdc_encoder) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ mdp4_lcdc_encoder->panel = panel;
+
+ encoder = &mdp4_lcdc_encoder->base;
+
+ drm_encoder_init(dev, encoder, &mdp4_lcdc_encoder_funcs,
+ DRM_MODE_ENCODER_LVDS);
+ drm_encoder_helper_add(encoder, &mdp4_lcdc_encoder_helper_funcs);
+
+ /* TODO: do we need different pll in other cases? */
+ mdp4_lcdc_encoder->lcdc_clk = mpd4_lvds_pll_init(dev);
+ if (IS_ERR(mdp4_lcdc_encoder->lcdc_clk)) {
+ dev_err(dev->dev, "failed to get lvds_clk\n");
+ ret = PTR_ERR(mdp4_lcdc_encoder->lcdc_clk);
+ goto fail;
+ }
+
+ /* TODO: different regulators in other cases? */
+ reg = devm_regulator_get(dev->dev, "lvds-vccs-3p3v");
+ if (IS_ERR(reg)) {
+ ret = PTR_ERR(reg);
+ dev_err(dev->dev, "failed to get lvds-vccs-3p3v: %d\n", ret);
+ goto fail;
+ }
+ mdp4_lcdc_encoder->regs[0] = reg;
+
+ reg = devm_regulator_get(dev->dev, "lvds-pll-vdda");
+ if (IS_ERR(reg)) {
+ ret = PTR_ERR(reg);
+ dev_err(dev->dev, "failed to get lvds-pll-vdda: %d\n", ret);
+ goto fail;
+ }
+ mdp4_lcdc_encoder->regs[1] = reg;
+
+ reg = devm_regulator_get(dev->dev, "lvds-vdda");
+ if (IS_ERR(reg)) {
+ ret = PTR_ERR(reg);
+ dev_err(dev->dev, "failed to get lvds-vdda: %d\n", ret);
+ goto fail;
+ }
+ mdp4_lcdc_encoder->regs[2] = reg;
+
+ bs_init(mdp4_lcdc_encoder);
+
+ return encoder;
+
+fail:
+ if (encoder)
+ mdp4_lcdc_encoder_destroy(encoder);
+
+ return ERR_PTR(ret);
+}
--- /dev/null
+/*
+ * Copyright (C) 2014 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ * Author: Vinay Simha <vinaysimha@inforcecomputing.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/gpio.h>
+
+#include "mdp4_kms.h"
+
+struct mdp4_lvds_connector {
+ struct drm_connector base;
+ struct drm_encoder *encoder;
+ struct drm_panel *panel;
+};
+#define to_mdp4_lvds_connector(x) container_of(x, struct mdp4_lvds_connector, base)
+
+static enum drm_connector_status mdp4_lvds_connector_detect(
+ struct drm_connector *connector, bool force)
+{
+ struct mdp4_lvds_connector *mdp4_lvds_connector =
+ to_mdp4_lvds_connector(connector);
+
+ return mdp4_lvds_connector->panel ?
+ connector_status_connected :
+ connector_status_disconnected;
+}
+
+static void mdp4_lvds_connector_destroy(struct drm_connector *connector)
+{
+ struct mdp4_lvds_connector *mdp4_lvds_connector =
+ to_mdp4_lvds_connector(connector);
+ struct drm_panel *panel = mdp4_lvds_connector->panel;
+
+ if (panel)
+ drm_panel_detach(panel);
+
+ drm_connector_unregister(connector);
+ drm_connector_cleanup(connector);
+
+ kfree(mdp4_lvds_connector);
+}
+
+static int mdp4_lvds_connector_get_modes(struct drm_connector *connector)
+{
+ struct mdp4_lvds_connector *mdp4_lvds_connector =
+ to_mdp4_lvds_connector(connector);
+ struct drm_panel *panel = mdp4_lvds_connector->panel;
+ int ret = 0;
+
+ if (panel)
+ ret = panel->funcs->get_modes(panel);
+
+ return ret;
+}
+
+static int mdp4_lvds_connector_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+{
+ struct mdp4_lvds_connector *mdp4_lvds_connector =
+ to_mdp4_lvds_connector(connector);
+ struct drm_encoder *encoder = mdp4_lvds_connector->encoder;
+ long actual, requested;
+
+ requested = 1000 * mode->clock;
+ actual = mdp4_lcdc_round_pixclk(encoder, requested);
+
+ DBG("requested=%ld, actual=%ld", requested, actual);
+
+ if (actual != requested)
+ return MODE_CLOCK_RANGE;
+
+ return MODE_OK;
+}
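+
+/* Note the strict equality above: clk_round_rate() on the LVDS PLL snaps
+ * to its frequency table (currently a single 72 MHz entry), so any mode
+ * whose dot clock the table cannot hit exactly -- e.g. requested
+ * 65000 * 1000 vs actual 72000000 -- is rejected with MODE_CLOCK_RANGE.
+ */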
+
+static struct drm_encoder *
+mdp4_lvds_connector_best_encoder(struct drm_connector *connector)
+{
+ struct mdp4_lvds_connector *mdp4_lvds_connector =
+ to_mdp4_lvds_connector(connector);
+ return mdp4_lvds_connector->encoder;
+}
+
+static const struct drm_connector_funcs mdp4_lvds_connector_funcs = {
+ .dpms = drm_helper_connector_dpms,
+ .detect = mdp4_lvds_connector_detect,
+ .fill_modes = drm_helper_probe_single_connector_modes,
+ .destroy = mdp4_lvds_connector_destroy,
+};
+
+static const struct drm_connector_helper_funcs mdp4_lvds_connector_helper_funcs = {
+ .get_modes = mdp4_lvds_connector_get_modes,
+ .mode_valid = mdp4_lvds_connector_mode_valid,
+ .best_encoder = mdp4_lvds_connector_best_encoder,
+};
+
+/* initialize connector */
+struct drm_connector *mdp4_lvds_connector_init(struct drm_device *dev,
+ struct drm_panel *panel, struct drm_encoder *encoder)
+{
+ struct drm_connector *connector = NULL;
+ struct mdp4_lvds_connector *mdp4_lvds_connector;
+ int ret;
+
+ mdp4_lvds_connector = kzalloc(sizeof(*mdp4_lvds_connector), GFP_KERNEL);
+ if (!mdp4_lvds_connector) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ mdp4_lvds_connector->encoder = encoder;
+ mdp4_lvds_connector->panel = panel;
+
+ connector = &mdp4_lvds_connector->base;
+
+ drm_connector_init(dev, connector, &mdp4_lvds_connector_funcs,
+ DRM_MODE_CONNECTOR_LVDS);
+ drm_connector_helper_add(connector, &mdp4_lvds_connector_helper_funcs);
+
+ connector->polled = 0;
+
+ connector->interlace_allowed = 0;
+ connector->doublescan_allowed = 0;
+
+ drm_connector_register(connector);
+
+ drm_mode_connector_attach_encoder(connector, encoder);
+
+ if (panel)
+ drm_panel_attach(panel, connector);
+
+ return connector;
+
+fail:
+ if (connector)
+ mdp4_lvds_connector_destroy(connector);
+
+ return ERR_PTR(ret);
+}
--- /dev/null
+/*
+ * Copyright (C) 2014 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/clk.h>
+#include <linux/clk-provider.h>
+
+#include "mdp4_kms.h"
+
+struct mdp4_lvds_pll {
+ struct clk_hw pll_hw;
+ struct drm_device *dev;
+ unsigned long pixclk;
+};
+#define to_mdp4_lvds_pll(x) container_of(x, struct mdp4_lvds_pll, pll_hw)
+
+static struct mdp4_kms *get_kms(struct mdp4_lvds_pll *lvds_pll)
+{
+ struct msm_drm_private *priv = lvds_pll->dev->dev_private;
+ return to_mdp4_kms(to_mdp_kms(priv->kms));
+}
+
+struct pll_rate {
+ unsigned long rate;
+ struct {
+ uint32_t val;
+ uint32_t reg;
+ } conf[32];
+};
+
+/* NOTE: keep sorted highest freq to lowest: */
+static const struct pll_rate freqtbl[] = {
+ { 72000000, {
+ { 0x8f, REG_MDP4_LVDS_PHY_PLL_CTRL_1 },
+ { 0x30, REG_MDP4_LVDS_PHY_PLL_CTRL_2 },
+ { 0xc6, REG_MDP4_LVDS_PHY_PLL_CTRL_3 },
+ { 0x10, REG_MDP4_LVDS_PHY_PLL_CTRL_5 },
+ { 0x07, REG_MDP4_LVDS_PHY_PLL_CTRL_6 },
+ { 0x62, REG_MDP4_LVDS_PHY_PLL_CTRL_7 },
+ { 0x41, REG_MDP4_LVDS_PHY_PLL_CTRL_8 },
+ { 0x0d, REG_MDP4_LVDS_PHY_PLL_CTRL_9 },
+ { 0, 0 } }
+ },
+};
+
+static const struct pll_rate *find_rate(unsigned long rate)
+{
+ int i;
+ for (i = 1; i < ARRAY_SIZE(freqtbl); i++)
+ if (rate > freqtbl[i].rate)
+ return &freqtbl[i-1];
+ return &freqtbl[i-1];
+}
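+
+/* The table is scanned from the top, so the helper returns the smallest
+ * entry that still covers the request, clamped to the table bounds.  With
+ * a hypothetical second entry, i.e. { 72 MHz, 38.25 MHz }:
+ *
+ *   find_rate(80000000) -> 72 MHz    (clamped to the largest entry)
+ *   find_rate(40000000) -> 72 MHz    (first entry covering the request)
+ *   find_rate(30000000) -> 38.25 MHz (falls through to the last entry)
+ */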
+
+static int mpd4_lvds_pll_enable(struct clk_hw *hw)
+{
+ struct mdp4_lvds_pll *lvds_pll = to_mdp4_lvds_pll(hw);
+ struct mdp4_kms *mdp4_kms = get_kms(lvds_pll);
+ const struct pll_rate *pll_rate = find_rate(lvds_pll->pixclk);
+ int i;
+
+ DBG("pixclk=%lu (%lu)", lvds_pll->pixclk, pll_rate->rate);
+
+ if (WARN_ON(!pll_rate))
+ return -EINVAL;
+
+ mdp4_write(mdp4_kms, REG_MDP4_LCDC_LVDS_PHY_RESET, 0x33);
+
+ for (i = 0; pll_rate->conf[i].reg; i++)
+ mdp4_write(mdp4_kms, pll_rate->conf[i].reg, pll_rate->conf[i].val);
+
+ mdp4_write(mdp4_kms, REG_MDP4_LVDS_PHY_PLL_CTRL_0, 0x01);
+
+ /* Wait until LVDS PLL is locked and ready */
+ while (!mdp4_read(mdp4_kms, REG_MDP4_LVDS_PHY_PLL_LOCKED))
+ cpu_relax();
+
+ return 0;
+}
+
+static void mpd4_lvds_pll_disable(struct clk_hw *hw)
+{
+ struct mdp4_lvds_pll *lvds_pll = to_mdp4_lvds_pll(hw);
+ struct mdp4_kms *mdp4_kms = get_kms(lvds_pll);
+
+ DBG("");
+
+ mdp4_write(mdp4_kms, REG_MDP4_LVDS_PHY_CFG0, 0x0);
+ mdp4_write(mdp4_kms, REG_MDP4_LVDS_PHY_PLL_CTRL_0, 0x0);
+}
+
+static unsigned long mpd4_lvds_pll_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct mdp4_lvds_pll *lvds_pll = to_mdp4_lvds_pll(hw);
+ return lvds_pll->pixclk;
+}
+
+static long mpd4_lvds_pll_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *parent_rate)
+{
+ const struct pll_rate *pll_rate = find_rate(rate);
+ return pll_rate->rate;
+}
+
+static int mpd4_lvds_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ struct mdp4_lvds_pll *lvds_pll = to_mdp4_lvds_pll(hw);
+ lvds_pll->pixclk = rate;
+ return 0;
+}
+
+
+static const struct clk_ops mpd4_lvds_pll_ops = {
+ .enable = mpd4_lvds_pll_enable,
+ .disable = mpd4_lvds_pll_disable,
+ .recalc_rate = mpd4_lvds_pll_recalc_rate,
+ .round_rate = mpd4_lvds_pll_round_rate,
+ .set_rate = mpd4_lvds_pll_set_rate,
+};
+
+static const char *mpd4_lvds_pll_parents[] = {
+ "pxo",
+};
+
+static struct clk_init_data pll_init = {
+ .name = "mpd4_lvds_pll",
+ .ops = &mpd4_lvds_pll_ops,
+ .parent_names = mpd4_lvds_pll_parents,
+ .num_parents = ARRAY_SIZE(mpd4_lvds_pll_parents),
+};
+
+struct clk *mpd4_lvds_pll_init(struct drm_device *dev)
+{
+ struct mdp4_lvds_pll *lvds_pll;
+ struct clk *clk;
+ int ret;
+
+ lvds_pll = devm_kzalloc(dev->dev, sizeof(*lvds_pll), GFP_KERNEL);
+ if (!lvds_pll) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ lvds_pll->dev = dev;
+
+ lvds_pll->pll_hw.init = &pll_init;
+ clk = devm_clk_register(dev->dev, &lvds_pll->pll_hw);
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
+ goto fail;
+ }
+
+ return clk;
+
+fail:
+ return ERR_PTR(ret);
+}
#define reglog 0
#endif
-static char *vram;
+static char *vram = "16m";
MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)");
module_param(vram, charp, 0);
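The "16m" default relies on size-suffix parsing; a minimal sketch, assuming the driver feeds the string through the kernel's standard memparse() helper:

	/* hypothetical sketch: "16m" -> 16 * 1024 * 1024 bytes */
	unsigned long long size = memparse(vram, NULL);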
dev->mode_config.max_height = 2048;
dev->mode_config.funcs = &mode_config_funcs;
- ret = drm_vblank_init(dev, 1);
+ ret = drm_vblank_init(dev, priv->num_crtcs);
if (ret < 0) {
dev_err(dev->dev, "failed to initialize vblank\n");
goto fail;
{
static DEFINE_MUTEX(init_lock);
struct msm_drm_private *priv = dev->dev_private;
- struct msm_gpu *gpu;
mutex_lock(&init_lock);
- if (priv->gpu)
- goto out;
-
- gpu = a3xx_gpu_init(dev);
- if (IS_ERR(gpu)) {
- dev_warn(dev->dev, "failed to load a3xx gpu\n");
- gpu = NULL;
- /* not fatal */
- }
-
- if (gpu) {
- int ret;
- mutex_lock(&dev->struct_mutex);
- gpu->funcs->pm_resume(gpu);
- mutex_unlock(&dev->struct_mutex);
- ret = gpu->funcs->hw_init(gpu);
- if (ret) {
- dev_err(dev->dev, "gpu hw init failed: %d\n", ret);
- gpu->funcs->destroy(gpu);
- gpu = NULL;
- } else {
- /* give inactive pm a chance to kick in: */
- msm_gpu_retire(gpu);
- }
- }
-
- priv->gpu = gpu;
+ if (!priv->gpu)
+ priv->gpu = adreno_load_gpu(dev);
-out:
mutex_unlock(&init_lock);
}
.open = msm_open,
.preclose = msm_preclose,
.lastclose = msm_lastclose,
+ .set_busid = drm_platform_set_busid,
.irq_handler = msm_irq,
.irq_preinstall = msm_irq_preinstall,
.irq_postinstall = msm_irq_postinstall,
for (i = 0; i < ARRAY_SIZE(devnames); i++) {
struct device *dev;
- int ret;
dev = bus_find_device_by_name(&platform_bus_type,
NULL, devnames[i]);
if (!dev) {
- dev_info(master, "still waiting for %s\n", devnames[i]);
+ dev_info(&pdev->dev, "still waiting for %s\n", devnames[i]);
return -EPROBE_DEFER;
}
{
DBG("init");
hdmi_register();
- a3xx_register();
+ adreno_register();
return platform_driver_register(&msm_platform_driver);
}
DBG("fini");
platform_driver_unregister(&msm_platform_driver);
hdmi_unregister();
- a3xx_unregister();
+ adreno_unregister();
}
module_init(msm_drm_register);
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
#include <drm/msm_drm.h>
+#include <drm/drm_gem.h>
struct msm_kms;
struct msm_gpu;
void *msm_gem_prime_vmap(struct drm_gem_object *obj);
void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
- size_t size, struct sg_table *sg);
+ struct dma_buf_attachment *attach, struct sg_table *sg);
int msm_gem_prime_pin(struct drm_gem_object *obj);
void msm_gem_prime_unpin(struct drm_gem_object *obj);
void *msm_gem_vaddr_locked(struct drm_gem_object *obj);
ret = msm_gem_get_iova_locked(fbdev->bo, 0, &paddr);
if (ret) {
dev_err(dev->dev, "failed to get buffer obj iova: %d\n", ret);
- goto fail;
+ goto fail_unlock;
}
fbi = framebuffer_alloc(0, dev->dev);
#include "msm_drv.h"
#include "msm_gem.h"
+#include <linux/dma-buf.h>
struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
{
}
struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
- size_t size, struct sg_table *sg)
+ struct dma_buf_attachment *attach, struct sg_table *sg)
{
- return msm_gem_import(dev, size, sg);
+ return msm_gem_import(dev, attach->dmabuf->size, sg);
}
int msm_gem_prime_pin(struct drm_gem_object *obj)
const char *name, const char *ioname, const char *irqname, int ringsz);
void msm_gpu_cleanup(struct msm_gpu *gpu);
-struct msm_gpu *a3xx_gpu_init(struct drm_device *dev);
-void __init a3xx_register(void);
-void __exit a3xx_unregister(void);
+struct msm_gpu *adreno_load_gpu(struct drm_device *dev);
+void __init adreno_register(void);
+void __exit adreno_unregister(void);
#endif /* __MSM_GPU_H__ */
static int msm_fault_handler(struct iommu_domain *iommu, struct device *dev,
unsigned long iova, int flags, void *arg)
{
- DBG("*** fault: iova=%08lx, flags=%d", iova, flags);
- return -ENOSYS;
+ pr_warn_ratelimited("*** fault: iova=%08lx, flags=%d\n", iova, flags);
+ return 0;
}
static int msm_iommu_attach(struct msm_mmu *mmu, const char **names, int cnt)
nouveau-y += core/subdev/bios/disp.o
nouveau-y += core/subdev/bios/dp.o
nouveau-y += core/subdev/bios/extdev.o
+nouveau-y += core/subdev/bios/fan.o
nouveau-y += core/subdev/bios/gpio.o
nouveau-y += core/subdev/bios/i2c.o
nouveau-y += core/subdev/bios/init.o
nouveau-y += core/subdev/bios/vmap.o
nouveau-y += core/subdev/bios/volt.o
nouveau-y += core/subdev/bios/xpio.o
+nouveau-y += core/subdev/bios/M0205.o
+nouveau-y += core/subdev/bios/M0209.o
nouveau-y += core/subdev/bios/P0260.o
nouveau-y += core/subdev/bus/hwsq.o
nouveau-y += core/subdev/bus/nv04.o
nouveau-y += core/subdev/fb/ramnve0.o
nouveau-y += core/subdev/fb/ramgk20a.o
nouveau-y += core/subdev/fb/ramgm107.o
+nouveau-y += core/subdev/fb/sddr2.o
nouveau-y += core/subdev/fb/sddr3.o
nouveau-y += core/subdev/fb/gddr5.o
+nouveau-y += core/subdev/fuse/base.o
+nouveau-y += core/subdev/fuse/g80.o
+nouveau-y += core/subdev/fuse/gf100.o
+nouveau-y += core/subdev/fuse/gm107.o
nouveau-y += core/subdev/gpio/base.o
nouveau-y += core/subdev/gpio/nv10.o
nouveau-y += core/subdev/gpio/nv50.o
-nouveau-y += core/subdev/gpio/nv92.o
+nouveau-y += core/subdev/gpio/nv94.o
nouveau-y += core/subdev/gpio/nvd0.o
nouveau-y += core/subdev/gpio/nve0.o
nouveau-y += core/subdev/i2c/base.o
nouveau-y += core/subdev/therm/nv84.o
nouveau-y += core/subdev/therm/nva3.o
nouveau-y += core/subdev/therm/nvd0.o
+nouveau-y += core/subdev/therm/gm107.o
nouveau-y += core/subdev/timer/base.o
nouveau-y += core/subdev/timer/nv04.o
nouveau-y += core/subdev/timer/gk20a.o
nouveau-y += core/engine/disp/hdminv84.o
nouveau-y += core/engine/disp/hdminva3.o
nouveau-y += core/engine/disp/hdminvd0.o
+nouveau-y += core/engine/disp/hdminve0.o
nouveau-y += core/engine/disp/piornv50.o
nouveau-y += core/engine/disp/sornv50.o
nouveau-y += core/engine/disp/sornv94.o
}
int
-nvkm_client_notify_new(struct nouveau_client *client,
+nvkm_client_notify_new(struct nouveau_object *object,
struct nvkm_event *event, void *data, u32 size)
{
+ struct nouveau_client *client = nouveau_client(object);
struct nvkm_client_notify *notify;
union {
struct nvif_notify_req_v0 v0;
}
if (ret == 0) {
- ret = nvkm_notify_init(event, nvkm_client_notify, false,
- data, size, reply, &notify->n);
+ ret = nvkm_notify_init(object, event, nvkm_client_notify,
+ false, data, size, reply, &notify->n);
if (ret == 0) {
client->notify[index] = notify;
notify->client = client;
* OTHER DEALINGS IN THE SOFTWARE.
*/
-#include <core/os.h>
+#include <core/object.h>
#include <core/event.h>
void
gpuobj->size = size;
if (heap) {
- ret = nouveau_mm_head(heap, 1, size, size,
+ ret = nouveau_mm_head(heap, 0, 1, size, size,
max(align, (u32)1), &gpuobj->node);
if (ret)
return ret;
static int
nvkm_ioctl_ntfy_new(struct nouveau_handle *handle, void *data, u32 size)
{
- struct nouveau_client *client = nouveau_client(handle->object);
struct nouveau_object *object = handle->object;
struct nouveau_ofuncs *ofuncs = object->oclass->ofuncs;
union {
if (ret = -ENODEV, ofuncs->ntfy)
ret = ofuncs->ntfy(object, args->v0.event, &event);
if (ret == 0) {
- ret = nvkm_client_notify_new(client, event, data, size);
+ ret = nvkm_client_notify_new(object, event, data, size);
if (ret >= 0) {
args->v0.index = ret;
ret = 0;
#define node(root, dir) ((root)->nl_entry.dir == &mm->nodes) ? NULL : \
list_entry((root)->nl_entry.dir, struct nouveau_mm_node, nl_entry)
+static void
+nouveau_mm_dump(struct nouveau_mm *mm, const char *header)
+{
+ struct nouveau_mm_node *node;
+
+ printk(KERN_ERR "nouveau: %s\n", header);
+ printk(KERN_ERR "nouveau: node list:\n");
+ list_for_each_entry(node, &mm->nodes, nl_entry) {
+ printk(KERN_ERR "nouveau: \t%08x %08x %d\n",
+ node->offset, node->length, node->type);
+ }
+ printk(KERN_ERR "nouveau: free list:\n");
+ list_for_each_entry(node, &mm->free, fl_entry) {
+ printk(KERN_ERR "nouveau: \t%08x %08x %d\n",
+ node->offset, node->length, node->type);
+ }
+}
+
void
nouveau_mm_free(struct nouveau_mm *mm, struct nouveau_mm_node **pthis)
{
struct nouveau_mm_node *prev = node(this, prev);
struct nouveau_mm_node *next = node(this, next);
- if (prev && prev->type == 0) {
+ if (prev && prev->type == NVKM_MM_TYPE_NONE) {
prev->length += this->length;
list_del(&this->nl_entry);
kfree(this); this = prev;
}
- if (next && next->type == 0) {
+ if (next && next->type == NVKM_MM_TYPE_NONE) {
next->offset = this->offset;
next->length += this->length;
- if (this->type == 0)
+ if (this->type == NVKM_MM_TYPE_NONE)
list_del(&this->fl_entry);
list_del(&this->nl_entry);
kfree(this); this = NULL;
}
- if (this && this->type != 0) {
+ if (this && this->type != NVKM_MM_TYPE_NONE) {
list_for_each_entry(prev, &mm->free, fl_entry) {
if (this->offset < prev->offset)
break;
}
list_add_tail(&this->fl_entry, &prev->fl_entry);
- this->type = 0;
+ this->type = NVKM_MM_TYPE_NONE;
}
}
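nouveau_mm_free() above merges a released node with free (NVKM_MM_TYPE_NONE) neighbours on both sides before putting it back on the free list; a minimal standalone sketch of the merge rule, with kernel types assumed and the list bookkeeping ignored:

	/* 0 (NVKM_MM_TYPE_NONE) marks a free node */
	struct mmn { u32 offset, length; u8 type; };

	/* merge b into a when both are free and physically adjacent */
	static bool mmn_try_merge(struct mmn *a, struct mmn *b)
	{
		if (a->type != 0 || b->type != 0)
			return false;
		if (a->offset + a->length != b->offset)
			return false;
		a->length += b->length;
		return true;
	}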
b->offset = a->offset;
b->length = size;
+ b->heap = a->heap;
b->type = a->type;
a->offset += size;
a->length -= size;
list_add_tail(&b->nl_entry, &a->nl_entry);
- if (b->type == 0)
+ if (b->type == NVKM_MM_TYPE_NONE)
list_add_tail(&b->fl_entry, &a->fl_entry);
return b;
}
int
-nouveau_mm_head(struct nouveau_mm *mm, u8 type, u32 size_max, u32 size_min,
- u32 align, struct nouveau_mm_node **pnode)
+nouveau_mm_head(struct nouveau_mm *mm, u8 heap, u8 type, u32 size_max,
+ u32 size_min, u32 align, struct nouveau_mm_node **pnode)
{
struct nouveau_mm_node *prev, *this, *next;
u32 mask = align - 1;
u32 splitoff;
u32 s, e;
- BUG_ON(!type);
+ BUG_ON(type == NVKM_MM_TYPE_NONE || type == NVKM_MM_TYPE_HOLE);
list_for_each_entry(this, &mm->free, fl_entry) {
+ if (unlikely(heap != NVKM_MM_HEAP_ANY)) {
+ if (this->heap != heap)
+ continue;
+ }
e = this->offset + this->length;
s = this->offset;
a->length -= size;
b->offset = a->offset + a->length;
b->length = size;
+ b->heap = a->heap;
b->type = a->type;
list_add(&b->nl_entry, &a->nl_entry);
- if (b->type == 0)
+ if (b->type == NVKM_MM_TYPE_NONE)
list_add(&b->fl_entry, &a->fl_entry);
return b;
}
int
-nouveau_mm_tail(struct nouveau_mm *mm, u8 type, u32 size_max, u32 size_min,
- u32 align, struct nouveau_mm_node **pnode)
+nouveau_mm_tail(struct nouveau_mm *mm, u8 heap, u8 type, u32 size_max,
+ u32 size_min, u32 align, struct nouveau_mm_node **pnode)
{
struct nouveau_mm_node *prev, *this, *next;
u32 mask = align - 1;
- BUG_ON(!type);
+ BUG_ON(type == NVKM_MM_TYPE_NONE || type == NVKM_MM_TYPE_HOLE);
list_for_each_entry_reverse(this, &mm->free, fl_entry) {
u32 e = this->offset + this->length;
u32 s = this->offset;
u32 c = 0, a;
+ if (unlikely(heap != NVKM_MM_HEAP_ANY)) {
+ if (this->heap != heap)
+ continue;
+ }
prev = node(this, prev);
if (prev && prev->type != type)
int
nouveau_mm_init(struct nouveau_mm *mm, u32 offset, u32 length, u32 block)
{
- struct nouveau_mm_node *node;
+ struct nouveau_mm_node *node, *prev;
+ u32 next;
- if (block) {
+ if (nouveau_mm_initialised(mm)) {
+ prev = list_last_entry(&mm->nodes, typeof(*node), nl_entry);
+ next = prev->offset + prev->length;
+ if (next != offset) {
+ BUG_ON(next > offset);
+ if (!(node = kzalloc(sizeof(*node), GFP_KERNEL)))
+ return -ENOMEM;
+ node->type = NVKM_MM_TYPE_HOLE;
+ node->offset = next;
+ node->length = offset - next;
+ list_add_tail(&node->nl_entry, &mm->nodes);
+ }
+ BUG_ON(block != mm->block_size);
+ } else {
INIT_LIST_HEAD(&mm->nodes);
INIT_LIST_HEAD(&mm->free);
mm->block_size = block;
list_add_tail(&node->nl_entry, &mm->nodes);
list_add_tail(&node->fl_entry, &mm->free);
- mm->heap_nodes++;
+ node->heap = ++mm->heap_nodes;
return 0;
}
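With this change nouveau_mm_init() may be called more than once to stitch discontiguous ranges into one address map; a worked example with assumed values:

	nouveau_mm_init(&mm, 0x0000, 0x1000, 0x100);  /* heap 1: [0x0000,0x1000) */
	nouveau_mm_init(&mm, 0x2000, 0x1000, 0x100);  /* heap 2: [0x2000,0x3000) */

The second call sees next (0x1000) != offset (0x2000) and inserts an NVKM_MM_TYPE_HOLE node covering [0x1000,0x2000). The hole lives only on the node list, never the free list, so it can never be allocated from, and the block size must match across calls.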
int
nouveau_mm_fini(struct nouveau_mm *mm)
{
- if (nouveau_mm_initialised(mm)) {
- struct nouveau_mm_node *node, *heap =
- list_first_entry(&mm->nodes, typeof(*heap), nl_entry);
- int nodes = 0;
+ struct nouveau_mm_node *node, *temp;
+ int nodes = 0;
- list_for_each_entry(node, &mm->nodes, nl_entry) {
- if (WARN_ON(nodes++ == mm->heap_nodes))
+ if (!nouveau_mm_initialised(mm))
+ return 0;
+
+ list_for_each_entry(node, &mm->nodes, nl_entry) {
+ if (node->type != NVKM_MM_TYPE_HOLE) {
+ if (++nodes > mm->heap_nodes) {
+ nouveau_mm_dump(mm, "mm not clean!");
return -EBUSY;
+ }
}
-
- kfree(heap);
}
+ list_for_each_entry_safe(node, temp, &mm->nodes, nl_entry) {
+ list_del(&node->nl_entry);
+ kfree(node);
+ }
+ mm->heap_nodes = 0;
return 0;
}
}
int
-nvkm_notify_init(struct nvkm_event *event, int (*func)(struct nvkm_notify *),
- bool work, void *data, u32 size, u32 reply,
+nvkm_notify_init(struct nouveau_object *object, struct nvkm_event *event,
+ int (*func)(struct nvkm_notify *), bool work,
+ void *data, u32 size, u32 reply,
struct nvkm_notify *notify)
{
unsigned long flags;
int ret = -ENODEV;
if ((notify->event = event), event->refs) {
- ret = event->func->ctor(data, size, notify);
+ ret = event->func->ctor(object, data, size, notify);
if (ret == 0 && (ret = -EINVAL, notify->size == reply)) {
notify->flags = 0;
notify->block = 1;
sclass = nv_parent(parent)->sclass;
while (sclass) {
if (++nr < size)
- lclass[nr] = sclass->oclass->handle;
+ lclass[nr] = sclass->oclass->handle & 0xffff;
sclass = sclass->sclass;
}
if (engine && (oclass = engine->sclass)) {
while (oclass->ofuncs) {
if (++nr < size)
- lclass[nr] = oclass->handle;
+ lclass[nr] = oclass->handle & 0xffff;
oclass++;
}
}
};
static int
-nouveau_device_event_ctor(void *data, u32 size, struct nvkm_notify *notify)
+nouveau_device_event_ctor(struct nouveau_object *object, void *data, u32 size,
+ struct nvkm_notify *notify)
{
if (!WARN_ON(size != 0)) {
notify->size = 0;
#include <subdev/bus.h>
#include <subdev/gpio.h>
#include <subdev/i2c.h>
+#include <subdev/fuse.h>
#include <subdev/clock.h>
#include <subdev/therm.h>
#include <subdev/mxm.h>
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nve0_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nvd0_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gm107_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nve0_clock_oclass;
-#if 0
- device->oclass[NVDEV_SUBDEV_THERM ] = &nvd0_therm_oclass;
-#endif
+ device->oclass[NVDEV_SUBDEV_THERM ] = &gm107_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_DEVINIT] = gm107_devinit_oclass;
device->oclass[NVDEV_SUBDEV_MC ] = gk20a_mc_oclass;
device->oclass[NVDEV_SUBDEV_INSTMEM] = nv50_instmem_oclass;
device->oclass[NVDEV_SUBDEV_VM ] = &nvc0_vmmgr_oclass;
device->oclass[NVDEV_SUBDEV_BAR ] = &nvc0_bar_oclass;
-#if 0
device->oclass[NVDEV_SUBDEV_PWR ] = nv108_pwr_oclass;
+
+#if 0
device->oclass[NVDEV_SUBDEV_VOLT ] = &nv40_volt_oclass;
#endif
device->oclass[NVDEV_ENGINE_DMAOBJ ] = nvd0_dmaeng_oclass;
#include <subdev/bus.h>
#include <subdev/gpio.h>
#include <subdev/i2c.h>
+#include <subdev/fuse.h>
#include <subdev/clock.h>
#include <subdev/therm.h>
#include <subdev/mxm.h>
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nv50_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv50_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = nv50_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv50_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nv50_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv50_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nv50_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv50_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0x92:
device->cname = "G92";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv50_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv50_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0x94:
device->cname = "G94";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0x96:
device->cname = "G96";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0x98:
device->cname = "G98";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xa0:
device->cname = "G200";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv50_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xaa:
device->cname = "MCP77/MCP78";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = nvaa_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xac:
device->cname = "MCP79/MCP7A";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = nvaa_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xa3:
device->cname = "GT215";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nva3_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xa5:
device->cname = "GT216";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nva3_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xa8:
device->cname = "GT218";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nva3_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xaf:
device->cname = "MCP89";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &g80_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nva3_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
#include <subdev/bus.h>
#include <subdev/gpio.h>
#include <subdev/i2c.h>
+#include <subdev/fuse.h>
#include <subdev/clock.h>
#include <subdev/therm.h>
#include <subdev/mxm.h>
case 0xc0:
device->cname = "GF100";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nvc0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xc4:
device->cname = "GF104";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nvc0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xc3:
device->cname = "GF106";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nvc0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xce:
device->cname = "GF114";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nvc0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xcf:
device->cname = "GF116";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nvc0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xc1:
device->cname = "GF108";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nvc0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
case 0xc8:
device->cname = "GF110";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
- device->oclass[NVDEV_SUBDEV_GPIO ] = nv92_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nv94_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nv94_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nvc0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nva3_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nvd0_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nvd0_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nvc0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nvd0_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nvd0_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = gf117_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nvc0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nvd0_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
#include <subdev/bus.h>
#include <subdev/gpio.h>
#include <subdev/i2c.h>
+#include <subdev/fuse.h>
#include <subdev/clock.h>
#include <subdev/therm.h>
#include <subdev/mxm.h>
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nve0_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nve0_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nve0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nvd0_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nve0_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nve0_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nve0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nvd0_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nve0_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nve0_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nve0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nvd0_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &gk20a_clock_oclass;
device->oclass[NVDEV_SUBDEV_MC ] = gk20a_mc_oclass;
device->oclass[NVDEV_SUBDEV_BUS ] = nvc0_bus_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_TIMER ] = &gk20a_timer_oclass;
device->oclass[NVDEV_SUBDEV_FB ] = gk20a_fb_oclass;
device->oclass[NVDEV_SUBDEV_LTC ] = gk104_ltc_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nve0_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nve0_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nve0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nvd0_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nve0_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nvd0_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nve0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nvd0_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = nve0_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = nve0_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
device->oclass[NVDEV_SUBDEV_CLOCK ] = &nve0_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nvd0_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
#include "conn.h"
int
-nouveau_disp_vblank_ctor(void *data, u32 size, struct nvkm_notify *notify)
+nouveau_disp_vblank_ctor(struct nouveau_object *object, void *data, u32 size,
+ struct nvkm_notify *notify)
{
struct nouveau_disp *disp =
container_of(notify->event, typeof(*disp), vblank);
}
static int
-nouveau_disp_hpd_ctor(void *data, u32 size, struct nvkm_notify *notify)
+nouveau_disp_hpd_ctor(struct nouveau_object *object, void *data, u32 size,
+ struct nvkm_notify *notify)
{
struct nouveau_disp *disp =
container_of(notify->event, typeof(*disp), hpd);
return 0;
}
- ret = nvkm_notify_init(&gpio->event, nvkm_connector_hpd, true,
- &(struct nvkm_gpio_ntfy_req) {
+ ret = nvkm_notify_init(NULL, &gpio->event, nvkm_connector_hpd,
+ true, &(struct nvkm_gpio_ntfy_req) {
.mask = NVKM_GPIO_TOGGLED,
.line = func.line,
},
if (ret)
return ret;
+ ret = nvkm_event_init(&nvd0_disp_chan_uevent, 1, 17, &priv->uevent);
+ if (ret)
+ return ret;
+
nv_engine(priv)->sclass = gm107_disp_base_oclass;
nv_engine(priv)->cclass = &nv50_disp_cclass;
nv_subdev(priv)->intr = nvd0_disp_intr;
priv->dac.sense = nv50_dac_sense;
priv->sor.power = nv50_sor_power;
priv->sor.hda_eld = nvd0_hda_eld;
- priv->sor.hdmi = nvd0_hdmi_ctrl;
+ priv->sor.hdmi = nve0_hdmi_ctrl;
return 0;
}
#include <nvif/unpack.h>
#include <nvif/class.h>
+#include <subdev/timer.h>
+
#include "nv50.h"
int
return ret;
if (size && args->v0.data[0]) {
+ if (outp->info.type == DCB_OUTPUT_DP) {
+ nv_mask(priv, 0x61c1e0 + soff, 0x8000000d, 0x80000001);
+ nv_wait(priv, 0x61c1e0 + soff, 0x80000000, 0x00000000);
+ }
for (i = 0; i < size; i++)
nv_wr32(priv, 0x61c440 + soff, (i << 8) | args->v0.data[0]);
for (; i < 0x60; i++)
nv_wr32(priv, 0x61c440 + soff, (i << 8));
nv_mask(priv, 0x61c448 + soff, 0x80000003, 0x80000003);
- } else
- if (size) {
- nv_mask(priv, 0x61c448 + soff, 0x80000003, 0x80000001);
} else {
- nv_mask(priv, 0x61c448 + soff, 0x80000003, 0x80000000);
+ if (outp->info.type == DCB_OUTPUT_DP) {
+ nv_mask(priv, 0x61c1e0 + soff, 0x80000001, 0x80000000);
+ nv_wait(priv, 0x61c1e0 + soff, 0x80000000, 0x00000000);
+ }
+ nv_mask(priv, 0x61c448 + soff, 0x80000003, 0x80000000 | !!size);
}
return 0;
#include <nvif/unpack.h>
#include <nvif/class.h>
-#include <subdev/bios.h>
-#include <subdev/bios/dcb.h>
-#include <subdev/bios/dp.h>
-#include <subdev/bios/init.h>
+#include <subdev/timer.h>
#include "nv50.h"
struct nv50_disp_sor_hda_eld_v0 v0;
} *args = data;
const u32 soff = outp->or * 0x030;
+ const u32 hoff = head * 0x800;
int ret, i;
nv_ioctl(object, "disp sor hda eld size %d\n", size);
return ret;
if (size && args->v0.data[0]) {
+ if (outp->info.type == DCB_OUTPUT_DP) {
+ nv_mask(priv, 0x616618 + hoff, 0x8000000c, 0x80000001);
+ nv_wait(priv, 0x616618 + hoff, 0x80000000, 0x00000000);
+ }
+ nv_mask(priv, 0x616548 + hoff, 0x00000070, 0x00000000);
for (i = 0; i < size; i++)
nv_wr32(priv, 0x10ec00 + soff, (i << 8) | args->v0.data[i]);
for (; i < 0x60; i++)
nv_wr32(priv, 0x10ec00 + soff, (i << 8));
nv_mask(priv, 0x10ec10 + soff, 0x80000003, 0x80000003);
- } else
- if (size) {
- nv_mask(priv, 0x10ec10 + soff, 0x80000003, 0x80000001);
} else {
- nv_mask(priv, 0x10ec10 + soff, 0x80000003, 0x80000000);
+ if (outp->info.type == DCB_OUTPUT_DP) {
+ nv_mask(priv, 0x616618 + hoff, 0x80000001, 0x80000000);
+ nv_wait(priv, 0x616618 + hoff, 0x80000000, 0x00000000);
+ }
+ nv_mask(priv, 0x10ec10 + soff, 0x80000003, 0x80000000 | !!size);
}
return 0;
/* HDMI_CTRL */
nv_mask(priv, 0x616798 + hoff, 0x401f007f, ctrl);
-
- /* NFI, audio doesn't work without it though.. */
- nv_mask(priv, 0x616548 + hoff, 0x00000070, 0x00000000);
return 0;
}
--- /dev/null
+/*
+ * Copyright 2014 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs
+ */
+
+#include <core/client.h>
+#include <nvif/unpack.h>
+#include <nvif/class.h>
+
+#include "nv50.h"
+
+int
+nve0_hdmi_ctrl(NV50_DISP_MTHD_V1)
+{
+ const u32 hoff = (head * 0x800);
+ const u32 hdmi = (head * 0x400);
+ union {
+ struct nv50_disp_sor_hdmi_pwr_v0 v0;
+ } *args = data;
+ u32 ctrl;
+ int ret;
+
+ nv_ioctl(object, "disp sor hdmi ctrl size %d\n", size);
+ if (nvif_unpack(args->v0, 0, 0, false)) {
+ nv_ioctl(object, "disp sor hdmi ctrl vers %d state %d "
+ "max_ac_packet %d rekey %d\n",
+ args->v0.version, args->v0.state,
+ args->v0.max_ac_packet, args->v0.rekey);
+ if (args->v0.max_ac_packet > 0x1f || args->v0.rekey > 0x7f)
+ return -EINVAL;
+ ctrl = 0x40000000 * !!args->v0.state;
+ ctrl |= args->v0.max_ac_packet << 16;
+ ctrl |= args->v0.rekey;
+ } else
+ return ret;
+
+ if (!(ctrl & 0x40000000)) {
+ nv_mask(priv, 0x616798 + hoff, 0x40000000, 0x00000000);
+ nv_mask(priv, 0x6900c0 + hdmi, 0x00000001, 0x00000000);
+ nv_mask(priv, 0x690000 + hdmi, 0x00000001, 0x00000000);
+ return 0;
+ }
+
+ /* AVI InfoFrame */
+ nv_mask(priv, 0x690000 + hdmi, 0x00000001, 0x00000000);
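+ /* 0x000d0282: AVI InfoFrame header bytes -- type 0x82 (AVI), version 0x02, length 0x0d */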
+ nv_wr32(priv, 0x690008 + hdmi, 0x000d0282);
+ nv_wr32(priv, 0x69000c + hdmi, 0x0000006f);
+ nv_wr32(priv, 0x690010 + hdmi, 0x00000000);
+ nv_wr32(priv, 0x690014 + hdmi, 0x00000000);
+ nv_wr32(priv, 0x690018 + hdmi, 0x00000000);
+ nv_mask(priv, 0x690000 + hdmi, 0x00000001, 0x00000001);
+
+ /* ??? InfoFrame? */
+ nv_mask(priv, 0x6900c0 + hdmi, 0x00000001, 0x00000000);
+ nv_wr32(priv, 0x6900cc + hdmi, 0x00000010);
+ nv_mask(priv, 0x6900c0 + hdmi, 0x00000001, 0x00000001);
+
+ /* ??? */
+ nv_wr32(priv, 0x690080 + hdmi, 0x82000000);
+
+ /* HDMI_CTRL */
+ nv_mask(priv, 0x616798 + hoff, 0x401f007f, ctrl);
+ return 0;
+}
#include <core/enum.h>
#include <nvif/unpack.h>
#include <nvif/class.h>
+#include <nvif/event.h>
#include <subdev/bios.h>
#include <subdev/bios/dcb.h>
nouveau_namedb_destroy(&chan->base);
}
+static void
+nv50_disp_chan_uevent_fini(struct nvkm_event *event, int type, int index)
+{
+ struct nv50_disp_priv *priv = container_of(event, typeof(*priv), uevent);
+ nv_mask(priv, 0x610028, 0x00000001 << index, 0x00000000 << index);
+}
+
+static void
+nv50_disp_chan_uevent_init(struct nvkm_event *event, int types, int index)
+{
+ struct nv50_disp_priv *priv = container_of(event, typeof(*priv), uevent);
+ nv_mask(priv, 0x610028, 0x00000001 << index, 0x00000001 << index);
+}
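+
+/* In 0x610028, bit chid enables completion notifications for channel
+ * chid, while bit (16 + chid) enables error reporting -- hence the
+ * 0x00010001-style masks touched elsewhere in this file. */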
+
+void
+nv50_disp_chan_uevent_send(struct nv50_disp_priv *priv, int chid)
+{
+ struct nvif_notify_uevent_rep {
+ } rep;
+
+ nvkm_event_send(&priv->uevent, 1, chid, &rep, sizeof(rep));
+}
+
+int
+nv50_disp_chan_uevent_ctor(struct nouveau_object *object, void *data, u32 size,
+ struct nvkm_notify *notify)
+{
+ struct nv50_disp_dmac *dmac = (void *)object;
+ union {
+ struct nvif_notify_uevent_req none;
+ } *args = data;
+ int ret;
+
+ if (nvif_unvers(args->none)) {
+ notify->size = sizeof(struct nvif_notify_uevent_rep);
+ notify->types = 1;
+ notify->index = dmac->base.chid;
+ return 0;
+ }
+
+ return ret;
+}
+
+const struct nvkm_event_func
+nv50_disp_chan_uevent = {
+ .ctor = nv50_disp_chan_uevent_ctor,
+ .init = nv50_disp_chan_uevent_init,
+ .fini = nv50_disp_chan_uevent_fini,
+};
+
+int
+nv50_disp_chan_ntfy(struct nouveau_object *object, u32 type,
+ struct nvkm_event **pevent)
+{
+ struct nv50_disp_priv *priv = (void *)object->engine;
+ switch (type) {
+ case NV50_DISP_CORE_CHANNEL_DMA_V0_NTFY_UEVENT:
+ *pevent = &priv->uevent;
+ return 0;
+ default:
+ break;
+ }
+ return -EINVAL;
+}
+
int
nv50_disp_chan_map(struct nouveau_object *object, u64 *addr, u32 *size)
{
return ret;
/* enable error reporting */
- nv_mask(priv, 0x610028, 0x00010001 << chid, 0x00010001 << chid);
+ nv_mask(priv, 0x610028, 0x00010000 << chid, 0x00010000 << chid);
/* initialise channel for dma command submission */
nv_wr32(priv, 0x610204 + (chid * 0x0010), dmac->push);
return -EBUSY;
}
- /* disable error reporting */
+ /* disable error reporting and completion notifications */
nv_mask(priv, 0x610028, 0x00010001 << chid, 0x00000000 << chid);
return nv50_disp_chan_fini(&dmac->base, suspend);
return ret;
/* enable error reporting */
- nv_mask(priv, 0x610028, 0x00010001, 0x00010001);
+ nv_mask(priv, 0x610028, 0x00010000, 0x00010000);
/* attempt to unstick channel from some unknown state */
if ((nv_rd32(priv, 0x610200) & 0x009f0000) == 0x00020000)
return -EBUSY;
}
- /* disable error reporting */
+ /* disable error reporting and completion notifications */
nv_mask(priv, 0x610028, 0x00010001, 0x00000000);
return nv50_disp_chan_fini(&mast->base, suspend);
.base.init = nv50_disp_mast_init,
.base.fini = nv50_disp_mast_fini,
.base.map = nv50_disp_chan_map,
+ .base.ntfy = nv50_disp_chan_ntfy,
.base.rd32 = nv50_disp_chan_rd32,
.base.wr32 = nv50_disp_chan_wr32,
.chid = 0,
.base.dtor = nv50_disp_dmac_dtor,
.base.init = nv50_disp_dmac_init,
.base.fini = nv50_disp_dmac_fini,
+ .base.ntfy = nv50_disp_chan_ntfy,
.base.map = nv50_disp_chan_map,
.base.rd32 = nv50_disp_chan_rd32,
.base.wr32 = nv50_disp_chan_wr32,
.base.dtor = nv50_disp_dmac_dtor,
.base.init = nv50_disp_dmac_init,
.base.fini = nv50_disp_dmac_fini,
+ .base.ntfy = nv50_disp_chan_ntfy,
.base.map = nv50_disp_chan_map,
.base.rd32 = nv50_disp_chan_rd32,
.base.wr32 = nv50_disp_chan_wr32,
.base.dtor = nv50_disp_pioc_dtor,
.base.init = nv50_disp_pioc_init,
.base.fini = nv50_disp_pioc_fini,
+ .base.ntfy = nv50_disp_chan_ntfy,
.base.map = nv50_disp_chan_map,
.base.rd32 = nv50_disp_chan_rd32,
.base.wr32 = nv50_disp_chan_wr32,
.base.dtor = nv50_disp_pioc_dtor,
.base.init = nv50_disp_pioc_init,
.base.fini = nv50_disp_pioc_fini,
+ .base.ntfy = nv50_disp_chan_ntfy,
.base.map = nv50_disp_chan_map,
.base.rd32 = nv50_disp_chan_rd32,
.base.wr32 = nv50_disp_chan_wr32,
}
static void
-nv50_disp_intr_unk20_2_dp(struct nv50_disp_priv *priv,
+nv50_disp_intr_unk20_2_dp(struct nv50_disp_priv *priv, int head,
struct dcb_output *outp, u32 pclk)
{
const int link = !(outp->sorconf.link & 1);
const u32 loff = (link * 0x080) + soff;
const u32 ctrl = nv_rd32(priv, 0x610794 + (or * 8));
const u32 symbol = 100000;
- u32 dpctrl = nv_rd32(priv, 0x61c10c + loff) & 0x000f0000;
+ const s32 vactive = nv_rd32(priv, 0x610af8 + (head * 0x540)) & 0xffff;
+ const s32 vblanke = nv_rd32(priv, 0x610ae8 + (head * 0x540)) & 0xffff;
+ const s32 vblanks = nv_rd32(priv, 0x610af0 + (head * 0x540)) & 0xffff;
+ u32 dpctrl = nv_rd32(priv, 0x61c10c + loff);
u32 clksor = nv_rd32(priv, 0x614300 + soff);
int bestTU = 0, bestVTUi = 0, bestVTUf = 0, bestVTUa = 0;
int TU, VTUi, VTUf, VTUa;
u64 link_data_rate, link_ratio, unk;
u32 best_diff = 64 * symbol;
u32 link_nr, link_bw, bits;
-
- /* calculate packed data rate for each lane */
- if (dpctrl > 0x00030000) link_nr = 4;
- else if (dpctrl > 0x00010000) link_nr = 2;
- else link_nr = 1;
-
- if (clksor & 0x000c0000)
- link_bw = 270000;
- else
- link_bw = 162000;
-
+ u64 value;
+
+ link_bw = (clksor & 0x000c0000) ? 270000 : 162000;
+ link_nr = hweight32(dpctrl & 0x000f0000);
+
+ /* symbols/hblank - algorithm taken from comments in tegra driver */
+ value = vblanke + vactive - vblanks - 7;
+ value = value * link_bw;
+ do_div(value, pclk);
+ value = value - (3 * !!(dpctrl & 0x00004000)) - (12 / link_nr);
+ nv_mask(priv, 0x61c1e8 + soff, 0x0000ffff, value);
+
+ /* symbols/vblank - algorithm taken from comments in tegra driver */
+ value = vblanks - vblanke - 25;
+ value = value * link_bw;
+ do_div(value, pclk);
+ value = value - ((36 / link_nr) + 3) - 1;
+ nv_mask(priv, 0x61c1ec + soff, 0x00ffffff, value);
+
+ /* watermark / activesym */
if ((ctrl & 0xf0000) == 0x60000) bits = 30;
else if ((ctrl & 0xf0000) == 0x50000) bits = 24;
else bits = 18;
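The symbols/hblank and symbols/vblank writes above are plain bandwidth
ratios. As a minimal editor's sketch (not part of the patch; parameter
names are hypothetical stand-ins for the register reads, both clocks are
in kHz so the units cancel, and do_div() comes from <asm/div64.h>), the
hblank budget factors out as:

    static u32
    dp_hblank_symbols(u32 vblanke, u32 vactive, u32 vblanks,
                      u32 link_bw, u32 pclk, u32 dpctrl, u32 link_nr)
    {
            u64 value = vblanke + vactive - vblanks - 7; /* hblank, pixels */
            value = value * link_bw;  /* pixels * (symbols/second) */
            do_div(value, pclk);      /* / (pixels/second) => symbols */
            /* minus enhanced-framing overhead and per-lane fill */
            return value - (3 * !!(dpctrl & 0x00004000)) - (12 / link_nr);
    }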
} else
if (!outp->info.location) {
if (outp->info.type == DCB_OUTPUT_DP)
- nv50_disp_intr_unk20_2_dp(priv, &outp->info, pclk);
+ nv50_disp_intr_unk20_2_dp(priv, head, &outp->info, pclk);
oreg = 0x614300 + (ffs(outp->info.or) - 1) * 0x800;
oval = (conf & 0x0100) ? 0x00000101 : 0x00000000;
hval = 0x00000000;
intr0 &= ~(0x00010000 << chid);
}
+ while (intr0 & 0x0000001f) {
+ u32 chid = __ffs(intr0 & 0x0000001f);
+ nv50_disp_chan_uevent_send(priv, chid);
+ intr0 &= ~(0x00000001 << chid);
+ }
+
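Both completion-interrupt handlers walk their status word with the usual
lowest-set-bit idiom; a self-contained illustration with a hypothetical
pending mask (editor's sketch, __ffs() from <linux/bitops.h>):

    u32 pending = 0x00000013;          /* channels 0, 1 and 4 pending */
    while (pending) {
            u32 chid = __ffs(pending); /* yields 0, then 1, then 4 */
            pending &= ~(1 << chid);   /* clear exactly one bit per pass */
            /* ...send the completion notification for chid... */
    }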
if (intr1 & 0x00000004) {
nouveau_disp_vblank(&priv->base, 0);
nv_wr32(priv, 0x610024, 0x00000004);
if (ret)
return ret;
+ ret = nvkm_event_init(&nv50_disp_chan_uevent, 1, 9, &priv->uevent);
+ if (ret)
+ return ret;
+
nv_engine(priv)->sclass = nv50_disp_base_oclass;
nv_engine(priv)->cclass = &nv50_disp_cclass;
nv_subdev(priv)->intr = nv50_disp_intr;
struct work_struct supervisor;
u32 super;
+ struct nvkm_event uevent;
+
struct {
int nr;
} head;
int nv84_hdmi_ctrl(NV50_DISP_MTHD_V1);
int nva3_hdmi_ctrl(NV50_DISP_MTHD_V1);
int nvd0_hdmi_ctrl(NV50_DISP_MTHD_V1);
+int nve0_hdmi_ctrl(NV50_DISP_MTHD_V1);
int nv50_sor_power(NV50_DISP_MTHD_V1);
int chid;
};
+int nv50_disp_chan_ntfy(struct nouveau_object *, u32, struct nvkm_event **);
int nv50_disp_chan_map(struct nouveau_object *, u64 *, u32 *);
u32 nv50_disp_chan_rd32(struct nouveau_object *, u64);
void nv50_disp_chan_wr32(struct nouveau_object *, u64, u32);
+extern const struct nvkm_event_func nv50_disp_chan_uevent;
+int nv50_disp_chan_uevent_ctor(struct nouveau_object *, void *, u32,
+ struct nvkm_notify *);
+void nv50_disp_chan_uevent_send(struct nv50_disp_priv *, int);
+
+extern const struct nvkm_event_func nvd0_disp_chan_uevent;
#define nv50_disp_chan_init(a) \
nouveau_namedb_init(&(a)->base)
if (ret)
return ret;
+ ret = nvkm_event_init(&nv50_disp_chan_uevent, 1, 9, &priv->uevent);
+ if (ret)
+ return ret;
+
nv_engine(priv)->sclass = nv84_disp_base_oclass;
nv_engine(priv)->cclass = &nv50_disp_cclass;
nv_subdev(priv)->intr = nv50_disp_intr;
if (ret)
return ret;
+ ret = nvkm_event_init(&nv50_disp_chan_uevent, 1, 9, &priv->uevent);
+ if (ret)
+ return ret;
+
nv_engine(priv)->sclass = nv94_disp_base_oclass;
nv_engine(priv)->cclass = &nv50_disp_cclass;
nv_subdev(priv)->intr = nv50_disp_intr;
if (ret)
return ret;
+ ret = nvkm_event_init(&nv50_disp_chan_uevent, 1, 9, &priv->uevent);
+ if (ret)
+ return ret;
+
nv_engine(priv)->sclass = nva0_disp_base_oclass;
nv_engine(priv)->cclass = &nv50_disp_cclass;
nv_subdev(priv)->intr = nv50_disp_intr;
if (ret)
return ret;
+ ret = nvkm_event_init(&nv50_disp_chan_uevent, 1, 9, &priv->uevent);
+ if (ret)
+ return ret;
+
nv_engine(priv)->sclass = nva3_disp_base_oclass;
nv_engine(priv)->cclass = &nv50_disp_cclass;
nv_subdev(priv)->intr = nv50_disp_intr;
#include "nv50.h"
+/*******************************************************************************
+ * EVO channel base class
+ ******************************************************************************/
+
+static void
+nvd0_disp_chan_uevent_fini(struct nvkm_event *event, int type, int index)
+{
+ struct nv50_disp_priv *priv = container_of(event, typeof(*priv), uevent);
+ nv_mask(priv, 0x610090, 0x00000001 << index, 0x00000000 << index);
+}
+
+static void
+nvd0_disp_chan_uevent_init(struct nvkm_event *event, int types, int index)
+{
+ struct nv50_disp_priv *priv = container_of(event, typeof(*priv), uevent);
+ nv_mask(priv, 0x610090, 0x00000001 << index, 0x00000001 << index);
+}
+
+const struct nvkm_event_func
+nvd0_disp_chan_uevent = {
+ .ctor = nv50_disp_chan_uevent_ctor,
+ .init = nvd0_disp_chan_uevent_init,
+ .fini = nvd0_disp_chan_uevent_fini,
+};
+
/*******************************************************************************
* EVO DMA channel base class
******************************************************************************/
return ret;
/* enable error reporting */
- nv_mask(priv, 0x610090, 0x00000001 << chid, 0x00000001 << chid);
nv_mask(priv, 0x6100a0, 0x00000001 << chid, 0x00000001 << chid);
/* initialise channel for dma command submission */
return -EBUSY;
}
- /* disable error reporting */
+ /* disable error reporting and completion notification */
nv_mask(priv, 0x610090, 0x00000001 << chid, 0x00000000);
nv_mask(priv, 0x6100a0, 0x00000001 << chid, 0x00000000);
return ret;
/* enable error reporting */
- nv_mask(priv, 0x610090, 0x00000001, 0x00000001);
nv_mask(priv, 0x6100a0, 0x00000001, 0x00000001);
/* initialise channel for dma command submission */
return -EBUSY;
}
- /* disable error reporting */
+ /* disable error reporting and completion notification */
nv_mask(priv, 0x610090, 0x00000001, 0x00000000);
nv_mask(priv, 0x6100a0, 0x00000001, 0x00000000);
.base.dtor = nv50_disp_dmac_dtor,
.base.init = nvd0_disp_mast_init,
.base.fini = nvd0_disp_mast_fini,
+ .base.ntfy = nv50_disp_chan_ntfy,
.base.map = nv50_disp_chan_map,
.base.rd32 = nv50_disp_chan_rd32,
.base.wr32 = nv50_disp_chan_wr32,
.base.dtor = nv50_disp_dmac_dtor,
.base.init = nvd0_disp_dmac_init,
.base.fini = nvd0_disp_dmac_fini,
+ .base.ntfy = nv50_disp_chan_ntfy,
.base.map = nv50_disp_chan_map,
.base.rd32 = nv50_disp_chan_rd32,
.base.wr32 = nv50_disp_chan_wr32,
.base.dtor = nv50_disp_dmac_dtor,
.base.init = nvd0_disp_dmac_init,
.base.fini = nvd0_disp_dmac_fini,
+ .base.ntfy = nv50_disp_chan_ntfy,
.base.map = nv50_disp_chan_map,
.base.rd32 = nv50_disp_chan_rd32,
.base.wr32 = nv50_disp_chan_wr32,
return ret;
/* enable error reporting */
- nv_mask(priv, 0x610090, 0x00000001 << chid, 0x00000001 << chid);
nv_mask(priv, 0x6100a0, 0x00000001 << chid, 0x00000001 << chid);
/* activate channel */
return -EBUSY;
}
- /* disable error reporting */
+ /* disable error reporting and completion notification */
nv_mask(priv, 0x610090, 0x00000001 << chid, 0x00000000);
nv_mask(priv, 0x6100a0, 0x00000001 << chid, 0x00000000);
.base.dtor = nv50_disp_pioc_dtor,
.base.init = nvd0_disp_pioc_init,
.base.fini = nvd0_disp_pioc_fini,
+ .base.ntfy = nv50_disp_chan_ntfy,
.base.map = nv50_disp_chan_map,
.base.rd32 = nv50_disp_chan_rd32,
.base.wr32 = nv50_disp_chan_wr32,
.base.dtor = nv50_disp_pioc_dtor,
.base.init = nvd0_disp_pioc_init,
.base.fini = nvd0_disp_pioc_fini,
+ .base.ntfy = nv50_disp_chan_ntfy,
.base.map = nv50_disp_chan_map,
.base.rd32 = nv50_disp_chan_rd32,
.base.wr32 = nv50_disp_chan_wr32,
const int or = ffs(outp->or) - 1;
const u32 ctrl = nv_rd32(priv, 0x660200 + (or * 0x020));
const u32 conf = nv_rd32(priv, 0x660404 + (head * 0x300));
+ const s32 vactive = nv_rd32(priv, 0x660414 + (head * 0x300)) & 0xffff;
+ const s32 vblanke = nv_rd32(priv, 0x66041c + (head * 0x300)) & 0xffff;
+ const s32 vblanks = nv_rd32(priv, 0x660420 + (head * 0x300)) & 0xffff;
const u32 pclk = nv_rd32(priv, 0x660450 + (head * 0x300)) / 1000;
const u32 link = ((ctrl & 0xf00) == 0x800) ? 0 : 1;
const u32 hoff = (head * 0x800);
const u32 loff = (link * 0x080) + soff;
const u32 symbol = 100000;
const u32 TU = 64;
- u32 dpctrl = nv_rd32(priv, 0x61c10c + loff) & 0x000f0000;
+ u32 dpctrl = nv_rd32(priv, 0x61c10c + loff);
u32 clksor = nv_rd32(priv, 0x612300 + soff);
u32 datarate, link_nr, link_bw, bits;
u64 ratio, value;
+ link_nr = hweight32(dpctrl & 0x000f0000);
+ link_bw = (clksor & 0x007c0000) >> 18;
+ link_bw *= 27000;
+
+ /* symbols/hblank - algorithm taken from comments in tegra driver */
+ value = vblanke + vactive - vblanks - 7;
+ value = value * link_bw;
+ do_div(value, pclk);
+ value = value - (3 * !!(dpctrl & 0x00004000)) - (12 / link_nr);
+ nv_mask(priv, 0x616620 + hoff, 0x0000ffff, value);
+
+ /* symbols/vblank - algorithm taken from comments in tegra driver */
+ value = vblanks - vblanke - 25;
+ value = value * link_bw;
+ do_div(value, pclk);
+ value = value - ((36 / link_nr) + 3) - 1;
+ nv_mask(priv, 0x616624 + hoff, 0x00ffffff, value);
+
+ /* watermark */
if ((conf & 0x3c0) == 0x180) bits = 30;
else if ((conf & 0x3c0) == 0x140) bits = 24;
else bits = 18;
datarate = (pclk * bits) / 8;
- if (dpctrl > 0x00030000) link_nr = 4;
- else if (dpctrl > 0x00010000) link_nr = 2;
- else link_nr = 1;
-
- link_bw = (clksor & 0x007c0000) >> 18;
- link_bw *= 27000;
-
ratio = datarate;
ratio *= symbol;
do_div(ratio, link_nr * link_bw);
if (intr & 0x00000001) {
u32 stat = nv_rd32(priv, 0x61008c);
- nv_wr32(priv, 0x61008c, stat);
+ while (stat) {
+ int chid = __ffs(stat);
+ stat &= ~(1 << chid);
+ nv50_disp_chan_uevent_send(priv, chid);
+ nv_wr32(priv, 0x61008c, 1 << chid);
+ }
intr &= ~0x00000001;
}
if (ret)
return ret;
+ ret = nvkm_event_init(&nvd0_disp_chan_uevent, 1, 17, &priv->uevent);
+ if (ret)
+ return ret;
+
nv_engine(priv)->sclass = nvd0_disp_base_oclass;
nv_engine(priv)->cclass = &nv50_disp_cclass;
nv_subdev(priv)->intr = nvd0_disp_intr;
if (ret)
return ret;
+ ret = nvkm_event_init(&nvd0_disp_chan_uevent, 1, 17, &priv->uevent);
+ if (ret)
+ return ret;
+
nv_engine(priv)->sclass = nve0_disp_base_oclass;
nv_engine(priv)->cclass = &nv50_disp_cclass;
nv_subdev(priv)->intr = nvd0_disp_intr;
priv->dac.sense = nv50_dac_sense;
priv->sor.power = nv50_sor_power;
priv->sor.hda_eld = nvd0_hda_eld;
- priv->sor.hdmi = nvd0_hdmi_ctrl;
+ priv->sor.hdmi = nve0_hdmi_ctrl;
return 0;
}
if (ret)
return ret;
+ ret = nvkm_event_init(&nvd0_disp_chan_uevent, 1, 17, &priv->uevent);
+ if (ret)
+ return ret;
+
nv_engine(priv)->sclass = nvf0_disp_base_oclass;
nv_engine(priv)->cclass = &nv50_disp_cclass;
nv_subdev(priv)->intr = nvd0_disp_intr;
priv->dac.sense = nv50_dac_sense;
priv->sor.power = nv50_sor_power;
priv->sor.hda_eld = nvd0_hda_eld;
- priv->sor.hdmi = nvd0_hdmi_ctrl;
+ priv->sor.hdmi = nve0_hdmi_ctrl;
return 0;
}
atomic_set(&outp->lt.done, 0);
/* link maintenance */
- ret = nvkm_notify_init(&i2c->event, nvkm_output_dp_irq, true,
+ ret = nvkm_notify_init(NULL, &i2c->event, nvkm_output_dp_irq, true,
&(struct nvkm_i2c_ntfy_req) {
.mask = NVKM_I2C_IRQ,
.port = outp->base.edid->index,
}
/* hotplug detect, replaces gpio-based mechanism with aux events */
- ret = nvkm_notify_init(&i2c->event, nvkm_output_dp_hpd, true,
+ ret = nvkm_notify_init(NULL, &i2c->event, nvkm_output_dp_hpd, true,
&(struct nvkm_i2c_ntfy_req) {
.mask = NVKM_I2C_PLUG | NVKM_I2C_UNPLUG,
.port = outp->base.edid->index,
extern struct nouveau_oclass *nvkm_output_oclass;
extern struct nouveau_oclass *nvkm_connector_oclass;
-int nouveau_disp_vblank_ctor(void *data, u32 size, struct nvkm_notify *);
+int nouveau_disp_vblank_ctor(struct nouveau_object *, void *data, u32 size,
+ struct nvkm_notify *);
void nouveau_disp_vblank(struct nouveau_disp *, int head);
int nouveau_disp_ntfy(struct nouveau_object *, u32, struct nvkm_event **);
#include <engine/fifo.h>
static int
-nouveau_fifo_event_ctor(void *data, u32 size, struct nvkm_notify *notify)
+nouveau_fifo_event_ctor(struct nouveau_object *object, void *data, u32 size,
+ struct nvkm_notify *notify)
{
if (size == 0) {
notify->size = 0;
}
int
-nouveau_fifo_uevent_ctor(void *data, u32 size, struct nvkm_notify *notify)
+nouveau_fifo_uevent_ctor(struct nouveau_object *object, void *data, u32 size,
+ struct nvkm_notify *notify)
{
union {
struct nvif_notify_uevent_req none;
return ret;
for (i = 0; pdisp && i < pdisp->vblank.index_nr; i++) {
- ret = nvkm_notify_init(&pdisp->vblank, pclass->vblank, false,
+ ret = nvkm_notify_init(NULL, &pdisp->vblank, pclass->vblank,
+ false,
&(struct nvif_notify_head_req_v0) {
.head = i,
},
int nouveau_client_fini(struct nouveau_client *, bool suspend);
const char *nouveau_client_name(void *obj);
-int nvkm_client_notify_new(struct nouveau_client *, struct nvkm_event *,
+int nvkm_client_notify_new(struct nouveau_object *, struct nvkm_event *,
void *data, u32 size);
int nvkm_client_notify_del(struct nouveau_client *, int index);
int nvkm_client_notify_get(struct nouveau_client *, int index);
* been created, and are allowed to assume any subdevs in the
* list above them exist and have been initialised.
*/
+ NVDEV_SUBDEV_FUSE,
NVDEV_SUBDEV_MXM,
NVDEV_SUBDEV_MC,
NVDEV_SUBDEV_BUS,
#include <core/notify.h>
struct nvkm_event_func {
- int (*ctor)(void *data, u32 size, struct nvkm_notify *);
+ int (*ctor)(struct nouveau_object *, void *data, u32 size,
+ struct nvkm_notify *);
void (*send)(void *data, u32 size, struct nvkm_notify *);
void (*init)(struct nvkm_event *, int type, int index);
void (*fini)(struct nvkm_event *, int type, int index);
struct list_head fl_entry;
struct list_head rl_entry;
+#define NVKM_MM_HEAP_ANY 0x00
+ u8 heap;
+#define NVKM_MM_TYPE_NONE 0x00
+#define NVKM_MM_TYPE_HOLE 0xff
u8 type;
u32 offset;
u32 length;
int nouveau_mm_init(struct nouveau_mm *, u32 offset, u32 length, u32 block);
int nouveau_mm_fini(struct nouveau_mm *);
-int nouveau_mm_head(struct nouveau_mm *, u8 type, u32 size_max, u32 size_min,
- u32 align, struct nouveau_mm_node **);
-int nouveau_mm_tail(struct nouveau_mm *, u8 type, u32 size_max, u32 size_min,
- u32 align, struct nouveau_mm_node **);
+int nouveau_mm_head(struct nouveau_mm *, u8 heap, u8 type, u32 size_max,
+ u32 size_min, u32 align, struct nouveau_mm_node **);
+int nouveau_mm_tail(struct nouveau_mm *, u8 heap, u8 type, u32 size_max,
+ u32 size_min, u32 align, struct nouveau_mm_node **);
void nouveau_mm_free(struct nouveau_mm *, struct nouveau_mm_node **);
#endif
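nouveau_mm_head()/nouveau_mm_tail() gain a heap argument ahead of type; a
hypothetical call site updated for the new signature (NVKM_MM_HEAP_ANY
preserves the old search-all-heaps behaviour, matching the literal 0 the
nv20/nv50 tile-tag hunks below now pass):

    struct nouveau_mm_node *node = NULL;
    int ret = nouveau_mm_head(mm, NVKM_MM_HEAP_ANY, 1 /* type */,
                              tags, tags, 1 /* align */, &node);
    if (ret)
            return ret;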
const void *data;
};
-int nvkm_notify_init(struct nvkm_event *, int (*func)(struct nvkm_notify *),
- bool work, void *data, u32 size, u32 reply,
+int nvkm_notify_init(struct nouveau_object *, struct nvkm_event *,
+ int (*func)(struct nvkm_notify *), bool work,
+ void *data, u32 size, u32 reply,
struct nvkm_notify *);
void nvkm_notify_fini(struct nvkm_notify *);
void nvkm_notify_get(struct nvkm_notify *);
extern struct nouveau_oclass *gk20a_fifo_oclass;
extern struct nouveau_oclass *nv108_fifo_oclass;
-int nouveau_fifo_uevent_ctor(void *, u32, struct nvkm_notify *);
+int nouveau_fifo_uevent_ctor(struct nouveau_object *, void *, u32,
+ struct nvkm_notify *);
void nouveau_fifo_uevent(struct nouveau_fifo *);
void nv04_fifo_intr(struct nouveau_subdev *);
int (*alloc)(struct nouveau_bar *, struct nouveau_object *,
struct nouveau_mem *, struct nouveau_object **);
- void __iomem *iomem;
int (*kmap)(struct nouveau_bar *, struct nouveau_mem *,
u32 flags, struct nouveau_vma *);
--- /dev/null
+#ifndef __NVBIOS_M0205_H__
+#define __NVBIOS_M0205_H__
+
+struct nvbios_M0205T {
+ u16 freq;
+};
+
+u32 nvbios_M0205Te(struct nouveau_bios *,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len, u8 *snr, u8 *ssz);
+u32 nvbios_M0205Tp(struct nouveau_bios *,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len, u8 *snr, u8 *ssz,
+ struct nvbios_M0205T *);
+
+struct nvbios_M0205E {
+ u8 type;
+};
+
+u32 nvbios_M0205Ee(struct nouveau_bios *, int idx,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len);
+u32 nvbios_M0205Ep(struct nouveau_bios *, int idx,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len,
+ struct nvbios_M0205E *);
+
+struct nvbios_M0205S {
+ u8 data;
+};
+
+u32 nvbios_M0205Se(struct nouveau_bios *, int ent, int idx, u8 *ver, u8 *hdr);
+u32 nvbios_M0205Sp(struct nouveau_bios *, int ent, int idx, u8 *ver, u8 *hdr,
+ struct nvbios_M0205S *);
+
+#endif
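A sketch of walking the new table with these accessors (hypothetical
usage; nvbios_M0205Ep() re-derives the header on every call, so looping
on idx is safe):

    u8 ver, hdr, cnt, len;
    struct nvbios_M0205E entry;
    int idx = 0;

    while (nvbios_M0205Ep(bios, idx++, &ver, &hdr, &cnt, &len, &entry)) {
            /* entry.type: low nibble of the entry's first byte */
    }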
--- /dev/null
+#ifndef __NVBIOS_M0209_H__
+#define __NVBIOS_M0209_H__
+
+u32 nvbios_M0209Te(struct nouveau_bios *,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len, u8 *snr, u8 *ssz);
+
+struct nvbios_M0209E {
+ u8 v00_40;
+ u8 bits;
+ u8 modulo;
+ u8 v02_40;
+ u8 v02_07;
+ u8 v03;
+};
+
+u32 nvbios_M0209Ee(struct nouveau_bios *, int idx,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len);
+u32 nvbios_M0209Ep(struct nouveau_bios *, int idx,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len,
+ struct nvbios_M0209E *);
+
+struct nvbios_M0209S {
+ u32 data[0x200];
+};
+
+u32 nvbios_M0209Se(struct nouveau_bios *, int ent, int idx, u8 *ver, u8 *hdr);
+u32 nvbios_M0209Sp(struct nouveau_bios *, int ent, int idx, u8 *ver, u8 *hdr,
+ struct nvbios_M0209S *);
+
+#endif
--- /dev/null
+#ifndef __NVBIOS_FAN_H__
+#define __NVBIOS_FAN_H__
+
+#include <subdev/bios/therm.h>
+
+u16 nvbios_fan_parse(struct nouveau_bios *bios, struct nvbios_therm_fan *fan);
+
+#endif
struct nouveau_bios;
struct nvbios_ramcfg {
- unsigned rammap_11_08_01:1;
- unsigned rammap_11_08_0c:2;
- unsigned rammap_11_08_10:1;
- unsigned rammap_11_11_0c:2;
+ unsigned rammap_ver;
+ unsigned rammap_hdr;
+ unsigned rammap_min;
+ unsigned rammap_max;
+ union {
+ struct {
+ unsigned rammap_10_04_02:1;
+ unsigned rammap_10_04_08:1;
+ };
+ struct {
+ unsigned rammap_11_08_01:1;
+ unsigned rammap_11_08_0c:2;
+ unsigned rammap_11_08_10:1;
+ unsigned rammap_11_09_01ff:9;
+ unsigned rammap_11_0a_03fe:9;
+ unsigned rammap_11_0a_0400:1;
+ unsigned rammap_11_0a_0800:1;
+ unsigned rammap_11_0b_01f0:5;
+ unsigned rammap_11_0b_0200:1;
+ unsigned rammap_11_0b_0400:1;
+ unsigned rammap_11_0b_0800:1;
+ unsigned rammap_11_0d:8;
+ unsigned rammap_11_0e:8;
+ unsigned rammap_11_0f:8;
+ unsigned rammap_11_11_0c:2;
+ };
+ };
- unsigned ramcfg_11_01_01:1;
- unsigned ramcfg_11_01_02:1;
- unsigned ramcfg_11_01_04:1;
- unsigned ramcfg_11_01_08:1;
- unsigned ramcfg_11_01_10:1;
- unsigned ramcfg_11_01_20:1;
- unsigned ramcfg_11_01_40:1;
- unsigned ramcfg_11_01_80:1;
- unsigned ramcfg_11_02_03:2;
- unsigned ramcfg_11_02_04:1;
- unsigned ramcfg_11_02_08:1;
- unsigned ramcfg_11_02_10:1;
- unsigned ramcfg_11_02_40:1;
- unsigned ramcfg_11_02_80:1;
- unsigned ramcfg_11_03_0f:4;
- unsigned ramcfg_11_03_30:2;
- unsigned ramcfg_11_03_c0:2;
- unsigned ramcfg_11_03_f0:4;
- unsigned ramcfg_11_04:8;
- unsigned ramcfg_11_06:8;
- unsigned ramcfg_11_07_02:1;
- unsigned ramcfg_11_07_04:1;
- unsigned ramcfg_11_07_08:1;
- unsigned ramcfg_11_07_10:1;
- unsigned ramcfg_11_07_40:1;
- unsigned ramcfg_11_07_80:1;
- unsigned ramcfg_11_08_01:1;
- unsigned ramcfg_11_08_02:1;
- unsigned ramcfg_11_08_04:1;
- unsigned ramcfg_11_08_08:1;
- unsigned ramcfg_11_08_10:1;
- unsigned ramcfg_11_08_20:1;
- unsigned ramcfg_11_09:8;
+ unsigned ramcfg_ver;
+ unsigned ramcfg_hdr;
+ unsigned ramcfg_timing;
+ union {
+ struct {
+ unsigned ramcfg_10_02_01:1;
+ unsigned ramcfg_10_02_02:1;
+ unsigned ramcfg_10_02_04:1;
+ unsigned ramcfg_10_02_08:1;
+ unsigned ramcfg_10_02_10:1;
+ unsigned ramcfg_10_02_20:1;
+ unsigned ramcfg_10_02_40:1;
+ unsigned ramcfg_10_03_0f:4;
+ unsigned ramcfg_10_05:8;
+ unsigned ramcfg_10_06:8;
+ unsigned ramcfg_10_07:8;
+ unsigned ramcfg_10_08:8;
+ unsigned ramcfg_10_09_0f:4;
+ unsigned ramcfg_10_09_f0:4;
+ };
+ struct {
+ unsigned ramcfg_11_01_01:1;
+ unsigned ramcfg_11_01_02:1;
+ unsigned ramcfg_11_01_04:1;
+ unsigned ramcfg_11_01_08:1;
+ unsigned ramcfg_11_01_10:1;
+ unsigned ramcfg_11_01_20:1;
+ unsigned ramcfg_11_01_40:1;
+ unsigned ramcfg_11_01_80:1;
+ unsigned ramcfg_11_02_03:2;
+ unsigned ramcfg_11_02_04:1;
+ unsigned ramcfg_11_02_08:1;
+ unsigned ramcfg_11_02_10:1;
+ unsigned ramcfg_11_02_40:1;
+ unsigned ramcfg_11_02_80:1;
+ unsigned ramcfg_11_03_0f:4;
+ unsigned ramcfg_11_03_30:2;
+ unsigned ramcfg_11_03_c0:2;
+ unsigned ramcfg_11_03_f0:4;
+ unsigned ramcfg_11_04:8;
+ unsigned ramcfg_11_06:8;
+ unsigned ramcfg_11_07_02:1;
+ unsigned ramcfg_11_07_04:1;
+ unsigned ramcfg_11_07_08:1;
+ unsigned ramcfg_11_07_10:1;
+ unsigned ramcfg_11_07_40:1;
+ unsigned ramcfg_11_07_80:1;
+ unsigned ramcfg_11_08_01:1;
+ unsigned ramcfg_11_08_02:1;
+ unsigned ramcfg_11_08_04:1;
+ unsigned ramcfg_11_08_08:1;
+ unsigned ramcfg_11_08_10:1;
+ unsigned ramcfg_11_08_20:1;
+ unsigned ramcfg_11_09:8;
+ };
+ };
+ unsigned timing_ver;
+ unsigned timing_hdr;
unsigned timing[11];
- unsigned timing_20_2e_03:2;
- unsigned timing_20_2e_30:2;
- unsigned timing_20_2e_c0:2;
- unsigned timing_20_2f_03:2;
- unsigned timing_20_2c_003f:6;
- unsigned timing_20_2c_1fc0:7;
- unsigned timing_20_30_f8:5;
- unsigned timing_20_30_07:3;
- unsigned timing_20_31_0007:3;
- unsigned timing_20_31_0078:4;
- unsigned timing_20_31_0780:4;
- unsigned timing_20_31_0800:1;
- unsigned timing_20_31_7000:3;
- unsigned timing_20_31_8000:1;
+ union {
+ struct {
+ unsigned timing_10_WR:8;
+ unsigned timing_10_CL:8;
+ unsigned timing_10_ODT:3;
+ unsigned timing_10_CWL:8;
+ };
+ struct {
+ unsigned timing_20_2e_03:2;
+ unsigned timing_20_2e_30:2;
+ unsigned timing_20_2e_c0:2;
+ unsigned timing_20_2f_03:2;
+ unsigned timing_20_2c_003f:6;
+ unsigned timing_20_2c_1fc0:7;
+ unsigned timing_20_30_f8:5;
+ unsigned timing_20_30_07:3;
+ unsigned timing_20_31_0007:3;
+ unsigned timing_20_31_0078:4;
+ unsigned timing_20_31_0780:4;
+ unsigned timing_20_31_0800:1;
+ unsigned timing_20_31_7000:3;
+ unsigned timing_20_31_8000:1;
+ };
+ };
};
u8 nvbios_ramcfg_count(struct nouveau_bios *);
u32 nvbios_rammapEe(struct nouveau_bios *, int idx,
u8 *ver, u8 *hdr, u8 *cnt, u8 *len);
+u32 nvbios_rammapEp(struct nouveau_bios *, int idx,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len,
+ struct nvbios_ramcfg *);
u32 nvbios_rammapEm(struct nouveau_bios *, u16 mhz,
- u8 *ver, u8 *hdr, u8 *cnt, u8 *len);
-u32 nvbios_rammapEp(struct nouveau_bios *, u16 mhz,
u8 *ver, u8 *hdr, u8 *cnt, u8 *len,
struct nvbios_ramcfg *);
struct nvbios_therm_threshold thrs_shutdown;
};
+enum nvbios_therm_fan_type {
+ NVBIOS_THERM_FAN_UNK = 0,
+ NVBIOS_THERM_FAN_TOGGLE = 1,
+ NVBIOS_THERM_FAN_PWM = 2,
+};
+
/* no vbios have more than 6 */
#define NOUVEAU_TEMP_FAN_TRIP_MAX 10
struct nouveau_therm_trip_point {
};
struct nvbios_therm_fan {
- u16 pwm_freq;
+ enum nvbios_therm_fan_type type;
+
+ u32 pwm_freq;
u8 min_duty;
u8 max_duty;
nv_clk_src_mdiv,
nv_clk_src_core,
+ nv_clk_src_core_intm,
nv_clk_src_shader,
nv_clk_src_mem,
#include <subdev/bios/ramcfg.h>
struct nouveau_ram_data {
+ struct list_head head;
struct nvbios_ramcfg bios;
u32 freq;
};
int ranks;
int parts;
+ int part_mask;
int (*get)(struct nouveau_fb *, u64 size, u32 align,
u32 size_nc, u32 type, struct nouveau_mem **);
int (*calc)(struct nouveau_fb *, u32 freq);
int (*prog)(struct nouveau_fb *);
void (*tidy)(struct nouveau_fb *);
- struct {
- u8 version;
- u32 data;
- u8 size;
- } rammap, ramcfg, timing;
u32 freq;
u32 mr[16];
u32 mr1_nuts;
--- /dev/null
+#ifndef __NOUVEAU_FB_REGS_04_H__
+#define __NOUVEAU_FB_REGS_04_H__
+
+#define NV04_PFB_BOOT_0 0x00100000
+# define NV04_PFB_BOOT_0_RAM_AMOUNT 0x00000003
+# define NV04_PFB_BOOT_0_RAM_AMOUNT_32MB 0x00000000
+# define NV04_PFB_BOOT_0_RAM_AMOUNT_4MB 0x00000001
+# define NV04_PFB_BOOT_0_RAM_AMOUNT_8MB 0x00000002
+# define NV04_PFB_BOOT_0_RAM_AMOUNT_16MB 0x00000003
+# define NV04_PFB_BOOT_0_RAM_WIDTH_128 0x00000004
+# define NV04_PFB_BOOT_0_RAM_TYPE 0x00000028
+# define NV04_PFB_BOOT_0_RAM_TYPE_SGRAM_8MBIT 0x00000000
+# define NV04_PFB_BOOT_0_RAM_TYPE_SGRAM_16MBIT 0x00000008
+# define NV04_PFB_BOOT_0_RAM_TYPE_SGRAM_16MBIT_4BANK 0x00000010
+# define NV04_PFB_BOOT_0_RAM_TYPE_SDRAM_16MBIT 0x00000018
+# define NV04_PFB_BOOT_0_RAM_TYPE_SDRAM_64MBIT 0x00000020
+# define NV04_PFB_BOOT_0_RAM_TYPE_SDRAM_64MBITX16 0x00000028
+# define NV04_PFB_BOOT_0_UMA_ENABLE 0x00000100
+# define NV04_PFB_BOOT_0_UMA_SIZE 0x0000f000
+
+#endif
--- /dev/null
+#ifndef __NOUVEAU_FUSE_H__
+#define __NOUVEAU_FUSE_H__
+
+#include <core/subdev.h>
+#include <core/device.h>
+
+struct nouveau_fuse {
+ struct nouveau_subdev base;
+};
+
+static inline struct nouveau_fuse *
+nouveau_fuse(void *obj)
+{
+ return (void *)nv_device(obj)->subdev[NVDEV_SUBDEV_FUSE];
+}
+
+#define nouveau_fuse_create(p, e, o, d) \
+ nouveau_fuse_create_((p), (e), (o), sizeof(**d), (void **)d)
+
+int nouveau_fuse_create_(struct nouveau_object *, struct nouveau_object *,
+ struct nouveau_oclass *, int, void **);
+void _nouveau_fuse_dtor(struct nouveau_object *);
+int _nouveau_fuse_init(struct nouveau_object *);
+#define _nouveau_fuse_fini _nouveau_subdev_fini
+
+extern struct nouveau_oclass g80_fuse_oclass;
+extern struct nouveau_oclass gf100_fuse_oclass;
+extern struct nouveau_oclass gm107_fuse_oclass;
+
+#endif
extern struct nouveau_oclass *nv10_gpio_oclass;
extern struct nouveau_oclass *nv50_gpio_oclass;
-extern struct nouveau_oclass *nv92_gpio_oclass;
+extern struct nouveau_oclass *nv94_gpio_oclass;
extern struct nouveau_oclass *nvd0_gpio_oclass;
extern struct nouveau_oclass *nve0_gpio_oclass;
void nouveau_memx_wait(struct nouveau_memx *,
u32 addr, u32 mask, u32 data, u32 nsec);
void nouveau_memx_nsec(struct nouveau_memx *, u32 nsec);
+void nouveau_memx_wait_vblank(struct nouveau_memx *);
+void nouveau_memx_block(struct nouveau_memx *);
+void nouveau_memx_unblock(struct nouveau_memx *);
#endif
extern struct nouveau_oclass nv84_therm_oclass;
extern struct nouveau_oclass nva3_therm_oclass;
extern struct nouveau_oclass nvd0_therm_oclass;
+extern struct nouveau_oclass gm107_therm_oclass;
#endif
static int
nouveau_barobj_ctor(struct nouveau_object *parent,
struct nouveau_object *engine,
- struct nouveau_oclass *oclass, void *mem, u32 size,
+ struct nouveau_oclass *oclass, void *data, u32 size,
struct nouveau_object **pobject)
{
+ struct nouveau_device *device = nv_device(parent);
struct nouveau_bar *bar = (void *)engine;
+ struct nouveau_mem *mem = data;
struct nouveau_barobj *barobj;
int ret;
if (ret)
return ret;
- barobj->iomem = bar->iomem + (u32)barobj->vma.offset;
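+ /* mem->size is in GPU pages; << 12 converts to bytes */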
+ barobj->iomem = ioremap(nv_device_resource_start(device, 3) +
+ (u32)barobj->vma.offset, mem->size << 12);
+ if (!barobj->iomem) {
+ nv_warn(bar, "PRAMIN ioremap failed\n");
+ return -ENOMEM;
+ }
+
return 0;
}
{
struct nouveau_bar *bar = (void *)object->engine;
struct nouveau_barobj *barobj = (void *)object;
- if (barobj->vma.node)
+ if (barobj->vma.node) {
+ if (barobj->iomem)
+ iounmap(barobj->iomem);
bar->unmap(bar, &barobj->vma);
+ }
nouveau_object_destroy(&barobj->base);
}
struct nouveau_mem *mem, struct nouveau_object **pobject)
{
struct nouveau_object *engine = nv_object(bar);
- int ret = -ENOMEM;
- if (bar->iomem) {
- ret = nouveau_object_ctor(parent, engine,
- &nouveau_barobj_oclass,
- mem, 0, pobject);
- }
+ struct nouveau_object *gpuobj;
+ int ret = nouveau_object_ctor(parent, engine, &nouveau_barobj_oclass,
+ mem, 0, &gpuobj);
+ if (ret == 0)
+ *pobject = gpuobj;
return ret;
}
struct nouveau_object *engine,
struct nouveau_oclass *oclass, int length, void **pobject)
{
- struct nouveau_device *device = nv_device(parent);
struct nouveau_bar *bar;
int ret;
if (ret)
return ret;
- if (nv_device_resource_len(device, 3) != 0) {
- bar->iomem = ioremap(nv_device_resource_start(device, 3),
- nv_device_resource_len(device, 3));
- if (!bar->iomem)
- nv_warn(bar, "PRAMIN ioremap failed\n");
- }
-
return 0;
}
void
nouveau_bar_destroy(struct nouveau_bar *bar)
{
- if (bar->iomem)
- iounmap(bar->iomem);
nouveau_subdev_destroy(&bar->base);
}
--- /dev/null
+/*
+ * Copyright 2013 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs
+ */
+
+#include <subdev/bios.h>
+#include <subdev/bios/bit.h>
+#include <subdev/bios/M0205.h>
+
+u32
+nvbios_M0205Te(struct nouveau_bios *bios,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len, u8 *snr, u8 *ssz)
+{
+ struct bit_entry bit_M;
+ u32 data = 0x00000000;
+
+ if (!bit_entry(bios, 'M', &bit_M)) {
+ if (bit_M.version == 2 && bit_M.length > 0x08)
+ data = nv_ro32(bios, bit_M.offset + 0x05);
+ if (data) {
+ *ver = nv_ro08(bios, data + 0x00);
+ switch (*ver) {
+ case 0x10:
+ *hdr = nv_ro08(bios, data + 0x01);
+ *len = nv_ro08(bios, data + 0x02);
+ *ssz = nv_ro08(bios, data + 0x03);
+ *snr = nv_ro08(bios, data + 0x04);
+ *cnt = nv_ro08(bios, data + 0x05);
+ return data;
+ default:
+ break;
+ }
+ }
+ }
+
+ return 0x00000000;
+}
+
+u32
+nvbios_M0205Tp(struct nouveau_bios *bios,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len, u8 *snr, u8 *ssz,
+ struct nvbios_M0205T *info)
+{
+ u32 data = nvbios_M0205Te(bios, ver, hdr, cnt, len, snr, ssz);
+ memset(info, 0x00, sizeof(*info));
+ switch (!!data * *ver) {
+ case 0x10:
+ info->freq = nv_ro16(bios, data + 0x06);
+ break;
+ default:
+ break;
+ }
+ return data;
+}
+
+u32
+nvbios_M0205Ee(struct nouveau_bios *bios, int idx,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len)
+{
+ u8 snr, ssz;
+ u32 data = nvbios_M0205Te(bios, ver, hdr, cnt, len, &snr, &ssz);
+ if (data && idx < *cnt) {
+ data = data + *hdr + idx * (*len + (snr * ssz));
+ *hdr = *len;
+ *cnt = snr;
+ *len = ssz;
+ return data;
+ }
+ return 0x00000000;
+}
+
+u32
+nvbios_M0205Ep(struct nouveau_bios *bios, int idx,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len,
+ struct nvbios_M0205E *info)
+{
+ u32 data = nvbios_M0205Ee(bios, idx, ver, hdr, cnt, len);
+ memset(info, 0x00, sizeof(*info));
+ switch (!!data * *ver) {
+ case 0x10:
+ info->type = nv_ro08(bios, data + 0x00) & 0x0f;
+ return data;
+ default:
+ break;
+ }
+ return 0x00000000;
+}
+
+u32
+nvbios_M0205Se(struct nouveau_bios *bios, int ent, int idx, u8 *ver, u8 *hdr)
+{
+ u8 cnt, len;
+ u32 data = nvbios_M0205Ee(bios, ent, ver, hdr, &cnt, &len);
+ if (data && idx < cnt) {
+ data = data + *hdr + idx * len;
+ *hdr = len;
+ return data;
+ }
+ return 0x00000000;
+}
+
+u32
+nvbios_M0205Sp(struct nouveau_bios *bios, int ent, int idx, u8 *ver, u8 *hdr,
+ struct nvbios_M0205S *info)
+{
+ u32 data = nvbios_M0205Se(bios, ent, idx, ver, hdr);
+ memset(info, 0x00, sizeof(*info));
+ switch (!!data * *ver) {
+ case 0x10:
+ info->data = nv_ro08(bios, data + 0x00);
+ return data;
+ default:
+ break;
+ }
+ return 0x00000000;
+}
--- /dev/null
+/*
+ * Copyright 2013 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs
+ */
+
+#include <subdev/bios.h>
+#include <subdev/bios/bit.h>
+#include <subdev/bios/M0209.h>
+
+u32
+nvbios_M0209Te(struct nouveau_bios *bios,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len, u8 *snr, u8 *ssz)
+{
+ struct bit_entry bit_M;
+ u32 data = 0x00000000;
+
+ if (!bit_entry(bios, 'M', &bit_M)) {
+ if (bit_M.version == 2 && bit_M.length > 0x0c)
+ data = nv_ro32(bios, bit_M.offset + 0x09);
+ if (data) {
+ *ver = nv_ro08(bios, data + 0x00);
+ switch (*ver) {
+ case 0x10:
+ *hdr = nv_ro08(bios, data + 0x01);
+ *len = nv_ro08(bios, data + 0x02);
+ *ssz = nv_ro08(bios, data + 0x03);
+ *snr = 1;
+ *cnt = nv_ro08(bios, data + 0x04);
+ return data;
+ default:
+ break;
+ }
+ }
+ }
+
+ return 0x00000000;
+}
+
+u32
+nvbios_M0209Ee(struct nouveau_bios *bios, int idx,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len)
+{
+ u8 snr, ssz;
+ u32 data = nvbios_M0209Te(bios, ver, hdr, cnt, len, &snr, &ssz);
+ if (data && idx < *cnt) {
+ data = data + *hdr + idx * (*len + (snr * ssz));
+ *hdr = *len;
+ *cnt = snr;
+ *len = ssz;
+ return data;
+ }
+ return 0x00000000;
+}
+
+u32
+nvbios_M0209Ep(struct nouveau_bios *bios, int idx,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len,
+ struct nvbios_M0209E *info)
+{
+ u32 data = nvbios_M0209Ee(bios, idx, ver, hdr, cnt, len);
+ memset(info, 0x00, sizeof(*info));
+ switch (!!data * *ver) {
+ case 0x10:
+ info->v00_40 = (nv_ro08(bios, data + 0x00) & 0x40) >> 6;
+ info->bits = nv_ro08(bios, data + 0x00) & 0x3f;
+ info->modulo = nv_ro08(bios, data + 0x01);
+ info->v02_40 = (nv_ro08(bios, data + 0x02) & 0x40) >> 6;
+ info->v02_07 = nv_ro08(bios, data + 0x02) & 0x07;
+ info->v03 = nv_ro08(bios, data + 0x03);
+ return data;
+ default:
+ break;
+ }
+ return 0x00000000;
+}
+
+u32
+nvbios_M0209Se(struct nouveau_bios *bios, int ent, int idx, u8 *ver, u8 *hdr)
+{
+ u8 cnt, len;
+ u32 data = nvbios_M0209Ee(bios, ent, ver, hdr, &cnt, &len);
+ if (data && idx < cnt) {
+ data = data + *hdr + idx * len;
+ *hdr = len;
+ return data;
+ }
+ return 0x00000000;
+}
+
+u32
+nvbios_M0209Sp(struct nouveau_bios *bios, int ent, int idx, u8 *ver, u8 *hdr,
+ struct nvbios_M0209S *info)
+{
+ struct nvbios_M0209E M0209E;
+ u8 cnt, len;
+ u32 data = nvbios_M0209Ep(bios, ent, ver, hdr, &cnt, &len, &M0209E);
+ if (data) {
+ u32 i, data = nvbios_M0209Se(bios, ent, idx, ver, hdr);
+ memset(info, 0x00, sizeof(*info));
+ switch (!!data * *ver) {
+ case 0x10:
+ for (i = 0; i < ARRAY_SIZE(info->data); i++) {
+ u32 bits = (i % M0209E.modulo) * M0209E.bits;
+ u32 mask = (1ULL << M0209E.bits) - 1;
+ u16 off = bits / 8;
+ u8 mod = bits % 8;
+ info->data[i] = nv_ro32(bios, data + off);
+ info->data[i] = info->data[i] >> mod;
+ info->data[i] = info->data[i] & mask;
+ }
+ return data;
+ default:
+ break;
+ }
+ }
+ return 0x00000000;
+}
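For intuition, a worked example of the unpacking loop above under assumed
field values (hypothetical, not taken from a real VBIOS):

    /*
     * Assume M0209E.bits = 2 and M0209E.modulo = 4.  Entry i then reads
     * the 2-bit field at bit position (i % 4) * 2 of the strap data:
     *   i = 0 -> bits 1:0,  i = 1 -> bits 3:2,
     *   i = 2 -> bits 5:4,  i = 3 -> bits 7:6,
     *   i = 4 -> bits 1:0 again, ...
     * so info->data[] is the stored pattern repeated with period modulo.
     */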
struct dcb_output *outp)
{
u16 dcb = dcb_outp(bios, idx, ver, len);
+ memset(outp, 0x00, sizeof(*outp));
if (dcb) {
if (*ver >= 0x20) {
u32 conn = nv_ro32(bios, dcb + 0x00);
--- /dev/null
+/*
+ * Copyright 2014 Martin Peres
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Martin Peres
+ */
+
+#include <subdev/bios.h>
+#include <subdev/bios/bit.h>
+#include <subdev/bios/fan.h>
+
+u16
+nvbios_fan_table(struct nouveau_bios *bios, u8 *ver, u8 *hdr, u8 *cnt, u8 *len)
+{
+ struct bit_entry bit_P;
+ u16 fan = 0x0000;
+
+ if (!bit_entry(bios, 'P', &bit_P)) {
+ if (bit_P.version == 2 && bit_P.length >= 0x5a)
+ fan = nv_ro16(bios, bit_P.offset + 0x58);
+
+ if (fan) {
+ *ver = nv_ro08(bios, fan + 0);
+ switch (*ver) {
+ case 0x10:
+ *hdr = nv_ro08(bios, fan + 1);
+ *len = nv_ro08(bios, fan + 2);
+ *cnt = nv_ro08(bios, fan + 3);
+ return fan;
+ default:
+ break;
+ }
+ }
+ }
+
+ return 0x0000;
+}
+
+u16
+nvbios_fan_entry(struct nouveau_bios *bios, int idx, u8 *ver, u8 *hdr,
+ u8 *cnt, u8 *len)
+{
+ u16 data = nvbios_fan_table(bios, ver, hdr, cnt, len);
+ if (data && idx < *cnt)
+ return data + *hdr + (idx * (*len));
+ return 0x0000;
+}
+
+u16
+nvbios_fan_parse(struct nouveau_bios *bios, struct nvbios_therm_fan *fan)
+{
+ u8 ver, hdr, cnt, len;
+
+ u16 data = nvbios_fan_entry(bios, 0, &ver, &hdr, &cnt, &len);
+ if (data) {
+ u8 type = nv_ro08(bios, data + 0x00);
+ switch (type) {
+ case 0:
+ fan->type = NVBIOS_THERM_FAN_TOGGLE;
+ break;
+ case 1:
+ case 2:
+ /* TODO: Understand the difference between the two! */
+ fan->type = NVBIOS_THERM_FAN_PWM;
+ break;
+ default:
+ fan->type = NVBIOS_THERM_FAN_UNK;
+ }
+
+ fan->min_duty = nv_ro08(bios, data + 0x02);
+ fan->max_duty = nv_ro08(bios, data + 0x03);
+
+ fan->pwm_freq = nv_ro32(bios, data + 0x0b) & 0xffffff;
+ }
+ return data;
+}
}
u32
-nvbios_rammapEm(struct nouveau_bios *bios, u16 khz,
- u8 *ver, u8 *hdr, u8 *cnt, u8 *len)
-{
- int idx = 0;
- u32 data;
- while ((data = nvbios_rammapEe(bios, idx++, ver, hdr, cnt, len))) {
- if (khz >= nv_ro16(bios, data + 0x00) &&
- khz <= nv_ro16(bios, data + 0x02))
- break;
- }
- return data;
-}
-
-u32
-nvbios_rammapEp(struct nouveau_bios *bios, u16 khz,
+nvbios_rammapEp(struct nouveau_bios *bios, int idx,
u8 *ver, u8 *hdr, u8 *cnt, u8 *len,
struct nvbios_ramcfg *p)
{
- u32 data = nvbios_rammapEm(bios, khz, ver, hdr, cnt, len);
+ u32 data = nvbios_rammapEe(bios, idx, ver, hdr, cnt, len), temp;
memset(p, 0x00, sizeof(*p));
+ p->rammap_ver = *ver;
+ p->rammap_hdr = *hdr;
switch (!!data * *ver) {
+ case 0x10:
+ p->rammap_min = nv_ro16(bios, data + 0x00);
+ p->rammap_max = nv_ro16(bios, data + 0x02);
+ p->rammap_10_04_02 = (nv_ro08(bios, data + 0x04) & 0x02) >> 1;
+ p->rammap_10_04_08 = (nv_ro08(bios, data + 0x04) & 0x08) >> 3;
+ break;
case 0x11:
+ p->rammap_min = nv_ro16(bios, data + 0x00);
+ p->rammap_max = nv_ro16(bios, data + 0x02);
p->rammap_11_08_01 = (nv_ro08(bios, data + 0x08) & 0x01) >> 0;
p->rammap_11_08_0c = (nv_ro08(bios, data + 0x08) & 0x0c) >> 2;
p->rammap_11_08_10 = (nv_ro08(bios, data + 0x08) & 0x10) >> 4;
+ temp = nv_ro32(bios, data + 0x09);
+ p->rammap_11_09_01ff = (temp & 0x000001ff) >> 0;
+ p->rammap_11_0a_03fe = (temp & 0x0003fe00) >> 9;
+ p->rammap_11_0a_0400 = (temp & 0x00040000) >> 18;
+ p->rammap_11_0a_0800 = (temp & 0x00080000) >> 19;
+ p->rammap_11_0b_01f0 = (temp & 0x01f00000) >> 20;
+ p->rammap_11_0b_0200 = (temp & 0x02000000) >> 25;
+ p->rammap_11_0b_0400 = (temp & 0x04000000) >> 26;
+ p->rammap_11_0b_0800 = (temp & 0x08000000) >> 27;
+ p->rammap_11_0d = nv_ro08(bios, data + 0x0d);
+ p->rammap_11_0e = nv_ro08(bios, data + 0x0e);
+ p->rammap_11_0f = nv_ro08(bios, data + 0x0f);
p->rammap_11_11_0c = (nv_ro08(bios, data + 0x11) & 0x0c) >> 2;
break;
default:
return data;
}
+u32
+nvbios_rammapEm(struct nouveau_bios *bios, u16 mhz,
+ u8 *ver, u8 *hdr, u8 *cnt, u8 *len,
+ struct nvbios_ramcfg *info)
+{
+ int idx = 0;
+ u32 data;
+ while ((data = nvbios_rammapEp(bios, idx++, ver, hdr, cnt, len, info))) {
+ if (mhz >= info->rammap_min && mhz <= info->rammap_max)
+ break;
+ }
+ return data;
+}
+
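nvbios_rammapEm() is now a linear search over fully parsed entries; a
hypothetical lookup for a 1000 MHz memory clock:

    u8 ver, hdr, cnt, len;
    struct nvbios_ramcfg cfg;
    u32 data = nvbios_rammapEm(bios, 1000, &ver, &hdr, &cnt, &len, &cfg);
    if (data) {
            /* cfg.rammap_min <= 1000 <= cfg.rammap_max holds here */
    }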
u32
nvbios_rammapSe(struct nouveau_bios *bios, u32 data,
u8 ever, u8 ehdr, u8 ecnt, u8 elen, int idx,
u8 *ver, u8 *hdr, struct nvbios_ramcfg *p)
{
data = nvbios_rammapSe(bios, data, ever, ehdr, ecnt, elen, idx, ver, hdr);
+ p->ramcfg_ver = *ver;
+ p->ramcfg_hdr = *hdr;
switch (!!data * *ver) {
+ case 0x10:
+ p->ramcfg_timing = nv_ro08(bios, data + 0x01);
+ p->ramcfg_10_02_01 = (nv_ro08(bios, data + 0x02) & 0x01) >> 0;
+ p->ramcfg_10_02_02 = (nv_ro08(bios, data + 0x02) & 0x02) >> 1;
+ p->ramcfg_10_02_04 = (nv_ro08(bios, data + 0x02) & 0x04) >> 2;
+ p->ramcfg_10_02_08 = (nv_ro08(bios, data + 0x02) & 0x08) >> 3;
+ p->ramcfg_10_02_10 = (nv_ro08(bios, data + 0x02) & 0x10) >> 4;
+ p->ramcfg_10_02_20 = (nv_ro08(bios, data + 0x02) & 0x20) >> 5;
+ p->ramcfg_10_02_40 = (nv_ro08(bios, data + 0x02) & 0x40) >> 6;
+ p->ramcfg_10_03_0f = (nv_ro08(bios, data + 0x03) & 0x0f) >> 0;
+ p->ramcfg_10_05 = (nv_ro08(bios, data + 0x05) & 0xff) >> 0;
+ p->ramcfg_10_06 = (nv_ro08(bios, data + 0x06) & 0xff) >> 0;
+ p->ramcfg_10_07 = (nv_ro08(bios, data + 0x07) & 0xff) >> 0;
+ p->ramcfg_10_08 = (nv_ro08(bios, data + 0x08) & 0xff) >> 0;
+ p->ramcfg_10_09_0f = (nv_ro08(bios, data + 0x09) & 0x0f) >> 0;
+ p->ramcfg_10_09_f0 = (nv_ro08(bios, data + 0x09) & 0xf0) >> 4;
+ break;
case 0x11:
+ p->ramcfg_timing = nv_ro08(bios, data + 0x00);
p->ramcfg_11_01_01 = (nv_ro08(bios, data + 0x01) & 0x01) >> 0;
p->ramcfg_11_01_02 = (nv_ro08(bios, data + 0x01) & 0x02) >> 1;
p->ramcfg_11_01_04 = (nv_ro08(bios, data + 0x01) & 0x04) >> 2;
struct nvbios_ramcfg *p)
{
u16 data = nvbios_timingEe(bios, idx, ver, hdr, cnt, len), temp;
+ p->timing_ver = *ver;
+ p->timing_hdr = *hdr;
switch (!!data * *ver) {
+ case 0x10:
+ p->timing_10_WR = nv_ro08(bios, data + 0x00);
+ p->timing_10_CL = nv_ro08(bios, data + 0x02);
+ p->timing_10_ODT = nv_ro08(bios, data + 0x0e) & 0x07;
+ p->timing_10_CWL = nv_ro08(bios, data + 0x13);
+ break;
case 0x20:
p->timing[0] = nv_ro32(bios, data + 0x00);
p->timing[1] = nv_ro32(bios, data + 0x04);
clk->allow_reclock = allow_reclock;
- ret = nvkm_notify_init(&device->event, nouveau_clock_pwrsrc, true,
+ ret = nvkm_notify_init(NULL, &device->event, nouveau_clock_pwrsrc, true,
NULL, 0, 0, &clk->pwrsrc_ntfy);
if (ret)
return ret;
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
+ * Roy Spliet
*/
+#include <engine/fifo.h>
#include <subdev/bios.h>
#include <subdev/bios/pll.h>
#include <subdev/timer.h>
read_vco(struct nva3_clock_priv *priv, int clk)
{
u32 sctl = nv_rd32(priv, 0x4120 + (clk * 4));
- if ((sctl & 0x00000030) != 0x00000030)
+
+ switch (sctl & 0x00000030) {
+ case 0x00000000:
+ return nv_device(priv)->crystal;
+ case 0x00000020:
return read_pll(priv, 0x41, 0x00e820);
- return read_pll(priv, 0x42, 0x00e8a0);
+ case 0x00000030:
+ return read_pll(priv, 0x42, 0x00e8a0);
+ default:
+ return 0;
+ }
}
static u32
if (!ignore_en && !(sctl & 0x00000100))
return 0;
+ /* out_alt */
+ if (sctl & 0x00000400)
+ return 108000;
+
+ /* vco_out */
switch (sctl & 0x00003000) {
case 0x00000000:
- return nv_device(priv)->crystal;
+ if (!(sctl & 0x00000200))
+ return nv_device(priv)->crystal;
+ return 0;
case 0x00002000:
if (sctl & 0x00000040)
return 108000;
return 100000;
case 0x00003000:
+ /* vco_enable */
+ if (!(sctl & 0x00000001))
+ return 0;
+
sclk = read_vco(priv, clk);
sdiv = ((sctl & 0x003f0000) >> 16) + 2;
return (sclk * 2) / sdiv;
N = (coef & 0x0000ff00) >> 8;
P = (coef & 0x003f0000) >> 16;
- /* no post-divider on these.. */
+ /* no post-divider on these..
+ * XXX: it looks more like two post-"dividers" that
+ * cross each other out in the default RPLL config */
if ((pll & 0x00ff00) == 0x00e800)
P = 1;
nva3_clock_read(struct nouveau_clock *clk, enum nv_clk_src src)
{
struct nva3_clock_priv *priv = (void *)clk;
+ u32 hsrc;
switch (src) {
case nv_clk_src_crystal:
return nv_device(priv)->crystal;
- case nv_clk_src_href:
- return 100000;
case nv_clk_src_core:
+ case nv_clk_src_core_intm:
return read_pll(priv, 0x00, 0x4200);
case nv_clk_src_shader:
return read_pll(priv, 0x01, 0x4220);
return read_clk(priv, 0x21, false);
case nv_clk_src_daemon:
return read_clk(priv, 0x25, false);
+ case nv_clk_src_host:
+ hsrc = (nv_rd32(priv, 0xc040) & 0x30000000) >> 28;
+ switch (hsrc) {
+ case 0:
+ return read_clk(priv, 0x1d, false);
+ case 2:
+ case 3:
+ return 277000;
+ default:
+ nv_error(clk, "unknown HOST clock source %d\n", hsrc);
+ return -EINVAL;
+ }
default:
nv_error(clk, "invalid clock source %d\n", src);
return -EINVAL;
}
+
+ return 0;
}
int
-nva3_clock_info(struct nouveau_clock *clock, int clk, u32 pll, u32 khz,
+nva3_clk_info(struct nouveau_clock *clock, int clk, u32 khz,
struct nva3_clock_info *info)
{
- struct nouveau_bios *bios = nouveau_bios(clock);
struct nva3_clock_priv *priv = (void *)clock;
- struct nvbios_pll limits;
- u32 oclk, sclk, sdiv;
- int P, N, M, diff;
- int ret;
+ u32 oclk, sclk, sdiv;
+ s32 diff;
- info->pll = 0;
info->clk = 0;
switch (khz) {
return khz;
default:
sclk = read_vco(priv, clk);
- sdiv = min((sclk * 2) / (khz - 2999), (u32)65);
- /* if the clock has a PLL attached, and we can get a within
- * [-2, 3) MHz of a divider, we'll disable the PLL and use
- * the divider instead.
- *
- * divider can go as low as 2, limited here because NVIDIA
+ sdiv = min((sclk * 2) / khz, (u32)65);
+ oclk = (sclk * 2) / sdiv;
+ diff = ((khz + 3000) - oclk);
+
+ /* When imprecise, play it safe and aim for a clock lower than
+ * desired rather than higher */
+ if (diff < 0) {
+ sdiv++;
+ oclk = (sclk * 2) / sdiv;
+ }
+
+ /* divider can go as low as 2, limited here because NVIDIA
* and the VBIOS on my NVA8 seem to prefer using the PLL
* for 810MHz - is there a good reason?
- */
+ * XXX: PLLs with refclk 810MHz? */
if (sdiv > 4) {
- oclk = (sclk * 2) / sdiv;
- diff = khz - oclk;
- if (!pll || (diff >= -2000 && diff < 3000)) {
- info->clk = (((sdiv - 2) << 16) | 0x00003100);
- return oclk;
- }
+ info->clk = (((sdiv - 2) << 16) | 0x00003100);
+ return oclk;
}
- if (!pll)
- return -ERANGE;
break;
}
+ return -ERANGE;
+}
+
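A worked example of the divider path above, assuming a hypothetical
1620 MHz VCO (sclk = 1620000):

    /*
     * khz = 405000: sdiv = min(3240000 / 405000, 65) = 8,
     *               oclk = 3240000 / 8 = 405000, diff = 3000 >= 0: keep.
     * khz = 400000: sdiv = 8 gives oclk = 405000, diff = -2000 < 0, so
     *               sdiv is bumped to 9 and oclk = 360000, i.e. lower
     *               than requested, never higher.
     */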
+int
+nva3_pll_info(struct nouveau_clock *clock, int clk, u32 pll, u32 khz,
+ struct nva3_clock_info *info)
+{
+ struct nouveau_bios *bios = nouveau_bios(clock);
+ struct nva3_clock_priv *priv = (void *)clock;
+ struct nvbios_pll limits;
+ int P, N, M, diff;
+ int ret;
+
+ info->pll = 0;
+
+ /* If we can get a within [-2, 3) MHz of a divider, we'll disable the
+ * PLL and use the divider instead. */
+ ret = nva3_clk_info(clock, clk, khz, info);
+ diff = khz - ret;
+ if (!pll || (diff >= -2000 && diff < 3000)) {
+ goto out;
+ }
+
+ /* Try with PLL */
ret = nvbios_pll_parse(bios, pll, &limits);
if (ret)
return ret;
- limits.refclk = read_clk(priv, clk - 0x10, true);
- if (!limits.refclk)
+ ret = nva3_clk_info(clock, clk - 0x10, limits.refclk, info);
+ if (ret != limits.refclk)
return -EINVAL;
ret = nva3_pll_calc(nv_subdev(priv), &limits, khz, &N, NULL, &M, &P);
if (ret >= 0) {
- info->clk = nv_rd32(priv, 0x4120 + (clk * 4));
info->pll = (P << 16) | (N << 8) | M;
}
+out:
+ info->fb_delay = max(((khz + 7566) / 15133), (u32) 18);
+
return ret ? ret : -ERANGE;
}
calc_clk(struct nva3_clock_priv *priv, struct nouveau_cstate *cstate,
int clk, u32 pll, int idx)
{
- int ret = nva3_clock_info(&priv->base, clk, pll, cstate->domain[idx],
+ int ret = nva3_pll_info(&priv->base, clk, pll, cstate->domain[idx],
&priv->eng[idx]);
if (ret >= 0)
return 0;
return ret;
}
+static int
+calc_host(struct nva3_clock_priv *priv, struct nouveau_cstate *cstate)
+{
+ int ret = 0;
+ u32 kHz = cstate->domain[nv_clk_src_host];
+ struct nva3_clock_info *info = &priv->eng[nv_clk_src_host];
+
+ if (kHz == 277000) {
+ info->clk = 0;
+ info->host_out = NVA3_HOST_277;
+ return 0;
+ }
+
+ info->host_out = NVA3_HOST_CLK;
+
+ ret = nva3_clk_info(&priv->base, 0x1d, kHz, info);
+ if (ret >= 0)
+ return 0;
+ return ret;
+}
+
+int
+nva3_clock_pre(struct nouveau_clock *clk, unsigned long *flags)
+{
+ struct nouveau_fifo *pfifo = nouveau_fifo(clk);
+
+ /* halt and idle execution engines */
+ nv_mask(clk, 0x020060, 0x00070000, 0x00000000);
+ nv_mask(clk, 0x002504, 0x00000001, 0x00000001);
+ /* Wait until the interrupt handler is finished */
+ if (!nv_wait(clk, 0x000100, 0xffffffff, 0x00000000))
+ return -EBUSY;
+
+ if (pfifo)
+ pfifo->pause(pfifo, flags);
+
+ if (!nv_wait(clk, 0x002504, 0x00000010, 0x00000010))
+ return -EIO;
+ if (!nv_wait(clk, 0x00251c, 0x0000003f, 0x0000003f))
+ return -EIO;
+
+ return 0;
+}
+
+void
+nva3_clock_post(struct nouveau_clock *clk, unsigned long *flags)
+{
+ struct nouveau_fifo *pfifo = nouveau_fifo(clk);
+
+ if (pfifo && flags)
+ pfifo->start(pfifo, flags);
+
+ nv_mask(clk, 0x002504, 0x00000001, 0x00000000);
+ nv_mask(clk, 0x020060, 0x00070000, 0x00040000);
+}
+
+static void
+disable_clk_src(struct nva3_clock_priv *priv, u32 src)
+{
+ nv_mask(priv, src, 0x00000100, 0x00000000);
+ nv_mask(priv, src, 0x00000001, 0x00000000);
+}
+
static void
prog_pll(struct nva3_clock_priv *priv, int clk, u32 pll, int idx)
{
const u32 src1 = 0x004160 + (clk * 4);
const u32 ctrl = pll + 0;
const u32 coef = pll + 4;
+ u32 bypass;
if (info->pll) {
- nv_mask(priv, src0, 0x00000101, 0x00000101);
+ /* Always start from a non-PLL clock */
+ bypass = nv_rd32(priv, ctrl) & 0x00000008;
+ if (!bypass) {
+ nv_mask(priv, src1, 0x00000101, 0x00000101);
+ nv_mask(priv, ctrl, 0x00000008, 0x00000008);
+ udelay(20);
+ }
+
+ nv_mask(priv, src0, 0x003f3141, 0x00000101 | info->clk);
nv_wr32(priv, coef, info->pll);
nv_mask(priv, ctrl, 0x00000015, 0x00000015);
nv_mask(priv, ctrl, 0x00000010, 0x00000000);
- nv_wait(priv, ctrl, 0x00020000, 0x00020000);
+ if (!nv_wait(priv, ctrl, 0x00020000, 0x00020000)) {
+ nv_mask(priv, ctrl, 0x00000010, 0x00000010);
+ nv_mask(priv, src0, 0x00000101, 0x00000000);
+ return;
+ }
nv_mask(priv, ctrl, 0x00000010, 0x00000010);
nv_mask(priv, ctrl, 0x00000008, 0x00000000);
- nv_mask(priv, src1, 0x00000100, 0x00000000);
- nv_mask(priv, src1, 0x00000001, 0x00000000);
+ disable_clk_src(priv, src1);
} else {
nv_mask(priv, src1, 0x003f3141, 0x00000101 | info->clk);
nv_mask(priv, ctrl, 0x00000018, 0x00000018);
udelay(20);
nv_mask(priv, ctrl, 0x00000001, 0x00000000);
- nv_mask(priv, src0, 0x00000100, 0x00000000);
- nv_mask(priv, src0, 0x00000001, 0x00000000);
+ disable_clk_src(priv, src0);
}
}
nv_mask(priv, 0x004120 + (clk * 4), 0x003f3141, 0x00000101 | info->clk);
}
+static void
+prog_host(struct nva3_clock_priv *priv)
+{
+ struct nva3_clock_info *info = &priv->eng[nv_clk_src_host];
+ u32 hsrc = nv_rd32(priv, 0xc040);
+
+ switch (info->host_out) {
+ case NVA3_HOST_277:
+ if ((hsrc & 0x30000000) == 0) {
+ nv_wr32(priv, 0xc040, hsrc | 0x20000000);
+ disable_clk_src(priv, 0x4194);
+ }
+ break;
+ case NVA3_HOST_CLK:
+ prog_clk(priv, 0x1d, nv_clk_src_host);
+ if ((hsrc & 0x30000000) >= 0x20000000) {
+ nv_wr32(priv, 0xc040, hsrc & ~0x30000000);
+ }
+ break;
+ default:
+ break;
+ }
+
+ /* This seems to be a clock gating factor on idle; always written as 0x3e here */
+ nv_wr32(priv, 0xc044, 0x3e);
+}
+
+static void
+prog_core(struct nva3_clock_priv *priv, int idx)
+{
+ struct nva3_clock_info *info = &priv->eng[idx];
+ u32 fb_delay = nv_rd32(priv, 0x10002c);
+
+ if (fb_delay < info->fb_delay)
+ nv_wr32(priv, 0x10002c, info->fb_delay);
+
+ prog_pll(priv, 0x00, 0x004200, idx);
+
+ if (fb_delay > info->fb_delay)
+ nv_wr32(priv, 0x10002c, info->fb_delay);
+}
+
static int
nva3_clock_calc(struct nouveau_clock *clk, struct nouveau_cstate *cstate)
{
struct nva3_clock_priv *priv = (void *)clk;
+ struct nva3_clock_info *core = &priv->eng[nv_clk_src_core];
int ret;
if ((ret = calc_clk(priv, cstate, 0x10, 0x4200, nv_clk_src_core)) ||
(ret = calc_clk(priv, cstate, 0x11, 0x4220, nv_clk_src_shader)) ||
(ret = calc_clk(priv, cstate, 0x20, 0x0000, nv_clk_src_disp)) ||
- (ret = calc_clk(priv, cstate, 0x21, 0x0000, nv_clk_src_vdec)))
+ (ret = calc_clk(priv, cstate, 0x21, 0x0000, nv_clk_src_vdec)) ||
+ (ret = calc_host(priv, cstate)))
return ret;
+ /* XXX: Should be reading the highest bit in the VBIOS clock to decide
+ * whether to use a PLL or not... but using a PLL defeats the purpose */
+ if (core->pll) {
+ ret = nva3_clk_info(clk, 0x10,
+ cstate->domain[nv_clk_src_core_intm],
+ &priv->eng[nv_clk_src_core_intm]);
+ if (ret < 0)
+ return ret;
+ }
+
return 0;
}
nva3_clock_prog(struct nouveau_clock *clk)
{
struct nva3_clock_priv *priv = (void *)clk;
- prog_pll(priv, 0x00, 0x004200, nv_clk_src_core);
+ struct nva3_clock_info *core = &priv->eng[nv_clk_src_core];
+ int ret = 0;
+ unsigned long flags;
+ unsigned long *f = &flags;
+
+ ret = nva3_clock_pre(clk, f);
+ if (ret)
+ goto out;
+
+ if (core->pll)
+ prog_core(priv, nv_clk_src_core_intm);
+
+ prog_core(priv, nv_clk_src_core);
prog_pll(priv, 0x01, 0x004220, nv_clk_src_shader);
prog_clk(priv, 0x20, nv_clk_src_disp);
prog_clk(priv, 0x21, nv_clk_src_vdec);
- return 0;
+ prog_host(priv);
+
+out:
+ if (ret == -EBUSY)
+ f = NULL;
+
+ nva3_clock_post(clk, f);
+
+ return ret;
}
static void
static struct nouveau_clocks
nva3_domain[] = {
- { nv_clk_src_crystal, 0xff },
- { nv_clk_src_href , 0xff },
- { nv_clk_src_core , 0x00, 0, "core", 1000 },
- { nv_clk_src_shader , 0x01, 0, "shader", 1000 },
- { nv_clk_src_mem , 0x02, 0, "memory", 1000 },
- { nv_clk_src_vdec , 0x03 },
- { nv_clk_src_disp , 0x04 },
+ { nv_clk_src_crystal , 0xff },
+ { nv_clk_src_core , 0x00, 0, "core", 1000 },
+ { nv_clk_src_shader , 0x01, 0, "shader", 1000 },
+ { nv_clk_src_mem , 0x02, 0, "memory", 1000 },
+ { nv_clk_src_vdec , 0x03 },
+ { nv_clk_src_disp , 0x04 },
+ { nv_clk_src_host , 0x05 },
+ { nv_clk_src_core_intm, 0x06 },
{ nv_clk_src_max }
};
struct nva3_clock_info {
u32 clk;
u32 pll;
+ enum {
+ NVA3_HOST_277,
+ NVA3_HOST_CLK,
+ } host_out;
+ u32 fb_delay;
};
-int nva3_clock_info(struct nouveau_clock *, int, u32, u32,
+int nva3_pll_info(struct nouveau_clock *, int, u32, u32,
struct nva3_clock_info *);
-
+int nva3_clock_pre(struct nouveau_clock *clk, unsigned long *flags);
+void nva3_clock_post(struct nouveau_clock *clk, unsigned long *flags);
#endif
#include <subdev/timer.h>
#include <subdev/clock.h>
+#include "nva3.h"
#include "pll.h"
struct nvaa_clock_priv {
nvaa_clock_prog(struct nouveau_clock *clk)
{
struct nvaa_clock_priv *priv = (void *)clk;
- struct nouveau_fifo *pfifo = nouveau_fifo(clk);
+ u32 pllmask = 0, mast;
unsigned long flags;
- u32 pllmask = 0, mast, ptherm_gate;
- int ret = -EBUSY;
-
- /* halt and idle execution engines */
- ptherm_gate = nv_mask(clk, 0x020060, 0x00070000, 0x00000000);
- nv_mask(clk, 0x002504, 0x00000001, 0x00000001);
- /* Wait until the interrupt handler is finished */
- if (!nv_wait(clk, 0x000100, 0xffffffff, 0x00000000))
- goto resume;
-
- if (pfifo)
- pfifo->pause(pfifo, &flags);
+ unsigned long *f = &flags;
+ int ret = 0;
- if (!nv_wait(clk, 0x002504, 0x00000010, 0x00000010))
- goto resume;
- if (!nv_wait(clk, 0x00251c, 0x0000003f, 0x0000003f))
- goto resume;
+ ret = nva3_clock_pre(clk, f);
+ if (ret)
+ goto out;
/* First switch to safe clocks: href */
mast = nv_mask(clk, 0xc054, 0x03400e70, 0x03400640);
}
nv_wr32(clk, 0xc054, mast);
- ret = 0;
resume:
- if (pfifo)
- pfifo->start(pfifo, &flags);
-
- nv_mask(clk, 0x002504, 0x00000001, 0x00000000);
- nv_wr32(clk, 0x020060, ptherm_gate);
-
/* Disable some PLLs and dividers when unused */
if (priv->csrc != nv_clk_src_core) {
nv_wr32(clk, 0x4040, 0x00000000);
nv_mask(clk, 0x4020, 0x80000000, 0x00000000);
}
+out:
+ if (ret == -EBUSY)
+ f = NULL;
+
+ nva3_clock_post(clk, f);
+
return ret;
}
#include <core/device.h>
-#define NV04_PFB_BOOT_0 0x00100000
-# define NV04_PFB_BOOT_0_RAM_AMOUNT 0x00000003
-# define NV04_PFB_BOOT_0_RAM_AMOUNT_32MB 0x00000000
-# define NV04_PFB_BOOT_0_RAM_AMOUNT_4MB 0x00000001
-# define NV04_PFB_BOOT_0_RAM_AMOUNT_8MB 0x00000002
-# define NV04_PFB_BOOT_0_RAM_AMOUNT_16MB 0x00000003
-# define NV04_PFB_BOOT_0_RAM_WIDTH_128 0x00000004
-# define NV04_PFB_BOOT_0_RAM_TYPE 0x00000028
-# define NV04_PFB_BOOT_0_RAM_TYPE_SGRAM_8MBIT 0x00000000
-# define NV04_PFB_BOOT_0_RAM_TYPE_SGRAM_16MBIT 0x00000008
-# define NV04_PFB_BOOT_0_RAM_TYPE_SGRAM_16MBIT_4BANK 0x00000010
-# define NV04_PFB_BOOT_0_RAM_TYPE_SDRAM_16MBIT 0x00000018
-# define NV04_PFB_BOOT_0_RAM_TYPE_SDRAM_64MBIT 0x00000020
-# define NV04_PFB_BOOT_0_RAM_TYPE_SDRAM_64MBITX16 0x00000028
-# define NV04_PFB_BOOT_0_UMA_ENABLE 0x00000100
-# define NV04_PFB_BOOT_0_UMA_SIZE 0x0000f000
+#include <subdev/fb/regsnv04.h>
+
#define NV04_PFB_DEBUG_0 0x00100080
# define NV04_PFB_DEBUG_0_PAGE_MODE 0x00000001
# define NV04_PFB_DEBUG_0_REFRESH_OFF 0x00000010
int WL, CL, WR, at[2], dt, ds;
int rq = ram->freq < 1000000; /* XXX */
- switch (ram->ramcfg.version) {
+ switch (ram->next->bios.ramcfg_ver) {
case 0x11:
pd = ram->next->bios.ramcfg_11_01_80;
lf = ram->next->bios.ramcfg_11_01_40;
return -ENOSYS;
}
- switch (ram->timing.version) {
+ switch (ram->next->bios.timing_ver) {
case 0x20:
WL = (ram->next->bios.timing[1] & 0x00000f80) >> 7;
CL = (ram->next->bios.timing[1] & 0x0000001f);
{
u32 tiles = DIV_ROUND_UP(size, 0x40);
u32 tags = round_up(tiles / pfb->ram->parts, 0x40);
- if (!nouveau_mm_head(&pfb->tags, 1, tags, tags, 1, &tile->tag)) {
+ if (!nouveau_mm_head(&pfb->tags, 0, 1, tags, tags, 1, &tile->tag)) {
if (!(flags & 2)) tile->zcomp = 0x00000000; /* Z16 */
else tile->zcomp = 0x04000000; /* Z24S8 */
tile->zcomp |= tile->tag->offset;
{
u32 tiles = DIV_ROUND_UP(size, 0x40);
u32 tags = round_up(tiles / pfb->ram->parts, 0x40);
- if (!nouveau_mm_head(&pfb->tags, 1, tags, tags, 1, &tile->tag)) {
+ if (!nouveau_mm_head(&pfb->tags, 0, 1, tags, tags, 1, &tile->tag)) {
if (!(flags & 2)) tile->zcomp = 0x00100000; /* Z16 */
else tile->zcomp = 0x00200000; /* Z24S8 */
tile->zcomp |= tile->tag->offset;
{
u32 tiles = DIV_ROUND_UP(size, 0x40);
u32 tags = round_up(tiles / pfb->ram->parts, 0x40);
- if (!nouveau_mm_head(&pfb->tags, 1, tags, tags, 1, &tile->tag)) {
+ if (!nouveau_mm_head(&pfb->tags, 0, 1, tags, tags, 1, &tile->tag)) {
if (flags & 2) tile->zcomp |= 0x01000000; /* Z16 */
else tile->zcomp |= 0x02000000; /* Z24S8 */
tile->zcomp |= ((tile->tag->offset ) >> 6);
{
u32 tiles = DIV_ROUND_UP(size, 0x40);
u32 tags = round_up(tiles / pfb->ram->parts, 0x40);
- if (!nouveau_mm_head(&pfb->tags, 1, tags, tags, 1, &tile->tag)) {
+ if (!nouveau_mm_head(&pfb->tags, 0, 1, tags, tags, 1, &tile->tag)) {
if (flags & 2) tile->zcomp |= 0x04000000; /* Z16 */
else tile->zcomp |= 0x08000000; /* Z24S8 */
tile->zcomp |= ((tile->tag->offset ) >> 6);
{
u32 tiles = DIV_ROUND_UP(size, 0x40);
u32 tags = round_up(tiles / pfb->ram->parts, 0x40);
- if (!nouveau_mm_head(&pfb->tags, 1, tags, tags, 1, &tile->tag)) {
+ if (!nouveau_mm_head(&pfb->tags, 0, 1, tags, tags, 1, &tile->tag)) {
if (flags & 2) tile->zcomp |= 0x10000000; /* Z16 */
else tile->zcomp |= 0x20000000; /* Z24S8 */
tile->zcomp |= ((tile->tag->offset ) >> 6);
u32 tiles = DIV_ROUND_UP(size, 0x80);
u32 tags = round_up(tiles / pfb->ram->parts, 0x100);
if ( (flags & 2) &&
- !nouveau_mm_head(&pfb->tags, 1, tags, tags, 1, &tile->tag)) {
+ !nouveau_mm_head(&pfb->tags, 0, 1, tags, tags, 1, &tile->tag)) {
tile->zcomp = 0x28000000; /* Z24S8_SPLIT_GRAD */
tile->zcomp |= ((tile->tag->offset ) >> 8);
tile->zcomp |= ((tile->tag->offset + tags - 1) >> 8) << 13;
extern struct nouveau_oclass gk20a_ram_oclass;
extern struct nouveau_oclass gm107_ram_oclass;
+int nouveau_sddr2_calc(struct nouveau_ram *ram);
int nouveau_sddr3_calc(struct nouveau_ram *ram);
int nouveau_gddr5_calc(struct nouveau_ram *ram, bool nuts);
struct ramfuc_reg {
int sequence;
bool force;
- u32 addr[2];
+ u32 addr;
+ u32 stride; /* in bytes */
+ u32 mask;
u32 data;
};
+static inline struct ramfuc_reg
+ramfuc_stride(u32 addr, u32 stride, u32 mask)
+{
+ return (struct ramfuc_reg) {
+ .sequence = 0,
+ .addr = addr,
+ .stride = stride,
+ .mask = mask,
+ .data = 0xdeadbeef,
+ };
+}
+
static inline struct ramfuc_reg
ramfuc_reg2(u32 addr1, u32 addr2)
{
return (struct ramfuc_reg) {
.sequence = 0,
- .addr = { addr1, addr2 },
+ .addr = addr1,
+ .stride = addr2 - addr1,
+ .mask = 0x3,
.data = 0xdeadbeef,
};
}
static noinline struct ramfuc_reg
ramfuc_reg(u32 addr)
{
- return ramfuc_reg2(addr, addr);
+ return (struct ramfuc_reg) {
+ .sequence = 0,
+ .addr = addr,
+ .stride = 0,
+ .mask = 0x1,
+ .data = 0xdeadbeef,
+ };
}
static inline int
ramfuc_rd32(struct ramfuc *ram, struct ramfuc_reg *reg)
{
if (reg->sequence != ram->sequence)
- reg->data = nv_rd32(ram->pfb, reg->addr[0]);
+ reg->data = nv_rd32(ram->pfb, reg->addr);
return reg->data;
}
static inline void
ramfuc_wr32(struct ramfuc *ram, struct ramfuc_reg *reg, u32 data)
{
+ unsigned int mask, off = 0;
+
reg->sequence = ram->sequence;
reg->data = data;
- if (reg->addr[0] != reg->addr[1])
- nouveau_memx_wr32(ram->memx, reg->addr[1], reg->data);
- nouveau_memx_wr32(ram->memx, reg->addr[0], reg->data);
+
+ for (mask = reg->mask; mask > 0; mask = (mask & ~1) >> 1) {
+ if (mask & 1) {
+ nouveau_memx_wr32(ram->memx, reg->addr+off, reg->data);
+ }
+
+ off += reg->stride;
+ }
}
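The reworked ramfuc_reg now describes a whole family of registers: a write lands at addr + i * stride for every set bit i in mask, so ramfuc_reg() (mask 0x1) is a single register, ramfuc_reg2() (mask 0x3) the old two-register pair, and ramfuc_stride() an arbitrary set such as one copy per memory partition. A stand-alone C sketch (hypothetical expand() helper) of the expansion performed by ramfuc_wr32():

#include <stdio.h>
#include <stdint.h>

/* print the registers an (addr, stride, mask) tuple expands to,
 * mirroring the loop in ramfuc_wr32() above */
static void expand(uint32_t addr, uint32_t stride, uint32_t mask)
{
	uint32_t off = 0;

	for (; mask; mask = (mask & ~1) >> 1) {
		if (mask & 1)
			printf("write 0x%06x\n", (unsigned)(addr + off));
		off += stride;
	}
}

int main(void)
{
	expand(0x100760, 4, 0x5);	/* e.g. a part_mask with parts 0 and 2 */
	return 0;
}

With ram->base.part_mask as the mask, as in the nva3 constructor further down, the 0x100760/0x1007a0/0x1007e0 writes automatically reach only the fitted partitions.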
static inline void
nouveau_memx_nsec(ram->memx, nsec);
}
-#define ram_init(s,p) ramfuc_init(&(s)->base, (p))
-#define ram_exec(s,e) ramfuc_exec(&(s)->base, (e))
-#define ram_have(s,r) ((s)->r_##r.addr[0] != 0x000000)
-#define ram_rd32(s,r) ramfuc_rd32(&(s)->base, &(s)->r_##r)
-#define ram_wr32(s,r,d) ramfuc_wr32(&(s)->base, &(s)->r_##r, (d))
-#define ram_nuke(s,r) ramfuc_nuke(&(s)->base, &(s)->r_##r)
-#define ram_mask(s,r,m,d) ramfuc_mask(&(s)->base, &(s)->r_##r, (m), (d))
-#define ram_wait(s,r,m,d,n) ramfuc_wait(&(s)->base, (r), (m), (d), (n))
-#define ram_nsec(s,n) ramfuc_nsec(&(s)->base, (n))
+static inline void
+ramfuc_wait_vblank(struct ramfuc *ram)
+{
+ nouveau_memx_wait_vblank(ram->memx);
+}
+
+static inline void
+ramfuc_block(struct ramfuc *ram)
+{
+ nouveau_memx_block(ram->memx);
+}
+
+static inline void
+ramfuc_unblock(struct ramfuc *ram)
+{
+ nouveau_memx_unblock(ram->memx);
+}
+
+#define ram_init(s,p) ramfuc_init(&(s)->base, (p))
+#define ram_exec(s,e) ramfuc_exec(&(s)->base, (e))
+#define ram_have(s,r) ((s)->r_##r.addr != 0x000000)
+#define ram_rd32(s,r) ramfuc_rd32(&(s)->base, &(s)->r_##r)
+#define ram_wr32(s,r,d) ramfuc_wr32(&(s)->base, &(s)->r_##r, (d))
+#define ram_nuke(s,r) ramfuc_nuke(&(s)->base, &(s)->r_##r)
+#define ram_mask(s,r,m,d) ramfuc_mask(&(s)->base, &(s)->r_##r, (m), (d))
+#define ram_wait(s,r,m,d,n) ramfuc_wait(&(s)->base, (r), (m), (d), (n))
+#define ram_nsec(s,n) ramfuc_nsec(&(s)->base, (n))
+#define ram_wait_vblank(s) ramfuc_wait_vblank(&(s)->base)
+#define ram_block(s) ramfuc_block(&(s)->base)
+#define ram_unblock(s) ramfuc_unblock(&(s)->base)
#endif
* Authors: Ben Skeggs
*/
-#define NV04_PFB_BOOT_0 0x00100000
-# define NV04_PFB_BOOT_0_RAM_AMOUNT 0x00000003
-# define NV04_PFB_BOOT_0_RAM_AMOUNT_32MB 0x00000000
-# define NV04_PFB_BOOT_0_RAM_AMOUNT_4MB 0x00000001
-# define NV04_PFB_BOOT_0_RAM_AMOUNT_8MB 0x00000002
-# define NV04_PFB_BOOT_0_RAM_AMOUNT_16MB 0x00000003
-# define NV04_PFB_BOOT_0_RAM_WIDTH_128 0x00000004
-# define NV04_PFB_BOOT_0_RAM_TYPE 0x00000028
-# define NV04_PFB_BOOT_0_RAM_TYPE_SGRAM_8MBIT 0x00000000
-# define NV04_PFB_BOOT_0_RAM_TYPE_SGRAM_16MBIT 0x00000008
-# define NV04_PFB_BOOT_0_RAM_TYPE_SGRAM_16MBIT_4BANK 0x00000010
-# define NV04_PFB_BOOT_0_RAM_TYPE_SDRAM_16MBIT 0x00000018
-# define NV04_PFB_BOOT_0_RAM_TYPE_SDRAM_64MBIT 0x00000020
-# define NV04_PFB_BOOT_0_RAM_TYPE_SDRAM_64MBITX16 0x00000028
-# define NV04_PFB_BOOT_0_UMA_ENABLE 0x00000100
-# define NV04_PFB_BOOT_0_UMA_SIZE 0x0000f000
+#include <subdev/fb/regsnv04.h>
#include "priv.h"
if (align == 16) {
int n = (max >> 4) * comp;
- ret = nouveau_mm_head(tags, 1, n, n, 1, &mem->tag);
+ ret = nouveau_mm_head(tags, 0, 1, n, n, 1, &mem->tag);
if (ret)
mem->tag = NULL;
}
type = nv50_fb_memtype[type];
do {
if (back)
- ret = nouveau_mm_tail(heap, type, max, min, align, &r);
+ ret = nouveau_mm_tail(heap, 0, type, max, min, align, &r);
else
- ret = nouveau_mm_head(heap, type, max, min, align, &r);
+ ret = nouveau_mm_head(heap, 0, type, max, min, align, &r);
if (ret) {
mutex_unlock(&pfb->base.mutex);
pfb->ram->put(pfb, &mem);
static u32
nv50_fb_vram_rblock(struct nouveau_fb *pfb, struct nouveau_ram *ram)
{
- int i, parts, colbits, rowbitsa, rowbitsb, banks;
+ int colbits, rowbitsa, rowbitsb, banks;
u64 rowsize, predicted;
- u32 r0, r4, rt, ru, rblock_size;
+ u32 r0, r4, rt, rblock_size;
r0 = nv_rd32(pfb, 0x100200);
r4 = nv_rd32(pfb, 0x100204);
rt = nv_rd32(pfb, 0x100250);
- ru = nv_rd32(pfb, 0x001540);
- nv_debug(pfb, "memcfg 0x%08x 0x%08x 0x%08x 0x%08x\n", r0, r4, rt, ru);
-
- for (i = 0, parts = 0; i < 8; i++) {
- if (ru & (0x00010000 << i))
- parts++;
- }
+ nv_debug(pfb, "memcfg 0x%08x 0x%08x 0x%08x 0x%08x\n", r0, r4, rt,
+ nv_rd32(pfb, 0x001540));
colbits = (r4 & 0x0000f000) >> 12;
rowbitsa = ((r4 & 0x000f0000) >> 16) + 8;
rowbitsb = ((r4 & 0x00f00000) >> 20) + 8;
banks = 1 << (((r4 & 0x03000000) >> 24) + 2);
- rowsize = parts * banks * (1 << colbits) * 8;
+ rowsize = ram->parts * banks * (1 << colbits) * 8;
predicted = rowsize << rowbitsa;
if (r0 & 0x00000004)
predicted += rowsize << rowbitsb;
ram->size = nv_rd32(pfb, 0x10020c);
ram->size = (ram->size & 0xffffff00) | ((ram->size & 0x000000ff) << 32);
+ ram->part_mask = (nv_rd32(pfb, 0x001540) & 0x00ff0000) >> 16;
+ ram->parts = hweight8(ram->part_mask);
+
switch (nv_rd32(pfb, 0x100714) & 0x00000007) {
case 0: ram->type = NV_MEM_TYPE_DDR1; break;
case 1:
struct nva3_ram *ram = (void *)pfb->ram;
struct nva3_ramfuc *fuc = &ram->fuc;
struct nva3_clock_info mclk;
- u8 ver, cnt, len, strap;
+ struct nouveau_ram_data *next;
+ u8 ver, hdr, cnt, len, strap;
u32 data;
- struct {
- u32 data;
- u8 size;
- } rammap, ramcfg, timing;
u32 r004018, r100760, ctrl;
u32 unk714, unk718, unk71c;
- int ret;
+ int ret, i;
+
+ next = &ram->base.target;
+ next->freq = freq;
+ ram->base.next = next;
/* lookup memory config data relevant to the target frequency */
- rammap.data = nvbios_rammapEm(bios, freq / 1000, &ver, &rammap.size,
- &cnt, &ramcfg.size);
- if (!rammap.data || ver != 0x10 || rammap.size < 0x0e) {
+ i = 0;
+ while ((data = nvbios_rammapEp(bios, i++, &ver, &hdr, &cnt, &len,
+ &next->bios))) {
+ if (freq / 1000 >= next->bios.rammap_min &&
+ freq / 1000 <= next->bios.rammap_max)
+ break;
+ }
+
+ if (!data || ver != 0x10 || hdr < 0x0e) {
nv_error(pfb, "invalid/missing rammap entry\n");
return -EINVAL;
}
return -EINVAL;
}
- ramcfg.data = rammap.data + rammap.size + (strap * ramcfg.size);
- if (!ramcfg.data || ver != 0x10 || ramcfg.size < 0x0e) {
+ data = nvbios_rammapSp(bios, data, ver, hdr, cnt, len, strap,
+ &ver, &hdr, &next->bios);
+ if (!data || ver != 0x10 || hdr < 0x0e) {
nv_error(pfb, "invalid/missing ramcfg entry\n");
return -EINVAL;
}
/* lookup memory timings, if bios says they're present */
- strap = nv_ro08(bios, ramcfg.data + 0x01);
- if (strap != 0xff) {
- timing.data = nvbios_timingEe(bios, strap, &ver, &timing.size,
- &cnt, &len);
- if (!timing.data || ver != 0x10 || timing.size < 0x19) {
+ if (next->bios.ramcfg_timing != 0xff) {
+ data = nvbios_timingEp(bios, next->bios.ramcfg_timing,
+ &ver, &hdr, &cnt, &len,
+ &next->bios);
+ if (!data || ver != 0x10 || hdr < 0x19) {
nv_error(pfb, "invalid/missing timing entry\n");
return -EINVAL;
}
- } else {
- timing.data = 0;
}
- ret = nva3_clock_info(nouveau_clock(pfb), 0x12, 0x4000, freq, &mclk);
+ ret = nva3_pll_info(nouveau_clock(pfb), 0x12, 0x4000, freq, &mclk);
if (ret < 0) {
nv_error(pfb, "failed mclk calculation\n");
return ret;
ram_mask(fuc, 0x004168, 0x003f3141, ctrl);
}
- if ( (nv_ro08(bios, ramcfg.data + 0x02) & 0x10)) {
+ if (next->bios.ramcfg_10_02_10) {
ram_mask(fuc, 0x111104, 0x00000600, 0x00000000);
} else {
ram_mask(fuc, 0x111100, 0x40000000, 0x40000000);
ram_mask(fuc, 0x111104, 0x00000180, 0x00000000);
}
- if (!(nv_ro08(bios, rammap.data + 0x04) & 0x02))
+ if (!next->bios.rammap_10_04_02)
ram_mask(fuc, 0x100200, 0x00000800, 0x00000000);
ram_wr32(fuc, 0x611200, 0x00003300);
- if (!(nv_ro08(bios, ramcfg.data + 0x02) & 0x10))
+ if (!next->bios.ramcfg_10_02_10)
ram_wr32(fuc, 0x111100, 0x4c020000); /*XXX*/
ram_wr32(fuc, 0x1002d4, 0x00000001);
ram_wr32(fuc, 0x004018, 0x0000d000 | r004018);
}
- if ( (nv_ro08(bios, rammap.data + 0x04) & 0x08)) {
- u32 unk5a0 = (nv_ro16(bios, ramcfg.data + 0x05) << 8) |
- nv_ro08(bios, ramcfg.data + 0x05);
- u32 unk5a4 = (nv_ro16(bios, ramcfg.data + 0x07));
- u32 unk804 = (nv_ro08(bios, ramcfg.data + 0x09) & 0xf0) << 16 |
- (nv_ro08(bios, ramcfg.data + 0x03) & 0x0f) << 16 |
- (nv_ro08(bios, ramcfg.data + 0x09) & 0x0f) |
- 0x80000000;
- ram_wr32(fuc, 0x1005a0, unk5a0);
- ram_wr32(fuc, 0x1005a4, unk5a4);
- ram_wr32(fuc, 0x10f804, unk804);
+ if (next->bios.rammap_10_04_08) {
+ ram_wr32(fuc, 0x1005a0, next->bios.ramcfg_10_06 << 16 |
+ next->bios.ramcfg_10_05 << 8 |
+ next->bios.ramcfg_10_05);
+ ram_wr32(fuc, 0x1005a4, next->bios.ramcfg_10_08 << 8 |
+ next->bios.ramcfg_10_07);
+ ram_wr32(fuc, 0x10f804, next->bios.ramcfg_10_09_f0 << 20 |
+ next->bios.ramcfg_10_03_0f << 16 |
+ next->bios.ramcfg_10_09_0f |
+ 0x80000000);
ram_mask(fuc, 0x10053c, 0x00001000, 0x00000000);
} else {
ram_mask(fuc, 0x10053c, 0x00001000, 0x00001000);
ram_mask(fuc, 0x100220[0], 0x00000000, 0x00000000);
ram_mask(fuc, 0x100220[8], 0x00000000, 0x00000000);
- data = (nv_ro08(bios, ramcfg.data + 0x02) & 0x08) ? 0x00000000 : 0x00001000;
- ram_mask(fuc, 0x100200, 0x00001000, data);
+ ram_mask(fuc, 0x100200, 0x00001000, !next->bios.ramcfg_10_02_08 << 12);
unk714 = ram_rd32(fuc, 0x100714) & ~0xf0000010;
unk718 = ram_rd32(fuc, 0x100718) & ~0x00000100;
unk71c = ram_rd32(fuc, 0x10071c) & ~0x00000100;
- if ( (nv_ro08(bios, ramcfg.data + 0x02) & 0x20))
+ if (next->bios.ramcfg_10_02_20)
unk714 |= 0xf0000000;
- if (!(nv_ro08(bios, ramcfg.data + 0x02) & 0x04))
+ if (!next->bios.ramcfg_10_02_04)
unk714 |= 0x00000010;
ram_wr32(fuc, 0x100714, unk714);
- if (nv_ro08(bios, ramcfg.data + 0x02) & 0x01)
+ if (next->bios.ramcfg_10_02_01)
unk71c |= 0x00000100;
ram_wr32(fuc, 0x10071c, unk71c);
- if (nv_ro08(bios, ramcfg.data + 0x02) & 0x02)
+ if (next->bios.ramcfg_10_02_02)
unk718 |= 0x00000100;
ram_wr32(fuc, 0x100718, unk718);
- if (nv_ro08(bios, ramcfg.data + 0x02) & 0x10)
+ if (next->bios.ramcfg_10_02_10)
ram_wr32(fuc, 0x111100, 0x48000000); /*XXX*/
ram_mask(fuc, mr[0], 0x100, 0x100);
ram_nsec(fuc, 12000);
ram_wr32(fuc, 0x611200, 0x00003330);
- if ( (nv_ro08(bios, rammap.data + 0x04) & 0x02))
+ if (next->bios.rammap_10_04_02)
ram_mask(fuc, 0x100200, 0x00000800, 0x00000800);
- if ( (nv_ro08(bios, ramcfg.data + 0x02) & 0x10)) {
+ if (next->bios.ramcfg_10_02_10) {
ram_mask(fuc, 0x111104, 0x00000180, 0x00000180);
ram_mask(fuc, 0x111100, 0x40000000, 0x00000000);
} else {
ram->fuc.r_0x100714 = ramfuc_reg(0x100714);
ram->fuc.r_0x100718 = ramfuc_reg(0x100718);
ram->fuc.r_0x10071c = ramfuc_reg(0x10071c);
- ram->fuc.r_0x100760 = ramfuc_reg(0x100760);
- ram->fuc.r_0x1007a0 = ramfuc_reg(0x1007a0);
- ram->fuc.r_0x1007e0 = ramfuc_reg(0x1007e0);
+ ram->fuc.r_0x100760 = ramfuc_stride(0x100760, 4, ram->base.part_mask);
+ ram->fuc.r_0x1007a0 = ramfuc_stride(0x1007a0, 4, ram->base.part_mask);
+ ram->fuc.r_0x1007e0 = ramfuc_stride(0x1007e0, 4, ram->base.part_mask);
ram->fuc.r_0x10f804 = ramfuc_reg(0x10f804);
- ram->fuc.r_0x1110e0 = ramfuc_reg(0x1110e0);
+ ram->fuc.r_0x1110e0 = ramfuc_stride(0x1110e0, 4, ram->base.part_mask);
ram->fuc.r_0x111100 = ramfuc_reg(0x111100);
ram->fuc.r_0x111104 = ramfuc_reg(0x111104);
ram->fuc.r_0x611200 = ramfuc_reg(0x611200);
struct nouveau_bios *bios = nouveau_bios(pfb);
struct nvc0_ram *ram = (void *)pfb->ram;
struct nvc0_ramfuc *fuc = &ram->fuc;
+ struct nvbios_ramcfg cfg;
u8 ver, cnt, len, strap;
struct {
u32 data;
/* lookup memory config data relevant to the target frequency */
rammap.data = nvbios_rammapEm(bios, freq / 1000, &ver, &rammap.size,
- &cnt, &ramcfg.size);
+ &cnt, &ramcfg.size, &cfg);
if (!rammap.data || ver != 0x10 || rammap.size < 0x0e) {
nv_error(pfb, "invalid/missing rammap entry\n");
return -EINVAL;
do {
if (back)
- ret = nouveau_mm_tail(mm, 1, size, ncmin, align, &r);
+ ret = nouveau_mm_tail(mm, 0, 1, size, ncmin, align, &r);
else
- ret = nouveau_mm_head(mm, 1, size, ncmin, align, &r);
+ ret = nouveau_mm_head(mm, 0, 1, size, ncmin, align, &r);
if (ret) {
mutex_unlock(&pfb->base.mutex);
pfb->ram->put(pfb, &mem);
offset = (0x0200000000ULL >> 12) + (bsize << 8);
length = (ram->size >> 12) - ((bsize * parts) << 8) - rsvd_tail;
- ret = nouveau_mm_init(&pfb->vram, offset, length, 0);
+ ret = nouveau_mm_init(&pfb->vram, offset, length, 1);
if (ret)
nouveau_mm_fini(&pfb->vram);
}
#include <subdev/bios/init.h>
#include <subdev/bios/rammap.h>
#include <subdev/bios/timing.h>
+#include <subdev/bios/M0205.h>
+#include <subdev/bios/M0209.h>
#include <subdev/clock.h>
#include <subdev/clock/pll.h>
#include "ramfuc.h"
-/* binary driver only executes this path if the condition (a) is true
- * for any configuration (combination of rammap+ramcfg+timing) that
- * can be reached on a given card. for now, we will execute the branch
- * unconditionally in the hope that a "false everywhere" in the bios
- * tables doesn't actually mean "don't touch this".
- */
-#define NOTE00(a) 1
-
struct nve0_ramfuc {
struct ramfuc base;
struct nouveau_ram base;
struct nve0_ramfuc fuc;
+ struct list_head cfg;
u32 parts;
u32 pmask;
u32 pnuts;
+ struct nvbios_ramcfg diff;
int from;
int mode;
int N1, fN1, M1, P1;
{
struct nve0_fb_priv *priv = (void *)nouveau_fb(ram);
struct ramfuc *fuc = &ram->fuc.base;
- u32 addr = 0x110000 + (reg->addr[0] & 0xfff);
+ u32 addr = 0x110000 + (reg->addr & 0xfff);
u32 mask = _mask | _copy;
u32 data = (_data & _mask) | (reg->data & _copy);
u32 i;
u32 mask, data;
ram_mask(fuc, 0x10f808, 0x40000000, 0x40000000);
+ ram_block(fuc);
ram_wr32(fuc, 0x62c000, 0x0f0f0000);
/* MR1: turn termination on early, for some reason.. */
ram_mask(fuc, 0x10f2e8, 0xffffffff, next->bios.timing[9]);
data = mask = 0x00000000;
- if (NOTE00(ramcfg_08_20)) {
+ if (ram->diff.ramcfg_11_08_20) {
if (next->bios.ramcfg_11_08_20)
data |= 0x01000000;
mask |= 0x01000000;
ram_mask(fuc, 0x10f200, mask, data);
data = mask = 0x00000000;
- if (NOTE00(ramcfg_02_03 != 0)) {
+ if (ram->diff.ramcfg_11_02_03) {
data |= next->bios.ramcfg_11_02_03 << 8;
mask |= 0x00000300;
}
- if (NOTE00(ramcfg_01_10)) {
+ if (ram->diff.ramcfg_11_01_10) {
if (next->bios.ramcfg_11_01_10)
data |= 0x70000000;
mask |= 0x70000000;
ram_mask(fuc, 0x10f604, mask, data);
data = mask = 0x00000000;
- if (NOTE00(timing_30_07 != 0)) {
+ if (ram->diff.timing_20_30_07) {
data |= next->bios.timing_20_30_07 << 28;
mask |= 0x70000000;
}
- if (NOTE00(ramcfg_01_01)) {
+ if (ram->diff.ramcfg_11_01_01) {
if (next->bios.ramcfg_11_01_01)
data |= 0x00000100;
mask |= 0x00000100;
ram_mask(fuc, 0x10f614, mask, data);
data = mask = 0x00000000;
- if (NOTE00(timing_30_07 != 0)) {
+ if (ram->diff.timing_20_30_07) {
data |= next->bios.timing_20_30_07 << 28;
mask |= 0x70000000;
}
- if (NOTE00(ramcfg_01_02)) {
+ if (ram->diff.ramcfg_11_01_02) {
if (next->bios.ramcfg_11_01_02)
data |= 0x00000100;
mask |= 0x00000100;
ram_wr32(fuc, 0x10f870, 0x11111111 * next->bios.ramcfg_11_03_0f);
data = mask = 0x00000000;
- if (NOTE00(ramcfg_02_03 != 0)) {
+ if (ram->diff.ramcfg_11_02_03) {
data |= next->bios.ramcfg_11_02_03;
mask |= 0x00000003;
}
- if (NOTE00(ramcfg_01_10)) {
+ if (ram->diff.ramcfg_11_01_10) {
if (next->bios.ramcfg_11_01_10)
data |= 0x00000004;
mask |= 0x00000004;
if (next->bios.ramcfg_11_07_02)
nve0_ram_train(fuc, 0x80020000, 0x01000000);
+ ram_unblock(fuc);
ram_wr32(fuc, 0x62c000, 0x0f0f0f00);
if (next->bios.rammap_11_08_01)
u32 mask, data;
ram_mask(fuc, 0x10f808, 0x40000000, 0x40000000);
+ ram_block(fuc);
ram_wr32(fuc, 0x62c000, 0x0f0f0000);
if (vc == 1 && ram_have(fuc, gpio2E)) {
ram_mask(fuc, 0x10f200, 0x80000000, 0x00000000);
ram_nsec(fuc, 1000);
+ ram_unblock(fuc);
ram_wr32(fuc, 0x62c000, 0x0f0f0f00);
if (next->bios.rammap_11_08_01)
******************************************************************************/
static int
-nve0_ram_calc_data(struct nouveau_fb *pfb, u32 freq,
+nve0_ram_calc_data(struct nouveau_fb *pfb, u32 khz,
struct nouveau_ram_data *data)
{
- struct nouveau_bios *bios = nouveau_bios(pfb);
struct nve0_ram *ram = (void *)pfb->ram;
- u8 strap, cnt, len;
-
- /* lookup memory config data relevant to the target frequency */
- ram->base.rammap.data = nvbios_rammapEp(bios, freq / 1000,
- &ram->base.rammap.version,
- &ram->base.rammap.size,
- &cnt, &len, &data->bios);
- if (!ram->base.rammap.data || ram->base.rammap.version != 0x11 ||
- ram->base.rammap.size < 0x09) {
- nv_error(pfb, "invalid/missing rammap entry\n");
- return -EINVAL;
- }
-
- /* locate specific data set for the attached memory */
- strap = nvbios_ramcfg_index(nv_subdev(pfb));
- ram->base.ramcfg.data = nvbios_rammapSp(bios, ram->base.rammap.data,
- ram->base.rammap.version,
- ram->base.rammap.size,
- cnt, len, strap,
- &ram->base.ramcfg.version,
- &ram->base.ramcfg.size,
- &data->bios);
- if (!ram->base.ramcfg.data || ram->base.ramcfg.version != 0x11 ||
- ram->base.ramcfg.size < 0x08) {
- nv_error(pfb, "invalid/missing ramcfg entry\n");
- return -EINVAL;
- }
-
- /* lookup memory timings, if bios says they're present */
- strap = nv_ro08(bios, ram->base.ramcfg.data + 0x00);
- if (strap != 0xff) {
- ram->base.timing.data =
- nvbios_timingEp(bios, strap, &ram->base.timing.version,
- &ram->base.timing.size, &cnt, &len,
- &data->bios);
- if (!ram->base.timing.data ||
- ram->base.timing.version != 0x20 ||
- ram->base.timing.size < 0x33) {
- nv_error(pfb, "invalid/missing timing entry\n");
- return -EINVAL;
+ struct nouveau_ram_data *cfg;
+ u32 mhz = khz / 1000;
+
+ list_for_each_entry(cfg, &ram->cfg, head) {
+ if (mhz >= cfg->bios.rammap_min &&
+ mhz <= cfg->bios.rammap_max) {
+ *data = *cfg;
+ data->freq = khz;
+ return 0;
}
- } else {
- ram->base.timing.data = 0;
}
- data->freq = freq;
- return 0;
+ nv_error(ram, "ramcfg data for %dMHz not found\n", mhz);
+ return -EINVAL;
}
static int
return nve0_ram_calc_xits(pfb, ram->base.next);
}
+static void
+nve0_ram_prog_0(struct nouveau_fb *pfb, u32 freq)
+{
+ struct nve0_ram *ram = (void *)pfb->ram;
+ struct nouveau_ram_data *cfg;
+ u32 mhz = freq / 1000;
+ u32 mask, data;
+
+ list_for_each_entry(cfg, &ram->cfg, head) {
+ if (mhz >= cfg->bios.rammap_min &&
+ mhz <= cfg->bios.rammap_max)
+ break;
+ }
+
+ if (&cfg->head == &ram->cfg)
+ return;
+
+ if (mask = 0, data = 0, ram->diff.rammap_11_0a_03fe) {
+ data |= cfg->bios.rammap_11_0a_03fe << 12;
+ mask |= 0x001ff000;
+ }
+ if (ram->diff.rammap_11_09_01ff) {
+ data |= cfg->bios.rammap_11_09_01ff;
+ mask |= 0x000001ff;
+ }
+ nv_mask(pfb, 0x10f468, mask, data);
+
+ if (mask = 0, data = 0, ram->diff.rammap_11_0a_0400) {
+ data |= cfg->bios.rammap_11_0a_0400;
+ mask |= 0x00000001;
+ }
+ nv_mask(pfb, 0x10f420, mask, data);
+
+ if (mask = 0, data = 0, ram->diff.rammap_11_0a_0800) {
+ data |= cfg->bios.rammap_11_0a_0800;
+ mask |= 0x00000001;
+ }
+ nv_mask(pfb, 0x10f430, mask, data);
+
+ if (mask = 0, data = 0, ram->diff.rammap_11_0b_01f0) {
+ data |= cfg->bios.rammap_11_0b_01f0;
+ mask |= 0x0000001f;
+ }
+ nv_mask(pfb, 0x10f400, mask, data);
+
+ if (mask = 0, data = 0, ram->diff.rammap_11_0b_0200) {
+ data |= cfg->bios.rammap_11_0b_0200 << 9;
+ mask |= 0x00000200;
+ }
+ nv_mask(pfb, 0x10f410, mask, data);
+
+ if (mask = 0, data = 0, ram->diff.rammap_11_0d) {
+ data |= cfg->bios.rammap_11_0d << 16;
+ mask |= 0x00ff0000;
+ }
+ if (ram->diff.rammap_11_0f) {
+ data |= cfg->bios.rammap_11_0f << 8;
+ mask |= 0x0000ff00;
+ }
+ nv_mask(pfb, 0x10f440, mask, data);
+
+ if (mask = 0, data = 0, ram->diff.rammap_11_0e) {
+ data |= cfg->bios.rammap_11_0e << 8;
+ mask |= 0x0000ff00;
+ }
+ if (ram->diff.rammap_11_0b_0800) {
+ data |= cfg->bios.rammap_11_0b_0800 << 7;
+ mask |= 0x00000080;
+ }
+ if (ram->diff.rammap_11_0b_0400) {
+ data |= cfg->bios.rammap_11_0b_0400 << 5;
+ mask |= 0x00000020;
+ }
+ nv_mask(pfb, 0x10f444, mask, data);
+}
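The `if (mask = 0, data = 0, ...)` lines above lean on C's comma operator: the two assignments execute unconditionally as side effects and only the final expression decides the branch, so each group of register-field tests starts from a clean mask/data. A minimal stand-alone demonstration:

#include <stdio.h>

int main(void)
{
	int mask = 0xff, data = 0xff, cond = 1;

	/* assignments run left to right as side effects; only 'cond'
	 * decides whether the branch is taken */
	if (mask = 0, data = 0, cond)
		printf("taken, mask=%d data=%d\n", mask, data);
	return 0;
}

It is compact, if unusual; the conventional spelling would reset mask and data on two separate lines before each if.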
+
static int
nve0_ram_prog(struct nouveau_fb *pfb)
{
struct nouveau_device *device = nv_device(pfb);
struct nve0_ram *ram = (void *)pfb->ram;
struct nve0_ramfuc *fuc = &ram->fuc;
- ram_exec(fuc, nouveau_boolopt(device->cfgopt, "NvMemExec", true));
+ struct nouveau_ram_data *next = ram->base.next;
+
+ if (!nouveau_boolopt(device->cfgopt, "NvMemExec", true)) {
+ ram_exec(fuc, false);
+ return (ram->base.next == &ram->base.xition);
+ }
+
+ nve0_ram_prog_0(pfb, 1000);
+ ram_exec(fuc, true);
+ nve0_ram_prog_0(pfb, next->freq);
+
return (ram->base.next == &ram->base.xition);
}
ram_exec(fuc, false);
}
+struct nve0_ram_train {
+ u16 mask;
+ struct nvbios_M0209S remap;
+ struct nvbios_M0209S type00;
+ struct nvbios_M0209S type01;
+ struct nvbios_M0209S type04;
+ struct nvbios_M0209S type06;
+ struct nvbios_M0209S type07;
+ struct nvbios_M0209S type08;
+ struct nvbios_M0209S type09;
+};
+
+static int
+nve0_ram_train_type(struct nouveau_fb *pfb, int i, u8 ramcfg,
+ struct nve0_ram_train *train)
+{
+ struct nouveau_bios *bios = nouveau_bios(pfb);
+ struct nvbios_M0205E M0205E;
+ struct nvbios_M0205S M0205S;
+ struct nvbios_M0209E M0209E;
+ struct nvbios_M0209S *remap = &train->remap;
+ struct nvbios_M0209S *value;
+ u8 ver, hdr, cnt, len;
+ u32 data;
+
+ /* determine type of data for this index */
+ if (!(data = nvbios_M0205Ep(bios, i, &ver, &hdr, &cnt, &len, &M0205E)))
+ return -ENOENT;
+
+ switch (M0205E.type) {
+ case 0x00: value = &train->type00; break;
+ case 0x01: value = &train->type01; break;
+ case 0x04: value = &train->type04; break;
+ case 0x06: value = &train->type06; break;
+ case 0x07: value = &train->type07; break;
+ case 0x08: value = &train->type08; break;
+ case 0x09: value = &train->type09; break;
+ default:
+ return 0;
+ }
+
+ /* training data index determined by ramcfg strap */
+ if (!(data = nvbios_M0205Sp(bios, i, ramcfg, &ver, &hdr, &M0205S)))
+ return -EINVAL;
+ i = M0205S.data;
+
+ /* training data format information */
+ if (!(data = nvbios_M0209Ep(bios, i, &ver, &hdr, &cnt, &len, &M0209E)))
+ return -EINVAL;
+
+ /* ... and the raw data */
+ if (!(data = nvbios_M0209Sp(bios, i, 0, &ver, &hdr, value)))
+ return -EINVAL;
+
+ if (M0209E.v02_07 == 2) {
+ /* of course! why wouldn't we have a pointer to another entry
+ * in the same table, and use the first one as an array of
+ * remap indices...
+ */
+ if (!(data = nvbios_M0209Sp(bios, M0209E.v03, 0, &ver, &hdr,
+ remap)))
+ return -EINVAL;
+
+ for (i = 0; i < ARRAY_SIZE(value->data); i++)
+ value->data[i] = remap->data[value->data[i]];
+ } else
+ if (M0209E.v02_07 != 1)
+ return -EINVAL;
+
+ train->mask |= 1 << M0205E.type;
+ return 0;
+}
+
+static int
+nve0_ram_train_init_0(struct nouveau_fb *pfb, struct nve0_ram_train *train)
+{
+ int i, j;
+
+ if ((train->mask & 0x03d3) != 0x03d3) {
+ nv_warn(pfb, "missing link training data\n");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < 0x30; i++) {
+ for (j = 0; j < 8; j += 4) {
+ nv_wr32(pfb, 0x10f968 + j, 0x00000000 | (i << 8));
+ nv_wr32(pfb, 0x10f920 + j, 0x00000000 |
+ train->type08.data[i] << 4 |
+ train->type06.data[i]);
+ nv_wr32(pfb, 0x10f918 + j, train->type00.data[i]);
+ nv_wr32(pfb, 0x10f920 + j, 0x00000100 |
+ train->type09.data[i] << 4 |
+ train->type07.data[i]);
+ nv_wr32(pfb, 0x10f918 + j, train->type01.data[i]);
+ }
+ }
+
+ for (j = 0; j < 8; j += 4) {
+ for (i = 0; i < 0x100; i++) {
+ nv_wr32(pfb, 0x10f968 + j, i);
+ nv_wr32(pfb, 0x10f900 + j, train->type04.data[i]);
+ }
+ }
+
+ return 0;
+}
+
+static int
+nve0_ram_train_init(struct nouveau_fb *pfb)
+{
+ u8 ramcfg = nvbios_ramcfg_index(nv_subdev(pfb));
+ struct nve0_ram_train *train;
+ int ret = -ENOMEM, i;
+
+ if ((train = kzalloc(sizeof(*train), GFP_KERNEL))) {
+ for (i = 0; i < 0x100; i++) {
+ ret = nve0_ram_train_type(pfb, i, ramcfg, train);
+ if (ret && ret != -ENOENT)
+ break;
+ }
+ }
+
+ switch (pfb->ram->type) {
+ case NV_MEM_TYPE_GDDR5:
+ ret = nve0_ram_train_init_0(pfb, train);
+ break;
+ default:
+ ret = 0;
+ break;
+ }
+
+ kfree(train);
+ return ret;
+}
+
int
nve0_ram_init(struct nouveau_object *object)
{
struct nouveau_fb *pfb = (void *)object->parent;
struct nve0_ram *ram = (void *)object;
struct nouveau_bios *bios = nouveau_bios(pfb);
- static const u8 train0[] = {
- 0x00, 0xff, 0xff, 0x00, 0xff, 0x00,
- 0x00, 0xff, 0xff, 0x00, 0xff, 0x00,
- };
- static const u32 train1[] = {
- 0x00000000, 0xffffffff,
- 0x55555555, 0xaaaaaaaa,
- 0x33333333, 0xcccccccc,
- 0xf0f0f0f0, 0x0f0f0f0f,
- 0x00ff00ff, 0xff00ff00,
- 0x0000ffff, 0xffff0000,
- };
u8 ver, hdr, cnt, len, snr, ssz;
u32 data, save;
int ret, i;
cnt = nv_ro08(bios, data + 0x14); /* guess at count */
data = nv_ro32(bios, data + 0x10); /* guess u32... */
- save = nv_rd32(pfb, 0x10f65c);
- for (i = 0; i < cnt; i++) {
- nv_mask(pfb, 0x10f65c, 0x000000f0, i << 4);
- nvbios_exec(&(struct nvbios_init) {
- .subdev = nv_subdev(pfb),
- .bios = bios,
- .offset = nv_ro32(bios, data), /* guess u32 */
- .execute = 1,
- });
- data += 4;
- }
- nv_wr32(pfb, 0x10f65c, save);
+ save = nv_rd32(pfb, 0x10f65c) & 0x000000f0;
+ for (i = 0; i < cnt; i++, data += 4) {
+ if (i != save >> 4) {
+ nv_mask(pfb, 0x10f65c, 0x000000f0, i << 4);
+ nvbios_exec(&(struct nvbios_init) {
+ .subdev = nv_subdev(pfb),
+ .bios = bios,
+ .offset = nv_ro32(bios, data),
+ .execute = 1,
+ });
+ }
+ }
+ nv_mask(pfb, 0x10f65c, 0x000000f0, save);
nv_mask(pfb, 0x10f584, 0x11000000, 0x00000000);
+ nv_wr32(pfb, 0x10ecc0, 0xffffffff);
+ nv_mask(pfb, 0x10f160, 0x00000010, 0x00000010);
- switch (ram->base.type) {
- case NV_MEM_TYPE_GDDR5:
- for (i = 0; i < 0x30; i++) {
- nv_wr32(pfb, 0x10f968, 0x00000000 | (i << 8));
- nv_wr32(pfb, 0x10f920, 0x00000000 | train0[i % 12]);
- nv_wr32(pfb, 0x10f918, train1[i % 12]);
- nv_wr32(pfb, 0x10f920, 0x00000100 | train0[i % 12]);
- nv_wr32(pfb, 0x10f918, train1[i % 12]);
-
- nv_wr32(pfb, 0x10f96c, 0x00000000 | (i << 8));
- nv_wr32(pfb, 0x10f924, 0x00000000 | train0[i % 12]);
- nv_wr32(pfb, 0x10f91c, train1[i % 12]);
- nv_wr32(pfb, 0x10f924, 0x00000100 | train0[i % 12]);
- nv_wr32(pfb, 0x10f91c, train1[i % 12]);
- }
+ return nve0_ram_train_init(pfb);
+}
- for (i = 0; i < 0x100; i++) {
- nv_wr32(pfb, 0x10f968, i);
- nv_wr32(pfb, 0x10f900, train1[2 + (i & 1)]);
- }
+static int
+nve0_ram_ctor_data(struct nve0_ram *ram, u8 ramcfg, int i)
+{
+ struct nouveau_fb *pfb = (void *)nv_object(ram)->parent;
+ struct nouveau_bios *bios = nouveau_bios(pfb);
+ struct nouveau_ram_data *cfg;
+ struct nvbios_ramcfg *d = &ram->diff;
+ struct nvbios_ramcfg *p, *n;
+ u8 ver, hdr, cnt, len;
+ u32 data;
+ int ret;
- for (i = 0; i < 0x100; i++) {
- nv_wr32(pfb, 0x10f96c, i);
- nv_wr32(pfb, 0x10f900, train1[2 + (i & 1)]);
- }
- break;
- default:
- break;
+ if (!(cfg = kmalloc(sizeof(*cfg), GFP_KERNEL)))
+ return -ENOMEM;
+ p = &list_last_entry(&ram->cfg, typeof(*cfg), head)->bios;
+ n = &cfg->bios;
+
+ /* memory config data for a range of target frequencies */
+ data = nvbios_rammapEp(bios, i, &ver, &hdr, &cnt, &len, &cfg->bios);
+ if (ret = -ENOENT, !data)
+ goto done;
+ if (ret = -ENOSYS, ver != 0x11 || hdr < 0x12)
+ goto done;
+
+ /* ... and a portion specific to the attached memory */
+ data = nvbios_rammapSp(bios, data, ver, hdr, cnt, len, ramcfg,
+ &ver, &hdr, &cfg->bios);
+ if (ret = -EINVAL, !data)
+ goto done;
+ if (ret = -ENOSYS, ver != 0x11 || hdr < 0x0a)
+ goto done;
+
+ /* lookup memory timings, if bios says they're present */
+ if (cfg->bios.ramcfg_timing != 0xff) {
+ data = nvbios_timingEp(bios, cfg->bios.ramcfg_timing,
+ &ver, &hdr, &cnt, &len,
+ &cfg->bios);
+ if (ret = -EINVAL, !data)
+ goto done;
+ if (ret = -ENOSYS, ver != 0x20 || hdr < 0x33)
+ goto done;
}
- return 0;
+ list_add_tail(&cfg->head, &ram->cfg);
+ if (ret = 0, i == 0)
+ goto done;
+
+ d->rammap_11_0a_03fe |= p->rammap_11_0a_03fe != n->rammap_11_0a_03fe;
+ d->rammap_11_09_01ff |= p->rammap_11_09_01ff != n->rammap_11_09_01ff;
+ d->rammap_11_0a_0400 |= p->rammap_11_0a_0400 != n->rammap_11_0a_0400;
+ d->rammap_11_0a_0800 |= p->rammap_11_0a_0800 != n->rammap_11_0a_0800;
+ d->rammap_11_0b_01f0 |= p->rammap_11_0b_01f0 != n->rammap_11_0b_01f0;
+ d->rammap_11_0b_0200 |= p->rammap_11_0b_0200 != n->rammap_11_0b_0200;
+ d->rammap_11_0d |= p->rammap_11_0d != n->rammap_11_0d;
+ d->rammap_11_0f |= p->rammap_11_0f != n->rammap_11_0f;
+ d->rammap_11_0e |= p->rammap_11_0e != n->rammap_11_0e;
+ d->rammap_11_0b_0800 |= p->rammap_11_0b_0800 != n->rammap_11_0b_0800;
+ d->rammap_11_0b_0400 |= p->rammap_11_0b_0400 != n->rammap_11_0b_0400;
+ d->ramcfg_11_01_01 |= p->ramcfg_11_01_01 != n->ramcfg_11_01_01;
+ d->ramcfg_11_01_02 |= p->ramcfg_11_01_02 != n->ramcfg_11_01_02;
+ d->ramcfg_11_01_10 |= p->ramcfg_11_01_10 != n->ramcfg_11_01_10;
+ d->ramcfg_11_02_03 |= p->ramcfg_11_02_03 != n->ramcfg_11_02_03;
+ d->ramcfg_11_08_20 |= p->ramcfg_11_08_20 != n->ramcfg_11_08_20;
+ d->timing_20_30_07 |= p->timing_20_30_07 != n->timing_20_30_07;
+done:
+ if (ret)
+ kfree(cfg);
+ return ret;
+}
+
+static void
+nve0_ram_dtor(struct nouveau_object *object)
+{
+ struct nve0_ram *ram = (void *)object;
+ struct nouveau_ram_data *cfg, *tmp;
+
+ list_for_each_entry_safe(cfg, tmp, &ram->cfg, head) {
+ kfree(cfg);
+ }
+
+ nouveau_ram_destroy(&ram->base);
}
static int
struct dcb_gpio_func func;
struct nve0_ram *ram;
int ret, i;
+ u8 ramcfg = nvbios_ramcfg_index(nv_subdev(pfb));
u32 tmp;
ret = nvc0_ram_create(parent, engine, oclass, 0x022554, &ram);
if (ret)
return ret;
+ INIT_LIST_HEAD(&ram->cfg);
+
switch (ram->base.type) {
case NV_MEM_TYPE_DDR3:
case NV_MEM_TYPE_GDDR5:
}
}
- // parse bios data for both pll's
+ /* parse bios data for all rammap table entries up-front, and
+ * build information on whether certain fields differ between
+ * any of the entries.
+ *
+ * the binary driver appears to completely ignore some fields
+ * when all entries contain the same value. at first, it was
+ * hoped that these were mere optimisations and the bios init
+ * tables had configured as per the values here, but there is
+ * evidence now to suggest that this isn't the case and we do
+ * need to treat this condition as a "don't touch" indicator.
+ */
+ for (i = 0; !ret; i++) {
+ ret = nve0_ram_ctor_data(ram, ramcfg, i);
+ if (ret && ret != -ENOENT) {
+ nv_error(pfb, "failed to parse ramcfg data\n");
+ return ret;
+ }
+ }
+
+ /* parse bios data for both pll's */
ret = nvbios_pll_parse(bios, 0x0c, &ram->fuc.refpll);
if (ret) {
nv_error(pfb, "mclk refpll data not found\n");
return ret;
}
+ /* lookup memory voltage gpios */
ret = gpio->find(gpio, 0, 0x18, DCB_GPIO_UNUSED, &func);
if (ret == 0) {
ram->fuc.r_gpioMV = ramfuc_reg(0x00d610 + (func.line * 0x04));
.handle = 0,
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = nve0_ram_ctor,
- .dtor = _nouveau_ram_dtor,
+ .dtor = nve0_ram_dtor,
.init = nve0_ram_init,
.fini = _nouveau_ram_fini,
}
--- /dev/null
+/*
+ * Copyright 2014 Roy Spliet
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Roy Spliet <rspliet@eclipso.eu>
+ * Ben Skeggs
+ */
+
+#include "priv.h"
+
+struct ramxlat {
+ int id;
+ u8 enc;
+};
+
+static inline int
+ramxlat(const struct ramxlat *xlat, int id)
+{
+ while (xlat->id >= 0) {
+ if (xlat->id == id)
+ return xlat->enc;
+ xlat++;
+ }
+ return -EINVAL;
+}
+
+static const struct ramxlat
+ramddr2_cl[] = {
+ { 2, 2 }, { 3, 3 }, { 4, 4 }, { 5, 5 }, { 6, 6 },
+ /* The following are available in some, but not all DDR2 docs */
+ { 7, 7 },
+ { -1 }
+};
+
+static const struct ramxlat
+ramddr2_wr[] = {
+ { 2, 1 }, { 3, 2 }, { 4, 3 }, { 5, 4 }, { 6, 5 },
+ /* The following are available in some, but not all DDR2 docs */
+ { 7, 6 },
+ { -1 }
+};
+
+int
+nouveau_sddr2_calc(struct nouveau_ram *ram)
+{
+ int CL, WR, DLL = 0, ODT = 0;
+
+ switch (ram->next->bios.timing_ver) {
+ case 0x10:
+ CL = ram->next->bios.timing_10_CL;
+ WR = ram->next->bios.timing_10_WR;
+ DLL = !ram->next->bios.ramcfg_10_02_40;
+ ODT = ram->next->bios.timing_10_ODT & 3;
+ break;
+ case 0x20:
+ CL = (ram->next->bios.timing[1] & 0x0000001f);
+ WR = (ram->next->bios.timing[2] & 0x007f0000) >> 16;
+ break;
+ default:
+ return -ENOSYS;
+ }
+
+ CL = ramxlat(ramddr2_cl, CL);
+ WR = ramxlat(ramddr2_wr, WR);
+ if (CL < 0 || WR < 0)
+ return -EINVAL;
+
+ ram->mr[0] &= ~0xf70;
+ ram->mr[0] |= (WR & 0x07) << 9;
+ ram->mr[0] |= (CL & 0x07) << 4;
+
+ ram->mr[1] &= ~0x045;
+ ram->mr[1] |= (ODT & 0x1) << 2;
+ ram->mr[1] |= (ODT & 0x2) << 5;
+ ram->mr[1] |= !DLL;
+ return 0;
+}
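As a worked example of the packing above, using the translation tables in this file: CL=4 encodes to 4 and WR=5 encodes to 4, so mr[0] receives (4 << 9) | (4 << 4) = 0x840. A stand-alone check:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* encodings per ramddr2_cl/ramddr2_wr above: CL 4 -> 4, WR 5 -> 4 */
	uint32_t CL = 4, WR = 4, mr0 = 0;

	mr0 &= ~0xf70;
	mr0 |= (WR & 0x07) << 9;
	mr0 |= (CL & 0x07) << 4;
	printf("mr[0] = 0x%03x\n", (unsigned)mr0);	/* prints 0x840 */
	return 0;
}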
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs <bskeggs@redhat.com>
+ * Roy Spliet <rspliet@eclipso.eu>
*/
-#include <subdev/bios.h>
#include "priv.h"
struct ramxlat {
int
nouveau_sddr3_calc(struct nouveau_ram *ram)
{
- struct nouveau_bios *bios = nouveau_bios(ram);
- int WL, CL, WR;
+ int CWL, CL, WR, DLL = 0, ODT = 0;
- switch (!!ram->timing.data * ram->timing.version) {
+ switch (ram->next->bios.timing_ver) {
+ case 0x10:
+ if (ram->next->bios.timing_hdr < 0x17) {
+ /* XXX: NV50: Get CWL from the timing register */
+ return -ENOSYS;
+ }
+ CWL = ram->next->bios.timing_10_CWL;
+ CL = ram->next->bios.timing_10_CL;
+ WR = ram->next->bios.timing_10_WR;
+ DLL = !ram->next->bios.ramcfg_10_02_40;
+ ODT = ram->next->bios.timing_10_ODT;
+ break;
case 0x20:
- WL = (nv_ro16(bios, ram->timing.data + 0x04) & 0x0f80) >> 7;
- CL = nv_ro08(bios, ram->timing.data + 0x04) & 0x1f;
- WR = nv_ro08(bios, ram->timing.data + 0x0a) & 0x7f;
+ CWL = (ram->next->bios.timing[1] & 0x00000f80) >> 7;
+ CL = (ram->next->bios.timing[1] & 0x0000001f) >> 0;
+ WR = (ram->next->bios.timing[2] & 0x007f0000) >> 16;
+ /* XXX: Get these values from the VBIOS instead */
+ DLL = !(ram->mr[1] & 0x1);
+ ODT = (ram->mr[1] & 0x004) >> 2 |
+ (ram->mr[1] & 0x040) >> 5 |
+ (ram->mr[1] & 0x200) >> 7;
break;
default:
return -ENOSYS;
}
- WL = ramxlat(ramddr3_cwl, WL);
- CL = ramxlat(ramddr3_cl, CL);
- WR = ramxlat(ramddr3_wr, WR);
- if (WL < 0 || CL < 0 || WR < 0)
+ CWL = ramxlat(ramddr3_cwl, CWL);
+ CL = ramxlat(ramddr3_cl, CL);
+ WR = ramxlat(ramddr3_wr, WR);
+ if (CL < 0 || CWL < 0 || WR < 0)
return -EINVAL;
- ram->mr[0] &= ~0xe74;
+ ram->mr[0] &= ~0xf74;
ram->mr[0] |= (WR & 0x07) << 9;
ram->mr[0] |= (CL & 0x0e) << 3;
ram->mr[0] |= (CL & 0x01) << 2;
+ ram->mr[1] &= ~0x245;
+ ram->mr[1] |= (ODT & 0x1) << 2;
+ ram->mr[1] |= (ODT & 0x2) << 5;
+ ram->mr[1] |= (ODT & 0x4) << 7;
+ ram->mr[1] |= !DLL;
+
ram->mr[2] &= ~0x038;
- ram->mr[2] |= (WL & 0x07) << 3;
+ ram->mr[2] |= (CWL & 0x07) << 3;
return 0;
}
--- /dev/null
+/*
+ * Copyright 2014 Martin Peres
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Martin Peres
+ */
+
+#include <subdev/fuse.h>
+
+int
+_nouveau_fuse_init(struct nouveau_object *object)
+{
+ struct nouveau_fuse *fuse = (void *)object;
+ return nouveau_subdev_init(&fuse->base);
+}
+
+void
+_nouveau_fuse_dtor(struct nouveau_object *object)
+{
+ struct nouveau_fuse *fuse = (void *)object;
+ nouveau_subdev_destroy(&fuse->base);
+}
+
+int
+nouveau_fuse_create_(struct nouveau_object *parent,
+ struct nouveau_object *engine,
+ struct nouveau_oclass *oclass, int length, void **pobject)
+{
+ struct nouveau_fuse *fuse;
+ int ret;
+
+ ret = nouveau_subdev_create_(parent, engine, oclass, 0, "FUSE",
+ "fuse", length, pobject);
+ fuse = *pobject;
+
+ return ret;
+}
--- /dev/null
+/*
+ * Copyright 2014 Martin Peres
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Martin Peres
+ */
+
+#include "priv.h"
+
+struct g80_fuse_priv {
+ struct nouveau_fuse base;
+
+ spinlock_t fuse_enable_lock;
+};
+
+static u32
+g80_fuse_rd32(struct nouveau_object *object, u64 addr)
+{
+ struct g80_fuse_priv *priv = (void *)object;
+ unsigned long flags;
+ u32 fuse_enable, val;
+
+ spin_lock_irqsave(&priv->fuse_enable_lock, flags);
+
+	/* racy if another part of nouveau starts writing to this reg */
+ fuse_enable = nv_mask(priv, 0x1084, 0x800, 0x800);
+ val = nv_rd32(priv, 0x21000 + addr);
+ nv_wr32(priv, 0x1084, fuse_enable);
+
+ spin_unlock_irqrestore(&priv->fuse_enable_lock, flags);
+
+ return val;
+}
+
+
+static int
+g80_fuse_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
+ struct nouveau_oclass *oclass, void *data, u32 size,
+ struct nouveau_object **pobject)
+{
+ struct g80_fuse_priv *priv;
+ int ret;
+
+ ret = nouveau_fuse_create(parent, engine, oclass, &priv);
+ *pobject = nv_object(priv);
+ if (ret)
+ return ret;
+
+ spin_lock_init(&priv->fuse_enable_lock);
+
+ return 0;
+}
+
+struct nouveau_oclass
+g80_fuse_oclass = {
+ .handle = NV_SUBDEV(FUSE, 0x50),
+ .ofuncs = &(struct nouveau_ofuncs) {
+ .ctor = g80_fuse_ctor,
+ .dtor = _nouveau_fuse_dtor,
+ .init = _nouveau_fuse_init,
+ .fini = _nouveau_fuse_fini,
+ .rd32 = g80_fuse_rd32,
+ },
+};
--- /dev/null
+/*
+ * Copyright 2014 Martin Peres
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Martin Peres
+ */
+
+#include "priv.h"
+
+struct gf100_fuse_priv {
+ struct nouveau_fuse base;
+
+ spinlock_t fuse_enable_lock;
+};
+
+static u32
+gf100_fuse_rd32(struct nouveau_object *object, u64 addr)
+{
+ struct gf100_fuse_priv *priv = (void *)object;
+ unsigned long flags;
+ u32 fuse_enable, unk, val;
+
+ spin_lock_irqsave(&priv->fuse_enable_lock, flags);
+
+	/* racy if another part of nouveau starts writing to these regs */
+ fuse_enable = nv_mask(priv, 0x22400, 0x800, 0x800);
+ unk = nv_mask(priv, 0x21000, 0x1, 0x1);
+ val = nv_rd32(priv, 0x21100 + addr);
+ nv_wr32(priv, 0x21000, unk);
+ nv_wr32(priv, 0x22400, fuse_enable);
+
+ spin_unlock_irqrestore(&priv->fuse_enable_lock, flags);
+
+ return val;
+}
+
+
+static int
+gf100_fuse_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
+ struct nouveau_oclass *oclass, void *data, u32 size,
+ struct nouveau_object **pobject)
+{
+ struct gf100_fuse_priv *priv;
+ int ret;
+
+ ret = nouveau_fuse_create(parent, engine, oclass, &priv);
+ *pobject = nv_object(priv);
+ if (ret)
+ return ret;
+
+ spin_lock_init(&priv->fuse_enable_lock);
+
+ return 0;
+}
+
+struct nouveau_oclass
+gf100_fuse_oclass = {
+ .handle = NV_SUBDEV(FUSE, 0xC0),
+ .ofuncs = &(struct nouveau_ofuncs) {
+ .ctor = gf100_fuse_ctor,
+ .dtor = _nouveau_fuse_dtor,
+ .init = _nouveau_fuse_init,
+ .fini = _nouveau_fuse_fini,
+ .rd32 = gf100_fuse_rd32,
+ },
+};
--- /dev/null
+/*
+ * Copyright 2014 Martin Peres
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Martin Peres
+ */
+
+#include "priv.h"
+
+struct gm107_fuse_priv {
+ struct nouveau_fuse base;
+};
+
+static u32
+gm107_fuse_rd32(struct nouveau_object *object, u64 addr)
+{
+	struct gm107_fuse_priv *priv = (void *)object;
+
+ return nv_rd32(priv, 0x21100 + addr);
+}
+
+
+static int
+gm107_fuse_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
+ struct nouveau_oclass *oclass, void *data, u32 size,
+ struct nouveau_object **pobject)
+{
+ struct gm107_fuse_priv *priv;
+ int ret;
+
+ ret = nouveau_fuse_create(parent, engine, oclass, &priv);
+ *pobject = nv_object(priv);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+struct nouveau_oclass
+gm107_fuse_oclass = {
+ .handle = NV_SUBDEV(FUSE, 0x117),
+ .ofuncs = &(struct nouveau_ofuncs) {
+ .ctor = gm107_fuse_ctor,
+ .dtor = _nouveau_fuse_dtor,
+ .init = _nouveau_fuse_init,
+ .fini = _nouveau_fuse_fini,
+ .rd32 = gm107_fuse_rd32,
+ },
+};
--- /dev/null
+#ifndef __NVKM_FUSE_PRIV_H__
+#define __NVKM_FUSE_PRIV_H__
+
+#include <subdev/fuse.h>
+
+int _nouveau_fuse_init(struct nouveau_object *object);
+void _nouveau_fuse_dtor(struct nouveau_object *object);
+
+#endif
}
static int
-nouveau_gpio_intr_ctor(void *data, u32 size, struct nvkm_notify *notify)
+nouveau_gpio_intr_ctor(struct nouveau_object *object, void *data, u32 size,
+ struct nvkm_notify *notify)
{
struct nvkm_gpio_ntfy_req *req = data;
if (!WARN_ON(size != sizeof(*req))) {
#include "priv.h"
void
-nv92_gpio_intr_stat(struct nouveau_gpio *gpio, u32 *hi, u32 *lo)
+nv94_gpio_intr_stat(struct nouveau_gpio *gpio, u32 *hi, u32 *lo)
{
u32 intr0 = nv_rd32(gpio, 0x00e054);
u32 intr1 = nv_rd32(gpio, 0x00e074);
}
void
-nv92_gpio_intr_mask(struct nouveau_gpio *gpio, u32 type, u32 mask, u32 data)
+nv94_gpio_intr_mask(struct nouveau_gpio *gpio, u32 type, u32 mask, u32 data)
{
u32 inte0 = nv_rd32(gpio, 0x00e050);
u32 inte1 = nv_rd32(gpio, 0x00e070);
}
struct nouveau_oclass *
-nv92_gpio_oclass = &(struct nouveau_gpio_impl) {
- .base.handle = NV_SUBDEV(GPIO, 0x92),
+nv94_gpio_oclass = &(struct nouveau_gpio_impl) {
+ .base.handle = NV_SUBDEV(GPIO, 0x94),
.base.ofuncs = &(struct nouveau_ofuncs) {
.ctor = _nouveau_gpio_ctor,
.dtor = _nouveau_gpio_dtor,
.fini = _nouveau_gpio_fini,
},
.lines = 32,
- .intr_stat = nv92_gpio_intr_stat,
- .intr_mask = nv92_gpio_intr_mask,
+ .intr_stat = nv94_gpio_intr_stat,
+ .intr_mask = nv94_gpio_intr_mask,
.drive = nv50_gpio_drive,
.sense = nv50_gpio_sense,
.reset = nv50_gpio_reset,
.fini = _nouveau_gpio_fini,
},
.lines = 32,
- .intr_stat = nv92_gpio_intr_stat,
- .intr_mask = nv92_gpio_intr_mask,
+ .intr_stat = nv94_gpio_intr_stat,
+ .intr_mask = nv94_gpio_intr_mask,
.drive = nvd0_gpio_drive,
.sense = nvd0_gpio_sense,
.reset = nvd0_gpio_reset,
int nv50_gpio_drive(struct nouveau_gpio *, int, int, int);
int nv50_gpio_sense(struct nouveau_gpio *, int);
-void nv92_gpio_intr_stat(struct nouveau_gpio *, u32 *, u32 *);
-void nv92_gpio_intr_mask(struct nouveau_gpio *, u32, u32, u32);
+void nv94_gpio_intr_stat(struct nouveau_gpio *, u32 *, u32 *);
+void nv94_gpio_intr_mask(struct nouveau_gpio *, u32, u32, u32);
void nvd0_gpio_reset(struct nouveau_gpio *, u8);
int nvd0_gpio_drive(struct nouveau_gpio *, int, int, int);
*/
#include <core/option.h>
+#include <core/object.h>
#include <core/event.h>
#include <subdev/bios.h>
}
static int
-nouveau_i2c_intr_ctor(void *data, u32 size, struct nvkm_notify *notify)
+nouveau_i2c_intr_ctor(struct nouveau_object *object, void *data, u32 size,
+ struct nvkm_notify *notify)
{
struct nvkm_i2c_ntfy_req *req = data;
if (!WARN_ON(size != sizeof(*req))) {
if (ret)
return ret;
- ret = nouveau_mm_head(&priv->heap, 1, args->size, args->size,
+ ret = nouveau_mm_head(&priv->heap, 0, 1, args->size, args->size,
args->align, &node->mem);
if (ret)
return ret;
struct nvkm_ltc_priv *priv = (void *)ltc;
int ret;
- ret = nouveau_mm_head(&priv->tags, 1, n, n, 1, pnode);
+ ret = nouveau_mm_head(&priv->tags, 0, 1, n, n, 1, pnode);
if (ret)
*pnode = NULL;
nv_wr32(priv, 0x17ea58, depth);
}
+static const struct nouveau_bitfield
+gf100_ltc_lts_intr_name[] = {
+ { 0x00000001, "IDLE_ERROR_IQ" },
+ { 0x00000002, "IDLE_ERROR_CBC" },
+ { 0x00000004, "IDLE_ERROR_TSTG" },
+ { 0x00000008, "IDLE_ERROR_DSTG" },
+ { 0x00000010, "EVICTED_CB" },
+ { 0x00000020, "ILLEGAL_COMPSTAT" },
+ { 0x00000040, "BLOCKLINEAR_CB" },
+ { 0x00000100, "ECC_SEC_ERROR" },
+ { 0x00000200, "ECC_DED_ERROR" },
+ { 0x00000400, "DEBUG" },
+ { 0x00000800, "ATOMIC_TO_Z" },
+ { 0x00001000, "ILLEGAL_ATOMIC" },
+ { 0x00002000, "BLKACTIVITY_ERR" },
+ {}
+};
+
static void
-gf100_ltc_lts_isr(struct nvkm_ltc_priv *priv, int ltc, int lts)
+gf100_ltc_lts_intr(struct nvkm_ltc_priv *priv, int ltc, int lts)
{
u32 base = 0x141000 + (ltc * 0x2000) + (lts * 0x400);
- u32 stat = nv_rd32(priv, base + 0x020);
+ u32 intr = nv_rd32(priv, base + 0x020);
+ u32 stat = intr & 0x0000ffff;
if (stat) {
- nv_info(priv, "LTC%d_LTS%d: 0x%08x\n", ltc, lts, stat);
- nv_wr32(priv, base + 0x020, stat);
+ nv_info(priv, "LTC%d_LTS%d:", ltc, lts);
+ nouveau_bitfield_print(gf100_ltc_lts_intr_name, stat);
+ pr_cont("\n");
}
+
+ nv_wr32(priv, base + 0x020, intr);
}
void
while (mask) {
u32 lts, ltc = __ffs(mask);
for (lts = 0; lts < priv->lts_nr; lts++)
- gf100_ltc_lts_isr(priv, ltc, lts);
+ gf100_ltc_lts_intr(priv, ltc, lts);
mask &= ~(1 << ltc);
}
-
- /* we do something horribly wrong and upset PMFB a lot, so mask off
- * interrupts from it after the first one until it's fixed
- */
- nv_mask(priv, 0x000640, 0x02000000, 0x00000000);
}
static int
tag_size += tag_align;
tag_size = (tag_size + 0xfff) >> 12; /* round up */
- ret = nouveau_mm_tail(&pfb->vram, 1, tag_size, tag_size, 1,
+ ret = nouveau_mm_tail(&pfb->vram, 1, 1, tag_size, tag_size, 1,
&priv->tag_ram);
if (ret) {
priv->num_tags = 0;
gm107_ltc_lts_isr(priv, ltc, lts);
mask &= ~(1 << ltc);
}
-
- /* we do something horribly wrong and upset PMFB a lot, so mask off
- * interrupts from it after the first one until it's fixed
- */
- nv_mask(priv, 0x000640, 0x02000000, 0x00000000);
}
static int
#include <subdev/ltc.h>
#include <subdev/fb.h>
+#include <core/enum.h>
+
struct nvkm_ltc_priv {
struct nouveau_ltc base;
u32 ltc_nr;
nv_wait(ppwr, 0x10a04c, 0xffffffff, 0x00000000);
nv_mask(ppwr, 0x000200, 0x00002000, 0x00000000);
nv_mask(ppwr, 0x000200, 0x00002000, 0x00002000);
+ nv_rd32(ppwr, 0x000200);
+ nv_wait(ppwr, 0x10a10c, 0x00000006, 0x00000000);
/* upload data segment */
nv_wr32(ppwr, 0x10a1c0, 0x01000000);
--- /dev/null
+/*
+ * Copyright 2014 Martin Peres <martin.peres@free.fr>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Martin Peres
+ */
+
+/******************************************************************************
+ * arith data segment
+ *****************************************************************************/
+#ifdef INCLUDE_PROC
+#endif
+
+#ifdef INCLUDE_DATA
+#endif
+
+/******************************************************************************
+ * arith code segment
+ *****************************************************************************/
+#ifdef INCLUDE_CODE
+
+// does a 32x32 -> 64 multiplication
+//
+// A * B = A_lo * B_lo
+// + ( A_hi * B_lo ) << 16
+// + ( A_lo * B_hi ) << 16
+// + ( A_hi * B_hi ) << 32
+//
+// $r15 - current
+// $r14 - A
+// $r13 - B
+// $r12 - mul_lo (return)
+// $r11 - mul_hi (return)
+// $r0 - zero
+mulu32_32_64:
+ push $r1 // A_hi
+ push $r2 // B_hi
+ push $r3 // tmp0
+ push $r4 // tmp1
+
+ shr b32 $r1 $r14 16
+ shr b32 $r2 $r13 16
+
+ clear b32 $r12
+ clear b32 $r11
+
+ // A_lo * B_lo
+ mulu $r12 $r14 $r13
+
+ // ( A_hi * B_lo ) << 16
+ mulu $r3 $r1 $r13 // tmp0 = A_hi * B_lo
+ mov b32 $r4 $r3
+ and $r3 0xffff // tmp0 = tmp0_lo
+ shl b32 $r3 16
+ shr b32 $r4 16 // tmp1 = tmp0_hi
+ add b32 $r12 $r3
+ adc b32 $r11 $r4
+
+ // ( A_lo * B_hi ) << 16
+ mulu $r3 $r14 $r2 // tmp0 = A_lo * B_hi
+ mov b32 $r4 $r3
+ and $r3 0xffff // tmp0 = tmp0_lo
+ shl b32 $r3 16
+ shr b32 $r4 16 // tmp1 = tmp0_hi
+ add b32 $r12 $r3
+ adc b32 $r11 $r4
+
+ // ( A_hi * B_hi ) << 32
+ mulu $r3 $r1 $r2 // tmp0 = A_hi * B_hi
+ add b32 $r11 $r3
+
+ pop $r4
+ pop $r3
+ pop $r2
+ pop $r1
+ ret
+#endif
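For reference, the same decomposition in C, using only 16x16->32 partial products exactly as the fuc code must (it has no widening multiply); the result can be sanity-checked against a native 64-bit multiply. This is a sketch, not part of the patch:

#include <stdint.h>
#include <stdio.h>

static void mulu32_32_64(uint32_t a, uint32_t b, uint32_t *lo, uint32_t *hi)
{
	uint32_t a_lo = a & 0xffff, a_hi = a >> 16;
	uint32_t b_lo = b & 0xffff, b_hi = b >> 16;
	uint32_t mid, old;

	*lo = a_lo * b_lo;		/* A_lo * B_lo */
	*hi = a_hi * b_hi;		/* (A_hi * B_hi) << 32 */

	mid = a_hi * b_lo;		/* (A_hi * B_lo) << 16 */
	old = *lo;
	*lo += mid << 16;
	*hi += (mid >> 16) + (*lo < old);	/* add + carry, like adc */

	mid = a_lo * b_hi;		/* (A_lo * B_hi) << 16 */
	old = *lo;
	*lo += mid << 16;
	*hi += (mid >> 16) + (*lo < old);
}

int main(void)
{
	uint32_t lo, hi;

	mulu32_32_64(0xdeadbeef, 0x10a10c42, &lo, &hi);
	printf("0x%08x%08x vs 0x%016llx\n", hi, lo,
	       0xdeadbeefULL * 0x10a10c42ULL);
	return 0;
}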
// $r14 - ns
// $r0 - zero
nsec:
+ push $r9
+ push $r8
nv_iord($r8, NV_PPWR_TIMER_LOW)
nsec_loop:
nv_iord($r9, NV_PPWR_TIMER_LOW)
sub b32 $r9 $r8
cmp b32 $r9 $r14
bra l #nsec_loop
+ pop $r8
+ pop $r9
ret
// busy-wait for a period of time
// $r11 - timeout (ns)
// $r0 - zero
wait:
+ push $r9
+ push $r8
nv_iord($r8, NV_PPWR_TIMER_LOW)
wait_loop:
nv_rd32($r10, $r14)
cmp b32 $r9 $r11
bra l #wait_loop
wait_done:
+ pop $r8
+ pop $r9
ret
// $r15 - current (kern)
bclr $flags $p0
iret
-// request the current process be sent a message after a timeout expires
+// calculate the number of ticks in the specified nanoseconds delay
+//
+// $r15 - current
+// $r14 - ns
+// $r14 - ticks (return)
+// $r0 - zero
+ticks_from_ns:
+ push $r12
+ push $r11
+
+	/* try not to lose precision (multiply first, then divide) */
+ imm32($r13, HW_TICKS_PER_US)
+ call #mulu32_32_64
+
+	/* use an immediate; this is OK because HW_TICKS_PER_US fits in 16 bits */
+ div $r12 $r12 1000
+
+	/* check whether the multiplication overflowed */
+ cmpu b32 $r11 0
+ bra e #ticks_from_ns_quit
+
+	/* overflow: divide first, then multiply, at the cost of precision */
+ div $r14 $r14 1000
+ imm32($r13, HW_TICKS_PER_US)
+ call #mulu32_32_64
+
+ /* this cannot overflow as long as HW_TICKS_PER_US < 1000 */
+
+ticks_from_ns_quit:
+ mov b32 $r14 $r12
+ pop $r11
+ pop $r12
+ ret
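
The two-path strategy above (multiply first for precision, fall back to
dividing first when the 64-bit product does not fit) is easier to see in C
with a 64-bit intermediate standing in for the $r11:$r12 pair. A sketch, using
the nv108 HW_TICKS_PER_US value defined later as an example:

    #include <stdint.h>

    #define HW_TICKS_PER_US 324     /* example: the nv108 value */

    static uint32_t ticks_from_ns(uint32_t ns)
    {
            uint64_t t = (uint64_t)ns * HW_TICKS_PER_US;

            if (!(t >> 32))                 /* product fits: divide last */
                    return (uint32_t)t / 1000;

            /* overflow: divide first at the cost of sub-microsecond
             * precision; cannot overflow while HW_TICKS_PER_US < 1000 */
            return (ns / 1000) * HW_TICKS_PER_US;
    }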
+
+// calculate the number of ticks in the specified microsecond delay
+//
+// $r15 - current
+// $r14 - us
+// $r14 - ticks (return)
+// $r0 - zero
+ticks_from_us:
+ push $r12
+ push $r11
+
+ /* simply multiply $us by HW_TICKS_PER_US */
+ imm32($r13, HW_TICKS_PER_US)
+ call #mulu32_32_64
+ mov b32 $r14 $r12
+
+	/* check whether the multiplication overflowed */
+ cmpu b32 $r11 0
+ bra e #ticks_from_us_quit
+
+ /* Overflow! */
+ clear b32 $r14
+
+ticks_from_us_quit:
+ pop $r11
+ pop $r12
+ ret
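
ticks_from_us follows the same pattern but clamps to zero instead of
retrying, since there is no cheaper fallback. The equivalent sketch:

    #include <stdint.h>

    #define HW_TICKS_PER_US 324     /* as in the sketch above */

    static uint32_t ticks_from_us(uint32_t us)
    {
            uint64_t t = (uint64_t)us * HW_TICKS_PER_US;

            return (t >> 32) ? 0 : (uint32_t)t;  /* clamp to zero on overflow */
    }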
+
+// calculate the number of microseconds in the specified ticks delay
//
// $r15 - current
// $r14 - ticks
+// $r14 - us (return)
+// $r0 - zero
+ticks_to_us:
+ /* simply divide $ticks by HW_TICKS_PER_US */
+ imm32($r13, HW_TICKS_PER_US)
+ div $r14 $r14 $r13
+
+ ret
+
+// request the current process be sent a message after a timeout expires
+//
+// $r15 - current
+// $r14 - ticks (make sure it is < 2^31 to avoid any possible overflow)
// $r0 - zero
timer:
+ push $r9
+ push $r8
+
// interrupts off to prevent racing with timer isr
bclr $flags ie0
ld b32 $r8 D[$r15 + #proc_time]
cmp b32 $r8 0
bra g #timer_done
- st b32 D[$r15 + #proc_time] $r14
- // halt watchdog timer temporarily and check for a pending
- // interrupt. if there's one already pending, we can just
- // bail since the timer isr will queue the next soonest
- // right after it's done
+ // halt watchdog timer temporarily
+ clear b32 $r8
nv_iowr(NV_PPWR_WATCHDOG_ENABLE, $r8)
+
+ // find out how much time elapsed since the last update
+ // of the watchdog and add this time to the wanted ticks
+ nv_iord($r8, NV_PPWR_WATCHDOG_TIME)
+ ld b32 $r9 D[$r0 + #time_prev]
+ sub b32 $r9 $r8
+ add b32 $r14 $r9
+ st b32 D[$r15 + #proc_time] $r14
+
+ // check for a pending interrupt. if there's one already
+ // pending, we can just bail since the timer isr will
+ // queue the next soonest right after it's done
nv_iord($r8, NV_PPWR_INTR)
and $r8 NV_PPWR_INTR_WATCHDOG
bra nz #timer_enable
cmp b32 $r14 $r0
bra e #timer_reset
cmp b32 $r14 $r8
- bra l #timer_done
- timer_reset:
- nv_iowr(NV_PPWR_WATCHDOG_TIME, $r14)
- st b32 D[$r0 + #time_prev] $r14
+ bra g #timer_enable
+ timer_reset:
+ nv_iowr(NV_PPWR_WATCHDOG_TIME, $r14)
+ st b32 D[$r0 + #time_prev] $r14
// re-enable the watchdog timer
timer_enable:
// interrupts back on
timer_done:
bset $flags ie0
+
+ pop $r8
+ pop $r9
ret
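
The elapsed-time compensation added above makes the requested delay relative
to the moment timer is called rather than to the last watchdog reload. A
hedged C rendering of just that bookkeeping, with placeholder variables
standing in for NV_PPWR_WATCHDOG_TIME and the #time_prev data-segment word:

    #include <stdint.h>

    static uint32_t watchdog_time;  /* stand-in for NV_PPWR_WATCHDOG_TIME */
    static uint32_t time_prev;      /* stand-in for D[#time_prev] */

    static uint32_t adjust_timeout(uint32_t ticks)
    {
            /* ticks already consumed since the watchdog was last reloaded;
             * the counter appears to count down from time_prev */
            uint32_t elapsed = time_prev - watchdog_time;

            return ticks + elapsed;
    }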
// send message to another process
// $r14 - process
// $r0 - zero
recv:
+ push $r9
+ push $r8
+
ld b32 $r8 D[$r14 + #proc_qget]
ld b32 $r9 D[$r14 + #proc_qput]
bclr $flags $p1
bset $flags $p1
pop $r15
recv_done:
+ pop $r8
+ pop $r9
ret
init:
*/ st b32 D[$r0] reg /*
*/ clear b32 $r0
#endif
+
+#define st(size, addr, reg) /*
+*/ movw $r0 addr /*
+*/ st size D[$r0] reg /*
+*/ clear b32 $r0
+
+#define ld(size, reg, addr) /*
+*/ movw $r0 addr /*
+*/ ld size reg D[$r0] /*
+*/ clear b32 $r0
+
+// does a 64+64 -> 64 unsigned addition (C = A + B)
+#define addu64(reg_a_c_hi, reg_a_c_lo, b_hi, b_lo) /*
+*/ add b32 reg_a_c_lo b_lo /*
+*/ adc b32 reg_a_c_hi b_hi
+
+// does a 64-64 -> 64 unsigned subtraction (C = A - B)
+#define subu64(reg_a_c_hi, reg_a_c_lo, b_hi, b_lo) /*
+*/ sub b32 reg_a_c_lo b_lo /*
+*/ sbb b32 reg_a_c_hi b_hi
*/ .b32 func
memx_func_head:
-handler(ENTER , 0x0001, 0x0000, #memx_func_enter)
+handler(ENTER , 0x0000, 0x0000, #memx_func_enter)
memx_func_next:
handler(LEAVE , 0x0000, 0x0000, #memx_func_leave)
handler(WR32 , 0x0000, 0x0002, #memx_func_wr32)
handler(WAIT , 0x0004, 0x0000, #memx_func_wait)
handler(DELAY , 0x0001, 0x0000, #memx_func_delay)
+handler(VBLANK, 0x0001, 0x0000, #memx_func_wait_vblank)
memx_func_tail:
.equ #memx_func_size #memx_func_next - #memx_func_head
.equ #memx_func_num (#memx_func_tail - #memx_func_head) / #memx_func_size
+memx_ts_start:
+.b32 0
+memx_ts_end:
+.b32 0
+
memx_data_head:
.skip 0x0800
memx_data_tail:
//
// $r15 - current (memx)
// $r4 - packet length
-// +00: bitmask of heads to wait for vblank on
// $r3 - opcode description
// $r0 - zero
memx_func_enter:
+#if NVKM_PPWR_CHIPSET == GT215
+ movw $r8 0x1610
+ nv_rd32($r7, $r8)
+ imm32($r6, 0xfffffffc)
+ and $r7 $r6
+ movw $r6 0x2
+ or $r7 $r6
+ nv_wr32($r8, $r7)
+#else
+ movw $r6 0x001620
+ imm32($r7, ~0x00000aa2);
+ nv_rd32($r8, $r6)
+ and $r8 $r7
+ nv_wr32($r6, $r8)
+
+ imm32($r7, ~0x00000001)
+ nv_rd32($r8, $r6)
+ and $r8 $r7
+ nv_wr32($r6, $r8)
+
+ movw $r6 0x0026f0
+ nv_rd32($r8, $r6)
+ and $r8 $r7
+ nv_wr32($r6, $r8)
+#endif
+
mov $r6 NV_PPWR_OUTPUT_SET_FB_PAUSE
nv_iowr(NV_PPWR_OUTPUT_SET, $r6)
memx_func_enter_wait:
nv_iord($r6, NV_PPWR_OUTPUT)
and $r6 NV_PPWR_OUTPUT_FB_PAUSE
bra z #memx_func_enter_wait
- //XXX: TODO
- ld b32 $r6 D[$r1 + 0x00]
- add b32 $r1 0x04
+
+ nv_iord($r6, NV_PPWR_TIMER_LOW)
+ st b32 D[$r0 + #memx_ts_start] $r6
ret
// description
// $r3 - opcode description
// $r0 - zero
memx_func_leave:
+ nv_iord($r6, NV_PPWR_TIMER_LOW)
+ st b32 D[$r0 + #memx_ts_end] $r6
+
mov $r6 NV_PPWR_OUTPUT_CLR_FB_PAUSE
nv_iowr(NV_PPWR_OUTPUT_CLR, $r6)
memx_func_leave_wait:
nv_iord($r6, NV_PPWR_OUTPUT)
and $r6 NV_PPWR_OUTPUT_FB_PAUSE
bra nz #memx_func_leave_wait
+
+#if NVKM_PPWR_CHIPSET == GT215
+ movw $r8 0x1610
+ nv_rd32($r7, $r8)
+ imm32($r6, 0xffffffcc)
+ and $r7 $r6
+ nv_wr32($r8, $r7)
+#else
+ movw $r6 0x0026f0
+ imm32($r7, 0x00000001)
+ nv_rd32($r8, $r6)
+ or $r8 $r7
+ nv_wr32($r6, $r8)
+
+ movw $r6 0x001620
+ nv_rd32($r8, $r6)
+ or $r8 $r7
+ nv_wr32($r6, $r8)
+
+ imm32($r7, 0x00000aa2);
+ nv_rd32($r8, $r6)
+ or $r8 $r7
+ nv_wr32($r6, $r8)
+#endif
+ ret
+
+#if NVKM_PPWR_CHIPSET < GF119
+// description
+//
+// $r15 - current (memx)
+// $r4 - packet length
+// +00: head to wait for vblank on
+// $r3 - opcode description
+// $r0 - zero
+memx_func_wait_vblank:
+ ld b32 $r6 D[$r1 + 0x00]
+ cmp b32 $r6 0x0
+ bra z #memx_func_wait_vblank_head0
+ cmp b32 $r6 0x1
+ bra z #memx_func_wait_vblank_head1
+ bra #memx_func_wait_vblank_fini
+
+ memx_func_wait_vblank_head1:
+ movw $r7 0x20
+ bra #memx_func_wait_vblank_0
+
+ memx_func_wait_vblank_head0:
+ movw $r7 0x8
+
+ memx_func_wait_vblank_0:
+ nv_iord($r6, NV_PPWR_INPUT)
+ and $r6 $r7
+ bra nz #memx_func_wait_vblank_0
+
+ memx_func_wait_vblank_1:
+ nv_iord($r6, NV_PPWR_INPUT)
+ and $r6 $r7
+ bra z #memx_func_wait_vblank_1
+
+ memx_func_wait_vblank_fini:
+ add b32 $r1 0x4
+ ret
+
+#else
+
+// XXX: currently a no-op
+//
+// $r15 - current (memx)
+// $r4 - packet length
+// +00: head to wait for vblank on
+// $r3 - opcode description
+// $r0 - zero
+memx_func_wait_vblank:
+ add b32 $r1 0x4
ret
+#endif
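
On chipsets before GF119 the wait is a two-phase poll of the head's vblank
bit in NV_PPWR_INPUT: first until the bit drops, so a vblank already in
progress is not mistaken for a fresh one, then until it rises again. A minimal
C model of that edge detection, with the register modelled as a volatile
variable and the bit masks taken from the assembly above:

    #include <stdint.h>

    static volatile uint32_t ppwr_input;    /* stand-in for NV_PPWR_INPUT */

    static void wait_vblank(uint32_t head)
    {
            uint32_t mask = head ? 0x20 : 0x08;     /* head1 / head0 bit */

            while (ppwr_input & mask)       /* let any vblank in progress end */
                    ;
            while (!(ppwr_input & mask))    /* then catch the next one starting */
                    ;
    }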
+
// description
//
// $r15 - current (memx)
push $r13
mov b32 $r1 $r12
mov b32 $r2 $r11
+
memx_exec_next:
- // fetch the packet header, and locate opcode info
+ // fetch the packet header
ld b32 $r3 D[$r1]
add b32 $r1 4
- shr b32 $r4 $r3 16
- mulu $r3 #memx_func_size
+ extr $r4 $r3 16:31
+ extr $r3 $r3 0:15
// execute the opcode handler
+ sub b32 $r3 1
+ mulu $r3 #memx_func_size
ld b32 $r5 D[$r3 + #memx_func_head + #memx_func]
call $r5
bra l #memx_exec_next
// send completion reply
+ ld b32 $r11 D[$r0 + #memx_ts_start]
+ ld b32 $r12 D[$r0 + #memx_ts_end]
+ sub b32 $r12 $r11
+ nv_iord($r11, NV_PPWR_INPUT)
pop $r13
pop $r14
call(send)
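
The packet header decoded in memx_exec_next now packs a 1-based opcode into
bits 0..15 (hence the sub b32 $r3 1 before indexing the handler table) and the
payload length into bits 16..31; the completion reply additionally carries the
memx_ts_end - memx_ts_start delta and the current NV_PPWR_INPUT state. A
sketch of the header layout in C, with illustrative names:

    #include <stdint.h>

    static void memx_decode(uint32_t header, uint32_t *index, uint32_t *argc)
    {
            *index = (header & 0xffff) - 1; /* extr 0:15, 1-based on the wire */
            *argc  = header >> 16;          /* extr 16:31 */
    }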
*/
#define NVKM_PPWR_CHIPSET GK208
+#define HW_TICKS_PER_US 324
#define NVKM_FALCON_PC24
#define NVKM_FALCON_UNSHIFTED_IO
.section #nv108_pwr_data
#define INCLUDE_PROC
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
#define INCLUDE_DATA
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
.section #nv108_pwr_code
#define INCLUDE_CODE
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
0x00000000,
/* 0x0058: proc_list_head */
0x54534f48,
- 0x00000379,
- 0x0000032a,
+ 0x00000453,
+ 0x00000404,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x584d454d,
- 0x00000464,
- 0x00000456,
+ 0x0000061c,
+ 0x0000060e,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x46524550,
- 0x00000468,
- 0x00000466,
+ 0x00000620,
+ 0x0000061e,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x5f433249,
- 0x0000086c,
- 0x00000713,
+ 0x00000a24,
+ 0x000008cb,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x54534554,
- 0x0000088d,
- 0x0000086e,
+ 0x00000a45,
+ 0x00000a26,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x454c4449,
- 0x00000898,
- 0x00000896,
+ 0x00000a50,
+ 0x00000a4e,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
/* 0x0370: memx_func_head */
- 0x00010000,
- 0x00000000,
- 0x000003a9,
-/* 0x037c: memx_func_next */
0x00000001,
0x00000000,
- 0x000003c7,
+ 0x00000483,
+/* 0x037c: memx_func_next */
0x00000002,
+ 0x00000000,
+ 0x00000500,
+ 0x00000003,
0x00000002,
- 0x000003df,
- 0x00040003,
+ 0x00000580,
+ 0x00040004,
+ 0x00000000,
+ 0x0000059d,
+ 0x00010005,
+ 0x00000000,
+ 0x000005b7,
+ 0x00010006,
0x00000000,
- 0x000003fc,
- 0x00010004,
+ 0x0000057b,
+/* 0x03b8: memx_func_tail */
+/* 0x03b8: memx_ts_start */
0x00000000,
- 0x00000416,
-/* 0x03ac: memx_func_tail */
-/* 0x03ac: memx_data_head */
+/* 0x03bc: memx_ts_end */
0x00000000,
+/* 0x03c0: memx_data_head */
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
-/* 0x0bac: memx_data_tail */
-/* 0x0bac: i2c_scl_map */
+ 0x00000000,
+/* 0x0bc0: memx_data_tail */
+/* 0x0bc0: i2c_scl_map */
0x00000400,
0x00000800,
0x00001000,
0x00020000,
0x00040000,
0x00080000,
-/* 0x0bd4: i2c_sda_map */
+/* 0x0be8: i2c_sda_map */
0x00100000,
0x00200000,
0x00400000,
0x10000000,
0x20000000,
0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
};
uint32_t nv108_pwr_code[] = {
- 0x02910ef5,
+ 0x031c0ef5,
/* 0x0004: rd32 */
0xf607a040,
0x04bd000e,
0x7000d4f1,
0xf8f61bf4,
/* 0x005d: nsec */
- 0xcf2c0800,
-/* 0x0062: nsec_loop */
+ 0xf990f900,
+ 0xcf2c0880,
+/* 0x0066: nsec_loop */
0x2c090088,
0xbb0099cf,
0x9ea60298,
- 0xf8f61ef4,
-/* 0x0071: wait */
- 0xcf2c0800,
-/* 0x0076: wait_loop */
+ 0xfcf61ef4,
+ 0xf890fc80,
+/* 0x0079: wait */
+ 0xf990f900,
+ 0xcf2c0880,
+/* 0x0082: wait_loop */
0xeeb20088,
0x0000047e,
0xadfddab2,
0x2c09100b,
0xbb0099cf,
0x9ba60298,
-/* 0x0093: wait_done */
- 0xf8e61ef4,
-/* 0x0095: intr_watchdog */
+/* 0x009f: wait_done */
+ 0xfce61ef4,
+ 0xf890fc80,
+/* 0x00a5: intr_watchdog */
0x03e99800,
0xf40096b0,
0x0a98280b,
0x029abb9a,
0x0d0e1cf4,
- 0x01de7e01,
+ 0x02617e01,
0xf494bd00,
-/* 0x00b2: intr_watchdog_next_time */
+/* 0x00c2: intr_watchdog_next_time */
0x0a98140e,
0x00a6b09b,
0xa6080bf4,
0x061cf49a,
-/* 0x00c0: intr_watchdog_next_time_set */
-/* 0x00c3: intr_watchdog_next_proc */
+/* 0x00d0: intr_watchdog_next_time_set */
+/* 0x00d3: intr_watchdog_next_proc */
0xb59b09b5,
0xe0b603e9,
0x68e6b158,
0xc81bf402,
-/* 0x00d2: intr */
+/* 0x00e2: intr */
0x00f900f8,
0x80f904bd,
0xa0f990f9,
0xc40088cf,
0x0bf40289,
0x9b00b51f,
- 0x957e580e,
+ 0xa57e580e,
0x09980000,
0x0096b09b,
0x000d0bf4,
0x0009f634,
0x09b504bd,
-/* 0x0125: intr_skip_watchdog */
+/* 0x0135: intr_skip_watchdog */
0x0089e49a,
0x360bf408,
0xcf068849,
0xc0f900cc,
0xf14f484e,
0x0d5453e3,
- 0x023f7e00,
+ 0x02c27e00,
0x40c0fc00,
0x0cf604c0,
-/* 0x0157: intr_subintr_skip_fifo */
+/* 0x0167: intr_subintr_skip_fifo */
0x4004bd00,
0x09f60688,
-/* 0x015f: intr_skip_subintr */
+/* 0x016f: intr_skip_subintr */
0xc404bd00,
0x0bf42089,
0xbfa4f107,
-/* 0x0169: intr_skip_pause */
+/* 0x0179: intr_skip_pause */
0x4089c4ff,
0xf1070bf4,
-/* 0x0173: intr_skip_user0 */
+/* 0x0183: intr_skip_user0 */
0x00ffbfa4,
0x0008f604,
0x80fc04bd,
0xfca0fcb0,
0xfc80fc90,
0x0032f400,
-/* 0x0196: timer */
- 0x32f401f8,
- 0x03f89810,
- 0xf40086b0,
- 0xfeb53a1c,
- 0xf6380003,
+/* 0x01a6: ticks_from_ns */
+ 0xc0f901f8,
+ 0xd7f1b0f9,
+ 0xd3f00144,
+ 0x7721f500,
+ 0xe8ccec03,
+ 0x00b4b003,
+ 0xec120bf4,
+ 0xf103e8ee,
+ 0xf00144d7,
+ 0x21f500d3,
+/* 0x01ce: ticks_from_ns_quit */
+ 0xceb20377,
+ 0xc0fcb0fc,
+/* 0x01d6: ticks_from_us */
+ 0xc0f900f8,
+ 0xd7f1b0f9,
+ 0xd3f00144,
+ 0x7721f500,
+ 0xb0ceb203,
+ 0x0bf400b4,
+/* 0x01ef: ticks_from_us_quit */
+ 0xfce4bd05,
+ 0xf8c0fcb0,
+/* 0x01f5: ticks_to_us */
+ 0x44d7f100,
+ 0x00d3f001,
+ 0xf8ecedff,
+/* 0x0201: timer */
+ 0xf990f900,
+ 0x1032f480,
+ 0xb003f898,
+ 0x1cf40086,
+ 0x0084bd4a,
+ 0x0008f638,
+ 0x340804bd,
+ 0x980088cf,
+ 0x98bb9a09,
+ 0x00e9bb02,
+ 0x0803feb5,
+ 0x0088cf08,
+ 0xf40284f0,
+ 0x34081c1b,
+ 0xa60088cf,
+ 0x080bf4e0,
+ 0x1cf4e8a6,
+/* 0x0245: timer_reset */
+ 0xf634000d,
+ 0x04bd000e,
+/* 0x024f: timer_enable */
+ 0x089a0eb5,
+ 0xf6380001,
0x04bd0008,
- 0x88cf0808,
- 0x0284f000,
- 0x081c1bf4,
- 0x0088cf34,
- 0x0bf4e0a6,
- 0xf4e8a608,
-/* 0x01c6: timer_reset */
- 0x3400161e,
- 0xbd000ef6,
- 0x9a0eb504,
-/* 0x01d0: timer_enable */
- 0x38000108,
- 0xbd0008f6,
-/* 0x01d9: timer_done */
- 0x1031f404,
-/* 0x01de: send_proc */
- 0x80f900f8,
- 0xe89890f9,
- 0x04e99805,
- 0xa60486f0,
- 0x2a0bf489,
- 0x940398c4,
- 0x80b60488,
- 0x008ebb18,
- 0xb500fa98,
- 0x8db5008a,
- 0x028cb501,
- 0xb6038bb5,
- 0x94f00190,
- 0x04e9b507,
-/* 0x0217: send_done */
- 0xfc0231f4,
- 0xf880fc90,
-/* 0x021d: find */
- 0x0880f900,
- 0x0131f458,
-/* 0x0224: find_loop */
- 0xa6008a98,
- 0x100bf4ae,
- 0xb15880b6,
- 0xf4026886,
- 0x32f4f11b,
-/* 0x0239: find_done */
- 0xfc8eb201,
-/* 0x023f: send */
- 0x7e00f880,
- 0xf400021d,
- 0x00f89b01,
-/* 0x0248: recv */
- 0x9805e898,
- 0x32f404e9,
- 0xf489a601,
- 0x89c43c0b,
- 0x0180b603,
- 0xb50784f0,
- 0xea9805e8,
- 0xfef0f902,
- 0xf0f9018f,
- 0x9994efb2,
- 0x00e9bb04,
- 0x9818e0b6,
- 0xec9803eb,
- 0x01ed9802,
- 0xf900ee98,
- 0xfef0fca5,
- 0x31f400f8,
-/* 0x028f: recv_done */
- 0xf8f0fc01,
-/* 0x0291: init */
- 0x01084100,
- 0xe70011cf,
- 0xb6010911,
- 0x14fe0814,
- 0x00e04100,
- 0x000013f0,
- 0x0001f61c,
- 0xff0104bd,
- 0x01f61400,
- 0x0104bd00,
- 0x0015f102,
- 0xf6100008,
- 0x04bd0001,
- 0xf000d241,
- 0x10fe0013,
- 0x1031f400,
- 0x38000101,
+/* 0x0258: timer_done */
+ 0xfc1031f4,
+ 0xf890fc80,
+/* 0x0261: send_proc */
+ 0xf980f900,
+ 0x05e89890,
+ 0xf004e998,
+ 0x89a60486,
+ 0xc42a0bf4,
+ 0x88940398,
+ 0x1880b604,
+ 0x98008ebb,
+ 0x8ab500fa,
+ 0x018db500,
+ 0xb5028cb5,
+ 0x90b6038b,
+ 0x0794f001,
+ 0xf404e9b5,
+/* 0x029a: send_done */
+ 0x90fc0231,
+ 0x00f880fc,
+/* 0x02a0: find */
+ 0x580880f9,
+/* 0x02a7: find_loop */
+ 0x980131f4,
+ 0xaea6008a,
+ 0xb6100bf4,
+ 0x86b15880,
+ 0x1bf40268,
+ 0x0132f4f1,
+/* 0x02bc: find_done */
+ 0x80fc8eb2,
+/* 0x02c2: send */
+ 0xa07e00f8,
+ 0x01f40002,
+/* 0x02cb: recv */
+ 0xf900f89b,
+ 0x9880f990,
+ 0xe99805e8,
+ 0x0132f404,
+ 0x0bf489a6,
+ 0x0389c43c,
+ 0xf00180b6,
+ 0xe8b50784,
+ 0x02ea9805,
+ 0x8ffef0f9,
+ 0xb2f0f901,
+ 0x049994ef,
+ 0xb600e9bb,
+ 0xeb9818e0,
+ 0x02ec9803,
+ 0x9801ed98,
+ 0xa5f900ee,
+ 0xf8fef0fc,
+ 0x0131f400,
+/* 0x0316: recv_done */
+ 0x80fcf0fc,
+ 0x00f890fc,
+/* 0x031c: init */
+ 0xcf010841,
+ 0x11e70011,
+ 0x14b60109,
+ 0x0014fe08,
+ 0xf000e041,
+ 0x1c000013,
0xbd0001f6,
-/* 0x02db: init_proc */
- 0x98580f04,
- 0x16b001f1,
- 0xfa0bf400,
- 0xf0b615f9,
- 0xf20ef458,
-/* 0x02ec: host_send */
- 0xcf04b041,
- 0xa0420011,
- 0x0022cf04,
- 0x0bf412a6,
- 0x071ec42e,
- 0xb704ee94,
- 0x980270e0,
- 0xec9803eb,
- 0x01ed9802,
- 0x7e00ee98,
- 0xb600023f,
- 0x1ec40110,
- 0x04b0400f,
- 0xbd000ef6,
- 0xc70ef404,
-/* 0x0328: host_send_done */
-/* 0x032a: host_recv */
- 0x494100f8,
- 0x5413f14e,
- 0xf4e1a652,
-/* 0x0336: host_recv_wait */
- 0xcc41b90b,
- 0x0011cf04,
- 0xcf04c842,
- 0x16f00022,
- 0xf412a608,
- 0x23c4ef0b,
- 0x0434b607,
- 0x02f030b7,
- 0xb5033bb5,
- 0x3db5023c,
- 0x003eb501,
- 0xf00120b6,
- 0xc8400f24,
- 0x0002f604,
- 0x400204bd,
- 0x02f60000,
- 0xf804bd00,
-/* 0x0379: host_init */
- 0x00804100,
- 0xf11014b6,
- 0x40027015,
- 0x01f604d0,
+ 0x00ff0104,
+ 0x0001f614,
+ 0x020104bd,
+ 0x080015f1,
+ 0x01f61000,
0x4104bd00,
+ 0x13f000e2,
+ 0x0010fe00,
+ 0x011031f4,
+ 0xf6380001,
+ 0x04bd0001,
+/* 0x0366: init_proc */
+ 0xf198580f,
+ 0x0016b001,
+ 0xf9fa0bf4,
+ 0x58f0b615,
+/* 0x0377: mulu32_32_64 */
+ 0xf9f20ef4,
+ 0xf920f910,
+ 0x9540f930,
+ 0xd29510e1,
+ 0xbdc4bd10,
+ 0xc0edffb4,
+ 0xb2301dff,
+ 0xff34f134,
+ 0x1034b6ff,
+ 0xbb1045b6,
+ 0xb4bb00c3,
+ 0x30e2ff01,
+ 0x34f134b2,
+ 0x34b6ffff,
+ 0x1045b610,
+ 0xbb00c3bb,
+ 0x12ff01b4,
+ 0x00b3bb30,
+ 0x30fc40fc,
+ 0x10fc20fc,
+/* 0x03c6: host_send */
+ 0xb04100f8,
+ 0x0011cf04,
+ 0xcf04a042,
+ 0x12a60022,
+ 0xc42e0bf4,
+ 0xee94071e,
+ 0x70e0b704,
+ 0x03eb9802,
+ 0x9802ec98,
+ 0xee9801ed,
+ 0x02c27e00,
+ 0x0110b600,
+ 0x400f1ec4,
+ 0x0ef604b0,
+ 0xf404bd00,
+/* 0x0402: host_send_done */
+ 0x00f8c70e,
+/* 0x0404: host_recv */
+ 0xf14e4941,
+ 0xa6525413,
+ 0xb90bf4e1,
+/* 0x0410: host_recv_wait */
+ 0xcf04cc41,
+ 0xc8420011,
+ 0x0022cf04,
+ 0xa60816f0,
+ 0xef0bf412,
+ 0xb60723c4,
+ 0x30b70434,
+ 0x3bb502f0,
+ 0x023cb503,
+ 0xb5013db5,
+ 0x20b6003e,
+ 0x0f24f001,
+ 0xf604c840,
+ 0x04bd0002,
+ 0x00004002,
+ 0xbd0002f6,
+/* 0x0453: host_init */
+ 0x4100f804,
0x14b60080,
- 0xf015f110,
- 0x04dc4002,
+ 0x7015f110,
+ 0x04d04002,
+ 0xbd0001f6,
+ 0x00804104,
+ 0xf11014b6,
+ 0x4002f015,
+ 0x01f604dc,
+ 0x0104bd00,
+ 0x04c44001,
0xbd0001f6,
- 0x40010104,
- 0x01f604c4,
- 0xf804bd00,
-/* 0x03a9: memx_func_enter */
- 0x40040600,
- 0x06f607e0,
-/* 0x03b3: memx_func_enter_wait */
- 0x4604bd00,
- 0x66cf07c0,
- 0x0464f000,
- 0x98f70bf4,
- 0x10b60016,
-/* 0x03c7: memx_func_leave */
- 0x0600f804,
- 0x07e44004,
- 0xbd0006f6,
-/* 0x03d1: memx_func_leave_wait */
- 0x07c04604,
- 0xf00066cf,
- 0x1bf40464,
-/* 0x03df: memx_func_wr32 */
- 0x9800f8f7,
- 0x15980016,
- 0x0810b601,
- 0x50f960f9,
+/* 0x0483: memx_func_enter */
+ 0xf100f804,
+ 0xf1162067,
+ 0xf1f55d77,
+ 0xb2ffff73,
+ 0x00047e6e,
+ 0xfdd8b200,
+ 0x60f90487,
+ 0xd0fc80f9,
+ 0x2e7ee0fc,
+ 0x77f10000,
+ 0x73f1fffe,
+ 0x6eb2ffff,
+ 0x0000047e,
+ 0x87fdd8b2,
+ 0xf960f904,
+ 0xfcd0fc80,
+ 0x002e7ee0,
+ 0xf067f100,
+ 0x7e6eb226,
+ 0xb2000004,
+ 0x0487fdd8,
+ 0x80f960f9,
0xe0fcd0fc,
0x00002e7e,
- 0xf40242b6,
- 0x00f8e81b,
-/* 0x03fc: memx_func_wait */
- 0x88cf2c08,
- 0x001e9800,
- 0x98011d98,
- 0x1b98021c,
- 0x1010b603,
- 0x0000717e,
-/* 0x0416: memx_func_delay */
- 0x1e9800f8,
- 0x0410b600,
- 0x00005d7e,
-/* 0x0422: memx_exec */
- 0xe0f900f8,
- 0xc1b2d0f9,
-/* 0x042a: memx_exec_next */
- 0x1398b2b2,
- 0x0410b600,
- 0xf0103495,
- 0x35980c30,
- 0xa655f9de,
- 0xed1ef412,
+ 0xe0400406,
+ 0x0006f607,
+/* 0x04ea: memx_func_enter_wait */
+ 0xc04604bd,
+ 0x0066cf07,
+ 0xf40464f0,
+ 0x2c06f70b,
+ 0xb50066cf,
+ 0x00f8ee06,
+/* 0x0500: memx_func_leave */
+ 0x66cf2c06,
+ 0xef06b500,
+ 0xe4400406,
+ 0x0006f607,
+/* 0x0512: memx_func_leave_wait */
+ 0xc04604bd,
+ 0x0066cf07,
+ 0xf40464f0,
+ 0x67f1f71b,
+ 0x77f126f0,
+ 0x73f00001,
+ 0x7e6eb200,
+ 0xb2000004,
+ 0x0587fdd8,
+ 0x80f960f9,
0xe0fcd0fc,
- 0x00023f7e,
-/* 0x044a: memx_info */
- 0xac4c00f8,
+ 0x00002e7e,
+ 0x162067f1,
+ 0x047e6eb2,
+ 0xd8b20000,
+ 0xf90587fd,
+ 0xfc80f960,
+ 0x7ee0fcd0,
+ 0xf100002e,
+ 0xf00aa277,
+ 0x6eb20073,
+ 0x0000047e,
+ 0x87fdd8b2,
+ 0xf960f905,
+ 0xfcd0fc80,
+ 0x002e7ee0,
+/* 0x057b: memx_func_wait_vblank */
+ 0xb600f800,
+ 0x00f80410,
+/* 0x0580: memx_func_wr32 */
+ 0x98001698,
+ 0x10b60115,
+ 0xf960f908,
+ 0xfcd0fc50,
+ 0x002e7ee0,
+ 0x0242b600,
+ 0xf8e81bf4,
+/* 0x059d: memx_func_wait */
+ 0xcf2c0800,
+ 0x1e980088,
+ 0x011d9800,
+ 0x98021c98,
+ 0x10b6031b,
+ 0x00797e10,
+/* 0x05b7: memx_func_delay */
+ 0x9800f800,
+ 0x10b6001e,
+ 0x005d7e04,
+/* 0x05c3: memx_exec */
+ 0xf900f800,
+ 0xb2d0f9e0,
+/* 0x05cb: memx_exec_next */
+ 0x98b2b2c1,
+ 0x10b60013,
+ 0xf034e704,
+ 0xe033e701,
+ 0x0132b601,
+ 0x980c30f0,
+ 0x55f9de35,
+ 0x1ef412a6,
+ 0xee0b98e5,
+ 0xbbef0c98,
+ 0xc44b02cb,
+ 0x00bbcf07,
+ 0xe0fcd0fc,
+ 0x0002c27e,
+/* 0x0602: memx_info */
+ 0xc04c00f8,
0x08004b03,
- 0x00023f7e,
-/* 0x0456: memx_recv */
+ 0x0002c27e,
+/* 0x060e: memx_recv */
0xd6b000f8,
- 0xc90bf401,
+ 0xb20bf401,
0xf400d6b0,
0x00f8eb0b,
-/* 0x0464: memx_init */
-/* 0x0466: perf_recv */
+/* 0x061c: memx_init */
+/* 0x061e: perf_recv */
0x00f800f8,
-/* 0x0468: perf_init */
-/* 0x046a: i2c_drive_scl */
+/* 0x0620: perf_init */
+/* 0x0622: i2c_drive_scl */
0x36b000f8,
0x0d0bf400,
0xf607e040,
0x04bd0001,
-/* 0x047a: i2c_drive_scl_lo */
+/* 0x0632: i2c_drive_scl_lo */
0xe44000f8,
0x0001f607,
0x00f804bd,
-/* 0x0484: i2c_drive_sda */
+/* 0x063c: i2c_drive_sda */
0xf40036b0,
0xe0400d0b,
0x0002f607,
0x00f804bd,
-/* 0x0494: i2c_drive_sda_lo */
+/* 0x064c: i2c_drive_sda_lo */
0xf607e440,
0x04bd0002,
-/* 0x049e: i2c_sense_scl */
+/* 0x0656: i2c_sense_scl */
0x32f400f8,
0x07c44301,
0xfd0033cf,
0x0bf40431,
0x0131f406,
-/* 0x04b0: i2c_sense_scl_done */
-/* 0x04b2: i2c_sense_sda */
+/* 0x0668: i2c_sense_scl_done */
+/* 0x066a: i2c_sense_sda */
0x32f400f8,
0x07c44301,
0xfd0033cf,
0x0bf40432,
0x0131f406,
-/* 0x04c4: i2c_sense_sda_done */
-/* 0x04c6: i2c_raise_scl */
+/* 0x067c: i2c_sense_sda_done */
+/* 0x067e: i2c_raise_scl */
0x40f900f8,
0x03089844,
- 0x046a7e01,
-/* 0x04d1: i2c_raise_scl_wait */
+ 0x06227e01,
+/* 0x0689: i2c_raise_scl_wait */
0x03e84e00,
0x00005d7e,
- 0x00049e7e,
+ 0x0006567e,
0xb60901f4,
0x1bf40142,
-/* 0x04e5: i2c_raise_scl_done */
+/* 0x069d: i2c_raise_scl_done */
0xf840fcef,
-/* 0x04e9: i2c_start */
- 0x049e7e00,
+/* 0x06a1: i2c_start */
+ 0x06567e00,
0x0d11f400,
- 0x0004b27e,
+ 0x00066a7e,
0xf40611f4,
-/* 0x04fa: i2c_start_rep */
+/* 0x06b2: i2c_start_rep */
0x00032e0e,
- 0x00046a7e,
- 0x847e0103,
- 0x76bb0004,
+ 0x0006227e,
+ 0x3c7e0103,
+ 0x76bb0006,
0x0465b600,
0x659450f9,
0x0256bb04,
0x75fd50bd,
0x7e50fc04,
- 0xb60004c6,
+ 0xb600067e,
0x11f40464,
-/* 0x0525: i2c_start_send */
+/* 0x06dd: i2c_start_send */
0x7e00031d,
- 0x4e000484,
+ 0x4e00063c,
0x5d7e1388,
0x00030000,
- 0x00046a7e,
+ 0x0006227e,
0x7e13884e,
-/* 0x053f: i2c_start_out */
+/* 0x06f7: i2c_start_out */
0xf800005d,
-/* 0x0541: i2c_stop */
+/* 0x06f9: i2c_stop */
0x7e000300,
- 0x0300046a,
- 0x04847e00,
+ 0x03000622,
+ 0x063c7e00,
0x03e84e00,
0x00005d7e,
- 0x6a7e0103,
- 0x884e0004,
+ 0x227e0103,
+ 0x884e0006,
0x005d7e13,
0x7e010300,
- 0x4e000484,
+ 0x4e00063c,
0x5d7e1388,
0x00f80000,
-/* 0x0570: i2c_bitw */
- 0x0004847e,
+/* 0x0728: i2c_bitw */
+ 0x00063c7e,
0x7e03e84e,
0xbb00005d,
0x65b60076,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x0004c67e,
+ 0x00067e7e,
0xf40464b6,
0x884e1711,
0x005d7e13,
0x7e000300,
- 0x4e00046a,
+ 0x4e000622,
0x5d7e1388,
-/* 0x05ae: i2c_bitw_out */
+/* 0x0766: i2c_bitw_out */
0x00f80000,
-/* 0x05b0: i2c_bitr */
- 0x847e0103,
- 0xe84e0004,
+/* 0x0768: i2c_bitr */
+ 0x3c7e0103,
+ 0xe84e0006,
0x005d7e03,
0x0076bb00,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
- 0xc67e50fc,
- 0x64b60004,
+ 0x7e7e50fc,
+ 0x64b60006,
0x1a11f404,
- 0x0004b27e,
- 0x6a7e0003,
- 0x884e0004,
+ 0x00066a7e,
+ 0x227e0003,
+ 0x884e0006,
0x005d7e13,
0x013cf000,
-/* 0x05f3: i2c_bitr_done */
+/* 0x07ab: i2c_bitr_done */
0xf80131f4,
-/* 0x05f5: i2c_get_byte */
+/* 0x07ad: i2c_get_byte */
0x04000500,
-/* 0x05f9: i2c_get_byte_next */
+/* 0x07b1: i2c_get_byte_next */
0x0154b608,
0xb60076bb,
0x50f90465,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0x05b07e50,
+ 0x07687e50,
0x0464b600,
0xfd2a11f4,
0x42b60553,
0x0256bb04,
0x75fd50bd,
0x7e50fc04,
- 0xb6000570,
-/* 0x0642: i2c_get_byte_done */
+ 0xb6000728,
+/* 0x07fa: i2c_get_byte_done */
0x00f80464,
-/* 0x0644: i2c_put_byte */
-/* 0x0646: i2c_put_byte_next */
+/* 0x07fc: i2c_put_byte */
+/* 0x07fe: i2c_put_byte_next */
0x42b60804,
0x3854ff01,
0xb60076bb,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0x05707e50,
+ 0x07287e50,
0x0464b600,
0xb03411f4,
0x1bf40046,
0x04659450,
0xbd0256bb,
0x0475fd50,
- 0xb07e50fc,
- 0x64b60005,
+ 0x687e50fc,
+ 0x64b60007,
0x0f11f404,
0xb00076bb,
0x1bf40136,
0x0132f406,
-/* 0x069c: i2c_put_byte_done */
-/* 0x069e: i2c_addr */
+/* 0x0854: i2c_put_byte_done */
+/* 0x0856: i2c_addr */
0x76bb00f8,
0x0465b600,
0x659450f9,
0x0256bb04,
0x75fd50bd,
0x7e50fc04,
- 0xb60004e9,
+ 0xb60006a1,
0x11f40464,
0x2ec3e729,
0x0134b601,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x0006447e,
-/* 0x06e3: i2c_addr_done */
+ 0x0007fc7e,
+/* 0x089b: i2c_addr_done */
0xf80464b6,
-/* 0x06e5: i2c_acquire_addr */
+/* 0x089d: i2c_acquire_addr */
0xf8cec700,
0xb705e4b6,
0xf8d014e0,
-/* 0x06f1: i2c_acquire */
- 0x06e57e00,
+/* 0x08a9: i2c_acquire */
+ 0x089d7e00,
0x00047e00,
0x03d9f000,
0x00002e7e,
-/* 0x0702: i2c_release */
- 0xe57e00f8,
- 0x047e0006,
+/* 0x08ba: i2c_release */
+ 0x9d7e00f8,
+ 0x047e0008,
0xdaf00000,
0x002e7e03,
-/* 0x0713: i2c_recv */
+/* 0x08cb: i2c_recv */
0xf400f800,
0xc1c70132,
0x0214b6f8,
0xf52816b0,
0xb801371f,
- 0x000bd413,
+ 0x000be813,
0xb8003298,
- 0x000bac13,
+ 0x000bc013,
0xf4003198,
0xd0f90231,
0xd0f9e0f9,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x0006f17e,
+ 0x0008a97e,
0xfc0464b6,
0x00d6b0d0,
0x00b01bf5,
0x0256bb04,
0x75fd50bd,
0x7e50fc04,
- 0xb600069e,
+ 0xb6000856,
0x11f50464,
0xc5c700cc,
0x0076bbe0,
0x04659450,
0xbd0256bb,
0x0475fd50,
- 0x447e50fc,
- 0x64b60006,
+ 0xfc7e50fc,
+ 0x64b60007,
0xa911f504,
0xbb010500,
0x65b60076,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x00069e7e,
+ 0x0008567e,
0xf50464b6,
0xbb008711,
0x65b60076,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x0005f57e,
+ 0x0007ad7e,
0xf40464b6,
0x5bcb6711,
0x0076bbe0,
0x04659450,
0xbd0256bb,
0x0475fd50,
- 0x417e50fc,
- 0x64b60005,
+ 0xf97e50fc,
+ 0x64b60006,
0xbd5bb204,
0x410ef474,
-/* 0x0818: i2c_recv_not_rd08 */
+/* 0x09d0: i2c_recv_not_rd08 */
0xf401d6b0,
0x00053b1b,
- 0x00069e7e,
+ 0x0008567e,
0xc73211f4,
- 0x447ee0c5,
- 0x11f40006,
+ 0xfc7ee0c5,
+ 0x11f40007,
0x7e000528,
- 0xf400069e,
+ 0xf4000856,
0xb5c71f11,
- 0x06447ee0,
+ 0x07fc7ee0,
0x1511f400,
- 0x0005417e,
+ 0x0006f97e,
0xc5c774bd,
0x091bf408,
0xf40232f4,
-/* 0x0856: i2c_recv_not_wr08 */
-/* 0x0856: i2c_recv_done */
+/* 0x0a0e: i2c_recv_not_wr08 */
+/* 0x0a0e: i2c_recv_done */
0xcec7030e,
- 0x07027ef8,
+ 0x08ba7ef8,
0xfce0fc00,
0x0912f4d0,
- 0x3f7e7cb2,
-/* 0x086a: i2c_recv_exit */
+ 0xc27e7cb2,
+/* 0x0a22: i2c_recv_exit */
0x00f80002,
-/* 0x086c: i2c_init */
-/* 0x086e: test_recv */
+/* 0x0a24: i2c_init */
+/* 0x0a26: test_recv */
0x584100f8,
0x0011cf04,
0x400110b6,
0xf104bd00,
0xf1d900e7,
0x7e134fe3,
- 0xf8000196,
-/* 0x088d: test_init */
+ 0xf8000201,
+/* 0x0a45: test_init */
0x08004e00,
- 0x0001967e,
-/* 0x0896: idle_recv */
+ 0x0002017e,
+/* 0x0a4e: idle_recv */
0x00f800f8,
-/* 0x0898: idle */
+/* 0x0a50: idle */
0x410031f4,
0x11cf0454,
0x0110b600,
0xf6045440,
0x04bd0001,
-/* 0x08ac: idle_loop */
+/* 0x0a64: idle_loop */
0x32f45801,
-/* 0x08b1: idle_proc */
-/* 0x08b1: idle_proc_exec */
+/* 0x0a69: idle_proc */
+/* 0x0a69: idle_proc_exec */
0xb210f902,
- 0x02487e1e,
+ 0x02cb7e1e,
0xf410fc00,
0x31f40911,
0xf00ef402,
-/* 0x08c4: idle_proc_next */
+/* 0x0a7c: idle_proc_next */
0xa65810b6,
0xe81bf41f,
0xf4e002f4,
0x00000000,
0x00000000,
0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
};
*/
#define NVKM_PPWR_CHIPSET GT215
+#define HW_TICKS_PER_US 203 // should be 202.5
//#define NVKM_FALCON_PC24
//#define NVKM_FALCON_UNSHIFTED_IO
.section #nva3_pwr_data
#define INCLUDE_PROC
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
#define INCLUDE_DATA
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
.section #nva3_pwr_code
#define INCLUDE_CODE
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
0x00000000,
/* 0x0058: proc_list_head */
0x54534f48,
- 0x00000430,
- 0x000003cd,
+ 0x00000512,
+ 0x000004af,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x584d454d,
- 0x00000542,
- 0x00000534,
+ 0x000006e0,
+ 0x000006d2,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x46524550,
- 0x00000546,
- 0x00000544,
+ 0x000006e4,
+ 0x000006e2,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x5f433249,
- 0x00000976,
- 0x00000819,
+ 0x00000b14,
+ 0x000009b7,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x54534554,
- 0x0000099f,
- 0x00000978,
+ 0x00000b3d,
+ 0x00000b16,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x454c4449,
- 0x000009ab,
- 0x000009a9,
+ 0x00000b49,
+ 0x00000b47,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
/* 0x0370: memx_func_head */
- 0x00010000,
- 0x00000000,
- 0x0000046f,
-/* 0x037c: memx_func_next */
0x00000001,
0x00000000,
- 0x00000496,
+ 0x00000551,
+/* 0x037c: memx_func_next */
0x00000002,
+ 0x00000000,
+ 0x000005a8,
+ 0x00000003,
0x00000002,
- 0x000004b7,
- 0x00040003,
+ 0x0000063a,
+ 0x00040004,
+ 0x00000000,
+ 0x00000656,
+ 0x00010005,
+ 0x00000000,
+ 0x00000673,
+ 0x00010006,
0x00000000,
- 0x000004d3,
- 0x00010004,
+ 0x000005f8,
+/* 0x03b8: memx_func_tail */
+/* 0x03b8: memx_ts_start */
0x00000000,
- 0x000004f0,
-/* 0x03ac: memx_func_tail */
-/* 0x03ac: memx_data_head */
+/* 0x03bc: memx_ts_end */
0x00000000,
+/* 0x03c0: memx_data_head */
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
-/* 0x0bac: memx_data_tail */
-/* 0x0bac: i2c_scl_map */
+ 0x00000000,
+/* 0x0bc0: memx_data_tail */
+/* 0x0bc0: i2c_scl_map */
0x00001000,
0x00004000,
0x00010000,
0x01000000,
0x04000000,
0x10000000,
-/* 0x0bd4: i2c_sda_map */
+/* 0x0be8: i2c_sda_map */
0x00002000,
0x00008000,
0x00020000,
0x02000000,
0x08000000,
0x20000000,
-/* 0x0bfc: i2c_ctrl */
+/* 0x0c10: i2c_ctrl */
0x0000e138,
0x0000e150,
0x0000e168,
0x00000000,
0x00000000,
0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
};
uint32_t nva3_pwr_code[] = {
- 0x030d0ef5,
+ 0x039e0ef5,
/* 0x0004: rd32 */
0x07a007f1,
0xd00604b6,
0xd4f100dd,
0x1bf47000,
/* 0x007f: nsec */
- 0xf000f8f2,
+ 0xf900f8f2,
+ 0xf080f990,
0x84b62c87,
0x0088cf06,
-/* 0x0088: nsec_loop */
+/* 0x008c: nsec_loop */
0xb62c97f0,
0x99cf0694,
0x0298bb00,
0xf4069eb8,
- 0x00f8f11e,
-/* 0x009c: wait */
+ 0x80fcf11e,
+ 0x00f890fc,
+/* 0x00a4: wait */
+ 0x80f990f9,
0xb62c87f0,
0x88cf0684,
-/* 0x00a5: wait_loop */
+/* 0x00b1: wait_loop */
0x02eeb900,
0xb90421f4,
0xadfd02da,
0x0099cf06,
0xb80298bb,
0x1ef4069b,
-/* 0x00c9: wait_done */
-/* 0x00cb: intr_watchdog */
- 0x9800f8df,
+/* 0x00d5: wait_done */
+ 0xfc80fcdf,
+/* 0x00db: intr_watchdog */
+ 0x9800f890,
0x96b003e9,
0x2a0bf400,
0xbb9a0a98,
0x1cf4029a,
0x01d7f00f,
- 0x025421f5,
+ 0x02dd21f5,
0x0ef494bd,
-/* 0x00e9: intr_watchdog_next_time */
+/* 0x00f9: intr_watchdog_next_time */
0x9b0a9815,
0xf400a6b0,
0x9ab8090b,
0x061cf406,
-/* 0x00f8: intr_watchdog_next_time_set */
-/* 0x00fb: intr_watchdog_next_proc */
+/* 0x0108: intr_watchdog_next_time_set */
+/* 0x010b: intr_watchdog_next_proc */
0x809b0980,
0xe0b603e9,
0x68e6b158,
0xc61bf402,
-/* 0x010a: intr */
+/* 0x011a: intr */
0x00f900f8,
0x80f904bd,
0xa0f990f9,
0xf40289c4,
0x0080230b,
0x58e7f09b,
- 0x98cb21f4,
+ 0x98db21f4,
0x96b09b09,
0x110bf400,
0xb63407f0,
0x09d00604,
0x8004bd00,
-/* 0x016e: intr_skip_watchdog */
+/* 0x017e: intr_skip_watchdog */
0x89e49a09,
0x0bf40800,
0x8897f148,
0x48e7f1c0,
0x53e3f14f,
0x00d7f054,
- 0x02b921f5,
+ 0x034221f5,
0x07f1c0fc,
0x04b604c0,
0x000cd006,
-/* 0x01ae: intr_subintr_skip_fifo */
+/* 0x01be: intr_subintr_skip_fifo */
0x07f104bd,
0x04b60688,
0x0009d006,
-/* 0x01ba: intr_skip_subintr */
+/* 0x01ca: intr_skip_subintr */
0x89c404bd,
0x070bf420,
0xffbfa4f1,
-/* 0x01c4: intr_skip_pause */
+/* 0x01d4: intr_skip_pause */
0xf44089c4,
0xa4f1070b,
-/* 0x01ce: intr_skip_user0 */
+/* 0x01de: intr_skip_user0 */
0x07f0ffbf,
0x0604b604,
0xbd0008d0,
0x90fca0fc,
0x00fc80fc,
0xf80032f4,
-/* 0x01f5: timer */
- 0x1032f401,
- 0xb003f898,
- 0x1cf40086,
- 0x03fe8051,
+/* 0x0205: ticks_from_ns */
+ 0xf9c0f901,
+ 0xcbd7f1b0,
+ 0x00d3f000,
+ 0x041321f5,
+ 0x03e8ccec,
+ 0xf400b4b0,
+ 0xeeec120b,
+ 0xd7f103e8,
+ 0xd3f000cb,
+ 0x1321f500,
+/* 0x022d: ticks_from_ns_quit */
+ 0x02ceb904,
+ 0xc0fcb0fc,
+/* 0x0236: ticks_from_us */
+ 0xc0f900f8,
+ 0xd7f1b0f9,
+ 0xd3f000cb,
+ 0x1321f500,
+ 0x02ceb904,
+ 0xf400b4b0,
+ 0xe4bd050b,
+/* 0x0250: ticks_from_us_quit */
+ 0xc0fcb0fc,
+/* 0x0256: ticks_to_us */
+ 0xd7f100f8,
+ 0xd3f000cb,
+ 0xecedff00,
+/* 0x0262: timer */
+ 0x90f900f8,
+ 0x32f480f9,
+ 0x03f89810,
+ 0xf40086b0,
+ 0x84bd651c,
0xb63807f0,
0x08d00604,
0xf004bd00,
- 0x84b60887,
+ 0x84b63487,
0x0088cf06,
- 0xf40284f0,
- 0x87f0261b,
- 0x0684b634,
- 0xb80088cf,
- 0x0bf406e0,
- 0x06e8b809,
-/* 0x0233: timer_reset */
- 0xf01f1ef4,
- 0x04b63407,
- 0x000ed006,
- 0x0e8004bd,
-/* 0x0241: timer_enable */
- 0x0187f09a,
+ 0xbb9a0998,
+ 0xe9bb0298,
+ 0x03fe8000,
+ 0xb60887f0,
+ 0x88cf0684,
+ 0x0284f000,
+ 0xf0261bf4,
+ 0x84b63487,
+ 0x0088cf06,
+ 0xf406e0b8,
+ 0xe8b8090b,
+ 0x111cf406,
+/* 0x02b8: timer_reset */
+ 0xb63407f0,
+ 0x0ed00604,
+ 0x8004bd00,
+/* 0x02c6: timer_enable */
+ 0x87f09a0e,
+ 0x3807f001,
+ 0xd00604b6,
+ 0x04bd0008,
+/* 0x02d4: timer_done */
+ 0xfc1031f4,
+ 0xf890fc80,
+/* 0x02dd: send_proc */
+ 0xf980f900,
+ 0x05e89890,
+ 0xf004e998,
+ 0x89b80486,
+ 0x2a0bf406,
+ 0x940398c4,
+ 0x80b60488,
+ 0x008ebb18,
+ 0x8000fa98,
+ 0x8d80008a,
+ 0x028c8001,
+ 0xb6038b80,
+ 0x94f00190,
+ 0x04e98007,
+/* 0x0317: send_done */
+ 0xfc0231f4,
+ 0xf880fc90,
+/* 0x031d: find */
+ 0xf080f900,
+ 0x31f45887,
+/* 0x0325: find_loop */
+ 0x008a9801,
+ 0xf406aeb8,
+ 0x80b6100b,
+ 0x6886b158,
+ 0xf01bf402,
+/* 0x033b: find_done */
+ 0xb90132f4,
+ 0x80fc028e,
+/* 0x0342: send */
+ 0x21f500f8,
+ 0x01f4031d,
+/* 0x034b: recv */
+ 0xf900f897,
+ 0x9880f990,
+ 0xe99805e8,
+ 0x0132f404,
+ 0xf40689b8,
+ 0x89c43d0b,
+ 0x0180b603,
+ 0x800784f0,
+ 0xea9805e8,
+ 0xfef0f902,
+ 0xf0f9018f,
+ 0x9402efb9,
+ 0xe9bb0499,
+ 0x18e0b600,
+ 0x9803eb98,
+ 0xed9802ec,
+ 0x00ee9801,
+ 0xf0fca5f9,
+ 0xf400f8fe,
+ 0xf0fc0131,
+/* 0x0398: recv_done */
+ 0x90fc80fc,
+/* 0x039e: init */
+ 0x17f100f8,
+ 0x14b60108,
+ 0x0011cf06,
+ 0x010911e7,
+ 0xfe0814b6,
+ 0x17f10014,
+ 0x13f000e0,
+ 0x1c07f000,
+ 0xd00604b6,
+ 0x04bd0001,
+ 0xf0ff17f0,
+ 0x04b61407,
+ 0x0001d006,
+ 0x17f004bd,
+ 0x0015f102,
+ 0x1007f008,
+ 0xd00604b6,
+ 0x04bd0001,
+ 0x011a17f1,
+ 0xfe0013f0,
+ 0x31f40010,
+ 0x0117f010,
0xb63807f0,
- 0x08d00604,
-/* 0x024f: timer_done */
- 0xf404bd00,
- 0x00f81031,
-/* 0x0254: send_proc */
- 0x90f980f9,
- 0x9805e898,
- 0x86f004e9,
- 0x0689b804,
- 0xc42a0bf4,
- 0x88940398,
- 0x1880b604,
- 0x98008ebb,
- 0x8a8000fa,
- 0x018d8000,
- 0x80028c80,
- 0x90b6038b,
- 0x0794f001,
- 0xf404e980,
-/* 0x028e: send_done */
- 0x90fc0231,
- 0x00f880fc,
-/* 0x0294: find */
- 0x87f080f9,
- 0x0131f458,
-/* 0x029c: find_loop */
- 0xb8008a98,
- 0x0bf406ae,
- 0x5880b610,
- 0x026886b1,
- 0xf4f01bf4,
-/* 0x02b2: find_done */
- 0x8eb90132,
- 0xf880fc02,
-/* 0x02b9: send */
- 0x9421f500,
- 0x9701f402,
-/* 0x02c2: recv */
- 0xe89800f8,
- 0x04e99805,
- 0xb80132f4,
- 0x0bf40689,
- 0x0389c43d,
- 0xf00180b6,
- 0xe8800784,
- 0x02ea9805,
- 0x8ffef0f9,
- 0xb9f0f901,
- 0x999402ef,
- 0x00e9bb04,
- 0x9818e0b6,
- 0xec9803eb,
- 0x01ed9802,
- 0xf900ee98,
- 0xfef0fca5,
- 0x31f400f8,
-/* 0x030b: recv_done */
- 0xf8f0fc01,
-/* 0x030d: init */
- 0x0817f100,
- 0x0614b601,
- 0xe70011cf,
- 0xb6010911,
- 0x14fe0814,
- 0xe017f100,
- 0x0013f000,
- 0xb61c07f0,
0x01d00604,
0xf004bd00,
- 0x07f0ff17,
- 0x0604b614,
- 0xbd0001d0,
- 0x0217f004,
- 0x080015f1,
- 0xb61007f0,
- 0x01d00604,
- 0xf104bd00,
- 0xf0010a17,
- 0x10fe0013,
- 0x1031f400,
- 0xf00117f0,
- 0x04b63807,
- 0x0001d006,
- 0xf7f004bd,
-/* 0x0371: init_proc */
- 0x01f19858,
- 0xf40016b0,
- 0x15f9fa0b,
- 0xf458f0b6,
-/* 0x0382: host_send */
- 0x17f1f20e,
- 0x14b604b0,
- 0x0011cf06,
- 0x04a027f1,
- 0xcf0624b6,
- 0x12b80022,
- 0x320bf406,
- 0x94071ec4,
- 0xe0b704ee,
- 0xeb980270,
- 0x02ec9803,
- 0x9801ed98,
- 0x21f500ee,
- 0x10b602b9,
- 0x0f1ec401,
- 0x04b007f1,
- 0xd00604b6,
- 0x04bd000e,
-/* 0x03cb: host_send_done */
- 0xf8ba0ef4,
-/* 0x03cd: host_recv */
- 0x4917f100,
- 0x5413f14e,
- 0x06e1b852,
-/* 0x03db: host_recv_wait */
- 0xf1aa0bf4,
- 0xb604cc17,
- 0x11cf0614,
- 0xc827f100,
- 0x0624b604,
- 0xf00022cf,
- 0x12b80816,
- 0xe60bf406,
- 0xb60723c4,
- 0x30b70434,
- 0x3b8002f0,
- 0x023c8003,
- 0x80013d80,
- 0x20b6003e,
- 0x0f24f001,
- 0x04c807f1,
+/* 0x0402: init_proc */
+ 0xf19858f7,
+ 0x0016b001,
+ 0xf9fa0bf4,
+ 0x58f0b615,
+/* 0x0413: mulu32_32_64 */
+ 0xf9f20ef4,
+ 0xf920f910,
+ 0x9540f930,
+ 0xd29510e1,
+ 0xbdc4bd10,
+ 0xc0edffb4,
+ 0xb9301dff,
+ 0x34f10234,
+ 0x34b6ffff,
+ 0x1045b610,
+ 0xbb00c3bb,
+ 0xe2ff01b4,
+ 0x0234b930,
+ 0xffff34f1,
+ 0xb61034b6,
+ 0xc3bb1045,
+ 0x01b4bb00,
+ 0xbb3012ff,
+ 0x40fc00b3,
+ 0x20fc30fc,
+ 0x00f810fc,
+/* 0x0464: host_send */
+ 0x04b017f1,
+ 0xcf0614b6,
+ 0x27f10011,
+ 0x24b604a0,
+ 0x0022cf06,
+ 0xf40612b8,
+ 0x1ec4320b,
+ 0x04ee9407,
+ 0x0270e0b7,
+ 0x9803eb98,
+ 0xed9802ec,
+ 0x00ee9801,
+ 0x034221f5,
+ 0xc40110b6,
+ 0x07f10f1e,
+ 0x04b604b0,
+ 0x000ed006,
+ 0x0ef404bd,
+/* 0x04ad: host_send_done */
+/* 0x04af: host_recv */
+ 0xf100f8ba,
+ 0xf14e4917,
+ 0xb8525413,
+ 0x0bf406e1,
+/* 0x04bd: host_recv_wait */
+ 0xcc17f1aa,
+ 0x0614b604,
+ 0xf10011cf,
+ 0xb604c827,
+ 0x22cf0624,
+ 0x0816f000,
+ 0xf40612b8,
+ 0x23c4e60b,
+ 0x0434b607,
+ 0x02f030b7,
+ 0x80033b80,
+ 0x3d80023c,
+ 0x003e8001,
+ 0xf00120b6,
+ 0x07f10f24,
+ 0x04b604c8,
+ 0x0002d006,
+ 0x27f004bd,
+ 0x0007f040,
0xd00604b6,
0x04bd0002,
- 0xf04027f0,
- 0x04b60007,
- 0x0002d006,
- 0x00f804bd,
-/* 0x0430: host_init */
- 0x008017f1,
- 0xf11014b6,
- 0xf1027015,
- 0xb604d007,
- 0x01d00604,
- 0xf104bd00,
- 0xb6008017,
- 0x15f11014,
- 0x07f102f0,
- 0x04b604dc,
- 0x0001d006,
- 0x17f004bd,
- 0xc407f101,
+/* 0x0512: host_init */
+ 0x17f100f8,
+ 0x14b60080,
+ 0x7015f110,
+ 0xd007f102,
0x0604b604,
0xbd0001d0,
-/* 0x046f: memx_func_enter */
- 0xf000f804,
+ 0x8017f104,
+ 0x1014b600,
+ 0x02f015f1,
+ 0x04dc07f1,
+ 0xd00604b6,
+ 0x04bd0001,
+ 0xf10117f0,
+ 0xb604c407,
+ 0x01d00604,
+ 0xf804bd00,
+/* 0x0551: memx_func_enter */
+ 0x1087f100,
+ 0x028eb916,
+ 0xb90421f4,
+ 0x67f102d7,
+ 0x63f1fffc,
+ 0x76fdffff,
+ 0x0267f104,
+ 0x0576fd00,
+ 0x70f980f9,
+ 0xe0fcd0fc,
+ 0xf03f21f4,
0x07f10467,
0x04b607e0,
0x0006d006,
-/* 0x047e: memx_func_enter_wait */
+/* 0x058a: memx_func_enter_wait */
0x67f104bd,
0x64b607c0,
0x0066cf06,
0xf40464f0,
- 0x1698f30b,
- 0x0410b600,
-/* 0x0496: memx_func_leave */
- 0x67f000f8,
- 0xe407f104,
- 0x0604b607,
- 0xbd0006d0,
-/* 0x04a5: memx_func_leave_wait */
- 0xc067f104,
+ 0x67f0f30b,
+ 0x0664b62c,
+ 0x800066cf,
+ 0x00f8ee06,
+/* 0x05a8: memx_func_leave */
+ 0xb62c67f0,
+ 0x66cf0664,
+ 0xef068000,
+ 0xf10467f0,
+ 0xb607e407,
+ 0x06d00604,
+/* 0x05c3: memx_func_leave_wait */
+ 0xf104bd00,
+ 0xb607c067,
+ 0x66cf0664,
+ 0x0464f000,
+ 0xf1f31bf4,
+ 0xb9161087,
+ 0x21f4028e,
+ 0x02d7b904,
+ 0xffcc67f1,
+ 0xffff63f1,
+ 0xf90476fd,
+ 0xfc70f980,
+ 0xf4e0fcd0,
+ 0x00f83f21,
+/* 0x05f8: memx_func_wait_vblank */
+ 0xb0001698,
+ 0x0bf40066,
+ 0x0166b013,
+ 0xf4060bf4,
+/* 0x060a: memx_func_wait_vblank_head1 */
+ 0x77f12e0e,
+ 0x0ef40020,
+/* 0x0611: memx_func_wait_vblank_head0 */
+ 0x0877f107,
+/* 0x0615: memx_func_wait_vblank_0 */
+ 0xc467f100,
0x0664b607,
- 0xf00066cf,
- 0x1bf40464,
-/* 0x04b7: memx_func_wr32 */
- 0x9800f8f3,
- 0x15980016,
- 0x0810b601,
- 0x50f960f9,
- 0xe0fcd0fc,
- 0xb63f21f4,
- 0x1bf40242,
-/* 0x04d3: memx_func_wait */
- 0xf000f8e9,
- 0x84b62c87,
- 0x0088cf06,
- 0x98001e98,
- 0x1c98011d,
- 0x031b9802,
- 0xf41010b6,
- 0x00f89c21,
-/* 0x04f0: memx_func_delay */
- 0xb6001e98,
- 0x21f40410,
-/* 0x04fb: memx_exec */
- 0xf900f87f,
- 0xb9d0f9e0,
- 0xb2b902c1,
-/* 0x0505: memx_exec_next */
- 0x00139802,
- 0x950410b6,
- 0x30f01034,
- 0xde35980c,
- 0x12b855f9,
- 0xec1ef406,
- 0xe0fcd0fc,
- 0x02b921f5,
-/* 0x0526: memx_info */
- 0xc7f100f8,
- 0xb7f103ac,
- 0x21f50800,
- 0x00f802b9,
-/* 0x0534: memx_recv */
- 0xf401d6b0,
- 0xd6b0c40b,
- 0xe90bf400,
-/* 0x0542: memx_init */
- 0x00f800f8,
-/* 0x0544: perf_recv */
-/* 0x0546: perf_init */
+ 0xfd0066cf,
+ 0x1bf40467,
+/* 0x0625: memx_func_wait_vblank_1 */
+ 0xc467f1f3,
+ 0x0664b607,
+ 0xfd0066cf,
+ 0x0bf40467,
+/* 0x0635: memx_func_wait_vblank_fini */
+ 0x0410b6f3,
+/* 0x063a: memx_func_wr32 */
+ 0x169800f8,
+ 0x01159800,
+ 0xf90810b6,
+ 0xfc50f960,
+ 0xf4e0fcd0,
+ 0x42b63f21,
+ 0xe91bf402,
+/* 0x0656: memx_func_wait */
+ 0x87f000f8,
+ 0x0684b62c,
+ 0x980088cf,
+ 0x1d98001e,
+ 0x021c9801,
+ 0xb6031b98,
+ 0x21f41010,
+/* 0x0673: memx_func_delay */
+ 0x9800f8a4,
+ 0x10b6001e,
+ 0x7f21f404,
+/* 0x067e: memx_exec */
+ 0xe0f900f8,
+ 0xc1b9d0f9,
+ 0x02b2b902,
+/* 0x0688: memx_exec_next */
+ 0xb6001398,
+ 0x34e70410,
+ 0x33e701f0,
+ 0x32b601e0,
+ 0x0c30f001,
+ 0xf9de3598,
+ 0x0612b855,
+ 0x98e41ef4,
+ 0x0c98ee0b,
+ 0x02cbbbef,
+ 0x07c4b7f1,
+ 0xcf06b4b6,
+ 0xd0fc00bb,
+ 0x21f5e0fc,
+ 0x00f80342,
+/* 0x06c4: memx_info */
+ 0x03c0c7f1,
+ 0x0800b7f1,
+ 0x034221f5,
+/* 0x06d2: memx_recv */
+ 0xd6b000f8,
+ 0xa90bf401,
+ 0xf400d6b0,
+ 0x00f8e90b,
+/* 0x06e0: memx_init */
+/* 0x06e2: perf_recv */
0x00f800f8,
-/* 0x0548: i2c_drive_scl */
- 0xf40036b0,
- 0x07f1110b,
- 0x04b607e0,
- 0x0001d006,
- 0x00f804bd,
-/* 0x055c: i2c_drive_scl_lo */
- 0x07e407f1,
- 0xd00604b6,
- 0x04bd0001,
-/* 0x056a: i2c_drive_sda */
+/* 0x06e4: perf_init */
+/* 0x06e6: i2c_drive_scl */
0x36b000f8,
0x110bf400,
0x07e007f1,
0xd00604b6,
- 0x04bd0002,
-/* 0x057e: i2c_drive_sda_lo */
+ 0x04bd0001,
+/* 0x06fa: i2c_drive_scl_lo */
0x07f100f8,
0x04b607e4,
+ 0x0001d006,
+ 0x00f804bd,
+/* 0x0708: i2c_drive_sda */
+ 0xf40036b0,
+ 0x07f1110b,
+ 0x04b607e0,
0x0002d006,
0x00f804bd,
-/* 0x058c: i2c_sense_scl */
- 0xf10132f4,
- 0xb607c437,
- 0x33cf0634,
- 0x0431fd00,
- 0xf4060bf4,
-/* 0x05a2: i2c_sense_scl_done */
- 0x00f80131,
-/* 0x05a4: i2c_sense_sda */
- 0xf10132f4,
- 0xb607c437,
- 0x33cf0634,
- 0x0432fd00,
- 0xf4060bf4,
-/* 0x05ba: i2c_sense_sda_done */
- 0x00f80131,
-/* 0x05bc: i2c_raise_scl */
- 0x47f140f9,
- 0x37f00898,
- 0x4821f501,
-/* 0x05c9: i2c_raise_scl_wait */
- 0xe8e7f105,
- 0x7f21f403,
- 0x058c21f5,
- 0xb60901f4,
- 0x1bf40142,
-/* 0x05dd: i2c_raise_scl_done */
- 0xf840fcef,
-/* 0x05e1: i2c_start */
- 0x8c21f500,
- 0x0d11f405,
- 0x05a421f5,
- 0xf40611f4,
-/* 0x05f2: i2c_start_rep */
- 0x37f0300e,
- 0x4821f500,
- 0x0137f005,
- 0x056a21f5,
- 0xb60076bb,
- 0x50f90465,
- 0xbb046594,
- 0x50bd0256,
- 0xfc0475fd,
- 0xbc21f550,
- 0x0464b605,
-/* 0x061f: i2c_start_send */
- 0xf01f11f4,
+/* 0x071c: i2c_drive_sda_lo */
+ 0x07e407f1,
+ 0xd00604b6,
+ 0x04bd0002,
+/* 0x072a: i2c_sense_scl */
+ 0x32f400f8,
+ 0xc437f101,
+ 0x0634b607,
+ 0xfd0033cf,
+ 0x0bf40431,
+ 0x0131f406,
+/* 0x0740: i2c_sense_scl_done */
+/* 0x0742: i2c_sense_sda */
+ 0x32f400f8,
+ 0xc437f101,
+ 0x0634b607,
+ 0xfd0033cf,
+ 0x0bf40432,
+ 0x0131f406,
+/* 0x0758: i2c_sense_sda_done */
+/* 0x075a: i2c_raise_scl */
+ 0x40f900f8,
+ 0x089847f1,
+ 0xf50137f0,
+/* 0x0767: i2c_raise_scl_wait */
+ 0xf106e621,
+ 0xf403e8e7,
+ 0x21f57f21,
+ 0x01f4072a,
+ 0x0142b609,
+/* 0x077b: i2c_raise_scl_done */
+ 0xfcef1bf4,
+/* 0x077f: i2c_start */
+ 0xf500f840,
+ 0xf4072a21,
+ 0x21f50d11,
+ 0x11f40742,
+ 0x300ef406,
+/* 0x0790: i2c_start_rep */
+ 0xf50037f0,
+ 0xf006e621,
+ 0x21f50137,
+ 0x76bb0708,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0xf550fc04,
+ 0xb6075a21,
+ 0x11f40464,
+/* 0x07bd: i2c_start_send */
+ 0x0037f01f,
+ 0x070821f5,
+ 0x1388e7f1,
+ 0xf07f21f4,
0x21f50037,
- 0xe7f1056a,
+ 0xe7f106e6,
0x21f41388,
- 0x0037f07f,
- 0x054821f5,
- 0x1388e7f1,
-/* 0x063b: i2c_start_out */
- 0xf87f21f4,
-/* 0x063d: i2c_stop */
- 0x0037f000,
- 0x054821f5,
- 0xf50037f0,
- 0xf1056a21,
- 0xf403e8e7,
- 0x37f07f21,
- 0x4821f501,
- 0x88e7f105,
- 0x7f21f413,
+/* 0x07d9: i2c_start_out */
+/* 0x07db: i2c_stop */
+ 0xf000f87f,
+ 0x21f50037,
+ 0x37f006e6,
+ 0x0821f500,
+ 0xe8e7f107,
+ 0x7f21f403,
0xf50137f0,
- 0xf1056a21,
+ 0xf106e621,
0xf41388e7,
- 0x00f87f21,
-/* 0x0670: i2c_bitw */
- 0x056a21f5,
- 0x03e8e7f1,
- 0xbb7f21f4,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x05bc21f5,
- 0xf40464b6,
- 0xe7f11811,
- 0x21f41388,
- 0x0037f07f,
- 0x054821f5,
- 0x1388e7f1,
-/* 0x06af: i2c_bitw_out */
- 0xf87f21f4,
-/* 0x06b1: i2c_bitr */
- 0x0137f000,
- 0x056a21f5,
- 0x03e8e7f1,
- 0xbb7f21f4,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x05bc21f5,
- 0xf40464b6,
- 0x21f51b11,
- 0x37f005a4,
- 0x4821f500,
- 0x88e7f105,
+ 0x37f07f21,
+ 0x0821f501,
+ 0x88e7f107,
0x7f21f413,
- 0xf4013cf0,
-/* 0x06f6: i2c_bitr_done */
- 0x00f80131,
-/* 0x06f8: i2c_get_byte */
- 0xf00057f0,
-/* 0x06fe: i2c_get_byte_next */
- 0x54b60847,
- 0x0076bb01,
+/* 0x080e: i2c_bitw */
+ 0x21f500f8,
+ 0xe7f10708,
+ 0x21f403e8,
+ 0x0076bb7f,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b606b1,
- 0x2b11f404,
- 0xb60553fd,
- 0x1bf40142,
- 0x0137f0d8,
- 0xb60076bb,
- 0x50f90465,
- 0xbb046594,
- 0x50bd0256,
- 0xfc0475fd,
- 0x7021f550,
- 0x0464b606,
-/* 0x0748: i2c_get_byte_done */
-/* 0x074a: i2c_put_byte */
- 0x47f000f8,
-/* 0x074d: i2c_put_byte_next */
- 0x0142b608,
- 0xbb3854ff,
+ 0x64b6075a,
+ 0x1811f404,
+ 0x1388e7f1,
+ 0xf07f21f4,
+ 0x21f50037,
+ 0xe7f106e6,
+ 0x21f41388,
+/* 0x084d: i2c_bitw_out */
+/* 0x084f: i2c_bitr */
+ 0xf000f87f,
+ 0x21f50137,
+ 0xe7f10708,
+ 0x21f403e8,
+ 0x0076bb7f,
+ 0xf90465b6,
+ 0x04659450,
+ 0xbd0256bb,
+ 0x0475fd50,
+ 0x21f550fc,
+ 0x64b6075a,
+ 0x1b11f404,
+ 0x074221f5,
+ 0xf50037f0,
+ 0xf106e621,
+ 0xf41388e7,
+ 0x3cf07f21,
+ 0x0131f401,
+/* 0x0894: i2c_bitr_done */
+/* 0x0896: i2c_get_byte */
+ 0x57f000f8,
+ 0x0847f000,
+/* 0x089c: i2c_get_byte_next */
+ 0xbb0154b6,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x067021f5,
+ 0x084f21f5,
0xf40464b6,
- 0x46b03411,
- 0xd81bf400,
- 0xb60076bb,
- 0x50f90465,
- 0xbb046594,
- 0x50bd0256,
- 0xfc0475fd,
- 0xb121f550,
- 0x0464b606,
- 0xbb0f11f4,
- 0x36b00076,
- 0x061bf401,
-/* 0x07a3: i2c_put_byte_done */
- 0xf80132f4,
-/* 0x07a5: i2c_addr */
- 0x0076bb00,
+ 0x53fd2b11,
+ 0x0142b605,
+ 0xf0d81bf4,
+ 0x76bb0137,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0xf550fc04,
+ 0xb6080e21,
+/* 0x08e6: i2c_get_byte_done */
+ 0x00f80464,
+/* 0x08e8: i2c_put_byte */
+/* 0x08eb: i2c_put_byte_next */
+ 0xb60847f0,
+ 0x54ff0142,
+ 0x0076bb38,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b605e1,
- 0x2911f404,
- 0x012ec3e7,
- 0xfd0134b6,
- 0x76bb0553,
+ 0x64b6080e,
+ 0x3411f404,
+ 0xf40046b0,
+ 0x76bbd81b,
0x0465b600,
0x659450f9,
0x0256bb04,
0x75fd50bd,
0xf550fc04,
- 0xb6074a21,
-/* 0x07ea: i2c_addr_done */
- 0x00f80464,
-/* 0x07ec: i2c_acquire_addr */
- 0xb6f8cec7,
- 0xe0b702e4,
- 0xee980bfc,
-/* 0x07fb: i2c_acquire */
- 0xf500f800,
- 0xf407ec21,
- 0xd9f00421,
- 0x3f21f403,
-/* 0x080a: i2c_release */
- 0x21f500f8,
- 0x21f407ec,
- 0x03daf004,
- 0xf83f21f4,
-/* 0x0819: i2c_recv */
- 0x0132f400,
- 0xb6f8c1c7,
- 0x16b00214,
- 0x3a1ff528,
- 0xd413a001,
- 0x0032980b,
- 0x0bac13a0,
- 0xf4003198,
- 0xd0f90231,
- 0xd0f9e0f9,
- 0x000067f1,
- 0x100063f1,
- 0xbb016792,
+ 0xb6084f21,
+ 0x11f40464,
+ 0x0076bb0f,
+ 0xf40136b0,
+ 0x32f4061b,
+/* 0x0941: i2c_put_byte_done */
+/* 0x0943: i2c_addr */
+ 0xbb00f801,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x07fb21f5,
- 0xfc0464b6,
- 0x00d6b0d0,
- 0x00b31bf5,
- 0xbb0057f0,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x07a521f5,
- 0xf50464b6,
- 0xc700d011,
- 0x76bbe0c5,
- 0x0465b600,
- 0x659450f9,
- 0x0256bb04,
- 0x75fd50bd,
- 0xf550fc04,
- 0xb6074a21,
- 0x11f50464,
- 0x57f000ad,
+ 0x077f21f5,
+ 0xf40464b6,
+ 0xc3e72911,
+ 0x34b6012e,
+ 0x0553fd01,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0xe821f550,
+ 0x0464b608,
+/* 0x0988: i2c_addr_done */
+/* 0x098a: i2c_acquire_addr */
+ 0xcec700f8,
+ 0x02e4b6f8,
+ 0x0c10e0b7,
+ 0xf800ee98,
+/* 0x0999: i2c_acquire */
+ 0x8a21f500,
+ 0x0421f409,
+ 0xf403d9f0,
+ 0x00f83f21,
+/* 0x09a8: i2c_release */
+ 0x098a21f5,
+ 0xf00421f4,
+ 0x21f403da,
+/* 0x09b7: i2c_recv */
+ 0xf400f83f,
+ 0xc1c70132,
+ 0x0214b6f8,
+ 0xf52816b0,
+ 0xa0013a1f,
+ 0x980be813,
+ 0x13a00032,
+ 0x31980bc0,
+ 0x0231f400,
+ 0xe0f9d0f9,
+ 0x67f1d0f9,
+ 0x63f10000,
+ 0x67921000,
0x0076bb01,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b607a5,
- 0x8a11f504,
+ 0x64b60999,
+ 0xb0d0fc04,
+ 0x1bf500d6,
+ 0x57f000b3,
0x0076bb00,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b606f8,
- 0x6a11f404,
- 0xbbe05bcb,
+ 0x64b60943,
+ 0xd011f504,
+ 0xe0c5c700,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0xe821f550,
+ 0x0464b608,
+ 0x00ad11f5,
+ 0xbb0157f0,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x063d21f5,
- 0xb90464b6,
- 0x74bd025b,
-/* 0x091f: i2c_recv_not_rd08 */
- 0xb0430ef4,
- 0x1bf401d6,
- 0x0057f03d,
- 0x07a521f5,
- 0xc73311f4,
- 0x21f5e0c5,
- 0x11f4074a,
- 0x0057f029,
- 0x07a521f5,
- 0xc71f11f4,
- 0x21f5e0b5,
- 0x11f4074a,
- 0x3d21f515,
- 0xc774bd06,
- 0x1bf408c5,
- 0x0232f409,
-/* 0x095f: i2c_recv_not_wr08 */
-/* 0x095f: i2c_recv_done */
- 0xc7030ef4,
- 0x21f5f8ce,
- 0xe0fc080a,
- 0x12f4d0fc,
- 0x027cb90a,
- 0x02b921f5,
-/* 0x0974: i2c_recv_exit */
-/* 0x0976: i2c_init */
- 0x00f800f8,
-/* 0x0978: test_recv */
- 0x05d817f1,
+ 0x094321f5,
+ 0xf50464b6,
+ 0xbb008a11,
+ 0x65b60076,
+ 0x9450f904,
+ 0x56bb0465,
+ 0xfd50bd02,
+ 0x50fc0475,
+ 0x089621f5,
+ 0xf40464b6,
+ 0x5bcb6a11,
+ 0x0076bbe0,
+ 0xf90465b6,
+ 0x04659450,
+ 0xbd0256bb,
+ 0x0475fd50,
+ 0x21f550fc,
+ 0x64b607db,
+ 0x025bb904,
+ 0x0ef474bd,
+/* 0x0abd: i2c_recv_not_rd08 */
+ 0x01d6b043,
+ 0xf03d1bf4,
+ 0x21f50057,
+ 0x11f40943,
+ 0xe0c5c733,
+ 0x08e821f5,
+ 0xf02911f4,
+ 0x21f50057,
+ 0x11f40943,
+ 0xe0b5c71f,
+ 0x08e821f5,
+ 0xf51511f4,
+ 0xbd07db21,
+ 0x08c5c774,
+ 0xf4091bf4,
+ 0x0ef40232,
+/* 0x0afd: i2c_recv_not_wr08 */
+/* 0x0afd: i2c_recv_done */
+ 0xf8cec703,
+ 0x09a821f5,
+ 0xd0fce0fc,
+ 0xb90a12f4,
+ 0x21f5027c,
+/* 0x0b12: i2c_recv_exit */
+ 0x00f80342,
+/* 0x0b14: i2c_init */
+/* 0x0b16: test_recv */
+ 0x17f100f8,
+ 0x14b605d8,
+ 0x0011cf06,
+ 0xf10110b6,
+ 0xb605d807,
+ 0x01d00604,
+ 0xf104bd00,
+ 0xf1d900e7,
+ 0xf5134fe3,
+ 0xf8026221,
+/* 0x0b3d: test_init */
+ 0x00e7f100,
+ 0x6221f508,
+/* 0x0b47: idle_recv */
+ 0xf800f802,
+/* 0x0b49: idle */
+ 0x0031f400,
+ 0x05d417f1,
0xcf0614b6,
0x10b60011,
- 0xd807f101,
+ 0xd407f101,
0x0604b605,
0xbd0001d0,
- 0x00e7f104,
- 0x4fe3f1d9,
- 0xf521f513,
-/* 0x099f: test_init */
- 0xf100f801,
- 0xf50800e7,
- 0xf801f521,
-/* 0x09a9: idle_recv */
-/* 0x09ab: idle */
- 0xf400f800,
- 0x17f10031,
- 0x14b605d4,
- 0x0011cf06,
- 0xf10110b6,
- 0xb605d407,
- 0x01d00604,
-/* 0x09c7: idle_loop */
- 0xf004bd00,
- 0x32f45817,
-/* 0x09cd: idle_proc */
-/* 0x09cd: idle_proc_exec */
- 0xb910f902,
- 0x21f5021e,
- 0x10fc02c2,
- 0xf40911f4,
- 0x0ef40231,
-/* 0x09e1: idle_proc_next */
- 0x5810b6ef,
- 0xf4061fb8,
- 0x02f4e61b,
- 0x0028f4dd,
- 0x00bb0ef4,
+/* 0x0b65: idle_loop */
+ 0x5817f004,
+/* 0x0b6b: idle_proc */
+/* 0x0b6b: idle_proc_exec */
+ 0xf90232f4,
+ 0x021eb910,
+ 0x034b21f5,
+ 0x11f410fc,
+ 0x0231f409,
+/* 0x0b7f: idle_proc_next */
+ 0xb6ef0ef4,
+ 0x1fb85810,
+ 0xe61bf406,
+ 0xf4dd02f4,
+ 0x0ef40028,
+ 0x000000bb,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
0x00000000,
0x00000000,
0x00000000,
*/
#define NVKM_PPWR_CHIPSET GF100
+#define HW_TICKS_PER_US 203 // should be 202.5
//#define NVKM_FALCON_PC24
//#define NVKM_FALCON_UNSHIFTED_IO
.section #nvc0_pwr_data
#define INCLUDE_PROC
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
#define INCLUDE_DATA
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
.section #nvc0_pwr_code
#define INCLUDE_CODE
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
0x00000000,
/* 0x0058: proc_list_head */
0x54534f48,
- 0x00000430,
- 0x000003cd,
+ 0x00000512,
+ 0x000004af,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x584d454d,
- 0x00000542,
- 0x00000534,
+ 0x0000074b,
+ 0x0000073d,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x46524550,
- 0x00000546,
- 0x00000544,
+ 0x0000074f,
+ 0x0000074d,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x5f433249,
- 0x00000976,
- 0x00000819,
+ 0x00000b7f,
+ 0x00000a22,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x54534554,
- 0x0000099f,
- 0x00000978,
+ 0x00000ba8,
+ 0x00000b81,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x454c4449,
- 0x000009ab,
- 0x000009a9,
+ 0x00000bb4,
+ 0x00000bb2,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
/* 0x0370: memx_func_head */
- 0x00010000,
- 0x00000000,
- 0x0000046f,
-/* 0x037c: memx_func_next */
0x00000001,
0x00000000,
- 0x00000496,
+ 0x00000551,
+/* 0x037c: memx_func_next */
0x00000002,
+ 0x00000000,
+ 0x000005db,
+ 0x00000003,
0x00000002,
- 0x000004b7,
- 0x00040003,
+ 0x000006a5,
+ 0x00040004,
+ 0x00000000,
+ 0x000006c1,
+ 0x00010005,
+ 0x00000000,
+ 0x000006de,
+ 0x00010006,
0x00000000,
- 0x000004d3,
- 0x00010004,
+ 0x00000663,
+/* 0x03b8: memx_func_tail */
+/* 0x03b8: memx_ts_start */
0x00000000,
- 0x000004f0,
-/* 0x03ac: memx_func_tail */
-/* 0x03ac: memx_data_head */
+/* 0x03bc: memx_ts_end */
0x00000000,
+/* 0x03c0: memx_data_head */
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
-/* 0x0bac: memx_data_tail */
-/* 0x0bac: i2c_scl_map */
+ 0x00000000,
+/* 0x0bc0: memx_data_tail */
+/* 0x0bc0: i2c_scl_map */
0x00001000,
0x00004000,
0x00010000,
0x01000000,
0x04000000,
0x10000000,
-/* 0x0bd4: i2c_sda_map */
+/* 0x0be8: i2c_sda_map */
0x00002000,
0x00008000,
0x00020000,
0x02000000,
0x08000000,
0x20000000,
-/* 0x0bfc: i2c_ctrl */
+/* 0x0c10: i2c_ctrl */
0x0000e138,
0x0000e150,
0x0000e168,
0x00000000,
0x00000000,
0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
};
uint32_t nvc0_pwr_code[] = {
- 0x030d0ef5,
+ 0x039e0ef5,
/* 0x0004: rd32 */
0x07a007f1,
0xd00604b6,
0xd4f100dd,
0x1bf47000,
/* 0x007f: nsec */
- 0xf000f8f2,
+ 0xf900f8f2,
+ 0xf080f990,
0x84b62c87,
0x0088cf06,
-/* 0x0088: nsec_loop */
+/* 0x008c: nsec_loop */
0xb62c97f0,
0x99cf0694,
0x0298bb00,
0xf4069eb8,
- 0x00f8f11e,
-/* 0x009c: wait */
+ 0x80fcf11e,
+ 0x00f890fc,
+/* 0x00a4: wait */
+ 0x80f990f9,
0xb62c87f0,
0x88cf0684,
-/* 0x00a5: wait_loop */
+/* 0x00b1: wait_loop */
0x02eeb900,
0xb90421f4,
0xadfd02da,
0x0099cf06,
0xb80298bb,
0x1ef4069b,
-/* 0x00c9: wait_done */
-/* 0x00cb: intr_watchdog */
- 0x9800f8df,
+/* 0x00d5: wait_done */
+ 0xfc80fcdf,
+/* 0x00db: intr_watchdog */
+ 0x9800f890,
0x96b003e9,
0x2a0bf400,
0xbb9a0a98,
0x1cf4029a,
0x01d7f00f,
- 0x025421f5,
+ 0x02dd21f5,
0x0ef494bd,
-/* 0x00e9: intr_watchdog_next_time */
+/* 0x00f9: intr_watchdog_next_time */
0x9b0a9815,
0xf400a6b0,
0x9ab8090b,
0x061cf406,
-/* 0x00f8: intr_watchdog_next_time_set */
-/* 0x00fb: intr_watchdog_next_proc */
+/* 0x0108: intr_watchdog_next_time_set */
+/* 0x010b: intr_watchdog_next_proc */
0x809b0980,
0xe0b603e9,
0x68e6b158,
0xc61bf402,
-/* 0x010a: intr */
+/* 0x011a: intr */
0x00f900f8,
0x80f904bd,
0xa0f990f9,
0xf40289c4,
0x0080230b,
0x58e7f09b,
- 0x98cb21f4,
+ 0x98db21f4,
0x96b09b09,
0x110bf400,
0xb63407f0,
0x09d00604,
0x8004bd00,
-/* 0x016e: intr_skip_watchdog */
+/* 0x017e: intr_skip_watchdog */
0x89e49a09,
0x0bf40800,
0x8897f148,
0x48e7f1c0,
0x53e3f14f,
0x00d7f054,
- 0x02b921f5,
+ 0x034221f5,
0x07f1c0fc,
0x04b604c0,
0x000cd006,
-/* 0x01ae: intr_subintr_skip_fifo */
+/* 0x01be: intr_subintr_skip_fifo */
0x07f104bd,
0x04b60688,
0x0009d006,
-/* 0x01ba: intr_skip_subintr */
+/* 0x01ca: intr_skip_subintr */
0x89c404bd,
0x070bf420,
0xffbfa4f1,
-/* 0x01c4: intr_skip_pause */
+/* 0x01d4: intr_skip_pause */
0xf44089c4,
0xa4f1070b,
-/* 0x01ce: intr_skip_user0 */
+/* 0x01de: intr_skip_user0 */
0x07f0ffbf,
0x0604b604,
0xbd0008d0,
0x90fca0fc,
0x00fc80fc,
0xf80032f4,
-/* 0x01f5: timer */
- 0x1032f401,
- 0xb003f898,
- 0x1cf40086,
- 0x03fe8051,
+/* 0x0205: ticks_from_ns */
+ 0xf9c0f901,
+ 0xcbd7f1b0,
+ 0x00d3f000,
+ 0x041321f5,
+ 0x03e8ccec,
+ 0xf400b4b0,
+ 0xeeec120b,
+ 0xd7f103e8,
+ 0xd3f000cb,
+ 0x1321f500,
+/* 0x022d: ticks_from_ns_quit */
+ 0x02ceb904,
+ 0xc0fcb0fc,
+/* 0x0236: ticks_from_us */
+ 0xc0f900f8,
+ 0xd7f1b0f9,
+ 0xd3f000cb,
+ 0x1321f500,
+ 0x02ceb904,
+ 0xf400b4b0,
+ 0xe4bd050b,
+/* 0x0250: ticks_from_us_quit */
+ 0xc0fcb0fc,
+/* 0x0256: ticks_to_us */
+ 0xd7f100f8,
+ 0xd3f000cb,
+ 0xecedff00,
+/* 0x0262: timer */
+ 0x90f900f8,
+ 0x32f480f9,
+ 0x03f89810,
+ 0xf40086b0,
+ 0x84bd651c,
0xb63807f0,
0x08d00604,
0xf004bd00,
- 0x84b60887,
+ 0x84b63487,
0x0088cf06,
- 0xf40284f0,
- 0x87f0261b,
- 0x0684b634,
- 0xb80088cf,
- 0x0bf406e0,
- 0x06e8b809,
-/* 0x0233: timer_reset */
- 0xf01f1ef4,
- 0x04b63407,
- 0x000ed006,
- 0x0e8004bd,
-/* 0x0241: timer_enable */
- 0x0187f09a,
+ 0xbb9a0998,
+ 0xe9bb0298,
+ 0x03fe8000,
+ 0xb60887f0,
+ 0x88cf0684,
+ 0x0284f000,
+ 0xf0261bf4,
+ 0x84b63487,
+ 0x0088cf06,
+ 0xf406e0b8,
+ 0xe8b8090b,
+ 0x111cf406,
+/* 0x02b8: timer_reset */
+ 0xb63407f0,
+ 0x0ed00604,
+ 0x8004bd00,
+/* 0x02c6: timer_enable */
+ 0x87f09a0e,
+ 0x3807f001,
+ 0xd00604b6,
+ 0x04bd0008,
+/* 0x02d4: timer_done */
+ 0xfc1031f4,
+ 0xf890fc80,
+/* 0x02dd: send_proc */
+ 0xf980f900,
+ 0x05e89890,
+ 0xf004e998,
+ 0x89b80486,
+ 0x2a0bf406,
+ 0x940398c4,
+ 0x80b60488,
+ 0x008ebb18,
+ 0x8000fa98,
+ 0x8d80008a,
+ 0x028c8001,
+ 0xb6038b80,
+ 0x94f00190,
+ 0x04e98007,
+/* 0x0317: send_done */
+ 0xfc0231f4,
+ 0xf880fc90,
+/* 0x031d: find */
+ 0xf080f900,
+ 0x31f45887,
+/* 0x0325: find_loop */
+ 0x008a9801,
+ 0xf406aeb8,
+ 0x80b6100b,
+ 0x6886b158,
+ 0xf01bf402,
+/* 0x033b: find_done */
+ 0xb90132f4,
+ 0x80fc028e,
+/* 0x0342: send */
+ 0x21f500f8,
+ 0x01f4031d,
+/* 0x034b: recv */
+ 0xf900f897,
+ 0x9880f990,
+ 0xe99805e8,
+ 0x0132f404,
+ 0xf40689b8,
+ 0x89c43d0b,
+ 0x0180b603,
+ 0x800784f0,
+ 0xea9805e8,
+ 0xfef0f902,
+ 0xf0f9018f,
+ 0x9402efb9,
+ 0xe9bb0499,
+ 0x18e0b600,
+ 0x9803eb98,
+ 0xed9802ec,
+ 0x00ee9801,
+ 0xf0fca5f9,
+ 0xf400f8fe,
+ 0xf0fc0131,
+/* 0x0398: recv_done */
+ 0x90fc80fc,
+/* 0x039e: init */
+ 0x17f100f8,
+ 0x14b60108,
+ 0x0011cf06,
+ 0x010911e7,
+ 0xfe0814b6,
+ 0x17f10014,
+ 0x13f000e0,
+ 0x1c07f000,
+ 0xd00604b6,
+ 0x04bd0001,
+ 0xf0ff17f0,
+ 0x04b61407,
+ 0x0001d006,
+ 0x17f004bd,
+ 0x0015f102,
+ 0x1007f008,
+ 0xd00604b6,
+ 0x04bd0001,
+ 0x011a17f1,
+ 0xfe0013f0,
+ 0x31f40010,
+ 0x0117f010,
0xb63807f0,
- 0x08d00604,
-/* 0x024f: timer_done */
- 0xf404bd00,
- 0x00f81031,
-/* 0x0254: send_proc */
- 0x90f980f9,
- 0x9805e898,
- 0x86f004e9,
- 0x0689b804,
- 0xc42a0bf4,
- 0x88940398,
- 0x1880b604,
- 0x98008ebb,
- 0x8a8000fa,
- 0x018d8000,
- 0x80028c80,
- 0x90b6038b,
- 0x0794f001,
- 0xf404e980,
-/* 0x028e: send_done */
- 0x90fc0231,
- 0x00f880fc,
-/* 0x0294: find */
- 0x87f080f9,
- 0x0131f458,
-/* 0x029c: find_loop */
- 0xb8008a98,
- 0x0bf406ae,
- 0x5880b610,
- 0x026886b1,
- 0xf4f01bf4,
-/* 0x02b2: find_done */
- 0x8eb90132,
- 0xf880fc02,
-/* 0x02b9: send */
- 0x9421f500,
- 0x9701f402,
-/* 0x02c2: recv */
- 0xe89800f8,
- 0x04e99805,
- 0xb80132f4,
- 0x0bf40689,
- 0x0389c43d,
- 0xf00180b6,
- 0xe8800784,
- 0x02ea9805,
- 0x8ffef0f9,
- 0xb9f0f901,
- 0x999402ef,
- 0x00e9bb04,
- 0x9818e0b6,
- 0xec9803eb,
- 0x01ed9802,
- 0xf900ee98,
- 0xfef0fca5,
- 0x31f400f8,
-/* 0x030b: recv_done */
- 0xf8f0fc01,
-/* 0x030d: init */
- 0x0817f100,
- 0x0614b601,
- 0xe70011cf,
- 0xb6010911,
- 0x14fe0814,
- 0xe017f100,
- 0x0013f000,
- 0xb61c07f0,
0x01d00604,
0xf004bd00,
- 0x07f0ff17,
- 0x0604b614,
- 0xbd0001d0,
- 0x0217f004,
- 0x080015f1,
- 0xb61007f0,
- 0x01d00604,
- 0xf104bd00,
- 0xf0010a17,
- 0x10fe0013,
- 0x1031f400,
- 0xf00117f0,
- 0x04b63807,
- 0x0001d006,
- 0xf7f004bd,
-/* 0x0371: init_proc */
- 0x01f19858,
- 0xf40016b0,
- 0x15f9fa0b,
- 0xf458f0b6,
-/* 0x0382: host_send */
- 0x17f1f20e,
- 0x14b604b0,
- 0x0011cf06,
- 0x04a027f1,
- 0xcf0624b6,
- 0x12b80022,
- 0x320bf406,
- 0x94071ec4,
- 0xe0b704ee,
- 0xeb980270,
- 0x02ec9803,
- 0x9801ed98,
- 0x21f500ee,
- 0x10b602b9,
- 0x0f1ec401,
- 0x04b007f1,
- 0xd00604b6,
- 0x04bd000e,
-/* 0x03cb: host_send_done */
- 0xf8ba0ef4,
-/* 0x03cd: host_recv */
- 0x4917f100,
- 0x5413f14e,
- 0x06e1b852,
-/* 0x03db: host_recv_wait */
- 0xf1aa0bf4,
- 0xb604cc17,
- 0x11cf0614,
- 0xc827f100,
- 0x0624b604,
- 0xf00022cf,
- 0x12b80816,
- 0xe60bf406,
- 0xb60723c4,
- 0x30b70434,
- 0x3b8002f0,
- 0x023c8003,
- 0x80013d80,
- 0x20b6003e,
- 0x0f24f001,
- 0x04c807f1,
+/* 0x0402: init_proc */
+ 0xf19858f7,
+ 0x0016b001,
+ 0xf9fa0bf4,
+ 0x58f0b615,
+/* 0x0413: mulu32_32_64 */
+ 0xf9f20ef4,
+ 0xf920f910,
+ 0x9540f930,
+ 0xd29510e1,
+ 0xbdc4bd10,
+ 0xc0edffb4,
+ 0xb9301dff,
+ 0x34f10234,
+ 0x34b6ffff,
+ 0x1045b610,
+ 0xbb00c3bb,
+ 0xe2ff01b4,
+ 0x0234b930,
+ 0xffff34f1,
+ 0xb61034b6,
+ 0xc3bb1045,
+ 0x01b4bb00,
+ 0xbb3012ff,
+ 0x40fc00b3,
+ 0x20fc30fc,
+ 0x00f810fc,
+/* 0x0464: host_send */
+ 0x04b017f1,
+ 0xcf0614b6,
+ 0x27f10011,
+ 0x24b604a0,
+ 0x0022cf06,
+ 0xf40612b8,
+ 0x1ec4320b,
+ 0x04ee9407,
+ 0x0270e0b7,
+ 0x9803eb98,
+ 0xed9802ec,
+ 0x00ee9801,
+ 0x034221f5,
+ 0xc40110b6,
+ 0x07f10f1e,
+ 0x04b604b0,
+ 0x000ed006,
+ 0x0ef404bd,
+/* 0x04ad: host_send_done */
+/* 0x04af: host_recv */
+ 0xf100f8ba,
+ 0xf14e4917,
+ 0xb8525413,
+ 0x0bf406e1,
+/* 0x04bd: host_recv_wait */
+ 0xcc17f1aa,
+ 0x0614b604,
+ 0xf10011cf,
+ 0xb604c827,
+ 0x22cf0624,
+ 0x0816f000,
+ 0xf40612b8,
+ 0x23c4e60b,
+ 0x0434b607,
+ 0x02f030b7,
+ 0x80033b80,
+ 0x3d80023c,
+ 0x003e8001,
+ 0xf00120b6,
+ 0x07f10f24,
+ 0x04b604c8,
+ 0x0002d006,
+ 0x27f004bd,
+ 0x0007f040,
0xd00604b6,
0x04bd0002,
- 0xf04027f0,
- 0x04b60007,
- 0x0002d006,
- 0x00f804bd,
-/* 0x0430: host_init */
- 0x008017f1,
- 0xf11014b6,
- 0xf1027015,
- 0xb604d007,
- 0x01d00604,
- 0xf104bd00,
- 0xb6008017,
- 0x15f11014,
- 0x07f102f0,
- 0x04b604dc,
- 0x0001d006,
- 0x17f004bd,
- 0xc407f101,
+/* 0x0512: host_init */
+ 0x17f100f8,
+ 0x14b60080,
+ 0x7015f110,
+ 0xd007f102,
0x0604b604,
0xbd0001d0,
-/* 0x046f: memx_func_enter */
- 0xf000f804,
+ 0x8017f104,
+ 0x1014b600,
+ 0x02f015f1,
+ 0x04dc07f1,
+ 0xd00604b6,
+ 0x04bd0001,
+ 0xf10117f0,
+ 0xb604c407,
+ 0x01d00604,
+ 0xf804bd00,
+/* 0x0551: memx_func_enter */
+ 0x2067f100,
+ 0x5d77f116,
+ 0xff73f1f5,
+ 0x026eb9ff,
+ 0xb90421f4,
+ 0x87fd02d8,
+ 0xf960f904,
+ 0xfcd0fc80,
+ 0x3f21f4e0,
+ 0xfffe77f1,
+ 0xffff73f1,
+ 0xf4026eb9,
+ 0xd8b90421,
+ 0x0487fd02,
+ 0x80f960f9,
+ 0xe0fcd0fc,
+ 0xf13f21f4,
+ 0xb926f067,
+ 0x21f4026e,
+ 0x02d8b904,
+ 0xf90487fd,
+ 0xfc80f960,
+ 0xf4e0fcd0,
+ 0x67f03f21,
+ 0xe007f104,
+ 0x0604b607,
+ 0xbd0006d0,
+/* 0x05bd: memx_func_enter_wait */
+ 0xc067f104,
+ 0x0664b607,
+ 0xf00066cf,
+ 0x0bf40464,
+ 0x2c67f0f3,
+ 0xcf0664b6,
+ 0x06800066,
+/* 0x05db: memx_func_leave */
+ 0xf000f8ee,
+ 0x64b62c67,
+ 0x0066cf06,
+ 0xf0ef0680,
0x07f10467,
- 0x04b607e0,
+ 0x04b607e4,
0x0006d006,
-/* 0x047e: memx_func_enter_wait */
+/* 0x05f6: memx_func_leave_wait */
0x67f104bd,
0x64b607c0,
0x0066cf06,
0xf40464f0,
- 0x1698f30b,
+ 0x67f1f31b,
+ 0x77f126f0,
+ 0x73f00001,
+ 0x026eb900,
+ 0xb90421f4,
+ 0x87fd02d8,
+ 0xf960f905,
+ 0xfcd0fc80,
+ 0x3f21f4e0,
+ 0x162067f1,
+ 0xf4026eb9,
+ 0xd8b90421,
+ 0x0587fd02,
+ 0x80f960f9,
+ 0xe0fcd0fc,
+ 0xf13f21f4,
+ 0xf00aa277,
+ 0x6eb90073,
+ 0x0421f402,
+ 0xfd02d8b9,
+ 0x60f90587,
+ 0xd0fc80f9,
+ 0x21f4e0fc,
+/* 0x0663: memx_func_wait_vblank */
+ 0x9800f83f,
+ 0x66b00016,
+ 0x130bf400,
+ 0xf40166b0,
+ 0x0ef4060b,
+/* 0x0675: memx_func_wait_vblank_head1 */
+ 0x2077f12e,
+ 0x070ef400,
+/* 0x067c: memx_func_wait_vblank_head0 */
+ 0x000877f1,
+/* 0x0680: memx_func_wait_vblank_0 */
+ 0x07c467f1,
+ 0xcf0664b6,
+ 0x67fd0066,
+ 0xf31bf404,
+/* 0x0690: memx_func_wait_vblank_1 */
+ 0x07c467f1,
+ 0xcf0664b6,
+ 0x67fd0066,
+ 0xf30bf404,
+/* 0x06a0: memx_func_wait_vblank_fini */
+ 0xf80410b6,
+/* 0x06a5: memx_func_wr32 */
+ 0x00169800,
+ 0xb6011598,
+ 0x60f90810,
+ 0xd0fc50f9,
+ 0x21f4e0fc,
+ 0x0242b63f,
+ 0xf8e91bf4,
+/* 0x06c1: memx_func_wait */
+ 0x2c87f000,
+ 0xcf0684b6,
+ 0x1e980088,
+ 0x011d9800,
+ 0x98021c98,
+ 0x10b6031b,
+ 0xa421f410,
+/* 0x06de: memx_func_delay */
+ 0x1e9800f8,
0x0410b600,
-/* 0x0496: memx_func_leave */
- 0x67f000f8,
- 0xe407f104,
+ 0xf87f21f4,
+/* 0x06e9: memx_exec */
+ 0xf9e0f900,
+ 0x02c1b9d0,
+/* 0x06f3: memx_exec_next */
+ 0x9802b2b9,
+ 0x10b60013,
+ 0xf034e704,
+ 0xe033e701,
+ 0x0132b601,
+ 0x980c30f0,
+ 0x55f9de35,
+ 0xf40612b8,
+ 0x0b98e41e,
+ 0xef0c98ee,
+ 0xf102cbbb,
+ 0xb607c4b7,
+ 0xbbcf06b4,
+ 0xfcd0fc00,
+ 0x4221f5e0,
+/* 0x072f: memx_info */
+ 0xf100f803,
+ 0xf103c0c7,
+ 0xf50800b7,
+ 0xf8034221,
+/* 0x073d: memx_recv */
+ 0x01d6b000,
+ 0xb0a90bf4,
+ 0x0bf400d6,
+/* 0x074b: memx_init */
+ 0xf800f8e9,
+/* 0x074d: perf_recv */
+/* 0x074f: perf_init */
+ 0xf800f800,
+/* 0x0751: i2c_drive_scl */
+ 0x0036b000,
+ 0xf1110bf4,
+ 0xb607e007,
+ 0x01d00604,
+ 0xf804bd00,
+/* 0x0765: i2c_drive_scl_lo */
+ 0xe407f100,
0x0604b607,
- 0xbd0006d0,
-/* 0x04a5: memx_func_leave_wait */
- 0xc067f104,
- 0x0664b607,
- 0xf00066cf,
- 0x1bf40464,
-/* 0x04b7: memx_func_wr32 */
- 0x9800f8f3,
- 0x15980016,
- 0x0810b601,
- 0x50f960f9,
- 0xe0fcd0fc,
- 0xb63f21f4,
- 0x1bf40242,
-/* 0x04d3: memx_func_wait */
- 0xf000f8e9,
- 0x84b62c87,
- 0x0088cf06,
- 0x98001e98,
- 0x1c98011d,
- 0x031b9802,
- 0xf41010b6,
- 0x00f89c21,
-/* 0x04f0: memx_func_delay */
- 0xb6001e98,
- 0x21f40410,
-/* 0x04fb: memx_exec */
- 0xf900f87f,
- 0xb9d0f9e0,
- 0xb2b902c1,
-/* 0x0505: memx_exec_next */
- 0x00139802,
- 0x950410b6,
- 0x30f01034,
- 0xde35980c,
- 0x12b855f9,
- 0xec1ef406,
- 0xe0fcd0fc,
- 0x02b921f5,
-/* 0x0526: memx_info */
- 0xc7f100f8,
- 0xb7f103ac,
- 0x21f50800,
- 0x00f802b9,
-/* 0x0534: memx_recv */
- 0xf401d6b0,
- 0xd6b0c40b,
- 0xe90bf400,
-/* 0x0542: memx_init */
- 0x00f800f8,
-/* 0x0544: perf_recv */
-/* 0x0546: perf_init */
- 0x00f800f8,
-/* 0x0548: i2c_drive_scl */
- 0xf40036b0,
- 0x07f1110b,
- 0x04b607e0,
- 0x0001d006,
- 0x00f804bd,
-/* 0x055c: i2c_drive_scl_lo */
- 0x07e407f1,
- 0xd00604b6,
- 0x04bd0001,
-/* 0x056a: i2c_drive_sda */
- 0x36b000f8,
- 0x110bf400,
- 0x07e007f1,
- 0xd00604b6,
- 0x04bd0002,
-/* 0x057e: i2c_drive_sda_lo */
- 0x07f100f8,
- 0x04b607e4,
- 0x0002d006,
- 0x00f804bd,
-/* 0x058c: i2c_sense_scl */
- 0xf10132f4,
- 0xb607c437,
- 0x33cf0634,
- 0x0431fd00,
- 0xf4060bf4,
-/* 0x05a2: i2c_sense_scl_done */
- 0x00f80131,
-/* 0x05a4: i2c_sense_sda */
- 0xf10132f4,
- 0xb607c437,
- 0x33cf0634,
- 0x0432fd00,
- 0xf4060bf4,
-/* 0x05ba: i2c_sense_sda_done */
- 0x00f80131,
-/* 0x05bc: i2c_raise_scl */
- 0x47f140f9,
- 0x37f00898,
- 0x4821f501,
-/* 0x05c9: i2c_raise_scl_wait */
- 0xe8e7f105,
- 0x7f21f403,
- 0x058c21f5,
- 0xb60901f4,
- 0x1bf40142,
-/* 0x05dd: i2c_raise_scl_done */
- 0xf840fcef,
-/* 0x05e1: i2c_start */
- 0x8c21f500,
- 0x0d11f405,
- 0x05a421f5,
- 0xf40611f4,
-/* 0x05f2: i2c_start_rep */
- 0x37f0300e,
- 0x4821f500,
- 0x0137f005,
- 0x056a21f5,
- 0xb60076bb,
- 0x50f90465,
- 0xbb046594,
- 0x50bd0256,
- 0xfc0475fd,
- 0xbc21f550,
- 0x0464b605,
-/* 0x061f: i2c_start_send */
- 0xf01f11f4,
+ 0xbd0001d0,
+/* 0x0773: i2c_drive_sda */
+ 0xb000f804,
+ 0x0bf40036,
+ 0xe007f111,
+ 0x0604b607,
+ 0xbd0002d0,
+/* 0x0787: i2c_drive_sda_lo */
+ 0xf100f804,
+ 0xb607e407,
+ 0x02d00604,
+ 0xf804bd00,
+/* 0x0795: i2c_sense_scl */
+ 0x0132f400,
+ 0x07c437f1,
+ 0xcf0634b6,
+ 0x31fd0033,
+ 0x060bf404,
+/* 0x07ab: i2c_sense_scl_done */
+ 0xf80131f4,
+/* 0x07ad: i2c_sense_sda */
+ 0x0132f400,
+ 0x07c437f1,
+ 0xcf0634b6,
+ 0x32fd0033,
+ 0x060bf404,
+/* 0x07c3: i2c_sense_sda_done */
+ 0xf80131f4,
+/* 0x07c5: i2c_raise_scl */
+ 0xf140f900,
+ 0xf0089847,
+ 0x21f50137,
+/* 0x07d2: i2c_raise_scl_wait */
+ 0xe7f10751,
+ 0x21f403e8,
+ 0x9521f57f,
+ 0x0901f407,
+ 0xf40142b6,
+/* 0x07e6: i2c_raise_scl_done */
+ 0x40fcef1b,
+/* 0x07ea: i2c_start */
+ 0x21f500f8,
+ 0x11f40795,
+ 0xad21f50d,
+ 0x0611f407,
+/* 0x07fb: i2c_start_rep */
+ 0xf0300ef4,
0x21f50037,
- 0xe7f1056a,
- 0x21f41388,
- 0x0037f07f,
- 0x054821f5,
- 0x1388e7f1,
-/* 0x063b: i2c_start_out */
- 0xf87f21f4,
-/* 0x063d: i2c_stop */
- 0x0037f000,
- 0x054821f5,
+ 0x37f00751,
+ 0x7321f501,
+ 0x0076bb07,
+ 0xf90465b6,
+ 0x04659450,
+ 0xbd0256bb,
+ 0x0475fd50,
+ 0x21f550fc,
+ 0x64b607c5,
+ 0x1f11f404,
+/* 0x0828: i2c_start_send */
0xf50037f0,
- 0xf1056a21,
- 0xf403e8e7,
+ 0xf1077321,
+ 0xf41388e7,
0x37f07f21,
- 0x4821f501,
- 0x88e7f105,
+ 0x5121f500,
+ 0x88e7f107,
0x7f21f413,
- 0xf50137f0,
- 0xf1056a21,
- 0xf41388e7,
- 0x00f87f21,
-/* 0x0670: i2c_bitw */
- 0x056a21f5,
+/* 0x0844: i2c_start_out */
+/* 0x0846: i2c_stop */
+ 0x37f000f8,
+ 0x5121f500,
+ 0x0037f007,
+ 0x077321f5,
0x03e8e7f1,
- 0xbb7f21f4,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x05bc21f5,
- 0xf40464b6,
- 0xe7f11811,
+ 0xf07f21f4,
+ 0x21f50137,
+ 0xe7f10751,
0x21f41388,
- 0x0037f07f,
- 0x054821f5,
+ 0x0137f07f,
+ 0x077321f5,
0x1388e7f1,
-/* 0x06af: i2c_bitw_out */
0xf87f21f4,
-/* 0x06b1: i2c_bitr */
- 0x0137f000,
- 0x056a21f5,
- 0x03e8e7f1,
- 0xbb7f21f4,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x05bc21f5,
- 0xf40464b6,
- 0x21f51b11,
- 0x37f005a4,
- 0x4821f500,
- 0x88e7f105,
+/* 0x0879: i2c_bitw */
+ 0x7321f500,
+ 0xe8e7f107,
+ 0x7f21f403,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0xc521f550,
+ 0x0464b607,
+ 0xf11811f4,
+ 0xf41388e7,
+ 0x37f07f21,
+ 0x5121f500,
+ 0x88e7f107,
0x7f21f413,
- 0xf4013cf0,
-/* 0x06f6: i2c_bitr_done */
- 0x00f80131,
-/* 0x06f8: i2c_get_byte */
- 0xf00057f0,
-/* 0x06fe: i2c_get_byte_next */
- 0x54b60847,
+/* 0x08b8: i2c_bitw_out */
+/* 0x08ba: i2c_bitr */
+ 0x37f000f8,
+ 0x7321f501,
+ 0xe8e7f107,
+ 0x7f21f403,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0xc521f550,
+ 0x0464b607,
+ 0xf51b11f4,
+ 0xf007ad21,
+ 0x21f50037,
+ 0xe7f10751,
+ 0x21f41388,
+ 0x013cf07f,
+/* 0x08ff: i2c_bitr_done */
+ 0xf80131f4,
+/* 0x0901: i2c_get_byte */
+ 0x0057f000,
+/* 0x0907: i2c_get_byte_next */
+ 0xb60847f0,
+ 0x76bb0154,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0xf550fc04,
+ 0xb608ba21,
+ 0x11f40464,
+ 0x0553fd2b,
+ 0xf40142b6,
+ 0x37f0d81b,
0x0076bb01,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b606b1,
- 0x2b11f404,
- 0xb60553fd,
- 0x1bf40142,
- 0x0137f0d8,
- 0xb60076bb,
- 0x50f90465,
- 0xbb046594,
- 0x50bd0256,
- 0xfc0475fd,
- 0x7021f550,
- 0x0464b606,
-/* 0x0748: i2c_get_byte_done */
-/* 0x074a: i2c_put_byte */
- 0x47f000f8,
-/* 0x074d: i2c_put_byte_next */
- 0x0142b608,
- 0xbb3854ff,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x067021f5,
- 0xf40464b6,
- 0x46b03411,
- 0xd81bf400,
+ 0x64b60879,
+/* 0x0951: i2c_get_byte_done */
+/* 0x0953: i2c_put_byte */
+ 0xf000f804,
+/* 0x0956: i2c_put_byte_next */
+ 0x42b60847,
+ 0x3854ff01,
0xb60076bb,
0x50f90465,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0xb121f550,
- 0x0464b606,
- 0xbb0f11f4,
- 0x36b00076,
- 0x061bf401,
-/* 0x07a3: i2c_put_byte_done */
- 0xf80132f4,
-/* 0x07a5: i2c_addr */
- 0x0076bb00,
+ 0x7921f550,
+ 0x0464b608,
+ 0xb03411f4,
+ 0x1bf40046,
+ 0x0076bbd8,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b605e1,
- 0x2911f404,
- 0x012ec3e7,
- 0xfd0134b6,
- 0x76bb0553,
+ 0x64b608ba,
+ 0x0f11f404,
+ 0xb00076bb,
+ 0x1bf40136,
+ 0x0132f406,
+/* 0x09ac: i2c_put_byte_done */
+/* 0x09ae: i2c_addr */
+ 0x76bb00f8,
0x0465b600,
0x659450f9,
0x0256bb04,
0x75fd50bd,
0xf550fc04,
- 0xb6074a21,
-/* 0x07ea: i2c_addr_done */
- 0x00f80464,
-/* 0x07ec: i2c_acquire_addr */
- 0xb6f8cec7,
- 0xe0b702e4,
- 0xee980bfc,
-/* 0x07fb: i2c_acquire */
- 0xf500f800,
- 0xf407ec21,
- 0xd9f00421,
- 0x3f21f403,
-/* 0x080a: i2c_release */
- 0x21f500f8,
- 0x21f407ec,
- 0x03daf004,
- 0xf83f21f4,
-/* 0x0819: i2c_recv */
- 0x0132f400,
- 0xb6f8c1c7,
- 0x16b00214,
- 0x3a1ff528,
- 0xd413a001,
- 0x0032980b,
- 0x0bac13a0,
- 0xf4003198,
- 0xd0f90231,
- 0xd0f9e0f9,
- 0x000067f1,
- 0x100063f1,
- 0xbb016792,
+ 0xb607ea21,
+ 0x11f40464,
+ 0x2ec3e729,
+ 0x0134b601,
+ 0xbb0553fd,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x07fb21f5,
- 0xfc0464b6,
- 0x00d6b0d0,
- 0x00b31bf5,
- 0xbb0057f0,
+ 0x095321f5,
+/* 0x09f3: i2c_addr_done */
+ 0xf80464b6,
+/* 0x09f5: i2c_acquire_addr */
+ 0xf8cec700,
+ 0xb702e4b6,
+ 0x980c10e0,
+ 0x00f800ee,
+/* 0x0a04: i2c_acquire */
+ 0x09f521f5,
+ 0xf00421f4,
+ 0x21f403d9,
+/* 0x0a13: i2c_release */
+ 0xf500f83f,
+ 0xf409f521,
+ 0xdaf00421,
+ 0x3f21f403,
+/* 0x0a22: i2c_recv */
+ 0x32f400f8,
+ 0xf8c1c701,
+ 0xb00214b6,
+ 0x1ff52816,
+ 0x13a0013a,
+ 0x32980be8,
+ 0xc013a000,
+ 0x0031980b,
+ 0xf90231f4,
+ 0xf9e0f9d0,
+ 0x0067f1d0,
+ 0x0063f100,
+ 0x01679210,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0x0421f550,
+ 0x0464b60a,
+ 0xd6b0d0fc,
+ 0xb31bf500,
+ 0x0057f000,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0xae21f550,
+ 0x0464b609,
+ 0x00d011f5,
+ 0xbbe0c5c7,
0x65b60076,
0x9450f904,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x07a521f5,
+ 0x095321f5,
0xf50464b6,
- 0xc700d011,
- 0x76bbe0c5,
+ 0xf000ad11,
+ 0x76bb0157,
0x0465b600,
0x659450f9,
0x0256bb04,
0x75fd50bd,
0xf550fc04,
- 0xb6074a21,
+ 0xb609ae21,
0x11f50464,
- 0x57f000ad,
- 0x0076bb01,
- 0xf90465b6,
- 0x04659450,
- 0xbd0256bb,
- 0x0475fd50,
- 0x21f550fc,
- 0x64b607a5,
- 0x8a11f504,
- 0x0076bb00,
- 0xf90465b6,
- 0x04659450,
- 0xbd0256bb,
- 0x0475fd50,
- 0x21f550fc,
- 0x64b606f8,
- 0x6a11f404,
- 0xbbe05bcb,
- 0x65b60076,
- 0x9450f904,
- 0x56bb0465,
- 0xfd50bd02,
- 0x50fc0475,
- 0x063d21f5,
- 0xb90464b6,
- 0x74bd025b,
-/* 0x091f: i2c_recv_not_rd08 */
- 0xb0430ef4,
- 0x1bf401d6,
- 0x0057f03d,
- 0x07a521f5,
- 0xc73311f4,
- 0x21f5e0c5,
- 0x11f4074a,
- 0x0057f029,
- 0x07a521f5,
- 0xc71f11f4,
- 0x21f5e0b5,
- 0x11f4074a,
- 0x3d21f515,
- 0xc774bd06,
- 0x1bf408c5,
- 0x0232f409,
-/* 0x095f: i2c_recv_not_wr08 */
-/* 0x095f: i2c_recv_done */
- 0xc7030ef4,
- 0x21f5f8ce,
- 0xe0fc080a,
- 0x12f4d0fc,
- 0x027cb90a,
- 0x02b921f5,
-/* 0x0974: i2c_recv_exit */
-/* 0x0976: i2c_init */
+ 0x76bb008a,
+ 0x0465b600,
+ 0x659450f9,
+ 0x0256bb04,
+ 0x75fd50bd,
+ 0xf550fc04,
+ 0xb6090121,
+ 0x11f40464,
+ 0xe05bcb6a,
+ 0xb60076bb,
+ 0x50f90465,
+ 0xbb046594,
+ 0x50bd0256,
+ 0xfc0475fd,
+ 0x4621f550,
+ 0x0464b608,
+ 0xbd025bb9,
+ 0x430ef474,
+/* 0x0b28: i2c_recv_not_rd08 */
+ 0xf401d6b0,
+ 0x57f03d1b,
+ 0xae21f500,
+ 0x3311f409,
+ 0xf5e0c5c7,
+ 0xf4095321,
+ 0x57f02911,
+ 0xae21f500,
+ 0x1f11f409,
+ 0xf5e0b5c7,
+ 0xf4095321,
+ 0x21f51511,
+ 0x74bd0846,
+ 0xf408c5c7,
+ 0x32f4091b,
+ 0x030ef402,
+/* 0x0b68: i2c_recv_not_wr08 */
+/* 0x0b68: i2c_recv_done */
+ 0xf5f8cec7,
+ 0xfc0a1321,
+ 0xf4d0fce0,
+ 0x7cb90a12,
+ 0x4221f502,
+/* 0x0b7d: i2c_recv_exit */
+/* 0x0b7f: i2c_init */
+ 0xf800f803,
+/* 0x0b81: test_recv */
+ 0xd817f100,
+ 0x0614b605,
+ 0xb60011cf,
+ 0x07f10110,
+ 0x04b605d8,
+ 0x0001d006,
+ 0xe7f104bd,
+ 0xe3f1d900,
+ 0x21f5134f,
+ 0x00f80262,
+/* 0x0ba8: test_init */
+ 0x0800e7f1,
+ 0x026221f5,
+/* 0x0bb2: idle_recv */
0x00f800f8,
-/* 0x0978: test_recv */
- 0x05d817f1,
- 0xcf0614b6,
- 0x10b60011,
- 0xd807f101,
- 0x0604b605,
- 0xbd0001d0,
- 0x00e7f104,
- 0x4fe3f1d9,
- 0xf521f513,
-/* 0x099f: test_init */
- 0xf100f801,
- 0xf50800e7,
- 0xf801f521,
-/* 0x09a9: idle_recv */
-/* 0x09ab: idle */
- 0xf400f800,
- 0x17f10031,
- 0x14b605d4,
- 0x0011cf06,
- 0xf10110b6,
- 0xb605d407,
- 0x01d00604,
-/* 0x09c7: idle_loop */
- 0xf004bd00,
- 0x32f45817,
-/* 0x09cd: idle_proc */
-/* 0x09cd: idle_proc_exec */
- 0xb910f902,
- 0x21f5021e,
- 0x10fc02c2,
- 0xf40911f4,
- 0x0ef40231,
-/* 0x09e1: idle_proc_next */
- 0x5810b6ef,
- 0xf4061fb8,
- 0x02f4e61b,
- 0x0028f4dd,
- 0x00bb0ef4,
- 0x00000000,
- 0x00000000,
+/* 0x0bb4: idle */
+ 0xf10031f4,
+ 0xb605d417,
+ 0x11cf0614,
+ 0x0110b600,
+ 0x05d407f1,
+ 0xd00604b6,
+ 0x04bd0001,
+/* 0x0bd0: idle_loop */
+ 0xf45817f0,
+/* 0x0bd6: idle_proc */
+/* 0x0bd6: idle_proc_exec */
+ 0x10f90232,
+ 0xf5021eb9,
+ 0xfc034b21,
+ 0x0911f410,
+ 0xf40231f4,
+/* 0x0bea: idle_proc_next */
+ 0x10b6ef0e,
+ 0x061fb858,
+ 0xf4e61bf4,
+ 0x28f4dd02,
+ 0xbb0ef400,
0x00000000,
};
*/
#define NVKM_PPWR_CHIPSET GF119
+#define HW_TICKS_PER_US 324
//#define NVKM_FALCON_PC24
#define NVKM_FALCON_UNSHIFTED_IO
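The new HW_TICKS_PER_US constant feeds the ticks_from_ns/ticks_from_us/ticks_to_us
routines visible in the disassembly above, which are built on the mulu32_32_64
helper pulled in from the newly included arith.fuc. Only the assembled hex is
shown here, so the following C restatement of the conversion math is an
approximation (the example_* names are invented):

/* approximate C restatement of the fuc timer-conversion helpers; the
 * firmware uses a 32x32->64 multiply (mulu32_32_64) so the intermediate
 * product cannot overflow */
static inline u64 example_ticks_from_us(u32 us)
{
	return (u64)us * HW_TICKS_PER_US;
}
static inline u64 example_ticks_from_ns(u32 ns)
{
	return (u64)ns * HW_TICKS_PER_US / 1000;
}
static inline u32 example_ticks_to_us(u64 ticks)
{
	return ticks / HW_TICKS_PER_US;
}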
.section #nvd0_pwr_data
#define INCLUDE_PROC
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
#define INCLUDE_DATA
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
.section #nvd0_pwr_code
#define INCLUDE_CODE
#include "kernel.fuc"
+#include "arith.fuc"
#include "host.fuc"
#include "memx.fuc"
#include "perf.fuc"
0x00000000,
/* 0x0058: proc_list_head */
0x54534f48,
- 0x000003be,
- 0x00000367,
+ 0x0000049d,
+ 0x00000446,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x584d454d,
- 0x000004b8,
- 0x000004aa,
+ 0x00000678,
+ 0x0000066a,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x46524550,
- 0x000004bc,
- 0x000004ba,
+ 0x0000067c,
+ 0x0000067a,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x5f433249,
- 0x000008d7,
- 0x0000077a,
+ 0x00000a97,
+ 0x0000093a,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x54534554,
- 0x000008fa,
- 0x000008d9,
+ 0x00000aba,
+ 0x00000a99,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x454c4449,
- 0x00000906,
- 0x00000904,
+ 0x00000ac6,
+ 0x00000ac4,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
/* 0x0370: memx_func_head */
- 0x00010000,
- 0x00000000,
- 0x000003f4,
-/* 0x037c: memx_func_next */
0x00000001,
0x00000000,
- 0x00000415,
+ 0x000004d3,
+/* 0x037c: memx_func_next */
0x00000002,
+ 0x00000000,
+ 0x00000554,
+ 0x00000003,
0x00000002,
- 0x00000430,
- 0x00040003,
+ 0x000005d8,
+ 0x00040004,
+ 0x00000000,
+ 0x000005f4,
+ 0x00010005,
+ 0x00000000,
+ 0x0000060e,
+ 0x00010006,
+ 0x00000000,
+ 0x000005d3,
+/* 0x03b8: memx_func_tail */
+/* 0x03b8: memx_ts_start */
0x00000000,
- 0x0000044c,
- 0x00010004,
+/* 0x03bc: memx_ts_end */
0x00000000,
- 0x00000466,
-/* 0x03ac: memx_func_tail */
-/* 0x03ac: memx_data_head */
+/* 0x03c0: memx_data_head */
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
-/* 0x0bac: memx_data_tail */
-/* 0x0bac: i2c_scl_map */
+/* 0x0bc0: memx_data_tail */
+/* 0x0bc0: i2c_scl_map */
0x00000400,
0x00000800,
0x00001000,
0x00020000,
0x00040000,
0x00080000,
-/* 0x0bd4: i2c_sda_map */
+/* 0x0be8: i2c_sda_map */
0x00100000,
0x00200000,
0x00400000,
0x10000000,
0x20000000,
0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
};
uint32_t nvd0_pwr_code[] = {
- 0x02bf0ef5,
+ 0x034d0ef5,
/* 0x0004: rd32 */
0x07a007f1,
0xbd000ed0,
0xd4f100dd,
0x1bf47000,
/* 0x0067: nsec */
- 0xf000f8f5,
+ 0xf900f8f5,
+ 0xf080f990,
0x88cf2c87,
-/* 0x006d: nsec_loop */
+/* 0x0071: nsec_loop */
0x2c97f000,
0xbb0099cf,
0x9eb80298,
0xf41ef406,
-/* 0x007e: wait */
- 0x87f000f8,
+ 0x90fc80fc,
+/* 0x0086: wait */
+ 0x90f900f8,
+ 0x87f080f9,
0x0088cf2c,
-/* 0x0084: wait_loop */
+/* 0x0090: wait_loop */
0xf402eeb9,
0xdab90421,
0x04adfd02,
0x0099cf2c,
0xb80298bb,
0x1ef4069b,
-/* 0x00a5: wait_done */
-/* 0x00a7: intr_watchdog */
- 0x9800f8e2,
+/* 0x00b1: wait_done */
+ 0xfc80fce2,
+/* 0x00b7: intr_watchdog */
+ 0x9800f890,
0x96b003e9,
0x2a0bf400,
0xbb9a0a98,
0x1cf4029a,
0x01d7f00f,
- 0x020621f5,
+ 0x028c21f5,
0x0ef494bd,
-/* 0x00c5: intr_watchdog_next_time */
+/* 0x00d5: intr_watchdog_next_time */
0x9b0a9815,
0xf400a6b0,
0x9ab8090b,
0x061cf406,
-/* 0x00d4: intr_watchdog_next_time_set */
-/* 0x00d7: intr_watchdog_next_proc */
+/* 0x00e4: intr_watchdog_next_time_set */
+/* 0x00e7: intr_watchdog_next_proc */
0x809b0980,
0xe0b603e9,
0x68e6b158,
0xc61bf402,
-/* 0x00e6: intr */
+/* 0x00f6: intr */
0x00f900f8,
0x80f904bd,
0xa0f990f9,
0x0bf40289,
0x9b008020,
0xf458e7f0,
- 0x0998a721,
+ 0x0998b721,
0x0096b09b,
0xf00e0bf4,
0x09d03407,
0x8004bd00,
-/* 0x013e: intr_skip_watchdog */
+/* 0x014e: intr_skip_watchdog */
0x89e49a09,
0x0bf40800,
0x8897f13c,
0xf14f48e7,
0xf05453e3,
0x21f500d7,
- 0xc0fc026b,
+ 0xc0fc02f1,
0x04c007f1,
0xbd000cd0,
-/* 0x0175: intr_subintr_skip_fifo */
+/* 0x0185: intr_subintr_skip_fifo */
0x8807f104,
0x0009d006,
-/* 0x017e: intr_skip_subintr */
+/* 0x018e: intr_skip_subintr */
0x89c404bd,
0x070bf420,
0xffbfa4f1,
-/* 0x0188: intr_skip_pause */
+/* 0x0198: intr_skip_pause */
0xf44089c4,
0xa4f1070b,
-/* 0x0192: intr_skip_user0 */
+/* 0x01a2: intr_skip_user0 */
0x07f0ffbf,
0x0008d004,
0x80fc04bd,
0xfca0fcb0,
0xfc80fc90,
0x0032f400,
-/* 0x01b6: timer */
- 0x32f401f8,
- 0x03f89810,
- 0xf40086b0,
- 0xfe80421c,
- 0x3807f003,
+/* 0x01c6: ticks_from_ns */
+ 0xc0f901f8,
+ 0xd7f1b0f9,
+ 0xd3f00144,
+ 0xb321f500,
+ 0xe8ccec03,
+ 0x00b4b003,
+ 0xec120bf4,
+ 0xf103e8ee,
+ 0xf00144d7,
+ 0x21f500d3,
+/* 0x01ee: ticks_from_ns_quit */
+ 0xceb903b3,
+ 0xfcb0fc02,
+/* 0x01f7: ticks_from_us */
+ 0xf900f8c0,
+ 0xf1b0f9c0,
+ 0xf00144d7,
+ 0x21f500d3,
+ 0xceb903b3,
+ 0x00b4b002,
+ 0xbd050bf4,
+/* 0x0211: ticks_from_us_quit */
+ 0xfcb0fce4,
+/* 0x0217: ticks_to_us */
+ 0xf100f8c0,
+ 0xf00144d7,
+ 0xedff00d3,
+/* 0x0223: timer */
+ 0xf900f8ec,
+ 0xf480f990,
+ 0xf8981032,
+ 0x0086b003,
+ 0xbd531cf4,
+ 0x3807f084,
0xbd0008d0,
- 0x0887f004,
- 0xf00088cf,
- 0x1bf40284,
- 0x3487f020,
- 0xb80088cf,
- 0x0bf406e0,
- 0x06e8b809,
-/* 0x01eb: timer_reset */
- 0xf0191ef4,
- 0x0ed03407,
- 0x8004bd00,
-/* 0x01f6: timer_enable */
- 0x87f09a0e,
- 0x3807f001,
- 0xbd0008d0,
-/* 0x0201: timer_done */
- 0x1031f404,
-/* 0x0206: send_proc */
- 0x80f900f8,
- 0xe89890f9,
+ 0x3487f004,
+ 0x980088cf,
+ 0x98bb9a09,
+ 0x00e9bb02,
+ 0xf003fe80,
+ 0x88cf0887,
+ 0x0284f000,
+ 0xf0201bf4,
+ 0x88cf3487,
+ 0x06e0b800,
+ 0xb8090bf4,
+ 0x1cf406e8,
+/* 0x026d: timer_reset */
+ 0x3407f00e,
+ 0xbd000ed0,
+ 0x9a0e8004,
+/* 0x0278: timer_enable */
+ 0xf00187f0,
+ 0x08d03807,
+/* 0x0283: timer_done */
+ 0xf404bd00,
+ 0x80fc1031,
+ 0x00f890fc,
+/* 0x028c: send_proc */
+ 0x90f980f9,
+ 0x9805e898,
+ 0x86f004e9,
+ 0x0689b804,
+ 0xc42a0bf4,
+ 0x88940398,
+ 0x1880b604,
+ 0x98008ebb,
+ 0x8a8000fa,
+ 0x018d8000,
+ 0x80028c80,
+ 0x90b6038b,
+ 0x0794f001,
+ 0xf404e980,
+/* 0x02c6: send_done */
+ 0x90fc0231,
+ 0x00f880fc,
+/* 0x02cc: find */
+ 0x87f080f9,
+ 0x0131f458,
+/* 0x02d4: find_loop */
+ 0xb8008a98,
+ 0x0bf406ae,
+ 0x5880b610,
+ 0x026886b1,
+ 0xf4f01bf4,
+/* 0x02ea: find_done */
+ 0x8eb90132,
+ 0xf880fc02,
+/* 0x02f1: send */
+ 0xcc21f500,
+ 0x9701f402,
+/* 0x02fa: recv */
+ 0x90f900f8,
+ 0xe89880f9,
0x04e99805,
- 0xb80486f0,
+ 0xb80132f4,
0x0bf40689,
- 0x0398c42a,
- 0xb6048894,
- 0x8ebb1880,
- 0x00fa9800,
- 0x80008a80,
- 0x8c80018d,
- 0x038b8002,
- 0xf00190b6,
- 0xe9800794,
- 0x0231f404,
-/* 0x0240: send_done */
- 0x80fc90fc,
-/* 0x0246: find */
- 0x80f900f8,
- 0xf45887f0,
-/* 0x024e: find_loop */
- 0x8a980131,
- 0x06aeb800,
- 0xb6100bf4,
- 0x86b15880,
- 0x1bf40268,
- 0x0132f4f0,
-/* 0x0264: find_done */
- 0xfc028eb9,
-/* 0x026b: send */
- 0xf500f880,
- 0xf4024621,
- 0x00f89701,
-/* 0x0274: recv */
- 0x9805e898,
- 0x32f404e9,
- 0x0689b801,
- 0xc43d0bf4,
- 0x80b60389,
- 0x0784f001,
- 0x9805e880,
- 0xf0f902ea,
- 0xf9018ffe,
- 0x02efb9f0,
- 0xbb049994,
- 0xe0b600e9,
- 0x03eb9818,
- 0x9802ec98,
- 0xee9801ed,
- 0xfca5f900,
- 0x00f8fef0,
- 0xfc0131f4,
-/* 0x02bd: recv_done */
-/* 0x02bf: init */
- 0xf100f8f0,
- 0xcf010817,
- 0x11e70011,
- 0x14b60109,
- 0x0014fe08,
- 0x00e017f1,
- 0xf00013f0,
- 0x01d01c07,
- 0xf004bd00,
- 0x07f0ff17,
- 0x0001d014,
- 0x17f004bd,
- 0x0015f102,
- 0x1007f008,
- 0xbd0001d0,
- 0xe617f104,
- 0x0013f000,
- 0xf40010fe,
- 0x17f01031,
- 0x3807f001,
- 0xbd0001d0,
- 0x58f7f004,
-/* 0x0314: init_proc */
- 0xb001f198,
- 0x0bf40016,
- 0xb615f9fa,
- 0x0ef458f0,
-/* 0x0325: host_send */
- 0xb017f1f2,
- 0x0011cf04,
- 0x04a027f1,
- 0xb80022cf,
- 0x0bf40612,
- 0x071ec42f,
- 0xb704ee94,
- 0x980270e0,
+ 0x0389c43d,
+ 0xf00180b6,
+ 0xe8800784,
+ 0x02ea9805,
+ 0x8ffef0f9,
+ 0xb9f0f901,
+ 0x999402ef,
+ 0x00e9bb04,
+ 0x9818e0b6,
0xec9803eb,
0x01ed9802,
- 0xf500ee98,
- 0xb6026b21,
- 0x1ec40110,
- 0xb007f10f,
- 0x000ed004,
- 0x0ef404bd,
-/* 0x0365: host_send_done */
-/* 0x0367: host_recv */
- 0xf100f8c3,
- 0xf14e4917,
- 0xb8525413,
- 0x0bf406e1,
-/* 0x0375: host_recv_wait */
- 0xcc17f1b3,
- 0x0011cf04,
- 0x04c827f1,
- 0xf00022cf,
- 0x12b80816,
- 0xec0bf406,
- 0xb60723c4,
- 0x30b70434,
- 0x3b8002f0,
- 0x023c8003,
- 0x80013d80,
- 0x20b6003e,
- 0x0f24f001,
- 0x04c807f1,
- 0xbd0002d0,
- 0x4027f004,
- 0xd00007f0,
- 0x04bd0002,
-/* 0x03be: host_init */
+ 0xf900ee98,
+ 0xfef0fca5,
+ 0x31f400f8,
+/* 0x0347: recv_done */
+ 0xfcf0fc01,
+ 0xf890fc80,
+/* 0x034d: init */
+ 0x0817f100,
+ 0x0011cf01,
+ 0x010911e7,
+ 0xfe0814b6,
+ 0x17f10014,
+ 0x13f000e0,
+ 0x1c07f000,
+ 0xbd0001d0,
+ 0xff17f004,
+ 0xd01407f0,
+ 0x04bd0001,
+ 0xf10217f0,
+ 0xf0080015,
+ 0x01d01007,
+ 0xf104bd00,
+ 0xf000f617,
+ 0x10fe0013,
+ 0x1031f400,
+ 0xf00117f0,
+ 0x01d03807,
+ 0xf004bd00,
+/* 0x03a2: init_proc */
+ 0xf19858f7,
+ 0x0016b001,
+ 0xf9fa0bf4,
+ 0x58f0b615,
+/* 0x03b3: mulu32_32_64 */
+ 0xf9f20ef4,
+ 0xf920f910,
+ 0x9540f930,
+ 0xd29510e1,
+ 0xbdc4bd10,
+ 0xc0edffb4,
+ 0xb9301dff,
+ 0x34f10234,
+ 0x34b6ffff,
+ 0x1045b610,
+ 0xbb00c3bb,
+ 0xe2ff01b4,
+ 0x0234b930,
+ 0xffff34f1,
+ 0xb61034b6,
+ 0xc3bb1045,
+ 0x01b4bb00,
+ 0xbb3012ff,
+ 0x40fc00b3,
+ 0x20fc30fc,
+ 0x00f810fc,
+/* 0x0404: host_send */
+ 0x04b017f1,
+ 0xf10011cf,
+ 0xcf04a027,
+ 0x12b80022,
+ 0x2f0bf406,
+ 0x94071ec4,
+ 0xe0b704ee,
+ 0xeb980270,
+ 0x02ec9803,
+ 0x9801ed98,
+ 0x21f500ee,
+ 0x10b602f1,
+ 0x0f1ec401,
+ 0x04b007f1,
+ 0xbd000ed0,
+ 0xc30ef404,
+/* 0x0444: host_send_done */
+/* 0x0446: host_recv */
0x17f100f8,
- 0x14b60080,
- 0x7015f110,
- 0xd007f102,
- 0x0001d004,
- 0x17f104bd,
- 0x14b60080,
- 0xf015f110,
- 0xdc07f102,
- 0x0001d004,
- 0x17f004bd,
- 0xc407f101,
- 0x0001d004,
- 0x00f804bd,
-/* 0x03f4: memx_func_enter */
+ 0x13f14e49,
+ 0xe1b85254,
+ 0xb30bf406,
+/* 0x0454: host_recv_wait */
+ 0x04cc17f1,
+ 0xf10011cf,
+ 0xcf04c827,
+ 0x16f00022,
+ 0x0612b808,
+ 0xc4ec0bf4,
+ 0x34b60723,
+ 0xf030b704,
+ 0x033b8002,
+ 0x80023c80,
+ 0x3e80013d,
+ 0x0120b600,
+ 0xf10f24f0,
+ 0xd004c807,
+ 0x04bd0002,
+ 0xf04027f0,
+ 0x02d00007,
+ 0xf804bd00,
+/* 0x049d: host_init */
+ 0x8017f100,
+ 0x1014b600,
+ 0x027015f1,
+ 0x04d007f1,
+ 0xbd0001d0,
+ 0x8017f104,
+ 0x1014b600,
+ 0x02f015f1,
+ 0x04dc07f1,
+ 0xbd0001d0,
+ 0x0117f004,
+ 0x04c407f1,
+ 0xbd0001d0,
+/* 0x04d3: memx_func_enter */
+ 0xf100f804,
+ 0xf1162067,
+ 0xf1f55d77,
+ 0xb9ffff73,
+ 0x21f4026e,
+ 0x02d8b904,
+ 0xf90487fd,
+ 0xfc80f960,
+ 0xf4e0fcd0,
+ 0x77f13321,
+ 0x73f1fffe,
+ 0x6eb9ffff,
+ 0x0421f402,
+ 0xfd02d8b9,
+ 0x60f90487,
+ 0xd0fc80f9,
+ 0x21f4e0fc,
+ 0xf067f133,
+ 0x026eb926,
+ 0xb90421f4,
+ 0x87fd02d8,
+ 0xf960f904,
+ 0xfcd0fc80,
+ 0x3321f4e0,
0xf10467f0,
0xd007e007,
0x04bd0006,
-/* 0x0400: memx_func_enter_wait */
+/* 0x053c: memx_func_enter_wait */
0x07c067f1,
0xf00066cf,
0x0bf40464,
- 0x001698f6,
- 0xf80410b6,
-/* 0x0415: memx_func_leave */
- 0x0467f000,
+ 0x2c67f0f6,
+ 0x800066cf,
+ 0x00f8ee06,
+/* 0x0554: memx_func_leave */
+ 0xcf2c67f0,
+ 0x06800066,
+ 0x0467f0ef,
0x07e407f1,
0xbd0006d0,
-/* 0x0421: memx_func_leave_wait */
+/* 0x0569: memx_func_leave_wait */
0xc067f104,
0x0066cf07,
0xf40464f0,
- 0x00f8f61b,
-/* 0x0430: memx_func_wr32 */
+ 0x67f1f61b,
+ 0x77f126f0,
+ 0x73f00001,
+ 0x026eb900,
+ 0xb90421f4,
+ 0x87fd02d8,
+ 0xf960f905,
+ 0xfcd0fc80,
+ 0x3321f4e0,
+ 0x162067f1,
+ 0xf4026eb9,
+ 0xd8b90421,
+ 0x0587fd02,
+ 0x80f960f9,
+ 0xe0fcd0fc,
+ 0xf13321f4,
+ 0xf00aa277,
+ 0x6eb90073,
+ 0x0421f402,
+ 0xfd02d8b9,
+ 0x60f90587,
+ 0xd0fc80f9,
+ 0x21f4e0fc,
+/* 0x05d3: memx_func_wait_vblank */
+ 0xb600f833,
+ 0x00f80410,
+/* 0x05d8: memx_func_wr32 */
0x98001698,
0x10b60115,
0xf960f908,
0x3321f4e0,
0xf40242b6,
0x00f8e91b,
-/* 0x044c: memx_func_wait */
+/* 0x05f4: memx_func_wait */
0xcf2c87f0,
0x1e980088,
0x011d9800,
0x98021c98,
0x10b6031b,
- 0x7e21f410,
-/* 0x0466: memx_func_delay */
+ 0x8621f410,
+/* 0x060e: memx_func_delay */
0x1e9800f8,
0x0410b600,
0xf86721f4,
-/* 0x0471: memx_exec */
+/* 0x0619: memx_exec */
0xf9e0f900,
0x02c1b9d0,
-/* 0x047b: memx_exec_next */
+/* 0x0623: memx_exec_next */
0x9802b2b9,
0x10b60013,
- 0x10349504,
+ 0xf034e704,
+ 0xe033e701,
+ 0x0132b601,
0x980c30f0,
0x55f9de35,
0xf40612b8,
- 0xd0fcec1e,
+ 0x0b98e41e,
+ 0xef0c98ee,
+ 0xf102cbbb,
+ 0xcf07c4b7,
+ 0xd0fc00bb,
0x21f5e0fc,
- 0x00f8026b,
-/* 0x049c: memx_info */
- 0x03acc7f1,
+ 0x00f802f1,
+/* 0x065c: memx_info */
+ 0x03c0c7f1,
0x0800b7f1,
- 0x026b21f5,
-/* 0x04aa: memx_recv */
+ 0x02f121f5,
+/* 0x066a: memx_recv */
0xd6b000f8,
- 0xc40bf401,
+ 0xac0bf401,
0xf400d6b0,
0x00f8e90b,
-/* 0x04b8: memx_init */
-/* 0x04ba: perf_recv */
+/* 0x0678: memx_init */
+/* 0x067a: perf_recv */
0x00f800f8,
-/* 0x04bc: perf_init */
-/* 0x04be: i2c_drive_scl */
+/* 0x067c: perf_init */
+/* 0x067e: i2c_drive_scl */
0x36b000f8,
0x0e0bf400,
0x07e007f1,
0xbd0001d0,
-/* 0x04cf: i2c_drive_scl_lo */
+/* 0x068f: i2c_drive_scl_lo */
0xf100f804,
0xd007e407,
0x04bd0001,
-/* 0x04da: i2c_drive_sda */
+/* 0x069a: i2c_drive_sda */
0x36b000f8,
0x0e0bf400,
0x07e007f1,
0xbd0002d0,
-/* 0x04eb: i2c_drive_sda_lo */
+/* 0x06ab: i2c_drive_sda_lo */
0xf100f804,
0xd007e407,
0x04bd0002,
-/* 0x04f6: i2c_sense_scl */
+/* 0x06b6: i2c_sense_scl */
0x32f400f8,
0xc437f101,
0x0033cf07,
0xf40431fd,
0x31f4060b,
-/* 0x0509: i2c_sense_scl_done */
-/* 0x050b: i2c_sense_sda */
+/* 0x06c9: i2c_sense_scl_done */
+/* 0x06cb: i2c_sense_sda */
0xf400f801,
0x37f10132,
0x33cf07c4,
0x0432fd00,
0xf4060bf4,
-/* 0x051e: i2c_sense_sda_done */
+/* 0x06de: i2c_sense_sda_done */
0x00f80131,
-/* 0x0520: i2c_raise_scl */
+/* 0x06e0: i2c_raise_scl */
0x47f140f9,
0x37f00898,
- 0xbe21f501,
-/* 0x052d: i2c_raise_scl_wait */
- 0xe8e7f104,
+ 0x7e21f501,
+/* 0x06ed: i2c_raise_scl_wait */
+ 0xe8e7f106,
0x6721f403,
- 0x04f621f5,
+ 0x06b621f5,
0xb60901f4,
0x1bf40142,
-/* 0x0541: i2c_raise_scl_done */
+/* 0x0701: i2c_raise_scl_done */
0xf840fcef,
-/* 0x0545: i2c_start */
- 0xf621f500,
- 0x0d11f404,
- 0x050b21f5,
+/* 0x0705: i2c_start */
+ 0xb621f500,
+ 0x0d11f406,
+ 0x06cb21f5,
0xf40611f4,
-/* 0x0556: i2c_start_rep */
+/* 0x0716: i2c_start_rep */
0x37f0300e,
- 0xbe21f500,
- 0x0137f004,
- 0x04da21f5,
+ 0x7e21f500,
+ 0x0137f006,
+ 0x069a21f5,
0xb60076bb,
0x50f90465,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0x2021f550,
- 0x0464b605,
-/* 0x0583: i2c_start_send */
+ 0xe021f550,
+ 0x0464b606,
+/* 0x0743: i2c_start_send */
0xf01f11f4,
0x21f50037,
- 0xe7f104da,
+ 0xe7f1069a,
0x21f41388,
0x0037f067,
- 0x04be21f5,
+ 0x067e21f5,
0x1388e7f1,
-/* 0x059f: i2c_start_out */
+/* 0x075f: i2c_start_out */
0xf86721f4,
-/* 0x05a1: i2c_stop */
+/* 0x0761: i2c_stop */
0x0037f000,
- 0x04be21f5,
+ 0x067e21f5,
0xf50037f0,
- 0xf104da21,
+ 0xf1069a21,
0xf403e8e7,
0x37f06721,
- 0xbe21f501,
- 0x88e7f104,
+ 0x7e21f501,
+ 0x88e7f106,
0x6721f413,
0xf50137f0,
- 0xf104da21,
+ 0xf1069a21,
0xf41388e7,
0x00f86721,
-/* 0x05d4: i2c_bitw */
- 0x04da21f5,
+/* 0x0794: i2c_bitw */
+ 0x069a21f5,
0x03e8e7f1,
0xbb6721f4,
0x65b60076,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x052021f5,
+ 0x06e021f5,
0xf40464b6,
0xe7f11811,
0x21f41388,
0x0037f067,
- 0x04be21f5,
+ 0x067e21f5,
0x1388e7f1,
-/* 0x0613: i2c_bitw_out */
+/* 0x07d3: i2c_bitw_out */
0xf86721f4,
-/* 0x0615: i2c_bitr */
+/* 0x07d5: i2c_bitr */
0x0137f000,
- 0x04da21f5,
+ 0x069a21f5,
0x03e8e7f1,
0xbb6721f4,
0x65b60076,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x052021f5,
+ 0x06e021f5,
0xf40464b6,
0x21f51b11,
- 0x37f0050b,
- 0xbe21f500,
- 0x88e7f104,
+ 0x37f006cb,
+ 0x7e21f500,
+ 0x88e7f106,
0x6721f413,
0xf4013cf0,
-/* 0x065a: i2c_bitr_done */
+/* 0x081a: i2c_bitr_done */
0x00f80131,
-/* 0x065c: i2c_get_byte */
+/* 0x081c: i2c_get_byte */
0xf00057f0,
-/* 0x0662: i2c_get_byte_next */
+/* 0x0822: i2c_get_byte_next */
0x54b60847,
0x0076bb01,
0xf90465b6,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b60615,
+ 0x64b607d5,
0x2b11f404,
0xb60553fd,
0x1bf40142,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0xd421f550,
- 0x0464b605,
-/* 0x06ac: i2c_get_byte_done */
-/* 0x06ae: i2c_put_byte */
+ 0x9421f550,
+ 0x0464b607,
+/* 0x086c: i2c_get_byte_done */
+/* 0x086e: i2c_put_byte */
0x47f000f8,
-/* 0x06b1: i2c_put_byte_next */
+/* 0x0871: i2c_put_byte_next */
0x0142b608,
0xbb3854ff,
0x65b60076,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x05d421f5,
+ 0x079421f5,
0xf40464b6,
0x46b03411,
0xd81bf400,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0x1521f550,
- 0x0464b606,
+ 0xd521f550,
+ 0x0464b607,
0xbb0f11f4,
0x36b00076,
0x061bf401,
-/* 0x0707: i2c_put_byte_done */
+/* 0x08c7: i2c_put_byte_done */
0xf80132f4,
-/* 0x0709: i2c_addr */
+/* 0x08c9: i2c_addr */
0x0076bb00,
0xf90465b6,
0x04659450,
0xbd0256bb,
0x0475fd50,
0x21f550fc,
- 0x64b60545,
+ 0x64b60705,
0x2911f404,
0x012ec3e7,
0xfd0134b6,
0x0256bb04,
0x75fd50bd,
0xf550fc04,
- 0xb606ae21,
-/* 0x074e: i2c_addr_done */
+ 0xb6086e21,
+/* 0x090e: i2c_addr_done */
0x00f80464,
-/* 0x0750: i2c_acquire_addr */
+/* 0x0910: i2c_acquire_addr */
0xb6f8cec7,
0xe0b705e4,
0x00f8d014,
-/* 0x075c: i2c_acquire */
- 0x075021f5,
+/* 0x091c: i2c_acquire */
+ 0x091021f5,
0xf00421f4,
0x21f403d9,
-/* 0x076b: i2c_release */
+/* 0x092b: i2c_release */
0xf500f833,
- 0xf4075021,
+ 0xf4091021,
0xdaf00421,
0x3321f403,
-/* 0x077a: i2c_recv */
+/* 0x093a: i2c_recv */
0x32f400f8,
0xf8c1c701,
0xb00214b6,
0x1ff52816,
0x13a0013a,
- 0x32980bd4,
- 0xac13a000,
+ 0x32980be8,
+ 0xc013a000,
0x0031980b,
0xf90231f4,
0xf9e0f9d0,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0x5c21f550,
- 0x0464b607,
+ 0x1c21f550,
+ 0x0464b609,
0xd6b0d0fc,
0xb31bf500,
0x0057f000,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0x0921f550,
- 0x0464b607,
+ 0xc921f550,
+ 0x0464b608,
0x00d011f5,
0xbbe0c5c7,
0x65b60076,
0x56bb0465,
0xfd50bd02,
0x50fc0475,
- 0x06ae21f5,
+ 0x086e21f5,
0xf50464b6,
0xf000ad11,
0x76bb0157,
0x0256bb04,
0x75fd50bd,
0xf550fc04,
- 0xb6070921,
+ 0xb608c921,
0x11f50464,
0x76bb008a,
0x0465b600,
0x0256bb04,
0x75fd50bd,
0xf550fc04,
- 0xb6065c21,
+ 0xb6081c21,
0x11f40464,
0xe05bcb6a,
0xb60076bb,
0xbb046594,
0x50bd0256,
0xfc0475fd,
- 0xa121f550,
- 0x0464b605,
+ 0x6121f550,
+ 0x0464b607,
0xbd025bb9,
0x430ef474,
-/* 0x0880: i2c_recv_not_rd08 */
+/* 0x0a40: i2c_recv_not_rd08 */
0xf401d6b0,
0x57f03d1b,
- 0x0921f500,
- 0x3311f407,
+ 0xc921f500,
+ 0x3311f408,
0xf5e0c5c7,
- 0xf406ae21,
+ 0xf4086e21,
0x57f02911,
- 0x0921f500,
- 0x1f11f407,
+ 0xc921f500,
+ 0x1f11f408,
0xf5e0b5c7,
- 0xf406ae21,
+ 0xf4086e21,
0x21f51511,
- 0x74bd05a1,
+ 0x74bd0761,
0xf408c5c7,
0x32f4091b,
0x030ef402,
-/* 0x08c0: i2c_recv_not_wr08 */
-/* 0x08c0: i2c_recv_done */
+/* 0x0a80: i2c_recv_not_wr08 */
+/* 0x0a80: i2c_recv_done */
0xf5f8cec7,
- 0xfc076b21,
+ 0xfc092b21,
0xf4d0fce0,
0x7cb90a12,
- 0x6b21f502,
-/* 0x08d5: i2c_recv_exit */
-/* 0x08d7: i2c_init */
+ 0xf121f502,
+/* 0x0a95: i2c_recv_exit */
+/* 0x0a97: i2c_init */
0xf800f802,
-/* 0x08d9: test_recv */
+/* 0x0a99: test_recv */
0xd817f100,
0x0011cf05,
0xf10110b6,
0x04bd0001,
0xd900e7f1,
0x134fe3f1,
- 0x01b621f5,
-/* 0x08fa: test_init */
+ 0x022321f5,
+/* 0x0aba: test_init */
0xe7f100f8,
0x21f50800,
- 0x00f801b6,
-/* 0x0904: idle_recv */
-/* 0x0906: idle */
+ 0x00f80223,
+/* 0x0ac4: idle_recv */
+/* 0x0ac6: idle */
0x31f400f8,
0xd417f100,
0x0011cf05,
0xf10110b6,
0xd005d407,
0x04bd0001,
-/* 0x091c: idle_loop */
+/* 0x0adc: idle_loop */
0xf45817f0,
-/* 0x0922: idle_proc */
-/* 0x0922: idle_proc_exec */
+/* 0x0ae2: idle_proc */
+/* 0x0ae2: idle_proc_exec */
0x10f90232,
0xf5021eb9,
- 0xfc027421,
+ 0xfc02fa21,
0x0911f410,
0xf40231f4,
-/* 0x0936: idle_proc_next */
+/* 0x0af6: idle_proc_next */
0x10b6ef0e,
0x061fb858,
0xf4e61bf4,
0x00000000,
0x00000000,
0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
};
#define MEMX_MSG_EXEC 1
/* MEMX: script opcode definitions */
-#define MEMX_ENTER 0
-#define MEMX_LEAVE 1
-#define MEMX_WR32 2
-#define MEMX_WAIT 3
-#define MEMX_DELAY 4
+#define MEMX_ENTER 1
+#define MEMX_LEAVE 2
+#define MEMX_WR32 3
+#define MEMX_WAIT 4
+#define MEMX_DELAY 5
+#define MEMX_VBLANK 6
/* I2C_: message identifiers */
#define I2C__MSG_RD08 0
struct nouveau_pwr *ppwr = memx->ppwr;
int i;
- if (memx->c.size) {
+ if (memx->c.mthd) {
nv_wr32(ppwr, 0x10a1c4, (memx->c.size << 16) | memx->c.mthd);
for (i = 0; i < memx->c.size; i++)
nv_wr32(ppwr, 0x10a1c4, memx->c.data[i]);
+ memx->c.mthd = 0;
memx->c.size = 0;
}
}
memx_cmd(struct nouveau_memx *memx, u32 mthd, u32 size, u32 data[])
{
if ((memx->c.size + size >= ARRAY_SIZE(memx->c.data)) ||
- (memx->c.size && memx->c.mthd != mthd))
+ (memx->c.mthd && memx->c.mthd != mthd))
memx_out(memx);
memcpy(&memx->c.data[memx->c.size], data, size * sizeof(data[0]));
memx->c.size += size;
nv_wr32(ppwr, 0x10a580, 0x00000003);
} while (nv_rd32(ppwr, 0x10a580) != 0x00000003);
nv_wr32(ppwr, 0x10a1c0, 0x01000000 | memx->base);
- nv_wr32(ppwr, 0x10a1c4, 0x00010000 | MEMX_ENTER);
- nv_wr32(ppwr, 0x10a1c4, 0x00000000);
+
return 0;
}
memx_out(memx);
/* release data segment access */
- nv_wr32(ppwr, 0x10a1c4, 0x00000000 | MEMX_LEAVE);
finish = nv_rd32(ppwr, 0x10a1c0) & 0x00ffffff;
nv_wr32(ppwr, 0x10a580, 0x00000000);
memx->base, finish);
}
+ nv_debug(memx->ppwr, "Exec took %uns, PPWR_IN %08x\n",
+ reply[0], reply[1]);
kfree(memx);
return 0;
}
memx_out(memx); /* fuc can't handle multiple */
}
+void
+nouveau_memx_wait_vblank(struct nouveau_memx *memx)
+{
+ struct nouveau_pwr *ppwr = memx->ppwr;
+ u32 heads, x, y, px = 0;
+ int i, head_sync;
+
+ if (nv_device(ppwr)->chipset < 0xd0) {
+ heads = nv_rd32(ppwr, 0x610050);
+ for (i = 0; i < 2; i++) {
+ /* Heuristic: sync to head with biggest resolution */
+ if (heads & (2 << (i << 3))) {
+ x = nv_rd32(ppwr, 0x610b40 + (0x540 * i));
+ y = (x & 0xffff0000) >> 16;
+ x &= 0x0000ffff;
+ if ((x * y) > px) {
+ px = (x * y);
+ head_sync = i;
+ }
+ }
+ }
+ }
+
+ if (px == 0) {
+ nv_debug(memx->ppwr, "WAIT VBLANK !NO ACTIVE HEAD\n");
+ return;
+ }
+
+ nv_debug(memx->ppwr, "WAIT VBLANK HEAD%d\n", head_sync);
+ memx_cmd(memx, MEMX_VBLANK, 1, (u32[]){ head_sync });
+ memx_out(memx); /* fuc can't handle multiple */
+}
+
+void
+nouveau_memx_block(struct nouveau_memx *memx)
+{
+ nv_debug(memx->ppwr, " HOST BLOCKED\n");
+ memx_cmd(memx, MEMX_ENTER, 0, NULL);
+}
+
+void
+nouveau_memx_unblock(struct nouveau_memx *memx)
+{
+ nv_debug(memx->ppwr, " HOST UNBLOCKED\n");
+ memx_cmd(memx, MEMX_LEAVE, 0, NULL);
+}
+
#endif
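The reclocking scripts that drive these new entry points are outside this
hunk; one plausible calling sequence, hedged accordingly, would be:

	/* assumed ordering, not shown in this patch */
	nouveau_memx_wait_vblank(memx);	/* MEMX_VBLANK: sync to scanout */
	nouveau_memx_block(memx);	/* MEMX_ENTER: stall host access */
	/* ... timing-critical nouveau_memx_wr32()/wait() calls ... */
	nouveau_memx_unblock(memx);	/* MEMX_LEAVE: resume host access */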
#include <subdev/gpio.h>
#include <subdev/timer.h>
+#include <subdev/bios/fan.h>
+
static int
nouveau_fan_update(struct nouveau_fan *fan, bool immediate, int target)
{
/* other random init... */
nouveau_therm_fan_set_defaults(therm);
nvbios_perf_fan_parse(bios, &priv->fan->perf);
- if (nvbios_therm_fan_parse(bios, &priv->fan->bios))
- nv_error(therm, "parsing the thermal table failed\n");
+ if (!nvbios_fan_parse(bios, &priv->fan->bios)) {
+ nv_debug(therm, "parsing the fan table failed\n");
+ if (nvbios_therm_fan_parse(bios, &priv->fan->bios))
+ nv_error(therm, "parsing both fan tables failed\n");
+ }
nouveau_therm_fan_safety_checks(therm);
return 0;
}
#include <core/option.h>
#include <subdev/gpio.h>
+#include <subdev/bios.h>
+#include <subdev/bios/fan.h>
#include "priv.h"
{
struct nouveau_device *device = nv_device(therm);
struct nouveau_therm_priv *tpriv = (void *)therm;
+ struct nouveau_bios *bios = nouveau_bios(therm);
struct nouveau_fanpwm_priv *priv;
+ struct nvbios_therm_fan fan;
u32 divs, duty;
+ nvbios_fan_parse(bios, &fan);
+
if (!nouveau_boolopt(device->cfgopt, "NvFanPWM", func->param) ||
- !therm->pwm_ctrl ||
+ !therm->pwm_ctrl || fan.type == NVBIOS_THERM_FAN_TOGGLE ||
therm->pwm_get(therm, func->line, &divs, &duty) == -ENODEV)
return -ENODEV;
--- /dev/null
+/*
+ * Copyright 2014 Martin Peres
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Martin Peres
+ */
+
+#include "priv.h"
+
+struct gm107_therm_priv {
+ struct nouveau_therm_priv base;
+};
+
+static int
+gm107_fan_pwm_ctrl(struct nouveau_therm *therm, int line, bool enable)
+{
+ /* nothing to do, it seems hardwired */
+ return 0;
+}
+
+static int
+gm107_fan_pwm_get(struct nouveau_therm *therm, int line, u32 *divs, u32 *duty)
+{
+ *divs = nv_rd32(therm, 0x10eb20) & 0x1fff;
+ *duty = nv_rd32(therm, 0x10eb24) & 0x1fff;
+ return 0;
+}
+
+static int
+gm107_fan_pwm_set(struct nouveau_therm *therm, int line, u32 divs, u32 duty)
+{
+ nv_mask(therm, 0x10eb10, 0x1fff, divs); /* keep the high bits */
+ nv_wr32(therm, 0x10eb14, duty | 0x80000000);
+ return 0;
+}
+
+static int
+gm107_fan_pwm_clock(struct nouveau_therm *therm, int line)
+{
+ return nv_device(therm)->crystal * 1000;
+}
+
+static int
+gm107_therm_ctor(struct nouveau_object *parent,
+ struct nouveau_object *engine,
+ struct nouveau_oclass *oclass, void *data, u32 size,
+ struct nouveau_object **pobject)
+{
+ struct gm107_therm_priv *priv;
+ int ret;
+
+ ret = nouveau_therm_create(parent, engine, oclass, &priv);
+ *pobject = nv_object(priv);
+ if (ret)
+ return ret;
+
+ priv->base.base.pwm_ctrl = gm107_fan_pwm_ctrl;
+ priv->base.base.pwm_get = gm107_fan_pwm_get;
+ priv->base.base.pwm_set = gm107_fan_pwm_set;
+ priv->base.base.pwm_clock = gm107_fan_pwm_clock;
+ priv->base.base.temp_get = nv84_temp_get;
+ priv->base.base.fan_sense = nva3_therm_fan_sense;
+ priv->base.sensor.program_alarms = nouveau_therm_program_alarms_polling;
+ return nouveau_therm_preinit(&priv->base.base);
+}
+
+struct nouveau_oclass
+gm107_therm_oclass = {
+ .handle = NV_SUBDEV(THERM, 0x117),
+ .ofuncs = &(struct nouveau_ofuncs) {
+ .ctor = gm107_therm_ctor,
+ .dtor = _nouveau_therm_dtor,
+ .init = nvd0_therm_init,
+ .fini = nv84_therm_fini,
+ },
+};
*/
#include "priv.h"
+#include <subdev/fuse.h>
struct nv84_therm_priv {
struct nouveau_therm_priv base;
int
nv84_temp_get(struct nouveau_therm *therm)
{
- return nv_rd32(therm, 0x20400);
+ struct nouveau_fuse *fuse = nouveau_fuse(therm);
+
+ if (nv_ro32(fuse, 0x1a8) == 1)
+ return nv_rd32(therm, 0x20400);
+ else
+ return -ENODEV;
+}
+
+void
+nv84_sensor_setup(struct nouveau_therm *therm)
+{
+ struct nouveau_fuse *fuse = nouveau_fuse(therm);
+
+ /* enable temperature reading for cards with insane defaults */
+ if (nv_ro32(fuse, 0x1a8) == 1) {
+ nv_mask(therm, 0x20008, 0x80008000, 0x80000000);
+ nv_mask(therm, 0x2000c, 0x80000003, 0x00000000);
+ mdelay(20); /* wait for the temperature to stabilize */
+ }
}
static void
spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags);
}
+static int
+nv84_therm_init(struct nouveau_object *object)
+{
+ struct nv84_therm_priv *priv = (void *)object;
+ int ret;
+
+ ret = nouveau_therm_init(&priv->base.base);
+ if (ret)
+ return ret;
+
+ nv84_sensor_setup(&priv->base.base);
+
+ return 0;
+}
+
static int
nv84_therm_ctor(struct nouveau_object *parent,
struct nouveau_object *engine,
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = nv84_therm_ctor,
.dtor = _nouveau_therm_dtor,
- .init = _nouveau_therm_init,
+ .init = nv84_therm_init,
.fini = nv84_therm_fini,
},
};
if (ret)
return ret;
+ nv84_sensor_setup(&priv->base.base);
+
/* enable fan tach, count revolutions per-second */
nv_mask(priv, 0x00e720, 0x00000003, 0x00000002);
if (tach->func != DCB_GPIO_UNUSED) {
return nv_device(therm)->crystal * 1000 / 10;
}
-static int
+int
nvd0_therm_init(struct nouveau_object *object)
{
struct nvd0_therm_priv *priv = (void *)object;
if (ret)
return ret;
+ nv84_sensor_setup(&priv->base.base);
+
priv->base.base.pwm_ctrl = nvd0_fan_pwm_ctrl;
priv->base.base.pwm_get = nvd0_fan_pwm_get;
priv->base.base.pwm_set = nvd0_fan_pwm_set;
int nv50_fan_pwm_set(struct nouveau_therm *, int, u32, u32);
int nv50_fan_pwm_clock(struct nouveau_therm *, int);
int nv84_temp_get(struct nouveau_therm *therm);
+void nv84_sensor_setup(struct nouveau_therm *therm);
int nv84_therm_fini(struct nouveau_object *object, bool suspend);
int nva3_therm_fan_sense(struct nouveau_therm *);
+int nvd0_therm_init(struct nouveau_object *object);
+
int nouveau_fanpwm_create(struct nouveau_therm *, struct dcb_gpio_func *);
int nouveau_fantog_create(struct nouveau_therm *, struct dcb_gpio_func *);
int nouveau_fannil_create(struct nouveau_therm *);
int ret;
mutex_lock(&nv_subdev(vmm)->mutex);
- ret = nouveau_mm_head(&vm->mm, page_shift, msize, msize, align,
+ ret = nouveau_mm_head(&vm->mm, 0, page_shift, msize, msize, align,
&vma->node);
if (unlikely(ret != 0)) {
mutex_unlock(&nv_subdev(vmm)->mutex);
drm_mode_crtc_set_gamma_size(&nv_crtc->base, 256);
ret = nouveau_bo_new(dev, 64*64*4, 0x100, TTM_PL_FLAG_VRAM,
- 0, 0x0000, NULL, &nv_crtc->cursor.nvbo);
+ 0, 0x0000, NULL, NULL, &nv_crtc->cursor.nvbo);
if (!ret) {
ret = nouveau_bo_pin(nv_crtc->cursor.nvbo, TTM_PL_FLAG_VRAM);
if (!ret) {
uint32_t src_w, uint32_t src_h)
{
struct nvif_device *dev = &nouveau_drm(plane->dev)->device;
- struct nouveau_plane *nv_plane = (struct nouveau_plane *)plane;
+ struct nouveau_plane *nv_plane =
+ container_of(plane, struct nouveau_plane, base);
struct nouveau_framebuffer *nv_fb = nouveau_framebuffer(fb);
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
struct nouveau_bo *cur = nv_plane->cur;
nv10_disable_plane(struct drm_plane *plane)
{
struct nvif_device *dev = &nouveau_drm(plane->dev)->device;
- struct nouveau_plane *nv_plane = (struct nouveau_plane *)plane;
+ struct nouveau_plane *nv_plane =
+ container_of(plane, struct nouveau_plane, base);
nvif_wr32(dev, NV_PVIDEO_STOP, 1);
if (nv_plane->cur) {
struct drm_property *property,
uint64_t value)
{
- struct nouveau_plane *nv_plane = (struct nouveau_plane *)plane;
+ struct nouveau_plane *nv_plane =
+ container_of(plane, struct nouveau_plane, base);
if (property == nv_plane->props.colorkey)
nv_plane->colorkey = value;
uint32_t src_w, uint32_t src_h)
{
struct nvif_device *dev = &nouveau_drm(plane->dev)->device;
- struct nouveau_plane *nv_plane = (struct nouveau_plane *)plane;
+ struct nouveau_plane *nv_plane =
+ container_of(plane, struct nouveau_plane, base);
struct nouveau_framebuffer *nv_fb = nouveau_framebuffer(fb);
struct nouveau_bo *cur = nv_plane->cur;
uint32_t overlay = 1;
nv04_disable_plane(struct drm_plane *plane)
{
struct nvif_device *dev = &nouveau_drm(plane->dev)->device;
- struct nouveau_plane *nv_plane = (struct nouveau_plane *)plane;
+ struct nouveau_plane *nv_plane =
+ container_of(plane, struct nouveau_plane, base);
nvif_mask(dev, NV_PVIDEO_OVERLAY, 1, 0);
nvif_wr32(dev, NV_PVIDEO_OE_STATE, 0);
list_add(&ntfy->head, &chan->notifiers);
ntfy->handle = info->handle;
- ret = nouveau_mm_head(&chan->heap, 1, info->size, info->size, 1,
+ ret = nouveau_mm_head(&chan->heap, 0, 1, info->size, info->size, 1,
&ntfy->node);
if (ret)
goto done;
static void
nv10_bo_put_tile_region(struct drm_device *dev, struct nouveau_drm_tile *tile,
- struct nouveau_fence *fence)
+ struct fence *fence)
{
struct nouveau_drm *drm = nouveau_drm(dev);
if (tile) {
spin_lock(&drm->tile.lock);
- tile->fence = nouveau_fence_ref(fence);
+ tile->fence = (struct nouveau_fence *)fence_get(fence);
tile->used = false;
spin_unlock(&drm->tile.lock);
}
int
nouveau_bo_new(struct drm_device *dev, int size, int align,
uint32_t flags, uint32_t tile_mode, uint32_t tile_flags,
- struct sg_table *sg,
+ struct sg_table *sg, struct reservation_object *robj,
struct nouveau_bo **pnvbo)
{
struct nouveau_drm *drm = nouveau_drm(dev);
ret = ttm_bo_init(&drm->ttm.bdev, &nvbo->bo, size,
type, &nvbo->placement,
align >> PAGE_SHIFT, false, NULL, acc_size, sg,
- nouveau_bo_del_ttm);
+ robj, nouveau_bo_del_ttm);
if (ret) {
/* ttm will call nouveau_bo_del_ttm if it fails.. */
return ret;
}
static void
-set_placement_list(uint32_t *pl, unsigned *n, uint32_t type, uint32_t flags)
+set_placement_list(struct ttm_place *pl, unsigned *n, uint32_t type, uint32_t flags)
{
*n = 0;
if (type & TTM_PL_FLAG_VRAM)
- pl[(*n)++] = TTM_PL_FLAG_VRAM | flags;
+ pl[(*n)++].flags = TTM_PL_FLAG_VRAM | flags;
if (type & TTM_PL_FLAG_TT)
- pl[(*n)++] = TTM_PL_FLAG_TT | flags;
+ pl[(*n)++].flags = TTM_PL_FLAG_TT | flags;
if (type & TTM_PL_FLAG_SYSTEM)
- pl[(*n)++] = TTM_PL_FLAG_SYSTEM | flags;
+ pl[(*n)++].flags = TTM_PL_FLAG_SYSTEM | flags;
}
static void
{
struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
u32 vram_pages = drm->device.info.ram_size >> PAGE_SHIFT;
+ unsigned i, fpfn, lpfn;
if (drm->device.info.family == NV_DEVICE_INFO_V0_CELSIUS &&
nvbo->tile_mode && (type & TTM_PL_FLAG_VRAM) &&
* at the same time.
*/
if (nvbo->tile_flags & NOUVEAU_GEM_TILE_ZETA) {
- nvbo->placement.fpfn = vram_pages / 2;
- nvbo->placement.lpfn = ~0;
+ fpfn = vram_pages / 2;
+ lpfn = ~0;
} else {
- nvbo->placement.fpfn = 0;
- nvbo->placement.lpfn = vram_pages / 2;
+ fpfn = 0;
+ lpfn = vram_pages / 2;
+ }
+ for (i = 0; i < nvbo->placement.num_placement; ++i) {
+ nvbo->placements[i].fpfn = fpfn;
+ nvbo->placements[i].lpfn = lpfn;
+ }
+ for (i = 0; i < nvbo->placement.num_busy_placement; ++i) {
+ nvbo->busy_placements[i].fpfn = fpfn;
+ nvbo->busy_placements[i].lpfn = lpfn;
}
}
}
}
mutex_lock_nested(&cli->mutex, SINGLE_DEPTH_NESTING);
- ret = nouveau_fence_sync(bo->sync_obj, chan);
+ ret = nouveau_fence_sync(nouveau_bo(bo), chan, true, intr);
if (ret == 0) {
ret = drm->ttm.move(chan, bo, &bo->mem, new_mem);
if (ret == 0) {
ret = nouveau_fence_new(chan, false, &fence);
if (ret == 0) {
- ret = ttm_bo_move_accel_cleanup(bo, fence,
+ ret = ttm_bo_move_accel_cleanup(bo,
+ &fence->base,
evict,
no_wait_gpu,
new_mem);
nouveau_bo_move_flipd(struct ttm_buffer_object *bo, bool evict, bool intr,
bool no_wait_gpu, struct ttm_mem_reg *new_mem)
{
- u32 placement_memtype = TTM_PL_FLAG_TT | TTM_PL_MASK_CACHING;
+ struct ttm_place placement_memtype = {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_TT | TTM_PL_MASK_CACHING
+ };
struct ttm_placement placement;
struct ttm_mem_reg tmp_mem;
int ret;
- placement.fpfn = placement.lpfn = 0;
placement.num_placement = placement.num_busy_placement = 1;
placement.placement = placement.busy_placement = &placement_memtype;
nouveau_bo_move_flips(struct ttm_buffer_object *bo, bool evict, bool intr,
bool no_wait_gpu, struct ttm_mem_reg *new_mem)
{
- u32 placement_memtype = TTM_PL_FLAG_TT | TTM_PL_MASK_CACHING;
+ struct ttm_place placement_memtype = {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_TT | TTM_PL_MASK_CACHING
+ };
struct ttm_placement placement;
struct ttm_mem_reg tmp_mem;
int ret;
- placement.fpfn = placement.lpfn = 0;
placement.num_placement = placement.num_busy_placement = 1;
placement.placement = placement.busy_placement = &placement_memtype;
{
struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
struct drm_device *dev = drm->dev;
+ struct fence *fence = reservation_object_get_excl(bo->resv);
- nv10_bo_put_tile_region(dev, *old_tile, bo->sync_obj);
+ nv10_bo_put_tile_region(dev, *old_tile, fence);
*old_tile = new_tile;
}
}
/* Fallback to software copy. */
- spin_lock(&bo->bdev->fence_lock);
ret = ttm_bo_wait(bo, true, intr, no_wait_gpu);
- spin_unlock(&bo->bdev->fence_lock);
if (ret == 0)
ret = ttm_bo_move_memcpy(bo, evict, no_wait_gpu, new_mem);
struct nouveau_bo *nvbo = nouveau_bo(bo);
struct nvif_device *device = &drm->device;
u32 mappable = nv_device_resource_len(nvkm_device(device), 1) >> PAGE_SHIFT;
- int ret;
+ int i, ret;
/* as long as the bo isn't in vram, and isn't tiled, we've got
* nothing to do here.
bo->mem.start + bo->mem.num_pages < mappable)
return 0;
+ for (i = 0; i < nvbo->placement.num_placement; ++i) {
+ nvbo->placements[i].fpfn = 0;
+ nvbo->placements[i].lpfn = mappable;
+ }
+
+ for (i = 0; i < nvbo->placement.num_busy_placement; ++i) {
+ nvbo->busy_placements[i].fpfn = 0;
+ nvbo->busy_placements[i].lpfn = mappable;
+ }
- nvbo->placement.fpfn = 0;
- nvbo->placement.lpfn = mappable;
nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_VRAM, 0);
return nouveau_bo_validate(nvbo, false, false);
}
}
void
-nouveau_bo_fence(struct nouveau_bo *nvbo, struct nouveau_fence *fence)
-{
- struct nouveau_fence *new_fence = nouveau_fence_ref(fence);
- struct nouveau_fence *old_fence = NULL;
-
- spin_lock(&nvbo->bo.bdev->fence_lock);
- old_fence = nvbo->bo.sync_obj;
- nvbo->bo.sync_obj = new_fence;
- spin_unlock(&nvbo->bo.bdev->fence_lock);
-
- nouveau_fence_unref(&old_fence);
-}
-
-static void
-nouveau_bo_fence_unref(void **sync_obj)
-{
- nouveau_fence_unref((struct nouveau_fence **)sync_obj);
-}
-
-static void *
-nouveau_bo_fence_ref(void *sync_obj)
+nouveau_bo_fence(struct nouveau_bo *nvbo, struct nouveau_fence *fence, bool exclusive)
{
- return nouveau_fence_ref(sync_obj);
-}
+ struct reservation_object *resv = nvbo->bo.resv;
-static bool
-nouveau_bo_fence_signalled(void *sync_obj)
-{
- return nouveau_fence_done(sync_obj);
-}
-
-static int
-nouveau_bo_fence_wait(void *sync_obj, bool lazy, bool intr)
-{
- return nouveau_fence_wait(sync_obj, lazy, intr);
-}
-
-static int
-nouveau_bo_fence_flush(void *sync_obj)
-{
- return 0;
+ if (exclusive)
+ reservation_object_add_excl_fence(resv, &fence->base);
+ else if (fence)
+ reservation_object_add_shared_fence(resv, &fence->base);
}
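nouveau_bo_fence() now attaches fences to the buffer's reservation object instead of swapping a single sync_obj pointer: an exclusive fence represents a write and orders against all later users, while shared fences represent reads that may proceed in parallel. A hedged sketch of the rule, assuming the reservation object is locked and, for the shared case, that reservation_object_reserve_shared() has already been called:

	/* Writers attach exclusively, readers share a slot. */
	static void
	attach_fence(struct reservation_object *resv, struct fence *f,
		     bool write)
	{
		if (write)
			reservation_object_add_excl_fence(resv, f);
		else
			reservation_object_add_shared_fence(resv, f);
	}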
struct ttm_bo_driver nouveau_bo_driver = {
.move_notify = nouveau_bo_move_ntfy,
.move = nouveau_bo_move,
.verify_access = nouveau_bo_verify_access,
- .sync_obj_signaled = nouveau_bo_fence_signalled,
- .sync_obj_wait = nouveau_bo_fence_wait,
- .sync_obj_flush = nouveau_bo_fence_flush,
- .sync_obj_unref = nouveau_bo_fence_unref,
- .sync_obj_ref = nouveau_bo_fence_ref,
.fault_reserve_notify = &nouveau_ttm_fault_reserve_notify,
.io_mem_reserve = &nouveau_ttm_io_mem_reserve,
.io_mem_free = &nouveau_ttm_io_mem_free,
#ifndef __NOUVEAU_BO_H__
#define __NOUVEAU_BO_H__
+#include <drm/drm_gem.h>
+
struct nouveau_channel;
struct nouveau_fence;
struct nouveau_vma;
struct ttm_buffer_object bo;
struct ttm_placement placement;
u32 valid_domains;
- u32 placements[3];
- u32 busy_placements[3];
+ struct ttm_place placements[3];
+ struct ttm_place busy_placements[3];
struct ttm_bo_kmap_obj kmap;
struct list_head head;
void nouveau_bo_move_init(struct nouveau_drm *);
int nouveau_bo_new(struct drm_device *, int size, int align, u32 flags,
u32 tile_mode, u32 tile_flags, struct sg_table *sg,
+ struct reservation_object *robj,
struct nouveau_bo **);
int nouveau_bo_pin(struct nouveau_bo *, u32 flags);
int nouveau_bo_unpin(struct nouveau_bo *);
void nouveau_bo_wr16(struct nouveau_bo *, unsigned index, u16 val);
u32 nouveau_bo_rd32(struct nouveau_bo *, unsigned index);
void nouveau_bo_wr32(struct nouveau_bo *, unsigned index, u32 val);
-void nouveau_bo_fence(struct nouveau_bo *, struct nouveau_fence *);
+void nouveau_bo_fence(struct nouveau_bo *, struct nouveau_fence *, bool exclusive);
int nouveau_bo_validate(struct nouveau_bo *, bool interruptible,
bool no_wait_gpu);
#include "nouveau_abi16.h"
MODULE_PARM_DESC(vram_pushbuf, "Create DMA push buffers in VRAM");
-static int nouveau_vram_pushbuf;
+int nouveau_vram_pushbuf;
module_param_named(vram_pushbuf, nouveau_vram_pushbuf, int, 0400);
int
if (nouveau_vram_pushbuf)
target = TTM_PL_FLAG_VRAM;
- ret = nouveau_bo_new(drm->dev, size, 0, target, 0, 0, NULL,
+ ret = nouveau_bo_new(drm->dev, size, 0, target, 0, 0, NULL, NULL,
&chan->push.buffer);
if (ret == 0) {
ret = nouveau_bo_pin(chan->push.buffer, target);
void nouveau_channel_del(struct nouveau_channel **);
int nouveau_channel_idle(struct nouveau_channel *);
+extern int nouveau_vram_pushbuf;
+
#endif
#include <nvif/event.h>
MODULE_PARM_DESC(tv_disable, "Disable TV-out detection");
-static int nouveau_tv_disable = 0;
+int nouveau_tv_disable = 0;
module_param_named(tv_disable, nouveau_tv_disable, int, 0400);
MODULE_PARM_DESC(ignorelid, "Ignore ACPI lid status");
-static int nouveau_ignorelid = 0;
+int nouveau_ignorelid = 0;
module_param_named(ignorelid, nouveau_ignorelid, int, 0400);
MODULE_PARM_DESC(duallink, "Allow dual-link TMDS (default: enabled)");
-static int nouveau_duallink = 1;
+int nouveau_duallink = 1;
module_param_named(duallink, nouveau_duallink, int, 0400);
struct nouveau_encoder *
struct drm_connector *
nouveau_connector_create(struct drm_device *, int index);
+extern int nouveau_tv_disable;
+extern int nouveau_ignorelid;
+extern int nouveau_duallink;
+
#endif /* __NOUVEAU_CONNECTOR_H__ */
if (etime) *etime = ns_to_ktime(args.scan.time[1]);
if (*vpos < 0)
- ret |= DRM_SCANOUTPOS_INVBL;
+ ret |= DRM_SCANOUTPOS_IN_VBLANK;
return ret;
}
spin_unlock_irqrestore(&dev->event_lock, flags);
/* Synchronize with the old framebuffer */
- ret = nouveau_fence_sync(old_bo->bo.sync_obj, chan);
+ ret = nouveau_fence_sync(old_bo, chan, false, false);
if (ret)
goto fail;
}
mutex_lock(&cli->mutex);
-
- /* synchronise rendering channel with the kernel's channel */
- spin_lock(&new_bo->bo.bdev->fence_lock);
- fence = nouveau_fence_ref(new_bo->bo.sync_obj);
- spin_unlock(&new_bo->bo.bdev->fence_lock);
- ret = nouveau_fence_sync(fence, chan);
- nouveau_fence_unref(&fence);
+ ret = ttm_bo_reserve(&new_bo->bo, true, false, false, NULL);
if (ret)
goto fail_unpin;
- ret = ttm_bo_reserve(&old_bo->bo, true, false, false, NULL);
- if (ret)
+ /* synchronise rendering channel with the kernel's channel */
+ ret = nouveau_fence_sync(new_bo, chan, false, true);
+ if (ret) {
+ ttm_bo_unreserve(&new_bo->bo);
goto fail_unpin;
+ }
+
+ if (new_bo != old_bo) {
+ ttm_bo_unreserve(&new_bo->bo);
+
+ ret = ttm_bo_reserve(&old_bo->bo, true, false, false, NULL);
+ if (ret)
+ goto fail_unpin;
+ }
/* Initialize a page flip struct */
*s = (struct nouveau_page_flip_state)
/* Update the crtc struct and cleanup */
crtc->primary->fb = fb;
- nouveau_bo_fence(old_bo, fence);
+ nouveau_bo_fence(old_bo, fence, false);
ttm_bo_unreserve(&old_bo->bo);
if (old_bo != new_bo)
nouveau_bo_unpin(old_bo);
#include "nouveau_fence.h"
#include "nouveau_debugfs.h"
#include "nouveau_usif.h"
+#include "nouveau_connector.h"
MODULE_PARM_DESC(config, "option string to pass to driver core");
static char *nouveau_config;
int nouveau_runtime_pm = -1;
module_param_named(runpm, nouveau_runtime_pm, int, 0400);
-static struct drm_driver driver;
+static struct drm_driver driver_stub;
+static struct drm_driver driver_pci;
+static struct drm_driver driver_platform;
static u64
nouveau_pci_name(struct pci_dev *pdev)
pci_set_master(pdev);
- ret = drm_get_pci_dev(pdev, pent, &driver);
+ ret = drm_get_pci_dev(pdev, pent, &driver_pci);
if (ret) {
nouveau_object_ref(NULL, (struct nouveau_object **)&device);
return ret;
};
static struct drm_driver
-driver = {
+driver_stub = {
.driver_features =
DRIVER_USE_AGP |
DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_RENDER,
return 1;
}
+static void nouveau_display_options(void)
+{
+ DRM_DEBUG_DRIVER("Loading Nouveau with parameters:\n");
+
+ DRM_DEBUG_DRIVER("... tv_disable : %d\n", nouveau_tv_disable);
+ DRM_DEBUG_DRIVER("... ignorelid : %d\n", nouveau_ignorelid);
+ DRM_DEBUG_DRIVER("... duallink : %d\n", nouveau_duallink);
+ DRM_DEBUG_DRIVER("... nofbaccel : %d\n", nouveau_nofbaccel);
+ DRM_DEBUG_DRIVER("... config : %s\n", nouveau_config);
+ DRM_DEBUG_DRIVER("... debug : %s\n", nouveau_debug);
+ DRM_DEBUG_DRIVER("... noaccel : %d\n", nouveau_noaccel);
+ DRM_DEBUG_DRIVER("... modeset : %d\n", nouveau_modeset);
+ DRM_DEBUG_DRIVER("... runpm : %d\n", nouveau_runtime_pm);
+ DRM_DEBUG_DRIVER("... vram_pushbuf : %d\n", nouveau_vram_pushbuf);
+ DRM_DEBUG_DRIVER("... pstate : %d\n", nouveau_pstate);
+}
+
static const struct dev_pm_ops nouveau_pm_ops = {
.suspend = nouveau_pmops_suspend,
.resume = nouveau_pmops_resume,
if (err)
return ERR_PTR(err);
- drm = drm_dev_alloc(&driver, &pdev->dev);
+ drm = drm_dev_alloc(&driver_platform, &pdev->dev);
if (!drm) {
err = -ENOMEM;
goto err_free;
static int __init
nouveau_drm_init(void)
{
+ driver_pci = driver_stub;
+ driver_pci.set_busid = drm_pci_set_busid;
+ driver_platform = driver_stub;
+ driver_platform.set_busid = drm_platform_set_busid;
+
+ nouveau_display_options();
+
if (nouveau_modeset == -1) {
#ifdef CONFIG_VGA_CONSOLE
if (vgacon_text_force())
return 0;
nouveau_register_dsm_handler();
- return drm_pci_init(&driver, &nouveau_drm_pci_driver);
+ return drm_pci_init(&driver_pci, &nouveau_drm_pci_driver);
}
static void __exit
if (!nouveau_modeset)
return;
- drm_pci_exit(&driver, &nouveau_drm_pci_driver);
+ drm_pci_exit(&driver_pci, &nouveau_drm_pci_driver);
nouveau_unregister_dsm_handler();
}
#define DRIVER_MAJOR 1
#define DRIVER_MINOR 2
-#define DRIVER_PATCHLEVEL 0
+#define DRIVER_PATCHLEVEL 1
/*
* 1.1.1:
* 1.2.0:
* - object api exposed to userspace
* - fermi,kepler,maxwell zbc
+ * 1.2.1:
+ * - allow concurrent access to bo's mapped read/write.
*/
#include <nvif/client.h>
#include "nouveau_crtc.h"
MODULE_PARM_DESC(nofbaccel, "Disable fbcon acceleration");
-static int nouveau_nofbaccel = 0;
+int nouveau_nofbaccel = 0;
module_param_named(nofbaccel, nouveau_nofbaccel, int, 0400);
static void
nouveau_fbcon_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
- struct nouveau_fbdev *fbcon = (struct nouveau_fbdev *)helper;
+ struct nouveau_fbdev *fbcon =
+ container_of(helper, struct nouveau_fbdev, helper);
struct drm_device *dev = fbcon->dev;
struct nouveau_drm *drm = nouveau_drm(dev);
struct nvif_device *device = &drm->device;
void nouveau_fbcon_accel_restore(struct drm_device *dev);
void nouveau_fbcon_output_poll_changed(struct drm_device *dev);
+
+extern int nouveau_nofbaccel;
+
#endif /* __NV50_FBCON_H__ */
#include <linux/ktime.h>
#include <linux/hrtimer.h>
+#include <trace/events/fence.h>
#include <nvif/notify.h>
#include <nvif/event.h>
#include "nouveau_dma.h"
#include "nouveau_fence.h"
-struct fence_work {
- struct work_struct base;
- struct list_head head;
- void (*func)(void *);
- void *data;
-};
+static const struct fence_ops nouveau_fence_ops_uevent;
+static const struct fence_ops nouveau_fence_ops_legacy;
+
+static inline struct nouveau_fence *
+from_fence(struct fence *fence)
+{
+ return container_of(fence, struct nouveau_fence, base);
+}
+
+static inline struct nouveau_fence_chan *
+nouveau_fctx(struct nouveau_fence *fence)
+{
+ return container_of(fence->base.lock, struct nouveau_fence_chan, lock);
+}
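These helpers lean on a detail of fence_init(): nouveau passes &fctx->lock as the fence lock (see nouveau_fence_emit() further down), so any fence can recover its per-channel context through the lock pointer without storing a back-pointer. The correspondence, spelled out:

	/* fence_init(&f->base, ops, &fctx->lock, ctx, seqno) stores
	 * &fctx->lock in f->base.lock, hence: */
	struct nouveau_fence_chan *fctx =
		container_of(fence->base.lock, struct nouveau_fence_chan, lock);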
static void
nouveau_fence_signal(struct nouveau_fence *fence)
{
- struct fence_work *work, *temp;
+ fence_signal_locked(&fence->base);
+ list_del(&fence->head);
+
+ if (test_bit(FENCE_FLAG_USER_BITS, &fence->base.flags)) {
+ struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
- list_for_each_entry_safe(work, temp, &fence->work, head) {
- schedule_work(&work->base);
- list_del(&work->head);
+ if (!--fctx->notify_ref)
+ nvif_notify_put(&fctx->notify);
}
- fence->channel = NULL;
- list_del(&fence->head);
+ fence_put(&fence->base);
+}
+
+static struct nouveau_fence *
+nouveau_local_fence(struct fence *fence, struct nouveau_drm *drm)
+{
+ struct nouveau_fence_priv *priv = (void*)drm->fence;
+
+ if (fence->ops != &nouveau_fence_ops_legacy &&
+ fence->ops != &nouveau_fence_ops_uevent)
+ return NULL;
+
+ if (fence->context < priv->context_base ||
+ fence->context >= priv->context_base + priv->contexts)
+ return NULL;
+
+ return from_fence(fence);
}
void
nouveau_fence_context_del(struct nouveau_fence_chan *fctx)
{
- struct nouveau_fence *fence, *fnext;
- spin_lock(&fctx->lock);
- list_for_each_entry_safe(fence, fnext, &fctx->pending, head) {
+ struct nouveau_fence *fence;
+
+ nvif_notify_fini(&fctx->notify);
+
+ spin_lock_irq(&fctx->lock);
+ while (!list_empty(&fctx->pending)) {
+ fence = list_entry(fctx->pending.next, typeof(*fence), head);
+
nouveau_fence_signal(fence);
+ fence->channel = NULL;
}
- spin_unlock(&fctx->lock);
+ spin_unlock_irq(&fctx->lock);
+}
+
+static void
+nouveau_fence_context_put(struct kref *fence_ref)
+{
+ kfree(container_of(fence_ref, struct nouveau_fence_chan, fence_ref));
}
void
-nouveau_fence_context_new(struct nouveau_fence_chan *fctx)
+nouveau_fence_context_free(struct nouveau_fence_chan *fctx)
+{
+ kref_put(&fctx->fence_ref, nouveau_fence_context_put);
+}
+
+static void
+nouveau_fence_update(struct nouveau_channel *chan, struct nouveau_fence_chan *fctx)
+{
+ struct nouveau_fence *fence;
+
+ u32 seq = fctx->read(chan);
+
+ while (!list_empty(&fctx->pending)) {
+ fence = list_entry(fctx->pending.next, typeof(*fence), head);
+
+ if ((int)(seq - fence->base.seqno) < 0)
+ return;
+
+ nouveau_fence_signal(fence);
+ }
+}
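Note the signed-difference test above: comparing (int)(seq - seqno) instead of using a plain less-than keeps the check correct when the 32-bit hardware counter wraps. In isolation:

	/* True once hw has reached want, even across a u32 wrap. */
	static bool seqno_passed(u32 hw, u32 want)
	{
		return (int)(hw - want) >= 0;	/* e.g. hw = 0x2, want = 0xfffffffe */
	}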
+
+static int
+nouveau_fence_wait_uevent_handler(struct nvif_notify *notify)
{
+ struct nouveau_fence_chan *fctx =
+ container_of(notify, typeof(*fctx), notify);
+ unsigned long flags;
+
+ spin_lock_irqsave(&fctx->lock, flags);
+ if (!list_empty(&fctx->pending)) {
+ struct nouveau_fence *fence;
+
+ fence = list_entry(fctx->pending.next, typeof(*fence), head);
+ nouveau_fence_update(fence->channel, fctx);
+ }
+ spin_unlock_irqrestore(&fctx->lock, flags);
+
+	/* Always return NVIF_NOTIFY_KEEP here; the notify refcount is
+	 * handled by nouveau_fence_update(). */
+ return NVIF_NOTIFY_KEEP;
+}
+
+void
+nouveau_fence_context_new(struct nouveau_channel *chan, struct nouveau_fence_chan *fctx)
+{
+ struct nouveau_fence_priv *priv = (void*)chan->drm->fence;
+ struct nouveau_cli *cli = (void *)nvif_client(chan->object);
+ int ret;
+
INIT_LIST_HEAD(&fctx->flip);
INIT_LIST_HEAD(&fctx->pending);
spin_lock_init(&fctx->lock);
+ fctx->context = priv->context_base + chan->chid;
+
+ if (chan == chan->drm->cechan)
+ strcpy(fctx->name, "copy engine channel");
+ else if (chan == chan->drm->channel)
+ strcpy(fctx->name, "generic kernel channel");
+ else
+ strcpy(fctx->name, nvkm_client(&cli->base)->name);
+
+ kref_init(&fctx->fence_ref);
+ if (!priv->uevent)
+ return;
+
+ ret = nvif_notify_init(chan->object, NULL,
+ nouveau_fence_wait_uevent_handler, false,
+ G82_CHANNEL_DMA_V0_NTFY_UEVENT,
+ &(struct nvif_notify_uevent_req) { },
+ sizeof(struct nvif_notify_uevent_req),
+ sizeof(struct nvif_notify_uevent_rep),
+ &fctx->notify);
+
+ WARN_ON(ret);
}
+struct nouveau_fence_work {
+ struct work_struct work;
+ struct fence_cb cb;
+ void (*func)(void *);
+ void *data;
+};
+
static void
nouveau_fence_work_handler(struct work_struct *kwork)
{
- struct fence_work *work = container_of(kwork, typeof(*work), base);
+ struct nouveau_fence_work *work = container_of(kwork, typeof(*work), work);
work->func(work->data);
kfree(work);
}
+static void nouveau_fence_work_cb(struct fence *fence, struct fence_cb *cb)
+{
+ struct nouveau_fence_work *work = container_of(cb, typeof(*work), cb);
+
+ schedule_work(&work->work);
+}
+
void
-nouveau_fence_work(struct nouveau_fence *fence,
+nouveau_fence_work(struct fence *fence,
void (*func)(void *), void *data)
{
- struct nouveau_channel *chan = fence->channel;
- struct nouveau_fence_chan *fctx;
- struct fence_work *work = NULL;
+ struct nouveau_fence_work *work;
- if (nouveau_fence_done(fence)) {
- func(data);
- return;
- }
+ if (fence_is_signaled(fence))
+ goto err;
- fctx = chan->fence;
work = kmalloc(sizeof(*work), GFP_KERNEL);
if (!work) {
- WARN_ON(nouveau_fence_wait(fence, false, false));
- func(data);
- return;
- }
-
- spin_lock(&fctx->lock);
- if (!fence->channel) {
- spin_unlock(&fctx->lock);
- kfree(work);
- func(data);
- return;
+ /*
+ * this might not be a nouveau fence any more,
+ * so force a lazy wait here
+ */
+ WARN_ON(nouveau_fence_wait((struct nouveau_fence *)fence,
+ true, false));
+ goto err;
}
- INIT_WORK(&work->base, nouveau_fence_work_handler);
+ INIT_WORK(&work->work, nouveau_fence_work_handler);
work->func = func;
work->data = data;
- list_add(&work->head, &fence->work);
- spin_unlock(&fctx->lock);
-}
-
-static void
-nouveau_fence_update(struct nouveau_channel *chan)
-{
- struct nouveau_fence_chan *fctx = chan->fence;
- struct nouveau_fence *fence, *fnext;
- spin_lock(&fctx->lock);
- list_for_each_entry_safe(fence, fnext, &fctx->pending, head) {
- if (fctx->read(chan) < fence->sequence)
- break;
+ if (fence_add_callback(fence, &work->cb, nouveau_fence_work_cb) < 0)
+ goto err_free;
+ return;
- nouveau_fence_signal(fence);
- nouveau_fence_unref(&fence);
- }
- spin_unlock(&fctx->lock);
+err_free:
+ kfree(work);
+err:
+ func(data);
}
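nouveau_fence_work() is now a thin wrapper around fence_add_callback(): the callback itself only schedules a workqueue item, because fence callbacks can fire from the signaling (interrupt) path where the deferred function may not be safe to run. A hypothetical caller, assuming an object that must only be freed once the GPU is done with it:

	/* Usage sketch; free_obj and obj are illustrative names. */
	static void free_obj(void *data)
	{
		kfree(data);	/* runs later, from process context */
	}

	static void example(struct fence *fence, void *obj)
	{
		/* Calls free_obj(obj) immediately if the fence already
		 * signaled, otherwise once it signals. */
		nouveau_fence_work(fence, free_obj, obj);
	}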
int
nouveau_fence_emit(struct nouveau_fence *fence, struct nouveau_channel *chan)
{
struct nouveau_fence_chan *fctx = chan->fence;
+ struct nouveau_fence_priv *priv = (void*)chan->drm->fence;
int ret;
fence->channel = chan;
fence->timeout = jiffies + (15 * HZ);
- fence->sequence = ++fctx->sequence;
+ if (priv->uevent)
+ fence_init(&fence->base, &nouveau_fence_ops_uevent,
+ &fctx->lock, fctx->context, ++fctx->sequence);
+ else
+ fence_init(&fence->base, &nouveau_fence_ops_legacy,
+ &fctx->lock, fctx->context, ++fctx->sequence);
+ kref_get(&fctx->fence_ref);
+
+ trace_fence_emit(&fence->base);
ret = fctx->emit(fence);
if (!ret) {
- kref_get(&fence->kref);
- spin_lock(&fctx->lock);
+ fence_get(&fence->base);
+ spin_lock_irq(&fctx->lock);
+ nouveau_fence_update(chan, fctx);
list_add_tail(&fence->head, &fctx->pending);
- spin_unlock(&fctx->lock);
+ spin_unlock_irq(&fctx->lock);
}
return ret;
bool
nouveau_fence_done(struct nouveau_fence *fence)
{
- if (fence->channel)
- nouveau_fence_update(fence->channel);
- return !fence->channel;
-}
+ if (fence->base.ops == &nouveau_fence_ops_legacy ||
+ fence->base.ops == &nouveau_fence_ops_uevent) {
+ struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
+ unsigned long flags;
-struct nouveau_fence_wait {
- struct nouveau_fence_priv *priv;
- struct nvif_notify notify;
-};
+ if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags))
+ return true;
-static int
-nouveau_fence_wait_uevent_handler(struct nvif_notify *notify)
-{
- struct nouveau_fence_wait *wait =
- container_of(notify, typeof(*wait), notify);
- wake_up_all(&wait->priv->waiting);
- return NVIF_NOTIFY_KEEP;
+ spin_lock_irqsave(&fctx->lock, flags);
+ nouveau_fence_update(fence->channel, fctx);
+ spin_unlock_irqrestore(&fctx->lock, flags);
+ }
+ return fence_is_signaled(&fence->base);
}
-static int
-nouveau_fence_wait_uevent(struct nouveau_fence *fence, bool intr)
-
+static long
+nouveau_fence_wait_legacy(struct fence *f, bool intr, long wait)
{
- struct nouveau_channel *chan = fence->channel;
- struct nouveau_fence_priv *priv = chan->drm->fence;
- struct nouveau_fence_wait wait = { .priv = priv };
- int ret = 0;
+ struct nouveau_fence *fence = from_fence(f);
+ unsigned long sleep_time = NSEC_PER_MSEC / 1000;
+ unsigned long t = jiffies, timeout = t + wait;
- ret = nvif_notify_init(chan->object, NULL,
- nouveau_fence_wait_uevent_handler, false,
- G82_CHANNEL_DMA_V0_NTFY_UEVENT,
- &(struct nvif_notify_uevent_req) {
- },
- sizeof(struct nvif_notify_uevent_req),
- sizeof(struct nvif_notify_uevent_rep),
- &wait.notify);
- if (ret)
- return ret;
+ while (!nouveau_fence_done(fence)) {
+ ktime_t kt;
- nvif_notify_get(&wait.notify);
-
- if (fence->timeout) {
- unsigned long timeout = fence->timeout - jiffies;
-
- if (time_before(jiffies, fence->timeout)) {
- if (intr) {
- ret = wait_event_interruptible_timeout(
- priv->waiting,
- nouveau_fence_done(fence),
- timeout);
- } else {
- ret = wait_event_timeout(priv->waiting,
- nouveau_fence_done(fence),
- timeout);
- }
- }
+ t = jiffies;
- if (ret >= 0) {
- fence->timeout = jiffies + ret;
- if (time_after_eq(jiffies, fence->timeout))
- ret = -EBUSY;
- }
- } else {
- if (intr) {
- ret = wait_event_interruptible(priv->waiting,
- nouveau_fence_done(fence));
- } else {
- wait_event(priv->waiting, nouveau_fence_done(fence));
+ if (wait != MAX_SCHEDULE_TIMEOUT && time_after_eq(t, timeout)) {
+ __set_current_state(TASK_RUNNING);
+ return 0;
}
+
+ __set_current_state(intr ? TASK_INTERRUPTIBLE :
+ TASK_UNINTERRUPTIBLE);
+
+ kt = ktime_set(0, sleep_time);
+ schedule_hrtimeout(&kt, HRTIMER_MODE_REL);
+ sleep_time *= 2;
+ if (sleep_time > NSEC_PER_MSEC)
+ sleep_time = NSEC_PER_MSEC;
+
+ if (intr && signal_pending(current))
+ return -ERESTARTSYS;
}
- nvif_notify_fini(&wait.notify);
- if (unlikely(ret < 0))
- return ret;
+ __set_current_state(TASK_RUNNING);
- return 0;
+ return timeout - t;
}
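The legacy wait keeps the old polling strategy: hrtimer sleeps starting at 1 us (NSEC_PER_MSEC / 1000) that double each round and cap at 1 ms, so short fences are noticed quickly without spinning on long ones. The backoff loop reduced to its core, assuming a hypothetical done() predicate:

	unsigned long sleep_time = NSEC_PER_MSEC / 1000;	/* 1 us */

	while (!done()) {
		ktime_t kt = ktime_set(0, sleep_time);

		schedule_hrtimeout(&kt, HRTIMER_MODE_REL);
		sleep_time = min(sleep_time * 2,
				 (unsigned long)NSEC_PER_MSEC);	/* 1 ms cap */
	}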
-int
-nouveau_fence_wait(struct nouveau_fence *fence, bool lazy, bool intr)
+static int
+nouveau_fence_wait_busy(struct nouveau_fence *fence, bool intr)
{
- struct nouveau_channel *chan = fence->channel;
- struct nouveau_fence_priv *priv = chan ? chan->drm->fence : NULL;
- unsigned long sleep_time = NSEC_PER_MSEC / 1000;
- ktime_t t;
int ret = 0;
- while (priv && priv->uevent && lazy && !nouveau_fence_done(fence)) {
- ret = nouveau_fence_wait_uevent(fence, intr);
- if (ret < 0)
- return ret;
- }
-
while (!nouveau_fence_done(fence)) {
- if (fence->timeout && time_after_eq(jiffies, fence->timeout)) {
+ if (time_after_eq(jiffies, fence->timeout)) {
ret = -EBUSY;
break;
}
- __set_current_state(intr ? TASK_INTERRUPTIBLE :
- TASK_UNINTERRUPTIBLE);
- if (lazy) {
- t = ktime_set(0, sleep_time);
- schedule_hrtimeout(&t, HRTIMER_MODE_REL);
- sleep_time *= 2;
- if (sleep_time > NSEC_PER_MSEC)
- sleep_time = NSEC_PER_MSEC;
- }
+ __set_current_state(intr ?
+ TASK_INTERRUPTIBLE :
+ TASK_UNINTERRUPTIBLE);
if (intr && signal_pending(current)) {
ret = -ERESTARTSYS;
}
int
-nouveau_fence_sync(struct nouveau_fence *fence, struct nouveau_channel *chan)
+nouveau_fence_wait(struct nouveau_fence *fence, bool lazy, bool intr)
{
- struct nouveau_fence_chan *fctx = chan->fence;
- struct nouveau_channel *prev;
- int ret = 0;
+ long ret;
- prev = fence ? fence->channel : NULL;
- if (prev) {
- if (unlikely(prev != chan && !nouveau_fence_done(fence))) {
- ret = fctx->sync(fence, prev, chan);
- if (unlikely(ret))
- ret = nouveau_fence_wait(fence, true, false);
- }
- }
+ if (!lazy)
+ return nouveau_fence_wait_busy(fence, intr);
- return ret;
+ ret = fence_wait_timeout(&fence->base, intr, 15 * HZ);
+ if (ret < 0)
+ return ret;
+ else if (!ret)
+ return -EBUSY;
+ else
+ return 0;
}
-static void
-nouveau_fence_del(struct kref *kref)
+int
+nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan, bool exclusive, bool intr)
{
- struct nouveau_fence *fence = container_of(kref, typeof(*fence), kref);
- kfree(fence);
+ struct nouveau_fence_chan *fctx = chan->fence;
+ struct fence *fence;
+ struct reservation_object *resv = nvbo->bo.resv;
+ struct reservation_object_list *fobj;
+ struct nouveau_fence *f;
+ int ret = 0, i;
+
+ if (!exclusive) {
+ ret = reservation_object_reserve_shared(resv);
+
+ if (ret)
+ return ret;
+ }
+
+ fobj = reservation_object_get_list(resv);
+ fence = reservation_object_get_excl(resv);
+
+ if (fence && (!exclusive || !fobj || !fobj->shared_count)) {
+ struct nouveau_channel *prev = NULL;
+
+ f = nouveau_local_fence(fence, chan->drm);
+ if (f)
+ prev = f->channel;
+
+ if (!prev || (prev != chan && (ret = fctx->sync(f, prev, chan))))
+ ret = fence_wait(fence, intr);
+
+ return ret;
+ }
+
+ if (!exclusive || !fobj)
+ return ret;
+
+ for (i = 0; i < fobj->shared_count && !ret; ++i) {
+ struct nouveau_channel *prev = NULL;
+
+ fence = rcu_dereference_protected(fobj->shared[i],
+ reservation_object_held(resv));
+
+ f = nouveau_local_fence(fence, chan->drm);
+ if (f)
+ prev = f->channel;
+
+ if (!prev || (prev != chan && (ret = fctx->sync(f, prev, chan))))
+ ret = fence_wait(fence, intr);
+
+ if (ret)
+ break;
+ }
+
+ return ret;
}
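nouveau_fence_sync() now takes the buffer object and walks its reservation object rather than a single fence: the exclusive fence is always honoured, shared fences only when the caller intends an exclusive (write) access. For each fence it prefers a GPU-side semaphore sync through fctx->sync() when the fence came from another channel of the same device, and falls back to a CPU wait for foreign fences (for example ones imported via dma-buf). The per-fence decision, restated as a sketch using the names above:

	f = nouveau_local_fence(fence, chan->drm);
	if (f && f->channel == chan)
		;				/* same channel: already ordered */
	else if (f && fctx->sync(f, f->channel, chan) == 0)
		;				/* cross-channel semaphore sync */
	else
		ret = fence_wait(fence, intr);	/* foreign fence: CPU wait */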
void
nouveau_fence_unref(struct nouveau_fence **pfence)
{
if (*pfence)
- kref_put(&(*pfence)->kref, nouveau_fence_del);
+ fence_put(&(*pfence)->base);
*pfence = NULL;
}
-struct nouveau_fence *
-nouveau_fence_ref(struct nouveau_fence *fence)
-{
- if (fence)
- kref_get(&fence->kref);
- return fence;
-}
-
int
nouveau_fence_new(struct nouveau_channel *chan, bool sysmem,
struct nouveau_fence **pfence)
if (!fence)
return -ENOMEM;
- INIT_LIST_HEAD(&fence->work);
fence->sysmem = sysmem;
- kref_init(&fence->kref);
ret = nouveau_fence_emit(fence, chan);
if (ret)
*pfence = fence;
return ret;
}
+
+static const char *nouveau_fence_get_get_driver_name(struct fence *fence)
+{
+ return "nouveau";
+}
+
+static const char *nouveau_fence_get_timeline_name(struct fence *f)
+{
+ struct nouveau_fence *fence = from_fence(f);
+ struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
+
+ return fence->channel ? fctx->name : "dead channel";
+}
+
+/*
+ * In an ideal world, read() would not assume the channel context is still
+ * alive. This function may be called on behalf of another device and can
+ * therefore race with channel teardown, ending up reading freed memory.
+ * The drm node should still be there, so we can derive the index from the
+ * fence context.
+ */
+static bool nouveau_fence_is_signaled(struct fence *f)
+{
+ struct nouveau_fence *fence = from_fence(f);
+ struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
+ struct nouveau_channel *chan = fence->channel;
+
+ return (int)(fctx->read(chan) - fence->base.seqno) >= 0;
+}
+
+static bool nouveau_fence_no_signaling(struct fence *f)
+{
+ struct nouveau_fence *fence = from_fence(f);
+
+ /*
+ * caller should have a reference on the fence,
+ * else fence could get freed here
+ */
+ WARN_ON(atomic_read(&fence->base.refcount.refcount) <= 1);
+
+ /*
+ * This needs uevents to work correctly, but fence_add_callback relies on
+ * being able to enable signaling. It will still get signaled eventually,
+ * just not right away.
+ */
+ if (nouveau_fence_is_signaled(f)) {
+ list_del(&fence->head);
+
+ fence_put(&fence->base);
+ return false;
+ }
+
+ return true;
+}
+
+static void nouveau_fence_release(struct fence *f)
+{
+ struct nouveau_fence *fence = from_fence(f);
+ struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
+
+ kref_put(&fctx->fence_ref, nouveau_fence_context_put);
+ fence_free(&fence->base);
+}
+
+static const struct fence_ops nouveau_fence_ops_legacy = {
+ .get_driver_name = nouveau_fence_get_get_driver_name,
+ .get_timeline_name = nouveau_fence_get_timeline_name,
+ .enable_signaling = nouveau_fence_no_signaling,
+ .signaled = nouveau_fence_is_signaled,
+ .wait = nouveau_fence_wait_legacy,
+ .release = nouveau_fence_release
+};
+
+static bool nouveau_fence_enable_signaling(struct fence *f)
+{
+ struct nouveau_fence *fence = from_fence(f);
+ struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
+ bool ret;
+
+ if (!fctx->notify_ref++)
+ nvif_notify_get(&fctx->notify);
+
+ ret = nouveau_fence_no_signaling(f);
+ if (ret)
+ set_bit(FENCE_FLAG_USER_BITS, &fence->base.flags);
+ else if (!--fctx->notify_ref)
+ nvif_notify_put(&fctx->notify);
+
+ return ret;
+}
+
+static const struct fence_ops nouveau_fence_ops_uevent = {
+ .get_driver_name = nouveau_fence_get_get_driver_name,
+ .get_timeline_name = nouveau_fence_get_timeline_name,
+ .enable_signaling = nouveau_fence_enable_signaling,
+ .signaled = nouveau_fence_is_signaled,
+ .wait = fence_default_wait,
+ .release = NULL
+};
#ifndef __NOUVEAU_FENCE_H__
#define __NOUVEAU_FENCE_H__
+#include <linux/fence.h>
+#include <nvif/notify.h>
+
struct nouveau_drm;
+struct nouveau_bo;
struct nouveau_fence {
+ struct fence base;
+
struct list_head head;
- struct list_head work;
- struct kref kref;
bool sysmem;
struct nouveau_channel *channel;
unsigned long timeout;
- u32 sequence;
};
int nouveau_fence_new(struct nouveau_channel *, bool sysmem,
struct nouveau_fence **);
-struct nouveau_fence *
-nouveau_fence_ref(struct nouveau_fence *);
void nouveau_fence_unref(struct nouveau_fence **);
int nouveau_fence_emit(struct nouveau_fence *, struct nouveau_channel *);
bool nouveau_fence_done(struct nouveau_fence *);
-void nouveau_fence_work(struct nouveau_fence *, void (*)(void *), void *);
+void nouveau_fence_work(struct fence *, void (*)(void *), void *);
int nouveau_fence_wait(struct nouveau_fence *, bool lazy, bool intr);
-int nouveau_fence_sync(struct nouveau_fence *, struct nouveau_channel *);
+int nouveau_fence_sync(struct nouveau_bo *, struct nouveau_channel *, bool exclusive, bool intr);
struct nouveau_fence_chan {
+ spinlock_t lock;
+ struct kref fence_ref;
+
struct list_head pending;
struct list_head flip;
int (*emit32)(struct nouveau_channel *, u64, u32);
int (*sync32)(struct nouveau_channel *, u64, u32);
- spinlock_t lock;
u32 sequence;
+ u32 context;
+ char name[32];
+
+ struct nvif_notify notify;
+ int notify_ref;
};
struct nouveau_fence_priv {
int (*context_new)(struct nouveau_channel *);
void (*context_del)(struct nouveau_channel *);
- wait_queue_head_t waiting;
+ u32 contexts, context_base;
bool uevent;
};
#define nouveau_fence(drm) ((struct nouveau_fence_priv *)(drm)->fence)
-void nouveau_fence_context_new(struct nouveau_fence_chan *);
+void nouveau_fence_context_new(struct nouveau_channel *, struct nouveau_fence_chan *);
void nouveau_fence_context_del(struct nouveau_fence_chan *);
+void nouveau_fence_context_free(struct nouveau_fence_chan *);
int nv04_fence_create(struct nouveau_drm *);
int nv04_fence_mthd(struct nouveau_channel *, u32, u32, u32);
nouveau_gem_object_unmap(struct nouveau_bo *nvbo, struct nouveau_vma *vma)
{
const bool mapped = nvbo->bo.mem.mem_type != TTM_PL_SYSTEM;
- struct nouveau_fence *fence = NULL;
+ struct reservation_object *resv = nvbo->bo.resv;
+ struct reservation_object_list *fobj;
+ struct fence *fence = NULL;
+
+ fobj = reservation_object_get_list(resv);
list_del(&vma->head);
- if (mapped) {
- spin_lock(&nvbo->bo.bdev->fence_lock);
- fence = nouveau_fence_ref(nvbo->bo.sync_obj);
- spin_unlock(&nvbo->bo.bdev->fence_lock);
- }
+ if (fobj && fobj->shared_count > 1)
+ ttm_bo_wait(&nvbo->bo, true, false, false);
+ else if (fobj && fobj->shared_count == 1)
+ fence = rcu_dereference_protected(fobj->shared[0],
+ reservation_object_held(resv));
+ else
+ fence = reservation_object_get_excl(nvbo->bo.resv);
- if (fence) {
+ if (fence && mapped) {
nouveau_fence_work(fence, nouveau_gem_object_delete, vma);
} else {
if (mapped)
nouveau_vm_put(vma);
kfree(vma);
}
- nouveau_fence_unref(&fence);
}
void
flags |= TTM_PL_FLAG_SYSTEM;
ret = nouveau_bo_new(dev, size, align, flags, tile_mode,
- tile_flags, NULL, pnvbo);
+ tile_flags, NULL, NULL, pnvbo);
if (ret)
return ret;
nvbo = *pnvbo;
}
struct validate_op {
- struct list_head vram_list;
- struct list_head gart_list;
- struct list_head both_list;
+ struct list_head list;
struct ww_acquire_ctx ticket;
};
static void
-validate_fini_list(struct list_head *list, struct nouveau_fence *fence,
- struct ww_acquire_ctx *ticket)
+validate_fini_no_ticket(struct validate_op *op, struct nouveau_fence *fence,
+ struct drm_nouveau_gem_pushbuf_bo *pbbo)
{
- struct list_head *entry, *tmp;
struct nouveau_bo *nvbo;
+ struct drm_nouveau_gem_pushbuf_bo *b;
- list_for_each_safe(entry, tmp, list) {
- nvbo = list_entry(entry, struct nouveau_bo, entry);
+ while (!list_empty(&op->list)) {
+ nvbo = list_entry(op->list.next, struct nouveau_bo, entry);
+ b = &pbbo[nvbo->pbbo_index];
if (likely(fence))
- nouveau_bo_fence(nvbo, fence);
+ nouveau_bo_fence(nvbo, fence, !!b->write_domains);
if (unlikely(nvbo->validate_mapped)) {
ttm_bo_kunmap(&nvbo->kmap);
list_del(&nvbo->entry);
nvbo->reserved_by = NULL;
- ttm_bo_unreserve_ticket(&nvbo->bo, ticket);
+ ttm_bo_unreserve_ticket(&nvbo->bo, &op->ticket);
drm_gem_object_unreference_unlocked(&nvbo->gem);
}
}
static void
-validate_fini_no_ticket(struct validate_op *op, struct nouveau_fence *fence)
+validate_fini(struct validate_op *op, struct nouveau_fence *fence,
+ struct drm_nouveau_gem_pushbuf_bo *pbbo)
{
- validate_fini_list(&op->vram_list, fence, &op->ticket);
- validate_fini_list(&op->gart_list, fence, &op->ticket);
- validate_fini_list(&op->both_list, fence, &op->ticket);
-}
-
-static void
-validate_fini(struct validate_op *op, struct nouveau_fence *fence)
-{
- validate_fini_no_ticket(op, fence);
+ validate_fini_no_ticket(op, fence, pbbo);
ww_acquire_fini(&op->ticket);
}
int trycnt = 0;
int ret, i;
struct nouveau_bo *res_bo = NULL;
+ LIST_HEAD(gart_list);
+ LIST_HEAD(vram_list);
+ LIST_HEAD(both_list);
ww_acquire_init(&op->ticket, &reservation_ww_class);
retry:
gem = drm_gem_object_lookup(dev, file_priv, b->handle);
if (!gem) {
NV_PRINTK(error, cli, "Unknown handle 0x%08x\n", b->handle);
- ww_acquire_done(&op->ticket);
- validate_fini(op, NULL);
- return -ENOENT;
+ ret = -ENOENT;
+ break;
}
nvbo = nouveau_gem_object(gem);
if (nvbo == res_bo) {
NV_PRINTK(error, cli, "multiple instances of buffer %d on "
"validation list\n", b->handle);
drm_gem_object_unreference_unlocked(gem);
- ww_acquire_done(&op->ticket);
- validate_fini(op, NULL);
- return -EINVAL;
+ ret = -EINVAL;
+ break;
}
ret = ttm_bo_reserve(&nvbo->bo, true, false, true, &op->ticket);
if (ret) {
- validate_fini_no_ticket(op, NULL);
+ list_splice_tail_init(&vram_list, &op->list);
+ list_splice_tail_init(&gart_list, &op->list);
+ list_splice_tail_init(&both_list, &op->list);
+ validate_fini_no_ticket(op, NULL, NULL);
if (unlikely(ret == -EDEADLK)) {
ret = ttm_bo_reserve_slowpath(&nvbo->bo, true,
&op->ticket);
res_bo = nvbo;
}
if (unlikely(ret)) {
- ww_acquire_done(&op->ticket);
- ww_acquire_fini(&op->ticket);
- drm_gem_object_unreference_unlocked(gem);
if (ret != -ERESTARTSYS)
NV_PRINTK(error, cli, "fail reserve\n");
- return ret;
+ break;
}
}
nvbo->pbbo_index = i;
if ((b->valid_domains & NOUVEAU_GEM_DOMAIN_VRAM) &&
(b->valid_domains & NOUVEAU_GEM_DOMAIN_GART))
- list_add_tail(&nvbo->entry, &op->both_list);
+ list_add_tail(&nvbo->entry, &both_list);
else
if (b->valid_domains & NOUVEAU_GEM_DOMAIN_VRAM)
- list_add_tail(&nvbo->entry, &op->vram_list);
+ list_add_tail(&nvbo->entry, &vram_list);
else
if (b->valid_domains & NOUVEAU_GEM_DOMAIN_GART)
- list_add_tail(&nvbo->entry, &op->gart_list);
+ list_add_tail(&nvbo->entry, &gart_list);
else {
NV_PRINTK(error, cli, "invalid valid domains: 0x%08x\n",
b->valid_domains);
- list_add_tail(&nvbo->entry, &op->both_list);
- ww_acquire_done(&op->ticket);
- validate_fini(op, NULL);
- return -EINVAL;
+ list_add_tail(&nvbo->entry, &both_list);
+ ret = -EINVAL;
+ break;
}
if (nvbo == res_bo)
goto retry;
}
ww_acquire_done(&op->ticket);
- return 0;
-}
-
-static int
-validate_sync(struct nouveau_channel *chan, struct nouveau_bo *nvbo)
-{
- struct nouveau_fence *fence = NULL;
- int ret = 0;
-
- spin_lock(&nvbo->bo.bdev->fence_lock);
- fence = nouveau_fence_ref(nvbo->bo.sync_obj);
- spin_unlock(&nvbo->bo.bdev->fence_lock);
-
- if (fence) {
- ret = nouveau_fence_sync(fence, chan);
- nouveau_fence_unref(&fence);
- }
-
+ list_splice_tail(&vram_list, &op->list);
+ list_splice_tail(&gart_list, &op->list);
+ list_splice_tail(&both_list, &op->list);
+ if (ret)
+ validate_fini(op, NULL, NULL);
return ret;
}
static int
return ret;
}
- ret = validate_sync(chan, nvbo);
+ ret = nouveau_fence_sync(nvbo, chan, !!b->write_domains, true);
if (unlikely(ret)) {
- NV_PRINTK(error, cli, "fail post-validate sync\n");
+ if (ret != -ERESTARTSYS)
+ NV_PRINTK(error, cli, "fail post-validate sync\n");
return ret;
}
struct validate_op *op, int *apply_relocs)
{
struct nouveau_cli *cli = nouveau_cli(file_priv);
- int ret, relocs = 0;
+ int ret;
- INIT_LIST_HEAD(&op->vram_list);
- INIT_LIST_HEAD(&op->gart_list);
- INIT_LIST_HEAD(&op->both_list);
+ INIT_LIST_HEAD(&op->list);
if (nr_buffers == 0)
return 0;
return ret;
}
- ret = validate_list(chan, cli, &op->vram_list, pbbo, user_buffers);
- if (unlikely(ret < 0)) {
- if (ret != -ERESTARTSYS)
- NV_PRINTK(error, cli, "validate vram_list\n");
- validate_fini(op, NULL);
- return ret;
- }
- relocs += ret;
-
- ret = validate_list(chan, cli, &op->gart_list, pbbo, user_buffers);
- if (unlikely(ret < 0)) {
- if (ret != -ERESTARTSYS)
- NV_PRINTK(error, cli, "validate gart_list\n");
- validate_fini(op, NULL);
- return ret;
- }
- relocs += ret;
-
- ret = validate_list(chan, cli, &op->both_list, pbbo, user_buffers);
+ ret = validate_list(chan, cli, &op->list, pbbo, user_buffers);
if (unlikely(ret < 0)) {
if (ret != -ERESTARTSYS)
- NV_PRINTK(error, cli, "validate both_list\n");
- validate_fini(op, NULL);
+ NV_PRINTK(error, cli, "validating bo list\n");
+ validate_fini(op, NULL, NULL);
return ret;
}
- relocs += ret;
-
- *apply_relocs = relocs;
+ *apply_relocs = ret;
return 0;
}
data |= r->vor;
}
- spin_lock(&nvbo->bo.bdev->fence_lock);
- ret = ttm_bo_wait(&nvbo->bo, false, false, false);
- spin_unlock(&nvbo->bo.bdev->fence_lock);
+ ret = ttm_bo_wait(&nvbo->bo, true, false, false);
if (ret) {
NV_PRINTK(error, cli, "reloc wait_idle failed: %d\n", ret);
break;
}
out:
- validate_fini(&op, fence);
+ validate_fini(&op, fence, bo);
nouveau_fence_unref(&fence);
out_prevalid:
struct drm_gem_object *gem;
struct nouveau_bo *nvbo;
bool no_wait = !!(req->flags & NOUVEAU_GEM_CPU_PREP_NOWAIT);
- int ret = -EINVAL;
+ bool write = !!(req->flags & NOUVEAU_GEM_CPU_PREP_WRITE);
+ int ret;
gem = drm_gem_object_lookup(dev, file_priv, req->handle);
if (!gem)
return -ENOENT;
nvbo = nouveau_gem_object(gem);
- spin_lock(&nvbo->bo.bdev->fence_lock);
- ret = ttm_bo_wait(&nvbo->bo, true, true, no_wait);
- spin_unlock(&nvbo->bo.bdev->fence_lock);
+ if (no_wait)
+ ret = reservation_object_test_signaled_rcu(nvbo->bo.resv, write) ? 0 : -EBUSY;
+ else {
+ long lret;
+
+ lret = reservation_object_wait_timeout_rcu(nvbo->bo.resv, write, true, 30 * HZ);
+ if (!lret)
+ ret = -EBUSY;
+ else if (lret > 0)
+ ret = 0;
+ else
+ ret = lret;
+ }
drm_gem_object_unreference_unlocked(gem);
+
return ret;
}
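CPU_PREP is answered straight from the reservation object now: the new write flag selects whether shared (read) fences must be waited on as well, and the long result of reservation_object_wait_timeout_rcu() (negative error, 0 on timeout, remaining jiffies otherwise) is folded back into the ioctl's int return. The mapping in isolation:

	/* Sketch: fold the long timeout-wait result into an errno. */
	long lret = reservation_object_wait_timeout_rcu(resv, write,
							true, 30 * HZ);
	int ret = lret < 0 ? lret : (lret ? 0 : -EBUSY);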
extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
- struct drm_device *, size_t size, struct sg_table *);
+ struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
*/
#include <drm/drmP.h>
+#include <linux/dma-buf.h>
#include "nouveau_drm.h"
#include "nouveau_gem.h"
}
struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
- size_t size,
+ struct dma_buf_attachment *attach,
struct sg_table *sg)
{
struct nouveau_bo *nvbo;
+ struct reservation_object *robj = attach->dmabuf->resv;
u32 flags = 0;
int ret;
flags = TTM_PL_FLAG_TT;
- ret = nouveau_bo_new(dev, size, 0, flags, 0, 0,
- sg, &nvbo);
+ ww_mutex_lock(&robj->lock, NULL);
+ ret = nouveau_bo_new(dev, attach->dmabuf->size, 0, flags, 0, 0,
+ sg, robj, &nvbo);
+ ww_mutex_unlock(&robj->lock);
if (ret)
return ERR_PTR(ret);
#include "nouveau_sysfs.h"
MODULE_PARM_DESC(pstate, "enable sysfs pstate file, which will be moved in the future");
-static int nouveau_pstate;
+int nouveau_pstate;
module_param_named(pstate, nouveau_pstate, int, 0400);
static inline struct drm_device *
int nouveau_sysfs_init(struct drm_device *);
void nouveau_sysfs_fini(struct drm_device *);
+extern int nouveau_pstate;
+
#endif
static int
nouveau_vram_manager_new(struct ttm_mem_type_manager *man,
struct ttm_buffer_object *bo,
- struct ttm_placement *placement,
- uint32_t flags,
+ const struct ttm_place *place,
struct ttm_mem_reg *mem)
{
struct nouveau_drm *drm = nouveau_bdev(man->bdev);
static int
nouveau_gart_manager_new(struct ttm_mem_type_manager *man,
struct ttm_buffer_object *bo,
- struct ttm_placement *placement,
- uint32_t flags,
+ const struct ttm_place *place,
struct ttm_mem_reg *mem)
{
struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
static int
nv04_gart_manager_new(struct ttm_mem_type_manager *man,
struct ttm_buffer_object *bo,
- struct ttm_placement *placement,
- uint32_t flags,
+ const struct ttm_place *place,
struct ttm_mem_reg *mem)
{
struct nouveau_mem *node;
struct nouveau_drm *drm = nouveau_drm(file_priv->minor->dev);
if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET))
- return drm_mmap(filp, vma);
+ return -EINVAL;
return ttm_bo_mmap(filp, vma, &drm->ttm.bdev);
}
int ret = RING_SPACE(chan, 2);
if (ret == 0) {
BEGIN_NV04(chan, NvSubSw, 0x0150, 1);
- OUT_RING (chan, fence->sequence);
+ OUT_RING (chan, fence->base.seqno);
FIRE_RING (chan);
}
return ret;
struct nv04_fence_chan *fctx = chan->fence;
nouveau_fence_context_del(&fctx->base);
chan->fence = NULL;
- kfree(fctx);
+ nouveau_fence_context_free(&fctx->base);
}
static int
{
struct nv04_fence_chan *fctx = kzalloc(sizeof(*fctx), GFP_KERNEL);
if (fctx) {
- nouveau_fence_context_new(&fctx->base);
+ nouveau_fence_context_new(chan, &fctx->base);
fctx->base.emit = nv04_fence_emit;
fctx->base.sync = nv04_fence_sync;
fctx->base.read = nv04_fence_read;
priv->base.dtor = nv04_fence_destroy;
priv->base.context_new = nv04_fence_context_new;
priv->base.context_del = nv04_fence_context_del;
+ priv->base.contexts = 15;
+ priv->base.context_base = fence_context_alloc(priv->base.contexts);
return 0;
}
int ret = RING_SPACE(chan, 2);
if (ret == 0) {
BEGIN_NV04(chan, 0, NV10_SUBCHAN_REF_CNT, 1);
- OUT_RING (chan, fence->sequence);
+ OUT_RING (chan, fence->base.seqno);
FIRE_RING (chan);
}
return ret;
nvif_object_fini(&fctx->head[i]);
nvif_object_fini(&fctx->sema);
chan->fence = NULL;
- kfree(fctx);
+ nouveau_fence_context_free(&fctx->base);
}
int
if (!fctx)
return -ENOMEM;
- nouveau_fence_context_new(&fctx->base);
+ nouveau_fence_context_new(chan, &fctx->base);
fctx->base.emit = nv10_fence_emit;
fctx->base.read = nv10_fence_read;
fctx->base.sync = nv10_fence_sync;
priv->base.dtor = nv10_fence_destroy;
priv->base.context_new = nv10_fence_context_new;
priv->base.context_del = nv10_fence_context_del;
+ priv->base.contexts = 31;
+ priv->base.context_base = fence_context_alloc(priv->base.contexts);
spin_lock_init(&priv->lock);
return 0;
}
if (!fctx)
return -ENOMEM;
- nouveau_fence_context_new(&fctx->base);
+ nouveau_fence_context_new(chan, &fctx->base);
fctx->base.emit = nv10_fence_emit;
fctx->base.read = nv10_fence_read;
fctx->base.sync = nv17_fence_sync;
priv->base.resume = nv17_fence_resume;
priv->base.context_new = nv17_fence_context_new;
priv->base.context_del = nv10_fence_context_del;
+ priv->base.contexts = 31;
+ priv->base.context_base = fence_context_alloc(priv->base.contexts);
spin_lock_init(&priv->lock);
ret = nouveau_bo_new(drm->dev, 4096, 0x1000, TTM_PL_FLAG_VRAM,
- 0, 0x0000, NULL, &priv->bo);
+ 0, 0x0000, NULL, NULL, &priv->bo);
if (!ret) {
ret = nouveau_bo_pin(priv->bo, TTM_PL_FLAG_VRAM);
if (!ret) {
u32 vscan = (mode->flags & DRM_MODE_FLAG_DBLSCAN) ? 2 : 1;
u32 hactive, hsynce, hbackp, hfrontp, hblanke, hblanks;
u32 vactive, vsynce, vbackp, vfrontp, vblanke, vblanks;
- u32 vblan2e = 0, vblan2s = 1;
+ u32 vblan2e = 0, vblan2s = 1, vblankus = 0;
u32 *push;
int ret;
vblanke = vsynce + vbackp;
vfrontp = (mode->vsync_start - mode->vdisplay) * vscan / ilace;
vblanks = vactive - vfrontp - 1;
+ /* XXX: Safe underestimate, even "0" works */
+ vblankus = (vactive - mode->vdisplay - 2) * hactive;
+ vblankus *= 1000;
+ vblankus /= mode->clock;
+
if (mode->flags & DRM_MODE_FLAG_INTERLACE) {
vblan2e = vactive + vsynce + vbackp;
vblan2s = vblan2e + (mode->vdisplay * vscan / ilace);
evo_mthd(push, 0x0804 + (nv_crtc->index * 0x400), 2);
evo_data(push, 0x00800000 | mode->clock);
evo_data(push, (ilace == 2) ? 2 : 0);
- evo_mthd(push, 0x0810 + (nv_crtc->index * 0x400), 6);
+ evo_mthd(push, 0x0810 + (nv_crtc->index * 0x400), 8);
evo_data(push, 0x00000000);
evo_data(push, (vactive << 16) | hactive);
evo_data(push, ( vsynce << 16) | hsynce);
evo_data(push, (vblanke << 16) | hblanke);
evo_data(push, (vblanks << 16) | hblanks);
evo_data(push, (vblan2e << 16) | vblan2s);
- evo_mthd(push, 0x082c + (nv_crtc->index * 0x400), 1);
+ evo_data(push, vblankus);
evo_data(push, 0x00000000);
evo_mthd(push, 0x0900 + (nv_crtc->index * 0x400), 2);
evo_data(push, 0x00000311);
drm_mode_crtc_set_gamma_size(crtc, 256);
ret = nouveau_bo_new(dev, 8192, 0x100, TTM_PL_FLAG_VRAM,
- 0, 0x0000, NULL, &head->base.lut.nvbo);
+ 0, 0x0000, NULL, NULL, &head->base.lut.nvbo);
if (!ret) {
ret = nouveau_bo_pin(head->base.lut.nvbo, TTM_PL_FLAG_VRAM);
if (!ret) {
goto out;
ret = nouveau_bo_new(dev, 64 * 64 * 4, 0x100, TTM_PL_FLAG_VRAM,
- 0, 0x0000, NULL, &head->base.cursor.nvbo);
+ 0, 0x0000, NULL, NULL, &head->base.cursor.nvbo);
if (!ret) {
ret = nouveau_bo_pin(head->base.cursor.nvbo, TTM_PL_FLAG_VRAM);
if (!ret) {
nv50_audio_mode_set(struct drm_encoder *encoder, struct drm_display_mode *mode)
{
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
+ struct nouveau_crtc *nv_crtc = nouveau_crtc(encoder->crtc);
struct nouveau_connector *nv_connector;
struct nv50_disp *disp = nv50_disp(encoder->dev);
- struct {
- struct nv50_disp_mthd_v1 base;
- struct nv50_disp_sor_hda_eld_v0 eld;
+ struct __packed {
+ struct {
+ struct nv50_disp_mthd_v1 mthd;
+ struct nv50_disp_sor_hda_eld_v0 eld;
+ } base;
u8 data[sizeof(nv_connector->base.eld)];
} args = {
- .base.version = 1,
- .base.method = NV50_DISP_MTHD_V1_SOR_HDA_ELD,
- .base.hasht = nv_encoder->dcb->hasht,
- .base.hashm = nv_encoder->dcb->hashm,
+ .base.mthd.version = 1,
+ .base.mthd.method = NV50_DISP_MTHD_V1_SOR_HDA_ELD,
+ .base.mthd.hasht = nv_encoder->dcb->hasht,
+ .base.mthd.hashm = (0xf0ff & nv_encoder->dcb->hashm) |
+ (0x0100 << nv_crtc->index),
};
nv_connector = nouveau_encoder_connector_get(nv_encoder);
drm_edid_to_eld(&nv_connector->base, nv_connector->edid);
memcpy(args.data, nv_connector->base.eld, sizeof(args.data));
- nvif_mthd(disp->disp, 0, &args, sizeof(args));
+ nvif_mthd(disp->disp, 0, &args, sizeof(args.base) + args.data[2] * 4);
}
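The ELD upload is trimmed to the bytes that are actually valid: byte 2 of an ELD header stores the baseline block length in 4-byte units, so pushing sizeof(args.base) plus data[2] * 4 covers the whole structure without sending the worst-case buffer. As a standalone helper over a raw ELD byte array:

	/* Sketch: useful size of an ELD buffer from its header. */
	static size_t eld_size(const u8 *eld)
	{
		return (eld[2] + 1) * 4;	/* 4-byte header + baseline block */
	}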
static void
-nv50_audio_disconnect(struct drm_encoder *encoder)
+nv50_audio_disconnect(struct drm_encoder *encoder, struct nouveau_crtc *nv_crtc)
{
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
struct nv50_disp *disp = nv50_disp(encoder->dev);
.base.version = 1,
.base.method = NV50_DISP_MTHD_V1_SOR_HDA_ELD,
.base.hasht = nv_encoder->dcb->hasht,
- .base.hashm = nv_encoder->dcb->hashm,
+ .base.hashm = (0xf0ff & nv_encoder->dcb->hashm) |
+ (0x0100 << nv_crtc->index),
};
nvif_mthd(disp->disp, 0, &args, sizeof(args));
(0x0100 << nv_crtc->index),
};
- nv50_audio_disconnect(encoder);
-
nvif_mthd(disp->disp, 0, &args, sizeof(args));
}
if (nv_crtc) {
nv50_crtc_prepare(&nv_crtc->base);
nv50_sor_ctrl(nv_encoder, 1 << nv_crtc->index, 0);
+ nv50_audio_disconnect(encoder, nv_crtc);
nv50_hdmi_disconnect(&nv_encoder->base.base, nv_crtc);
}
}
proto = 0x8;
else
proto = 0x9;
+ nv50_audio_mode_set(encoder, mode);
break;
default:
BUG_ON(1);
/* small shared memory area we use for notifiers and semaphores */
ret = nouveau_bo_new(dev, 4096, 0x1000, TTM_PL_FLAG_VRAM,
- 0, 0x0000, NULL, &disp->sync);
+ 0, 0x0000, NULL, NULL, &disp->sync);
if (!ret) {
ret = nouveau_bo_pin(disp->sync, TTM_PL_FLAG_VRAM);
if (!ret) {
if (!fctx)
return -ENOMEM;
- nouveau_fence_context_new(&fctx->base);
+ nouveau_fence_context_new(chan, &fctx->base);
fctx->base.emit = nv10_fence_emit;
fctx->base.read = nv10_fence_read;
fctx->base.sync = nv17_fence_sync;
priv->base.resume = nv17_fence_resume;
priv->base.context_new = nv50_fence_context_new;
priv->base.context_del = nv10_fence_context_del;
+ priv->base.contexts = 127;
+ priv->base.context_base = fence_context_alloc(priv->base.contexts);
spin_lock_init(&priv->lock);
ret = nouveau_bo_new(drm->dev, 4096, 0x1000, TTM_PL_FLAG_VRAM,
- 0, 0x0000, NULL, &priv->bo);
+ 0, 0x0000, NULL, NULL, &priv->bo);
if (!ret) {
ret = nouveau_bo_pin(priv->bo, TTM_PL_FLAG_VRAM);
if (!ret) {
else
addr += fctx->vma.offset;
- return fctx->base.emit32(chan, addr, fence->sequence);
+ return fctx->base.emit32(chan, addr, fence->base.seqno);
}
static int
else
addr += fctx->vma.offset;
- return fctx->base.sync32(chan, addr, fence->sequence);
+ return fctx->base.sync32(chan, addr, fence->base.seqno);
}
static u32
nouveau_bo_vma_del(bo, &fctx->dispc_vma[i]);
}
+ nouveau_bo_wr32(priv->bo, chan->chid * 16 / 4, fctx->base.sequence);
nouveau_bo_vma_del(priv->bo, &fctx->vma_gart);
nouveau_bo_vma_del(priv->bo, &fctx->vma);
nouveau_fence_context_del(&fctx->base);
chan->fence = NULL;
- kfree(fctx);
+ nouveau_fence_context_free(&fctx->base);
}
int
if (!fctx)
return -ENOMEM;
- nouveau_fence_context_new(&fctx->base);
+ nouveau_fence_context_new(chan, &fctx->base);
fctx->base.emit = nv84_fence_emit;
fctx->base.sync = nv84_fence_sync;
fctx->base.read = nv84_fence_read;
fctx->base.emit32 = nv84_fence_emit32;
fctx->base.sync32 = nv84_fence_sync32;
+ fctx->base.sequence = nv84_fence_read(chan);
ret = nouveau_bo_vma_add(priv->bo, cli->vm, &fctx->vma);
if (ret == 0) {
ret = nouveau_bo_vma_add(bo, cli->vm, &fctx->dispc_vma[i]);
}
- nouveau_bo_wr32(priv->bo, chan->chid * 16/4, 0x00000000);
-
if (ret)
nv84_fence_context_del(chan);
return ret;
static bool
nv84_fence_suspend(struct nouveau_drm *drm)
{
- struct nouveau_fifo *pfifo = nvkm_fifo(&drm->device);
struct nv84_fence_priv *priv = drm->fence;
int i;
- priv->suspend = vmalloc((pfifo->max + 1) * sizeof(u32));
+ priv->suspend = vmalloc(priv->base.contexts * sizeof(u32));
if (priv->suspend) {
- for (i = 0; i <= pfifo->max; i++)
+ for (i = 0; i < priv->base.contexts; i++)
priv->suspend[i] = nouveau_bo_rd32(priv->bo, i*4);
}
static void
nv84_fence_resume(struct nouveau_drm *drm)
{
- struct nouveau_fifo *pfifo = nvkm_fifo(&drm->device);
struct nv84_fence_priv *priv = drm->fence;
int i;
if (priv->suspend) {
- for (i = 0; i <= pfifo->max; i++)
+ for (i = 0; i < priv->base.contexts; i++)
nouveau_bo_wr32(priv->bo, i*4, priv->suspend[i]);
vfree(priv->suspend);
priv->suspend = NULL;
priv->base.context_new = nv84_fence_context_new;
priv->base.context_del = nv84_fence_context_del;
- init_waitqueue_head(&priv->base.waiting);
+ priv->base.contexts = pfifo->max + 1;
+ priv->base.context_base = fence_context_alloc(priv->base.contexts);
priv->base.uevent = true;
- ret = nouveau_bo_new(drm->dev, 16 * (pfifo->max + 1), 0,
- TTM_PL_FLAG_VRAM, 0, 0, NULL, &priv->bo);
+ ret = nouveau_bo_new(drm->dev, 16 * priv->base.contexts, 0,
+ TTM_PL_FLAG_VRAM, 0, 0, NULL, NULL, &priv->bo);
if (ret == 0) {
ret = nouveau_bo_pin(priv->bo, TTM_PL_FLAG_VRAM);
if (ret == 0) {
}
if (ret == 0)
- ret = nouveau_bo_new(drm->dev, 16 * (pfifo->max + 1), 0,
- TTM_PL_FLAG_TT, 0, 0, NULL,
+ ret = nouveau_bo_new(drm->dev, 16 * priv->base.contexts, 0,
+ TTM_PL_FLAG_TT, 0, 0, NULL, NULL,
&priv->bo_gart);
if (ret == 0) {
ret = nouveau_bo_pin(priv->bo_gart, TTM_PL_FLAG_TT);
__u32 pushbuf;
};
+#define NV50_DISP_CORE_CHANNEL_DMA_V0_NTFY_UEVENT 0x00
+
/* cursor immediate */
struct nv50_disp_cursor_v0 {
__u8 version;
__u8 pad02[6];
};
+#define NV50_DISP_CURSOR_V0_NTFY_UEVENT 0x00
+
/* base */
struct nv50_disp_base_channel_dma_v0 {
__u8 version;
__u32 pushbuf;
};
+#define NV50_DISP_BASE_CHANNEL_DMA_V0_NTFY_UEVENT 0x00
+
/* overlay */
struct nv50_disp_overlay_channel_dma_v0 {
__u8 version;
__u32 pushbuf;
};
+#define NV50_DISP_OVERLAY_CHANNEL_DMA_V0_NTFY_UEVENT 0x00
+
/* overlay immediate */
struct nv50_disp_overlay_v0 {
__u8 version;
__u8 pad02[6];
};
+#define NV50_DISP_OVERLAY_V0_NTFY_UEVENT 0x00
/*******************************************************************************
* fermi
.lastclose = dev_lastclose,
.preclose = dev_preclose,
.postclose = dev_postclose,
+ .set_busid = drm_platform_set_busid,
.get_vblank_counter = drm_vblank_count,
.enable_vblank = omap_irq_enable_vblank,
.disable_vblank = omap_irq_disable_vblank,
#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include <drm/omap_drm.h>
+#include <drm/drm_gem.h>
#include <linux/platform_data/omap_drm.h>
},
};
+static const struct drm_display_mode auo_b101xtn01_mode = {
+ .clock = 72000,
+ .hdisplay = 1366,
+ .hsync_start = 1366 + 20,
+ .hsync_end = 1366 + 20 + 70,
+ .htotal = 1366 + 20 + 70,
+ .vdisplay = 768,
+ .vsync_start = 768 + 14,
+ .vsync_end = 768 + 14 + 42,
+ .vtotal = 768 + 14 + 42,
+ .vrefresh = 60,
+ .flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC,
+};
+
+static const struct panel_desc auo_b101xtn01 = {
+ .modes = &auo_b101xtn01_mode,
+ .num_modes = 1,
+ .bpc = 6,
+ .size = {
+ .width = 223,
+ .height = 125,
+ },
+};
+
static const struct drm_display_mode auo_b133xtn01_mode = {
.clock = 69500,
.hdisplay = 1366,
{
.compatible = "auo,b101aw03",
.data = &auo_b101aw03,
+ }, {
+ .compatible = "auo,b101xtn01",
+ .data = &auo_b101xtn01,
}, {
.compatible = "auo,b133htn01",
.data = &auo_b133htn01,
ccflags-y := -Iinclude/drm
-qxl-y := qxl_drv.o qxl_kms.o qxl_display.o qxl_ttm.o qxl_fb.o qxl_object.o qxl_gem.o qxl_cmd.o qxl_image.o qxl_draw.o qxl_debugfs.o qxl_irq.o qxl_dumb.o qxl_ioctl.o qxl_fence.o qxl_release.o
+qxl-y := qxl_drv.o qxl_kms.o qxl_display.o qxl_ttm.o qxl_fb.o qxl_object.o qxl_gem.o qxl_cmd.o qxl_image.o qxl_draw.o qxl_debugfs.o qxl_irq.o qxl_dumb.o qxl_ioctl.o qxl_release.o qxl_prime.o
obj-$(CONFIG_DRM_QXL)+= qxl.o
if (ret == -EBUSY)
return -EBUSY;
- if (surf->fence.num_active_releases > 0 && stall == false) {
- qxl_bo_unreserve(surf);
- return -EBUSY;
- }
-
if (stall)
mutex_unlock(&qdev->surf_evict_mutex);
- spin_lock(&surf->tbo.bdev->fence_lock);
ret = ttm_bo_wait(&surf->tbo, true, true, !stall);
- spin_unlock(&surf->tbo.bdev->fence_lock);
if (stall)
mutex_lock(&qdev->surf_evict_mutex);
struct qxl_bo *bo;
list_for_each_entry(bo, &qdev->gem.objects, list) {
- seq_printf(m, "size %ld, pc %d, sync obj %p, num releases %d\n",
- (unsigned long)bo->gem_base.size, bo->pin_count,
- bo->tbo.sync_obj, bo->fence.num_active_releases);
+ struct reservation_object_list *fobj;
+ int rel;
+
+ rcu_read_lock();
+ fobj = rcu_dereference(bo->tbo.resv->fence);
+ rel = fobj ? fobj->shared_count : 0;
+ rcu_read_unlock();
+
+ seq_printf(m, "size %ld, pc %d, num releases %d\n",
+ (unsigned long)bo->gem_base.size,
+ bo->pin_count, rel);
}
return 0;
}
kfree(qxl_crtc);
}
+static int qxl_crtc_page_flip(struct drm_crtc *crtc,
+ struct drm_framebuffer *fb,
+ struct drm_pending_vblank_event *event,
+ uint32_t page_flip_flags)
+{
+ struct drm_device *dev = crtc->dev;
+ struct qxl_device *qdev = dev->dev_private;
+ struct qxl_crtc *qcrtc = to_qxl_crtc(crtc);
+ struct qxl_framebuffer *qfb_src = to_qxl_framebuffer(fb);
+ struct qxl_framebuffer *qfb_old = to_qxl_framebuffer(crtc->primary->fb);
+ struct qxl_bo *bo_old = gem_to_qxl_bo(qfb_old->obj);
+ struct qxl_bo *bo = gem_to_qxl_bo(qfb_src->obj);
+ unsigned long flags;
+ struct drm_clip_rect norect = {
+ .x1 = 0,
+ .y1 = 0,
+ .x2 = fb->width,
+ .y2 = fb->height
+ };
+ int inc = 1;
+ int one_clip_rect = 1;
+ int ret = 0;
+
+ crtc->primary->fb = fb;
+ bo_old->is_primary = false;
+ bo->is_primary = true;
+
+ ret = qxl_bo_reserve(bo, false);
+ if (ret)
+ return ret;
+
+ qxl_draw_dirty_fb(qdev, qfb_src, bo, 0, 0,
+ &norect, one_clip_rect, inc);
+
+ drm_vblank_get(dev, qcrtc->index);
+
+ if (event) {
+ spin_lock_irqsave(&dev->event_lock, flags);
+ drm_send_vblank_event(dev, qcrtc->index, event);
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+ }
+ drm_vblank_put(dev, qcrtc->index);
+
+ qxl_bo_unreserve(bo);
+
+ return 0;
+}
+
static int
qxl_hide_cursor(struct qxl_device *qdev)
{
.cursor_move = qxl_crtc_cursor_move,
.set_config = drm_crtc_helper_set_config,
.destroy = qxl_crtc_destroy,
+ .page_flip = qxl_crtc_page_flip,
};
static void qxl_user_framebuffer_destroy(struct drm_framebuffer *fb)
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
.poll = drm_poll,
+ .read = drm_read,
.mmap = qxl_mmap,
};
return qxl_drm_resume(drm_dev, false);
}
+static u32 qxl_noop_get_vblank_counter(struct drm_device *dev, int crtc)
+{
+ return dev->vblank[crtc].count.counter;
+}
+
+static int qxl_noop_enable_vblank(struct drm_device *dev, int crtc)
+{
+ return 0;
+}
+
+static void qxl_noop_disable_vblank(struct drm_device *dev, int crtc)
+{
+}
+
static const struct dev_pm_ops qxl_pm_ops = {
.suspend = qxl_pm_suspend,
.resume = qxl_pm_resume,
};
static struct drm_driver qxl_driver = {
- .driver_features = DRIVER_GEM | DRIVER_MODESET |
+ .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME |
DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED,
.load = qxl_driver_load,
.unload = qxl_driver_unload,
+ .get_vblank_counter = qxl_noop_get_vblank_counter,
+ .enable_vblank = qxl_noop_enable_vblank,
+ .disable_vblank = qxl_noop_disable_vblank,
+
+ .set_busid = drm_pci_set_busid,
.dumb_create = qxl_mode_dumb_create,
.dumb_map_offset = qxl_mode_dumb_mmap,
.debugfs_init = qxl_debugfs_init,
.debugfs_cleanup = qxl_debugfs_takedown,
#endif
+ .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+ .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+ .gem_prime_export = drm_gem_prime_export,
+ .gem_prime_import = drm_gem_prime_import,
+ .gem_prime_pin = qxl_gem_prime_pin,
+ .gem_prime_unpin = qxl_gem_prime_unpin,
+ .gem_prime_get_sg_table = qxl_gem_prime_get_sg_table,
+ .gem_prime_import_sg_table = qxl_gem_prime_import_sg_table,
+ .gem_prime_vmap = qxl_gem_prime_vmap,
+ .gem_prime_vunmap = qxl_gem_prime_vunmap,
+ .gem_prime_mmap = qxl_gem_prime_mmap,
.gem_free_object = qxl_gem_object_free,
.gem_open_object = qxl_gem_object_open,
.gem_close_object = qxl_gem_object_close,
* Definitions taken from spice-protocol, plus kernel driver specific bits.
*/
+#include <linux/fence.h>
#include <linux/workqueue.h>
#include <linux/firmware.h>
#include <linux/platform_device.h>
#include <ttm/ttm_placement.h>
#include <ttm/ttm_module.h>
+#include <drm/drm_gem.h>
+
/* just for ttm_validate_buffer */
#include <ttm/ttm_execbuf_util.h>
QXL_INTERRUPT_IO_CMD |\
QXL_INTERRUPT_CLIENT_MONITORS_CONFIG)
-struct qxl_fence {
- struct qxl_device *qdev;
- uint32_t num_active_releases;
- uint32_t *release_ids;
- struct radix_tree_root tree;
-};
-
struct qxl_bo {
/* Protected by gem.mutex */
struct list_head list;
/* Protected by tbo.reserved */
- u32 placements[3];
+ struct ttm_place placements[3];
struct ttm_placement placement;
struct ttm_buffer_object tbo;
struct ttm_bo_kmap_obj kmap;
unsigned pin_count;
void *kptr;
int type;
+
/* Constant after initialization */
struct drm_gem_object gem_base;
bool is_primary; /* is this now a primary surface */
bool hw_surf_alloc;
struct qxl_surface surf;
uint32_t surface_id;
- struct qxl_fence fence; /* per bo fence - list of releases */
struct qxl_release *surf_create;
};
#define gem_to_qxl_bo(gobj) container_of((gobj), struct qxl_bo, gem_base)
* spice-protocol/qxl_dev.h */
#define QXL_MAX_RES 96
struct qxl_release {
+ struct fence base;
+
int id;
int type;
uint32_t release_offset;
uint8_t slot_gen_bits;
uint64_t va_slot_mask;
+ spinlock_t release_lock;
struct idr release_idr;
+ uint32_t release_seqno;
spinlock_t release_idr_lock;
struct mutex async_io_mutex;
unsigned int last_sent_io_cmd;
int qxl_debugfs_init(struct drm_minor *minor);
void qxl_debugfs_takedown(struct drm_minor *minor);
+/* qxl_prime.c */
+int qxl_gem_prime_pin(struct drm_gem_object *obj);
+void qxl_gem_prime_unpin(struct drm_gem_object *obj);
+struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
+struct drm_gem_object *qxl_gem_prime_import_sg_table(
+ struct drm_device *dev, struct dma_buf_attachment *attach,
+ struct sg_table *sgt);
+void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_mmap(struct drm_gem_object *obj,
+ struct vm_area_struct *vma);
+
/* qxl_irq.c */
int qxl_irq_init(struct qxl_device *qdev);
irqreturn_t qxl_irq_handler(int irq, void *arg);
void qxl_surface_evict(struct qxl_device *qdev, struct qxl_bo *surf, bool freeing);
int qxl_update_surface(struct qxl_device *qdev, struct qxl_bo *surf);
-/* qxl_fence.c */
-void qxl_fence_add_release_locked(struct qxl_fence *qfence, uint32_t rel_id);
-int qxl_fence_remove_release(struct qxl_fence *qfence, uint32_t rel_id);
-int qxl_fence_init(struct qxl_device *qdev, struct qxl_fence *qfence);
-void qxl_fence_fini(struct qxl_fence *qfence);
-
#endif
struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
- struct qxl_fbdev *qfbdev = (struct qxl_fbdev *)helper;
+ struct qxl_fbdev *qfbdev =
+ container_of(helper, struct qxl_fbdev, helper);
int new_fb = 0;
int ret;
+++ /dev/null
-/*
- * Copyright 2013 Red Hat Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: Dave Airlie
- * Alon Levy
- */
-
-
-#include "qxl_drv.h"
-
-/* QXL fencing-
-
- When we submit operations to the GPU we pass a release reference to the GPU
- with them, the release reference is then added to the release ring when
- the GPU is finished with that particular operation and has removed it from
- its tree.
-
- So we have can have multiple outstanding non linear fences per object.
-
- From a TTM POV we only care if the object has any outstanding releases on
- it.
-
- we wait until all outstanding releases are processeed.
-
- sync object is just a list of release ids that represent that fence on
- that buffer.
-
- we just add new releases onto the sync object attached to the object.
-
- This currently uses a radix tree to store the list of release ids.
-
- For some reason every so often qxl hw fails to release, things go wrong.
-*/
-/* must be called with the fence lock held */
-void qxl_fence_add_release_locked(struct qxl_fence *qfence, uint32_t rel_id)
-{
- radix_tree_insert(&qfence->tree, rel_id, qfence);
- qfence->num_active_releases++;
-}
-
-int qxl_fence_remove_release(struct qxl_fence *qfence, uint32_t rel_id)
-{
- void *ret;
- int retval = 0;
- struct qxl_bo *bo = container_of(qfence, struct qxl_bo, fence);
-
- spin_lock(&bo->tbo.bdev->fence_lock);
-
- ret = radix_tree_delete(&qfence->tree, rel_id);
- if (ret == qfence)
- qfence->num_active_releases--;
- else {
- DRM_DEBUG("didn't find fence in radix tree for %d\n", rel_id);
- retval = -ENOENT;
- }
- spin_unlock(&bo->tbo.bdev->fence_lock);
- return retval;
-}
-
-
-int qxl_fence_init(struct qxl_device *qdev, struct qxl_fence *qfence)
-{
- qfence->qdev = qdev;
- qfence->num_active_releases = 0;
- INIT_RADIX_TREE(&qfence->tree, GFP_ATOMIC);
- return 0;
-}
-
-void qxl_fence_fini(struct qxl_fence *qfence)
-{
- kfree(qfence->release_ids);
- qfence->num_active_releases = 0;
-}
idr_init(&qdev->release_idr);
spin_lock_init(&qdev->release_idr_lock);
+ spin_lock_init(&qdev->release_lock);
idr_init(&qdev->surf_id_idr);
spin_lock_init(&qdev->surf_id_idr_lock);
if (qdev == NULL)
return 0;
+
+ drm_vblank_cleanup(dev);
+
qxl_modeset_fini(qdev);
qxl_device_fini(qdev);
if (r)
goto out;
+ r = drm_vblank_init(dev, 1);
+ if (r)
+ goto unload;
+
r = qxl_modeset_init(qdev);
- if (r) {
- qxl_driver_unload(dev);
- goto out;
- }
+ if (r)
+ goto unload;
drm_kms_helper_poll_init(qdev->ddev);
return 0;
+unload:
+ qxl_driver_unload(dev);
+
out:
kfree(qdev);
return r;
qdev = (struct qxl_device *)bo->gem_base.dev->dev_private;
qxl_surface_evict(qdev, bo, false);
- qxl_fence_fini(&bo->fence);
mutex_lock(&qdev->gem.mutex);
list_del_init(&bo->list);
mutex_unlock(&qdev->gem.mutex);
{
u32 c = 0;
u32 pflag = pinned ? TTM_PL_FLAG_NO_EVICT : 0;
+ unsigned i;
- qbo->placement.fpfn = 0;
- qbo->placement.lpfn = 0;
qbo->placement.placement = qbo->placements;
qbo->placement.busy_placement = qbo->placements;
if (domain == QXL_GEM_DOMAIN_VRAM)
- qbo->placements[c++] = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_VRAM | pflag;
+ qbo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_VRAM | pflag;
if (domain == QXL_GEM_DOMAIN_SURFACE)
- qbo->placements[c++] = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_PRIV0 | pflag;
+ qbo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_PRIV0 | pflag;
if (domain == QXL_GEM_DOMAIN_CPU)
- qbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM | pflag;
+ qbo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM | pflag;
if (!c)
- qbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ qbo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
qbo->placement.num_placement = c;
qbo->placement.num_busy_placement = c;
+ for (i = 0; i < c; ++i) {
+ qbo->placements[i].fpfn = 0;
+ qbo->placements[i].lpfn = 0;
+ }
}
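The conversion above tracks TTM's switch from bare u32 placement flags to struct ttm_place, which moves the page-frame range out of ttm_placement and into each individual placement; that is why the fpfn/lpfn initialisation becomes a per-entry loop. For context, the structure as this series assumes it (3.18-era TTM headers, layout quoted from memory, so treat it as an assumption):

	/* include/drm/ttm/ttm_placement.h, 3.18 era: */
	struct ttm_place {
		unsigned	fpfn;	/* first acceptable page frame number */
		unsigned	lpfn;	/* last acceptable pfn, 0 means no limit */
		uint32_t	flags;	/* caching/memory-type flags as before */
	};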
bo->type = domain;
bo->pin_count = pinned ? 1 : 0;
bo->surface_id = 0;
- qxl_fence_init(qdev, &bo->fence);
INIT_LIST_HEAD(&bo->list);
if (surf)
r = ttm_bo_init(&qdev->mman.bdev, &bo->tbo, size, type,
&bo->placement, 0, !kernel, NULL, size,
- NULL, &qxl_ttm_bo_destroy);
+ NULL, NULL, &qxl_ttm_bo_destroy);
if (unlikely(r != 0)) {
if (r != -ERESTARTSYS)
dev_err(qdev->dev,
if (bo->pin_count)
return 0;
for (i = 0; i < bo->placement.num_placement; i++)
- bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
+ bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
if (unlikely(r != 0))
dev_err(qdev->dev, "%p validate failed for unpin\n", bo);
}
return r;
}
- spin_lock(&bo->tbo.bdev->fence_lock);
if (mem_type)
*mem_type = bo->tbo.mem.mem_type;
- if (bo->tbo.sync_obj)
- r = ttm_bo_wait(&bo->tbo, true, true, no_wait);
- spin_unlock(&bo->tbo.bdev->fence_lock);
+
+ r = ttm_bo_wait(&bo->tbo, true, true, no_wait);
ttm_bo_unreserve(&bo->tbo);
return r;
}
--- /dev/null
+/*
+ * Copyright 2014 Canonical
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Andreas Pokorny
+ */
+
+#include "qxl_drv.h"
+
+/* Empty implementations: no other driver should exist for a virtual
+ * device that might share buffers with qxl, so PRIME is stubbed out */
+
+int qxl_gem_prime_pin(struct drm_gem_object *obj)
+{
+ WARN_ONCE(1, "not implemented");
+ return -ENOSYS;
+}
+
+void qxl_gem_prime_unpin(struct drm_gem_object *obj)
+{
+ WARN_ONCE(1, "not implemented");
+}
+
+
+struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj)
+{
+ WARN_ONCE(1, "not implemented");
+ return ERR_PTR(-ENOSYS);
+}
+
+struct drm_gem_object *qxl_gem_prime_import_sg_table(
+ struct drm_device *dev, struct dma_buf_attachment *attach,
+ struct sg_table *table)
+{
+ WARN_ONCE(1, "not implemented");
+ return ERR_PTR(-ENOSYS);
+}
+
+void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+{
+ WARN_ONCE(1, "not implemented");
+ return ERR_PTR(-ENOSYS);
+}
+
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+{
+ WARN_ONCE(1, "not implemented");
+}
+
+int qxl_gem_prime_mmap(struct drm_gem_object *obj,
+ struct vm_area_struct *area)
+{
+ WARN_ONCE(1, "not implemented");
+	return -ENOSYS;
+}
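Because every hook above is a stub, a cross-driver sharing attempt fails cleanly instead of corrupting state. A hedged userspace sketch of the path that would eventually hit them; the ioctl and struct are from the uapi <drm/drm.h>, while the wrapper name and error handling are illustrative:

	#include <drm/drm.h>	/* uapi header; install path may vary */
	#include <sys/ioctl.h>

	/* Illustrative only: export a GEM handle as a dma-buf fd. */
	static int try_prime_export(int drm_fd, __u32 gem_handle)
	{
		struct drm_prime_handle args = {
			.handle = gem_handle,
			.flags	= DRM_CLOEXEC,
			.fd	= -1,
		};

		/*
		 * Exporting may still yield an fd, but any importer that
		 * later attaches and maps it will hit the -ENOSYS stubs
		 * above.
		 */
		if (ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args))
			return -1;
		return args.fd;
	}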
*/
#include "qxl_drv.h"
#include "qxl_object.h"
+#include <trace/events/fence.h>
/*
* drawable cmd cache - allocate a bunch of VRAM pages, suballocate
static const int release_size_per_bo[] = { RELEASE_SIZE, SURFACE_RELEASE_SIZE, RELEASE_SIZE };
static const int releases_per_bo[] = { RELEASES_PER_BO, SURFACE_RELEASES_PER_BO, RELEASES_PER_BO };
+static const char *qxl_get_driver_name(struct fence *fence)
+{
+ return "qxl";
+}
+
+static const char *qxl_get_timeline_name(struct fence *fence)
+{
+ return "release";
+}
+
+static bool qxl_nop_signaling(struct fence *fence)
+{
+	/* fences are always automatically signaled, so just report that signaling is enabled */
+ return true;
+}
+
+static long qxl_fence_wait(struct fence *fence, bool intr, signed long timeout)
+{
+ struct qxl_device *qdev;
+ struct qxl_release *release;
+ int count = 0, sc = 0;
+ bool have_drawable_releases;
+ unsigned long cur, end = jiffies + timeout;
+
+ qdev = container_of(fence->lock, struct qxl_device, release_lock);
+ release = container_of(fence, struct qxl_release, base);
+ have_drawable_releases = release->type == QXL_RELEASE_DRAWABLE;
+
+retry:
+ sc++;
+
+ if (fence_is_signaled(fence))
+ goto signaled;
+
+ qxl_io_notify_oom(qdev);
+
+ for (count = 0; count < 11; count++) {
+ if (!qxl_queue_garbage_collect(qdev, true))
+ break;
+
+ if (fence_is_signaled(fence))
+ goto signaled;
+ }
+
+ if (fence_is_signaled(fence))
+ goto signaled;
+
+ if (have_drawable_releases || sc < 4) {
+ if (sc > 2)
+ /* back off */
+ usleep_range(500, 1000);
+
+ if (time_after(jiffies, end))
+ return 0;
+
+ if (have_drawable_releases && sc > 300) {
+ FENCE_WARN(fence, "failed to wait on release %d "
+ "after spincount %d\n",
+ fence->context & ~0xf0000000, sc);
+ goto signaled;
+ }
+ goto retry;
+ }
+	/*
+	 * Note: the original sync_obj_wait gave up after 3 spins when
+	 * have_drawable_releases was not set; the sc < 4 check above
+	 * preserves that behaviour.
+	 */
+
+signaled:
+ cur = jiffies;
+ if (time_after(cur, end))
+ return 0;
+ return end - cur;
+}
+
+static const struct fence_ops qxl_fence_ops = {
+ .get_driver_name = qxl_get_driver_name,
+ .get_timeline_name = qxl_get_timeline_name,
+ .enable_signaling = qxl_nop_signaling,
+ .wait = qxl_fence_wait,
+};
+
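With enable_signaling unconditionally returning true, nothing asynchronous ever signals these fences; the driver itself calls fence_signal() once the host has processed the release. A minimal sketch of the resulting lifecycle, mirroring the fence_init() and fence_signal() call sites elsewhere in this series (the function name is illustrative; fence_init() is the 3.17-era <linux/fence.h> API):

	#include <linux/fence.h>

	/* Illustrative only: one release fence's lifecycle under the ops above. */
	static void qxl_release_fence_lifecycle(struct qxl_device *qdev,
						struct qxl_release *release)
	{
		/* bind the fence to the device-wide lock and the seqno
		 * allocated in qxl_release_alloc(); the context encodes
		 * the release id */
		fence_init(&release->base, &qxl_fence_ops, &qdev->release_lock,
			   release->id | 0xf0000000, release->base.seqno);

		/* ... command submitted; host returns the release ... */

		fence_signal(&release->base);	/* wakes qxl_fence_wait()ers */
		fence_put(&release->base);	/* drop the initial reference */
	}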
static uint64_t
qxl_release_alloc(struct qxl_device *qdev, int type,
struct qxl_release **ret)
struct qxl_release *release;
int handle;
size_t size = sizeof(*release);
- int idr_ret;
release = kmalloc(size, GFP_KERNEL);
if (!release) {
DRM_ERROR("Out of memory\n");
return 0;
}
+ release->base.ops = NULL;
release->type = type;
release->release_offset = 0;
release->surface_release_id = 0;
idr_preload(GFP_KERNEL);
spin_lock(&qdev->release_idr_lock);
- idr_ret = idr_alloc(&qdev->release_idr, release, 1, 0, GFP_NOWAIT);
+ handle = idr_alloc(&qdev->release_idr, release, 1, 0, GFP_NOWAIT);
+ release->base.seqno = ++qdev->release_seqno;
spin_unlock(&qdev->release_idr_lock);
idr_preload_end();
- handle = idr_ret;
- if (idr_ret < 0)
- goto release_fail;
+ if (handle < 0) {
+ kfree(release);
+ *ret = NULL;
+ return handle;
+ }
*ret = release;
	QXL_INFO(qdev, "allocated release %d\n", handle);
release->id = handle;
-release_fail:
-
return handle;
}
+static void
+qxl_release_free_list(struct qxl_release *release)
+{
+ while (!list_empty(&release->bos)) {
+ struct qxl_bo_list *entry;
+ struct qxl_bo *bo;
+
+ entry = container_of(release->bos.next,
+ struct qxl_bo_list, tv.head);
+ bo = to_qxl_bo(entry->tv.bo);
+ qxl_bo_unref(&bo);
+ list_del(&entry->tv.head);
+ kfree(entry);
+ }
+}
+
void
qxl_release_free(struct qxl_device *qdev,
struct qxl_release *release)
{
- struct qxl_bo_list *entry, *tmp;
QXL_INFO(qdev, "release %d, type %d\n", release->id,
release->type);
if (release->surface_release_id)
qxl_surface_id_dealloc(qdev, release->surface_release_id);
- list_for_each_entry_safe(entry, tmp, &release->bos, tv.head) {
- struct qxl_bo *bo = to_qxl_bo(entry->tv.bo);
- QXL_INFO(qdev, "release %llx\n",
- drm_vma_node_offset_addr(&entry->tv.bo->vma_node)
- - DRM_FILE_OFFSET);
- qxl_fence_remove_release(&bo->fence, release->id);
- qxl_bo_unref(&bo);
- kfree(entry);
- }
spin_lock(&qdev->release_idr_lock);
idr_remove(&qdev->release_idr, release->id);
spin_unlock(&qdev->release_idr_lock);
- kfree(release);
+
+ if (release->base.ops) {
+ WARN_ON(list_empty(&release->bos));
+ qxl_release_free_list(release);
+
+ fence_signal(&release->base);
+ fence_put(&release->base);
+ } else {
+ qxl_release_free_list(release);
+ kfree(release);
+ }
}
static int qxl_release_bo_alloc(struct qxl_device *qdev,
qxl_bo_ref(bo);
entry->tv.bo = &bo->tbo;
+ entry->tv.shared = false;
list_add_tail(&entry->tv.head, &release->bos);
return 0;
}
return ret;
}
+ ret = reservation_object_reserve_shared(bo->tbo.resv);
+ if (ret)
+ return ret;
+
/* allocate a surface for reserved + validated buffers */
ret = qxl_bo_check_id(bo->gem_base.dev->dev_private, bo);
if (ret)
if (list_is_singular(&release->bos))
return 0;
- ret = ttm_eu_reserve_buffers(&release->ticket, &release->bos);
+ ret = ttm_eu_reserve_buffers(&release->ticket, &release->bos, !no_intr);
if (ret)
return ret;
/* stash the release after the create command */
idr_ret = qxl_release_alloc(qdev, QXL_RELEASE_SURFACE_CMD, release);
+ if (idr_ret < 0)
+ return idr_ret;
bo = qxl_bo_ref(to_qxl_bo(entry->tv.bo));
(*release)->release_offset = create_rel->release_offset + 64;
}
idr_ret = qxl_release_alloc(qdev, type, release);
+ if (idr_ret < 0) {
+ if (rbo)
+ *rbo = NULL;
+ return idr_ret;
+ }
mutex_lock(&qdev->release_mutex);
if (qdev->current_release_bo_offset[cur_idx] + 1 >= releases_per_bo[cur_idx]) {
void qxl_release_fence_buffer_objects(struct qxl_release *release)
{
- struct ttm_validate_buffer *entry;
struct ttm_buffer_object *bo;
struct ttm_bo_global *glob;
struct ttm_bo_device *bdev;
struct ttm_bo_driver *driver;
struct qxl_bo *qbo;
+ struct ttm_validate_buffer *entry;
+ struct qxl_device *qdev;
	/* if only one object is on the release it is the release itself;
	   since these objects are pinned there is no need to reserve */
- if (list_is_singular(&release->bos))
+ if (list_is_singular(&release->bos) || list_empty(&release->bos))
return;
bo = list_first_entry(&release->bos, struct ttm_validate_buffer, head)->bo;
bdev = bo->bdev;
+ qdev = container_of(bdev, struct qxl_device, mman.bdev);
+
+	/*
+	 * Since we never really allocated a fence context and we don't want
+	 * to conflict with other contexts, set the highest bits. This will
+	 * break if we ever really allow exporting of dma-bufs.
+	 */
+ fence_init(&release->base, &qxl_fence_ops, &qdev->release_lock,
+ release->id | 0xf0000000, release->base.seqno);
+ trace_fence_emit(&release->base);
+
driver = bdev->driver;
glob = bo->glob;
spin_lock(&glob->lru_lock);
- spin_lock(&bdev->fence_lock);
list_for_each_entry(entry, &release->bos, head) {
bo = entry->bo;
qbo = to_qxl_bo(bo);
- if (!entry->bo->sync_obj)
- entry->bo->sync_obj = &qbo->fence;
-
- qxl_fence_add_release_locked(&qbo->fence, release->id);
-
+ reservation_object_add_shared_fence(bo->resv, &release->base);
ttm_bo_add_to_lru(bo);
__ttm_bo_unreserve(bo);
- entry->reserved = false;
}
- spin_unlock(&bdev->fence_lock);
spin_unlock(&glob->lru_lock);
ww_acquire_fini(&release->ticket);
}
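With the release fence attached to every buffer's reservation object above, per-BO waiting no longer needs driver bookkeeping: TTM, or anyone else, can wait on the reservation object directly. A minimal sketch, assuming the 3.17-era reservation API; the helper name qxl_bo_wait_idle is illustrative:

	#include <linux/reservation.h>

	/* Illustrative only: block on every shared fence attached to the
	 * BO's reservation object by the function above. */
	static long qxl_bo_wait_idle(struct qxl_bo *bo)
	{
		long ret;

		ret = reservation_object_wait_timeout_rcu(bo->tbo.resv,
							  true	/* wait_all */,
							  true	/* interruptible */,
							  MAX_SCHEDULE_TIMEOUT);
		return ret < 0 ? ret : 0;
	}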
if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET)) {
pr_info("%s: vma->vm_pgoff (%ld) < DRM_FILE_PAGE_OFFSET\n",
__func__, vma->vm_pgoff);
- return drm_mmap(filp, vma);
+ return -EINVAL;
}
file_priv = filp->private_data;
struct ttm_placement *placement)
{
struct qxl_bo *qbo;
- static u32 placements = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ static struct ttm_place placements = {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM
+ };
if (!qxl_ttm_bo_is_qxl_bo(bo)) {
- placement->fpfn = 0;
- placement->lpfn = 0;
placement->placement = &placements;
placement->busy_placement = &placements;
placement->num_placement = 1;
return ttm_bo_move_memcpy(bo, evict, no_wait_gpu, new_mem);
}
-
-static int qxl_sync_obj_wait(void *sync_obj,
- bool lazy, bool interruptible)
-{
- struct qxl_fence *qfence = (struct qxl_fence *)sync_obj;
- int count = 0, sc = 0;
- struct qxl_bo *bo = container_of(qfence, struct qxl_bo, fence);
-
- if (qfence->num_active_releases == 0)
- return 0;
-
-retry:
- if (sc == 0) {
- if (bo->type == QXL_GEM_DOMAIN_SURFACE)
- qxl_update_surface(qfence->qdev, bo);
- } else if (sc >= 1) {
- qxl_io_notify_oom(qfence->qdev);
- }
-
- sc++;
-
- for (count = 0; count < 10; count++) {
- bool ret;
- ret = qxl_queue_garbage_collect(qfence->qdev, true);
- if (ret == false)
- break;
-
- if (qfence->num_active_releases == 0)
- return 0;
- }
-
- if (qfence->num_active_releases) {
- bool have_drawable_releases = false;
- void **slot;
- struct radix_tree_iter iter;
- int release_id;
-
- radix_tree_for_each_slot(slot, &qfence->tree, &iter, 0) {
- struct qxl_release *release;
-
- release_id = iter.index;
- release = qxl_release_from_id_locked(qfence->qdev, release_id);
- if (release == NULL)
- continue;
-
- if (release->type == QXL_RELEASE_DRAWABLE)
- have_drawable_releases = true;
- }
-
- qxl_queue_garbage_collect(qfence->qdev, true);
-
- if (have_drawable_releases || sc < 4) {
- if (sc > 2)
- /* back off */
- usleep_range(500, 1000);
- if (have_drawable_releases && sc > 300) {
- WARN(1, "sync obj %d still has outstanding releases %d %d %d %ld %d\n", sc, bo->surface_id, bo->is_primary, bo->pin_count, (unsigned long)bo->gem_base.size, qfence->num_active_releases);
- return -EBUSY;
- }
- goto retry;
- }
- }
- return 0;
-}
-
-static int qxl_sync_obj_flush(void *sync_obj)
-{
- return 0;
-}
-
-static void qxl_sync_obj_unref(void **sync_obj)
-{
- *sync_obj = NULL;
-}
-
-static void *qxl_sync_obj_ref(void *sync_obj)
-{
- return sync_obj;
-}
-
-static bool qxl_sync_obj_signaled(void *sync_obj)
-{
- struct qxl_fence *qfence = (struct qxl_fence *)sync_obj;
- return (qfence->num_active_releases == 0);
-}
-
static void qxl_bo_move_notify(struct ttm_buffer_object *bo,
struct ttm_mem_reg *new_mem)
{
.verify_access = &qxl_verify_access,
.io_mem_reserve = &qxl_ttm_io_mem_reserve,
.io_mem_free = &qxl_ttm_io_mem_free,
- .sync_obj_signaled = &qxl_sync_obj_signaled,
- .sync_obj_wait = &qxl_sync_obj_wait,
- .sync_obj_flush = &qxl_sync_obj_flush,
- .sync_obj_unref = &qxl_sync_obj_unref,
- .sync_obj_ref = &qxl_sync_obj_ref,
.move_notify = &qxl_bo_move_notify,
};
-
-
int qxl_ttm_init(struct qxl_device *qdev)
{
int r;
dev_priv->span_pitch_offset_c = (((dev_priv->depth_pitch / 8) << 21) |
(dev_priv->span_offset >> 5));
- dev_priv->sarea = drm_getsarea(dev);
+ dev_priv->sarea = drm_legacy_getsarea(dev);
if (!dev_priv->sarea) {
DRM_ERROR("could not find sarea!\n");
dev->dev_private = (void *)dev_priv;
return -EINVAL;
}
- dev_priv->mmio = drm_core_findmap(dev, init->mmio_offset);
+ dev_priv->mmio = drm_legacy_findmap(dev, init->mmio_offset);
if (!dev_priv->mmio) {
DRM_ERROR("could not find mmio region!\n");
dev->dev_private = (void *)dev_priv;
r128_do_cleanup_cce(dev);
return -EINVAL;
}
- dev_priv->cce_ring = drm_core_findmap(dev, init->ring_offset);
+ dev_priv->cce_ring = drm_legacy_findmap(dev, init->ring_offset);
if (!dev_priv->cce_ring) {
DRM_ERROR("could not find cce ring region!\n");
dev->dev_private = (void *)dev_priv;
r128_do_cleanup_cce(dev);
return -EINVAL;
}
- dev_priv->ring_rptr = drm_core_findmap(dev, init->ring_rptr_offset);
+ dev_priv->ring_rptr = drm_legacy_findmap(dev, init->ring_rptr_offset);
if (!dev_priv->ring_rptr) {
DRM_ERROR("could not find ring read pointer!\n");
dev->dev_private = (void *)dev_priv;
return -EINVAL;
}
dev->agp_buffer_token = init->buffers_offset;
- dev->agp_buffer_map = drm_core_findmap(dev, init->buffers_offset);
+ dev->agp_buffer_map = drm_legacy_findmap(dev, init->buffers_offset);
if (!dev->agp_buffer_map) {
DRM_ERROR("could not find dma buffer region!\n");
dev->dev_private = (void *)dev_priv;
if (!dev_priv->is_pci) {
dev_priv->agp_textures =
- drm_core_findmap(dev, init->agp_textures_offset);
+ drm_legacy_findmap(dev, init->agp_textures_offset);
if (!dev_priv->agp_textures) {
DRM_ERROR("could not find agp texture region!\n");
dev->dev_private = (void *)dev_priv;
#if __OS_HAS_AGP
if (!dev_priv->is_pci) {
- drm_core_ioremap_wc(dev_priv->cce_ring, dev);
- drm_core_ioremap_wc(dev_priv->ring_rptr, dev);
- drm_core_ioremap_wc(dev->agp_buffer_map, dev);
+ drm_legacy_ioremap_wc(dev_priv->cce_ring, dev);
+ drm_legacy_ioremap_wc(dev_priv->ring_rptr, dev);
+ drm_legacy_ioremap_wc(dev->agp_buffer_map, dev);
if (!dev_priv->cce_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev->agp_buffer_map->handle) {
#if __OS_HAS_AGP
if (!dev_priv->is_pci) {
if (dev_priv->cce_ring != NULL)
- drm_core_ioremapfree(dev_priv->cce_ring, dev);
+ drm_legacy_ioremapfree(dev_priv->cce_ring, dev);
if (dev_priv->ring_rptr != NULL)
- drm_core_ioremapfree(dev_priv->ring_rptr, dev);
+ drm_legacy_ioremapfree(dev_priv->ring_rptr, dev);
if (dev->agp_buffer_map != NULL) {
- drm_core_ioremapfree(dev->agp_buffer_map, dev);
+ drm_legacy_ioremapfree(dev->agp_buffer_map, dev);
dev->agp_buffer_map = NULL;
}
} else
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
- .mmap = drm_mmap,
+ .mmap = drm_legacy_mmap,
.poll = drm_poll,
#ifdef CONFIG_COMPAT
.compat_ioctl = r128_compat_ioctl,
.load = r128_driver_load,
.preclose = r128_driver_preclose,
.lastclose = r128_driver_lastclose,
+ .set_busid = drm_pci_set_busid,
.get_vblank_counter = r128_get_vblank_counter,
.enable_vblank = r128_enable_vblank,
.disable_vblank = r128_disable_vblank,
#ifndef __R128_DRV_H__
#define __R128_DRV_H__
+#include <drm/ati_pcigart.h>
+#include <drm/drm_legacy.h>
+
/* General customization:
*/
#define DRIVER_AUTHOR "Gareth Hughes, VA Linux Systems Inc."
# add UMS driver
radeon-$(CONFIG_DRM_RADEON_UMS)+= radeon_cp.o radeon_state.o radeon_mem.o \
- radeon_irq.o r300_cmdbuf.o r600_cp.o r600_blit.o
+ radeon_irq.o r300_cmdbuf.o r600_cp.o r600_blit.o drm_buffer.o
# add KMS driver
radeon-y += radeon_device.o radeon_asic.o radeon_kms.o \
radeon_cs.o radeon_bios.o radeon_benchmark.o r100.o r300.o r420.o \
rs400.o rs600.o rs690.o rv515.o r520.o r600.o rv770.o radeon_test.o \
r200.o radeon_legacy_tv.o r600_cs.o r600_blit_shaders.o \
- radeon_pm.o atombios_dp.o r600_audio.o r600_hdmi.o dce3_1_afmt.o \
+ radeon_pm.o atombios_dp.o r600_hdmi.o dce3_1_afmt.o \
evergreen.o evergreen_cs.o evergreen_blit_shaders.o \
evergreen_hdmi.o radeon_trace_points.o ni.o cayman_blit_shaders.o \
atombios_encoders.o radeon_semaphore.o radeon_sa.o atombios_i2c.o si.o \
- si_blit_shaders.o radeon_prime.o radeon_uvd.o cik.o cik_blit_shaders.o \
+ si_blit_shaders.o radeon_prime.o cik.o cik_blit_shaders.o \
r600_dpm.o rs780_dpm.o rv6xx_dpm.o rv770_dpm.o rv730_dpm.o rv740_dpm.o \
rv770_smc.o cypress_dpm.o btc_dpm.o sumo_dpm.o sumo_smc.o trinity_dpm.o \
trinity_smc.o ni_dpm.o si_smc.o si_dpm.o kv_smc.o kv_dpm.o ci_smc.o \
- ci_dpm.o dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o
+ ci_dpm.o dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o radeon_mn.o
# add async DMA block
radeon-y += \
/***** general DP utility functions *****/
-#define DP_VOLTAGE_MAX DP_TRAIN_VOLTAGE_SWING_1200
-#define DP_PRE_EMPHASIS_MAX DP_TRAIN_PRE_EMPHASIS_9_5
+#define DP_VOLTAGE_MAX DP_TRAIN_VOLTAGE_SWING_LEVEL_3
+#define DP_PRE_EMPHASIS_MAX DP_TRAIN_PRE_EMPH_LEVEL_3
static void dp_get_adjust_train(u8 link_status[DP_LINK_STATUS_SIZE],
int lane_count,
u8 msg[DP_DPCD_SIZE];
int ret;
- char dpcd_hex_dump[DP_DPCD_SIZE * 3];
-
ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg,
DP_DPCD_SIZE);
if (ret > 0) {
memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE);
- hex_dump_to_buffer(dig_connector->dpcd, sizeof(dig_connector->dpcd),
- 32, 1, dpcd_hex_dump, sizeof(dpcd_hex_dump), false);
- DRM_DEBUG_KMS("DPCD: %s\n", dpcd_hex_dump);
+ DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dig_connector->dpcd),
+ dig_connector->dpcd);
radeon_dp_probe_oui(radeon_connector);
bool radeon_atom_get_tv_timings(struct radeon_device *rdev, int index,
struct drm_display_mode *mode);
-
-static inline bool radeon_encoder_is_digital(struct drm_encoder *encoder)
-{
- struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
- switch (radeon_encoder->encoder_id) {
- case ENCODER_OBJECT_ID_INTERNAL_LVDS:
- case ENCODER_OBJECT_ID_INTERNAL_TMDS1:
- case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
- case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
- case ENCODER_OBJECT_ID_INTERNAL_DVO1:
- case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
- case ENCODER_OBJECT_ID_INTERNAL_DDI:
- case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
- case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
- case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
- case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
- case ENCODER_OBJECT_ID_INTERNAL_UNIPHY3:
- return true;
- default:
- return false;
- }
-}
-
static bool radeon_atom_mode_fixup(struct drm_encoder *encoder,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{ 25000, 30000, RADEON_SCLK_UP }
};
-void btc_get_max_clock_from_voltage_dependency_table(struct radeon_clock_voltage_dependency_table *table,
- u32 *max_clock)
-{
- u32 i, clock = 0;
-
- if ((table == NULL) || (table->count == 0)) {
- *max_clock = clock;
- return;
- }
-
- for (i = 0; i < table->count; i++) {
- if (clock < table->entries[i].clk)
- clock = table->entries[i].clk;
- }
- *max_clock = clock;
-}
-
void btc_apply_voltage_dependency_rules(struct radeon_clock_voltage_dependency_table *table,
u32 clock, u16 max_voltage, u16 *voltage)
{
bool disable_mclk_switching;
u32 mclk, sclk;
u16 vddc, vddci;
- u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
btc_dpm_vblank_too_short(rdev))
ps->low.vddci = max_limits->vddci;
}
- /* limit clocks to max supported clocks based on voltage dependency tables */
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
- &max_sclk_vddc);
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
- &max_mclk_vddci);
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
- &max_mclk_vddc);
-
- if (max_sclk_vddc) {
- if (ps->low.sclk > max_sclk_vddc)
- ps->low.sclk = max_sclk_vddc;
- if (ps->medium.sclk > max_sclk_vddc)
- ps->medium.sclk = max_sclk_vddc;
- if (ps->high.sclk > max_sclk_vddc)
- ps->high.sclk = max_sclk_vddc;
- }
- if (max_mclk_vddci) {
- if (ps->low.mclk > max_mclk_vddci)
- ps->low.mclk = max_mclk_vddci;
- if (ps->medium.mclk > max_mclk_vddci)
- ps->medium.mclk = max_mclk_vddci;
- if (ps->high.mclk > max_mclk_vddci)
- ps->high.mclk = max_mclk_vddci;
- }
- if (max_mclk_vddc) {
- if (ps->low.mclk > max_mclk_vddc)
- ps->low.mclk = max_mclk_vddc;
- if (ps->medium.mclk > max_mclk_vddc)
- ps->medium.mclk = max_mclk_vddc;
- if (ps->high.mclk > max_mclk_vddc)
- ps->high.mclk = max_mclk_vddc;
- }
-
/* XXX validate the min clocks required for display */
if (disable_mclk_switching) {
struct rv7xx_pl *pl);
void btc_apply_voltage_dependency_rules(struct radeon_clock_voltage_dependency_table *table,
u32 clock, u16 max_voltage, u16 *voltage);
-void btc_get_max_clock_from_voltage_dependency_table(struct radeon_clock_voltage_dependency_table *table,
- u32 *max_clock);
void btc_apply_voltage_delta_rules(struct radeon_device *rdev,
u16 max_vddc, u16 max_vddci,
u16 *vddc, u16 *vddci);
};
extern u8 rv770_get_memory_module_index(struct radeon_device *rdev);
-extern void btc_get_max_clock_from_voltage_dependency_table(struct radeon_clock_voltage_dependency_table *table,
- u32 *max_clock);
extern int ni_copy_and_switch_arb_sets(struct radeon_device *rdev,
u32 arb_freq_src, u32 arb_freq_dest);
extern u8 si_get_ddr3_mclk_frequency_ratio(u32 memory_clock);
struct radeon_clock_and_voltage_limits *max_limits;
bool disable_mclk_switching;
u32 sclk, mclk;
- u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
int i;
if (rps->vce_active) {
}
}
- /* limit clocks to max supported clocks based on voltage dependency tables */
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
- &max_sclk_vddc);
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
- &max_mclk_vddci);
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
- &max_mclk_vddc);
-
- for (i = 0; i < ps->performance_level_count; i++) {
- if (max_sclk_vddc) {
- if (ps->performance_levels[i].sclk > max_sclk_vddc)
- ps->performance_levels[i].sclk = max_sclk_vddc;
- }
- if (max_mclk_vddci) {
- if (ps->performance_levels[i].mclk > max_mclk_vddci)
- ps->performance_levels[i].mclk = max_mclk_vddci;
- }
- if (max_mclk_vddc) {
- if (ps->performance_levels[i].mclk > max_mclk_vddc)
- ps->performance_levels[i].mclk = max_mclk_vddc;
- }
- }
-
/* XXX validate the min clocks required for display */
if (disable_mclk_switching) {
WREG32_SMC(CG_THERMAL_CTRL, tmp);
#endif
+ rdev->pm.dpm.thermal.min_temp = low_temp;
+ rdev->pm.dpm.thermal.max_temp = high_temp;
+
return 0;
}
void ci_dpm_debugfs_print_current_performance_level(struct radeon_device *rdev,
struct seq_file *m)
{
+ struct ci_power_info *pi = ci_get_pi(rdev);
+ struct radeon_ps *rps = &pi->current_rps;
u32 sclk = ci_get_average_sclk_freq(rdev);
u32 mclk = ci_get_average_mclk_freq(rdev);
+ seq_printf(m, "uvd %sabled\n", pi->uvd_enabled ? "en" : "dis");
+ seq_printf(m, "vce %sabled\n", rps->vce_active ? "en" : "dis");
seq_printf(m, "power level avg sclk: %u mclk: %u\n",
sclk, mclk);
}
u32 mc_shared_chmap, mc_arb_ramcfg;
u32 hdp_host_path_cntl;
u32 tmp;
- int i, j, k;
+ int i, j;
switch (rdev->family) {
case CHIP_BONAIRE:
(rdev->pdev->device == 0x130B) ||
(rdev->pdev->device == 0x130E) ||
(rdev->pdev->device == 0x1315) ||
+ (rdev->pdev->device == 0x1318) ||
(rdev->pdev->device == 0x131B)) {
rdev->config.cik.max_cu_per_sh = 4;
rdev->config.cik.max_backends_per_se = 1;
rdev->config.cik.max_sh_per_se,
rdev->config.cik.max_backends_per_se);
+ rdev->config.cik.active_cus = 0;
for (i = 0; i < rdev->config.cik.max_shader_engines; i++) {
for (j = 0; j < rdev->config.cik.max_sh_per_se; j++) {
- for (k = 0; k < rdev->config.cik.max_cu_per_sh; k++) {
- rdev->config.cik.active_cus +=
- hweight32(cik_get_cu_active_bitmap(rdev, i, j));
- }
+ rdev->config.cik.active_cus +=
+ hweight32(cik_get_cu_active_bitmap(rdev, i, j));
}
}
radeon_ring_write(ring, PACKET3(PACKET3_SET_UCONFIG_REG, 1));
radeon_ring_write(ring, ((scratch - PACKET3_SET_UCONFIG_REG_START) >> 2));
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = RREG32(scratch);
radeon_ring_write(ring, 0);
}
+/**
+ * cik_semaphore_ring_emit - emit a semaphore on the CP ring
+ *
+ * @rdev: radeon_device pointer
+ * @ring: radeon ring buffer object
+ * @semaphore: radeon semaphore object
+ * @emit_wait: Is this a semaphore wait?
+ *
+ * Emits a semaphore signal/wait packet to the CP ring and prevents the PFP
+ * from running ahead of semaphore waits.
+ */
bool cik_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
radeon_ring_write(ring, lower_32_bits(addr));
radeon_ring_write(ring, (upper_32_bits(addr) & 0xffff) | sel);
+ if (emit_wait && ring->idx == RADEON_RING_TYPE_GFX_INDEX) {
+ /* Prevent the PFP from running ahead of the semaphore wait */
+ radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+ radeon_ring_write(ring, 0x0);
+ }
+
return true;
}
* @src_offset: src GPU address
* @dst_offset: dst GPU address
* @num_gpu_pages: number of GPU pages to xfer
- * @fence: radeon fence object
+ * @resv: reservation object to sync to
*
* Copy GPU paging using the CP DMA engine (CIK+).
* Used by the radeon ttm implementation to move pages if
* registered as the asic copy callback.
*/
-int cik_copy_cpdma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence)
+struct radeon_fence *cik_copy_cpdma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv)
{
struct radeon_semaphore *sem = NULL;
+ struct radeon_fence *fence;
int ring_index = rdev->asic->copy.blit_ring_index;
struct radeon_ring *ring = &rdev->ring[ring_index];
u32 size_in_bytes, cur_size_in_bytes, control;
r = radeon_semaphore_create(rdev, &sem);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
- return r;
+ return ERR_PTR(r);
}
size_in_bytes = (num_gpu_pages << RADEON_GPU_PAGE_SHIFT);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_resv(rdev, sem, resv, false);
radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
dst_offset += cur_size_in_bytes;
}
- r = radeon_fence_emit(rdev, fence, ring->idx);
+ r = radeon_fence_emit(rdev, &fence, ring->idx);
if (r) {
radeon_ring_unlock_undo(rdev, ring);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_ring_unlock_commit(rdev, ring);
- radeon_semaphore_free(rdev, &sem, *fence);
+ radeon_ring_unlock_commit(rdev, ring, false);
+ radeon_semaphore_free(rdev, &sem, fence);
- return r;
+ return fence;
}
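The same conversion, from an int return plus a radeon_fence ** out-parameter to a directly returned fence or ERR_PTR, repeats for every copy callback in this series (cik_copy_dma, evergreen_copy_dma, r100_copy_blit and r200_copy_dma below). An illustrative caller-side sketch of the new contract; the variable names and surrounding context are assumptions:

	struct radeon_fence *fence;
	int r;

	/* copy callbacks now return the fence (or an ERR_PTR) directly */
	fence = cik_copy_cpdma(rdev, src, dst, num_gpu_pages, bo->tbo.resv);
	if (IS_ERR(fence))
		return PTR_ERR(fence);
	r = radeon_fence_wait(fence, false /* not interruptible */);
	radeon_fence_unref(&fence);
	return r;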
/*
ib.ptr[1] = ((scratch - PACKET3_SET_UCONFIG_REG_START) >> 2);
ib.ptr[2] = 0xDEADBEEF;
ib.length_dw = 3;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
radeon_scratch_free(rdev, scratch);
radeon_ib_free(rdev, &ib);
WREG32(CP_PFP_UCODE_ADDR, 0);
for (i = 0; i < fw_size; i++)
WREG32(CP_PFP_UCODE_DATA, le32_to_cpup(fw_data++));
- WREG32(CP_PFP_UCODE_ADDR, 0);
+ WREG32(CP_PFP_UCODE_ADDR, le32_to_cpu(pfp_hdr->header.ucode_version));
/* CE */
fw_data = (const __le32 *)
WREG32(CP_CE_UCODE_ADDR, 0);
for (i = 0; i < fw_size; i++)
WREG32(CP_CE_UCODE_DATA, le32_to_cpup(fw_data++));
- WREG32(CP_CE_UCODE_ADDR, 0);
+ WREG32(CP_CE_UCODE_ADDR, le32_to_cpu(ce_hdr->header.ucode_version));
/* ME */
fw_data = (const __be32 *)
WREG32(CP_ME_RAM_WADDR, 0);
for (i = 0; i < fw_size; i++)
WREG32(CP_ME_RAM_DATA, le32_to_cpup(fw_data++));
- WREG32(CP_ME_RAM_WADDR, 0);
+ WREG32(CP_ME_RAM_WADDR, le32_to_cpu(me_hdr->header.ucode_version));
+ WREG32(CP_ME_RAM_RADDR, le32_to_cpu(me_hdr->header.ucode_version));
} else {
const __be32 *fw_data;
WREG32(CP_ME_RAM_WADDR, 0);
}
- WREG32(CP_PFP_UCODE_ADDR, 0);
- WREG32(CP_CE_UCODE_ADDR, 0);
- WREG32(CP_ME_RAM_WADDR, 0);
- WREG32(CP_ME_RAM_RADDR, 0);
return 0;
}
radeon_ring_write(ring, 0x0000000e); /* VGT_VERTEX_REUSE_BLOCK_CNTL */
radeon_ring_write(ring, 0x00000010); /* VGT_OUT_DEALLOC_CNTL */
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
return 0;
}
WREG32(CP_MEC_ME1_UCODE_ADDR, 0);
for (i = 0; i < fw_size; i++)
WREG32(CP_MEC_ME1_UCODE_DATA, le32_to_cpup(fw_data++));
- WREG32(CP_MEC_ME1_UCODE_ADDR, 0);
+ WREG32(CP_MEC_ME1_UCODE_ADDR, le32_to_cpu(mec_hdr->header.ucode_version));
/* MEC2 */
if (rdev->family == CHIP_KAVERI) {
WREG32(CP_MEC_ME2_UCODE_ADDR, 0);
for (i = 0; i < fw_size; i++)
WREG32(CP_MEC_ME2_UCODE_DATA, le32_to_cpup(fw_data++));
- WREG32(CP_MEC_ME2_UCODE_ADDR, 0);
+ WREG32(CP_MEC_ME2_UCODE_ADDR, le32_to_cpu(mec2_hdr->header.ucode_version));
}
} else {
const __be32 *fw_data;
r = radeon_bo_create(rdev,
rdev->mec.num_mec *rdev->mec.num_pipe * MEC_HPD_SIZE * 2,
PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_GTT, 0, NULL,
+ RADEON_GEM_DOMAIN_GTT, 0, NULL, NULL,
&rdev->mec.hpd_eop_obj);
if (r) {
dev_warn(rdev->dev, "(%d) create HDP EOP bo failed\n", r);
sizeof(struct bonaire_mqd),
PAGE_SIZE, true,
RADEON_GEM_DOMAIN_GTT, 0, NULL,
- &rdev->ring[idx].mqd_obj);
+ NULL, &rdev->ring[idx].mqd_obj);
if (r) {
dev_warn(rdev->dev, "(%d) create MQD bo failed\n", r);
return r;
WREG32(0x15D8, 0);
WREG32(0x15DC, 0);
- /* empty context1-15 */
- /* FIXME start with 4G, once using 2 level pt switch to full
- * vm size space
- */
+ /* restore context1-15 */
/* set vm size, must be a multiple of 4 */
WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
for (i = 1; i < 16; i++) {
if (i < 8)
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
- rdev->gart.table_addr >> 12);
+ rdev->vm_manager.saved_table_addr[i]);
else
WREG32(VM_CONTEXT8_PAGE_TABLE_BASE_ADDR + ((i - 8) << 2),
- rdev->gart.table_addr >> 12);
+ rdev->vm_manager.saved_table_addr[i]);
}
/* enable context1-15 */
*/
static void cik_pcie_gart_disable(struct radeon_device *rdev)
{
+ unsigned i;
+
+ for (i = 1; i < 16; ++i) {
+ uint32_t reg;
+ if (i < 8)
+ reg = VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2);
+ else
+ reg = VM_CONTEXT8_PAGE_TABLE_BASE_ADDR + ((i - 8) << 2);
+ rdev->vm_manager.saved_table_addr[i] = RREG32(reg);
+ }
+
/* Disable all tables */
WREG32(VM_CONTEXT0_CNTL, 0);
WREG32(VM_CONTEXT1_CNTL, 0);
/* update SH_MEM_* regs */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(usepfp) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, SRBM_GFX_CNTL >> 2);
radeon_ring_write(ring, 0);
radeon_ring_write(ring, VMID(vm->id));
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 6));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(usepfp) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, SH_MEM_BASES >> 2);
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 0); /* SH_MEM_APE1_LIMIT */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(usepfp) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, SRBM_GFX_CNTL >> 2);
radeon_ring_write(ring, 0);
/* bits 0-15 are the VM contexts0-15 */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(usepfp) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, VM_INVALIDATE_REQUEST >> 2);
radeon_ring_write(ring, 0);
WREG32(RLC_GPM_UCODE_ADDR, 0);
for (i = 0; i < size; i++)
WREG32(RLC_GPM_UCODE_DATA, le32_to_cpup(fw_data++));
- WREG32(RLC_GPM_UCODE_ADDR, 0);
+ WREG32(RLC_GPM_UCODE_ADDR, le32_to_cpu(hdr->header.ucode_version));
} else {
const __be32 *fw_data;
}
if (queue_hotplug)
schedule_work(&rdev->hotplug_work);
- if (queue_reset)
- schedule_work(&rdev->reset_work);
+ if (queue_reset) {
+ rdev->needs_reset = true;
+ wake_up_all(&rdev->fence_queue);
+ }
if (queue_thermal)
schedule_work(&rdev->pm.dpm.thermal.work);
rdev->ih.rptr = rptr;
int ret, i;
u16 tmp16;
+ if (pci_is_root_bus(rdev->pdev->bus))
+ return;
+
if (radeon_pcie_gen2 == 0)
return;
if (orig != data)
WREG32_PCIE_PORT(PCIE_LC_LINK_WIDTH_CNTL, data);
- if (!disable_clkreq) {
+ if (!disable_clkreq &&
+ !pci_is_root_bus(rdev->pdev->bus)) {
struct pci_dev *root = rdev->pdev->bus->self;
u32 lnkcap;
* @src_offset: src GPU address
* @dst_offset: dst GPU address
* @num_gpu_pages: number of GPU pages to xfer
- * @fence: radeon fence object
+ * @resv: reservation object to sync to
*
* Copy GPU paging using the DMA engine (CIK).
* Used by the radeon ttm implementation to move pages if
* registered as the asic copy callback.
*/
-int cik_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence)
+struct radeon_fence *cik_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv)
{
struct radeon_semaphore *sem = NULL;
+ struct radeon_fence *fence;
int ring_index = rdev->asic->copy.dma_ring_index;
struct radeon_ring *ring = &rdev->ring[ring_index];
u32 size_in_bytes, cur_size_in_bytes;
r = radeon_semaphore_create(rdev, &sem);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
- return r;
+ return ERR_PTR(r);
}
size_in_bytes = (num_gpu_pages << RADEON_GPU_PAGE_SHIFT);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_resv(rdev, sem, resv, false);
radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
dst_offset += cur_size_in_bytes;
}
- r = radeon_fence_emit(rdev, fence, ring->idx);
+ r = radeon_fence_emit(rdev, &fence, ring->idx);
if (r) {
radeon_ring_unlock_undo(rdev, ring);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_ring_unlock_commit(rdev, ring);
- radeon_semaphore_free(rdev, &sem, *fence);
+ radeon_ring_unlock_commit(rdev, ring, false);
+ radeon_semaphore_free(rdev, &sem, fence);
- return r;
+ return fence;
}
/**
radeon_ring_write(ring, upper_32_bits(rdev->vram_scratch.gpu_addr));
radeon_ring_write(ring, 1); /* number of DWs to follow */
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = readl(ptr);
ib.ptr[4] = 0xDEADBEEF;
ib.length_dw = 5;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
radeon_ib_free(rdev, &ib);
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
/* disable audio prior to setting up hw */
dig->afmt->pin = r600_audio_get_pin(rdev);
- r600_audio_enable(rdev, dig->afmt->pin, false);
+ r600_audio_enable(rdev, dig->afmt->pin, 0);
r600_audio_set_dto(encoder, mode->clock);
r600_hdmi_audio_workaround(encoder);
/* enable audio after to setting up hw */
- r600_audio_enable(rdev, dig->afmt->pin, true);
+ r600_audio_enable(rdev, dig->afmt->pin, 0xf);
}
void dce6_audio_enable(struct radeon_device *rdev,
struct r600_audio_pin *pin,
- bool enable)
+ u8 enable_mask)
{
if (!pin)
return;
- WREG32_ENDPOINT(pin->offset, AZ_F0_CODEC_PIN_CONTROL_HOTPLUG_CONTROL,
- enable ? AUDIO_ENABLED : 0);
+ WREG32_ENDPOINT(pin->offset, AZ_F0_CODEC_PIN_CONTROL_HOT_PLUG_CONTROL,
+ enable_mask ? AUDIO_ENABLED : 0);
}
static const u32 pin_offsets[7] =
*/
#include <linux/export.h>
-#include <drm/drm_buffer.h>
+#include "drm_buffer.h"
/**
* Allocate the drm buffer object.
kfree(*buf);
return -ENOMEM;
}
-EXPORT_SYMBOL(drm_buffer_alloc);
/**
* Copy the user data to the begin of the buffer and reset the processing
buf->iterator = 0;
return 0;
}
-EXPORT_SYMBOL(drm_buffer_copy_from_user);
/**
* Free the drm buffer object
kfree(buf);
}
}
-EXPORT_SYMBOL(drm_buffer_free);
/**
* Read an object from buffer that may be split to multiple parts. If object
drm_buffer_advance(buf, objsize);
return obj;
}
-EXPORT_SYMBOL(drm_buffer_read_object);
* Authors: Alex Deucher
*/
#include <linux/firmware.h>
-#include <linux/platform_device.h>
#include <linux/slab.h>
#include <drm/drmP.h>
#include "radeon.h"
radeon_ring_write(ring, PACKET3_ME_INITIALIZE_DEVICE_ID(1));
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 0);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
cp_me = 0xff;
WREG32(CP_ME_CNTL, cp_me);
radeon_ring_write(ring, 0x0000000e); /* VGT_VERTEX_REUSE_BLOCK_CNTL */
	radeon_ring_write(ring, 0x00000010); /* VGT_OUT_DEALLOC_CNTL */
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
return 0;
}
if (rdev->rlc.save_restore_obj == NULL) {
r = radeon_bo_create(rdev, dws * 4, PAGE_SIZE, true,
RADEON_GEM_DOMAIN_VRAM, 0, NULL,
- &rdev->rlc.save_restore_obj);
+ NULL, &rdev->rlc.save_restore_obj);
if (r) {
dev_warn(rdev->dev, "(%d) create RLC sr bo failed\n", r);
return r;
if (rdev->rlc.clear_state_obj == NULL) {
r = radeon_bo_create(rdev, dws * 4, PAGE_SIZE, true,
RADEON_GEM_DOMAIN_VRAM, 0, NULL,
- &rdev->rlc.clear_state_obj);
+ NULL, &rdev->rlc.clear_state_obj);
if (r) {
dev_warn(rdev->dev, "(%d) create RLC c bo failed\n", r);
sumo_rlc_fini(rdev);
r = radeon_bo_create(rdev, rdev->rlc.cp_table_size,
PAGE_SIZE, true,
RADEON_GEM_DOMAIN_VRAM, 0, NULL,
- &rdev->rlc.cp_table_obj);
+ NULL, &rdev->rlc.cp_table_obj);
if (r) {
dev_warn(rdev->dev, "(%d) create RLC cp table bo failed\n", r);
sumo_rlc_fini(rdev);
* Used by the radeon ttm implementation to move pages if
* registered as the asic copy callback.
*/
-int evergreen_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence)
+struct radeon_fence *evergreen_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv)
{
struct radeon_semaphore *sem = NULL;
+ struct radeon_fence *fence;
int ring_index = rdev->asic->copy.dma_ring_index;
struct radeon_ring *ring = &rdev->ring[ring_index];
u32 size_in_dw, cur_size_in_dw;
r = radeon_semaphore_create(rdev, &sem);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
- return r;
+ return ERR_PTR(r);
}
size_in_dw = (num_gpu_pages << RADEON_GPU_PAGE_SHIFT) / 4;
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_resv(rdev, sem, resv, false);
radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
dst_offset += cur_size_in_dw * 4;
}
- r = radeon_fence_emit(rdev, fence, ring->idx);
+ r = radeon_fence_emit(rdev, &fence, ring->idx);
if (r) {
radeon_ring_unlock_undo(rdev, ring);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_ring_unlock_commit(rdev, ring);
- radeon_semaphore_free(rdev, &sem, *fence);
+ radeon_ring_unlock_commit(rdev, ring, false);
+ radeon_semaphore_free(rdev, &sem, fence);
- return r;
+ return fence;
}
/**
extern void dce6_afmt_write_latency_fields(struct drm_encoder *encoder,
struct drm_display_mode *mode);
+/* enable the audio stream */
+static void dce4_audio_enable(struct radeon_device *rdev,
+ struct r600_audio_pin *pin,
+ u8 enable_mask)
+{
+ u32 tmp = RREG32(AZ_HOT_PLUG_CONTROL);
+
+ if (!pin)
+ return;
+
+ if (enable_mask) {
+ tmp |= AUDIO_ENABLED;
+ if (enable_mask & 1)
+ tmp |= PIN0_AUDIO_ENABLED;
+ if (enable_mask & 2)
+ tmp |= PIN1_AUDIO_ENABLED;
+ if (enable_mask & 4)
+ tmp |= PIN2_AUDIO_ENABLED;
+ if (enable_mask & 8)
+ tmp |= PIN3_AUDIO_ENABLED;
+ } else {
+ tmp &= ~(AUDIO_ENABLED |
+ PIN0_AUDIO_ENABLED |
+ PIN1_AUDIO_ENABLED |
+ PIN2_AUDIO_ENABLED |
+ PIN3_AUDIO_ENABLED);
+ }
+
+ WREG32(AZ_HOT_PLUG_CONTROL, tmp);
+}
+
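The bool-to-u8 change gives callers per-pin control: as the function above shows, bit N of enable_mask maps to PINn_AUDIO_ENABLED, and any non-zero mask also sets the master AUDIO_ENABLED bit. An editorial illustration of the values used later in this series:

	dce4_audio_enable(rdev, pin, 0x0);	/* master bit and all pins off */
	dce4_audio_enable(rdev, pin, 0x1);	/* master on, pin 0 only */
	dce4_audio_enable(rdev, pin, 0xf);	/* master on, pins 0 through 3 */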
/*
* update the N and CTS parameters for a given pixel clock rate
*/
/* disable audio prior to setting up hw */
if (ASIC_IS_DCE6(rdev)) {
dig->afmt->pin = dce6_audio_get_pin(rdev);
- dce6_audio_enable(rdev, dig->afmt->pin, false);
+ dce6_audio_enable(rdev, dig->afmt->pin, 0);
} else {
dig->afmt->pin = r600_audio_get_pin(rdev);
- r600_audio_enable(rdev, dig->afmt->pin, false);
+ dce4_audio_enable(rdev, dig->afmt->pin, 0);
}
evergreen_audio_set_dto(encoder, mode->clock);
/* enable audio after to setting up hw */
if (ASIC_IS_DCE6(rdev))
- dce6_audio_enable(rdev, dig->afmt->pin, true);
+ dce6_audio_enable(rdev, dig->afmt->pin, 1);
else
- r600_audio_enable(rdev, dig->afmt->pin, true);
+ dce4_audio_enable(rdev, dig->afmt->pin, 0xf);
}
void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable)
{
+ struct drm_device *dev = encoder->dev;
+ struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
if (!enable && !dig->afmt->enabled)
return;
+ if (!enable && dig->afmt->pin) {
+ if (ASIC_IS_DCE6(rdev))
+ dce6_audio_enable(rdev, dig->afmt->pin, 0);
+ else
+ dce4_audio_enable(rdev, dig->afmt->pin, 0);
+ dig->afmt->pin = NULL;
+ }
+
dig->afmt->enabled = enable;
DRM_DEBUG("%sabling HDMI interface @ 0x%04X for encoder 0x%x\n",
return kv_enable_uvd_dpm(rdev, !gate);
}
-static u8 kv_get_vce_boot_level(struct radeon_device *rdev)
+static u8 kv_get_vce_boot_level(struct radeon_device *rdev, u32 evclk)
{
u8 i;
struct radeon_vce_clock_voltage_dependency_table *table =
&rdev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table;
for (i = 0; i < table->count; i++) {
- if (table->entries[i].evclk >= 0) /* XXX */
+ if (table->entries[i].evclk >= evclk)
break;
}
if (pi->caps_stable_p_state)
pi->vce_boot_level = table->count - 1;
else
- pi->vce_boot_level = kv_get_vce_boot_level(rdev);
+ pi->vce_boot_level = kv_get_vce_boot_level(rdev, radeon_new_state->evclk);
ret = kv_copy_bytes_to_smc(rdev,
pi->dpm_table_start +
pi->caps_sclk_ds = true;
pi->enable_auto_thermal_throttling = true;
pi->disable_nb_ps3_in_battery = false;
- pi->bapm_enable = true;
+ if (radeon_bapm == 0)
+ pi->bapm_enable = false;
+ else
+ pi->bapm_enable = true;
pi->voltage_drop_t = 0;
pi->caps_sclk_throttle_low_notification = false;
pi->caps_fps = false; /* true? */
tmp = (RREG32_SMC(SMU_VOLTAGE_STATUS) & SMU_VOLTAGE_CURRENT_LEVEL_MASK) >>
SMU_VOLTAGE_CURRENT_LEVEL_SHIFT;
vddc = kv_convert_8bit_index_to_voltage(rdev, (u16)tmp);
+ seq_printf(m, "uvd %sabled\n", pi->uvd_power_gated ? "dis" : "en");
+ seq_printf(m, "vce %sabled\n", pi->vce_power_gated ? "dis" : "en");
seq_printf(m, "power level %d sclk: %u vddc: %u\n",
current_index, sclk, vddc);
}
WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR + (i << 2), 0);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2), rdev->vm_manager.max_pfn);
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
- rdev->gart.table_addr >> 12);
+ rdev->vm_manager.saved_table_addr[i]);
}
/* enable context1-7 */
static void cayman_pcie_gart_disable(struct radeon_device *rdev)
{
+ unsigned i;
+
+ for (i = 1; i < 8; ++i) {
+ rdev->vm_manager.saved_table_addr[i] = RREG32(
+ VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2));
+ }
+
/* Disable all tables */
WREG32(VM_CONTEXT0_CNTL, 0);
WREG32(VM_CONTEXT1_CNTL, 0);
radeon_ring_write(ring, PACKET3_ME_INITIALIZE_DEVICE_ID(1));
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 0);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
cayman_cp_enable(rdev, true);
radeon_ring_write(ring, 0x0000000e); /* VGT_VERTEX_REUSE_BLOCK_CNTL */
	radeon_ring_write(ring, 0x00000010); /* VGT_OUT_DEALLOC_CNTL */
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
/* XXX init other rings */
bool disable_mclk_switching;
u32 mclk;
u16 vddci;
- u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
int i;
if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
}
}
- /* limit clocks to max supported clocks based on voltage dependency tables */
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
- &max_sclk_vddc);
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
- &max_mclk_vddci);
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
- &max_mclk_vddc);
-
- for (i = 0; i < ps->performance_level_count; i++) {
- if (max_sclk_vddc) {
- if (ps->performance_levels[i].sclk > max_sclk_vddc)
- ps->performance_levels[i].sclk = max_sclk_vddc;
- }
- if (max_mclk_vddci) {
- if (ps->performance_levels[i].mclk > max_mclk_vddci)
- ps->performance_levels[i].mclk = max_mclk_vddci;
- }
- if (max_mclk_vddc) {
- if (ps->performance_levels[i].mclk > max_mclk_vddc)
- ps->performance_levels[i].mclk = max_mclk_vddc;
- }
- }
-
/* XXX validate the min clocks required for display */
/* adjust low state */
return false;
}
-int r100_copy_blit(struct radeon_device *rdev,
- uint64_t src_offset,
- uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence)
+struct radeon_fence *r100_copy_blit(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv)
{
struct radeon_ring *ring = &rdev->ring[RADEON_RING_TYPE_GFX_INDEX];
+ struct radeon_fence *fence;
uint32_t cur_pages;
uint32_t stride_bytes = RADEON_GPU_PAGE_SIZE;
uint32_t pitch;
r = radeon_ring_lock(rdev, ring, ndw);
if (r) {
DRM_ERROR("radeon: moving bo (%d) asking for %u dw.\n", r, ndw);
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
}
while (num_gpu_pages > 0) {
cur_pages = num_gpu_pages;
RADEON_WAIT_2D_IDLECLEAN |
RADEON_WAIT_HOST_IDLECLEAN |
RADEON_WAIT_DMA_GUI_IDLE);
- if (fence) {
- r = radeon_fence_emit(rdev, fence, RADEON_RING_TYPE_GFX_INDEX);
+ r = radeon_fence_emit(rdev, &fence, RADEON_RING_TYPE_GFX_INDEX);
+ if (r) {
+ radeon_ring_unlock_undo(rdev, ring);
+ return ERR_PTR(r);
}
- radeon_ring_unlock_commit(rdev, ring);
- return r;
+ radeon_ring_unlock_commit(rdev, ring, false);
+ return fence;
}
static int r100_cp_wait_for_idle(struct radeon_device *rdev)
RADEON_ISYNC_ANY3D_IDLE2D |
RADEON_ISYNC_WAIT_IDLEGUI |
RADEON_ISYNC_CPSCRATCH_IDLEGUI);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
}
radeon_ring_write(ring, PACKET0(scratch, 0));
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = RREG32(scratch);
if (tmp == 0xDEADBEEF) {
ib.ptr[6] = PACKET2(0);
ib.ptr[7] = PACKET2(0);
ib.length_dw = 8;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
goto free_ib;
return vtx_size;
}
-int r200_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset,
- uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence)
+struct radeon_fence *r200_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv)
{
struct radeon_ring *ring = &rdev->ring[RADEON_RING_TYPE_GFX_INDEX];
+ struct radeon_fence *fence;
uint32_t size;
uint32_t cur_size;
int i, num_loops;
r = radeon_ring_lock(rdev, ring, num_loops * 4 + 64);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
- return r;
+ return ERR_PTR(r);
}
/* Must wait for 2D idle & clean before DMA or hangs might happen */
radeon_ring_write(ring, PACKET0(RADEON_WAIT_UNTIL, 0));
}
radeon_ring_write(ring, PACKET0(RADEON_WAIT_UNTIL, 0));
radeon_ring_write(ring, RADEON_WAIT_DMA_GUI_IDLE);
- if (fence) {
- r = radeon_fence_emit(rdev, fence, RADEON_RING_TYPE_GFX_INDEX);
+ r = radeon_fence_emit(rdev, &fence, RADEON_RING_TYPE_GFX_INDEX);
+ if (r) {
+ radeon_ring_unlock_undo(rdev, ring);
+ return ERR_PTR(r);
}
- radeon_ring_unlock_commit(rdev, ring);
- return r;
+ radeon_ring_unlock_commit(rdev, ring, false);
+ return fence;
}
radeon_ring_write(ring,
R300_GEOMETRY_ROUND_NEAREST |
R300_COLOR_ROUND_NEAREST);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
static void r300_errata(struct radeon_device *rdev)
*/
#include <drm/drmP.h>
-#include <drm/drm_buffer.h>
#include <drm/radeon_drm.h>
#include "radeon_drv.h"
#include "r300_reg.h"
+#include "drm_buffer.h"
#include <asm/unaligned.h>
radeon_ring_write(ring, PACKET0(R300_CP_RESYNC_ADDR, 1));
radeon_ring_write(ring, rdev->config.r300.resync_scratch);
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
static void r420_cp_errata_fini(struct radeon_device *rdev)
radeon_ring_lock(rdev, ring, 8);
radeon_ring_write(ring, PACKET0(R300_RB3D_DSTCACHE_CTLSTAT, 0));
radeon_ring_write(ring, R300_RB3D_DC_FINISH);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
radeon_scratch_free(rdev, rdev->config.r300.resync_scratch);
}
int r600_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk)
{
+ unsigned fb_div = 0, ref_div, vclk_div = 0, dclk_div = 0;
+ int r;
+
+ /* bypass vclk and dclk with bclk */
+ WREG32_P(CG_UPLL_FUNC_CNTL_2,
+ VCLK_SRC_SEL(1) | DCLK_SRC_SEL(1),
+ ~(VCLK_SRC_SEL_MASK | DCLK_SRC_SEL_MASK));
+
+ /* assert BYPASS_EN, deassert UPLL_RESET, UPLL_SLEEP and UPLL_CTLREQ */
+ WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_BYPASS_EN_MASK, ~(
+ UPLL_RESET_MASK | UPLL_SLEEP_MASK | UPLL_CTLREQ_MASK));
+
+ if (rdev->family >= CHIP_RS780)
+ WREG32_P(GFX_MACRO_BYPASS_CNTL, UPLL_BYPASS_CNTL,
+ ~UPLL_BYPASS_CNTL);
+
+ if (!vclk || !dclk) {
+ /* keep the Bypass mode, put PLL to sleep */
+ WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK);
+ return 0;
+ }
+
+ if (rdev->clock.spll.reference_freq == 10000)
+ ref_div = 34;
+ else
+ ref_div = 4;
+
+ r = radeon_uvd_calc_upll_dividers(rdev, vclk, dclk, 50000, 160000,
+ ref_div + 1, 0xFFF, 2, 30, ~0,
+ &fb_div, &vclk_div, &dclk_div);
+ if (r)
+ return r;
+
+ if (rdev->family >= CHIP_RV670 && rdev->family < CHIP_RS780)
+ fb_div >>= 1;
+ else
+ fb_div |= 1;
+
+ r = radeon_uvd_send_upll_ctlreq(rdev, CG_UPLL_FUNC_CNTL);
+ if (r)
+ return r;
+
+ /* assert PLL_RESET */
+ WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_RESET_MASK, ~UPLL_RESET_MASK);
+
+ /* For RS780 we have to choose ref clk */
+ if (rdev->family >= CHIP_RS780)
+ WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_REFCLK_SRC_SEL_MASK,
+ ~UPLL_REFCLK_SRC_SEL_MASK);
+
+	/* set the required fb, ref and post divider values */
+ WREG32_P(CG_UPLL_FUNC_CNTL,
+ UPLL_FB_DIV(fb_div) |
+ UPLL_REF_DIV(ref_div),
+ ~(UPLL_FB_DIV_MASK | UPLL_REF_DIV_MASK));
+ WREG32_P(CG_UPLL_FUNC_CNTL_2,
+ UPLL_SW_HILEN(vclk_div >> 1) |
+ UPLL_SW_LOLEN((vclk_div >> 1) + (vclk_div & 1)) |
+ UPLL_SW_HILEN2(dclk_div >> 1) |
+ UPLL_SW_LOLEN2((dclk_div >> 1) + (dclk_div & 1)) |
+ UPLL_DIVEN_MASK | UPLL_DIVEN2_MASK,
+ ~UPLL_SW_MASK);
+
+ /* give the PLL some time to settle */
+ mdelay(15);
+
+ /* deassert PLL_RESET */
+ WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_RESET_MASK);
+
+ mdelay(15);
+
+	/* deassert BYPASS_EN */
+ WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_BYPASS_EN_MASK);
+
+ if (rdev->family >= CHIP_RS780)
+ WREG32_P(GFX_MACRO_BYPASS_CNTL, 0, ~UPLL_BYPASS_CNTL);
+
+ r = radeon_uvd_send_upll_ctlreq(rdev, CG_UPLL_FUNC_CNTL);
+ if (r)
+ return r;
+
+ /* switch VCLK and DCLK selection */
+ WREG32_P(CG_UPLL_FUNC_CNTL_2,
+ VCLK_SRC_SEL(2) | DCLK_SRC_SEL(2),
+ ~(VCLK_SRC_SEL_MASK | DCLK_SRC_SEL_MASK));
+
+ mdelay(100);
+
return 0;
}
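/*
 * Divider encoding sketch (illustration only): the post dividers are
 * programmed as separate high/low phase lengths whose sum equals the
 * divider value, with odd dividers giving the extra cycle to the low
 * phase:
 *
 *	vclk_div = 5  ->  UPLL_SW_HILEN(2) | UPLL_SW_LOLEN(3)	(2 + 3 = 5)
 *	vclk_div = 4  ->  UPLL_SW_HILEN(2) | UPLL_SW_LOLEN(2)	(2 + 2 = 4)
 */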
WREG32(MC_VM_L1_TLB_MCB_WR_GFX_CNTL, tmp);
WREG32(MC_VM_L1_TLB_MCB_RD_PDMA_CNTL, tmp);
WREG32(MC_VM_L1_TLB_MCB_WR_PDMA_CNTL, tmp);
+ WREG32(MC_VM_L1_TLB_MCB_RD_UVD_CNTL, tmp);
+ WREG32(MC_VM_L1_TLB_MCB_WR_UVD_CNTL, tmp);
WREG32(MC_VM_L1_TLB_MCB_RD_SEM_CNTL, tmp | ENABLE_SEMAPHORE_MODE);
WREG32(MC_VM_L1_TLB_MCB_WR_SEM_CNTL, tmp | ENABLE_SEMAPHORE_MODE);
WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12);
WREG32(MC_VM_L1_TLB_MCB_WR_SYS_CNTL, tmp);
WREG32(MC_VM_L1_TLB_MCB_RD_HDP_CNTL, tmp);
WREG32(MC_VM_L1_TLB_MCB_WR_HDP_CNTL, tmp);
+ WREG32(MC_VM_L1_TLB_MCB_RD_UVD_CNTL, tmp);
+ WREG32(MC_VM_L1_TLB_MCB_WR_UVD_CNTL, tmp);
radeon_gart_table_vram_unpin(rdev);
}
if (rdev->vram_scratch.robj == NULL) {
r = radeon_bo_create(rdev, RADEON_GPU_PAGE_SIZE,
PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
- 0, NULL, &rdev->vram_scratch.robj);
+ 0, NULL, NULL, &rdev->vram_scratch.robj);
if (r) {
return r;
}
{
u32 tiling_config;
u32 ramcfg;
- u32 cc_rb_backend_disable;
u32 cc_gc_shader_pipe_config;
u32 tmp;
int i, j;
}
tiling_config |= BANK_SWAPS(1);
- cc_rb_backend_disable = RREG32(CC_RB_BACKEND_DISABLE) & 0x00ff0000;
- tmp = R6XX_MAX_BACKENDS -
- r600_count_pipe_bits((cc_rb_backend_disable >> 16) & R6XX_MAX_BACKENDS_MASK);
- if (tmp < rdev->config.r600.max_backends) {
- rdev->config.r600.max_backends = tmp;
- }
-
cc_gc_shader_pipe_config = RREG32(CC_GC_SHADER_PIPE_CONFIG) & 0x00ffff00;
- tmp = R6XX_MAX_PIPES -
- r600_count_pipe_bits((cc_gc_shader_pipe_config >> 8) & R6XX_MAX_PIPES_MASK);
- if (tmp < rdev->config.r600.max_pipes) {
- rdev->config.r600.max_pipes = tmp;
- }
- tmp = R6XX_MAX_SIMDS -
- r600_count_pipe_bits((cc_gc_shader_pipe_config >> 16) & R6XX_MAX_SIMDS_MASK);
- if (tmp < rdev->config.r600.max_simds) {
- rdev->config.r600.max_simds = tmp;
- }
tmp = rdev->config.r600.max_simds -
r600_count_pipe_bits((cc_gc_shader_pipe_config >> 16) & R6XX_MAX_SIMDS_MASK);
rdev->config.r600.active_simds = tmp;
disabled_rb_mask = (RREG32(CC_RB_BACKEND_DISABLE) >> 16) & R6XX_MAX_BACKENDS_MASK;
+ tmp = 0;
+ for (i = 0; i < rdev->config.r600.max_backends; i++)
+ tmp |= (1 << i);
+ /* if all the backends are disabled, fix it up here */
+ if ((disabled_rb_mask & tmp) == tmp) {
+ for (i = 0; i < rdev->config.r600.max_backends; i++)
+ disabled_rb_mask &= ~(1 << i);
+ }
tmp = (tiling_config & PIPE_TILING__MASK) >> PIPE_TILING__SHIFT;
tmp = r6xx_remap_render_backend(rdev, tmp, rdev->config.r600.max_backends,
R6XX_MAX_BACKENDS, disabled_rb_mask);
radeon_ring_write(ring, PACKET3_ME_INITIALIZE_DEVICE_ID(1));
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 0);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
cp_me = 0xff;
WREG32(R_0086D8_CP_ME_CNTL, cp_me);
radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
radeon_ring_write(ring, ((scratch - PACKET3_SET_CONFIG_REG_OFFSET) >> 2));
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = RREG32(scratch);
if (tmp == 0xDEADBEEF)
}
}
+/**
+ * r600_semaphore_ring_emit - emit a semaphore on the CP ring
+ *
+ * @rdev: radeon_device pointer
+ * @ring: radeon ring buffer object
+ * @semaphore: radeon semaphore object
+ * @emit_wait: Is this a semaphore wait?
+ *
+ * Emits a semaphore signal/wait packet to the CP ring and prevents the PFP
+ * from running ahead of semaphore waits.
+ */
bool r600_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
radeon_ring_write(ring, lower_32_bits(addr));
radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel);
+ /* PFP_SYNC_ME packet only exists on 7xx+, only enable it on eg+ */
+ if (emit_wait && (rdev->family >= CHIP_CEDAR)) {
+ /* Prevent the PFP from running ahead of the semaphore wait */
+ radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+ radeon_ring_write(ring, 0x0);
+ }
+
return true;
}
* Used by the radeon ttm implementation to move pages if
* registered as the asic copy callback.
*/
-int r600_copy_cpdma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence)
+struct radeon_fence *r600_copy_cpdma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv)
{
struct radeon_semaphore *sem = NULL;
+ struct radeon_fence *fence;
int ring_index = rdev->asic->copy.blit_ring_index;
struct radeon_ring *ring = &rdev->ring[ring_index];
u32 size_in_bytes, cur_size_in_bytes, tmp;
r = radeon_semaphore_create(rdev, &sem);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
- return r;
+ return ERR_PTR(r);
}
size_in_bytes = (num_gpu_pages << RADEON_GPU_PAGE_SHIFT);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_resv(rdev, sem, resv, false);
radeon_semaphore_sync_rings(rdev, sem, ring->idx);
radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
radeon_ring_write(ring, (WAIT_UNTIL - PACKET3_SET_CONFIG_REG_OFFSET) >> 2);
radeon_ring_write(ring, WAIT_CP_DMA_IDLE_bit);
- r = radeon_fence_emit(rdev, fence, ring->idx);
+ r = radeon_fence_emit(rdev, &fence, ring->idx);
if (r) {
radeon_ring_unlock_undo(rdev, ring);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_ring_unlock_commit(rdev, ring);
- radeon_semaphore_free(rdev, &sem, *fence);
+ radeon_ring_unlock_commit(rdev, ring, false);
+ radeon_semaphore_free(rdev, &sem, fence);
- return r;
+ return fence;
}
int r600_set_surface_reg(struct radeon_device *rdev, int reg,
return r;
}
+ if (rdev->has_uvd) {
+ r = uvd_v1_0_resume(rdev);
+ if (!r) {
+ r = radeon_fence_driver_start_ring(rdev, R600_RING_TYPE_UVD_INDEX);
+ if (r) {
+ dev_err(rdev->dev, "failed initializing UVD fences (%d).\n", r);
+ }
+ }
+ if (r)
+ rdev->ring[R600_RING_TYPE_UVD_INDEX].ring_size = 0;
+ }
+
/* Enable IRQ */
if (!rdev->irq.installed) {
r = radeon_irq_kms_init(rdev);
if (r)
return r;
+ if (rdev->has_uvd) {
+ ring = &rdev->ring[R600_RING_TYPE_UVD_INDEX];
+ if (ring->ring_size) {
+ r = radeon_ring_init(rdev, ring, ring->ring_size, 0,
+ RADEON_CP_PACKET2);
+ if (!r)
+ r = uvd_v1_0_init(rdev);
+ if (r)
+ DRM_ERROR("radeon: failed initializing UVD (%d).\n", r);
+ }
+ }
+
r = radeon_ib_pool_init(rdev);
if (r) {
dev_err(rdev->dev, "IB initialization failed (%d).\n", r);
radeon_pm_suspend(rdev);
r600_audio_fini(rdev);
r600_cp_stop(rdev);
+ if (rdev->has_uvd) {
+ uvd_v1_0_fini(rdev);
+ radeon_uvd_suspend(rdev);
+ }
r600_irq_suspend(rdev);
radeon_wb_disable(rdev);
r600_pcie_gart_disable(rdev);
rdev->ring[RADEON_RING_TYPE_GFX_INDEX].ring_obj = NULL;
r600_ring_init(rdev, &rdev->ring[RADEON_RING_TYPE_GFX_INDEX], 1024 * 1024);
+ if (rdev->has_uvd) {
+ r = radeon_uvd_init(rdev);
+ if (!r) {
+ rdev->ring[R600_RING_TYPE_UVD_INDEX].ring_obj = NULL;
+ r600_ring_init(rdev, &rdev->ring[R600_RING_TYPE_UVD_INDEX], 4096);
+ }
+ }
+
rdev->ih.ring_obj = NULL;
r600_ih_ring_init(rdev, 64 * 1024);
r600_audio_fini(rdev);
r600_cp_fini(rdev);
r600_irq_fini(rdev);
+ if (rdev->has_uvd) {
+ uvd_v1_0_fini(rdev);
+ radeon_uvd_fini(rdev);
+ }
radeon_wb_fini(rdev);
radeon_ib_pool_fini(rdev);
radeon_irq_kms_fini(rdev);
ib.ptr[1] = ((scratch - PACKET3_SET_CONFIG_REG_OFFSET) >> 2);
ib.ptr[2] = 0xDEADBEEF;
ib.length_dw = 3;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
goto free_ib;
r = radeon_bo_create(rdev, rdev->ih.ring_size,
PAGE_SIZE, true,
RADEON_GEM_DOMAIN_GTT, 0,
- NULL, &rdev->ih.ring_obj);
+ NULL, NULL, &rdev->ih.ring_obj);
if (r) {
DRM_ERROR("radeon: failed to create ih ring buffer (%d).\n", r);
return r;
+++ /dev/null
-/*
- * Copyright 2008 Advanced Micro Devices, Inc.
- * Copyright 2008 Red Hat Inc.
- * Copyright 2009 Christian König.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: Christian König
- */
-#include <drm/drmP.h>
-#include "radeon.h"
-#include "radeon_reg.h"
-#include "radeon_asic.h"
-#include "atom.h"
-
-/*
- * check if enc_priv stores radeon_encoder_atom_dig
- */
-static bool radeon_dig_encoder(struct drm_encoder *encoder)
-{
- struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
- switch (radeon_encoder->encoder_id) {
- case ENCODER_OBJECT_ID_INTERNAL_LVDS:
- case ENCODER_OBJECT_ID_INTERNAL_TMDS1:
- case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
- case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
- case ENCODER_OBJECT_ID_INTERNAL_DVO1:
- case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
- case ENCODER_OBJECT_ID_INTERNAL_DDI:
- case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
- case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
- case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
- case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
- return true;
- }
- return false;
-}
-
-/*
- * check if the chipset is supported
- */
-static int r600_audio_chipset_supported(struct radeon_device *rdev)
-{
- return ASIC_IS_DCE2(rdev) && !ASIC_IS_NODCE(rdev);
-}
-
-struct r600_audio_pin r600_audio_status(struct radeon_device *rdev)
-{
- struct r600_audio_pin status;
- uint32_t value;
-
- value = RREG32(R600_AUDIO_RATE_BPS_CHANNEL);
-
- /* number of channels */
- status.channels = (value & 0x7) + 1;
-
- /* bits per sample */
- switch ((value & 0xF0) >> 4) {
- case 0x0:
- status.bits_per_sample = 8;
- break;
- case 0x1:
- status.bits_per_sample = 16;
- break;
- case 0x2:
- status.bits_per_sample = 20;
- break;
- case 0x3:
- status.bits_per_sample = 24;
- break;
- case 0x4:
- status.bits_per_sample = 32;
- break;
- default:
- dev_err(rdev->dev, "Unknown bits per sample 0x%x, using 16\n",
- (int)value);
- status.bits_per_sample = 16;
- }
-
- /* current sampling rate in HZ */
- if (value & 0x4000)
- status.rate = 44100;
- else
- status.rate = 48000;
- status.rate *= ((value >> 11) & 0x7) + 1;
- status.rate /= ((value >> 8) & 0x7) + 1;
-
- value = RREG32(R600_AUDIO_STATUS_BITS);
-
- /* iec 60958 status bits */
- status.status_bits = value & 0xff;
-
- /* iec 60958 category code */
- status.category_code = (value >> 8) & 0xff;
-
- return status;
-}
-
-/*
- * update all hdmi interfaces with current audio parameters
- */
-void r600_audio_update_hdmi(struct work_struct *work)
-{
- struct radeon_device *rdev = container_of(work, struct radeon_device,
- audio_work);
- struct drm_device *dev = rdev->ddev;
- struct r600_audio_pin audio_status = r600_audio_status(rdev);
- struct drm_encoder *encoder;
- bool changed = false;
-
- if (rdev->audio.pin[0].channels != audio_status.channels ||
- rdev->audio.pin[0].rate != audio_status.rate ||
- rdev->audio.pin[0].bits_per_sample != audio_status.bits_per_sample ||
- rdev->audio.pin[0].status_bits != audio_status.status_bits ||
- rdev->audio.pin[0].category_code != audio_status.category_code) {
- rdev->audio.pin[0] = audio_status;
- changed = true;
- }
-
- list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
- if (!radeon_dig_encoder(encoder))
- continue;
- if (changed || r600_hdmi_buffer_status_changed(encoder))
- r600_hdmi_update_audio_settings(encoder);
- }
-}
-
-/* enable the audio stream */
-void r600_audio_enable(struct radeon_device *rdev,
- struct r600_audio_pin *pin,
- bool enable)
-{
- u32 value = 0;
-
- if (!pin)
- return;
-
- if (ASIC_IS_DCE4(rdev)) {
- if (enable) {
- value |= 0x81000000; /* Required to enable audio */
- value |= 0x0e1000f0; /* fglrx sets that too */
- }
- WREG32(EVERGREEN_AUDIO_ENABLE, value);
- } else {
- WREG32_P(R600_AUDIO_ENABLE,
- enable ? 0x81000000 : 0x0, ~0x81000000);
- }
-}
-
-/*
- * initialize the audio vars
- */
-int r600_audio_init(struct radeon_device *rdev)
-{
- if (!radeon_audio || !r600_audio_chipset_supported(rdev))
- return 0;
-
- rdev->audio.enabled = true;
-
- rdev->audio.num_pins = 1;
- rdev->audio.pin[0].channels = -1;
- rdev->audio.pin[0].rate = -1;
- rdev->audio.pin[0].bits_per_sample = -1;
- rdev->audio.pin[0].status_bits = 0;
- rdev->audio.pin[0].category_code = 0;
- rdev->audio.pin[0].id = 0;
- /* disable audio. it will be set up later */
- r600_audio_enable(rdev, &rdev->audio.pin[0], false);
-
- return 0;
-}
-
-/*
- * release the audio timer
- * TODO: How to do this correctly on SMP systems?
- */
-void r600_audio_fini(struct radeon_device *rdev)
-{
- if (!rdev->audio.enabled)
- return;
-
- r600_audio_enable(rdev, &rdev->audio.pin[0], false);
-
- rdev->audio.enabled = false;
-}
-
-struct r600_audio_pin *r600_audio_get_pin(struct radeon_device *rdev)
-{
- /* only one pin on 6xx-NI */
- return &rdev->audio.pin[0];
-}
#if __OS_HAS_AGP
if (dev_priv->flags & RADEON_IS_AGP) {
if (dev_priv->cp_ring != NULL) {
- drm_core_ioremapfree(dev_priv->cp_ring, dev);
+ drm_legacy_ioremapfree(dev_priv->cp_ring, dev);
dev_priv->cp_ring = NULL;
}
if (dev_priv->ring_rptr != NULL) {
- drm_core_ioremapfree(dev_priv->ring_rptr, dev);
+ drm_legacy_ioremapfree(dev_priv->ring_rptr, dev);
dev_priv->ring_rptr = NULL;
}
if (dev->agp_buffer_map != NULL) {
- drm_core_ioremapfree(dev->agp_buffer_map, dev);
+ drm_legacy_ioremapfree(dev->agp_buffer_map, dev);
dev->agp_buffer_map = NULL;
}
} else
r600_page_table_cleanup(dev, &dev_priv->gart_info);
if (dev_priv->gart_info.gart_table_location == DRM_ATI_GART_FB) {
- drm_core_ioremapfree(&dev_priv->gart_info.mapping, dev);
+ drm_legacy_ioremapfree(&dev_priv->gart_info.mapping, dev);
dev_priv->gart_info.addr = NULL;
}
}
dev_priv->buffers_offset = init->buffers_offset;
dev_priv->gart_textures_offset = init->gart_textures_offset;
- master_priv->sarea = drm_getsarea(dev);
+ master_priv->sarea = drm_legacy_getsarea(dev);
if (!master_priv->sarea) {
DRM_ERROR("could not find sarea!\n");
r600_do_cleanup_cp(dev);
return -EINVAL;
}
- dev_priv->cp_ring = drm_core_findmap(dev, init->ring_offset);
+ dev_priv->cp_ring = drm_legacy_findmap(dev, init->ring_offset);
if (!dev_priv->cp_ring) {
DRM_ERROR("could not find cp ring region!\n");
r600_do_cleanup_cp(dev);
return -EINVAL;
}
- dev_priv->ring_rptr = drm_core_findmap(dev, init->ring_rptr_offset);
+ dev_priv->ring_rptr = drm_legacy_findmap(dev, init->ring_rptr_offset);
if (!dev_priv->ring_rptr) {
DRM_ERROR("could not find ring read pointer!\n");
r600_do_cleanup_cp(dev);
return -EINVAL;
}
dev->agp_buffer_token = init->buffers_offset;
- dev->agp_buffer_map = drm_core_findmap(dev, init->buffers_offset);
+ dev->agp_buffer_map = drm_legacy_findmap(dev, init->buffers_offset);
if (!dev->agp_buffer_map) {
DRM_ERROR("could not find dma buffer region!\n");
r600_do_cleanup_cp(dev);
if (init->gart_textures_offset) {
dev_priv->gart_textures =
- drm_core_findmap(dev, init->gart_textures_offset);
+ drm_legacy_findmap(dev, init->gart_textures_offset);
if (!dev_priv->gart_textures) {
DRM_ERROR("could not find GART texture region!\n");
r600_do_cleanup_cp(dev);
#if __OS_HAS_AGP
/* XXX */
if (dev_priv->flags & RADEON_IS_AGP) {
- drm_core_ioremap_wc(dev_priv->cp_ring, dev);
- drm_core_ioremap_wc(dev_priv->ring_rptr, dev);
- drm_core_ioremap_wc(dev->agp_buffer_map, dev);
+ drm_legacy_ioremap_wc(dev_priv->cp_ring, dev);
+ drm_legacy_ioremap_wc(dev_priv->ring_rptr, dev);
+ drm_legacy_ioremap_wc(dev->agp_buffer_map, dev);
if (!dev_priv->cp_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev->agp_buffer_map->handle) {
dev_priv->gart_info.mapping.size =
dev_priv->gart_info.table_size;
- drm_core_ioremap_wc(&dev_priv->gart_info.mapping, dev);
+ drm_legacy_ioremap_wc(&dev_priv->gart_info.mapping, dev);
if (!dev_priv->gart_info.mapping.handle) {
DRM_ERROR("ioremap failed.\n");
r600_do_cleanup_cp(dev);
radeon_ring_write(ring, rdev->vram_scratch.gpu_addr & 0xfffffffc);
radeon_ring_write(ring, upper_32_bits(rdev->vram_scratch.gpu_addr) & 0xff);
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = readl(ptr);
ib.ptr[3] = 0xDEADBEEF;
ib.length_dw = 4;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
radeon_ib_free(rdev, &ib);
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
* @src_offset: src GPU address
* @dst_offset: dst GPU address
* @num_gpu_pages: number of GPU pages to xfer
- * @fence: radeon fence object
+ * @resv: reservation object to sync to
*
* Copy GPU paging using the DMA engine (r6xx).
* Used by the radeon ttm implementation to move pages if
* registered as the asic copy callback.
*/
-int r600_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence)
+struct radeon_fence *r600_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv)
{
struct radeon_semaphore *sem = NULL;
+ struct radeon_fence *fence;
int ring_index = rdev->asic->copy.dma_ring_index;
struct radeon_ring *ring = &rdev->ring[ring_index];
u32 size_in_dw, cur_size_in_dw;
r = radeon_semaphore_create(rdev, &sem);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
- return r;
+ return ERR_PTR(r);
}
size_in_dw = (num_gpu_pages << RADEON_GPU_PAGE_SHIFT) / 4;
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_resv(rdev, sem, resv, false);
radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
dst_offset += cur_size_in_dw * 4;
}
- r = radeon_fence_emit(rdev, fence, ring->idx);
+ r = radeon_fence_emit(rdev, &fence, ring->idx);
if (r) {
radeon_ring_unlock_undo(rdev, ring);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_ring_unlock_commit(rdev, ring);
- radeon_semaphore_free(rdev, &sem, *fence);
+ radeon_ring_unlock_commit(rdev, ring, false);
+ radeon_semaphore_free(rdev, &sem, fence);
- return r;
+ return fence;
}
};
+/*
+ * check if the chipset is supported
+ */
+static int r600_audio_chipset_supported(struct radeon_device *rdev)
+{
+ return ASIC_IS_DCE2(rdev) && !ASIC_IS_NODCE(rdev);
+}
+
+static struct r600_audio_pin r600_audio_status(struct radeon_device *rdev)
+{
+ struct r600_audio_pin status;
+ uint32_t value;
+
+ value = RREG32(R600_AUDIO_RATE_BPS_CHANNEL);
+
+ /* number of channels */
+ status.channels = (value & 0x7) + 1;
+
+ /* bits per sample */
+ switch ((value & 0xF0) >> 4) {
+ case 0x0:
+ status.bits_per_sample = 8;
+ break;
+ case 0x1:
+ status.bits_per_sample = 16;
+ break;
+ case 0x2:
+ status.bits_per_sample = 20;
+ break;
+ case 0x3:
+ status.bits_per_sample = 24;
+ break;
+ case 0x4:
+ status.bits_per_sample = 32;
+ break;
+ default:
+ dev_err(rdev->dev, "Unknown bits per sample 0x%x, using 16\n",
+ (int)value);
+ status.bits_per_sample = 16;
+ }
+
+	/* current sampling rate in Hz */
+ if (value & 0x4000)
+ status.rate = 44100;
+ else
+ status.rate = 48000;
+ status.rate *= ((value >> 11) & 0x7) + 1;
+ status.rate /= ((value >> 8) & 0x7) + 1;
+
+ value = RREG32(R600_AUDIO_STATUS_BITS);
+
+ /* iec 60958 status bits */
+ status.status_bits = value & 0xff;
+
+ /* iec 60958 category code */
+ status.category_code = (value >> 8) & 0xff;
+
+ return status;
+}
+
+/*
+ * update all hdmi interfaces with current audio parameters
+ */
+void r600_audio_update_hdmi(struct work_struct *work)
+{
+ struct radeon_device *rdev = container_of(work, struct radeon_device,
+ audio_work);
+ struct drm_device *dev = rdev->ddev;
+ struct r600_audio_pin audio_status = r600_audio_status(rdev);
+ struct drm_encoder *encoder;
+ bool changed = false;
+
+ if (rdev->audio.pin[0].channels != audio_status.channels ||
+ rdev->audio.pin[0].rate != audio_status.rate ||
+ rdev->audio.pin[0].bits_per_sample != audio_status.bits_per_sample ||
+ rdev->audio.pin[0].status_bits != audio_status.status_bits ||
+ rdev->audio.pin[0].category_code != audio_status.category_code) {
+ rdev->audio.pin[0] = audio_status;
+ changed = true;
+ }
+
+ list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+ if (!radeon_encoder_is_digital(encoder))
+ continue;
+ if (changed || r600_hdmi_buffer_status_changed(encoder))
+ r600_hdmi_update_audio_settings(encoder);
+ }
+}
+
+/* enable the audio stream */
+void r600_audio_enable(struct radeon_device *rdev,
+ struct r600_audio_pin *pin,
+ u8 enable_mask)
+{
+ u32 tmp = RREG32(AZ_HOT_PLUG_CONTROL);
+
+ if (!pin)
+ return;
+
+ if (enable_mask) {
+ tmp |= AUDIO_ENABLED;
+ if (enable_mask & 1)
+ tmp |= PIN0_AUDIO_ENABLED;
+ if (enable_mask & 2)
+ tmp |= PIN1_AUDIO_ENABLED;
+ if (enable_mask & 4)
+ tmp |= PIN2_AUDIO_ENABLED;
+ if (enable_mask & 8)
+ tmp |= PIN3_AUDIO_ENABLED;
+ } else {
+ tmp &= ~(AUDIO_ENABLED |
+ PIN0_AUDIO_ENABLED |
+ PIN1_AUDIO_ENABLED |
+ PIN2_AUDIO_ENABLED |
+ PIN3_AUDIO_ENABLED);
+ }
+
+ WREG32(AZ_HOT_PLUG_CONTROL, tmp);
+}
+
+/*
+ * initialize the audio vars
+ */
+int r600_audio_init(struct radeon_device *rdev)
+{
+ if (!radeon_audio || !r600_audio_chipset_supported(rdev))
+ return 0;
+
+ rdev->audio.enabled = true;
+
+ rdev->audio.num_pins = 1;
+ rdev->audio.pin[0].channels = -1;
+ rdev->audio.pin[0].rate = -1;
+ rdev->audio.pin[0].bits_per_sample = -1;
+ rdev->audio.pin[0].status_bits = 0;
+ rdev->audio.pin[0].category_code = 0;
+ rdev->audio.pin[0].id = 0;
+	/* disable audio; it will be set up later */
+ r600_audio_enable(rdev, &rdev->audio.pin[0], 0);
+
+ return 0;
+}
+
+/*
+ * release the audio timer
+ * TODO: How to do this correctly on SMP systems?
+ */
+void r600_audio_fini(struct radeon_device *rdev)
+{
+ if (!rdev->audio.enabled)
+ return;
+
+ r600_audio_enable(rdev, &rdev->audio.pin[0], 0);
+
+ rdev->audio.enabled = false;
+}
+
+struct r600_audio_pin *r600_audio_get_pin(struct radeon_device *rdev)
+{
+ /* only one pin on 6xx-NI */
+ return &rdev->audio.pin[0];
+}
+
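/*
 * Usage sketch (illustration only): enable_mask selects audio pins by
 * bit, so 0xf enables pins 0-3, a partial mask such as 0x5
 * (hypothetical) would enable pins 0 and 2 only, and 0 disables the
 * audio block entirely:
 *
 *	r600_audio_enable(rdev, pin, 0xf);
 *	r600_audio_enable(rdev, pin, 0);
 */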
/*
* calculate CTS and N values if they are not found in the table
*/
/* disable audio prior to setting up hw */
dig->afmt->pin = r600_audio_get_pin(rdev);
- r600_audio_enable(rdev, dig->afmt->pin, false);
+ r600_audio_enable(rdev, dig->afmt->pin, 0xf);
r600_audio_set_dto(encoder, mode->clock);
WREG32(HDMI0_RAMP_CONTROL3 + offset, 0x00000001);
/* enable audio after to setting up hw */
- r600_audio_enable(rdev, dig->afmt->pin, true);
+ r600_audio_enable(rdev, dig->afmt->pin, 0xf);
}
/**
if (!enable && !dig->afmt->enabled)
return;
+ if (!enable && dig->afmt->pin) {
+ r600_audio_enable(rdev, dig->afmt->pin, 0);
+ dig->afmt->pin = NULL;
+ }
+
/* Older chipsets require setting HDMI and routing manually */
if (!ASIC_IS_DCE3(rdev)) {
if (enable)
#define HDP_TILING_CONFIG 0x2F3C
#define HDP_DEBUG1 0x2F34
+#define MC_CONFIG 0x2000
#define MC_VM_AGP_TOP 0x2184
#define MC_VM_AGP_BOT 0x2188
#define MC_VM_AGP_BASE 0x218C
#define MC_VM_FB_LOCATION 0x2180
-#define MC_VM_L1_TLB_MCD_RD_A_CNTL 0x219C
+#define MC_VM_L1_TLB_MCB_RD_UVD_CNTL 0x2124
#define ENABLE_L1_TLB (1 << 0)
#define ENABLE_L1_FRAGMENT_PROCESSING (1 << 1)
#define ENABLE_L1_STRICT_ORDERING (1 << 2)
#define EFFECTIVE_L1_QUEUE_SIZE(x) (((x) & 7) << 15)
#define EFFECTIVE_L1_QUEUE_SIZE_MASK 0x00038000
#define EFFECTIVE_L1_QUEUE_SIZE_SHIFT 15
+#define MC_VM_L1_TLB_MCD_RD_A_CNTL 0x219C
#define MC_VM_L1_TLB_MCD_RD_B_CNTL 0x21A0
#define MC_VM_L1_TLB_MCB_RD_GFX_CNTL 0x21FC
#define MC_VM_L1_TLB_MCB_RD_HDP_CNTL 0x2204
#define MC_VM_L1_TLB_MCB_RD_PDMA_CNTL 0x2208
#define MC_VM_L1_TLB_MCB_RD_SEM_CNTL 0x220C
#define MC_VM_L1_TLB_MCB_RD_SYS_CNTL 0x2200
+#define MC_VM_L1_TLB_MCB_WR_UVD_CNTL 0x212c
#define MC_VM_L1_TLB_MCD_WR_A_CNTL 0x21A4
#define MC_VM_L1_TLB_MCD_WR_B_CNTL 0x21A8
#define MC_VM_L1_TLB_MCB_WR_GFX_CNTL 0x2210
#define MC_VM_SYSTEM_APERTURE_HIGH_ADDR 0x2194
#define MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR 0x2198
+#define RS_DQ_RD_RET_CONF 0x2348
+
#define PA_CL_ENHANCE 0x8A14
#define CLIP_VTX_REORDER_ENA (1 << 0)
#define NUM_CLIP_SEQ(x) ((x) << 1)
# define TARGET_LINK_SPEED_MASK (0xf << 0)
# define SELECTABLE_DEEMPHASIS (1 << 6)
+/* Audio */
+#define AZ_HOT_PLUG_CONTROL 0x7300
+# define AZ_FORCE_CODEC_WAKE (1 << 0)
+# define JACK_DETECTION_ENABLE (1 << 4)
+# define UNSOLICITED_RESPONSE_ENABLE (1 << 8)
+# define CODEC_HOT_PLUG_ENABLE (1 << 12)
+# define AUDIO_ENABLED (1 << 31)
+/* DCE3 adds */
+# define PIN0_JACK_DETECTION_ENABLE (1 << 4)
+# define PIN1_JACK_DETECTION_ENABLE (1 << 5)
+# define PIN2_JACK_DETECTION_ENABLE (1 << 6)
+# define PIN3_JACK_DETECTION_ENABLE (1 << 7)
+# define PIN0_AUDIO_ENABLED (1 << 24)
+# define PIN1_AUDIO_ENABLED (1 << 25)
+# define PIN2_AUDIO_ENABLED (1 << 26)
+# define PIN3_AUDIO_ENABLED (1 << 27)
+
/* Audio clocks DCE 2.0/3.0 */
#define AUDIO_DTO 0x7340
# define AUDIO_DTO_PHASE(x) (((x) & 0xffff) << 0)
#define UVD_CGC_GATE 0xf4a8
#define UVD_LMI_CTRL2 0xf4f4
#define UVD_MASTINT_EN 0xf500
+#define UVD_FW_START 0xf51C
#define UVD_LMI_ADDR_EXT 0xf594
#define UVD_LMI_CTRL 0xf598
#define UVD_LMI_SWAP_CNTL 0xf5b4
#define UVD_MPC_SET_MUX 0xf5f4
#define UVD_MPC_SET_ALU 0xf5f8
+#define UVD_VCPU_CACHE_OFFSET0 0xf608
+#define UVD_VCPU_CACHE_SIZE0 0xf60c
+#define UVD_VCPU_CACHE_OFFSET1 0xf610
+#define UVD_VCPU_CACHE_SIZE1 0xf614
+#define UVD_VCPU_CACHE_OFFSET2 0xf618
+#define UVD_VCPU_CACHE_SIZE2 0xf61c
+
#define UVD_VCPU_CNTL 0xf660
#define UVD_SOFT_RESET 0xf680
#define RBC_SOFT_RESET (1<<0)
#define UVD_CONTEXT_ID 0xf6f4
+/* rs780 only */
+#define GFX_MACRO_BYPASS_CNTL 0x30c0
+#define SPLL_BYPASS_CNTL (1 << 0)
+#define UPLL_BYPASS_CNTL (1 << 1)
+
+#define CG_UPLL_FUNC_CNTL 0x7e0
+# define UPLL_RESET_MASK 0x00000001
+# define UPLL_SLEEP_MASK 0x00000002
+# define UPLL_BYPASS_EN_MASK 0x00000004
# define UPLL_CTLREQ_MASK 0x00000008
+# define UPLL_FB_DIV(x) ((x) << 4)
+# define UPLL_FB_DIV_MASK 0x0000FFF0
+# define UPLL_REF_DIV(x) ((x) << 16)
+# define UPLL_REF_DIV_MASK 0x003F0000
+# define UPLL_REFCLK_SRC_SEL_MASK 0x20000000
# define UPLL_CTLACK_MASK 0x40000000
# define UPLL_CTLACK2_MASK 0x80000000
+#define CG_UPLL_FUNC_CNTL_2 0x7e4
+# define UPLL_SW_HILEN(x) ((x) << 0)
+# define UPLL_SW_LOLEN(x) ((x) << 4)
+# define UPLL_SW_HILEN2(x) ((x) << 8)
+# define UPLL_SW_LOLEN2(x) ((x) << 12)
+# define UPLL_DIVEN_MASK 0x00010000
+# define UPLL_DIVEN2_MASK 0x00020000
+# define UPLL_SW_MASK 0x0003FFFF
+# define VCLK_SRC_SEL(x) ((x) << 20)
+# define VCLK_SRC_SEL_MASK 0x01F00000
+# define DCLK_SRC_SEL(x) ((x) << 25)
+# define DCLK_SRC_SEL_MASK 0x3E000000
/*
* PM4
*/
# define PACKET3_CP_DMA_CMD_SAIC (1 << 28)
# define PACKET3_CP_DMA_CMD_DAIC (1 << 29)
+#define PACKET3_PFP_SYNC_ME 0x42 /* r7xx+ only */
#define PACKET3_SURFACE_SYNC 0x43
# define PACKET3_CB0_DEST_BASE_ENA (1 << 6)
# define PACKET3_FULL_CACHE_ENA (1 << 20) /* r7xx+ only */
#include <linux/list.h>
#include <linux/kref.h>
#include <linux/interval_tree.h>
+#include <linux/hashtable.h>
+#include <linux/fence.h>
#include <ttm/ttm_bo_api.h>
#include <ttm/ttm_bo_driver.h>
#include <ttm/ttm_module.h>
#include <ttm/ttm_execbuf_util.h>
+#include <drm/drm_gem.h>
+
#include "radeon_family.h"
#include "radeon_mode.h"
#include "radeon_reg.h"
extern int radeon_vm_block_size;
extern int radeon_deep_color;
extern int radeon_use_pflipirq;
+extern int radeon_bapm;
/*
* Copy from radeon_drv.h so we don't have to include both and have conflicting
#define RADEONFB_CONN_LIMIT 4
#define RADEON_BIOS_NUM_SCRATCH 8
-/* fence seq are set to this number when signaled */
-#define RADEON_FENCE_SIGNALED_SEQ 0LL
-
/* internal ring indices */
/* r1xx+ has gfx CP ring */
#define RADEON_RING_TYPE_GFX_INDEX 0
* Fences.
*/
struct radeon_fence_driver {
+ struct radeon_device *rdev;
uint32_t scratch_reg;
uint64_t gpu_addr;
volatile uint32_t *cpu_addr;
/* sync_seq is protected by ring emission lock */
uint64_t sync_seq[RADEON_NUM_RINGS];
atomic64_t last_seq;
- bool initialized;
+ bool initialized, delayed_irq;
+ struct delayed_work lockup_work;
};
struct radeon_fence {
+ struct fence base;
+
struct radeon_device *rdev;
- struct kref kref;
- /* protected by radeon_fence.lock */
uint64_t seq;
/* RB, DMA, etc. */
unsigned ring;
+
+ wait_queue_t fence_wake;
};
int radeon_fence_driver_start_ring(struct radeon_device *rdev, int ring);
int radeon_fence_driver_init(struct radeon_device *rdev);
void radeon_fence_driver_fini(struct radeon_device *rdev);
-void radeon_fence_driver_force_completion(struct radeon_device *rdev);
+void radeon_fence_driver_force_completion(struct radeon_device *rdev, int ring);
int radeon_fence_emit(struct radeon_device *rdev, struct radeon_fence **fence, int ring);
void radeon_fence_process(struct radeon_device *rdev, int ring);
bool radeon_fence_signaled(struct radeon_fence *fence);
struct list_head list;
/* Protected by tbo.reserved */
u32 initial_domain;
- u32 placements[3];
+ struct ttm_place placements[3];
struct ttm_placement placement;
struct ttm_buffer_object tbo;
struct ttm_bo_kmap_obj kmap;
struct ttm_bo_kmap_obj dma_buf_vmap;
pid_t pid;
+
+ struct radeon_mn *mn;
+ struct interval_tree_node mn_it;
};
#define gem_to_radeon_bo(gobj) container_of((gobj), struct radeon_bo, gem_base)
struct radeon_semaphore *semaphore);
bool radeon_semaphore_emit_wait(struct radeon_device *rdev, int ring,
struct radeon_semaphore *semaphore);
-void radeon_semaphore_sync_to(struct radeon_semaphore *semaphore,
- struct radeon_fence *fence);
+void radeon_semaphore_sync_fence(struct radeon_semaphore *semaphore,
+ struct radeon_fence *fence);
+int radeon_semaphore_sync_resv(struct radeon_device *rdev,
+ struct radeon_semaphore *semaphore,
+ struct reservation_object *resv,
+ bool shared);
int radeon_semaphore_sync_rings(struct radeon_device *rdev,
struct radeon_semaphore *semaphore,
int waiting_ring);
uint64_t base;
struct drm_pending_vblank_event *event;
struct radeon_bo *old_rbo;
- struct radeon_fence *fence;
+ struct fence *fence;
};
struct r500_irq_stat_regs {
int radeon_irq_kms_init(struct radeon_device *rdev);
void radeon_irq_kms_fini(struct radeon_device *rdev);
void radeon_irq_kms_sw_irq_get(struct radeon_device *rdev, int ring);
+bool radeon_irq_kms_sw_irq_get_delayed(struct radeon_device *rdev, int ring);
void radeon_irq_kms_sw_irq_put(struct radeon_device *rdev, int ring);
void radeon_irq_kms_pflip_irq_get(struct radeon_device *rdev, int crtc);
void radeon_irq_kms_pflip_irq_put(struct radeon_device *rdev, int crtc);
u64 vram_base_offset;
/* is vm enabled? */
bool enabled;
+ /* for hw to save the PD addr on suspend/resume */
+ uint32_t saved_table_addr[RADEON_NUM_VM];
};
/*
unsigned size);
void radeon_ib_free(struct radeon_device *rdev, struct radeon_ib *ib);
int radeon_ib_schedule(struct radeon_device *rdev, struct radeon_ib *ib,
- struct radeon_ib *const_ib);
+ struct radeon_ib *const_ib, bool hdp_flush);
int radeon_ib_pool_init(struct radeon_device *rdev);
void radeon_ib_pool_fini(struct radeon_device *rdev);
int radeon_ib_ring_tests(struct radeon_device *rdev);
void radeon_ring_free_size(struct radeon_device *rdev, struct radeon_ring *cp);
int radeon_ring_alloc(struct radeon_device *rdev, struct radeon_ring *cp, unsigned ndw);
int radeon_ring_lock(struct radeon_device *rdev, struct radeon_ring *cp, unsigned ndw);
-void radeon_ring_commit(struct radeon_device *rdev, struct radeon_ring *cp);
-void radeon_ring_unlock_commit(struct radeon_device *rdev, struct radeon_ring *cp);
+void radeon_ring_commit(struct radeon_device *rdev, struct radeon_ring *cp,
+ bool hdp_flush);
+void radeon_ring_unlock_commit(struct radeon_device *rdev, struct radeon_ring *cp,
+ bool hdp_flush);
void radeon_ring_undo(struct radeon_ring *ring);
void radeon_ring_unlock_undo(struct radeon_device *rdev, struct radeon_ring *cp);
int radeon_ring_test(struct radeon_device *rdev, struct radeon_ring *cp);
uint32_t handle, struct radeon_fence **fence);
int radeon_uvd_get_destroy_msg(struct radeon_device *rdev, int ring,
uint32_t handle, struct radeon_fence **fence);
-void radeon_uvd_force_into_uvd_segment(struct radeon_bo *rbo);
+void radeon_uvd_force_into_uvd_segment(struct radeon_bo *rbo,
+ uint32_t allowed_domains);
void radeon_uvd_free_handles(struct radeon_device *rdev,
struct drm_file *filp);
int radeon_uvd_cs_parse(struct radeon_cs_parser *parser);
struct radeon_ring *cpB);
void radeon_test_syncing(struct radeon_device *rdev);
+/*
+ * MMU Notifier
+ */
+int radeon_mn_register(struct radeon_bo *bo, unsigned long addr);
+void radeon_mn_unregister(struct radeon_bo *bo);
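/*
 * Usage sketch (assumption, based on the userptr handling added in
 * this series): a userptr BO registers its CPU address range with the
 * MMU notifier so invalidations can synchronize with GPU access, and
 * unregisters it again on teardown:
 *
 *	r = radeon_mn_register(bo, userptr);
 *	if (r)
 *		return r;
 *	...
 *	radeon_mn_unregister(bo);
 */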
/*
* Debugfs
} display;
/* copy functions for bo handling */
struct {
- int (*blit)(struct radeon_device *rdev,
- uint64_t src_offset,
- uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence);
+ struct radeon_fence *(*blit)(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
u32 blit_ring_index;
- int (*dma)(struct radeon_device *rdev,
- uint64_t src_offset,
- uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence);
+ struct radeon_fence *(*dma)(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
u32 dma_ring_index;
/* method used for bo copy */
- int (*copy)(struct radeon_device *rdev,
- uint64_t src_offset,
- uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence);
+ struct radeon_fence *(*copy)(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
/* ring used for bo copies */
u32 copy_ring_index;
} copy;
struct drm_file *filp);
int radeon_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp);
+int radeon_gem_userptr_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *filp);
int radeon_gem_pin_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int radeon_gem_unpin_ioctl(struct drm_device *dev, void *data,
struct radeon_mman mman;
struct radeon_fence_driver fence_drv[RADEON_NUM_RINGS];
wait_queue_head_t fence_queue;
+ unsigned fence_context;
struct mutex ring_lock;
struct radeon_ring ring[RADEON_NUM_RINGS];
bool ib_pool_ready;
bool need_dma32;
bool accel_working;
bool fastfb_working; /* IGP feature*/
- bool needs_reset;
+ bool needs_reset, in_reset;
struct radeon_surface_reg surface_regs[RADEON_GEM_MAX_SURFACES];
const struct firmware *me_fw; /* all family ME firmware */
const struct firmware *pfp_fw; /* r6/700 PFP firmware */
struct radeon_mec mec;
struct work_struct hotplug_work;
struct work_struct audio_work;
- struct work_struct reset_work;
int num_crtc; /* number of crtcs */
struct mutex dc_hw_i2c_mutex; /* display controller hw i2c mutex */
bool has_uvd;
/* tracking pinned memory */
u64 vram_pin_size;
u64 gart_pin_size;
+
+ struct mutex mn_lock;
+ DECLARE_HASHTABLE(mn_hash, 7);
};
bool radeon_is_px(struct drm_device *dev);
/*
* Cast helper
*/
-#define to_radeon_fence(p) ((struct radeon_fence *)(p))
+extern const struct fence_ops radeon_fence_ops;
+
+static inline struct radeon_fence *to_radeon_fence(struct fence *f)
+{
+ struct radeon_fence *__f = container_of(f, struct radeon_fence, base);
+
+ if (__f->base.ops == &radeon_fence_ops)
+ return __f;
+
+ return NULL;
+}
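/*
 * Usage sketch (illustration only): because the cast checks the fence
 * ops, a fence created by another driver safely yields NULL:
 *
 *	struct radeon_fence *rfence = to_radeon_fence(f);
 *	if (!rfence)
 *		return false;	(not one of ours)
 */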
/*
* Registers read & write functions.
/*
* RING helpers.
*/
-#if DRM_DEBUG_CODE == 0
+
+/**
+ * radeon_ring_write - write a value to the ring
+ *
+ * @ring: radeon_ring structure holding ring information
+ * @v: dword (dw) value to write
+ *
+ * Write a value to the requested ring buffer (all asics).
+ */
static inline void radeon_ring_write(struct radeon_ring *ring, uint32_t v)
{
+ if (ring->count_dw <= 0)
+ DRM_ERROR("radeon: writing more dwords to the ring than expected!\n");
+
ring->ring[ring->wptr++] = v;
ring->wptr &= ring->ptr_mask;
ring->count_dw--;
ring->ring_free_dw--;
}
-#else
-/* With debugging this is just too big to inline */
-void radeon_ring_write(struct radeon_ring *ring, uint32_t v);
-#endif
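/*
 * Typical submission sketch (illustration only), showing the new
 * hdp_flush parameter on commit; the ring tests in this series pass
 * false since no HDP flush is needed there:
 *
 *	r = radeon_ring_lock(rdev, ring, 2);
 *	if (r)
 *		return r;
 *	radeon_ring_write(ring, PACKET0(scratch, 0));
 *	radeon_ring_write(ring, 0xDEADBEEF);
 *	radeon_ring_unlock_commit(rdev, ring, false);
 */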
/*
* ASICs macro.
#define radeon_hdmi_setmode(rdev, e, m) (rdev)->asic->display.hdmi_setmode((e), (m))
#define radeon_fence_ring_emit(rdev, r, fence) (rdev)->asic->ring[(r)]->emit_fence((rdev), (fence))
#define radeon_semaphore_ring_emit(rdev, r, cp, semaphore, emit_wait) (rdev)->asic->ring[(r)]->emit_semaphore((rdev), (cp), (semaphore), (emit_wait))
-#define radeon_copy_blit(rdev, s, d, np, f) (rdev)->asic->copy.blit((rdev), (s), (d), (np), (f))
-#define radeon_copy_dma(rdev, s, d, np, f) (rdev)->asic->copy.dma((rdev), (s), (d), (np), (f))
-#define radeon_copy(rdev, s, d, np, f) (rdev)->asic->copy.copy((rdev), (s), (d), (np), (f))
+#define radeon_copy_blit(rdev, s, d, np, resv) (rdev)->asic->copy.blit((rdev), (s), (d), (np), (resv))
+#define radeon_copy_dma(rdev, s, d, np, resv) (rdev)->asic->copy.dma((rdev), (s), (d), (np), (resv))
+#define radeon_copy(rdev, s, d, np, resv) (rdev)->asic->copy.copy((rdev), (s), (d), (np), (resv))
#define radeon_copy_blit_ring_index(rdev) (rdev)->asic->copy.blit_ring_index
#define radeon_copy_dma_ring_index(rdev) (rdev)->asic->copy.dma_ring_index
#define radeon_copy_ring_index(rdev) (rdev)->asic->copy.copy_ring_index
extern void radeon_atom_set_clock_gating(struct radeon_device *rdev, int enable);
extern void radeon_ttm_placement_from_domain(struct radeon_bo *rbo, u32 domain);
extern bool radeon_ttm_bo_is_radeon_bo(struct ttm_buffer_object *bo);
+extern int radeon_ttm_tt_set_userptr(struct ttm_tt *ttm, uint64_t addr,
+ uint32_t flags);
+extern bool radeon_ttm_tt_has_userptr(struct ttm_tt *ttm);
+extern bool radeon_ttm_tt_is_readonly(struct ttm_tt *ttm);
extern void radeon_vram_location(struct radeon_device *rdev, struct radeon_mc *mc, u64 base);
extern void radeon_gtt_location(struct radeon_device *rdev, struct radeon_mc *mc);
extern int radeon_resume_kms(struct drm_device *dev, bool resume, bool fbcon);
struct r600_audio_pin *dce6_audio_get_pin(struct radeon_device *rdev);
void r600_audio_enable(struct radeon_device *rdev,
struct r600_audio_pin *pin,
- bool enable);
+ u8 enable_mask);
void dce6_audio_enable(struct radeon_device *rdev,
struct r600_audio_pin *pin,
- bool enable);
+ u8 enable_mask);
/*
* R600 vram scratch functions
},
};
+static struct radeon_asic_ring rv6xx_uvd_ring = {
+ .ib_execute = &uvd_v1_0_ib_execute,
+ .emit_fence = &uvd_v1_0_fence_emit,
+ .emit_semaphore = &uvd_v1_0_semaphore_emit,
+ .cs_parse = &radeon_uvd_cs_parse,
+ .ring_test = &uvd_v1_0_ring_test,
+ .ib_test = &uvd_v1_0_ib_test,
+ .is_lockup = &radeon_ring_test_lockup,
+ .get_rptr = &uvd_v1_0_get_rptr,
+ .get_wptr = &uvd_v1_0_get_wptr,
+ .set_wptr = &uvd_v1_0_set_wptr,
+};
+
static struct radeon_asic rv6xx_asic = {
.init = &r600_init,
.fini = &r600_fini,
.ring = {
[RADEON_RING_TYPE_GFX_INDEX] = &r600_gfx_ring,
[R600_RING_TYPE_DMA_INDEX] = &r600_dma_ring,
+ [R600_RING_TYPE_UVD_INDEX] = &rv6xx_uvd_ring,
},
.irq = {
.set = &r600_irq_set,
.ring = {
[RADEON_RING_TYPE_GFX_INDEX] = &r600_gfx_ring,
[R600_RING_TYPE_DMA_INDEX] = &r600_dma_ring,
+ [R600_RING_TYPE_UVD_INDEX] = &rv6xx_uvd_ring,
},
.irq = {
.set = &r600_irq_set,
case CHIP_RS780:
case CHIP_RS880:
rdev->asic = &rs780_asic;
- rdev->has_uvd = true;
+ /* 760G/780V/880V don't have UVD */
+	if ((rdev->pdev->device == 0x9616) ||
+	    (rdev->pdev->device == 0x9611) ||
+	    (rdev->pdev->device == 0x9613) ||
+	    (rdev->pdev->device == 0x9711) ||
+	    (rdev->pdev->device == 0x9713))
+ rdev->has_uvd = false;
+ else
+ rdev->has_uvd = true;
break;
case CHIP_RV770:
case CHIP_RV730:
int r100_cs_parse(struct radeon_cs_parser *p);
void r100_pll_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v);
uint32_t r100_pll_rreg(struct radeon_device *rdev, uint32_t reg);
-int r100_copy_blit(struct radeon_device *rdev,
- uint64_t src_offset,
- uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence);
+struct radeon_fence *r100_copy_blit(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
int r100_set_surface_reg(struct radeon_device *rdev, int reg,
uint32_t tiling_flags, uint32_t pitch,
uint32_t offset, uint32_t obj_size);
/*
* r200,rv250,rs300,rv280
*/
-extern int r200_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset,
- uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence);
+struct radeon_fence *r200_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
void r200_set_safe_registers(struct radeon_device *rdev);
/*
void r600_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
int r600_ring_test(struct radeon_device *rdev, struct radeon_ring *cp);
int r600_dma_ring_test(struct radeon_device *rdev, struct radeon_ring *cp);
-int r600_copy_cpdma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages, struct radeon_fence **fence);
-int r600_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages, struct radeon_fence **fence);
+struct radeon_fence *r600_copy_cpdma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
+struct radeon_fence *r600_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
void r600_hpd_init(struct radeon_device *rdev);
void r600_hpd_fini(struct radeon_device *rdev);
bool r600_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd);
void r600_rlc_stop(struct radeon_device *rdev);
/* r600 audio */
int r600_audio_init(struct radeon_device *rdev);
-struct r600_audio_pin r600_audio_status(struct radeon_device *rdev);
void r600_audio_fini(struct radeon_device *rdev);
void r600_audio_set_dto(struct drm_encoder *encoder, u32 clock);
void r600_hdmi_update_avi_infoframe(struct drm_encoder *encoder, void *buffer,
void r700_vram_gtt_location(struct radeon_device *rdev, struct radeon_mc *mc);
void r700_cp_stop(struct radeon_device *rdev);
void r700_cp_fini(struct radeon_device *rdev);
-int rv770_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence);
+struct radeon_fence *rv770_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
u32 rv770_get_xclk(struct radeon_device *rdev);
int rv770_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk);
int rv770_get_temp(struct radeon_device *rdev);
struct radeon_fence *fence);
void evergreen_dma_ring_ib_execute(struct radeon_device *rdev,
struct radeon_ib *ib);
-int evergreen_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence);
+struct radeon_fence *evergreen_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable);
void evergreen_hdmi_setmode(struct drm_encoder *encoder, struct drm_display_mode *mode);
int evergreen_get_temp(struct radeon_device *rdev);
void si_vm_fini(struct radeon_device *rdev);
void si_vm_flush(struct radeon_device *rdev, int ridx, struct radeon_vm *vm);
int si_ib_parse(struct radeon_device *rdev, struct radeon_ib *ib);
-int si_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence);
+struct radeon_fence *si_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
void si_dma_vm_copy_pages(struct radeon_device *rdev,
struct radeon_ib *ib,
struct radeon_semaphore *semaphore,
bool emit_wait);
void cik_sdma_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
-int cik_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence);
-int cik_copy_cpdma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence);
+struct radeon_fence *cik_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
+struct radeon_fence *cik_copy_cpdma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv);
int cik_sdma_ring_test(struct radeon_device *rdev, struct radeon_ring *ring);
int cik_sdma_ib_test(struct radeon_device *rdev, struct radeon_ring *ring);
bool cik_sdma_is_lockup(struct radeon_device *rdev, struct radeon_ring *ring);
struct radeon_ring *ring);
void uvd_v1_0_set_wptr(struct radeon_device *rdev,
struct radeon_ring *ring);
+int uvd_v1_0_resume(struct radeon_device *rdev);
int uvd_v1_0_init(struct radeon_device *rdev);
void uvd_v1_0_fini(struct radeon_device *rdev);
void uvd_v1_0_stop(struct radeon_device *rdev);
int uvd_v1_0_ring_test(struct radeon_device *rdev, struct radeon_ring *ring);
+void uvd_v1_0_fence_emit(struct radeon_device *rdev,
+ struct radeon_fence *fence);
int uvd_v1_0_ib_test(struct radeon_device *rdev, struct radeon_ring *ring);
bool uvd_v1_0_semaphore_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
}
}
+ /* Fujitsu D3003-S2 board lists DVI-I as DVI-I and VGA */
+ if ((dev->pdev->device == 0x9805) &&
+ (dev->pdev->subsystem_vendor == 0x1734) &&
+ (dev->pdev->subsystem_device == 0x11bd)) {
+ if (*connector_type == DRM_MODE_CONNECTOR_VGA)
+ return false;
+ }
return true;
}
-const int supported_devices_connector_convert[] = {
+static const int supported_devices_connector_convert[] = {
DRM_MODE_CONNECTOR_Unknown,
DRM_MODE_CONNECTOR_VGA,
DRM_MODE_CONNECTOR_DVII,
DRM_MODE_CONNECTOR_DisplayPort
};
-const uint16_t supported_devices_connector_object_id_convert[] = {
+static const uint16_t supported_devices_connector_object_id_convert[] = {
CONNECTOR_OBJECT_ID_NONE,
CONNECTOR_OBJECT_ID_VGA,
CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_I, /* not all boards support DL */
CONNECTOR_OBJECT_ID_SVIDEO
};
-const int object_connector_convert[] = {
+static const int object_connector_convert[] = {
DRM_MODE_CONNECTOR_Unknown,
DRM_MODE_CONNECTOR_DVII,
DRM_MODE_CONNECTOR_DVII,
(controller->ucFanParameters &
ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
rdev->pm.int_thermal_type = THERMAL_TYPE_KV;
- } else if ((controller->ucType ==
- ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) ||
- (controller->ucType ==
- ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) ||
- (controller->ucType ==
- ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL)) {
- DRM_INFO("Special thermal controller config\n");
+ } else if (controller->ucType ==
+ ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
+ DRM_INFO("External GPIO thermal controller %s fan control\n",
+ (controller->ucFanParameters &
+ ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+ rdev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
+ } else if (controller->ucType ==
+ ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
+ DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
+ (controller->ucFanParameters &
+ ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+ rdev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
+ } else if (controller->ucType ==
+ ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
+ DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
+ (controller->ucFanParameters &
+ ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+ rdev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
pp_lib_thermal_controller_names[controller->ucType],
controller->ucI2cAddress >> 1,
(controller->ucFanParameters &
ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+ rdev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
i2c_bus = radeon_lookup_i2c_gpio(rdev, controller->ucI2cLine);
rdev->pm.i2c_bus = radeon_i2c_lookup(rdev, &i2c_bus);
if (rdev->pm.i2c_bus) {
for (i = 0; i < n; i++) {
switch (flag) {
case RADEON_BENCHMARK_COPY_DMA:
- r = radeon_copy_dma(rdev, saddr, daddr,
- size / RADEON_GPU_PAGE_SIZE,
- &fence);
+ fence = radeon_copy_dma(rdev, saddr, daddr,
+ size / RADEON_GPU_PAGE_SIZE,
+ NULL);
break;
case RADEON_BENCHMARK_COPY_BLIT:
- r = radeon_copy_blit(rdev, saddr, daddr,
- size / RADEON_GPU_PAGE_SIZE,
- &fence);
+ fence = radeon_copy_blit(rdev, saddr, daddr,
+ size / RADEON_GPU_PAGE_SIZE,
+ NULL);
break;
default:
DRM_ERROR("Unknown copy method\n");
- r = -EINVAL;
+ return -EINVAL;
}
- if (r)
- goto exit_do_move;
+ if (IS_ERR(fence))
+ return PTR_ERR(fence);
+
r = radeon_fence_wait(fence, false);
- if (r)
- goto exit_do_move;
radeon_fence_unref(&fence);
+ if (r)
+ return r;
}
end_jiffies = jiffies;
- r = jiffies_to_msecs(end_jiffies - start_jiffies);
-
-exit_do_move:
- if (fence)
- radeon_fence_unref(&fence);
- return r;
+ return jiffies_to_msecs(end_jiffies - start_jiffies);
}
int time;
n = RADEON_BENCHMARK_ITERATIONS;
- r = radeon_bo_create(rdev, size, PAGE_SIZE, true, sdomain, 0, NULL, &sobj);
+ r = radeon_bo_create(rdev, size, PAGE_SIZE, true, sdomain, 0, NULL, NULL, &sobj);
if (r) {
goto out_cleanup;
}
if (r) {
goto out_cleanup;
}
- r = radeon_bo_create(rdev, size, PAGE_SIZE, true, ddomain, 0, NULL, &dobj);
+ r = radeon_bo_create(rdev, size, PAGE_SIZE, true, ddomain, 0, NULL, NULL, &dobj);
if (r) {
goto out_cleanup;
}
CONNECTOR_UNSUPPORTED_LEGACY
};
-const int legacy_connector_convert[] = {
+static const int legacy_connector_convert[] = {
DRM_MODE_CONNECTOR_Unknown,
DRM_MODE_CONNECTOR_DVID,
DRM_MODE_CONNECTOR_VGA,
dev_priv->buffers_offset = init->buffers_offset;
dev_priv->gart_textures_offset = init->gart_textures_offset;
- master_priv->sarea = drm_getsarea(dev);
+ master_priv->sarea = drm_legacy_getsarea(dev);
if (!master_priv->sarea) {
DRM_ERROR("could not find sarea!\n");
radeon_do_cleanup_cp(dev);
return -EINVAL;
}
- dev_priv->cp_ring = drm_core_findmap(dev, init->ring_offset);
+ dev_priv->cp_ring = drm_legacy_findmap(dev, init->ring_offset);
if (!dev_priv->cp_ring) {
DRM_ERROR("could not find cp ring region!\n");
radeon_do_cleanup_cp(dev);
return -EINVAL;
}
- dev_priv->ring_rptr = drm_core_findmap(dev, init->ring_rptr_offset);
+ dev_priv->ring_rptr = drm_legacy_findmap(dev, init->ring_rptr_offset);
if (!dev_priv->ring_rptr) {
DRM_ERROR("could not find ring read pointer!\n");
radeon_do_cleanup_cp(dev);
return -EINVAL;
}
dev->agp_buffer_token = init->buffers_offset;
- dev->agp_buffer_map = drm_core_findmap(dev, init->buffers_offset);
+ dev->agp_buffer_map = drm_legacy_findmap(dev, init->buffers_offset);
if (!dev->agp_buffer_map) {
DRM_ERROR("could not find dma buffer region!\n");
radeon_do_cleanup_cp(dev);
if (init->gart_textures_offset) {
dev_priv->gart_textures =
- drm_core_findmap(dev, init->gart_textures_offset);
+ drm_legacy_findmap(dev, init->gart_textures_offset);
if (!dev_priv->gart_textures) {
DRM_ERROR("could not find GART texture region!\n");
radeon_do_cleanup_cp(dev);
#if __OS_HAS_AGP
if (dev_priv->flags & RADEON_IS_AGP) {
- drm_core_ioremap_wc(dev_priv->cp_ring, dev);
- drm_core_ioremap_wc(dev_priv->ring_rptr, dev);
- drm_core_ioremap_wc(dev->agp_buffer_map, dev);
+ drm_legacy_ioremap_wc(dev_priv->cp_ring, dev);
+ drm_legacy_ioremap_wc(dev_priv->ring_rptr, dev);
+ drm_legacy_ioremap_wc(dev->agp_buffer_map, dev);
if (!dev_priv->cp_ring->handle ||
!dev_priv->ring_rptr->handle ||
!dev->agp_buffer_map->handle) {
dev_priv->gart_info.mapping.size =
dev_priv->gart_info.table_size;
- drm_core_ioremap_wc(&dev_priv->gart_info.mapping, dev);
+ drm_legacy_ioremap_wc(&dev_priv->gart_info.mapping, dev);
dev_priv->gart_info.addr =
dev_priv->gart_info.mapping.handle;
#if __OS_HAS_AGP
if (dev_priv->flags & RADEON_IS_AGP) {
if (dev_priv->cp_ring != NULL) {
- drm_core_ioremapfree(dev_priv->cp_ring, dev);
+ drm_legacy_ioremapfree(dev_priv->cp_ring, dev);
dev_priv->cp_ring = NULL;
}
if (dev_priv->ring_rptr != NULL) {
- drm_core_ioremapfree(dev_priv->ring_rptr, dev);
+ drm_legacy_ioremapfree(dev_priv->ring_rptr, dev);
dev_priv->ring_rptr = NULL;
}
if (dev->agp_buffer_map != NULL) {
- drm_core_ioremapfree(dev->agp_buffer_map, dev);
+ drm_legacy_ioremapfree(dev->agp_buffer_map, dev);
dev->agp_buffer_map = NULL;
}
} else
if (dev_priv->gart_info.gart_table_location == DRM_ATI_GART_FB)
{
- drm_core_ioremapfree(&dev_priv->gart_info.mapping, dev);
+ drm_legacy_ioremapfree(&dev_priv->gart_info.mapping, dev);
dev_priv->gart_info.addr = NULL;
}
}
else
dev_priv->flags |= RADEON_IS_PCI;
- ret = drm_addmap(dev, pci_resource_start(dev->pdev, 2),
- pci_resource_len(dev->pdev, 2), _DRM_REGISTERS,
- _DRM_READ_ONLY | _DRM_DRIVER, &dev_priv->mmio);
+ ret = drm_legacy_addmap(dev, pci_resource_start(dev->pdev, 2),
+ pci_resource_len(dev->pdev, 2), _DRM_REGISTERS,
+ _DRM_READ_ONLY | _DRM_DRIVER, &dev_priv->mmio);
if (ret != 0)
return ret;
/* prebuild the SAREA */
sareapage = max_t(unsigned long, SAREA_MAX, PAGE_SIZE);
- ret = drm_addmap(dev, 0, sareapage, _DRM_SHM, _DRM_CONTAINS_LOCK,
- &master_priv->sarea);
+ ret = drm_legacy_addmap(dev, 0, sareapage, _DRM_SHM, _DRM_CONTAINS_LOCK,
+ &master_priv->sarea);
if (ret) {
DRM_ERROR("SAREA setup failed\n");
kfree(master_priv);
master_priv->sarea_priv = NULL;
if (master_priv->sarea)
- drm_rmmap_locked(dev, master_priv->sarea);
+ drm_legacy_rmmap_locked(dev, master_priv->sarea);
kfree(master_priv);
dev_priv->gart_info.table_size = RADEON_PCIGART_TABLE_SIZE;
dev_priv->fb_aper_offset = pci_resource_start(dev->pdev, 0);
- ret = drm_addmap(dev, dev_priv->fb_aper_offset,
- pci_resource_len(dev->pdev, 0), _DRM_FRAME_BUFFER,
- _DRM_WRITE_COMBINING, &map);
+ ret = drm_legacy_addmap(dev, dev_priv->fb_aper_offset,
+ pci_resource_len(dev->pdev, 0),
+ _DRM_FRAME_BUFFER, _DRM_WRITE_COMBINING, &map);
if (ret != 0)
return ret;
DRM_DEBUG("\n");
- drm_rmmap(dev, dev_priv->mmio);
+ drm_legacy_rmmap(dev, dev_priv->mmio);
kfree(dev_priv);
struct radeon_cs_chunk *chunk;
struct radeon_cs_buckets buckets;
unsigned i, j;
- bool duplicate;
+ bool duplicate, need_mmap_lock = false;
+ int r;
if (p->chunk_relocs_idx == -1) {
return 0;
* the buffers used for read only, which doubles the range
* to 0 to 31. 32 is reserved for the kernel driver.
*/
- priority = (r->flags & 0xf) * 2 + !!r->write_domain;
+ priority = (r->flags & RADEON_RELOC_PRIO_MASK) * 2
+ + !!r->write_domain;
/* the first reloc of an UVD job is the msg and that must be in
- VRAM, also but everything into VRAM on AGP cards to avoid
- image corruptions */
+	   VRAM, also put everything into VRAM on AGP cards and older
+	   IGP chips to avoid image corruption */
if (p->ring == R600_RING_TYPE_UVD_INDEX &&
- (i == 0 || drm_pci_device_is_agp(p->rdev->ddev))) {
+ (i == 0 || drm_pci_device_is_agp(p->rdev->ddev) ||
+ p->rdev->family == CHIP_RS780 ||
+ p->rdev->family == CHIP_RS880)) {
+
/* TODO: is this still needed for NI+ ? */
p->relocs[i].prefered_domains =
RADEON_GEM_DOMAIN_VRAM;
p->relocs[i].allowed_domains = domain;
}
+ if (radeon_ttm_tt_has_userptr(p->relocs[i].robj->tbo.ttm)) {
+ uint32_t domain = p->relocs[i].prefered_domains;
+ if (!(domain & RADEON_GEM_DOMAIN_GTT)) {
+ DRM_ERROR("Only RADEON_GEM_DOMAIN_GTT is "
+ "allowed for userptr BOs\n");
+ return -EINVAL;
+ }
+ need_mmap_lock = true;
+ domain = RADEON_GEM_DOMAIN_GTT;
+ p->relocs[i].prefered_domains = domain;
+ p->relocs[i].allowed_domains = domain;
+ }
+
p->relocs[i].tv.bo = &p->relocs[i].robj->tbo;
+ p->relocs[i].tv.shared = !r->write_domain;
p->relocs[i].handle = r->handle;
radeon_cs_buckets_add(&buckets, &p->relocs[i].tv.head,
if (p->cs_flags & RADEON_CS_USE_VM)
p->vm_bos = radeon_vm_get_bos(p->rdev, p->ib.vm,
&p->validated);
+ if (need_mmap_lock)
+		down_read(&current->mm->mmap_sem);
- return radeon_bo_list_validate(p->rdev, &p->ticket, &p->validated, p->ring);
+ r = radeon_bo_list_validate(p->rdev, &p->ticket, &p->validated, p->ring);
+
+ if (need_mmap_lock)
+		up_read(&current->mm->mmap_sem);
+
+ return r;
}
static int radeon_cs_get_ring(struct radeon_cs_parser *p, u32 ring, s32 priority)
return 0;
}
-static void radeon_cs_sync_rings(struct radeon_cs_parser *p)
+static int radeon_cs_sync_rings(struct radeon_cs_parser *p)
{
- int i;
+ int i, r = 0;
for (i = 0; i < p->nrelocs; i++) {
+ struct reservation_object *resv;
+
if (!p->relocs[i].robj)
continue;
- radeon_semaphore_sync_to(p->ib.semaphore,
- p->relocs[i].robj->tbo.sync_obj);
+ resv = p->relocs[i].robj->tbo.resv;
+ r = radeon_semaphore_sync_resv(p->rdev, p->ib.semaphore, resv,
+ p->relocs[i].tv.shared);
+
+ if (r)
+ break;
}
+ return r;
}
/* XXX: note that this is called from the legacy UMS CS ioctl as well */
ttm_eu_fence_buffer_objects(&parser->ticket,
&parser->validated,
- parser->ib.fence);
+ &parser->ib.fence->base);
} else if (backoff) {
ttm_eu_backoff_reservation(&parser->ticket,
&parser->validated);
return r;
}
+ r = radeon_cs_sync_rings(parser);
+ if (r) {
+ if (r != -ERESTARTSYS)
+ DRM_ERROR("Failed to sync rings: %i\n", r);
+ return r;
+ }
+
if (parser->ring == R600_RING_TYPE_UVD_INDEX)
radeon_uvd_note_usage(rdev);
else if ((parser->ring == TN_RING_TYPE_VCE1_INDEX) ||
(parser->ring == TN_RING_TYPE_VCE2_INDEX))
radeon_vce_note_usage(rdev);
- radeon_cs_sync_rings(parser);
- r = radeon_ib_schedule(rdev, &parser->ib, NULL);
+ r = radeon_ib_schedule(rdev, &parser->ib, NULL, true);
if (r) {
DRM_ERROR("Failed to schedule IB !\n");
}
if (r) {
goto out;
}
- radeon_cs_sync_rings(parser);
- radeon_semaphore_sync_to(parser->ib.semaphore, vm->fence);
+
+ r = radeon_cs_sync_rings(parser);
+ if (r) {
+ if (r != -ERESTARTSYS)
+ DRM_ERROR("Failed to sync rings: %i\n", r);
+ goto out;
+ }
+ radeon_semaphore_sync_fence(parser->ib.semaphore, vm->fence);
if ((rdev->family >= CHIP_TAHITI) &&
(parser->chunk_const_ib_idx != -1)) {
- r = radeon_ib_schedule(rdev, &parser->ib, &parser->const_ib);
+ r = radeon_ib_schedule(rdev, &parser->ib, &parser->const_ib, true);
} else {
- r = radeon_ib_schedule(rdev, &parser->ib, NULL);
+ r = radeon_ib_schedule(rdev, &parser->ib, NULL, true);
}
out:
up_read(&rdev->exclusive_lock);
return -EBUSY;
}
+ if (rdev->in_reset) {
+ up_read(&rdev->exclusive_lock);
+ r = radeon_gpu_reset(rdev);
+ if (!r)
+ r = -EAGAIN;
+ return r;
+ }
/* initialize parser */
memset(&parser, 0, sizeof(struct radeon_cs_parser));
parser.filp = filp;
if (rdev->wb.wb_obj == NULL) {
r = radeon_bo_create(rdev, RADEON_GPU_PAGE_SIZE, PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_GTT, 0, NULL,
+ RADEON_GEM_DOMAIN_GTT, 0, NULL, NULL,
&rdev->wb.wb_obj);
if (r) {
dev_warn(rdev->dev, "(%d) create WB bo failed\n", r);
for (i = 0; i < RADEON_NUM_RINGS; i++) {
rdev->ring[i].idx = i;
}
+ rdev->fence_context = fence_context_alloc(RADEON_NUM_RINGS);
DRM_INFO("initializing kernel modesetting (%s 0x%04X:0x%04X 0x%04X:0x%04X).\n",
radeon_family_name[rdev->family], pdev->vendor, pdev->device,
init_rwsem(&rdev->pm.mclk_lock);
init_rwsem(&rdev->exclusive_lock);
init_waitqueue_head(&rdev->irq.vblank_queue);
+ mutex_init(&rdev->mn_lock);
+ hash_init(rdev->mn_hash);
r = radeon_gem_init(rdev);
if (r)
return r;
if (r)
return r;
- r = radeon_ib_ring_tests(rdev);
- if (r)
- DRM_ERROR("ib ring test failed (%d).\n", r);
-
r = radeon_gem_debugfs_init(rdev);
if (r) {
DRM_ERROR("registering gem debugfs failed (%d).\n", r);
return r;
}
+ r = radeon_ib_ring_tests(rdev);
+ if (r)
+ DRM_ERROR("ib ring test failed (%d).\n", r);
+
if ((radeon_testing & 1)) {
if (rdev->accel_working)
radeon_test_moves(rdev);
struct drm_crtc *crtc;
struct drm_connector *connector;
int i, r;
- bool force_completion = false;
if (dev == NULL || dev->dev_private == NULL) {
return -ENODEV;
r = radeon_fence_wait_empty(rdev, i);
if (r) {
/* delay GPU reset to resume */
- force_completion = true;
+ radeon_fence_driver_force_completion(rdev, i);
}
}
- if (force_completion) {
- radeon_fence_driver_force_completion(rdev);
- }
radeon_save_bios_scratch_regs(rdev);
return 0;
}
- rdev->needs_reset = false;
-
radeon_save_bios_scratch_regs(rdev);
/* block TTM */
resched = ttm_bo_lock_delayed_workqueue(&rdev->mman.bdev);
- radeon_pm_suspend(rdev);
radeon_suspend(rdev);
+ radeon_hpd_fini(rdev);
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
ring_sizes[i] = radeon_ring_backup(rdev, &rdev->ring[i],
}
}
-retry:
r = radeon_asic_reset(rdev);
if (!r) {
dev_info(rdev->dev, "GPU reset succeeded, trying to resume\n");
radeon_restore_bios_scratch_regs(rdev);
- if (!r) {
- for (i = 0; i < RADEON_NUM_RINGS; ++i) {
+ for (i = 0; i < RADEON_NUM_RINGS; ++i) {
+ if (!r && ring_data[i]) {
radeon_ring_restore(rdev, &rdev->ring[i],
ring_sizes[i], ring_data[i]);
- ring_sizes[i] = 0;
- ring_data[i] = NULL;
+ } else {
+ radeon_fence_driver_force_completion(rdev, i);
+ kfree(ring_data[i]);
}
+ }
- r = radeon_ib_ring_tests(rdev);
+ if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled) {
+ /* do dpm late init */
+ r = radeon_pm_late_init(rdev);
if (r) {
- dev_err(rdev->dev, "ib ring test failed (%d).\n", r);
- if (saved) {
- saved = false;
- radeon_suspend(rdev);
- goto retry;
- }
+ rdev->pm.dpm_enabled = false;
+ DRM_ERROR("radeon_pm_late_init failed, disabling dpm\n");
}
} else {
- radeon_fence_driver_force_completion(rdev);
- for (i = 0; i < RADEON_NUM_RINGS; ++i) {
- kfree(ring_data[i]);
+ /* resume old pm late */
+ radeon_pm_resume(rdev);
+ }
+
+ /* init dig PHYs, disp eng pll */
+ if (rdev->is_atom_bios) {
+ radeon_atom_encoder_init(rdev);
+ radeon_atom_disp_eng_pll_init(rdev);
+ /* turn on the BL */
+ if (rdev->mode_info.bl_encoder) {
+ u8 bl_level = radeon_get_backlight_level(rdev,
+ rdev->mode_info.bl_encoder);
+ radeon_set_backlight_level(rdev, rdev->mode_info.bl_encoder,
+ bl_level);
}
}
+ /* reset hpd state */
+ radeon_hpd_init(rdev);
+
+ ttm_bo_unlock_delayed_workqueue(&rdev->mman.bdev, resched);
+
+ rdev->in_reset = true;
+ rdev->needs_reset = false;
+
+ downgrade_write(&rdev->exclusive_lock);
- radeon_pm_resume(rdev);
drm_helper_resume_force_mode(rdev->ddev);
- ttm_bo_unlock_delayed_workqueue(&rdev->mman.bdev, resched);
- if (r) {
+ /* set the power state here in case we are a PX system or headless */
+ if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled)
+ radeon_pm_compute_clocks(rdev);
+
+ if (!r) {
+ r = radeon_ib_ring_tests(rdev);
+ if (r && saved)
+ r = -EAGAIN;
+ } else {
/* bad news, how do we tell userspace? */
dev_info(rdev->dev, "GPU reset failed\n");
}
- up_write(&rdev->exclusive_lock);
+ rdev->needs_reset = r == -EAGAIN;
+ rdev->in_reset = false;
+
+ up_read(&rdev->exclusive_lock);
return r;
}
down_read(&rdev->exclusive_lock);
if (work->fence) {
- r = radeon_fence_wait(work->fence, false);
- if (r == -EDEADLK) {
- up_read(&rdev->exclusive_lock);
- r = radeon_gpu_reset(rdev);
- down_read(&rdev->exclusive_lock);
- }
+ struct radeon_fence *fence;
+
+ fence = to_radeon_fence(work->fence);
+ if (fence && fence->rdev == rdev) {
+ r = radeon_fence_wait(fence, false);
+ if (r == -EDEADLK) {
+ up_read(&rdev->exclusive_lock);
+ do {
+ r = radeon_gpu_reset(rdev);
+ } while (r == -EAGAIN);
+ down_read(&rdev->exclusive_lock);
+ }
+ } else
+ r = fence_wait(work->fence, false);
+
if (r)
DRM_ERROR("failed to wait on page flip fence (%d)!\n", r);
* confused about which BO the CRTC is scanning out
*/
- radeon_fence_unref(&work->fence);
+ fence_put(work->fence);
+ work->fence = NULL;
}
/* We borrow the event spin lock for protecting flip_status */
obj = new_radeon_fb->obj;
new_rbo = gem_to_radeon_bo(obj);
- spin_lock(&new_rbo->tbo.bdev->fence_lock);
- if (new_rbo->tbo.sync_obj)
- work->fence = radeon_fence_ref(new_rbo->tbo.sync_obj);
- spin_unlock(&new_rbo->tbo.bdev->fence_lock);
-
/* pin the new buffer */
DRM_DEBUG_DRIVER("flip-ioctl() cur_rbo = %p, new_rbo = %p\n",
work->old_rbo, new_rbo);
DRM_ERROR("failed to pin new rbo buffer before flip\n");
goto cleanup;
}
+ work->fence = fence_get(reservation_object_get_excl(new_rbo->tbo.resv));
radeon_bo_get_tiling_flags(new_rbo, &tiling_flags, NULL);
radeon_bo_unreserve(new_rbo);
cleanup:
drm_gem_object_unreference_unlocked(&work->old_rbo->gem_base);
- radeon_fence_unref(&work->fence);
+ fence_put(work->fence);
kfree(work);
-
return r;
}
/* In vblank? */
if (in_vbl)
- ret |= DRM_SCANOUTPOS_INVBL;
+ ret |= DRM_SCANOUTPOS_IN_VBLANK;
/* Is vpos outside nominal vblank area, but less than
* 1/100 of a frame height away from start of vblank?
#include <linux/module.h>
#include <linux/pm_runtime.h>
#include <linux/vga_switcheroo.h>
+#include <drm/drm_gem.h>
+
#include "drm_crtc_helper.h"
/*
* KMS wrapper.
struct drm_file *file_priv);
void radeon_gem_object_close(struct drm_gem_object *obj,
struct drm_file *file_priv);
+struct dma_buf *radeon_gem_prime_export(struct drm_device *dev,
+ struct drm_gem_object *gobj,
+ int flags);
extern int radeon_get_crtc_scanoutpos(struct drm_device *dev, int crtc,
unsigned int flags,
int *vpos, int *hpos, ktime_t *stime,
struct drm_mode_create_dumb *args);
struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
- size_t size,
+ struct dma_buf_attachment *,
struct sg_table *sg);
int radeon_gem_prime_pin(struct drm_gem_object *obj);
void radeon_gem_prime_unpin(struct drm_gem_object *obj);
int radeon_vm_block_size = -1;
int radeon_deep_color = 0;
int radeon_use_pflipirq = 2;
+int radeon_bapm = -1;
MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers");
module_param_named(no_wb, radeon_no_wb, int, 0444);
MODULE_PARM_DESC(use_pflipirq, "Pflip irqs for pageflip completion (0 = disable, 1 = as fallback, 2 = exclusive (default))");
module_param_named(use_pflipirq, radeon_use_pflipirq, int, 0444);
+MODULE_PARM_DESC(bapm, "BAPM support (1 = enable, 0 = disable, -1 = auto)");
+module_param_named(bapm, radeon_bapm, int, 0444);
+
static struct pci_device_id pciidlist[] = {
radeon_PCI_IDS
};
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
- .mmap = drm_mmap,
+ .mmap = drm_legacy_mmap,
.poll = drm_poll,
.read = drm_read,
#ifdef CONFIG_COMPAT
.preclose = radeon_driver_preclose,
.postclose = radeon_driver_postclose,
.lastclose = radeon_driver_lastclose,
+ .set_busid = drm_pci_set_busid,
.unload = radeon_driver_unload,
.suspend = radeon_suspend,
.resume = radeon_resume,
.preclose = radeon_driver_preclose_kms,
.postclose = radeon_driver_postclose_kms,
.lastclose = radeon_driver_lastclose_kms,
+ .set_busid = drm_pci_set_busid,
.unload = radeon_driver_unload_kms,
.get_vblank_counter = radeon_get_vblank_counter_kms,
.enable_vblank = radeon_enable_vblank_kms,
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
- .gem_prime_export = drm_gem_prime_export,
+ .gem_prime_export = radeon_gem_prime_export,
.gem_prime_import = drm_gem_prime_import,
.gem_prime_pin = radeon_gem_prime_pin,
.gem_prime_unpin = radeon_gem_prime_unpin,
#include <linux/firmware.h>
#include <linux/platform_device.h>
+#include <drm/drm_legacy.h>
+#include <drm/ati_pcigart.h>
#include "radeon_family.h"
/* General customization:
}
}
+bool radeon_encoder_is_digital(struct drm_encoder *encoder)
+{
+ struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+ switch (radeon_encoder->encoder_id) {
+ case ENCODER_OBJECT_ID_INTERNAL_LVDS:
+ case ENCODER_OBJECT_ID_INTERNAL_TMDS1:
+ case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
+ case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
+ case ENCODER_OBJECT_ID_INTERNAL_DVO1:
+ case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
+ case ENCODER_OBJECT_ID_INTERNAL_DDI:
+ case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
+ case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
+ case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
+ case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
+ case ENCODER_OBJECT_ID_INTERNAL_UNIPHY3:
+ return true;
+ default:
+ return false;
+ }
+}
static int radeonfb_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
- struct radeon_fbdev *rfbdev = (struct radeon_fbdev *)helper;
+ struct radeon_fbdev *rfbdev =
+ container_of(helper, struct radeon_fbdev, helper);
struct radeon_device *rdev = rfbdev->rdev;
struct fb_info *info;
struct drm_framebuffer *fb = NULL;
return seq;
}
+/**
+ * radeon_fence_schedule_check - schedule lockup check
+ *
+ * @rdev: radeon_device pointer
+ * @ring: ring index we should work with
+ *
+ * Queues a delayed work item to check for lockups.
+ */
+static void radeon_fence_schedule_check(struct radeon_device *rdev, int ring)
+{
+ /*
+ * Do not reset the timer here with mod_delayed_work,
+ * this can livelock in an interaction with TTM delayed destroy.
+ */
+ queue_delayed_work(system_power_efficient_wq,
+ &rdev->fence_drv[ring].lockup_work,
+ RADEON_FENCE_JIFFIES_TIMEOUT);
+}
+
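The choice of queue_delayed_work() over mod_delayed_work() above is deliberate; a minimal sketch of the difference, assuming only the stock workqueue API (arm_lockup_check() is an illustrative name, not part of the patch):

    #include <linux/workqueue.h>

    /* queue_delayed_work() is a no-op while the item is still pending,
     * so an already-armed check keeps its original deadline.
     * mod_delayed_work() would push the deadline back on every call,
     * which is the livelock the comment above warns about. */
    static void arm_lockup_check(struct delayed_work *dw)
    {
    	queue_delayed_work(system_power_efficient_wq, dw, HZ);
    	/* NOT: mod_delayed_work(system_power_efficient_wq, dw, HZ); */
    }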
/**
* radeon_fence_emit - emit a fence on the requested ring
*
struct radeon_fence **fence,
int ring)
{
+ u64 seq = ++rdev->fence_drv[ring].sync_seq[ring];
+
/* we are protected by the ring emission mutex */
*fence = kmalloc(sizeof(struct radeon_fence), GFP_KERNEL);
if ((*fence) == NULL) {
return -ENOMEM;
}
- kref_init(&((*fence)->kref));
(*fence)->rdev = rdev;
- (*fence)->seq = ++rdev->fence_drv[ring].sync_seq[ring];
+ (*fence)->seq = seq;
(*fence)->ring = ring;
+ fence_init(&(*fence)->base, &radeon_fence_ops,
+ &rdev->fence_queue.lock, rdev->fence_context + ring, seq);
radeon_fence_ring_emit(rdev, ring, *fence);
trace_radeon_fence_emit(rdev->ddev, ring, (*fence)->seq);
+ radeon_fence_schedule_check(rdev, ring);
return 0;
}
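For context, the fence_init() call above wires the new fence into the cross-driver fence API; a hedged restatement of its contract (init_ring_fence() is illustrative only):

    #include <linux/fence.h>

    /* The shared spinlock, the per-ring context and the monotonic seqno
     * are what let the core order and test fences without driver help. */
    static void init_ring_fence(struct radeon_fence *fence,
    			    struct radeon_device *rdev,
    			    int ring, u64 seq)
    {
    	fence_init(&fence->base, &radeon_fence_ops,
    		   &rdev->fence_queue.lock,	/* protects signaling */
    		   rdev->fence_context + ring,	/* one context per ring */
    		   seq);			/* monotonic per context */
    }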
/**
- * radeon_fence_process - process a fence
+ * radeon_fence_check_signaled - callback from fence_queue
+ *
+ * This function is called with the fence_queue lock held, which is also
+ * used for the fence locking itself, so unlocked variants are used for
+ * fence_signal and remove_wait_queue.
+ */
+static int radeon_fence_check_signaled(wait_queue_t *wait, unsigned mode, int flags, void *key)
+{
+ struct radeon_fence *fence;
+ u64 seq;
+
+ fence = container_of(wait, struct radeon_fence, fence_wake);
+
+ /*
+ * We cannot use radeon_fence_process here because we're already
+ * in the waitqueue, in a call from wake_up_all.
+ */
+ seq = atomic64_read(&fence->rdev->fence_drv[fence->ring].last_seq);
+ if (seq >= fence->seq) {
+ int ret = fence_signal_locked(&fence->base);
+
+ if (!ret)
+ FENCE_TRACE(&fence->base, "signaled from irq context\n");
+ else
+ FENCE_TRACE(&fence->base, "was already signaled\n");
+
+ radeon_irq_kms_sw_irq_put(fence->rdev, fence->ring);
+ __remove_wait_queue(&fence->rdev->fence_queue, &fence->fence_wake);
+ fence_put(&fence->base);
+ } else
+ FENCE_TRACE(&fence->base, "pending\n");
+ return 0;
+}
+
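The callback above relies on the generic custom-wake-function facility of wait queues; a small sketch of the pattern, with arm_custom_waiter() as a hypothetical helper:

    #include <linux/wait.h>

    /* Instead of waking a sleeping task, wake_up_all() on the queue
     * invokes fn in the waker's context.  The caller must already hold
     * the waitqueue lock, hence the unlocked __add_wait_queue(). */
    static void arm_custom_waiter(wait_queue_head_t *q, wait_queue_t *wait,
    			      int (*fn)(wait_queue_t *, unsigned, int, void *))
    {
    	wait->flags = 0;
    	wait->private = NULL;
    	wait->func = fn;
    	__add_wait_queue(q, wait);
    }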
+/**
+ * radeon_fence_activity - check for fence activity
*
* @rdev: radeon_device pointer
* @ring: ring index the fence is associated with
*
- * Checks the current fence value and wakes the fence queue
- * if the sequence number has increased (all asics).
+ * Checks the current fence value and calculates the last
+ * signaled fence value. Returns true if activity occurred
+ * on the ring, in which case the fence_queue should be woken up.
*/
-void radeon_fence_process(struct radeon_device *rdev, int ring)
+static bool radeon_fence_activity(struct radeon_device *rdev, int ring)
{
uint64_t seq, last_seq, last_emitted;
unsigned count_loop = 0;
}
} while (atomic64_xchg(&rdev->fence_drv[ring].last_seq, seq) > seq);
- if (wake)
- wake_up_all(&rdev->fence_queue);
+ if (seq < last_emitted)
+ radeon_fence_schedule_check(rdev, ring);
+
+ return wake;
}
/**
- * radeon_fence_destroy - destroy a fence
+ * radeon_fence_check_lockup - check for hardware lockup
*
- * @kref: fence kref
+ * @work: delayed work item
*
- * Frees the fence object (all asics).
+ * Checks for fence activity and, if there is none, probes
+ * the hardware to see whether a lockup occurred.
*/
-static void radeon_fence_destroy(struct kref *kref)
+static void radeon_fence_check_lockup(struct work_struct *work)
{
- struct radeon_fence *fence;
+ struct radeon_fence_driver *fence_drv;
+ struct radeon_device *rdev;
+ int ring;
+
+ fence_drv = container_of(work, struct radeon_fence_driver,
+ lockup_work.work);
+ rdev = fence_drv->rdev;
+ ring = fence_drv - &rdev->fence_drv[0];
+
+ if (!down_read_trylock(&rdev->exclusive_lock)) {
+ /* just reschedule the check if a reset is going on */
+ radeon_fence_schedule_check(rdev, ring);
+ return;
+ }
+
+ if (fence_drv->delayed_irq && rdev->ddev->irq_enabled) {
+ unsigned long irqflags;
+
+ fence_drv->delayed_irq = false;
+ spin_lock_irqsave(&rdev->irq.lock, irqflags);
+ radeon_irq_set(rdev);
+ spin_unlock_irqrestore(&rdev->irq.lock, irqflags);
+ }
+
+ if (radeon_fence_activity(rdev, ring))
+ wake_up_all(&rdev->fence_queue);
- fence = container_of(kref, struct radeon_fence, kref);
- kfree(fence);
+ else if (radeon_ring_is_lockup(rdev, ring, &rdev->ring[ring])) {
+
+		/* good news, we believe it's a lockup */
+ dev_warn(rdev->dev, "GPU lockup (current fence id "
+ "0x%016llx last fence id 0x%016llx on ring %d)\n",
+ (uint64_t)atomic64_read(&fence_drv->last_seq),
+ fence_drv->sync_seq[ring], ring);
+
+		/* remember that we need a reset */
+ rdev->needs_reset = true;
+ wake_up_all(&rdev->fence_queue);
+ }
+ up_read(&rdev->exclusive_lock);
+}
+
+/**
+ * radeon_fence_process - process a fence
+ *
+ * @rdev: radeon_device pointer
+ * @ring: ring index the fence is associated with
+ *
+ * Checks the current fence value and wakes the fence queue
+ * if the sequence number has increased (all asics).
+ */
+void radeon_fence_process(struct radeon_device *rdev, int ring)
+{
+ if (radeon_fence_activity(rdev, ring))
+ wake_up_all(&rdev->fence_queue);
}
/**
return false;
}
+static bool radeon_fence_is_signaled(struct fence *f)
+{
+ struct radeon_fence *fence = to_radeon_fence(f);
+ struct radeon_device *rdev = fence->rdev;
+ unsigned ring = fence->ring;
+ u64 seq = fence->seq;
+
+ if (atomic64_read(&rdev->fence_drv[ring].last_seq) >= seq) {
+ return true;
+ }
+
+ if (down_read_trylock(&rdev->exclusive_lock)) {
+ radeon_fence_process(rdev, ring);
+ up_read(&rdev->exclusive_lock);
+
+ if (atomic64_read(&rdev->fence_drv[ring].last_seq) >= seq) {
+ return true;
+ }
+ }
+ return false;
+}
+
+/**
+ * radeon_fence_enable_signaling - enable signaling on fence
+ * @fence: fence
+ *
+ * This function is called with the fence_queue lock held, and adds a callback
+ * to fence_queue that checks if this fence is signaled, and if so it
+ * signals the fence and removes itself.
+ */
+static bool radeon_fence_enable_signaling(struct fence *f)
+{
+ struct radeon_fence *fence = to_radeon_fence(f);
+ struct radeon_device *rdev = fence->rdev;
+
+ if (atomic64_read(&rdev->fence_drv[fence->ring].last_seq) >= fence->seq)
+ return false;
+
+ if (down_read_trylock(&rdev->exclusive_lock)) {
+ radeon_irq_kms_sw_irq_get(rdev, fence->ring);
+
+ if (radeon_fence_activity(rdev, fence->ring))
+ wake_up_all_locked(&rdev->fence_queue);
+
+ /* did fence get signaled after we enabled the sw irq? */
+ if (atomic64_read(&rdev->fence_drv[fence->ring].last_seq) >= fence->seq) {
+ radeon_irq_kms_sw_irq_put(rdev, fence->ring);
+ up_read(&rdev->exclusive_lock);
+ return false;
+ }
+
+ up_read(&rdev->exclusive_lock);
+ } else {
+		/* we're probably in a lockup, let's not fiddle too much */
+ if (radeon_irq_kms_sw_irq_get_delayed(rdev, fence->ring))
+ rdev->fence_drv[fence->ring].delayed_irq = true;
+ radeon_fence_schedule_check(rdev, fence->ring);
+ }
+
+ fence->fence_wake.flags = 0;
+ fence->fence_wake.private = NULL;
+ fence->fence_wake.func = radeon_fence_check_signaled;
+ __add_wait_queue(&rdev->fence_queue, &fence->fence_wake);
+ fence_get(f);
+
+ FENCE_TRACE(&fence->base, "armed on ring %i!\n", fence->ring);
+ return true;
+}
+
/**
* radeon_fence_signaled - check if a fence has signaled
*
*/
bool radeon_fence_signaled(struct radeon_fence *fence)
{
- if (!fence) {
+ if (!fence)
return true;
- }
- if (fence->seq == RADEON_FENCE_SIGNALED_SEQ) {
- return true;
- }
+
if (radeon_fence_seq_signaled(fence->rdev, fence->seq, fence->ring)) {
- fence->seq = RADEON_FENCE_SIGNALED_SEQ;
+ int ret;
+
+ ret = fence_signal(&fence->base);
+ if (!ret)
+ FENCE_TRACE(&fence->base, "signaled from radeon_fence_signaled\n");
return true;
}
return false;
}
/**
- * radeon_fence_wait_seq - wait for a specific sequence numbers
+ * radeon_fence_wait_seq_timeout - wait for a specific sequence numbers
*
* @rdev: radeon device pointer
* @target_seq: sequence number(s) we want to wait for
* @intr: use interruptable sleep
+ * @timeout: maximum time to wait, or MAX_SCHEDULE_TIMEOUT for infinite wait
*
* Wait for the requested sequence number(s) to be written by any ring
* (all asics). Sequence number array is indexed by ring id.
* @intr selects whether to use interruptible (true) or non-interruptible
* (false) sleep when waiting for the sequence number. Helper function
* for radeon_fence_wait_*().
- * Returns 0 if the sequence number has passed, error for all other cases.
+ * Returns the remaining time if the sequence number has passed, 0 when
+ * the wait timed out, or an error for all other cases.
* -EDEADLK is returned when a GPU lockup has been detected.
*/
-static int radeon_fence_wait_seq(struct radeon_device *rdev, u64 *target_seq,
- bool intr)
+static long radeon_fence_wait_seq_timeout(struct radeon_device *rdev,
+ u64 *target_seq, bool intr,
+ long timeout)
{
- uint64_t last_seq[RADEON_NUM_RINGS];
- bool signaled;
- int i, r;
-
- while (!radeon_fence_any_seq_signaled(rdev, target_seq)) {
+ long r;
+ int i;
- /* Save current sequence values, used to check for GPU lockups */
- for (i = 0; i < RADEON_NUM_RINGS; ++i) {
- if (!target_seq[i])
- continue;
+ if (radeon_fence_any_seq_signaled(rdev, target_seq))
+ return timeout;
- last_seq[i] = atomic64_read(&rdev->fence_drv[i].last_seq);
- trace_radeon_fence_wait_begin(rdev->ddev, i, target_seq[i]);
- radeon_irq_kms_sw_irq_get(rdev, i);
- }
+ /* enable IRQs and tracing */
+ for (i = 0; i < RADEON_NUM_RINGS; ++i) {
+ if (!target_seq[i])
+ continue;
- if (intr) {
- r = wait_event_interruptible_timeout(rdev->fence_queue, (
- (signaled = radeon_fence_any_seq_signaled(rdev, target_seq))
- || rdev->needs_reset), RADEON_FENCE_JIFFIES_TIMEOUT);
- } else {
- r = wait_event_timeout(rdev->fence_queue, (
- (signaled = radeon_fence_any_seq_signaled(rdev, target_seq))
- || rdev->needs_reset), RADEON_FENCE_JIFFIES_TIMEOUT);
- }
+ trace_radeon_fence_wait_begin(rdev->ddev, i, target_seq[i]);
+ radeon_irq_kms_sw_irq_get(rdev, i);
+ }
- for (i = 0; i < RADEON_NUM_RINGS; ++i) {
- if (!target_seq[i])
- continue;
+ if (intr) {
+ r = wait_event_interruptible_timeout(rdev->fence_queue, (
+ radeon_fence_any_seq_signaled(rdev, target_seq)
+ || rdev->needs_reset), timeout);
+ } else {
+ r = wait_event_timeout(rdev->fence_queue, (
+ radeon_fence_any_seq_signaled(rdev, target_seq)
+ || rdev->needs_reset), timeout);
+ }
- radeon_irq_kms_sw_irq_put(rdev, i);
- trace_radeon_fence_wait_end(rdev->ddev, i, target_seq[i]);
- }
+ if (rdev->needs_reset)
+ r = -EDEADLK;
- if (unlikely(r < 0))
- return r;
+ for (i = 0; i < RADEON_NUM_RINGS; ++i) {
+ if (!target_seq[i])
+ continue;
- if (unlikely(!signaled)) {
- if (rdev->needs_reset)
- return -EDEADLK;
-
- /* we were interrupted for some reason and fence
- * isn't signaled yet, resume waiting */
- if (r)
- continue;
-
- for (i = 0; i < RADEON_NUM_RINGS; ++i) {
- if (!target_seq[i])
- continue;
-
- if (last_seq[i] != atomic64_read(&rdev->fence_drv[i].last_seq))
- break;
- }
-
- if (i != RADEON_NUM_RINGS)
- continue;
-
- for (i = 0; i < RADEON_NUM_RINGS; ++i) {
- if (!target_seq[i])
- continue;
-
- if (radeon_ring_is_lockup(rdev, i, &rdev->ring[i]))
- break;
- }
-
- if (i < RADEON_NUM_RINGS) {
- /* good news we believe it's a lockup */
- dev_warn(rdev->dev, "GPU lockup (waiting for "
- "0x%016llx last fence id 0x%016llx on"
- " ring %d)\n",
- target_seq[i], last_seq[i], i);
-
- /* remember that we need an reset */
- rdev->needs_reset = true;
- wake_up_all(&rdev->fence_queue);
- return -EDEADLK;
- }
- }
+ radeon_irq_kms_sw_irq_put(rdev, i);
+ trace_radeon_fence_wait_end(rdev->ddev, i, target_seq[i]);
}
- return 0;
+
+ return r;
}
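A hedged illustration of the new return convention (wait_one_seq() is not part of the patch): negative is an error, zero means the wait timed out, positive is the remaining time in jiffies:

    static int wait_one_seq(struct radeon_device *rdev,
    			struct radeon_fence *fence)
    {
    	u64 seq[RADEON_NUM_RINGS] = {};
    	long r;

    	seq[fence->ring] = fence->seq;
    	r = radeon_fence_wait_seq_timeout(rdev, seq, true, 2 * HZ);
    	if (r < 0)
    		return r;		/* -EDEADLK, -ERESTARTSYS, ... */
    	if (r == 0)
    		return -ETIMEDOUT;	/* nothing signaled in time */
    	return 0;			/* signaled, r jiffies to spare */
    }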
/**
* radeon_fence_wait - wait for a fence to signal
*
* @fence: radeon fence object
- * @intr: use interruptable sleep
+ * @intr: use interruptible sleep
*
* Wait for the requested fence to signal (all asics).
* @intr selects whether to use interruptible (true) or non-interruptible
int radeon_fence_wait(struct radeon_fence *fence, bool intr)
{
uint64_t seq[RADEON_NUM_RINGS] = {};
- int r;
+ long r;
- if (fence == NULL) {
- WARN(1, "Querying an invalid fence : %p !\n", fence);
- return -EINVAL;
- }
+ /*
+ * This function should not be called on !radeon fences.
+ * If this is the case, it would mean this function can
+ * also be called on radeon fences belonging to another card.
+ * exclusive_lock is not held in that case.
+ */
+ if (WARN_ON_ONCE(!to_radeon_fence(&fence->base)))
+ return fence_wait(&fence->base, intr);
seq[fence->ring] = fence->seq;
- if (seq[fence->ring] == RADEON_FENCE_SIGNALED_SEQ)
- return 0;
-
- r = radeon_fence_wait_seq(fence->rdev, seq, intr);
- if (r)
+ r = radeon_fence_wait_seq_timeout(fence->rdev, seq, intr, MAX_SCHEDULE_TIMEOUT);
+ if (r < 0) {
return r;
+ }
- fence->seq = RADEON_FENCE_SIGNALED_SEQ;
+ r = fence_signal(&fence->base);
+ if (!r)
+ FENCE_TRACE(&fence->base, "signaled from fence_wait\n");
return 0;
}
{
uint64_t seq[RADEON_NUM_RINGS];
unsigned i, num_rings = 0;
- int r;
+ long r;
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
seq[i] = 0;
seq[i] = fences[i]->seq;
++num_rings;
-
- /* test if something was allready signaled */
- if (seq[i] == RADEON_FENCE_SIGNALED_SEQ)
- return 0;
}
/* nothing to wait for ? */
if (num_rings == 0)
return -ENOENT;
- r = radeon_fence_wait_seq(rdev, seq, intr);
- if (r) {
+ r = radeon_fence_wait_seq_timeout(rdev, seq, intr, MAX_SCHEDULE_TIMEOUT);
+ if (r < 0) {
return r;
}
return 0;
int radeon_fence_wait_next(struct radeon_device *rdev, int ring)
{
uint64_t seq[RADEON_NUM_RINGS] = {};
+ long r;
seq[ring] = atomic64_read(&rdev->fence_drv[ring].last_seq) + 1ULL;
if (seq[ring] >= rdev->fence_drv[ring].sync_seq[ring]) {
already the last emitted fence */
return -ENOENT;
}
- return radeon_fence_wait_seq(rdev, seq, false);
+ r = radeon_fence_wait_seq_timeout(rdev, seq, false, MAX_SCHEDULE_TIMEOUT);
+ if (r < 0)
+ return r;
+ return 0;
}
/**
int radeon_fence_wait_empty(struct radeon_device *rdev, int ring)
{
uint64_t seq[RADEON_NUM_RINGS] = {};
- int r;
+ long r;
seq[ring] = rdev->fence_drv[ring].sync_seq[ring];
if (!seq[ring])
return 0;
- r = radeon_fence_wait_seq(rdev, seq, false);
- if (r) {
+ r = radeon_fence_wait_seq_timeout(rdev, seq, false, MAX_SCHEDULE_TIMEOUT);
+ if (r < 0) {
if (r == -EDEADLK)
return -EDEADLK;
- dev_err(rdev->dev, "error waiting for ring[%d] to become idle (%d)\n",
+ dev_err(rdev->dev, "error waiting for ring[%d] to become idle (%ld)\n",
ring, r);
}
return 0;
*/
struct radeon_fence *radeon_fence_ref(struct radeon_fence *fence)
{
- kref_get(&fence->kref);
+ fence_get(&fence->base);
return fence;
}
*fence = NULL;
if (tmp) {
- kref_put(&tmp->kref, radeon_fence_destroy);
+ fence_put(&tmp->base);
}
}
rdev->fence_drv[ring].sync_seq[i] = 0;
atomic64_set(&rdev->fence_drv[ring].last_seq, 0);
rdev->fence_drv[ring].initialized = false;
+ INIT_DELAYED_WORK(&rdev->fence_drv[ring].lockup_work,
+ radeon_fence_check_lockup);
+ rdev->fence_drv[ring].rdev = rdev;
}
/**
r = radeon_fence_wait_empty(rdev, ring);
if (r) {
/* no need to trigger GPU reset as we are unloading */
- radeon_fence_driver_force_completion(rdev);
+ radeon_fence_driver_force_completion(rdev, ring);
}
+ cancel_delayed_work_sync(&rdev->fence_drv[ring].lockup_work);
wake_up_all(&rdev->fence_queue);
radeon_scratch_free(rdev, rdev->fence_drv[ring].scratch_reg);
rdev->fence_drv[ring].initialized = false;
* radeon_fence_driver_force_completion - force all fence waiters to complete
*
* @rdev: radeon device pointer
+ * @ring: the ring to complete
*
* In case of GPU reset failure make sure no process keeps waiting on a fence
* that will never complete.
*/
-void radeon_fence_driver_force_completion(struct radeon_device *rdev)
+void radeon_fence_driver_force_completion(struct radeon_device *rdev, int ring)
{
- int ring;
-
- for (ring = 0; ring < RADEON_NUM_RINGS; ring++) {
- if (!rdev->fence_drv[ring].initialized)
- continue;
+ if (rdev->fence_drv[ring].initialized) {
radeon_fence_write(rdev, rdev->fence_drv[ring].sync_seq[ring], ring);
+ cancel_delayed_work_sync(&rdev->fence_drv[ring].lockup_work);
}
}
down_read(&rdev->exclusive_lock);
seq_printf(m, "%d\n", rdev->needs_reset);
rdev->needs_reset = true;
+ wake_up_all(&rdev->fence_queue);
up_read(&rdev->exclusive_lock);
return 0;
return 0;
#endif
}
+
+static const char *radeon_fence_get_driver_name(struct fence *fence)
+{
+ return "radeon";
+}
+
+static const char *radeon_fence_get_timeline_name(struct fence *f)
+{
+ struct radeon_fence *fence = to_radeon_fence(f);
+ switch (fence->ring) {
+ case RADEON_RING_TYPE_GFX_INDEX: return "radeon.gfx";
+ case CAYMAN_RING_TYPE_CP1_INDEX: return "radeon.cp1";
+ case CAYMAN_RING_TYPE_CP2_INDEX: return "radeon.cp2";
+ case R600_RING_TYPE_DMA_INDEX: return "radeon.dma";
+ case CAYMAN_RING_TYPE_DMA1_INDEX: return "radeon.dma1";
+ case R600_RING_TYPE_UVD_INDEX: return "radeon.uvd";
+ case TN_RING_TYPE_VCE1_INDEX: return "radeon.vce1";
+ case TN_RING_TYPE_VCE2_INDEX: return "radeon.vce2";
+ default: WARN_ON_ONCE(1); return "radeon.unk";
+ }
+}
+
+static inline bool radeon_test_signaled(struct radeon_fence *fence)
+{
+ return test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags);
+}
+
+static signed long radeon_fence_default_wait(struct fence *f, bool intr,
+ signed long t)
+{
+ struct radeon_fence *fence = to_radeon_fence(f);
+ struct radeon_device *rdev = fence->rdev;
+ bool signaled;
+
+ fence_enable_sw_signaling(&fence->base);
+
+ /*
+ * This function has to return -EDEADLK, but cannot hold
+ * exclusive_lock during the wait because some callers
+ * may already hold it. This means checking needs_reset without
+	 * the lock, and not fiddling with any GPU internals.
+ *
+ * The callback installed with fence_enable_sw_signaling will
+ * run before our wait_event_*timeout call, so we will see
+ * both the signaled fence and the changes to needs_reset.
+ */
+
+ if (intr)
+ t = wait_event_interruptible_timeout(rdev->fence_queue,
+ ((signaled = radeon_test_signaled(fence)) ||
+ rdev->needs_reset), t);
+ else
+ t = wait_event_timeout(rdev->fence_queue,
+ ((signaled = radeon_test_signaled(fence)) ||
+ rdev->needs_reset), t);
+
+ if (t > 0 && !signaled)
+ return -EDEADLK;
+ return t;
+}
+
+const struct fence_ops radeon_fence_ops = {
+ .get_driver_name = radeon_fence_get_driver_name,
+ .get_timeline_name = radeon_fence_get_timeline_name,
+ .enable_signaling = radeon_fence_enable_signaling,
+ .signaled = radeon_fence_is_signaled,
+ .wait = radeon_fence_default_wait,
+ .release = NULL,
+};
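With these ops installed, generic code can treat a radeon fence as an opaque struct fence; a minimal consumer sketch, assuming only the cross-driver fence API (wait_on_foreign_fence() is illustrative):

    #include <linux/fence.h>

    /* The core dispatches .signaled and .wait through the ops table,
     * so the caller never needs to know this is a radeon fence. */
    static int wait_on_foreign_fence(struct fence *f)
    {
    	if (fence_is_signaled(f))
    		return 0;
    	return fence_wait(f, true);	/* interruptible, no timeout */
    }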
if (rdev->gart.robj == NULL) {
r = radeon_bo_create(rdev, rdev->gart.table_size,
PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
- 0, NULL, &rdev->gart.robj);
+ 0, NULL, NULL, &rdev->gart.robj);
if (r) {
return r;
}
retry:
r = radeon_bo_create(rdev, size, alignment, kernel, initial_domain,
- flags, NULL, &robj);
+ flags, NULL, NULL, &robj);
if (r) {
if (r != -ERESTARTSYS) {
if (initial_domain == RADEON_GEM_DOMAIN_VRAM) {
{
struct radeon_bo *robj;
uint32_t domain;
- int r;
+ long r;
/* FIXME: reimplement */
robj = gem_to_radeon_bo(gobj);
}
if (domain == RADEON_GEM_DOMAIN_CPU) {
/* Asking for CPU access, wait for the object to be idle */
- r = radeon_bo_wait(robj, NULL, false);
- if (r) {
- printk(KERN_ERR "Failed to wait for object !\n");
+ r = reservation_object_wait_timeout_rcu(robj->tbo.resv, true, true, 30 * HZ);
+ if (!r)
+ r = -EBUSY;
+
+ if (r < 0 && r != -EINTR) {
+ printk(KERN_ERR "Failed to wait for object: %li\n", r);
return r;
}
}
return 0;
}
+int radeon_gem_userptr_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *filp)
+{
+ struct radeon_device *rdev = dev->dev_private;
+ struct drm_radeon_gem_userptr *args = data;
+ struct drm_gem_object *gobj;
+ struct radeon_bo *bo;
+ uint32_t handle;
+ int r;
+
+ if (offset_in_page(args->addr | args->size))
+ return -EINVAL;
+
+ /* reject unknown flag values */
+ if (args->flags & ~(RADEON_GEM_USERPTR_READONLY |
+ RADEON_GEM_USERPTR_ANONONLY | RADEON_GEM_USERPTR_VALIDATE |
+ RADEON_GEM_USERPTR_REGISTER))
+ return -EINVAL;
+
+ if (args->flags & RADEON_GEM_USERPTR_READONLY) {
+ /* readonly pages not tested on older hardware */
+ if (rdev->family < CHIP_R600)
+ return -EINVAL;
+
+ } else if (!(args->flags & RADEON_GEM_USERPTR_ANONONLY) ||
+ !(args->flags & RADEON_GEM_USERPTR_REGISTER)) {
+
+		/* if we want to write to it we must require anonymous
+		   memory and install an MMU notifier */
+ return -EACCES;
+ }
+
+ down_read(&rdev->exclusive_lock);
+
+ /* create a gem object to contain this object in */
+ r = radeon_gem_object_create(rdev, args->size, 0,
+ RADEON_GEM_DOMAIN_CPU, 0,
+ false, &gobj);
+ if (r)
+ goto handle_lockup;
+
+ bo = gem_to_radeon_bo(gobj);
+ r = radeon_ttm_tt_set_userptr(bo->tbo.ttm, args->addr, args->flags);
+ if (r)
+ goto release_object;
+
+ if (args->flags & RADEON_GEM_USERPTR_REGISTER) {
+ r = radeon_mn_register(bo, args->addr);
+ if (r)
+ goto release_object;
+ }
+
+ if (args->flags & RADEON_GEM_USERPTR_VALIDATE) {
+		down_read(&current->mm->mmap_sem);
+ r = radeon_bo_reserve(bo, true);
+ if (r) {
+			up_read(&current->mm->mmap_sem);
+ goto release_object;
+ }
+
+ radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_GTT);
+ r = ttm_bo_validate(&bo->tbo, &bo->placement, true, false);
+ radeon_bo_unreserve(bo);
+		up_read(&current->mm->mmap_sem);
+ if (r)
+ goto release_object;
+ }
+
+ r = drm_gem_handle_create(filp, gobj, &handle);
+ /* drop reference from allocate - handle holds it now */
+ drm_gem_object_unreference_unlocked(gobj);
+ if (r)
+ goto handle_lockup;
+
+ args->handle = handle;
+ up_read(&rdev->exclusive_lock);
+ return 0;
+
+release_object:
+ drm_gem_object_unreference_unlocked(gobj);
+
+handle_lockup:
+ up_read(&rdev->exclusive_lock);
+ r = radeon_gem_handle_lockup(rdev, r);
+
+ return r;
+}
+
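A hedged userspace sketch of the new ioctl; make_userptr_bo() is illustrative, while the flag and field names are taken from the hunks above:

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/radeon_drm.h>

    /* Wrap an anonymous, page-aligned allocation in a GEM handle.
     * Writable mappings must pass ANONONLY|REGISTER, matching the
     * -EACCES check in the ioctl above. */
    static int make_userptr_bo(int drm_fd, void *addr, uint64_t size,
    			   uint32_t *handle)
    {
    	struct drm_radeon_gem_userptr args;

    	memset(&args, 0, sizeof(args));
    	args.addr = (uintptr_t)addr;	/* must be page aligned */
    	args.size = size;		/* must be a page multiple */
    	args.flags = RADEON_GEM_USERPTR_ANONONLY |
    		     RADEON_GEM_USERPTR_REGISTER;

    	if (ioctl(drm_fd, DRM_IOCTL_RADEON_GEM_USERPTR, &args))
    		return -1;
    	*handle = args.handle;
    	return 0;
    }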
int radeon_gem_set_domain_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp)
{
return -ENOENT;
}
robj = gem_to_radeon_bo(gobj);
+ if (radeon_ttm_tt_has_userptr(robj->tbo.ttm)) {
+ drm_gem_object_unreference_unlocked(gobj);
+ return -EPERM;
+ }
*offset_p = radeon_bo_mmap_offset(robj);
drm_gem_object_unreference_unlocked(gobj);
return 0;
struct drm_radeon_gem_wait_idle *args = data;
struct drm_gem_object *gobj;
struct radeon_bo *robj;
- int r;
+ int r = 0;
uint32_t cur_placement = 0;
+ long ret;
gobj = drm_gem_object_lookup(dev, filp, args->handle);
if (gobj == NULL) {
return -ENOENT;
}
robj = gem_to_radeon_bo(gobj);
- r = radeon_bo_wait(robj, &cur_placement, false);
+
+ ret = reservation_object_wait_timeout_rcu(robj->tbo.resv, true, true, 30 * HZ);
+ if (ret == 0)
+ r = -EBUSY;
+ else if (ret < 0)
+ r = ret;
+
/* Flush HDP cache via MMIO if necessary */
if (rdev->asic->mmio_hdp_flush &&
radeon_mem_type_to_domain(cur_placement) == RADEON_GEM_DOMAIN_VRAM)
return -ENOENT;
}
robj = gem_to_radeon_bo(gobj);
+
+ r = -EPERM;
+ if (radeon_ttm_tt_has_userptr(robj->tbo.ttm))
+ goto out;
+
r = radeon_bo_reserve(robj, false);
if (unlikely(r))
goto out;
* @rdev: radeon_device pointer
* @ib: IB object to schedule
* @const_ib: Const IB to schedule (SI only)
+ * @hdp_flush: Whether or not to perform an HDP cache flush
*
* Schedule an IB on the associated ring (all asics).
* Returns 0 on success, error on failure.
* to SI there was just a DE IB.
*/
int radeon_ib_schedule(struct radeon_device *rdev, struct radeon_ib *ib,
- struct radeon_ib *const_ib)
+ struct radeon_ib *const_ib, bool hdp_flush)
{
struct radeon_ring *ring = &rdev->ring[ib->ring];
int r = 0;
if (ib->vm) {
struct radeon_fence *vm_id_fence;
vm_id_fence = radeon_vm_grab_id(rdev, ib->vm, ib->ring);
- radeon_semaphore_sync_to(ib->semaphore, vm_id_fence);
+ radeon_semaphore_sync_fence(ib->semaphore, vm_id_fence);
}
/* sync with other rings */
if (ib->vm)
radeon_vm_fence(rdev, ib->vm, ib->fence);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, hdp_flush);
return 0;
}
r = radeon_ib_test(rdev, i, ring);
if (r) {
+ radeon_fence_driver_force_completion(rdev, i);
ring->ready = false;
rdev->needs_reset = false;
drm_helper_hpd_irq_event(dev);
}
-/**
- * radeon_irq_reset_work_func - execute gpu reset
- *
- * @work: work struct
- *
- * Execute scheduled gpu reset (cayman+).
- * This function is called when the irq handler
- * thinks we need a gpu reset.
- */
-static void radeon_irq_reset_work_func(struct work_struct *work)
-{
- struct radeon_device *rdev = container_of(work, struct radeon_device,
- reset_work);
-
- radeon_gpu_reset(rdev);
-}
-
/**
* radeon_driver_irq_preinstall_kms - drm irq preinstall callback
*
INIT_WORK(&rdev->hotplug_work, radeon_hotplug_work_func);
INIT_WORK(&rdev->audio_work, r600_audio_update_hdmi);
- INIT_WORK(&rdev->reset_work, radeon_irq_reset_work_func);
rdev->irq.installed = true;
r = drm_irq_install(rdev->ddev, rdev->ddev->pdev->irq);
}
}
+/**
+ * radeon_irq_kms_sw_irq_get_delayed - enable software interrupt
+ *
+ * @rdev: radeon device pointer
+ * @ring: ring whose interrupt you want to enable
+ *
+ * Takes a software interrupt reference for a specific ring (all asics)
+ * without programming the hardware. Returns true if this was the first
+ * reference, i.e. the caller still needs to arm the interrupt later.
+ */
+bool radeon_irq_kms_sw_irq_get_delayed(struct radeon_device *rdev, int ring)
+{
+ return atomic_inc_return(&rdev->irq.ring_int[ring]) == 1;
+}
+
/**
* radeon_irq_kms_sw_irq_put - disable software interrupt
*
DRM_IOCTL_DEF_DRV(RADEON_GEM_BUSY, radeon_gem_busy_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(RADEON_GEM_VA, radeon_gem_va_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(RADEON_GEM_OP, radeon_gem_op_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(RADEON_GEM_USERPTR, radeon_gem_userptr_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
};
int radeon_max_kms_ioctl = ARRAY_SIZE(radeon_ioctls_kms);
--- /dev/null
+/*
+ * Copyright 2014 Advanced Micro Devices, Inc.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ */
+/*
+ * Authors:
+ * Christian König <christian.koenig@amd.com>
+ */
+
+#include <linux/firmware.h>
+#include <linux/module.h>
+#include <linux/mmu_notifier.h>
+#include <drm/drmP.h>
+#include <drm/drm.h>
+
+#include "radeon.h"
+
+struct radeon_mn {
+ /* constant after initialisation */
+ struct radeon_device *rdev;
+ struct mm_struct *mm;
+ struct mmu_notifier mn;
+
+ /* only used on destruction */
+ struct work_struct work;
+
+ /* protected by rdev->mn_lock */
+ struct hlist_node node;
+
+ /* objects protected by lock */
+ struct mutex lock;
+ struct rb_root objects;
+};
+
+/**
+ * radeon_mn_destroy - destroy the rmn
+ *
+ * @work: previously scheduled work item
+ *
+ * Lazily destroys the notifier from a work item
+ */
+static void radeon_mn_destroy(struct work_struct *work)
+{
+ struct radeon_mn *rmn = container_of(work, struct radeon_mn, work);
+ struct radeon_device *rdev = rmn->rdev;
+ struct radeon_bo *bo, *next;
+
+ mutex_lock(&rdev->mn_lock);
+ mutex_lock(&rmn->lock);
+ hash_del(&rmn->node);
+ rbtree_postorder_for_each_entry_safe(bo, next, &rmn->objects, mn_it.rb) {
+ interval_tree_remove(&bo->mn_it, &rmn->objects);
+ bo->mn = NULL;
+ }
+ mutex_unlock(&rmn->lock);
+ mutex_unlock(&rdev->mn_lock);
+ mmu_notifier_unregister(&rmn->mn, rmn->mm);
+ kfree(rmn);
+}
+
+/**
+ * radeon_mn_release - callback to notify about mm destruction
+ *
+ * @mn: our notifier
+ * @mm: the mm this callback is about
+ *
+ * Schedule a work item to lazily destroy our notifier.
+ */
+static void radeon_mn_release(struct mmu_notifier *mn,
+ struct mm_struct *mm)
+{
+ struct radeon_mn *rmn = container_of(mn, struct radeon_mn, mn);
+ INIT_WORK(&rmn->work, radeon_mn_destroy);
+ schedule_work(&rmn->work);
+}
+
+/**
+ * radeon_mn_invalidate_range_start - callback to notify about mm change
+ *
+ * @mn: our notifier
+ * @mm: the mm this callback is about
+ * @start: start of updated range
+ * @end: end of updated range
+ *
+ * We block for all BOs between start and end to be idle and
+ * unmap them by moving them back into the system domain.
+ */
+static void radeon_mn_invalidate_range_start(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long start,
+ unsigned long end)
+{
+ struct radeon_mn *rmn = container_of(mn, struct radeon_mn, mn);
+ struct interval_tree_node *it;
+
+ /* notification is exclusive, but interval is inclusive */
+ end -= 1;
+
+ mutex_lock(&rmn->lock);
+
+ it = interval_tree_iter_first(&rmn->objects, start, end);
+ while (it) {
+ struct radeon_bo *bo;
+ struct fence *fence;
+ int r;
+
+ bo = container_of(it, struct radeon_bo, mn_it);
+ it = interval_tree_iter_next(it, start, end);
+
+ r = radeon_bo_reserve(bo, true);
+ if (r) {
+ DRM_ERROR("(%d) failed to reserve user bo\n", r);
+ continue;
+ }
+
+ fence = reservation_object_get_excl(bo->tbo.resv);
+ if (fence) {
+ r = radeon_fence_wait((struct radeon_fence *)fence, false);
+ if (r)
+ DRM_ERROR("(%d) failed to wait for user bo\n", r);
+ }
+
+ radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
+ r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
+ if (r)
+ DRM_ERROR("(%d) failed to validate user bo\n", r);
+
+ radeon_bo_unreserve(bo);
+ }
+
+ mutex_unlock(&rmn->lock);
+}
+
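The "end -= 1" above converts the half-open mmu_notifier range to the inclusive bounds the kernel interval tree expects; a small sketch, with first_userptr_hit() as a hypothetical helper:

    #include <linux/interval_tree.h>

    /* interval_tree_* take inclusive [start, last] bounds, while
     * mmu_notifier ranges are half-open [start, end). */
    static struct interval_tree_node *
    first_userptr_hit(struct rb_root *objects,
    		  unsigned long start, unsigned long end)
    {
    	return interval_tree_iter_first(objects, start, end - 1);
    }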
+static const struct mmu_notifier_ops radeon_mn_ops = {
+ .release = radeon_mn_release,
+ .invalidate_range_start = radeon_mn_invalidate_range_start,
+};
+
+/**
+ * radeon_mn_get - create notifier context
+ *
+ * @rdev: radeon device pointer
+ *
+ * Creates a notifier context for current->mm.
+ */
+static struct radeon_mn *radeon_mn_get(struct radeon_device *rdev)
+{
+ struct mm_struct *mm = current->mm;
+ struct radeon_mn *rmn;
+ int r;
+
+ down_write(&mm->mmap_sem);
+ mutex_lock(&rdev->mn_lock);
+
+ hash_for_each_possible(rdev->mn_hash, rmn, node, (unsigned long)mm)
+ if (rmn->mm == mm)
+ goto release_locks;
+
+ rmn = kzalloc(sizeof(*rmn), GFP_KERNEL);
+ if (!rmn) {
+ rmn = ERR_PTR(-ENOMEM);
+ goto release_locks;
+ }
+
+ rmn->rdev = rdev;
+ rmn->mm = mm;
+ rmn->mn.ops = &radeon_mn_ops;
+ mutex_init(&rmn->lock);
+ rmn->objects = RB_ROOT;
+
+ r = __mmu_notifier_register(&rmn->mn, mm);
+ if (r)
+ goto free_rmn;
+
+ hash_add(rdev->mn_hash, &rmn->node, (unsigned long)mm);
+
+release_locks:
+ mutex_unlock(&rdev->mn_lock);
+ up_write(&mm->mmap_sem);
+
+ return rmn;
+
+free_rmn:
+ mutex_unlock(&rdev->mn_lock);
+ up_write(&mm->mmap_sem);
+ kfree(rmn);
+
+ return ERR_PTR(r);
+}
+
+/**
+ * radeon_mn_register - register a BO for notifier updates
+ *
+ * @bo: radeon buffer object
+ * @addr: userptr addr we should monitor
+ *
+ * Registers an MMU notifier for the given BO at the specified address.
+ * Returns 0 on success, -ERRNO if anything goes wrong.
+ */
+int radeon_mn_register(struct radeon_bo *bo, unsigned long addr)
+{
+ unsigned long end = addr + radeon_bo_size(bo) - 1;
+ struct radeon_device *rdev = bo->rdev;
+ struct radeon_mn *rmn;
+ struct interval_tree_node *it;
+
+ rmn = radeon_mn_get(rdev);
+ if (IS_ERR(rmn))
+ return PTR_ERR(rmn);
+
+ mutex_lock(&rmn->lock);
+
+ it = interval_tree_iter_first(&rmn->objects, addr, end);
+ if (it) {
+ mutex_unlock(&rmn->lock);
+ return -EEXIST;
+ }
+
+ bo->mn = rmn;
+ bo->mn_it.start = addr;
+ bo->mn_it.last = end;
+ interval_tree_insert(&bo->mn_it, &rmn->objects);
+
+ mutex_unlock(&rmn->lock);
+
+ return 0;
+}
+
+/**
+ * radeon_mn_unregister - unregister a BO for notifier updates
+ *
+ * @bo: radeon buffer object
+ *
+ * Remove any registration of MMU notifier updates from the buffer object.
+ */
+void radeon_mn_unregister(struct radeon_bo *bo)
+{
+ struct radeon_device *rdev = bo->rdev;
+ struct radeon_mn *rmn;
+
+ mutex_lock(&rdev->mn_lock);
+ rmn = bo->mn;
+ if (rmn == NULL) {
+ mutex_unlock(&rdev->mn_lock);
+ return;
+ }
+
+ mutex_lock(&rmn->lock);
+ interval_tree_remove(&bo->mn_it, &rmn->objects);
+ bo->mn = NULL;
+ mutex_unlock(&rmn->lock);
+ mutex_unlock(&rdev->mn_lock);
+}
extern int atombios_get_encoder_mode(struct drm_encoder *encoder);
extern bool atombios_set_edp_panel_power(struct drm_connector *connector, int action);
extern void radeon_encoder_set_active_device(struct drm_encoder *encoder);
+extern bool radeon_encoder_is_digital(struct drm_encoder *encoder);
extern void radeon_crtc_load_lut(struct drm_crtc *crtc);
extern int atombios_crtc_set_base(struct drm_crtc *crtc, int x, int y,
bo = container_of(tbo, struct radeon_bo, tbo);
radeon_update_memory_usage(bo, bo->tbo.mem.mem_type, -1);
+ radeon_mn_unregister(bo);
mutex_lock(&bo->rdev->gem.mutex);
list_del_init(&bo->list);
{
u32 c = 0, i;
- rbo->placement.fpfn = 0;
- rbo->placement.lpfn = 0;
rbo->placement.placement = rbo->placements;
rbo->placement.busy_placement = rbo->placements;
if (domain & RADEON_GEM_DOMAIN_VRAM)
- rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
- TTM_PL_FLAG_VRAM;
+ rbo->placements[c++].flags = TTM_PL_FLAG_WC |
+ TTM_PL_FLAG_UNCACHED |
+ TTM_PL_FLAG_VRAM;
+
if (domain & RADEON_GEM_DOMAIN_GTT) {
if (rbo->flags & RADEON_GEM_GTT_UC) {
- rbo->placements[c++] = TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_TT;
+ rbo->placements[c++].flags = TTM_PL_FLAG_UNCACHED |
+ TTM_PL_FLAG_TT;
+
} else if ((rbo->flags & RADEON_GEM_GTT_WC) ||
(rbo->rdev->flags & RADEON_IS_AGP)) {
- rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
+ rbo->placements[c++].flags = TTM_PL_FLAG_WC |
+ TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_TT;
} else {
- rbo->placements[c++] = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_TT;
+ rbo->placements[c++].flags = TTM_PL_FLAG_CACHED |
+ TTM_PL_FLAG_TT;
}
}
+
if (domain & RADEON_GEM_DOMAIN_CPU) {
if (rbo->flags & RADEON_GEM_GTT_UC) {
- rbo->placements[c++] = TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_SYSTEM;
+ rbo->placements[c++].flags = TTM_PL_FLAG_UNCACHED |
+ TTM_PL_FLAG_SYSTEM;
+
} else if ((rbo->flags & RADEON_GEM_GTT_WC) ||
rbo->rdev->flags & RADEON_IS_AGP) {
- rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
+ rbo->placements[c++].flags = TTM_PL_FLAG_WC |
+ TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_SYSTEM;
} else {
- rbo->placements[c++] = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM;
+ rbo->placements[c++].flags = TTM_PL_FLAG_CACHED |
+ TTM_PL_FLAG_SYSTEM;
}
}
if (!c)
- rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+ rbo->placements[c++].flags = TTM_PL_MASK_CACHING |
+ TTM_PL_FLAG_SYSTEM;
+
rbo->placement.num_placement = c;
rbo->placement.num_busy_placement = c;
+ for (i = 0; i < c; ++i) {
+ rbo->placements[i].fpfn = 0;
+ if ((rbo->flags & RADEON_GEM_CPU_ACCESS) &&
+ (rbo->placements[i].flags & TTM_PL_FLAG_VRAM))
+ rbo->placements[i].lpfn =
+ rbo->rdev->mc.visible_vram_size >> PAGE_SHIFT;
+ else
+ rbo->placements[i].lpfn = 0;
+ }
+
/*
* Use two-ended allocation depending on the buffer size to
* improve fragmentation quality.
* 512kb was measured as the optimal number.
*/
- if (rbo->tbo.mem.size > 512 * 1024) {
+ if (!((rbo->flags & RADEON_GEM_CPU_ACCESS) &&
+ (rbo->placements[i].flags & TTM_PL_FLAG_VRAM)) &&
+ rbo->tbo.mem.size > 512 * 1024) {
for (i = 0; i < c; i++) {
- rbo->placements[i] |= TTM_PL_FLAG_TOPDOWN;
+ rbo->placements[i].flags |= TTM_PL_FLAG_TOPDOWN;
}
}
}
int radeon_bo_create(struct radeon_device *rdev,
- unsigned long size, int byte_align, bool kernel, u32 domain,
- u32 flags, struct sg_table *sg, struct radeon_bo **bo_ptr)
+ unsigned long size, int byte_align, bool kernel,
+ u32 domain, u32 flags, struct sg_table *sg,
+ struct reservation_object *resv,
+ struct radeon_bo **bo_ptr)
{
struct radeon_bo *bo;
enum ttm_bo_type type;
down_read(&rdev->pm.mclk_lock);
r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type,
&bo->placement, page_align, !kernel, NULL,
- acc_size, sg, &radeon_ttm_bo_destroy);
+ acc_size, sg, resv, &radeon_ttm_bo_destroy);
up_read(&rdev->pm.mclk_lock);
if (unlikely(r != 0)) {
return r;
{
int r, i;
+ if (radeon_ttm_tt_has_userptr(bo->tbo.ttm))
+ return -EPERM;
+
if (bo->pin_count) {
bo->pin_count++;
if (gpu_addr)
return 0;
}
radeon_ttm_placement_from_domain(bo, domain);
- if (domain == RADEON_GEM_DOMAIN_VRAM) {
+ for (i = 0; i < bo->placement.num_placement; i++) {
/* force to pin into visible video ram */
- bo->placement.lpfn = bo->rdev->mc.visible_vram_size >> PAGE_SHIFT;
- }
- if (max_offset) {
- u64 lpfn = max_offset >> PAGE_SHIFT;
-
- if (!bo->placement.lpfn)
- bo->placement.lpfn = bo->rdev->mc.gtt_size >> PAGE_SHIFT;
+ if ((bo->placements[i].flags & TTM_PL_FLAG_VRAM) &&
+ !(bo->flags & RADEON_GEM_NO_CPU_ACCESS) &&
+ (!max_offset || max_offset > bo->rdev->mc.visible_vram_size))
+ bo->placements[i].lpfn =
+ bo->rdev->mc.visible_vram_size >> PAGE_SHIFT;
+ else
+ bo->placements[i].lpfn = max_offset >> PAGE_SHIFT;
- if (lpfn < bo->placement.lpfn)
- bo->placement.lpfn = lpfn;
+ bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
}
- for (i = 0; i < bo->placement.num_placement; i++)
- bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
+
r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
if (likely(r == 0)) {
bo->pin_count = 1;
bo->pin_count--;
if (bo->pin_count)
return 0;
- for (i = 0; i < bo->placement.num_placement; i++)
- bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
+ for (i = 0; i < bo->placement.num_placement; i++) {
+ bo->placements[i].lpfn = 0;
+ bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
+ }
r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
if (likely(r == 0)) {
if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
u64 bytes_moved = 0, initial_bytes_moved;
u64 bytes_moved_threshold = radeon_bo_get_threshold_for_moves(rdev);
- r = ttm_eu_reserve_buffers(ticket, head);
+ r = ttm_eu_reserve_buffers(ticket, head, true);
if (unlikely(r != 0)) {
return r;
}
bo = lobj->robj;
if (!bo->pin_count) {
u32 domain = lobj->prefered_domains;
+ u32 allowed = lobj->allowed_domains;
u32 current_domain =
radeon_mem_type_to_domain(bo->tbo.mem.mem_type);
* into account. We don't want to disallow buffer moves
* completely.
*/
- if ((lobj->allowed_domains & current_domain) != 0 &&
+ if ((allowed & current_domain) != 0 &&
(domain & current_domain) == 0 && /* will be moved */
bytes_moved > bytes_moved_threshold) {
/* don't move it */
retry:
radeon_ttm_placement_from_domain(bo, domain);
if (ring == R600_RING_TYPE_UVD_INDEX)
- radeon_uvd_force_into_uvd_segment(bo);
+ radeon_uvd_force_into_uvd_segment(bo, allowed);
initial_bytes_moved = atomic64_read(&rdev->num_bytes_moved);
r = ttm_bo_validate(&bo->tbo, &bo->placement, true, false);
/* hurrah the memory is not visible ! */
radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM);
- rbo->placement.lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT;
+ rbo->placements[0].lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT;
r = ttm_bo_validate(bo, &rbo->placement, false, false);
if (unlikely(r == -ENOMEM)) {
radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_GTT);
r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, NULL);
if (unlikely(r != 0))
return r;
- spin_lock(&bo->tbo.bdev->fence_lock);
if (mem_type)
*mem_type = bo->tbo.mem.mem_type;
- if (bo->tbo.sync_obj)
- r = ttm_bo_wait(&bo->tbo, true, true, no_wait);
- spin_unlock(&bo->tbo.bdev->fence_lock);
+
+ r = ttm_bo_wait(&bo->tbo, true, true, no_wait);
ttm_bo_unreserve(&bo->tbo);
return r;
}
unsigned long size, int byte_align,
bool kernel, u32 domain, u32 flags,
struct sg_table *sg,
+ struct reservation_object *resv,
struct radeon_bo **bo_ptr);
extern int radeon_bo_kmap(struct radeon_bo *bo, void **ptr);
extern void radeon_bo_kunmap(struct radeon_bo *bo);
struct radeon_device *rdev = ddev->dev_private;
enum radeon_pm_state_type pm = rdev->pm.dpm.user_state;
- if ((rdev->flags & RADEON_IS_PX) &&
- (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
- return snprintf(buf, PAGE_SIZE, "off\n");
-
return snprintf(buf, PAGE_SIZE, "%s\n",
(pm == POWER_STATE_TYPE_BATTERY) ? "battery" :
(pm == POWER_STATE_TYPE_BALANCED) ? "balanced" : "performance");
struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
- /* Can't set dpm state when the card is off */
- if ((rdev->flags & RADEON_IS_PX) &&
- (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
- return -EINVAL;
-
mutex_lock(&rdev->pm.mutex);
if (strncmp("battery", buf, strlen("battery")) == 0)
rdev->pm.dpm.user_state = POWER_STATE_TYPE_BATTERY;
goto fail;
}
mutex_unlock(&rdev->pm.mutex);
- radeon_pm_compute_clocks(rdev);
+
+ /* Can't set dpm state when the card is off */
+ if (!(rdev->flags & RADEON_IS_PX) ||
+ (ddev->switch_power_state == DRM_SWITCH_POWER_ON))
+ radeon_pm_compute_clocks(rdev);
+
fail:
return count;
}
if (rdev->pm.active_crtcs & (1 << crtc)) {
vbl_status = radeon_get_crtc_scanoutpos(rdev->ddev, crtc, 0, &vpos, &hpos, NULL, NULL);
if ((vbl_status & DRM_SCANOUTPOS_VALID) &&
- !(vbl_status & DRM_SCANOUTPOS_INVBL))
+ !(vbl_status & DRM_SCANOUTPOS_IN_VBLANK))
in_vbl = false;
}
}
#include "radeon.h"
#include <drm/radeon_drm.h>
+#include <linux/dma-buf.h>
struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
{
}
struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
- size_t size,
+ struct dma_buf_attachment *attach,
struct sg_table *sg)
{
+ struct reservation_object *resv = attach->dmabuf->resv;
struct radeon_device *rdev = dev->dev_private;
struct radeon_bo *bo;
int ret;
- ret = radeon_bo_create(rdev, size, PAGE_SIZE, false,
- RADEON_GEM_DOMAIN_GTT, 0, sg, &bo);
+ ww_mutex_lock(&resv->lock, NULL);
+ ret = radeon_bo_create(rdev, attach->dmabuf->size, PAGE_SIZE, false,
+ RADEON_GEM_DOMAIN_GTT, 0, sg, resv, &bo);
+ ww_mutex_unlock(&resv->lock);
if (ret)
return ERR_PTR(ret);
return bo->tbo.resv;
}
+
+struct dma_buf *radeon_gem_prime_export(struct drm_device *dev,
+ struct drm_gem_object *gobj,
+ int flags)
+{
+ struct radeon_bo *bo = gem_to_radeon_bo(gobj);
+ if (radeon_ttm_tt_has_userptr(bo->tbo.ttm))
+ return ERR_PTR(-EPERM);
+ return drm_gem_prime_export(dev, gobj, flags);
+}
*/
static int radeon_debugfs_ring_init(struct radeon_device *rdev, struct radeon_ring *ring);
-/**
- * radeon_ring_write - write a value to the ring
- *
- * @ring: radeon_ring structure holding ring information
- * @v: dword (dw) value to write
- *
- * Write a value to the requested ring buffer (all asics).
- */
-void radeon_ring_write(struct radeon_ring *ring, uint32_t v)
-{
-#if DRM_DEBUG_CODE
- if (ring->count_dw <= 0) {
- DRM_ERROR("radeon: writing more dwords to the ring than expected!\n");
- }
-#endif
- ring->ring[ring->wptr++] = v;
- ring->wptr &= ring->ptr_mask;
- ring->count_dw--;
- ring->ring_free_dw--;
-}
-
/**
* radeon_ring_supports_scratch_reg - check if the ring supports
* writing to scratch registers
*
* @rdev: radeon_device pointer
* @ring: radeon_ring structure holding ring information
+ * @hdp_flush: Whether or not to perform an HDP cache flush
*
* Update the wptr (write pointer) to tell the GPU to
* execute new commands on the ring buffer (all asics).
*/
-void radeon_ring_commit(struct radeon_device *rdev, struct radeon_ring *ring)
+void radeon_ring_commit(struct radeon_device *rdev, struct radeon_ring *ring,
+ bool hdp_flush)
{
/* If we are emitting the HDP flush via the ring buffer, we need to
* do it before padding.
*/
- if (rdev->asic->ring[ring->idx]->hdp_flush)
+ if (hdp_flush && rdev->asic->ring[ring->idx]->hdp_flush)
rdev->asic->ring[ring->idx]->hdp_flush(rdev, ring);
/* We pad to match fetch size */
while (ring->wptr & ring->align_mask) {
/* If we are emitting the HDP flush via MMIO, we need to do it after
* all CPU writes to VRAM finished.
*/
- if (rdev->asic->mmio_hdp_flush)
+ if (hdp_flush && rdev->asic->mmio_hdp_flush)
rdev->asic->mmio_hdp_flush(rdev);
radeon_ring_set_wptr(rdev, ring);
}
*
* @rdev: radeon_device pointer
* @ring: radeon_ring structure holding ring information
+ * @hdp_flush: Whether or not to perform an HDP cache flush
*
* Call radeon_ring_commit() then unlock the ring (all asics).
*/
-void radeon_ring_unlock_commit(struct radeon_device *rdev, struct radeon_ring *ring)
+void radeon_ring_unlock_commit(struct radeon_device *rdev, struct radeon_ring *ring,
+ bool hdp_flush)
{
- radeon_ring_commit(rdev, ring);
+ radeon_ring_commit(rdev, ring, hdp_flush);
mutex_unlock(&rdev->ring_lock);
}
radeon_ring_write(ring, data[i]);
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
kfree(data);
return 0;
}
/* Allocate ring buffer */
if (ring->ring_obj == NULL) {
r = radeon_bo_create(rdev, ring->ring_size, PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_GTT,
- (rdev->flags & RADEON_IS_PCIE) ?
- RADEON_GEM_GTT_WC : 0,
+ RADEON_GEM_DOMAIN_GTT, 0, NULL,
NULL, &ring->ring_obj);
if (r) {
dev_err(rdev->dev, "(%d) ring create failed\n", r);
}
r = radeon_bo_create(rdev, size, align, true,
- domain, flags, NULL, &sa_manager->bo);
+ domain, flags, NULL, NULL, &sa_manager->bo);
if (r) {
dev_err(rdev->dev, "(%d) failed to allocate bo for manager\n", r);
return r;
int radeon_semaphore_create(struct radeon_device *rdev,
struct radeon_semaphore **semaphore)
{
- uint32_t *cpu_addr;
+ uint64_t *cpu_addr;
int i, r;
*semaphore = kmalloc(sizeof(struct radeon_semaphore), GFP_KERNEL);
}
/**
- * radeon_semaphore_sync_to - use the semaphore to sync to a fence
+ * radeon_semaphore_sync_fence - use the semaphore to sync to a fence
*
* @semaphore: semaphore object to add fence to
* @fence: fence to sync to
*
* Sync to the fence using this semaphore object
*/
-void radeon_semaphore_sync_to(struct radeon_semaphore *semaphore,
- struct radeon_fence *fence)
+void radeon_semaphore_sync_fence(struct radeon_semaphore *semaphore,
+ struct radeon_fence *fence)
{
struct radeon_fence *other;
semaphore->sync_to[fence->ring] = radeon_fence_later(fence, other);
}
+/**
+ * radeon_semaphore_sync_resv - use the semaphore to sync to a reservation object
+ *
+ * @rdev: radeon device pointer
+ * @sema: semaphore object to add fences from the reservation object to
+ * @resv: reservation object with embedded fences
+ * @shared: true if we should only sync to the exclusive fence
+ *
+ * Sync to all fences in the reservation object using this semaphore object
+ */
+int radeon_semaphore_sync_resv(struct radeon_device *rdev,
+ struct radeon_semaphore *sema,
+ struct reservation_object *resv,
+ bool shared)
+{
+ struct reservation_object_list *flist;
+ struct fence *f;
+ struct radeon_fence *fence;
+ unsigned i;
+ int r = 0;
+
+ /* always sync to the exclusive fence */
+ f = reservation_object_get_excl(resv);
+ fence = f ? to_radeon_fence(f) : NULL;
+ if (fence && fence->rdev == rdev)
+ radeon_semaphore_sync_fence(sema, fence);
+ else if (f)
+ r = fence_wait(f, true);
+
+ flist = reservation_object_get_list(resv);
+ if (shared || !flist || r)
+ return r;
+
+ for (i = 0; i < flist->shared_count; ++i) {
+ f = rcu_dereference_protected(flist->shared[i],
+ reservation_object_held(resv));
+ fence = to_radeon_fence(f);
+ if (fence && fence->rdev == rdev)
+ radeon_semaphore_sync_fence(sema, fence);
+ else
+ r = fence_wait(f, true);
+
+ if (r)
+ break;
+ }
+ return r;
+}
+
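/* Editorial sketch, not part of the patch: the intended calling
 * pattern for the new helper, mirroring the radeon_vm.c call sites
 * updated later in this series (the surrounding call site here is
 * illustrative).
 */
r = radeon_semaphore_sync_resv(rdev, ib.semaphore, bo->tbo.resv, false);
if (r)
	goto error;
r = radeon_ib_schedule(rdev, &ib, NULL, false);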
/**
* radeon_semaphore_sync_rings - sync ring to all registered fences
*
continue;
}
- radeon_ring_commit(rdev, &rdev->ring[i]);
+ radeon_ring_commit(rdev, &rdev->ring[i], false);
radeon_fence_note_sync(fence, ring);
semaphore->gpu_addr += 8;
*/
#include <drm/drmP.h>
-#include <drm/drm_buffer.h>
#include <drm/radeon_drm.h>
#include "radeon_drv.h"
+#include "drm_buffer.h"
/* ================================================================
* Helper functions for client state checking and fixup
}
r = radeon_bo_create(rdev, size, PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
- 0, NULL, &vram_obj);
+ 0, NULL, NULL, &vram_obj);
if (r) {
DRM_ERROR("Failed to create VRAM object\n");
goto out_cleanup;
struct radeon_fence *fence = NULL;
r = radeon_bo_create(rdev, size, PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_GTT, 0, NULL, gtt_obj + i);
+ RADEON_GEM_DOMAIN_GTT, 0, NULL, NULL,
+ gtt_obj + i);
if (r) {
DRM_ERROR("Failed to create GTT object %d\n", i);
goto out_lclean;
radeon_bo_kunmap(gtt_obj[i]);
if (ring == R600_RING_TYPE_DMA_INDEX)
- r = radeon_copy_dma(rdev, gtt_addr, vram_addr, size / RADEON_GPU_PAGE_SIZE, &fence);
+ fence = radeon_copy_dma(rdev, gtt_addr, vram_addr,
+ size / RADEON_GPU_PAGE_SIZE,
+ NULL);
else
- r = radeon_copy_blit(rdev, gtt_addr, vram_addr, size / RADEON_GPU_PAGE_SIZE, &fence);
- if (r) {
+ fence = radeon_copy_blit(rdev, gtt_addr, vram_addr,
+ size / RADEON_GPU_PAGE_SIZE,
+ NULL);
+ if (IS_ERR(fence)) {
DRM_ERROR("Failed GTT->VRAM copy %d\n", i);
+ r = PTR_ERR(fence);
goto out_lclean_unpin;
}
radeon_bo_kunmap(vram_obj);
if (ring == R600_RING_TYPE_DMA_INDEX)
- r = radeon_copy_dma(rdev, vram_addr, gtt_addr, size / RADEON_GPU_PAGE_SIZE, &fence);
+ fence = radeon_copy_dma(rdev, vram_addr, gtt_addr,
+ size / RADEON_GPU_PAGE_SIZE,
+ NULL);
else
- r = radeon_copy_blit(rdev, vram_addr, gtt_addr, size / RADEON_GPU_PAGE_SIZE, &fence);
- if (r) {
+ fence = radeon_copy_blit(rdev, vram_addr, gtt_addr,
+ size / RADEON_GPU_PAGE_SIZE,
+ NULL);
+ if (IS_ERR(fence)) {
DRM_ERROR("Failed VRAM->GTT copy %d\n", i);
+ r = PTR_ERR(fence);
goto out_lclean_unpin;
}
radeon_bo_unreserve(gtt_obj[i]);
	radeon_bo_unref(&gtt_obj[i]);
}
- if (fence)
+ if (fence && !IS_ERR(fence))
radeon_fence_unref(&fence);
break;
}
return r;
}
radeon_fence_emit(rdev, fence, ring->idx);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
return 0;
}
goto out_cleanup;
}
radeon_semaphore_emit_wait(rdev, ringA->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringA);
+ radeon_ring_unlock_commit(rdev, ringA, false);
r = radeon_test_create_and_emit_fence(rdev, ringA, &fence1);
if (r)
goto out_cleanup;
}
radeon_semaphore_emit_wait(rdev, ringA->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringA);
+ radeon_ring_unlock_commit(rdev, ringA, false);
r = radeon_test_create_and_emit_fence(rdev, ringA, &fence2);
if (r)
goto out_cleanup;
}
radeon_semaphore_emit_signal(rdev, ringB->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringB);
+ radeon_ring_unlock_commit(rdev, ringB, false);
r = radeon_fence_wait(fence1, false);
if (r) {
goto out_cleanup;
}
radeon_semaphore_emit_signal(rdev, ringB->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringB);
+ radeon_ring_unlock_commit(rdev, ringB, false);
r = radeon_fence_wait(fence2, false);
if (r) {
goto out_cleanup;
}
radeon_semaphore_emit_wait(rdev, ringA->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringA);
+ radeon_ring_unlock_commit(rdev, ringA, false);
r = radeon_test_create_and_emit_fence(rdev, ringA, &fenceA);
if (r)
goto out_cleanup;
}
radeon_semaphore_emit_wait(rdev, ringB->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringB);
+ radeon_ring_unlock_commit(rdev, ringB, false);
r = radeon_test_create_and_emit_fence(rdev, ringB, &fenceB);
if (r)
goto out_cleanup;
goto out_cleanup;
}
radeon_semaphore_emit_signal(rdev, ringC->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringC);
+ radeon_ring_unlock_commit(rdev, ringC, false);
for (i = 0; i < 30; ++i) {
mdelay(100);
goto out_cleanup;
}
radeon_semaphore_emit_signal(rdev, ringC->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringC);
+ radeon_ring_unlock_commit(rdev, ringC, false);
mdelay(1000);
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/swiotlb.h>
+#include <linux/swap.h>
+#include <linux/pagemap.h>
#include <linux/debugfs.h>
#include "radeon_reg.h"
#include "radeon.h"
static void radeon_evict_flags(struct ttm_buffer_object *bo,
struct ttm_placement *placement)
{
+ static struct ttm_place placements = {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM
+ };
+
struct radeon_bo *rbo;
- static u32 placements = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
if (!radeon_ttm_bo_is_radeon_bo(bo)) {
- placement->fpfn = 0;
- placement->lpfn = 0;
placement->placement = &placements;
placement->busy_placement = &placements;
placement->num_placement = 1;
struct radeon_device *rdev;
uint64_t old_start, new_start;
struct radeon_fence *fence;
+ unsigned num_pages;
int r, ridx;
rdev = radeon_get_rdev(bo->bdev);
BUILD_BUG_ON((PAGE_SIZE % RADEON_GPU_PAGE_SIZE) != 0);
- /* sync other rings */
- fence = bo->sync_obj;
- r = radeon_copy(rdev, old_start, new_start,
- new_mem->num_pages * (PAGE_SIZE / RADEON_GPU_PAGE_SIZE), /* GPU pages */
- &fence);
- /* FIXME: handle copy error */
- r = ttm_bo_move_accel_cleanup(bo, (void *)fence,
+ num_pages = new_mem->num_pages * (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
+ fence = radeon_copy(rdev, old_start, new_start, num_pages, bo->resv);
+ if (IS_ERR(fence))
+ return PTR_ERR(fence);
+
+ r = ttm_bo_move_accel_cleanup(bo, &fence->base,
evict, no_wait_gpu, new_mem);
radeon_fence_unref(&fence);
return r;
struct radeon_device *rdev;
struct ttm_mem_reg *old_mem = &bo->mem;
struct ttm_mem_reg tmp_mem;
- u32 placements;
+ struct ttm_place placements;
struct ttm_placement placement;
int r;
rdev = radeon_get_rdev(bo->bdev);
tmp_mem = *new_mem;
tmp_mem.mm_node = NULL;
- placement.fpfn = 0;
- placement.lpfn = 0;
placement.num_placement = 1;
placement.placement = &placements;
placement.num_busy_placement = 1;
placement.busy_placement = &placements;
- placements = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
+ placements.fpfn = 0;
+ placements.lpfn = 0;
+ placements.flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
r = ttm_bo_mem_space(bo, &placement, &tmp_mem,
interruptible, no_wait_gpu);
if (unlikely(r)) {
struct ttm_mem_reg *old_mem = &bo->mem;
struct ttm_mem_reg tmp_mem;
struct ttm_placement placement;
- u32 placements;
+ struct ttm_place placements;
int r;
rdev = radeon_get_rdev(bo->bdev);
tmp_mem = *new_mem;
tmp_mem.mm_node = NULL;
- placement.fpfn = 0;
- placement.lpfn = 0;
placement.num_placement = 1;
placement.placement = &placements;
placement.num_busy_placement = 1;
placement.busy_placement = &placements;
- placements = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
+ placements.fpfn = 0;
+ placements.lpfn = 0;
+ placements.flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
r = ttm_bo_mem_space(bo, &placement, &tmp_mem,
interruptible, no_wait_gpu);
if (unlikely(r)) {
{
}
-static int radeon_sync_obj_wait(void *sync_obj, bool lazy, bool interruptible)
-{
- return radeon_fence_wait((struct radeon_fence *)sync_obj, interruptible);
-}
+/*
+ * TTM backend functions.
+ */
+struct radeon_ttm_tt {
+ struct ttm_dma_tt ttm;
+ struct radeon_device *rdev;
+ u64 offset;
-static int radeon_sync_obj_flush(void *sync_obj)
+ uint64_t userptr;
+ struct mm_struct *usermm;
+ uint32_t userflags;
+};
+
+/* prepare the sg table with the user pages */
+static int radeon_ttm_tt_pin_userptr(struct ttm_tt *ttm)
{
+ struct radeon_device *rdev = radeon_get_rdev(ttm->bdev);
+ struct radeon_ttm_tt *gtt = (void *)ttm;
+ unsigned pinned = 0, nents;
+ int r;
+
+ int write = !(gtt->userflags & RADEON_GEM_USERPTR_READONLY);
+ enum dma_data_direction direction = write ?
+ DMA_BIDIRECTIONAL : DMA_TO_DEVICE;
+
+ if (current->mm != gtt->usermm)
+ return -EPERM;
+
+ if (gtt->userflags & RADEON_GEM_USERPTR_ANONONLY) {
+ /* check that we only pin down anonymous memory
+ to prevent problems with writeback */
+ unsigned long end = gtt->userptr + ttm->num_pages * PAGE_SIZE;
+ struct vm_area_struct *vma;
+ vma = find_vma(gtt->usermm, gtt->userptr);
+ if (!vma || vma->vm_file || vma->vm_end < end)
+ return -EPERM;
+ }
+
+ do {
+ unsigned num_pages = ttm->num_pages - pinned;
+ uint64_t userptr = gtt->userptr + pinned * PAGE_SIZE;
+ struct page **pages = ttm->pages + pinned;
+
+ r = get_user_pages(current, current->mm, userptr, num_pages,
+ write, 0, pages, NULL);
+ if (r < 0)
+ goto release_pages;
+
+ pinned += r;
+
+ } while (pinned < ttm->num_pages);
+
+ r = sg_alloc_table_from_pages(ttm->sg, ttm->pages, ttm->num_pages, 0,
+ ttm->num_pages << PAGE_SHIFT,
+ GFP_KERNEL);
+ if (r)
+ goto release_sg;
+
+ r = -ENOMEM;
+ nents = dma_map_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction);
+ if (nents != ttm->sg->nents)
+ goto release_sg;
+
+ drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
+ gtt->ttm.dma_address, ttm->num_pages);
+
return 0;
-}
-static void radeon_sync_obj_unref(void **sync_obj)
-{
- radeon_fence_unref((struct radeon_fence **)sync_obj);
-}
+release_sg:
+ kfree(ttm->sg);
-static void *radeon_sync_obj_ref(void *sync_obj)
-{
- return radeon_fence_ref((struct radeon_fence *)sync_obj);
+release_pages:
+ release_pages(ttm->pages, pinned, 0);
+ return r;
}
-static bool radeon_sync_obj_signaled(void *sync_obj)
+static void radeon_ttm_tt_unpin_userptr(struct ttm_tt *ttm)
{
- return radeon_fence_signaled((struct radeon_fence *)sync_obj);
-}
+ struct radeon_device *rdev = radeon_get_rdev(ttm->bdev);
+ struct radeon_ttm_tt *gtt = (void *)ttm;
+ struct scatterlist *sg;
+ int i;
-/*
- * TTM backend functions.
- */
-struct radeon_ttm_tt {
- struct ttm_dma_tt ttm;
- struct radeon_device *rdev;
- u64 offset;
-};
+ int write = !(gtt->userflags & RADEON_GEM_USERPTR_READONLY);
+ enum dma_data_direction direction = write ?
+ DMA_BIDIRECTIONAL : DMA_TO_DEVICE;
+
+ /* free the sg table and pages again */
+ dma_unmap_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction);
+
+ for_each_sg(ttm->sg->sgl, sg, ttm->sg->nents, i) {
+ struct page *page = sg_page(sg);
+
+ if (!(gtt->userflags & RADEON_GEM_USERPTR_READONLY))
+ set_page_dirty(page);
+
+ mark_page_accessed(page);
+ page_cache_release(page);
+ }
+
+ sg_free_table(ttm->sg);
+}
static int radeon_ttm_backend_bind(struct ttm_tt *ttm,
struct ttm_mem_reg *bo_mem)
RADEON_GART_PAGE_WRITE;
int r;
+ if (gtt->userptr) {
+ radeon_ttm_tt_pin_userptr(ttm);
+ flags &= ~RADEON_GART_PAGE_WRITE;
+ }
+
gtt->offset = (unsigned long)(bo_mem->start << PAGE_SHIFT);
if (!ttm->num_pages) {
WARN(1, "nothing to bind %lu pages for mreg %p back %p!\n",
struct radeon_ttm_tt *gtt = (void *)ttm;
radeon_gart_unbind(gtt->rdev, gtt->offset, ttm->num_pages);
+
+ if (gtt->userptr)
+ radeon_ttm_tt_unpin_userptr(ttm);
+
return 0;
}
	return &gtt->ttm.ttm;
}
+static struct radeon_ttm_tt *radeon_ttm_tt_to_gtt(struct ttm_tt *ttm)
+{
+ if (!ttm || ttm->func != &radeon_backend_func)
+ return NULL;
+ return (struct radeon_ttm_tt *)ttm;
+}
+
static int radeon_ttm_tt_populate(struct ttm_tt *ttm)
{
+ struct radeon_ttm_tt *gtt = radeon_ttm_tt_to_gtt(ttm);
struct radeon_device *rdev;
- struct radeon_ttm_tt *gtt = (void *)ttm;
unsigned i;
int r;
bool slave = !!(ttm->page_flags & TTM_PAGE_FLAG_SG);
if (ttm->state != tt_unpopulated)
return 0;
+ if (gtt && gtt->userptr) {
+ ttm->sg = kcalloc(1, sizeof(struct sg_table), GFP_KERNEL);
+ if (!ttm->sg)
+ return -ENOMEM;
+
+ ttm->page_flags |= TTM_PAGE_FLAG_SG;
+ ttm->state = tt_unbound;
+ return 0;
+ }
+
if (slave && ttm->sg) {
drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
gtt->ttm.dma_address, ttm->num_pages);
static void radeon_ttm_tt_unpopulate(struct ttm_tt *ttm)
{
struct radeon_device *rdev;
- struct radeon_ttm_tt *gtt = (void *)ttm;
+ struct radeon_ttm_tt *gtt = radeon_ttm_tt_to_gtt(ttm);
unsigned i;
bool slave = !!(ttm->page_flags & TTM_PAGE_FLAG_SG);
+ if (gtt && gtt->userptr) {
+ kfree(ttm->sg);
+ ttm->page_flags &= ~TTM_PAGE_FLAG_SG;
+ return;
+ }
+
if (slave)
return;
ttm_pool_unpopulate(ttm);
}
+int radeon_ttm_tt_set_userptr(struct ttm_tt *ttm, uint64_t addr,
+ uint32_t flags)
+{
+ struct radeon_ttm_tt *gtt = radeon_ttm_tt_to_gtt(ttm);
+
+ if (gtt == NULL)
+ return -EINVAL;
+
+ gtt->userptr = addr;
+ gtt->usermm = current->mm;
+ gtt->userflags = flags;
+ return 0;
+}
+
+bool radeon_ttm_tt_has_userptr(struct ttm_tt *ttm)
+{
+ struct radeon_ttm_tt *gtt = radeon_ttm_tt_to_gtt(ttm);
+
+ if (gtt == NULL)
+ return false;
+
+ return !!gtt->userptr;
+}
+
+bool radeon_ttm_tt_is_readonly(struct ttm_tt *ttm)
+{
+ struct radeon_ttm_tt *gtt = radeon_ttm_tt_to_gtt(ttm);
+
+ if (gtt == NULL)
+ return false;
+
+ return !!(gtt->userflags & RADEON_GEM_USERPTR_READONLY);
+}
+
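/* Editorial sketch of the assumed call sites ('args' is a hypothetical
 * GEM ioctl payload): the user pointer is recorded at BO creation
 * time...
 */
r = radeon_ttm_tt_set_userptr(bo->tbo.ttm, args->addr, args->flags);
if (r)
	return r;

/* ...and paths that must never expose user memory check for it, just
 * as radeon_gem_prime_export() above does:
 */
if (radeon_ttm_tt_has_userptr(bo->tbo.ttm))
	return ERR_PTR(-EPERM);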
static struct ttm_bo_driver radeon_bo_driver = {
.ttm_tt_create = &radeon_ttm_tt_create,
.ttm_tt_populate = &radeon_ttm_tt_populate,
.evict_flags = &radeon_evict_flags,
.move = &radeon_bo_move,
.verify_access = &radeon_verify_access,
- .sync_obj_signaled = &radeon_sync_obj_signaled,
- .sync_obj_wait = &radeon_sync_obj_wait,
- .sync_obj_flush = &radeon_sync_obj_flush,
- .sync_obj_unref = &radeon_sync_obj_unref,
- .sync_obj_ref = &radeon_sync_obj_ref,
.move_notify = &radeon_bo_move_notify,
.fault_reserve_notify = &radeon_bo_fault_reserve_notify,
.io_mem_reserve = &radeon_ttm_io_mem_reserve,
radeon_ttm_set_active_vram_size(rdev, rdev->mc.visible_vram_size);
r = radeon_bo_create(rdev, 256 * 1024, PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_VRAM, 0,
+ RADEON_GEM_DOMAIN_VRAM, 0, NULL,
NULL, &rdev->stollen_vga_memory);
if (r) {
return r;
int r;
if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET)) {
- return drm_mmap(filp, vma);
+ return -EINVAL;
}
file_priv = filp->private_data;
#define UVD_IDLE_TIMEOUT_MS 1000
/* Firmware Names */
+#define FIRMWARE_R600 "radeon/R600_uvd.bin"
+#define FIRMWARE_RS780 "radeon/RS780_uvd.bin"
+#define FIRMWARE_RV770 "radeon/RV770_uvd.bin"
#define FIRMWARE_RV710 "radeon/RV710_uvd.bin"
#define FIRMWARE_CYPRESS "radeon/CYPRESS_uvd.bin"
#define FIRMWARE_SUMO "radeon/SUMO_uvd.bin"
#define FIRMWARE_TAHITI "radeon/TAHITI_uvd.bin"
#define FIRMWARE_BONAIRE "radeon/BONAIRE_uvd.bin"
+MODULE_FIRMWARE(FIRMWARE_R600);
+MODULE_FIRMWARE(FIRMWARE_RS780);
+MODULE_FIRMWARE(FIRMWARE_RV770);
MODULE_FIRMWARE(FIRMWARE_RV710);
MODULE_FIRMWARE(FIRMWARE_CYPRESS);
MODULE_FIRMWARE(FIRMWARE_SUMO);
INIT_DELAYED_WORK(&rdev->uvd.idle_work, radeon_uvd_idle_work_handler);
switch (rdev->family) {
+ case CHIP_RV610:
+ case CHIP_RV630:
+ case CHIP_RV670:
+ case CHIP_RV620:
+ case CHIP_RV635:
+ fw_name = FIRMWARE_R600;
+ break;
+
+ case CHIP_RS780:
+ case CHIP_RS880:
+ fw_name = FIRMWARE_RS780;
+ break;
+
+ case CHIP_RV770:
+ fw_name = FIRMWARE_RV770;
+ break;
+
case CHIP_RV710:
case CHIP_RV730:
case CHIP_RV740:
}
bo_size = RADEON_GPU_PAGE_ALIGN(rdev->uvd_fw->size + 8) +
- RADEON_UVD_STACK_SIZE + RADEON_UVD_HEAP_SIZE;
+ RADEON_UVD_STACK_SIZE + RADEON_UVD_HEAP_SIZE +
+ RADEON_GPU_PAGE_SIZE;
r = radeon_bo_create(rdev, bo_size, PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_VRAM, 0, NULL, &rdev->uvd.vcpu_bo);
+ RADEON_GEM_DOMAIN_VRAM, 0, NULL,
+ NULL, &rdev->uvd.vcpu_bo);
if (r) {
dev_err(rdev->dev, "(%d) failed to allocate UVD bo\n", r);
return r;
return 0;
}
-void radeon_uvd_force_into_uvd_segment(struct radeon_bo *rbo)
+void radeon_uvd_force_into_uvd_segment(struct radeon_bo *rbo,
+ uint32_t allowed_domains)
{
- rbo->placement.fpfn = 0 >> PAGE_SHIFT;
- rbo->placement.lpfn = (256 * 1024 * 1024) >> PAGE_SHIFT;
+ int i;
+
+ for (i = 0; i < rbo->placement.num_placement; ++i) {
+ rbo->placements[i].fpfn = 0 >> PAGE_SHIFT;
+ rbo->placements[i].lpfn = (256 * 1024 * 1024) >> PAGE_SHIFT;
+ }
+
+ /* If it must be in VRAM, it must be in the first segment as well */
+ if (allowed_domains == RADEON_GEM_DOMAIN_VRAM)
+ return;
+
+ /* abort if we already have more than one placement */
+ if (rbo->placement.num_placement > 1)
+ return;
+
+ /* add another 256MB segment */
+ rbo->placements[1] = rbo->placements[0];
+ rbo->placements[1].fpfn += (256 * 1024 * 1024) >> PAGE_SHIFT;
+ rbo->placements[1].lpfn += (256 * 1024 * 1024) >> PAGE_SHIFT;
+ rbo->placement.num_placement++;
+ rbo->placement.num_busy_placement++;
}
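/* Worked example for the helper above, assuming PAGE_SHIFT == 12:
 * a BO that is also allowed outside VRAM ends up with two 256MB
 * windows, expressed in pages:
 *   placements[0]: fpfn = 0x00000, lpfn = 0x10000  (first 256MB)
 *   placements[1]: fpfn = 0x10000, lpfn = 0x20000  (second 256MB)
 */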
void radeon_uvd_free_handles(struct radeon_device *rdev, struct drm_file *filp)
{
int32_t *msg, msg_type, handle;
unsigned img_size = 0;
+ struct fence *f;
void *ptr;
int i, r;
return -EINVAL;
}
- if (bo->tbo.sync_obj) {
- r = radeon_fence_wait(bo->tbo.sync_obj, false);
+ f = reservation_object_get_excl(bo->tbo.resv);
+ if (f) {
+ r = radeon_fence_wait((struct radeon_fence *)f, false);
if (r) {
DRM_ERROR("Failed waiting for UVD message (%d)!\n", r);
return r;
}
static int radeon_uvd_send_msg(struct radeon_device *rdev,
- int ring, struct radeon_bo *bo,
+ int ring, uint64_t addr,
struct radeon_fence **fence)
{
- struct ttm_validate_buffer tv;
- struct ww_acquire_ctx ticket;
- struct list_head head;
struct radeon_ib ib;
- uint64_t addr;
int i, r;
- memset(&tv, 0, sizeof(tv));
- tv.bo = &bo->tbo;
-
- INIT_LIST_HEAD(&head);
- list_add(&tv.head, &head);
-
- r = ttm_eu_reserve_buffers(&ticket, &head);
- if (r)
- return r;
-
- radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_VRAM);
- radeon_uvd_force_into_uvd_segment(bo);
-
- r = ttm_bo_validate(&bo->tbo, &bo->placement, true, false);
- if (r)
- goto err;
-
r = radeon_ib_get(rdev, ring, &ib, NULL, 64);
if (r)
- goto err;
+ return r;
- addr = radeon_bo_gpu_offset(bo);
ib.ptr[0] = PACKET0(UVD_GPCOM_VCPU_DATA0, 0);
ib.ptr[1] = addr;
ib.ptr[2] = PACKET0(UVD_GPCOM_VCPU_DATA1, 0);
ib.ptr[i] = PACKET2(0);
ib.length_dw = 16;
- r = radeon_ib_schedule(rdev, &ib, NULL);
- if (r)
- goto err;
- ttm_eu_fence_buffer_objects(&ticket, &head, ib.fence);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (fence)
*fence = radeon_fence_ref(ib.fence);
radeon_ib_free(rdev, &ib);
- radeon_bo_unref(&bo);
- return 0;
-
-err:
- ttm_eu_backoff_reservation(&ticket, &head);
return r;
}
int radeon_uvd_get_create_msg(struct radeon_device *rdev, int ring,
uint32_t handle, struct radeon_fence **fence)
{
- struct radeon_bo *bo;
- uint32_t *msg;
- int r, i;
+ /* we use the last page of the vcpu bo for the UVD message */
+ uint64_t offs = radeon_bo_size(rdev->uvd.vcpu_bo) -
+ RADEON_GPU_PAGE_SIZE;
- r = radeon_bo_create(rdev, 1024, PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_VRAM, 0, NULL, &bo);
- if (r)
- return r;
+ uint32_t *msg = rdev->uvd.cpu_addr + offs;
+ uint64_t addr = rdev->uvd.gpu_addr + offs;
- r = radeon_bo_reserve(bo, false);
- if (r) {
- radeon_bo_unref(&bo);
- return r;
- }
+ int r, i;
- r = radeon_bo_kmap(bo, (void **)&msg);
- if (r) {
- radeon_bo_unreserve(bo);
- radeon_bo_unref(&bo);
+ r = radeon_bo_reserve(rdev->uvd.vcpu_bo, true);
+ if (r)
return r;
- }
/* stitch together an UVD create msg */
msg[0] = cpu_to_le32(0x00000de4);
for (i = 11; i < 1024; ++i)
msg[i] = cpu_to_le32(0x0);
- radeon_bo_kunmap(bo);
- radeon_bo_unreserve(bo);
-
- return radeon_uvd_send_msg(rdev, ring, bo, fence);
+ r = radeon_uvd_send_msg(rdev, ring, addr, fence);
+ radeon_bo_unreserve(rdev->uvd.vcpu_bo);
+ return r;
}
int radeon_uvd_get_destroy_msg(struct radeon_device *rdev, int ring,
uint32_t handle, struct radeon_fence **fence)
{
- struct radeon_bo *bo;
- uint32_t *msg;
- int r, i;
+ /* we use the last page of the vcpu bo for the UVD message */
+ uint64_t offs = radeon_bo_size(rdev->uvd.vcpu_bo) -
+ RADEON_GPU_PAGE_SIZE;
- r = radeon_bo_create(rdev, 1024, PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_VRAM, 0, NULL, &bo);
- if (r)
- return r;
+ uint32_t *msg = rdev->uvd.cpu_addr + offs;
+ uint64_t addr = rdev->uvd.gpu_addr + offs;
- r = radeon_bo_reserve(bo, false);
- if (r) {
- radeon_bo_unref(&bo);
- return r;
- }
+ int r, i;
- r = radeon_bo_kmap(bo, (void **)&msg);
- if (r) {
- radeon_bo_unreserve(bo);
- radeon_bo_unref(&bo);
+ r = radeon_bo_reserve(rdev->uvd.vcpu_bo, true);
+ if (r)
return r;
- }
/* stitch together an UVD destroy msg */
msg[0] = cpu_to_le32(0x00000de4);
for (i = 4; i < 1024; ++i)
msg[i] = cpu_to_le32(0x0);
- radeon_bo_kunmap(bo);
- radeon_bo_unreserve(bo);
-
- return radeon_uvd_send_msg(rdev, ring, bo, fence);
+ r = radeon_uvd_send_msg(rdev, ring, addr, fence);
+ radeon_bo_unreserve(rdev->uvd.vcpu_bo);
+ return r;
}
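/* Note: both message helpers above reuse the extra RADEON_GPU_PAGE_SIZE
 * that radeon_uvd_init() now appends to the vcpu BO, so no temporary BO
 * allocation (and no reserve/validate/fence dance) is needed per
 * message. */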
/**
size = RADEON_GPU_PAGE_ALIGN(rdev->vce_fw->size) +
RADEON_VCE_STACK_SIZE + RADEON_VCE_HEAP_SIZE;
r = radeon_bo_create(rdev, size, PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_VRAM, 0, NULL, &rdev->vce.vcpu_bo);
+ RADEON_GEM_DOMAIN_VRAM, 0, NULL, NULL,
+ &rdev->vce.vcpu_bo);
if (r) {
dev_err(rdev->dev, "(%d) failed to allocate VCE bo\n", r);
return r;
for (i = ib.length_dw; i < ib_size_dw; ++i)
ib.ptr[i] = 0x0;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
}
for (i = ib.length_dw; i < ib_size_dw; ++i)
ib.ptr[i] = 0x0;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
}
return r;
}
radeon_ring_write(ring, VCE_CMD_END);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
if (vce_v1_0_get_rptr(rdev, ring) != rptr)
list[0].prefered_domains = RADEON_GEM_DOMAIN_VRAM;
list[0].allowed_domains = RADEON_GEM_DOMAIN_VRAM;
list[0].tv.bo = &vm->page_directory->tbo;
+ list[0].tv.shared = false;
list[0].tiling_flags = 0;
list[0].handle = 0;
list_add(&list[0].tv.head, head);
list[idx].prefered_domains = RADEON_GEM_DOMAIN_VRAM;
list[idx].allowed_domains = RADEON_GEM_DOMAIN_VRAM;
list[idx].tv.bo = &list[idx].robj->tbo;
+ list[idx].tv.shared = false;
list[idx].tiling_flags = 0;
list[idx].handle = 0;
list_add(&list[idx++].tv.head, head);
memset(&tv, 0, sizeof(tv));
tv.bo = &bo->tbo;
+ tv.shared = false;
INIT_LIST_HEAD(&head);
list_add(&tv.head, &head);
- r = ttm_eu_reserve_buffers(&ticket, &head);
+ r = ttm_eu_reserve_buffers(&ticket, &head, true);
if (r)
return r;
radeon_asic_vm_pad_ib(rdev, &ib);
WARN_ON(ib.length_dw > 64);
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r)
goto error;
- ttm_eu_fence_buffer_objects(&ticket, &head, ib.fence);
+ ttm_eu_fence_buffer_objects(&ticket, &head, &ib.fence->base);
radeon_ib_free(rdev, &ib);
return 0;
/* add a clone of the bo_va to clear the old address */
struct radeon_bo_va *tmp;
tmp = kzalloc(sizeof(struct radeon_bo_va), GFP_KERNEL);
+ if (!tmp) {
+ mutex_unlock(&vm->mutex);
+ return -ENOMEM;
+ }
tmp->it.start = bo_va->it.start;
tmp->it.last = bo_va->it.last;
tmp->vm = vm;
r = radeon_bo_create(rdev, RADEON_VM_PTE_COUNT * 8,
RADEON_GPU_PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_VRAM, 0, NULL, &pt);
+ RADEON_GEM_DOMAIN_VRAM, 0,
+ NULL, NULL, &pt);
if (r)
return r;
if (ib.length_dw != 0) {
radeon_asic_vm_pad_ib(rdev, &ib);
- radeon_semaphore_sync_to(ib.semaphore, pd->tbo.sync_obj);
- radeon_semaphore_sync_to(ib.semaphore, vm->last_id_use);
+
+ radeon_semaphore_sync_resv(rdev, ib.semaphore, pd->tbo.resv, false);
+ radeon_semaphore_sync_fence(ib.semaphore, vm->last_id_use);
WARN_ON(ib.length_dw > ndw);
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
radeon_ib_free(rdev, &ib);
return r;
unsigned nptes;
uint64_t pte;
- radeon_semaphore_sync_to(ib->semaphore, pt->tbo.sync_obj);
+ radeon_semaphore_sync_resv(rdev, ib->semaphore, pt->tbo.resv, false);
if ((addr & ~mask) == (end & ~mask))
nptes = end - addr;
bo_va->flags &= ~RADEON_VM_PAGE_VALID;
bo_va->flags &= ~RADEON_VM_PAGE_SYSTEM;
bo_va->flags &= ~RADEON_VM_PAGE_SNOOPED;
+ if (bo_va->bo && radeon_ttm_tt_is_readonly(bo_va->bo->tbo.ttm))
+ bo_va->flags &= ~RADEON_VM_PAGE_WRITEABLE;
+
if (mem) {
addr = mem->start << PAGE_SHIFT;
if (mem->mem_type != TTM_PL_SYSTEM) {
radeon_asic_vm_pad_ib(rdev, &ib);
WARN_ON(ib.length_dw > ndw);
- radeon_semaphore_sync_to(ib.semaphore, vm->fence);
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ radeon_semaphore_sync_fence(ib.semaphore, vm->fence);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
radeon_ib_free(rdev, &ib);
return r;
r = radeon_bo_create(rdev, pd_size, align, true,
RADEON_GEM_DOMAIN_VRAM, 0, NULL,
- &vm->page_directory);
+ NULL, &vm->page_directory);
if (r)
return r;
radeon_ring_write(ring, GEOMETRY_ROUND_NEAREST | COLOR_ROUND_NEAREST);
radeon_ring_write(ring, PACKET0(0x20C8, 0));
radeon_ring_write(ring, 0);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
int rv515_mc_wait_for_idle(struct radeon_device *rdev)
* Jerome Glisse
*/
#include <linux/firmware.h>
-#include <linux/platform_device.h>
#include <linux/slab.h>
#include <drm/drmP.h>
#include "radeon.h"
u32 hdp_host_path_cntl;
u32 sq_dyn_gpr_size_simd_ab_0;
u32 gb_tiling_config = 0;
- u32 cc_rb_backend_disable = 0;
u32 cc_gc_shader_pipe_config = 0;
u32 mc_arb_ramcfg;
u32 db_debug4, tmp;
WREG32(SPI_CONFIG_CNTL, 0);
}
- cc_rb_backend_disable = RREG32(CC_RB_BACKEND_DISABLE) & 0x00ff0000;
- tmp = R7XX_MAX_BACKENDS - r600_count_pipe_bits(cc_rb_backend_disable >> 16);
- if (tmp < rdev->config.rv770.max_backends) {
- rdev->config.rv770.max_backends = tmp;
- }
-
cc_gc_shader_pipe_config = RREG32(CC_GC_SHADER_PIPE_CONFIG) & 0xffffff00;
- tmp = R7XX_MAX_PIPES - r600_count_pipe_bits((cc_gc_shader_pipe_config >> 8) & R7XX_MAX_PIPES_MASK);
- if (tmp < rdev->config.rv770.max_pipes) {
- rdev->config.rv770.max_pipes = tmp;
- }
- tmp = R7XX_MAX_SIMDS - r600_count_pipe_bits((cc_gc_shader_pipe_config >> 16) & R7XX_MAX_SIMDS_MASK);
- if (tmp < rdev->config.rv770.max_simds) {
- rdev->config.rv770.max_simds = tmp;
- }
tmp = rdev->config.rv770.max_simds -
r600_count_pipe_bits((cc_gc_shader_pipe_config >> 16) & R7XX_MAX_SIMDS_MASK);
rdev->config.rv770.active_simds = tmp;
rdev->config.rv770.tiling_npipes = rdev->config.rv770.max_tile_pipes;
disabled_rb_mask = (RREG32(CC_RB_BACKEND_DISABLE) >> 16) & R7XX_MAX_BACKENDS_MASK;
+ tmp = 0;
+ for (i = 0; i < rdev->config.rv770.max_backends; i++)
+ tmp |= (1 << i);
+ /* if all the backends are disabled, fix it up here */
+ if ((disabled_rb_mask & tmp) == tmp) {
+ for (i = 0; i < rdev->config.rv770.max_backends; i++)
+ disabled_rb_mask &= ~(1 << i);
+ }
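/* Worked example: with max_backends == 2 the mask tmp is 0x3; if the
 * register reports both backends disabled (disabled_rb_mask & 0x3 ==
 * 0x3), the fixup above clears those bits so the configured backends
 * remain usable. */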
tmp = (gb_tiling_config & PIPE_TILING__MASK) >> PIPE_TILING__SHIFT;
tmp = r6xx_remap_render_backend(rdev, tmp, rdev->config.rv770.max_backends,
R7XX_MAX_BACKENDS, disabled_rb_mask);
* @src_offset: src GPU address
* @dst_offset: dst GPU address
* @num_gpu_pages: number of GPU pages to xfer
- * @fence: radeon fence object
+ * @resv: reservation object to sync to
*
* Copy GPU paging using the DMA engine (r7xx).
* Used by the radeon ttm implementation to move pages if
* registered as the asic copy callback.
*/
-int rv770_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence)
+struct radeon_fence *rv770_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv)
{
struct radeon_semaphore *sem = NULL;
+ struct radeon_fence *fence;
int ring_index = rdev->asic->copy.dma_ring_index;
struct radeon_ring *ring = &rdev->ring[ring_index];
u32 size_in_dw, cur_size_in_dw;
r = radeon_semaphore_create(rdev, &sem);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
- return r;
+ return ERR_PTR(r);
}
size_in_dw = (num_gpu_pages << RADEON_GPU_PAGE_SHIFT) / 4;
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_resv(rdev, sem, resv, false);
radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
dst_offset += cur_size_in_dw * 4;
}
- r = radeon_fence_emit(rdev, fence, ring->idx);
+ r = radeon_fence_emit(rdev, &fence, ring->idx);
if (r) {
radeon_ring_unlock_undo(rdev, ring);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_ring_unlock_commit(rdev, ring);
- radeon_semaphore_free(rdev, &sem, *fence);
+ radeon_ring_unlock_commit(rdev, ring, false);
+ radeon_semaphore_free(rdev, &sem, fence);
- return r;
+ return fence;
}
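/* Editorial sketch: the copy entry points now return the fence instead
 * of filling one in by reference, so callers follow this pattern (see
 * the radeon_test.c hunk above):
 */
fence = rv770_copy_dma(rdev, src_offset, dst_offset, num_gpu_pages, resv);
if (IS_ERR(fence))
	return PTR_ERR(fence);
/* ... wait on or schedule against the fence ... */
radeon_fence_unref(&fence);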
u32 sx_debug_1;
u32 hdp_host_path_cntl;
u32 tmp;
- int i, j, k;
+ int i, j;
switch (rdev->family) {
case CHIP_TAHITI:
rdev->config.si.max_sh_per_se,
rdev->config.si.max_cu_per_sh);
+ rdev->config.si.active_cus = 0;
for (i = 0; i < rdev->config.si.max_shader_engines; i++) {
for (j = 0; j < rdev->config.si.max_sh_per_se; j++) {
- for (k = 0; k < rdev->config.si.max_cu_per_sh; k++) {
- rdev->config.si.active_cus +=
- hweight32(si_get_cu_active_bitmap(rdev, i, j));
- }
+ rdev->config.si.active_cus +=
+ hweight32(si_get_cu_active_bitmap(rdev, i, j));
}
}
radeon_ring_write(ring, PACKET3_BASE_INDEX(CE_PARTITION_BASE));
radeon_ring_write(ring, 0xc000);
radeon_ring_write(ring, 0xe000);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
si_cp_enable(rdev, true);
radeon_ring_write(ring, 0x0000000e); /* VGT_VERTEX_REUSE_BLOCK_CNTL */
radeon_ring_write(ring, 0x00000010); /* VGT_OUT_DEALLOC_CNTL */
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = RADEON_RING_TYPE_GFX_INDEX; i <= CAYMAN_RING_TYPE_CP2_INDEX; ++i) {
ring = &rdev->ring[i];
radeon_ring_write(ring, PACKET3_COMPUTE(PACKET3_CLEAR_STATE, 0));
radeon_ring_write(ring, 0);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
return 0;
for (i = 1; i < 16; i++) {
if (i < 8)
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
- rdev->gart.table_addr >> 12);
+ rdev->vm_manager.saved_table_addr[i]);
else
WREG32(VM_CONTEXT8_PAGE_TABLE_BASE_ADDR + ((i - 8) << 2),
- rdev->gart.table_addr >> 12);
+ rdev->vm_manager.saved_table_addr[i]);
}
/* enable context1-15 */
static void si_pcie_gart_disable(struct radeon_device *rdev)
{
+ unsigned i;
+
+ for (i = 1; i < 16; ++i) {
+ uint32_t reg;
+ if (i < 8)
+ reg = VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2);
+ else
+ reg = VM_CONTEXT8_PAGE_TABLE_BASE_ADDR + ((i - 8) << 2);
+ rdev->vm_manager.saved_table_addr[i] = RREG32(reg);
+ }
+
/* Disable all tables */
WREG32(VM_CONTEXT0_CNTL, 0);
WREG32(VM_CONTEXT1_CNTL, 0);
int si_ib_parse(struct radeon_device *rdev, struct radeon_ib *ib)
{
int ret = 0;
- u32 idx = 0;
+ u32 idx = 0, i;
struct radeon_cs_packet pkt;
do {
switch (pkt.type) {
case RADEON_PACKET_TYPE0:
dev_err(rdev->dev, "Packet0 not allowed!\n");
+ for (i = 0; i < ib->length_dw; i++) {
+ if (i == idx)
+ printk("\t0x%08x <---\n", ib->ptr[i]);
+ else
+ printk("\t0x%08x\n", ib->ptr[i]);
+ }
ret = -EINVAL;
break;
case RADEON_PACKET_TYPE2:
/* flush hdp cache */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, HDP_MEM_COHERENCY_FLUSH_CNTL >> 2);
radeon_ring_write(ring, 0);
/* bits 0-15 are the VM contexts0-15 */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, VM_INVALIDATE_REQUEST >> 2);
radeon_ring_write(ring, 0);
int ret, i;
u16 tmp16;
+ if (pci_is_root_bus(rdev->pdev->bus))
+ return;
+
if (radeon_pcie_gen2 == 0)
return;
if (orig != data)
WREG32_PIF_PHY1(PB1_PIF_CNTL, data);
- if (!disable_clkreq) {
+ if (!disable_clkreq &&
+ !pci_is_root_bus(rdev->pdev->bus)) {
struct pci_dev *root = rdev->pdev->bus->self;
u32 lnkcap;
* @src_offset: src GPU address
* @dst_offset: dst GPU address
* @num_gpu_pages: number of GPU pages to xfer
- * @fence: radeon fence object
+ * @resv: reservation object to sync to
*
* Copy GPU paging using the DMA engine (SI).
* Used by the radeon ttm implementation to move pages if
* registered as the asic copy callback.
*/
-int si_copy_dma(struct radeon_device *rdev,
- uint64_t src_offset, uint64_t dst_offset,
- unsigned num_gpu_pages,
- struct radeon_fence **fence)
+struct radeon_fence *si_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+ unsigned num_gpu_pages,
+ struct reservation_object *resv)
{
struct radeon_semaphore *sem = NULL;
+ struct radeon_fence *fence;
int ring_index = rdev->asic->copy.dma_ring_index;
struct radeon_ring *ring = &rdev->ring[ring_index];
u32 size_in_bytes, cur_size_in_bytes;
r = radeon_semaphore_create(rdev, &sem);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
- return r;
+ return ERR_PTR(r);
}
size_in_bytes = (num_gpu_pages << RADEON_GPU_PAGE_SHIFT);
if (r) {
DRM_ERROR("radeon: moving bo (%d).\n", r);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_resv(rdev, sem, resv, false);
radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
dst_offset += cur_size_in_bytes;
}
- r = radeon_fence_emit(rdev, fence, ring->idx);
+ r = radeon_fence_emit(rdev, &fence, ring->idx);
if (r) {
radeon_ring_unlock_undo(rdev, ring);
radeon_semaphore_free(rdev, &sem, NULL);
- return r;
+ return ERR_PTR(r);
}
- radeon_ring_unlock_commit(rdev, ring);
- radeon_semaphore_free(rdev, &sem, *fence);
+ radeon_ring_unlock_commit(rdev, ring, false);
+ radeon_semaphore_free(rdev, &sem, fence);
- return r;
+ return fence;
}
bool disable_sclk_switching = false;
u32 mclk, sclk;
u16 vddc, vddci;
- u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
int i;
if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
}
}
- /* limit clocks to max supported clocks based on voltage dependency tables */
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
- &max_sclk_vddc);
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
- &max_mclk_vddci);
- btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
- &max_mclk_vddc);
-
- for (i = 0; i < ps->performance_level_count; i++) {
- if (max_sclk_vddc) {
- if (ps->performance_levels[i].sclk > max_sclk_vddc)
- ps->performance_levels[i].sclk = max_sclk_vddc;
- }
- if (max_mclk_vddci) {
- if (ps->performance_levels[i].mclk > max_mclk_vddci)
- ps->performance_levels[i].mclk = max_mclk_vddci;
- }
- if (max_mclk_vddc) {
- if (ps->performance_levels[i].mclk > max_mclk_vddc)
- ps->performance_levels[i].mclk = max_mclk_vddc;
- }
- }
-
/* XXX validate the min clocks required for display */
if (disable_mclk_switching) {
# define DESCRIPTION16(x) (((x) & 0xff) << 0)
# define DESCRIPTION17(x) (((x) & 0xff) << 8)
-#define AZ_F0_CODEC_PIN_CONTROL_HOTPLUG_CONTROL 0x54
+#define AZ_F0_CODEC_PIN_CONTROL_HOT_PLUG_CONTROL 0x54
# define AUDIO_ENABLED (1 << 31)
#define AZ_F0_CODEC_PIN_CONTROL_RESPONSE_CONFIGURATION_DEFAULT 0x56
for (i = 0; i < SUMO_MAX_HARDWARE_POWERLEVELS; i++)
pi->at[i] = TRINITY_AT_DFLT;
- /* There are stability issues reported on with
- * bapm enabled when switching between AC and battery
- * power. At the same time, some MSI boards hang
- * if it's not enabled and dpm is enabled. Just enable
- * it for MSI boards right now.
- */
- if (rdev->pdev->subsystem_vendor == 0x1462)
- pi->enable_bapm = true;
- else
+ if (radeon_bapm == -1) {
+ /* There are stability issues reported with
+ * bapm enabled when switching between AC and battery
+ * power. At the same time, some MSI boards hang
+ * if it's not enabled and dpm is enabled. Just enable
+ * it for MSI boards right now.
+ */
+ if (rdev->pdev->subsystem_vendor == 0x1462)
+ pi->enable_bapm = true;
+ else
+ pi->enable_bapm = false;
+ } else if (radeon_bapm == 0) {
pi->enable_bapm = false;
+ } else {
+ pi->enable_bapm = true;
+ }
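/* Semantics of the radeon_bapm module parameter as handled above:
 *   -1: auto -- enable only on MSI boards (subsystem vendor 0x1462)
 *    0: force bapm off
 *   anything else: force bapm on
 */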
pi->enable_nbps_policy = true;
pi->enable_sclk_ds = true;
pi->enable_gfx_power_gating = true;
* Authors: Christian König <christian.koenig@amd.com>
*/
+#include <linux/firmware.h>
#include <drm/drmP.h>
#include "radeon.h"
#include "radeon_asic.h"
WREG32(UVD_RBC_RB_WPTR, ring->wptr);
}
+/**
+ * uvd_v1_0_fence_emit - emit a fence & trap command
+ *
+ * @rdev: radeon_device pointer
+ * @fence: fence to emit
+ *
+ * Write a fence and a trap command to the ring.
+ */
+void uvd_v1_0_fence_emit(struct radeon_device *rdev,
+ struct radeon_fence *fence)
+{
+ struct radeon_ring *ring = &rdev->ring[fence->ring];
+ uint64_t addr = rdev->fence_drv[fence->ring].gpu_addr;
+
+ radeon_ring_write(ring, PACKET0(UVD_GPCOM_VCPU_DATA0, 0));
+ radeon_ring_write(ring, addr & 0xffffffff);
+ radeon_ring_write(ring, PACKET0(UVD_GPCOM_VCPU_DATA1, 0));
+ radeon_ring_write(ring, fence->seq);
+ radeon_ring_write(ring, PACKET0(UVD_GPCOM_VCPU_CMD, 0));
+ radeon_ring_write(ring, 0);
+
+ radeon_ring_write(ring, PACKET0(UVD_GPCOM_VCPU_DATA0, 0));
+ radeon_ring_write(ring, 0);
+ radeon_ring_write(ring, PACKET0(UVD_GPCOM_VCPU_DATA1, 0));
+ radeon_ring_write(ring, 0);
+ radeon_ring_write(ring, PACKET0(UVD_GPCOM_VCPU_CMD, 0));
+ radeon_ring_write(ring, 2);
+ return;
+}
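/* Editorial note: the first DATA0/DATA1/CMD triple above writes
 * fence->seq to the fence address; the second triple (command value 2)
 * appears to raise the trap that signals fence completion. */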
+
+/**
+ * uvd_v1_0_resume - memory controller programming
+ *
+ * @rdev: radeon_device pointer
+ *
+ * Let the UVD memory controller know its offsets
+ */
+int uvd_v1_0_resume(struct radeon_device *rdev)
+{
+ uint64_t addr;
+ uint32_t size;
+ int r;
+
+ r = radeon_uvd_resume(rdev);
+ if (r)
+ return r;
+
+ /* program the VCPU memory controller bits 0-27 */
+ addr = (rdev->uvd.gpu_addr >> 3) + 16;
+ size = RADEON_GPU_PAGE_ALIGN(rdev->uvd_fw->size) >> 3;
+ WREG32(UVD_VCPU_CACHE_OFFSET0, addr);
+ WREG32(UVD_VCPU_CACHE_SIZE0, size);
+
+ addr += size;
+ size = RADEON_UVD_STACK_SIZE >> 3;
+ WREG32(UVD_VCPU_CACHE_OFFSET1, addr);
+ WREG32(UVD_VCPU_CACHE_SIZE1, size);
+
+ addr += size;
+ size = RADEON_UVD_HEAP_SIZE >> 3;
+ WREG32(UVD_VCPU_CACHE_OFFSET2, addr);
+ WREG32(UVD_VCPU_CACHE_SIZE2, size);
+
+ /* bits 28-31 */
+ addr = (rdev->uvd.gpu_addr >> 28) & 0xF;
+ WREG32(UVD_LMI_ADDR_EXT, (addr << 12) | (addr << 0));
+
+ /* bits 32-39 */
+ addr = (rdev->uvd.gpu_addr >> 32) & 0xFF;
+ WREG32(UVD_LMI_EXT40_ADDR, addr | (0x9 << 16) | (0x1 << 31));
+
+ WREG32(UVD_FW_START, *((uint32_t*)rdev->uvd.cpu_addr));
+
+ return 0;
+}
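/* Resulting VCPU cache layout, in 8-byte units as programmed above:
 *   OFFSET0 = (gpu_addr >> 3) + 16, SIZE0 = align(fw_size) >> 3
 *   OFFSET1 = OFFSET0 + SIZE0,      SIZE1 = RADEON_UVD_STACK_SIZE >> 3
 *   OFFSET2 = OFFSET1 + SIZE1,      SIZE2 = RADEON_UVD_HEAP_SIZE >> 3
 * The upper address bits go to UVD_LMI_ADDR_EXT and UVD_LMI_EXT40_ADDR.
 */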
+
/**
* uvd_v1_0_init - start and test UVD block
*
radeon_ring_write(ring, PACKET0(UVD_SEMA_CNTL, 0));
radeon_ring_write(ring, 3);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
done:
/* lower clocks again */
radeon_set_uvd_clocks(rdev, 0, 0);
- if (!r)
+ if (!r) {
+ switch (rdev->family) {
+ case CHIP_RV610:
+ case CHIP_RV630:
+ case CHIP_RV620:
+ /* 64-byte granularity workaround */
+ WREG32(MC_CONFIG, 0);
+ WREG32(MC_CONFIG, 1 << 4);
+ WREG32(RS_DQ_RD_RET_CONF, 0x3f);
+ WREG32(MC_CONFIG, 0x1f);
+
+ /* fall through */
+ case CHIP_RV670:
+ case CHIP_RV635:
+
+ /* write clean workaround */
+ WREG32_P(UVD_VCPU_CNTL, 0x10, ~0x10);
+ break;
+
+ default:
+ /* TODO: Do we need more? */
+ break;
+ }
+
DRM_INFO("UVD initialized successfully.\n");
+ }
return r;
}
/* enable UMC */
WREG32_P(UVD_LMI_CTRL2, 0, ~(1 << 8));
+ WREG32_P(UVD_RB_ARB_CTRL, 0, ~(1 << 3));
+
/* boot up the VCPU */
WREG32(UVD_SOFT_RESET, 0);
mdelay(10);
- WREG32_P(UVD_RB_ARB_CTRL, 0, ~(1 << 3));
-
for (i = 0; i < 10; ++i) {
uint32_t status;
for (j = 0; j < 100; ++j) {
}
radeon_ring_write(ring, PACKET0(UVD_CONTEXT_ID, 0));
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = RREG32(UVD_CONTEXT_ID);
if (tmp == 0xDEADBEEF)
uint32_t chip_id, size;
int r;
+ /* RV770 uses V1.0 MC */
+ if (rdev->family == CHIP_RV770)
+ return uvd_v1_0_resume(rdev);
+
r = radeon_uvd_resume(rdev);
if (r)
return r;
select DRM_KMS_CMA_HELPER
select DRM_GEM_CMA_HELPER
select DRM_KMS_FB_HELPER
+ select VIDEOMODE_HELPERS
help
Choose this option if you have an R-Car chipset.
If M is selected the module will be called rcar-du-drm.
/*
* rcar_du_crtc.c -- R-Car Display Unit CRTCs
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* rcar_du_crtc.h -- R-Car Display Unit CRTCs
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* rcar_du_drv.c -- R-Car Display Unit DRM driver
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/module.h>
+#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm.h>
#include <linux/slab.h>
#include "rcar_du_kms.h"
#include "rcar_du_regs.h"
+/* -----------------------------------------------------------------------------
+ * Device Information
+ */
+
+static const struct rcar_du_device_info rcar_du_r8a7779_info = {
+ .features = 0,
+ .num_crtcs = 2,
+ .routes = {
+ /* R8A7779 has two RGB outputs and one (currently unsupported)
+ * TCON output.
+ */
+ [RCAR_DU_OUTPUT_DPAD0] = {
+ .possible_crtcs = BIT(0),
+ .encoder_type = DRM_MODE_ENCODER_NONE,
+ .port = 0,
+ },
+ [RCAR_DU_OUTPUT_DPAD1] = {
+ .possible_crtcs = BIT(1) | BIT(0),
+ .encoder_type = DRM_MODE_ENCODER_NONE,
+ .port = 1,
+ },
+ },
+ .num_lvds = 0,
+};
+
+static const struct rcar_du_device_info rcar_du_r8a7790_info = {
+ .features = RCAR_DU_FEATURE_CRTC_IRQ_CLOCK | RCAR_DU_FEATURE_DEFR8,
+ .quirks = RCAR_DU_QUIRK_ALIGN_128B | RCAR_DU_QUIRK_LVDS_LANES,
+ .num_crtcs = 3,
+ .routes = {
+ /* R8A7790 has one RGB output, two LVDS outputs and one
+ * (currently unsupported) TCON output.
+ */
+ [RCAR_DU_OUTPUT_DPAD0] = {
+ .possible_crtcs = BIT(2) | BIT(1) | BIT(0),
+ .encoder_type = DRM_MODE_ENCODER_NONE,
+ .port = 0,
+ },
+ [RCAR_DU_OUTPUT_LVDS0] = {
+ .possible_crtcs = BIT(0),
+ .encoder_type = DRM_MODE_ENCODER_LVDS,
+ .port = 1,
+ },
+ [RCAR_DU_OUTPUT_LVDS1] = {
+ .possible_crtcs = BIT(2) | BIT(1),
+ .encoder_type = DRM_MODE_ENCODER_LVDS,
+ .port = 2,
+ },
+ },
+ .num_lvds = 2,
+};
+
+static const struct rcar_du_device_info rcar_du_r8a7791_info = {
+ .features = RCAR_DU_FEATURE_CRTC_IRQ_CLOCK | RCAR_DU_FEATURE_DEFR8,
+ .num_crtcs = 2,
+ .routes = {
+ /* R8A7791 has one RGB output, one LVDS output and one
+ * (currently unsupported) TCON output.
+ */
+ [RCAR_DU_OUTPUT_DPAD0] = {
+ .possible_crtcs = BIT(1),
+ .encoder_type = DRM_MODE_ENCODER_NONE,
+ .port = 0,
+ },
+ [RCAR_DU_OUTPUT_LVDS0] = {
+ .possible_crtcs = BIT(0),
+ .encoder_type = DRM_MODE_ENCODER_LVDS,
+ .port = 1,
+ },
+ },
+ .num_lvds = 1,
+};
+
+static const struct platform_device_id rcar_du_id_table[] = {
+ { "rcar-du-r8a7779", (kernel_ulong_t)&rcar_du_r8a7779_info },
+ { "rcar-du-r8a7790", (kernel_ulong_t)&rcar_du_r8a7790_info },
+ { "rcar-du-r8a7791", (kernel_ulong_t)&rcar_du_r8a7791_info },
+ { }
+};
+
+MODULE_DEVICE_TABLE(platform, rcar_du_id_table);
+
+static const struct of_device_id rcar_du_of_table[] = {
+ { .compatible = "renesas,du-r8a7779", .data = &rcar_du_r8a7779_info },
+ { .compatible = "renesas,du-r8a7790", .data = &rcar_du_r8a7790_info },
+ { .compatible = "renesas,du-r8a7791", .data = &rcar_du_r8a7791_info },
+ { }
+};
+
+MODULE_DEVICE_TABLE(of, rcar_du_of_table);
+
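/* Illustrative DT node matched by the OF table above (node name and
 * unit address are hypothetical; the compatible string is real):
 *
 *	du: display@feb00000 {
 *		compatible = "renesas,du-r8a7790";
 *	};
 */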
/* -----------------------------------------------------------------------------
* DRM operations
*/
static int rcar_du_load(struct drm_device *dev, unsigned long flags)
{
struct platform_device *pdev = dev->platformdev;
+ struct device_node *np = pdev->dev.of_node;
struct rcar_du_platform_data *pdata = pdev->dev.platform_data;
struct rcar_du_device *rcdu;
struct resource *mem;
int ret;
- if (pdata == NULL) {
+ if (pdata == NULL && np == NULL) {
dev_err(dev->dev, "no platform data\n");
return -ENODEV;
}
rcdu->dev = &pdev->dev;
rcdu->pdata = pdata;
- rcdu->info = (struct rcar_du_device_info *)pdev->id_entry->driver_data;
+ rcdu->info = np ? of_match_device(rcar_du_of_table, rcdu->dev)->data
+ : (void *)platform_get_device_id(pdev)->driver_data;
rcdu->ddev = dev;
dev->dev_private = rcdu;
.unload = rcar_du_unload,
.preclose = rcar_du_preclose,
.lastclose = rcar_du_lastclose,
+ .set_busid = drm_platform_set_busid,
.get_vblank_counter = drm_vblank_count,
.enable_vblank = rcar_du_enable_vblank,
.disable_vblank = rcar_du_disable_vblank,
return 0;
}
-static const struct rcar_du_device_info rcar_du_r8a7779_info = {
- .features = 0,
- .num_crtcs = 2,
- .routes = {
- /* R8A7779 has two RGB outputs and one (currently unsupported)
- * TCON output.
- */
- [RCAR_DU_OUTPUT_DPAD0] = {
- .possible_crtcs = BIT(0),
- .encoder_type = DRM_MODE_ENCODER_NONE,
- },
- [RCAR_DU_OUTPUT_DPAD1] = {
- .possible_crtcs = BIT(1) | BIT(0),
- .encoder_type = DRM_MODE_ENCODER_NONE,
- },
- },
- .num_lvds = 0,
-};
-
-static const struct rcar_du_device_info rcar_du_r8a7790_info = {
- .features = RCAR_DU_FEATURE_CRTC_IRQ_CLOCK | RCAR_DU_FEATURE_DEFR8,
- .quirks = RCAR_DU_QUIRK_ALIGN_128B | RCAR_DU_QUIRK_LVDS_LANES,
- .num_crtcs = 3,
- .routes = {
- /* R8A7790 has one RGB output, two LVDS outputs and one
- * (currently unsupported) TCON output.
- */
- [RCAR_DU_OUTPUT_DPAD0] = {
- .possible_crtcs = BIT(2) | BIT(1) | BIT(0),
- .encoder_type = DRM_MODE_ENCODER_NONE,
- },
- [RCAR_DU_OUTPUT_LVDS0] = {
- .possible_crtcs = BIT(0),
- .encoder_type = DRM_MODE_ENCODER_LVDS,
- },
- [RCAR_DU_OUTPUT_LVDS1] = {
- .possible_crtcs = BIT(2) | BIT(1),
- .encoder_type = DRM_MODE_ENCODER_LVDS,
- },
- },
- .num_lvds = 2,
-};
-
-static const struct rcar_du_device_info rcar_du_r8a7791_info = {
- .features = RCAR_DU_FEATURE_CRTC_IRQ_CLOCK | RCAR_DU_FEATURE_DEFR8,
- .num_crtcs = 2,
- .routes = {
- /* R8A7791 has one RGB output, one LVDS output and one
- * (currently unsupported) TCON output.
- */
- [RCAR_DU_OUTPUT_DPAD0] = {
- .possible_crtcs = BIT(1),
- .encoder_type = DRM_MODE_ENCODER_NONE,
- },
- [RCAR_DU_OUTPUT_LVDS0] = {
- .possible_crtcs = BIT(0),
- .encoder_type = DRM_MODE_ENCODER_LVDS,
- },
- },
- .num_lvds = 1,
-};
-
-static const struct platform_device_id rcar_du_id_table[] = {
- { "rcar-du-r8a7779", (kernel_ulong_t)&rcar_du_r8a7779_info },
- { "rcar-du-r8a7790", (kernel_ulong_t)&rcar_du_r8a7790_info },
- { "rcar-du-r8a7791", (kernel_ulong_t)&rcar_du_r8a7791_info },
- { }
-};
-
-MODULE_DEVICE_TABLE(platform, rcar_du_id_table);
-
static struct platform_driver rcar_du_platform_driver = {
.probe = rcar_du_probe,
.remove = rcar_du_remove,
.owner = THIS_MODULE,
.name = "rcar-du",
.pm = &rcar_du_pm_ops,
+ .of_match_table = rcar_du_of_table,
},
.id_table = rcar_du_id_table,
};
/*
* rcar_du_drv.h -- R-Car Display Unit DRM driver
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
* struct rcar_du_output_routing - Output routing specification
* @possible_crtcs: bitmask of possible CRTCs for the output
* @encoder_type: DRM type of the internal encoder associated with the output
+ * @port: device tree port number corresponding to this output route
*
* The DU has 5 possible outputs (DPAD0/1, LVDS0/1, TCON). Output routing data
* specify the valid SoC outputs, which CRTCs can drive the output, and the type
struct rcar_du_output_routing {
unsigned int possible_crtcs;
unsigned int encoder_type;
+ unsigned int port;
};
/*
/*
* rcar_du_encoder.c -- R-Car Display Unit Encoder
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
int rcar_du_encoder_init(struct rcar_du_device *rcdu,
enum rcar_du_encoder_type type,
enum rcar_du_output output,
- const struct rcar_du_encoder_data *data)
+ const struct rcar_du_encoder_data *data,
+ struct device_node *np)
{
struct rcar_du_encoder *renc;
unsigned int encoder_type;
drm_encoder_helper_add(&renc->encoder, &encoder_helper_funcs);
switch (encoder_type) {
- case DRM_MODE_ENCODER_LVDS:
- return rcar_du_lvds_connector_init(rcdu, renc,
- &data->connector.lvds.panel);
+ case DRM_MODE_ENCODER_LVDS: {
+ const struct rcar_du_panel_data *pdata =
+ data ? &data->connector.lvds.panel : NULL;
+ return rcar_du_lvds_connector_init(rcdu, renc, pdata, np);
+ }
case DRM_MODE_ENCODER_DAC:
return rcar_du_vga_connector_init(rcdu, renc);
/*
* rcar_du_encoder.h -- R-Car Display Unit Encoder
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
int rcar_du_encoder_init(struct rcar_du_device *rcdu,
enum rcar_du_encoder_type type,
enum rcar_du_output output,
- const struct rcar_du_encoder_data *data);
+ const struct rcar_du_encoder_data *data,
+ struct device_node *np);
#endif /* __RCAR_DU_ENCODER_H__ */
/*
* rcar_du_group.c -- R-Car Display Unit Channels Pair
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* rcar_du_group.c -- R-Car Display Unit Planes and CRTCs Group
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* rcar_du_kms.c -- R-Car Display Unit Mode Setting
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>
+#include <linux/of_graph.h>
+
#include "rcar_du_crtc.h"
#include "rcar_du_drv.h"
#include "rcar_du_encoder.h"
.output_poll_changed = rcar_du_output_poll_changed,
};
+static int rcar_du_encoders_init_pdata(struct rcar_du_device *rcdu)
+{
+ unsigned int num_encoders = 0;
+ unsigned int i;
+ int ret;
+
+ for (i = 0; i < rcdu->pdata->num_encoders; ++i) {
+ const struct rcar_du_encoder_data *pdata =
+ &rcdu->pdata->encoders[i];
+ const struct rcar_du_output_routing *route =
+ &rcdu->info->routes[pdata->output];
+
+ if (pdata->type == RCAR_DU_ENCODER_UNUSED)
+ continue;
+
+ if (pdata->output >= RCAR_DU_OUTPUT_MAX ||
+ route->possible_crtcs == 0) {
+ dev_warn(rcdu->dev,
+ "encoder %u references unexisting output %u, skipping\n",
+ i, pdata->output);
+ continue;
+ }
+
+ ret = rcar_du_encoder_init(rcdu, pdata->type, pdata->output,
+ pdata, NULL);
+ if (ret < 0)
+ return ret;
+
+ num_encoders++;
+ }
+
+ return num_encoders;
+}
+
+static int rcar_du_encoders_init_dt_one(struct rcar_du_device *rcdu,
+ enum rcar_du_output output,
+ struct of_endpoint *ep)
+{
+ static const struct {
+ const char *compatible;
+ enum rcar_du_encoder_type type;
+ } encoders[] = {
+ { "adi,adv7123", RCAR_DU_ENCODER_VGA },
+ { "thine,thc63lvdm83d", RCAR_DU_ENCODER_LVDS },
+ };
+
+ enum rcar_du_encoder_type enc_type = RCAR_DU_ENCODER_NONE;
+ struct device_node *connector = NULL;
+ struct device_node *encoder = NULL;
+ struct device_node *prev = NULL;
+ struct device_node *entity_ep_node;
+ struct device_node *entity;
+ int ret;
+
+ /*
+ * Locate the connected entity and infer its type from the number of
+ * endpoints.
+ */
+ entity = of_graph_get_remote_port_parent(ep->local_node);
+ if (!entity) {
+ dev_dbg(rcdu->dev, "unconnected endpoint %s, skipping\n",
+ ep->local_node->full_name);
+ return 0;
+ }
+
+ entity_ep_node = of_parse_phandle(ep->local_node, "remote-endpoint", 0);
+
+ while (1) {
+ struct device_node *ep_node;
+
+ ep_node = of_graph_get_next_endpoint(entity, prev);
+ of_node_put(prev);
+ prev = ep_node;
+
+ if (!ep_node)
+ break;
+
+ if (ep_node == entity_ep_node)
+ continue;
+
+ /*
+ * We've found one endpoint other than the input; it must
+ * be an encoder. Locate the connector.
+ */
+ encoder = entity;
+ connector = of_graph_get_remote_port_parent(ep_node);
+ of_node_put(ep_node);
+
+ if (!connector) {
+ dev_warn(rcdu->dev,
+ "no connector for encoder %s, skipping\n",
+ encoder->full_name);
+ of_node_put(entity_ep_node);
+ of_node_put(encoder);
+ return 0;
+ }
+
+ break;
+ }
+
+ of_node_put(entity_ep_node);
+
+ if (encoder) {
+ /*
+ * If an encoder has been found, get its type based on its
+ * compatible string.
+ */
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(encoders); ++i) {
+ if (of_device_is_compatible(encoder,
+ encoders[i].compatible)) {
+ enc_type = encoders[i].type;
+ break;
+ }
+ }
+
+ if (i == ARRAY_SIZE(encoders)) {
+ dev_warn(rcdu->dev,
+ "unknown encoder type for %s, skipping\n",
+ encoder->full_name);
+ of_node_put(encoder);
+ of_node_put(connector);
+ return 0;
+ }
+ } else {
+ /*
+ * If no encoder has been found the entity must be the
+ * connector.
+ */
+ connector = entity;
+ }
+
+ ret = rcar_du_encoder_init(rcdu, enc_type, output, NULL, connector);
+ of_node_put(encoder);
+ of_node_put(connector);
+
+ return ret < 0 ? ret : 1;
+}
+
+static int rcar_du_encoders_init_dt(struct rcar_du_device *rcdu)
+{
+ struct device_node *np = rcdu->dev->of_node;
+ struct device_node *prev = NULL;
+ unsigned int num_encoders = 0;
+
+ /*
+ * Iterate over the endpoints and create one encoder for each output
+ * pipeline.
+ */
+ while (1) {
+ struct device_node *ep_node;
+ enum rcar_du_output output;
+ struct of_endpoint ep;
+ unsigned int i;
+ int ret;
+
+ ep_node = of_graph_get_next_endpoint(np, prev);
+ of_node_put(prev);
+ prev = ep_node;
+
+ if (ep_node == NULL)
+ break;
+
+ ret = of_graph_parse_endpoint(ep_node, &ep);
+ if (ret < 0) {
+ of_node_put(ep_node);
+ return ret;
+ }
+
+ /* Find the output route corresponding to the port number. */
+ for (i = 0; i < RCAR_DU_OUTPUT_MAX; ++i) {
+ if (rcdu->info->routes[i].possible_crtcs &&
+ rcdu->info->routes[i].port == ep.port) {
+ output = i;
+ break;
+ }
+ }
+
+ if (i == RCAR_DU_OUTPUT_MAX) {
+ dev_warn(rcdu->dev,
+ "port %u references unexisting output, skipping\n",
+ ep.port);
+ continue;
+ }
+
+ /* Process the output pipeline. */
+ ret = rcar_du_encoders_init_dt_one(rcdu, output, &ep);
+ if (ret < 0) {
+ of_node_put(ep_node);
+ return ret;
+ }
+
+ num_encoders += ret;
+ }
+
+ return num_encoders;
+}
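The loop above follows the usual OF graph iteration convention:
of_graph_get_next_endpoint() returns a new reference on every call, and the
caller drops the previous iteration's reference itself. A minimal sketch of
the same pattern (count_of_graph_endpoints is a hypothetical helper, not
part of the driver):

static unsigned int count_of_graph_endpoints(struct device_node *np)
{
	struct device_node *prev = NULL;
	unsigned int count = 0;

	while (1) {
		struct device_node *ep_node;

		/* returns a new reference, or NULL past the last endpoint */
		ep_node = of_graph_get_next_endpoint(np, prev);
		of_node_put(prev);
		prev = ep_node;

		if (!ep_node)
			break;

		count++;
	}

	return count;
}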
+
int rcar_du_modeset_init(struct rcar_du_device *rcdu)
{
static const unsigned int mmio_offsets[] = {
struct drm_device *dev = rcdu->ddev;
struct drm_encoder *encoder;
struct drm_fbdev_cma *fbdev;
+ unsigned int num_encoders;
unsigned int num_groups;
unsigned int i;
int ret;
if (ret < 0)
return ret;
- for (i = 0; i < rcdu->pdata->num_encoders; ++i) {
- const struct rcar_du_encoder_data *pdata =
- &rcdu->pdata->encoders[i];
- const struct rcar_du_output_routing *route =
- &rcdu->info->routes[pdata->output];
-
- if (pdata->type == RCAR_DU_ENCODER_UNUSED)
- continue;
+ if (rcdu->pdata)
+ ret = rcar_du_encoders_init_pdata(rcdu);
+ else
+ ret = rcar_du_encoders_init_dt(rcdu);
- if (pdata->output >= RCAR_DU_OUTPUT_MAX ||
- route->possible_crtcs == 0) {
- dev_warn(rcdu->dev,
- "encoder %u references unexisting output %u, skipping\n",
- i, pdata->output);
- continue;
- }
+ if (ret < 0)
+ return ret;
- ret = rcar_du_encoder_init(rcdu, pdata->type, pdata->output,
- pdata);
- if (ret < 0)
- return ret;
- }
+ num_encoders = ret;
/* Set the possible CRTCs and possible clones. There's always at least
* one way for all encoders to clone each other, set all bits in the
&rcdu->info->routes[renc->output];
encoder->possible_crtcs = route->possible_crtcs;
- encoder->possible_clones = (1 << rcdu->pdata->num_encoders) - 1;
+ encoder->possible_clones = (1 << num_encoders) - 1;
}
/* Now that the CRTCs have been initialized register the planes. */
/*
* rcar_du_kms.h -- R-Car Display Unit Mode Setting
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* rcar_du_lvdscon.c -- R-Car Display Unit LVDS Connector
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
+#include <video/display_timing.h>
+#include <video/of_display_timing.h>
+#include <video/videomode.h>
+
#include "rcar_du_drv.h"
#include "rcar_du_encoder.h"
#include "rcar_du_kms.h"
struct rcar_du_lvds_connector {
struct rcar_du_connector connector;
- const struct rcar_du_panel_data *panel;
+ struct rcar_du_panel_data panel;
};
#define to_rcar_lvds_connector(c) \
return 0;
mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
- mode->clock = lvdscon->panel->mode.clock;
- mode->hdisplay = lvdscon->panel->mode.hdisplay;
- mode->hsync_start = lvdscon->panel->mode.hsync_start;
- mode->hsync_end = lvdscon->panel->mode.hsync_end;
- mode->htotal = lvdscon->panel->mode.htotal;
- mode->vdisplay = lvdscon->panel->mode.vdisplay;
- mode->vsync_start = lvdscon->panel->mode.vsync_start;
- mode->vsync_end = lvdscon->panel->mode.vsync_end;
- mode->vtotal = lvdscon->panel->mode.vtotal;
- mode->flags = lvdscon->panel->mode.flags;
-
- drm_mode_set_name(mode);
+
+ drm_display_mode_from_videomode(&lvdscon->panel.mode, mode);
+
drm_mode_probed_add(connector, mode);
return 1;
int rcar_du_lvds_connector_init(struct rcar_du_device *rcdu,
struct rcar_du_encoder *renc,
- const struct rcar_du_panel_data *panel)
+ const struct rcar_du_panel_data *panel,
+ /* TODO const */ struct device_node *np)
{
struct rcar_du_lvds_connector *lvdscon;
struct drm_connector *connector;
if (lvdscon == NULL)
return -ENOMEM;
- lvdscon->panel = panel;
+ if (panel) {
+ lvdscon->panel = *panel;
+ } else {
+ struct display_timing timing;
+
+ ret = of_get_display_timing(np, "panel-timing", &timing);
+ if (ret < 0)
+ return ret;
+
+ videomode_from_timing(&timing, &lvdscon->panel.mode);
+
+ of_property_read_u32(np, "width-mm", &lvdscon->panel.width_mm);
+ of_property_read_u32(np, "height-mm", &lvdscon->panel.height_mm);
+ }
connector = &lvdscon->connector.connector;
- connector->display_info.width_mm = panel->width_mm;
- connector->display_info.height_mm = panel->height_mm;
+ connector->display_info.width_mm = lvdscon->panel.width_mm;
+ connector->display_info.height_mm = lvdscon->panel.height_mm;
ret = drm_connector_init(rcdu->ddev, connector, &connector_funcs,
DRM_MODE_CONNECTOR_LVDS);
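A minimal sketch of the DT-to-DRM mode conversion chain used above, with
error handling trimmed; np stands for the panel device node and mode for a
caller-allocated struct drm_display_mode:

struct display_timing timing;
struct videomode vm;
int ret;

ret = of_get_display_timing(np, "panel-timing", &timing);
if (ret < 0)
	return ret;

videomode_from_timing(&timing, &vm);		/* display_timing -> videomode */
drm_display_mode_from_videomode(&vm, mode);	/* videomode -> drm_display_mode */
drm_mode_set_name(mode);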
/*
* rcar_du_lvdscon.h -- R-Car Display Unit LVDS Connector
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
int rcar_du_lvds_connector_init(struct rcar_du_device *rcdu,
struct rcar_du_encoder *renc,
- const struct rcar_du_panel_data *panel);
+ const struct rcar_du_panel_data *panel,
+ struct device_node *np);
#endif /* __RCAR_DU_LVDSCON_H__ */
/*
* rcar_du_lvdsenc.c -- R-Car Display Unit LVDS Encoder
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* rcar_du_lvdsenc.h -- R-Car Display Unit LVDS Encoder
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* rcar_du_plane.c -- R-Car Display Unit Planes
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* rcar_du_plane.h -- R-Car Display Unit Planes
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* rcar_du_vgacon.c -- R-Car Display Unit VGA Connector
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* rcar_du_vgacon.h -- R-Car Display Unit VGA Connector
*
- * Copyright (C) 2013 Renesas Corporation
+ * Copyright (C) 2013-2014 Renesas Electronics Corporation
*
* Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* Initialize mappings. On Savage4 and SavageIX the alignment
* and size of the aperture is not suitable for automatic MTRR setup
- * in drm_addmap. Therefore we add them manually before the maps are
+ * in drm_legacy_addmap. Therefore we add them manually before the maps are
* initialized, and tear them down on last close.
*/
int savage_driver_firstopen(struct drm_device *dev)
/* Automatic MTRR setup will do the right thing. */
}
- ret = drm_addmap(dev, mmio_base, SAVAGE_MMIO_SIZE, _DRM_REGISTERS,
- _DRM_READ_ONLY, &dev_priv->mmio);
+ ret = drm_legacy_addmap(dev, mmio_base, SAVAGE_MMIO_SIZE,
+ _DRM_REGISTERS, _DRM_READ_ONLY,
+ &dev_priv->mmio);
if (ret)
return ret;
- ret = drm_addmap(dev, fb_base, fb_size, _DRM_FRAME_BUFFER,
- _DRM_WRITE_COMBINING, &dev_priv->fb);
+ ret = drm_legacy_addmap(dev, fb_base, fb_size, _DRM_FRAME_BUFFER,
+ _DRM_WRITE_COMBINING, &dev_priv->fb);
if (ret)
return ret;
- ret = drm_addmap(dev, aperture_base, SAVAGE_APERTURE_SIZE,
- _DRM_FRAME_BUFFER, _DRM_WRITE_COMBINING,
- &dev_priv->aperture);
+ ret = drm_legacy_addmap(dev, aperture_base, SAVAGE_APERTURE_SIZE,
+ _DRM_FRAME_BUFFER, _DRM_WRITE_COMBINING,
+ &dev_priv->aperture);
return ret;
}
dev_priv->texture_offset = init->texture_offset;
dev_priv->texture_size = init->texture_size;
- dev_priv->sarea = drm_getsarea(dev);
+ dev_priv->sarea = drm_legacy_getsarea(dev);
if (!dev_priv->sarea) {
DRM_ERROR("could not find sarea!\n");
savage_do_cleanup_bci(dev);
return -EINVAL;
}
if (init->status_offset != 0) {
- dev_priv->status = drm_core_findmap(dev, init->status_offset);
+ dev_priv->status = drm_legacy_findmap(dev, init->status_offset);
if (!dev_priv->status) {
DRM_ERROR("could not find shadow status region!\n");
savage_do_cleanup_bci(dev);
}
if (dev_priv->dma_type == SAVAGE_DMA_AGP && init->buffers_offset) {
dev->agp_buffer_token = init->buffers_offset;
- dev->agp_buffer_map = drm_core_findmap(dev,
+ dev->agp_buffer_map = drm_legacy_findmap(dev,
init->buffers_offset);
if (!dev->agp_buffer_map) {
DRM_ERROR("could not find DMA buffer region!\n");
savage_do_cleanup_bci(dev);
return -EINVAL;
}
- drm_core_ioremap(dev->agp_buffer_map, dev);
+ drm_legacy_ioremap(dev->agp_buffer_map, dev);
if (!dev->agp_buffer_map->handle) {
DRM_ERROR("failed to ioremap DMA buffer region!\n");
savage_do_cleanup_bci(dev);
}
if (init->agp_textures_offset) {
dev_priv->agp_textures =
- drm_core_findmap(dev, init->agp_textures_offset);
+ drm_legacy_findmap(dev, init->agp_textures_offset);
if (!dev_priv->agp_textures) {
DRM_ERROR("could not find agp texture region!\n");
savage_do_cleanup_bci(dev);
savage_do_cleanup_bci(dev);
return -EINVAL;
}
- dev_priv->cmd_dma = drm_core_findmap(dev, init->cmd_dma_offset);
+ dev_priv->cmd_dma = drm_legacy_findmap(dev, init->cmd_dma_offset);
if (!dev_priv->cmd_dma) {
DRM_ERROR("could not find command DMA region!\n");
savage_do_cleanup_bci(dev);
savage_do_cleanup_bci(dev);
return -EINVAL;
}
- drm_core_ioremap(dev_priv->cmd_dma, dev);
+ drm_legacy_ioremap(dev_priv->cmd_dma, dev);
if (!dev_priv->cmd_dma->handle) {
DRM_ERROR("failed to ioremap command "
"DMA region!\n");
} else if (dev_priv->cmd_dma && dev_priv->cmd_dma->handle &&
dev_priv->cmd_dma->type == _DRM_AGP &&
dev_priv->dma_type == SAVAGE_DMA_AGP)
- drm_core_ioremapfree(dev_priv->cmd_dma, dev);
+ drm_legacy_ioremapfree(dev_priv->cmd_dma, dev);
if (dev_priv->dma_type == SAVAGE_DMA_AGP &&
dev->agp_buffer_map && dev->agp_buffer_map->handle) {
- drm_core_ioremapfree(dev->agp_buffer_map, dev);
+ drm_legacy_ioremapfree(dev->agp_buffer_map, dev);
/* make sure the next instance (which may be running
* in PCI mode) doesn't try to use an old
* agp_buffer_map. */
return;
if (file_priv->master && file_priv->master->lock.hw_lock) {
- drm_idlelock_take(&file_priv->master->lock);
+ drm_legacy_idlelock_take(&file_priv->master->lock);
release_idlelock = 1;
}
}
if (release_idlelock)
- drm_idlelock_release(&file_priv->master->lock);
+ drm_legacy_idlelock_release(&file_priv->master->lock);
}
const struct drm_ioctl_desc savage_ioctls[] = {
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
- .mmap = drm_mmap,
+ .mmap = drm_legacy_mmap,
.poll = drm_poll,
#ifdef CONFIG_COMPAT
.compat_ioctl = drm_compat_ioctl,
.preclose = savage_reclaim_buffers,
.lastclose = savage_driver_lastclose,
.unload = savage_driver_unload,
+ .set_busid = drm_pci_set_busid,
.ioctls = savage_ioctls,
.dma_ioctl = savage_bci_buffers,
.fops = &savage_driver_fops,
#ifndef __SAVAGE_DRV_H__
#define __SAVAGE_DRV_H__
+#include <drm/drm_legacy.h>
+
#define DRIVER_AUTHOR "Felix Kuehling"
#define DRIVER_NAME "savage"
/*
* shmob_drm_backlight.c -- SH Mobile DRM Backlight
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* shmob_drm_backlight.h -- SH Mobile DRM Backlight
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* shmob_drm_crtc.c -- SH Mobile DRM CRTCs
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* shmob_drm_crtc.h -- SH Mobile DRM CRTCs
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* shmob_drm_drv.c -- SH Mobile DRM driver
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
.load = shmob_drm_load,
.unload = shmob_drm_unload,
.preclose = shmob_drm_preclose,
+ .set_busid = drm_platform_set_busid,
.irq_handler = shmob_drm_irq,
.get_vblank_counter = drm_vblank_count,
.enable_vblank = shmob_drm_enable_vblank,
/*
* shmob_drm.h -- SH Mobile DRM driver
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* shmob_drm_kms.c -- SH Mobile DRM Mode Setting
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* shmob_drm_kms.h -- SH Mobile DRM Mode Setting
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* shmob_drm_plane.c -- SH Mobile DRM Planes
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* shmob_drm_plane.h -- SH Mobile DRM Planes
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
/*
* shmob_drm_regs.h -- SH Mobile DRM registers
*
- * Copyright (C) 2012 Renesas Corporation
+ * Copyright (C) 2012 Renesas Electronics Corporation
*
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
- .mmap = drm_mmap,
+ .mmap = drm_legacy_mmap,
.poll = drm_poll,
#ifdef CONFIG_COMPAT
.compat_ioctl = drm_compat_ioctl,
.open = sis_driver_open,
.preclose = sis_reclaim_buffers_locked,
.postclose = sis_driver_postclose,
+ .set_busid = drm_pci_set_busid,
.dma_quiescent = sis_idle,
.lastclose = sis_lastclose,
.ioctls = sis_ioctls,
#ifndef _SIS_DRV_H_
#define _SIS_DRV_H_
+#include <drm/drm_legacy.h>
+
/* General customization:
*/
if (!(file->minor->master && file->master->lock.hw_lock))
return;
- drm_idlelock_take(&file->master->lock);
+ drm_legacy_idlelock_take(&file->master->lock);
mutex_lock(&dev->struct_mutex);
if (list_empty(&file_priv->obj_list)) {
mutex_unlock(&dev->struct_mutex);
- drm_idlelock_release(&file->master->lock);
+ drm_legacy_idlelock_release(&file->master->lock);
return;
}
}
mutex_unlock(&dev->struct_mutex);
- drm_idlelock_release(&file->master->lock);
+ drm_legacy_idlelock_release(&file->master->lock);
return;
}
config DRM_STI
tristate "DRM Support for STMicroelectronics SoC stiH41x Series"
depends on DRM && (SOC_STIH415 || SOC_STIH416 || ARCH_MULTIPLATFORM)
+ select RESET_CONTROLLER
select DRM_KMS_HELPER
select DRM_GEM_CMA_HELPER
select DRM_KMS_CMA_HELPER
master = platform_device_register_resndata(dev,
DRIVER_NAME "__master", -1,
NULL, 0, NULL, 0);
- if (!master)
- return -EINVAL;
+ if (IS_ERR(master))
+ return PTR_ERR(master);
platform_set_drvdata(pdev, master);
return 0;
return -ENOMEM;
}
hda->regs = devm_ioremap_nocache(dev, res->start, resource_size(res));
- if (IS_ERR(hda->regs))
- return PTR_ERR(hda->regs);
+ if (!hda->regs)
+ return -ENOMEM;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"video-dacs-ctrl");
if (res) {
hda->video_dacs_ctrl = devm_ioremap_nocache(dev, res->start,
resource_size(res));
- if (IS_ERR(hda->video_dacs_ctrl))
- return PTR_ERR(hda->video_dacs_ctrl);
+ if (!hda->video_dacs_ctrl)
+ return -ENOMEM;
} else {
/* If no existing video-dacs-ctrl resource continue the probe */
DRM_DEBUG_DRIVER("No video-dacs-ctrl resource\n");
return 0;
}
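The two fixes above hinge on two different kernel error-return conventions:
platform_device_register_resndata() encodes an errno in the returned
pointer and must be checked with IS_ERR(), while devm_ioremap_nocache()
simply returns NULL on failure. A hedged sketch with illustrative names:

void __iomem *regs;
struct platform_device *master;

regs = devm_ioremap_nocache(dev, res->start, resource_size(res));
if (!regs)			/* NULL on failure, never an ERR_PTR */
	return -ENOMEM;

master = platform_device_register_resndata(dev, "example-master", -1,
					    NULL, 0, NULL, 0);
if (IS_ERR(master))		/* ERR_PTR-encoded errno on failure */
	return PTR_ERR(master);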
-static struct of_device_id hda_of_match[] = {
+static const struct of_device_id hda_of_match[] = {
{ .compatible = "st,stih416-hda", },
{ .compatible = "st,stih407-hda", },
{ /* end node */ }
.unbind = sti_hdmi_unbind,
};
-static struct of_device_id hdmi_of_match[] = {
+static const struct of_device_id hdmi_of_match[] = {
{
.compatible = "st,stih416-hdmi",
.data = &tx3g0c55phy_ops,
return -ENOMEM;
}
hdmi->regs = devm_ioremap_nocache(dev, res->start, resource_size(res));
- if (IS_ERR(hdmi->regs))
- return PTR_ERR(hdmi->regs);
+ if (!hdmi->regs)
+ return -ENOMEM;
if (of_device_is_compatible(np, "st,stih416-hdmi")) {
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
}
hdmi->syscfg = devm_ioremap_nocache(dev, res->start,
resource_size(res));
- if (IS_ERR(hdmi->syscfg))
- return PTR_ERR(hdmi->syscfg);
+ if (!hdmi->syscfg)
+ return -ENOMEM;
}
return -ENOMEM;
}
tvout->regs = devm_ioremap_nocache(dev, res->start, resource_size(res));
- if (IS_ERR(tvout->regs))
- return PTR_ERR(tvout->regs);
+ if (!tvout->regs)
+ return -ENOMEM;
/* get reset resources */
tvout->reset = devm_reset_control_get(dev, "tvout");
return 0;
}
-static struct of_device_id tvout_of_match[] = {
+static const struct of_device_id tvout_of_match[] = {
{ .compatible = "st,stih416-tvout", },
{ .compatible = "st,stih407-tvout", },
{ /* end node */ }
u32 phyts_per_pixel;
};
-static const struct sti_vtac_mode vtac_mode_main = {0x2, 0x2, VTAC_5_PPP};
-static const struct sti_vtac_mode vtac_mode_aux = {0x1, 0x0, VTAC_17_PPP};
+static const struct sti_vtac_mode vtac_mode_main = {
+ .vid_in_width = 0x2,
+ .phyts_width = 0x2,
+ .phyts_per_pixel = VTAC_5_PPP,
+};
+static const struct sti_vtac_mode vtac_mode_aux = {
+ .vid_in_width = 0x1,
+ .phyts_width = 0x0,
+ .phyts_per_pixel = VTAC_17_PPP,
+};
/**
* VTAC structure
#include "tdfx_drv.h"
#include <drm/drm_pciids.h>
+#include <drm/drm_legacy.h>
static struct pci_device_id pciidlist[] = {
tdfx_PCI_IDS
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
- .mmap = drm_mmap,
+ .mmap = drm_legacy_mmap,
.poll = drm_poll,
#ifdef CONFIG_COMPAT
.compat_ioctl = drm_compat_ioctl,
};
static struct drm_driver driver = {
+ .set_busid = drm_pci_set_busid,
.fops = &tdfx_driver_fops,
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
for (i = 0; i < link->num_lanes; i++)
values[i] = DP_TRAIN_MAX_PRE_EMPHASIS_REACHED |
- DP_TRAIN_PRE_EMPHASIS_0 |
+ DP_TRAIN_PRE_EMPH_LEVEL_0 |
DP_TRAIN_MAX_SWING_REACHED |
- DP_TRAIN_VOLTAGE_SWING_400;
+ DP_TRAIN_VOLTAGE_SWING_LEVEL_0;
err = drm_dp_dpcd_write(&dpaux->aux, DP_TRAINING_LANE0_SET, values,
link->num_lanes);
#include <drm/drm.h>
#include <drm/drmP.h>
+#include <drm/drm_gem.h>
#define TEGRA_BO_BOTTOM_UP (1 << 0)
if ((priv->num_encoders == 0) || (priv->num_connectors == 0)) {
/* oh nos! */
dev_err(dev->dev, "no encoders/connectors found\n");
+ drm_mode_config_cleanup(dev);
return -ENXIO;
}
dev->dev_private = priv;
priv->wq = alloc_ordered_workqueue("tilcdc", 0);
+ if (!priv->wq) {
+ ret = -ENOMEM;
+ goto fail_free_priv;
+ }
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(dev->dev, "failed to get memory resource\n");
ret = -EINVAL;
- goto fail;
+ goto fail_free_wq;
}
priv->mmio = ioremap_nocache(res->start, resource_size(res));
if (!priv->mmio) {
dev_err(dev->dev, "failed to ioremap\n");
ret = -ENOMEM;
- goto fail;
+ goto fail_free_wq;
}
priv->clk = clk_get(dev->dev, "fck");
if (IS_ERR(priv->clk)) {
dev_err(dev->dev, "failed to get functional clock\n");
ret = -ENODEV;
- goto fail;
+ goto fail_iounmap;
}
priv->disp_clk = clk_get(dev->dev, "dpll_disp_ck");
if (IS_ERR(priv->clk)) {
dev_err(dev->dev, "failed to get display clock\n");
ret = -ENODEV;
- goto fail;
+ goto fail_put_clk;
}
#ifdef CONFIG_CPU_FREQ
CPUFREQ_TRANSITION_NOTIFIER);
if (ret) {
dev_err(dev->dev, "failed to register cpufreq notifier\n");
- goto fail;
+ goto fail_put_disp_clk;
}
#endif
ret = modeset_init(dev);
if (ret < 0) {
dev_err(dev->dev, "failed to initialize mode setting\n");
- goto fail;
+ goto fail_cpufreq_unregister;
}
ret = drm_vblank_init(dev, 1);
if (ret < 0) {
dev_err(dev->dev, "failed to initialize vblank\n");
- goto fail;
+ goto fail_mode_config_cleanup;
}
pm_runtime_get_sync(dev->dev);
pm_runtime_put_sync(dev->dev);
if (ret < 0) {
dev_err(dev->dev, "failed to install IRQ handler\n");
- goto fail;
+ goto fail_vblank_cleanup;
}
platform_set_drvdata(pdev, dev);
priv->fbdev = drm_fbdev_cma_init(dev, bpp,
dev->mode_config.num_crtc,
dev->mode_config.num_connector);
+ if (IS_ERR(priv->fbdev)) {
+ ret = PTR_ERR(priv->fbdev);
+ goto fail_irq_uninstall;
+ }
drm_kms_helper_poll_init(dev);
return 0;
-fail:
- tilcdc_unload(dev);
+fail_irq_uninstall:
+ pm_runtime_get_sync(dev->dev);
+ drm_irq_uninstall(dev);
+ pm_runtime_put_sync(dev->dev);
+
+fail_vblank_cleanup:
+ drm_vblank_cleanup(dev);
+
+fail_mode_config_cleanup:
+ drm_mode_config_cleanup(dev);
+
+fail_cpufreq_unregister:
+ pm_runtime_disable(dev->dev);
+#ifdef CONFIG_CPU_FREQ
+ cpufreq_unregister_notifier(&priv->freq_transition,
+ CPUFREQ_TRANSITION_NOTIFIER);
+fail_put_disp_clk:
+ clk_put(priv->disp_clk);
+#endif
+
+fail_put_clk:
+ clk_put(priv->clk);
+
+fail_iounmap:
+ iounmap(priv->mmio);
+
+fail_free_wq:
+ flush_workqueue(priv->wq);
+ destroy_workqueue(priv->wq);
+
+fail_free_priv:
+ dev->dev_private = NULL;
+ kfree(priv);
return ret;
}
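The reworked error path follows the usual label-ladder idiom: each
acquisition gets a label that releases everything obtained before it, in
reverse order, so a failure at any step jumps to exactly the right amount
of cleanup. A generic sketch with hypothetical helpers:

static int example_probe(struct device *dev)
{
	int ret;

	ret = acquire_a(dev);		/* hypothetical resource A */
	if (ret)
		return ret;		/* nothing to undo yet */

	ret = acquire_b(dev);		/* hypothetical resource B */
	if (ret)
		goto fail_release_a;

	ret = acquire_c(dev);		/* hypothetical resource C */
	if (ret)
		goto fail_release_b;

	return 0;

fail_release_b:
	release_b(dev);
fail_release_a:
	release_a(dev);
	return ret;
}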
.unload = tilcdc_unload,
.preclose = tilcdc_preclose,
.lastclose = tilcdc_lastclose,
+ .set_busid = drm_platform_set_busid,
.irq_handler = tilcdc_irq,
.irq_preinstall = tilcdc_irq_preinstall,
.irq_postinstall = tilcdc_irq_postinstall,
#include <linux/pinctrl/pinmux.h>
#include <linux/pinctrl/consumer.h>
#include <linux/backlight.h>
+#include <linux/gpio/consumer.h>
#include <video/display_timing.h>
#include <video/of_display_timing.h>
#include <video/videomode.h>
struct tilcdc_panel_info *info;
struct display_timings *timings;
struct backlight_device *backlight;
+ struct gpio_desc *enable_gpio;
};
#define to_panel_module(x) container_of(x, struct panel_module, base)
{
struct panel_encoder *panel_encoder = to_panel_encoder(encoder);
struct backlight_device *backlight = panel_encoder->mod->backlight;
+ struct gpio_desc *gpio = panel_encoder->mod->enable_gpio;
- if (!backlight)
- return;
+ if (backlight) {
+ backlight->props.power = mode == DRM_MODE_DPMS_ON ?
+ FB_BLANK_UNBLANK : FB_BLANK_POWERDOWN;
+ backlight_update_status(backlight);
+ }
- backlight->props.power = mode == DRM_MODE_DPMS_ON
- ? FB_BLANK_UNBLANK : FB_BLANK_POWERDOWN;
- backlight_update_status(backlight);
+ if (gpio)
+ gpiod_set_value_cansleep(gpio,
+ mode == DRM_MODE_DPMS_ON ? 1 : 0);
}
static bool panel_encoder_mode_fixup(struct drm_encoder *encoder,
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
pr_err("%s: allocation failed\n", __func__);
+ of_node_put(info_np);
return NULL;
}
if (ret) {
pr_err("%s: error reading panel-info properties\n", __func__);
kfree(info);
+ of_node_put(info_np);
return NULL;
}
+ of_node_put(info_np);
return info;
}
-static struct of_device_id panel_of_match[];
-
static int panel_probe(struct platform_device *pdev)
{
- struct device_node *node = pdev->dev.of_node;
+ struct device_node *bl_node, *node = pdev->dev.of_node;
struct panel_module *panel_mod;
struct tilcdc_module *mod;
struct pinctrl *pinctrl;
- int ret = -EINVAL;
-
+ int ret;
/* bail out early if no DT data: */
if (!node) {
return -ENXIO;
}
- panel_mod = kzalloc(sizeof(*panel_mod), GFP_KERNEL);
+ panel_mod = devm_kzalloc(&pdev->dev, sizeof(*panel_mod), GFP_KERNEL);
if (!panel_mod)
return -ENOMEM;
+ bl_node = of_parse_phandle(node, "backlight", 0);
+ if (bl_node) {
+ panel_mod->backlight = of_find_backlight_by_node(bl_node);
+ of_node_put(bl_node);
+
+ if (!panel_mod->backlight)
+ return -EPROBE_DEFER;
+
+ dev_info(&pdev->dev, "found backlight\n");
+ }
+
+ panel_mod->enable_gpio = devm_gpiod_get(&pdev->dev, "enable");
+ if (IS_ERR(panel_mod->enable_gpio)) {
+ ret = PTR_ERR(panel_mod->enable_gpio);
+ if (ret != -ENOENT) {
+ dev_err(&pdev->dev, "failed to request enable GPIO\n");
+ goto fail_backlight;
+ }
+
+ /* The optional GPIO is not present; continue silently. */
+ panel_mod->enable_gpio = NULL;
+ } else {
+ ret = gpiod_direction_output(panel_mod->enable_gpio, 0);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to setup GPIO\n");
+ goto fail_backlight;
+ }
+ dev_info(&pdev->dev, "found enable GPIO\n");
+ }
+
mod = &panel_mod->base;
pdev->dev.platform_data = mod;
panel_mod->timings = of_get_display_timings(node);
if (!panel_mod->timings) {
dev_err(&pdev->dev, "could not get panel timings\n");
+ ret = -EINVAL;
goto fail_free;
}
panel_mod->info = of_get_panel_info(node);
if (!panel_mod->info) {
dev_err(&pdev->dev, "could not get panel info\n");
+ ret = -EINVAL;
goto fail_timings;
}
mod->preferred_bpp = panel_mod->info->bpp;
- panel_mod->backlight = of_find_backlight_by_node(node);
- if (panel_mod->backlight)
- dev_info(&pdev->dev, "found backlight\n");
-
return 0;
fail_timings:
display_timings_release(panel_mod->timings);
fail_free:
- kfree(panel_mod);
tilcdc_module_cleanup(mod);
+
+fail_backlight:
+ if (panel_mod->backlight)
+ put_device(&panel_mod->backlight->dev);
return ret;
}
{
struct tilcdc_module *mod = dev_get_platdata(&pdev->dev);
struct panel_module *panel_mod = to_panel_module(mod);
+ struct backlight_device *backlight = panel_mod->backlight;
+
+ if (backlight)
+ put_device(&backlight->dev);
display_timings_release(panel_mod->timings);
tilcdc_module_cleanup(mod);
kfree(panel_mod->info);
- kfree(panel_mod);
return 0;
}
#include <linux/file.h>
#include <linux/module.h>
#include <linux/atomic.h>
+#include <linux/reservation.h>
#define TTM_ASSERT_LOCKED(param)
#define TTM_DEBUG(fmt, arg...)
.mode = S_IRUGO
};
-static inline int ttm_mem_type_from_flags(uint32_t flags, uint32_t *mem_type)
+static inline int ttm_mem_type_from_place(const struct ttm_place *place,
+ uint32_t *mem_type)
{
int i;
for (i = 0; i <= TTM_PL_PRIV5; i++)
- if (flags & (1 << i)) {
+ if (place->flags & (1 << i)) {
*mem_type = i;
return 0;
}
bo, bo->mem.num_pages, bo->mem.size >> 10,
bo->mem.size >> 20);
for (i = 0; i < placement->num_placement; i++) {
- ret = ttm_mem_type_from_flags(placement->placement[i],
+ ret = ttm_mem_type_from_place(&placement->placement[i],
&mem_type);
if (ret)
return;
pr_err(" placement[%d]=0x%08X (%d)\n",
- i, placement->placement[i], mem_type);
+ i, placement->placement[i].flags, mem_type);
ttm_mem_type_debug(bo->bdev, mem_type);
}
}
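With this change a placement entry is no longer a bare flags word with
device-global fpfn/lpfn limits; each struct ttm_place carries its own
page-frame range. A hedged sketch with illustrative values:

struct ttm_place place = {
	.fpfn = 0,		/* first allowed page frame */
	.lpfn = 256,		/* hypothetical upper bound, 0 means no limit */
	.flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_WC,
};
struct ttm_placement placement = {
	.num_placement = 1,
	.placement = &place,
	.num_busy_placement = 1,
	.busy_placement = &place,
};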
BUG_ON(atomic_read(&bo->list_kref.refcount));
BUG_ON(atomic_read(&bo->kref.refcount));
BUG_ON(atomic_read(&bo->cpu_writers));
- BUG_ON(bo->sync_obj != NULL);
BUG_ON(bo->mem.mm_node != NULL);
BUG_ON(!list_empty(&bo->lru));
BUG_ON(!list_empty(&bo->ddestroy));
ww_mutex_unlock (&bo->resv->lock);
}
+static void ttm_bo_flush_all_fences(struct ttm_buffer_object *bo)
+{
+ struct reservation_object_list *fobj;
+ struct fence *fence;
+ int i;
+
+ fobj = reservation_object_get_list(bo->resv);
+ fence = reservation_object_get_excl(bo->resv);
+ if (fence && !fence->ops->signaled)
+ fence_enable_sw_signaling(fence);
+
+ for (i = 0; fobj && i < fobj->shared_count; ++i) {
+ fence = rcu_dereference_protected(fobj->shared[i],
+ reservation_object_held(bo->resv));
+
+ if (!fence->ops->signaled)
+ fence_enable_sw_signaling(fence);
+ }
+}
+
static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
{
struct ttm_bo_device *bdev = bo->bdev;
struct ttm_bo_global *glob = bo->glob;
- struct ttm_bo_driver *driver = bdev->driver;
- void *sync_obj = NULL;
int put_count;
int ret;
spin_lock(&glob->lru_lock);
ret = __ttm_bo_reserve(bo, false, true, false, NULL);
- spin_lock(&bdev->fence_lock);
- (void) ttm_bo_wait(bo, false, false, true);
- if (!ret && !bo->sync_obj) {
- spin_unlock(&bdev->fence_lock);
- put_count = ttm_bo_del_from_lru(bo);
-
- spin_unlock(&glob->lru_lock);
- ttm_bo_cleanup_memtype_use(bo);
+ if (!ret) {
+ if (!ttm_bo_wait(bo, false, false, true)) {
+ put_count = ttm_bo_del_from_lru(bo);
- ttm_bo_list_ref_sub(bo, put_count, true);
+ spin_unlock(&glob->lru_lock);
+ ttm_bo_cleanup_memtype_use(bo);
- return;
- }
- if (bo->sync_obj)
- sync_obj = driver->sync_obj_ref(bo->sync_obj);
- spin_unlock(&bdev->fence_lock);
+ ttm_bo_list_ref_sub(bo, put_count, true);
- if (!ret) {
+ return;
+ } else
+ ttm_bo_flush_all_fences(bo);
/*
* Make NO_EVICT bos immediately available to
list_add_tail(&bo->ddestroy, &bdev->ddestroy);
spin_unlock(&glob->lru_lock);
- if (sync_obj) {
- driver->sync_obj_flush(sync_obj);
- driver->sync_obj_unref(&sync_obj);
- }
schedule_delayed_work(&bdev->wq,
((HZ / 100) < 1) ? 1 : HZ / 100);
}
bool interruptible,
bool no_wait_gpu)
{
- struct ttm_bo_device *bdev = bo->bdev;
- struct ttm_bo_driver *driver = bdev->driver;
struct ttm_bo_global *glob = bo->glob;
int put_count;
int ret;
- spin_lock(&bdev->fence_lock);
ret = ttm_bo_wait(bo, false, false, true);
if (ret && !no_wait_gpu) {
- void *sync_obj;
-
- /*
- * Take a reference to the fence and unreserve,
- * at this point the buffer should be dead, so
- * no new sync objects can be attached.
- */
- sync_obj = driver->sync_obj_ref(bo->sync_obj);
- spin_unlock(&bdev->fence_lock);
-
- __ttm_bo_unreserve(bo);
+ long lret;
+ ww_mutex_unlock(&bo->resv->lock);
spin_unlock(&glob->lru_lock);
- ret = driver->sync_obj_wait(sync_obj, false, interruptible);
- driver->sync_obj_unref(&sync_obj);
- if (ret)
- return ret;
+ lret = reservation_object_wait_timeout_rcu(bo->resv,
+ true,
+ interruptible,
+ 30 * HZ);
- /*
- * remove sync_obj with ttm_bo_wait, the wait should be
- * finished, and no new wait object should have been added.
- */
- spin_lock(&bdev->fence_lock);
- ret = ttm_bo_wait(bo, false, false, true);
- WARN_ON(ret);
- spin_unlock(&bdev->fence_lock);
- if (ret)
- return ret;
+ if (lret < 0)
+ return lret;
+ else if (lret == 0)
+ return -EBUSY;
spin_lock(&glob->lru_lock);
ret = __ttm_bo_reserve(bo, false, true, false, NULL);
spin_unlock(&glob->lru_lock);
return 0;
}
- } else
- spin_unlock(&bdev->fence_lock);
+
+ /*
+ * remove sync_obj with ttm_bo_wait; the wait should be
+ * finished, and no new wait object should have been added.
+ */
+ ret = ttm_bo_wait(bo, false, false, true);
+ WARN_ON(ret);
+ }
if (ret || unlikely(list_empty(&bo->ddestroy))) {
__ttm_bo_unreserve(bo);
struct ttm_placement placement;
int ret = 0;
- spin_lock(&bdev->fence_lock);
ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
- spin_unlock(&bdev->fence_lock);
if (unlikely(ret != 0)) {
if (ret != -ERESTARTSYS) {
evict_mem.bus.io_reserved_vm = false;
evict_mem.bus.io_reserved_count = 0;
- placement.fpfn = 0;
- placement.lpfn = 0;
placement.num_placement = 0;
placement.num_busy_placement = 0;
bdev->driver->evict_flags(bo, &placement);
*/
static int ttm_bo_mem_force_space(struct ttm_buffer_object *bo,
uint32_t mem_type,
- struct ttm_placement *placement,
+ const struct ttm_place *place,
struct ttm_mem_reg *mem,
bool interruptible,
bool no_wait_gpu)
int ret;
do {
- ret = (*man->func->get_node)(man, bo, placement, 0, mem);
+ ret = (*man->func->get_node)(man, bo, place, mem);
if (unlikely(ret != 0))
return ret;
if (mem->mm_node)
static bool ttm_bo_mt_compatible(struct ttm_mem_type_manager *man,
uint32_t mem_type,
- uint32_t proposed_placement,
+ const struct ttm_place *place,
uint32_t *masked_placement)
{
uint32_t cur_flags = ttm_bo_type_flags(mem_type);
- if ((cur_flags & proposed_placement & TTM_PL_MASK_MEM) == 0)
+ if ((cur_flags & place->flags & TTM_PL_MASK_MEM) == 0)
return false;
- if ((proposed_placement & man->available_caching) == 0)
+ if ((place->flags & man->available_caching) == 0)
return false;
- cur_flags |= (proposed_placement & man->available_caching);
+ cur_flags |= (place->flags & man->available_caching);
*masked_placement = cur_flags;
return true;
mem->mm_node = NULL;
for (i = 0; i < placement->num_placement; ++i) {
- ret = ttm_mem_type_from_flags(placement->placement[i],
- &mem_type);
+ const struct ttm_place *place = &placement->placement[i];
+
+ ret = ttm_mem_type_from_place(place, &mem_type);
if (ret)
return ret;
man = &bdev->man[mem_type];
- type_ok = ttm_bo_mt_compatible(man,
- mem_type,
- placement->placement[i],
+ type_ok = ttm_bo_mt_compatible(man, mem_type, place,
&cur_flags);
if (!type_ok)
* Use the access and other non-mapping-related flag bits from
* the memory placement flags to the current flags
*/
- ttm_flag_masked(&cur_flags, placement->placement[i],
+ ttm_flag_masked(&cur_flags, place->flags,
~TTM_PL_MASK_MEMTYPE);
if (mem_type == TTM_PL_SYSTEM)
if (man->has_type && man->use_type) {
type_found = true;
- ret = (*man->func->get_node)(man, bo, placement,
- cur_flags, mem);
+ ret = (*man->func->get_node)(man, bo, place, mem);
if (unlikely(ret))
return ret;
}
return -EINVAL;
for (i = 0; i < placement->num_busy_placement; ++i) {
- ret = ttm_mem_type_from_flags(placement->busy_placement[i],
- &mem_type);
+ const struct ttm_place *place = &placement->busy_placement[i];
+
+ ret = ttm_mem_type_from_place(place, &mem_type);
if (ret)
return ret;
man = &bdev->man[mem_type];
if (!man->has_type)
continue;
- if (!ttm_bo_mt_compatible(man,
- mem_type,
- placement->busy_placement[i],
- &cur_flags))
+ if (!ttm_bo_mt_compatible(man, mem_type, place, &cur_flags))
continue;
cur_flags = ttm_bo_select_caching(man, bo->mem.placement,
* Use the access and other non-mapping-related flag bits from
* the memory placement flags to the current flags
*/
- ttm_flag_masked(&cur_flags, placement->busy_placement[i],
+ ttm_flag_masked(&cur_flags, place->flags,
~TTM_PL_MASK_MEMTYPE);
if (mem_type == TTM_PL_SYSTEM) {
return 0;
}
- ret = ttm_bo_mem_force_space(bo, mem_type, placement, mem,
+ ret = ttm_bo_mem_force_space(bo, mem_type, place, mem,
interruptible, no_wait_gpu);
if (ret == 0 && mem->mm_node) {
mem->placement = cur_flags;
{
int ret = 0;
struct ttm_mem_reg mem;
- struct ttm_bo_device *bdev = bo->bdev;
lockdep_assert_held(&bo->resv->lock.base);
* Have the driver move function wait for idle when necessary,
* instead of doing it here.
*/
- spin_lock(&bdev->fence_lock);
ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
- spin_unlock(&bdev->fence_lock);
if (ret)
return ret;
mem.num_pages = bo->num_pages;
{
int i;
- if (mem->mm_node && placement->lpfn != 0 &&
- (mem->start < placement->fpfn ||
- mem->start + mem->num_pages > placement->lpfn))
- return false;
-
for (i = 0; i < placement->num_placement; i++) {
- *new_flags = placement->placement[i];
+ const struct ttm_place *heap = &placement->placement[i];
+ if (mem->mm_node && heap->lpfn != 0 &&
+ (mem->start < heap->fpfn ||
+ mem->start + mem->num_pages > heap->lpfn))
+ continue;
+
+ *new_flags = heap->flags;
if ((*new_flags & mem->placement & TTM_PL_MASK_CACHING) &&
(*new_flags & mem->placement & TTM_PL_MASK_MEM))
return true;
}
for (i = 0; i < placement->num_busy_placement; i++) {
- *new_flags = placement->busy_placement[i];
+ const struct ttm_place *heap = &placement->busy_placement[i];
+ if (mem->mm_node && heap->lpfn != 0 &&
+ (mem->start < heap->fpfn ||
+ mem->start + mem->num_pages > heap->lpfn))
+ continue;
+
+ *new_flags = heap->flags;
if ((*new_flags & mem->placement & TTM_PL_MASK_CACHING) &&
(*new_flags & mem->placement & TTM_PL_MASK_MEM))
return true;
uint32_t new_flags;
lockdep_assert_held(&bo->resv->lock.base);
- /* Check that range is valid */
- if (placement->lpfn || placement->fpfn)
- if (placement->fpfn > placement->lpfn ||
- (placement->lpfn - placement->fpfn) < bo->num_pages)
- return -EINVAL;
/*
* Check whether we need to move buffer.
*/
}
EXPORT_SYMBOL(ttm_bo_validate);
-int ttm_bo_check_placement(struct ttm_buffer_object *bo,
- struct ttm_placement *placement)
-{
- BUG_ON((placement->fpfn || placement->lpfn) &&
- (bo->mem.num_pages > (placement->lpfn - placement->fpfn)));
-
- return 0;
-}
-
int ttm_bo_init(struct ttm_bo_device *bdev,
struct ttm_buffer_object *bo,
unsigned long size,
struct file *persistent_swap_storage,
size_t acc_size,
struct sg_table *sg,
+ struct reservation_object *resv,
void (*destroy) (struct ttm_buffer_object *))
{
int ret = 0;
bo->persistent_swap_storage = persistent_swap_storage;
bo->acc_size = acc_size;
bo->sg = sg;
- bo->resv = &bo->ttm_resv;
- reservation_object_init(bo->resv);
+ if (resv) {
+ bo->resv = resv;
+ lockdep_assert_held(&bo->resv->lock.base);
+ } else {
+ bo->resv = &bo->ttm_resv;
+ reservation_object_init(&bo->ttm_resv);
+ }
atomic_inc(&bo->glob->bo_count);
drm_vma_node_reset(&bo->vma_node);
- ret = ttm_bo_check_placement(bo, placement);
-
/*
* For ttm_bo_type_device buffers, allocate
* address space from the device.
*/
- if (likely(!ret) &&
- (bo->type == ttm_bo_type_device ||
- bo->type == ttm_bo_type_sg))
+ if (bo->type == ttm_bo_type_device ||
+ bo->type == ttm_bo_type_sg)
ret = drm_vma_offset_add(&bdev->vma_manager, &bo->vma_node,
bo->mem.num_pages);
- locked = ww_mutex_trylock(&bo->resv->lock);
- WARN_ON(!locked);
+ /* passed reservation objects should already be locked,
+ * since otherwise lockdep will be angered in radeon.
+ */
+ if (!resv) {
+ locked = ww_mutex_trylock(&bo->resv->lock);
+ WARN_ON(!locked);
+ }
if (likely(!ret))
ret = ttm_bo_validate(bo, placement, interruptible, false);
- ttm_bo_unreserve(bo);
+ if (!resv)
+ ttm_bo_unreserve(bo);
if (unlikely(ret))
ttm_bo_unref(&bo);
acc_size = ttm_bo_acc_size(bdev, size, sizeof(struct ttm_buffer_object));
ret = ttm_bo_init(bdev, bo, size, type, placement, page_alignment,
interruptible, persistent_swap_storage, acc_size,
- NULL, NULL);
+ NULL, NULL, NULL);
if (likely(ret == 0))
*p_bo = bo;
bdev->glob = glob;
bdev->need_dma32 = need_dma32;
bdev->val_seq = 0;
- spin_lock_init(&bdev->fence_lock);
mutex_lock(&glob->device_list_mutex);
list_add_tail(&bdev->device_list, &glob->device_list);
mutex_unlock(&glob->device_list_mutex);
EXPORT_SYMBOL(ttm_bo_unmap_virtual);
-
int ttm_bo_wait(struct ttm_buffer_object *bo,
bool lazy, bool interruptible, bool no_wait)
{
- struct ttm_bo_driver *driver = bo->bdev->driver;
- struct ttm_bo_device *bdev = bo->bdev;
- void *sync_obj;
- int ret = 0;
-
- if (likely(bo->sync_obj == NULL))
- return 0;
+ struct reservation_object_list *fobj;
+ struct reservation_object *resv;
+ struct fence *excl;
+ long timeout = 15 * HZ;
+ int i;
- while (bo->sync_obj) {
+ resv = bo->resv;
+ fobj = reservation_object_get_list(resv);
+ excl = reservation_object_get_excl(resv);
+ if (excl) {
+ if (!fence_is_signaled(excl)) {
+ if (no_wait)
+ return -EBUSY;
- if (driver->sync_obj_signaled(bo->sync_obj)) {
- void *tmp_obj = bo->sync_obj;
- bo->sync_obj = NULL;
- clear_bit(TTM_BO_PRIV_FLAG_MOVING, &bo->priv_flags);
- spin_unlock(&bdev->fence_lock);
- driver->sync_obj_unref(&tmp_obj);
- spin_lock(&bdev->fence_lock);
- continue;
+ timeout = fence_wait_timeout(excl,
+ interruptible, timeout);
}
+ }
- if (no_wait)
- return -EBUSY;
+ for (i = 0; fobj && timeout > 0 && i < fobj->shared_count; ++i) {
+ struct fence *fence;
+ fence = rcu_dereference_protected(fobj->shared[i],
+ reservation_object_held(resv));
- sync_obj = driver->sync_obj_ref(bo->sync_obj);
- spin_unlock(&bdev->fence_lock);
- ret = driver->sync_obj_wait(sync_obj,
- lazy, interruptible);
- if (unlikely(ret != 0)) {
- driver->sync_obj_unref(&sync_obj);
- spin_lock(&bdev->fence_lock);
- return ret;
- }
- spin_lock(&bdev->fence_lock);
- if (likely(bo->sync_obj == sync_obj)) {
- void *tmp_obj = bo->sync_obj;
- bo->sync_obj = NULL;
- clear_bit(TTM_BO_PRIV_FLAG_MOVING,
- &bo->priv_flags);
- spin_unlock(&bdev->fence_lock);
- driver->sync_obj_unref(&sync_obj);
- driver->sync_obj_unref(&tmp_obj);
- spin_lock(&bdev->fence_lock);
- } else {
- spin_unlock(&bdev->fence_lock);
- driver->sync_obj_unref(&sync_obj);
- spin_lock(&bdev->fence_lock);
+ if (!fence_is_signaled(fence)) {
+ if (no_wait)
+ return -EBUSY;
+
+ timeout = fence_wait_timeout(fence,
+ interruptible, timeout);
}
}
+
+ if (timeout < 0)
+ return timeout;
+
+ if (timeout == 0)
+ return -EBUSY;
+
+ reservation_object_add_excl_fence(resv, NULL);
+ clear_bit(TTM_BO_PRIV_FLAG_MOVING, &bo->priv_flags);
return 0;
}
EXPORT_SYMBOL(ttm_bo_wait);
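The rewrite leans on the fence_wait_timeout() return convention, which lets
one timeout budget be threaded through the exclusive fence and all shared
fences: a negative value is an error, zero means the budget expired, and a
positive value is the time remaining in jiffies. A minimal sketch:

long timeout = 15 * HZ;		/* overall budget for all fences */

timeout = fence_wait_timeout(fence, true /* interruptible */, timeout);
if (timeout < 0)
	return timeout;		/* e.g. -ERESTARTSYS */
if (timeout == 0)
	return -EBUSY;		/* budget exhausted before signaling */
/* fence signaled; 'timeout' jiffies remain for the next fence */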
int ttm_bo_synccpu_write_grab(struct ttm_buffer_object *bo, bool no_wait)
{
- struct ttm_bo_device *bdev = bo->bdev;
int ret = 0;
/*
ret = ttm_bo_reserve(bo, true, no_wait, false, NULL);
if (unlikely(ret != 0))
return ret;
- spin_lock(&bdev->fence_lock);
ret = ttm_bo_wait(bo, false, true, no_wait);
- spin_unlock(&bdev->fence_lock);
if (likely(ret == 0))
atomic_inc(&bo->cpu_writers);
ttm_bo_unreserve(bo);
* Wait for GPU, then move to system cached.
*/
- spin_lock(&bo->bdev->fence_lock);
ret = ttm_bo_wait(bo, false, false, false);
- spin_unlock(&bo->bdev->fence_lock);
if (unlikely(ret != 0))
goto out;
static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man,
struct ttm_buffer_object *bo,
- struct ttm_placement *placement,
- uint32_t flags,
+ const struct ttm_place *place,
struct ttm_mem_reg *mem)
{
struct ttm_range_manager *rman = (struct ttm_range_manager *) man->priv;
unsigned long lpfn;
int ret;
- lpfn = placement->lpfn;
+ lpfn = place->lpfn;
if (!lpfn)
lpfn = man->size;
if (!node)
return -ENOMEM;
- if (flags & TTM_PL_FLAG_TOPDOWN)
+ if (place->flags & TTM_PL_FLAG_TOPDOWN)
aflags = DRM_MM_CREATE_TOP;
spin_lock(&rman->lock);
ret = drm_mm_insert_node_in_range_generic(mm, node, mem->num_pages,
mem->page_alignment, 0,
- placement->fpfn, lpfn,
+ place->fpfn, lpfn,
DRM_MM_SEARCH_BEST,
aflags);
spin_unlock(&rman->lock);
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/module.h>
+#include <linux/reservation.h>
void ttm_bo_free_old_node(struct ttm_buffer_object *bo)
{
struct ttm_buffer_object **new_obj)
{
struct ttm_buffer_object *fbo;
- struct ttm_bo_device *bdev = bo->bdev;
- struct ttm_bo_driver *driver = bdev->driver;
int ret;
fbo = kmalloc(sizeof(*fbo), GFP_KERNEL);
drm_vma_node_reset(&fbo->vma_node);
atomic_set(&fbo->cpu_writers, 0);
- spin_lock(&bdev->fence_lock);
- if (bo->sync_obj)
- fbo->sync_obj = driver->sync_obj_ref(bo->sync_obj);
- else
- fbo->sync_obj = NULL;
- spin_unlock(&bdev->fence_lock);
kref_init(&fbo->list_kref);
kref_init(&fbo->kref);
fbo->destroy = &ttm_transfered_destroy;
pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
{
+ /* Cached mappings need no adjustment */
+ if (caching_flags & TTM_PL_FLAG_CACHED)
+ return tmp;
+
#if defined(__i386__) || defined(__x86_64__)
if (caching_flags & TTM_PL_FLAG_WC)
tmp = pgprot_writecombine(tmp);
else if (boot_cpu_data.x86 > 3)
tmp = pgprot_noncached(tmp);
-
-#elif defined(__powerpc__)
- if (!(caching_flags & TTM_PL_FLAG_CACHED)) {
- pgprot_val(tmp) |= _PAGE_NO_CACHE;
- if (caching_flags & TTM_PL_FLAG_UNCACHED)
- pgprot_val(tmp) |= _PAGE_GUARDED;
- }
#endif
-#if defined(__ia64__) || defined(__arm__)
+#if defined(__ia64__) || defined(__arm__) || defined(__powerpc__)
if (caching_flags & TTM_PL_FLAG_WC)
tmp = pgprot_writecombine(tmp);
else
tmp = pgprot_noncached(tmp);
#endif
#if defined(__sparc__) || defined(__mips__)
- if (!(caching_flags & TTM_PL_FLAG_CACHED))
- tmp = pgprot_noncached(tmp);
+ tmp = pgprot_noncached(tmp);
#endif
return tmp;
}
* We need to use vmap to get the desired page protection
* or to make the buffer object look contiguous.
*/
- prot = (mem->placement & TTM_PL_FLAG_CACHED) ?
- PAGE_KERNEL :
- ttm_io_prot(mem->placement, PAGE_KERNEL);
+ prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
map->bo_kmap_type = ttm_bo_map_vmap;
map->virtual = vmap(ttm->pages + start_page, num_pages,
0, prot);
EXPORT_SYMBOL(ttm_bo_kunmap);
int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
- void *sync_obj,
+ struct fence *fence,
bool evict,
bool no_wait_gpu,
struct ttm_mem_reg *new_mem)
{
struct ttm_bo_device *bdev = bo->bdev;
- struct ttm_bo_driver *driver = bdev->driver;
struct ttm_mem_type_manager *man = &bdev->man[new_mem->mem_type];
struct ttm_mem_reg *old_mem = &bo->mem;
int ret;
struct ttm_buffer_object *ghost_obj;
- void *tmp_obj = NULL;
- spin_lock(&bdev->fence_lock);
- if (bo->sync_obj) {
- tmp_obj = bo->sync_obj;
- bo->sync_obj = NULL;
- }
- bo->sync_obj = driver->sync_obj_ref(sync_obj);
+ reservation_object_add_excl_fence(bo->resv, fence);
if (evict) {
ret = ttm_bo_wait(bo, false, false, false);
- spin_unlock(&bdev->fence_lock);
- if (tmp_obj)
- driver->sync_obj_unref(&tmp_obj);
if (ret)
return ret;
*/
set_bit(TTM_BO_PRIV_FLAG_MOVING, &bo->priv_flags);
- spin_unlock(&bdev->fence_lock);
- if (tmp_obj)
- driver->sync_obj_unref(&tmp_obj);
ret = ttm_buffer_object_transfer(bo, &ghost_obj);
if (ret)
return ret;
+ reservation_object_add_excl_fence(ghost_obj->resv, fence);
+
/**
* If we're not moving to fixed memory, the TTM object
* needs to stay alive. Otherwise hang it on the ghost
struct vm_area_struct *vma,
struct vm_fault *vmf)
{
- struct ttm_bo_device *bdev = bo->bdev;
int ret = 0;
- spin_lock(&bdev->fence_lock);
if (likely(!test_bit(TTM_BO_PRIV_FLAG_MOVING, &bo->priv_flags)))
goto out_unlock;
VM_FAULT_NOPAGE;
out_unlock:
- spin_unlock(&bdev->fence_lock);
return ret;
}
cvma.vm_page_prot);
} else {
ttm = bo->ttm;
- if (!(bo->mem.placement & TTM_PL_FLAG_CACHED))
- cvma.vm_page_prot = ttm_io_prot(bo->mem.placement,
- cvma.vm_page_prot);
+ cvma.vm_page_prot = ttm_io_prot(bo->mem.placement,
+ cvma.vm_page_prot);
/* Allocate all page at once, most common usage */
if (ttm->bdev->driver->ttm_tt_populate(ttm)) {
#include <linux/sched.h>
#include <linux/module.h>
-static void ttm_eu_backoff_reservation_locked(struct list_head *list)
+static void ttm_eu_backoff_reservation_reverse(struct list_head *list,
+ struct ttm_validate_buffer *entry)
{
- struct ttm_validate_buffer *entry;
-
- list_for_each_entry(entry, list, head) {
+ list_for_each_entry_continue_reverse(entry, list, head) {
struct ttm_buffer_object *bo = entry->bo;
- if (!entry->reserved)
- continue;
- entry->reserved = false;
- if (entry->removed) {
- ttm_bo_add_to_lru(bo);
- entry->removed = false;
- }
__ttm_bo_unreserve(bo);
}
}
list_for_each_entry(entry, list, head) {
struct ttm_buffer_object *bo = entry->bo;
- if (!entry->reserved)
- continue;
-
- if (!entry->removed) {
- entry->put_count = ttm_bo_del_from_lru(bo);
- entry->removed = true;
- }
- }
-}
-
-static void ttm_eu_list_ref_sub(struct list_head *list)
-{
- struct ttm_validate_buffer *entry;
-
- list_for_each_entry(entry, list, head) {
- struct ttm_buffer_object *bo = entry->bo;
+ unsigned put_count = ttm_bo_del_from_lru(bo);
- if (entry->put_count) {
- ttm_bo_list_ref_sub(bo, entry->put_count, true);
- entry->put_count = 0;
- }
+ ttm_bo_list_ref_sub(bo, put_count, true);
}
}
entry = list_first_entry(list, struct ttm_validate_buffer, head);
glob = entry->bo->glob;
+
spin_lock(&glob->lru_lock);
- ttm_eu_backoff_reservation_locked(list);
+ list_for_each_entry(entry, list, head) {
+ struct ttm_buffer_object *bo = entry->bo;
+
+ ttm_bo_add_to_lru(bo);
+ __ttm_bo_unreserve(bo);
+ }
+ spin_unlock(&glob->lru_lock);
+
if (ticket)
ww_acquire_fini(ticket);
- spin_unlock(&glob->lru_lock);
}
EXPORT_SYMBOL(ttm_eu_backoff_reservation);
*/
int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
- struct list_head *list)
+ struct list_head *list, bool intr)
{
struct ttm_bo_global *glob;
struct ttm_validate_buffer *entry;
if (list_empty(list))
return 0;
- list_for_each_entry(entry, list, head) {
- entry->reserved = false;
- entry->put_count = 0;
- entry->removed = false;
- }
-
entry = list_first_entry(list, struct ttm_validate_buffer, head);
glob = entry->bo->glob;
if (ticket)
ww_acquire_init(ticket, &reservation_ww_class);
-retry:
+
list_for_each_entry(entry, list, head) {
struct ttm_buffer_object *bo = entry->bo;
- /* already slowpath reserved? */
- if (entry->reserved)
- continue;
-
- ret = __ttm_bo_reserve(bo, true, (ticket == NULL), true,
+ ret = __ttm_bo_reserve(bo, intr, (ticket == NULL), true,
ticket);
+ if (!ret && unlikely(atomic_read(&bo->cpu_writers) > 0)) {
+ __ttm_bo_unreserve(bo);
+
+ ret = -EBUSY;
+ }
- if (ret == -EDEADLK) {
- /* uh oh, we lost out, drop every reservation and try
- * to only reserve this buffer, then start over if
- * this succeeds.
- */
- BUG_ON(ticket == NULL);
- spin_lock(&glob->lru_lock);
- ttm_eu_backoff_reservation_locked(list);
- spin_unlock(&glob->lru_lock);
- ttm_eu_list_ref_sub(list);
+ if (!ret) {
+ if (!entry->shared)
+ continue;
+
+ ret = reservation_object_reserve_shared(bo->resv);
+ if (!ret)
+ continue;
+ }
+
+ /* uh oh, we lost out, drop every reservation and try
+ * to only reserve this buffer, then start over if
+ * this succeeds.
+ */
+ ttm_eu_backoff_reservation_reverse(list, entry);
+
+ if (ret == -EDEADLK && intr) {
ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
ticket);
- if (unlikely(ret != 0)) {
- if (ret == -EINTR)
- ret = -ERESTARTSYS;
- goto err_fini;
- }
+ } else if (ret == -EDEADLK) {
+ ww_mutex_lock_slow(&bo->resv->lock, ticket);
+ ret = 0;
+ }
- entry->reserved = true;
- if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
- ret = -EBUSY;
- goto err;
- }
- goto retry;
- } else if (ret)
- goto err;
+ if (!ret && entry->shared)
+ ret = reservation_object_reserve_shared(bo->resv);
- entry->reserved = true;
- if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
- ret = -EBUSY;
- goto err;
+ if (unlikely(ret != 0)) {
+ if (ret == -EINTR)
+ ret = -ERESTARTSYS;
+ if (ticket) {
+ ww_acquire_done(ticket);
+ ww_acquire_fini(ticket);
+ }
+ return ret;
}
+
+ /* move this item to the front of the list, which forces
+ * correct iteration of the loop without extra bookkeeping
+ */
+ list_del(&entry->head);
+ list_add(&entry->head, list);
}
if (ticket)
spin_lock(&glob->lru_lock);
ttm_eu_del_from_lru_locked(list);
spin_unlock(&glob->lru_lock);
- ttm_eu_list_ref_sub(list);
return 0;
-
-err:
- spin_lock(&glob->lru_lock);
- ttm_eu_backoff_reservation_locked(list);
- spin_unlock(&glob->lru_lock);
- ttm_eu_list_ref_sub(list);
-err_fini:
- if (ticket) {
- ww_acquire_done(ticket);
- ww_acquire_fini(ticket);
- }
- return ret;
}
EXPORT_SYMBOL(ttm_eu_reserve_buffers);
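The -EDEADLK handling above is the standard wait/wound backoff: drop every
reservation taken so far, acquire the contended lock with the slow path
(which cannot fail with -EDEADLK), then resume. A minimal sketch for two
hypothetical objects a and b whose ww_mutexes share reservation_ww_class;
the retry of the remaining lock is elided:

struct ww_acquire_ctx ctx;
int ret;

ww_acquire_init(&ctx, &reservation_ww_class);

ret = ww_mutex_lock(&a->lock, &ctx);	/* the first lock cannot deadlock */

ret = ww_mutex_lock(&b->lock, &ctx);
if (ret == -EDEADLK) {
	ww_mutex_unlock(&a->lock);		/* back off in reverse order */
	ww_mutex_lock_slow(&b->lock, &ctx);	/* guaranteed to succeed */
	/* ... retry &a->lock, backing off again on -EDEADLK ... */
}

ww_acquire_done(&ctx);
/* ... critical section ... */
ww_mutex_unlock(&b->lock);
ww_mutex_unlock(&a->lock);
ww_acquire_fini(&ctx);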
void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket,
- struct list_head *list, void *sync_obj)
+ struct list_head *list, struct fence *fence)
{
struct ttm_validate_buffer *entry;
struct ttm_buffer_object *bo;
glob = bo->glob;
spin_lock(&glob->lru_lock);
- spin_lock(&bdev->fence_lock);
list_for_each_entry(entry, list, head) {
bo = entry->bo;
- entry->old_sync_obj = bo->sync_obj;
- bo->sync_obj = driver->sync_obj_ref(sync_obj);
+ if (entry->shared)
+ reservation_object_add_shared_fence(bo->resv, fence);
+ else
+ reservation_object_add_excl_fence(bo->resv, fence);
ttm_bo_add_to_lru(bo);
__ttm_bo_unreserve(bo);
- entry->reserved = false;
}
- spin_unlock(&bdev->fence_lock);
spin_unlock(&glob->lru_lock);
if (ticket)
ww_acquire_fini(ticket);
-
- list_for_each_entry(entry, list, head) {
- if (entry->old_sync_obj)
- driver->sync_obj_unref(&entry->old_sync_obj);
- }
}
EXPORT_SYMBOL(ttm_eu_fence_buffer_objects);
zone->glob = glob;
glob->zone_highmem = zone;
ret = kobject_init_and_add(
- &zone->kobj, &ttm_mem_zone_kobj_type, &glob->kobj, zone->name);
+ &zone->kobj, &ttm_mem_zone_kobj_type, &glob->kobj, "%s",
+ zone->name);
if (unlikely(ret != 0)) {
kobject_put(&zone->kobj);
return ret;
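The "%s" fix above closes a classic format-string hole:
kobject_init_and_add() takes a printf-style format, so passing a
non-literal name would let any '%' characters in the data be parsed as
conversions. In sketch form:

/* WRONG: zone->name is interpreted as a format string */
kobject_init_and_add(&zone->kobj, &ttm_mem_zone_kobj_type,
		     &glob->kobj, zone->name);

/* correct: the name is passed as data, not as the format */
kobject_init_and_add(&zone->kobj, &ttm_mem_zone_kobj_type,
		     &glob->kobj, "%s", zone->name);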
config DRM_UDL
tristate "DisplayLink"
depends on DRM
+ depends on USB_SUPPORT
depends on USB_ARCH_HAS_HCD
- select DRM_USB
+ select USB
select FB_SYS_FILLRECT
select FB_SYS_COPYAREA
select FB_SYS_IMAGEBLIT
goto error;
for (i = 0; i < EDID_LENGTH; i++) {
- ret = usb_control_msg(udl->ddev->usbdev,
- usb_rcvctrlpipe(udl->ddev->usbdev, 0), (0x02),
+ ret = usb_control_msg(udl->udev,
+ usb_rcvctrlpipe(udl->udev, 0), (0x02),
(0x80 | (0x02 << 5)), i << 8, 0xA1, rbuf, 2,
HZ);
if (ret < 1) {
*/
#include <linux/module.h>
-#include <drm/drm_usb.h>
+#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include "udl_drv.h"
-static struct drm_driver driver;
-
-/*
- * There are many DisplayLink-based graphics products, all with unique PIDs.
- * So we match on DisplayLink's VID + Vendor-Defined Interface Class (0xff)
- * We also require a match on SubClass (0x00) and Protocol (0x00),
- * which is compatible with all known USB 2.0 era graphics chips and firmware,
- * but allows DisplayLink to increment those for any future incompatible chips
- */
-static struct usb_device_id id_table[] = {
- {.idVendor = 0x17e9, .bInterfaceClass = 0xff,
- .bInterfaceSubClass = 0x00,
- .bInterfaceProtocol = 0x00,
- .match_flags = USB_DEVICE_ID_MATCH_VENDOR |
- USB_DEVICE_ID_MATCH_INT_CLASS |
- USB_DEVICE_ID_MATCH_INT_SUBCLASS |
- USB_DEVICE_ID_MATCH_INT_PROTOCOL,},
- {},
-};
-MODULE_DEVICE_TABLE(usb, id_table);
-
-MODULE_LICENSE("GPL");
-
-static int udl_usb_probe(struct usb_interface *interface,
- const struct usb_device_id *id)
+static int udl_driver_set_busid(struct drm_device *d, struct drm_master *m)
{
- return drm_get_usb_dev(interface, id, &driver);
-}
-
-static void udl_usb_disconnect(struct usb_interface *interface)
-{
- struct drm_device *dev = usb_get_intfdata(interface);
-
- drm_kms_helper_poll_disable(dev);
- drm_connector_unplug_all(dev);
- udl_fbdev_unplug(dev);
- udl_drop_usb(dev);
- drm_unplug_dev(dev);
+ return 0;
}
static const struct vm_operations_struct udl_gem_vm_ops = {
.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
.load = udl_driver_load,
.unload = udl_driver_unload,
+ .set_busid = udl_driver_set_busid,
/* gem hooks */
.gem_free_object = udl_gem_free_object,
.patchlevel = DRIVER_PATCHLEVEL,
};
+static int udl_usb_probe(struct usb_interface *interface,
+ const struct usb_device_id *id)
+{
+ struct usb_device *udev = interface_to_usbdev(interface);
+ struct drm_device *dev;
+ int r;
+
+ dev = drm_dev_alloc(&driver, &interface->dev);
+ if (!dev)
+ return -ENOMEM;
+
+ r = drm_dev_register(dev, (unsigned long)udev);
+ if (r)
+ goto err_free;
+
+ usb_set_intfdata(interface, dev);
+ DRM_INFO("Initialized udl on minor %d\n", dev->primary->index);
+
+ return 0;
+
+err_free:
+ drm_dev_unref(dev);
+ return r;
+}
+
+static void udl_usb_disconnect(struct usb_interface *interface)
+{
+ struct drm_device *dev = usb_get_intfdata(interface);
+
+ drm_kms_helper_poll_disable(dev);
+ drm_connector_unplug_all(dev);
+ udl_fbdev_unplug(dev);
+ udl_drop_usb(dev);
+ drm_unplug_dev(dev);
+}
+
+/*
+ * There are many DisplayLink-based graphics products, all with unique PIDs.
+ * So we match on DisplayLink's VID + Vendor-Defined Interface Class (0xff).
+ * We also require a match on SubClass (0x00) and Protocol (0x00),
+ * which is compatible with all known USB 2.0 era graphics chips and firmware,
+ * but allows DisplayLink to increment those for any future incompatible chips.
+ */
+static struct usb_device_id id_table[] = {
+ {.idVendor = 0x17e9, .bInterfaceClass = 0xff,
+ .bInterfaceSubClass = 0x00,
+ .bInterfaceProtocol = 0x00,
+ .match_flags = USB_DEVICE_ID_MATCH_VENDOR |
+ USB_DEVICE_ID_MATCH_INT_CLASS |
+ USB_DEVICE_ID_MATCH_INT_SUBCLASS |
+ USB_DEVICE_ID_MATCH_INT_PROTOCOL,},
+ {},
+};
+MODULE_DEVICE_TABLE(usb, id_table);
+
static struct usb_driver udl_driver = {
.name = "udl",
.probe = udl_usb_probe,
static int __init udl_init(void)
{
- return drm_usb_init(&driver, &udl_driver);
+ return usb_register(&udl_driver);
}
static void __exit udl_exit(void)
{
- drm_usb_exit(&driver, &udl_driver);
+ usb_deregister(&udl_driver);
}
module_init(udl_init);
module_exit(udl_exit);
+MODULE_LICENSE("GPL");
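With the drm_usb midlayer gone, a USB DRM driver registers a plain usb_driver and open-codes its DRM device setup, as udl now does above. A minimal sketch of the probe flow, assuming a hypothetical my_drm_driver:

static int my_usb_probe(struct usb_interface *interface,
			const struct usb_device_id *id)
{
	struct usb_device *udev = interface_to_usbdev(interface);
	struct drm_device *dev;
	int ret;

	dev = drm_dev_alloc(&my_drm_driver, &interface->dev);
	if (!dev)
		return -ENOMEM;

	/* the flags argument reaches the driver's .load hook */
	ret = drm_dev_register(dev, (unsigned long)udev);
	if (ret) {
		drm_dev_unref(dev);
		return ret;
	}

	usb_set_intfdata(interface, dev);
	return 0;
}

Unplug works in reverse: the disconnect handler quiesces polling and fbdev before calling drm_unplug_dev(), since the hardware can vanish at any time.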
#define UDL_DRV_H
#include <linux/usb.h>
+#include <drm/drm_gem.h>
#define DRIVER_NAME "udl"
#define DRIVER_DESC "DisplayLink"
struct udl_device {
struct device *dev;
struct drm_device *ddev;
+ struct usb_device *udev;
int sku_pixel_limit;
static int udlfb_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
- struct udl_fbdev *ufbdev = (struct udl_fbdev *)helper;
+ struct udl_fbdev *ufbdev =
+ container_of(helper, struct udl_fbdev, helper);
struct drm_device *dev = ufbdev->helper.dev;
struct fb_info *info;
struct device *device = dev->dev;
}
unode->urb = urb;
- buf = usb_alloc_coherent(udl->ddev->usbdev, MAX_TRANSFER, GFP_KERNEL,
+ buf = usb_alloc_coherent(udl->udev, MAX_TRANSFER, GFP_KERNEL,
&urb->transfer_dma);
if (!buf) {
kfree(unode);
}
/* urb->transfer_buffer_length set to actual before submit */
- usb_fill_bulk_urb(urb, udl->ddev->usbdev, usb_sndbulkpipe(udl->ddev->usbdev, 1),
+ usb_fill_bulk_urb(urb, udl->udev, usb_sndbulkpipe(udl->udev, 1),
buf, size, udl_urb_completion, unode);
urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
int udl_driver_load(struct drm_device *dev, unsigned long flags)
{
+ struct usb_device *udev = (void *)flags;
struct udl_device *udl;
int ret = -ENOMEM;
if (!udl)
return -ENOMEM;
+ udl->udev = udev;
udl->ddev = dev;
dev->dev_private = udl;
- if (!udl_parse_vendor_descriptor(dev, dev->usbdev)) {
+ if (!udl_parse_vendor_descriptor(dev, udl->udev)) {
ret = -ENODEV;
DRM_ERROR("firmware not recognized. Assume incompatible device\n");
goto err;
if (dev_priv->ring.virtual_start) {
via_cmdbuf_reset(dev_priv);
- drm_core_ioremapfree(&dev_priv->ring.map, dev);
+ drm_legacy_ioremapfree(&dev_priv->ring.map, dev);
dev_priv->ring.virtual_start = NULL;
}
dev_priv->ring.map.flags = 0;
dev_priv->ring.map.mtrr = 0;
- drm_core_ioremap(&dev_priv->ring.map, dev);
+ drm_legacy_ioremap(&dev_priv->ring.map, dev);
if (dev_priv->ring.map.handle == NULL) {
via_dma_cleanup(dev);
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
- .mmap = drm_mmap,
+ .mmap = drm_legacy_mmap,
.poll = drm_poll,
#ifdef CONFIG_COMPAT
.compat_ioctl = drm_compat_ioctl,
.open = via_driver_open,
.preclose = via_reclaim_buffers_locked,
.postclose = via_driver_postclose,
+ .set_busid = drm_pci_set_busid,
.context_dtor = via_final_context,
.get_vblank_counter = via_get_vblank_counter,
.enable_vblank = via_enable_vblank,
#define _VIA_DRV_H_
#include <drm/drm_mm.h>
+#include <drm/drm_legacy.h>
+
#define DRIVER_AUTHOR "Various"
#define DRIVER_NAME "via"
DRM_DEBUG("\n");
- dev_priv->sarea = drm_getsarea(dev);
+ dev_priv->sarea = drm_legacy_getsarea(dev);
if (!dev_priv->sarea) {
DRM_ERROR("could not find sarea!\n");
dev->dev_private = (void *)dev_priv;
return -EINVAL;
}
- dev_priv->fb = drm_core_findmap(dev, init->fb_offset);
+ dev_priv->fb = drm_legacy_findmap(dev, init->fb_offset);
if (!dev_priv->fb) {
DRM_ERROR("could not find framebuffer!\n");
dev->dev_private = (void *)dev_priv;
via_do_cleanup_map(dev);
return -EINVAL;
}
- dev_priv->mmio = drm_core_findmap(dev, init->mmio_offset);
+ dev_priv->mmio = drm_legacy_findmap(dev, init->mmio_offset);
if (!dev_priv->mmio) {
DRM_ERROR("could not find mmio region!\n");
dev->dev_private = (void *)dev_priv;
if (!(file->minor->master && file->master->lock.hw_lock))
return;
- drm_idlelock_take(&file->master->lock);
+ drm_legacy_idlelock_take(&file->master->lock);
mutex_lock(&dev->struct_mutex);
if (list_empty(&file_priv->obj_list)) {
mutex_unlock(&dev->struct_mutex);
- drm_idlelock_release(&file->master->lock);
+ drm_legacy_idlelock_release(&file->master->lock);
return;
}
}
mutex_unlock(&dev->struct_mutex);
- drm_idlelock_release(&file->master->lock);
+ drm_legacy_idlelock_release(&file->master->lock);
return;
}
#include "via_3d_reg.h"
#include <drm/drmP.h>
#include <drm/via_drm.h>
+#include <drm/drm_legacy.h>
#include "via_verifier.h"
#include "via_drv.h"
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_page_alloc.h>
-static uint32_t vram_placement_flags = TTM_PL_FLAG_VRAM |
- TTM_PL_FLAG_CACHED;
-
-static uint32_t vram_ne_placement_flags = TTM_PL_FLAG_VRAM |
- TTM_PL_FLAG_CACHED |
- TTM_PL_FLAG_NO_EVICT;
+static struct ttm_place vram_placement_flags = {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED
+};
-static uint32_t sys_placement_flags = TTM_PL_FLAG_SYSTEM |
- TTM_PL_FLAG_CACHED;
+static struct ttm_place vram_ne_placement_flags = {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED | TTM_PL_FLAG_NO_EVICT
+};
-static uint32_t sys_ne_placement_flags = TTM_PL_FLAG_SYSTEM |
- TTM_PL_FLAG_CACHED |
- TTM_PL_FLAG_NO_EVICT;
+static struct ttm_place sys_placement_flags = {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED
+};
-static uint32_t gmr_placement_flags = VMW_PL_FLAG_GMR |
- TTM_PL_FLAG_CACHED;
+static struct ttm_place sys_ne_placement_flags = {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED | TTM_PL_FLAG_NO_EVICT
+};
-static uint32_t gmr_ne_placement_flags = VMW_PL_FLAG_GMR |
- TTM_PL_FLAG_CACHED |
- TTM_PL_FLAG_NO_EVICT;
+static struct ttm_place gmr_placement_flags = {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED
+};
-static uint32_t mob_placement_flags = VMW_PL_FLAG_MOB |
- TTM_PL_FLAG_CACHED;
+static struct ttm_place gmr_ne_placement_flags = {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED | TTM_PL_FLAG_NO_EVICT
+};
-struct ttm_placement vmw_vram_placement = {
+static struct ttm_place mob_placement_flags = {
.fpfn = 0,
.lpfn = 0,
+ .flags = VMW_PL_FLAG_MOB | TTM_PL_FLAG_CACHED
+};
+
+struct ttm_placement vmw_vram_placement = {
.num_placement = 1,
.placement = &vram_placement_flags,
.num_busy_placement = 1,
.busy_placement = &vram_placement_flags
};
-static uint32_t vram_gmr_placement_flags[] = {
- TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED,
- VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED
+static struct ttm_place vram_gmr_placement_flags[] = {
+ {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED
+ }, {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED
+ }
};
-static uint32_t gmr_vram_placement_flags[] = {
- VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED,
- TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED
+static struct ttm_place gmr_vram_placement_flags[] = {
+ {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED
+ }, {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED
+ }
};
struct ttm_placement vmw_vram_gmr_placement = {
- .fpfn = 0,
- .lpfn = 0,
.num_placement = 2,
.placement = vram_gmr_placement_flags,
.num_busy_placement = 1,
.busy_placement = &gmr_placement_flags
};
-static uint32_t vram_gmr_ne_placement_flags[] = {
- TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED | TTM_PL_FLAG_NO_EVICT,
- VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED | TTM_PL_FLAG_NO_EVICT
+static struct ttm_place vram_gmr_ne_placement_flags[] = {
+ {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED |
+ TTM_PL_FLAG_NO_EVICT
+ }, {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED |
+ TTM_PL_FLAG_NO_EVICT
+ }
};
struct ttm_placement vmw_vram_gmr_ne_placement = {
- .fpfn = 0,
- .lpfn = 0,
.num_placement = 2,
.placement = vram_gmr_ne_placement_flags,
.num_busy_placement = 1,
};
struct ttm_placement vmw_vram_sys_placement = {
- .fpfn = 0,
- .lpfn = 0,
.num_placement = 1,
.placement = &vram_placement_flags,
.num_busy_placement = 1,
};
struct ttm_placement vmw_vram_ne_placement = {
- .fpfn = 0,
- .lpfn = 0,
.num_placement = 1,
.placement = &vram_ne_placement_flags,
.num_busy_placement = 1,
};
struct ttm_placement vmw_sys_placement = {
- .fpfn = 0,
- .lpfn = 0,
.num_placement = 1,
.placement = &sys_placement_flags,
.num_busy_placement = 1,
};
struct ttm_placement vmw_sys_ne_placement = {
- .fpfn = 0,
- .lpfn = 0,
.num_placement = 1,
.placement = &sys_ne_placement_flags,
.num_busy_placement = 1,
.busy_placement = &sys_ne_placement_flags
};
-static uint32_t evictable_placement_flags[] = {
- TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED,
- TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED,
- VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED,
- VMW_PL_FLAG_MOB | TTM_PL_FLAG_CACHED
+static struct ttm_place evictable_placement_flags[] = {
+ {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED
+ }, {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED
+ }, {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED
+ }, {
+ .fpfn = 0,
+ .lpfn = 0,
+ .flags = VMW_PL_FLAG_MOB | TTM_PL_FLAG_CACHED
+ }
};
struct ttm_placement vmw_evictable_placement = {
- .fpfn = 0,
- .lpfn = 0,
.num_placement = 4,
.placement = evictable_placement_flags,
.num_busy_placement = 1,
};
struct ttm_placement vmw_srf_placement = {
- .fpfn = 0,
- .lpfn = 0,
.num_placement = 1,
.num_busy_placement = 2,
.placement = &gmr_placement_flags,
};
struct ttm_placement vmw_mob_placement = {
- .fpfn = 0,
- .lpfn = 0,
.num_placement = 1,
.num_busy_placement = 1,
.placement = &mob_placement_flags,
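The placement conversions above all follow one pattern: the per-set fpfn/lpfn range moves off struct ttm_placement into each struct ttm_place entry, and the placement struct keeps only the entry arrays and counts. The shape, sketched with hypothetical names:

static struct ttm_place example_place = {
	.fpfn = 0,	/* no lower page-frame bound */
	.lpfn = 0,	/* no upper page-frame bound */
	.flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED
};

static struct ttm_placement example_placement = {
	.num_placement = 1,
	.placement = &example_place,
	.num_busy_placement = 1,
	.busy_placement = &example_place
};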
return 0;
}
-/**
- * FIXME: We're using the old vmware polling method to sync.
- * Do this with fences instead.
- */
-
-static void *vmw_sync_obj_ref(void *sync_obj)
-{
-
- return (void *)
- vmw_fence_obj_reference((struct vmw_fence_obj *) sync_obj);
-}
-
-static void vmw_sync_obj_unref(void **sync_obj)
-{
- vmw_fence_obj_unreference((struct vmw_fence_obj **) sync_obj);
-}
-
-static int vmw_sync_obj_flush(void *sync_obj)
-{
- vmw_fence_obj_flush((struct vmw_fence_obj *) sync_obj);
- return 0;
-}
-
-static bool vmw_sync_obj_signaled(void *sync_obj)
-{
- return vmw_fence_obj_signaled((struct vmw_fence_obj *) sync_obj,
- DRM_VMW_FENCE_FLAG_EXEC);
-
-}
-
-static int vmw_sync_obj_wait(void *sync_obj, bool lazy, bool interruptible)
-{
- return vmw_fence_obj_wait((struct vmw_fence_obj *) sync_obj,
- DRM_VMW_FENCE_FLAG_EXEC,
- lazy, interruptible,
- VMW_FENCE_WAIT_TIMEOUT);
-}
-
/**
* vmw_move_notify - TTM move_notify_callback
*
*/
static void vmw_swap_notify(struct ttm_buffer_object *bo)
{
- struct ttm_bo_device *bdev = bo->bdev;
-
- spin_lock(&bdev->fence_lock);
ttm_bo_wait(bo, false, false, false);
- spin_unlock(&bdev->fence_lock);
}
.evict_flags = vmw_evict_flags,
.move = NULL,
.verify_access = vmw_verify_access,
- .sync_obj_signaled = vmw_sync_obj_signaled,
- .sync_obj_wait = vmw_sync_obj_wait,
- .sync_obj_flush = vmw_sync_obj_flush,
- .sync_obj_unref = vmw_sync_obj_unref,
- .sync_obj_ref = vmw_sync_obj_ref,
.move_notify = vmw_move_notify,
.swap_notify = vmw_swap_notify,
.fault_reserve_notify = &vmw_ttm_fault_reserve_notify,
{
struct ttm_buffer_object *bo = &buf->base;
struct ttm_placement placement;
+ struct ttm_place place;
int ret = 0;
if (pin)
- placement = vmw_vram_ne_placement;
+ place = vmw_vram_ne_placement.placement[0];
else
- placement = vmw_vram_placement;
- placement.lpfn = bo->num_pages;
+ place = vmw_vram_placement.placement[0];
+ place.lpfn = bo->num_pages;
+
+ placement.num_placement = 1;
+ placement.placement = &place;
+ placement.num_busy_placement = 1;
+ placement.busy_placement = &place;
ret = ttm_write_lock(&dev_priv->reservation_sem, interruptible);
if (unlikely(ret != 0))
*/
void vmw_bo_pin(struct ttm_buffer_object *bo, bool pin)
{
- uint32_t pl_flags;
+ struct ttm_place pl;
struct ttm_placement placement;
uint32_t old_mem_type = bo->mem.mem_type;
int ret;
lockdep_assert_held(&bo->resv->lock.base);
- pl_flags = TTM_PL_FLAG_VRAM | VMW_PL_FLAG_GMR | VMW_PL_FLAG_MOB
+ pl.fpfn = 0;
+ pl.lpfn = 0;
+ pl.flags = TTM_PL_FLAG_VRAM | VMW_PL_FLAG_GMR | VMW_PL_FLAG_MOB
| TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED;
if (pin)
- pl_flags |= TTM_PL_FLAG_NO_EVICT;
+ pl.flags |= TTM_PL_FLAG_NO_EVICT;
memset(&placement, 0, sizeof(placement));
placement.num_placement = 1;
- placement.placement = &pl_flags;
+ placement.placement = &pl;
ret = ttm_bo_validate(bo, &placement, false, true);
.open = vmw_driver_open,
.preclose = vmw_preclose,
.postclose = vmw_postclose,
+ .set_busid = drm_pci_set_busid,
.dumb_create = vmw_dumb_create,
.dumb_map_offset = vmw_dumb_map_offset,
uint32_t *cmd_bounce;
uint32_t cmd_bounce_size;
struct list_head resource_list;
- uint32_t fence_flags;
struct ttm_buffer_object *cur_query_bo;
struct list_head res_relocations;
uint32_t *buf_start;
extern void vmw_fifo_commit(struct vmw_private *dev_priv, uint32_t bytes);
extern int vmw_fifo_send_fence(struct vmw_private *dev_priv,
uint32_t *seqno);
+extern void vmw_fifo_ping_host_locked(struct vmw_private *dev_priv,
+ uint32_t reason);
extern void vmw_fifo_ping_host(struct vmw_private *dev_priv, uint32_t reason);
extern bool vmw_fifo_have_3d(struct vmw_private *dev_priv);
extern bool vmw_fifo_have_pitchlock(struct vmw_private *dev_priv);
++sw_context->cur_val_buf;
val_buf = &vval_buf->base;
val_buf->bo = ttm_bo_reference(bo);
- val_buf->reserved = false;
+ val_buf->shared = false;
list_add_tail(&val_buf->head, &sw_context->validate_nodes);
vval_buf->validate_as_mob = validate_as_mob;
}
- sw_context->fence_flags |= DRM_VMW_FENCE_FLAG_EXEC;
-
if (p_val_node)
*p_val_node = val_node;
res,
id_loc - sw_context->buf_start);
if (unlikely(ret != 0))
- goto out_err;
+ return ret;
ret = vmw_resource_val_add(sw_context, res, &node);
if (unlikely(ret != 0))
- goto out_err;
+ return ret;
if (res_type == vmw_res_context && dev_priv->has_mob &&
node->first_usage) {
ret = vmw_resource_context_res_add(dev_priv, sw_context, res);
if (unlikely(ret != 0))
- goto out_err;
+ return ret;
node->staged_bindings =
kzalloc(sizeof(*node->staged_bindings), GFP_KERNEL);
if (node->staged_bindings == NULL) {
DRM_ERROR("Failed to allocate context binding "
"information.\n");
- goto out_err;
+ return -ENOMEM;
}
INIT_LIST_HEAD(&node->staged_bindings->list);
}
if (p_val)
*p_val = node;
-out_err:
- return ret;
+ return 0;
}
if (p_handle != NULL)
ret = vmw_user_fence_create(file_priv, dev_priv->fman,
- sequence,
- DRM_VMW_FENCE_FLAG_EXEC,
- p_fence, p_handle);
+ sequence, p_fence, p_handle);
else
- ret = vmw_fence_create(dev_priv->fman, sequence,
- DRM_VMW_FENCE_FLAG_EXEC,
- p_fence);
+ ret = vmw_fence_create(dev_priv->fman, sequence, p_fence);
if (unlikely(ret != 0 && !synced)) {
(void) vmw_fallback_wait(dev_priv, false, false,
BUG_ON(fence == NULL);
fence_rep.handle = fence_handle;
- fence_rep.seqno = fence->seqno;
+ fence_rep.seqno = fence->base.seqno;
vmw_update_seqno(dev_priv, &dev_priv->fifo);
fence_rep.passed_seqno = dev_priv->last_read_seqno;
}
ttm_ref_object_base_unref(vmw_fp->tfile,
fence_handle, TTM_REF_USAGE);
DRM_ERROR("Fence copy error. Syncing.\n");
- (void) vmw_fence_obj_wait(fence, fence->signal_mask,
- false, false,
+ (void) vmw_fence_obj_wait(fence, false, false,
VMW_FENCE_WAIT_TIMEOUT);
}
}
sw_context->fp = vmw_fpriv(file_priv);
sw_context->cur_reloc = 0;
sw_context->cur_val_buf = 0;
- sw_context->fence_flags = 0;
INIT_LIST_HEAD(&sw_context->resource_list);
sw_context->cur_query_bo = dev_priv->pinned_bo;
sw_context->last_query_ctx = NULL;
if (unlikely(ret != 0))
goto out_err_nores;
- ret = ttm_eu_reserve_buffers(&ticket, &sw_context->validate_nodes);
+ ret = ttm_eu_reserve_buffers(&ticket, &sw_context->validate_nodes, true);
if (unlikely(ret != 0))
goto out_err;
INIT_LIST_HEAD(&validate_list);
pinned_val.bo = ttm_bo_reference(dev_priv->pinned_bo);
+ pinned_val.shared = false;
list_add_tail(&pinned_val.head, &validate_list);
query_val.bo = ttm_bo_reference(dev_priv->dummy_query_bo);
+ query_val.shared = false;
list_add_tail(&query_val.head, &validate_list);
- do {
- ret = ttm_eu_reserve_buffers(&ticket, &validate_list);
- } while (ret == -ERESTARTSYS);
-
+ ret = ttm_eu_reserve_buffers(&ticket, &validate_list, false);
if (unlikely(ret != 0)) {
vmw_execbuf_unpin_panic(dev_priv);
goto out_no_reserve;
size_t size, struct vmw_dma_buffer **out)
{
struct vmw_dma_buffer *vmw_bo;
- struct ttm_placement ne_placement = vmw_vram_ne_placement;
+ struct ttm_place ne_place = vmw_vram_ne_placement.placement[0];
+ struct ttm_placement ne_placement;
int ret;
- ne_placement.lpfn = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ ne_placement.num_placement = 1;
+ ne_placement.placement = &ne_place;
+ ne_placement.num_busy_placement = 1;
+ ne_placement.busy_placement = &ne_place;
+
+ ne_place.lpfn = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
(void) ttm_write_lock(&vmw_priv->reservation_sem, false);
struct vmw_private *dev_priv;
spinlock_t lock;
struct list_head fence_list;
- struct work_struct work;
+ struct work_struct work, ping_work;
u32 user_fence_size;
u32 fence_size;
u32 event_fence_action_size;
bool goal_irq_on; /* Protected by @goal_irq_mutex */
bool seqno_valid; /* Protected by @lock, and may not be set to true
without the @goal_irq_mutex held. */
+ unsigned ctx;
};
struct vmw_user_fence {
uint32_t *tv_usec;
};
+static struct vmw_fence_manager *
+fman_from_fence(struct vmw_fence_obj *fence)
+{
+ return container_of(fence->base.lock, struct vmw_fence_manager, lock);
+}
+
/**
* Note on fencing subsystem usage of irqs:
* Typically the vmw_fences_update function is called
* objects with actions attached to them.
*/
-static void vmw_fence_obj_destroy_locked(struct kref *kref)
+static void vmw_fence_obj_destroy(struct fence *f)
{
struct vmw_fence_obj *fence =
- container_of(kref, struct vmw_fence_obj, kref);
+ container_of(f, struct vmw_fence_obj, base);
- struct vmw_fence_manager *fman = fence->fman;
- unsigned int num_fences;
+ struct vmw_fence_manager *fman = fman_from_fence(fence);
+ unsigned long irq_flags;
+ spin_lock_irqsave(&fman->lock, irq_flags);
list_del_init(&fence->head);
- num_fences = --fman->num_fence_objects;
- spin_unlock_irq(&fman->lock);
- if (fence->destroy)
- fence->destroy(fence);
- else
- kfree(fence);
+ --fman->num_fence_objects;
+ spin_unlock_irqrestore(&fman->lock, irq_flags);
+ fence->destroy(fence);
+}
- spin_lock_irq(&fman->lock);
+static const char *vmw_fence_get_driver_name(struct fence *f)
+{
+ return "vmwgfx";
+}
+
+static const char *vmw_fence_get_timeline_name(struct fence *f)
+{
+ return "svga";
+}
+
+static void vmw_fence_ping_func(struct work_struct *work)
+{
+ struct vmw_fence_manager *fman =
+ container_of(work, struct vmw_fence_manager, ping_work);
+
+ vmw_fifo_ping_host(fman->dev_priv, SVGA_SYNC_GENERIC);
+}
+
+static bool vmw_fence_enable_signaling(struct fence *f)
+{
+ struct vmw_fence_obj *fence =
+ container_of(f, struct vmw_fence_obj, base);
+
+ struct vmw_fence_manager *fman = fman_from_fence(fence);
+ struct vmw_private *dev_priv = fman->dev_priv;
+
+ __le32 __iomem *fifo_mem = dev_priv->mmio_virt;
+ u32 seqno = ioread32(fifo_mem + SVGA_FIFO_FENCE);
+ if (seqno - fence->base.seqno < VMW_FENCE_WRAP)
+ return false;
+
+ if (mutex_trylock(&dev_priv->hw_mutex)) {
+ vmw_fifo_ping_host_locked(dev_priv, SVGA_SYNC_GENERIC);
+ mutex_unlock(&dev_priv->hw_mutex);
+ } else {
+ schedule_work(&fman->ping_work);
+ }
+
+ return true;
+}
+
+struct vmwgfx_wait_cb {
+ struct fence_cb base;
+ struct task_struct *task;
+};
+
+static void
+vmwgfx_wait_cb(struct fence *fence, struct fence_cb *cb)
+{
+ struct vmwgfx_wait_cb *wait =
+ container_of(cb, struct vmwgfx_wait_cb, base);
+
+ wake_up_process(wait->task);
+}
+
+static void __vmw_fences_update(struct vmw_fence_manager *fman);
+
+static long vmw_fence_wait(struct fence *f, bool intr, signed long timeout)
+{
+ struct vmw_fence_obj *fence =
+ container_of(f, struct vmw_fence_obj, base);
+
+ struct vmw_fence_manager *fman = fman_from_fence(fence);
+ struct vmw_private *dev_priv = fman->dev_priv;
+ struct vmwgfx_wait_cb cb;
+ long ret = timeout;
+ unsigned long irq_flags;
+
+ if (likely(vmw_fence_obj_signaled(fence)))
+ return timeout;
+
+ vmw_fifo_ping_host(dev_priv, SVGA_SYNC_GENERIC);
+ vmw_seqno_waiter_add(dev_priv);
+
+ spin_lock_irqsave(f->lock, irq_flags);
+
+ if (intr && signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ goto out;
+ }
+
+ cb.base.func = vmwgfx_wait_cb;
+ cb.task = current;
+ list_add(&cb.base.node, &f->cb_list);
+
+ while (ret > 0) {
+ __vmw_fences_update(fman);
+ if (test_bit(FENCE_FLAG_SIGNALED_BIT, &f->flags))
+ break;
+
+ if (intr)
+ __set_current_state(TASK_INTERRUPTIBLE);
+ else
+ __set_current_state(TASK_UNINTERRUPTIBLE);
+ spin_unlock_irqrestore(f->lock, irq_flags);
+
+ ret = schedule_timeout(ret);
+
+ spin_lock_irqsave(f->lock, irq_flags);
+ if (ret > 0 && intr && signal_pending(current))
+ ret = -ERESTARTSYS;
+ }
+
+ if (!list_empty(&cb.base.node))
+ list_del(&cb.base.node);
+ __set_current_state(TASK_RUNNING);
+
+out:
+ spin_unlock_irqrestore(f->lock, irq_flags);
+
+ vmw_seqno_waiter_remove(dev_priv);
+
+ return ret;
}
+static struct fence_ops vmw_fence_ops = {
+ .get_driver_name = vmw_fence_get_driver_name,
+ .get_timeline_name = vmw_fence_get_timeline_name,
+ .enable_signaling = vmw_fence_enable_signaling,
+ .wait = vmw_fence_wait,
+ .release = vmw_fence_obj_destroy,
+};
+
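Everything below builds on this ops table: a vmwgfx fence now embeds a struct fence tied to the manager's spinlock and a context from fence_context_alloc(), and signaling and waiting go through the generic helpers. A condensed lifecycle sketch (placeholder locals, error handling omitted):

	/* creation: bind the fence to the manager's lock and context */
	fence_init(&fence->base, &vmw_fence_ops, &fman->lock,
		   fman->ctx, seqno);

	/* device passed seqno: signal with fman->lock held */
	fence_signal_locked(&fence->base);

	/* consumers hold plain struct fence references */
	fence_get(&fence->base);
	if (!fence_is_signaled(&fence->base))
		ret = fence_wait_timeout(&fence->base, true, timeout);
	fence_put(&fence->base);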
/**
* Execute signal actions on fences recently signaled.
INIT_LIST_HEAD(&fman->fence_list);
INIT_LIST_HEAD(&fman->cleanup_list);
INIT_WORK(&fman->work, &vmw_fence_work_func);
+ INIT_WORK(&fman->ping_work, &vmw_fence_ping_func);
fman->fifo_down = true;
fman->user_fence_size = ttm_round_pot(sizeof(struct vmw_user_fence));
fman->fence_size = ttm_round_pot(sizeof(struct vmw_fence_obj));
fman->event_fence_action_size =
ttm_round_pot(sizeof(struct vmw_event_fence_action));
mutex_init(&fman->goal_irq_mutex);
+ fman->ctx = fence_context_alloc(1);
return fman;
}
bool lists_empty;
(void) cancel_work_sync(&fman->work);
+ (void) cancel_work_sync(&fman->ping_work);
spin_lock_irqsave(&fman->lock, irq_flags);
lists_empty = list_empty(&fman->fence_list) &&
}
static int vmw_fence_obj_init(struct vmw_fence_manager *fman,
- struct vmw_fence_obj *fence,
- u32 seqno,
- uint32_t mask,
+ struct vmw_fence_obj *fence, u32 seqno,
void (*destroy) (struct vmw_fence_obj *fence))
{
unsigned long irq_flags;
- unsigned int num_fences;
int ret = 0;
- fence->seqno = seqno;
+ fence_init(&fence->base, &vmw_fence_ops, &fman->lock,
+ fman->ctx, seqno);
INIT_LIST_HEAD(&fence->seq_passed_actions);
- fence->fman = fman;
- fence->signaled = 0;
- fence->signal_mask = mask;
- kref_init(&fence->kref);
fence->destroy = destroy;
- init_waitqueue_head(&fence->queue);
spin_lock_irqsave(&fman->lock, irq_flags);
if (unlikely(fman->fifo_down)) {
goto out_unlock;
}
list_add_tail(&fence->head, &fman->fence_list);
- num_fences = ++fman->num_fence_objects;
+ ++fman->num_fence_objects;
out_unlock:
spin_unlock_irqrestore(&fman->lock, irq_flags);
}
-struct vmw_fence_obj *vmw_fence_obj_reference(struct vmw_fence_obj *fence)
-{
- if (unlikely(fence == NULL))
- return NULL;
-
- kref_get(&fence->kref);
- return fence;
-}
-
-/**
- * vmw_fence_obj_unreference
- *
- * Note that this function may not be entered with disabled irqs since
- * it may re-enable them in the destroy function.
- *
- */
-void vmw_fence_obj_unreference(struct vmw_fence_obj **fence_p)
-{
- struct vmw_fence_obj *fence = *fence_p;
- struct vmw_fence_manager *fman;
-
- if (unlikely(fence == NULL))
- return;
-
- fman = fence->fman;
- *fence_p = NULL;
- spin_lock_irq(&fman->lock);
- BUG_ON(atomic_read(&fence->kref.refcount) == 0);
- kref_put(&fence->kref, vmw_fence_obj_destroy_locked);
- spin_unlock_irq(&fman->lock);
-}
-
static void vmw_fences_perform_actions(struct vmw_fence_manager *fman,
struct list_head *list)
{
list_for_each_entry(fence, &fman->fence_list, head) {
if (!list_empty(&fence->seq_passed_actions)) {
fman->seqno_valid = true;
- iowrite32(fence->seqno,
+ iowrite32(fence->base.seqno,
fifo_mem + SVGA_FIFO_FENCE_GOAL);
break;
}
*/
static bool vmw_fence_goal_check_locked(struct vmw_fence_obj *fence)
{
+ struct vmw_fence_manager *fman = fman_from_fence(fence);
u32 goal_seqno;
__le32 __iomem *fifo_mem;
- if (fence->signaled & DRM_VMW_FENCE_FLAG_EXEC)
+ if (fence_is_signaled_locked(&fence->base))
return false;
- fifo_mem = fence->fman->dev_priv->mmio_virt;
+ fifo_mem = fman->dev_priv->mmio_virt;
goal_seqno = ioread32(fifo_mem + SVGA_FIFO_FENCE_GOAL);
- if (likely(fence->fman->seqno_valid &&
- goal_seqno - fence->seqno < VMW_FENCE_WRAP))
+ if (likely(fman->seqno_valid &&
+ goal_seqno - fence->base.seqno < VMW_FENCE_WRAP))
return false;
- iowrite32(fence->seqno, fifo_mem + SVGA_FIFO_FENCE_GOAL);
- fence->fman->seqno_valid = true;
+ iowrite32(fence->base.seqno, fifo_mem + SVGA_FIFO_FENCE_GOAL);
+ fman->seqno_valid = true;
return true;
}
-void vmw_fences_update(struct vmw_fence_manager *fman)
+static void __vmw_fences_update(struct vmw_fence_manager *fman)
{
- unsigned long flags;
struct vmw_fence_obj *fence, *next_fence;
struct list_head action_list;
bool needs_rerun;
seqno = ioread32(fifo_mem + SVGA_FIFO_FENCE);
rerun:
- spin_lock_irqsave(&fman->lock, flags);
list_for_each_entry_safe(fence, next_fence, &fman->fence_list, head) {
- if (seqno - fence->seqno < VMW_FENCE_WRAP) {
+ if (seqno - fence->base.seqno < VMW_FENCE_WRAP) {
list_del_init(&fence->head);
- fence->signaled |= DRM_VMW_FENCE_FLAG_EXEC;
+ fence_signal_locked(&fence->base);
INIT_LIST_HEAD(&action_list);
list_splice_init(&fence->seq_passed_actions,
&action_list);
vmw_fences_perform_actions(fman, &action_list);
- wake_up_all(&fence->queue);
} else
break;
}
- needs_rerun = vmw_fence_goal_new_locked(fman, seqno);
-
- if (!list_empty(&fman->cleanup_list))
- (void) schedule_work(&fman->work);
- spin_unlock_irqrestore(&fman->lock, flags);
-
/*
* Rerun if the fence goal seqno was updated, and the
* hardware might have raced with that update, so that
* we missed a fence_goal irq.
*/
+ needs_rerun = vmw_fence_goal_new_locked(fman, seqno);
if (unlikely(needs_rerun)) {
new_seqno = ioread32(fifo_mem + SVGA_FIFO_FENCE);
if (new_seqno != seqno) {
goto rerun;
}
}
+
+ if (!list_empty(&fman->cleanup_list))
+ (void) schedule_work(&fman->work);
}
-bool vmw_fence_obj_signaled(struct vmw_fence_obj *fence,
- uint32_t flags)
+void vmw_fences_update(struct vmw_fence_manager *fman)
{
- struct vmw_fence_manager *fman = fence->fman;
unsigned long irq_flags;
- uint32_t signaled;
spin_lock_irqsave(&fman->lock, irq_flags);
- signaled = fence->signaled;
+ __vmw_fences_update(fman);
spin_unlock_irqrestore(&fman->lock, irq_flags);
+}
- flags &= fence->signal_mask;
- if ((signaled & flags) == flags)
- return 1;
+bool vmw_fence_obj_signaled(struct vmw_fence_obj *fence)
+{
+ struct vmw_fence_manager *fman = fman_from_fence(fence);
- if ((signaled & DRM_VMW_FENCE_FLAG_EXEC) == 0)
- vmw_fences_update(fman);
+ if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags))
+ return 1;
- spin_lock_irqsave(&fman->lock, irq_flags);
- signaled = fence->signaled;
- spin_unlock_irqrestore(&fman->lock, irq_flags);
+ vmw_fences_update(fman);
- return ((signaled & flags) == flags);
+ return fence_is_signaled(&fence->base);
}
-int vmw_fence_obj_wait(struct vmw_fence_obj *fence,
- uint32_t flags, bool lazy,
+int vmw_fence_obj_wait(struct vmw_fence_obj *fence, bool lazy,
bool interruptible, unsigned long timeout)
{
- struct vmw_private *dev_priv = fence->fman->dev_priv;
- long ret;
+ long ret = fence_wait_timeout(&fence->base, interruptible, timeout);
- if (likely(vmw_fence_obj_signaled(fence, flags)))
+ if (likely(ret > 0))
return 0;
-
- vmw_fifo_ping_host(dev_priv, SVGA_SYNC_GENERIC);
- vmw_seqno_waiter_add(dev_priv);
-
- if (interruptible)
- ret = wait_event_interruptible_timeout
- (fence->queue,
- vmw_fence_obj_signaled(fence, flags),
- timeout);
+ else if (ret == 0)
+ return -EBUSY;
else
- ret = wait_event_timeout
- (fence->queue,
- vmw_fence_obj_signaled(fence, flags),
- timeout);
-
- vmw_seqno_waiter_remove(dev_priv);
-
- if (unlikely(ret == 0))
- ret = -EBUSY;
- else if (likely(ret > 0))
- ret = 0;
-
- return ret;
+ return ret;
}
void vmw_fence_obj_flush(struct vmw_fence_obj *fence)
{
- struct vmw_private *dev_priv = fence->fman->dev_priv;
+ struct vmw_private *dev_priv = fman_from_fence(fence)->dev_priv;
vmw_fifo_ping_host(dev_priv, SVGA_SYNC_GENERIC);
}
static void vmw_fence_destroy(struct vmw_fence_obj *fence)
{
- struct vmw_fence_manager *fman = fence->fman;
+ struct vmw_fence_manager *fman = fman_from_fence(fence);
+
+ fence_free(&fence->base);
- kfree(fence);
/*
* Free kernel space accounting.
*/
int vmw_fence_create(struct vmw_fence_manager *fman,
uint32_t seqno,
- uint32_t mask,
struct vmw_fence_obj **p_fence)
{
struct ttm_mem_global *mem_glob = vmw_mem_glob(fman->dev_priv);
goto out_no_object;
}
- ret = vmw_fence_obj_init(fman, fence, seqno, mask,
+ ret = vmw_fence_obj_init(fman, fence, seqno,
vmw_fence_destroy);
if (unlikely(ret != 0))
goto out_err_init;
{
struct vmw_user_fence *ufence =
container_of(fence, struct vmw_user_fence, fence);
- struct vmw_fence_manager *fman = fence->fman;
+ struct vmw_fence_manager *fman = fman_from_fence(fence);
ttm_base_object_kfree(ufence, base);
/*
int vmw_user_fence_create(struct drm_file *file_priv,
struct vmw_fence_manager *fman,
uint32_t seqno,
- uint32_t mask,
struct vmw_fence_obj **p_fence,
uint32_t *p_handle)
{
}
ret = vmw_fence_obj_init(fman, &ufence->fence, seqno,
- mask, vmw_user_fence_destroy);
+ vmw_user_fence_destroy);
if (unlikely(ret != 0)) {
kfree(ufence);
goto out_no_object;
void vmw_fence_fifo_down(struct vmw_fence_manager *fman)
{
- unsigned long irq_flags;
struct list_head action_list;
int ret;
* restart when we've released the fman->lock.
*/
- spin_lock_irqsave(&fman->lock, irq_flags);
+ spin_lock_irq(&fman->lock);
fman->fifo_down = true;
while (!list_empty(&fman->fence_list)) {
struct vmw_fence_obj *fence =
list_entry(fman->fence_list.prev, struct vmw_fence_obj,
head);
- kref_get(&fence->kref);
+ fence_get(&fence->base);
spin_unlock_irq(&fman->lock);
- ret = vmw_fence_obj_wait(fence, fence->signal_mask,
- false, false,
+ ret = vmw_fence_obj_wait(fence, false, false,
VMW_FENCE_WAIT_TIMEOUT);
if (unlikely(ret != 0)) {
list_del_init(&fence->head);
- fence->signaled |= DRM_VMW_FENCE_FLAG_EXEC;
+ fence_signal(&fence->base);
INIT_LIST_HEAD(&action_list);
list_splice_init(&fence->seq_passed_actions,
&action_list);
vmw_fences_perform_actions(fman, &action_list);
- wake_up_all(&fence->queue);
}
- spin_lock_irq(&fman->lock);
-
BUG_ON(!list_empty(&fence->head));
- kref_put(&fence->kref, vmw_fence_obj_destroy_locked);
+ fence_put(&fence->base);
+ spin_lock_irq(&fman->lock);
}
- spin_unlock_irqrestore(&fman->lock, irq_flags);
+ spin_unlock_irq(&fman->lock);
}
void vmw_fence_fifo_up(struct vmw_fence_manager *fman)
timeout = jiffies;
if (time_after_eq(timeout, (unsigned long)arg->kernel_cookie)) {
- ret = ((vmw_fence_obj_signaled(fence, arg->flags)) ?
+ ret = ((vmw_fence_obj_signaled(fence)) ?
0 : -EBUSY);
goto out;
}
timeout = (unsigned long)arg->kernel_cookie - timeout;
- ret = vmw_fence_obj_wait(fence, arg->flags, arg->lazy, true, timeout);
+ ret = vmw_fence_obj_wait(fence, arg->lazy, true, timeout);
out:
ttm_base_object_unref(&base);
}
fence = &(container_of(base, struct vmw_user_fence, base)->fence);
- fman = fence->fman;
+ fman = fman_from_fence(fence);
- arg->signaled = vmw_fence_obj_signaled(fence, arg->flags);
- spin_lock_irq(&fman->lock);
+ arg->signaled = vmw_fence_obj_signaled(fence);
- arg->signaled_flags = fence->signaled;
+ arg->signaled_flags = arg->flags;
+ spin_lock_irq(&fman->lock);
arg->passed_seqno = dev_priv->last_read_seqno;
spin_unlock_irq(&fman->lock);
{
struct vmw_event_fence_action *eaction =
container_of(action, struct vmw_event_fence_action, action);
- struct vmw_fence_manager *fman = eaction->fence->fman;
+ struct vmw_fence_manager *fman = fman_from_fence(eaction->fence);
unsigned long irq_flags;
spin_lock_irqsave(&fman->lock, irq_flags);
static void vmw_fence_obj_add_action(struct vmw_fence_obj *fence,
struct vmw_fence_action *action)
{
- struct vmw_fence_manager *fman = fence->fman;
+ struct vmw_fence_manager *fman = fman_from_fence(fence);
unsigned long irq_flags;
bool run_update = false;
spin_lock_irqsave(&fman->lock, irq_flags);
fman->pending_actions[action->type]++;
- if (fence->signaled & DRM_VMW_FENCE_FLAG_EXEC) {
+ if (fence_is_signaled_locked(&fence->base)) {
struct list_head action_list;
INIT_LIST_HEAD(&action_list);
bool interruptible)
{
struct vmw_event_fence_action *eaction;
- struct vmw_fence_manager *fman = fence->fman;
+ struct vmw_fence_manager *fman = fman_from_fence(fence);
struct vmw_fpriv *vmw_fp = vmw_fpriv(file_priv);
unsigned long irq_flags;
bool interruptible)
{
struct vmw_event_fence_pending *event;
- struct drm_device *dev = fence->fman->dev_priv->dev;
+ struct vmw_fence_manager *fman = fman_from_fence(fence);
+ struct drm_device *dev = fman->dev_priv->dev;
unsigned long irq_flags;
int ret;
#ifndef _VMWGFX_FENCE_H_
+#include <linux/fence.h>
+
#define VMW_FENCE_WAIT_TIMEOUT (5*HZ)
struct vmw_private;
};
struct vmw_fence_obj {
- struct kref kref;
- u32 seqno;
+ struct fence base;
- struct vmw_fence_manager *fman;
struct list_head head;
- uint32_t signaled;
- uint32_t signal_mask;
struct list_head seq_passed_actions;
void (*destroy)(struct vmw_fence_obj *fence);
- wait_queue_head_t queue;
};
extern struct vmw_fence_manager *
extern void vmw_fence_manager_takedown(struct vmw_fence_manager *fman);
-extern void vmw_fence_obj_unreference(struct vmw_fence_obj **fence_p);
+static inline void
+vmw_fence_obj_unreference(struct vmw_fence_obj **fence_p)
+{
+ struct vmw_fence_obj *fence = *fence_p;
+
+ *fence_p = NULL;
+ if (fence)
+ fence_put(&fence->base);
+}
-extern struct vmw_fence_obj *
-vmw_fence_obj_reference(struct vmw_fence_obj *fence);
+static inline struct vmw_fence_obj *
+vmw_fence_obj_reference(struct vmw_fence_obj *fence)
+{
+ if (fence)
+ fence_get(&fence->base);
+ return fence;
+}
extern void vmw_fences_update(struct vmw_fence_manager *fman);
-extern bool vmw_fence_obj_signaled(struct vmw_fence_obj *fence,
- uint32_t flags);
+extern bool vmw_fence_obj_signaled(struct vmw_fence_obj *fence);
-extern int vmw_fence_obj_wait(struct vmw_fence_obj *fence, uint32_t flags,
+extern int vmw_fence_obj_wait(struct vmw_fence_obj *fence,
bool lazy,
bool interruptible, unsigned long timeout);
extern int vmw_fence_create(struct vmw_fence_manager *fman,
uint32_t seqno,
- uint32_t mask,
struct vmw_fence_obj **p_fence);
extern int vmw_user_fence_create(struct drm_file *file_priv,
struct vmw_fence_manager *fman,
uint32_t sequence,
- uint32_t mask,
struct vmw_fence_obj **p_fence,
uint32_t *p_handle);
return vmw_fifo_send_fence(dev_priv, &dummy);
}
-void vmw_fifo_ping_host(struct vmw_private *dev_priv, uint32_t reason)
+void vmw_fifo_ping_host_locked(struct vmw_private *dev_priv, uint32_t reason)
{
__le32 __iomem *fifo_mem = dev_priv->mmio_virt;
- mutex_lock(&dev_priv->hw_mutex);
-
if (unlikely(ioread32(fifo_mem + SVGA_FIFO_BUSY) == 0)) {
iowrite32(1, fifo_mem + SVGA_FIFO_BUSY);
vmw_write(dev_priv, SVGA_REG_SYNC, reason);
}
+}
+
+void vmw_fifo_ping_host(struct vmw_private *dev_priv, uint32_t reason)
+{
+ mutex_lock(&dev_priv->hw_mutex);
+
+ vmw_fifo_ping_host_locked(dev_priv, reason);
mutex_unlock(&dev_priv->hw_mutex);
}
mutex_lock(&dev_priv->hw_mutex);
+ vmw_write(dev_priv, SVGA_REG_SYNC, SVGA_SYNC_GENERIC);
while (vmw_read(dev_priv, SVGA_REG_BUSY) != 0)
- vmw_write(dev_priv, SVGA_REG_SYNC, SVGA_SYNC_GENERIC);
+ ;
dev_priv->last_read_seqno = ioread32(fifo_mem + SVGA_FIFO_FENCE);
static int vmw_gmrid_man_get_node(struct ttm_mem_type_manager *man,
struct ttm_buffer_object *bo,
- struct ttm_placement *placement,
- uint32_t flags,
+ const struct ttm_place *place,
struct ttm_mem_reg *mem)
{
struct vmwgfx_gmrid_man *gman =
struct ttm_validate_buffer val_buf;
val_buf.bo = bo;
+ val_buf.shared = false;
res->func->unbind(res, false, &val_buf);
}
res->backup_dirty = false;
ret = ttm_bo_init(bdev, &vmw_bo->base, size,
ttm_bo_type_device, placement,
0, interruptible,
- NULL, acc_size, NULL, bo_free);
+ NULL, acc_size, NULL, NULL, bo_free);
return ret;
}
int ret;
if (flags & drm_vmw_synccpu_allow_cs) {
- struct ttm_bo_device *bdev = bo->bdev;
+ bool nonblock = !!(flags & drm_vmw_synccpu_dontblock);
+ long lret;
- spin_lock(&bdev->fence_lock);
- ret = ttm_bo_wait(bo, false, true,
- !!(flags & drm_vmw_synccpu_dontblock));
- spin_unlock(&bdev->fence_lock);
- return ret;
+ if (nonblock)
+ return reservation_object_test_signaled_rcu(bo->resv, true) ?
+ 0 : -EBUSY;
+
+ lret = reservation_object_wait_timeout_rcu(bo->resv, true, true,
+ MAX_SCHEDULE_TIMEOUT);
+ if (!lret)
+ return -EBUSY;
+ else if (lret < 0)
+ return lret;
+ return 0;
}
ret = ttm_bo_synccpu_write_grab
INIT_LIST_HEAD(&val_list);
val_buf->bo = ttm_bo_reference(&res->backup->base);
+ val_buf->shared = false;
list_add_tail(&val_buf->head, &val_list);
- ret = ttm_eu_reserve_buffers(NULL, &val_list);
+ ret = ttm_eu_reserve_buffers(NULL, &val_list, interruptible);
if (unlikely(ret != 0))
goto out_no_reserve;
BUG_ON(!func->may_evict);
val_buf.bo = NULL;
+ val_buf.shared = false;
ret = vmw_resource_check_buffer(res, interruptible, &val_buf);
if (unlikely(ret != 0))
return ret;
return 0;
val_buf.bo = NULL;
+ val_buf.shared = false;
if (res->backup)
val_buf.bo = &res->backup->base;
do {
struct vmw_fence_obj *fence)
{
struct ttm_bo_device *bdev = bo->bdev;
- struct ttm_bo_driver *driver = bdev->driver;
- struct vmw_fence_obj *old_fence_obj;
+
struct vmw_private *dev_priv =
container_of(bdev, struct vmw_private, bdev);
- if (fence == NULL)
+ if (fence == NULL) {
vmw_execbuf_fence_commands(NULL, dev_priv, &fence, NULL);
- else
- driver->sync_obj_ref(fence);
-
- spin_lock(&bdev->fence_lock);
-
- old_fence_obj = bo->sync_obj;
- bo->sync_obj = fence;
-
- spin_unlock(&bdev->fence_lock);
-
- if (old_fence_obj)
- vmw_fence_obj_unreference(&old_fence_obj);
+ reservation_object_add_excl_fence(bo->resv, &fence->base);
+ fence_put(&fence->base);
+ } else {
+ reservation_object_add_excl_fence(bo->resv, &fence->base);
+ }
}
/**
if (mem->mem_type != VMW_PL_MOB) {
struct vmw_resource *res, *n;
- struct ttm_bo_device *bdev = bo->bdev;
struct ttm_validate_buffer val_buf;
val_buf.bo = bo;
+ val_buf.shared = false;
list_for_each_entry_safe(res, n, &dma_buf->res_list, mob_head) {
list_del_init(&res->mob_head);
}
- spin_lock(&bdev->fence_lock);
(void) ttm_bo_wait(bo, false, false, false);
- spin_unlock(&bdev->fence_lock);
}
}
obj-$(CONFIG_IMX_IPUV3_CORE) += imx-ipu-v3.o
-imx-ipu-v3-objs := ipu-common.o ipu-dc.o ipu-di.o ipu-dp.o ipu-dmfc.o ipu-smfc.o
+imx-ipu-v3-objs := ipu-common.o ipu-cpmem.o ipu-csi.o ipu-dc.o ipu-di.o \
+ ipu-dp.o ipu-dmfc.o ipu-ic.o ipu-smfc.o
writel(value, ipu->cm_reg + offset);
}
-static inline u32 ipu_idmac_read(struct ipu_soc *ipu, unsigned offset)
-{
- return readl(ipu->idmac_reg + offset);
-}
-
-static inline void ipu_idmac_write(struct ipu_soc *ipu, u32 value,
- unsigned offset)
-{
- writel(value, ipu->idmac_reg + offset);
-}
-
void ipu_srm_dp_sync_update(struct ipu_soc *ipu)
{
u32 val;
}
EXPORT_SYMBOL_GPL(ipu_srm_dp_sync_update);
-struct ipu_ch_param __iomem *ipu_get_cpmem(struct ipuv3_channel *channel)
-{
- struct ipu_soc *ipu = channel->ipu;
-
- return ipu->cpmem_base + channel->num;
-}
-EXPORT_SYMBOL_GPL(ipu_get_cpmem);
-
-void ipu_cpmem_set_high_priority(struct ipuv3_channel *channel)
-{
- struct ipu_soc *ipu = channel->ipu;
- struct ipu_ch_param __iomem *p = ipu_get_cpmem(channel);
- u32 val;
-
- if (ipu->ipu_type == IPUV3EX)
- ipu_ch_param_write_field(p, IPU_FIELD_ID, 1);
-
- val = ipu_idmac_read(ipu, IDMAC_CHA_PRI(channel->num));
- val |= 1 << (channel->num % 32);
- ipu_idmac_write(ipu, val, IDMAC_CHA_PRI(channel->num));
-};
-EXPORT_SYMBOL_GPL(ipu_cpmem_set_high_priority);
-
-void ipu_ch_param_write_field(struct ipu_ch_param __iomem *base, u32 wbs, u32 v)
-{
- u32 bit = (wbs >> 8) % 160;
- u32 size = wbs & 0xff;
- u32 word = (wbs >> 8) / 160;
- u32 i = bit / 32;
- u32 ofs = bit % 32;
- u32 mask = (1 << size) - 1;
- u32 val;
-
- pr_debug("%s %d %d %d\n", __func__, word, bit , size);
-
- val = readl(&base->word[word].data[i]);
- val &= ~(mask << ofs);
- val |= v << ofs;
- writel(val, &base->word[word].data[i]);
-
- if ((bit + size - 1) / 32 > i) {
- val = readl(&base->word[word].data[i + 1]);
- val &= ~(mask >> (ofs ? (32 - ofs) : 0));
- val |= v >> (ofs ? (32 - ofs) : 0);
- writel(val, &base->word[word].data[i + 1]);
- }
-}
-EXPORT_SYMBOL_GPL(ipu_ch_param_write_field);
-
-u32 ipu_ch_param_read_field(struct ipu_ch_param __iomem *base, u32 wbs)
-{
- u32 bit = (wbs >> 8) % 160;
- u32 size = wbs & 0xff;
- u32 word = (wbs >> 8) / 160;
- u32 i = bit / 32;
- u32 ofs = bit % 32;
- u32 mask = (1 << size) - 1;
- u32 val = 0;
-
- pr_debug("%s %d %d %d\n", __func__, word, bit , size);
-
- val = (readl(&base->word[word].data[i]) >> ofs) & mask;
-
- if ((bit + size - 1) / 32 > i) {
- u32 tmp;
- tmp = readl(&base->word[word].data[i + 1]);
- tmp &= mask >> (ofs ? (32 - ofs) : 0);
- val |= tmp << (ofs ? (32 - ofs) : 0);
- }
-
- return val;
-}
-EXPORT_SYMBOL_GPL(ipu_ch_param_read_field);
-
-int ipu_cpmem_set_format_rgb(struct ipu_ch_param __iomem *p,
- const struct ipu_rgb *rgb)
-{
- int bpp = 0, npb = 0, ro, go, bo, to;
-
- ro = rgb->bits_per_pixel - rgb->red.length - rgb->red.offset;
- go = rgb->bits_per_pixel - rgb->green.length - rgb->green.offset;
- bo = rgb->bits_per_pixel - rgb->blue.length - rgb->blue.offset;
- to = rgb->bits_per_pixel - rgb->transp.length - rgb->transp.offset;
-
- ipu_ch_param_write_field(p, IPU_FIELD_WID0, rgb->red.length - 1);
- ipu_ch_param_write_field(p, IPU_FIELD_OFS0, ro);
- ipu_ch_param_write_field(p, IPU_FIELD_WID1, rgb->green.length - 1);
- ipu_ch_param_write_field(p, IPU_FIELD_OFS1, go);
- ipu_ch_param_write_field(p, IPU_FIELD_WID2, rgb->blue.length - 1);
- ipu_ch_param_write_field(p, IPU_FIELD_OFS2, bo);
-
- if (rgb->transp.length) {
- ipu_ch_param_write_field(p, IPU_FIELD_WID3,
- rgb->transp.length - 1);
- ipu_ch_param_write_field(p, IPU_FIELD_OFS3, to);
- } else {
- ipu_ch_param_write_field(p, IPU_FIELD_WID3, 7);
- ipu_ch_param_write_field(p, IPU_FIELD_OFS3,
- rgb->bits_per_pixel);
- }
-
- switch (rgb->bits_per_pixel) {
- case 32:
- bpp = 0;
- npb = 15;
- break;
- case 24:
- bpp = 1;
- npb = 19;
- break;
- case 16:
- bpp = 3;
- npb = 31;
- break;
- case 8:
- bpp = 5;
- npb = 63;
- break;
- default:
- return -EINVAL;
- }
- ipu_ch_param_write_field(p, IPU_FIELD_BPP, bpp);
- ipu_ch_param_write_field(p, IPU_FIELD_NPB, npb);
- ipu_ch_param_write_field(p, IPU_FIELD_PFS, 7); /* rgb mode */
-
- return 0;
-}
-EXPORT_SYMBOL_GPL(ipu_cpmem_set_format_rgb);
-
-int ipu_cpmem_set_format_passthrough(struct ipu_ch_param __iomem *p,
- int width)
+enum ipu_color_space ipu_drm_fourcc_to_colorspace(u32 drm_fourcc)
{
- int bpp = 0, npb = 0;
-
- switch (width) {
- case 32:
- bpp = 0;
- npb = 15;
- break;
- case 24:
- bpp = 1;
- npb = 19;
- break;
- case 16:
- bpp = 3;
- npb = 31;
- break;
- case 8:
- bpp = 5;
- npb = 63;
- break;
+ switch (drm_fourcc) {
+ case DRM_FORMAT_RGB565:
+ case DRM_FORMAT_BGR565:
+ case DRM_FORMAT_RGB888:
+ case DRM_FORMAT_BGR888:
+ case DRM_FORMAT_XRGB8888:
+ case DRM_FORMAT_XBGR8888:
+ case DRM_FORMAT_RGBX8888:
+ case DRM_FORMAT_BGRX8888:
+ case DRM_FORMAT_ARGB8888:
+ case DRM_FORMAT_ABGR8888:
+ case DRM_FORMAT_RGBA8888:
+ case DRM_FORMAT_BGRA8888:
+ return IPUV3_COLORSPACE_RGB;
+ case DRM_FORMAT_YUYV:
+ case DRM_FORMAT_UYVY:
+ case DRM_FORMAT_YUV420:
+ case DRM_FORMAT_YVU420:
+ case DRM_FORMAT_YUV422:
+ case DRM_FORMAT_YVU422:
+ case DRM_FORMAT_NV12:
+ case DRM_FORMAT_NV21:
+ case DRM_FORMAT_NV16:
+ case DRM_FORMAT_NV61:
+ return IPUV3_COLORSPACE_YUV;
default:
- return -EINVAL;
+ return IPUV3_COLORSPACE_UNKNOWN;
}
-
- ipu_ch_param_write_field(p, IPU_FIELD_BPP, bpp);
- ipu_ch_param_write_field(p, IPU_FIELD_NPB, npb);
- ipu_ch_param_write_field(p, IPU_FIELD_PFS, 6); /* raw mode */
-
- return 0;
}
-EXPORT_SYMBOL_GPL(ipu_cpmem_set_format_passthrough);
+EXPORT_SYMBOL_GPL(ipu_drm_fourcc_to_colorspace);
-void ipu_cpmem_set_yuv_interleaved(struct ipu_ch_param __iomem *p,
- u32 pixel_format)
+enum ipu_color_space ipu_pixelformat_to_colorspace(u32 pixelformat)
{
- switch (pixel_format) {
+ switch (pixelformat) {
+ case V4L2_PIX_FMT_YUV420:
+ case V4L2_PIX_FMT_YVU420:
+ case V4L2_PIX_FMT_YUV422P:
case V4L2_PIX_FMT_UYVY:
- ipu_ch_param_write_field(p, IPU_FIELD_BPP, 3); /* bits/pixel */
- ipu_ch_param_write_field(p, IPU_FIELD_PFS, 0xA); /* pix format */
- ipu_ch_param_write_field(p, IPU_FIELD_NPB, 31); /* burst size */
- break;
case V4L2_PIX_FMT_YUYV:
- ipu_ch_param_write_field(p, IPU_FIELD_BPP, 3); /* bits/pixel */
- ipu_ch_param_write_field(p, IPU_FIELD_PFS, 0x8); /* pix format */
- ipu_ch_param_write_field(p, IPU_FIELD_NPB, 31); /* burst size */
- break;
+ case V4L2_PIX_FMT_NV12:
+ case V4L2_PIX_FMT_NV21:
+ case V4L2_PIX_FMT_NV16:
+ case V4L2_PIX_FMT_NV61:
+ return IPUV3_COLORSPACE_YUV;
+ case V4L2_PIX_FMT_RGB32:
+ case V4L2_PIX_FMT_BGR32:
+ case V4L2_PIX_FMT_RGB24:
+ case V4L2_PIX_FMT_BGR24:
+ case V4L2_PIX_FMT_RGB565:
+ return IPUV3_COLORSPACE_RGB;
+ default:
+ return IPUV3_COLORSPACE_UNKNOWN;
}
}
-EXPORT_SYMBOL_GPL(ipu_cpmem_set_yuv_interleaved);
+EXPORT_SYMBOL_GPL(ipu_pixelformat_to_colorspace);
-void ipu_cpmem_set_yuv_planar_full(struct ipu_ch_param __iomem *p,
- u32 pixel_format, int stride, int u_offset, int v_offset)
+bool ipu_pixelformat_is_planar(u32 pixelformat)
{
- switch (pixel_format) {
+ switch (pixelformat) {
case V4L2_PIX_FMT_YUV420:
- ipu_ch_param_write_field(p, IPU_FIELD_SLUV, (stride / 2) - 1);
- ipu_ch_param_write_field(p, IPU_FIELD_UBO, u_offset / 8);
- ipu_ch_param_write_field(p, IPU_FIELD_VBO, v_offset / 8);
- break;
case V4L2_PIX_FMT_YVU420:
- ipu_ch_param_write_field(p, IPU_FIELD_SLUV, (stride / 2) - 1);
- ipu_ch_param_write_field(p, IPU_FIELD_UBO, v_offset / 8);
- ipu_ch_param_write_field(p, IPU_FIELD_VBO, u_offset / 8);
- break;
+ case V4L2_PIX_FMT_YUV422P:
+ case V4L2_PIX_FMT_NV12:
+ case V4L2_PIX_FMT_NV21:
+ case V4L2_PIX_FMT_NV16:
+ case V4L2_PIX_FMT_NV61:
+ return true;
}
-}
-EXPORT_SYMBOL_GPL(ipu_cpmem_set_yuv_planar_full);
-
-void ipu_cpmem_set_yuv_planar(struct ipu_ch_param __iomem *p, u32 pixel_format,
- int stride, int height)
-{
- int u_offset, v_offset;
- int uv_stride = 0;
- switch (pixel_format) {
- case V4L2_PIX_FMT_YUV420:
- case V4L2_PIX_FMT_YVU420:
- uv_stride = stride / 2;
- u_offset = stride * height;
- v_offset = u_offset + (uv_stride * height / 2);
- ipu_cpmem_set_yuv_planar_full(p, pixel_format, stride,
- u_offset, v_offset);
- break;
- }
+ return false;
}
-EXPORT_SYMBOL_GPL(ipu_cpmem_set_yuv_planar);
-
-static const struct ipu_rgb def_rgb_32 = {
- .red = { .offset = 16, .length = 8, },
- .green = { .offset = 8, .length = 8, },
- .blue = { .offset = 0, .length = 8, },
- .transp = { .offset = 24, .length = 8, },
- .bits_per_pixel = 32,
-};
-
-static const struct ipu_rgb def_bgr_32 = {
- .red = { .offset = 0, .length = 8, },
- .green = { .offset = 8, .length = 8, },
- .blue = { .offset = 16, .length = 8, },
- .transp = { .offset = 24, .length = 8, },
- .bits_per_pixel = 32,
-};
-
-static const struct ipu_rgb def_rgb_24 = {
- .red = { .offset = 16, .length = 8, },
- .green = { .offset = 8, .length = 8, },
- .blue = { .offset = 0, .length = 8, },
- .transp = { .offset = 0, .length = 0, },
- .bits_per_pixel = 24,
-};
-
-static const struct ipu_rgb def_bgr_24 = {
- .red = { .offset = 0, .length = 8, },
- .green = { .offset = 8, .length = 8, },
- .blue = { .offset = 16, .length = 8, },
- .transp = { .offset = 0, .length = 0, },
- .bits_per_pixel = 24,
-};
-
-static const struct ipu_rgb def_rgb_16 = {
- .red = { .offset = 11, .length = 5, },
- .green = { .offset = 5, .length = 6, },
- .blue = { .offset = 0, .length = 5, },
- .transp = { .offset = 0, .length = 0, },
- .bits_per_pixel = 16,
-};
+EXPORT_SYMBOL_GPL(ipu_pixelformat_is_planar);
-static const struct ipu_rgb def_bgr_16 = {
- .red = { .offset = 0, .length = 5, },
- .green = { .offset = 5, .length = 6, },
- .blue = { .offset = 11, .length = 5, },
- .transp = { .offset = 0, .length = 0, },
- .bits_per_pixel = 16,
-};
-
-#define Y_OFFSET(pix, x, y) ((x) + pix->width * (y))
-#define U_OFFSET(pix, x, y) ((pix->width * pix->height) + \
- (pix->width * (y) / 4) + (x) / 2)
-#define V_OFFSET(pix, x, y) ((pix->width * pix->height) + \
- (pix->width * pix->height / 4) + \
- (pix->width * (y) / 4) + (x) / 2)
-
-int ipu_cpmem_set_fmt(struct ipu_ch_param __iomem *cpmem, u32 drm_fourcc)
+enum ipu_color_space ipu_mbus_code_to_colorspace(u32 mbus_code)
{
- switch (drm_fourcc) {
- case DRM_FORMAT_YUV420:
- case DRM_FORMAT_YVU420:
- /* pix format */
- ipu_ch_param_write_field(cpmem, IPU_FIELD_PFS, 2);
- /* burst size */
- ipu_ch_param_write_field(cpmem, IPU_FIELD_NPB, 63);
- break;
- case DRM_FORMAT_UYVY:
- /* bits/pixel */
- ipu_ch_param_write_field(cpmem, IPU_FIELD_BPP, 3);
- /* pix format */
- ipu_ch_param_write_field(cpmem, IPU_FIELD_PFS, 0xA);
- /* burst size */
- ipu_ch_param_write_field(cpmem, IPU_FIELD_NPB, 31);
- break;
- case DRM_FORMAT_YUYV:
- /* bits/pixel */
- ipu_ch_param_write_field(cpmem, IPU_FIELD_BPP, 3);
- /* pix format */
- ipu_ch_param_write_field(cpmem, IPU_FIELD_PFS, 0x8);
- /* burst size */
- ipu_ch_param_write_field(cpmem, IPU_FIELD_NPB, 31);
- break;
- case DRM_FORMAT_ABGR8888:
- case DRM_FORMAT_XBGR8888:
- ipu_cpmem_set_format_rgb(cpmem, &def_bgr_32);
- break;
- case DRM_FORMAT_ARGB8888:
- case DRM_FORMAT_XRGB8888:
- ipu_cpmem_set_format_rgb(cpmem, &def_rgb_32);
- break;
- case DRM_FORMAT_BGR888:
- ipu_cpmem_set_format_rgb(cpmem, &def_bgr_24);
- break;
- case DRM_FORMAT_RGB888:
- ipu_cpmem_set_format_rgb(cpmem, &def_rgb_24);
- break;
- case DRM_FORMAT_RGB565:
- ipu_cpmem_set_format_rgb(cpmem, &def_rgb_16);
- break;
- case DRM_FORMAT_BGR565:
- ipu_cpmem_set_format_rgb(cpmem, &def_bgr_16);
- break;
+ switch (mbus_code & 0xf000) {
+ case 0x1000:
+ return IPUV3_COLORSPACE_RGB;
+ case 0x2000:
+ return IPUV3_COLORSPACE_YUV;
default:
- return -EINVAL;
+ return IPUV3_COLORSPACE_UNKNOWN;
}
-
- return 0;
}
-EXPORT_SYMBOL_GPL(ipu_cpmem_set_fmt);
+EXPORT_SYMBOL_GPL(ipu_mbus_code_to_colorspace);
-/*
- * The V4L2 spec defines packed RGB formats in memory byte order, which from
- * point of view of the IPU corresponds to little-endian words with the first
- * component in the least significant bits.
- * The DRM pixel formats and IPU internal representation are ordered the other
- * way around, with the first named component ordered at the most significant
- * bits. Further, V4L2 formats are not well defined:
- * http://linuxtv.org/downloads/v4l-dvb-apis/packed-rgb.html
- * We choose the interpretation which matches GStreamer behavior.
- */
-static int v4l2_pix_fmt_to_drm_fourcc(u32 pixelformat)
+int ipu_stride_to_bytes(u32 pixel_stride, u32 pixelformat)
{
switch (pixelformat) {
- case V4L2_PIX_FMT_RGB565:
+ case V4L2_PIX_FMT_YUV420:
+ case V4L2_PIX_FMT_YVU420:
+ case V4L2_PIX_FMT_YUV422P:
+ case V4L2_PIX_FMT_NV12:
+ case V4L2_PIX_FMT_NV21:
+ case V4L2_PIX_FMT_NV16:
+ case V4L2_PIX_FMT_NV61:
/*
- * Here we choose the 'corrected' interpretation of RGBP, a
- * little-endian 16-bit word with the red component at the most
- * significant bits:
- * g[2:0]b[4:0] r[4:0]g[5:3] <=> [16:0] R:G:B
+ * For the planar YUV formats, the stride passed to
+ * cpmem must be the stride in bytes of the Y plane,
+ * and all the planar YUV formats have an 8-bit
+ * Y component.
*/
- return DRM_FORMAT_RGB565;
+ return (8 * pixel_stride) >> 3;
+ case V4L2_PIX_FMT_RGB565:
+ case V4L2_PIX_FMT_YUYV:
+ case V4L2_PIX_FMT_UYVY:
+ return (16 * pixel_stride) >> 3;
case V4L2_PIX_FMT_BGR24:
- /* B G R <=> [24:0] R:G:B */
- return DRM_FORMAT_RGB888;
case V4L2_PIX_FMT_RGB24:
- /* R G B <=> [24:0] B:G:R */
- return DRM_FORMAT_BGR888;
+ return (24 * pixel_stride) >> 3;
case V4L2_PIX_FMT_BGR32:
- /* B G R A <=> [32:0] A:B:G:R */
- return DRM_FORMAT_XRGB8888;
case V4L2_PIX_FMT_RGB32:
- /* R G B A <=> [32:0] A:B:G:R */
- return DRM_FORMAT_XBGR8888;
- case V4L2_PIX_FMT_UYVY:
- return DRM_FORMAT_UYVY;
- case V4L2_PIX_FMT_YUYV:
- return DRM_FORMAT_YUYV;
- case V4L2_PIX_FMT_YUV420:
- return DRM_FORMAT_YUV420;
- case V4L2_PIX_FMT_YVU420:
- return DRM_FORMAT_YVU420;
+ return (32 * pixel_stride) >> 3;
+ default:
+ break;
}
return -EINVAL;
}
+EXPORT_SYMBOL_GPL(ipu_stride_to_bytes);
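A quick worked example of the conversion (hypothetical calls):

	ipu_stride_to_bytes(640, V4L2_PIX_FMT_RGB565);	/* 640 * 16 / 8 = 1280 */
	ipu_stride_to_bytes(640, V4L2_PIX_FMT_YUV420);	/* 8-bit Y plane: 640 */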
-enum ipu_color_space ipu_drm_fourcc_to_colorspace(u32 drm_fourcc)
+int ipu_degrees_to_rot_mode(enum ipu_rotate_mode *mode, int degrees,
+ bool hflip, bool vflip)
{
- switch (drm_fourcc) {
- case DRM_FORMAT_RGB565:
- case DRM_FORMAT_BGR565:
- case DRM_FORMAT_RGB888:
- case DRM_FORMAT_BGR888:
- case DRM_FORMAT_XRGB8888:
- case DRM_FORMAT_XBGR8888:
- case DRM_FORMAT_RGBX8888:
- case DRM_FORMAT_BGRX8888:
- case DRM_FORMAT_ARGB8888:
- case DRM_FORMAT_ABGR8888:
- case DRM_FORMAT_RGBA8888:
- case DRM_FORMAT_BGRA8888:
- return IPUV3_COLORSPACE_RGB;
- case DRM_FORMAT_YUYV:
- case DRM_FORMAT_UYVY:
- case DRM_FORMAT_YUV420:
- case DRM_FORMAT_YVU420:
- return IPUV3_COLORSPACE_YUV;
+ u32 r90, vf, hf;
+
+ switch (degrees) {
+ case 0:
+ vf = hf = r90 = 0;
+ break;
+ case 90:
+ vf = hf = 0;
+ r90 = 1;
+ break;
+ case 180:
+ vf = hf = 1;
+ r90 = 0;
+ break;
+ case 270:
+ vf = hf = r90 = 1;
+ break;
default:
- return IPUV3_COLORSPACE_UNKNOWN;
+ return -EINVAL;
}
-}
-EXPORT_SYMBOL_GPL(ipu_drm_fourcc_to_colorspace);
-int ipu_cpmem_set_image(struct ipu_ch_param __iomem *cpmem,
- struct ipu_image *image)
-{
- struct v4l2_pix_format *pix = &image->pix;
- int y_offset, u_offset, v_offset;
+ hf ^= (u32)hflip;
+ vf ^= (u32)vflip;
- pr_debug("%s: resolution: %dx%d stride: %d\n",
- __func__, pix->width, pix->height,
- pix->bytesperline);
+ *mode = (enum ipu_rotate_mode)((r90 << 2) | (hf << 1) | vf);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_degrees_to_rot_mode);
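A worked example of the encoding, which packs rot-90 into bit 2, hflip into bit 1 and vflip into bit 0 (hypothetical locals):

	enum ipu_rotate_mode mode;
	int degrees;

	ipu_degrees_to_rot_mode(&mode, 90, true, false);
	/* mode == (1 << 2) | (1 << 1): rotate 90 right plus hflip */

	ipu_rot_mode_to_degrees(&degrees, mode, true, false);
	/* the same flip arguments cancel out: degrees == 90 */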
- ipu_cpmem_set_resolution(cpmem, image->rect.width,
- image->rect.height);
- ipu_cpmem_set_stride(cpmem, pix->bytesperline);
+int ipu_rot_mode_to_degrees(int *degrees, enum ipu_rotate_mode mode,
+ bool hflip, bool vflip)
+{
+ u32 r90, vf, hf;
- ipu_cpmem_set_fmt(cpmem, v4l2_pix_fmt_to_drm_fourcc(pix->pixelformat));
+ r90 = ((u32)mode >> 2) & 0x1;
+ hf = ((u32)mode >> 1) & 0x1;
+ vf = ((u32)mode >> 0) & 0x1;
+ hf ^= (u32)hflip;
+ vf ^= (u32)vflip;
- switch (pix->pixelformat) {
- case V4L2_PIX_FMT_YUV420:
- case V4L2_PIX_FMT_YVU420:
- y_offset = Y_OFFSET(pix, image->rect.left, image->rect.top);
- u_offset = U_OFFSET(pix, image->rect.left,
- image->rect.top) - y_offset;
- v_offset = V_OFFSET(pix, image->rect.left,
- image->rect.top) - y_offset;
-
- ipu_cpmem_set_yuv_planar_full(cpmem, pix->pixelformat,
- pix->bytesperline, u_offset, v_offset);
- ipu_cpmem_set_buffer(cpmem, 0, image->phys + y_offset);
+ switch ((enum ipu_rotate_mode)((r90 << 2) | (hf << 1) | vf)) {
+ case IPU_ROTATE_NONE:
+ *degrees = 0;
break;
- case V4L2_PIX_FMT_UYVY:
- case V4L2_PIX_FMT_YUYV:
- ipu_cpmem_set_buffer(cpmem, 0, image->phys +
- image->rect.left * 2 +
- image->rect.top * image->pix.bytesperline);
+ case IPU_ROTATE_90_RIGHT:
+ *degrees = 90;
break;
- case V4L2_PIX_FMT_RGB32:
- case V4L2_PIX_FMT_BGR32:
- ipu_cpmem_set_buffer(cpmem, 0, image->phys +
- image->rect.left * 4 +
- image->rect.top * image->pix.bytesperline);
+ case IPU_ROTATE_180:
+ *degrees = 180;
break;
- case V4L2_PIX_FMT_RGB565:
- ipu_cpmem_set_buffer(cpmem, 0, image->phys +
- image->rect.left * 2 +
- image->rect.top * image->pix.bytesperline);
- break;
- case V4L2_PIX_FMT_RGB24:
- case V4L2_PIX_FMT_BGR24:
- ipu_cpmem_set_buffer(cpmem, 0, image->phys +
- image->rect.left * 3 +
- image->rect.top * image->pix.bytesperline);
+ case IPU_ROTATE_90_LEFT:
+ *degrees = 270;
break;
default:
return -EINVAL;
return 0;
}
-EXPORT_SYMBOL_GPL(ipu_cpmem_set_image);
-
-enum ipu_color_space ipu_pixelformat_to_colorspace(u32 pixelformat)
-{
- switch (pixelformat) {
- case V4L2_PIX_FMT_YUV420:
- case V4L2_PIX_FMT_YVU420:
- case V4L2_PIX_FMT_UYVY:
- case V4L2_PIX_FMT_YUYV:
- return IPUV3_COLORSPACE_YUV;
- case V4L2_PIX_FMT_RGB32:
- case V4L2_PIX_FMT_BGR32:
- case V4L2_PIX_FMT_RGB24:
- case V4L2_PIX_FMT_BGR24:
- case V4L2_PIX_FMT_RGB565:
- return IPUV3_COLORSPACE_RGB;
- default:
- return IPUV3_COLORSPACE_UNKNOWN;
- }
-}
-EXPORT_SYMBOL_GPL(ipu_pixelformat_to_colorspace);
+EXPORT_SYMBOL_GPL(ipu_rot_mode_to_degrees);
struct ipuv3_channel *ipu_idmac_get(struct ipu_soc *ipu, unsigned num)
{
}
EXPORT_SYMBOL_GPL(ipu_idmac_put);
-#define idma_mask(ch) (1 << (ch & 0x1f))
+#define idma_mask(ch) (1 << ((ch) & 0x1f))
+
+/*
+ * This is an undocumented feature: writing a one to a channel's bit in
+ * IPU_CHA_CUR_BUF or IPU_CHA_TRIPLE_CUR_BUF resets the channel's
+ * internal current buffer pointer, so that transfers start from buffer
+ * 0 on the next channel enable (that's the theory anyway; the imx6 TRM
+ * only says these are read-only registers). This operation is required
+ * for channel linking to work correctly. For instance, video capture
+ * pipelines that carry out image rotations will fail after the first
+ * streaming run unless this function is called for each channel before
+ * the channels are re-enabled.
+ */
+static void __ipu_idmac_reset_current_buffer(struct ipuv3_channel *channel)
+{
+ struct ipu_soc *ipu = channel->ipu;
+ unsigned int chno = channel->num;
+
+ ipu_cm_write(ipu, idma_mask(chno), IPU_CHA_CUR_BUF(chno));
+}
void ipu_idmac_set_double_buffer(struct ipuv3_channel *channel,
bool doublebuffer)
reg &= ~idma_mask(channel->num);
ipu_cm_write(ipu, reg, IPU_CHA_DB_MODE_SEL(channel->num));
+ __ipu_idmac_reset_current_buffer(channel);
+
spin_unlock_irqrestore(&ipu->lock, flags);
}
EXPORT_SYMBOL_GPL(ipu_idmac_set_double_buffer);
+static const struct {
+ int chnum;
+ u32 reg;
+ int shift;
+} idmac_lock_en_info[] = {
+ { .chnum = 5, .reg = IDMAC_CH_LOCK_EN_1, .shift = 0, },
+ { .chnum = 11, .reg = IDMAC_CH_LOCK_EN_1, .shift = 2, },
+ { .chnum = 12, .reg = IDMAC_CH_LOCK_EN_1, .shift = 4, },
+ { .chnum = 14, .reg = IDMAC_CH_LOCK_EN_1, .shift = 6, },
+ { .chnum = 15, .reg = IDMAC_CH_LOCK_EN_1, .shift = 8, },
+ { .chnum = 20, .reg = IDMAC_CH_LOCK_EN_1, .shift = 10, },
+ { .chnum = 21, .reg = IDMAC_CH_LOCK_EN_1, .shift = 12, },
+ { .chnum = 22, .reg = IDMAC_CH_LOCK_EN_1, .shift = 14, },
+ { .chnum = 23, .reg = IDMAC_CH_LOCK_EN_1, .shift = 16, },
+ { .chnum = 27, .reg = IDMAC_CH_LOCK_EN_1, .shift = 18, },
+ { .chnum = 28, .reg = IDMAC_CH_LOCK_EN_1, .shift = 20, },
+ { .chnum = 45, .reg = IDMAC_CH_LOCK_EN_2, .shift = 0, },
+ { .chnum = 46, .reg = IDMAC_CH_LOCK_EN_2, .shift = 2, },
+ { .chnum = 47, .reg = IDMAC_CH_LOCK_EN_2, .shift = 4, },
+ { .chnum = 48, .reg = IDMAC_CH_LOCK_EN_2, .shift = 6, },
+ { .chnum = 49, .reg = IDMAC_CH_LOCK_EN_2, .shift = 8, },
+ { .chnum = 50, .reg = IDMAC_CH_LOCK_EN_2, .shift = 10, },
+};
+
+int ipu_idmac_lock_enable(struct ipuv3_channel *channel, int num_bursts)
+{
+ struct ipu_soc *ipu = channel->ipu;
+ unsigned long flags;
+ u32 bursts, regval;
+ int i;
+
+ switch (num_bursts) {
+ case 0:
+ case 1:
+ bursts = 0x00; /* locking disabled */
+ break;
+ case 2:
+ bursts = 0x01;
+ break;
+ case 4:
+ bursts = 0x02;
+ break;
+ case 8:
+ bursts = 0x03;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(idmac_lock_en_info); i++) {
+ if (channel->num == idmac_lock_en_info[i].chnum)
+ break;
+ }
+ if (i >= ARRAY_SIZE(idmac_lock_en_info))
+ return -EINVAL;
+
+ spin_lock_irqsave(&ipu->lock, flags);
+
+ regval = ipu_idmac_read(ipu, idmac_lock_en_info[i].reg);
+ regval &= ~(0x03 << idmac_lock_en_info[i].shift);
+ regval |= (bursts << idmac_lock_en_info[i].shift);
+ ipu_idmac_write(ipu, regval, idmac_lock_en_info[i].reg);
+
+ spin_unlock_irqrestore(&ipu->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_idmac_lock_enable);
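+/*
+ * Usage sketch (illustration only; "ch" stands for a channel handle
+ * previously obtained from ipu_idmac_get(), and the channel number must
+ * appear in idmac_lock_en_info above, e.g. channel 20):
+ *
+ *	ret = ipu_idmac_lock_enable(ch, 8);
+ *	if (ret)
+ *		dev_warn(dev, "burst locking not supported for this channel\n");
+ */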
+
int ipu_module_enable(struct ipu_soc *ipu, u32 mask)
{
unsigned long lock_flags;
}
EXPORT_SYMBOL_GPL(ipu_module_disable);
-int ipu_csi_enable(struct ipu_soc *ipu, int csi)
-{
- return ipu_module_enable(ipu, csi ? IPU_CONF_CSI1_EN : IPU_CONF_CSI0_EN);
-}
-EXPORT_SYMBOL_GPL(ipu_csi_enable);
-
-int ipu_csi_disable(struct ipu_soc *ipu, int csi)
-{
- return ipu_module_disable(ipu, csi ? IPU_CONF_CSI1_EN : IPU_CONF_CSI0_EN);
-}
-EXPORT_SYMBOL_GPL(ipu_csi_disable);
-
-int ipu_smfc_enable(struct ipu_soc *ipu)
-{
- return ipu_module_enable(ipu, IPU_CONF_SMFC_EN);
-}
-EXPORT_SYMBOL_GPL(ipu_smfc_enable);
-
-int ipu_smfc_disable(struct ipu_soc *ipu)
-{
- return ipu_module_disable(ipu, IPU_CONF_SMFC_EN);
-}
-EXPORT_SYMBOL_GPL(ipu_smfc_disable);
-
int ipu_idmac_get_current_buffer(struct ipuv3_channel *channel)
{
struct ipu_soc *ipu = channel->ipu;
}
EXPORT_SYMBOL_GPL(ipu_idmac_get_current_buffer);
+bool ipu_idmac_buffer_is_ready(struct ipuv3_channel *channel, u32 buf_num)
+{
+ struct ipu_soc *ipu = channel->ipu;
+ unsigned long flags;
+ u32 reg = 0;
+
+ spin_lock_irqsave(&ipu->lock, flags);
+ switch (buf_num) {
+ case 0:
+ reg = ipu_cm_read(ipu, IPU_CHA_BUF0_RDY(channel->num));
+ break;
+ case 1:
+ reg = ipu_cm_read(ipu, IPU_CHA_BUF1_RDY(channel->num));
+ break;
+ case 2:
+ reg = ipu_cm_read(ipu, IPU_CHA_BUF2_RDY(channel->num));
+ break;
+ }
+ spin_unlock_irqrestore(&ipu->lock, flags);
+
+ return ((reg & idma_mask(channel->num)) != 0);
+}
+EXPORT_SYMBOL_GPL(ipu_idmac_buffer_is_ready);
+
void ipu_idmac_select_buffer(struct ipuv3_channel *channel, u32 buf_num)
{
struct ipu_soc *ipu = channel->ipu;
}
EXPORT_SYMBOL_GPL(ipu_idmac_select_buffer);
+void ipu_idmac_clear_buffer(struct ipuv3_channel *channel, u32 buf_num)
+{
+ struct ipu_soc *ipu = channel->ipu;
+ unsigned int chno = channel->num;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ipu->lock, flags);
+
+	ipu_cm_write(ipu, 0xF0300000, IPU_GPR); /* select write-one-to-clear mode */
+ switch (buf_num) {
+ case 0:
+ ipu_cm_write(ipu, idma_mask(chno), IPU_CHA_BUF0_RDY(chno));
+ break;
+ case 1:
+ ipu_cm_write(ipu, idma_mask(chno), IPU_CHA_BUF1_RDY(chno));
+ break;
+ case 2:
+ ipu_cm_write(ipu, idma_mask(chno), IPU_CHA_BUF2_RDY(chno));
+ break;
+ default:
+ break;
+ }
+	ipu_cm_write(ipu, 0x0, IPU_GPR); /* back to write-one-to-set mode */
+
+ spin_unlock_irqrestore(&ipu->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_idmac_clear_buffer);
+
int ipu_idmac_enable_channel(struct ipuv3_channel *channel)
{
struct ipu_soc *ipu = channel->ipu;
val &= ~idma_mask(channel->num);
ipu_idmac_write(ipu, val, IDMAC_CHA_EN(channel->num));
+ __ipu_idmac_reset_current_buffer(channel);
+
/* Set channel buffers NOT to be ready */
ipu_cm_write(ipu, 0xf0000000, IPU_GPR); /* write one to clear */
}
EXPORT_SYMBOL_GPL(ipu_idmac_disable_channel);
+/*
+ * The imx6 rev. D TRM says that enabling the WM feature will increase
+ * a channel's priority; refer to Table 36-8, "Calculated priority value".
+ * The sub-module that is the sink or source for the channel must enable
+ * watermark signal for this to take effect (SMFC_WM for instance).
+ */
+void ipu_idmac_enable_watermark(struct ipuv3_channel *channel, bool enable)
+{
+ struct ipu_soc *ipu = channel->ipu;
+ unsigned long flags;
+ u32 val;
+
+ spin_lock_irqsave(&ipu->lock, flags);
+
+ val = ipu_idmac_read(ipu, IDMAC_WM_EN(channel->num));
+ if (enable)
+ val |= 1 << (channel->num % 32);
+ else
+ val &= ~(1 << (channel->num % 32));
+ ipu_idmac_write(ipu, val, IDMAC_WM_EN(channel->num));
+
+ spin_unlock_irqrestore(&ipu->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_idmac_enable_watermark);
+
static int ipu_memory_reset(struct ipu_soc *ipu)
{
unsigned long timeout;
return 0;
}
+/*
+ * Set the source mux for the given CSI. Selects either parallel or
+ * MIPI CSI2 sources.
+ */
+void ipu_set_csi_src_mux(struct ipu_soc *ipu, int csi_id, bool mipi_csi2)
+{
+ unsigned long flags;
+ u32 val, mask;
+
+ mask = (csi_id == 1) ? IPU_CONF_CSI1_DATA_SOURCE :
+ IPU_CONF_CSI0_DATA_SOURCE;
+
+ spin_lock_irqsave(&ipu->lock, flags);
+
+ val = ipu_cm_read(ipu, IPU_CONF);
+ if (mipi_csi2)
+ val |= mask;
+ else
+ val &= ~mask;
+ ipu_cm_write(ipu, val, IPU_CONF);
+
+ spin_unlock_irqrestore(&ipu->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_set_csi_src_mux);
+
+/*
+ * Set the source mux for the IC. Selects either CSI[01] or the VDI.
+ */
+void ipu_set_ic_src_mux(struct ipu_soc *ipu, int csi_id, bool vdi)
+{
+ unsigned long flags;
+ u32 val;
+
+ spin_lock_irqsave(&ipu->lock, flags);
+
+ val = ipu_cm_read(ipu, IPU_CONF);
+ if (vdi) {
+ val |= IPU_CONF_IC_INPUT;
+ } else {
+ val &= ~IPU_CONF_IC_INPUT;
+ if (csi_id == 1)
+ val |= IPU_CONF_CSI_SEL;
+ else
+ val &= ~IPU_CONF_CSI_SEL;
+ }
+ ipu_cm_write(ipu, val, IPU_CONF);
+
+ spin_unlock_irqrestore(&ipu->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_set_ic_src_mux);
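+/*
+ * Usage sketch (illustration only): to capture from the MIPI CSI-2
+ * receiver through CSI1 and feed the IC directly from CSI1 (bypassing
+ * the VDI), one would combine the two muxes like this:
+ *
+ *	ipu_set_csi_src_mux(ipu, 1, true);
+ *	ipu_set_ic_src_mux(ipu, 1, false);
+ */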
+
struct ipu_devtype {
const char *name;
unsigned long cm_ofs;
unsigned long cpmem_ofs;
unsigned long srm_ofs;
unsigned long tpm_ofs;
+ unsigned long csi0_ofs;
+ unsigned long csi1_ofs;
+ unsigned long ic_ofs;
unsigned long disp0_ofs;
unsigned long disp1_ofs;
unsigned long dc_tmpl_ofs;
.cpmem_ofs = 0x1f000000,
.srm_ofs = 0x1f040000,
.tpm_ofs = 0x1f060000,
+ .csi0_ofs = 0x1f030000,
+ .csi1_ofs = 0x1f038000,
+ .ic_ofs = 0x1f020000,
.disp0_ofs = 0x1e040000,
.disp1_ofs = 0x1e048000,
.dc_tmpl_ofs = 0x1f080000,
.cpmem_ofs = 0x07000000,
.srm_ofs = 0x07040000,
.tpm_ofs = 0x07060000,
+ .csi0_ofs = 0x07030000,
+ .csi1_ofs = 0x07038000,
+ .ic_ofs = 0x07020000,
.disp0_ofs = 0x06040000,
.disp1_ofs = 0x06048000,
.dc_tmpl_ofs = 0x07080000,
.cpmem_ofs = 0x00300000,
.srm_ofs = 0x00340000,
.tpm_ofs = 0x00360000,
+ .csi0_ofs = 0x00230000,
+ .csi1_ofs = 0x00238000,
+ .ic_ofs = 0x00220000,
.disp0_ofs = 0x00240000,
.disp1_ofs = 0x00248000,
.dc_tmpl_ofs = 0x00380000,
struct device *dev = &pdev->dev;
const struct ipu_devtype *devtype = ipu->devtype;
+ ret = ipu_cpmem_init(ipu, dev, ipu_base + devtype->cpmem_ofs);
+ if (ret) {
+ unit = "cpmem";
+ goto err_cpmem;
+ }
+
+ ret = ipu_csi_init(ipu, dev, 0, ipu_base + devtype->csi0_ofs,
+ IPU_CONF_CSI0_EN, ipu_clk);
+ if (ret) {
+ unit = "csi0";
+ goto err_csi_0;
+ }
+
+ ret = ipu_csi_init(ipu, dev, 1, ipu_base + devtype->csi1_ofs,
+ IPU_CONF_CSI1_EN, ipu_clk);
+ if (ret) {
+ unit = "csi1";
+ goto err_csi_1;
+ }
+
+ ret = ipu_ic_init(ipu, dev,
+ ipu_base + devtype->ic_ofs,
+ ipu_base + devtype->tpm_ofs);
+ if (ret) {
+ unit = "ic";
+ goto err_ic;
+ }
+
ret = ipu_di_init(ipu, dev, 0, ipu_base + devtype->disp0_ofs,
- IPU_CONF_DI0_EN, ipu_clk);
+ IPU_CONF_DI0_EN, ipu_clk);
if (ret) {
unit = "di0";
goto err_di_0;
err_di_1:
ipu_di_exit(ipu, 0);
err_di_0:
+ ipu_ic_exit(ipu);
+err_ic:
+ ipu_csi_exit(ipu, 1);
+err_csi_1:
+ ipu_csi_exit(ipu, 0);
+err_csi_0:
+ ipu_cpmem_exit(ipu);
+err_cpmem:
dev_err(&pdev->dev, "init %s failed with %d\n", unit, ret);
return ret;
}
ipu_dc_exit(ipu);
ipu_di_exit(ipu, 1);
ipu_di_exit(ipu, 0);
+ ipu_ic_exit(ipu);
+ ipu_csi_exit(ipu, 1);
+ ipu_csi_exit(ipu, 0);
+ ipu_cpmem_exit(ipu);
}
static int platform_remove_devices_fn(struct device *dev, void *unused)
irq_domain_remove(ipu->domain);
}
+void ipu_dump(struct ipu_soc *ipu)
+{
+ int i;
+
+ dev_dbg(ipu->dev, "IPU_CONF = \t0x%08X\n",
+ ipu_cm_read(ipu, IPU_CONF));
+ dev_dbg(ipu->dev, "IDMAC_CONF = \t0x%08X\n",
+ ipu_idmac_read(ipu, IDMAC_CONF));
+ dev_dbg(ipu->dev, "IDMAC_CHA_EN1 = \t0x%08X\n",
+ ipu_idmac_read(ipu, IDMAC_CHA_EN(0)));
+ dev_dbg(ipu->dev, "IDMAC_CHA_EN2 = \t0x%08X\n",
+ ipu_idmac_read(ipu, IDMAC_CHA_EN(32)));
+ dev_dbg(ipu->dev, "IDMAC_CHA_PRI1 = \t0x%08X\n",
+ ipu_idmac_read(ipu, IDMAC_CHA_PRI(0)));
+ dev_dbg(ipu->dev, "IDMAC_CHA_PRI2 = \t0x%08X\n",
+ ipu_idmac_read(ipu, IDMAC_CHA_PRI(32)));
+ dev_dbg(ipu->dev, "IDMAC_BAND_EN1 = \t0x%08X\n",
+ ipu_idmac_read(ipu, IDMAC_BAND_EN(0)));
+ dev_dbg(ipu->dev, "IDMAC_BAND_EN2 = \t0x%08X\n",
+ ipu_idmac_read(ipu, IDMAC_BAND_EN(32)));
+ dev_dbg(ipu->dev, "IPU_CHA_DB_MODE_SEL0 = \t0x%08X\n",
+ ipu_cm_read(ipu, IPU_CHA_DB_MODE_SEL(0)));
+ dev_dbg(ipu->dev, "IPU_CHA_DB_MODE_SEL1 = \t0x%08X\n",
+ ipu_cm_read(ipu, IPU_CHA_DB_MODE_SEL(32)));
+ dev_dbg(ipu->dev, "IPU_FS_PROC_FLOW1 = \t0x%08X\n",
+ ipu_cm_read(ipu, IPU_FS_PROC_FLOW1));
+ dev_dbg(ipu->dev, "IPU_FS_PROC_FLOW2 = \t0x%08X\n",
+ ipu_cm_read(ipu, IPU_FS_PROC_FLOW2));
+ dev_dbg(ipu->dev, "IPU_FS_PROC_FLOW3 = \t0x%08X\n",
+ ipu_cm_read(ipu, IPU_FS_PROC_FLOW3));
+ dev_dbg(ipu->dev, "IPU_FS_DISP_FLOW1 = \t0x%08X\n",
+ ipu_cm_read(ipu, IPU_FS_DISP_FLOW1));
+ for (i = 0; i < 15; i++)
+ dev_dbg(ipu->dev, "IPU_INT_CTRL(%d) = \t%08X\n", i,
+ ipu_cm_read(ipu, IPU_INT_CTRL(i)));
+}
+EXPORT_SYMBOL_GPL(ipu_dump);
+
static int ipu_probe(struct platform_device *pdev)
{
const struct of_device_id *of_id =
ipu_base + devtype->cm_ofs + IPU_CM_IDMAC_REG_OFS);
dev_dbg(&pdev->dev, "cpmem: 0x%08lx\n",
ipu_base + devtype->cpmem_ofs);
+ dev_dbg(&pdev->dev, "csi0: 0x%08lx\n",
+ ipu_base + devtype->csi0_ofs);
+ dev_dbg(&pdev->dev, "csi1: 0x%08lx\n",
+ ipu_base + devtype->csi1_ofs);
+ dev_dbg(&pdev->dev, "ic: 0x%08lx\n",
+ ipu_base + devtype->ic_ofs);
dev_dbg(&pdev->dev, "disp0: 0x%08lx\n",
ipu_base + devtype->disp0_ofs);
dev_dbg(&pdev->dev, "disp1: 0x%08lx\n",
ipu->idmac_reg = devm_ioremap(&pdev->dev,
ipu_base + devtype->cm_ofs + IPU_CM_IDMAC_REG_OFS,
PAGE_SIZE);
- ipu->cpmem_base = devm_ioremap(&pdev->dev,
- ipu_base + devtype->cpmem_ofs, PAGE_SIZE);
- if (!ipu->cm_reg || !ipu->idmac_reg || !ipu->cpmem_base)
+ if (!ipu->cm_reg || !ipu->idmac_reg)
return -ENOMEM;
ipu->clk = devm_clk_get(&pdev->dev, "bus");
--- /dev/null
+/*
+ * Copyright (C) 2012 Mentor Graphics Inc.
+ * Copyright 2005-2012 Freescale Semiconductor, Inc. All Rights Reserved.
+ *
+ * The code contained herein is licensed under the GNU General Public
+ * License. You may obtain a copy of the GNU General Public License
+ * Version 2 or later at the following locations:
+ *
+ * http://www.opensource.org/licenses/gpl-license.html
+ * http://www.gnu.org/copyleft/gpl.html
+ */
+#include <linux/types.h>
+#include <linux/bitrev.h>
+#include <linux/io.h>
+#include <drm/drm_fourcc.h>
+#include "ipu-prv.h"
+
+struct ipu_cpmem_word {
+ u32 data[5];
+ u32 res[3];
+};
+
+struct ipu_ch_param {
+ struct ipu_cpmem_word word[2];
+};
+
+struct ipu_cpmem {
+ struct ipu_ch_param __iomem *base;
+ u32 module;
+ spinlock_t lock;
+ int use_count;
+ struct ipu_soc *ipu;
+};
+
+#define IPU_CPMEM_WORD(word, ofs, size) ((((word) * 160 + (ofs)) << 8) | (size))
+
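+/*
+ * The macro packs a field descriptor as ((word * 160 + bit offset) << 8)
+ * | field size. Worked example (illustration only): IPU_FIELD_FW below
+ * expands to ((0 * 160 + 125) << 8) | 13 = 0x7d0d, i.e. word 0, bit
+ * offset 125, 13 bits wide.
+ */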
+#define IPU_FIELD_UBO IPU_CPMEM_WORD(0, 46, 22)
+#define IPU_FIELD_VBO IPU_CPMEM_WORD(0, 68, 22)
+#define IPU_FIELD_IOX IPU_CPMEM_WORD(0, 90, 4)
+#define IPU_FIELD_RDRW IPU_CPMEM_WORD(0, 94, 1)
+#define IPU_FIELD_SO IPU_CPMEM_WORD(0, 113, 1)
+#define IPU_FIELD_SLY IPU_CPMEM_WORD(1, 102, 14)
+#define IPU_FIELD_SLUV IPU_CPMEM_WORD(1, 128, 14)
+
+#define IPU_FIELD_XV IPU_CPMEM_WORD(0, 0, 10)
+#define IPU_FIELD_YV IPU_CPMEM_WORD(0, 10, 9)
+#define IPU_FIELD_XB IPU_CPMEM_WORD(0, 19, 13)
+#define IPU_FIELD_YB IPU_CPMEM_WORD(0, 32, 12)
+#define IPU_FIELD_NSB_B IPU_CPMEM_WORD(0, 44, 1)
+#define IPU_FIELD_CF IPU_CPMEM_WORD(0, 45, 1)
+#define IPU_FIELD_SX IPU_CPMEM_WORD(0, 46, 12)
+#define IPU_FIELD_SY IPU_CPMEM_WORD(0, 58, 11)
+#define IPU_FIELD_NS IPU_CPMEM_WORD(0, 69, 10)
+#define IPU_FIELD_SDX IPU_CPMEM_WORD(0, 79, 7)
+#define IPU_FIELD_SM IPU_CPMEM_WORD(0, 86, 10)
+#define IPU_FIELD_SCC IPU_CPMEM_WORD(0, 96, 1)
+#define IPU_FIELD_SCE IPU_CPMEM_WORD(0, 97, 1)
+#define IPU_FIELD_SDY IPU_CPMEM_WORD(0, 98, 7)
+#define IPU_FIELD_SDRX IPU_CPMEM_WORD(0, 105, 1)
+#define IPU_FIELD_SDRY IPU_CPMEM_WORD(0, 106, 1)
+#define IPU_FIELD_BPP IPU_CPMEM_WORD(0, 107, 3)
+#define IPU_FIELD_DEC_SEL IPU_CPMEM_WORD(0, 110, 2)
+#define IPU_FIELD_DIM IPU_CPMEM_WORD(0, 112, 1)
+#define IPU_FIELD_BNDM IPU_CPMEM_WORD(0, 114, 3)
+#define IPU_FIELD_BM IPU_CPMEM_WORD(0, 117, 2)
+#define IPU_FIELD_ROT IPU_CPMEM_WORD(0, 119, 1)
+#define IPU_FIELD_ROT_HF_VF IPU_CPMEM_WORD(0, 119, 3)
+#define IPU_FIELD_HF IPU_CPMEM_WORD(0, 120, 1)
+#define IPU_FIELD_VF IPU_CPMEM_WORD(0, 121, 1)
+#define IPU_FIELD_THE IPU_CPMEM_WORD(0, 122, 1)
+#define IPU_FIELD_CAP IPU_CPMEM_WORD(0, 123, 1)
+#define IPU_FIELD_CAE IPU_CPMEM_WORD(0, 124, 1)
+#define IPU_FIELD_FW IPU_CPMEM_WORD(0, 125, 13)
+#define IPU_FIELD_FH IPU_CPMEM_WORD(0, 138, 12)
+#define IPU_FIELD_EBA0 IPU_CPMEM_WORD(1, 0, 29)
+#define IPU_FIELD_EBA1 IPU_CPMEM_WORD(1, 29, 29)
+#define IPU_FIELD_ILO IPU_CPMEM_WORD(1, 58, 20)
+#define IPU_FIELD_NPB IPU_CPMEM_WORD(1, 78, 7)
+#define IPU_FIELD_PFS IPU_CPMEM_WORD(1, 85, 4)
+#define IPU_FIELD_ALU IPU_CPMEM_WORD(1, 89, 1)
+#define IPU_FIELD_ALBM IPU_CPMEM_WORD(1, 90, 3)
+#define IPU_FIELD_ID IPU_CPMEM_WORD(1, 93, 2)
+#define IPU_FIELD_TH IPU_CPMEM_WORD(1, 95, 7)
+#define IPU_FIELD_SL IPU_CPMEM_WORD(1, 102, 14)
+#define IPU_FIELD_WID0 IPU_CPMEM_WORD(1, 116, 3)
+#define IPU_FIELD_WID1 IPU_CPMEM_WORD(1, 119, 3)
+#define IPU_FIELD_WID2 IPU_CPMEM_WORD(1, 122, 3)
+#define IPU_FIELD_WID3 IPU_CPMEM_WORD(1, 125, 3)
+#define IPU_FIELD_OFS0 IPU_CPMEM_WORD(1, 128, 5)
+#define IPU_FIELD_OFS1 IPU_CPMEM_WORD(1, 133, 5)
+#define IPU_FIELD_OFS2 IPU_CPMEM_WORD(1, 138, 5)
+#define IPU_FIELD_OFS3 IPU_CPMEM_WORD(1, 143, 5)
+#define IPU_FIELD_SXYS IPU_CPMEM_WORD(1, 148, 1)
+#define IPU_FIELD_CRE IPU_CPMEM_WORD(1, 149, 1)
+#define IPU_FIELD_DEC_SEL2 IPU_CPMEM_WORD(1, 150, 1)
+
+static inline struct ipu_ch_param __iomem *
+ipu_get_cpmem(struct ipuv3_channel *ch)
+{
+ struct ipu_cpmem *cpmem = ch->ipu->cpmem_priv;
+
+ return cpmem->base + ch->num;
+}
+
+static void ipu_ch_param_write_field(struct ipuv3_channel *ch, u32 wbs, u32 v)
+{
+ struct ipu_ch_param __iomem *base = ipu_get_cpmem(ch);
+ u32 bit = (wbs >> 8) % 160;
+ u32 size = wbs & 0xff;
+ u32 word = (wbs >> 8) / 160;
+ u32 i = bit / 32;
+ u32 ofs = bit % 32;
+ u32 mask = (1 << size) - 1;
+ u32 val;
+
+	pr_debug("%s %d %d %d\n", __func__, word, bit, size);
+
+ val = readl(&base->word[word].data[i]);
+ val &= ~(mask << ofs);
+ val |= v << ofs;
+ writel(val, &base->word[word].data[i]);
+
+ if ((bit + size - 1) / 32 > i) {
+ val = readl(&base->word[word].data[i + 1]);
+ val &= ~(mask >> (ofs ? (32 - ofs) : 0));
+ val |= v >> (ofs ? (32 - ofs) : 0);
+ writel(val, &base->word[word].data[i + 1]);
+ }
+}
+
+static u32 ipu_ch_param_read_field(struct ipuv3_channel *ch, u32 wbs)
+{
+ struct ipu_ch_param __iomem *base = ipu_get_cpmem(ch);
+ u32 bit = (wbs >> 8) % 160;
+ u32 size = wbs & 0xff;
+ u32 word = (wbs >> 8) / 160;
+ u32 i = bit / 32;
+ u32 ofs = bit % 32;
+ u32 mask = (1 << size) - 1;
+ u32 val = 0;
+
+	pr_debug("%s %d %d %d\n", __func__, word, bit, size);
+
+ val = (readl(&base->word[word].data[i]) >> ofs) & mask;
+
+ if ((bit + size - 1) / 32 > i) {
+ u32 tmp;
+
+ tmp = readl(&base->word[word].data[i + 1]);
+ tmp &= mask >> (ofs ? (32 - ofs) : 0);
+ val |= tmp << (ofs ? (32 - ofs) : 0);
+ }
+
+ return val;
+}
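+
+/*
+ * Worked example (illustration only): IPU_FIELD_UBO starts at bit 46 of
+ * word 0 with a size of 22 bits, so i = 46 / 32 = 1 and ofs = 46 % 32 = 14.
+ * Because (46 + 22 - 1) / 32 = 2 > i, the field straddles a 32-bit
+ * boundary: bits 14..31 of data[1] hold the low 18 bits and bits 0..3
+ * of data[2] hold the remaining 4 bits, which is why both helpers above
+ * need the second read/modify/write step.
+ */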
+
+/*
+ * The V4L2 spec defines packed RGB formats in memory byte order, which from
+ * the point of view of the IPU corresponds to little-endian words with the
+ * first component in the least significant bits.
+ * The DRM pixel formats and the IPU internal representation are ordered the
+ * other way around, with the first named component in the most significant
+ * bits. Further, the V4L2 formats are not well defined:
+ * http://linuxtv.org/downloads/v4l-dvb-apis/packed-rgb.html
+ * We choose the interpretation which matches GStreamer behavior.
+ */
+static int v4l2_pix_fmt_to_drm_fourcc(u32 pixelformat)
+{
+ switch (pixelformat) {
+ case V4L2_PIX_FMT_RGB565:
+ /*
+ * Here we choose the 'corrected' interpretation of RGBP, a
+ * little-endian 16-bit word with the red component at the most
+ * significant bits:
+ * g[2:0]b[4:0] r[4:0]g[5:3] <=> [16:0] R:G:B
+ */
+ return DRM_FORMAT_RGB565;
+ case V4L2_PIX_FMT_BGR24:
+ /* B G R <=> [24:0] R:G:B */
+ return DRM_FORMAT_RGB888;
+ case V4L2_PIX_FMT_RGB24:
+ /* R G B <=> [24:0] B:G:R */
+ return DRM_FORMAT_BGR888;
+ case V4L2_PIX_FMT_BGR32:
+ /* B G R A <=> [32:0] A:B:G:R */
+ return DRM_FORMAT_XRGB8888;
+ case V4L2_PIX_FMT_RGB32:
+ /* R G B A <=> [32:0] A:B:G:R */
+ return DRM_FORMAT_XBGR8888;
+ case V4L2_PIX_FMT_UYVY:
+ return DRM_FORMAT_UYVY;
+ case V4L2_PIX_FMT_YUYV:
+ return DRM_FORMAT_YUYV;
+ case V4L2_PIX_FMT_YUV420:
+ return DRM_FORMAT_YUV420;
+ case V4L2_PIX_FMT_YUV422P:
+ return DRM_FORMAT_YUV422;
+ case V4L2_PIX_FMT_YVU420:
+ return DRM_FORMAT_YVU420;
+ case V4L2_PIX_FMT_NV12:
+ return DRM_FORMAT_NV12;
+ case V4L2_PIX_FMT_NV16:
+ return DRM_FORMAT_NV16;
+ }
+
+ return -EINVAL;
+}
+
+void ipu_cpmem_zero(struct ipuv3_channel *ch)
+{
+ struct ipu_ch_param __iomem *p = ipu_get_cpmem(ch);
+ void __iomem *base = p;
+ int i;
+
+ for (i = 0; i < sizeof(*p) / sizeof(u32); i++)
+ writel(0, base + i * sizeof(u32));
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_zero);
+
+void ipu_cpmem_set_resolution(struct ipuv3_channel *ch, int xres, int yres)
+{
+ ipu_ch_param_write_field(ch, IPU_FIELD_FW, xres - 1);
+ ipu_ch_param_write_field(ch, IPU_FIELD_FH, yres - 1);
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_resolution);
+
+void ipu_cpmem_set_stride(struct ipuv3_channel *ch, int stride)
+{
+ ipu_ch_param_write_field(ch, IPU_FIELD_SLY, stride - 1);
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_stride);
+
+void ipu_cpmem_set_high_priority(struct ipuv3_channel *ch)
+{
+ struct ipu_soc *ipu = ch->ipu;
+ u32 val;
+
+ if (ipu->ipu_type == IPUV3EX)
+ ipu_ch_param_write_field(ch, IPU_FIELD_ID, 1);
+
+ val = ipu_idmac_read(ipu, IDMAC_CHA_PRI(ch->num));
+ val |= 1 << (ch->num % 32);
+ ipu_idmac_write(ipu, val, IDMAC_CHA_PRI(ch->num));
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_high_priority);
+
+void ipu_cpmem_set_buffer(struct ipuv3_channel *ch, int bufnum, dma_addr_t buf)
+{
+ if (bufnum)
+ ipu_ch_param_write_field(ch, IPU_FIELD_EBA1, buf >> 3);
+ else
+ ipu_ch_param_write_field(ch, IPU_FIELD_EBA0, buf >> 3);
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_buffer);
+
+void ipu_cpmem_interlaced_scan(struct ipuv3_channel *ch, int stride)
+{
+ ipu_ch_param_write_field(ch, IPU_FIELD_SO, 1);
+ ipu_ch_param_write_field(ch, IPU_FIELD_ILO, stride / 8);
+ ipu_ch_param_write_field(ch, IPU_FIELD_SLY, (stride * 2) - 1);
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_interlaced_scan);
+
+void ipu_cpmem_set_axi_id(struct ipuv3_channel *ch, u32 id)
+{
+ id &= 0x3;
+ ipu_ch_param_write_field(ch, IPU_FIELD_ID, id);
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_axi_id);
+
+void ipu_cpmem_set_burstsize(struct ipuv3_channel *ch, int burstsize)
+{
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, burstsize - 1);
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_burstsize);
+
+void ipu_cpmem_set_block_mode(struct ipuv3_channel *ch)
+{
+ ipu_ch_param_write_field(ch, IPU_FIELD_BM, 1);
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_block_mode);
+
+void ipu_cpmem_set_rotation(struct ipuv3_channel *ch,
+ enum ipu_rotate_mode rot)
+{
+ u32 temp_rot = bitrev8(rot) >> 5;
+
+ ipu_ch_param_write_field(ch, IPU_FIELD_ROT_HF_VF, temp_rot);
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_rotation);
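+/*
+ * The CPMEM ROT/HF/VF field stores the three bits in the opposite order
+ * from enum ipu_rotate_mode, hence the 8-bit reversal followed by the
+ * shift that keeps only 3 bits. Worked example (illustration only):
+ * rot = IPU_ROTATE_90_RIGHT = 4 = 0b100 gives bitrev8(4) = 0b00100000
+ * and 0b00100000 >> 5 = 0b001.
+ */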
+
+int ipu_cpmem_set_format_rgb(struct ipuv3_channel *ch,
+ const struct ipu_rgb *rgb)
+{
+ int bpp = 0, npb = 0, ro, go, bo, to;
+
+ ro = rgb->bits_per_pixel - rgb->red.length - rgb->red.offset;
+ go = rgb->bits_per_pixel - rgb->green.length - rgb->green.offset;
+ bo = rgb->bits_per_pixel - rgb->blue.length - rgb->blue.offset;
+ to = rgb->bits_per_pixel - rgb->transp.length - rgb->transp.offset;
+
+ ipu_ch_param_write_field(ch, IPU_FIELD_WID0, rgb->red.length - 1);
+ ipu_ch_param_write_field(ch, IPU_FIELD_OFS0, ro);
+ ipu_ch_param_write_field(ch, IPU_FIELD_WID1, rgb->green.length - 1);
+ ipu_ch_param_write_field(ch, IPU_FIELD_OFS1, go);
+ ipu_ch_param_write_field(ch, IPU_FIELD_WID2, rgb->blue.length - 1);
+ ipu_ch_param_write_field(ch, IPU_FIELD_OFS2, bo);
+
+ if (rgb->transp.length) {
+ ipu_ch_param_write_field(ch, IPU_FIELD_WID3,
+ rgb->transp.length - 1);
+ ipu_ch_param_write_field(ch, IPU_FIELD_OFS3, to);
+ } else {
+ ipu_ch_param_write_field(ch, IPU_FIELD_WID3, 7);
+ ipu_ch_param_write_field(ch, IPU_FIELD_OFS3,
+ rgb->bits_per_pixel);
+ }
+
+ switch (rgb->bits_per_pixel) {
+ case 32:
+ bpp = 0;
+ npb = 15;
+ break;
+ case 24:
+ bpp = 1;
+ npb = 19;
+ break;
+ case 16:
+ bpp = 3;
+ npb = 31;
+ break;
+ case 8:
+ bpp = 5;
+ npb = 63;
+ break;
+ default:
+ return -EINVAL;
+ }
+ ipu_ch_param_write_field(ch, IPU_FIELD_BPP, bpp);
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, npb);
+ ipu_ch_param_write_field(ch, IPU_FIELD_PFS, 7); /* rgb mode */
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_format_rgb);
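+/*
+ * Worked example (illustration only): for the RGB565 layout of
+ * def_rgb_16 below (red at offset 11, length 5, 16 bits per pixel) the
+ * code above computes ro = 16 - 5 - 11 = 0; the CPMEM offsets count
+ * down from the most significant bit, while the struct ipu_rgb offsets
+ * count up from the least significant bit.
+ */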
+
+int ipu_cpmem_set_format_passthrough(struct ipuv3_channel *ch, int width)
+{
+ int bpp = 0, npb = 0;
+
+ switch (width) {
+ case 32:
+ bpp = 0;
+ npb = 15;
+ break;
+ case 24:
+ bpp = 1;
+ npb = 19;
+ break;
+ case 16:
+ bpp = 3;
+ npb = 31;
+ break;
+ case 8:
+ bpp = 5;
+ npb = 63;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ ipu_ch_param_write_field(ch, IPU_FIELD_BPP, bpp);
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, npb);
+ ipu_ch_param_write_field(ch, IPU_FIELD_PFS, 6); /* raw mode */
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_format_passthrough);
+
+void ipu_cpmem_set_yuv_interleaved(struct ipuv3_channel *ch, u32 pixel_format)
+{
+ switch (pixel_format) {
+ case V4L2_PIX_FMT_UYVY:
+ ipu_ch_param_write_field(ch, IPU_FIELD_BPP, 3); /* bits/pixel */
+ ipu_ch_param_write_field(ch, IPU_FIELD_PFS, 0xA);/* pix fmt */
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, 31);/* burst size */
+ break;
+ case V4L2_PIX_FMT_YUYV:
+ ipu_ch_param_write_field(ch, IPU_FIELD_BPP, 3); /* bits/pixel */
+ ipu_ch_param_write_field(ch, IPU_FIELD_PFS, 0x8);/* pix fmt */
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, 31);/* burst size */
+ break;
+ }
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_yuv_interleaved);
+
+void ipu_cpmem_set_yuv_planar_full(struct ipuv3_channel *ch,
+ u32 pixel_format, int stride,
+ int u_offset, int v_offset)
+{
+ switch (pixel_format) {
+ case V4L2_PIX_FMT_YUV420:
+ case V4L2_PIX_FMT_YUV422P:
+ ipu_ch_param_write_field(ch, IPU_FIELD_SLUV, (stride / 2) - 1);
+ ipu_ch_param_write_field(ch, IPU_FIELD_UBO, u_offset / 8);
+ ipu_ch_param_write_field(ch, IPU_FIELD_VBO, v_offset / 8);
+ break;
+ case V4L2_PIX_FMT_YVU420:
+ ipu_ch_param_write_field(ch, IPU_FIELD_SLUV, (stride / 2) - 1);
+ ipu_ch_param_write_field(ch, IPU_FIELD_UBO, v_offset / 8);
+ ipu_ch_param_write_field(ch, IPU_FIELD_VBO, u_offset / 8);
+ break;
+ case V4L2_PIX_FMT_NV12:
+ case V4L2_PIX_FMT_NV16:
+ ipu_ch_param_write_field(ch, IPU_FIELD_SLUV, stride - 1);
+ ipu_ch_param_write_field(ch, IPU_FIELD_UBO, u_offset / 8);
+ ipu_ch_param_write_field(ch, IPU_FIELD_VBO, u_offset / 8);
+ break;
+ }
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_yuv_planar_full);
+
+void ipu_cpmem_set_yuv_planar(struct ipuv3_channel *ch,
+ u32 pixel_format, int stride, int height)
+{
+ int u_offset, v_offset;
+ int uv_stride = 0;
+
+ switch (pixel_format) {
+ case V4L2_PIX_FMT_YUV420:
+ case V4L2_PIX_FMT_YVU420:
+ uv_stride = stride / 2;
+ u_offset = stride * height;
+ v_offset = u_offset + (uv_stride * height / 2);
+ ipu_cpmem_set_yuv_planar_full(ch, pixel_format, stride,
+ u_offset, v_offset);
+ break;
+ case V4L2_PIX_FMT_YUV422P:
+ uv_stride = stride / 2;
+ u_offset = stride * height;
+ v_offset = u_offset + (uv_stride * height);
+ ipu_cpmem_set_yuv_planar_full(ch, pixel_format, stride,
+ u_offset, v_offset);
+ break;
+ case V4L2_PIX_FMT_NV12:
+ case V4L2_PIX_FMT_NV16:
+ u_offset = stride * height;
+ ipu_cpmem_set_yuv_planar_full(ch, pixel_format, stride,
+ u_offset, 0);
+ break;
+ }
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_yuv_planar);
+
+static const struct ipu_rgb def_rgb_32 = {
+ .red = { .offset = 16, .length = 8, },
+ .green = { .offset = 8, .length = 8, },
+ .blue = { .offset = 0, .length = 8, },
+ .transp = { .offset = 24, .length = 8, },
+ .bits_per_pixel = 32,
+};
+
+static const struct ipu_rgb def_bgr_32 = {
+ .red = { .offset = 0, .length = 8, },
+ .green = { .offset = 8, .length = 8, },
+ .blue = { .offset = 16, .length = 8, },
+ .transp = { .offset = 24, .length = 8, },
+ .bits_per_pixel = 32,
+};
+
+static const struct ipu_rgb def_rgb_24 = {
+ .red = { .offset = 16, .length = 8, },
+ .green = { .offset = 8, .length = 8, },
+ .blue = { .offset = 0, .length = 8, },
+ .transp = { .offset = 0, .length = 0, },
+ .bits_per_pixel = 24,
+};
+
+static const struct ipu_rgb def_bgr_24 = {
+ .red = { .offset = 0, .length = 8, },
+ .green = { .offset = 8, .length = 8, },
+ .blue = { .offset = 16, .length = 8, },
+ .transp = { .offset = 0, .length = 0, },
+ .bits_per_pixel = 24,
+};
+
+static const struct ipu_rgb def_rgb_16 = {
+ .red = { .offset = 11, .length = 5, },
+ .green = { .offset = 5, .length = 6, },
+ .blue = { .offset = 0, .length = 5, },
+ .transp = { .offset = 0, .length = 0, },
+ .bits_per_pixel = 16,
+};
+
+static const struct ipu_rgb def_bgr_16 = {
+ .red = { .offset = 0, .length = 5, },
+ .green = { .offset = 5, .length = 6, },
+ .blue = { .offset = 11, .length = 5, },
+ .transp = { .offset = 0, .length = 0, },
+ .bits_per_pixel = 16,
+};
+
+#define Y_OFFSET(pix, x, y) ((x) + pix->width * (y))
+#define U_OFFSET(pix, x, y) ((pix->width * pix->height) + \
+ (pix->width * (y) / 4) + (x) / 2)
+#define V_OFFSET(pix, x, y) ((pix->width * pix->height) + \
+ (pix->width * pix->height / 4) + \
+ (pix->width * (y) / 4) + (x) / 2)
+#define U2_OFFSET(pix, x, y) ((pix->width * pix->height) + \
+ (pix->width * (y) / 2) + (x) / 2)
+#define V2_OFFSET(pix, x, y) ((pix->width * pix->height) + \
+ (pix->width * pix->height / 2) + \
+ (pix->width * (y) / 2) + (x) / 2)
+#define UV_OFFSET(pix, x, y) ((pix->width * pix->height) + \
+ (pix->width * (y) / 2) + (x))
+#define UV2_OFFSET(pix, x, y) ((pix->width * pix->height) + \
+					 (pix->width * (y)) + (x))
+
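+/*
+ * Worked example (illustration only): for a 640x480 YUV420 buffer with
+ * a crop rectangle starting at (16, 8), Y_OFFSET is 16 + 640 * 8 = 5136
+ * and U_OFFSET is 640 * 480 + (640 * 8) / 4 + 16 / 2 = 308488,
+ * reflecting the 2x2 chroma subsampling of the planar 4:2:0 layout.
+ */
+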
+int ipu_cpmem_set_fmt(struct ipuv3_channel *ch, u32 drm_fourcc)
+{
+ switch (drm_fourcc) {
+ case DRM_FORMAT_YUV420:
+ case DRM_FORMAT_YVU420:
+ /* pix format */
+ ipu_ch_param_write_field(ch, IPU_FIELD_PFS, 2);
+ /* burst size */
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, 31);
+ break;
+ case DRM_FORMAT_YUV422:
+ case DRM_FORMAT_YVU422:
+ /* pix format */
+ ipu_ch_param_write_field(ch, IPU_FIELD_PFS, 1);
+ /* burst size */
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, 31);
+ break;
+ case DRM_FORMAT_NV12:
+ /* pix format */
+ ipu_ch_param_write_field(ch, IPU_FIELD_PFS, 4);
+ /* burst size */
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, 31);
+ break;
+ case DRM_FORMAT_NV16:
+ /* pix format */
+ ipu_ch_param_write_field(ch, IPU_FIELD_PFS, 3);
+ /* burst size */
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, 31);
+ break;
+ case DRM_FORMAT_UYVY:
+ /* bits/pixel */
+ ipu_ch_param_write_field(ch, IPU_FIELD_BPP, 3);
+ /* pix format */
+ ipu_ch_param_write_field(ch, IPU_FIELD_PFS, 0xA);
+ /* burst size */
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, 31);
+ break;
+ case DRM_FORMAT_YUYV:
+ /* bits/pixel */
+ ipu_ch_param_write_field(ch, IPU_FIELD_BPP, 3);
+ /* pix format */
+ ipu_ch_param_write_field(ch, IPU_FIELD_PFS, 0x8);
+ /* burst size */
+ ipu_ch_param_write_field(ch, IPU_FIELD_NPB, 31);
+ break;
+ case DRM_FORMAT_ABGR8888:
+ case DRM_FORMAT_XBGR8888:
+ ipu_cpmem_set_format_rgb(ch, &def_bgr_32);
+ break;
+ case DRM_FORMAT_ARGB8888:
+ case DRM_FORMAT_XRGB8888:
+ ipu_cpmem_set_format_rgb(ch, &def_rgb_32);
+ break;
+ case DRM_FORMAT_BGR888:
+ ipu_cpmem_set_format_rgb(ch, &def_bgr_24);
+ break;
+ case DRM_FORMAT_RGB888:
+ ipu_cpmem_set_format_rgb(ch, &def_rgb_24);
+ break;
+ case DRM_FORMAT_RGB565:
+ ipu_cpmem_set_format_rgb(ch, &def_rgb_16);
+ break;
+ case DRM_FORMAT_BGR565:
+ ipu_cpmem_set_format_rgb(ch, &def_bgr_16);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_fmt);
+
+int ipu_cpmem_set_image(struct ipuv3_channel *ch, struct ipu_image *image)
+{
+ struct v4l2_pix_format *pix = &image->pix;
+ int offset, u_offset, v_offset;
+
+ pr_debug("%s: resolution: %dx%d stride: %d\n",
+ __func__, pix->width, pix->height,
+ pix->bytesperline);
+
+ ipu_cpmem_set_resolution(ch, image->rect.width, image->rect.height);
+ ipu_cpmem_set_stride(ch, pix->bytesperline);
+
+ ipu_cpmem_set_fmt(ch, v4l2_pix_fmt_to_drm_fourcc(pix->pixelformat));
+
+ switch (pix->pixelformat) {
+ case V4L2_PIX_FMT_YUV420:
+ case V4L2_PIX_FMT_YVU420:
+ offset = Y_OFFSET(pix, image->rect.left, image->rect.top);
+ u_offset = U_OFFSET(pix, image->rect.left,
+ image->rect.top) - offset;
+ v_offset = V_OFFSET(pix, image->rect.left,
+ image->rect.top) - offset;
+
+ ipu_cpmem_set_yuv_planar_full(ch, pix->pixelformat,
+ pix->bytesperline,
+ u_offset, v_offset);
+ break;
+ case V4L2_PIX_FMT_YUV422P:
+ offset = Y_OFFSET(pix, image->rect.left, image->rect.top);
+ u_offset = U2_OFFSET(pix, image->rect.left,
+ image->rect.top) - offset;
+ v_offset = V2_OFFSET(pix, image->rect.left,
+ image->rect.top) - offset;
+
+ ipu_cpmem_set_yuv_planar_full(ch, pix->pixelformat,
+ pix->bytesperline,
+ u_offset, v_offset);
+ break;
+ case V4L2_PIX_FMT_NV12:
+ offset = Y_OFFSET(pix, image->rect.left, image->rect.top);
+ u_offset = UV_OFFSET(pix, image->rect.left,
+ image->rect.top) - offset;
+ v_offset = 0;
+
+ ipu_cpmem_set_yuv_planar_full(ch, pix->pixelformat,
+ pix->bytesperline,
+ u_offset, v_offset);
+ break;
+ case V4L2_PIX_FMT_NV16:
+ offset = Y_OFFSET(pix, image->rect.left, image->rect.top);
+ u_offset = UV2_OFFSET(pix, image->rect.left,
+ image->rect.top) - offset;
+ v_offset = 0;
+
+ ipu_cpmem_set_yuv_planar_full(ch, pix->pixelformat,
+ pix->bytesperline,
+ u_offset, v_offset);
+ break;
+ case V4L2_PIX_FMT_UYVY:
+ case V4L2_PIX_FMT_YUYV:
+ case V4L2_PIX_FMT_RGB565:
+ offset = image->rect.left * 2 +
+ image->rect.top * pix->bytesperline;
+ break;
+ case V4L2_PIX_FMT_RGB32:
+ case V4L2_PIX_FMT_BGR32:
+ offset = image->rect.left * 4 +
+ image->rect.top * pix->bytesperline;
+ break;
+ case V4L2_PIX_FMT_RGB24:
+ case V4L2_PIX_FMT_BGR24:
+ offset = image->rect.left * 3 +
+ image->rect.top * pix->bytesperline;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ ipu_cpmem_set_buffer(ch, 0, image->phys0 + offset);
+ ipu_cpmem_set_buffer(ch, 1, image->phys1 + offset);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_set_image);
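+/*
+ * Usage sketch (illustration only; buf0_dma/buf1_dma are hypothetical
+ * DMA addresses of two previously allocated buffers): program a channel
+ * for full-frame YUV420 at 640x480:
+ *
+ *	struct ipu_image image;
+ *
+ *	memset(&image, 0, sizeof(image));
+ *	image.pix.width = 640;
+ *	image.pix.height = 480;
+ *	image.pix.bytesperline = 640;
+ *	image.pix.pixelformat = V4L2_PIX_FMT_YUV420;
+ *	image.rect.width = 640;
+ *	image.rect.height = 480;
+ *	image.phys0 = buf0_dma;
+ *	image.phys1 = buf1_dma;
+ *	ret = ipu_cpmem_set_image(ch, &image);
+ */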
+
+void ipu_cpmem_dump(struct ipuv3_channel *ch)
+{
+ struct ipu_ch_param __iomem *p = ipu_get_cpmem(ch);
+ struct ipu_soc *ipu = ch->ipu;
+ int chno = ch->num;
+
+ dev_dbg(ipu->dev, "ch %d word 0 - %08X %08X %08X %08X %08X\n", chno,
+ readl(&p->word[0].data[0]),
+ readl(&p->word[0].data[1]),
+ readl(&p->word[0].data[2]),
+ readl(&p->word[0].data[3]),
+ readl(&p->word[0].data[4]));
+ dev_dbg(ipu->dev, "ch %d word 1 - %08X %08X %08X %08X %08X\n", chno,
+ readl(&p->word[1].data[0]),
+ readl(&p->word[1].data[1]),
+ readl(&p->word[1].data[2]),
+ readl(&p->word[1].data[3]),
+ readl(&p->word[1].data[4]));
+ dev_dbg(ipu->dev, "PFS 0x%x, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_PFS));
+ dev_dbg(ipu->dev, "BPP 0x%x, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_BPP));
+ dev_dbg(ipu->dev, "NPB 0x%x\n",
+ ipu_ch_param_read_field(ch, IPU_FIELD_NPB));
+
+ dev_dbg(ipu->dev, "FW %d, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_FW));
+ dev_dbg(ipu->dev, "FH %d, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_FH));
+ dev_dbg(ipu->dev, "EBA0 0x%x\n",
+ ipu_ch_param_read_field(ch, IPU_FIELD_EBA0) << 3);
+ dev_dbg(ipu->dev, "EBA1 0x%x\n",
+ ipu_ch_param_read_field(ch, IPU_FIELD_EBA1) << 3);
+ dev_dbg(ipu->dev, "Stride %d\n",
+ ipu_ch_param_read_field(ch, IPU_FIELD_SL));
+ dev_dbg(ipu->dev, "scan_order %d\n",
+ ipu_ch_param_read_field(ch, IPU_FIELD_SO));
+ dev_dbg(ipu->dev, "uv_stride %d\n",
+ ipu_ch_param_read_field(ch, IPU_FIELD_SLUV));
+ dev_dbg(ipu->dev, "u_offset 0x%x\n",
+ ipu_ch_param_read_field(ch, IPU_FIELD_UBO) << 3);
+ dev_dbg(ipu->dev, "v_offset 0x%x\n",
+ ipu_ch_param_read_field(ch, IPU_FIELD_VBO) << 3);
+
+ dev_dbg(ipu->dev, "Width0 %d+1, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_WID0));
+ dev_dbg(ipu->dev, "Width1 %d+1, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_WID1));
+ dev_dbg(ipu->dev, "Width2 %d+1, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_WID2));
+ dev_dbg(ipu->dev, "Width3 %d+1, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_WID3));
+ dev_dbg(ipu->dev, "Offset0 %d, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_OFS0));
+ dev_dbg(ipu->dev, "Offset1 %d, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_OFS1));
+ dev_dbg(ipu->dev, "Offset2 %d, ",
+ ipu_ch_param_read_field(ch, IPU_FIELD_OFS2));
+ dev_dbg(ipu->dev, "Offset3 %d\n",
+ ipu_ch_param_read_field(ch, IPU_FIELD_OFS3));
+}
+EXPORT_SYMBOL_GPL(ipu_cpmem_dump);
+
+int ipu_cpmem_init(struct ipu_soc *ipu, struct device *dev, unsigned long base)
+{
+ struct ipu_cpmem *cpmem;
+
+ cpmem = devm_kzalloc(dev, sizeof(*cpmem), GFP_KERNEL);
+ if (!cpmem)
+ return -ENOMEM;
+
+ ipu->cpmem_priv = cpmem;
+
+ spin_lock_init(&cpmem->lock);
+ cpmem->base = devm_ioremap(dev, base, SZ_128K);
+ if (!cpmem->base)
+ return -ENOMEM;
+
+ dev_dbg(dev, "CPMEM base: 0x%08lx remapped to %p\n",
+ base, cpmem->base);
+ cpmem->ipu = ipu;
+
+ return 0;
+}
+
+void ipu_cpmem_exit(struct ipu_soc *ipu)
+{
+}
--- /dev/null
+/*
+ * Copyright (C) 2012-2014 Mentor Graphics Inc.
+ * Copyright (C) 2005-2009 Freescale Semiconductor, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ */
+#include <linux/export.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/err.h>
+#include <linux/platform_device.h>
+#include <linux/videodev2.h>
+#include <uapi/linux/v4l2-mediabus.h>
+#include <linux/clk.h>
+#include <linux/clk-provider.h>
+#include <linux/clkdev.h>
+
+#include "ipu-prv.h"
+
+struct ipu_csi {
+ void __iomem *base;
+ int id;
+ u32 module;
+ struct clk *clk_ipu; /* IPU bus clock */
+ spinlock_t lock;
+ bool inuse;
+ struct ipu_soc *ipu;
+};
+
+/* CSI Register Offsets */
+#define CSI_SENS_CONF 0x0000
+#define CSI_SENS_FRM_SIZE 0x0004
+#define CSI_ACT_FRM_SIZE 0x0008
+#define CSI_OUT_FRM_CTRL 0x000c
+#define CSI_TST_CTRL 0x0010
+#define CSI_CCIR_CODE_1 0x0014
+#define CSI_CCIR_CODE_2 0x0018
+#define CSI_CCIR_CODE_3 0x001c
+#define CSI_MIPI_DI 0x0020
+#define CSI_SKIP 0x0024
+#define CSI_CPD_CTRL 0x0028
+#define CSI_CPD_RC(n) (0x002c + ((n)*4))
+#define CSI_CPD_RS(n) (0x004c + ((n)*4))
+#define CSI_CPD_GRC(n) (0x005c + ((n)*4))
+#define CSI_CPD_GRS(n) (0x007c + ((n)*4))
+#define CSI_CPD_GBC(n) (0x008c + ((n)*4))
+#define CSI_CPD_GBS(n)		(0x00ac + ((n)*4))
+#define CSI_CPD_BC(n)		(0x00bc + ((n)*4))
+#define CSI_CPD_BS(n)		(0x00dc + ((n)*4))
+#define CSI_CPD_OFFSET1 0x00ec
+#define CSI_CPD_OFFSET2 0x00f0
+
+/* CSI Register Fields */
+#define CSI_SENS_CONF_DATA_FMT_SHIFT 8
+#define CSI_SENS_CONF_DATA_FMT_MASK 0x00000700
+#define CSI_SENS_CONF_DATA_FMT_RGB_YUV444 0L
+#define CSI_SENS_CONF_DATA_FMT_YUV422_YUYV 1L
+#define CSI_SENS_CONF_DATA_FMT_YUV422_UYVY 2L
+#define CSI_SENS_CONF_DATA_FMT_BAYER 3L
+#define CSI_SENS_CONF_DATA_FMT_RGB565 4L
+#define CSI_SENS_CONF_DATA_FMT_RGB555 5L
+#define CSI_SENS_CONF_DATA_FMT_RGB444 6L
+#define CSI_SENS_CONF_DATA_FMT_JPEG 7L
+
+#define CSI_SENS_CONF_VSYNC_POL_SHIFT 0
+#define CSI_SENS_CONF_HSYNC_POL_SHIFT 1
+#define CSI_SENS_CONF_DATA_POL_SHIFT 2
+#define CSI_SENS_CONF_PIX_CLK_POL_SHIFT 3
+#define CSI_SENS_CONF_SENS_PRTCL_MASK 0x00000070
+#define CSI_SENS_CONF_SENS_PRTCL_SHIFT 4
+#define CSI_SENS_CONF_PACK_TIGHT_SHIFT 7
+#define CSI_SENS_CONF_DATA_WIDTH_SHIFT 11
+#define CSI_SENS_CONF_EXT_VSYNC_SHIFT 15
+#define CSI_SENS_CONF_DIVRATIO_SHIFT 16
+
+#define CSI_SENS_CONF_DIVRATIO_MASK 0x00ff0000
+#define CSI_SENS_CONF_DATA_DEST_SHIFT 24
+#define CSI_SENS_CONF_DATA_DEST_MASK 0x07000000
+#define CSI_SENS_CONF_JPEG8_EN_SHIFT 27
+#define CSI_SENS_CONF_JPEG_EN_SHIFT 28
+#define CSI_SENS_CONF_FORCE_EOF_SHIFT 29
+#define CSI_SENS_CONF_DATA_EN_POL_SHIFT 31
+
+#define CSI_DATA_DEST_IC 2
+#define CSI_DATA_DEST_IDMAC 4
+
+#define CSI_CCIR_ERR_DET_EN 0x01000000
+#define CSI_HORI_DOWNSIZE_EN 0x80000000
+#define CSI_VERT_DOWNSIZE_EN 0x40000000
+#define CSI_TEST_GEN_MODE_EN 0x01000000
+
+#define CSI_HSC_MASK 0x1fff0000
+#define CSI_HSC_SHIFT 16
+#define CSI_VSC_MASK 0x00000fff
+#define CSI_VSC_SHIFT 0
+
+#define CSI_TEST_GEN_R_MASK 0x000000ff
+#define CSI_TEST_GEN_R_SHIFT 0
+#define CSI_TEST_GEN_G_MASK 0x0000ff00
+#define CSI_TEST_GEN_G_SHIFT 8
+#define CSI_TEST_GEN_B_MASK 0x00ff0000
+#define CSI_TEST_GEN_B_SHIFT 16
+
+#define CSI_MAX_RATIO_SKIP_SMFC_MASK 0x00000007
+#define CSI_MAX_RATIO_SKIP_SMFC_SHIFT 0
+#define CSI_SKIP_SMFC_MASK 0x000000f8
+#define CSI_SKIP_SMFC_SHIFT 3
+#define CSI_ID_2_SKIP_MASK 0x00000300
+#define CSI_ID_2_SKIP_SHIFT 8
+
+#define CSI_COLOR_FIRST_ROW_MASK 0x00000002
+#define CSI_COLOR_FIRST_COMP_MASK 0x00000001
+
+/* MIPI CSI-2 data types */
+#define MIPI_DT_YUV420 0x18 /* YYY.../UYVY.... */
+#define MIPI_DT_YUV420_LEGACY 0x1a /* UYY.../VYY... */
+#define MIPI_DT_YUV422 0x1e /* UYVY... */
+#define MIPI_DT_RGB444 0x20
+#define MIPI_DT_RGB555 0x21
+#define MIPI_DT_RGB565 0x22
+#define MIPI_DT_RGB666 0x23
+#define MIPI_DT_RGB888 0x24
+#define MIPI_DT_RAW6 0x28
+#define MIPI_DT_RAW7 0x29
+#define MIPI_DT_RAW8 0x2a
+#define MIPI_DT_RAW10 0x2b
+#define MIPI_DT_RAW12 0x2c
+#define MIPI_DT_RAW14 0x2d
+
+/*
+ * Bitfield of CSI bus signal polarities and modes.
+ */
+struct ipu_csi_bus_config {
+ unsigned data_width:4;
+ unsigned clk_mode:3;
+ unsigned ext_vsync:1;
+ unsigned vsync_pol:1;
+ unsigned hsync_pol:1;
+ unsigned pixclk_pol:1;
+ unsigned data_pol:1;
+ unsigned sens_clksrc:1;
+ unsigned pack_tight:1;
+ unsigned force_eof:1;
+ unsigned data_en_pol:1;
+
+ unsigned data_fmt;
+ unsigned mipi_dt;
+};
+
+/*
+ * Enumeration of CSI data bus widths.
+ */
+enum ipu_csi_data_width {
+ IPU_CSI_DATA_WIDTH_4 = 0,
+ IPU_CSI_DATA_WIDTH_8 = 1,
+ IPU_CSI_DATA_WIDTH_10 = 3,
+ IPU_CSI_DATA_WIDTH_12 = 5,
+ IPU_CSI_DATA_WIDTH_16 = 9,
+};
+
+/*
+ * Enumeration of CSI clock modes.
+ */
+enum ipu_csi_clk_mode {
+ IPU_CSI_CLK_MODE_GATED_CLK,
+ IPU_CSI_CLK_MODE_NONGATED_CLK,
+ IPU_CSI_CLK_MODE_CCIR656_PROGRESSIVE,
+ IPU_CSI_CLK_MODE_CCIR656_INTERLACED,
+ IPU_CSI_CLK_MODE_CCIR1120_PROGRESSIVE_DDR,
+ IPU_CSI_CLK_MODE_CCIR1120_PROGRESSIVE_SDR,
+ IPU_CSI_CLK_MODE_CCIR1120_INTERLACED_DDR,
+ IPU_CSI_CLK_MODE_CCIR1120_INTERLACED_SDR,
+};
+
+static inline u32 ipu_csi_read(struct ipu_csi *csi, unsigned offset)
+{
+ return readl(csi->base + offset);
+}
+
+static inline void ipu_csi_write(struct ipu_csi *csi, u32 value,
+ unsigned offset)
+{
+ writel(value, csi->base + offset);
+}
+
+/*
+ * Set mclk division ratio for generating test mode mclk. Only used
+ * for test generator.
+ */
+static int ipu_csi_set_testgen_mclk(struct ipu_csi *csi, u32 pixel_clk,
+ u32 ipu_clk)
+{
+ u32 temp;
+ u32 div_ratio;
+
+	if (!pixel_clk)
+		return -EINVAL;
+
+	div_ratio = (ipu_clk / pixel_clk) - 1;
+
+	if (div_ratio > 0xFF) {
+		dev_err(csi->ipu->dev,
+			"value of pixel_clk exceeds the supported range\n");
+		return -EINVAL;
+	}
+
+ temp = ipu_csi_read(csi, CSI_SENS_CONF);
+ temp &= ~CSI_SENS_CONF_DIVRATIO_MASK;
+ ipu_csi_write(csi, temp | (div_ratio << CSI_SENS_CONF_DIVRATIO_SHIFT),
+ CSI_SENS_CONF);
+
+ return 0;
+}
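+
+/*
+ * Worked example (illustration only): with a 264 MHz IPU clock and a
+ * requested 24 MHz test generator pixel clock, div_ratio becomes
+ * 264000000 / 24000000 - 1 = 10, which fits the 8-bit DIVRATIO field.
+ */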
+
+/*
+ * Find the CSI data format and data width for the given V4L2 media
+ * bus pixel format code.
+ */
+static int mbus_code_to_bus_cfg(struct ipu_csi_bus_config *cfg, u32 mbus_code)
+{
+ switch (mbus_code) {
+ case V4L2_MBUS_FMT_BGR565_2X8_BE:
+ case V4L2_MBUS_FMT_BGR565_2X8_LE:
+ case V4L2_MBUS_FMT_RGB565_2X8_BE:
+ case V4L2_MBUS_FMT_RGB565_2X8_LE:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_RGB565;
+ cfg->mipi_dt = MIPI_DT_RGB565;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_8;
+ break;
+ case V4L2_MBUS_FMT_RGB444_2X8_PADHI_BE:
+ case V4L2_MBUS_FMT_RGB444_2X8_PADHI_LE:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_RGB444;
+ cfg->mipi_dt = MIPI_DT_RGB444;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_8;
+ break;
+ case V4L2_MBUS_FMT_RGB555_2X8_PADHI_BE:
+ case V4L2_MBUS_FMT_RGB555_2X8_PADHI_LE:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_RGB555;
+ cfg->mipi_dt = MIPI_DT_RGB555;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_8;
+ break;
+ case V4L2_MBUS_FMT_UYVY8_2X8:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_YUV422_UYVY;
+ cfg->mipi_dt = MIPI_DT_YUV422;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_8;
+ break;
+ case V4L2_MBUS_FMT_YUYV8_2X8:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_YUV422_YUYV;
+ cfg->mipi_dt = MIPI_DT_YUV422;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_8;
+ break;
+ case V4L2_MBUS_FMT_UYVY8_1X16:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_YUV422_UYVY;
+ cfg->mipi_dt = MIPI_DT_YUV422;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_16;
+ break;
+ case V4L2_MBUS_FMT_YUYV8_1X16:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_YUV422_YUYV;
+ cfg->mipi_dt = MIPI_DT_YUV422;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_16;
+ break;
+ case V4L2_MBUS_FMT_SBGGR8_1X8:
+ case V4L2_MBUS_FMT_SGBRG8_1X8:
+ case V4L2_MBUS_FMT_SGRBG8_1X8:
+ case V4L2_MBUS_FMT_SRGGB8_1X8:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_BAYER;
+ cfg->mipi_dt = MIPI_DT_RAW8;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_8;
+ break;
+ case V4L2_MBUS_FMT_SBGGR10_DPCM8_1X8:
+ case V4L2_MBUS_FMT_SGBRG10_DPCM8_1X8:
+ case V4L2_MBUS_FMT_SGRBG10_DPCM8_1X8:
+ case V4L2_MBUS_FMT_SRGGB10_DPCM8_1X8:
+ case V4L2_MBUS_FMT_SBGGR10_2X8_PADHI_BE:
+ case V4L2_MBUS_FMT_SBGGR10_2X8_PADHI_LE:
+ case V4L2_MBUS_FMT_SBGGR10_2X8_PADLO_BE:
+ case V4L2_MBUS_FMT_SBGGR10_2X8_PADLO_LE:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_BAYER;
+ cfg->mipi_dt = MIPI_DT_RAW10;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_8;
+ break;
+ case V4L2_MBUS_FMT_SBGGR10_1X10:
+ case V4L2_MBUS_FMT_SGBRG10_1X10:
+ case V4L2_MBUS_FMT_SGRBG10_1X10:
+ case V4L2_MBUS_FMT_SRGGB10_1X10:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_BAYER;
+ cfg->mipi_dt = MIPI_DT_RAW10;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_10;
+ break;
+ case V4L2_MBUS_FMT_SBGGR12_1X12:
+ case V4L2_MBUS_FMT_SGBRG12_1X12:
+ case V4L2_MBUS_FMT_SGRBG12_1X12:
+ case V4L2_MBUS_FMT_SRGGB12_1X12:
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_BAYER;
+ cfg->mipi_dt = MIPI_DT_RAW12;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_12;
+ break;
+ case V4L2_MBUS_FMT_JPEG_1X8:
+ /* TODO */
+ cfg->data_fmt = CSI_SENS_CONF_DATA_FMT_JPEG;
+ cfg->mipi_dt = MIPI_DT_RAW8;
+ cfg->data_width = IPU_CSI_DATA_WIDTH_8;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/*
+ * Fill a CSI bus config struct from mbus_config and mbus_framefmt.
+ */
+static void fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
+ struct v4l2_mbus_config *mbus_cfg,
+ struct v4l2_mbus_framefmt *mbus_fmt)
+{
+ memset(csicfg, 0, sizeof(*csicfg));
+
+ mbus_code_to_bus_cfg(csicfg, mbus_fmt->code);
+
+ switch (mbus_cfg->type) {
+ case V4L2_MBUS_PARALLEL:
+ csicfg->ext_vsync = 1;
+ csicfg->vsync_pol = (mbus_cfg->flags &
+ V4L2_MBUS_VSYNC_ACTIVE_LOW) ? 1 : 0;
+ csicfg->hsync_pol = (mbus_cfg->flags &
+ V4L2_MBUS_HSYNC_ACTIVE_LOW) ? 1 : 0;
+ csicfg->pixclk_pol = (mbus_cfg->flags &
+ V4L2_MBUS_PCLK_SAMPLE_FALLING) ? 1 : 0;
+ csicfg->clk_mode = IPU_CSI_CLK_MODE_GATED_CLK;
+ break;
+ case V4L2_MBUS_BT656:
+ csicfg->ext_vsync = 0;
+ if (V4L2_FIELD_HAS_BOTH(mbus_fmt->field))
+ csicfg->clk_mode = IPU_CSI_CLK_MODE_CCIR656_INTERLACED;
+ else
+ csicfg->clk_mode = IPU_CSI_CLK_MODE_CCIR656_PROGRESSIVE;
+ break;
+ case V4L2_MBUS_CSI2:
+ /*
+		 * MIPI CSI-2 requires the non-gated clock mode; the other
+		 * parameters are not applicable to the MIPI CSI-2 bus.
+ */
+ csicfg->clk_mode = IPU_CSI_CLK_MODE_NONGATED_CLK;
+ break;
+ default:
+ /* will never get here, keep compiler quiet */
+ break;
+ }
+}
+
+int ipu_csi_init_interface(struct ipu_csi *csi,
+ struct v4l2_mbus_config *mbus_cfg,
+ struct v4l2_mbus_framefmt *mbus_fmt)
+{
+ struct ipu_csi_bus_config cfg;
+ unsigned long flags;
+ u32 data = 0;
+
+ fill_csi_bus_cfg(&cfg, mbus_cfg, mbus_fmt);
+
+ /* Set the CSI_SENS_CONF register remaining fields */
+ data |= cfg.data_width << CSI_SENS_CONF_DATA_WIDTH_SHIFT |
+ cfg.data_fmt << CSI_SENS_CONF_DATA_FMT_SHIFT |
+ cfg.data_pol << CSI_SENS_CONF_DATA_POL_SHIFT |
+ cfg.vsync_pol << CSI_SENS_CONF_VSYNC_POL_SHIFT |
+ cfg.hsync_pol << CSI_SENS_CONF_HSYNC_POL_SHIFT |
+ cfg.pixclk_pol << CSI_SENS_CONF_PIX_CLK_POL_SHIFT |
+ cfg.ext_vsync << CSI_SENS_CONF_EXT_VSYNC_SHIFT |
+ cfg.clk_mode << CSI_SENS_CONF_SENS_PRTCL_SHIFT |
+ cfg.pack_tight << CSI_SENS_CONF_PACK_TIGHT_SHIFT |
+ cfg.force_eof << CSI_SENS_CONF_FORCE_EOF_SHIFT |
+ cfg.data_en_pol << CSI_SENS_CONF_DATA_EN_POL_SHIFT;
+
+ spin_lock_irqsave(&csi->lock, flags);
+
+ ipu_csi_write(csi, data, CSI_SENS_CONF);
+
+ /* Setup sensor frame size */
+ ipu_csi_write(csi,
+ (mbus_fmt->width - 1) | ((mbus_fmt->height - 1) << 16),
+ CSI_SENS_FRM_SIZE);
+
+ /* Set CCIR registers */
+
+ switch (cfg.clk_mode) {
+ case IPU_CSI_CLK_MODE_CCIR656_PROGRESSIVE:
+ ipu_csi_write(csi, 0x40030, CSI_CCIR_CODE_1);
+ ipu_csi_write(csi, 0xFF0000, CSI_CCIR_CODE_3);
+ break;
+ case IPU_CSI_CLK_MODE_CCIR656_INTERLACED:
+ if (mbus_fmt->width == 720 && mbus_fmt->height == 576) {
+ /*
+ * PAL case
+ *
+ * Field0BlankEnd = 0x6, Field0BlankStart = 0x2,
+ * Field0ActiveEnd = 0x4, Field0ActiveStart = 0
+ * Field1BlankEnd = 0x7, Field1BlankStart = 0x3,
+ * Field1ActiveEnd = 0x5, Field1ActiveStart = 0x1
+ */
+ ipu_csi_write(csi, 0x40596 | CSI_CCIR_ERR_DET_EN,
+ CSI_CCIR_CODE_1);
+ ipu_csi_write(csi, 0xD07DF, CSI_CCIR_CODE_2);
+ ipu_csi_write(csi, 0xFF0000, CSI_CCIR_CODE_3);
+
+ } else if (mbus_fmt->width == 720 && mbus_fmt->height == 480) {
+ /*
+ * NTSC case
+ *
+ * Field0BlankEnd = 0x7, Field0BlankStart = 0x3,
+ * Field0ActiveEnd = 0x5, Field0ActiveStart = 0x1
+ * Field1BlankEnd = 0x6, Field1BlankStart = 0x2,
+ * Field1ActiveEnd = 0x4, Field1ActiveStart = 0
+ */
+ ipu_csi_write(csi, 0xD07DF | CSI_CCIR_ERR_DET_EN,
+ CSI_CCIR_CODE_1);
+ ipu_csi_write(csi, 0x40596, CSI_CCIR_CODE_2);
+ ipu_csi_write(csi, 0xFF0000, CSI_CCIR_CODE_3);
+ } else {
+ dev_err(csi->ipu->dev,
+ "Unsupported CCIR656 interlaced video mode\n");
+ spin_unlock_irqrestore(&csi->lock, flags);
+ return -EINVAL;
+ }
+ break;
+ case IPU_CSI_CLK_MODE_CCIR1120_PROGRESSIVE_DDR:
+ case IPU_CSI_CLK_MODE_CCIR1120_PROGRESSIVE_SDR:
+ case IPU_CSI_CLK_MODE_CCIR1120_INTERLACED_DDR:
+ case IPU_CSI_CLK_MODE_CCIR1120_INTERLACED_SDR:
+ ipu_csi_write(csi, 0x40030 | CSI_CCIR_ERR_DET_EN,
+ CSI_CCIR_CODE_1);
+ ipu_csi_write(csi, 0xFF0000, CSI_CCIR_CODE_3);
+ break;
+ case IPU_CSI_CLK_MODE_GATED_CLK:
+ case IPU_CSI_CLK_MODE_NONGATED_CLK:
+ ipu_csi_write(csi, 0, CSI_CCIR_CODE_1);
+ break;
+ }
+
+ dev_dbg(csi->ipu->dev, "CSI_SENS_CONF = 0x%08X\n",
+ ipu_csi_read(csi, CSI_SENS_CONF));
+ dev_dbg(csi->ipu->dev, "CSI_ACT_FRM_SIZE = 0x%08X\n",
+ ipu_csi_read(csi, CSI_ACT_FRM_SIZE));
+
+ spin_unlock_irqrestore(&csi->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_csi_init_interface);
+
+bool ipu_csi_is_interlaced(struct ipu_csi *csi)
+{
+ unsigned long flags;
+ u32 sensor_protocol;
+
+ spin_lock_irqsave(&csi->lock, flags);
+ sensor_protocol =
+ (ipu_csi_read(csi, CSI_SENS_CONF) &
+ CSI_SENS_CONF_SENS_PRTCL_MASK) >>
+ CSI_SENS_CONF_SENS_PRTCL_SHIFT;
+ spin_unlock_irqrestore(&csi->lock, flags);
+
+ switch (sensor_protocol) {
+ case IPU_CSI_CLK_MODE_GATED_CLK:
+ case IPU_CSI_CLK_MODE_NONGATED_CLK:
+ case IPU_CSI_CLK_MODE_CCIR656_PROGRESSIVE:
+ case IPU_CSI_CLK_MODE_CCIR1120_PROGRESSIVE_DDR:
+ case IPU_CSI_CLK_MODE_CCIR1120_PROGRESSIVE_SDR:
+ return false;
+ case IPU_CSI_CLK_MODE_CCIR656_INTERLACED:
+ case IPU_CSI_CLK_MODE_CCIR1120_INTERLACED_DDR:
+ case IPU_CSI_CLK_MODE_CCIR1120_INTERLACED_SDR:
+ return true;
+ default:
+ dev_err(csi->ipu->dev,
+ "CSI %d sensor protocol unsupported\n", csi->id);
+ return false;
+ }
+}
+EXPORT_SYMBOL_GPL(ipu_csi_is_interlaced);
+
+void ipu_csi_get_window(struct ipu_csi *csi, struct v4l2_rect *w)
+{
+ unsigned long flags;
+ u32 reg;
+
+ spin_lock_irqsave(&csi->lock, flags);
+
+ reg = ipu_csi_read(csi, CSI_ACT_FRM_SIZE);
+ w->width = (reg & 0xFFFF) + 1;
+ w->height = (reg >> 16 & 0xFFFF) + 1;
+
+ reg = ipu_csi_read(csi, CSI_OUT_FRM_CTRL);
+ w->left = (reg & CSI_HSC_MASK) >> CSI_HSC_SHIFT;
+ w->top = (reg & CSI_VSC_MASK) >> CSI_VSC_SHIFT;
+
+ spin_unlock_irqrestore(&csi->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_csi_get_window);
+
+void ipu_csi_set_window(struct ipu_csi *csi, struct v4l2_rect *w)
+{
+ unsigned long flags;
+ u32 reg;
+
+ spin_lock_irqsave(&csi->lock, flags);
+
+ ipu_csi_write(csi, (w->width - 1) | ((w->height - 1) << 16),
+ CSI_ACT_FRM_SIZE);
+
+ reg = ipu_csi_read(csi, CSI_OUT_FRM_CTRL);
+ reg &= ~(CSI_HSC_MASK | CSI_VSC_MASK);
+ reg |= ((w->top << CSI_VSC_SHIFT) | (w->left << CSI_HSC_SHIFT));
+ ipu_csi_write(csi, reg, CSI_OUT_FRM_CTRL);
+
+ spin_unlock_irqrestore(&csi->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_csi_set_window);
+
+void ipu_csi_set_test_generator(struct ipu_csi *csi, bool active,
+ u32 r_value, u32 g_value, u32 b_value,
+ u32 pix_clk)
+{
+ unsigned long flags;
+ u32 ipu_clk = clk_get_rate(csi->clk_ipu);
+ u32 temp;
+
+ spin_lock_irqsave(&csi->lock, flags);
+
+ temp = ipu_csi_read(csi, CSI_TST_CTRL);
+
+	if (!active) {
+ temp &= ~CSI_TEST_GEN_MODE_EN;
+ ipu_csi_write(csi, temp, CSI_TST_CTRL);
+ } else {
+ /* Set sensb_mclk div_ratio */
+ ipu_csi_set_testgen_mclk(csi, pix_clk, ipu_clk);
+
+ temp &= ~(CSI_TEST_GEN_R_MASK | CSI_TEST_GEN_G_MASK |
+ CSI_TEST_GEN_B_MASK);
+ temp |= CSI_TEST_GEN_MODE_EN;
+ temp |= (r_value << CSI_TEST_GEN_R_SHIFT) |
+ (g_value << CSI_TEST_GEN_G_SHIFT) |
+ (b_value << CSI_TEST_GEN_B_SHIFT);
+ ipu_csi_write(csi, temp, CSI_TST_CTRL);
+ }
+
+ spin_unlock_irqrestore(&csi->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_csi_set_test_generator);
+
+int ipu_csi_set_mipi_datatype(struct ipu_csi *csi, u32 vc,
+ struct v4l2_mbus_framefmt *mbus_fmt)
+{
+ struct ipu_csi_bus_config cfg;
+ unsigned long flags;
+ u32 temp;
+
+ if (vc > 3)
+ return -EINVAL;
+
+ mbus_code_to_bus_cfg(&cfg, mbus_fmt->code);
+
+ spin_lock_irqsave(&csi->lock, flags);
+
+ temp = ipu_csi_read(csi, CSI_MIPI_DI);
+ temp &= ~(0xff << (vc * 8));
+ temp |= (cfg.mipi_dt << (vc * 8));
+ ipu_csi_write(csi, temp, CSI_MIPI_DI);
+
+ spin_unlock_irqrestore(&csi->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_csi_set_mipi_datatype);
+
+int ipu_csi_set_skip_smfc(struct ipu_csi *csi, u32 skip,
+ u32 max_ratio, u32 id)
+{
+ unsigned long flags;
+ u32 temp;
+
+ if (max_ratio > 5 || id > 3)
+ return -EINVAL;
+
+ spin_lock_irqsave(&csi->lock, flags);
+
+ temp = ipu_csi_read(csi, CSI_SKIP);
+ temp &= ~(CSI_MAX_RATIO_SKIP_SMFC_MASK | CSI_ID_2_SKIP_MASK |
+ CSI_SKIP_SMFC_MASK);
+ temp |= (max_ratio << CSI_MAX_RATIO_SKIP_SMFC_SHIFT) |
+ (id << CSI_ID_2_SKIP_SHIFT) |
+ (skip << CSI_SKIP_SMFC_SHIFT);
+ ipu_csi_write(csi, temp, CSI_SKIP);
+
+ spin_unlock_irqrestore(&csi->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_csi_set_skip_smfc);
+
+int ipu_csi_set_dest(struct ipu_csi *csi, enum ipu_csi_dest csi_dest)
+{
+ unsigned long flags;
+ u32 csi_sens_conf, dest;
+
+ if (csi_dest == IPU_CSI_DEST_IDMAC)
+ dest = CSI_DATA_DEST_IDMAC;
+ else
+ dest = CSI_DATA_DEST_IC; /* IC or VDIC */
+
+ spin_lock_irqsave(&csi->lock, flags);
+
+ csi_sens_conf = ipu_csi_read(csi, CSI_SENS_CONF);
+ csi_sens_conf &= ~CSI_SENS_CONF_DATA_DEST_MASK;
+ csi_sens_conf |= (dest << CSI_SENS_CONF_DATA_DEST_SHIFT);
+ ipu_csi_write(csi, csi_sens_conf, CSI_SENS_CONF);
+
+ spin_unlock_irqrestore(&csi->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_csi_set_dest);
+
+int ipu_csi_enable(struct ipu_csi *csi)
+{
+ ipu_module_enable(csi->ipu, csi->module);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_csi_enable);
+
+int ipu_csi_disable(struct ipu_csi *csi)
+{
+ ipu_module_disable(csi->ipu, csi->module);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_csi_disable);
+
+struct ipu_csi *ipu_csi_get(struct ipu_soc *ipu, int id)
+{
+ unsigned long flags;
+ struct ipu_csi *csi, *ret;
+
+ if (id > 1)
+ return ERR_PTR(-EINVAL);
+
+ csi = ipu->csi_priv[id];
+ ret = csi;
+
+ spin_lock_irqsave(&csi->lock, flags);
+
+ if (csi->inuse) {
+ ret = ERR_PTR(-EBUSY);
+ goto unlock;
+ }
+
+ csi->inuse = true;
+unlock:
+ spin_unlock_irqrestore(&csi->lock, flags);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ipu_csi_get);
+
+void ipu_csi_put(struct ipu_csi *csi)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&csi->lock, flags);
+ csi->inuse = false;
+ spin_unlock_irqrestore(&csi->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_csi_put);
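+/*
+ * Usage sketch (illustration only; the mbus_cfg/mbus_fmt setup and most
+ * error handling are omitted): a typical capture path claims a CSI,
+ * configures the interface, routes the data and gates the module:
+ *
+ *	csi = ipu_csi_get(ipu, 0);
+ *	if (IS_ERR(csi))
+ *		return PTR_ERR(csi);
+ *	ret = ipu_csi_init_interface(csi, &mbus_cfg, &mbus_fmt);
+ *	if (!ret) {
+ *		ipu_csi_set_dest(csi, IPU_CSI_DEST_IDMAC);
+ *		ipu_csi_enable(csi);
+ *	}
+ *	...
+ *	ipu_csi_disable(csi);
+ *	ipu_csi_put(csi);
+ */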
+
+int ipu_csi_init(struct ipu_soc *ipu, struct device *dev, int id,
+ unsigned long base, u32 module, struct clk *clk_ipu)
+{
+ struct ipu_csi *csi;
+
+ if (id > 1)
+ return -ENODEV;
+
+ csi = devm_kzalloc(dev, sizeof(*csi), GFP_KERNEL);
+ if (!csi)
+ return -ENOMEM;
+
+ ipu->csi_priv[id] = csi;
+
+ spin_lock_init(&csi->lock);
+ csi->module = module;
+ csi->id = id;
+ csi->clk_ipu = clk_ipu;
+ csi->base = devm_ioremap(dev, base, PAGE_SIZE);
+ if (!csi->base)
+ return -ENOMEM;
+
+ dev_dbg(dev, "CSI%d base: 0x%08lx remapped to %p\n",
+ id, base, csi->base);
+ csi->ipu = ipu;
+
+ return 0;
+}
+
+void ipu_csi_exit(struct ipu_soc *ipu, int id)
+{
+}
+
+void ipu_csi_dump(struct ipu_csi *csi)
+{
+ dev_dbg(csi->ipu->dev, "CSI_SENS_CONF: %08x\n",
+ ipu_csi_read(csi, CSI_SENS_CONF));
+ dev_dbg(csi->ipu->dev, "CSI_SENS_FRM_SIZE: %08x\n",
+ ipu_csi_read(csi, CSI_SENS_FRM_SIZE));
+ dev_dbg(csi->ipu->dev, "CSI_ACT_FRM_SIZE: %08x\n",
+ ipu_csi_read(csi, CSI_ACT_FRM_SIZE));
+ dev_dbg(csi->ipu->dev, "CSI_OUT_FRM_CTRL: %08x\n",
+ ipu_csi_read(csi, CSI_OUT_FRM_CTRL));
+ dev_dbg(csi->ipu->dev, "CSI_TST_CTRL: %08x\n",
+ ipu_csi_read(csi, CSI_TST_CTRL));
+ dev_dbg(csi->ipu->dev, "CSI_CCIR_CODE_1: %08x\n",
+ ipu_csi_read(csi, CSI_CCIR_CODE_1));
+ dev_dbg(csi->ipu->dev, "CSI_CCIR_CODE_2: %08x\n",
+ ipu_csi_read(csi, CSI_CCIR_CODE_2));
+ dev_dbg(csi->ipu->dev, "CSI_CCIR_CODE_3: %08x\n",
+ ipu_csi_read(csi, CSI_CCIR_CODE_3));
+ dev_dbg(csi->ipu->dev, "CSI_MIPI_DI: %08x\n",
+ ipu_csi_read(csi, CSI_MIPI_DI));
+ dev_dbg(csi->ipu->dev, "CSI_SKIP: %08x\n",
+ ipu_csi_read(csi, CSI_SKIP));
+}
+EXPORT_SYMBOL_GPL(ipu_csi_dump);
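For orientation, a minimal usage sketch of the CSI API added above (not part of the patch): claim CSI0, route it to the IDMAC, and enable it. Error handling is trimmed, the "ipu" handle is assumed to come from the probing driver, and the frame-skip mask semantics follow the SoC manual (the values here are placeholders).

	static int example_csi_start(struct ipu_soc *ipu)
	{
		struct ipu_csi *csi;
		int ret;

		csi = ipu_csi_get(ipu, 0);	/* id must be 0 or 1 */
		if (IS_ERR(csi))
			return PTR_ERR(csi);

		ret = ipu_csi_set_dest(csi, IPU_CSI_DEST_IDMAC);
		if (ret)
			goto err_put;

		/* placeholder skip mask over max_ratio + 1 = 2 frames, id 0 */
		ret = ipu_csi_set_skip_smfc(csi, 0x02, 1, 0);
		if (ret)
			goto err_put;

		return ipu_csi_enable(csi);

	err_put:
		ipu_csi_put(csi);
		return ret;
	}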
--- /dev/null
+/*
+ * Copyright (C) 2012-2014 Mentor Graphics Inc.
+ * Copyright 2005-2012 Freescale Semiconductor, Inc. All Rights Reserved.
+ *
+ * The code contained herein is licensed under the GNU General Public
+ * License. You may obtain a copy of the GNU General Public License
+ * Version 2 or later at the following locations:
+ *
+ * http://www.opensource.org/licenses/gpl-license.html
+ * http://www.gnu.org/copyleft/gpl.html
+ */
+
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/spinlock.h>
+#include <linux/bitrev.h>
+#include <linux/io.h>
+#include <linux/err.h>
+#include "ipu-prv.h"
+
+/* IC Register Offsets */
+#define IC_CONF 0x0000
+#define IC_PRP_ENC_RSC 0x0004
+#define IC_PRP_VF_RSC 0x0008
+#define IC_PP_RSC 0x000C
+#define IC_CMBP_1 0x0010
+#define IC_CMBP_2 0x0014
+#define IC_IDMAC_1 0x0018
+#define IC_IDMAC_2 0x001C
+#define IC_IDMAC_3 0x0020
+#define IC_IDMAC_4 0x0024
+
+/* IC Register Fields */
+#define IC_CONF_PRPENC_EN (1 << 0)
+#define IC_CONF_PRPENC_CSC1 (1 << 1)
+#define IC_CONF_PRPENC_ROT_EN (1 << 2)
+#define IC_CONF_PRPVF_EN (1 << 8)
+#define IC_CONF_PRPVF_CSC1 (1 << 9)
+#define IC_CONF_PRPVF_CSC2 (1 << 10)
+#define IC_CONF_PRPVF_CMB (1 << 11)
+#define IC_CONF_PRPVF_ROT_EN (1 << 12)
+#define IC_CONF_PP_EN (1 << 16)
+#define IC_CONF_PP_CSC1 (1 << 17)
+#define IC_CONF_PP_CSC2 (1 << 18)
+#define IC_CONF_PP_CMB (1 << 19)
+#define IC_CONF_PP_ROT_EN (1 << 20)
+#define IC_CONF_IC_GLB_LOC_A (1 << 28)
+#define IC_CONF_KEY_COLOR_EN (1 << 29)
+#define IC_CONF_RWS_EN (1 << 30)
+#define IC_CONF_CSI_MEM_WR_EN (1 << 31)
+
+#define IC_IDMAC_1_CB0_BURST_16 (1 << 0)
+#define IC_IDMAC_1_CB1_BURST_16 (1 << 1)
+#define IC_IDMAC_1_CB2_BURST_16 (1 << 2)
+#define IC_IDMAC_1_CB3_BURST_16 (1 << 3)
+#define IC_IDMAC_1_CB4_BURST_16 (1 << 4)
+#define IC_IDMAC_1_CB5_BURST_16 (1 << 5)
+#define IC_IDMAC_1_CB6_BURST_16 (1 << 6)
+#define IC_IDMAC_1_CB7_BURST_16 (1 << 7)
+#define IC_IDMAC_1_PRPENC_ROT_MASK (0x7 << 11)
+#define IC_IDMAC_1_PRPENC_ROT_OFFSET 11
+#define IC_IDMAC_1_PRPVF_ROT_MASK (0x7 << 14)
+#define IC_IDMAC_1_PRPVF_ROT_OFFSET 14
+#define IC_IDMAC_1_PP_ROT_MASK (0x7 << 17)
+#define IC_IDMAC_1_PP_ROT_OFFSET 17
+#define IC_IDMAC_1_PP_FLIP_RS (1 << 22)
+#define IC_IDMAC_1_PRPVF_FLIP_RS (1 << 21)
+#define IC_IDMAC_1_PRPENC_FLIP_RS (1 << 20)
+
+#define IC_IDMAC_2_PRPENC_HEIGHT_MASK (0x3ff << 0)
+#define IC_IDMAC_2_PRPENC_HEIGHT_OFFSET 0
+#define IC_IDMAC_2_PRPVF_HEIGHT_MASK (0x3ff << 10)
+#define IC_IDMAC_2_PRPVF_HEIGHT_OFFSET 10
+#define IC_IDMAC_2_PP_HEIGHT_MASK (0x3ff << 20)
+#define IC_IDMAC_2_PP_HEIGHT_OFFSET 20
+
+#define IC_IDMAC_3_PRPENC_WIDTH_MASK (0x3ff << 0)
+#define IC_IDMAC_3_PRPENC_WIDTH_OFFSET 0
+#define IC_IDMAC_3_PRPVF_WIDTH_MASK (0x3ff << 10)
+#define IC_IDMAC_3_PRPVF_WIDTH_OFFSET 10
+#define IC_IDMAC_3_PP_WIDTH_MASK (0x3ff << 20)
+#define IC_IDMAC_3_PP_WIDTH_OFFSET 20
+
+struct ic_task_regoffs {
+ u32 rsc;
+ u32 tpmem_csc[2];
+};
+
+struct ic_task_bitfields {
+ u32 ic_conf_en;
+ u32 ic_conf_rot_en;
+ u32 ic_conf_cmb_en;
+ u32 ic_conf_csc1_en;
+ u32 ic_conf_csc2_en;
+ u32 ic_cmb_galpha_bit;
+};
+
+static const struct ic_task_regoffs ic_task_reg[IC_NUM_TASKS] = {
+ [IC_TASK_ENCODER] = {
+ .rsc = IC_PRP_ENC_RSC,
+ .tpmem_csc = {0x2008, 0},
+ },
+ [IC_TASK_VIEWFINDER] = {
+ .rsc = IC_PRP_VF_RSC,
+ .tpmem_csc = {0x4028, 0x4040},
+ },
+ [IC_TASK_POST_PROCESSOR] = {
+ .rsc = IC_PP_RSC,
+ .tpmem_csc = {0x6060, 0x6078},
+ },
+};
+
+static const struct ic_task_bitfields ic_task_bit[IC_NUM_TASKS] = {
+ [IC_TASK_ENCODER] = {
+ .ic_conf_en = IC_CONF_PRPENC_EN,
+ .ic_conf_rot_en = IC_CONF_PRPENC_ROT_EN,
+ .ic_conf_cmb_en = 0, /* NA */
+ .ic_conf_csc1_en = IC_CONF_PRPENC_CSC1,
+ .ic_conf_csc2_en = 0, /* NA */
+ .ic_cmb_galpha_bit = 0, /* NA */
+ },
+ [IC_TASK_VIEWFINDER] = {
+ .ic_conf_en = IC_CONF_PRPVF_EN,
+ .ic_conf_rot_en = IC_CONF_PRPVF_ROT_EN,
+ .ic_conf_cmb_en = IC_CONF_PRPVF_CMB,
+ .ic_conf_csc1_en = IC_CONF_PRPVF_CSC1,
+ .ic_conf_csc2_en = IC_CONF_PRPVF_CSC2,
+ .ic_cmb_galpha_bit = 0,
+ },
+ [IC_TASK_POST_PROCESSOR] = {
+ .ic_conf_en = IC_CONF_PP_EN,
+ .ic_conf_rot_en = IC_CONF_PP_ROT_EN,
+ .ic_conf_cmb_en = IC_CONF_PP_CMB,
+ .ic_conf_csc1_en = IC_CONF_PP_CSC1,
+ .ic_conf_csc2_en = IC_CONF_PP_CSC2,
+ .ic_cmb_galpha_bit = 8,
+ },
+};
+
+struct ipu_ic_priv;
+
+struct ipu_ic {
+ enum ipu_ic_task task;
+ const struct ic_task_regoffs *reg;
+ const struct ic_task_bitfields *bit;
+
+ enum ipu_color_space in_cs, g_in_cs;
+ enum ipu_color_space out_cs;
+ bool graphics;
+ bool rotation;
+ bool in_use;
+
+ struct ipu_ic_priv *priv;
+};
+
+struct ipu_ic_priv {
+ void __iomem *base;
+ void __iomem *tpmem_base;
+ spinlock_t lock;
+ struct ipu_soc *ipu;
+ int use_count;
+ struct ipu_ic task[IC_NUM_TASKS];
+};
+
+static inline u32 ipu_ic_read(struct ipu_ic *ic, unsigned offset)
+{
+ return readl(ic->priv->base + offset);
+}
+
+static inline void ipu_ic_write(struct ipu_ic *ic, u32 value, unsigned offset)
+{
+ writel(value, ic->priv->base + offset);
+}
+
+struct ic_csc_params {
+ s16 coeff[3][3]; /* signed 9-bit integer coefficients */
+ s16 offset[3]; /* signed 11+2-bit fixed point offset */
+ u8 scale:2; /* scale coefficients * 2^(scale-1) */
+ bool sat:1; /* saturate to (16, 235(Y) / 240(U, V)) */
+};
+
+/*
+ * Y = R * .299 + G * .587 + B * .114;
+ * U = R * -.169 + G * -.332 + B * .500 + 128.;
+ * V = R * .500 + G * -.419 + B * -.081 + 128.;
+ */
+static const struct ic_csc_params ic_csc_rgb2ycbcr = {
+ .coeff = {
+ { 77, 150, 29 },
+ { 469, 427, 128 },
+ { 128, 405, 491 },
+ },
+ .offset = { 0, 512, 512 },
+ .scale = 1,
+};
+
+/* transparent RGB->RGB matrix for graphics combining */
+static const struct ic_csc_params ic_csc_rgb2rgb = {
+ .coeff = {
+ { 128, 0, 0 },
+ { 0, 128, 0 },
+ { 0, 0, 128 },
+ },
+ .scale = 2,
+};
+
+/*
+ * R = (1.164 * (Y - 16)) + (1.596 * (Cr - 128));
+ * G = (1.164 * (Y - 16)) - (0.392 * (Cb - 128)) - (0.813 * (Cr - 128));
+ * B = (1.164 * (Y - 16)) + (2.017 * (Cb - 128));
+ */
+static const struct ic_csc_params ic_csc_ycbcr2rgb = {
+ .coeff = {
+ { 149, 0, 204 },
+ { 149, 462, 408 },
+ { 149, 255, 0 },
+ },
+ .offset = { -446, 266, -554 },
+ .scale = 2,
+};
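The coefficient tables above store each entry as a signed 9-bit fixed-point value in units of 1/256, pre-divided by 2^(scale-1). A small stand-alone check (a sketch, not driver code) reproduces three entries of the ycbcr2rgb table:

	#include <stdio.h>

	/* Encode a real coefficient the way the tables store it: 9-bit
	 * two's complement, units of 1/256, pre-divided by 2^(scale-1). */
	static unsigned int csc_encode(double coeff, int scale)
	{
		int v = (int)(coeff * 256.0 / (1 << (scale - 1)) +
			      (coeff < 0 ? -0.5 : 0.5));
		return (unsigned int)v & 0x1ff;
	}

	int main(void)
	{
		/* prints "149 462 408", matching ic_csc_ycbcr2rgb (scale = 2) */
		printf("%u %u %u\n", csc_encode(1.164, 2),
		       csc_encode(-0.392, 2), csc_encode(-0.813, 2));
		return 0;
	}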
+
+static int init_csc(struct ipu_ic *ic,
+ enum ipu_color_space inf,
+ enum ipu_color_space outf,
+ int csc_index)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ const struct ic_csc_params *params;
+ u32 __iomem *base;
+ const u16 (*c)[3];
+ const u16 *a;
+ u32 param;
+
+ base = (u32 __iomem *)
+ (priv->tpmem_base + ic->reg->tpmem_csc[csc_index]);
+
+ if (inf == IPUV3_COLORSPACE_YUV && outf == IPUV3_COLORSPACE_RGB)
+ params = &ic_csc_ycbcr2rgb;
+ else if (inf == IPUV3_COLORSPACE_RGB && outf == IPUV3_COLORSPACE_YUV)
+ params = &ic_csc_rgb2ycbcr;
+ else if (inf == IPUV3_COLORSPACE_RGB && outf == IPUV3_COLORSPACE_RGB)
+ params = &ic_csc_rgb2rgb;
+ else {
+ dev_err(priv->ipu->dev, "Unsupported color space conversion\n");
+ return -EINVAL;
+ }
+
+ /* Cast to unsigned */
+ c = (const u16 (*)[3])params->coeff;
+ a = (const u16 *)params->offset;
+
+ param = ((a[0] & 0x1f) << 27) | ((c[0][0] & 0x1ff) << 18) |
+ ((c[1][1] & 0x1ff) << 9) | (c[2][2] & 0x1ff);
+ writel(param, base++);
+
+ param = ((a[0] & 0x1fe0) >> 5) | (params->scale << 8) |
+ (params->sat << 9);
+ writel(param, base++);
+
+ param = ((a[1] & 0x1f) << 27) | ((c[0][1] & 0x1ff) << 18) |
+ ((c[1][0] & 0x1ff) << 9) | (c[2][0] & 0x1ff);
+ writel(param, base++);
+
+ param = ((a[1] & 0x1fe0) >> 5);
+ writel(param, base++);
+
+ param = ((a[2] & 0x1f) << 27) | ((c[0][2] & 0x1ff) << 18) |
+ ((c[1][2] & 0x1ff) << 9) | (c[2][1] & 0x1ff);
+ writel(param, base++);
+
+ param = ((a[2] & 0x1fe0) >> 5);
+ writel(param, base++);
+
+ return 0;
+}
+
+static int calc_resize_coeffs(struct ipu_ic *ic,
+ u32 in_size, u32 out_size,
+ u32 *resize_coeff,
+ u32 *downsize_coeff)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ struct ipu_soc *ipu = priv->ipu;
+ u32 temp_size, temp_downsize;
+
+ /*
+ * Input size cannot be more than 4096, and output size cannot
+ * be more than 1024
+ */
+ if (in_size > 4096) {
+ dev_err(ipu->dev, "Unsupported resize (in_size > 4096)\n");
+ return -EINVAL;
+ }
+ if (out_size > 1024) {
+ dev_err(ipu->dev, "Unsupported resize (out_size > 1024)\n");
+ return -EINVAL;
+ }
+
+ /* Cannot downsize more than 8:1 */
+ if ((out_size << 3) < in_size) {
+ dev_err(ipu->dev, "Unsupported downsize\n");
+ return -EINVAL;
+ }
+
+ /* Compute downsizing coefficient */
+ temp_downsize = 0;
+ temp_size = in_size;
+ while (((temp_size > 1024) || (temp_size >= out_size * 2)) &&
+ (temp_downsize < 2)) {
+ temp_size >>= 1;
+ temp_downsize++;
+ }
+ *downsize_coeff = temp_downsize;
+
+ /*
+ * compute resizing coefficient using the following equation:
+ * resize_coeff = M * (SI - 1) / (SO - 1)
+ * where M = 2^13, SI = input size, SO = output size
+ */
+ *resize_coeff = (8192L * (temp_size - 1)) / (out_size - 1);
+ if (*resize_coeff >= 16384L) {
+ dev_err(ipu->dev, "Warning! Overflow on resize coeff.\n");
+ *resize_coeff = 0x3FFF;
+ }
+
+ return 0;
+}
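As a worked example of the two-stage scaling above, consider downscaling a 1920-pixel line to 720 pixels:

	in_size = 1920, out_size = 720
	pre-decimation: 1920 > 1024  ->  temp_size = 960, downsize_coeff = 1
	resize_coeff = 8192 * (960 - 1) / (720 - 1) = 10926
	check: the resizer advances 10926 / 8192 ~ 1.334 input pixels per
	output pixel, and 2 * 1.334 ~ 2.667 = 1920 / 720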
+
+void ipu_ic_task_enable(struct ipu_ic *ic)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ unsigned long flags;
+ u32 ic_conf;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ ic_conf = ipu_ic_read(ic, IC_CONF);
+
+ ic_conf |= ic->bit->ic_conf_en;
+
+ if (ic->rotation)
+ ic_conf |= ic->bit->ic_conf_rot_en;
+
+ if (ic->in_cs != ic->out_cs)
+ ic_conf |= ic->bit->ic_conf_csc1_en;
+
+ if (ic->graphics) {
+ ic_conf |= ic->bit->ic_conf_cmb_en;
+ ic_conf |= ic->bit->ic_conf_csc1_en;
+
+ if (ic->g_in_cs != ic->out_cs)
+ ic_conf |= ic->bit->ic_conf_csc2_en;
+ }
+
+ ipu_ic_write(ic, ic_conf, IC_CONF);
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_ic_task_enable);
+
+void ipu_ic_task_disable(struct ipu_ic *ic)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ unsigned long flags;
+ u32 ic_conf;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ ic_conf = ipu_ic_read(ic, IC_CONF);
+
+ ic_conf &= ~(ic->bit->ic_conf_en |
+ ic->bit->ic_conf_csc1_en |
+ ic->bit->ic_conf_rot_en);
+ if (ic->bit->ic_conf_csc2_en)
+ ic_conf &= ~ic->bit->ic_conf_csc2_en;
+ if (ic->bit->ic_conf_cmb_en)
+ ic_conf &= ~ic->bit->ic_conf_cmb_en;
+
+ ipu_ic_write(ic, ic_conf, IC_CONF);
+
+ ic->rotation = ic->graphics = false;
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_ic_task_disable);
+
+int ipu_ic_task_graphics_init(struct ipu_ic *ic,
+ enum ipu_color_space in_g_cs,
+ bool galpha_en, u32 galpha,
+ bool colorkey_en, u32 colorkey)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ unsigned long flags;
+ u32 reg, ic_conf;
+ int ret = 0;
+
+ if (ic->task == IC_TASK_ENCODER)
+ return -EINVAL;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ ic_conf = ipu_ic_read(ic, IC_CONF);
+
+ if (!(ic_conf & ic->bit->ic_conf_csc1_en)) {
+ /* need transparent CSC1 conversion */
+ ret = init_csc(ic, IPUV3_COLORSPACE_RGB,
+ IPUV3_COLORSPACE_RGB, 0);
+ if (ret)
+ goto unlock;
+ }
+
+ ic->g_in_cs = in_g_cs;
+
+ if (ic->g_in_cs != ic->out_cs) {
+ ret = init_csc(ic, ic->g_in_cs, ic->out_cs, 1);
+ if (ret)
+ goto unlock;
+ }
+
+ if (galpha_en) {
+ ic_conf |= IC_CONF_IC_GLB_LOC_A;
+ reg = ipu_ic_read(ic, IC_CMBP_1);
+ reg &= ~(0xff << ic->bit->ic_cmb_galpha_bit);
+ reg |= (galpha << ic->bit->ic_cmb_galpha_bit);
+ ipu_ic_write(ic, reg, IC_CMBP_1);
+ } else
+ ic_conf &= ~IC_CONF_IC_GLB_LOC_A;
+
+ if (colorkey_en) {
+ ic_conf |= IC_CONF_KEY_COLOR_EN;
+ ipu_ic_write(ic, colorkey, IC_CMBP_2);
+ } else
+ ic_conf &= ~IC_CONF_KEY_COLOR_EN;
+
+ ipu_ic_write(ic, ic_conf, IC_CONF);
+
+ ic->graphics = true;
+unlock:
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ipu_ic_task_graphics_init);
+
+int ipu_ic_task_init(struct ipu_ic *ic,
+ int in_width, int in_height,
+ int out_width, int out_height,
+ enum ipu_color_space in_cs,
+ enum ipu_color_space out_cs)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ u32 reg, downsize_coeff, resize_coeff;
+ unsigned long flags;
+ int ret = 0;
+
+ /* Setup vertical resizing */
+ ret = calc_resize_coeffs(ic, in_height, out_height,
+ &resize_coeff, &downsize_coeff);
+ if (ret)
+ return ret;
+
+ reg = (downsize_coeff << 30) | (resize_coeff << 16);
+
+ /* Setup horizontal resizing */
+ ret = calc_resize_coeffs(ic, in_width, out_width,
+ &resize_coeff, &downsize_coeff);
+ if (ret)
+ return ret;
+
+ reg |= (downsize_coeff << 14) | resize_coeff;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ ipu_ic_write(ic, reg, ic->reg->rsc);
+
+ /* Setup color space conversion */
+ ic->in_cs = in_cs;
+ ic->out_cs = out_cs;
+
+ if (ic->in_cs != ic->out_cs) {
+ ret = init_csc(ic, ic->in_cs, ic->out_cs, 0);
+ if (ret)
+ goto unlock;
+ }
+
+unlock:
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ipu_ic_task_init);
+
+int ipu_ic_task_idma_init(struct ipu_ic *ic, struct ipuv3_channel *channel,
+ u32 width, u32 height, int burst_size,
+ enum ipu_rotate_mode rot)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ struct ipu_soc *ipu = priv->ipu;
+ u32 ic_idmac_1, ic_idmac_2, ic_idmac_3;
+ u32 temp_rot = bitrev8(rot) >> 5;
+ bool need_hor_flip = false;
+ unsigned long flags;
+ int ret = 0;
+
+ if ((burst_size != 8) && (burst_size != 16)) {
+ dev_err(ipu->dev, "Illegal burst length for IC\n");
+ return -EINVAL;
+ }
+
+ width--;
+ height--;
+
+ if (temp_rot & 0x2) /* Need horizontal flip */
+ need_hor_flip = true;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ ic_idmac_1 = ipu_ic_read(ic, IC_IDMAC_1);
+ ic_idmac_2 = ipu_ic_read(ic, IC_IDMAC_2);
+ ic_idmac_3 = ipu_ic_read(ic, IC_IDMAC_3);
+
+ switch (channel->num) {
+ case IPUV3_CHANNEL_IC_PP_MEM:
+ if (burst_size == 16)
+ ic_idmac_1 |= IC_IDMAC_1_CB2_BURST_16;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_CB2_BURST_16;
+
+ if (need_hor_flip)
+ ic_idmac_1 |= IC_IDMAC_1_PP_FLIP_RS;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_PP_FLIP_RS;
+
+ ic_idmac_2 &= ~IC_IDMAC_2_PP_HEIGHT_MASK;
+ ic_idmac_2 |= height << IC_IDMAC_2_PP_HEIGHT_OFFSET;
+
+ ic_idmac_3 &= ~IC_IDMAC_3_PP_WIDTH_MASK;
+ ic_idmac_3 |= width << IC_IDMAC_3_PP_WIDTH_OFFSET;
+ break;
+ case IPUV3_CHANNEL_MEM_IC_PP:
+ if (burst_size == 16)
+ ic_idmac_1 |= IC_IDMAC_1_CB5_BURST_16;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_CB5_BURST_16;
+ break;
+ case IPUV3_CHANNEL_MEM_ROT_PP:
+ ic_idmac_1 &= ~IC_IDMAC_1_PP_ROT_MASK;
+ ic_idmac_1 |= temp_rot << IC_IDMAC_1_PP_ROT_OFFSET;
+ break;
+ case IPUV3_CHANNEL_MEM_IC_PRP_VF:
+ if (burst_size == 16)
+ ic_idmac_1 |= IC_IDMAC_1_CB6_BURST_16;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_CB6_BURST_16;
+ break;
+ case IPUV3_CHANNEL_IC_PRP_ENC_MEM:
+ if (burst_size == 16)
+ ic_idmac_1 |= IC_IDMAC_1_CB0_BURST_16;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_CB0_BURST_16;
+
+ if (need_hor_flip)
+ ic_idmac_1 |= IC_IDMAC_1_PRPENC_FLIP_RS;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_PRPENC_FLIP_RS;
+
+ ic_idmac_2 &= ~IC_IDMAC_2_PRPENC_HEIGHT_MASK;
+ ic_idmac_2 |= height << IC_IDMAC_2_PRPENC_HEIGHT_OFFSET;
+
+ ic_idmac_3 &= ~IC_IDMAC_3_PRPENC_WIDTH_MASK;
+ ic_idmac_3 |= width << IC_IDMAC_3_PRPENC_WIDTH_OFFSET;
+ break;
+ case IPUV3_CHANNEL_MEM_ROT_ENC:
+ ic_idmac_1 &= ~IC_IDMAC_1_PRPENC_ROT_MASK;
+ ic_idmac_1 |= temp_rot << IC_IDMAC_1_PRPENC_ROT_OFFSET;
+ break;
+ case IPUV3_CHANNEL_IC_PRP_VF_MEM:
+ if (burst_size == 16)
+ ic_idmac_1 |= IC_IDMAC_1_CB1_BURST_16;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_CB1_BURST_16;
+
+ if (need_hor_flip)
+ ic_idmac_1 |= IC_IDMAC_1_PRPVF_FLIP_RS;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_PRPVF_FLIP_RS;
+
+ ic_idmac_2 &= ~IC_IDMAC_2_PRPVF_HEIGHT_MASK;
+ ic_idmac_2 |= height << IC_IDMAC_2_PRPVF_HEIGHT_OFFSET;
+
+ ic_idmac_3 &= ~IC_IDMAC_3_PRPVF_WIDTH_MASK;
+ ic_idmac_3 |= width << IC_IDMAC_3_PRPVF_WIDTH_OFFSET;
+ break;
+ case IPUV3_CHANNEL_MEM_ROT_VF:
+ ic_idmac_1 &= ~IC_IDMAC_1_PRPVF_ROT_MASK;
+ ic_idmac_1 |= temp_rot << IC_IDMAC_1_PRPVF_ROT_OFFSET;
+ break;
+ case IPUV3_CHANNEL_G_MEM_IC_PRP_VF:
+ if (burst_size == 16)
+ ic_idmac_1 |= IC_IDMAC_1_CB3_BURST_16;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_CB3_BURST_16;
+ break;
+ case IPUV3_CHANNEL_G_MEM_IC_PP:
+ if (burst_size == 16)
+ ic_idmac_1 |= IC_IDMAC_1_CB4_BURST_16;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_CB4_BURST_16;
+ break;
+ case IPUV3_CHANNEL_VDI_MEM_IC_VF:
+ if (burst_size == 16)
+ ic_idmac_1 |= IC_IDMAC_1_CB7_BURST_16;
+ else
+ ic_idmac_1 &= ~IC_IDMAC_1_CB7_BURST_16;
+ break;
+ default:
+ goto unlock;
+ }
+
+ ipu_ic_write(ic, ic_idmac_1, IC_IDMAC_1);
+ ipu_ic_write(ic, ic_idmac_2, IC_IDMAC_2);
+ ipu_ic_write(ic, ic_idmac_3, IC_IDMAC_3);
+
+ if (rot >= IPU_ROTATE_90_RIGHT)
+ ic->rotation = true;
+
+unlock:
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ipu_ic_task_idma_init);
+
+int ipu_ic_enable(struct ipu_ic *ic)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ unsigned long flags;
+ u32 module = IPU_CONF_IC_EN;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ if (ic->rotation)
+ module |= IPU_CONF_ROT_EN;
+
+ if (!priv->use_count)
+ ipu_module_enable(priv->ipu, module);
+
+ priv->use_count++;
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_ic_enable);
+
+int ipu_ic_disable(struct ipu_ic *ic)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ unsigned long flags;
+ u32 module = IPU_CONF_IC_EN | IPU_CONF_ROT_EN;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ priv->use_count--;
+
+ if (!priv->use_count)
+ ipu_module_disable(priv->ipu, module);
+
+ if (priv->use_count < 0)
+ priv->use_count = 0;
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_ic_disable);
+
+struct ipu_ic *ipu_ic_get(struct ipu_soc *ipu, enum ipu_ic_task task)
+{
+ struct ipu_ic_priv *priv = ipu->ic_priv;
+ unsigned long flags;
+ struct ipu_ic *ic, *ret;
+
+ if (task >= IC_NUM_TASKS)
+ return ERR_PTR(-EINVAL);
+
+ ic = &priv->task[task];
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ if (ic->in_use) {
+ ret = ERR_PTR(-EBUSY);
+ goto unlock;
+ }
+
+ ic->in_use = true;
+ ret = ic;
+
+unlock:
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ipu_ic_get);
+
+void ipu_ic_put(struct ipu_ic *ic)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+ ic->in_use = false;
+ spin_unlock_irqrestore(&priv->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_ic_put);
+
+int ipu_ic_init(struct ipu_soc *ipu, struct device *dev,
+ unsigned long base, unsigned long tpmem_base)
+{
+ struct ipu_ic_priv *priv;
+ int i;
+
+ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ ipu->ic_priv = priv;
+
+ spin_lock_init(&priv->lock);
+ priv->base = devm_ioremap(dev, base, PAGE_SIZE);
+ if (!priv->base)
+ return -ENOMEM;
+ priv->tpmem_base = devm_ioremap(dev, tpmem_base, SZ_64K);
+ if (!priv->tpmem_base)
+ return -ENOMEM;
+
+ dev_dbg(dev, "IC base: 0x%08lx remapped to %p\n", base, priv->base);
+
+ priv->ipu = ipu;
+
+ for (i = 0; i < IC_NUM_TASKS; i++) {
+ priv->task[i].task = i;
+ priv->task[i].priv = priv;
+ priv->task[i].reg = &ic_task_reg[i];
+ priv->task[i].bit = &ic_task_bit[i];
+ }
+
+ return 0;
+}
+
+void ipu_ic_exit(struct ipu_soc *ipu)
+{
+}
+
+void ipu_ic_dump(struct ipu_ic *ic)
+{
+ struct ipu_ic_priv *priv = ic->priv;
+ struct ipu_soc *ipu = priv->ipu;
+
+ dev_dbg(ipu->dev, "IC_CONF = \t0x%08X\n",
+ ipu_ic_read(ic, IC_CONF));
+ dev_dbg(ipu->dev, "IC_PRP_ENC_RSC = \t0x%08X\n",
+ ipu_ic_read(ic, IC_PRP_ENC_RSC));
+ dev_dbg(ipu->dev, "IC_PRP_VF_RSC = \t0x%08X\n",
+ ipu_ic_read(ic, IC_PRP_VF_RSC));
+ dev_dbg(ipu->dev, "IC_PP_RSC = \t0x%08X\n",
+ ipu_ic_read(ic, IC_PP_RSC));
+ dev_dbg(ipu->dev, "IC_CMBP_1 = \t0x%08X\n",
+ ipu_ic_read(ic, IC_CMBP_1));
+ dev_dbg(ipu->dev, "IC_CMBP_2 = \t0x%08X\n",
+ ipu_ic_read(ic, IC_CMBP_2));
+ dev_dbg(ipu->dev, "IC_IDMAC_1 = \t0x%08X\n",
+ ipu_ic_read(ic, IC_IDMAC_1));
+ dev_dbg(ipu->dev, "IC_IDMAC_2 = \t0x%08X\n",
+ ipu_ic_read(ic, IC_IDMAC_2));
+ dev_dbg(ipu->dev, "IC_IDMAC_3 = \t0x%08X\n",
+ ipu_ic_read(ic, IC_IDMAC_3));
+ dev_dbg(ipu->dev, "IC_IDMAC_4 = \t0x%08X\n",
+ ipu_ic_read(ic, IC_IDMAC_4));
+}
+EXPORT_SYMBOL_GPL(ipu_ic_dump);
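Putting the task API together, a minimal sketch (not from the patch) that downscales and color-converts 1920x1080 YUV to 640x360 RGB on the viewfinder task. The IDMAC channel is assumed to come from the caller, IPU_ROTATE_NONE is assumed from <video/imx-ipu-v3.h>, and error handling is trimmed.

	static int example_ic_start(struct ipu_soc *ipu,
				    struct ipuv3_channel *channel)
	{
		struct ipu_ic *ic;
		int ret;

		ic = ipu_ic_get(ipu, IC_TASK_VIEWFINDER);
		if (IS_ERR(ic))
			return PTR_ERR(ic);

		ret = ipu_ic_task_init(ic, 1920, 1080, 640, 360,
				       IPUV3_COLORSPACE_YUV,
				       IPUV3_COLORSPACE_RGB);
		if (ret)
			goto err_put;

		/* burst size must be 8 or 16 */
		ret = ipu_ic_task_idma_init(ic, channel, 640, 360, 16,
					    IPU_ROTATE_NONE);
		if (ret)
			goto err_put;

		ipu_ic_task_enable(ic);
		return ipu_ic_enable(ic);

	err_put:
		ipu_ic_put(ic);
		return ret;
	}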
#include <video/imx-ipu-v3.h>
-#define IPUV3_CHANNEL_CSI0 0
-#define IPUV3_CHANNEL_CSI1 1
-#define IPUV3_CHANNEL_CSI2 2
-#define IPUV3_CHANNEL_CSI3 3
-#define IPUV3_CHANNEL_MEM_BG_SYNC 23
-#define IPUV3_CHANNEL_MEM_FG_SYNC 27
-#define IPUV3_CHANNEL_MEM_DC_SYNC 28
-#define IPUV3_CHANNEL_MEM_FG_SYNC_ALPHA 31
-#define IPUV3_CHANNEL_MEM_DC_ASYNC 41
-#define IPUV3_CHANNEL_ROT_ENC_MEM 45
-#define IPUV3_CHANNEL_ROT_VF_MEM 46
-#define IPUV3_CHANNEL_ROT_PP_MEM 47
-#define IPUV3_CHANNEL_ROT_ENC_MEM_OUT 48
-#define IPUV3_CHANNEL_ROT_VF_MEM_OUT 49
-#define IPUV3_CHANNEL_ROT_PP_MEM_OUT 50
-#define IPUV3_CHANNEL_MEM_BG_SYNC_ALPHA 51
-
#define IPU_MCU_T_DEFAULT 8
#define IPU_CM_IDMAC_REG_OFS 0x00008000
#define IPU_CM_IC_REG_OFS 0x00020000
#define IPU_DISP_TASK_STAT IPU_CM_REG(0x0254)
#define IPU_CHA_BUF0_RDY(ch) IPU_CM_REG(0x0268 + 4 * ((ch) / 32))
#define IPU_CHA_BUF1_RDY(ch) IPU_CM_REG(0x0270 + 4 * ((ch) / 32))
+#define IPU_CHA_BUF2_RDY(ch) IPU_CM_REG(0x0288 + 4 * ((ch) / 32))
#define IPU_ALT_CHA_BUF0_RDY(ch) IPU_CM_REG(0x0278 + 4 * ((ch) / 32))
#define IPU_ALT_CHA_BUF1_RDY(ch) IPU_CM_REG(0x0280 + 4 * ((ch) / 32))
struct ipu_soc *ipu;
};
+struct ipu_cpmem;
+struct ipu_csi;
struct ipu_dc_priv;
struct ipu_dmfc_priv;
struct ipu_di;
+struct ipu_ic_priv;
struct ipu_smfc_priv;
struct ipu_devtype;
void __iomem *cm_reg;
void __iomem *idmac_reg;
- struct ipu_ch_param __iomem *cpmem_base;
int usecount;
int irq_err;
struct irq_domain *domain;
+ struct ipu_cpmem *cpmem_priv;
struct ipu_dc_priv *dc_priv;
struct ipu_dp_priv *dp_priv;
struct ipu_dmfc_priv *dmfc_priv;
struct ipu_di *di_priv[2];
+ struct ipu_csi *csi_priv[2];
+ struct ipu_ic_priv *ic_priv;
struct ipu_smfc_priv *smfc_priv;
};
+static inline u32 ipu_idmac_read(struct ipu_soc *ipu, unsigned offset)
+{
+ return readl(ipu->idmac_reg + offset);
+}
+
+static inline void ipu_idmac_write(struct ipu_soc *ipu, u32 value,
+ unsigned offset)
+{
+ writel(value, ipu->idmac_reg + offset);
+}
+
void ipu_srm_dp_sync_update(struct ipu_soc *ipu);
int ipu_module_enable(struct ipu_soc *ipu, u32 mask);
bool ipu_idmac_channel_busy(struct ipu_soc *ipu, unsigned int chno);
int ipu_wait_interrupt(struct ipu_soc *ipu, int irq, int ms);
+int ipu_csi_init(struct ipu_soc *ipu, struct device *dev, int id,
+ unsigned long base, u32 module, struct clk *clk_ipu);
+void ipu_csi_exit(struct ipu_soc *ipu, int id);
+
+int ipu_ic_init(struct ipu_soc *ipu, struct device *dev,
+ unsigned long base, unsigned long tpmem_base);
+void ipu_ic_exit(struct ipu_soc *ipu);
+
int ipu_di_init(struct ipu_soc *ipu, struct device *dev, int id,
unsigned long base, u32 module, struct clk *ipu_clk);
void ipu_di_exit(struct ipu_soc *ipu, int id);
#include "ipu-prv.h"
+struct ipu_smfc {
+ struct ipu_smfc_priv *priv;
+ int chno;
+ bool inuse;
+};
+
struct ipu_smfc_priv {
void __iomem *base;
spinlock_t lock;
+ struct ipu_soc *ipu;
+ struct ipu_smfc channel[4];
+ int use_count;
};
/*SMFC Registers */
#define SMFC_WMC 0x0004
#define SMFC_BS 0x0008
-int ipu_smfc_set_burstsize(struct ipu_soc *ipu, int channel, int burstsize)
+int ipu_smfc_set_burstsize(struct ipu_smfc *smfc, int burstsize)
{
- struct ipu_smfc_priv *smfc = ipu->smfc_priv;
+ struct ipu_smfc_priv *priv = smfc->priv;
unsigned long flags;
u32 val, shift;
- spin_lock_irqsave(&smfc->lock, flags);
+ spin_lock_irqsave(&priv->lock, flags);
- shift = channel * 4;
- val = readl(smfc->base + SMFC_BS);
+ shift = smfc->chno * 4;
+ val = readl(priv->base + SMFC_BS);
val &= ~(0xf << shift);
val |= burstsize << shift;
- writel(val, smfc->base + SMFC_BS);
+ writel(val, priv->base + SMFC_BS);
- spin_unlock_irqrestore(&smfc->lock, flags);
+ spin_unlock_irqrestore(&priv->lock, flags);
return 0;
}
EXPORT_SYMBOL_GPL(ipu_smfc_set_burstsize);
-int ipu_smfc_map_channel(struct ipu_soc *ipu, int channel, int csi_id, int mipi_id)
+int ipu_smfc_map_channel(struct ipu_smfc *smfc, int csi_id, int mipi_id)
{
- struct ipu_smfc_priv *smfc = ipu->smfc_priv;
+ struct ipu_smfc_priv *priv = smfc->priv;
unsigned long flags;
u32 val, shift;
- spin_lock_irqsave(&smfc->lock, flags);
+ spin_lock_irqsave(&priv->lock, flags);
- shift = channel * 3;
- val = readl(smfc->base + SMFC_MAP);
+ shift = smfc->chno * 3;
+ val = readl(priv->base + SMFC_MAP);
val &= ~(0x7 << shift);
val |= ((csi_id << 2) | mipi_id) << shift;
- writel(val, smfc->base + SMFC_MAP);
+ writel(val, priv->base + SMFC_MAP);
- spin_unlock_irqrestore(&smfc->lock, flags);
+ spin_unlock_irqrestore(&priv->lock, flags);
return 0;
}
EXPORT_SYMBOL_GPL(ipu_smfc_map_channel);
+int ipu_smfc_set_watermark(struct ipu_smfc *smfc, u32 set_level, u32 clr_level)
+{
+ struct ipu_smfc_priv *priv = smfc->priv;
+ unsigned long flags;
+ u32 val, shift;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ shift = smfc->chno * 6 + (smfc->chno > 1 ? 4 : 0);
+ val = readl(priv->base + SMFC_WMC);
+ val &= ~(0x3f << shift);
+ val |= ((clr_level << 3) | set_level) << shift;
+ writel(val, priv->base + SMFC_WMC);
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_smfc_set_watermark);
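The shift expression above encodes the SMFC_WMC layout: each channel owns a 6-bit set/clear watermark field, with channels 2 and 3 placed in the upper half-word. Worked values:

	chno 0: 0 * 6 + 0 = 0
	chno 1: 1 * 6 + 0 = 6
	chno 2: 2 * 6 + 4 = 16
	chno 3: 3 * 6 + 4 = 22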
+
+int ipu_smfc_enable(struct ipu_smfc *smfc)
+{
+ struct ipu_smfc_priv *priv = smfc->priv;
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ if (!priv->use_count)
+ ipu_module_enable(priv->ipu, IPU_CONF_SMFC_EN);
+
+ priv->use_count++;
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_smfc_enable);
+
+int ipu_smfc_disable(struct ipu_smfc *smfc)
+{
+ struct ipu_smfc_priv *priv = smfc->priv;
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ priv->use_count--;
+
+ if (!priv->use_count)
+ ipu_module_disable(priv->ipu, IPU_CONF_SMFC_EN);
+
+ if (priv->use_count < 0)
+ priv->use_count = 0;
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_smfc_disable);
+
+struct ipu_smfc *ipu_smfc_get(struct ipu_soc *ipu, unsigned int chno)
+{
+ struct ipu_smfc_priv *priv = ipu->smfc_priv;
+ struct ipu_smfc *smfc, *ret;
+ unsigned long flags;
+
+ if (chno >= 4)
+ return ERR_PTR(-EINVAL);
+
+ smfc = &priv->channel[chno];
+ ret = smfc;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ if (smfc->inuse) {
+ ret = ERR_PTR(-EBUSY);
+ goto unlock;
+ }
+
+ smfc->inuse = true;
+unlock:
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ipu_smfc_get);
+
+void ipu_smfc_put(struct ipu_smfc *smfc)
+{
+ struct ipu_smfc_priv *priv = smfc->priv;
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+ smfc->inuse = false;
+ spin_unlock_irqrestore(&priv->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_smfc_put);
+
int ipu_smfc_init(struct ipu_soc *ipu, struct device *dev,
unsigned long base)
{
- struct ipu_smfc_priv *smfc;
+ struct ipu_smfc_priv *priv;
+ int i;
- smfc = devm_kzalloc(dev, sizeof(*smfc), GFP_KERNEL);
- if (!smfc)
+ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
return -ENOMEM;
- ipu->smfc_priv = smfc;
- spin_lock_init(&smfc->lock);
+ ipu->smfc_priv = priv;
+ spin_lock_init(&priv->lock);
+ priv->ipu = ipu;
- smfc->base = devm_ioremap(dev, base, PAGE_SIZE);
- if (!smfc->base)
+ priv->base = devm_ioremap(dev, base, PAGE_SIZE);
+ if (!priv->base)
return -ENOMEM;
- pr_debug("%s: ioremap 0x%08lx -> %p\n", __func__, base, smfc->base);
+ for (i = 0; i < 4; i++) {
+ priv->channel[i].priv = priv;
+ priv->channel[i].chno = i;
+ }
+
+ pr_debug("%s: ioremap 0x%08lx -> %p\n", __func__, base, priv->base);
return 0;
}
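For orientation, a minimal sketch of the per-channel SMFC API introduced above (not part of the patch): claim channel 0, bind it to CSI0, and enable it. The burst and watermark field values are placeholders whose encoding follows the SoC manual.

	static int example_smfc_start(struct ipu_soc *ipu)
	{
		struct ipu_smfc *smfc;

		smfc = ipu_smfc_get(ipu, 0);
		if (IS_ERR(smfc))
			return PTR_ERR(smfc);

		ipu_smfc_map_channel(smfc, 0, 0);   /* csi_id 0, mipi_id 0 */
		ipu_smfc_set_burstsize(smfc, 0xf);  /* BS field, placeholder */
		ipu_smfc_set_watermark(smfc, 0x02, 0x01);

		return ipu_smfc_enable(smfc);
	}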
static __u8 *ch_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
- if (*rsize >= 17 && rdesc[11] == 0x3c && rdesc[12] == 0x02) {
+ if (*rsize >= 18 && rdesc[11] == 0x3c && rdesc[12] == 0x02) {
hid_info(hdev, "fixing up Cherry Cymotion report descriptor\n");
rdesc[11] = rdesc[16] = 0xff;
rdesc[12] = rdesc[17] = 0x03;
0xC0 /* End Collection */
};
+/* Parameter indices */
+enum huion_prm {
+ HUION_PRM_X_LM = 1,
+ HUION_PRM_Y_LM = 2,
+ HUION_PRM_PRESSURE_LM = 4,
+ HUION_PRM_RESOLUTION = 5,
+ HUION_PRM_NUM
+};
+
/* Driver data */
struct huion_drvdata {
__u8 *rdesc;
int rc;
struct usb_device *usb_dev = hid_to_usb_dev(hdev);
struct huion_drvdata *drvdata = hid_get_drvdata(hdev);
- __le16 buf[6];
+ __le16 *buf = NULL;
+ size_t len;
+ s32 params[HUION_PH_ID_NUM];
+ s32 resolution;
+ __u8 *p;
+ s32 v;
/*
* Read string descriptor containing tablet parameters. The specific
* driver traffic.
* NOTE: This enables fully-functional tablet mode.
*/
+ len = HUION_PRM_NUM * sizeof(*buf);
+ buf = kmalloc(len, GFP_KERNEL);
+ if (buf == NULL) {
+ hid_err(hdev, "failed to allocate parameter buffer\n");
+ rc = -ENOMEM;
+ goto cleanup;
+ }
rc = usb_control_msg(usb_dev, usb_rcvctrlpipe(usb_dev, 0),
USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
(USB_DT_STRING << 8) + 0x64,
- 0x0409, buf, sizeof(buf),
+ 0x0409, buf, len,
USB_CTRL_GET_TIMEOUT);
- if (rc == -EPIPE)
- hid_warn(hdev, "device parameters not found\n");
- else if (rc < 0)
- hid_warn(hdev, "failed to get device parameters: %d\n", rc);
- else if (rc != sizeof(buf))
- hid_warn(hdev, "invalid device parameters\n");
- else {
- s32 params[HUION_PH_ID_NUM];
- s32 resolution;
- __u8 *p;
- s32 v;
+ if (rc == -EPIPE) {
+ hid_err(hdev, "device parameters not found\n");
+ rc = -ENODEV;
+ goto cleanup;
+ } else if (rc < 0) {
+ hid_err(hdev, "failed to get device parameters: %d\n", rc);
+ rc = -ENODEV;
+ goto cleanup;
+ } else if (rc != len) {
+ hid_err(hdev, "invalid device parameters\n");
+ rc = -ENODEV;
+ goto cleanup;
+ }
- /* Extract device parameters */
- params[HUION_PH_ID_X_LM] = le16_to_cpu(buf[1]);
- params[HUION_PH_ID_Y_LM] = le16_to_cpu(buf[2]);
- params[HUION_PH_ID_PRESSURE_LM] = le16_to_cpu(buf[4]);
- resolution = le16_to_cpu(buf[5]);
- if (resolution == 0) {
- params[HUION_PH_ID_X_PM] = 0;
- params[HUION_PH_ID_Y_PM] = 0;
- } else {
- params[HUION_PH_ID_X_PM] = params[HUION_PH_ID_X_LM] *
- 1000 / resolution;
- params[HUION_PH_ID_Y_PM] = params[HUION_PH_ID_Y_LM] *
- 1000 / resolution;
- }
+ /* Extract device parameters */
+ params[HUION_PH_ID_X_LM] = le16_to_cpu(buf[HUION_PRM_X_LM]);
+ params[HUION_PH_ID_Y_LM] = le16_to_cpu(buf[HUION_PRM_Y_LM]);
+ params[HUION_PH_ID_PRESSURE_LM] =
+ le16_to_cpu(buf[HUION_PRM_PRESSURE_LM]);
+ resolution = le16_to_cpu(buf[HUION_PRM_RESOLUTION]);
+ if (resolution == 0) {
+ params[HUION_PH_ID_X_PM] = 0;
+ params[HUION_PH_ID_Y_PM] = 0;
+ } else {
+ params[HUION_PH_ID_X_PM] = params[HUION_PH_ID_X_LM] *
+ 1000 / resolution;
+ params[HUION_PH_ID_Y_PM] = params[HUION_PH_ID_Y_LM] *
+ 1000 / resolution;
+ }
- /* Allocate fixed report descriptor */
- drvdata->rdesc = devm_kmalloc(&hdev->dev,
- sizeof(huion_tablet_rdesc_template),
- GFP_KERNEL);
- if (drvdata->rdesc == NULL) {
- hid_err(hdev, "failed to allocate fixed rdesc\n");
- return -ENOMEM;
- }
- drvdata->rsize = sizeof(huion_tablet_rdesc_template);
+ /* Allocate fixed report descriptor */
+ drvdata->rdesc = devm_kmalloc(&hdev->dev,
+ sizeof(huion_tablet_rdesc_template),
+ GFP_KERNEL);
+ if (drvdata->rdesc == NULL) {
+ hid_err(hdev, "failed to allocate fixed rdesc\n");
+ rc = -ENOMEM;
+ goto cleanup;
+ }
+ drvdata->rsize = sizeof(huion_tablet_rdesc_template);
- /* Format fixed report descriptor */
- memcpy(drvdata->rdesc, huion_tablet_rdesc_template,
- drvdata->rsize);
- for (p = drvdata->rdesc;
- p <= drvdata->rdesc + drvdata->rsize - 4;) {
- if (p[0] == 0xFE && p[1] == 0xED && p[2] == 0x1D &&
- p[3] < sizeof(params)) {
- v = params[p[3]];
- put_unaligned(cpu_to_le32(v), (s32 *)p);
- p += 4;
- } else {
- p++;
- }
+ /* Format fixed report descriptor */
+ memcpy(drvdata->rdesc, huion_tablet_rdesc_template,
+ drvdata->rsize);
+ for (p = drvdata->rdesc;
+ p <= drvdata->rdesc + drvdata->rsize - 4;) {
+ if (p[0] == 0xFE && p[1] == 0xED && p[2] == 0x1D &&
+ p[3] < sizeof(params)) {
+ v = params[p[3]];
+ put_unaligned(cpu_to_le32(v), (s32 *)p);
+ p += 4;
+ } else {
+ p++;
}
}
- return 0;
+ rc = 0;
+
+cleanup:
+ kfree(buf);
+ return rc;
}
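The logical-to-physical conversion above is a straight proportion; with hypothetical raw values (units per the device's string descriptor):

	X_LM = 32000, resolution = 4000 -> X_PM = 32000 * 1000 / 4000 = 8000
	resolution = 0                  -> X_PM = Y_PM = 0 (avoids a divide-by-zero)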
static int huion_probe(struct hid_device *hdev, const struct hid_device_id *id)
* - change the button usage range to 4-7 for the extra
* buttons
*/
- if (*rsize >= 74 &&
+ if (*rsize >= 75 &&
rdesc[61] == 0x05 && rdesc[62] == 0x08 &&
rdesc[63] == 0x19 && rdesc[64] == 0x08 &&
rdesc[65] == 0x29 && rdesc[66] == 0x0f &&
struct usb_device_descriptor *udesc;
__u16 bcdDevice, rev_maj, rev_min;
- if ((drv_data->quirks & LG_RDESC) && *rsize >= 90 && rdesc[83] == 0x26 &&
+ if ((drv_data->quirks & LG_RDESC) && *rsize >= 91 && rdesc[83] == 0x26 &&
rdesc[84] == 0x8c && rdesc[85] == 0x02) {
hid_info(hdev,
"fixing up Logitech keyboard report descriptor\n");
rdesc[84] = rdesc[89] = 0x4d;
rdesc[85] = rdesc[90] = 0x10;
}
- if ((drv_data->quirks & LG_RDESC_REL_ABS) && *rsize >= 50 &&
+ if ((drv_data->quirks & LG_RDESC_REL_ABS) && *rsize >= 51 &&
rdesc[32] == 0x81 && rdesc[33] == 0x06 &&
rdesc[49] == 0x81 && rdesc[50] == 0x06) {
hid_info(hdev,
drv_data = hid_get_drvdata(hid);
if (!drv_data) {
hid_err(hid, "Private driver data not found!\n");
- return 0;
+ return -EINVAL;
}
entry = drv_data->device_props;
if (!entry) {
hid_err(hid, "Device properties not found!\n");
- return 0;
+ return -EINVAL;
}
if (range == 0)
return;
}
- if ((dj_report->device_index < DJ_DEVICE_INDEX_MIN) ||
- (dj_report->device_index > DJ_DEVICE_INDEX_MAX)) {
- dev_err(&djrcv_hdev->dev, "%s: invalid device index:%d\n",
- __func__, dj_report->device_index);
- return;
- }
-
if (djrcv_dev->paired_dj_devices[dj_report->device_index]) {
/* The device is already known. No need to reallocate it. */
dbg_hid("%s: device is already known\n", __func__);
if (!out_buf)
return -ENOMEM;
- if (count < DJREPORT_SHORT_LENGTH - 2)
+ if (count > DJREPORT_SHORT_LENGTH - 2)
count = DJREPORT_SHORT_LENGTH - 2;
out_buf[0] = REPORT_ID_DJ_SHORT;
struct dj_receiver_dev *djrcv_dev = hid_get_drvdata(hdev);
struct dj_report *dj_report = (struct dj_report *) data;
unsigned long flags;
- bool report_processed = false;
dbg_hid("%s, size:%d\n", __func__, size);
* anything else with it.
*/
+ /* case 1) */
+ if (data[0] != REPORT_ID_DJ_SHORT)
+ return false;
+
+ if ((dj_report->device_index < DJ_DEVICE_INDEX_MIN) ||
+ (dj_report->device_index > DJ_DEVICE_INDEX_MAX)) {
+ /*
+ * Device index is wrong, bail out.
+ * This driver can safely ignore the receiver notifications,
+ * so ignore those reports too.
+ */
+ if (dj_report->device_index != DJ_RECEIVER_INDEX)
+ dev_err(&hdev->dev, "%s: invalid device index:%d\n",
+ __func__, dj_report->device_index);
+ return false;
+ }
+
spin_lock_irqsave(&djrcv_dev->lock, flags);
- if (dj_report->report_id == REPORT_ID_DJ_SHORT) {
- switch (dj_report->report_type) {
- case REPORT_TYPE_NOTIF_DEVICE_PAIRED:
- case REPORT_TYPE_NOTIF_DEVICE_UNPAIRED:
- logi_dj_recv_queue_notification(djrcv_dev, dj_report);
- break;
- case REPORT_TYPE_NOTIF_CONNECTION_STATUS:
- if (dj_report->report_params[CONNECTION_STATUS_PARAM_STATUS] ==
- STATUS_LINKLOSS) {
- logi_dj_recv_forward_null_report(djrcv_dev, dj_report);
- }
- break;
- default:
- logi_dj_recv_forward_report(djrcv_dev, dj_report);
+ switch (dj_report->report_type) {
+ case REPORT_TYPE_NOTIF_DEVICE_PAIRED:
+ case REPORT_TYPE_NOTIF_DEVICE_UNPAIRED:
+ logi_dj_recv_queue_notification(djrcv_dev, dj_report);
+ break;
+ case REPORT_TYPE_NOTIF_CONNECTION_STATUS:
+ if (dj_report->report_params[CONNECTION_STATUS_PARAM_STATUS] ==
+ STATUS_LINKLOSS) {
+ logi_dj_recv_forward_null_report(djrcv_dev, dj_report);
}
- report_processed = true;
+ break;
+ default:
+ logi_dj_recv_forward_report(djrcv_dev, dj_report);
}
spin_unlock_irqrestore(&djrcv_dev->lock, flags);
- return report_processed;
+ return true;
}
static int logi_dj_probe(struct hid_device *hdev,
#define DJ_MAX_PAIRED_DEVICES 6
#define DJ_MAX_NUMBER_NOTIFICATIONS 8
+#define DJ_RECEIVER_INDEX 0
#define DJ_DEVICE_INDEX_MIN 1
#define DJ_DEVICE_INDEX_MAX 6
if (size < 4 || ((size - 4) % 9) != 0)
return 0;
npoints = (size - 4) / 9;
+ if (npoints > 15) {
+ hid_warn(hdev, "invalid size value (%d) for TRACKPAD_REPORT_ID\n",
+ size);
+ return 0;
+ }
msc->ntouches = 0;
for (ii = 0; ii < npoints; ii++)
magicmouse_emit_touch(msc, ii, data + ii * 9 + 4);
if (size < 6 || ((size - 6) % 8) != 0)
return 0;
npoints = (size - 6) / 8;
+ if (npoints > 15) {
+ hid_warn(hdev, "invalid size value (%d) for MOUSE_REPORT_ID\n",
+ size);
+ return 0;
+ }
msc->ntouches = 0;
for (ii = 0; ii < npoints; ii++)
magicmouse_emit_touch(msc, ii, data + ii * 8 + 6);
static __u8 *mr_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
- if (*rsize >= 30 && rdesc[29] == 0x05 && rdesc[30] == 0x09) {
+ if (*rsize >= 31 && rdesc[29] == 0x05 && rdesc[30] == 0x09) {
hid_info(hdev, "fixing up button/consumer in HID report descriptor\n");
rdesc[30] = 0x0c;
}
static __u8 *pl_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
- if (*rsize >= 60 && rdesc[39] == 0x2a && rdesc[40] == 0xf5 &&
+ if (*rsize >= 62 && rdesc[39] == 0x2a && rdesc[40] == 0xf5 &&
rdesc[41] == 0x00 && rdesc[59] == 0x26 &&
rdesc[60] == 0xf9 && rdesc[61] == 0x00) {
hid_info(hdev, "fixing up Petalynx Maxter Remote report descriptor\n");
if (!data)
return 1;
+ if (size > 64) {
+ hid_warn(hdev, "invalid size value (%d) for picolcd raw event\n",
+ size);
+ return 0;
+ }
+
if (report->id == REPORT_KEY_STATE) {
if (data->input_keys)
ret = picolcd_raw_keypad(data, report, raw_data+1, size-1);
return ret;
}
- if (!test_bit(RMI_STARTED, &data->flags)) {
- hid_hw_stop(hdev);
- return -EIO;
- }
+ if (!test_bit(RMI_STARTED, &data->flags))
+ /*
+ * The device may be in the bootloader if rmi_input_configured
+ * failed to find F11 in the PDT. Print an error, but don't
+ * return an error from rmi_probe so that hidraw will be
+ * accessible from userspace. That way a userspace tool
+ * can be used to reload working firmware on the touchpad.
+ */
+ hid_err(hdev, "Device failed to be properly configured\n");
return 0;
}
ret = -EINVAL;
goto err_stop_hw;
}
- sd->hid_sensor_hub_client_devs = kzalloc(dev_cnt *
- sizeof(struct mfd_cell),
- GFP_KERNEL);
+ sd->hid_sensor_hub_client_devs = devm_kzalloc(&hdev->dev, dev_cnt *
+ sizeof(struct mfd_cell),
+ GFP_KERNEL);
if (sd->hid_sensor_hub_client_devs == NULL) {
hid_err(hdev, "Failed to allocate memory for mfd cells\n");
ret = -ENOMEM;
if (collection->type == HID_COLLECTION_PHYSICAL) {
- hsdev = kzalloc(sizeof(*hsdev), GFP_KERNEL);
+ hsdev = devm_kzalloc(&hdev->dev, sizeof(*hsdev),
+ GFP_KERNEL);
if (!hsdev) {
hid_err(hdev, "cannot allocate hid_sensor_hub_device\n");
ret = -ENOMEM;
- goto err_no_mem;
+ goto err_stop_hw;
}
hsdev->hdev = hdev;
hsdev->vendor_id = hdev->vendor;
if (last_hsdev)
last_hsdev->end_collection_index = i;
last_hsdev = hsdev;
- name = kasprintf(GFP_KERNEL, "HID-SENSOR-%x",
- collection->usage);
+ name = devm_kasprintf(&hdev->dev, GFP_KERNEL,
+ "HID-SENSOR-%x",
+ collection->usage);
if (name == NULL) {
hid_err(hdev, "Failed MFD device name\n");
ret = -ENOMEM;
- kfree(hsdev);
- goto err_no_mem;
+ goto err_stop_hw;
}
sd->hid_sensor_hub_client_devs[
sd->hid_sensor_client_cnt].id =
ret = mfd_add_devices(&hdev->dev, 0, sd->hid_sensor_hub_client_devs,
sd->hid_sensor_client_cnt, NULL, 0, NULL);
if (ret < 0)
- goto err_no_mem;
+ goto err_stop_hw;
return ret;
-err_no_mem:
- for (i = 0; i < sd->hid_sensor_client_cnt; ++i) {
- kfree(sd->hid_sensor_hub_client_devs[i].name);
- kfree(sd->hid_sensor_hub_client_devs[i].platform_data);
- }
- kfree(sd->hid_sensor_hub_client_devs);
err_stop_hw:
hid_hw_stop(hdev);
{
struct sensor_hub_data *data = hid_get_drvdata(hdev);
unsigned long flags;
- int i;
hid_dbg(hdev, " hardware removed\n");
hid_hw_close(hdev);
complete(&data->pending.ready);
spin_unlock_irqrestore(&data->lock, flags);
mfd_remove_devices(&hdev->dev);
- for (i = 0; i < data->hid_sensor_client_cnt; ++i) {
- kfree(data->hid_sensor_hub_client_devs[i].name);
- kfree(data->hid_sensor_hub_client_devs[i].platform_data);
- }
- kfree(data->hid_sensor_hub_client_devs);
hid_set_drvdata(hdev, NULL);
mutex_destroy(&data->mutex);
}
static __u8 *sp_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
- if (*rsize >= 107 && rdesc[104] == 0x26 && rdesc[105] == 0x80 &&
+ if (*rsize >= 112 && rdesc[104] == 0x26 && rdesc[105] == 0x80 &&
rdesc[106] == 0x03) {
hid_info(hdev, "fixing up Sunplus Wireless Desktop report descriptor\n");
rdesc[105] = rdesc[110] = 0x03;
data->conf |= (resol << DS1621_REG_CONFIG_RESOL_SHIFT);
i2c_smbus_write_byte_data(client, DS1621_REG_CONF, data->conf);
data->update_interval = ds1721_convrates[resol];
+ data->zbits = 7 - resol;
mutex_unlock(&data->update_lock);
return count;
This I2C support can also be built as a module. If so, the module
will be called i2c-core.
-config I2C_ACPI
- bool "I2C ACPI support"
- select I2C
- depends on ACPI
+config ACPI_I2C_OPREGION
+ bool "ACPI I2C Operation region support"
+ depends on I2C=y && ACPI
default y
help
- Say Y here if you want to enable ACPI I2C support. This includes support
- for automatic enumeration of I2C slave devices and support for ACPI I2C
- Operation Regions. Operation Regions allow firmware (BIOS) code to
- access I2C slave devices, such as smart batteries through an I2C host
- controller driver.
+ Say Y here if you want to enable ACPI I2C operation region support.
+ Operation Regions allow firmware (BIOS) code to access I2C slave devices,
+ such as smart batteries, through an I2C host controller driver.
if I2C
#
i2ccore-y := i2c-core.o
-i2ccore-$(CONFIG_I2C_ACPI) += i2c-acpi.o
+i2ccore-$(CONFIG_ACPI) += i2c-acpi.o
obj-$(CONFIG_I2C_BOARDINFO) += i2c-boardinfo.o
obj-$(CONFIG_I2C) += i2ccore.o
unsigned twi_cwgr_reg;
struct at91_twi_pdata *pdata;
bool use_dma;
+ bool recv_len_abort;
struct at91_twi_dma dma;
};
*dev->buf = at91_twi_read(dev, AT91_TWI_RHR) & 0xff;
--dev->buf_len;
+ /* return if aborting, we only needed to read RHR to clear RXRDY */
+ if (dev->recv_len_abort)
+ return;
+
/* handle I2C_SMBUS_BLOCK_DATA */
if (unlikely(dev->msg->flags & I2C_M_RECV_LEN)) {
- dev->msg->flags &= ~I2C_M_RECV_LEN;
- dev->buf_len += *dev->buf;
- dev->msg->len = dev->buf_len + 1;
- dev_dbg(dev->dev, "received block length %d\n", dev->buf_len);
+ /* ensure length byte is a valid value */
+ if (*dev->buf <= I2C_SMBUS_BLOCK_MAX && *dev->buf > 0) {
+ dev->msg->flags &= ~I2C_M_RECV_LEN;
+ dev->buf_len += *dev->buf;
+ dev->msg->len = dev->buf_len + 1;
+ dev_dbg(dev->dev, "received block length %d\n",
+ dev->buf_len);
+ } else {
+ /* abort and send the stop by reading one more byte */
+ dev->recv_len_abort = true;
+ dev->buf_len = 1;
+ }
}
/* send stop if second but last byte has been read */
}
}
- ret = wait_for_completion_interruptible_timeout(&dev->cmd_complete,
- dev->adapter.timeout);
+ ret = wait_for_completion_io_timeout(&dev->cmd_complete,
+ dev->adapter.timeout);
if (ret == 0) {
dev_err(dev->dev, "controller timed out\n");
at91_init_twi_bus(dev);
ret = -EIO;
goto error;
}
+ if (dev->recv_len_abort) {
+ dev_err(dev->dev, "invalid smbus block length recvd\n");
+ ret = -EPROTO;
+ goto error;
+ }
+
dev_dbg(dev->dev, "transfer complete\n");
return 0;
dev->buf_len = m_start->len;
dev->buf = m_start->buf;
dev->msg = m_start;
+ dev->recv_len_abort = false;
ret = at91_do_twi_transfer(dev);
/* Older devices have their ID defined in <linux/pci_ids.h> */
#define PCI_DEVICE_ID_INTEL_BAYTRAIL_SMBUS 0x0f12
+#define PCI_DEVICE_ID_INTEL_BRASWELL_SMBUS 0x2292
#define PCI_DEVICE_ID_INTEL_COUGARPOINT_SMBUS 0x1c22
#define PCI_DEVICE_ID_INTEL_PATSBURG_SMBUS 0x1d22
/* Patsburg also has three 'Integrated Device Function' SMBus controllers */
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_WILDCATPOINT_SMBUS) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_SMBUS) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BAYTRAIL_SMBUS) },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BRASWELL_SMBUS) },
{ 0, }
};
}
tclk = clk_get_rate(drv_data->clk);
- rc = of_property_read_u32(np, "clock-frequency", &bus_freq);
- if (rc)
+ if (of_property_read_u32(np, "clock-frequency", &bus_freq))
bus_freq = 100000; /* 100kHz by default */
if (!mv64xxx_find_baud_factors(bus_freq, tclk,
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
+#include <linux/spinlock.h>
/* register offsets */
#define ICSCR 0x00 /* slave ctrl */
struct i2c_msg *msg;
struct clk *clk;
+ spinlock_t lock;
wait_queue_head_t wait;
int pos;
struct rcar_i2c_priv *priv = ptr;
u32 msr;
+ /*-------------- spin lock -----------------*/
+ spin_lock(&priv->lock);
+
msr = rcar_i2c_read(priv, ICMSR);
+ /* Only handle interrupts that are currently enabled */
+ msr &= rcar_i2c_read(priv, ICMIER);
+
/* Arbitration lost */
if (msr & MAL) {
rcar_i2c_flags_set(priv, (ID_DONE | ID_ARBLOST));
goto out;
}
- /* Stop */
- if (msr & MST) {
- rcar_i2c_flags_set(priv, ID_DONE);
- goto out;
- }
-
/* Nack */
if (msr & MNR) {
/* go to stop phase */
goto out;
}
+ /* Stop */
+ if (msr & MST) {
+ rcar_i2c_flags_set(priv, ID_DONE);
+ goto out;
+ }
+
if (rcar_i2c_is_recv(priv))
rcar_i2c_flags_set(priv, rcar_i2c_irq_recv(priv, msr));
else
wake_up(&priv->wait);
}
+ spin_unlock(&priv->lock);
+ /*-------------- spin unlock -----------------*/
+
return IRQ_HANDLED;
}
{
struct rcar_i2c_priv *priv = i2c_get_adapdata(adap);
struct device *dev = rcar_i2c_priv_to_dev(priv);
+ unsigned long flags;
int i, ret, timeout;
pm_runtime_get_sync(dev);
+ /*-------------- spin lock -----------------*/
+ spin_lock_irqsave(&priv->lock, flags);
+
rcar_i2c_init(priv);
/* start clock */
rcar_i2c_write(priv, ICCCR, priv->icccr);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ /*-------------- spin unlock -----------------*/
+
ret = rcar_i2c_bus_barrier(priv);
if (ret < 0)
goto out;
break;
}
+ /*-------------- spin lock -----------------*/
+ spin_lock_irqsave(&priv->lock, flags);
+
/* init each data */
priv->msg = &msgs[i];
priv->pos = 0;
ret = rcar_i2c_prepare_msg(priv);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ /*-------------- spin unlock -----------------*/
+
if (ret < 0)
break;
irq = platform_get_irq(pdev, 0);
init_waitqueue_head(&priv->wait);
+ spin_lock_init(&priv->lock);
adap = &priv->adap;
adap->nr = pdev->id;
/* ack interrupt */
i2c_writel(i2c, REG_INT_MBRF, REG_IPD);
+ /* Can only handle a maximum of 32 bytes at a time */
+ if (len > 32)
+ len = 32;
+
/* read the data from receive buffer */
for (i = 0; i < len; ++i) {
if (i % 4 == 0)
dev_warn(&adap->dev, "failed to enumerate I2C slaves\n");
}
+#ifdef CONFIG_ACPI_I2C_OPREGION
static int acpi_gsb_i2c_read_bytes(struct i2c_client *client,
u8 cmd, u8 *data, u8 data_len)
{
acpi_bus_detach_private_data(handle);
}
+#endif
return err;
}
+static int mlx4_ib_tunnel_steer_add(struct ib_qp *qp, struct ib_flow_attr *flow_attr,
+ u64 *reg_id)
+{
+ void *ib_flow;
+ union ib_flow_spec *ib_spec;
+ struct mlx4_dev *dev = to_mdev(qp->device)->dev;
+ int err = 0;
+
+ if (dev->caps.tunnel_offload_mode != MLX4_TUNNEL_OFFLOAD_MODE_VXLAN)
+ return 0; /* do nothing */
+
+ ib_flow = flow_attr + 1;
+ ib_spec = (union ib_flow_spec *)ib_flow;
+
+ if (ib_spec->type != IB_FLOW_SPEC_ETH || flow_attr->num_of_specs != 1)
+ return 0; /* do nothing */
+
+ err = mlx4_tunnel_steer_add(to_mdev(qp->device)->dev, ib_spec->eth.val.dst_mac,
+ flow_attr->port, qp->qp_num,
+ MLX4_DOMAIN_UVERBS | (flow_attr->priority & 0xff),
+ reg_id);
+ return err;
+}
+
static struct ib_flow *mlx4_ib_create_flow(struct ib_qp *qp,
struct ib_flow_attr *flow_attr,
int domain)
i++;
}
+ if (i < ARRAY_SIZE(type) && flow_attr->type == IB_FLOW_ATTR_NORMAL) {
+ err = mlx4_ib_tunnel_steer_add(qp, flow_attr, &mflow->reg_id[i]);
+ if (err)
+ goto err_free;
+ }
+
return &mflow->ibflow;
err_free:
}
}
- if (qp->ibqp.qp_type == IB_QPT_RAW_PACKET)
+ if (qp->ibqp.qp_type == IB_QPT_RAW_PACKET) {
context->pri_path.ackto = (context->pri_path.ackto & 0xf8) |
MLX4_IB_LINK_TYPE_ETH;
+ if (dev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) {
+ /* set QP to receive both tunneled & non-tunneled packets */
+ if (!(context->flags & (1 << MLX4_RSS_QPC_FLAG_OFFSET)))
+ context->srqn = cpu_to_be32(7 << 28);
+ }
+ }
if (ibqp->qp_type == IB_QPT_UD && (new_state == IB_QPS_RTR)) {
int is_eth = rdma_port_get_link_layer(
}
EXPORT_SYMBOL(input_mt_report_pointer_emulation);
+static void __input_mt_drop_unused(struct input_dev *dev, struct input_mt *mt)
+{
+ int i;
+
+ for (i = 0; i < mt->num_slots; i++) {
+ if (!input_mt_is_used(mt, &mt->slots[i])) {
+ input_mt_slot(dev, i);
+ input_event(dev, EV_ABS, ABS_MT_TRACKING_ID, -1);
+ }
+ }
+}
+
/**
* input_mt_drop_unused() - Inactivate slots not seen in this frame
* @dev: input device with allocated MT slots
void input_mt_drop_unused(struct input_dev *dev)
{
struct input_mt *mt = dev->mt;
- int i;
- if (!mt)
- return;
-
- for (i = 0; i < mt->num_slots; i++) {
- if (!input_mt_is_used(mt, &mt->slots[i])) {
- input_mt_slot(dev, i);
- input_event(dev, EV_ABS, ABS_MT_TRACKING_ID, -1);
- }
+ if (mt) {
+ __input_mt_drop_unused(dev, mt);
+ mt->frame++;
}
-
- mt->frame++;
}
EXPORT_SYMBOL(input_mt_drop_unused);
return;
if (mt->flags & INPUT_MT_DROP_UNUSED)
- input_mt_drop_unused(dev);
+ __input_mt_drop_unused(dev, mt);
if ((mt->flags & INPUT_MT_POINTER) && !(mt->flags & INPUT_MT_SEMI_MT))
use_count = true;
input_mt_report_pointer_emulation(dev, use_count);
+
+ mt->frame++;
}
EXPORT_SYMBOL(input_mt_sync_frame);
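A minimal driver-side sketch of the frame flow this refactoring serves, using the public helpers from <linux/input/mt.h>. Assumptions: contacts/n are hypothetical driver state, and slots are assigned by index here, whereas real drivers typically match contacts to slots via tracking IDs.

	static void example_report_frame(struct input_dev *dev,
					 const struct input_mt_pos *contacts,
					 int n)
	{
		int i;

		for (i = 0; i < n; i++) {
			input_mt_slot(dev, i);
			input_mt_report_slot_state(dev, MT_TOOL_FINGER, true);
			input_report_abs(dev, ABS_MT_POSITION_X, contacts[i].x);
			input_report_abs(dev, ABS_MT_POSITION_Y, contacts[i].y);
		}

		input_mt_sync_frame(dev);	/* drops unused slots, bumps frame */
		input_sync(dev);
	}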
#define CAP1106_REG_SENSOR_CONFIG 0x22
#define CAP1106_REG_SENSOR_CONFIG2 0x23
#define CAP1106_REG_SAMPLING_CONFIG 0x24
-#define CAP1106_REG_CALIBRATION 0x25
-#define CAP1106_REG_INT_ENABLE 0x26
+#define CAP1106_REG_CALIBRATION 0x26
+#define CAP1106_REG_INT_ENABLE 0x27
#define CAP1106_REG_REPEAT_RATE 0x28
#define CAP1106_REG_MT_CONFIG 0x2a
#define CAP1106_REG_MT_PATTERN_CONFIG 0x2b
}
if (pdata->clustered_irq > 0) {
- err = request_irq(pdata->clustered_irq,
+ err = request_any_context_irq(pdata->clustered_irq,
matrix_keypad_interrupt,
pdata->clustered_irq_flags,
"matrix-keypad", keypad);
- if (err) {
+ if (err < 0) {
dev_err(&pdev->dev,
"Unable to acquire clustered interrupt\n");
goto err_free_rows;
}
} else {
for (i = 0; i < pdata->num_row_gpios; i++) {
- err = request_irq(gpio_to_irq(pdata->row_gpios[i]),
+ err = request_any_context_irq(
+ gpio_to_irq(pdata->row_gpios[i]),
matrix_keypad_interrupt,
IRQF_TRIGGER_RISING |
IRQF_TRIGGER_FALLING,
"matrix-keypad", keypad);
- if (err) {
+ if (err < 0) {
dev_err(&pdev->dev,
"Unable to acquire interrupt for GPIO line %i\n",
pdata->row_gpios[i]);
return 0;
}
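The switch to err < 0 above is needed because request_any_context_irq(), unlike request_irq(), returns IRQC_IS_HARDIRQ (0) or IRQC_IS_NESTED (1) on success, so a plain if (err) would treat a nested-threaded success as a failure:

	err = request_any_context_irq(irq, handler, flags,
				      "matrix-keypad", keypad);
	if (err < 0)	/* 0 and 1 both mean success */
		return err;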
- psmouse_info(psmouse,
- "Unknown ALPS touchpad: E7=%3ph, EC=%3ph\n", e7, ec);
+ psmouse_dbg(psmouse,
+ "Likely not an ALPS touchpad: E7=%3ph, EC=%3ph\n", e7, ec);
return -EINVAL;
}
dev2->keybit[BIT_WORD(BTN_LEFT)] =
BIT_MASK(BTN_LEFT) | BIT_MASK(BTN_MIDDLE) | BIT_MASK(BTN_RIGHT);
+ __set_bit(INPUT_PROP_POINTER, dev2->propbit);
+ if (priv->flags & ALPS_DUALPOINT)
+ __set_bit(INPUT_PROP_POINTING_STICK, dev2->propbit);
+
if (input_register_device(priv->dev2))
goto init_fail;
#include <linux/input/mt.h>
#include <linux/serio.h>
#include <linux/libps2.h>
+#include <asm/unaligned.h>
#include "psmouse.h"
#include "elantech.h"
input_sync(dev);
}
+static void elantech_report_trackpoint(struct psmouse *psmouse,
+ int packet_type)
+{
+ /*
+ * byte 0: 0 0 sx sy 0 M R L
+ * byte 1:~sx 0 0 0 0 0 0 0
+ * byte 2:~sy 0 0 0 0 0 0 0
+ * byte 3: 0 0 ~sy ~sx 0 1 1 0
+ * byte 4: x7 x6 x5 x4 x3 x2 x1 x0
+ * byte 5: y7 y6 y5 y4 y3 y2 y1 y0
+ *
+ * x and y are written in two's complement spread
+ * over 9 bits, with sx/sy the respective sign (top) bits and
+ * x7..x0 and y7..y0 the lower bits.
+ * The sign of y is opposite to what the input driver
+ * expects for a relative movement
+ */
+
+ struct elantech_data *etd = psmouse->private;
+ struct input_dev *tp_dev = etd->tp_dev;
+ unsigned char *packet = psmouse->packet;
+ int x, y;
+ u32 t;
+
+ if (dev_WARN_ONCE(&psmouse->ps2dev.serio->dev,
+ !tp_dev,
+ psmouse_fmt("Unexpected trackpoint message\n"))) {
+ if (etd->debug == 1)
+ elantech_packet_dump(psmouse);
+ return;
+ }
+
+ t = get_unaligned_le32(&packet[0]);
+
+ switch (t & ~7U) {
+ case 0x06000030U:
+ case 0x16008020U:
+ case 0x26800010U:
+ case 0x36808000U:
+ x = packet[4] - (int)((packet[1]^0x80) << 1);
+ y = (int)((packet[2]^0x80) << 1) - packet[5];
+
+ input_report_key(tp_dev, BTN_LEFT, packet[0] & 0x01);
+ input_report_key(tp_dev, BTN_RIGHT, packet[0] & 0x02);
+ input_report_key(tp_dev, BTN_MIDDLE, packet[0] & 0x04);
+
+ input_report_rel(tp_dev, REL_X, x);
+ input_report_rel(tp_dev, REL_Y, y);
+
+ input_sync(tp_dev);
+
+ break;
+
+ default:
+ /* Dump unexpected packet sequences if debug=1 (default) */
+ if (etd->debug == 1)
+ elantech_packet_dump(psmouse);
+
+ break;
+ }
+}
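Worked sign decode for the expressions above:

	packet[1] = 0x80 (~sx set):   x = packet[4] - ((0x80 ^ 0x80) << 1) = packet[4]
	packet[1] = 0x00 (~sx clear): x = packet[4] - ((0x00 ^ 0x80) << 1) = packet[4] - 256
	e.g. packet[4] = 0xf6 gives x = 246 - 256 = -10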
+
/*
* Interpret complete data packets and report absolute mode input events for
* hardware version 3. (12 byte packets for two fingers)
if ((packet[0] & 0x0c) == 0x0c && (packet[3] & 0xce) == 0x0c)
return PACKET_V3_TAIL;
+ if ((packet[3] & 0x0f) == 0x06)
+ return PACKET_TRACKPOINT;
}
return PACKET_UNKNOWN;
case 3:
packet_type = elantech_packet_check_v3(psmouse);
- /* ignore debounce */
- if (packet_type == PACKET_DEBOUNCE)
- return PSMOUSE_FULL_PACKET;
-
- if (packet_type == PACKET_UNKNOWN)
+ switch (packet_type) {
+ case PACKET_UNKNOWN:
return PSMOUSE_BAD_DATA;
- elantech_report_absolute_v3(psmouse, packet_type);
+ case PACKET_DEBOUNCE:
+ /* ignore debounce */
+ break;
+
+ case PACKET_TRACKPOINT:
+ elantech_report_trackpoint(psmouse, packet_type);
+ break;
+
+ default:
+ elantech_report_absolute_v3(psmouse, packet_type);
+ break;
+ }
+
break;
case 4:
* Asus UX31 0x361f00 20, 15, 0e clickpad
* Asus UX32VD 0x361f02 00, 15, 0e clickpad
* Avatar AVIU-145A2 0x361f00 ? clickpad
+ * Fujitsu H730 0x570f00 c0, 14, 0c 3 hw buttons (**)
* Gigabyte U2442 0x450f01 58, 17, 0c 2 hw buttons
* Lenovo L430 0x350f02 b9, 15, 0c 2 hw buttons (*)
+ * Lenovo L530 0x350f02 b9, 15, 0c 2 hw buttons (*)
* Samsung NF210 0x150b00 78, 14, 0a 2 hw buttons
* Samsung NP770Z5E 0x575f01 10, 15, 0f clickpad
* Samsung NP700Z5B 0x361f06 21, 15, 0f clickpad
* Samsung RF710 0x450f00 ? 2 hw buttons
* System76 Pangolin 0x250f01 ? 2 hw buttons
* (*) + 3 trackpoint buttons
+ * (**) + 0 trackpoint buttons
+ * Note: Lenovo L430 and Lenovo L530 have the same fw_version/caps
*/
static void elantech_set_buttonpad_prop(struct psmouse *psmouse)
{
if (param[1] == 0)
return true;
+ /*
+ * Some models have a revision higher then 20. Meaning param[2] may
+ * be 10 or 20, skip the rates check for these.
+ */
+ if (param[0] == 0x46 && (param[1] & 0xef) == 0x0f && param[2] < 40)
+ return true;
+
for (i = 0; i < ARRAY_SIZE(rates); i++)
if (param[2] == rates[i])
return false;
*/
static void elantech_disconnect(struct psmouse *psmouse)
{
+ struct elantech_data *etd = psmouse->private;
+
+ if (etd->tp_dev)
+ input_unregister_device(etd->tp_dev);
sysfs_remove_group(&psmouse->ps2dev.serio->dev.kobj,
&elantech_attr_group);
kfree(psmouse->private);
int elantech_init(struct psmouse *psmouse)
{
struct elantech_data *etd;
- int i, error;
+ int i;
+ int error = -EINVAL;
unsigned char param[3];
+ struct input_dev *tp_dev;
psmouse->private = etd = kzalloc(sizeof(struct elantech_data), GFP_KERNEL);
if (!etd)
goto init_fail;
}
+ /* The MSB indicates the presence of the trackpoint */
+ if ((etd->capabilities[0] & 0x80) == 0x80) {
+ tp_dev = input_allocate_device();
+
+ if (!tp_dev) {
+ error = -ENOMEM;
+ goto init_fail_tp_alloc;
+ }
+
+ etd->tp_dev = tp_dev;
+ snprintf(etd->tp_phys, sizeof(etd->tp_phys), "%s/input1",
+ psmouse->ps2dev.serio->phys);
+ tp_dev->phys = etd->tp_phys;
+ tp_dev->name = "Elantech PS/2 TrackPoint";
+ tp_dev->id.bustype = BUS_I8042;
+ tp_dev->id.vendor = 0x0002;
+ tp_dev->id.product = PSMOUSE_ELANTECH;
+ tp_dev->id.version = 0x0000;
+ tp_dev->dev.parent = &psmouse->ps2dev.serio->dev;
+ tp_dev->evbit[0] = BIT_MASK(EV_KEY) | BIT_MASK(EV_REL);
+ tp_dev->relbit[BIT_WORD(REL_X)] =
+ BIT_MASK(REL_X) | BIT_MASK(REL_Y);
+ tp_dev->keybit[BIT_WORD(BTN_LEFT)] =
+ BIT_MASK(BTN_LEFT) | BIT_MASK(BTN_MIDDLE) |
+ BIT_MASK(BTN_RIGHT);
+
+ __set_bit(INPUT_PROP_POINTER, tp_dev->propbit);
+ __set_bit(INPUT_PROP_POINTING_STICK, tp_dev->propbit);
+
+ error = input_register_device(etd->tp_dev);
+ if (error < 0)
+ goto init_fail_tp_reg;
+ }
+
psmouse->protocol_handler = elantech_process_byte;
psmouse->disconnect = elantech_disconnect;
psmouse->reconnect = elantech_reconnect;
psmouse->pktsize = etd->hw_version > 1 ? 6 : 4;
return 0;
-
+ init_fail_tp_reg:
+ input_free_device(tp_dev);
+ init_fail_tp_alloc:
+ sysfs_remove_group(&psmouse->ps2dev.serio->dev.kobj,
+ &elantech_attr_group);
init_fail:
+ psmouse_reset(psmouse);
kfree(etd);
- return -1;
+ return error;
}
#define PACKET_V4_HEAD 0x05
#define PACKET_V4_MOTION 0x06
#define PACKET_V4_STATUS 0x07
+#define PACKET_TRACKPOINT 0x08
/*
* track up to 5 fingers for v4 hardware
};
struct elantech_data {
+ struct input_dev *tp_dev; /* Relative device for trackpoint */
+ char tp_phys[32];
unsigned char reg_07;
unsigned char reg_10;
unsigned char reg_11;
__set_bit(REL_X, input_dev->relbit);
__set_bit(REL_Y, input_dev->relbit);
+ __set_bit(INPUT_PROP_POINTER, input_dev->propbit);
+
psmouse->set_rate = psmouse_set_rate;
psmouse->set_resolution = psmouse_set_resolution;
psmouse->poll = psmouse_poll;
((buf[0] & 0x04) >> 1) |
((buf[3] & 0x04) >> 2));
+ if ((SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) ||
+ SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c)) &&
+ hw->w == 2) {
+ synaptics_parse_agm(buf, priv, hw);
+ return 1;
+ }
+
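+ /* 13-bit absolute coordinates: bit 12 from buf[3], bits 11:8 from buf[1] */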
+ hw->x = (((buf[3] & 0x10) << 8) |
+ ((buf[1] & 0x0f) << 8) |
+ buf[4]);
+ hw->y = (((buf[3] & 0x20) << 7) |
+ ((buf[1] & 0xf0) << 4) |
+ buf[5]);
+ hw->z = buf[2];
+
hw->left = (buf[0] & 0x01) ? 1 : 0;
hw->right = (buf[0] & 0x02) ? 1 : 0;
- if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) {
+ if (SYN_CAP_FORCEPAD(priv->ext_cap_0c)) {
+ /*
+ * ForcePads, like Clickpads, use middle button
+ * bits to report primary button clicks.
+ * Unfortunately they report the primary button
+ * not only when the user presses on the pad above
+ * a certain threshold, but also when there is more
+ * than one finger on the touchpad, which interferes
+ * with our multi-finger gestures.
+ */
+ if (hw->z == 0) {
+ /* No contacts */
+ priv->press = priv->report_press = false;
+ } else if (hw->w >= 4 && ((buf[0] ^ buf[3]) & 0x01)) {
+ /*
+ * Single-finger touch with pressure above
+ * the threshold. If pressure stays long
+ * enough, we'll start reporting primary
+ * button. We rely on the device continuing
+ * to send data even if the finger does not
+ * move.
+ */
+ if (!priv->press) {
+ priv->press_start = jiffies;
+ priv->press = true;
+ } else if (time_after(jiffies,
+ priv->press_start +
+ msecs_to_jiffies(50))) {
+ priv->report_press = true;
+ }
+ } else {
+ priv->press = false;
+ }
+
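+ /* BTN_LEFT is asserted only after ~50 ms of sustained pressure */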
+ hw->left = priv->report_press;
+
+ } else if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) {
/*
* Clickpad's button is transmitted as middle button,
* however, since it is primary button, we will report
hw->down = ((buf[0] ^ buf[3]) & 0x02) ? 1 : 0;
}
- if ((SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) ||
- SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c)) &&
- hw->w == 2) {
- synaptics_parse_agm(buf, priv, hw);
- return 1;
- }
-
- hw->x = (((buf[3] & 0x10) << 8) |
- ((buf[1] & 0x0f) << 8) |
- buf[4]);
- hw->y = (((buf[3] & 0x20) << 7) |
- ((buf[1] & 0xf0) << 4) |
- buf[5]);
- hw->z = buf[2];
-
if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) &&
((buf[0] ^ buf[3]) & 0x02)) {
switch (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) & ~0x01) {
* 2 0x08 image sensor image sensor tracks 5 fingers, but only
* reports 2.
* 2 0x20 report min query 0x0f gives min coord reported
+ * 2 0x80 forcepad forcepad is a variant of clickpad that
+ * does not have physical buttons but rather
+ * uses pressure above a certain threshold to
+ * report primary clicks. Forcepads also have
+ * the clickpad bit set.
*/
#define SYN_CAP_CLICKPAD(ex0c) ((ex0c) & 0x100000) /* 1-button ClickPad */
#define SYN_CAP_CLICKPAD2BTN(ex0c) ((ex0c) & 0x000100) /* 2-button ClickPad */
#define SYN_CAP_ADV_GESTURE(ex0c) ((ex0c) & 0x080000)
#define SYN_CAP_REDUCED_FILTERING(ex0c) ((ex0c) & 0x000400)
#define SYN_CAP_IMAGE_SENSOR(ex0c) ((ex0c) & 0x000800)
+#define SYN_CAP_FORCEPAD(ex0c) ((ex0c) & 0x008000)
/* synaptics modes query bits */
#define SYN_MODE_ABSOLUTE(m) ((m) & (1 << 7))
*/
struct synaptics_hw_state agm;
bool agm_pending; /* new AGM packet received */
+
+ /* ForcePad handling */
+ unsigned long press_start;
+ bool press;
+ bool report_press;
};
void synaptics_module_init(void);
__set_bit(EV_REL, input_dev->evbit);
__set_bit(REL_X, input_dev->relbit);
__set_bit(REL_Y, input_dev->relbit);
+ __set_bit(INPUT_PROP_POINTING_STICK, input_dev->propbit);
input_set_abs_params(input_dev, ABS_PRESSURE, 0, 127, 0, 0);
} else {
input_set_abs_params(input_dev, ABS_X,
__set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit);
}
+ if (synusb->flags & SYNUSB_TOUCHSCREEN)
+ __set_bit(INPUT_PROP_DIRECT, input_dev->propbit);
+ else
+ __set_bit(INPUT_PROP_POINTER, input_dev->propbit);
+
__set_bit(BTN_LEFT, input_dev->keybit);
__set_bit(BTN_RIGHT, input_dev->keybit);
__set_bit(BTN_MIDDLE, input_dev->keybit);
if ((button_info & 0x0f) >= 3)
__set_bit(BTN_MIDDLE, psmouse->dev->keybit);
+ __set_bit(INPUT_PROP_POINTER, psmouse->dev->propbit);
+ __set_bit(INPUT_PROP_POINTING_STICK, psmouse->dev->propbit);
+
trackpoint_defaults(psmouse->private);
error = trackpoint_power_on_reset(&psmouse->ps2dev);
#define I8042_MUX_PHYS_DESC "sparcps2/serio%d"
static void __iomem *kbd_iobase;
-static struct resource *kbd_res;
#define I8042_COMMAND_REG (kbd_iobase + 0x64UL)
#define I8042_DATA_REG (kbd_iobase + 0x60UL)
#ifdef CONFIG_PCI
+static struct resource *kbd_res;
+
#define OBP_PS2KBD_NAME1 "kb_ps2"
#define OBP_PS2KBD_NAME2 "keyboard"
#define OBP_PS2MS_NAME1 "kdmouse"
#include <linux/init.h>
#include <linux/serio.h>
#include <linux/tty.h>
+#include <linux/compat.h>
MODULE_AUTHOR("Vojtech Pavlik <vojtech@ucw.cz>");
MODULE_DESCRIPTION("Input device TTY line discipline");
return 0;
}
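+/*
+ * The type word packs the serio id as proto in bits 7:0, id in bits 15:8
+ * and extra in bits 23:16; e.g. 0x00030201 sets extra 3, id 2, proto 1.
+ */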
+static void serport_set_type(struct tty_struct *tty, unsigned long type)
+{
+ struct serport *serport = tty->disc_data;
+
+ serport->id.proto = type & 0x000000ff;
+ serport->id.id = (type & 0x0000ff00) >> 8;
+ serport->id.extra = (type & 0x00ff0000) >> 16;
+}
+
/*
 * serport_ldisc_ioctl() allows setting the port protocol and device ID
*/
-static int serport_ldisc_ioctl(struct tty_struct * tty, struct file * file, unsigned int cmd, unsigned long arg)
+static int serport_ldisc_ioctl(struct tty_struct *tty, struct file *file,
+ unsigned int cmd, unsigned long arg)
{
- struct serport *serport = (struct serport*) tty->disc_data;
- unsigned long type;
-
if (cmd == SPIOCSTYPE) {
+ unsigned long type;
+
if (get_user(type, (unsigned long __user *) arg))
return -EFAULT;
- serport->id.proto = type & 0x000000ff;
- serport->id.id = (type & 0x0000ff00) >> 8;
- serport->id.extra = (type & 0x00ff0000) >> 16;
+ serport_set_type(tty, type);
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+#ifdef CONFIG_COMPAT
+#define COMPAT_SPIOCSTYPE _IOW('q', 0x01, compat_ulong_t)
+static long serport_ldisc_compat_ioctl(struct tty_struct *tty,
+ struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ if (cmd == COMPAT_SPIOCSTYPE) {
+ void __user *uarg = compat_ptr(arg);
+ compat_ulong_t compat_type;
+
+ if (get_user(compat_type, (compat_ulong_t __user *)uarg))
+ return -EFAULT;
+ serport_set_type(tty, compat_type);
return 0;
}
return -EINVAL;
}
+#endif
static void serport_ldisc_write_wakeup(struct tty_struct * tty)
{
.close = serport_ldisc_close,
.read = serport_ldisc_read,
.ioctl = serport_ldisc_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = serport_ldisc_compat_ioctl,
+#endif
.receive_buf = serport_ldisc_receive,
.write_wakeup = serport_ldisc_write_wakeup
};
count = data->msg_buf[0];
if (count == 0) {
- dev_warn(dev, "Interrupt triggered but zero messages\n");
+ /*
+ * This condition is caused by the CHG line being configured
+ * in Mode 0. It results in unnecessary I2C operations but it
+ * is benign.
+ */
+ dev_dbg(dev, "Interrupt triggered but zero messages\n");
return IRQ_NONE;
} else if (count > data->max_reportid) {
dev_err(dev, "T44 count %d exceeded max report id\n", count);
return 0;
}
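+/* May be called repeatedly: the input_dev pointer is cleared once unregistered */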
-static void mxt_free_object_table(struct mxt_data *data)
+static void mxt_free_input_device(struct mxt_data *data)
{
- input_unregister_device(data->input_dev);
- data->input_dev = NULL;
+ if (data->input_dev) {
+ input_unregister_device(data->input_dev);
+ data->input_dev = NULL;
+ }
+}
+static void mxt_free_object_table(struct mxt_data *data)
+{
kfree(data->object_table);
data->object_table = NULL;
kfree(data->msg_buf);
ret = mxt_lookup_bootloader_address(data, 0);
if (ret)
goto release_firmware;
+
+ mxt_free_input_device(data);
+ mxt_free_object_table(data);
} else {
enable_irq(data->irq);
}
- mxt_free_object_table(data);
reinit_completion(&data->bl_completion);
ret = mxt_check_bootloader(data, MXT_WAITING_BOOTLOAD_CMD, false);
return 0;
err_free_object:
+ mxt_free_input_device(data);
mxt_free_object_table(data);
err_free_irq:
free_irq(client->irq, data);
sysfs_remove_group(&client->dev.kobj, &mxt_attr_group);
free_irq(data->irq, data);
- input_unregister_device(data->input_dev);
+ mxt_free_input_device(data);
mxt_free_object_table(data);
kfree(data);
*/
static int rpu = 8;
module_param(rpu, int, 0);
-MODULE_PARM_DESC(rpu, "Set internal pull up resitor for pen detect.");
+MODULE_PARM_DESC(rpu, "Set internal pull up resistor for pen detect.");
/*
* Set current used for pressure measurement.
*/
static int rpu = 8;
module_param(rpu, int, 0);
-MODULE_PARM_DESC(rpu, "Set internal pull up resitor for pen detect.");
+MODULE_PARM_DESC(rpu, "Set internal pull up resistor for pen detect.");
/*
* Set current used for pressure measurement.
static void cleanup_domain(struct protection_domain *domain)
{
- struct iommu_dev_data *dev_data, *next;
+ struct iommu_dev_data *entry;
unsigned long flags;
write_lock_irqsave(&amd_iommu_devtable_lock, flags);
- list_for_each_entry_safe(dev_data, next, &domain->dev_list, list) {
- __detach_device(dev_data);
- atomic_set(&dev_data->bind, 0);
+ while (!list_empty(&domain->dev_list)) {
+ entry = list_first_entry(&domain->dev_list,
+ struct iommu_dev_data, list);
+ __detach_device(entry);
+ atomic_set(&entry->bind, 0);
}
write_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
#define ID0_CTTW (1 << 14)
#define ID0_NUMIRPT_SHIFT 16
#define ID0_NUMIRPT_MASK 0xff
+#define ID0_NUMSIDB_SHIFT 9
+#define ID0_NUMSIDB_MASK 0xf
#define ID0_NUMSMRG_SHIFT 0
#define ID0_NUMSMRG_MASK 0xff
master->of_node = masterspec->np;
master->cfg.num_streamids = masterspec->args_count;
- for (i = 0; i < master->cfg.num_streamids; ++i)
- master->cfg.streamids[i] = masterspec->args[i];
+ for (i = 0; i < master->cfg.num_streamids; ++i) {
+ u16 streamid = masterspec->args[i];
+ if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) &&
+ (streamid >= smmu->num_mapping_groups)) {
+ dev_err(dev,
+ "stream ID for master device %s greater than maximum allowed (%d)\n",
+ masterspec->np->name, smmu->num_mapping_groups);
+ return -ERANGE;
+ }
+ master->cfg.streamids[i] = streamid;
+ }
return insert_smmu_master(smmu, master);
}
if (fsr & FSR_IGN)
dev_err_ratelimited(smmu->dev,
- "Unexpected context fault (fsr 0x%u)\n",
+ "Unexpected context fault (fsr 0x%x)\n",
fsr);
fsynr = readl_relaxed(cb_base + ARM_SMMU_CB_FSYNR0);
reg = (TTBCR2_ADDR_36 << TTBCR2_SEP_SHIFT);
break;
case 39:
+ case 40:
reg = (TTBCR2_ADDR_40 << TTBCR2_SEP_SHIFT);
break;
case 42:
reg |= (TTBCR2_ADDR_36 << TTBCR2_PASIZE_SHIFT);
break;
case 39:
+ case 40:
reg |= (TTBCR2_ADDR_40 << TTBCR2_PASIZE_SHIFT);
break;
case 42:
reg |= TTBCR_EAE |
(TTBCR_SH_IS << TTBCR_SH0_SHIFT) |
(TTBCR_RGN_WBWA << TTBCR_ORGN0_SHIFT) |
- (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT) |
- (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
+ (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT);
+
+ if (!stage1)
+ reg |= (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
+
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
/* MAIR0 (stage-1 only) */
static int arm_smmu_init_domain_context(struct iommu_domain *domain,
struct arm_smmu_device *smmu)
{
- int irq, ret, start;
+ int irq, start, ret = 0;
+ unsigned long flags;
struct arm_smmu_domain *smmu_domain = domain->priv;
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
+ spin_lock_irqsave(&smmu_domain->lock, flags);
+ if (smmu_domain->smmu)
+ goto out_unlock;
+
if (smmu->features & ARM_SMMU_FEAT_TRANS_NESTED) {
/*
* We will likely want to change this if/when KVM gets
ret = __arm_smmu_alloc_bitmap(smmu->context_map, start,
smmu->num_context_banks);
if (IS_ERR_VALUE(ret))
- return ret;
+ goto out_unlock;
cfg->cbndx = ret;
if (smmu->version == 1) {
cfg->irptndx = cfg->cbndx;
}
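+ /* Publish smmu under the lock; arm_smmu_attach_dev() reads it locklessly via ACCESS_ONCE */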
+ ACCESS_ONCE(smmu_domain->smmu) = smmu;
+ arm_smmu_init_context_bank(smmu_domain);
+ spin_unlock_irqrestore(&smmu_domain->lock, flags);
+
irq = smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
ret = request_irq(irq, arm_smmu_context_fault, IRQF_SHARED,
"arm-smmu-context-fault", domain);
dev_err(smmu->dev, "failed to request context IRQ %d (%u)\n",
cfg->irptndx, irq);
cfg->irptndx = INVALID_IRPTNDX;
- goto out_free_context;
}
- smmu_domain->smmu = smmu;
- arm_smmu_init_context_bank(smmu_domain);
return 0;
-out_free_context:
- __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
+out_unlock:
+ spin_unlock_irqrestore(&smmu_domain->lock, flags);
return ret;
}
{
pgtable_t table = pmd_pgtable(*pmd);
- pgtable_page_dtor(table);
__free_page(table);
}
void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
struct arm_smmu_smr *smrs = cfg->smrs;
+ if (!smrs)
+ return;
+
/* Invalidate the SMRs before freeing back to the allocator */
for (i = 0; i < cfg->num_streamids; ++i) {
u8 idx = smrs[i].idx;
kfree(smrs);
}
-static void arm_smmu_bypass_stream_mapping(struct arm_smmu_device *smmu,
- struct arm_smmu_master_cfg *cfg)
-{
- int i;
- void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
-
- for (i = 0; i < cfg->num_streamids; ++i) {
- u16 sid = cfg->streamids[i];
-
- writel_relaxed(S2CR_TYPE_BYPASS,
- gr0_base + ARM_SMMU_GR0_S2CR(sid));
- }
-}
-
static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
struct arm_smmu_master_cfg *cfg)
{
static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
struct arm_smmu_master_cfg *cfg)
{
+ int i;
struct arm_smmu_device *smmu = smmu_domain->smmu;
+ void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
/*
* We *must* clear the S2CR first, because freeing the SMR means
* that it can be re-allocated immediately.
*/
- arm_smmu_bypass_stream_mapping(smmu, cfg);
+ for (i = 0; i < cfg->num_streamids; ++i) {
+ u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
+
+ writel_relaxed(S2CR_TYPE_BYPASS,
+ gr0_base + ARM_SMMU_GR0_S2CR(idx));
+ }
+
arm_smmu_master_free_smrs(smmu, cfg);
}
static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
{
- int ret = -EINVAL;
+ int ret;
struct arm_smmu_domain *smmu_domain = domain->priv;
- struct arm_smmu_device *smmu;
+ struct arm_smmu_device *smmu, *dom_smmu;
struct arm_smmu_master_cfg *cfg;
- unsigned long flags;
smmu = dev_get_master_dev(dev)->archdata.iommu;
if (!smmu) {
* Sanity check the domain. We don't support domains across
* different SMMUs.
*/
- spin_lock_irqsave(&smmu_domain->lock, flags);
- if (!smmu_domain->smmu) {
+ dom_smmu = ACCESS_ONCE(smmu_domain->smmu);
+ if (!dom_smmu) {
/* Now that we have a master, we can finalise the domain */
ret = arm_smmu_init_domain_context(domain, smmu);
if (IS_ERR_VALUE(ret))
- goto err_unlock;
- } else if (smmu_domain->smmu != smmu) {
+ return ret;
+
+ dom_smmu = smmu_domain->smmu;
+ }
+
+ if (dom_smmu != smmu) {
dev_err(dev,
"cannot attach to SMMU %s whilst already attached to domain on SMMU %s\n",
- dev_name(smmu_domain->smmu->dev),
- dev_name(smmu->dev));
- goto err_unlock;
+ dev_name(smmu_domain->smmu->dev), dev_name(smmu->dev));
+ return -EINVAL;
}
- spin_unlock_irqrestore(&smmu_domain->lock, flags);
/* Looks ok, so add the device to the domain */
cfg = find_smmu_master_cfg(smmu_domain->smmu, dev);
return -ENODEV;
return arm_smmu_domain_add_master(smmu_domain, cfg);
-
-err_unlock:
- spin_unlock_irqrestore(&smmu_domain->lock, flags);
- return ret;
}
static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
return -ENOMEM;
arm_smmu_flush_pgtable(smmu, page_address(table), PAGE_SIZE);
- if (!pgtable_page_ctor(table)) {
- __free_page(table);
- return -ENOMEM;
- }
pmd_populate(NULL, pmd, table);
arm_smmu_flush_pgtable(smmu, pmd, sizeof(*pmd));
}
/* Mark all SMRn as invalid and all S2CRn as bypass */
for (i = 0; i < smmu->num_mapping_groups; ++i) {
- writel_relaxed(~SMR_VALID, gr0_base + ARM_SMMU_GR0_SMR(i));
+ writel_relaxed(0, gr0_base + ARM_SMMU_GR0_SMR(i));
writel_relaxed(S2CR_TYPE_BYPASS,
gr0_base + ARM_SMMU_GR0_S2CR(i));
}
dev_notice(smmu->dev,
"\tstream matching with %u register groups, mask 0x%x",
smmu->num_mapping_groups, mask);
+ } else {
+ smmu->num_mapping_groups = (id >> ID0_NUMSIDB_SHIFT) &
+ ID0_NUMSIDB_MASK;
}
/* ID1 */
* Stage-1 output limited by stage-2 input size due to pgd
* allocation (PTRS_PER_PGD).
*/
+ if (smmu->features & ARM_SMMU_FEAT_TRANS_NESTED) {
#ifdef CONFIG_64BIT
- smmu->s1_output_size = min_t(unsigned long, VA_BITS, size);
+ smmu->s1_output_size = min_t(unsigned long, VA_BITS, size);
#else
- smmu->s1_output_size = min(32UL, size);
+ smmu->s1_output_size = min(32UL, size);
#endif
+ } else {
+ smmu->s1_output_size = min_t(unsigned long, PHYS_MASK_SHIFT,
+ size);
+ }
/* The stage-2 output mask is also applied for bypass */
size = arm_smmu_id_size_to_bits((id >> ID2_OAS_SHIFT) & ID2_OAS_MASK);
smmu->irqs[i] = irq;
}
+ err = arm_smmu_device_cfg_probe(smmu);
+ if (err)
+ return err;
+
i = 0;
smmu->masters = RB_ROOT;
while (!of_parse_phandle_with_args(dev->of_node, "mmu-masters",
}
dev_notice(dev, "registered %d master devices\n", i);
- err = arm_smmu_device_cfg_probe(smmu);
- if (err)
- goto out_put_masters;
-
parse_driver_options(smmu);
if (smmu->version > 1 &&
andd->device_name);
continue;
}
- acpi_bus_get_device(h, &adev);
- if (!adev) {
+ if (acpi_bus_get_device(h, &adev)) {
pr_err("Failed to get device for ACPI object %s\n",
andd->device_name);
continue;
struct iommu_group *group = ERR_PTR(-ENODEV);
struct pci_dev *pdev;
const u32 *prop;
- int ret, len;
+ int ret = 0, len;
/*
* For platform devices we allocate a separate group for
if (IS_ERR(group))
return PTR_ERR(group);
- ret = iommu_group_add_device(group, dev);
+ /*
+ * Check if device has already been added to an iommu group.
+ * Group could have already been created for a PCI device in
+ * the iommu_group_get_for_dev path.
+ */
+ if (!dev->iommu_group)
+ ret = iommu_group_add_device(group, dev);
iommu_group_put(group);
return ret;
action != BUS_NOTIFY_DEL_DEVICE)
return 0;
+ /*
+ * If the device is still attached to a device driver we can't
+ * tear down the domain yet as DMA mappings may still be in use.
+ * Wait for the BUS_NOTIFY_UNBOUND_DRIVER event to do that.
+ */
+ if (action == BUS_NOTIFY_DEL_DEVICE && dev->driver != NULL)
+ return 0;
+
domain = find_domain(dev);
if (!domain)
return 0;
*/
struct iommu_group *iommu_group_get_for_dev(struct device *dev)
{
- struct iommu_group *group = ERR_PTR(-EIO);
+ struct iommu_group *group;
int ret;
group = iommu_group_get(dev);
if (group)
return group;
- if (dev_is_pci(dev))
- group = iommu_group_get_for_pci_dev(to_pci_dev(dev));
+ if (!dev_is_pci(dev))
+ return ERR_PTR(-EINVAL);
+
+ group = iommu_group_get_for_pci_dev(to_pci_dev(dev));
if (IS_ERR(group))
return group;
size_t orig_size = size;
int ret = 0;
- if (unlikely(domain->ops->unmap == NULL ||
+ if (unlikely(domain->ops->map == NULL ||
domain->ops->pgsize_bitmap == 0UL))
return -ENODEV;
#include <linux/slab.h>
#include <linux/irqdomain.h>
#include <linux/irqchip/chained_irq.h>
+#include <linux/interrupt.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
of_property_read_u32_index(node,
"ti,irqs-reserved",
i, &entry);
- if (entry > max) {
+ if (entry >= max) {
pr_err("Invalid reserved entry\n");
ret = -EINVAL;
goto err_irq_map;
of_property_read_u32_index(node,
"ti,irqs-skip",
i, &entry);
- if (entry > max) {
+ if (entry >= max) {
pr_err("Invalid skip entry\n");
ret = -EINVAL;
goto err_irq_map;
struct gic_chip_data {
void __iomem *dist_base;
void __iomem **redist_base;
- void __percpu __iomem **rdist;
+ void __iomem * __percpu *rdist;
struct irq_domain *domain;
u64 redist_stride;
u32 redist_regions;
}
/* Low level accessors */
-static u64 gic_read_iar(void)
+static u64 __maybe_unused gic_read_iar(void)
{
u64 irqstat;
return irqstat;
}
-static void gic_write_pmr(u64 val)
+static void __maybe_unused gic_write_pmr(u64 val)
{
asm volatile("msr_s " __stringify(ICC_PMR_EL1) ", %0" : : "r" (val));
}
-static void gic_write_ctlr(u64 val)
+static void __maybe_unused gic_write_ctlr(u64 val)
{
asm volatile("msr_s " __stringify(ICC_CTLR_EL1) ", %0" : : "r" (val));
isb();
}
-static void gic_write_grpen1(u64 val)
+static void __maybe_unused gic_write_grpen1(u64 val)
{
asm volatile("msr_s " __stringify(ICC_GRPEN1_EL1) ", %0" : : "r" (val));
isb();
}
-static void gic_write_sgi1r(u64 val)
+static void __maybe_unused gic_write_sgi1r(u64 val)
{
asm volatile("msr_s " __stringify(ICC_SGI1R_EL1) ", %0" : : "r" (val));
}
rwp_wait();
}
-static int gic_peek_irq(struct irq_data *d, u32 offset)
-{
- u32 mask = 1 << (gic_irq(d) % 32);
- void __iomem *base;
-
- if (gic_irq_in_rdist(d))
- base = gic_data_rdist_sgi_base();
- else
- base = gic_data.dist_base;
-
- return !!(readl_relaxed(base + offset + (gic_irq(d) / 32) * 4) & mask);
-}
-
static void gic_mask_irq(struct irq_data *d)
{
gic_poke_irq(d, GICD_ICENABLER);
}
#ifdef CONFIG_SMP
+static int gic_peek_irq(struct irq_data *d, u32 offset)
+{
+ u32 mask = 1 << (gic_irq(d) % 32);
+ void __iomem *base;
+
+ if (gic_irq_in_rdist(d))
+ base = gic_data_rdist_sgi_base();
+ else
+ base = gic_data.dist_base;
+
+ return !!(readl_relaxed(base + offset + (gic_irq(d) / 32) * 4) & mask);
+}
+
static int gic_secondary_init(struct notifier_block *nfb,
unsigned long action, void *hcpu)
{
return 0;
}
-const struct irq_domain_ops gic_default_routable_irq_domain_ops = {
+static const struct irq_domain_ops gic_default_routable_irq_domain_ops = {
.map = gic_routable_irq_domain_map,
.unmap = gic_routable_irq_domain_unmap,
.xlate = gic_routable_irq_domain_xlate,
/* $Id: xdi_msg.h,v 1.1.2.2 2001/02/16 08:40:36 armin Exp $ */
-#ifndef __DIVA_XDI_UM_CFG_MESSSGE_H__
+#ifndef __DIVA_XDI_UM_CFG_MESSAGE_H__
#define __DIVA_XDI_UM_CFG_MESSAGE_H__
/*
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/device.h>
+#include <linux/timer.h>
#include <linux/err.h>
#include <linux/ctype.h>
#include <linux/leds.h>
-#include <linux/workqueue.h>
#include "leds.h"
static struct class *leds_class;
NULL,
};
-static void led_work_function(struct work_struct *ws)
+static void led_timer_function(unsigned long data)
{
- struct led_classdev *led_cdev =
- container_of(ws, struct led_classdev, blink_work.work);
+ struct led_classdev *led_cdev = (void *)data;
unsigned long brightness;
unsigned long delay;
}
}
- queue_delayed_work(system_wq, &led_cdev->blink_work,
- msecs_to_jiffies(delay));
+ mod_timer(&led_cdev->blink_timer, jiffies + msecs_to_jiffies(delay));
}
static void set_brightness_delayed(struct work_struct *ws)
INIT_WORK(&led_cdev->set_brightness_work, set_brightness_delayed);
- INIT_DELAYED_WORK(&led_cdev->blink_work, led_work_function);
+ init_timer(&led_cdev->blink_timer);
+ led_cdev->blink_timer.function = led_timer_function;
+ led_cdev->blink_timer.data = (unsigned long)led_cdev;
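+ /* led_timer_function() casts ->data back to the led_classdev */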
#ifdef CONFIG_LEDS_TRIGGERS
led_trigger_set_default(led_cdev);
#include <linux/module.h>
#include <linux/rwsem.h>
#include <linux/leds.h>
-#include <linux/workqueue.h>
#include "leds.h"
DECLARE_RWSEM(leds_list_lock);
return;
}
- queue_delayed_work(system_wq, &led_cdev->blink_work, 1);
+ mod_timer(&led_cdev->blink_timer, jiffies + 1);
}
unsigned long *delay_on,
unsigned long *delay_off)
{
- cancel_delayed_work_sync(&led_cdev->blink_work);
+ del_timer_sync(&led_cdev->blink_timer);
led_cdev->flags &= ~LED_BLINK_ONESHOT;
led_cdev->flags &= ~LED_BLINK_ONESHOT_STOP;
int invert)
{
if ((led_cdev->flags & LED_BLINK_ONESHOT) &&
- delayed_work_pending(&led_cdev->blink_work))
+ timer_pending(&led_cdev->blink_timer))
return;
led_cdev->flags |= LED_BLINK_ONESHOT;
void led_stop_software_blink(struct led_classdev *led_cdev)
{
- cancel_delayed_work_sync(&led_cdev->blink_work);
+ del_timer_sync(&led_cdev->blink_timer);
led_cdev->blink_delay_on = 0;
led_cdev->blink_delay_off = 0;
}
void led_set_brightness(struct led_classdev *led_cdev,
enum led_brightness brightness)
{
- /* delay brightness setting if need to stop soft-blink work */
+ /* delay brightness setting if need to stop soft-blink timer */
if (led_cdev->blink_delay_on || led_cdev->blink_delay_off) {
led_cdev->delayed_set_value = brightness;
schedule_work(&led_cdev->set_brightness_work);
struct cache *cache = mg->cache;
if (mg->writeback) {
- cell_defer(cache, mg->old_ocell, false);
clear_dirty(cache, mg->old_oblock, mg->cblock);
+ cell_defer(cache, mg->old_ocell, false);
cleanup_migration(mg);
return;
}
} else {
+ clear_dirty(cache, mg->new_oblock, mg->cblock);
if (mg->requeue_holder)
cell_defer(cache, mg->new_ocell, true);
else {
bio_endio(mg->new_ocell->holder, 0);
cell_defer(cache, mg->new_ocell, false);
}
- clear_dirty(cache, mg->new_oblock, mg->cblock);
cleanup_migration(mg);
}
}
unsigned int key_size, opt_params;
unsigned long long tmpll;
int ret;
+ size_t iv_size_padding;
struct dm_arg_set as;
const char *opt_string;
char dummy;
cc->dmreq_start = sizeof(struct ablkcipher_request);
cc->dmreq_start += crypto_ablkcipher_reqsize(any_tfm(cc));
- cc->dmreq_start = ALIGN(cc->dmreq_start, crypto_tfm_ctx_alignment());
- cc->dmreq_start += crypto_ablkcipher_alignmask(any_tfm(cc)) &
- ~(crypto_tfm_ctx_alignment() - 1);
+ cc->dmreq_start = ALIGN(cc->dmreq_start, __alignof__(struct dm_crypt_request));
+
+ if (crypto_ablkcipher_alignmask(any_tfm(cc)) < CRYPTO_MINALIGN) {
+ /* Allocate the padding exactly */
+ iv_size_padding = -(cc->dmreq_start + sizeof(struct dm_crypt_request))
+ & crypto_ablkcipher_alignmask(any_tfm(cc));
+ } else {
+ /*
+ * If the cipher requires greater alignment than kmalloc
+ * alignment, we don't know the exact position of the
+ * initialization vector. We must assume worst case.
+ */
+ iv_size_padding = crypto_ablkcipher_alignmask(any_tfm(cc));
+ }
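+ /*
+ * Example: if dmreq_start + sizeof(struct dm_crypt_request) were 100
+ * and the alignmask 63, -(100) & 63 == 28 pads the IV out to offset
+ * 128, the next 64-byte boundary.
+ */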
cc->req_pool = mempool_create_kmalloc_pool(MIN_IOS, cc->dmreq_start +
- sizeof(struct dm_crypt_request) + cc->iv_size);
+ sizeof(struct dm_crypt_request) + iv_size_padding + cc->iv_size);
if (!cc->req_pool) {
ti->error = "Cannot allocate crypt request mempool";
goto bad;
}
cc->per_bio_data_size = ti->per_bio_data_size =
- sizeof(struct dm_crypt_io) + cc->dmreq_start +
- sizeof(struct dm_crypt_request) + cc->iv_size;
+ ALIGN(sizeof(struct dm_crypt_io) + cc->dmreq_start +
+ sizeof(struct dm_crypt_request) + iv_size_padding + cc->iv_size,
+ ARCH_KMALLOC_MINALIGN);
cc->page_pool = mempool_create_page_pool(MIN_POOL_PAGES, 0);
if (!cc->page_pool) {
*/
if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery)) {
end_reshape(conf);
+ close_sync(conf);
return 0;
}
}
r10_bio = mempool_alloc(conf->r10buf_pool, GFP_NOIO);
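+ /* the mempool recycles r10_bios, so clear any stale state bits */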
+ r10_bio->state = 0;
raise_barrier(conf, rb2 != NULL);
atomic_set(&r10_bio->remaining, 0);
if (sync_blocks < max_sync)
max_sync = sync_blocks;
r10_bio = mempool_alloc(conf->r10buf_pool, GFP_NOIO);
+ r10_bio->state = 0;
r10_bio->mddev = mddev;
atomic_set(&r10_bio->remaining, 0);
read_more:
/* Now schedule reads for blocks from sector_nr to last */
r10_bio = mempool_alloc(conf->r10buf_pool, GFP_NOIO);
+ r10_bio->state = 0;
raise_barrier(conf, sectors_done != 0);
atomic_set(&r10_bio->remaining, 0);
r10_bio->mddev = mddev;
* on all the target devices.
*/
// FIXME
+ mempool_free(r10_bio, conf->r10buf_pool);
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
return sectors_done;
}
read_bio->bi_private = r10_bio;
read_bio->bi_end_io = end_sync_read;
read_bio->bi_rw = READ;
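+ /* keep only the bits that survive a bio reset (those above BIO_RESET_BITS) */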
- read_bio->bi_flags &= ~(BIO_POOL_MASK - 1);
+ read_bio->bi_flags &= (~0UL << BIO_RESET_BITS);
read_bio->bi_flags |= 1 << BIO_UPTODATE;
read_bio->bi_vcnt = 0;
read_bio->bi_iter.bi_size = 0;
(!test_bit(R5_Insync, &dev->flags) || test_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) &&
!test_bit(R5_OVERWRITE, &fdev[0]->flags)) ||
(sh->raid_conf->level == 6 && s->failed && s->to_write &&
- s->to_write < sh->raid_conf->raid_disks - 2 &&
+ s->to_write - s->non_overwrite < sh->raid_conf->raid_disks - 2 &&
(!test_bit(R5_Insync, &dev->flags) || test_bit(STRIPE_PREREAD_ACTIVE, &sh->state))))) {
/* we would like to get this block, possibly by computing it,
* otherwise read it if the backing disk is insync
set_bit(R5_Wantwrite, &dev->flags);
if (prexor)
continue;
+ if (s.failed > 1)
+ continue;
if (!test_bit(R5_Insync, &dev->flags) ||
((i == sh->pd_idx || i == sh->qd_idx) &&
s.failed == 0))
if (ret)
return ret;
-#if CONFIG_DEBUG_FS
+#ifdef CONFIG_DEBUG_FS
/* Pass to debugfs */
ab8500_debug_resources[0].start = ab8500->irq;
ab8500_debug_resources[0].end = ab8500->irq;
}
i2c_set_clientdata(client, chip);
- snprintf(client->name, I2C_NAME_SIZE, "Chip_0x%d", client->addr);
+ snprintf(client->name, I2C_NAME_SIZE, "Chip_0x%x", client->addr);
chip->client = client;
/* Reset the chip */
default:
omap->nports = OMAP3_HS_USB_PORTS;
dev_dbg(dev,
- "USB HOST Rev:0x%d not recognized, assuming %d ports\n",
+ "USB HOST Rev:0x%x not recognized, assuming %d ports\n",
omap->usbhs_rev, omap->nports);
break;
}
* above.
*/
static struct twl4030_resconfig omap3_idle_rconfig[] = {
- TWL_REMAP_SLEEP(RES_VAUX1, DEV_GRP_NULL, 0, 0),
- TWL_REMAP_SLEEP(RES_VAUX2, DEV_GRP_NULL, 0, 0),
- TWL_REMAP_SLEEP(RES_VAUX3, DEV_GRP_NULL, 0, 0),
- TWL_REMAP_SLEEP(RES_VAUX4, DEV_GRP_NULL, 0, 0),
- TWL_REMAP_SLEEP(RES_VMMC1, DEV_GRP_NULL, 0, 0),
- TWL_REMAP_SLEEP(RES_VMMC2, DEV_GRP_NULL, 0, 0),
+ TWL_REMAP_SLEEP(RES_VAUX1, TWL4030_RESCONFIG_UNDEF, 0, 0),
+ TWL_REMAP_SLEEP(RES_VAUX2, TWL4030_RESCONFIG_UNDEF, 0, 0),
+ TWL_REMAP_SLEEP(RES_VAUX3, TWL4030_RESCONFIG_UNDEF, 0, 0),
+ TWL_REMAP_SLEEP(RES_VAUX4, TWL4030_RESCONFIG_UNDEF, 0, 0),
+ TWL_REMAP_SLEEP(RES_VMMC1, TWL4030_RESCONFIG_UNDEF, 0, 0),
+ TWL_REMAP_SLEEP(RES_VMMC2, TWL4030_RESCONFIG_UNDEF, 0, 0),
TWL_REMAP_OFF(RES_VPLL1, DEV_GRP_P1, 3, 1),
TWL_REMAP_SLEEP(RES_VPLL2, DEV_GRP_P1, 0, 0),
- TWL_REMAP_SLEEP(RES_VSIM, DEV_GRP_NULL, 0, 0),
- TWL_REMAP_SLEEP(RES_VDAC, DEV_GRP_NULL, 0, 0),
+ TWL_REMAP_SLEEP(RES_VSIM, TWL4030_RESCONFIG_UNDEF, 0, 0),
+ TWL_REMAP_SLEEP(RES_VDAC, TWL4030_RESCONFIG_UNDEF, 0, 0),
TWL_REMAP_SLEEP(RES_VINTANA1, TWL_DEV_GRP_P123, 1, 2),
TWL_REMAP_SLEEP(RES_VINTANA2, TWL_DEV_GRP_P123, 0, 2),
TWL_REMAP_SLEEP(RES_VINTDIG, TWL_DEV_GRP_P123, 1, 2),
TWL_REMAP_SLEEP(RES_VIO, TWL_DEV_GRP_P123, 2, 2),
TWL_REMAP_OFF(RES_VDD1, DEV_GRP_P1, 4, 1),
TWL_REMAP_OFF(RES_VDD2, DEV_GRP_P1, 3, 1),
- TWL_REMAP_SLEEP(RES_VUSB_1V5, DEV_GRP_NULL, 0, 0),
- TWL_REMAP_SLEEP(RES_VUSB_1V8, DEV_GRP_NULL, 0, 0),
+ TWL_REMAP_SLEEP(RES_VUSB_1V5, TWL4030_RESCONFIG_UNDEF, 0, 0),
+ TWL_REMAP_SLEEP(RES_VUSB_1V8, TWL4030_RESCONFIG_UNDEF, 0, 0),
TWL_REMAP_SLEEP(RES_VUSB_3V1, TWL_DEV_GRP_P123, 0, 0),
/* Resource #20 USB charge pump skipped */
TWL_REMAP_SLEEP(RES_REGEN, TWL_DEV_GRP_P123, 2, 1),
u32 jedec_id;
u32 status;
+ if (fw == NULL) {
+ dev_err(&spi->dev, "Cannot load firmware, aborting\n");
+ return;
+ }
+
if (fw->size == 0) {
dev_err(&spi->dev, "Error: Firmware size is 0!\n");
return;
cl->timer_count = MEI_CONNECT_TIMEOUT;
list_add_tail(&cb->list, &dev->ctrl_rd_list.list);
} else {
+ cl->state = MEI_FILE_INITIALIZING;
list_add_tail(&cb->list, &dev->ctrl_wr_list.list);
}
ndev = (struct mei_nfc_dev *) cldev->priv_data;
dev = ndev->cl->dev;
+ err = -ENOMEM;
mei_buf = kzalloc(length + MEI_NFC_HEADER_SIZE, GFP_KERNEL);
if (!mei_buf)
- return -ENOMEM;
+ goto out;
hdr = (struct mei_nfc_hci_hdr *) mei_buf;
hdr->cmd = MEI_NFC_CMD_HCI_SEND;
hdr->data_size = length;
memcpy(mei_buf + MEI_NFC_HEADER_SIZE, buf, length);
-
err = __mei_cl_send(ndev->cl, mei_buf, length + MEI_NFC_HEADER_SIZE);
if (err < 0)
- return err;
-
- kfree(mei_buf);
+ goto out;
if (!wait_event_interruptible_timeout(ndev->send_wq,
ndev->recv_req_id == ndev->req_id, HZ)) {
} else {
ndev->req_id++;
}
-
+out:
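+ /* mei_buf is NULL on the early -ENOMEM exit; kfree(NULL) is a no-op */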
+ kfree(mei_buf);
return err;
}
mutex_lock(&chip->mutex);
ret = get_chip(map, chip, base, FL_LOCKING);
+ if (ret) {
+ mutex_unlock(&chip->mutex);
+ return ret;
+ }
/* Enter lock register command */
cfi_send_gen_cmd(0xAA, cfi->addr_unlock1,
u32 val;
val = readl(info->reg.gpmc_ecc_config);
- if (((val >> ECC_CONFIG_CS_SHIFT) & ~CS_MASK) != info->gpmc_cs)
+ if (((val >> ECC_CONFIG_CS_SHIFT) & CS_MASK) != info->gpmc_cs)
return -EINVAL;
/* read ecc result */
}
/* populate MTD interface based on ECC scheme */
- nand_chip->ecc.layout = &omap_oobinfo;
ecclayout = &omap_oobinfo;
switch (info->ecc_opt) {
+ case OMAP_ECC_HAM1_CODE_SW:
+ nand_chip->ecc.mode = NAND_ECC_SOFT;
+ break;
+
case OMAP_ECC_HAM1_CODE_HW:
pr_info("nand: using OMAP_ECC_HAM1_CODE_HW\n");
nand_chip->ecc.mode = NAND_ECC_HW;
nand_chip->ecc.priv = nand_bch_init(mtd,
nand_chip->ecc.size,
nand_chip->ecc.bytes,
- &nand_chip->ecc.layout);
+ &ecclayout);
if (!nand_chip->ecc.priv) {
pr_err("nand: error: unable to use s/w BCH library\n");
err = -EINVAL;
nand_chip->ecc.priv = nand_bch_init(mtd,
nand_chip->ecc.size,
nand_chip->ecc.bytes,
- &nand_chip->ecc.layout);
+ &ecclayout);
if (!nand_chip->ecc.priv) {
pr_err("nand: error: unable to use s/w BCH library\n");
err = -EINVAL;
goto return_error;
}
+ if (info->ecc_opt == OMAP_ECC_HAM1_CODE_SW)
+ goto scan_tail;
+
/* all OOB bytes from oobfree->offset till end off OOB are free */
ecclayout->oobfree->length = mtd->oobsize - ecclayout->oobfree->offset;
/* check if NAND device's OOB is enough to store ECC signatures */
err = -EINVAL;
goto return_error;
}
+ nand_chip->ecc.layout = ecclayout;
+scan_tail:
/* second phase scan */
if (nand_scan_tail(mtd)) {
err = -ENXIO;
priv->raminit_ctrlreg = devm_ioremap(&pdev->dev, res->start,
resource_size(res));
- if (IS_ERR(priv->raminit_ctrlreg) || priv->instance < 0)
+ if (!priv->raminit_ctrlreg || priv->instance < 0)
dev_info(&pdev->dev, "control memory is not used for raminit\n");
else
priv->raminit = c_can_hw_raminit_ti;
/* process state changes depending on the new state */
switch (new_state) {
+ case CAN_STATE_ERROR_WARNING:
+ netdev_dbg(dev, "Error Warning\n");
+ cf->can_id |= CAN_ERR_CRTL;
+ cf->data[1] = (bec.txerr > bec.rxerr) ?
+ CAN_ERR_CRTL_TX_WARNING :
+ CAN_ERR_CRTL_RX_WARNING;
+ break;
case CAN_STATE_ERROR_ACTIVE:
netdev_dbg(dev, "Error Active\n");
cf->can_id |= CAN_ERR_PROT;
if (priv->devtype_data->features & FLEXCAN_HAS_BROKEN_ERR_STATE ||
priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
reg_ctrl |= FLEXCAN_CTRL_ERR_MSK;
+ else
+ reg_ctrl &= ~FLEXCAN_CTRL_ERR_MSK;
/* save for later use */
priv->reg_ctrl_default = reg_ctrl;
netdev_err(dev, "setting SJA1000 into normal mode failed!\n");
}
+/*
+ * initialize SJA1000 chip:
+ * - reset chip
+ * - set output mode
+ * - set baudrate
+ * - enable interrupts
+ * - start operating mode
+ */
+static void chipset_init(struct net_device *dev)
+{
+ struct sja1000_priv *priv = netdev_priv(dev);
+
+ /* set clock divider and output control register */
+ priv->write_reg(priv, SJA1000_CDR, priv->cdr | CDR_PELICAN);
+
+ /* set acceptance filter (accept all) */
+ priv->write_reg(priv, SJA1000_ACCC0, 0x00);
+ priv->write_reg(priv, SJA1000_ACCC1, 0x00);
+ priv->write_reg(priv, SJA1000_ACCC2, 0x00);
+ priv->write_reg(priv, SJA1000_ACCC3, 0x00);
+
+ priv->write_reg(priv, SJA1000_ACCM0, 0xFF);
+ priv->write_reg(priv, SJA1000_ACCM1, 0xFF);
+ priv->write_reg(priv, SJA1000_ACCM2, 0xFF);
+ priv->write_reg(priv, SJA1000_ACCM3, 0xFF);
+
+ priv->write_reg(priv, SJA1000_OCR, priv->ocr | OCR_MODE_NORMAL);
+}
+
static void sja1000_start(struct net_device *dev)
{
struct sja1000_priv *priv = netdev_priv(dev);
if (priv->can.state != CAN_STATE_STOPPED)
set_reset_mode(dev);
+ /* Initialize chip if uninitialized at this stage */
+ if (!(priv->read_reg(priv, SJA1000_CDR) & CDR_PELICAN))
+ chipset_init(dev);
+
/* Clear error counters and error code capture */
priv->write_reg(priv, SJA1000_TXERR, 0x0);
priv->write_reg(priv, SJA1000_RXERR, 0x0);
return 0;
}
-/*
- * initialize SJA1000 chip:
- * - reset chip
- * - set output mode
- * - set baudrate
- * - enable interrupts
- * - start operating mode
- */
-static void chipset_init(struct net_device *dev)
-{
- struct sja1000_priv *priv = netdev_priv(dev);
-
- /* set clock divider and output control register */
- priv->write_reg(priv, SJA1000_CDR, priv->cdr | CDR_PELICAN);
-
- /* set acceptance filter (accept all) */
- priv->write_reg(priv, SJA1000_ACCC0, 0x00);
- priv->write_reg(priv, SJA1000_ACCC1, 0x00);
- priv->write_reg(priv, SJA1000_ACCC2, 0x00);
- priv->write_reg(priv, SJA1000_ACCC3, 0x00);
-
- priv->write_reg(priv, SJA1000_ACCM0, 0xFF);
- priv->write_reg(priv, SJA1000_ACCM1, 0xFF);
- priv->write_reg(priv, SJA1000_ACCM2, 0xFF);
- priv->write_reg(priv, SJA1000_ACCM3, 0xFF);
-
- priv->write_reg(priv, SJA1000_OCR, priv->ocr | OCR_MODE_NORMAL);
-}
-
/*
* transmit a CAN message
* message layout in the sk_buff should be like this:
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
vp->tx_ring[entry].frag[i+1].addr =
- cpu_to_le32(pci_map_single(
- VORTEX_PCI(vp),
- (void *)skb_frag_address(frag),
- skb_frag_size(frag), PCI_DMA_TODEVICE));
+ cpu_to_le32(skb_frag_dma_map(
+ &VORTEX_PCI(vp)->dev,
+ frag,
+ frag->page_offset, frag->size, DMA_TO_DEVICE));
if (i == skb_shinfo(skb)->nr_frags-1)
vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(skb_frag_size(frag)|LAST_FRAG);
GRETH_REGORIN(greth->regs->control, GRETH_TXEN);
}
+static inline void greth_enable_tx_and_irq(struct greth_private *greth)
+{
+ wmb(); /* BDs must been written to memory before enabling TX */
+ GRETH_REGORIN(greth->regs->control, GRETH_TXEN | GRETH_TXI);
+}
+
static inline void greth_disable_tx(struct greth_private *greth)
{
GRETH_REGANDIN(greth->regs->control, ~GRETH_TXEN);
return err;
}
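+/*
+ * One BD is always left unused so that tx_next == tx_last means "empty"
+ * rather than "full"; e.g. in a 128-entry ring with tx_last == 3 and
+ * tx_next == 5 there are 128 - 2 - 1 == 125 free descriptors.
+ */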
+static inline u16 greth_num_free_bds(u16 tx_last, u16 tx_next)
+{
+ if (tx_next < tx_last)
+ return (tx_last - tx_next) - 1;
+ else
+ return GRETH_TXBD_NUM - (tx_next - tx_last) - 1;
+}
static netdev_tx_t
greth_start_xmit_gbit(struct sk_buff *skb, struct net_device *dev)
{
struct greth_private *greth = netdev_priv(dev);
struct greth_bd *bdp;
- u32 status = 0, dma_addr, ctrl;
+ u32 status, dma_addr;
int curr_tx, nr_frags, i, err = NETDEV_TX_OK;
unsigned long flags;
+ u16 tx_last;
nr_frags = skb_shinfo(skb)->nr_frags;
+ tx_last = greth->tx_last;
+ rmb(); /* tx_last is updated by the poll task */
- /* Clean TX Ring */
- greth_clean_tx_gbit(dev);
-
- if (greth->tx_free < nr_frags + 1) {
- spin_lock_irqsave(&greth->devlock, flags);/*save from poll/irq*/
- ctrl = GRETH_REGLOAD(greth->regs->control);
- /* Enable TX IRQ only if not already in poll() routine */
- if (ctrl & GRETH_RXI)
- GRETH_REGSAVE(greth->regs->control, ctrl | GRETH_TXI);
+ if (greth_num_free_bds(tx_last, greth->tx_next) < nr_frags + 1) {
netif_stop_queue(dev);
- spin_unlock_irqrestore(&greth->devlock, flags);
err = NETDEV_TX_BUSY;
goto out;
}
/* Linear buf */
if (nr_frags != 0)
status = GRETH_TXBD_MORE;
+ else
+ status = GRETH_BD_IE;
if (skb->ip_summed == CHECKSUM_PARTIAL)
status |= GRETH_TXBD_CSALL;
/* Enable the descriptor chain by enabling the first descriptor */
bdp = greth->tx_bd_base + greth->tx_next;
- greth_write_bd(&bdp->stat, greth_read_bd(&bdp->stat) | GRETH_BD_EN);
- greth->tx_next = curr_tx;
- greth->tx_free -= nr_frags + 1;
-
- wmb();
+ greth_write_bd(&bdp->stat,
+ greth_read_bd(&bdp->stat) | GRETH_BD_EN);
spin_lock_irqsave(&greth->devlock, flags); /*save from poll/irq*/
- greth_enable_tx(greth);
+ greth->tx_next = curr_tx;
+ greth_enable_tx_and_irq(greth);
spin_unlock_irqrestore(&greth->devlock, flags);
return NETDEV_TX_OK;
if (greth->tx_free > 0) {
netif_wake_queue(dev);
}
-
}
static inline void greth_update_tx_stats(struct net_device *dev, u32 stat)
{
struct greth_private *greth;
struct greth_bd *bdp, *bdp_last_frag;
- struct sk_buff *skb;
+ struct sk_buff *skb = NULL;
u32 stat;
int nr_frags, i;
+ u16 tx_last;
greth = netdev_priv(dev);
+ tx_last = greth->tx_last;
- while (greth->tx_free < GRETH_TXBD_NUM) {
+ while (tx_last != greth->tx_next) {
- skb = greth->tx_skbuff[greth->tx_last];
+ skb = greth->tx_skbuff[tx_last];
nr_frags = skb_shinfo(skb)->nr_frags;
/* We only clean fully completed SKBs */
- bdp_last_frag = greth->tx_bd_base + SKIP_TX(greth->tx_last, nr_frags);
+ bdp_last_frag = greth->tx_bd_base + SKIP_TX(tx_last, nr_frags);
GRETH_REGSAVE(greth->regs->status, GRETH_INT_TE | GRETH_INT_TX);
mb();
if (stat & GRETH_BD_EN)
break;
- greth->tx_skbuff[greth->tx_last] = NULL;
+ greth->tx_skbuff[tx_last] = NULL;
greth_update_tx_stats(dev, stat);
dev->stats.tx_bytes += skb->len;
- bdp = greth->tx_bd_base + greth->tx_last;
+ bdp = greth->tx_bd_base + tx_last;
- greth->tx_last = NEXT_TX(greth->tx_last);
+ tx_last = NEXT_TX(tx_last);
dma_unmap_single(greth->dev,
greth_read_bd(&bdp->addr),
for (i = 0; i < nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- bdp = greth->tx_bd_base + greth->tx_last;
+ bdp = greth->tx_bd_base + tx_last;
dma_unmap_page(greth->dev,
greth_read_bd(&bdp->addr),
skb_frag_size(frag),
DMA_TO_DEVICE);
- greth->tx_last = NEXT_TX(greth->tx_last);
+ tx_last = NEXT_TX(tx_last);
}
- greth->tx_free += nr_frags+1;
dev_kfree_skb(skb);
}
+ if (skb) { /* skb is set only if the above while loop was entered */
+ wmb();
+ greth->tx_last = tx_last;
- if (netif_queue_stopped(dev) && (greth->tx_free > (MAX_SKB_FRAGS+1)))
- netif_wake_queue(dev);
+ if (netif_queue_stopped(dev) &&
+ (greth_num_free_bds(tx_last, greth->tx_next) >
+ (MAX_SKB_FRAGS+1)))
+ netif_wake_queue(dev);
+ }
}
static int greth_rx(struct net_device *dev, int limit)
greth = container_of(napi, struct greth_private, napi);
restart_txrx_poll:
- if (netif_queue_stopped(greth->netdev)) {
- if (greth->gbit_mac)
- greth_clean_tx_gbit(greth->netdev);
- else
- greth_clean_tx(greth->netdev);
- }
-
if (greth->gbit_mac) {
+ greth_clean_tx_gbit(greth->netdev);
work_done += greth_rx_gbit(greth->netdev, budget - work_done);
} else {
+ if (netif_queue_stopped(greth->netdev))
+ greth_clean_tx(greth->netdev);
work_done += greth_rx(greth->netdev, budget - work_done);
}
spin_lock_irqsave(&greth->devlock, flags);
ctrl = GRETH_REGLOAD(greth->regs->control);
- if (netif_queue_stopped(greth->netdev)) {
+ if ((greth->gbit_mac && (greth->tx_last != greth->tx_next)) ||
+ (!greth->gbit_mac && netif_queue_stopped(greth->netdev))) {
GRETH_REGSAVE(greth->regs->control,
ctrl | GRETH_TXI | GRETH_RXI);
mask = GRETH_INT_RX | GRETH_INT_RE |
u16 tx_next;
u16 tx_last;
- u16 tx_free;
+ u16 tx_free; /* only used on 10/100Mbit */
u16 rx_cur;
struct greth_regs *regs; /* Address of controller registers. */
struct xgbe_prv_data *pdata = filp->private_data;
unsigned int value;
- value = pdata->hw_if.read_mmd_regs(pdata, pdata->debugfs_xpcs_mmd,
- pdata->debugfs_xpcs_reg);
+ value = XMDIO_READ(pdata, pdata->debugfs_xpcs_mmd,
+ pdata->debugfs_xpcs_reg);
return xgbe_common_read(buffer, count, ppos, value);
}
if (len < 0)
return len;
- pdata->hw_if.write_mmd_regs(pdata, pdata->debugfs_xpcs_mmd,
- pdata->debugfs_xpcs_reg, value);
+ XMDIO_WRITE(pdata, pdata->debugfs_xpcs_mmd, pdata->debugfs_xpcs_reg,
+ value);
return len;
}
/* Clear MAC flow control */
max_q_count = XGMAC_MAX_FLOW_CONTROL_QUEUES;
- q_count = min_t(unsigned int, pdata->rx_q_count, max_q_count);
+ q_count = min_t(unsigned int, pdata->tx_q_count, max_q_count);
reg = MAC_Q0TFCR;
for (i = 0; i < q_count; i++) {
reg_val = XGMAC_IOREAD(pdata, reg);
/* Set MAC flow control */
max_q_count = XGMAC_MAX_FLOW_CONTROL_QUEUES;
- q_count = min_t(unsigned int, pdata->rx_q_count, max_q_count);
+ q_count = min_t(unsigned int, pdata->tx_q_count, max_q_count);
reg = MAC_Q0TFCR;
for (i = 0; i < q_count; i++) {
reg_val = XGMAC_IOREAD(pdata, reg);
XGMAC_IOWRITE(pdata, MAC_IER, mac_ier);
/* Enable all counter interrupts */
- XGMAC_IOWRITE_BITS(pdata, MMC_RIER, ALL_INTERRUPTS, 0xff);
- XGMAC_IOWRITE_BITS(pdata, MMC_TIER, ALL_INTERRUPTS, 0xff);
+ XGMAC_IOWRITE_BITS(pdata, MMC_RIER, ALL_INTERRUPTS, 0xffffffff);
+ XGMAC_IOWRITE_BITS(pdata, MMC_TIER, ALL_INTERRUPTS, 0xffffffff);
}
static int xgbe_set_gmii_speed(struct xgbe_prv_data *pdata)
{
unsigned int i, count;
+ if (XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER) < 0x21)
+ return 0;
+
for (i = 0; i < pdata->tx_q_count; i++)
XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, FTQ, 1);
XGMAC_IOWRITE_BITS(pdata, MTL_OMR, RAA, MTL_RAA_SP);
}
-static unsigned int xgbe_calculate_per_queue_fifo(unsigned long fifo_size,
- unsigned char queue_count)
+static unsigned int xgbe_calculate_per_queue_fifo(unsigned int fifo_size,
+ unsigned int queue_count)
{
unsigned int q_fifo_size = 0;
enum xgbe_mtl_fifo_size p_fifo = XGMAC_MTL_FIFO_SIZE_256;
q_fifo_size = XGBE_FIFO_SIZE_KB(256);
break;
}
+
+ /* The configured value is not the actual amount of fifo RAM */
+ q_fifo_size = min_t(unsigned int, XGBE_FIFO_MAX, q_fifo_size);
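+ /* e.g. a reported 128 KB is clamped to XGBE_FIFO_MAX (80 KB) before the per-queue split */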
+
q_fifo_size = q_fifo_size / queue_count;
/* Set the queue fifo size programmable value */
xgbe_disable_rx_vlan_stripping(pdata);
}
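+/*
+ * Most MMC counters are 32 bits wide; the four octet counters below are
+ * 64 bits, so the _HI word at reg_lo + 4 is folded into the result.
+ */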
+static u64 xgbe_mmc_read(struct xgbe_prv_data *pdata, unsigned int reg_lo)
+{
+ bool read_hi;
+ u64 val;
+
+ switch (reg_lo) {
+ /* These registers are always 64 bit */
+ case MMC_TXOCTETCOUNT_GB_LO:
+ case MMC_TXOCTETCOUNT_G_LO:
+ case MMC_RXOCTETCOUNT_GB_LO:
+ case MMC_RXOCTETCOUNT_G_LO:
+ read_hi = true;
+ break;
+
+ default:
+ read_hi = false;
+ }
+
+ val = XGMAC_IOREAD(pdata, reg_lo);
+
+ if (read_hi)
+ val |= ((u64)XGMAC_IOREAD(pdata, reg_lo + 4) << 32);
+
+ return val;
+}
+
static void xgbe_tx_mmc_int(struct xgbe_prv_data *pdata)
{
struct xgbe_mmc_stats *stats = &pdata->mmc_stats;
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXOCTETCOUNT_GB))
stats->txoctetcount_gb +=
- XGMAC_IOREAD(pdata, MMC_TXOCTETCOUNT_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TXOCTETCOUNT_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXFRAMECOUNT_GB))
stats->txframecount_gb +=
- XGMAC_IOREAD(pdata, MMC_TXFRAMECOUNT_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TXFRAMECOUNT_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXBROADCASTFRAMES_G))
stats->txbroadcastframes_g +=
- XGMAC_IOREAD(pdata, MMC_TXBROADCASTFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_TXBROADCASTFRAMES_G_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXMULTICASTFRAMES_G))
stats->txmulticastframes_g +=
- XGMAC_IOREAD(pdata, MMC_TXMULTICASTFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_TXMULTICASTFRAMES_G_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX64OCTETS_GB))
stats->tx64octets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX64OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX64OCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX65TO127OCTETS_GB))
stats->tx65to127octets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX65TO127OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX65TO127OCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX128TO255OCTETS_GB))
stats->tx128to255octets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX128TO255OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX128TO255OCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX256TO511OCTETS_GB))
stats->tx256to511octets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX256TO511OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX256TO511OCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX512TO1023OCTETS_GB))
stats->tx512to1023octets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX512TO1023OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX512TO1023OCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX1024TOMAXOCTETS_GB))
stats->tx1024tomaxoctets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX1024TOMAXOCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX1024TOMAXOCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXUNICASTFRAMES_GB))
stats->txunicastframes_gb +=
- XGMAC_IOREAD(pdata, MMC_TXUNICASTFRAMES_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TXUNICASTFRAMES_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXMULTICASTFRAMES_GB))
stats->txmulticastframes_gb +=
- XGMAC_IOREAD(pdata, MMC_TXMULTICASTFRAMES_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TXMULTICASTFRAMES_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXBROADCASTFRAMES_GB))
stats->txbroadcastframes_g +=
- XGMAC_IOREAD(pdata, MMC_TXBROADCASTFRAMES_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TXBROADCASTFRAMES_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXUNDERFLOWERROR))
stats->txunderflowerror +=
- XGMAC_IOREAD(pdata, MMC_TXUNDERFLOWERROR_LO);
+ xgbe_mmc_read(pdata, MMC_TXUNDERFLOWERROR_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXOCTETCOUNT_G))
stats->txoctetcount_g +=
- XGMAC_IOREAD(pdata, MMC_TXOCTETCOUNT_G_LO);
+ xgbe_mmc_read(pdata, MMC_TXOCTETCOUNT_G_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXFRAMECOUNT_G))
stats->txframecount_g +=
- XGMAC_IOREAD(pdata, MMC_TXFRAMECOUNT_G_LO);
+ xgbe_mmc_read(pdata, MMC_TXFRAMECOUNT_G_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXPAUSEFRAMES))
stats->txpauseframes +=
- XGMAC_IOREAD(pdata, MMC_TXPAUSEFRAMES_LO);
+ xgbe_mmc_read(pdata, MMC_TXPAUSEFRAMES_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXVLANFRAMES_G))
stats->txvlanframes_g +=
- XGMAC_IOREAD(pdata, MMC_TXVLANFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_TXVLANFRAMES_G_LO);
}
static void xgbe_rx_mmc_int(struct xgbe_prv_data *pdata)
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXFRAMECOUNT_GB))
stats->rxframecount_gb +=
- XGMAC_IOREAD(pdata, MMC_RXFRAMECOUNT_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RXFRAMECOUNT_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXOCTETCOUNT_GB))
stats->rxoctetcount_gb +=
- XGMAC_IOREAD(pdata, MMC_RXOCTETCOUNT_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RXOCTETCOUNT_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXOCTETCOUNT_G))
stats->rxoctetcount_g +=
- XGMAC_IOREAD(pdata, MMC_RXOCTETCOUNT_G_LO);
+ xgbe_mmc_read(pdata, MMC_RXOCTETCOUNT_G_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXBROADCASTFRAMES_G))
stats->rxbroadcastframes_g +=
- XGMAC_IOREAD(pdata, MMC_RXBROADCASTFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_RXBROADCASTFRAMES_G_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXMULTICASTFRAMES_G))
stats->rxmulticastframes_g +=
- XGMAC_IOREAD(pdata, MMC_RXMULTICASTFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_RXMULTICASTFRAMES_G_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXCRCERROR))
stats->rxcrcerror +=
- XGMAC_IOREAD(pdata, MMC_RXCRCERROR_LO);
+ xgbe_mmc_read(pdata, MMC_RXCRCERROR_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXRUNTERROR))
stats->rxrunterror +=
- XGMAC_IOREAD(pdata, MMC_RXRUNTERROR);
+ xgbe_mmc_read(pdata, MMC_RXRUNTERROR);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXJABBERERROR))
stats->rxjabbererror +=
- XGMAC_IOREAD(pdata, MMC_RXJABBERERROR);
+ xgbe_mmc_read(pdata, MMC_RXJABBERERROR);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXUNDERSIZE_G))
stats->rxundersize_g +=
- XGMAC_IOREAD(pdata, MMC_RXUNDERSIZE_G);
+ xgbe_mmc_read(pdata, MMC_RXUNDERSIZE_G);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXOVERSIZE_G))
stats->rxoversize_g +=
- XGMAC_IOREAD(pdata, MMC_RXOVERSIZE_G);
+ xgbe_mmc_read(pdata, MMC_RXOVERSIZE_G);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX64OCTETS_GB))
stats->rx64octets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX64OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX64OCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX65TO127OCTETS_GB))
stats->rx65to127octets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX65TO127OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX65TO127OCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX128TO255OCTETS_GB))
stats->rx128to255octets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX128TO255OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX128TO255OCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX256TO511OCTETS_GB))
stats->rx256to511octets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX256TO511OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX256TO511OCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX512TO1023OCTETS_GB))
stats->rx512to1023octets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX512TO1023OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX512TO1023OCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX1024TOMAXOCTETS_GB))
stats->rx1024tomaxoctets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX1024TOMAXOCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX1024TOMAXOCTETS_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXUNICASTFRAMES_G))
stats->rxunicastframes_g +=
- XGMAC_IOREAD(pdata, MMC_RXUNICASTFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_RXUNICASTFRAMES_G_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXLENGTHERROR))
stats->rxlengtherror +=
- XGMAC_IOREAD(pdata, MMC_RXLENGTHERROR_LO);
+ xgbe_mmc_read(pdata, MMC_RXLENGTHERROR_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXOUTOFRANGETYPE))
stats->rxoutofrangetype +=
- XGMAC_IOREAD(pdata, MMC_RXOUTOFRANGETYPE_LO);
+ xgbe_mmc_read(pdata, MMC_RXOUTOFRANGETYPE_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXPAUSEFRAMES))
stats->rxpauseframes +=
- XGMAC_IOREAD(pdata, MMC_RXPAUSEFRAMES_LO);
+ xgbe_mmc_read(pdata, MMC_RXPAUSEFRAMES_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXFIFOOVERFLOW))
stats->rxfifooverflow +=
- XGMAC_IOREAD(pdata, MMC_RXFIFOOVERFLOW_LO);
+ xgbe_mmc_read(pdata, MMC_RXFIFOOVERFLOW_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXVLANFRAMES_GB))
stats->rxvlanframes_gb +=
- XGMAC_IOREAD(pdata, MMC_RXVLANFRAMES_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RXVLANFRAMES_GB_LO);
if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXWATCHDOGERROR))
stats->rxwatchdogerror +=
- XGMAC_IOREAD(pdata, MMC_RXWATCHDOGERROR);
+ xgbe_mmc_read(pdata, MMC_RXWATCHDOGERROR);
}
static void xgbe_read_mmc_stats(struct xgbe_prv_data *pdata)
XGMAC_IOWRITE_BITS(pdata, MMC_CR, MCF, 1);
stats->txoctetcount_gb +=
- XGMAC_IOREAD(pdata, MMC_TXOCTETCOUNT_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TXOCTETCOUNT_GB_LO);
stats->txframecount_gb +=
- XGMAC_IOREAD(pdata, MMC_TXFRAMECOUNT_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TXFRAMECOUNT_GB_LO);
stats->txbroadcastframes_g +=
- XGMAC_IOREAD(pdata, MMC_TXBROADCASTFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_TXBROADCASTFRAMES_G_LO);
stats->txmulticastframes_g +=
- XGMAC_IOREAD(pdata, MMC_TXMULTICASTFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_TXMULTICASTFRAMES_G_LO);
stats->tx64octets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX64OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX64OCTETS_GB_LO);
stats->tx65to127octets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX65TO127OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX65TO127OCTETS_GB_LO);
stats->tx128to255octets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX128TO255OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX128TO255OCTETS_GB_LO);
stats->tx256to511octets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX256TO511OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX256TO511OCTETS_GB_LO);
stats->tx512to1023octets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX512TO1023OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX512TO1023OCTETS_GB_LO);
stats->tx1024tomaxoctets_gb +=
- XGMAC_IOREAD(pdata, MMC_TX1024TOMAXOCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TX1024TOMAXOCTETS_GB_LO);
stats->txunicastframes_gb +=
- XGMAC_IOREAD(pdata, MMC_TXUNICASTFRAMES_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TXUNICASTFRAMES_GB_LO);
stats->txmulticastframes_gb +=
- XGMAC_IOREAD(pdata, MMC_TXMULTICASTFRAMES_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TXMULTICASTFRAMES_GB_LO);
stats->txbroadcastframes_g +=
- XGMAC_IOREAD(pdata, MMC_TXBROADCASTFRAMES_GB_LO);
+ xgbe_mmc_read(pdata, MMC_TXBROADCASTFRAMES_GB_LO);
stats->txunderflowerror +=
- XGMAC_IOREAD(pdata, MMC_TXUNDERFLOWERROR_LO);
+ xgbe_mmc_read(pdata, MMC_TXUNDERFLOWERROR_LO);
stats->txoctetcount_g +=
- XGMAC_IOREAD(pdata, MMC_TXOCTETCOUNT_G_LO);
+ xgbe_mmc_read(pdata, MMC_TXOCTETCOUNT_G_LO);
stats->txframecount_g +=
- XGMAC_IOREAD(pdata, MMC_TXFRAMECOUNT_G_LO);
+ xgbe_mmc_read(pdata, MMC_TXFRAMECOUNT_G_LO);
stats->txpauseframes +=
- XGMAC_IOREAD(pdata, MMC_TXPAUSEFRAMES_LO);
+ xgbe_mmc_read(pdata, MMC_TXPAUSEFRAMES_LO);
stats->txvlanframes_g +=
- XGMAC_IOREAD(pdata, MMC_TXVLANFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_TXVLANFRAMES_G_LO);
stats->rxframecount_gb +=
- XGMAC_IOREAD(pdata, MMC_RXFRAMECOUNT_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RXFRAMECOUNT_GB_LO);
stats->rxoctetcount_gb +=
- XGMAC_IOREAD(pdata, MMC_RXOCTETCOUNT_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RXOCTETCOUNT_GB_LO);
stats->rxoctetcount_g +=
- XGMAC_IOREAD(pdata, MMC_RXOCTETCOUNT_G_LO);
+ xgbe_mmc_read(pdata, MMC_RXOCTETCOUNT_G_LO);
stats->rxbroadcastframes_g +=
- XGMAC_IOREAD(pdata, MMC_RXBROADCASTFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_RXBROADCASTFRAMES_G_LO);
stats->rxmulticastframes_g +=
- XGMAC_IOREAD(pdata, MMC_RXMULTICASTFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_RXMULTICASTFRAMES_G_LO);
stats->rxcrcerror +=
- XGMAC_IOREAD(pdata, MMC_RXCRCERROR_LO);
+ xgbe_mmc_read(pdata, MMC_RXCRCERROR_LO);
stats->rxrunterror +=
- XGMAC_IOREAD(pdata, MMC_RXRUNTERROR);
+ xgbe_mmc_read(pdata, MMC_RXRUNTERROR);
stats->rxjabbererror +=
- XGMAC_IOREAD(pdata, MMC_RXJABBERERROR);
+ xgbe_mmc_read(pdata, MMC_RXJABBERERROR);
stats->rxundersize_g +=
- XGMAC_IOREAD(pdata, MMC_RXUNDERSIZE_G);
+ xgbe_mmc_read(pdata, MMC_RXUNDERSIZE_G);
stats->rxoversize_g +=
- XGMAC_IOREAD(pdata, MMC_RXOVERSIZE_G);
+ xgbe_mmc_read(pdata, MMC_RXOVERSIZE_G);
stats->rx64octets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX64OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX64OCTETS_GB_LO);
stats->rx65to127octets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX65TO127OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX65TO127OCTETS_GB_LO);
stats->rx128to255octets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX128TO255OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX128TO255OCTETS_GB_LO);
stats->rx256to511octets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX256TO511OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX256TO511OCTETS_GB_LO);
stats->rx512to1023octets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX512TO1023OCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX512TO1023OCTETS_GB_LO);
stats->rx1024tomaxoctets_gb +=
- XGMAC_IOREAD(pdata, MMC_RX1024TOMAXOCTETS_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RX1024TOMAXOCTETS_GB_LO);
stats->rxunicastframes_g +=
- XGMAC_IOREAD(pdata, MMC_RXUNICASTFRAMES_G_LO);
+ xgbe_mmc_read(pdata, MMC_RXUNICASTFRAMES_G_LO);
stats->rxlengtherror +=
- XGMAC_IOREAD(pdata, MMC_RXLENGTHERROR_LO);
+ xgbe_mmc_read(pdata, MMC_RXLENGTHERROR_LO);
stats->rxoutofrangetype +=
- XGMAC_IOREAD(pdata, MMC_RXOUTOFRANGETYPE_LO);
+ xgbe_mmc_read(pdata, MMC_RXOUTOFRANGETYPE_LO);
stats->rxpauseframes +=
- XGMAC_IOREAD(pdata, MMC_RXPAUSEFRAMES_LO);
+ xgbe_mmc_read(pdata, MMC_RXPAUSEFRAMES_LO);
stats->rxfifooverflow +=
- XGMAC_IOREAD(pdata, MMC_RXFIFOOVERFLOW_LO);
+ xgbe_mmc_read(pdata, MMC_RXFIFOOVERFLOW_LO);
stats->rxvlanframes_gb +=
- XGMAC_IOREAD(pdata, MMC_RXVLANFRAMES_GB_LO);
+ xgbe_mmc_read(pdata, MMC_RXVLANFRAMES_GB_LO);
stats->rxwatchdogerror +=
- XGMAC_IOREAD(pdata, MMC_RXWATCHDOGERROR);
+ xgbe_mmc_read(pdata, MMC_RXWATCHDOGERROR);
/* Un-freeze counters */
XGMAC_IOWRITE_BITS(pdata, MMC_CR, MCF, 0);
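The body of xgbe_mmc_read() sits outside this hunk; the point of the conversion is that several MMC counters are wider than the 32 bits XGMAC_IOREAD() returns, so reading only the _LO word loses carries. A hedged sketch of such a helper (the reg_lo + 4 high-word offset and the xgbe_mmc_reg_is_64bit() predicate are assumptions for illustration, not taken from the patch):

static u64 xgbe_mmc_read(struct xgbe_prv_data *pdata, unsigned int reg_lo)
{
	u64 val = XGMAC_IOREAD(pdata, reg_lo);

	/* hypothetical predicate: true for the counters the hardware
	 * implements as 64 bits, whose high word follows the low word
	 */
	if (xgbe_mmc_reg_is_64bit(reg_lo))
		val |= (u64)XGMAC_IOREAD(pdata, reg_lo + 4) << 32;

	return val;
}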
memset(hw_feat, 0, sizeof(*hw_feat));
+ hw_feat->version = XGMAC_IOREAD(pdata, MAC_VR);
+
/* Hardware feature register 0 */
hw_feat->gmii = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, GMIISEL);
hw_feat->vlhash = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, VLHASH);
struct ethtool_drvinfo *drvinfo)
{
struct xgbe_prv_data *pdata = netdev_priv(netdev);
+ struct xgbe_hw_features *hw_feat = &pdata->hw_feat;
strlcpy(drvinfo->driver, XGBE_DRV_NAME, sizeof(drvinfo->driver));
strlcpy(drvinfo->version, XGBE_DRV_VERSION, sizeof(drvinfo->version));
strlcpy(drvinfo->bus_info, dev_name(pdata->dev),
sizeof(drvinfo->bus_info));
snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version), "%d.%d.%d",
- XGMAC_IOREAD_BITS(pdata, MAC_VR, USERVER),
- XGMAC_IOREAD_BITS(pdata, MAC_VR, DEVID),
- XGMAC_IOREAD_BITS(pdata, MAC_VR, SNPSVER));
+ XGMAC_GET_BITS(hw_feat->version, MAC_VR, USERVER),
+ XGMAC_GET_BITS(hw_feat->version, MAC_VR, DEVID),
+ XGMAC_GET_BITS(hw_feat->version, MAC_VR, SNPSVER));
drvinfo->n_stats = XGBE_STATS_COUNT;
}
}
if (i < pdata->rx_ring_count) {
- spin_lock_init(&tx_ring->lock);
+ spin_lock_init(&rx_ring->lock);
channel->rx_ring = rx_ring++;
}
#define XGMAC_DRIVER_CONTEXT 1
#define XGMAC_IOCTL_CONTEXT 2
+#define XGBE_FIFO_MAX 81920
#define XGBE_FIFO_SIZE_B(x) (x)
#define XGBE_FIFO_SIZE_KB(x) ((x) * 1024)
* or configurations are present in the device.
*/
struct xgbe_hw_features {
+ /* HW Version */
+ unsigned int version;
+
/* HW Feature Register0 */
unsigned int gmii; /* 1000 Mbps support */
unsigned int vlhash; /* VLAN Hash Filter */
config NET_XGENE
tristate "APM X-Gene SoC Ethernet Driver"
+ depends on HAS_DMA
select PHYLIB
help
This is the Ethernet driver for the on-chip ethernet interface on the
struct xgene_enet_desc_ring *ring;
ring = pdata->tx_ring;
- if (ring && ring->cp_ring && ring->cp_ring->cp_skb)
- devm_kfree(dev, ring->cp_ring->cp_skb);
- xgene_enet_free_desc_ring(ring);
+ if (ring) {
+ if (ring->cp_ring && ring->cp_ring->cp_skb)
+ devm_kfree(dev, ring->cp_ring->cp_skb);
+ xgene_enet_free_desc_ring(ring);
+ }
ring = pdata->rx_ring;
- if (ring && ring->buf_pool && ring->buf_pool->rx_skb)
- devm_kfree(dev, ring->buf_pool->rx_skb);
- xgene_enet_free_desc_ring(ring->buf_pool);
- xgene_enet_free_desc_ring(ring);
+ if (ring) {
+ if (ring->buf_pool) {
+ if (ring->buf_pool->rx_skb)
+ devm_kfree(dev, ring->buf_pool->rx_skb);
+ xgene_enet_free_desc_ring(ring->buf_pool);
+ }
+ xgene_enet_free_desc_ring(ring);
+ }
}
static struct xgene_enet_desc_ring *xgene_enet_create_desc_ring(
config CNIC
tristate "QLogic CNIC support"
- depends on PCI
+ depends on PCI && (IPV6 || IPV6=n)
select BNX2
select UIO
---help---
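The (IPV6 || IPV6=n) guard is a stock Kconfig idiom worth spelling out: under tristate arithmetic it caps CNIC at IPV6's value whenever IPv6 is modular, so a built-in cnic can never reference symbols that live in a modular ipv6. A sketch of the evaluation:

# Tristate evaluation of (IPV6 || IPV6=n):
#   IPV6=y  ->  y || n = y   CNIC may be y or m
#   IPV6=m  ->  m || n = m   CNIC is limited to m
#   IPV6=n  ->  n || y = y   CNIC may be y; IPv6 paths compile out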
#ifdef BNX2X_STOP_ON_ERROR
fp->tpa_queue_used |= (1 << queue);
-#ifdef _ASM_GENERIC_INT_L64_H
- DP(NETIF_MSG_RX_STATUS, "fp->tpa_queue_used = 0x%lx\n",
-#else
DP(NETIF_MSG_RX_STATUS, "fp->tpa_queue_used = 0x%llx\n",
-#endif
fp->tpa_queue_used);
#endif
}
u32 reserved3; /* Offset 0x14C */
u32 reserved4; /* Offset 0x150 */
u32 link_attr_sync[PORT_MAX]; /* Offset 0x154 */
- #define LINK_ATTR_SYNC_KR2_ENABLE (1<<0)
+ #define LINK_ATTR_SYNC_KR2_ENABLE 0x00000001
+ #define LINK_SFP_EEPROM_COMP_CODE_MASK 0x0000ff00
+ #define LINK_SFP_EEPROM_COMP_CODE_SHIFT 8
+ #define LINK_SFP_EEPROM_COMP_CODE_SR 0x00001000
+ #define LINK_SFP_EEPROM_COMP_CODE_LR 0x00002000
+ #define LINK_SFP_EEPROM_COMP_CODE_LRM 0x00004000
u32 reserved5[2];
u32 reserved6[PORT_MAX];
LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE)
#define SFP_EEPROM_CON_TYPE_ADDR 0x2
+ #define SFP_EEPROM_CON_TYPE_VAL_UNKNOWN 0x0
#define SFP_EEPROM_CON_TYPE_VAL_LC 0x7
#define SFP_EEPROM_CON_TYPE_VAL_COPPER 0x21
#define SFP_EEPROM_CON_TYPE_VAL_RJ45 0x22
-#define SFP_EEPROM_COMP_CODE_ADDR 0x3
- #define SFP_EEPROM_COMP_CODE_SR_MASK (1<<4)
- #define SFP_EEPROM_COMP_CODE_LR_MASK (1<<5)
- #define SFP_EEPROM_COMP_CODE_LRM_MASK (1<<6)
+#define SFP_EEPROM_10G_COMP_CODE_ADDR 0x3
+ #define SFP_EEPROM_10G_COMP_CODE_SR_MASK (1<<4)
+ #define SFP_EEPROM_10G_COMP_CODE_LR_MASK (1<<5)
+ #define SFP_EEPROM_10G_COMP_CODE_LRM_MASK (1<<6)
+
+#define SFP_EEPROM_1G_COMP_CODE_ADDR 0x6
+ #define SFP_EEPROM_1G_COMP_CODE_SX (1<<0)
+ #define SFP_EEPROM_1G_COMP_CODE_LX (1<<1)
+ #define SFP_EEPROM_1G_COMP_CODE_CX (1<<2)
+ #define SFP_EEPROM_1G_COMP_CODE_BASE_T (1<<3)
#define SFP_EEPROM_FC_TX_TECH_ADDR 0x8
#define SFP_EEPROM_FC_TX_TECH_BITMASK_COPPER_PASSIVE 0x4
reg_set[i].val);
/* Start KR2 work-around timer which handles BCM8073 link-partner */
- vars->link_attr_sync |= LINK_ATTR_SYNC_KR2_ENABLE;
- bnx2x_update_link_attr(params, vars->link_attr_sync);
+ params->link_attr_sync |= LINK_ATTR_SYNC_KR2_ENABLE;
+ bnx2x_update_link_attr(params, params->link_attr_sync);
}
static void bnx2x_disable_kr2(struct link_params *params,
for (i = 0; i < ARRAY_SIZE(reg_set); i++)
bnx2x_cl45_write(bp, phy, reg_set[i].devad, reg_set[i].reg,
reg_set[i].val);
- vars->link_attr_sync &= ~LINK_ATTR_SYNC_KR2_ENABLE;
- bnx2x_update_link_attr(params, vars->link_attr_sync);
+ params->link_attr_sync &= ~LINK_ATTR_SYNC_KR2_ENABLE;
+ bnx2x_update_link_attr(params, params->link_attr_sync);
vars->check_kr2_recovery_cnt = CHECK_KR2_RECOVERY_CNT;
}
~FEATURE_CONFIG_PFC_ENABLED;
if (SHMEM2_HAS(bp, link_attr_sync))
- vars->link_attr_sync = SHMEM2_RD(bp,
+ params->link_attr_sync = SHMEM2_RD(bp,
link_attr_sync[params->port]);
DP(NETIF_MSG_LINK, "link_status 0x%x phy_link_up %x int_mask 0x%x\n",
{
struct bnx2x *bp = params->bp;
u32 sync_offset = 0, phy_idx, media_types;
- u8 gport, val[2], check_limiting_mode = 0;
+ u8 val[SFP_EEPROM_FC_TX_TECH_ADDR + 1], check_limiting_mode = 0;
*edc_mode = EDC_MODE_LIMITING;
phy->media_type = ETH_PHY_UNSPECIFIED;
/* First check for copper cable */
if (bnx2x_read_sfp_module_eeprom(phy,
params,
I2C_DEV_ADDR_A0,
- SFP_EEPROM_CON_TYPE_ADDR,
- 2,
+ 0,
+ SFP_EEPROM_FC_TX_TECH_ADDR + 1,
(u8 *)val) != 0) {
DP(NETIF_MSG_LINK, "Failed to read from SFP+ module EEPROM\n");
return -EINVAL;
}
-
- switch (val[0]) {
+ params->link_attr_sync &= ~LINK_SFP_EEPROM_COMP_CODE_MASK;
+ params->link_attr_sync |= val[SFP_EEPROM_10G_COMP_CODE_ADDR] <<
+ LINK_SFP_EEPROM_COMP_CODE_SHIFT;
+ bnx2x_update_link_attr(params, params->link_attr_sync);
+ switch (val[SFP_EEPROM_CON_TYPE_ADDR]) {
case SFP_EEPROM_CON_TYPE_VAL_COPPER:
{
u8 copper_module_type;
/* Check if it's an active cable (includes SFP+ module)
* or a passive cable
*/
- if (bnx2x_read_sfp_module_eeprom(phy,
- params,
- I2C_DEV_ADDR_A0,
- SFP_EEPROM_FC_TX_TECH_ADDR,
- 1,
- &copper_module_type) != 0) {
- DP(NETIF_MSG_LINK,
- "Failed to read copper-cable-type"
- " from SFP+ EEPROM\n");
- return -EINVAL;
- }
+ copper_module_type = val[SFP_EEPROM_FC_TX_TECH_ADDR];
if (copper_module_type &
SFP_EEPROM_FC_TX_TECH_BITMASK_COPPER_ACTIVE) {
}
break;
}
+ case SFP_EEPROM_CON_TYPE_VAL_UNKNOWN:
case SFP_EEPROM_CON_TYPE_VAL_LC:
case SFP_EEPROM_CON_TYPE_VAL_RJ45:
check_limiting_mode = 1;
- if ((val[1] & (SFP_EEPROM_COMP_CODE_SR_MASK |
- SFP_EEPROM_COMP_CODE_LR_MASK |
- SFP_EEPROM_COMP_CODE_LRM_MASK)) == 0) {
+ if ((val[SFP_EEPROM_10G_COMP_CODE_ADDR] &
+ (SFP_EEPROM_10G_COMP_CODE_SR_MASK |
+ SFP_EEPROM_10G_COMP_CODE_LR_MASK |
+ SFP_EEPROM_10G_COMP_CODE_LRM_MASK)) == 0) {
DP(NETIF_MSG_LINK, "1G SFP module detected\n");
- gport = params->port;
phy->media_type = ETH_PHY_SFP_1G_FIBER;
if (phy->req_line_speed != SPEED_1000) {
+ u8 gport = params->port;
phy->req_line_speed = SPEED_1000;
if (!CHIP_IS_E1x(bp)) {
gport = BP_PATH(bp) +
"Warning: Link speed was forced to 1000Mbps. Current SFP module in port %d is not compliant with 10G Ethernet\n",
gport);
}
+ if (val[SFP_EEPROM_1G_COMP_CODE_ADDR] &
+ SFP_EEPROM_1G_COMP_CODE_BASE_T) {
+ bnx2x_sfp_set_transmitter(params, phy, 0);
+ msleep(40);
+ bnx2x_sfp_set_transmitter(params, phy, 1);
+ }
} else {
int idx, cfg_idx = 0;
DP(NETIF_MSG_LINK, "10G Optic module detected\n");
break;
default:
DP(NETIF_MSG_LINK, "Unable to determine module type 0x%x !!!\n",
- val[0]);
+ val[SFP_EEPROM_CON_TYPE_ADDR]);
return -EINVAL;
}
sync_offset = params->shmem_base +
sigdet = bnx2x_warpcore_get_sigdet(phy, params);
if (!sigdet) {
- if (!(vars->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) {
+ if (!(params->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) {
bnx2x_kr2_recovery(params, vars, phy);
DP(NETIF_MSG_LINK, "No sigdet\n");
}
/* CL73 has not begun yet */
if (base_page == 0) {
- if (!(vars->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) {
+ if (!(params->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) {
bnx2x_kr2_recovery(params, vars, phy);
DP(NETIF_MSG_LINK, "No BP\n");
}
((next_page & 0xe0) == 0x20))));
/* In case KR2 is already disabled, check if we need to re-enable it */
- if (!(vars->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) {
+ if (!(params->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) {
if (!not_kr2_device) {
DP(NETIF_MSG_LINK, "BP=0x%x, NP=0x%x\n", base_page,
next_page);
#define LINK_FLAGS_INT_DISABLED (1<<0)
#define PHY_INITIALIZED (1<<1)
u32 lfa_base;
+
+ /* The same definitions as the shmem2 parameter */
+ u32 link_attr_sync;
};
/* Output parameters */
u8 rx_tx_asic_rst;
u8 turn_to_run_wc_rt;
u16 rsrv2;
- /* The same definitions as the shmem2 parameter */
- u32 link_attr_sync;
};
/***********************************************************/
bnx2x_release_phy_lock(bp);
}
+static void bnx2x_config_endianity(struct bnx2x *bp, u32 val)
+{
+ REG_WR(bp, PXP2_REG_RQ_QM_ENDIAN_M, val);
+ REG_WR(bp, PXP2_REG_RQ_TM_ENDIAN_M, val);
+ REG_WR(bp, PXP2_REG_RQ_SRC_ENDIAN_M, val);
+ REG_WR(bp, PXP2_REG_RQ_CDU_ENDIAN_M, val);
+ REG_WR(bp, PXP2_REG_RQ_DBG_ENDIAN_M, val);
+
+ /* make sure this value is 0 */
+ REG_WR(bp, PXP2_REG_RQ_HC_ENDIAN_M, 0);
+
+ REG_WR(bp, PXP2_REG_RD_QM_SWAP_MODE, val);
+ REG_WR(bp, PXP2_REG_RD_TM_SWAP_MODE, val);
+ REG_WR(bp, PXP2_REG_RD_SRC_SWAP_MODE, val);
+ REG_WR(bp, PXP2_REG_RD_CDURD_SWAP_MODE, val);
+}
+
+static void bnx2x_set_endianity(struct bnx2x *bp)
+{
+#ifdef __BIG_ENDIAN
+ bnx2x_config_endianity(bp, 1);
+#else
+ bnx2x_config_endianity(bp, 0);
+#endif
+}
+
+static void bnx2x_reset_endianity(struct bnx2x *bp)
+{
+ bnx2x_config_endianity(bp, 0);
+}
+
/**
* bnx2x_init_hw_common - initialize the HW at the COMMON phase.
*
bnx2x_init_block(bp, BLOCK_PXP2, PHASE_COMMON);
bnx2x_init_pxp(bp);
-
-#ifdef __BIG_ENDIAN
- REG_WR(bp, PXP2_REG_RQ_QM_ENDIAN_M, 1);
- REG_WR(bp, PXP2_REG_RQ_TM_ENDIAN_M, 1);
- REG_WR(bp, PXP2_REG_RQ_SRC_ENDIAN_M, 1);
- REG_WR(bp, PXP2_REG_RQ_CDU_ENDIAN_M, 1);
- REG_WR(bp, PXP2_REG_RQ_DBG_ENDIAN_M, 1);
- /* make sure this value is 0 */
- REG_WR(bp, PXP2_REG_RQ_HC_ENDIAN_M, 0);
-
-/* REG_WR(bp, PXP2_REG_RD_PBF_SWAP_MODE, 1); */
- REG_WR(bp, PXP2_REG_RD_QM_SWAP_MODE, 1);
- REG_WR(bp, PXP2_REG_RD_TM_SWAP_MODE, 1);
- REG_WR(bp, PXP2_REG_RD_SRC_SWAP_MODE, 1);
- REG_WR(bp, PXP2_REG_RD_CDURD_SWAP_MODE, 1);
-#endif
-
+ bnx2x_set_endianity(bp);
bnx2x_ilt_init_page_size(bp, INITOP_SET);
if (CHIP_REV_IS_FPGA(bp) && CHIP_IS_E1H(bp))
}
#define BNX2X_PREV_UNDI_PROD_ADDR(p) (BAR_TSTRORM_INTMEM + 0x1508 + ((p) << 4))
+#define BNX2X_PREV_UNDI_PROD_ADDR_H(f) (BAR_TSTRORM_INTMEM + \
+ 0x1848 + ((f) << 4))
#define BNX2X_PREV_UNDI_RCQ(val) ((val) & 0xffff)
#define BNX2X_PREV_UNDI_BD(val) ((val) >> 16 & 0xffff)
#define BNX2X_PREV_UNDI_PROD(rcq, bd) ((bd) << 16 | (rcq))
#define BCM_5710_UNDI_FW_MF_MAJOR (0x07)
#define BCM_5710_UNDI_FW_MF_MINOR (0x08)
#define BCM_5710_UNDI_FW_MF_VERS (0x05)
-#define BNX2X_PREV_UNDI_MF_PORT(p) (BAR_TSTRORM_INTMEM + 0x150c + ((p) << 4))
-#define BNX2X_PREV_UNDI_MF_FUNC(f) (BAR_TSTRORM_INTMEM + 0x184c + ((f) << 4))
static bool bnx2x_prev_is_after_undi(struct bnx2x *bp)
{
return false;
}
-static bool bnx2x_prev_unload_undi_fw_supports_mf(struct bnx2x *bp)
-{
- u8 major, minor, version;
- u32 fw;
-
- /* Must check that FW is loaded */
- if (!(REG_RD(bp, MISC_REG_RESET_REG_1) &
- MISC_REGISTERS_RESET_REG_1_RST_XSEM)) {
- BNX2X_DEV_INFO("XSEM is reset - UNDI MF FW is not loaded\n");
- return false;
- }
-
- /* Read Currently loaded FW version */
- fw = REG_RD(bp, XSEM_REG_PRAM);
- major = fw & 0xff;
- minor = (fw >> 0x8) & 0xff;
- version = (fw >> 0x10) & 0xff;
- BNX2X_DEV_INFO("Loaded FW: 0x%08x: Major 0x%02x Minor 0x%02x Version 0x%02x\n",
- fw, major, minor, version);
-
- if (major > BCM_5710_UNDI_FW_MF_MAJOR)
- return true;
-
- if ((major == BCM_5710_UNDI_FW_MF_MAJOR) &&
- (minor > BCM_5710_UNDI_FW_MF_MINOR))
- return true;
-
- if ((major == BCM_5710_UNDI_FW_MF_MAJOR) &&
- (minor == BCM_5710_UNDI_FW_MF_MINOR) &&
- (version >= BCM_5710_UNDI_FW_MF_VERS))
- return true;
-
- return false;
-}
-
-static void bnx2x_prev_unload_undi_mf(struct bnx2x *bp)
-{
- int i;
-
- /* Due to legacy (FW) code, the first function on each engine has a
- * different offset macro from the rest of the functions.
- * Setting this for all 8 functions is harmless regardless of whether
- * this is actually a multi-function device.
- */
- for (i = 0; i < 2; i++)
- REG_WR(bp, BNX2X_PREV_UNDI_MF_PORT(i), 1);
-
- for (i = 2; i < 8; i++)
- REG_WR(bp, BNX2X_PREV_UNDI_MF_FUNC(i - 2), 1);
-
- BNX2X_DEV_INFO("UNDI FW (MF) set to discard\n");
-}
-
-static void bnx2x_prev_unload_undi_inc(struct bnx2x *bp, u8 port, u8 inc)
+static void bnx2x_prev_unload_undi_inc(struct bnx2x *bp, u8 inc)
{
u16 rcq, bd;
- u32 tmp_reg = REG_RD(bp, BNX2X_PREV_UNDI_PROD_ADDR(port));
+ u32 addr, tmp_reg;
+
+ if (BP_FUNC(bp) < 2)
+ addr = BNX2X_PREV_UNDI_PROD_ADDR(BP_PORT(bp));
+ else
+ addr = BNX2X_PREV_UNDI_PROD_ADDR_H(BP_FUNC(bp) - 2);
+ tmp_reg = REG_RD(bp, addr);
rcq = BNX2X_PREV_UNDI_RCQ(tmp_reg) + inc;
bd = BNX2X_PREV_UNDI_BD(tmp_reg) + inc;
tmp_reg = BNX2X_PREV_UNDI_PROD(rcq, bd);
- REG_WR(bp, BNX2X_PREV_UNDI_PROD_ADDR(port), tmp_reg);
+ REG_WR(bp, addr, tmp_reg);
- BNX2X_DEV_INFO("UNDI producer [%d] rings bd -> 0x%04x, rcq -> 0x%04x\n",
- port, bd, rcq);
+ BNX2X_DEV_INFO("UNDI producer [%d/%d][%08x] rings bd -> 0x%04x, rcq -> 0x%04x\n",
+ BP_PORT(bp), BP_FUNC(bp), addr, bd, rcq);
}
static int bnx2x_prev_mcp_done(struct bnx2x *bp)
/* Reset should be performed after BRB is emptied */
if (reset_reg & MISC_REGISTERS_RESET_REG_1_RST_BRB1) {
u32 timer_count = 1000;
- bool need_write = true;
/* Close the MAC Rx to prevent BRB from filling up */
bnx2x_prev_unload_close_mac(bp, &mac_vals);
else
timer_count--;
- /* New UNDI FW supports MF and contains better
- * cleaning methods - might be redundant but harmless.
- */
- if (bnx2x_prev_unload_undi_fw_supports_mf(bp)) {
- if (need_write) {
- bnx2x_prev_unload_undi_mf(bp);
- need_write = false;
- }
- } else if (prev_undi) {
- /* If UNDI resides in memory,
- * manually increment it
- */
- bnx2x_prev_unload_undi_inc(bp, BP_PORT(bp), 1);
- }
+ /* If UNDI resides in memory, manually increment it */
+ if (prev_undi)
+ bnx2x_prev_unload_undi_inc(bp, 1);
+
udelay(10);
}
bnx2x_iov_remove_one(bp);
/* Power on: we can't let PCI layer write to us while we are in D3 */
- if (IS_PF(bp))
+ if (IS_PF(bp)) {
bnx2x_set_power_state(bp, PCI_D0);
+ /* Set endianity registers to reset values in case the next driver
+ * boots in a different endianity environment.
+ */
+ bnx2x_reset_endianity(bp);
+ }
+
/* Disable MSI/MSI-X */
bnx2x_disable_msi(bp);
#include <linux/if_vlan.h>
#include <linux/prefetch.h>
#include <linux/random.h>
-#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
+#if IS_ENABLED(CONFIG_VLAN_8021Q)
#define BCM_VLAN 1
#endif
#include <net/ip.h>
static int cnic_get_v6_route(struct sockaddr_in6 *dst_addr,
struct dst_entry **dst)
{
-#if defined(CONFIG_IPV6) || (defined(CONFIG_IPV6_MODULE) && defined(MODULE))
+#if IS_ENABLED(CONFIG_IPV6)
struct flowi6 fl6;
memset(&fl6, 0, sizeof(fl6));
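The two hunks above swap the long-hand config tests for IS_ENABLED(), which pairs with the CNIC Kconfig change earlier in this section. A short reminder of the semantics (from <linux/kconfig.h>):

#include <linux/kconfig.h>

/* IS_ENABLED(CONFIG_FOO) is 1 when CONFIG_FOO=y or CONFIG_FOO=m and 0
 * otherwise -- the old defined(CONFIG_FOO) || defined(CONFIG_FOO_MODULE)
 * tests spelled this out by hand. Note it is also 1 in built-in code
 * when the option is =m, which is exactly why the CNIC Kconfig
 * dependency above must forbid CNIC=y with IPV6=m.
 */
#if IS_ENABLED(CONFIG_IPV6)
	/* IPv6 route lookup path is compiled in */
#endif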
struct tg3 *tp = netdev_priv(dev);
int err;
+ if (tp->pcierr_recovery) {
+ netdev_err(dev, "Failed to open device. PCI error recovery "
+ "in progress\n");
+ return -EAGAIN;
+ }
+
if (tp->fw_needed) {
err = tg3_request_firmware(tp);
if (tg3_asic_rev(tp) == ASIC_REV_57766) {
{
struct tg3 *tp = netdev_priv(dev);
+ if (tp->pcierr_recovery) {
+ netdev_err(dev, "Failed to close device. PCI error recovery "
+ "in progress\n");
+ return -EAGAIN;
+ }
+
tg3_ptp_fini(tp);
tg3_stop(tp);
tp->rx_mode = TG3_DEF_RX_MODE;
tp->tx_mode = TG3_DEF_TX_MODE;
tp->irq_sync = 1;
+ tp->pcierr_recovery = false;
if (tg3_debug > 0)
tp->msg_enable = tg3_debug;
rtnl_lock();
+ tp->pcierr_recovery = true;
+
/* We probably don't have netdev yet */
if (!netdev || !netif_running(netdev))
goto done;
tg3_phy_start(tp);
done:
+ tp->pcierr_recovery = false;
rtnl_unlock();
}
struct device *hwmon_dev;
bool link_up;
+ bool pcierr_recovery;
};
/* Accessor macros for chip and asic attributes
* For TSO, the TCP checksum field is seeded with pseudo-header sum
* excluding the length field.
*/
- if (skb->protocol == htons(ETH_P_IP)) {
+ if (vlan_get_protocol(skb) == htons(ETH_P_IP)) {
struct iphdr *iph = ip_hdr(skb);
/* Do we really need these? */
}
if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ __be16 net_proto = vlan_get_protocol(skb);
u8 proto = 0;
- if (skb->protocol == htons(ETH_P_IP))
+ if (net_proto == htons(ETH_P_IP))
proto = ip_hdr(skb)->protocol;
#ifdef NETIF_F_IPV6_CSUM
- else if (skb->protocol == htons(ETH_P_IPV6)) {
+ else if (net_proto == htons(ETH_P_IPV6)) {
/* nexthdr may not be TCP immediately. */
proto = ipv6_hdr(skb)->nexthdr;
}
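The skb->protocol to vlan_get_protocol() substitution above recurs across this section (ehea, e1000/e1000e, i40e/i40evf, mvneta and qlge below). The rationale, stated hedged: for a VLAN-tagged frame skb->protocol carries ETH_P_8021Q, so checksum/TSO setup keyed on it misidentifies the L3 header; vlan_get_protocol() from <linux/if_vlan.h> returns the encapsulated ethertype instead. The call pattern, with hypothetical helper names:

#include <linux/if_vlan.h>

__be16 proto = vlan_get_protocol(skb);	/* L3 type even when 802.1Q-tagged */

if (proto == htons(ETH_P_IP))
	setup_ipv4_offload(skb);	/* hypothetical helpers */
else if (proto == htons(ETH_P_IPV6))
	setup_ipv6_offload(skb);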
config NET_CALXEDA_XGMAC
tristate "Calxeda 1G/10G XGMAC Ethernet driver"
depends on HAS_IOMEM && HAS_DMA
+ depends on ARCH_HIGHBANK || COMPILE_TEST
select CRC32
help
This is the driver for the XGMAC Ethernet IP block found on Calxeda
struct tid_info tids;
void **tid_release_head;
spinlock_t tid_release_lock;
+ struct workqueue_struct *workq;
struct work_struct tid_release_task;
struct work_struct db_full_task;
struct work_struct db_drop_task;
return ret;
}
-static struct workqueue_struct *workq;
-
/**
* link_start - enable a port
* @dev: the port to enable
goto freeout;
}
- t4_write_reg(adap, MPS_TRC_RSS_CONTROL,
+ t4_write_reg(adap, is_t4(adap->params.chip) ?
+ MPS_TRC_RSS_CONTROL :
+ MPS_T5_TRC_RSS_CONTROL,
RSSCONTROL(netdev2pinfo(adap->port[0])->tx_chan) |
QUEUENUMBER(s->ethrxq[0].rspq.abs_id));
return 0;
0xd004, 0xd03c,
0xdfc0, 0xdfe0,
0xe000, 0xea7c,
- 0xf000, 0x11190,
+ 0xf000, 0x11110,
+ 0x11118, 0x11190,
0x19040, 0x1906c,
0x19078, 0x19080,
0x1908c, 0x19124,
0xd004, 0xd03c,
0xdfc0, 0xdfe0,
0xe000, 0x11088,
- 0x1109c, 0x1117c,
+ 0x1109c, 0x11110,
+ 0x11118, 0x1117c,
0x11190, 0x11204,
0x19040, 0x1906c,
0x19078, 0x19080,
adap->tid_release_head = (void **)((uintptr_t)p | chan);
if (!adap->tid_release_task_busy) {
adap->tid_release_task_busy = true;
- queue_work(workq, &adap->tid_release_task);
+ queue_work(adap->workq, &adap->tid_release_task);
}
spin_unlock_bh(&adap->tid_release_lock);
}
notify_rdma_uld(adap, CXGB4_CONTROL_DB_FULL);
t4_set_reg_field(adap, SGE_INT_ENABLE3,
DBFIFO_HP_INT | DBFIFO_LP_INT, 0);
- queue_work(workq, &adap->db_full_task);
+ queue_work(adap->workq, &adap->db_full_task);
}
}
disable_dbs(adap);
notify_rdma_uld(adap, CXGB4_CONTROL_DB_FULL);
}
- queue_work(workq, &adap->db_drop_task);
+ queue_work(adap->workq, &adap->db_drop_task);
}
static void uld_attach(struct adapter *adap, unsigned int uld)
params[3] = FW_PARAM_PFVF(CQ_END);
params[4] = FW_PARAM_PFVF(OCQ_START);
params[5] = FW_PARAM_PFVF(OCQ_END);
- ret = t4_query_params(adap, 0, 0, 0, 6, params, val);
+ ret = t4_query_params(adap, adap->mbox, adap->fn, 0, 6, params,
+ val);
if (ret < 0)
goto bye;
adap->vres.qp.start = val[0];
params[0] = FW_PARAM_DEV(MAXORDIRD_QP);
params[1] = FW_PARAM_DEV(MAXIRD_ADAPTER);
- ret = t4_query_params(adap, 0, 0, 0, 2, params, val);
+ ret = t4_query_params(adap, adap->mbox, adap->fn, 0, 2, params,
+ val);
if (ret < 0) {
adap->params.max_ordird_qp = 8;
adap->params.max_ird_adapter = 32 * adap->tids.ntids;
goto out_disable_device;
}
+ adapter->workq = create_singlethread_workqueue("cxgb4");
+ if (!adapter->workq) {
+ err = -ENOMEM;
+ goto out_free_adapter;
+ }
+
/* PCI device has been enabled */
adapter->flags |= DEV_ENABLED;
out_unmap_bar0:
iounmap(adapter->regs);
out_free_adapter:
+ if (adapter->workq)
+ destroy_workqueue(adapter->workq);
+
kfree(adapter);
out_disable_device:
pci_disable_pcie_error_reporting(pdev);
if (adapter) {
int i;
+ /* Tear down per-adapter Work Queue first since it can contain
+ * references to our adapter data structure.
+ */
+ destroy_workqueue(adapter->workq);
+
if (is_offload(adapter))
detach_ulds(adapter);
{
int ret;
- workq = create_singlethread_workqueue("cxgb4");
- if (!workq)
- return -ENOMEM;
-
/* Debugfs support is optional, just warn if this fails */
cxgb4_debugfs_root = debugfs_create_dir(KBUILD_MODNAME, NULL);
if (!cxgb4_debugfs_root)
pr_warn("could not create debugfs entry, continuing\n");
ret = pci_register_driver(&cxgb4_driver);
- if (ret < 0) {
+ if (ret < 0)
debugfs_remove(cxgb4_debugfs_root);
- destroy_workqueue(workq);
- }
register_inet6addr_notifier(&cxgb4_inet6addr_notifier);
unregister_inet6addr_notifier(&cxgb4_inet6addr_notifier);
pci_unregister_driver(&cxgb4_driver);
debugfs_remove(cxgb4_debugfs_root); /* NULL ok */
- flush_workqueue(workq);
- destroy_workqueue(workq);
}
module_init(cxgb4_init_module);
FW_EQ_ETH_CMD_PFN(adap->fn) | FW_EQ_ETH_CMD_VFN(0));
c.alloc_to_len16 = htonl(FW_EQ_ETH_CMD_ALLOC |
FW_EQ_ETH_CMD_EQSTART | FW_LEN16(c));
- c.viid_pkd = htonl(FW_EQ_ETH_CMD_VIID(pi->viid));
+ c.viid_pkd = htonl(FW_EQ_ETH_CMD_AUTOEQUEQE |
+ FW_EQ_ETH_CMD_VIID(pi->viid));
c.fetchszm_to_iqid = htonl(FW_EQ_ETH_CMD_HOSTFCMODE(2) |
FW_EQ_ETH_CMD_PCIECHN(pi->tx_chan) |
FW_EQ_ETH_CMD_FETCHRO(1) |
t4_write_reg(adap, PCIE_CFG_SPACE_REQ, 0);
}
+/*
+ * t4_report_fw_error - report firmware error
+ * @adap: the adapter
+ *
+ * The adapter firmware can indicate error conditions to the host.
+ * If the firmware has indicated an error, print out the reason for
+ * the firmware error.
+ */
+static void t4_report_fw_error(struct adapter *adap)
+{
+ static const char *const reason[] = {
+ "Crash", /* PCIE_FW_EVAL_CRASH */
+ "During Device Preparation", /* PCIE_FW_EVAL_PREP */
+ "During Device Configuration", /* PCIE_FW_EVAL_CONF */
+ "During Device Initialization", /* PCIE_FW_EVAL_INIT */
+ "Unexpected Event", /* PCIE_FW_EVAL_UNEXPECTEDEVENT */
+ "Insufficient Airflow", /* PCIE_FW_EVAL_OVERHEAT */
+ "Device Shutdown", /* PCIE_FW_EVAL_DEVICESHUTDOWN */
+ "Reserved", /* reserved */
+ };
+ u32 pcie_fw;
+
+ pcie_fw = t4_read_reg(adap, MA_PCIE_FW);
+ if (pcie_fw & FW_PCIE_FW_ERR)
+ dev_err(adap->pdev_dev, "Firmware reports adapter error: %s\n",
+ reason[FW_PCIE_FW_EVAL_GET(pcie_fw)]);
+}
+
/*
* Get the reply to a mailbox command and store it in @rpl in big-endian order.
*/
dump_mbox(adap, mbox, data_reg);
dev_err(adap->pdev_dev, "command %#x in mailbox %d timed out\n",
*(const u8 *)cmd, mbox);
+ t4_report_fw_error(adap);
return -ETIMEDOUT;
}
#define VPD_BASE 0x400
#define VPD_BASE_OLD 0
#define VPD_LEN 1024
+#define CHELSIO_VPD_UNIQUE_ID 0x82
/**
* t4_seeprom_wp - enable/disable EEPROM write protection
ret = pci_read_vpd(adapter->pdev, VPD_BASE, sizeof(u32), vpd);
if (ret < 0)
goto out;
- addr = *vpd == 0x82 ? VPD_BASE : VPD_BASE_OLD;
+
+ /* The VPD shall begin with a unique identifier specified by the PCI SIG;
+ * for Chelsio adapters this first byte is CHELSIO_VPD_UNIQUE_ID (0x82).
+ * The VPD programming software is expected to put this entry at the
+ * beginning of the VPD automatically.
+ */
+ addr = *vpd == CHELSIO_VPD_UNIQUE_ID ? VPD_BASE : VPD_BASE_OLD;
ret = pci_read_vpd(adapter->pdev, addr, VPD_LEN, vpd);
if (ret < 0)
i = pci_vpd_info_field_size(vpd + sn - PCI_VPD_INFO_FLD_HDR_SIZE);
memcpy(p->sn, vpd + sn, min(i, SERNUM_LEN));
strim(p->sn);
+ i = pci_vpd_info_field_size(vpd + pn - PCI_VPD_INFO_FLD_HDR_SIZE);
memcpy(p->pn, vpd + pn, min(i, PN_LEN));
strim(p->pn);
int fat;
- fat = t4_handle_intr_status(adapter,
- PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
- sysbus_intr_info) +
- t4_handle_intr_status(adapter,
- PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
- pcie_port_intr_info) +
- t4_handle_intr_status(adapter, PCIE_INT_CAUSE,
- is_t4(adapter->params.chip) ?
- pcie_intr_info : t5_pcie_intr_info);
+ if (is_t4(adapter->params.chip))
+ fat = t4_handle_intr_status(adapter,
+ PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
+ sysbus_intr_info) +
+ t4_handle_intr_status(adapter,
+ PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
+ pcie_port_intr_info) +
+ t4_handle_intr_status(adapter, PCIE_INT_CAUSE,
+ pcie_intr_info);
+ else
+ fat = t4_handle_intr_status(adapter, PCIE_INT_CAUSE,
+ t5_pcie_intr_info);
if (fat)
t4_fatal_err(adapter);
int fat;
+ if (t4_read_reg(adapter, MA_PCIE_FW) & FW_PCIE_FW_ERR)
+ t4_report_fw_error(adapter);
+
fat = t4_handle_intr_status(adapter, CIM_HOST_INT_CAUSE,
cim_intr_info) +
t4_handle_intr_status(adapter, CIM_HOST_UPACC_INT_CAUSE,
{
u32 v, status = t4_read_reg(adap, MA_INT_CAUSE);
- if (status & MEM_PERR_INT_CAUSE)
+ if (status & MEM_PERR_INT_CAUSE) {
dev_alert(adap->pdev_dev,
"MA parity error, parity status %#x\n",
t4_read_reg(adap, MA_PARITY_ERROR_STATUS));
+ if (is_t5(adap->params.chip))
+ dev_alert(adap->pdev_dev,
+ "MA parity error, parity status %#x\n",
+ t4_read_reg(adap,
+ MA_PARITY_ERROR_STATUS2));
+ }
if (status & MEM_WRAP_INT_CAUSE) {
v = t4_read_reg(adap, MA_INT_WRAP_STATUS);
dev_alert(adap->pdev_dev, "MA address wrap-around error by "
/*
* Issue the HELLO command to the firmware. If it's not successful
* but indicates that we got a "busy" or "timeout" condition, retry
- * the HELLO until we exhaust our retry limit.
+ * the HELLO until we exhaust our retry limit. If we do exceed our
+ * retry limit, check to see if the firmware left us any error
+ * information and report that if so.
*/
ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
if (ret < 0) {
if ((ret == -EBUSY || ret == -ETIMEDOUT) && retries-- > 0)
goto retry;
+ if (t4_read_reg(adap, MA_PCIE_FW) & FW_PCIE_FW_ERR)
+ t4_report_fw_error(adap);
return ret;
}
lc->link_ok = link_ok;
lc->speed = speed;
lc->fc = fc;
+ lc->supported = be16_to_cpu(p->u.info.pcap);
t4_os_link_changed(adap, port, link_ok);
}
if (mod != pi->mod_type) {
#define MEM_WRAP_CLIENT_NUM_GET(x) (((x) & MEM_WRAP_CLIENT_NUM_MASK) >> MEM_WRAP_CLIENT_NUM_SHIFT)
#define MA_PCIE_FW 0x30b8
#define MA_PARITY_ERROR_STATUS 0x77f4
+#define MA_PARITY_ERROR_STATUS2 0x7804
#define MA_EXT_MEMORY1_BAR 0x7808
#define EDC_0_BASE_ADDR 0x7900
#define TRCMULTIFILTER 0x00000001U
#define MPS_TRC_RSS_CONTROL 0x9808
+#define MPS_T5_TRC_RSS_CONTROL 0xa00c
#define RSSCONTROL_MASK 0x00ff0000U
#define RSSCONTROL_SHIFT 16
#define RSSCONTROL(x) ((x) << RSSCONTROL_SHIFT)
#define FW_EQ_ETH_CMD_CIDXFTHRESH(x) ((x) << 16)
#define FW_EQ_ETH_CMD_EQSIZE(x) ((x) << 0)
+#define FW_EQ_ETH_CMD_AUTOEQUEQE (1U << 30)
#define FW_EQ_ETH_CMD_VIID(x) ((x) << 16)
struct fw_eq_ctrl_cmd {
#define FW_PCIE_FW_MASTER(x) ((x) << FW_PCIE_FW_MASTER_SHIFT)
#define FW_PCIE_FW_MASTER_GET(x) (((x) >> FW_PCIE_FW_MASTER_SHIFT) & \
FW_PCIE_FW_MASTER_MASK)
+#define FW_PCIE_FW_EVAL_MASK 0x7
+#define FW_PCIE_FW_EVAL_SHIFT 24
+#define FW_PCIE_FW_EVAL_GET(x) (((x) >> FW_PCIE_FW_EVAL_SHIFT) & \
+ FW_PCIE_FW_EVAL_MASK)
struct fw_hdr {
u8 ver;
cmd.alloc_to_len16 = cpu_to_be32(FW_EQ_ETH_CMD_ALLOC |
FW_EQ_ETH_CMD_EQSTART |
FW_LEN16(cmd));
- cmd.viid_pkd = cpu_to_be32(FW_EQ_ETH_CMD_VIID(pi->viid));
+ cmd.viid_pkd = cpu_to_be32(FW_EQ_ETH_CMD_AUTOEQUEQE |
+ FW_EQ_ETH_CMD_VIID(pi->viid));
cmd.fetchszm_to_iqid =
cpu_to_be32(FW_EQ_ETH_CMD_HOSTFCMODE(SGE_HOSTFCMODE_STPG) |
FW_EQ_ETH_CMD_PCIECHN(pi->port_id) |
struct clk *clk_enet_out;
struct clk *clk_ptp;
+ bool ptp_clk_on;
+ struct mutex ptp_clk_mutex;
+
/* The saved address of a sent-in-place packet/buffer, for skfree(). */
unsigned char *tx_bounce[TX_RING_SIZE];
struct sk_buff *tx_skbuff[TX_RING_SIZE];
u32 cycle_speed;
int hwts_rx_en;
int hwts_tx_en;
- struct timer_list time_keep;
+ struct delayed_work time_keep;
struct regulator *reg_phy;
};
goto failed_clk_enet_out;
}
if (fep->clk_ptp) {
+ mutex_lock(&fep->ptp_clk_mutex);
ret = clk_prepare_enable(fep->clk_ptp);
- if (ret)
+ if (ret) {
+ mutex_unlock(&fep->ptp_clk_mutex);
goto failed_clk_ptp;
+ } else {
+ fep->ptp_clk_on = true;
+ }
+ mutex_unlock(&fep->ptp_clk_mutex);
}
} else {
clk_disable_unprepare(fep->clk_ahb);
clk_disable_unprepare(fep->clk_ipg);
if (fep->clk_enet_out)
clk_disable_unprepare(fep->clk_enet_out);
- if (fep->clk_ptp)
+ if (fep->clk_ptp) {
+ mutex_lock(&fep->ptp_clk_mutex);
clk_disable_unprepare(fep->clk_ptp);
+ fep->ptp_clk_on = false;
+ mutex_unlock(&fep->ptp_clk_mutex);
+ }
}
return 0;
if (IS_ERR(fep->clk_enet_out))
fep->clk_enet_out = NULL;
+ fep->ptp_clk_on = false;
+ mutex_init(&fep->ptp_clk_mutex);
fep->clk_ptp = devm_clk_get(&pdev->dev, "ptp");
fep->bufdesc_ex =
pdev->id_entry->driver_data & FEC_QUIRK_HAS_BUFDESC_EX;
struct net_device *ndev = platform_get_drvdata(pdev);
struct fec_enet_private *fep = netdev_priv(ndev);
+ cancel_delayed_work_sync(&fep->time_keep);
cancel_work_sync(&fep->tx_timeout_work);
unregister_netdev(ndev);
fec_enet_mii_remove(fep);
- del_timer_sync(&fep->time_keep);
if (fep->reg_phy)
regulator_disable(fep->reg_phy);
if (fep->ptp_clock)
u64 ns;
unsigned long flags;
+ mutex_lock(&fep->ptp_clk_mutex);
+ /* Check the ptp clock */
+ if (!fep->ptp_clk_on) {
+ mutex_unlock(&fep->ptp_clk_mutex);
+ return -EINVAL;
+ }
+
ns = ts->tv_sec * 1000000000ULL;
ns += ts->tv_nsec;
spin_lock_irqsave(&fep->tmreg_lock, flags);
timecounter_init(&fep->tc, &fep->cc, ns);
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+ mutex_unlock(&fep->ptp_clk_mutex);
return 0;
}
* fec_time_keep - call timecounter_read every second to avoid timer overrun
* because ENET only supports a 32-bit counter, which overflows in about 4s (2^32 ns)
*/
-static void fec_time_keep(unsigned long _data)
+static void fec_time_keep(struct work_struct *work)
{
- struct fec_enet_private *fep = (struct fec_enet_private *)_data;
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct fec_enet_private *fep = container_of(dwork, struct fec_enet_private, time_keep);
u64 ns;
unsigned long flags;
- spin_lock_irqsave(&fep->tmreg_lock, flags);
- ns = timecounter_read(&fep->tc);
- spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+ mutex_lock(&fep->ptp_clk_mutex);
+ if (fep->ptp_clk_on) {
+ spin_lock_irqsave(&fep->tmreg_lock, flags);
+ ns = timecounter_read(&fep->tc);
+ spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+ }
+ mutex_unlock(&fep->ptp_clk_mutex);
- mod_timer(&fep->time_keep, jiffies + HZ);
+ schedule_delayed_work(&fep->time_keep, HZ);
}
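The timer_list to delayed_work conversion here carries an implicit reason worth stating, hedged: a timer callback runs in softirq (atomic) context where ptp_clk_mutex could not be taken, while a work item runs in process context and may sleep. The self re-arming pattern in miniature (names hypothetical):

#include <linux/workqueue.h>
#include <linux/mutex.h>

static struct delayed_work keeper;
static DEFINE_MUTEX(clk_mutex);

static void keeper_fn(struct work_struct *work)
{
	mutex_lock(&clk_mutex);		/* legal here, illegal in a timer */
	/* ... read the timecounter while the clock is known to be on ... */
	mutex_unlock(&clk_mutex);

	schedule_delayed_work(&keeper, HZ);	/* re-arm for next second */
}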
/**
fec_ptp_start_cyclecounter(ndev);
- init_timer(&fep->time_keep);
- fep->time_keep.data = (unsigned long)fep;
- fep->time_keep.function = fec_time_keep;
- fep->time_keep.expires = jiffies + HZ;
- add_timer(&fep->time_keep);
+ INIT_DELAYED_WORK(&fep->time_keep, fec_time_keep);
fep->ptp_clock = ptp_clock_register(&fep->ptp_caps, &pdev->dev);
if (IS_ERR(fep->ptp_clock)) {
fep->ptp_clock = NULL;
pr_err("ptp_clock_register failed\n");
}
+
+ schedule_delayed_work(&fep->time_keep, HZ);
}
{
swqe->tx_control |= EHEA_SWQE_IMM_DATA_PRESENT | EHEA_SWQE_CRC;
- if (skb->protocol != htons(ETH_P_IP))
+ if (vlan_get_protocol(skb) != htons(ETH_P_IP))
return;
if (skb->ip_summed == CHECKSUM_PARTIAL)
atomic_add(buffers_added, &(pool->available));
}
+/*
+ * The final 8 bytes of the buffer list hold a counter of frames dropped
+ * because there was no buffer in the buffer list capable of holding
+ * the frame.
+ */
+static void ibmveth_update_rx_no_buffer(struct ibmveth_adapter *adapter)
+{
+ __be64 *p = adapter->buffer_list_addr + 4096 - 8;
+
+ adapter->rx_no_buffer = be64_to_cpup(p);
+}
+
/* replenish routine */
static void ibmveth_replenish_task(struct ibmveth_adapter *adapter)
{
ibmveth_replenish_buffer_pool(adapter, pool);
}
- adapter->rx_no_buffer = *(u64 *)(((char*)adapter->buffer_list_addr) +
- 4096 - 8);
+ ibmveth_update_rx_no_buffer(adapter);
}
/* empty and free ana buffer pool - also used to do cleanup in error paths */
free_irq(netdev->irq, netdev);
- adapter->rx_no_buffer = *(u64 *)(((char *)adapter->buffer_list_addr) +
- 4096 - 8);
+ ibmveth_update_rx_no_buffer(adapter);
ibmveth_cleanup(adapter);
#define E1000_TX_FLAGS_VLAN_SHIFT 16
static int e1000_tso(struct e1000_adapter *adapter,
- struct e1000_tx_ring *tx_ring, struct sk_buff *skb)
+ struct e1000_tx_ring *tx_ring, struct sk_buff *skb,
+ __be16 protocol)
{
struct e1000_context_desc *context_desc;
struct e1000_buffer *buffer_info;
hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
mss = skb_shinfo(skb)->gso_size;
- if (skb->protocol == htons(ETH_P_IP)) {
+ if (protocol == htons(ETH_P_IP)) {
struct iphdr *iph = ip_hdr(skb);
iph->tot_len = 0;
iph->check = 0;
0);
cmd_length = E1000_TXD_CMD_IP;
ipcse = skb_transport_offset(skb) - 1;
- } else if (skb->protocol == htons(ETH_P_IPV6)) {
+ } else if (skb_is_gso_v6(skb)) {
ipv6_hdr(skb)->payload_len = 0;
tcp_hdr(skb)->check =
~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
}
static bool e1000_tx_csum(struct e1000_adapter *adapter,
- struct e1000_tx_ring *tx_ring, struct sk_buff *skb)
+ struct e1000_tx_ring *tx_ring, struct sk_buff *skb,
+ __be16 protocol)
{
struct e1000_context_desc *context_desc;
struct e1000_buffer *buffer_info;
if (skb->ip_summed != CHECKSUM_PARTIAL)
return false;
- switch (skb->protocol) {
+ switch (protocol) {
case cpu_to_be16(ETH_P_IP):
if (ip_hdr(skb)->protocol == IPPROTO_TCP)
cmd_len |= E1000_TXD_CMD_TCP;
int count = 0;
int tso;
unsigned int f;
+ __be16 protocol = vlan_get_protocol(skb);
/* This goes back to the question of how to logically map a Tx queue
* to a flow. Right now, performance is impacted slightly negatively
first = tx_ring->next_to_use;
- tso = e1000_tso(adapter, tx_ring, skb);
+ tso = e1000_tso(adapter, tx_ring, skb, protocol);
if (tso < 0) {
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
if (likely(hw->mac_type != e1000_82544))
tx_ring->last_tx_tso = true;
tx_flags |= E1000_TX_FLAGS_TSO;
- } else if (likely(e1000_tx_csum(adapter, tx_ring, skb)))
+ } else if (likely(e1000_tx_csum(adapter, tx_ring, skb, protocol)))
tx_flags |= E1000_TX_FLAGS_CSUM;
- if (likely(skb->protocol == htons(ETH_P_IP)))
+ if (protocol == htons(ETH_P_IP))
tx_flags |= E1000_TX_FLAGS_IPV4;
if (unlikely(skb->no_fcs))
#define E1000_TX_FLAGS_VLAN_MASK 0xffff0000
#define E1000_TX_FLAGS_VLAN_SHIFT 16
-static int e1000_tso(struct e1000_ring *tx_ring, struct sk_buff *skb)
+static int e1000_tso(struct e1000_ring *tx_ring, struct sk_buff *skb,
+ __be16 protocol)
{
struct e1000_context_desc *context_desc;
struct e1000_buffer *buffer_info;
hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
mss = skb_shinfo(skb)->gso_size;
- if (skb->protocol == htons(ETH_P_IP)) {
+ if (protocol == htons(ETH_P_IP)) {
struct iphdr *iph = ip_hdr(skb);
iph->tot_len = 0;
iph->check = 0;
return 1;
}
-static bool e1000_tx_csum(struct e1000_ring *tx_ring, struct sk_buff *skb)
+static bool e1000_tx_csum(struct e1000_ring *tx_ring, struct sk_buff *skb,
+ __be16 protocol)
{
struct e1000_adapter *adapter = tx_ring->adapter;
struct e1000_context_desc *context_desc;
unsigned int i;
u8 css;
u32 cmd_len = E1000_TXD_CMD_DEXT;
- __be16 protocol;
if (skb->ip_summed != CHECKSUM_PARTIAL)
return false;
- if (skb->protocol == cpu_to_be16(ETH_P_8021Q))
- protocol = vlan_eth_hdr(skb)->h_vlan_encapsulated_proto;
- else
- protocol = skb->protocol;
-
switch (protocol) {
case cpu_to_be16(ETH_P_IP):
if (ip_hdr(skb)->protocol == IPPROTO_TCP)
int count = 0;
int tso;
unsigned int f;
+ __be16 protocol = vlan_get_protocol(skb);
if (test_bit(__E1000_DOWN, &adapter->state)) {
dev_kfree_skb_any(skb);
first = tx_ring->next_to_use;
- tso = e1000_tso(tx_ring, skb);
+ tso = e1000_tso(tx_ring, skb, protocol);
if (tso < 0) {
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
if (tso)
tx_flags |= E1000_TX_FLAGS_TSO;
- else if (e1000_tx_csum(tx_ring, skb))
+ else if (e1000_tx_csum(tx_ring, skb, protocol))
tx_flags |= E1000_TX_FLAGS_CSUM;
/* Old method was to assume IPv4 packet by default if TSO was enabled.
* 82571 hardware supports TSO capabilities for IPv6 as well...
* so we can no longer assume; we must check the protocol.
*/
- if (skb->protocol == htons(ETH_P_IP))
+ if (protocol == htons(ETH_P_IP))
tx_flags |= E1000_TX_FLAGS_IPV4;
if (unlikely(skb->no_fcs))
u32 prttsyn_stat;
int n;
- if (pf->flags & I40E_FLAG_PTP)
+ if (!(pf->flags & I40E_FLAG_PTP))
return;
prttsyn_stat = rd32(hw, I40E_PRTTSYN_STAT_1);
goto out_drop;
/* obtain protocol of skb */
- protocol = skb->protocol;
+ protocol = vlan_get_protocol(skb);
/* record the location of the first descriptor for this packet */
first = &tx_ring->tx_bi[tx_ring->next_to_use];
static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
u32 v_retval, u8 *msg, u16 msglen)
{
- struct i40e_pf *pf = vf->pf;
- struct i40e_hw *hw = &pf->hw;
- int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+ struct i40e_pf *pf;
+ struct i40e_hw *hw;
+ int abs_vf_id;
i40e_status aq_ret;
+ /* validate the request */
+ if (!vf || vf->vf_id >= vf->pf->num_alloc_vfs)
+ return -EINVAL;
+
+ pf = vf->pf;
+ hw = &pf->hw;
+ abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+
/* single place to detect unsuccessful return values */
if (v_retval) {
vf->num_invalid_msgs++;
{
struct i40e_hw *hw = &pf->hw;
struct i40e_vf *vf = pf->vf;
- int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
int i;
- for (i = 0; i < pf->num_alloc_vfs; i++) {
+ for (i = 0; i < pf->num_alloc_vfs; i++, vf++) {
+ int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+ /* Not all VFs are enabled, so skip the ones that are not */
+ if (!test_bit(I40E_VF_STAT_INIT, &vf->vf_states) &&
+ !test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states))
+ continue;
+
/* Ignore return value on purpose - a given VF may fail, but
* we need to keep going and send to all of them
*/
i40e_aq_send_msg_to_vf(hw, abs_vf_id, v_opcode, v_retval,
msg, msglen, NULL);
- vf++;
- abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
}
}
struct i40e_hw *hw = &pf->hw;
struct i40e_vf *vf = pf->vf;
struct i40e_link_status *ls = &pf->hw.phy.link_info;
- int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
int i;
pfe.event = I40E_VIRTCHNL_EVENT_LINK_CHANGE;
pfe.severity = I40E_PF_EVENT_SEVERITY_INFO;
- for (i = 0; i < pf->num_alloc_vfs; i++) {
+ for (i = 0; i < pf->num_alloc_vfs; i++, vf++) {
+ int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
if (vf->link_forced) {
pfe.event_data.link_event.link_status = vf->link_up;
pfe.event_data.link_event.link_speed =
i40e_aq_send_msg_to_vf(hw, abs_vf_id, I40E_VIRTCHNL_OP_EVENT,
0, (u8 *)&pfe, sizeof(pfe),
NULL);
- vf++;
- abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
}
}
void i40e_vc_notify_vf_reset(struct i40e_vf *vf)
{
struct i40e_virtchnl_pf_event pfe;
- int abs_vf_id = vf->vf_id + vf->pf->hw.func_caps.vf_base_id;
+ int abs_vf_id;
+
+ /* validate the request */
+ if (!vf || vf->vf_id >= vf->pf->num_alloc_vfs)
+ return;
+
+ /* verify the VF is in either the init or active state before proceeding */
+ if (!test_bit(I40E_VF_STAT_INIT, &vf->vf_states) &&
+ !test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states))
+ return;
+
+ abs_vf_id = vf->vf_id + vf->pf->hw.func_caps.vf_base_id;
pfe.event = I40E_VIRTCHNL_EVENT_RESET_IMPENDING;
pfe.severity = I40E_PF_EVENT_SEVERITY_CERTAIN_DOOM;
goto out_drop;
/* obtain protocol of skb */
- protocol = skb->protocol;
+ protocol = vlan_get_protocol(skb);
/* record the location of the first descriptor for this packet */
first = &tx_ring->tx_bi[tx_ring->next_to_use];
#include <linux/mbus.h>
#include <linux/module.h>
#include <linux/interrupt.h>
+#include <linux/if_vlan.h>
#include <net/ip.h>
#include <net/ipv6.h>
#include <linux/io.h>
{
if (skb->ip_summed == CHECKSUM_PARTIAL) {
int ip_hdr_len = 0;
+ __be16 l3_proto = vlan_get_protocol(skb);
u8 l4_proto;
- if (skb->protocol == htons(ETH_P_IP)) {
+ if (l3_proto == htons(ETH_P_IP)) {
struct iphdr *ip4h = ip_hdr(skb);
/* Calculate IPv4 checksum and L4 checksum */
ip_hdr_len = ip4h->ihl;
l4_proto = ip4h->protocol;
- } else if (skb->protocol == htons(ETH_P_IPV6)) {
+ } else if (l3_proto == htons(ETH_P_IPV6)) {
struct ipv6hdr *ip6h = ipv6_hdr(skb);
/* Read l4_protocol from one of IPv6 extra headers */
return MVNETA_TX_L4_CSUM_NOT;
return mvneta_txq_desc_csum(skb_network_offset(skb),
- skb->protocol, ip_hdr_len, l4_proto);
+ l3_proto, ip_hdr_len, l4_proto);
}
return MVNETA_TX_L4_CSUM_NOT;
int qpn, u64 *reg_id)
{
int err;
- struct mlx4_spec_list spec_eth_outer = { {NULL} };
- struct mlx4_spec_list spec_vxlan = { {NULL} };
- struct mlx4_spec_list spec_eth_inner = { {NULL} };
-
- struct mlx4_net_trans_rule rule = {
- .queue_mode = MLX4_NET_TRANS_Q_FIFO,
- .exclusive = 0,
- .allow_loopback = 1,
- .promisc_mode = MLX4_FS_REGULAR,
- .priority = MLX4_DOMAIN_NIC,
- };
-
- __be64 mac_mask = cpu_to_be64(MLX4_MAC_MASK << 16);
if (priv->mdev->dev->caps.tunnel_offload_mode != MLX4_TUNNEL_OFFLOAD_MODE_VXLAN)
return 0; /* do nothing */
- rule.port = priv->port;
- rule.qpn = qpn;
- INIT_LIST_HEAD(&rule.list);
-
- spec_eth_outer.id = MLX4_NET_TRANS_RULE_ID_ETH;
- memcpy(spec_eth_outer.eth.dst_mac, addr, ETH_ALEN);
- memcpy(spec_eth_outer.eth.dst_mac_msk, &mac_mask, ETH_ALEN);
-
- spec_vxlan.id = MLX4_NET_TRANS_RULE_ID_VXLAN; /* any vxlan header */
- spec_eth_inner.id = MLX4_NET_TRANS_RULE_ID_ETH; /* any inner eth header */
-
- list_add_tail(&spec_eth_outer.list, &rule.list);
- list_add_tail(&spec_vxlan.list, &rule.list);
- list_add_tail(&spec_eth_inner.list, &rule.list);
-
- err = mlx4_flow_attach(priv->mdev->dev, &rule, reg_id);
+ err = mlx4_tunnel_steer_add(priv->mdev->dev, addr, priv->port, qpn,
+ MLX4_DOMAIN_NIC, reg_id);
if (err) {
en_err(priv, "failed to add vxlan steering rule, err %d\n", err);
return err;
}
EXPORT_SYMBOL_GPL(mlx4_flow_detach);
+int mlx4_tunnel_steer_add(struct mlx4_dev *dev, unsigned char *addr,
+ int port, int qpn, u16 prio, u64 *reg_id)
+{
+ int err;
+ struct mlx4_spec_list spec_eth_outer = { {NULL} };
+ struct mlx4_spec_list spec_vxlan = { {NULL} };
+ struct mlx4_spec_list spec_eth_inner = { {NULL} };
+
+ struct mlx4_net_trans_rule rule = {
+ .queue_mode = MLX4_NET_TRANS_Q_FIFO,
+ .exclusive = 0,
+ .allow_loopback = 1,
+ .promisc_mode = MLX4_FS_REGULAR,
+ };
+
+ __be64 mac_mask = cpu_to_be64(MLX4_MAC_MASK << 16);
+
+ rule.port = port;
+ rule.qpn = qpn;
+ rule.priority = prio;
+ INIT_LIST_HEAD(&rule.list);
+
+ spec_eth_outer.id = MLX4_NET_TRANS_RULE_ID_ETH;
+ memcpy(spec_eth_outer.eth.dst_mac, addr, ETH_ALEN);
+ memcpy(spec_eth_outer.eth.dst_mac_msk, &mac_mask, ETH_ALEN);
+
+ spec_vxlan.id = MLX4_NET_TRANS_RULE_ID_VXLAN; /* any vxlan header */
+ spec_eth_inner.id = MLX4_NET_TRANS_RULE_ID_ETH; /* any inner eth header */
+
+ list_add_tail(&spec_eth_outer.list, &rule.list);
+ list_add_tail(&spec_vxlan.list, &rule.list);
+ list_add_tail(&spec_eth_inner.list, &rule.list);
+
+ err = mlx4_flow_attach(dev, &rule, reg_id);
+ return err;
+}
+EXPORT_SYMBOL(mlx4_tunnel_steer_add);
+
int mlx4_FLOW_STEERING_IB_UC_QP_RANGE(struct mlx4_dev *dev, u32 min_range_qpn,
u32 max_range_qpn)
{
int rx_head = priv->rx_head;
int rx = 0;
- while (1) {
+ while (rx < budget) {
desc = priv->rx_desc_base + (RX_REG_DESC_SIZE * rx_head);
desc0 = readl(desc + RX_REG_OFFSET_DESC0);
net_dbg_ratelimited("packet error\n");
priv->stats.rx_dropped++;
priv->stats.rx_errors++;
- continue;
+ goto rx_next;
}
len = desc0 & RX_DESC0_FRAME_LEN_MASK;
if (len > RX_BUF_SIZE)
len = RX_BUF_SIZE;
- skb = build_skb(priv->rx_buf[rx_head], priv->rx_buf_size);
+ dma_sync_single_for_cpu(&ndev->dev,
+ priv->rx_mapping[rx_head],
+ priv->rx_buf_size, DMA_FROM_DEVICE);
+ skb = netdev_alloc_skb_ip_align(ndev, len);
+
if (unlikely(!skb)) {
- net_dbg_ratelimited("build_skb failed\n");
+ net_dbg_ratelimited("netdev_alloc_skb_ip_align failed\n");
priv->stats.rx_dropped++;
priv->stats.rx_errors++;
+ goto rx_next;
}
+ memcpy(skb->data, priv->rx_buf[rx_head], len);
skb_put(skb, len);
skb->protocol = eth_type_trans(skb, ndev);
napi_gro_receive(&priv->napi, skb);
if (desc0 & RX_DESC0_MULTICAST)
priv->stats.multicast++;
+rx_next:
writel(RX_DESC0_DMA_OWN, desc + RX_REG_OFFSET_DESC0);
rx_head = RX_NEXT(rx_head);
priv->rx_head = rx_head;
-
- if (rx >= budget)
- break;
}
if (rx < budget) {
- napi_gro_flush(napi, false);
- __napi_complete(napi);
+ napi_complete(napi);
}
priv->reg_imr |= RPKT_FINISH_M;
len = ETH_ZLEN;
}
- txdes1 = readl(desc + TX_REG_OFFSET_DESC1);
- txdes1 |= TX_DESC1_LTS | TX_DESC1_FTS;
- txdes1 &= ~(TX_DESC1_FIFO_COMPLETE | TX_DESC1_INTR_COMPLETE);
- txdes1 |= (len & TX_DESC1_BUF_SIZE_MASK);
+ dma_sync_single_for_device(&ndev->dev, priv->tx_mapping[tx_head],
+ priv->tx_buf_size, DMA_TO_DEVICE);
+
+ txdes1 = TX_DESC1_LTS | TX_DESC1_FTS | (len & TX_DESC1_BUF_SIZE_MASK);
+ if (tx_head == TX_DESC_NUM_MASK)
+ txdes1 |= TX_DESC1_END;
writel(txdes1, desc + TX_REG_OFFSET_DESC1);
writel(TX_DESC0_DMA_OWN, desc + TX_REG_OFFSET_DESC0);
spin_lock_init(&priv->txlock);
priv->tx_buf_size = TX_BUF_SIZE;
- priv->rx_buf_size = RX_BUF_SIZE +
- SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ priv->rx_buf_size = RX_BUF_SIZE;
priv->tx_desc_base = dma_alloc_coherent(NULL, TX_REG_DESC_SIZE *
TX_DESC_NUM, &priv->tx_base,
__lpc_eth_clock_enable(pldat, true);
+ /* A suspended PHY makes the LPC Ethernet core block, so resume it now */
+ phy_resume(pldat->phy_dev);
+
/* Reset and initialize */
__lpc_eth_reset(pldat);
__lpc_eth_init(pldat);
u16 cksum;
u16 unused;
u8 model[16];
- u16 mfg_id;
+ u8 mfg_id;
u16 id;
u8 flag;
u8 erase_cmd;
return QLC_DEFAULT_VNIC_COUNT;
}
+static inline void qlcnic_swap32_buffer(u32 *buffer, int count)
+{
+#if defined(__BIG_ENDIAN)
+ u32 *tmp = buffer;
+ int i;
+
+ for (i = 0; i < count; i++) {
+ *tmp = swab32(*tmp);
+ tmp++;
+ }
+#endif
+}
+
#ifdef CONFIG_QLCNIC_HWMON
void qlcnic_register_hwmon_dev(struct qlcnic_adapter *);
void qlcnic_unregister_hwmon_dev(struct qlcnic_adapter *);
}
qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_DIRECT_WINDOW,
- (addr));
+ (addr & 0xFFFF0000));
range = flash_offset + (count * sizeof(u32));
/* Check if data is spread across multiple sectors */
ret = qlcnic_83xx_lockless_flash_read32(adapter, QLCNIC_FDT_LOCATION,
(u8 *)&adapter->ahw->fdt,
count);
-
+ qlcnic_swap32_buffer((u32 *)&adapter->ahw->fdt, count);
qlcnic_83xx_unlock_flash(adapter);
return ret;
}
addr1 = (sector_start_addr & 0xFF) << 16;
addr2 = (sector_start_addr & 0xFF0000) >> 16;
- reversed_addr = addr1 | addr2;
+ reversed_addr = addr1 | addr2 | (sector_start_addr & 0xFF00);
qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_WRDATA,
reversed_addr);
{
struct qlc_83xx_fw_info *fw_info = adapter->ahw->fw_info;
const struct firmware *fw = fw_info->fw;
- u32 dest, *p_cache;
+ u32 dest, *p_cache, *temp;
int i, ret = -EIO;
+ __le32 *temp_le;
u8 data[16];
size_t size;
u64 addr;
+ temp = kzalloc(fw->size, GFP_KERNEL);
+ if (!temp) {
+ release_firmware(fw);
+ fw_info->fw = NULL;
+ return -ENOMEM;
+ }
+
+ temp_le = (__le32 *)fw->data;
+
+ /* The FW image in the file is little endian; swap the data to nullify
+ * the effect of the writel() operation on big endian platforms.
+ */
+ for (i = 0; i < fw->size / sizeof(u32); i++)
+ temp[i] = __le32_to_cpu(temp_le[i]);
+
dest = QLCRDX(adapter->ahw, QLCNIC_FW_IMAGE_ADDR);
size = (fw->size & ~0xF);
- p_cache = (u32 *)fw->data;
+ p_cache = temp;
addr = (u64)dest;
ret = qlcnic_ms_mem_write128(adapter, addr,
p_cache, size / 16);
if (ret) {
dev_err(&adapter->pdev->dev, "MS memory write failed\n");
- release_firmware(fw);
- fw_info->fw = NULL;
- return -EIO;
+ goto exit;
}
/* alignment check */
if (fw->size & 0xF) {
addr = dest + size;
for (i = 0; i < (fw->size & 0xF); i++)
- data[i] = fw->data[size + i];
+ data[i] = ((u8 *)temp)[size + i];
for (; i < 16; i++)
data[i] = 0;
ret = qlcnic_ms_mem_write128(adapter, addr,
if (ret) {
dev_err(&adapter->pdev->dev,
"MS memory write failed\n");
- release_firmware(fw);
- fw_info->fw = NULL;
- return -EIO;
+ goto exit;
}
}
+
+exit:
release_firmware(fw);
fw_info->fw = NULL;
+ kfree(temp);
- return 0;
+ return ret;
}
static void qlcnic_83xx_dump_pause_control_regs(struct qlcnic_adapter *adapter)
u32 type;
u32 offset;
u32 cap_size;
+#if defined(__LITTLE_ENDIAN)
u8 mask;
u8 rsvd[2];
u8 flags;
+#else
+ u8 flags;
+ u8 rsvd[2];
+ u8 mask;
+#endif
} __packed;
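Every #if defined(__LITTLE_ENDIAN) block in these dump-template structs follows the same logic, stated hedged: the template arrives as a stream of little-endian 32-bit words, and after qlcnic_swap32_buffer() byte-swaps each word on a big-endian host, the sub-word u8/u16 fields land mirrored within the word, so the big-endian declaration order is reversed to compensate. Worked through on a hypothetical word:

/* The LE template word 0xAABBCCDD is stored as bytes DD CC BB AA, so on
 * a little-endian host 'mask' (struct offset 0) reads 0xDD and 'flags'
 * (offset 3) reads 0xAA. On a big-endian host swab32() turns the
 * buffered word into bytes AA BB CC DD: offset 0 now holds the flags
 * byte, which is why 'flags' is declared first under __BIG_ENDIAN.
 */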
struct __crb {
u32 addr;
+#if defined(__LITTLE_ENDIAN)
u8 stride;
u8 rsvd1[3];
+#else
+ u8 rsvd1[3];
+ u8 stride;
+#endif
u32 data_size;
u32 no_ops;
u32 rsvd2[4];
struct __ctrl {
u32 addr;
+#if defined(__LITTLE_ENDIAN)
u8 stride;
u8 index_a;
u16 timeout;
+#else
+ u16 timeout;
+ u8 index_a;
+ u8 stride;
+#endif
u32 data_size;
u32 no_ops;
+#if defined(__LITTLE_ENDIAN)
u8 opcode;
u8 index_v;
u8 shl_val;
u8 shr_val;
+#else
+ u8 shr_val;
+ u8 shl_val;
+ u8 index_v;
+ u8 opcode;
+#endif
u32 val1;
u32 val2;
u32 val3;
struct __cache {
u32 addr;
+#if defined(__LITTLE_ENDIAN)
u16 stride;
u16 init_tag_val;
+#else
+ u16 init_tag_val;
+ u16 stride;
+#endif
u32 size;
u32 no_ops;
u32 ctrl_addr;
u32 ctrl_val;
u32 read_addr;
+#if defined(__LITTLE_ENDIAN)
u8 read_addr_stride;
u8 read_addr_num;
u8 rsvd1[2];
+#else
+ u8 rsvd1[2];
+ u8 read_addr_num;
+ u8 read_addr_stride;
+#endif
} __packed;
struct __ocm {
struct __queue {
u32 sel_addr;
+#if defined(__LITTLE_ENDIAN)
u16 stride;
u8 rsvd[2];
+#else
+ u8 rsvd[2];
+ u16 stride;
+#endif
u32 size;
u32 no_ops;
u8 rsvd2[8];
u32 read_addr;
+#if defined(__LITTLE_ENDIAN)
u8 read_addr_stride;
u8 read_addr_cnt;
u8 rsvd3[2];
+#else
+ u8 rsvd3[2];
+ u8 read_addr_cnt;
+ u8 read_addr_stride;
+#endif
} __packed;
struct __pollrd {
u32 sel_addr;
u32 read_addr;
u32 sel_val;
+#if defined(__LITTLE_ENDIAN)
u16 sel_val_stride;
u16 no_ops;
+#else
+ u16 no_ops;
+ u16 sel_val_stride;
+#endif
u32 poll_wait;
u32 poll_mask;
u32 data_size;
u32 no_ops;
u32 sel_val_mask;
u32 read_addr;
+#if defined(__LITTLE_ENDIAN)
u8 sel_val_stride;
u8 data_size;
u8 rsvd[2];
+#else
+ u8 rsvd[2];
+ u8 data_size;
+ u8 sel_val_stride;
+#endif
} __packed;
struct __pollrdmwr {
if (ret != 0)
return ret;
qlcnic_read_crb(adapter, buf, offset, size);
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
return size;
}
if (ret != 0)
return ret;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
qlcnic_write_crb(adapter, buf, offset, size);
return size;
}
return -EIO;
memcpy(buf, &data, size);
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
return size;
}
if (ret != 0)
return ret;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
memcpy(&data, buf, size);
if (qlcnic_pci_mem_write_2M(adapter, offset, data))
if (rem)
return QL_STATUS_INVALID_PARAM;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
pm_cfg = (struct qlcnic_pm_func_cfg *)buf;
ret = validate_pm_config(adapter, pm_cfg, count);
pm_cfg[pci_func].dest_npar = 0;
pm_cfg[pci_func].pci_func = i;
}
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
return size;
}
if (rem)
return QL_STATUS_INVALID_PARAM;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
esw_cfg = (struct qlcnic_esw_func_cfg *)buf;
ret = validate_esw_config(adapter, esw_cfg, count);
if (ret)
if (qlcnic_get_eswitch_port_config(adapter, &esw_cfg[pci_func]))
return QL_STATUS_INVALID_PARAM;
}
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
return size;
}
if (rem)
return QL_STATUS_INVALID_PARAM;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
np_cfg = (struct qlcnic_npar_func_cfg *)buf;
ret = validate_npar_config(adapter, np_cfg, count);
if (ret)
np_cfg[pci_func].max_tx_queues = nic_info.max_tx_ques;
np_cfg[pci_func].max_rx_queues = nic_info.max_rx_ques;
}
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
return size;
}
pci_cfg = (struct qlcnic_pci_func_cfg *)buf;
count = size / sizeof(struct qlcnic_pci_func_cfg);
+ qlcnic_swap32_buffer((u32 *)pci_info, size / sizeof(u32));
for (i = 0; i < count; i++) {
pci_cfg[i].pci_func = pci_info[i].id;
pci_cfg[i].func_type = pci_info[i].type;
}
qlcnic_83xx_unlock_flash(adapter);
+ qlcnic_swap32_buffer((u32 *)p_read_buf, count);
memcpy(buf, p_read_buf, size);
kfree(p_read_buf);
if (!p_cache)
return -ENOMEM;
+ count = size / sizeof(u32);
+ qlcnic_swap32_buffer((u32 *)buf, count);
memcpy(p_cache, buf, size);
p_src = p_cache;
- count = size / sizeof(u32);
if (qlcnic_83xx_lock_flash(adapter) != 0) {
kfree(p_cache);
if (!p_cache)
return -ENOMEM;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
memcpy(p_cache, buf, size);
p_src = p_cache;
count = size / sizeof(u32);
if (skb_is_gso(skb)) {
int err;
+ __be16 l3_proto = vlan_get_protocol(skb);
err = skb_cow_head(skb, 0);
if (err < 0)
<< OB_MAC_TRANSPORT_HDR_SHIFT);
mac_iocb_ptr->mss = cpu_to_le16(skb_shinfo(skb)->gso_size);
mac_iocb_ptr->flags2 |= OB_MAC_TSO_IOCB_LSO;
- if (likely(skb->protocol == htons(ETH_P_IP))) {
+ if (likely(l3_proto == htons(ETH_P_IP))) {
struct iphdr *iph = ip_hdr(skb);
iph->check = 0;
mac_iocb_ptr->flags1 |= OB_MAC_TSO_IOCB_IP4;
iph->daddr, 0,
IPPROTO_TCP,
0);
- } else if (skb->protocol == htons(ETH_P_IPV6)) {
+ } else if (l3_proto == htons(ETH_P_IPV6)) {
mac_iocb_ptr->flags1 |= OB_MAC_TSO_IOCB_IP6;
tcp_hdr(skb)->check =
~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
config SH_ETH
tristate "Renesas SuperH Ethernet support"
depends on HAS_DMA
+ depends on ARCH_SHMOBILE || SUPERH || COMPILE_TEST
select CRC32
select MII
select MDIO_BITBANG
#include "stmmac.h"
-static unsigned int stmmac_jumbo_frm(void *p, struct sk_buff *skb, int csum)
+static int stmmac_jumbo_frm(void *p, struct sk_buff *skb, int csum)
{
struct stmmac_priv *priv = (struct stmmac_priv *)p;
unsigned int txsize = priv->dma_tx_size;
desc->des2 = dma_map_single(priv->device, skb->data,
bmax, DMA_TO_DEVICE);
- priv->tx_skbuff_dma[entry] = desc->des2;
+ if (dma_mapping_error(priv->device, desc->des2))
+ return -1;
+ priv->tx_skbuff_dma[entry].buf = desc->des2;
priv->hw->desc->prepare_tx_desc(desc, 1, bmax, csum, STMMAC_CHAIN_MODE);
while (len != 0) {
desc->des2 = dma_map_single(priv->device,
(skb->data + bmax * i),
bmax, DMA_TO_DEVICE);
- priv->tx_skbuff_dma[entry] = desc->des2;
+ if (dma_mapping_error(priv->device, desc->des2))
+ return -1;
+ priv->tx_skbuff_dma[entry].buf = desc->des2;
priv->hw->desc->prepare_tx_desc(desc, 0, bmax, csum,
STMMAC_CHAIN_MODE);
priv->hw->desc->set_tx_owner(desc);
desc->des2 = dma_map_single(priv->device,
(skb->data + bmax * i), len,
DMA_TO_DEVICE);
- priv->tx_skbuff_dma[entry] = desc->des2;
+ if (dma_mapping_error(priv->device, desc->des2))
+ return -1;
+ priv->tx_skbuff_dma[entry].buf = desc->des2;
priv->hw->desc->prepare_tx_desc(desc, 0, len, csum,
STMMAC_CHAIN_MODE);
priv->hw->desc->set_tx_owner(desc);
handle_tx = 0x8,
};
-#define CORE_IRQ_TX_PATH_IN_LPI_MODE (1 << 1)
-#define CORE_IRQ_TX_PATH_EXIT_LPI_MODE (1 << 2)
-#define CORE_IRQ_RX_PATH_IN_LPI_MODE (1 << 3)
-#define CORE_IRQ_RX_PATH_EXIT_LPI_MODE (1 << 4)
+#define CORE_IRQ_TX_PATH_IN_LPI_MODE (1 << 0)
+#define CORE_IRQ_TX_PATH_EXIT_LPI_MODE (1 << 1)
+#define CORE_IRQ_RX_PATH_IN_LPI_MODE (1 << 2)
+#define CORE_IRQ_RX_PATH_EXIT_LPI_MODE (1 << 3)
#define CORE_PCS_ANE_COMPLETE (1 << 5)
#define CORE_PCS_LINK_STATUS (1 << 6)
/* Default LPI timers */
#define STMMAC_DEFAULT_LIT_LS 0x3E8
-#define STMMAC_DEFAULT_TWT_LS 0x0
+#define STMMAC_DEFAULT_TWT_LS 0x1E
#define STMMAC_CHAIN_MODE 0x1
#define STMMAC_RING_MODE 0x2
void (*init) (void *des, dma_addr_t phy_addr, unsigned int size,
unsigned int extend_desc);
unsigned int (*is_jumbo_frm) (int len, int ehn_desc);
- unsigned int (*jumbo_frm) (void *priv, struct sk_buff *skb, int csum);
+ int (*jumbo_frm)(void *priv, struct sk_buff *skb, int csum);
int (*set_16kib_bfsize)(int mtu);
void (*init_desc3)(struct dma_desc *p);
void (*refill_desc3) (void *priv, struct dma_desc *p);
int multicast_filter_bins;
int unicast_filter_entries;
int mcast_bits_log2;
+ unsigned int rx_csum;
};
struct mac_device_info *dwmac1000_setup(void __iomem *ioaddr, int mcbins,
#define GMAC_CONTROL_RE 0x00000004 /* Receiver Enable */
#define GMAC_CORE_INIT (GMAC_CONTROL_JD | GMAC_CONTROL_PS | GMAC_CONTROL_ACS | \
- GMAC_CONTROL_BE)
+ GMAC_CONTROL_BE | GMAC_CONTROL_DCRS)
/* GMAC Frame Filter defines */
#define GMAC_FRAME_FILTER_PR 0x00000001 /* Promiscuous Mode */
void __iomem *ioaddr = hw->pcsr;
u32 value = readl(ioaddr + GMAC_CONTROL);
- value |= GMAC_CONTROL_IPC;
+ if (hw->rx_csum)
+ value |= GMAC_CONTROL_IPC;
+ else
+ value &= ~GMAC_CONTROL_IPC;
+
writel(value, ioaddr + GMAC_CONTROL);
value = readl(ioaddr + GMAC_CONTROL);
unsigned int mmc_rx_octetcount_g;
unsigned int mmc_rx_broadcastframe_g;
unsigned int mmc_rx_multicastframe_g;
- unsigned int mmc_rx_crc_errror;
+ unsigned int mmc_rx_crc_error;
unsigned int mmc_rx_align_error;
unsigned int mmc_rx_run_error;
unsigned int mmc_rx_jabber_error;
mmc->mmc_rx_octetcount_g += readl(ioaddr + MMC_RX_OCTETCOUNT_G);
mmc->mmc_rx_broadcastframe_g += readl(ioaddr + MMC_RX_BROADCASTFRAME_G);
mmc->mmc_rx_multicastframe_g += readl(ioaddr + MMC_RX_MULTICASTFRAME_G);
- mmc->mmc_rx_crc_errror += readl(ioaddr + MMC_RX_CRC_ERRROR);
+ mmc->mmc_rx_crc_error += readl(ioaddr + MMC_RX_CRC_ERRROR);
mmc->mmc_rx_align_error += readl(ioaddr + MMC_RX_ALIGN_ERROR);
mmc->mmc_rx_run_error += readl(ioaddr + MMC_RX_RUN_ERROR);
mmc->mmc_rx_jabber_error += readl(ioaddr + MMC_RX_JABBER_ERROR);
#include "stmmac.h"
-static unsigned int stmmac_jumbo_frm(void *p, struct sk_buff *skb, int csum)
+static int stmmac_jumbo_frm(void *p, struct sk_buff *skb, int csum)
{
struct stmmac_priv *priv = (struct stmmac_priv *)p;
unsigned int txsize = priv->dma_tx_size;
desc->des2 = dma_map_single(priv->device, skb->data,
bmax, DMA_TO_DEVICE);
- priv->tx_skbuff_dma[entry] = desc->des2;
+ if (dma_mapping_error(priv->device, desc->des2))
+ return -1;
+
+ priv->tx_skbuff_dma[entry].buf = desc->des2;
desc->des3 = desc->des2 + BUF_SIZE_4KiB;
priv->hw->desc->prepare_tx_desc(desc, 1, bmax, csum,
STMMAC_RING_MODE);
desc->des2 = dma_map_single(priv->device, skb->data + bmax,
len, DMA_TO_DEVICE);
- priv->tx_skbuff_dma[entry] = desc->des2;
+ if (dma_mapping_error(priv->device, desc->des2))
+ return -1;
+ priv->tx_skbuff_dma[entry].buf = desc->des2;
desc->des3 = desc->des2 + BUF_SIZE_4KiB;
priv->hw->desc->prepare_tx_desc(desc, 0, len, csum,
STMMAC_RING_MODE);
} else {
desc->des2 = dma_map_single(priv->device, skb->data,
nopaged_len, DMA_TO_DEVICE);
- priv->tx_skbuff_dma[entry] = desc->des2;
+ if (dma_mapping_error(priv->device, desc->des2))
+ return -1;
+ priv->tx_skbuff_dma[entry].buf = desc->des2;
desc->des3 = desc->des2 + BUF_SIZE_4KiB;
priv->hw->desc->prepare_tx_desc(desc, 1, nopaged_len, csum,
STMMAC_RING_MODE);
#include <linux/ptp_clock_kernel.h>
#include <linux/reset.h>
+struct stmmac_tx_info {
+ dma_addr_t buf;
+ bool map_as_page;
+};
+
struct stmmac_priv {
/* Frequently used values are kept adjacent for cache effect */
struct dma_extended_desc *dma_etx ____cacheline_aligned_in_smp;
u32 tx_count_frames;
u32 tx_coal_frames;
u32 tx_coal_timer;
- dma_addr_t *tx_skbuff_dma;
+ struct stmmac_tx_info *tx_skbuff_dma;
dma_addr_t dma_tx_phy;
int tx_coalesce;
int hwts_tx_en;
struct ptp_clock *ptp_clock;
struct ptp_clock_info ptp_clock_ops;
unsigned int default_addend;
+ struct clk *clk_ptp_ref;
+ unsigned int clk_ptp_rate;
u32 adv_ts;
int use_riwt;
int irq_wake;
STMMAC_MMC_STAT(mmc_rx_octetcount_g),
STMMAC_MMC_STAT(mmc_rx_broadcastframe_g),
STMMAC_MMC_STAT(mmc_rx_multicastframe_g),
- STMMAC_MMC_STAT(mmc_rx_crc_errror),
+ STMMAC_MMC_STAT(mmc_rx_crc_error),
STMMAC_MMC_STAT(mmc_rx_align_error),
STMMAC_MMC_STAT(mmc_rx_run_error),
STMMAC_MMC_STAT(mmc_rx_jabber_error),
*/
bool stmmac_eee_init(struct stmmac_priv *priv)
{
+ char *phy_bus_name = priv->plat->phy_bus_name;
bool ret = false;
	/* Using PCS we cannot deal with the phy registers at this stage
(priv->pcs == STMMAC_PCS_RTBI))
goto out;
+	/* Never init EEE if a switch is attached */
+ if (phy_bus_name && (!strcmp(phy_bus_name, "fixed")))
+ goto out;
+
/* MAC core supports the EEE feature. */
if (priv->dma_cap.eee) {
int tx_lpi_timer = priv->tx_lpi_timer;
priv->hw->mac->set_eee_timer(priv->hw,
STMMAC_DEFAULT_LIT_LS,
tx_lpi_timer);
- } else
- /* Set HW EEE according to the speed */
- priv->hw->mac->set_eee_pls(priv->hw,
- priv->phydev->link);
+ }
+ /* Set HW EEE according to the speed */
+ priv->hw->mac->set_eee_pls(priv->hw, priv->phydev->link);
pr_debug("stmmac: Energy-Efficient Ethernet initialized\n");
/* calculate default added value:
* formula is :
* addend = (2^32)/freq_div_ratio;
- * where, freq_div_ratio = STMMAC_SYSCLOCK/50MHz
- * hence, addend = ((2^32) * 50MHz)/STMMAC_SYSCLOCK;
- * NOTE: STMMAC_SYSCLOCK should be >= 50MHz to
+ * where, freq_div_ratio = clk_ptp_ref_i/50MHz
+ * hence, addend = ((2^32) * 50MHz)/clk_ptp_ref_i;
+ * NOTE: clk_ptp_ref_i should be >= 50MHz to
	 * achieve 20 ns accuracy.
*
* 2^x * y == (y << x), hence
* 2^32 * 50000000 ==> (50000000 << 32)
*/
temp = (u64) (50000000ULL << 32);
- priv->default_addend = div_u64(temp, STMMAC_SYSCLOCK);
+ priv->default_addend = div_u64(temp, priv->clk_ptp_rate);
priv->hw->ptp->config_addend(priv->ioaddr,
priv->default_addend);
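
As a worked example of the formula above (an illustration, not part of the patch): with a 125 MHz clk_ptp_ref_i, addend = (2^32 * 50 MHz) / 125 MHz = 0x66666666, so the 32-bit accumulator overflows at the 50 MHz rate the timestamping unit expects. A minimal sketch of the arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t temp = 50000000ULL << 32;      /* 2^32 * 50 MHz */
            uint32_t clk_ptp_rate = 125000000;      /* assumed example rate */

            printf("addend = 0x%08x\n", (unsigned)(temp / clk_ptp_rate));
            return 0;
    }
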
if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp))
return -EOPNOTSUPP;
+	/* Fall back to the main clock if no PTP reference clock is provided */
+ priv->clk_ptp_ref = devm_clk_get(priv->device, "clk_ptp_ref");
+ if (IS_ERR(priv->clk_ptp_ref)) {
+ priv->clk_ptp_rate = clk_get_rate(priv->stmmac_clk);
+ priv->clk_ptp_ref = NULL;
+ } else {
+ clk_prepare_enable(priv->clk_ptp_ref);
+ priv->clk_ptp_rate = clk_get_rate(priv->clk_ptp_ref);
+ }
+
priv->adv_ts = 0;
if (priv->dma_cap.atime_stamp && priv->extend_desc)
priv->adv_ts = 1;
static void stmmac_release_ptp(struct stmmac_priv *priv)
{
+ if (priv->clk_ptp_ref)
+ clk_disable_unprepare(priv->clk_ptp_ref);
stmmac_ptp_unregister(priv);
}
else
p = priv->dma_tx + i;
p->des2 = 0;
- priv->tx_skbuff_dma[i] = 0;
+ priv->tx_skbuff_dma[i].buf = 0;
+ priv->tx_skbuff_dma[i].map_as_page = false;
priv->tx_skbuff[i] = NULL;
}
else
p = priv->dma_tx + i;
- if (priv->tx_skbuff_dma[i]) {
- dma_unmap_single(priv->device,
- priv->tx_skbuff_dma[i],
- priv->hw->desc->get_tx_len(p),
- DMA_TO_DEVICE);
- priv->tx_skbuff_dma[i] = 0;
+ if (priv->tx_skbuff_dma[i].buf) {
+ if (priv->tx_skbuff_dma[i].map_as_page)
+ dma_unmap_page(priv->device,
+ priv->tx_skbuff_dma[i].buf,
+ priv->hw->desc->get_tx_len(p),
+ DMA_TO_DEVICE);
+ else
+ dma_unmap_single(priv->device,
+ priv->tx_skbuff_dma[i].buf,
+ priv->hw->desc->get_tx_len(p),
+ DMA_TO_DEVICE);
}
if (priv->tx_skbuff[i] != NULL) {
dev_kfree_skb_any(priv->tx_skbuff[i]);
priv->tx_skbuff[i] = NULL;
+ priv->tx_skbuff_dma[i].buf = 0;
+ priv->tx_skbuff_dma[i].map_as_page = false;
}
}
}
if (!priv->rx_skbuff)
goto err_rx_skbuff;
- priv->tx_skbuff_dma = kmalloc_array(txsize, sizeof(dma_addr_t),
+ priv->tx_skbuff_dma = kmalloc_array(txsize,
+ sizeof(*priv->tx_skbuff_dma),
GFP_KERNEL);
if (!priv->tx_skbuff_dma)
goto err_tx_skbuff_dma;
pr_debug("%s: curr %d, dirty %d\n", __func__,
priv->cur_tx, priv->dirty_tx);
- if (likely(priv->tx_skbuff_dma[entry])) {
- dma_unmap_single(priv->device,
- priv->tx_skbuff_dma[entry],
- priv->hw->desc->get_tx_len(p),
- DMA_TO_DEVICE);
- priv->tx_skbuff_dma[entry] = 0;
+ if (likely(priv->tx_skbuff_dma[entry].buf)) {
+ if (priv->tx_skbuff_dma[entry].map_as_page)
+ dma_unmap_page(priv->device,
+ priv->tx_skbuff_dma[entry].buf,
+ priv->hw->desc->get_tx_len(p),
+ DMA_TO_DEVICE);
+ else
+ dma_unmap_single(priv->device,
+ priv->tx_skbuff_dma[entry].buf,
+ priv->hw->desc->get_tx_len(p),
+ DMA_TO_DEVICE);
+ priv->tx_skbuff_dma[entry].buf = 0;
+ priv->tx_skbuff_dma[entry].map_as_page = false;
}
priv->hw->mode->clean_desc3(priv, p);
/* Initialize the MAC Core */
priv->hw->mac->core_init(priv->hw, dev->mtu);
+ ret = priv->hw->mac->rx_ipc(priv->hw);
+ if (!ret) {
+ pr_warn(" RX IPC Checksum Offload disabled\n");
+ priv->plat->rx_coe = STMMAC_RX_COE_NONE;
+ priv->hw->rx_csum = 0;
+ }
+
/* Enable the MAC Rx/Tx */
stmmac_set_mac(priv->ioaddr, true);
if (likely(!is_jumbo)) {
desc->des2 = dma_map_single(priv->device, skb->data,
nopaged_len, DMA_TO_DEVICE);
- priv->tx_skbuff_dma[entry] = desc->des2;
+ if (dma_mapping_error(priv->device, desc->des2))
+ goto dma_map_err;
+ priv->tx_skbuff_dma[entry].buf = desc->des2;
priv->hw->desc->prepare_tx_desc(desc, 1, nopaged_len,
csum_insertion, priv->mode);
} else {
desc = first;
entry = priv->hw->mode->jumbo_frm(priv, skb, csum_insertion);
+ if (unlikely(entry < 0))
+ goto dma_map_err;
}
for (i = 0; i < nfrags; i++) {
desc->des2 = skb_frag_dma_map(priv->device, frag, 0, len,
DMA_TO_DEVICE);
- priv->tx_skbuff_dma[entry] = desc->des2;
+ if (dma_mapping_error(priv->device, desc->des2))
+ goto dma_map_err; /* should reuse desc w/o issues */
+
+ priv->tx_skbuff_dma[entry].buf = desc->des2;
+ priv->tx_skbuff_dma[entry].map_as_page = true;
priv->hw->desc->prepare_tx_desc(desc, 0, len, csum_insertion,
priv->mode);
wmb();
priv->hw->dma->enable_dma_transmission(priv->ioaddr);
spin_unlock(&priv->tx_lock);
+ return NETDEV_TX_OK;
+dma_map_err:
+ dev_err(priv->device, "Tx dma map failed\n");
+ dev_kfree_skb(skb);
+ priv->dev->stats.tx_dropped++;
return NETDEV_TX_OK;
}
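
The common thread in these stmmac hunks is the streaming-DMA rule that every dma_map_single()/skb_frag_dma_map() result must be checked with dma_mapping_error() before the descriptor is armed, and a failed map must unwind rather than hand a bogus address to the hardware. A minimal sketch of the pattern (map_tx_buf is a hypothetical helper, not from the patch):

    #include <linux/dma-mapping.h>

    /* Illustrative sketch: check every streaming mapping before use. */
    static int map_tx_buf(struct device *dev, void *buf, size_t len,
                          dma_addr_t *out)
    {
            *out = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, *out))
                    return -ENOMEM; /* caller drops the skb and counts it */
            return 0;
    }
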
priv->rx_skbuff_dma[entry] =
dma_map_single(priv->device, skb->data, bfsize,
DMA_FROM_DEVICE);
-
+ if (dma_mapping_error(priv->device,
+ priv->rx_skbuff_dma[entry])) {
+ dev_err(priv->device, "Rx dma map failed\n");
+ dev_kfree_skb(skb);
+ break;
+ }
p->des2 = priv->rx_skbuff_dma[entry];
priv->hw->mode->refill_desc3(priv, p);
unsigned int entry = priv->cur_rx % rxsize;
unsigned int next_entry;
unsigned int count = 0;
- int coe = priv->plat->rx_coe;
+ int coe = priv->hw->rx_csum;
if (netif_msg_rx_status(priv)) {
pr_debug("%s: descriptor ring:\n", __func__);
if (priv->plat->rx_coe == STMMAC_RX_COE_NONE)
features &= ~NETIF_F_RXCSUM;
- else if (priv->plat->rx_coe == STMMAC_RX_COE_TYPE1)
- features &= ~NETIF_F_IPV6_CSUM;
+
if (!priv->plat->tx_coe)
features &= ~NETIF_F_ALL_CSUM;
return features;
}
+static int stmmac_set_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct stmmac_priv *priv = netdev_priv(netdev);
+
+	/* Keep the COE type if RX checksum offload is requested */
+ if (features & NETIF_F_RXCSUM)
+ priv->hw->rx_csum = priv->plat->rx_coe;
+ else
+ priv->hw->rx_csum = 0;
+	/* No return-value check needed: rx_coe was validated earlier and
+	 * rx_ipc() corrects it if there is a problem.
+	 */
+ priv->hw->mac->rx_ipc(priv->hw);
+
+ return 0;
+}
+
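
With the ndo_set_features hook wired up below, RX checksum offload can now be toggled at runtime from userspace; each toggle lands in stmmac_set_features(), which updates hw->rx_csum and reprograms the MAC through rx_ipc(). For example (assuming the interface is named eth0):

    # ethtool -K eth0 rx off
    # ethtool -K eth0 rx on
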
/**
* stmmac_interrupt - main ISR
* @irq: interrupt number.
.ndo_stop = stmmac_release,
.ndo_change_mtu = stmmac_change_mtu,
.ndo_fix_features = stmmac_fix_features,
+ .ndo_set_features = stmmac_set_features,
.ndo_set_rx_mode = stmmac_set_rx_mode,
.ndo_tx_timeout = stmmac_tx_timeout,
.ndo_do_ioctl = stmmac_ioctl,
*/
static int stmmac_hw_init(struct stmmac_priv *priv)
{
- int ret;
struct mac_device_info *mac;
/* Identify the MAC HW device */
/* To use alternate (extended) or normal descriptor structures */
stmmac_selec_desc_mode(priv);
- ret = priv->hw->mac->rx_ipc(priv->hw);
- if (!ret) {
- pr_warn(" RX IPC Checksum Offload not configured.\n");
- priv->plat->rx_coe = STMMAC_RX_COE_NONE;
- }
-
- if (priv->plat->rx_coe)
+ if (priv->plat->rx_coe) {
+ priv->hw->rx_csum = priv->plat->rx_coe;
pr_info(" RX Checksum Offload Engine supported (type %d)\n",
priv->plat->rx_coe);
+ }
if (priv->plat->tx_coe)
pr_info(" TX Checksum insertion supported\n");
{
if (priv->ptp_clock) {
ptp_clock_unregister(priv->ptp_clock);
+ priv->ptp_clock = NULL;
pr_debug("Removed PTP HW clock successfully on %s\n",
priv->dev->name);
}
#ifndef __STMMAC_PTP_H__
#define __STMMAC_PTP_H__
-#define STMMAC_SYSCLOCK 62500000
-
/* IEEE 1588 PTP register offsets */
#define PTP_TCR 0x0700 /* Timestamp Control Reg */
#define PTP_SSIR 0x0704 /* Sub-Second Increment Reg */
#define PCI_MEM64BIT (2<<1) /* Base addr anywhere in 64 Bit range */
#define PCI_MEMSPACE 0x00000001L /* Bit 0: Memory Space Indic. */
-/* PCI_BASE_2ND 32 bit 2nd Base address */
-#define PCI_IOBASE 0xffffff00L /* Bit 31..8: I/O Base address */
-#define PCI_IOSIZE 0x000000fcL /* Bit 7..2: I/O Size Requirements */
-#define PCI_IOSPACE 0x00000001L /* Bit 0: I/O Space Indicator */
-
/* PCI_SUB_VID 16 bit Subsystem Vendor ID */
/* PCI_SUB_ID 16 bit Subsystem ID */
struct macvlan_dev *vlan = netdev_priv(dev);
int err = -EINVAL;
- if (!vlan->port->passthru)
+ /* Support unicast filter only on passthru devices.
+ * Multicast filter should be allowed on all devices.
+ */
+ if (!vlan->port->passthru && is_unicast_ether_addr(addr))
return -EOPNOTSUPP;
if (flags & NLM_F_REPLACE)
struct macvlan_dev *vlan = netdev_priv(dev);
int err = -EINVAL;
- if (!vlan->port->passthru)
+ /* Support unicast filter only on passthru devices.
+ * Multicast filter should be allowed on all devices.
+ */
+ if (!vlan->port->passthru && is_unicast_ether_addr(addr))
return -EOPNOTSUPP;
if (is_unicast_ether_addr(addr))
return bcm7xxx_28nm_afe_config_init(phydev);
}
+static int bcm7xxx_28nm_resume(struct phy_device *phydev)
+{
+ int ret;
+
+	/* Re-apply workarounds when coming out of suspend/resume */
+ ret = bcm7xxx_28nm_config_init(phydev);
+ if (ret)
+ return ret;
+
+	/* 28nm Gigabit PHYs come out of reset without any half-duplex
+	 * or "hub" compliant advertised mode; fix that here. This does not
+ * cause any problems with the PHY library since genphy_config_aneg()
+ * gracefully handles auto-negotiated and forced modes.
+ */
+ return genphy_config_aneg(phydev);
+}
+
static int phy_set_clr_bits(struct phy_device *dev, int location,
int set_mask, int clr_mask)
{
}
/* Workaround for putting the PHY in IDDQ mode, required
- * for all BCM7XXX PHYs
+ * for all BCM7XXX 40nm and 65nm PHYs
*/
static int bcm7xxx_suspend(struct phy_device *phydev)
{
.config_init = bcm7xxx_28nm_afe_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
- .suspend = bcm7xxx_suspend,
- .resume = bcm7xxx_28nm_afe_config_init,
+ .resume = bcm7xxx_28nm_resume,
.driver = { .owner = THIS_MODULE },
}, {
.phy_id = PHY_ID_BCM7439,
.config_init = bcm7xxx_28nm_afe_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
- .suspend = bcm7xxx_suspend,
- .resume = bcm7xxx_28nm_afe_config_init,
+ .resume = bcm7xxx_28nm_resume,
.driver = { .owner = THIS_MODULE },
}, {
.phy_id = PHY_ID_BCM7445,
.config_init = bcm7xxx_28nm_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
- .suspend = bcm7xxx_suspend,
- .resume = bcm7xxx_28nm_config_init,
- .driver = { .owner = THIS_MODULE },
-}, {
- .name = "Broadcom BCM7XXX 28nm",
- .phy_id = PHY_ID_BCM7XXX_28,
- .phy_id_mask = PHY_BCM_OUI_MASK,
- .features = PHY_GBIT_FEATURES |
- SUPPORTED_Pause | SUPPORTED_Asym_Pause,
- .flags = PHY_IS_INTERNAL,
- .config_init = bcm7xxx_28nm_config_init,
- .config_aneg = genphy_config_aneg,
- .read_status = genphy_read_status,
- .suspend = bcm7xxx_suspend,
- .resume = bcm7xxx_28nm_config_init,
+ .resume = bcm7xxx_28nm_afe_config_init,
.driver = { .owner = THIS_MODULE },
}, {
.phy_id = PHY_BCM_OUI_4,
{ PHY_ID_BCM7366, 0xfffffff0, },
{ PHY_ID_BCM7439, 0xfffffff0, },
{ PHY_ID_BCM7445, 0xfffffff0, },
- { PHY_ID_BCM7XXX_28, 0xfffffc00 },
{ PHY_BCM_OUI_4, 0xffff0000 },
{ PHY_BCM_OUI_5, 0xffffff00 },
{ }
/* First check if the EEE ability is supported */
eee_cap = phy_read_mmd_indirect(phydev, MDIO_PCS_EEE_ABLE,
MDIO_MMD_PCS, phydev->addr);
- if (eee_cap < 0)
- return eee_cap;
+ if (eee_cap <= 0)
+ goto eee_exit_err;
cap = mmd_eee_cap_to_ethtool_sup_t(eee_cap);
if (!cap)
- return -EPROTONOSUPPORT;
+ goto eee_exit_err;
/* Check which link settings negotiated and verify it in
* the EEE advertising registers.
*/
eee_lp = phy_read_mmd_indirect(phydev, MDIO_AN_EEE_LPABLE,
MDIO_MMD_AN, phydev->addr);
- if (eee_lp < 0)
- return eee_lp;
+ if (eee_lp <= 0)
+ goto eee_exit_err;
eee_adv = phy_read_mmd_indirect(phydev, MDIO_AN_EEE_ADV,
MDIO_MMD_AN, phydev->addr);
- if (eee_adv < 0)
- return eee_adv;
+ if (eee_adv <= 0)
+ goto eee_exit_err;
adv = mmd_eee_adv_to_ethtool_adv_t(eee_adv);
lp = mmd_eee_adv_to_ethtool_adv_t(eee_lp);
idx = phy_find_setting(phydev->speed, phydev->duplex);
if (!(lp & adv & settings[idx].setting))
- return -EPROTONOSUPPORT;
+ goto eee_exit_err;
if (clk_stop_enable) {
/* Configure the PHY to stop receiving xMII
return 0; /* EEE supported */
}
-
+eee_exit_err:
return -EPROTONOSUPPORT;
}
EXPORT_SYMBOL(phy_init_eee);
}
static int smsc_phy_config_init(struct phy_device *phydev)
+{
+ int rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
+
+ if (rc < 0)
+ return rc;
+
+	/* Enable energy detect mode for these SMSC transceivers */
+ rc = phy_write(phydev, MII_LAN83C185_CTRL_STATUS,
+ rc | MII_LAN83C185_EDPWRDOWN);
+ if (rc < 0)
+ return rc;
+
+ return smsc_phy_ack_interrupt(phydev);
+}
+
+static int smsc_phy_reset(struct phy_device *phydev)
{
int rc = phy_read(phydev, MII_LAN83C185_SPECIAL_MODES);
if (rc < 0)
rc = phy_read(phydev, MII_BMCR);
} while (rc & BMCR_RESET);
}
-
- rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
- if (rc < 0)
- return rc;
-
- /* Enable energy detect mode for this SMSC Transceivers */
- rc = phy_write(phydev, MII_LAN83C185_CTRL_STATUS,
- rc | MII_LAN83C185_EDPWRDOWN);
- if (rc < 0)
- return rc;
-
- return smsc_phy_ack_interrupt (phydev);
+ return 0;
}
static int lan911x_config_init(struct phy_device *phydev)
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.config_init = smsc_phy_config_init,
+ .soft_reset = smsc_phy_reset,
/* IRQ related */
.ack_interrupt = smsc_phy_ack_interrupt,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.config_init = smsc_phy_config_init,
+ .soft_reset = smsc_phy_reset,
/* IRQ related */
.ack_interrupt = smsc_phy_ack_interrupt,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.config_init = smsc_phy_config_init,
+ .soft_reset = smsc_phy_reset,
/* IRQ related */
.ack_interrupt = smsc_phy_ack_interrupt,
.config_aneg = genphy_config_aneg,
.read_status = lan87xx_read_status,
.config_init = smsc_phy_config_init,
+ .soft_reset = smsc_phy_reset,
/* IRQ related */
.ack_interrupt = smsc_phy_ack_interrupt,
if (!netdev_mc_empty(netdev)) {
new_table = vmxnet3_copy_mc(netdev);
if (new_table) {
- new_mode |= VMXNET3_RXM_MCAST;
rxConf->mfTableLen = cpu_to_le16(
netdev_mc_count(netdev) * ETH_ALEN);
new_table_pa = dma_map_single(
new_table,
rxConf->mfTableLen,
PCI_DMA_TODEVICE);
+ }
+
+ if (new_table_pa) {
+ new_mode |= VMXNET3_RXM_MCAST;
rxConf->mfTablePA = cpu_to_le64(new_table_pa);
} else {
- netdev_info(netdev, "failed to copy mcast list"
- ", setting ALL_MULTI\n");
+ netdev_info(netdev,
+ "failed to copy mcast list, setting ALL_MULTI\n");
new_mode |= VMXNET3_RXM_ALL_MULTI;
}
}
-
if (!(new_mode & VMXNET3_RXM_MCAST)) {
rxConf->mfTableLen = 0;
rxConf->mfTablePA = 0;
VMXNET3_CMD_UPDATE_MAC_FILTERS);
spin_unlock_irqrestore(&adapter->cmd_lock, flags);
- if (new_table) {
+ if (new_table_pa)
dma_unmap_single(&adapter->pdev->dev, new_table_pa,
rxConf->mfTableLen, PCI_DMA_TODEVICE);
- kfree(new_table);
- }
+ kfree(new_table);
}
void
/*
* Version numbers
*/
-#define VMXNET3_DRIVER_VERSION_STRING "1.2.0.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING "1.2.1.0-k"
/* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */
-#define VMXNET3_DRIVER_VERSION_NUM 0x01020000
+#define VMXNET3_DRIVER_VERSION_NUM 0x01020100
#if defined(CONFIG_PCI_MSI)
/* RSS only makes sense if MSI-X is supported. */
} else if (vxlan->flags & VXLAN_F_L3MISS) {
union vxlan_addr ipa = {
.sin.sin_addr.s_addr = tip,
- .sa.sa_family = AF_INET,
+ .sin.sin_family = AF_INET,
};
vxlan_ip_miss(dev, &ipa);
} else if (vxlan->flags & VXLAN_F_L3MISS) {
union vxlan_addr ipa = {
.sin6.sin6_addr = msg->target,
- .sa.sa_family = AF_INET6,
+ .sin6.sin6_family = AF_INET6,
};
vxlan_ip_miss(dev, &ipa);
if (!n && (vxlan->flags & VXLAN_F_L3MISS)) {
union vxlan_addr ipa = {
.sin.sin_addr.s_addr = pip->daddr,
- .sa.sa_family = AF_INET,
+ .sin.sin_family = AF_INET,
};
vxlan_ip_miss(dev, &ipa);
if (!n && (vxlan->flags & VXLAN_F_L3MISS)) {
union vxlan_addr ipa = {
.sin6.sin6_addr = pip6->daddr,
- .sa.sa_family = AF_INET6,
+ .sin6.sin6_family = AF_INET6,
};
vxlan_ip_miss(dev, &ipa);
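
The vxlan fixes above are about C initializer semantics rather than networking: when a union initializer names fields of two different members, the union takes the value of the last member initialized, so GCC drops the earlier .sin_addr/.sin6_addr assignment when .sa.sa_family is initialized afterwards. Keeping both designators inside the same member avoids that. A minimal standalone illustration (hypothetical types, not kernel code):

    #include <stdio.h>

    union u {
            struct { short family; int ip; } v4;
            struct { short family; long ip6[2]; } v6;
    };

    int main(void)
    {
            /* Designators naming two different union members: per C11 the
             * union keeps the value of the last member initialized, and GCC
             * zeroes the earlier one, so risky.v4.ip typically reads back 0. */
            union u risky = { .v4.ip = 42, .v6.family = 2 };
            /* Designators confined to one member are fully defined. */
            union u safe = { .v4.ip = 42, .v4.family = 2 };

            printf("risky.v4.ip=%d safe.v4.ip=%d\n", risky.v4.ip, safe.v4.ip);
            return 0;
    }
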
kfree_skb(priv->rx_skb);
- usb_put_dev(priv->udev);
-
at76_dbg(DBG_PROC_ENTRY, "%s: before freeing priv/ieee80211_hw",
__func__);
ieee80211_free_hw(priv->hw);
wiphy_info(priv->hw->wiphy, "disconnecting\n");
at76_delete_device(priv);
+ usb_put_dev(priv->udev);
dev_info(&interface->dev, "disconnected\n");
}
if (strncmp("trigger", buf, 7) == 0) {
ath9k_spectral_scan_trigger(sc->hw);
- } else if (strncmp("background", buf, 9) == 0) {
+ } else if (strncmp("background", buf, 10) == 0) {
ath9k_spectral_scan_config(sc->hw, SPECTRAL_BACKGROUND);
ath_dbg(common, CONFIG, "spectral scan: background mode enabled\n");
} else if (strncmp("chanscan", buf, 8) == 0) {
config IWLDVM
tristate "Intel Wireless WiFi DVM Firmware support"
- depends on m
default IWLWIFI
help
This is the driver that supports the DVM firmware which is
config IWLMVM
tristate "Intel Wireless WiFi MVM Firmware support"
- depends on m
help
This is the driver that supports the MVM firmware which is
currently only available for 7260 and 3160 devices.
/* recalculate basic rates */
iwl_calc_basic_rates(priv, ctx);
+	/*
+	 * Force CTS-to-self frame protection if RTS-CTS is not the
+	 * preferred aggregation protection method.
+	 */
+ if (!priv->hw_params.use_rts_for_aggregation)
+ ctx->staging.flags |= RXON_FLG_SELF_CTS_EN;
+
if ((ctx->vif && ctx->vif->bss_conf.use_short_slot) ||
!(ctx->staging.flags & RXON_FLG_BAND_24G_MSK))
ctx->staging.flags |= RXON_FLG_SHORT_SLOT_MSK;
else
ctx->staging.flags &= ~RXON_FLG_TGG_PROTECT_MSK;
+ if (bss_conf->use_cts_prot)
+ ctx->staging.flags |= RXON_FLG_SELF_CTS_EN;
+ else
+ ctx->staging.flags &= ~RXON_FLG_SELF_CTS_EN;
+
memcpy(ctx->staging.bssid_addr, bss_conf->bssid, ETH_ALEN);
if (vif->type == NL80211_IFTYPE_AP ||
#include "iwl-agn-hw.h"
/* Highest firmware API version supported */
-#define IWL7260_UCODE_API_MAX 9
-#define IWL3160_UCODE_API_MAX 9
+#define IWL7260_UCODE_API_MAX 10
+#define IWL3160_UCODE_API_MAX 10
/* Oldest version we won't warn about */
#define IWL7260_UCODE_API_OK 9
#include "iwl-agn-hw.h"
/* Highest firmware API version supported */
-#define IWL8000_UCODE_API_MAX 9
+#define IWL8000_UCODE_API_MAX 10
/* Oldest version we won't warn about */
#define IWL8000_UCODE_API_OK 8
bool is_legacy = false;
- if ((mac->mode == WIRELESS_MODE_B) || (mac->mode == WIRELESS_MODE_B))
+ if ((mac->mode == WIRELESS_MODE_B) || (mac->mode == WIRELESS_MODE_G))
is_legacy = true;
return is_legacy;
{RTL_USB_DEVICE(0x0bda, 0x5088, rtl92cu_hal_cfg)}, /*Thinkware-CC&C*/
{RTL_USB_DEVICE(0x0df6, 0x0052, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
{RTL_USB_DEVICE(0x0df6, 0x005c, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
+ {RTL_USB_DEVICE(0x0df6, 0x0070, rtl92cu_hal_cfg)}, /*Sitecom - 150N */
{RTL_USB_DEVICE(0x0df6, 0x0077, rtl92cu_hal_cfg)}, /*Sitecom-WLA2100V2*/
{RTL_USB_DEVICE(0x0eb0, 0x9071, rtl92cu_hal_cfg)}, /*NO Brand - Etop*/
{RTL_USB_DEVICE(0x4856, 0x0091, rtl92cu_hal_cfg)}, /*NetweeN - Feixun*/
init_waitqueue_head(&queue->dealloc_wq);
atomic_set(&queue->inflight_packets, 0);
+ netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+ XENVIF_NAPI_WEIGHT);
+
if (tx_evtchn == rx_evtchn) {
/* feature-split-event-channels == 0 */
err = bind_interdomain_evtchn_to_irqhandler(
wake_up_process(queue->task);
wake_up_process(queue->dealloc_task);
- netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
- XENVIF_NAPI_WEIGHT);
-
return 0;
err_rx_unbind:
WARN_ON(nt->mw[mw_num].virt_addr == NULL);
- if (nt->max_qps % mw_max && mw_num < nt->max_qps % mw_max)
+ if (nt->max_qps % mw_max && mw_num + 1 < nt->max_qps / mw_max)
num_qps_mw = nt->max_qps / mw_max + 1;
else
num_qps_mw = nt->max_qps / mw_max;
return -ENOMEM;
}
+	/*
+	 * The allocated memory address must be aligned to the BAR size in
+	 * order for the XLAT register to take the value; this is a hardware
+	 * requirement. It is recommended to set up CMA for BAR sizes equal
+	 * to or greater than 4MB.
+	 */
+ if (!IS_ALIGNED(mw->dma_addr, mw->size)) {
+ dev_err(&pdev->dev, "DMA memory %pad not aligned to BAR size\n",
+ &mw->dma_addr);
+ ntb_free_mw(nt, num_mw);
+ return -ENOMEM;
+ }
+
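
IS_ALIGNED(addr, size) reduces to a mask test and therefore assumes a power-of-two size, which BAR sizes always are. A standalone sketch of the check (bar_aligned is a hypothetical name, not from the patch):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative sketch of the kernel's IS_ALIGNED() semantics for
     * power-of-two sizes. */
    static bool bar_aligned(uint64_t dma_addr, uint64_t bar_size)
    {
            return (dma_addr & (bar_size - 1)) == 0;
    }
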
/* Notify HW the memory location of the receive buffer */
ntb_set_mw_addr(nt->ndev, num_mw, mw->dma_addr);
qp->client_ready = NTB_LINK_DOWN;
qp->event_handler = NULL;
- if (nt->max_qps % mw_max && mw_num < nt->max_qps % mw_max)
+ if (nt->max_qps % mw_max && mw_num + 1 < nt->max_qps / mw_max)
num_qps_mw = nt->max_qps / mw_max + 1;
else
num_qps_mw = nt->max_qps / mw_max;
base = dt_mem_next_cell(dt_root_addr_cells, &prop);
size = dt_mem_next_cell(dt_root_size_cells, &prop);
- if (base && size &&
+ if (size &&
early_init_dt_reserve_memory_arch(base, size, nomap) == 0)
pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %ld MiB\n",
uname, &base, (unsigned long)size / SZ_1M);
/* Get the reg property (if any) */
addr = of_get_property(device, "reg", NULL);
+ /* Try the new-style interrupts-extended first */
+ res = of_parse_phandle_with_args(device, "interrupts-extended",
+ "#interrupt-cells", index, out_irq);
+ if (!res)
+ return of_irq_parse_raw(addr, out_irq);
+
/* Get the interrupts property */
intspec = of_get_property(device, "interrupts", &intlen);
- if (intspec == NULL) {
- /* Try the new-style interrupts-extended */
- res = of_parse_phandle_with_args(device, "interrupts-extended",
- "#interrupt-cells", index, out_irq);
- if (res)
- return -EINVAL;
- return of_irq_parse_raw(addr, out_irq);
- }
+ if (intspec == NULL)
+ return -EINVAL;
+
intlen /= sizeof(*intspec);
pr_debug(" intspec=%d intlen=%d\n", be32_to_cpup(intspec), intlen);
#define NO_OF_NODES 2
static struct device_node *nodes[NO_OF_NODES];
static int last_node_index;
+static bool selftest_live_tree;
#define selftest(result, fmt, ...) { \
if (!(result)) { \
{
struct device_node *next, *root = np, *dup;
- if (!np) {
- pr_warn("%s: No tree to attach; not running tests\n",
- __func__);
- return -ENODATA;
- }
-
-
/* skip root node */
np = np->child;
/* storing a copy in temporary node */
static int __init selftest_data_add(void)
{
void *selftest_data;
- struct device_node *selftest_data_node;
+ struct device_node *selftest_data_node, *np;
extern uint8_t __dtb_testcases_begin[];
extern uint8_t __dtb_testcases_end[];
const int size = __dtb_testcases_end - __dtb_testcases_begin;
- if (!size || !of_allnodes) {
+ if (!size) {
pr_warn("%s: No testcase data to attach; not running tests\n",
__func__);
return -ENODATA;
return -ENOMEM;
}
of_fdt_unflatten_tree(selftest_data, &selftest_data_node);
+ if (!selftest_data_node) {
+ pr_warn("%s: No tree to attach; not running tests\n", __func__);
+ return -ENODATA;
+ }
+
+ if (!of_allnodes) {
+ /* enabling flag for removing nodes */
+ selftest_live_tree = true;
+ of_allnodes = selftest_data_node;
+
+ for_each_of_allnodes(np)
+ __of_attach_node_sysfs(np);
+ of_aliases = of_find_node_by_path("/aliases");
+ of_chosen = of_find_node_by_path("/chosen");
+ return 0;
+ }
/* attach the sub-tree to live tree */
return attach_node_and_children(selftest_data_node);
struct device_node *np;
struct property *prop;
+ if (selftest_live_tree) {
+ of_node_put(of_aliases);
+ of_node_put(of_chosen);
+ of_aliases = NULL;
+ of_chosen = NULL;
+ for_each_child_of_node(of_allnodes, np)
+ detach_node_and_children(np);
+ __of_detach_node_sysfs(of_allnodes);
+ of_allnodes = NULL;
+ return;
+ }
+
while (last_node_index >= 0) {
if (nodes[last_node_index]) {
np = of_find_node_by_path(nodes[last_node_index]->full_name);
printk("%s version %s found at 0x%lx\n", name, version, hpa);
if (!request_mem_region(hpa, PAGE_SIZE, name)) {
- printk(KERN_ERR "DINO: Hey! Someone took my MMIO space (0x%ld)!\n",
+ printk(KERN_ERR "DINO: Hey! Someone took my MMIO space (0x%lx)!\n",
hpa);
return 1;
}
menu "PCI host controller drivers"
depends on PCI
+config PCI_DRA7XX
+ bool "TI DRA7xx PCIe controller"
+ select PCIE_DW
+ depends on OF && HAS_IOMEM && TI_PIPE3
+ help
+	  Enables support for the PCIe controller in the DRA7xx SoC. There
+	  are two instances of the PCIe controller in DRA7xx, each of which
+	  can act both as EP and RC. This driver reuses the DesignWare core.
+
config PCI_MVEBU
bool "Marvell EBU PCIe controller"
- depends on ARCH_MVEBU || ARCH_DOVE || ARCH_KIRKWOOD
+ depends on ARCH_MVEBU || ARCH_DOVE
depends on OF
config PCIE_DW
controller, such as the one emulated by kvmtool.
config PCIE_SPEAR13XX
- tristate "STMicroelectronics SPEAr PCIe controller"
+ bool "STMicroelectronics SPEAr PCIe controller"
depends on ARCH_SPEAR13XX
select PCIEPORTBUS
select PCIE_DW
obj-$(CONFIG_PCIE_DW) += pcie-designware.o
+obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o
--- /dev/null
+/*
+ * pcie-dra7xx - PCIe controller driver for TI DRA7xx SoCs
+ *
+ * Copyright (C) 2013-2014 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Authors: Kishon Vijay Abraham I <kishon@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/phy/phy.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/resource.h>
+#include <linux/types.h>
+
+#include "pcie-designware.h"
+
+/* PCIe controller wrapper DRA7XX configuration registers */
+
+#define PCIECTRL_DRA7XX_CONF_IRQSTATUS_MAIN 0x0024
+#define PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MAIN 0x0028
+#define ERR_SYS BIT(0)
+#define ERR_FATAL BIT(1)
+#define ERR_NONFATAL BIT(2)
+#define ERR_COR BIT(3)
+#define ERR_AXI BIT(4)
+#define ERR_ECRC BIT(5)
+#define PME_TURN_OFF BIT(8)
+#define PME_TO_ACK BIT(9)
+#define PM_PME BIT(10)
+#define LINK_REQ_RST BIT(11)
+#define LINK_UP_EVT BIT(12)
+#define CFG_BME_EVT BIT(13)
+#define CFG_MSE_EVT BIT(14)
+#define INTERRUPTS (ERR_SYS | ERR_FATAL | ERR_NONFATAL | ERR_COR | ERR_AXI | \
+ ERR_ECRC | PME_TURN_OFF | PME_TO_ACK | PM_PME | \
+ LINK_REQ_RST | LINK_UP_EVT | CFG_BME_EVT | CFG_MSE_EVT)
+
+#define PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI 0x0034
+#define PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MSI 0x0038
+#define INTA BIT(0)
+#define INTB BIT(1)
+#define INTC BIT(2)
+#define INTD BIT(3)
+#define MSI BIT(4)
+#define LEG_EP_INTERRUPTS (INTA | INTB | INTC | INTD)
+
+#define PCIECTRL_DRA7XX_CONF_DEVICE_CMD 0x0104
+#define LTSSM_EN 0x1
+
+#define PCIECTRL_DRA7XX_CONF_PHY_CS 0x010C
+#define LINK_UP BIT(16)
+
+struct dra7xx_pcie {
+ void __iomem *base;
+ struct phy **phy;
+ int phy_count;
+ struct device *dev;
+ struct pcie_port pp;
+};
+
+#define to_dra7xx_pcie(x) container_of((x), struct dra7xx_pcie, pp)
+
+static inline u32 dra7xx_pcie_readl(struct dra7xx_pcie *pcie, u32 offset)
+{
+ return readl(pcie->base + offset);
+}
+
+static inline void dra7xx_pcie_writel(struct dra7xx_pcie *pcie, u32 offset,
+ u32 value)
+{
+ writel(value, pcie->base + offset);
+}
+
+static int dra7xx_pcie_link_up(struct pcie_port *pp)
+{
+ struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pp);
+ u32 reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_PHY_CS);
+
+ return !!(reg & LINK_UP);
+}
+
+static int dra7xx_pcie_establish_link(struct pcie_port *pp)
+{
+ u32 reg;
+ unsigned int retries = 1000;
+ struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pp);
+
+ if (dw_pcie_link_up(pp)) {
+ dev_err(pp->dev, "link is already up\n");
+ return 0;
+ }
+
+ reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD);
+ reg |= LTSSM_EN;
+ dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD, reg);
+
+	while (retries--) {
+		reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_PHY_CS);
+		if (reg & LINK_UP)
+			return 0;
+		usleep_range(10, 20);
+	}
+
+	/* retries is unsigned and wraps to UINT_MAX on exhaustion, so a
+	 * post-loop "retries == 0" test can never catch the timeout;
+	 * falling out of the loop is the timeout.
+	 */
+	dev_err(pp->dev, "link is not up\n");
+	return -ETIMEDOUT;
+}
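
Later kernels grew a generic helper for exactly this register-polling pattern; a sketch of the equivalent using readl_poll_timeout() from linux/iopoll.h (an illustration only, the helper is not available to the kernel this patch targets; LINK_UP as defined above):

    #include <linux/iopoll.h>

    static int dra7xx_wait_link_up(void __iomem *phy_cs)
    {
            u32 reg;

            /* Poll every 10 us, give up after 20 ms. */
            return readl_poll_timeout(phy_cs, reg, reg & LINK_UP, 10, 20000);
    }
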
+
+static void dra7xx_pcie_enable_interrupts(struct pcie_port *pp)
+{
+ struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pp);
+
+ dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MAIN,
+ ~INTERRUPTS);
+ dra7xx_pcie_writel(dra7xx,
+ PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MAIN, INTERRUPTS);
+ dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI,
+ ~LEG_EP_INTERRUPTS & ~MSI);
+
+ if (IS_ENABLED(CONFIG_PCI_MSI))
+ dra7xx_pcie_writel(dra7xx,
+ PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MSI, MSI);
+ else
+ dra7xx_pcie_writel(dra7xx,
+ PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MSI,
+ LEG_EP_INTERRUPTS);
+}
+
+static void dra7xx_pcie_host_init(struct pcie_port *pp)
+{
+ dw_pcie_setup_rc(pp);
+ dra7xx_pcie_establish_link(pp);
+ if (IS_ENABLED(CONFIG_PCI_MSI))
+ dw_pcie_msi_init(pp);
+ dra7xx_pcie_enable_interrupts(pp);
+}
+
+static struct pcie_host_ops dra7xx_pcie_host_ops = {
+ .link_up = dra7xx_pcie_link_up,
+ .host_init = dra7xx_pcie_host_init,
+};
+
+static int dra7xx_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
+ irq_hw_number_t hwirq)
+{
+ irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq);
+ irq_set_chip_data(irq, domain->host_data);
+ set_irq_flags(irq, IRQF_VALID);
+
+ return 0;
+}
+
+static const struct irq_domain_ops intx_domain_ops = {
+ .map = dra7xx_pcie_intx_map,
+};
+
+static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)
+{
+ struct device *dev = pp->dev;
+ struct device_node *node = dev->of_node;
+ struct device_node *pcie_intc_node = of_get_next_child(node, NULL);
+
+	if (!pcie_intc_node) {
+		dev_err(dev, "No PCIe Intc node found\n");
+		return -ENODEV;	/* PTR_ERR(NULL) would wrongly return 0 */
+	}
+
+ pp->irq_domain = irq_domain_add_linear(pcie_intc_node, 4,
+ &intx_domain_ops, pp);
+	if (!pp->irq_domain) {
+		dev_err(dev, "Failed to get an INTx IRQ domain\n");
+		return -ENODEV;	/* PTR_ERR(NULL) would wrongly return 0 */
+	}
+
+ return 0;
+}
+
+static irqreturn_t dra7xx_pcie_msi_irq_handler(int irq, void *arg)
+{
+ struct pcie_port *pp = arg;
+ struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pp);
+ u32 reg;
+
+ reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI);
+
+ switch (reg) {
+ case MSI:
+ dw_handle_msi_irq(pp);
+ break;
+ case INTA:
+ case INTB:
+ case INTC:
+ case INTD:
+ generic_handle_irq(irq_find_mapping(pp->irq_domain, ffs(reg)));
+ break;
+ }
+
+ dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI, reg);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg)
+{
+ struct dra7xx_pcie *dra7xx = arg;
+ u32 reg;
+
+ reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MAIN);
+
+ if (reg & ERR_SYS)
+ dev_dbg(dra7xx->dev, "System Error\n");
+
+ if (reg & ERR_FATAL)
+ dev_dbg(dra7xx->dev, "Fatal Error\n");
+
+ if (reg & ERR_NONFATAL)
+ dev_dbg(dra7xx->dev, "Non Fatal Error\n");
+
+ if (reg & ERR_COR)
+ dev_dbg(dra7xx->dev, "Correctable Error\n");
+
+ if (reg & ERR_AXI)
+ dev_dbg(dra7xx->dev, "AXI tag lookup fatal Error\n");
+
+ if (reg & ERR_ECRC)
+ dev_dbg(dra7xx->dev, "ECRC Error\n");
+
+ if (reg & PME_TURN_OFF)
+ dev_dbg(dra7xx->dev,
+ "Power Management Event Turn-Off message received\n");
+
+ if (reg & PME_TO_ACK)
+ dev_dbg(dra7xx->dev,
+ "Power Management Turn-Off Ack message received\n");
+
+ if (reg & PM_PME)
+ dev_dbg(dra7xx->dev,
+ "PM Power Management Event message received\n");
+
+ if (reg & LINK_REQ_RST)
+ dev_dbg(dra7xx->dev, "Link Request Reset\n");
+
+ if (reg & LINK_UP_EVT)
+ dev_dbg(dra7xx->dev, "Link-up state change\n");
+
+ if (reg & CFG_BME_EVT)
+ dev_dbg(dra7xx->dev, "CFG 'Bus Master Enable' change\n");
+
+ if (reg & CFG_MSE_EVT)
+ dev_dbg(dra7xx->dev, "CFG 'Memory Space Enable' change\n");
+
+ dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MAIN, reg);
+
+ return IRQ_HANDLED;
+}
+
+static int add_pcie_port(struct dra7xx_pcie *dra7xx,
+ struct platform_device *pdev)
+{
+ int ret;
+ struct pcie_port *pp;
+ struct resource *res;
+ struct device *dev = &pdev->dev;
+
+ pp = &dra7xx->pp;
+ pp->dev = dev;
+ pp->ops = &dra7xx_pcie_host_ops;
+
+ pp->irq = platform_get_irq(pdev, 1);
+ if (pp->irq < 0) {
+ dev_err(dev, "missing IRQ resource\n");
+ return -EINVAL;
+ }
+
+ ret = devm_request_irq(&pdev->dev, pp->irq,
+ dra7xx_pcie_msi_irq_handler, IRQF_SHARED,
+ "dra7-pcie-msi", pp);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to request irq\n");
+ return ret;
+ }
+
+ if (!IS_ENABLED(CONFIG_PCI_MSI)) {
+ ret = dra7xx_pcie_init_irq_domain(pp);
+ if (ret < 0)
+ return ret;
+ }
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc_dbics");
+ pp->dbi_base = devm_ioremap(dev, res->start, resource_size(res));
+ if (!pp->dbi_base)
+ return -ENOMEM;
+
+ ret = dw_pcie_host_init(pp);
+ if (ret) {
+ dev_err(dra7xx->dev, "failed to initialize host\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+{
+ u32 reg;
+ int ret;
+ int irq;
+ int i;
+ int phy_count;
+ struct phy **phy;
+ void __iomem *base;
+ struct resource *res;
+ struct dra7xx_pcie *dra7xx;
+ struct device *dev = &pdev->dev;
+ struct device_node *np = dev->of_node;
+ char name[10];
+
+ dra7xx = devm_kzalloc(dev, sizeof(*dra7xx), GFP_KERNEL);
+ if (!dra7xx)
+ return -ENOMEM;
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ dev_err(dev, "missing IRQ resource\n");
+ return -EINVAL;
+ }
+
+ ret = devm_request_irq(dev, irq, dra7xx_pcie_irq_handler,
+ IRQF_SHARED, "dra7xx-pcie-main", dra7xx);
+ if (ret) {
+ dev_err(dev, "failed to request irq\n");
+ return ret;
+ }
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ti_conf");
+ base = devm_ioremap_nocache(dev, res->start, resource_size(res));
+ if (!base)
+ return -ENOMEM;
+
+ phy_count = of_property_count_strings(np, "phy-names");
+ if (phy_count < 0) {
+		dev_err(dev, "unable to count phy-names strings\n");
+ return phy_count;
+ }
+
+ phy = devm_kzalloc(dev, sizeof(*phy) * phy_count, GFP_KERNEL);
+ if (!phy)
+ return -ENOMEM;
+
+ for (i = 0; i < phy_count; i++) {
+ snprintf(name, sizeof(name), "pcie-phy%d", i);
+ phy[i] = devm_phy_get(dev, name);
+ if (IS_ERR(phy[i]))
+ return PTR_ERR(phy[i]);
+
+ ret = phy_init(phy[i]);
+ if (ret < 0)
+ goto err_phy;
+
+ ret = phy_power_on(phy[i]);
+ if (ret < 0) {
+ phy_exit(phy[i]);
+ goto err_phy;
+ }
+ }
+
+ dra7xx->base = base;
+ dra7xx->phy = phy;
+ dra7xx->dev = dev;
+ dra7xx->phy_count = phy_count;
+
+ pm_runtime_enable(dev);
+ ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+ dev_err(dev, "pm_runtime_get_sync failed\n");
+ goto err_phy;
+ }
+
+ reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD);
+ reg &= ~LTSSM_EN;
+ dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD, reg);
+
+ platform_set_drvdata(pdev, dra7xx);
+
+ ret = add_pcie_port(dra7xx, pdev);
+ if (ret < 0)
+ goto err_add_port;
+
+ return 0;
+
+err_add_port:
+ pm_runtime_put(dev);
+ pm_runtime_disable(dev);
+
+err_phy:
+ while (--i >= 0) {
+ phy_power_off(phy[i]);
+ phy_exit(phy[i]);
+ }
+
+ return ret;
+}
+
+static int __exit dra7xx_pcie_remove(struct platform_device *pdev)
+{
+ struct dra7xx_pcie *dra7xx = platform_get_drvdata(pdev);
+ struct pcie_port *pp = &dra7xx->pp;
+ struct device *dev = &pdev->dev;
+ int count = dra7xx->phy_count;
+
+ if (pp->irq_domain)
+ irq_domain_remove(pp->irq_domain);
+ pm_runtime_put(dev);
+ pm_runtime_disable(dev);
+ while (count--) {
+ phy_power_off(dra7xx->phy[count]);
+ phy_exit(dra7xx->phy[count]);
+ }
+
+ return 0;
+}
+
+static const struct of_device_id of_dra7xx_pcie_match[] = {
+ { .compatible = "ti,dra7-pcie", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, of_dra7xx_pcie_match);
+
+static struct platform_driver dra7xx_pcie_driver = {
+ .remove = __exit_p(dra7xx_pcie_remove),
+ .driver = {
+ .name = "dra7-pcie",
+ .owner = THIS_MODULE,
+ .of_match_table = of_dra7xx_pcie_match,
+ },
+};
+
+module_platform_driver_probe(dra7xx_pcie_driver, dra7xx_pcie_probe);
+
+MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>");
+MODULE_DESCRIPTION("TI PCIe controller driver");
+MODULE_LICENSE("GPL v2");
*/
#include <linux/clk.h>
+#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/export.h>
#include <linux/interrupt.h>
unsigned int num_supplies;
const struct tegra_pcie_soc_data *soc_data;
+ struct dentry *debugfs;
};
struct tegra_pcie_port {
};
MODULE_DEVICE_TABLE(of, tegra_pcie_of_match);
+static void *tegra_pcie_ports_seq_start(struct seq_file *s, loff_t *pos)
+{
+ struct tegra_pcie *pcie = s->private;
+
+ if (list_empty(&pcie->ports))
+ return NULL;
+
+ seq_printf(s, "Index Status\n");
+
+ return seq_list_start(&pcie->ports, *pos);
+}
+
+static void *tegra_pcie_ports_seq_next(struct seq_file *s, void *v, loff_t *pos)
+{
+ struct tegra_pcie *pcie = s->private;
+
+ return seq_list_next(v, &pcie->ports, pos);
+}
+
+static void tegra_pcie_ports_seq_stop(struct seq_file *s, void *v)
+{
+}
+
+static int tegra_pcie_ports_seq_show(struct seq_file *s, void *v)
+{
+ bool up = false, active = false;
+ struct tegra_pcie_port *port;
+ unsigned int value;
+
+ port = list_entry(v, struct tegra_pcie_port, list);
+
+ value = readl(port->base + RP_VEND_XP);
+
+ if (value & RP_VEND_XP_DL_UP)
+ up = true;
+
+ value = readl(port->base + RP_LINK_CONTROL_STATUS);
+
+ if (value & RP_LINK_CONTROL_STATUS_DL_LINK_ACTIVE)
+ active = true;
+
+ seq_printf(s, "%2u ", port->index);
+
+ if (up)
+ seq_printf(s, "up");
+
+ if (active) {
+ if (up)
+ seq_printf(s, ", ");
+
+ seq_printf(s, "active");
+ }
+
+ seq_printf(s, "\n");
+ return 0;
+}
+
+static const struct seq_operations tegra_pcie_ports_seq_ops = {
+ .start = tegra_pcie_ports_seq_start,
+ .next = tegra_pcie_ports_seq_next,
+ .stop = tegra_pcie_ports_seq_stop,
+ .show = tegra_pcie_ports_seq_show,
+};
+
+static int tegra_pcie_ports_open(struct inode *inode, struct file *file)
+{
+ struct tegra_pcie *pcie = inode->i_private;
+ struct seq_file *s;
+ int err;
+
+ err = seq_open(file, &tegra_pcie_ports_seq_ops);
+ if (err)
+ return err;
+
+ s = file->private_data;
+ s->private = pcie;
+
+ return 0;
+}
+
+static const struct file_operations tegra_pcie_ports_ops = {
+ .owner = THIS_MODULE,
+ .open = tegra_pcie_ports_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+static int tegra_pcie_debugfs_init(struct tegra_pcie *pcie)
+{
+ struct dentry *file;
+
+ pcie->debugfs = debugfs_create_dir("pcie", NULL);
+ if (!pcie->debugfs)
+ return -ENOMEM;
+
+ file = debugfs_create_file("ports", S_IFREG | S_IRUGO, pcie->debugfs,
+ pcie, &tegra_pcie_ports_ops);
+ if (!file)
+ goto remove;
+
+ return 0;
+
+remove:
+ debugfs_remove_recursive(pcie->debugfs);
+ pcie->debugfs = NULL;
+ return -ENOMEM;
+}
+
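
Once registered, the per-port link state can be read from userspace, assuming debugfs is mounted at its usual location (illustrative output, not from the patch):

    # cat /sys/kernel/debug/pcie/ports
    Index Status
     0    up, active

The two flags are sampled independently: DL_UP from RP_VEND_XP and DL_LINK_ACTIVE from RP_LINK_CONTROL_STATUS.
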
static int tegra_pcie_probe(struct platform_device *pdev)
{
const struct of_device_id *match;
goto disable_msi;
}
+ if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+ err = tegra_pcie_debugfs_init(pcie);
+ if (err < 0)
+ dev_err(&pdev->dev, "failed to setup debugfs: %d\n",
+ err);
+ }
+
platform_set_drvdata(pdev, pcie);
return 0;
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/pci_regs.h>
+#include <linux/platform_device.h>
#include <linux/types.h>
#include "pcie-designware.h"
return 0;
}
+static void dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
+{
+ unsigned int res, bit, val;
+
+ res = (irq / 32) * 12;
+ bit = irq % 32;
+ dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
+ val &= ~(1 << bit);
+ dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
+}
+
static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base,
unsigned int nvec, unsigned int pos)
{
- unsigned int i, res, bit, val;
+ unsigned int i;
for (i = 0; i < nvec; i++) {
irq_set_msi_desc_off(irq_base, i, NULL);
clear_bit(pos + i, pp->msi_irq_in_use);
/* Disable corresponding interrupt on MSI controller */
- res = ((pos + i) / 32) * 12;
- bit = (pos + i) % 32;
- dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
- val &= ~(1 << bit);
- dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
+ if (pp->ops->msi_clear_irq)
+ pp->ops->msi_clear_irq(pp, pos + i);
+ else
+ dw_pcie_msi_clear_irq(pp, pos + i);
}
}
+static void dw_pcie_msi_set_irq(struct pcie_port *pp, int irq)
+{
+ unsigned int res, bit, val;
+
+ res = (irq / 32) * 12;
+ bit = irq % 32;
+ dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
+ val |= 1 << bit;
+ dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
+}
+
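
The (irq / 32) * 12 factor in both helpers encodes the DesignWare MSI register layout: vectors are grouped into banks of 32, and consecutive banks sit 12 bytes apart because each bank carries three 32-bit registers (enable/mask/status). A sketch of the index math (msi_vec_to_bank is a hypothetical name, not from the patch):

    /* Illustrative sketch: decompose an MSI vector into the byte offset
     * of its 32-vector register bank and the bit within that bank. */
    static inline void msi_vec_to_bank(int irq, unsigned int *res,
                                       unsigned int *bit)
    {
            *res = (irq / 32) * 12; /* 3 x 32-bit regs per bank */
            *bit = irq % 32;
    }
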
static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
{
- int res, bit, irq, pos0, pos1, i;
- u32 val;
+ int irq, pos0, pos1, i;
struct pcie_port *pp = sys_to_pcie(desc->dev->bus->sysdata);
if (!pp) {
}
set_bit(pos0 + i, pp->msi_irq_in_use);
/*Enable corresponding interrupt in MSI interrupt controller */
- res = ((pos0 + i) / 32) * 12;
- bit = (pos0 + i) % 32;
- dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
- val |= 1 << bit;
- dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
+ if (pp->ops->msi_set_irq)
+ pp->ops->msi_set_irq(pp, pos0 + i);
+ else
+ dw_pcie_msi_set_irq(pp, pos0 + i);
}
*pos = pos0;
*/
desc->msi_attrib.multiple = msgvec;
- msg.address_lo = virt_to_phys((void *)pp->msi_data);
+ if (pp->ops->get_msi_data)
+ msg.address_lo = pp->ops->get_msi_data(pp);
+ else
+ msg.address_lo = virt_to_phys((void *)pp->msi_data);
msg.address_hi = 0x0;
msg.data = pos;
write_msi_msg(irq, &msg);
int __init dw_pcie_host_init(struct pcie_port *pp)
{
struct device_node *np = pp->dev->of_node;
+ struct platform_device *pdev = to_platform_device(pp->dev);
struct of_pci_range range;
struct of_pci_range_parser parser;
- u32 val;
- int i;
+ struct resource *cfg_res;
+ u32 val, na, ns;
+ const __be32 *addrp;
+ int i, index;
+
+ /* Find the address cell size and the number of cells in order to get
+ * the untranslated address.
+ */
+ of_property_read_u32(np, "#address-cells", &na);
+ ns = of_n_size_cells(np);
+
+ cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
+ if (cfg_res) {
+ pp->config.cfg0_size = resource_size(cfg_res)/2;
+ pp->config.cfg1_size = resource_size(cfg_res)/2;
+ pp->cfg0_base = cfg_res->start;
+ pp->cfg1_base = cfg_res->start + pp->config.cfg0_size;
+
+ /* Find the untranslated configuration space address */
+ index = of_property_match_string(np, "reg-names", "config");
+ addrp = of_get_address(np, index, false, false);
+ pp->cfg0_mod_base = of_read_number(addrp, ns);
+ pp->cfg1_mod_base = pp->cfg0_mod_base + pp->config.cfg0_size;
+ } else {
+ dev_err(pp->dev, "missing *config* reg space\n");
+ }
if (of_pci_range_parser_init(&parser, np)) {
dev_err(pp->dev, "missing ranges property\n");
pp->config.io_size = resource_size(&pp->io);
pp->config.io_bus_addr = range.pci_addr;
pp->io_base = range.cpu_addr;
+
+ /* Find the untranslated IO space address */
+ pp->io_mod_base = of_read_number(parser.range -
+ parser.np + na, ns);
}
if (restype == IORESOURCE_MEM) {
of_pci_range_to_resource(&range, np, &pp->mem);
pp->mem.name = "MEM";
pp->config.mem_size = resource_size(&pp->mem);
pp->config.mem_bus_addr = range.pci_addr;
+
+ /* Find the untranslated MEM space address */
+ pp->mem_mod_base = of_read_number(parser.range -
+ parser.np + na, ns);
}
if (restype == 0) {
of_pci_range_to_resource(&range, np, &pp->cfg);
pp->config.cfg0_size = resource_size(&pp->cfg)/2;
pp->config.cfg1_size = resource_size(&pp->cfg)/2;
+ pp->cfg0_base = pp->cfg.start;
+ pp->cfg1_base = pp->cfg.start + pp->config.cfg0_size;
+
+ /* Find the untranslated configuration space address */
+ pp->cfg0_mod_base = of_read_number(parser.range -
+ parser.np + na, ns);
+ pp->cfg1_mod_base = pp->cfg0_mod_base +
+ pp->config.cfg0_size;
}
}
}
}
- pp->cfg0_base = pp->cfg.start;
- pp->cfg1_base = pp->cfg.start + pp->config.cfg0_size;
pp->mem_base = pp->mem.start;
pp->va_cfg0_base = devm_ioremap(pp->dev, pp->cfg0_base,
/* Program viewport 0 : OUTBOUND : CFG0 */
dw_pcie_writel_rc(pp, PCIE_ATU_REGION_OUTBOUND | PCIE_ATU_REGION_INDEX0,
PCIE_ATU_VIEWPORT);
- dw_pcie_writel_rc(pp, pp->cfg0_base, PCIE_ATU_LOWER_BASE);
- dw_pcie_writel_rc(pp, (pp->cfg0_base >> 32), PCIE_ATU_UPPER_BASE);
- dw_pcie_writel_rc(pp, pp->cfg0_base + pp->config.cfg0_size - 1,
+ dw_pcie_writel_rc(pp, pp->cfg0_mod_base, PCIE_ATU_LOWER_BASE);
+ dw_pcie_writel_rc(pp, (pp->cfg0_mod_base >> 32), PCIE_ATU_UPPER_BASE);
+ dw_pcie_writel_rc(pp, pp->cfg0_mod_base + pp->config.cfg0_size - 1,
PCIE_ATU_LIMIT);
dw_pcie_writel_rc(pp, busdev, PCIE_ATU_LOWER_TARGET);
dw_pcie_writel_rc(pp, 0, PCIE_ATU_UPPER_TARGET);
dw_pcie_writel_rc(pp, PCIE_ATU_REGION_OUTBOUND | PCIE_ATU_REGION_INDEX1,
PCIE_ATU_VIEWPORT);
dw_pcie_writel_rc(pp, PCIE_ATU_TYPE_CFG1, PCIE_ATU_CR1);
- dw_pcie_writel_rc(pp, pp->cfg1_base, PCIE_ATU_LOWER_BASE);
- dw_pcie_writel_rc(pp, (pp->cfg1_base >> 32), PCIE_ATU_UPPER_BASE);
- dw_pcie_writel_rc(pp, pp->cfg1_base + pp->config.cfg1_size - 1,
+ dw_pcie_writel_rc(pp, pp->cfg1_mod_base, PCIE_ATU_LOWER_BASE);
+ dw_pcie_writel_rc(pp, (pp->cfg1_mod_base >> 32), PCIE_ATU_UPPER_BASE);
+ dw_pcie_writel_rc(pp, pp->cfg1_mod_base + pp->config.cfg1_size - 1,
PCIE_ATU_LIMIT);
dw_pcie_writel_rc(pp, busdev, PCIE_ATU_LOWER_TARGET);
dw_pcie_writel_rc(pp, 0, PCIE_ATU_UPPER_TARGET);
dw_pcie_writel_rc(pp, PCIE_ATU_REGION_OUTBOUND | PCIE_ATU_REGION_INDEX0,
PCIE_ATU_VIEWPORT);
dw_pcie_writel_rc(pp, PCIE_ATU_TYPE_MEM, PCIE_ATU_CR1);
- dw_pcie_writel_rc(pp, pp->mem_base, PCIE_ATU_LOWER_BASE);
- dw_pcie_writel_rc(pp, (pp->mem_base >> 32), PCIE_ATU_UPPER_BASE);
- dw_pcie_writel_rc(pp, pp->mem_base + pp->config.mem_size - 1,
+ dw_pcie_writel_rc(pp, pp->mem_mod_base, PCIE_ATU_LOWER_BASE);
+ dw_pcie_writel_rc(pp, (pp->mem_mod_base >> 32), PCIE_ATU_UPPER_BASE);
+ dw_pcie_writel_rc(pp, pp->mem_mod_base + pp->config.mem_size - 1,
PCIE_ATU_LIMIT);
dw_pcie_writel_rc(pp, pp->config.mem_bus_addr, PCIE_ATU_LOWER_TARGET);
dw_pcie_writel_rc(pp, upper_32_bits(pp->config.mem_bus_addr),
dw_pcie_writel_rc(pp, PCIE_ATU_REGION_OUTBOUND | PCIE_ATU_REGION_INDEX1,
PCIE_ATU_VIEWPORT);
dw_pcie_writel_rc(pp, PCIE_ATU_TYPE_IO, PCIE_ATU_CR1);
- dw_pcie_writel_rc(pp, pp->io_base, PCIE_ATU_LOWER_BASE);
- dw_pcie_writel_rc(pp, (pp->io_base >> 32), PCIE_ATU_UPPER_BASE);
- dw_pcie_writel_rc(pp, pp->io_base + pp->config.io_size - 1,
+ dw_pcie_writel_rc(pp, pp->io_mod_base, PCIE_ATU_LOWER_BASE);
+ dw_pcie_writel_rc(pp, (pp->io_mod_base >> 32), PCIE_ATU_UPPER_BASE);
+ dw_pcie_writel_rc(pp, pp->io_mod_base + pp->config.io_size - 1,
PCIE_ATU_LIMIT);
dw_pcie_writel_rc(pp, pp->config.io_bus_addr, PCIE_ATU_LOWER_TARGET);
dw_pcie_writel_rc(pp, upper_32_bits(pp->config.io_bus_addr),
}
if (bus->number != pp->root_bus_nr)
- ret = dw_pcie_rd_other_conf(pp, bus, devfn,
+ if (pp->ops->rd_other_conf)
+ ret = pp->ops->rd_other_conf(pp, bus, devfn,
+ where, size, val);
+ else
+ ret = dw_pcie_rd_other_conf(pp, bus, devfn,
where, size, val);
else
ret = dw_pcie_rd_own_conf(pp, where, size, val);
return PCIBIOS_DEVICE_NOT_FOUND;
if (bus->number != pp->root_bus_nr)
- ret = dw_pcie_wr_other_conf(pp, bus, devfn,
+ if (pp->ops->wr_other_conf)
+ ret = pp->ops->wr_other_conf(pp, bus, devfn,
+ where, size, val);
+ else
+ ret = dw_pcie_wr_other_conf(pp, bus, devfn,
where, size, val);
else
ret = dw_pcie_wr_own_conf(pp, where, size, val);
u8 root_bus_nr;
void __iomem *dbi_base;
u64 cfg0_base;
+ u64 cfg0_mod_base;
void __iomem *va_cfg0_base;
u64 cfg1_base;
+ u64 cfg1_mod_base;
void __iomem *va_cfg1_base;
u64 io_base;
+ u64 io_mod_base;
u64 mem_base;
+ u64 mem_mod_base;
struct resource cfg;
struct resource io;
struct resource mem;
u32 val, void __iomem *dbi_base);
int (*rd_own_conf)(struct pcie_port *pp, int where, int size, u32 *val);
int (*wr_own_conf)(struct pcie_port *pp, int where, int size, u32 val);
+ int (*rd_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
+ unsigned int devfn, int where, int size, u32 *val);
+ int (*wr_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
+ unsigned int devfn, int where, int size, u32 val);
int (*link_up)(struct pcie_port *pp);
void (*host_init)(struct pcie_port *pp);
+ void (*msi_set_irq)(struct pcie_port *pp, int irq);
+ void (*msi_clear_irq)(struct pcie_port *pp, int irq);
+ u32 (*get_msi_data)(struct pcie_port *pp);
};
int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val);
config PHY_MIPHY365X
tristate "STMicroelectronics MIPHY365X PHY driver for STiH41x series"
depends on ARCH_STI
- depends on GENERIC_PHY
depends on HAS_IOMEM
depends on OF
+ select GENERIC_PHY
help
Enable this to support the miphy transceiver (for SATA/PCIE)
that is part of STMicroelectronics STiH41x SoC series.
},
{ },
};
+MODULE_DEVICE_TABLE(of, exynos5_usbdrd_phy_of_match);
static int exynos5_usbdrd_phy_probe(struct platform_device *pdev)
{
#include <linux/delay.h>
#include <linux/usb/otg.h>
#include <linux/phy/phy.h>
+#include <linux/pm_runtime.h>
#include <linux/usb/musb-omap.h>
#include <linux/usb/ulpi.h>
#include <linux/i2c/twl.h>
}
}
-static int twl4030_phy_power_off(struct phy *phy)
+static int twl4030_usb_runtime_suspend(struct device *dev)
{
- struct twl4030_usb *twl = phy_get_drvdata(phy);
+ struct twl4030_usb *twl = dev_get_drvdata(dev);
+ dev_dbg(twl->dev, "%s\n", __func__);
if (twl->asleep)
return 0;
twl4030_phy_power(twl, 0);
twl->asleep = 1;
- dev_dbg(twl->dev, "%s\n", __func__);
+
return 0;
}
-static void __twl4030_phy_power_on(struct twl4030_usb *twl)
+static int twl4030_usb_runtime_resume(struct device *dev)
{
+ struct twl4030_usb *twl = dev_get_drvdata(dev);
+
+ dev_dbg(twl->dev, "%s\n", __func__);
+ if (!twl->asleep)
+ return 0;
+
twl4030_phy_power(twl, 1);
- twl4030_i2c_access(twl, 1);
- twl4030_usb_set_mode(twl, twl->usb_mode);
- if (twl->usb_mode == T2_USB_MODE_ULPI)
- twl4030_i2c_access(twl, 0);
+ twl->asleep = 0;
+
+ return 0;
+}
+
+static int twl4030_phy_power_off(struct phy *phy)
+{
+ struct twl4030_usb *twl = phy_get_drvdata(phy);
+
+ dev_dbg(twl->dev, "%s\n", __func__);
+ pm_runtime_mark_last_busy(twl->dev);
+ pm_runtime_put_autosuspend(twl->dev);
+
+ return 0;
}
static int twl4030_phy_power_on(struct phy *phy)
{
struct twl4030_usb *twl = phy_get_drvdata(phy);
- if (!twl->asleep)
- return 0;
- __twl4030_phy_power_on(twl);
- twl->asleep = 0;
dev_dbg(twl->dev, "%s\n", __func__);
+ pm_runtime_get_sync(twl->dev);
+ twl4030_i2c_access(twl, 1);
+ twl4030_usb_set_mode(twl, twl->usb_mode);
+ if (twl->usb_mode == T2_USB_MODE_ULPI)
+ twl4030_i2c_access(twl, 0);
/*
* XXX When VBUS gets driven after musb goes to A mode,
* USB_LINK_VBUS state. musb_hdrc won't care until it
* starts to handle softconnect right.
*/
+ if ((status == OMAP_MUSB_VBUS_VALID) ||
+ (status == OMAP_MUSB_ID_GROUND)) {
+ if (twl->asleep)
+ pm_runtime_get_sync(twl->dev);
+ } else {
+ if (!twl->asleep) {
+ pm_runtime_mark_last_busy(twl->dev);
+ pm_runtime_put_autosuspend(twl->dev);
+ }
+ }
omap_musb_mailbox(status);
}
- sysfs_notify(&twl->dev->kobj, NULL, "vbus");
+
+ /* don't schedule during sleep - irq works right then */
+ if (status == OMAP_MUSB_ID_GROUND && !twl->asleep) {
+ cancel_delayed_work(&twl->id_workaround_work);
+ schedule_delayed_work(&twl->id_workaround_work, HZ);
+ }
+
+ if (irq)
+ sysfs_notify(&twl->dev->kobj, NULL, "vbus");
return IRQ_HANDLED;
}
{
struct twl4030_usb *twl = container_of(work, struct twl4030_usb,
id_workaround_work.work);
- enum omap_musb_vbus_id_status status;
- bool status_changed = false;
-
- status = twl4030_usb_linkstat(twl);
-
- spin_lock_irq(&twl->lock);
- if (status >= 0 && status != twl->linkstat) {
- twl->linkstat = status;
- status_changed = true;
- }
- spin_unlock_irq(&twl->lock);
-
- if (status_changed) {
- dev_dbg(twl->dev, "handle missing status change to %d\n",
- status);
- omap_musb_mailbox(status);
- }
- /* don't schedule during sleep - irq works right then */
- if (status == OMAP_MUSB_ID_GROUND && !twl->asleep) {
- cancel_delayed_work(&twl->id_workaround_work);
- schedule_delayed_work(&twl->id_workaround_work, HZ);
- }
+ twl4030_usb_irq(0, twl);
}
static int twl4030_phy_init(struct phy *phy)
struct twl4030_usb *twl = phy_get_drvdata(phy);
enum omap_musb_vbus_id_status status;
- /*
- * Start in sleep state, we'll get called through set_suspend()
- * callback when musb is runtime resumed and it's time to start.
- */
- __twl4030_phy_power(twl, 0);
- twl->asleep = 1;
-
+ pm_runtime_get_sync(twl->dev);
status = twl4030_usb_linkstat(twl);
twl->linkstat = status;
- if (status == OMAP_MUSB_ID_GROUND || status == OMAP_MUSB_VBUS_VALID) {
+ if (status == OMAP_MUSB_ID_GROUND || status == OMAP_MUSB_VBUS_VALID)
omap_musb_mailbox(twl->linkstat);
- twl4030_phy_power_on(phy);
- }
sysfs_notify(&twl->dev->kobj, NULL, "vbus");
+ pm_runtime_mark_last_busy(twl->dev);
+ pm_runtime_put_autosuspend(twl->dev);
+
return 0;
}
.owner = THIS_MODULE,
};
+static const struct dev_pm_ops twl4030_usb_pm_ops = {
+ SET_RUNTIME_PM_OPS(twl4030_usb_runtime_suspend,
+ twl4030_usb_runtime_resume, NULL)
+};
+
static int twl4030_usb_probe(struct platform_device *pdev)
{
struct twl4030_usb_data *pdata = dev_get_platdata(&pdev->dev);
ATOMIC_INIT_NOTIFIER_HEAD(&twl->phy.notifier);
+ pm_runtime_use_autosuspend(&pdev->dev);
+ pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_get_sync(&pdev->dev);
+
/* Our job is to use irqs and status from the power module
* to keep the transceiver disabled when nothing's connected.
*
return status;
}
+ pm_runtime_mark_last_busy(&pdev->dev);
+ pm_runtime_put_autosuspend(twl->dev);
+
dev_info(&pdev->dev, "Initialized TWL4030 USB module\n");
return 0;
}
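
The probe path above follows the standard runtime-PM autosuspend recipe: enable autosuspend with a 2-second delay, hold a reference across initialization, then drop it with pm_runtime_put_autosuspend() so the PHY powers down two seconds after its last user. A sketch of the canonical pairing used throughout the driver (do_phy_io is a hypothetical name, not from the patch):

    #include <linux/pm_runtime.h>

    /* Illustrative sketch: canonical runtime-PM get/put pairing. */
    static int do_phy_io(struct device *dev)
    {
            int ret = pm_runtime_get_sync(dev);     /* resume if suspended */

            if (ret < 0) {
                    pm_runtime_put_noidle(dev);     /* balance the usage count */
                    return ret;
            }
            /* ... access the hardware here ... */
            pm_runtime_mark_last_busy(dev);         /* restart the idle timer */
            pm_runtime_put_autosuspend(dev);        /* suspend after the delay */
            return 0;
    }
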
struct twl4030_usb *twl = platform_get_drvdata(pdev);
int val;
+ pm_runtime_get_sync(twl->dev);
cancel_delayed_work(&twl->id_workaround_work);
device_remove_file(twl->dev, &dev_attr_vbus);
/* disable complete OTG block */
twl4030_usb_clear_bits(twl, POWER_CTRL, POWER_CTRL_OTG_ENAB);
-
- if (!twl->asleep)
- twl4030_phy_power(twl, 0);
+ pm_runtime_mark_last_busy(twl->dev);
+ pm_runtime_put(twl->dev);
return 0;
}
.remove = twl4030_usb_remove,
.driver = {
.name = "twl4030_usb",
+ .pm = &twl4030_usb_pm_ops,
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(twl4030_usb_id_table),
},
} else
seq_printf(s, " %-9s", chip->get(chip, offset) ? "hi" : "lo");
- if (pctldev)
- mode = abx500_get_mode(pctldev, chip, offset);
+ mode = abx500_get_mode(pctldev, chip, offset);
seq_printf(s, " %s", (mode < 0) ? "unknown" : modes[mode]);
static void at91_pin_dbg(const struct device *dev, const struct at91_pmx_pin *pin)
{
if (pin->mux) {
- dev_dbg(dev, "pio%c%d configured as periph%c with conf = 0x%lu\n",
+ dev_dbg(dev, "pio%c%d configured as periph%c with conf = 0x%lx\n",
pin->bank + 'A', pin->pin, pin->mux - 1 + 'A', pin->conf);
} else {
- dev_dbg(dev, "pio%c%d configured as gpio with conf = 0x%lu\n",
+ dev_dbg(dev, "pio%c%d configured as gpio with conf = 0x%lx\n",
pin->bank + 'A', pin->pin, pin->conf);
}
}
.irq_mask = byt_irq_mask,
.irq_unmask = byt_irq_unmask,
.irq_set_type = byt_irq_type,
+ .flags = IRQCHIP_SKIP_SET_WAKE,
};
static void byt_gpio_irq_init_hw(struct byt_gpio *vg)
int reg, ret, mask;
unsigned long flags;
u8 bit;
- u32 data;
+ u32 data, rmask;
if (iomux_num > 3)
return -EINVAL;
spin_lock_irqsave(&bank->slock, flags);
data = (mask << (bit + 16));
+ rmask = data | (data >> 16);
data |= (mux & mask) << bit;
- ret = regmap_write(regmap, reg, data);
+ ret = regmap_update_bits(regmap, reg, rmask, data);
spin_unlock_irqrestore(&bank->slock, flags);
struct regmap *regmap;
unsigned long flags;
int reg, ret, i;
- u32 data;
+ u32 data, rmask;
u8 bit;
rk3288_calc_drv_reg_and_bit(bank, pin_num, &regmap, &reg, &bit);
/* enable the write to the equivalent lower bits */
data = ((1 << RK3288_DRV_BITS_PER_PIN) - 1) << (bit + 16);
+ rmask = data | (data >> 16);
data |= (ret << bit);
- ret = regmap_write(regmap, reg, data);
+ ret = regmap_update_bits(regmap, reg, rmask, data);
spin_unlock_irqrestore(&bank->slock, flags);
return ret;
int reg, ret;
unsigned long flags;
u8 bit;
- u32 data;
+ u32 data, rmask;
dev_dbg(info->dev, "setting pull of GPIO%d-%d to %d\n",
bank->bank_num, pin_num, pull);
/* enable the write to the equivalent lower bits */
data = ((1 << RK3188_PULL_BITS_PER_PIN) - 1) << (bit + 16);
+ rmask = data | (data >> 16);
switch (pull) {
case PIN_CONFIG_BIAS_DISABLE:
return -EINVAL;
}
- ret = regmap_write(regmap, reg, data);
+ ret = regmap_update_bits(regmap, reg, rmask, data);
spin_unlock_irqrestore(&bank->slock, flags);
break;
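The three Rockchip hunks above replace regmap_write() with regmap_update_bits(), following the controller's hi-word write-enable convention: bits [31:16] of a register select which of the value bits [15:0] a write may change. A sketch of the idiom (field position and value are illustrative):

	u32 data, rmask;

	data  = mask << (bit + 16);		/* assert write-enable bits for this field only */
	rmask = data | (data >> 16);		/* all bits regmap_update_bits() may touch */
	data |= (value & mask) << bit;		/* the new field value */
	ret = regmap_update_bits(regmap, reg, rmask, data);

Presumably the read-modify-write keeps the register image consistent in cases where a full-register regmap_write() would not, but the commit itself states no rationale.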
if (args->args_count <= 0)
return ERR_PTR(-EINVAL);
- if (index > ARRAY_SIZE(padctl->phys))
+ if (index >= ARRAY_SIZE(padctl->phys))
return ERR_PTR(-EINVAL);
return padctl->phys[index];
padctl->provider = devm_of_phy_provider_register(&pdev->dev,
tegra_xusb_padctl_xlate);
- if (err < 0) {
+ if (IS_ERR(padctl->provider)) {
+ err = PTR_ERR(padctl->provider);
dev_err(&pdev->dev, "failed to register PHYs: %d\n", err);
goto unregister;
}
struct irq_chip *chip = irq_data_get_irq_chip(irqd);
struct exynos_irq_chip *our_chip = to_exynos_irq_chip(chip);
struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd);
- struct samsung_pin_bank_type *bank_type = bank->type;
struct samsung_pinctrl_drv_data *d = bank->drvdata;
- unsigned int pin = irqd->hwirq;
- unsigned int shift = EXYNOS_EINT_CON_LEN * pin;
+ unsigned int shift = EXYNOS_EINT_CON_LEN * irqd->hwirq;
unsigned int con, trig_type;
unsigned long reg_con = our_chip->eint_con + bank->eint_offset;
- unsigned long flags;
- unsigned int mask;
switch (type) {
case IRQ_TYPE_EDGE_RISING:
con |= trig_type << shift;
writel(con, d->virt_base + reg_con);
+ return 0;
+}
+
+static int exynos_irq_request_resources(struct irq_data *irqd)
+{
+ struct irq_chip *chip = irq_data_get_irq_chip(irqd);
+ struct exynos_irq_chip *our_chip = to_exynos_irq_chip(chip);
+ struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd);
+ struct samsung_pin_bank_type *bank_type = bank->type;
+ struct samsung_pinctrl_drv_data *d = bank->drvdata;
+ unsigned int shift = EXYNOS_EINT_CON_LEN * irqd->hwirq;
+ unsigned long reg_con = our_chip->eint_con + bank->eint_offset;
+ unsigned long flags;
+ unsigned int mask;
+ unsigned int con;
+ int ret;
+
+ ret = gpio_lock_as_irq(&bank->gpio_chip, irqd->hwirq);
+ if (ret) {
+ dev_err(bank->gpio_chip.dev, "unable to lock pin %s-%lu IRQ\n",
+ bank->name, irqd->hwirq);
+ return ret;
+ }
+
reg_con = bank->pctl_offset + bank_type->reg_offset[PINCFG_TYPE_FUNC];
- shift = pin * bank_type->fld_width[PINCFG_TYPE_FUNC];
+ shift = irqd->hwirq * bank_type->fld_width[PINCFG_TYPE_FUNC];
mask = (1 << bank_type->fld_width[PINCFG_TYPE_FUNC]) - 1;
spin_lock_irqsave(&bank->slock, flags);
spin_unlock_irqrestore(&bank->slock, flags);
+ exynos_irq_unmask(irqd);
+
return 0;
}
+static void exynos_irq_release_resources(struct irq_data *irqd)
+{
+ struct irq_chip *chip = irq_data_get_irq_chip(irqd);
+ struct exynos_irq_chip *our_chip = to_exynos_irq_chip(chip);
+ struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd);
+ struct samsung_pin_bank_type *bank_type = bank->type;
+ struct samsung_pinctrl_drv_data *d = bank->drvdata;
+ unsigned int shift = EXYNOS_EINT_CON_LEN * irqd->hwirq;
+ unsigned long reg_con = our_chip->eint_con + bank->eint_offset;
+ unsigned long flags;
+ unsigned int mask;
+ unsigned int con;
+
+ reg_con = bank->pctl_offset + bank_type->reg_offset[PINCFG_TYPE_FUNC];
+ shift = irqd->hwirq * bank_type->fld_width[PINCFG_TYPE_FUNC];
+ mask = (1 << bank_type->fld_width[PINCFG_TYPE_FUNC]) - 1;
+
+ exynos_irq_mask(irqd);
+
+ spin_lock_irqsave(&bank->slock, flags);
+
+ con = readl(d->virt_base + reg_con);
+ con &= ~(mask << shift);
+ con |= FUNC_INPUT << shift;
+ writel(con, d->virt_base + reg_con);
+
+ spin_unlock_irqrestore(&bank->slock, flags);
+
+ gpio_unlock_as_irq(&bank->gpio_chip, irqd->hwirq);
+}
+
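The two new callbacks bracket the interrupt's lifetime: exynos_irq_request_resources() claims the line as an IRQ and muxes the pin to its EINT function, and exynos_irq_release_resources() reverts the pin to plain input and releases the claim. For reference, a sketch of the locking contract assumed here (the API of this kernel generation):

	/* gpio_lock_as_irq() marks the line as used by an interrupt so that
	 * later attempts to drive it as an output fail; it must be balanced
	 * by gpio_unlock_as_irq() when the interrupt is freed. */
	ret = gpio_lock_as_irq(&bank->gpio_chip, irqd->hwirq);
	if (ret)
		return ret;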
/*
* irq_chip for gpio interrupts.
*/
.irq_mask = exynos_irq_mask,
.irq_ack = exynos_irq_ack,
.irq_set_type = exynos_irq_set_type,
+ .irq_request_resources = exynos_irq_request_resources,
+ .irq_release_resources = exynos_irq_release_resources,
},
.eint_con = EXYNOS_GPIO_ECON_OFFSET,
.eint_mask = EXYNOS_GPIO_EMASK_OFFSET,
.irq_ack = exynos_irq_ack,
.irq_set_type = exynos_irq_set_type,
.irq_set_wake = exynos_wkup_irq_set_wake,
+ .irq_request_resources = exynos_irq_request_resources,
+ .irq_release_resources = exynos_irq_release_resources,
},
.eint_con = EXYNOS_WKUP_ECON_OFFSET,
.eint_mask = EXYNOS_WKUP_EMASK_OFFSET,
#include <linux/gpio.h>
/* pinmux function number for pin as gpio output line */
+#define FUNC_INPUT 0x0
#define FUNC_OUTPUT 0x1
/**
};
static const char * const can0_groups[] = {
- "can0_data_a",
+ "can0_data",
"can0_data_b",
"can0_data_c",
"can0_data_d",
"can0_data_e",
"can0_data_f",
- "can_clk_a",
+ "can_clk",
"can_clk_b",
"can_clk_c",
"can_clk_d",
};
static const char * const can1_groups[] = {
- "can1_data_a",
+ "can1_data",
"can1_data_b",
"can1_data_c",
"can1_data_d",
- "can_clk_a",
+ "can_clk",
"can_clk_b",
"can_clk_c",
"can_clk_d",
struct dentry *debug;
unsigned long cfg;
bool has_hw_rfkill_switch;
- bool has_touchpad_control;
};
static bool no_bt_rfkill;
int type;
};
-const const struct ideapad_rfk_data ideapad_rfk_data[] = {
+static const struct ideapad_rfk_data ideapad_rfk_data[] = {
{ "ideapad_wlan", CFG_WIFI_BIT, VPCCMD_W_WIFI, RFKILL_TYPE_WLAN },
{ "ideapad_bluetooth", CFG_BT_BIT, VPCCMD_W_BT, RFKILL_TYPE_BLUETOOTH },
{ "ideapad_3g", CFG_3G_BIT, VPCCMD_W_3G, RFKILL_TYPE_WWAN },
{
unsigned long value;
- if (!priv->has_touchpad_control)
- return;
-
/* Without reading from EC touchpad LED doesn't switch state */
if (!read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value)) {
/* Some IdeaPads don't really turn off touchpad - they only
* always results in 0 on these models, causing ideapad_laptop to wrongly
* report all radios as hardware-blocked.
*/
-static struct dmi_system_id no_hw_rfkill_list[] = {
- {
- .ident = "Lenovo Yoga 2 11 / 13 / Pro",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
- DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Yoga 2"),
- },
- },
- {}
-};
-
-/*
- * Some models don't offer touchpad ctrl through the ideapad interface, causing
- * ideapad_sync_touchpad_state to send wrong touchpad enable/disable events.
- */
-static struct dmi_system_id no_touchpad_ctrl_list[] = {
- {
- .ident = "Lenovo Yoga 1 series",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
- DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo IdeaPad Yoga"),
- },
- },
+static const struct dmi_system_id no_hw_rfkill_list[] = {
{
.ident = "Lenovo Yoga 2 11 / 13 / Pro",
.matches = {
priv->adev = adev;
priv->platform_device = pdev;
priv->has_hw_rfkill_switch = !dmi_check_system(no_hw_rfkill_list);
- priv->has_touchpad_control = !dmi_check_system(no_touchpad_ctrl_list);
ret = ideapad_sysfs_init(priv);
if (ret)
const char *buf, size_t count)
{
struct toshiba_acpi_dev *toshiba = dev_get_drvdata(dev);
- int mode = -1;
- int time = -1;
+ int mode;
+ int time;
+ int ret;
- if (sscanf(buf, "%i", &mode) != 1 || (mode != 2 || mode != 1))
+
+ ret = kstrtoint(buf, 0, &mode);
+ if (ret)
+ return ret;
+ if (mode != SCI_KBD_MODE_FNZ && mode != SCI_KBD_MODE_AUTO)
return -EINVAL;
/* Set the Keyboard Backlight Mode where:
* Auto - KBD backlight turns off automatically in given time
* FN-Z - KBD backlight "toggles" when hotkey pressed
*/
- if (mode != -1 && toshiba->kbd_mode != mode) {
+ if (toshiba->kbd_mode != mode) {
time = toshiba->kbd_time << HCI_MISC_SHIFT;
time = time + toshiba->kbd_mode;
- if (toshiba_kbd_illum_status_set(toshiba, time) < 0)
- return -EIO;
+ ret = toshiba_kbd_illum_status_set(toshiba, time);
+ if (ret)
+ return ret;
toshiba->kbd_mode = mode;
}
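The removed guard deserves a note: (mode != 2 || mode != 1) is true for every value of mode, since no integer equals both constants, so combined with the sscanf() test the store always returned -EINVAL. The rewrite uses the conjunction that was intended:

	/* reject anything that is neither FN-Z nor Auto mode */
	if (mode != SCI_KBD_MODE_FNZ && mode != SCI_KBD_MODE_AUTO)
		return -EINVAL;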
{
struct toshiba_acpi_dev *dev = acpi_driver_data(to_acpi_device(device));
u32 result;
+ acpi_status status;
+
+ if (dev->hotkey_dev) {
+ status = acpi_evaluate_object(dev->acpi_dev->handle, "ENAB",
+ NULL, NULL);
+ if (ACPI_FAILURE(status))
+ pr_info("Unable to re-enable hotkeys\n");
- if (dev->hotkey_dev)
hci_write1(dev, HCI_HOTKEY_EVENT, HCI_HOTKEY_ENABLE, &result);
+ }
return 0;
}
{ X86_VENDOR_INTEL, 6, 0x3a},/* Ivy Bridge */
{ X86_VENDOR_INTEL, 6, 0x3c},/* Haswell */
{ X86_VENDOR_INTEL, 6, 0x3d},/* Broadwell */
+ { X86_VENDOR_INTEL, 6, 0x3f},/* Haswell */
{ X86_VENDOR_INTEL, 6, 0x45},/* Haswell ULT */
/* TODO: Add more CPU IDs after testing */
{}
for (i = 0; i < RAPL_DOMAIN_MAX; i++) {
/* use physical package id to read counters */
- if (!rapl_check_domain(cpu, i))
+ if (!rapl_check_domain(cpu, i)) {
rp->domain_map |= 1 << i;
- else
- pr_warn("RAPL domain %s detection failed\n",
- rapl_domain_names[i]);
+ pr_info("Found RAPL domain %s\n", rapl_domain_names[i]);
+ }
}
rp->nr_domains = bitmap_weight(&rp->domain_map, RAPL_DOMAIN_MAX);
if (!rp->nr_domains) {
unsigned int best = 0;
struct pwm_lookup *p;
unsigned int match;
+ unsigned int period;
+ enum pwm_polarity polarity;
/* look up via DT first */
if (IS_ENABLED(CONFIG_OF) && dev && dev->of_node)
if (match > best) {
chip = pwmchip_find_by_name(p->provider);
index = p->index;
+ period = p->period;
+ polarity = p->polarity;
if (match != 3)
best = match;
if (IS_ERR(pwm))
return pwm;
- pwm_set_period(pwm, p->period);
- pwm_set_polarity(pwm, p->polarity);
+ pwm_set_period(pwm, period);
+ pwm_set_polarity(pwm, polarity);
return pwm;
info->device_type = s5m87xx->device_type;
info->wtsr_smpl = s5m87xx->wtsr_smpl;
- info->irq = regmap_irq_get_virq(s5m87xx->irq_data, alarm_irq);
- if (info->irq <= 0) {
- ret = -EINVAL;
- dev_err(&pdev->dev, "Failed to get virtual IRQ %d\n",
+ if (s5m87xx->irq_data) {
+ info->irq = regmap_irq_get_virq(s5m87xx->irq_data, alarm_irq);
+ if (info->irq <= 0) {
+ ret = -EINVAL;
+ dev_err(&pdev->dev, "Failed to get virtual IRQ %d\n",
alarm_irq);
- goto err;
+ goto err;
+ }
}
platform_set_drvdata(pdev, info);
goto err;
}
+ if (!info->irq) {
+ dev_info(&pdev->dev, "Alarm IRQ not available\n");
+ return 0;
+ }
+
ret = devm_request_threaded_irq(&pdev->dev, info->irq, NULL,
s5m_rtc_alarm_irq, 0, "rtc-alarm0",
info);
struct s5m_rtc_info *info = dev_get_drvdata(dev);
int ret = 0;
- if (device_may_wakeup(dev))
+ if (info->irq && device_may_wakeup(dev))
ret = disable_irq_wake(info->irq);
return ret;
struct s5m_rtc_info *info = dev_get_drvdata(dev);
int ret = 0;
- if (device_may_wakeup(dev))
+ if (info->irq && device_may_wakeup(dev))
ret = enable_irq_wake(info->irq);
return ret;
* strings when running as a module.
*/
static char *dasd[256];
-module_param_array(dasd, charp, NULL, 0);
+module_param_array(dasd, charp, NULL, S_IRUGO);
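The last argument of module_param_array() is a sysfs mode: 0 keeps the parameter hidden, while S_IRUGO (0444) exposes it read-only. A sketch of the effect (the sysfs path is illustrative):

	/* with perm = S_IRUGO the values become readable at
	 *   /sys/module/dasd_mod/parameters/dasd   (path illustrative),
	 * whereas perm = 0 creates no sysfs entry at all */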
/*
* Single spinlock to protect devmap and servermap structures and lists.
const unsigned char *buf, int count)
{
struct raw3215_info *raw;
+ int i, written;
if (!tty)
return 0;
raw = (struct raw3215_info *) tty->driver_data;
- raw3215_write(raw, buf, count);
- return count;
+ written = count;
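+ /* hand tabs and newlines to raw3215_putchar(); bulk-write the rest */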
+ while (count > 0) {
+ for (i = 0; i < count; i++)
+ if (buf[i] == '\t' || buf[i] == '\n')
+ break;
+ raw3215_write(raw, buf, i);
+ count -= i;
+ buf += i;
+ if (count > 0) {
+ raw3215_putchar(raw, *buf);
+ count--;
+ buf++;
+ }
+ }
+ return written;
}
/*
driver->subtype = SYSTEM_TYPE_TTY;
driver->init_termios = tty_std_termios;
driver->init_termios.c_iflag = IGNBRK | IGNPAR;
- driver->init_termios.c_oflag = ONLCR | XTABS;
+ driver->init_termios.c_oflag = ONLCR;
driver->init_termios.c_lflag = ISIG;
driver->flags = TTY_DRIVER_REAL_RAW;
tty_set_operations(driver, &tty3215_ops);
driver->subtype = SYSTEM_TYPE_TTY;
driver->init_termios = tty_std_termios;
driver->init_termios.c_iflag = IGNBRK | IGNPAR;
- driver->init_termios.c_oflag = ONLCR | XTABS;
+ driver->init_termios.c_oflag = ONLCR;
driver->init_termios.c_lflag = ISIG | ECHO;
driver->flags = TTY_DRIVER_REAL_RAW;
tty_set_operations(driver, &sclp_ops);
extern const struct attribute_group *qeth_osn_attr_groups[];
extern struct workqueue_struct *qeth_wq;
+int qeth_card_hw_is_reachable(struct qeth_card *);
const char *qeth_get_cardname_short(struct qeth_card *);
int qeth_realloc_buffer_pool(struct qeth_card *, int);
int qeth_core_load_discipline(struct qeth_card *, enum qeth_discipline_id);
struct workqueue_struct *qeth_wq;
EXPORT_SYMBOL_GPL(qeth_wq);
+int qeth_card_hw_is_reachable(struct qeth_card *card)
+{
+ return (card->state == CARD_STATE_SOFTSETUP) ||
+ (card->state == CARD_STATE_UP);
+}
+EXPORT_SYMBOL_GPL(qeth_card_hw_is_reachable);
+
static void qeth_close_dev_handler(struct work_struct *work)
{
struct qeth_card *card;
struct qeth_card *card = netdev->ml_priv;
enum qeth_link_types link_type;
struct carrier_info carrier_info;
+ int rc;
u32 speed;
if ((card->info.type == QETH_CARD_TYPE_IQD) || (card->info.guestlan))
/* Check if we can obtain more accurate information. */
/* If QUERY_CARD_INFO command is not supported or fails, */
/* just return the heuristics that was filled above. */
- if (qeth_query_card_info(card, &carrier_info) != 0)
+ if (!qeth_card_hw_is_reachable(card))
+ return -ENODEV;
+ rc = qeth_query_card_info(card, &carrier_info);
+ if (rc == -EOPNOTSUPP) /* for old hardware, return heuristic */
return 0;
+ if (rc) /* report error from the hardware operation */
+ return rc;
+ /* on success, fill in the information got from the hardware */
netdev_dbg(netdev,
"card info: card_type=0x%02x, port_mode=0x%04x, port_speed=0x%08x\n",
#include <linux/slab.h>
#include <asm/ebcdic.h>
+#include "qeth_core.h"
#include "qeth_l2.h"
#define QETH_DEVICE_ATTR(_id, _name, _mode, _show, _store) \
struct device_attribute dev_attr_##_id = __ATTR(_name, _mode, _show, _store)
-static int qeth_card_hw_is_reachable(struct qeth_card *card)
-{
- return (card->state == CARD_STATE_SOFTSETUP) ||
- (card->state == CARD_STATE_UP);
-}
-
static ssize_t qeth_bridge_port_role_state_show(struct device *dev,
struct device_attribute *attr, char *buf,
int show_state)
pool->slab_flags |= SLAB_CACHE_DMA;
pool->gfp_mask = __GFP_DMA;
}
+
+ if (hostt->cmd_size)
+ hostt->cmd_pool = pool;
+
return pool;
}
out_free_slab:
kmem_cache_destroy(pool->cmd_slab);
out_free_pool:
- if (hostt->cmd_size)
+ if (hostt->cmd_size) {
scsi_free_host_cmd_pool(pool);
+ hostt->cmd_pool = NULL;
+ }
goto out;
}
if (!--pool->users) {
kmem_cache_destroy(pool->cmd_slab);
kmem_cache_destroy(pool->sense_slab);
- if (hostt->cmd_size)
+ if (hostt->cmd_size) {
scsi_free_host_cmd_pool(pool);
+ hostt->cmd_pool = NULL;
+ }
}
mutex_unlock(&host_cmd_pool_mutex);
}
blk_requeue_request(q, req);
atomic_dec(&sdev->device_busy);
out_delay:
- if (atomic_read(&sdev->device_busy) && !scsi_device_blocked(sdev))
+ if (!atomic_read(&sdev->device_busy) && !scsi_device_blocked(sdev))
blk_delay_queue(q, SCSI_QUEUE_DELAY);
}
cmd->tag = req->tag;
- req->cmd = req->__cmd;
cmd->cmnd = req->cmd;
cmd->prot_op = SCSI_PROT_NORMAL;
#
# Makefile for the SuperH specific drivers.
#
-obj-$(CONFIG_SUPERH) += intc/
-obj-$(CONFIG_ARCH_SHMOBILE_LEGACY) += intc/
+obj-$(CONFIG_SH_INTC) += intc/
ifneq ($(CONFIG_COMMON_CLK),y)
obj-$(CONFIG_HAVE_CLK) += clk/
endif
config SH_INTC
- def_bool y
+ bool
select IRQ_DOMAIN
+if SH_INTC
+
comment "Interrupt controller options"
config INTC_USERIMASK
between system IRQs and the per-controller id tables.
If in doubt, say N.
+
+endif
spi_bitbang_stop(&hw->bitbang);
free_irq(hw->irq, hw);
iounmap((void __iomem *)hw->regs);
- release_mem_region(r->start, sizeof(psc_spi_t));
+ release_mem_region(hw->ioarea->start, sizeof(psc_spi_t));
if (hw->usedma) {
au1550_spi_dma_rxtmp_free(hw);
flags, dev_name(&spi->dev));
internal_cs = false;
}
- }
- if (retval) {
- dev_err(&spi->dev, "GPIO %d setup failed (%d)\n",
- spi->cs_gpio, retval);
- return retval;
- }
+ if (retval) {
+ dev_err(&spi->dev, "GPIO %d setup failed (%d)\n",
+ spi->cs_gpio, retval);
+ return retval;
+ }
- if (internal_cs)
- set_io_bits(dspi->base + SPIPC0, 1 << spi->chip_select);
+ if (internal_cs)
+ set_io_bits(dspi->base + SPIPC0, 1 << spi->chip_select);
+ }
if (spi->mode & SPI_READY)
set_io_bits(dspi->base + SPIPC0, SPIPC0_SPIENA_MASK);
if (ret)
return ret;
+ dws->regs = pcim_iomap_table(pdev)[pci_bar];
+
dws->bus_num = 0;
dws->num_cs = 4;
dws->irq = pdev->irq;
transfer_list);
if (!last_transfer->cs_change)
- spi_chip_sel(dws, dws->cur_msg->spi, 0);
+ spi_chip_sel(dws, msg->spi, 0);
spi_finalize_current_message(dws->master);
}
disable_fifo:
if (t->rx_buf != NULL)
chconf &= ~OMAP2_MCSPI_CHCONF_FFER;
- else
+
+ if (t->tx_buf != NULL)
chconf &= ~OMAP2_MCSPI_CHCONF_FFET;
mcspi_write_chconf0(spi, chconf);
{ "INT3430", 0 },
{ "INT3431", 0 },
{ "80860F0E", 0 },
+ { "8086228E", 0 },
{ },
};
MODULE_DEVICE_TABLE(acpi, pxa2xx_spi_acpi_match);
}
/* div doesn't support odd number */
- div = rs->max_freq / rs->speed;
+ div = max_t(u32, rs->max_freq / rs->speed, 1);
div = (div + 1) & 0xfffe;
spi_enable_chip(rs, 0);
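A worked example of the clamp (illustrative frequencies):

	/* max_freq = 100 MHz, speed = 40 MHz: 100/40 = 2, (2+1) & 0xfffe = 2
	 * speed > max_freq:                   ratio 0,   (0+1) & 0xfffe = 0 (invalid)
	 * with max_t(u32, ratio, 1):          ratio 1,   (1+1) & 0xfffe = 2  */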
rs->dma_tx.addr = (dma_addr_t)(mem->start + ROCKCHIP_SPI_TXDR);
rs->dma_rx.addr = (dma_addr_t)(mem->start + ROCKCHIP_SPI_RXDR);
rs->dma_tx.direction = DMA_MEM_TO_DEV;
- rs->dma_tx.direction = DMA_DEV_TO_MEM;
+ rs->dma_rx.direction = DMA_DEV_TO_MEM;
master->can_dma = rockchip_spi_can_dma;
master->dma_tx = rs->dma_tx.ch;
dma_cookie_t cookie;
int ret;
- if (tx) {
- desc_tx = dmaengine_prep_slave_sg(rspi->master->dma_tx,
- tx->sgl, tx->nents, DMA_TO_DEVICE,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (!desc_tx)
- goto no_dma;
-
- irq_mask |= SPCR_SPTIE;
- }
+ /* First prepare and submit the DMA request(s), as this may fail */
if (rx) {
desc_rx = dmaengine_prep_slave_sg(rspi->master->dma_rx,
rx->sgl, rx->nents, DMA_FROM_DEVICE,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (!desc_rx)
- goto no_dma;
+ if (!desc_rx) {
+ ret = -EAGAIN;
+ goto no_dma_rx;
+ }
+
+ desc_rx->callback = rspi_dma_complete;
+ desc_rx->callback_param = rspi;
+ cookie = dmaengine_submit(desc_rx);
+ if (dma_submit_error(cookie)) {
+ ret = cookie;
+ goto no_dma_rx;
+ }
irq_mask |= SPCR_SPRIE;
}
+ if (tx) {
+ desc_tx = dmaengine_prep_slave_sg(rspi->master->dma_tx,
+ tx->sgl, tx->nents, DMA_TO_DEVICE,
+ DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ if (!desc_tx) {
+ ret = -EAGAIN;
+ goto no_dma_tx;
+ }
+
+ if (rx) {
+ /* No callback */
+ desc_tx->callback = NULL;
+ } else {
+ desc_tx->callback = rspi_dma_complete;
+ desc_tx->callback_param = rspi;
+ }
+ cookie = dmaengine_submit(desc_tx);
+ if (dma_submit_error(cookie)) {
+ ret = cookie;
+ goto no_dma_tx;
+ }
+
+ irq_mask |= SPCR_SPTIE;
+ }
+
/*
* DMAC needs SPxIE, but if SPxIE is set, the IRQ routine will be
+ * called. So, this driver disables the IRQ during DMA transfers.
rspi_enable_irq(rspi, irq_mask);
rspi->dma_callbacked = 0;
- if (rx) {
- desc_rx->callback = rspi_dma_complete;
- desc_rx->callback_param = rspi;
- cookie = dmaengine_submit(desc_rx);
- if (dma_submit_error(cookie))
- return cookie;
+ /* Now start DMA */
+ if (rx)
dma_async_issue_pending(rspi->master->dma_rx);
- }
- if (tx) {
- if (rx) {
- /* No callback */
- desc_tx->callback = NULL;
- } else {
- desc_tx->callback = rspi_dma_complete;
- desc_tx->callback_param = rspi;
- }
- cookie = dmaengine_submit(desc_tx);
- if (dma_submit_error(cookie))
- return cookie;
+ if (tx)
dma_async_issue_pending(rspi->master->dma_tx);
- }
ret = wait_event_interruptible_timeout(rspi->wait,
rspi->dma_callbacked, HZ);
if (ret > 0 && rspi->dma_callbacked)
ret = 0;
- else if (!ret)
+ else if (!ret) {
+ dev_err(&rspi->master->dev, "DMA timeout\n");
ret = -ETIMEDOUT;
+ if (tx)
+ dmaengine_terminate_all(rspi->master->dma_tx);
+ if (rx)
+ dmaengine_terminate_all(rspi->master->dma_rx);
+ }
rspi_disable_irq(rspi, irq_mask);
return ret;
-no_dma:
- pr_warn_once("%s %s: DMA not available, falling back to PIO\n",
- dev_driver_string(&rspi->master->dev),
- dev_name(&rspi->master->dev));
- return -EAGAIN;
+no_dma_tx:
+ if (rx)
+ dmaengine_terminate_all(rspi->master->dma_rx);
+no_dma_rx:
+ if (ret == -EAGAIN) {
+ pr_warn_once("%s %s: DMA not available, falling back to PIO\n",
+ dev_driver_string(&rspi->master->dev),
+ dev_name(&rspi->master->dev));
+ }
+ return ret;
}
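This rework, like the sh-msiof one below, sticks to the canonical dmaengine slave sequence: prepare every descriptor, submit it, and only then issue, so that a preparation failure can still fall back to PIO before anything is in flight. A condensed sketch with placeholder names:

	struct dma_async_tx_descriptor *desc;
	dma_cookie_t cookie;

	desc = dmaengine_prep_slave_sg(chan, sgl, nents, dir,
				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!desc)
		return -EAGAIN;			/* nothing started yet; PIO fallback is safe */
	desc->callback = my_complete;		/* hypothetical completion handler */
	desc->callback_param = ctx;
	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		return cookie;
	dma_async_issue_pending(chan);		/* the transfer only starts here */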
static void rspi_receive_init(const struct rspi_data *rspi)
dma_cookie_t cookie;
int ret;
- if (tx) {
- ier_bits |= IER_TDREQE | IER_TDMAE;
- dma_sync_single_for_device(p->master->dma_tx->device->dev,
- p->tx_dma_addr, len, DMA_TO_DEVICE);
- desc_tx = dmaengine_prep_slave_single(p->master->dma_tx,
- p->tx_dma_addr, len, DMA_TO_DEVICE,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (!desc_tx)
- return -EAGAIN;
- }
-
+ /* First prepare and submit the DMA request(s), as this may fail */
if (rx) {
ier_bits |= IER_RDREQE | IER_RDMAE;
desc_rx = dmaengine_prep_slave_single(p->master->dma_rx,
p->rx_dma_addr, len, DMA_FROM_DEVICE,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (!desc_rx)
- return -EAGAIN;
- }
-
- /* 1 stage FIFO watermarks for DMA */
- sh_msiof_write(p, FCTR, FCTR_TFWM_1 | FCTR_RFWM_1);
-
- /* setup msiof transfer mode registers (32-bit words) */
- sh_msiof_spi_set_mode_regs(p, tx, rx, 32, len / 4);
-
- sh_msiof_write(p, IER, ier_bits);
-
- reinit_completion(&p->done);
+ if (!desc_rx) {
+ ret = -EAGAIN;
+ goto no_dma_rx;
+ }
- if (rx) {
desc_rx->callback = sh_msiof_dma_complete;
desc_rx->callback_param = p;
cookie = dmaengine_submit(desc_rx);
if (dma_submit_error(cookie)) {
ret = cookie;
- goto stop_ier;
+ goto no_dma_rx;
}
- dma_async_issue_pending(p->master->dma_rx);
}
if (tx) {
+ ier_bits |= IER_TDREQE | IER_TDMAE;
+ dma_sync_single_for_device(p->master->dma_tx->device->dev,
+ p->tx_dma_addr, len, DMA_TO_DEVICE);
+ desc_tx = dmaengine_prep_slave_single(p->master->dma_tx,
+ p->tx_dma_addr, len, DMA_TO_DEVICE,
+ DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ if (!desc_tx) {
+ ret = -EAGAIN;
+ goto no_dma_tx;
+ }
+
if (rx) {
/* No callback */
desc_tx->callback = NULL;
cookie = dmaengine_submit(desc_tx);
if (dma_submit_error(cookie)) {
ret = cookie;
- goto stop_rx;
+ goto no_dma_tx;
}
- dma_async_issue_pending(p->master->dma_tx);
}
+ /* 1 stage FIFO watermarks for DMA */
+ sh_msiof_write(p, FCTR, FCTR_TFWM_1 | FCTR_RFWM_1);
+
+ /* setup msiof transfer mode registers (32-bit words) */
+ sh_msiof_spi_set_mode_regs(p, tx, rx, 32, len / 4);
+
+ sh_msiof_write(p, IER, ier_bits);
+
+ reinit_completion(&p->done);
+
+ /* Now start DMA */
+ if (rx)
+ dma_async_issue_pending(p->master->dma_rx);
+ if (tx)
+ dma_async_issue_pending(p->master->dma_tx);
+
ret = sh_msiof_spi_start(p, rx);
if (ret) {
dev_err(&p->pdev->dev, "failed to start hardware\n");
- goto stop_tx;
+ goto stop_dma;
}
/* wait for tx fifo to be emptied / rx fifo to be filled */
stop_reset:
sh_msiof_reset_str(p);
sh_msiof_spi_stop(p, rx);
-stop_tx:
+stop_dma:
if (tx)
dmaengine_terminate_all(p->master->dma_tx);
-stop_rx:
+no_dma_tx:
if (rx)
dmaengine_terminate_all(p->master->dma_rx);
-stop_ier:
sh_msiof_write(p, IER, 0);
+no_dma_rx:
return ret;
}
/**
* spi_finalize_current_transfer - report completion of a transfer
+ * @master: the master reporting completion
*
* Called by SPI drivers using the core transfer_one_message()
* implementation to notify it that the current interrupt driven
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x432b) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x432c) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4350) },
+ { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4351) },
{ 0, },
};
MODULE_DEVICE_TABLE(pci, b43_pci_bridge_tbl);
source "drivers/staging/slicoss/Kconfig"
-source "drivers/staging/usbip/Kconfig"
-
source "drivers/staging/wlan-ng/Kconfig"
source "drivers/staging/comedi/Kconfig"
obj-y += media/
obj-$(CONFIG_ET131X) += et131x/
obj-$(CONFIG_SLICOSS) += slicoss/
-obj-$(CONFIG_USBIP_CORE) += usbip/
obj-$(CONFIG_PRISM2_USB) += wlan-ng/
obj-$(CONFIG_COMEDI) += comedi/
obj-$(CONFIG_FB_OLPC_DCON) += olpc_dcon/
if (unlikely(ret)) {
pr_err("failed to register misc device for log '%s'!\n",
log->misc.name);
- goto out_free_log;
+ goto out_free_misc_name;
}
pr_info("created %luK log '%s'\n",
return 0;
+out_free_misc_name:
+ kfree(log->misc.name);
+
out_free_log:
kfree(log);
fence->num_fences = 1;
atomic_set(&fence->status, 1);
- fence_get(&pt->base);
fence->cbs[0].sync_pt = &pt->base;
fence->cbs[0].fence = fence;
if (fence_add_callback(&pt->base, &fence->cbs[0].cb,
* @addr: the address of the PHY to write to
* @reg: the register to write
* @value: 16-bit value to write
*/
-static int et131x_mii_write(struct et131x_adapter *adapter, u8 reg, u16 value)
+static int et131x_mii_write(struct et131x_adapter *adapter, u8 addr, u8 reg,
+ u16 value)
{
struct mac_regs __iomem *mac = &adapter->regs->mac;
- struct phy_device *phydev = adapter->phydev;
int status = 0;
- u8 addr;
u32 delay = 0;
u32 mii_addr;
u32 mii_cmd;
u32 mii_indicator;
- if (!phydev)
- return -EIO;
-
- addr = phydev->addr;
-
/* Save a local copy of the registers we are dealing with so we can
* set them back
*/
struct net_device *netdev = bus->priv;
struct et131x_adapter *adapter = netdev_priv(netdev);
- return et131x_mii_write(adapter, reg, value);
-}
-
-static int et131x_mdio_reset(struct mii_bus *bus)
-{
- struct net_device *netdev = bus->priv;
- struct et131x_adapter *adapter = netdev_priv(netdev);
-
- et131x_mii_write(adapter, MII_BMCR, BMCR_RESET);
-
- return 0;
+ return et131x_mii_write(adapter, phy_addr, reg, value);
}
/* et1310_phy_power_switch - PHY power control
static void et1310_phy_power_switch(struct et131x_adapter *adapter, bool down)
{
u16 data;
+ struct phy_device *phydev = adapter->phydev;
et131x_mii_read(adapter, MII_BMCR, &data);
data &= ~BMCR_PDOWN;
if (down)
data |= BMCR_PDOWN;
- et131x_mii_write(adapter, MII_BMCR, data);
+ et131x_mii_write(adapter, phydev->addr, MII_BMCR, data);
}
/* et131x_xcvr_init - Init the phy if we are setting it into force mode */
static void et131x_xcvr_init(struct et131x_adapter *adapter)
{
u16 lcr2;
+ struct phy_device *phydev = adapter->phydev;
/* Set the LED behavior such that LED 1 indicates speed (off =
* 10Mbits, blink = 100Mbits, on = 1000Mbits) and LED 2 indicates
else
lcr2 |= (LED_VAL_LINKON << LED_TXRX_SHIFT);
- et131x_mii_write(adapter, PHY_LED_2, lcr2);
+ et131x_mii_write(adapter, phydev->addr, PHY_LED_2, lcr2);
}
}
et131x_mii_read(adapter, PHY_MPHY_CONTROL_REG,
&register18);
- et131x_mii_write(adapter, PHY_MPHY_CONTROL_REG,
- register18 | 0x4);
- et131x_mii_write(adapter, PHY_INDEX_REG,
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_MPHY_CONTROL_REG, register18 | 0x4);
+ et131x_mii_write(adapter, phydev->addr, PHY_INDEX_REG,
register18 | 0x8402);
- et131x_mii_write(adapter, PHY_DATA_REG,
+ et131x_mii_write(adapter, phydev->addr, PHY_DATA_REG,
register18 | 511);
- et131x_mii_write(adapter, PHY_MPHY_CONTROL_REG,
- register18);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_MPHY_CONTROL_REG, register18);
}
et1310_config_flow_control(adapter);
et131x_mii_read(adapter, PHY_CONFIG, &reg);
reg &= ~ET_PHY_CONFIG_TX_FIFO_DEPTH;
reg |= ET_PHY_CONFIG_FIFO_DEPTH_32;
- et131x_mii_write(adapter, PHY_CONFIG, reg);
+ et131x_mii_write(adapter, phydev->addr, PHY_CONFIG,
+ reg);
}
et131x_set_rx_dma_timer(adapter);
et131x_mii_read(adapter, PHY_MPHY_CONTROL_REG,
&register18);
- et131x_mii_write(adapter, PHY_MPHY_CONTROL_REG,
- register18 | 0x4);
- et131x_mii_write(adapter, PHY_INDEX_REG,
- register18 | 0x8402);
- et131x_mii_write(adapter, PHY_DATA_REG,
- register18 | 511);
- et131x_mii_write(adapter, PHY_MPHY_CONTROL_REG,
- register18);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_MPHY_CONTROL_REG, register18 | 0x4);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_INDEX_REG, register18 | 0x8402);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_DATA_REG, register18 | 511);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_MPHY_CONTROL_REG, register18);
}
/* Free the packets being actively sent & stopped */
/* Copy address into the net_device struct */
memcpy(netdev->dev_addr, adapter->addr, ETH_ALEN);
- /* Init variable for counting how long we do not have link status */
- adapter->boot_coma = 0;
- et1310_disable_phy_coma(adapter);
-
rc = -ENOMEM;
/* Setup the mii_bus struct */
adapter->mii_bus->priv = netdev;
adapter->mii_bus->read = et131x_mdio_read;
adapter->mii_bus->write = et131x_mdio_write;
- adapter->mii_bus->reset = et131x_mdio_reset;
adapter->mii_bus->irq = kmalloc_array(PHY_MAX_ADDR, sizeof(int),
GFP_KERNEL);
if (!adapter->mii_bus->irq)
/* Setup et1310 as per the documentation */
et131x_adapter_setup(adapter);
+ /* Init variable for counting how long we do not have link status */
+ adapter->boot_coma = 0;
+ et1310_disable_phy_coma(adapter);
+
/* We can enable interrupts now
*
* NOTE - Because registration of interrupt handler is done in the
.unload = imx_drm_driver_unload,
.lastclose = imx_drm_driver_lastclose,
.preclose = imx_drm_driver_preclose,
+ .set_busid = drm_platform_set_busid,
.gem_free_object = drm_gem_cma_free_object,
.gem_vm_ops = &drm_gem_cma_vm_ops,
.dumb_create = drm_gem_cma_dumb_create,
for (i = 0; i < 2; i++) {
struct imx_ldb_channel *channel = &imx_ldb->channel[i];
+ if (!channel->connector.funcs)
+ continue;
+
channel->connector.funcs->destroy(&channel->connector);
channel->encoder.funcs->destroy(&channel->encoder);
}
int ipu_plane_set_base(struct ipu_plane *ipu_plane, struct drm_framebuffer *fb,
int x, int y)
{
- struct ipu_ch_param __iomem *cpmem;
struct drm_gem_cma_object *cma_obj;
unsigned long eba;
dev_dbg(ipu_plane->base.dev->dev, "phys = %pad, x = %d, y = %d",
&cma_obj->paddr, x, y);
- cpmem = ipu_get_cpmem(ipu_plane->ipu_ch);
- ipu_cpmem_set_stride(cpmem, fb->pitches[0]);
+ ipu_cpmem_set_stride(ipu_plane->ipu_ch, fb->pitches[0]);
eba = cma_obj->paddr + fb->offsets[0] +
fb->pitches[0] * y + (fb->bits_per_pixel >> 3) * x;
- ipu_cpmem_set_buffer(cpmem, 0, eba);
- ipu_cpmem_set_buffer(cpmem, 1, eba);
+ ipu_cpmem_set_buffer(ipu_plane->ipu_ch, 0, eba);
+ ipu_cpmem_set_buffer(ipu_plane->ipu_ch, 1, eba);
/* cache offsets for subsequent pageflips */
ipu_plane->x = x;
uint32_t src_x, uint32_t src_y,
uint32_t src_w, uint32_t src_h)
{
- struct ipu_ch_param __iomem *cpmem;
struct device *dev = ipu_plane->base.dev->dev;
int ret;
return ret;
}
- cpmem = ipu_get_cpmem(ipu_plane->ipu_ch);
- ipu_ch_param_zero(cpmem);
- ipu_cpmem_set_resolution(cpmem, src_w, src_h);
- ret = ipu_cpmem_set_fmt(cpmem, fb->pixel_format);
+ ipu_cpmem_zero(ipu_plane->ipu_ch);
+ ipu_cpmem_set_resolution(ipu_plane->ipu_ch, src_w, src_h);
+ ret = ipu_cpmem_set_fmt(ipu_plane->ipu_ch, fb->pixel_format);
if (ret < 0) {
dev_err(dev, "unsupported pixel format 0x%08x\n",
fb->pixel_format);
ipu_idmac_put(ipu_plane->ipu_ch);
ipu_dmfc_put(ipu_plane->dmfc);
- ipu_dp_put(ipu_plane->dp);
+ if (ipu_plane->dp)
+ ipu_dp_put(ipu_plane->dp);
}
}
return -ENOMEM;
strncpy(sched->ws_name, name, CFS_WS_NAME_LEN);
+ sched->ws_name[CFS_WS_NAME_LEN - 1] = '\0';
sched->ws_cptab = cptab;
sched->ws_cpt = cpt;
if (sb->s_root == NULL) {
CERROR("%s: can't make root dentry\n",
ll_get_fsname(sb, NULL, 0));
- GOTO(out_root, err = -ENOMEM);
+ GOTO(out_lock_cn_cb, err = -ENOMEM);
}
sbi->ll_sdev_orig = sb->s_dev;
*/
#define DEBUG_SUBSYSTEM S_CLASS
-# include <asm/atomic.h>
+# include <linux/atomic.h>
#include "../include/obd_support.h"
#include "../include/obd_class.h"
{USB_DEVICE(USB_VENDER_ID_REALTEK, 0x0179)}, /* 8188ETV */
/*=== Customer ID ===*/
/****** 8188EUS ********/
+ {USB_DEVICE(0x056e, 0x4008)}, /* Elecom WDC-150SU2M */
{USB_DEVICE(0x07b8, 0x8179)}, /* Abocom - Abocom */
{USB_DEVICE(0x2001, 0x330F)}, /* DLink DWA-125 REV D1 */
{USB_DEVICE(0x2001, 0x3310)}, /* Dlink DWA-123 REV D1 */
+ {USB_DEVICE(0x0df6, 0x0076)}, /* Sitecom N150 v2 */
{} /* Terminating entry */
};
/* Activate hops. */
for (i = path->path_length - 1; i >= 0; i--) {
- struct tb_regs_hop hop;
+ struct tb_regs_hop hop = { 0 };
+
+ /*
+ * We do (currently) not tear down paths set up by the firmware.
+ * If a firmware device is unplugged and plugged in again then
+ * it can happen that we reuse some of the hops from the (now
+ * defunct) firmware path. This causes the hotplug operation to
+ * fail (the pci device does not show up). Clearing the hop
+ * before overwriting it fixes the problem.
+ *
+ * Should be removed once we discover and tear down firmware
+ * paths.
+ */
+ res = tb_port_write(path->hops[i].in_port, &hop, TB_CFG_HOPS,
+ 2 * path->hops[i].in_hop_index, 2);
+ if (res) {
+ __tb_path_deactivate_hops(path, i);
+ __tb_path_deallocate_nfc(path, 0);
+ goto err;
+ }
/* dword 0 */
hop.next_hop = path->hops[i].next_hop_index;
{ "INT3434", 0 },
{ "INT3435", 0 },
{ "80860F0A", 0 },
+ { "8086228A", 0 },
{ },
};
MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);
UART_PUT_IER(port, ier);
}
+/*
+ * Disable modem status interrupts
+ */
+static void atmel_disable_ms(struct uart_port *port)
+{
+ struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+ uint32_t idr = 0;
+
+ /*
+ * Interrupt should not be disabled twice
+ */
+ if (!atmel_port->ms_irq_enabled)
+ return;
+
+ atmel_port->ms_irq_enabled = false;
+
+ if (atmel_port->gpio_irq[UART_GPIO_CTS] >= 0)
+ disable_irq(atmel_port->gpio_irq[UART_GPIO_CTS]);
+ else
+ idr |= ATMEL_US_CTSIC;
+
+ if (atmel_port->gpio_irq[UART_GPIO_DSR] >= 0)
+ disable_irq(atmel_port->gpio_irq[UART_GPIO_DSR]);
+ else
+ idr |= ATMEL_US_DSRIC;
+
+ if (atmel_port->gpio_irq[UART_GPIO_RI] >= 0)
+ disable_irq(atmel_port->gpio_irq[UART_GPIO_RI]);
+ else
+ idr |= ATMEL_US_RIIC;
+
+ if (atmel_port->gpio_irq[UART_GPIO_DCD] >= 0)
+ disable_irq(atmel_port->gpio_irq[UART_GPIO_DCD]);
+ else
+ idr |= ATMEL_US_DCDIC;
+
+ UART_PUT_IDR(port, idr);
+}
+
/*
* Control the transmission of a break signal
*/
/* CTS flow-control and modem-status interrupts */
if (UART_ENABLE_MS(port, termios->c_cflag))
- port->ops->enable_ms(port);
+ atmel_enable_ms(port);
+ else
+ atmel_disable_ms(port);
spin_unlock_irqrestore(&port->lock, flags);
}
{
unsigned int status;
- status = cdns_uart_readl(CDNS_UART_ISR_OFFSET) & CDNS_UART_IXR_TXEMPTY;
+ status = cdns_uart_readl(CDNS_UART_SR_OFFSET) & CDNS_UART_SR_TXEMPTY;
return status ? TIOCSER_TEMT : 0;
}
source "drivers/usb/image/Kconfig"
+source "drivers/usb/usbip/Kconfig"
+
endif
source "drivers/usb/musb/Kconfig"
obj-$(CONFIG_USB_GADGET) += gadget/
obj-$(CONFIG_USB_COMMON) += common/
+
+obj-$(CONFIG_USBIP_CORE) += usbip/
static void ci_hdrc_msm_notify_event(struct ci_hdrc *ci, unsigned event)
{
struct device *dev = ci->gadget.dev.parent;
- int val;
switch (event) {
case CI_HDRC_CONTROLLER_RESET_EVENT:
dev_dbg(dev, "CI_HDRC_CONTROLLER_RESET_EVENT received\n");
writel(0, USB_AHBBURST);
writel(0, USB_AHBMODE);
+ usb_phy_init(ci->transceiver);
break;
case CI_HDRC_CONTROLLER_STOPPED_EVENT:
dev_dbg(dev, "CI_HDRC_CONTROLLER_STOPPED_EVENT received\n");
* Put the transceiver in non-driving mode. Otherwise host
* may not detect soft-disconnection.
*/
- val = usb_phy_io_read(ci->transceiver, ULPI_FUNC_CTRL);
- val &= ~ULPI_FUNC_CTRL_OPMODE_MASK;
- val |= ULPI_FUNC_CTRL_OPMODE_NONDRIVING;
- usb_phy_io_write(ci->transceiver, val, ULPI_FUNC_CTRL);
+ usb_phy_notify_disconnect(ci->transceiver, USB_SPEED_UNKNOWN);
break;
default:
dev_dbg(dev, "unknown ci_hdrc event\n");
* - Change autosuspend delay of hub can avoid unnecessary auto
* suspend timer for hub, also may decrease power consumption
* of USB bus.
+ *
+ * - If user has indicated to prevent autosuspend by passing
+ * usbcore.autosuspend = -1 then keep autosuspend disabled.
*/
- pm_runtime_set_autosuspend_delay(&hdev->dev, 0);
+#ifdef CONFIG_PM_RUNTIME
+ if (hdev->dev.power.autosuspend_delay >= 0)
+ pm_runtime_set_autosuspend_delay(&hdev->dev, 0);
+#endif
/*
* Hubs have proper suspend/resume support, except for root hubs
{
struct usb_port *port_dev = NULL;
struct usb_device *udev = *pdev;
- struct usb_hub *hub;
- int port1;
+ struct usb_hub *hub = NULL;
+ int port1 = 1;
/* mark the device as inactive, so any further urb submissions for
* this device (and any of its children) will fail immediately.
if (status != -ENODEV &&
port1 != unreliable_port &&
printk_ratelimit())
- dev_err(&udev->dev, "connect-debounce failed, port %d disabled\n",
- port1);
-
+ dev_err(&port_dev->dev, "connect-debounce failed\n");
portstatus &= ~USB_PORT_STAT_CONNECTION;
unreliable_port = port1;
} else {
hub = list_entry(tmp, struct usb_hub, event_list);
kref_get(&hub->kref);
+ hdev = hub->hdev;
+ usb_get_dev(hdev);
spin_unlock_irq(&hub_event_lock);
- hdev = hub->hdev;
hub_dev = hub->intfdev;
intf = to_usb_interface(hub_dev);
dev_dbg(hub_dev, "state %d ports %d chg %04x evt %04x\n",
usb_autopm_put_interface(intf);
loop_disconnected:
usb_unlock_device(hdev);
+ usb_put_dev(hdev);
kref_put(&hub->kref, hub_release);
} /* end while (1) */
dev_err(hsotg->dev,
"%s: timeout flushing fifo (GRSTCTL=%08x)\n",
__func__, val);
+ break;
}
udelay(1);
static void s3c_hsotg_irq_enumdone(struct s3c_hsotg *hsotg)
{
u32 dsts = readl(hsotg->regs + DSTS);
- int ep0_mps = 0, ep_mps;
+ int ep0_mps = 0, ep_mps = 8;
/*
* This should signal the finish of the enumeration phase
dev_dbg(hsotg->dev, "pdev 0x%p\n", pdev);
- if (hsotg->phy) {
- phy_init(hsotg->phy);
- phy_power_on(hsotg->phy);
- } else if (hsotg->uphy)
+ if (hsotg->uphy)
usb_phy_init(hsotg->uphy);
- else if (hsotg->plat->phy_init)
+ else if (hsotg->plat && hsotg->plat->phy_init)
hsotg->plat->phy_init(pdev, hsotg->plat->phy_type);
+ else {
+ phy_init(hsotg->phy);
+ phy_power_on(hsotg->phy);
+ }
}
/**
{
struct platform_device *pdev = to_platform_device(hsotg->dev);
- if (hsotg->phy) {
- phy_power_off(hsotg->phy);
- phy_exit(hsotg->phy);
- } else if (hsotg->uphy)
+ if (hsotg->uphy)
usb_phy_shutdown(hsotg->uphy);
- else if (hsotg->plat->phy_exit)
+ else if (hsotg->plat && hsotg->plat->phy_exit)
hsotg->plat->phy_exit(pdev, hsotg->plat->phy_type);
+ else {
+ phy_power_off(hsotg->phy);
+ phy_exit(hsotg->phy);
+ }
}
/**
return -ENODEV;
/* all endpoints should be shutdown */
- for (ep = 0; ep < hsotg->num_of_eps; ep++)
+ for (ep = 1; ep < hsotg->num_of_eps; ep++)
s3c_hsotg_ep_disable(&hsotg->eps[ep].ep);
spin_lock_irqsave(&hsotg->lock, flags);
- s3c_hsotg_phy_disable(hsotg);
-
if (!driver)
hsotg->driver = NULL;
s3c_hsotg_phy_enable(hsotg);
s3c_hsotg_core_init(hsotg);
} else {
- s3c_hsotg_disconnect(hsotg);
s3c_hsotg_phy_disable(hsotg);
}
hsotg->irq = ret;
- ret = devm_request_irq(&pdev->dev, hsotg->irq, s3c_hsotg_irq, 0,
- dev_name(dev), hsotg);
- if (ret < 0) {
- dev_err(dev, "cannot claim IRQ\n");
- goto err_clk;
- }
-
dev_info(dev, "regs %p, irq %d\n", hsotg->regs, hsotg->irq);
hsotg->gadget.max_speed = USB_SPEED_HIGH;
if (hsotg->phy && (phy_get_bus_width(phy) == 8))
hsotg->phyif = GUSBCFG_PHYIF8;
- if (hsotg->phy)
- phy_init(hsotg->phy);
-
/* usb phy enable */
s3c_hsotg_phy_enable(hsotg);
s3c_hsotg_init(hsotg);
s3c_hsotg_hw_cfg(hsotg);
+ ret = devm_request_irq(&pdev->dev, hsotg->irq, s3c_hsotg_irq, 0,
+ dev_name(dev), hsotg);
+ if (ret < 0) {
+ s3c_hsotg_phy_disable(hsotg);
+ clk_disable_unprepare(hsotg->clk);
+ regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies),
+ hsotg->supplies);
+ dev_err(dev, "cannot claim IRQ\n");
+ goto err_clk;
+ }
+
/* hsotg->num_of_eps holds number of EPs other than ep0 */
if (hsotg->num_of_eps == 0) {
usb_gadget_unregister_driver(hsotg->driver);
}
- s3c_hsotg_phy_disable(hsotg);
- if (hsotg->phy)
- phy_exit(hsotg->phy);
clk_disable_unprepare(hsotg->clk);
return 0;
{
struct dwc3 *dwc = platform_get_drvdata(pdev);
+ dwc3_debugfs_exit(dwc);
+ dwc3_core_exit_mode(dwc);
+ dwc3_event_buffers_cleanup(dwc);
+ dwc3_free_event_buffers(dwc);
+
usb_phy_set_suspend(dwc->usb2_phy, 1);
usb_phy_set_suspend(dwc->usb3_phy, 1);
phy_power_off(dwc->usb2_generic_phy);
phy_power_off(dwc->usb3_generic_phy);
+ dwc3_core_exit(dwc);
+
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
- dwc3_debugfs_exit(dwc);
- dwc3_core_exit_mode(dwc);
- dwc3_event_buffers_cleanup(dwc);
- dwc3_free_event_buffers(dwc);
- dwc3_core_exit(dwc);
-
return 0;
}
static int dwc3_omap_extcon_register(struct dwc3_omap *omap)
{
- u32 ret;
+ int ret;
struct device_node *node = omap->dev->of_node;
struct extcon_dev *edev;
if (omap->extcon_id_dev.edev)
extcon_unregister_interest(&omap->extcon_id_dev);
dwc3_omap_disable_irqs(omap);
+ device_for_each_child(&pdev->dev, NULL, dwc3_omap_remove_core);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
- device_for_each_child(&pdev->dev, NULL, dwc3_omap_remove_core);
return 0;
}
dep->stream_capable = true;
}
- if (usb_endpoint_xfer_isoc(desc))
+ if (!usb_endpoint_xfer_control(desc))
params.param1 |= DWC3_DEPCFG_XFER_IN_PROGRESS_EN;
/*
int ret;
+ spin_lock_irqsave(&dwc->lock, flags);
if (!dep->endpoint.desc) {
dev_dbg(dwc->dev, "trying to queue request %p to disabled %s\n",
request, ep->name);
+ spin_unlock_irqrestore(&dwc->lock, flags);
return -ESHUTDOWN;
}
dev_vdbg(dwc->dev, "queuing request %p to %s length %d\n",
request, ep->name, request->length);
- spin_lock_irqsave(&dwc->lock, flags);
ret = __dwc3_gadget_ep_queue(dep, req);
spin_unlock_irqrestore(&dwc->lock, flags);
dwc3_endpoint_transfer_complete(dwc, dep, event);
break;
case DWC3_DEPEVT_XFERINPROGRESS:
- if (!usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
- dev_dbg(dwc->dev, "%s is not an Isochronous endpoint\n",
- dep->name);
- return;
- }
-
dwc3_endpoint_transfer_complete(dwc, dep, event);
break;
case DWC3_DEPEVT_XFERNOTREADY:
#
subdir-ccflags-$(CONFIG_USB_GADGET_DEBUG) := -DDEBUG
subdir-ccflags-$(CONFIG_USB_GADGET_VERBOSE) += -DVERBOSE_DEBUG
-ccflags-y += -I$(PWD)/drivers/usb/gadget/udc
+ccflags-y += -Idrivers/usb/gadget/udc
obj-$(CONFIG_USB_LIBCOMPOSITE) += libcomposite.o
libcomposite-y := usbstring.o config.o epautoconf.o
# USB peripheral controller drivers
#
-ccflags-y := -I$(PWD)/drivers/usb/gadget/
-ccflags-y += -I$(PWD)/drivers/usb/gadget/udc/
+ccflags-y := -Idrivers/usb/gadget/
+ccflags-y += -Idrivers/usb/gadget/udc/
# USB Functions
usb_f_acm-y := f_acm.o
struct usb_request *req;
};
+struct ffs_desc_helper {
+ struct ffs_data *ffs;
+ unsigned interfaces_count;
+ unsigned eps_count;
+};
+
static int __must_check ffs_epfiles_create(struct ffs_data *ffs);
static void ffs_epfiles_destroy(struct ffs_epfile *epfiles, unsigned count);
u8 *valuep, struct usb_descriptor_header *desc,
void *priv)
{
- struct ffs_data *ffs = priv;
+ struct ffs_desc_helper *helper = priv;
+ struct usb_endpoint_descriptor *d;
ENTER();
* encountered interface "n" then there are at least
* "n+1" interfaces.
*/
- if (*valuep >= ffs->interfaces_count)
- ffs->interfaces_count = *valuep + 1;
+ if (*valuep >= helper->interfaces_count)
+ helper->interfaces_count = *valuep + 1;
break;
case FFS_STRING:
* Strings are indexed from 1 (0 is magic ;) reserved
* for languages list or some such)
*/
- if (*valuep > ffs->strings_count)
- ffs->strings_count = *valuep;
+ if (*valuep > helper->ffs->strings_count)
+ helper->ffs->strings_count = *valuep;
break;
case FFS_ENDPOINT:
- /* Endpoints are indexed from 1 as well. */
- if ((*valuep & USB_ENDPOINT_NUMBER_MASK) > ffs->eps_count)
- ffs->eps_count = (*valuep & USB_ENDPOINT_NUMBER_MASK);
+ d = (void *)desc;
+ helper->eps_count++;
+ if (helper->eps_count >= 15)
+ return -EINVAL;
+ /* Check if descriptors for any speed were already parsed */
+ if (!helper->ffs->eps_count && !helper->ffs->interfaces_count)
+ helper->ffs->eps_addrmap[helper->eps_count] =
+ d->bEndpointAddress;
+ else if (helper->ffs->eps_addrmap[helper->eps_count] !=
+ d->bEndpointAddress)
+ return -EINVAL;
break;
}
char *data = _data, *raw_descs;
unsigned os_descs_count = 0, counts[3], flags;
int ret = -EINVAL, i;
+ struct ffs_desc_helper helper;
ENTER();
/* Read descriptors */
raw_descs = data;
+ helper.ffs = ffs;
for (i = 0; i < 3; ++i) {
if (!counts[i])
continue;
+ helper.interfaces_count = 0;
+ helper.eps_count = 0;
ret = ffs_do_descs(counts[i], data, len,
- __ffs_data_do_entity, ffs);
+ __ffs_data_do_entity, &helper);
if (ret < 0)
goto error;
+ if (!ffs->eps_count && !ffs->interfaces_count) {
+ ffs->eps_count = helper.eps_count;
+ ffs->interfaces_count = helper.interfaces_count;
+ } else {
+ if (ffs->eps_count != helper.eps_count) {
+ ret = -EINVAL;
+ goto error;
+ }
+ if (ffs->interfaces_count != helper.interfaces_count) {
+ ret = -EINVAL;
+ goto error;
+ }
+ }
data += ret;
len -= ret;
}
spin_unlock_irqrestore(&ffs->ev.waitq.lock, flags);
}
-
/* Bind/unbind USB function hooks *******************************************/
+static int ffs_ep_addr2idx(struct ffs_data *ffs, u8 endpoint_address)
+{
+ int i;
+
+ for (i = 1; i < ARRAY_SIZE(ffs->eps_addrmap); ++i)
+ if (ffs->eps_addrmap[i] == endpoint_address)
+ return i;
+ return -ENOENT;
+}
+
static int __ffs_func_bind_do_descs(enum ffs_entity_type type, u8 *valuep,
struct usb_descriptor_header *desc,
void *priv)
if (!desc || desc->bDescriptorType != USB_DT_ENDPOINT)
return 0;
- idx = (ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK) - 1;
+ idx = ffs_ep_addr2idx(func->ffs, ds->bEndpointAddress) - 1;
+ if (idx < 0)
+ return idx;
+
ffs_ep = func->eps + idx;
if (unlikely(ffs_ep->descs[ep_desc_id])) {
DBG(dev, "%s\n", __func__);
- netif_tx_lock(dev->net);
netif_stop_queue(dev->net);
- netif_tx_unlock(dev->net);
-
netif_carrier_off(dev->net);
/* disable endpoints, forcing (synchronous) completion
void *ms_os_descs_ext_prop_name_avail;
void *ms_os_descs_ext_prop_data_avail;
+ u8 eps_addrmap[15];
+
unsigned short strings_count;
unsigned short interfaces_count;
unsigned short eps_count;
printk(KERN_INFO "Failed to queue request (%d).\n", ret);
usb_ep_set_halt(ep);
spin_unlock_irqrestore(&video->queue.irqlock, flags);
+ uvc_queue_cancel(queue, 0);
goto requeue;
}
spin_unlock_irqrestore(&video->queue.irqlock, flags);
static int
uvc_video_pump(struct uvc_video *video)
{
+ struct uvc_video_queue *queue = &video->queue;
struct usb_request *req;
struct uvc_buffer *buf;
unsigned long flags;
printk(KERN_INFO "Failed to queue request (%d)\n", ret);
usb_ep_set_halt(video->ep);
spin_unlock_irqrestore(&video->queue.irqlock, flags);
+ uvc_queue_cancel(queue, 0);
break;
}
spin_unlock_irqrestore(&video->queue.irqlock, flags);
# USB gadget drivers
#
-ccflags-y := -I$(PWD)/drivers/usb/gadget/
-ccflags-y += -I$(PWD)/drivers/usb/gadget/udc/
-ccflags-y += -I$(PWD)/drivers/usb/gadget/function/
+ccflags-y := -Idrivers/usb/gadget/
+ccflags-y += -Idrivers/usb/gadget/udc/
+ccflags-y += -Idrivers/usb/gadget/function/
g_zero-y := zero.o
g_audio-y := audio.o
{
#ifdef CONFIG_USB_G_DBGP_SERIAL
kfree(dbgp.serial);
+ dbgp.serial = NULL;
#endif
if (dbgp.req) {
kfree(dbgp.req->buf);
usb_ep_free_request(gadget->ep0, dbgp.req);
+ dbgp.req = NULL;
}
gadget->ep0->driver_data = NULL;
value = -ENOMEM;
kbuf = memdup_user(buf, len);
- if (!kbuf) {
+ if (IS_ERR(kbuf)) {
value = PTR_ERR(kbuf);
goto free1;
}
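memdup_user() never returns NULL; on failure it returns an ERR_PTR() (-EFAULT or -ENOMEM), so IS_ERR()/PTR_ERR() is the correct test, as the fix applies. The same ERR_PTR() convention is behind the devm_of_phy_provider_register() and devm_ioremap_resource() fixes elsewhere in this series. A minimal sketch:

	kbuf = memdup_user(buf, len);	/* copy len bytes from user space */
	if (IS_ERR(kbuf))
		return PTR_ERR(kbuf);	/* never NULL on failure */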
gadget drivers to also be dynamically linked.
config USB_EG20T
- tristate "Intel EG20T PCH/LAPIS Semiconductor IOH(ML7213/ML7831) UDC"
+ tristate "Intel QUARK X1000/EG20T PCH/LAPIS Semiconductor IOH(ML7213/ML7831) UDC"
depends on PCI
help
This is a USB device driver for EG20T PCH.
ML7213/ML7831 is companion chip for Intel Atom E6xx series.
ML7213/ML7831 is completely compatible for Intel EG20T PCH.
This driver can be used with Intel's Quark X1000 SoC platform.
#
# LAST -- dummy/emulated controller
#
if (dma_status) {
int i;
- for (i = 1; i < USBA_NR_DMAS; i++)
+ for (i = 1; i <= USBA_NR_DMAS; i++)
if (dma_status & (1 << i))
usba_dma_irq(udc, &udc->usba_ep[i]);
}
/* initialize udc */
fusb300 = kzalloc(sizeof(struct fusb300), GFP_KERNEL);
- if (fusb300 == NULL)
+ if (fusb300 == NULL) {
+ ret = -ENOMEM;
goto clean_up;
+ }
for (i = 0; i < FUSB300_MAX_NUM_EP; i++) {
_ep[i] = kzalloc(sizeof(struct fusb300_ep), GFP_KERNEL);
- if (_ep[i] == NULL)
+ if (_ep[i] == NULL) {
+ ret = -ENOMEM;
goto clean_up;
+ }
fusb300->ep[i] = _ep[i];
}
#ifndef __FUSB300_UDC_H__
-#define __FUSB300_UDC_H_
+#define __FUSB300_UDC_H__
#include <linux/kernel.h>
if (stat & tmp) {
writel(tmp, &dev->regs->irqstat1);
if ((((stat & BIT(ROOT_PORT_RESET_INTERRUPT)) &&
- (readl(&dev->usb->usbstat) & mask)) ||
+ ((readl(&dev->usb->usbstat) & mask) == 0)) ||
((readl(&dev->usb->usbctl) &
BIT(VBUS_PIN)) == 0)) &&
(dev->gadget.speed != USB_SPEED_UNKNOWN)) {
* @setup_data: Received setup data
* @phys_addr: of device memory
* @base_addr: for mapped device memory
+ * @bar: Indicates which PCI BAR for USB regs
* @irq: IRQ line for the device
* @cfg_data: current cfg, intf, and alt in use
* @vbus_gpio: GPIO information for detecting VBUS
struct usb_ctrlrequest setup_data;
unsigned long phys_addr;
void __iomem *base_addr;
+ unsigned bar;
unsigned irq;
struct pch_udc_cfg_data cfg_data;
struct pch_vbus_gpio_data vbus_gpio;
};
#define to_pch_udc(g) (container_of((g), struct pch_udc_dev, gadget))
+#define PCH_UDC_PCI_BAR_QUARK_X1000 0
#define PCH_UDC_PCI_BAR 1
#define PCI_DEVICE_ID_INTEL_EG20T_UDC 0x8808
+#define PCI_DEVICE_ID_INTEL_QUARK_X1000_UDC 0x0939
#define PCI_VENDOR_ID_ROHM 0x10DB
#define PCI_DEVICE_ID_ML7213_IOH_UDC 0x801D
#define PCI_DEVICE_ID_ML7831_IOH_UDC 0x8808
iounmap(dev->base_addr);
if (dev->mem_region)
release_mem_region(dev->phys_addr,
- pci_resource_len(pdev, PCH_UDC_PCI_BAR));
+ pci_resource_len(pdev, dev->bar));
if (dev->active)
pci_disable_device(pdev);
kfree(dev);
dev->active = 1;
pci_set_drvdata(pdev, dev);
+ /* Determine BAR based on PCI ID */
+ if (id->device == PCI_DEVICE_ID_INTEL_QUARK_X1000_UDC)
+ dev->bar = PCH_UDC_PCI_BAR_QUARK_X1000;
+ else
+ dev->bar = PCH_UDC_PCI_BAR;
+
/* PCI resource allocation */
- resource = pci_resource_start(pdev, 1);
- len = pci_resource_len(pdev, 1);
+ resource = pci_resource_start(pdev, dev->bar);
+ len = pci_resource_len(pdev, dev->bar);
if (!request_mem_region(resource, len, KBUILD_MODNAME)) {
dev_err(&pdev->dev, "%s: pci device used already\n", __func__);
}
static const struct pci_device_id pch_udc_pcidev_id[] = {
+ {
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL,
+ PCI_DEVICE_ID_INTEL_QUARK_X1000_UDC),
+ .class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
+ .class_mask = 0xffffffff,
+ },
{
PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EG20T_UDC),
.class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
reg = devm_ioremap_resource(&pdev->dev, res);
- if (!reg)
- return -ENODEV;
+ if (IS_ERR(reg))
+ return PTR_ERR(reg);
ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
irq = ires->start;
if (selector == EHSET_TEST_SINGLE_STEP_SET_FEATURE) {
spin_unlock_irqrestore(&ehci->lock, flags);
retval = ehset_single_step_set_feature(hcd,
- wIndex);
+ wIndex + 1);
spin_lock_irqsave(&ehci->lock, flags);
break;
}
}
/* Updates Link Status for super Speed port */
-static void xhci_hub_report_usb3_link_state(u32 *status, u32 status_reg)
+static void xhci_hub_report_usb3_link_state(struct xhci_hcd *xhci,
+ u32 *status, u32 status_reg)
{
u32 pls = status_reg & PORT_PLS_MASK;
* in which sometimes the port enters compliance mode
* caused by a delay on the host-device negotiation.
*/
- if (pls == USB_SS_PORT_LS_COMP_MOD)
+ if ((xhci->quirks & XHCI_COMP_MODE_QUIRK) &&
+ (pls == USB_SS_PORT_LS_COMP_MOD))
pls |= USB_PORT_STAT_CONNECTION;
}
}
/* Update Port Link State */
if (hcd->speed == HCD_USB3) {
- xhci_hub_report_usb3_link_state(&status, raw_port_status);
+ xhci_hub_report_usb3_link_state(xhci, &status, raw_port_status);
/*
* Verify if all USB3 Ports Have entered U0 already.
* Delete Compliance Mode Timer if so.
if (xhci->lpm_command)
xhci_free_command(xhci, xhci->lpm_command);
+ xhci->lpm_command = NULL;
if (xhci->cmd_ring)
xhci_ring_free(xhci, xhci->cmd_ring);
xhci->cmd_ring = NULL;
xhci_cleanup_command_queue(xhci);
num_ports = HCS_MAX_PORTS(xhci->hcs_params1);
- for (i = 0; i < num_ports; i++) {
+ for (i = 0; i < num_ports && xhci->rh_bw; i++) {
struct xhci_interval_bw_table *bwt = &xhci->rh_bw[i].bw_table;
for (j = 0; j < XHCI_MAX_INTERVAL; j++) {
struct list_head *ep = &bwt->interval_bw[j].endpoints;
/* AMD PLL quirk */
if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info())
xhci->quirks |= XHCI_AMD_PLL_FIX;
+
+ if (pdev->vendor == PCI_VENDOR_ID_AMD)
+ xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+
if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
xhci->quirks |= XHCI_LPM_SUPPORT;
xhci->quirks |= XHCI_INTEL_HOST;
if (pdev->vendor == PCI_VENDOR_ID_VIA)
xhci->quirks |= XHCI_RESET_ON_RESUME;
+ /* See https://bugzilla.kernel.org/show_bug.cgi?id=79511 */
+ if (pdev->vendor == PCI_VENDOR_ID_VIA &&
+ pdev->device == 0x3432)
+ xhci->quirks |= XHCI_BROKEN_STREAMS;
+
if (xhci->quirks & XHCI_RESET_ON_RESUME)
xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
"QUIRK: Resetting on resume");
}
}
-/*
- * Find the segment that trb is in. Start searching in start_seg.
- * If we must move past a segment that has a link TRB with a toggle cycle state
- * bit set, then we will toggle the value pointed at by cycle_state.
- */
-static struct xhci_segment *find_trb_seg(
- struct xhci_segment *start_seg,
- union xhci_trb *trb, int *cycle_state)
-{
- struct xhci_segment *cur_seg = start_seg;
- struct xhci_generic_trb *generic_trb;
-
- while (cur_seg->trbs > trb ||
- &cur_seg->trbs[TRBS_PER_SEGMENT - 1] < trb) {
- generic_trb = &cur_seg->trbs[TRBS_PER_SEGMENT - 1].generic;
- if (generic_trb->field[3] & cpu_to_le32(LINK_TOGGLE))
- *cycle_state ^= 0x1;
- cur_seg = cur_seg->next;
- if (cur_seg == start_seg)
- /* Looped over the entire list. Oops! */
- return NULL;
- }
- return cur_seg;
-}
-
-
static struct xhci_ring *xhci_triad_to_transfer_ring(struct xhci_hcd *xhci,
unsigned int slot_id, unsigned int ep_index,
unsigned int stream_id)
struct xhci_virt_device *dev = xhci->devs[slot_id];
struct xhci_virt_ep *ep = &dev->eps[ep_index];
struct xhci_ring *ep_ring;
- struct xhci_generic_trb *trb;
+ struct xhci_segment *new_seg;
+ union xhci_trb *new_deq;
dma_addr_t addr;
u64 hw_dequeue;
+ bool cycle_found = false;
+ bool td_last_trb_found = false;
ep_ring = xhci_triad_to_transfer_ring(xhci, slot_id,
ep_index, stream_id);
hw_dequeue = le64_to_cpu(ep_ctx->deq);
}
- /* Find virtual address and segment of hardware dequeue pointer */
- state->new_deq_seg = ep_ring->deq_seg;
- state->new_deq_ptr = ep_ring->dequeue;
- while (xhci_trb_virt_to_dma(state->new_deq_seg, state->new_deq_ptr)
- != (dma_addr_t)(hw_dequeue & ~0xf)) {
- next_trb(xhci, ep_ring, &state->new_deq_seg,
- &state->new_deq_ptr);
- if (state->new_deq_ptr == ep_ring->dequeue) {
- WARN_ON(1);
- return;
- }
- }
+ new_seg = ep_ring->deq_seg;
+ new_deq = ep_ring->dequeue;
+ state->new_cycle_state = hw_dequeue & 0x1;
+
/*
- * Find cycle state for last_trb, starting at old cycle state of
- * hw_dequeue. If there is only one segment ring, find_trb_seg() will
- * return immediately and cannot toggle the cycle state if this search
- * wraps around, so add one more toggle manually in that case.
+ * We want to find the pointer, segment and cycle state of the new trb
+ * (the one after current TD's last_trb). We know the cycle state at
+ * hw_dequeue, so walk the ring until both hw_dequeue and last_trb are
+ * found.
*/
- state->new_cycle_state = hw_dequeue & 0x1;
- if (ep_ring->first_seg == ep_ring->first_seg->next &&
- cur_td->last_trb < state->new_deq_ptr)
- state->new_cycle_state ^= 0x1;
+ do {
+ if (!cycle_found && xhci_trb_virt_to_dma(new_seg, new_deq)
+ == (dma_addr_t)(hw_dequeue & ~0xf)) {
+ cycle_found = true;
+ if (td_last_trb_found)
+ break;
+ }
+ if (new_deq == cur_td->last_trb)
+ td_last_trb_found = true;
- state->new_deq_ptr = cur_td->last_trb;
- xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
- "Finding segment containing last TRB in TD.");
- state->new_deq_seg = find_trb_seg(state->new_deq_seg,
- state->new_deq_ptr, &state->new_cycle_state);
- if (!state->new_deq_seg) {
- WARN_ON(1);
- return;
- }
+ if (cycle_found &&
+ TRB_TYPE_LINK_LE32(new_deq->generic.field[3]) &&
+ new_deq->generic.field[3] & cpu_to_le32(LINK_TOGGLE))
+ state->new_cycle_state ^= 0x1;
+
+ next_trb(xhci, ep_ring, &new_seg, &new_deq);
+
+ /* Search wrapped around, bail out */
+ if (new_deq == ep->ring->dequeue) {
+ xhci_err(xhci, "Error: Failed finding new dequeue state\n");
+ state->new_deq_seg = NULL;
+ state->new_deq_ptr = NULL;
+ return;
+ }
+
+ } while (!cycle_found || !td_last_trb_found);
- /* Increment to find next TRB after last_trb. Cycle if appropriate. */
- trb = &state->new_deq_ptr->generic;
- if (TRB_TYPE_LINK_LE32(trb->field[3]) &&
- (trb->field[3] & cpu_to_le32(LINK_TOGGLE)))
- state->new_cycle_state ^= 0x1;
- next_trb(xhci, ep_ring, &state->new_deq_seg, &state->new_deq_ptr);
+ state->new_deq_seg = new_seg;
+ state->new_deq_ptr = new_deq;
/* Don't update the ring cycle state for the producer (us). */
xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
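
For reference, the loop above folds the old two-pass search (the cycle hunt
plus find_trb_seg()) into a single walk that tracks two flags. Below is a
minimal standalone model of that control flow, assuming a simplified ring of
slot numbers instead of TRBs; find_new_dequeue(), RING_SIZE and the toggle
at the wrap point (standing in for a link TRB with LINK_TOGGLE) are all
inventions for the example:

#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 8

static bool find_new_dequeue(int hw_deq, int last_trb, int start,
                             int *new_deq, int *cycle)
{
	bool cycle_found = false, last_found = false;
	int pos = start;
	int steps = 0;

	do {
		if (!cycle_found && pos == hw_deq) {
			cycle_found = true;
			if (last_found)
				break;
		}
		if (pos == last_trb)
			last_found = true;

		/* wrap point of the ring: toggle the cycle state */
		if (cycle_found && pos == RING_SIZE - 1)
			*cycle ^= 1;

		pos = (pos + 1) % RING_SIZE;

		/* walked the whole ring twice without success: bail out */
		if (++steps > 2 * RING_SIZE)
			return false;
	} while (!cycle_found || !last_found);

	*new_deq = pos;
	return true;
}

int main(void)
{
	int new_deq, cycle = 1;

	if (find_new_dequeue(5, 2, 0, &new_deq, &cycle))
		printf("new dequeue slot %d, cycle %d\n", new_deq, cycle);
	return 0;
}

As in the kernel loop, the walk only terminates once both positions have
been seen, so the cycle bit is toggled exactly for the wraps crossed after
the hardware dequeue position has been found.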
 * last TRB of the previous TD. The command completion handler
 * will take care of the rest.
*/
- if (!event_seg && trb_comp_code == COMP_STOP_INVAL) {
+ if (!event_seg && (trb_comp_code == COMP_STOP ||
+ trb_comp_code == COMP_STOP_INVAL)) {
ret = 0;
goto cleanup;
}
ep_index, ep->stopped_stream, ep->stopped_td,
&deq_state);
+ if (!deq_state.new_deq_ptr || !deq_state.new_deq_seg)
+ return;
+
/* HW with the reset endpoint quirk will use the saved dequeue state to
* issue a configure endpoint command later.
*/
int ret;
spin_lock_irqsave(&xhci->lock, flags);
- if (max_exit_latency == xhci->devs[udev->slot_id]->current_mel) {
+
+ virt_dev = xhci->devs[udev->slot_id];
+
+ /*
+ * virt_dev might not exist yet if xHC resumed from hibernate (S4) and
+ * xHC was re-initialized. Exit latency will be set later after
+ * hub_port_finish_reset() is done and xhci->devs[] are re-allocated.
+ */
+
+ if (!virt_dev || max_exit_latency == virt_dev->current_mel) {
spin_unlock_irqrestore(&xhci->lock, flags);
return 0;
}
/* Attempt to issue an Evaluate Context command to change the MEL. */
- virt_dev = xhci->devs[udev->slot_id];
command = xhci->lpm_command;
ctrl_ctx = xhci_get_input_control_ctx(xhci, command->in_ctx);
if (!ctrl_ctx) {
{ USB_DEVICE(0x0711, 0x0918) },
{ USB_DEVICE(0x0711, 0x0920) },
{ USB_DEVICE(0x0711, 0x0950) },
+ { USB_DEVICE(0x0711, 0x5200) },
{ USB_DEVICE(0x182d, 0x021c) },
{ USB_DEVICE(0x182d, 0x0269) },
{ }
u32 transferred;
u32 packet_sz;
struct list_head tx_check;
+ int tx_zlp;
};
#define MUSB_DMA_NUM_CHANNELS 15
{
struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
struct musb *musb = hw_ep->musb;
+ void __iomem *epio = hw_ep->regs;
+ u16 csr;
if (!cppi41_channel->prog_len ||
(cppi41_channel->channel.status == MUSB_DMA_STATUS_FREE)) {
cppi41_channel->transferred;
cppi41_channel->channel.status = MUSB_DMA_STATUS_FREE;
cppi41_channel->channel.rx_packet_done = true;
+
+ /*
+ * transmit a ZLP in PIO mode for transfers whose size is a
+ * multiple of the EP packet size.
+ */
+ if (cppi41_channel->tx_zlp && (cppi41_channel->transferred %
+ cppi41_channel->packet_sz) == 0) {
+ musb_ep_select(musb->mregs, hw_ep->epnum);
+ csr = MUSB_TXCSR_MODE | MUSB_TXCSR_TXPKTRDY;
+ musb_writew(epio, MUSB_TXCSR, csr);
+ }
musb_dma_completion(musb, hw_ep->epnum, cppi41_channel->is_tx);
} else {
/* next iteration, reload */
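
Background for the new ZLP branch: a USB bulk transfer is terminated by a
short packet, so a transfer whose total size is an exact multiple of the
endpoint packet size needs an explicit zero-length packet before the other
side treats it as complete. A tiny standalone sketch of the predicate used
above (needs_zlp() is a made-up name for the example):

#include <stdbool.h>
#include <stdio.h>

/* A ZLP is only needed when the data ended exactly on a packet boundary. */
static bool needs_zlp(unsigned int transferred, unsigned int packet_sz)
{
	return packet_sz && (transferred % packet_sz) == 0;
}

int main(void)
{
	/* 1024 bytes over a 512-byte bulk endpoint: ZLP required */
	printf("%d\n", needs_zlp(1024, 512));
	/* 1000 bytes: the final 488-byte packet is already short */
	printf("%d\n", needs_zlp(1000, 512));
	return 0;
}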
struct dma_chan *dc = cppi41_channel->dc;
struct dma_async_tx_descriptor *dma_desc;
enum dma_transfer_direction direction;
- u16 csr;
u32 remain_bytes;
- void __iomem *epio = cppi41_channel->hw_ep->regs;
cppi41_channel->buf_addr += cppi41_channel->packet_sz;
cppi41_channel->total_len = len;
cppi41_channel->transferred = 0;
cppi41_channel->packet_sz = packet_sz;
+ cppi41_channel->tx_zlp = (cppi41_channel->is_tx && mode) ? 1 : 0;
/*
* Due to AM335x' Advisory 1.0.13 we are not allowed to transfer more
struct musb *musb = ux500_channel->controller->private_data;
dev_dbg(musb->controller,
- "packet_sz=%d, mode=%d, dma_addr=0x%llu, len=%d is_tx=%d\n",
+ "packet_sz=%d, mode=%d, dma_addr=0x%llx, len=%d is_tx=%d\n",
packet_sz, mode, (unsigned long long) dma_addr,
len, ux500_channel->is_tx);
gpio_vbus->phy.otg = devm_kzalloc(&pdev->dev, sizeof(struct usb_otg),
GFP_KERNEL);
- if (!gpio_vbus->phy.otg) {
- kfree(gpio_vbus);
+ if (!gpio_vbus->phy.otg)
return -ENOMEM;
- }
platform_set_drvdata(pdev, gpio_vbus);
gpio_vbus->dev = &pdev->dev;
*/
if (motg->phy_number) {
phy_select = devm_ioremap_nocache(&pdev->dev, USB2_PHY_SEL, 4);
- if (IS_ERR(phy_select))
- return PTR_ERR(phy_select);
+ if (!phy_select)
+ return -ENOMEM;
/* Enable second PHY with the OTG port */
writel(0x1, phy_select);
}
/*
- * Copyright 2012-2013 Freescale Semiconductor, Inc.
+ * Copyright 2012-2014 Freescale Semiconductor, Inc.
* Copyright (C) 2012 Marek Vasut <marex@denx.de>
* on behalf of DENX Software Engineering GmbH
*
MXS_PHY_NEED_IP_FIX,
};
+static const struct mxs_phy_data imx6sx_phy_data = {
+ .flags = MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS |
+ MXS_PHY_NEED_IP_FIX,
+};
+
static const struct of_device_id mxs_phy_dt_ids[] = {
+ { .compatible = "fsl,imx6sx-usbphy", .data = &imx6sx_phy_data, },
{ .compatible = "fsl,imx6sl-usbphy", .data = &imx6sl_phy_data, },
{ .compatible = "fsl,imx6q-usbphy", .data = &imx6q_phy_data, },
{ .compatible = "fsl,imx23-usbphy", .data = &imx23_phy_data, },
#define EXYNOS5_DRD_PHYPARAM1 (0x20)
-#define PHYPARAM1_PCS_TXDEEMPH_MASK (0x1f << 0)
+#define PHYPARAM1_PCS_TXDEEMPH_MASK (0x3f << 0)
#define PHYPARAM1_PCS_TXDEEMPH (0x1c)
#define EXYNOS5_DRD_PHYTERM (0x24)
return -ENOMEM;
}
- tegra_phy->config = devm_kzalloc(&pdev->dev,
- sizeof(*tegra_phy->config), GFP_KERNEL);
+ tegra_phy->config = devm_kzalloc(&pdev->dev, sizeof(*config),
+ GFP_KERNEL);
if (!tegra_phy->config) {
dev_err(&pdev->dev,
"unable to allocate memory for USB UTMIP config\n");
phy = __usb_find_phy_dev(dev, &phy_bind_list, index);
if (IS_ERR(phy) || !try_module_get(phy->dev->driver->owner)) {
dev_dbg(dev, "unable to find transceiver\n");
+ if (!IS_ERR(phy))
+ phy = ERR_PTR(-ENODEV);
+
goto err0;
}
return list_first_entry(&pipe->list, struct usbhs_pkt, node);
}
+static void usbhsf_fifo_clear(struct usbhs_pipe *pipe,
+ struct usbhs_fifo *fifo);
+static void usbhsf_fifo_unselect(struct usbhs_pipe *pipe,
+ struct usbhs_fifo *fifo);
+static struct dma_chan *usbhsf_dma_chan_get(struct usbhs_fifo *fifo,
+ struct usbhs_pkt *pkt);
+#define usbhsf_dma_map(p) __usbhsf_dma_map_ctrl(p, 1)
+#define usbhsf_dma_unmap(p) __usbhsf_dma_map_ctrl(p, 0)
+static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map);
struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
{
struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
+ struct usbhs_fifo *fifo = usbhs_pipe_to_fifo(pipe);
unsigned long flags;
/******************** spin lock ********************/
usbhs_lock(priv, flags);
+ usbhs_pipe_disable(pipe);
+
if (!pkt)
pkt = __usbhsf_pkt_get(pipe);
- if (pkt)
+ if (pkt) {
+ struct dma_chan *chan = NULL;
+
+ if (fifo)
+ chan = usbhsf_dma_chan_get(fifo, pkt);
+ if (chan) {
+ dmaengine_terminate_all(chan);
+ usbhsf_fifo_clear(pipe, fifo);
+ usbhsf_dma_unmap(pkt);
+ }
+
__usbhsf_pkt_del(pkt);
+ }
+
+ if (fifo)
+ usbhsf_fifo_unselect(pipe, fifo);
usbhs_unlock(priv, flags);
/******************** spin unlock ******************/
usbhsf_send_terminator(pipe, fifo);
usbhsf_tx_irq_ctrl(pipe, !*is_done);
+ usbhs_pipe_running(pipe, !*is_done);
usbhs_pipe_enable(pipe);
dev_dbg(dev, " send %d (%d/ %d/ %d/ %d)\n",
* retry in interrupt
*/
usbhsf_tx_irq_ctrl(pipe, 1);
+ usbhs_pipe_running(pipe, 1);
return ret;
}
+static int usbhsf_pio_prepare_push(struct usbhs_pkt *pkt, int *is_done)
+{
+ if (usbhs_pipe_is_running(pkt->pipe))
+ return 0;
+
+ return usbhsf_pio_try_push(pkt, is_done);
+}
+
struct usbhs_pkt_handle usbhs_fifo_pio_push_handler = {
- .prepare = usbhsf_pio_try_push,
+ .prepare = usbhsf_pio_prepare_push,
.try_run = usbhsf_pio_try_push,
};
if (usbhs_pipe_is_busy(pipe))
return 0;
+ if (usbhs_pipe_is_running(pipe))
+ return 0;
+
/*
* pipe enable to prepare packet receive
*/
usbhs_pipe_set_trans_count_if_bulk(pipe, pkt->length);
usbhs_pipe_enable(pipe);
+ usbhs_pipe_running(pipe, 1);
usbhsf_rx_irq_ctrl(pipe, 1);
return 0;
(total_len < maxp)) { /* short packet */
*is_done = 1;
usbhsf_rx_irq_ctrl(pipe, 0);
+ usbhs_pipe_running(pipe, 0);
usbhs_pipe_disable(pipe); /* disable pipe first */
}
usbhs_bset(priv, fifo->sel, DREQE, dreqe);
}
-#define usbhsf_dma_map(p) __usbhsf_dma_map_ctrl(p, 1)
-#define usbhsf_dma_unmap(p) __usbhsf_dma_map_ctrl(p, 0)
static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map)
{
struct usbhs_pipe *pipe = pkt->pipe;
dev_dbg(dev, " %s %d (%d/ %d)\n",
fifo->name, usbhs_pipe_number(pipe), pkt->length, pkt->zero);
+ usbhs_pipe_running(pipe, 1);
usbhs_pipe_set_trans_count_if_bulk(pipe, pkt->trans);
usbhs_pipe_enable(pipe);
usbhsf_dma_start(pipe, fifo);
if ((uintptr_t)(pkt->buf + pkt->actual) & 0x7) /* 8byte alignment */
goto usbhsf_pio_prepare_push;
+ /* return at this time if the pipe is running */
+ if (usbhs_pipe_is_running(pipe))
+ return 0;
+
/* get enable DMA fifo */
fifo = usbhsf_get_dma_fifo(priv, pkt);
if (!fifo)
static int usbhsf_dma_push_done(struct usbhs_pkt *pkt, int *is_done)
{
struct usbhs_pipe *pipe = pkt->pipe;
+ int is_short = pkt->trans % usbhs_pipe_get_maxpacket(pipe);
+
+ pkt->actual += pkt->trans;
- pkt->actual = pkt->trans;
+ if (pkt->actual < pkt->length)
+ *is_done = 0; /* there is remaining data */
+ else if (is_short)
+ *is_done = 1; /* short packet */
+ else
+ *is_done = !pkt->zero; /* send zero packet? */
- *is_done = !pkt->zero; /* send zero packet ? */
+ usbhs_pipe_running(pipe, !*is_done);
usbhsf_dma_stop(pipe, pipe->fifo);
usbhsf_dma_unmap(pkt);
usbhsf_fifo_unselect(pipe, pipe->fifo);
+ if (!*is_done) {
+ /* change handler to PIO */
+ pkt->handler = &usbhs_fifo_pio_push_handler;
+ return pkt->handler->try_run(pkt, is_done);
+ }
+
return 0;
}
if ((pkt->actual == pkt->length) || /* receive all data */
(pkt->trans < maxp)) { /* short packet */
*is_done = 1;
+ usbhs_pipe_running(pipe, 0);
} else {
/* re-enable */
+ usbhs_pipe_running(pipe, 0);
usbhsf_prepare_pop(pkt, is_done);
}
{
struct usbhs_mod *mod = usbhs_mod_get_current(priv);
u16 intenb0, intenb1;
+ unsigned long flags;
+ /******************** spin lock ********************/
+ usbhs_lock(priv, flags);
state->intsts0 = usbhs_read(priv, INTSTS0);
state->intsts1 = usbhs_read(priv, INTSTS1);
state->bempsts &= mod->irq_bempsts;
state->brdysts &= mod->irq_brdysts;
}
+ usbhs_unlock(priv, flags);
+ /******************** spin unlock ******************/
/*
* Check whether the irq enable registers and the irq status are set
return usbhsp_flags_has(pipe, IS_DIR_HOST);
}
+int usbhs_pipe_is_running(struct usbhs_pipe *pipe)
+{
+ return usbhsp_flags_has(pipe, IS_RUNNING);
+}
+
+void usbhs_pipe_running(struct usbhs_pipe *pipe, int running)
+{
+ if (running)
+ usbhsp_flags_set(pipe, IS_RUNNING);
+ else
+ usbhsp_flags_clr(pipe, IS_RUNNING);
+}
+
void usbhs_pipe_data_sequence(struct usbhs_pipe *pipe, int sequence)
{
u16 mask = (SQCLR | SQSET);
#define USBHS_PIPE_FLAGS_IS_USED (1 << 0)
#define USBHS_PIPE_FLAGS_IS_DIR_IN (1 << 1)
#define USBHS_PIPE_FLAGS_IS_DIR_HOST (1 << 2)
+#define USBHS_PIPE_FLAGS_IS_RUNNING (1 << 3)
struct usbhs_pkt_handle *handler;
void usbhs_pipe_remove(struct usbhs_priv *priv);
int usbhs_pipe_is_dir_in(struct usbhs_pipe *pipe);
int usbhs_pipe_is_dir_host(struct usbhs_pipe *pipe);
+int usbhs_pipe_is_running(struct usbhs_pipe *pipe);
+void usbhs_pipe_running(struct usbhs_pipe *pipe, int running);
+
void usbhs_pipe_init(struct usbhs_priv *priv,
int (*dma_map_ctrl)(struct usbhs_pkt *pkt, int map));
int usbhs_pipe_get_maxpacket(struct usbhs_pipe *pipe);
{ USB_DEVICE(FTDI_VID, FTDI_AMC232_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_CANUSB_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_CANDAPTER_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_BM_ATOM_NANO_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_NXTCAM_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_EV3CON_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_SCS_DEVICE_0_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_NDI_AURORA_SCU_PID),
.driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
{ USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) },
+ { USB_DEVICE(NOVITUS_VID, NOVITUS_BONO_E_PID) },
{ USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_S03_PID) },
{ USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_59_PID) },
{ USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_57A_PID) },
{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_2_PID) },
{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_3_PID) },
{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_4_PID) },
+ /* ekey Devices */
+ { USB_DEVICE(FTDI_VID, FTDI_EKEY_CONV_USB_PID) },
/* Infineon Devices */
{ USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) },
+ /* GE Healthcare devices */
+ { USB_DEVICE(GE_HEALTHCARE_VID, GE_HEALTHCARE_NEMO_TRACKER_PID) },
{ } /* Terminating entry */
};
/* www.candapter.com Ewert Energy Systems CANdapter device */
#define FTDI_CANDAPTER_PID 0x9F80 /* Product Id */
+#define FTDI_BM_ATOM_NANO_PID 0xa559 /* Basic Micro ATOM Nano USB2Serial */
+
/*
* Texas Instruments XDS100v2 JTAG / BeagleBone A3
* http://processors.wiki.ti.com/index.php/XDS100
#define TELLDUS_VID 0x1781 /* Vendor ID */
#define TELLDUS_TELLSTICK_PID 0x0C30 /* RF control dongle 433 MHz using FT232RL */
+/*
+ * NOVITUS printers
+ */
+#define NOVITUS_VID 0x1a28
+#define NOVITUS_BONO_E_PID 0x6010
+
/*
* RT Systems programming cables for various ham radios
*/
#define BRAINBOXES_US_160_6_PID 0x9006 /* US-160 16xRS232 1Mbaud Port 11 and 12 */
#define BRAINBOXES_US_160_7_PID 0x9007 /* US-160 16xRS232 1Mbaud Port 13 and 14 */
#define BRAINBOXES_US_160_8_PID 0x9008 /* US-160 16xRS232 1Mbaud Port 15 and 16 */
+
+/*
+ * ekey biometric systems GmbH (http://ekey.net/)
+ */
+#define FTDI_EKEY_CONV_USB_PID 0xCB08 /* Converter USB */
+
+/*
+ * GE Healthcare devices
+ */
+#define GE_HEALTHCARE_VID 0x1901
+#define GE_HEALTHCARE_NEMO_TRACKER_PID 0x0015
#define ZTE_PRODUCT_MF622 0x0001
#define ZTE_PRODUCT_MF628 0x0015
#define ZTE_PRODUCT_MF626 0x0031
-#define ZTE_PRODUCT_MC2718 0xffe8
#define ZTE_PRODUCT_AC2726 0xfff1
+#define ZTE_PRODUCT_CDMA_TECH 0xfffe
+#define ZTE_PRODUCT_AC8710T 0xffff
+#define ZTE_PRODUCT_MC2718 0xffe8
+#define ZTE_PRODUCT_AD3812 0xffeb
+#define ZTE_PRODUCT_MC2716 0xffed
#define BENQ_VENDOR_ID 0x04a5
#define BENQ_PRODUCT_H10 0x4068
#define INOVIA_VENDOR_ID 0x20a6
#define INOVIA_SEW858 0x1105
+/* VIA Telecom */
+#define VIATELECOM_VENDOR_ID 0x15eb
+#define VIATELECOM_PRODUCT_CDS7 0x0001
+
/* some device interfaces need special handling for a number of reasons */
enum option_blacklist_reason {
OPTION_BLACKLIST_NONE = 0,
.reserved = BIT(4),
};
+static const struct option_blacklist_info zte_ad3812_z_blacklist = {
+ .sendsetup = BIT(0) | BIT(1) | BIT(2),
+};
+
static const struct option_blacklist_info zte_mc2718_z_blacklist = {
.sendsetup = BIT(1) | BIT(2) | BIT(3) | BIT(4),
};
+static const struct option_blacklist_info zte_mc2716_z_blacklist = {
+ .sendsetup = BIT(1) | BIT(2) | BIT(3),
+};
+
static const struct option_blacklist_info huawei_cdc12_blacklist = {
.reserved = BIT(1) | BIT(2),
};
{ USB_DEVICE_INTERFACE_CLASS(BANDRICH_VENDOR_ID, BANDRICH_PRODUCT_1012, 0xff) },
{ USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC650) },
{ USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC680) },
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6000)}, /* ZTE AC8700 */
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff93, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff94, 0xff, 0xff, 0xff) },
- /* NOTE: most ZTE CDMA devices should be driven by zte_ev, not option */
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_CDMA_TECH, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC8710T, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2718, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)&zte_mc2718_z_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AD3812, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&zte_ad3812_z_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2716, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&zte_mc2716_z_blacklist },
{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) },
{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) },
{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) },
{ USB_DEVICE(BENQ_VENDOR_ID, BENQ_PRODUCT_H10) },
{ USB_DEVICE(DLINK_VENDOR_ID, DLINK_PRODUCT_DWM_652) },
{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
{ USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) },
+ { USB_DEVICE(VIATELECOM_VENDOR_ID, VIATELECOM_PRODUCT_CDS7) },
{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, option_ids);
dev_dbg(dev, "%s: type %x req %x\n", __func__,
req_pkt->bRequestType, req_pkt->bRequest);
}
+ } else if (status == -ENOENT || status == -ESHUTDOWN) {
+ dev_dbg(dev, "%s: urb stopped: %d\n", __func__, status);
} else
dev_err(dev, "%s: error %d\n", __func__, status);
{ USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_GPRS) },
{ USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_HCR331) },
{ USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_MOTOROLA) },
+ { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_ZTEK) },
{ USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID) },
{ USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID_RSAQ5) },
{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID) },
#define PL2303_PRODUCT_ID_GPRS 0x0609
#define PL2303_PRODUCT_ID_HCR331 0x331a
#define PL2303_PRODUCT_ID_MOTOROLA 0x0307
+#define PL2303_PRODUCT_ID_ZTEK 0xe1f1
#define ATEN_VENDOR_ID 0x0557
#define ATEN_VENDOR_ID2 0x0547
/* Sierra Wireless HSPA Non-Composite Device */
{ USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x6892, 0xFF, 0xFF, 0xFF)},
{ USB_DEVICE(0x1199, 0x6893) }, /* Sierra Wireless Device */
- { USB_DEVICE(0x1199, 0x68A3), /* Sierra Wireless Direct IP modems */
+ /* Sierra Wireless Direct IP modems */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x68A3, 0xFF, 0xFF, 0xFF),
+ .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
+ },
+ { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x68AA, 0xFF, 0xFF, 0xFF),
.driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
},
/* AT&T Direct IP LTE modems */
{ USB_DEVICE_AND_INTERFACE_INFO(0x0F3D, 0x68AA, 0xFF, 0xFF, 0xFF),
.driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
},
- { USB_DEVICE(0x0f3d, 0x68A3), /* Airprime/Sierra Wireless Direct IP modems */
+ /* Airprime/Sierra Wireless Direct IP modems */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x0F3D, 0x68A3, 0xFF, 0xFF, 0xFF),
.driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
},
if (usb_endpoint_is_bulk_in(endpoint)) {
/* we found a bulk in endpoint */
dev_dbg(ddev, "found bulk in on endpoint %d\n", i);
- bulk_in_endpoint[num_bulk_in] = endpoint;
- ++num_bulk_in;
+ if (num_bulk_in < MAX_NUM_PORTS) {
+ bulk_in_endpoint[num_bulk_in] = endpoint;
+ ++num_bulk_in;
+ }
}
if (usb_endpoint_is_bulk_out(endpoint)) {
/* we found a bulk out endpoint */
dev_dbg(ddev, "found bulk out on endpoint %d\n", i);
- bulk_out_endpoint[num_bulk_out] = endpoint;
- ++num_bulk_out;
+ if (num_bulk_out < MAX_NUM_PORTS) {
+ bulk_out_endpoint[num_bulk_out] = endpoint;
+ ++num_bulk_out;
+ }
}
if (usb_endpoint_is_int_in(endpoint)) {
/* we found a interrupt in endpoint */
dev_dbg(ddev, "found interrupt in on endpoint %d\n", i);
- interrupt_in_endpoint[num_interrupt_in] = endpoint;
- ++num_interrupt_in;
+ if (num_interrupt_in < MAX_NUM_PORTS) {
+ interrupt_in_endpoint[num_interrupt_in] =
+ endpoint;
+ ++num_interrupt_in;
+ }
}
if (usb_endpoint_is_int_out(endpoint)) {
/* we found an interrupt out endpoint */
dev_dbg(ddev, "found interrupt out on endpoint %d\n", i);
- interrupt_out_endpoint[num_interrupt_out] = endpoint;
- ++num_interrupt_out;
+ if (num_interrupt_out < MAX_NUM_PORTS) {
+ interrupt_out_endpoint[num_interrupt_out] =
+ endpoint;
+ ++num_interrupt_out;
+ }
}
}
if (usb_endpoint_is_int_in(endpoint)) {
/* we found a interrupt in endpoint */
dev_dbg(ddev, "found interrupt in for Prolific device on separate interface\n");
- interrupt_in_endpoint[num_interrupt_in] = endpoint;
- ++num_interrupt_in;
+ if (num_interrupt_in < MAX_NUM_PORTS) {
+ interrupt_in_endpoint[num_interrupt_in] = endpoint;
+ ++num_interrupt_in;
+ }
}
}
}
num_ports = type->num_ports;
}
+ if (num_ports > MAX_NUM_PORTS) {
+ dev_warn(ddev, "too many ports requested: %d\n", num_ports);
+ num_ports = MAX_NUM_PORTS;
+ }
+
serial->num_ports = num_ports;
serial->num_bulk_in = num_bulk_in;
serial->num_bulk_out = num_bulk_out;
dev_dbg(&urb->dev->dev, "%s - command_info is NULL, exiting.\n", __func__);
return;
}
+ if (!urb->actual_length) {
+ dev_dbg(&urb->dev->dev, "%s - empty response, exiting.\n", __func__);
+ return;
+ }
if (status) {
dev_dbg(&urb->dev->dev, "%s - nonzero urb status: %d\n", __func__, status);
if (status != -ENOENT)
/* These are unsolicited reports from the firmware, hence no
waiting command to wakeup */
dev_dbg(&urb->dev->dev, "%s - event received\n", __func__);
- } else if (data[0] == WHITEHEAT_GET_DTR_RTS) {
+ } else if ((data[0] == WHITEHEAT_GET_DTR_RTS) &&
+ (urb->actual_length - 1 <= sizeof(command_info->result_buffer))) {
memcpy(command_info->result_buffer, &data[1],
urb->actual_length - 1);
command_info->command_finished = WHITEHEAT_CMD_COMPLETE;
}
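
The added length check is the general pattern for device-supplied sizes: an
URB's actual_length is reported by the hardware and has to be validated
against the destination buffer before the memcpy(). The same idea in a
generic, standalone form (RESULT_BUF_SIZE and store_result() are inventions
for the example):

#include <stdio.h>
#include <string.h>

#define RESULT_BUF_SIZE 64

/* Copy a device-reported payload only if it fits the result buffer. */
static int store_result(char *result, const unsigned char *data,
                        size_t actual_length)
{
	if (actual_length < 1 || actual_length - 1 > RESULT_BUF_SIZE)
		return -1; /* short or oversized report: drop it */

	memcpy(result, data + 1, actual_length - 1);
	return 0;
}

int main(void)
{
	char result[RESULT_BUF_SIZE];
	unsigned char urb_data[8] = { 0x42, 1, 2, 3, 4, 5, 6, 7 };

	printf("%d\n", store_result(result, urb_data, sizeof(urb_data)));
	printf("%d\n", store_result(result, urb_data, 100)); /* rejected */
	return 0;
}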
static const struct usb_device_id id_table[] = {
- /* AC8710, AC8710T */
- { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xffff, 0xff, 0xff, 0xff) },
- /* AC8700 */
- { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xfffe, 0xff, 0xff, 0xff) },
- /* MG880 */
- { USB_DEVICE(0x19d2, 0xfffd) },
- { USB_DEVICE(0x19d2, 0xfffc) },
- { USB_DEVICE(0x19d2, 0xfffb) },
- /* AC8710_V3 */
+ { USB_DEVICE(0x19d2, 0xffec) },
+ { USB_DEVICE(0x19d2, 0xffee) },
{ USB_DEVICE(0x19d2, 0xfff6) },
{ USB_DEVICE(0x19d2, 0xfff7) },
{ USB_DEVICE(0x19d2, 0xfff8) },
{ USB_DEVICE(0x19d2, 0xfff9) },
- { USB_DEVICE(0x19d2, 0xffee) },
- /* AC2716, MC2716 */
- { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xffed, 0xff, 0xff, 0xff) },
- /* AD3812 */
- { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xffeb, 0xff, 0xff, 0xff) },
- { USB_DEVICE(0x19d2, 0xffec) },
- { USB_DEVICE(0x05C6, 0x3197) },
- { USB_DEVICE(0x05C6, 0x6000) },
- { USB_DEVICE(0x05C6, 0x9008) },
+ { USB_DEVICE(0x19d2, 0xfffb) },
+ { USB_DEVICE(0x19d2, 0xfffc) },
+ /* MG880 */
+ { USB_DEVICE(0x19d2, 0xfffd) },
{ },
};
MODULE_DEVICE_TABLE(usb, id_table);
unsigned long flags = id->driver_info;
int r, alt;
- usb_stor_adjust_quirks(udev, &flags);
-
- if (flags & US_FL_IGNORE_UAS)
- return 0;
alt = uas_find_uas_alt_setting(intf);
if (alt < 0)
if (r < 0)
return 0;
+ /*
+ * ASM1051 and older ASM1053 devices have the same usb-id, and UAS is
+ * broken on the ASM1051, so use the number of streams to tell them
+ * apart. Newer ASM1053 devices also support 32 streams, but have a
+ * different product-id.
+ */
+ if (le16_to_cpu(udev->descriptor.idVendor) == 0x174c &&
+ le16_to_cpu(udev->descriptor.idProduct) == 0x55aa) {
+ if (udev->speed < USB_SPEED_SUPER) {
+ /* No streams info, assume ASM1051 */
+ flags |= US_FL_IGNORE_UAS;
+ } else if (usb_ss_max_streams(&eps[1]->ss_ep_comp) == 32) {
+ flags |= US_FL_IGNORE_UAS;
+ }
+ }
+
+ usb_stor_adjust_quirks(udev, &flags);
+
+ if (flags & US_FL_IGNORE_UAS) {
+ dev_warn(&udev->dev,
+ "UAS is blacklisted for this device, using usb-storage instead\n");
+ return 0;
+ }
+
if (udev->bus->sg_tablesize == 0) {
dev_warn(&udev->dev,
"The driver for the USB controller %s does not support scatter-gather which is\n",
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_SINGLE_LUN ),
+UNUSUAL_DEV( 0x059b, 0x0040, 0x0100, 0x0100,
+ "Iomega",
+ "Jaz USB Adapter",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_SINGLE_LUN ),
+
/* Reported by <Hendryk.Pfeiffer@gmx.de> */
UNUSUAL_DEV( 0x059f, 0x0643, 0x0000, 0x0000,
"LaCie",
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_FIX_CAPACITY ),
+UNUSUAL_DEV( 0x06ca, 0x2003, 0x0100, 0x0100,
+ "Newer Technology",
+ "uSCSI",
+ USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
+ US_FL_SCM_MULT_TARG ),
+
/* Reported by Adrian Pilchowiec <adi1981@epf.pl> */
UNUSUAL_DEV( 0x071b, 0x3203, 0x0000, 0x0000,
"RockChip",
#include "usbip_common.h"
#include "stub.h"
-/*
- * Define device IDs here if you want to explicitly limit exportable devices.
- * In most cases, wildcard matching will be okay because driver binding can be
- * changed dynamically by a userland program.
- */
-static struct usb_device_id stub_table[] = {
-#if 0
- /* just an example */
- { USB_DEVICE(0x05ac, 0x0301) }, /* Mac 1 button mouse */
- { USB_DEVICE(0x0430, 0x0009) }, /* Plat Home Keyboard */
- { USB_DEVICE(0x059b, 0x0001) }, /* Iomega USB Zip 100 */
- { USB_DEVICE(0x04b3, 0x4427) }, /* IBM USB CD-ROM */
- { USB_DEVICE(0x05a9, 0xa511) }, /* LifeView USB cam */
- { USB_DEVICE(0x55aa, 0x0201) }, /* Imation card reader */
- { USB_DEVICE(0x046d, 0x0870) }, /* Qcam Express(QV-30) */
- { USB_DEVICE(0x04bb, 0x0101) }, /* IO-DATA HD 120GB */
- { USB_DEVICE(0x04bb, 0x0904) }, /* IO-DATA USB-ET/TX */
- { USB_DEVICE(0x04bb, 0x0201) }, /* IO-DATA USB-ET/TX */
- { USB_DEVICE(0x08bb, 0x2702) }, /* ONKYO USB Speaker */
- { USB_DEVICE(0x046d, 0x08b2) }, /* Logicool Qcam 4000 Pro */
-#endif
- /* magic for wild card */
- { .driver_info = 1 },
- { 0, } /* Terminating entry */
-};
-MODULE_DEVICE_TABLE(usb, stub_table);
-
/*
* usbip_status shows the status of usbip-host as long as this driver is bound
* to the target device.
#include <linux/types.h>
#include <linux/usb.h>
#include <linux/wait.h>
-#include "uapi/usbip.h"
+#include <uapi/linux/usbip.h>
#define USBIP_VERSION "1.0.0"
dev = &wa->usb_iface->dev;
--(wa->active_buf_in_urbs);
active_buf_in_urbs = wa->active_buf_in_urbs;
+ rpipe = xfer->ep->hcpriv;
if (usb_pipeisoc(xfer->urb->pipe)) {
struct usb_iso_packet_descriptor *iso_frame_desc =
resubmit_dti = (isoc_data_frame_count ==
urb_frame_count);
} else if (active_buf_in_urbs == 0) {
- rpipe = xfer->ep->hcpriv;
dev_dbg(dev,
"xfer %p 0x%08X#%u: data in done (%zu bytes)\n",
xfer, wa_xfer_id(xfer), seg->index,
*/
resubmit_dti = wa->dti_state != WA_DTI_TRANSFER_RESULT_PENDING;
spin_lock_irqsave(&xfer->lock, flags);
- rpipe = xfer->ep->hcpriv;
if (printk_ratelimit())
dev_err(dev, "xfer %p 0x%08X#%u: data in error %d\n",
xfer, wa_xfer_id(xfer), seg->index,
uwb_dev->mac_addr = *bce->mac_addr;
uwb_dev->dev_addr = bce->dev_addr;
dev_set_name(&uwb_dev->dev, "%s", macbuf);
+
+ /* plug the beacon cache */
+ bce->uwb_dev = uwb_dev;
+ uwb_dev->bce = bce;
+ uwb_bce_get(bce); /* released in uwb_dev_sys_release() */
+
result = uwb_dev_add(uwb_dev, &rc->uwb_dev.dev, rc);
if (result < 0) {
dev_err(dev, "new device %s: cannot instantiate device\n",
macbuf);
goto error_dev_add;
}
- /* plug the beacon cache */
- bce->uwb_dev = uwb_dev;
- uwb_dev->bce = bce;
- uwb_bce_get(bce); /* released in uwb_dev_sys_release() */
+
dev_info(dev, "uwb device (mac %s dev %s) connected to %s %s\n",
macbuf, devbuf, rc->uwb_dev.dev.parent->bus->name,
dev_name(rc->uwb_dev.dev.parent));
return;
error_dev_add:
+ bce->uwb_dev = NULL;
+ uwb_bce_put(bce);
kfree(uwb_dev);
return;
}
data->max_brightness--;
}
+ data->enable_gpio = -EINVAL;
return 0;
}
menuconfig FB
tristate "Support for frame buffer devices"
+ select FB_CMDLINE
---help---
The frame buffer device provides an abstraction for the graphics
hardware. It represents the frame buffer of some video hardware and
combination with certain motherboards and monitors are known to
suffer from this problem.
+config FB_CMDLINE
+ bool
+
config FB_DDC
tristate
depends on FB
#include <linux/list.h>
#include <linux/amba/bus.h>
#include <linux/amba/clcd.h>
+#include <linux/bitops.h>
#include <linux/clk.h>
#include <linux/hardirq.h>
#include <linux/dma-mapping.h>
if (g0 != panels[i].g0)
continue;
if (r0 == panels[i].r0 && b0 == panels[i].b0)
- fb->panel->caps = panels[i].caps & CLCD_CAP_RGB;
- if (r0 == panels[i].b0 && b0 == panels[i].r0)
- fb->panel->caps = panels[i].caps & CLCD_CAP_BGR;
+ fb->panel->caps = panels[i].caps;
}
return fb->panel->caps ? 0 : -EINVAL;
{
struct device_node *endpoint;
int err;
+ unsigned int bpp;
u32 max_bandwidth;
u32 tft_r0b0g0[3];
err = of_property_read_u32(fb->dev->dev.of_node, "max-memory-bandwidth",
&max_bandwidth);
- if (!err)
- fb->panel->bpp = 8 * max_bandwidth / (fb->panel->mode.xres *
- fb->panel->mode.yres * fb->panel->mode.refresh);
- else
- fb->panel->bpp = 32;
+ if (!err) {
+ /*
+ * max_bandwidth is in bytes per second and pixclock in
+ * pico-seconds, so the maximum allowed bits per pixel is
+ * 8 * max_bandwidth / (PICOS2KHZ(pixclock) * 1000)
+ * Rearrange this calculation to avoid overflow and then ensure
+ * result is a valid format.
+ */
+ bpp = max_bandwidth / (1000 / 8)
+ / PICOS2KHZ(fb->panel->mode.pixclock);
+ bpp = rounddown_pow_of_two(bpp);
+ if (bpp > 32)
+ bpp = 32;
+ } else
+ bpp = 32;
+ fb->panel->bpp = bpp;
#ifdef CONFIG_CPU_BIG_ENDIAN
fb->panel->cntl |= CNTL_BEBO;
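
To make the rearranged formula concrete, a worked example with hypothetical
device-tree numbers: max-memory-bandwidth = 51200000 bytes/s and a 640x480
mode with pixclock = 39722 ps (PICOS2KHZ(39722) = 25174 kHz) gives
51200000 / 125 / 25174 = 16, already a power of two, so the panel would be
capped at 16 bpp. A standalone sketch of the same arithmetic (the local
rounddown_pow_of_two() merely stands in for the kernel helper):

#include <stdio.h>

#define PICOS2KHZ(picos) (1000000000UL / (picos))

/* stand-in for the kernel's rounddown_pow_of_two() (v must be > 0) */
static unsigned long rounddown_pow_of_two(unsigned long v)
{
	unsigned long r = 1;

	while (r * 2 <= v)
		r *= 2;
	return r;
}

int main(void)
{
	unsigned long max_bandwidth = 51200000; /* bytes per second */
	unsigned long pixclock = 39722;         /* ps, ~25.175 MHz pixel clock */
	unsigned long bpp;

	/* 8 * bandwidth / (pixels per second), rearranged to avoid overflow */
	bpp = max_bandwidth / (1000 / 8) / PICOS2KHZ(pixclock);
	bpp = rounddown_pow_of_two(bpp);
	if (bpp > 32)
		bpp = 32;

	printf("max supported bpp: %lu\n", bpp); /* prints 16 */
	return 0;
}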
timings = of_get_display_timings(display_np);
if (!timings) {
dev_err(dev, "failed to get display timings\n");
+ ret = -EINVAL;
goto put_display_node;
}
timings_np = of_find_node_by_name(display_np, "display-timings");
if (!timings_np) {
dev_err(dev, "failed to find display-timings node\n");
+ ret = -ENODEV;
goto put_display_node;
}
{ 0xa8, 0x00 }
};
-static void __init chips_hw_init(void)
+static void chips_hw_init(void)
{
int i;
obj-y += fb_notify.o
+obj-$(CONFIG_FB_CMDLINE) += fb_cmdline.o
obj-$(CONFIG_FB) += fb.o
fb-y := fbmem.o fbmon.o fbcmap.o fbsysfs.o \
modedb.o fbcvt.o
--- /dev/null
+/*
+ * linux/drivers/video/fb_cmdline.c
+ *
+ * Copyright (C) 2014 Intel Corp
+ * Copyright (C) 1994 Martin Schaller
+ *
+ * 2001 - Documented with DocBook
+ * - Brad Douglas <brad@neruo.com>
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive
+ * for more details.
+ *
+ * Authors:
+ * Daniel Vetter <daniel.vetter@ffwll.ch>
+ */
+#include <linux/init.h>
+#include <linux/fb.h>
+
+static char *video_options[FB_MAX] __read_mostly;
+static int ofonly __read_mostly;
+
+const char *fb_mode_option;
+EXPORT_SYMBOL_GPL(fb_mode_option);
+
+/**
+ * fb_get_options - get kernel boot parameters
+ * @name: framebuffer name as it would appear in
+ * the boot parameter line
+ * (video=<name>:<options>)
+ * @option: the option will be stored here
+ *
+ * NOTE: Needed to maintain backwards compatibility
+ */
+int fb_get_options(const char *name, char **option)
+{
+ char *opt, *options = NULL;
+ int retval = 0;
+ int name_len = strlen(name), i;
+
+ if (name_len && ofonly && strncmp(name, "offb", 4))
+ retval = 1;
+
+ if (name_len && !retval) {
+ for (i = 0; i < FB_MAX; i++) {
+ if (video_options[i] == NULL)
+ continue;
+ if (!video_options[i][0])
+ continue;
+ opt = video_options[i];
+ if (!strncmp(name, opt, name_len) &&
+ opt[name_len] == ':')
+ options = opt + name_len + 1;
+ }
+ }
+ /* No match, pass global option */
+ if (!options && option && fb_mode_option)
+ options = kstrdup(fb_mode_option, GFP_KERNEL);
+ if (options && !strncmp(options, "off", 3))
+ retval = 1;
+
+ if (option)
+ *option = options;
+
+ return retval;
+}
+EXPORT_SYMBOL(fb_get_options);
+
+/**
+ * video_setup - process command line options
+ * @options: string of options
+ *
+ * Process command line options for frame buffer subsystem.
+ *
+ * NOTE: This function is a __setup and __init function.
+ * It only stores the options. Drivers have to call
+ * fb_get_options() as necessary.
+ *
+ * Returns zero.
+ *
+ */
+static int __init video_setup(char *options)
+{
+ int i, global = 0;
+
+ if (!options || !*options)
+ global = 1;
+
+ if (!global && !strncmp(options, "ofonly", 6)) {
+ ofonly = 1;
+ global = 1;
+ }
+
+ if (!global && !strchr(options, ':')) {
+ fb_mode_option = options;
+ global = 1;
+ }
+
+ if (!global) {
+ for (i = 0; i < FB_MAX; i++) {
+ if (video_options[i] == NULL) {
+ video_options[i] = options;
+ break;
+ }
+ }
+ }
+
+ return 1;
+}
+__setup("video=", video_setup);
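
For context, a driver consumes the options stored here by calling
fb_get_options() with its own name at init time; a non-zero return means
video output was disabled for it on the command line (e.g.
video=mydrvfb:off, or ofonly with a non-offb driver). A sketch of the usual
call pattern, where mydrvfb is a made-up driver name:

#include <linux/errno.h>
#include <linux/fb.h>
#include <linux/printk.h>

static int __init mydrvfb_init(void)
{
	char *option = NULL;

	/* matches a boot line like video=mydrvfb:1024x768-16@60 */
	if (fb_get_options("mydrvfb", &option))
		return -ENODEV; /* "off" (or ofonly) was requested */

	if (option)
		pr_info("mydrvfb: boot option: %s\n", option);
	/* the option string is typically handed on to fb_find_mode() */
	return 0;
}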
return err;
}
-static char *video_options[FB_MAX] __read_mostly;
-static int ofonly __read_mostly;
-
-/**
- * fb_get_options - get kernel boot parameters
- * @name: framebuffer name as it would appear in
- * the boot parameter line
- * (video=<name>:<options>)
- * @option: the option will be stored here
- *
- * NOTE: Needed to maintain backwards compatibility
- */
-int fb_get_options(const char *name, char **option)
-{
- char *opt, *options = NULL;
- int retval = 0;
- int name_len = strlen(name), i;
-
- if (name_len && ofonly && strncmp(name, "offb", 4))
- retval = 1;
-
- if (name_len && !retval) {
- for (i = 0; i < FB_MAX; i++) {
- if (video_options[i] == NULL)
- continue;
- if (!video_options[i][0])
- continue;
- opt = video_options[i];
- if (!strncmp(name, opt, name_len) &&
- opt[name_len] == ':')
- options = opt + name_len + 1;
- }
- }
- /* No match, pass global option */
- if (!options && option && fb_mode_option)
- options = kstrdup(fb_mode_option, GFP_KERNEL);
- if (options && !strncmp(options, "off", 3))
- retval = 1;
-
- if (option)
- *option = options;
-
- return retval;
-}
-EXPORT_SYMBOL(fb_get_options);
-
-#ifndef MODULE
-/**
- * video_setup - process command line options
- * @options: string of options
- *
- * Process command line options for frame buffer subsystem.
- *
- * NOTE: This function is a __setup and __init function.
- * It only stores the options. Drivers have to call
- * fb_get_options() as necessary.
- *
- * Returns zero.
- *
- */
-static int __init video_setup(char *options)
-{
- int i, global = 0;
-
- if (!options || !*options)
- global = 1;
-
- if (!global && !strncmp(options, "ofonly", 6)) {
- ofonly = 1;
- global = 1;
- }
-
- if (!global && !strchr(options, ':')) {
- fb_mode_option = options;
- global = 1;
- }
-
- if (!global) {
- for (i = 0; i < FB_MAX; i++) {
- if (video_options[i] == NULL) {
- video_options[i] = options;
- break;
- }
-
- }
- }
-
- return 1;
-}
-__setup("video=", video_setup);
-#endif
-
MODULE_LICENSE("GPL");
#define DPRINTK(fmt, args...)
#endif
-const char *fb_mode_option;
-EXPORT_SYMBOL_GPL(fb_mode_option);
-
/*
* Standard video mode definitions (taken from XFree86)
*/
{
u32 reg;
- reg = lcdc_read(LCD_RASTER_TIMING_0_REG) & 0xf;
+ reg = lcdc_read(LCD_RASTER_TIMING_0_REG) & 0x3ff;
reg |= (((back_porch-1) & 0xff) << 24)
| (((front_porch-1) & 0xff) << 16)
| (((pulse_width-1) & 0x3f) << 10);
if (native_mode)
of_node_put(native_mode);
display_timings_release(disp);
+ disp = NULL;
entryfail:
kfree(disp);
dispfail:
rc = add_memory(nid, hotplug_start_paddr, balloon_hotplug << PAGE_SHIFT);
if (rc) {
- pr_info("%s: add_memory() failed: %i\n", __func__, rc);
- return BP_EAGAIN;
+ pr_warn("Cannot add additional memory (%i)\n", rc);
+ return BP_ECANCELED;
}
balloon_hotplug -= credit;
int i, rc, readonly;
LIST_HEAD(queue_gref);
LIST_HEAD(queue_file);
- struct gntalloc_gref *gref;
+ struct gntalloc_gref *gref, *next;
readonly = !(op->flags & GNTALLOC_FLAG_WRITABLE);
rc = -ENOMEM;
goto undo;
/* Grant foreign access to the page. */
- gref->gref_id = gnttab_grant_foreign_access(op->domid,
+ rc = gnttab_grant_foreign_access(op->domid,
pfn_to_mfn(page_to_pfn(gref->page)), readonly);
- if ((int)gref->gref_id < 0) {
- rc = gref->gref_id;
+ if (rc < 0)
goto undo;
- }
- gref_ids[i] = gref->gref_id;
+ gref_ids[i] = gref->gref_id = rc;
}
/* Add to gref lists. */
mutex_lock(&gref_mutex);
gref_size -= (op->count - i);
- list_for_each_entry(gref, &queue_file, next_file) {
- /* __del_gref does not remove from queue_file */
+ list_for_each_entry_safe(gref, next, &queue_file, next_file) {
+ list_del(&gref->next_file);
__del_gref(gref);
}
gref->notify.flags = 0;
- if (gref->gref_id > 0) {
+ if (gref->gref_id) {
if (gnttab_query_foreign_access(gref->gref_id))
return;
shutting_down = SHUTDOWN_SUSPEND;
-#ifdef CONFIG_PREEMPT
- /* If the kernel is preemptible, we need to freeze all the processes
- to prevent them from being in the middle of a pagetable update
- during suspend. */
err = freeze_processes();
if (err) {
pr_err("%s: freeze failed %d\n", __func__, err);
goto out;
}
-#endif
err = dpm_suspend_start(PMSG_FREEZE);
if (err) {
dpm_resume_end(si.cancelled ? PMSG_THAW : PMSG_RESTORE);
out_thaw:
-#ifdef CONFIG_PREEMPT
thaw_processes();
out:
-#endif
shutting_down = SHUTDOWN_INVALID;
}
#endif /* CONFIG_HIBERNATE_CALLBACKS */
struct {
unsigned tail;
+ unsigned completed_events;
spinlock_t completion_lock;
} ____cacheline_aligned_in_smp;
for (i = 0; i < table->nr; ++i) {
struct kioctx *ctx = table->table[i];
+ struct completion requests_done =
+ COMPLETION_INITIALIZER_ONSTACK(requests_done);
if (!ctx)
continue;
* that it needs to unmap the area, just set it to 0.
*/
ctx->mmap_size = 0;
- kill_ioctx(mm, ctx, NULL);
+ kill_ioctx(mm, ctx, &requests_done);
+
+ /* Wait until all IO for the context are done. */
+ wait_for_completion(&requests_done);
}
RCU_INIT_POINTER(mm->ioctx_table, NULL);
return ret;
}
+/* refill_reqs_available
+ * Updates the reqs_available reference counts used for tracking the
+ * number of free slots in the completion ring. This can be called
+ * from aio_complete() (to optimistically update reqs_available) or
+ * from aio_get_req() (the "we're out of events" case). It must be
+ * called holding ctx->completion_lock.
+ */
+static void refill_reqs_available(struct kioctx *ctx, unsigned head,
+ unsigned tail)
+{
+ unsigned events_in_ring, completed;
+
+ /* Clamp head since userland can write to it. */
+ head %= ctx->nr_events;
+ if (head <= tail)
+ events_in_ring = tail - head;
+ else
+ events_in_ring = ctx->nr_events - (head - tail);
+
+ completed = ctx->completed_events;
+ if (events_in_ring < completed)
+ completed -= events_in_ring;
+ else
+ completed = 0;
+
+ if (!completed)
+ return;
+
+ ctx->completed_events -= completed;
+ put_reqs_available(ctx, completed);
+}
+
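
The occupancy computation above is plain circular-buffer arithmetic: when
head has wrapped past tail, the occupied span is the ring size minus the
gap between them. The same calculation in a self-contained model
(events_in_ring() is an invented name):

#include <stdio.h>

/* Number of events currently occupying a circular ring. */
static unsigned int events_in_ring(unsigned int head, unsigned int tail,
                                   unsigned int nr_events)
{
	head %= nr_events; /* clamp head: userland can scribble on it */

	if (head <= tail)
		return tail - head;
	return nr_events - (head - tail);
}

int main(void)
{
	/* 128-slot ring: first no wrap, then wrapped */
	printf("%u\n", events_in_ring(10, 40, 128));  /* 30 */
	printf("%u\n", events_in_ring(120, 8, 128));  /* 16 */
	return 0;
}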
+/* user_refill_reqs_available
+ * Called to refill reqs_available when aio_get_req() runs out of
+ * space in the completion ring.
+ */
+static void user_refill_reqs_available(struct kioctx *ctx)
+{
+ spin_lock_irq(&ctx->completion_lock);
+ if (ctx->completed_events) {
+ struct aio_ring *ring;
+ unsigned head;
+
+ /* Access of ring->head may race with aio_read_events_ring()
+ * here, but that's okay: whether we read the old version or the
+ * new version, either will be valid. The important
+ * part is that head cannot pass tail since we prevent
+ * aio_complete() from updating tail by holding
+ * ctx->completion_lock. Even if head is invalid, the check
+ * against ctx->completed_events below will make sure we do the
+ * safe/right thing.
+ */
+ ring = kmap_atomic(ctx->ring_pages[0]);
+ head = ring->head;
+ kunmap_atomic(ring);
+
+ refill_reqs_available(ctx, head, ctx->tail);
+ }
+
+ spin_unlock_irq(&ctx->completion_lock);
+}
+
/* aio_get_req
* Allocate a slot for an aio request.
* Returns NULL if no requests are free.
{
struct kiocb *req;
- if (!get_reqs_available(ctx))
- return NULL;
+ if (!get_reqs_available(ctx)) {
+ user_refill_reqs_available(ctx);
+ if (!get_reqs_available(ctx))
+ return NULL;
+ }
req = kmem_cache_alloc(kiocb_cachep, GFP_KERNEL|__GFP_ZERO);
if (unlikely(!req))
struct kioctx *ctx = iocb->ki_ctx;
struct aio_ring *ring;
struct io_event *ev_page, *event;
+ unsigned tail, pos, head;
unsigned long flags;
- unsigned tail, pos;
/*
* Special case handling for sync iocbs:
ctx->tail = tail;
ring = kmap_atomic(ctx->ring_pages[0]);
+ head = ring->head;
ring->tail = tail;
kunmap_atomic(ring);
flush_dcache_page(ctx->ring_pages[0]);
+ ctx->completed_events++;
+ if (ctx->completed_events > 1)
+ refill_reqs_available(ctx, head, tail);
spin_unlock_irqrestore(&ctx->completion_lock, flags);
pr_debug("added to ring %p at [%u]\n", iocb, tail);
/* everything turned out well, dispose of the aiocb. */
kiocb_free(iocb);
- put_reqs_available(ctx, 1);
/*
* We have to order our ring_info tail store above and test
tail = ring->tail;
kunmap_atomic(ring);
+ /*
+ * Ensure that once we've read the current tail pointer, we
+ * also see the events that were stored up to the tail.
+ */
+ smp_rmb();
+
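
The smp_rmb() pairs with the producer side in aio_complete(), which stores
the completed events before publishing the new tail. The same pairing in
generic C11 atomics, as an acquire/release analogue rather than the
kernel's own primitives (struct ring, produce() and consume() are invented
for the example):

#include <stdatomic.h>

struct ring {
	_Atomic unsigned int tail;
	int events[128];
};

/* producer: store the event, then publish it by moving tail */
static void produce(struct ring *r, unsigned int tail, int ev)
{
	r->events[tail % 128] = ev;
	/* release: the event store cannot be reordered after this */
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
}

/* consumer: read tail with acquire, then the events are visible */
static int consume(struct ring *r, unsigned int head)
{
	unsigned int tail = atomic_load_explicit(&r->tail,
						 memory_order_acquire);
	if (head == tail)
		return -1; /* ring empty */
	return r->events[head % 128];
}

int main(void)
{
	struct ring r = { 0 };

	produce(&r, 0, 42);
	return consume(&r, 0) == 42 ? 0 : 1;
}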
pr_debug("h%u t%u m%u\n", head, tail, ctx->nr_events);
if (head == tail)
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/freezer.h>
-#include <linux/workqueue.h>
#include "async-thread.h"
#include "ctree.h"
struct __btrfs_workqueue *high;
};
-static inline struct __btrfs_workqueue
-*__btrfs_alloc_workqueue(const char *name, int flags, int max_active,
+static void normal_work_helper(struct btrfs_work *work);
+
+#define BTRFS_WORK_HELPER(name) \
+void btrfs_##name(struct work_struct *arg) \
+{ \
+ struct btrfs_work *work = container_of(arg, struct btrfs_work, \
+ normal_work); \
+ normal_work_helper(work); \
+}
+
+BTRFS_WORK_HELPER(worker_helper);
+BTRFS_WORK_HELPER(delalloc_helper);
+BTRFS_WORK_HELPER(flush_delalloc_helper);
+BTRFS_WORK_HELPER(cache_helper);
+BTRFS_WORK_HELPER(submit_helper);
+BTRFS_WORK_HELPER(fixup_helper);
+BTRFS_WORK_HELPER(endio_helper);
+BTRFS_WORK_HELPER(endio_meta_helper);
+BTRFS_WORK_HELPER(endio_meta_write_helper);
+BTRFS_WORK_HELPER(endio_raid56_helper);
+BTRFS_WORK_HELPER(rmw_helper);
+BTRFS_WORK_HELPER(endio_write_helper);
+BTRFS_WORK_HELPER(freespace_write_helper);
+BTRFS_WORK_HELPER(delayed_meta_helper);
+BTRFS_WORK_HELPER(readahead_helper);
+BTRFS_WORK_HELPER(qgroup_rescan_helper);
+BTRFS_WORK_HELPER(extent_refs_helper);
+BTRFS_WORK_HELPER(scrub_helper);
+BTRFS_WORK_HELPER(scrubwrc_helper);
+BTRFS_WORK_HELPER(scrubnc_helper);
+
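
Each BTRFS_WORK_HELPER(name) invocation above stamps out a trampoline with
its own symbol; BTRFS_WORK_HELPER(endio_helper), for instance, expands to
roughly:

void btrfs_endio_helper(struct work_struct *arg)
{
	struct btrfs_work *work = container_of(arg, struct btrfs_work,
					       normal_work);
	normal_work_helper(work);
}

Giving every work type a distinct work function, instead of the single
shared normal_work_helper() entry point, presumably matters because the
workqueue core keys some of its bookkeeping on the work function; the rest
of this patch threads the matching helper through btrfs_init_work().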
+static struct __btrfs_workqueue *
+__btrfs_alloc_workqueue(const char *name, int flags, int max_active,
int thresh)
{
struct __btrfs_workqueue *ret = kzalloc(sizeof(*ret), GFP_NOFS);
spin_unlock_irqrestore(lock, flags);
}
-static void normal_work_helper(struct work_struct *arg)
+static void normal_work_helper(struct btrfs_work *work)
{
- struct btrfs_work *work;
struct __btrfs_workqueue *wq;
int need_order = 0;
- work = container_of(arg, struct btrfs_work, normal_work);
/*
* We should not touch things inside work in the following cases:
* 1) after work->func() if it has no ordered_free
trace_btrfs_all_work_done(work);
}
-void btrfs_init_work(struct btrfs_work *work,
+void btrfs_init_work(struct btrfs_work *work, btrfs_work_func_t uniq_func,
btrfs_func_t func,
btrfs_func_t ordered_func,
btrfs_func_t ordered_free)
work->func = func;
work->ordered_func = ordered_func;
work->ordered_free = ordered_free;
- INIT_WORK(&work->normal_work, normal_work_helper);
+ INIT_WORK(&work->normal_work, uniq_func);
INIT_LIST_HEAD(&work->ordered_list);
work->flags = 0;
}
#ifndef __BTRFS_ASYNC_THREAD_
#define __BTRFS_ASYNC_THREAD_
+#include <linux/workqueue.h>
struct btrfs_workqueue;
/* Internal use only */
struct __btrfs_workqueue;
struct btrfs_work;
typedef void (*btrfs_func_t)(struct btrfs_work *arg);
+typedef void (*btrfs_work_func_t)(struct work_struct *arg);
struct btrfs_work {
btrfs_func_t func;
unsigned long flags;
};
+#define BTRFS_WORK_HELPER_PROTO(name) \
+void btrfs_##name(struct work_struct *arg)
+
+BTRFS_WORK_HELPER_PROTO(worker_helper);
+BTRFS_WORK_HELPER_PROTO(delalloc_helper);
+BTRFS_WORK_HELPER_PROTO(flush_delalloc_helper);
+BTRFS_WORK_HELPER_PROTO(cache_helper);
+BTRFS_WORK_HELPER_PROTO(submit_helper);
+BTRFS_WORK_HELPER_PROTO(fixup_helper);
+BTRFS_WORK_HELPER_PROTO(endio_helper);
+BTRFS_WORK_HELPER_PROTO(endio_meta_helper);
+BTRFS_WORK_HELPER_PROTO(endio_meta_write_helper);
+BTRFS_WORK_HELPER_PROTO(endio_raid56_helper);
+BTRFS_WORK_HELPER_PROTO(rmw_helper);
+BTRFS_WORK_HELPER_PROTO(endio_write_helper);
+BTRFS_WORK_HELPER_PROTO(freespace_write_helper);
+BTRFS_WORK_HELPER_PROTO(delayed_meta_helper);
+BTRFS_WORK_HELPER_PROTO(readahead_helper);
+BTRFS_WORK_HELPER_PROTO(qgroup_rescan_helper);
+BTRFS_WORK_HELPER_PROTO(extent_refs_helper);
+BTRFS_WORK_HELPER_PROTO(scrub_helper);
+BTRFS_WORK_HELPER_PROTO(scrubwrc_helper);
+BTRFS_WORK_HELPER_PROTO(scrubnc_helper);
+
struct btrfs_workqueue *btrfs_alloc_workqueue(const char *name,
int flags,
int max_active,
int thresh);
-void btrfs_init_work(struct btrfs_work *work,
+void btrfs_init_work(struct btrfs_work *work, btrfs_work_func_t helper,
btrfs_func_t func,
btrfs_func_t ordered_func,
btrfs_func_t ordered_free);
return -ENOMEM;
async_work->delayed_root = delayed_root;
- btrfs_init_work(&async_work->work, btrfs_async_run_delayed_root,
- NULL, NULL);
+ btrfs_init_work(&async_work->work, btrfs_delayed_meta_helper,
+ btrfs_async_run_delayed_root, NULL, NULL);
async_work->nr = nr;
btrfs_queue_work(root->fs_info->delayed_workers, &async_work->work);
#include "btrfs_inode.h"
#include "volumes.h"
#include "print-tree.h"
-#include "async-thread.h"
#include "locking.h"
#include "tree-log.h"
#include "free-space-cache.h"
{
struct end_io_wq *end_io_wq = bio->bi_private;
struct btrfs_fs_info *fs_info;
+ struct btrfs_workqueue *wq;
+ btrfs_work_func_t func;
fs_info = end_io_wq->info;
end_io_wq->error = err;
- btrfs_init_work(&end_io_wq->work, end_workqueue_fn, NULL, NULL);
if (bio->bi_rw & REQ_WRITE) {
- if (end_io_wq->metadata == BTRFS_WQ_ENDIO_METADATA)
- btrfs_queue_work(fs_info->endio_meta_write_workers,
- &end_io_wq->work);
- else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_FREE_SPACE)
- btrfs_queue_work(fs_info->endio_freespace_worker,
- &end_io_wq->work);
- else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56)
- btrfs_queue_work(fs_info->endio_raid56_workers,
- &end_io_wq->work);
- else
- btrfs_queue_work(fs_info->endio_write_workers,
- &end_io_wq->work);
+ if (end_io_wq->metadata == BTRFS_WQ_ENDIO_METADATA) {
+ wq = fs_info->endio_meta_write_workers;
+ func = btrfs_endio_meta_write_helper;
+ } else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_FREE_SPACE) {
+ wq = fs_info->endio_freespace_worker;
+ func = btrfs_freespace_write_helper;
+ } else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56) {
+ wq = fs_info->endio_raid56_workers;
+ func = btrfs_endio_raid56_helper;
+ } else {
+ wq = fs_info->endio_write_workers;
+ func = btrfs_endio_write_helper;
+ }
} else {
- if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56)
- btrfs_queue_work(fs_info->endio_raid56_workers,
- &end_io_wq->work);
- else if (end_io_wq->metadata)
- btrfs_queue_work(fs_info->endio_meta_workers,
- &end_io_wq->work);
- else
- btrfs_queue_work(fs_info->endio_workers,
- &end_io_wq->work);
+ if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56) {
+ wq = fs_info->endio_raid56_workers;
+ func = btrfs_endio_raid56_helper;
+ } else if (end_io_wq->metadata) {
+ wq = fs_info->endio_meta_workers;
+ func = btrfs_endio_meta_helper;
+ } else {
+ wq = fs_info->endio_workers;
+ func = btrfs_endio_helper;
+ }
}
+
+ btrfs_init_work(&end_io_wq->work, func, end_workqueue_fn, NULL, NULL);
+ btrfs_queue_work(wq, &end_io_wq->work);
}
/*
async->submit_bio_start = submit_bio_start;
async->submit_bio_done = submit_bio_done;
- btrfs_init_work(&async->work, run_one_async_start,
+ btrfs_init_work(&async->work, btrfs_worker_helper, run_one_async_start,
run_one_async_done, run_one_async_free);
async->bio_flags = bio_flags;
btrfs_set_stack_device_generation(dev_item, 0);
btrfs_set_stack_device_type(dev_item, dev->type);
btrfs_set_stack_device_id(dev_item, dev->devid);
- btrfs_set_stack_device_total_bytes(dev_item, dev->total_bytes);
+ btrfs_set_stack_device_total_bytes(dev_item,
+ dev->disk_total_bytes);
btrfs_set_stack_device_bytes_used(dev_item, dev->bytes_used);
btrfs_set_stack_device_io_align(dev_item, dev->io_align);
btrfs_set_stack_device_io_width(dev_item, dev->io_width);
caching_ctl->block_group = cache;
caching_ctl->progress = cache->key.objectid;
atomic_set(&caching_ctl->count, 1);
- btrfs_init_work(&caching_ctl->work, caching_thread, NULL, NULL);
+ btrfs_init_work(&caching_ctl->work, btrfs_cache_helper,
+ caching_thread, NULL, NULL);
spin_lock(&cache->lock);
/*
async->sync = 0;
init_completion(&async->wait);
- btrfs_init_work(&async->work, delayed_ref_async_start,
- NULL, NULL);
+ btrfs_init_work(&async->work, btrfs_extent_refs_helper,
+ delayed_ref_async_start, NULL, NULL);
btrfs_queue_work(root->fs_info->extent_workers, &async->work);
*/
static u64 btrfs_reduce_alloc_profile(struct btrfs_root *root, u64 flags)
{
- /*
- * we add in the count of missing devices because we want
- * to make sure that any RAID levels on a degraded FS
- * continue to be honored.
- */
- u64 num_devices = root->fs_info->fs_devices->rw_devices +
- root->fs_info->fs_devices->missing_devices;
+ u64 num_devices = root->fs_info->fs_devices->rw_devices;
u64 target;
u64 tmp;
if (stripped)
return extended_to_chunk(stripped);
- /*
- * we add in the count of missing devices because we want
- * to make sure that any RAID levels on a degraded FS
- * continue to be honored.
- */
- num_devices = root->fs_info->fs_devices->rw_devices +
- root->fs_info->fs_devices->missing_devices;
+ num_devices = root->fs_info->fs_devices->rw_devices;
stripped = BTRFS_BLOCK_GROUP_RAID0 |
BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6 |
test_bit(BIO_UPTODATE, &bio->bi_flags);
if (err)
uptodate = 0;
+ offset += len;
continue;
}
}
return -ENOMEM;
path->leave_spinning = 1;
- start = ALIGN(start, BTRFS_I(inode)->root->sectorsize);
- len = ALIGN(len, BTRFS_I(inode)->root->sectorsize);
+ start = round_down(start, BTRFS_I(inode)->root->sectorsize);
+ len = round_up(max, BTRFS_I(inode)->root->sectorsize) - start;
/*
* lookup the last file extent. We're not using i_size here
{
if (filp->private_data)
btrfs_ioctl_trans_end(filp);
- filemap_flush(inode->i_mapping);
+ /*
+ * ordered_data_close is set by setattr when we are about to truncate
+ * a file from a non-zero size to a zero size. This tries to
+ * flush down new bytes that may have been written if the
+ * application were using truncate to replace a file in place.
+ */
+ if (test_and_clear_bit(BTRFS_INODE_ORDERED_DATA_CLOSE,
+ &BTRFS_I(inode)->runtime_flags))
+ filemap_flush(inode->i_mapping);
return 0;
}
btrfs_init_log_ctx(&ctx);
- ret = btrfs_log_dentry_safe(trans, root, dentry, &ctx);
+ ret = btrfs_log_dentry_safe(trans, root, dentry, start, end, &ctx);
if (ret < 0) {
/* Fallthrough and commit/free transaction. */
ret = 1;
goto out;
}
- if (hole_mergeable(inode, leaf, path->slots[0]+1, offset, end)) {
+ if (hole_mergeable(inode, leaf, path->slots[0], offset, end)) {
u64 num_bytes;
- path->slots[0]++;
key.offset = offset;
btrfs_set_item_key_safe(root, path, &key);
fi = btrfs_item_ptr(leaf, path->slots[0],
goto out_only_mutex;
}
- lockstart = round_up(offset , BTRFS_I(inode)->root->sectorsize);
+ lockstart = round_up(offset, BTRFS_I(inode)->root->sectorsize);
lockend = round_down(offset + len,
BTRFS_I(inode)->root->sectorsize) - 1;
same_page = ((offset >> PAGE_CACHE_SHIFT) ==
tail_start + tail_len, 0, 1);
if (ret)
goto out_only_mutex;
- }
+ }
}
}
ins.offset,
BTRFS_ORDERED_COMPRESSED,
async_extent->compress_type);
- if (ret)
+ if (ret) {
+ btrfs_drop_extent_cache(inode, async_extent->start,
+ async_extent->start +
+ async_extent->ram_size - 1, 0);
goto out_free_reserve;
+ }
/*
* clear dirty, set writeback and unlock the pages.
ret = btrfs_add_ordered_extent(inode, start, ins.objectid,
ram_size, cur_alloc_size, 0);
if (ret)
- goto out_reserve;
+ goto out_drop_extent_cache;
if (root->root_key.objectid ==
BTRFS_DATA_RELOC_TREE_OBJECTID) {
ret = btrfs_reloc_clone_csums(inode, start,
cur_alloc_size);
if (ret)
- goto out_reserve;
+ goto out_drop_extent_cache;
}
if (disk_num_bytes < cur_alloc_size)
out:
return ret;
+out_drop_extent_cache:
+ btrfs_drop_extent_cache(inode, start, start + ram_size - 1, 0);
out_reserve:
btrfs_free_reserved_extent(root, ins.objectid, ins.offset, 1);
out_unlock:
async_cow->end = cur_end;
INIT_LIST_HEAD(&async_cow->extents);
- btrfs_init_work(&async_cow->work, async_cow_start,
- async_cow_submit, async_cow_free);
+ btrfs_init_work(&async_cow->work,
+ btrfs_delalloc_helper,
+ async_cow_start, async_cow_submit,
+ async_cow_free);
nr_pages = (cur_end - start + PAGE_CACHE_SIZE) >>
PAGE_CACHE_SHIFT;
SetPageChecked(page);
page_cache_get(page);
- btrfs_init_work(&fixup->work, btrfs_writepage_fixup_worker, NULL, NULL);
+ btrfs_init_work(&fixup->work, btrfs_fixup_helper,
+ btrfs_writepage_fixup_worker, NULL, NULL);
fixup->page = page;
btrfs_queue_work(root->fs_info->fixup_workers, &fixup->work);
return -EBUSY;
struct inode *inode = page->mapping->host;
struct btrfs_root *root = BTRFS_I(inode)->root;
struct btrfs_ordered_extent *ordered_extent = NULL;
- struct btrfs_workqueue *workers;
+ struct btrfs_workqueue *wq;
+ btrfs_work_func_t func;
trace_btrfs_writepage_end_io_hook(page, start, end, uptodate);
end - start + 1, uptodate))
return 0;
- btrfs_init_work(&ordered_extent->work, finish_ordered_fn, NULL, NULL);
+ if (btrfs_is_free_space_inode(inode)) {
+ wq = root->fs_info->endio_freespace_worker;
+ func = btrfs_freespace_write_helper;
+ } else {
+ wq = root->fs_info->endio_write_workers;
+ func = btrfs_endio_write_helper;
+ }
- if (btrfs_is_free_space_inode(inode))
- workers = root->fs_info->endio_freespace_worker;
- else
- workers = root->fs_info->endio_write_workers;
- btrfs_queue_work(workers, &ordered_extent->work);
+ btrfs_init_work(&ordered_extent->work, func, finish_ordered_fn, NULL,
+ NULL);
+ btrfs_queue_work(wq, &ordered_extent->work);
return 0;
}
btrfs_abort_transaction(trans, root, ret);
}
error:
- if (last_size != (u64)-1)
+ if (last_size != (u64)-1 &&
+ root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID)
btrfs_ordered_update_i_size(inode, last_size, NULL);
btrfs_free_path(path);
return err;
clear_bit(EXTENT_FLAG_LOGGING, &em->flags);
remove_extent_mapping(map_tree, em);
free_extent_map(em);
+ if (need_resched()) {
+ write_unlock(&map_tree->lock);
+ cond_resched();
+ write_lock(&map_tree->lock);
+ }
}
write_unlock(&map_tree->lock);
&cached_state, GFP_NOFS);
free_extent_state(state);
+ cond_resched();
spin_lock(&io_tree->lock);
}
spin_unlock(&io_tree->lock);
iput(inode);
inode = ERR_PTR(ret);
}
+ /*
+ * If orphan cleanup did remove any orphans, it means the tree
+ * was modified and therefore the commit root is not the same as
+ * the current root anymore. This is a problem, because send
+ * uses the commit root and therefore can see inode items that
+ * don't exist in the current root anymore, and for example make
+ * calls to btrfs_iget, which will do tree lookups based on the
+ * current root and not on the commit root. Those lookups will
+ * fail, returning a -ESTALE error, and making send fail with
+ * that error. So make sure a send does not see any orphans we
+ * have just removed, and that it will see the same inodes
+ * regardless of whether a transaction commit happened before
+ * it started (meaning that the commit root will be the same as
+ * the current root) or not.
+ */
+ if (sub_root->node != sub_root->commit_root) {
+ u64 sub_flags = btrfs_root_flags(&sub_root->root_item);
+
+ if (sub_flags & BTRFS_ROOT_SUBVOL_RDONLY) {
+ struct extent_buffer *eb;
+
+ /*
+ * Assert we can't have races between dentry
+ * lookup called through the snapshot creation
+ * ioctl and the VFS.
+ */
+ ASSERT(mutex_is_locked(&dir->i_mutex));
+
+ down_write(&root->fs_info->commit_root_sem);
+ eb = sub_root->commit_root;
+ sub_root->commit_root =
+ btrfs_root_node(sub_root);
+ up_write(&root->fs_info->commit_root_sem);
+ free_extent_buffer(eb);
+ }
+ }
}
return inode;
return ret;
}
+static int btrfs_insert_inode_locked(struct inode *inode)
+{
+ struct btrfs_iget_args args;
+ args.location = &BTRFS_I(inode)->location;
+ args.root = BTRFS_I(inode)->root;
+
+ return insert_inode_locked4(inode,
+ btrfs_inode_hash(inode->i_ino, BTRFS_I(inode)->root),
+ btrfs_find_actor, &args);
+}
+
static struct inode *btrfs_new_inode(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
struct inode *dir,
return ERR_PTR(-ENOMEM);
}
+ /*
+ * O_TMPFILE, set link count to 0, so that after this point,
+ * we fill in an inode item with the correct link count.
+ */
+ if (!name)
+ set_nlink(inode, 0);
+
/*
* we have to initialize this early, so we can reclaim the inode
* number if we fail afterwards in this function.
sizes[1] = name_len + sizeof(*ref);
}
+ location = &BTRFS_I(inode)->location;
+ location->objectid = objectid;
+ location->offset = 0;
+ btrfs_set_key_type(location, BTRFS_INODE_ITEM_KEY);
+
+ ret = btrfs_insert_inode_locked(inode);
+ if (ret < 0)
+ goto fail;
+
path->leave_spinning = 1;
ret = btrfs_insert_empty_items(trans, root, path, key, sizes, nitems);
if (ret != 0)
- goto fail;
+ goto fail_unlock;
inode_init_owner(inode, dir, mode);
inode_set_bytes(inode, 0);
btrfs_mark_buffer_dirty(path->nodes[0]);
btrfs_free_path(path);
- location = &BTRFS_I(inode)->location;
- location->objectid = objectid;
- location->offset = 0;
- btrfs_set_key_type(location, BTRFS_INODE_ITEM_KEY);
-
btrfs_inherit_iflags(inode, dir);
if (S_ISREG(mode)) {
BTRFS_INODE_NODATASUM;
}
- btrfs_insert_inode_hash(inode);
inode_tree_add(inode);
trace_btrfs_inode_new(inode);
btrfs_ino(inode), root->root_key.objectid, ret);
return inode;
+
+fail_unlock:
+ unlock_new_inode(inode);
fail:
if (dir && name)
BTRFS_I(dir)->index_cnt--;
goto out_unlock;
}
- err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
- if (err) {
- drop_inode = 1;
- goto out_unlock;
- }
-
/*
* If the active LSM wants to access the inode during
* d_instantiate it needs these. Smack checks to see
* if the filesystem supports xattrs by looking at the
* ops vector.
*/
-
inode->i_op = &btrfs_special_inode_operations;
- err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
+ init_special_inode(inode, inode->i_mode, rdev);
+
+ err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
if (err)
- drop_inode = 1;
- else {
- init_special_inode(inode, inode->i_mode, rdev);
+ goto out_unlock_inode;
+
+ err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
+ if (err) {
+ goto out_unlock_inode;
+ } else {
btrfs_update_inode(trans, root, inode);
+ unlock_new_inode(inode);
d_instantiate(dentry, inode);
}
+
out_unlock:
btrfs_end_transaction(trans, root);
btrfs_balance_delayed_items(root);
iput(inode);
}
return err;
+
+out_unlock_inode:
+ drop_inode = 1;
+ unlock_new_inode(inode);
+ goto out_unlock;
+
}
static int btrfs_create(struct inode *dir, struct dentry *dentry,
goto out_unlock;
}
drop_inode_on_err = 1;
-
- err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
- if (err)
- goto out_unlock;
-
- err = btrfs_update_inode(trans, root, inode);
- if (err)
- goto out_unlock;
-
/*
* If the active LSM wants to access the inode during
* d_instantiate it needs these. Smack checks to see
*/
inode->i_fop = &btrfs_file_operations;
inode->i_op = &btrfs_file_inode_operations;
+ inode->i_mapping->a_ops = &btrfs_aops;
+ inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
+
+ err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
+ if (err)
+ goto out_unlock_inode;
+
+ err = btrfs_update_inode(trans, root, inode);
+ if (err)
+ goto out_unlock_inode;
err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
if (err)
- goto out_unlock;
+ goto out_unlock_inode;
- inode->i_mapping->a_ops = &btrfs_aops;
- inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
+ unlock_new_inode(inode);
d_instantiate(dentry, inode);
out_unlock:
btrfs_balance_delayed_items(root);
btrfs_btree_balance_dirty(root);
return err;
+
+out_unlock_inode:
+ unlock_new_inode(inode);
+ goto out_unlock;
+
}
static int btrfs_link(struct dentry *old_dentry, struct inode *dir,
}
drop_on_err = 1;
+ /* these must be set before we unlock the inode */
+ inode->i_op = &btrfs_dir_inode_operations;
+ inode->i_fop = &btrfs_dir_file_operations;
err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
if (err)
- goto out_fail;
-
- inode->i_op = &btrfs_dir_inode_operations;
- inode->i_fop = &btrfs_dir_file_operations;
+ goto out_fail_inode;
btrfs_i_size_write(inode, 0);
err = btrfs_update_inode(trans, root, inode);
if (err)
- goto out_fail;
+ goto out_fail_inode;
err = btrfs_add_link(trans, dir, inode, dentry->d_name.name,
dentry->d_name.len, 0, index);
if (err)
- goto out_fail;
+ goto out_fail_inode;
d_instantiate(dentry, inode);
+ /*
+ * mkdir is special. We're unlocking after we call d_instantiate
+ * to avoid a race with nfsd calling d_instantiate.
+ */
+ unlock_new_inode(inode);
drop_on_err = 0;
out_fail:
btrfs_balance_delayed_items(root);
btrfs_btree_balance_dirty(root);
return err;
+
+out_fail_inode:
+ unlock_new_inode(inode);
+ goto out_fail;
}
/* helper for btrfs_get_extent. Given an existing extent in the tree,
static int merge_extent_mapping(struct extent_map_tree *em_tree,
struct extent_map *existing,
struct extent_map *em,
- u64 map_start, u64 map_len)
+ u64 map_start)
{
u64 start_diff;
BUG_ON(map_start < em->start || map_start >= extent_map_end(em));
start_diff = map_start - em->start;
em->start = map_start;
- em->len = map_len;
+ em->len = existing->start - em->start;
if (em->block_start < EXTENT_MAP_LAST_BYTE &&
!test_bit(EXTENT_FLAG_COMPRESSED, &em->flags)) {
em->block_start += start_diff;
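A quick worked case for the new length computation, with made-up numbers: suppose an existing extent map covers [40960, 53248) and btrfs_get_extent built an em covering [0, 131072) for a lookup at start = 8192. Then em->start becomes 8192 and em->len becomes existing->start - em->start = 40960 - 8192 = 32768, so the merged em ends exactly where the existing one begins, rather than being clamped to the single sector the old caller passed in as map_len.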
goto not_found;
if (start + len <= found_key.offset)
goto not_found;
+ if (start > found_key.offset)
+ goto next;
em->start = start;
em->orig_start = start;
em->len = found_key.offset - start;
em->len);
if (existing) {
err = merge_extent_mapping(em_tree, existing,
- em, start,
- root->sectorsize);
+ em, start);
free_extent_map(existing);
if (err) {
free_extent_map(em);
if (!ret)
goto out_test;
- btrfs_init_work(&ordered->work, finish_ordered_fn, NULL, NULL);
+ btrfs_init_work(&ordered->work, btrfs_endio_write_helper,
+ finish_ordered_fn, NULL, NULL);
btrfs_queue_work(root->fs_info->endio_write_workers,
&ordered->work);
out_test:
map_length = orig_bio->bi_iter.bi_size;
ret = btrfs_map_block(root->fs_info, rw, start_sector << 9,
&map_length, NULL, 0);
- if (ret) {
- bio_put(orig_bio);
+ if (ret)
return -EIO;
- }
if (map_length >= orig_bio->bi_iter.bi_size) {
bio = orig_bio;
bio = btrfs_dio_bio_alloc(orig_bio->bi_bdev, start_sector, GFP_NOFS);
if (!bio)
return -ENOMEM;
+
bio->bi_private = dip;
bio->bi_end_io = btrfs_end_dio_bio;
atomic_inc(&dip->pending_bios);
count = iov_iter_count(iter);
if (test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
&BTRFS_I(inode)->runtime_flags))
- filemap_fdatawrite_range(inode->i_mapping, offset, count);
+ filemap_fdatawrite_range(inode->i_mapping, offset,
+ offset + count - 1);
if (rw & WRITE) {
/*
set_nlink(inode, 1);
btrfs_i_size_write(inode, 0);
+ unlock_new_inode(inode);
err = btrfs_subvol_inherit_props(trans, new_root, parent_root);
if (err)
work->inode = inode;
work->wait = wait;
work->delay_iput = delay_iput;
- btrfs_init_work(&work->work, btrfs_run_delalloc_work, NULL, NULL);
+ WARN_ON_ONCE(!inode);
+ btrfs_init_work(&work->work, btrfs_flush_delalloc_helper,
+ btrfs_run_delalloc_work, NULL, NULL);
return work;
}
goto out_unlock;
}
- err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
- if (err) {
- drop_inode = 1;
- goto out_unlock;
- }
-
/*
* If the active LSM wants to access the inode during
* d_instantiate it needs these. Smack checks to see
*/
inode->i_fop = &btrfs_file_operations;
inode->i_op = &btrfs_file_inode_operations;
+ inode->i_mapping->a_ops = &btrfs_aops;
+ inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
+ BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
+
+ err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
+ if (err)
+ goto out_unlock_inode;
err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
if (err)
- drop_inode = 1;
- else {
- inode->i_mapping->a_ops = &btrfs_aops;
- inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
- BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
- }
- if (drop_inode)
- goto out_unlock;
+ goto out_unlock_inode;
path = btrfs_alloc_path();
if (!path) {
err = -ENOMEM;
- drop_inode = 1;
- goto out_unlock;
+ goto out_unlock_inode;
}
key.objectid = btrfs_ino(inode);
key.offset = 0;
err = btrfs_insert_empty_item(trans, root, path, &key,
datasize);
if (err) {
- drop_inode = 1;
btrfs_free_path(path);
- goto out_unlock;
+ goto out_unlock_inode;
}
leaf = path->nodes[0];
ei = btrfs_item_ptr(leaf, path->slots[0],
inode_set_bytes(inode, name_len);
btrfs_i_size_write(inode, name_len);
err = btrfs_update_inode(trans, root, inode);
- if (err)
+ if (err) {
drop_inode = 1;
+ goto out_unlock_inode;
+ }
+
+ unlock_new_inode(inode);
+ d_instantiate(dentry, inode);
out_unlock:
- if (!err)
- d_instantiate(dentry, inode);
btrfs_end_transaction(trans, root);
if (drop_inode) {
inode_dec_link_count(inode);
}
btrfs_btree_balance_dirty(root);
return err;
+
+out_unlock_inode:
+ drop_inode = 1;
+ unlock_new_inode(inode);
+ goto out_unlock;
}
static int __btrfs_prealloc_file_range(struct inode *inode, int mode,
goto out;
}
- ret = btrfs_init_inode_security(trans, inode, dir, NULL);
- if (ret)
- goto out;
-
- ret = btrfs_update_inode(trans, root, inode);
- if (ret)
- goto out;
-
inode->i_fop = &btrfs_file_operations;
inode->i_op = &btrfs_file_inode_operations;
inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
+ ret = btrfs_init_inode_security(trans, inode, dir, NULL);
+ if (ret)
+ goto out_inode;
+
+ ret = btrfs_update_inode(trans, root, inode);
+ if (ret)
+ goto out_inode;
ret = btrfs_orphan_add(trans, inode);
if (ret)
- goto out;
+ goto out_inode;
+ /*
+ * We set number of links to 0 in btrfs_new_inode(), and here we set
+ * it to 1 because d_tmpfile() will issue a warning if the count is 0,
+ * through:
+ *
+ * d_tmpfile() -> inode_dec_link_count() -> drop_nlink()
+ */
+ set_nlink(inode, 1);
+ unlock_new_inode(inode);
d_tmpfile(dentry, inode);
mark_inode_dirty(inode);
iput(inode);
btrfs_balance_delayed_items(root);
btrfs_btree_balance_dirty(root);
-
return ret;
+
+out_inode:
+ unlock_new_inode(inode);
+ goto out;
+
}
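The nlink juggling above exists to satisfy the O_TMPFILE contract: the inode starts unnamed (nlink 0, on the orphan list) but can later be given a name, at which point the link count must be right. A small userspace demonstration of that lifecycle (assumes a kernel and filesystem with O_TMPFILE support; the paths are examples):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* unnamed inode: vanishes automatically if we just close it */
		int fd = open("/tmp", O_TMPFILE | O_RDWR, 0600);
		char path[64];

		if (fd < 0) {
			perror("open(O_TMPFILE)");
			return 1;
		}
		(void)write(fd, "scratch\n", 8);

		/* optionally give it a name; this bumps nlink from 0 to 1 */
		snprintf(path, sizeof(path), "/proc/self/fd/%d", fd);
		if (linkat(AT_FDCWD, path, AT_FDCWD, "/tmp/now-visible",
			   AT_SYMLINK_FOLLOW) < 0)
			perror("linkat");
		close(fd);
		return 0;
	}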
static const struct inode_operations btrfs_dir_inode_operations = {
if (ret)
goto fail;
- ret = btrfs_orphan_cleanup(pending_snapshot->snap);
- if (ret)
- goto fail;
-
- /*
- * If orphan cleanup did remove any orphans, it means the tree was
- * modified and therefore the commit root is not the same as the
- * current root anymore. This is a problem, because send uses the
- * commit root and therefore can see inode items that don't exist
- * in the current root anymore, and for example make calls to
- * btrfs_iget, which will do tree lookups based on the current root
- * and not on the commit root. Those lookups will fail, returning a
- * -ESTALE error, and making send fail with that error. So make sure
- * a send does not see any orphans we have just removed, and that it
- * will see the same inodes regardless of whether a transaction
- * commit happened before it started (meaning that the commit root
- * will be the same as the current root) or not.
- */
- if (readonly && pending_snapshot->snap->node !=
- pending_snapshot->snap->commit_root) {
- trans = btrfs_join_transaction(pending_snapshot->snap);
- if (IS_ERR(trans) && PTR_ERR(trans) != -ENOENT) {
- ret = PTR_ERR(trans);
- goto fail;
- }
- if (!IS_ERR(trans)) {
- ret = btrfs_commit_transaction(trans,
- pending_snapshot->snap);
- if (ret)
- goto fail;
- }
- }
-
inode = btrfs_lookup_dentry(dentry->d_parent->d_inode, dentry);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
return false;
next = defrag_lookup_extent(inode, em->start + em->len);
- if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE ||
- (em->block_start + em->block_len == next->block_start))
+ if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
+ ret = false;
+ else if ((em->block_start + em->block_len == next->block_start) &&
+ (em->block_len > 128 * 1024 && next->block_len > 128 * 1024))
ret = false;
free_extent_map(next);
}
next_mergeable = defrag_check_next_extent(inode, em);
-
/*
* we hit a real extent, if it is big or the next extent is not a
* real extent, don't bother defragging it
~(BTRFS_SUBVOL_CREATE_ASYNC | BTRFS_SUBVOL_RDONLY |
BTRFS_SUBVOL_QGROUP_INHERIT)) {
ret = -EOPNOTSUPP;
- goto out;
+ goto free_args;
}
if (vol_args->flags & BTRFS_SUBVOL_CREATE_ASYNC)
if (vol_args->flags & BTRFS_SUBVOL_QGROUP_INHERIT) {
if (vol_args->size > PAGE_CACHE_SIZE) {
ret = -EINVAL;
- goto out;
+ goto free_args;
}
inherit = memdup_user(vol_args->qgroup_inherit, vol_args->size);
if (IS_ERR(inherit)) {
ret = PTR_ERR(inherit);
- goto out;
+ goto free_args;
}
}
ret = btrfs_ioctl_snap_create_transid(file, vol_args->name,
vol_args->fd, subvol, ptr,
readonly, inherit);
+ if (ret)
+ goto free_inherit;
- if (ret == 0 && ptr &&
- copy_to_user(arg +
- offsetof(struct btrfs_ioctl_vol_args_v2,
- transid), ptr, sizeof(*ptr)))
+ if (ptr && copy_to_user(arg +
+ offsetof(struct btrfs_ioctl_vol_args_v2,
+ transid),
+ ptr, sizeof(*ptr)))
ret = -EFAULT;
-out:
- kfree(vol_args);
+
+free_inherit:
kfree(inherit);
+free_args:
+ kfree(vol_args);
return ret;
}
vol_args = memdup_user(arg, sizeof(*vol_args));
if (IS_ERR(vol_args)) {
ret = PTR_ERR(vol_args);
- goto out;
+ goto err_drop;
}
vol_args->name[BTRFS_PATH_NAME_MAX] = '\0';
out:
kfree(vol_args);
+err_drop:
mnt_drop_write_file(file);
return ret;
}
btrfs_mark_buffer_dirty(leaf);
btrfs_release_path(path);
- last_dest_end = new_key.offset + datal;
+ last_dest_end = ALIGN(new_key.offset + datal,
+ root->sectorsize);
ret = clone_finish_inode_update(trans, inode,
last_dest_end,
destoff, olen);
spin_unlock(&root->ordered_extent_lock);
btrfs_init_work(&ordered->flush_work,
+ btrfs_flush_delalloc_helper,
btrfs_run_ordered_extent_work, NULL, NULL);
list_add_tail(&ordered->work_list, &works);
btrfs_queue_work(root->fs_info->flush_workers,
elem.seq, &roots);
btrfs_put_tree_mod_seq(fs_info, &elem);
if (ret < 0)
- return ret;
+ goto out;
if (roots->nnodes != 1)
goto out;
memset(&fs_info->qgroup_rescan_work, 0,
sizeof(fs_info->qgroup_rescan_work));
btrfs_init_work(&fs_info->qgroup_rescan_work,
+ btrfs_qgroup_rescan_helper,
btrfs_qgroup_rescan_worker, NULL, NULL);
if (ret) {
static void async_rmw_stripe(struct btrfs_raid_bio *rbio)
{
- btrfs_init_work(&rbio->work, rmw_work, NULL, NULL);
+ btrfs_init_work(&rbio->work, btrfs_rmw_helper,
+ rmw_work, NULL, NULL);
btrfs_queue_work(rbio->fs_info->rmw_workers,
&rbio->work);
static void async_read_rebuild(struct btrfs_raid_bio *rbio)
{
- btrfs_init_work(&rbio->work, read_rebuild_work, NULL, NULL);
+ btrfs_init_work(&rbio->work, btrfs_rmw_helper,
+ read_rebuild_work, NULL, NULL);
btrfs_queue_work(rbio->fs_info->rmw_workers,
&rbio->work);
plug = container_of(cb, struct btrfs_plug_cb, cb);
if (from_schedule) {
- btrfs_init_work(&plug->work, unplug_work, NULL, NULL);
+ btrfs_init_work(&plug->work, btrfs_rmw_helper,
+ unplug_work, NULL, NULL);
btrfs_queue_work(plug->info->rmw_workers,
&plug->work);
return;
/* FIXME we cannot handle this properly right now */
BUG();
}
- btrfs_init_work(&rmw->work, reada_start_machine_worker, NULL, NULL);
+ btrfs_init_work(&rmw->work, btrfs_readahead_helper,
+ reada_start_machine_worker, NULL, NULL);
rmw->fs_info = fs_info;
btrfs_queue_work(fs_info->readahead_workers, &rmw->work);
sbio->index = i;
sbio->sctx = sctx;
sbio->page_count = 0;
- btrfs_init_work(&sbio->work, scrub_bio_end_io_worker,
- NULL, NULL);
+ btrfs_init_work(&sbio->work, btrfs_scrub_helper,
+ scrub_bio_end_io_worker, NULL, NULL);
if (i != SCRUB_BIOS_PER_SCTX - 1)
sctx->bios[i]->next_free = i + 1;
fixup_nodatasum->root = fs_info->extent_root;
fixup_nodatasum->mirror_num = failed_mirror_index + 1;
scrub_pending_trans_workers_inc(sctx);
- btrfs_init_work(&fixup_nodatasum->work, scrub_fixup_nodatasum,
- NULL, NULL);
+ btrfs_init_work(&fixup_nodatasum->work, btrfs_scrub_helper,
+ scrub_fixup_nodatasum, NULL, NULL);
btrfs_queue_work(fs_info->scrub_workers,
&fixup_nodatasum->work);
goto out;
sbio->err = err;
sbio->bio = bio;
- btrfs_init_work(&sbio->work, scrub_wr_bio_end_io_worker, NULL, NULL);
+ btrfs_init_work(&sbio->work, btrfs_scrubwrc_helper,
+ scrub_wr_bio_end_io_worker, NULL, NULL);
btrfs_queue_work(fs_info->scrub_wr_completion_workers, &sbio->work);
}
struct scrub_ctx *sctx;
int ret;
struct btrfs_device *dev;
+ struct rcu_string *name;
if (btrfs_fs_closing(fs_info))
return -EINVAL;
return -ENODEV;
}
+ if (!is_dev_replace && !readonly && !dev->writeable) {
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ rcu_read_lock();
+ name = rcu_dereference(dev->name);
+ btrfs_err(fs_info, "scrub: device %s is not writable",
+ name->str);
+ rcu_read_unlock();
+ return -EROFS;
+ }
+
mutex_lock(&fs_info->scrub_lock);
if (!dev->in_fs_metadata || dev->is_tgtdev_for_dev_replace) {
mutex_unlock(&fs_info->scrub_lock);
nocow_ctx->len = len;
nocow_ctx->mirror_num = mirror_num;
nocow_ctx->physical_for_dev_replace = physical_for_dev_replace;
- btrfs_init_work(&nocow_ctx->work, copy_nocow_pages_worker, NULL, NULL);
+ btrfs_init_work(&nocow_ctx->work, btrfs_scrubnc_helper,
+ copy_nocow_pages_worker, NULL, NULL);
INIT_LIST_HEAD(&nocow_ctx->inodes);
btrfs_queue_work(fs_info->scrub_nocow_workers,
&nocow_ctx->work);
if (!fs_info->device_dir_kobj)
return -EINVAL;
- if (one_device) {
+ if (one_device && one_device->bdev) {
disk = one_device->bdev->bd_part;
disk_kobj = &part_to_dev(disk)->kobj;
#define LOG_WALK_REPLAY_ALL 3
static int btrfs_log_inode(struct btrfs_trans_handle *trans,
- struct btrfs_root *root, struct inode *inode,
- int inode_only);
+ struct btrfs_root *root, struct inode *inode,
+ int inode_only,
+ const loff_t start,
+ const loff_t end);
static int link_to_fixup_dir(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
struct btrfs_path *path, u64 objectid);
struct list_head ordered_sums;
int skip_csum = BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM;
bool has_extents = false;
- bool need_find_last_extent = (*last_extent == 0);
+ bool need_find_last_extent = true;
bool done = false;
INIT_LIST_HEAD(&ordered_sums);
*/
if (ins_keys[i].type == BTRFS_EXTENT_DATA_KEY) {
has_extents = true;
- if (need_find_last_extent &&
- first_key.objectid == (u64)-1)
+ if (first_key.objectid == (u64)-1)
first_key = ins_keys[i];
} else {
need_find_last_extent = false;
if (!has_extents)
return ret;
+ if (need_find_last_extent && *last_extent == first_key.offset) {
+ /*
+ * We don't have any leaves between our current one and the one
+ * we processed before that can have file extent items for our
+ * inode (and have a generation number smaller than our current
+ * transaction id).
+ */
+ need_find_last_extent = false;
+ }
+
/*
* Because we use btrfs_search_forward we could skip leaves that were
* not modified and then assume *last_extent is valid when it really
0, 0);
if (ret)
break;
- *last_extent = offset + len;
+ *last_extent = extent_end;
}
/*
* Need to let the callers know we dropped the path so they should
* This handles both files and directories.
*/
static int btrfs_log_inode(struct btrfs_trans_handle *trans,
- struct btrfs_root *root, struct inode *inode,
- int inode_only)
+ struct btrfs_root *root, struct inode *inode,
+ int inode_only,
+ const loff_t start,
+ const loff_t end)
{
struct btrfs_path *path;
struct btrfs_path *dst_path;
int ins_nr;
bool fast_search = false;
u64 ino = btrfs_ino(inode);
+ struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
path = btrfs_alloc_path();
if (!path)
goto out_unlock;
}
} else if (inode_only == LOG_INODE_ALL) {
- struct extent_map_tree *tree = &BTRFS_I(inode)->extent_tree;
struct extent_map *em, *n;
- write_lock(&tree->lock);
- list_for_each_entry_safe(em, n, &tree->modified_extents, list)
- list_del_init(&em->list);
- write_unlock(&tree->lock);
+ write_lock(&em_tree->lock);
+ /*
+ * We can't just remove every em if we're called for a ranged
+ * fsync - that is, one that doesn't cover the whole possible
+ * file range (0 to LLONG_MAX). This is because we can have
+ * em's that fall outside the range we're logging and therefore
+ * their ordered operations haven't completed yet
+ * (btrfs_finish_ordered_io() not invoked yet). This means we
+ * didn't get their respective file extent item in the fs/subvol
+ * tree yet, and need to let the next fast fsync (one which
+ * consults the list of modified extent maps) find the em so
+ * that it logs a matching file extent item and waits for the
+ * respective ordered operation to complete (if it's still
+ * running).
+ *
+ * Removing every em outside the range we're logging would make
+ * the next fast fsync not log their matching file extent items,
+ * therefore making us lose data after a log replay.
+ */
+ list_for_each_entry_safe(em, n, &em_tree->modified_extents,
+ list) {
+ const u64 mod_end = em->mod_start + em->mod_len - 1;
+
+ if (em->mod_start >= start && mod_end <= end)
+ list_del_init(&em->list);
+ }
+ write_unlock(&em_tree->lock);
}
if (inode_only == LOG_INODE_ALL && S_ISDIR(inode->i_mode)) {
goto out_unlock;
}
}
- BTRFS_I(inode)->logged_trans = trans->transid;
- BTRFS_I(inode)->last_log_commit = BTRFS_I(inode)->last_sub_trans;
+
+ write_lock(&em_tree->lock);
+ /*
+ * If we're doing a ranged fsync and there are still modified extents
+ * in the list, we must run again on the next fsync call, as it might
+ * cover those extents (a full fsync or an fsync for another range).
+ */
+ if (list_empty(&em_tree->modified_extents)) {
+ BTRFS_I(inode)->logged_trans = trans->transid;
+ BTRFS_I(inode)->last_log_commit =
+ BTRFS_I(inode)->last_sub_trans;
+ }
+ write_unlock(&em_tree->lock);
out_unlock:
if (unlikely(err))
btrfs_put_logged_extents(&logged_list);
*/
static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
struct btrfs_root *root, struct inode *inode,
- struct dentry *parent, int exists_only,
+ struct dentry *parent,
+ const loff_t start,
+ const loff_t end,
+ int exists_only,
struct btrfs_log_ctx *ctx)
{
int inode_only = exists_only ? LOG_INODE_EXISTS : LOG_INODE_ALL;
if (ret)
goto end_no_trans;
- ret = btrfs_log_inode(trans, root, inode, inode_only);
+ ret = btrfs_log_inode(trans, root, inode, inode_only, start, end);
if (ret)
goto end_trans;
if (BTRFS_I(inode)->generation >
root->fs_info->last_trans_committed) {
- ret = btrfs_log_inode(trans, root, inode, inode_only);
+ ret = btrfs_log_inode(trans, root, inode, inode_only,
+ 0, LLONG_MAX);
if (ret)
goto end_trans;
}
*/
int btrfs_log_dentry_safe(struct btrfs_trans_handle *trans,
struct btrfs_root *root, struct dentry *dentry,
+ const loff_t start,
+ const loff_t end,
struct btrfs_log_ctx *ctx)
{
struct dentry *parent = dget_parent(dentry);
int ret;
ret = btrfs_log_inode_parent(trans, root, dentry->d_inode, parent,
- 0, ctx);
+ start, end, 0, ctx);
dput(parent);
return ret;
root->fs_info->last_trans_committed))
return 0;
- return btrfs_log_inode_parent(trans, root, inode, parent, 1, NULL);
+ return btrfs_log_inode_parent(trans, root, inode, parent, 0,
+ LLONG_MAX, 1, NULL);
}
int btrfs_recover_log_trees(struct btrfs_root *tree_root);
int btrfs_log_dentry_safe(struct btrfs_trans_handle *trans,
struct btrfs_root *root, struct dentry *dentry,
+ const loff_t start,
+ const loff_t end,
struct btrfs_log_ctx *ctx);
int btrfs_del_dir_entries_in_log(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
ret = 1;
device->fs_devices = fs_devices;
} else if (!device->name || strcmp(device->name->str, path)) {
+ /*
+ * When the FS is already mounted:
+ * 1. If you are here and if the device->name is NULL, that
+ * means this device was missing at the time of the FS mount.
+ * 2. If you are here and if the device->name is different
+ * from 'path', that means either
+ * a. the same device disappeared and reappeared with a
+ * different name, or
+ * b. the missing-disk-which-was-replaced has
+ * reappeared now.
+ *
+ * We must allow 1 and 2a above, but 2b is spurious and
+ * unintentional.
+ *
+ * Further, in case of 1 and 2a above, the disk at 'path'
+ * would have missed some transactions while it was away, and
+ * in case of 2a the stale bdev has to be updated as well.
+ * 2b must not be allowed at any time.
+ */
+
+ /*
+ * For now, don't allow updating a btrfs_fs_device through
+ * the btrfs dev scan CLI after the FS has been mounted.
+ */
+ if (fs_devices->opened) {
+ return -EBUSY;
+ } else {
+ /*
+ * That is, if the FS is _not_ mounted and you are
+ * here, there is more than one disk with the same
+ * uuid and devid. We keep the one with the larger
+ * generation number, or the last-in if the
+ * generations are equal.
+ */
+ if (found_transid < device->generation)
+ return -EEXIST;
+ }
+
name = rcu_string_strdup(path, GFP_NOFS);
if (!name)
return -ENOMEM;
}
}
+ /*
+ * Unmount does not free the btrfs_device struct, but it zeroes
+ * the generation along with most of the other members. So just
+ * update it back; we need it to pick the disk with the largest
+ * generation (as above).
+ */
+ if (!fs_devices->opened)
+ device->generation = found_transid;
+
if (found_transid > fs_devices->latest_trans) {
fs_devices->latest_devid = devid;
fs_devices->latest_trans = found_transid;
btrfs_set_device_io_align(leaf, dev_item, device->io_align);
btrfs_set_device_io_width(leaf, dev_item, device->io_width);
btrfs_set_device_sector_size(leaf, dev_item, device->sector_size);
- btrfs_set_device_total_bytes(leaf, dev_item, device->total_bytes);
+ btrfs_set_device_total_bytes(leaf, dev_item, device->disk_total_bytes);
btrfs_set_device_bytes_used(leaf, dev_item, device->bytes_used);
btrfs_set_device_group(leaf, dev_item, 0);
btrfs_set_device_seek_speed(leaf, dev_item, 0);
device->fs_devices->total_devices--;
if (device->missing)
- root->fs_info->fs_devices->missing_devices--;
+ device->fs_devices->missing_devices--;
next_device = list_entry(root->fs_info->fs_devices->devices.next,
struct btrfs_device, dev_list);
if (srcdev->bdev) {
fs_info->fs_devices->open_devices--;
- /* zero out the old super */
- btrfs_scratch_superblock(srcdev);
+ /*
+ * zero out the old super only if the device is
+ * writable; a seed device is read-only and must
+ * keep its superblock
+ */
+ if (srcdev->writeable)
+ btrfs_scratch_superblock(srcdev);
}
call_rcu(&srcdev->rcu, free_device);
fs_devices->seeding = 0;
fs_devices->num_devices = 0;
fs_devices->open_devices = 0;
+ fs_devices->missing_devices = 0;
+ fs_devices->num_can_discard = 0;
+ fs_devices->rotating = 0;
fs_devices->seed = seed_devices;
generate_random_uuid(fs_devices->fsid);
else
generate_random_uuid(dev->uuid);
- btrfs_init_work(&dev->work, pending_bios_fn, NULL, NULL);
+ btrfs_init_work(&dev->work, btrfs_submit_helper,
+ pending_bios_fn, NULL, NULL);
return dev;
}
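All of the btrfs_init_work() calls rewritten in the hunks above gain the same new first argument: a per-user helper function. The underlying idea (sketched here with illustrative names and a hypothetical macro, not the actual btrfs helpers) is to give each work type its own trampoline, so otherwise-identical work items have distinct function addresses that the workqueue core and tracing can tell apart:

	struct my_work {
		struct work_struct wq;
		void (*func)(struct my_work *w);
	};

	/*
	 * One trampoline per work type; the bodies are identical, only
	 * the function address differs.
	 */
	#define DEFINE_WORK_HELPER(name)					\
	void name(struct work_struct *arg)					\
	{									\
		struct my_work *w = container_of(arg, struct my_work, wq);	\
		w->func(w);	/* dispatch to the real handler */		\
	}

	DEFINE_WORK_HELPER(endio_write_helper)
	DEFINE_WORK_HELPER(flush_delalloc_helper)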
support for OS/2 and Windows ME and similar servers is provided as
well.
+ The module also provides optional support for the follow-on
+ protocols for CIFS, including SMB3, which enable
+ useful performance and security features (see the description
+ of CONFIG_CIFS_SMB2).
+
The cifs module provides an advanced network file system
client for mounting to CIFS compliant servers. It includes
support for DFS (hierarchical name space), secure per-user
depends on CIFS_XATTR && KEYS
help
Allows fetching CIFS/NTFS ACL from the server. The DACL blob
- is handed over to the application/caller.
+ is handed over to the application/caller. See the man
+ page for getcifsacl for more information.
config CIFS_DEBUG
bool "Enable CIFS debugging routines"
Allows NFS server to export a CIFS mounted share (nfsd over cifs)
config CIFS_SMB2
- bool "SMB2 network file system support"
+ bool "SMB2 and SMB3 network file system support"
depends on CIFS && INET
select NLS
select KEYS
select DNS_RESOLVER
help
- This enables experimental support for the SMB2 (Server Message Block
- version 2) protocol. The SMB2 protocol is the successor to the
- popular CIFS and SMB network file sharing protocols. SMB2 is the
- native file sharing mechanism for recent versions of Windows
- operating systems (since Vista). SMB2 enablement will eventually
- allow users better performance, security and features, than would be
- possible with cifs. Note that smb2 mount options also are simpler
- (compared to cifs) due to protocol improvements.
-
- Unless you are a developer or tester, say N.
+ This enables support for the Server Message Block version 2
+ family of protocols, including SMB3. SMB3 support is
+ enabled on mount by specifying "vers=3.0" in the mount
+ options. These protocols are the successors to the popular
+ CIFS and SMB network file sharing protocols. SMB3 is the
+ native file sharing mechanism for recent versions of
+ Windows (Windows 8 and Windows Server 2012 and later),
+ and Samba and many other servers support SMB3 well.
+ In general SMB3 enables better performance, security
+ and features than would be possible with CIFS. (Note that
+ when mounting to Samba, due to the CIFS POSIX extensions,
+ CIFS mounts can provide slightly better POSIX compatibility
+ than SMB3 mounts do.) Note that SMB2/SMB3 mount
+ options are also slightly simpler (compared to CIFS) due
+ to protocol improvements.
config CIFS_FSCACHE
bool "Provide CIFS client caching support"
return 0;
}
+static long cifs_fallocate(struct file *file, int mode, loff_t off, loff_t len)
+{
+ struct super_block *sb = file->f_path.dentry->d_sb;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+ struct TCP_Server_Info *server = tcon->ses->server;
+
+ if (server->ops->fallocate)
+ return server->ops->fallocate(file, tcon, mode, off, len);
+
+ return -EOPNOTSUPP;
+}
+
static int cifs_permission(struct inode *inode, int mask)
{
struct cifs_sb_info *cifs_sb;
if (!(S_ISREG(inode->i_mode)))
return -EINVAL;
- /* check if file is oplocked */
- if (((arg == F_RDLCK) && CIFS_CACHE_READ(CIFS_I(inode))) ||
+ /* Check if file is oplocked if this is a request for a new lease */
+ if (arg == F_UNLCK ||
+ ((arg == F_RDLCK) && CIFS_CACHE_READ(CIFS_I(inode))) ||
((arg == F_WRLCK) && CIFS_CACHE_WRITE(CIFS_I(inode))))
return generic_setlease(file, arg, lease);
else if (tlink_tcon(cfile->tlink)->local_lease &&
.unlocked_ioctl = cifs_ioctl,
#endif /* CONFIG_CIFS_POSIX */
.setlease = cifs_setlease,
+ .fallocate = cifs_fallocate,
};
const struct file_operations cifs_file_strict_ops = {
.unlocked_ioctl = cifs_ioctl,
#endif /* CONFIG_CIFS_POSIX */
.setlease = cifs_setlease,
+ .fallocate = cifs_fallocate,
};
const struct file_operations cifs_file_direct_ops = {
#endif /* CONFIG_CIFS_POSIX */
.llseek = cifs_llseek,
.setlease = cifs_setlease,
+ .fallocate = cifs_fallocate,
};
const struct file_operations cifs_file_nobrl_ops = {
.unlocked_ioctl = cifs_ioctl,
#endif /* CONFIG_CIFS_POSIX */
.setlease = cifs_setlease,
+ .fallocate = cifs_fallocate,
};
const struct file_operations cifs_file_strict_nobrl_ops = {
.unlocked_ioctl = cifs_ioctl,
#endif /* CONFIG_CIFS_POSIX */
.setlease = cifs_setlease,
+ .fallocate = cifs_fallocate,
};
const struct file_operations cifs_file_direct_nobrl_ops = {
#endif /* CONFIG_CIFS_POSIX */
.llseek = cifs_llseek,
.setlease = cifs_setlease,
+ .fallocate = cifs_fallocate,
};
const struct file_operations cifs_dir_ops = {
#define SERVER_NAME_LENGTH 40
#define SERVER_NAME_LEN_WITH_NULL (SERVER_NAME_LENGTH + 1)
-/* used to define string lengths for reversing unicode strings */
-/* (256+1)*2 = 514 */
-/* (max path length + 1 for null) * 2 for unicode */
-#define MAX_NAME 514
-
/* SMB echo "timeout" -- FIXME: tunable? */
#define SMB_ECHO_INTERVAL (60 * HZ)
/* get mtu credits */
int (*wait_mtu_credits)(struct TCP_Server_Info *, unsigned int,
unsigned int *, unsigned int *);
+ /* check if we need to issue closedir */
+ bool (*dir_needs_close)(struct cifsFileInfo *);
+ long (*fallocate)(struct file *, struct cifs_tcon *, int, loff_t,
+ loff_t);
};
struct smb_version_values {
for this mount even if server would support */
bool local_lease:1; /* check leases (only) on local system not remote */
bool broken_posix_open; /* e.g. Samba server versions < 3.3.2, 3.2.9 */
+ bool broken_sparse_sup; /* if server or share does not support sparse */
bool need_reconnect:1; /* connection reset, tid now invalid */
#ifdef CONFIG_CIFS_SMB2
bool print:1; /* set if connection to printer share */
/* minimum includes first three fields, and empty FS Name */
#define MIN_FS_ATTR_INFO_SIZE 12
+
+/* List of FileSystemAttributes - see 2.5.1 of MS-FSCC */
+#define FILE_SUPPORT_INTEGRITY_STREAMS 0x04000000
+#define FILE_SUPPORTS_USN_JOURNAL 0x02000000
+#define FILE_SUPPORTS_OPEN_BY_FILE_ID 0x01000000
+#define FILE_SUPPORTS_EXTENDED_ATTRIBUTES 0x00800000
+#define FILE_SUPPORTS_HARD_LINKS 0x00400000
+#define FILE_SUPPORTS_TRANSACTIONS 0x00200000
+#define FILE_SEQUENTIAL_WRITE_ONCE 0x00100000
+#define FILE_READ_ONLY_VOLUME 0x00080000
+#define FILE_NAMED_STREAMS 0x00040000
+#define FILE_SUPPORTS_ENCRYPTION 0x00020000
+#define FILE_SUPPORTS_OBJECT_IDS 0x00010000
+#define FILE_VOLUME_IS_COMPRESSED 0x00008000
+#define FILE_SUPPORTS_REMOTE_STORAGE 0x00000100
+#define FILE_SUPPORTS_REPARSE_POINTS 0x00000080
+#define FILE_SUPPORTS_SPARSE_FILES 0x00000040
+#define FILE_VOLUME_QUOTAS 0x00000020
+#define FILE_FILE_COMPRESSION 0x00000010
+#define FILE_PERSISTENT_ACLS 0x00000008
+#define FILE_UNICODE_ON_DISK 0x00000004
+#define FILE_CASE_PRESERVED_NAMES 0x00000002
+#define FILE_CASE_SENSITIVE_SEARCH 0x00000001
typedef struct {
__le32 Attributes;
__le32 MaxPathNameComponentLength;
struct TCP_Server_Info *server = p;
unsigned int pdu_length;
char *buf = NULL;
- struct task_struct *task_to_wake = NULL;
struct mid_q_entry *mid_entry;
current->flags |= PF_MEMALLOC;
if (server->smallbuf) /* no sense logging a debug message if NULL */
cifs_small_buf_release(server->smallbuf);
- task_to_wake = xchg(&server->tsk, NULL);
clean_demultiplex_info(server);
-
- /* if server->tsk was NULL then wait for a signal before exiting */
- if (!task_to_wake) {
- set_current_state(TASK_INTERRUPTIBLE);
- while (!signal_pending(current)) {
- schedule();
- set_current_state(TASK_INTERRUPTIBLE);
- }
- set_current_state(TASK_RUNNING);
- }
-
module_put_and_exit(0);
}
tmp_end++;
if (!(tmp_end < end && tmp_end[1] == delim)) {
/* No it is not. Set the password to NULL */
+ kfree(vol->password);
vol->password = NULL;
break;
}
options = end;
}
+ kfree(vol->password);
/* Now build new password string */
temp_len = strlen(value);
vol->password = kzalloc(temp_len+1, GFP_KERNEL);
static void
cifs_put_tcp_session(struct TCP_Server_Info *server)
{
- struct task_struct *task;
-
spin_lock(&cifs_tcp_ses_lock);
if (--server->srv_count > 0) {
spin_unlock(&cifs_tcp_ses_lock);
kfree(server->session_key.response);
server->session_key.response = NULL;
server->session_key.len = 0;
-
- task = xchg(&server->tsk, NULL);
- if (task)
- force_sig(SIGKILL, task);
}
static struct TCP_Server_Info *
goto out;
}
+ if (file->f_flags & O_DIRECT &&
+ CIFS_SB(inode->i_sb)->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) {
+ if (CIFS_SB(inode->i_sb)->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)
+ file->f_op = &cifs_file_direct_nobrl_ops;
+ else
+ file->f_op = &cifs_file_direct_ops;
+ }
+
file_info = cifs_new_fileinfo(&fid, file, tlink, oplock);
if (file_info == NULL) {
if (server->ops->close)
cifs_dbg(FYI, "inode = 0x%p file flags are 0x%x for %s\n",
inode, file->f_flags, full_path);
+ if (file->f_flags & O_DIRECT &&
+ cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) {
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)
+ file->f_op = &cifs_file_direct_nobrl_ops;
+ else
+ file->f_op = &cifs_file_direct_ops;
+ }
+
if (server->oplocks)
oplock = REQ_OPLOCK;
else
cifs_dbg(FYI, "Freeing private data in close dir\n");
spin_lock(&cifs_file_list_lock);
- if (!cfile->srch_inf.endOfSearch && !cfile->invalidHandle) {
+ if (server->ops->dir_needs_close(cfile)) {
cfile->invalidHandle = true;
spin_unlock(&cifs_file_list_lock);
if (server->ops->close_dir)
unlink_target:
/* Try unlinking the target dentry if it's not negative */
if (target_dentry->d_inode && (rc == -EACCES || rc == -EEXIST)) {
- tmprc = cifs_unlink(target_dir, target_dentry);
+ if (d_is_dir(target_dentry))
+ tmprc = cifs_rmdir(target_dir, target_dentry);
+ else
+ tmprc = cifs_unlink(target_dir, target_dentry);
if (tmprc)
goto cifs_rename_exit;
rc = cifs_do_rename(xid, source_dentry, from_name,
target_dentry, to_name);
}
+ /* force revalidate to go get info when needed */
+ CIFS_I(source_dir)->time = CIFS_I(target_dir)->time = 0;
+
+ source_dir->i_ctime = source_dir->i_mtime = target_dir->i_ctime =
+ target_dir->i_mtime = current_fs_time(source_dir->i_sb);
+
cifs_rename_exit:
kfree(info_buf_source);
kfree(from_name);
cinode->oplock = 0;
}
-static int
-cifs_oplock_break_wait(void *unused)
-{
- schedule();
- return signal_pending(current) ? -ERESTARTSYS : 0;
-}
-
/*
* We wait for oplock breaks to be processed before we attempt to perform
* writes.
/* close and restart search */
cifs_dbg(FYI, "search backing up - close and restart search\n");
spin_lock(&cifs_file_list_lock);
- if (!cfile->srch_inf.endOfSearch && !cfile->invalidHandle) {
+ if (server->ops->dir_needs_close(cfile)) {
cfile->invalidHandle = true;
spin_unlock(&cifs_file_list_lock);
- if (server->ops->close)
- server->ops->close(xid, tcon, &cfile->fid);
+ if (server->ops->close_dir)
+ server->ops->close_dir(xid, tcon, &cfile->fid);
} else
spin_unlock(&cifs_file_list_lock);
if (cfile->srch_inf.ntwrk_buf_start) {
kfree(ses->serverOS);
ses->serverOS = kzalloc(len + 1, GFP_KERNEL);
- if (ses->serverOS)
+ if (ses->serverOS) {
strncpy(ses->serverOS, bcc_ptr, len);
- if (strncmp(ses->serverOS, "OS/2", 4) == 0)
- cifs_dbg(FYI, "OS/2 server\n");
+ if (strncmp(ses->serverOS, "OS/2", 4) == 0)
+ cifs_dbg(FYI, "OS/2 server\n");
+ }
bcc_ptr += len + 1;
bleft -= len + 1;
return CIFS_SB(inode->i_sb)->wsize;
}
+static bool
+cifs_dir_needs_close(struct cifsFileInfo *cfile)
+{
+ return !cfile->srch_inf.endOfSearch && !cfile->invalidHandle;
+}
+
struct smb_version_operations smb1_operations = {
.send_cancel = send_nt_cancel,
.compare_fids = cifs_compare_fids,
.create_mf_symlink = cifs_create_mf_symlink,
.is_read_op = cifs_is_read_op,
.wp_retry_size = cifs_wp_retry_size,
+ .dir_needs_close = cifs_dir_needs_close,
#ifdef CONFIG_CIFS_XATTR
.query_all_EAs = CIFSSMBQAllEAs,
.set_EA = CIFSSMBSetEA,
goto out;
}
- smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2,
+ smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
GFP_KERNEL);
if (smb2_data == NULL) {
rc = -ENOMEM;
*adjust_tz = false;
*symlink = false;
- smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2,
+ smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
GFP_KERNEL);
if (smb2_data == NULL)
return -ENOMEM;
{STATUS_BREAKPOINT, -EIO, "STATUS_BREAKPOINT"},
{STATUS_SINGLE_STEP, -EIO, "STATUS_SINGLE_STEP"},
{STATUS_BUFFER_OVERFLOW, -EIO, "STATUS_BUFFER_OVERFLOW"},
- {STATUS_NO_MORE_FILES, -EIO, "STATUS_NO_MORE_FILES"},
+ {STATUS_NO_MORE_FILES, -ENODATA, "STATUS_NO_MORE_FILES"},
{STATUS_WAKE_SYSTEM_DEBUGGER, -EIO, "STATUS_WAKE_SYSTEM_DEBUGGER"},
{STATUS_HANDLES_CLOSED, -EIO, "STATUS_HANDLES_CLOSED"},
{STATUS_NO_INHERITANCE, -EIO, "STATUS_NO_INHERITANCE"},
{STATUS_INVALID_PARAMETER, -EINVAL, "STATUS_INVALID_PARAMETER"},
{STATUS_NO_SUCH_DEVICE, -ENODEV, "STATUS_NO_SUCH_DEVICE"},
{STATUS_NO_SUCH_FILE, -ENOENT, "STATUS_NO_SUCH_FILE"},
- {STATUS_INVALID_DEVICE_REQUEST, -EIO, "STATUS_INVALID_DEVICE_REQUEST"},
+ {STATUS_INVALID_DEVICE_REQUEST, -EOPNOTSUPP, "STATUS_INVALID_DEVICE_REQUEST"},
{STATUS_END_OF_FILE, -ENODATA, "STATUS_END_OF_FILE"},
{STATUS_WRONG_VOLUME, -EIO, "STATUS_WRONG_VOLUME"},
{STATUS_NO_MEDIA_IN_DEVICE, -EIO, "STATUS_NO_MEDIA_IN_DEVICE"},
/* Windows 7 server returns 24 bytes more */
if (clc_len + 20 == len && command == SMB2_OPLOCK_BREAK_HE)
return 0;
- /* server can return one byte more */
+ /* server can return one byte more due to implied bcc[0] */
if (clc_len == 4 + len + 1)
return 0;
+
+ /*
+ * MacOS server pads after SMB2.1 write response with 3 bytes
+ * of junk. Other servers match the RFC1001 len to the actual
+ * SMB2/SMB3 frame length (header + smb2 response specific data).
+ * Log the server error (once), but allow it and continue
+ * since the frame is parseable.
+ */
+ if (clc_len < 4 /* RFC1001 header size */ + len) {
+ printk_once(KERN_WARNING
+ "SMB2 server sent bad RFC1001 len %d not %d\n",
+ len, clc_len - 4);
+ return 0;
+ }
+
return 1;
}
return 0;
int rc;
struct smb2_file_all_info *smb2_data;
- smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2,
+ smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
GFP_KERNEL);
if (smb2_data == NULL)
return -ENOMEM;
return SMB2_write(xid, parms, written, iov, nr_segs);
}
+/* Set or clear the SPARSE_FILE attribute based on value passed in setsparse */
+static bool smb2_set_sparse(const unsigned int xid, struct cifs_tcon *tcon,
+ struct cifsFileInfo *cfile, struct inode *inode, __u8 setsparse)
+{
+ struct cifsInodeInfo *cifsi;
+ int rc;
+
+ cifsi = CIFS_I(inode);
+
+ /* if file already sparse don't bother setting sparse again */
+ if ((cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) && setsparse)
+ return true; /* already sparse */
+
+ if (!(cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) && !setsparse)
+ return true; /* already not sparse */
+
+ /*
+ * Can't check for sparse support on the share the usual way, via
+ * the FS attribute info (FILE_SUPPORTS_SPARSE_FILES), since the
+ * Samba server doesn't set the flag on the share yet still
+ * supports the set-sparse FSCTL and reports sparse correctly
+ * in the file attributes. If we fail to set sparse, though, we
+ * mark that the server does not support sparse files for this
+ * share, to avoid repeatedly sending the unsupported fsctl if
+ * the file is repeatedly extended.
+ */
+ if (tcon->broken_sparse_sup)
+ return false;
+
+ rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid, FSCTL_SET_SPARSE,
+ true /* is_fctl */, &setsparse, 1, NULL, NULL);
+ if (rc) {
+ tcon->broken_sparse_sup = true;
+ cifs_dbg(FYI, "set sparse rc = %d\n", rc);
+ return false;
+ }
+
+ if (setsparse)
+ cifsi->cifsAttrs |= FILE_ATTRIBUTE_SPARSE_FILE;
+ else
+ cifsi->cifsAttrs &= (~FILE_ATTRIBUTE_SPARSE_FILE);
+
+ return true;
+}
+
static int
smb2_set_file_size(const unsigned int xid, struct cifs_tcon *tcon,
struct cifsFileInfo *cfile, __u64 size, bool set_alloc)
{
__le64 eof = cpu_to_le64(size);
+ struct inode *inode;
+
+ /*
+ * If extending the file by more than one page, make it sparse. Many
+ * Linux filesystems make files sparse by default when extending via
+ * ftruncate.
+ */
+ inode = cfile->dentry->d_inode;
+
+ if (!set_alloc && (size > inode->i_size + 8192)) {
+ __u8 set_sparse = 1;
+
+ /* whether set sparse succeeds or not, extend the file */
+ smb2_set_sparse(xid, tcon, cfile, inode, set_sparse);
+ }
+
return SMB2_set_eof(xid, tcon, cfile->fid.persistent_fid,
cfile->fid.volatile_fid, cfile->pid, &eof, false);
}
return rc;
}
+static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+ loff_t offset, loff_t len, bool keep_size)
+{
+ struct inode *inode;
+ struct cifsInodeInfo *cifsi;
+ struct cifsFileInfo *cfile = file->private_data;
+ struct file_zero_data_information fsctl_buf;
+ long rc;
+ unsigned int xid;
+
+ xid = get_xid();
+
+ inode = cfile->dentry->d_inode;
+ cifsi = CIFS_I(inode);
+
+ /* if file not oplocked can't be sure whether asking to extend size */
+ if (!CIFS_CACHE_READ(cifsi))
+ if (keep_size == false)
+ return -EOPNOTSUPP;
+
+ /*
+ * Must check if file sparse since fallocate -z (zero range) assumes
+ * non-sparse allocation
+ */
+ if (!(cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE))
+ return -EOPNOTSUPP;
+
+ /*
+ * need to make sure we are not asked to extend the file since the SMB3
+ * fsctl does not change the file size. In the future we could change
+ * this to zero the first part of the range and then set the file
+ * size, which for a non-sparse file would zero the newly extended
+ * range.
+ */
+ if (keep_size == false)
+ if (i_size_read(inode) < offset + len)
+ return -EOPNOTSUPP;
+
+ cifs_dbg(FYI, "offset %lld len %lld", offset, len);
+
+ fsctl_buf.FileOffset = cpu_to_le64(offset);
+ fsctl_buf.BeyondFinalZero = cpu_to_le64(offset + len);
+
+ rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid, FSCTL_SET_ZERO_DATA,
+ true /* is_fctl */, (char *)&fsctl_buf,
+ sizeof(struct file_zero_data_information), NULL, NULL);
+ free_xid(xid);
+ return rc;
+}
+
+static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ loff_t offset, loff_t len)
+{
+ struct inode *inode;
+ struct cifsInodeInfo *cifsi;
+ struct cifsFileInfo *cfile = file->private_data;
+ struct file_zero_data_information fsctl_buf;
+ long rc;
+ unsigned int xid;
+ __u8 set_sparse = 1;
+
+ xid = get_xid();
+
+ inode = cfile->dentry->d_inode;
+ cifsi = CIFS_I(inode);
+
+ /* Need to make file sparse, if not already, before freeing range. */
+ /* Consider adding equivalent for compressed since it could also work */
+ if (!smb2_set_sparse(xid, tcon, cfile, inode, set_sparse))
+ return -EOPNOTSUPP;
+
+ cifs_dbg(FYI, "offset %lld len %lld", offset, len);
+
+ fsctl_buf.FileOffset = cpu_to_le64(offset);
+ fsctl_buf.BeyondFinalZero = cpu_to_le64(offset + len);
+
+ rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid, FSCTL_SET_ZERO_DATA,
+ true /* is_fctl */, (char *)&fsctl_buf,
+ sizeof(struct file_zero_data_information), NULL, NULL);
+ free_xid(xid);
+ return rc;
+}
+
+static long smb3_fallocate(struct file *file, struct cifs_tcon *tcon, int mode,
+ loff_t off, loff_t len)
+{
+ /* KEEP_SIZE already checked for by do_fallocate */
+ if (mode & FALLOC_FL_PUNCH_HOLE)
+ return smb3_punch_hole(file, tcon, off, len);
+ else if (mode & FALLOC_FL_ZERO_RANGE) {
+ if (mode & FALLOC_FL_KEEP_SIZE)
+ return smb3_zero_range(file, tcon, off, len, true);
+ return smb3_zero_range(file, tcon, off, len, false);
+ }
+
+ return -EOPNOTSUPP;
+}
+
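With the smb3_fallocate() dispatcher above wired into the ops vector, a plain fallocate(2) call on a vers=3.0 mount reaches the server as the corresponding FSCTL. A short userspace probe (the mount point path is an example; unsupported combinations surface as EOPNOTSUPP, exactly as the fallbacks above return):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <linux/falloc.h>

	int main(void)
	{
		int fd = open("/mnt/smb3/testfile", O_RDWR);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* punch a 64 KiB hole; KEEP_SIZE is mandatory with PUNCH_HOLE */
		if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			      0, 65536) < 0)
			perror("punch hole");

		/* zero a range in place without changing the file size */
		if (fallocate(fd, FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE,
			      65536, 65536) < 0)
			perror("zero range");
		close(fd);
		return 0;
	}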
static void
smb2_downgrade_oplock(struct TCP_Server_Info *server,
struct cifsInodeInfo *cinode, bool set_level2)
SMB2_MAX_BUFFER_SIZE);
}
+static bool
+smb2_dir_needs_close(struct cifsFileInfo *cfile)
+{
+ return !cfile->invalidHandle;
+}
+
struct smb_version_operations smb20_operations = {
.compare_fids = smb2_compare_fids,
.setup_request = smb2_setup_request,
.parse_lease_buf = smb2_parse_lease_buf,
.clone_range = smb2_clone_range,
.wp_retry_size = smb2_wp_retry_size,
+ .dir_needs_close = smb2_dir_needs_close,
};
struct smb_version_operations smb21_operations = {
.parse_lease_buf = smb2_parse_lease_buf,
.clone_range = smb2_clone_range,
.wp_retry_size = smb2_wp_retry_size,
+ .dir_needs_close = smb2_dir_needs_close,
};
struct smb_version_operations smb30_operations = {
.clone_range = smb2_clone_range,
.validate_negotiate = smb3_validate_negotiate,
.wp_retry_size = smb2_wp_retry_size,
+ .dir_needs_close = smb2_dir_needs_close,
+ .fallocate = smb3_fallocate,
};
struct smb_version_values smb20_values = {
struct smb2_sess_setup_rsp *rsp = NULL;
struct kvec iov[2];
int rc = 0;
- int resp_buftype;
+ int resp_buftype = CIFS_NO_BUFFER;
__le32 phase = NtLmNegotiate; /* NTLMSSP, if needed, is multistage */
struct TCP_Server_Info *server = ses->server;
u16 blob_length = 0;
tcon_error_exit:
if (rsp->hdr.Status == STATUS_BAD_NETWORK_NAME) {
cifs_dbg(VFS, "BAD_NETWORK_NAME: %s\n", tree);
- tcon->bad_network_name = true;
+ if (tcon)
+ tcon->bad_network_name = true;
}
goto tcon_exit;
}
cifs_dbg(FYI, "SMB2 IOCTL\n");
- *out_data = NULL;
+ if (out_data != NULL)
+ *out_data = NULL;
+
/* zero out returned data len, in case of error */
if (plen)
*plen = 0;
rsp = (struct smb2_close_rsp *)iov[0].iov_base;
if (rc != 0) {
- if (tcon)
- cifs_stats_fail_inc(tcon, SMB2_CLOSE_HE);
+ cifs_stats_fail_inc(tcon, SMB2_CLOSE_HE);
goto close_exit;
}
{
return query_info(xid, tcon, persistent_fid, volatile_fid,
FILE_ALL_INFORMATION,
- sizeof(struct smb2_file_all_info) + MAX_NAME * 2,
+ sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
sizeof(struct smb2_file_all_info), data);
}
rsp = (struct smb2_query_directory_rsp *)iov[0].iov_base;
if (rc) {
+ if (rc == -ENODATA && rsp->hdr.Status == STATUS_NO_MORE_FILES) {
+ srch_inf->endOfSearch = true;
+ rc = 0;
+ }
cifs_stats_fail_inc(tcon, SMB2_QUERY_DIRECTORY_HE);
goto qdir_exit;
}
else
cifs_dbg(VFS, "illegal search buffer type\n");
- if (rsp->hdr.Status == STATUS_NO_MORE_FILES)
- srch_inf->endOfSearch = 1;
- else
- srch_inf->endOfSearch = 0;
-
return rc;
qdir_exit:
__u32 Reserved2;
} __packed;
+/* this goes in the ioctl buffer when doing FSCTL_SET_ZERO_DATA */
+struct file_zero_data_information {
+ __le64 FileOffset;
+ __le64 BeyondFinalZero;
+} __packed;
+
struct copychunk_ioctl_rsp {
__le32 ChunksWritten;
__le32 ChunkBytesWritten;
#define FSCTL_SET_OBJECT_ID_EXTENDED 0x000900BC /* BB add struct */
#define FSCTL_CREATE_OR_GET_OBJECT_ID 0x000900C0 /* BB add struct */
#define FSCTL_SET_SPARSE 0x000900C4 /* BB add struct */
-#define FSCTL_SET_ZERO_DATA 0x000900C8 /* BB add struct */
+#define FSCTL_SET_ZERO_DATA 0x000980C8
#define FSCTL_SET_ENCRYPTION 0x000900D7 /* BB add struct */
#define FSCTL_ENCRYPTION_FSCTL_IO 0x000900DB /* BB add struct */
#define FSCTL_WRITE_RAW_ENCRYPTED 0x000900DF /* BB add struct */
unsigned int hash)
{
hash += (unsigned long) parent / L1_CACHE_BYTES;
- hash = hash + (hash >> d_hash_shift);
- return dentry_hashtable + (hash & d_hash_mask);
+ return dentry_hashtable + hash_32(hash, d_hash_shift);
}
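The one-liner above replaces the hand-rolled fold with hash_32(), a multiplicative hash that keeps the top d_hash_shift bits of the product. A self-contained illustration of the technique (the multiplier below is one classic golden-ratio constant; the kernel's exact constant has varied between releases):

	#include <stdint.h>
	#include <stdio.h>

	static inline uint32_t hash_32(uint32_t val, unsigned int bits)
	{
		/* multiply, then keep the top 'bits' bits of the product */
		return (val * 0x9e370001u) >> (32 - bits);
	}

	int main(void)
	{
		unsigned int d_hash_shift = 12;	/* a 4096-bucket table */
		uint32_t h = 0xdeadbeefu;	/* name hash mixed with parent */

		printf("bucket %u of %u\n",
		       hash_32(h, d_hash_shift), 1u << d_hash_shift);
		return 0;
	}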
/* Statistics gathering. */
dentry->d_parent = dentry;
list_del_init(&dentry->d_u.d_child);
anon->d_parent = dparent;
+ if (likely(!d_unhashed(anon))) {
+ hlist_bl_lock(&anon->d_sb->s_anon);
+ __hlist_bl_del(&anon->d_hash);
+ anon->d_hash.pprev = NULL;
+ hlist_bl_unlock(&anon->d_sb->s_anon);
+ }
list_move(&anon->d_u.d_child, &dparent->d_subdirs);
write_seqcount_end(&dentry->d_seq);
write_seqlock(&rename_lock);
__d_materialise_dentry(dentry, new);
write_sequnlock(&rename_lock);
- __d_drop(new);
_d_rehash(new);
spin_unlock(&new->d_lock);
spin_unlock(&inode->i_lock);
* could splice into our tree? */
__d_materialise_dentry(dentry, alias);
write_sequnlock(&rename_lock);
- __d_drop(alias);
goto found;
} else {
/* Nope, but we must(!) avoid directory
goto error_tgt_fput;
/* Check if EPOLLWAKEUP is allowed */
- ep_take_care_of_epollwakeup(&epds);
+ if (ep_op_has_event(op))
+ ep_take_care_of_epollwakeup(&epds);
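The fix above matters because EPOLL_CTL_DEL legitimately passes no event structure at all, so epds holds uninitialised stack garbage and must not be sanitised. A minimal userspace illustration of the two call shapes involved:

	#include <sys/epoll.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int pipefd[2], epfd;
		struct epoll_event ev;

		if (pipe(pipefd) < 0 || (epfd = epoll_create1(0)) < 0) {
			perror("setup");
			return 1;
		}
		/* EPOLLWAKEUP is silently dropped without CAP_BLOCK_SUSPEND */
		ev.events = EPOLLIN | EPOLLWAKEUP;
		ev.data.fd = pipefd[0];
		if (epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev) < 0)
			perror("EPOLL_CTL_ADD");

		/* DEL takes no event: NULL is legal, nothing to sanitise */
		if (epoll_ctl(epfd, EPOLL_CTL_DEL, pipefd[0], NULL) < 0)
			perror("EPOLL_CTL_DEL");

		close(epfd);
		close(pipefd[0]);
		close(pipefd[1]);
		return 0;
	}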
/*
* We have to check that the file structure underneath the file descriptor
*/
overhead += ngroups * (2 + sbi->s_itb_per_group);
- /* Add the journal blocks as well */
- overhead += sbi->s_journal->j_maxlen;
+ /* Add the internal journal blocks as well */
+ if (sbi->s_journal && !sbi->journal_bdev)
+ overhead += sbi->s_journal->j_maxlen;
sbi->s_overhead_last = overhead;
smp_wmb();
/*
* Special error return code only used by dx_probe() and its callers.
*/
-#define ERR_BAD_DX_DIR -75000
+#define ERR_BAD_DX_DIR (-(MAX_ERRNO - 1))
/*
* Timeout and state flag for lazy initialization inode thread.
up_write(&EXT4_I(inode)->i_data_sem);
}
+/* Update i_size, i_disksize. Requires i_mutex to avoid races with truncate */
+static inline int ext4_update_inode_size(struct inode *inode, loff_t newsize)
+{
+ int changed = 0;
+
+ if (newsize > inode->i_size) {
+ i_size_write(inode, newsize);
+ changed = 1;
+ }
+ if (newsize > EXT4_I(inode)->i_disksize) {
+ ext4_update_i_disksize(inode, newsize);
+ changed |= 2;
+ }
+ return changed;
+}
+
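The helper above folds the two size checks every caller used to open-code into one place and reports what changed via a small bitmask. A hedged sketch of how a caller consumes it (bit 0: i_size grew, bit 1: i_disksize grew), matching the "& 0x1" test that appears later in ext4_alloc_file_blocks():

	int changed = ext4_update_inode_size(inode, pos + copied);

	if (changed & 0x1)		/* visible size moved */
		inode->i_mtime = inode->i_ctime;
	if (changed)			/* either size moved: inode is dirty */
		ext4_mark_inode_dirty(handle, inode);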
struct ext4_group_info {
unsigned long bb_state;
struct rb_root bb_free_root;
}
static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
- ext4_lblk_t len, int flags, int mode)
+ ext4_lblk_t len, loff_t new_size,
+ int flags, int mode)
{
struct inode *inode = file_inode(file);
handle_t *handle;
int retries = 0;
struct ext4_map_blocks map;
unsigned int credits;
+ loff_t epos;
map.m_lblk = offset;
+ map.m_len = len;
/*
* Don't normalize the request if it can fit in one extent so
* that it doesn't get unnecessarily split into multiple
credits = ext4_chunk_trans_blocks(inode, len);
retry:
- while (ret >= 0 && ret < len) {
- map.m_lblk = map.m_lblk + ret;
- map.m_len = len = len - ret;
+ while (ret >= 0 && len) {
handle = ext4_journal_start(inode, EXT4_HT_MAP_BLOCKS,
credits);
if (IS_ERR(handle)) {
ret2 = ext4_journal_stop(handle);
break;
}
+ map.m_lblk += ret;
+ map.m_len = len = len - ret;
+ epos = (loff_t)map.m_lblk << inode->i_blkbits;
+ inode->i_ctime = ext4_current_time(inode);
+ if (new_size) {
+ if (epos > new_size)
+ epos = new_size;
+ if (ext4_update_inode_size(inode, epos) & 0x1)
+ inode->i_mtime = inode->i_ctime;
+ } else {
+ if (epos > inode->i_size)
+ ext4_set_inode_flag(inode,
+ EXT4_INODE_EOFBLOCKS);
+ }
+ ext4_mark_inode_dirty(handle, inode);
ret2 = ext4_journal_stop(handle);
if (ret2)
break;
loff_t new_size = 0;
int ret = 0;
int flags;
- int partial;
+ int credits;
+ int partial_begin, partial_end;
loff_t start, end;
ext4_lblk_t lblk;
struct address_space *mapping = inode->i_mapping;
if (start < offset || end > offset + len)
return -EINVAL;
- partial = (offset + len) & ((1 << blkbits) - 1);
+ partial_begin = offset & ((1 << blkbits) - 1);
+ partial_end = (offset + len) & ((1 << blkbits) - 1);
lblk = start >> blkbits;
max_blocks = (end >> blkbits);
* If we have a partial block after EOF we have to allocate
* the entire block.
*/
- if (partial)
+ if (partial_end)
max_blocks += 1;
}
/* Now release the pages and zero block aligned part of pages*/
truncate_pagecache_range(inode, start, end - 1);
+ inode->i_mtime = inode->i_ctime = ext4_current_time(inode);
/* Wait all existing dio workers, newcomers will block on i_mutex */
ext4_inode_block_unlocked_dio(inode);
if (ret)
goto out_dio;
- ret = ext4_alloc_file_blocks(file, lblk, max_blocks, flags,
- mode);
+ ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size,
+ flags, mode);
if (ret)
goto out_dio;
}
+ if (!partial_begin && !partial_end)
+ goto out_dio;
- handle = ext4_journal_start(inode, EXT4_HT_MISC, 4);
+ /*
+ * In the worst case we have to write out two nonadjacent unwritten
+ * blocks and update the inode.
+ */
+ credits = (2 * ext4_ext_index_trans_blocks(inode, 2)) + 1;
+ if (ext4_should_journal_data(inode))
+ credits += 2;
+ handle = ext4_journal_start(inode, EXT4_HT_MISC, credits);
if (IS_ERR(handle)) {
ret = PTR_ERR(handle);
ext4_std_error(inode->i_sb, ret);
}
inode->i_mtime = inode->i_ctime = ext4_current_time(inode);
-
if (new_size) {
- if (new_size > i_size_read(inode))
- i_size_write(inode, new_size);
- if (new_size > EXT4_I(inode)->i_disksize)
- ext4_update_i_disksize(inode, new_size);
+ ext4_update_inode_size(inode, new_size);
} else {
/*
* Mark that we allocate beyond EOF so the subsequent truncate
if ((offset + len) > i_size_read(inode))
ext4_set_inode_flag(inode, EXT4_INODE_EOFBLOCKS);
}
-
ext4_mark_inode_dirty(handle, inode);
/* Zero out partial block at the edges of the range */
long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
{
struct inode *inode = file_inode(file);
- handle_t *handle;
loff_t new_size = 0;
unsigned int max_blocks;
int ret = 0;
int flags;
ext4_lblk_t lblk;
- struct timespec tv;
unsigned int blkbits = inode->i_blkbits;
/* Return error if mode is not supported */
goto out;
}
- ret = ext4_alloc_file_blocks(file, lblk, max_blocks, flags, mode);
+ ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size,
+ flags, mode);
if (ret)
goto out;
- handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);
- if (IS_ERR(handle))
- goto out;
-
- tv = inode->i_ctime = ext4_current_time(inode);
-
- if (new_size) {
- if (new_size > i_size_read(inode)) {
- i_size_write(inode, new_size);
- inode->i_mtime = tv;
- }
- if (new_size > EXT4_I(inode)->i_disksize)
- ext4_update_i_disksize(inode, new_size);
- } else {
- /*
- * Mark that we allocate beyond EOF so the subsequent truncate
- * can proceed even if the new size is the same as i_size.
- */
- if ((offset + len) > i_size_read(inode))
- ext4_set_inode_flag(inode, EXT4_INODE_EOFBLOCKS);
+ if (file->f_flags & O_SYNC && EXT4_SB(inode->i_sb)->s_journal) {
+ ret = jbd2_complete_transaction(EXT4_SB(inode->i_sb)->s_journal,
+ EXT4_I(inode)->i_sync_tid);
}
- ext4_mark_inode_dirty(handle, inode);
- if (file->f_flags & O_SYNC)
- ext4_handle_sync(handle);
-
- ext4_journal_stop(handle);
out:
mutex_unlock(&inode->i_mutex);
trace_ext4_fallocate_exit(inode, offset, max_blocks, ret);
} else
copied = block_write_end(file, mapping, pos,
len, copied, page, fsdata);
-
/*
- * No need to use i_size_read() here, the i_size
- * cannot change under us because we hole i_mutex.
- *
- * But it's important to update i_size while still holding page lock:
+ * it's important to update i_size while still holding page lock:
* page writeout could otherwise come in and zero beyond i_size.
*/
- if (pos + copied > inode->i_size) {
- i_size_write(inode, pos + copied);
- i_size_changed = 1;
- }
-
- if (pos + copied > EXT4_I(inode)->i_disksize) {
- /* We need to mark inode dirty even if
- * new_i_size is less that inode->i_size
- * but greater than i_disksize. (hint delalloc)
- */
- ext4_update_i_disksize(inode, (pos + copied));
- i_size_changed = 1;
- }
+ i_size_changed = ext4_update_inode_size(inode, pos + copied);
unlock_page(page);
page_cache_release(page);
int ret = 0, ret2;
int partial = 0;
unsigned from, to;
- loff_t new_i_size;
+ int size_changed = 0;
trace_ext4_journalled_write_end(inode, pos, len, copied);
from = pos & (PAGE_CACHE_SIZE - 1);
if (!partial)
SetPageUptodate(page);
}
- new_i_size = pos + copied;
- if (new_i_size > inode->i_size)
- i_size_write(inode, pos+copied);
+ size_changed = ext4_update_inode_size(inode, pos + copied);
ext4_set_inode_state(inode, EXT4_STATE_JDATA);
EXT4_I(inode)->i_datasync_tid = handle->h_transaction->t_tid;
- if (new_i_size > EXT4_I(inode)->i_disksize) {
- ext4_update_i_disksize(inode, new_i_size);
+ unlock_page(page);
+ page_cache_release(page);
+
+ if (size_changed) {
ret2 = ext4_mark_inode_dirty(handle, inode);
if (!ret)
ret = ret2;
}
- unlock_page(page);
- page_cache_release(page);
if (pos + len > inode->i_size && ext4_can_truncate(inode))
/* if we have allocated more blocks and copied
* less. We will have blocks allocated outside
struct ext4_map_blocks *map = &mpd->map;
int err;
loff_t disksize;
+ int progress = 0;
mpd->io_submit.io_end->offset =
((loff_t)map->m_lblk) << inode->i_blkbits;
* is non-zero, a commit should free up blocks.
*/
if ((err == -ENOMEM) ||
- (err == -ENOSPC && ext4_count_free_clusters(sb)))
+ (err == -ENOSPC && ext4_count_free_clusters(sb))) {
+ if (progress)
+ goto update_disksize;
return err;
+ }
ext4_msg(sb, KERN_CRIT,
"Delayed block allocation failed for "
"inode %lu at logical offset %llu with"
*give_up_on_write = true;
return err;
}
+ progress = 1;
/*
* Update buffer state, submit mapped pages, and get us new
* extent to map
*/
err = mpage_map_and_submit_buffers(mpd);
if (err < 0)
- return err;
+ goto update_disksize;
} while (map->m_len);
+update_disksize:
/*
* Update on-disk size after IO is submitted. Races with
* truncate are avoided by checking i_size under i_data_sem.
int last = first + count - 1;
struct super_block *sb = e4b->bd_sb;
+ if (WARN_ON(count == 0))
+ return;
BUG_ON(last >= (sb->s_blocksize << 3));
assert_spin_locked(ext4_group_lock_ptr(sb, e4b->bd_group));
/* Don't bother if the block group is corrupt. */
int err;
if (pa == NULL) {
+ if (ac->ac_f_ex.fe_len == 0)
+ return;
err = ext4_mb_load_buddy(ac->ac_sb, ac->ac_f_ex.fe_group, &e4b);
if (err) {
/*
mb_free_blocks(ac->ac_inode, &e4b, ac->ac_f_ex.fe_start,
ac->ac_f_ex.fe_len);
ext4_unlock_group(ac->ac_sb, ac->ac_f_ex.fe_group);
+ ext4_mb_unload_buddy(&e4b);
return;
}
if (pa->pa_type == MB_INODE_PA)
buffer */
int num = 0;
ext4_lblk_t nblocks;
- int i, err;
+ int i, err = 0;
int namelen;
*res_dir = NULL;
* return. Otherwise, fall back to doing a search the
* old fashioned way.
*/
- if (bh || (err != ERR_BAD_DX_DIR))
+ if (err == -ENOENT)
+ return NULL;
+ if (err && err != ERR_BAD_DX_DIR)
+ return ERR_PTR(err);
+ if (bh)
return bh;
dxtrace(printk(KERN_DEBUG "ext4_find_entry: dx failed, "
"falling back\n"));
}
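With ext4_find_entry() now able to return an ERR_PTR() for I/O errors instead of a bare NULL, every caller gains an IS_ERR() check, as the hunks below show. The resulting caller idiom, in sketch form (dir, dentry and de stand in for each caller's locals):

	bh = ext4_find_entry(dir, &dentry->d_name, &de, NULL);
	if (IS_ERR(bh))
		return PTR_ERR(bh);	/* propagate the I/O error */
	if (!bh)
		return -ENOENT;		/* genuinely not found */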
num++;
bh = ext4_getblk(NULL, dir, b++, 0, &err);
+ if (unlikely(err)) {
+ if (ra_max == 0)
+ return ERR_PTR(err);
+ break;
+ }
bh_use[ra_max] = bh;
if (bh)
ll_rw_block(READ | REQ_META | REQ_PRIO,
return ERR_PTR(-ENAMETOOLONG);
bh = ext4_find_entry(dir, &dentry->d_name, &de, NULL);
+ if (IS_ERR(bh))
+ return (struct dentry *) bh;
inode = NULL;
if (bh) {
__u32 ino = le32_to_cpu(de->inode);
struct buffer_head *bh;
bh = ext4_find_entry(child->d_inode, &dotdot, &de, NULL);
+ if (IS_ERR(bh))
+ return (struct dentry *) bh;
if (!bh)
return ERR_PTR(-ENOENT);
ino = le32_to_cpu(de->inode);
retval = -ENOENT;
bh = ext4_find_entry(dir, &dentry->d_name, &de, NULL);
+ if (IS_ERR(bh))
+ return PTR_ERR(bh);
if (!bh)
goto end_rmdir;
retval = -ENOENT;
bh = ext4_find_entry(dir, &dentry->d_name, &de, NULL);
+ if (IS_ERR(bh))
+ return PTR_ERR(bh);
if (!bh)
goto end_unlink;
struct ext4_dir_entry_2 *de;
bh = ext4_find_entry(dir, d_name, &de, NULL);
+ if (IS_ERR(bh))
+ return PTR_ERR(bh);
if (bh) {
retval = ext4_delete_entry(handle, dir, de, bh);
brelse(bh);
return retval;
}
-static void ext4_rename_delete(handle_t *handle, struct ext4_renament *ent)
+static void ext4_rename_delete(handle_t *handle, struct ext4_renament *ent,
+ int force_reread)
{
int retval;
/*
if (le32_to_cpu(ent->de->inode) != ent->inode->i_ino ||
ent->de->name_len != ent->dentry->d_name.len ||
strncmp(ent->de->name, ent->dentry->d_name.name,
- ent->de->name_len)) {
+ ent->de->name_len) ||
+ force_reread) {
retval = ext4_find_delete_entry(handle, ent->dir,
&ent->dentry->d_name);
} else {
.dentry = new_dentry,
.inode = new_dentry->d_inode,
};
+ int force_reread;
int retval;
dquot_initialize(old.dir);
dquot_initialize(new.inode);
old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de, NULL);
+ if (IS_ERR(old.bh))
+ return PTR_ERR(old.bh);
/*
* Check for inode number is _not_ due to possible IO errors.
* We might rmdir the source, keep it as pwd of some process
new.bh = ext4_find_entry(new.dir, &new.dentry->d_name,
&new.de, &new.inlined);
+ if (IS_ERR(new.bh)) {
+ retval = PTR_ERR(new.bh);
+ new.bh = NULL;
+ goto end_rename;
+ }
if (new.bh) {
if (!new.inode) {
brelse(new.bh);
if (retval)
goto end_rename;
}
+ /*
+ * If we're renaming a file within an inline_data dir and adding or
+ * setting the new dirent causes a conversion from inline_data to
+ * extents/blockmap, we need to force the dirent delete code to
+ * re-read the directory, or else we end up trying to delete a dirent
+ * from what is now the extent tree root (or a block map).
+ */
+ force_reread = (new.dir->i_ino == old.dir->i_ino &&
+ ext4_test_inode_flag(new.dir, EXT4_INODE_INLINE_DATA));
if (!new.bh) {
retval = ext4_add_entry(handle, new.dentry, old.inode);
if (retval)
if (retval)
goto end_rename;
}
+ if (force_reread)
+ force_reread = !ext4_test_inode_flag(new.dir,
+ EXT4_INODE_INLINE_DATA);
/*
* Like most other Unix systems, set the ctime for inodes on a
/*
* ok, that's it
*/
- ext4_rename_delete(handle, &old);
+ ext4_rename_delete(handle, &old, force_reread);
if (new.inode) {
ext4_dec_count(handle, new.inode);
old.bh = ext4_find_entry(old.dir, &old.dentry->d_name,
&old.de, &old.inlined);
+ if (IS_ERR(old.bh))
+ return PTR_ERR(old.bh);
/*
* Check for inode number is _not_ due to possible IO errors.
* We might rmdir the source, keep it as pwd of some process
new.bh = ext4_find_entry(new.dir, &new.dentry->d_name,
&new.de, &new.inlined);
+ if (IS_ERR(new.bh)) {
+ retval = PTR_ERR(new.bh);
+ new.bh = NULL;
+ goto end_rename;
+ }
/* RENAME_EXCHANGE case: old *and* new must both exist */
if (!new.bh || le32_to_cpu(new.de->inode) != new.inode->i_ino)
bh = bclean(handle, sb, block);
if (IS_ERR(bh)) {
err = PTR_ERR(bh);
+ bh = NULL;
goto out;
}
overhead = ext4_group_overhead_blocks(sb, group);
bh = bclean(handle, sb, block);
if (IS_ERR(bh)) {
err = PTR_ERR(bh);
+ bh = NULL;
goto out;
}
if (EXT4_HAS_RO_COMPAT_FEATURE(sb,
EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) {
- /* journal checksum v2 */
+ /* journal checksum v3 */
compat = 0;
- incompat = JBD2_FEATURE_INCOMPAT_CSUM_V2;
+ incompat = JBD2_FEATURE_INCOMPAT_CSUM_V3;
} else {
/* journal checksum v1 */
compat = JBD2_FEATURE_COMPAT_CHECKSUM;
jbd2_journal_clear_features(sbi->s_journal,
JBD2_FEATURE_COMPAT_CHECKSUM, 0,
JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT |
+ JBD2_FEATURE_INCOMPAT_CSUM_V3 |
JBD2_FEATURE_INCOMPAT_CSUM_V2);
}
mounted as f2fs. Each file shows the whole f2fs information.
/sys/kernel/debug/f2fs/status includes:
- - major file system information managed by f2fs currently
+ - major filesystem information managed by f2fs currently
- average SIT information about whole segments
- current memory footprint consumed by f2fs.
bool "F2FS consistency checking feature"
depends on F2FS_FS
help
- Enables BUG_ONs which check the file system consistency in runtime.
+	  Enables BUG_ONs which check the filesystem consistency at runtime.
If you want to improve the performance, say N.
goto redirty_out;
if (wbc->for_reclaim)
goto redirty_out;
-
- /* Should not write any meta pages, if any IO error was occurred */
- if (unlikely(is_set_ckpt_flags(F2FS_CKPT(sbi), CP_ERROR_FLAG)))
- goto no_write;
+ if (unlikely(f2fs_cp_error(sbi)))
+ goto redirty_out;
f2fs_wait_on_page_writeback(page, META);
write_meta_page(sbi, page);
-no_write:
dec_page_count(sbi, F2FS_DIRTY_META);
unlock_page(page);
return 0;
return e ? true : false;
}
-static void release_dirty_inode(struct f2fs_sb_info *sbi)
+void release_dirty_inode(struct f2fs_sb_info *sbi)
{
struct ino_entry *e, *tmp;
int i;
struct f2fs_orphan_block *orphan_blk = NULL;
unsigned int nentries = 0;
unsigned short index;
- unsigned short orphan_blocks = (unsigned short)((sbi->n_orphans +
- (F2FS_ORPHANS_PER_BLOCK - 1)) / F2FS_ORPHANS_PER_BLOCK);
+ unsigned short orphan_blocks =
+ (unsigned short)GET_ORPHAN_BLOCKS(sbi->n_orphans);
struct page *page = NULL;
struct ino_entry *orphan = NULL;
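GET_ORPHAN_BLOCKS() replaces the round-up arithmetic removed above. A sketch of the macro, inferred directly from that arithmetic (the real definition lives in the f2fs headers):

	#define GET_ORPHAN_BLOCKS(nr)	((nr + F2FS_ORPHANS_PER_BLOCK - 1) / \
						F2FS_ORPHANS_PER_BLOCK)

That is, the number of on-disk orphan blocks needed to hold nr orphan inode entries, rounded up to a whole block.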
/*
* Freeze all the FS-operations for checkpoint.
*/
-static void block_operations(struct f2fs_sb_info *sbi)
+static int block_operations(struct f2fs_sb_info *sbi)
{
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.for_reclaim = 0,
};
struct blk_plug plug;
+ int err = 0;
blk_start_plug(&plug);
if (get_pages(sbi, F2FS_DIRTY_DENTS)) {
f2fs_unlock_all(sbi);
sync_dirty_dir_inodes(sbi);
+ if (unlikely(f2fs_cp_error(sbi))) {
+ err = -EIO;
+ goto out;
+ }
goto retry_flush_dents;
}
/*
- * POR: we should ensure that there is no dirty node pages
+ * POR: we should ensure that there are no dirty node pages
* until finishing nat/sit flush.
*/
retry_flush_nodes:
if (get_pages(sbi, F2FS_DIRTY_NODES)) {
up_write(&sbi->node_write);
sync_node_pages(sbi, 0, &wbc);
+ if (unlikely(f2fs_cp_error(sbi))) {
+ f2fs_unlock_all(sbi);
+ err = -EIO;
+ goto out;
+ }
goto retry_flush_nodes;
}
+out:
blk_finish_plug(&plug);
+ return err;
}
static void unblock_operations(struct f2fs_sb_info *sbi)
discard_next_dnode(sbi, NEXT_FREE_BLKADDR(sbi, curseg));
/* Flush all the NAT/SIT pages */
- while (get_pages(sbi, F2FS_DIRTY_META))
+ while (get_pages(sbi, F2FS_DIRTY_META)) {
sync_meta_pages(sbi, META, LONG_MAX);
+ if (unlikely(f2fs_cp_error(sbi)))
+ return;
+ }
next_free_nid(sbi, &last_nid);
ckpt->elapsed_time = cpu_to_le64(get_mtime(sbi));
ckpt->valid_block_count = cpu_to_le64(valid_user_blocks(sbi));
ckpt->free_segment_count = cpu_to_le32(free_segments(sbi));
- for (i = 0; i < 3; i++) {
+ for (i = 0; i < NR_CURSEG_NODE_TYPE; i++) {
ckpt->cur_node_segno[i] =
cpu_to_le32(curseg_segno(sbi, i + CURSEG_HOT_NODE));
ckpt->cur_node_blkoff[i] =
ckpt->alloc_type[i + CURSEG_HOT_NODE] =
curseg_alloc_type(sbi, i + CURSEG_HOT_NODE);
}
- for (i = 0; i < 3; i++) {
+ for (i = 0; i < NR_CURSEG_DATA_TYPE; i++) {
ckpt->cur_data_segno[i] =
cpu_to_le32(curseg_segno(sbi, i + CURSEG_HOT_DATA));
ckpt->cur_data_blkoff[i] =
/* 2 cp + n data seg summary + orphan inode blocks */
data_sum_blocks = npages_for_summary_flush(sbi);
- if (data_sum_blocks < 3)
+ if (data_sum_blocks < NR_CURSEG_DATA_TYPE)
set_ckpt_flags(ckpt, CP_COMPACT_SUM_FLAG);
else
clear_ckpt_flags(ckpt, CP_COMPACT_SUM_FLAG);
- orphan_blocks = (sbi->n_orphans + F2FS_ORPHANS_PER_BLOCK - 1)
- / F2FS_ORPHANS_PER_BLOCK;
+ orphan_blocks = GET_ORPHAN_BLOCKS(sbi->n_orphans);
ckpt->cp_pack_start_sum = cpu_to_le32(1 + cp_payload_blks +
orphan_blocks);
if (is_umount) {
set_ckpt_flags(ckpt, CP_UMOUNT_FLAG);
- ckpt->cp_pack_total_block_count = cpu_to_le32(2 +
+		ckpt->cp_pack_total_block_count = cpu_to_le32(F2FS_CP_PACKS +
cp_payload_blks + data_sum_blocks +
orphan_blocks + NR_CURSEG_NODE_TYPE);
} else {
clear_ckpt_flags(ckpt, CP_UMOUNT_FLAG);
- ckpt->cp_pack_total_block_count = cpu_to_le32(2 +
+ ckpt->cp_pack_total_block_count = cpu_to_le32(F2FS_CP_PACKS +
cp_payload_blks + data_sum_blocks +
orphan_blocks);
}
/* wait for previous submitted node/meta pages writeback */
wait_on_all_pages_writeback(sbi);
+ if (unlikely(f2fs_cp_error(sbi)))
+ return;
+
filemap_fdatawait_range(NODE_MAPPING(sbi), 0, LONG_MAX);
filemap_fdatawait_range(META_MAPPING(sbi), 0, LONG_MAX);
/* Here, we only have one bio having CP pack */
sync_meta_pages(sbi, META_FLUSH, LONG_MAX);
- if (!is_set_ckpt_flags(ckpt, CP_ERROR_FLAG)) {
- clear_prefree_segments(sbi);
- release_dirty_inode(sbi);
- F2FS_RESET_SB_DIRT(sbi);
- }
+ release_dirty_inode(sbi);
+
+ if (unlikely(f2fs_cp_error(sbi)))
+ return;
+
+ clear_prefree_segments(sbi);
+ F2FS_RESET_SB_DIRT(sbi);
}
/*
- * We guarantee that this checkpoint procedure should not fail.
+ * We guarantee that this checkpoint procedure will not fail.
*/
void write_checkpoint(struct f2fs_sb_info *sbi, bool is_umount)
{
trace_f2fs_write_checkpoint(sbi->sb, is_umount, "start block_ops");
mutex_lock(&sbi->cp_mutex);
- block_operations(sbi);
+
+ if (!sbi->s_dirty)
+ goto out;
+ if (unlikely(f2fs_cp_error(sbi)))
+ goto out;
+ if (block_operations(sbi))
+ goto out;
trace_f2fs_write_checkpoint(sbi->sb, is_umount, "finish block_ops");
do_checkpoint(sbi, is_umount);
unblock_operations(sbi);
- mutex_unlock(&sbi->cp_mutex);
-
stat_inc_cp_count(sbi->stat_info);
+out:
+ mutex_unlock(&sbi->cp_mutex);
trace_f2fs_write_checkpoint(sbi->sb, is_umount, "finish checkpoint");
}
* for cp pack we can have max 1020*504 orphan entries
*/
sbi->n_orphans = 0;
- sbi->max_orphans = (sbi->blocks_per_seg - 2 - NR_CURSEG_TYPE)
- * F2FS_ORPHANS_PER_BLOCK;
+ sbi->max_orphans = (sbi->blocks_per_seg - F2FS_CP_PACKS -
+ NR_CURSEG_TYPE) * F2FS_ORPHANS_PER_BLOCK;
}
int __init create_checkpoint_caches(void)
struct page *page = bvec->bv_page;
if (unlikely(err)) {
- SetPageError(page);
+ set_page_dirty(page);
set_bit(AS_EIO, &page->mapping->flags);
f2fs_stop_checkpoint(sbi);
}
allocated = true;
blkaddr = dn.data_blkaddr;
}
- /* Give more consecutive addresses for the read ahead */
+ /* Give more consecutive addresses for the readahead */
if (blkaddr == (bh_result->b_blocknr + ofs)) {
ofs++;
dn.ofs_in_node++;
trace_f2fs_readpage(page, DATA);
- /* If the file has inline data, try to read it directlly */
+ /* If the file has inline data, try to read it directly */
if (f2fs_has_inline_data(inode))
ret = f2fs_read_inline_data(inode, page);
else
/* Dentry blocks are controlled by checkpoint */
if (S_ISDIR(inode->i_mode)) {
+ if (unlikely(f2fs_cp_error(sbi)))
+ goto redirty_out;
err = do_write_data_page(page, &fio);
goto done;
}
+	/* we should bypass data pages to let the kworker jobs proceed */
+ if (unlikely(f2fs_cp_error(sbi))) {
+ SetPageError(page);
+ unlock_page(page);
+ return 0;
+ }
+
if (!wbc->for_reclaim)
need_balance_fs = true;
else if (has_not_enough_free_secs(sbi, 0))
if (to > inode->i_size) {
truncate_pagecache(inode, inode->i_size);
- truncate_blocks(inode, inode->i_size);
+ truncate_blocks(inode, inode->i_size, true);
}
}
f2fs_balance_fs(sbi);
repeat:
- err = f2fs_convert_inline_data(inode, pos + len);
+ err = f2fs_convert_inline_data(inode, pos + len, NULL);
if (err)
goto fail;
struct f2fs_stat_info *si = F2FS_STAT(sbi);
int i;
- /* valid check of the segment numbers */
+ /* validation check of the segment numbers */
si->hit_ext = sbi->read_hit_ext;
si->total_ext = sbi->total_hit_ext;
si->ndirty_node = get_pages(sbi, F2FS_DIRTY_NODES);
si->base_mem += NR_DIRTY_TYPE * f2fs_bitmap_size(TOTAL_SEGS(sbi));
si->base_mem += f2fs_bitmap_size(TOTAL_SECS(sbi));
- /* buld nm */
+ /* build nm */
si->base_mem += sizeof(struct f2fs_nm_info);
si->base_mem += __bitmap_size(sbi, NAT_BITMAP);
/*
* For the most part, it should be a bug when name_len is zero.
- * We stop here for figuring out where the bugs are occurred.
+	 * We stop here to figure out where the bug has occurred.
*/
f2fs_bug_on(!de->name_len);
error:
/* once the failed inode becomes a bad inode, i_mode is S_IFREG */
truncate_inode_pages(&inode->i_data, 0);
- truncate_blocks(inode, 0);
+ truncate_blocks(inode, 0, false);
remove_dirty_dir_inode(inode);
remove_inode_page(inode);
return ERR_PTR(err);
}
/*
- * It only removes the dentry from the dentry page,corresponding name
+ * It only removes the dentry from the dentry page; the corresponding name
* entry in name page does not need to be touched during deletion.
*/
void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
#define f2fs_bug_on(condition) BUG_ON(condition)
#define f2fs_down_write(x, y) down_write_nest_lock(x, y)
#else
-#define f2fs_bug_on(condition)
+#define f2fs_bug_on(condition) WARN_ON(condition)
#define f2fs_down_write(x, y) down_write(x)
#endif
};
/*
- * The below are the page types of bios used in submti_bio().
+ * The below are the page types of bios used in submit_bio().
* The available types are:
* DATA User data pages. It operates as async mode.
* NODE Node pages. It operates as async mode.
struct list_head dir_inode_list; /* dir inode list */
spinlock_t dir_inode_lock; /* for dir inode list lock */
- /* basic file system units */
+ /* basic filesystem units */
unsigned int log_sectors_per_block; /* log2 sectors per block */
unsigned int log_blocksize; /* log2 block size */
unsigned int blocksize; /* block size */
/*
* odd numbered checkpoint should at cp segment 0
- * and even segent must be at cp segment 1
+ * and even segment must be at cp segment 1
*/
if (!(ckpt_version & 1))
start_addr += sbi->blocks_per_seg;
return sb->s_flags & MS_RDONLY;
}
+static inline bool f2fs_cp_error(struct f2fs_sb_info *sbi)
+{
+ return is_set_ckpt_flags(sbi->ckpt, CP_ERROR_FLAG);
+}
+
static inline void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi)
{
set_ckpt_flags(sbi->ckpt, CP_ERROR_FLAG);
*/
int f2fs_sync_file(struct file *, loff_t, loff_t, int);
void truncate_data_blocks(struct dnode_of_data *);
-int truncate_blocks(struct inode *, u64);
+int truncate_blocks(struct inode *, u64, bool);
void f2fs_truncate(struct inode *);
int f2fs_getattr(struct vfsmount *, struct dentry *, struct kstat *);
int f2fs_setattr(struct dentry *, struct iattr *);
bool alloc_nid(struct f2fs_sb_info *, nid_t *);
void alloc_nid_done(struct f2fs_sb_info *, nid_t);
void alloc_nid_failed(struct f2fs_sb_info *, nid_t);
-void recover_node_page(struct f2fs_sb_info *, struct page *,
- struct f2fs_summary *, struct node_info *, block_t);
void recover_inline_xattr(struct inode *, struct page *);
-bool recover_xattr_data(struct inode *, struct page *, block_t);
+void recover_xattr_data(struct inode *, struct page *, block_t);
int recover_inode_page(struct f2fs_sb_info *, struct page *);
int restore_node_summary(struct f2fs_sb_info *, unsigned int,
struct f2fs_summary_block *);
void rewrite_data_page(struct page *, block_t, struct f2fs_io_info *);
void recover_data_page(struct f2fs_sb_info *, struct page *,
struct f2fs_summary *, block_t, block_t);
-void rewrite_node_page(struct f2fs_sb_info *, struct page *,
- struct f2fs_summary *, block_t, block_t);
void allocate_data_block(struct f2fs_sb_info *, struct page *,
block_t, block_t *, struct f2fs_summary *, int);
void f2fs_wait_on_page_writeback(struct page *, enum page_type);
long sync_meta_pages(struct f2fs_sb_info *, enum page_type, long);
void add_dirty_inode(struct f2fs_sb_info *, nid_t, int type);
void remove_dirty_inode(struct f2fs_sb_info *, nid_t, int type);
+void release_dirty_inode(struct f2fs_sb_info *);
bool exist_written_data(struct f2fs_sb_info *, nid_t, int);
int acquire_orphan_inode(struct f2fs_sb_info *);
void release_orphan_inode(struct f2fs_sb_info *);
*/
bool f2fs_may_inline(struct inode *);
int f2fs_read_inline_data(struct inode *, struct page *);
-int f2fs_convert_inline_data(struct inode *, pgoff_t);
+int f2fs_convert_inline_data(struct inode *, pgoff_t, struct page *);
int f2fs_write_inline_data(struct inode *, struct page *, unsigned int);
void truncate_inline_data(struct inode *, u64);
-int recover_inline_data(struct inode *, struct page *);
+bool recover_inline_data(struct inode *, struct page *);
#endif
sb_start_pagefault(inode->i_sb);
+ /* force to convert with normal data indices */
+ err = f2fs_convert_inline_data(inode, MAX_INLINE_DATA + 1, page);
+ if (err)
+ goto out;
+
/* block allocation */
f2fs_lock_op(sbi);
set_new_dnode(&dn, inode, NULL, NULL, 0);
return 1;
}
+static inline bool need_do_checkpoint(struct inode *inode)
+{
+ struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ bool need_cp = false;
+
+ if (!S_ISREG(inode->i_mode) || inode->i_nlink != 1)
+ need_cp = true;
+ else if (file_wrong_pino(inode))
+ need_cp = true;
+ else if (!space_for_roll_forward(sbi))
+ need_cp = true;
+ else if (!is_checkpointed_node(sbi, F2FS_I(inode)->i_pino))
+ need_cp = true;
+ else if (F2FS_I(inode)->xattr_ver == cur_cp_version(F2FS_CKPT(sbi)))
+ need_cp = true;
+
+ return need_cp;
+}
+
int f2fs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
{
struct inode *inode = file->f_mapping->host;
/* guarantee free sections for fsync */
f2fs_balance_fs(sbi);
- down_read(&fi->i_sem);
-
/*
* Both of fdatasync() and fsync() are able to be recovered from
* sudden-power-off.
*/
- if (!S_ISREG(inode->i_mode) || inode->i_nlink != 1)
- need_cp = true;
- else if (file_wrong_pino(inode))
- need_cp = true;
- else if (!space_for_roll_forward(sbi))
- need_cp = true;
- else if (!is_checkpointed_node(sbi, F2FS_I(inode)->i_pino))
- need_cp = true;
- else if (F2FS_I(inode)->xattr_ver == cur_cp_version(F2FS_CKPT(sbi)))
- need_cp = true;
-
+ down_read(&fi->i_sem);
+ need_cp = need_do_checkpoint(inode);
up_read(&fi->i_sem);
if (need_cp) {
if (err && err != -ENOENT) {
goto fail;
} else if (err == -ENOENT) {
- /* direct node is not exist */
+			/* direct node does not exist */
if (whence == SEEK_DATA) {
pgofs = PGOFS_OF_NEXT_DNODE(pgofs,
F2FS_I(inode));
f2fs_put_page(page, 1);
}
-int truncate_blocks(struct inode *inode, u64 from)
+int truncate_blocks(struct inode *inode, u64 from, bool lock)
{
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
unsigned int blocksize = inode->i_sb->s_blocksize;
free_from = (pgoff_t)
((from + blocksize - 1) >> (sbi->log_blocksize));
- f2fs_lock_op(sbi);
+ if (lock)
+ f2fs_lock_op(sbi);
set_new_dnode(&dn, inode, NULL, NULL, 0);
err = get_dnode_of_data(&dn, free_from, LOOKUP_NODE);
if (err) {
if (err == -ENOENT)
goto free_next;
- f2fs_unlock_op(sbi);
+ if (lock)
+ f2fs_unlock_op(sbi);
trace_f2fs_truncate_blocks_exit(inode, err);
return err;
}
f2fs_put_dnode(&dn);
free_next:
err = truncate_inode_blocks(inode, free_from);
- f2fs_unlock_op(sbi);
+ if (lock)
+ f2fs_unlock_op(sbi);
done:
/* lastly zero out the first data page */
truncate_partial_data_page(inode, from);
trace_f2fs_truncate(inode);
- if (!truncate_blocks(inode, i_size_read(inode))) {
+ if (!truncate_blocks(inode, i_size_read(inode), true)) {
inode->i_mtime = inode->i_ctime = CURRENT_TIME;
mark_inode_dirty(inode);
}
if ((attr->ia_valid & ATTR_SIZE) &&
attr->ia_size != i_size_read(inode)) {
- err = f2fs_convert_inline_data(inode, attr->ia_size);
+ err = f2fs_convert_inline_data(inode, attr->ia_size, NULL);
if (err)
return err;
loff_t off_start, off_end;
int ret = 0;
- ret = f2fs_convert_inline_data(inode, MAX_INLINE_DATA + 1);
+ ret = f2fs_convert_inline_data(inode, MAX_INLINE_DATA + 1, NULL);
if (ret)
return ret;
if (ret)
return ret;
- ret = f2fs_convert_inline_data(inode, offset + len);
+ ret = f2fs_convert_inline_data(inode, offset + len, NULL);
if (ret)
return ret;
* 3. IO subsystem is idle by checking the # of requests in
* bdev's request list.
*
- * Note) We have to avoid triggering GCs too much frequently.
+ * Note) We have to avoid triggering GCs too frequently.
* Because it is possible that some segments can be
* invalidated soon after by user update or deletion.
* So, I'd like to wait some time to collect dirty segments.
u = (vblocks * 100) >> sbi->log_blocks_per_seg;
- /* Handle if the system time is changed by user */
+	/* Handle if the system time has been changed by the user */
if (mtime < sit_i->min_mtime)
sit_i->min_mtime = mtime;
if (mtime > sit_i->max_mtime)
if (phase == 2) {
inode = f2fs_iget(sb, dni.ino);
- if (IS_ERR(inode))
+ if (IS_ERR(inode) || is_bad_inode(inode))
continue;
start_bidx = start_bidx_of_node(nofs, F2FS_I(inode));
gc_more:
if (unlikely(!(sbi->sb->s_flags & MS_ACTIVE)))
goto stop;
- if (unlikely(is_set_ckpt_flags(F2FS_CKPT(sbi), CP_ERROR_FLAG)))
+ if (unlikely(f2fs_cp_error(sbi)))
goto stop;
if (gc_type == BG_GC && has_not_enough_free_secs(sbi, nfree)) {
block_t invalid_user_blocks = sbi->user_block_count -
written_block_count(sbi);
/*
- * Background GC is triggered with the following condition.
+ * Background GC is triggered with the following conditions.
* 1. There are a number of invalid blocks.
* 2. There is not enough free space.
*/
buf[1] += b1;
}
-static void str2hashbuf(const char *msg, size_t len, unsigned int *buf, int num)
+static void str2hashbuf(const unsigned char *msg, size_t len,
+ unsigned int *buf, int num)
{
unsigned pad, val;
int i;
{
__u32 hash;
f2fs_hash_t f2fs_hash;
- const char *p;
+ const unsigned char *p;
__u32 in[8], buf[4];
- const char *name = name_info->name;
+ const unsigned char *name = name_info->name;
size_t len = name_info->len;
if ((len <= 2) && (name[0] == '.') &&
static int __f2fs_convert_inline_data(struct inode *inode, struct page *page)
{
- int err;
+ int err = 0;
struct page *ipage;
struct dnode_of_data dn;
void *src_addr, *dst_addr;
goto out;
}
+ /* someone else converted inline_data already */
+ if (!f2fs_has_inline_data(inode))
+ goto out;
+
/*
* i_addr[0] is not used for inline data,
* so reserving new block will not destroy inline data
return err;
}
-int f2fs_convert_inline_data(struct inode *inode, pgoff_t to_size)
+int f2fs_convert_inline_data(struct inode *inode, pgoff_t to_size,
+ struct page *page)
{
- struct page *page;
+ struct page *new_page = page;
int err;
if (!f2fs_has_inline_data(inode))
else if (to_size <= MAX_INLINE_DATA)
return 0;
- page = grab_cache_page(inode->i_mapping, 0);
- if (!page)
- return -ENOMEM;
+ if (!page || page->index != 0) {
+ new_page = grab_cache_page(inode->i_mapping, 0);
+ if (!new_page)
+ return -ENOMEM;
+ }
- err = __f2fs_convert_inline_data(inode, page);
- f2fs_put_page(page, 1);
+ err = __f2fs_convert_inline_data(inode, new_page);
+ if (!page || page->index != 0)
+ f2fs_put_page(new_page, 1);
return err;
}
int f2fs_write_inline_data(struct inode *inode,
- struct page *page, unsigned size)
+ struct page *page, unsigned size)
{
void *src_addr, *dst_addr;
struct page *ipage;
f2fs_put_page(ipage, 1);
}
-int recover_inline_data(struct inode *inode, struct page *npage)
+bool recover_inline_data(struct inode *inode, struct page *npage)
{
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct f2fs_inode *ri = NULL;
ri = F2FS_INODE(npage);
if (f2fs_has_inline_data(inode) &&
- ri && ri->i_inline & F2FS_INLINE_DATA) {
+ ri && (ri->i_inline & F2FS_INLINE_DATA)) {
process_inline:
ipage = get_node_page(sbi, inode->i_ino);
f2fs_bug_on(IS_ERR(ipage));
memcpy(dst_addr, src_addr, MAX_INLINE_DATA);
update_inode(inode, ipage);
f2fs_put_page(ipage, 1);
- return -1;
+ return true;
}
if (f2fs_has_inline_data(inode)) {
clear_inode_flag(F2FS_I(inode), FI_INLINE_DATA);
update_inode(inode, ipage);
f2fs_put_page(ipage, 1);
- } else if (ri && ri->i_inline & F2FS_INLINE_DATA) {
- truncate_blocks(inode, 0);
+ } else if (ri && (ri->i_inline & F2FS_INLINE_DATA)) {
+ truncate_blocks(inode, 0, false);
set_inode_flag(F2FS_I(inode), FI_INLINE_DATA);
goto process_inline;
}
- return 0;
+ return false;
}
return 0;
out:
clear_nlink(inode);
- unlock_new_inode(inode);
- make_bad_inode(inode);
- iput(inode);
+ iget_failed(inode);
alloc_nid_failed(sbi, ino);
return err;
}
f2fs_delete_entry(de, page, inode);
f2fs_unlock_op(sbi);
- /* In order to evict this inode, we set it dirty */
+ /* In order to evict this inode, we set it dirty */
mark_inode_dirty(inode);
fail:
trace_f2fs_unlink_exit(inode, err);
return err;
out:
clear_nlink(inode);
- unlock_new_inode(inode);
- make_bad_inode(inode);
- iput(inode);
+ iget_failed(inode);
alloc_nid_failed(sbi, inode->i_ino);
return err;
}
out_fail:
clear_inode_flag(F2FS_I(inode), FI_INC_LINK);
clear_nlink(inode);
- unlock_new_inode(inode);
- make_bad_inode(inode);
- iput(inode);
+ iget_failed(inode);
alloc_nid_failed(sbi, inode->i_ino);
return err;
}
return 0;
out:
clear_nlink(inode);
- unlock_new_inode(inode);
- make_bad_inode(inode);
- iput(inode);
+ iget_failed(inode);
alloc_nid_failed(sbi, inode->i_ino);
return err;
}
out:
f2fs_unlock_op(sbi);
clear_nlink(inode);
- unlock_new_inode(inode);
- make_bad_inode(inode);
- iput(inode);
+ iget_failed(inode);
alloc_nid_failed(sbi, inode->i_ino);
return err;
}
.mkdir = f2fs_mkdir,
.rmdir = f2fs_rmdir,
.mknod = f2fs_mknod,
- .rename = f2fs_rename,
.rename2 = f2fs_rename2,
.tmpfile = f2fs_tmpfile,
.getattr = f2fs_getattr,
nat_get_blkaddr(e) != NULL_ADDR &&
new_blkaddr == NEW_ADDR);
- /* increament version no as node is removed */
+ /* increment version no as node is removed */
if (nat_get_blkaddr(e) != NEW_ADDR && new_blkaddr == NULL_ADDR) {
unsigned char version = nat_get_version(e);
nat_set_version(e, inc_node_version(version));
}
/*
- * This function returns always success
+ * This function always returns success
*/
void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni)
{
/* get indirect nodes in the path */
for (i = 0; i < idx + 1; i++) {
- /* refernece count'll be increased */
+		/* reference count will be increased */
pages[i] = get_node_page(sbi, nid[i]);
if (IS_ERR(pages[i])) {
err = PTR_ERR(pages[i]);
*/
void remove_inode_page(struct inode *inode)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- struct page *page;
- nid_t ino = inode->i_ino;
struct dnode_of_data dn;
- page = get_node_page(sbi, ino);
- if (IS_ERR(page))
+ set_new_dnode(&dn, inode, NULL, NULL, inode->i_ino);
+ if (get_dnode_of_data(&dn, 0, LOOKUP_NODE))
return;
- if (truncate_xattr_node(inode, page)) {
- f2fs_put_page(page, 1);
+ if (truncate_xattr_node(inode, dn.inode_page)) {
+ f2fs_put_dnode(&dn);
return;
}
- /* 0 is possible, after f2fs_new_inode() is failed */
+
+ /* remove potential inline_data blocks */
+ if (S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) ||
+ S_ISLNK(inode->i_mode))
+ truncate_data_blocks_range(&dn, 1);
+
+ /* 0 is possible, after f2fs_new_inode() has failed */
f2fs_bug_on(inode->i_blocks != 0 && inode->i_blocks != 1);
- set_new_dnode(&dn, inode, page, page, ino);
+
+ /* will put inode & node pages */
truncate_node(&dn);
}
set_fsync_mark(page, 0);
set_dentry_mark(page, 0);
}
- NODE_MAPPING(sbi)->a_ops->writepage(page, wbc);
- wrote++;
+
+ if (NODE_MAPPING(sbi)->a_ops->writepage(page, wbc))
+ unlock_page(page);
+ else
+ wrote++;
if (--wbc->nr_to_write == 0)
break;
if (unlikely(sbi->por_doing))
goto redirty_out;
+ if (unlikely(f2fs_cp_error(sbi)))
+ goto redirty_out;
f2fs_wait_on_page_writeback(page, NODE);
kmem_cache_free(free_nid_slab, i);
}
-void recover_node_page(struct f2fs_sb_info *sbi, struct page *page,
- struct f2fs_summary *sum, struct node_info *ni,
- block_t new_blkaddr)
-{
- rewrite_node_page(sbi, page, sum, ni->blk_addr, new_blkaddr);
- set_node_addr(sbi, ni, new_blkaddr, false);
- clear_node_page_dirty(page);
-}
-
void recover_inline_xattr(struct inode *inode, struct page *page)
{
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct page *ipage;
struct f2fs_inode *ri;
- if (!f2fs_has_inline_xattr(inode))
- return;
-
- if (!IS_INODE(page))
- return;
-
- ri = F2FS_INODE(page);
- if (!(ri->i_inline & F2FS_INLINE_XATTR))
- return;
-
ipage = get_node_page(sbi, inode->i_ino);
f2fs_bug_on(IS_ERR(ipage));
+ ri = F2FS_INODE(page);
+ if (!(ri->i_inline & F2FS_INLINE_XATTR)) {
+ clear_inode_flag(F2FS_I(inode), FI_INLINE_XATTR);
+ goto update_inode;
+ }
+
dst_addr = inline_xattr_addr(ipage);
src_addr = inline_xattr_addr(page);
inline_size = inline_xattr_size(inode);
f2fs_wait_on_page_writeback(ipage, NODE);
memcpy(dst_addr, src_addr, inline_size);
-
+update_inode:
update_inode(inode, ipage);
f2fs_put_page(ipage, 1);
}
-bool recover_xattr_data(struct inode *inode, struct page *page, block_t blkaddr)
+void recover_xattr_data(struct inode *inode, struct page *page, block_t blkaddr)
{
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
nid_t prev_xnid = F2FS_I(inode)->i_xattr_nid;
nid_t new_xnid = nid_of_node(page);
struct node_info ni;
- if (!f2fs_has_xattr_block(ofs_of_node(page)))
- return false;
-
/* 1: invalidate the previous xattr nid */
if (!prev_xnid)
goto recover_xnid;
set_node_addr(sbi, &ni, blkaddr, false);
update_inode_page(inode);
- return true;
}
int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
if (!ipage)
return -ENOMEM;
- /* Should not use this inode from free nid list */
+ /* Should not use this inode from free nid list */
remove_free_nid(NM_I(sbi), ino);
SetPageUptodate(ipage);
dst->i_blocks = cpu_to_le64(1);
dst->i_links = cpu_to_le32(1);
dst->i_xattr_nid = 0;
+ dst->i_inline = src->i_inline & F2FS_INLINE_XATTR;
new_ni = old_ni;
new_ni.ino = ino;
WARN_ON(1);
set_node_addr(sbi, &new_ni, NEW_ADDR, false);
inc_valid_inode_count(sbi);
+ set_page_dirty(ipage);
f2fs_put_page(ipage, 1);
return 0;
}
/*
* ra_sum_pages() merge contiguous pages into one bio and submit.
- * these pre-readed pages are alloced in bd_inode's mapping tree.
+ * these pre-read pages are allocated in bd_inode's mapping tree.
*/
static int ra_sum_pages(struct f2fs_sb_info *sbi, struct page **pages,
int start, int nrpages)
for (i = 0; !err && i < last_offset; i += nrpages, addr += nrpages) {
nrpages = min(last_offset - i, bio_blocks);
- /* read ahead node pages */
+ /* readahead node pages */
nrpages = ra_sum_pages(sbi, pages, addr, nrpages);
if (!nrpages)
return -ENOMEM;
nm_i->max_nid = NAT_ENTRY_PER_BLOCK * nat_blocks;
/* not used nids: 0, node, meta, (and root counted as valid node) */
- nm_i->available_nids = nm_i->max_nid - 3;
+ nm_i->available_nids = nm_i->max_nid - F2FS_RESERVED_NODE_NUM;
nm_i->fcnt = 0;
nm_i->nat_cnt = 0;
nm_i->ram_thresh = DEF_RAM_THRESHOLD;
}
retry:
de = f2fs_find_entry(dir, &name, &page);
- if (de && inode->i_ino == le32_to_cpu(de->ino))
+ if (de && inode->i_ino == le32_to_cpu(de->ino)) {
+ clear_inode_flag(F2FS_I(inode), FI_INC_LINK);
goto out_unmap_put;
+ }
if (de) {
einode = f2fs_iget(inode->i_sb, le32_to_cpu(de->ino));
if (IS_ERR(einode)) {
struct node_info ni;
int err = 0, recovered = 0;
- recover_inline_xattr(inode, page);
-
- if (recover_inline_data(inode, page))
+ /* step 1: recover xattr */
+ if (IS_INODE(page)) {
+ recover_inline_xattr(inode, page);
+ } else if (f2fs_has_xattr_block(ofs_of_node(page))) {
+ recover_xattr_data(inode, page, blkaddr);
goto out;
+ }
- if (recover_xattr_data(inode, page, blkaddr))
+ /* step 2: recover inline data */
+ if (recover_inline_data(inode, page))
goto out;
+ /* step 3: recover data indices */
start = start_bidx_of_node(ofs_of_node(page), fi);
end = start + ADDRS_PER_PAGE(page, fi);
fill_node_footer(dn.node_page, dn.nid, ni.ino,
ofs_of_node(page), false);
set_page_dirty(dn.node_page);
-
- recover_node_page(sbi, dn.node_page, &sum, &ni, blkaddr);
err:
f2fs_put_dnode(&dn);
f2fs_unlock_op(sbi);
/* step #1: find fsynced inode numbers */
sbi->por_doing = true;
+ /* prevent checkpoint */
+ mutex_lock(&sbi->cp_mutex);
+
blkaddr = NEXT_FREE_BLKADDR(sbi, curseg);
err = find_fsync_dnodes(sbi, &inode_list);
/* step #2: recover data */
err = recover_data(sbi, &inode_list, CURSEG_WARM_NODE);
- f2fs_bug_on(!list_empty(&inode_list));
+ if (!err)
+ f2fs_bug_on(!list_empty(&inode_list));
out:
destroy_fsync_dnodes(&inode_list);
kmem_cache_destroy(fsync_entry_slab);
/* Flush all the NAT/SIT pages */
while (get_pages(sbi, F2FS_DIRTY_META))
sync_meta_pages(sbi, META, LONG_MAX);
+ set_ckpt_flags(sbi->ckpt, CP_ERROR_FLAG);
+ mutex_unlock(&sbi->cp_mutex);
} else if (need_writecp) {
+ mutex_unlock(&sbi->cp_mutex);
write_checkpoint(sbi, false);
+ } else {
+ mutex_unlock(&sbi->cp_mutex);
}
return err;
}
}
/*
- * __find_rev_next(_zero)_bit is copied from lib/find_next_bit.c becasue
+ * __find_rev_next(_zero)_bit is copied from lib/find_next_bit.c because
* f2fs_set_bit makes MSB and LSB reversed in a byte.
* Example:
* LSB <--> MSB
}
/*
- * This function always allocates a used segment (from dirty seglist) by SSR
+ * This function always allocates a used segment(from dirty seglist) by SSR
* manner, so it should recover the existing segment information of valid blocks
*/
static void change_curseg(struct f2fs_sb_info *sbi, int type, bool reuse)
mutex_unlock(&curseg->curseg_mutex);
}
-void rewrite_node_page(struct f2fs_sb_info *sbi,
- struct page *page, struct f2fs_summary *sum,
- block_t old_blkaddr, block_t new_blkaddr)
-{
- struct sit_info *sit_i = SIT_I(sbi);
- int type = CURSEG_WARM_NODE;
- struct curseg_info *curseg;
- unsigned int segno, old_cursegno;
- block_t next_blkaddr = next_blkaddr_of_node(page);
- unsigned int next_segno = GET_SEGNO(sbi, next_blkaddr);
- struct f2fs_io_info fio = {
- .type = NODE,
- .rw = WRITE_SYNC,
- };
-
- curseg = CURSEG_I(sbi, type);
-
- mutex_lock(&curseg->curseg_mutex);
- mutex_lock(&sit_i->sentry_lock);
-
- segno = GET_SEGNO(sbi, new_blkaddr);
- old_cursegno = curseg->segno;
-
- /* change the current segment */
- if (segno != curseg->segno) {
- curseg->next_segno = segno;
- change_curseg(sbi, type, true);
- }
- curseg->next_blkoff = GET_BLKOFF_FROM_SEG0(sbi, new_blkaddr);
- __add_sum_entry(sbi, type, sum);
-
- /* change the current log to the next block addr in advance */
- if (next_segno != segno) {
- curseg->next_segno = next_segno;
- change_curseg(sbi, type, true);
- }
- curseg->next_blkoff = GET_BLKOFF_FROM_SEG0(sbi, next_blkaddr);
-
- /* rewrite node page */
- set_page_writeback(page);
- f2fs_submit_page_mbio(sbi, page, new_blkaddr, &fio);
- f2fs_submit_merged_bio(sbi, NODE, WRITE);
- refresh_sit_entry(sbi, old_blkaddr, new_blkaddr);
- locate_dirty_segment(sbi, old_cursegno);
-
- mutex_unlock(&sit_i->sentry_lock);
- mutex_unlock(&curseg->curseg_mutex);
-}
-
static inline bool is_merged_page(struct f2fs_sb_info *sbi,
struct page *page, enum page_type type)
{
}
/*
- * Summary block is always treated as invalid block
+ * Summary block is always treated as an invalid block
*/
static inline void check_block_count(struct f2fs_sb_info *sbi,
int segno, struct f2fs_sit_entry *raw_sit)
stop_gc_thread(sbi);
/* We don't need to do checkpoint when it's clean */
- if (sbi->s_dirty && get_pages(sbi, F2FS_DIRTY_NODES))
+ if (sbi->s_dirty)
write_checkpoint(sbi, true);
+ /*
+	 * Normally the superblock is clean, so we need to release this.
+	 * In addition, EIO will skip the checkpoint, so we need this as well.
+ */
+ release_dirty_inode(sbi);
+
iput(sbi->node_inode);
iput(sbi->meta_inode);
trace_f2fs_sync_fs(sb, sync);
- if (!sbi->s_dirty && !get_pages(sbi, F2FS_DIRTY_NODES))
- return 0;
-
if (sync) {
mutex_lock(&sbi->gc_mutex);
write_checkpoint(sbi, false);
buf->f_bfree = buf->f_blocks - valid_user_blocks(sbi) - ovp_count;
buf->f_bavail = user_block_count - valid_user_blocks(sbi);
- buf->f_files = sbi->total_node_count;
- buf->f_ffree = sbi->total_node_count - valid_inode_count(sbi);
+ buf->f_files = sbi->total_node_count - F2FS_RESERVED_NODE_NUM;
+ buf->f_ffree = buf->f_files - valid_inode_count(sbi);
buf->f_namelen = F2FS_NAME_LEN;
buf->f_fsid.val[0] = (u32)id;
if (need_restart_gc) {
if (start_gc_thread(sbi))
f2fs_msg(sbi->sb, KERN_WARNING,
- "background gc thread is stop");
+ "background gc thread has stopped");
} else if (need_stop_gc) {
stop_gc_thread(sbi);
}
if (unlikely(fsmeta >= total))
return 1;
- if (unlikely(is_set_ckpt_flags(ckpt, CP_ERROR_FLAG))) {
+ if (unlikely(f2fs_cp_error(sbi))) {
f2fs_msg(sbi->sb, KERN_ERR, "A bug case: need to run fsck");
return 1;
}
struct buffer_head *raw_super_buf;
struct inode *root;
long err = -EINVAL;
+ bool retry = true;
int i;
+try_onemore:
/* allocate memory for f2fs-specific super block info */
sbi = kzalloc(sizeof(struct f2fs_sb_info), GFP_KERNEL);
if (!sbi)
/* recover fsynced data */
if (!test_opt(sbi, DISABLE_ROLL_FORWARD)) {
err = recover_fsync_data(sbi);
- if (err)
+ if (err) {
f2fs_msg(sb, KERN_ERR,
"Cannot recover all fsync data errno=%ld", err);
+ goto free_kobj;
+ }
}
/*
brelse(raw_super_buf);
free_sbi:
kfree(sbi);
+
+	/* give only one more chance */
+	if (retry) {
+		retry = false;
+ shrink_dcache_sb(sb);
+ goto try_onemore;
+ }
return err;
}
int free;
/*
* If value is NULL, it is remove operation.
- * In case of update operation, we caculate free.
+ * In case of update operation, we calculate free.
*/
free = MIN_OFFSET(inode) - ((char *)last - (char *)base_addr);
if (found)
return;
}
-static int isofs_read_inode(struct inode *);
+static int isofs_read_inode(struct inode *, int relocated);
static int isofs_statfs (struct dentry *, struct kstatfs *);
static struct kmem_cache *isofs_inode_cachep;
goto out;
}
-static int isofs_read_inode(struct inode *inode)
+static int isofs_read_inode(struct inode *inode, int relocated)
{
struct super_block *sb = inode->i_sb;
struct isofs_sb_info *sbi = ISOFS_SB(sb);
*/
if (!high_sierra) {
- parse_rock_ridge_inode(de, inode);
+ parse_rock_ridge_inode(de, inode, relocated);
/* if we want uid/gid set, override the rock ridge setting */
if (sbi->s_uid_set)
inode->i_uid = sbi->s_uid;
* offset that point to the underlying meta-data for the inode. The
* code below is otherwise similar to the iget() code in
* include/linux/fs.h */
-struct inode *isofs_iget(struct super_block *sb,
- unsigned long block,
- unsigned long offset)
+struct inode *__isofs_iget(struct super_block *sb,
+ unsigned long block,
+ unsigned long offset,
+ int relocated)
{
unsigned long hashval;
struct inode *inode;
return ERR_PTR(-ENOMEM);
if (inode->i_state & I_NEW) {
- ret = isofs_read_inode(inode);
+ ret = isofs_read_inode(inode, relocated);
if (ret < 0) {
iget_failed(inode);
inode = ERR_PTR(ret);
struct inode; /* To make gcc happy */
-extern int parse_rock_ridge_inode(struct iso_directory_record *, struct inode *);
+extern int parse_rock_ridge_inode(struct iso_directory_record *, struct inode *, int relocated);
extern int get_rock_ridge_filename(struct iso_directory_record *, char *, struct inode *);
extern int isofs_name_translate(struct iso_directory_record *, char *, struct inode *);
extern struct buffer_head *isofs_bread(struct inode *, sector_t);
extern int isofs_get_blocks(struct inode *, sector_t, struct buffer_head **, unsigned long);
-extern struct inode *isofs_iget(struct super_block *sb,
- unsigned long block,
- unsigned long offset);
+struct inode *__isofs_iget(struct super_block *sb,
+ unsigned long block,
+ unsigned long offset,
+ int relocated);
+
+static inline struct inode *isofs_iget(struct super_block *sb,
+ unsigned long block,
+ unsigned long offset)
+{
+ return __isofs_iget(sb, block, offset, 0);
+}
+
+static inline struct inode *isofs_iget_reloc(struct super_block *sb,
+ unsigned long block,
+ unsigned long offset)
+{
+ return __isofs_iget(sb, block, offset, 1);
+}
/* Because the inode number is no longer relevant to finding the
* underlying meta-data for an inode, we are free to choose a more
goto out;
}
+#define RR_REGARD_XA 1
+#define RR_RELOC_DE 2
+
static int
parse_rock_ridge_inode_internal(struct iso_directory_record *de,
- struct inode *inode, int regard_xa)
+ struct inode *inode, int flags)
{
int symlink_len = 0;
int cnt, sig;
+ unsigned int reloc_block;
struct inode *reloc;
struct rock_ridge *rr;
int rootflag;
init_rock_state(&rs, inode);
setup_rock_ridge(de, inode, &rs);
- if (regard_xa) {
+ if (flags & RR_REGARD_XA) {
rs.chr += 14;
rs.len -= 14;
if (rs.len < 0)
"relocated directory\n");
goto out;
case SIG('C', 'L'):
- ISOFS_I(inode)->i_first_extent =
- isonum_733(rr->u.CL.location);
- reloc =
- isofs_iget(inode->i_sb,
- ISOFS_I(inode)->i_first_extent,
- 0);
+ if (flags & RR_RELOC_DE) {
+ printk(KERN_ERR
+ "ISOFS: Recursive directory relocation "
+ "is not supported\n");
+ goto eio;
+ }
+ reloc_block = isonum_733(rr->u.CL.location);
+ if (reloc_block == ISOFS_I(inode)->i_iget5_block &&
+ ISOFS_I(inode)->i_iget5_offset == 0) {
+ printk(KERN_ERR
+ "ISOFS: Directory relocation points to "
+ "itself\n");
+ goto eio;
+ }
+ ISOFS_I(inode)->i_first_extent = reloc_block;
+ reloc = isofs_iget_reloc(inode->i_sb, reloc_block, 0);
if (IS_ERR(reloc)) {
ret = PTR_ERR(reloc);
goto out;
return rpnt;
}
-int parse_rock_ridge_inode(struct iso_directory_record *de, struct inode *inode)
+int parse_rock_ridge_inode(struct iso_directory_record *de, struct inode *inode,
+ int relocated)
{
- int result = parse_rock_ridge_inode_internal(de, inode, 0);
+ int flags = relocated ? RR_RELOC_DE : 0;
+ int result = parse_rock_ridge_inode_internal(de, inode, flags);
/*
* if rockridge flag was reset and we didn't look for attributes
*/
if ((ISOFS_SB(inode->i_sb)->s_rock_offset == -1)
&& (ISOFS_SB(inode->i_sb)->s_rock == 2)) {
- result = parse_rock_ridge_inode_internal(de, inode, 14);
+ result = parse_rock_ridge_inode_internal(de, inode,
+ flags | RR_REGARD_XA);
}
return result;
}
struct commit_header *h;
__u32 csum;
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return;
h = (struct commit_header *)(bh->b_data);
return checksum;
}
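jbd2_journal_has_csum_v2or3() is used throughout the hunks below, but its definition is not shown here. A minimal sketch, assuming it simply tests both checksum incompat bits:

	static inline int jbd2_journal_has_csum_v2or3(journal_t *journal)
	{
		return JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2) ||
		       JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V3);
	}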
-static void write_tag_block(int tag_bytes, journal_block_tag_t *tag,
+static void write_tag_block(journal_t *j, journal_block_tag_t *tag,
unsigned long long block)
{
tag->t_blocknr = cpu_to_be32(block & (u32)~0);
- if (tag_bytes > JBD2_TAG_SIZE32)
+ if (JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_64BIT))
tag->t_blocknr_high = cpu_to_be32((block >> 31) >> 1);
}
struct jbd2_journal_block_tail *tail;
__u32 csum;
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return;
tail = (struct jbd2_journal_block_tail *)(bh->b_data + j->j_blocksize -
static void jbd2_block_tag_csum_set(journal_t *j, journal_block_tag_t *tag,
struct buffer_head *bh, __u32 sequence)
{
+ journal_block_tag3_t *tag3 = (journal_block_tag3_t *)tag;
struct page *page = bh->b_page;
__u8 *addr;
__u32 csum32;
__be32 seq;
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return;
seq = cpu_to_be32(sequence);
bh->b_size);
kunmap_atomic(addr);
- /* We only have space to store the lower 16 bits of the crc32c. */
- tag->t_checksum = cpu_to_be16(csum32);
+ if (JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V3))
+ tag3->t_checksum = cpu_to_be32(csum32);
+ else
+ tag->t_checksum = cpu_to_be16(csum32);
}
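The cast to journal_block_tag3_t above works because the v3 tag widens the checksum to a full 32 bits, where the v2 tag could only store a truncated 16-bit crc32c. A hedged sketch of the v3 on-disk layout, inferred from that store (the authoritative definition is in the jbd2 headers):

	typedef struct journal_block_tag3_s
	{
		__be32	t_blocknr;	/* The on-disk block number */
		__be32	t_flags;	/* See below */
		__be32	t_blocknr_high;	/* most-significant high 32bits of blocknr */
		__be32	t_checksum;	/* crc32c(uuid+seq+block) */
	} journal_block_tag3_t;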
/*
* jbd2_journal_commit_transaction
LIST_HEAD(io_bufs);
LIST_HEAD(log_bufs);
- if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (jbd2_journal_has_csum_v2or3(journal))
csum_size = sizeof(struct jbd2_journal_block_tail);
/*
tag_flag |= JBD2_FLAG_SAME_UUID;
tag = (journal_block_tag_t *) tagp;
- write_tag_block(tag_bytes, tag, jh2bh(jh)->b_blocknr);
+ write_tag_block(journal, tag, jh2bh(jh)->b_blocknr);
tag->t_flags = cpu_to_be16(tag_flag);
jbd2_block_tag_csum_set(journal, tag, wbuf[bufs],
commit_transaction->t_tid);
/* Checksumming functions */
static int jbd2_verify_csum_type(journal_t *j, journal_superblock_t *sb)
{
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return 1;
return sb->s_checksum_type == JBD2_CRC32C_CHKSUM;
static int jbd2_superblock_csum_verify(journal_t *j, journal_superblock_t *sb)
{
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return 1;
return sb->s_checksum == jbd2_superblock_csum(j, sb);
static void jbd2_superblock_csum_set(journal_t *j, journal_superblock_t *sb)
{
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return;
sb->s_checksum = jbd2_superblock_csum(j, sb);
goto out;
}
- if (JBD2_HAS_COMPAT_FEATURE(journal, JBD2_FEATURE_COMPAT_CHECKSUM) &&
- JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2)) {
+ if (jbd2_journal_has_csum_v2or3(journal) &&
+ JBD2_HAS_COMPAT_FEATURE(journal, JBD2_FEATURE_COMPAT_CHECKSUM)) {
/* Can't have checksum v1 and v2 on at the same time! */
printk(KERN_ERR "JBD2: Can't enable checksumming v1 and v2 "
"at the same time!\n");
goto out;
}
+ if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2) &&
+ JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
+ /* Can't have checksum v2 and v3 at the same time! */
+ printk(KERN_ERR "JBD2: Can't enable checksumming v2 and v3 "
+ "at the same time!\n");
+ goto out;
+ }
+
if (!jbd2_verify_csum_type(journal, sb)) {
printk(KERN_ERR "JBD2: Unknown checksum type\n");
goto out;
}
/* Load the checksum driver */
- if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2)) {
+ if (jbd2_journal_has_csum_v2or3(journal)) {
journal->j_chksum_driver = crypto_alloc_shash("crc32c", 0, 0);
if (IS_ERR(journal->j_chksum_driver)) {
printk(KERN_ERR "JBD2: Cannot load crc32c driver.\n");
}
/* Precompute checksum seed for all metadata */
- if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (jbd2_journal_has_csum_v2or3(journal))
journal->j_csum_seed = jbd2_chksum(journal, ~0, sb->s_uuid,
sizeof(sb->s_uuid));
if (!jbd2_journal_check_available_features(journal, compat, ro, incompat))
return 0;
- /* Asking for checksumming v2 and v1? Only give them v2. */
- if (incompat & JBD2_FEATURE_INCOMPAT_CSUM_V2 &&
+ /* If enabling v2 checksums, turn on v3 instead */
+ if (incompat & JBD2_FEATURE_INCOMPAT_CSUM_V2) {
+ incompat &= ~JBD2_FEATURE_INCOMPAT_CSUM_V2;
+ incompat |= JBD2_FEATURE_INCOMPAT_CSUM_V3;
+ }
+
+ /* Asking for checksumming v3 and v1? Only give them v3. */
+ if (incompat & JBD2_FEATURE_INCOMPAT_CSUM_V3 &&
compat & JBD2_FEATURE_COMPAT_CHECKSUM)
compat &= ~JBD2_FEATURE_COMPAT_CHECKSUM;
sb = journal->j_superblock;
- /* If enabling v2 checksums, update superblock */
- if (INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V2)) {
+ /* If enabling v3 checksums, update superblock */
+ if (INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
sb->s_checksum_type = JBD2_CRC32C_CHKSUM;
sb->s_feature_compat &=
~cpu_to_be32(JBD2_FEATURE_COMPAT_CHECKSUM);
}
/* Precompute checksum seed for all metadata */
- if (JBD2_HAS_INCOMPAT_FEATURE(journal,
- JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (jbd2_journal_has_csum_v2or3(journal))
journal->j_csum_seed = jbd2_chksum(journal, ~0,
sb->s_uuid,
sizeof(sb->s_uuid));
/* If enabling v1 checksums, downgrade superblock */
if (COMPAT_FEATURE_ON(JBD2_FEATURE_COMPAT_CHECKSUM))
sb->s_feature_incompat &=
- ~cpu_to_be32(JBD2_FEATURE_INCOMPAT_CSUM_V2);
+ ~cpu_to_be32(JBD2_FEATURE_INCOMPAT_CSUM_V2 |
+ JBD2_FEATURE_INCOMPAT_CSUM_V3);
sb->s_feature_compat |= cpu_to_be32(compat);
sb->s_feature_ro_compat |= cpu_to_be32(ro);
*/
size_t journal_tag_bytes(journal_t *journal)
{
- journal_block_tag_t tag;
- size_t x = 0;
+ size_t sz;
+
+ if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V3))
+ return sizeof(journal_block_tag3_t);
+
+ sz = sizeof(journal_block_tag_t);
if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2))
- x += sizeof(tag.t_checksum);
+ sz += sizeof(__u16);
if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
- return x + JBD2_TAG_SIZE64;
+ return sz;
else
- return x + JBD2_TAG_SIZE32;
+ return sz - sizeof(__u32);
}
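For a rough feel of the resulting sizes (assuming the conventional 12-byte journal_block_tag_t and a 16-byte journal_block_tag3_t): v3 tags are always a fixed 16 bytes; otherwise the 12-byte base gains 2 bytes when the v2 checksum is present and drops the 4-byte t_blocknr_high when the 64BIT feature is off, so e.g. v2 checksums on a 32-bit journal give 12 + 2 - 4 = 10 bytes per tag.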
/*
__be32 provided;
__u32 calculated;
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return 1;
tail = (struct jbd2_journal_block_tail *)(buf + j->j_blocksize -
int nr = 0, size = journal->j_blocksize;
int tag_bytes = journal_tag_bytes(journal);
- if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (jbd2_journal_has_csum_v2or3(journal))
size -= sizeof(struct jbd2_journal_block_tail);
tagp = &bh->b_data[sizeof(journal_header_t)];
return err;
}
-static inline unsigned long long read_tag_block(int tag_bytes, journal_block_tag_t *tag)
+static inline unsigned long long read_tag_block(journal_t *journal,
+ journal_block_tag_t *tag)
{
unsigned long long block = be32_to_cpu(tag->t_blocknr);
- if (tag_bytes > JBD2_TAG_SIZE32)
+ if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
block |= (u64)be32_to_cpu(tag->t_blocknr_high) << 32;
return block;
}
__be32 provided;
__u32 calculated;
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return 1;
h = buf;
static int jbd2_block_tag_csum_verify(journal_t *j, journal_block_tag_t *tag,
void *buf, __u32 sequence)
{
+ journal_block_tag3_t *tag3 = (journal_block_tag3_t *)tag;
__u32 csum32;
__be32 seq;
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return 1;
seq = cpu_to_be32(sequence);
csum32 = jbd2_chksum(j, j->j_csum_seed, (__u8 *)&seq, sizeof(seq));
csum32 = jbd2_chksum(j, csum32, buf, j->j_blocksize);
- return tag->t_checksum == cpu_to_be16(csum32);
+ if (JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V3))
+ return tag3->t_checksum == cpu_to_be32(csum32);
+ else
+ return tag->t_checksum == cpu_to_be16(csum32);
}
static int do_one_pass(journal_t *journal,
int tag_bytes = journal_tag_bytes(journal);
__u32 crc32_sum = ~0; /* Transactional Checksums */
int descr_csum_size = 0;
+ int block_error = 0;
/*
* First thing is to establish what we expect to find in the log
switch(blocktype) {
case JBD2_DESCRIPTOR_BLOCK:
/* Verify checksum first */
- if (JBD2_HAS_INCOMPAT_FEATURE(journal,
- JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (jbd2_journal_has_csum_v2or3(journal))
descr_csum_size =
sizeof(struct jbd2_journal_block_tail);
if (descr_csum_size > 0 &&
unsigned long long blocknr;
J_ASSERT(obh != NULL);
- blocknr = read_tag_block(tag_bytes,
+ blocknr = read_tag_block(journal,
tag);
/* If the block has been
"checksum recovering "
"block %llu in log\n",
blocknr);
- continue;
+ block_error = 1;
+ goto skip_write;
}
/* Find a buffer for the new
success = -EIO;
}
}
-
+ if (block_error && success == 0)
+ success = -EIO;
return success;
failed:
__be32 provided;
__u32 calculated;
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return 1;
tail = (struct jbd2_journal_revoke_tail *)(buf + j->j_blocksize -
#include <linux/list.h>
#include <linux/init.h>
#include <linux/bio.h>
-#endif
#include <linux/log2.h>
+#endif
static struct kmem_cache *jbd2_revoke_record_cache;
static struct kmem_cache *jbd2_revoke_table_cache;
offset = *offsetp;
/* Do we need to leave space at the end for a checksum? */
- if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (jbd2_journal_has_csum_v2or3(journal))
csum_size = sizeof(struct jbd2_journal_revoke_tail);
/* Make sure we have a descriptor with space left for the record */
struct jbd2_journal_revoke_tail *tail;
__u32 csum;
- if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+ if (!jbd2_journal_has_csum_v2or3(j))
return;
tail = (struct jbd2_journal_revoke_tail *)(bh->b_data + j->j_blocksize -
error = make_socks(serv, net);
if (error < 0)
- goto err_socks;
+ goto err_bind;
set_grace_period(net);
dprintk("lockd_up_net: per-net data created; net=%p\n", net);
return 0;
-err_socks:
- svc_rpcb_cleanup(serv, net);
err_bind:
ln->nlmsvc_users--;
return error;
smp_mb();
error = check_conflicting_open(dentry, arg);
if (error)
- locks_unlink_lock(flp);
+ locks_unlink_lock(before);
out:
if (is_deleg)
mutex_unlock(&inode->i_mutex);
#include <linux/device_cgroup.h>
#include <linux/fs_struct.h>
#include <linux/posix_acl.h>
+#include <linux/hash.h>
#include <asm/uaccess.h>
#include "internal.h"
static __always_inline void set_root(struct nameidata *nd)
{
- if (!nd->root.mnt)
- get_fs_root(current->fs, &nd->root);
+ get_fs_root(current->fs, &nd->root);
}
static int link_path_walk(const char *, struct nameidata *);
-static __always_inline void set_root_rcu(struct nameidata *nd)
+static __always_inline unsigned set_root_rcu(struct nameidata *nd)
{
- if (!nd->root.mnt) {
- struct fs_struct *fs = current->fs;
- unsigned seq;
+ struct fs_struct *fs = current->fs;
+ unsigned seq, res;
- do {
- seq = read_seqcount_begin(&fs->seq);
- nd->root = fs->root;
- nd->seq = __read_seqcount_begin(&nd->root.dentry->d_seq);
- } while (read_seqcount_retry(&fs->seq, seq));
- }
+ do {
+ seq = read_seqcount_begin(&fs->seq);
+ nd->root = fs->root;
+ res = __read_seqcount_begin(&nd->root.dentry->d_seq);
+ } while (read_seqcount_retry(&fs->seq, seq));
+ return res;
}
static void path_put_conditional(struct path *path, struct nameidata *nd)
return PTR_ERR(s);
}
if (*s == '/') {
- set_root(nd);
+ if (!nd->root.mnt)
+ set_root(nd);
path_put(&nd->path);
nd->path = nd->root;
path_get(&nd->root);
*/
*inode = path->dentry->d_inode;
}
- return read_seqretry(&mount_lock, nd->m_seq) &&
+ return !read_seqretry(&mount_lock, nd->m_seq) &&
!(path->dentry->d_flags & DCACHE_NEED_AUTOMOUNT);
}
static int follow_dotdot_rcu(struct nameidata *nd)
{
- set_root_rcu(nd);
+ struct inode *inode = nd->inode;
+ if (!nd->root.mnt)
+ set_root_rcu(nd);
while (1) {
if (nd->path.dentry == nd->root.dentry &&
struct dentry *parent = old->d_parent;
unsigned seq;
+ inode = parent->d_inode;
seq = read_seqcount_begin(&parent->d_seq);
if (read_seqcount_retry(&old->d_seq, nd->seq))
goto failed;
}
if (!follow_up_rcu(&nd->path))
break;
+ inode = nd->path.dentry->d_inode;
nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq);
}
while (d_mountpoint(nd->path.dentry)) {
break;
nd->path.mnt = &mounted->mnt;
nd->path.dentry = mounted->mnt.mnt_root;
+ inode = nd->path.dentry->d_inode;
nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq);
- if (!read_seqretry(&mount_lock, nd->m_seq))
+ if (read_seqretry(&mount_lock, nd->m_seq))
goto failed;
}
- nd->inode = nd->path.dentry->d_inode;
+ nd->inode = inode;
return 0;
failed:
static void follow_dotdot(struct nameidata *nd)
{
- set_root(nd);
+ if (!nd->root.mnt)
+ set_root(nd);
while(1) {
struct dentry *old = nd->path.dentry;
static inline unsigned int fold_hash(unsigned long hash)
{
- hash += hash >> (8*sizeof(int));
- return hash;
+ return hash_64(hash, 32);
}
#else /* 32-bit case */
/*
* Calculate the length and hash of the path component, and
- * return the length of the component;
+ * fill in the qstr. Return the "len" as the result.
*/
-static inline unsigned long hash_name(const char *name, unsigned int *hashp)
+static inline unsigned long hash_name(const char *name, struct qstr *res)
{
unsigned long a, b, adata, bdata, mask, hash, len;
const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
+ res->name = name;
hash = a = 0;
len = -sizeof(unsigned long);
do {
mask = create_zero_mask(adata | bdata);
hash += a & zero_bytemask(mask);
- *hashp = fold_hash(hash);
+ len += find_zero(mask);
+ res->hash_len = hashlen_create(fold_hash(hash), len);
- return len + find_zero(mask);
+ return len;
}
#else
* We know there's a real path component here of at least
* one character.
*/
-static inline unsigned long hash_name(const char *name, unsigned int *hashp)
+static inline long hash_name(const char *name, struct qstr *res)
{
unsigned long hash = init_name_hash();
unsigned long len = 0, c;
+ res->name = name;
c = (unsigned char)*name;
do {
len++;
hash = partial_name_hash(c, hash);
c = (unsigned char)name[len];
} while (c && c != '/');
- *hashp = end_name_hash(hash);
+ res->hash_len = hashlen_create(end_name_hash(hash), len);
return len;
}
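Both hash_name() variants now fill the qstr's combined hash_len field instead
of returning a bare hash through an out-parameter. A plausible sketch of the
packing helper, assuming the usual dcache definition (the authoritative macro
lives in the kernel's dcache headers):

	/* Pack a 32-bit hash and a component length into one 64-bit
	 * value: hash in the low 32 bits, length in the high 32 bits. */
	#define hashlen_create(hash, len)	(((u64)(len) << 32) | (u32)(hash))

Carrying hash and length in a single 64-bit value lets link_path_walk() hand
both to the lookup code in one register. The 64-bit fold_hash() above likewise
switches to hash_64(), which mixes all input bits through a multiplicative
hash rather than simply adding the two 32-bit halves.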
if (err)
break;
- len = hash_name(name, &this.hash);
- this.name = name;
- this.len = len;
+ len = hash_name(name, &this);
type = LAST_NORM;
if (name[0] == '.') switch (len) {
if (*name=='/') {
if (flags & LOOKUP_RCU) {
rcu_read_lock();
- set_root_rcu(nd);
+ nd->seq = set_root_rcu(nd);
} else {
set_root(nd);
path_get(&nd->root);
}
nd->inode = nd->path.dentry->d_inode;
- return 0;
+ if (!(flags & LOOKUP_RCU))
+ return 0;
+ if (likely(!read_seqcount_retry(&nd->path.dentry->d_seq, nd->seq)))
+ return 0;
+ if (!(nd->flags & LOOKUP_ROOT))
+ nd->root.mnt = NULL;
+ rcu_read_unlock();
+ return -ECHILD;
}
static inline int lookup_last(struct nameidata *nd, struct path *path)
head.first->pprev = &head.first;
INIT_HLIST_HEAD(&unmounted);
+ /* undo decrements we'd done in umount_tree() */
+ hlist_for_each_entry(mnt, &head, mnt_hash)
+ if (mnt->mnt_ex_mountpoint.mnt)
+ mntget(mnt->mnt_ex_mountpoint.mnt);
+
up_write(&namespace_sem);
synchronize_rcu();
hlist_add_head(&p->mnt_hash, &tmp_list);
}
+ hlist_for_each_entry(p, &tmp_list, mnt_hash)
+ list_del_init(&p->mnt_child);
+
if (how)
propagate_umount(&tmp_list);
p->mnt_ns = NULL;
if (how < 2)
p->mnt.mnt_flags |= MNT_SYNC_UMOUNT;
- list_del_init(&p->mnt_child);
if (mnt_has_parent(p)) {
put_mountpoint(p->mnt_mp);
+ mnt_add_count(p->mnt_parent, -1);
/* move the reference to mountpoint into ->mnt_ex_mountpoint */
p->mnt_ex_mountpoint.dentry = p->mnt_mountpoint;
p->mnt_ex_mountpoint.mnt = &p->mnt_parent->mnt;
p = proc_create("volumes", S_IFREG|S_IRUGO,
nn->proc_nfsfs, &nfs_volume_list_fops);
if (!p)
- goto error_2;
+ goto error_1;
return 0;
-error_2:
- remove_proc_entry("servers", nn->proc_nfsfs);
error_1:
- remove_proc_entry("fs/nfsfs", NULL);
+ remove_proc_subtree("nfsfs", net->proc_net);
error_0:
return -ENOMEM;
}
void nfs_fs_proc_net_exit(struct net *net)
{
- struct nfs_net *nn = net_generic(net, nfs_net_id);
-
- remove_proc_entry("volumes", nn->proc_nfsfs);
- remove_proc_entry("servers", nn->proc_nfsfs);
- remove_proc_entry("fs/nfsfs", NULL);
+ remove_proc_subtree("nfsfs", net->proc_net);
}
/*
static void filelayout_retry_commit(struct nfs_commit_info *cinfo, int idx)
{
struct pnfs_ds_commit_info *fl_cinfo = cinfo->ds;
- struct pnfs_commit_bucket *bucket = fl_cinfo->buckets;
+ struct pnfs_commit_bucket *bucket;
struct pnfs_layout_segment *freeme;
int i;
- for (i = idx; i < fl_cinfo->nbuckets; i++, bucket++) {
+ for (i = idx; i < fl_cinfo->nbuckets; i++) {
+ bucket = &fl_cinfo->buckets[i];
if (list_empty(&bucket->committing))
continue;
nfs_retry_commit(&bucket->committing, bucket->clseg, cinfo);
.rpc_argp = &args,
.rpc_resp = &fattr,
};
- int status;
+ int status = 0;
+
+ if (acl == NULL && (!S_ISDIR(inode->i_mode) || dfacl == NULL))
+ goto out;
status = -EOPNOTSUPP;
if (!nfs_server_capable(inode, NFS_CAP_ACLS))
*/
struct nfs4_lock_state {
- struct list_head ls_locks; /* Other lock stateids */
- struct nfs4_state * ls_state; /* Pointer to open state */
+ struct list_head ls_locks; /* Other lock stateids */
+ struct nfs4_state * ls_state; /* Pointer to open state */
#define NFS_LOCK_INITIALIZED 0
#define NFS_LOCK_LOST 1
- unsigned long ls_flags;
+ unsigned long ls_flags;
struct nfs_seqid_counter ls_seqid;
- nfs4_stateid ls_stateid;
- atomic_t ls_count;
- fl_owner_t ls_owner;
- struct work_struct ls_release;
+ nfs4_stateid ls_stateid;
+ atomic_t ls_count;
+ fl_owner_t ls_owner;
};
/* bits for nfs4_state->flags */
struct nfs4_closedata *calldata = data;
struct nfs4_state *state = calldata->state;
struct nfs_server *server = NFS_SERVER(calldata->inode);
+ nfs4_stateid *res_stateid = NULL;
dprintk("%s: begin!\n", __func__);
if (!nfs4_sequence_done(task, &calldata->res.seq_res))
*/
switch (task->tk_status) {
case 0:
- if (calldata->roc)
+ res_stateid = &calldata->res.stateid;
+ if (calldata->arg.fmode == 0 && calldata->roc)
pnfs_roc_set_barrier(state->inode,
calldata->roc_barrier);
- nfs_clear_open_stateid(state, &calldata->res.stateid, 0);
renew_lease(server, calldata->timestamp);
- goto out_release;
+ break;
case -NFS4ERR_ADMIN_REVOKED:
case -NFS4ERR_STALE_STATEID:
case -NFS4ERR_OLD_STATEID:
goto out_release;
}
}
- nfs_clear_open_stateid(state, NULL, calldata->arg.fmode);
+ nfs_clear_open_stateid(state, res_stateid, calldata->arg.fmode);
out_release:
nfs_release_seqid(calldata->arg.seqid);
nfs_refresh_inode(calldata->inode, calldata->res.fattr);
struct nfs4_closedata *calldata = data;
struct nfs4_state *state = calldata->state;
struct inode *inode = calldata->inode;
+ bool is_rdonly, is_wronly, is_rdwr;
int call_close = 0;
dprintk("%s: begin!\n", __func__);
goto out_wait;
task->tk_msg.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_DOWNGRADE];
- calldata->arg.fmode = FMODE_READ|FMODE_WRITE;
spin_lock(&state->owner->so_lock);
+ is_rdwr = test_bit(NFS_O_RDWR_STATE, &state->flags);
+ is_rdonly = test_bit(NFS_O_RDONLY_STATE, &state->flags);
+ is_wronly = test_bit(NFS_O_WRONLY_STATE, &state->flags);
+ /* Calculate the current open share mode */
+ calldata->arg.fmode = 0;
+ if (is_rdonly || is_rdwr)
+ calldata->arg.fmode |= FMODE_READ;
+ if (is_wronly || is_rdwr)
+ calldata->arg.fmode |= FMODE_WRITE;
/* Calculate the change in open mode */
if (state->n_rdwr == 0) {
if (state->n_rdonly == 0) {
- call_close |= test_bit(NFS_O_RDONLY_STATE, &state->flags);
- call_close |= test_bit(NFS_O_RDWR_STATE, &state->flags);
+ call_close |= is_rdonly || is_rdwr;
calldata->arg.fmode &= ~FMODE_READ;
}
if (state->n_wronly == 0) {
- call_close |= test_bit(NFS_O_WRONLY_STATE, &state->flags);
- call_close |= test_bit(NFS_O_RDWR_STATE, &state->flags);
+ call_close |= is_wronly || is_rdwr;
calldata->arg.fmode &= ~FMODE_WRITE;
}
}
return NULL;
}
-static void
-free_lock_state_work(struct work_struct *work)
-{
- struct nfs4_lock_state *lsp = container_of(work,
- struct nfs4_lock_state, ls_release);
- struct nfs4_state *state = lsp->ls_state;
- struct nfs_server *server = state->owner->so_server;
- struct nfs_client *clp = server->nfs_client;
-
- clp->cl_mvops->free_lock_state(server, lsp);
-}
-
/*
* Return a compatible lock_state. If no initialized lock_state structure
* exists, return an uninitialized one.
if (lsp->ls_seqid.owner_id < 0)
goto out_free;
INIT_LIST_HEAD(&lsp->ls_locks);
- INIT_WORK(&lsp->ls_release, free_lock_state_work);
return lsp;
out_free:
kfree(lsp);
if (list_empty(&state->lock_states))
clear_bit(LK_STATE_IN_USE, &state->flags);
spin_unlock(&state->state_lock);
- if (test_bit(NFS_LOCK_INITIALIZED, &lsp->ls_flags))
- queue_work(nfsiod_workqueue, &lsp->ls_release);
- else {
- server = state->owner->so_server;
+ server = state->owner->so_server;
+ if (test_bit(NFS_LOCK_INITIALIZED, &lsp->ls_flags)) {
+ struct nfs_client *clp = server->nfs_client;
+
+ clp->cl_mvops->free_lock_state(server, lsp);
+ } else
nfs4_free_lock_state(server, lsp);
- }
}
static void nfs4_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
if (atomic_read(&c->io_count) == 0)
break;
ret = nfs_wait_bit_killable(&q.key);
- } while (atomic_read(&c->io_count) != 0);
+ } while (atomic_read(&c->io_count) != 0 && !ret);
finish_wait(wq, &q.wait);
return ret;
}
/*
* nfs_page_group_lock - lock the head of the page group
* @req - request in group that is to be locked
+ * @nonblock - if true don't block waiting for lock
*
* this lock must be held if modifying the page group list
*
- * returns result from wait_on_bit_lock: 0 on success, < 0 on error
+ * return 0 on success, < 0 on error: -EAGAIN if nonblocking or the
+ * result from wait_on_bit_lock
+ *
+ * NOTE: calling with nonblock=false should always have set the
+ * lock bit (see fs/buffer.c and other uses of wait_on_bit_lock
+ * with TASK_UNINTERRUPTIBLE), so there is no need to check the result.
*/
int
-nfs_page_group_lock(struct nfs_page *req, bool wait)
+nfs_page_group_lock(struct nfs_page *req, bool nonblock)
{
struct nfs_page *head = req->wb_head;
- int ret;
WARN_ON_ONCE(head != head->wb_head);
- do {
- ret = wait_on_bit_lock(&head->wb_flags, PG_HEADLOCK,
- TASK_UNINTERRUPTIBLE);
- } while (wait && ret != 0);
+ if (!test_and_set_bit(PG_HEADLOCK, &head->wb_flags))
+ return 0;
- WARN_ON_ONCE(ret > 0);
- return ret;
+ if (!nonblock)
+ return wait_on_bit_lock(&head->wb_flags, PG_HEADLOCK,
+ TASK_UNINTERRUPTIBLE);
+
+ return -EAGAIN;
+}
+
+/*
+ * nfs_page_group_lock_wait - wait for the lock to clear, but don't grab it
+ * @req - a request in the group
+ *
+ * This is a blocking call to wait for the group lock to be cleared.
+ */
+void
+nfs_page_group_lock_wait(struct nfs_page *req)
+{
+ struct nfs_page *head = req->wb_head;
+
+ WARN_ON_ONCE(head != head->wb_head);
+
+ wait_on_bit(&head->wb_flags, PG_HEADLOCK,
+ TASK_UNINTERRUPTIBLE);
}
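Together, the nonblocking mode and the new wait helper support a try-then-wait
pattern for callers that hold a spinlock: attempt the group lock without
blocking, and on failure drop the spinlock, wait for the current holder, and
retry from the top. A sketch of the caller side, mirroring the
nfs_lock_and_join_requests() hunk later in this series:

	ret = nfs_page_group_lock(head, true);	/* nonblocking attempt */
	if (ret < 0) {
		spin_unlock(&inode->i_lock);	/* must not sleep under i_lock */
		nfs_page_group_lock_wait(head);	/* block until the lock clears */
		nfs_release_request(head);
		goto try_again;			/* retake i_lock and retry */
	}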
/*
{
bool ret;
- nfs_page_group_lock(req, true);
+ nfs_page_group_lock(req, false);
ret = nfs_page_group_sync_on_bit_locked(req, bit);
nfs_page_group_unlock(req);
struct nfs_pgio_header *hdr)
{
struct nfs_page *req;
- struct page **pages;
+ struct page **pages,
+ *last_page;
struct list_head *head = &desc->pg_list;
struct nfs_commit_info cinfo;
- unsigned int pagecount;
+ unsigned int pagecount, pageused;
pagecount = nfs_page_array_len(desc->pg_base, desc->pg_count);
if (!nfs_pgarray_set(&hdr->page_array, pagecount))
nfs_init_cinfo(&cinfo, desc->pg_inode, desc->pg_dreq);
pages = hdr->page_array.pagevec;
+ last_page = NULL;
+ pageused = 0;
while (!list_empty(head)) {
req = nfs_list_entry(head->next);
nfs_list_remove_request(req);
nfs_list_add_request(req, &hdr->pages);
- *pages++ = req->wb_page;
+
+ if (WARN_ON_ONCE(pageused >= pagecount))
+ return nfs_pgio_error(desc, hdr);
+
+ if (!last_page || last_page != req->wb_page) {
+ *pages++ = last_page = req->wb_page;
+ pageused++;
+ }
}
+ if (WARN_ON_ONCE(pageused != pagecount))
+ return nfs_pgio_error(desc, hdr);
if ((desc->pg_ioflags & FLUSH_COND_STABLE) &&
(desc->pg_moreio || nfs_reqs_to_commit(&cinfo)))
return false;
if (req_offset(req) != req_offset(prev) + prev->wb_bytes)
return false;
+ if (req->wb_page == prev->wb_page) {
+ if (req->wb_pgbase != prev->wb_pgbase + prev->wb_bytes)
+ return false;
+ } else {
+ if (req->wb_pgbase != 0 ||
+ prev->wb_pgbase + prev->wb_bytes != PAGE_CACHE_SIZE)
+ return false;
+ }
}
size = pgio->pg_ops->pg_test(pgio, prev, req);
WARN_ON_ONCE(size > req->wb_bytes);
struct nfs_page *subreq;
unsigned int bytes_left = 0;
unsigned int offset, pgbase;
- int ret;
- ret = nfs_page_group_lock(req, false);
- if (ret < 0) {
- desc->pg_error = ret;
- return 0;
- }
+ nfs_page_group_lock(req, false);
subreq = req;
bytes_left = subreq->wb_bytes;
if (desc->pg_recoalesce)
return 0;
/* retry add_request for this subreq */
- ret = nfs_page_group_lock(req, false);
- if (ret < 0) {
- desc->pg_error = ret;
- return 0;
- }
+ nfs_page_group_lock(req, false);
continue;
}
unsigned int pos = 0;
unsigned int len = nfs_page_length(req->wb_page);
- nfs_page_group_lock(req, true);
+ nfs_page_group_lock(req, false);
do {
tmp = nfs_page_group_search_locked(req->wb_head, pos);
return NULL;
}
- /* lock each request in the page group */
- ret = nfs_page_group_lock(head, false);
- if (ret < 0)
+ /* holding inode lock, so always make a non-blocking call to try the
+ * page group lock */
+ ret = nfs_page_group_lock(head, true);
+ if (ret < 0) {
+ spin_unlock(&inode->i_lock);
+
+ if (!nonblock && ret == -EAGAIN) {
+ nfs_page_group_lock_wait(head);
+ nfs_release_request(head);
+ goto try_again;
+ }
+
+ nfs_release_request(head);
return ERR_PTR(ret);
+ }
+
+ /* lock each request in the page group */
subreq = head;
do {
/*
struct xdr_stream *xdr = cd->xdr;
int start_offset = xdr->buf->len;
int cookie_offset;
+ u32 name_and_cookie;
int entry_bytes;
__be32 nfserr = nfserr_toosmall;
__be64 wire_offset;
cd->rd_maxcount -= entry_bytes;
if (!cd->rd_dircount)
goto fail;
- cd->rd_dircount--;
+ /*
+ * RFC 3530 14.2.24 describes rd_dircount as only a "hint", so
+ * let's always let through the first entry, at least:
+ */
+ name_and_cookie = 4 * XDR_QUADLEN(namlen) + 8;
+ if (name_and_cookie > cd->rd_dircount && cd->cookie_offset)
+ goto fail;
+ cd->rd_dircount -= min(cd->rd_dircount, name_and_cookie);
cd->cookie_offset = cookie_offset;
skip_entry:
cd->common.err = nfs_ok;
}
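The estimate above follows the XDR encoding of a READDIR entry: the name is
padded to a 4-byte boundary (XDR_QUADLEN(n) quads) and each entry carries an
8-byte cookie. For example, a 5-byte name gives XDR_QUADLEN(5) = 2, so
name_and_cookie = 4 * 2 + 8 = 16 bytes charged against rd_dircount.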
maxcount = min_t(int, maxcount-16, bytes_left);
+ /* RFC 3530 14.2.24 allows us to ignore dircount when it's 0: */
+ if (!readdir->rd_dircount)
+ readdir->rd_dircount = INT_MAX;
+
readdir->xdr = xdr;
readdir->rd_maxcount = maxcount;
readdir->common.err = 0;
{
struct {
struct file_handle handle;
- u8 pad[64];
+ u8 pad[MAX_HANDLE_SZ];
} f;
int size, ret, i;
size = f.handle.handle_bytes >> 2;
ret = exportfs_encode_inode_fh(inode, (struct fid *)f.handle.f_handle, &size, 0);
- if ((ret == 255) || (ret == -ENOSPC)) {
+ if ((ret == FILEID_INVALID) || (ret < 0)) {
WARN_ONCE(1, "Can't encode file handle for inotify: %d\n", ret);
return 0;
}
}
out:
- spin_unlock(&qs->qs_lock);
- if (fence)
+ if (fence) {
+ spin_unlock(&qs->qs_lock);
o2quo_fence_self();
+ } else {
+ mlog(ML_NOTICE, "not fencing this node, heartbeating: %d, "
+ "connected: %d, lowest: %d (%sreachable)\n",
+ qs->qs_heartbeating, qs->qs_connected, lowest_hb,
+ lowest_reachable ? "" : "un");
+ spin_unlock(&qs->qs_lock);
+
+ }
+
}
static void o2quo_set_hold(struct o2quo_state *qs, u8 node)
return ret;
}
+static int o2net_set_usertimeout(struct socket *sock)
+{
+ int user_timeout = O2NET_TCP_USER_TIMEOUT;
+
+ return kernel_setsockopt(sock, SOL_TCP, TCP_USER_TIMEOUT,
+ (char *)&user_timeout, sizeof(user_timeout));
+}
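TCP_USER_TIMEOUT (RFC 5482) bounds how long written data may stay
unacknowledged before TCP aborts the connection. o2net sets it to
O2NET_TCP_USER_TIMEOUT = 0x7fffffff, effectively disabling TCP's own abort so
that the cluster's heartbeat and quorum logic, not the transport, decides when
a peer is gone. For comparison, the same knob from userspace (an illustrative
sketch, not part of this patch):

	int timeout_ms = 30000;	/* abort after 30s of unacked data */

	setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
		   &timeout_ms, sizeof(timeout_ms));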
+
static void o2net_initialize_handshake(void)
{
o2net_hand->o2hb_heartbeat_timeout_ms = cpu_to_be32(
#endif
printk(KERN_NOTICE "o2net: Connection to " SC_NODEF_FMT " has been "
- "idle for %lu.%lu secs, shutting it down.\n", SC_NODEF_ARGS(sc),
- msecs / 1000, msecs % 1000);
+ "idle for %lu.%lu secs.\n",
+ SC_NODEF_ARGS(sc), msecs / 1000, msecs % 1000);
- /*
- * Initialize the nn_timeout so that the next connection attempt
- * will continue in o2net_start_connect.
+ /* An idle timeout happened; don't shut down the connection, but
+ * make the fence decision. Maybe the connection can recover before
+ * the decision is made.
*/
atomic_set(&nn->nn_timeout, 1);
+ o2quo_conn_err(o2net_num_from_nn(nn));
+ queue_delayed_work(o2net_wq, &nn->nn_still_up,
+ msecs_to_jiffies(O2NET_QUORUM_DELAY_MS));
+
+ o2net_sc_reset_idle_timer(sc);
- o2net_sc_queue_work(sc, &sc->sc_shutdown_work);
}
static void o2net_sc_reset_idle_timer(struct o2net_sock_container *sc)
static void o2net_sc_postpone_idle(struct o2net_sock_container *sc)
{
+ struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num);
+
+ /* clear the fence decision since the connection recovered from the timeout */
+ if (atomic_read(&nn->nn_timeout)) {
+ o2quo_conn_up(o2net_num_from_nn(nn));
+ cancel_delayed_work(&nn->nn_still_up);
+ atomic_set(&nn->nn_timeout, 0);
+ }
+
/* Only push out an existing timer */
if (timer_pending(&sc->sc_idle_timeout))
o2net_sc_reset_idle_timer(sc);
goto out;
}
+ ret = o2net_set_usertimeout(sock);
+ if (ret) {
+ mlog(ML_ERROR, "set TCP_USER_TIMEOUT failed with %d\n", ret);
+ goto out;
+ }
+
o2net_register_callbacks(sc->sc_sock->sk, sc);
spin_lock(&nn->nn_lock);
goto out;
}
+ ret = o2net_set_usertimeout(new_sock);
+ if (ret) {
+ mlog(ML_ERROR, "set TCP_USER_TIMEOUT failed with %d\n", ret);
+ goto out;
+ }
+
slen = sizeof(sin);
ret = new_sock->ops->getname(new_sock, (struct sockaddr *) &sin,
&slen, 1);
#define O2NET_KEEPALIVE_DELAY_MS_DEFAULT 2000
#define O2NET_IDLE_TIMEOUT_MS_DEFAULT 30000
+#define O2NET_TCP_USER_TIMEOUT 0x7fffffff
/* TODO: figure this out.... */
static inline int o2net_link_down(int err, struct socket *sock)
copy_to_user((typeof(a) __user *)b, &(a), sizeof(a))
/*
- * This call is void because we are already reporting an error that may
- * be -EFAULT. The error will be returned from the ioctl(2) call. It's
- * just a best-effort to tell userspace that this request caused the error.
+ * This is just a best-effort to tell userspace that this request
+ * caused the error.
*/
static inline void o2info_set_request_error(struct ocfs2_info_request *kreq,
struct ocfs2_info_request __user *req)
static int ocfs2_info_handle_blocksize(struct inode *inode,
struct ocfs2_info_request __user *req)
{
- int status = -EFAULT;
struct ocfs2_info_blocksize oib;
if (o2info_from_user(oib, req))
- goto bail;
+ return -EFAULT;
oib.ib_blocksize = inode->i_sb->s_blocksize;
o2info_set_request_filled(&oib.ib_req);
if (o2info_to_user(oib, req))
- goto bail;
-
- status = 0;
-bail:
- if (status)
- o2info_set_request_error(&oib.ib_req, req);
+ return -EFAULT;
- return status;
+ return 0;
}
static int ocfs2_info_handle_clustersize(struct inode *inode,
struct ocfs2_info_request __user *req)
{
- int status = -EFAULT;
struct ocfs2_info_clustersize oic;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oic, req))
- goto bail;
+ return -EFAULT;
oic.ic_clustersize = osb->s_clustersize;
o2info_set_request_filled(&oic.ic_req);
if (o2info_to_user(oic, req))
- goto bail;
-
- status = 0;
-bail:
- if (status)
- o2info_set_request_error(&oic.ic_req, req);
+ return -EFAULT;
- return status;
+ return 0;
}
static int ocfs2_info_handle_maxslots(struct inode *inode,
struct ocfs2_info_request __user *req)
{
- int status = -EFAULT;
struct ocfs2_info_maxslots oim;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oim, req))
- goto bail;
+ return -EFAULT;
oim.im_max_slots = osb->max_slots;
o2info_set_request_filled(&oim.im_req);
if (o2info_to_user(oim, req))
- goto bail;
+ return -EFAULT;
- status = 0;
-bail:
- if (status)
- o2info_set_request_error(&oim.im_req, req);
-
- return status;
+ return 0;
}
static int ocfs2_info_handle_label(struct inode *inode,
struct ocfs2_info_request __user *req)
{
- int status = -EFAULT;
struct ocfs2_info_label oil;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oil, req))
- goto bail;
+ return -EFAULT;
memcpy(oil.il_label, osb->vol_label, OCFS2_MAX_VOL_LABEL_LEN);
o2info_set_request_filled(&oil.il_req);
if (o2info_to_user(oil, req))
- goto bail;
+ return -EFAULT;
- status = 0;
-bail:
- if (status)
- o2info_set_request_error(&oil.il_req, req);
-
- return status;
+ return 0;
}
static int ocfs2_info_handle_uuid(struct inode *inode,
struct ocfs2_info_request __user *req)
{
- int status = -EFAULT;
struct ocfs2_info_uuid oiu;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oiu, req))
- goto bail;
+ return -EFAULT;
memcpy(oiu.iu_uuid_str, osb->uuid_str, OCFS2_TEXT_UUID_LEN + 1);
o2info_set_request_filled(&oiu.iu_req);
if (o2info_to_user(oiu, req))
- goto bail;
-
- status = 0;
-bail:
- if (status)
- o2info_set_request_error(&oiu.iu_req, req);
+ return -EFAULT;
- return status;
+ return 0;
}
static int ocfs2_info_handle_fs_features(struct inode *inode,
struct ocfs2_info_request __user *req)
{
- int status = -EFAULT;
struct ocfs2_info_fs_features oif;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oif, req))
- goto bail;
+ return -EFAULT;
oif.if_compat_features = osb->s_feature_compat;
oif.if_incompat_features = osb->s_feature_incompat;
o2info_set_request_filled(&oif.if_req);
if (o2info_to_user(oif, req))
- goto bail;
+ return -EFAULT;
- status = 0;
-bail:
- if (status)
- o2info_set_request_error(&oif.if_req, req);
-
- return status;
+ return 0;
}
static int ocfs2_info_handle_journal_size(struct inode *inode,
struct ocfs2_info_request __user *req)
{
- int status = -EFAULT;
struct ocfs2_info_journal_size oij;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
if (o2info_from_user(oij, req))
- goto bail;
+ return -EFAULT;
oij.ij_journal_size = i_size_read(osb->journal->j_inode);
o2info_set_request_filled(&oij.ij_req);
if (o2info_to_user(oij, req))
- goto bail;
+ return -EFAULT;
- status = 0;
-bail:
- if (status)
- o2info_set_request_error(&oij.ij_req, req);
-
- return status;
+ return 0;
}
static int ocfs2_info_scan_inode_alloc(struct ocfs2_super *osb,
u32 i;
u64 blkno = -1;
char namebuf[40];
- int status = -EFAULT, type = INODE_ALLOC_SYSTEM_INODE;
+ int status, type = INODE_ALLOC_SYSTEM_INODE;
struct ocfs2_info_freeinode *oifi = NULL;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
struct inode *inode_alloc = NULL;
goto out_err;
}
- if (o2info_from_user(*oifi, req))
- goto bail;
+ if (o2info_from_user(*oifi, req)) {
+ status = -EFAULT;
+ goto out_free;
+ }
oifi->ifi_slotnum = osb->max_slots;
o2info_set_request_filled(&oifi->ifi_req);
- if (o2info_to_user(*oifi, req))
- goto bail;
+ if (o2info_to_user(*oifi, req)) {
+ status = -EFAULT;
+ goto out_free;
+ }
status = 0;
bail:
if (status)
o2info_set_request_error(&oifi->ifi_req, req);
-
+out_free:
kfree(oifi);
out_err:
return status;
{
u64 blkno = -1;
char namebuf[40];
- int status = -EFAULT, type = GLOBAL_BITMAP_SYSTEM_INODE;
+ int status, type = GLOBAL_BITMAP_SYSTEM_INODE;
struct ocfs2_info_freefrag *oiff;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
goto out_err;
}
- if (o2info_from_user(*oiff, req))
- goto bail;
+ if (o2info_from_user(*oiff, req)) {
+ status = -EFAULT;
+ goto out_free;
+ }
/*
* chunksize from userspace should be a power of 2.
*/
if (o2info_to_user(*oiff, req)) {
status = -EFAULT;
- goto bail;
+ goto out_free;
}
status = 0;
bail:
if (status)
o2info_set_request_error(&oiff->iff_req, req);
-
+out_free:
kfree(oiff);
out_err:
return status;
static int ocfs2_info_handle_unknown(struct inode *inode,
struct ocfs2_info_request __user *req)
{
- int status = -EFAULT;
struct ocfs2_info_request oir;
if (o2info_from_user(oir, req))
- goto bail;
+ return -EFAULT;
o2info_clear_request_filled(&oir);
if (o2info_to_user(oir, req))
- goto bail;
+ return -EFAULT;
- status = 0;
-bail:
- if (status)
- o2info_set_request_error(&oir, req);
-
- return status;
+ return 0;
}
/*
* other children
*/
if (child && list_empty(&child->mnt_mounts)) {
+ list_del_init(&child->mnt_child);
hlist_del_init_rcu(&child->mnt_hash);
hlist_add_before_rcu(&child->mnt_hash, &mnt->mnt_hash);
}
return ret;
return __sync_filesystem(sb, 1);
}
-EXPORT_SYMBOL_GPL(sync_filesystem);
+EXPORT_SYMBOL(sync_filesystem);
static void sync_inodes_one_sb(struct super_block *sb, void *arg)
{
udf_free_blocks(sb, NULL, &UDF_I(inode)->i_location, 0, 1);
}
-struct inode *udf_new_inode(struct inode *dir, umode_t mode, int *err)
+struct inode *udf_new_inode(struct inode *dir, umode_t mode)
{
struct super_block *sb = dir->i_sb;
struct udf_sb_info *sbi = UDF_SB(sb);
struct udf_inode_info *iinfo;
struct udf_inode_info *dinfo = UDF_I(dir);
struct logicalVolIntegrityDescImpUse *lvidiu;
+ int err;
inode = new_inode(sb);
- if (!inode) {
- *err = -ENOMEM;
- return NULL;
- }
- *err = -ENOSPC;
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
iinfo = UDF_I(inode);
if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_EXTENDED_FE)) {
}
if (!iinfo->i_ext.i_data) {
iput(inode);
- *err = -ENOMEM;
- return NULL;
+ return ERR_PTR(-ENOMEM);
}
+ err = -ENOSPC;
block = udf_new_block(dir->i_sb, NULL,
dinfo->i_location.partitionReferenceNum,
- start, err);
- if (*err) {
+ start, &err);
+ if (err) {
iput(inode);
- return NULL;
+ return ERR_PTR(err);
}
lvidiu = udf_sb_lvidiu(sb);
if (lvidiu) {
iinfo->i_unique = lvid_get_unique_id(sb);
+ inode->i_generation = iinfo->i_unique;
mutex_lock(&sbi->s_alloc_mutex);
if (S_ISDIR(mode))
le32_add_cpu(&lvidiu->numDirs, 1);
iinfo->i_alloc_type = ICBTAG_FLAG_AD_LONG;
inode->i_mtime = inode->i_atime = inode->i_ctime =
iinfo->i_crtime = current_fs_time(inode->i_sb);
- insert_inode_hash(inode);
+ if (unlikely(insert_inode_locked(inode) < 0)) {
+ make_bad_inode(inode);
+ iput(inode);
+ return ERR_PTR(-EIO);
+ }
mark_inode_dirty(inode);
- *err = 0;
return inode;
}
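udf_new_inode() now reports failure through the returned pointer itself
instead of an int out-parameter, so every caller switches to the standard
ERR_PTR idiom, as the later hunks in this series show:

	struct inode *inode = udf_new_inode(dir, mode);

	if (IS_ERR(inode))
		return PTR_ERR(inode);	/* recover the negative errno */

ERR_PTR()/IS_ERR()/PTR_ERR() encode a small negative errno in the top, never
valid, range of kernel addresses, so no separate error channel is needed.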
static umode_t udf_convert_permissions(struct fileEntry *);
static int udf_update_inode(struct inode *, int);
-static void udf_fill_inode(struct inode *, struct buffer_head *);
static int udf_sync_inode(struct inode *inode);
static int udf_alloc_i_data(struct inode *inode, size_t size);
static sector_t inode_getblk(struct inode *, sector_t, int *, int *);
return 0;
}
-static void __udf_read_inode(struct inode *inode)
+/*
+ * Maximum length of the linked list formed by the ICB hierarchy. The chosen
+ * number is arbitrary - large enough that we hopefully don't limit any real
+ * use of rewritten inodes on write-once media, while avoiding looping for
+ * too long on corrupted media.
+ */
+#define UDF_MAX_ICB_NESTING 1024
+
+static int udf_read_inode(struct inode *inode)
{
struct buffer_head *bh = NULL;
struct fileEntry *fe;
+ struct extendedFileEntry *efe;
uint16_t ident;
struct udf_inode_info *iinfo = UDF_I(inode);
+ struct udf_sb_info *sbi = UDF_SB(inode->i_sb);
+ struct kernel_lb_addr *iloc = &iinfo->i_location;
+ unsigned int link_count;
+ unsigned int indirections = 0;
+ int ret = -EIO;
+
+reread:
+ if (iloc->logicalBlockNum >=
+ sbi->s_partmaps[iloc->partitionReferenceNum].s_partition_len) {
+ udf_debug("block=%d, partition=%d out of range\n",
+ iloc->logicalBlockNum, iloc->partitionReferenceNum);
+ return -EIO;
+ }
/*
* Set defaults, but the inode is still incomplete!
* i_nlink = 1
* i_op = NULL;
*/
- bh = udf_read_ptagged(inode->i_sb, &iinfo->i_location, 0, &ident);
+ bh = udf_read_ptagged(inode->i_sb, iloc, 0, &ident);
if (!bh) {
udf_err(inode->i_sb, "(ino %ld) failed !bh\n", inode->i_ino);
- make_bad_inode(inode);
- return;
+ return -EIO;
}
if (ident != TAG_IDENT_FE && ident != TAG_IDENT_EFE &&
ident != TAG_IDENT_USE) {
udf_err(inode->i_sb, "(ino %ld) failed ident=%d\n",
inode->i_ino, ident);
- brelse(bh);
- make_bad_inode(inode);
- return;
+ goto out;
}
fe = (struct fileEntry *)bh->b_data;
+ efe = (struct extendedFileEntry *)bh->b_data;
if (fe->icbTag.strategyType == cpu_to_le16(4096)) {
struct buffer_head *ibh;
- ibh = udf_read_ptagged(inode->i_sb, &iinfo->i_location, 1,
- &ident);
+ ibh = udf_read_ptagged(inode->i_sb, iloc, 1, &ident);
if (ident == TAG_IDENT_IE && ibh) {
- struct buffer_head *nbh = NULL;
struct kernel_lb_addr loc;
struct indirectEntry *ie;
ie = (struct indirectEntry *)ibh->b_data;
loc = lelb_to_cpu(ie->indirectICB.extLocation);
- if (ie->indirectICB.extLength &&
- (nbh = udf_read_ptagged(inode->i_sb, &loc, 0,
- &ident))) {
- if (ident == TAG_IDENT_FE ||
- ident == TAG_IDENT_EFE) {
- memcpy(&iinfo->i_location,
- &loc,
- sizeof(struct kernel_lb_addr));
- brelse(bh);
- brelse(ibh);
- brelse(nbh);
- __udf_read_inode(inode);
- return;
+ if (ie->indirectICB.extLength) {
+ brelse(ibh);
+ memcpy(&iinfo->i_location, &loc,
+ sizeof(struct kernel_lb_addr));
+ if (++indirections > UDF_MAX_ICB_NESTING) {
+ udf_err(inode->i_sb,
+ "too many ICBs in ICB hierarchy"
+ " (max %d supported)\n",
+ UDF_MAX_ICB_NESTING);
+ goto out;
}
- brelse(nbh);
+ brelse(bh);
+ goto reread;
}
}
brelse(ibh);
} else if (fe->icbTag.strategyType != cpu_to_le16(4)) {
udf_err(inode->i_sb, "unsupported strategy type: %d\n",
le16_to_cpu(fe->icbTag.strategyType));
- brelse(bh);
- make_bad_inode(inode);
- return;
+ goto out;
}
- udf_fill_inode(inode, bh);
-
- brelse(bh);
-}
-
-static void udf_fill_inode(struct inode *inode, struct buffer_head *bh)
-{
- struct fileEntry *fe;
- struct extendedFileEntry *efe;
- struct udf_sb_info *sbi = UDF_SB(inode->i_sb);
- struct udf_inode_info *iinfo = UDF_I(inode);
- unsigned int link_count;
-
- fe = (struct fileEntry *)bh->b_data;
- efe = (struct extendedFileEntry *)bh->b_data;
-
if (fe->icbTag.strategyType == cpu_to_le16(4))
iinfo->i_strat4096 = 0;
else /* if (fe->icbTag.strategyType == cpu_to_le16(4096)) */
if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_EFE)) {
iinfo->i_efe = 1;
iinfo->i_use = 0;
- if (udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
- sizeof(struct extendedFileEntry))) {
- make_bad_inode(inode);
- return;
- }
+ ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
+ sizeof(struct extendedFileEntry));
+ if (ret)
+ goto out;
memcpy(iinfo->i_ext.i_data,
bh->b_data + sizeof(struct extendedFileEntry),
inode->i_sb->s_blocksize -
} else if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_FE)) {
iinfo->i_efe = 0;
iinfo->i_use = 0;
- if (udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
- sizeof(struct fileEntry))) {
- make_bad_inode(inode);
- return;
- }
+ ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
+ sizeof(struct fileEntry));
+ if (ret)
+ goto out;
memcpy(iinfo->i_ext.i_data,
bh->b_data + sizeof(struct fileEntry),
inode->i_sb->s_blocksize - sizeof(struct fileEntry));
iinfo->i_lenAlloc = le32_to_cpu(
((struct unallocSpaceEntry *)bh->b_data)->
lengthAllocDescs);
- if (udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
- sizeof(struct unallocSpaceEntry))) {
- make_bad_inode(inode);
- return;
- }
+ ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize -
+ sizeof(struct unallocSpaceEntry));
+ if (ret)
+ goto out;
memcpy(iinfo->i_ext.i_data,
bh->b_data + sizeof(struct unallocSpaceEntry),
inode->i_sb->s_blocksize -
sizeof(struct unallocSpaceEntry));
- return;
+ return 0;
}
+ ret = -EIO;
read_lock(&sbi->s_cred_lock);
i_uid_write(inode, le32_to_cpu(fe->uid));
if (!uid_valid(inode->i_uid) ||
read_unlock(&sbi->s_cred_lock);
link_count = le16_to_cpu(fe->fileLinkCount);
- if (!link_count)
- link_count = 1;
+ if (!link_count) {
+ ret = -ESTALE;
+ goto out;
+ }
set_nlink(inode, link_count);
inode->i_size = le64_to_cpu(fe->informationLength);
iinfo->i_lenAlloc = le32_to_cpu(efe->lengthAllocDescs);
iinfo->i_checkpoint = le32_to_cpu(efe->checkpoint);
}
+ inode->i_generation = iinfo->i_unique;
switch (fe->icbTag.fileType) {
case ICBTAG_FILE_TYPE_DIRECTORY:
default:
udf_err(inode->i_sb, "(ino %ld) failed unknown file type=%d\n",
inode->i_ino, fe->icbTag.fileType);
- make_bad_inode(inode);
- return;
+ goto out;
}
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
struct deviceSpec *dsea =
le32_to_cpu(dsea->minorDeviceIdent)));
/* Developer ID ??? */
} else
- make_bad_inode(inode);
+ goto out;
}
+ ret = 0;
+out:
+ brelse(bh);
+ return ret;
}
static int udf_alloc_i_data(struct inode *inode, size_t size)
FE_PERM_U_DELETE | FE_PERM_U_CHATTR));
fe->permissions = cpu_to_le32(udfperms);
- if (S_ISDIR(inode->i_mode))
+ if (S_ISDIR(inode->i_mode) && inode->i_nlink > 0)
fe->fileLinkCount = cpu_to_le16(inode->i_nlink - 1);
else
fe->fileLinkCount = cpu_to_le16(inode->i_nlink);
{
unsigned long block = udf_get_lb_pblock(sb, ino, 0);
struct inode *inode = iget_locked(sb, block);
+ int err;
if (!inode)
- return NULL;
-
- if (inode->i_state & I_NEW) {
- memcpy(&UDF_I(inode)->i_location, ino, sizeof(struct kernel_lb_addr));
- __udf_read_inode(inode);
- unlock_new_inode(inode);
- }
+ return ERR_PTR(-ENOMEM);
- if (is_bad_inode(inode))
- goto out_iput;
+ if (!(inode->i_state & I_NEW))
+ return inode;
- if (ino->logicalBlockNum >= UDF_SB(sb)->
- s_partmaps[ino->partitionReferenceNum].s_partition_len) {
- udf_debug("block=%d, partition=%d out of range\n",
- ino->logicalBlockNum, ino->partitionReferenceNum);
- make_bad_inode(inode);
- goto out_iput;
+ memcpy(&UDF_I(inode)->i_location, ino, sizeof(struct kernel_lb_addr));
+ err = udf_read_inode(inode);
+ if (err < 0) {
+ iget_failed(inode);
+ return ERR_PTR(err);
}
+ unlock_new_inode(inode);
return inode;
-
- out_iput:
- iput(inode);
- return NULL;
}
int udf_add_aext(struct inode *inode, struct extent_position *epos,
NULL, 0),
};
inode = udf_iget(dir->i_sb, lb);
- if (!inode) {
- return ERR_PTR(-EACCES);
- }
+ if (IS_ERR(inode))
+ return inode;
} else
#endif /* UDF_RECOVERY */
loc = lelb_to_cpu(cfi.icb.extLocation);
inode = udf_iget(dir->i_sb, &loc);
- if (!inode) {
- return ERR_PTR(-EACCES);
- }
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
}
return d_splice_alias(inode, dentry);
return udf_write_fi(inode, cfi, fi, fibh, NULL, NULL);
}
-static int udf_create(struct inode *dir, struct dentry *dentry, umode_t mode,
- bool excl)
+static int udf_add_nondir(struct dentry *dentry, struct inode *inode)
{
+ struct udf_inode_info *iinfo = UDF_I(inode);
+ struct inode *dir = dentry->d_parent->d_inode;
struct udf_fileident_bh fibh;
- struct inode *inode;
struct fileIdentDesc cfi, *fi;
int err;
- struct udf_inode_info *iinfo;
-
- inode = udf_new_inode(dir, mode, &err);
- if (!inode) {
- return err;
- }
-
- iinfo = UDF_I(inode);
- if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
- inode->i_data.a_ops = &udf_adinicb_aops;
- else
- inode->i_data.a_ops = &udf_aops;
- inode->i_op = &udf_file_inode_operations;
- inode->i_fop = &udf_file_operations;
- mark_inode_dirty(inode);
fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err);
- if (!fi) {
+ if (unlikely(!fi)) {
inode_dec_link_count(inode);
+ unlock_new_inode(inode);
iput(inode);
return err;
}
if (fibh.sbh != fibh.ebh)
brelse(fibh.ebh);
brelse(fibh.sbh);
+ unlock_new_inode(inode);
d_instantiate(dentry, inode);
return 0;
}
-static int udf_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
+static int udf_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ bool excl)
{
- struct inode *inode;
- struct udf_inode_info *iinfo;
- int err;
+ struct inode *inode = udf_new_inode(dir, mode);
- inode = udf_new_inode(dir, mode, &err);
- if (!inode)
- return err;
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
- iinfo = UDF_I(inode);
- if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
+ if (UDF_I(inode)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
inode->i_data.a_ops = &udf_adinicb_aops;
else
inode->i_data.a_ops = &udf_aops;
inode->i_fop = &udf_file_operations;
mark_inode_dirty(inode);
+ return udf_add_nondir(dentry, inode);
+}
+
+static int udf_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
+{
+ struct inode *inode = udf_new_inode(dir, mode);
+
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
+
+ if (UDF_I(inode)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
+ inode->i_data.a_ops = &udf_adinicb_aops;
+ else
+ inode->i_data.a_ops = &udf_aops;
+ inode->i_op = &udf_file_inode_operations;
+ inode->i_fop = &udf_file_operations;
+ mark_inode_dirty(inode);
d_tmpfile(dentry, inode);
+ unlock_new_inode(inode);
return 0;
}
dev_t rdev)
{
struct inode *inode;
- struct udf_fileident_bh fibh;
- struct fileIdentDesc cfi, *fi;
- int err;
- struct udf_inode_info *iinfo;
if (!old_valid_dev(rdev))
return -EINVAL;
- err = -EIO;
- inode = udf_new_inode(dir, mode, &err);
- if (!inode)
- goto out;
+ inode = udf_new_inode(dir, mode);
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
- iinfo = UDF_I(inode);
init_special_inode(inode, mode, rdev);
- fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err);
- if (!fi) {
- inode_dec_link_count(inode);
- iput(inode);
- return err;
- }
- cfi.icb.extLength = cpu_to_le32(inode->i_sb->s_blocksize);
- cfi.icb.extLocation = cpu_to_lelb(iinfo->i_location);
- *(__le32 *)((struct allocDescImpUse *)cfi.icb.impUse)->impUse =
- cpu_to_le32(iinfo->i_unique & 0x00000000FFFFFFFFUL);
- udf_write_fi(dir, &cfi, fi, &fibh, NULL, NULL);
- if (UDF_I(dir)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
- mark_inode_dirty(dir);
- mark_inode_dirty(inode);
-
- if (fibh.sbh != fibh.ebh)
- brelse(fibh.ebh);
- brelse(fibh.sbh);
- d_instantiate(dentry, inode);
- err = 0;
-
-out:
- return err;
+ return udf_add_nondir(dentry, inode);
}
static int udf_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
struct udf_inode_info *dinfo = UDF_I(dir);
struct udf_inode_info *iinfo;
- err = -EIO;
- inode = udf_new_inode(dir, S_IFDIR | mode, &err);
- if (!inode)
- goto out;
+ inode = udf_new_inode(dir, S_IFDIR | mode);
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
iinfo = UDF_I(inode);
inode->i_op = &udf_dir_inode_operations;
fi = udf_add_entry(inode, NULL, &fibh, &cfi, &err);
if (!fi) {
inode_dec_link_count(inode);
+ unlock_new_inode(inode);
iput(inode);
goto out;
}
if (!fi) {
clear_nlink(inode);
mark_inode_dirty(inode);
+ unlock_new_inode(inode);
iput(inode);
goto out;
}
udf_write_fi(dir, &cfi, fi, &fibh, NULL, NULL);
inc_nlink(dir);
mark_inode_dirty(dir);
+ unlock_new_inode(inode);
d_instantiate(dentry, inode);
if (fibh.sbh != fibh.ebh)
brelse(fibh.ebh);
static int udf_symlink(struct inode *dir, struct dentry *dentry,
const char *symname)
{
- struct inode *inode;
+ struct inode *inode = udf_new_inode(dir, S_IFLNK | S_IRWXUGO);
struct pathComponent *pc;
const char *compstart;
- struct udf_fileident_bh fibh;
struct extent_position epos = {};
int eoffset, elen = 0;
- struct fileIdentDesc *fi;
- struct fileIdentDesc cfi;
uint8_t *ea;
int err;
int block;
struct udf_inode_info *iinfo;
struct super_block *sb = dir->i_sb;
- inode = udf_new_inode(dir, S_IFLNK | S_IRWXUGO, &err);
- if (!inode)
- goto out;
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
iinfo = UDF_I(inode);
down_write(&iinfo->i_data_sem);
mark_inode_dirty(inode);
up_write(&iinfo->i_data_sem);
- fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err);
- if (!fi)
- goto out_no_entry;
- cfi.icb.extLength = cpu_to_le32(sb->s_blocksize);
- cfi.icb.extLocation = cpu_to_lelb(iinfo->i_location);
- if (UDF_SB(inode->i_sb)->s_lvid_bh) {
- *(__le32 *)((struct allocDescImpUse *)cfi.icb.impUse)->impUse =
- cpu_to_le32(lvid_get_unique_id(sb));
- }
- udf_write_fi(dir, &cfi, fi, &fibh, NULL, NULL);
- if (UDF_I(dir)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
- mark_inode_dirty(dir);
- if (fibh.sbh != fibh.ebh)
- brelse(fibh.ebh);
- brelse(fibh.sbh);
- d_instantiate(dentry, inode);
- err = 0;
-
+ err = udf_add_nondir(dentry, inode);
out:
kfree(name);
return err;
out_no_entry:
up_write(&iinfo->i_data_sem);
inode_dec_link_count(inode);
+ unlock_new_inode(inode);
iput(inode);
goto out;
}
struct udf_fileident_bh fibh;
if (!udf_find_entry(child->d_inode, &dotdot, &fibh, &cfi))
- goto out_unlock;
+ return ERR_PTR(-EACCES);
if (fibh.sbh != fibh.ebh)
brelse(fibh.ebh);
tloc = lelb_to_cpu(cfi.icb.extLocation);
inode = udf_iget(child->d_inode->i_sb, &tloc);
- if (!inode)
- goto out_unlock;
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
return d_obtain_alias(inode);
-out_unlock:
- return ERR_PTR(-EACCES);
}
loc.partitionReferenceNum = partref;
inode = udf_iget(sb, &loc);
- if (inode == NULL)
- return ERR_PTR(-ENOMEM);
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
if (generation && inode->i_generation != generation) {
iput(inode);
metadata_fe = udf_iget(sb, &addr);
- if (metadata_fe == NULL)
+ if (IS_ERR(metadata_fe)) {
udf_warn(sb, "metadata inode efe not found\n");
- else if (UDF_I(metadata_fe)->i_alloc_type != ICBTAG_FLAG_AD_SHORT) {
+ return metadata_fe;
+ }
+ if (UDF_I(metadata_fe)->i_alloc_type != ICBTAG_FLAG_AD_SHORT) {
udf_warn(sb, "metadata inode efe does not have short allocation descriptors!\n");
iput(metadata_fe);
- metadata_fe = NULL;
+ return ERR_PTR(-EIO);
}
return metadata_fe;
struct udf_part_map *map;
struct udf_meta_data *mdata;
struct kernel_lb_addr addr;
+ struct inode *fe;
map = &sbi->s_partmaps[partition];
mdata = &map->s_type_specific.s_metadata;
udf_debug("Metadata file location: block = %d part = %d\n",
mdata->s_meta_file_loc, map->s_partition_num);
- mdata->s_metadata_fe = udf_find_metadata_inode_efe(sb,
- mdata->s_meta_file_loc, map->s_partition_num);
-
- if (mdata->s_metadata_fe == NULL) {
+ fe = udf_find_metadata_inode_efe(sb, mdata->s_meta_file_loc,
+ map->s_partition_num);
+ if (IS_ERR(fe)) {
/* mirror file entry */
udf_debug("Mirror metadata file location: block = %d part = %d\n",
mdata->s_mirror_file_loc, map->s_partition_num);
- mdata->s_mirror_fe = udf_find_metadata_inode_efe(sb,
- mdata->s_mirror_file_loc, map->s_partition_num);
+ fe = udf_find_metadata_inode_efe(sb, mdata->s_mirror_file_loc,
+ map->s_partition_num);
- if (mdata->s_mirror_fe == NULL) {
+ if (IS_ERR(fe)) {
udf_err(sb, "Both metadata and mirror metadata inode efe can not found\n");
- return -EIO;
+ return PTR_ERR(fe);
}
- }
+ mdata->s_mirror_fe = fe;
+ } else
+ mdata->s_metadata_fe = fe;
+
/*
* bitmap file entry
udf_debug("Bitmap file location: block = %d part = %d\n",
addr.logicalBlockNum, addr.partitionReferenceNum);
- mdata->s_bitmap_fe = udf_iget(sb, &addr);
- if (mdata->s_bitmap_fe == NULL) {
+ fe = udf_iget(sb, &addr);
+ if (IS_ERR(fe)) {
if (sb->s_flags & MS_RDONLY)
udf_warn(sb, "bitmap inode efe not found but it's ok since the disc is mounted read-only\n");
else {
udf_err(sb, "bitmap inode efe not found and attempted read-write mount\n");
- return -EIO;
+ return PTR_ERR(fe);
}
- }
+ } else
+ mdata->s_bitmap_fe = fe;
}
udf_debug("udf_load_metadata_files Ok\n");
phd->unallocSpaceTable.extPosition),
.partitionReferenceNum = p_index,
};
+ struct inode *inode;
- map->s_uspace.s_table = udf_iget(sb, &loc);
- if (!map->s_uspace.s_table) {
+ inode = udf_iget(sb, &loc);
+ if (IS_ERR(inode)) {
udf_debug("cannot load unallocSpaceTable (part %d)\n",
p_index);
- return -EIO;
+ return PTR_ERR(inode);
}
+ map->s_uspace.s_table = inode;
map->s_partition_flags |= UDF_PART_FLAG_UNALLOC_TABLE;
udf_debug("unallocSpaceTable (part %d) @ %ld\n",
p_index, map->s_uspace.s_table->i_ino);
phd->freedSpaceTable.extPosition),
.partitionReferenceNum = p_index,
};
+ struct inode *inode;
- map->s_fspace.s_table = udf_iget(sb, &loc);
- if (!map->s_fspace.s_table) {
+ inode = udf_iget(sb, &loc);
+ if (IS_ERR(inode)) {
udf_debug("cannot load freedSpaceTable (part %d)\n",
p_index);
- return -EIO;
+ return PTR_ERR(inode);
}
-
+ map->s_fspace.s_table = inode;
map->s_partition_flags |= UDF_PART_FLAG_FREED_TABLE;
udf_debug("freedSpaceTable (part %d) @ %ld\n",
p_index, map->s_fspace.s_table->i_ino);
struct udf_part_map *map = &sbi->s_partmaps[p_index];
sector_t vat_block;
struct kernel_lb_addr ino;
+ struct inode *inode;
/*
* VAT file entry is in the last recorded block. Some broken disks have
ino.partitionReferenceNum = type1_index;
for (vat_block = start_block;
vat_block >= map->s_partition_root &&
- vat_block >= start_block - 3 &&
- !sbi->s_vat_inode; vat_block--) {
+ vat_block >= start_block - 3; vat_block--) {
ino.logicalBlockNum = vat_block - map->s_partition_root;
- sbi->s_vat_inode = udf_iget(sb, &ino);
+ inode = udf_iget(sb, &ino);
+ if (!IS_ERR(inode)) {
+ sbi->s_vat_inode = inode;
+ break;
+ }
}
}
/* assign inodes by physical block number */
/* perhaps it's not extensible enough, but for now ... */
inode = udf_iget(sb, &rootdir);
- if (!inode) {
+ if (IS_ERR(inode)) {
udf_err(sb, "Error in udf_iget, block=%d, partition=%d\n",
rootdir.logicalBlockNum, rootdir.partitionReferenceNum);
- ret = -EIO;
+ ret = PTR_ERR(inode);
goto error_out;
}
extern struct buffer_head *udf_expand_dir_adinicb(struct inode *, int *, int *);
extern struct buffer_head *udf_bread(struct inode *, int, int, int *);
extern int udf_setsize(struct inode *, loff_t);
-extern void udf_read_inode(struct inode *);
extern void udf_evict_inode(struct inode *);
extern int udf_write_inode(struct inode *, struct writeback_control *wbc);
extern long udf_block_map(struct inode *, sector_t);
/* ialloc.c */
extern void udf_free_inode(struct inode *);
-extern struct inode *udf_new_inode(struct inode *, umode_t, int *);
+extern struct inode *udf_new_inode(struct inode *, umode_t);
/* truncate.c */
extern void udf_truncate_tail_extent(struct inode *);
invalidate_inode_buffers(inode);
clear_inode(inode);
- if (want_delete) {
- lock_ufs(inode->i_sb);
- ufs_free_inode (inode);
- unlock_ufs(inode->i_sb);
- }
+ if (want_delete)
+ ufs_free_inode(inode);
}
if (l > sb->s_blocksize)
goto out_notlocked;
- lock_ufs(dir->i_sb);
inode = ufs_new_inode(dir, S_IFLNK | S_IRWXUGO);
err = PTR_ERR(inode);
if (IS_ERR(inode))
- goto out;
+ goto out_notlocked;
+ lock_ufs(dir->i_sb);
if (l > UFS_SB(sb)->s_uspi->s_maxsymlinklen) {
/* slow symlink */
inode->i_op = &ufs_symlink_inode_operations;
struct inode * inode;
int err;
- lock_ufs(dir->i_sb);
- inode_inc_link_count(dir);
-
inode = ufs_new_inode(dir, S_IFDIR|mode);
- err = PTR_ERR(inode);
if (IS_ERR(inode))
- goto out_dir;
+ return PTR_ERR(inode);
inode->i_op = &ufs_dir_inode_operations;
inode->i_fop = &ufs_dir_operations;
inode_inc_link_count(inode);
+ lock_ufs(dir->i_sb);
+ inode_inc_link_count(dir);
+
err = ufs_make_empty(inode, dir);
if (err)
goto out_fail;
inode_dec_link_count(inode);
inode_dec_link_count(inode);
iput (inode);
-out_dir:
inode_dec_link_count(dir);
unlock_ufs(dir->i_sb);
goto out;
struct xfs_bmap_free *flist,
int num_exts)
{
- struct xfs_btree_cur *cur;
+ struct xfs_btree_cur *cur = NULL;
struct xfs_bmbt_rec_host *gotp;
struct xfs_bmbt_irec got;
struct xfs_bmbt_irec left;
int error = 0;
int i;
int whichfork = XFS_DATA_FORK;
- int logflags;
+ int logflags = 0;
xfs_filblks_t blockcount = 0;
int total_extents;
}
}
- /* We are going to change core inode */
- logflags = XFS_ILOG_CORE;
if (ifp->if_flags & XFS_IFBROOT) {
cur = xfs_bmbt_init_cursor(mp, tp, ip, whichfork);
cur->bc_private.b.firstblock = *firstblock;
cur->bc_private.b.flist = flist;
cur->bc_private.b.flags = 0;
- } else {
- cur = NULL;
- logflags |= XFS_ILOG_DEXT;
}
/*
blockcount = left.br_blockcount +
got.br_blockcount;
xfs_iext_remove(ip, *current_ext, 1, 0);
+ logflags |= XFS_ILOG_CORE;
if (cur) {
error = xfs_btree_delete(cur, &i);
if (error)
goto del_cursor;
XFS_WANT_CORRUPTED_GOTO(i == 1, del_cursor);
+ } else {
+ logflags |= XFS_ILOG_DEXT;
}
XFS_IFORK_NEXT_SET(ip, whichfork,
XFS_IFORK_NEXTENTS(ip, whichfork) - 1);
got.br_startoff = startoff;
}
+ logflags |= XFS_ILOG_CORE;
if (cur) {
error = xfs_bmbt_update(cur, got.br_startoff,
got.br_startblock,
got.br_state);
if (error)
goto del_cursor;
+ } else {
+ logflags |= XFS_ILOG_DEXT;
}
(*current_ext)++;
xfs_btree_del_cursor(cur,
error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
- xfs_trans_log_inode(tp, ip, logflags);
+ if (logflags)
+ xfs_trans_log_inode(tp, ip, logflags);
return error;
}
return mpage_readpages(mapping, pages, nr_pages, xfs_get_blocks);
}
+/*
+ * This is basically a copy of __set_page_dirty_buffers() with one
+ * small tweak: buffers beyond EOF do not get marked dirty. If we mark them
+ * dirty, we'll never be able to clean them because we don't write buffers
+ * beyond EOF, and that means we can't invalidate pages that span EOF
+ * that have been marked dirty. Further, the dirty state can leak into
+ * the file interior if the file is extended, resulting in all sorts of
+ * bad things happening as the state does not match the underlying data.
+ *
+ * XXX: this really indicates that bufferheads in XFS need to die. Warts like
+ * this only exist because of bufferheads and how the generic code manages them.
+ */
+STATIC int
+xfs_vm_set_page_dirty(
+ struct page *page)
+{
+ struct address_space *mapping = page->mapping;
+ struct inode *inode = mapping->host;
+ loff_t end_offset;
+ loff_t offset;
+ int newly_dirty;
+
+ if (unlikely(!mapping))
+ return !TestSetPageDirty(page);
+
+ end_offset = i_size_read(inode);
+ offset = page_offset(page);
+
+ spin_lock(&mapping->private_lock);
+ if (page_has_buffers(page)) {
+ struct buffer_head *head = page_buffers(page);
+ struct buffer_head *bh = head;
+
+ do {
+ if (offset < end_offset)
+ set_buffer_dirty(bh);
+ bh = bh->b_this_page;
+ offset += 1 << inode->i_blkbits;
+ } while (bh != head);
+ }
+ newly_dirty = !TestSetPageDirty(page);
+ spin_unlock(&mapping->private_lock);
+
+ if (newly_dirty) {
+ /* sigh - __set_page_dirty() is static, so copy it here, too */
+ unsigned long flags;
+
+ spin_lock_irqsave(&mapping->tree_lock, flags);
+ if (page->mapping) { /* Race with truncate? */
+ WARN_ON_ONCE(!PageUptodate(page));
+ account_page_dirtied(page, mapping);
+ radix_tree_tag_set(&mapping->page_tree,
+ page_index(page), PAGECACHE_TAG_DIRTY);
+ }
+ spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ __mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
+ }
+ return newly_dirty;
+}
+
const struct address_space_operations xfs_address_space_operations = {
.readpage = xfs_vm_readpage,
.readpages = xfs_vm_readpages,
.writepage = xfs_vm_writepage,
.writepages = xfs_vm_writepages,
+ .set_page_dirty = xfs_vm_set_page_dirty,
.releasepage = xfs_vm_releasepage,
.invalidatepage = xfs_vm_invalidatepage,
.write_begin = xfs_vm_write_begin,
start_fsb = XFS_B_TO_FSB(mp, offset + len);
shift_fsb = XFS_B_TO_FSB(mp, len);
+ /*
+ * Writeback the entire file and force remove any post-eof blocks. The
+ * writeback prevents changes to the extent list via concurrent
+ * writeback and the eofblocks trim prevents the extent shift algorithm
+ * from running into a post-eof delalloc extent.
+ *
+ * XXX: This is a temporary fix until the extent shift loop below is
+ * converted to use offsets and lookups within the ILOCK rather than
+ * carrying around the index into the extent list for the next
+ * iteration.
+ */
+ error = filemap_write_and_wait(VFS_I(ip)->i_mapping);
+ if (error)
+ return error;
+ if (xfs_can_free_eofblocks(ip, true)) {
+ error = xfs_free_eofblocks(mp, ip, false);
+ if (error)
+ return error;
+ }
+
error = xfs_free_file_space(ip, offset, len);
if (error)
return error;
if (inode->i_mapping->nrpages) {
ret = filemap_write_and_wait_range(
VFS_I(ip)->i_mapping,
- pos, -1);
+ pos, pos + size - 1);
if (ret) {
xfs_rw_iunlock(ip, XFS_IOLOCK_EXCL);
return ret;
}
- truncate_pagecache_range(VFS_I(ip), pos, -1);
+
+ /*
+ * Invalidate whole pages. This can return an error if
+ * we fail to invalidate a page, but this should never
+ * happen on XFS. Warn if it does fail.
+ */
+ ret = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping,
+ pos >> PAGE_CACHE_SHIFT,
+ (pos + size - 1) >> PAGE_CACHE_SHIFT);
+ WARN_ON_ONCE(ret);
+ ret = 0;
}
xfs_rw_ilock_demote(ip, XFS_IOLOCK_EXCL);
}
if (mapping->nrpages) {
ret = filemap_write_and_wait_range(VFS_I(ip)->i_mapping,
- pos, -1);
+ pos, pos + count - 1);
if (ret)
goto out;
- truncate_pagecache_range(VFS_I(ip), pos, -1);
+ /*
+ * Invalidate whole pages. This can return an error if
+ * we fail to invalidate a page, but this should never
+ * happen on XFS. Warn if it does fail.
+ */
+ ret = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping,
+ pos >> PAGE_CACHE_SHIFT,
+ (pos + count - 1) >> PAGE_CACHE_SHIFT);
+ WARN_ON_ONCE(ret);
+ ret = 0;
}
/*
acpi_device_name device_name; /* Driver-determined */
acpi_device_class device_class; /* " */
union acpi_object *str_obj; /* unicode string for _STR method */
- unsigned long sun; /* _SUN */
};
#define acpi_device_bid(d) ((d)->pnp.bus_id)
--- /dev/null
+#ifndef DRM_ATI_PCIGART_H
+#define DRM_ATI_PCIGART_H
+
+#include <drm/drm_legacy.h>
+
+/* location of GART table */
+#define DRM_ATI_GART_MAIN 1
+#define DRM_ATI_GART_FB 2
+
+#define DRM_ATI_GART_PCI 1
+#define DRM_ATI_GART_PCIE 2
+#define DRM_ATI_GART_IGP 3
+
+struct drm_ati_pcigart_info {
+ int gart_table_location;
+ int gart_reg_if;
+ void *addr;
+ dma_addr_t bus_addr;
+ dma_addr_t table_mask;
+ struct drm_dma_handle *table_handle;
+ struct drm_local_map mapping;
+ int table_size;
+};
+
+extern int drm_ati_pcigart_init(struct drm_device *dev,
+ struct drm_ati_pcigart_info * gart_info);
+extern int drm_ati_pcigart_cleanup(struct drm_device *dev,
+ struct drm_ati_pcigart_info * gart_info);
+
+#endif
-/**
- * \file drmP.h
- * Private header for Direct Rendering Manager
- *
- * \author Rickard E. (Rik) Faith <faith@valinux.com>
- * \author Gareth Hughes <gareth@valinux.com>
- */
-
/*
+ * Internal Header for the Direct Rendering Manager
+ *
* Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* Copyright (c) 2009-2010, Code Aurora Forum.
* All rights reserved.
*
+ * Author: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Author: Gareth Hughes <gareth@valinux.com>
+ *
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
#ifndef _DRM_P_H_
#define _DRM_P_H_
-#ifdef __KERNEL__
-#ifdef __alpha__
-/* add include of current.h so that "current" is defined
- * before static inline funcs in wait.h. Doing this so we
- * can build the DRM (part of PI DRI). 4/21/2000 S + B */
-#include <asm/current.h>
-#endif /* __alpha__ */
-#include <linux/kernel.h>
-#include <linux/kref.h>
-#include <linux/miscdevice.h>
+#include <linux/agp_backend.h>
+#include <linux/cdev.h>
+#include <linux/dma-mapping.h>
+#include <linux/file.h>
#include <linux/fs.h>
+#include <linux/highmem.h>
+#include <linux/idr.h>
#include <linux/init.h>
-#include <linux/file.h>
-#include <linux/platform_device.h>
-#include <linux/pci.h>
+#include <linux/io.h>
#include <linux/jiffies.h>
-#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/kref.h>
+#include <linux/miscdevice.h>
#include <linux/mm.h>
-#include <linux/cdev.h>
#include <linux/mutex.h>
-#include <linux/io.h>
-#include <linux/slab.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/poll.h>
#include <linux/ratelimit.h>
-#if defined(__alpha__) || defined(__powerpc__)
-#include <asm/pgtable.h> /* For pte_wrprotect */
-#endif
-#include <asm/mman.h>
-#include <asm/uaccess.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
#include <linux/types.h>
-#include <linux/agp_backend.h>
+#include <linux/vmalloc.h>
#include <linux/workqueue.h>
-#include <linux/poll.h>
+
+#include <asm/mman.h>
#include <asm/pgalloc.h>
-#include <drm/drm.h>
-#include <drm/drm_sarea.h>
-#include <drm/drm_vma_manager.h>
+#include <asm/uaccess.h>
-#include <linux/idr.h>
+#include <uapi/drm/drm.h>
+#include <uapi/drm/drm_mode.h>
-#define __OS_HAS_AGP (defined(CONFIG_AGP) || (defined(CONFIG_AGP_MODULE) && defined(MODULE)))
+#include <drm/drm_agpsupport.h>
+#include <drm/drm_crtc.h>
+#include <drm/drm_global.h>
+#include <drm/drm_hashtab.h>
+#include <drm/drm_mem_util.h>
+#include <drm/drm_mm.h>
+#include <drm/drm_os_linux.h>
+#include <drm/drm_sarea.h>
+#include <drm/drm_vma_manager.h>
struct module;
struct drm_file;
struct drm_device;
+struct drm_agp_head;
+struct drm_local_map;
+struct drm_device_dma;
+struct drm_dma_handle;
+struct drm_gem_object;
struct device_node;
struct videomode;
struct reservation_object;
-
-#include <drm/drm_os_linux.h>
-#include <drm/drm_hashtab.h>
-#include <drm/drm_mm.h>
+struct dma_buf_attachment;
/*
* 4 debug categories are defined:
void drm_ut_debug_printk(const char *function_name,
const char *format, ...);
extern __printf(2, 3)
-int drm_err(const char *func, const char *format, ...);
+void drm_err(const char *func, const char *format, ...);
/***********************************************************************/
/** \name DRM template customization defaults */
#define DRIVER_PRIME 0x4000
#define DRIVER_RENDER 0x8000
-/***********************************************************************/
-/** \name Begin the DRM... */
-/*@{*/
-
-#define DRM_DEBUG_CODE 2 /**< Include debugging code if > 1, then
- also include looping detection. */
-
-#define DRM_MAGIC_HASH_ORDER 4 /**< Size of key hash table. Must be power of 2. */
-
-#define DRM_MAP_HASH_OFFSET 0x10000000
-
-/*@}*/
-
/***********************************************************************/
/** \name Macros to make printk easier */
/*@{*/
* \param fmt printf() like format string.
* \param arg arguments
*/
-#if DRM_DEBUG_CODE
#define DRM_DEBUG(fmt, args...) \
do { \
if (unlikely(drm_debug & DRM_UT_CORE)) \
if (unlikely(drm_debug & DRM_UT_PRIME)) \
drm_ut_debug_printk(__func__, fmt, ##args); \
} while (0)
-#else
-#define DRM_DEBUG_DRIVER(fmt, args...) do { } while (0)
-#define DRM_DEBUG_KMS(fmt, args...) do { } while (0)
-#define DRM_DEBUG_PRIME(fmt, args...) do { } while (0)
-#define DRM_DEBUG(fmt, arg...) do { } while (0)
-#endif
/*@}*/
#define DRM_IF_VERSION(maj, min) (maj << 16 | min)
-/**
- * Test that the hardware lock is held by the caller, returning otherwise.
- *
- * \param dev DRM device.
- * \param filp file pointer of the caller.
- */
-#define LOCK_TEST_WITH_RETURN( dev, _file_priv ) \
-do { \
- if (!_DRM_LOCK_IS_HELD(_file_priv->master->lock.hw_lock->lock) || \
- _file_priv->master->lock.file_priv != _file_priv) { \
- DRM_ERROR( "%s called without lock held, held %d owner %p %p\n",\
- __func__, _DRM_LOCK_IS_HELD(_file_priv->master->lock.hw_lock->lock),\
- _file_priv->master->lock.file_priv, _file_priv); \
- return -EINVAL; \
- } \
-} while (0)
-
/**
* Ioctl function type.
*
#define DRM_IOCTL_DEF_DRV(ioctl, _func, _flags) \
[DRM_IOCTL_NR(DRM_##ioctl)] = {.cmd = DRM_##ioctl, .func = _func, .flags = _flags, .cmd_drv = DRM_IOCTL_##ioctl, .name = #ioctl}
-struct drm_magic_entry {
- struct list_head head;
- struct drm_hash_item hash_item;
- struct drm_file *priv;
-};
-
-struct drm_vma_entry {
- struct list_head head;
- struct vm_area_struct *vma;
- pid_t pid;
-};
-
-/**
- * DMA buffer.
- */
-struct drm_buf {
- int idx; /**< Index into master buflist */
- int total; /**< Buffer size */
- int order; /**< log-base-2(total) */
- int used; /**< Amount of buffer in use (for DMA) */
- unsigned long offset; /**< Byte offset (used internally) */
- void *address; /**< Address of buffer */
- unsigned long bus_address; /**< Bus address of buffer */
- struct drm_buf *next; /**< Kernel-only: used for free list */
- __volatile__ int waiting; /**< On kernel DMA queue */
- __volatile__ int pending; /**< On hardware DMA queue */
- struct drm_file *file_priv; /**< Private of holding file descr */
- int context; /**< Kernel queue for this buffer */
- int while_locked; /**< Dispatch this buffer while locked */
- enum {
- DRM_LIST_NONE = 0,
- DRM_LIST_FREE = 1,
- DRM_LIST_WAIT = 2,
- DRM_LIST_PEND = 3,
- DRM_LIST_PRIO = 4,
- DRM_LIST_RECLAIM = 5
- } list; /**< Which list we're on */
-
- int dev_priv_size; /**< Size of buffer private storage */
- void *dev_private; /**< Per-buffer private storage */
-};
-
-/** bufs is one longer than it has to be */
-struct drm_waitlist {
- int count; /**< Number of possible buffers */
- struct drm_buf **bufs; /**< List of pointers to buffers */
- struct drm_buf **rp; /**< Read pointer */
- struct drm_buf **wp; /**< Write pointer */
- struct drm_buf **end; /**< End pointer */
- spinlock_t read_lock;
- spinlock_t write_lock;
-};
-
-typedef struct drm_dma_handle {
- dma_addr_t busaddr;
- void *vaddr;
- size_t size;
-} drm_dma_handle_t;
-
-/**
- * Buffer entry. There is one of this for each buffer size order.
- */
-struct drm_buf_entry {
- int buf_size; /**< size */
- int buf_count; /**< number of buffers */
- struct drm_buf *buflist; /**< buffer list */
- int seg_count;
- int page_order;
- struct drm_dma_handle **seglist;
-
- int low_mark; /**< Low water mark */
- int high_mark; /**< High water mark */
-};
-
/* Event queued up for userspace to read */
struct drm_pending_event {
struct drm_event *event;
int idle_has_lock;
};
-/**
- * DMA data.
- */
-struct drm_device_dma {
-
- struct drm_buf_entry bufs[DRM_MAX_ORDER + 1]; /**< buffers, grouped by their size order */
- int buf_count; /**< total number of buffers */
- struct drm_buf **buflist; /**< Vector of pointers into drm_device_dma::bufs */
- int seg_count;
- int page_count; /**< number of pages */
- unsigned long *pagelist; /**< page list */
- unsigned long byte_count;
- enum {
- _DRM_DMA_USE_AGP = 0x01,
- _DRM_DMA_USE_SG = 0x02,
- _DRM_DMA_USE_FB = 0x04,
- _DRM_DMA_USE_PCI_RO = 0x08
- } flags;
-
-};
-
-/**
- * AGP memory entry. Stored as a doubly linked list.
- */
-struct drm_agp_mem {
- unsigned long handle; /**< handle */
- struct agp_memory *memory;
- unsigned long bound; /**< address */
- int pages;
- struct list_head head;
-};
-
-/**
- * AGP data.
- *
- * \sa drm_agp_init() and drm_device::agp.
- */
-struct drm_agp_head {
- struct agp_kern_info agp_info; /**< AGP device information */
- struct list_head memory;
- unsigned long mode; /**< AGP mode */
- struct agp_bridge_data *bridge;
- int enabled; /**< whether the AGP bus as been enabled */
- int acquired; /**< whether the AGP device has been acquired */
- unsigned long base;
- int agp_mtrr;
- int cant_use_aperture;
- unsigned long page_mask;
-};
-
-/**
- * Scatter-gather memory.
- */
-struct drm_sg_mem {
- unsigned long handle;
- void *virtual;
- int pages;
- struct page **pagelist;
- dma_addr_t *busaddr;
-};
-
-struct drm_sigdata {
- int context;
- struct drm_hw_lock *lock;
-};
-
-
-/**
- * Kernel side of a mapping
- */
-struct drm_local_map {
- resource_size_t offset; /**< Requested physical address (0 for SAREA)*/
- unsigned long size; /**< Requested physical size (bytes) */
- enum drm_map_type type; /**< Type of memory to map */
- enum drm_map_flags flags; /**< Flags */
- void *handle; /**< User-space: "Handle" to pass to mmap() */
- /**< Kernel-space: kernel-virtual address */
- int mtrr; /**< MTRR slot used */
-};
-
-typedef struct drm_local_map drm_local_map_t;
-
-/**
- * Mappings list
- */
-struct drm_map_list {
- struct list_head head; /**< list head */
- struct drm_hash_item hash;
- struct drm_local_map *map; /**< mapping */
- uint64_t user_token;
- struct drm_master *master;
-};
-
-/* location of GART table */
-#define DRM_ATI_GART_MAIN 1
-#define DRM_ATI_GART_FB 2
-
-#define DRM_ATI_GART_PCI 1
-#define DRM_ATI_GART_PCIE 2
-#define DRM_ATI_GART_IGP 3
-
-struct drm_ati_pcigart_info {
- int gart_table_location;
- int gart_reg_if;
- void *addr;
- dma_addr_t bus_addr;
- dma_addr_t table_mask;
- struct drm_dma_handle *table_handle;
- struct drm_local_map mapping;
- int table_size;
-};
-
-/**
- * This structure defines the drm_mm memory object, which will be used by the
- * DRM for its buffer objects.
- */
-struct drm_gem_object {
- /** Reference count of this object */
- struct kref refcount;
-
- /**
- * handle_count - gem file_priv handle count of this object
- *
- * Each handle also holds a reference. Note that when the handle_count
- * drops to 0 any global names (e.g. the id in the flink namespace) will
- * be cleared.
- *
- * Protected by dev->object_name_lock.
- * */
- unsigned handle_count;
-
- /** Related drm device */
- struct drm_device *dev;
-
- /** File representing the shmem storage */
- struct file *filp;
-
- /* Mapping info for this object */
- struct drm_vma_offset_node vma_node;
-
- /**
- * Size of the object, in bytes. Immutable over the object's
- * lifetime.
- */
- size_t size;
-
- /**
- * Global name for this object, starts at 1. 0 means unnamed.
- * Access is covered by the object_name_lock in the related drm_device
- */
- int name;
-
- /**
- * Memory domains. These monitor which caches contain read/write data
- * related to the object. When transitioning from one set of domains
- * to another, the driver is called to ensure that caches are suitably
- * flushed and invalidated
- */
- uint32_t read_domains;
- uint32_t write_domain;
-
- /**
- * While validating an exec operation, the
- * new read/write domain values are computed here.
- * They will be transferred to the above values
- * at the point that any cache flushing occurs
- */
- uint32_t pending_read_domains;
- uint32_t pending_write_domain;
-
- /**
- * dma_buf - dma buf associated with this GEM object
- *
- * Pointer to the dma-buf associated with this gem object (either
- * through importing or exporting). We break the resulting reference
- * loop when the last gem handle for this object is released.
- *
- * Protected by obj->object_name_lock
- */
- struct dma_buf *dma_buf;
-
- /**
- * import_attach - dma buf attachment backing this object
- *
- * Any foreign dma_buf imported as a gem object has this set to the
- * attachment point for the device. This is invariant over the lifetime
- * of a gem object.
- *
- * The driver's ->gem_free_object callback is responsible for cleaning
- * up the dma_buf attachment and references acquired at import time.
- *
- * Note that the drm gem/prime core does not depend upon drivers setting
- * this field any more. So for drivers where this doesn't make sense
- * (e.g. virtual devices or a displaylink behind an usb bus) they can
- * simply leave it as NULL.
- */
- struct dma_buf_attachment *import_attach;
-};
-
-#include <drm/drm_crtc.h>
-
/**
* struct drm_master - drm master structure
*
* @minor: Link back to minor char device we are master for. Immutable.
* @unique: Unique identifier: e.g. busid. Protected by drm_global_mutex.
* @unique_len: Length of unique field. Protected by drm_global_mutex.
- * @unique_size: Amount allocated. Protected by drm_global_mutex.
* @magiclist: Hash of used authentication tokens. Protected by struct_mutex.
* @magicfree: List of used authentication tokens. Protected by struct_mutex.
* @lock: DRI lock information.
struct drm_minor *minor;
char *unique;
int unique_len;
- int unique_size;
struct drm_open_hash magiclist;
struct list_head magicfree;
struct drm_lock_data lock;
/* Flags and return codes for get_vblank_timestamp() driver function. */
#define DRM_CALLED_FROM_VBLIRQ 1
#define DRM_VBLANKTIME_SCANOUTPOS_METHOD (1 << 0)
-#define DRM_VBLANKTIME_INVBL (1 << 1)
+#define DRM_VBLANKTIME_IN_VBLANK (1 << 1)
/* get_scanout_position() return flags */
#define DRM_SCANOUTPOS_VALID (1 << 0)
-#define DRM_SCANOUTPOS_INVBL (1 << 1)
+#define DRM_SCANOUTPOS_IN_VBLANK (1 << 1)
#define DRM_SCANOUTPOS_ACCURATE (1 << 2)
-struct drm_bus {
- int (*set_busid)(struct drm_device *dev, struct drm_master *master);
-};
-
/**
* DRM driver structure. This structure represent the common code for
* a family of cards. There will one drm_device for each card present
int (*dma_ioctl) (struct drm_device *dev, void *data, struct drm_file *file_priv);
int (*dma_quiescent) (struct drm_device *);
int (*context_dtor) (struct drm_device *dev, int context);
+ int (*set_busid)(struct drm_device *dev, struct drm_master *master);
/**
* get_vblank_counter - get raw hardware vblank counter
struct drm_gem_object *obj);
struct sg_table *(*gem_prime_get_sg_table)(struct drm_gem_object *obj);
struct drm_gem_object *(*gem_prime_import_sg_table)(
- struct drm_device *dev, size_t size,
+ struct drm_device *dev,
+ struct dma_buf_attachment *attach,
struct sg_table *sgt);
void *(*gem_prime_vmap)(struct drm_gem_object *obj);
void (*gem_prime_vunmap)(struct drm_gem_object *obj, void *vaddr);
const struct drm_ioctl_desc *ioctls;
int num_ioctls;
const struct file_operations *fops;
- struct drm_bus *bus;
/* List of devices hanging off this driver with stealth attach. */
struct list_head legacy_dev_list;
*/
bool vblank_disable_allowed;
+ /*
+ * If true, vblank interrupt will be disabled immediately when the
+ * refcount drops to zero, as opposed to via the vblank disable
+ * timer.
+	 * This can be set to true if the hardware has a working vblank
+ * counter and the driver uses drm_vblank_on() and drm_vblank_off()
+ * appropriately.
+ */
+ bool vblank_disable_immediate;
+
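/*
 * Sketch (editorial, not part of this patch): a driver with a reliable
 * hardware vblank counter could opt in to immediate disabling from its
 * load callback, assuming vblank support is set up with drm_vblank_init()
 * and modesets are bracketed with drm_vblank_on()/drm_vblank_off().
 */
static int example_load(struct drm_device *dev, unsigned long flags)
{
	int ret;

	ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
	if (ret)
		return ret;

	/* The hardware counter is trustworthy: skip the disable timer. */
	dev->vblank_disable_immediate = true;
	return 0;
}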
/* array of size num_crtcs */
struct drm_vblank_crtc *vblank;
#endif
	struct platform_device *platformdev; /**< Platform device structure */
- struct usb_device *usbdev;
struct drm_sg_mem *sg; /**< Scatter gather memory */
unsigned int num_crtcs; /**< Number of CRTCs on this device */
- struct drm_sigdata sigdata; /**< For block_all_signals */
sigset_t sigmask;
+ struct {
+ int context;
+ struct drm_hw_lock *lock;
+ } sigdata;
+
struct drm_local_map *agp_buffer_map;
unsigned int agp_buffer_token;
unsigned int cmd, unsigned long arg);
extern long drm_compat_ioctl(struct file *filp,
unsigned int cmd, unsigned long arg);
-extern int drm_lastclose(struct drm_device *dev);
extern bool drm_ioctl_flags(unsigned int nr, unsigned int *flags);
/* Device support (drm_fops.h) */
-extern struct mutex drm_global_mutex;
extern int drm_open(struct inode *inode, struct file *filp);
extern ssize_t drm_read(struct file *filp, char __user *buffer,
size_t count, loff_t *offset);
extern int drm_release(struct inode *inode, struct file *filp);
/* Mapping support (drm_vm.h) */
-extern int drm_mmap(struct file *filp, struct vm_area_struct *vma);
-extern int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma);
-extern void drm_vm_open_locked(struct drm_device *dev, struct vm_area_struct *vma);
-extern void drm_vm_close_locked(struct drm_device *dev, struct vm_area_struct *vma);
extern unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait);
- /* Memory management support (drm_memory.h) */
-#include <drm/drm_memory.h>
-
-
- /* Misc. IOCTL support (drm_ioctl.h) */
-extern int drm_irq_by_busid(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_getunique(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_setunique(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_getmap(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_getclient(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_getstats(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_getcap(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_setclientcap(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_setversion(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_noop(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-
- /* Authentication IOCTL support (drm_auth.h) */
-extern int drm_getmagic(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_authmagic(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_remove_magic(struct drm_master *master, drm_magic_t magic);
+/* Misc. IOCTL support (drm_ioctl.c) */
+int drm_noop(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
/* Cache management (drm_cache.c) */
void drm_clflush_pages(struct page *pages[], unsigned long num_pages);
void drm_clflush_sg(struct sg_table *st);
void drm_clflush_virt_range(void *addr, unsigned long length);
- /* Locking IOCTL support (drm_lock.h) */
-extern int drm_lock(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_unlock(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_lock_free(struct drm_lock_data *lock_data, unsigned int context);
-extern void drm_idlelock_take(struct drm_lock_data *lock_data);
-extern void drm_idlelock_release(struct drm_lock_data *lock_data);
-
/*
* These are exported to drivers so that they can implement fencing using
 * DMA quiescent + idle. DMA quiescent usually requires the hardware lock.
*/
-extern int drm_i_have_hw_lock(struct drm_device *dev, struct drm_file *file_priv);
-
- /* Buffer management support (drm_bufs.h) */
-extern int drm_addbufs_agp(struct drm_device *dev, struct drm_buf_desc * request);
-extern int drm_addbufs_pci(struct drm_device *dev, struct drm_buf_desc * request);
-extern int drm_addmap(struct drm_device *dev, resource_size_t offset,
- unsigned int size, enum drm_map_type type,
- enum drm_map_flags flags, struct drm_local_map **map_ptr);
-extern int drm_addmap_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_rmmap(struct drm_device *dev, struct drm_local_map *map);
-extern int drm_rmmap_locked(struct drm_device *dev, struct drm_local_map *map);
-extern int drm_rmmap_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_addbufs(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_infobufs(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_markbufs(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_freebufs(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_mapbufs(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_dma_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-
- /* DMA support (drm_dma.h) */
-extern int drm_legacy_dma_setup(struct drm_device *dev);
-extern void drm_legacy_dma_takedown(struct drm_device *dev);
-extern void drm_free_buffer(struct drm_device *dev, struct drm_buf * buf);
-extern void drm_core_reclaim_buffers(struct drm_device *dev,
- struct drm_file *filp);
-
/* IRQ support (drm_irq.h) */
-extern int drm_control(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
extern int drm_irq_install(struct drm_device *dev, int irq);
extern int drm_irq_uninstall(struct drm_device *dev);
extern void drm_vblank_put(struct drm_device *dev, int crtc);
extern int drm_crtc_vblank_get(struct drm_crtc *crtc);
extern void drm_crtc_vblank_put(struct drm_crtc *crtc);
+extern void drm_wait_one_vblank(struct drm_device *dev, int crtc);
+extern void drm_crtc_wait_one_vblank(struct drm_crtc *crtc);
extern void drm_vblank_off(struct drm_device *dev, int crtc);
extern void drm_vblank_on(struct drm_device *dev, int crtc);
extern void drm_crtc_vblank_off(struct drm_crtc *crtc);
extern void drm_crtc_vblank_on(struct drm_crtc *crtc);
extern void drm_vblank_cleanup(struct drm_device *dev);
-extern u32 drm_get_last_vbltimestamp(struct drm_device *dev, int crtc,
- struct timeval *tvblank, unsigned flags);
extern int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev,
int crtc, int *max_error,
struct timeval *vblank_time,
extern void drm_calc_timestamping_constants(struct drm_crtc *crtc,
const struct drm_display_mode *mode);
+/**
+ * drm_crtc_vblank_waitqueue - get vblank waitqueue for the CRTC
+ * @crtc: which CRTC's vblank waitqueue to retrieve
+ *
+ * This function returns a pointer to the vblank waitqueue for the CRTC.
+ * Drivers can use this to implement vblank waits using wait_event() & co.
+ */
+static inline wait_queue_head_t *drm_crtc_vblank_waitqueue(struct drm_crtc *crtc)
+{
+ return &crtc->dev->vblank[drm_crtc_index(crtc)].queue;
+}
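/*
 * Sketch (editorial, not part of this patch): pairing the waitqueue with
 * drm_crtc_vblank_get()/drm_crtc_vblank_put() to sleep until a
 * driver-managed predicate flips; "done" is a hypothetical flag updated
 * from the driver's vblank handler.
 */
static int example_wait(struct drm_crtc *crtc, bool *done)
{
	int ret = drm_crtc_vblank_get(crtc);

	if (ret)
		return ret;

	wait_event(*drm_crtc_vblank_waitqueue(crtc), *done);
	drm_crtc_vblank_put(crtc);
	return 0;
}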
/* Modesetting support */
extern void drm_vblank_pre_modeset(struct drm_device *dev, int crtc);
extern void drm_vblank_post_modeset(struct drm_device *dev, int crtc);
-extern int drm_modeset_ctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-
- /* AGP/GART support (drm_agpsupport.h) */
-
-#include <drm/drm_agpsupport.h>
/* Stub support (drm_stub.h) */
-extern int drm_setmaster_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_dropmaster_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-struct drm_master *drm_master_create(struct drm_minor *minor);
extern struct drm_master *drm_master_get(struct drm_master *master);
extern void drm_master_put(struct drm_master **master);
extern void drm_unplug_dev(struct drm_device *dev);
extern unsigned int drm_debug;
-extern unsigned int drm_vblank_offdelay;
-extern unsigned int drm_timestamp_precision;
-extern unsigned int drm_timestamp_monotonic;
-
-extern struct class *drm_class;
-
-extern struct drm_local_map *drm_getsarea(struct drm_device *dev);
-
/* Debugfs support */
#if defined(CONFIG_DEBUG_FS)
-extern int drm_debugfs_init(struct drm_minor *minor, int minor_id,
- struct dentry *root);
extern int drm_debugfs_create_files(const struct drm_info_list *files,
int count, struct dentry *root,
struct drm_minor *minor);
extern int drm_debugfs_remove_files(const struct drm_info_list *files,
int count, struct drm_minor *minor);
-extern int drm_debugfs_cleanup(struct drm_minor *minor);
-extern int drm_debugfs_connector_add(struct drm_connector *connector);
-extern void drm_debugfs_connector_remove(struct drm_connector *connector);
#else
-static inline int drm_debugfs_init(struct drm_minor *minor, int minor_id,
- struct dentry *root)
-{
- return 0;
-}
-
static inline int drm_debugfs_create_files(const struct drm_info_list *files,
int count, struct dentry *root,
struct drm_minor *minor)
{
return 0;
}
-
-static inline int drm_debugfs_cleanup(struct drm_minor *minor)
-{
- return 0;
-}
-
-static inline int drm_debugfs_connector_add(struct drm_connector *connector)
-{
- return 0;
-}
-static inline void drm_debugfs_connector_remove(struct drm_connector *connector)
-{
-}
-
#endif
- /* Info file support */
-extern int drm_name_info(struct seq_file *m, void *data);
-extern int drm_vm_info(struct seq_file *m, void *data);
-extern int drm_bufs_info(struct seq_file *m, void *data);
-extern int drm_vblank_info(struct seq_file *m, void *data);
-extern int drm_clients_info(struct seq_file *m, void* data);
-extern int drm_gem_name_info(struct seq_file *m, void *data);
-
-
extern struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
struct drm_gem_object *obj, int flags);
extern int drm_gem_prime_handle_to_fd(struct drm_device *dev,
struct drm_file *file_priv, int prime_fd, uint32_t *handle);
extern void drm_gem_dmabuf_release(struct dma_buf *dma_buf);
-extern int drm_prime_handle_to_fd_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-
extern int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
dma_addr_t *addrs, int max_pages);
extern struct sg_table *drm_prime_pages_to_sg(struct page **pages, int nr_pages);
extern void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg);
-int drm_gem_dumb_destroy(struct drm_file *file,
- struct drm_device *dev,
- uint32_t handle);
-
-void drm_prime_init_file_private(struct drm_prime_file_private *prime_fpriv);
-void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv);
-void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf);
-
-#if DRM_DEBUG_CODE
-extern int drm_vma_info(struct seq_file *m, void *data);
-#endif
- /* Scatter Gather Support (drm_scatter.h) */
-extern void drm_legacy_sg_cleanup(struct drm_device *dev);
-extern int drm_sg_alloc(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-extern int drm_sg_free(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-
- /* ATI PCIGART support (ati_pcigart.h) */
-extern int drm_ati_pcigart_init(struct drm_device *dev,
- struct drm_ati_pcigart_info * gart_info);
-extern int drm_ati_pcigart_cleanup(struct drm_device *dev,
- struct drm_ati_pcigart_info * gart_info);
-
-extern drm_dma_handle_t *drm_pci_alloc(struct drm_device *dev, size_t size,
- size_t align);
-extern void __drm_pci_free(struct drm_device *dev, drm_dma_handle_t * dmah);
-extern void drm_pci_free(struct drm_device *dev, drm_dma_handle_t * dmah);
-extern int drm_pci_set_unique(struct drm_device *dev,
- struct drm_master *master,
- struct drm_unique *u);
+extern struct drm_dma_handle *drm_pci_alloc(struct drm_device *dev, size_t size,
+ size_t align);
+extern void drm_pci_free(struct drm_device *dev, struct drm_dma_handle * dmah);
/* sysfs support (drm_sysfs.c) */
-struct drm_sysfs_class;
-extern struct class *drm_sysfs_create(struct module *owner, char *name);
-extern void drm_sysfs_destroy(void);
-extern struct device *drm_sysfs_minor_alloc(struct drm_minor *minor);
extern void drm_sysfs_hotplug_event(struct drm_device *dev);
-extern int drm_sysfs_connector_add(struct drm_connector *connector);
-extern void drm_sysfs_connector_remove(struct drm_connector *connector);
-
-/* Graphics Execution Manager library functions (drm_gem.c) */
-int drm_gem_init(struct drm_device *dev);
-void drm_gem_destroy(struct drm_device *dev);
-void drm_gem_object_release(struct drm_gem_object *obj);
-void drm_gem_object_free(struct kref *kref);
-int drm_gem_object_init(struct drm_device *dev,
- struct drm_gem_object *obj, size_t size);
-void drm_gem_private_object_init(struct drm_device *dev,
- struct drm_gem_object *obj, size_t size);
-void drm_gem_vm_open(struct vm_area_struct *vma);
-void drm_gem_vm_close(struct vm_area_struct *vma);
-int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
- struct vm_area_struct *vma);
-int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
-
-#include <drm/drm_global.h>
-
-static inline void
-drm_gem_object_reference(struct drm_gem_object *obj)
-{
- kref_get(&obj->refcount);
-}
-
-static inline void
-drm_gem_object_unreference(struct drm_gem_object *obj)
-{
- if (obj != NULL)
- kref_put(&obj->refcount, drm_gem_object_free);
-}
-
-static inline void
-drm_gem_object_unreference_unlocked(struct drm_gem_object *obj)
-{
- if (obj && !atomic_add_unless(&obj->refcount.refcount, -1, 1)) {
- struct drm_device *dev = obj->dev;
-
- mutex_lock(&dev->struct_mutex);
- if (likely(atomic_dec_and_test(&obj->refcount.refcount)))
- drm_gem_object_free(&obj->refcount);
- mutex_unlock(&dev->struct_mutex);
- }
-}
-
-int drm_gem_handle_create_tail(struct drm_file *file_priv,
- struct drm_gem_object *obj,
- u32 *handlep);
-int drm_gem_handle_create(struct drm_file *file_priv,
- struct drm_gem_object *obj,
- u32 *handlep);
-int drm_gem_handle_delete(struct drm_file *filp, u32 handle);
-void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
-int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
-int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
-
-struct page **drm_gem_get_pages(struct drm_gem_object *obj);
-void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
- bool dirty, bool accessed);
-
-struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
- struct drm_file *filp,
- u32 handle);
-int drm_gem_close_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-int drm_gem_flink_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-int drm_gem_open_ioctl(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
-void drm_gem_open(struct drm_device *dev, struct drm_file *file_private);
-void drm_gem_release(struct drm_device *dev, struct drm_file *file_private);
-
-extern void drm_core_ioremap(struct drm_local_map *map, struct drm_device *dev);
-extern void drm_core_ioremap_wc(struct drm_local_map *map, struct drm_device *dev);
-extern void drm_core_ioremapfree(struct drm_local_map *map, struct drm_device *dev);
-
-static __inline__ struct drm_local_map *drm_core_findmap(struct drm_device *dev,
- unsigned int token)
-{
- struct drm_map_list *_entry;
- list_for_each_entry(_entry, &dev->maplist, head)
- if (_entry->user_token == token)
- return _entry->map;
- return NULL;
-}
-
-static __inline__ void drm_core_dropmap(struct drm_local_map *map)
-{
-}
-
-#include <drm/drm_mem_util.h>
-
struct drm_device *drm_dev_alloc(struct drm_driver *driver,
struct device *parent);
void drm_dev_ref(struct drm_device *dev);
extern int drm_get_pci_dev(struct pci_dev *pdev,
const struct pci_device_id *ent,
struct drm_driver *driver);
+extern int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master);
#define DRM_PCIE_SPEED_25 1
#define DRM_PCIE_SPEED_50 2
/* platform section */
extern int drm_platform_init(struct drm_driver *driver, struct platform_device *platform_device);
+extern int drm_platform_set_busid(struct drm_device *d, struct drm_master *m);
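/*
 * Sketch (editorial, not part of this patch): with struct drm_bus gone,
 * a PCI driver wires the bus-specific helper straight into its
 * drm_driver; platform drivers use drm_platform_set_busid() the same way.
 */
static struct drm_driver example_driver = {
	.set_busid = drm_pci_set_busid,
	/* remaining callbacks elided */
};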
/* returns true if currently okay to sleep */
static __inline__ bool drm_can_sleep(void)
return true;
}
-#endif /* __KERNEL__ */
#endif
#ifndef _DRM_AGPSUPPORT_H_
#define _DRM_AGPSUPPORT_H_
+#include <linux/agp_backend.h>
#include <linux/kernel.h>
+#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/types.h>
-#include <linux/agp_backend.h>
-#include <drm/drmP.h>
+#include <uapi/drm/drm.h>
+
+struct drm_device;
+struct drm_file;
+
+#define __OS_HAS_AGP (defined(CONFIG_AGP) || (defined(CONFIG_AGP_MODULE) && \
+ defined(MODULE)))
+
+struct drm_agp_head {
+ struct agp_kern_info agp_info;
+ struct list_head memory;
+ unsigned long mode;
+ struct agp_bridge_data *bridge;
+ int enabled;
+ int acquired;
+ unsigned long base;
+ int agp_mtrr;
+ int cant_use_aperture;
+ unsigned long page_mask;
+};
#if __OS_HAS_AGP
int drm_agp_bind(struct drm_device *dev, struct drm_agp_binding *request);
int drm_agp_bind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
+
#else /* __OS_HAS_AGP */
static inline void drm_free_agp(struct agp_memory * handle, int pages)
{
return -ENODEV;
}
+
#endif /* __OS_HAS_AGP */
#endif /* _DRM_AGPSUPPORT_H_ */
#include <linux/idr.h>
#include <linux/fb.h>
#include <linux/hdmi.h>
-#include <drm/drm_mode.h>
-#include <drm/drm_fourcc.h>
+#include <uapi/drm/drm_mode.h>
+#include <uapi/drm/drm_fourcc.h>
#include <drm/drm_modeset_lock.h>
struct drm_device;
struct list_head enum_blob_list;
};
-void drm_modeset_lock_all(struct drm_device *dev);
-void drm_modeset_unlock_all(struct drm_device *dev);
-void drm_warn_on_modeset_not_all_locked(struct drm_device *dev);
-
struct drm_crtc;
struct drm_connector;
struct drm_encoder;
int cursor_x;
int cursor_y;
- /* Temporary tracking of the old fb while a modeset is ongoing. Used
- * by drm_mode_set_config_internal to implement correct refcounting. */
- struct drm_framebuffer *old_fb;
-
bool enabled;
/* Requested mode from modesetting. */
void *helper_private;
struct drm_object_properties properties;
+
+ /*
+ * For legacy crtc ioctls so that atomic drivers can get at the locking
+ * acquire context.
+ */
+ struct drm_modeset_acquire_ctx *acquire_ctx;
};
void *helper_private;
/* forced on connector */
+ struct drm_cmdline_mode cmdline_mode;
enum drm_connector_force force;
bool override_edid;
uint32_t encoder_ids[DRM_CONNECTOR_MAX_ENCODER];
uint32_t src_w, uint32_t src_h);
int (*disable_plane)(struct drm_plane *plane);
void (*destroy)(struct drm_plane *plane);
+ void (*reset)(struct drm_plane *plane);
int (*set_property)(struct drm_plane *plane,
struct drm_property *property, uint64_t val);
struct drm_crtc *crtc;
struct drm_framebuffer *fb;
+ /* Temporary tracking of the old fb while a modeset is ongoing. Used
+ * by drm_mode_set_config_internal to implement correct refcounting. */
+ struct drm_framebuffer *old_fb;
+
const struct drm_plane_funcs *funcs;
struct drm_object_properties properties;
struct drm_property *dpms_property;
struct drm_property *path_property;
struct drm_property *plane_type_property;
+ struct drm_property *rotation_property;
/* DVI-I properties */
struct drm_property *dvi_i_subconnector_property;
void drm_connector_unregister(struct drm_connector *connector);
extern void drm_connector_cleanup(struct drm_connector *connector);
+extern unsigned int drm_connector_index(struct drm_connector *connector);
/* helper to unplug all connectors from sysfs for device */
extern void drm_connector_unplug_all(struct drm_device *dev);
const uint32_t *formats, uint32_t format_count,
bool is_primary);
extern void drm_plane_cleanup(struct drm_plane *plane);
+extern unsigned int drm_plane_index(struct drm_plane *plane);
extern void drm_plane_force_disable(struct drm_plane *plane);
extern int drm_crtc_check_viewport(const struct drm_crtc *crtc,
int x, int y,
struct drm_file *file_priv);
extern int drm_mode_obj_set_property_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
+extern int drm_mode_plane_set_obj_prop(struct drm_plane *plane,
+ struct drm_property *property,
+ uint64_t value);
extern void drm_fb_get_bpp_depth(uint32_t format, unsigned int *depth,
int *bpp);
# define DP_TRAIN_VOLTAGE_SWING_MASK 0x3
# define DP_TRAIN_VOLTAGE_SWING_SHIFT 0
# define DP_TRAIN_MAX_SWING_REACHED (1 << 2)
-# define DP_TRAIN_VOLTAGE_SWING_400 (0 << 0)
-# define DP_TRAIN_VOLTAGE_SWING_600 (1 << 0)
-# define DP_TRAIN_VOLTAGE_SWING_800 (2 << 0)
-# define DP_TRAIN_VOLTAGE_SWING_1200 (3 << 0)
+# define DP_TRAIN_VOLTAGE_SWING_LEVEL_0 (0 << 0)
+# define DP_TRAIN_VOLTAGE_SWING_LEVEL_1 (1 << 0)
+# define DP_TRAIN_VOLTAGE_SWING_LEVEL_2 (2 << 0)
+# define DP_TRAIN_VOLTAGE_SWING_LEVEL_3 (3 << 0)
# define DP_TRAIN_PRE_EMPHASIS_MASK (3 << 3)
-# define DP_TRAIN_PRE_EMPHASIS_0 (0 << 3)
-# define DP_TRAIN_PRE_EMPHASIS_3_5 (1 << 3)
-# define DP_TRAIN_PRE_EMPHASIS_6 (2 << 3)
-# define DP_TRAIN_PRE_EMPHASIS_9_5 (3 << 3)
+# define DP_TRAIN_PRE_EMPH_LEVEL_0 (0 << 3)
+# define DP_TRAIN_PRE_EMPH_LEVEL_1 (1 << 3)
+# define DP_TRAIN_PRE_EMPH_LEVEL_2 (2 << 3)
+# define DP_TRAIN_PRE_EMPH_LEVEL_3 (3 << 3)
# define DP_TRAIN_PRE_EMPHASIS_SHIFT 3
# define DP_TRAIN_MAX_PRE_EMPHASIS_REACHED (1 << 5)
struct drm_fb_helper_connector {
struct drm_connector *connector;
- struct drm_cmdline_mode cmdline_mode;
};
struct drm_fb_helper {
--- /dev/null
+#ifndef __DRM_GEM_H__
+#define __DRM_GEM_H__
+
+/*
+ * GEM (Graphics Execution Manager) Driver Interfaces
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * Copyright (c) 2009-2010, Code Aurora Forum.
+ * All rights reserved.
+ * Copyright © 2014 Intel Corporation
+ * Daniel Vetter <daniel.vetter@ffwll.ch>
+ *
+ * Author: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Author: Gareth Hughes <gareth@valinux.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/**
+ * This structure defines the drm_mm memory object, which will be used by the
+ * DRM for its buffer objects.
+ */
+struct drm_gem_object {
+ /** Reference count of this object */
+ struct kref refcount;
+
+ /**
+ * handle_count - gem file_priv handle count of this object
+ *
+ * Each handle also holds a reference. Note that when the handle_count
+ * drops to 0 any global names (e.g. the id in the flink namespace) will
+ * be cleared.
+ *
+ * Protected by dev->object_name_lock.
+	 */
+ unsigned handle_count;
+
+ /** Related drm device */
+ struct drm_device *dev;
+
+ /** File representing the shmem storage */
+ struct file *filp;
+
+ /* Mapping info for this object */
+ struct drm_vma_offset_node vma_node;
+
+ /**
+ * Size of the object, in bytes. Immutable over the object's
+ * lifetime.
+ */
+ size_t size;
+
+ /**
+ * Global name for this object, starts at 1. 0 means unnamed.
+ * Access is covered by the object_name_lock in the related drm_device
+ */
+ int name;
+
+ /**
+ * Memory domains. These monitor which caches contain read/write data
+ * related to the object. When transitioning from one set of domains
+ * to another, the driver is called to ensure that caches are suitably
+ * flushed and invalidated
+ */
+ uint32_t read_domains;
+ uint32_t write_domain;
+
+ /**
+ * While validating an exec operation, the
+ * new read/write domain values are computed here.
+ * They will be transferred to the above values
+ * at the point that any cache flushing occurs
+ */
+ uint32_t pending_read_domains;
+ uint32_t pending_write_domain;
+
+ /**
+ * dma_buf - dma buf associated with this GEM object
+ *
+ * Pointer to the dma-buf associated with this gem object (either
+ * through importing or exporting). We break the resulting reference
+ * loop when the last gem handle for this object is released.
+ *
+ * Protected by obj->object_name_lock
+ */
+ struct dma_buf *dma_buf;
+
+ /**
+ * import_attach - dma buf attachment backing this object
+ *
+ * Any foreign dma_buf imported as a gem object has this set to the
+ * attachment point for the device. This is invariant over the lifetime
+ * of a gem object.
+ *
+ * The driver's ->gem_free_object callback is responsible for cleaning
+ * up the dma_buf attachment and references acquired at import time.
+ *
+ * Note that the drm gem/prime core does not depend upon drivers setting
+ * this field any more. So for drivers where this doesn't make sense
+	 * (e.g. virtual devices or a DisplayLink device behind a USB bus) they can
+ * simply leave it as NULL.
+ */
+ struct dma_buf_attachment *import_attach;
+};
+
+void drm_gem_object_release(struct drm_gem_object *obj);
+void drm_gem_object_free(struct kref *kref);
+int drm_gem_object_init(struct drm_device *dev,
+ struct drm_gem_object *obj, size_t size);
+void drm_gem_private_object_init(struct drm_device *dev,
+ struct drm_gem_object *obj, size_t size);
+void drm_gem_vm_open(struct vm_area_struct *vma);
+void drm_gem_vm_close(struct vm_area_struct *vma);
+int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
+ struct vm_area_struct *vma);
+int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
+
+static inline void
+drm_gem_object_reference(struct drm_gem_object *obj)
+{
+ kref_get(&obj->refcount);
+}
+
+static inline void
+drm_gem_object_unreference(struct drm_gem_object *obj)
+{
+ if (obj != NULL)
+ kref_put(&obj->refcount, drm_gem_object_free);
+}
+
+static inline void
+drm_gem_object_unreference_unlocked(struct drm_gem_object *obj)
+{
+ if (obj && !atomic_add_unless(&obj->refcount.refcount, -1, 1)) {
+ struct drm_device *dev = obj->dev;
+
+ mutex_lock(&dev->struct_mutex);
+ if (likely(atomic_dec_and_test(&obj->refcount.refcount)))
+ drm_gem_object_free(&obj->refcount);
+ mutex_unlock(&dev->struct_mutex);
+ }
+}
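/*
 * Note (editorial): the unlocked variant avoids taking struct_mutex in
 * the common case. atomic_add_unless() drops the reference only while
 * the count stays above one; if the count would reach zero it fails,
 * and only then is struct_mutex taken so the final free runs under the
 * lock the free path expects.
 */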
+
+int drm_gem_handle_create(struct drm_file *file_priv,
+ struct drm_gem_object *obj,
+ u32 *handlep);
+int drm_gem_handle_delete(struct drm_file *filp, u32 handle);
+
+
+void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
+int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
+int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
+
+struct page **drm_gem_get_pages(struct drm_gem_object *obj);
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+ bool dirty, bool accessed);
+
+struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
+ struct drm_file *filp,
+ u32 handle);
+int drm_gem_dumb_destroy(struct drm_file *file,
+ struct drm_device *dev,
+ uint32_t handle);
+
+#endif /* __DRM_GEM_H__ */
#define __DRM_GEM_CMA_HELPER_H__
#include <drm/drmP.h>
+#include <drm/drm_gem.h>
struct drm_gem_cma_object {
struct drm_gem_object base;
struct sg_table *drm_gem_cma_prime_get_sg_table(struct drm_gem_object *obj);
struct drm_gem_object *
-drm_gem_cma_prime_import_sg_table(struct drm_device *dev, size_t size,
+drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
+ struct dma_buf_attachment *attach,
struct sg_table *sgt);
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma);
--- /dev/null
+#ifndef __DRM_DRM_LEGACY_H__
+#define __DRM_DRM_LEGACY_H__
+
+/*
+ * Legacy driver interfaces for the Direct Rendering Manager
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * Copyright (c) 2009-2010, Code Aurora Forum.
+ * All rights reserved.
+ * Copyright © 2014 Intel Corporation
+ * Daniel Vetter <daniel.vetter@ffwll.ch>
+ *
+ * Author: Rickard E. (Rik) Faith <faith@valinux.com>
+ * Author: Gareth Hughes <gareth@valinux.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+
+/*
+ * Legacy Support for paleontologic DRM drivers
+ *
+ * If you add a new driver and it uses any of these functions or structures,
+ * you're doing it terribly wrong.
+ */
+
+/**
+ * DMA buffer.
+ */
+struct drm_buf {
+ int idx; /**< Index into master buflist */
+ int total; /**< Buffer size */
+ int order; /**< log-base-2(total) */
+ int used; /**< Amount of buffer in use (for DMA) */
+ unsigned long offset; /**< Byte offset (used internally) */
+ void *address; /**< Address of buffer */
+ unsigned long bus_address; /**< Bus address of buffer */
+ struct drm_buf *next; /**< Kernel-only: used for free list */
+ __volatile__ int waiting; /**< On kernel DMA queue */
+ __volatile__ int pending; /**< On hardware DMA queue */
+ struct drm_file *file_priv; /**< Private of holding file descr */
+ int context; /**< Kernel queue for this buffer */
+ int while_locked; /**< Dispatch this buffer while locked */
+ enum {
+ DRM_LIST_NONE = 0,
+ DRM_LIST_FREE = 1,
+ DRM_LIST_WAIT = 2,
+ DRM_LIST_PEND = 3,
+ DRM_LIST_PRIO = 4,
+ DRM_LIST_RECLAIM = 5
+ } list; /**< Which list we're on */
+
+ int dev_priv_size; /**< Size of buffer private storage */
+ void *dev_private; /**< Per-buffer private storage */
+};
+
+typedef struct drm_dma_handle {
+ dma_addr_t busaddr;
+ void *vaddr;
+ size_t size;
+} drm_dma_handle_t;
+
+/**
+ * Buffer entry. There is one of these for each buffer size order.
+ */
+struct drm_buf_entry {
+ int buf_size; /**< size */
+ int buf_count; /**< number of buffers */
+ struct drm_buf *buflist; /**< buffer list */
+ int seg_count;
+ int page_order;
+ struct drm_dma_handle **seglist;
+
+ int low_mark; /**< Low water mark */
+ int high_mark; /**< High water mark */
+};
+
+/**
+ * DMA data.
+ */
+struct drm_device_dma {
+
+ struct drm_buf_entry bufs[DRM_MAX_ORDER + 1]; /**< buffers, grouped by their size order */
+ int buf_count; /**< total number of buffers */
+ struct drm_buf **buflist; /**< Vector of pointers into drm_device_dma::bufs */
+ int seg_count;
+ int page_count; /**< number of pages */
+ unsigned long *pagelist; /**< page list */
+ unsigned long byte_count;
+ enum {
+ _DRM_DMA_USE_AGP = 0x01,
+ _DRM_DMA_USE_SG = 0x02,
+ _DRM_DMA_USE_FB = 0x04,
+ _DRM_DMA_USE_PCI_RO = 0x08
+ } flags;
+
+};
+
+/**
+ * Scatter-gather memory.
+ */
+struct drm_sg_mem {
+ unsigned long handle;
+ void *virtual;
+ int pages;
+ struct page **pagelist;
+ dma_addr_t *busaddr;
+};
+
+/**
+ * Kernel side of a mapping
+ */
+struct drm_local_map {
+ resource_size_t offset; /**< Requested physical address (0 for SAREA)*/
+ unsigned long size; /**< Requested physical size (bytes) */
+ enum drm_map_type type; /**< Type of memory to map */
+ enum drm_map_flags flags; /**< Flags */
+ void *handle; /**< User-space: "Handle" to pass to mmap() */
+ /**< Kernel-space: kernel-virtual address */
+ int mtrr; /**< MTRR slot used */
+};
+
+typedef struct drm_local_map drm_local_map_t;
+
+/**
+ * Mappings list
+ */
+struct drm_map_list {
+ struct list_head head; /**< list head */
+ struct drm_hash_item hash;
+ struct drm_local_map *map; /**< mapping */
+ uint64_t user_token;
+ struct drm_master *master;
+};
+
+int drm_legacy_addmap(struct drm_device *d, resource_size_t offset,
+ unsigned int size, enum drm_map_type type,
+ enum drm_map_flags flags, struct drm_local_map **map_p);
+int drm_legacy_rmmap(struct drm_device *d, struct drm_local_map *map);
+int drm_legacy_rmmap_locked(struct drm_device *d, struct drm_local_map *map);
+struct drm_local_map *drm_legacy_getsarea(struct drm_device *dev);
+int drm_legacy_mmap(struct file *filp, struct vm_area_struct *vma);
+
+int drm_legacy_addbufs_agp(struct drm_device *d, struct drm_buf_desc *req);
+int drm_legacy_addbufs_pci(struct drm_device *d, struct drm_buf_desc *req);
+
+/**
+ * Test that the hardware lock is held by the caller, returning -EINVAL otherwise.
+ *
+ * \param dev DRM device.
+ * \param filp file pointer of the caller.
+ */
+#define LOCK_TEST_WITH_RETURN( dev, _file_priv ) \
+do { \
+ if (!_DRM_LOCK_IS_HELD(_file_priv->master->lock.hw_lock->lock) || \
+ _file_priv->master->lock.file_priv != _file_priv) { \
+ DRM_ERROR( "%s called without lock held, held %d owner %p %p\n",\
+ __func__, _DRM_LOCK_IS_HELD(_file_priv->master->lock.hw_lock->lock),\
+ _file_priv->master->lock.file_priv, _file_priv); \
+ return -EINVAL; \
+ } \
+} while (0)
+
+void drm_legacy_idlelock_take(struct drm_lock_data *lock);
+void drm_legacy_idlelock_release(struct drm_lock_data *lock);
+
+/* drm_pci.c dma alloc wrappers */
+void __drm_legacy_pci_free(struct drm_device *dev, drm_dma_handle_t * dmah);
+
+/* drm_memory.c */
+void drm_legacy_ioremap(struct drm_local_map *map, struct drm_device *dev);
+void drm_legacy_ioremap_wc(struct drm_local_map *map, struct drm_device *dev);
+void drm_legacy_ioremapfree(struct drm_local_map *map, struct drm_device *dev);
+
+static __inline__ struct drm_local_map *drm_legacy_findmap(struct drm_device *dev,
+ unsigned int token)
+{
+ struct drm_map_list *_entry;
+ list_for_each_entry(_entry, &dev->maplist, head)
+ if (_entry->user_token == token)
+ return _entry->map;
+ return NULL;
+}
+
+#endif /* __DRM_DRM_LEGACY_H__ */
+++ /dev/null
-/**
- * \file drm_memory.h
- * Memory management wrappers for DRM
- *
- * \author Rickard E. (Rik) Faith <faith@valinux.com>
- * \author Gareth Hughes <gareth@valinux.com>
- */
-
-/*
- * Created: Thu Feb 4 14:00:34 1999 by faith@valinux.com
- *
- * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
- * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
- * All Rights Reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- */
-
-#include <linux/highmem.h>
-#include <linux/vmalloc.h>
-#include <drm/drmP.h>
-
-/**
- * Cut down version of drm_memory_debug.h, which used to be called
- * drm_memory.h.
- */
-
-#if __OS_HAS_AGP
-
-#ifdef HAVE_PAGE_AGP
-#include <asm/agp.h>
-#else
-# ifdef __powerpc__
-# define PAGE_AGP __pgprot(_PAGE_KERNEL | _PAGE_NO_CACHE)
-# else
-# define PAGE_AGP PAGE_KERNEL
-# endif
-#endif
-
-#else /* __OS_HAS_AGP */
-
-#endif
#define MIPI_DSI_MODE_EOT_PACKET BIT(9)
/* device supports non-continuous clock behavior (DSI spec 5.6.1) */
#define MIPI_DSI_CLOCK_NON_CONTINUOUS BIT(10)
+/* transmit data in low power */
+#define MIPI_DSI_MODE_LPM BIT(11)
enum mipi_dsi_pixel_format {
MIPI_DSI_FMT_RGB888,
struct drm_modeset_lock;
/**
- * drm_modeset_acquire_ctx - locking context (see ww_acquire_ctx)
+ * struct drm_modeset_acquire_ctx - locking context (see ww_acquire_ctx)
* @ww_ctx: base acquire ctx
* @contended: used internally for -EDEADLK handling
* @locked: list of held locks
* list of held locks (drm_modeset_lock)
*/
struct list_head locked;
+
+ /**
+ * Trylock mode, use only for panic handlers!
+ */
+ bool trylock_only;
};
/**
- * drm_modeset_lock - used for locking modeset resources.
+ * struct drm_modeset_lock - used for locking modeset resources.
* @mutex: resource locking
 * @head: used to hold its place on state->locked list when
* part of an atomic update
void drm_modeset_unlock(struct drm_modeset_lock *lock);
struct drm_device;
+struct drm_crtc;
+
+void drm_modeset_lock_all(struct drm_device *dev);
+int __drm_modeset_lock_all(struct drm_device *dev, bool trylock);
+void drm_modeset_unlock_all(struct drm_device *dev);
+void drm_modeset_lock_crtc(struct drm_crtc *crtc);
+void drm_modeset_unlock_crtc(struct drm_crtc *crtc);
+void drm_warn_on_modeset_not_all_locked(struct drm_device *dev);
+struct drm_modeset_acquire_ctx *
+drm_modeset_legacy_acquire_ctx(struct drm_crtc *crtc);
+
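/*
 * Sketch (editorial, not part of this patch): a legacy per-CRTC ioctl
 * path taking only that CRTC's lock and recovering the acquire context
 * for atomic helpers; example_update() is a hypothetical driver callback
 * that threads the context through.
 */
static void example_legacy_crtc_path(struct drm_crtc *crtc)
{
	struct drm_modeset_acquire_ctx *ctx;

	drm_modeset_lock_crtc(crtc);
	ctx = drm_modeset_legacy_acquire_ctx(crtc);
	example_update(crtc, ctx); /* hypothetical */
	drm_modeset_unlock_crtc(crtc);
}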
int drm_modeset_lock_all_crtcs(struct drm_device *dev,
struct drm_modeset_acquire_ctx *ctx);
{0x1002, 0x1315, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x1316, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x1317, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x1318, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x131B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x131C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x131D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x6601, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6602, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6603, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6604, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6605, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6606, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6607, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6608, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6610, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6611, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6613, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6631, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6640, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6641, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6646, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6647, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6649, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6650, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6651, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6829, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x682A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x682B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x682C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x682D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x682F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6830, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+++ /dev/null
-#ifndef DRM_USB_H
-#define DRM_USB_H
-
-#include <drmP.h>
-
-#include <linux/usb.h>
-
-extern int drm_usb_init(struct drm_driver *driver, struct usb_driver *udriver);
-extern void drm_usb_exit(struct drm_driver *driver, struct usb_driver *udriver);
-
-int drm_get_usb_dev(struct usb_interface *interface,
- const struct usb_device_id *id,
- struct drm_driver *driver);
-
-#endif
struct drm_mm_node;
+/**
+ * struct ttm_place
+ *
+ * @fpfn: first valid page frame number to put the object
+ * @lpfn: last valid page frame number to put the object
+ * @flags: memory domain and caching flags for the object
+ *
+ * Structure indicating a possible place to put an object.
+ */
+struct ttm_place {
+ unsigned fpfn;
+ unsigned lpfn;
+ uint32_t flags;
+};
/**
* struct ttm_placement
*
- * @fpfn: first valid page frame number to put the object
- * @lpfn: last valid page frame number to put the object
* @num_placement: number of preferred placements
* @placement: preferred placements
* @num_busy_placement: number of preferred placements when need to evict buffer
* Structure indicating the placement you request for an object.
*/
struct ttm_placement {
- unsigned fpfn;
- unsigned lpfn;
- unsigned num_placement;
- const uint32_t *placement;
- unsigned num_busy_placement;
- const uint32_t *busy_placement;
+ unsigned num_placement;
+ const struct ttm_place *placement;
+ unsigned num_busy_placement;
+ const struct ttm_place *busy_placement;
};
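/*
 * Sketch (editorial, not part of this patch): placements are now arrays
 * of struct ttm_place rather than bare flag words, so page-frame limits
 * become per-entry. The TTM_PL_FLAG_* values are the existing placement
 * flags.
 */
static const struct ttm_place example_vram_place = {
	.fpfn = 0,
	.lpfn = 0, /* 0 = no upper bound */
	.flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_WC,
};

static const struct ttm_placement example_placement = {
	.num_placement = 1,
	.placement = &example_vram_place,
	.num_busy_placement = 1,
	.busy_placement = &example_vram_place,
};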
/**
* @lru: List head for the lru list.
* @ddestroy: List head for the delayed destroy list.
* @swap: List head for swap LRU list.
- * @sync_obj: Pointer to a synchronization object.
* @priv_flags: Flags describing buffer object internal state.
* @vma_node: Address space manager node.
* @offset: The current GPU offset, which can have different meanings
struct list_head io_reserve_lru;
/**
- * Members protected by struct buffer_object_device::fence_lock
- * In addition, setting sync_obj to anything else
- * than NULL requires bo::reserved to be held. This allows for
- * checking NULL while reserved but not holding the mentioned lock.
+ * Members protected by a bo reservation.
*/
- void *sync_obj;
unsigned long priv_flags;
struct drm_vma_offset_node vma_node;
* point to the shmem object backing a GEM object if TTM is used to back a
* GEM user interface.
* @acc_size: Accounted size for this object.
+ * @resv: Pointer to a reservation_object, or NULL to let ttm allocate one.
* @destroy: Destroy function. Use NULL for kfree().
*
* This function initializes a pre-allocated struct ttm_buffer_object.
struct file *persistent_swap_storage,
size_t acc_size,
struct sg_table *sg,
+ struct reservation_object *resv,
void (*destroy) (struct ttm_buffer_object *));
/**
struct file *persistent_swap_storage,
struct ttm_buffer_object **p_bo);
-/**
- * ttm_bo_check_placement
- *
- * @bo: the buffer object.
- * @placement: placements
- *
- * Performs minimal validity checking on an intended change of
- * placement flags.
- * Returns
- * -EINVAL: Intended change is invalid or not allowed.
- */
-extern int ttm_bo_check_placement(struct ttm_buffer_object *bo,
- struct ttm_placement *placement);
-
/**
* ttm_bo_init_mm
*
*/
int (*get_node)(struct ttm_mem_type_manager *man,
struct ttm_buffer_object *bo,
- struct ttm_placement *placement,
- uint32_t flags,
+ const struct ttm_place *place,
struct ttm_mem_reg *mem);
/**
* @move: Callback for a driver to hook in accelerated functions to
* move a buffer.
* If set to NULL, a potentially slow memcpy() move is used.
- * @sync_obj_signaled: See ttm_fence_api.h
- * @sync_obj_wait: See ttm_fence_api.h
- * @sync_obj_flush: See ttm_fence_api.h
- * @sync_obj_unref: See ttm_fence_api.h
- * @sync_obj_ref: See ttm_fence_api.h
*/
struct ttm_bo_driver {
int (*verify_access) (struct ttm_buffer_object *bo,
struct file *filp);
- /**
- * In case a driver writer dislikes the TTM fence objects,
- * the driver writer can replace those with sync objects of
- * his / her own. If it turns out that no driver writer is
- * using these. I suggest we remove these hooks and plug in
- * fences directly. The bo driver needs the following functionality:
- * See the corresponding functions in the fence object API
- * documentation.
- */
-
- bool (*sync_obj_signaled) (void *sync_obj);
- int (*sync_obj_wait) (void *sync_obj,
- bool lazy, bool interruptible);
- int (*sync_obj_flush) (void *sync_obj);
- void (*sync_obj_unref) (void **sync_obj);
- void *(*sync_obj_ref) (void *sync_obj);
-
/* hook to notify driver about a driver move so it
* can do tiling things */
void (*move_notify)(struct ttm_buffer_object *bo,
*
* @driver: Pointer to a struct ttm_bo_driver struct setup by the driver.
* @man: An array of mem_type_managers.
- * @fence_lock: Protects the synchronizing members on *all* bos belonging
- * to this device.
* @vma_manager: Address space manager
* lru_lock: Spinlock that protects the buffer+device lru lists and
* ddestroy lists.
struct ttm_bo_global *glob;
struct ttm_bo_driver *driver;
struct ttm_mem_type_manager man[TTM_NUM_MEM_TYPES];
- spinlock_t fence_lock;
/*
* Protected by internal locks.
* ttm_bo_move_accel_cleanup.
*
* @bo: A pointer to a struct ttm_buffer_object.
- * @sync_obj: A sync object that signals when moving is complete.
+ * @fence: A fence object that signals when moving is complete.
* @evict: This is an evict move. Don't return until the buffer is idle.
* @no_wait_gpu: Return immediately if the GPU is busy.
* @new_mem: struct ttm_mem_reg indicating where to move.
*/
extern int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
- void *sync_obj,
+ struct fence *fence,
bool evict, bool no_wait_gpu,
struct ttm_mem_reg *new_mem);
/**
*
* @head: list head for thread-private list.
* @bo: refcounted buffer object pointer.
- * @reserved: Indicates whether @bo has been reserved for validation.
- * @removed: Indicates whether @bo has been removed from lru lists.
- * @put_count: Number of outstanding references on bo::list_kref.
- * @old_sync_obj: Pointer to a sync object about to be unreferenced
+ * @shared: should the fence be added shared?
*/
struct ttm_validate_buffer {
struct list_head head;
struct ttm_buffer_object *bo;
- bool reserved;
- bool removed;
- int put_count;
- void *old_sync_obj;
+ bool shared;
};
/**
* @ticket: [out] ww_acquire_ctx filled in by call, or NULL if only
* non-blocking reserves should be tried.
* @list: thread private list of ttm_validate_buffer structs.
+ * @intr: should the wait be interruptible
*
* Tries to reserve bos pointed to by the list entries for validation.
* If the function returns 0, all buffers are marked as "unfenced",
* CPU write reservations to be cleared, and for other threads to
* unreserve their buffers.
*
- * This function may return -ERESTART or -EAGAIN if the calling process
- * receives a signal while waiting. In that case, no buffers on the list
- * will be reserved upon return.
+ * If intr is set to true, this function may return -ERESTARTSYS if the
+ * calling process receives a signal while waiting. In that case, no
+ * buffers on the list will be reserved upon return.
*
* Buffers reserved by this function should be unreserved by
* a call to either ttm_eu_backoff_reservation() or
*/
extern int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
- struct list_head *list);
+ struct list_head *list, bool intr);
/**
* function ttm_eu_fence_buffer_objects.
*
* @ticket: ww_acquire_ctx from reserve call
* @list: thread private list of ttm_validate_buffer structs.
- * @sync_obj: The new sync object for the buffers.
+ * @fence: The new exclusive fence for the buffers.
*
* This function should be called when command submission is complete, and
* it will add a new sync object to bos pointed to by entries on @list.
*/
extern void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket,
- struct list_head *list, void *sync_obj);
+ struct list_head *list,
+ struct fence *fence);
#endif
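
To see how the reworked execbuf utilities fit together, here is a minimal sketch of a driver's submission path (illustrative only; it assumes the ttm_validate_buffer entries are populated elsewhere and that the driver obtains a struct fence for its submission):

	struct ww_acquire_ctx ticket;
	LIST_HEAD(list);
	struct fence *fence;
	int ret;

	/* ... add ttm_validate_buffer entries to "list" ... */

	ret = ttm_eu_reserve_buffers(&ticket, &list, true);
	if (ret)
		return ret;	/* may be -ERESTARTSYS, since intr == true */

	/* ... validate buffers, submit the command stream, obtain "fence" ... */

	ttm_eu_fence_buffer_objects(&ticket, &list, fence);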
BLK_MQ_RQ_QUEUE_ERROR = 2, /* end IO with error */
BLK_MQ_F_SHOULD_MERGE = 1 << 0,
- BLK_MQ_F_SHOULD_SORT = 1 << 1,
- BLK_MQ_F_TAG_SHARED = 1 << 2,
- BLK_MQ_F_SG_MERGE = 1 << 3,
- BLK_MQ_F_SYSFS_UP = 1 << 4,
+ BLK_MQ_F_TAG_SHARED = 1 << 1,
+ BLK_MQ_F_SG_MERGE = 1 << 2,
+ BLK_MQ_F_SYSFS_UP = 1 << 3,
BLK_MQ_S_STOPPED = 0,
BLK_MQ_S_TAG_ACTIVE = 1,
#define PHY_ID_BCM7366 0x600d8490
#define PHY_ID_BCM7439 0x600d8480
#define PHY_ID_BCM7445 0x600d8510
-#define PHY_ID_BCM7XXX_28 0x600d8400
#define PHY_BCM_OUI_MASK 0xfffffc00
#define PHY_BCM_OUI_1 0x00206000
#define QSTR_INIT(n,l) { { { .len = l } }, .name = n }
#define hashlen_hash(hashlen) ((u32) (hashlen))
#define hashlen_len(hashlen) ((u32)((hashlen) >> 32))
+#define hashlen_create(hash,len) (((u64)(len)<<32)|(u32)(hash))
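
The new macro is the inverse of the two accessors above; a quick round-trip illustration:

	u64 hl = hashlen_create(0x12345678, 5);

	/* hashlen_hash(hl) == 0x12345678 and hashlen_len(hl) == 5,
	 * since the length lives in the high 32 bits and the hash in
	 * the low 32 bits. */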
struct dentry_stat_t {
long nr_dentry;
#define NULL_ADDR ((block_t)0) /* used as block_t addresses */
#define NEW_ADDR ((block_t)-1) /* used as block_t addresses */
+/* 0, 1(node nid), 2(meta nid) are reserved node id */
+#define F2FS_RESERVED_NODE_NUM 3
+
#define F2FS_ROOT_INO(sbi) (sbi->root_ino_num)
#define F2FS_NODE_INO(sbi) (sbi->node_ino_num)
#define F2FS_META_INO(sbi) (sbi->meta_ino_num)
#define CP_ORPHAN_PRESENT_FLAG 0x00000002
#define CP_UMOUNT_FLAG 0x00000001
+#define F2FS_CP_PACKS 2 /* # of checkpoint packs */
+
struct f2fs_checkpoint {
__le64 checkpoint_ver; /* checkpoint block version number */
__le64 user_block_count; /* # of user blocks */
*/
#define F2FS_ORPHANS_PER_BLOCK 1020
+#define GET_ORPHAN_BLOCKS(n) ((n + F2FS_ORPHANS_PER_BLOCK - 1) / \
+ F2FS_ORPHANS_PER_BLOCK)
+
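The macro is plain ceiling division over the per-block capacity, for example:

	/* GET_ORPHAN_BLOCKS(1)    == 1
	 * GET_ORPHAN_BLOCKS(1020) == 1
	 * GET_ORPHAN_BLOCKS(1021) == 2   (one full block plus one inode) */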
struct f2fs_orphan_block {
__le32 ino[F2FS_ORPHANS_PER_BLOCK]; /* inode numbers */
__le32 reserved; /* reserved */
#define F2FS_NAME_LEN 255
#define F2FS_INLINE_XATTR_ADDRS 50 /* 200 bytes for inline xattrs */
#define DEF_ADDRS_PER_INODE 923 /* Address Pointers in an Inode */
+#define DEF_NIDS_PER_INODE 5 /* Node IDs in an Inode */
#define ADDRS_PER_INODE(fi) addrs_per_inode(fi)
#define ADDRS_PER_BLOCK 1018 /* Address Pointers in a Direct Block */
#define NIDS_PER_BLOCK 1018 /* Node IDs in an Indirect Block */
#define MAX_INLINE_DATA (sizeof(__le32) * (DEF_ADDRS_PER_INODE - \
F2FS_INLINE_XATTR_ADDRS - 1))
-#define INLINE_DATA_OFFSET (PAGE_CACHE_SIZE - sizeof(struct node_footer) \
- - sizeof(__le32) * (DEF_ADDRS_PER_INODE + 5 - 1))
+#define INLINE_DATA_OFFSET (PAGE_CACHE_SIZE - sizeof(struct node_footer) -\
+ sizeof(__le32) * (DEF_ADDRS_PER_INODE + \
+ DEF_NIDS_PER_INODE - 1))
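
Working through the constants above:

	/* MAX_INLINE_DATA = sizeof(__le32) * (923 - 50 - 1)
	 *                 = 4 * 872 = 3488 bytes of inline file data.
	 * INLINE_DATA_OFFSET is unchanged by this cleanup: the literal 5
	 * has merely been replaced by DEF_NIDS_PER_INODE. */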
struct f2fs_inode {
__le16 i_mode; /* file mode */
__le32 i_addr[DEF_ADDRS_PER_INODE]; /* Pointers to data blocks */
- __le32 i_nid[5]; /* direct(2), indirect(2),
+ __le32 i_nid[DEF_NIDS_PER_INODE]; /* direct(2), indirect(2),
double_indirect(1) node id */
} __packed;
FTRACE_OPS_FL_DELETED = 1 << 8,
};
+#ifdef CONFIG_DYNAMIC_FTRACE
+/* The hash used to know what functions callbacks trace */
+struct ftrace_ops_hash {
+ struct ftrace_hash *notrace_hash;
+ struct ftrace_hash *filter_hash;
+ struct mutex regex_lock;
+};
+#endif
+
/*
* Note, ftrace_ops can be referenced outside of RCU protection.
* (Although, for perf, the control ops prevent that). If ftrace_ops is
int __percpu *disabled;
#ifdef CONFIG_DYNAMIC_FTRACE
int nr_trampolines;
- struct ftrace_hash *notrace_hash;
- struct ftrace_hash *filter_hash;
+ struct ftrace_ops_hash local_hash;
+ struct ftrace_ops_hash *func_hash;
struct ftrace_hash *tramp_hash;
- struct mutex regex_lock;
unsigned long trampoline;
#endif
};
*/
struct gpio_desc;
-#ifdef CONFIG_GPIOLIB
-
#define GPIOD_FLAGS_BIT_DIR_SET BIT(0)
#define GPIOD_FLAGS_BIT_DIR_OUT BIT(1)
#define GPIOD_FLAGS_BIT_DIR_VAL BIT(2)
GPIOD_FLAGS_BIT_DIR_VAL,
};
+#ifdef CONFIG_GPIOLIB
+
/* Acquire and dispose GPIOs */
struct gpio_desc *__must_check __gpiod_get(struct device *dev,
const char *con_id,
enum gpiod_flags flags);
-#define __gpiod_get(dev, con_id, flags, ...) __gpiod_get(dev, con_id, flags)
-#define gpiod_get(varargs...) __gpiod_get(varargs, 0)
struct gpio_desc *__must_check __gpiod_get_index(struct device *dev,
const char *con_id,
unsigned int idx,
enum gpiod_flags flags);
-#define __gpiod_get_index(dev, con_id, index, flags, ...) \
- __gpiod_get_index(dev, con_id, index, flags)
-#define gpiod_get_index(varargs...) __gpiod_get_index(varargs, 0)
struct gpio_desc *__must_check __gpiod_get_optional(struct device *dev,
const char *con_id,
enum gpiod_flags flags);
-#define __gpiod_get_optional(dev, con_id, flags, ...) \
- __gpiod_get_optional(dev, con_id, flags)
-#define gpiod_get_optional(varargs...) __gpiod_get_optional(varargs, 0)
struct gpio_desc *__must_check __gpiod_get_index_optional(struct device *dev,
const char *con_id,
unsigned int index,
enum gpiod_flags flags);
-#define __gpiod_get_index_optional(dev, con_id, index, flags, ...) \
- __gpiod_get_index_optional(dev, con_id, index, flags)
-#define gpiod_get_index_optional(varargs...) \
- __gpiod_get_index_optional(varargs, 0)
-
void gpiod_put(struct gpio_desc *desc);
struct gpio_desc *__must_check __devm_gpiod_get(struct device *dev,
const char *con_id,
enum gpiod_flags flags);
-#define __devm_gpiod_get(dev, con_id, flags, ...) \
- __devm_gpiod_get(dev, con_id, flags)
-#define devm_gpiod_get(varargs...) __devm_gpiod_get(varargs, 0)
struct gpio_desc *__must_check __devm_gpiod_get_index(struct device *dev,
const char *con_id,
unsigned int idx,
enum gpiod_flags flags);
-#define __devm_gpiod_get_index(dev, con_id, index, flags, ...) \
- __devm_gpiod_get_index(dev, con_id, index, flags)
-#define devm_gpiod_get_index(varargs...) __devm_gpiod_get_index(varargs, 0)
struct gpio_desc *__must_check __devm_gpiod_get_optional(struct device *dev,
const char *con_id,
enum gpiod_flags flags);
-#define __devm_gpiod_get_optional(dev, con_id, flags, ...) \
- __devm_gpiod_get_optional(dev, con_id, flags)
-#define devm_gpiod_get_optional(varargs...) \
- __devm_gpiod_get_optional(varargs, 0)
struct gpio_desc *__must_check
__devm_gpiod_get_index_optional(struct device *dev, const char *con_id,
unsigned int index, enum gpiod_flags flags);
-#define __devm_gpiod_get_index_optional(dev, con_id, index, flags, ...) \
- __devm_gpiod_get_index_optional(dev, con_id, index, flags)
-#define devm_gpiod_get_index_optional(varargs...) \
- __devm_gpiod_get_index_optional(varargs, 0)
-
void devm_gpiod_put(struct device *dev, struct gpio_desc *desc);
int gpiod_get_direction(const struct gpio_desc *desc);
#else /* CONFIG_GPIOLIB */
-static inline struct gpio_desc *__must_check gpiod_get(struct device *dev,
- const char *con_id)
+static inline struct gpio_desc *__must_check __gpiod_get(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
{
return ERR_PTR(-ENOSYS);
}
-static inline struct gpio_desc *__must_check gpiod_get_index(struct device *dev,
- const char *con_id,
- unsigned int idx)
+static inline struct gpio_desc *__must_check
+__gpiod_get_index(struct device *dev,
+ const char *con_id,
+ unsigned int idx,
+ enum gpiod_flags flags)
{
return ERR_PTR(-ENOSYS);
}
static inline struct gpio_desc *__must_check
-gpiod_get_optional(struct device *dev, const char *con_id)
+__gpiod_get_optional(struct device *dev, const char *con_id,
+ enum gpiod_flags flags)
{
return ERR_PTR(-ENOSYS);
}
static inline struct gpio_desc *__must_check
-gpiod_get_index_optional(struct device *dev, const char *con_id,
- unsigned int index)
+__gpiod_get_index_optional(struct device *dev, const char *con_id,
+ unsigned int index, enum gpiod_flags flags)
{
return ERR_PTR(-ENOSYS);
}
WARN_ON(1);
}
-static inline struct gpio_desc *__must_check devm_gpiod_get(struct device *dev,
- const char *con_id)
+static inline struct gpio_desc *__must_check
+__devm_gpiod_get(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
{
return ERR_PTR(-ENOSYS);
}
static inline
-struct gpio_desc *__must_check devm_gpiod_get_index(struct device *dev,
- const char *con_id,
- unsigned int idx)
+struct gpio_desc *__must_check
+__devm_gpiod_get_index(struct device *dev,
+ const char *con_id,
+ unsigned int idx,
+ enum gpiod_flags flags)
{
return ERR_PTR(-ENOSYS);
}
static inline struct gpio_desc *__must_check
-devm_gpiod_get_optional(struct device *dev, const char *con_id)
+__devm_gpiod_get_optional(struct device *dev, const char *con_id,
+ enum gpiod_flags flags)
{
return ERR_PTR(-ENOSYS);
}
static inline struct gpio_desc *__must_check
-devm_gpiod_get_index_optional(struct device *dev, const char *con_id,
- unsigned int index)
+__devm_gpiod_get_index_optional(struct device *dev, const char *con_id,
+ unsigned int index, enum gpiod_flags flags)
{
return ERR_PTR(-ENOSYS);
}
return -EINVAL;
}
-
#endif /* CONFIG_GPIOLIB */
+/*
+ * Vararg-hacks! This is done to transition the kernel to always pass
+ * the flags argument to the functions below. During a transition
+ * phase these vararg macros make both old- and new-style code compile,
+ * but once all calls to the older API are removed, these should go away
+ * and the __gpiod_get() etc. functions above should be renamed to just
+ * gpiod_get() etc.
+ */
+#define __gpiod_get(dev, con_id, flags, ...) __gpiod_get(dev, con_id, flags)
+#define gpiod_get(varargs...) __gpiod_get(varargs, 0)
+#define __gpiod_get_index(dev, con_id, index, flags, ...) \
+ __gpiod_get_index(dev, con_id, index, flags)
+#define gpiod_get_index(varargs...) __gpiod_get_index(varargs, 0)
+#define __gpiod_get_optional(dev, con_id, flags, ...) \
+ __gpiod_get_optional(dev, con_id, flags)
+#define gpiod_get_optional(varargs...) __gpiod_get_optional(varargs, 0)
+#define __gpiod_get_index_optional(dev, con_id, index, flags, ...) \
+ __gpiod_get_index_optional(dev, con_id, index, flags)
+#define gpiod_get_index_optional(varargs...) \
+ __gpiod_get_index_optional(varargs, 0)
+#define __devm_gpiod_get(dev, con_id, flags, ...) \
+ __devm_gpiod_get(dev, con_id, flags)
+#define devm_gpiod_get(varargs...) __devm_gpiod_get(varargs, 0)
+#define __devm_gpiod_get_index(dev, con_id, index, flags, ...) \
+ __devm_gpiod_get_index(dev, con_id, index, flags)
+#define devm_gpiod_get_index(varargs...) __devm_gpiod_get_index(varargs, 0)
+#define __devm_gpiod_get_optional(dev, con_id, flags, ...) \
+ __devm_gpiod_get_optional(dev, con_id, flags)
+#define devm_gpiod_get_optional(varargs...) \
+ __devm_gpiod_get_optional(varargs, 0)
+#define __devm_gpiod_get_index_optional(dev, con_id, index, flags, ...) \
+ __devm_gpiod_get_index_optional(dev, con_id, index, flags)
+#define devm_gpiod_get_index_optional(varargs...) \
+ __devm_gpiod_get_index_optional(varargs, 0)
+
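Concretely, the trick works like this (an illustration, not part of the header): the wrapper appends a trailing 0, and the function-like macro drops everything after the named parameters again, so both call styles resolve to the same three-argument function:

	gpiod_get(dev, "reset");
	/* -> __gpiod_get(dev, "reset", 0)             (flags default to 0) */

	gpiod_get(dev, "reset", GPIOD_OUT_LOW);
	/* -> __gpiod_get(dev, "reset", GPIOD_OUT_LOW, 0)
	 * -> __gpiod_get(dev, "reset", GPIOD_OUT_LOW)  (trailing 0 dropped) */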
#if IS_ENABLED(CONFIG_GPIOLIB) && IS_ENABLED(CONFIG_GPIO_SYSFS)
int gpiod_export(struct gpio_desc *desc, bool direction_may_change);
{
u64 hash = val;
+#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
+ hash = hash * GOLDEN_RATIO_PRIME_64;
+#else
/* Sigh, gcc can't optimise this alone like it does for 32 bits. */
u64 n = hash;
n <<= 18;
hash += n;
n <<= 2;
hash += n;
+#endif
/* High bits are more random, so use them. */
return hash >> (64 - bits);
}
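
Either path folds the input with the 64-bit golden-ratio prime, and only the top bits are returned. A typical use (illustrative):

	/* bucket a pointer into a 1024-entry hash table */
	u32 bucket = hash_64((u64)(unsigned long)ptr, 10);	/* 0..1023 */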
#endif /* CONFIG_OF */
-#ifdef CONFIG_I2C_ACPI
-int acpi_i2c_install_space_handler(struct i2c_adapter *adapter);
-void acpi_i2c_remove_space_handler(struct i2c_adapter *adapter);
+#ifdef CONFIG_ACPI
void acpi_i2c_register_devices(struct i2c_adapter *adap);
#else
static inline void acpi_i2c_register_devices(struct i2c_adapter *adap) { }
+#endif /* CONFIG_ACPI */
+
+#ifdef CONFIG_ACPI_I2C_OPREGION
+int acpi_i2c_install_space_handler(struct i2c_adapter *adapter);
+void acpi_i2c_remove_space_handler(struct i2c_adapter *adapter);
+#else
static inline void acpi_i2c_remove_space_handler(struct i2c_adapter *adapter)
{ }
static inline int acpi_i2c_install_space_handler(struct i2c_adapter *adapter)
{ return 0; }
-#endif
+#endif /* CONFIG_ACPI_I2C_OPREGION */
#endif /* _LINUX_I2C_H */
* journal_block_tag (in the descriptor). The other h_chksum* fields are
* not used.
*
- * Checksum v1 and v2 are mutually exclusive features.
+ * If FEATURE_INCOMPAT_CSUM_V3 is set, the descriptor block uses
+ * journal_block_tag3_t to store a full 32-bit checksum. Everything else
+ * is the same as v2.
+ *
+ * Checksum v1, v2, and v3 are mutually exclusive features.
*/
struct commit_header {
__be32 h_magic;
* raw struct shouldn't be used for pointer math or sizeof() - use
* journal_tag_bytes(journal) instead to compute this.
*/
+typedef struct journal_block_tag3_s
+{
+ __be32 t_blocknr; /* The on-disk block number */
+ __be32 t_flags; /* See below */
+ __be32 t_blocknr_high; /* most-significant high 32bits. */
+ __be32 t_checksum; /* crc32c(uuid+seq+block) */
+} journal_block_tag3_t;
+
typedef struct journal_block_tag_s
{
__be32 t_blocknr; /* The on-disk block number */
__be32 t_blocknr_high; /* most-significant high 32bits. */
} journal_block_tag_t;
-#define JBD2_TAG_SIZE32 (offsetof(journal_block_tag_t, t_blocknr_high))
-#define JBD2_TAG_SIZE64 (sizeof(journal_block_tag_t))
-
/* Tail of descriptor block, for checksumming */
struct jbd2_journal_block_tail {
__be32 t_checksum; /* crc32c(uuid+descr_block) */
#define JBD2_FEATURE_INCOMPAT_64BIT 0x00000002
#define JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT 0x00000004
#define JBD2_FEATURE_INCOMPAT_CSUM_V2 0x00000008
+#define JBD2_FEATURE_INCOMPAT_CSUM_V3 0x00000010
/* Features known to this kernel version: */
#define JBD2_KNOWN_COMPAT_FEATURES JBD2_FEATURE_COMPAT_CHECKSUM
#define JBD2_KNOWN_INCOMPAT_FEATURES (JBD2_FEATURE_INCOMPAT_REVOKE | \
JBD2_FEATURE_INCOMPAT_64BIT | \
JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT | \
- JBD2_FEATURE_INCOMPAT_CSUM_V2)
+ JBD2_FEATURE_INCOMPAT_CSUM_V2 | \
+ JBD2_FEATURE_INCOMPAT_CSUM_V3)
#ifdef __KERNEL__
extern int jbd2_journal_blocks_per_page(struct inode *inode);
extern size_t journal_tag_bytes(journal_t *journal);
+static inline int jbd2_journal_has_csum_v2or3(journal_t *journal)
+{
+ if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2) ||
+ JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V3))
+ return 1;
+
+ return 0;
+}
+
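For illustration, a sketch of how tag sizing follows from the feature bits (the real logic lives in journal_tag_bytes(), which also accounts for the 64-bit and v2-checksum variants of the legacy tag):

	size_t tag_bytes;

	if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V3))
		tag_bytes = sizeof(journal_block_tag3_t); /* full 32-bit checksum */
	else
		tag_bytes = sizeof(journal_block_tag_t);  /* v1/v2 layout */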
/*
* We reserve t_outstanding_credits >> JBD2_CONTROL_BLOCKS_SHIFT for
* transaction control blocks.
#define SEC_JIFFIE_SC (32 - SHIFT_HZ)
#endif
#define NSEC_JIFFIE_SC (SEC_JIFFIE_SC + 29)
-#define USEC_JIFFIE_SC (SEC_JIFFIE_SC + 19)
#define SEC_CONVERSION ((unsigned long)((((u64)NSEC_PER_SEC << SEC_JIFFIE_SC) +\
TICK_NSEC -1) / (u64)TICK_NSEC))
#define NSEC_CONVERSION ((unsigned long)((((u64)1 << NSEC_JIFFIE_SC) +\
TICK_NSEC -1) / (u64)TICK_NSEC))
-#define USEC_CONVERSION \
- ((unsigned long)((((u64)NSEC_PER_USEC << USEC_JIFFIE_SC) +\
- TICK_NSEC -1) / (u64)TICK_NSEC))
-/*
- * USEC_ROUND is used in the timeval to jiffie conversion. See there
- * for more details. It is the scaled resolution rounding value. Note
- * that it is a 64-bit value. Since, when it is applied, we are already
- * in jiffies (albit scaled), it is nothing but the bits we will shift
- * off.
- */
-#define USEC_ROUND (u64)(((u64)1 << USEC_JIFFIE_SC) - 1)
/*
* The maximum jiffie value is (MAX_INT >> 1). Here we translate that
* into seconds. The 64-bit case will overflow if we are not careful,
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/rwsem.h>
+#include <linux/timer.h>
#include <linux/workqueue.h>
struct device;
const char *default_trigger; /* Trigger to use */
unsigned long blink_delay_on, blink_delay_off;
- struct delayed_work blink_work;
+ struct timer_list blink_timer;
int blink_brightness;
struct work_struct set_brightness_work;
enum mlx4_net_trans_rule_id id);
int mlx4_hw_rule_sz(struct mlx4_dev *dev, enum mlx4_net_trans_rule_id id);
+int mlx4_tunnel_steer_add(struct mlx4_dev *dev, unsigned char *addr,
+ int port, int qpn, u16 prio, u64 *reg_id);
+
void mlx4_sync_pkey_table(struct mlx4_dev *dev, int slave, int port,
int i, int val);
: 0;
}
-/**
+/*
* struct nand_sdr_timings - SDR NAND chip timings
*
* This struct defines the timing requirements of a SDR NAND chip.
}
/**
- * __dev_uc_unsync - Remove synchonized addresses from device
+ * __dev_uc_unsync - Remove synchronized addresses from device
* @dev: device to sync
* @unsync: function to call if address should be removed
*
}
/**
- * __dev_mc_unsync - Remove synchonized addresses from device
+ * __dev_mc_unsync - Remove synchronized addresses from device
* @dev: device to sync
* @unsync: function to call if address should be removed
*
#include <linux/in6.h>
#include <linux/wait.h>
#include <linux/list.h>
+#include <linux/static_key.h>
#include <uapi/linux/netfilter.h>
#ifdef CONFIG_NETFILTER
static inline int NF_DROP_GETERR(int verdict)
extern struct list_head nf_hooks[NFPROTO_NUMPROTO][NF_MAX_HOOKS];
-#if defined(CONFIG_JUMP_LABEL)
-#include <linux/static_key.h>
+#ifdef HAVE_JUMP_LABEL
extern struct static_key nf_hooks_needed[NFPROTO_NUMPROTO][NF_MAX_HOOKS];
+
static inline bool nf_hooks_active(u_int8_t pf, unsigned int hook)
{
if (__builtin_constant_p(pf) &&
extern void nfs_unlock_request(struct nfs_page *req);
extern void nfs_unlock_and_release_request(struct nfs_page *);
extern int nfs_page_group_lock(struct nfs_page *, bool);
+extern void nfs_page_group_lock_wait(struct nfs_page *);
extern void nfs_page_group_unlock(struct nfs_page *);
extern bool nfs_page_group_sync_on_bit(struct nfs_page *, unsigned int);
};
enum omap_ecc {
- /* 1-bit ECC calculation by GPMC, Error detection by Software */
- OMAP_ECC_HAM1_CODE_HW = 0,
+ /*
+ * 1-bit ECC: calculation and correction by SW
+ * ECC stored at end of spare area
+ */
+ OMAP_ECC_HAM1_CODE_SW = 0,
+
+ /*
+ * 1-bit ECC: calculation by GPMC, Error detection by Software
+ * ECC layout compatible with ROM code layout
+ */
+ OMAP_ECC_HAM1_CODE_HW,
/* 4-bit ECC calculation by GPMC, Error detection by Software */
OMAP_ECC_BCH4_CODE_HW_DETECTION_SW,
/* 4-bit ECC calculation by GPMC, Error detection by ELM */
#ifndef __RCAR_DU_H__
#define __RCAR_DU_H__
-#include <drm/drm_mode.h>
+#include <video/videomode.h>
enum rcar_du_output {
RCAR_DU_OUTPUT_DPAD0,
struct rcar_du_panel_data {
unsigned int width_mm; /* Panel width in mm */
unsigned int height_mm; /* Panel height in mm */
- struct drm_mode_modeinfo mode;
+ struct videomode mode;
};
struct rcar_du_connector_lvds_data {
struct mutex lock;
struct dev_power_governor *gov;
struct work_struct power_off_work;
- char *name;
+ const char *name;
unsigned int in_progress; /* Number of devices being suspended now */
atomic_t sd_count; /* Number of subdomains with power "on" */
enum gpd_status status; /* Current state of the domain */
* @linear_min_sel: Minimal selector for starting linear mapping
* @fixed_uV: Fixed voltage of rails.
* @ramp_delay: Time to settle down after voltage change (unit: uV/us)
+ * @linear_ranges: A constant table of possible voltage ranges.
+ * @n_linear_ranges: Number of entries in the @linear_ranges table.
* @volt_table: Voltage mapping table (if table based mapping)
*
* @vsel_reg: Register for selector when using regulator_regmap_X_voltage_
* bootloader then it will be enabled when the constraints are
* applied.
* @apply_uV: Apply the voltage constraint when initialising.
+ * @ramp_disable: Disable ramp delay when initialising or when setting voltage.
*
* @input_uV: Input voltage for regulator when supplied by another regulator.
*
* @context: the execution context this fence is a part of
* @seqno_ofs: the offset within @sync_buf
* @seqno: the sequence # to signal on
+ * @cond: fence wait condition
* @ops: the fence_ops for operations on this seqno fence
*
* This function initializes a struct seqno_fence with passed parameters,
* the device whose settings are being modified.
* @transfer: adds a message to the controller's transfer queue.
* @cleanup: frees controller-specific state
+ * @can_dma: determine whether this master supports DMA
* @queued: whether this master is providing an internal message queue
* @kworker: thread struct for message pump
* @kworker_task: pointer to task for message pump kworker thread
* @cur_msg: the currently in-flight message
* @cur_msg_prepared: spi_prepare_message was called for the currently
* in-flight message
+ * @cur_msg_mapped: message has been mapped for DMA
* @xfer_completion: used by core transfer_one_message()
* @busy: message pump is busy
* @running: message pump is running
* @cs_gpios: Array of GPIOs to use as chip select lines; one per CS
* number. Any individual value may be -ENOENT for CS lines that
* are not GPIOs (driven by the SPI controller itself).
+ * @dma_tx: DMA transmit channel
+ * @dma_rx: DMA receive channel
+ * @dummy_rx: dummy receive buffer for full-duplex devices
+ * @dummy_tx: dummy transmit buffer for full-duplex devices
*
* Each SPI master controller can communicate with one or more @spi_device
* children. These make a small bus, sharing MOSI, MISO and SCK signals
* addresses for each transfer buffer
* @complete: called to report transaction completions
* @context: the argument to complete() when it's called
+ * @frame_length: the total number of bytes in the message
* @actual_length: the total number of bytes that were transferred in all
* successful segments
* @status: zero for success, else negative errno
extern void tick_nohz_init(void);
extern void __tick_nohz_full_check(void);
+extern void tick_nohz_full_kick(void);
extern void tick_nohz_full_kick_cpu(int cpu);
-
-static inline void tick_nohz_full_kick(void)
-{
- tick_nohz_full_kick_cpu(smp_processor_id());
-}
-
extern void tick_nohz_full_kick_all(void);
extern void __tick_nohz_task_switch(struct task_struct *tsk);
#else
HCI_AUTO_CONN_ALWAYS,
HCI_AUTO_CONN_LINK_LOSS,
} auto_connect;
+
+ struct hci_conn *conn;
};
extern struct list_head hci_dev_list;
struct netns_ieee802154_lowpan {
struct netns_sysctl_lowpan sysctl;
struct netns_frags frags;
- int max_dsize;
};
#endif
struct ieee80211_regdomain {
struct rcu_head rcu_head;
u32 n_reg_rules;
- char alpha2[2];
+ char alpha2[3];
enum nl80211_dfs_regions dfs_region;
struct ieee80211_reg_rule reg_rules[];
};
return asoc ? asoc->assoc_id : 0;
}
+static inline enum sctp_sstat_state
+sctp_assoc_to_state(const struct sctp_association *asoc)
+{
+ /* SCTP's uapi always had SCTP_EMPTY(=0) as a dummy state, but we
+ * got rid of it in kernel space. Therefore SCTP_CLOSED et al
+ * start at =1 in user space, but actually as =0 in kernel space.
+ * Now that we cannot break user space and SCTP_EMPTY is exposed
+ * there, we need to fix it up with an ugly offset so as not to break
+ * applications. :(
+ */
+ return asoc->state + 1;
+}
+
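So, for example, an association in the kernel-internal closed state (0) is reported to user space as SCTP_CLOSED (1); a sketch:

	struct sctp_status status;

	status.sstat_state = sctp_assoc_to_state(asoc);	/* kernel 0 -> uapi 1 */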
/* Look up the association by its id. */
struct sctp_association *sctp_id2assoc(struct sock *sk, sctp_assoc_t id);
*/
if (sock_flag(sk, SOCK_RCVTSTAMP) ||
(sk->sk_tsflags & SOF_TIMESTAMPING_RX_SOFTWARE) ||
- (kt.tv64 &&
- (sk->sk_tsflags & SOF_TIMESTAMPING_SOFTWARE ||
- skb_shinfo(skb)->tx_flags & SKBTX_ANY_SW_TSTAMP)) ||
+ (kt.tv64 && sk->sk_tsflags & SOF_TIMESTAMPING_SOFTWARE) ||
(hwtstamps->hwtstamp.tv64 &&
(sk->sk_tsflags & SOF_TIMESTAMPING_RAW_HARDWARE)))
__sock_recv_timestamp(msg, sk, skb);
* This operation has to be synchronous, and return only when the
* reset is complete. In case of having had to resort to bus/cold
* reset implying a device disconnection, the call is allowed to
- * return inmediately.
+ * return immediately.
* NOTE: wimax_dev->mutex is NOT locked when this op is being
* called; however, wimax_dev->mutex_reset IS locked to ensure
* serialization of calls to wimax_reset().
.access = SNDRV_CTL_ELEM_ACCESS_TLV_READWRITE | \
SNDRV_CTL_ELEM_ACCESS_TLV_CALLBACK, \
.tlv.c = (snd_soc_bytes_tlv_callback), \
- .info = snd_soc_info_bytes_ext, \
+ .info = snd_soc_bytes_info_ext, \
.private_value = (unsigned long)&(struct soc_bytes_ext) \
{.max = xcount, .get = xhandler_get, .put = xhandler_put, } }
#define SOC_SINGLE_XR_SX(xname, xregbase, xregcount, xnbits, \
* @vec_nr: softirq vector number
*
* When used in combination with the softirq_exit tracepoint
- * we can determine the softirq handler runtine.
+ * we can determine the softirq handler routine.
*/
DEFINE_EVENT(softirq, softirq_entry,
* @vec_nr: softirq vector number
*
* When used in combination with the softirq_entry tracepoint
- * we can determine the softirq handler runtine.
+ * we can determine the softirq handler routine.
*/
DEFINE_EVENT(softirq, softirq_exit,
__SYSCALL(__NR_seccomp, sys_seccomp)
#define __NR_getrandom 278
__SYSCALL(__NR_getrandom, sys_getrandom)
+#define __NR_memfd_create 279
+__SYSCALL(__NR_memfd_create, sys_memfd_create)
#undef __NR_syscalls
-#define __NR_syscalls 279
+#define __NR_syscalls 280
/*
* All syscalls below here should go away really,
unsigned int handle;
};
-/**
- * A structure for getting buffer offset.
- *
- * @handle: a pointer to gem object created.
- * @pad: just padding to be 64-bit aligned.
- * @offset: relatived offset value of the memory region allocated.
- * - this value should be set by user.
- */
-struct drm_exynos_gem_map_off {
- unsigned int handle;
- unsigned int pad;
- uint64_t offset;
-};
-
-/**
- * A structure for mapping buffer.
- *
- * @handle: a handle to gem object created.
- * @pad: just padding to be 64-bit aligned.
- * @size: memory size to be mapped.
- * @mapped: having user virtual address mmaped.
- * - this variable would be filled by exynos gem module
- * of kernel side with user virtual address which is allocated
- * by do_mmap().
- */
-struct drm_exynos_gem_mmap {
- unsigned int handle;
- unsigned int pad;
- uint64_t size;
- uint64_t mapped;
-};
-
/**
 * A structure for gem information.
*
};
#define DRM_EXYNOS_GEM_CREATE 0x00
-#define DRM_EXYNOS_GEM_MAP_OFFSET 0x01
-#define DRM_EXYNOS_GEM_MMAP 0x02
/* Reserved 0x03 ~ 0x05 for exynos specific gem ioctl */
#define DRM_EXYNOS_GEM_GET 0x04
#define DRM_EXYNOS_VIDI_CONNECTION 0x07
#define DRM_IOCTL_EXYNOS_GEM_CREATE DRM_IOWR(DRM_COMMAND_BASE + \
DRM_EXYNOS_GEM_CREATE, struct drm_exynos_gem_create)
-#define DRM_IOCTL_EXYNOS_GEM_MAP_OFFSET DRM_IOWR(DRM_COMMAND_BASE + \
- DRM_EXYNOS_GEM_MAP_OFFSET, struct drm_exynos_gem_map_off)
-
-#define DRM_IOCTL_EXYNOS_GEM_MMAP DRM_IOWR(DRM_COMMAND_BASE + \
- DRM_EXYNOS_GEM_MMAP, struct drm_exynos_gem_mmap)
-
#define DRM_IOCTL_EXYNOS_GEM_GET DRM_IOWR(DRM_COMMAND_BASE + \
DRM_EXYNOS_GEM_GET, struct drm_exynos_gem_info)
#define DRM_RADEON_GEM_BUSY 0x2a
#define DRM_RADEON_GEM_VA 0x2b
#define DRM_RADEON_GEM_OP 0x2c
+#define DRM_RADEON_GEM_USERPTR 0x2d
#define DRM_IOCTL_RADEON_CP_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_RADEON_CP_INIT, drm_radeon_init_t)
#define DRM_IOCTL_RADEON_CP_START DRM_IO( DRM_COMMAND_BASE + DRM_RADEON_CP_START)
#define DRM_IOCTL_RADEON_GEM_BUSY DRM_IOWR(DRM_COMMAND_BASE + DRM_RADEON_GEM_BUSY, struct drm_radeon_gem_busy)
#define DRM_IOCTL_RADEON_GEM_VA DRM_IOWR(DRM_COMMAND_BASE + DRM_RADEON_GEM_VA, struct drm_radeon_gem_va)
#define DRM_IOCTL_RADEON_GEM_OP DRM_IOWR(DRM_COMMAND_BASE + DRM_RADEON_GEM_OP, struct drm_radeon_gem_op)
+#define DRM_IOCTL_RADEON_GEM_USERPTR DRM_IOWR(DRM_COMMAND_BASE + DRM_RADEON_GEM_USERPTR, struct drm_radeon_gem_userptr)
typedef struct drm_radeon_init {
enum {
#define RADEON_GEM_NO_BACKING_STORE (1 << 0)
#define RADEON_GEM_GTT_UC (1 << 1)
#define RADEON_GEM_GTT_WC (1 << 2)
+/* BO is expected to be accessed by the CPU */
+#define RADEON_GEM_CPU_ACCESS (1 << 3)
+/* CPU access is not expected to work for this BO */
+#define RADEON_GEM_NO_CPU_ACCESS (1 << 4)
struct drm_radeon_gem_create {
uint64_t size;
uint32_t flags;
};
+/*
+ * This is not a reliable API and you should expect it to fail for any
+ * number of reasons, and you should have a fallback path that does not
+ * use userptr to perform any operation.
+ */
+#define RADEON_GEM_USERPTR_READONLY (1 << 0)
+#define RADEON_GEM_USERPTR_ANONONLY (1 << 1)
+#define RADEON_GEM_USERPTR_VALIDATE (1 << 2)
+#define RADEON_GEM_USERPTR_REGISTER (1 << 3)
+
+struct drm_radeon_gem_userptr {
+ uint64_t addr;
+ uint64_t size;
+ uint32_t flags;
+ uint32_t handle;
+};
+
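From user space the new interface would be driven roughly like this (a sketch; drmCommandWriteRead() is the usual libdrm entry point, and upload_via_staging_buffer() is a hypothetical fallback, in the spirit of the warning above):

	struct drm_radeon_gem_userptr args = {
		.addr  = (uintptr_t)ptr,
		.size  = len,
		.flags = RADEON_GEM_USERPTR_ANONONLY |
			 RADEON_GEM_USERPTR_REGISTER,
	};

	if (drmCommandWriteRead(fd, DRM_RADEON_GEM_USERPTR,
				&args, sizeof(args)) != 0)
		upload_via_staging_buffer(ptr, len);	/* fall back, as advised */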
#define RADEON_TILING_MACRO 0x1
#define RADEON_TILING_MICRO 0x2
#define RADEON_TILING_SWAP_16BIT 0x4
};
/* drm_radeon_cs_reloc.flags */
+#define RADEON_RELOC_PRIO_MASK (0xf << 0)
struct drm_radeon_cs_reloc {
uint32_t handle;
#define __VMWGFX_DRM_H__
#ifndef __KERNEL__
-#include <drm.h>
+#include <drm/drm.h>
#endif
#define DRM_VMW_MAX_SURFACE_FACES 6
header-y += mdio.h
header-y += media.h
header-y += mei.h
+header-y += memfd.h
header-y += mempolicy.h
header-y += meye.h
header-y += mic_common.h
header-y += unistd.h
header-y += unix_diag.h
header-y += usbdevice_fs.h
+header-y += usbip.h
header-y += utime.h
header-y += utsname.h
header-y += uuid.h
#define INPUT_PROP_BUTTONPAD 0x02 /* has button(s) under pad */
#define INPUT_PROP_SEMI_MT 0x03 /* touch rectangle only */
#define INPUT_PROP_TOPBUTTONPAD 0x04 /* softbuttons at top of pad */
+#define INPUT_PROP_POINTING_STICK 0x05 /* is a pointing stick */
#define INPUT_PROP_MAX 0x1f
#define INPUT_PROP_CNT (INPUT_PROP_MAX + 1)
#ifndef _UAPI_LINUX_XATTR_H
#define _UAPI_LINUX_XATTR_H
-#ifdef __UAPI_DEF_XATTR
+#if __UAPI_DEF_XATTR
#define __USE_KERNEL_XATTR_DEFS
#define XATTR_CREATE 0x1 /* set value, fail if attr already exists */
#include <linux/videodev2.h>
#include <linux/bitmap.h>
#include <linux/fb.h>
+#include <media/v4l2-mediabus.h>
struct ipu_soc;
u8 vsync_pin;
};
+/*
+ * Enumeration of CSI destinations
+ */
+enum ipu_csi_dest {
+ IPU_CSI_DEST_IDMAC, /* to memory via SMFC */
+ IPU_CSI_DEST_IC, /* to Image Converter */
+ IPU_CSI_DEST_VDIC, /* to VDIC */
+};
+
+/*
+ * Enumeration of IPU rotation modes
+ */
+enum ipu_rotate_mode {
+ IPU_ROTATE_NONE = 0,
+ IPU_ROTATE_VERT_FLIP,
+ IPU_ROTATE_HORIZ_FLIP,
+ IPU_ROTATE_180,
+ IPU_ROTATE_90_RIGHT,
+ IPU_ROTATE_90_RIGHT_VFLIP,
+ IPU_ROTATE_90_RIGHT_HFLIP,
+ IPU_ROTATE_90_LEFT,
+};
+
enum ipu_color_space {
IPUV3_COLORSPACE_RGB,
IPUV3_COLORSPACE_YUV,
IPU_IRQ_EOS = 192,
};
+/*
+ * Enumeration of IDMAC channels
+ */
+#define IPUV3_CHANNEL_CSI0 0
+#define IPUV3_CHANNEL_CSI1 1
+#define IPUV3_CHANNEL_CSI2 2
+#define IPUV3_CHANNEL_CSI3 3
+#define IPUV3_CHANNEL_VDI_MEM_IC_VF 5
+#define IPUV3_CHANNEL_MEM_IC_PP 11
+#define IPUV3_CHANNEL_MEM_IC_PRP_VF 12
+#define IPUV3_CHANNEL_G_MEM_IC_PRP_VF 14
+#define IPUV3_CHANNEL_G_MEM_IC_PP 15
+#define IPUV3_CHANNEL_IC_PRP_ENC_MEM 20
+#define IPUV3_CHANNEL_IC_PRP_VF_MEM 21
+#define IPUV3_CHANNEL_IC_PP_MEM 22
+#define IPUV3_CHANNEL_MEM_BG_SYNC 23
+#define IPUV3_CHANNEL_MEM_BG_ASYNC 24
+#define IPUV3_CHANNEL_MEM_FG_SYNC 27
+#define IPUV3_CHANNEL_MEM_DC_SYNC 28
+#define IPUV3_CHANNEL_MEM_FG_ASYNC 29
+#define IPUV3_CHANNEL_MEM_FG_SYNC_ALPHA 31
+#define IPUV3_CHANNEL_MEM_DC_ASYNC 41
+#define IPUV3_CHANNEL_MEM_ROT_ENC 45
+#define IPUV3_CHANNEL_MEM_ROT_VF 46
+#define IPUV3_CHANNEL_MEM_ROT_PP 47
+#define IPUV3_CHANNEL_ROT_ENC_MEM 48
+#define IPUV3_CHANNEL_ROT_VF_MEM 49
+#define IPUV3_CHANNEL_ROT_PP_MEM 50
+#define IPUV3_CHANNEL_MEM_BG_SYNC_ALPHA 51
+
int ipu_map_irq(struct ipu_soc *ipu, int irq);
int ipu_idmac_channel_irq(struct ipu_soc *ipu, struct ipuv3_channel *channel,
enum ipu_channel_irq irq);
#define IPU_IRQ_VSYNC_PRE_0 (448 + 14)
#define IPU_IRQ_VSYNC_PRE_1 (448 + 15)
+/*
+ * IPU Common functions
+ */
+void ipu_set_csi_src_mux(struct ipu_soc *ipu, int csi_id, bool mipi_csi2);
+void ipu_set_ic_src_mux(struct ipu_soc *ipu, int csi_id, bool vdi);
+void ipu_dump(struct ipu_soc *ipu);
+
/*
* IPU Image DMA Controller (idmac) functions
*/
int ipu_idmac_enable_channel(struct ipuv3_channel *channel);
int ipu_idmac_disable_channel(struct ipuv3_channel *channel);
+void ipu_idmac_enable_watermark(struct ipuv3_channel *channel, bool enable);
+int ipu_idmac_lock_enable(struct ipuv3_channel *channel, int num_bursts);
int ipu_idmac_wait_busy(struct ipuv3_channel *channel, int ms);
void ipu_idmac_set_double_buffer(struct ipuv3_channel *channel,
bool doublebuffer);
int ipu_idmac_get_current_buffer(struct ipuv3_channel *channel);
+bool ipu_idmac_buffer_is_ready(struct ipuv3_channel *channel, u32 buf_num);
void ipu_idmac_select_buffer(struct ipuv3_channel *channel, u32 buf_num);
+void ipu_idmac_clear_buffer(struct ipuv3_channel *channel, u32 buf_num);
+
+/*
+ * IPU Channel Parameter Memory (cpmem) functions
+ */
+struct ipu_rgb {
+ struct fb_bitfield red;
+ struct fb_bitfield green;
+ struct fb_bitfield blue;
+ struct fb_bitfield transp;
+ int bits_per_pixel;
+};
+
+struct ipu_image {
+ struct v4l2_pix_format pix;
+ struct v4l2_rect rect;
+ dma_addr_t phys0;
+ dma_addr_t phys1;
+};
+
+void ipu_cpmem_zero(struct ipuv3_channel *ch);
+void ipu_cpmem_set_resolution(struct ipuv3_channel *ch, int xres, int yres);
+void ipu_cpmem_set_stride(struct ipuv3_channel *ch, int stride);
+void ipu_cpmem_set_high_priority(struct ipuv3_channel *ch);
+void ipu_cpmem_set_buffer(struct ipuv3_channel *ch, int bufnum, dma_addr_t buf);
+void ipu_cpmem_interlaced_scan(struct ipuv3_channel *ch, int stride);
+void ipu_cpmem_set_axi_id(struct ipuv3_channel *ch, u32 id);
+void ipu_cpmem_set_burstsize(struct ipuv3_channel *ch, int burstsize);
+void ipu_cpmem_set_block_mode(struct ipuv3_channel *ch);
+void ipu_cpmem_set_rotation(struct ipuv3_channel *ch,
+ enum ipu_rotate_mode rot);
+int ipu_cpmem_set_format_rgb(struct ipuv3_channel *ch,
+ const struct ipu_rgb *rgb);
+int ipu_cpmem_set_format_passthrough(struct ipuv3_channel *ch, int width);
+void ipu_cpmem_set_yuv_interleaved(struct ipuv3_channel *ch, u32 pixel_format);
+void ipu_cpmem_set_yuv_planar_full(struct ipuv3_channel *ch,
+ u32 pixel_format, int stride,
+ int u_offset, int v_offset);
+void ipu_cpmem_set_yuv_planar(struct ipuv3_channel *ch,
+ u32 pixel_format, int stride, int height);
+int ipu_cpmem_set_fmt(struct ipuv3_channel *ch, u32 drm_fourcc);
+int ipu_cpmem_set_image(struct ipuv3_channel *ch, struct ipu_image *image);
+void ipu_cpmem_dump(struct ipuv3_channel *ch);
/*
* IPU Display Controller (dc) functions
/*
* IPU CMOS Sensor Interface (csi) functions
*/
-int ipu_csi_enable(struct ipu_soc *ipu, int csi);
-int ipu_csi_disable(struct ipu_soc *ipu, int csi);
+struct ipu_csi;
+int ipu_csi_init_interface(struct ipu_csi *csi,
+ struct v4l2_mbus_config *mbus_cfg,
+ struct v4l2_mbus_framefmt *mbus_fmt);
+bool ipu_csi_is_interlaced(struct ipu_csi *csi);
+void ipu_csi_get_window(struct ipu_csi *csi, struct v4l2_rect *w);
+void ipu_csi_set_window(struct ipu_csi *csi, struct v4l2_rect *w);
+void ipu_csi_set_test_generator(struct ipu_csi *csi, bool active,
+ u32 r_value, u32 g_value, u32 b_value,
+ u32 pix_clk);
+int ipu_csi_set_mipi_datatype(struct ipu_csi *csi, u32 vc,
+ struct v4l2_mbus_framefmt *mbus_fmt);
+int ipu_csi_set_skip_smfc(struct ipu_csi *csi, u32 skip,
+ u32 max_ratio, u32 id);
+int ipu_csi_set_dest(struct ipu_csi *csi, enum ipu_csi_dest csi_dest);
+int ipu_csi_enable(struct ipu_csi *csi);
+int ipu_csi_disable(struct ipu_csi *csi);
+struct ipu_csi *ipu_csi_get(struct ipu_soc *ipu, int id);
+void ipu_csi_put(struct ipu_csi *csi);
+void ipu_csi_dump(struct ipu_csi *csi);
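The CSI interface thus becomes handle-based; a sketch of the expected call sequence (error handling trimmed, and mbus_cfg/mbus_fmt assumed to be filled in elsewhere):

	struct ipu_csi *csi = ipu_csi_get(ipu, 0);

	ipu_csi_init_interface(csi, &mbus_cfg, &mbus_fmt);
	ipu_csi_set_dest(csi, IPU_CSI_DEST_IDMAC);
	ipu_csi_enable(csi);
	/* ... capture frames via the SMFC/IDMAC ... */
	ipu_csi_disable(csi);
	ipu_csi_put(csi);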
/*
- * IPU Sensor Multiple FIFO Controller (SMFC) functions
+ * IPU Image Converter (ic) functions
*/
-int ipu_smfc_enable(struct ipu_soc *ipu);
-int ipu_smfc_disable(struct ipu_soc *ipu);
-int ipu_smfc_map_channel(struct ipu_soc *ipu, int channel, int csi_id, int mipi_id);
-int ipu_smfc_set_burstsize(struct ipu_soc *ipu, int channel, int burstsize);
-
-#define IPU_CPMEM_WORD(word, ofs, size) ((((word) * 160 + (ofs)) << 8) | (size))
-
-#define IPU_FIELD_UBO IPU_CPMEM_WORD(0, 46, 22)
-#define IPU_FIELD_VBO IPU_CPMEM_WORD(0, 68, 22)
-#define IPU_FIELD_IOX IPU_CPMEM_WORD(0, 90, 4)
-#define IPU_FIELD_RDRW IPU_CPMEM_WORD(0, 94, 1)
-#define IPU_FIELD_SO IPU_CPMEM_WORD(0, 113, 1)
-#define IPU_FIELD_SLY IPU_CPMEM_WORD(1, 102, 14)
-#define IPU_FIELD_SLUV IPU_CPMEM_WORD(1, 128, 14)
-
-#define IPU_FIELD_XV IPU_CPMEM_WORD(0, 0, 10)
-#define IPU_FIELD_YV IPU_CPMEM_WORD(0, 10, 9)
-#define IPU_FIELD_XB IPU_CPMEM_WORD(0, 19, 13)
-#define IPU_FIELD_YB IPU_CPMEM_WORD(0, 32, 12)
-#define IPU_FIELD_NSB_B IPU_CPMEM_WORD(0, 44, 1)
-#define IPU_FIELD_CF IPU_CPMEM_WORD(0, 45, 1)
-#define IPU_FIELD_SX IPU_CPMEM_WORD(0, 46, 12)
-#define IPU_FIELD_SY IPU_CPMEM_WORD(0, 58, 11)
-#define IPU_FIELD_NS IPU_CPMEM_WORD(0, 69, 10)
-#define IPU_FIELD_SDX IPU_CPMEM_WORD(0, 79, 7)
-#define IPU_FIELD_SM IPU_CPMEM_WORD(0, 86, 10)
-#define IPU_FIELD_SCC IPU_CPMEM_WORD(0, 96, 1)
-#define IPU_FIELD_SCE IPU_CPMEM_WORD(0, 97, 1)
-#define IPU_FIELD_SDY IPU_CPMEM_WORD(0, 98, 7)
-#define IPU_FIELD_SDRX IPU_CPMEM_WORD(0, 105, 1)
-#define IPU_FIELD_SDRY IPU_CPMEM_WORD(0, 106, 1)
-#define IPU_FIELD_BPP IPU_CPMEM_WORD(0, 107, 3)
-#define IPU_FIELD_DEC_SEL IPU_CPMEM_WORD(0, 110, 2)
-#define IPU_FIELD_DIM IPU_CPMEM_WORD(0, 112, 1)
-#define IPU_FIELD_BNDM IPU_CPMEM_WORD(0, 114, 3)
-#define IPU_FIELD_BM IPU_CPMEM_WORD(0, 117, 2)
-#define IPU_FIELD_ROT IPU_CPMEM_WORD(0, 119, 1)
-#define IPU_FIELD_HF IPU_CPMEM_WORD(0, 120, 1)
-#define IPU_FIELD_VF IPU_CPMEM_WORD(0, 121, 1)
-#define IPU_FIELD_THE IPU_CPMEM_WORD(0, 122, 1)
-#define IPU_FIELD_CAP IPU_CPMEM_WORD(0, 123, 1)
-#define IPU_FIELD_CAE IPU_CPMEM_WORD(0, 124, 1)
-#define IPU_FIELD_FW IPU_CPMEM_WORD(0, 125, 13)
-#define IPU_FIELD_FH IPU_CPMEM_WORD(0, 138, 12)
-#define IPU_FIELD_EBA0 IPU_CPMEM_WORD(1, 0, 29)
-#define IPU_FIELD_EBA1 IPU_CPMEM_WORD(1, 29, 29)
-#define IPU_FIELD_ILO IPU_CPMEM_WORD(1, 58, 20)
-#define IPU_FIELD_NPB IPU_CPMEM_WORD(1, 78, 7)
-#define IPU_FIELD_PFS IPU_CPMEM_WORD(1, 85, 4)
-#define IPU_FIELD_ALU IPU_CPMEM_WORD(1, 89, 1)
-#define IPU_FIELD_ALBM IPU_CPMEM_WORD(1, 90, 3)
-#define IPU_FIELD_ID IPU_CPMEM_WORD(1, 93, 2)
-#define IPU_FIELD_TH IPU_CPMEM_WORD(1, 95, 7)
-#define IPU_FIELD_SL IPU_CPMEM_WORD(1, 102, 14)
-#define IPU_FIELD_WID0 IPU_CPMEM_WORD(1, 116, 3)
-#define IPU_FIELD_WID1 IPU_CPMEM_WORD(1, 119, 3)
-#define IPU_FIELD_WID2 IPU_CPMEM_WORD(1, 122, 3)
-#define IPU_FIELD_WID3 IPU_CPMEM_WORD(1, 125, 3)
-#define IPU_FIELD_OFS0 IPU_CPMEM_WORD(1, 128, 5)
-#define IPU_FIELD_OFS1 IPU_CPMEM_WORD(1, 133, 5)
-#define IPU_FIELD_OFS2 IPU_CPMEM_WORD(1, 138, 5)
-#define IPU_FIELD_OFS3 IPU_CPMEM_WORD(1, 143, 5)
-#define IPU_FIELD_SXYS IPU_CPMEM_WORD(1, 148, 1)
-#define IPU_FIELD_CRE IPU_CPMEM_WORD(1, 149, 1)
-#define IPU_FIELD_DEC_SEL2 IPU_CPMEM_WORD(1, 150, 1)
-
-struct ipu_cpmem_word {
- u32 data[5];
- u32 res[3];
-};
-
-struct ipu_ch_param {
- struct ipu_cpmem_word word[2];
-};
-
-void ipu_ch_param_write_field(struct ipu_ch_param __iomem *base, u32 wbs, u32 v);
-u32 ipu_ch_param_read_field(struct ipu_ch_param __iomem *base, u32 wbs);
-struct ipu_ch_param __iomem *ipu_get_cpmem(struct ipuv3_channel *channel);
-void ipu_ch_param_dump(struct ipu_ch_param __iomem *p);
-
-static inline void ipu_ch_param_zero(struct ipu_ch_param __iomem *p)
-{
- int i;
- void __iomem *base = p;
-
- for (i = 0; i < sizeof(*p) / sizeof(u32); i++)
- writel(0, base + i * sizeof(u32));
-}
-
-static inline void ipu_cpmem_set_buffer(struct ipu_ch_param __iomem *p,
- int bufnum, dma_addr_t buf)
-{
- if (bufnum)
- ipu_ch_param_write_field(p, IPU_FIELD_EBA1, buf >> 3);
- else
- ipu_ch_param_write_field(p, IPU_FIELD_EBA0, buf >> 3);
-}
-
-static inline void ipu_cpmem_set_resolution(struct ipu_ch_param __iomem *p,
- int xres, int yres)
-{
- ipu_ch_param_write_field(p, IPU_FIELD_FW, xres - 1);
- ipu_ch_param_write_field(p, IPU_FIELD_FH, yres - 1);
-}
-
-static inline void ipu_cpmem_set_stride(struct ipu_ch_param __iomem *p,
- int stride)
-{
- ipu_ch_param_write_field(p, IPU_FIELD_SLY, stride - 1);
-}
-
-void ipu_cpmem_set_high_priority(struct ipuv3_channel *channel);
-
-struct ipu_rgb {
- struct fb_bitfield red;
- struct fb_bitfield green;
- struct fb_bitfield blue;
- struct fb_bitfield transp;
- int bits_per_pixel;
-};
-
-struct ipu_image {
- struct v4l2_pix_format pix;
- struct v4l2_rect rect;
- dma_addr_t phys;
+enum ipu_ic_task {
+ IC_TASK_ENCODER,
+ IC_TASK_VIEWFINDER,
+ IC_TASK_POST_PROCESSOR,
+ IC_NUM_TASKS,
};
-int ipu_cpmem_set_format_passthrough(struct ipu_ch_param __iomem *p,
- int width);
-
-int ipu_cpmem_set_format_rgb(struct ipu_ch_param __iomem *,
- const struct ipu_rgb *rgb);
-
-static inline void ipu_cpmem_interlaced_scan(struct ipu_ch_param *p,
- int stride)
-{
- ipu_ch_param_write_field(p, IPU_FIELD_SO, 1);
- ipu_ch_param_write_field(p, IPU_FIELD_ILO, stride / 8);
- ipu_ch_param_write_field(p, IPU_FIELD_SLY, (stride * 2) - 1);
-};
+struct ipu_ic;
+int ipu_ic_task_init(struct ipu_ic *ic,
+ int in_width, int in_height,
+ int out_width, int out_height,
+ enum ipu_color_space in_cs,
+ enum ipu_color_space out_cs);
+int ipu_ic_task_graphics_init(struct ipu_ic *ic,
+ enum ipu_color_space in_g_cs,
+ bool galpha_en, u32 galpha,
+ bool colorkey_en, u32 colorkey);
+void ipu_ic_task_enable(struct ipu_ic *ic);
+void ipu_ic_task_disable(struct ipu_ic *ic);
+int ipu_ic_task_idma_init(struct ipu_ic *ic, struct ipuv3_channel *channel,
+ u32 width, u32 height, int burst_size,
+ enum ipu_rotate_mode rot);
+int ipu_ic_enable(struct ipu_ic *ic);
+int ipu_ic_disable(struct ipu_ic *ic);
+struct ipu_ic *ipu_ic_get(struct ipu_soc *ipu, enum ipu_ic_task task);
+void ipu_ic_put(struct ipu_ic *ic);
+void ipu_ic_dump(struct ipu_ic *ic);
-void ipu_cpmem_set_yuv_planar(struct ipu_ch_param __iomem *p, u32 pixel_format,
- int stride, int height);
-void ipu_cpmem_set_yuv_interleaved(struct ipu_ch_param __iomem *p,
- u32 pixel_format);
-void ipu_cpmem_set_yuv_planar_full(struct ipu_ch_param __iomem *p,
- u32 pixel_format, int stride, int u_offset, int v_offset);
-int ipu_cpmem_set_fmt(struct ipu_ch_param __iomem *cpmem, u32 pixelformat);
-int ipu_cpmem_set_image(struct ipu_ch_param __iomem *cpmem,
- struct ipu_image *image);
+/*
+ * IPU Sensor Multiple FIFO Controller (SMFC) functions
+ */
+struct ipu_smfc *ipu_smfc_get(struct ipu_soc *ipu, unsigned int chno);
+void ipu_smfc_put(struct ipu_smfc *smfc);
+int ipu_smfc_enable(struct ipu_smfc *smfc);
+int ipu_smfc_disable(struct ipu_smfc *smfc);
+int ipu_smfc_map_channel(struct ipu_smfc *smfc, int csi_id, int mipi_id);
+int ipu_smfc_set_burstsize(struct ipu_smfc *smfc, int burstsize);
+int ipu_smfc_set_watermark(struct ipu_smfc *smfc, u32 set_level, u32 clr_level);
enum ipu_color_space ipu_drm_fourcc_to_colorspace(u32 drm_fourcc);
enum ipu_color_space ipu_pixelformat_to_colorspace(u32 pixelformat);
-
-static inline void ipu_cpmem_set_burstsize(struct ipu_ch_param __iomem *p,
- int burstsize)
-{
- ipu_ch_param_write_field(p, IPU_FIELD_NPB, burstsize - 1);
-};
+enum ipu_color_space ipu_mbus_code_to_colorspace(u32 mbus_code);
+int ipu_stride_to_bytes(u32 pixel_stride, u32 pixelformat);
+bool ipu_pixelformat_is_planar(u32 pixelformat);
+int ipu_degrees_to_rot_mode(enum ipu_rotate_mode *mode, int degrees,
+ bool hflip, bool vflip);
+int ipu_rot_mode_to_degrees(int *degrees, enum ipu_rotate_mode mode,
+ bool hflip, bool vflip);
struct ipu_client_platformdata {
int csi;
/* operation as Dom0 is supported */
#define XENFEAT_dom0 11
+/* Xen also maps grant references at pfn = mfn */
+#define XENFEAT_grant_map_identity 12
+
#define XENFEAT_NR_SUBMAPS 1
#endif /* __XEN_PUBLIC_FEATURES_H__ */
css_get(&cgrp->self);
}
+static bool cgroup_tryget(struct cgroup *cgrp)
+{
+ return css_tryget(&cgrp->self);
+}
+
static void cgroup_put(struct cgroup *cgrp)
{
css_put(&cgrp->self);
* protection against removal. Ensure @cgrp stays accessible and
* break the active_ref protection.
*/
- cgroup_get(cgrp);
+ if (!cgroup_tryget(cgrp))
+ return NULL;
kernfs_break_active_protection(kn);
mutex_lock(&cgroup_mutex);
{
struct cftype *cft;
- for (cft = cfts; cft && cft->name[0] != '\0'; cft++)
- cft->flags |= __CFTYPE_NOT_ON_DFL;
+ /*
+ * If legacy_files_on_dfl, we want to show the legacy files on the
+ * dfl hierarchy, but only if the target subsystem hasn't been updated
+ * for the dfl hierarchy yet.
+ */
+ if (!cgroup_legacy_files_on_dfl ||
+ ss->dfl_cftypes != ss->legacy_cftypes) {
+ for (cft = cfts; cft && cft->name[0] != '\0'; cft++)
+ cft->flags |= __CFTYPE_NOT_ON_DFL;
+ }
+
return cgroup_add_cftypes(ss, cfts);
}
/* cgroup release path */
cgroup_idr_remove(&cgrp->root->cgroup_idr, cgrp->id);
cgrp->id = -1;
+
+ /*
+ * There are two control paths which try to determine
+ * cgroup from dentry without going through kernfs -
+ * cgroupstats_build() and css_tryget_online_from_dir().
+ * Those are supported by RCU protecting clearing of
+ * cgrp->kn->priv backpointer.
+ */
+ RCU_INIT_POINTER(*(void __rcu __force **)&cgrp->kn->priv, NULL);
}
mutex_unlock(&cgroup_mutex);
struct cftype *base_files;
int ssid, ret;
+ /*
+ * Do not accept '\n' to prevent making /proc/<pid>/cgroup
+ * unparsable.
+ */
+ if (strchr(name, '\n'))
+ return -EINVAL;
+
parent = cgroup_kn_lock_live(parent_kn);
if (!parent)
return -ENODEV;
cgroup_kn_unlock(kn);
- /*
- * There are two control paths which try to determine cgroup from
- * dentry without going through kernfs - cgroupstats_build() and
- * css_tryget_online_from_dir(). Those are supported by RCU
- * protecting clearing of cgrp->kn->priv backpointer, which should
- * happen after all files under it have been removed.
- */
- if (!ret)
- RCU_INIT_POINTER(*(void __rcu __force **)&kn->priv, NULL);
-
cgroup_put(cgrp);
return ret;
}
/*
* This path doesn't originate from kernfs and @kn could already
* have been or be removed at any point. @kn->priv is RCU
- * protected for this access. See cgroup_rmdir() for details.
+ * protected for this access. See css_release_work_fn() for details.
*/
cgrp = rcu_dereference(kn->priv);
if (cgrp)
ret = hrtimer_nanosleep_restart(restart);
set_fs(oldfs);
- if (ret) {
+ if (ret == -ERESTART_RESTARTBLOCK) {
rmtp = restart->nanosleep.compat_rmtp;
if (rmtp && compat_put_timespec(&rmt, rmtp))
HRTIMER_MODE_REL, CLOCK_MONOTONIC);
set_fs(oldfs);
- if (ret) {
+ /*
+ * hrtimer_nanosleep() can only return 0 or
+ * -ERESTART_RESTARTBLOCK here because:
+ *
+ * - we call it with HRTIMER_MODE_REL and therefore exclude the
+ * -ERESTARTNOHAND return path.
+ *
+ * - we supply the rmtp argument from the task stack (due to
+ * the necessary compat conversion), so the update cannot
+ * fail, which excludes the -EFAULT return path as well. If
+ * it fails nevertheless we have a bigger problem and won't
+ * reach this place anymore.
+ *
+ * - if the return value is 0, we do not have to update rmtp
+ * because there is no remaining time.
+ *
+ * We check for -ERESTART_RESTARTBLOCK nevertheless if the
+ * core implementation decides to return random nonsense.
+ */
+ if (ret == -ERESTART_RESTARTBLOCK) {
struct restart_block *restart
= ¤t_thread_info()->restart_block;
if (rmtp && compat_put_timespec(&rmt, rmtp))
return -EFAULT;
}
-
return ret;
}
#include <linux/cgroup.h>
#include <linux/module.h>
#include <linux/mman.h>
+#include <linux/compat.h>
#include "internal.h"
return 0;
}
+#ifdef CONFIG_COMPAT
+static long perf_compat_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ switch (_IOC_NR(cmd)) {
+ case _IOC_NR(PERF_EVENT_IOC_SET_FILTER):
+ case _IOC_NR(PERF_EVENT_IOC_ID):
+ /* Fix up pointer size (usually 4 -> 8 in the 32-on-64-bit case) */
+ if (_IOC_SIZE(cmd) == sizeof(compat_uptr_t)) {
+ cmd &= ~IOCSIZE_MASK;
+ cmd |= sizeof(void *) << IOCSIZE_SHIFT;
+ }
+ break;
+ }
+ return perf_ioctl(file, cmd, arg);
+}
+#else
+# define perf_compat_ioctl NULL
+#endif
+
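As a worked example: PERF_EVENT_IOC_SET_FILTER is defined as _IOW('$', 6, char *), so a 32-bit task encodes _IOC_SIZE(cmd) == 4. On a 64-bit kernel the fixup above rewrites that size field to sizeof(void *) == 8, making the command match the native definition before perf_ioctl() compares it.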
int perf_event_task_enable(void)
{
struct perf_event *event;
.read = perf_read,
.poll = perf_poll,
.unlocked_ioctl = perf_ioctl,
- .compat_ioctl = perf_ioctl,
+ .compat_ioctl = perf_compat_ioctl,
.mmap = perf_mmap,
.fasync = perf_fasync,
};
* shared futexes. We need to compare the keys:
*/
if (match_futex(&q.key, &key2)) {
+ queue_unlock(hb);
ret = -EINVAL;
goto out_put_keys;
}
chip->irq_eoi(&desc->irq_data);
raw_spin_unlock(&desc->lock);
}
+EXPORT_SYMBOL_GPL(handle_fasteoi_irq);
/**
* handle_edge_irq - edge type IRQ handler
*/
static int kcmp_ptr(void *v1, void *v2, enum kcmp_type type)
{
- long ret;
+ long t1, t2;
- ret = kptr_obfuscate((long)v1, type) - kptr_obfuscate((long)v2, type);
+ t1 = kptr_obfuscate((long)v1, type);
+ t2 = kptr_obfuscate((long)v2, type);
- return (ret < 0) | ((ret > 0) << 1);
+ return (t1 < t2) | ((t1 > t2) << 1);
}
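
The rewrite fixes a subtle bug: the sign of t1 - t2 is meaningless if the subtraction overflows, whereas direct comparisons are always safe. The return encoding is unchanged:

	/* t1 == t2  ->  0
	 * t1 <  t2  ->  1
	 * t1 >  t2  ->  2 */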
/* The caller must have pinned the task */
char __weak kexec_purgatory[0];
size_t __weak kexec_purgatory_size = 0;
+#ifdef CONFIG_KEXEC_FILE
static int kexec_calculate_store_digests(struct kimage *image);
+#endif
/* Location of the reserved area for the crash kernel */
struct resource crashk_res = {
return ret;
}
+#ifdef CONFIG_KEXEC_FILE
static int copy_file_from_fd(int fd, void **buf, unsigned long *buf_len)
{
struct fd f = fdget(fd);
kfree(image);
return ret;
}
+#else /* CONFIG_KEXEC_FILE */
+static inline void kimage_file_post_load_cleanup(struct kimage *image) { }
+#endif /* CONFIG_KEXEC_FILE */
static int kimage_is_destination_range(struct kimage *image,
unsigned long start,
}
#endif
+#ifdef CONFIG_KEXEC_FILE
SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
unsigned long, cmdline_len, const char __user *, cmdline_ptr,
unsigned long, flags)
return ret;
}
+#endif /* CONFIG_KEXEC_FILE */
+
void crash_kexec(struct pt_regs *regs)
{
/* Take the kexec_mutex here to prevent sys_kexec_load
subsys_initcall(crash_save_vmcoreinfo_init);
+#ifdef CONFIG_KEXEC_FILE
static int __kexec_add_segment(struct kimage *image, char *buf,
unsigned long bufsz, unsigned long mem,
unsigned long memsz)
return 0;
}
+#endif /* CONFIG_KEXEC_FILE */
/*
* Move into place and start executing a preloaded standalone
unsigned long hash, flags = 0;
struct kretprobe_instance *ri;
- /*TODO: consider to only swap the RA after the last pre_handler fired */
+ /*
+ * To avoid deadlocks, prohibit return probing in NMI contexts,
+ * just skip the probe and increase the (inexact) 'nmissed'
+ * statistical counter, so that the user is informed that
+ * something happened:
+ */
+ if (unlikely(in_nmi())) {
+ rp->nmissed++;
+ return 0;
+ }
+
+ /* TODO: consider swapping the RA only after the last pre_handler has fired */
hash = hash_ptr(current, KPROBE_HASH_BITS);
raw_spin_lock_irqsave(&rp->lock, flags);
if (!hlist_empty(&rp->free_instances)) {
#ifdef CONFIG_SUSPEND
/* kernel/power/suspend.c */
+extern const char *pm_labels[];
extern const char *pm_states[];
extern int suspend_devices_and_enter(suspend_state_t state);
#include "power.h"
-static const char *pm_labels[] = { "mem", "standby", "freeze", };
+const char *pm_labels[] = { "mem", "standby", "freeze", NULL };
const char *pm_states[PM_SUSPEND_MAX];
static const struct platform_suspend_ops *suspend_ops;
* at startup time. They're normally disabled, for faster boot and because
* we can't know which states really work on this particular system.
*/
-static suspend_state_t test_state __initdata = PM_SUSPEND_ON;
+static const char *test_state_label __initdata;
static char warn_bad_state[] __initdata =
KERN_WARNING "PM: can't test '%s' suspend state\n";
static int __init setup_test_suspend(char *value)
{
- suspend_state_t i;
+ int i;
/* "=mem" ==> "mem" */
value++;
- for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++)
- if (!strcmp(pm_states[i], value)) {
- test_state = i;
+ for (i = 0; pm_labels[i]; i++)
+ if (!strcmp(pm_labels[i], value)) {
+ test_state_label = pm_labels[i];
return 0;
}
struct rtc_device *rtc = NULL;
struct device *dev;
+ suspend_state_t test_state;
/* PM is initialized by now; is that state testable? */
- if (test_state == PM_SUSPEND_ON)
- goto done;
- if (!pm_states[test_state]) {
- printk(warn_bad_state, pm_states[test_state]);
- goto done;
+ if (!test_state_label)
+ return 0;
+
+ for (test_state = PM_SUSPEND_MIN; test_state < PM_SUSPEND_MAX; test_state++) {
+ const char *state_label = pm_states[test_state];
+
+ if (state_label && !strcmp(test_state_label, state_label))
+ break;
+ }
+ if (test_state == PM_SUSPEND_MAX) {
+ printk(warn_bad_state, test_state_label);
+ return 0;
}
/* RTCs have initialized by now too ... can we use one? */
rtc = rtc_class_open(dev_name(dev));
if (!rtc) {
printk(warn_no_rtc);
- goto done;
+ return 0;
}
/* go for it */
test_wakealarm(rtc, test_state);
rtc_class_close(rtc);
-done:
return 0;
}
late_initcall(test_suspend);
raw_spin_lock(&logbuf_lock);
logbuf_cpu = this_cpu;
- if (recursion_bug) {
+ if (unlikely(recursion_bug)) {
static const char recursion_msg[] =
"BUG: recent printk recursion!";
recursion_bug = 0;
- text_len = strlen(recursion_msg);
/* emit KERN_CRIT message */
printed_len += log_store(0, 2, LOG_PREFIX|LOG_NEWLINE, 0,
- NULL, 0, recursion_msg, text_len);
+ NULL, 0, recursion_msg,
+ strlen(recursion_msg));
}
/*
struct rcu_head **nocb_gp_tail;
long nocb_gp_count;
long nocb_gp_count_lazy;
- bool nocb_leader_wake; /* Is the nocb leader thread awake? */
+ bool nocb_leader_sleep; /* Is the nocb leader thread asleep? */
struct rcu_data *nocb_next_follower;
/* Next follower in wakeup chain. */
if (!ACCESS_ONCE(rdp_leader->nocb_kthread))
return;
- if (!ACCESS_ONCE(rdp_leader->nocb_leader_wake) || force) {
+ if (ACCESS_ONCE(rdp_leader->nocb_leader_sleep) || force) {
/* Prior xchg orders against prior callback enqueue. */
- ACCESS_ONCE(rdp_leader->nocb_leader_wake) = true;
+ ACCESS_ONCE(rdp_leader->nocb_leader_sleep) = false;
wake_up(&rdp_leader->nocb_wq);
}
}
if (!rcu_nocb_poll) {
trace_rcu_nocb_wake(my_rdp->rsp->name, my_rdp->cpu, "Sleep");
wait_event_interruptible(my_rdp->nocb_wq,
- ACCESS_ONCE(my_rdp->nocb_leader_wake));
+ !ACCESS_ONCE(my_rdp->nocb_leader_sleep));
/* Memory barrier handled by smp_mb() calls below and repoll. */
} else if (firsttime) {
firsttime = false; /* Don't drown trace log with "Poll"! */
schedule_timeout_interruptible(1);
/* Rescan in case we were a victim of memory ordering. */
- my_rdp->nocb_leader_wake = false;
- smp_mb(); /* Ensure _wake false before scan. */
+ my_rdp->nocb_leader_sleep = true;
+ smp_mb(); /* Ensure _sleep true before scan. */
for (rdp = my_rdp; rdp; rdp = rdp->nocb_next_follower)
if (ACCESS_ONCE(rdp->nocb_head)) {
/* Found CB, so short-circuit next wait. */
- my_rdp->nocb_leader_wake = true;
+ my_rdp->nocb_leader_sleep = false;
break;
}
goto wait_again;
rcu_nocb_wait_gp(my_rdp);
/*
- * We left ->nocb_leader_wake set to reduce cache thrashing.
- * We clear it now, but recheck for new callbacks while
+ * We left ->nocb_leader_sleep unset to reduce cache thrashing.
+ * We set it now, but recheck for new callbacks while
* traversing our follower list.
*/
- my_rdp->nocb_leader_wake = false;
- smp_mb(); /* Ensure _wake false before scan of ->nocb_head. */
+ my_rdp->nocb_leader_sleep = true;
+ smp_mb(); /* Ensure _sleep true before scan of ->nocb_head. */
/* Each pass through the following loop wakes a follower, if needed. */
for (rdp = my_rdp; rdp; rdp = rdp->nocb_next_follower) {
if (ACCESS_ONCE(rdp->nocb_head))
- my_rdp->nocb_leader_wake = true; /* No need to wait. */
+ my_rdp->nocb_leader_sleep = false;/* No need to sleep.*/
if (!rdp->nocb_gp_head)
continue; /* No CBs, so no need to wake follower. */
end = res->end;
BUG_ON(start >= end);
- read_lock(&resource_lock);
-
- if (first_level_children_only) {
- p = iomem_resource.child;
+ if (first_level_children_only)
sibling_only = true;
- } else
- p = &iomem_resource;
- while ((p = next_resource(p, sibling_only))) {
+ read_lock(&resource_lock);
+
+ for (p = iomem_resource.child; p; p = next_resource(p, sibling_only)) {
if (p->flags != res->flags)
continue;
if (name && strcmp(p->name, name))
static enum alarmtimer_restart alarm_handle_timer(struct alarm *alarm,
ktime_t now)
{
+ unsigned long flags;
struct k_itimer *ptr = container_of(alarm, struct k_itimer,
it.alarm.alarmtimer);
- if (posix_timer_event(ptr, 0) != 0)
- ptr->it_overrun++;
+ enum alarmtimer_restart result = ALARMTIMER_NORESTART;
+
+ spin_lock_irqsave(&ptr->it_lock, flags);
+ if ((ptr->it_sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE) {
+ if (posix_timer_event(ptr, 0) != 0)
+ ptr->it_overrun++;
+ }
/* Re-add periodic timers */
if (ptr->it.alarm.interval.tv64) {
ptr->it_overrun += alarm_forward(alarm, now,
ptr->it.alarm.interval);
- return ALARMTIMER_RESTART;
+ result = ALARMTIMER_RESTART;
}
- return ALARMTIMER_NORESTART;
+ spin_unlock_irqrestore(&ptr->it_lock, flags);
+
+ return result;
}
/**
* @new_timer: k_itimer pointer
* @cur_setting: itimerspec data to fill
*
- * Copies the itimerspec data out from the k_itimer
+ * Copies out the current itimerspec data
*/
static void alarm_timer_get(struct k_itimer *timr,
struct itimerspec *cur_setting)
{
- memset(cur_setting, 0, sizeof(struct itimerspec));
+ ktime_t relative_expiry_time =
+ alarm_expires_remaining(&(timr->it.alarm.alarmtimer));
+
+ if (ktime_to_ns(relative_expiry_time) > 0) {
+ cur_setting->it_value = ktime_to_timespec(relative_expiry_time);
+ } else {
+ cur_setting->it_value.tv_sec = 0;
+ cur_setting->it_value.tv_nsec = 0;
+ }
- cur_setting->it_interval =
- ktime_to_timespec(timr->it.alarm.interval);
- cur_setting->it_value =
- ktime_to_timespec(timr->it.alarm.alarmtimer.node.expires);
- return;
+ cur_setting->it_interval = ktime_to_timespec(timr->it.alarm.interval);
}
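
For illustration, the clamp-to-zero remaining-time computation that the new alarm_timer_get() performs can be modelled in plain C (a hypothetical helper, not the kernel function):

    #include <time.h>

    /* Report time left until 'expires', or 0/0 if already expired. */
    static struct timespec remaining(struct timespec expires, struct timespec now)
    {
            struct timespec d = { 0, 0 };
            long long ns = (long long)(expires.tv_sec - now.tv_sec) * 1000000000LL
                           + (expires.tv_nsec - now.tv_nsec);

            if (ns > 0) {
                    d.tv_sec  = ns / 1000000000LL;
                    d.tv_nsec = ns % 1000000000LL;
            }
            return d;
    }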
/**
.func = nohz_full_kick_work_func,
};
+/*
+ * Kick this CPU if it's full dynticks in order to force it to
+ * re-evaluate its dependency on the tick and restart it if necessary.
+ * This kick, unlike tick_nohz_full_kick_cpu() and tick_nohz_full_kick_all(),
+ * is NMI safe.
+ */
+void tick_nohz_full_kick(void)
+{
+ if (!tick_nohz_full_cpu(smp_processor_id()))
+ return;
+
+ irq_work_queue(&__get_cpu_var(nohz_full_kick_work));
+}
+
/*
* Kick the CPU if it's full dynticks in order to force it to
* re-evaluate its dependency on the tick and restart it if necessary.
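
The comment on tick_nohz_full_kick() above stresses that it is NMI safe: it takes no locks and only queues irq_work. A rough user-space analogue of that pattern, assuming C11 atomics and treating a signal handler as the "NMI" context (names are made up):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool kick_pending;

    /* Safe from any context: one lock-free store, like irq_work_queue(). */
    static void kick(void)
    {
            atomic_store(&kick_pending, true);
    }

    /* Run from ordinary context; consumes the kick exactly once. */
    static bool take_kick(void)
    {
            return atomic_exchange(&kick_pending, false);
    }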
* that a remainder subtract here would not do the right thing as the
 * resolution values don't fall on second boundaries. I.e. the line:
* nsec -= nsec % TICK_NSEC; is NOT a correct resolution rounding.
+ * Note that due to the small error in the multiplier here, this
+ * rounding is incorrect for sufficiently large values of tv_nsec, but
+ * well formed timespecs should have tv_nsec < NSEC_PER_SEC, so we're
+ * OK.
*
* Rather, we just shift the bits off the right.
*
* The >> (NSEC_JIFFIE_SC - SEC_JIFFIE_SC) converts the scaled nsec
* value to a scaled second value.
*/
-unsigned long
-timespec_to_jiffies(const struct timespec *value)
+static unsigned long
+__timespec_to_jiffies(unsigned long sec, long nsec)
{
- unsigned long sec = value->tv_sec;
- long nsec = value->tv_nsec + TICK_NSEC - 1;
+ nsec = nsec + TICK_NSEC - 1;
if (sec >= MAX_SEC_IN_JIFFIES){
sec = MAX_SEC_IN_JIFFIES;
(NSEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC;
}
+
+unsigned long
+timespec_to_jiffies(const struct timespec *value)
+{
+ return __timespec_to_jiffies(value->tv_sec, value->tv_nsec);
+}
+
EXPORT_SYMBOL(timespec_to_jiffies);
void
}
EXPORT_SYMBOL(jiffies_to_timespec);
-/* Same for "timeval"
- *
- * Well, almost. The problem here is that the real system resolution is
- * in nanoseconds and the value being converted is in micro seconds.
- * Also for some machines (those that use HZ = 1024, in-particular),
- * there is a LARGE error in the tick size in microseconds.
-
- * The solution we use is to do the rounding AFTER we convert the
- * microsecond part. Thus the USEC_ROUND, the bits to be shifted off.
- * Instruction wise, this should cost only an additional add with carry
- * instruction above the way it was done above.
+/*
+ * We could use a similar algorithm to timespec_to_jiffies (with a
+ * different multiplier for usec instead of nsec). But this has a
+ * problem with rounding: we can't exactly add TICK_NSEC - 1 to the
+ * usec value, since it's not necessarily integral.
+ *
+ * We could instead round in the intermediate scaled representation
+ * (i.e. in units of 1/2^(large scale) jiffies) but that's also
+ * perilous: the scaling introduces a small positive error, which
+ * combined with a division-rounding-upward (i.e. adding 2^(scale) - 1
+ * units to the intermediate before shifting) leads to accidental
+ * overflow and overestimates.
+ *
+ * At the cost of one additional multiplication by a constant, just
+ * use the timespec implementation.
*/
unsigned long
timeval_to_jiffies(const struct timeval *value)
{
- unsigned long sec = value->tv_sec;
- long usec = value->tv_usec;
-
- if (sec >= MAX_SEC_IN_JIFFIES){
- sec = MAX_SEC_IN_JIFFIES;
- usec = 0;
- }
- return (((u64)sec * SEC_CONVERSION) +
- (((u64)usec * USEC_CONVERSION + USEC_ROUND) >>
- (USEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC;
+ return __timespec_to_jiffies(value->tv_sec,
+ value->tv_usec * NSEC_PER_USEC);
}
EXPORT_SYMBOL(timeval_to_jiffies);
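
The comments above explain both the ceiling-style rounding and why timeval_to_jiffies() now simply scales microseconds to nanoseconds and reuses the timespec path. A simplified model of that arithmetic, assuming HZ=100 and using plain division where the kernel uses a scaled multiply (illustrative only):

    #include <stdio.h>

    #define HZ            100UL
    #define NSEC_PER_SEC  1000000000UL
    #define NSEC_PER_USEC 1000UL
    #define TICK_NSEC     (NSEC_PER_SEC / HZ)

    /* Model of __timespec_to_jiffies(): round nsec up to the next tick. */
    static unsigned long model_timespec_to_jiffies(unsigned long sec, long nsec)
    {
            nsec += TICK_NSEC - 1;
            return sec * HZ + nsec / TICK_NSEC;
    }

    /* timeval_to_jiffies() now just feeds usec*1000 into the same helper. */
    static unsigned long model_timeval_to_jiffies(unsigned long sec, long usec)
    {
            return model_timespec_to_jiffies(sec, usec * NSEC_PER_USEC);
    }

    int main(void)
    {
            printf("%lu\n", model_timespec_to_jiffies(1, 1));     /* 101 */
            printf("%lu\n", model_timeval_to_jiffies(1, 500000)); /* 150 */
            return 0;
    }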
tk->ntp_error = 0;
ntp_clear();
}
- update_vsyscall(tk);
- update_pvclock_gtod(tk, action & TK_CLOCK_WAS_SET);
tk_update_ktime_data(tk);
+ update_vsyscall(tk);
+ update_pvclock_gtod(tk, action & TK_CLOCK_WAS_SET);
+
if (action & TK_MIRROR)
memcpy(&shadow_timekeeper, &tk_core.timekeeper,
sizeof(tk_core.timekeeper));
#define FL_GLOBAL_CONTROL_MASK (FTRACE_OPS_FL_CONTROL)
#ifdef CONFIG_DYNAMIC_FTRACE
-#define INIT_REGEX_LOCK(opsname) \
- .regex_lock = __MUTEX_INITIALIZER(opsname.regex_lock),
+#define INIT_OPS_HASH(opsname) \
+ .func_hash = &opsname.local_hash, \
+ .local_hash.regex_lock = __MUTEX_INITIALIZER(opsname.local_hash.regex_lock),
+#define ASSIGN_OPS_HASH(opsname, val) \
+ .func_hash = val, \
+ .local_hash.regex_lock = __MUTEX_INITIALIZER(opsname.local_hash.regex_lock),
#else
-#define INIT_REGEX_LOCK(opsname)
+#define INIT_OPS_HASH(opsname)
+#define ASSIGN_OPS_HASH(opsname, val)
#endif
static struct ftrace_ops ftrace_list_end __read_mostly = {
.func = ftrace_stub,
.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_STUB,
+ INIT_OPS_HASH(ftrace_list_end)
};
/* ftrace_enabled is a method to turn ftrace on or off */
{
#ifdef CONFIG_DYNAMIC_FTRACE
if (!(ops->flags & FTRACE_OPS_FL_INITIALIZED)) {
- mutex_init(&ops->regex_lock);
+ mutex_init(&ops->local_hash.regex_lock);
+ ops->func_hash = &ops->local_hash;
ops->flags |= FTRACE_OPS_FL_INITIALIZED;
}
#endif
static struct ftrace_ops ftrace_profile_ops __read_mostly = {
.func = function_profile_call,
.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_INITIALIZED,
- INIT_REGEX_LOCK(ftrace_profile_ops)
+ INIT_OPS_HASH(ftrace_profile_ops)
};
static int register_ftrace_profiler(void)
#define EMPTY_HASH ((struct ftrace_hash *)&empty_hash)
static struct ftrace_ops global_ops = {
- .func = ftrace_stub,
- .notrace_hash = EMPTY_HASH,
- .filter_hash = EMPTY_HASH,
- .flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_INITIALIZED,
- INIT_REGEX_LOCK(global_ops)
+ .func = ftrace_stub,
+ .local_hash.notrace_hash = EMPTY_HASH,
+ .local_hash.filter_hash = EMPTY_HASH,
+ INIT_OPS_HASH(global_ops)
+ .flags = FTRACE_OPS_FL_RECURSION_SAFE |
+ FTRACE_OPS_FL_INITIALIZED,
};
struct ftrace_page {
void ftrace_free_filter(struct ftrace_ops *ops)
{
ftrace_ops_init(ops);
- free_ftrace_hash(ops->filter_hash);
- free_ftrace_hash(ops->notrace_hash);
+ free_ftrace_hash(ops->func_hash->filter_hash);
+ free_ftrace_hash(ops->func_hash->notrace_hash);
}
static struct ftrace_hash *alloc_ftrace_hash(int size_bits)
}
static void
-ftrace_hash_rec_disable(struct ftrace_ops *ops, int filter_hash);
+ftrace_hash_rec_disable_modify(struct ftrace_ops *ops, int filter_hash);
static void
-ftrace_hash_rec_enable(struct ftrace_ops *ops, int filter_hash);
+ftrace_hash_rec_enable_modify(struct ftrace_ops *ops, int filter_hash);
static int
ftrace_hash_move(struct ftrace_ops *ops, int enable,
* Remove the current set, update the hash and add
* them back.
*/
- ftrace_hash_rec_disable(ops, enable);
+ ftrace_hash_rec_disable_modify(ops, enable);
old_hash = *dst;
rcu_assign_pointer(*dst, new_hash);
free_ftrace_hash_rcu(old_hash);
- ftrace_hash_rec_enable(ops, enable);
+ ftrace_hash_rec_enable_modify(ops, enable);
return 0;
}
return 0;
#endif
- filter_hash = rcu_dereference_raw_notrace(ops->filter_hash);
- notrace_hash = rcu_dereference_raw_notrace(ops->notrace_hash);
+ filter_hash = rcu_dereference_raw_notrace(ops->func_hash->filter_hash);
+ notrace_hash = rcu_dereference_raw_notrace(ops->func_hash->notrace_hash);
if ((ftrace_hash_empty(filter_hash) ||
ftrace_lookup_ip(filter_hash, ip)) &&
static void ftrace_remove_tramp(struct ftrace_ops *ops,
struct dyn_ftrace *rec)
{
- struct ftrace_func_entry *entry;
-
- entry = ftrace_lookup_ip(ops->tramp_hash, rec->ip);
- if (!entry)
+ /* If TRAMP is not set, no ops should have a trampoline for this */
+ if (!(rec->flags & FTRACE_FL_TRAMP))
return;
+ rec->flags &= ~FTRACE_FL_TRAMP;
+
+ if ((!ftrace_hash_empty(ops->func_hash->filter_hash) &&
+ !ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip)) ||
+ ftrace_lookup_ip(ops->func_hash->notrace_hash, rec->ip))
+ return;
/*
* The tramp_hash entry will be removed at time
* of update.
*/
ops->nr_trampolines--;
- rec->flags &= ~FTRACE_FL_TRAMP;
}
-static void ftrace_clear_tramps(struct dyn_ftrace *rec)
+static void ftrace_clear_tramps(struct dyn_ftrace *rec, struct ftrace_ops *ops)
{
struct ftrace_ops *op;
+ /* If TRAMP is not set, no ops should have a trampoline for this */
+ if (!(rec->flags & FTRACE_FL_TRAMP))
+ return;
+
do_for_each_ftrace_op(op, ftrace_ops_list) {
+ /*
+		 * This function is called to clear other tramps,
+ * not the one that is being updated.
+ */
+ if (op == ops)
+ continue;
if (op->nr_trampolines)
ftrace_remove_tramp(op, rec);
} while_for_each_ftrace_op(op);
* gets inversed.
*/
if (filter_hash) {
- hash = ops->filter_hash;
- other_hash = ops->notrace_hash;
+ hash = ops->func_hash->filter_hash;
+ other_hash = ops->func_hash->notrace_hash;
if (ftrace_hash_empty(hash))
all = 1;
} else {
inc = !inc;
- hash = ops->notrace_hash;
- other_hash = ops->filter_hash;
+ hash = ops->func_hash->notrace_hash;
+ other_hash = ops->func_hash->filter_hash;
/*
* If the notrace hash has no items,
* then there's nothing to do.
/*
* If we are adding another function callback
* to this function, and the previous had a
- * trampoline used, then we need to go back to
- * the default trampoline.
+ * custom trampoline in use, then we need to go
+ * back to the default trampoline.
*/
- rec->flags &= ~FTRACE_FL_TRAMP;
-
- /* remove trampolines from any ops for this rec */
- ftrace_clear_tramps(rec);
+ ftrace_clear_tramps(rec, ops);
}
/*
__ftrace_hash_rec_update(ops, filter_hash, 1);
}
+static void ftrace_hash_rec_update_modify(struct ftrace_ops *ops,
+ int filter_hash, int inc)
+{
+ struct ftrace_ops *op;
+
+ __ftrace_hash_rec_update(ops, filter_hash, inc);
+
+ if (ops->func_hash != &global_ops.local_hash)
+ return;
+
+ /*
+ * If the ops shares the global_ops hash, then we need to update
+ * all ops that are enabled and use this hash.
+ */
+ do_for_each_ftrace_op(op, ftrace_ops_list) {
+ /* Already done */
+ if (op == ops)
+ continue;
+ if (op->func_hash == &global_ops.local_hash)
+ __ftrace_hash_rec_update(op, filter_hash, inc);
+ } while_for_each_ftrace_op(op);
+}
+
+static void ftrace_hash_rec_disable_modify(struct ftrace_ops *ops,
+ int filter_hash)
+{
+ ftrace_hash_rec_update_modify(ops, filter_hash, 0);
+}
+
+static void ftrace_hash_rec_enable_modify(struct ftrace_ops *ops,
+ int filter_hash)
+{
+ ftrace_hash_rec_update_modify(ops, filter_hash, 1);
+}
+
static void print_ip_ins(const char *fmt, unsigned char *p)
{
int i;
if (rec->flags & FTRACE_FL_TRAMP) {
ops = ftrace_find_tramp_ops_new(rec);
if (FTRACE_WARN_ON(!ops || !ops->trampoline)) {
- pr_warning("Bad trampoline accounting at: %p (%pS)\n",
- (void *)rec->ip, (void *)rec->ip);
+ pr_warn("Bad trampoline accounting at: %p (%pS) (%lx)\n",
+ (void *)rec->ip, (void *)rec->ip, rec->flags);
/* Ftrace is shutting down, return anything */
return (unsigned long)FTRACE_ADDR;
}
return ftrace_make_call(rec, ftrace_addr);
case FTRACE_UPDATE_MAKE_NOP:
- return ftrace_make_nop(NULL, rec, ftrace_addr);
+ return ftrace_make_nop(NULL, rec, ftrace_old_addr);
case FTRACE_UPDATE_MODIFY_CALL:
return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
} while_for_each_ftrace_rec();
/* The number of recs in the hash must match nr_trampolines */
- FTRACE_WARN_ON(ops->tramp_hash->count != ops->nr_trampolines);
+ if (FTRACE_WARN_ON(ops->tramp_hash->count != ops->nr_trampolines))
+ pr_warn("count=%ld trampolines=%d\n",
+ ops->tramp_hash->count,
+ ops->nr_trampolines);
return 0;
}
* Filter_hash being empty will default to trace module.
* But notrace hash requires a test of individual module functions.
*/
- return ftrace_hash_empty(ops->filter_hash) &&
- ftrace_hash_empty(ops->notrace_hash);
+ return ftrace_hash_empty(ops->func_hash->filter_hash) &&
+ ftrace_hash_empty(ops->func_hash->notrace_hash);
}
/*
return 0;
/* The function must be in the filter */
- if (!ftrace_hash_empty(ops->filter_hash) &&
- !ftrace_lookup_ip(ops->filter_hash, rec->ip))
+ if (!ftrace_hash_empty(ops->func_hash->filter_hash) &&
+ !ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip))
return 0;
/* If in notrace hash, we ignore it too */
- if (ftrace_lookup_ip(ops->notrace_hash, rec->ip))
+ if (ftrace_lookup_ip(ops->func_hash->notrace_hash, rec->ip))
return 0;
return 1;
} else {
rec = &iter->pg->records[iter->idx++];
if (((iter->flags & FTRACE_ITER_FILTER) &&
- !(ftrace_lookup_ip(ops->filter_hash, rec->ip))) ||
+ !(ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip))) ||
((iter->flags & FTRACE_ITER_NOTRACE) &&
- !ftrace_lookup_ip(ops->notrace_hash, rec->ip)) ||
+ !ftrace_lookup_ip(ops->func_hash->notrace_hash, rec->ip)) ||
((iter->flags & FTRACE_ITER_ENABLED) &&
!(rec->flags & FTRACE_FL_ENABLED))) {
* functions are enabled.
*/
if ((iter->flags & FTRACE_ITER_FILTER &&
- ftrace_hash_empty(ops->filter_hash)) ||
+ ftrace_hash_empty(ops->func_hash->filter_hash)) ||
(iter->flags & FTRACE_ITER_NOTRACE &&
- ftrace_hash_empty(ops->notrace_hash))) {
+ ftrace_hash_empty(ops->func_hash->notrace_hash))) {
if (*pos > 0)
return t_hash_start(m, pos);
iter->flags |= FTRACE_ITER_PRINTALL;
iter->ops = ops;
iter->flags = flag;
- mutex_lock(&ops->regex_lock);
+ mutex_lock(&ops->func_hash->regex_lock);
if (flag & FTRACE_ITER_NOTRACE)
- hash = ops->notrace_hash;
+ hash = ops->func_hash->notrace_hash;
else
- hash = ops->filter_hash;
+ hash = ops->func_hash->filter_hash;
if (file->f_mode & FMODE_WRITE) {
const int size_bits = FTRACE_HASH_DEFAULT_BITS;
file->private_data = iter;
out_unlock:
- mutex_unlock(&ops->regex_lock);
+ mutex_unlock(&ops->func_hash->regex_lock);
return ret;
}
{
.func = function_trace_probe_call,
.flags = FTRACE_OPS_FL_INITIALIZED,
- INIT_REGEX_LOCK(trace_probe_ops)
+ INIT_OPS_HASH(trace_probe_ops)
};
static int ftrace_probe_registered;
void *data)
{
struct ftrace_func_probe *entry;
- struct ftrace_hash **orig_hash = &trace_probe_ops.filter_hash;
+ struct ftrace_hash **orig_hash = &trace_probe_ops.func_hash->filter_hash;
struct ftrace_hash *hash;
struct ftrace_page *pg;
struct dyn_ftrace *rec;
if (WARN_ON(not))
return -EINVAL;
- mutex_lock(&trace_probe_ops.regex_lock);
+ mutex_lock(&trace_probe_ops.func_hash->regex_lock);
hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
if (!hash) {
out_unlock:
mutex_unlock(&ftrace_lock);
out:
- mutex_unlock(&trace_probe_ops.regex_lock);
+ mutex_unlock(&trace_probe_ops.func_hash->regex_lock);
free_ftrace_hash(hash);
return count;
struct ftrace_func_entry *rec_entry;
struct ftrace_func_probe *entry;
struct ftrace_func_probe *p;
- struct ftrace_hash **orig_hash = &trace_probe_ops.filter_hash;
+ struct ftrace_hash **orig_hash = &trace_probe_ops.func_hash->filter_hash;
struct list_head free_list;
struct ftrace_hash *hash;
struct hlist_node *tmp;
return;
}
- mutex_lock(&trace_probe_ops.regex_lock);
+ mutex_lock(&trace_probe_ops.func_hash->regex_lock);
hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
if (!hash)
mutex_unlock(&ftrace_lock);
out_unlock:
- mutex_unlock(&trace_probe_ops.regex_lock);
+ mutex_unlock(&trace_probe_ops.func_hash->regex_lock);
free_ftrace_hash(hash);
}
if (unlikely(ftrace_disabled))
return -ENODEV;
- mutex_lock(&ops->regex_lock);
+ mutex_lock(&ops->func_hash->regex_lock);
if (enable)
- orig_hash = &ops->filter_hash;
+ orig_hash = &ops->func_hash->filter_hash;
else
- orig_hash = &ops->notrace_hash;
+ orig_hash = &ops->func_hash->notrace_hash;
if (reset)
hash = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
mutex_unlock(&ftrace_lock);
out_regex_unlock:
- mutex_unlock(&ops->regex_lock);
+ mutex_unlock(&ops->func_hash->regex_lock);
free_ftrace_hash(hash);
return ret;
trace_parser_put(parser);
- mutex_lock(&iter->ops->regex_lock);
+ mutex_lock(&iter->ops->func_hash->regex_lock);
if (file->f_mode & FMODE_WRITE) {
filter_hash = !!(iter->flags & FTRACE_ITER_FILTER);
if (filter_hash)
- orig_hash = &iter->ops->filter_hash;
+ orig_hash = &iter->ops->func_hash->filter_hash;
else
- orig_hash = &iter->ops->notrace_hash;
+ orig_hash = &iter->ops->func_hash->notrace_hash;
mutex_lock(&ftrace_lock);
ret = ftrace_hash_move(iter->ops, filter_hash,
mutex_unlock(&ftrace_lock);
}
- mutex_unlock(&iter->ops->regex_lock);
+ mutex_unlock(&iter->ops->func_hash->regex_lock);
free_ftrace_hash(iter->hash);
kfree(iter);
static struct ftrace_ops global_ops = {
.func = ftrace_stub,
.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_INITIALIZED,
- INIT_REGEX_LOCK(global_ops)
};
static int __init ftrace_nodyn_init(void)
static struct ftrace_ops control_ops = {
.func = ftrace_ops_control_func,
.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_INITIALIZED,
- INIT_REGEX_LOCK(control_ops)
+ INIT_OPS_HASH(control_ops)
};
static inline void
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+static struct ftrace_ops graph_ops = {
+ .func = ftrace_stub,
+ .flags = FTRACE_OPS_FL_RECURSION_SAFE |
+ FTRACE_OPS_FL_INITIALIZED |
+ FTRACE_OPS_FL_STUB,
+#ifdef FTRACE_GRAPH_TRAMP_ADDR
+ .trampoline = FTRACE_GRAPH_TRAMP_ADDR,
+#endif
+ ASSIGN_OPS_HASH(graph_ops, &global_ops.local_hash)
+};
+
static int ftrace_graph_active;
int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
*/
static void update_function_graph_func(void)
{
- if (ftrace_ops_list == &ftrace_list_end ||
- (ftrace_ops_list == &global_ops &&
- global_ops.next == &ftrace_list_end))
- ftrace_graph_entry = __ftrace_graph_entry;
- else
+ struct ftrace_ops *op;
+ bool do_test = false;
+
+ /*
+ * The graph and global ops share the same set of functions
+ * to test. If any other ops is on the list, then
+	 * the graph tracing needs to test if it's the function
+ * it should call.
+ */
+ do_for_each_ftrace_op(op, ftrace_ops_list) {
+ if (op != &global_ops && op != &graph_ops &&
+ op != &ftrace_list_end) {
+ do_test = true;
+ /* in double loop, break out with goto */
+ goto out;
+ }
+ } while_for_each_ftrace_op(op);
+ out:
+ if (do_test)
ftrace_graph_entry = ftrace_graph_entry_test;
+ else
+ ftrace_graph_entry = __ftrace_graph_entry;
}
static struct notifier_block ftrace_suspend_notifier = {
ftrace_graph_entry = ftrace_graph_entry_test;
update_function_graph_func();
- /* Function graph doesn't use the .func field of global_ops */
- global_ops.flags |= FTRACE_OPS_FL_STUB;
-
-#ifdef CONFIG_DYNAMIC_FTRACE
- /* Optimize function graph calling (if implemented by arch) */
- if (FTRACE_GRAPH_TRAMP_ADDR != 0)
- global_ops.trampoline = FTRACE_GRAPH_TRAMP_ADDR;
-#endif
-
- ret = ftrace_startup(&global_ops, FTRACE_START_FUNC_RET);
+ ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET);
out:
mutex_unlock(&ftrace_lock);
ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub;
ftrace_graph_entry = ftrace_graph_entry_stub;
__ftrace_graph_entry = ftrace_graph_entry_stub;
- ftrace_shutdown(&global_ops, FTRACE_STOP_FUNC_RET);
- global_ops.flags &= ~FTRACE_OPS_FL_STUB;
-#ifdef CONFIG_DYNAMIC_FTRACE
- if (FTRACE_GRAPH_TRAMP_ADDR != 0)
- global_ops.trampoline = 0;
-#endif
+ ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET);
unregister_pm_notifier(&ftrace_suspend_notifier);
unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
work = &cpu_buffer->irq_work;
}
- work->waiters_pending = true;
poll_wait(filp, &work->waiters, poll_table);
+ work->waiters_pending = true;
+ /*
+ * There's a tight race between setting the waiters_pending and
+ * checking if the ring buffer is empty. Once the waiters_pending bit
+ * is set, the next event will wake the task up, but we can get stuck
+ * if there's only a single event in.
+	 * if there's only a single event already in the buffer.
+ * FIXME: Ideally, we need a memory barrier on the writer side as well,
+ * but adding a memory barrier to all events will cause too much of a
+ * performance hit in the fast path. We only need a memory barrier when
+	 * the buffer goes from empty to having content. But as this race
+	 * window is extremely small, and it's not a problem if another event
+	 * comes in, we will fix it later.
+ */
+ smp_mb();
if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) ||
(cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu)))
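
The reordering plus smp_mb() above is a publish-then-recheck pattern: advertise waiters_pending first, then test "buffer empty", so an event that slips in between still gets noticed. A hypothetical C11 rendering of both sides (the FIXME about the missing writer-side barrier applies here too):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool waiters_pending;
    static atomic_int  nr_events;

    /* Waiter: publish, full barrier, then re-check; true means "may sleep". */
    static bool may_sleep(void)
    {
            atomic_store_explicit(&waiters_pending, true, memory_order_relaxed);
            atomic_thread_fence(memory_order_seq_cst);      /* the smp_mb() */
            return atomic_load_explicit(&nr_events, memory_order_relaxed) == 0;
    }

    /* Writer: add an event, then (ideally after a barrier) check for waiters. */
    static bool writer_should_wake(void)
    {
            atomic_fetch_add_explicit(&nr_events, 1, memory_order_relaxed);
            return atomic_load_explicit(&waiters_pending, memory_order_relaxed);
    }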
config ARCH_USE_CMPXCHG_LOCKREF
bool
+config ARCH_HAS_FAST_MULTIPLIER
+ bool
+
config CRC_CCITT
tristate "CRC-CCITT functions"
help
the full mutex checks enabled with (CONFIG_PROVE_LOCKING) this
will test all possible w/w mutex interface abuse with the
exception of simply not acquiring all the required locks.
+ Note that this feature can introduce significant overhead, so
+ it really should not be enabled in a production or distro kernel,
+ even a debug kernel. If you are a driver writer, enable it. If
+ you are a distro, do not.
config DEBUG_LOCK_ALLOC
bool "Lock debugging: detect incorrect freeing of live locks"
either tracing or lock debugging.
config STACKTRACE
- bool
+ bool "Stack backtrace support"
depends on STACKTRACE_SUPPORT
+ help
+ This option causes the kernel to create a /proc/pid/stack for
+ every process, showing its current stack trace.
+ It is also used by various kernel debugging features that require
+ stack trace generation.
config DEBUG_KOBJECT
bool "kobject debugging"
shortcut = assoc_array_ptr_to_shortcut(ptr);
slot = shortcut->parent_slot;
cursor = shortcut->back_pointer;
+ if (!cursor)
+ goto gc_complete;
} else {
slot = node->parent_slot;
cursor = ptr;
}
- BUG_ON(!ptr);
+ BUG_ON(!cursor);
node = assoc_array_ptr_to_node(cursor);
slot++;
goto continue_node;
gc_complete:
edit->set[0].to = new_root;
assoc_array_apply_edit(edit);
- edit->array->nr_leaves_on_tree = nr_leaves_on_tree;
+ array->nr_leaves_on_tree = nr_leaves_on_tree;
return 0;
enomem:
unsigned int __sw_hweight32(unsigned int w)
{
-#ifdef ARCH_HAS_FAST_MULTIPLIER
+#ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER
w -= (w >> 1) & 0x55555555;
w = (w & 0x33333333) + ((w >> 2) & 0x33333333);
w = (w + (w >> 4)) & 0x0f0f0f0f;
return __sw_hweight32((unsigned int)(w >> 32)) +
__sw_hweight32((unsigned int)w);
#elif BITS_PER_LONG == 64
-#ifdef ARCH_HAS_FAST_MULTIPLIER
+#ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER
w -= (w >> 1) & 0x5555555555555555ul;
w = (w & 0x3333333333333333ul) + ((w >> 2) & 0x3333333333333333ul);
w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0ful;
return check_bytes8(start, value, bytes);
value64 = value;
-#if defined(ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
+#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
value64 *= 0x0101010101010101;
-#elif defined(ARCH_HAS_FAST_MULTIPLIER)
+#elif defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER)
value64 *= 0x01010101;
value64 |= value64 << 32;
#else
if (hugetlb_cgroup_disabled())
return;
- VM_BUG_ON(!spin_is_locked(&hugetlb_lock));
+ lockdep_assert_held(&hugetlb_lock);
h_cg = hugetlb_cgroup_from_page(page);
if (unlikely(!h_cg))
return;
phys_addr_t align, phys_addr_t start,
phys_addr_t end, int nid)
{
- int ret;
- phys_addr_t kernel_end;
+ phys_addr_t kernel_end, ret;
/* pump up @end */
if (end == MEMBLOCK_ALLOC_ACCESSIBLE)
if (nid != NUMA_NO_NODE && nid != m_nid)
continue;
+ /* skip hotpluggable memory regions if needed */
+ if (movable_node_is_enabled() && memblock_is_hotpluggable(m))
+ continue;
+
if (!type_b) {
if (out_start)
*out_start = m_start;
unsigned long long size;
int ret = 0;
+ if (mem_cgroup_is_root(memcg))
+ goto done;
retry:
if (consume_stock(memcg, nr_pages))
goto done;
if (!(gfp_mask & __GFP_NOFAIL))
return -ENOMEM;
bypass:
- memcg = root_mem_cgroup;
- ret = -EINTR;
- goto retry;
+ return -EINTR;
done_restock:
if (batch > nr_pages)
{
unsigned long bytes = nr_pages * PAGE_SIZE;
+ if (mem_cgroup_is_root(memcg))
+ return;
+
res_counter_uncharge(&memcg->res, bytes);
if (do_swap_account)
res_counter_uncharge(&memcg->memsw, bytes);
{
unsigned long bytes = nr_pages * PAGE_SIZE;
+ if (mem_cgroup_is_root(memcg))
+ return;
+
res_counter_uncharge_until(&memcg->res, memcg->res.parent, bytes);
if (do_swap_account)
res_counter_uncharge_until(&memcg->memsw,
return retval;
}
+static unsigned long mem_cgroup_recursive_stat(struct mem_cgroup *memcg,
+ enum mem_cgroup_stat_index idx)
+{
+ struct mem_cgroup *iter;
+ long val = 0;
+
+ /* Per-cpu values can be negative, use a signed accumulator */
+ for_each_mem_cgroup_tree(iter, memcg)
+ val += mem_cgroup_read_stat(iter, idx);
+
+ if (val < 0) /* race ? */
+ val = 0;
+ return val;
+}
+
+static inline u64 mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
+{
+ u64 val;
+
+ if (!mem_cgroup_is_root(memcg)) {
+ if (!swap)
+ return res_counter_read_u64(&memcg->res, RES_USAGE);
+ else
+ return res_counter_read_u64(&memcg->memsw, RES_USAGE);
+ }
+
+ /*
+ * Transparent hugepages are still accounted for in MEM_CGROUP_STAT_RSS
+ * as well as in MEM_CGROUP_STAT_RSS_HUGE.
+ */
+ val = mem_cgroup_recursive_stat(memcg, MEM_CGROUP_STAT_CACHE);
+ val += mem_cgroup_recursive_stat(memcg, MEM_CGROUP_STAT_RSS);
+
+ if (swap)
+ val += mem_cgroup_recursive_stat(memcg, MEM_CGROUP_STAT_SWAP);
+
+ return val << PAGE_SHIFT;
+}
+
+
static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
struct cftype *cft)
{
switch (type) {
case _MEM:
+ if (name == RES_USAGE)
+ return mem_cgroup_usage(memcg, false);
return res_counter_read_u64(&memcg->res, name);
case _MEMSWAP:
+ if (name == RES_USAGE)
+ return mem_cgroup_usage(memcg, true);
return res_counter_read_u64(&memcg->memsw, name);
case _KMEM:
return res_counter_read_u64(&memcg->kmem, name);
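
The mem_cgroup_recursive_stat() helper added above relies on a common idiom: per-cpu counters may be transiently negative, so they must be summed in a signed accumulator and clamped. A stripped-down illustration (hypothetical, not the kernel helper):

    /* Sum per-cpu deltas; entries may be negative, so clamp races to zero. */
    static long stat_sum(const long *percpu, int ncpus)
    {
            long val = 0;
            int cpu;

            for (cpu = 0; cpu < ncpus; cpu++)
                    val += percpu[cpu];

            return val < 0 ? 0 : val;
    }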
if (!t)
goto unlock;
- if (!swap)
- usage = res_counter_read_u64(&memcg->res, RES_USAGE);
- else
- usage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
+ usage = mem_cgroup_usage(memcg, swap);
/*
* current_threshold points to threshold just below or equal to usage.
if (type == _MEM) {
thresholds = &memcg->thresholds;
- usage = res_counter_read_u64(&memcg->res, RES_USAGE);
+ usage = mem_cgroup_usage(memcg, false);
} else if (type == _MEMSWAP) {
thresholds = &memcg->memsw_thresholds;
- usage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
+ usage = mem_cgroup_usage(memcg, true);
} else
BUG();
if (type == _MEM) {
thresholds = &memcg->thresholds;
- usage = res_counter_read_u64(&memcg->res, RES_USAGE);
+ usage = mem_cgroup_usage(memcg, false);
} else if (type == _MEMSWAP) {
thresholds = &memcg->memsw_thresholds;
- usage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
+ usage = mem_cgroup_usage(memcg, true);
} else
BUG();
* core guarantees its existence.
*/
} else {
- res_counter_init(&memcg->res, &root_mem_cgroup->res);
- res_counter_init(&memcg->memsw, &root_mem_cgroup->memsw);
- res_counter_init(&memcg->kmem, &root_mem_cgroup->kmem);
+ res_counter_init(&memcg->res, NULL);
+ res_counter_init(&memcg->memsw, NULL);
+ res_counter_init(&memcg->kmem, NULL);
/*
	 * Deeper hierarchy with use_hierarchy == false doesn't make
* much sense so let cgroup subsystem know about this
/* we must fixup refcnts and charges */
if (mc.moved_swap) {
/* uncharge swap account from the old cgroup */
- res_counter_uncharge(&mc.from->memsw,
- PAGE_SIZE * mc.moved_swap);
+ if (!mem_cgroup_is_root(mc.from))
+ res_counter_uncharge(&mc.from->memsw,
+ PAGE_SIZE * mc.moved_swap);
for (i = 0; i < mc.moved_swap; i++)
css_put(&mc.from->css);
* we charged both to->res and to->memsw, so we should
* uncharge to->res.
*/
- res_counter_uncharge(&mc.to->res,
- PAGE_SIZE * mc.moved_swap);
+ if (!mem_cgroup_is_root(mc.to))
+ res_counter_uncharge(&mc.to->res,
+ PAGE_SIZE * mc.moved_swap);
/* we've already done css_get(mc.to) */
mc.moved_swap = 0;
}
rcu_read_lock();
memcg = mem_cgroup_lookup(id);
if (memcg) {
- res_counter_uncharge(&memcg->memsw, PAGE_SIZE);
+ if (!mem_cgroup_is_root(memcg))
+ res_counter_uncharge(&memcg->memsw, PAGE_SIZE);
mem_cgroup_swap_statistics(memcg, false);
css_put(&memcg->css);
}
{
unsigned long flags;
- if (nr_mem)
- res_counter_uncharge(&memcg->res, nr_mem * PAGE_SIZE);
- if (nr_memsw)
- res_counter_uncharge(&memcg->memsw, nr_memsw * PAGE_SIZE);
-
- memcg_oom_recover(memcg);
+ if (!mem_cgroup_is_root(memcg)) {
+ if (nr_mem)
+ res_counter_uncharge(&memcg->res,
+ nr_mem * PAGE_SIZE);
+ if (nr_memsw)
+ res_counter_uncharge(&memcg->memsw,
+ nr_memsw * PAGE_SIZE);
+ memcg_oom_recover(memcg);
+ }
local_irq_save(flags);
__this_cpu_sub(memcg->stat->count[MEM_CGROUP_STAT_RSS], nr_anon);
unsigned long pfn = pte_pfn(pte);
if (HAVE_PTE_SPECIAL) {
- if (likely(!pte_special(pte) || pte_numa(pte)))
+ if (likely(!pte_special(pte)))
goto check_pfn;
if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
return NULL;
}
}
+ if (is_zero_pfn(pfn))
+ return NULL;
check_pfn:
if (unlikely(pfn > highest_memmap_pfn)) {
print_bad_pte(vma, addr, pte, NULL);
return NULL;
}
- if (is_zero_pfn(pfn))
- return NULL;
-
/*
* NOTE! We still have PageReserved() pages in the page tables.
* eg. VDSO mappings can cause them to exist.
struct vm_area_struct *vma;
vma = rb_entry(nd, struct vm_area_struct, vm_rb);
if (vma->vm_start < prev) {
- pr_info("vm_start %lx prev %lx\n", vma->vm_start, prev);
+ pr_emerg("vm_start %lx prev %lx\n", vma->vm_start, prev);
bug = 1;
}
if (vma->vm_start < pend) {
- pr_info("vm_start %lx pend %lx\n", vma->vm_start, pend);
+ pr_emerg("vm_start %lx pend %lx\n", vma->vm_start, pend);
bug = 1;
}
if (vma->vm_start > vma->vm_end) {
- pr_info("vm_end %lx < vm_start %lx\n",
+ pr_emerg("vm_end %lx < vm_start %lx\n",
vma->vm_end, vma->vm_start);
bug = 1;
}
if (vma->rb_subtree_gap != vma_compute_subtree_gap(vma)) {
- pr_info("free gap %lx, correct %lx\n",
+ pr_emerg("free gap %lx, correct %lx\n",
vma->rb_subtree_gap,
vma_compute_subtree_gap(vma));
bug = 1;
for (nd = pn; nd; nd = rb_prev(nd))
j++;
if (i != j) {
- pr_info("backwards %d, forwards %d\n", j, i);
+ pr_emerg("backwards %d, forwards %d\n", j, i);
bug = 1;
}
return bug ? -1 : i;
i++;
}
if (i != mm->map_count) {
- pr_info("map_count %d vm_next %d\n", mm->map_count, i);
+ pr_emerg("map_count %d vm_next %d\n", mm->map_count, i);
bug = 1;
}
if (highest_address != mm->highest_vm_end) {
- pr_info("mm->highest_vm_end %lx, found %lx\n",
+ pr_emerg("mm->highest_vm_end %lx, found %lx\n",
mm->highest_vm_end, highest_address);
bug = 1;
}
i = browse_rb(&mm->mm_rb);
if (i != mm->map_count) {
- pr_info("map_count %d rb %d\n", mm->map_count, i);
+ pr_emerg("map_count %d rb %d\n", mm->map_count, i);
bug = 1;
}
BUG_ON(bug);
phys_addr_t start, end;
u64 i;
+ memblock_clear_hotplug(0, -1);
+
for_each_free_mem_range(i, NUMA_NO_NODE, &start, &end, NULL)
count += __free_memory_core(start, end);
int page_start, int page_end)
{
const gfp_t gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_COLD;
- unsigned int cpu;
+ unsigned int cpu, tcpu;
int i;
for_each_possible_cpu(cpu) {
struct page **pagep = &pages[pcpu_page_idx(cpu, i)];
*pagep = alloc_pages_node(cpu_to_node(cpu), gfp, 0);
- if (!*pagep) {
- pcpu_free_pages(chunk, pages, populated,
- page_start, page_end);
- return -ENOMEM;
- }
+ if (!*pagep)
+ goto err;
}
}
return 0;
+
+err:
+ while (--i >= page_start)
+ __free_page(pages[pcpu_page_idx(cpu, i)]);
+
+ for_each_possible_cpu(tcpu) {
+ if (tcpu == cpu)
+ break;
+ for (i = page_start; i < page_end; i++)
+ __free_page(pages[pcpu_page_idx(tcpu, i)]);
+ }
+ return -ENOMEM;
}
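
The new err: path above unwinds a partial allocation: free the pages the failing CPU already got, then everything allocated for earlier CPUs. The same shape in a minimal self-contained form, with user-space malloc() standing in for alloc_pages_node() and a single level instead of the kernel's cpu-by-page loop (illustrative names):

    #include <stdlib.h>

    static int alloc_all(void **slots, int n)
    {
            int i;

            for (i = 0; i < n; i++) {
                    slots[i] = malloc(64);
                    if (!slots[i])
                            goto err;
            }
            return 0;

    err:
            while (--i >= 0) {      /* free only what we allocated */
                    free(slots[i]);
                    slots[i] = NULL;
            }
            return -1;              /* -ENOMEM in the kernel version */
    }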
/**
__pcpu_unmap_pages(pcpu_chunk_addr(chunk, tcpu, page_start),
page_end - page_start);
}
+ pcpu_post_unmap_tlb_flush(chunk, page_start, page_end);
return err;
}
if (pcpu_setup_first_chunk(ai, fc) < 0)
panic("Failed to initialize percpu areas.");
+
+ pcpu_free_alloc_info(ai);
}
#endif /* CONFIG_SMP */
pmd_t entry = *pmdp;
if (pmd_numa(entry))
entry = pmd_mknonnuma(entry);
- set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(*pmdp));
+ set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
.total_size = zbud_zpool_total_size,
};
+MODULE_ALIAS("zpool-zbud");
#endif /* CONFIG_ZPOOL */
/*****************
driver = zpool_get_driver(type);
if (!driver) {
- request_module(type);
+ request_module("zpool-%s", type);
driver = zpool_get_driver(type);
}
.total_size = zs_zpool_total_size,
};
+MODULE_ALIAS("zpool-zsmalloc");
#endif /* CONFIG_ZPOOL */
/* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
priv->lane2_ops = NULL;
if (priv->lane_version > 1)
priv->lane2_ops = &lane2_ops;
+ rtnl_lock();
if (dev_set_mtu(dev, mesg->content.config.mtu))
pr_info("%s: change_mtu to %d failed\n",
dev->name, mesg->content.config.mtu);
+ rtnl_unlock();
priv->is_proxy = mesg->content.config.is_proxy;
break;
case l_flush_tran_id:
/* Reached the end of the list, so insert after 'frag_entry_last'. */
if (likely(frag_entry_last)) {
- hlist_add_behind(&frag_entry_last->list, &frag_entry_new->list);
+ hlist_add_behind(&frag_entry_new->list, &frag_entry_last->list);
chain->size += skb->len - hdr_size;
chain->timestamp = jiffies;
ret = true;
void hci_le_conn_failed(struct hci_conn *conn, u8 status)
{
struct hci_dev *hdev = conn->hdev;
+ struct hci_conn_params *params;
+
+ params = hci_pend_le_action_lookup(&hdev->pend_le_conns, &conn->dst,
+ conn->dst_type);
+ if (params && params->conn) {
+ hci_conn_drop(params->conn);
+ params->conn = NULL;
+ }
conn->state = BT_CLOSED;
{
struct hci_conn_params *p;
- list_for_each_entry(p, &hdev->le_conn_params, list)
+ list_for_each_entry(p, &hdev->le_conn_params, list) {
+ if (p->conn) {
+ hci_conn_drop(p->conn);
+ p->conn = NULL;
+ }
list_del_init(&p->action);
+ }
BT_DBG("All LE pending actions cleared");
}
hci_dev_lock(hdev);
hci_inquiry_cache_flush(hdev);
- hci_conn_hash_flush(hdev);
hci_pend_le_actions_clear(hdev);
+ hci_conn_hash_flush(hdev);
hci_dev_unlock(hdev);
hci_notify(hdev, HCI_DEV_DOWN);
if (!params)
return;
+ if (params->conn)
+ hci_conn_drop(params->conn);
+
	list_del(&params->action);
	list_del(&params->list);
kfree(params);
struct hci_conn_params *params, *tmp;
list_for_each_entry_safe(params, tmp, &hdev->le_conn_params, list) {
+ if (params->conn)
+ hci_conn_drop(params->conn);
	list_del(&params->action);
	list_del(&params->list);
kfree(params);
hci_proto_connect_cfm(conn, ev->status);
params = hci_conn_params_lookup(hdev, &conn->dst, conn->dst_type);
- if (params)
+ if (params) {
list_del_init(¶ms->action);
+ if (params->conn) {
+ hci_conn_drop(params->conn);
+ params->conn = NULL;
+ }
+ }
unlock:
hci_update_background_scan(hdev);
conn = hci_connect_le(hdev, addr, addr_type, BT_SECURITY_LOW,
HCI_LE_AUTOCONN_TIMEOUT, HCI_ROLE_MASTER);
- if (!IS_ERR(conn))
+ if (!IS_ERR(conn)) {
+ /* Store the pointer since we don't really have any
+ * other owner of the object besides the params that
+ * triggered it. This way we can abort the connection if
+ * the parameters get removed and keep the reference
+ * count consistent once the connection is established.
+ */
+ params->conn = conn;
return;
+ }
switch (PTR_ERR(conn)) {
case -EBUSY:
#include "auth_x.h"
#include "auth_x_protocol.h"
-#define TEMP_TICKET_BUF_LEN 256
-
static void ceph_x_validate_tickets(struct ceph_auth_client *ac, int *pneed);
static int ceph_x_is_authenticated(struct ceph_auth_client *ac)
}
static int ceph_x_decrypt(struct ceph_crypto_key *secret,
- void **p, void *end, void *obuf, size_t olen)
+ void **p, void *end, void **obuf, size_t olen)
{
struct ceph_x_encrypt_header head;
size_t head_len = sizeof(head);
return -EINVAL;
dout("ceph_x_decrypt len %d\n", len);
- ret = ceph_decrypt2(secret, &head, &head_len, obuf, &olen,
- *p, len);
+ if (*obuf == NULL) {
+ *obuf = kmalloc(len, GFP_NOFS);
+ if (!*obuf)
+ return -ENOMEM;
+ olen = len;
+ }
+
+ ret = ceph_decrypt2(secret, &head, &head_len, *obuf, &olen, *p, len);
if (ret)
return ret;
if (head.struct_v != 1 || le64_to_cpu(head.magic) != CEPHX_ENC_MAGIC)
kfree(th);
}
-static int ceph_x_proc_ticket_reply(struct ceph_auth_client *ac,
- struct ceph_crypto_key *secret,
- void *buf, void *end)
+static int process_one_ticket(struct ceph_auth_client *ac,
+ struct ceph_crypto_key *secret,
+ void **p, void *end)
{
struct ceph_x_info *xi = ac->private;
- int num;
- void *p = buf;
+ int type;
+ u8 tkt_struct_v, blob_struct_v;
+ struct ceph_x_ticket_handler *th;
+ void *dbuf = NULL;
+ void *dp, *dend;
+ int dlen;
+ char is_enc;
+ struct timespec validity;
+ struct ceph_crypto_key old_key;
+ void *ticket_buf = NULL;
+ void *tp, *tpend;
+ struct ceph_timespec new_validity;
+ struct ceph_crypto_key new_session_key;
+ struct ceph_buffer *new_ticket_blob;
+ unsigned long new_expires, new_renew_after;
+ u64 new_secret_id;
int ret;
- char *dbuf;
- char *ticket_buf;
- u8 reply_struct_v;
- dbuf = kmalloc(TEMP_TICKET_BUF_LEN, GFP_NOFS);
- if (!dbuf)
- return -ENOMEM;
+ ceph_decode_need(p, end, sizeof(u32) + 1, bad);
- ret = -ENOMEM;
- ticket_buf = kmalloc(TEMP_TICKET_BUF_LEN, GFP_NOFS);
- if (!ticket_buf)
- goto out_dbuf;
+ type = ceph_decode_32(p);
+ dout(" ticket type %d %s\n", type, ceph_entity_type_name(type));
- ceph_decode_need(&p, end, 1 + sizeof(u32), bad);
- reply_struct_v = ceph_decode_8(&p);
- if (reply_struct_v != 1)
+ tkt_struct_v = ceph_decode_8(p);
+ if (tkt_struct_v != 1)
goto bad;
- num = ceph_decode_32(&p);
- dout("%d tickets\n", num);
- while (num--) {
- int type;
- u8 tkt_struct_v, blob_struct_v;
- struct ceph_x_ticket_handler *th;
- void *dp, *dend;
- int dlen;
- char is_enc;
- struct timespec validity;
- struct ceph_crypto_key old_key;
- void *tp, *tpend;
- struct ceph_timespec new_validity;
- struct ceph_crypto_key new_session_key;
- struct ceph_buffer *new_ticket_blob;
- unsigned long new_expires, new_renew_after;
- u64 new_secret_id;
-
- ceph_decode_need(&p, end, sizeof(u32) + 1, bad);
-
- type = ceph_decode_32(&p);
- dout(" ticket type %d %s\n", type, ceph_entity_type_name(type));
-
- tkt_struct_v = ceph_decode_8(&p);
- if (tkt_struct_v != 1)
- goto bad;
-
- th = get_ticket_handler(ac, type);
- if (IS_ERR(th)) {
- ret = PTR_ERR(th);
- goto out;
- }
- /* blob for me */
- dlen = ceph_x_decrypt(secret, &p, end, dbuf,
- TEMP_TICKET_BUF_LEN);
- if (dlen <= 0) {
- ret = dlen;
- goto out;
- }
- dout(" decrypted %d bytes\n", dlen);
- dend = dbuf + dlen;
- dp = dbuf;
+ th = get_ticket_handler(ac, type);
+ if (IS_ERR(th)) {
+ ret = PTR_ERR(th);
+ goto out;
+ }
- tkt_struct_v = ceph_decode_8(&dp);
- if (tkt_struct_v != 1)
- goto bad;
+ /* blob for me */
+ dlen = ceph_x_decrypt(secret, p, end, &dbuf, 0);
+ if (dlen <= 0) {
+ ret = dlen;
+ goto out;
+ }
+ dout(" decrypted %d bytes\n", dlen);
+ dp = dbuf;
+ dend = dp + dlen;
- memcpy(&old_key, &th->session_key, sizeof(old_key));
- ret = ceph_crypto_key_decode(&new_session_key, &dp, dend);
- if (ret)
- goto out;
+ tkt_struct_v = ceph_decode_8(&dp);
+ if (tkt_struct_v != 1)
+ goto bad;
- ceph_decode_copy(&dp, &new_validity, sizeof(new_validity));
- ceph_decode_timespec(&validity, &new_validity);
- new_expires = get_seconds() + validity.tv_sec;
- new_renew_after = new_expires - (validity.tv_sec / 4);
- dout(" expires=%lu renew_after=%lu\n", new_expires,
- new_renew_after);
+ memcpy(&old_key, &th->session_key, sizeof(old_key));
+ ret = ceph_crypto_key_decode(&new_session_key, &dp, dend);
+ if (ret)
+ goto out;
- /* ticket blob for service */
- ceph_decode_8_safe(&p, end, is_enc, bad);
- tp = ticket_buf;
- if (is_enc) {
- /* encrypted */
- dout(" encrypted ticket\n");
- dlen = ceph_x_decrypt(&old_key, &p, end, ticket_buf,
- TEMP_TICKET_BUF_LEN);
- if (dlen < 0) {
- ret = dlen;
- goto out;
- }
- dlen = ceph_decode_32(&tp);
- } else {
- /* unencrypted */
- ceph_decode_32_safe(&p, end, dlen, bad);
- ceph_decode_need(&p, end, dlen, bad);
- ceph_decode_copy(&p, ticket_buf, dlen);
+ ceph_decode_copy(&dp, &new_validity, sizeof(new_validity));
+ ceph_decode_timespec(&validity, &new_validity);
+ new_expires = get_seconds() + validity.tv_sec;
+ new_renew_after = new_expires - (validity.tv_sec / 4);
+ dout(" expires=%lu renew_after=%lu\n", new_expires,
+ new_renew_after);
+
+ /* ticket blob for service */
+ ceph_decode_8_safe(p, end, is_enc, bad);
+ if (is_enc) {
+ /* encrypted */
+ dout(" encrypted ticket\n");
+ dlen = ceph_x_decrypt(&old_key, p, end, &ticket_buf, 0);
+ if (dlen < 0) {
+ ret = dlen;
+ goto out;
}
- tpend = tp + dlen;
- dout(" ticket blob is %d bytes\n", dlen);
- ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad);
- blob_struct_v = ceph_decode_8(&tp);
- new_secret_id = ceph_decode_64(&tp);
- ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend);
- if (ret)
+ tp = ticket_buf;
+ dlen = ceph_decode_32(&tp);
+ } else {
+ /* unencrypted */
+ ceph_decode_32_safe(p, end, dlen, bad);
+ ticket_buf = kmalloc(dlen, GFP_NOFS);
+ if (!ticket_buf) {
+ ret = -ENOMEM;
goto out;
-
- /* all is well, update our ticket */
- ceph_crypto_key_destroy(&th->session_key);
- if (th->ticket_blob)
- ceph_buffer_put(th->ticket_blob);
- th->session_key = new_session_key;
- th->ticket_blob = new_ticket_blob;
- th->validity = new_validity;
- th->secret_id = new_secret_id;
- th->expires = new_expires;
- th->renew_after = new_renew_after;
- dout(" got ticket service %d (%s) secret_id %lld len %d\n",
- type, ceph_entity_type_name(type), th->secret_id,
- (int)th->ticket_blob->vec.iov_len);
- xi->have_keys |= th->service;
+ }
+ tp = ticket_buf;
+ ceph_decode_need(p, end, dlen, bad);
+ ceph_decode_copy(p, ticket_buf, dlen);
}
+ tpend = tp + dlen;
+ dout(" ticket blob is %d bytes\n", dlen);
+ ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad);
+ blob_struct_v = ceph_decode_8(&tp);
+ new_secret_id = ceph_decode_64(&tp);
+ ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend);
+ if (ret)
+ goto out;
+
+ /* all is well, update our ticket */
+ ceph_crypto_key_destroy(&th->session_key);
+ if (th->ticket_blob)
+ ceph_buffer_put(th->ticket_blob);
+ th->session_key = new_session_key;
+ th->ticket_blob = new_ticket_blob;
+ th->validity = new_validity;
+ th->secret_id = new_secret_id;
+ th->expires = new_expires;
+ th->renew_after = new_renew_after;
+ dout(" got ticket service %d (%s) secret_id %lld len %d\n",
+ type, ceph_entity_type_name(type), th->secret_id,
+ (int)th->ticket_blob->vec.iov_len);
+ xi->have_keys |= th->service;
- ret = 0;
out:
kfree(ticket_buf);
-out_dbuf:
kfree(dbuf);
return ret;
goto out;
}
+static int ceph_x_proc_ticket_reply(struct ceph_auth_client *ac,
+ struct ceph_crypto_key *secret,
+ void *buf, void *end)
+{
+ void *p = buf;
+ u8 reply_struct_v;
+ u32 num;
+ int ret;
+
+ ceph_decode_8_safe(&p, end, reply_struct_v, bad);
+ if (reply_struct_v != 1)
+ return -EINVAL;
+
+ ceph_decode_32_safe(&p, end, num, bad);
+ dout("%d tickets\n", num);
+
+ while (num--) {
+ ret = process_one_ticket(ac, secret, &p, end);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+
+bad:
+ return -EINVAL;
+}
+
static int ceph_x_build_authorizer(struct ceph_auth_client *ac,
struct ceph_x_ticket_handler *th,
struct ceph_x_authorizer *au)
struct ceph_x_ticket_handler *th;
int ret = 0;
struct ceph_x_authorize_reply reply;
+ void *preply = &reply;
void *p = au->reply_buf;
void *end = p + sizeof(au->reply_buf);
th = get_ticket_handler(ac, au->service);
if (IS_ERR(th))
return PTR_ERR(th);
- ret = ceph_x_decrypt(&th->session_key, &p, end, &reply, sizeof(reply));
+ ret = ceph_x_decrypt(&th->session_key, &p, end, &preply, sizeof(reply));
if (ret < 0)
return ret;
if (ret != sizeof(reply))
if (!m) {
pr_info("alloc_msg unknown type %d\n", type);
*skip = 1;
+ } else if (front_len > m->front_alloc_len) {
+ pr_warning("mon_alloc_msg front %d > prealloc %d (%u#%llu)\n",
+ front_len, m->front_alloc_len,
+ (unsigned int)con->peer_name.type,
+ le64_to_cpu(con->peer_name.num));
+ ceph_msg_put(m);
+ m = ceph_msg_new(type, front_len, GFP_NOFS, false);
}
+
return m;
}
EXPORT_SYMBOL(__skb_checksum_complete);
/**
- * skb_copy_and_csum_datagram_iovec - Copy and checkum skb to user iovec.
+ * skb_copy_and_csum_datagram_iovec - Copy and checksum skb to user iovec.
* @skb: skbuff
* @hlen: hardware length
* @iov: io vector
return harmonize_features(skb, features);
}
- features &= (skb->dev->vlan_features | NETIF_F_HW_VLAN_CTAG_TX |
- NETIF_F_HW_VLAN_STAG_TX);
+ features = netdev_intersect_features(features,
+ skb->dev->vlan_features |
+ NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_HW_VLAN_STAG_TX);
if (protocol == htons(ETH_P_8021Q) || protocol == htons(ETH_P_8021AD))
- features &= NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST |
- NETIF_F_GEN_CSUM | NETIF_F_HW_VLAN_CTAG_TX |
- NETIF_F_HW_VLAN_STAG_TX;
+ features = netdev_intersect_features(features,
+ NETIF_F_SG |
+ NETIF_F_HIGHDMA |
+ NETIF_F_FRAGLIST |
+ NETIF_F_GEN_CSUM |
+ NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_HW_VLAN_STAG_TX);
return harmonize_features(skb, features);
}
if (adj->master)
sysfs_remove_link(&(dev->dev.kobj), "master");
- if (netdev_adjacent_is_neigh_list(dev, dev_list))
+ if (netdev_adjacent_is_neigh_list(dev, dev_list) &&
+	    net_eq(dev_net(dev), dev_net(adj_dev)))
netdev_adjacent_sysfs_del(dev, adj_dev->name, dev_list);
list_del_rcu(&adj->list);
}
EXPORT_SYMBOL(netdev_upper_dev_unlink);
+void netdev_adjacent_add_links(struct net_device *dev)
+{
+ struct netdev_adjacent *iter;
+
+ struct net *net = dev_net(dev);
+
+ list_for_each_entry(iter, &dev->adj_list.upper, list) {
+		if (!net_eq(net, dev_net(iter->dev)))
+ continue;
+ netdev_adjacent_sysfs_add(iter->dev, dev,
+ &iter->dev->adj_list.lower);
+ netdev_adjacent_sysfs_add(dev, iter->dev,
+ &dev->adj_list.upper);
+ }
+
+ list_for_each_entry(iter, &dev->adj_list.lower, list) {
+ if (!net_eq(net,dev_net(iter->dev)))
+ continue;
+ netdev_adjacent_sysfs_add(iter->dev, dev,
+ &iter->dev->adj_list.upper);
+ netdev_adjacent_sysfs_add(dev, iter->dev,
+ &dev->adj_list.lower);
+ }
+}
+
+void netdev_adjacent_del_links(struct net_device *dev)
+{
+ struct netdev_adjacent *iter;
+
+ struct net *net = dev_net(dev);
+
+ list_for_each_entry(iter, &dev->adj_list.upper, list) {
+		if (!net_eq(net, dev_net(iter->dev)))
+ continue;
+ netdev_adjacent_sysfs_del(iter->dev, dev->name,
+ &iter->dev->adj_list.lower);
+ netdev_adjacent_sysfs_del(dev, iter->dev->name,
+ &dev->adj_list.upper);
+ }
+
+ list_for_each_entry(iter, &dev->adj_list.lower, list) {
+ if (!net_eq(net,dev_net(iter->dev)))
+ continue;
+ netdev_adjacent_sysfs_del(iter->dev, dev->name,
+ &iter->dev->adj_list.upper);
+ netdev_adjacent_sysfs_del(dev, iter->dev->name,
+ &dev->adj_list.lower);
+ }
+}
+
void netdev_adjacent_rename_links(struct net_device *dev, char *oldname)
{
struct netdev_adjacent *iter;
+ struct net *net = dev_net(dev);
+
list_for_each_entry(iter, &dev->adj_list.upper, list) {
+		if (!net_eq(net, dev_net(iter->dev)))
+ continue;
netdev_adjacent_sysfs_del(iter->dev, oldname,
&iter->dev->adj_list.lower);
netdev_adjacent_sysfs_add(iter->dev, dev,
}
list_for_each_entry(iter, &dev->adj_list.lower, list) {
+		if (!net_eq(net, dev_net(iter->dev)))
+ continue;
netdev_adjacent_sysfs_del(iter->dev, oldname,
&iter->dev->adj_list.upper);
netdev_adjacent_sysfs_add(iter->dev, dev,
/* Send a netdev-removed uevent to the old namespace */
kobject_uevent(&dev->dev.kobj, KOBJ_REMOVE);
+ netdev_adjacent_del_links(dev);
/* Actually switch the network namespace */
dev_net_set(dev, net);
/* Send a netdev-add uevent to the new namespace */
kobject_uevent(&dev->dev.kobj, KOBJ_ADD);
+ netdev_adjacent_add_links(dev);
/* Fixup kobjects */
err = device_rename(&dev->dev, dev->name);
* as destination. A new timer with the interval specified in the
* configuration TLV is created. Upon each interval, the latest statistics
* will be read from &bstats and the estimated rate will be stored in
- * &rate_est with the statistics lock grabed during this period.
+ * &rate_est with the statistics lock grabbed during this period.
*
* Returns 0 on success or a negative error code.
*
* @st: application specific statistics data
* @len: length of data
*
- * Appends the application sepecific statistics to the top level TLV created by
+ * Appends the application specific statistics to the top level TLV created by
* gnet_stats_start_copy() and remembers the data for XSTATS if the dumping
* handle is in backward compatibility mode.
*
* skb_seq_read() will return the remaining part of the block.
*
* Note 1: The size of each block of data returned can be arbitrary,
- * this limitation is the cost for zerocopy seqeuental
+ * this limitation is the cost for zerocopy sequential
* reads of potentially non linear data.
*
* Note 2: Fragment lists within fragments are not implemented
/**
* skb_append_datato_frags - append the user data to a skb
* @sk: sock structure
- * @skb: skb structure to be appened with user data.
+ * @skb: skb structure to be appended with user data.
* @getfrag: call back function to be used for getting the user data
* @from: pointer to user message iov
* @length: length of the iov message
/**
* sk_capable - Socket global capability test
* @sk: Socket to use a capability on or through
- * @cap: The global capbility to use
+ * @cap: The global capability to use
*
* Test to see if the opener of the socket had when the socket was
* created and the current process has the capability @cap in all user
* @sk: Socket to use a capability on or through
* @cap: The capability to use
*
- * Test to see if the opener of the socket had when the socke was created
+ * Test to see if the opener of the socket had when the socket was created
* and the current process has the capability @cap over the network namespace
* the socket is a member of.
*/
order);
if (page)
goto fill_page;
+ /* Do not retry other high order allocations */
+ order = 1;
+ max_page_order = 0;
}
order--;
}
* no guarantee that allocations succeed. Therefore, @sz MUST be
* less or equal than PAGE_SIZE.
*/
-bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
+bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
{
- int order;
-
if (pfrag->page) {
if (atomic_read(&pfrag->page->_count) == 1) {
pfrag->offset = 0;
put_page(pfrag->page);
}
- order = SKB_FRAG_PAGE_ORDER;
- do {
- gfp_t gfp = prio;
-
- if (order)
- gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
- pfrag->page = alloc_pages(gfp, order);
+ pfrag->offset = 0;
+ if (SKB_FRAG_PAGE_ORDER) {
+ pfrag->page = alloc_pages(gfp | __GFP_COMP |
+ __GFP_NOWARN | __GFP_NORETRY,
+ SKB_FRAG_PAGE_ORDER);
if (likely(pfrag->page)) {
- pfrag->offset = 0;
- pfrag->size = PAGE_SIZE << order;
+ pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
return true;
}
- } while (--order >= 0);
-
+ }
+ pfrag->page = alloc_page(gfp);
+ if (likely(pfrag->page)) {
+ pfrag->size = PAGE_SIZE;
+ return true;
+ }
return false;
}
EXPORT_SYMBOL(skb_page_frag_refill);
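
The rewrite above abandons the order-by-order retry loop: try the large frag size once, opportunistically (__GFP_NOWARN | __GFP_NORETRY), and on failure fall straight back to a single page. A hypothetical user-space model of that policy, with malloc() in place of alloc_pages():

    #include <stdlib.h>
    #include <stddef.h>

    #define PAGE_SIZE  4096UL
    #define FRAG_ORDER 3        /* try 32 KiB first, like SKB_FRAG_PAGE_ORDER */

    static void *frag_refill(size_t *size)
    {
            void *p = malloc(PAGE_SIZE << FRAG_ORDER);  /* opportunistic */

            if (p) {
                    *size = PAGE_SIZE << FRAG_ORDER;
                    return p;
            }
            p = malloc(PAGE_SIZE);                      /* minimal fallback */
            if (p)
                    *size = PAGE_SIZE;
            return p;
    }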
return ERR_PTR(-rc);
}
} else {
- frag = ERR_PTR(ENOMEM);
+ frag = ERR_PTR(-ENOMEM);
}
return frag;
/* Frame Control + Sequence Number + Address fields + Security Header */
dev->hard_header_len = 2 + 1 + 20 + 14;
dev->needed_tailroom = 2; /* FCS */
- dev->mtu = 1281;
+ dev->mtu = IPV6_MIN_MTU;
dev->tx_queue_len = 0;
dev->flags = IFF_BROADCAST | IFF_MULTICAST;
dev->watchdog_timeo = 0;
struct net *net = dev_net(skb->dev);
struct lowpan_frag_info *frag_info = lowpan_cb(skb);
struct ieee802154_addr source, dest;
- struct netns_ieee802154_lowpan *ieee802154_lowpan =
- net_ieee802154_lowpan(net);
int err;
source = mac_cb(skb)->source;
if (err < 0)
goto err;
- if (frag_info->d_size > ieee802154_lowpan->max_dsize)
+ if (frag_info->d_size > IPV6_MIN_MTU) {
+ net_warn_ratelimited("lowpan_frag_rcv: datagram size exceeds MTU\n");
goto err;
+ }
fq = fq_find(net, frag_info, &source, &dest);
if (fq != NULL) {
.mode = 0644,
.proc_handler = proc_dointvec_jiffies,
},
- {
- .procname = "6lowpanfrag_max_datagram_size",
- .data = &init_net.ieee802154_lowpan.max_dsize,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec
- },
{ }
};
table[1].data = &ieee802154_lowpan->frags.low_thresh;
table[1].extra2 = &ieee802154_lowpan->frags.high_thresh;
table[2].data = &ieee802154_lowpan->frags.timeout;
- table[3].data = &ieee802154_lowpan->max_dsize;
/* Don't export sysctls to unprivileged users */
if (net->user_ns != &init_user_ns)
ieee802154_lowpan->frags.high_thresh = IPV6_FRAG_HIGH_THRESH;
ieee802154_lowpan->frags.low_thresh = IPV6_FRAG_LOW_THRESH;
ieee802154_lowpan->frags.timeout = IPV6_FRAG_TIMEOUT;
- ieee802154_lowpan->max_dsize = 0xFFFF;
inet_frags_init_net(&ieee802154_lowpan->frags);
help
This option enables the ARP support for nf_tables.
+config NF_NAT_IPV4
+ tristate "IPv4 NAT"
+ depends on NF_CONNTRACK_IPV4
+ default m if NETFILTER_ADVANCED=n
+ select NF_NAT
+ help
+ The IPv4 NAT option allows masquerading, port forwarding and other
+ forms of full Network Address Port Translation. This can be
+ controlled by iptables or nft.
+
+if NF_NAT_IPV4
+
+config NF_NAT_SNMP_BASIC
+ tristate "Basic SNMP-ALG support"
+ depends on NF_CONNTRACK_SNMP
+ depends on NETFILTER_ADVANCED
+ default NF_NAT && NF_CONNTRACK_SNMP
+ ---help---
+
+ This module implements an Application Layer Gateway (ALG) for
+ SNMP payloads. In conjunction with NAT, it allows a network
+ management system to access multiple private networks with
+ conflicting addresses. It works by modifying IP addresses
+ inside SNMP payloads to match IP-layer NAT mapping.
+
+	  This is the "basic" form of SNMP-ALG, as described in RFC 2962.
+
+ To compile it as a module, choose M here. If unsure, say N.
+
+config NF_NAT_PROTO_GRE
+ tristate
+ depends on NF_CT_PROTO_GRE
+
+config NF_NAT_PPTP
+ tristate
+ depends on NF_CONNTRACK
+ default NF_CONNTRACK_PPTP
+ select NF_NAT_PROTO_GRE
+
+config NF_NAT_H323
+ tristate
+ depends on NF_CONNTRACK
+ default NF_CONNTRACK_H323
+
+endif # NF_NAT_IPV4
+
config IP_NF_IPTABLES
tristate "IP tables support (required for filtering/masq/NAT)"
default m if NETFILTER_ADVANCED=n
To compile it as a module, choose M here. If unsure, say N.
# NAT + specific targets: nf_conntrack
-config NF_NAT_IPV4
- tristate "IPv4 NAT"
+config IP_NF_NAT
+ tristate "iptables NAT support"
depends on NF_CONNTRACK_IPV4
default m if NETFILTER_ADVANCED=n
select NF_NAT
+ select NF_NAT_IPV4
+ select NETFILTER_XT_NAT
help
- The IPv4 NAT option allows masquerading, port forwarding and other
- forms of full Network Address Port Translation. It is controlled by
- the `nat' table in iptables: see the man page for iptables(8).
+ This enables the `nat' table in iptables. This allows masquerading,
+ port forwarding and other forms of full Network Address Port
+ Translation.
To compile it as a module, choose M here. If unsure, say N.
-if NF_NAT_IPV4
+if IP_NF_NAT
config IP_NF_TARGET_MASQUERADE
tristate "MASQUERADE target support"
(e.g. when running oldconfig). It selects
CONFIG_NETFILTER_XT_TARGET_REDIRECT.
-endif
-
-config NF_NAT_SNMP_BASIC
- tristate "Basic SNMP-ALG support"
- depends on NF_CONNTRACK_SNMP && NF_NAT_IPV4
- depends on NETFILTER_ADVANCED
- default NF_NAT && NF_CONNTRACK_SNMP
- ---help---
-
- This module implements an Application Layer Gateway (ALG) for
- SNMP payloads. In conjunction with NAT, it allows a network
- management system to access multiple private networks with
- conflicting addresses. It works by modifying IP addresses
- inside SNMP payloads to match IP-layer NAT mapping.
-
- This is the "basic" form of SNMP-ALG, as described in RFC 2962
-
- To compile it as a module, choose M here. If unsure, say N.
-
-# If they want FTP, set to $CONFIG_IP_NF_NAT (m or y),
-# or $CONFIG_IP_NF_FTP (m or y), whichever is weaker.
-# From kconfig-language.txt:
-#
-# <expr> '&&' <expr> (6)
-#
-# (6) Returns the result of min(/expr/, /expr/).
-
-config NF_NAT_PROTO_GRE
- tristate
- depends on NF_NAT_IPV4 && NF_CT_PROTO_GRE
-
-config NF_NAT_PPTP
- tristate
- depends on NF_CONNTRACK && NF_NAT_IPV4
- default NF_NAT_IPV4 && NF_CONNTRACK_PPTP
- select NF_NAT_PROTO_GRE
-
-config NF_NAT_H323
- tristate
- depends on NF_CONNTRACK && NF_NAT_IPV4
- default NF_NAT_IPV4 && NF_CONNTRACK_H323
+endif # IP_NF_NAT
# mangle + specific targets
config IP_NF_MANGLE
# the three instances of ip_tables
obj-$(CONFIG_IP_NF_FILTER) += iptable_filter.o
obj-$(CONFIG_IP_NF_MANGLE) += iptable_mangle.o
-obj-$(CONFIG_NF_NAT_IPV4) += iptable_nat.o
+obj-$(CONFIG_IP_NF_NAT) += iptable_nat.o
obj-$(CONFIG_IP_NF_RAW) += iptable_raw.o
obj-$(CONFIG_IP_NF_SECURITY) += iptable_security.o
addrconf_mod_dad_work(ifp, 0);
}
-/* Join to solicited addr multicast group. */
-
+/* Join to solicited addr multicast group.
+ * caller must hold RTNL */
void addrconf_join_solict(struct net_device *dev, const struct in6_addr *addr)
{
struct in6_addr maddr;
- ASSERT_RTNL();
-
if (dev->flags&(IFF_LOOPBACK|IFF_NOARP))
return;
ipv6_dev_mc_inc(dev, &maddr);
}
+/* caller must hold RTNL */
void addrconf_leave_solict(struct inet6_dev *idev, const struct in6_addr *addr)
{
struct in6_addr maddr;
- ASSERT_RTNL();
-
if (idev->dev->flags&(IFF_LOOPBACK|IFF_NOARP))
return;
__ipv6_dev_mc_dec(idev, &maddr);
}
+/* caller must hold RTNL */
static void addrconf_join_anycast(struct inet6_ifaddr *ifp)
{
struct in6_addr addr;
- ASSERT_RTNL();
-
if (ifp->prefix_len >= 127) /* RFC 6164 */
return;
ipv6_addr_prefix(&addr, &ifp->addr, ifp->prefix_len);
ipv6_dev_ac_inc(ifp->idev->dev, &addr);
}
+/* caller must hold RTNL */
static void addrconf_leave_anycast(struct inet6_ifaddr *ifp)
{
struct in6_addr addr;
- ASSERT_RTNL();
-
if (ifp->prefix_len >= 127) /* RFC 6164 */
return;
ipv6_addr_prefix(&addr, &ifp->addr, ifp->prefix_len);
addrconf_leave_solict(ifp->idev, &ifp->addr);
if (!ipv6_addr_any(&ifp->peer_addr)) {
struct rt6_info *rt;
- struct net_device *dev = ifp->idev->dev;
-
- rt = rt6_lookup(dev_net(dev), &ifp->peer_addr, NULL,
- dev->ifindex, 1);
- if (rt) {
- dst_hold(&rt->dst);
- if (ip6_del_rt(rt))
- dst_free(&rt->dst);
- }
+
+ rt = addrconf_get_prefix_route(&ifp->peer_addr, 128,
+ ifp->idev->dev, 0, 0);
+ if (rt && ip6_del_rt(rt))
+ dst_free(&rt->dst);
}
dst_hold(&ifp->rt->dst);
pac->acl_next = NULL;
pac->acl_addr = *addr;
+ rtnl_lock();
rcu_read_lock();
if (ifindex == 0) {
struct rt6_info *rt;
error:
rcu_read_unlock();
+ rtnl_unlock();
if (pac)
sock_kfree_s(sk, pac, sizeof(*pac));
return err;
spin_unlock_bh(&ipv6_sk_ac_lock);
+ rtnl_lock();
rcu_read_lock();
dev = dev_get_by_index_rcu(net, pac->acl_ifindex);
if (dev)
ipv6_dev_ac_dec(dev, &pac->acl_addr);
rcu_read_unlock();
+ rtnl_unlock();
sock_kfree_s(sk, pac, sizeof(*pac));
return 0;
spin_unlock_bh(&ipv6_sk_ac_lock);
prev_index = 0;
+ rtnl_lock();
rcu_read_lock();
while (pac) {
struct ipv6_ac_socklist *next = pac->acl_next;
pac = next;
}
rcu_read_unlock();
+ rtnl_unlock();
}
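Taken together, these hunks move RTNL handling out of the device-level helpers and up to the socket-facing entry points, which now take the lock themselves. A compressed sketch of the resulting convention (function names from the hunks above and below; bodies elided):

	/* Socket-level entry point acquires RTNL:
	 *
	 *   int ipv6_sock_ac_join(...)
	 *   {
	 *           rtnl_lock();
	 *           rcu_read_lock();
	 *           dev = dev_get_by_index_rcu(net, ifindex);
	 *           err = ipv6_dev_ac_inc(dev, &addr);  // expects RTNL held
	 *           rcu_read_unlock();
	 *           rtnl_unlock();
	 *           ...
	 *   }
	 *
	 * while the device-level helper now asserts the requirement:
	 *
	 *   int ipv6_dev_ac_inc(...)
	 *   {
	 *           ASSERT_RTNL();
	 *           ...
	 *   }
	 */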
static void aca_put(struct ifacaddr6 *ac)
struct rt6_info *rt;
int err;
+ ASSERT_RTNL();
+
idev = in6_dev_get(dev);
if (idev == NULL)
{
struct ifacaddr6 *aca, *prev_aca;
+ ASSERT_RTNL();
+
write_lock_bh(&idev->lock);
prev_aca = NULL;
for (aca = idev->ac_list; aca; aca = aca->aca_next) {
if (dst->flags & DST_HOST) {
mp = dst_metrics_write_ptr(dst);
} else {
- mp = kzalloc(sizeof(u32) * RTAX_MAX, GFP_KERNEL);
+ mp = kzalloc(sizeof(u32) * RTAX_MAX, GFP_ATOMIC);
if (!mp)
return -ENOMEM;
dst_init_metrics(dst, mp, 0);
mc_lst->next = NULL;
mc_lst->addr = *addr;
+ rtnl_lock();
rcu_read_lock();
if (ifindex == 0) {
struct rt6_info *rt;
if (dev == NULL) {
rcu_read_unlock();
+ rtnl_unlock();
sock_kfree_s(sk, mc_lst, sizeof(*mc_lst));
return -ENODEV;
}
if (err) {
rcu_read_unlock();
+ rtnl_unlock();
sock_kfree_s(sk, mc_lst, sizeof(*mc_lst));
return err;
}
spin_unlock(&ipv6_sk_mc_lock);
rcu_read_unlock();
+ rtnl_unlock();
return 0;
}
if (!ipv6_addr_is_multicast(addr))
return -EINVAL;
+ rtnl_lock();
spin_lock(&ipv6_sk_mc_lock);
for (lnk = &np->ipv6_mc_list;
(mc_lst = rcu_dereference_protected(*lnk,
} else
(void) ip6_mc_leave_src(sk, mc_lst, NULL);
rcu_read_unlock();
+ rtnl_unlock();
+
atomic_sub(sizeof(*mc_lst), &sk->sk_omem_alloc);
kfree_rcu(mc_lst, rcu);
return 0;
}
}
spin_unlock(&ipv6_sk_mc_lock);
+ rtnl_unlock();
return -EADDRNOTAVAIL;
}
if (!rcu_access_pointer(np->ipv6_mc_list))
return;
+ rtnl_lock();
spin_lock(&ipv6_sk_mc_lock);
while ((mc_lst = rcu_dereference_protected(np->ipv6_mc_list,
lockdep_is_held(&ipv6_sk_mc_lock))) != NULL) {
spin_lock(&ipv6_sk_mc_lock);
}
spin_unlock(&ipv6_sk_mc_lock);
+ rtnl_unlock();
}
int ip6_mc_source(int add, int omode, struct sock *sk,
struct ifmcaddr6 *mc;
struct inet6_dev *idev;
+ ASSERT_RTNL();
+
/* we need to take a reference on idev */
idev = in6_dev_get(dev);
{
struct ifmcaddr6 *ma, **map;
+ ASSERT_RTNL();
+
write_lock_bh(&idev->lock);
for (map = &idev->mc_list; (ma=*map) != NULL; map = &ma->next) {
if (ipv6_addr_equal(&ma->mca_addr, addr)) {
config NF_LOG_IPV6
tristate "IPv6 packet logging"
- depends on NETFILTER_ADVANCED
+ default m if NETFILTER_ADVANCED=n
select NF_LOG_COMMON
+config NF_NAT_IPV6
+ tristate "IPv6 NAT"
+ depends on NF_CONNTRACK_IPV6
+ depends on NETFILTER_ADVANCED
+ select NF_NAT
+ help
+ The IPv6 NAT option allows masquerading, port forwarding and other
+ forms of full Network Address Port Translation. This can be
+ controlled by iptables or nft.
+
config IP6_NF_IPTABLES
tristate "IP6 tables support (required for filtering)"
depends on INET && IPV6
If unsure, say N.
-config NF_NAT_IPV6
- tristate "IPv6 NAT"
+config IP6_NF_NAT
+ tristate "ip6tables NAT support"
depends on NF_CONNTRACK_IPV6
depends on NETFILTER_ADVANCED
select NF_NAT
+ select NF_NAT_IPV6
+ select NETFILTER_XT_NAT
help
- The IPv6 NAT option allows masquerading, port forwarding and other
- forms of full Network Address Port Translation. It is controlled by
- the `nat' table in ip6tables, see the man page for ip6tables(8).
+ This enables the `nat' table in ip6tables. This allows masquerading,
+ port forwarding and other forms of full Network Address Port
+ Translation.
To compile it as a module, choose M here. If unsure, say N.
-if NF_NAT_IPV6
+if IP6_NF_NAT
config IP6_NF_TARGET_MASQUERADE
tristate "MASQUERADE target support"
To compile it as a module, choose M here. If unsure, say N.
-endif # NF_NAT_IPV6
+endif # IP6_NF_NAT
endif # IP6_NF_IPTABLES
obj-$(CONFIG_IP6_NF_MANGLE) += ip6table_mangle.o
obj-$(CONFIG_IP6_NF_RAW) += ip6table_raw.o
obj-$(CONFIG_IP6_NF_SECURITY) += ip6table_security.o
-obj-$(CONFIG_NF_NAT_IPV6) += ip6table_nat.o
+obj-$(CONFIG_IP6_NF_NAT) += ip6table_nat.o
# objects for l3 independent conntrack
nf_conntrack_ipv6-y := nf_conntrack_l3proto_ipv6.o nf_conntrack_proto_icmpv6.o
/* If PMTU discovery was enabled, use the MTU that was discovered */
dst = sk_dst_get(tunnel->sock);
if (dst != NULL) {
- u32 pmtu = dst_mtu(__sk_dst_get(tunnel->sock));
+ u32 pmtu = dst_mtu(dst);
+
if (pmtu != 0)
session->mtu = session->mru = pmtu -
PPPOL2TP_HEADER_OVERHEAD;
continue;
if (rcu_access_pointer(sdata->vif.chanctx_conf) != conf)
continue;
+ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+ continue;
if (!compat)
compat = &sdata->vif.bss_conf.chandef;
list_del(&sdata->reserved_chanctx_list);
list_move(&sdata->assigned_chanctx_list,
- &new_ctx->assigned_vifs);
+ &ctx->assigned_vifs);
sdata->reserved_chanctx = NULL;
ieee80211_vif_chanctx_reservation_complete(sdata);
p += scnprintf(p, sizeof(buf) + buf - p, "next dialog_token: %#02x\n",
sta->ampdu_mlme.dialog_token_allocator + 1);
p += scnprintf(p, sizeof(buf) + buf - p,
- "TID\t\tRX active\tDTKN\tSSN\t\tTX\tDTKN\tpending\n");
+ "TID\t\tRX\tDTKN\tSSN\t\tTX\tDTKN\tpending\n");
for (i = 0; i < IEEE80211_NUM_TIDS; i++) {
tid_rx = rcu_dereference(sta->ampdu_mlme.tid_rx[i]);
if (sta) {
u16 last_seq;
- last_seq = le16_to_cpu(
- sta->last_seq_ctrl[rx_agg->tid]);
+ last_seq = IEEE80211_SEQ_TO_SN(le16_to_cpu(
+ sta->last_seq_ctrl[rx_agg->tid]));
__ieee80211_start_rx_ba_session(sta,
0, 0,
if (!matches_local)
event = CNF_RJCT;
if (!mesh_plink_free_count(sdata) ||
- (sta->llid != llid || sta->plid != plid))
+ sta->llid != llid ||
+ (sta->plid && sta->plid != plid))
event = CNF_IGNR;
else
event = CNF_ACPT;
goto unlock_rcu;
}
+ /* 802.11-2012 13.3.7.2 - update plid on CNF if not set */
+ if (!sta->plid && event == CNF_ACPT)
+ sta->plid = plid;
+
changed |= mesh_plink_fsm(sdata, sta, event);
unlock_rcu:
rcu_read_unlock();
if (bss->wmm_used && bss->uapsd_supported &&
- (sdata->local->hw.flags & IEEE80211_HW_SUPPORTS_UAPSD) &&
- sdata->wmm_acm != 0xff) {
+ (sdata->local->hw.flags & IEEE80211_HW_SUPPORTS_UAPSD)) {
assoc_data->uapsd = true;
ifmgd->flags |= IEEE80211_STA_UAPSD_ENABLED;
} else {
unsigned long flags;
struct ps_data *ps;
- if (sdata->vif.type == NL80211_IFTYPE_AP ||
- sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
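+	/* An AP_VLAN interface shares powersave state with its owning AP,
+	 * so resolve the VLAN to that AP before selecting ps below. */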
+ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+ sdata = container_of(sdata->bss, struct ieee80211_sub_if_data,
+ u.ap);
+
+ if (sdata->vif.type == NL80211_IFTYPE_AP)
ps = &sdata->bss->ps;
else if (ieee80211_vif_is_mesh(&sdata->vif))
ps = &sdata->u.mesh.ps;
skb->pkt_type = PACKET_OTHERHOST;
break;
default:
- break;
+ spin_unlock_bh(&sdata->mib_lock);
+ pr_debug("invalid dest mode\n");
+ kfree_skb(skb);
+ return NET_RX_DROP;
}
spin_unlock_bh(&sdata->mib_lock);
ret = mac802154_parse_frame_start(skb, &hdr);
if (ret) {
pr_debug("got invalid frame\n");
+ kfree_skb(skb);
return;
}
config NFT_NAT
depends on NF_TABLES
depends on NF_CONNTRACK
- depends on NF_NAT
+ select NF_NAT
tristate "Netfilter nf_tables nat module"
help
This option adds the "nat" expression that you can use to perform
config NETFILTER_XT_TARGET_LOG
tristate "LOG target support"
- depends on NF_LOG_IPV4 && NF_LOG_IPV6
+ select NF_LOG_COMMON
+ select NF_LOG_IPV4
+ select NF_LOG_IPV6 if IPV6
default m if NETFILTER_ADVANCED=n
help
This option adds a `LOG' target, which allows you to create rules in
(e.g. when running oldconfig). It selects
CONFIG_NETFILTER_XT_MARK (combined mark/MARK module).
+config NETFILTER_XT_NAT
+ tristate '"SNAT and DNAT" targets support'
+ depends on NF_NAT
+ ---help---
+ This option enables the SNAT and DNAT targets.
+
+ To compile it as a module, choose M here. If unsure, say N.
+
config NETFILTER_XT_TARGET_NETMAP
tristate '"NETMAP" target support'
depends on NF_NAT
obj-$(CONFIG_NETFILTER_XT_MARK) += xt_mark.o
obj-$(CONFIG_NETFILTER_XT_CONNMARK) += xt_connmark.o
obj-$(CONFIG_NETFILTER_XT_SET) += xt_set.o
-obj-$(CONFIG_NF_NAT) += xt_nat.o
+obj-$(CONFIG_NETFILTER_XT_NAT) += xt_nat.o
# targets
obj-$(CONFIG_NETFILTER_XT_TARGET_AUDIT) += xt_AUDIT.o
struct list_head nf_hooks[NFPROTO_NUMPROTO][NF_MAX_HOOKS] __read_mostly;
EXPORT_SYMBOL(nf_hooks);
-#if defined(CONFIG_JUMP_LABEL)
+#ifdef HAVE_JUMP_LABEL
struct static_key nf_hooks_needed[NFPROTO_NUMPROTO][NF_MAX_HOOKS];
EXPORT_SYMBOL(nf_hooks_needed);
#endif
}
list_add_rcu(&reg->list, elem->list.prev);
mutex_unlock(&nf_hook_mutex);
-#if defined(CONFIG_JUMP_LABEL)
+#ifdef HAVE_JUMP_LABEL
static_key_slow_inc(&nf_hooks_needed[reg->pf][reg->hooknum]);
#endif
return 0;
mutex_lock(&nf_hook_mutex);
list_del_rcu(&reg->list);
mutex_unlock(&nf_hook_mutex);
-#if defined(CONFIG_JUMP_LABEL)
+#ifdef HAVE_JUMP_LABEL
static_key_slow_dec(&nf_hooks_needed[reg->pf][reg->hooknum]);
#endif
synchronize_net();
{
.hook = ip_vs_local_reply6,
.owner = THIS_MODULE,
- .pf = NFPROTO_IPV4,
+ .pf = NFPROTO_IPV6,
.hooknum = NF_INET_LOCAL_OUT,
.priority = NF_IP6_PRI_NAT_DST + 1,
},
#include <net/route.h> /* for ip_route_output */
#include <net/ipv6.h>
#include <net/ip6_route.h>
+#include <net/ip_tunnels.h>
#include <net/addrconf.h>
#include <linux/icmpv6.h>
#include <linux/netfilter.h>
old_iph = ip_hdr(skb);
}
- skb->transport_header = skb->network_header;
-
/* fix old IP header checksum */
ip_send_check(old_iph);
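+	/* Note: iptunnel_handle_offloads() frees the skb and returns an
+	 * ERR_PTR() on failure, which is why the tx_error path below only
+	 * calls kfree_skb() when !IS_ERR(skb). */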
+ skb = iptunnel_handle_offloads(skb, false, SKB_GSO_IPIP);
+ if (IS_ERR(skb))
+ goto tx_error;
+
+ skb->transport_header = skb->network_header;
+
skb_push(skb, sizeof(struct iphdr));
skb_reset_network_header(skb);
memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
return NF_STOLEN;
tx_error:
- kfree_skb(skb);
+ if (!IS_ERR(skb))
+ kfree_skb(skb);
rcu_read_unlock();
LeaveFunction(10);
return NF_STOLEN;
old_iph = ipv6_hdr(skb);
}
+ /* GSO: we need to provide proper SKB_GSO_ value for IPv6 */
+ skb = iptunnel_handle_offloads(skb, false, 0); /* SKB_GSO_SIT/IPV6 */
+ if (IS_ERR(skb))
+ goto tx_error;
+
skb->transport_header = skb->network_header;
skb_push(skb, sizeof(struct ipv6hdr));
return NF_STOLEN;
tx_error:
- kfree_skb(skb);
+ if (!IS_ERR(skb))
+ kfree_skb(skb);
rcu_read_unlock();
LeaveFunction(10);
return NF_STOLEN;
if (info->invert & ~1)
return -EINVAL;
- return info->id ? 0 : -EINVAL;
+ return 0;
}
static bool
static int make_writable(struct sk_buff *skb, int write_len)
{
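+	/* Ensure the bytes about to be rewritten are in the skb's linear
+	 * data area before checking whether the skb needs to be copied. */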
+ if (!pskb_may_pull(skb, write_len))
+ return -ENOMEM;
+
if (!skb_cloned(skb) || skb_clone_writable(skb, write_len))
return 0;
vlan_set_encap_proto(skb, vhdr);
skb->mac_header += VLAN_HLEN;
+ if (skb_network_offset(skb) < ETH_HLEN)
+ skb_set_network_header(skb, ETH_HLEN);
skb_reset_mac_len(skb);
return 0;
upcall.key = &key;
upcall.userdata = NULL;
upcall.portid = ovs_vport_find_upcall_portid(p, skb);
- ovs_dp_upcall(dp, skb, &upcall);
- consume_skb(skb);
+ error = ovs_dp_upcall(dp, skb, &upcall);
+ if (unlikely(error))
+ kfree_skb(skb);
+ else
+ consume_skb(skb);
stats_counter = &stats->n_missed;
goto out;
}
{
struct ovs_header *upcall;
struct sk_buff *nskb = NULL;
- struct sk_buff *user_skb; /* to be queued to userspace */
+ struct sk_buff *user_skb = NULL; /* to be queued to userspace */
struct nlattr *nla;
struct genl_info info = {
.dst_sk = ovs_dp_get_net(dp)->genl_sock,
((struct nlmsghdr *) user_skb->data)->nlmsg_len = user_skb->len;
err = genlmsg_unicast(ovs_dp_get_net(dp), user_skb, upcall_info->portid);
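+	/* genlmsg_unicast() consumes user_skb even on error; clear the
+	 * local pointer so the shared cleanup below cannot double-free. */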
+ user_skb = NULL;
out:
if (err)
skb_tx_error(skb);
+ kfree_skb(user_skb);
kfree_skb(nskb);
return err;
}
p1->tov_in_jiffies = msecs_to_jiffies(p1->retire_blk_tov);
p1->blk_sizeof_priv = req_u->req3.tp_sizeof_priv;
+ p1->max_frame_len = p1->kblk_size - BLK_PLUS_PRIV(p1->blk_sizeof_priv);
prb_init_ft_ops(p1, req_u);
prb_setup_retire_blk_timer(po, tx_ring);
prb_open_block(p1, pbd);
if ((int)snaplen < 0)
snaplen = 0;
}
+ } else if (unlikely(macoff + snaplen >
+ GET_PBDQC_FROM_RB(&po->rx_ring)->max_frame_len)) {
+ u32 nval;
+
+ nval = GET_PBDQC_FROM_RB(&po->rx_ring)->max_frame_len - macoff;
+ pr_err_once("tpacket_rcv: packet too big, clamped from %u to %u. macoff=%u\n",
+ snaplen, nval, macoff);
+ snaplen = nval;
+ if (unlikely((int)snaplen < 0)) {
+ snaplen = 0;
+ macoff = GET_PBDQC_FROM_RB(&po->rx_ring)->max_frame_len;
+ }
}
spin_lock(&sk->sk_receive_queue.lock);
h.raw = packet_current_rx_frame(po, skb,
goto out;
if (unlikely(req->tp_block_size & (PAGE_SIZE - 1)))
goto out;
+ if (po->tp_version >= TPACKET_V3 &&
+ (int)(req->tp_block_size -
+ BLK_PLUS_PRIV(req_u->req3.tp_sizeof_priv)) <= 0)
+ goto out;
if (unlikely(req->tp_frame_size < po->tp_hdrlen +
po->tp_reserve))
goto out;
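For context, the ring this check guards is configured from user space via PACKET_RX_RING with a struct tpacket_req3; a minimal sketch of such a setup follows (all sizes illustrative, error handling trimmed). A tp_sizeof_priv large enough that tp_block_size - BLK_PLUS_PRIV(tp_sizeof_priv) <= 0 is now rejected at setsockopt() time instead of corrupting frame accounting later:

	#include <linux/if_packet.h>
	#include <sys/socket.h>

	/* Illustrative TPACKET_V3 ring setup; all sizes hypothetical. */
	static int setup_rx_ring(int fd)
	{
		struct tpacket_req3 req = {
			.tp_block_size     = 1 << 22,	/* 4 MiB per block */
			.tp_block_nr       = 64,
			.tp_frame_size     = 2048,
			.tp_frame_nr       = (1 << 22) / 2048 * 64,
			.tp_retire_blk_tov = 60,	/* ms */
			.tp_sizeof_priv    = 0,	/* must leave room for frames */
		};
		int version = TPACKET_V3;

		if (setsockopt(fd, SOL_PACKET, PACKET_VERSION,
			       &version, sizeof(version)) < 0)
			return -1;
		return setsockopt(fd, SOL_PACKET, PACKET_RX_RING,
				  &req, sizeof(req));
	}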
char *pkblk_start;
char *pkblk_end;
int kblk_size;
+ unsigned int max_frame_len;
unsigned int knum_blocks;
uint64_t knxt_seq_num;
char *prev;
{ "BCM2E1A", RFKILL_TYPE_BLUETOOTH },
{ "BCM2E39", RFKILL_TYPE_BLUETOOTH },
{ "BCM2E3D", RFKILL_TYPE_BLUETOOTH },
+ { "BCM2E64", RFKILL_TYPE_BLUETOOTH },
{ "BCM4752", RFKILL_TYPE_GPS },
{ "LNV4752", RFKILL_TYPE_GPS },
{ },
struct cbq_class *tx_borrowed;
int tx_len;
psched_time_t now; /* Cached timestamp */
- psched_time_t now_rt; /* Cached real time */
unsigned int pmask;
struct hrtimer delay_timer;
int toplevel = q->toplevel;
if (toplevel > cl->level && !(qdisc_is_throttled(cl->q))) {
- psched_time_t now;
- psched_tdiff_t incr;
-
- now = psched_get_time();
- incr = now - q->now_rt;
- now = q->now + incr;
+ psched_time_t now = psched_get_time();
do {
if (cl->undertime < now) {
struct cbq_class *this = q->tx_class;
struct cbq_class *cl = this;
int len = q->tx_len;
+ psched_time_t now;
q->tx_class = NULL;
+ /* Time integrator. We calculate EOS time
+ * by adding expected packet transmission time.
+ */
+ now = q->now + L2T(&q->link, len);
for ( ; cl; cl = cl->share) {
long avgidle = cl->avgidle;
* idle = (now - last) - last_pktlen/rate
*/
- idle = q->now - cl->last;
+ idle = now - cl->last;
if ((unsigned long)idle > 128*1024*1024) {
avgidle = cl->maxidle;
} else {
idle -= L2T(&q->link, len);
idle += L2T(cl, len);
- cl->undertime = q->now + idle;
+ cl->undertime = now + idle;
} else {
/* Underlimit */
else
cl->avgidle = avgidle;
}
- cl->last = q->now;
+ if ((s64)(now - cl->last) > 0)
+ cl->last = now;
}
cbq_update_toplevel(q, this, q->tx_borrowed);
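In short, the reworked integrator computes the following per transmission (identifiers as in the hunks above; a restatement of the change, not new code):

	/* EOS (end of send) time for the packet just transmitted:
	 *     now  = q->now + L2T(&q->link, len);   // queue clock + len/rate
	 * per-class idle time since that class last transmitted:
	 *     idle = now - cl->last;
	 * and cl->last is only ever moved forward:
	 *     if ((s64)(now - cl->last) > 0) cl->last = now;
	 * so the cached real-time clock (q->now_rt) is no longer needed.
	 */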
struct sk_buff *skb;
struct cbq_sched_data *q = qdisc_priv(sch);
psched_time_t now;
- psched_tdiff_t incr;
now = psched_get_time();
- incr = now - q->now_rt;
-
- if (q->tx_class) {
- psched_tdiff_t incr2;
- /* Time integrator. We calculate EOS time
- * by adding expected packet transmission time.
- * If real time is greater, we warp artificial clock,
- * so that:
- *
- * cbq_time = max(real_time, work);
- */
- incr2 = L2T(&q->link, q->tx_len);
- q->now += incr2;
+
+ if (q->tx_class)
cbq_update(q);
- if ((incr -= incr2) < 0)
- incr = 0;
- q->now += incr;
- } else {
- if (now > q->now)
- q->now = now;
- }
- q->now_rt = now;
+
+ q->now = now;
for (;;) {
q->wd_expires = 0;
hrtimer_cancel(&q->delay_timer);
q->toplevel = TC_CBQ_MAXLEVEL;
q->now = psched_get_time();
- q->now_rt = q->now;
for (prio = 0; prio <= TC_CBQ_MAXPRIO; prio++)
q->active[prio] = NULL;
q->delay_timer.function = cbq_undelay;
q->toplevel = TC_CBQ_MAXLEVEL;
q->now = psched_get_time();
- q->now_rt = q->now;
cbq_link_class(&q->link);
else {
dst_release(transport->dst);
transport->dst = NULL;
+ ulp_notify = false;
}
spc_state = SCTP_ADDR_UNREACHABLE;
{
u8 score_curr, score_best;
- if (best == NULL)
+ if (best == NULL || curr == best)
return curr;
score_curr = sctp_trans_score(curr);
trans_sec = trans_pri;
/* If we failed to find a usable transport, just camp on the
- * primary or retran, even if they are inactive, if possible
- * pick a PF iff it's the better choice.
+ * active or pick a PF iff it's the better choice.
*/
if (trans_pri == NULL) {
- trans_pri = sctp_trans_elect_best(asoc->peer.primary_path,
- asoc->peer.retran_path);
- trans_pri = sctp_trans_elect_best(trans_pri, trans_pf);
- trans_sec = asoc->peer.primary_path;
+ trans_pri = sctp_trans_elect_best(asoc->peer.active_path, trans_pf);
+ trans_sec = trans_pri;
}
/* Set the active and retran transports. */
transport = asoc->peer.primary_path;
status.sstat_assoc_id = sctp_assoc2id(asoc);
- status.sstat_state = asoc->state;
+ status.sstat_state = sctp_assoc_to_state(asoc);
status.sstat_rwnd = asoc->peer.rwnd;
status.sstat_unackdata = asoc->unack_data;
}
memset(&tss, 0, sizeof(tss));
- if ((sk->sk_tsflags & SOF_TIMESTAMPING_SOFTWARE ||
- skb_shinfo(skb)->tx_flags & SKBTX_ANY_SW_TSTAMP) &&
+ if ((sk->sk_tsflags & SOF_TIMESTAMPING_SOFTWARE) &&
ktime_to_timespec_cond(skb->tstamp, tss.ts + 0))
empty = 0;
if (shhwtstamps &&
*
* This function is called by a protocol handler that wants to
* advertise its address family, and have it linked into the
- * socket interface. The value ops->family coresponds to the
+ * socket interface. The value ops->family corresponds to the
* socket system call protocol family.
*/
int sock_register(const struct net_proto_family *ops)
return msg_importance(&port->phdr);
}
-static inline void tipc_port_set_importance(struct tipc_port *port, int imp)
+static inline int tipc_port_set_importance(struct tipc_port *port, int imp)
{
+ if (imp > TIPC_CRITICAL_IMPORTANCE)
+ return -EINVAL;
msg_set_importance(&port->phdr, (u32)imp);
+ return 0;
}
#endif
switch (opt) {
case TIPC_IMPORTANCE:
- tipc_port_set_importance(port, value);
+ res = tipc_port_set_importance(port, value);
break;
case TIPC_SRC_DROPPABLE:
if (sock->type != SOCK_STREAM)
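From user space, the effect of the new validation is visible through setsockopt(); a minimal sketch (socket setup elided, SOL_TIPC fallback define taken from linux/socket.h):

	#include <linux/tipc.h>
	#include <sys/socket.h>
	#include <stdio.h>

	#ifndef SOL_TIPC
	#define SOL_TIPC 271		/* from linux/socket.h */
	#endif

	/* After this change an out-of-range importance is rejected with
	 * EINVAL instead of being silently written into the header. */
	static void set_importance(int fd)
	{
		int imp = TIPC_CRITICAL_IMPORTANCE + 1;	/* deliberately invalid */

		if (setsockopt(fd, SOL_TIPC, TIPC_IMPORTANCE,
			       &imp, sizeof(imp)) < 0)
			perror("TIPC_IMPORTANCE");	/* now: Invalid argument */
	}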
# Check for improperly formed commit descriptions
if ($in_commit_log &&
$line =~ /\bcommit\s+[0-9a-f]{5,}/i &&
- $line !~ /\b[Cc]ommit [0-9a-f]{12,16} \("/) {
+ !($line =~ /\b[Cc]ommit [0-9a-f]{12,40} \("/ ||
+ ($line =~ /\b[Cc]ommit [0-9a-f]{12,40}\s*$/ &&
+ defined $rawlines[$linenr] &&
+ $rawlines[$linenr] =~ /^\s*\("/))) {
$line =~ /\b(c)ommit\s+([0-9a-f]{5,})/i;
my $init_char = $1;
my $orig_commit = lc($2);
my $desc = 'commit description';
($id, $desc) = git_commit_info($orig_commit, $id, $desc);
ERROR("GIT_COMMIT_ID",
- "Please use 12 to 16 chars for the git commit ID like: '${init_char}ommit $id (\"$desc\")'\n" . $herecurr);
+ "Please use 12 or more chars for the git commit ID like: '${init_char}ommit $id (\"$desc\")'\n" . $herecurr);
}
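For reference, both of the following forms now satisfy the check (commit ID and summary hypothetical): a 12-to-40 character ID with the quoted summary on the same line, or the ID at end of line with the summary opening on the next line:

    commit 0123456789ab ("subsystem: fix the frobnicator")

    commit 0123456789ab
    ("subsystem: fix the frobnicator")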
# Check for added, moved or deleted files
$prototype =~ s/^noinline +//;
$prototype =~ s/__init +//;
$prototype =~ s/__init_or_module +//;
+ $prototype =~ s/__meminit +//;
$prototype =~ s/__must_check +//;
$prototype =~ s/__weak +//;
my $define = $prototype =~ s/^#\s*define\s+//; #ak added
struct rb_root key_user_tree; /* tree of quota records indexed by UID */
DEFINE_SPINLOCK(key_user_lock);
-unsigned int key_quota_root_maxkeys = 200; /* root's key count quota */
-unsigned int key_quota_root_maxbytes = 20000; /* root's key space quota */
+unsigned int key_quota_root_maxkeys = 1000000; /* root's key count quota */
+unsigned int key_quota_root_maxbytes = 25000000; /* root's key space quota */
unsigned int key_quota_maxkeys = 200; /* general key count quota */
unsigned int key_quota_maxbytes = 20000; /* general key space quota */
* Use filesystem name if filesystem does not support rename()
* operation.
*/
- if (!inode->i_op->rename)
+ if (!inode->i_op->rename && !inode->i_op->rename2)
goto prepend_filesystem_name;
}
/* Prepend device name. */
* Get local name for filesystems without rename() operation
* or dentry without vfsmount.
*/
- if (!path->mnt || !inode->i_op->rename)
+ if (!path->mnt ||
+ (!inode->i_op->rename && !inode->i_op->rename2))
pos = tomoyo_get_local_path(path->dentry, buf,
buf_len - 1);
/* Get absolute name for the rest. */
* snd_info_get_line - read one line from the procfs buffer
* @buffer: the procfs buffer
* @line: the buffer to store
- * @len: the max. buffer size - 1
+ * @len: the max. buffer size
*
* Reads one line from the buffer and stores the string.
*
buffer->stop = 1;
if (c == '\n')
break;
- if (len) {
+ if (len > 1) {
len--;
*line++ = c;
}
},
[SNDRV_PCM_FORMAT_DSD_U8] = {
.width = 8, .phys = 8, .le = 1, .signd = 0,
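+	/* DSD silence is not all-zero: the repeating 0x69 (b01101001) byte
+	 * gives a 50% pulse density, i.e. a neutral output level. */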
- .silence = {},
+ .silence = { 0x69 },
},
[SNDRV_PCM_FORMAT_DSD_U16_LE] = {
.width = 16, .phys = 16, .le = 1, .signd = 0,
- .silence = {},
+ .silence = { 0x69, 0x69 },
},
/* FIXME: the following three formats are not defined properly yet */
[SNDRV_PCM_FORMAT_MPEG] = {
static void update_pcm_pointers(struct amdtp_stream *s,
struct snd_pcm_substream *pcm,
unsigned int frames)
-{ unsigned int ptr;
+{
+ unsigned int ptr;
+
+ /*
+	 * In IEC 61883-6, one data block represents one event. In ALSA, one
+	 * event corresponds to one PCM frame. But Dice has a quirk of
+	 * transferring two PCM frames in one data block.
+ */
+ if (s->double_pcm_frames)
+ frames *= 2;
ptr = s->pcm_buffer_pointer + frames;
if (ptr >= pcm->runtime->buffer_size)
unsigned int pcm_buffer_pointer;
unsigned int pcm_period_pointer;
bool pointer_flush;
+ bool double_pcm_frames;
struct snd_rawmidi_substream *midi[AMDTP_MAX_CHANNELS_FOR_MIDI * 8];
return err;
/*
- * At rates above 96 kHz, pretend that the stream runs at half the
- * actual sample rate with twice the number of channels; two samples
- * of a channel are stored consecutively in the packet. Requires
- * blocking mode and PCM buffer size should be aligned to SYT_INTERVAL.
+	 * At 176.4/192.0 kHz, Dice has a quirk of transferring two PCM frames
+	 * in one data block of an AMDTP packet. The sampling transfer
+	 * frequency is therefore half the PCM sampling frequency, i.e. PCM
+	 * frames at 192.0 kHz are transferred in AMDTP packets at 96 kHz.
+	 * Two successive samples of a channel are stored consecutively in the
+	 * packet. This quirk is called 'Dual Wire'.
+	 * For this quirk, blocking mode is required and the PCM buffer size
+	 * should be aligned to SYT_INTERVAL.
*/
channels = params_channels(hw_params);
if (rate_index > 4) {
return err;
}
- for (i = 0; i < channels; i++) {
- dice->stream.pcm_positions[i * 2] = i;
- dice->stream.pcm_positions[i * 2 + 1] = i + channels;
- }
-
rate /= 2;
channels *= 2;
+ dice->stream.double_pcm_frames = true;
+ } else {
+ dice->stream.double_pcm_frames = false;
}
mode = rate_index_to_mode(rate_index);
amdtp_stream_set_parameters(&dice->stream, rate, channels,
dice->rx_midi_ports[mode]);
+ if (rate_index > 4) {
+ channels /= 2;
+
+ for (i = 0; i < channels; i++) {
+ dice->stream.pcm_positions[i] = i * 2;
+ dice->stream.pcm_positions[i + channels] = i * 2 + 1;
+ }
+ }
+
amdtp_stream_set_pcm_format(&dice->stream,
params_format(hw_params));
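A worked example of the remapping above, assuming a hypothetical two-channel PCM stream at 192 kHz (programmed as four channels at 96 kHz):

	/* channels = 2 after the halving, so the loop yields:
	 *   pcm_positions[0] = 0, pcm_positions[2] = 1   (i = 0)
	 *   pcm_positions[1] = 2, pcm_positions[3] = 3   (i = 1)
	 * i.e. pcm_positions = { 0, 2, 1, 3 }: the two successive samples
	 * of each channel land in adjacent data-block positions. */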
*/
#ifndef CT20K1REG_H
-#define CT20k1REG_H
+#define CT20K1REG_H
/* 20k1 registers */
#define DSPXRAM_START 0x000000
#define I2SD_R 0x19L
#endif /* CT20K1REG_H */
-
-
*/
#ifndef __CA0132_REGS_H
-#define __CA0312_REGS_H
+#define __CA0132_REGS_H
#define DSP_CHIP_OFFSET 0x100000
#define DSP_DBGCNTL_MODULE_OFFSET 0xE30
CXT_FIXUP_HEADPHONE_MIC_PIN,
CXT_FIXUP_HEADPHONE_MIC,
CXT_FIXUP_GPIO1,
+ CXT_FIXUP_ASPIRE_DMIC,
CXT_FIXUP_THINKPAD_ACPI,
CXT_FIXUP_OLPC_XO,
CXT_FIXUP_CAP_MIX_AMP,
{ }
},
},
+ [CXT_FIXUP_ASPIRE_DMIC] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = cxt_fixup_stereo_dmic,
+ .chained = true,
+ .chain_id = CXT_FIXUP_GPIO1,
+ },
[CXT_FIXUP_THINKPAD_ACPI] = {
.type = HDA_FIXUP_FUNC,
.v.func = hda_fixup_thinkpad_acpi,
static const struct snd_pci_quirk cxt5066_fixups[] = {
SND_PCI_QUIRK(0x1025, 0x0543, "Acer Aspire One 522", CXT_FIXUP_STEREO_DMIC),
- SND_PCI_QUIRK(0x1025, 0x054c, "Acer Aspire 3830TG", CXT_FIXUP_GPIO1),
+ SND_PCI_QUIRK(0x1025, 0x054c, "Acer Aspire 3830TG", CXT_FIXUP_ASPIRE_DMIC),
SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
#define is_haswell_plus(codec) (is_haswell(codec) || is_broadwell(codec))
#define is_valleyview(codec) ((codec)->vendor_id == 0x80862882)
+#define is_cherryview(codec) ((codec)->vendor_id == 0x80862883)
+#define is_valleyview_plus(codec) (is_valleyview(codec) || is_cherryview(codec))
struct hdmi_spec_per_cvt {
hda_nid_t cvt_nid;
mux_idx);
/* configure unused pins to choose other converters */
- if (is_haswell_plus(codec) || is_valleyview(codec))
+ if (is_haswell_plus(codec) || is_valleyview_plus(codec))
intel_not_share_assigned_cvt(codec, per_pin->pin_nid, mux_idx);
snd_hda_spdif_ctls_assign(codec, pin_idx, per_cvt->cvt_nid);
* and this can make HW reset converter selection on a pin.
*/
if (eld->eld_valid && !old_eld_valid && per_pin->setup) {
- if (is_haswell_plus(codec) || is_valleyview(codec)) {
+ if (is_haswell_plus(codec) ||
+ is_valleyview_plus(codec)) {
intel_verify_pin_cvt_connect(codec, per_pin);
intel_not_share_assigned_cvt(codec, pin_nid,
per_pin->mux_idx);
bool non_pcm;
int pinctl;
- if (is_haswell_plus(codec) || is_valleyview(codec)) {
+ if (is_haswell_plus(codec) || is_valleyview_plus(codec)) {
/* Verify pin:cvt selections to avoid silent audio after S3.
* After S3, the audio driver restores pin:cvt selections
* but this can happen before gfx is ready and such selection
intel_haswell_fixup_enable_dp12(codec);
}
- if (is_haswell(codec) || is_valleyview(codec)) {
+ if (is_haswell_plus(codec) || is_valleyview_plus(codec))
codec->depop_delay = 0;
- }
if (hdmi_parse_codec(codec) < 0) {
codec->spec = NULL;
spec->pll_coef_idx);
val = snd_hda_codec_read(codec, spec->pll_nid, 0,
AC_VERB_GET_PROC_COEF, 0);
+ if (val == -1)
+ return;
snd_hda_codec_write(codec, spec->pll_nid, 0, AC_VERB_SET_COEF_INDEX,
spec->pll_coef_idx);
snd_hda_codec_write(codec, spec->pll_nid, 0, AC_VERB_SET_PROC_COEF,
case 0x10ec0885:
case 0x10ec0887:
/*case 0x10ec0889:*/ /* this causes an SPDIF problem */
+ case 0x10ec0900:
alc889_coef_init(codec);
break;
case 0x10ec0888:
switch (codec->vendor_id) {
case 0x10ec0882:
case 0x10ec0885:
+ case 0x10ec0900:
break;
default:
/* ALC883 and variants */
static void alc269vb_toggle_power_output(struct hda_codec *codec, int power_up)
{
int val = alc_read_coef_idx(codec, 0x04);
+ if (val == -1)
+ return;
if (power_up)
val |= 1 << 11;
else
snd_hda_codec_resume_cache(codec);
alc_inv_dmic_sync(codec, true);
hda_call_check_power_status(codec, 0x01);
+
+	/* On some machines, the BIOS will clear the codec GPIO data when
+	 * entering suspend and won't restore it after resume, so we restore
+	 * it in the driver.
+ */
+ if (spec->gpio_led)
+ snd_hda_codec_write(codec, codec->afg, 0, AC_VERB_SET_GPIO_DATA,
+ spec->gpio_led);
+
if (spec->has_alc5505_dsp)
alc5505_dsp_resume(codec);
ALC292_FIXUP_TPT440_DOCK,
ALC283_FIXUP_BXBT2807_MIC,
ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED,
+ ALC282_FIXUP_ASPIRE_V5_PINS,
};
static const struct hda_fixup alc269_fixups[] = {
.chained_before = true,
.chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE
},
+ [ALC282_FIXUP_ASPIRE_V5_PINS] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x12, 0x90a60130 },
+ { 0x14, 0x90170110 },
+ { 0x17, 0x40000008 },
+ { 0x18, 0x411111f0 },
+ { 0x19, 0x411111f0 },
+ { 0x1a, 0x411111f0 },
+ { 0x1b, 0x411111f0 },
+ { 0x1d, 0x40f89b2d },
+ { 0x1e, 0x411111f0 },
+ { 0x21, 0x0321101f },
+ { },
+ },
+ },
};
SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
SND_PCI_QUIRK(0x1028, 0x05bd, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x05be, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
if ((alc_get_coef0(codec) & 0x00ff) == 0x017) {
val = alc_read_coef_idx(codec, 0x04);
/* Power up output pin */
- alc_write_coef_idx(codec, 0x04, val | (1<<11));
+ if (val != -1)
+ alc_write_coef_idx(codec, 0x04, val | (1<<11));
}
if ((alc_get_coef0(codec) & 0x00ff) == 0x018) {
val = alc_read_coef_idx(codec, 0xd);
- if ((val & 0x0c00) >> 10 != 0x1) {
+ if (val != -1 && (val & 0x0c00) >> 10 != 0x1) {
/* Capless ramp up clock control */
alc_write_coef_idx(codec, 0xd, val | (1<<10));
}
val = alc_read_coef_idx(codec, 0x17);
- if ((val & 0x01c0) >> 6 != 0x4) {
+ if (val != -1 && (val & 0x01c0) >> 6 != 0x4) {
/* Class D power on reset */
alc_write_coef_idx(codec, 0x17, val | (1<<7));
}
}
val = alc_read_coef_idx(codec, 0xd); /* Class D */
- alc_write_coef_idx(codec, 0xd, val | (1<<14));
+ if (val != -1)
+ alc_write_coef_idx(codec, 0xd, val | (1<<14));
val = alc_read_coef_idx(codec, 0x4); /* HP */
- alc_write_coef_idx(codec, 0x4, val | (1<<11));
+ if (val != -1)
+ alc_write_coef_idx(codec, 0x4, val | (1<<11));
}
/*
else
rates = &arizona_48k_bclk_rates[0];
+ wl = snd_pcm_format_width(params_format(params));
+
if (tdm_slots) {
arizona_aif_dbg(dai, "Configuring for %d %d bit TDM slots\n",
tdm_slots, tdm_width);
channels = tdm_slots;
} else {
bclk_target = snd_soc_params_to_bclk(params);
+ tdm_width = wl;
}
if (chan_limit && chan_limit < channels) {
arizona_aif_dbg(dai, "BCLK %dHz LRCLK %dHz\n",
rates[bclk], rates[bclk] / lrclk);
- wl = snd_pcm_format_width(params_format(params));
- frame = wl << ARIZONA_AIF1TX_WL_SHIFT | wl;
+ frame = wl << ARIZONA_AIF1TX_WL_SHIFT | tdm_width;
reconfig = arizona_aif_cfg_changed(codec, base, bclk, lrclk, frame);
/*64k*/
{8192000, 64000, 1, 0},
- {1228800, 64000, 1, 1},
- {1693440, 64000, 1, 2},
- {2457600, 64000, 1, 3},
- {3276800, 64000, 1, 4},
+ {12288000, 64000, 1, 1},
+ {16934400, 64000, 1, 2},
+ {24576000, 64000, 1, 3},
+ {32768000, 64000, 1, 4},
/* 88.2k */
{11289600, 88200, 1, 0},
index = cs4265_get_clk_index(cs4265->sysclk, params_rate(params));
if (index >= 0) {
snd_soc_update_bits(codec, CS4265_ADC_CTL,
- CS4265_ADC_FM, clk_map_table[index].fm_mode);
+ CS4265_ADC_FM, clk_map_table[index].fm_mode << 6);
snd_soc_update_bits(codec, CS4265_MCLK_FREQ,
CS4265_MCLK_FREQ_MASK,
- clk_map_table[index].mclkdiv);
+ clk_map_table[index].mclkdiv << 4);
} else {
dev_err(codec->dev, "can't get correct mclk\n");
*/
#ifndef __DA732X_H_
-#define __DA732X_H
+#define __DA732X_H_
#include <sound/soc.h>
pcm512x_ramp_step_text);
static const struct snd_kcontrol_new pcm512x_controls[] = {
-SOC_DOUBLE_R_TLV("Playback Digital Volume", PCM512x_DIGITAL_VOLUME_2,
+SOC_DOUBLE_R_TLV("Digital Playback Volume", PCM512x_DIGITAL_VOLUME_2,
PCM512x_DIGITAL_VOLUME_3, 0, 255, 1, digital_tlv),
SOC_DOUBLE_TLV("Playback Volume", PCM512x_ANALOG_GAIN_CTRL,
PCM512x_LAGN_SHIFT, PCM512x_RAGN_SHIFT, 1, 1, analog_tlv),
SOC_DOUBLE_TLV("Playback Boost Volume", PCM512x_ANALOG_GAIN_BOOST,
PCM512x_AGBL_SHIFT, PCM512x_AGBR_SHIFT, 1, 0, boost_tlv),
-SOC_DOUBLE("Playback Digital Switch", PCM512x_MUTE, PCM512x_RQML_SHIFT,
+SOC_DOUBLE("Digital Playback Switch", PCM512x_MUTE, PCM512x_RQML_SHIFT,
PCM512x_RQMR_SHIFT, 1, 1),
SOC_SINGLE("Deemphasis Switch", PCM512x_DSP, PCM512x_DEMP_SHIFT, 1, 1),
static const struct regmap_config rt5640_regmap = {
.reg_bits = 8,
.val_bits = 16,
+ .use_single_rw = true,
.max_register = RT5640_VENDOR_ID2 + 1 + (ARRAY_SIZE(rt5640_ranges) *
RT5640_PR_SPACING),
{ "BST2", NULL, "IN2P" },
{ "BST2", NULL, "IN2N" },
- { "IN1P", NULL, "micbias1" },
- { "IN1N", NULL, "micbias1" },
- { "IN2P", NULL, "micbias1" },
- { "IN2N", NULL, "micbias1" },
+ { "IN1P", NULL, "MICBIAS1" },
+ { "IN1N", NULL, "MICBIAS1" },
+ { "IN2P", NULL, "MICBIAS1" },
+ { "IN2N", NULL, "MICBIAS1" },
{ "ADC 1", NULL, "BST1" },
{ "ADC 1", NULL, "ADC 1 power" },
return ret;
}
-static int davinci_mcasp_set_clkdiv(struct snd_soc_dai *dai, int div_id, int div)
+static int __davinci_mcasp_set_clkdiv(struct snd_soc_dai *dai, int div_id,
+ int div, bool explicit)
{
struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(dai);
ACLKXDIV(div - 1), ACLKXDIV_MASK);
mcasp_mod_bits(mcasp, DAVINCI_MCASP_ACLKRCTL_REG,
ACLKRDIV(div - 1), ACLKRDIV_MASK);
- mcasp->bclk_div = div;
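+	/* Record the divider only when the machine driver set it
+	 * explicitly; values computed internally must not be sticky. */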
+ if (explicit)
+ mcasp->bclk_div = div;
break;
case 2: /* BCLK/LRCLK ratio */
return 0;
}
+static int davinci_mcasp_set_clkdiv(struct snd_soc_dai *dai, int div_id,
+ int div)
+{
+ return __davinci_mcasp_set_clkdiv(dai, div_id, div, 1);
+}
+
static int davinci_mcasp_set_sysclk(struct snd_soc_dai *dai, int clk_id,
unsigned int freq, int dir)
{
"Inaccurate BCLK: %u Hz / %u != %u Hz\n",
mcasp->sysclk_freq, div, bclk_freq);
}
- davinci_mcasp_set_clkdiv(cpu_dai, 1, div);
+ __davinci_mcasp_set_clkdiv(cpu_dai, 1, div, 0);
}
ret = mcasp_common_hw_param(mcasp, substream->stream,
tristate "Enhanced Serial Audio Interface (ESAI) module support"
select REGMAP_MMIO
select SND_SOC_IMX_PCM_DMA if SND_IMX_SOC != n
- select SND_SOC_FSL_UTILS
help
Say Y if you want to add Enhanced Synchronous Audio Interface
(ESAI) support for the Freescale CPUs.
#include "fsl_esai.h"
#include "imx-pcm.h"
-#include "fsl_utils.h"
#define FSL_ESAI_RATES SNDRV_PCM_RATE_8000_192000
#define FSL_ESAI_FORMATS (SNDRV_PCM_FMTBIT_S8 | \
.hw_params = fsl_esai_hw_params,
.set_sysclk = fsl_esai_set_dai_sysclk,
.set_fmt = fsl_esai_set_dai_fmt,
- .xlate_tdm_slot_mask = fsl_asoc_xlate_tdm_slot_mask,
.set_tdm_slot = fsl_esai_set_dai_tdm_slot,
};
snd_soc_card_set_drvdata(&priv->snd_card, priv);
ret = devm_snd_soc_register_card(&pdev->dev, &priv->snd_card);
+ if (ret >= 0)
+ return ret;
err:
asoc_simple_card_unref(pdev);
return ret;
}
+static int asoc_simple_card_remove(struct platform_device *pdev)
+{
+ return asoc_simple_card_unref(pdev);
+}
+
static const struct of_device_id asoc_simple_of_match[] = {
{ .compatible = "simple-audio-card", },
{},
.of_match_table = asoc_simple_of_match,
},
.probe = asoc_simple_card_probe,
+ .remove = asoc_simple_card_remove,
};
module_platform_driver(asoc_simple_card);
};
static struct sst_acpi_mach baytrail_machines[] = {
- { "10EC5640", "byt-rt5640", "intel/fw_sst_0f28.bin-i2s_master" },
- { "193C9890", "byt-max98090", "intel/fw_sst_0f28.bin-i2s_master" },
+ { "10EC5640", "byt-rt5640", "intel/fw_sst_0f28.bin-48kHz_i2s_master" },
+ { "193C9890", "byt-max98090", "intel/fw_sst_0f28.bin-48kHz_i2s_master" },
{}
};
.ops = &sst_byt_ops,
};
-int sst_byt_dsp_suspend_noirq(struct device *dev, struct sst_pdata *pdata)
+int sst_byt_dsp_suspend_late(struct device *dev, struct sst_pdata *pdata)
{
struct sst_byt *byt = pdata->dsp;
sst_byt_drop_all(byt);
dev_dbg(byt->dev, "dsp in reset\n");
- return 0;
-}
-EXPORT_SYMBOL_GPL(sst_byt_dsp_suspend_noirq);
-
-int sst_byt_dsp_suspend_late(struct device *dev, struct sst_pdata *pdata)
-{
- struct sst_byt *byt = pdata->dsp;
-
dev_dbg(byt->dev, "free all blocks and unload fw\n");
sst_fw_unload(byt->fw);
int sst_byt_dsp_init(struct device *dev, struct sst_pdata *pdata);
void sst_byt_dsp_free(struct device *dev, struct sst_pdata *pdata);
struct sst_dsp *sst_byt_get_dsp(struct sst_byt *byt);
-int sst_byt_dsp_suspend_noirq(struct device *dev, struct sst_pdata *pdata);
int sst_byt_dsp_suspend_late(struct device *dev, struct sst_pdata *pdata);
int sst_byt_dsp_boot(struct device *dev, struct sst_pdata *pdata);
int sst_byt_dsp_wait_for_ready(struct device *dev, struct sst_pdata *pdata);
/* DAI data */
struct sst_byt_pcm_data pcm[BYT_PCM_COUNT];
+
+	/* flag indicating whether stream context restore is needed after suspend */
+ bool restore_stream;
};
/* this may get called several times by oss emulation */
sst_byt_stream_start(byt, pcm_data->stream, 0);
break;
case SNDRV_PCM_TRIGGER_RESUME:
- schedule_work(&pcm_data->work);
+		if (pdata->restore_stream)
+ schedule_work(&pcm_data->work);
+ else
+ sst_byt_stream_resume(byt, pcm_data->stream);
break;
case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
sst_byt_stream_resume(byt, pcm_data->stream);
sst_byt_stream_stop(byt, pcm_data->stream);
break;
case SNDRV_PCM_TRIGGER_SUSPEND:
+ pdata->restore_stream = false;
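+		/* fall through: a suspend also pauses the stream */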
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
sst_byt_stream_pause(byt, pcm_data->stream);
break;
};
#ifdef CONFIG_PM
-static int sst_byt_pcm_dev_suspend_noirq(struct device *dev)
-{
- struct sst_pdata *sst_pdata = dev_get_platdata(dev);
- int ret;
-
- dev_dbg(dev, "suspending noirq\n");
-
- /* at this point all streams will be stopped and context saved */
- ret = sst_byt_dsp_suspend_noirq(dev, sst_pdata);
- if (ret < 0) {
- dev_err(dev, "failed to suspend %d\n", ret);
- return ret;
- }
-
- return ret;
-}
-
static int sst_byt_pcm_dev_suspend_late(struct device *dev)
{
struct sst_pdata *sst_pdata = dev_get_platdata(dev);
+ struct sst_byt_priv_data *priv_data = dev_get_drvdata(dev);
int ret;
dev_dbg(dev, "suspending late\n");
return ret;
}
+ priv_data->restore_stream = true;
+
return ret;
}
static int sst_byt_pcm_dev_resume_early(struct device *dev)
{
struct sst_pdata *sst_pdata = dev_get_platdata(dev);
+ int ret;
dev_dbg(dev, "resume early\n");
/* load fw and boot DSP */
- return sst_byt_dsp_boot(dev, sst_pdata);
-}
-
-static int sst_byt_pcm_dev_resume(struct device *dev)
-{
- struct sst_pdata *sst_pdata = dev_get_platdata(dev);
-
- dev_dbg(dev, "resume\n");
+ ret = sst_byt_dsp_boot(dev, sst_pdata);
+ if (ret)
+ return ret;
/* wait for FW to finish booting */
return sst_byt_dsp_wait_for_ready(dev, sst_pdata);
}
static const struct dev_pm_ops sst_byt_pm_ops = {
- .suspend_noirq = sst_byt_pcm_dev_suspend_noirq,
.suspend_late = sst_byt_pcm_dev_suspend_late,
.resume_early = sst_byt_pcm_dev_resume_early,
- .resume = sst_byt_pcm_dev_resume,
};
#define SST_BYT_PM_OPS (&sst_byt_pm_ops)
.stream_name = "TWL4030 Voice",
.cpu_dai_name = "omap-mcbsp.3",
.codec_dai_name = "twl4030-voice",
- .platform_name = "omap-mcbsp.2",
+ .platform_name = "omap-mcbsp.3",
.codec_name = "twl4030-codec",
.dai_fmt = SND_SOC_DAIFMT_DSP_A | SND_SOC_DAIFMT_IB_NF |
SND_SOC_DAIFMT_CBM_CFM,
SNDRV_PCM_RATE_48000 | SNDRV_PCM_RATE_64000 | \
SNDRV_PCM_RATE_88200 | SNDRV_PCM_RATE_96000)
-#define PXA_SSP_FORMATS (SNDRV_PCM_FMTBIT_S16_LE |\
- SNDRV_PCM_FMTBIT_S24_LE | \
- SNDRV_PCM_FMTBIT_S32_LE)
+#define PXA_SSP_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE)
static const struct snd_soc_dai_ops pxa_ssp_dai_ops = {
.startup = pxa_ssp_startup,
};
/* it shouldn't happen */
- if (use_dvc & !use_src)
+ if (use_dvc && !use_src)
dev_err(dev, "DVC is selected without SRC\n");
/* use SSIU or SSI ? */
device_initialize(rtd->dev);
rtd->dev->parent = rtd->card->dev;
rtd->dev->release = rtd_release;
- rtd->dev->init_name = name;
+ dev_set_name(rtd->dev, "%s", name);
dev_set_drvdata(rtd->dev, rtd);
mutex_init(&rtd->pcm_mutex);
INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].be_clients);
struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol);
struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
unsigned int reg_val, val;
- int ret = 0;
- if (e->reg != SND_SOC_NOPM)
-		ret = soc_dapm_read(dapm, e->reg, &reg_val);
- else
+ if (e->reg != SND_SOC_NOPM) {
+		int ret = soc_dapm_read(dapm, e->reg, &reg_val);
+ if (ret)
+ return ret;
+ } else {
reg_val = dapm_kcontrol_get_value(kcontrol);
+ }
val = (reg_val >> e->shift_l) & e->mask;
ucontrol->value.enumerated.item[0] = snd_soc_enum_val_to_item(e, val);
ucontrol->value.enumerated.item[1] = val;
}
- return ret;
+ return 0;
}
EXPORT_SYMBOL_GPL(snd_soc_dapm_get_enum_double);
*/
#ifndef __TEGRA_ASOC_UTILS_H__
-#define __TEGRA_ASOC_UTILS_H_
+#define __TEGRA_ASOC_UTILS_H__
struct clk;
struct device;
uname_M := $(shell uname -m 2>/dev/null || echo not)
ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/i386/)
ifeq ($(ARCH),i386)
- ARCH := X86
+ ARCH := x86
CFLAGS := -DCONFIG_X86_32 -D__i386__
endif
ifeq ($(ARCH),x86_64)
- ARCH := X86
+ ARCH := x86
CFLAGS := -DCONFIG_X86_64 -D__x86_64__
endif
CFLAGS += -I../../../../usr/include/
all:
-ifeq ($(ARCH),X86)
+ifeq ($(ARCH),x86)
gcc $(CFLAGS) msgque.c -o msgque_test
else
echo "Not an x86 target, can't build msgque selftest"
uname_M := $(shell uname -m 2>/dev/null || echo not)
ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/i386/)
ifeq ($(ARCH),i386)
- ARCH := X86
+ ARCH := x86
CFLAGS := -DCONFIG_X86_32 -D__i386__
endif
ifeq ($(ARCH),x86_64)
- ARCH := X86
+ ARCH := x86
CFLAGS := -DCONFIG_X86_64 -D__x86_64__
endif
CFLAGS += -I../../../../arch/x86/include/
all:
-ifeq ($(ARCH),X86)
+ifeq ($(ARCH),x86)
gcc $(CFLAGS) kcmp_test.c -o kcmp_test
else
echo "Not an x86 target, can't build kcmp selftest"
uname_M := $(shell uname -m 2>/dev/null || echo not)
ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/i386/)
ifeq ($(ARCH),i386)
- ARCH := X86
+ ARCH := x86
endif
ifeq ($(ARCH),x86_64)
- ARCH := X86
+ ARCH := x86
endif
CFLAGS += -D_FILE_OFFSET_BITS=64
CFLAGS += -I../../../../include/
all:
-ifeq ($(ARCH),X86)
+ifeq ($(ARCH),x86)
gcc $(CFLAGS) memfd_test.c -o memfd_test
else
echo "Not an x86 target, can't build memfd selftest"
endif
run_tests: all
-ifeq ($(ARCH),X86)
+ifeq ($(ARCH),x86)
gcc $(CFLAGS) memfd_test.c -o memfd_test
endif
@./memfd_test || echo "memfd_test: [FAIL]"
build_fuse:
-ifeq ($(ARCH),X86)
+ifeq ($(ARCH),x86)
gcc $(CFLAGS) fuse_mnt.c `pkg-config fuse --cflags --libs` -o fuse_mnt
gcc $(CFLAGS) fuse_test.c -o fuse_test
else
#include <syslog.h>
#include <unistd.h>
#include <linux/usb/ch9.h>
-#include "../../uapi/usbip.h"
+#include <linux/usbip.h>
#ifndef USBIDS_FILE
#define USBIDS_FILE "/usr/share/hwdata/usb.ids"
dev->irq_requested_type |= guest_irq_type;
if (dev->ack_notifier.gsi != -1)
kvm_register_irq_ack_notifier(kvm, &dev->ack_notifier);
- } else
+ } else {
kvm_free_irq_source_id(kvm, dev->irq_source_id);
+ dev->irq_source_id = -1;
+ }
return r;
}
return pfn;
}
+static void kvm_unpin_pages(struct kvm *kvm, pfn_t pfn, unsigned long npages)
+{
+ unsigned long i;
+
+ for (i = 0; i < npages; ++i)
+ kvm_release_pfn_clean(pfn + i);
+}
+
int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
{
gfn_t gfn, end_gfn;
if (r) {
printk(KERN_ERR "kvm_iommu_map_address:"
"iommu failed to map pfn=%llx\n", pfn);
+ kvm_unpin_pages(kvm, pfn, page_size);
goto unmap_pages;
}
return 0;
unmap_pages:
- kvm_iommu_put_pages(kvm, slot->base_gfn, gfn);
+ kvm_iommu_put_pages(kvm, slot->base_gfn, gfn - slot->base_gfn);
return r;
}
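To make the corrected unwind arithmetic concrete: for a hypothetical slot with base_gfn = 0x1000 that fails at gfn = 0x1005, the old call asked kvm_iommu_put_pages() to unmap 0x1005 pages starting at the slot base, while the fixed call unmaps exactly the 5 pages (gfn - base_gfn) that were actually mapped; the new kvm_unpin_pages() call additionally drops the pins taken for the range that failed to map.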
return r;
}
-static void kvm_unpin_pages(struct kvm *kvm, pfn_t pfn, unsigned long npages)
-{
- unsigned long i;
-
- for (i = 0; i < npages; ++i)
- kvm_release_pfn_clean(pfn + i);
-}
-
static void kvm_iommu_put_pages(struct kvm *kvm,
gfn_t base_gfn, unsigned long npages)
{