			DMA Engine API Guide
			====================

		Vinod Koul <vinod dot koul at intel.com>

NOTE: For DMA Engine usage in async_tx please see:
	Documentation/crypto/async-tx-api.txt

Below is a guide for device driver writers on how to use the Slave-DMA
API of the DMA Engine. This is applicable only for slave DMA usage.

The slave DMA usage consists of the following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for the transaction
4. Submit the transaction
5. Issue pending requests and wait for callback notification

1. Allocate a DMA slave channel

Channel allocation is slightly different in the slave DMA context;
client drivers typically need a channel from a particular DMA
controller only, and in some cases even a specific channel is desired.
To request a channel, the dma_request_channel() API is used.

Interface:
	struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
			dma_filter_fn filter_fn,
			void *filter_param);

where dma_filter_fn is defined as:

	typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

The 'filter_fn' parameter is optional, but highly recommended for
slave and cyclic channels as they typically need to obtain a specific
DMA channel.

When the optional 'filter_fn' parameter is NULL, dma_request_channel()
simply returns the first channel that satisfies the capability mask.

Otherwise, the 'filter_fn' routine will be called once for each free
channel which has a capability in 'mask'. 'filter_fn' is expected to
return 'true' when the desired DMA channel is found.

A channel allocated via this interface is exclusive to the caller,
until dma_release_channel() is called.

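As an illustration, a client might request a slave channel as in the
sketch below. The 'my_filter' routine and the 'my_dma_dev' pointer it
matches against are hypothetical; real filters usually match on
driver-specific data such as a DMA request line number.

	static bool my_filter(struct dma_chan *chan, void *filter_param)
	{
		/* accept only channels provided by our DMA controller */
		return chan->device->dev == filter_param;
	}

	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	chan = dma_request_channel(mask, my_filter, my_dma_dev);
	if (!chan)
		/* no matching channel was available */
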
2. Set slave and controller specific parameters

The next step is always to pass some specific information to the DMA
driver. Most of the generic information which a slave DMA can use
is in struct dma_slave_config. This allows the clients to specify
DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
for the peripheral.

If some DMA controllers have more parameters to be sent, they should
embed struct dma_slave_config in their controller-specific structure.
That gives flexibility to clients to pass more parameters, if
required.

Interface:
	int dmaengine_slave_config(struct dma_chan *chan,
			struct dma_slave_config *config)

Please see the dma_slave_config structure definition in dmaengine.h
for a detailed explanation of the struct members. Please note
that the 'direction' member will be going away as it duplicates the
direction given in the prepare call.

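A sketch of configuring a channel for memory-to-peripheral transfers
follows. The FIFO address and burst size are made-up values for an
imaginary peripheral; real values come from the peripheral and DMA
controller documentation.

	struct dma_slave_config config;
	int ret;

	memset(&config, 0, sizeof(config));
	config.direction = DMA_TO_DEVICE;	/* duplicated in the prepare call */
	config.dst_addr = MY_FIFO_DMA_ADDR;	/* hypothetical FIFO address */
	config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	config.dst_maxburst = 8;

	ret = dmaengine_slave_config(chan, &config);
	if (ret)
		/* the DMA driver rejected this configuration */
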
3. Get a descriptor for transaction

For slave usage the various modes of slave transfers supported by the
DMA-engine are:

slave_sg	- DMA a list of scatter gather buffers from/to a peripheral
dma_cyclic	- Perform a cyclic DMA operation from/to a peripheral till the
		  operation is explicitly stopped.
interleaved_dma - This is common to Slave as well as M2M clients. For slave
		  channels, the address of the device's FIFO may already be
		  known to the driver. Various types of operations can be
		  expressed by setting appropriate values in the
		  'dma_interleaved_template' members.

A non-NULL return of this transfer API represents a "descriptor" for
the given transaction.

Interface:
	struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_data_direction direction,
		unsigned long flags);

	struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
		size_t period_len, enum dma_data_direction direction);

	struct dma_async_tx_descriptor *(*device_prep_interleaved_dma)(
		struct dma_chan *chan, struct dma_interleaved_template *xt,
		unsigned long flags);

The peripheral driver is expected to have mapped the scatterlist for
the DMA operation prior to calling device_prep_slave_sg, and must
keep the scatterlist mapped until the DMA operation has completed.
The scatterlist must be mapped using the DMA struct device. So,
normal setup should look like this:

	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
	if (nr_sg == 0)
		/* error */

	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			direction, flags);

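A cyclic transfer over an already-mapped ring buffer might be prepared
as in the sketch below; the buffer address 'buf' and the sizes are
hypothetical. The callback typically fires once per completed period.

	/* 64 KiB ring buffer split into eight 8 KiB periods */
	desc = chan->device->device_prep_dma_cyclic(chan, buf, 65536,
			8192, DMA_TO_DEVICE);
	if (!desc)
		/* error */
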
Once a descriptor has been obtained, the callback information can be
added and the descriptor must then be submitted. Some DMA engine
drivers may hold a spinlock between a successful preparation and
submission so it is important that these two operations are closely
paired.

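Adding the callback information is a matter of filling in the
descriptor before submission; 'my_dma_complete' and 'my_data' below
are hypothetical client-supplied names.

	desc->callback = my_dma_complete;	/* void (*)(void *) */
	desc->callback_param = my_data;
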
Note:
	Although the async_tx API specifies that completion callback
	routines cannot submit any new operations, this is not the
	case for slave/cyclic DMA.

	For slave DMA, the subsequent transaction may not be available
	for submission prior to the callback function being invoked, so
	slave DMA callbacks are permitted to prepare and submit a new
	transaction.

	For cyclic DMA, a callback function may wish to terminate the
	DMA via dmaengine_terminate_all().

	Therefore, it is important that DMA engine drivers drop any
	locks before calling the callback function which may cause a
	deadlock.

	Note that callbacks will always be invoked from the DMA
	engine's tasklet, never from interrupt context.

4. Submit the transaction

Once the descriptor has been prepared and the callback information
added, it must be placed on the DMA engine driver's pending queue.

Interface:
	dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

This returns a cookie that can be used to check the progress of DMA
engine activity via other DMA engine calls not covered in this
document.

dmaengine_submit() will not start the DMA operation; it merely adds
it to the pending queue. For this, see step 5, dma_async_issue_pending.

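A minimal sketch of submission with its error check:

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		/* the descriptor was not accepted */
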
5. Issue pending DMA requests and wait for callback notification

The transactions in the pending queue can be activated by calling the
issue_pending API. If the channel is idle then the first transaction
in the queue is started and subsequent ones queued up.

On completion of each DMA operation, the next in queue is started and
a tasklet triggered. The tasklet will then call the client driver
completion callback routine for notification, if set.

Interface:
	void dma_async_issue_pending(struct dma_chan *chan);

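A common pattern is to start the transfer and then sleep until the
callback signals completion; the 'struct completion' used here would
be a hypothetical field in the client driver's private data, with
complete() called from the DMA callback.

	dma_async_issue_pending(chan);

	wait_for_completion(&my_dev->dma_done);
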
Further APIs:

1. int dmaengine_terminate_all(struct dma_chan *chan)

This causes all activity for the DMA channel to be stopped, and may
discard data in the DMA FIFO which hasn't been fully transferred.
No callback functions will be called for any incomplete transfers.

2. int dmaengine_pause(struct dma_chan *chan)

This pauses activity on the DMA channel without data loss.

3. int dmaengine_resume(struct dma_chan *chan)

Resume a previously paused DMA channel. It is invalid to resume a
channel which is not currently paused.

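For instance, a client reconfiguring a peripheral mid-stream might
bracket the operation as below, assuming the DMA driver implements
pause and resume; reconfigure_my_peripheral() is hypothetical.

	if (dmaengine_pause(chan) == 0) {
		/* the channel is quiescent, safe to poke the device */
		reconfigure_my_peripheral();
		dmaengine_resume(chan);
	}
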
4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
		dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

This can be used to check the status of the channel. Please see
the documentation in include/linux/dmaengine.h for a more complete
description of this API.

This can be used in conjunction with dma_async_is_complete() and
the cookie returned from 'descriptor->submit()' to check for
completion of a specific DMA transaction.

Note:
	Not all DMA engine drivers can return reliable information for
	a running DMA channel. It is recommended that DMA engine users
	pause or stop (via dmaengine_terminate_all) the channel before
	using this API.

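With those caveats in mind, a status poll using the cookie returned
by dmaengine_submit() might look as follows; DMA_SUCCESS is the
completion status value in kernels of this vintage.

	dma_cookie_t last, used;
	enum dma_status status;

	status = dma_async_is_tx_complete(chan, cookie, &last, &used);
	if (status == DMA_SUCCESS)
		/* the transaction identified by 'cookie' has completed */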