DMA Engine API Guide
====================

Vinod Koul <vinod dot koul at intel.com>

NOTE: For DMA Engine usage in async_tx please see:
	Documentation/crypto/async-tx-api.txt


Below is a guide to device driver writers on how to use the Slave-DMA API of the
DMA Engine. This is applicable only for slave DMA usage.

The slave DMA usage consists of the following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction
5. Issue pending requests and wait for callback notification

1. Allocate a DMA slave channel

Channel allocation is slightly different in the slave DMA context:
client drivers typically need a channel only from a particular DMA
controller, and in some cases even a specific channel is desired.
To request a channel, the dma_request_channel() API is used.

Interface:
	struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
			dma_filter_fn filter_fn,
			void *filter_param);
where dma_filter_fn is defined as:
	typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

The 'filter_fn' parameter is optional, but highly recommended for
slave and cyclic channels as they typically need to obtain a specific
DMA channel.

When the optional 'filter_fn' parameter is NULL, dma_request_channel()
simply returns the first channel that satisfies the capability mask.

Otherwise, the 'filter_fn' routine will be called once for each free
channel which has a capability in 'mask'. 'filter_fn' is expected to
return 'true' when the desired DMA channel is found.

A channel allocated via this interface is exclusive to the caller,
until dma_release_channel() is called.
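
As an illustration, below is a minimal sketch of requesting a slave
channel with a filter routine. The matching test inside 'my_filter'
is hypothetical; what identifies the desired channel is entirely
controller specific:

	static bool my_filter(struct dma_chan *chan, void *filter_param)
	{
		/* Hypothetical test: match driver-private channel data
		 * against the parameter passed by the client. */
		return chan->private == filter_param;
	}

	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	chan = dma_request_channel(mask, my_filter, my_filter_param);
	if (!chan)
		/* no matching channel available */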

2. Set slave and controller specific parameters

The next step is always to pass some specific information to the DMA
driver. Most of the generic information which a slave DMA can use
is in struct dma_slave_config. This allows the clients to specify
DMA direction, DMA addresses, bus widths, DMA burst lengths, etc.
for the peripheral.

If some DMA controllers have more parameters to be sent, then they
should embed struct dma_slave_config in their controller-specific
structures. That gives the client the flexibility to pass more
parameters, if required.

Interface:
	int dmaengine_slave_config(struct dma_chan *chan,
			struct dma_slave_config *config)

Please see the dma_slave_config structure definition in dmaengine.h
for a detailed explanation of the struct members. Please note
that the 'direction' member will be going away as it duplicates the
direction given in the prepare call.
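
For example, a client feeding a peripheral FIFO might configure a
memory-to-device channel as in the sketch below; 'my_fifo_phys' and
the bus width and burst values are made-up, peripheral-specific
assumptions:

	struct dma_slave_config config;
	int ret;

	memset(&config, 0, sizeof(config));
	config.direction = DMA_MEM_TO_DEV;
	config.dst_addr = my_fifo_phys;	/* hypothetical FIFO bus address */
	config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	config.dst_maxburst = 4;

	ret = dmaengine_slave_config(chan, &config);
	if (ret)
		/* parameters rejected by the DMA driver */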

3. Get a descriptor for transaction

For slave usage the various modes of slave transfers supported by the
DMA-engine are:

slave_sg	- DMA a list of scatter gather buffers from/to a peripheral
dma_cyclic	- Perform a cyclic DMA operation from/to a peripheral till
		  the operation is explicitly stopped.
interleaved_dma	- This is common to Slave as well as M2M clients. For slave
		  channels, the address of the device's FIFO may already be
		  known to the driver. Various types of operations can be
		  expressed by setting appropriate values in the
		  'dma_interleaved_template' members.

A non-NULL return of this transfer API represents a "descriptor" for
the given transaction.

Interface:
	struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_data_direction direction,
		unsigned long flags);

	struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
		size_t period_len, enum dma_data_direction direction);

	struct dma_async_tx_descriptor *(*device_prep_interleaved_dma)(
		struct dma_chan *chan, struct dma_interleaved_template *xt,
		unsigned long flags);

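As a sketch of the interleaved case, a plain contiguous
memory-to-device transfer can be described as a single frame holding
a single chunk; the addresses and length below are hypothetical
placeholders:

	struct dma_interleaved_template *xt;
	struct dma_async_tx_descriptor *desc;

	/* room for one data_chunk; sgl[] is a flexible array member */
	xt = kzalloc(sizeof(*xt) + sizeof(struct data_chunk), GFP_KERNEL);
	if (!xt)
		/* error */

	xt->src_start = src_phys;	/* hypothetical buffer bus address */
	xt->dst_start = dst_phys;	/* hypothetical device address */
	xt->dir = DMA_MEM_TO_DEV;
	xt->src_inc = true;		/* walk through the source buffer */
	xt->dst_inc = false;		/* device address does not increment */
	xt->numf = 1;			/* a single frame ... */
	xt->frame_size = 1;		/* ... of a single chunk */
	xt->sgl[0].size = len;
	xt->sgl[0].icg = 0;		/* no inter-chunk gap */

	desc = chan->device->device_prep_interleaved_dma(chan, xt, flags);
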
The peripheral driver is expected to have mapped the scatterlist for
the DMA operation prior to calling device_prep_slave_sg, and must
keep the scatterlist mapped until the DMA operation has completed.
The scatterlist must be mapped using the DMA struct device. So,
normal setup should look like this:

	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
	if (nr_sg == 0)
		/* error */

	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			direction, flags);

Once a descriptor has been obtained, the callback information can be
added and the descriptor must then be submitted. Some DMA engine
drivers may hold a spinlock between a successful preparation and
submission so it is important that these two operations are closely
paired.
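
For example, assuming a hypothetical completion handler
'my_dma_complete', the callback fields are filled in on the
descriptor and submission (step 4) follows immediately:

	desc->callback = my_dma_complete;	/* called from the tasklet */
	desc->callback_param = my_data;		/* hypothetical client data */
	cookie = dmaengine_submit(desc);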

Note:
	Although the async_tx API specifies that completion callback
	routines cannot submit any new operations, this is not the
	case for slave/cyclic DMA.

	For slave DMA, the subsequent transaction may not be available
	for submission prior to the callback function being invoked, so
	slave DMA callbacks are permitted to prepare and submit a new
	transaction.

	For cyclic DMA, a callback function may wish to terminate the
	DMA via dmaengine_terminate_all().

	Therefore, it is important that DMA engine drivers drop any
	locks before calling the callback function, as holding them
	may cause a deadlock.

	Note that callbacks will always be invoked from the DMA
	engine's tasklet, never from interrupt context.

4. Submit the transaction

Once the descriptor has been prepared and the callback information
added, it must be placed on the DMA engine driver's pending queue.

Interface:
	dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

This returns a cookie that can be used to check the progress of DMA
engine activity via other DMA engine calls not covered in this
document.

dmaengine_submit() will not start the DMA operation, it merely adds
it to the pending queue. For this, see step 5, dma_async_issue_pending.

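An error cookie can be detected with dma_submit_error(); a minimal
sketch:

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		/* the driver rejected the descriptor */
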
5. Issue pending DMA requests and wait for callback notification

The transactions in the pending queue can be activated by calling the
issue_pending API. If the channel is idle then the first transaction
in the queue is started and subsequent ones are queued up.

On completion of each DMA operation, the next in queue is started and
a tasklet is triggered. The tasklet will then call the client driver
completion callback routine for notification, if set.

Interface:
	void dma_async_issue_pending(struct dma_chan *chan);

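Putting the five steps together, the skeleton of a typical slave DMA
client might look like the sketch below (error handling omitted;
'my_filter', 'my_dma_complete' and related names are the hypothetical
ones used in the earlier sketches):

	chan = dma_request_channel(mask, my_filter, my_filter_param);

	dmaengine_slave_config(chan, &config);

	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			direction, flags);
	desc->callback = my_dma_complete;
	desc->callback_param = my_data;
	cookie = dmaengine_submit(desc);

	dma_async_issue_pending(chan);	/* the transfer starts here */
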
Further APIs:

1. int dmaengine_terminate_all(struct dma_chan *chan)

This causes all activity for the DMA channel to be stopped, and may
discard data in the DMA FIFO which hasn't been fully transferred.
No callback functions will be called for any incomplete transfers.

2. int dmaengine_pause(struct dma_chan *chan)

This pauses activity on the DMA channel without data loss.

3. int dmaengine_resume(struct dma_chan *chan)

Resume a previously paused DMA channel. It is invalid to resume a
channel which is not currently paused.

4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
	dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

This can be used to check the status of the channel. Please see
the documentation in include/linux/dmaengine.h for a more complete
description of this API.

This can be used in conjunction with dma_async_is_complete() and
the cookie returned from 'descriptor->submit()' to check for
completion of a specific DMA transaction.

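For example, to poll the state of a previously submitted transaction
(subject to the caveat in the note below; DMA_SUCCESS is the
completed status as defined in dmaengine.h):

	dma_cookie_t last, used;
	enum dma_status status;

	status = dma_async_is_tx_complete(chan, cookie, &last, &used);
	if (status == DMA_SUCCESS)
		/* the transaction identified by 'cookie' has completed */
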
194 | Note: | |
195 | Not all DMA engine drivers can return reliable information for | |
196 | a running DMA channel. It is recommended that DMA engine users | |
197 | pause or stop (via dmaengine_terminate_all) the channel before | |
198 | using this API. |