=========================
Dynamic DMA mapping Guide
=========================

:Author: David S. Miller <davem@redhat.com>
:Author: Richard Henderson <rth@cygnus.com>
:Author: Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

CPU and DMA addresses
=====================

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses.  Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a ``void *``.

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t".  The kernel manages device resources like registers as
physical addresses.  These are the addresses in /proc/iomem.  The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address".  If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses.  In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not.  IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space.  For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples::

                CPU                  CPU                  Bus
              Virtual              Physical             Address
              Address              Address               Space
               Space                Space

              +-------+             +------+             +------+
              |       |             |MMIO  |   Offset    |      |
              |       |  Virtual    |Space |   applied   |      |
            C +-------+ --------> B +------+ ----------> +------+ A
              |       |  mapping    |      |   by host   |      |
    +-----+   |       |             |      |   bridge    |      |   +--------+
    |     |   |       |             +------+             |      |   |        |
    | CPU |   |       |             | RAM  |             |      |   | Device |
    |     |   |       |             |      |             |      |   |        |
    +-----+   +-------+             +------+             +------+   +--------+
              |       |  Virtual    |Buffer|   Mapping   |      |
            X +-------+ --------> Y +------+ <---------- +------+ Z
              |       |  mapping    | RAM  |   by IOMMU
              |       |             |      |
              |       |             |      |
              +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system.  For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B).  The address B
is stored in a struct resource and usually exposed via /proc/iomem.  When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C).  It can then use, e.g., ioread32(C), to access
the device registers at bus address A.

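As a concrete (and entirely hypothetical) illustration of addresses A, B,
and C, here is a minimal sketch of a PCI probe routine; the BAR number and
the 0x04 register offset are invented, and pci_enable_device() and most
error handling are omitted for brevity::

        #include <linux/pci.h>
        #include <linux/io.h>

        static int mydev_probe(struct pci_dev *pdev,
                               const struct pci_device_id *id)
        {
                void __iomem *regs;     /* virtual address C */
                u32 status;

                /* BAR 0 holds bus address A; the kernel has already
                 * converted it to physical address B in the struct
                 * resource that pci_iomap() consults.
                 */
                regs = pci_iomap(pdev, 0, 0);
                if (!regs)
                        return -ENOMEM;

                /* This access goes out on the bus to address A. */
                status = ioread32(regs + 0x04);  /* 0x04: made-up offset */

                pci_iounmap(pdev, regs);
                return 0;
        }
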
If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y.  But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y.  This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z.  The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

For Linux to use dynamic DMA mapping, it needs some help from the
drivers: they must take into account that DMA addresses should be mapped
only for the time they are actually used and unmapped after the DMA
transfer.

The following API will, of course, work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture.  You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure::

        #include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

What memory is DMA'able?
========================

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

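As a quick illustration of these rules, here is a minimal sketch
contrasting allocations that may be handed to the DMA mapping functions
with one that may not (the sizes are arbitrary)::

        /* DMA'able: page allocator and slab/kmalloc memory. */
        unsigned long pg = __get_free_pages(GFP_KERNEL, 0);
        void *buf = kmalloc(4096, GFP_KERNEL);

        /* NOT directly DMA'able: see the vmalloc() caveat below. */
        void *vbuf = vmalloc(8192);
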
This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

DMA addressing limitations
==========================

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low-order 24 bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32 bits.  For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices
to support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has.  It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues with regard
to your device.

The query is performed via a call to dma_set_mask_and_coherent()::

        int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, then the following two separate
queries can be used instead:

The query for streaming mappings is performed via a call to
dma_set_mask()::

        int dma_set_mask(struct device *dev, u64 mask);

The query for consistent allocations is performed via a call
to dma_set_coherent_mask()::

        int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports.  It returns zero if your card can perform DMA properly on
the machine given the address mask you provided.  In general, the
device struct of your device is embedded in the bus-specific device
struct of your device.  For example, &pdev->dev is a pointer to the
device struct of a PCI device (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message
when you end up performing either #2 or #3.  In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this::

        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail.  The kernel may fail the 64-bit mask not because the
platform is not capable of 64-bit addressing.  Rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA::

        int using_dac;

        if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
        } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
        } else {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this::

        int using_dac, consistent_using_dac;

        if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
                consistent_using_dac = 1;
        } else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
                consistent_using_dac = 0;
        } else {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

The coherent mask can always be set to the same or a smaller mask than
the streaming mask.  However, for the rare case that a device driver
only uses consistent allocations, you have to check the return value
from dma_set_coherent_mask().

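For instance, a driver that uses only consistent allocations (say, for a
descriptor ring and nothing else) might check the coherent mask by itself;
a minimal sketch, with illustrative masks and message text::

        /* Try 64-bit coherent DMA first, fall back to 32-bit. */
        if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64)) &&
            dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
                dev_warn(dev, "mydev: No suitable coherent DMA available\n");
                goto ignore_this_device;
        }
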
Finally, if your device can only drive the low 24 bits of
address you might do something like::

        if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
                dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
                goto ignore_this_device;
        }

When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done::

        #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
        #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

        struct my_sound_card *card;
        struct device *dev;

        ...
        if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
                card->playback_enabled = 1;
        } else {
                card->playback_enabled = 0;
                dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                         card->name);
        }
        if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                card->record_enabled = 1;
        } else {
                card->record_enabled = 0;
                dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                         card->name);
        }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

Types of DMA mappings
=====================

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the DMA space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

        - Network card DMA ring descriptors.
        - SCSI adapter mailbox command data structures.
        - Device firmware microcode executed out of
          main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  .. important::

             Consistent DMA memory does not preclude the usage of
             proper memory barriers.  The CPU may reorder stores to
             consistent memory just as it may normal memory.  Example:
             if it is important for the device to see the first word
             of a descriptor updated before the second, you must do
             something like::

                desc->word0 = address;
                wmb();
                desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

             Also, on some platforms your driver may need to flush CPU
             write buffers in much the same way as it needs to flush
             write buffers found in PCI bridges (such as by reading a
             register's value after writing it); a sketch of this
             read-back technique follows this list.

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

        - Networking buffers transmitted/received by a device.
        - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


Using Consistent DMA mappings
=============================

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do::

        dma_addr_t dma_handle;

        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a ``struct device *``.  This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via the DMA mask) that it may address the upper 32-bits,
consistent allocation will only return > 32-bit addresses for DMA if
the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as
well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call::

        dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this::

        struct dma_pool *pool;

        pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this::

        cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this::

        dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned.  This function
may be called in interrupt context.

Destroy a dma_pool by calling::

        dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool.  This function may not
be called in interrupt context.

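Putting these calls together, here is a minimal sketch of the whole
dma_pool lifecycle for a hypothetical device using 64-byte descriptors
aligned to 8 bytes (the pool name and the sizes are illustrative)::

        struct dma_pool *pool;
        dma_addr_t dma;
        void *desc;

        pool = dma_pool_create("mydev_desc", dev, 64, 8, 0);
        if (!pool)
                goto err;

        desc = dma_pool_alloc(pool, GFP_KERNEL, &dma);
        if (!desc)
                goto err_destroy;

        /* Hand 'dma' to the device; use 'desc' from the CPU. */

        dma_pool_free(pool, desc, dma);
        dma_pool_destroy(pool);
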
DMA Direction
=============

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values::

        DMA_BIDIRECTIONAL
        DMA_TO_DEVICE
        DMA_FROM_DEVICE
        DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device".
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.

The value DMA_NONE is to be used for debugging.  You can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.

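For example, a hypothetical network driver would pick the direction per
packet like this (skb, rx_buf, and rx_len are assumed to come from the
usual transmit and receive paths)::

        /* Transmit: the device reads the packet from memory. */
        tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

        /* Receive: the device writes incoming data into memory. */
        rx_dma = dma_map_single(dev, rx_buf, rx_len, DMA_FROM_DEVICE);
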
Using Streaming DMA mappings
============================

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        void *addr = buffer->ptr;
        size_t size = buffer->len;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

and to unmap it::

        dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error.  Doing so will ensure that the mapping code will work
correctly on all DMA implementations without any dependency on the
specifics of the underlying implementation.  Using the returned address
without checking for errors could result in failures ranging from panics
to silent data corruption.  The same applies to dma_map_page() as well.

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        struct page *page = buffer->page;
        unsigned long offset = buffer->offset;
        size_t size = buffer->len;

        dma_handle = dma_map_page(dev, page, offset, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

        ...

        dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and
return an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by::

        int i, count = dma_map_sg(dev, sglist, nents, direction);
        struct scatterlist *sg;

        for_each_sg(sglist, sg, count, i) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }

where nents is the number of entries in the sglist.

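If you are starting from plain kernel-virtual buffers rather than a
ready-made scatterlist, a minimal sketch of building and mapping one looks
like this (buf0, buf1, and the lengths are assumptions)::

        struct scatterlist sgl[2];
        int count;

        sg_init_table(sgl, 2);                  /* initialize 2 entries */
        sg_set_buf(&sgl[0], buf0, len0);
        sg_set_buf(&sgl[1], buf1, len1);

        count = dma_map_sg(dev, sgl, 2, direction);
        if (count == 0)
                goto map_error_handling;
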
The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call::

        dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

.. note::

        The 'nents' argument to the dma_unmap_sg call must be
        the _same_ one you passed into the dma_map_sg call;
        it should _NOT_ be the 'count' value _returned_ from the
        dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::

        dma_sync_single_for_device(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

.. note::

        The 'nents' argument to dma_sync_sg_for_cpu() and
        dma_sync_sg_for_device() must be the same as that passed
        to dma_map_sg().  It is _NOT_ the count returned by
        dma_map_sg().

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}().  If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces::

        my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
        {
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
                if (dma_mapping_error(cp->dev, mapping)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }

                cp->rx_buf = buffer;
                cp->rx_len = len;
                cp->rx_dma = mapping;

                give_rx_buf_to_card(cp);
        }

        ...

        my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
        {
                struct my_card *cp = devid;

                ...
                if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                        struct my_card_header *hp;

                        /* Examine the header to see if we wish
                         * to accept the data.  But synchronize
                         * the DMA transfer with the CPU first
                         * so that we see updated contents.
                         */
                        dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                                cp->rx_len,
                                                DMA_FROM_DEVICE);

                        /* Now it is safe to examine the buffer. */
                        hp = (struct my_card_header *) cp->rx_buf;
                        if (header_is_ok(hp)) {
                                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                                 DMA_FROM_DEVICE);
                                pass_to_upper_layers(cp->rx_buf);
                                make_and_setup_new_rx_buf(cp);
                        } else {
                                /* CPU should not write to
                                 * DMA_FROM_DEVICE-mapped area,
                                 * so dma_sync_single_for_device() is
                                 * not needed here.  It would be required
                                 * for DMA_BIDIRECTIONAL mapping if
                                 * the memory was modified.
                                 */
                                give_rx_buf_to_card(cp);
                        }
                }
        }

Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt().  Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt() in the
dynamic DMA mapping scheme - you always have to store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

Handling Errors
===============

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()::

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

- unmapping pages that are already mapped, when a mapping error occurs
  in the middle of a multiple-page mapping attempt.  These examples are
  applicable to dma_map_page() as well.

Example 1::

        dma_addr_t dma_handle1;
        dma_addr_t dma_handle2;

        dma_handle1 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle1)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling1;
        }
        dma_handle2 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle2)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling2;
        }

        ...

        map_error_handling2:
        dma_unmap_single(dev, dma_handle1, size, direction);
        map_error_handling1:

Example 2::

        /*
         * if buffers are allocated in a loop, unmap all mapped buffers when
         * mapping error is detected in the middle
         */

        dma_addr_t dma_addr;
        dma_addr_t array[DMA_BUFFERS];
        int save_index = 0;

        for (i = 0; i < DMA_BUFFERS; i++) {

                ...

                dma_addr = dma_map_single(dev, addr, size, direction);
                if (dma_mapping_error(dev, dma_addr)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }
                array[i] = dma_addr;
                save_index++;
        }

        ...

        map_error_handling:

        for (i = 0; i < save_index; i++) {

                ...

                dma_unmap_single(dev, array[i], size, direction);
        }

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit).  This means that the socket buffer is just dropped in
the failure case.

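A minimal sketch of this pattern in a hypothetical ndo_start_xmit
implementation (the private struct, its dev member, and the stats update
are illustrative)::

        static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
                                            struct net_device *ndev)
        {
                struct mydev_priv *mp = netdev_priv(ndev);
                dma_addr_t dma;

                dma = dma_map_single(mp->dev, skb->data, skb->len,
                                     DMA_TO_DEVICE);
                if (dma_mapping_error(mp->dev, dma)) {
                        dev_kfree_skb(skb);             /* drop the packet */
                        ndev->stats.tx_dropped++;
                        return NETDEV_TX_OK;            /* do not requeue */
                }

                /* ... hand 'dma' to the hardware and kick the queue ... */
                return NETDEV_TX_OK;
        }
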
SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook.  This means that the SCSI subsystem
passes the command to the driver again later.

Optimizing Unmap State Space Consumption
========================================

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before::

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after::

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before::

        ringp->mapping = FOO;
        ringp->len = BAR;

   after::

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before::

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after::

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

Platform Issues
===============

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
   supports IOMMUs (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe.  Drivers and subsystems depend on it.  If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that a kmalloc'ed buffer doesn't share a cache line with
   others.  See arch/arm/include/asm/cache.h as an example.

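   For reference, the ARM header boils down to aligning kmalloc'ed
   memory to a full L1 cache line; this is a paraphrase, not a
   verbatim copy::

        /* Paraphrased from arch/arm/include/asm/cache.h: make kmalloc()
         * hand out buffers that never share a cache line with other data.
         */
        #define L1_CACHE_SHIFT          CONFIG_ARM_L1_CACHE_SHIFT
        #define L1_CACHE_BYTES          (1 << L1_CACHE_SHIFT)
        #define ARCH_DMA_MINALIGN       L1_CACHE_BYTES
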
   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints.  You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).

Closing
=======

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people::

        Russell King <rmk@arm.linux.org.uk>
        Leo Dagum <dagum@barrel.engr.sgi.com>
        Ralf Baechle <ralf@oss.sgi.com>
        Grant Grundler <grundler@cup.hp.com>
        Jay Estabrook <Jay.Estabrook@compaq.com>
        Thomas Sailer <sailer@ife.ee.ethz.ch>
        Andrea Arcangeli <andrea@suse.de>
        Jens Axboe <jens.axboe@oracle.com>
        David Mosberger-Tang <davidm@hpl.hp.com>