============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
to the API (and actual examples), see Documentation/core-api/dma-api-howto.rst.

This API is split into two pieces. Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines. Unless you know that your driver absolutely has to support
non-consistent platforms (these are usually only legacy platforms) you
should only use the API described in Part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>. This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform. It can be
given to a device to use as a DMA source or target. A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

    void *
    dma_alloc_coherent(struct device *dev, size_t size,
                       dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

::

    void
    dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                      dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent(). cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.

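As a minimal sketch of the allocate/use/free lifecycle (the ``dev``
pointer and the hypothetical descriptor ring are illustrative
assumptions, not part of the API)::

    /* A hypothetical ring of 256 descriptors, kept coherent so CPU
     * writes are visible to the device without explicit syncs. */
    struct my_desc *ring;
    dma_addr_t ring_dma;
    size_t ring_size = 256 * sizeof(*ring);

    ring = dma_alloc_coherent(dev, ring_size, &ring_dma, GFP_KERNEL);
    if (!ring)
        return -ENOMEM;

    /* ... program the device with ring_dma and run I/O ... */

    /* Teardown; remember that IRQs must be enabled here. */
    dma_free_coherent(dev, ring_size, ring, ring_dma);
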
Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>.

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the DMA-coherent
allocator, not __get_free_pages(). Also, they understand common
hardware constraints for alignment, like queue heads needing to be
aligned on N-byte boundaries.

::

    struct dma_pool *
    dma_pool_create(const char *name, struct device *dev,
                    size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent(). The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two). If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

    void *
    dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
                    dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.

::

    void *
    dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
                   dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

    void
    dma_pool_free(struct dma_pool *pool, void *vaddr,
                  dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

    void
    dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be
called in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.

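Putting the pool calls together, a minimal sketch (the pool name, the
sizes and the ``dev`` pointer are illustrative assumptions)::

    struct dma_pool *pool;
    void *vaddr;
    dma_addr_t dma;

    /* 64-byte descriptors, 16-byte aligned, no boundary restriction. */
    pool = dma_pool_create("mydev_desc", dev, 64, 16, 0);
    if (!pool)
        return -ENOMEM;

    vaddr = dma_pool_zalloc(pool, GFP_KERNEL, &dma);
    if (!vaddr) {
        dma_pool_destroy(pool);
        return -ENOMEM;
    }

    /* ... give 'dma' to the device, access the memory via 'vaddr' ... */

    dma_pool_free(pool, vaddr, dma);
    dma_pool_destroy(pool);
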
Part Ic - DMA addressing limitations
------------------------------------

::

    int
    dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

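A common probe-time pattern is to ask for the widest mask the device
supports and fall back to a narrower one; the 64/32-bit widths below
are illustrative assumptions about the hardware::

    int err;

    /* Prefer 64-bit DMA addressing; fall back to 32-bit. */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
        err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
        if (err)
            return err;
    }
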
::

    u64
    dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently. Usually this means the returned mask
is the minimum required to cover all of memory. Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.

::

    size_t
    dma_max_mapping_size(struct device *dev);

Returns the maximum size of a mapping for the device. The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.

::

    bool
    dma_need_sync(struct device *dev, dma_addr_t dma_addr);

Returns %true if dma_sync_single_for_{device,cpu} calls are required to
transfer memory ownership. Returns %false if those calls can be skipped.

::

    unsigned long
    dma_get_merge_boundary(struct device *dev);

Returns the DMA merge boundary. If the device cannot merge any of the DMA
address segments, the function returns 0.

Part Id - Streaming DMA mappings
--------------------------------

::

    dma_addr_t
    dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                   enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_API uses a strongly typed enumerator for its
direction:

======================= =============================================
DMA_NONE                no direction (used for debugging)
DMA_TO_DEVICE           data is going from the memory to the device
DMA_FROM_DEVICE         data is coming from the device to the memory
DMA_BIDIRECTIONAL       direction isn't known
======================= =============================================

.. note::

   Not all memory regions in a machine can be mapped by this API.
   Further, contiguous kernel virtual space may not be contiguous as
   physical memory. Since this API does not provide any scatter/gather
   capability, it will fail if the user tries to map a non-physically
   contiguous piece of memory. For this reason, memory to be mapped by
   this API should be obtained from sources which guarantee it to be
   physically contiguous (like kmalloc).

   Further, the DMA address of the memory must be within the
   dma_mask of the device (the dma_mask is a bit mask of the
   addressable region for the device, i.e., if the DMA address of
   the memory ANDed with the dma_mask is still equal to the DMA
   address, then the device can perform DMA to the memory). To
   ensure that the memory allocated by kmalloc is within the dma_mask,
   the driver may specify various platform-dependent flags to restrict
   the DMA address range of the allocation (e.g., on x86, GFP_DMA
   guarantees to be within the first 16MB of available DMA addresses,
   as required by ISA devices).

   Note also that the above constraints on physical contiguity and
   dma_mask may not apply if the platform has an IOMMU (a device which
   maps an I/O DMA address to a physical memory address). However, to be
   portable, device driver writers may *not* assume that such an IOMMU
   exists.

.. warning::

   Memory coherency operates at a granularity called the cache
   line width. In order for memory mapped by this API to operate
   correctly, the mapped region must begin exactly on a cache line
   boundary and end exactly on one (to prevent two separately mapped
   regions from sharing a single cache line). Since the cache line size
   may not be known at compile time, the API will not enforce this
   requirement. Therefore, it is recommended that driver writers who
   don't take special care to determine the cache line size at run time
   only map virtual regions that begin and end on page boundaries (which
   are guaranteed also to be cache line boundaries).

   DMA_TO_DEVICE synchronisation must be done after the last modification
   of the memory region by the software and before it is handed off to
   the device. Once this primitive is used, memory covered by this
   primitive should be treated as read-only by the device. If the device
   may write to it at any point, it should be DMA_BIDIRECTIONAL (see
   below).

   DMA_FROM_DEVICE synchronisation must be done before the driver
   accesses data that may be changed by the device. This memory should
   be treated as read-only by the driver. If the driver needs to write
   to it at any point, it should be DMA_BIDIRECTIONAL (see below).

   DMA_BIDIRECTIONAL requires special handling: it means that the driver
   isn't sure if the memory was modified before being handed off to the
   device and also isn't sure if the device will also modify it. Thus,
   you must always sync bidirectional memory twice: once before the
   memory is handed off to the device (to make sure all memory changes
   are flushed from the processor) and once before the data may be
   accessed after being used by the device (to make sure any processor
   cache lines are updated with data that the device may have changed).

::

    void
    dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                     enum dma_data_direction direction)

Unmaps the region previously mapped. All parameters must be identical
to those passed to (and returned by) the mapping API.

::

    dma_addr_t
    dma_map_page(struct device *dev, struct page *page,
                 unsigned long offset, size_t size,
                 enum dma_data_direction direction)

    void
    dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
                   enum dma_data_direction direction)

API for mapping and unmapping pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

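A sketch mapping one full page for device reads (``page`` is an assumed
pre-allocated struct page; the error check uses dma_mapping_error(),
described below)::

    dma_addr_t dma;

    dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma))
        return -ENOMEM;

    /* ... device reads PAGE_SIZE bytes from 'dma' ... */

    dma_unmap_page(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);
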
::

    dma_addr_t
    dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
                     enum dma_data_direction dir, unsigned long attrs)

    void
    dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
                       enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping MMIO resources. All the notes and
warnings for the other mapping APIs apply here. The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.

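As an illustrative sketch, a driver might map a peer device's MMIO
window for a DMA engine (``bar_phys`` is an assumed physical address
taken from a struct resource describing MMIO, never RAM)::

    dma_addr_t dma;

    dma = dma_map_resource(dev, bar_phys, SZ_4K, DMA_BIDIRECTIONAL, 0);
    if (dma_mapping_error(dev, dma))
        return -ENOMEM;

    /* ... program the DMA engine with 'dma' ... */

    dma_unmap_resource(dev, dma, SZ_4K, DMA_BIDIRECTIONAL, 0);
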
::

    int
    dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping. A driver can check for these errors by testing
the returned DMA address with dma_mapping_error(). A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).

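For example, a transmit path might map a kmalloc'ed buffer and bail out
cleanly on failure (``buf`` and ``len`` are illustrative assumptions)::

    dma_addr_t dma;

    dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma)) {
        /* e.g. drop the request, or requeue it and retry later */
        return -ENOMEM;
    }

    /* ... device reads 'len' bytes at 'dma' ... */

    dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
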
::

    int
    dma_map_sg(struct device *dev, struct scatterlist *sg,
               int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use the sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

    void
    dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                 int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed in to the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

::

    void
    dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
                            size_t size,
                            enum dma_data_direction direction)

    void
    dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
                               size_t size,
                               enum dma_data_direction direction)

    void
    dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
                        int nents,
                        enum dma_data_direction direction)

    void
    dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
                           int nents,
                           enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the sg mapping API. With the sync_single API,
you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.

.. note::

   You must do this:

   - Before reading values that have been written by DMA from the device
     (use the DMA_FROM_DEVICE direction)
   - After writing values that will be written to the device using DMA
     (use the DMA_TO_DEVICE direction)
   - Before *and* after handing memory to the device if the memory is
     DMA_BIDIRECTIONAL

See also dma_map_single().

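For instance, a receive buffer that stays mapped across many operations
might be resynced around each transfer (``dma``, ``len`` and the
DMA_FROM_DEVICE direction are illustrative assumptions)::

    /* Hand ownership of the buffer back to the device... */
    dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);

    /* ... device DMA writes into the buffer, completion arrives ... */

    /* ... then reclaim it for the CPU before reading the data. */
    dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
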
::

    dma_addr_t
    dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
                         enum dma_data_direction dir,
                         unsigned long attrs)

    void
    dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
                           size_t size, enum dma_data_direction dir,
                           unsigned long attrs)

    int
    dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                     int nents, enum dma_data_direction dir,
                     unsigned long attrs)

    void
    dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                       int nents, enum dma_data_direction dir,
                       unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in
Documentation/core-api/dma-attributes.rst.

If dma_attrs are 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

    #include <linux/dma-mapping.h>
    /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
     * documented in Documentation/core-api/dma-attributes.rst */
    ...

    unsigned long attr = 0;
    attr |= DMA_ATTR_FOO;
    ....
    n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr);
    ....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

    void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
                                 size_t size, enum dma_data_direction dir,
                                 unsigned long attrs)
    {
        ....
        if (attrs & DMA_ATTR_FOO)
            /* twizzle the frobnozzle */
        ....
    }


Part II - Non-coherent DMA allocations
--------------------------------------

These APIs allow allocating pages that are guaranteed to be DMA addressable
by the passed in device, but which need explicit management of memory ownership
for the kernel vs the device.

If you don't understand how cache line coherency works between a processor and
an I/O device, you should not be using this part of the API.

::

    struct page *
    dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
                    enum dma_data_direction dir, gfp_t gfp)

This routine allocates a region of <size> bytes of non-coherent memory. It
returns a pointer to the first struct page for the region, or NULL if the
allocation failed. The resulting struct page can be used for everything a
struct page is suitable for.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

The dir parameter specifies whether data is read and/or written by the device,
see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

Before giving the memory to the device, dma_sync_single_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
reused.

::

    void
    dma_free_pages(struct device *dev, size_t size, struct page *page,
                   dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_pages().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_pages(). page must be the pointer returned by dma_alloc_pages().

::

    int
    dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
                   size_t size, struct page *page)

Map an allocation returned from dma_alloc_pages() into a user address space.
dev and size must be the same as those passed into dma_alloc_pages().
page must be the pointer returned by dma_alloc_pages().

::

    void *
    dma_alloc_noncoherent(struct device *dev, size_t size,
                          dma_addr_t *dma_handle, enum dma_data_direction dir,
                          gfp_t gfp)

This routine is a convenient wrapper around dma_alloc_pages() that returns the
kernel virtual address for the allocated memory instead of the page structure.

::

    void
    dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
                         dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_noncoherent().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_noncoherent(). cpu_addr must be the virtual address returned by
dma_alloc_noncoherent().

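A minimal sketch of the non-coherent lifecycle (the 64KB size and the
receive direction are illustrative assumptions)::

    void *buf;
    dma_addr_t dma;

    buf = dma_alloc_noncoherent(dev, SZ_64K, &dma, DMA_FROM_DEVICE,
                                GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    /* Hand ownership to the device before it writes... */
    dma_sync_single_for_device(dev, dma, SZ_64K, DMA_FROM_DEVICE);

    /* ... device DMA completes ... */

    /* ... then take it back before the CPU reads the data. */
    dma_sync_single_for_cpu(dev, dma, SZ_64K, DMA_FROM_DEVICE);

    dma_free_noncoherent(dev, SZ_64K, buf, dma, DMA_FROM_DEVICE);
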
::

    struct sg_table *
    dma_alloc_noncontiguous(struct device *dev, size_t size,
                            enum dma_data_direction dir, gfp_t gfp,
                            unsigned long attrs);

This routine allocates <size> bytes of non-coherent and possibly non-contiguous
memory. It returns a pointer to a struct sg_table that describes the allocated
and DMA mapped memory, or NULL if the allocation failed. The resulting memory
can be used for everything a struct page mapped into a scatterlist is suitable
for.

The returned sg_table is guaranteed to have a single DMA mapped segment as
indicated by sgt->nents, but it might have multiple CPU side segments as
indicated by sgt->orig_nents.

The dir parameter specifies whether data is read and/or written by the device,
see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

The attrs argument must be either 0 or DMA_ATTR_ALLOC_SINGLE_PAGES.

Before giving the memory to the device, dma_sync_sgtable_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_sgtable_for_cpu(), just like for streaming DMA mappings that are
reused.

::

    void
    dma_free_noncontiguous(struct device *dev, size_t size,
                           struct sg_table *sgt,
                           enum dma_data_direction dir)

Free memory previously allocated using dma_alloc_noncontiguous(). dev, size,
and dir must all be the same as those passed into dma_alloc_noncontiguous().
sgt must be the pointer returned by dma_alloc_noncontiguous().

::

    void *
    dma_vmap_noncontiguous(struct device *dev, size_t size,
                           struct sg_table *sgt)

Return a contiguous kernel mapping for an allocation returned from
dma_alloc_noncontiguous(). dev and size must be the same as those passed into
dma_alloc_noncontiguous(). sgt must be the pointer returned by
dma_alloc_noncontiguous().

Once a non-contiguous allocation is mapped using this function, the
flush_kernel_vmap_range() and invalidate_kernel_vmap_range() APIs must be used
to manage the coherency between the kernel mapping, the device and user space
mappings (if any).

::

    void
    dma_vunmap_noncontiguous(struct device *dev, void *vaddr)

Unmap a kernel mapping returned by dma_vmap_noncontiguous(). dev must be the
same as the one passed into dma_alloc_noncontiguous(). vaddr must be the
pointer returned by dma_vmap_noncontiguous().

::

    int
    dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
                           size_t size, struct sg_table *sgt)

Map an allocation returned from dma_alloc_noncontiguous() into a user address
space. dev and size must be the same as those passed into
dma_alloc_noncontiguous(). sgt must be the pointer returned by
dma_alloc_noncontiguous().

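Tying these together, a sketch of a possibly non-contiguous receive
buffer with an optional kernel mapping (the 1MB size and the direction
are illustrative assumptions)::

    struct sg_table *sgt;
    void *vaddr;

    sgt = dma_alloc_noncontiguous(dev, SZ_1M, DMA_FROM_DEVICE,
                                  GFP_KERNEL, 0);
    if (!sgt)
        return -ENOMEM;

    /* Optional contiguous kernel view of the allocation. */
    vaddr = dma_vmap_noncontiguous(dev, SZ_1M, sgt);
    if (!vaddr) {
        dma_free_noncontiguous(dev, SZ_1M, sgt, DMA_FROM_DEVICE);
        return -ENOMEM;
    }

    dma_sync_sgtable_for_device(dev, sgt, DMA_FROM_DEVICE);
    /* ... device writes ... */
    dma_sync_sgtable_for_cpu(dev, sgt, DMA_FROM_DEVICE);

    dma_vunmap_noncontiguous(dev, vaddr);
    dma_free_noncontiguous(dev, SZ_1M, sgt, DMA_FROM_DEVICE);
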
::

    int
    dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

   This API may return a number *larger* than the actual cache
   line, but it will guarantee that one or more cache lines fit exactly
   into the width returned by this call. It will also always be a power
   of two for easy alignment.

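For example, a driver mapping sub-page regions might round its buffer
sizes up to this value (a sketch; ``len`` is an illustrative
assumption)::

    size_t mapped_len = ALIGN(len, dma_get_cache_alignment());
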

Part III - Debugging driver use of the DMA-API
----------------------------------------------

The DMA-API as described above has some constraints. For example, DMA
addresses must be released with the corresponding function and with the same
size. With the advent of hardware IOMMUs it becomes more and more important
that drivers do not violate those constraints. In the worst case such a
violation can result in data corruption, up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations. If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration. Enabling this
option has a performance impact. Do not enable it in production kernels.

The resulting kernel will contain code which does some bookkeeping about
what DMA memory was allocated for which device. If this code detects an
error it prints a warning message with some details into your kernel log. An
example warning message may look like this::

    WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
        check_unmap+0x203/0x490()
    Hardware name:
    forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
        function [device address=0x00000000640444be] [size=66 bytes] [mapped as
        single] [unmapped as page]
    Modules linked in: nfsd exportfs bridge stp llc r8169
    Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
    Call Trace:
    <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
    [<ffffffff80647b70>] _spin_unlock+0x10/0x30
    [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
    [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
    [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
    [<ffffffff80252f96>] queue_work+0x56/0x60
    [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
    [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
    [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
    [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
    [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
    [<ffffffff803c7ea3>] check_unmap+0x203/0x490
    [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
    [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
    [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
    [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
    [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
    [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
    [<ffffffff8020c093>] ret_from_intr+0x0/0xa
    <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All other
errors will only be silently counted. This limitation exists to prevent the
code from flooding your kernel log. To support debugging a device driver this
can be disabled via debugfs. See the debugfs interface documentation below
for details.

The debugfs directory for the DMA-API debugging code is called dma-api/. In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors              This file contains a numeric value. If this
                                value is not equal to zero the debugging code
                                will print a warning for every error it finds
                                into the kernel log. Be careful with this
                                option, as it can easily flood your logs.

dma-api/disabled                This read-only file contains the character 'Y'
                                if the debugging code is disabled. This can
                                happen when it runs out of memory or if it was
                                disabled at boot time.

dma-api/dump                    This read-only file contains current DMA
                                mappings.

dma-api/error_count             This file is read-only and shows the total
                                number of errors found.

dma-api/num_errors              The number in this file shows how many
                                warnings will be printed to the kernel log
                                before it stops. This number is initialized to
                                one at system boot and can be set by writing
                                into this file.

dma-api/min_free_entries        This read-only file can be read to get the
                                minimum number of free dma_debug_entries the
                                allocator has ever seen. If this value goes
                                down to zero the code will attempt to increase
                                nr_total_entries to compensate.

dma-api/num_free_entries        The current number of free dma_debug_entries
                                in the allocator.

dma-api/nr_total_entries        The total number of dma_debug_entries in the
                                allocator, both free and used.

dma-api/driver_filter           You can write a name of a driver into this
                                file to limit the debug output to requests
                                from that particular driver. Write an empty
                                string to that file to disable the filter and
                                see all errors again.
=============================== ===============================================

187f9c3f
JR
798If you have this code compiled into your kernel it will be enabled by default.
799If you want to boot without the bookkeeping anyway you can provide
800'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
801Notice that you can not enable it again at runtime. You have to reboot to do
802so.
803
016ea687
JR
804If you want to see debug messages only for a special device driver you can
805specify the dma_debug_driver=<drivername> parameter. This will enable the
806driver filter at boot time. The debug code will only print errors for that
807driver afterwards. This filter can be disabled or changed later using debugfs.
808
When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries and was unable to allocate more on demand. 65536
entries are preallocated at boot - if this is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to override the default. Note
that the code allocates entries in batches, so the exact number of
preallocated entries may be greater than the actual number requested. The
code will print to the kernel log each time it has dynamically allocated
as many entries as were initially preallocated. This is to indicate that a
larger preallocation size may be appropriate, or, if it happens continually,
that a driver may be leaking mappings.

::

    void
    debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

The dma-debug interface debug_dma_mapping_error() is used to debug drivers
that fail to check for DMA mapping errors on addresses returned by the
dma_map_single() and dma_map_page() interfaces. This interface clears a flag
set by debug_dma_map_page() to indicate that dma_mapping_error() has been
called by the driver. When the driver does the unmap, debug_dma_unmap()
checks the flag and, if it is still set, prints a warning message that
includes the call trace leading up to the unmap. This interface can be
called from dma_mapping_error() routines to enable DMA mapping error check
debugging.