Documentation/ABI/testing/sysfs-class-mtd  (+3 −3)

@@ -229,6 +229,6 @@
 KernelVersion:	4.1
 Contact:	linux-mtd@lists.infradead.org
 Description:
 		For a partition, the offset of that partition from the start
-		of the master device in bytes. This attribute is absent on
-		main devices, so it can be used to distinguish between
-		partitions and devices that aren't partitions.
+		of the parent (another partition or a flash device) in bytes.
+		This attribute is absent on flash devices, so it can be used
+		to distinguish them from partitions.

Documentation/DMA-API-HOWTO.txt  (+88 −65)

+=========================
 Dynamic DMA mapping Guide
+=========================

-David S. Miller <davem@redhat.com>
-Richard Henderson <rth@cygnus.com>
-Jakub Jelinek <jakub@redhat.com>
+:Author: David S. Miller <davem@redhat.com>
+:Author: Richard Henderson <rth@cygnus.com>
+:Author: Jakub Jelinek <jakub@redhat.com>

 This is a guide to device driver writers on how to use the DMA API
 with example pseudo-code.  For a concise description of the API, see
 DMA-API.txt.

 CPU and DMA addresses
+=====================

 There are several kinds of addresses involved in the DMA API, and it's
 important to understand the differences.

 The kernel normally uses virtual addresses.  Any address returned by
 kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
-be stored in a "void *".
+be stored in a ``void *``.

 The virtual memory system (TLB, page tables, etc.) translates virtual
 addresses to CPU physical addresses, which are stored as "phys_addr_t" or

@@ -37,7 +39,7 @@
 be restricted to a subset of that space.  For example, even if a system
 supports 64-bit addresses for main memory and PCI BARs, it may use an
 IOMMU so devices only need to use 32-bit DMA addresses.

-Here's a picture and some examples:
+Here's a picture and some examples::

               CPU                  CPU                  Bus
             Virtual              Physical             Address

@@ -98,7 +100,7 @@
 microprocessor architecture.  You should use the DMA API rather than the
 bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
 pci_map_*() interfaces.

-First of all, you should make sure
+First of all, you should make sure::

	#include <linux/dma-mapping.h>

@@ -107,6 +109,7 @@
 can hold any valid DMA address for the platform and should be used
 everywhere you hold a DMA address returned from the DMA mapping functions.

 What memory is DMA'able?
+========================

 The first piece of information you must know is what kernel memory can
 be used with the DMA mapping facilities.  There has been an unwritten

@@ -144,6 +147,7 @@
 networking subsystems make sure that the buffers they use are valid for
 you to DMA from/to.

 DMA addressing limitations
+==========================

 Does your device have any DMA addressing limitations?  For example, is
 your device only capable of driving the low order 24-bits of address?

@@ -166,7 +170,7 @@
 style to do this even if your device holds the default setting,
 because this shows that you did think about these issues wrt. your
 device.
-The query is performed via a call to dma_set_mask_and_coherent():
+The query is performed via a call to dma_set_mask_and_coherent()::

	int dma_set_mask_and_coherent(struct device *dev, u64 mask);

@@ -175,12 +179,12 @@
 If you have some special requirements, then the following two separate
 queries can be used instead:

 The query for streaming mappings is performed via a call to
-dma_set_mask():
+dma_set_mask()::

	int dma_set_mask(struct device *dev, u64 mask);

 The query for consistent allocations is performed via a call
-to dma_set_coherent_mask():
+to dma_set_coherent_mask()::

	int dma_set_coherent_mask(struct device *dev, u64 mask);

@@ -209,7 +213,7 @@
 of your driver reports that performance is bad or that the device is not
 even detected, you can ask them for the kernel messages to find out
 exactly why.

-The standard 32-bit addressing device would do something like this:
+The standard 32-bit addressing device would do something like this::

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");

@@ -225,7 +229,7 @@
 than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
 more efficient than DAC addressing.

 Here is how you would handle a 64-bit capable device which can drive
-all 64-bits when accessing streaming DMA:
+all 64-bits when accessing streaming DMA::

	int using_dac;

@@ -239,7 +243,7 @@
	}

 If a card is capable of using 64-bit consistent allocations as well,
-the case would look like this:
+the case would look like this::

	int using_dac, consistent_using_dac;

@@ -260,7 +264,7 @@
 uses consistent allocations, one would have to check the return value from
 dma_set_coherent_mask().

 Finally, if your device can only drive the low 24-bits of
-address you might do something like:
+address you might do something like::

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");

@@ -280,7 +284,7 @@
 only provide the functionality which the machine can handle.  It
 is important that the last call to dma_set_mask() be for the
 most specific mask.

-Here is pseudo-code showing how this might be done:
+Here is pseudo-code showing how this might be done::

	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

@@ -309,6 +313,7 @@
 devices seems to be littered with ISA chips given a PCI front end,
 and thus retaining the 16MB DMA addressing limitations of ISA.

 Types of DMA mappings
+=====================

 There are two types of DMA mappings:

@@ -336,12 +341,14 @@
              to memory is immediately visible to the device, and vice
              versa.  Consistent mappings guarantee this.

-             IMPORTANT: Consistent DMA memory does not preclude the usage of
+             .. important::
+
+	       Consistent DMA memory does not preclude the usage of
	       proper memory barriers.  The CPU may reorder stores to
	       consistent memory just as it may normal memory.  Example:
	       if it is important for the device to see the first word
	       of a descriptor updated before the second, you must do
-	       something like:
+	       something like::

		desc->word0 = address;
		wmb();
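A slightly fuller sketch of the ordering rule in that hunk may help; the
descriptor layout, the DESC_VALID flag and the mydev_post_desc() helper
below are hypothetical illustrations, not part of the patch::

	#define DESC_VALID	0x1	/* hypothetical "descriptor ready" flag */

	struct mydev_desc {
		u32 word0;	/* DMA address of the buffer */
		u32 word1;	/* control flags; device reads this last */
	};

	static void mydev_post_desc(struct mydev_desc *desc, dma_addr_t addr)
	{
		desc->word0 = lower_32_bits(addr);	/* store the address first */
		wmb();		/* make word0 visible before the valid bit */
		desc->word1 = DESC_VALID;	/* device may now consume word0 */
	}

Without the wmb(), the CPU is free to make the DESC_VALID store visible
first, and the device could read a stale address from word0.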
@@ -377,16 +384,17 @@
 Also, systems with caches that aren't DMA-coherent will work better
 when the underlying buffers don't share cache lines with other data.

-Using Consistent DMA mappings.
+Using Consistent DMA mappings
+=============================

 To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
-you should do:
+you should do::

	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

-where device is a struct device *. This may be called in interrupt
+where device is a ``struct device *``. This may be called in interrupt
 context with the GFP_ATOMIC flag.

 Size is the length of the region you want to allocate, in bytes.

@@ -415,7 +423,7 @@
 exists (for example) to guarantee that if you allocate a chunk which is
 smaller than or equal to 64 kilobytes, the extent of the buffer you
 receive will not cross a 64K boundary.

-To unmap and free such a DMA region, you call:
+To unmap and free such a DMA region, you call::

	dma_free_coherent(dev, size, cpu_addr, dma_handle);

@@ -430,7 +438,7 @@
 a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
 Also, it understands common hardware constraints for alignment,
 like queue heads needing to be aligned on N byte boundaries.

-Create a dma_pool like this:
+Create a dma_pool like this::

	struct dma_pool *pool;

@@ -444,7 +452,7 @@
 pass 0 for boundary; passing 4096 says memory allocated from this pool
 must not cross 4KByte boundaries (but at that time it may be better to
 use dma_alloc_coherent() directly instead).

-Allocate memory from a DMA pool like this:
+Allocate memory from a DMA pool like this::

	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

@@ -452,7 +460,7 @@
 flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
 holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
 this returns two values, cpu_addr and dma_handle.

-Free memory that was allocated from a dma_pool like this:
+Free memory that was allocated from a dma_pool like this::

	dma_pool_free(pool, cpu_addr, dma_handle);

@@ -460,7 +468,7 @@
 where pool is what you passed to dma_pool_alloc(), and cpu_addr and
 dma_handle are the values dma_pool_alloc() returned.  This function
 may be called in interrupt context.

-Destroy a dma_pool by calling:
+Destroy a dma_pool by calling::

	dma_pool_destroy(pool);

@@ -469,10 +477,11 @@
 from a pool before you destroy the pool.  This function may not
 be called in interrupt context.

 DMA Direction
+=============

 The interfaces described in subsequent portions of this document
 take a DMA direction argument, which is an integer and takes on
-one of the following values:
+one of the following values::

	DMA_BIDIRECTIONAL
	DMA_TO_DEVICE

@@ -522,13 +531,14 @@
 specifier.  For receive packets, just the opposite, map/unmap them
 with the DMA_FROM_DEVICE direction specifier.

 Using Streaming DMA mappings
+============================

 The streaming DMA mapping routines can be called from interrupt
 context.  There are two versions of each map/unmap, one which will
 map/unmap a single memory region, and one which will map/unmap a
 scatterlist.

-To map a single region, you do:
+To map a single region, you do::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;

@@ -545,7 +555,7 @@
		goto map_error_handling;
	}

-and to unmap it:
+and to unmap it::

	dma_unmap_single(dev, dma_handle, size, direction);

@@ -563,7 +573,7 @@
 Using CPU pointers like this for single mappings has a disadvantage:
 you cannot reference HIGHMEM memory in this way.  Thus, there is a
 map/unmap interface pair akin to dma_{map,unmap}_single().  These
 interfaces deal with page/offset pairs instead of CPU pointers.
-Specifically:
+Specifically::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;

@@ -593,7 +603,7 @@
 error as outlined under the dma_map_single() discussion.

 You should call dma_unmap_page() when the DMA activity is finished, e.g.,
 from the interrupt which told you that the DMA transfer is done.

-With scatterlists, you map a region gathered from several regions by:
+With scatterlists, you map a region gathered from several regions by::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

@@ -617,13 +627,15 @@
 Then you should loop count times (note: this can be less than nents times)
 and use sg_dma_address() and sg_dma_len() macros where you previously
 accessed sg->address and sg->length as shown above.

-To unmap a scatterlist, just call:
+To unmap a scatterlist, just call::

	dma_unmap_sg(dev, sglist, nents, direction);

 Again, make sure DMA activity has already finished.

-PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be
+.. note::
+
+	The 'nents' argument to the dma_unmap_sg call must be
	the _same_ one you passed into the dma_map_sg call,
	it should _NOT_ be the 'count' value _returned_ from the
	dma_map_sg call.

@@ -638,11 +650,11 @@
 properly in order for the CPU and device to see the most up-to-date and
 correct copy of the DMA buffer.

 So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
-transfer call either:
+transfer call either::

	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

-or:
+or::

	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

@@ -650,17 +662,19 @@
 as appropriate.

 Then, if you wish to let the device get at the DMA area again,
 finish accessing the data with the CPU, and then before actually
-giving the buffer to the hardware call either:
+giving the buffer to the hardware call either::

	dma_sync_single_for_device(dev, dma_handle, size, direction);

-or:
+or::

	dma_sync_sg_for_device(dev, sglist, nents, direction);

 as appropriate.

-PLEASE NOTE: The 'nents' argument to dma_sync_sg_for_cpu() and
+.. note::
+
+	The 'nents' argument to dma_sync_sg_for_cpu() and
	dma_sync_sg_for_device() must be the same passed to
	dma_map_sg().  It is _NOT_ the count returned by
	dma_map_sg().

@@ -671,7 +685,7 @@
 dma_map_*() call till dma_unmap_*(), then you don't have to call the
 dma_sync_*() routines at all.

 Here is pseudo code which shows a situation in which you would need
-to use the dma_sync_*() interfaces.
+to use the dma_sync_*() interfaces::

	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{

@@ -748,6 +762,7 @@
 they are entirely deprecated.  Some ports already do not provide these
 as it is impossible to correctly support them.

 Handling Errors
+===============

 DMA address space is limited on some architectures and an allocation
 failure can be determined by:

@@ -755,7 +770,7 @@
 - checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

 - checking the dma_addr_t returned from dma_map_single() and dma_map_page()
-  by using dma_mapping_error():
+  by using dma_mapping_error()::

	dma_addr_t dma_handle;

@@ -773,7 +788,8 @@
 of a multiple page mapping attempt.  These example are applicable to
 dma_map_page() as well.
-Example 1:
+Example 1::

	dma_addr_t dma_handle1;
	dma_addr_t dma_handle2;

@@ -802,8 +818,12 @@
	dma_unmap_single(dma_handle1);

 map_error_handling1:

-Example 2: (if buffers are allocated in a loop, unmap all mapped buffers
-	    when mapping error is detected in the middle)
+Example 2::
+
+	/*
+	 * if buffers are allocated in a loop, unmap all mapped buffers when
+	 * mapping error is detected in the middle
+	 */

	dma_addr_t dma_addr;
	dma_addr_t array[DMA_BUFFERS];

@@ -847,6 +867,7 @@
 fails in the queuecommand hook.  This means that the SCSI subsystem
 passes the command to the driver again later.

 Optimizing Unmap State Space Consumption
+========================================

 On many platforms, dma_unmap_{single,page}() is simply a nop.
 Therefore, keeping track of the mapping address and length is a waste

@@ -858,7 +879,7 @@
 Actually, instead of describing the macros one by one, we'll
 transform some example code.

 1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
-   Example, before:
+   Example, before::

	struct ring_state {
		struct sk_buff *skb;

@@ -866,7 +887,7 @@
		__u32 len;
	};

-   after:
+   after::

	struct ring_state {
		struct sk_buff *skb;

@@ -875,23 +896,23 @@
	};

 2) Use dma_unmap_{addr,len}_set() to set these values.
-   Example, before:
+   Example, before::

	ringp->mapping = FOO;
	ringp->len = BAR;

-   after:
+   after::

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

 3) Use dma_unmap_{addr,len}() to access these values.
-   Example, before:
+   Example, before::

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

-   after:
+   after::

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),

@@ -903,6 +924,7 @@
 separately, because it is possible for an implementation to only need
 the address in order to perform the unmap operation.

 Platform Issues
+===============

 If you are just writing drivers for Linux and do not maintain
 an architecture port for the kernel, you can safely skip down

@@ -929,11 +951,12 @@
 to "Closing".

   objects).

 Closing
+=======

 This document, and the API itself, would not be in its current
 form without the feedback and suggestions from numerous individuals.
 We would like to specifically mention, in no particular order, the
-following people:
+following people::

	Russell King <rmk@arm.linux.org.uk>
	Leo Dagum <dagum@barrel.engr.sgi.com>

Documentation/DMA-API.txt  (+329 −251)

 [diff collapsed: preview size limit exceeded]

Documentation/DMA-ISA-LPC.txt  (+37 −36)

+============================
 DMA with ISA and LPC devices
+============================

-Pierre Ossman <drzeus@drzeus.cx>
+:Author: Pierre Ossman <drzeus@drzeus.cx>

 This document describes how to do DMA transfers using the old ISA
 DMA controller.  Even though ISA is more or less dead today the LPC
 bus uses the same DMA system so it will be around for quite some time.

-Part I - Headers and dependencies
----------------------------------
+Headers and dependencies
+------------------------

-To do ISA style DMA you need to include two headers:
+To do ISA style DMA you need to include two headers::

	#include <linux/dma-mapping.h>
	#include <asm/dma.h>

@@ -23,8 +24,8 @@
 this is not present on all platforms make sure you construct your
 Kconfig to be dependent on ISA_DMA_API (not ISA) so that nobody
 tries to build your driver on unsupported platforms.
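To make that Kconfig advice concrete, a minimal sketch follows; the
config symbol name and prompt are made up, and only the
``depends on ISA_DMA_API`` line comes from the text above::

	config MYDRIVER
		tristate "Hypothetical ISA/LPC DMA driver"
		depends on ISA_DMA_API
		help
		  Depending on ISA_DMA_API (rather than ISA) keeps the
		  driver from being built on platforms that lack the
		  ISA DMA API entirely.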
-Part II - Buffer allocation
----------------------------
+Buffer allocation
+-----------------

 The ISA DMA controller has some very strict requirements on which
 memory it can access so extra care must be taken when allocating

@@ -42,13 +43,13 @@
 requirements you pass the flag GFP_DMA to kmalloc.

 Unfortunately the memory available for ISA DMA is scarce so unless you
 allocate the memory during boot-up it's a good idea to also pass
-__GFP_REPEAT and __GFP_NOWARN to make the allocator try a bit harder.
+__GFP_RETRY_MAYFAIL and __GFP_NOWARN to make the allocator try a bit harder.

 (This scarcity also means that you should allocate the buffer as
 early as possible and not release it until the driver is unloaded.)

-Part III - Address translation
-------------------------------
+Address translation
+-------------------

 To translate the virtual address to a bus address, use the normal DMA
 API.  Do _not_ use isa_virt_to_phys() even though it does the same

@@ -61,8 +62,8 @@
 Note: x86_64 had a broken DMA API when it came to ISA but has since
 been fixed.  If your arch has problems then fix the DMA API instead of
 reverting to the ISA functions.

-Part IV - Channels
-------------------
+Channels
+--------

 A normal ISA DMA controller has 8 channels.  The lower four are for
 8-bit transfers and the upper four are for 16-bit transfers.

@@ -80,8 +81,8 @@
 The ability to use 16-bit or 8-bit transfers is _not_ up to you as a
 driver author but depends on what the hardware supports.  Check your
 specs or test different channels.

-Part V - Transfer data
-----------------------
+Transfer data
+-------------

 Now for the good stuff, the actual DMA transfer. :)

@@ -112,7 +113,7 @@
 Once the DMA transfer is finished (or timed out) you should disable
 the channel again.  You should also check get_dma_residue() to make
 sure that all data has been transferred.

-Example:
+Example::

	int flags, residue;

@@ -141,8 +142,8 @@
	if (residue != 0)

	release_dma_lock(flags);

-Part VI - Suspend/resume
-------------------------
+Suspend/resume
+--------------

 It is the driver's responsibility to make sure that the machine isn't
 suspended while a DMA transfer is in progress.  Also, all DMA settings

Documentation/DMA-attributes.txt  (+9 −6)

+==============
 DMA attributes
+==============

@@ -108,6 +109,7 @@
 This is a hint to the DMA-mapping subsystem that it's probably not worth
 the time to try to allocate memory to in a way that gives better TLB
 efficiency (AKA it's not worth trying to build the mapping out of larger
 pages).  You might want to specify this if:
+
 - You know that the accesses to this memory won't thrash the TLB.
   You might know that the accesses are likely to be sequential or
   that they aren't sequential but it's unlikely you'll ping-pong

@@ -121,10 +123,11 @@
   the mapping to have a short lifetime then it may be worth it to
   optimize allocation (avoid coming up with large pages) instead of
   getting the slight performance win of larger pages.
+
 Setting this hint doesn't guarantee that you won't get huge pages, but it
 means that we won't try quite as hard to get them.

-NOTE: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM,
+.. note:: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM,
   though ARM64 patches will likely be posted soon.

 DMA_ATTR_NO_WARN

@@ -142,10 +145,10 @@
 problem at all, depending on the implementation of the retry mechanism.
 So, this provides a way for drivers to avoid those error messages on calls
 where allocation failures are not a problem, and shouldn't bother the logs.

-NOTE: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC.
+.. note:: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC.

 DMA_ATTR_PRIVILEGED
-------------------------------
+-------------------

 Some advanced peripherals such as remote processors and GPUs perform
 accesses to DMA buffers in both privileged "supervisor" and unprivileged
 "user" modes.
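DMA-attributes.txt itself shows no call sites, so here is a minimal
sketch of how such an attribute is passed in practice; it is not part of
the patch, and ``dev`` and ``size`` are assumed to come from a
hypothetical surrounding driver::

	dma_addr_t dma_handle;
	void *cpu_addr;

	/*
	 * A best-effort allocation the driver can recover from, so
	 * DMA_ATTR_NO_WARN asks the core not to log a failure warning.
	 */
	cpu_addr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
				   DMA_ATTR_NO_WARN);
	if (!cpu_addr)
		return -ENOMEM;	/* or fall back to a smaller buffer */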