diff --git a/Documentation/ABI/testing/sysfs-class-mtd b/Documentation/ABI/testing/sysfs-class-mtd index 3b5c3bca9186d13e8cf5f1911b2fc88424c6970f..f34e592301d1dbd9190d8557e43fd05bcde08f80 100644 --- a/Documentation/ABI/testing/sysfs-class-mtd +++ b/Documentation/ABI/testing/sysfs-class-mtd @@ -229,6 +229,6 @@ KernelVersion: 4.1 Contact: linux-mtd@lists.infradead.org Description: For a partition, the offset of that partition from the start - of the master device in bytes. This attribute is absent on - main devices, so it can be used to distinguish between - partitions and devices that aren't partitions. + of the parent (another partition or a flash device) in bytes. + This attribute is absent on flash devices, so it can be used + to distinguish them from partitions. diff --git a/Documentation/DMA-API-HOWTO.txt b/Documentation/DMA-API-HOWTO.txt index 4ed388356898435a12670581083a9036a71131bb..f0cc3f772265488b24f2ace181c62c34cd0b080d 100644 --- a/Documentation/DMA-API-HOWTO.txt +++ b/Documentation/DMA-API-HOWTO.txt @@ -1,22 +1,24 @@ - Dynamic DMA mapping Guide - ========================= +========================= +Dynamic DMA mapping Guide +========================= - David S. Miller - Richard Henderson - Jakub Jelinek +:Author: David S. Miller +:Author: Richard Henderson +:Author: Jakub Jelinek This is a guide to device driver writers on how to use the DMA API with example pseudo-code. For a concise description of the API, see DMA-API.txt. - CPU and DMA addresses +CPU and DMA addresses +===================== There are several kinds of addresses involved in the DMA API, and it's important to understand the differences. The kernel normally uses virtual addresses. Any address returned by kmalloc(), vmalloc(), and similar interfaces is a virtual address and can -be stored in a "void *". +be stored in a ``void *``. The virtual memory system (TLB, page tables, etc.) translates virtual addresses to CPU physical addresses, which are stored as "phys_addr_t" or @@ -37,7 +39,7 @@ be restricted to a subset of that space. For example, even if a system supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU so devices only need to use 32-bit DMA addresses. -Here's a picture and some examples: +Here's a picture and some examples:: CPU CPU Bus Virtual Physical Address @@ -98,15 +100,16 @@ microprocessor architecture. You should use the DMA API rather than the bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the pci_map_*() interfaces. -First of all, you should make sure +First of all, you should make sure:: -#include <linux/dma-mapping.h> + #include <linux/dma-mapping.h> is in your driver, which provides the definition of dma_addr_t. This type can hold any valid DMA address for the platform and should be used everywhere you hold a DMA address returned from the DMA mapping functions. - What memory is DMA'able? +What memory is DMA'able? +======================== The first piece of information you must know is what kernel memory can be used with the DMA mapping facilities. There has been an unwritten @@ -143,7 +146,8 @@ What about block I/O and networking buffers? The block I/O and networking subsystems make sure that the buffers they use are valid for you to DMA from/to. - DMA addressing limitations +DMA addressing limitations +========================== Does your device have any DMA addressing limitations? For example, is your device only capable of driving the low order 24-bits of address?
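As a quick illustration of the "What memory is DMA'able?" rules above, here is a minimal sketch (not from a real driver; the buffer names are made up for the example)::

	#include <linux/slab.h>
	#include <linux/vmalloc.h>

	/* GOOD: kmalloc() returns physically contiguous memory that may
	 * be handed to the DMA mapping functions. */
	void *good_buf = kmalloc(4096, GFP_KERNEL);

	/* BAD: vmalloc() memory is only virtually contiguous and must
	 * not be passed to the DMA mapping functions directly. */
	void *bad_buf = vmalloc(4096);

Stack addresses and kernel/module image addresses are likewise not DMA'able; allocate DMA buffers dynamically instead.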
@@ -166,7 +170,7 @@ style to do this even if your device holds the default setting, because this shows that you did think about these issues wrt. your device. -The query is performed via a call to dma_set_mask_and_coherent(): +The query is performed via a call to dma_set_mask_and_coherent():: int dma_set_mask_and_coherent(struct device *dev, u64 mask); @@ -175,12 +179,12 @@ If you have some special requirements, then the following two separate queries can be used instead: The query for streaming mappings is performed via a call to - dma_set_mask(): + dma_set_mask():: int dma_set_mask(struct device *dev, u64 mask); The query for consistent allocations is performed via a call - to dma_set_coherent_mask(): + to dma_set_coherent_mask():: int dma_set_coherent_mask(struct device *dev, u64 mask); @@ -209,7 +213,7 @@ of your driver reports that performance is bad or that the device is not even detected, you can ask them for the kernel messages to find out exactly why. -The standard 32-bit addressing device would do something like this: +The standard 32-bit addressing device would do something like this:: if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) { dev_warn(dev, "mydev: No suitable DMA available\n"); @@ -225,7 +229,7 @@ than 64-bit addressing. For example, Sparc64 PCI SAC addressing is more efficient than DAC addressing. Here is how you would handle a 64-bit capable device which can drive -all 64-bits when accessing streaming DMA: +all 64-bits when accessing streaming DMA:: int using_dac; @@ -239,7 +243,7 @@ all 64-bits when accessing streaming DMA: } If a card is capable of using 64-bit consistent allocations as well, -the case would look like this: +the case would look like this:: int using_dac, consistent_using_dac; @@ -260,7 +264,7 @@ uses consistent allocations, one would have to check the return value from dma_set_coherent_mask(). Finally, if your device can only drive the low 24-bits of -address you might do something like: +address you might do something like:: if (dma_set_mask(dev, DMA_BIT_MASK(24))) { dev_warn(dev, "mydev: 24-bit DMA addressing not available\n"); @@ -280,7 +284,7 @@ only provide the functionality which the machine can handle. It is important that the last call to dma_set_mask() be for the most specific mask. -Here is pseudo-code showing how this might be done: +Here is pseudo-code showing how this might be done:: #define PLAYBACK_ADDRESS_BITS DMA_BIT_MASK(32) #define RECORD_ADDRESS_BITS DMA_BIT_MASK(24) @@ -308,7 +312,8 @@ A sound card was used as an example here because this genre of PCI devices seems to be littered with ISA chips given a PCI front end, and thus retaining the 16MB DMA addressing limitations of ISA. - Types of DMA mappings +Types of DMA mappings +===================== There are two types of DMA mappings: @@ -336,12 +341,14 @@ There are two types of DMA mappings: to memory is immediately visible to the device, and vice versa. Consistent mappings guarantee this. - IMPORTANT: Consistent DMA memory does not preclude the usage of - proper memory barriers. The CPU may reorder stores to + .. important:: + + Consistent DMA memory does not preclude the usage of + proper memory barriers. The CPU may reorder stores to consistent memory just as it may normal memory. 
Example: if it is important for the device to see the first word of a descriptor updated before the second, you must do - something like: + something like:: desc->word0 = address; wmb(); @@ -377,16 +384,17 @@ Also, systems with caches that aren't DMA-coherent will work better when the underlying buffers don't share cache lines with other data. - Using Consistent DMA mappings. +Using Consistent DMA mappings +============================= To allocate and map large (PAGE_SIZE or so) consistent DMA regions, -you should do: +you should do:: dma_addr_t dma_handle; cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp); -where device is a struct device *. This may be called in interrupt +where device is a ``struct device *``. This may be called in interrupt context with the GFP_ATOMIC flag. Size is the length of the region you want to allocate, in bytes. @@ -415,7 +423,7 @@ exists (for example) to guarantee that if you allocate a chunk which is smaller than or equal to 64 kilobytes, the extent of the buffer you receive will not cross a 64K boundary. -To unmap and free such a DMA region, you call: +To unmap and free such a DMA region, you call:: dma_free_coherent(dev, size, cpu_addr, dma_handle); @@ -430,7 +438,7 @@ a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages(). Also, it understands common hardware constraints for alignment, like queue heads needing to be aligned on N byte boundaries. -Create a dma_pool like this: +Create a dma_pool like this:: struct dma_pool *pool; @@ -444,7 +452,7 @@ pass 0 for boundary; passing 4096 says memory allocated from this pool must not cross 4KByte boundaries (but at that time it may be better to use dma_alloc_coherent() directly instead). -Allocate memory from a DMA pool like this: +Allocate memory from a DMA pool like this:: cpu_addr = dma_pool_alloc(pool, flags, &dma_handle); @@ -452,7 +460,7 @@ flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent(), this returns two values, cpu_addr and dma_handle. -Free memory that was allocated from a dma_pool like this: +Free memory that was allocated from a dma_pool like this:: dma_pool_free(pool, cpu_addr, dma_handle); @@ -460,7 +468,7 @@ where pool is what you passed to dma_pool_alloc(), and cpu_addr and dma_handle are the values dma_pool_alloc() returned. This function may be called in interrupt context. -Destroy a dma_pool by calling: +Destroy a dma_pool by calling:: dma_pool_destroy(pool); @@ -468,11 +476,12 @@ Make sure you've called dma_pool_free() for all memory allocated from a pool before you destroy the pool. This function may not be called in interrupt context. - DMA Direction +DMA Direction +============= The interfaces described in subsequent portions of this document take a DMA direction argument, which is an integer and takes on -one of the following values: +one of the following values:: DMA_BIDIRECTIONAL DMA_TO_DEVICE @@ -521,14 +530,15 @@ packets, map/unmap them with the DMA_TO_DEVICE direction specifier. For receive packets, just the opposite, map/unmap them with the DMA_FROM_DEVICE direction specifier. - Using Streaming DMA mappings +Using Streaming DMA mappings +============================ The streaming DMA mapping routines can be called from interrupt context. There are two versions of each map/unmap, one which will map/unmap a single memory region, and one which will map/unmap a scatterlist. 
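Before moving on, here is a minimal (hypothetical) sketch of the dma_pool lifetime described above, assuming ``dev`` is the usual ``struct device *`` and the pool name is made up::

	struct dma_pool *pool;
	void *cpu_addr;
	dma_addr_t dma_handle;

	/* 64-byte descriptors, 64-byte aligned, no boundary restriction */
	pool = dma_pool_create("mydev_desc", dev, 64, 64, 0);
	if (!pool)
		goto fail;

	cpu_addr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
	if (!cpu_addr)
		goto fail_destroy;

	/* ... hand dma_handle to the device, fill *cpu_addr from the CPU ... */

	dma_pool_free(pool, cpu_addr, dma_handle);
	dma_pool_destroy(pool);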
-To map a single region, you do: +To map a single region, you do:: struct device *dev = &my_dev->dev; dma_addr_t dma_handle; @@ -545,7 +555,7 @@ To map a single region, you do: goto map_error_handling; } -and to unmap it: +and to unmap it:: dma_unmap_single(dev, dma_handle, size, direction); @@ -563,7 +573,7 @@ Using CPU pointers like this for single mappings has a disadvantage: you cannot reference HIGHMEM memory in this way. Thus, there is a map/unmap interface pair akin to dma_{map,unmap}_single(). These interfaces deal with page/offset pairs instead of CPU pointers. -Specifically: +Specifically:: struct device *dev = &my_dev->dev; dma_addr_t dma_handle; @@ -593,7 +603,7 @@ error as outlined under the dma_map_single() discussion. You should call dma_unmap_page() when the DMA activity is finished, e.g., from the interrupt which told you that the DMA transfer is done. -With scatterlists, you map a region gathered from several regions by: +With scatterlists, you map a region gathered from several regions by:: int i, count = dma_map_sg(dev, sglist, nents, direction); struct scatterlist *sg; @@ -617,16 +627,18 @@ Then you should loop count times (note: this can be less than nents times) and use sg_dma_address() and sg_dma_len() macros where you previously accessed sg->address and sg->length as shown above. -To unmap a scatterlist, just call: +To unmap a scatterlist, just call:: dma_unmap_sg(dev, sglist, nents, direction); Again, make sure DMA activity has already finished. -PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be - the _same_ one you passed into the dma_map_sg call, - it should _NOT_ be the 'count' value _returned_ from the - dma_map_sg call. +.. note:: + + The 'nents' argument to the dma_unmap_sg call must be + the _same_ one you passed into the dma_map_sg call, + it should _NOT_ be the 'count' value _returned_ from the + dma_map_sg call. Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}() counterpart, because the DMA address space is a shared resource and @@ -638,11 +650,11 @@ properly in order for the CPU and device to see the most up-to-date and correct copy of the DMA buffer. So, firstly, just map it with dma_map_{single,sg}(), and after each DMA -transfer call either: +transfer call either:: dma_sync_single_for_cpu(dev, dma_handle, size, direction); -or: +or:: dma_sync_sg_for_cpu(dev, sglist, nents, direction); @@ -650,17 +662,19 @@ as appropriate. Then, if you wish to let the device get at the DMA area again, finish accessing the data with the CPU, and then before actually -giving the buffer to the hardware call either: +giving the buffer to the hardware call either:: dma_sync_single_for_device(dev, dma_handle, size, direction); -or: +or:: dma_sync_sg_for_device(dev, sglist, nents, direction); as appropriate. -PLEASE NOTE: The 'nents' argument to dma_sync_sg_for_cpu() and +.. note:: + + The 'nents' argument to dma_sync_sg_for_cpu() and dma_sync_sg_for_device() must be the same passed to dma_map_sg(). It is _NOT_ the count returned by dma_map_sg(). @@ -671,7 +685,7 @@ dma_map_*() call till dma_unmap_*(), then you don't have to call the dma_sync_*() routines at all. Here is pseudo code which shows a situation in which you would need -to use the dma_sync_*() interfaces. +to use the dma_sync_*() interfaces:: my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len) { @@ -747,7 +761,8 @@ is planned to completely remove virt_to_bus() and bus_to_virt() as they are entirely deprecated. 
Some ports already do not provide these as it is impossible to correctly support them. - Handling Errors +Handling Errors +=============== DMA address space is limited on some architectures and an allocation failure can be determined by: @@ -755,7 +770,7 @@ failure can be determined by: - checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0 - checking the dma_addr_t returned from dma_map_single() and dma_map_page() - by using dma_mapping_error(): + by using dma_mapping_error():: dma_addr_t dma_handle; @@ -773,7 +788,8 @@ failure can be determined by: of a multiple page mapping attempt. These example are applicable to dma_map_page() as well. -Example 1: +Example 1:: + dma_addr_t dma_handle1; dma_addr_t dma_handle2; @@ -802,8 +818,12 @@ Example 1: dma_unmap_single(dma_handle1); map_error_handling1: -Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when - mapping error is detected in the middle) +Example 2:: + + /* + * if buffers are allocated in a loop, unmap all mapped buffers when + * mapping error is detected in the middle + */ dma_addr_t dma_addr; dma_addr_t array[DMA_BUFFERS]; @@ -846,7 +866,8 @@ SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping fails in the queuecommand hook. This means that the SCSI subsystem passes the command to the driver again later. - Optimizing Unmap State Space Consumption +Optimizing Unmap State Space Consumption +======================================== On many platforms, dma_unmap_{single,page}() is simply a nop. Therefore, keeping track of the mapping address and length is a waste @@ -858,7 +879,7 @@ Actually, instead of describing the macros one by one, we'll transform some example code. 1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures. - Example, before: + Example, before:: struct ring_state { struct sk_buff *skb; @@ -866,7 +887,7 @@ transform some example code. __u32 len; }; - after: + after:: struct ring_state { struct sk_buff *skb; @@ -875,23 +896,23 @@ transform some example code. }; 2) Use dma_unmap_{addr,len}_set() to set these values. - Example, before: + Example, before:: ringp->mapping = FOO; ringp->len = BAR; - after: + after:: dma_unmap_addr_set(ringp, mapping, FOO); dma_unmap_len_set(ringp, len, BAR); 3) Use dma_unmap_{addr,len}() to access these values. - Example, before: + Example, before:: dma_unmap_single(dev, ringp->mapping, ringp->len, DMA_FROM_DEVICE); - after: + after:: dma_unmap_single(dev, dma_unmap_addr(ringp, mapping), @@ -902,7 +923,8 @@ It really should be self-explanatory. We treat the ADDR and LEN separately, because it is possible for an implementation to only need the address in order to perform the unmap operation. - Platform Issues +Platform Issues +=============== If you are just writing drivers for Linux and do not maintain an architecture port for the kernel, you can safely skip down @@ -928,12 +950,13 @@ to "Closing". alignment constraints (e.g. the alignment constraints about 64-bit objects). - Closing +Closing +======= This document, and the API itself, would not be in its current form without the feedback and suggestions from numerous individuals. 
We would like to specifically mention, in no particular order, the -following people: +following people:: Russell King Leo Dagum diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt index 71200dfa09228b42b6d98e635f900de3038e7227..45b29326d719b25395accc371177348a52ca1ace 100644 --- a/Documentation/DMA-API.txt +++ b/Documentation/DMA-API.txt @@ -1,7 +1,8 @@ - Dynamic DMA mapping using the generic device - ============================================ +============================================ +Dynamic DMA mapping using the generic device +============================================ - James E.J. Bottomley +:Author: James E.J. Bottomley This document describes the DMA API. For a more gentle introduction of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt. @@ -12,10 +13,10 @@ machines. Unless you know that your driver absolutely has to support non-consistent platforms (this is usually only legacy platforms) you should only use the API described in part I. -Part I - dma_ API ------------------------------------- +Part I - dma_API +---------------- -To get the dma_ API, you must #include <linux/dma-mapping.h>. This +To get the dma_API, you must #include <linux/dma-mapping.h>. This provides dma_addr_t and the interfaces described below. A dma_addr_t can hold any valid DMA address for the platform. It can be @@ -26,9 +27,11 @@ address space and the DMA address space. Part Ia - Using large DMA-coherent buffers ------------------------------------------ -void * -dma_alloc_coherent(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flag) +:: + + void * + dma_alloc_coherent(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t flag) Consistent memory is memory for which a write by either the device or the processor can immediately be read by the processor or device @@ -51,20 +54,24 @@ consolidate your requests for consistent memory as much as possible. The simplest way to do that is to use the dma_pool calls (see below). The flag parameter (dma_alloc_coherent() only) allows the caller to -specify the GFP_ flags (see kmalloc()) for the allocation (the +specify the ``GFP_`` flags (see kmalloc()) for the allocation (the implementation may choose to ignore flags that affect the location of the returned memory, like GFP_DMA). -void * -dma_zalloc_coherent(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flag) +:: + + void * + dma_zalloc_coherent(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t flag) Wraps dma_alloc_coherent() and also zeroes the returned memory if the allocation attempt succeeded. -void -dma_free_coherent(struct device *dev, size_t size, void *cpu_addr, - dma_addr_t dma_handle) +:: + + void + dma_free_coherent(struct device *dev, size_t size, void *cpu_addr, + dma_addr_t dma_handle) Free a region of consistent memory you previously allocated. dev, size and dma_handle must all be the same as those passed into @@ -78,7 +85,7 @@ may only be called with IRQs enabled. Part Ib - Using small DMA-coherent buffers ------------------------------------------ -To get this part of the dma_ API, you must #include <linux/dmapool.h> +To get this part of the dma_API, you must #include <linux/dmapool.h> Many drivers need lots of small DMA-coherent memory regions for DMA descriptors or I/O buffers. Rather than allocating in units of a page @@ -88,6 +95,8 @@ not __get_free_pages(). Also, they understand common hardware constraints for alignment, like queue heads needing to be aligned on N-byte boundaries.
+:: + struct dma_pool * dma_pool_create(const char *name, struct device *dev, size_t size, size_t align, size_t alloc); @@ -103,16 +112,21 @@ in bytes, and must be a power of two). If your device has no boundary crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated from this pool must not cross 4KByte boundaries. +:: - void *dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags, - dma_addr_t *handle) + void * + dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags, + dma_addr_t *handle) Wraps dma_pool_alloc() and also zeroes the returned memory if the allocation attempt succeeded. - void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags, - dma_addr_t *dma_handle); +:: + + void * + dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags, + dma_addr_t *dma_handle); This allocates memory from the pool; the returned memory will meet the size and alignment requirements specified at creation time. Pass @@ -122,16 +136,20 @@ blocking. Like dma_alloc_coherent(), this returns two values: an address usable by the CPU, and the DMA address usable by the pool's device. +:: - void dma_pool_free(struct dma_pool *pool, void *vaddr, - dma_addr_t addr); + void + dma_pool_free(struct dma_pool *pool, void *vaddr, + dma_addr_t addr); This puts memory back into the pool. The pool is what was passed to dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what were returned when that routine allocated the memory being freed. +:: - void dma_pool_destroy(struct dma_pool *pool); + void + dma_pool_destroy(struct dma_pool *pool); dma_pool_destroy() frees the resources of the pool. It must be called in a context which can sleep. Make sure you've freed all allocated @@ -141,32 +159,40 @@ memory back to the pool before you destroy it. Part Ic - DMA addressing limitations ------------------------------------ -int -dma_set_mask_and_coherent(struct device *dev, u64 mask) +:: + + int + dma_set_mask_and_coherent(struct device *dev, u64 mask) Checks to see if the mask is possible and updates the device streaming and coherent DMA mask parameters if it is. Returns: 0 if successful and a negative error if not. -int -dma_set_mask(struct device *dev, u64 mask) +:: + + int + dma_set_mask(struct device *dev, u64 mask) Checks to see if the mask is possible and updates the device parameters if it is. Returns: 0 if successful and a negative error if not. -int -dma_set_coherent_mask(struct device *dev, u64 mask) +:: + + int + dma_set_coherent_mask(struct device *dev, u64 mask) Checks to see if the mask is possible and updates the device parameters if it is. Returns: 0 if successful and a negative error if not. -u64 -dma_get_required_mask(struct device *dev) +:: + + u64 + dma_get_required_mask(struct device *dev) This API returns the mask that the platform requires to operate efficiently. Usually this means the returned mask @@ -182,94 +208,107 @@ call to set the mask to the value returned. Part Id - Streaming DMA mappings -------------------------------- -dma_addr_t -dma_map_single(struct device *dev, void *cpu_addr, size_t size, - enum dma_data_direction direction) +:: + + dma_addr_t + dma_map_single(struct device *dev, void *cpu_addr, size_t size, + enum dma_data_direction direction) Maps a piece of processor virtual memory so it can be accessed by the device and returns the DMA address of the memory. The direction for both APIs may be converted freely by casting. 
-However the dma_ API uses a strongly typed enumerator for its +However the dma_API uses a strongly typed enumerator for its direction: +======================= ============================================= DMA_NONE no direction (used for debugging) DMA_TO_DEVICE data is going from the memory to the device DMA_FROM_DEVICE data is coming from the device to the memory DMA_BIDIRECTIONAL direction isn't known +======================= ============================================= + +.. note:: + + Not all memory regions in a machine can be mapped by this API. + Further, contiguous kernel virtual space may not be contiguous as + physical memory. Since this API does not provide any scatter/gather + capability, it will fail if the user tries to map a non-physically + contiguous piece of memory. For this reason, memory to be mapped by + this API should be obtained from sources which guarantee it to be + physically contiguous (like kmalloc). + + Further, the DMA address of the memory must be within the + dma_mask of the device (the dma_mask is a bit mask of the + addressable region for the device, i.e., if the DMA address of + the memory ANDed with the dma_mask is still equal to the DMA + address, then the device can perform DMA to the memory). To + ensure that the memory allocated by kmalloc is within the dma_mask, + the driver may specify various platform-dependent flags to restrict + the DMA address range of the allocation (e.g., on x86, GFP_DMA + guarantees to be within the first 16MB of available DMA addresses, + as required by ISA devices). + + Note also that the above constraints on physical contiguity and + dma_mask may not apply if the platform has an IOMMU (a device which + maps an I/O DMA address to a physical memory address). However, to be + portable, device driver writers may *not* assume that such an IOMMU + exists. + +.. warning:: + + Memory coherency operates at a granularity called the cache + line width. In order for memory mapped by this API to operate + correctly, the mapped region must begin exactly on a cache line + boundary and end exactly on one (to prevent two separately mapped + regions from sharing a single cache line). Since the cache line size + may not be known at compile time, the API will not enforce this + requirement. Therefore, it is recommended that driver writers who + don't take special care to determine the cache line size at run time + only map virtual regions that begin and end on page boundaries (which + are guaranteed also to be cache line boundaries). + + DMA_TO_DEVICE synchronisation must be done after the last modification + of the memory region by the software and before it is handed off to + the device. Once this primitive is used, memory covered by this + primitive should be treated as read-only by the device. If the device + may write to it at any point, it should be DMA_BIDIRECTIONAL (see + below). + + DMA_FROM_DEVICE synchronisation must be done before the driver + accesses data that may be changed by the device. This memory should + be treated as read-only by the driver. If the driver needs to write + to it at any point, it should be DMA_BIDIRECTIONAL (see below). + + DMA_BIDIRECTIONAL requires special handling: it means that the driver + isn't sure if the memory was modified before being handed off to the + device and also isn't sure if the device will also modify it. 
Thus, + you must always sync bidirectional memory twice: once before the + memory is handed off to the device (to make sure all memory changes + are flushed from the processor) and once before the data may be + accessed after being used by the device (to make sure any processor + cache lines are updated with data that the device may have changed). + +:: -Notes: Not all memory regions in a machine can be mapped by this API. -Further, contiguous kernel virtual space may not be contiguous as -physical memory. Since this API does not provide any scatter/gather -capability, it will fail if the user tries to map a non-physically -contiguous piece of memory. For this reason, memory to be mapped by -this API should be obtained from sources which guarantee it to be -physically contiguous (like kmalloc). - -Further, the DMA address of the memory must be within the -dma_mask of the device (the dma_mask is a bit mask of the -addressable region for the device, i.e., if the DMA address of -the memory ANDed with the dma_mask is still equal to the DMA -address, then the device can perform DMA to the memory). To -ensure that the memory allocated by kmalloc is within the dma_mask, -the driver may specify various platform-dependent flags to restrict -the DMA address range of the allocation (e.g., on x86, GFP_DMA -guarantees to be within the first 16MB of available DMA addresses, -as required by ISA devices). - -Note also that the above constraints on physical contiguity and -dma_mask may not apply if the platform has an IOMMU (a device which -maps an I/O DMA address to a physical memory address). However, to be -portable, device driver writers may *not* assume that such an IOMMU -exists. - -Warnings: Memory coherency operates at a granularity called the cache -line width. In order for memory mapped by this API to operate -correctly, the mapped region must begin exactly on a cache line -boundary and end exactly on one (to prevent two separately mapped -regions from sharing a single cache line). Since the cache line size -may not be known at compile time, the API will not enforce this -requirement. Therefore, it is recommended that driver writers who -don't take special care to determine the cache line size at run time -only map virtual regions that begin and end on page boundaries (which -are guaranteed also to be cache line boundaries). - -DMA_TO_DEVICE synchronisation must be done after the last modification -of the memory region by the software and before it is handed off to -the device. Once this primitive is used, memory covered by this -primitive should be treated as read-only by the device. If the device -may write to it at any point, it should be DMA_BIDIRECTIONAL (see -below). - -DMA_FROM_DEVICE synchronisation must be done before the driver -accesses data that may be changed by the device. This memory should -be treated as read-only by the driver. If the driver needs to write -to it at any point, it should be DMA_BIDIRECTIONAL (see below). - -DMA_BIDIRECTIONAL requires special handling: it means that the driver -isn't sure if the memory was modified before being handed off to the -device and also isn't sure if the device will also modify it. Thus, -you must always sync bidirectional memory twice: once before the -memory is handed off to the device (to make sure all memory changes -are flushed from the processor) and once before the data may be -accessed after being used by the device (to make sure any processor -cache lines are updated with data that the device may have changed). 
- -void -dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size, - enum dma_data_direction direction) + void + dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size, + enum dma_data_direction direction) Unmaps the region previously mapped. All the parameters passed in must be identical to those passed in (and returned) by the mapping API. -dma_addr_t -dma_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, - enum dma_data_direction direction) -void -dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size, - enum dma_data_direction direction) +:: + + dma_addr_t + dma_map_page(struct device *dev, struct page *page, + unsigned long offset, size_t size, + enum dma_data_direction direction) + + void + dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size, + enum dma_data_direction direction) API for mapping and unmapping for pages. All the notes and warnings for the other mapping APIs apply here. Also, although the @@ -277,20 +316,24 @@ and parameters are provided to do partial page mapping, it is recommended that you never use these unless you really know what the cache width is. -dma_addr_t -dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, - enum dma_data_direction dir, unsigned long attrs) +:: -void -dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size, - enum dma_data_direction dir, unsigned long attrs) + dma_addr_t + dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, + enum dma_data_direction dir, unsigned long attrs) + + void + dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size, + enum dma_data_direction dir, unsigned long attrs) API for mapping and unmapping for MMIO resources. All the notes and warnings for the other mapping APIs apply here. The API should only be used to map device MMIO resources, mapping of RAM is not permitted. -int -dma_mapping_error(struct device *dev, dma_addr_t dma_addr) +:: + + int + dma_mapping_error(struct device *dev, dma_addr_t dma_addr) In some circumstances dma_map_single(), dma_map_page() and dma_map_resource() will fail to create a mapping. A driver can check for these errors by testing @@ -298,9 +341,11 @@ the returned DMA address with dma_mapping_error(). A non-zero return value means the mapping could not be created and the driver should take appropriate action (e.g. reduce current DMA mapping usage or delay and try again later). +:: + int dma_map_sg(struct device *dev, struct scatterlist *sg, - int nents, enum dma_data_direction direction) + int nents, enum dma_data_direction direction) Returns: the number of DMA address segments mapped (this may be shorter than passed in if some elements of the scatter/gather list are @@ -316,7 +361,7 @@ critical that the driver do something, in the case of a block driver aborting the request or even oopsing is better than doing nothing and corrupting the filesystem. -With scatterlists, you use the resulting mapping like this: +With scatterlists, you use the resulting mapping like this:: int i, count = dma_map_sg(dev, sglist, nents, direction); struct scatterlist *sg; @@ -337,9 +382,11 @@ Then you should loop count times (note: this can be less than nents times) and use sg_dma_address() and sg_dma_len() macros where you previously accessed sg->address and sg->length as shown above. 
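To make the count/nents distinction concrete, here is a minimal error-checked sketch (``mydev_write_desc()`` is a hypothetical helper that programs one hardware descriptor)::

	int i, count;
	struct scatterlist *sg;

	count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);
	if (count == 0)
		goto map_error_handling;	/* nothing was mapped */

	/* program one descriptor per *mapped* segment: loop over count */
	for_each_sg(sglist, sg, count, i)
		mydev_write_desc(i, sg_dma_address(sg), sg_dma_len(sg));

	/* ... later, once DMA has finished: unmap with nents, NOT count */
	dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE);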
+:: + void dma_unmap_sg(struct device *dev, struct scatterlist *sg, - int nents, enum dma_data_direction direction) + int nents, enum dma_data_direction direction) Unmap the previously mapped scatter/gather list. All the parameters must be the same as those and passed in to the scatter/gather mapping @@ -348,18 +395,27 @@ API. Note: must be the number you passed in, *not* the number of DMA address entries returned. -void -dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size, - enum dma_data_direction direction) -void -dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size, - enum dma_data_direction direction) -void -dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents, - enum dma_data_direction direction) -void -dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nents, - enum dma_data_direction direction) +:: + + void + dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, + size_t size, + enum dma_data_direction direction) + + void + dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, + size_t size, + enum dma_data_direction direction) + + void + dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, + int nents, + enum dma_data_direction direction) + + void + dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, + int nents, + enum dma_data_direction direction) Synchronise a single contiguous or scatter/gather mapping for the CPU and device. With the sync_sg API, all the parameters must be the same @@ -367,36 +423,41 @@ as those passed into the single mapping API. With the sync_single API, you can use dma_handle and size parameters that aren't identical to those passed into the single mapping API to do a partial sync. -Notes: You must do this: -- Before reading values that have been written by DMA from the device - (use the DMA_FROM_DEVICE direction) -- After writing values that will be written to the device using DMA - (use the DMA_TO_DEVICE) direction -- before *and* after handing memory to the device if the memory is - DMA_BIDIRECTIONAL +.. note:: + + You must do this: + + - Before reading values that have been written by DMA from the device + (use the DMA_FROM_DEVICE direction) + - After writing values that will be written to the device using DMA + (use the DMA_TO_DEVICE) direction + - before *and* after handing memory to the device if the memory is + DMA_BIDIRECTIONAL See also dma_map_single(). 
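As a sketch of these rules, consider a receive buffer that is mapped once and reused for many transfers (``dma_handle`` is assumed to come from an earlier dma_map_single() call with DMA_FROM_DEVICE; ``mydev_process_rx()`` is a hypothetical consumer)::

	/* the device signalled that it finished writing the buffer */
	dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);

	/* the CPU may now read the freshly DMA'd data */
	mydev_process_rx(buf, size);

	/* hand the buffer back to the device for the next transfer */
	dma_sync_single_for_device(dev, dma_handle, size, DMA_FROM_DEVICE);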
-dma_addr_t -dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size, - enum dma_data_direction dir, - unsigned long attrs) +:: + + dma_addr_t + dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size, + enum dma_data_direction dir, + unsigned long attrs) -void -dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction dir, - unsigned long attrs) + void + dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr, + size_t size, enum dma_data_direction dir, + unsigned long attrs) -int -dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl, - int nents, enum dma_data_direction dir, - unsigned long attrs) + int + dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl, + int nents, enum dma_data_direction dir, + unsigned long attrs) -void -dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl, - int nents, enum dma_data_direction dir, - unsigned long attrs) + void + dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl, + int nents, enum dma_data_direction dir, + unsigned long attrs) The four functions above are just like the counterpart functions without the _attrs suffixes, except that they pass an optional @@ -410,37 +471,38 @@ is identical to those of the corresponding function without the _attrs suffix. As a result dma_map_single_attrs() can generally replace dma_map_single(), etc. -As an example of the use of the *_attrs functions, here's how +As an example of the use of the ``*_attrs`` functions, here's how you could pass an attribute DMA_ATTR_FOO when mapping memory -for DMA: +for DMA:: -#include <linux/dma-mapping.h> -/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and - * documented in Documentation/DMA-attributes.txt */ -... + #include <linux/dma-mapping.h> + /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and * documented in Documentation/DMA-attributes.txt */ + ... - unsigned long attr; - attr |= DMA_ATTR_FOO; - .... - n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr); - .... + unsigned long attr; + attr |= DMA_ATTR_FOO; + .... + n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr); + .... Architectures that care about DMA_ATTR_FOO would check for its presence in their implementations of the mapping and unmapping -routines, e.g.: - -void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction dir, - unsigned long attrs) -{ - .... - if (attrs & DMA_ATTR_FOO) - /* twizzle the frobnozzle */ - .... +routines, e.g.:: + + void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr, + size_t size, enum dma_data_direction dir, + unsigned long attrs) + { + .... + if (attrs & DMA_ATTR_FOO) + /* twizzle the frobnozzle */ + .... + } -Part II - Advanced dma_ usage ------------------------------ +Part II - Advanced dma usage +---------------------------- Warning: These pieces of the DMA API should not be used in the majority of cases, since they cater for unlikely corner cases that @@ -450,9 +512,11 @@ If you don't understand how cache line coherency works between a processor and an I/O device, you should not be using this part of the API at all.
-void * -dma_alloc_noncoherent(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flag) +:: + + void * + dma_alloc_noncoherent(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t flag) Identical to dma_alloc_coherent() except that the platform will choose to return either consistent or non-consistent memory as it sees @@ -468,39 +532,49 @@ only use this API if you positively know your driver will be required to work on one of the rare (usually non-PCI) architectures that simply cannot make consistent memory. -void -dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr, - dma_addr_t dma_handle) +:: + + void + dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr, + dma_addr_t dma_handle) Free memory allocated by the nonconsistent API. All parameters must be identical to those passed in (and returned by dma_alloc_noncoherent()). -int -dma_get_cache_alignment(void) +:: + + int + dma_get_cache_alignment(void) Returns the processor cache alignment. This is the absolute minimum alignment *and* width that you must observe when either mapping memory or doing partial flushes. -Notes: This API may return a number *larger* than the actual cache -line, but it will guarantee that one or more cache lines fit exactly -into the width returned by this call. It will also always be a power -of two for easy alignment. +.. note:: -void -dma_cache_sync(struct device *dev, void *vaddr, size_t size, - enum dma_data_direction direction) + This API may return a number *larger* than the actual cache + line, but it will guarantee that one or more cache lines fit exactly + into the width returned by this call. It will also always be a power + of two for easy alignment. + +:: + + void + dma_cache_sync(struct device *dev, void *vaddr, size_t size, + enum dma_data_direction direction) Do a partial sync of memory that was allocated by dma_alloc_noncoherent(), starting at virtual address vaddr and continuing on for size. Again, you *must* observe the cache line boundaries when doing this. -int -dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr, - dma_addr_t device_addr, size_t size, int - flags) +:: + + int + dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr, + dma_addr_t device_addr, size_t size, int + flags) Declare region of memory to be handed out by dma_alloc_coherent() when it's asked for coherent memory for this device. @@ -516,21 +590,21 @@ size is the size of the area (must be multiples of PAGE_SIZE). flags can be ORed together and are: -DMA_MEMORY_MAP - request that the memory returned from -dma_alloc_coherent() be directly writable. +- DMA_MEMORY_MAP - request that the memory returned from + dma_alloc_coherent() be directly writable. -DMA_MEMORY_IO - request that the memory returned from -dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc. +- DMA_MEMORY_IO - request that the memory returned from + dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc. One or both of these flags must be present. -DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by -dma_alloc_coherent of any child devices of this one (for memory residing -on a bridge). +- DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by + dma_alloc_coherent of any child devices of this one (for memory residing + on a bridge). -DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions. 
-Do not allow dma_alloc_coherent() to fall back to system memory when -it's out of memory in the declared region. +- DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions. + Do not allow dma_alloc_coherent() to fall back to system memory when + it's out of memory in the declared region. The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO @@ -543,15 +617,17 @@ must be accessed using the correct bus functions. If your driver isn't prepared to handle this contingency, it should not specify DMA_MEMORY_IO in the input flags. -As a simplification for the platforms, only *one* such region of +As a simplification for the platforms, only **one** such region of memory may be declared per device. For reasons of efficiency, most platforms choose to track the declared region only at the granularity of a page. For smaller allocations, you should use the dma_pool() API. -void -dma_release_declared_memory(struct device *dev) +:: + + void + dma_release_declared_memory(struct device *dev) Remove the memory region previously declared from the system. This API performs *no* in-use checking for this region and will return @@ -559,9 +635,11 @@ unconditionally having removed all the required structures. It is the driver's job to ensure that no parts of this memory region are currently in use. -void * -dma_mark_declared_memory_occupied(struct device *dev, - dma_addr_t device_addr, size_t size) +:: + + void * + dma_mark_declared_memory_occupied(struct device *dev, + dma_addr_t device_addr, size_t size) This is used to occupy specific regions of the declared space (dma_alloc_coherent() will hand out the first free region it finds). @@ -592,38 +670,37 @@ option has a performance impact. Do not enable it in production kernels. If you boot the resulting kernel will contain code which does some bookkeeping about what DMA memory was allocated for which device. If this code detects an error it prints a warning message with some details into your kernel log. 
An -example warning message may look like this: - -------------[ cut here ]------------ -WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448 - check_unmap+0x203/0x490() -Hardware name: -forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong - function [device address=0x00000000640444be] [size=66 bytes] [mapped as -single] [unmapped as page] -Modules linked in: nfsd exportfs bridge stp llc r8169 -Pid: 0, comm: swapper Tainted: G W 2.6.28-dmatest-09289-g8bb99c0 #1 -Call Trace: - [] warn_slowpath+0xf2/0x130 - [] _spin_unlock+0x10/0x30 - [] usb_hcd_link_urb_to_ep+0x75/0xc0 - [] _spin_unlock_irqrestore+0x12/0x40 - [] ohci_urb_enqueue+0x19f/0x7c0 - [] queue_work+0x56/0x60 - [] enqueue_task_fair+0x20/0x50 - [] usb_hcd_submit_urb+0x379/0xbc0 - [] cpumask_next_and+0x23/0x40 - [] find_busiest_group+0x207/0x8a0 - [] _spin_lock_irqsave+0x1f/0x50 - [] check_unmap+0x203/0x490 - [] debug_dma_unmap_page+0x49/0x50 - [] nv_tx_done_optimized+0xc6/0x2c0 - [] nv_nic_irq_optimized+0x73/0x2b0 - [] handle_IRQ_event+0x34/0x70 - [] handle_edge_irq+0xc9/0x150 - [] do_IRQ+0xcb/0x1c0 - [] ret_from_intr+0x0/0xa - <4>---[ end trace f6435a98e2a38c0e ]--- +example warning message may look like this:: + + WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448 + check_unmap+0x203/0x490() + Hardware name: + forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong + function [device address=0x00000000640444be] [size=66 bytes] [mapped as + single] [unmapped as page] + Modules linked in: nfsd exportfs bridge stp llc r8169 + Pid: 0, comm: swapper Tainted: G W 2.6.28-dmatest-09289-g8bb99c0 #1 + Call Trace: + [] warn_slowpath+0xf2/0x130 + [] _spin_unlock+0x10/0x30 + [] usb_hcd_link_urb_to_ep+0x75/0xc0 + [] _spin_unlock_irqrestore+0x12/0x40 + [] ohci_urb_enqueue+0x19f/0x7c0 + [] queue_work+0x56/0x60 + [] enqueue_task_fair+0x20/0x50 + [] usb_hcd_submit_urb+0x379/0xbc0 + [] cpumask_next_and+0x23/0x40 + [] find_busiest_group+0x207/0x8a0 + [] _spin_lock_irqsave+0x1f/0x50 + [] check_unmap+0x203/0x490 + [] debug_dma_unmap_page+0x49/0x50 + [] nv_tx_done_optimized+0xc6/0x2c0 + [] nv_nic_irq_optimized+0x73/0x2b0 + [] handle_IRQ_event+0x34/0x70 + [] handle_edge_irq+0xc9/0x150 + [] do_IRQ+0xcb/0x1c0 + [] ret_from_intr+0x0/0xa + <4>---[ end trace f6435a98e2a38c0e ]--- The driver developer can find the driver and the device including a stacktrace of the DMA-API call which caused this warning. @@ -637,43 +714,42 @@ details. The debugfs directory for the DMA-API debugging code is called dma-api/. In this directory the following files can currently be found: - dma-api/all_errors This file contains a numeric value. If this +=============================== =============================================== +dma-api/all_errors This file contains a numeric value. If this value is not equal to zero the debugging code will print a warning for every error it finds into the kernel log. Be careful with this option, as it can easily flood your logs. - dma-api/disabled This read-only file contains the character 'Y' +dma-api/disabled This read-only file contains the character 'Y' if the debugging code is disabled. This can happen when it runs out of memory or if it was disabled at boot time - dma-api/error_count This file is read-only and shows the total +dma-api/error_count This file is read-only and shows the total numbers of errors found. 
- dma-api/num_errors The number in this file shows how many +dma-api/num_errors The number in this file shows how many warnings will be printed to the kernel log before it stops. This number is initialized to one at system boot and be set by writing into this file - dma-api/min_free_entries - This read-only file can be read to get the +dma-api/min_free_entries This read-only file can be read to get the minimum number of free dma_debug_entries the allocator has ever seen. If this value goes down to zero the code will disable itself because it is not longer reliable. - dma-api/num_free_entries - The current number of free dma_debug_entries +dma-api/num_free_entries The current number of free dma_debug_entries in the allocator. - dma-api/driver-filter - You can write a name of a driver into this file +dma-api/driver-filter You can write a name of a driver into this file to limit the debug output to requests from that particular driver. Write an empty string to that file to disable the filter and see all errors again. +=============================== =============================================== If you have this code compiled into your kernel it will be enabled by default. If you want to boot without the bookkeeping anyway you can provide @@ -692,7 +768,10 @@ of preallocated entries is defined per architecture. If it is too low for you boot with 'dma_debug_entries=<your_desired_number>' to overwrite the architectural default. -void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr); +:: + + void + debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr); dma-debug interface debug_dma_mapping_error() to debug drivers that fail to check DMA mapping errors on addresses returned by dma_map_single() and @@ -702,4 +781,3 @@ the driver. When driver does unmap, debug_dma_unmap() checks the flag and if this flag is still set, prints warning message that includes call trace that leads up to the unmap. This interface can be called from dma_mapping_error() routines to enable DMA mapping error check debugging. - diff --git a/Documentation/DMA-ISA-LPC.txt b/Documentation/DMA-ISA-LPC.txt index c413313987521fdb9d95522cd9573f19af6cd97b..8c2b8be6e45b9ac6104346a856e9b49b7f672cca 100644 --- a/Documentation/DMA-ISA-LPC.txt +++ b/Documentation/DMA-ISA-LPC.txt @@ -1,19 +1,20 @@ - DMA with ISA and LPC devices - ============================ +============================ +DMA with ISA and LPC devices +============================ - Pierre Ossman +:Author: Pierre Ossman This document describes how to do DMA transfers using the old ISA DMA controller. Even though ISA is more or less dead today the LPC bus uses the same DMA system so it will be around for quite some time. -Part I - Headers and dependencies ---------------------------------- +Headers and dependencies +------------------------ -To do ISA style DMA you need to include two headers: +To do ISA style DMA you need to include two headers:: -#include <linux/dma-mapping.h> -#include <asm/dma.h> + #include <linux/dma-mapping.h> + #include <asm/dma.h> The first is the generic DMA API used to convert virtual addresses to bus addresses (see Documentation/DMA-API.txt for details). @@ -23,8 +24,8 @@ this is not present on all platforms make sure you construct your Kconfig to be dependent on ISA_DMA_API (not ISA) so that nobody tries to build your driver on unsupported platforms.
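For example, a (hypothetical) Kconfig entry following this advice might look like::

	config MYDEV_LPC
		tristate "Mydev LPC peripheral support"
		depends on ISA_DMA_API
		help
		  Driver for the (hypothetical) Mydev LPC peripheral,
		  which transfers data through the legacy ISA DMA
		  controller.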
-Part II - Buffer allocation --------------------------- +Buffer allocation +----------------- The ISA DMA controller has some very strict requirements on which memory it can access so extra care must be taken when allocating @@ -42,13 +43,13 @@ requirements you pass the flag GFP_DMA to kmalloc. Unfortunately the memory available for ISA DMA is scarce so unless you allocate the memory during boot-up it's a good idea to also pass -__GFP_REPEAT and __GFP_NOWARN to make the allocator try a bit harder. +__GFP_RETRY_MAYFAIL and __GFP_NOWARN to make the allocator try a bit harder. (This scarcity also means that you should allocate the buffer as early as possible and not release it until the driver is unloaded.) -Part III - Address translation ------------------------------- +Address translation +------------------- To translate the virtual address to a bus address, use the normal DMA API. Do _not_ use isa_virt_to_phys() even though it does the same @@ -61,8 +62,8 @@ Note: x86_64 had a broken DMA API when it came to ISA but has since been fixed. If your arch has problems then fix the DMA API instead of reverting to the ISA functions. -Part IV - Channels ------------------- +Channels +-------- A normal ISA DMA controller has 8 channels. The lower four are for 8-bit transfers and the upper four are for 16-bit transfers. @@ -80,8 +81,8 @@ The ability to use 16-bit or 8-bit transfers is _not_ up to you as a driver author but depends on what the hardware supports. Check your specs or test different channels. -Part V - Transfer data ----------------------- +Transfer data +------------- Now for the good stuff, the actual DMA transfer. :) @@ -112,37 +113,37 @@ Once the DMA transfer is finished (or timed out) you should disable the channel again. You should also check get_dma_residue() to make sure that all data has been transferred. -Example: +Example:: -int flags, residue; + int flags, residue; -flags = claim_dma_lock(); + flags = claim_dma_lock(); -clear_dma_ff(); + clear_dma_ff(channel); -set_dma_mode(channel, DMA_MODE_WRITE); -set_dma_addr(channel, phys_addr); -set_dma_count(channel, num_bytes); + set_dma_mode(channel, DMA_MODE_WRITE); + set_dma_addr(channel, phys_addr); + set_dma_count(channel, num_bytes); -dma_enable(channel); + enable_dma(channel); -release_dma_lock(flags); + release_dma_lock(flags); -while (!device_done()); + while (!device_done()); -flags = claim_dma_lock(); + flags = claim_dma_lock(); -dma_disable(channel); + disable_dma(channel); -residue = dma_get_residue(channel); -if (residue != 0) - printk(KERN_ERR "driver: Incomplete DMA transfer!" - " %d bytes left!\n", residue); + residue = get_dma_residue(channel); + if (residue != 0) + printk(KERN_ERR "driver: Incomplete DMA transfer!" + " %d bytes left!\n", residue); -release_dma_lock(flags); + release_dma_lock(flags); -Part VI - Suspend/resume ------------------------- +Suspend/resume +-------------- It is the driver's responsibility to make sure that the machine isn't suspended while a DMA transfer is in progress. Also, all DMA settings diff --git a/Documentation/DMA-attributes.txt b/Documentation/DMA-attributes.txt index 44c6bc496eee6b140a779afa7b159c0a261326b9..8f8d97f65d7375a9fc87f4bad9f4a6c8033e8bba 100644 --- a/Documentation/DMA-attributes.txt +++ b/Documentation/DMA-attributes.txt @@ -1,5 +1,6 @@ - DMA attributes - ============== +============== +DMA attributes +============== This document describes the semantics of the DMA attributes that are defined in linux/dma-mapping.h.
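As orientation for the attribute descriptions that follow: attributes are passed to the ``*_attrs`` variants of the allocation and mapping functions as an unsigned long bitmask. A minimal sketch, using DMA_ATTR_WRITE_COMBINE as an arbitrary example and assuming ``dev`` and ``size`` are in scope::

	#include <linux/dma-mapping.h>

	dma_addr_t dma_handle;
	void *cpu_addr;

	cpu_addr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
				   DMA_ATTR_WRITE_COMBINE);
	if (!cpu_addr)
		return -ENOMEM;

	/* ... use the buffer ... */

	dma_free_attrs(dev, size, cpu_addr, dma_handle,
		       DMA_ATTR_WRITE_COMBINE);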
@@ -108,6 +109,7 @@ This is a hint to the DMA-mapping subsystem that it's probably not worth the time to try to allocate memory to in a way that gives better TLB efficiency (AKA it's not worth trying to build the mapping out of larger pages). You might want to specify this if: + - You know that the accesses to this memory won't thrash the TLB. You might know that the accesses are likely to be sequential or that they aren't sequential but it's unlikely you'll ping-pong @@ -121,11 +123,12 @@ pages). You might want to specify this if: the mapping to have a short lifetime then it may be worth it to optimize allocation (avoid coming up with large pages) instead of getting the slight performance win of larger pages. + Setting this hint doesn't guarantee that you won't get huge pages, but it means that we won't try quite as hard to get them. -NOTE: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM, -though ARM64 patches will likely be posted soon. +.. note:: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM, + though ARM64 patches will likely be posted soon. DMA_ATTR_NO_WARN ---------------- @@ -142,10 +145,10 @@ problem at all, depending on the implementation of the retry mechanism. So, this provides a way for drivers to avoid those error messages on calls where allocation failures are not a problem, and shouldn't bother the logs. -NOTE: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC. +.. note:: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC. DMA_ATTR_PRIVILEGED ------------------------------- +------------------- Some advanced peripherals such as remote processors and GPUs perform accesses to DMA buffers in both privileged "supervisor" and unprivileged diff --git a/Documentation/IPMI.txt b/Documentation/IPMI.txt index 6962cab997efd5cf740efd28c2edaa85141acc9c..aa77a25a09400d91bbc45cf5f20152965c5b309a 100644 --- a/Documentation/IPMI.txt +++ b/Documentation/IPMI.txt @@ -1,9 +1,8 @@ +===================== +The Linux IPMI Driver +===================== - The Linux IPMI Driver - --------------------- - Corey Minyard - <minyard@mvista.com> - <minyard@acm.org> +:Author: Corey Minyard <minyard@mvista.com> / <minyard@acm.org> The Intelligent Platform Management Interface, or IPMI, is a standard for controlling intelligent devices that monitor a system. @@ -141,7 +140,7 @@ Addressing ---------- The IPMI addressing works much like IP addresses, you have an overlay -to handle the different address types. The overlay is: +to handle the different address types. The overlay is:: struct ipmi_addr { @@ -153,7 +152,7 @@ to handle the different address types. The addr_type determines what the address really is. The driver currently understands two different types of addresses. -"System Interface" addresses are defined as: +"System Interface" addresses are defined as:: struct ipmi_system_interface_addr { @@ -166,7 +165,7 @@ straight to the BMC on the current card. The channel must be IPMI_BMC_CHANNEL. Messages that are destined to go out on the IPMB bus use the -IPMI_IPMB_ADDR_TYPE address type. The format is +IPMI_IPMB_ADDR_TYPE address type. The format is:: struct ipmi_ipmb_addr { @@ -184,16 +183,16 @@ spec. Messages -------- -Messages are defined as: +Messages are defined as:: -struct ipmi_msg -{ + struct ipmi_msg + { unsigned char netfn; unsigned char lun; unsigned char cmd; unsigned char *data; int data_len; -}; + }; The driver takes care of adding/stripping the header information.
 The data portion is just the data to be send (do NOT put addressing info
@@ -208,7 +207,7 @@ block of data, even when receiving messages.  Otherwise the driver
 will have no place to put the message.
 
 Messages coming up from the message handler in kernelland will come in
-as:
+as::
 
 struct ipmi_recv_msg
 {
@@ -246,6 +245,7 @@ and the user should not have to care what type of SMI is below them.
 
 
 Watching For Interfaces
+^^^^^^^^^^^^^^^^^^^^^^^
 
 When your code comes up, the IPMI driver may or may not have detected
 if IPMI devices exist.  So you might have to defer your setup until
@@ -256,6 +256,7 @@ and tell you when they come and go.
 
 
 Creating the User
+^^^^^^^^^^^^^^^^^
 
 To use the message handler, you must first create a user using
 ipmi_create_user.  The interface number specifies which SMI you want
@@ -272,6 +273,7 @@ closing the device automatically destroys the user.
 
 
 Messaging
+^^^^^^^^^
 
 To send a message from kernel-land, the ipmi_request_settime() call
 does pretty much all message handling.  Most of the parameter are
@@ -321,6 +323,7 @@ though, since it is tricky to manage your own buffers.
 
 
 Events and Incoming Commands
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 The driver takes care of polling for IPMI events and receiving
 commands (commands are messages that are not responses, they are
@@ -367,7 +370,7 @@ in the system.  It discovers interfaces through a host of different
 methods, depending on the system.
 
 You can specify up to four interfaces on the module load line and
-control some module parameters:
+control some module parameters::
 
 modprobe ipmi_si.o type=,.... ports=,... addrs=,...
@@ -437,7 +440,7 @@ default is one.  Setting to 0 is useful with the hotmod, but is
 obviously only useful for modules.
 
 When compiled into the kernel, the parameters can be specified on the
-kernel command line as:
+kernel command line as::
 
 ipmi_si.type=,... ipmi_si.ports=,... ipmi_si.addrs=,...
@@ -474,16 +477,22 @@ The driver supports a hot add and remove of interfaces.  This way,
 interfaces can be added or removed after the kernel is up and running.
 This is done using /sys/modules/ipmi_si/parameters/hotmod, which is a
 write-only parameter.  You write a string to this interface.  The string
-has the format:
+has the format::
+
 	[:op2[:op3...]]
-The "op"s are:
+
+The "op"s are::
+
 	add|remove,kcs|bt|smic,mem|i/o,
	[,[,[,...]]]
-You can specify more than one interface on the line.  The "opt"s are:
+
+You can specify more than one interface on the line.  The "opt"s are::
+
	rsp=
	rsi=
	rsh=
	irq=
	ipmb=
+
 and these have the same meanings as discussed above.  Note that you
 can also use this on the kernel command line for a more compact
 format for specifying an interface.  Note that when removing an
 interface,
@@ -496,7 +505,7 @@ The SMBus Driver (SSIF)
 
 The SMBus driver allows up to 4 SMBus devices to be configured in
 the system.  By default, the driver will only register with something
 it finds in DMI or ACPI tables.  You can change this
-at module load time (for a module) with:
+at module load time (for a module) with::
 
 modprobe ipmi_ssif.o addr=[,[,...]]
@@ -535,7 +544,7 @@ the smb_addr parameter unless you have DMI or ACPI data to tell the
 driver what to use.
 
 When compiled into the kernel, the addresses can be specified on the
-kernel command line as:
+kernel command line as::
 
 ipmb_ssif.addr=[,[...]] ipmi_ssif.adapter=[,[...]]
@@ -565,9 +574,9 @@ Some users need more detailed information about a device, like where
 the address came from or the raw base device for the IPMI interface.
 You can use the IPMI smi_watcher to catch the IPMI interfaces as they
 come or go, and to grab the information, you can use the function
-ipmi_get_smi_info(), which returns the following structure:
+ipmi_get_smi_info(), which returns the following structure::
 
-struct ipmi_smi_info {
+	struct ipmi_smi_info {
		enum ipmi_addr_src addr_src;
		struct device *dev;
		union {
@@ -575,7 +584,7 @@ struct ipmi_smi_info {
			void *acpi_handle;
		} acpi_info;
	} addr_info;
-};
+	};
 
 Currently special info for only for SI_ACPI address sources is returned.
 Others may be added as necessary.
@@ -590,7 +599,7 @@ Watchdog
 
 A watchdog timer is provided that implements the Linux-standard
 watchdog timer interface.  It has three module parameters that can be
-used to control it:
+used to control it::
 
 modprobe ipmi_watchdog timeout= pretimeout= action= preaction= preop= start_now=x
@@ -635,7 +644,7 @@ watchdog device is closed.  The default value of nowayout is true
 if the CONFIG_WATCHDOG_NOWAYOUT option is enabled, or false if not.
 
 When compiled into the kernel, the kernel command line is available
-for configuring the watchdog:
+for configuring the watchdog::
 
 ipmi_watchdog.timeout= ipmi_watchdog.pretimeout= ipmi_watchdog.action=
@@ -675,6 +684,7 @@ also get a bunch of OEM events holding the panic string.
 
 The field settings of the events are:
+
 * Generator ID: 0x21 (kernel)
 * EvM Rev: 0x03 (this event is formatting in IPMI 1.0 format)
 * Sensor Type: 0x20 (OS critical stop sensor)
@@ -683,18 +693,20 @@ The field settings of the events are:
 * Event Data 1: 0xa1 (Runtime stop in OEM bytes 2 and 3)
 * Event data 2: second byte of panic string
 * Event data 3: third byte of panic string
+
 See the IPMI spec for the details of the event layout.  This event is
 always sent to the local management controller.  It will handle routing
 the message to the right place
 
 Other OEM events have the following format:
-Record ID (bytes 0-1): Set by the SEL.
-Record type (byte 2): 0xf0 (OEM non-timestamped)
-byte 3: The slave address of the card saving the panic
-byte 4: A sequence number (starting at zero)
-The rest of the bytes (11 bytes) are the panic string.  If the panic string
-is longer than 11 bytes, multiple messages will be sent with increasing
-sequence numbers.
+
+* Record ID (bytes 0-1): Set by the SEL.
+* Record type (byte 2): 0xf0 (OEM non-timestamped)
+* byte 3: The slave address of the card saving the panic
+* byte 4: A sequence number (starting at zero)
+  The rest of the bytes (11 bytes) are the panic string.  If the panic string
+  is longer than 11 bytes, multiple messages will be sent with increasing
+  sequence numbers.
 
 Because you cannot send OEM events using the standard interface, this
 function will attempt to find an SEL and add the events there.  It
diff --git a/Documentation/IRQ-affinity.txt b/Documentation/IRQ-affinity.txt
index 01a675175a3674ef88a08ebb4f430dca3a4e4ec2..29da5000836a973d8272a431189fc44e2a142607 100644
--- a/Documentation/IRQ-affinity.txt
+++ b/Documentation/IRQ-affinity.txt
@@ -1,8 +1,11 @@
+================
+SMP IRQ affinity
+================
+
 ChangeLog:
-	Started by Ingo Molnar
-	Update by Max Krasnyansky
+	- Started by Ingo Molnar
+	- Update by Max Krasnyansky
 
-SMP IRQ affinity
 
 /proc/irq/IRQ#/smp_affinity and /proc/irq/IRQ#/smp_affinity_list specify
 which target CPUs are permitted for a given IRQ source.  It's a bitmask
@@ -16,50 +19,52 @@ will be set to the default mask. It can then be changed as described above.
 Default mask is 0xffffffff.
 
 Here is an example of restricting IRQ44 (eth1) to CPU0-3 then restricting
-it to CPU4-7 (this is an 8-CPU SMP box):
+it to CPU4-7 (this is an 8-CPU SMP box)::
 
-[root@moon 44]# cd /proc/irq/44
-[root@moon 44]# cat smp_affinity
-ffffffff
+	[root@moon 44]# cd /proc/irq/44
+	[root@moon 44]# cat smp_affinity
+	ffffffff
 
-[root@moon 44]# echo 0f > smp_affinity
-[root@moon 44]# cat smp_affinity
-0000000f
-[root@moon 44]# ping -f h
-PING hell (195.4.7.3): 56 data bytes
-...
---- hell ping statistics ---
-6029 packets transmitted, 6027 packets received, 0% packet loss
-round-trip min/avg/max = 0.1/0.1/0.4 ms
-[root@moon 44]# cat /proc/interrupts | grep 'CPU\|44:'
- CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7
- 44: 1068 1785 1785 1783 0 0 0 0 IO-APIC-level eth1
+	[root@moon 44]# echo 0f > smp_affinity
+	[root@moon 44]# cat smp_affinity
+	0000000f
+	[root@moon 44]# ping -f h
+	PING hell (195.4.7.3): 56 data bytes
+	...
+	--- hell ping statistics ---
+	6029 packets transmitted, 6027 packets received, 0% packet loss
+	round-trip min/avg/max = 0.1/0.1/0.4 ms
+	[root@moon 44]# cat /proc/interrupts | grep 'CPU\|44:'
+	       CPU0  CPU1  CPU2  CPU3  CPU4  CPU5  CPU6  CPU7
+	  44:  1068  1785  1785  1783     0     0     0     0  IO-APIC-level  eth1
 
 As can be seen from the line above IRQ44 was delivered only to the first four
 processors (0-3).
 Now lets restrict that IRQ to CPU(4-7).
 
-[root@moon 44]# echo f0 > smp_affinity
-[root@moon 44]# cat smp_affinity
-000000f0
-[root@moon 44]# ping -f h
-PING hell (195.4.7.3): 56 data bytes
-..
---- hell ping statistics ---
-2779 packets transmitted, 2777 packets received, 0% packet loss
-round-trip min/avg/max = 0.1/0.5/585.4 ms
-[root@moon 44]# cat /proc/interrupts | 'CPU\|44:'
- CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7
- 44: 1068 1785 1785 1783 1784 1069 1070 1069 IO-APIC-level eth1
+::
+
+	[root@moon 44]# echo f0 > smp_affinity
+	[root@moon 44]# cat smp_affinity
+	000000f0
+	[root@moon 44]# ping -f h
+	PING hell (195.4.7.3): 56 data bytes
+	..
+	--- hell ping statistics ---
+	2779 packets transmitted, 2777 packets received, 0% packet loss
+	round-trip min/avg/max = 0.1/0.5/585.4 ms
+	[root@moon 44]# cat /proc/interrupts | grep 'CPU\|44:'
+	       CPU0  CPU1  CPU2  CPU3  CPU4  CPU5  CPU6  CPU7
+	  44:  1068  1785  1785  1783  1784  1069  1070  1069  IO-APIC-level  eth1
 
 This time around IRQ44 was delivered only to the last four processors.
 i.e counters for the CPU0-3 did not change.
 
-Here is an example of limiting that same irq (44) to cpus 1024 to 1031:
+Here is an example of limiting that same irq (44) to cpus 1024 to 1031::
 
-[root@moon 44]# echo 1024-1031 > smp_affinity_list
-[root@moon 44]# cat smp_affinity_list
-1024-1031
+	[root@moon 44]# echo 1024-1031 > smp_affinity_list
+	[root@moon 44]# cat smp_affinity_list
+	1024-1031
 
 Note that to do this with a bitmask would require 32 bitmasks of zero
 to follow the pertinent one.
diff --git a/Documentation/IRQ-domain.txt b/Documentation/IRQ-domain.txt
index 1f246eb25ca546acdbb9d38cd6ad5b7628fa66cd..4a1cd7645d8568d02450dcbf3023fd84e3f25406 100644
--- a/Documentation/IRQ-domain.txt
+++ b/Documentation/IRQ-domain.txt
@@ -1,4 +1,6 @@
-irq_domain interrupt number mapping library
+===============================================
+The irq_domain interrupt number mapping library
+===============================================
 
 The current design of the Linux kernel uses a single large number
 space where each separate IRQ source is assigned a different number.
@@ -36,7 +38,9 @@ irq_domain also implements translation from an abstract irq_fwspec
 structure to hwirq numbers (Device Tree and ACPI GSI so far), and can
 be easily extended to support other IRQ topology data sources.
 
-=== irq_domain usage ===
+irq_domain usage
+================
+
 An interrupt controller driver creates and registers an irq_domain by
 calling one of the irq_domain_add_*() functions (each mapping method
 has a different allocator function, more on that later).  The function
@@ -62,15 +66,21 @@ If the driver has the Linux IRQ number or the irq_data pointer, and
 needs to know the associated hwirq number (such as in the irq_chip
 callbacks) then it can be directly obtained from irq_data->hwirq.
 
-=== Types of irq_domain mappings ===
+Types of irq_domain mappings
+============================
+
 There are several mechanisms available for reverse mapping from hwirq
 to Linux irq, and each mechanism uses a different allocation function.
 Which reverse map type should be used depends on the use case.  Each
 of the reverse map types are described below:
 
-==== Linear ====
-irq_domain_add_linear()
-irq_domain_create_linear()
+Linear
+------
+
+::
+
+	irq_domain_add_linear()
+	irq_domain_create_linear()
 
 The linear reverse map maintains a fixed size table indexed by the
 hwirq number.  When a hwirq is mapped, an irq_desc is allocated for
@@ -89,9 +99,13 @@ accepts a more general abstraction 'struct fwnode_handle'.
 
 The majority of drivers should use the linear map.
 
-==== Tree ====
-irq_domain_add_tree()
-irq_domain_create_tree()
+Tree
+----
+
+::
+
+	irq_domain_add_tree()
+	irq_domain_create_tree()
 
 The irq_domain maintains a radix tree map from hwirq numbers to Linux
 IRQs.  When an hwirq is mapped, an irq_desc is allocated and the
@@ -109,8 +123,12 @@ accepts a more general abstraction 'struct fwnode_handle'.
 
 Very few drivers should need this mapping.
 
-==== No Map ===-
-irq_domain_add_nomap()
+No Map
+------
+
+::
+
+	irq_domain_add_nomap()
 
 The No Map mapping is to be used when the hwirq number is
 programmable in the hardware.  In this case it is best to program the
@@ -121,10 +139,14 @@ Linux IRQ number into the hardware.
 
 Most drivers cannot use this mapping.
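Since the text above says the majority of drivers should use the linear map, a brief sketch of what such a registration typically looks like may help tie the pieces together. The irq_chip, the domain size of 32 and the one-cell DT translation are illustrative assumptions, not taken from the patch::

	#include <linux/irq.h>
	#include <linux/irqdomain.h>
	#include <linux/of.h>

	static struct irq_chip example_chip;	/* hypothetical irq_chip */

	/* Called when a hwirq in this domain is mapped to a Linux IRQ. */
	static int example_map(struct irq_domain *d, unsigned int virq,
			       irq_hw_number_t hwirq)
	{
		irq_set_chip_and_handler(virq, &example_chip, handle_level_irq);
		return 0;
	}

	static const struct irq_domain_ops example_ops = {
		.map = example_map,
		.xlate = irq_domain_xlate_onecell,	/* one-cell DT specifier */
	};

	static struct irq_domain *example_register(struct device_node *np)
	{
		/* 32 hwirqs -> the fixed-size table the linear map maintains */
		return irq_domain_add_linear(np, 32, &example_ops, NULL);
	}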
-==== Legacy ====
-irq_domain_add_simple()
-irq_domain_add_legacy()
-irq_domain_add_legacy_isa()
+Legacy
+------
+
+::
+
+	irq_domain_add_simple()
+	irq_domain_add_legacy()
+	irq_domain_add_legacy_isa()
 
 The Legacy mapping is a special case for drivers that already have a
 range of irq_descs allocated for the hwirqs.  It is used when the
@@ -163,14 +185,17 @@ that the driver using the simple domain call irq_create_mapping()
 before any irq_find_mapping() since the latter will actually work
 for the static IRQ assignment case.
 
-==== Hierarchy IRQ domain ====
+Hierarchy IRQ domain
+--------------------
+
 On some architectures, there may be multiple interrupt controllers
 involved in delivering an interrupt from the device to the target CPU.
-Let's look at a typical interrupt delivering path on x86 platforms:
+Let's look at a typical interrupt delivery path on x86 platforms::
 
-Device --> IOAPIC -> Interrupt remapping Controller -> Local APIC -> CPU
+	Device --> IOAPIC -> Interrupt remapping Controller -> Local APIC -> CPU
 
 There are three interrupt controllers involved:
+
 1) IOAPIC controller
 2) Interrupt remapping controller
 3) Local APIC controller
@@ -180,7 +205,8 @@ hardware architecture, an irq_domain data structure is built for each
 interrupt controller and those irq_domains are organized into hierarchy.
 When building irq_domain hierarchy, the irq_domain near to the device is
 child and the irq_domain near to CPU is parent. So a hierarchy structure
-as below will be built for the example above.
+as below will be built for the example above::
+
 	CPU Vector irq_domain (root irq_domain to manage CPU vectors)
 		^
 		|
 	Interrupt Remapping irq_domain (manage irq_remapping entries)
 		^
 		|
@@ -190,6 +216,7 @@ as below will be built for the example above.
 	IOAPIC irq_domain (manage IOAPIC delivery entries/pins)
 
 There are four major interfaces to use hierarchy irq_domain:
+
 1) irq_domain_alloc_irqs(): allocate IRQ descriptors and interrupt
    controller related resources to deliver these interrupts.
 2) irq_domain_free_irqs(): free IRQ descriptors and interrupt controller
@@ -199,7 +226,8 @@ There are four major interfaces to use hierarchy irq_domain:
 4) irq_domain_deactivate_irq(): deactivate interrupt controller hardware
    to stop delivering the interrupt.
 
-Following changes are needed to support hierarchy irq_domain.
+The following changes are needed to support hierarchy irq_domain:
+
 1) a new field 'parent' is added to struct irq_domain; it's used to
    maintain irq_domain hierarchy information.
 2) a new field 'parent_data' is added to struct irq_data; it's used to
@@ -223,6 +251,7 @@ software architecture.
 
 For an interrupt controller driver to support hierarchy irq_domain, it
 needs to:
+
 1) Implement irq_domain_ops.alloc and irq_domain_ops.free
 2) Optionally implement irq_domain_ops.activate and
    irq_domain_ops.deactivate.
diff --git a/Documentation/IRQ.txt b/Documentation/IRQ.txt
index 1011e717502162c63a04245169ac05d8f96a895a..4273806a606bb66c14d639fb079a2ba332ad001d 100644
--- a/Documentation/IRQ.txt
+++ b/Documentation/IRQ.txt
@@ -1,4 +1,6 @@
+===============
 What is an IRQ?
+===============
 
 An IRQ is an interrupt request from a device.  Currently they can come
 in over a pin, or over a packet.
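Tying back to the hierarchy irq_domain requirements listed above (point 1: implement irq_domain_ops.alloc and irq_domain_ops.free), the sketch below shows the usual shape of an .alloc callback: let the parent domain allocate first, then bind this level's chip. The chip and the hwirq encoding carried in "arg" are assumptions for illustration, not the patch's own code::

	#include <linux/irq.h>
	#include <linux/irqdomain.h>

	static struct irq_chip example_child_chip;	/* hypothetical chip */

	static int example_alloc(struct irq_domain *domain, unsigned int virq,
				 unsigned int nr_irqs, void *arg)
	{
		/* Assumed encoding: "arg" points at the base hwirq number. */
		irq_hw_number_t hwirq = *(irq_hw_number_t *)arg;
		unsigned int i;
		int ret;

		/* The parent domain (closer to the CPU) allocates its
		 * resources first. */
		ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, arg);
		if (ret)
			return ret;

		/* Then bind this level's chip to each descriptor. */
		for (i = 0; i < nr_irqs; i++)
			irq_domain_set_hwirq_and_chip(domain, virq + i,
						      hwirq + i,
						      &example_child_chip,
						      NULL);
		return 0;
	}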
diff --git a/Documentation/Intel-IOMMU.txt b/Documentation/Intel-IOMMU.txt
index 49585b6e1ea24c951ea58090b762740aa36b066d..9dae6b47e398b09778b48f1bce6bcf2b1c310a8b 100644
--- a/Documentation/Intel-IOMMU.txt
+++ b/Documentation/Intel-IOMMU.txt
@@ -1,3 +1,4 @@
+===================
 Linux IOMMU Support
 ===================
 
@@ -9,11 +10,11 @@ This guide gives a quick cheat sheet for some basic understanding.
 
 Some Keywords
 
-DMAR - DMA remapping
-DRHD - DMA Remapping Hardware Unit Definition
-RMRR - Reserved memory Region Reporting Structure
-ZLR - Zero length reads from PCI devices
-IOVA - IO Virtual address.
+- DMAR - DMA remapping
+- DRHD - DMA Remapping Hardware Unit Definition
+- RMRR - Reserved memory Region Reporting Structure
+- ZLR - Zero length reads from PCI devices
+- IOVA - IO Virtual address.
 
 Basic stuff
 -----------
@@ -33,7 +34,7 @@ devices that need to access these regions. OS is expected to setup
 unity mappings for these regions for these devices to access these regions.
 
 How is IOVA generated?
---------------------
+----------------------
 
 Well behaved drivers call pci_map_*() calls before sending command to device
 that needs to perform DMA.  Once DMA is completed and mapping is no longer
@@ -82,14 +83,14 @@ in ACPI.
 ACPI: DMAR (v001 A M I  OEMDMAR 0x00000001 MSFT 0x00000097) @ 0x000000007f5b5ef0
 
 When DMAR is being processed and initialized by ACPI, prints DMAR locations
-and any RMRR's processed.
+and any RMRR's processed::
 
-ACPI DMAR:Host address width 36
-ACPI DMAR:DRHD (flags: 0x00000000)base: 0x00000000fed90000
-ACPI DMAR:DRHD (flags: 0x00000000)base: 0x00000000fed91000
-ACPI DMAR:DRHD (flags: 0x00000001)base: 0x00000000fed93000
-ACPI DMAR:RMRR base: 0x00000000000ed000 end: 0x00000000000effff
-ACPI DMAR:RMRR base: 0x000000007f600000 end: 0x000000007fffffff
+	ACPI DMAR:Host address width 36
+	ACPI DMAR:DRHD (flags: 0x00000000)base: 0x00000000fed90000
+	ACPI DMAR:DRHD (flags: 0x00000000)base: 0x00000000fed91000
+	ACPI DMAR:DRHD (flags: 0x00000001)base: 0x00000000fed93000
+	ACPI DMAR:RMRR base: 0x00000000000ed000 end: 0x00000000000effff
+	ACPI DMAR:RMRR base: 0x000000007f600000 end: 0x000000007fffffff
 
 When DMAR is enabled for use, you will notice..
 
@@ -98,10 +99,12 @@ PCI-DMA: Using DMAR IOMMU
 Fault reporting
 ---------------
 
-DMAR:[DMA Write] Request device [00:02.0] fault addr 6df084000
-DMAR:[fault reason 05] PTE Write access is not set
-DMAR:[DMA Write] Request device [00:02.0] fault addr 6df084000
-DMAR:[fault reason 05] PTE Write access is not set
+::
+
+	DMAR:[DMA Write] Request device [00:02.0] fault addr 6df084000
+	DMAR:[fault reason 05] PTE Write access is not set
+	DMAR:[DMA Write] Request device [00:02.0] fault addr 6df084000
+	DMAR:[fault reason 05] PTE Write access is not set
 
 TBD
 ----
diff --git a/Documentation/SAK.txt b/Documentation/SAK.txt
index 74be14679ed891820cd9c3a7393007f8dd21d07d..260e1d3687bdcdc374df2fa9ea83f1af91176045 100644
--- a/Documentation/SAK.txt
+++ b/Documentation/SAK.txt
@@ -1,5 +1,9 @@
-Linux 2.4.2 Secure Attention Key (SAK) handling
-18 March 2001, Andrew Morton
+=========================================
+Linux Secure Attention Key (SAK) handling
+=========================================
+
+:Date: 18 March 2001
+:Author: Andrew Morton
 
 An operating system's Secure Attention Key is a security tool which is
 provided as protection against trojan password capturing programs.  It
@@ -13,7 +17,7 @@ this sequence.  It is only available if the kernel was compiled with
 sysrq support.
 The proper way of generating a SAK is to define the key sequence using
-`loadkeys'.  This will work whether or not sysrq support is compiled
+``loadkeys``.  This will work whether or not sysrq support is compiled
 into the kernel.
 
 SAK works correctly when the keyboard is in raw mode.  This means that
@@ -25,64 +29,63 @@ What key sequence should you use? Well, CTRL-ALT-DEL is used to reboot
 the machine.  CTRL-ALT-BACKSPACE is magical to the X server.  We'll
 choose CTRL-ALT-PAUSE.
 
-In your rc.sysinit (or rc.local) file, add the command
+In your rc.sysinit (or rc.local) file, add the command::
 
 	echo "control alt keycode 101 = SAK" | /bin/loadkeys
 
 And that's it!  Only the superuser may reprogram the SAK key.
 
-NOTES
-=====
+.. note::
 
-1: Linux SAK is said to be not a "true SAK" as is required by
-   systems which implement C2 level security.  This author does not
-   know why.
+  1. Linux SAK is said to be not a "true SAK" as is required by
+     systems which implement C2 level security.  This author does not
+     know why.
 
-2: On the PC keyboard, SAK kills all applications which have
-   /dev/console opened.
+  2. On the PC keyboard, SAK kills all applications which have
+     /dev/console opened.
 
-   Unfortunately this includes a number of things which you don't
-   actually want killed.  This is because these applications are
-   incorrectly holding /dev/console open.  Be sure to complain to your
-   Linux distributor about this!
+     Unfortunately this includes a number of things which you don't
+     actually want killed.  This is because these applications are
+     incorrectly holding /dev/console open.  Be sure to complain to your
+     Linux distributor about this!
 
-   You can identify processes which will be killed by SAK with the
-   command
+     You can identify processes which will be killed by SAK with the
+     command::
 
	# ls -l /proc/[0-9]*/fd/* | grep console
	l-wx------    1 root     root           64 Mar 18 00:46 /proc/579/fd/0 -> /dev/console
 
-   Then:
+     Then::
 
	# ps aux|grep 579
	root       579  0.0  0.1  1088  436 ?        S    00:43   0:00 gpm -t ps/2
 
-   So `gpm' will be killed by SAK.  This is a bug in gpm.  It should
-   be closing standard input.  You can work around this by finding the
-   initscript which launches gpm and changing it thusly:
+     So ``gpm`` will be killed by SAK.  This is a bug in gpm.  It should
+     be closing standard input.  You can work around this by finding the
+     initscript which launches gpm and changing it thusly:
 
-   Old:
+     Old::
 
	daemon gpm
 
-   New:
+     New::
 
	daemon gpm < /dev/null
 
-   Vixie cron also seems to have this problem, and needs the same treatment.
+     Vixie cron also seems to have this problem, and needs the same treatment.
 
-   Also, one prominent Linux distribution has the following three
-   lines in its rc.sysinit and rc scripts:
+     Also, one prominent Linux distribution has the following three
+     lines in its rc.sysinit and rc scripts::
 
	exec 3<&0
	exec 4>&1
	exec 5>&2
 
-   These commands cause *all* daemons which are launched by the
-   initscripts to have file descriptors 3, 4 and 5 attached to
-   /dev/console.  So SAK kills them all.  A workaround is to simply
-   delete these lines, but this may cause system management
-   applications to malfunction - test everything well.
+     These commands cause **all** daemons which are launched by the
+     initscripts to have file descriptors 3, 4 and 5 attached to
+     /dev/console.  So SAK kills them all.  A workaround is to simply
+     delete these lines, but this may cause system management
+     applications to malfunction - test everything well.
diff --git a/Documentation/SM501.txt b/Documentation/SM501.txt
index 561826f82093574bc61d887cae0436935d317c5e..882507453ba4ea60fd022d663aefcdc9de40bf35 100644
--- a/Documentation/SM501.txt
+++ b/Documentation/SM501.txt
@@ -1,7 +1,10 @@
-			SM501 Driver
-			============
+.. include:: 
 
-Copyright 2006, 2007 Simtec Electronics
+============
+SM501 Driver
+============
+
+:Copyright: |copy| 2006, 2007 Simtec Electronics
 
 The Silicon Motion SM501 multimedia companion chip is a multifunction device
 which may provide numerous interfaces including USB host controller USB gadget,
diff --git a/Documentation/bcache.txt b/Documentation/bcache.txt
index a9259b562d5c8b188576f22cfd9ed532ad522d79..c0ce64d75bbf7cfe02fa0a99af136f3ae02ba60c 100644
--- a/Documentation/bcache.txt
+++ b/Documentation/bcache.txt
@@ -1,10 +1,15 @@
+============================
+A block layer cache (bcache)
+============================
+
 Say you've got a big slow raid 6, and an ssd or three.  Wouldn't it be
 nice if you could use them as cache... Hence bcache.
 
 Wiki and git repositories are at:
-  http://bcache.evilpiepirate.org
-  http://evilpiepirate.org/git/linux-bcache.git
-  http://evilpiepirate.org/git/bcache-tools.git
+
+  - http://bcache.evilpiepirate.org
+  - http://evilpiepirate.org/git/linux-bcache.git
+  - http://evilpiepirate.org/git/bcache-tools.git
 
 It's designed around the performance characteristics of SSDs - it only allocates
 in erase block sized buckets, and it uses a hybrid btree/log to track cached
@@ -37,17 +42,19 @@ to be flushed.
 
 Getting started:
 You'll need make-bcache from the bcache-tools repository.  Both the cache device
-and backing device must be formatted before use.
+and backing device must be formatted before use::
+
   make-bcache -B /dev/sdb
   make-bcache -C /dev/sdc
 
 make-bcache has the ability to format multiple devices at the same time - if
 you format your backing devices and cache device at the same time, you won't
-have to manually attach:
+have to manually attach::
+
   make-bcache -B /dev/sda /dev/sdb -C /dev/sdc
 
 bcache-tools now ships udev rules, and bcache devices are known to the kernel
-immediately.  Without udev, you can manually register devices like this:
+immediately.  Without udev, you can manually register devices like this::
 
   echo /dev/sdb > /sys/fs/bcache/register
   echo /dev/sdc > /sys/fs/bcache/register
@@ -60,16 +67,16 @@ slow devices as bcache backing devices without a cache, and you can choose to
 add a caching device later.
 See 'ATTACHING' section below.
 
-The devices show up as:
+The devices show up as::
 
   /dev/bcache
 
-As well as (with udev):
+As well as (with udev)::
 
   /dev/bcache/by-uuid/
   /dev/bcache/by-label/