Documentation/kernel-parameters.txt  +6 −0

@@ -746,6 +746,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 	cpuidle.off=1	[CPU_IDLE]
 			disable the cpuidle sub-system
 
+	cpu_init_udelay=N
+			[X86] Delay for N microsec between assert and de-assert
+			of APIC INIT to start processors.  This delay occurs
+			on every CPU online, such as boot, and resume from
+			suspend.  Default: 10000
+
 	cpcihp_generic=	[HW,PCI] Generic port I/O CompactPCI driver
 			Format:
 			<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]

Documentation/x86/mtrr.txt  +15 −3

 MTRR (Memory Type Range Register) control
-3 Jun 1999
-Richard Gooch <rgooch@atnf.csiro.au>
 
+Richard Gooch <rgooch@atnf.csiro.au> - 3 Jun 1999
+Luis R. Rodriguez <mcgrof@do-not-panic.com> - April 9, 2015
+
+===============================================================================
+Phasing out MTRR use
+
+MTRR use is replaced on modern x86 hardware with PAT. Over time the only type
+of effective MTRR that is expected to be supported will be for write-combining.
+As MTRR use is phased out, device drivers should use arch_phys_wc_add(), which
+makes an MTRR effective on non-PAT systems while being a no-op on PAT enabled
+systems.
+
+For details refer to Documentation/x86/pat.txt.
+
+===============================================================================
 
 On Intel P6 family processors (Pentium Pro, Pentium II and later) the
 Memory Type Range Registers (MTRRs) may be used to control

Documentation/x86/pat.txt  +34 −1

@@ -34,6 +34,8 @@
 ioremap                |    --    |    UC-    |       UC-        |
                        |          |           |                  |
 ioremap_cache          |    --    |    WB     |       WB         |
                        |          |           |                  |
+ioremap_uc             |    --    |    UC     |       UC         |
+                       |          |           |                  |
 ioremap_nocache        |    --    |    UC-    |       UC-        |
                        |          |           |                  |
 ioremap_wc             |    --    |    --     |       WC         |

@@ -102,7 +104,38 @@
 wants to export a RAM region, it has to do set_memory_uc() or set_memory_wc()
 as step 0 above and also track the usage of those pages and use set_memory_wb()
 before the page is freed to free pool.
 
+MTRR effects on PAT / non-PAT systems
+-------------------------------------
+
+The following table provides the effects of using write-combining MTRRs when
+using ioremap*() calls on x86 for both non-PAT and PAT systems. Ideally
+mtrr_add() usage will be phased out in favor of arch_phys_wc_add(), which is
+a no-op on PAT enabled systems. The region over which arch_phys_wc_add() is
+made should already have been ioremapped with WC attributes or PAT entries;
+this can be done by using ioremap_wc() / set_memory_wc(). Devices which
+combine areas of IO memory desired to remain uncacheable with areas where
+write-combining is desirable should consider use of ioremap_uc() followed by
+set_memory_wc() to white-list effective write-combined areas. Such use is
+nevertheless discouraged as the effective memory type is considered
+implementation defined; still, this strategy can be used as a last resort on
+devices with size-constrained regions where MTRR write-combining would
+otherwise not be effective.
+
+----------------------------------------------------------------------
+MTRR Non-PAT   PAT    Linux ioremap value        Effective memory type
+----------------------------------------------------------------------
+                                                    Non-PAT |  PAT
+     PAT
+     |PCD
+     ||PWT
+     |||
+WC   000      WB      _PAGE_CACHE_MODE_WB            WC   |   WC
+WC   001      WC      _PAGE_CACHE_MODE_WC            WC*  |   WC
+WC   010      UC-     _PAGE_CACHE_MODE_UC_MINUS      WC*  |   UC
+WC   011      UC      _PAGE_CACHE_MODE_UC            UC   |   UC
+----------------------------------------------------------------------
+
+(*) denotes implementation defined and is discouraged
+
 Notes:

arch/ia64/include/asm/irq_remapping.h  +0 −2

 #ifndef __IA64_INTR_REMAPPING_H
 #define __IA64_INTR_REMAPPING_H
 #define irq_remapping_enabled 0
-#define dmar_alloc_hwirq	create_irq
-#define dmar_free_hwirq		destroy_irq
 #endif

arch/ia64/kernel/msi_ia64.c  +19 −11

@@ -165,7 +165,7 @@ static struct irq_chip dmar_msi_type = {
 	.irq_retrigger = ia64_msi_retrigger_irq,
 };
 
-static int
+static void
 msi_compose_msg(struct pci_dev *pdev, unsigned int irq, struct msi_msg *msg)
 {
 	struct irq_cfg *cfg = irq_cfg + irq;
@@ -186,21 +186,29 @@ msi_compose_msg(struct pci_dev *pdev, unsigned int irq, struct msi_msg *msg)
 		MSI_DATA_LEVEL_ASSERT |
 		MSI_DATA_DELIVERY_FIXED |
 		MSI_DATA_VECTOR(cfg->vector);
-	return 0;
 }
 
-int arch_setup_dmar_msi(unsigned int irq)
+int dmar_alloc_hwirq(int id, int node, void *arg)
 {
-	int ret;
+	int irq;
 	struct msi_msg msg;
 
-	ret = msi_compose_msg(NULL, irq, &msg);
-	if (ret < 0)
-		return ret;
-	dmar_msi_write(irq, &msg);
-	irq_set_chip_and_handler_name(irq, &dmar_msi_type, handle_edge_irq,
-				      "edge");
-	return 0;
+	irq = create_irq();
+	if (irq > 0) {
+		irq_set_handler_data(irq, arg);
+		irq_set_chip_and_handler_name(irq, &dmar_msi_type,
+					      handle_edge_irq, "edge");
+		msi_compose_msg(NULL, irq, &msg);
+		dmar_msi_write(irq, &msg);
+	}
+
+	return irq;
+}
+
+void dmar_free_hwirq(int irq)
+{
+	irq_set_handler_data(irq, NULL);
+	destroy_irq(irq);
+}
 #endif /* CONFIG_INTEL_IOMMU */
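The converted msi_ia64.c allocation path can be sketched in userspace C. create_irq(), destroy_irq(), irq_set_handler_data() and the backing tables below are hypothetical stand-ins for the kernel primitives, which are not available outside the kernel; only the control flow (configure the irq only when allocation succeeds, return the positive irq number, tear down in reverse) mirrors the diff above.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_IRQS 16

/* Stand-in state for the kernel's irq descriptors. */
static int irq_in_use[MAX_IRQS];
static void *irq_handler_data[MAX_IRQS];

/* Stub: hand out the first free vector; positive means success,
 * mirroring create_irq()'s return convention. */
static int create_irq(void)
{
	for (int i = 1; i < MAX_IRQS; i++) {
		if (!irq_in_use[i]) {
			irq_in_use[i] = 1;
			return i;
		}
	}
	return -1;
}

static void destroy_irq(int irq)
{
	irq_in_use[irq] = 0;
}

static void irq_set_handler_data(int irq, void *data)
{
	irq_handler_data[irq] = data;
}

/* New flow: allocate first, configure only on success, and return the
 * irq number (or the negative failure value) to the caller. */
int dmar_alloc_hwirq(int id, int node, void *arg)
{
	int irq = create_irq();

	if (irq > 0) {
		irq_set_handler_data(irq, arg);
		/* in the kernel, irq_set_chip_and_handler_name(),
		 * msi_compose_msg() and dmar_msi_write() happen here */
	}
	return irq;
}

/* Teardown clears the handler data before releasing the vector. */
void dmar_free_hwirq(int irq)
{
	irq_set_handler_data(irq, NULL);
	destroy_irq(irq);
}
```

Note how this removes the dead error path of the old arch_setup_dmar_msi(): msi_compose_msg() could never fail, so its int return (and the caller's check) served no purpose.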
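The pat.txt table above can also be read as a pure lookup: given a write-combining MTRR over the region, the effective memory type depends only on the page cache mode and on whether PAT is enabled. The sketch below encodes that table in userspace C; the enum names mirror the kernel's _PAGE_CACHE_MODE_* constants, but effective_type() and its string results are purely illustrative, not kernel code.

```c
#include <assert.h>
#include <string.h>

/* Page cache modes as selected by the PAT/PCD/PWT page-table bits. */
enum page_cache_mode {
	_PAGE_CACHE_MODE_WB,		/* PAT/PCD/PWT = 000 */
	_PAGE_CACHE_MODE_WC,		/* PAT/PCD/PWT = 001 */
	_PAGE_CACHE_MODE_UC_MINUS,	/* PAT/PCD/PWT = 010 */
	_PAGE_CACHE_MODE_UC,		/* PAT/PCD/PWT = 011 */
};

/* Effective memory type under a WC MTRR, per the pat.txt table.
 * "WC*" marks the non-PAT rows that are implementation defined
 * and therefore discouraged. */
static const char *effective_type(enum page_cache_mode mode, int pat_enabled)
{
	switch (mode) {
	case _PAGE_CACHE_MODE_WB:       return "WC";
	case _PAGE_CACHE_MODE_WC:       return pat_enabled ? "WC" : "WC*";
	case _PAGE_CACHE_MODE_UC_MINUS: return pat_enabled ? "UC" : "WC*";
	case _PAGE_CACHE_MODE_UC:       return "UC";
	}
	return "??";
}
```

The UC- row is the interesting one: on PAT systems the MTRR loses (effective UC, so ioremap_nocache() stays uncacheable), while on non-PAT systems the WC MTRR can punch through, which is exactly why the text recommends ioremap_uc() for regions that must never combine.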