- Jan 20, 2015
-
-
Oleksij Rempel authored
For now this is a very basic SoC description with the most important IPs needed to make this device work.

Signed-off-by: Oleksij Rempel <linux@rempel-privat.de>
Signed-off-by: Olof Johansson <olof@lixom.net>
-
- Dec 28, 2014
-
-
Paolo Bonzini authored
Since most virtual machines raise this message once, it is a bit annoying. Make it KERN_DEBUG severity.

Cc: stable@vger.kernel.org
Fixes: 7a2e8aaf
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Tiejun Chen authored
Commit 34a1cd60 ("x86: vmx: move some vmx setting from vmx_init() to hardware_setup()") refactored some code specific to the VMX hardware setup into hardware_setup(), but some of the MSR writes depend on conditions established earlier, such as enable_apicv, enable_ept and so on.

Reported-by: Jamie Heilman <jamie@audible.transient.net>
Tested-by: Jamie Heilman <jamie@audible.transient.net>
Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- Dec 27, 2014
-
-
John David Anglin authored
The __ldcw macro has a problem when its argument needs to be reloaded from memory. The output memory operand and the input register operand both need to be reloaded using a register in class R1_REGS when generating 64-bit code. This fails because there's only a single register in the class. Instead, use a memory clobber. This also makes the __ldcw macro a compiler memory barrier.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: <stable@vger.kernel.org> [3.13+]
Signed-off-by: Helge Deller <deller@gmx.de>
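A minimal sketch of the resulting macro, assuming the parisc __LDCW opcode string is defined elsewhere; the lone "=r" output plus the "memory" clobber replaces the problematic memory output operand:

    #define __ldcw(a) ({                                            \
            unsigned __ret;                                         \
            /* load-and-clear word; "memory" makes this a barrier */\
            __asm__ __volatile__(__LDCW " 0(%1),%0"                 \
                    : "=r" (__ret) : "r" (a) : "memory");           \
            __ret;                                                  \
    })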
-
- Dec 24, 2014
-
-
Jungseok Lee authored
This patch adds a pgd_page definition in order to keep supporting the HAVE_GENERIC_RCU_GUP configuration. In addition, it changes the pud_page expression to align with pmd_page for readability.

The introduction of pgd_page resolves the following build breakage under the 4KB page + 4-level page table combination:

    mm/gup.c: In function 'gup_huge_pgd':
    mm/gup.c:889:2: error: implicit declaration of function 'pgd_page' [-Werror=implicit-function-declaration]
      head = pgd_page(orig);
      ^
    mm/gup.c:889:7: warning: assignment makes pointer from integer without a cast
      head = pgd_page(orig);

Cc: Will Deacon <will.deacon@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
[catalin.marinas@arm.com: remove duplicate pmd_page definition]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
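A plausible shape for the added helper, mirroring the existing pmd_page definition (treat this as a sketch of the idea, not the exact hunk):

    #define pgd_page(pgd)  pfn_to_page(__phys_to_pfn(pgd_val(pgd) & PHYS_MASK))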
-
Will Deacon authored
The usual defconfig tweaks, this time:

- FHANDLE and AUTOFS4_FS to keep systemd happy
- PID_NS, QUOTA and KEYS to keep LTP happy
- Disable DEBUG_PREEMPT, as this *really* hurts performance

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Lorenzo Pieralisi authored
On arm64 the TTBR0_EL1 register is set either to the reserved TTBR0 page tables on boot or to the active_mm mappings belonging to user space processes; it must never be set to the swapper_pg_dir page table mappings.

When a CPU is booted its active_mm is set to init_mm even though its TTBR0_EL1 points at the reserved TTBR0 page mappings. This implies that when __cpu_suspend is triggered the active_mm can point at init_mm even if the current TTBR0_EL1 register contains the reserved TTBR0_EL1 mappings. Therefore, the mm save and restore executed in __cpu_suspend might turn out to be erroneous: if current->active_mm corresponds to init_mm, on resume from low power it ends up restoring into TTBR0_EL1 the init_mm mappings, which are global and can cause speculation of TLB entries that end up being propagated to user space.

This patch fixes the issue by checking the active_mm pointer before restoring the TTBR0 mappings. If the current active_mm == &init_mm, the code sets TTBR0_EL1 to the reserved TTBR0 mapping instead of switching back to the active_mm, which is the expected behaviour corresponding to the TTBR0_EL1 settings when __cpu_suspend was entered.

Fixes: 95322526 ("arm64: kernel: cpu_{suspend/resume} implementation")
Cc: <stable@vger.kernel.org> # 3.14+: 18ab7db6
Cc: <stable@vger.kernel.org> # 3.14+: 714f5992
Cc: <stable@vger.kernel.org> # 3.14+: c3684fbb
Cc: <stable@vger.kernel.org> # 3.14+
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
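The gist of the fix in the resume path, sketched with the surrounding context omitted:

    struct mm_struct *mm = current->active_mm;

    if (mm == &init_mm)
            cpu_set_reserved_ttbr0();       /* never load init_mm globals into TTBR0 */
    else
            cpu_switch_mm(mm->pgd, mm);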
-
- Dec 22, 2014
-
-
Catalin Marinas authored
Commit a3a60f81 (dma-mapping: replace set_arch_dma_coherent_ops with arch_setup_dma_ops) changed the of_dma_configure() arch dma_ops callback to arch_setup_dma_ops, but only the arch/arm code was updated. Subsequent commit 97890ba9 (dma-mapping: detect and configure IOMMU in of_dma_configure) changed the arch_setup_dma_ops() prototype further to handle iommu. This patch makes the corresponding arm64 changes.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will.deacon@arm.com>
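The arm64 callback then has to match the new prototype; roughly, as a sketch assuming the pre-IOMMU arm64 state where only coherency needs recording:

    void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
                            struct iommu_ops *iommu, bool coherent)
    {
            /* dma_base, size and iommu are unused until IOMMU support lands */
            dev->archdata.dma_coherent = coherent;
    }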
-
- Dec 20, 2014
-
-
Jesper Nilsson authored
There are no users of this symbol left.

Reported-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Jesper Nilsson authored
Move the pinmux alloc/dealloc code into functions that don't take the spinlock, so it can be used from code that already holds the spinlock. CRISv32 has no working SMP, so spinlocks become a NOP and the deadlock was never seen.

Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
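The usual pattern for such a split, sketched with hypothetical names (the real functions live in the CRISv32 pinmux driver):

    /* __crisv32_pinmux_alloc() assumes pinmux_lock is already held */
    static int __crisv32_pinmux_alloc(int port, int first_pin, int last_pin,
                                      enum pin_mode mode);

    int crisv32_pinmux_alloc(int port, int first_pin, int last_pin,
                             enum pin_mode mode)
    {
            unsigned long flags;
            int ret;

            spin_lock_irqsave(&pinmux_lock, flags);
            ret = __crisv32_pinmux_alloc(port, first_pin, last_pin, mode);
            spin_unlock_irqrestore(&pinmux_lock, flags);

            return ret;
    }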
-
Jesper Nilsson authored
Fixes a compile error on allmodconfig.

Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Jesper Nilsson authored
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Jesper Nilsson authored
Also, print kernel version on oops.

Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Jesper Nilsson authored
Drop i2c_init from this header; it was declared non-static here but static in the C file.

Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Jesper Nilsson authored
Make it possible to load the driver as a module, and try to handle locking better.

Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Jesper Nilsson authored
- Add free_initrd_mem, as found by Guenter Roeck <linux@roeck-us.net>
- Add free_init_pages
- Export the empty_zero_page symbol

Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Jesper Nilsson authored
Don't enter watchdog handling if we're already in watchdog handling. Also some minor formatting tweaks.

Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Jesper Nilsson authored
strcmp was lost when all the other string functions were removed, but we still have an optimized version of it for CRISv32, so any driver built as a module would not have access to this symbol. In a similar manner, we had optimized versions of csum_partial_copy_from_user and __do_clear_user but no exported symbols for them, breaking a bunch of other drivers when built as modules.

At the same time, move EXPORT_SYMBOL(__copy_user) and EXPORT_SYMBOL(__copy_user_zeroing) to the C files so they are located together with the function definitions.

Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
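The fix then boils down to exporting the optimized helpers next to their definitions, along these lines (exact file placement per the commit):

    EXPORT_SYMBOL(strcmp);
    EXPORT_SYMBOL(csum_partial_copy_from_user);
    EXPORT_SYMBOL(__do_clear_user);
    EXPORT_SYMBOL(__copy_user);
    EXPORT_SYMBOL(__copy_user_zeroing);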
-
Jesper Nilsson authored
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Sam Ravnborg authored
Fix headers_install by adjusting the path to the arch files, and delete the unused Kbuild file. As a nice side-effect, drop the special handling of cris in the headers.sh script.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Jesper Nilsson authored
Fixes the following compile error:

    arch/cris/arch-v32/kernel/time.c: In function 'reset_watchdog':
    arch/cris/arch-v32/kernel/time.c:121:2: error: implicit declaration of function 'global_page_state'

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Jesper Nilsson authored
File was already deleted.

Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Rafael J. Wysocki authored
Having switched over all of the users of CONFIG_PM_RUNTIME to use CONFIG_PM directly, turn the latter into a user-selectable option and drop the former entirely from the tree.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
-
- Dec 19, 2014
-
-
Alexander Graf authored
Commit 69111bac ("powerpc: Replace __get_cpu_var uses") introduced compile breakage on the e500 target: the automatically created C syntax was invalid. Fix up the breakage and make the code compile again.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Andreas Larsson authored
Load balancing can be triggered in the critical sections protected by srmmu_context_spinlock in destroy_context() and switch_mm(), and can hang the CPU waiting for the rq lock of another CPU that in turn has called switch_mm and is hanging on srmmu_context_spinlock, leading to deadlock. So, disable interrupts while taking srmmu_context_spinlock in destroy_context() and switch_mm() so we don't deadlock.

See also commit 77b838fa ("[SPARC64]: destroy_context() needs to disable interrupts.")

Signed-off-by: Andreas Larsson <andreas@gaisler.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
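The shape of the fix, sketched for switch_mm() (destroy_context() gets the same treatment):

    unsigned long flags;

    spin_lock_irqsave(&srmmu_context_spinlock, flags);
    /* ... context allocation / switch as before ... */
    spin_unlock_irqrestore(&srmmu_context_spinlock, flags);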
-
- Dec 18, 2014
-
-
Andy Lutomirski authored
It turns out that there's a lurking ABI issue. GCC, when compiling this in a 32-bit program:

    struct user_desc desc = {
            .entry_number    = idx,
            .base_addr       = base,
            .limit           = 0xfffff,
            .seg_32bit       = 1,
            .contents        = 0,    /* Data, grow-up */
            .read_exec_only  = 0,
            .limit_in_pages  = 1,
            .seg_not_present = 0,
            .useable         = 0,
    };

will leave .lm uninitialized. This means that anything in the kernel that reads user_desc.lm for 32-bit tasks is unreliable. Revert the .lm check in set_thread_area(). The value never did anything in the first place.

Fixes: 0e58af4e ("x86/tls: Disallow unusual TLS segments")
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org # Only if 0e58af4e is backported
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/d7875b60e28c512f6a6fc0baf5714d58e7eaadbb.1418856405.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Greg Kurz authored
Starting with POWER8, the subcore logic relies on all threads of a core being booted so that they can participate in split mode switches. So on those machines we ignore the smt_enabled_at_boot setting (smt-enabled on the kernel command line).

Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
[mpe: Update comment and change log to be more precise]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christian Borntraeger authored
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).

Commit 1365039d ("KVM: s390: Fix ipte locking") replaced ACCESS_ONCE with barriers. Let's use READ_ONCE instead.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
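A simplified model of why READ_ONCE is safe for aggregates where ACCESS_ONCE is not: ACCESS_ONCE relies on a cast to a volatile lvalue, (*(volatile typeof(x) *)&(x)), whose qualifier gcc 4.6/4.7 may drop during SRA for non-scalar types, whereas READ_ONCE copies the object out through a volatile pointer. This sketch compresses the kernel's size-switched implementation:

    #include <string.h>

    /* scalar sizes go through a volatile load; larger objects are copied */
    static inline void __read_once_size(const volatile void *p, void *res, int size)
    {
            switch (size) {
            case 1: *(char *)res = *(volatile char *)p; break;
            case 2: *(short *)res = *(volatile short *)p; break;
            case 4: *(int *)res = *(volatile int *)p; break;
            case 8: *(long long *)res = *(volatile long long *)p; break;
            default: memcpy(res, (const void *)p, size); break;
            }
    }

    #define READ_ONCE(x)                                              \
    ({                                                                \
            union { typeof(x) __val; char __c[sizeof(x)]; } __u;      \
            __read_once_size(&(x), __u.__c, sizeof(x));               \
            __u.__val;                                                \
    })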
-
Christian Borntraeger authored
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).

Change the spinlock code to replace ACCESS_ONCE with READ_ONCE.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Christian Borntraeger authored
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).

Change the spinlock code to replace ACCESS_ONCE with READ_ONCE.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Christian Borntraeger authored
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).

Change the gup code to replace ACCESS_ONCE with READ_ONCE.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Christian Borntraeger authored
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).

Change the gup code to replace ACCESS_ONCE with READ_ONCE.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Christian Borntraeger authored
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).

Change the spinlock code to replace ACCESS_ONCE with READ_ONCE.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Paolo Bonzini authored
They are no longer used by IA64; move them away.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Michael S. Tsirkin authored
At the moment, if p and x are both of the same bitwise type (e.g. __le32), get_user(x, p) produces a sparse warning. This is because *p is loaded into a long and then cast back to typeof(*p). When typeof(*p) is a bitwise type (which is uncommon), such a cast needs __force, otherwise sparse produces a warning. For non-bitwise types __force should have no effect, and should not hide any legitimate errors.

Note that we are casting to typeof(*p), not typeof(x). Even with the cast, if x and *p are of different types we should get the warning, so I think we are not losing the ability to detect any actual errors.

virtio would like to use bitwise types with get_user(), so fix these spurious warnings by adding __force.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
[mpe: Fill in changelog with more details]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
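A toy model of the warning and the fix, checkable with sparse; the real powerpc get_user() fetches the value via inline asm, so get_user_model and its memcpy stand-in are illustrative only:

    #include <string.h>

    #ifdef __CHECKER__
    # define __force   __attribute__((force))
    # define __bitwise __attribute__((bitwise))
    #else
    # define __force
    # define __bitwise
    #endif

    typedef unsigned int __bitwise __le32;

    /* the value lands in a long and is cast back to typeof(*p); without
     * __force, sparse warns when typeof(*p) is a bitwise type */
    #define get_user_model(x, p)                                      \
    ({                                                                \
            unsigned long __gu_val = 0;                               \
            memcpy(&__gu_val, (p), sizeof(*(p)));  /* stand-in load */\
            (x) = (__force typeof(*(p)))__gu_val;                     \
            0;                                                        \
    })

    int main(void)
    {
            __le32 v = (__force __le32)1234u, out;
            return get_user_model(out, &v);
    }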
-
Anton Blanchard authored
The in-kernel XICS emulation is faster than doing it all in QEMU, and it has had a lot of testing, so enable it by default.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Linus Torvalds authored
My commit 26178ec1 ("x86: mm: consolidate VM_FAULT_RETRY handling") had a really stupid typo: the FAULT_FLAG_USER bit is in the 'flags' variable, not the 'fault' variable. Duh.

The one silver lining in this is that Dave finding this at least confirms that trinity actually triggers this special path easily, in a way normal use does not.

Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
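The one-liner, roughly (a sketch of the kind of change described, not the verbatim hunk):

    /* before: tested the wrong variable */
    if (fault & FAULT_FLAG_USER)
            return;

    /* after: FAULT_FLAG_USER lives in the fault flags we passed in */
    if (flags & FAULT_FLAG_USER)
            return;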
-
- Dec 17, 2014
-
-
Sam Bobroff authored
Currently the H_CONFER hcall is implemented in kernel virtual mode, meaning that whenever a guest thread does an H_CONFER, all the threads in that virtual core have to exit the guest. This is bad for performance because it interrupts the other threads even if they are doing useful work.

The H_CONFER hcall is called by a guest VCPU when it is spinning on a spinlock and it detects that the spinlock is held by a guest VCPU that is currently not running on a physical CPU. The idea is to give this VCPU's time slice to the holder VCPU so that it can make progress towards releasing the lock.

To avoid having the other threads exit the guest unnecessarily, we add a real-mode implementation of H_CONFER that checks whether the other threads are doing anything. If all the other threads are idle (i.e. in H_CEDE) or trying to confer (i.e. in H_CONFER), it returns H_TOO_HARD which causes a guest exit and allows the H_CONFER to be handled in virtual mode. Otherwise it spins for a short time (up to 10 microseconds) to give other threads the chance to observe that this thread is trying to confer. The spin loop also terminates when any thread exits the guest or when all other threads are idle or trying to confer. If the timeout is reached, the H_CONFER returns H_SUCCESS. In this case the guest VCPU will recheck the spinlock word and most likely call H_CONFER again.

This also improves the implementation of the H_CONFER virtual mode handler. If the VCPU is part of a virtual core (vcore) which is runnable, there will be a 'runner' VCPU which has taken responsibility for running the vcore. In this case we yield to the runner VCPU rather than the target VCPU. We also introduce a check on the target VCPU's yield count: if it differs from the yield count passed to H_CONFER, the target VCPU has run since H_CONFER was called and may have already released the lock. This check is required by PAPR.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
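A sketch of the real-mode decision logic described above; all helper names here are hypothetical stand-ins, not the actual KVM symbols:

    #define H_CONFER_SPIN_NS  10000   /* up to 10 microseconds, per above */

    static long rm_h_confer_sketch(struct kvmppc_vcore *vc)
    {
            u64 start = get_tb();

            if (all_other_threads_ceded_or_conferring(vc))
                    return H_TOO_HARD;      /* exit guest; handle in virtual mode */

            /* spin briefly so other threads can observe that we are conferring */
            while (tb_to_ns(get_tb() - start) < H_CONFER_SPIN_NS) {
                    if (any_thread_exited_guest(vc) ||
                        all_other_threads_ceded_or_conferring(vc))
                            break;
            }

            return H_SUCCESS;       /* guest rechecks the lock word, may re-confer */
    }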
-
Paul Mackerras authored
There are two ways in which a guest instruction can be obtained from the guest in the guest exit code in book3s_hv_rmhandlers.S. If the exit was caused by a Hypervisor Emulation interrupt (i.e. an illegal instruction), the offending instruction is in the HEIR register (Hypervisor Emulation Instruction Register). If the exit was caused by a load or store to an emulated MMIO device, we load the instruction from the guest by turning data relocation on and loading the instruction with an lwz instruction.

Unfortunately, in the case where the guest has opposite endianness to the host, these two methods give results of different endianness, but both get put into vcpu->arch.last_inst. The HEIR value has been loaded using guest endianness, whereas the lwz will load the instruction using host endianness. The rest of the code that uses vcpu->arch.last_inst assumes it was loaded using host endianness.

To fix this, we define a new vcpu field to store the HEIR value. Then, in kvmppc_handle_exit_hv(), we transfer the value from this new field to vcpu->arch.last_inst, doing a byte-swap if the guest and host endianness differ.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
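The transfer then looks roughly like this (field and helper names approximate; treat as a sketch):

    /* on a Hypervisor Emulation interrupt, HEIR was latched in guest
     * endianness; convert to host endianness before anyone consumes it */
    if (exit_was_hv_emulation_assist)
            vcpu->arch.last_inst = kvmppc_need_byteswap(vcpu) ?
                    swab32(vcpu->arch.emul_inst) :
                    vcpu->arch.emul_inst;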
-
Paul Mackerras authored
This removes the code that was added to enable HV KVM to work on PPC970 processors. The PPC970 is an old CPU that doesn't support virtualizing guest memory.

Removing PPC970 support also lets us remove the code for allocating and managing contiguous real-mode areas, the code for the !kvm->arch.using_mmu_notifiers case, the code for pinning pages of guest memory when first accessed and keeping track of which pages have been pinned, and the code for handling H_ENTER hypercalls in virtual mode.

Book3S HV KVM is now supported only on POWER7 and POWER8 processors. The KVM_CAP_PPC_RMA capability now always returns 0.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
-