  1. Jun 23, 2008
  2. Jun 21, 2008
  3. Jun 20, 2008
  4. Jun 19, 2008
    • Jordan Crouse's avatar
      x86, geode: add a VSA2 ID for General Software · ffe6e1da
      Jordan Crouse authored
      
      
      General Software writes their own VSA2 module for their version
      of the Geode BIOS, which returns a different ID than the standard
      VSA2.  This was causing the framebuffer driver to break for most
      GSW boards.
      
      Signed-off-by: Jordan Crouse <jordan.crouse@amd.com>
      Cc: tglx@linutronix.de
      Cc: linux-geode@lists.infradead.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ffe6e1da
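
      To illustrate the idea in ffe6e1da above: the Geode VSA2 detection accepts a
      second, vendor-specific ID instead of matching only the standard one. This is a
      minimal user-space sketch; the ID values and helper name below are hypothetical,
      not the kernel's actual constants.

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical ID values, for illustration only. */
      #define VSA2_ID_STANDARD 0x4d53  /* ID returned by the standard AMD VSA2  */
      #define VSA2_ID_GSW      0x5347  /* different ID returned by the GSW VSA2 */

      /* Before the fix only the standard ID was recognized, so GSW boards
       * appeared to have no VSA2 and the framebuffer driver bailed out. */
      static bool geode_has_vsa2(unsigned int id)
      {
          return id == VSA2_ID_STANDARD || id == VSA2_ID_GSW;
      }

      int main(void)
      {
          printf("GSW board detected: %s\n", geode_has_vsa2(VSA2_ID_GSW) ? "yes" : "no");
          return 0;
      }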
    • Bharath Ravi's avatar
      sched, delay accounting: fix incorrect delay time when constantly waiting on runqueue · d4abc238
      Bharath Ravi authored
      
      
      This patch corrects the incorrect value of per process run-queue wait
      time reported by delay statistics. The anomaly was due to the following
      reason. When a process leaves the CPU and immediately starts waiting for
      CPU on the runqueue (which means it remains in the TASK_RUNNABLE state),
      the time of re-entry into the run-queue is never recorded. Due to this,
      the waiting time on the runqueue from this point of re-entry up to the
      next time it hits the CPU is not accounted for. This is solved by
      recording the time of re-entry of a process leaving the CPU in the
      sched_info_depart() function IF the process will go back to waiting on
      the run-queue. This IF condition is verified by checking whether the
      process is still in the TASK_RUNNABLE state.
      
      The patch was tested on 2.6.26-rc6 using two simple CPU hog programs.
      The values noted prior to the fix did not account for the time spent
      waiting on the runqueue. After the fix, the correct values were reported
      back to user space.
      
      Signed-off-by: Bharath Ravi <bharathravi1@gmail.com>
      Signed-off-by: Madhava K R <madhavakr@gmail.com>
      Cc: dhaval@linux.vnet.ibm.com
      Cc: vatsa@in.ibm.com
      Cc: balbir@in.ibm.com
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d4abc238
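
      A minimal user-space sketch of the accounting idea in d4abc238 above (field and
      function names are simplified stand-ins, not the kernel's sched_info code): when
      a task leaves the CPU but is still runnable, its runqueue re-entry time is stamped
      so the subsequent wait gets counted.

      #include <stdio.h>

      enum task_state { TASK_RUNNABLE, TASK_SLEEPING };

      struct sched_info {
          unsigned long long last_queued; /* time the task last re-entered the runqueue */
          unsigned long long run_delay;   /* accumulated time spent waiting on the runqueue */
      };

      struct task {
          enum task_state state;
          struct sched_info si;
      };

      /* Task leaves the CPU. The fix: if it is still runnable it goes straight
       * back to waiting, so record the re-entry time here. */
      static void sched_info_depart(struct task *t, unsigned long long now)
      {
          if (t->state == TASK_RUNNABLE)
              t->si.last_queued = now;
      }

      /* Task is picked to run again: account the wait since re-entry. */
      static void sched_info_arrive(struct task *t, unsigned long long now)
      {
          if (t->si.last_queued) {
              t->si.run_delay += now - t->si.last_queued;
              t->si.last_queued = 0;
          }
      }

      int main(void)
      {
          struct task t = { .state = TASK_RUNNABLE };

          sched_info_depart(&t, 100);  /* preempted at t=100 while still runnable */
          sched_info_arrive(&t, 130);  /* scheduled again at t=130 */
          printf("runqueue wait: %llu\n", t.si.run_delay); /* prints 30 */
          return 0;
      }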
    • Sonic Zhang's avatar
      Blackfin Serial Driver: Use timer to poll CTS PIN instead of workqueue. · f30ac0ce
      Sonic Zhang authored
      
      
      This allows other threads to run when the serial driver polls the CTS
      PIN in a loop.
      
      Signed-off-by: Sonic Zhang <sonic.zhang@analog.com>
      Signed-off-by: Bryan Wu <cooloney@kernel.org>
      f30ac0ce
    • Bernhard Walle's avatar
      x86: use BOOTMEM_EXCLUSIVE on 32-bit · d3942cff
      Bernhard Walle authored
      
      
      This patch uses BOOTMEM_EXCLUSIVE for the crashkernel reservation on
      i386 as well, and prints an error message on failure.
      
      The patch is still for 2.6.26 since it is only a bug fix. The unification
      of reserve_crashkernel() between i386 and x86_64 should be done for 2.6.27.
      
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: <stable@kernel.org>
      d3942cff
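
      A toy model of what BOOTMEM_EXCLUSIVE buys the i386 crashkernel path in d3942cff
      above (this is not the bootmem allocator itself; all names and addresses are made
      up for illustration): an exclusive reservation fails on overlap, so the caller can
      print an error instead of silently reserving memory that is already in use.

      #include <stdio.h>

      #define BOOTMEM_DEFAULT   0
      #define BOOTMEM_EXCLUSIVE 1  /* fail instead of silently sharing a reserved range */

      /* Toy model: a single already-reserved region. */
      static unsigned long reserved_start = 0x1000000, reserved_end = 0x2000000;

      static int reserve_region(unsigned long start, unsigned long size, int flags)
      {
          unsigned long end = start + size;
          int overlaps = start < reserved_end && end > reserved_start;

          if (overlaps && flags == BOOTMEM_EXCLUSIVE)
              return -1;  /* caller can report the conflict instead of corrupting the range */
          return 0;
      }

      int main(void)
      {
          if (reserve_region(0x1800000, 0x400000, BOOTMEM_EXCLUSIVE) < 0)
              printf("crashkernel reservation failed: memory range already in use\n");
          return 0;
      }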
    • Mikael Pettersson's avatar
      x86, 32-bit: fix boot failure on TSC-less processors · df17b1d9
      Mikael Pettersson authored
      Booting 2.6.26-rc6 on my 486 DX/4 fails with a "BUG: Int 6"
      (invalid opcode) and a kernel halt immediately after the
      kernel has been uncompressed. The BUG shows EIP pointing
      to an rdtsc instruction in native_read_tsc(), invoked from
      native_sched_clock().
      
      (This error occurs so early that not even the serial console
      can capture it.)
      
      A bisection showed that this bug first occurs in 2.6.26-rc3-git7,
      via commit 9ccc906c:
      
      >x86: distangle user disabled TSC from unstable
      >
      >tsc_enabled is set to 0 from the command line switch "notsc" and from
      >the mark_tsc_unstable code. Seperate those functionalities and replace
      >tsc_enable with tsc_disable. This makes also the native_sched_clock()
      >decision when to use TSC understandable.
      >
      >Preparatory patch to solve the sched_clock() issue on 32 bit.
      >
      >Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      
      The core reason for this bug is that native_sched_clock() gets
      called before tsc_init().
      
      Before the commit above, tsc_32.c used a "tsc_enabled" variable
      which defaulted to 0 == disabled, and which only got enabled late
      in tsc_init(). Thus early calls to native_sched_clock() would skip
      the TSC and use jiffies instead.
      
      After the commit above, tsc_32.c uses a "tsc_disabled" variable
      which defaults to 0, meaning that the TSC is Ok to use. Early calls
      to native_sched_clock() now erroneously try to use the TSC on
      !cpu_has_tsc processors, leading to invalid opcode exceptions.
      
      My proposed fix is to initialise tsc_disabled to a "soft disabled"
      state distinct from the hard disabled state set up by the "notsc"
      kernel option. This fixes the native_sched_clock() problem. It also
      allows tsc_init() to be simplified: instead of setting tsc_disabled = 1
      on every error return, we just set tsc_disabled = 0 once when all
      checks have succeeded.
      
      I've verified that this lets my 486 boot again. I've also verified
      that a Core2 machine still uses the TSC as clocksource after the patch.
      
      Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      df17b1d9
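
      A compact user-space sketch of the state machine described in df17b1d9 above
      (simplified names and values, not the actual tsc_32.c code): tsc_disabled starts
      in a "soft disabled" state so early sched_clock() calls fall back to jiffies,
      "notsc" would hard-disable it, and tsc_init() enables it only once all checks pass.

      #include <stdio.h>

      /* Simplified states mirroring the description above. */
      enum { TSC_ENABLED = 0, TSC_HARD_DISABLED = 1, TSC_SOFT_DISABLED = 2 };

      static int tsc_disabled = TSC_SOFT_DISABLED;  /* default: don't touch the TSC yet */
      static unsigned long long jiffies = 1000;

      static unsigned long long read_tsc(void) { return 123456789ULL; } /* stand-in for rdtsc */

      /* Early boot calls land here before tsc_init(); with the soft-disabled
       * default they fall back to jiffies instead of executing rdtsc on a
       * TSC-less CPU (which is what raised the invalid-opcode BUG). */
      static unsigned long long sched_clock(void)
      {
          if (tsc_disabled)
              return jiffies * 10000000ULL;  /* jiffies-based fallback */
          return read_tsc();
      }

      /* "notsc" on the command line would set TSC_HARD_DISABLED instead. */
      static void tsc_init(int cpu_has_tsc)
      {
          if (!cpu_has_tsc || tsc_disabled == TSC_HARD_DISABLED)
              return;                  /* stay disabled; no per-error-path writes needed */
          tsc_disabled = TSC_ENABLED;  /* set exactly once, after all checks succeed */
      }

      int main(void)
      {
          printf("early clock: %llu\n", sched_clock());  /* safe on a 486: uses jiffies */
          tsc_init(/* cpu_has_tsc= */ 0);
          printf("late clock:  %llu\n", sched_clock());  /* still jiffies on a TSC-less CPU */
          return 0;
      }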
    • Suresh Siddha's avatar
      x86: fix NULL pointer deref in __switch_to · 75118a82
      Suresh Siddha authored
      
      
      Patrick McHardy reported a crash:
      
      > > I get this oops once a day, its apparently triggered by something
      > > run by cron, but the process is a different one each time.
      > >
      > > Kernel is -git from yesterday shortly before the -rc6 release
      > > (last commit is the usb-2.6 merge, the x86 patches are missing),
      > > .config is attached.
      > >
      > > I'll retry with current -git, but the patches that have gone in
      > > since I last updated don't look related.
      > >
      > > [62060.043009] BUG: unable to handle kernel NULL pointer dereference at
      > > 000001ff
      > > [62060.043009] IP: [<c0102a9b>] __switch_to+0x2f/0x118
      > > [62060.043009] *pde = 00000000
      > > [62060.043009] Oops: 0002 [#1] PREEMPT
      
      Vegard Nossum analyzed it:
      
      > This decodes to
      >
      >    0:   0f ae 00                fxsave (%eax)
      >
      > so it's related to the floating-point context. This is the exact
      > location of the crash:
      >
      > $ addr2line -e arch/x86/kernel/process_32.o -i ab0
      > include/asm/i387.h:232
      > include/asm/i387.h:262
      > arch/x86/kernel/process_32.c:595
      >
      > ...so it looks like prev_task->thread.xstate->fxsave has become NULL.
      > Or maybe it never had any other value.
      
      Somehow (as described below) TS_USEDFPU is set but the fpu is not
      allocated or freed.
      
      This is another possible FPU preemption issue with the sleazy FPU
      optimization, which was benign before but is no longer so with the
      dynamic FPU allocation patch.
      
      A new task is being exec'd and is preempted at the point below.
      
      flush_thread() {
      	...
      	/*
      	* Forget coprocessor state..
      	*/
      	clear_fpu(tsk);
      		<----- Preemption point
      	clear_used_math();
      	...
      }
      
      Now when it context switches in again, as the used_math() is still set
      and fpu_counter can be > 5, we will do a math_state_restore() which sets
      the task's TS_USEDFPU. After it continues from the above preemption point
      it does clear_used_math() and much later free_thread_xstate().
      
      Now, at the next context switch, it is quite possible that xstate is
      null, used_math() is not set and TS_USEDFPU is still set. This will
      trigger unlazy_fpu(), causing a kernel oops.
      
      Fix this by clearing tsk's fpu_counter before clearing the task's FPU.
      
      Reported-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      75118a82
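
      A schematic sketch of the ordering that 75118a82 above establishes in
      flush_thread() (names and types are simplified stand-ins, not the kernel
      implementation): zeroing fpu_counter first means a preemption right after
      clear_fpu() can no longer trigger an eager restore that re-sets TS_USEDFPU
      for state which is about to be freed.

      #include <stdio.h>

      struct task {
          int fpu_counter;   /* heuristic: >5 means "restore FPU eagerly on switch-in" */
          int ts_usedfpu;    /* TS_USEDFPU thread flag */
          int used_math;     /* "task has used math" flag */
      };

      static void clear_fpu(struct task *t)        { t->ts_usedfpu = 0; }
      static void clear_used_math(struct task *t)  { t->used_math = 0; }

      /* Fixed ordering (schematic): zero fpu_counter *before* clear_fpu(), so a
       * preemption between the two calls cannot see a stale counter > 5 and
       * eagerly restore FPU state that flush_thread() is about to throw away,
       * which would leave TS_USEDFPU set with xstate == NULL. */
      static void flush_thread(struct task *t)
      {
          t->fpu_counter = 0;   /* the fix: do this first */
          clear_fpu(t);         /* <-- a preemption here is now harmless */
          clear_used_math(t);
      }

      int main(void)
      {
          struct task t = { .fpu_counter = 7, .ts_usedfpu = 1, .used_math = 1 };
          flush_thread(&t);
          printf("fpu_counter=%d TS_USEDFPU=%d used_math=%d\n",
                 t.fpu_counter, t.ts_usedfpu, t.used_math);
          return 0;
      }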
    • Jeremy Fitzhardinge's avatar
      x86: set PAE PHYSICAL_MASK_SHIFT to 44 bits. · ad524d46
      Jeremy Fitzhardinge authored
      
      
      When a 64-bit x86 processor runs in 32-bit PAE mode, a pte can
      potentially have the same number of physical address bits as the
      64-bit host ("Enhanced Legacy PAE Paging").  This means, in theory,
      we could have up to 52 bits of physical address in a pte.
      
      The 32-bit kernel uses a 32-bit unsigned long to represent a pfn.
      This means that it can only represent physical addresses up to 32+12=44
      bits wide.  Rather than widening pfns everywhere, just set 2^44 as the
      Linux x86_32-PAE architectural limit for physical address size.
      
      This is a bugfix for two cases:
      1. running a 32-bit PAE kernel on a machine with
        more than 64GB RAM.
      2. running a 32-bit PAE Xen guest on a host machine with
        more than 64GB RAM
      
      In both cases, a pte could need more than 36 bits of physical address,
      and masking it to 36 bits will cause fairly severe havoc.
      
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Jan Beulich <jbeulich@novell.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ad524d46
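
      The arithmetic behind ad524d46 above, spelled out as a small program: with 4 KB
      pages a 32-bit pfn covers 32 + 12 = 44 physical address bits, so a 44-bit mask
      (rather than 36) keeps PTE masking from chopping off valid high bits. The macro
      names follow the kernel's convention, but the snippet is only an illustration.

      #include <stdio.h>

      #define PAGE_SHIFT           12   /* 4 KB pages */
      #define PFN_BITS             32   /* a 32-bit unsigned long holds the pfn */
      #define PHYSICAL_MASK_SHIFT  (PFN_BITS + PAGE_SHIFT)   /* 44 for x86_32 PAE */
      #define PHYSICAL_PAGE_MASK   (((1ULL << PHYSICAL_MASK_SHIFT) - 1) & ~((1ULL << PAGE_SHIFT) - 1))

      int main(void)
      {
          /* A PTE whose frame sits just above 64 GB, i.e. more than 36 physical bits. */
          unsigned long long pte = 0x1080000000ULL | 0x63;          /* address | flag bits */
          unsigned long long old_mask = ((1ULL << 36) - 1) & ~0xFFFULL;

          printf("PHYSICAL_MASK_SHIFT = %d bits\n", PHYSICAL_MASK_SHIFT);
          printf("masked with 36 bits: 0x%llx (high bits lost)\n", pte & old_mask);
          printf("masked with 44 bits: 0x%llx (address preserved)\n", pte & PHYSICAL_PAGE_MASK);
          return 0;
      }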
    • Jason Wessel's avatar
      softlockup: fix NMI hangs due to lock race - 2.6.26-rc regression · 9c106c11
      Jason Wessel authored
      The touch_nmi_watchdog() routine on x86 ultimately calls
      touch_softlockup_watchdog().  The problem is that to touch the
      softlockup watchdog, the cpu_clock code has to be called which could
      involve multiple cpu locks and can lead to a hard hang if one of the
      locks is held by a processor that is not going to return anytime soon
      (such as could be the case with kgdb or perhaps even with some other
      kind of exception).
      
      This patch causes the public version of touch_softlockup_watchdog()
      to defer the cpu clock access to a later point.
      
      The test case for this problem is to use the following kernel config
      options:
      
      CONFIG_KGDB_TESTS=y
      CONFIG_KGDB_TESTS_ON_BOOT=y
      CONFIG_KGDB_TESTS_BOOT_STRING="V1F100I100000"
      
      It should be noted that the kgdb test suite and these options were not
      available until 2.6.26-rc2, so it was necessary to patch the kgdb
      test suite during the bisection.
      
      I would consider this patch a regression fix because the problem first
      appeared in commit 27ec4407 when some
      logic was added to try to periodically sync the clocks.  It was
      possible to work around this particular problem by simply not
      performing the sync anytime the system was in a critical context.
      This was OK until commit 3e51f33f,
      which added config option CONFIG_HAVE_UNSTABLE_SCHED_CLOCK and some
      multi-cpu locks to sync the clocks.  It became clear that accessing
      this code from an NMI was the source of the lockups.  Avoiding the
      access to the low level clock code from code inside the NMI
      processing also fixed the problem with the 27ec44... commit.
      
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9c106c11
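
      A minimal model of the deferral described in 9c106c11 above (not the actual
      softlockup code; the helper names are stand-ins): the public touch function only
      marks the timestamp as "needs refresh", and the periodic tick, which runs in a
      safe context, does the actual clock read.

      #include <stdio.h>

      static unsigned long long fake_cpu_clock(void) { return 424242ULL; } /* stand-in for cpu_clock() */

      static unsigned long long touch_timestamp = 1000;

      /* Safe to call from NMI context: no clock or locking work here, just a marker. */
      static void touch_softlockup_watchdog(void)
      {
          touch_timestamp = 0;
      }

      /* Runs later from the regular timer tick, where taking the clock locks is fine. */
      static void softlockup_tick(void)
      {
          if (touch_timestamp == 0) {           /* watchdog was touched; refresh lazily */
              touch_timestamp = fake_cpu_clock();
              return;
          }
          /* ... normal stall detection against touch_timestamp would go here ... */
      }

      int main(void)
      {
          touch_softlockup_watchdog();  /* e.g. from touch_nmi_watchdog() in NMI context */
          softlockup_tick();
          printf("timestamp refreshed to %llu outside NMI context\n", touch_timestamp);
          return 0;
      }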
    • Steven Rostedt's avatar
      rcupreempt: remove export of rcu_batches_completed_bh · afd38009
      Steven Rostedt authored
      
      
      In rcupreempt, rcu_batches_completed_bh is defined as a static inline in
      the header file. This does not need to be exported, and not only that,
      this breaks my PPC build.
      
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: paulus@samba.org
      Cc: linuxppc-dev@ozlabs.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      afd38009
    • Li Zefan's avatar
      cpuset: limit the input of cpuset.sched_relax_domain_level · 30e0e178
      Li Zefan authored
      
      
      We allow the inputs to be [-1 ... SD_LV_MAX), and return -EINVAL
      for inputs outside this range.
      
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Acked-by: Paul Menage <menage@google.com>
      Acked-by: Paul Jackson <pj@sgi.com>
      Acked-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      30e0e178
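
      A sketch of the range check described in 30e0e178 above (simplified signature;
      the SD_LV_MAX value and the setter name below are illustrative, not the kernel's):
      values in [-1, SD_LV_MAX) are accepted, everything else gets -EINVAL.

      #include <errno.h>
      #include <stdio.h>

      #define SD_LV_MAX 9   /* illustrative value; the real enum size depends on the kernel */

      static int current_level = -1;

      static int sched_domain_set_relax_level(int level)
      {
          if (level < -1 || level >= SD_LV_MAX)
              return -EINVAL;   /* reject out-of-range cpuset.sched_relax_domain_level input */
          current_level = level;
          return 0;
      }

      int main(void)
      {
          printf("set -1 -> %d\n", sched_domain_set_relax_level(-1));  /* 0: accepted ("default") */
          printf("set 42 -> %d\n", sched_domain_set_relax_level(42));  /* -EINVAL */
          printf("level = %d\n", current_level);
          return 0;
      }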
    • Ingo Molnar's avatar
      Merge branch 'linus' into sched/urgent · d819c49d
      Ingo Molnar authored
      d819c49d
    • Max Krasnyansky's avatar
      sched: CPU hotplug events must not destroy scheduler domains created by the cpusets · f18f982a
      Max Krasnyansky authored
      
      
      The first issue is not related to the cpusets. We're simply leaking doms_cur.
      It's allocated in arch_init_sched_domains(), which is called for every
      hotplug event, so we just keep reallocating doms_cur without freeing it.
      I introduced a free_sched_domains() function that cleans things up.
      
      The second issue is that sched domains created by the cpusets are
      completely destroyed by the CPU hotplug events. For every CPU hotplug
      event the scheduler attaches all CPUs to the NULL domain and then puts
      them all into a single domain, thereby destroying the domains created
      by the cpusets (partition_sched_domains).
      The solution is simple: when cpusets are enabled the scheduler should not
      create the default domain and should instead let the cpusets do that,
      which is exactly what the patch does.
      
      Signed-off-by: Max Krasnyansky <maxk@qualcomm.com>
      Cc: pj@sgi.com
      Cc: menage@google.com
      Cc: rostedt@goodmis.org
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      f18f982a
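
      A schematic of the first issue in f18f982a above and its remedy (illustrative
      names and a toy cpumask type, not the scheduler's actual code): the domain init
      path is re-run on every hotplug event, so the previously allocated doms_cur must
      be freed before a new one is allocated.

      #include <stdlib.h>
      #include <stdio.h>

      typedef unsigned long cpumask_t;      /* toy stand-in for the kernel's cpumask */

      static cpumask_t *doms_cur;           /* current set of scheduler domain masks */
      static int ndoms_cur;

      static void free_sched_domains(void)  /* the helper the commit introduces */
      {
          free(doms_cur);
          doms_cur = NULL;
          ndoms_cur = 0;
      }

      static void arch_init_sched_domains(int ndoms)
      {
          free_sched_domains();             /* without this, every hotplug event leaked doms_cur */
          doms_cur = calloc(ndoms, sizeof(*doms_cur));
          ndoms_cur = ndoms;
      }

      int main(void)
      {
          arch_init_sched_domains(1);       /* boot */
          arch_init_sched_domains(1);       /* CPU hotplug event: no leak this time */
          printf("ndoms_cur = %d\n", ndoms_cur);
          free_sched_domains();
          return 0;
      }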
    • Peter Zijlstra's avatar
      sched: rt-group: fix RR buglet · 15a8641e
      Peter Zijlstra authored
      
      
      In tick_task_rt() we first call update_curr_rt(), which can dequeue the runqueue
      entity when it runs out of runtime, and then we try to requeue it if it has also
      exhausted its RR quota. Obviously, requeueing something that is no longer on the
      runqueue will not have the expected result.
      
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Tested-by: Daniel K. <dk@uw.no>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      15a8641e
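
      A schematic of the guard that 15a8641e above adds (simplified; this is not
      sched_rt.c itself, and the struct, functions, and time-slice value are stand-ins):
      after update_curr_rt() may have thrown the entity off the runqueue, the RR requeue
      only happens if the entity is still queued.

      #include <stdbool.h>
      #include <stdio.h>

      struct rt_entity {
          bool on_rq;        /* still on the runqueue?          */
          int  time_slice;   /* remaining RR quota              */
          bool throttled;    /* ran out of runtime this period? */
      };

      static void update_curr_rt(struct rt_entity *e)
      {
          if (e->throttled)
              e->on_rq = false;   /* exhausting runtime dequeues the entity */
      }

      static void requeue_entity(struct rt_entity *e)
      {
          printf("requeued at tail (time_slice=%d)\n", e->time_slice);
      }

      static void task_tick_rt(struct rt_entity *e)
      {
          update_curr_rt(e);                 /* may dequeue e */

          if (--e->time_slice)
              return;
          e->time_slice = 100;               /* stand-in for the default RR timeslice */

          if (e->on_rq)                      /* the buglet: requeueing without this check */
              requeue_entity(e);
      }

      int main(void)
      {
          struct rt_entity e = { .on_rq = true, .time_slice = 1, .throttled = true };
          task_tick_rt(&e);                  /* quota exhausted, but entity was dequeued: no requeue */
          printf("on_rq = %d\n", e.on_rq);
          return 0;
      }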