  Sep 29, 2018
    • blk-iolatency: deal with small samples · 22ed8a93
      Josef Bacik authored
      
      
      There is logic to keep cgroups that haven't done a lot of IO in the most
      recent scale window from being punished because of over-active,
      higher-priority groups.  However, for things like SSDs, where the
      windows are pretty short, we'll end up with small numbers of samples,
      so 5% of the samples will come out to 0 if there aren't enough.  Make
      the floor 1 sample to keep us from improperly bailing out of scaling
      down.
      
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
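      A minimal userspace sketch of the threshold arithmetic this commit is
      about (names are illustrative, not the kernel's): with a short window
      the total sample count can be below 20, so "5% of the samples"
      truncates to 0, and flooring the threshold at 1 sample avoids that.

          #include <stdio.h>

          /* Sketch only: 5% of the window's samples, floored at 1 sample. */
          static unsigned long long samples_thresh(unsigned long long window_samples)
          {
              unsigned long long thresh = window_samples * 5 / 100;

              if (thresh < 1)
                  thresh = 1; /* the fix: never let the threshold truncate to 0 */
              return thresh;
          }

          int main(void)
          {
              printf("12 samples  -> threshold %llu\n", samples_thresh(12));  /* 1, not 0 */
              printf("400 samples -> threshold %llu\n", samples_thresh(400)); /* 20 */
              return 0;
          }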
    • blk-iolatency: deal with nr_requests == 1 · 9f60511a
      Josef Bacik authored
      
      
      Hitting the case where blk_queue_depth() returned 1 uncovered the fact
      that iolatency doesn't actually handle this case properly: it simply
      doesn't scale anybody down.  In this case we should go straight to
      applying the time delay, which we weren't doing.  Since we already
      limit the floor at 1 request, this if statement is not needed;
      removing it lets us set our depth to 1, which in turn lets us apply
      the delay when needed.
      
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
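      A minimal sketch (not the kernel code) of the scale-down behaviour the
      commit describes: the allowed depth shrinks with a floor of one
      request, and once the depth is already 1 the group goes straight to a
      time delay instead of bailing out.  The struct and the percentage
      argument are illustrative.

          #include <stdio.h>

          struct grp {
              unsigned int max_depth; /* requests the group may have in flight */
              int use_delay;          /* throttle submitters with a delay instead */
          };

          static void scale_down(struct grp *g, unsigned int pct)
          {
              if (g->max_depth == 1) {
                  g->use_delay = 1;   /* cannot go below 1: apply the time delay */
                  return;
              }
              g->max_depth = g->max_depth * pct / 100;
              if (g->max_depth < 1)
                  g->max_depth = 1;   /* existing floor of one request */
          }

          int main(void)
          {
              struct grp g = { .max_depth = 1, .use_delay = 0 };

              scale_down(&g, 75);
              printf("depth=%u use_delay=%d\n", g.max_depth, g.use_delay);
              return 0;
          }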
    • blk-iolatency: use q->nr_requests directly · ff4cee08
      Josef Bacik authored
      
      
      We were using blk_queue_depth() assuming that it would return
      nr_requests, but we hit a case in production on drives that had to
      have NCQ turned off in order for them not to shit the bed, which
      resulted in a qd of 1 even though nr_requests was much larger.
      iolatency really only cares about the requests we are allowed to
      queue up, since any IO that gets onto the request list is going to
      be serviced soonish, so we want to be throttling before the bio gets
      onto the request list.  To make iolatency work as expected, simply
      use q->nr_requests instead of blk_queue_depth(), as that is what we
      actually care about.
      
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
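      A small illustration of the distinction the commit relies on; the
      struct is a stand-in, not the kernel's request_queue.  The throttling
      depth should come from the number of requests the block layer may
      allocate (nr_requests), not from the hardware queue depth, which can
      be 1 on a drive running with NCQ disabled.

          #include <stdio.h>

          struct queue_view {
              unsigned int nr_requests;    /* block-layer request allocation limit */
              unsigned int hw_queue_depth; /* device queue depth, e.g. 1 with NCQ off */
          };

          static unsigned int iolatency_depth_limit(const struct queue_view *q)
          {
              return q->nr_requests;       /* what iolatency actually cares about */
          }

          int main(void)
          {
              struct queue_view q = { .nr_requests = 128, .hw_queue_depth = 1 };

              printf("throttle against %u requests, not qd %u\n",
                     iolatency_depth_limit(&q), q.hw_queue_depth);
              return 0;
          }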
    • kyber: fix integer overflow of latency targets on 32-bit · f0a0cddd
      Omar Sandoval authored
      
      
      NSEC_PER_SEC has type long, so 5 * NSEC_PER_SEC is calculated as a long.
      However, 5 seconds is 5,000,000,000 nanoseconds, which overflows a
      32-bit long. Make sure all of the targets are calculated as 64-bit
      values.
      
      Fixes: 6e25cb01 ("kyber: implement improved heuristics")
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
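      A standalone demonstration of the overflow being fixed (the constant
      is defined locally for illustration): 5 seconds is 5,000,000,000 ns,
      which does not fit in a 32-bit long, so on 32-bit builds the target
      wraps unless the arithmetic is done in 64 bits.

          #include <inttypes.h>
          #include <stdio.h>

          int main(void)
          {
              const int64_t NSEC_PER_SEC = 1000000000;

              int64_t target_64 = 5 * NSEC_PER_SEC;   /* 5,000,000,000 ns */
              int32_t target_32 = (int32_t)target_64; /* what a 32-bit long ends up holding */

              printf("64-bit target:        %" PRId64 " ns\n", target_64);
              printf("wrapped 32-bit value: %" PRId32 " ns\n", target_32);
              return 0;
          }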