  1. Jan 17, 2021
    • net: cdc_ncm: correct overhead in delayed_ndp_size · d73f7e75
      Jouni K. Seppänen authored
      [ Upstream commit 7a68d725 ]
      
      Aligning to tx_ndp_modulus is not sufficient because the next align
      call can be cdc_ncm_align_tail, which can add up to ctx->tx_modulus +
      ctx->tx_remainder - 1 bytes. This used to lead to occasional crashes
      on a Huawei 909s-120 LTE module as follows:
      
      - the condition marked /* if there is a remaining skb [...] */ is true
        so the swaps happen
      - skb_out is set from ctx->tx_curr_skb
      - skb_out->len is exactly 0x3f52
      - ctx->tx_curr_size is 0x4000 and delayed_ndp_size is 0xac
        (note that the sum of skb_out->len and delayed_ndp_size is 0x3ffe)
      - the for loop over n is executed once
      - the cdc_ncm_align_tail call marked /* align beginning of next frame */
        increases skb_out->len to 0x3f56 (the sum is now 0x4002)
      - the condition marked /* check if we had enough room left [...] */ is
        false so we break out of the loop
      - the condition marked /* If requested, put NDP at end of frame. */ is
        true so the NDP is written into skb_out
      - now skb_out->len is 0x4002, so padding_count is minus two interpreted
        as an unsigned number, which is used as the length argument to memset,
        leading to a crash with various symptoms but usually including
      
      > Call Trace:
      >  <IRQ>
      >  cdc_ncm_fill_tx_frame+0x83a/0x970 [cdc_ncm]
      >  cdc_mbim_tx_fixup+0x1d9/0x240 [cdc_mbim]
      >  usbnet_start_xmit+0x5d/0x720 [usbnet]
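
      As a standalone illustration of the arithmetic above (this is a toy model,
      not the driver code), the following C snippet replays the reported numbers
      and shows how the padding length wraps around once the NDP no longer fits:

      #include <stdio.h>
      #include <stddef.h>

      int main(void)
      {
          size_t tx_curr_size = 0x4000;  /* output buffer size from the report    */
          size_t len          = 0x3f56;  /* skb_out->len after the tail alignment */
          size_t ndp_size     = 0xac;    /* delayed NDP appended at end of frame  */

          len += ndp_size;               /* 0x3f56 + 0xac = 0x4002, past the end  */

          size_t padding_count = tx_curr_size - len;  /* -2 as an unsigned value  */

          printf("len = %#zx, padding_count = %#zx\n", len, padding_count);
          /* A memset() of padding_count bytes would now write nearly SIZE_MAX
           * zeros past the buffer, which is the crash shown in the trace above. */
          return 0;
      }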
      
      The cdc_ncm_align_tail call first aligns on a ctx->tx_modulus
      boundary (adding at most ctx->tx_modulus-1 bytes), then adds
      ctx->tx_remainder bytes. Alternatively, the next alignment call can
      occur in cdc_ncm_ndp16 or cdc_ncm_ndp32, in which case at most
      ctx->tx_ndp_modulus-1 bytes are added.
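
      A small self-contained model of that worst case (the modulus and remainder
      values below are arbitrary examples, not the module's real parameters)
      confirms the ctx->tx_modulus + ctx->tx_remainder - 1 bound:

      #include <stdio.h>
      #include <stddef.h>

      /* Model of the tail alignment: round len up to a multiple of modulus,
       * then add remainder bytes, as described above. */
      static size_t align_tail(size_t len, size_t modulus, size_t remainder)
      {
          return (len + modulus - 1) / modulus * modulus + remainder;
      }

      int main(void)
      {
          size_t modulus = 32, remainder = 12;  /* example values only */
          size_t worst = 0;

          for (size_t len = 0; len < 4096; len++) {
              size_t growth = align_tail(len, modulus, remainder) - len;
              if (growth > worst)
                  worst = growth;
          }
          printf("worst-case growth = %zu bytes\n", worst);  /* 43 = 32 + 12 - 1 */
          return 0;
      }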
      
      A similar problem has occurred before, and the code is nontrivial to
      reason about, so add a guard before the crashing call. By that time it
      is too late to prevent any memory corruption (we'll have written past
      the end of the buffer already) but we can at least try to get a warning
      written into an on-disk log by avoiding the hard crash caused by padding
      past the buffer with a huge number of zeros.
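
      A minimal sketch of the kind of guard described, written as a standalone
      model rather than the actual patch (the function and variable names here
      are illustrative):

      #include <stdio.h>
      #include <string.h>
      #include <stddef.h>

      /* Zero-pad the frame tail, but only if the computed padding length is
       * sane; otherwise warn and skip the memset, so an overrun is reported
       * instead of being turned into a huge out-of-bounds write. */
      static void pad_frame(unsigned char *buf, size_t buf_size, size_t len)
      {
          size_t padding_count = buf_size - len;

          if (padding_count > buf_size) {  /* len > buf_size: subtraction wrapped */
              fprintf(stderr, "WARNING: frame overran buffer (len=%#zx, size=%#zx)\n",
                      len, buf_size);
              return;
          }
          memset(buf + len, 0, padding_count);
      }

      int main(void)
      {
          unsigned char frame[0x4000];

          pad_frame(frame, sizeof(frame), 0x3f52);  /* normal case: pads the tail */
          pad_frame(frame, sizeof(frame), 0x4002);  /* overrun: warns, no memset  */
          return 0;
      }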
      
      Signed-off-by: Jouni K. Seppänen <jks@iki.fi>
      Fixes: 4a0e3e98 ("cdc_ncm: Add support for moving NDP to end of NCM frame")
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=209407
      Reported-by: kernel test robot <lkp@intel.com>
      Reviewed-by: Bjørn Mork <bjorn@mork.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • vfio iommu: Add dma available capability · 55975572
      Matthew Rosato authored
      [ Upstream commit 7d6e1329 ]
      
      The following functional changes were needed for backport:
      - vfio_iommu_type1_get_info doesn't exist, call
        vfio_iommu_dma_avail_build_caps from vfio_iommu_type1_ioctl.
      - As further fallout from this, vfio_iommu_dma_avail_build_caps must
        acquire and release the iommu mutex lock.  To do so, the return value is
        stored in a local variable as in vfio_iommu_iova_build_caps.
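
      A rough sketch of the backported helper following the description above,
      assuming the surrounding type1 driver context (struct vfio_iommu with its
      lock and dma_avail counter); it illustrates the locking pattern and is not
      the literal backport:

      static int vfio_iommu_dma_avail_build_caps(struct vfio_iommu *iommu,
                                                 struct vfio_info_cap *caps)
      {
          struct vfio_iommu_type1_info_dma_avail cap_dma_avail;
          int ret;

          /* The ioctl path calling this does not hold the iommu lock, so
           * take it here around reading dma_avail. */
          mutex_lock(&iommu->lock);

          cap_dma_avail.header.id = VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL;
          cap_dma_avail.header.version = 1;
          cap_dma_avail.avail = iommu->dma_avail;

          /* Keep the result in a local so the lock is released before
           * returning, mirroring vfio_iommu_iova_build_caps. */
          ret = vfio_info_add_capability(caps, &cap_dma_avail.header,
                                         sizeof(cap_dma_avail));

          mutex_unlock(&iommu->lock);

          return ret;
      }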
      
      Upstream commit description:
      Commit 49285593 ("vfio/type1: Limit DMA mappings per container")
      added the ability to limit the number of memory backed DMA mappings.
      However on s390x, when lazy mapping is in use, we use a very large
      number of concurrent mappings.  Let's provide the current allowable
      number of DMA mappings to userspace via the IOMMU info chain so that
      userspace can take appropriate mitigation.
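
      From the userspace side, the new capability can be read by walking the
      VFIO_IOMMU_GET_INFO capability chain. The helper below is a hedged sketch:
      its name and error handling are made up, and it assumes a container fd
      that already has the type1 IOMMU set plus UAPI headers that define
      VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL:

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/ioctl.h>
      #include <linux/vfio.h>

      /* Print how many DMA mappings are still available, if reported. */
      static void print_dma_avail(int container_fd)
      {
          struct vfio_iommu_type1_info hdr = { .argsz = sizeof(hdr) };

          /* First call only learns the full size of the info + cap chain. */
          if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, &hdr) < 0)
              return;

          struct vfio_iommu_type1_info *info = calloc(1, hdr.argsz);
          if (!info)
              return;
          info->argsz = hdr.argsz;

          if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info) == 0 &&
              (info->flags & VFIO_IOMMU_INFO_CAPS)) {
              /* Each header's 'next' is an offset from the start of info;
               * an offset of 0 terminates the chain. */
              for (__u32 off = info->cap_offset; off; ) {
                  struct vfio_info_cap_header *cap =
                          (struct vfio_info_cap_header *)((char *)info + off);
                  if (cap->id == VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL) {
                      struct vfio_iommu_type1_info_dma_avail *avail =
                              (struct vfio_iommu_type1_info_dma_avail *)cap;
                      printf("DMA mappings available: %u\n", avail->avail);
                      break;
                  }
                  off = cap->next;
              }
          }
          free(info);
      }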
      
      Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • x86/asm/32: Add ENDs to some functions and relabel with SYM_CODE_* · 33510408
      Jiri Slaby authored
      commit 78762b0e upstream.
      
      All these are functions which are invoked from elsewhere, but they are
      not typical C functions. So annotate them using the new SYM_CODE_START.
      None of them was balanced with an END, so mark their ends with
      SYM_CODE_END, as appropriate.
      
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> [xen bits]
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> [hibernate]
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Pingfan Liu <kernelfans@gmail.com>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Cc: xen-devel@lists.xenproject.org
      Link: https://lkml.kernel.org/r/20191011115108.12392-26-jslaby@suse.cz
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  2. Jan 13, 2021