.mailmap +4 −0

@@ -137,6 +137,7 @@ Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
 Rudolf Marek <R.Marek@sh.cvut.cz>
 Rui Saraiva <rmps@joel.ist.utl.pt>
 Sachin P Sant <ssant@in.ibm.com>
+Sarangdhar Joshi <spjoshi@codeaurora.org>
 Sam Ravnborg <sam@mars.ravnborg.org>
 Santosh Shilimkar <ssantosh@kernel.org>
 Santosh Shilimkar <santosh.shilimkar@oracle.org>
@@ -150,10 +151,13 @@ Shuah Khan <shuah@kernel.org> <shuah.kh@samsung.com>
 Simon Kelley <simon@thekelleys.org.uk>
 Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr>
 Stephen Hemminger <shemminger@osdl.org>
+Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
+Subhash Jadavani <subhashj@codeaurora.org>
 Sudeep Holla <sudeep.holla@arm.com> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
 Sumit Semwal <sumit.semwal@ti.com>
 Tejun Heo <htejun@gmail.com>
 Thomas Graf <tgraf@suug.ch>
+Thomas Pedersen <twp@codeaurora.org>
 Tony Luck <tony.luck@intel.com>
 Tsuneo Yoshioka <Tsuneo.Yoshioka@f-secure.com>
 Uwe Kleine-König <ukleinek@informatik.uni-freiburg.de>
Documentation/vm/page_frags (new file, mode 100644) +42 −0

Page fragments
--------------

A page fragment is an arbitrary-length arbitrary-offset area of memory
which resides within a 0 or higher order compound page. Multiple
fragments within that page are individually refcounted, in the page's
reference counter.

The page_frag functions, page_frag_alloc and page_frag_free, provide a
simple allocation framework for page fragments. This is used by the
network stack and network device drivers to provide a backing region of
memory for use as either an sk_buff->head, or to be used in the "frags"
portion of skb_shared_info.

In order to make use of the page fragment APIs a backing page fragment
cache is needed. This provides a central point for the fragment
allocation and allows multiple calls to make use of a cached page. The
advantage to doing this is that multiple calls to get_page can be
avoided, which can be expensive at allocation time. However, due to the
nature of this caching it is required that any calls to the cache be
protected by either a per-cpu limitation, or a per-cpu limitation and
forcing interrupts to be disabled when executing the fragment
allocation.

The network stack uses two separate caches per CPU to handle fragment
allocation. The netdev_alloc_cache is used by callers making use of the
__netdev_alloc_frag and __netdev_alloc_skb calls. The napi_alloc_cache
is used by callers of the __napi_alloc_frag and __napi_alloc_skb calls.
The main difference between these two sets of calls is the context in
which they may be used. The "netdev" prefixed functions are usable in
any context, as these functions will disable interrupts, while the
"napi" prefixed functions are only usable within softirq context.

Many network device drivers use a similar methodology for allocating
page fragments, but the page fragments are cached at the ring or
descriptor level. In order to enable these cases it is necessary to
provide a generic way of tearing down a page cache. For this reason
__page_frag_cache_drain was implemented. It allows for freeing multiple
references from a single page via a single call. The advantage to doing
this is that it allows for cleaning up the multiple references that
were added to a page in order to avoid calling get_page per allocation.

Alexander Duyck, Nov 29, 2016.
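The sketch below (not part of the patch; the cache name and the two
wrapper functions are invented for illustration) shows how a
driver-private per-CPU fragment cache would use this API. It mirrors
the pattern the documentation describes for __netdev_alloc_frag: the
cache itself is not locked, so the caller provides the per-cpu
limitation and disables interrupts around the allocation.

	#include <linux/gfp.h>
	#include <linux/percpu.h>

	static DEFINE_PER_CPU(struct page_frag_cache, example_frag_cache);

	static void *example_alloc_frag(unsigned int fragsz)
	{
		struct page_frag_cache *nc;
		unsigned long flags;
		void *data;

		/* per-cpu cache plus disabled interrupts, as required above */
		local_irq_save(flags);
		nc = this_cpu_ptr(&example_frag_cache);
		data = page_frag_alloc(nc, fragsz, GFP_ATOMIC);
		local_irq_restore(flags);
		return data;
	}

	static void example_free_frag(void *data)
	{
		/* drops one of the fragment references on the backing page */
		page_frag_free(data);
	}

A driver caching fragments at the ring or descriptor level would
instead tear the whole cache down with __page_frag_cache_drain(page,
count), which releases count references on the backing page in a
single call.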
MAINTAINERS +1 −2

@@ -81,7 +81,6 @@ Descriptions of section entries:
 	Q: Patchwork web based patch tracking system site
 	T: SCM tree type and location.
 	   Type is one of: git, hg, quilt, stgit, topgit
-	B: Bug tracking system location.
 	S: Status, one of the following:
 	   Supported:	Someone is actually paid to look after this.
 	   Maintained:	Someone actually looks after it.
@@ -4123,7 +4122,7 @@ F:	drivers/gpu/drm/cirrus/
 RADEON and AMDGPU DRM DRIVERS
 M:	Alex Deucher <alexander.deucher@amd.com>
 M:	Christian König <christian.koenig@amd.com>
-L:	dri-devel@lists.freedesktop.org
+L:	amd-gfx@lists.freedesktop.org
 T:	git git://people.freedesktop.org/~agd5f/linux
 S:	Supported
 F:	drivers/gpu/drm/radeon/
arch/x86/crypto/aesni-intel_glue.c +2 −1

@@ -1020,7 +1020,8 @@ struct {
 	const char *basename;
 	struct simd_skcipher_alg *simd;
 } aesni_simd_skciphers2[] = {
-#if IS_ENABLED(CONFIG_CRYPTO_PCBC)
+#if (defined(MODULE) && IS_ENABLED(CONFIG_CRYPTO_PCBC)) || \
+    IS_BUILTIN(CONFIG_CRYPTO_PCBC)
 	{
 		.algname	= "pcbc(aes)",
 		.drvname	= "pcbc-aes-aesni",
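For context (my reading of the new guard, not text from the patch): the
kconfig helpers come from include/linux/kconfig.h, where
IS_BUILTIN(CONFIG_FOO) is 1 only for CONFIG_FOO=y, IS_MODULE() only for
CONFIG_FOO=m, and IS_ENABLED() for either, while defined(MODULE) is
true only when the file is itself compiled as part of a module. A
sketch of what the condition accepts and rejects:

	/*
	 * Keep the "pcbc(aes)" entry only when the pcbc template is
	 * certain to be available by the time this code registers it:
	 *   - CONFIG_CRYPTO_PCBC=y: template always present, or
	 *   - this code is modular and pcbc is y or m: both live in
	 *     module-load time, so the template can be found on demand.
	 * The one combination excluded is aesni built-in with pcbc
	 * modular, where built-in registration would run before the
	 * pcbc module could possibly be loaded, and the lookup failed.
	 */
	#if (defined(MODULE) && IS_ENABLED(CONFIG_CRYPTO_PCBC)) || \
	    IS_BUILTIN(CONFIG_CRYPTO_PCBC)
		/* ... "pcbc(aes)" table entry ... */
	#endif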
drivers/block/zram/zram_drv.c +11 −8

@@ -25,6 +25,7 @@
 #include <linux/genhd.h>
 #include <linux/highmem.h>
 #include <linux/slab.h>
+#include <linux/backing-dev.h>
 #include <linux/string.h>
 #include <linux/vmalloc.h>
 #include <linux/err.h>
@@ -112,6 +113,14 @@ static inline bool is_partial_io(struct bio_vec *bvec)
 	return bvec->bv_len != PAGE_SIZE;
 }
 
+static void zram_revalidate_disk(struct zram *zram)
+{
+	revalidate_disk(zram->disk);
+	/* revalidate_disk reset the BDI_CAP_STABLE_WRITES so set again */
+	zram->disk->queue->backing_dev_info.capabilities |=
+		BDI_CAP_STABLE_WRITES;
+}
+
 /*
  * Check if request is within bounds and aligned on zram logical blocks.
  */
@@ -1095,15 +1104,9 @@ static ssize_t disksize_store(struct device *dev,
 	zram->comp = comp;
 	zram->disksize = disksize;
 	set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT);
+	zram_revalidate_disk(zram);
 	up_write(&zram->init_lock);
 
-	/*
-	 * Revalidate disk out of the init_lock to avoid lockdep splat.
-	 * It's okay because disk's capacity is protected by init_lock
-	 * so that revalidate_disk always sees up-to-date capacity.
-	 */
-	revalidate_disk(zram->disk);
-
 	return len;
 
 out_destroy_comp:
@@ -1149,7 +1152,7 @@ static ssize_t reset_store(struct device *dev,
 	/* Make sure all the pending I/O are finished */
 	fsync_bdev(bdev);
 	zram_reset_device(zram);
-	revalidate_disk(zram->disk);
+	zram_revalidate_disk(zram);
 	bdput(bdev);
 
 	mutex_lock(&bdev->bd_mutex);
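A note on the flag being preserved (background, not from the patch):
zram wants stable pages because its compression path may read a page
more than once and assumes the contents do not change while the write
is in flight. The capability is honored on the writeback side, roughly
as in mm/page-writeback.c:

	/* paraphrased consumer: writers wait on a page under writeback
	 * whenever the backing device advertises BDI_CAP_STABLE_WRITES */
	void wait_for_stable_page(struct page *page)
	{
		if (bdi_cap_stable_pages_required(inode_to_bdi(page->mapping->host)))
			wait_on_page_writeback(page);
	}

Since revalidate_disk() clears the capability, the new helper
re-applies it immediately after every revalidation instead of leaving
that to each caller.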