Documentation/virt/kvm/api.rst (+37 −9)

@@ -3736,7 +3736,7 @@ The fields in each entry are defined as follows:

 :Parameters: struct kvm_s390_mem_op (in)
 :Returns: = 0 on success,
           < 0 on generic error (e.g. -EFAULT or -ENOMEM),
-          > 0 if an exception occurred while walking the page tables
+          16 bit program exception code if the access causes such an exception

 Read or write data from/to the VM's memory.
 The KVM_CAP_S390_MEM_OP_EXTENSION capability specifies what functionality is

@@ -3754,6 +3754,8 @@ Parameters are specified via the following structure::

 		struct {
 			__u8 ar;	/* the access register number */
 			__u8 key;	/* access key, ignored if flag unset */
+			__u8 pad1[6];	/* ignored */
+			__u64 old_addr;	/* ignored if flag unset */
 		};
 		__u32 sida_offset; /* offset into the sida */
 		__u8 reserved[32]; /* ignored */

@@ -3781,6 +3783,7 @@ Possible operations are:

   * ``KVM_S390_MEMOP_ABSOLUTE_WRITE``
   * ``KVM_S390_MEMOP_SIDA_READ``
   * ``KVM_S390_MEMOP_SIDA_WRITE``
+  * ``KVM_S390_MEMOP_ABSOLUTE_CMPXCHG``

 Logical read/write:
 ^^^^^^^^^^^^^^^^^^^

@@ -3829,7 +3832,7 @@

 the checks required for storage key protection as one operation (as opposed to
 user space getting the storage keys, performing the checks, and accessing
 memory thereafter, which could lead to a delay between check and access).
 Absolute accesses are permitted for the VM ioctl if KVM_CAP_S390_MEM_OP_EXTENSION
-is > 0.
+has the KVM_S390_MEMOP_EXTENSION_CAP_BASE bit set.
 Currently absolute accesses are not permitted for VCPU ioctls.
 Absolute accesses are permitted for non-protected guests only.

@@ -3837,7 +3840,26 @@ Supported flags:

   * ``KVM_S390_MEMOP_F_CHECK_ONLY``
   * ``KVM_S390_MEMOP_F_SKEY_PROTECTION``

-The semantics of the flags are as for logical accesses.
+The semantics of the flags common with logical accesses are as for logical
+accesses.
+
+Absolute cmpxchg:
+^^^^^^^^^^^^^^^^^
+
+Perform cmpxchg on absolute guest memory. Intended for use with the
+KVM_S390_MEMOP_F_SKEY_PROTECTION flag.
+Instead of doing an unconditional write, the access occurs only if the target
+location contains the value pointed to by "old_addr".
+This is performed as an atomic cmpxchg with the length specified by the "size"
+parameter. "size" must be a power of two up to and including 16.
+If the exchange did not take place because the target value doesn't match the
+old value, the value "old_addr" points to is replaced by the target value.
+User space can tell if an exchange took place by checking if this replacement
+occurred. The cmpxchg op is permitted for the VM ioctl if
+KVM_CAP_S390_MEM_OP_EXTENSION has flag KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG
+set.
+
+Supported flags:
+  * ``KVM_S390_MEMOP_F_SKEY_PROTECTION``

 SIDA read/write:
 ^^^^^^^^^^^^^^^^

@@ -4457,6 +4479,18 @@ not holding a previously reported uncorrected error).

 :Parameters: struct kvm_s390_cmma_log (in, out)
 :Returns: 0 on success, a negative value on error

+Errors:
+
+  ======     =============================================================
+  ENOMEM     not enough memory can be allocated to complete the task
+  ENXIO      if CMMA is not enabled
+  EINVAL     if KVM_S390_CMMA_PEEK is not set but migration mode was not enabled
+  EINVAL     if KVM_S390_CMMA_PEEK is not set but dirty tracking has been
+             disabled (and thus migration mode was automatically disabled)
+  EFAULT     if the userspace address is invalid or if no page table is
+             present for the addresses (e.g. when using hugepages).
+  ======     =============================================================

 This ioctl is used to get the values of the CMMA bits on the s390
 architecture. It is meant to be used in two scenarios:

@@ -4537,12 +4571,6 @@ mask is unused.

 values points to the userspace buffer where the result will be stored.

-This ioctl can fail with -ENOMEM if not enough memory can be allocated to
-complete the task, with -ENXIO if CMMA is not enabled, with -EINVAL if
-KVM_S390_CMMA_PEEK is not set but migration mode was not enabled, with
--EFAULT if the userspace address is invalid or if no page table is present
-for the addresses (e.g. when using hugepages).

 4.108 KVM_S390_SET_CMMA_BITS
 ----------------------------

Documentation/virt/kvm/devices/vm.rst (+4 −0)

@@ -302,6 +302,10 @@

 Allows userspace to start migration mode, needed for PGSTE migration.
 Setting this attribute when migration mode is already active will have
 no effects.

+Dirty tracking must be enabled on all memslots, else -EINVAL is returned. When
+dirty tracking is disabled on any memslot, migration mode is automatically
+stopped.
+
 :Parameters: none
 :Returns: -ENOMEM if there is not enough free memory to start migration mode;
           -EINVAL if the state of the VM is invalid (e.g. no memory defined);

arch/s390/include/asm/asm-extable.h (+4 −0)

@@ -12,6 +12,7 @@

 #define EX_TYPE_UA_STORE	3
 #define EX_TYPE_UA_LOAD_MEM	4
 #define EX_TYPE_UA_LOAD_REG	5
+#define EX_TYPE_UA_LOAD_REGPAIR	6

 #define EX_DATA_REG_ERR_SHIFT	0
 #define EX_DATA_REG_ERR		GENMASK(3, 0)

@@ -85,4 +86,7 @@

 #define EX_TABLE_UA_LOAD_REG(_fault, _target, _regerr, _regzero)		\
 	__EX_TABLE_UA(__ex_table, _fault, _target, EX_TYPE_UA_LOAD_REG, _regerr, _regzero, 0)

+#define EX_TABLE_UA_LOAD_REGPAIR(_fault, _target, _regerr, _regzero)		\
+	__EX_TABLE_UA(__ex_table, _fault, _target, EX_TYPE_UA_LOAD_REGPAIR, _regerr, _regzero, 0)

 #endif /* __ASM_EXTABLE_H */
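The "old_addr" replacement contract of KVM_S390_MEMOP_ABSOLUTE_CMPXCHG, described in the api.rst hunk above, can be modeled in plain C. The function below is a hypothetical host-side model (name and shape invented for illustration, not the real ioctl path): the write happens only when the target matches the old value, and on mismatch the old-value buffer is overwritten so the caller can detect that no exchange occurred.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical model of the KVM_S390_MEMOP_ABSOLUTE_CMPXCHG contract:
 * write *new to guest memory only if it currently holds *old_addr; on
 * mismatch, overwrite *old_addr with the actual target value. Return 0
 * because the op itself executed either way. */
static int memop_cmpxchg(void *guest, void *old_addr, const void *new,
			 size_t size)
{
	if (memcmp(guest, old_addr, size) == 0) {
		memcpy(guest, new, size);	/* values exchanged */
		return 0;
	}
	memcpy(old_addr, guest, size);		/* report actual value */
	return 0;
}
```

User space keeps a copy of the old value and compares it with the buffer after the call: if the buffer changed, the exchange did not take place.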
arch/s390/include/asm/cmpxchg.h (+66 −43)

@@ -88,67 +88,90 @@

__cmpxchg() is reworked to use symbolic operand names. The previous
implementation with positional operands (%0–%5), shared locals
"unsigned long prev, tmp; int shift;" and recomputed input expressions
such as "d" ((old & 0xff) << shift) is removed; each case now carries its
own locals and the masked values are prepared in C before the asm:

	static __always_inline unsigned long __cmpxchg(unsigned long address,
						       unsigned long old,
						       unsigned long new, int size)
	{
		switch (size) {
		case 1: {
			unsigned int prev, shift, mask;

			shift = (3 ^ (address & 3)) << 3;
			address ^= address & 3;
			old = (old & 0xff) << shift;
			new = (new & 0xff) << shift;
			mask = ~(0xff << shift);
			asm volatile(
				"	l	%[prev],%[address]\n"
				"	nr	%[prev],%[mask]\n"
				"	xilf	%[mask],0xffffffff\n"
				"	or	%[new],%[prev]\n"
				"	or	%[prev],%[tmp]\n"
				"0:	lr	%[tmp],%[prev]\n"
				"	cs	%[prev],%[new],%[address]\n"
				"	jnl	1f\n"
				"	xr	%[tmp],%[prev]\n"
				"	xr	%[new],%[tmp]\n"
				"	nr	%[tmp],%[mask]\n"
				"	jz	0b\n"
				"1:"
				: [prev] "=&d" (prev),
				  [address] "+Q" (*(int *)address),
				  [tmp] "+&d" (old),
				  [new] "+&d" (new),
				  [mask] "+&d" (mask)
				:: "memory", "cc");
			return prev >> shift;
		}
		case 2: {
			unsigned int prev, shift, mask;

			shift = (2 ^ (address & 2)) << 3;
			address ^= address & 2;
			old = (old & 0xffff) << shift;
			new = (new & 0xffff) << shift;
			mask = ~(0xffff << shift);
			asm volatile(
				"	l	%[prev],%[address]\n"
				"	nr	%[prev],%[mask]\n"
				"	xilf	%[mask],0xffffffff\n"
				"	or	%[new],%[prev]\n"
				"	or	%[prev],%[tmp]\n"
				"0:	lr	%[tmp],%[prev]\n"
				"	cs	%[prev],%[new],%[address]\n"
				"	jnl	1f\n"
				"	xr	%[tmp],%[prev]\n"
				"	xr	%[new],%[tmp]\n"
				"	nr	%[tmp],%[mask]\n"
				"	jz	0b\n"
				"1:"
				: [prev] "=&d" (prev),
				  [address] "+Q" (*(int *)address),
				  [tmp] "+&d" (old),
				  [new] "+&d" (new),
				  [mask] "+&d" (mask)
				:: "memory", "cc");
			return prev >> shift;
		}
		case 4: {
			unsigned int prev = old;

			asm volatile(
				"	cs	%[prev],%[new],%[address]\n"
				: [prev] "+&d" (prev),
				  [address] "+Q" (*(int *)address)
				: [new] "d" (new)
				: "memory", "cc");
			return prev;
		}
		case 8: {
			unsigned long prev = old;

			asm volatile(
				"	csg	%[prev],%[new],%[address]\n"
				: [prev] "+&d" (prev),
				  [address] "+QS" (*(long *)address)
				: [new] "d" (new)
				: "memory", "cc");
			return prev;
		}
		}
		__cmpxchg_called_with_bad_pointer();
		return old;
	}
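The shift/mask trick in __cmpxchg()'s byte case, building a 1-byte compare-and-swap from a word-sized CAS on the containing aligned word, can be illustrated in portable C with GCC's __atomic builtins. This is a hypothetical little-endian demo (s390 is big-endian, hence its shift of (3 ^ (address & 3)) << 3), not the kernel code:

```c
#include <stdint.h>

/* Emulate a 1-byte CAS via a 4-byte CAS on the enclosing aligned word.
 * Returns the previous byte value; the caller compares it against 'old'
 * to learn whether the exchange took place. Little-endian host assumed. */
static uint8_t cmpxchg1(uint8_t *p, uint8_t old, uint8_t new)
{
	uintptr_t addr = (uintptr_t)p;
	uint32_t *word = (uint32_t *)(addr & ~(uintptr_t)3); /* align down */
	unsigned int shift = (addr & 3) << 3;	/* bit offset of our byte */
	uint32_t mask = ~(0xffu << shift);	/* preserve the other bytes */
	uint32_t cur = __atomic_load_n(word, __ATOMIC_RELAXED);

	for (;;) {
		uint8_t cur_byte = cur >> shift;
		uint32_t desired = (cur & mask) | ((uint32_t)new << shift);

		if (cur_byte != old)
			return cur_byte;	/* mismatch: no write */
		if (__atomic_compare_exchange_n(word, &cur, desired, 0,
						__ATOMIC_SEQ_CST,
						__ATOMIC_SEQ_CST))
			return old;		/* exchanged */
		/* 'cur' was refreshed by the failed CAS: a neighboring
		 * byte changed under us, retry (the kernel's 0b loop) */
	}
}
```

Note the retry condition: a CAS failure caused only by a change in one of the other three bytes must loop, exactly what the `nr %[tmp],%[mask]` / `jz 0b` sequence checks in the kernel version.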
"J" (CMPXCHG_USER_KEY_MAX_LOOPS) : "memory", "cc"); *(unsigned char *)uval = prev >> shift; if (!count) rc = -EAGAIN; return rc; } case 2: { unsigned int prev, shift, mask, _old, _new; unsigned long count; shift = (2 ^ (address & 2)) << 3; address ^= address & 2; _old = ((unsigned int)old & 0xffff) << shift; _new = ((unsigned int)new & 0xffff) << shift; mask = ~(0xffff << shift); asm volatile( " spka 0(%[key])\n" " sacf 256\n" " llill %[count],%[max_loops]\n" "0: l %[prev],%[address]\n" "1: nr %[prev],%[mask]\n" " xilf %[mask],0xffffffff\n" " or %[new],%[prev]\n" " or %[prev],%[tmp]\n" "2: lr %[tmp],%[prev]\n" "3: cs %[prev],%[new],%[address]\n" "4: jnl 5f\n" " xr %[tmp],%[prev]\n" " xr %[new],%[tmp]\n" " nr %[tmp],%[mask]\n" " jnz 5f\n" " brct %[count],2b\n" "5: sacf 768\n" " spka %[default_key]\n" EX_TABLE_UA_LOAD_REG(0b, 5b, %[rc], %[prev]) EX_TABLE_UA_LOAD_REG(1b, 5b, %[rc], %[prev]) EX_TABLE_UA_LOAD_REG(3b, 5b, %[rc], %[prev]) EX_TABLE_UA_LOAD_REG(4b, 5b, %[rc], %[prev]) : [rc] "+&d" (rc), [prev] "=&d" (prev), [address] "+Q" (*(int *)address), [tmp] "+&d" (_old), [new] "+&d" (_new), [mask] "+&d" (mask), [count] "=a" (count) : [key] "%[count]" (key << 4), [default_key] "J" (PAGE_DEFAULT_KEY), [max_loops] "J" (CMPXCHG_USER_KEY_MAX_LOOPS) : "memory", "cc"); *(unsigned short *)uval = prev >> shift; if (!count) rc = -EAGAIN; return rc; } case 4: { unsigned int prev = old; asm volatile( " spka 0(%[key])\n" " sacf 256\n" "0: cs %[prev],%[new],%[address]\n" "1: sacf 768\n" " spka %[default_key]\n" EX_TABLE_UA_LOAD_REG(0b, 1b, %[rc], %[prev]) EX_TABLE_UA_LOAD_REG(1b, 1b, %[rc], %[prev]) : [rc] "+&d" (rc), [prev] "+&d" (prev), [address] "+Q" (*(int *)address) : [new] "d" ((unsigned int)new), [key] "a" (key << 4), [default_key] "J" (PAGE_DEFAULT_KEY) : "memory", "cc"); *(unsigned int *)uval = prev; return rc; } case 8: { unsigned long prev = old; asm volatile( " spka 0(%[key])\n" " sacf 256\n" "0: csg %[prev],%[new],%[address]\n" "1: sacf 768\n" " spka %[default_key]\n" 
EX_TABLE_UA_LOAD_REG(0b, 1b, %[rc], %[prev]) EX_TABLE_UA_LOAD_REG(1b, 1b, %[rc], %[prev]) : [rc] "+&d" (rc), [prev] "+&d" (prev), [address] "+QS" (*(long *)address) : [new] "d" ((unsigned long)new), [key] "a" (key << 4), [default_key] "J" (PAGE_DEFAULT_KEY) : "memory", "cc"); *(unsigned long *)uval = prev; return rc; } case 16: { __uint128_t prev = old; asm volatile( " spka 0(%[key])\n" " sacf 256\n" "0: cdsg %[prev],%[new],%[address]\n" "1: sacf 768\n" " spka %[default_key]\n" EX_TABLE_UA_LOAD_REGPAIR(0b, 1b, %[rc], %[prev]) EX_TABLE_UA_LOAD_REGPAIR(1b, 1b, %[rc], %[prev]) : [rc] "+&d" (rc), [prev] "+&d" (prev), [address] "+QS" (*(__int128_t *)address) : [new] "d" (new), [key] "a" (key << 4), [default_key] "J" (PAGE_DEFAULT_KEY) : "memory", "cc"); *(__uint128_t *)uval = prev; return rc; } } __cmpxchg_user_key_called_with_bad_pointer(); return rc; } /** * cmpxchg_user_key() - cmpxchg with user space target, honoring storage keys * @ptr: User space address of value to compare to @old and exchange with * @new. Must be aligned to sizeof(*@ptr). * @uval: Address where the old value of *@ptr is written to. * @old: Old value. Compared to the content pointed to by @ptr in order to * determine if the exchange occurs. The old value read from *@ptr is * written to *@uval. * @new: New value to place at *@ptr. * @key: Access key to use for checking storage key protection. * * Perform a cmpxchg on a user space target, honoring storage key protection. * @key alone determines how key checking is performed, neither * storage-protection-override nor fetch-protection-override apply. * The caller must compare *@uval and @old to determine if values have been * exchanged. In case of an exception *@uval is set to zero. 
* * Return: 0: cmpxchg executed * -EFAULT: an exception happened when trying to access *@ptr * -EAGAIN: maxed out number of retries (byte and short only) */ #define cmpxchg_user_key(ptr, uval, old, new, key) \ ({ \ __typeof__(ptr) __ptr = (ptr); \ __typeof__(uval) __uval = (uval); \ \ BUILD_BUG_ON(sizeof(*(__ptr)) != sizeof(*(__uval))); \ might_fault(); \ __chk_user_ptr(__ptr); \ __cmpxchg_user_key((unsigned long)(__ptr), (void *)(__uval), \ (old), (new), (key), sizeof(*(__ptr))); \ }) #endif /* __S390_UACCESS_H */ Loading
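A caller of cmpxchg_user_key() learns whether values were exchanged by comparing *@uval with @old, and must handle -EAGAIN for the byte and short cases. The function below is a hypothetical single-threaded C model of that calling convention (no storage keys, no fault handling, invented name), just to show the contract:

```c
#include <errno.h>
#include <stdint.h>

#define MAX_LOOPS 128	/* mirrors CMPXCHG_USER_KEY_MAX_LOOPS */

/* Model of the cmpxchg_user_key() contract for byte-sized ops:
 * returns 0 when the CAS executed (caller compares *uval with 'old'
 * to see whether values were exchanged), -EAGAIN when the retry
 * budget (the kernel's brct loop) is exhausted. */
static int cmpxchg_key_model(uint8_t *ptr, uint8_t *uval,
			     uint8_t old, uint8_t new)
{
	int count = MAX_LOOPS;

	while (count--) {
		uint8_t prev = __atomic_load_n(ptr, __ATOMIC_SEQ_CST);

		*uval = prev;			/* report previous value */
		if (prev != old)
			return 0;		/* executed, not exchanged */
		if (__atomic_compare_exchange_n(ptr, &prev, new, 0,
						__ATOMIC_SEQ_CST,
						__ATOMIC_SEQ_CST))
			return 0;		/* executed, exchanged */
		/* raced with a concurrent writer: retry */
	}
	return -EAGAIN;
}
```

After a 0 return, *uval == old means the exchange took place; anything else means the target held a different value and was left untouched.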