Unverified Commit 580253b5 authored by Palmer Dabbelt

Merge patch series "RISC-V: Probe for misaligned access speed"

Evan Green <evan@rivosinc.com> says:

The current setting for the hwprobe bit indicating misaligned access
speed is controlled by a vendor-specific feature probe function. This is
essentially a per-SoC table we have to maintain on behalf of each vendor
going forward. Let's convert that instead to something we detect at
runtime.

At the heart of our probe are two assembly routines: one that does a
series of word-sized accesses (without aligning its input buffer), and
another that does byte accesses. If we can move more bytes in the same
amount of time using misaligned word accesses than using byte accesses,
then we can declare misaligned accesses "fast".

The tradeoff of reducing this maintenance burden is boot time. We spend
4-6 jiffies per core doing this measurement (0-2 on jiffy edge
alignment, and 4 on measurement). The timing loop was based on
raid6_choose_gen(), which uses (16+1)*N jiffies (where N is the number
of algorithms). By taking only the fastest iteration out of all
attempts for use in the comparison, variance between runs is very low.
On my THead C906, it looks like this:

[    0.047563] cpu0: Ratio of byte access time to unaligned word access is 4.34, unaligned accesses are fast

Several others have chimed in with results on slow machines using the
older algorithm, which took all runs into account, including noise like
interrupts. Even with this variation, the results indicate that in all
cases (fast, slow, and emulated) the measured numbers are nowhere near
each other (always several factors apart).

* b4-shazam-merge:
  RISC-V: alternative: Remove feature_probe_func
  RISC-V: Probe for unaligned access speed

Link: https://lore.kernel.org/r/20230818194136.4084400-1-evan@rivosinc.com


Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
parents e0152e74 f2d14bc4
+5 −6
@@ -87,13 +87,12 @@ The following keys are defined:
    emulated via software, either in or below the kernel.  These accesses are
    always extremely slow.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SLOW`: Misaligned accesses are supported
-    in hardware, but are slower than the corresponding aligned accesses
-    sequences.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SLOW`: Misaligned accesses are slower
+    than equivalent byte accesses.  Misaligned accesses may be supported
+    directly in hardware, or trapped and emulated by software.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_FAST`: Misaligned accesses are supported
-    in hardware and are faster than the corresponding aligned accesses
-    sequences.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_FAST`: Misaligned accesses are faster
+    than equivalent byte accesses.
 
   * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNSUPPORTED`: Misaligned accesses are
     not supported at all and will generate a misaligned address fault.
+0 −8
@@ -120,11 +120,3 @@ void thead_errata_patch_func(struct alt_entry *begin, struct alt_entry *end,
 	if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
 		local_flush_icache_all();
 }
-
-void thead_feature_probe_func(unsigned int cpu,
-			      unsigned long archid,
-			      unsigned long impid)
-{
-	if ((archid == 0) && (impid == 0))
-		per_cpu(misaligned_access_speed, cpu) = RISCV_HWPROBE_MISALIGNED_FAST;
-}
+0 −5
@@ -30,7 +30,6 @@
 #define ALT_OLD_PTR(a)			__ALT_PTR(a, old_offset)
 #define ALT_ALT_PTR(a)			__ALT_PTR(a, alt_offset)
 
-void probe_vendor_features(unsigned int cpu);
 void __init apply_boot_alternatives(void);
 void __init apply_early_boot_alternatives(void);
 void apply_module_alternatives(void *start, size_t length);
@@ -53,15 +52,11 @@ void thead_errata_patch_func(struct alt_entry *begin, struct alt_entry *end,
 			     unsigned long archid, unsigned long impid,
 			     unsigned int stage);
 
-void thead_feature_probe_func(unsigned int cpu, unsigned long archid,
-			      unsigned long impid);
-
 void riscv_cpufeature_patch_func(struct alt_entry *begin, struct alt_entry *end,
 				 unsigned int stage);
 
 #else /* CONFIG_RISCV_ALTERNATIVE */
 
-static inline void probe_vendor_features(unsigned int cpu) { }
 static inline void apply_boot_alternatives(void) { }
 static inline void apply_early_boot_alternatives(void) { }
 static inline void apply_module_alternatives(void *start, size_t length) { }
+2 −0
@@ -30,4 +30,6 @@ DECLARE_PER_CPU(long, misaligned_access_speed);
 /* Per-cpu ISA extensions. */
 extern struct riscv_isainfo hart_isa[NR_CPUS];
 
+void check_unaligned_access(int cpu);
+
 #endif
+1 −0
@@ -38,6 +38,7 @@ extra-y += vmlinux.lds
 obj-y	+= head.o
 obj-y	+= soc.o
 obj-$(CONFIG_RISCV_ALTERNATIVE) += alternative.o
+obj-y	+= copy-unaligned.o
 obj-y	+= cpu.o
 obj-y	+= cpufeature.o
 obj-y	+= entry.o