Documentation/admin-guide/kernel-parameters.txt  (+23 −17)

@@ -339,6 +339,29 @@

			This mode requires kvm-amd.avic=1.
			(Default when IOMMU HW support is present.)

	amd_pstate=	[X86]
			disable
			  Do not enable amd_pstate as the default
			  scaling driver for the supported processors.
			passive
			  Use amd_pstate with passive mode as a
			  scaling driver. In this mode autonomous
			  selection is disabled. The driver requests
			  a desired performance level, and the
			  platform tries to match that level if it is
			  satisfied by the guaranteed performance
			  level.
			active
			  Use the amd_pstate_epp driver instance as
			  the scaling driver. The driver provides a
			  hint to the CPPC firmware indicating whether
			  software wants to bias toward performance
			  (0x0) or energy efficiency (0xff). The CPPC
			  power algorithm then calculates the runtime
			  workload and adjusts the real-time core
			  frequency.
			guided
			  Activate guided autonomous mode. The driver
			  requests minimum and maximum performance
			  levels, and the platform autonomously
			  selects a performance level in this range,
			  appropriate to the current workload.

	amijoy.map=	[HW,JOY] Amiga joystick support
			Map of devices attached to JOY0DAT and JOY1DAT
			Format: <a>,<b>

@@ -7059,20 +7082,3 @@

	The stale duplicate amd_pstate= entry that previously sat after the
	xmon documentation ("off  xmon is disabled.") is deleted; the
	parameter is now documented only in its alphabetical position above.
Documentation/admin-guide/pm/amd-pstate.rst  (+24 −7)

@@ -303,13 +303,18 @@

AMD Pstate Driver Operation Modes
=================================

The old two-mode description (CPPC autonomous/active vs. non-autonomous/passive)
is replaced with:

``amd_pstate`` CPPC has 3 operation modes: autonomous (active) mode,
non-autonomous (passive) mode and guided autonomous (guided) mode.
The active/passive/guided mode can be chosen with different kernel
parameters.

- In autonomous mode, the platform ignores the desired performance level
  request and takes into account only the values set in the minimum,
  maximum and energy performance preference registers.
- In non-autonomous mode, the platform gets the desired performance level
  from the OS directly through the Desired Performance Register.
- In guided-autonomous mode, the platform sets the operating performance
  level autonomously according to the current workload, within the limits
  set by the OS through the min and max performance registers.

Active Mode
-----------

@@ -338,6 +343,15 @@

... above the nominal performance level, the processor must provide at
least the nominal performance requested and go higher if current
operating conditions allow.

Guided Mode
-----------

``amd_pstate=guided``

If ``amd_pstate=guided`` is passed on the kernel command line, this mode
is activated. In this mode, the driver requests minimum and maximum
performance levels and the platform autonomously selects a performance
level in this range, appropriate to the current workload.

@@ -358,6 +372,9 @@  (``status`` attribute values)

"passive"
	The driver is functional and in the ``passive mode``
"guided"
	The driver is functional and in the ``guided mode``
"disable"
	The driver is unregistered and not functional now.

drivers/acpi/cppc_acpi.c  (+111 −7)

@@ -1433,6 +1433,102 @@

/**
 * cppc_get_auto_sel_caps - Read autonomous selection register.
 * @cpunum : CPU from which to read register.
 * @perf_caps : struct where autonomous selection register value is updated.
 */
int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps)
{
	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum);
	struct cpc_register_resource *auto_sel_reg;
	u64 auto_sel;

	if (!cpc_desc) {
		pr_debug("No CPC descriptor for CPU:%d\n", cpunum);
		return -ENODEV;
	}

	auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];

	if (!CPC_SUPPORTED(auto_sel_reg))
		pr_warn_once("Autonomous mode is not supported!\n");

	if (CPC_IN_PCC(auto_sel_reg)) {
		int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpunum);
		struct cppc_pcc_data *pcc_ss_data = NULL;
		int ret = 0;

		if (pcc_ss_id < 0)
			return -ENODEV;

		pcc_ss_data = pcc_data[pcc_ss_id];

		down_write(&pcc_ss_data->pcc_lock);

		if (send_pcc_cmd(pcc_ss_id, CMD_READ) >= 0) {
			cpc_read(cpunum, auto_sel_reg, &auto_sel);
			perf_caps->auto_sel = (bool)auto_sel;
		} else {
			ret = -EIO;
		}

		up_write(&pcc_ss_data->pcc_lock);

		return ret;
	}

	return 0;
}
EXPORT_SYMBOL_GPL(cppc_get_auto_sel_caps);

/**
 * cppc_set_auto_sel - Write autonomous selection register.
 * @cpu : CPU to which to write register.
 * @enable : the desired value of the autonomous selection register.
 */
int cppc_set_auto_sel(int cpu, bool enable)
{
	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
	struct cpc_register_resource *auto_sel_reg;
	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
	struct cppc_pcc_data *pcc_ss_data = NULL;
	int ret = -EINVAL;

	if (!cpc_desc) {
		pr_debug("No CPC descriptor for CPU:%d\n", cpu);
		return -ENODEV;
	}

	auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];

	if (CPC_IN_PCC(auto_sel_reg)) {
		if (pcc_ss_id < 0) {
			pr_debug("Invalid pcc_ss_id\n");
			return -ENODEV;
		}

		if (CPC_SUPPORTED(auto_sel_reg)) {
			ret = cpc_write(cpu, auto_sel_reg, enable);
			if (ret)
				return ret;
		}

		pcc_ss_data = pcc_data[pcc_ss_id];

		down_write(&pcc_ss_data->pcc_lock);
		/* after writing CPC, transfer the ownership of PCC to platform */
		ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE);
		up_write(&pcc_ss_data->pcc_lock);
	} else {
		ret = -ENOTSUPP;
		pr_debug("_CPC in PCC is not supported\n");
	}

	return ret;
}
EXPORT_SYMBOL_GPL(cppc_set_auto_sel);

/**
 * cppc_set_enable - Set to enable CPPC on the processor by writing the
 * Continuous Performance Control package EnableRegister field.
@@ -1488,7 +1584,7 @@  (cppc_set_perf)

int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
{
	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
	struct cpc_register_resource *desired_reg, *min_perf_reg, *max_perf_reg;
	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
	struct cppc_pcc_data *pcc_ss_data = NULL;
	int ret = 0;

@@ -1499,6 +1595,8 @@

	desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
	min_perf_reg = &cpc_desc->cpc_regs[MIN_PERF];
	max_perf_reg = &cpc_desc->cpc_regs[MAX_PERF];

	/*
	 * This is Phase-I where we want to write to CPC registers
	 * ...
	 * Since read_lock can be acquired by multiple CPUs simultaneously we
	 * achieve that goal here
	 */
	if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) ||
	    CPC_IN_PCC(max_perf_reg)) {
		if (pcc_ss_id < 0) {
			pr_debug("Invalid pcc_ss_id\n");
			return -ENODEV;
		}

@@ -1530,13 +1628,19 @@

		cpc_desc->write_cmd_status = 0;
	}

	cpc_write(cpu, desired_reg, perf_ctrls->desired_perf);

-	/*
-	 * Skip writing MIN/MAX until Linux knows how to come up with
-	 * useful values.
-	 */
+	/*
+	 * Only write if min_perf and max_perf are not zero. Some drivers
+	 * pass a zero value for min and max perf, but they don't mean to
+	 * set those registers to zero - they just don't want to write to
+	 * them at all.
+	 */
	if (perf_ctrls->min_perf)
		cpc_write(cpu, min_perf_reg, perf_ctrls->min_perf);
	if (perf_ctrls->max_perf)
		cpc_write(cpu, max_perf_reg, perf_ctrls->max_perf);

	if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) ||
	    CPC_IN_PCC(max_perf_reg))
		up_read(&pcc_ss_data->pcc_lock);	/* END Phase-I */

	/*
	 * This is Phase-II where we transfer the ownership of PCC to Platform
	 */

@@ -1584,7 +1688,7 @@

	/*
	 * ... case during a CMD_READ and if there are pending writes it
	 * delivers the write command before servicing the read command
	 */
	if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) ||
	    CPC_IN_PCC(max_perf_reg)) {
		if (down_write_trylock(&pcc_ss_data->pcc_lock)) {	/* BEGIN Phase-II */
			/* Update only if there are pending write commands */
			if (pcc_ss_data->pending_pcc_write_cmd)

drivers/cpufreq/Kconfig.arm  (+1 −1)

@@ -95,7 +95,7 @@ config ARM_BRCMSTB_AVS_CPUFREQ

	help
	  Some Broadcom STB SoCs use a co-processor running proprietary
	  firmware ("AVS") to handle voltage and frequency scaling. This
	  driver provides
-	  a standard CPUfreq interface to to the firmware.
+	  a standard CPUfreq interface to the firmware.

	  Say Y, if you have a Broadcom SoC with AVS support for DFS or DVFS.
drivers/cpufreq/amd-pstate.c  (+129 −46)

@@ -106,6 +106,8 @@

static unsigned int epp_values[] = {
	...
	[EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE,
};

typedef int (*cppc_mode_transition_fn)(int);

static inline int get_mode_idx_from_str(const char *str, size_t size)
{
	int i;

@@ -308,9 +310,24 @@  (cppc_init_perf)

	WRITE_ONCE(cpudata->lowest_nonlinear_perf,
		   cppc_perf.lowest_nonlinear_perf);
	WRITE_ONCE(cpudata->lowest_perf, cppc_perf.lowest_perf);

	if (cppc_state == AMD_PSTATE_ACTIVE)
		return 0;

	ret = cppc_get_auto_sel_caps(cpudata->cpu, &cppc_perf);
	if (ret) {
		pr_warn("failed to get auto_sel, ret: %d\n", ret);
		return 0;
	}

	ret = cppc_set_auto_sel(cpudata->cpu,
				(cppc_state == AMD_PSTATE_PASSIVE) ? 0 : 1);
	if (ret)
		pr_warn("failed to set auto_sel, ret: %d\n", ret);

	return ret;
}

DEFINE_STATIC_CALL(amd_pstate_init_perf, pstate_init_perf);

@@ -385,12 +402,18 @@

static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
			      u32 des_perf, u32 max_perf, bool fast_switch,
			      int gov_flags)
{
	u64 prev = READ_ONCE(cpudata->cppc_req_cached);
	u64 value = prev;

	des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);

	if ((cppc_state == AMD_PSTATE_GUIDED) &&
	    (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) {
		min_perf = des_perf;
		des_perf = 0;
	}

	value &= ~AMD_CPPC_MIN_PERF(~0L);
	value |= AMD_CPPC_MIN_PERF(min_perf);

@@ -445,7 +468,7 @@  (amd_pstate_target)

	cpufreq_freq_transition_begin(policy, &freqs);
	amd_pstate_update(cpudata, min_perf, des_perf,
			  max_perf, false, policy->governor->flags);
	cpufreq_freq_transition_end(policy, &freqs, false);

@@ -479,7 +502,8 @@  (amd_pstate_adjust_perf)

	if (max_perf < min_perf)
		max_perf = min_perf;

	amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true,
			  policy->governor->flags);
	cpufreq_cpu_put(policy);
}

@@ -816,63 +840,122 @@

The old open-coded switch in amd_pstate_update_status() (one case per
target mode, each unregistering and re-registering drivers inline) is
replaced by helper functions and a table-driven state machine:

static void amd_pstate_driver_cleanup(void)
{
	amd_pstate_enable(false);
	cppc_state = AMD_PSTATE_DISABLE;
	current_pstate_driver = NULL;
}

static int amd_pstate_register_driver(int mode)
{
	int ret;

	if (mode == AMD_PSTATE_PASSIVE || mode == AMD_PSTATE_GUIDED)
		current_pstate_driver = &amd_pstate_driver;
	else if (mode == AMD_PSTATE_ACTIVE)
		current_pstate_driver = &amd_pstate_epp_driver;
	else
		return -EINVAL;

	cppc_state = mode;
	ret = cpufreq_register_driver(current_pstate_driver);
	if (ret) {
		amd_pstate_driver_cleanup();
		return ret;
	}

	return 0;
}

static int amd_pstate_unregister_driver(int dummy)
{
	cpufreq_unregister_driver(current_pstate_driver);
	amd_pstate_driver_cleanup();
	return 0;
}

static int amd_pstate_change_mode_without_dvr_change(int mode)
{
	int cpu = 0;

	cppc_state = mode;

	if (boot_cpu_has(X86_FEATURE_CPPC) || cppc_state == AMD_PSTATE_ACTIVE)
		return 0;

	for_each_present_cpu(cpu) {
		cppc_set_auto_sel(cpu,
				  (cppc_state == AMD_PSTATE_PASSIVE) ? 0 : 1);
	}

	return 0;
}

static int amd_pstate_change_driver_mode(int mode)
{
	int ret;

	ret = amd_pstate_unregister_driver(0);
	if (ret)
		return ret;

	ret = amd_pstate_register_driver(mode);
	if (ret)
		return ret;

	return 0;
}

static cppc_mode_transition_fn mode_state_machine[AMD_PSTATE_MAX][AMD_PSTATE_MAX] = {
	[AMD_PSTATE_DISABLE] = {
		[AMD_PSTATE_DISABLE] = NULL,
		[AMD_PSTATE_PASSIVE] = amd_pstate_register_driver,
		[AMD_PSTATE_ACTIVE]  = amd_pstate_register_driver,
		[AMD_PSTATE_GUIDED]  = amd_pstate_register_driver,
	},
	[AMD_PSTATE_PASSIVE] = {
		[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
		[AMD_PSTATE_PASSIVE] = NULL,
		[AMD_PSTATE_ACTIVE]  = amd_pstate_change_driver_mode,
		[AMD_PSTATE_GUIDED]  = amd_pstate_change_mode_without_dvr_change,
	},
	[AMD_PSTATE_ACTIVE] = {
		[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
		[AMD_PSTATE_PASSIVE] = amd_pstate_change_driver_mode,
		[AMD_PSTATE_ACTIVE]  = NULL,
		[AMD_PSTATE_GUIDED]  = amd_pstate_change_driver_mode,
	},
	[AMD_PSTATE_GUIDED] = {
		[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
		[AMD_PSTATE_PASSIVE] = amd_pstate_change_mode_without_dvr_change,
		[AMD_PSTATE_ACTIVE]  = amd_pstate_change_driver_mode,
		[AMD_PSTATE_GUIDED]  = NULL,
	},
};

static ssize_t amd_pstate_show_status(char *buf)
{
	if (!current_pstate_driver)
		return sysfs_emit(buf, "disable\n");

	return sysfs_emit(buf, "%s\n", amd_pstate_mode_string[cppc_state]);
}

static int amd_pstate_update_status(const char *buf, size_t size)
{
	int mode_idx;

	if (size > strlen("passive") || size < strlen("active"))
		return -EINVAL;

	mode_idx = get_mode_idx_from_str(buf, size);

	if (mode_idx < 0 || mode_idx >= AMD_PSTATE_MAX)
		return -EINVAL;

	if (mode_state_machine[cppc_state][mode_idx])
		return mode_state_machine[cppc_state][mode_idx](mode_idx);

	return 0;
}

static ssize_t show_status(struct kobject *kobj, ...

@@ -1277,7 +1360,7 @@  (amd_pstate_init)

	/* capability check */
	if (boot_cpu_has(X86_FEATURE_CPPC)) {
		pr_debug("AMD CPPC MSR based functionality is supported\n");
-		if (cppc_state == AMD_PSTATE_PASSIVE)
+		if (cppc_state != AMD_PSTATE_ACTIVE)
			current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
	} else {
		pr_debug("AMD CPPC shared memory based functionality is supported\n");

@@ -1339,7 +1422,7 @@  (amd_pstate_param)

	if (cppc_state == AMD_PSTATE_ACTIVE)
		current_pstate_driver = &amd_pstate_epp_driver;

-	if (cppc_state == AMD_PSTATE_PASSIVE)
+	if (cppc_state == AMD_PSTATE_PASSIVE || cppc_state == AMD_PSTATE_GUIDED)
		current_pstate_driver = &amd_pstate_driver;

	return 0;