Hi all:
The core frequency is subject to process variation in semiconductors. Not all cores are able to reach the maximum frequency within the infrastructure limits. Consequently, AMD has redefined the concept of the maximum frequency of a part: only a fraction of cores can reach the maximum frequency. To find the best process scheduling policy for a given scenario, the OS needs to know the core ordering, which the platform reports through the highest performance capability register of the CPPC interface.
Earlier implementations of amd-pstate preferred core only supported a static core ranking and targeted performance. The driver can now dynamically change the preferred core based on workload and platform conditions, accounting for thermals and aging.
The amd-pstate driver utilizes the functions and data structures provided by the ITMT architecture to enable the scheduler to favor scheduling on cores which can achieve a higher frequency with lower voltage. We call this amd-pstate preferred core.
Here sched_set_itmt_core_prio() is called to set the priorities and sched_set_itmt_support() is called to enable the ITMT feature. The amd-pstate driver uses the highest performance value to indicate the priority of a CPU; a higher value means a higher priority.
The amd-pstate driver provides an initial core ordering at boot time. It relies on the CPPC interface to communicate the core ranking to the operating system and scheduler so that the OS schedules processes on the highest-performance cores first. When the amd-pstate driver receives a message that the highest performance has changed, it updates the core ranking.
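As a condensed, illustrative sketch of the flow described above (the actual implementation is in patch 3 of this series; the helpers prefixed with example_ are made up for illustration):

```
/* Illustrative sketch only -- see patch 3 for the real driver code. */
#include <linux/workqueue.h>
#include <asm/topology.h>
#include <acpi/cppc_acpi.h>

/*
 * sched_set_itmt_support() cannot be called under the CPU hotplug locks,
 * so it is deferred to a work item, as the driver does.
 */
static void example_prefcore_workfn(struct work_struct *work)
{
	sched_set_itmt_support();
}
static DECLARE_WORK(example_prefcore_work, example_prefcore_workfn);

static void example_set_core_priority(int cpu)
{
	u64 highest_perf;

	if (cppc_get_highest_perf(cpu, &highest_perf))
		return;

	/* A higher CPPC highest-perf value means a higher scheduling priority. */
	sched_set_itmt_core_prio((int)highest_perf, cpu);
	schedule_work(&example_prefcore_work);
}
```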
Changes from V8->V9:
- all:
  - pick up Tested-by flag added by Oleksandr.
- cpufreq: amd-pstate:
  - pick up Reviewed-by flag added by Wyes.
  - ignore modification of bug.
  - add an attribute of prefcore_ranking.
  - modify data type conversion from u32 to int.
- Documentation: amd-pstate:
  - pick up Reviewed-by flag added by Wyes.

Changes from V7->V8:
- all:
  - pick up Reviewed-by flags added by Mario and Ray.
- cpufreq: amd-pstate:
  - embed hw_prefcore into the cpudata structure.
  - delete preferred core init from cpu online/offline.

Changes from V6->V7:
- x86:
  - Modify Kconfig about X86_AMD_PSTATE.
- cpufreq: amd-pstate:
  - modify incorrect comments about scheduler_work().
  - convert highest_perf data type.
  - modify preferred core init when cpu init and online.
- acpi: cppc:
  - modify link of CPPC highest performance.
- cpufreq:
  - modify link of CPPC highest performance changed.

Changes from V5->V6:
- cpufreq: amd-pstate:
  - modify the wrong tag order.
  - modify warning about hw_prefcore sysfs attribute.
  - delete duplicate comments.
  - modify the variable name cppc_highest_perf to prefcore_ranking.
  - modify judgment conditions for setting highest_perf.
  - modify sysfs attribute for CPPC highest perf to pr_debug message.
- Documentation: amd-pstate:
  - modify warning: title underline too short.

Changes from V4->V5:
- cpufreq: amd-pstate:
  - modify sysfs attribute for CPPC highest perf.
  - modify warning about comments.
  - rebase on linux-next.
- cpufreq:
  - Modify warning about function declarations.
- Documentation: amd-pstate:
  - align with ``amd-pstate``.

Changes from V3->V4:
- Documentation: amd-pstate:
  - Modify inappropriate descriptions.

Changes from V2->V3:
- x86:
  - Modify Kconfig and description.
- cpufreq: amd-pstate:
  - Add Co-developed-by tag in commit message.
- cpufreq:
  - Modify commit message.
- Documentation: amd-pstate:
  - Modify inappropriate descriptions.

Changes from V1->V2:
- acpi: cppc:
  - Add reference link.
- cpufreq:
  - Modify link error.
- cpufreq: amd-pstate:
  - Init the priorities of all online CPUs.
  - Use a single variable to represent the status of preferred core.
- Documentation:
  - Default enable preferred core.
- Documentation: amd-pstate:
  - Modify inappropriate descriptions.
  - Default enable preferred core.
  - Use a single variable to represent the status of preferred core.
Meng Li (7):
  x86: Drop CPU_SUP_INTEL from SCHED_MC_PRIO for the expansion.
  acpi: cppc: Add get the highest performance cppc control
  cpufreq: amd-pstate: Enable amd-pstate preferred core supporting.
  cpufreq: Add a notification message that the highest perf has changed
  cpufreq: amd-pstate: Update amd-pstate preferred core ranking dynamically
  Documentation: amd-pstate: introduce amd-pstate preferred core
  Documentation: introduce amd-pstate preferred core mode kernel command line options
 .../admin-guide/kernel-parameters.txt       |   5 +
 Documentation/admin-guide/pm/amd-pstate.rst |  59 ++++-
 arch/x86/Kconfig                            |   5 +-
 drivers/acpi/cppc_acpi.c                    |  13 ++
 drivers/acpi/processor_driver.c             |   6 +
 drivers/cpufreq/amd-pstate.c                | 204 ++++++++++++++++--
 drivers/cpufreq/cpufreq.c                   |  13 ++
 include/acpi/cppc_acpi.h                    |   5 +
 include/linux/amd-pstate.h                  |  10 +
 include/linux/cpufreq.h                     |   5 +
 10 files changed, 305 insertions(+), 20 deletions(-)
amd-pstate driver also uses SCHED_MC_PRIO, so decouple the requirement of CPU_SUP_INTEL from the dependencies to allow compilation in kernels without Intel CPU support.
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Meng Li <li.meng@amd.com>
---
 arch/x86/Kconfig | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 66bfabae8814..a2e163acf623 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1054,8 +1054,9 @@ config SCHED_MC

 config SCHED_MC_PRIO
 	bool "CPU core priorities scheduler support"
-	depends on SCHED_MC && CPU_SUP_INTEL
-	select X86_INTEL_PSTATE
+	depends on SCHED_MC
+	select X86_INTEL_PSTATE if CPU_SUP_INTEL
+	select X86_AMD_PSTATE if CPU_SUP_AMD && ACPI
 	select CPU_FREQ
 	default y
 	help
Add support for getting the highest performance to the generic CPPC driver. This enables downstream drivers such as amd-pstate to discover and use these values.
Please refer to the ACPI_Spec for details on continuous performance control of CPPC.
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Wyes Karny <wyes.karny@amd.com>
Acked-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Meng Li <li.meng@amd.com>
Link: https://uefi.org/specs/ACPI/6.5/08_Processor_Configuration_and_Control.html?...
---
 drivers/acpi/cppc_acpi.c | 13 +++++++++++++
 include/acpi/cppc_acpi.h |  5 +++++
 2 files changed, 18 insertions(+)
diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
index 7ff269a78c20..ad388a0e8484 100644
--- a/drivers/acpi/cppc_acpi.c
+++ b/drivers/acpi/cppc_acpi.c
@@ -1154,6 +1154,19 @@ int cppc_get_nominal_perf(int cpunum, u64 *nominal_perf)
 	return cppc_get_perf(cpunum, NOMINAL_PERF, nominal_perf);
 }

+/**
+ * cppc_get_highest_perf - Get the highest performance register value.
+ * @cpunum: CPU from which to get highest performance.
+ * @highest_perf: Return address.
+ *
+ * Return: 0 for success, -EIO otherwise.
+ */
+int cppc_get_highest_perf(int cpunum, u64 *highest_perf)
+{
+	return cppc_get_perf(cpunum, HIGHEST_PERF, highest_perf);
+}
+EXPORT_SYMBOL_GPL(cppc_get_highest_perf);
+
 /**
  * cppc_get_epp_perf - Get the epp register value.
  * @cpunum: CPU from which to get epp preference value.
diff --git a/include/acpi/cppc_acpi.h b/include/acpi/cppc_acpi.h
index 6126c977ece0..c0b69ffe7bdb 100644
--- a/include/acpi/cppc_acpi.h
+++ b/include/acpi/cppc_acpi.h
@@ -139,6 +139,7 @@ struct cppc_cpudata {
 #ifdef CONFIG_ACPI_CPPC_LIB
 extern int cppc_get_desired_perf(int cpunum, u64 *desired_perf);
 extern int cppc_get_nominal_perf(int cpunum, u64 *nominal_perf);
+extern int cppc_get_highest_perf(int cpunum, u64 *highest_perf);
 extern int cppc_get_perf_ctrs(int cpu, struct cppc_perf_fb_ctrs *perf_fb_ctrs);
 extern int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls);
 extern int cppc_set_enable(int cpu, bool enable);
@@ -165,6 +166,10 @@ static inline int cppc_get_nominal_perf(int cpunum, u64 *nominal_perf)
 {
 	return -ENOTSUPP;
 }
+static inline int cppc_get_highest_perf(int cpunum, u64 *highest_perf)
+{
+	return -ENOTSUPP;
+}
 static inline int cppc_get_perf_ctrs(int cpu, struct cppc_perf_fb_ctrs *perf_fb_ctrs)
 {
 	return -ENOTSUPP;
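As a quick illustration (not part of the patch), a caller could consume the new helper like this; the wrapper function below is hypothetical:

```
#include <linux/types.h>
#include <acpi/cppc_acpi.h>

/* Hypothetical consumer of cppc_get_highest_perf(); returns 0 or a negative errno. */
static int example_read_highest_perf(int cpu, u32 *out)
{
	u64 highest_perf;
	int ret;

	ret = cppc_get_highest_perf(cpu, &highest_perf);
	if (ret)
		return ret;

	*out = (u32)highest_perf;
	return 0;
}
```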
The amd-pstate driver utilizes the functions and data structures provided by the ITMT architecture to enable the scheduler to favor scheduling on cores which can achieve a higher frequency with lower voltage. We call this amd-pstate preferred core.
Here sched_set_itmt_core_prio() is called to set the priorities and sched_set_itmt_support() is called to enable the ITMT feature. The amd-pstate driver uses the highest performance value to indicate the priority of a CPU; a higher value means a higher priority.
The initial core rankings are set up by amd-pstate when the system boots.
Add a hw_prefcore variable to the cpudata structure. It records whether the processor and power firmware support the preferred core feature.
Add a new early parameter `disable` to allow the user to disable preferred core.
The amd-pstate driver supports the preferred core feature only when the hardware supports it and the early parameter has not been set to `disable`.
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Wyes Karny <wyes.karny@amd.com>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Co-developed-by: Perry Yuan <Perry.Yuan@amd.com>
Signed-off-by: Perry Yuan <Perry.Yuan@amd.com>
Signed-off-by: Meng Li <li.meng@amd.com>
---
 drivers/cpufreq/amd-pstate.c | 155 +++++++++++++++++++++++++++++++----
 include/linux/amd-pstate.h   |   4 +
 2 files changed, 143 insertions(+), 16 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 9a1e194d5cf8..6aae383990f1 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -37,6 +37,7 @@
 #include <linux/uaccess.h>
 #include <linux/static_call.h>
 #include <linux/amd-pstate.h>
+#include <linux/topology.h>

 #include <acpi/processor.h>
 #include <acpi/cppc_acpi.h>
@@ -49,6 +50,8 @@
 #define AMD_PSTATE_TRANSITION_LATENCY	20000
 #define AMD_PSTATE_TRANSITION_DELAY	1000
+#define AMD_PSTATE_PREFCORE_THRESHOLD	166
+#define AMD_PSTATE_MAX_CPPC_PERF	255

 /*
  * TODO: We need more time to fine tune processors with shared memory solution
@@ -64,6 +67,7 @@ static struct cpufreq_driver amd_pstate_driver;
 static struct cpufreq_driver amd_pstate_epp_driver;
 static int cppc_state = AMD_PSTATE_UNDEFINED;
 static bool cppc_enabled;
+static bool amd_pstate_prefcore = true;

 /*
  * AMD Energy Preference Performance (EPP)
@@ -290,23 +294,21 @@ static inline int amd_pstate_enable(bool enable)
 static int pstate_init_perf(struct amd_cpudata *cpudata)
 {
 	u64 cap1;
-	u32 highest_perf;

 	int ret = rdmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &cap1);
 	if (ret)
 		return ret;

-	/*
-	 * TODO: Introduce AMD specific power feature.
-	 *
-	 * CPPC entry doesn't indicate the highest performance in some ASICs.
+	/* For platforms that do not support the preferred core feature, the
+	 * highest_pef may be configured with 166 or 255, to avoid max frequency
+	 * calculated wrongly. we take the AMD_CPPC_HIGHEST_PERF(cap1) value as
+	 * the default max perf.
 	 */
-	highest_perf = amd_get_highest_perf();
-	if (highest_perf > AMD_CPPC_HIGHEST_PERF(cap1))
-		highest_perf = AMD_CPPC_HIGHEST_PERF(cap1);
-
-	WRITE_ONCE(cpudata->highest_perf, highest_perf);
+	if (cpudata->hw_prefcore)
+		WRITE_ONCE(cpudata->highest_perf, AMD_PSTATE_PREFCORE_THRESHOLD);
+	else
+		WRITE_ONCE(cpudata->highest_perf, AMD_CPPC_HIGHEST_PERF(cap1));

 	WRITE_ONCE(cpudata->nominal_perf, AMD_CPPC_NOMINAL_PERF(cap1));
 	WRITE_ONCE(cpudata->lowest_nonlinear_perf, AMD_CPPC_LOWNONLIN_PERF(cap1));
@@ -318,17 +320,15 @@ static int pstate_init_perf(struct amd_cpudata *cpudata)
 static int cppc_init_perf(struct amd_cpudata *cpudata)
 {
 	struct cppc_perf_caps cppc_perf;
-	u32 highest_perf;

 	int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
 	if (ret)
 		return ret;

-	highest_perf = amd_get_highest_perf();
-	if (highest_perf > cppc_perf.highest_perf)
-		highest_perf = cppc_perf.highest_perf;
-
-	WRITE_ONCE(cpudata->highest_perf, highest_perf);
+	if (cpudata->hw_prefcore)
+		WRITE_ONCE(cpudata->highest_perf, AMD_PSTATE_PREFCORE_THRESHOLD);
+	else
+		WRITE_ONCE(cpudata->highest_perf, cppc_perf.highest_perf);

 	WRITE_ONCE(cpudata->nominal_perf, cppc_perf.nominal_perf);
 	WRITE_ONCE(cpudata->lowest_nonlinear_perf,
@@ -676,6 +676,93 @@ static void amd_perf_ctl_reset(unsigned int cpu)
 	wrmsrl_on_cpu(cpu, MSR_AMD_PERF_CTL, 0);
 }

+/*
+ * Set amd-pstate preferred core enable can't be done directly from cpufreq callbacks
+ * due to locking, so queue the work for later.
+ */
+static void amd_pstste_sched_prefcore_workfn(struct work_struct *work)
+{
+	sched_set_itmt_support();
+}
+static DECLARE_WORK(sched_prefcore_work, amd_pstste_sched_prefcore_workfn);
+
+/*
+ * Get the highest performance register value.
+ * @cpu: CPU from which to get highest performance.
+ * @highest_perf: Return address.
+ *
+ * Return: 0 for success, -EIO otherwise.
+ */
+static int amd_pstate_get_highest_perf(int cpu, u32 *highest_perf)
+{
+	int ret;
+
+	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+		u64 cap1;
+
+		ret = rdmsrl_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1);
+		if (ret)
+			return ret;
+		WRITE_ONCE(*highest_perf, AMD_CPPC_HIGHEST_PERF(cap1));
+	} else {
+		u64 cppc_highest_perf;
+
+		ret = cppc_get_highest_perf(cpu, &cppc_highest_perf);
+		WRITE_ONCE(*highest_perf, cppc_highest_perf);
+	}
+
+	return (ret);
+}
+
+static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata)
+{
+	int ret, prio;
+	u32 highest_perf;
+	static u32 max_highest_perf = 0, min_highest_perf = U32_MAX;
+
+	ret = amd_pstate_get_highest_perf(cpudata->cpu, &highest_perf);
+	if (ret)
+		return;
+
+	cpudata->hw_prefcore = true;
+	/* check if CPPC preferred core feature is enabled*/
+	if (highest_perf == AMD_PSTATE_MAX_CPPC_PERF) {
+		pr_debug("AMD CPPC preferred core is unsupported!\n");
+		cpudata->hw_prefcore = false;
+		return;
+	}
+
+	if (!amd_pstate_prefcore)
+		return;
+
+	/* The maximum value of highest perf is 255 */
+	prio = (int)(highest_perf & 0xff);
+	/*
+	 * The priorities can be set regardless of whether or not
+	 * sched_set_itmt_support(true) has been called and it is valid to
+	 * update them at any time after it has been called.
+	 */
+	sched_set_itmt_core_prio(prio, cpudata->cpu);
+
+	if (max_highest_perf <= min_highest_perf) {
+		if (highest_perf > max_highest_perf)
+			max_highest_perf = highest_perf;
+
+		if (highest_perf < min_highest_perf)
+			min_highest_perf = highest_perf;
+
+		if (max_highest_perf > min_highest_perf) {
+			/*
+			 * This code can be run during CPU online under the
+			 * CPU hotplug locks, so sched_set_itmt_support()
+			 * cannot be called from here. Queue up a work item
+			 * to invoke it.
+			 */
+			schedule_work(&sched_prefcore_work);
+		}
+	}
+}
+
 static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
 {
 	int min_freq, max_freq, nominal_freq, lowest_nonlinear_freq, ret;
@@ -697,6 +784,8 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)

 	cpudata->cpu = policy->cpu;

+	amd_pstate_init_prefcore(cpudata);
+
 	ret = amd_pstate_init_perf(cpudata);
 	if (ret)
 		goto free_cpudata1;
@@ -845,6 +934,17 @@ static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy,
 	return sysfs_emit(buf, "%u\n", perf);
 }

+static ssize_t show_amd_pstate_hw_prefcore(struct cpufreq_policy *policy,
+					   char *buf)
+{
+	bool hw_prefcore;
+	struct amd_cpudata *cpudata = policy->driver_data;
+
+	hw_prefcore = READ_ONCE(cpudata->hw_prefcore);
+
+	return sysfs_emit(buf, "%s\n", hw_prefcore ? "supported" : "unsupported");
+}
+
 static ssize_t show_energy_performance_available_preferences(
 				struct cpufreq_policy *policy, char *buf)
 {
@@ -1037,18 +1137,27 @@ static ssize_t status_store(struct device *a, struct device_attribute *b,
 	return ret < 0 ? ret : count;
 }

+static ssize_t prefcore_show(struct device *dev,
+			     struct device_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%s\n", amd_pstate_prefcore ? "enabled" : "disabled");
+}
+
 cpufreq_freq_attr_ro(amd_pstate_max_freq);
 cpufreq_freq_attr_ro(amd_pstate_lowest_nonlinear_freq);

 cpufreq_freq_attr_ro(amd_pstate_highest_perf);
+cpufreq_freq_attr_ro(amd_pstate_hw_prefcore);
 cpufreq_freq_attr_rw(energy_performance_preference);
 cpufreq_freq_attr_ro(energy_performance_available_preferences);
 static DEVICE_ATTR_RW(status);
+static DEVICE_ATTR_RO(prefcore);

 static struct freq_attr *amd_pstate_attr[] = {
 	&amd_pstate_max_freq,
 	&amd_pstate_lowest_nonlinear_freq,
 	&amd_pstate_highest_perf,
+	&amd_pstate_hw_prefcore,
 	NULL,
 };

@@ -1056,6 +1165,7 @@ static struct freq_attr *amd_pstate_epp_attr[] = {
 	&amd_pstate_max_freq,
 	&amd_pstate_lowest_nonlinear_freq,
 	&amd_pstate_highest_perf,
+	&amd_pstate_hw_prefcore,
 	&energy_performance_preference,
 	&energy_performance_available_preferences,
 	NULL,
@@ -1063,6 +1173,7 @@ static struct freq_attr *amd_pstate_epp_attr[] = {

 static struct attribute *pstate_global_attributes[] = {
 	&dev_attr_status.attr,
+	&dev_attr_prefcore.attr,
 	NULL
 };

@@ -1114,6 +1225,8 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
 	cpudata->cpu = policy->cpu;
 	cpudata->epp_policy = 0;

+	amd_pstate_init_prefcore(cpudata);
+
 	ret = amd_pstate_init_perf(cpudata);
 	if (ret)
 		goto free_cpudata1;
@@ -1527,7 +1640,17 @@ static int __init amd_pstate_param(char *str)

 	return amd_pstate_set_driver(mode_idx);
 }
+
+static int __init amd_prefcore_param(char *str)
+{
+	if (!strcmp(str, "disable"))
+		amd_pstate_prefcore = false;
+
+	return 0;
+}
+
 early_param("amd_pstate", amd_pstate_param);
+early_param("amd_prefcore", amd_prefcore_param);

 MODULE_AUTHOR("Huang Rui <ray.huang@amd.com>");
 MODULE_DESCRIPTION("AMD Processor P-state Frequency Driver");
diff --git a/include/linux/amd-pstate.h b/include/linux/amd-pstate.h
index 446394f84606..87e140e9e6db 100644
--- a/include/linux/amd-pstate.h
+++ b/include/linux/amd-pstate.h
@@ -52,6 +52,9 @@ struct amd_aperf_mperf {
  * @prev: Last Aperf/Mperf/tsc count value read from register
  * @freq: current cpu frequency value
  * @boost_supported: check whether the Processor or SBIOS supports boost mode
+ * @hw_prefcore: check whether HW supports preferred core featue.
+ *		  Only when hw_prefcore and early prefcore param are true,
+ *		  AMD P-State driver supports preferred core featue.
  * @epp_policy: Last saved policy used to set energy-performance preference
  * @epp_cached: Cached CPPC energy-performance preference value
  * @policy: Cpufreq policy value
@@ -81,6 +84,7 @@ struct amd_cpudata {

 	u64	freq;
 	bool	boost_supported;
+	bool	hw_prefcore;

 	/* EPP feature related attributes*/
 	s16	epp_policy;
On Fri, Oct 13, 2023 at 11:31:14AM +0800, Meng Li wrote:
+#define AMD_PSTATE_PREFCORE_THRESHOLD 166 +#define AMD_PSTATE_MAX_CPPC_PERF 255
+static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata) +{
- int ret, prio;
- u32 highest_perf;
- static u32 max_highest_perf = 0, min_highest_perf = U32_MAX;
What serializes these things?
Also, *why* are you using u32 here, what's wrong with something like:
int max_hp = INT_MIN, min_hp = INT_MAX;
- ret = amd_pstate_get_highest_perf(cpudata->cpu, &highest_perf);
- if (ret)
return;
- cpudata->hw_prefcore = true;
- /* check if CPPC preferred core feature is enabled*/
- if (highest_perf == AMD_PSTATE_MAX_CPPC_PERF) {
Which effectively means <255 (also, seems to suggest MAX_CPPC_PERF might not be the best name, hmm?)
Should you not write '>= 255' then? Just in case something 'funny' happens?
pr_debug("AMD CPPC preferred core is unsupported!\n");
cpudata->hw_prefcore = false;
return;
- }
- if (!amd_pstate_prefcore)
return;
- /* The maximum value of highest perf is 255 */
- prio = (int)(highest_perf & 0xff);
If for some weird reason you get 0x1ff or whatever above (dodgy BIOS never happens, right) then this makes sense how?
Perhaps stop sending patches at break-neck speed and think for a little while on how to write this and not be confused?
- /*
* The priorities can be set regardless of whether or not
* sched_set_itmt_support(true) has been called and it is valid to
* update them at any time after it has been called.
*/
- sched_set_itmt_core_prio(prio, cpudata->cpu);
- if (max_highest_perf <= min_highest_perf) {
if (highest_perf > max_highest_perf)
max_highest_perf = highest_perf;
if (highest_perf < min_highest_perf)
min_highest_perf = highest_perf;
if (max_highest_perf > min_highest_perf) {
/*
* This code can be run during CPU online under the
* CPU hotplug locks, so sched_set_itmt_support()
* cannot be called from here. Queue up a work item
* to invoke it.
*/
schedule_work(&sched_prefcore_work);
}
- }
Not a word about what serializes these variables.
+}
Hi Peter:
-----Original Message----- From: Peter Zijlstra peterz@infradead.org Sent: Saturday, October 14, 2023 12:01 AM To: Meng, Li (Jassmine) Li.Meng@amd.com Cc: Rafael J . Wysocki rafael.j.wysocki@intel.com; Huang, Ray Ray.Huang@amd.com; linux-pm@vger.kernel.org; linux- kernel@vger.kernel.org; x86@kernel.org; linux-acpi@vger.kernel.org; Shuah Khan skhan@linuxfoundation.org; linux-kselftest@vger.kernel.org; Fontenot, Nathan Nathan.Fontenot@amd.com; Sharma, Deepak Deepak.Sharma@amd.com; Deucher, Alexander Alexander.Deucher@amd.com; Limonciello, Mario Mario.Limonciello@amd.com; Huang, Shimmer Shimmer.Huang@amd.com; Yuan, Perry Perry.Yuan@amd.com; Du, Xiaojian Xiaojian.Du@amd.com; Viresh Kumar viresh.kumar@linaro.org; Borislav Petkov bp@alien8.de; Oleksandr Natalenko oleksandr@natalenko.name; Karny, Wyes Wyes.Karny@amd.com Subject: Re: [RESEND PATCH V9 3/7] cpufreq: amd-pstate: Enable amd- pstate preferred core supporting.
On Fri, Oct 13, 2023 at 11:31:14AM +0800, Meng Li wrote:
+#define AMD_PSTATE_PREFCORE_THRESHOLD 166 +#define AMD_PSTATE_MAX_CPPC_PERF 255
+static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata) {
int ret, prio;
u32 highest_perf;
static u32 max_highest_perf = 0, min_highest_perf = U32_MAX;
What serializes these things?
Also, *why* are you using u32 here, what's wrong with something like:
int max_hp = INT_MIN, min_hp = INT_MAX;
[Meng, Li (Jassmine)] We use ITMT architecture to utilize preferred core features. Therefore, we need to try to be consistent with Intel's implementation as much as possible. For details, please refer to the intel_pstate_set_itmt_prio function in file intel_pstate.c. (Line 355)
I think using the data type of u32 is consistent with the data structures of cppc_perf_ctrls and amd_cpudata etc.
ret = amd_pstate_get_highest_perf(cpudata->cpu, &highest_perf);
if (ret)
return;
cpudata->hw_prefcore = true;
/* check if CPPC preferred core feature is enabled*/
if (highest_perf == AMD_PSTATE_MAX_CPPC_PERF) {
Which effectively means <255 (also, seems to suggest MAX_CPPC_PERF might not be the best name, hmm?)
Should you not write '>= 255' then? Just in case something 'funny' happens?
[Meng, Li (Jassmine)] OK, I will modify these.
pr_debug("AMD CPPC preferred core is unsupported!\n");
cpudata->hw_prefcore = false;
return;
}
if (!amd_pstate_prefcore)
return;
/* The maximum value of highest perf is 255 */
prio = (int)(highest_perf & 0xff);
If for some weird reason you get 0x1ff or whatever above (dodgy BIOS never happens, right) then this makes sense how?
Perhaps stop sending patches at break-nek speed and think for a little while on how to write this and not be confused?
[Meng, Li (Jassmine)] If I use '>= 255' to check, the issue mentioned will not exist, because the function will return early when highest_perf > 0xff.
/*
* The priorities can be set regardless of whether or not
* sched_set_itmt_support(true) has been called and it is valid to
* update them at any time after it has been called.
*/
sched_set_itmt_core_prio(prio, cpudata->cpu);
if (max_highest_perf <= min_highest_perf) {
if (highest_perf > max_highest_perf)
max_highest_perf = highest_perf;
if (highest_perf < min_highest_perf)
min_highest_perf = highest_perf;
if (max_highest_perf > min_highest_perf) {
/*
* This code can be run during CPU online under the
* CPU hotplug locks, so sched_set_itmt_support()
* cannot be called from here. Queue up a work item
* to invoke it.
*/
schedule_work(&sched_prefcore_work);
}
}
Not a word about what serializes these variables.
+}
On Mon, Oct 16, 2023 at 06:20:53AM +0000, Meng, Li (Jassmine) wrote:
+static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata) {
int ret, prio;
u32 highest_perf;
static u32 max_highest_perf = 0, min_highest_perf = U32_MAX;
What serializes these things?
Also, *why* are you using u32 here, what's wrong with something like:
int max_hp = INT_MIN, min_hp = INT_MAX;
[Meng, Li (Jassmine)] We use ITMT architecture to utilize preferred core features. Therefore, we need to try to be consistent with Intel's implementation as much as possible. For details, please refer to the intel_pstate_set_itmt_prio function in file intel_pstate.c. (Line 355)
I think using the data type of u32 is consistent with the data structures of cppc_perf_ctrls and amd_cpudata etc.
Rafael, should we fix intel_pstate too?
The point is, that sched_asym_prefer(), the final consumer of these values uses int and thus an explicitly signed compare.
Using u32 and U32_MAX anywhere near the setting the priority makes absolutely no sense.
If you were to have the high bit set, things do not behave as expected.
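To make the high-bit concern concrete: a u32 value with bit 31 set becomes negative once interpreted as int, so a signed comparison of the kind sched_asym_prefer() ultimately performs gets inverted. A tiny user-space sketch (not kernel code):

```
#include <stdio.h>

int main(void)
{
	unsigned int hp_a = 0x80000001u;	/* "higher" raw value, top bit set */
	unsigned int hp_b = 10u;

	/* As ints, hp_a is negative, so the comparison flips. */
	printf("%d\n", (int)hp_a > (int)hp_b);	/* prints 0 */
	return 0;
}
```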
Also, same question as to the amd folks; what serializes those static variables?
On 10/16/2023 12:58 PM, Peter Zijlstra wrote:
On Mon, Oct 16, 2023 at 06:20:53AM +0000, Meng, Li (Jassmine) wrote:
+static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata) {
int ret, prio;
u32 highest_perf;
static u32 max_highest_perf = 0, min_highest_perf = U32_MAX;
What serializes these things?
Also, *why* are you using u32 here, what's wrong with something like:
int max_hp = INT_MIN, min_hp = INT_MAX;
[Meng, Li (Jassmine)] We use ITMT architecture to utilize preferred core features. Therefore, we need to try to be consistent with Intel's implementation as much as possible. For details, please refer to the intel_pstate_set_itmt_prio function in file intel_pstate.c. (Line 355)
I think using the data type of u32 is consistent with the data structures of cppc_perf_ctrls and amd_cpudata etc.
Rafael, should we fix intel_pstate too?
Srinivas should be more familiar with this code than I am, so adding him.
The point is, that sched_asym_prefer(), the final consumer of these values uses int and thus an explicitly signed compare.
Using u32 and U32_MAX anywhere near the setting the priority makes absolutely no sense.
If you were to have the high bit set, things do not behave as expected.
Right, but in practice these values are always between 0 and 255 inclusive AFAICS.
It would have been better to use u8 I suppose.
Also, same question as to the amd folks; what serializes those static variables?
That's a good one.
On Mon, 2023-10-16 at 19:27 +0200, Wysocki, Rafael J wrote:
On 10/16/2023 12:58 PM, Peter Zijlstra wrote:
On Mon, Oct 16, 2023 at 06:20:53AM +0000, Meng, Li (Jassmine) wrote:
+static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata) { + int ret, prio; + u32 highest_perf; + static u32 max_highest_perf = 0, min_highest_perf = U32_MAX;
What serializes these things?
Also, *why* are you using u32 here, what's wrong with something like:
int max_hp = INT_MIN, min_hp = INT_MAX;
[Meng, Li (Jassmine)] We use ITMT architecture to utilize preferred core features. Therefore, we need to try to be consistent with Intel's implementation as much as possible. For details, please refer to the intel_pstate_set_itmt_prio function in file intel_pstate.c. (Line 355)
I think using the data type of u32 is consistent with the data structures of cppc_perf_ctrls and amd_cpudata etc.
Rafael, should we fix intel_pstate too?
Srinivas should be more familiar with this code than I am, so adding him.
If we change "static u32 max_highest_perf = 0, min_highest_perf = U32_MAX;" to "static int max_highest_perf = INT_MIN, min_highest_perf = INT_MAX;"
then in intel_pstate we will end up comparing signed against unsigned values, as cppc_perf.highest_perf is u32.
In reality it is fine to change this to "int", as we will never reach the u32 maximum as a performance value on any Intel platform.
The point is, that sched_asym_prefer(), the final consumer of these values uses int and thus an explicitly signed compare.
Using u32 and U32_MAX anywhere near the setting the priority makes absolutely no sense.
If you were to have the high bit set, things do not behave as expected.
Right, but in practice these values are always between 0 and 255 inclusive AFAICS.
It would have been better to use u8 I suppose.
Should be fine as over clocked parts will set to max 0xff.
Also, same question as to the amd folks; what serializes those static variables?
That's a good one.
This function, which checks the static variables, is called from the cpufreq ->init callback, which in turn is called from a function that is passed as the startup() function pointer to cpuhp_setup_state_nocalls_cpuslocked().
I see that the startup() callbacks are called under the cpuhp_state_mutex mutex for each present CPU. So if some teardown happens, that is also protected by the same mutex. The assumption here is that cpuhp_invoke_callback() in the hotplug state machine is not called in parallel on two CPUs. But I see activity on parallel bringup, so this is questionable now.
Thanks, Srinivas
On Mon, Oct 16, 2023 at 11:50:34AM -0700, srinivas pandruvada wrote:
I'll respond to the rest tomorrow, it's far too late.
Also, same question as to the amd folks; what serializes those static variables?
That's a good one.
This function which is checking static variables is called from cpufreq ->init callback. Which in turn is called from a function which is passed as startup() function pointer to cpuhp_setup_state_nocalls_cpuslocked().
I see that startup() callbacks are called under a mutex cpuhp_state_mutex for each present CPUs. So if some tear down happen, that is also protected by the same mutex. The assumption is here is that cpuhp_invoke_callback() in hotplug state machine is not called in parallel on two CPUs by the hotplug state machine. But I see activity on parallel bringup, so this is questionable now.
Parallel bringup should still serialise this. It mostly only does the hardware bringup in parallel.
Having a pointer back to the cpu hotplug lock would make it easier to untangle this code though.
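One way to make that serialization explicit, rather than relying on the hotplug state machine, would be an ordinary mutex around the static bounds. A rough sketch under that assumption (names are illustrative, not the submitted patch):

```
#include <linux/limits.h>
#include <linux/minmax.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(example_prefcore_lock);
static int example_max_hp = INT_MIN, example_min_hp = INT_MAX;

/* Track the spread of highest-perf values under an explicit lock. */
static void example_note_highest_perf(int highest_perf,
				      struct work_struct *itmt_work)
{
	bool asymmetric;

	mutex_lock(&example_prefcore_lock);
	example_max_hp = max(example_max_hp, highest_perf);
	example_min_hp = min(example_min_hp, highest_perf);
	asymmetric = example_max_hp > example_min_hp;
	mutex_unlock(&example_prefcore_lock);

	/*
	 * sched_set_itmt_support() cannot be called under the CPU hotplug
	 * locks, so it is still deferred to a work item.
	 */
	if (asymmetric)
		schedule_work(itmt_work);
}
```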
Hi Peter:
After our internal discussion, we will make the following modifications. Do you think they are feasible?
1. Add a check for "highest_perf". When it is less than 255, the preferred core feature is enabled and the priority will be set.
2. Delete the static u32 max_highest_perf/min_highest_perf variables, because amd-pstate preferred core does not require special processing for hotplug.
+#define CPPC_MAX_PERF	U8_MAX
+
+static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata)
+{
+	int ret, prio;
+	u32 highest_perf;
+
+	ret = amd_pstate_get_highest_perf(cpudata->cpu, &highest_perf);
+	if (ret)
+		return;
+
+	cpudata->hw_prefcore = true;
+	/* check if CPPC preferred core feature is enabled*/
+	if (highest_perf < CPPC_MAX_PERF)
+		prio = (int)highest_perf;
+	else {
+		pr_debug("AMD CPPC preferred core is unsupported!\n");
+		cpudata->hw_prefcore = false;
+		return;
+	}
+
+	if (!amd_pstate_prefcore)
+		return;
+
+	/*
+	 * The priorities can be set regardless of whether or not
+	 * sched_set_itmt_support(true) has been called and it is valid to
+	 * update them at any time after it has been called.
+	 */
+	sched_set_itmt_core_prio(prio, cpudata->cpu);
+
+	schedule_work(&sched_prefcore_work);
+}
ACPI 6.5 section 8.4.6.1.1.1 specifies that Notify event 0x85 can be emitted to cause the OSPM to re-evaluate the highest performance register. Add support for this event.
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Meng Li <li.meng@amd.com>
Link: https://uefi.org/specs/ACPI/6.5/05_ACPI_Software_Programming_Model.html#proc...
---
 drivers/acpi/processor_driver.c |  6 ++++++
 drivers/cpufreq/cpufreq.c       | 13 +++++++++++++
 include/linux/cpufreq.h         |  5 +++++
 3 files changed, 24 insertions(+)
diff --git a/drivers/acpi/processor_driver.c b/drivers/acpi/processor_driver.c
index 4bd16b3f0781..29b2fb68a35d 100644
--- a/drivers/acpi/processor_driver.c
+++ b/drivers/acpi/processor_driver.c
@@ -27,6 +27,7 @@
 #define ACPI_PROCESSOR_NOTIFY_PERFORMANCE 0x80
 #define ACPI_PROCESSOR_NOTIFY_POWER	0x81
 #define ACPI_PROCESSOR_NOTIFY_THROTTLING	0x82
+#define ACPI_PROCESSOR_NOTIFY_HIGEST_PERF_CHANGED	0x85

 MODULE_AUTHOR("Paul Diefenbaugh");
 MODULE_DESCRIPTION("ACPI Processor Driver");
@@ -83,6 +84,11 @@ static void acpi_processor_notify(acpi_handle handle, u32 event, void *data)
 		acpi_bus_generate_netlink_event(device->pnp.device_class,
 						  dev_name(&device->dev), event, 0);
 		break;
+	case ACPI_PROCESSOR_NOTIFY_HIGEST_PERF_CHANGED:
+		cpufreq_update_highest_perf(pr->id);
+		acpi_bus_generate_netlink_event(device->pnp.device_class,
+						  dev_name(&device->dev), event, 0);
+		break;
 	default:
 		acpi_handle_debug(handle, "Unsupported event [0x%x]\n", event);
 		break;
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 4bc15634d49c..e66b040b0c61 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -2717,6 +2717,19 @@ void cpufreq_update_limits(unsigned int cpu)
 }
 EXPORT_SYMBOL_GPL(cpufreq_update_limits);

+/**
+ * cpufreq_update_highest_perf - Update highest performance for a given CPU.
+ * @cpu: CPU to update the highest performance for.
+ *
+ * Invoke the driver's ->update_highest_perf callback if present
+ */
+void cpufreq_update_highest_perf(unsigned int cpu)
+{
+	if (cpufreq_driver->update_highest_perf)
+		cpufreq_driver->update_highest_perf(cpu);
+}
+EXPORT_SYMBOL_GPL(cpufreq_update_highest_perf);
+
 /*********************************************************************
  *                              BOOST                                *
  *********************************************************************/
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index 1c5ca92a0555..f62257b2a42f 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -235,6 +235,7 @@ int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu);
 void refresh_frequency_limits(struct cpufreq_policy *policy);
 void cpufreq_update_policy(unsigned int cpu);
 void cpufreq_update_limits(unsigned int cpu);
+void cpufreq_update_highest_perf(unsigned int cpu);
 bool have_governor_per_policy(void);
 bool cpufreq_supports_freq_invariance(void);
 struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy);
@@ -263,6 +264,7 @@ static inline bool cpufreq_supports_freq_invariance(void)
 	return false;
 }
 static inline void disable_cpufreq(void) { }
+static inline void cpufreq_update_highest_perf(unsigned int cpu) { }
 #endif

 #ifdef CONFIG_CPU_FREQ_STAT
@@ -380,6 +382,9 @@ struct cpufreq_driver {
 	/* Called to update policy limits on firmware notifications. */
 	void		(*update_limits)(unsigned int cpu);

+	/* Called to update highest performance on firmware notifications. */
+	void		(*update_highest_perf)(unsigned int cpu);
+
 	/* optional */
 	int		(*bios_limit)(int cpu, unsigned int *limit);
Preferred core rankings can be changed dynamically by the platform based on the workload and platform conditions, accounting for thermals and aging. When this occurs, the CPU priorities need to be updated.
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Wyes Karny <wyes.karny@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Meng Li <li.meng@amd.com>
---
 drivers/cpufreq/amd-pstate.c | 49 ++++++++++++++++++++++++++++++++++++
 include/linux/amd-pstate.h   |  6 +++++
 2 files changed, 55 insertions(+)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 6aae383990f1..3b054e3acba1 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -313,6 +313,7 @@ static int pstate_init_perf(struct amd_cpudata *cpudata)
 	WRITE_ONCE(cpudata->nominal_perf, AMD_CPPC_NOMINAL_PERF(cap1));
 	WRITE_ONCE(cpudata->lowest_nonlinear_perf, AMD_CPPC_LOWNONLIN_PERF(cap1));
 	WRITE_ONCE(cpudata->lowest_perf, AMD_CPPC_LOWEST_PERF(cap1));
+	WRITE_ONCE(cpudata->prefcore_ranking, AMD_CPPC_HIGHEST_PERF(cap1));

 	return 0;
 }
@@ -334,6 +335,7 @@ static int cppc_init_perf(struct amd_cpudata *cpudata)
 	WRITE_ONCE(cpudata->lowest_nonlinear_perf,
 		   cppc_perf.lowest_nonlinear_perf);
 	WRITE_ONCE(cpudata->lowest_perf, cppc_perf.lowest_perf);
+	WRITE_ONCE(cpudata->prefcore_ranking, cppc_perf.highest_perf);

 	if (cppc_state == AMD_PSTATE_ACTIVE)
 		return 0;
@@ -763,6 +765,37 @@ static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata)
 	}
 }

+static void amd_pstate_update_highest_perf(unsigned int cpu)
+{
+	struct cpufreq_policy *policy;
+	struct amd_cpudata *cpudata;
+	u32 prev_high = 0, cur_high = 0;
+	int ret;
+
+	if ((!amd_pstate_prefcore) || (!cpudata->hw_prefcore))
+		return;
+
+	ret = amd_pstate_get_highest_perf(cpu, &cur_high);
+	if (ret)
+		return;
+
+	policy = cpufreq_cpu_get(cpu);
+	cpudata = policy->driver_data;
+	prev_high = READ_ONCE(cpudata->prefcore_ranking);
+
+	if (prev_high != cur_high) {
+		int prio;
+
+		WRITE_ONCE(cpudata->prefcore_ranking, cur_high);
+
+		/* The maximum value of highest perf is 255 */
+		prio = (int)(cur_high & 0xff);
+		sched_set_itmt_core_prio(prio, cpu);
+	}
+
+	cpufreq_cpu_put(policy);
+}
+
 static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
 {
 	int min_freq, max_freq, nominal_freq, lowest_nonlinear_freq, ret;
@@ -934,6 +967,17 @@ static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy,
 	return sysfs_emit(buf, "%u\n", perf);
 }

+static ssize_t show_amd_pstate_prefcore_ranking(struct cpufreq_policy *policy,
+						char *buf)
+{
+	u32 perf;
+	struct amd_cpudata *cpudata = policy->driver_data;
+
+	perf = READ_ONCE(cpudata->prefcore_ranking);
+
+	return sysfs_emit(buf, "%u\n", perf);
+}
+
 static ssize_t show_amd_pstate_hw_prefcore(struct cpufreq_policy *policy,
 					   char *buf)
 {
@@ -1147,6 +1191,7 @@ cpufreq_freq_attr_ro(amd_pstate_max_freq);
 cpufreq_freq_attr_ro(amd_pstate_lowest_nonlinear_freq);

 cpufreq_freq_attr_ro(amd_pstate_highest_perf);
+cpufreq_freq_attr_ro(amd_pstate_prefcore_ranking);
 cpufreq_freq_attr_ro(amd_pstate_hw_prefcore);
 cpufreq_freq_attr_rw(energy_performance_preference);
 cpufreq_freq_attr_ro(energy_performance_available_preferences);
@@ -1157,6 +1202,7 @@ static struct freq_attr *amd_pstate_attr[] = {
 	&amd_pstate_max_freq,
 	&amd_pstate_lowest_nonlinear_freq,
 	&amd_pstate_highest_perf,
+	&amd_pstate_prefcore_ranking,
 	&amd_pstate_hw_prefcore,
 	NULL,
 };
@@ -1165,6 +1211,7 @@ static struct freq_attr *amd_pstate_epp_attr[] = {
 	&amd_pstate_max_freq,
 	&amd_pstate_lowest_nonlinear_freq,
 	&amd_pstate_highest_perf,
+	&amd_pstate_prefcore_ranking,
 	&amd_pstate_hw_prefcore,
 	&energy_performance_preference,
 	&energy_performance_available_preferences,
@@ -1505,6 +1552,7 @@ static struct cpufreq_driver amd_pstate_driver = {
 	.suspend	= amd_pstate_cpu_suspend,
 	.resume		= amd_pstate_cpu_resume,
 	.set_boost	= amd_pstate_set_boost,
+	.update_highest_perf	= amd_pstate_update_highest_perf,
 	.name		= "amd-pstate",
 	.attr		= amd_pstate_attr,
 };
@@ -1519,6 +1567,7 @@ static struct cpufreq_driver amd_pstate_epp_driver = {
 	.online		= amd_pstate_epp_cpu_online,
 	.suspend	= amd_pstate_epp_suspend,
 	.resume		= amd_pstate_epp_resume,
+	.update_highest_perf	= amd_pstate_update_highest_perf,
 	.name		= "amd-pstate-epp",
 	.attr		= amd_pstate_epp_attr,
 };
diff --git a/include/linux/amd-pstate.h b/include/linux/amd-pstate.h
index 87e140e9e6db..426822612373 100644
--- a/include/linux/amd-pstate.h
+++ b/include/linux/amd-pstate.h
@@ -39,11 +39,16 @@ struct amd_aperf_mperf {
  * @cppc_req_cached: cached performance request hints
  * @highest_perf: the maximum performance an individual processor may reach,
  *		  assuming ideal conditions
+ *		  For platforms that do not support the preferred core feature, the
+ *		  highest_pef may be configured with 166 or 255, to avoid max frequency
+ *		  calculated wrongly. we take the fixed value as the highest_perf.
  * @nominal_perf: the maximum sustained performance level of the processor,
  *		  assuming ideal operating conditions
  * @lowest_nonlinear_perf: the lowest performance level at which nonlinear power
  *			   savings are achieved
  * @lowest_perf: the absolute lowest performance level of the processor
+ * @prefcore_ranking: the preferred core ranking, the higher value indicates a higher
+ *		  priority.
  * @max_freq: the frequency that mapped to highest_perf
  * @min_freq: the frequency that mapped to lowest_perf
  * @nominal_freq: the frequency that mapped to nominal_perf
@@ -73,6 +78,7 @@ struct amd_cpudata {
 	u32	nominal_perf;
 	u32	lowest_nonlinear_perf;
 	u32	lowest_perf;
+	u32	prefcore_ranking;

 	u32	max_freq;
 	u32	min_freq;
Introduce amd-pstate preferred core.
Check the preferred core state set by the kernel parameter:
$ cat /sys/devices/system/cpu/amd-pstate/prefcore
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Reviewed-by: Wyes Karny <wyes.karny@amd.com>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Meng Li <li.meng@amd.com>
---
 Documentation/admin-guide/pm/amd-pstate.rst | 59 ++++++++++++++++++++-
 1 file changed, 57 insertions(+), 2 deletions(-)
diff --git a/Documentation/admin-guide/pm/amd-pstate.rst b/Documentation/admin-guide/pm/amd-pstate.rst
index 1cf40f69278c..0b832ff529db 100644
--- a/Documentation/admin-guide/pm/amd-pstate.rst
+++ b/Documentation/admin-guide/pm/amd-pstate.rst
@@ -300,8 +300,8 @@ platforms. The AMD P-States mechanism is the more performance and energy
 efficiency frequency management method on AMD processors.

-AMD Pstate Driver Operation Modes
-=================================
+``amd-pstate`` Driver Operation Modes
+=====================================

 ``amd_pstate`` CPPC has 3 operation modes: autonomous (active) mode,
 non-autonomous (passive) mode and guided autonomous (guided) mode.
@@ -353,6 +353,48 @@ is activated.  In this mode, driver requests minimum and maximum performance
 level and the platform autonomously selects a performance level in this range
 and appropriate to the current workload.

+``amd-pstate`` Preferred Core
+=============================
+
+The core frequency is subject to process variation in semiconductors.
+Not all cores are able to reach the maximum frequency within the
+infrastructure limits. Consequently, AMD has redefined the concept of
+maximum frequency of a part. This means that a fraction of cores can reach
+the maximum frequency. To find the best process scheduling policy for a given
+scenario, the OS needs to know the core ordering informed by the platform
+through the highest performance capability register of the CPPC interface.
+
+``amd-pstate`` preferred core enables the scheduler to prefer scheduling on
+cores that can achieve a higher frequency with lower voltage. The preferred
+core rankings can dynamically change based on the workload, platform conditions,
+thermals and ageing.
+
+The priority metric will be initialized by the ``amd-pstate`` driver. The ``amd-pstate``
+driver will also determine whether or not ``amd-pstate`` preferred core is
+supported by the platform.
+
+The ``amd-pstate`` driver will provide an initial core ordering when the system boots.
+The platform uses the CPPC interfaces to communicate the core ranking to the
+operating system and scheduler so that the OS chooses the highest-performance
+cores first when scheduling processes. When the ``amd-pstate`` driver receives
+a message that the highest performance has changed, it will update the core
+ranking and set the CPU's priority.
+
+``amd-pstate`` Preferred Core Switch
+====================================
+Kernel Parameters
+-----------------
+
+``amd-pstate`` preferred core has two states: enable and disable.
+Enable/disable states can be chosen by different kernel parameters.
+``amd-pstate`` preferred core is enabled by default.
+
+``amd_prefcore=disable``
+
+For systems that support ``amd-pstate`` preferred core, the core rankings will
+always be advertised by the platform. But the OS can choose to ignore that via
+the kernel parameter ``amd_prefcore=disable``.
+
 User Space Interface in ``sysfs`` - General
 ===========================================

@@ -385,6 +427,19 @@ control its functionality at the system level.  They are located in the
 	to the operation mode represented by that string - or to be
 	unregistered in the "disable" case.

+``prefcore``
+	Preferred core state of the driver: "enabled" or "disabled".
+
+	"enabled"
+		Enable the ``amd-pstate`` preferred core.
+
+	"disabled"
+		Disable the ``amd-pstate`` preferred core.
+
+	This attribute is read-only; it is used to check the state of
+	preferred core set by the kernel parameter.
+
 ``cpupower`` tool support for ``amd-pstate``
 ===============================================
The amd-pstate driver supports enabling/disabling preferred core. It is enabled by default on platforms that support amd-pstate preferred core. To disable it, add "amd_prefcore=disable" to the kernel command line.
Signed-off-by: Meng Li <li.meng@amd.com>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Wyes Karny <wyes.karny@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
---
 Documentation/admin-guide/kernel-parameters.txt | 5 +++++
 1 file changed, 5 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 0a1731a0f0ef..e35b795aa8aa 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -363,6 +363,11 @@
 			selects a performance level in this range and appropriate
 			to the current workload.

+	amd_prefcore=
+			[X86]
+			disable
+			  Disable amd-pstate preferred core.
+
 	amijoy.map=	[HW,JOY] Amiga joystick support
 			Map of devices attached to JOY0DAT and JOY1DAT
 			Format: <a>,<b>
Hello.
On Friday, 13 October 2023 at 5:31:11 CEST, Meng Li wrote:
Hi all:
The core frequency is subjected to the process variation in semiconductors. Not all cores are able to reach the maximum frequency respecting the infrastructure limits. Consequently, AMD has redefined the concept of maximum frequency of a part. This means that a fraction of cores can reach maximum frequency. To find the best process scheduling policy for a given scenario, OS needs to know the core ordering informed by the platform through highest performance capability register of the CPPC interface.
Earlier implementations of amd-pstate preferred core only support a static core ranking and targeted performance. Now it has the ability to dynamically change the preferred core based on the workload and platform conditions and accounting for thermals and aging.
Amd-pstate driver utilizes the functions and data structures provided by the ITMT architecture to enable the scheduler to favor scheduling on cores which can be get a higher frequency with lower voltage. We call it amd-pstate preferred core.
Here sched_set_itmt_core_prio() is called to set priorities and sched_set_itmt_support() is called to enable ITMT feature. Amd-pstate driver uses the highest performance value to indicate the priority of CPU. The higher value has a higher priority.
Amd-pstate driver will provide an initial core ordering at boot time. It relies on the CPPC interface to communicate the core ranking to the operating system and scheduler to make sure that OS is choosing the cores with highest performance firstly for scheduling the process. When amd-pstate driver receives a message with the highest performance change, it will update the core ranking.
Changes form V8->V9:
- all:
- pick up Tested-By flag added by Oleksandr.
- cpufreq: amd-pstate:
- pick up Review-By flag added by Wyes.
- ignore modification of bug.
Thanks for this submission.
The bug you refer to, I assume it should have been fixed by this hunk:
```
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -542,7 +542,7 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
 	if (target_perf < capacity)
 		des_perf = DIV_ROUND_UP(cap_perf * target_perf, capacity);

-	min_perf = READ_ONCE(cpudata->highest_perf);
+	min_perf = READ_ONCE(cpudata->lowest_perf);
 	if (_min_perf < capacity)
 		min_perf = DIV_ROUND_UP(cap_perf * _min_perf, capacity);
```
which is now missing from this patchset as it was suggested to send it as a separate patch.
Am I correct? If so, are you going to send it as a separate patch within the next round of this patchset, or will it be sent separately (if it hasn't been already)?
Hi Oleksandr:
-----Original Message----- From: Oleksandr Natalenko oleksandr@natalenko.name Sent: Friday, October 13, 2023 11:45 PM To: Rafael J . Wysocki rafael.j.wysocki@intel.com; Huang, Ray Ray.Huang@amd.com; Meng, Li (Jassmine) Li.Meng@amd.com Cc: linux-pm@vger.kernel.org; linux-kernel@vger.kernel.org; x86@kernel.org; linux-acpi@vger.kernel.org; Shuah Khan skhan@linuxfoundation.org; linux-kselftest@vger.kernel.org; Fontenot, Nathan Nathan.Fontenot@amd.com; Sharma, Deepak Deepak.Sharma@amd.com; Deucher, Alexander Alexander.Deucher@amd.com; Limonciello, Mario Mario.Limonciello@amd.com; Huang, Shimmer Shimmer.Huang@amd.com; Yuan, Perry Perry.Yuan@amd.com; Du, Xiaojian Xiaojian.Du@amd.com; Viresh Kumar viresh.kumar@linaro.org; Borislav Petkov bp@alien8.de; Meng, Li (Jassmine) Li.Meng@amd.com Subject: Re: [RESEND PATCH V9 0/7] amd-pstate preferred core
Hello.
On pátek 13. října 2023 5:31:11 CEST Meng Li wrote:
Hi all:
The core frequency is subjected to the process variation in semiconductors. Not all cores are able to reach the maximum frequency respecting the infrastructure limits. Consequently, AMD has redefined the concept of maximum frequency of a part. This means that a fraction of cores can reach maximum frequency. To find the best process scheduling policy for a given scenario, OS needs to know the core ordering informed by the platform through highest performance capability register of the CPPC
interface.
Earlier implementations of amd-pstate preferred core only support a static core ranking and targeted performance. Now it has the ability to dynamically change the preferred core based on the workload and platform conditions and accounting for thermals and aging.
Amd-pstate driver utilizes the functions and data structures provided by the ITMT architecture to enable the scheduler to favor scheduling on cores which can be get a higher frequency with lower voltage. We call it amd-pstate preferred core.
Here sched_set_itmt_core_prio() is called to set priorities and sched_set_itmt_support() is called to enable ITMT feature. Amd-pstate driver uses the highest performance value to indicate the priority of CPU. The higher value has a higher priority.
Amd-pstate driver will provide an initial core ordering at boot time. It relies on the CPPC interface to communicate the core ranking to the operating system and scheduler to make sure that OS is choosing the cores with highest performance firstly for scheduling the process. When amd-pstate driver receives a message with the highest performance change, it will update the core ranking.
Changes form V8->V9:
- all:
- pick up Tested-By flag added by Oleksandr.
- cpufreq: amd-pstate:
- pick up Review-By flag added by Wyes.
- ignore modification of bug.
Thanks for this submission.
The bug you refer to, I assume it should have been fixed by this hunk:
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -542,7 +542,7 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
 	if (target_perf < capacity)
 		des_perf = DIV_ROUND_UP(cap_perf * target_perf, capacity);

-	min_perf = READ_ONCE(cpudata->highest_perf);
+	min_perf = READ_ONCE(cpudata->lowest_perf);
 	if (_min_perf < capacity)
 		min_perf = DIV_ROUND_UP(cap_perf * _min_perf, capacity);
which is now missing from this patchset as it was suggested to send it as a separate patch.
Am I correct? If so, are you going to send it as a separate patch within the next round of this patchset, or it will be sent separately (if it hasn't yet)?
[Meng, Li (Jassmine)] Thank you! It is indeed missing from this patch series, and I will send a separate patch for this issue.