Hi stable team,
Please backport the mainline commit
bf202e654bfa ("cpufreq: amd-pstate: fix the highest frequency issue which limits performance")
to the 6.9 stable series.
It fixes a performance regression on AMD Phoenix platforms.
It was meant to get into the 6.9 release or the stable branch shortly
after, but apparently that didn't happen.
On 2024-05-08 13:47:03+0000, Perry Yuan wrote:
> The cause of a performance drop has been identified: the low-level
> power firmware sets an incorrect highest performance value on AMD CPUs
> with Family ID 0x19 and Model IDs ranging from 0x70 to 0x7F.
>
> To resolve this, a check has been added to accurately determine the
> CPU family and model ID. The correct highest performance value is now
> set, and the performance drop caused by the incorrect highest
> performance value is eliminated.
>
> Before the fix, the highest frequency was set to 4200 MHz; now it is
> correctly set to 4971 MHz.
>
> CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ MHZ
> 0 0 0 0 0:0:0:0 yes 4971.0000 400.0000 400.0000
> 1 0 0 0 0:0:0:0 yes 4971.0000 400.0000 400.0000
> 2 0 0 1 1:1:1:0 yes 4971.0000 400.0000 4865.8140
> 3 0 0 1 1:1:1:0 yes 4971.0000 400.0000 400.0000
>
> v1->v2:
> * add Tested-by tag from Gaha Bana
>
> Fixes: f3a052391822 ("cpufreq: amd-pstate: Enable amd-pstate preferred core support")
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218759
> Signed-off-by: Perry Yuan <perry.yuan(a)amd.com>
> Co-developed-by: Mario Limonciello <mario.limonciello(a)amd.com>
> Signed-off-by: Mario Limonciello <mario.limonciello(a)amd.com>
> Tested-by: Gaha Bana <gahabana(a)gmail.com>
> ---
> drivers/cpufreq/amd-pstate.c | 22 +++++++++++++++++++---
> 1 file changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index 2db095867d03..6a342b0c0140 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -50,7 +50,8 @@
>
> #define AMD_PSTATE_TRANSITION_LATENCY 20000
> #define AMD_PSTATE_TRANSITION_DELAY 1000
> -#define AMD_PSTATE_PREFCORE_THRESHOLD 166
> +#define CPPC_HIGHEST_PERF_PERFORMANCE 196
> +#define CPPC_HIGHEST_PERF_DEFAULT 166
>
> /*
> * TODO: We need more time to fine tune processors with shared memory solution
> @@ -326,6 +327,21 @@ static inline int amd_pstate_enable(bool enable)
> return static_call(amd_pstate_enable)(enable);
> }
>
> +static u32 amd_pstate_highest_perf_set(struct amd_cpudata *cpudata)
> +{
> + struct cpuinfo_x86 *c = &cpu_data(0);
> +
> + /*
> + * For AMD CPUs with Family ID 19H and Model ID range 0x70 to 0x7f,
> + * the highest performance level is set to 196.
> + * https://bugzilla.kernel.org/show_bug.cgi?id=218759
> + */
> + if (c->x86 == 0x19 && (c->x86_model >= 0x70 && c->x86_model <= 0x7f))
> + return CPPC_HIGHEST_PERF_PERFORMANCE;
> +
> + return CPPC_HIGHEST_PERF_DEFAULT;
> +}
> +
> static int pstate_init_perf(struct amd_cpudata *cpudata)
> {
> u64 cap1;
> @@ -342,7 +358,7 @@ static int pstate_init_perf(struct amd_cpudata *cpudata)
> * the default max perf.
> */
> if (cpudata->hw_prefcore)
> - highest_perf = AMD_PSTATE_PREFCORE_THRESHOLD;
> + highest_perf = amd_pstate_highest_perf_set(cpudata);
> else
> highest_perf = AMD_CPPC_HIGHEST_PERF(cap1);
>
> @@ -366,7 +382,7 @@ static int cppc_init_perf(struct amd_cpudata *cpudata)
> return ret;
>
> if (cpudata->hw_prefcore)
> - highest_perf = AMD_PSTATE_PREFCORE_THRESHOLD;
> + highest_perf = amd_pstate_highest_perf_set(cpudata);
> else
> highest_perf = cppc_perf.highest_perf;
>
> --
> 2.34.1
>
When asn1_encode_sequence() fails, WARN is not the correct solution.
1. asn1_encode_sequence() is not an internal function (it lives in
   lib/asn1_encoder.c).
2. The location is known, which makes the stack trace useless.
3. It results in a crash if panic_on_warn is set.
It is also noteworthy that the use of WARN is undocumented, and it
should be avoided unless there is a carefully considered rationale to
use it.
Replace WARN with pr_err, and print the return value instead, which is
the only useful piece of information.
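As a minimal sketch of the pattern this moves to (hypothetical names,
not the kernel's actual code): an ERR_PTR-returning encoder is checked
with IS_ERR(), and only the errno value is reported - no stack trace,
no panic_on_warn hazard.

#include <linux/err.h>
#include <linux/printk.h>
#include <linux/types.h>

/* hypothetical stand-in for asn1_encode_sequence(): returns a pointer
 * past the encoded data, or an ERR_PTR() on failure */
static u8 *example_encode(u8 *buf, u8 *end);

static int example_blob_len(u8 *buf, u8 *end)
{
	u8 *p = example_encode(buf, end);

	if (IS_ERR(p)) {
		int ret = PTR_ERR(p);

		/* the errno is the only useful piece of information */
		pr_err("encode error %d\n", ret);
		return ret;
	}
	return p - buf;
}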
Cc: stable(a)vger.kernel.org # v5.13+
Fixes: f2219745250f ("security: keys: trusted: use ASN.1 TPM2 key format for the blobs")
Signed-off-by: Jarkko Sakkinen <jarkko(a)kernel.org>
---
security/keys/trusted-keys/trusted_tpm2.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
index dfeec06301ce..dbdd6a318b8b 100644
--- a/security/keys/trusted-keys/trusted_tpm2.c
+++ b/security/keys/trusted-keys/trusted_tpm2.c
@@ -38,6 +38,7 @@ static int tpm2_key_encode(struct trusted_key_payload *payload,
u8 *end_work = scratch + SCRATCH_SIZE;
u8 *priv, *pub;
u16 priv_len, pub_len;
+ int ret;
priv_len = get_unaligned_be16(src) + 2;
priv = src;
@@ -79,8 +80,11 @@ static int tpm2_key_encode(struct trusted_key_payload *payload,
work1 = payload->blob;
work1 = asn1_encode_sequence(work1, work1 + sizeof(payload->blob),
scratch, work - scratch);
- if (WARN(IS_ERR(work1), "BUG: ASN.1 encoder failed"))
- return PTR_ERR(work1);
+ if (IS_ERR(work1)) {
+ ret = PTR_ERR(work1);
+ pr_err("ASN.1 encode error %d\n", ret);
+ return ret;
+ }
return work1 - payload->blob;
}
--
2.45.1
From: NeilBrown <neilb(a)suse.de>
[ Upstream commit 3903902401451b1cd9d797a8c79769eb26ac7fe5 ]
The original implementation of nfsd used signals to stop threads during
shutdown.
In Linux 2.3.46pre5 nfsd gained the ability to shut down threads
internally if it was asked to run "0" threads. After this, user-space
transitioned to using "rpc.nfsd 0" to stop nfsd, and sending signals to
threads was no longer an important part of the API.
In commit 3ebdbe5203a8 ("SUNRPC: discard svo_setup and rename
svc_set_num_threads_sync()") (v5.17-rc1~75^2~41) we finally removed the
use of signals for stopping threads, using kthread_stop() instead.
This patch takes the "obvious" next step and removes the ability to
signal nfsd threads - or any svc threads. nfsd stops allowing signals,
and we no longer check for their delivery.
This will allow for some simplification in later patches.
A change worth noting is in nfsd4_ssc_setup_dul(). There was previously
a signal_pending() check which would only succeed when the thread was
being shut down. It should really have tested kthread_should_stop() as
well. Now it just does the latter, not the former.
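A minimal sketch of the resulting wait-loop shape (hypothetical names,
not the actual nfsd code), assuming the thread is managed purely with
kthread_run()/kthread_stop():

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(example_waitq);

static int example_svc_thread(void *data)
{
	DEFINE_WAIT(wait);

	while (!kthread_should_stop()) {
		/* TASK_IDLE sleeps without being woken by signals and
		 * without contributing to the load average */
		prepare_to_wait(&example_waitq, &wait, TASK_IDLE);

		/* previously also: if (signal_pending(current)) ... */
		if (kthread_should_stop()) {
			finish_wait(&example_waitq, &wait);
			break;
		}

		schedule_timeout(20 * HZ);
		finish_wait(&example_waitq, &wait);
	}
	return 0;
}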
Signed-off-by: NeilBrown <neilb(a)suse.de>
Reviewed-by: Jeff Layton <jlayton(a)kernel.org>
Signed-off-by: Chuck Lever <chuck.lever(a)oracle.com>
---
fs/nfs/callback.c | 9 +--------
fs/nfsd/nfs4proc.c | 5 ++---
fs/nfsd/nfssvc.c | 12 ------------
net/sunrpc/svc_xprt.c | 16 ++++++----------
4 files changed, 9 insertions(+), 33 deletions(-)
Greg, Sasha - This is the third resend for this fix. Why isn't it
applied to origin/linux-5.15.y yet?
diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index 456af7d230cf..46a0a2d6962e 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -80,9 +80,6 @@ nfs4_callback_svc(void *vrqstp)
set_freezable();
while (!kthread_freezable_should_stop(NULL)) {
-
- if (signal_pending(current))
- flush_signals(current);
/*
* Listen for a request on the socket
*/
@@ -112,11 +109,7 @@ nfs41_callback_svc(void *vrqstp)
set_freezable();
while (!kthread_freezable_should_stop(NULL)) {
-
- if (signal_pending(current))
- flush_signals(current);
-
- prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_INTERRUPTIBLE);
+ prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_IDLE);
spin_lock_bh(&serv->sv_cb_lock);
if (!list_empty(&serv->sv_cb_list)) {
req = list_first_entry(&serv->sv_cb_list,
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index c14f5ac1484c..6779291efca9 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1317,12 +1317,11 @@ static __be32 nfsd4_ssc_setup_dul(struct nfsd_net *nn, char *ipaddr,
/* found a match */
if (ni->nsui_busy) {
/* wait - and try again */
- prepare_to_wait(&nn->nfsd_ssc_waitq, &wait,
- TASK_INTERRUPTIBLE);
+ prepare_to_wait(&nn->nfsd_ssc_waitq, &wait, TASK_IDLE);
spin_unlock(&nn->nfsd_ssc_lock);
/* allow 20secs for mount/unmount for now - revisit */
- if (signal_pending(current) ||
+ if (kthread_should_stop() ||
(schedule_timeout(20*HZ) == 0)) {
finish_wait(&nn->nfsd_ssc_waitq, &wait);
kfree(work);
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 4c1a0a1623e5..3d4fd40c987b 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -938,15 +938,6 @@ nfsd(void *vrqstp)
current->fs->umask = 0;
- /*
- * thread is spawned with all signals set to SIG_IGN, re-enable
- * the ones that will bring down the thread
- */
- allow_signal(SIGKILL);
- allow_signal(SIGHUP);
- allow_signal(SIGINT);
- allow_signal(SIGQUIT);
-
atomic_inc(&nfsdstats.th_cnt);
set_freezable();
@@ -971,9 +962,6 @@ nfsd(void *vrqstp)
validate_process_creds();
}
- /* Clear signals before calling svc_exit_thread() */
- flush_signals(current);
-
atomic_dec(&nfsdstats.th_cnt);
out:
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 67ccf1a6459a..b19592673eef 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -700,8 +700,8 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
/* Made progress, don't sleep yet */
continue;
- set_current_state(TASK_INTERRUPTIBLE);
- if (signalled() || kthread_should_stop()) {
+ set_current_state(TASK_IDLE);
+ if (kthread_should_stop()) {
set_current_state(TASK_RUNNING);
return -EINTR;
}
@@ -736,7 +736,7 @@ rqst_should_sleep(struct svc_rqst *rqstp)
return false;
/* are we shutting down? */
- if (signalled() || kthread_should_stop())
+ if (kthread_should_stop())
return false;
/* are we freezing? */
@@ -758,11 +758,7 @@ static struct svc_xprt *svc_get_next_xprt(struct svc_rqst *rqstp, long timeout)
if (rqstp->rq_xprt)
goto out_found;
- /*
- * We have to be able to interrupt this wait
- * to bring down the daemons ...
- */
- set_current_state(TASK_INTERRUPTIBLE);
+ set_current_state(TASK_IDLE);
smp_mb__before_atomic();
clear_bit(SP_CONGESTED, &pool->sp_flags);
clear_bit(RQ_BUSY, &rqstp->rq_flags);
@@ -784,7 +780,7 @@ static struct svc_xprt *svc_get_next_xprt(struct svc_rqst *rqstp, long timeout)
if (!time_left)
atomic_long_inc(&pool->sp_stats.threads_timedout);
- if (signalled() || kthread_should_stop())
+ if (kthread_should_stop())
return ERR_PTR(-EINTR);
return ERR_PTR(-EAGAIN);
out_found:
@@ -882,7 +878,7 @@ int svc_recv(struct svc_rqst *rqstp, long timeout)
try_to_freeze();
cond_resched();
err = -EINTR;
- if (signalled() || kthread_should_stop())
+ if (kthread_should_stop())
goto out;
xprt = svc_get_next_xprt(rqstp, timeout);
base-commit: 284087d4f7d57502b5a41b423b771f16cc6d157a
--
2.44.0
Zoned devices require sequential writes within a zone. That means that
if two requests target the same zone, the request with the lower
position needs to be dispatched to the device first.
However, each priority class has its own tree & list, and a request
with a higher priority class is dispatched first.
So suppose requestA and requestB target the same zone, requestA is BE
with position X+0, and requestB is RT with position X+1. RequestB will
be dispatched before requestA, which triggers an error from the zoned
device.
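To make the same-zone condition concrete, here is a small standalone
sketch of the round_down() comparison that the blk_req_zone_in_one()
helper below performs, with made-up numbers (zone_sectors comes from
q->limits.chunk_sectors and is a power of two on zoned devices):

#include <stdbool.h>
#include <stdio.h>

/* two sector positions share a zone iff they round down to the same
 * zone-aligned base (power-of-two zone size assumed) */
static bool in_same_zone(unsigned long long pos_a, unsigned long long pos_b,
			 unsigned int zone_sectors)
{
	unsigned long long mask = ~(unsigned long long)(zone_sectors - 1);

	return (pos_a & mask) == (pos_b & mask);
}

int main(void)
{
	unsigned int zone_sectors = 524288;	/* 256 MiB zones, in 512 B sectors */

	/* X+0 and X+1 from the example above: same zone, so the lower
	 * position (the BE request) must be sent to the device first */
	printf("%d\n", in_same_zone(1048576, 1048577, zone_sectors));	/* 1 */
	printf("%d\n", in_same_zone(1048576, 1572864, zone_sectors));	/* 0 */
	return 0;
}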
This was found in a practical scenario when using F2FS on a zoned
device, and it is very easy to reproduce:
1. Use fsstress to run 8 test processes.
2. Use ionice to switch 4 of the 8 processes to RT priority.
Fixes: c807ab520fc3 ("block/mq-deadline: Add I/O priority support")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Wu Bo <bo.wu(a)vivo.com>
---
block/mq-deadline.c | 31 +++++++++++++++++++++++++++++++
include/linux/blk-mq.h | 15 +++++++++++++++
2 files changed, 46 insertions(+)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 02a916ba62ee..6a05dd86e8ca 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -539,6 +539,37 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
if (started_after(dd, rq, latest_start))
return NULL;
+ if (!blk_rq_is_seq_zoned_write(rq))
+ goto skip_check;
+ /*
+ * To ensure sequential writing, check the lower priority class to see
+ * if there is a request on the same zone and need to be dispatched
+ * first
+ */
+ ioprio_class = dd_rq_ioclass(rq);
+ prio = ioprio_class_to_prio[ioprio_class];
+ prio++;
+ for (; prio <= DD_PRIO_MAX; prio++) {
+ struct request *temp_rq;
+ unsigned long flags;
+ bool can_dispatch;
+
+ if (!dd_queued(dd, prio))
+ continue;
+
+ temp_rq = deadline_from_pos(&dd->per_prio[prio], data_dir, blk_rq_pos(rq));
+ if (temp_rq && blk_req_zone_in_one(temp_rq, rq) &&
+ blk_rq_pos(temp_rq) < blk_rq_pos(rq)) {
+ spin_lock_irqsave(&dd->zone_lock, flags);
+ can_dispatch = blk_req_can_dispatch_to_zone(temp_rq);
+ spin_unlock_irqrestore(&dd->zone_lock, flags);
+ if (!can_dispatch)
+ return NULL;
+ rq = temp_rq;
+ per_prio = &dd->per_prio[prio];
+ }
+ }
+skip_check:
/*
* rq is the selected appropriate request.
*/
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index d3d8fd8e229b..bca1e639e0f3 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -1202,6 +1202,15 @@ static inline bool blk_req_can_dispatch_to_zone(struct request *rq)
return true;
return !blk_req_zone_is_write_locked(rq);
}
+
+static inline bool blk_req_zone_in_one(struct request *rq_a,
+ struct request *rq_b)
+{
+ unsigned int zone_sectors = rq_a->q->limits.chunk_sectors;
+
+ return round_down(blk_rq_pos(rq_a), zone_sectors) ==
+ round_down(blk_rq_pos(rq_b), zone_sectors);
+}
#else /* CONFIG_BLK_DEV_ZONED */
static inline bool blk_rq_is_seq_zoned_write(struct request *rq)
{
@@ -1229,6 +1238,12 @@ static inline bool blk_req_can_dispatch_to_zone(struct request *rq)
{
return true;
}
+
+static inline bool blk_req_zone_in_one(struct request *rq_a,
+ struct request *rq_b)
+{
+ return false;
+}
#endif /* CONFIG_BLK_DEV_ZONED */
#endif /* BLK_MQ_H */
--
2.35.3
From: Ronald Wahl <ronald.wahl(a)raritan.com>
Under some circumstances it may happen that the ks8851 Ethernet driver
stops sending data.
Currently the interrupt handler resets the interrupt status flags in the
hardware after handling TX. With this approach we may lose interrupts in
the time window between handling the TX interrupt and resetting the TX
interrupt status bit.
When all of the three following conditions are true then transmitting
data stops:
- TX queue is stopped to wait for room in the hardware TX buffer
- no queued SKBs in the driver (txq) that wait for being written to hw
- hardware TX buffer is empty and the last TX interrupt was lost
This is because re-enabling the TX queue happens when handling the TX
interrupt status, but if the TX status bit has already been cleared,
then this interrupt will never come.
With this commit the interrupt status flags will be cleared before they
are handled. That way we stop losing interrupts.
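To illustrate the race (hypothetical names, not the actual driver
code), compare the two acknowledge orderings; an interrupt asserted
inside the marked window is acked without ever being handled in the old
ordering:

#include <linux/interrupt.h>

struct example_hw;					/* hypothetical device state */
unsigned int hw_read_isr(struct example_hw *hw);	/* ~ ks8851_rdreg16(ks, KS_ISR) */
void hw_ack_isr(struct example_hw *hw, unsigned int s);	/* ~ ks8851_wrreg16(ks, KS_ISR, ...) */
void handle_tx(struct example_hw *hw);			/* may wake a stopped TX queue */
#define EXAMPLE_IRQ_TXI	(1 << 14)			/* hypothetical bit value */

/* old ordering: read, handle, then ack - can lose an interrupt */
static irqreturn_t isr_ack_late(struct example_hw *hw)
{
	unsigned int status = hw_read_isr(hw);

	if (status & EXAMPLE_IRQ_TXI)
		handle_tx(hw);
	/* <-- a TX interrupt raised here is cleared below but never
	 *     handled; a stopped TX queue then stays stopped forever */
	hw_ack_isr(hw, status);
	return IRQ_HANDLED;
}

/* new ordering: ack first, so a later interrupt stays pending in the
 * hardware and simply fires the handler again */
static irqreturn_t isr_ack_early(struct example_hw *hw)
{
	unsigned int status = hw_read_isr(hw);

	hw_ack_isr(hw, status);
	if (status & EXAMPLE_IRQ_TXI)
		handle_tx(hw);
	return IRQ_HANDLED;
}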
The wrong handling of the ISR flags was there from the beginning, but
with commit 3dc5d4454545 ("net: ks8851: Fix TX stall caused by TX
buffer overrun") the issue became apparent.
Fixes: 3dc5d4454545 ("net: ks8851: Fix TX stall caused by TX buffer overrun")
Cc: "David S. Miller" <davem(a)davemloft.net>
Cc: Eric Dumazet <edumazet(a)google.com>
Cc: Jakub Kicinski <kuba(a)kernel.org>
Cc: Paolo Abeni <pabeni(a)redhat.com>
Cc: Simon Horman <horms(a)kernel.org>
Cc: netdev(a)vger.kernel.org
Cc: stable(a)vger.kernel.org # 5.10+
Signed-off-by: Ronald Wahl <ronald.wahl(a)raritan.com>
---
drivers/net/ethernet/micrel/ks8851_common.c | 18 +-----------------
1 file changed, 1 insertion(+), 17 deletions(-)
diff --git a/drivers/net/ethernet/micrel/ks8851_common.c b/drivers/net/ethernet/micrel/ks8851_common.c
index 502518cdb461..6453c92f0fa7 100644
--- a/drivers/net/ethernet/micrel/ks8851_common.c
+++ b/drivers/net/ethernet/micrel/ks8851_common.c
@@ -328,7 +328,6 @@ static irqreturn_t ks8851_irq(int irq, void *_ks)
{
struct ks8851_net *ks = _ks;
struct sk_buff_head rxq;
- unsigned handled = 0;
unsigned long flags;
unsigned int status;
struct sk_buff *skb;
@@ -336,24 +335,17 @@ static irqreturn_t ks8851_irq(int irq, void *_ks)
ks8851_lock(ks, &flags);
status = ks8851_rdreg16(ks, KS_ISR);
+ ks8851_wrreg16(ks, KS_ISR, status);
netif_dbg(ks, intr, ks->netdev,
"%s: status 0x%04x\n", __func__, status);
- if (status & IRQ_LCI)
- handled |= IRQ_LCI;
-
if (status & IRQ_LDI) {
u16 pmecr = ks8851_rdreg16(ks, KS_PMECR);
pmecr &= ~PMECR_WKEVT_MASK;
ks8851_wrreg16(ks, KS_PMECR, pmecr | PMECR_WKEVT_LINK);
-
- handled |= IRQ_LDI;
}
- if (status & IRQ_RXPSI)
- handled |= IRQ_RXPSI;
-
if (status & IRQ_TXI) {
unsigned short tx_space = ks8851_rdreg16(ks, KS_TXMIR);
@@ -365,20 +357,12 @@ static irqreturn_t ks8851_irq(int irq, void *_ks)
if (netif_queue_stopped(ks->netdev))
netif_wake_queue(ks->netdev);
spin_unlock(&ks->statelock);
-
- handled |= IRQ_TXI;
}
- if (status & IRQ_RXI)
- handled |= IRQ_RXI;
-
if (status & IRQ_SPIBEI) {
netdev_err(ks->netdev, "%s: spi bus error\n", __func__);
- handled |= IRQ_SPIBEI;
}
- ks8851_wrreg16(ks, KS_ISR, handled);
-
if (status & IRQ_RXI) {
/* the datasheet says to disable the rx interrupt during
* packet read-out, however we're masking the interrupt
--
2.45.0
The initial change to set x86_virt_bits to the correct value straight
away broke boot on Intel Quark X1000 CPUs (which are family 5, model 9,
stepping 0).
Deeper investigation shows that the Quark doesn't have bit 19 set in
the 0x01 CPUID leaf, which means it doesn't provide the clflush
instruction and hence the cache alignment is left at 0. The actual
cache line size is 16 bytes, so we may set the alignment accordingly.
At the same time, the physical and virtual address bits are retrieved
via the 0x80000008 CPUID leaf.
Note, we don't really care about the value of x86_clflush_size, as it
is either used together with a proper check that the instruction is
present or, as in the PCI case, assumed to be 32 bytes for all
supported 32-bit CPUs that actually have smaller cache line sizes and
don't advertise them.
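For reference, these bits can be checked from user space; a minimal
sketch (assuming a GCC/Clang toolchain that provides <cpuid.h>) reading
the CLFSH feature bit (CPUID.01H:EDX[19]) and the advertised clflush
line size (CPUID.01H:EBX[15:8], in 8-byte units):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x01, &eax, &ebx, &ecx, &edx))
		return 1;

	if (edx & (1u << 19))	/* CLFSH: clflush instruction present */
		printf("clflush line size: %u bytes\n",
		       ((ebx >> 8) & 0xff) * 8);
	else			/* e.g. Quark X1000: alignment needs a quirk */
		printf("no clflush advertised\n");

	return 0;
}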
The commit fbf6449f84bf ("x86/sev-es: Set x86_virt_bits to the correct
value straight away, instead of a two-phase approach") basically
revealed the issue that has been present from day 1 of introducing
the Quark support.
Fixes: aece118e487a ("x86: Add cpu_detect_cache_sizes to init_intel() add Quark legacy_cache()")
Cc: stable(a)vger.kernel.org
Signed-off-by: Andy Shevchenko <andriy.shevchenko(a)linux.intel.com>
---
arch/x86/kernel/cpu/intel.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index be30d7fa2e66..2bffae158dd5 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -321,6 +321,15 @@ static void early_init_intel(struct cpuinfo_x86 *c)
#ifdef CONFIG_X86_64
set_cpu_cap(c, X86_FEATURE_SYSENTER32);
#else
+ /*
+ * The Quark doesn't have bit 19 set in 0x01 CPUID leaf, which means
+ * it doesn't provide any clflush instructions and hence the cache
+ * alignment is set to 0. The actual cache line size is 16 bytes,
+ * hence set the alignment accordingly. At the same time the physical
+ * and virtual address bits are retrieved via 0x80000008 CPUID leaf.
+ */
+ if (c->x86 == 5 && c->x86_model == 9)
+ c->x86_cache_alignment = 16;
/* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
if (c->x86 == 15 && c->x86_cache_alignment == 64)
c->x86_cache_alignment = 128;
--
2.43.0.rc1.1336.g36b5255a03ac