Hi Stable kernel maintainers,
I would like to backport the patch below to the stable trees:
4929a4e6faa0 ("sched/fair: Scale bandwidth quota and period without losing quota/period ratio precision")
This email follows option 2 [1] for submitting patches to the stable kernel.
This patch fixes a real bug affecting Kubernetes users [2]. The commit that introduced the bug (2e8e19226398 "sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup") was backported to all stable versions, so this fix needs to be backported to all stable versions as well.
Thanks! Xuewei
[1] https://github.com/torvalds/linux/blob/master/Documentation/process/stable-k...
[2] https://github.com/kubernetes/kubernetes/issues/72878
On Tue, Dec 03, 2019 at 02:54:57PM -0800, Xuewei Zhang wrote:
Hi Stable kernel maintainers,
I would like to backport the patch below to the stable trees:
4929a4e6faa0 ("sched/fair: Scale bandwidth quota and period without losing quota/period ratio precision")
What tree(s) do you want it backported to?
It's already been backported to 5.3.9 and 5.4, is that not sufficient?
If so, can you provide working backports to any older stable kernels that you wish to see it merged into?
thanks,
greg k-h
On Tue, Dec 3, 2019 at 3:09 PM Greg KH gregkh@linuxfoundation.org wrote:
On Tue, Dec 03, 2019 at 02:54:57PM -0800, Xuewei Zhang wrote:
Hi Stable kernel maintainers,
I would like to backport the patch below to the stable trees:
4929a4e6faa0 ("sched/fair: Scale bandwidth quota and period without losing quota/period ratio precision")
What tree(s) do you want it backported to?
It's already been backported to 5.3.9 and 5.4, is that not sufficient?
We should backport this into all supported stable versions, as the bug affects all of them: 4.19, 4.14, 4.9, 4.4, 3.16.
If so, can you provide working backports to any older stable kernels that you wish to see it merged into?
Happy to do that! I will send them in a few emails following this thread. Please let me know if this is not the correct way to send patches.
Thanks! Xuewei
thanks,
greg k-h
Backport patch that applies cleanly to the 4.19, 4.14, 4.9, and 4.4 stable trees:
From 974bb36176677c05f257a8385fb69720ae8ed071 Mon Sep 17 00:00:00 2001
From: Xuewei Zhang <xueweiz@google.com>
Date: Thu, 3 Oct 2019 17:12:43 -0700
Subject: [PATCH] sched/fair: Scale bandwidth quota and period without losing
 quota/period ratio precision
commit 4929a4e6faa0f13289a67cae98139e727f0d4a97 upstream.
The quota/period ratio is used to ensure a child task group won't get more bandwidth than the parent task group, and is calculated as:
normalized_cfs_quota() = [(quota_us << 20) / period_us]
If the quota/period ratio was changed during this scaling due to precision loss, it will cause inconsistency between parent and child task groups.
See below example:
A userspace container manager (kubelet) does three operations:
1) Create a parent cgroup, set quota to 1,000us and period to 10,000us.
2) Create a few children cgroups.
3) Set quota to 1,000us and period to 10,000us on a child cgroup.
These operations are expected to succeed. However, if the scaling of 147/128 happens before step 3, quota and period of the parent cgroup will be changed:
new_quota: 1148437ns, 1148us
new_period: 11484375ns, 11484us
And when step 3 comes in, the ratio of the child cgroup will be 104857, which will be larger than the parent cgroup ratio (104821), and will fail.
Scaling them by a factor of 2 will fix the problem.
Change-Id: I3d5f7629012ff115557a08c465a95a5239a105de
Tested-by: Phil Auld <pauld@redhat.com>
Signed-off-by: Xuewei Zhang <xueweiz@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Phil Auld <pauld@redhat.com>
Cc: Anton Blanchard <anton@ozlabs.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Fixes: 2e8e19226398 ("sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup")
Link: https://lkml.kernel.org/r/20191004001243.140897-1-xueweiz@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Xuewei Zhang <xueweiz@google.com>
---
 kernel/sched/fair.c | 36 ++++++++++++++++++++++--------------
 1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f77fcd37b226..f0abb8fe0ae9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4868,20 +4868,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
 		if (++count > 3) {
 			u64 new, old = ktime_to_ns(cfs_b->period);
 
-			new = (old * 147) / 128; /* ~115% */
-			new = min(new, max_cfs_quota_period);
-
-			cfs_b->period = ns_to_ktime(new);
-
-			/* since max is 1s, this is limited to 1e9^2, which fits in u64 */
-			cfs_b->quota *= new;
-			cfs_b->quota = div64_u64(cfs_b->quota, old);
-
-			pr_warn_ratelimited(
-	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
-				smp_processor_id(),
-				div_u64(new, NSEC_PER_USEC),
-				div_u64(cfs_b->quota, NSEC_PER_USEC));
+			/*
+			 * Grow period by a factor of 2 to avoid losing precision.
+			 * Precision loss in the quota/period ratio can cause __cfs_schedulable
+			 * to fail.
+			 */
+			new = old * 2;
+			if (new < max_cfs_quota_period) {
+				cfs_b->period = ns_to_ktime(new);
+				cfs_b->quota *= 2;
+
+				pr_warn_ratelimited(
+	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us = %lld, cfs_quota_us = %lld)\n",
+					smp_processor_id(),
+					div_u64(new, NSEC_PER_USEC),
+					div_u64(cfs_b->quota, NSEC_PER_USEC));
+			} else {
+				pr_warn_ratelimited(
+	"cfs_period_timer[cpu%d]: period too short, but cannot scale up without losing precision (cfs_period_us = %lld, cfs_quota_us = %lld)\n",
+					smp_processor_id(),
+					div_u64(old, NSEC_PER_USEC),
+					div_u64(cfs_b->quota, NSEC_PER_USEC));
+			}
 
 			/* reset count so we don't come right back in here */
 			count = 0;
Backport patch that applies cleanly to the 4.19, 4.14, 4.9, and 4.4 stable trees:
From 199df9edf62c339b5459fc53d4631c82a3b82f5b Mon Sep 17 00:00:00 2001
From: Xuewei Zhang <xueweiz@google.com>
Date: Thu, 3 Oct 2019 17:12:43 -0700
Subject: [PATCH] sched/fair: Scale bandwidth quota and period without losing
 quota/period ratio precision
commit 4929a4e6faa0f13289a67cae98139e727f0d4a97 upstream.
The quota/period ratio is used to ensure a child task group won't get more bandwidth than the parent task group, and is calculated as:
normalized_cfs_quota() = [(quota_us << 20) / period_us]
If the quota/period ratio was changed during this scaling due to precision loss, it will cause inconsistency between parent and child task groups.
See below example:
A userspace container manager (kubelet) does three operations:
1) Create a parent cgroup, set quota to 1,000us and period to 10,000us.
2) Create a few children cgroups.
3) Set quota to 1,000us and period to 10,000us on a child cgroup.
These operations are expected to succeed. However, if the scaling of 147/128 happens before step 3, quota and period of the parent cgroup will be changed:
new_quota: 1148437ns, 1148us
new_period: 11484375ns, 11484us
And when step 3 comes in, the ratio of the child cgroup will be 104857, which will be larger than the parent cgroup ratio (104821), and will fail.
Scaling them by a factor of 2 will fix the problem.
Change-Id: I3d5f7629012ff115557a08c465a95a5239a105de
Tested-by: Phil Auld <pauld@redhat.com>
Signed-off-by: Xuewei Zhang <xueweiz@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Phil Auld <pauld@redhat.com>
Cc: Anton Blanchard <anton@ozlabs.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Fixes: 2e8e19226398 ("sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup")
Link: https://lkml.kernel.org/r/20191004001243.140897-1-xueweiz@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Xuewei Zhang <xueweiz@google.com>
---
 kernel/sched/fair.c | 36 ++++++++++++++++++++++--------------
 1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ea2d33aa1f55..773135f534ef 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3753,20 +3753,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
 		if (++count > 3) {
 			u64 new, old = ktime_to_ns(cfs_b->period);
 
-			new = (old * 147) / 128; /* ~115% */
-			new = min(new, max_cfs_quota_period);
-
-			cfs_b->period = ns_to_ktime(new);
-
-			/* since max is 1s, this is limited to 1e9^2, which fits in u64 */
-			cfs_b->quota *= new;
-			cfs_b->quota = div64_u64(cfs_b->quota, old);
-
-			pr_warn_ratelimited(
-	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
-				smp_processor_id(),
-				div_u64(new, NSEC_PER_USEC),
-				div_u64(cfs_b->quota, NSEC_PER_USEC));
+			/*
+			 * Grow period by a factor of 2 to avoid losing precision.
+			 * Precision loss in the quota/period ratio can cause __cfs_schedulable
+			 * to fail.
+			 */
+			new = old * 2;
+			if (new < max_cfs_quota_period) {
+				cfs_b->period = ns_to_ktime(new);
+				cfs_b->quota *= 2;
+
+				pr_warn_ratelimited(
+	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us = %lld, cfs_quota_us = %lld)\n",
+					smp_processor_id(),
+					div_u64(new, NSEC_PER_USEC),
+					div_u64(cfs_b->quota, NSEC_PER_USEC));
+			} else {
+				pr_warn_ratelimited(
+	"cfs_period_timer[cpu%d]: period too short, but cannot scale up without losing precision (cfs_period_us = %lld, cfs_quota_us = %lld)\n",
+					smp_processor_id(),
+					div_u64(old, NSEC_PER_USEC),
+					div_u64(cfs_b->quota, NSEC_PER_USEC));
+			}
 
 			/* reset count so we don't come right back in here */
 			count = 0;
Backport patch that applies cleanly to the 3.16 stable tree:
From 199df9edf62c339b5459fc53d4631c82a3b82f5b Mon Sep 17 00:00:00 2001
From: Xuewei Zhang <xueweiz@google.com>
Date: Thu, 3 Oct 2019 17:12:43 -0700
Subject: [PATCH] sched/fair: Scale bandwidth quota and period without losing
 quota/period ratio precision
commit 4929a4e6faa0f13289a67cae98139e727f0d4a97 upstream.
The quota/period ratio is used to ensure a child task group won't get more bandwidth than the parent task group, and is calculated as:
normalized_cfs_quota() = [(quota_us << 20) / period_us]
If the quota/period ratio was changed during this scaling due to precision loss, it will cause inconsistency between parent and child task groups.
See below example:
A userspace container manager (kubelet) does three operations:
1) Create a parent cgroup, set quota to 1,000us and period to 10,000us.
2) Create a few children cgroups.
3) Set quota to 1,000us and period to 10,000us on a child cgroup.
These operations are expected to succeed. However, if the scaling of 147/128 happens before step 3, quota and period of the parent cgroup will be changed:
new_quota: 1148437ns, 1148us
new_period: 11484375ns, 11484us
And when step 3 comes in, the ratio of the child cgroup will be 104857, which will be larger than the parent cgroup ratio (104821), and will fail.
Scaling them by a factor of 2 will fix the problem.
Change-Id: I3d5f7629012ff115557a08c465a95a5239a105de
Tested-by: Phil Auld <pauld@redhat.com>
Signed-off-by: Xuewei Zhang <xueweiz@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Phil Auld <pauld@redhat.com>
Cc: Anton Blanchard <anton@ozlabs.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Fixes: 2e8e19226398 ("sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup")
Link: https://lkml.kernel.org/r/20191004001243.140897-1-xueweiz@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Xuewei Zhang <xueweiz@google.com>
---
 kernel/sched/fair.c | 36 ++++++++++++++++++++++--------------
 1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ea2d33aa1f55..773135f534ef 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3753,20 +3753,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
 		if (++count > 3) {
 			u64 new, old = ktime_to_ns(cfs_b->period);
 
-			new = (old * 147) / 128; /* ~115% */
-			new = min(new, max_cfs_quota_period);
-
-			cfs_b->period = ns_to_ktime(new);
-
-			/* since max is 1s, this is limited to 1e9^2, which fits in u64 */
-			cfs_b->quota *= new;
-			cfs_b->quota = div64_u64(cfs_b->quota, old);
-
-			pr_warn_ratelimited(
-	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
-				smp_processor_id(),
-				div_u64(new, NSEC_PER_USEC),
-				div_u64(cfs_b->quota, NSEC_PER_USEC));
+			/*
+			 * Grow period by a factor of 2 to avoid losing precision.
+			 * Precision loss in the quota/period ratio can cause __cfs_schedulable
+			 * to fail.
+			 */
+			new = old * 2;
+			if (new < max_cfs_quota_period) {
+				cfs_b->period = ns_to_ktime(new);
+				cfs_b->quota *= 2;
+
+				pr_warn_ratelimited(
+	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us = %lld, cfs_quota_us = %lld)\n",
+					smp_processor_id(),
+					div_u64(new, NSEC_PER_USEC),
+					div_u64(cfs_b->quota, NSEC_PER_USEC));
+			} else {
+				pr_warn_ratelimited(
+	"cfs_period_timer[cpu%d]: period too short, but cannot scale up without losing precision (cfs_period_us = %lld, cfs_quota_us = %lld)\n",
+					smp_processor_id(),
+					div_u64(old, NSEC_PER_USEC),
+					div_u64(cfs_b->quota, NSEC_PER_USEC));
+			}
 
 			/* reset count so we don't come right back in here */
 			count = 0;
Hi Greg,
I sent the backports which should apply cleanly to the 5 stable kernel versions: 4.19, 4.14, 4.9, 4.4, 3.16.
Does it work for you? Please let me know if I should submit them in some other format. Apologies in advance if my current format is wrong.
Best regards, Xuewei
On Tue, Dec 03, 2019 at 03:37:36PM -0800, Xuewei Zhang wrote:
Hi Greg,
I sent the backports which should apply cleanly to the 5 stable kernel versions: 4.19, 4.14, 4.9, 4.4, 3.16.
Does it work for you? Please let me know if I should submit them in some other format. Apologies in advance if my current format is wrong.
That looks good, I'll queue them up later this week once the current round of stable kernels are released.
thanks,
greg k-h
On Tue, Dec 3, 2019 at 11:12 PM Greg KH gregkh@linuxfoundation.org wrote:
On Tue, Dec 03, 2019 at 03:37:36PM -0800, Xuewei Zhang wrote:
Hi Greg,
I sent the backports which should apply cleanly to the 5 stable kernel versions: 4.19, 4.14, 4.9, 4.4, 3.16.
Does it work for you? Please let me know if I should submit them in some other format. Apologies in advance if my current format is wrong.
That looks good, I'll queue them up later this week once the current round of stable kernels are released.
thanks,
greg k-h
Thanks for the help Greg!
Best regards, Xuewei
On Tue, Dec 03, 2019 at 03:30:05PM -0800, Xuewei Zhang wrote:
Backport patch that applies cleanly to the 4.19, 4.14, 4.9, and 4.4 stable trees:
Did you send this twice?
And the patch is totally corrupted:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f77fcd37b226..f0abb8fe0ae9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4868,20 +4868,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
if (++count > 3) {
u64 new, old = ktime_to_ns(cfs_b->period);
- new = (old * 147) / 128; /* ~115% */
- new = min(new, max_cfs_quota_period);
- cfs_b->period = ns_to_ktime(new);
All of your leading whitespace is gone.
You can't use the web client of gmail to send patches inline, sorry.
Can you fix this up and resend all of the backports, none of these worked :(
greg k-h
On Fri, Dec 6, 2019 at 6:11 AM Greg KH gregkh@linuxfoundation.org wrote:
On Tue, Dec 03, 2019 at 03:30:05PM -0800, Xuewei Zhang wrote:
Backport patch that applies cleanly to the 4.19, 4.14, 4.9, and 4.4 stable trees:
Did you send this twice?
Yes, sorry, that was sent twice by accident.
And the patch is totally corrupted:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f77fcd37b226..f0abb8fe0ae9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4868,20 +4868,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
if (++count > 3) {
u64 new, old = ktime_to_ns(cfs_b->period);
- new = (old * 147) / 128; /* ~115% */
- new = min(new, max_cfs_quota_period);
- cfs_b->period = ns_to_ktime(new);
All of your leading whitespace is gone.
You can't use the web client of gmail to send patches inline, sorry.
Can you fix this up and resend all of the backports, none of these worked :(
Sorry for the formatting problem Greg.
I just used `git send-email` to send out the backports. They are at:
v4.19: https://www.spinics.net/lists/stable/msg347573.html
v4.14: https://www.spinics.net/lists/stable/msg347576.html
v4.9: https://www.spinics.net/lists/stable/msg347577.html
v4.4: https://www.spinics.net/lists/stable/msg347578.html
v3.16: https://www.spinics.net/lists/stable/msg347579.html
Hopefully that works for you. But if they are still broken somehow (for which I'd be very sorry), you could consider cherry-picking the patch from my GitHub repository (forked from yours):
v4.19: https://github.com/xueweiz/linux/commit/1f2c7fd411a4c332629338571911ae63d380...
v4.14: https://github.com/xueweiz/linux/commit/ccec07bf7842d7ab859fdbbfb79781028e0c...
v4.9: https://github.com/xueweiz/linux/commit/548cd3b8d6a1e663d4a5870daee45f236d75...
v4.4: https://github.com/xueweiz/linux/commit/1d3b43c3c6612901f5384b8ced4e73307217...
v3.16: https://github.com/xueweiz/linux/commit/f31487e819b78515b3173145d13265b746c7...
Each commit is already based on the current HEAD of the respective branches in your stable kernel repo.
Please let me know if they still don't work.
Best regards, Xuewei
greg k-h
On Fri, Dec 06, 2019 at 05:47:10PM -0800, Xuewei Zhang wrote:
On Fri, Dec 6, 2019 at 6:11 AM Greg KH gregkh@linuxfoundation.org wrote:
On Tue, Dec 03, 2019 at 03:30:05PM -0800, Xuewei Zhang wrote:
Backport patch that applies cleanly to the 4.19, 4.14, 4.9, and 4.4 stable trees:
Did you send this twice?
Yes, sorry, that was sent twice by accident.
And the patch is totally corrupted:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f77fcd37b226..f0abb8fe0ae9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4868,20 +4868,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
if (++count > 3) {
u64 new, old = ktime_to_ns(cfs_b->period);
- new = (old * 147) / 128; /* ~115% */
- new = min(new, max_cfs_quota_period);
- cfs_b->period = ns_to_ktime(new);
All of your leading whitespace is gone.
You can't use the web client of gmail to send patches inline, sorry.
Can you fix this up and resend all of the backports, none of these worked :(
Sorry for the formatting problem Greg.
I just used `git send-email` to send out the backports. They are at:
v4.19: https://www.spinics.net/lists/stable/msg347573.html
v4.14: https://www.spinics.net/lists/stable/msg347576.html
v4.9: https://www.spinics.net/lists/stable/msg347577.html
v4.4: https://www.spinics.net/lists/stable/msg347578.html
v3.16: https://www.spinics.net/lists/stable/msg347579.html
Hopefully that works for you. But if they are still broken somehow (for which I'd be very sorry), you could consider cherry-picking the patch from my GitHub repository (forked from yours):
Those worked just fine from the emails you just sent (note, try using lore.kernel.org instead of other mail archives, as I could not pull a patch out of spinics.net).
I'll let Ben take the 3.16 version, as he handles that tree.
thanks,
greg k-h
On Sat, Dec 7, 2019 at 3:57 AM Greg KH gregkh@linuxfoundation.org wrote:
On Fri, Dec 06, 2019 at 05:47:10PM -0800, Xuewei Zhang wrote:
On Fri, Dec 6, 2019 at 6:11 AM Greg KH gregkh@linuxfoundation.org wrote:
On Tue, Dec 03, 2019 at 03:30:05PM -0800, Xuewei Zhang wrote:
Backport patch that applies cleanly to the 4.19, 4.14, 4.9, and 4.4 stable trees:
Did you send this twice?
Yes, sorry, that was sent twice by accident.
And the patch is totally corrupted:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f77fcd37b226..f0abb8fe0ae9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4868,20 +4868,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
if (++count > 3) {
u64 new, old = ktime_to_ns(cfs_b->period);
- new = (old * 147) / 128; /* ~115% */
- new = min(new, max_cfs_quota_period);
- cfs_b->period = ns_to_ktime(new);
All of your leading whitespace is gone.
You can't use the web client of gmail to send patches inline, sorry.
Can you fix this up and resend all of the backports, none of these worked :(
Sorry for the formatting problem Greg.
I just used `git send-email` to send out the backports. They are at:
v4.19: https://www.spinics.net/lists/stable/msg347573.html
v4.14: https://www.spinics.net/lists/stable/msg347576.html
v4.9: https://www.spinics.net/lists/stable/msg347577.html
v4.4: https://www.spinics.net/lists/stable/msg347578.html
v3.16: https://www.spinics.net/lists/stable/msg347579.html
Hopefully that works for you. But if they are still broken somehow (for which I'd be very sorry), you could consider cherry-picking the patch from my GitHub repository (forked from yours):
Those worked just fine from the emails you just sent (note, try using lore.kernel.org instead of other mail archives, as I could not pull a patch out of spinics.net).
I'll let Ben take the 3.16 version, as he handles that tree.
thanks,
greg k-h
Thanks a lot Greg!
I will use lore.kernel.org next time. Thanks for the tips!
Best regards, Xuewei