To change the active state of an IRQ via MMIO, halt is requested for all vcpus of
the affected guest before modifying the IRQ state. This is done by calling
cond_resched_lock() in vgic_mmio_change_active(). However, interrupts are
disabled at this point and we cannot reschedule a vcpu.
Solve this by waiting for all vcpus to be halted after emitting the halt
request.
Signed-off-by: Julien Thierry <julien.thierry(a)arm.com>
Suggested-by: Marc Zyngier <marc.zyngier(a)arm.com>
Cc: Christoffer Dall <christoffer.dall(a)arm.com>
Cc: Marc Zyngier <marc.zyngier(a)arm.com>
Cc: stable(a)vger.kernel.org
---
virt/kvm/arm/vgic/vgic-mmio.c | 36 ++++++++++++++----------------------
1 file changed, 14 insertions(+), 22 deletions(-)
diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index f56ff1c..5c76a92 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -313,27 +313,6 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
spin_lock_irqsave(&irq->irq_lock, flags);
- /*
- * If this virtual IRQ was written into a list register, we
- * have to make sure the CPU that runs the VCPU thread has
- * synced back the LR state to the struct vgic_irq.
- *
- * As long as the conditions below are true, we know the VCPU thread
- * may be on its way back from the guest (we kicked the VCPU thread in
- * vgic_change_active_prepare) and still has to sync back this IRQ,
- * so we release and re-acquire the spin_lock to let the other thread
- * sync back the IRQ.
- *
- * When accessing VGIC state from user space, requester_vcpu is
- * NULL, which is fine, because we guarantee that no VCPUs are running
- * when accessing VGIC state from user space so irq->vcpu->cpu is
- * always -1.
- */
- while (irq->vcpu && /* IRQ may have state in an LR somewhere */
- irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
- irq->vcpu->cpu != -1) /* VCPU thread is running */
- cond_resched_lock(&irq->irq_lock);
-
if (irq->hw) {
vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
} else {
@@ -368,8 +347,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
*/
static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
{
- if (intid > VGIC_NR_PRIVATE_IRQS)
+ if (intid > VGIC_NR_PRIVATE_IRQS) {
+ struct kvm_vcpu *tmp;
+ int i;
+
kvm_arm_halt_guest(vcpu->kvm);
+
+ /* Wait for each vcpu to be halted */
+ kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+ if (tmp == vcpu)
+ continue;
+
+ while (tmp->cpu != -1)
+ cond_resched();
+ }
+ }
}
/* See vgic_change_active_prepare */
--
1.9.1
From: Henrik Austad <haustad(a)cisco.com>
Short story:
The following patches are needed on a 4.4 kernel to avoid an Oops in
the scheduler when a sched_rr task and a sched_deadline task contend
on the same futex (with PI).
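For illustration only (this is not part of the series and not the pi_stress
source, just a minimal sketch of the contention pattern; names are made up):
a SCHED_DEADLINE thread and a SCHED_RR thread repeatedly take the same
priority-inheritance mutex, which uses FUTEX_LOCK_PI/FUTEX_UNLOCK_PI under
the hood. Run as root; it loops until interrupted.

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef SCHED_DEADLINE
  #define SCHED_DEADLINE 6
  #endif

  /* glibc has no sched_setattr() wrapper; mirror the kernel's sched_attr. */
  struct dl_attr {
          uint32_t size;
          uint32_t sched_policy;
          uint64_t sched_flags;
          int32_t  sched_nice;
          uint32_t sched_priority;
          uint64_t sched_runtime;
          uint64_t sched_deadline;
          uint64_t sched_period;
  };

  static pthread_mutex_t pi_lock;

  static void *deadline_thread(void *unused)
  {
          struct dl_attr attr = {
                  .size = sizeof(attr),
                  .sched_policy = SCHED_DEADLINE,
                  .sched_runtime = 100000,   /* same numbers as the pi_stress run below */
                  .sched_deadline = 200000,
                  .sched_period = 200000,
          };

          if (syscall(SYS_sched_setattr, 0, &attr, 0))
                  perror("sched_setattr");

          for (;;) {                         /* hammer the PI futex */
                  pthread_mutex_lock(&pi_lock);
                  pthread_mutex_unlock(&pi_lock);
          }
          return NULL;
  }

  static void *rr_thread(void *unused)
  {
          struct sched_param sp = { .sched_priority = 2 };

          pthread_setschedparam(pthread_self(), SCHED_RR, &sp);
          for (;;) {
                  pthread_mutex_lock(&pi_lock);
                  pthread_mutex_unlock(&pi_lock);
          }
          return NULL;
  }

  int main(void)
  {
          pthread_mutexattr_t ma;
          pthread_t dl, rr;

          pthread_mutexattr_init(&ma);
          pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT); /* PI futex */
          pthread_mutex_init(&pi_lock, &ma);

          pthread_create(&dl, NULL, deadline_thread, NULL);
          pthread_create(&rr, NULL, rr_thread, NULL);
          pthread_join(dl, NULL);            /* never returns; Ctrl-C to stop */
          return 0;
  }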
Longer story:
On one of our arm64 systems, we occasionally crash with an Oops in the
scheduler with the following backtrace.
[<ffffffc0000ee398>] enqueue_task_dl+0x1f0/0x420
[<ffffffc0000d0f14>] activate_task+0x7c/0x90
[<ffffffc0000edbdc>] push_dl_task+0x164/0x1c8
[<ffffffc0000edc60>] push_dl_tasks+0x20/0x30
[<ffffffc0000cc00c>] __balance_callback+0x44/0x68
[<ffffffc000d2c018>] __schedule+0x6f0/0x728
[<ffffffc000d2c278>] schedule+0x78/0x98
[<ffffffc000d2e76c>] __rt_mutex_slowlock+0x9c/0x108
[<ffffffc000d2e9d0>] rt_mutex_slowlock+0xd8/0x198
[<ffffffc0000f7f28>] rt_mutex_timed_futex_lock+0x30/0x40
[<ffffffc00012c1a8>] futex_lock_pi+0x200/0x3b0
[<ffffffc00012cf84>] do_futex+0x1c4/0x550
[<ffffffc00012d92c>] compat_SyS_futex+0x10c/0x138
[<ffffffc00008504c>] __sys_trace_return+0x0/0x4
This seems to be the same bug Xunlei Pang triggered and fixed in
e96a7705e7d3 "sched/rtmutex/deadline: Fix a PI crash for deadline
tasks". As noted by Peter Zijlstra in the previous attempt, this fix
requires a few other patches, most notably the FUTEX_UNLOCK_PI series
[1].
Testing this on a dual-core VM I have not been able to reproduce the
same crash, but pi_stress (part of the rt-tests suite) reveals that
vanilla 4.4.162 behaves rather badly with a mix of deadline and
sched_(rr|fifo) tasks:
time pi_stress --rr --mlockall --sched id=high,policy=deadline,runtime=100000,deadline=200000,period=200000
Starting PI Stress Test
Number of thread groups: 1
Duration of test run: infinite
Number of inversions per group: unlimited
Admin thread SCHED_RR priority 4
1 groups of 3 threads will be created
High thread SCHED_DEADLINE runtime 100000 deadline 200000 period 200000
Med thread SCHED_RR priority 2
Low thread SCHED_RR priority 1
Current Inversions: 141627
WATCHDOG triggered: group 0 is deadlocked!
reporter stopping due to watchdog event
Stopping test
Terminated
real 0m26.291s
user 0m0.148s
sys 0m18.819s
With this series applied, the test ran for ~4.5 hours and again for 129
minutes (when I remembered to time it) before crashing:
time pi_stress --rr --mlockall --sched id=high,policy=deadline,runtime=100000,deadline=200000,period=200000
Starting PI Stress Test
Number of thread groups: 1
Duration of test run: infinite
Number of inversions per group: unlimited
Admin thread SCHED_RR priority 4
1 groups of 3 threads will be created
High thread SCHED_DEADLINE runtime 100000 deadline 200000 period 200000
Med thread SCHED_RR priority 2
Low thread SCHED_RR priority 1
Current Inversions: 51985223
WATCHDOG triggered: group 0 is deadlocked!
reporter stopping due to watchdog event
Stopping test
Terminated
real 129m38.807s
user 0m59.084s
sys 109m53.666s
So clearly not perfect, but a *lot* better.
The same series on our vendor-4.4 kernel moves pi_stress from deadlocking
after ~30 seconds up to the same level as the VM (the test is still
running as of this writing).
I suspect other users of 4.4 would benefit from having these patches
backported, so I am tagging them for stable. I assume 4.9 and 4.14 could
benefit as well, but I have not had time to look into those.
1) https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1359667.html
Peter Zijlstra (13):
futex: Cleanup variable names for futex_top_waiter()
futex: Use smp_store_release() in mark_wake_futex()
futex: Remove rt_mutex_deadlock_account_*()
futex,rt_mutex: Provide futex specific rt_mutex API
futex: Change locking rules
futex: Cleanup refcounting
futex: Rework inconsistent rt_mutex/futex_q state
futex: Pull rt_mutex_futex_unlock() out from under hb->lock
futex,rt_mutex: Introduce rt_mutex_init_waiter()
futex,rt_mutex: Restructure rt_mutex_finish_proxy_lock()
futex: Rework futex_lock_pi() to use rt_mutex_*_proxy_lock()
futex: Futex_unlock_pi() determinism
futex: Drop hb->lock before enqueueing on the rtmutex
Thomas Gleixner (2):
rtmutex: Make wait_lock irq safe
futex: Rename free_pi_state() to put_pi_state()
Xunlei Pang (2):
rtmutex: Deboost before waking up the top waiter
sched/rtmutex/deadline: Fix a PI crash for deadline tasks
include/linux/init_task.h | 1 +
include/linux/sched.h | 2 +
include/linux/sched/rt.h | 1 +
kernel/fork.c | 1 +
kernel/futex.c | 532 ++++++++++++++++++++++++++--------------
kernel/locking/rtmutex-debug.c | 9 -
kernel/locking/rtmutex-debug.h | 3 -
kernel/locking/rtmutex.c | 406 ++++++++++++++++++------------
kernel/locking/rtmutex.h | 2 -
kernel/locking/rtmutex_common.h | 24 +-
kernel/sched/core.c | 2 +
11 files changed, 620 insertions(+), 363 deletions(-)
--
2.7.4
From: Coly Li <colyli(a)suse.de>
Commit b1092c9af9ed ("bcache: allow quick writeback when backing idle")
allows the writeback rate to be faster if there is no I/O request on a
bcache device. It works well if there is only one bcache device attached
to the cache set. If there are many bcache devices attached to a cache
set, it may introduce a performance regression because multiple faster
writeback threads of the idle bcache devices will compete for the btree
level locks with the bcache device that has I/O requests coming.
This patch fixes the above issue by only permitting fast writeback when
all bcache devices attached to the cache set are idle. If one of the
bcache devices gets a new I/O request, all writeback throughput is
minimized immediately and the PI controller __update_writeback_rate()
decides the upcoming writeback rate for each bcache device.
Also, when all bcache devices are idle, limiting the writeback rate to a
small number is a waste of throughput, especially when the backing devices
are slower non-rotational devices (e.g. SATA SSD). This patch sets a max
writeback rate for each backing device if the whole cache set is idle. A
faster writeback rate in idle time means new I/Os may have more available
space for dirty data, and people may observe better write performance then.
Please note bcache may change its cache mode at run time, and this patch
still works if the cache mode is switched from writeback mode while there
is still dirty data on the cache.
Fixes: b1092c9af9ed ("bcache: allow quick writeback when backing idle")
Cc: stable(a)vger.kernel.org #4.16+
Signed-off-by: Coly Li <colyli(a)suse.de>
Tested-by: Kai Krakow <kai(a)kaishome.de>
Tested-by: Stefan Priebe <s.priebe(a)profihost.ag>
Cc: Michael Lyle <mlyle(a)lyle.org>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
(cherry picked from commit ea8c5356d39048bc94bae068228f51ddbecc6b89)
Signed-off-by: Kai Krakow <kai(a)kaishome.de>
---
drivers/md/bcache/bcache.h | 10 ++---
drivers/md/bcache/request.c | 54 ++++++++++++++++++++++++-
drivers/md/bcache/super.c | 4 ++
drivers/md/bcache/sysfs.c | 14 +++++--
drivers/md/bcache/util.c | 2 +-
drivers/md/bcache/util.h | 2 +-
drivers/md/bcache/writeback.c | 91 +++++++++++++++++++++++++++++--------------
7 files changed, 133 insertions(+), 44 deletions(-)
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index d6bf294f3907..6ba41887664a 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -328,13 +328,6 @@ struct cached_dev {
*/
atomic_t has_dirty;
- /*
- * Set to zero by things that touch the backing volume-- except
- * writeback. Incremented by writeback. Used to determine when to
- * accelerate idle writeback.
- */
- atomic_t backing_idle;
-
struct bch_ratelimit writeback_rate;
struct delayed_work writeback_rate_update;
@@ -514,6 +507,8 @@ struct cache_set {
struct cache_accounting accounting;
unsigned long flags;
+ atomic_t idle_counter;
+ atomic_t at_max_writeback_rate;
struct cache_sb sb;
@@ -523,6 +518,7 @@ struct cache_set {
struct bcache_device **devices;
unsigned devices_max_used;
+ atomic_t attached_dev_nr;
struct list_head cached_devs;
uint64_t cached_dev_sectors;
struct closure caching;
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index ae67f5fa8047..6e08eb89abee 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -1102,6 +1102,44 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio)
generic_make_request(bio);
}
+static void quit_max_writeback_rate(struct cache_set *c,
+ struct cached_dev *this_dc)
+{
+ int i;
+ struct bcache_device *d;
+ struct cached_dev *dc;
+
+ /*
+ * mutex bch_register_lock may compete with other parallel requesters,
+ * or attach/detach operations on other backing devices. Waiting for
+ * the mutex lock may increase I/O request latency for seconds or more.
+ * To avoid such a situation, if mutex_trylock() fails, only the writeback
+ * rate of the current cached device is set to 1, and __update_writeback_rate()
+ * will decide writeback rate of other cached devices (remember now
+ * c->idle_counter is 0 already).
+ */
+ if (mutex_trylock(&bch_register_lock)) {
+ for (i = 0; i < c->devices_max_used; i++) {
+ if (!c->devices[i])
+ continue;
+
+ if (UUID_FLASH_ONLY(&c->uuids[i]))
+ continue;
+
+ d = c->devices[i];
+ dc = container_of(d, struct cached_dev, disk);
+ /*
+ * set writeback rate to default minimum value,
+ * then let update_writeback_rate() to decide the
+ * upcoming rate.
+ */
+ atomic_long_set(&dc->writeback_rate.rate, 1);
+ }
+ mutex_unlock(&bch_register_lock);
+ } else
+ atomic_long_set(&this_dc->writeback_rate.rate, 1);
+}
+
/* Cached devices - read & write stuff */
static blk_qc_t cached_dev_make_request(struct request_queue *q,
@@ -1119,7 +1157,21 @@ static blk_qc_t cached_dev_make_request(struct request_queue *q,
return BLK_QC_T_NONE;
}
- atomic_set(&dc->backing_idle, 0);
+ if (likely(d->c)) {
+ if (atomic_read(&d->c->idle_counter))
+ atomic_set(&d->c->idle_counter, 0);
+ /*
+ * If at_max_writeback_rate of cache set is true and new I/O
+ * comes, quit max writeback rate of all cached devices
+ * attached to this cache set, and set at_max_writeback_rate
+ * to false.
+ */
+ if (unlikely(atomic_read(&d->c->at_max_writeback_rate) == 1)) {
+ atomic_set(&d->c->at_max_writeback_rate, 0);
+ quit_max_writeback_rate(d->c, dc);
+ }
+ }
+
generic_start_io_acct(q, rw, bio_sectors(bio), &d->disk->part0);
bio_set_dev(bio, dc->bdev);
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index fa4058e43202..dc7b6131ddbb 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -696,6 +696,8 @@ static void bcache_device_detach(struct bcache_device *d)
{
lockdep_assert_held(&bch_register_lock);
+ atomic_dec(&d->c->attached_dev_nr);
+
if (test_bit(BCACHE_DEV_DETACHING, &d->flags)) {
struct uuid_entry *u = d->c->uuids + d->id;
@@ -1138,6 +1140,7 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
bch_cached_dev_run(dc);
bcache_device_link(&dc->disk, c, "bdev");
+ atomic_inc(&c->attached_dev_nr);
/* Allow the writeback thread to proceed */
up_write(&dc->writeback_lock);
@@ -1687,6 +1690,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
c->block_bits = ilog2(sb->block_size);
c->nr_uuids = bucket_bytes(c) / sizeof(struct uuid_entry);
c->devices_max_used = 0;
+ atomic_set(&c->attached_dev_nr, 0);
c->btree_pages = bucket_pages(c);
if (c->btree_pages > BTREE_MAX_PAGES)
c->btree_pages = max_t(int, c->btree_pages / 4,
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 225b15aa0340..a56067e80b10 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -170,7 +170,8 @@ SHOW(__bch_cached_dev)
var_printf(writeback_running, "%i");
var_print(writeback_delay);
var_print(writeback_percent);
- sysfs_hprint(writeback_rate, dc->writeback_rate.rate << 9);
+ sysfs_hprint(writeback_rate,
+ atomic_long_read(&dc->writeback_rate.rate) << 9);
sysfs_hprint(io_errors, atomic_read(&dc->io_errors));
sysfs_printf(io_error_limit, "%i", dc->error_limit);
sysfs_printf(io_disable, "%i", dc->io_disable);
@@ -188,7 +189,8 @@ SHOW(__bch_cached_dev)
char change[20];
s64 next_io;
- bch_hprint(rate, dc->writeback_rate.rate << 9);
+ bch_hprint(rate,
+ atomic_long_read(&dc->writeback_rate.rate) << 9);
bch_hprint(dirty, bcache_dev_sectors_dirty(&dc->disk) << 9);
bch_hprint(target, dc->writeback_rate_target << 9);
bch_hprint(proportional,dc->writeback_rate_proportional << 9);
@@ -255,8 +257,12 @@ STORE(__cached_dev)
sysfs_strtoul_clamp(writeback_percent, dc->writeback_percent, 0, 40);
- sysfs_strtoul_clamp(writeback_rate,
- dc->writeback_rate.rate, 1, INT_MAX);
+ if (attr == &sysfs_writeback_rate) {
+ int v;
+
+ sysfs_strtoul_clamp(writeback_rate, v, 1, INT_MAX);
+ atomic_long_set(&dc->writeback_rate.rate, v);
+ }
sysfs_strtoul_clamp(writeback_rate_update_seconds,
dc->writeback_rate_update_seconds,
diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index fc479b026d6d..b15256bcf0e7 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -200,7 +200,7 @@ uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
{
uint64_t now = local_clock();
- d->next += div_u64(done * NSEC_PER_SEC, d->rate);
+ d->next += div_u64(done * NSEC_PER_SEC, atomic_long_read(&d->rate));
/* Bound the time. Don't let us fall further than 2 seconds behind
* (this prevents unnecessary backlog that would make it impossible
diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
index cced87f8eb27..f7b0133c9d2f 100644
--- a/drivers/md/bcache/util.h
+++ b/drivers/md/bcache/util.h
@@ -442,7 +442,7 @@ struct bch_ratelimit {
* Rate at which we want to do work, in units per second
* The units here correspond to the units passed to bch_next_delay()
*/
- uint32_t rate;
+ atomic_long_t rate;
};
static inline void bch_ratelimit_reset(struct bch_ratelimit *d)
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index ad45ebe1a74b..9f5e33324d1d 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -104,11 +104,56 @@ static void __update_writeback_rate(struct cached_dev *dc)
dc->writeback_rate_proportional = proportional_scaled;
dc->writeback_rate_integral_scaled = integral_scaled;
- dc->writeback_rate_change = new_rate - dc->writeback_rate.rate;
- dc->writeback_rate.rate = new_rate;
+ dc->writeback_rate_change = new_rate -
+ atomic_long_read(&dc->writeback_rate.rate);
+ atomic_long_set(&dc->writeback_rate.rate, new_rate);
dc->writeback_rate_target = target;
}
+static bool set_at_max_writeback_rate(struct cache_set *c,
+ struct cached_dev *dc)
+{
+ /*
+ * Idle_counter is increased every time update_writeback_rate() is
+ * called. If all backing devices attached to the same cache set have
+ * identical dc->writeback_rate_update_seconds values, it is about 6
+ * rounds of update_writeback_rate() on each backing device before
+ * c->at_max_writeback_rate is set to 1, and then the max writeback rate is set
+ * to each dc->writeback_rate.rate.
+ * In order to avoid extra locking cost for counting exact dirty cached
+ * devices number, c->attached_dev_nr is used to calculate the idle
+ * threshold. It might be bigger if not all cached devices are in write-
+ * back mode, but it still works well with limited extra rounds of
+ * update_writeback_rate().
+ */
+ if (atomic_inc_return(&c->idle_counter) <
+ atomic_read(&c->attached_dev_nr) * 6)
+ return false;
+
+ if (atomic_read(&c->at_max_writeback_rate) != 1)
+ atomic_set(&c->at_max_writeback_rate, 1);
+
+ atomic_long_set(&dc->writeback_rate.rate, INT_MAX);
+
+ /* keep writeback_rate_target as existing value */
+ dc->writeback_rate_proportional = 0;
+ dc->writeback_rate_integral_scaled = 0;
+ dc->writeback_rate_change = 0;
+
+ /*
+ * Check c->idle_counter and c->at_max_writeback_rate again in case
+ * new I/O arrives before set_at_max_writeback_rate() returns.
+ * Then the writeback rate is set to 1, and its new value should be
+ * decided via __update_writeback_rate().
+ */
+ if ((atomic_read(&c->idle_counter) <
+ atomic_read(&c->attached_dev_nr) * 6) ||
+ !atomic_read(&c->at_max_writeback_rate))
+ return false;
+
+ return true;
+}
+
static void update_writeback_rate(struct work_struct *work)
{
struct cached_dev *dc = container_of(to_delayed_work(work),
@@ -136,13 +181,20 @@ static void update_writeback_rate(struct work_struct *work)
return;
}
- down_read(&dc->writeback_lock);
+ if (atomic_read(&dc->has_dirty) && dc->writeback_percent) {
+ /*
+ * If the whole cache set is idle, set_at_max_writeback_rate()
+ * will set writeback rate to a max number. Then it is
+ * unnecessary to update writeback rate for an idle cache set
+ * in maximum writeback rate number(s).
+ */
+ if (!set_at_max_writeback_rate(c, dc)) {
+ down_read(&dc->writeback_lock);
+ __update_writeback_rate(dc);
+ up_read(&dc->writeback_lock);
+ }
+ }
- if (atomic_read(&dc->has_dirty) &&
- dc->writeback_percent)
- __update_writeback_rate(dc);
-
- up_read(&dc->writeback_lock);
/*
* CACHE_SET_IO_DISABLE might be set via sysfs interface,
@@ -422,27 +474,6 @@ static void read_dirty(struct cached_dev *dc)
delay = writeback_delay(dc, size);
- /* If the control system would wait for at least half a
- * second, and there's been no reqs hitting the backing disk
- * for awhile: use an alternate mode where we have at most
- * one contiguous set of writebacks in flight at a time. If
- * someone wants to do IO it will be quick, as it will only
- * have to contend with one operation in flight, and we'll
- * be round-tripping data to the backing disk as quickly as
- * it can accept it.
- */
- if (delay >= HZ / 2) {
- /* 3 means at least 1.5 seconds, up to 7.5 if we
- * have slowed way down.
- */
- if (atomic_inc_return(&dc->backing_idle) >= 3) {
- /* Wait for current I/Os to finish */
- closure_sync(&cl);
- /* And immediately launch a new set. */
- delay = 0;
- }
- }
-
while (!kthread_should_stop() &&
!test_bit(CACHE_SET_IO_DISABLE, &dc->disk.c->flags) &&
delay) {
@@ -715,7 +746,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
dc->writeback_running = true;
dc->writeback_percent = 10;
dc->writeback_delay = 30;
- dc->writeback_rate.rate = 1024;
+ atomic_long_set(&dc->writeback_rate.rate, 1024);
dc->writeback_rate_minimum = 8;
dc->writeback_rate_update_seconds = WRITEBACK_RATE_UPDATE_SECS_DEFAULT;
--
2.16.4
Commit a9c8088c7988 ("i2c: i801: Don't restore config registers on
runtime PM") nullified the runtime PM suspend/resume callback pointers
while keeping the runtime PM enabled.
This causes the SMBus PCI device to stay in the D0 power state, and sysfs
/sys/bus/pci/devices/[SMBus PCI ID]/power/runtime_status shows "error"
when the runtime PM framework attempts to autosuspend the device. This
is due to the PCI bus runtime PM code, which checks for driver runtime PM
callbacks and returns -ENOSYS if they are not set.
Since i2c-i801.c doesn't need to do anything device specific beyond PCI
device power state management, Jean Delvare proposed that this be fixed
at the PCI subsystem core level rather than by adding dummy runtime PM
callback functions in the PCI drivers.
Change the pci_pm_runtime_suspend()/pci_pm_runtime_resume() semantics so
that they allow changing the PCI device power state during runtime PM
transitions even if no runtime PM callback functions are defined.
This change fixes the runtime PM regression on i2c-i801.c.
It is not obvious why the code had hard requirements for the runtime PM
callbacks. The test has been there since the code was introduced by
commit 6cbf82148ff2 ("PCI PM: Run-time callbacks for PCI bus type").
On the other hand, a similar change was done to the generic runtime PM
callbacks way back in commit 05aa55dddb9e ("PM / Runtime: Lenient
generic runtime pm callbacks").
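To illustrate the intent (a hedged sketch, not taken from this patch; the
driver names below are made up): a PCI driver like i2c-i801 can enable
runtime PM without providing runtime callbacks, and with this change the
PCI core still performs the power state transition on autosuspend instead
of failing with -ENOSYS.

  #include <linux/module.h>
  #include <linux/pci.h>
  #include <linux/pm_runtime.h>

  static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
  {
          int ret = pcim_enable_device(pdev);

          if (ret)
                  return ret;

          /* ... device specific setup ... */

          /* Enable runtime PM; note there are no runtime callbacks below. */
          pm_runtime_set_autosuspend_delay(&pdev->dev, 1000);
          pm_runtime_use_autosuspend(&pdev->dev);
          pm_runtime_put_autosuspend(&pdev->dev);
          pm_runtime_allow(&pdev->dev);
          return 0;
  }

  static void example_remove(struct pci_dev *pdev)
  {
          pm_runtime_forbid(&pdev->dev);
          pm_runtime_get_noresume(&pdev->dev);
  }

  static struct pci_driver example_driver = {
          .name   = "example-smbus",
          .probe  = example_probe,
          .remove = example_remove,
          /* no .driver.pm runtime_suspend/runtime_resume on purpose */
  };
  module_pci_driver(example_driver);

  MODULE_LICENSE("GPL");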
Fixes: a9c8088c7988 ("i2c: i801: Don't restore config registers on runtime PM")
Reported-by: Mika Westerberg <mika.westerberg(a)linux.intel.com>
Cc: <stable(a)vger.kernel.org> # 4.18+
Signed-off-by: Jarkko Nikula <jarkko.nikula(a)linux.intel.com>
---
I Cc'ed stable since this fixes the regression on i2c-i801.c, but we
probably want to get some test coverage first before applying it to
stable. Queueing for v4.20 sounds reasonable to me.
v2:
The previous version had a potential NULL dereference in the WARN_ONCE()
statement, noted by Jean Delvare. It is now covered by the
pm && pm->runtime_suspend test. Also, handling of the error code from
pm->runtime_suspend() was moved under the same code block where the
callback is called.
v1:
This is related to my i2c-i801.c fix thread back in June, which I completely
forgot about until now: https://lkml.org/lkml/2018/6/27/642
The discussion back then was that it should be handled in the PCI PM core
instead of having dummy functions in the drivers. I wanted to respin with a patch.
---
drivers/pci/pci-driver.c | 27 ++++++++++++---------------
1 file changed, 12 insertions(+), 15 deletions(-)
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index bef17c3fca67..33f3f475e5c6 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -1251,30 +1251,29 @@ static int pci_pm_runtime_suspend(struct device *dev)
return 0;
}
- if (!pm || !pm->runtime_suspend)
- return -ENOSYS;
-
pci_dev->state_saved = false;
- error = pm->runtime_suspend(dev);
- if (error) {
+ if (pm && pm->runtime_suspend) {
+ error = pm->runtime_suspend(dev);
/*
* -EBUSY and -EAGAIN is used to request the runtime PM core
* to schedule a new suspend, so log the event only with debug
* log level.
*/
- if (error == -EBUSY || error == -EAGAIN)
+ if (error == -EBUSY || error == -EAGAIN) {
dev_dbg(dev, "can't suspend now (%pf returned %d)\n",
pm->runtime_suspend, error);
- else
+ return error;
+ } else if (error) {
dev_err(dev, "can't suspend (%pf returned %d)\n",
pm->runtime_suspend, error);
-
- return error;
+ return error;
+ }
}
pci_fixup_device(pci_fixup_suspend, pci_dev);
- if (!pci_dev->state_saved && pci_dev->current_state != PCI_D0
+ if (pm && pm->runtime_suspend
+ && !pci_dev->state_saved && pci_dev->current_state != PCI_D0
&& pci_dev->current_state != PCI_UNKNOWN) {
WARN_ONCE(pci_dev->current_state != prev,
"PCI PM: State of device not saved by %pF\n",
@@ -1292,7 +1291,7 @@ static int pci_pm_runtime_suspend(struct device *dev)
static int pci_pm_runtime_resume(struct device *dev)
{
- int rc;
+ int rc = 0;
struct pci_dev *pci_dev = to_pci_dev(dev);
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
@@ -1306,14 +1305,12 @@ static int pci_pm_runtime_resume(struct device *dev)
if (!pci_dev->driver)
return 0;
- if (!pm || !pm->runtime_resume)
- return -ENOSYS;
-
pci_fixup_device(pci_fixup_resume_early, pci_dev);
pci_enable_wake(pci_dev, PCI_D0, false);
pci_fixup_device(pci_fixup_resume, pci_dev);
- rc = pm->runtime_resume(dev);
+ if (pm && pm->runtime_resume)
+ rc = pm->runtime_resume(dev);
pci_dev->runtime_d3cold = false;
--
2.19.1
Since the addition of platform MSI support, there were two helpers
supposed to allocate/free IRQs for a device:
platform_msi_domain_alloc_irqs()
platform_msi_domain_free_irqs()
In these helpers, IRQ descriptors are allocated in the "alloc" routine
while they are freed in the "free" one.
Later, two other helpers have been added to handle IRQ domains on top
of MSI domains:
platform_msi_domain_alloc()
platform_msi_domain_free()
Seen from the outside, the logic is pretty close to that of the former
helpers, and people used them with the same logic as before: a
platform_msi_domain_alloc() call should be balanced with a
platform_msi_domain_free() call. While this is probably what was
intended, platform_msi_domain_free() does not remove/free the IRQ
descriptor(s) created/inserted in platform_msi_domain_alloc().
One effect of this situation is that removing a module that requested
an IRQ will leave an orphaned IRQ descriptor (with an allocated MSI
entry) in the device's descriptor list. The next time the module is
inserted, the allocation in the MSI domain will happen twice: once for
the leftover descriptor and once for the new one. It also has the side
effect of quickly overshooting the maximum number of allocated MSIs and
then preventing any module requesting an interrupt in the same domain
from being inserted anymore.
This situation has been met with loops of insertion/removal of the
mvpp2.ko module (requesting 15 MSIs each time).
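For context, a minimal sketch of how these helpers are typically paired in
an irqchip driver's irq_domain_ops (the function names are illustrative,
not from this patch); before this fix, the free path left the msi_desc
entries allocated by platform_msi_domain_alloc() on the device's MSI list:

  #include <linux/irqdomain.h>
  #include <linux/msi.h>

  static int example_irq_domain_alloc(struct irq_domain *domain,
                                      unsigned int virq, unsigned int nr_irqs,
                                      void *args)
  {
          int ret;

          /* Allocates and inserts one msi_desc per IRQ on the device list */
          ret = platform_msi_domain_alloc(domain, virq, nr_irqs);
          if (ret)
                  return ret;

          /* ... map the IRQs into the hardware-specific hierarchy here ... */
          return 0;
  }

  static void example_irq_domain_free(struct irq_domain *domain,
                                      unsigned int virq, unsigned int nr_irqs)
  {
          /*
           * With this patch, this call now also deletes and frees the
           * msi_desc entries, so repeated module load/unload no longer
           * leaks descriptors.
           */
          platform_msi_domain_free(domain, virq, nr_irqs);
  }

  static const struct irq_domain_ops example_irq_domain_ops = {
          .alloc  = example_irq_domain_alloc,
          .free   = example_irq_domain_free,
  };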
Fixes: 552c494a7666 ("platform-msi: Allow creation of a MSI-based stacked irq domain")
Cc: stable(a)vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal(a)bootlin.com>
---
drivers/base/platform-msi.c | 6 ++++--
include/linux/msi.h | 2 ++
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/base/platform-msi.c b/drivers/base/platform-msi.c
index 60d6cc618f1c..6d54905c6263 100644
--- a/drivers/base/platform-msi.c
+++ b/drivers/base/platform-msi.c
@@ -366,14 +366,16 @@ void platform_msi_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nvec)
{
struct platform_msi_priv_data *data = domain->host_data;
- struct msi_desc *desc;
- for_each_msi_entry(desc, data->dev) {
+ struct msi_desc *desc, *tmp;
+ for_each_msi_entry_safe(desc, tmp, data->dev) {
if (WARN_ON(!desc->irq || desc->nvec_used != 1))
return;
if (!(desc->irq >= virq && desc->irq < (virq + nvec)))
continue;
irq_domain_free_irqs_common(domain, desc->irq, 1);
+ list_del(&desc->list);
+ free_msi_entry(desc);
}
}
diff --git a/include/linux/msi.h b/include/linux/msi.h
index 5839d8062dfc..be8ec813dbfb 100644
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -116,6 +116,8 @@ struct msi_desc {
list_first_entry(dev_to_msi_list((dev)), struct msi_desc, list)
#define for_each_msi_entry(desc, dev) \
list_for_each_entry((desc), dev_to_msi_list((dev)), list)
+#define for_each_msi_entry_safe(desc, tmp, dev) \
+ list_for_each_entry_safe((desc), (tmp), dev_to_msi_list((dev)), list)
#ifdef CONFIG_PCI_MSI
#define first_pci_msi_entry(pdev) first_msi_entry(&(pdev)->dev)
--
2.17.1
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From f45c752b65af46bf42963295c332865d95f97fff Mon Sep 17 00:00:00 2001
From: Josef Bacik <josef(a)toxicpanda.com>
Date: Fri, 28 Sep 2018 07:17:48 -0400
Subject: [PATCH] btrfs: release metadata before running delayed refs
We want to release the unused reservation we have since it refills the
delayed refs reserve, which will make everything go smoother when
running the delayed refs if we're short on our reservation.
CC: stable(a)vger.kernel.org # 4.4+
Reviewed-by: Omar Sandoval <osandov(a)fb.com>
Reviewed-by: Liu Bo <bo.liu(a)linux.alibaba.com>
Reviewed-by: Nikolay Borisov <nborisov(a)suse.com>
Signed-off-by: Josef Bacik <josef(a)toxicpanda.com>
Signed-off-by: David Sterba <dsterba(a)suse.com>
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index cadc747292d9..e7f618b17b07 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -1932,6 +1932,9 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
return ret;
}
+ btrfs_trans_release_metadata(trans);
+ trans->block_rsv = NULL;
+
/* make a pass through all the delayed refs we have so far
* any runnings procs may add more while we are here
*/
@@ -1941,9 +1944,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
return ret;
}
- btrfs_trans_release_metadata(trans);
- trans->block_rsv = NULL;
-
cur_trans = trans->transaction;
/*
Hello Kees,
On 2018/11/28 6:38, Kees Cook wrote:
> On Thu, Nov 22, 2018 at 11:54 PM, Wang Dongsheng
> <dongsheng.wang(a)hxt-semitech.com> wrote:
>> When ARCH_TASK_STRUCT_ON_STACK is selected, the first word of thread_info
>> is overwritten by STACK_END_MAGIC. In fact, ARCH_TASK_STRUCT_ON_STACK does
>> not mean a real task on the stack; it's only init_task on init_stack.
>>
>> Commit 0500871f21b2 ("Construct init thread stack in the linker script
>> rather than by union") added this macro and put task_struct into
>> thread_union. This brings us the following possibilities:
>> TASK_ON_STACK THREAD_INFO_IN_TASK STACK
>> ----- <-- thread_info & stack
>> N N | | --- <-- task
>> | | | |
>> ----- ---
>>
>> ----- <-- stack
>> N Y | | --- <-- task(Including thread_info)
>> | | | |
>> ----- ---
>>
>> ----- <-- stack & task & thread_info
>> Y N | |
>> | |
>> -----
>>
>> ----- <-- stack & task(Including thread_info)
>> Y Y | |
>> | |
>> -----
>> The kernel has handled the first two cases correctly.
>>
>> For the third case:
>> TASK_ON_STACK: Y. THREAD_INFO_IN_TASK: N. this case
>> should never happen, because the task and thread_info will overlap. So
>> when TASK_ON_STACK is selected, THREAD_INFO_IN_TASK must be selected too.
>>
>> For the fourth case:
>> When task on stack, the end of stack should add a sizeof(task_struct) offset.
>>
>> This patch handled with the third and fourth case.
>>
>> Fixes: 0500871f21b2 ("Construct init thread stack in the linker ...")
>>
>> Signed-off-by: Wang Dongsheng <dongsheng.wang(a)hxt-semitech.com>
>> Signed-off-by: Shunyong Yang <shunyong.yang(a)hxt-semitech.com>
>> ---
>> arch/Kconfig | 1 +
>> include/linux/sched/task_stack.h | 5 ++++-
>> 2 files changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/Kconfig b/arch/Kconfig
>> index e1e540ffa979..0a2c73e73195 100644
>> --- a/arch/Kconfig
>> +++ b/arch/Kconfig
>> @@ -251,6 +251,7 @@ config ARCH_HAS_SET_MEMORY
>> # Select if arch init_task must go in the __init_task_data section
>> config ARCH_TASK_STRUCT_ON_STACK
>> bool
>> + depends on THREAD_INFO_IN_TASK || IA64
> The "IA64" part shouldn't be needed since IA64 already selects it.
>
> Since it's selected, it also can't have a depends, IIUC.
Since the IA64 thread_info includes the task_struct, it doesn't need to
select THREAD_INFO_IN_TASK.
So we need to allow IA64 to select ARCH_TASK_STRUCT_ON_STACK without
THREAD_INFO_IN_TASK.
>> # Select if arch has its private alloc_task_struct() function
>> config ARCH_TASK_STRUCT_ALLOCATOR
>> diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
>> index 6a841929073f..624c48defb9e 100644
>> --- a/include/linux/sched/task_stack.h
>> +++ b/include/linux/sched/task_stack.h
>> @@ -7,6 +7,7 @@
>> */
>>
>> #include <linux/sched.h>
>> +#include <linux/sched/task.h>
>> #include <linux/magic.h>
>>
>> #ifdef CONFIG_THREAD_INFO_IN_TASK
>> @@ -25,7 +26,9 @@ static inline void *task_stack_page(const struct task_struct *task)
>>
>> static inline unsigned long *end_of_stack(const struct task_struct *task)
>> {
>> - return task->stack;
>> + if (!IS_ENABLED(CONFIG_ARCH_TASK_STRUCT_ON_STACK) || task != &init_task)
>> + return task->stack;
>> + return (unsigned long *)(task + 1);
>> }
> This seems like a strange place for the change. It feels more like
> init_task has been defined incorrectly.
The init_task will be put into init_stack when ARCH_TASK_STRUCT_ON_STACK
is selected.
include/asm-generic/vmlinux.lds.h:
#define INIT_TASK_DATA(align) \
. = ALIGN(align); \
__start_init_task = .; \
init_thread_union = .; \
init_stack = .; \
KEEP(*(.data..init_task)) \
KEEP(*(.data..init_thread_info)) \
. = __start_init_task + THREAD_SIZE; \
__end_init_task = .;
So we need end_of_stack() to offset by sizeof(task_struct).
Cheers,
Dongsheng
I've run into some problems which appear due to (a) recent patch(es) on
the wlcore wifi driver.
4.4.160 - commit 3fdd34643ffc378b5924941fad40352c04610294
4.9.131 - commit afeeecc764436f31d4447575bb9007732333818c
Earlier versions (4.9.130 and 4.4.159 - tested back to 4.4.49) do not
exhibit this problem. It is still present in 4.9.141.
master as of 4.20.0-rc4 does not exhibit this problem.
Basically, during client association when in AP mode (running hostapd),
handshake may or may not complete following a noticeable delay. If
successful, then the driver fails consistently in warn_slowpath_null
during disassociation. If unsuccessful, the wifi client attempts
multiple times, sometimes failing repeatedly. I've had clients unable to
connect for 3-5 minutes during testing, with the syslog filled with
dozens of backtraces. syslog details are below.
I'm working on an embedded device with a TI 3352 ARM processor and a
Murata wl1271 module in SDIO mode. We're running a fully patched Ubuntu
18.04 ARM build, with a kernel built from kernel.org's stable/linux repo
<https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=…>.
Relevant parts of the kernel config are included below.
The commit message states:
> I've only seen this few times with the runtime PM patches enabled so
> this one is probably not needed before that. This seems to work
> currently based on the current PM implementation timer. Let's apply
> this separately though in case others are hitting this issue.
We're not doing anything explicit with power management. The device is
an IoT edge gateway with battery backup, normally running on wall power.
The battery is currently used solely to shut down the system cleanly to
avoid filesystem corruption.
The device tree is configured to keep power in suspend; but the device
should never suspend, so in our case, there is no need to call
wl1271_ps_elp_wakeup() or wl1271_ps_elp_sleep(), as occurs in the patch.
&mmc2 {
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&wl1271_pins>;
vmmc-supply = <&vwifi>;
bus-width = <4>;
ti,non-removable;
/* am335x-evm.dts: ti,needs-special-hs-handling; - evm has wl18xx not
wl12xx */
cap-power-off-card;
keep-power-in-suspend;
#address-cells = <1>;
#size-cells = <0>;
wlcore: wlcore@2 {
compatible = "ti,wl1271";
reg = <2>;
interrupt-parent = <&gpio1>;
interrupts = <14 IRQ_TYPE_LEVEL_HIGH>; /* gpio1[14] */
ref-clock-frequency = <38400000>;
};
};
At this point, we're unable to ship a kernel version later than 4.9.130,
so it's important to us to get this issue resolved.
The simplest thing for us would be if these changes could be reverted;
but I'd be happy to debug or try some things out.
Thanks,
Dietmar May
Software Architect
Intellastar LLC
_Association_
Nov 16 15:25:52 ice hostapd: wlan0: STA 84:3a:4b:00:8d:04 IEEE 802.11:
authenticated
Nov 16 15:25:52 ice hostapd: wlan0: STA 84:3a:4b:00:8d:04 IEEE 802.11:
associated (aid 1)
Nov 16 15:25:52 ice hostapd: wlan0: STA 84:3a:4b:00:8d:04 RADIUS:
starting accounting session 5BEEE158-00000000
Nov 16 15:25:52 ice hostapd: wlan0: STA 84:3a:4b:00:8d:04 WPA: pairwise
key handshake completed (RSN)
_Disassociation_
Nov 16 15:26:05 ice kernel: ------------[ cut here ]------------
Nov 16 15:26:05 ice kernel: WARNING: CPU: 0 PID: 1067 at
drivers/net/wireless/ti/wlcore/ps.c:91 wl12xx_op_sta_state+0x208/0x56c
[wlcore]
Nov 16 15:26:05 ice kernel: Modules linked in: bridge stp llc cdc_ncm
usbnet mii cdc_acm usb_serial_simple usbserial bnep hci_uart bluetooth
xt_conntrack iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4
nf_nat nf_conntrack arc4 wl12xx wlcore mac80211 cfg80211 musb_dsps
musb_hdrc usbcore phy_am335x cppi41 phy_am335x_control phy_generic
usb_common ti_am335x_adc kfifo_buf industrialio wlcore_sdio omap_rng
rng_core musb_am335x rtc_omap omap_wdt ti_am335x_tscadc cpufreq_dt
leds_gpio led_class thermal_sys hwmon autofs4
Nov 16 15:26:05 ice kernel: CPU: 0 PID: 1067 Comm: hostapd Not tainted
4.9.131-ice245 #1
Nov 16 15:26:05 ice kernel: Hardware name: Generic AM33XX (Flattened
Device Tree)
Nov 16 15:26:05 ice kernel: [<c010d2e0>] (unwind_backtrace) from
[<c010b50c>] (show_stack+0x10/0x14)
Nov 16 15:26:05 ice kernel: [<c010b50c>] (show_stack) from [<c012e3ac>]
(__warn+0xd8/0x100)
Nov 16 15:26:05 ice kernel: [<c012e3ac>] (__warn) from [<c012e480>]
(warn_slowpath_null+0x20/0x28)
Nov 16 15:26:05 ice kernel: [<c012e480>] (warn_slowpath_null) from
[<bf2e521c>] (wl12xx_op_sta_state+0x208/0x56c [wlcore])
Nov 16 15:26:05 ice kernel: [<bf2e521c>] (wl12xx_op_sta_state [wlcore])
from [<bf20dbf8>] (drv_sta_state+0x84/0x6c8 [mac80211])
Nov 16 15:26:05 ice kernel: [<bf20dbf8>] (drv_sta_state [mac80211]) from
[<bf215284>] (__sta_info_destroy_part2+0x160/0x1b4 [mac80211])
Nov 16 15:26:05 ice kernel: [<bf215284>] (__sta_info_destroy_part2
[mac80211]) from [<bf2152f8>] (__sta_info_destroy+0x20/0x28 [mac80211])
Nov 16 15:26:05 ice kernel: [<bf2152f8>] (__sta_info_destroy [mac80211])
from [<bf21537c>] (sta_info_destroy_addr_bss+0x30/0x4c [mac80211])
Nov 16 15:26:05 ice kernel: [<bf21537c>] (sta_info_destroy_addr_bss
[mac80211]) from [<bf171c70>] (nl80211_del_station+0xe8/0x2b8 [cfg80211])
Nov 16 15:26:05 ice kernel: [<bf171c70>] (nl80211_del_station
[cfg80211]) from [<c05535c4>] (genl_rcv_msg+0x308/0x3e4)
Nov 16 15:26:05 ice kernel: [<c05535c4>] (genl_rcv_msg) from
[<c05527a0>] (netlink_rcv_skb+0xa4/0xe8)
Nov 16 15:26:05 ice kernel: [<c05527a0>] (netlink_rcv_skb) from
[<c05532a8>] (genl_rcv+0x20/0x34)
Nov 16 15:26:05 ice kernel: [<c05532a8>] (genl_rcv) from [<c0552100>]
(netlink_unicast+0x168/0x1f4)
Nov 16 15:26:05 ice kernel: [<c0552100>] (netlink_unicast) from
[<c0552540>] (netlink_sendmsg+0x2e8/0x378)
Nov 16 15:26:05 ice kernel: [<c0552540>] (netlink_sendmsg) from
[<c050396c>] (sock_sendmsg+0x14/0x24)
Nov 16 15:26:05 ice kernel: [<c050396c>] (sock_sendmsg) from
[<c05041f8>] (___sys_sendmsg+0x1ec/0x200)
Nov 16 15:26:05 ice kernel: [<c05041f8>] (___sys_sendmsg) from
[<c0504fa0>] (__sys_sendmsg+0x40/0x6c)
Nov 16 15:26:05 ice kernel: [<c0504fa0>] (__sys_sendmsg) from
[<c0107560>] (ret_fast_syscall+0x0/0x1c)
Nov 16 15:26:05 ice kernel: ---[ end trace 44f73265865f31c4 ]---
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE="am335x-pm-firmware.elf"
CONFIG_AM335X_PHY_USB=m
CONFIG_ARCH_MULTI_V6=y
CONFIG_ARCH_OMAP2PLUS=y
CONFIG_ARCH_OMAP2=y
CONFIG_ARCH_OMAP3=y
CONFIG_ARCH_OMAP=y
CONFIG_ARM_APPENDED_DTB=y
CONFIG_ARM_ATAG_DTB_COMPAT=y
CONFIG_ARM_CRYPTO=y
CONFIG_ARM_ERRATA_411920=y
CONFIG_ARM_ERRATA_430973=y
CONFIG_ARM_THUMBEE=y
CONFIG_CFG80211=m
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPUFREQ_DT=m
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_IDLE=y
CONFIG_CPUSETS=y
CONFIG_CPU_THERMAL=y
CONFIG_DMA_CMA=y
CONFIG_DMADEVICES=y
CONFIG_DMA_OMAP=y
CONFIG_MAC80211=m
CONFIG_MMC_OMAP_HS=y
CONFIG_MMC_OMAP=y
CONFIG_MMC=y
CONFIG_OMAP3_THERMAL=y
CONFIG_OMAP_IOMMU=y
CONFIG_OMAP_MUX_DEBUG=n
CONFIG_OMAP_OCP2SCP=y
CONFIG_OMAP_RESET_CLOCKS=y
CONFIG_OMAP_SSI=m
CONFIG_OMAP_USB2=m
CONFIG_OMAP_WATCHDOG=m
CONFIG_POWER_AVS_OMAP_CLASS3=y
CONFIG_POWER_AVS_OMAP=y
CONFIG_POWER_AVS=y
CONFIG_POWER_RESET=y
CONFIG_SLUB=y
CONFIG_SOC_AM33XX=y
CONFIG_SOC_TI=y
CONFIG_THERMAL_GOV_FAIR_SHARE=y
CONFIG_THERMAL_GOV_USER_SPACE=y
CONFIG_THERMAL=m
CONFIG_TI_AM335X_ADC=m
CONFIG_TI_CPSW=y
CONFIG_TI_CPTS=y
CONFIG_TI_DAVINCI_EMAC=y
CONFIG_TI_EDMA=y
CONFIG_TI_EMIF=m
CONFIG_TIMER_STATS=y
CONFIG_TI_PIPE3=y
CONFIG_TI_SOC_THERMAL=m
CONFIG_TI_THERMAL=y
CONFIG_WIRELESS=y
CONFIG_WL12XX=m
CONFIG_WL18XX=m
CONFIG_WLAN=y
CONFIG_WLCORE_SDIO=m
CONFIG_WLCORE_SPI=m
CONFIG_WL_TI=y