On Mon, Jan 20, 2014 at 11:41 PM, Frederic Weisbecker <fweisbec@gmail.com> wrote:
On Mon, Jan 20, 2014 at 08:30:10PM +0530, Viresh Kumar wrote:
On 20 January 2014 19:29, Lei Wen <adrian.wenl@gmail.com> wrote:
Hi Viresh,
Hi Lei,
I have one question regarding unbound workqueue migration in your case. You use hotplug to migrate the unbound work to other CPUs, but its CPU mask would still be 0xf, since it cannot be changed by cpuset.
My question is: how can you prevent this unbound work from migrating back to your isolated CPU? It seems to me there is no such mechanism in the kernel; am I understanding it wrong?
These work items are normally queued back from the work handler itself, and we normally queue them on the local CPU; that is the default behavior of the workqueue subsystem. So they end up on the same CPU again and again.
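To make that concrete, here is a minimal, hypothetical module sketch (not code from my series, names made up): a work item that re-queues itself. queue_work() without an explicit CPU targets the local CPU for a bound workqueue such as system_wq, so once the work has run on a CPU it keeps landing there.

#include <linux/module.h>
#include <linux/workqueue.h>

static struct work_struct requeue_work;

static void requeue_fn(struct work_struct *work)
{
	pr_info("requeue_fn running on CPU %d\n", raw_smp_processor_id());
	/* Re-queue ourselves: on a bound workqueue like system_wq this
	 * lands on the CPU we are currently running on. */
	queue_work(system_wq, work);
}

static int __init requeue_init(void)
{
	INIT_WORK(&requeue_work, requeue_fn);
	queue_work(system_wq, &requeue_work);
	return 0;
}

static void __exit requeue_exit(void)
{
	/* cancel_work_sync() also blocks further self-requeueing. */
	cancel_work_sync(&requeue_work);
}

module_init(requeue_init);
module_exit(requeue_exit);
MODULE_LICENSE("GPL");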
But for workqueues that have a global affinity, I think they can later be rescheduled on the old CPUs. I'm not sure about that, though, so I'm Cc'ing Tejun.
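If it helps, here is a rough in-kernel sketch of narrowing an unbound workqueue's cpumask with workqueue_attrs, so its work items stay off an isolated CPU 3. This is only an illustration under assumptions: whether apply_workqueue_attrs() is reachable from the code in question depends on the kernel version (it is not exported to modules everywhere), and the workqueue name and mask are made up.

#include <linux/workqueue.h>
#include <linux/cpumask.h>
#include <linux/gfp.h>

static struct workqueue_struct *iso_wq;

static int restrict_unbound_wq(void)
{
	struct workqueue_attrs *attrs;
	int ret;

	iso_wq = alloc_workqueue("iso_unbound_wq", WQ_UNBOUND, 0);
	if (!iso_wq)
		return -ENOMEM;

	/* Kernels of this vintage take a gfp argument here. */
	attrs = alloc_workqueue_attrs(GFP_KERNEL);
	if (!attrs)
		return -ENOMEM;

	/* Allow only CPUs 0-2, keeping the unbound workers off CPU 3. */
	cpumask_clear(attrs->cpumask);
	cpumask_set_cpu(0, attrs->cpumask);
	cpumask_set_cpu(1, attrs->cpumask);
	cpumask_set_cpu(2, attrs->cpumask);

	ret = apply_workqueue_attrs(iso_wq, attrs);
	free_workqueue_attrs(attrs);
	return ret;
}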
Agreed: since the worker thread is allowed to enter all CPUs, nothing prevents the scheduler from migrating it.
But here is one point: I see Viresh already set up two cpusets with scheduler load balancing disabled, so shouldn't that stop task migration between those two groups, since the sched_domains changed?
What is more, I also did a similar test and found that when I set up two such cpuset groups, say cores 0-2 in cpuset1 and core 3 in cpuset2, and then hot-unplug core 3, the cpuset's cpus member becomes empty even after I hotplug core 3 back again. Is that a bug?
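Roughly, my test does the following; here is a userspace sketch of it. The paths are assumptions (a v1 cpuset hierarchy mounted at /sys/fs/cgroup/cpuset with the "cpuset." file prefix); adjust them for a legacy "mount -t cpuset" mount, where the prefix is absent.

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

#define CPUSET_ROOT "/sys/fs/cgroup/cpuset"

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	char buf[64] = "";
	FILE *f;

	/* Disable load balancing at the root so the two children form
	 * separate sched domains. */
	write_str(CPUSET_ROOT "/cpuset.sched_load_balance", "0");

	mkdir(CPUSET_ROOT "/cpuset1", 0755);
	write_str(CPUSET_ROOT "/cpuset1/cpuset.cpus", "0-2");
	write_str(CPUSET_ROOT "/cpuset1/cpuset.mems", "0");

	mkdir(CPUSET_ROOT "/cpuset2", 0755);
	write_str(CPUSET_ROOT "/cpuset2/cpuset.cpus", "3");
	write_str(CPUSET_ROOT "/cpuset2/cpuset.mems", "0");

	/* Hot-unplug CPU 3, then bring it back online. */
	write_str("/sys/devices/system/cpu/cpu3/online", "0");
	write_str("/sys/devices/system/cpu/cpu3/online", "1");

	/* In my test this reads back empty, which is the question above. */
	f = fopen(CPUSET_ROOT "/cpuset2/cpuset.cpus", "r");
	if (f && fgets(buf, sizeof(buf), f))
		printf("cpuset2 cpus after replug: '%s'\n", buf);
	if (f)
		fclose(f);
	return 0;
}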
Thanks, Lei
Also, one of the plans is to extend the sysfs interface of workqueues to override their affinity. If any of you guys wants to try something there, that would be welcome. We also want to work on timer affinity. Perhaps we don't need a user interface for that, or maybe something on top of full dynticks to express that we want the unbound timers to run on the housekeeping CPUs only.
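As a starting point, a hedged sketch of what already exists (the workqueue name "my_wq" is just a placeholder): an unbound workqueue created with WQ_SYSFS shows up under /sys/devices/virtual/workqueue/my_wq/ with "cpumask" and "nice" attributes, so userspace can already override its affinity, e.g. "echo 7 > /sys/devices/virtual/workqueue/my_wq/cpumask" to keep it on CPUs 0-2; the plan would be to build on that.

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_wq;

static int __init my_wq_init(void)
{
	/* WQ_SYSFS exposes this unbound workqueue's attributes in sysfs. */
	my_wq = alloc_workqueue("my_wq", WQ_UNBOUND | WQ_SYSFS, 0);
	return my_wq ? 0 : -ENOMEM;
}

static void __exit my_wq_exit(void)
{
	destroy_workqueue(my_wq);
}

module_init(my_wq_init);
module_exit(my_wq_exit);
MODULE_LICENSE("GPL");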