On 02/05/2015 07:11 AM, Rafael J. Wysocki wrote:
> On Wednesday, February 04, 2015 05:55:02 PM Saravana Kannan wrote:
>> On 02/04/2015 03:20 PM, Rafael J. Wysocki wrote:
>>> On Wednesday, February 04, 2015 02:28:55 PM Saravana Kannan wrote:
>>>> On 02/03/2015 10:20 PM, Viresh Kumar wrote:
>>>>> On 4 February 2015 at 03:58, Saravana Kannan <skannan@codeaurora.org> wrote:
>>>>>> Can you explain why we need a fallback list in the first place? Now that we are no longer destroying and recreating policy objects, I don't see any point in the fallback list.
>>>>> Because we wanted to mark the policy inactive. But since I have now introduced another field for that, it can probably be fixed. I will check again on what can be done.
>>>> Thanks, that's why I was asking, given that you have another flag now. Actually, you might not even need a flag: you can just check whether policy->cpus is empty (btw, I think we should let it go empty when the last CPU goes down).
>>> So the idea would be to avoid clearing cpufreq_cpu_data during offline tear-down (because we know that the CPU is offline anyway) and then start reusing the same policy pointer when the CPU goes back online?
>> Yeah. Today we still don't clear policy->cpus when the last CPU in a policy goes down, but that can be done easily by changing a few "if" conditions and rearranging the hotplug notifier code (I think this series mostly gets there already). Once policy->cpus is cleared when all of its CPUs are offline, we can use that alone to tell whether the policy is "active" or not.
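
To make that concrete, here is a minimal sketch of the logical-offline path under that scheme (untested; policy_is_active() is a hypothetical helper and cpufreq_offline_cpu() a simplified stand-in for the real tear-down code, though cpufreq_cpu_data is the per-CPU pointer that already exists in drivers/cpufreq/cpufreq.c):

#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>

/* Per-CPU policy pointer as it already exists in cpufreq.c. */
static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);

/* Hypothetical helper: a policy is "active" iff it still manages CPUs. */
static inline bool policy_is_active(struct cpufreq_policy *policy)
{
	return !cpumask_empty(policy->cpus);
}

/* Simplified stand-in for the logical-offline tear-down path. */
static void cpufreq_offline_cpu(unsigned int cpu)
{
	struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);

	if (!policy)
		return;

	/* Drop the CPU from the policy, but keep the policy object. */
	cpumask_clear_cpu(cpu, policy->cpus);

	/*
	 * per_cpu(cpufreq_cpu_data, cpu) is deliberately NOT cleared,
	 * so the next online of this CPU finds the same policy again
	 * and we never destroy/recreate policies across hotplug.
	 */

	if (!policy_is_active(policy)) {
		/*
		 * Last CPU of this policy went down: stop the governor
		 * here, but leave the policy and its sysfs directory
		 * in place.
		 */
	}
}
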
> OK
> I'm still concerned about the case where the last CPU of a policy is physically going away, in which case we do the offline as a preliminary step and then go for full CPU device unregistration.
I made sure that case worked in the patch series I sent out a while back. I now need to make sure that Viresh's series accounts for it correctly, so I'll review this series at least once a week.
The way to handle the case you mention is to treat the subsys_interface-based add/remove ops as physical add/remove, and the hotplug add/remove as logical add/remove.
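
In code terms, the split would look roughly like this (a sketch only: the subsys_interface hookup mirrors what cpufreq.c already has, while the callback bodies are reduced to comments describing the intended behavior):

#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/device.h>

/*
 * Physical add/remove: driven by CPU device (un)registration through
 * the subsys_interface. This is the only place where a policy is
 * created, and where it is finally freed once its last CPU has been
 * physically removed.
 */
static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
{
	/* Allocate/attach a policy for dev->id if it has none yet. */
	return 0;
}

static int cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
{
	/*
	 * Detach dev->id from its policy; free the policy only when
	 * no related CPU remains registered.
	 */
	return 0;
}

static struct subsys_interface cpufreq_interface = {
	.name		= "cpufreq",
	.subsys		= &cpu_subsys,
	.add_dev	= cpufreq_add_dev,
	.remove_dev	= cpufreq_remove_dev,
};

/*
 * Logical add/remove: driven by hotplug notifications. This only
 * moves the CPU in and out of policy->cpus and starts/stops the
 * governor; the policy object itself always stays around.
 */
static int cpufreq_cpu_callback(struct notifier_block *nfb,
				unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_ONLINE:
		/* cpumask_set_cpu(cpu, policy->cpus) + start governor. */
		break;
	case CPU_DOWN_PREPARE:
		/*
		 * cpumask_clear_cpu(cpu, policy->cpus) + stop governor
		 * if the mask is now empty (see the sketch above).
		 */
		break;
	}

	pr_debug("cpufreq: hotplug event %lu for CPU%u\n", action, cpu);
	return NOTIFY_OK;
}

With that split, a physical hot-remove naturally becomes a logical offline first (via the notifier) followed by the subsys_interface ->remove_dev callback, which is exactly the ordering your concern is about.
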
-Saravana