I have an appointment at this time. Is it possible to delay it by *2* hours?
-- Daniel
On 03/18/2015 06:29 AM, Mike Turquette wrote:
sched-dvfs design discussion
Link to Google HO is in the meeting details. If that doesn't work for everyone then we can use some teleconf method.
The main purpose of the call is to discuss:
- locking issues around get_cpu_usage
- the polling nature of the current governor code and its implications
- designing for the non-blocking DVFS case and for unification of task
placement and frequency selection decisions
/Date/ Wed. 18 March 2015, 15:30 – 16:30 (Paris) /Video call/ https://plus.google.com/hangouts/_/linaro.org/sched-dvfs /Agenda/ Daniel Lezcano /Participants/
• Mike Turquette (organizer) • Morten Rasmussen • Amit Kucheria • Daniel Lezcano • eas-dev@lists.linaro.org • Vincent Guittot • dietmar.eggemann@arm.com • Juri Lelli
Invitation from Google Calendar https://www.google.com/calendar/
-- http://www.linaro.org/ Linaro.org │ Open source software for ARM SoCs
Hi all,
On 18/03/15 08:36, Daniel Lezcano wrote:
I have an appointment at this time. Is it possible to delay it by *2* hours?
4:30PM GMT would work fine for us.
Thanks a lot,
- Juri
On Wed, Mar 18, 2015 at 4:55 AM, Juri Lelli juri.lelli@arm.com wrote:
Hi all,
On 18/03/15 08:36, Daniel Lezcano wrote:
I have an appointment at this time. Is it possible to delay it by *2* hours?
4:30PM GMT would work fine for us.
No problem. I have updated the meeting.
Regards, Mike
Hi Mike,
On 18/03/15 14:00, Mike Turquette wrote:
No problem. I have updated the meeting.
Thanks a lot again for organizing this meeting. Personally, it was really useful :).
So, we briefly mentioned that you were going to post a new version of sched-dvfs on eas-dev for further discussion. Do you have a rough idea of when you'll be able to do it?
It would be nice to have this new version ready by the time EASv4 goes out on LKML. Since that is scheduled to happen by the end of March, it would be really good to be able to spend some time next week, I guess, reviewing your new version and possibly adapting my deltas to it :).
Thanks!
Best,
- Juri
Quoting Juri Lelli (2015-03-20 07:54:10)
So, we briefly mentioned that you were going to post a new version of sched-dvfs on eas-dev for further discussion. Do you have a rough idea of when you'll be able to do it?
The fact is that it would be nice to have this new version ready at the time EASv4 is going out on LKML. Since this is scheduled to happen by end of March, it would be really good to have the possibility to spend some time reviewing your new version and possibly adapt my deltas to it, I guess, next week :).
Hi Juri,
The meeting was very useful for me as well. I've been working to integrate all of the items we discussed into the next RFC. My hope is to publish to eas-dev for a first round of review and testing before I leave for ELC this weekend. The patches may be in a rough shape but I'll do my best.
Regards, Mike
On Fri, Mar 20, 2015 at 12:33 PM, Michael Turquette mturquette@linaro.org wrote:
Juri et al,
I've pushed code to my github tree under the branch "follow-the-capacity". This code badly needs to be rebased/squashed and may contain foul language and/or stream-of-consciousness block comments.
This version has some notable changes:
1) no thresholds. After finding the most utilized cpu in the affected frequency domain we try to find the smallest capacity that is greater than that max_usage. In my smoke tests I found that PELT had no trouble hitting the highest OPP. I've left the up_thr code in there so you should still be able to play with it if you like.
2) no cpumasks exposed to scheduler. We just pass in a cpu (e.g. dst cpu via enqueue_task_fair after a load_balance). This code is significantly more elegant now (IMHO). If we need to evaluate frequency then we set the per-cpu pointer to the "need_task_wake" atomic_t and quickly return to the scheduler. All frequency evaluation takes place in the kthread. I have not yet implemented the non-blocking/async-dvfs code, but you can see comments for it; search for the conditional controller named "driver_might_sleep".
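For illustration only, here is a small user-space sketch of the two mechanisms described above. The capacity table, function names and flag are hypothetical stand-ins, not the actual governor code:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical per-OPP capacities for one frequency domain, ascending.
 * The numbers are made up for illustration. */
static const unsigned long capacities[] = { 256, 512, 768, 1024 };
#define NR_CAPS (sizeof(capacities) / sizeof(capacities[0]))

/* Point (1): after finding max_usage across the frequency domain, pick
 * the smallest capacity greater than it; saturate at the highest OPP. */
unsigned long pick_capacity(unsigned long max_usage)
{
    size_t i;

    for (i = 0; i < NR_CAPS; i++)
        if (capacities[i] > max_usage)
            return capacities[i];
    return capacities[NR_CAPS - 1];
}

/* Point (2): the scheduler hot path only flags that frequency needs
 * re-evaluation and returns quickly; a dedicated kthread later consumes
 * the flag and does the (possibly sleeping) DVFS work. */
atomic_int need_freq_eval;

void request_freq_eval(void)        /* fast path: must not block */
{
    atomic_store(&need_freq_eval, 1);
    /* the real code would also wake the governor kthread here */
}

int consume_freq_eval(void)         /* kthread side: 1 if there was work */
{
    return atomic_exchange(&need_freq_eval, 0);
}
```

Note the atomic_exchange on the kthread side: reading and clearing the flag in one step avoids losing a request raised between a separate read and clear.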
I've put more block comments/kerneldoc in there. Hopefully it helps you to understand the changes. I've only done a smoke test on Pandaboard ES:
echo cap_gov > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo SCHED_ENERGY_FREQ > /sys/kernel/debug/sched_features
while [ 1 ]; do cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq; done
I'll rebase this and test on more hardware when I can. Now I must drive up to SF for ELC.
Regards, Mike
Hi Mike,
On 21/03/15 22:36, Mike Turquette wrote:
Juri et al,
I've pushed code to my github tree under the branch "follow-the-capacity". This code badly needs to be rebased/squashed and may contain foul language and/or stream-of-consciousness block comments.
Thanks for this!
I rebased your code on top of 4.0-rcX plus EAS. I also ported my deltas on this new branch, plus additional changes/fixes.
You should be able to pull it from:
git://linux-arm.org/linux-power.git [wip/easv4_dvfs]
This version has some notable changes:
- no thresholds. After finding the most utilized cpu in the affected
frequency domain we try to find the smallest capacity that is greater than that max_usage. In my smoke tests I found that PELT had no trouble hitting the highest OPP. I've left the up_thr code in there so you should still be able to play with it if you like.
So, this is because I'm based on top of the full EAS patchstack I guess. Your get_cpu_usage is only capped by capacity_orig_of(), while Morten's "sched: Use capacity_curr to cap utilization in get_cpu_usage()" caps it with capacity_curr(). This means that if you start, let's say, at the lowest OPP you won't get any usage above the capacity at that OPP. That's why we could use this up_th in this case.
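A toy illustration of this point (numbers and names are made up, not the EAS code): with usage capped at capacity_curr, a saturated CPU at a low OPP reports usage exactly equal to the current capacity, so a plain "usage exceeds current capacity" test can never fire; an up_threshold below 100% is what lets the governor notice saturation and ramp up:

```c
/* Toy model: usage as reported when capped at the current capacity. */
unsigned long capped_usage(unsigned long raw_usage, unsigned long capacity_curr)
{
    return raw_usage < capacity_curr ? raw_usage : capacity_curr;
}

/* Ramp up when usage crosses up_thr_pct percent of the current capacity.
 * Because of the cap above, a strict "usage > capacity_curr" comparison
 * (i.e. up_thr_pct == 100) can never be true. */
int should_ramp_up(unsigned long usage, unsigned long capacity_curr,
                   unsigned long up_thr_pct)
{
    return usage * 100 > capacity_curr * up_thr_pct;
}
```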
- no cpumasks exposed to scheduler. We just pass in a cpu (e.g. dst
cpu via enqueue_task_fair after a load_balance). This code is significantly more elegant now (IMHO). If we need to evaluate frequency then we set the per-cpu pointer to the "need_task_wake" atomic_t and quickly return to the scheduler. All frequency evaluation takes place in the kthread. I have not yet implemented the non-blocking/async-dvfs code, but you can see comments for it; search for the conditional controller named "driver_might_sleep".
Yeah, right. So, we could still try to experiment a bit, seeing if and how we can pass some sort of hints to this kthread, so as to reduce its "compute new cap" burden. Another point is that the current triggering points look good to me with the current synchronous approach, but we might want to move them around once we have the asynchronous one (as Peter was suggesting).
I'll rebase this and test on more hardware when I can. Now I must drive up to SF for ELC.
As said, I rebased this on 4.0-rc4 and tested a bit on TC2. Apart from small fixes here and there, I put EAS bindings on top and you can also find an idea of how we could use the up_threshold.
I guess it would be nice if we could start converging towards a cleaner patchset for posting on LKML. What are your thoughts about this?
Best,
- Juri
On 26/03/15 12:23, Juri Lelli wrote:
Yeah, right. So, we could still try to experiment a bit seeing if and how we can pass some sort of hints to this kthread, as to reduce his "compute new cap" burden. Another point is that the current triggering points looks good to me with the current synch approach, but we might want to move them around when we'll have the asynch one (as Peter was suggesting).
Actually, regarding this: do you already have something in mind that you'd like to try, to move towards a more event-driven approach? I mean, something that can also work with sleeping drivers.
Regarding the "driver_might_sleep" flag in the code instead: can we come up with a list of things that need to change before we can remove that flag? I know this has already been discussed in the past, but IMHO it would still be valuable to write this list down somewhere. I guess we'll receive this question as soon as we post something on LKML; better be prepared, right? :)
Thanks again,
- Juri
On Thu, Mar 26, 2015 at 10:49 AM, Juri Lelli juri.lelli@arm.com wrote:
Actually, regarding this: do you already have something in mind that you'd like to try, to move towards a more event-driven approach? I mean, something that can also work with sleeping drivers.
Hi Juri,
Not sure I follow. This is event driven in the sense that we re-evaluate the load statistics when they are updated by {en,de}queue_task_fair and task_tick_fair.
Regarding the "driver_might_sleep" flag in the code instead: can we come up with a list of things that need to change before we can remove that flag? I know this has already been discussed in the past, but IMHO it would still be valuable to write this list down somewhere. I guess we'll receive this question as soon as we post something on LKML; better be prepared, right? :)
I had not planned to remove driver_might_sleep. In my 2014 post I had something similar which exposed driver flags via a new function. I think that something like this is necessary: the cpufreq governor needs to read some flag/capability from the driver and choose the more optimal code path. So keeping the async and sync code paths in the governor is something that I plan to do. The missing part is the async path which I cannot test easily (maybe using Intel hardware + P-state driver?) but I simply ran out of time to put that together before ELC. That will be my focus on Monday.
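For what it's worth, a hypothetical sketch of the "governor reads a driver capability and chooses the code path" idea described above; the struct, flag and function names are invented for illustration and are not the actual cpufreq interfaces:

```c
/* Stub non-sleeping driver op: records the frequency it was asked for. */
unsigned long applied_freq;

void fast_set_rate(unsigned long freq)
{
    applied_freq = freq;
}

struct dvfs_driver {
    int might_sleep;                        /* capability the driver reports */
    void (*set_rate)(unsigned long freq);   /* frequency transition hook */
};

/* Governor dispatch: returns 1 if the request must be deferred to the
 * governor kthread (sleeping driver), 0 if it was applied inline. */
int governor_set_rate(struct dvfs_driver *drv, unsigned long freq)
{
    if (drv->might_sleep)
        return 1;   /* real code: queue freq and wake the kthread */
    drv->set_rate(freq);
    return 0;
}
```

On hardware with fast register-based transitions the inline path could run close to scheduler context, while e.g. i2c-regulated platforms would always take the deferred path.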
Regards, Mike
Hi Mike,
On 30/03/15 08:31, Mike Turquette wrote:
On Thu, Mar 26, 2015 at 10:49 AM, Juri Lelli juri.lelli@arm.com wrote:
[snip]
Actually, regarding this: do you already have something in mind that you'd like to try, to move towards a more event-driven approach? I mean, something that can also work with sleeping drivers.
Hi Juri,
Not sure I follow. This is event driven in the sense that we re-evaluate the load statistics when they are updated by {en,de}queue_task_fair and task_tick_fair.
I've pushed my experiment with irq_work on:
git://linux-arm.org/linux-power.git [wip/easv4_dvfs_ed_irq_work]
Extremely WIP, just a sketch of the idea ;). Please let me know if you have any issues fetching this.
Looking forward to hearing your thoughts this coming Thursday.
Best,
- Juri