sched_mc test scenario
amit.kucheria at linaro.org
Fri Aug 5 13:27:32 UTC 2011
On Fri, Aug 5, 2011 at 1:10 PM, Daniel Lezcano
<daniel.lezcano at linaro.org> wrote:
> On 08/04/2011 12:26 PM, Vincent Guittot wrote:
>> On 4 August 2011 09:57, Daniel Lezcano <daniel.lezcano at linaro.org> wrote:
>> On 08/03/2011 06:25 PM, Vincent Guittot wrote:
>>>>> Hi Daniel,
>>>>> On 3 August 2011 15:58, Daniel Lezcano <daniel.lezcano at linaro.org> wrote:
>> [ ... ]
>>>>>> it sounds good to me.
>> Ok, cool. Thanks.
>>>>> Concerning the functional tests, I need some hints :)
>>>>> On the architectures we have, it will be difficult to verify that
>>>>> sched_mc works as expected.
>>>>> If I understood correctly, in order to test that, we would need a
>>>>> dual-socket Cortex-A9 system, so we could check that a program with
>>>>> two processes eating a lot of cpu cycles is bound to the same
>>>>> socket_id while the other socket stays idle and runs none of these
>>>>> processes, right? AFAIK, there is no such hardware, no?
>>>>>> you could integrate a non-regression test which checks the performance
>>>>>> results in both sched_mc_power_savings=0 and sched_mc_power_savings=2.
>>>>>> I have one which uses cyclictest and sysbench.
>> Can you elaborate a bit? Do you mean we should run the test and
>> compare the results to some hardcoded values (taking the hysteresis
>> into account, of course)?
>>> yes, we should compare the results with some hardcoded values or with
>>> reference test results. I don't know if it's possible to use the
>>> results of a previous test sequence in order to make some comparisons
>>> and mark a test as passed or failed?
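A sketch of how such a comparison could be expressed with a tolerance margin to absorb run-to-run hysteresis (the function name and the 5% default are placeholders for illustration, not taken from an actual pm-qa script):

```python
def within_tolerance(measured, reference, tolerance_pct=5.0):
    """Return True if `measured` is within `tolerance_pct` percent of
    `reference`, absorbing run-to-run noise in the benchmark result."""
    margin = abs(reference) * tolerance_pct / 100.0
    return abs(measured - reference) <= margin

# Example: compare a sysbench total time against a stored reference run.
# within_tolerance(10.3, 10.0)  -> True  (3% away, within the 5% margin)
# within_tolerance(12.0, 10.0)  -> False (20% away)
```

The reference value could come from a previous test sequence stored on disk, which is what the comparison-against-earlier-results idea above would need.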
> IMO, this is out of the scope of the pm-qa test suite. The test suite
> should check that the different subsystems behave correctly. In our case,
> we should ensure that the two processes ran only on a single socket and
> were not spread across two different sockets. I don't know how to do that
> right now, but I think this is what we should validate.
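One possible way to script that check (a rough sketch assuming a Linux sysfs topology layout; the helper names are made up for illustration): sample which CPU each busy pid last ran on from /proc/<pid>/stat, and map that CPU to its socket via physical_package_id.

```python
import os

def last_cpu(pid):
    """CPU the task last ran on: field 39 ('processor') of /proc/<pid>/stat."""
    with open("/proc/%d/stat" % pid) as f:
        content = f.read()
    # comm (field 2) may contain spaces, so split after the closing ')'
    fields = content.rsplit(")", 1)[1].split()
    return int(fields[36])  # field 39, minus the 3 fields before the ')'

def package_of(cpu):
    """Socket id of a CPU, read from the sysfs topology (if exposed)."""
    path = "/sys/devices/system/cpu/cpu%d/topology/physical_package_id" % cpu
    with open(path) as f:
        return int(f.read())
```

Sampling last_cpu() for both CPU-hungry pids in a loop and checking that package_of() always returns the same socket id for both would flag a spread across sockets; sampling misses migrations between samples, so it is only a heuristic.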
> What you are proposing is some kind of power management "benchmark".
Right. Let's keep the 'functional' tests separate from the 'measurement' tests.
Functional tests will catch Kconfig errors, and ensure that the
feature works according to the interface we care about.
PM-QA will (in the future) contain benchmark tests too that we'll use
to measure how we're doing wrt power.
> That makes sense and would be very useful to check where the power
> consumption stands. But a set of prerequisites will be needed for that:
> (1) the pm blocks should be validated by pm-qa
> (2) we have to define a userspace scenario where we set the system
> up with a maximum power saving policy
> (3) run different applications and collect the consumption
> We can imagine having it automated, run whenever there is a kernel
> update, and plotting the results like: