[Linaro-mm-sig] [RFCv1 0/6] PASR: Partial Array Self-Refresh Framework

Maxime Coquelin maxime.coquelin at stericsson.com
Tue Jan 31 14:48:28 UTC 2012


On 01/31/2012 01:39 PM, Ingo Molnar wrote:
> * Maxime Coquelin <maxime.coquelin at stericsson.com> wrote:
>
>> Dear Ingo,
>>
>> On 01/30/2012 02:53 PM, Ingo Molnar wrote:
>>> * Maxime Coquelin <maxime.coquelin at stericsson.com> wrote:
>>>
>>>> The role of this framework is to stop the refresh of unused
>>>> memory in order to reduce DDR power consumption.
>>> I'm wondering in what scenarios this is useful, and how
>>> consistently it is useful.
>>>
>>> The primary concern I can see is that on most Linux systems with
>>> an uptime more than a couple of minutes RAM gets used up by the
>>> Linux page-cache:
>>>
>>>   $ uptime
>>>    14:46:39 up 11 days,  2:04, 19 users,  load average: 0.11, 0.29, 0.80
>>>   $ free
>>>                total       used       free     shared    buffers     cached
>>>   Mem:      12255096   12030152     224944          0     651560    6000452
>>>   -/+ buffers/cache:    5378140    6876956
>>>
>>> Even mobile phones easily have days of uptime - quite often
>>> weeks of uptime. I'd expect the page-cache to fill up RAM on
>>> such systems.
>>>
>>> So how will this actually end up saving power consistently?
>>> Does it have to be combined with a VM policy that more
>>> aggressively flushes cached pages from the page-cache?
>> You're right Ingo, page-cache fills up the RAM. This framework
>> is to be used in combination with a page-cache flush governor.
>> In the case of a mobile phone, we can imagine dropping the
>> cache when system's screen is off for a while, in order to
>> preserve user's experience.
> Is this "page-cache flush governor" some existing code?
> How does it work and does it need upstream patches?
Such a governor has not been implemented yet.
For now, I test the framework through the existing procfs interface
(echo 3 > /proc/sys/vm/drop_caches).
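
Just to make the idea concrete, here is a minimal userspace sketch of what
the trigger of such a governor could look like, assuming it simply reuses
the drop_caches sysctl above (the screen-off policy that would call it is
hypothetical at this point):

/*
 * Minimal sketch of a "page-cache flush governor" trigger, reusing the
 * existing drop_caches sysctl.  Needs root to write the sysctl; the
 * screen-off hook that would invoke it is hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int drop_page_cache(void)
{
        int fd;

        sync();         /* write back dirty pages first */

        fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
        if (fd < 0) {
                perror("open /proc/sys/vm/drop_caches");
                return -1;
        }

        /* "3" drops the page cache plus dentries and inodes */
        if (write(fd, "3", 1) != 1) {
                perror("write drop_caches");
                close(fd);
                return -1;
        }

        close(fd);
        return 0;
}

int main(void)
{
        /* A real governor would call this once the screen has been off
         * for a while; here it is simply run once. */
        return drop_page_cache() ? 1 : 0;
}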


>>> A secondary concern is fragmentation: right now we fragment
>>> memory rather significantly.
>> Yes, I think fragmentation is the main challenge. This is the
>> same problem faced for Memory Hotplug feature. The solution I
>> see is to add a significant Movable zone in the system and use
>> the Compaction feature from Mel Gorman. The problem of course
>> remains for the Normal zone.
> Ok. I guess phones/appliances can generally live with a
> relatively large movable zone as they don't have serious
> memory pressure issues.
Actually, current high-end smartphones and tablets have 1GB of DDR.
Smartphones and tablets arriving later this year should have up to 2GB of DDR.
For example, my Android phone, after running for 2 days, uses only 230MB
when idle once the page cache is dropped.
So I think having a 1GB movable zone on a 2GB DDR phone is conceivable.
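
For illustration only (the 1GB/2GB split above is just an example), such a
layout could be requested with the existing boot parameter and compaction
support:

        movablecore=1G        on the kernel command line, to reserve ~1GB
                              of memory for ZONE_MOVABLE
        CONFIG_COMPACTION=y   in the kernel configuration

so that pages in the movable zone can be migrated out of a 64MB section
before its refresh is turned off.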

>>> For the Ux500 PASR driver you've implemented the section
>>> size is 64 MB. Do I interpret the code correctly in that a
>>> continuous, 64MB physical block of RAM has to be 100% free
>>> for us to be able to turn off refresh and power for this
>>> block of RAM?
>> Current DDR (2Gb/4Gb dies) used in mobile platform have 64MB
>> banks and segments. This is the lower granularity for Partial
>> Array Self-refresh.
> Ok, so do you see real, consistent power savings with a large
> movable zone, with page cache governor patches applied (assuming
> it's a kernel mechanism) and CONFIG_COMPACTION=y enabled, on an
> upstream kernel with all these patches applied?
I don't have consistent figures yet, as the framework is still being
prototyped.
From the DDR datasheet, I gathered that the DDR power saving is about 33%
when only half of the die is kept in self-refresh, compared to the full die
being in self-refresh.
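
To give a purely illustrative number: if full-array self-refresh drew, say,
500 µA, masking half of the array would bring that down to roughly
500 µA x (1 - 0.33) ≈ 335 µA, i.e. noticeably less than a 50% saving,
presumably because part of the self-refresh current does not scale with the
size of the refreshed area.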



Thanks for your comments,
Maxime

>
> Thanks,
>
> 	Ingo
>



