Hello!
Please see below for my rough notes from the kernel consolidation, and please reply to the group with any errors or omissions.
Thanx, Paul
------------------------------------------------------------------------
Attendees: Arnd Bergmann, Dave Martin, Jeremy Kerr, John Stultz, Mounir Bsaibes, Nat Waddell, Nicolas Pitre, Steve Sakoman, Paul E. McKenney.
o Kernel Consolidation -> Kernel. Job 1 will of course continue to be kernel consolidation, but the group will also take on other kernel work.
o Kernel requirements (https://wiki.linaro.org/Internal/TSC/MinutesToBeAgreed)
o Virtualization requirements (https://wiki.linaro.org/Internal/TSC/MinutesToBeAgreed) Arnd Bergmann, Dave Martin. Discussion of implementation strategy -- full virtualization vs. paravirtualization. TrustZone.
o Requirements schedule
o Requirements discussion, mostly in TSC.
o Discussion and refinement at Orlando.
o Blueprint generation in the 2-3 weeks following Orlando.
o Round table:
o Arnd Bergmann: BKL work mostly submitted. Looking into I/O spaces and SoCs -- address-space annotation.
o Dave Martin: One perftools patch to be pushed to mainline. John Rigby working on build of perftools.
o Jason Hui: iMX51 work on device trees. Need assignment. Reviewed BSP review, need to send results to Loïc et al.
o Jeremy Kerr: Device tree response to review comments, API. Adding more platforms into device tree -- use device-tree properties.
o John Stultz: OMAP emulation environment work -- preparation for clock consolidation. Networking is not working. First revision of POSIX clock RTC patch, working on second revision.
o Mounir Bsaibes: New presentation for blueprints, almost done. Working on howto for writing specifications. Tracking mechanism.
o Nat Waddell: Several Versatile Express bugs fixed last week, especially frame-buffer tearing problem. Lorenzo's kernel builds and boots, but cannot get frame-buffer display working. Device-tree problem? [Follow up with Lorenzo.]
o Nicolas Pitre: Merged bug fixes for ARM, including some GDB vector-page-backtrace problems (related to atomic operations). Pulled upstream patches from mainline. SECCOMP support -- syscall restrictions for untrusted code. Q: /dev/mem protection from Kees Cook? A: Might.
o Paul McKenney: TSC meeting. RCU bugs, priority boosting.
o Steve Sakoman: Expansion-board detection in U-Boot for OMAP3, save environment to eMMC for boards lacking flash memory, bring Panda support up to the latest 8-layer boards just appearing (about 12 patches required). Expect to submit upstream later this week.
o Dave Martin on holiday Thu through next week.
On Monday 20 September 2010, Paul E. McKenney wrote:
o Kernel requirements (https://wiki.linaro.org/Internal/TSC/MinutesToBeAgreed)
I still have two questions about stuff that came up here:
* SD/MMC performance: Is this about device access, file systems or both? I think the file system level stuff is actually the more important part but I fear that what was discussed is the other one.
* highmem: Not sure what this was about. I guess we don't really want to enable this by default, but some people will want it anyway. Is this about run-time patching the code out of the kernel?
Arnd
On Mon, Sep 20, 2010 at 06:37:05PM +0200, Arnd Bergmann wrote:
On Monday 20 September 2010, Paul E. McKenney wrote:
o Kernel requirements (https://wiki.linaro.org/Internal/TSC/MinutesToBeAgreed)
I still have two questions about stuff that came up here:
- SD/MMC performance: Is this about device access, file systems or both? I think the file system level stuff is actually the more important part but I fear that what was discussed is the other one.
The discussion didn't get into that level of detail. :-(
- highmem: Not sure what this was about. I guess we don't really want to enable this by default, but some people will want it anyway. Is this about run-time patching the code out of the kernel?
I believe that this is related to LPAE -- the usual 32-bit-only DMA devices in a >32-bit physical address space. But there was also discussion about run-time patching for SMP alternatives, though I am missing how this relates to highmem. Enlightenment?
Thanx, Paul
On Mon, 20 Sep 2010, Paul E. McKenney wrote:
On Mon, Sep 20, 2010 at 06:37:05PM +0200, Arnd Bergmann wrote:
On Monday 20 September 2010, Paul E. McKenney wrote:
o Kernel requirements (https://wiki.linaro.org/Internal/TSC/MinutesToBeAgreed)
I still have two questions about stuff that came up here:
- SD/MMC performance: Is this about device access, file systems or both? I think the file system level stuff is actually the more important part but I fear that what was discussed is the other one.
The discussion didn't get into that level of detail. :-(
- highmem: Not sure what this was about. I guess we don't really want to enable this by default, but some people will want it anyway. Is this about run-time patching the code out of the kernel?
I believe that this is related to LPAE -- the usual 32-bit-only DMA devices in a >32-bit physical address space. But there was also discussion about run-time patching for SMP alternatives, though I am missing how this relates to highmem. Enlightenment?
I'm also interested in both of those topics as 1) I participated in the design of the SDIO stack (closely related to SD), and 2) I did the highmem implementation for ARM.
Wrt SD/MMC performance, if we're talking about device access, then all the DMA-using hardware configs I've seen are only limited by the media speed, which is typically much slower than a hard disk, or even an SSD for that matter (and so is the price).
Wrt highmem: I can't see the link between highmem and SMP. As far as I know, highmem on ARM should be SMP safe already (the only SMP related issue I've seen has been fixed in commit 831e8047eb).
Nicolas
On Monday 20 September 2010 19:50:59 Nicolas Pitre wrote:
I believe that this is related to LPAE -- the usual 32-bit-only DMA devices in a >32-bit physical address space. But there was also discussion about run-time patching for SMP alternatives, though I am missing how this relates to highmem. Enlightenment?
Wrt SD/MMC performance, if we're talking about device access, then all the DMA-using hardware configs I've seen are only limited by the media speed, which is typically much slower than a hard disk, or even an SSD for that matter (and so is the price).
Right. Having an intelligent file system is the only way I can see for getting good speedups, by avoiding erase-cycles inside the SD card, which commonly happen when you write to sectors at random addresses.
There has been a lot of research in optimizing for regular NAND flash, at least some of which should apply to SD cards as well, although their naive wear-levelling algorithms might easily get in the way.
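To put rough numbers on this, here is a hedged userspace sketch (the device path, block size, and write counts are all illustrative, and it overwrites whatever device you point it at) that times sequential versus random small writes:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLKSZ   4096
#define NWRITES 256
#define SPAN    (256LL * 1024 * 1024)   /* spread random writes over 256 MiB */

/* Time NWRITES direct writes; error handling trimmed for brevity. */
static double timed_writes(int fd, int random_offsets)
{
	static char buf[BLKSZ] __attribute__((aligned(4096)));
	struct timespec t0, t1;
	int i;

	memset(buf, 0xa5, sizeof(buf));
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NWRITES; i++) {
		off_t off = random_offsets
			? (off_t)(rand() % (SPAN / BLKSZ)) * BLKSZ
			: (off_t)i * BLKSZ;
		if (pwrite(fd, buf, BLKSZ, off) != BLKSZ)
			perror("pwrite");
	}
	fsync(fd);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
	/* WARNING: overwrites the device. /dev/mmcblk0 is only an example. */
	int fd = open(argc > 1 ? argv[1] : "/dev/mmcblk0", O_WRONLY | O_DIRECT);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	printf("sequential: %.2fs\n", timed_writes(fd, 0));
	printf("random:     %.2fs\n", timed_writes(fd, 1));
	close(fd);
	return 0;
}

On cards with naive wear-levelling I'd expect the random pass to come out dramatically slower, since each scattered write can trigger a full erase-block rewrite.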
Wrt highmem: I can't see the link between highmem and SMP. As far as I know, highmem on ARM should be SMP safe already (the only SMP related issue I've seen has been fixed in commit 831e8047eb).
Right, it's not related to SMP; I was thinking of using run-time patching for both highmem and SMP though. My idea was to make the decision between simply doing page_address() and the full kmap()/kmap_atomic() statically at boot time, depending on the amount of memory.
I looked at the functions again, and I'm now guessing that the difference would be minimal because the first thing we check is (PageHighMem(page)) on a presumably cache-hot struct page. It may be worthwhile comparing the performance of a highmem-enabled kernel with a regular kernel on a system without highmem, but it may very well be identical.
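For concreteness, a minimal sketch of both paths (invented helper names, generic kernel API, today's single-argument kmap_atomic() form for brevity; a plain function pointer stands in for real run-time instruction patching):

/*
 * Sketch only: the check kmap_atomic() effectively performs for lowmem
 * pages, plus a boot-time selection between the two helpers.
 */
#include <linux/highmem.h>
#include <linux/init.h>
#include <linux/mm.h>

static void *(*map_page)(struct page *page);

static void *map_page_lowmem_only(struct page *page)
{
	/* All RAM is permanently mapped: just return the linear address. */
	return page_address(page);
}

static void *map_page_generic(struct page *page)
{
	/* The test mentioned above: cheap when struct page is cache-hot. */
	if (!PageHighMem(page))
		return page_address(page);
	return kmap_atomic(page);	/* temporary per-CPU mapping */
}

static int __init pick_map_helper(void)
{
	/* totalhigh_pages is zero when nothing landed in ZONE_HIGHMEM. */
	map_page = totalhigh_pages ? map_page_generic : map_page_lowmem_only;
	return 0;
}
early_initcall(pick_map_helper);

The indirect call through map_page illustrates why the win is doubtful: the PageHighMem() test it would save is already about as cheap as the call itself.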
Arnd
On Mon, Sep 20, 2010, Arnd Bergmann wrote:
Right. Having an intelligent file system is the only way I can see for getting good speedups, by avoiding erase-cycles inside the SD card, which commonly happen when you write to sectors at random addresses.
There has been a lot of research in optimizing for regular NAND flash, at least some of which should apply to SD cards as well, although their naive wear-levelling algorithms might easily get in the way.
This sounds like a relatively large task; do you think that's something we could build on existing infrastructure like some ubi bits or some btrfs bits?
I'm worried that we don't really know what the SD wear-levelling is doing, and it might change over time; I'm not sure whether we can introspect the way the controllers do it, or whether we'd have some fragile heuristics to decide that this or that SD card manufacturer uses this kind of algorithm :-/
Also, do we know enough about the underlying hardware to basically override what the manufacturer is trying to do?
On Thursday 23 September 2010, Loïc Minier wrote:
On Mon, Sep 20, 2010, Arnd Bergmann wrote:
Right. Having an intelligent file system is the only way I can see for getting good speedups, by avoiding erase-cycles inside the SD card, which commonly happen when you write to sectors at random addresses.
There has been a lot of research in optimizing for regular NAND flash, at least some of which should apply to SD cards as well, although their naive wear-levelling algorithms might easily get in the way.
This sounds like a relatively large task; do you think that's something we could build on existing infrastructure like some ubi bits or some btrfs bits?
Definitely. I wasn't suggesting we reinvent the wheel, but there may be a lot of value in comparing what's there today (logfs, ubifs, btrfs, nilfs2) to see if any of them does the job, and possibly adding a few extensions.
The current state is mostly that people put unaligned partitions on their SD card, stick an ext3 fs on a partition and watch performance suck while destroying their cards.
I'm worried that we don't really know what the SD wear-levelling is doing, and it might change over time; I'm not sure whether we can introspect the way the controllers do it, or whether we'd have some fragile heuristics to decide that this or that SD card manufacturer uses this kind of algorithm :-/
Also, do we know enough about the underlying hardware to basically override what the manufacturer is trying to do?
There has been a study by Thomas Gleixner on what CF cards do, which basically showed that they all use the same broken algorithm. It may be interesting to do the same for SD cards. We could also ask the Samsung people in Linaro to find out more technical details than are currently known publicly about Samsung SD cards.
Arnd
On Thu, Sep 23, 2010, Arnd Bergmann wrote:
The current state is mostly that people put unaligned partitions on their SD card, stick an ext3 fs on a partition and watch performance suck while destroying their cards.
Agreed :-)
There has been a study by Thomas Gleixner on what CF cards do, which basically showed that they all use the same broken algorithm. It may be interesting to do the same for SD cards. We could also ask the Samsung people in Linaro to find out more technical details than are currently known publicly about Samsung SD cards.
Ok; that sounds good. I'll check with the TSC what exact SD work they're expecting, and then we can see whether there's an FS part to it; it sounds like useful research in any case.
On Mon, Sep 20, 2010, Arnd Bergmann wrote:
Wrt highmem: I can't see the link between highmem and SMP. As far as I know, highmem on ARM should be SMP safe already (the only SMP related issue I've seen has been fixed in commit 831e8047eb).
Right, it's not related to SMP; I was thinking of using run-time patching for both highmem and SMP though. My idea was to make the decision between simply doing page_address() and the full kmap()/kmap_atomic() statically at boot time, depending on the amount of memory.
I looked at the functions again, and I'm now guessing that the difference would be minimal because the first thing we check is (PageHighMem(page)) on a presumably cache-hot struct page. It may be worthwhile comparing the performance of a highmem-enabled kernel with a regular kernel on a system without highmem, but it may very well be identical.
This highmem topic comes from the fact that highmem will be needed in the period of time between now and LPAE where we have boards with lots of memory but we can't address it all without highmem (unless we want to revisit the 3g/1g split, but I personally think not).
I proposed making highmem the default across all Linaro kernels as a way to simplify things, perhaps removing the need to bother about this config option altogether. This proposal does need some investigation on runtime performance; if highmem is basically free, then we're good and we can just enable it by default. If it's not, I proposed we do runtime patching just like SMP (exactly what Arnd proposed).
Arnd, Nicolas, would either of you take an action to benchmark the cost of CONFIG_HIGHMEM? That would help us understand what kind of work we're looking at.
Thanks!!
On Thu, 23 Sep 2010, Loïc Minier wrote:
On Mon, Sep 20, 2010, Arnd Bergmann wrote:
Wrt highmem: I can't see the link between highmem and SMP. As far as I know, highmem on ARM should be SMP safe already (the only SMP related issue I've seen has been fixed in commit 831e8047eb).
Right, it's not related to SMP; I was thinking of using run-time patching for both highmem and SMP though. My idea was to make the decision between simply doing page_address() and the full kmap()/kmap_atomic() statically at boot time, depending on the amount of memory.
I looked at the functions again, and I'm now guessing that the difference would be minimal because the first thing we check is (PageHighMem(page)) on a presumably cache-hot struct page. It may be worthwhile comparing the performance of a highmem-enabled kernel with a regular kernel on a system without highmem, but it may very well be identical.
This highmem topic comes from the fact that highmem will be needed in the period of time between now and LPAE where we have boards with lots of memory but we can't address it all without highmem (unless we want to revisit the 3g/1g split, but I personally think not).
Note that LPAE does require highmem to be useful. The only way highmem could be avoided is to move to a 64-bit architecture.
I proposed making highmem the default across all Linaro kernels as a way to simplify things, perhaps removing the need to bother about this config option altogether. This proposal does need some investigation on runtime performance; if highmem is basically free, then we're good and we can just enable it by default. If it's not, I proposed we do runtime patching just like SMP (exactly what Arnd proposed).
Arnd, Nicolas, would either of you take an action to benchmark the cost of CONFIG_HIGHMEM? That would help us understand what kind of work we're looking at.
Sure. I don't think the highmem overhead is that significant, especially when it doesn't kick in, i.e. when total RAM is below 800MB or so. But I'm skeptical about the gain that runtime patching for this particular case could bring.
The runtime patching of the kernel is useful for simple and straightforward cases such as SMP ops, which are performed in assembly. But in this case I'm afraid this could add even more overhead in the end, especially when highmem is active. But if the overhead of simply enabling highmem is not significant enough to be measurable in the first place then this is moot.
Nicolas
On Thu, Sep 23, 2010, Nicolas Pitre wrote:
The runtime patching of the kernel is useful for simple and straightforward cases such as SMP ops, which are performed in assembly. But in this case I'm afraid this could add even more overhead in the end, especially when highmem is active. But if the overhead of simply enabling highmem is not significant enough to be measurable in the first place then this is moot.
Agreed; let's benchmark and decide whether it's worth thinking about :)
On Thursday 23 September 2010, Nicolas Pitre wrote:
This highmem topic comes from the fact that highmem will be needed in the period of time between now and LPAE where we have boards with lots of memory but we can't address it all without highmem (unless we want to revisit the 3g/1g split, but I personally think not).
Note that LPAE does require highmem to be useful. The only way highmem could be avoided is to move to a 64-bit architecture.
Right, I'd even say LPAE can only make things worse because people will stick even more memory into their systems, most of which then becomes highmem.
We might be able to use MMU features to implement a 4G/4G split, which lets us use 3GB physical RAM or more (depending on vmalloc and I/O sizes) without highmem, but can have an even higher cost by significantly slowing down uaccess.
Arnd, Nicolas, would either of you take an action to benchmark the cost of CONFIG_HIGHMEM? That would help us understand what kind of work we're looking at.
Sure. I don't think the highmem overhead is that significant, especially when it doesn't kick in, i.e. when total RAM is below 800MB or so. But I'm skeptical about the gain that runtime patching for this particular case could bring.
Yes, that's what I thought after looking into the thing in more detail. It looked promising at first, but I now doubt it would be measurable.
Arnd
On Thu, 2010-09-23 at 16:15 +0100, Arnd Bergmann wrote:
On Thursday 23 September 2010, Nicolas Pitre wrote:
This highmem topic comes from the fact that highmem will be needed in the period of time between now and LPAE where we have boards with lots of memory but we can't address it all without highmem (unless we want to revisit the 3g/1g split, but I personally think not).
Note that LPAE does require highmem to be useful. The only way highmem could be avoided is to move to a 64-bit architecture.
Right, I'd even say LPAE can only make things worse because people will stick even more memory into their systems, most of which then becomes highmem.
If you really need so much memory, it's more efficient to have LPAE+highmem than a swap device. The problem is if the OS doesn't need so much memory but it is available, Linux tries to allocate from highmem first. What could help is a different zone fallback mechanism trying to allocate from lowmem up to a certain threshold.
Another option would be to use the highmem for hosting a swap via some form of ramdisk or slram/phram.
Yet another option is some dynamic memory hotplug based on the amount of spare memory you've got.
We might be able to use MMU features to implement a 4G/4G split, which lets us use 3GB physical RAM or more (depending on vmalloc and I/O sizes) without highmem, but can have an even higher cost by significantly slowing down uaccess.
It would be tricky to create temporary mappings for uaccess (and may involve get_user_pages or some form of pinning the pages in memory).
On Thursday 23 September 2010 19:03:42 Catalin Marinas wrote:
On Thu, 2010-09-23 at 16:15 +0100, Arnd Bergmann wrote:
On Thursday 23 September 2010, Nicolas Pitre wrote:
This highmem topic comes from the fact that highmem will be needed in the period of time between now and LPAE where we have boards with lots of memory but we can't address it all without highmem (unless we want to revisit the 3g/1g split, but I personally think not).
Note that LPAE does require highmem to be useful. The only way highmem could be avoided is to move to a 64-bit architecture.
Right, I'd even say LPAE can only make things worse because people will stick even more memory into their systems, most of which then becomes highmem.
If you really need so much memory, it's more efficient to have LPAE+highmem than a swap device. The problem is if the OS doesn't need so much memory but it is available, Linux tries to allocate from highmem first. What could help is a different zone fallback mechanism trying to allocate from lowmem up to a certain threshold.
Another option would be to use the highmem for hosting a swap via some form of ramdisk or slram/phram.
Yet another option is some dynamic memory hotplug based on the amount of spare memory you've got.
Right. Unfortunately all of these ideas depend a lot on the workload you actually want to run. For the general case, highmem is probably the best we can do.
If you know you have at most 2GB of memory, the 2g/2g split is also an interesting option for many workloads.
Yet another variant of your phram swap is to use compressed swap, whatever that is called nowadays.
We might be able to use MMU features to implement a 4G/4G split, which lets us use 3GB physical RAM or more (depending on vmalloc and I/O sizes) without highmem, but can have an even higher cost by significantly slowing down uaccess.
It would be tricky to create temporary mappings for uaccess (and may involve get_user_pages or some form of pinning the pages in memory).
That's what I meant by expensive. You end up with get_user turning into:

  get_user_pages_fast()
  kmap_atomic()
  memcpy()
  kunmap_atomic()
  put_page()
The get_user_pages_fast is bad enough, but if you're unfortunate enough to still require highmem, the kmap_atomic is going to hurt even more.
With a pure 4G/4G split and highmem disabled, it may be worth trying, but again it will depend on the workload if there is anything to gain here.
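As an illustration, a hedged sketch of what a single-page get_user-style copy might turn into under such a split (invented function name, no error paths, assumes the copy stays within one page; not real uaccess code):

/*
 * Illustrative only: copying from user space when user pages are not
 * mapped while the kernel runs (4G/4G split).
 */
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

static long sketch_copy_from_user(void *dst, unsigned long uaddr, size_t len)
{
	struct page *page;
	void *kaddr;

	/* Pin the user page so it cannot be reclaimed underneath us. */
	if (get_user_pages_fast(uaddr & PAGE_MASK, 1, 0, &page) != 1)
		return -EFAULT;

	/* Map it into kernel space -- the cost being discussed above. */
	kaddr = kmap_atomic(page);
	memcpy(dst, kaddr + (uaddr & ~PAGE_MASK), len);
	kunmap_atomic(kaddr);

	put_page(page);
	return 0;
}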
Arnd
On Thu, 23 Sep 2010, Catalin Marinas wrote:
On Thu, 2010-09-23 at 16:15 +0100, Arnd Bergmann wrote:
On Thursday 23 September 2010, Nicolas Pitre wrote:
This highmem topic comes from the fact that highmem will be needed in the period of time between now and LPAE where we have boards with lots of memory but we can't address it all without highmem (unless we want to revisit the 3g/1g split, but I personally think not).
Note that LPAE does require highmem to be useful. The only way highmem could be avoided is to move to a 64-bit architecture.
Right, I'd even say LPAE can only make things worse because people will stick even more memory into their systems, most of which then becomes highmem.
If you really need so much memory, it's more efficient to have LPAE+highmem than a swap device. The problem is if the OS doesn't need so much memory but it is available, Linux tries to allocate from highmem first. What could help is a different zone fallback mechanism trying to allocate from lowmem up to a certain threshold.
Beware the subtlety here. The kernel will target highmem first for user space allocations, as this is in most cases memory that the kernel won't have to touch. Typically you get user memory populated with application code and data through DMA and the kernel doesn't have to kmap() those pages. Even swapping user space pages doesn't require that the kernel see the content of those pages. But that works out _only_ if IO is performed through DMA, and that DMA can be done on the full physical address range. As soon as you need to bounce data into lowmem you start to lose.
Also, when highmem is involved, the proportion of low pages vs high pages quickly becomes small (more than 3 times as many highmem pages as lowmem pages when there is 4G of RAM), and lowmem pages become a scarce resource. It is normal in that case to favor highmem page allocations as much as possible.
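As a short illustration of that split (standard allocator flags, invented function names, nothing ARM-specific):

/*
 * Sketch: GFP_HIGHUSER_MOVABLE asks the allocator to try ZONE_HIGHMEM
 * first and fall back to lowmem, while GFP_KERNEL memory must be
 * directly addressable by the kernel and therefore stays in lowmem.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>

static struct page *alloc_user_style_page(void)
{
	/* User anonymous/pagecache style allocation: highmem preferred. */
	return alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_ZERO);
}

static void *alloc_kernel_buffer(void)
{
	/* The kernel dereferences this pointer directly: lowmem only. */
	return kmalloc(PAGE_SIZE, GFP_KERNEL);
}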
Another option would be to use the highmem for hosting a swap via some form of ramdisk or slram/phram.
This is useless when highmem is allocated to user space. Better to simply allocate user space in highmem directly and do nothing else than swapping between page tables on context switch.
Nicolas
On Mon, Sep 20, 2010, Nicolas Pitre wrote:
I'm also interested in both of those topics as 1) I participated in the design of the SDIO stack (closely related to SD), and 2) I did the highmem implementation for ARM.
(You're awesome! :-)
When you say SDIO, you mean just SDIO or SD and SDIO?
SDIO came up, but the main request from TSC-Ms was for _SD_, and I inferred that this meant SD as mass storage. I suspect it's becoming common to cut costs by getting rid of flash and using just SD in phones and DTVs.
However, I didn't check whether it was "anything impacting fs on SD", which means we might want to look at filesystems, nor how important SDIO was here. In fact, I didn't think of FSes at all, just thought about the throughput of the SD subsystem and specific SD backend drivers.
I'll take an action to go ask the TSC-Ms about this and see what they really care about.
Thanks,
On Thu, 23 Sep 2010, Loïc Minier wrote:
When you say SDIO, you mean just SDIO or SD and SDIO?
I wrote a driver for one SD/SDIO host controller, played with the code for two other controllers, and wrote part of the SDIO stack. All of this shares common infrastructure with pure SD cards, hence my interest in the topic due to the overlap, especially at the low level. I'm less familiar with the filesystem issues that Arnd is mentioning though.
SDIO came up, but the main request from TSC-Ms was for _SD_, and I inferred that this meant SD as mass storage. I suspect it's becoming common to cut costs by getting rid of flash and using just SD in phones and DTVs.
Indeed.
However, I didn't check whether it was "anything impacting fs on SD", which means we might want to look at filesystems, nor how important SDIO was here. In fact, I didn't think of FSes at all, just thought about the throughput of the SD subsystem and specific SD backend drivers.
SDIO is also becoming important as this is often the preferred interconnect for wireless chips.
Nicolas
On Mon, Sep 20, 2010 at 10:35:52AM -0700, Paul E. McKenney wrote:
I still have two questions about stuff that came up here:
- SD/MMC performance: Is this about device access, file systems or both? I think the file system level stuff is actually the more important part but I fear that what was discussed is the other one.
The discussion didn't get into that level of detail. :-(
That's no reason for a frownie. The TSC discussion is /meant/ to be high-level; if you want customer input on what to focus on we can reach out to them, but otherwise just measure and optimize. IOW, you get to choose what the most important part is.
- highmem: Not sure what this was about. I guess we don't really want to enable this by default, but some people will want it anyway. Is this about run-time patching the code out of the kernel?
I believe that this is related to LPAE -- the usual 32-bit-only DMA devices in a >32-bit physical address space. But there was also discussion about run-time patching for SMP alternatives, though I am missing how this relates to highmem. Enlightenment?
(Highmem is not just for LPAE; as we discussed earlier, it's also important to support configurations with more than 1GB if you preserve the 1/3 memory split.)
There's was relationship between runtime UP detection and highmem discussed in the meeting; the bigger point here is that we want to ensure the kernel is stellar at supporting our architectural baseline: platforms that are V7-A, SMP, NEON and larger-than-traditional-embedded memory profiles.
On Tue, Sep 21, 2010 at 12:57:35PM -0300, Christian Robottom Reis wrote:
On Mon, Sep 20, 2010 at 10:35:52AM -0700, Paul E. McKenney wrote:
I still have two questions about stuff that came up here:
- SD/MMC performance: Is this about device access, file systems or both? I think the file system level stuff is actually the more important part but I fear that what was discussed is the other one.
The discussion didn't get into that level of detail. :-(
That's no reason for a frownie. The TSC discussion is /meant/ to be high-level; if you want customer input on what to focus on we can reach out to them, but otherwise just measure and optimize. IOW, you get to choose what the most important part is.
Fair enough. My point was that the answer was not known, so that work would need to be done to find the answer.
- highmem: Not sure what this was about. I guess we don't really want to enable this by default, but some people will want it anyway. Is this about run-time patching the code out of the kernel?
I believe that this is related to LPAE -- the usual 32-bit-only DMA devices in a >32-bit physical address space. But there was also discussion about run-time patching for SMP alternatives, though I am missing how this relates to highmem. Enlightenment?
(Highmem is not just for LPAE; as we discussed earlier, it's also important to support configurations with more than 1GB if you preserve the 1/3 memory split.)
There's was relationship between runtime UP detection and highmem discussed in the meeting; the bigger point here is that we want to ensure the kernel is stellar at supporting our architectural baseline: platforms that are V7-A, SMP, NEON and larger-than-traditional-embedded memory profiles.
Thank you for the info!
Thanx, Paul
On Tue, Sep 21, 2010 at 12:57:47PM -0700, Paul E. McKenney wrote:
There's was relationship between runtime UP detection and highmem discussed in the meeting; the bigger point here is that we want to ensure the kernel is stellar at supporting our architectural baseline: platforms that are V7-A, SMP, NEON and larger-than-traditional-embedded memory profiles.
Thank you for the info!
Or misinfo ;-) -- the first sentence should have read "There was no relationship between runtime UP detection and highmem discussed at the meeting". In my mind they are separate but important pieces in the support-v7a-and-etc puzzle.
On 20 Sep 2010, at 17:19, Paul E. McKenney wrote:
o Requirements schedule
o Requirements discussion, mostly in TSC.
o Discussion and refinement at Orlando.
o Blueprint generation in the 2-3 weeks following Orlando.
I'd just like to point out that "Blueprint generation" is *before* Linaro@UDS. The 2 weeks following the event are used to refine the Blueprints, add the inputs from the session(s), and break down the tasks into roughly two-day chunks (work items).
For a high-level explanation of the timings there is a presentation at:
http://wiki.linaro.org/JamieBennett#Presentations%20in%20Progress
entitled "Planning and Executing the Linaro Cycle". This url is just a temporary one, it will be put in a better place soon.
Regards, Jamie.
On Mon, Sep 20, 2010 at 06:10:16PM +0100, Jamie Bennett wrote:
On 20 Sep 2010, at 17:19, Paul E. McKenney wrote:
o Requirements schedule
o Requirements discussion, mostly in TSC.
o Discussion and refinement at Orlando.
o Blueprint generation in the 2-3 weeks following Orlando.
I'd just like to point out that "Blueprint generation" is *before* Linaro@UDS. The 2 weeks following the event are used to refine the Blueprints, add the inputs from the session(s), and break down the tasks into roughly two-day chunks (work items).
For a high-level explanation of the timings there is a presentation at:
http://wiki.linaro.org/JamieBennett#Presentations%20in%20Progress
entitled "Planning and Executing the Linaro Cycle". This url is just a temporary one, it will be put in a better place soon.
Thank you for the info!!! I would not accuse this PDF of downloading quickly -- I should be able to see it in a few more minutes. ;-)
Thanx, Paul
On Mon, Sep 20, 2010 at 09:19:49AM -0700, Paul E. McKenney wrote:
o Jason Hui: iMX51 work on device trees. Need assignment. Reviewed BSP review, need to send results to Loïc et al.
It would be great to see more MX51 things on ALKML (and devicetree things on devtree-discuss). Up to now I have the impression that there is still much work done behind closed doors, which is bad if we want to have better mainline support for i.MX5x.
Robert
On Tue, Sep 21, 2010 at 12:15:17AM +0200, Robert Schwebel wrote:
On Mon, Sep 20, 2010 at 09:19:49AM -0700, Paul E. McKenney wrote:
o Jason Hui: iMX51 work on device trees. Need assignment. Reviewed BSP review, need to send results to Loïc et al.
It would be great to see more MX51 things on ALKML (and devicetree things on devtree-discuss). Up to now I have the impression that there is still much work done behind closed doors, which is bad if we want to have better mainline support for i.MX5x.
That's a good point, and there's nothing happening behind closed doors that I have requested -- in fact, I mandate the opposite. I would assume that Jeremy has been posting device tree mx51 work to the devtree-discuss ML, and that mx51 patches coming out of the KWG and PMWG are also posted to ALKML; correct me if I'm wrong.
It may be that it's just very little happening :-/
On 09/21/2010 11:02 AM, Christian Robottom Reis wrote:
On Tue, Sep 21, 2010 at 12:15:17AM +0200, Robert Schwebel wrote:
On Mon, Sep 20, 2010 at 09:19:49AM -0700, Paul E. McKenney wrote:
o Jason Hui: iMX51 work on device trees. Need assignment. Reviewed BSP review, need to send results to Loïc et al.
It would be great to see more MX51 things on ALKML (and devicetree things on devtree-discuss). Up to now I have the impression that there is still much work done behind closed doors, which is bad if we want to have better mainline support for i.MX5x.
That's a good point, and there's nothing happening behind closed doors that I have requested -- in fact, I mandate the opposite. I would assume that Jeremy has been posting device tree mx51 work to the devtree-discuss ML, and that mx51 patches coming out of the KWG and PMWG are also posted to ALKML; correct me if I'm wrong.
It may be that it's just very little happening :-/
In fact, that is the case. There has been no work on it in 4 months. The main area of work since then has been on a common struct clk to enable DT clock support.
The MX51 DT work is here:
http://kernel.ubuntu.com/git?p=jk/dt/linux-2.6.git;a=shortlog;h=refs/hea...
Rob
On Tue, Sep 21, 2010, Robert Schwebel wrote:
It would be great to see more MX51 things on ALKML (and devicetree things on devtree-discuss). Up to now I have the impression that there is still much work done behind closed doors, which is bad if we want to have better mainline support for i.MX5x.
Eh, there is no closed door, just the usual slowness of newcomers learning how to work with device tree; discussions should happen on the devicetree-discuss and the linux-arm-kernel mailing lists. For example, you can see Shaju Abraham working on DT support for Samsung boards, posting with his @linaro.org address on the devicetree-discuss list :-)