On Fri, Apr 1, 2011 at 11:57 PM, Philippe Robin philippe.robin@arm.com wrote:
Eric,
So maybe it's just the right time to talk about using the Linaro ARM kernel tree as a fork for quickly merging the ever-expanding SoC and board support, and using it more as a productive kernel for downstream.
I don't think that a 'fork' is really a solution we are looking for. Using Linaro as a staging and consolidation tree and at the same time improving the upstream kernel is more what I would be looking for and what Linaro is currently working on.
Yeah, staging is closer to what I meant; a 'fork' is not appropriate here, as getting the support into mainline will always be our goal. Yet it seems necessary to have such a temporary place for those patches to live before the mainline is in good enough shape. And it should not be an arm-next tree, which is just for detecting merge conflicts. I expect it to be more usable: end users can just download and build a basically usable kernel.
Regards, Philippe
-----Original Message----- From: linaro-dev-bounces@lists.linaro.org [mailto:linaro-dev-bounces@lists.linaro.org] On Behalf Of Eric Miao Sent: 01 April 2011 16:44 To: Linaro Dev Subject: Linus being annoyed by the ARM kernel code
Just FYI - lengthy but very interesting read, Linus was really good at wording, enjoy heh :-)
https://lkml.org/lkml/2011/3/17/283
So maybe it's just the right time to talk about using the Linaro ARM kernel tree as a fork for quickly merging the ever-expanding SoC and board support, and using it more as a productive kernel for downstream. And in the meantime, improve the mainline kernel into such good shape that we could support more platforms with less crappy code?
Just a bit of thought on that possibility.
- eric
On Sat, 2 Apr 2011, Eric Miao wrote:
Yeah, staging is closer to what I meant; a 'fork' is not appropriate here, as getting the support into mainline will always be our goal. Yet it seems necessary to have such a temporary place for those patches to live before the mainline is in good enough shape.
No, that won't solve the problem. Patches will simply be pushed to that temporary place and rot there while their authors have moved on to the next SOC revision to enable.
The problem is more fundamental due to the lack of better common infrastructure. We must come to a point where SOC code is obvious to write and right from the start. That's the only solution that scales.
Nicolas
On Sat, Apr 2, 2011 at 6:05 AM, Nicolas Pitre nicolas.pitre@linaro.org wrote:
On Sat, 2 Apr 2011, Eric Miao wrote:
Yeah, staging is closer to what I meant; a 'fork' is not appropriate here, as getting the support into mainline will always be our goal. Yet it seems necessary to have such a temporary place for those patches to live before the mainline is in good enough shape.
No, that won't solve the problem. Patches will simply be pushed to that temporary place and rot there while their authors have moved on to the next SOC revision to enable.
The problem is more fundamental due to the lack of better common infrastructure. We must come to a point where SOC code is obvious to write and right from the start. That's the only solution that scales.
I understand it could be even worse to have a temporary place. Yet there is indeed a timeline gap before a generic enough infrastructure can be implemented that pleases Linus and everyone. I noted down some ideas last night below:
The major problem I see now is the ever-increasing support for more ARM SoCs and more platforms, while the mainline kernel is not yet in good enough shape to scale to that, at least not until the features below are complete:
* Device tree for hardware abstraction
* Single kernel binary for most SoCs/boards
* And very hardware specific code moved out to a controllable place, i.e. something like BIOS
So that the kernel can be generic enough. There is a time gap before all of these can be done. Thus, I'm thinking of a staging kernel tree that:
1. Merges support for more ARM SoCs and platforms
2. Code for different SoCs and boards does not conflict with or impact other code; it all lives in a single branch
3. Will have a certain level of code quality, at least conforming to mainline kernel code quality; however, requiring that code be upstreamable, upstreamed or Acked-by mainline maintainers might be too strict here
e.g. Most of Freescale's BSP code is in quite good shape already, but probably won't make upstream maintainers happy in every place. Yet I believe it deserves a place somewhere, not only on Freescale's extranet.
4. This kernel tree should be as full-featured as possible, except where some driver's code quality doesn't conform
5. A usable Ubuntu kernel package, an Android kernel image or a kernel image for the LEBs that we will support could be automatically generated from this tree; daily builds and automated tests would make every player here happy
6. The tree will be sync'ed with the mainline kernel constantly; I believe what Nico is already doing works very well, so that we will have:
linux-linaro-2.6.38 linux-linaro-2.6.39 ...
Our members are always busy working on kernel upgrades on their own, and for each upgrade they choose a kernel version based on a customer's or a distro kernel's requirements. They are normally facing a big kernel version delta, which makes each upgrade a great pain.
If there is already a Linaro kernel that's very usable and that we help keep in sync with every mainline kernel release, this will definitely make them happy.
Keeping track of every mainline kernel release would also make the whole upgrade easier, because the delta would be much smaller. Of course that means Linaro will have to do more work, which I think could be part of the landing team's job. Even if we do little or nothing here, the upgrade itself, plus the daily builds and automated tests, would provide great feedback to our members.
7. Due to the code quality requirement here, it could be the case that SoC members still need to put something on top, e.g. some dirty patches which are necessary but could make other SoCs unusable, so they still need their own BSP kernel. However, in this case they lose nothing, because the only difference for them is whether they base on a Linaro kernel or a mainline kernel.
8. Our SoC members' customers can get a kernel either from our members or from Linaro. And those very small customers that our members have no resources to support don't normally care whether a kernel comes from mainline or from Linaro, as long as it's usable.
9. And again, upstreaming effort to mainline remains unchanged, and this tree could serve as a great starting point.
But this would definitely increase the maintenance effort. (I'm looking at Nico :-)
So I think the Landing team could definitely help to get our members' kernel support in there, as this increases our members' value.
- eric
Nicolas
On Sat, Apr 2, 2011 at 12:46 PM, Eric Miao eric.miao@linaro.org wrote:
On Sat, Apr 2, 2011 at 6:05 AM, Nicolas Pitre nicolas.pitre@linaro.org wrote:
On Sat, 2 Apr 2011, Eric Miao wrote:
Yeah, staging is closer to what I meant; a 'fork' is not appropriate here, as getting the support into mainline will always be our goal. Yet it seems necessary to have such a temporary place for those patches to live before the mainline is in good enough shape.
No, that won't solve the problem. Patches will simply be pushed to that temporary place and rot there while their authors have moved on to the next SOC revision to enable.
My argument is that patches won't rot here. Patches rot in a kernel tree that is derived from one of our member SoCs' BSPs, pushed nowhere, and worked on by very few developers as a pet project. Yeah, this reminds me of the zaurus 2.4 kernel tree. But it's a different case here: we have our members involved, and what we are going to support are boards that are widely available and have a community working on them; we definitely don't want to merge all the crap that could fall out of maintenance. And we provide a playground so the kernel is actually usable with Ubuntu or Android, which on the contrary makes it easier to test, validate and get the patches mainlined.
But I don't believe what we're doing
On Sat, 2 Apr 2011, Eric Miao wrote:
On Sat, Apr 2, 2011 at 6:05 AM, Nicolas Pitre nicolas.pitre@linaro.org wrote:
On Sat, 2 Apr 2011, Eric Miao wrote:
Yeah, staging is closer to what I meant; a 'fork' is not appropriate here, as getting the support into mainline will always be our goal. Yet it seems necessary to have such a temporary place for those patches to live before the mainline is in good enough shape.
No, that won't solve the problem. Patches will simply be pushed to that temporary place and rot there while their authors have moved on to the next SOC revision to enable.
The problem is more fundamental due to the lack of better common infrastructure. We must come to a point where SOC code is obvious to write and right from the start. That's the only solution that scales.
I understand it could be even worse to have a temporary place. Yet there is indeed a timeline gap before a generic enough infrastructure can be implemented that pleases Linus and everyone.
There is also a catch. Unless you have the power of precognition, it is extremely hard to predict what is going to be common in the future and therefore might call for a generic infrastructure. And without that generic infrastructure, people will be tempted to duplicate code for their own purpose. It is therefore important to be on the lookout for such duplications when they occur and be quick to provide infrastructure to reduce them.
I noted down some ideas last night below:
The major problem I see now is the ever-increasing support for more ARM SoCs and more platforms, while the mainline kernel is not yet in good enough shape to scale to that, at least not until the features below are complete:
- Device tree for hardware abstraction
Sure.
- Single kernel binary for most SoCs/boards
Well... While this is a worthwhile thing to do, this is not going to have a significant effect on the amount of code involved. Each SoC will always require some amount of hardware specific code, whether or not that code is compiled alone or together with other SoC support.
- And very hardware specific code moved out to a controllable place, i.e. something like BIOS
Sorry, but I must vehemently disagree here. BIOSes are a problem for Open Source, not a solution. On X86 they use BIOS services only when there is simply no other choice, because the BIOS is too often buggy and it is more difficult and risky to update than the kernel.
If you rely on the BIOS to do X, it will work when the BIOS gets it right. If you do X yourself, it will work whether or not the BIOS gets it right. This means that if there's even one BIOS version you have to deal with out there that gets X wrong, you have to do it yourself and then there is no incentive to rely on the BIOS even in the cases where it does get it right so to maintain only one code path.
And relying on a BIOS could make many kernel improvements impossible to implement as the execution context assumed by the BIOS may not be guaranteed anymore (think about UP vs SMP, different preemption modes, different realtime kernel modes, etc.) And of course it is impossible to anticipate what execution context and requirements the kernel might need in the future, hence this can't be provisioned for (and much less validated) into the BIOS design in advance.
So that the kernel can be generic enough. There is a time gap before all of these can be done. Thus, I'm thinking of a staging kernel tree that:
- Merges support for more ARM SoCs and platforms
Sure. That's what I do with the Linaro kernel. And my policy when switching to the next mainline release is to _not_ rebase patches that appear unmaintained. In fact, I expect people to either have pushed their patches upstream, or rebase them to the next mainline version themselves, and if they don't do any of that then maybe those patches are just not worth it anymore and the best course of action is simply to drop them. So this temporary kernel tree _is_ the Linaro tree, and I'm making sure that nothing is kept latent for too long.
- Code for different SoCs and boards does not conflict with or impact other code; it all lives in a single branch
Absolutely. Otherwise people simply get lazy and careless. We _have_ to share the same branch as much as possible. And to answer Linus' criticism, we also have to _share_ as much code between SoCs and vendors.
- Will have a certain level of code quality, at least conforming to mainline kernel code quality; however, requiring that code be upstreamable, upstreamed or Acked-by mainline maintainers might be too strict here
e.g. Most of Freescale's BSP code is in quite good shape already, but probably won't make upstream maintainers happy in every place. Yet I believe it deserves a place somewhere, not only on Freescale's extranet.
In that case it certainly can be merged in the Linaro kernel for this 2.6.38 release. When Linaro moves to 2.6.39 then someone will have to rebase those patches for Linaro's 2.6.39 release, and/or work on them towards upstream acceptance. But they won't be merged automatically into the next Linaro kernel if nothing is done about them in parallel.
Nicolas
On 04/03/2011 04:07 AM, Somebody in the thread at some point said:
Hi -
- And very hardware specific code moved out to a controllable place, i.e. something like BIOS
Sorry, but I must vehemently disagree here. BIOSes are a problem for Open Source, not a solution. On X86 they use BIOS services only when there is simply no other choice, because the BIOS is too often buggy and it is more difficult and risky to update than the kernel.
I followed the lkml thread and saw bootloaders mentioned there as some kind of happy place where all problems will be solved. You're quite right, it's just a carpet to shove stuff under and stumble over.
If kernel operation is going to intimately rely on this information, e.g. DT tables, and needs its versioning to match the kernel code precisely, in the end the kernel can't avoid owning it, and that extends up to packaging as well. The "attach device tree data to end of kernel" scheme you mentioned, or taking it inside the kernel tree, seems the way to go in that domain, rather than indirecting its availability and versioning through not only the bootloader package and code but also its environment.
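(To illustrate the appended-DTB idea mentioned above - this is only a rough sketch of the general mechanism, not any particular existing implementation - the flattened tree can be recognised by its header right after the image, e.g. with libfdt:)

    #include <libfdt.h>

    /* Sketch only: check whether a flattened device tree blob sits
     * immediately after the kernel image in memory, as in the
     * "attach device tree data to end of kernel" scheme. */
    static const void *find_appended_dtb(const void *image_end)
    {
        const void *fdt = image_end;

        /* fdt_check_header() verifies the FDT magic and header fields;
         * a non-zero return means nothing usable is appended here. */
        if (fdt_check_header(fdt) != 0)
            return NULL;

        /* fdt_totalsize(fdt) then tells us how much memory to reserve. */
        return fdt;
    }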
-Andy
On Sun, Apr 3, 2011 at 4:38 AM, Andy Green andy@warmcat.com wrote:
On 04/03/2011 04:07 AM, Somebody in the thread at some point said:
Hi -
* And very hardware specific code moved out to a controllable place, i.e. something like BIOS
Sorry, but I must vehemently disagree here. BIOSes are a problem for Open Source, not a solution. On X86 they use BIOS services only when there is simply no other choice, because the BIOS is too often buggy and it is more difficult and risky to update than the kernel.
I followed the lkml thread and saw bootloaders mentioned there as some kind of happy place where all problems will be solved. You're quite right, it's just a carpet to shove stuff under and stumble over.
If kernel operation is going to intimately rely on this information, e.g. DT tables, and needs its versioning to match the kernel code precisely, in the end the kernel can't avoid owning it, and that extends up to packaging as well. The "attach device tree data to end of kernel" scheme you mentioned, or taking it inside the kernel tree, seems the way to go in that domain, rather than indirecting its availability and versioning through not only the bootloader package and code but also its environment.
I've worked on DT based PowerPC systems which have similar, multiple variants of the base hardware. The bootloader provides the DT to the kernel on these machines. A unified kernel image reads the DT and then adjusts to reflect the hardware specified in the device tree.
Think of the DT as a way of probing a bus that doesn't have probe capabilities. This gives you a way to dynamically load drivers from initrd if you want. For example we dynamically loaded drivers for I2C devices that were previously always built in.
Board specific code inside the kernel can trigger off the DT board name and then deal with variations of DT formats that exist for that particular board, so you aren't forced to update the kernel and DT in lock step. Examples of that are in arch/powerpc/platforms/xxx/board-x. Code in there looks for some old, messed up DTs and then modifies them to be compliant before allowing the kernel to read them. If the user updates their bootloader/DT that code won't trigger. The trick here is that the code modifies the DT, and then the kernel interprets the DT. The code detecting the old DT format does not go and directly manipulate the device drivers.
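(A minimal sketch of that fixup idea, with hypothetical node paths and strings rather than the actual powerpc code, might look like this, operating on the flat tree before the kernel parses it:)

    #include <string.h>
    #include <libfdt.h>

    /* Illustrative only: recognise an old, non-compliant tree by its
     * model string and rewrite one property so generic kernel code can
     * match it.  Paths and names here are made up. */
    static void fixup_legacy_dt(void *fdt)
    {
        int root, uart;
        const char *model;

        root = fdt_path_offset(fdt, "/");
        if (root < 0)
            return;

        model = fdt_getprop(fdt, root, "model", NULL);
        if (!model || strcmp(model, "acme,old-board-rev-a") != 0)
            return;                 /* newer trees are already compliant */

        /* Old trees shipped the wrong compatible string for the UART node;
         * rewrite it so the kernel's generic matching still works. */
        uart = fdt_path_offset(fdt, "/soc/serial@1000");
        if (uart >= 0)
            fdt_setprop_string(fdt, uart, "compatible", "acme,std-uart");
    }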
The goal of this was to be able to ship a single kernel image update that worked on a dozen hardware variations.
I haven't been following the ARM DT work, but a scheme that might work on ARM is to build DTs into the kernel corresponding to each ARM machine ID supported by the kernel image. Use the machine ID to select the correct one and discard the rest. As ARM bootloaders are modified to directly support DTs, slowly get rid of the in-kernel DTs.
A key concept: think of the DT as a way of probing a bus that doesn't have probe capabilities. You can argue that C code can produce the same effect as DTs which is true. But that board specific setup code tends to grow and stick its fingers into everything. DTs mitigate that simply because they aren't C code. DTs encourage the development of generic device setup code instead of one-off platform specific code.
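(As a rough illustration of that "probe via DT" idea - the compatible string and driver names below are hypothetical, not taken from any real driver - a DT-matched I2C driver has roughly this shape, which is what lets it live in an initrd instead of being wired up by board code:)

    #include <linux/module.h>
    #include <linux/i2c.h>
    #include <linux/of.h>

    /* Hypothetical device: matched against a DT node whose compatible
     * property says "acme,sensor-x". */
    static const struct of_device_id acme_sensor_of_match[] = {
        { .compatible = "acme,sensor-x" },
        { }
    };
    MODULE_DEVICE_TABLE(of, acme_sensor_of_match);

    static int acme_sensor_probe(struct i2c_client *client,
                                 const struct i2c_device_id *id)
    {
        /* client->dev.of_node points at the DT node that described us */
        return 0;
    }

    static int acme_sensor_remove(struct i2c_client *client)
    {
        return 0;
    }

    static struct i2c_driver acme_sensor_driver = {
        .driver = {
            .name           = "acme-sensor",
            .of_match_table = acme_sensor_of_match,
        },
        .probe  = acme_sensor_probe,
        .remove = acme_sensor_remove,
    };

    static int __init acme_sensor_init(void)
    {
        return i2c_add_driver(&acme_sensor_driver);
    }
    module_init(acme_sensor_init);

    static void __exit acme_sensor_exit(void)
    {
        i2c_del_driver(&acme_sensor_driver);
    }
    module_exit(acme_sensor_exit);

    MODULE_LICENSE("GPL");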
On Sun, Apr 3, 2011 at 10:25 AM, jonsmirl@gmail.com jonsmirl@gmail.com wrote:
Above everything else, I definitely like to see DT get done first, it's essential for SoC these days.
On 04/03/2011 05:05 PM, Somebody in the thread at some point said:
Above everything else, I definitely like to see DT get done first, it's essential for SoC these days.
All I am suggesting is bind the DTs in the kernel. That's easier and faster than the alternatives and there is a lot less to go wrong and make DT a difficult experience for users.
-Andy
As long as the experience is driven by both the SoC vendor and the board designer and not the kernel driver engineer, this will go very well..
At the point where device tree specification and maintenance is done in-kernel, device trees get very Linux-specific and very Linux-driver-specific. Plenty of mistakes were made in the move to flattened device trees on PowerPC which took too long to resolve.
Linux may not have a stable internal API but the bootloader-to-kernel boundary can and should have a stable interface. If that stable interface is the device tree it cannot change just because someone has the ability to add 10 lines to a kernel driver AND patch the DT in the same tree to correspond to it. The device tree is dynamic by design from its legacy of the OpenPROM and OpenFirmware specifications, but that does not mean it changes every day; it just means it changes depending on hardware configuration (i.e. plugging in a PCI card, showing a USB device in the tree - even though this is probed later in both cases, if you boot from it, it should be there; on development boards this is also the presence or not of add-on boards such as LCDs, debug interfaces, stacks, controllers which may be entirely different per-boot or per-dip-switch).
This is a fundamental divergence between consumer product and Linux developer - it is not acceptable to update the firmware for every kernel version, or burden the user for responsibility or increase engineering tasks and risks to make Linux drivers by being reliant on keeping track of a moving target.
Unfortunately with a predefined, flat device tree compiled at kernel-time and attached to the kernel image, you lose the configurability of the hardware and need to at least re-link the kernel if you remove the debug board, or add a user interface element like an LCD panel to a pluggable board solution. Or, you have to specify it in a static tree regardless of presence, and use runtime code in Linux to specifically detect that this plugin is available, which defeats the purpose of a device tree in the first place.
The point: solutions should exist in firmware to generate the device tree or at least take a known-common configuration and add to it. If this means making the device tree compiler and an internal API available within U-Boot, then sure. For real OF or other implementations (we've been working on a way to interface UEFI with a device tree for a while using a variant of the system table) this may be generated in another way.
What you will end up with is two interfaces, like on PPC - one for flattened trees being loaded from a compiled binary and one which, on boot, pulls the device tree out to a self-maintained copy which can be parsed by a common API. This is all already implemented, and common code across PPC and SPARC and probably Blackfin and the new architectures.
The interface from U-Boot (to pick an example) to Linux, to basically drop in a device tree generated by U-Boot itself (or half-generated, or at least compiled at runtime), would still be bootm with a third argument: that third argument will have to be the pointer to the generated tree.
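(A concrete usage sketch, with placeholder environment variable names: on boards where U-Boot already carries FDT support, this looks like "bootm ${kernel_addr} - ${fdt_addr}", the "-" standing in for the absent ramdisk address.)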
At the very least any pin configuration - supplemented by a device tree or otherwise - absolutely must be done in the very first instants of boot time, and not 5 seconds into kernel boot after loading disks, uncompressing and performing several architecture init functions. For i.MX that means pulling every iomux table into U-Boot. For anything dynamic (check a GPIO, then change the IOMUX depending on the type of board, revision or so) then you are forced to think of device trees anyway..
We are going to have to deal with at least one firmware update for every "ported" platform. Can we try and keep it to at most one firmware update for the sake of all our customers? :)
On 04/03/2011 05:46 PM, Somebody in the thread at some point said:
As long as the experience is driven by both the SoC vendor and the board designer and not the kernel driver engineer, this will go very well..
At the point where device tree specification and maintenance is done in-kernel, device trees get very Linux-specific and very Linux-driver-specific. Plenty of mistakes were made in the move to flattened device trees on PowerPC which took too long to resolve.
I don't have any comment about who should work on the DT data in-kernel; whoever would work on it outside the kernel should do it in-kernel, rendering this moot.
Linux may not have a stable internal API but the bootloader-to-kernel boundary can and should have a stable interface. If that stable interface is the device tree it cannot change just because someone has the ability to add 10 lines to a kernel driver AND patch the DT in the same tree to correspond to it. The device tree is dynamic by design from its legacy of the OpenPROM and OpenFirmware specifications, but that does not mean it changes every day; it just means it changes depending on hardware configuration (i.e. plugging in a PCI card, showing a USB device in the tree - even though this is probed later in both cases, if you boot from it, it should be there; on development boards this is also the presence or not of add-on boards such as LCDs, debug interfaces, stacks, controllers which may be entirely different per-boot or per-dip-switch).
I see. That makes DT sound a bit like a hybrid between code and data, which I thought it was meant to eliminate. I guess for the board option case, it should be possible to retain a static definition and have nodes that are conditional on switch states.
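(One way to sketch that - with entirely hypothetical paths and helpers, just to make the idea concrete - is to keep the optional add-on board in the static tree with status = "disabled" and let whoever owns the tree flip it when the switch says the board is present:)

    #include <libfdt.h>

    /* Assumed board-specific helper: reads the dip switch that tells us
     * whether the add-on LCD board is plugged in. */
    extern int addon_lcd_present(void);

    static void enable_addon_if_present(void *fdt)
    {
        /* Node path is hypothetical; the node would carry
         * status = "disabled" in the static tree. */
        int node = fdt_path_offset(fdt, "/addon-lcd");

        if (node >= 0 && addon_lcd_present())
            fdt_setprop_string(fdt, node, "status", "okay");
    }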
This is a fundamental divergence between consumer product and Linux developer - it is not acceptable to update the firmware for every kernel version, or burden the user for responsibility or increase engineering tasks and risks to make Linux drivers by being reliant on keeping track of a moving target.
Sounds like we agree about that.
Unfortunately with a predefined, flat device tree compiled at kernel-time and attached to the kernel image, you lose the configurability of the hardware and need to at least re-link the kernel if you remove the debug board, or add a user interface element like an LCD panel to a pluggable board solution. Or, you have to specify it in a static tree regardless of presence, and use runtime code in Linux to specifically detect that this plugin is available, which defeats the purpose of a device tree in the first place.
I don't see why that should follow. You should be able to attach probed devices to the flat tree basis? It can't do that?
The point: solutions should exist in firmware to generate the device tree or at least take a known-common configuration and add to it. If this means making the device tree compiler and an internal API available within U-Boot, then sure. For real OF or other implementations (we've been working on a way to interface UEFI with a device tree for a while using a variant of the system table) this may be generated in another way.
What you will end up with is two interfaces, like on PPC - one for flattened trees being loaded from a compiled binary and one which, on boot, pulls the device tree out to a self-maintained copy which can be parsed by a common API. This is all already implemented, and common code across PPC and SPARC and probably Blackfin and the new architectures.
The interface from U-Boot (to pick an example) to Linux, to basically drop in a device tree generated by U-Boot itself (or half-generated, or at least compiled at runtime), would still be bootm with a third argument: that third argument will have to be the pointer to the generated tree.
My experience with U-Boot and other bootloaders has led me to conclude they should be as minimal, thin and deterministic as possible. Any real business should be deferred to Linux doing it in one place and doing it well.
At the very least any pin configuration - supplemented by a device tree or otherwise - absolutely must be done in the very first instants of boot time, and not 5 seconds into kernel boot after loading disks,
Maybe you can explain that with examples where 5 seconds matters and which are not hardware design errors. Because the SoC itself (and I have done this work for iMX31 in Qi) has default pin states that are usually Hi-Z with pullup. "First instants" in any case does not sound like a correct approach either, because if the IO is in conflict even some ms of high current can't be allowed; your bootloader may crash and your board will burn. So we can take it that the IO will not be in conflict by default from startup on any board.
Fact is when the bootloader comes up, you are running OK already by definition. Voltages, IO, clock for cpu are all an acceptable and safe starting point that is already running code.
The amount of bringup that truly cannot be deferred to Linux is restricted only to the prerequisites of loading and booting Linux then, and that does not involve setting all ball muxes, just the ones involved in loading and booting Linux.
uncompressing and performing several architecture init functions. For i.MX that means pulling every iomux table into U-Boot. For anything
No, just the genuine prerequisites to get at the kernel and SDRAM.
dynamic (check a GPIO, then change the IOMUX depending on the type of board, revision or so) then you are forced to think of device trees anyway..
If you take the approach that the bootloader's job includes doing things unnecessary for loading and booting Linux, it's true there is no end to the type and scope of things you might try to have the bootloader do. However, in terms of loading and booting Linux, anything outside of that is unnecessary, including "all IOMUX setting", etc.
We are going to have to deal with at least one firmware update for every "ported" platform. Can we try and keep it to at most one firmware update for the sake of all our customers? :)
In fact if the machine ID is used as a key to find in-kernel DTs, you don't need to update the bootloader even once.
-Andy
On Sun, Apr 3, 2011 at 12:46 PM, Matt Sealey matt@genesi-usa.com wrote:
At the point where device tree specification and maintenance is done in-kernel, device trees get very Linux-specific and very Linux-driver-specific. Plenty of mistakes were made in the move to flattened device trees on PowerPC which took too long to resolve.
This is another good point. If you find yourself sticking Linux specific information (like module names) into the device tree, then something is wrong in the kernel, go fix the kernel. The device tree should only contain a generic description of the hardware that can be used by any operating system.
On the other hand, device trees aren't a static solution. For example, they haven't come up with a generic mechanism for completely describing things like clock and power management domains. But let's figure out schemes for describing these problem areas and fix the device tree model. As more architectures utilize device trees these problem areas should get figured out and the issues will go away. So if you find yourself adding thousands of lines of board specific code to the kernel, something is probably wrong in the device tree generic hardware description, go fix it.
This is an evolutionary process. Start off with selecting in-kernel device trees based on machine ID. Start off with describing the basic hardware in the device tree and remove the old kernel code that was building the description. Move on to device trees provided by the bootloader. After basic hardware description is converted move on to more complex areas like clock and power domains.
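(To make the first step concrete - the names and machine numbers below are invented, this is not an existing kernel interface - selecting an in-kernel DTB off the machine ID could be as simple as a lookup table:)

    /* Illustrative sketch: DTBs built by dtc and linked into the kernel
     * image, keyed by the ARM machine number the bootloader passes in r1. */
    struct builtin_dtb {
        unsigned int machine_nr;
        const void  *blob;
    };

    /* Assumed linker-provided symbols for the embedded blobs. */
    extern const char __dtb_board_a_begin[];
    extern const char __dtb_board_b_begin[];

    static const struct builtin_dtb builtin_dtbs[] = {
        { 1111 /* hypothetical MACH_TYPE_BOARD_A */, __dtb_board_a_begin },
        { 2222 /* hypothetical MACH_TYPE_BOARD_B */, __dtb_board_b_begin },
    };

    static const void *select_builtin_dtb(unsigned int machine_nr)
    {
        unsigned int i;

        for (i = 0; i < sizeof(builtin_dtbs) / sizeof(builtin_dtbs[0]); i++)
            if (builtin_dtbs[i].machine_nr == machine_nr)
                return builtin_dtbs[i].blob;

        return NULL;    /* fall back to the classic board-file path */
    }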
A big effect of switching to device trees is to make kernel developers stop and think about generic solutions to problems instead of adding another 1,000 lines of one-off code to the kernel. The quicker ARM converts to device trees the easier the task will be.
On 04/03/2011 06:19 PM, Somebody in the thread at some point said:
Hi -
On the other hand, device trees aren't a static solution. For example, they haven't come up with a generic mechanism for completely describing things like clock and power management domains. But let's figure out schemes for describing these problem areas and fix the device tree model. As more architectures utilize device trees these problem areas should get figured out and the issues will go away. So if you find yourself adding thousands of lines of board specific code to the kernel, something is probably wrong in the device tree generic hardware description, go fix it.
Of course, we should be totally clear here that adding "thousands of lines of board specific code" to the _bootloader_ would be no less of a sign something horrible had gone wrong.
This is an evolutionary process. Start off with selecting in-kernel device trees based on machine ID. Start off with describing the basic
Sounds right.
hardware in the device tree and remove the old kernel code that was building the description. Move on to device trees provided by the bootloader. After basic hardware description is converted move on to
Can you describe why code in the bootloader is a better place than code in the kernel early init? I mean if you go and look in say U-Boot sources, it's a lot less beautiful and elegant than kernel code.
-Andy
On Sun, Apr 3, 2011 at 1:26 PM, Andy Green andy@warmcat.com wrote:
On 04/03/2011 06:19 PM, Somebody in the thread at some point said:
hardware in the device tree and remove the old kernel code that was building the description. Move on to device trees provided by the bootloader. After basic hardware description is converted move on to
Can you describe why code in the bootloader is a better place than code in the kernel early init? I mean if you go and look in say U-Boot sources, it's a lot less beautiful and elegant than kernel code.
You shouldn't just move the init code into uboot, instead you should figure out how to encode the hardware specific information into the device tree using a generic schema. Then have code in the kernel that knows how to interpret this generic data.
Matt and I may differ a little on the responsibilities of the bootloader. I think it should do the bare minimum needed to get the kernel loaded and to feed it a device tree. Matt has it doing more like setting up all of the pin configurations. But I don't have a strong opinion on this.
The way things are set up currently I also don't believe you can remove all board specific code from the kernel. The goal with device trees is to start hacking away at the board specific code and make the piles of it smaller. In the future we may be able to remove it all like on the PC platform.
On 04/03/2011 09:09 PM, Somebody in the thread at some point said:
Hi -
Can you describe why code in the bootloader is a better place than code in the kernel early init? I mean if you go and look in say U-Boot sources, it's a lot less beautiful and elegant than kernel code.
You shouldn't just move the init code into uboot, instead you should figure out how to encode the hardware specific information into the device tree using a generic schema. Then have code in the kernel that knows how to interpret this generic data.
Sounds good.
Matt and I may differ a little on the responsibilities of the bootloader. I think it should do the bare minimum needed to get the kernel loaded and to feed it a device tree. Matt has it doing more like setting up all of the pin configurations. But I don't have a strong opinion on this.
Well, in the iMX31 case there is only a 2KByte SRAM on-die that gets auto-filled by the ROM. In the case of SD Card boot which I implemented - the bootloader is on the SD Card at a defined place - it means you need to fit your SD init, "mmc stack" and mmc host driver inside the 2KBytes so it can load the rest of the bootloader.
That works fine but it will never be implementable with DT in bootloader. I don't mean it as a problem for DT I mean that it seems we all need to maybe challenge our assumptions a bit in the face of this new stuff being introduced. Matt is assuming the bootloader will consume DT data, if it does do so it will only ever need a small fraction of it and doing so at all is optional, since no bootloader does it today.
Specifically: the bootloader prerequisites for accessing the DT data may entirely mandate private bootloader knowledge of ALL the information it would have required from DT. For example, bootloader must init SDRAM, knowing the size and start address and memory types, must init the storage device, must contain a stack for accessing data on the storage device to even get at the DT information stored in files on the storage device... what's actually left to do for the bootloader using the DT information? It could go straight to getting the kernel from the same storage and boot that with internal DT tables and leave the bootloader blissfully unaware of DT info at no cost in terms of increasing the hardcoded knowledge in the bootloader.
The way things are set up currently I also don't believe you can remove all board specific code from the kernel. The goal with device trees is to start hacking away at the board specific code and make the piles of it smaller. In the future we may be able to remove it all like on the PC platform.
Sure.
-Andy
On Mon, Apr 4, 2011 at 4:21 AM, Andy Green andy@warmcat.com wrote:
Matt and I may differ a little on the responsibilities of the bootloader. I think it should do the bare minimum needed to get the kernel loaded and to feed it a device tree. Matt has it doing more like setting up all of the pin configurations. But I don't have a strong opinion on this.
Well, in the iMX31 case there is only a 2KByte SRAM on-die that gets auto-filled by the ROM. In the case of SD Card boot which I implemented - the bootloader is on the SD Card at a defined place - it means you need to fit your SD init, "mmc stack" and mmc host driver inside the 2KBytes so it can load the rest of the bootloader.
That works fine but it will never be implementable with DT in bootloader. I don't mean it as a problem for DT I mean that it seems we all need to maybe challenge our assumptions a bit in the face of this new stuff being introduced. Matt is assuming the bootloader will consume DT data, if it does do so it will only ever need a small fraction of it and doing so at all is optional, since no bootloader does it today.
Specifically: the bootloader prerequisites for accessing the DT data may entirely mandate private bootloader knowledge of ALL the information it would have required from DT. For example, bootloader must init SDRAM, knowing the size and start address and memory types, must init the storage device, must contain a stack for accessing data on the storage device to even get at the DT information stored in files on the storage device... what's actually left to do for the bootloader using the DT information? It could go straight to getting the kernel from the same storage and boot that with internal DT tables and leave the bootloader blissfully unaware of DT info at no cost in terms of increasing the hardcoded knowledge in the bootloader.
Bootloaders don't need to use the DTs, but they need to provide the DTs to the OS. This gets implemented in lots of different ways. In bootloaders I worked on we wrote custom code in the bootloader to initialize everything. This code knew nothing of the DT. I wrote the DTs to match the specific hardware. They were compiled and merged into the bootloader binary. The bootloader didn't know how to read them, it just knew how to pass them onto the OS. But other bootloader implementations do this differently. Some of those implementations (open firmware) actually build the DTs on the fly. It's your choice on how the DT gets created.
The DT is just supposed to be a generic description of the hardware that is provided to the OS so that the OS will know what hardware is there when that hardware can't be probed.
So in this MX31 case did your 2K of code directly boot the kernel or was it a two stage boot process? If it is two stage the second stage should load the DT and pass it to the kernel. If it is booting direct to the kernel then the kernel will need the DT image built into it.
The two stage boot process is the more general solution. What happens when we get 6,000 variants of hardware like we have with PCs? It is unreasonable for the kernel to load 6,000 different device trees and select one. Instead you want the bootloader to provide the DT and then the kernel just interprets it. If you don't do this then something like Ubuntu can't ship generic kernels. What if Ubuntu has shipped their kernel before your hardware is released? It can't possibly have the DT for your hardware built into it.
On 04/04/2011 12:12 PM, Somebody in the thread at some point said:
Hi -
Specifically: the bootloader prerequisites for accessing the DT data may entirely mandate private bootloader knowledge of ALL the information it would have required from DT. For example, bootloader must init SDRAM, knowing the size and start address and memory types, must init the storage device, must contain a stack for accessing data on the storage device to even get at the DT information stored in files on the storage device... what's actually left to do for the bootloader using the DT information? It could go straight to getting the kernel from the same storage and boot that with internal DT tables and leave the bootloader blissfully unaware of DT info at no cost in terms of increasing the hardcoded knowledge in the bootloader.
Bootloaders don't need to use the DTs
Alright.
but they need to provide the DTs to the OS.
It "can" provide a DT to the OS; it does not need to do it in the same sense that the bootloader truly needs to set up SDRAM to bring in Linux at all. You may choose to have it do it but that's slightly different.
Given the complexity and configurability of OMAP ROM, which has an MMC stack and FAT parser built-in, it's even within sight there will be NO bootloader, you just stick a header on your kernel image which is only data interpreted by the ROM that tells it how to set up SDRAM and so on, and tells it the length of the following kernel image to pull in. In this case, the kernel would take care of business itself.
This gets implemented in lots of different ways. In bootloaders
I worked on we wrote custom code in the bootloader to initialize everything. This code knew nothing of the DT. I wrote the DTs to match the specific hardware. They were compiled and merged into the bootloader binary. The bootloader didn't know how to read them, it just knew how to pass them onto the OS. But other bootloader implementations do this differently. Some of those implementations (open firmware) actually build the DTs on the fly. It's your choice on how the DT gets created.
The DT is just supposed to be a generic description of the hardware that is provided to the OS so that the OS will know what hardware is there when that hardware can't be probed.
So in this MX31 case did your 2K of code directly boot the kernel or was it a two stage boot process? If it is two stage the second stage should load the DT and pass it to the kernel. If it is booting direct to the kernel then the kernel will need the DT image built into it.
It was two-stage but in one binary; the first 2K of the bootloader had these critical bits placed there by linker script. It sets up SDRAM and pulls the whole of the same bootloader image into SDRAM and runs that. The post-2K part has useful things like VFAT and ext2+ parsers that enable getting ahold of the kernel image in a reasonable way via filesystem.
The point about it was to establish the understanding that we cannot eliminate the considerable amount of required hardcoded magic from the bootloader by using DT, and it follows that there is no requirement for a DT parser in a minimal bootloader; it seems we reached agreement about that.
The two stage boot process is the more general solution. What happens when we get 6,000 variants of hardware like we have with PCs? It is unreasonable for the kernel to load 6,000 different device trees and select one. Instead you want the bootloader to provide the DT and then the kernel just interprets it. If you don't do this then something like Ubuntu can't ship generic kernels. What if Ubuntu has shipped their kernel before your hardware is released? It can't possibly have the DT for your hardware built into it.
Well, I see 134 defconfigs in arch/arm/configs, but it's reasonable to take it to an extreme to see if it scales.
You are right as it stands bloating the kernel with 6K discrete DTBs with nothing in common won't fly. However if you look at arch/arm there is a great deal of platform-level commonality to be found there. At least, no less commonality and presumably much more should be possible between the device trees, eg, the overo / beagle case should be sharing 90%+ of their tree one way or another, same with panda / sdp4430 and so on. Eg, for identical boards adding another should boil down to pointing to the already-introduced parent and just setting a new name field.
I don't know much about DT but I assume that reflects the source of it that you basically do like #include "omap4430.devtree" and meddle with adding the unique bits of your board, and it's not the case there is a massive panda.devtree or whatever that has all omap4430 asset details copied in there for every board.
-Andy
On Mon, 4 Apr 2011, Andy Green wrote:
Well, in the iMX31 case there is only a 2KByte SRAM on-die that gets auto-filled by the ROM. In the case of SD Card boot which I implemented - the bootloader is on the SD Card at a defined place - it means you need to fit your SD init, "mmc stack" and mmc host driver inside the 2KBytes so it can load the rest of the bootloader.
That works fine but it will never be implementable with DT in bootloader. I don't mean it as a problem for DT I mean that it seems we all need to maybe challenge our assumptions a bit in the face of this new stuff being introduced. Matt is assuming the bootloader will consume DT data, if it does do so it will only ever need a small fraction of it and doing so at all is optional, since no bootloader does it today.
Specifically: the bootloader prerequisites for accessing the DT data may entirely mandate private bootloader knowledge of ALL the information it would have required from DT. For example, bootloader must init SDRAM, knowing the size and start address and memory types, must init the storage device, must contain a stack for accessing data on the storage device to even get at the DT information stored in files on the storage device... what's actually left to do for the bootloader using the DT information? It could go straight to getting the kernel from the same storage and boot that with internal DT tables and leave the bootloader blissfully unaware of DT info at no cost in terms of increasing the hardcoded knowledge in the bootloader.
I don't think it is a general assumption that any bootloaders would become users of DT. If anything, DT data used to be _created_ by the bootloader, not the reverse. If a bootloader is generic enough at runtime to require DT data to function, it is not a bootloader anymore but an OS.
Nicolas
On 3 April 2011 21:44, Andy Green andy@warmcat.com wrote:
On 04/03/2011 05:05 PM, Somebody in the thread at some point said:
Above everything else, I definitely like to see DT get done first, it's essential for SoC these days.
All I am suggesting is bind the DTs in the kernel. That's easier and faster than the alternatives and there is a lot less to go wrong and make DT a difficult experience for users.
I second.
Without doubt some mechanism to pass board configuration data to the kernel is desirable .... the recent issue of being able to pass the smsc95xx's missing mac addr to the kernel is one such use. Rather, we just might be able to do away with the EEPROMs for such purposes (?) At the same time, Linux shouldn't depend on support from bootloaders much more than is currently provided. Otherwise what do we say... ARM Linux needs, say, U-Boot to run? And let us count upon neither the number of bootloaders in existence nor the ease of making them support DT.
So far Nicolas' idea of appending config data to the kernel image sounds good, or maybe a simplified version of DT that doesn't go deeper than board files.
On Sun, Apr 3, 2011 at 11:26 AM, Jaswinder Singh jaswinder.singh@linaro.org wrote:
John Bonesio has a patch that does exactly this.
g.
On 04/03/2011 04:25 PM, Somebody in the thread at some point said:
Hi -
Think of the DT as a way of probing a bus that doesn't have probe capabilities. This gives you a way to dynamically load drivers from initrd if you want. For example we dynamically loaded drivers for I2C devices that were previously always built in.
Understood.
I haven't been following the ARM DT work, but a scheme that might work on ARM is to build DTs into the kernel corresponding to each ARM machine ID supported by the kernel image. Use the machine ID to
Sounds reasonable.
The bootloader must bring up enough prerequisites for the kernel to run, so it has to 'know' / identify what it is running on enough at least to configure the SDRAM and operate the source that is providing the kernel image, eg, right MMC controller code at the right place in memory. If it absolutely must know that much, it's reasonable that it passes in the machine ID ATAG (or otherwise tell the kernel which DT is appropriate) like it does at the moment, it kind of had to know that much to do the prerequisites.
select the correct one and discard the rest. As ARM bootloaders are modified to directly support DTs, slowly get rid of the in-kernel DTs.
I didn't really understand the value of ending up with the bootloader in charge of providing DT data, with the DT data sitting as external files. It seems to me that since the kernel is the guy that is consuming that data, it's the kernel that it should be bound into. That must be particularly true when it's such early days for DT and the code parsing it and the data in the trees will be in flux for quite a while, and getting them out of step versioning-wise will cause hard to reproduce or understand failures for the user.
I just don't see people tweaking DT tables by package update while leaving the kernel package unchanged. I do see wrong-version DT tables getting pulled in, bootloader environments pointing to the wrong place, NAND or default environments coming in and causing DT load failures, and serious issues coming from trying to manage boot.scr via package. Nor should we want people writing their own customized boot.scr to point the same kernel at different DT tables, as this too is going to make bug reports nondeterministic.
(It was suggested also on that lkml thread to cast DT stuff out into a module, but that's not going to fly, since the information that will presumably be part of the DT (where the MMC controller is, which mux settings the balls need, its clock tree) is exactly what you need before you can load a module off MMC in the first place. So it seems to me it needs to be built in.)
A key concept: think of the DT as a way of probing a bus that doesn't have probe capabilities. You can argue that C code can produce the same effect as DTs, which is true. But that board-specific setup code tends to grow and stick its fingers into everything. DTs mitigate that simply because they aren't C code. DTs encourage the development of generic device setup code instead of one-off platform-specific code.
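As a sketch of what that generic setup looks like in practice: a driver binds to a DT node by its "compatible" string, and no board file has to register the device by hand. The binding name and driver below are made up for illustration:

    /* Illustration: DT-driven binding instead of board-file registration. */
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/of.h>

    static const struct of_device_id foo_of_match[] = {
            { .compatible = "acme,foo-ctrl" },      /* hypothetical binding */
            { }
    };

    static int foo_probe(struct platform_device *pdev)
    {
            /* Registers, IRQs and clocks come from the matched DT node,
             * not from a per-board platform_data table. */
            return 0;
    }

    static struct platform_driver foo_driver = {
            .probe  = foo_probe,
            .driver = {
                    .name           = "foo-ctrl",
                    .of_match_table = foo_of_match,
            },
    };

    static int __init foo_driver_init(void)
    {
            return platform_driver_register(&foo_driver);
    }
    module_init(foo_driver_init);

The same driver then works on any board whose DT carries that node, with no per-board C changes.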
I'm only arguing against sticking fragile, changing data that is critical for operation on a platform outside of the kernel, and arguing for binding the intended version of the data to the intended version of the code that's going to use it, in-kernel. If it's going to become uber-critical to any kind of kernel operation, it had better be under the auspices of the kernel itself.
-Andy
On Sun, 3 Apr 2011, Andy Green wrote:
I just don't see people tweaking DT tables by package update while leaving the kernel package unchanged. I do see wrong-version DT tables getting pulled in, bootloader environments pointing to the wrong place, NAND or default environments coming in and causing DT load failures, and serious issues coming from trying to manage boot.scr via package. Nor should we want people writing their own customized boot.scr to point the same kernel at different DT tables, as this too is going to make bug reports nondeterministic.
In a perfect world, the DT data would be tied to the hardware and provided by the bootloader to the kernel. It would be produced by hardware vendors who would only have to describe their hardware in this OS-independent abstraction without having to write any kernel code. And it would allow an existing kernel binary that was distributed prior to the availability of the hardware to boot unmodified on that hardware.
This is a pretty noble goal of course. But I'm skeptical. I'm afraid that the reality is just too messy for this to be achievable, except maybe for the easy cases. And the easy cases are not worth all this trouble.
But I don't care much if this is never achieved. Booting an existing binary on future hardware is not expected to work at the moment either. At least DTs do provide the opportunity to force some consolidation and cleanups, which is a sufficient reason already to go for it.
What I fear is the situation where this mechanism designed to make things simpler would actually make them even more complicated. Because the simple fact of distributing knowledge and responsibilities across multiple entities (bootloader, DT representation, kernel) is multiplying the opportunities for bugs, version skews, and interpretation differences. A system that has to rely on externally provided data is always going to be way more vulnerable and error prone than a self contained system.
I would like to be proven wrong of course. But in the mean time, I'm making sure that the DT information in the bootloader can be user replaced, or if need be overridden with a transparent mechanism, which is to simply append the DT of your choice to the kernel image.
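The kernel side of that "append the DT to the image" mechanism can be as simple as checking for the flattened-tree magic right after the image. A sketch, assuming a little-endian CPU; the function names are invented, only the 0xd00dfeed magic is part of the flattened-tree format:

    #include <stdint.h>
    #include <stddef.h>

    #define FDT_MAGIC 0xd00dfeedu   /* big-endian magic at the start of a .dtb */

    static uint32_t swab32(uint32_t v)
    {
            return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
                   ((v & 0x00ff0000u) >> 8)  | ((v & 0xff000000u) >> 24);
    }

    /* If the user ran "cat zImage board.dtb > zImage-with-dtb", the blob
     * starts immediately after the kernel image. */
    static const void *find_appended_dtb(const void *image_end)
    {
            const uint32_t *p = image_end;

            return swab32(*p) == FDT_MAGIC ? p : NULL;  /* NULL: fall back to ATAGs */
    }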
(It was suggested also on that lkml thread to cast DT stuff out into a module, but that's not going to fly, since the information that will presumably be part of the DT (where the MMC controller is, which mux settings the balls need, its clock tree) is exactly what you need before you can load a module off MMC in the first place. So it seems to me it needs to be built in.)
Forget about DT in modules. That is nonsense.
I don't think having it built in is a good idea either. In sync, yes. But if it can be loosely coupled, then at least the idea of making it independent from the kernel has a chance of being tested. And once we have the ability to boot the same kernel binary on multiple boards, the kernel should in theory be a bit smaller than if it has to carry the information for all those boards compiled in, as is the case now.
Nicolas
On 04/04/2011 03:22 AM, Somebody in the thread at some point said:
Hi -
In a perfect world, the DT data would be tied to the hardware and provided by the bootloader to the kernel. It would be produced by hardware vendors who would only have to describe their hardware in this OS-independent abstraction without having to write any kernel code. And it would allow an existing kernel binary that was distributed prior to the availability of the hardware to boot unmodified on that hardware.
Fair enough, having the DT in EEPROM would indeed be cool, although it doesn't remove the need for the bootloader to understand certain things implicitly (such as where the EEPROM holding the DT data is mapped on that board, how to work the SPI interface on that SoC, and how to bring up its SDRAM to have space for all this...).
If it knows that much about the board by magic, is it not easier to leave the EEPROM reading to kernel code?
This is a pretty noble goal of course. But I'm skeptical. I'm afraid that the reality is just too messy for this to be achievable, except maybe for the easy cases. And the easy cases are not worth all this trouble.
Yeah.
But I don't care much if this is never achieved. Booting an existing binary on future hardware is not expected to work at the moment either. At least DTs do provide the opportunity to force some consolidation and cleanups, which is a sufficient reason already to go for it.
Sounds right.
What I fear is the situation where this mechanism designed to make things simpler would actually make them even more complicated. Because the simple fact of distributing knowledge and responsibilities across multiple entities (bootloader, DT representation, kernel) is multiplying the opportunities for bugs, version skews, and interpretation differences. A system that has to rely on externally provided data is always going to be way more vulnerable and error prone than a self contained system.
Absolutely. At least we can decide to avoid having the U-Boot environment involved, which is the current plan AIUI.
I would like to be proven wrong of course. But in the mean time, I'm making sure that the DT information in the bootloader can be user replaced, or if need be overridden with a transparent mechanism, which is to simply append the DT of your choice to the kernel image.
I still didn't see why DT and bootloader are so closely associated in peoples' minds.
I can imagine one reason, which is that in the case a kernel supports many boards, you'd have this fat kernel image with many discrete DT tables embedded in it. Maybe the thinking is that casting the tables out of the kernel into what will AIUI be a set of files in a VFAT filesystem, and having the bootloader pick one, gets around that.
Another way to solve it, would be to encode the set of device trees supported in a way that removes redundancy. Then the additional data needed to resolve a generic OMAP3 device table into overo or beagle or any similar board would be a small increment. The encoded table can be initdata and gotten rid of after being rendered into the DT required.
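A very rough sketch of that "small increment" idea, using libfdt-style calls to apply a board delta on top of a generic SoC tree; the delta structure, the example node and property values, and the function name are all invented for illustration:

    /* Hypothetical per-board delta applied over a generic SoC tree. */
    #include <libfdt.h>
    #include <stddef.h>

    struct dt_delta {
            const char *node_path;      /* node to touch, e.g. "/memory" */
            const char *property;       /* property to add or override */
            const void *value;
            int         len;
    };

    /* Could be __initdata and discarded once the final tree is built. */
    static const struct dt_delta board_delta[] = {
            { "/memory", "reg", "\x80\x00\x00\x00\x08\x00\x00\x00", 8 },  /* example value only */
            { NULL, NULL, NULL, 0 }
    };

    static int apply_dt_delta(void *fdt, const struct dt_delta *d)
    {
            for (; d->node_path; d++) {
                    int node = fdt_path_offset(fdt, d->node_path);
                    int err;

                    if (node < 0)
                            return node;
                    err = fdt_setprop(fdt, node, d->property, d->value, d->len);
                    if (err < 0)
                            return err;
            }
            return 0;
    }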
I don't think having it built-in is a good idea either. In sync, yes.
"In sync" would practically mean the DT tables being part of the kernel source tree if I understood it, which I think is the right direction.
-Andy
On Mon, Apr 4, 2011 at 4:54 AM, Andy Green andy@warmcat.com wrote:
Another way to solve it, would be to encode the set of device trees supported in a way that removes redundancy. Then the additional data needed to resolve a generic OMAP3 device table into overo or beagle or any similar board would be a small increment. The encoded table can be initdata and gotten rid of after being rendered into the DT required.
Grant, this is an interesting idea...
Suppose Linux develops a standardized DT format for describing things like clock and power domains. This domain data is then acted on by generic code in the kernel. The domain data is CPU specific, not board specific.
It would make sense to me to not put this data into the board specific device trees. Board specific device trees include the CPU identifier. During the Linux boot process we could look at the CPU identifier and then expand the in-memory DT with CPU specific info like the clock and power domains.
Since each CPU variant would expand into the corresponding nodes, we can now write generic code that acts on this data in a manner that is not bound to the specific CPU. Keeping data like this out of the board-specific DT makes board DTs easier to write. There's nothing to be gained by adding dozens of CPU-specific nodes to a board-level device tree.
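A sketch of how that expansion step might be keyed; here the SoC is picked by a compatible string for simplicity, though the text above suggests the CPU identifier would do. The table, fragment and helper below are hypothetical:

    /* Hypothetical mapping from SoC identifier to a DT fragment holding
     * the clock/power domain nodes for that SoC family. */
    #include <stddef.h>
    #include <string.h>

    struct soc_fragment {
            const char *compatible;     /* e.g. "ti,omap3430" */
            const void *fragment;       /* flattened fragment to graft in */
            size_t      size;
    };

    static const unsigned char omap3_domains[] = { 0 };   /* placeholder */

    static const struct soc_fragment soc_fragments[] = {
            { "ti,omap3430", omap3_domains, sizeof(omap3_domains) },
            { NULL, NULL, 0 }
    };

    /* Pick the fragment matching the SoC named by the board DT; the
     * caller would then merge it into the in-memory tree. */
    static const struct soc_fragment *soc_fragment_find(const char *soc_compat)
    {
            const struct soc_fragment *f;

            for (f = soc_fragments; f->compatible; f++)
                    if (strcmp(f->compatible, soc_compat) == 0)
                            return f;
            return NULL;
    }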
* And very hardware specific code moved out to a controllable place, i.e. something like BIOS
Sorry, but I must vehemently disagree here. BIOSes are a problem for Open Source, not a solution. On x86, the kernel uses BIOS services only when there is simply no other choice, because the BIOS is too often buggy and it is more difficult and risky to update than the kernel.
If you rely on the BIOS to do X, it will work when the BIOS gets it right. If you do X yourself, it will work whether or not the BIOS gets it right. This means that if there's even one BIOS version you have to deal with out there that gets X wrong, you have to do it yourself, and then there is no incentive to rely on the BIOS even in the cases where it does get it right, so as to maintain only one code path.
And relying on a BIOS could make many kernel improvements impossible to implement as the execution context assumed by the BIOS may not be guaranteed anymore (think about UP vs SMP, different preemption modes, different realtime kernel modes, etc.) And of course it is impossible to anticipate what execution context and requirements the kernel might need in the future, hence this can't be provisioned for (and much less validated) into the BIOS design in advance.
Not BIOS exactly, since we don't have BIOS for ARM. The problem with BIOS is that we don't have control, and when it's wrong we have nowhere to fix it.
There are some ways out - e.g. if U-Boot is standardized, we can definitely move part of the non-generic code there. Or I'm actually still thinking about using the kernel as a bootloader, though I'm not very clear on how, and we may still end up placing crap there... :-)
On Mon, 4 Apr 2011, Eric Miao wrote:
* And very hardware specific code moved out to a controllable place, i.e. something like BIOS
Sorry, but I must vehemently disagree here. BIOSes are a problem for Open Source, not a solution. On x86, the kernel uses BIOS services only when there is simply no other choice, because the BIOS is too often buggy and it is more difficult and risky to update than the kernel.
If you rely on the BIOS to do X, it will work when the BIOS gets it right. If you do X yourself, it will work whether or not the BIOS gets it right. This means that if there's even one BIOS version you have to deal with out there that gets X wrong, you have to do it yourself, and then there is no incentive to rely on the BIOS even in the cases where it does get it right, so as to maintain only one code path.
And relying on a BIOS could make many kernel improvements impossible to implement as the execution context assumed by the BIOS may not be guaranteed anymore (think about UP vs SMP, different preemption modes, different realtime kernel modes, etc.) And of course it is impossible to anticipate what execution context and requirements the kernel might need in the future, hence this can't be provisioned for (and much less validated) into the BIOS design in advance.
Not BIOS exactly, since we don't have BIOS for ARM. The problem with BIOS is that we don't have control, and when it's wrong we have nowhere to fix it.
There are some ways out - e.g. if U-Boot is standardized, we can definitely move part of the non-generic code there. Or I'm actually still thinking about using the kernel as a bootloader, though I'm not very clear on how, and we may still end up placing crap there... :-)
BIOS/firmware/bootloader. It's all the same thing when it comes to updating them in deployed hardware: it is harder and much riskier than updating the kernel or some DT data in the bootloader's environment. And there is a real danger of version skews that were never tested or planned for.
Standardizing on U-Boot is not a given. Many real devices out there are not using it.
I still like the idea of using the kernel as a bootloader. But once you've put the crap in there, some people will be tempted to simply use the same working crap in the real kernel. Hence better make that crap into something palatable from the start.
Nicolas
On Mon, Apr 4, 2011 at 8:50 AM, Nicolas Pitre nicolas.pitre@linaro.org wrote:
On Mon, 4 Apr 2011, Eric Miao wrote:
* And very hardware specific code moved out to a controllable place, i.e. something like BIOS
Sorry, but I must vehemently disagree here. BIOSes are a problem for Open Source, not a solution. On x86, the kernel uses BIOS services only when there is simply no other choice, because the BIOS is too often buggy and it is more difficult and risky to update than the kernel.
If you rely on the BIOS to do X, it will work when the BIOS gets it right. If you do X yourself, it will work whether or not the BIOS gets it right. This means that if there's even one BIOS version you have to deal with out there that gets X wrong, you have to do it yourself, and then there is no incentive to rely on the BIOS even in the cases where it does get it right, so as to maintain only one code path.
And relying on a BIOS could make many kernel improvements impossible to implement as the execution context assumed by the BIOS may not be guaranteed anymore (think about UP vs SMP, different preemption modes, different realtime kernel modes, etc.) And of course it is impossible to anticipate what execution context and requirements the kernel might need in the future, hence this can't be provisioned for (and much less validated) into the BIOS design in advance.
Not BIOS exactly, since we don't have BIOS for ARM. The problem with BIOS is that we don't have control, and when it's wrong we have nowhere to fix it.
There are some ways out - e.g. if U-Boot is standardized, we can definitely move part of the non-generic code there. Or I'm actually still thinking about using the kernel as a bootloader, though I'm not very clear on how, and we may still end up placing crap there... :-)
BIOS/firmware/bootloader. It's all the same thing when it comes to updating them in deployed hardware: it is harder and much riskier than updating the kernel or some DT data in the bootloader's environment. And there is a real danger of version skews that were never tested or planned for.
+1