Hi there. We've been having a discussion in the Toolchain WG regarding Thumb-1 improvements. I want to decline doing them as I've assumed that Linaro is focused on the Cortex-A series, but I can't find that written down anywhere.
May I limit the Toolchain WG to currently or nearly shipping Cortex-A implementations? This means:
- Cortex-A profile only
- ARMv7 only
- ARM and Thumb-2 instruction sets
- With or without NEON
- With either VFPv3 D16 or D32
- With or without SMP
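Spelled out as compiler flags, that configuration corresponds roughly to the following sketch (the variable names are invented here, and softfp is an assumed float ABI; NEON implies the D32 register file):

```shell
# Illustrative GCC flag sets for the configuration above; the names and
# the softfp float ABI are assumptions, not Linaro's official settings.
V7_THUMB2_NEON="-march=armv7-a -mthumb -mfpu=neon -mfloat-abi=softfp"
V7_ARM_VFP_D16="-march=armv7-a -marm -mfpu=vfpv3-d16 -mfloat-abi=softfp"
# Example use (needs an ARM cross compiler installed):
# arm-linux-gnueabi-gcc $V7_THUMB2_NEON -O2 -c foo.c
echo "$V7_THUMB2_NEON"
```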
We will try to do no harm to other architectures or earlier ARM versions. The Thumb-2 routines may be applicable to the Cortex-M and Cortex-R series but we will not optimise for them.
I'd like Linaro to state this explicitly in the next round. https://wiki.linaro.org/Linaro1011/TechnicalRequirements defines a 'Standard ARMv7 Configuration' but there's no higher level statement justifying it, no statement restricting us to it, and it includes ARM, Thumb-2, and Thumb-1.
-- Michael
It is, see the original requirements. You have it correct, ARMv7A + vfp + neon...
Dave
Sent from my iPhone
On 31 Aug 2010, at 23:03, Michael Hope michael.hope@linaro.org wrote:
On Wednesday 01 September 2010, Michael Hope wrote:
We will try to do no harm to other architectures or earlier ARM versions. The Thumb-2 routines may be applicable to the Cortex-M and Cortex-R series but we will not optimise for them.
I'd like Linaro to state this explicitly in the next round. https://wiki.linaro.org/Linaro1011/TechnicalRequirements defines a 'Standard ARMv7 Configuration' but there's no higher level statement justifying it, no statement restricting us to it, and it includes ARM, Thumb-2, and Thumb-1.
I think there are two aspects to this:
On the one hand, we need to improve the code foremost for new CPUs, looking forward, so the latest generation of shiny high-end hardware is what matters the most and needs to be the primary target. Today's high end is tomorrow's mainstream, so sooner or later everyone will benefit from this.
On the other hand, I think we need to be relevant and provide code that everyone can use. The market today mainly consists of stuff that's not the primary focus, like ARM926 or some non-MMU cores. Refusing to do a simple fix because it's not relevant for Cortex-A8/A9 will just manage to piss off people [1].
Obviously there has to be a middle ground. We're building the binary packages for the configuration Dave mentioned (v7A/Neon), but IMHO that shouldn't prevent anyone from rebuilding it with our tool chain without having to make significant changes. If there are patches readily available for stuff that's not our primary focus (thumb1, non-cortex v7A CPUs, vfp without neon, ...), I'd say we should still keep them or get them upstream.
Arnd
On Thu, Sep 2, 2010 at 11:56 AM, Arnd Bergmann arnd@arndb.de wrote:
As an embedded developer I'd like to see a standardized tool chain for building on most ARM architectures. There are at least two groups of users for this tool chain - ARM based PCs and embedded systems. There are dozens of various tool chain build systems for ARM. Every time I get a new embedded dev board I have to build yet another ARM tool chain to match what the accompanying software expects. This is a significant hurdle to new developers who may not have fast machines. Some of the people I've worked with needed 24hrs to build a tool chain. Let's get a standardized tool chain for the older ARM chips into a distribution to stop this needless proliferation.
[1] http://www.joe-ks.com/archives_oct2006/ItsNotMyJob.htm
linaro-dev mailing list linaro-dev@lists.linaro.org http://lists.linaro.org/mailman/listinfo/linaro-dev
Agree, whilst v7A is our priority, we need to 'do the right thing' for everyone.
Dave
Sent from my iPhone
On 2 Sep 2010, at 18:17, Jon Smirl jonsmirl@gmail.com wrote:
On Thu, Sep 02, 2010, Jon Smirl wrote:
As an embedded developer I'd like to see a standardized tool chain for building on most ARM architectures. There are at least two groups of users for this tool chain - ARM based PCs and embedded systems. There are dozens of various tool chain build systems for ARM. Every time I get a new embedded dev board I have to build yet another ARM tool chain to match what the accompanying software expects. This is a significant hurdle to new developers who may not have fast machines. Some of the people I've worked with needed 24hrs to build a tool chain. Let's get a standardized tool chain for the older ARM chips into a distribution to stop this needless proliferation.
I'd like to understand your use cases to make sure we're on track to cover them. First, we're trying to maintain a toolchain source tree which is adequately patched; that's mostly launchpad.net/gcc-linaro right now. Second, we're integrating that into the native Ubuntu toolchain. Third, we're providing a cross-toolchain to install in Ubuntu environments. The latter two are built from the same source, which includes the gcc-linaro tree.
Is this what you're looking for, or do you need more? What were the specific cases you experienced that required a different toolchain?
Things I can think of, but I don't know how important they are:
- being able to easily change the default toolchain build flags (how do you get the toolchain? which flags do you use?)
- being able to easily drop patches into the toolchain (how do you get the toolchain? which kind of patches?)
- being able to roll your own toolchain packages from another toolchain tree
- providing RPM packages for native and cross toolchains
- providing tarball "packages" with "standalone" linux binaries, similar to CS toolchain downloads
I think we have partial answers for some of the above things, but many we dismissed. Would love to hear which ones you think are important for your specific use case (tell us what you do!)
On Thu, Sep 2, 2010 at 2:32 PM, Loïc Minier loic.minier@linaro.org wrote:
This question should be asked on a list with more embedded ARM integrators, like the linux-arm kernel list or the open embedded list. I've forwarded it to the Pengutronix developers. I'm an end user of the tools; the bigger picture is to get the companies putting together BSPs for embedded ARM systems to stop rolling their own compilers. Getting a tool chain into Ubuntu with broad target support should be enough to get the ball rolling.
What I'd like to do is install a pre-built cross tool chain (x86 to ARM) that works with the common CPU archs. Currently I am working with v4t, 926, ARM11, etc. Right now I have to have a separate build environment for every dev system I am using. Some of these systems run Linux and some are too small.
An unrelated thing that is missing is a universal configuration tool for putting together Linux images. Pengutronix has modified kconfig to do this but every dev system is different. The basic idea is a menu system that lets you pick what packages you want in your flash image. Open embedded does this with bitbake, etc, etc...
-- Loïc Minier
On Thu, Sep 02, 2010, Jon Smirl wrote:
What I'd like to do is install a pre-built cross tools chain (x86 to ARM) that works with the common CPU archs. Currently I am working with v4t, 926, ARM11, etc. Right now I have to have a separate build environment for every dev system I am using. Some of these systems run Linux and some are too small.
So if you consider the current cross-toolchains we're providing, they offer the -march=armv4t or -mcpu=arm926ej-s flags; is this good enough for you?
An unrelated thing that is missing is a universal configuration tool for putting together Linux images. Pengutronix has modified kconfig to do this but every dev system is different. The basic idea is a menu system that lets you pick what packages you want in your flash image. Open embedded does this with bitbake, etc, etc...
Hmm, that's an embedded distro you're looking for. Linaro doesn't aim at being a distribution, but I think mainstream distributions are becoming more and more embedded friendly, more and more "combinable". Eventually, the lines will blur, I'm sure. In the meantime, you should consider the current distributions like OpenEmbedded/Poky/Angstrom/Debian/Ubuntu/MeeGo etc. at face value for what they can offer: either a good (cross-)compilation framework, or a large amount of software, usually not both right now :-) We're working at providing more tools to cross-compile Debian/Ubuntu distros though.
Hi,
Loïc Minier wrote:
I'd like to understand your use cases to make sure we're on track to cover them. First, we're trying to maintain a toolchain source tree which is adequately patched; that's mostly launchpad.net/gcc-linaro right now. Second, we're integrating that into the native Ubuntu toolchain. Third, we're providing a cross-toolchain to install in Ubuntu environments. The latter two are built from the same source, which includes the gcc-linaro tree.
Sounds good.
Is this what you're looking for, or do you need more? What were the specific cases which you experienced which required different toolchain?
Things I can think of, but I don't know how important they are:
- being able to easily change the default toolchain build flags (how do
you get the toolchain? which flags do you use?)
In OSELAS.Toolchain, we have a set of flags integrated into each toolchain, so for example for arm-v4t-linux-gnueabi, it generates v4t code with software floating point without any further flags. I agree that it would be a goal to have only one multilib toolchain which can do everything, but we definitely need wrappers, because compiling a hello world should still be
$(GCC) hello.c -o hello
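A minimal sketch of such a wrapper (the wrapper and compiler names follow the OSELAS naming above but are otherwise hypothetical; the dry run uses echo as a stand-in compiler just to show the injected flags):

```shell
# Hypothetical wrapper, e.g. installed as arm-v4t-linux-gnueabi-gcc:
# it bakes the target defaults in so that '$(GCC) hello.c -o hello'
# does the right thing with no extra flags.
cat > arm-v4t-linux-gnueabi-gcc <<'EOF'
#!/bin/sh
REAL_GCC="${REAL_GCC:-arm-linux-gnueabi-gcc}"   # the real cross compiler
exec "$REAL_GCC" -march=armv4t -mfloat-abi=soft "$@"
EOF
chmod +x arm-v4t-linux-gnueabi-gcc
# Dry run with echo standing in for the compiler, to show the flags:
REAL_GCC=echo ./arm-v4t-linux-gnueabi-gcc hello.c -o hello
```

The dry run prints `-march=armv4t -mfloat-abi=soft hello.c -o hello`, which is exactly the command line the real cross compiler would receive.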
- being able to easily drop patches into the toolchain (how do you get the toolchain? which kind of patches?)
Well, nobody wants patches if you can avoid them. The question is what to do with things which cannot be brought into the upstream. It can sometimes take a very long time to convince upstream to accept certain patches. So we need patches in the toolchains for the foreseeable future. What we also need is better collaboration with the upstream maintainers.
For us, patches are also important if problems arise. With OSELAS.Toolchain, we have already had cases where toolchains had to be patched right in a project, and nobody had the time to wait until there was a new distro package available. Throwing in a patch and rebuilding is easy, but I think it is probably as easy to do that with Debian packages once that's the standard way.
Btw, the current work-in-progress patch queue from OSELAS.Toolchain is here:
http://git.pengutronix.de/?p=rsc/OSELAS.Toolchain.git;a=tree;f=patches;...
Unfortunately, it still contains several underdocumented patches (mainly from Gentoo and OpenEmbedded). For ARM it is only slightly tested with v4t and arm1136.
- being able to roll your own toolchain packages from another toolchain tree
- providing RPM packages for native and cross toolchains
I would be interested in whether it would be possible to have the cross toolchains so decoupled from the rest of the distro that they can be run on Debian/Ubuntu/RedHat/SuSE.
- providing tarball "packages" with "standalone" linux binaries,
similar to CS toolchain downloads
Would also be easy. Our customers quite often don't care about the development box at all, they just take what some random decision mechanism says (i.e. a project member who has already experience with distro x.y). So in order to come to a well defined environment, we usually install the toolchain in /opt and ptxdist does the rest.
On Thu, Sep 02, 2010 at 03:23:17PM -0400, Jon Smirl wrote:
This question should be asked on a list with more embedded ARM integrators, like the linux-arm kernel list or the open embedded list. I've forwarded it to the Pengutronix developers. I'm an end user of the tools; the bigger picture is to get the companies putting together BSPs for embedded ARM systems to stop rolling their own compilers.
I'm all for this goal. Building toolchains is a lot of useless effort. In the end, we are not toolchain people, but we still have to do this in order to get our work done.
Getting a tool chain into Ubuntu with broad target support should be enough to get the ball rolling.
Right. I would be even more happy to get these things into Debian as well.
An unrelated thing that is missing is a universal configuration tool for putting together Linux images. Pengutronix has modified kconfig to do this but every dev system is different. The basic idea is a menu system that lets you pick what packages you want in your flash image. Open embedded does this with bitbake, etc, etc...
I doubt it will be possible to reach consensus on this topic, because the requirements are too different. Let's start with toolchains and patch upstreaming - I think if these topics are solved, we have gained a lot.
rsc
+++ Robert Schwebel [2010-09-03 09:41 +0200]:
Hi,
In OSELAS.Toolchain, we have a set of flags integrated into each toolchain, so for example for arm-v4t-linux-gnueabi, it generates v4t code with software floating point without any further flags. I agree that it would be a goal to have only one multilib toolchain which can do everything, but we definitely need wrappers, because compiling a hello world should still be
$(GCC) hello.c -o hello
Yep, this is the thing you need to work well once you move to using distribution toolchains instead of 'rebuild for my target and my defaults'. It has annoyed me for years that this was not an easy thing to do. I am not a toolchain engineer either (so feel free to correct anything incorrect about my understanding below), but I am sat in a room with some now, which may prove helpful to making some progress on this issue in the Debian context.
The way gcc builds multilib currently does not support making one toolchain that supports lots of output options well, because you get a cross-product number of multilibs built for the options you are trying to support. That works well for two or three, but rapidly becomes madness (and toolchain build times get very long).
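The cross-product effect is easy to see with a toy count (the axes and numbers below are illustrative, not gcc's actual multilib matrix):

```shell
# Every independent option axis multiplies the number of library builds:
archs=3        # e.g. armv4t / armv5te / armv7-a
float_abis=2   # soft / softfp
isas=2         # arm / thumb
echo "$((archs * float_abis * isas)) multilib variants"   # prints: 12 multilib variants
```

Add one more axis (say, with/without NEON) and the count doubles again.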
CodeSourcery are working on improving this. Discussion here: http://gcc.gnu.org/ml/gcc/2010-01/msg00063.html
As you say, wrappers are the only practical way of setting defaults right now, although I'd really like to see the compiler grow some kind of config file you could set some defaults in, like normal software (or someone explain to me why that can't be done or isn't sensible).
I really don't see why one has to set target arch/CPU/preferred optimisations on _every_ command line, rather than in a config file.
Specs files help somewhat here but are sufficiently impenetrable not to be much use to developers who just want to set some defaults. They could work OK for distro-supplied 'gcc-settings' packages.
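For what it's worth, a sketch of that route: GCC's documented specs mechanism lets a file passed via -specs= append default options through *self_spec (the flags and file name here are examples only, and the commented invocation assumes a cross compiler is installed):

```shell
# Per-project default flags kept in a specs file rather than repeated on
# every command line; a leading '+' appends to the existing spec.
cat > arm-defaults.specs <<'EOF'
*self_spec:
+ -march=armv4t -mfloat-abi=soft
EOF
# Commented out because it needs the cross compiler installed:
# arm-linux-gnueabi-gcc -specs=arm-defaults.specs hello.c -o hello
```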
Wookey
Hi,
Things I can think of, but I don't know how important they are:
- being able to easily change the default toolchain build flags (how do you get the toolchain? which flags do you use?)
- being able to easily drop patches into the toolchain (how do you get the toolchain? which kind of patches?)
- being able to roll your own toolchain packages from another toolchain tree
- providing RPM packages for native and cross toolchains
- providing tarball "packages" with "standalone" linux binaries, similar to CS toolchain downloads
I think multilib support (or something which achieves the same result like a suitable specs file) needs to be added here.
The v7 libgcc in the linaro toolchain won't work for embedded developers with pre-v7 hardware, for example.
Cheers ---Dave
+++ Jon Smirl [2010-09-02 13:17 -0400]:
As an embedded developer I'd like to see a standardized tool chain for building on most ARM architectures. There are at least two groups of users for this tool chain - ARM based PCs and embedded systems. There are dozens of various tool chain build systems for ARM. Let's get a standardized tool chain for the older ARM chips into a distribution to stop this needless proliferation.
Emdebian has been providing the Debian gcc builds as cross-toolchains for Debian/Ubuntu for some years now. Did you try those but not find they met your needs, or did you not know about them?
I've been trying to tell embedded engineers not to build a new toolchain for every single project for some years now, but I've met a fair amount of resistance to the idea. It's been many years since I had to build one for my own purposes (I just used the emdebian ones), and it worked very well.
The biggest issue for people using newer hardware is probably currency (i.e. toolchains built from distro sources not having support for the latest chips), but for your above use-case of "standardized tool chain for the older ARM chips", that has been a solved problem for a long time IMHO.
Those toolchains are now moving into Ubuntu and Debian so you don't even have to add a separate repo, which solves the twin problems of installability (a separate repo can get out of sync) and exposure.
Combined with Linaro keeping things much closer to the bleeding edge I look forward to a time when people will normally try the distro cross-compiler first.
There are still issues with the difficulty of building one gcc to support more than a few target option combinations, and setting build defaults reliably without rebuilding, but hopefully once everyone stops building a new one every time these will get more attention.
Wookey
Hi Wookey,
On Thu, Sep 02, 2010 at 10:21:24PM +0100, Wookey wrote:
I've been trying to tell embedded engineers not to build a new toolchain for every single project for some years now, but I've met a fair amount of resistance to the idea.
Wearing my embedded linux developer hat, I really like your idea. We use Debian all over the place, and if it contains solid cross toolchains for the ARM/PowerPC/SH/Blackfin/x86 targets we are working with, I'm really happy with the idea that we wouldn't have to work on toolchains any more. It is a pain every time you have to do it; it sucks up a lot of time and the result is almost never something which can impress your customers.
However, we (mainly the group at Pengutronix that develops ptxdist) still work on OSELAS.Toolchain simply because the customers we work with do not all use Debian or Ubuntu but also SuSE and RedHat/Fedora. So in order to provide stable and reproducible board support packages, having toolchains that are independent of the distribution is quite important. The toolchains built from OSELAS.Toolchain install into the
/opt/OSELAS.Toolchain-<version>
hierarchy and are independent of the rest of the system. On the other hand, the toolchain is an "external" component for ptxdist, so if Debian and Ubuntu continue their world domination efforts, it may simply happen that over time the other distros become less important and we can tell the customers to either use Debian/Ubuntu or be on their own.
I would like to test what the Debian guys have available. Can you point me to the right entry point? There seem to be too many Debian cross efforts out there and it is difficult for people from the outside to find the right things. All I find in my apt-cache is this:
thebe:~# apt-cache policy gcc-4.3-arm-linux-gnu
gcc-4.3-arm-linux-gnu:
  Installed: (none)
  Candidate: 4.3.2-1.1
  Version table:
     4.3.2-1.1 0
        500 http://www.emdebian.org lenny/main Packages
This is
a) not in debian (not even in unstable)
b) gcc-4.3.2 + binutils-2.18.1 + glibc-2.7.
The gcc upstream development (this is where you have to push your patches into and where it makes sense to do development on toolchains) is gcc-4.6, and for production systems I'd like to have gcc-4.5.1, binutils-2.20.1 and glibc-2.12.1. So I assume I don't have the right paths in my sources.list?
rsc
On Friday, 3 September 2010 at 09:20:51, Robert Schwebel wrote:
I would like to test what the Debian guys have available. Can you point me to the right entry point? There seem to be too many Debian cross efforts out there and it is difficult for people from the outside to find the right things.
Visit my page with Ubuntu maverick cross compilers:
http://people.canonical.com/~hrw/ubuntu-maverick-armel-cross-compilers/
It works under maverick and should be installable under Debian sid/squeeze.
for production systems I'd like to have gcc-4.5.1, binutils-2.20.1 and glibc-2.12.1. So I assume I have not the right paths in my sources.list?
I provide gcc-4.4.4 and gcc-4.5.1 versions, with binutils 2.20.1-20100813 and eglibc 2.12.1. The GCC runtime libraries are from the 4.5.1 version.
Regards,
"Robert" == Robert Schwebel r.schwebel@pengutronix.de writes:
Hi,
Robert> Wearing my embedded linux developer hat, I really like your
Robert> idea. We use Debian all over the place, and if it contains
Robert> solid cross toolchains for the ARM/PowerPC/SH/Blackfin/x86
Robert> targets we are working with I'm really happy with the idea that
Robert> we wouldn't have to work on toolchains any more. It is a pain
Robert> every time you have to do it, it sucks up a lot of time and the
Robert> result is almost never something which can impress your
Robert> customers.
My feelings are basically the same for Buildroot.
Robert> However, we (mainly the group at Pengutronix that develops
Robert> ptxdist) still work on OSELAS.Toolchain simply because the
Robert> customers we work with do not all use Debian or Ubuntu but also
Robert> SuSE and RedHat/Fedora. So in order to provide stable and
Robert> reproducible board support packages, having toolchains that are
Robert> independent of the distribution is quite important.
Indeed. An alternative to distribution-specific toolchain work is collaborating with crosstool-NG, which is something we've been working on for some time. Long term I want to remove the internal toolchain stuff from BR.
+++ Robert Schwebel [2010-09-03 09:20 +0200]:
I would like to test what the Debian guys have available. Can you point me to the right entry point? There seem to be too many Debian cross efforts out there and it is difficult for people from the outside to find the right things.
Right now things are in a state of flux as the linaro stuff gets into Ubuntu and the Emdebian tools are being updated.
All I find in my apt-cache is this:
thebe:~# apt-cache policy gcc-4.3-arm-linux-gnu
gcc-4.3-arm-linux-gnu:
  Installed: (none)
  Candidate: 4.3.2-1.1
  Version table:
     4.3.2-1.1 0
        500 http://www.emdebian.org lenny/main Packages
This is
a) not in debian (not even in unstable)
The problem is that cross-toolchain builds have cross-arch dependencies and autobuilders don't understand dependencies outside their architecture, so a scheme was needed to allow autobuilders to cope (one day multiarch will help here). That was worked out over a year ago () but gcc 4.4 breakage and real life got in the way of getting it into Debian. A similar (but not identical) scheme has now been implemented by Marcin for the armel/linaro/ubuntu toolchains.
b) gcc-4.3.2 + binutils-2.18.1 + glibc-2.7.
newer toolchains for armel are available, but much of the rest broke: http://emdebian.org/debian/i386-unstable-stat.txt
4.4 has been stalled for over a year because the biarch/triarch multilib builds ((?)) broke. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=504487 Al Viro has just appeared and applied his genius to fix it all, so a pile of patches went in last week/this week.
There has also been much anguish about paths and tools re the various possibilities of existing dpkg-cross style triplet paths, sysrooted builds/paths, and multiarch paths, and what should be done in default toolchains, and how to manage transitions. That seems to be shaking out now; as far as we can tell the sysroot and existing paths are orthogonal and both still work in the linaro cross-tools, and multiarch isn't going to be properly with us until next cycle so we can leave the associated breakage a little longer.
The gcc upstream development (this is where you have to push your patches into and where it makes sense to do development on toolchains) is gcc-4.6, and for production systems I'd like to have gcc-4.5.1, binutils-2.20.1 and glibc-2.12.1. So I assume I don't have the right paths in my sources.list?
The reason newer stuff is not available is mostly one of resources. Hector Oron has been looking after the build-process, but keeping all the Debian arches cross-building and up-to-date with Debian has proved to be more than he had time for. (It's OK until something mysterious breaks, so it went well until 4.4 came out with much breakage). The combination of Marcin's and Al's (and Zumbi's and Doko's) recent work with a general desire to make this stuff happen directly in the distributions, rather than outside, is rapidly improving matters (although there remains plenty to do).
However there is still a shortage of attention on anything other than arm cross-toolchains. And I'm not sure where we are in terms of getting all this stuff into Debian, as opposed to Ubuntu. I assume it just missed squeeze so will remain in the emdebian repos for a little longer.
Anyone who wanted to help make sure the various arches stay building and help get the cross-builds happening in the main archive is _extremely_ welcome. Talk to Hector for details of how to help.
Current status is here: http://wiki.debian.org/EmdebianToolchain
Wookey
+++ Robert Schwebel [2010-09-03 09:20 +0200]:
[sorry - got the emdebian email wrong on previous mail - reply to this, not that]
[Emdebian people: this is useful thread from linaro-dev I am now cross-posting. Start here to read it: http://lists.linaro.org/pipermail/linaro-dev/2010-September/000657.html ]
I would like to test what the Debian guys have available. Can you point me to the right entry point? There seem to be too many Debian cross efforts out there and it is difficult for people from the outside to find the right things.
Right now things are in a state of flux as the linaro stuff gets into Ubuntu and the Emdebian tools are being updated.
All I find in my apt-cache is this:
thebe:~# apt-cache policy gcc-4.3-arm-linux-gnu
gcc-4.3-arm-linux-gnu:
  Installed: (none)
  Candidate: 4.3.2-1.1
  Version table:
     4.3.2-1.1 0
        500 http://www.emdebian.org lenny/main Packages
This is
a) not in debian (not even in unstable)
The problem is that cross-toolchain builds have cross-arch dependencies, and autobuilders don't understand dependencies outside their own architecture, so a scheme was needed to allow autobuilders to cope (one day multiarch will help here). That was worked out over a year ago, but gcc-4.4 breakage and real life got in the way of getting it into Debian. A similar (but not identical) scheme has now been implemented by Marcin for the armel/linaro/ubuntu toolchains.
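To make the constraint concrete, here is a toy sketch (illustrative only, not dpkg or buildd code) of the situation described above: a pre-multiarch autobuilder only sees packages for its own architecture, so a raw dependency on a foreign-arch package can never be satisfied, while a repacked native "-cross" package can be.

```python
# Toy model of a buildd's dependency view; package names are examples.
def resolvable(dep, archive, host_arch="i386"):
    """Can a buildd for host_arch satisfy the (package, arch) dependency?"""
    pkg, arch = dep
    # A pre-multiarch autobuilder only ever sees its own architecture.
    return arch == host_arch and (pkg, arch) in archive

# What an i386 buildd can see: native packages, including foreign-arch
# libraries repacked as native "-cross" packages.
archive = {("binutils-arm-linux-gnueabi", "i386"),
           ("libc6-dev-armel-cross", "i386")}

print(resolvable(("libc6-dev-armel-cross", "i386"), archive))  # True
print(resolvable(("libc6-dev", "armel"), archive))             # False
```

The "-cross" repacking is exactly the trick that lets the build-dependency appear native to the resolver.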
b) gcc-4.3.2 + binutils-2.18.1 + glibc-2.7.
newer toolchains for armel are available, but much of the rest broke: http://emdebian.org/debian/i386-unstable-stat.txt
4.4 has been stalled for over a year because the biarch/triarch multilib builds ((?)) broke. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=504487 Al Viro has just appeared and applied his genius to fix it all, so a pile of patches went in last week/this week.
There has also been much anguish about paths and tools, re the various possibilities of existing dpkg-cross-style triplet paths, sysrooted builds/paths, and multiarch paths, and what should be done in default toolchains, and how to manage transitions. That seems to be shaking out now: as far as we can tell the sysroot and existing paths are orthogonal and both still work in the linaro cross-tools, and multiarch isn't going to be properly with us until next cycle, so we can leave the associated breakage a little longer.
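For readers unfamiliar with the three layouts being juggled, here is an illustrative sketch of where each scheme puts target libraries. The exact directories vary between toolchain builds (the sysroot prefix in particular), so treat these as examples rather than authoritative locations.

```python
# Sketch of the three library-path layouts; paths are typical examples,
# not guaranteed for any particular toolchain build.
def libdir(scheme, triplet="arm-linux-gnueabi"):
    """Library directory each layout scheme would use for a GNU triplet."""
    if scheme == "dpkg-cross":   # classic dpkg-cross triplet paths
        return "/usr/%s/lib" % triplet
    if scheme == "sysroot":      # sysrooted toolchain (prefix varies per build)
        return "/usr/%s/sys-root/usr/lib" % triplet
    if scheme == "multiarch":    # Debian/Ubuntu multiarch layout
        return "/usr/lib/%s" % triplet
    raise ValueError("unknown scheme: %r" % scheme)

print(libdir("dpkg-cross"))  # /usr/arm-linux-gnueabi/lib
print(libdir("multiarch"))   # /usr/lib/arm-linux-gnueabi
```

The dpkg-cross and sysroot trees live under distinct prefixes, which is why they can coexist ("orthogonal") in one toolchain, while multiarch moves everything under /usr/lib/&lt;triplet&gt; and so needs a coordinated transition.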
The gcc upstream development branch (which is where you have to push your patches and where it makes sense to do toolchain development) is gcc-4.6, and for production systems I'd like to have gcc-4.5.1, binutils-2.20.1 and glibc-2.12.1. So I assume I don't have the right paths in my sources.list?
The reason newer stuff is not available is mostly one of resources. Hector Oron has been looking after the build process, but keeping all the Debian arches cross-building and up-to-date with Debian has proved to be more than he had time for. (It's OK until something mysterious breaks, so it went well until 4.4 came out with much breakage.) The combination of Marcin's and Al's (and Zumbi's and Doko's) recent work with a general desire to make this stuff happen directly in the distributions, rather than outside, is rapidly improving matters (although there remains plenty to do).
However, there is still a shortage of attention on anything other than arm cross-toolchains. And I'm not sure where we are in terms of getting all this stuff into Debian, as opposed to Ubuntu. I assume it just missed squeeze, so it will remain in the emdebian repos for a little longer.
Anyone who wants to help make sure the various arches stay building, and to help get the cross-builds happening in the main archive, is _extremely_ welcome. Talk to Hector for details of how to help.
Current status is here: http://wiki.debian.org/EmdebianToolchain
Wookey
On Thu, Sep 02, 2010 at 05:56:49PM +0200, Arnd Bergmann wrote:
Obviously there has to be a middle ground. We're building the binary packages for the configuration Dave mentioned (v7A/Neon), but IMHO that shouldn't prevent anyone from rebuilding it with our tool chain without having to make significant changes. If there are patches readily available for stuff that's not our primary focus (thumb1, non-cortex v7A CPUs, vfp without neon, ...), I'd say we should still keep them or get them upstream.
Arnd is right; what's missing is infrastructure and/or somebody to actually build and QA varying configurations of the toolchain and targets.
I'd like it so that people could pick up Ubuntu and tell it to bootstrap build a v5 archive and assemble an image from that in a few simple commands; if that was easy then finding somebody to actually maintain this in parallel to our platform releases would be easier. Right now you just need too much detailed knowledge. This is work that Foundations and Infrastructure have on their plates -- so I'd like Scott and Steve to use the opportunity to fish out low-hanging fruit that can be done and use it to inform their plans.