Someone asked if this worked, and I thought 'that's trivial to test with multiarch', so I did. On Saucy (where there is no multiarch version skew issue between binary versions of packages) the 'dpkg --add-architecture armhf; apt-get update; apt-get install links:armhf' part works very nicely. Everything installs as required.
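For anyone reproducing this, that was just the standard multiarch sequence (run as root or with sudo; links is simply the test package I picked):

  dpkg --add-architecture armhf
  apt-get update
  apt-get install links:armhf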
However, binaries don't run - they just get killed.
Apparently no-one has tried this for a couple of years, since it was last working... (Or is it working on other platforms? Apparently it's OK on Android.)
It turns out that our arm64 kernel config has vm.mmap_min_addr=65536, but armhf binaries tend to get mmapped at 0x8000 (32K).
On armhf that value is set to vm.mmap_min_addr=4096.
The difference is there to protect page 0 even if large pages are enabled, which seems sensible enough, but it has this unfortunate side-effect.
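A quick way to see both halves of the mismatch (assuming binutils is installed; /usr/bin/links is just the example binary from above):

  # the lowest address the kernel will let userspace map
  sysctl vm.mmap_min_addr

  # the load address the armhf executable was linked for (0x8000 for typical armhf binaries)
  readelf -l /usr/bin/links | grep LOAD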
So either we need to stop doing that (what would be the consequences of setting 4096 by default on arm64?), or change something in the loader so that it stops mapping things below 64K, which I think involves glibc hackery.
Running 32-bit binaries is quite seriously broken until this is fixed. I presume this currently isn't on anyone's list to fix? I'm not sure whose list it should go on.
Part 2: once this is fixed with 'sudo sysctl -w vm.mmap_min_addr=4096', some binaries work (hello, bzip2) but fancier things still don't (links, wget). They segfault after loading libs. I'm still investigating that, but it looks like we have at least two issues...
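If anyone wants that setting to survive reboots while testing, something like this sketch should do (the file name is made up, and this is only a workaround, not a proposal for the real fix):

  # persist the lower limit via a sysctl drop-in (hypothetical file name)
  echo 'vm.mmap_min_addr = 4096' | sudo tee /etc/sysctl.d/99-armhf-mmap.conf
  sudo sysctl --system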
So this mail is really to ask: what is the best fix, and thus who will deal with it? Do I need to file a bug or a card somewhere?
Possibly more to follow when I work out what else is wrong...
Wookey