I have been working on arm64 ILP32 bootstrapping (in debian) for some
time now (sporadically), and in doing so noticed that we (ARM) did not
choose the best triplet. After internal discussion we have decided
that it would be better if it was changed, if possible. The technical
case for this is set out below.
Changing it is only sensible if there is consensus across distros and
upstreams that this is the right way to go, so comment is being
solicited here. If people have good reasons why it is worth the extra
work and difference from x86 practice involved in keeping the
existing triplet, then we can do that.
The existing triplets (big+little endian) are:
aarch64_ilp32-linux-gnu
aarch64_be_ilp32-linux-gnu
The proposed new triplets are:
aarch64-linux-gnu_ilp32
aarch64_be-linux-gnu_ilp32
The reasoning for changing is as follows.
The main technical argument is:
* ILP32 is an ABI, not a machine/kernel feature. So it makes more
sense to put the ILP32 indicator in the 'ABI' part of the triplet
than the 'kernel/machine' part of the triplet. No kernel will ever
return 'aarch64_ilp32' via uname (only 'aarch64' or 'aarch64_be').
Any aarch64 machine, with a new-enough kernel, can run LP64 or
ILP32 code. The kernel/hardware really doesn't care. Which you
choose/want is a feature of the userspace, more specifically of the
glibc and/or gcc defaults in place, (gcc defaults optionally
overridden with a -mabi option). Thus it makes sense to say "this
userspace is 'gnu_ilp32' ABI", when building/configuring software.
The main practical arguments are:
* We don't have to change config.sub/guess in every autoconfed piece
of software in the world to get our new ABI recognised. To use
aarch64_ilp32-linux-gnu requires config.sub and config.guess to be
updated in every autoconf-using package, and that update has not even
gone upstream yet, so this will take years. If we use
aarch64-linux-gnu_ilp32 instead, autoconf 'just works' already, without
any changes, as ABI suffixes are passed through.
This is the thing that will significantly reduce the effort of building
an ILP32 userspace.
* The x86 'x32' ABI is exactly analogous to the aarch64 ilp32 ABI,
and they chose to use an ABI suffix: 'x86_64-linux-gnux32'
Avoiding gratuitous difference from x86, in the exact same
situation, without any technical justification for divergence, is
sensible. More stuff should just work, like the autoconf case, or
at least be fixed in the same way, so porting is easier. Being
gratuitously different is both confusing to people and likely to
make more porting work (but I don't have a good handle on how much
more, beyond autoconf-using packages).
We do insist on an '_' separator, because separators are good for
both clarity and parsing (i.e. gnu_ilp32, not gnuilp32).
We realise that it is rather late in the day for this change, as some
people have been using the existing triplet in toolchains already. But
the kernel and glibc ABI was only settled relatively recently and the
patches submitted; there is no autoconf support upstream yet; and the
toolchain support is incomplete, in that you can't actually build a
self-hosted ilp32-defaulting toolchain yet. So we believe that it is
not actually too late, and worth trying to fix this now in the
interests of a simpler porting process in the long term.
The set of people who actually use ILP32 at this early stage, and thus
care, is currently quite small, but we would like to hear from anyone
who has baked the existing triplet into something (and perhaps released
it) such that this change would cause serious difficulties. (The
loader path is the main place this gets exposed.) Are there good
reasons why it is in fact not practical to change to the new triplet
(assuming that we get on with it reasonably promptly)?
Thank you for reading this far, I look forward to your comments.
Wookey
--
Principal hats: Linaro, Debian, ARM
http://wookware.org/
Hi all,
I've now posted a couple of RFC kernel patch series ([2], [1])
proposing the bulk of the Linux user/kernel ABI for the Scalable Vector
Extension (SVE) on AArch64.
There's an overview of the SVE architecture in [3] (no public specs
yet, though).
Since there are some impacts on ABI backwards compatibility,
particularly due to enlargement of the signal frame (see [1] for more
detail), I wanted to open the discussion up to a wider group of
distro-oriented people.
The big unknown from my perspective is how much real-world software
will actually break if the signal frame grows, and what sort of
migration paths are feasible. For now, I adopt the policy of making
things safe by default and providing a means for the distro maintainer
to override it -- but this may also slow adoption unnecessarily, if
hardly any software would break anyway.
People's experience and views on this kind of issue would be much
appreciated.
Cheers
---Dave
[1] [RFC PATCH 00/10] arm64/sve: Add userspace vector length control API
lists.infradead.org/pipermail/linux-arm-kernel/2017-January/478941.html
[2] [RFC PATCH 00/29] arm64: Scalable Vector Extension core support
http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/470507.…
[3] Technology Update: The Scalable Vector Extension (SVE) for the ARMv8-A architecture
https://www.community.arm.com/processors/b/blog/posts/technology-update-the…
Hi,
Some time ago we drafted a specification[1] for AArch64 virtual
machines. Now we are launching verification tools that let everyone
verify that the whole stack (host hypervisor, guest firmware and guest
OS image) implements the spec:
https://github.com/linaro/vmspec-tools
For some extra background see the blog post on vmspec:
http://www.linaro.org/blog/core-dump/ensuring-bootable-arm-vm-images/
From the cross-distro point of view, we are interested in finding out if
- QEMU shipped is new enough (2.6+)
- a compatible EFI for arm64 guests is available
- a vmspec compatible cloud guest image is available
If the image comes with cloud-init, vmspec-boot can be used directly
to verify compliance. Without cloud-init, one can run vmspec-verify
inside the guest to verify manually.
The tools are still under development, for example the ACPI test
returns a failure even if the guest would support ACPI if forced.
Feedback and patches are always welcome.
The README.md lists a handful of guest images that have been used in
testing. I'd be most happy to add more links to the list!
Riku
[1] http://www.linaro.org/app/resources/WhitePaper/VMSystemSpecificationForARM-…
Hi Alex,
- Mozilla Javascript
- Old versions 1.8.5
- The upstream fix doesn't help, as you identified, and I didn't
find an easy solution. The attached patch modifies the tagged pointer
data structure to use fewer bits, but this changes the JSAPI, so all
packages depending on mozjs 1.8.5 need to be rebuilt.
- esr17/esr24/esr38
- The attached patches are backported from the upstream fix. If
there are no issues similar to those in js 1.8.5, these patches can
be used without rebuilding the dependent packages.
- LuaJIT
- Most distributions seem to use v2.1. The issue is already fixed on
the upstream v2.1 branch, so updating the source code should be enough.
Reference :
https://github.com/LuaJIT/LuaJIT/commit/0c6fdc1039a3a4450d366fba7af4b29de73…
Hi Leif,
I am not familiar with the Debian process. If I want to fix these
issues in Debian, how should I submit the patches to the community?
Thanks,
Zheng
Hi,
As the mono team painfully found out, if big/little cores have
different cache line sizes, __clear_cache doesn't work as expected.
This affects any home-grown cache-flushing mechanism as well.
http://www.mono-project.com/news/2016/09/12/arm64-icache/
Protip: if you suspect your application's issues might be related to
big.LITTLE, use taskset(1) or hwloc-bind(1) to tie the process to
either the big or the little cluster (or just a single core).
Hi all,
(Sent to cross-distro with debian-arm on cc.)
We have an 'interesting' situation ahead of us, or indeed some of us
have already fallen into it:
ARM64 platforms with > 512GB between the lowest and highest RAM
addresses end up getting their amount of usable memory truncated if
the kernel is built for 39-bit VA (which is what currently happens for
Debian kernels). For 4.7, the arm64 defconfig was changed to enable
48-bit VA by default.
While itself not a critical error (but really annoying), in
combination with GRUB putting the initrd near the top of available
RAM, we end up with systems not booting. We think we've also seen
issues with ACPI tables above this waterline.
Simple - all we need to do then is enable 48-bit VA in the arm64
kernel config? Well, yes. I know Fedora are already doing this, and I
have raised a bug[1] for Debian to do the same.
[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=834505
The problem is - some pieces of software have had time to be written
in a ... let's charitably call it a "focused on amd64" fashion ... with
the embedded assumption that anything above virtual address bit 44 is
a pointer-tag free-for-all.
On the Debian-ish side, we're coming up on both Ubuntu 16.10 and the
freeze for Stretch, leaving a pretty short window to resolve this
unholy kernel->initrd->userland triangle.
The applications we know are affected are luajit and mozjs (libv8 is
not a problem). But this has a follow-on cost: both of these are used
by other packages. Other jit/runtime packages could have their own
issues.
The mozjs bug is fixed on trunk, and will hopefully make it into
release 49[2], but it remains to be seen if that's too late for some
distributions.
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1143022
For luajit, I'm told this has been fixed on the 2.1 branch, but not
yet merged to master?
Now, Jeremy (cc:d) tells me the list of currently-known Fedora
packages affected by this are:
couchdb
elinks
erlang-js
freewrl
libEAI
libproxy-mozjs
mediatomb
pacrunner
plowshare
polkit
cinnamon
cjs
cjs
cjs-tests
gjs
gjs
gjs-tests
gnome-shell
0ad
mongodb
mongodb-server
Some of these may only need an updated luajit/mozjs package, but some
may need more invasive changes.
Anyway, this is just a heads up - anyone sitting on more information
than I've put into this email is very welcome to share it.
/
Leif
Recently my angry post on Google+ [1] got so many comments that it
became clear the discussion would be better moved to a mailing list.
As it is about boot loaders, and Linaro has engineers from most of the
SoC vendor companies, I thought this list would be the best one.
1. https://plus.google.com/u/0/+MarcinJuszkiewicz/posts/J79qhndV6FY
It all started when I got a Pine64 board (based on the Allwinner A64
SoC) and hit the same issue as on several boards in the past: the boot
loader has to be written to some random place on the SD card.
The days when people used Texas Instruments SoCs were great: the
in-CPU boot loader knew how to read an MBR partition table and could
load the 1st stage boot loader (called MLO) from it, as long as it was
on a FAT filesystem.
The GPU used by the Raspberry Pi is able to read the MBR, find the
1st partition and read the firmware files from there, as long as it
is FAT.
Chromebooks have some SPI flash to keep boot loaders in, and use GPT
partitioning to find where to load the kernel (or another boot loader)
from.
And then we have all those boards where the vendors decided that SPI
flash for the boot loader is too expensive, so it is read from the SD
card instead. From some random place, of course...
Then we have distributions. Instead of generating a bunch of images,
one per board, they want to make one clean image which can handle as
many boards as possible.
If there are UEFI machines on the list of supported ones, then GPT
partitioning is used, the boot loader is stored in the EFI system
partition, and it boots. This is how AArch64 SBSA/SBBR machines work.
But there are also all those U-Boot (or fastboot/redboot/whateverboot)
machines. They are usually handled by taking the image from the
previous stage and adding the boot loader(s) with a separate script.
And this is where the "fun" starts...
GPT takes the first 17KB of the storage medium, as it allows storing
information about 128 partitions. Sure, no one uses that many on such
devices, but the space is reserved anyway.
But most chips expect the boot loader(s) to be stored:
- right after MBR
- from 1KB
- from 8KB
- any other random place
So the scripts end up as piles of magic written to handle all those
SoCs...
The solution for existing SoCs is usually to add 1MB of SPI flash
during the design phase of the device and store the boot loader(s)
there. "But that is so expensive", someone will say, when it is in
the 10-30 cent range...
Even the 96boards CE specification totally ignored this fact, when it
could have been a way of showing how to make a popular board. Instead
it became yet-another-board-to-laugh-at (the EE spec did not improve
much).
Is there a way to get it improved? At least for new designs?
> Message: 1
> Date: Thu, 5 May 2016 13:45:57 +0200
> From: Marcin Juszkiewicz <marcin.juszkiewicz(a)linaro.org>
> To: linaro-dev(a)lists.linaro.org
> Cc: cross-distro(a)lists.linaro.org
> Subject: Let's talk about boot loaders
> Message-ID: <89cb7539-fbea-e9d6-10a7-2ea4333dc756(a)linaro.org>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> [...]
>
> Is there a way to get it improved? At least for new designs?
Unfortunately, SoC vendors don't give a crap; as long as *their* SDK works,
anyone else can take a hike up a tree. If it wasn't for JCM pushing against
this, aarch64 servers would be in the same state, and they'd never ever
gain traction against Intel.
John.