== Progress ==
* GCC upstream validation:
- Small improvement to pre-commit testing scripts to allow running a
subset of the tests (and thus save a lot of time)
* GCC
- MVE autovectorization:
- vcmp support for FP types is now OK.
- testsuite cleanup: looking at current failures, so far I have only found
issues with the tests themselves. Testing several small testsuite patches is
taking a long time due to the number of configurations.
* Misc
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements & fixes
* Resume GDB/FDPIC work
Progress:
* UM-2 [QEMU upstream maintainership]
- softfreeze was this week, lots of pullreq processing
- spun v2 of "fix M-profile load of PC/SP from ELF images via
memory aliases"
thanks
-- PMM
Hello Linaro Toolchain Working Group,
Please connect your new bots to the staging first and make them reliably
green there:
linaro-aarch64-flang-debug
linaro-aarch64-flang-latest-clang
linaro-aarch64-flang-latest-gcc
clang-cmake-aarch64-full has been red for a month.
https://lab.llvm.org/buildbot/#/builders/7?numbuilds=700
Could you move linaro-aarch64-full to the staging and make it green again,
please?
Please let me know if I can help or if you have questions.
Thanks
Galina
Hi Joel,
Indeed, LLD is not configured to be used by default in LLVM-12.0.0-rc1. You need to add the -fuse-ld=lld option for it to work. We’ll fix this in the final LLVM-12 release for WoA, which is expected in around 2 weeks.
Thanks for catching this!
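For reference, the hello.c used in the transcripts below is assumed to be just the classic minimal program (the actual file isn't shown in the thread):

  /* hello.c -- assumed contents, matching the "Hello" output below. */
  #include <stdio.h>

  int main(void)
  {
      printf("Hello\n");
      return 0;
  }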
c:\Users\tcwg\source\maxim>..\llvm-12.0.0-rc1\bin\clang-cl.exe hello.c
clang-cl: error: unable to execute command: Couldn't execute program 'C:\BuildTools\VC\Tools\MSVC\14.28.29333\bin\Hostx64\arm64\link.exe': Unknown error (0xD8)
clang-cl: error: linker command failed with exit code 1 (use -v to see invocation)
c:\Users\tcwg\source\maxim>..\llvm-12.0.0-rc1\bin\clang-cl.exe -fuse-ld=lld hello.c
c:\Users\tcwg\source\maxim>hello.exe
Hello
c:\Users\tcwg\source\maxim>..\llvm-12.0.0-rc1\bin\clang.exe -fuse-ld=lld hello.c
c:\Users\tcwg\source\maxim>hello.exe
Hello
--
Maxim Kuvyrkov
https://www.linaro.org
> On 4 Mar 2021, at 19:43, Joel Cox <Joel.Cox(a)arm.com> wrote:
>
> Hi
>
> I was trying "clang hello.c" from the command line, but "clang-cl hello.c" gives me the same error. I am unsure if this is what you mean, but neither works.
>
> Thanks,
> Joel
>
> -----Original Message-----
> From: Maxim Kuvyrkov <maxim.kuvyrkov(a)linaro.org>
> Sent: 04 March 2021 16:40
> To: Joel Cox <Joel.Cox(a)arm.com>
> Cc: linaro-toolchain(a)lists.linaro.org
> Subject: Re: Clang targetting x64 linker
>
> Hi Joel,
>
> Are you using clang-cl.exe as the compiler/linker driver? It’s easiest to use clang-cl.exe, as it aims to be a direct replacement for MSVC’s cl.exe but uses LLVM tools. In particular, clang-cl.exe uses the LLVM linker (LLD) by default.
>
> If you are using linux-style clang.exe as the driver, then you need to specify -fuse-ld=lld to use LLD.
>
> Does this help?
>
> Regards,
>
> --
> Maxim Kuvyrkov
> https://www.linaro.org
>
>> On 4 Mar 2021, at 19:11, Joel Cox <Joel.Cox(a)arm.com> wrote:
>>
>> Hi
>>
>> I've been trying to run clang on a Windows on Arm machine, but it keeps trying to use the link.exe located in "Visual studio/..../Host64/arm64", which is (seemingly) an x64 tool and as such doesn't run, and crashes the process.
>> Is there a way to set clang to look at VS's x86 link.exe? Or is there an arm64 version that clang should be using instead?
>>
>> Thanks,
>> Joel
Progress:
* UM-2 [QEMU upstream maintainership]
- Mostly caught up on code review now: have done all the patches that
are targeting 6.0
- Sent out some pullreqs with for-6.0 material
- Wrote a patch to make us emulate the right number of counters for
the PMU according to what each CPU should have, rather than always 4
(see the sketch after this list)
- Had a go at fixing the M-profile vector table load on reset to
handle aliased memory regions
* QEMU-364 [QEMU support for ARMv8.1-M extensions]
- mps3-an524 and -an547 patchseries now in master: this epic is done!
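A minimal sketch of the idea behind the PMU change above (illustrative only, not the actual QEMU patch): the number of implemented event counters is encoded in the PMCR.N field, bits [15:11], so the model can derive it from each CPU's PMCR reset value instead of hard-coding four.

  #include <stdint.h>
  #include <stdio.h>

  #define PMCRN_SHIFT 11
  #define PMCRN_MASK  (0x1fu << PMCRN_SHIFT)

  /* Number of implemented PMU event counters, taken from PMCR.N. */
  static unsigned pmu_num_counters(uint32_t pmcr)
  {
      return (pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
  }

  int main(void)
  {
      uint32_t pmcr = 0x41013000;  /* hypothetical reset value encoding N = 6 */
      printf("counters: %u\n", pmu_num_counters(pmcr));
      return 0;
  }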
thanks
-- PMM
Folks,
I am pleased to announce the move of libc++ to pre-commit CI. Over the past
few months, we have set up Buildkite jobs on top of the Phabricator
integration built by Mikhail and Christian, and we now run almost all of
the libc++ build bots whenever a Phabricator review is created. The bots
also run when a commit is pushed to the master branch, similarly to the
existing Buildbot setup. You can see the libc++ pipeline in action here:
https://buildkite.com/llvm-project/libcxx-ci.
This is great -- we’ve been waiting to set up pre-commit CI for a long
time, and we’ve seen a giant productivity gain since it went up. I think
everyone who contributes to libc++ benefits greatly, now that reviews
trigger CI and improve our confidence in changes.
This change does have an impact on existing build bots that are not owned
by one of the libc++ maintainers. While I transferred the build bots that
we owned (which Eric had set up) to Buildkite, the remaining build bots
will have to be moved to Buildkite by their respective owners. These build
bots are (owners in CC):
libcxx-libcxxabi-x86_64-linux-debian
libcxx-libcxxabi-x86_64-linux-debian-noexceptions
libcxx-libcxxabi-libunwind-x86_64-linux-debian
libcxx-libcxxabi-singlethreaded-x86_64-linux-debian
libcxx-libcxxabi-libunwind-armv7-linux
libcxx-libcxxabi-libunwind-armv8-linux
libcxx-libcxxabi-libunwind-armv7-linux-noexceptions
libcxx-libcxxabi-libunwind-armv8-linux-noexceptions
libcxx-libcxxabi-libunwind-aarch64-linux
libcxx-libcxxabi-libunwind-aarch64-linux-noexceptions
The process of moving these bots over to Buildkite is really easy. Please
take a look at the documentation at
https://libcxx.llvm.org/docs/AddingNewCIJobs.html#addingnewcijobs and
contact me if you need additional help.
To make sure we get the full benefits of pre-commit CI soon, I would like
to put a cutoff date on supporting the old libc++ builders at
http://lab.llvm.org:8011/builders. I would propose that after January 1st
2021 (approx. 1 month from now), the libc++-specific build bots at
lab.llvm.org be removed in favor of the Buildkite ones. If you currently
own a bot, please make sure to add an equivalent Buildkite bot by that
cutoff date to make sure your configuration is still supported, or let me
know if you need an extension.
Furthermore, with the ease of creating new CI jobs with this
infrastructure, we will consider any libc++ configuration not covered by a
pre-commit bot as not explicitly supported. It doesn’t mean that such
configurations won’t work -- it just means that we won’t be making bold
claims about supporting configurations we’re unable to actually test. So if
you care about a configuration, please open a discussion and let’s see how
we can make sure it's tested properly!
I am thrilled to be moving into the pre-commit CI era. The benefits we see
so far are huge, and we're loving it.
Thanks,
Louis
PS: This has nothing to do with a potential move or non-move to GitHub. The
current pre-commit CI works with Phabricator, and would work with GitHub if
we decided to switch. Let’s try to keep those discussions separate :-).
PPS: We’re still aiming to support non libc++ specific Buildbots. For
example, if something in libc++ breaks a Clang bot, we’ll still be
monitoring that. I’m just trying to move the libc++-specific configurations
to pre-commit.
[VIRT-349 # QEMU SVE2 Support ]
I have a working FVP SVE2 install!
I used a newer FVP version (11.13.36) than the one I tried last time (11.13.21
on Jan 27).
I used a debian-testing snapshot from 1-MAR, which has linux 5.10 bundled and
the fvp revc dtb installed.
I used
https://git.linaro.org/landing-teams/working/arm/arm-reference-platforms.gi…
(which is linked to by one of the howtos that Peter forwarded) and chose the
"pre-built uefi" version. I need to report a bug on this build script -- the
"build from source" option does not work on a system that has all python3 and
no python2.
I've rebuilt all of the risu trace files for vq=4 (512-bit).
I'm now refreshing my qemu branch to test.
[UM-61 # TCG Core Maintainership ]
PR for patch queue; aa64 fixes, tci fixes, tb_lookup cleanups.
[UM-2 # Upstream Maintainership ]
Patch review, mostly v8.1m board stuff.
r~
Progress (short week, 2 days):
* Some time spent on setting up new work laptop
* Started in on the pile of code review that had built up
while I was on holiday... made a bit of progress and sent out
one pullreq. Queue length now: 16 series.
thanks
-- PMM
Hi, all
Does anybody know what '.....isra.0' means in objects compiled with GCC 10.2?
I just noticed this issue when using bcc/eBPF tools. I submitted the details
at:
* https://github.com/iovisor/bcc/issues/3293
Simply put, when building a Linux kernel with GCC 10.2, the symbol
'finish_task_switch' becomes 'finish_task_switch.isra.0' in the object (as
reported by 'nm').
Because a lot of kernel tracers (such as bcc) use 'finish_task_switch' as
the probe point, this change in the compilation result can make all such
tools fail.
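For context, the '.isra' suffix is added when GCC's IPA-SRA pass (interprocedural scalar replacement of aggregates) clones a function to simplify its parameters. A minimal C sketch of the kind of code where this can happen (illustrative only, not kernel code; whether GCC actually clones a given function depends on the version and optimization flags):

  /* isra-demo.c -- hypothetical example. */
  struct task { int prev; int next; int pad[16]; };

  /* Static and not inlined, so a clone is a likely outcome at -O2. */
  static __attribute__((noinline)) int finish_switch(struct task t)
  {
      /* Only one field is used, so IPA-SRA may rewrite the parameter list,
       * emitting a local clone such as 'finish_switch.isra.0'. */
      return t.prev + 1;
  }

  int caller(void)
  {
      struct task t = { .prev = 41 };
      return finish_switch(t);
  }

Building with 'gcc -O2 -c isra-demo.c' and running 'nm isra-demo.o' may then show a 'finish_switch.isra.0' symbol in place of 'finish_switch'.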
Thanks.
Best regards,
Guodong Xu
== Progress ==
* GCC upstream validation:
- a couple of regressions to bisect.
- minor testcase fix
- reported a couple of new failures
* GCC
- MVE autovectorization:
- vcmp support mostly complete. Minor update needed to support FP types.
- working on interleaved vector load/store support
* Misc
- fixed the stm32 benchmarking harness; it's working again.
- submitted patches to reduce the toolchain build time (for
benchmarking we don't need all the multilibs, which take ages to build)
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements & fixes
* cortex-m benchmarking
Hi
I've been trying to run clang on a Windows on Arm machine, but it keeps trying to use the link.exe located in "Visual studio/..../Host64/arm64", which is (seemingly) an x64 tool and as such doesn't run, and crashes the process.
Is there a way to set clang to look at VS's x86 link.exe? Or is there an arm64 version that clang should be using instead?
Thanks,
Joel
Hi,
I've been trying to use clang on a Windows on Arm machine (Samsung Galaxy Book S), but whenever I run it, I get either nothing (in PowerShell) or an error popup: "The application was unable to start correctly (0x000007b)".
The release I'm using is LLVM-12.0.0-rc1-woa64.exe<https://github.com/llvm/llvm-project/releases/download/llvmorg-12.0.0-rc1/L…> from https://github.com/llvm/llvm-project/releases/tag/llvmorg-12.0.0-rc1
I have a feeling there might be a library or dependency I am missing.
Any help with how to resolve this issue would be appreciated.
Thanks,
Joel
[UM-2 # QEMU Upstream Work]
* A lot of patch review.
* Some target/i386 cleanup, as followup to said patch review.
* Collecting patches for tcg-next.
r~
== Progress ==
* GCC upstream validation:
- a few regressions to bisect. Fixed a minor testcase issue
* GCC
- MVE autovectorization: working on vcmp. After some cleanup and
factorization, the comparison operators now work on GCC vectors (see
the sketch after this list). I will now resume work on auto-vectorization.
* Misc
- fixes in stm32 benchmarking harness
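A minimal sketch of what "comparison operators on GCC vectors" refers to (illustrative only, using GCC's generic vector extension rather than the MVE patches themselves): vector comparisons yield a lane mask of 0 / -1 values, and the MVE work is about getting the vectorizer to emit the corresponding VCMP sequences for such comparisons, including the FP variants.

  /* Illustrative only: GCC generic vector extension, not the MVE patches. */
  typedef int   v4si __attribute__((vector_size(16)));
  typedef float v4sf __attribute__((vector_size(16)));

  v4si cmp_int(v4si a, v4si b)
  {
      return a == b;   /* lane-wise compare: each lane becomes 0 or -1 */
  }

  v4si cmp_float(v4sf a, v4sf b)
  {
      return a > b;    /* FP compares also produce an integer mask vector */
  }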
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements & fixes
* cortex-m benchmarking
* After proving that the aarch64 host problem with
booting s390x with virtio was a lack of barriers for
the guest memory model, I've spent some time working
on a ld-acq/st-rel optimizer for tcg (see the sketch
after this list).
* Minor rev of tci rewrite.
* Some patch review on claudio's kvm/tcg split.
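A rough illustration of the ld-acq/st-rel idea in plain C11 atomics (a sketch only, not TCG IR or the actual optimizer): when a guest with a stronger memory model runs on a weakly ordered host, guest memory accesses need extra ordering, and folding the barrier into the access lets the host use a single load-acquire or store-release instruction instead of a plain access plus a full barrier.

  #include <stdatomic.h>

  /* Naive form: relaxed access plus a full fence for every guest load. */
  int guest_load_fenced(_Atomic int *p)
  {
      int v = atomic_load_explicit(p, memory_order_relaxed);
      atomic_thread_fence(memory_order_seq_cst);   /* full barrier (DMB) */
      return v;
  }

  /* Optimized form: the barrier is folded into the access, so an AArch64
   * host can use a single LDAR instead of LDR + DMB. */
  int guest_load_acquire(_Atomic int *p)
  {
      return atomic_load_explicit(p, memory_order_acquire);
  }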
r~
Progress:
* QEMU-364 [QEMU support for ARMv8.1-M extensions]
+ sent out v2 of the MPS3-AN524 series (minor changes only)
+ discovered that I need to implement the SSE-300's System Counter
+ implemented the system counter, did some final testing and
reviewing, and sent out v1 of the MPS3-AN547 series to the list
+ once these patches are reviewed and in master the QEMU-413 epic
will be complete
-- PMM
Hello Linaro Toolchain Working Group,
libcxx-libcxxabi-libunwind-armv8-linux has been red for 5 days.
clang-cmake-aarch64-global-isel has been red for 10 days.
clang-cmake-aarch64-quick has been red for 10 days.
clang-cmake-armv7-global-isel has been red for 10 days.
clang-cmake-armv7-lnt has been red for 10 days.
clang-cmake-aarch64-lld has been red for 11 days.
clang-cmake-armv8-lld has been red for 11 days.
clang-cmake-thumbv7-full-sh has been red for 11 days.
clang-cmake-armv7-full has been red for 11 days.
Is anybody looking at this?
I also noticed that some of your workers are less reliable than others. For
example, the linaro-tk1-09 worker last produced a green build 2 months
ago - http://lab.llvm.org:8011/#/workers/2?numbuilds=500
I'm removing the linaro-tk1-09 worker from the production buildbot for now.
Please feel free to connect it to the staging and make the builder reliably
green. Then it can be returned to production.
linaro-armv8-windows-msvc-01 and linaro-armv8-windows-msvc-02 have been red
for the last 4 months. I'm removing them from the production buildbot. You
can return them to production after they are reliably green in the
staging.
Please feel free to ask if you have questions.
Thanks
Galina
== Progress ==
* GCC upstream validation:
- a few regressions to bisect. Fixed a minor testcase issue
- native validation in Linaro's lab: we still see a few random results
* GCC
- MVE autovectorization: Working on vcmp.
* Misc
- fixes in stm32 benchmarking harness
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements & fixes
* cortex-m benchmarking
Holidays next week, back Monday 22nd Feb
Progress:
* UM-2 [QEMU upstream maintainership]
+ sent out a small code-cleanup patchset for some arm display device
models to remove no-longer-needed support for non-32bpp outputs
+ tracked down (with the aid of Paolo) why our build system had started
building the documentation every time you run 'make'; sent a fix
+ code review todo queue status: 12 items, but the oldest was only
sent to the list on Monday...
* QEMU-364 [QEMU support for ARMv8.1-M extensions]
+ An MPS3 Cortex-M55 FPGA image is now publicly available: the AN547.
We'll provide a model of this in QEMU. Started working on the
patchseries to implement it (relatively small changes on top of
the SSE-300 work I've already done but not sent out, and the
mps3-an524 series I sent out for review last week). This is basically
code-complete but I need to do some more testing and ideally I'd
like the AN524 series to get reviewed before I send out another
40-patch series that would have to be based on top of that one.
thanks
-- PMM