== Progress ==
* GCC upstream validation:
- Reported a few regressions
- Reduced build frequency on release branches; it is now the same as trunk:
daily bump and "interesting" arm/aarch64 commits
* GCC
- pinged the further testcase fix for PR96770
- preparing cortex-m55 validation setup
- looking at CMSE tests vs QEMU
- Linaro BZ #5755 / upstream PR 100049
* GDB
- uploaded a cleaned-up version of the last known-to-work branch to sourceware
so that maintainers can have a look at it
- also uploaded the latest WIP based on the 8.2 branch
* Misc
- scripts patch reviews
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements & fixes
Progress:
* UM-2 [QEMU upstream maintainership]
- Finished and posted patches implementing remap support for AN524
- Release work for rc3
- Sent a set of refactoring patches that split out some files we
were previously #including into translate.c so they are separate
compilation units.
- Thought a bit more about reset handling (in the course of writing
the AN524 remap patches I again ran into some of the deficiencies
of our current reset implementation; I really must take some time
this release cycle to try to improve things here...)
- Fixed a bug where our AN547 board model accidentally disabled the FPU
on the Cortex-M55...
- ...and one where we were accidentally giving it an M33 rather than
an M55!
- Respun and extended some patches from Paolo which fix our use of
QEMU and system headers in the 2 C++ files in the codebase (which
were broken by a change in new versions of glib)
- Started looking at some of the for-6.1 patch review queue, which I
had been postponing in favour of for-6.0 stuff
* QEMU-406 [QEMU support for MVE (M-profile Vector Extension; Helium)]
- re-read the MVE portions of the v8M Arm ARM, sketched out a plan
of where to start with the QEMU implementation
thanks
-- PMM
https://gitlab.com/rth7680/qemu/-/commits/tgt-arm-bf16
It isn't a large extension, so it's all done. I added AArch64 SIMD, AArch32 Neon,
and SVE support all at once. I've tested it via risu against the FVP.
I've based this on my SVE2 branch, since there are some cleanups that made that
easier. I'll post patches to qemu-devel next week.
r~
== Progress ==
* GCC upstream validation:
- No regressions to report this week. Issues on the gcc-9 and gcc-10
release branches had already been reported by other people.
* GCC
- pinged the further testcase fix for PR96770
- Looking at failures for cortex-M; only found testisms (testsuite-only issues) so far
* Misc
- Fixed benchmarking jobs on stm32
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements & fixes
* Resume GDB/FDPIC work
VirtIO Initiative ([STR-9])
===========================
- had a sync-up with Akashi-san/Vincent about approaches for the Zephyr
demo
- need to write it up on Monday
[STR-9] <https://projects.linaro.org/browse/STR-9>
QEMU Support for Xen ([STR-20])
===============================
- continued reviewing [RFC v12 00/65] arm cleanup experiment for
kvm-only build Message-Id: <20210326193701.5981-1-cfontana(a)suse.de>
- this is looking pretty solid now
- cut a new version of [arm build clean-ups based on Claudio's series]
- testing revealed some breakage to track down next week
[STR-20] <https://projects.linaro.org/browse/STR-20>
[arm build clean-ups based on Claudio's series]
<https://github.com/stsquad/qemu/tree/xen/arm-build-cleanups-v3>
QEMU Upstream Work ([UM-2])
===========================
- posted [PULL 00/11] rc2 fixes (check-tcg, gitlab, gdbstub)
Message-Id: <20210406150041.28753-1-alex.bennee(a)linaro.org>
- spent some time trying to debug last week's PR, eventually merged
[UM-2] <https://projects.linaro.org/browse/UM-2>
GSoC/Intern Mentoring
=====================
GSoC 2021 proposal initiation for AGL
Message-Id: <CAC+yH-Z_KZKuQ0nmAgk-+B+HW88sNjztObtWwazof+cAGJGuuQ(a)mail.gmail.com>
GSoC: cache modelling plugin inquiry
Message-Id: <CAD-LL6hLk1XAB5VWwHgMOWQV=bT1+FCnf8f-q9MVw5e5A4RMqg(a)mail.gmail.com>
GSoC - QEMU Ideas
Message-Id: <CALpwMJz0kb2BDreTWBbt945FR+sLX=sKzHSv8pMb5LkLwXqEJA(a)mail.gmail.com>
Synced up with potential AGL Jailhouse/VirtIO GSoC applicant
Completed Reviews [3/3]
=======================
[PATCH RESEND] docs: clarify absence of set_features in vhost-user
Message-Id: <20210325144846.17520-1-hi(a)alyssa.is>
[PATCH] docs: Add a QEMU Code of Conduct and Conflict Resolution Policy document
Message-Id: <72bc8020-2028-82db-219c-a6ae311e26df(a)redhat.com>
[PATCH v4 00/12] target/arm mte fixes
Message-Id: <20210406174031.64299-4-richard.henderson(a)linaro.org>
Absences
========
Current Review Queue
====================
TODO [RFC v12 00/65] arm cleanup experiment for kvm-only build
Message-Id: <20210326193701.5981-1-cfontana(a)suse.de>
TODO [RFC 0/8] virtio: Improve boot time of virtio-scsi-pci and virtio-blk-pci
Message-Id: <20210325150735.1098387-1-groug(a)kaod.org>
TODO [PATCH 0/5] virtio: Implement generic vhost-user-i2c backend
Message-Id: <cover.1616570702.git.viresh.kumar(a)linaro.org>
TODO [PATCH v2 00/29] tcg: Workaround macOS 11.2 mprotect bug
Message-Id: <20210314212724.1917075-1-richard.henderson(a)linaro.org>
--
Alex Bennée
Progress (short week, 2 days):
* UM-2 [QEMU upstream maintainership]
- more release work, bug triage, etc
- started implementing the memory-remapping feature of the
AN524 FPGA image (which allows the BRAM and QSPI to be swapped
either at startup or dynamically under guest control; rough sketch
of the mechanism below). Some guests (like ARM TF-M) assume the
QSPI mapping.
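
(Not the actual AN524 patches; just a minimal sketch, assuming the remap is
modelled with QEMU's existing memory-alias API, of how two candidate regions
can share an address and be toggled. All names here -- RemapState,
remap_init, remap_update -- are invented for the example.)

#include "qemu/osdep.h"
#include "exec/address-spaces.h"
#include "exec/memory.h"

typedef struct RemapState {
    MemoryRegion bram_at_0;   /* alias of BRAM mapped at address 0 */
    MemoryRegion qspi_at_0;   /* alias of QSPI mapped at address 0 */
} RemapState;

/* Called at reset, or when the guest flips the remap control bit. */
static void remap_update(RemapState *s, bool qspi_at_zero)
{
    memory_region_set_enabled(&s->bram_at_0, !qspi_at_zero);
    memory_region_set_enabled(&s->qspi_at_0, qspi_at_zero);
}

static void remap_init(RemapState *s, Object *owner,
                       MemoryRegion *bram, MemoryRegion *qspi,
                       uint64_t size)
{
    MemoryRegion *sysmem = get_system_memory();

    /* Two aliases of the real regions, both covering address 0. */
    memory_region_init_alias(&s->bram_at_0, owner, "bram-at-0",
                             bram, 0, size);
    memory_region_init_alias(&s->qspi_at_0, owner, "qspi-at-0",
                             qspi, 0, size);

    /* Map both with a non-default priority so they may legally overlap... */
    memory_region_add_subregion_overlap(sysmem, 0, &s->bram_at_0, 1);
    memory_region_add_subregion_overlap(sysmem, 0, &s->qspi_at_0, 1);

    /* ...and start with BRAM visible at address 0. */
    remap_update(s, false);
}

Toggling the enabled flag on the aliases keeps the switch cheap; nothing has
to be unmapped and remapped when the guest changes the setting.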
thanks
-- PMM
== Progress ==
* GCC upstream validation:
- Reported minor testsuite issues (e.g. failures with -mabi=ilp32 on aarch64)
- re-started looking at validation for cortex-m55; realized that QEMU
does not support MVE yet
* GCC
- posted a further testcase fix for PR96770
- fixed PR 99786
- committed fix for PR 99773
* Misc
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements & fixes
* Resume GDB/FDPIC work
Progress:
* UM-2 [QEMU upstream maintainership]
- tagged rc0
- various bug investigation/patches for 6.0-ish stuff:
+ machines with a 'platform bus' incorrectly treated all sysbus
devices as hotpluggable: wrote and sent patchseries
+ respin of gpex pci-host "don't fault for unmapped MMIO/PIO window
accesses" patch
+ documented how to use gdbstub/gdb for a multi-cluster machine where
the clusters are different gdb inferiors (example session after this
list)
+ looked for workarounds for the macOS AppKit bug where Apple broke menubars
for apps that start off as console apps and programmatically switch
themselves to being GUI apps; various online suggestions don't seem
to help, except for "when app starts, force the Dock to become active
and then grab back focus 200ms later". Maybe we'll do that, but it's
pretty ugly...
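
A rough illustration of the flow being documented (assuming QEMU's gdbstub on
the default port 1234 and a machine whose two clusters are exposed to gdb as
two "processes"; the exact commands are in the doc patch itself):

  (gdb) target extended-remote localhost:1234
  (gdb) add-inferior
  (gdb) inferior 2
  (gdb) attach 2

The initial connection attaches to the first cluster; add-inferior /
inferior 2 / attach 2 create a second gdb inferior and attach it to the
second cluster, after which each cluster can be debugged independently.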
thanks
-- PMM
== Progress ==
* GCC upstream validation:
- Small improvement to pre-commit testing scripts to allow running a
subset of the tests (and thus save a lot of time)
* GCC
- MVE auto-vectorization:
- vcmp support for FP types is OK (illustrative example after this list).
- testsuite cleanup: looking at current failures; only found issues
with the tests themselves so far. Testing several small testsuite patches is
taking a long time due to the number of configurations
* Misc
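
(Illustrative only, not the actual testcase: the kind of FP comparison loop
the vcmp work targets. Assuming something like -O3 -mcpu=cortex-m55
-mfloat-abi=hard with MVE auto-vectorization enabled, the comparison should
become a predicated VCMP.F32/VPSEL sequence rather than a scalar loop.)

void select_gt(float *restrict r, const float *restrict a,
               const float *restrict b, int n)
{
    for (int i = 0; i < n; i++)
        r[i] = a[i] > b[i] ? a[i] : b[i];
}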
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements & fixes
* Resume GDB/FDPIC work
Progress:
* UM-2 [QEMU upstream maintainership]
- softfreeze was this week, lots of pullreq processing
- spun v2 of "fix M-profile load of PC/SP from ELF images via
memory aliases"
thanks
-- PMM
Hello Linaro Toolchain Working Group,
Please connect your new bots to the staging buildmaster first and make them
reliably green there:
linaro-aarch64-flang-debug
linaro-aarch64-flang-latest-clang
linaro-aarch64-flang-latest-gcc
clang-cmake-aarch64-full has been red for a month.
https://lab.llvm.org/buildbot/#/builders/7?numbuilds=700
Could you move linaro-aarch64-full to the staging buildmaster and make it
green again, please?
Please let me know if I can help or if you have questions.
Thanks
Galina
Hi Joel,
Indeed, LLD is not configured to be used by default in LLVM-12.0.0-rc1. You need to add the -fuse-ld=lld option for it to work. We’ll fix this in the final LLVM-12 release for WoA, which is expected in around 2 weeks.
Thanks for catching this!
c:\Users\tcwg\source\maxim>..\llvm-12.0.0-rc1\bin\clang-cl.exe hello.c
clang-cl: error: unable to execute command: Couldn't execute program 'C:\BuildTools\VC\Tools\MSVC\14.28.29333\bin\Hostx64\arm64\link.exe': Unknown error (0xD8)
clang-cl: error: linker command failed with exit code 1 (use -v to see invocation)
c:\Users\tcwg\source\maxim>..\llvm-12.0.0-rc1\bin\clang-cl.exe -fuse-ld=lld hello.c
c:\Users\tcwg\source\maxim>hello.exe
Hello
c:\Users\tcwg\source\maxim>..\llvm-12.0.0-rc1\bin\clang.exe -fuse-ld=lld hello.c
c:\Users\tcwg\source\maxim>hello.exe
Hello
--
Maxim Kuvyrkov
https://www.linaro.org
> On 4 Mar 2021, at 19:43, Joel Cox <Joel.Cox(a)arm.com> wrote:
>
> Hi
>
> I was trying "clang hello.c" from the command line, but "clang-cl hello.c" gives me the same error. I am unsure if this is what you mean, but neither works.
>
> Thanks,
> Joel
>
> -----Original Message-----
> From: Maxim Kuvyrkov <maxim.kuvyrkov(a)linaro.org>
> Sent: 04 March 2021 16:40
> To: Joel Cox <Joel.Cox(a)arm.com>
> Cc: linaro-toolchain(a)lists.linaro.org
> Subject: Re: Clang targetting x64 linker
>
> Hi Joel,
>
> Are you using clang-cl.exe as the compiler/linker driver? It’s easiest to use clang-cl.exe as it aims to be a direct replacement for MSVC’s cl.exe, but it will use LLVM tools. In particular, clang-cl.exe uses the LLVM linker (LLD) by default.
>
> If you are using linux-style clang.exe as the driver, then you need to specify -fuse-ld=lld to use LLD.
>
> Does this help?
>
> Regards,
>
> --
> Maxim Kuvyrkov
> https://www.linaro.org
>
>> On 4 Mar 2021, at 19:11, Joel Cox <Joel.Cox(a)arm.com> wrote:
>>
>> Hi
>>
>> I've been trying to run clang on a Windows on Arm machine, but it keeps trying to use the link.exe located in "Visual studio/..../Host64/arm64", which is (seemingly) an x64 tool and as such doesn't run, and crashes the process.
>> Is there a way to set clang to look at VS's x86 link.exe? Or is there an arm64 version that clang should be using instead?
>>
>> Thanks,
>> Joel
Progress:
* UM-2 [QEMU upstream maintainership]
- Mostly caught up on code review now: have done all the patches that
are targeting 6.0
- Sent out some pullreqs with for-6.0 material
- Wrote a patch to make us emulate the right number of counters for
the PMU according to what each CPU should have, rather than always 4
(sketch after this list)
- Had a go at fixing the M-profile vector table load on reset to
handle aliased memory regions
* QEMU-364 [QEMU support for ARMv8.1-M extensions]
- mps3-an524 and -an547 patchseries now in master: this epic is done!
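
(Sketch only, with an invented helper name: architecturally the number of
implemented PMU event counters is advertised in PMCR.N, bits [15:11], so a
model can derive the per-CPU count from its reset PMCR value instead of
hard-coding 4.)

#include <stdint.h>

/* PMCR.N (bits [15:11]) holds the number of event counters implemented. */
static inline unsigned pmu_counters_from_pmcr(uint64_t reset_pmcr)
{
    return (reset_pmcr >> 11) & 0x1f;
}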
thanks
-- PMM
Folks,
I am pleased to announce the move of libc++ to pre-commit CI. Over the past
few months, we have set up Buildkite jobs on top of the Phabricator
integration built by Mikhail and Christian, and we now run almost all of
the libc++ build bots whenever a Phabricator review is created. The bots
also run when a commit is pushed to the master branch, similarly to the
existing Buildbot setup. You can see the libc++ pipeline in action here:
https://buildkite.com/llvm-project/libcxx-ci.
This is great -- we’ve been waiting to set up pre-commit CI for a long
time, and we’ve seen a giant productivity gain since it’s been up. I think
everyone who contributes to libc++ greatly benefits, seeing how reviews are
now used to trigger CI and improve our confidence in changes.
This change does have an impact on existing build bots that are not owned
by one of the libc++ maintainers. While I transferred the build bots that
we owned (which Eric had set up) to Buildkite, the remaining build bots
will have to be moved to Buildkite by their respective owners. These build
bots are (owners in CC):
libcxx-libcxxabi-x86_64-linux-debian
libcxx-libcxxabi-x86_64-linux-debian-noexceptions
libcxx-libcxxabi-libunwind-x86_64-linux-debian
libcxx-libcxxabi-singlethreaded-x86_64-linux-debian
libcxx-libcxxabi-libunwind-armv7-linux
libcxx-libcxxabi-libunwind-armv8-linux
libcxx-libcxxabi-libunwind-armv7-linux-noexceptions
libcxx-libcxxabi-libunwind-armv8-linux-noexceptions
libcxx-libcxxabi-libunwind-aarch64-linux
libcxx-libcxxabi-libunwind-aarch64-linux-noexceptions
The process of moving these bots over to Buildkite is really easy. Please
take a look at the documentation at
https://libcxx.llvm.org/docs/AddingNewCIJobs.html#addingnewcijobs and
contact me if you need additional help.
To make sure we get the full benefits of pre-commit CI soon, I would like
to put a cutoff date on supporting the old libc++ builders at
http://lab.llvm.org:8011/builders. I would propose that after January 1st
2021 (approx. 1 month from now), the libc++ specific build bots at
lab.llvm.org be removed in favor of the Buildkite ones. If you currently
own a bot, please make sure to add an equivalent Buildkite bot by that
cutoff date to make sure your configuration is still supported, or let me
know if you need an extension.
Furthermore, with the ease of creating new CI jobs with this
infrastructure, we will consider any libc++ configuration not covered by a
pre-commit bot as not explicitly supported. It doesn’t mean that such
configurations won’t work -- it just means that we won’t be making bold
claims about supporting configurations we’re unable to actually test. So if
you care about a configuration, please open a discussion and let’s see how
we can make sure it's tested properly!
I am thrilled to be moving into the pre-commit CI era. The benefits we see
so far are huge, and we're loving it.
Thanks,
Louis
PS: This has nothing to do with a potential move or non-move to GitHub. The
current pre-commit CI works with Phabricator, and would work with GitHub if
we decided to switch. Let’s try to keep those discussions separate :-).
PPS: We’re still aiming to support non libc++ specific Buildbots. For
example, if something in libc++ breaks a Clang bot, we’ll still be
monitoring that. I’m just trying to move the libc++-specific configurations
to pre-commit.
[VIRT-349 # QEMU SVE2 Support ]
I have a working FVP SVE2 install!
I used a newer FVP version (11.13.36) than the one I tried last time (11.13.21
on Jan 27).
I used a debian-testing snapshot from 1-MAR, which has Linux 5.10 bundled and
the FVP RevC DTB installed.
I used
https://git.linaro.org/landing-teams/working/arm/arm-reference-platforms.gi…
(which is linked to by one of the howtos that Peter forwarded) and chose the
"pre-built uefi" version. I need to report a bug on this build script -- the
"build from source" option does not work on a system that has all python3 and
no python2.
I've rebuilt all of the risu trace files for vq=4 (512-bit).
I'm now refreshing my qemu branch to test.
[UM-61 # TCG Core Maintainership ]
PR for patch queue; aa64 fixes, tci fixes, tb_lookup cleanups.
[UM-2 # Upstream Maintainership ]
Patch review, mostly v8.1m board stuff.
r~
Progress (short week, 2 days):
* Some time spent on setting up new work laptop
* Started in on the pile of code review that had built up
while I was on holiday... made a bit of progress and sent out
one pullreq. Queue length now: 16 series.
thanks
-- PMM