# Progress #
* Fix timeout in random-signal.exp. TCWG-424. Ongoing. [4/10]
I know the immediate cause of the problem, but can't yet figure out why
it happens. I can only reproduce the failure once in every 20~30 runs.
* TCWG-423, support gnu vector in inferior call in AArch64 GDB. Done.
[1/10]
* Build native ARM and AArch64 GDB in C++. No regressions in testing.
TCWG-446. Done. [1/10]
* Fix GDB internal error in gdb.thread/watchpoint-fork.exp on AArch64.
[2/10]
* Upstream patches review [2/10]
** Review patches for arm gdbserver software single step V3. Some
patches are approved, but a V4 is still needed for the rest.
** Review all-stop on non-stop patches.
** Discuss whether we need single step in fast tracepoint. Ongoing.
# Plan #
* One day off on Monday.
* Understand ST's jtag probe and help them to make use of multi-arch
in GDB.
* Look into timeout in random-signal.exp.
* TCWG-156, GDB test parity between AArch64 and X86_64.
* TCWG-460, follow up on the AArch64 GDBserver multi-arch work.
--
Yao
== Progress ==
- tree re-assoc regression (2/10)
- Found a test-case to reproduce it.
- Working on a patch
- LuaJIT issue (6/10)
* Set up LuaJIT on aarch64 and tried it
* Tried to reproduce the nginx issue with various configs, without success
- LTO (1/10)
* aarch64 bootstrap
* Ran into an uninitialized-variable warning issue; looking into it
- Misc (1/10)
* gcc/bug list
== Plan ==
* tree re-assoc
* LTO
== Progress ==
* Support (6/10)
- Trying to add ADRL support in the assembler: http://llvm.org/PR24350
* Buildbots (1/10)
- Some breakages, nothing serious
* Background (3/10)
- Code review, meetings, discussions, general support, etc.
- Working on 2016's plan
== Progress ==
o Linaro GCC (4/10)
* Found and backported missing dependencies
* More backports
o Upstream work (4/10)
* Reviewed some patches
* Tried to reproduce an LRA issue (fixed in the meantime)
* Continued sanitizing the gfortran testsuite
o Misc (2/10)
* Various meetings
== Plan ==
o Continue on-going tasks
== This Week ==
* Target conversion to hook (2/10)
- Completed build and test for the ASM_FORMAT_PRIVATE_NAME and
ASM_LABEL_OUTPUT_LABEL conversions to hooks
- Converted ASM_OUTPUT_LABEL_REF to hook
* Holidays (8/10)
- 3 days of leave plus a public holiday (Guru Nanak Jayanti)
== Next Week ==
- Send updated patch upstream for tcwg-72
- LTO benchmarking with SPEC for correctness on arm and aarch64
- Target hook conversion
The Linaro Toolchain Working Group is pleased to announce the availability
of the Linaro Stable Binary Toolchain GCC 5.2-2015.11 Release Archives.
http://releases.linaro.org/components/toolchain/binaries/5.2-2015.11/
http://releases.linaro.org/components/toolchain/gcc-linaro/5.2-2015.11/
These archives provide cross-toolchain executables (compiler, debugger,
linker, etc.) and shared libraries (libstdc++, libc, etc.) that target ARM
or AArch64 GNU/Linux and bare-metal environments. The cross-toolchain
binaries execute on a Linux or MS Windows (under mingw32) host
operating-system.
Please evaluate this release-candidate for correctness. Linaro will
shortly spin the Linaro GCC 5.2-2015.11 release if this release-candidate
passes stakeholder validation.
For bugs related to this release-candidate please email
linaro-toolchain(a)lists.linaro.org or file a bug at
https://bugs.linaro.org/enter_bug.cgi?product=Linux%20Binary%20toolchain
NEWS
* GCC 5.2 2015.11
The Linaro GCC 5.2 2015.11 binary toolchain release is built from the
Linaro GCC-5.2-2015.11 release source archive. The Linaro GCC-5.2-2015.11
release source archive is derived from the same sources as the Linaro
GCC-5.2-2015.10 snapshot source archive.
* GCC 5.2 2015.11-rc1
The Linaro GCC 5.2 2015.11-rc1 binary toolchain release-candidate is built
from the Linaro GCC-5.2-2015.11 release-candidate source archive. The
Linaro GCC-5.2-2015.11-rc1 release-candidate source archive is derived from
the same sources as the Linaro GCC-5.2-2015.10 snapshot source archive.
--
Ryan S. Arnold
Linaro Toolchain Working Group - Engineering Manager
www.linaro.org
Hello,
I am trying to create gcc 4.9.x toolchains for ARM v7 and v8 based on Linaro's sources. At first Linaro's 4.9-2015.05 binary release looked suitable, but then one of my colleagues noticed that it had an incompatibility with Red Hat Enterprise Linux 6. Linaro has decided not to fix this incompatibility (see https://bugs.linaro.org/show_bug.cgi?id=1869 ).
So, I tried to work around that bug by rebuilding the toolchains myself on RHEL6 using Linaro's new ABE script. I initially tried to recreate the builds by using ABE's --manifest <manifest_file> command line option. I experienced problems with that, though, including it building gcc version 6.x instead of 4.9.x. I eventually gave up on that approach. Instead, I extracted the required branches and revisions from the manifest files and put them into ABE command line options, like this:
abe.sh --target aarch64-elf --build all --parallel --dump --tarball --release fsl-2015.11.16 --set libc=newlib binutils=binutils-gdb.git~linaro_binutils-2_24-branch@a93e252ee5250dba831e54f98336b40c7210dac7 gcc=gcc-linaro-4.9-2015.05 gmp=5.1.3 gdb=binutils-gdb.git~gdb-7.10-branch@ef5fa52ac9ab68b505b52acb2d2068b366ba8bf2 mpfr=3.1.2 mpc=1.0.1 newlib=newlib.git~linaro_newlib-branch@136b66e404df41435bdec4630c0787b0bc7e7580
abe.sh --target aarch64-linux-gnu --build all --parallel --dump --tarball --release fsl-2015.11.16 --set libc=glibc binutils=binutils-gdb.git~linaro_binutils-2_24-branch@a93e252ee5250dba831e54f98336b40c7210dac7 gcc=gcc-linaro-4.9-2015.05 gmp=5.1.3 gdb=binutils-gdb.git~gdb-7.10-branch@ef5fa52ac9ab68b505b52acb2d2068b366ba8bf2 mpfr=3.1.2 mpc=1.0.1 glibc=glibc-linaro-2.20-2014.11.tar.xz
abe.sh --target arm-eabi --build all --parallel --dump --tarball --release fsl-2015.11.16 --set libc=newlib binutils=binutils-gdb.git~linaro_binutils-2_24-branch@a93e252ee5250dba831e54f98336b40c7210dac7 gcc=gcc-linaro-4.9-2015.05 gmp=5.1.3 gdb=binutils-gdb.git~gdb-7.10-branch@ef5fa52ac9ab68b505b52acb2d2068b366ba8bf2 mpfr=3.1.2 mpc=1.0.1 newlib=newlib.git~linaro_newlib-branch@136b66e404df41435bdec4630c0787b0bc7e7580
abe.sh --target arm-linux-gnueabihf --build all --parallel --dump --tarball --release fsl-2015.11.16 --set libc=glibc binutils=binutils-gdb.git~linaro_binutils-2_24-branch@a93e252ee5250dba831e54f98336b40c7210dac7 gcc=gcc-linaro-4.9-2015.05 gmp=5.1.3 gdb=binutils-gdb.git~gdb-7.10-branch@ef5fa52ac9ab68b505b52acb2d2068b366ba8bf2 mpfr=3.1.2 mpc=1.0.1 glibc=glibc-linaro-2.20-2014.11.tar.xz
That worked, and the resulting toolchains ran without error under RHEL6. Note that I deliberately chose to switch to glibc in the *-linux-* toolchains, whereas the manifest files had them using eglibc.
At least one serious problem remained. The toolchains supported different multilibs than previous releases. For example, arm-eabi-gcc reported that it supported only three sets of libraries:
$ arm-eabi-gcc -print-multi-lib
.;
thumb;@mthumb
fpu;@mfloat-abi=hard
Linaro's 2015.05 build of the toolchain gives the same output. However, previous releases of this toolchain supported a much larger set of multilibs. A build from 2014.08 reports:
$ arm-none-eabi-gcc --print-multi-lib
.;
thumb;@mthumb
v7-a;@march=armv7-a
v7ve;@march=armv7ve
v8-a;@march=armv8-a
v7-a/fpv3/softfp;@march=armv7-a@mfpu=vfpv3-d16@mfloat-abi=softfp
v7-a/fpv3/hard;@march=armv7-a@mfpu=vfpv3-d16@mfloat-abi=hard
v7-a/simdv1/softfp;@march=armv7-a@mfpu=neon@mfloat-abi=softfp
v7-a/simdv1/hard;@march=armv7-a@mfpu=neon@mfloat-abi=hard
v7ve/fpv4/softfp;@march=armv7ve@mfpu=vfpv4-d16@mfloat-abi=softfp
v7ve/fpv4/hard;@march=armv7ve@mfpu=vfpv4-d16@mfloat-abi=hard
v7ve/simdvfpv4/softfp;@march=armv7ve@mfpu=neon-vfpv4@mfloat-abi=softfp
v7ve/simdvfpv4/hard;@march=armv7ve@mfpu=neon-vfpv4@mfloat-abi=hard
v8-a/simdv8/softfp;@march=armv8-a@mfpu=neon-fp-armv8@mfloat-abi=softfp
v8-a/simdv8/hard;@march=armv8-a@mfpu=neon-fp-armv8@mfloat-abi=hard
thumb/v7-a;@mthumb@march=armv7-a
thumb/v7ve;@mthumb@march=armv7ve
thumb/v8-a;@mthumb@march=armv8-a
thumb/v7-a/fpv3/softfp;@mthumb@march=armv7-a@mfpu=vfpv3-d16@mfloat-abi=softfp
thumb/v7-a/fpv3/hard;@mthumb@march=armv7-a@mfpu=vfpv3-d16@mfloat-abi=hard
thumb/v7-a/simdv1/softfp;@mthumb@march=armv7-a@mfpu=neon@mfloat-abi=softfp
thumb/v7-a/simdv1/hard;@mthumb@march=armv7-a@mfpu=neon@mfloat-abi=hard
thumb/v7ve/fpv4/softfp;@mthumb@march=armv7ve@mfpu=vfpv4-d16@mfloat-abi=softfp
thumb/v7ve/fpv4/hard;@mthumb@march=armv7ve@mfpu=vfpv4-d16@mfloat-abi=hard
thumb/v7ve/simdvfpv4/softfp;@mthumb@march=armv7ve@mfpu=neon-vfpv4@mfloat-abi=softfp
thumb/v7ve/simdvfpv4/hard;@mthumb@march=armv7ve@mfpu=neon-vfpv4@mfloat-abi=hard
thumb/v8-a/simdv8/softfp;@mthumb@march=armv8-a@mfpu=neon-fp-armv8@mfloat-abi=softfp
thumb/v8-a/simdv8/hard;@mthumb@march=armv8-a@mfpu=neon-fp-armv8@mfloat-abi=hard
I found that the file that encodes this older set of multilib mappings is gcc-linaro-4.9-2015.05/gcc/config/arm/t-aprofile. Based on some comments in gcc-linaro-4.9-2015.05/gcc/config.gcc, I guessed that ABE should have configured gcc with "--with-multilib-list=aprofile", and without "--with-arch=armv7-a" or "--with-fpu=vfpv3-d16". I quickly hacked these changes into abe/config/gcc.conf like this:
diff --git a/config/gcc.conf b/config/gcc.conf
index 19c44ca..4cc5eaf
--- a/config/gcc.conf
+++ b/config/gcc.conf
@@ -111,9 +111,9 @@ if test x"${build}" != x"${target}"; then
default_configure_flags="${default_configure_flags} --with-tune=cortex-a9"
fi
if test x"${override_arch}" = x -a x"${override_cpu}" = x; then
- default_configure_flags="${default_configure_flags} --with-arch=armv7-a"
+ default_configure_flags="${default_configure_flags}"
fi
- default_configure_flags="${default_configure_flags} --enable-threads=no --with-fpu=vfpv3-d16 --enable-multilib --disable-multiarch"
+ default_configure_flags="${default_configure_flags} --enable-threads=no --with-multilib-list=aprofile --enable-multilib --disable-multiarch"
languages="c,c++,lto"
;;
aarch64*-*elf)
After rebuilding the toolchain, I found it had the desired older set of multilibs.
I hope that this mail will help anyone who experiences similar problems. I have filed a bug report for the multilib issue. See https://bugs.linaro.org/show_bug.cgi?id=1920 .
While validating the toolchains, dejagnu reports a few unexpected failures. Does the TCWG publish their validation results anywhere for comparison? That would be very helpful.
Thanks,
Fred Peterson
Freescale Developer Tools
== Progress ==
o Validation and Infra (2/10)
* Some fixes in our release script
* Looked at refactoring our snapshot-publishing job
o Linaro GCC (4/10)
* Start backports for 2015.12
* Tracking dependencies
o Upstream work (1/10)
* Continue on sanitizing gfortran testsuite
o Misc (3/10)
* Various meetings
* Internal support
== Plan ==
o Continue on-going tasks
Controlled image builds - TCWG-360 [2/10]
* A few more test/debug cycles with ci-loop-built image
Jenkins benchmarking job - TCWG-348 [3/10]
* YAML-ised Jenkins job, more test/debug cycles
Juno crashdump [1/10]
* Got a usable dump (via alt-sysrq-c) with latest patches plus some fiddling
SPEC-on-Android [1/10]
* Looked at Qian's work to date, didn't come up with any bright ideas
Misc [3/10]
=Plan=
Review security with shared uinstance/main instance code
Expose more data, benchmarks to bundles
Continue debug/test of Jenkins job
Create bootable image for at least 1 target, or know what the problems are
Write up noise control report (if time)
Set Juno off, try to get a dump of my crash
Probably more support for SPEC-on-Android
=Absences=
'ARM Day' next Monday (30th)
== This week ==
* TCWG-317 - Exploit wide add operations when appropriate for AArch32 (5/10)
- Blocked as I have not yet determined why the pattern fails on big
endian targets
* TCWG-369 - Exploit wide add operations when appropriate for AArch64 (1/10)
- Modified code based on minor code style comments
* TCWG-316 - Exploit vector multiply by scalar instructions (3/10)
- Discovered relevant previous RFC:
https://gcc.gnu.org/ml/gcc/2013-09/msg00061.html
- Coded subset of vector patterns
- Debugging combine phase to determine why patterns are not
being utilized
* Misc (1/10)
- Conference calls
== Next week ==
- TCWG-369 - Submit modified patch upstream for final approval
- TCWG-316 - Determine if rtl patterns can be used by combine
- TCWG-317 - Need feedback
- USA Thanksgiving Holidays (November 26-27)
== This Week ==
* TCWG-72 (2/10)
- Rebased patch
- Fixed ICE for x86-gcc with -m32 following Jim's suggestions.
* Target hook conversion (6/10)
- Converted ASM_FORMAT_PRIVATE_NAME, ASM_LABEL_OUTPUT_LABEL,
ASM_OUTPUT_LABELREF to hook
* TCWG-319 (1/10)
- Benchmark jobs for fp in progress on a53, a57.
* Misc (1/10)
- Meetings
== Next Week ==
- Test and send updated patch upstream for tcwg-72
- TCWG-319 benchmarking on cortex-a15
- Holidays from 23-25th November (Mon-Wed).
# Progress #
* TCWG-332, done. [1/10]
Fix GDB bug on stepping over breakpoint on ARM. Patch is pushed in.
* TCWG-423, patches are posted. [5/10].
Support gnu vector in inferior call in AArch64 GDB.
Also correctly handle HVA (homogeneous vector aggregate) in inferior
call.
* TCWG-433, done. [2/10]
All memory issues found by -fsanitize=address in GDB are fixed.
* TCWG-447, done. [1/10]
Fix GDB mainline build warnings and errors in C++ mode on ARM and
AArch64.
* Discussed the approach of building GDB in C++. From my side, I still
need to test GDB built in C++ on both ARM and AArch64. [1/10]
# Plan #
* Understand ST's jtag probe and help them to make use of multi-arch
in GDB.
* Fix GDB internal error in gdb.thread/watchpoint-fork.exp on AArch64.
* TCWG-156, GDB test parity between AArch64 and X86_64.
--
Yao
== Progress ==
* Validation (6/10)
- a few improvements in the validations using the ST compute farm
- thinking about appropriate ways of sharing validation
reports with the GCC community without flooding gcc-testresults
- moved results comparison scripts to a dedicated repo
and updated Jenkins jobs accordingly
* GCC (1/10)
- bug #1869 / glibc dependency on RHEL6
  Proof of concept to force use of the old memcpy symbol version
  (a rough sketch of the idea follows this Progress list), but it
  will be much safer to build the toolchain in a suitable container
  with the right distro
* Misc (conf calls, meetings, emails, ...) (3/10)
- patches and backports reviews
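Side note on the memcpy proof of concept above: the usual trick (an
assumption here, not necessarily what was actually done) is a .symver
directive that pins memcpy references to the pre-2.14 symbol version so
that binaries built on a newer host still run on RHEL6. A minimal sketch,
using the x86_64-specific GLIBC_2.2.5 version tag:

    /* Hypothetical sketch only: bind memcpy references to the old
       symbol version.  The GLIBC_2.2.5 tag applies to x86_64 hosts;
       other hosts would need a different version tag. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    #include <string.h>

    void copy_example(char *dst, const char *src, unsigned long n)
    {
        memcpy(dst, src, n);   /* now resolves to the old versioned symbol */
    }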
== Next ==
* Validation
- continue preparation of switch, as dev-01 is now back
- improve reporting
* GCC:
- check Neon tests cleanup
- bug #1869
- look at how to send valuable reports to gcc-regression
Hi,
We're currently running into issues with the OE builds due to OE-core
having moved to 2.22. So what's the plan for glibc-linaro 2.22?
--
Koen Kooi
Builds and Baselines | Release Manager
Linaro.org | Open source software for ARM SoCs
Hi,
This question has arisen in the ODP project and the thought is that a 'best
practices' answer would be more likely to be found on this list.
We have a component that wants to make use of specialized instructions for
performing CRC and/or AES computations. What is the recommended way for an
application to determine whether such instructions are available in the
toolchain, and whether the user has overridden their use?
Thanks for any insight you can provide.
Bill
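One common approach (a sketch, not necessarily the recommended answer) is
to key off the ACLE feature-test macros: the compiler defines
__ARM_FEATURE_CRC32 and __ARM_FEATURE_CRYPTO only when the selected
-march/-mcpu/-mfpu options make the corresponding instructions available,
so the macros also reflect a user override on the command line. For
example:

    #include <stdint.h>

    #if defined(__ARM_FEATURE_CRC32)
    #include <arm_acle.h>                 /* __crc32b() and friends */
    #endif

    uint32_t crc32_update(uint32_t crc, uint8_t byte)
    {
    #if defined(__ARM_FEATURE_CRC32)
        return __crc32b(crc, byte);       /* hardware CRC32 instruction */
    #else
        /* Portable fallback: bitwise CRC-32, reflected polynomial. */
        crc ^= byte;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
        return crc;
    #endif
    }

An analogous __ARM_FEATURE_CRYPTO guard would cover use of the AES
intrinsics from arm_neon.h.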
I think there are many issues with binary compatibility beyond
function inlining. An ODP application cannot expect all ODP
implementations to support the same number of ODP queues or
classification rules or even which classification terms (fields) are
supported (efficiently/in HW) etc. Is there some kind of lowest common
denominator an application should expect? Do we want to make
guarantees of an ODP implementation stricter? What are the
consequences of such strict functional guarantees?
I think an application that requires binary compatibility over ARMv8.1
platforms should compile and link against a specific ODP SW
implementation (possibly with some well-defined HW offloads where the
underlying platform can provide the relevant drivers). I.e. more of a
(user-space) Linux architecture than standard ODP (as influenced by
OpenGL). The important binary interfaces then become the interfaces
to these offloads/drivers.
On 16 November 2015 at 14:23, Nicolas Morey-Chaisemartin
<nmorey(a)kalray.eu> wrote:
>
>
> On 11/11/2015 09:45 AM, Savolainen, Petri (Nokia - FI/Espoo) wrote:
>>
>>> -----Original Message-----
>>> From: lng-odp [mailto:lng-odp-bounces@lists.linaro.org] On Behalf Of
>>> EXT Nicolas Morey-Chaisemartin
>>> Sent: Tuesday, November 10, 2015 5:13 PM
>>> To: Zoltan Kiss; linaro-toolchain(a)lists.linaro.org
>>> Cc: lng-odp
>>> Subject: Re: [lng-odp] Runtime inlining
>>>
>>> As I said in the call last week, the problem is wider than that.
>>>
>>> ODP specifies a lot of types but not their sizes, a lot of
>>> enums/defines (things like ODP_PKTIO_INVALID) but not their value
>>> either.
>>> For our port a lot of those values were changed for
>>> performance/implementation reason. So I'm not even compatible between
>>> one version of our ODP port and another one.
>>>
>>> The only way I can see to solve this is for ODP to fix the size of all
>>> these types.
>>> Default/Invalid values are not that easy, as a pointer would have a
>>> completely different behaviour from structs/bitfields
>>>
>>> Nicolas
>>>
>> Type sizes do not need to be fixed in general, but only when an application is built for binary compatibility (the use case we are talking about here). Binary compatibility and thus the fixed type sizes are defined per ISA.
>>
>> We can e.g. define a configure target (for our reference implementation == linux-generic) "--binary-compatible=armv8.x" or "--binary-compatible=x86_64". When you build your application with that option, "platform dependent" types and constants would be fixed to pre-defined values specified in (new) ODP API arch files.
>>
>> So instead of building against odp/platform/linux-generic/include/odp/plat/queue_types.h ...
>>
>> typedef ODP_HANDLE_T(odp_queue_t);
>> #define ODP_QUEUE_INVALID _odp_cast_scalar(odp_queue_t, 0)
>> #define ODP_QUEUE_NAME_LEN 32
>>
>>
>> ... you'd build against odp/arch/armv8.x/include/odp/queue_types.h ...
>>
>> typedef uintptr_t odp_queue_t;
>> #define ODP_QUEUE_INVALID ((uintptr_t)0)
>> #define ODP_QUEUE_NAME_LEN 64
>>
>>
>> ... or odp/arch/x86_64/include/odp/queue_types.h
>>
>> typedef uint64_t odp_queue_t;
>> #define ODP_QUEUE_INVALID ((uint64_t)0xffffffffffffffff)
>> #define ODP_QUEUE_NAME_LEN 32
>>
>>
>> For highest performance on a fixed target platform, you'd still build against the platform directly
>>
>> odp/platform/<soc_vendor_xyz>/include/odp/plat/queue_types.h
>>
>> typedef xyz_queue_desc_t * odp_queue_t;
>> #define ODP_QUEUE_INVALID ((xyz_queue_desc_t *)0xdeadbeef)
>> #define ODP_QUEUE_NAME_LEN 20
>>
>>
>> -Petri
>>
>
> It still means that you need to enforce a type for all ODP implementation on a given arch. Which could be problematic.
> As a precise example: the way handles are used now for odp_packet_t brings some useful features for checks and memory savings, but performance-wise they are a "disaster". One of the first things I did was to switch them to pointers. And if I wanted a high perf linux x86_64 implementation, I'd probably do the same.
>
> Nicolas
> _______________________________________________
> lng-odp mailing list
> lng-odp(a)lists.linaro.org
> https://lists.linaro.org/mailman/listinfo/lng-odp
== Progress ==
LLDB development
-- Root Google Nexus devices and read debug module configuration with
kernel module [TCWG-429] [7/10]
-- Figure out steps to unlock and root Nexus S
-- Figure out steps to build kernel and kernel module for Nexus S
-- Tried out lldb watchpoints with custom kernel on Nexus S
-- Tried out reaching debug coprocessors without ptrace using a kernel
module (a rough sketch follows this report's progress items).
-- Identify mixed-mode debugging problems (ARM & Thumb) [TCWG-229] [2/10]
-- Ongoing: initial investigation and identifying code areas needing changes
Miscellaneous [1/10]
-- Meetings, emails, discussions etc.
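For illustration, a minimal sketch of the kind of kernel-module approach
mentioned above: reading an ARMv7 debug register (DBGDIDR, via CP14)
directly from kernel context instead of going through ptrace. This is an
assumption about the idea, not the actual module, and it ignores the OS
lock, power-down handling and error paths:

    #include <linux/module.h>
    #include <linux/kernel.h>

    static int __init dbg_probe_init(void)
    {
        unsigned int dbgdidr;

        /* MRC p14, 0, <Rt>, c0, c0, 0 : DBGDIDR, the debug ID register */
        asm volatile("mrc p14, 0, %0, c0, c0, 0" : "=r" (dbgdidr));
        pr_info("DBGDIDR = 0x%08x\n", dbgdidr);
        return 0;
    }

    static void __exit dbg_probe_exit(void)
    {
    }

    module_init(dbg_probe_init);
    module_exit(dbg_probe_exit);
    MODULE_LICENSE("GPL");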
== Plan ==
-- Root Google Nexus devices and read debug module configuration with
kernel module [TCWG-429]
-- Complete app and kernel module to read debug coprocessor registers.
-- Try them out on remaining Android devices.
-- Identify mix-mode debugging problems (ARM & Thumb) [TCWG-229]
-- Further investigation and testing of a mixed-mode application.
== Progress ==
* Buildbots (4/10)
- Found culprit for self-hosting breakages
- The bot didn't get it right because of dirty builds
- Moving all self-hosting bots to clean builds (~3h)
- More work on MIPS patch breaking self-hosting
- Several breakages and bisections
- Adding first cloud (Scaleway) buildbot to local master
- No NEON, so we can't replace the Chromebooks
* Infrastructure (4/10)
- Power cut in Cambridge Lab, no generator yet
- Chromebooks failed at the time of the cut, even with the UPS
batteries still holding. I'm guessing the power regulator
depends on the internal battery to work (and we removed those)
- Bringing all bots up, etc.
- Setting up a HiKey/AMD for benchmarks (APMs are too different)
- Running EEMBC and SPEC on AMD
* Background (2/10)
- Code review, meetings, discussions, general support, etc.
- Upstreaming -meabi, which may fix builds of kernel, android, bsd
- Compiling aarch64-linux-gnu-gcc by hand because Arch pkg didn't work
o 1 day off (2/10)
== Progress ==
o Linaro GCC (6/10)
* FSF branch merge into the Linaro GCC 5 branch
* Troubleshot various regressions after the merge
* Delivered GCC 5.2 2015.11 snapshot
o Upstream work (1/10)
* Sanitizing gfortran testsuite
o Misc (1/10)
* Various meetings
== Plan ==
o Continue on sanitizing testsuite
o Backports, infra, ...
Implement LAVA jobs for microinstance - TCWG-432 [6/10]
* Refactoring to permit sharing of code between uinstance & main
instance, as far as possible
* Further refactoring for sane submission of bundles without inserting
LAVA assumptions in the wrong places
* Tested as far as possible in main instance, using light hacks and fakebench
Jenkins benchmarking job - TCWG-348 [1/10]
* Converted pbl hacks into a sane patch for yaml-to-json.py
Controlled image builds - TCWG-360 [1/10]
* Submitted aarch64 filesystem build for review
* Generated armhf and amd64 filesystems
* Started learning how to generate hwpack
Misc [2/10]
=Plan=
Review security with shared uinstance/main instance code
Expose more data, benchmarks to bundles
Create YAML definition for Jenkins benchmarking job
Generate (controlled) hwpack for at least one target, or know what the
problems are
Write up noise control report (if time)
Have another go at crashdump (if time, if new kexec patches)
* TCWG-72 (3/10)
- divmod transform approved by Richard
- builds cleanly on arm-linux-gnueabihf, aarch64-linux-gnu
- Investigating segfault with bid64_div.c;
happens when mode == DImode and libval_mode == TImode
- Found another segfault on x86 with TImode; on arm TImode is not
supported and the compiler aborts. Perhaps we should not do the
transform when mode is TImode?
- Had a look at expand_binop_twoval_libfunc().
Wrote a similar function to obtain both results, but this resulted
in an infinite loop in emit_libcall_block_1
- Strangely, the bug is reproducible only during the build and doesn't
trigger when compiling the preprocessed version of bid64_div.c
(passing the same set of options).
- waiting for upstream comments
* TCWG-319 (1/10)
- Submitted jobs for fp benchmark on a53, a57
* Misc:
- PR66214 appears to have gone away (fixed or gone latent); it was
blocking the firefox LTO build with trunk
- PR65837 still appears to be present after r230327
* Public Holidays (6/10)
- Diwali festival
== Next Week ==
- Continue with TCWG-72, TCWG-319 benchmarking, target hook conversion
- Run SPEC2k6 with LTO
== Progress ==
- Widening pass (TCWG-547) - 6/10
* Bootstrapped latest patch on ppc64-linux-gnu, aarch64-linux-gnu and
x86_64-linux-gnu.
* Regression testing on ppc64-linux-gnu, aarch64-linux-gnu,
arm64-linux-gnu and x86_64-linux-gnu.
* Fixed all of the execution issues
* Posted updated patch to the list
- Misc (4/10)
* Linaro bug 1900
* Continued looking at the LuaJIT code base
* gcc/bug list
== Plan ==
* bug 1900
* Look at implementing LuaJIT for aarch64
* LTO