On Sat, 1 Aug 2020 at 19:40, Sumit Garg via lists.yoctoproject.org
<sumit.garg=linaro.org(a)lists.yoctoproject.org> wrote:
>
> On Sat, 1 Aug 2020 at 14:57, Ryan Harkin <ryan.harkin(a)linaro.org> wrote:
> >
> >
> >
> > On Sat, 1 Aug 2020 at 10:09, Ryan Harkin <ryan.harkin(a)linaro.org> wrote:
> >>
> >> Hi Khem,
> >>
> >> On Fri, 31 Jul 2020, 21:58 Khem Raj, <raj.khem(a)gmail.com> wrote:
> >>>
> >>> On Fri, Jul 31, 2020 at 8:35 AM Ryan Harkin <ryan.harkin(a)linaro.org> wrote:
> >>> >
> >>> > Hello,
> >>> >
> >>> > I'm migrating from Warrior to Dunfell and I'm getting a curious build failure in gcc-sanitizers.
> >>> >
> >>> > Here's the full gory detail:
> >>> > https://pastebin.ubuntu.com/p/nh4cDKMvgS/
> >>> >
> >>> > However, the main error is this:
> >>> >
> >>> > | In file included from ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
> >>> > | ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72: error: size of array 'assertion_failed__1152' is negative
> >>> > | typedef char IMPL_PASTE(assertion_failed_##_, line)[2*(int)(pred)-1]
> >>> >
> >>> > I have no idea where to begin with this. I don't even know why gcc-sanitizers is included in the build, what it does, or why I need it. I'm building an image with dev packages and gcc, so I guess that's why.
> >>> >
> >>> > I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc to null out the macros and that builds fine. I'm sure it won't work, should someone want to use it, mind you.
> >>> >
> >>> > Is there something obvious that I should be doing as part of a Warrior -> Dunfell migration to get this to work?
> >>> >
> >>> > note: Warrior used meta-linaro-toolchain and for Dunfell, it's moved to meta-arm-toolchain.
> >>> >
> >>>
> >>> Is GCC 8.3 the latest for Linaro?
> >>
> >>
> >> I assume so. I haven't attempted to change the default.
> >
> >
> > I'm sorry, that's incorrect: local.conf has an override to specify 8.3.
> > I've just removed it and now it's using 9.3. And it's building fine.
> >
It's using GCC 9.3 from OE-core. If you want to use the Arm toolchain
instead, you need to override the default OE-core GCC version with the Arm
toolchain GCC version:
GCCVERSION = "arm-9.2"
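For example, in local.conf (a minimal sketch; it assumes the meta-arm-toolchain
layer is already in bblayers.conf; SDKGCCVERSION should follow GCCVERSION by
default and is shown only to be explicit):

  GCCVERSION = "arm-9.2"
  SDKGCCVERSION = "arm-9.2"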
-Sumit
> > Sumit, do you know if there's a reason for using 9.2 in RPB instead of 9.3?
> >
>
> Arm GCC 9.3 toolchain isn't released yet (see here [1]).
>
> [1] https://developer.arm.com/tools-and-software/open-source-software/developer…
>
> -Sumit
>
> >>
> >>>
> >>> > Regards,
> >>> > Ryan.
> >>> >
Hello!
I mentioned a few weeks back that I had some doubts about the difference
between this override:
SRCREV_kernel = "5821a5593fa9f28eb6fcc95c35d00454d9bb8624"
and this one:
SRCREV_kernel_juno = "5821a5593fa9f28eb6fcc95c35d00454d9bb8624"
The following is the definition in the kernel recipe itself:
SRCREV_kernel = "3d77e6a8804abcc0504c904bd6e5cdf3a5cf8162"
Turns out that with _only_ the first override in local.conf (the one without
the MACHINE suffix), the kernel uses the right SRCREV but perf uses the one
from the recipe; I have to include the second override (with the MACHINE
suffix) for perf to pick up the right SRCREV as well.
Does anyone have a pointer that could shed more light on that subject?
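A quick way to check which value each recipe actually ends up with is to dump
the datastore for both (a debugging sketch; adjust the recipe names to whatever
provides your kernel and perf):

  bitbake -e virtual/kernel | grep '^SRCREV_kernel='
  bitbake -e perf | grep '^SRCREV_kernel='

Dropping the grep and searching the full -e output also shows the variable
history as comments, i.e. which file set the value last and in what order.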
Thanks and greetings!
Daniel Díaz
daniel.diaz(a)linaro.org
On Fri, 10 Apr 2020 at 23:11, Jon Mason <jdmason(a)kudzu.us> wrote:
>
> On Fri, Apr 10, 2020 at 06:05:57PM +0200, Nicolas Dechesne wrote:
> > hi there,
> >
> > over the last few months, we've transitioned the gcc-arm toolchain from
> > meta-linaro-toolchain to meta-arm-toolchain on master. Since we are
> > approaching the next YP release, I would like to remove/clean up the
> > recipes in meta-linaro-toolchain before we create the dunfell branch.
> >
> > as of now, meta-linaro-toolchain has the following recipes:
> > gcc_arm-8.2.bb
> > gcc_arm-8.3.bb
> > gcc_arm-9.2.bb
> > gcc_linaro-7.2.bb
> >
> > including all their usual dependencies, and the 'external' toolchain
> > recipes.
> >
> > meta-arm-toolchain has
> > gcc_arm-8.2.bb
> > gcc_arm-8.3.bb
> > gcc_arm-9.2.bb
> >
> > and it's strictly the same metadata, as of now.
> >
> > As such, I am proposing to remove all recipes from meta-linaro-toolchain
> > master branch, and require every user to transition to meta-arm.
> >
> > it will effectively become an empty layer, so I have 2 options:
> > 1. keep an empty layer with conf/layer.conf
> > 2. remove the layer completely (i.e. remove the meta-linaro-toolchain folder)
> >
> > I am not sure which one is better. #2 will generate an obvious error
> > message for anyone still using the layer. Maybe that would be a strong signal.
> >
> > I will default to #2 right before we create the dunfell branch (in a couple
> > of weeks max), unless someone speaks up before then!
>
> I agree with you that #2 is the correct choice (in case you were
> wanting a secondary opinion).
>
+1
-Sumit
> Thanks,
> Jon
>
> >
> > Nothing will change of course for our stable branches.
> >
> > cheers
> > nico
>
> >
>
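For anyone who still has the old layer in their build, the switch is
essentially a bblayers.conf edit (a rough sketch; the paths are illustrative
and depend on your checkout layout):

  # conf/bblayers.conf: drop the meta-linaro/meta-linaro-toolchain entry and add
  BBLAYERS += "${TOPDIR}/../layers/meta-arm/meta-arm-toolchain"

In a repo/manifest based setup such as OE-RPB, the same change is made in the
manifest instead, so that meta-arm is fetched in place of meta-linaro.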
Hi all,
Well, I don't even know where to start debugging this, so that isn't very
good.
I had working OE-RPB builds for the WaRP7 board (MACHINE=imx7s-warp), then
repo locked out the relative symlinks. Nico fixed that in OE-RPB, and I
rebased to his fix.
But it didn't fix my builds.
My manifest repo is here:
https://git.linaro.org/people/ryan.harkin/oe-rpb-manifest.git/log/?h=warrior
My meta-layer is here:
https://git.linaro.org/people/ryan.harkin/meta-warp7.git/log/?h=warrior
As you can see, they're both simple, just to get the board added to OE-RPB.
I now have two environments: my original warrior workspace that still
builds, so long as I don't try to "repo sync", and my new workspace from a
fresh clone that gives me this error:
ERROR: OE-core's config sanity checker detected a potential misconfiguration.
Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
Following is the list of potential problems / advisories:

MACHINE=imx7s-warp is invalid. Please set a valid MACHINE in your local.conf, environment or other configuration file.
I'm guessing the problem is in meta-freescale-3rdparty where imx7s-warp is
defined, not my simple layers, but I have no clue where to start debugging
this.
Any advice on where to start??
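A few quick checks that might narrow it down (a sketch; the "layers" directory
name is illustrative and depends on your checkout layout):

  bitbake-layers show-layers
  # is meta-freescale-3rdparty actually listed in the active configuration?

  find layers -path '*meta-freescale-3rdparty/conf/machine/imx7s-warp.conf'
  # does the machine config still exist in the checkout?

  grep LAYERSERIES_COMPAT layers/meta-freescale-3rdparty/conf/layer.conf
  # does the layer declare compatibility with the warrior series?

The "MACHINE=... is invalid" message just means conf/machine/imx7s-warp.conf
could not be found anywhere in BBPATH, so a layer that silently dropped out of
bblayers.conf (for example because its path changed in the manifest) would
produce exactly this error.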
Here's a CI job that builds and fails as described above:
https://ci.linaro.org/job/warp7-openembedded-warrior/161/DISTRO=rpb,label=d…
Here's the previous job from before the changes that succeeded:
https://ci.linaro.org/job/warp7-openembedded-warrior/160/DISTRO=rpb,label=d…
Thanks,
Ryan.
hi Jassi,
On Wed, Mar 4, 2020 at 1:34 AM Jassi Brar <jassisinghbrar(a)gmail.com> wrote:
> On Tue, Mar 3, 2020 at 1:15 AM Nicolas Dechesne
> <nicolas.dechesne(a)linaro.org> wrote:
> >
> > hi Jassi,
> >
> > On Tue, Mar 3, 2020 at 1:06 AM Jassi Brar <jassisinghbrar(a)gmail.com>
> wrote:
> >>
> >> Hi Team,
> >>
> >> I understand ubuntu-19.10 may be too modern for Sumo builds, but I cannot
> >> even build the RPB distro with Zeus. Poky builds fine, though.
> >
> >
> > Zeus was released at the same time as 19.10, so 19.10 is not marked as a
> supported distro in zeus, see
> >
> http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta-poky/conf/distro/p…
> >
> I see now, thanks.
>
> > Can you please provide build log that is showing errors with RPB/zeus on
> 19.10?
> >
> I use the 'zeus' branch of oe-rpb-manifest.git and build
> rpb-console-image
>
I tried to reproduce this. I am using the qcom/zeus branch in the manifest,
which is pretty much the same as zeus (it only shows the QCOM BSP), and I am
using a 19.10 Docker build environment. For the record, my Dockerfile is here,
so you can see which packages are installed on my host:
https://github.com/ndechesne/docker-me/blob/eoan/Dockerfile
> Build Configuration:
> BB_VERSION = "1.44.0"
> BUILD_SYS = "x86_64-linux"
> NATIVELSBSTRING = "ubuntu-19.10"
> TARGET_SYS = "aarch64-linaro-linux"
> MACHINE = "hikey960"
> DISTRO = "rpb"
> DISTRO_VERSION = "3.0+linaro"
> TUNE_FEATURES = "aarch64 cortexa53 crc"
> TARGET_FPU = ""
> meta-rpb = "HEAD:c54331aacc7cd1e40b5e32fd1a7b3484904fbcb0"
>
My build config is:
Build Configuration:
BB_VERSION = "1.44.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "ubuntu-19.10"
TARGET_SYS = "aarch64-linaro-linux"
MACHINE = "dragonboard-410c"
DISTRO = "rpb"
DISTRO_VERSION = "3.0+linaro"
TUNE_FEATURES = "aarch64 cortexa53 crc"
TARGET_FPU = ""
meta-rpb = "HEAD:c54331aacc7cd1e40b5e32fd1a7b3484904fbcb0"
meta-oe
meta-gnome
meta-xfce
meta-initramfs
meta-multimedia
meta-networking
meta-webserver
meta-filesystems
meta-perl
meta-python = "HEAD:e855ecc6d35677e79780adc57b2552213c995731"
meta-rust = "HEAD:0f950f5e333a1c8999320bf18232144f3dd9c80e"
meta-browser = "HEAD:7378141606822ef0bb985aaa00e442c9ea806429"
meta-qt5 = "HEAD:432ad2aa6c3a13253fefc909faba368851d21fb1"
meta-virtualization = "HEAD:f4262ab75d36a06c528cc1630b48b817fb0acf8f"
meta-clang = "HEAD:0c393398a91713a108f319ac75337c02b7ebeaa7"
meta-selinux = "HEAD:44d760413920ba440439b8bc7c2a71ca26cd7a2d"
meta-96boards = "HEAD:a96a1dd635f32d8eb1d644db51c0e0d8297060d8"
meta-qcom = "HEAD:3e5569032856f4f1ab98687257dd0049342473c5"
meta-linaro
meta-linaro-toolchain
meta-optee = "HEAD:d9accce97e73d0be0037d22a5c155efddd216301"
meta = "HEAD:754d0ae5a960056468cdf50e5965a4c22515f8f9"
> 1) autoconf-native/2.69-r11/build/man/Makefile fails with
>    "help2man: command not found".
>    Hacking it to call /usr/bin/help2man instead of help2man makes it
>    work.
>
> 2) libtool-native_2.4.6.bb:do_install fails with
>    "func_fatal_help: command not found",
>    which does sound like a host setup issue and not RPB specific.
>    However, I can build Poky/Zeus for the RPi4 just fine (following another
>    how-to).
>
Both autoconf-native and libtool-native build fine for me, so I cannot
reproduce your issues. Note that the help2man package is not installed on my
host.
I can build core-image-minimal just fine, however rpb-console-image fails
to build, because of:
| Traceback (most recent call last):
|   File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
|     "__main__", mod_spec)
|   File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
|     exec(code, run_globals)
|   File "/home/nicolas.dechesne/work/oe-rpb-qcom-zeus/build-eoan/tmp-rpb-glibc/work/x86_64-linux/icu-native/64.2-r0/icu/source/data/buildtool/__main__.py", line 19, in <module>
|     import BUILDRULES
|   File "/home/nicolas.dechesne/work/oe-rpb-qcom-zeus/build-eoan/tmp-rpb-glibc/work/x86_64-linux/icu-native/64.2-r0/icu/source/test/testdata/BUILDRULES.py", line 4, in <module>
|     from distutils.sysconfig import parse_makefile
| ModuleNotFoundError: No module named 'distutils.sysconfig'
| configure: error: Python failed to run; see above error.
| WARNING: /home/nicolas.dechesne/work/oe-rpb-qcom-zeus/build-eoan/tmp-rpb-glibc/work/x86_64-linux/icu-native/64.2-r0/temp/run.do_configure.13974:1 exit 1 from 'exit 1'
|
ERROR: Task (virtual:native:/home/nicolas.dechesne/work/oe-rpb-qcom-zeus/build-eoan/conf/../../layers/openembedded-core/meta/recipes-support/icu/icu_64.2.bb:do_configure) failed with exit code '1'
which is yet another error...
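That last one looks like the host python3 is missing distutils, since
icu-native's configure is running the host interpreter (/usr/lib/python3.7).
On Debian/Ubuntu, distutils is packaged separately, so something like the
following should get past it (an assumption on my part, not verified on 19.10):

  sudo apt-get install python3-distutils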
> Thanks!
>