Hi folks,
It's probably worth sharing something I found yesterday, running
Debian Wheezy binaries on a couple of different ARMv8 machines...
libgc embeds a copy of libatomic-ops, and until very recently
libatomic-ops continued to use (deprecated) CP15 barriers [1] when
building for ARMv7, instead of the recommended DMB instruction. On v7
machines that's been working fine, but v8 does not support these
old-style barriers at all. I found this trying to run w3m, which
failed immediately with "Illegal instruction". There's ongoing
discussion about a kernel patch for arm64 to catch this exception (and
others) [2], but it hasn't gone upstream yet. Other packages that I
can see using libgc on Debian are:
asymptote
chase
debfoster
ecl
fauhdlc
gcc-3.3
goo
guile-2.0
inkscape
kaya
neko
parser
stalin
synopsis
w3m
There may be quite a few more using older versions of libatomic-ops
too - be warned. :-(
[1] e.g. mcr 15, 0, r3, cr7, cr10, {5}
[2] http://comments.gmane.org/gmane.linux.ports.arm.kernel/361430
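For anyone wanting to check whether a particular binary is affected, here's a rough sketch. The grep pattern matches the objdump spellings of the deprecated CP15 barrier encodings; treat it as a heuristic, not a guarantee:

```shell
#!/bin/sh
# Heuristic scan of a disassembly listing for deprecated CP15 barriers:
#   mcr p15, 0, Rt, c7, c10, 5  (CP15DMB)
#   mcr p15, 0, Rt, c7, c10, 4  (CP15DSB)
#   mcr p15, 0, Rt, c7, c5,  4  (CP15ISB)
# Produce the listing first, e.g.: objdump -d some-armv7-binary > dump.txt
scan_cp15_barriers() {
    grep -E 'mcr[[:space:]]+p?15, 0, r[0-9]+, cr7, (cr10, [{][45][}]|cr5, [{]4[}])' "$1"
}
```

e.g. `scan_cp15_barriers dump.txt` prints any matching lines; a non-empty result means the binary likely contains the old-style barriers and will trap on v8.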
Cheers,
--
Steve McIntyre steve.mcintyre(a)linaro.org
<http://www.linaro.org/> Linaro.org | Open source software for ARM SoCs
Hi,
I've just invited a bunch of Linaro Connect attendees to a
cross-distribution meeting at the next Connect. Sadly it's not
officially on the schedule, so there is no remote participation
this time. However, you still have a chance to reply here and add items
to our agenda :)
Draft agenda:
- Review status of various distributions' ARMv7 and ARMv8 support
- Discuss boot environment standardization (U-Boot/UEFI/GRUB..)
- uEnv.txt
- legacy platforms
- Installers vs pre-built images
- What remaining OSS software needs to be ported to ARMv8
- Identifying common pain points Linaro could solve
Also, if you are interested in the session and didn't get an invite
from me - just mail me and I'll add you to the meeting in Zerista - or
just pop into the Platform hacking room on Tuesday. The meeting might
move somewhere else, but people in the room will be able to give
directions to the new location.
Riku
ARM VM System Specification
===========================
Goal
----
The goal of this spec is to allow suitably-built OS images to run on
all ARM virtualization solutions, such as KVM or Xen.
Recommendations in this spec are valid for aarch32 and aarch64 alike, and
they aim to be hypervisor agnostic.
Note that simply adhering to the SBSA [2] is not a valid approach, for
example because the SBSA mandates EL2, which will not be available for
VMs. Further, this spec also covers the aarch32 execution mode, not
covered in the SBSA.
Image format
------------
The image format, as presented to the VM, needs to be well-defined in
order for prepared disk images to be bootable across various
virtualization implementations.
The raw disk format as presented to the VM must be partitioned with a
GUID Partition Table (GPT). The bootable software must be placed in the
EFI System Partition (ESP), using the UEFI removable media path, and
must be an EFI application complying to the UEFI Specification 2.4
Revision A [6].
The ESP partition's GPT entry's partition type GUID must be
C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
state, as specified in Sections 3.3 (Boot Option Variables Default Boot
Behavior) and 3.4.1.1 (Removable Media Boot Behavior) in [6].
This ensures that tools for both Xen and KVM can load a binary UEFI
firmware which can read and boot the EFI application in the disk image.
A typical scenario will be GRUB2 packaged as an EFI application, which
mounts the system boot partition and boots Linux.
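To make the removable media path requirement concrete, here is a minimal sketch for staging the ESP contents. The partitioning and FAT formatting steps are left out, and the EFI application source (e.g. a GRUB2 EFI build) is an assumption:

```shell
#!/bin/sh
# Sketch: stage the UEFI removable media layout inside a mounted ESP.
# $1 = mount point of the ESP, $2 = execution state (aarch32 | aarch64)
stage_esp() {
    esp=$1
    case "$2" in
        aarch32) app=BOOTARM.EFI ;;
        aarch64) app=BOOTAA64.EFI ;;
        *) echo "unknown execution state: $2" >&2; return 1 ;;
    esac
    mkdir -p "$esp/EFI/BOOT"
    # Copy your EFI application (e.g. a GRUB2 EFI build) to the path
    # printed below.
    printf '%s\n' "$esp/EFI/BOOT/$app"
}
```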
Virtual Firmware
----------------
The VM system must be UEFI compliant in order to be able to boot the EFI
application in the ESP. It is recommended that this is achieved by
loading a UEFI binary as the first software executed by the VM, which
then executes the EFI application. The UEFI implementation should be
compliant with UEFI Specification 2.4 Revision A [6] or later.
This document strongly recommends that the VM implementation supports
persistent environment storage for the virtual firmware, in order to
support likely use cases such as adding additional disk images to a VM
or running installers to perform upgrades.
This document strongly recommends that VM implementations implement
persistent variable storage for their UEFI implementation. Persistent
variable storage shall be a property of a VM instance, but shall not be
stored as part of a portable disk image. Portable disk images shall
conform to the UEFI removable disk requirements from the UEFI spec and
cannot rely on a pre-configured UEFI environment.
The binary UEFI firmware implementation should not be distributed as
part of the VM image, but is specific to the VM implementation.
Hardware Description
--------------------
The VM system must be UEFI compliant and therefore the UEFI system table
will provide a means to access hardware description data.
The VM implementation must provide, through its UEFI implementation, a
complete FDT which describes the entire VM system and will boot
mainline kernels driven by device tree alone.
For more information about the arm and arm64 boot conventions, see
Documentation/arm/Booting and Documentation/arm64/booting.txt in the
Linux kernel source tree.
For more information about UEFI booting, see [4] and [5].
VM Platform
-----------
The specification does not mandate any specific memory map. The guest
OS must be able to enumerate all processing elements, devices, and
memory through HW description data (FDT) or a bus-specific
mechanism such as PCI.
If aarch64 physical CPUs implement support for the aarch32 execution
state in EL1 and EL0 execution, it is recommended that the VM
implementation supports booting the VM at EL1 in both aarch32 and
aarch64 execution states.
The virtual hardware platform must provide a number of mandatory
peripherals:
Serial console: The platform should provide a console,
based on an emulated pl011, a virtio-console, or a Xen PV console.
An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer. GICv2
limits the number of virtual CPUs to 8 cores; newer GIC versions
remove this limitation.
The ARM virtual timer and counter should be available to the VM as
per the ARM Generic Timers specification in the ARM ARM [1].
It is strongly recommended that the VM implementation provides a
hotpluggable bus to support hotplug of at least block and network
devices. Suitable buses include a virtual PCIe bus and the Xen PV bus.
For the VM image to be compliant with this spec, the following applies
for the guest OS in the VM image:
The guest OS must include support for pl011 UART, virtio-console, and
the Xen PV console.
The guest OS must include support for GICv2 and any available newer
version of the GIC architecture to maintain compatibility with older
VM implementations.
It is strongly recommended to include support for all available
(block, network, console, balloon) virtio-pci, virtio-mmio, and Xen PV
drivers in the guest OS kernel or initial ramdisk.
Other common peripherals for block devices, networking, and more can
(and typically will) be provided, but OS software written and compiled
to run on ARM VMs cannot make any assumptions about which variations
of these should exist or which implementation they use (e.g. VirtIO or
Xen PV). See "Hardware Description" above.
Changes from previous versions of this RFC
------------------------------------------
Changes v1-v2:
- Clearly specify that the guest must support the pl011,
virtio-console, and Xen PV console. (Note that it was discussed to
mandate a pl011 during Linaro Connect Asia 2014, but that was under the
impression that the SBSA specified an output-only, no-IRQ serial
port, which is not the case. The only two benefits of mandating a
specific serial type were to handle "console=ttyAMA0" kernel command
line parameters and earlyprintk; Grant Likely has submitted patches
to avoid the need for "console=" parameters, and Rob Herring has
submitted patches for paravirtualized earlyprintk consoles.)
- Reference EFI specification for bootable paths.
- Remove discussion on ACPI boot methods and do not suggest ACPI
support in VMs.
- Add specification about UEFI persistent variable storage and
portability.
References
----------
[1]: The ARM Architecture Reference Manual, ARMv8, Issue A.b
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/inde…
[2]: ARM Server Base System Architecture
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0029/index.h…
[3]: The ARM Generic Interrupt Controller Architecture Specifications v2.0
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/inde…
[4]: http://www.secretlab.ca/archives/27
[5]: https://git.linaro.org/people/leif.lindholm/linux.git/blob/refs/heads/uefi-…
[6]: UEFI Specification 2.4 Revision A
http://www.uefi.org/sites/default/files/resources/2_4_Errata_A.pdf
Hi,
I've collected a list of where people install their dtb files these days:
https://wiki.linaro.org/Platform/DeviceTreeConsolidation
Every distribution has a slightly different variation of install location,
which is not good - we can't tell end users that "this is the place you can
expect to find your device tree files regardless of what distribution you
choose". Some questions I have here before we proceed discussing what
would be the standardized location:
1) Is anything missing from the pros and cons of the different locations?
2) Are you interested in moving to a standardized location if the
cross-distro list proposes one?
Feel free to either comment here and/or update the wiki.
Riku
Hi,
I am trying to set up an OpenEmbedded environment for ARMv8,
following the steps (building from source) at
http://releases.linaro.org/latest/openembedded/aarch64/
So far I have done the following:
mkdir openembedded
cd openembedded
git clone git://git.linaro.org/openembedded/jenkins-setup.git
cd jenkins-setup
git checkout release-14.04
cd ..
sudo bash jenkins-setup/pre-build-root-install-dependencies.sh
bash jenkins-setup/init-and-build.sh
The last command is giving me errors like:
jenkins-setup/init-and-build.sh: line 50: cd: poky: No such file or directory
jenkins-setup/init-and-build.sh: line 53: oe-init-build-env: No such file or directory
jenkins-setup/functions.sh: line 70: conf/bblayers.conf: No such file or directory
Am I missing something? Any pointers on how to debug this further
would be appreciated.
Thanks,
Chetan Nanda
ARM VM System Specification
===========================
Goal
----
The goal of this spec is to allow suitably-built OS images to run on
all ARM virtualization solutions, such as KVM or Xen.
Recommendations in this spec are valid for aarch32 and aarch64 alike, and
they aim to be hypervisor agnostic.
Note that simply adhering to the SBSA [2] is not a valid approach,
for example because the SBSA mandates EL2, which will not be available
for VMs. Further, the SBSA mandates peripherals like the pl011, which
may be controversial for some ARM VM implementations to support.
This spec also covers the aarch32 execution mode, not covered in the
SBSA.
Image format
------------
The image format, as presented to the VM, needs to be well-defined in
order for prepared disk images to be bootable across various
virtualization implementations.
The raw disk format as presented to the VM must be partitioned with a
GUID Partition Table (GPT). The bootable software must be placed in the
EFI System Partition (ESP), using the UEFI removable media path, and
must be an EFI application complying to the UEFI Specification 2.4
Revision A [6].
The ESP partition's GPT entry's partition type GUID must be
C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
state.
This ensures that tools for both Xen and KVM can load a binary UEFI
firmware which can read and boot the EFI application in the disk image.
A typical scenario will be GRUB2 packaged as an EFI application, which
mounts the system boot partition and boots Linux.
Virtual Firmware
----------------
The VM system must be able to boot the EFI application in the ESP. It
is recommended that this is achieved by loading a UEFI binary as the
first software executed by the VM, which then executes the EFI
application. The UEFI implementation should be compliant with UEFI
Specification 2.4 Revision A [6] or later.
This document strongly recommends that the VM implementation supports
persistent environment storage for the virtual firmware, in order to
support likely use cases such as adding additional disk images to a VM
or running installers to perform upgrades.
The binary UEFI firmware implementation should not be distributed as
part of the VM image, but is specific to the VM implementation.
Hardware Description
--------------------
The Linux kernel's proper entry point always takes a pointer to an FDT,
regardless of the boot mechanism, firmware, and hardware description
method. Even on real hardware which only supports ACPI and UEFI, the kernel
entry point will still receive a pointer to a simple FDT, generated by
the Linux kernel UEFI stub, containing a pointer to the UEFI system
table. The kernel can then discover ACPI from the system tables. The
presence of ACPI vs. FDT is therefore always itself discoverable,
through the FDT.
Therefore, the VM implementation must provide through its UEFI
implementation, either:
a complete FDT which describes the entire VM system and will boot
mainline kernels driven by device tree alone, or
no FDT. In this case, the VM implementation must provide ACPI, and
the OS must be able to locate the ACPI root pointer through the UEFI
system table.
For more information about the arm and arm64 boot conventions, see
Documentation/arm/Booting and Documentation/arm64/booting.txt in the
Linux kernel source tree.
For more information about UEFI and ACPI booting, see [4] and [5].
VM Platform
-----------
The specification does not mandate any specific memory map. The guest
OS must be able to enumerate all processing elements, devices, and
memory through HW description data (FDT, ACPI) or a bus-specific
mechanism such as PCI.
The virtual platform must support at least one of the following ARM
execution states:
(1) aarch32 virtual CPUs on aarch32 physical CPUs
(2) aarch32 virtual CPUs on aarch64 physical CPUs
(3) aarch64 virtual CPUs on aarch64 physical CPUs
It is recommended to support both (2) and (3) on aarch64 capable
physical systems.
The virtual hardware platform must provide a number of mandatory
peripherals:
Serial console: The platform should provide a console,
based on an emulated pl011, a virtio-console, or a Xen PV console.
An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer. GICv2
limits the number of virtual CPUs to 8 cores; newer GIC versions
remove this limitation.
The ARM virtual timer and counter should be available to the VM as
per the ARM Generic Timers specification in the ARM ARM [1].
A hotpluggable bus to support hotplug of at least block and network
devices. Suitable buses include a virtual PCIe bus and the Xen PV
bus.
We make the following recommendations for the guest OS kernel:
The guest OS must include support for GICv2 and any available newer
version of the GIC architecture to maintain compatibility with older
VM implementations.
It is strongly recommended to include support for all available
(block, network, console, balloon) virtio-pci, virtio-mmio, and Xen PV
drivers in the guest OS kernel or initial ramdisk.
Other common peripherals for block devices, networking, and more can
(and typically will) be provided, but OS software written and compiled
to run on ARM VMs cannot make any assumptions about which variations
of these should exist or which implementation they use (e.g. VirtIO or
Xen PV). See "Hardware Description" above.
Note that this platform specification is separate from the Linux kernel
concept of mach-virt, which merely specifies a machine model driven
purely from device tree, but does not mandate any peripherals or have any
mention of ACPI.
References
----------
[1]: The ARM Architecture Reference Manual, ARMv8, Issue A.b
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/inde…
[2]: ARM Server Base System Architecture
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0029/index.h…
[3]: The ARM Generic Interrupt Controller Architecture Specifications v2.0
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/inde…
[4]: http://www.secretlab.ca/archives/27
[5]: https://git.linaro.org/people/leif.lindholm/linux.git/blob/refs/heads/uefi-…
[6]: UEFI Specification 2.4 Revision A
http://www.uefi.org/sites/default/files/resources/2_4_Errata_A.pdf
Hey all,
So I have been working with upstream u-boot, and we have gotten to a
point where we have a common set of config options defined that we can
be sure will be there. The next step is working on setting a common
environment. The preferred boot format will be syslinux-style config,
with boot.scr also supported. Tegra boards and the Raspberry Pi default
to reading an extlinux.conf file and falling back to boot.scr if it
doesn't exist, with more platforms being moved over. There has
been work to make sure lots of extra noise is not printed on
screen. Longer term, BootLoaderSpec support will also be implemented.
A working extlinux.conf file is:
# extlinux.conf generated by appliance-creator
ui menu.c32
menu autoboot Welcome to Fedora-Minimal-armhfp-rawhide-20140211-test. Automatic boot in # second{,s}. Press a key for options.
menu title Fedora-Minimal-armhfp-rawhide-20140211-test Boot Options.
menu hidden
timeout 1
totaltimeout 600
label Fedora-Minimal-armhfp-rawhide-20140211-test (3.14.0-0.rc2.git0.1.fc21.armv7hl)
  kernel /vmlinuz-3.14.0-0.rc2.git0.1.fc21.armv7hl
  append ro root=UUID=1cd5dcc6-074d-4364-b606-a9a62b498166
  fdtdir /dtb-3.14.0-0.rc2.git0.1.fc21.armv7hl/
  initrd /initramfs-3.14.0-0.rc2.git0.1.fc21.armv7hl.img
Though menu support still needs some work, you do get a menu and can
select 1, 2, 3, etc.
The fdtdir option (which can also be spelled devicetreedir) specifies a
directory where u-boot can find dtb files; u-boot itself knows which
dtb file to load for the current board. Alternatively, you can use fdt
or devicetree to specify the exact dtb file to load. Other than the dtb
support, it's a stock-standard syslinux config file, which gives a much
more usable experience and takes away any need to know memory addresses
or to wrap things with mkimage. You can if you want, but raw kernel and
initramfs support is enabled and just works. The end result is
something more familiar and simpler for users.
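To make the "distros don't need to care" point concrete, here is a hypothetical generator for a label stanza in the same shape as the extlinux.conf example earlier; the version string and root device are placeholders:

```shell
#!/bin/sh
# Hypothetical sketch: emit an extlinux.conf label stanza for a given
# kernel version and root device, mirroring the Fedora example above.
gen_stanza() {
    kver=$1; rootdev=$2
    cat <<EOF
label Linux ($kver)
  kernel /vmlinuz-$kver
  append ro root=$rootdev
  fdtdir /dtb-$kver/
  initrd /initramfs-$kver.img
EOF
}
```

e.g. `gen_stanza 3.14.0-0.rc2.git0.1.fc21.armv7hl UUID=...` prints a stanza a distro kernel-install hook could append, with no per-board knowledge needed.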
Just wanted to follow up on the emails I sent last year with a brief
status update. Any questions, please ask.
The end goal is that distros don't need to care about doing things
differently for each SoC or board - we can just work everywhere.
Dennis
Does anyone know how to increase the disk size on Foundation Model?
For some reason, network connectivity to the Foundation Model (via ssh
to localhost) doesn't work reliably, so my earlier attempts to mount my
host computer's drive and get things done don't work reliably either.
Now I am stuck with the initial 8GB or so hard disk on the Foundation
Model, which is insufficient for any meaningful work.
Appreciate any help.
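One approach that may help, as an untested sketch: grow the raw image from the host while the model is stopped, then resize the partition and filesystem from inside the guest. The image name, and the assumption that it is a raw ext-formatted disk, are mine:

```shell
#!/bin/sh
# Sketch: append sparse space to a raw disk image from the host.
# After booting, grow the partition/filesystem from inside the guest
# (e.g. with fdisk + resize2fs); tools and image layout are assumptions.
grow_image() {
    img=$1; extra=$2          # e.g. grow_image disk.img 8G
    truncate -s +"$extra" "$img"
}
```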
Thanks & Regards,
Anil
I am facing a bug when starting the Foundation Model with the following
command:
anilss@anilss:~/Linaro/tools/Foundation_v8pkg/models/Linux64_GCC-4.1$ ./Foundation_v8 --openembedded_lamp-armv8_20130719-403.img --network=nat --network-nat-ports=8022=22
Has anyone faced this problem before?
Any ideas on how to uninstall the Foundation Model?
The problem seems to be that after startup, 'ps' is being run
continuously for some reason (I don't know what is running it), so I
don't get to the command prompt at all. As you can see below, the
console keeps outputting the following continuously:
----------------------------------------
. . .
Usage: ps
130722 06:41:50 mysqld_safe Number of processes running now: 0
130722 06:41:50 mysqld_safe mysqld restarted
ps: invalid option -- 'x'
BusyBox v1.21.1 (2013-07-20 01:25:13 UTC) multi-call binary.
Usage: ps
130722 06:41:50 mysqld_safe Number of processes running now: 0
130722 06:41:50 mysqld_safe mysqld restarted
. . .
--------------------------------------------------------------------------------------------