Hi Viresh/Mathieu,
This is a dump of the current state of my notes for cross-building and running Xen via QEMU. It's a little bit of a stream of consciousness as I wrote things down but hopefully it proves useful in getting things started.
━━━━━━━━━━━━━━━━━━━
XEN RELATED NOTES

Alex Bennée
━━━━━━━━━━━━━━━━━━━
Table of Contents
─────────────────

1. Tasks [0/1] :tasks:
.. 1. TODO Xen SMC/HVC pass through ([STR-41])
2. Notes
.. 1. Building Xen
.. 2. Cross Build
.. 3. Cross Packages
.. 4. Configure
.. 5. Build
..... 1. Just the Hypervisor
.. 6. Running Dom0
.. 7. MacchiatoBin Issues
.. 8. Running DomU
.. 9. View console of DomU guest
1 Tasks [0/1] :tasks: ═════════════
[STR-41] https://projects.linaro.org/browse/STR-41
1.1 TODO Xen SMC/HVC pass through ([STR-41]) ────────────────────────────────────────────
sstabellini_: that tracks
  [ 1.420679] mvebu-comphy f2120000.phy: unsupported SMC call, try updating your firmware
  [ 1.420720] mvebu-comphy f2120000.phy: Firmware could not configure PHY 4 with mode 15 (ret: -95), trying legacy method
sstabellini_: is that a boot option?
<sstabellini_> stsquad: as an example see xen/arch/arm/platforms/xilinx-zynqmp-eemi.c:forward_to_fw
<sstabellini_> stsquad: we don't have a boot option for that, you need to add a few lines of code.

Actually this issue comes up often enough that a boot option would be really useful!
[STR-41] https://projects.linaro.org/browse/STR-41
2 Notes ═══════
2.1 Building Xen ────────────────
As Xen's out-of-tree build support is a little sporadic I've ended up using git-worktree to split out the various builds.
2.2 Cross Build ───────────────
2.3 Cross Packages ──────────────────
┌────
│ apt install libpython3-dev:arm64 libfdt-dev:arm64 libncurses-dev:arm64
└────
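On a Debian-style host the arm64 foreign architecture has to be enabled before those :arm64 packages resolve; a minimal sketch (standard dpkg/apt usage, not from the original notes):

```shell
# Enable the arm64 foreign architecture so the :arm64 dev packages
# above become installable, then refresh the package lists.
sudo dpkg --add-architecture arm64
sudo apt update
```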
Or, as I do on hackbox, do everything in a container:
┌────
│ docker run --rm -it -u (id -u) -v $HOME:$HOME -w (pwd) alex.bennee:xen-arm64 /usr/bin/fish
└────
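Note the command above uses fish command-substitution syntax, i.e. (id -u) and (pwd). A sketch of the same invocation from bash or another POSIX-style shell, assuming the same alex.bennee:xen-arm64 image:

```shell
# Same container invocation, with $(...) substitutions for bash/sh.
docker run --rm -it -u "$(id -u)" \
    -v "$HOME":"$HOME" -w "$(pwd)" \
    alex.bennee:xen-arm64 /usr/bin/fish
```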
built from [my dockerfile]
[my dockerfile] https://github.com/stsquad/dockerfiles/blob/master/crossbuild/xen-arm64/Dockerfile
2.4 Configure ─────────────
┌────
│ ./configure --build=x86_64-unknown-linux-gnu --host=aarch64-linux-gnu \
│     --disable-docs --disable-golang --disable-ocamltools \
│     --with-system-qemu=/usr/bin/qemu-system-i386
└────
The location of the system QEMU can be tweaked after the fact by editing /etc/default/xencommons.
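For example, something along these lines (the exact variable name in xencommons varies between Xen versions, so treat QEMU_XEN here as an assumption and check the file first):

```shell
# Hypothetical tweak: point the toolstack at a different system QEMU.
# Verify the variable name in your /etc/default/xencommons before using.
sed -i 's|^#\?QEMU_XEN=.*|QEMU_XEN=/usr/bin/qemu-system-i386|' /etc/default/xencommons
```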
2.5 Build ─────────
You need to pass the cross-compiler prefix on the make command line:
┌────
│ make -j9 dist CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64
└────
To build a Debian package which installs all the tools and binaries (replacing any distro requirements):
┌────
│ make -j9 debball CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64
└────
This builds a package, ./dist/xen-upstream-4.15-unstable.deb, which you can install with dpkg -i in your dom0 environment.
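With the hostfwd ssh forward to port 2222 used in the dom0 QEMU invocation in these notes, getting the package onto dom0 can look like this (a sketch; the root login on localhost is an assumption):

```shell
# Copy the debball into dom0 over the forwarded ssh port and install it.
scp -P 2222 dist/xen-upstream-4.15-unstable.deb root@localhost:
ssh -p 2222 root@localhost dpkg -i xen-upstream-4.15-unstable.deb
```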
2.5.1 Just the Hypervisor ╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌
┌────
│ cd xen
│ set -x CROSS_COMPILE aarch64-linux-gnu-
│ make XEN_TARGET_ARCH=arm64
└────
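The snippet above uses fish's set -x to export the variable; the bash equivalent of the same hypervisor-only build:

```shell
# bash/POSIX equivalent of the fish snippet above.
cd xen
export CROSS_COMPILE=aarch64-linux-gnu-
make XEN_TARGET_ARCH=arm64
```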
2.6 Running Dom0 ────────────────
To avoid complications with broken GRUB and BIOS firmware I'm currently just using direct boot:
┌────
│ ./qemu-system-aarch64 -machine virt,virtualization=on \
│     -cpu cortex-a57 -serial mon:stdio \
│     -nic user,model=virtio-net-pci,hostfwd=tcp::2222-:22 \
│     -device virtio-scsi-pci \
│     -drive file=/dev/zvol/hackpool-0/debian-buster-arm64,id=hd0,index=0,if=none,format=raw \
│     -device scsi-hd,drive=hd0 \
│     -display none \
│     -m 8192 \
│     -kernel ~/lsrc/xen/xen.build.arm64-xen/xen/xen \
│     -append "dom0_mem=2G,max:2G dom0_max_vcpus=4 loglvl=all guest_loglvl=all" \
│     -device guest-loader,addr=0x46000000,kernel=$HOME/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image,bootargs="root=/dev/sda2 console=hvc0 earlyprintk=xen" \
│     -smp 8
└────
Care has to be taken to avoid the guest-loader address clashing with anything else and corrupting the DTB. Currently QEMU doesn't do anything to avoid those clashes as it has no visibility of where the kernel gets loaded. You can check by examining the Xen output:
┌────
│ (XEN) Loading zImage from 0000000046000000 to 0000000050000000-0000000050eb2200
│ (XEN) Loading d0 DTB to 0x0000000058000000-0x0000000058001ce8
└────
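Using the addresses from that log, a quick shell sanity check (values hard-coded from the output above) confirms the guest-loader blob at 0x46000000 sits below both the range the kernel is relocated to and the d0 DTB:

```shell
# Addresses taken from the Xen boot log above.
GUEST_LOADER_ADDR=$((0x46000000))
KERNEL_START=$((0x50000000))
DTB_START=$((0x58000000))

# The blob must not overlap the relocated kernel or the d0 DTB.
if [ "$GUEST_LOADER_ADDR" -lt "$KERNEL_START" ] && [ "$GUEST_LOADER_ADDR" -lt "$DTB_START" ]; then
    echo "guest-loader address does not clash"
fi
```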
The initial xl list will check that Xen and its tools are up and running and that the hypervisor and userspace hypercall ABIs are in sync.
┌────
│ 13:13:43 [root@buster:~] # xl list
│ Name                ID   Mem  VCPUs  State  Time(s)
│ Domain-0             0  4096      4  r-----   215.8
└────
2.7 MacchiatoBin Issues ──────────────────────
• upgraded to Bullseye for latest GRUB
• built with 8250 serial (32 bit, shift = 2)
2.8 Running DomU ────────────────
Then, with a config like:
┌────
│ # =====================================================================
│ # Example PV Linux guest configuration
│ # =====================================================================
│ #
│ # This is a fairly minimal example of what is required for a
│ # Paravirtualised Linux guest. For a more complete guide see xl.cfg(5)
│
│ # Guest name
│ name = "xenpv-initrd-guest"
│
│ # 128-bit UUID for the domain as a hexadecimal number.
│ # Use "uuidgen" to generate one if required.
│ # The default behavior is to generate a new UUID each time the guest is started.
│ #uuid = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
│
│ # Kernel image to boot
│ kernel = "/home/alex/arm64-minkern"
│ # kernel = "/home/alex/arm64-kernel"
│ # kernel = "/home/alex/arm64-defconfig"
│
│ # Ramdisk (optional)
│ #ramdisk = "/boot/initrd.gz"
│
│ # Kernel command line options
│ extra = "console=hvc0"
│
│ # Initial memory allocation (MB)
│ memory = 4096
│
│ # Maximum memory (MB)
│ # If this is greater than `memory' then the slack will start ballooned
│ # (this assumes guest kernel support for ballooning)
│ maxmem = 4096
│
│ # Number of VCPUS
│ vcpus = 2
│
│ # Network devices
│ # A list of 'vifspec' entries as described in
│ # docs/misc/xl-network-configuration.markdown
│ # vif = [ ]
│
│ # Disk Devices
│ # A list of `diskspec' entries as described in
│ # docs/misc/xl-disk-configuration.txt
│ # disk = [ '/dev/vg/guest-volume,raw,xvda,rw' ]
│ # disk = [ ]
└────
┌────
│ xl create simple-guest.conf
│ xl list
│ Name                ID   Mem  VCPUs  State  Time(s)
│ Domain-0             0  4096      4  r-----   221.4
│ xenpv-initrd-guest   1  4095      2  -b----     5.3
└────
2.9 View console of DomU guest ──────────────────────────────
And to get a console:
┌────
│ xl console -t pv -n 0 xenpv-initrd-guest
└────
To exit you need *Ctrl-[*.
Hi Alex / all,
On Wed, 6 Oct 2021 at 11:41, Alex Bennée via Stratos-dev < stratos-dev@op-lists.linaro.org> wrote:
Hi Viresh/Mathieu,
This is a dump of the current state of my notes for cross-building and running Xen via QEMU. It's a little bit of a stream of consciousness as I wrote things down but hopefully it proves useful in getting things started.
Thanks Alex for the Xen notes, that is very useful!
On a related note, I think Ruchika (on cc) had Xen booting in Qemu and integrated using the optee-build [1] system which may also be a good starting point.
I've also been using optee-build for my virtio-video work and it nicely wraps building QEMU, Linux, OP-TEE etc., so I can recommend it as a good dev environment.
[1] https://github.com/OP-TEE/build
Thanks,
Peter.
--
Alex Bennée
--
Stratos-dev mailing list
Stratos-dev@op-lists.linaro.org
https://op-lists.linaro.org/mailman/listinfo/stratos-dev
Hi,
On Wed, 6 Oct 2021 at 19:30, Peter Griffin peter.griffin@linaro.org wrote:
On a related note, I think Ruchika (on cc) had Xen booting in Qemu and integrated using the optee-build [1] system which may also be a good starting point.
Yes, we do have a manifest which can be used to boot up Xen with TF-A, U-Boot and OP-TEE on QEMUv8 with buildroot. You can follow the steps to reproduce the environment as stated in [2]. Booting Xen is not a default option in the build, so to try it out change the `make run` step in [2] to `make XEN_BOOT=y run`.
[2] https://optee.readthedocs.io/en/latest/building/devices/qemu.html#qemu-v8
Please note that an OP-TEE based mediator is available as an experimental feature in Xen and has been enabled in this build setup.
Regards, Ruchika
On Wed, Oct 6, 2021 at 7:33 AM Ruchika Gupta via Stratos-dev < stratos-dev@op-lists.linaro.org> wrote:
Xen is supported in the meta-virtualization layer of Yocto and OpenEmbedded, which has strong support for cross-compilation, reproducible builds, software customization for embedded systems, building full system images entirely from source code and tracking software licensing metadata.

It supports booting Xen target images built for either x86-64 or 64-bit Arm targets in QEMU, including running the OpenEmbedded QA test cases on the image under QEMU, and running the Xen Test Framework to exercise the Xen hypervisor test case unikernels.

The Raspberry Pi 4 board has dedicated support in meta-virtualization for building a system environment with the Xen hypervisor, tools and the standard Yocto Linux kernel for booting on that hardware. Advanced users of OpenEmbedded can use the multiconfig support for building composite Xen system images that include guest VM images, either also built from source or incorporating binary filesystem images from alternative distro sources.
There is an active community of Xen developers who participate in the meta-virtualization forum for maintaining the support. I recommend it; it provides a powerful development environment and encourages collaboration on maintaining the detailed configurations needed for successful cross compilation and system emulation, in general and also specifically for Xen.
For those familiar with OE/Yocto, to build and launch a Xen system image in QEMU:

- set up a build tree with poky, meta-openembedded, meta-virtualization and optionally meta-raspberrypi
- set MACHINE=qemuarm64 or qemux86-64 in local.conf, depending on the architecture you're interested in
- put xen and virtualization into DISTRO_FEATURES in local.conf, and set the other standard build variables (DL_DIR, SSTATE_DIR, TMPDIR, etc)
- bitbake xen-image-minimal
- runqemu xen-image-minimal nographic slirp
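The local.conf additions for the steps above might look like this (a sketch; the override syntax differs between Yocto releases, with older releases using DISTRO_FEATURES_append instead of the colon form):

```conf
# conf/local.conf fragment for a Xen arm64 image build
MACHINE = "qemuarm64"
DISTRO_FEATURES:append = " xen virtualization"
```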
I hope I'm not pushing this too heavily, but I do want to convey the strength of the OE toolchain and its suitability for Xen.
Christopher (Xen recipe maintainer, meta-virtualization)
Hi Alex,
Thanks for the write-up, that was quite useful.
2.6 Running Dom0 ────────────────
To avoid complications with broken grub and bios firmware I'm currently just using my direct boot:
┌────
│ ./qemu-system-aarch64 -machine virt,virtualization=on \
│     -cpu cortex-a57 -serial mon:stdio \
│     -nic user,model=virtio-net-pci,hostfwd=tcp::2222-:22 \
│     -device virtio-scsi-pci \
│     -drive file=/dev/zvol/hackpool-0/debian-buster-arm64,id=hd0,index=0,if=none,format=raw \
│     -device scsi-hd,drive=hd0 \
│     -display none \
│     -m 8192 \
│     -kernel ~/lsrc/xen/xen.build.arm64-xen/xen/xen \
│     -append "dom0_mem=2G,max:2G dom0_max_vcpus=4 loglvl=all guest_loglvl=all" \
│     -device guest-loader,addr=0x46000000,kernel=$HOME/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image,bootargs="root=/dev/sda2 console=hvc0 earlyprintk=xen" \
│     -smp 8
└────
Question from a neophyte... I don't know if it is a Xen or a QEMU problem, but with the above command line my dom0 kernel couldn't recognise the partitions inside the image and as such wouldn't mount a file system. I had to use the virtio-blk-device interface instead of virtio-scsi-pci, as follows:
./qemu-system-aarch64 \
    -machine virt,virtualization=on \
    -cpu cortex-a57 -serial mon:stdio \
    -nic user,model=virtio-net-pci,hostfwd=tcp::8022-:22 \
    -drive file=debian-buster-arm64,id=hd0,index=0,if=none,format=raw \
    -device virtio-blk-device,drive=hd0 \
    -display none \
    -m 8192 -smp 4 \
    -kernel /home/mpoirier/work/stratos/xen/xen/xen/xen \
    -append "dom0_mem=2G,max:2G dom0_max_vcpus=4 loglvl=all guest_loglvl=all" \
    -device guest-loader,addr=0x46000000,kernel=/home/mpoirier/work/stratos/kernel/builds/dom0/arch/arm64/boot/Image,bootargs="root=/dev/vda2 console=hvc0 earlyprintk=xen"
Only then was I able to boot my dom0 kernel up to a command line. My baselines are:
xen:  c76cfada1cfa tools/libacpi: Use 64-byte alignment for FACS
QEMU: ecf2706e271f Update version for v6.1.0-rc4 release
Thanks, Mathieu
Mathieu Poirier mathieu.poirier@linaro.org writes:
I think it all comes down to the kernel config. I tend to use virtio-pci because it should make discovery easier, but it does require the kernel to have support for PCI and the generic PCI bus controller. You also need the appropriate SCSI bits for chaining SCSI via VirtIO.
My kernel has:
✗ grep VIRTIO .config
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_VIRTIO_VSOCKETS=y
CONFIG_VIRTIO_VSOCKETS_COMMON=y
CONFIG_BT_VIRTIO=y
CONFIG_NET_9P_VIRTIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_SCSI_VIRTIO=y
CONFIG_VIRTIO_NET=y
CONFIG_CAIF_VIRTIO=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_DRM_VIRTIO_GPU=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
# CONFIG_VIRTIO_PCI_LEGACY is not set
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_INPUT=y
CONFIG_VIRTIO_MMIO=y
# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set
CONFIG_VIRTIO_DMA_SHARED_BUFFER=y
CONFIG_VIRTIO_IOMMU=y
CONFIG_RPMSG_VIRTIO=y
CONFIG_VIRTIO_FS=y
CONFIG_CRYPTO_DEV_VIRTIO=y
➜ grep PCI .config | grep -v "not set" | grep -v "#"
CONFIG_ACPI_PCI_SLOT=y
CONFIG_BLK_MQ_PCI=y
CONFIG_HAVE_PCI=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCI_DOMAINS_GENERIC=y
CONFIG_PCI_SYSCALL=y
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_IRQ_DOMAIN=y
CONFIG_PCI_ECAM=y
CONFIG_PCI_LABEL=y
CONFIG_PCIE_BUS_DEFAULT=y
CONFIG_PCI_HOST_COMMON=y
CONFIG_PCI_HOST_GENERIC=y
CONFIG_USB_PCI=y
CONFIG_USB_XHCI_PCI=y
CONFIG_USB_EHCI_PCI=y
CONFIG_USB_OHCI_HCD_PCI=y
CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI=y
CONFIG_ARM_GIC_V3_ITS_PCI=y
CONFIG_GENERIC_PCI_IOMAP=y
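That "do I have the right options" check can be scripted. A hedged sketch — the helper name and the option list are mine, distilled from the grep output above rather than an exhaustive requirement list:

```shell
# check_dom0_config: report any option from the thread's virtio-pci dom0
# list that is missing from a kernel .config. The list is illustrative.
check_dom0_config () {
    config="$1"; missing=0
    for opt in CONFIG_PCI CONFIG_PCI_HOST_GENERIC CONFIG_PCI_ECAM \
               CONFIG_VIRTIO_PCI CONFIG_VIRTIO_BLK CONFIG_SCSI_VIRTIO \
               CONFIG_VIRTIO_NET CONFIG_VIRTIO_CONSOLE; do
        if ! grep -q "^${opt}=y" "$config"; then
            echo "missing: $opt"
            missing=1
        fi
    done
    return $missing
}

# Demo against a stub .config so the sketch runs without a kernel tree
cat > /tmp/sample.config <<'EOF'
CONFIG_PCI=y
CONFIG_PCI_HOST_GENERIC=y
CONFIG_PCI_ECAM=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BLK=y
CONFIG_SCSI_VIRTIO=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
EOF
check_dom0_config /tmp/sample.config && echo "config OK"   # prints "config OK"
```

In real use you would point it at the `.config` of the kernel build you pass to `-device guest-loader` before booting dom0.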
Good day,
On Fri, 8 Oct 2021 at 04:44, Alex Bennée alex.bennee@linaro.org wrote:
The "-device guest-loader" option isn't supported on x86. Is there a similar method for booting Xen from qemu-system-x86_64, or will I have to follow the same bootchain as on real HW, including grub? I also found the qemu-xen git tree [1] but haven't made sense of it yet. Peter Maydell's name is all over that one so he may be the best person to ask.
[1]. https://xenbits.xen.org/gitweb/?p=qemu-xen.git;a=summary
Ah perfect - the end result is the same then.
Thanks, Mathieu
Mathieu Poirier mathieu.poirier@linaro.org writes:
The guest-loader device only works with dynamic DTB based machines, which in practice means the aarch64 and riscv "virt" machine models. I assume x86 involves some sort of ACPI, but I've no idea how you would emulate that with QEMU rather than having the full bootchain. That said, x86 Xen is at least more widely tested so I suspect it has been less prone to bitrot.
Only because he's the lead maintainer for QEMU. That is Xen's copy of QEMU, which is broadly the same as upstream with maybe one or two Xen-specific patches on top that haven't made it upstream yet.
On Tue, Oct 12, 2021 at 10:43:01AM +0100, Alex Bennée via Stratos-dev wrote:
Just FYI. I don't know if this is a normal way for x86 systems, but we can make use of grub for "multiboot". What I actually practiced when I was developing my virtio-proxy was the following boot sequence:

  qemu => TF-A => U-Boot => grub => Xen + distro (Debian in my case)

(Why U-Boot? Because U-Boot (on arm64) can boot grub as an EFI image.)
In a grub configuration, you can create a boot menu like:

(for x86)
  multiboot /boot/xen.efi ...
  module /boot/vmlinuz ...
  module --nounzip /boot/initrd.img ...

(for arm64)
  xen_hypervisor /boot/xen.efi ...
  xen_module /boot/vmlinuz ...
  xen_module --nounzip /boot/initrd.img ...
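Fleshed out into a full grub.cfg stanza, the arm64 case might look like the following — the menu title, paths, and command-line options are hypothetical; only the xen_hypervisor/xen_module command structure comes from the outline above:

```
# Hypothetical arm64 grub.cfg menu entry; adjust paths and options to taste
menuentry "Xen hypervisor with Debian dom0" {
    xen_hypervisor /boot/xen.efi dom0_mem=2G,max:2G dom0_max_vcpus=4 loglvl=all
    xen_module /boot/vmlinuz root=/dev/vda2 console=hvc0
    xen_module --nounzip /boot/initrd.img
}
```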
Thanks, -Takahiro Akashi
AKASHI Takahiro takahiro.akashi@linaro.org writes:
Out of interest, what version of grub were you running on arm64? I think when I upgraded the MachiatoBin to the latest Debian (bullseye) I finally got Xen booting before running into the SMC issue.
On Wed, Oct 13, 2021 at 10:11:34AM +0100, Alex Bennée wrote:
I think it was 2.04-8 (in debian-testing for bullseye). (Please note I replaced the distro's Xen with my own build anyway.)
Thanks, -Takahiro Akashi
On Wed, 13 Oct 2021 at 01:12, AKASHI Takahiro takahiro.akashi@linaro.org wrote:
Thanks for the suggestion.
My hope was to avoid that kind of boot chain to reduce complexity, but based on what I've seen out there I may not have a choice. At least that is what I thought until Wednesday afternoon, when I re-read Christopher Clark's email where he outlines steps to build and boot an x86 Xen based system. Everything Christopher wrote is accurate and works right out of the box.