I looked through BSA/SBSA issues today and found one with a PCI-related
discussion [1]. BSA ignores cards which are RCiEPs (Root Complex
Integrated Endpoints), and for the SBSA Reference Platform that means
all of them.
1. https://github.com/ARM-software/bsa-acs/issues/77
By default QEMU starts the SBSA Reference Platform with two cards:
- e1000e network
- VGA graphics
The setup is as simple as possible, with both cards plugged directly
into the PCI Express bus:
~ # lspci -nn
00:00.0 Host bridge [0600]: Red Hat, Inc. QEMU PCIe Host bridge [1b36:0008]
00:01.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
00:02.0 VGA compatible controller [0300]: Device [1234:1111] (rev 02)
~ # lspci -t
-[0000:00]-+-00.0
+-01.0
\-02.0
The VGA is a plain PCI card, while the e1000e becomes a Root Complex
Integrated Endpoint. EDK2 sees them, Linux sees them, and both are able
to use them.
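For reference, a default boot of the machine looks roughly like this (a
sketch only: the flash image file names, memory size, and disk path are
assumptions, not taken from the discussion above):

```shell
# Sketch of a default sbsa-ref boot. SBSA_FLASH0.fd (TF-A/secure world)
# and SBSA_FLASH1.fd (EDK2) are assumed names for the firmware images
# produced by a local build; adjust paths to your setup.
qemu-system-aarch64 -M sbsa-ref -m 4G \
  -drive if=pflash,file=SBSA_FLASH0.fd,format=raw \
  -drive if=pflash,file=SBSA_FLASH1.fd,format=raw \
  -serial stdio
```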
But that's far from real world PCI Express scenario where PCI Express
cards are connected to the bus via root ports.
Let's take a look at a system without the cards added by default. I
edited the QEMU code to remove the e1000e and VGA cards, then added
several cards behind root ports and bridges: an NVMe drive, a USB
controller, an e1000e network card, a Bochs display, and a virtio RNG
PCI card:
~ # lspci -nn
00:00.0 Host bridge [0600]: Red Hat, Inc. QEMU PCIe Host bridge [1b36:0008]
00:01.0 PCI bridge [0604]: Red Hat, Inc. QEMU PCIe Root port [1b36:000c]
00:02.0 PCI bridge [0604]: Red Hat, Inc. QEMU PCIe Root port [1b36:000c]
00:03.0 PCI bridge [0604]: Red Hat, Inc. QEMU PCIe Root port [1b36:000c]
00:04.0 PCI bridge [0604]: Red Hat, Inc. QEMU PCIe Root port [1b36:000c]
00:05.0 PCI bridge [0604]: Red Hat, Inc. QEMU PCIe Root port [1b36:000c]
01:00.0 Non-Volatile memory controller [0108]: Red Hat, Inc. QEMU NVM Express Controller [1b36:0010] (rev 02)
02:00.0 USB controller [0c03]: Red Hat, Inc. QEMU XHCI Host Controller [1b36:000d] (rev 01)
03:00.0 Display controller [0380]: Device [1234:1111] (rev 02)
04:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
05:00.0 PCI bridge [0604]: Red Hat, Inc. Device [1b36:000e]
06:08.0 Unclassified device [00ff]: Red Hat, Inc. Virtio RNG [1af4:1005]
~ # lspci -t
-[0000:00]-+-00.0
+-01.0-[01]----00.0
+-02.0-[02]----00.0
+-03.0-[03]----00.0
+-04.0-[04]----00.0
\-05.0-[05-06]----00.0-[06]----08.0
~ #
Each PCI Express card (01:00.0 - 04:00.0) is behind a PCI Express root
port; the virtio RNG is behind a PCIe-to-PCI bridge (which is itself
behind a PCIe root port). All PCIe cards report the "Express (v2)
Endpoint" capability. See the attached file for "lspci -vvvnn" output.
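I did this by editing the QEMU source, but the same topology can be
sketched as command-line options (the device names are real QEMU
devices; the IDs, chassis/slot numbers, and the disk image path are my
assumptions, and I have not verified this exact invocation against
sbsa-ref):

```shell
# Hypothetical command-line equivalent of the modified topology above:
# five root ports, with the virtio RNG sitting behind a PCIe-to-PCI
# bridge at slot 8 (hence 06:08.0 in lspci).
qemu-system-aarch64 -M sbsa-ref -m 4G \
  -drive if=pflash,file=SBSA_FLASH0.fd,format=raw \
  -drive if=pflash,file=SBSA_FLASH1.fd,format=raw \
  -device pcie-root-port,id=rp1,chassis=1,slot=1 \
  -device pcie-root-port,id=rp2,chassis=2,slot=2 \
  -device pcie-root-port,id=rp3,chassis=3,slot=3 \
  -device pcie-root-port,id=rp4,chassis=4,slot=4 \
  -device pcie-root-port,id=rp5,chassis=5,slot=5 \
  -drive if=none,id=nvm,file=disk.img,format=raw \
  -device nvme,serial=deadbeef,drive=nvm,bus=rp1 \
  -device qemu-xhci,bus=rp2 \
  -device bochs-display,bus=rp3 \
  -device e1000e,bus=rp4 \
  -device pcie-pci-bridge,id=br1,bus=rp5 \
  -device virtio-rng-pci,bus=br1,addr=8
```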
I have run BSA ACS against both the default QEMU setup (bsa-default.txt)
and against my complex one (bsa-complex.txt). As you can see, with the
new PCI setup BSA ACS runs far more tests than before.
What next?
I think that QEMU needs patching to add root ports etc. Switching from
the VGA card to the Bochs display would be good as well, because the
former is PCI while the latter is PCI Express. We could also think about
adding a USB controller by default.
See <https://ci.linaro.org/job/ldcg-sbsa-acs/26/display/redirect>
Changes:
------------------------------------------
Started by user Marcin Juszkiewicz (marcin.juszkiewicz(a)linaro.org)
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on ldcg-aarch64-02-df2fc25dfdc7 (docker-bullseye-arm64-ldcg) in workspace <https://ci.linaro.org/job/ldcg-sbsa-acs/ws/>
[ldcg-sbsa-acs] $ /bin/bash /tmp/jenkins15353892541120853620.sh
+ rm -rf '<https://ci.linaro.org/job/ldcg-sbsa-acs/ws/*'>
+ apt update
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Reading package lists...
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
Build step 'Execute shell' marked build as failure
QEMU now has Cortex-A76 and Neoverse N1 emulation. It would be nice to
migrate the SBSA reference platform to use them (in a "use the newest
possible cores" move).
That needs support in TF-A, and it looks like we cannot keep A57/A72/A76
in one build (I have not looked into the details):
lib/cpus/aarch64/cortex_a76.S:18:2: error: #error "Cortex-A76 must be compiled with HW_ASSISTED_COHERENCY enabled"
   18 | #error "Cortex-A76 must be compiled with HW_ASSISTED_COHERENCY enabled"
      | ^~~~~
lib/cpus/aarch64/cortex_a76.S:23:2: error: #error "Cortex-A76 supports only AArch64. Compile with CTX_INCLUDE_AARCH32_REGS=0"
   23 | #error "Cortex-A76 supports only AArch64. Compile with CTX_INCLUDE_AARCH32_REGS=0"
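Judging purely from those #error messages, an A76/N1-only TF-A build
would presumably need flags along these lines (a sketch under stated
assumptions, not a verified recipe for the qemu_sbsa platform; the
USE_COHERENT_MEM=0 part is my recollection of what TF-A requires when
hardware-assisted coherency is enabled):

```shell
# Hypothetical TF-A build flags for A76/N1-class cores, derived from
# the #error messages above; the exact flag set for qemu_sbsa is an
# assumption and has not been build-tested.
make PLAT=qemu_sbsa \
     HW_ASSISTED_COHERENCY=1 \
     USE_COHERENT_MEM=0 \
     CTX_INCLUDE_AARCH32_REGS=0 \
     all fip
```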
I have not checked whether edk2 needs work to handle A76/N1 too. I
expect the kernel and userspace to be fine with the CPU core change (it
worked for A57/A72/MAX).
This made me think: do we need to version sbsa-ref?
sbsa-ref-7.0 being A57/A72/MAX
sbsa-ref-7.1 being A76/N1/MAX
MAX in both cases, as this allows us to have all CPU features.
Hi, long time no mail, so let me tell you how our merge queue looks:
1. Leif wrote a patchset for TF-A to add 'max' CPU support.
This gives us a CPU with all (or most) features supported in QEMU.
Compared to the 'cortex-a72' we use now, some tests pass for level 4 and
above.
https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/9112
The code to enable the 'max' CPU model for sbsa-ref is already merged
into the QEMU tree.
Passed tests:
20 : Check for 16-Bit VMID : Result: PASS
21 : Check for Virtual host extensions : Result: PASS
22 : Check for pointer signing : Result: PASS
27 : Check for SHA3 and SHA512 support : Result: PASS
31 : Check PEs Implement SB Barrier : Result: PASS
32 : Check PE Impl CFP,DVP,CPP RCTX : Result: PASS
33 : Check Branch Target Support : Result: PASS
2. I sent a patch to QEMU to bump the number of PMU counters to six
(required by both BSA and SBSA).
https://lists.gnu.org/archive/html/qemu-devel/2021-03/msg01012.html
The patch was discussed, and then Peter Maydell proposed a patch which
handles that per CPU model.
https://lists.gnu.org/archive/html/qemu-devel/2021-03/msg04065.html
Passed tests:
12 : Check number of PMU counters : Result: PASS
3. GIC ITS enablement, done by Shashi.
This work has two parts: QEMU and EDK2.
https://lists.gnu.org/archive/html/qemu-devel/2021-03/msg05315.html
https://edk2.groups.io/g/devel/message/72682?p=,,,20,0,0,0::Created,,shashi…
This allows us to pass ITS-related tests and opens the door to adding
SMMU handling.
Passed tests:
102 : If PCIe, then GIC implements ITS : Result: PASS
On 01/03/2021 06:20, Masahisa Kojima via Asa-dev wrote:
> Hi All,
>
> I just encounter this error, I would like to share the information.
> SMP boot fails with the latest master branch of
> edk2/edk2-platforms/tf-a as of today.
> # I boot sbsa-qemu with "-smp 4".
>
> Linux kernel outputs following error.
> [ 0.000000] duplicate boot CPU MPIDR: 0x0 in MADT
> [ 0.000000] duplicate boot CPU MPIDR: 0x0 in MADT
> [ 0.000000] duplicate boot CPU MPIDR: 0x0 in MADT
>
> debian@debian:~$ lscpu
> Architecture: aarch64
> Byte Order: Little Endian
> CPU(s): 1
> On-line CPU(s) list: 0
> Thread(s) per core: 1
> Core(s) per socket: 1
> Socket(s): 1
> NUMA node(s): 1
> Vendor ID: ARM
> Model: 3
> Model name: Cortex-A72
> Stepping: r0p3
> BogoMIPS: 125.00
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 512K
> NUMA node0 CPU(s): 0
> Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
>
Hi Masahisa,
Just to check: is your QEMU new enough to contain this commit?
commit 999f6ebde5d3ee30b03270bc05095bed737b7dab
Author: Leif Lindholm <leif(a)nuviainc.com>
Date: Thu Aug 27 13:43:35 2020 +0100
hw/arm/sbsa-ref: add "reg" property to DT cpu nodes
Graeme
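For anyone hitting this later: that commit gives each devicetree cpu
node a distinct "reg" value, which the firmware uses to derive per-CPU
MPIDR values for the MADT; without it, every MADT entry ends up
reporting MPIDR 0x0, as in the log above. Illustratively, the cpu nodes
look something like this (a sketch, not the exact tree QEMU generates):

```
cpus {
        #address-cells = <2>;
        #size-cells = <0>;

        cpu@0 {
                device_type = "cpu";
                reg = <0x0 0x0>;   /* distinct MPIDR affinity, core 0 */
        };
        cpu@1 {
                device_type = "cpu";
                reg = <0x0 0x1>;   /* distinct MPIDR affinity, core 1 */
        };
};
```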