We submitted a patch to TF-A which is aimed at reporting memory
information through SMC. Together with that patch, we can get the
memory information in EDK2 via SMC calls.
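For reference, a minimal sketch of the EDK2 side of such a query using
ArmSmcLib; the function ID and the result convention below are
assumptions for illustration, the real values live in SbsaQemuSmc.h
and the TF-A patch:

#include <Uefi.h>
#include <Library/ArmSmcLib.h>
#include <Library/BaseMemoryLib.h>

/* Hypothetical SiP function ID, not the value from the patch. */
#define SIP_SVC_GET_MEMORY_NODE_COUNT  0xC2000030

/* Ask TF-A how many memory nodes the platform has. */
UINTN
GetMemoryNodeCount (VOID)
{
  ARM_SMC_ARGS  Args;

  ZeroMem (&Args, sizeof (Args));
  Args.Arg0 = SIP_SVC_GET_MEMORY_NODE_COUNT;
  ArmCallSmc (&Args);
  /* Assumed convention: Arg0 holds status, Arg1 the count. */
  return (UINTN)Args.Arg1;
}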
Changes in v2:
- Align with the latest version.
- Fix language and formatting errors.
Changes in v3:
- Read the memory information in SbsaQemuSmc.c.
Xiong Yining (1):
SbsaQemu: get the information of memory from TF-A
.../SbsaQemuPlatformVersion.h | 1 +
.../Include/IndustryStandard/SbsaQemuSmc.h | 2 +
.../SbsaQemu/Include/Library/QemuSbsaSmc.h | 28 ++++++++++
.../Library/SbsaQemuLib/SbsaQemuLib.inf | 2 +-
.../Library/SbsaQemuLib/SbsaQemuMem.c | 52 +++++--------------
.../Library/SbsaQemuSmc/SbsaQemuSmc.c | 45 ++++++++++++++++
6 files changed, 89 insertions(+), 41 deletions(-)
--
2.34.1
As a part of removing DeviceTree from EDK2 I wrote code for CPU
information.
TF-A reads data for up to 512 cpus and keeps it in local memory.
Two SMC calls are provided for EDK2 (a usage sketch follows the list):
- GET_CPU_COUNT reports the number of cpus
- GET_CPU_NODE returns MPIDR and NUMA node id values for the selected cpu
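A minimal sketch of how EDK2 might issue the second call through
ArmSmcLib; the function ID and the register convention here are
placeholders, not the values from the actual headers:

#include <Uefi.h>
#include <Library/ArmSmcLib.h>
#include <Library/BaseMemoryLib.h>

/* Placeholder SiP function ID; the real one is in SbsaQemuSmc.h. */
#define SIP_SVC_GET_CPU_NODE  0xC2000021

/* Get the MPIDR and NUMA node id of one cpu from TF-A. */
EFI_STATUS
GetCpuNode (
  IN  UINTN   CpuId,
  OUT UINT64  *Mpidr,
  OUT UINT64  *NodeId
  )
{
  ARM_SMC_ARGS  Args;

  ZeroMem (&Args, sizeof (Args));
  Args.Arg0 = SIP_SVC_GET_CPU_NODE;
  Args.Arg1 = CpuId;
  ArmCallSmc (&Args);

  /* Assumed convention: Arg0 = status, Arg1 = node id, Arg2 = MPIDR. */
  if (Args.Arg0 != 0) {
    return EFI_INVALID_PARAMETER;
  }

  *NodeId = (UINT64)Args.Arg1;
  *Mpidr  = (UINT64)Args.Arg2;
  return EFI_SUCCESS;
}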
I took some ideas from Xiong Yining's memory patches.
For EDK2 I removed FdtHelperLib as we no longer need it, and dropped
FdtLib from all places where it is no longer needed. The only remaining
user is reading memory nodes, and that is handled by Xiong Yining's
patches.
There is an SbsaQemuSmc helper library with functions to get the MPIDR
and NUMA node id for a selected cpu.
The number of cpus is read in SbsaQemuPlatformDxe and stored in the
PcdCoreCount variable. It is the first place where this data is used,
so I replaced all FdtHelperLib calls with PcdGet32 calls, roughly like
the snippet below.
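A hedged illustration of that replacement pattern; the wrapper function
is mine and only shows the before/after shape of the call sites:

#include <Uefi.h>
#include <Library/PcdLib.h>

UINT32
GetCoreCount (VOID)
{
  // before: return FdtHelperCountCpus ();
  return PcdGet32 (PcdCoreCount);  /* set earlier by SbsaQemuPlatformDxe */
}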
Generation of the SRAT table needs to be rewritten.
Marcin Juszkiewicz (2):
feat(qemu_sbsa): handle CPU information
SbsaQemu: get cpu information from TF-A
--
2.43.0
We submitted a patch (d56b488) to TF-A which is aimed at reporting
memory information through SMC. Together with that patch, we can get
the memory information in EDK2 via SMC calls.
Xiong Yining (1):
SbsaQemu: get the information of memory from TF-A
Platform/Qemu/SbsaQemu/SbsaQemu.dsc | 5 +++-
.../SbsaQemuPlatformDxe/SbsaQemuPlatformDxe.c | 29 +++++++++++++++++++
.../SbsaQemuPlatformDxe.inf | 1 +
.../SbsaQemuPlatformVersion.h | 14 +++++++++
.../Include/IndustryStandard/SbsaQemuSmc.h | 2 ++
Silicon/Qemu/SbsaQemu/SbsaQemu.dec | 2 ++
6 files changed, 52 insertions(+), 1 deletion(-)
--
2.34.1
Back in May 2023, when we were handling versioning changes, I added a
set of SMC calls between TF-A and EDK2 to share hardware details
(instead of parsing DeviceTree in EDK2).
I think that it is time to move forward and drop DeviceTree support from
EDK2 completely.
# what do we use DT for now?
There are a few things we read from DT in EDK2 now:
- cpu count
- cpu mpidr value
- cpu numa node id
- memory nodes (with numa node ids)
# initial code
I took a look at it, created some tickets [1] in Linaro Jira and wrote
some initial code for checking cpu count (I will send it to the ML).
1. https://linaro.atlassian.net/browse/SRCPC-156
# ideas for next steps
For mpidr/numa I have some ideas. I was thinking of adding a function
in sbsa_sip_svc.c which would count cpus (using code I already have)
and then malloc() memory for a cpu struct { id, mpidr, node_id } for
each cpu. And something similar for memory nodes: read the DT, allocate
structures, fill them.
When EDK2 does an SMC call to get cpu data like mpidr/node_id, the code
will go through the allocated structures and return a single entry.
Again, similar for memory nodes. A rough sketch of the idea is below.
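A minimal sketch of that TF-A-side bookkeeping; all names and the
return convention are illustrative only, not final code:

#include <stdint.h>
#include <stdlib.h>

struct cpu_info {
	uint32_t id;       /* linear cpu index          */
	uint64_t mpidr;    /* MPIDR value read from DT  */
	uint32_t node_id;  /* NUMA node id read from DT */
};

static struct cpu_info *cpus;    /* malloc()ed once after counting */
static unsigned int cpu_count;   /* filled while parsing the DT    */

/* Handler for a GET_CPU_NODE-style SMC: data for a single cpu. */
static int get_cpu_node(uint32_t id, uint64_t *mpidr, uint32_t *node_id)
{
	if (cpus == NULL || id >= cpu_count)
		return -1;  /* invalid cpu index */

	*mpidr   = cpus[id].mpidr;
	*node_id = cpus[id].node_id;
	return 0;
}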
The funny part: the DT is somewhere in memory during BL3*; we provide
it to EDK2 at the start of memory, but I probably do something wrong
when trying to access it during BL3*.
Current data gathering reads the DT from BL2 memory before the MMU
kicks in.
# things for the future
Adding SPM would require additional calls. So far I have only created
a ticket [2] for it without looking into the details.
2. https://linaro.atlassian.net/browse/SRCPC-165
# call for opinions
What are your opinions about it? Ideas? Has someone maybe already
considered it?
When the SbsaQemu platform is configured with multiple memory nodes,
as in a NUMA architecture, the OS will ignore every memory node in the
device tree except the first one, because the kernel reads the UEFI
memory map for memory information when booting via UEFI. However, UEFI
only allocates the memory space of the first memory node on the
SbsaQemu platform. In this scenario we can make full use of HighMemDxe
to add memory spaces for the other memory nodes (the core of that
mechanism is sketched below).
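A hedged sketch of what HighMemDxe does for each additional DT memory
node; GetNextDtMemoryRegion() is a hypothetical helper standing in for
the FDT iteration:

#include <PiDxe.h>
#include <Library/DxeServicesTableLib.h>
#include <Library/DebugLib.h>

/* Hypothetical helper assumed to walk the DT "memory" nodes. */
BOOLEAN GetNextDtMemoryRegion (EFI_PHYSICAL_ADDRESS *Base, UINT64 *Size);

STATIC
VOID
AddExtraMemoryNodes (VOID)
{
  EFI_STATUS            Status;
  EFI_PHYSICAL_ADDRESS  Base;
  UINT64                Size;

  while (GetNextDtMemoryRegion (&Base, &Size)) {
    /* Make the GCD aware of RAM that was not mapped during PEI. */
    Status = gDS->AddMemorySpace (
                    EfiGcdMemoryTypeSystemMemory,
                    Base,
                    Size,
                    EFI_MEMORY_WB
                    );
    ASSERT_EFI_ERROR (Status);
  }
}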
Changes in V2:
- Move the PlatformPeiLib module to execute after memory initialization
xiongyining1480 (3):
SbsaQemu: Use HighMemDxe provided by OvmfPkg
SbsaQemu: Use FdtClientDxe provided by EmbeddedPkg
SbsaQemu: Add PlatformPeiLib library
Platform/Qemu/SbsaQemu/SbsaQemu.dsc | 8 ++-
Platform/Qemu/SbsaQemu/SbsaQemu.fdf | 2 +
.../Library/PlatformPeiLib/PlatformPeiLib.c | 63 +++++++++++++++++++
.../Library/PlatformPeiLib/PlatformPeiLib.inf | 41 ++++++++++++
4 files changed, 113 insertions(+), 1 deletion(-)
create mode 100644 Silicon/Qemu/SbsaQemu/Library/PlatformPeiLib/PlatformPeiLib.c
create mode 100644 Silicon/Qemu/SbsaQemu/Library/PlatformPeiLib/PlatformPeiLib.inf
--
2.34.1
I looked through BSA/SBSA issues today and found one with a
PCI-related discussion [1]. BSA ignores cards which are RCiEP (Root
Complex Integrated Endpoint), and for the SBSA Reference Platform that
means all of them.
1. https://github.com/ARM-software/bsa-acs/issues/77
By default QEMU starts SBSA Reference Platform with two cards:
- e1000e network
- VGA graphics
The setup is as simple as possible, with both cards plugged directly
into the PCI Express bus:
~ # lspci -nn
00:00.0 Host bridge [0600]: Red Hat, Inc. QEMU PCIe Host bridge [1b36:0008]
00:01.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
00:02.0 VGA compatible controller [0300]: Device [1234:1111] (rev 02)
~ # lspci -t
-[0000:00]-+-00.0
+-01.0
\-02.0
VGA is a simple PCI card; e1000e becomes a Root Complex Integrated
Endpoint. EDK2 sees them, Linux sees them, and both are able to use
them.
But that is far from a real-world PCI Express scenario, where PCI
Express cards are connected to the bus via root ports.
Let's take a look at a system without the cards added by default. I
edited the QEMU code, removed the e1000e and VGA cards, and then added
several cards behind root ports and bridges. Here I added an NVMe
controller, a USB controller, an e1000e network card, a Bochs display
and a virtio RNG PCI card:
~ # lspci -nn
00:00.0 Host bridge [0600]: Red Hat, Inc. QEMU PCIe Host bridge [1b36:0008]
00:01.0 PCI bridge [0604]: Red Hat, Inc. QEMU PCIe Root port [1b36:000c]
00:02.0 PCI bridge [0604]: Red Hat, Inc. QEMU PCIe Root port [1b36:000c]
00:03.0 PCI bridge [0604]: Red Hat, Inc. QEMU PCIe Root port [1b36:000c]
00:04.0 PCI bridge [0604]: Red Hat, Inc. QEMU PCIe Root port [1b36:000c]
00:05.0 PCI bridge [0604]: Red Hat, Inc. QEMU PCIe Root port [1b36:000c]
01:00.0 Non-Volatile memory controller [0108]: Red Hat, Inc. QEMU NVM Express Controller [1b36:0010] (rev 02)
02:00.0 USB controller [0c03]: Red Hat, Inc. QEMU XHCI Host Controller [1b36:000d] (rev 01)
03:00.0 Display controller [0380]: Device [1234:1111] (rev 02)
04:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
05:00.0 PCI bridge [0604]: Red Hat, Inc. Device [1b36:000e]
06:08.0 Unclassified device [00ff]: Red Hat, Inc. Virtio RNG [1af4:1005]
~ # lspci -t
-[0000:00]-+-00.0
+-01.0-[01]----00.0
+-02.0-[02]----00.0
+-03.0-[03]----00.0
+-04.0-[04]----00.0
\-05.0-[05-06]----00.0-[06]----08.0
~ #
Each PCI Express card (01:00.0 - 04:00.0) is behind a PCI Express root
port; the virtio RNG is behind a PCIe-PCI bridge (which is itself
behind a PCIe root port). All PCIe cards are reported with the
"Express (v2) Endpoint" capability. See the attached file for
"lspci -vvvnn" output.
I have run BSA ACS against both the default QEMU setup
(bsa-default.txt) and my complex one (bsa-complex.txt). As you can
see, with the new PCI setup BSA ACS runs far more tests than before.
What next?
I think that QEMU needs patching to add root ports etc. Switching from
the VGA card to the Bochs display would be good as well, because the
former is PCI while the latter is PCI Express. We could also think of
adding a USB controller by default. For illustration, the topology
above corresponds roughly to the command-line options sketched below.
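A hedged example of building a similar topology with QEMU device
options instead of code changes (the device and property names exist
in stock QEMU, but exact spellings may vary between versions, and the
built-in e1000e/VGA cards still have to be removed in code):

qemu-system-aarch64 -M sbsa-ref ... \
    -device pcie-root-port,id=rp1,chassis=1 \
    -drive if=none,id=nvd0,file=nvme.img,format=raw \
    -device nvme,bus=rp1,drive=nvd0,serial=nvme0 \
    -device pcie-root-port,id=rp2,chassis=2 \
    -device qemu-xhci,bus=rp2 \
    -device pcie-root-port,id=rp3,chassis=3 \
    -device bochs-display,bus=rp3 \
    -device pcie-root-port,id=rp4,chassis=4 \
    -device e1000e,bus=rp4 \
    -device pcie-root-port,id=rp5,chassis=5 \
    -device pcie-pci-bridge,id=br1,bus=rp5 \
    -device virtio-rng-pci,bus=br1,addr=8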
See <https://ci.linaro.org/job/ldcg-sbsa-acs/26/display/redirect>
Changes:
------------------------------------------
Started by user Marcin Juszkiewicz (marcin.juszkiewicz(a)linaro.org)
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on ldcg-aarch64-02-df2fc25dfdc7 (docker-bullseye-arm64-ldcg) in workspace <https://ci.linaro.org/job/ldcg-sbsa-acs/ws/>
[ldcg-sbsa-acs] $ /bin/bash /tmp/jenkins15353892541120853620.sh
+ rm -rf '<https://ci.linaro.org/job/ldcg-sbsa-acs/ws/*'>
+ apt update
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Reading package lists...
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
Build step 'Execute shell' marked build as failure