> -----Original Message-----
> From: Linaro-open-discussions
> [mailto:linaro-open-discussions-bounces@op-lists.linaro.org] On Behalf Of
> James Morse via Linaro-open-discussions
> Sent: 04 June 2021 17:56
> To: Jean-Philippe Brucker <jean-philippe@linaro.org>;
> linaro-open-discussions@op-lists.linaro.org
> Subject: Re: [Linaro-open-discussions] [RFC linux 0/5] KVM: arm64: Let
> userspace handle PSCI
>
> 'lo
>
> On 20/05/2021 14:07, Jean-Philippe Brucker wrote:
> > As planned during the vCPU hot-add discussions from previous LOD
> > meetings, this prototype lets userspace handle PSCI calls from a guest.
> >
> > The vCPU hot-add model preferred by Arm presents all possible resources
> > through ACPI at boot time, only marking unavailable vCPUs as hidden.
> > The VMM prevents bringing up those vCPUs by rejecting PSCI CPU_ON calls.
> > This keeps things simple for vCPU scaling enablement, while
> > leaving the door open for hardware CPU hot-add.
> >
> > This series focuses on moving PSCI support into userspace. Patches 1-3
> > allow userspace to request WFI to be executed by KVM. That way the VMM
> > can easily implement the CPU_SUSPEND function, which is mandatory from
> > PSCI v0.2 onwards (even if it doesn't have a more useful implementation
> > than WFI, natively available to the guest). An alternative would be to
> > poll the vGIC implemented in KVM for interrupts, but I haven't explored
> > that solution. Patches 4 and 5 let the VMM request PSCI calls.
>
> As mentioned on the call, I've tested the udev output on x86 and arm64; as
> expected it's the same:
> | root@vm:~# udevadm monitor
> | monitor will print the received events for:
> | UDEV - the event which udev sends out after rule processing
> | KERNEL - the kernel uevent
> |
> | KERNEL[33.935817] add /devices/system/cpu/cpu1 (cpu)
> | KERNEL[33.946333] bind /devices/system/cpu/cpu1 (cpu)
> | UDEV [33.953251] add /devices/system/cpu/cpu1 (cpu)
> | UDEV [33.958676] bind /devices/system/cpu/cpu1 (cpu)
>
>
> (I've not played with the KVM changes yet)
I also had a little play on my setup with the cpuhp kernel and QEMU, and
added the udev rule below to bring hot-added CPUs online by default:
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
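For reference, this is how I install it (the rules file name is my own
choice; anything under /etc/udev/rules.d/ works):

cat > /etc/udev/rules.d/99-cpu-online.rules <<'EOF'
# Bring any hot-added CPU online as soon as udev sees it
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
EOF
# Pick up the new rule without rebooting
udevadm control --reload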
Then, on hot-adding a vCPU from the QEMU monitor,
(qemu) device_add host-arm-cpu,id=core2,core-id=2
KERNEL[266.623545] add /devices/system/cpu/cpu2 (cpu)
KERNEL[266.686160] online /devices/system/cpu/cpu2 (cpu)
UDEV [266.691808] add /devices/system/cpu/cpu2 (cpu)
UDEV [266.692216] online /devices/system/cpu/cpu2 (cpu)
the new CPU comes online without having to online it explicitly.
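This can be double-checked from inside the guest, e.g.:

cat /sys/devices/system/cpu/cpu2/online   # 1 once the rule has fired
cat /sys/devices/system/cpu/online        # now includes cpu2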
But with a guest kernel Image without cpuhp support the behaviour is
different (obviously!). On boot the guest reports failures, since the VMM
rejects the PSCI CPU_ON calls for the hidden vCPUs (-22 is -EINVAL):
[ 0.981712] psci: failed to boot CPU2 (-22)
[ 0.982438] CPU2: failed to boot: -22
But all the CPUs are still visible under sysfs:
root@ubuntu:~# ls /sys/devices/system/cpu/
cpu0/ cpu5/ kernel_max power/
cpu1/ cpufreq/ modalias present
cpu2/ cpuidle/ offline smt/
cpu3/ hotplug/ online uevent
cpu4/ isolated possible vulnerabilities/
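So the difference only shows up in the CPU masks, not in the directory
listing. This is what I'd expect to see for this guest (assuming cpu2 is
the only vCPU held back; not a capture from my setup):

cat /sys/devices/system/cpu/present   # 0-5, everything described by ACPI
cat /sys/devices/system/cpu/online    # 0-1,3-5
cat /sys/devices/system/cpu/offline   # 2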
Then, on hot-adding the vCPU,
(qemu) device_add host-arm-cpu,id=core2,core-id=2
no "add" udev event is reported at all this time.
So you have to bring the new CPU online explicitly in this case:
root@ubuntu:~# echo 1 >/sys/devices/system/cpu/cpu2/online
KERNEL[357.211520] online /devices/system/cpu/cpu2 (cpu)
UDEV [357.213550] online /devices/system/cpu/cpu2 (cpu)
And the new CPU becomes available to the VM.
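Rather than poking each CPU by hand, a small loop (a sketch, assuming the
standard sysfs layout) can online whatever is still offline:

for c in /sys/devices/system/cpu/cpu[0-9]*/online; do
    # Only touch CPUs that are currently offline
    [ "$(cat "$c")" = "0" ] && echo 1 > "$c"
done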
I'm not sure whether this is a major concern, or whether there are better
ways to handle it more gracefully in QEMU (a warning, or preventing the
hot-add of vCPUs, etc.).
Thanks,
Shameer