Hello,
We have been trying to verify "system suspend/resume" with the vCPU hotplug patches recently and
found that this functionality does not work in ARM64 VMs even without our patches, i.e. using the latest kernel and QEMU repositories.
estuary:/$ cat /sys/power/mem_sleep
[s2idle]
estuary:/$
estuary:/$ cat /sys/power/state
freeze mem disk
estuary:/$
estuary:/$
estuary:/$ echo mem > /sys/power/state
[ 60.458445] PM: suspend entry (s2idle)
[ 60.458840] Filesystems sync: 0.000 seconds
[ 60.459649] Freezing user space processes
[ 60.461149] Freezing user space processes completed (elapsed 0.001 seconds)
[ 60.461830] OOM killer disabled.
[ 60.462144] Freezing remaining freezable tasks
[ 60.463188] Freezing remaining freezable tasks completed (elapsed 0.000 seconds)
[ 60.463920] printk: Suspending console(s) (use no_console_suspend to debug)
(qemu)
(qemu) sys
system_powerdown system_reset system_wakeup
(qemu) system_wakeup
Error: wake-up from suspend is not supported by this guest
(qemu)
The same happens when using # systemctl suspend.
What is the expected behavior here, or are we missing something?
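For reference, the QMP error above is QEMU's generic refusal when the machine type has not registered wake-up support for the guest's suspend method. A toy model of that check (illustrative only; the function name and flag below are ours, not QEMU's):

```python
# Toy model (not QEMU source): the monitor's system_wakeup command is
# rejected unless the machine registered wake-up-from-suspend support.
# For the Arm 'virt' machine with s2idle inside the guest, no such
# support is registered, hence the error seen on the monitor.
def system_wakeup(wakeup_supported: bool) -> str:
    if not wakeup_supported:
        return "Error: wake-up from suspend is not supported by this guest"
    return "wakeup event injected"

print(system_wakeup(False))
```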
Thanks
Salil
Hi all,
May I, on behalf of our members, bring forward an issue for discussion?
The impact of this issue is significant: until it is resolved, the scenario
where a GPU is passed through to a virtual machine cannot be used. It
exists on Arm only; x86 is fine. (I know workarounds exist, but we want
to fix this in mainline.)
After reading through the many community emails on this topic (see below), I
believe it is unlikely that a single patch can quickly gain everyone's
support. A broad discussion involving the Arm ecosystem and the KVM
community is essential. Only with consensus can a submitted patch receive
sufficient support and eventually resolve this issue in the mainline kernel.
History:
- This issue was first reported publicly on 2021-07-01; see
  https://gitee.com/openeuler/kernel/issues/I3YRDP?from=project-issue
- A patch was submitted to the kernel mailing list on 2022-04-01, authored
  by kylinos.com, but it was rejected; no follow-up was found after that.
  https://lore.kernel.org/lkml/20220401090828.614167-1-xieming@kylinos.cn/T/
- Another discussion took place in an email chain starting 2022-05-09,
  authored by nvidia.com, but it reached no conclusion:
  https://lore.kernel.org/all/20210429162906.32742-1-sdonthineni@nvidia.com/
- As of this writing (2023-10-09), the issue can still be reproduced in
  kernel 6.1.x with Nvidia / AMD GPUs on the Arm architecture.
Problem Description:
A GPU is passed through to a virtual machine via a PCIe node. When
installing the GPU driver within a virtual machine running the openEuler
22.03 LTS SP2 (AArch64) system (based on Linux kernel 5.10), the following
error occurs:
"Unsupported FSC: EC=0x24 xFSC=0x21 ESR_EL2=0x92000061"
PS: the same issue can also be reproduced in kernel 6.1.x with Nvidia / AMD
GPUs.
Upon consulting the official Arm documentation, the meanings of these
error codes are as follows:
- EC=0x24 (0b100100): a Data Abort taken from a lower Exception level; a
  possible cause is an alignment error.
- xFSC=0x21 (0b100001): this fault status code represents an alignment
  fault.
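The decode above can be reproduced mechanically from the ESR_EL2 value; a small sketch (bit positions per the Arm Architecture Reference Manual ESR_ELx layout: EC in bits [31:26], DFSC in the low 6 bits of the ISS):

```python
def decode_esr(esr: int):
    """Split an ESR_ELx value into (EC, DFSC)."""
    ec = (esr >> 26) & 0x3F        # Exception Class, bits [31:26]
    dfsc = esr & 0x3F              # Data Fault Status Code, ISS bits [5:0]
    return ec, dfsc

ec, dfsc = decode_esr(0x92000061)
print(hex(ec), hex(dfsc))          # 0x24 (Data Abort), 0x21 (alignment fault)
```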
There is a lack of published solutions to this error. KylinOS engineers
once proposed a patch for it, but it was rejected by the community;
moreover, their modification was based on the old 4.x kernel series.
Link: [Link to the Patch](
https://lore.kernel.org/lkml/20220401090828.614167-1-xieming@kylinos.cn/T/)
This patch argues that it is unreasonable to map the virtual machine's I/O
memory with the Device_nGnRE attribute.
According to Arm's official whitepaper, Understanding Write Combining on
Arm, Device-GRE is a relaxed-ordering memory type that allows gathering,
but it does not allow read speculation and imposes strict alignment
constraints. You may refer to the following link for more information.
[Link to ARM Community](
https://community.arm.com/arm-research/m/resources/1012 )
Preliminary Deduction:
The GPU driver might be accessing Device-GRE type memory without the
required alignment, leading to this error.
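The deduction rests on the Device-memory alignment rule: an access to Device-type memory must be aligned to its own size, or the CPU raises an alignment fault (the FSC 0b100001 seen above). A minimal sketch of that rule:

```python
def device_access_faults(addr: int, size: int) -> bool:
    """Device-type memory: an access of `size` bytes at `addr` raises an
    alignment fault unless addr is size-aligned. Normal (cacheable)
    memory has no such architectural constraint."""
    return addr % size != 0

print(device_access_faults(0x1003, 4))   # True: unaligned 32-bit access faults
print(device_access_faults(0x1000, 8))   # False: naturally aligned access
```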
Thanks.
Best regards,
Guodong Xu
Linaro
Hi Jonathan, James, Salil, all,
Any topics we want to sync on during the next LOD meeting?
Thanks:)
Joyce
> On 13 Jul 2023, at 8:00 AM, linaro-open-discussions-request@op-lists.linaro.org wrote:
>
> Send Linaro-open-discussions mailing list submissions to
> linaro-open-discussions(a)op-lists.linaro.org
>
> To subscribe or unsubscribe via email, send a message with subject or
> body 'help' to
> linaro-open-discussions-request(a)op-lists.linaro.org
>
> You can reach the person managing the list at
> linaro-open-discussions-owner(a)op-lists.linaro.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Linaro-open-discussions digest..."
>
> Today's Topics:
>
> 1. Re: FYI: KVMForum2023 Conference Talk slides - "Virtual CPU Hotplug Support on ARM64"
> (Joyce Qi)
> 2. Re: Linaro-open-discussions Digest, Vol 33, Issue 19 (Yicong Yang)
> 3. FYI: KVMForum2023 Conference Talk slides - "Virtual CPU Hotplug Support on ARM64"
> (Salil Mehta)
> 4. Re: [PATCH V3 1/1] qemu_v8: add support to run secondary OP-TEE
> (Shiju Jose)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 12 Jul 2023 10:06:24 +0800
> From: Joyce Qi <joyce.qi(a)linaro.org>
> Subject: [Linaro-open-discussions] Re: FYI: KVMForum2023 Conference
> Talk slides - "Virtual CPU Hotplug Support on ARM64"
> To: Salil Mehta <salil.mehta(a)huawei.com>
> Cc: "linaro-open-discussions(a)op-lists.linaro.org"
> <linaro-open-discussions(a)op-lists.linaro.org>, "james.morse(a)arm.com"
> <james.morse(a)arm.com>, Jean-Philippe Brucker
> <jean-philippe.brucker(a)arm.com>, "lorenzo.pieralisi(a)linaro.org"
> <lorenzo.pieralisi(a)linaro.org>, Lorenzo Pieralisi
> <lpieralisi(a)kernel.org>, zhukeqian <zhukeqian1(a)huawei.com>,
> "wangxiongfeng (C)" <wangxiongfeng2(a)huawei.com>, Miguel Luis
> <miguel.luis(a)oracle.com>, Vishnu Pajjuri
> <vishnu(a)amperemail.onmicrosoft.com>,
> "darren(a)amperemail.onmicrosoft.com"
> <darren(a)amperemail.onmicrosoft.com>,
> "ilkka(a)amperemail.onmicrosoft.com" <ilkka(a)amperemail.onmicrosoft.com>,
> Catalin Marinas <catalin.marinas(a)arm.com>, Marc Zyngier
> <maz(a)kernel.org>, Will Deacon <will(a)kernel.org>, Karl Heubaum
> <karl.heubaum(a)oracle.com>, Russell King <linux(a)armlinux.org.uk>,
> "salil.mehta(a)opnsrc.net" <salil.mehta(a)opnsrc.net>, Peter Maydell
> <peter.maydell(a)linaro.org>, Sudeep Holla <sudeep.holla(a)arm.com>,
> Suzuki K Poulose <suzuki.poulose(a)arm.com>
> Message-ID: <AB9B6479-BAA8-4976-B670-757B461A3B7E(a)linaro.org>
> Content-Type: text/plain; charset=gb2312
>
> Hi Salil,
>
> Thanks for sharing; I've uploaded the slides to the meeting minutes of June 7.
>
> https://linaro.atlassian.net/wiki/spaces/LOD/pages/28933292085/2023-06-07+M…
>
>
>
>> On 12 Jul 2023, at 2:22 AM, Salil Mehta <salil.mehta(a)huawei.com> wrote:
>>
>> Hello,
>> Slides (QEMU + kernel) and the video of our "Virtual CPU Hotplug" presentation, which recently
>> concluded at the KVMForum2023 conference, are now available at the link below:
>>
>> Link: https://kvm-forum.qemu.org/2023/talk/9SMPDQ/
>>
>>
>> Many thanks to everyone who has contributed to the project so far!
>>
>>
>> Best regards
>> Salil
>>
>>
>>
>>
>>
>>
>>
>> <KVMForum2023-virtual-cpu-hotplug-kernel-slides.pdf><KVMForum2023-virtual-cpu-hotplug-qemu-slides.pdf>
>
> Thanks:)
> Joyce
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 11 Jul 2023 20:28:19 +0800
> From: Yicong Yang <yangyicong(a)huawei.com>
> Subject: [Linaro-open-discussions] Re: Linaro-open-discussions Digest,
> Vol 33, Issue 19
> To: Joyce Qi <joyce.qi(a)linaro.org>, Lorenzo Pieralisi
> <lorenzo.pieralisi(a)linaro.org>
> Cc: yangyicong(a)hisilicon.com, James Morse <james.morse(a)arm.com>, James
> Morse via Linaro-open-discussions
> <linaro-open-discussions(a)op-lists.linaro.org>,
> "wangkefeng.wang(a)huawei.com" <wangkefeng.wang(a)huawei.com>, tanxiaofei
> <tanxiaofei(a)huawei.com>, tongtiangen(a)huawei.com
> Message-ID: <b00be3fe-968f-5116-2711-ad77a4ea7f71(a)huawei.com>
> Content-Type: text/plain; charset="gbk"
>
> Hi,
>
> Thanks for organizing the meeting and thanks for the discussion. Attached are the materials
> discussed at the meeting, along with the meeting minutes at the end.
>
> Thanks,
> Yicong
>
> On 2023/7/10 23:03, Joyce Qi wrote:
>> Hi Lorenzo,
>>
>> Thanks for attending and for inviting James to join.
>> I have just resent the calendar.
>>
>> Also invited Yicong to join.
>>
>> Thanks:)
>> Joyce
>>
>>
>>> On 10 Jul 2023, at 10:34 PM, Lorenzo Pieralisi <lorenzo.pieralisi(a)linaro.org> wrote:
>>>
>>> On Mon, 10 Jul 2023 at 16:32, Lorenzo Pieralisi
>>> <lorenzo.pieralisi(a)linaro.org> wrote:
>>>>
>>>> On Mon, 10 Jul 2023 at 16:24, Joyce Qi via Linaro-open-discussions
>>>> <linaro-open-discussions(a)op-lists.linaro.org> wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> Just a reminder that we will have the Open Discussion on July 11th on memory and RAS related topics.
>>>>>
>>>>> @James, Lorenz,
>>>>>
>>>>> Will you be able to join?
>>>
>>> Is it possible please to resend a calendar invite ?
>>>
>>> Thanks,
>>> Lorenzo
>>>
>>>>
>>>> Yes, I will - I CC'ed James, hopefully also the Huawei folks will be there
>>>> otherwise there is no point since they raised the topics.
>>>>
>>>> Please let us know,
>>>> Lorenzo
>>
>> .
>>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 11 Jul 2023 18:22:04 +0000
> From: Salil Mehta <salil.mehta(a)huawei.com>
> Subject: [Linaro-open-discussions] FYI: KVMForum2023 Conference Talk
> slides - "Virtual CPU Hotplug Support on ARM64"
> To: "linaro-open-discussions(a)op-lists.linaro.org"
> <linaro-open-discussions(a)op-lists.linaro.org>, Joyce Qi
> <joyce.qi(a)linaro.org>
> Cc: "james.morse(a)arm.com" <james.morse(a)arm.com>, Jean-Philippe Brucker
> <jean-philippe.brucker(a)arm.com>, "lorenzo.pieralisi(a)linaro.org"
> <lorenzo.pieralisi(a)linaro.org>, Lorenzo Pieralisi
> <lpieralisi(a)kernel.org>, zhukeqian <zhukeqian1(a)huawei.com>,
> "wangxiongfeng (C)" <wangxiongfeng2(a)huawei.com>, Miguel Luis
> <miguel.luis(a)oracle.com>, Vishnu Pajjuri
> <vishnu(a)amperemail.onmicrosoft.com>,
> "darren(a)amperemail.onmicrosoft.com"
> <darren(a)amperemail.onmicrosoft.com>,
> "ilkka(a)amperemail.onmicrosoft.com" <ilkka(a)amperemail.onmicrosoft.com>,
> Catalin Marinas <catalin.marinas(a)arm.com>, Marc Zyngier
> <maz(a)kernel.org>, Will Deacon <will(a)kernel.org>, Karl Heubaum
> <karl.heubaum(a)oracle.com>, Russell King <linux(a)armlinux.org.uk>,
> "salil.mehta(a)opnsrc.net" <salil.mehta(a)opnsrc.net>, Peter Maydell
> <peter.maydell(a)linaro.org>, Sudeep Holla <sudeep.holla(a)arm.com>,
> Suzuki K Poulose <suzuki.poulose(a)arm.com>
> Message-ID: <42340fd4b86747809a955c81fe6be8b5(a)huawei.com>
> Content-Type: text/plain; charset="us-ascii"
>
> Hello,
> Slides (QEMU + kernel) and the video of our "Virtual CPU Hotplug" presentation, which recently
> concluded at the KVMForum2023 conference, are now available at the link below:
>
> Link: https://kvm-forum.qemu.org/2023/talk/9SMPDQ/
>
>
> Many thanks to everyone who has contributed to the project so far!
>
>
> Best regards
> Salil
>
>
>
>
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 12 Jul 2023 10:17:11 +0000
> From: Shiju Jose <shiju.jose(a)huawei.com>
> Subject: [Linaro-open-discussions] Re: [PATCH V3 1/1] qemu_v8: add
> support to run secondary OP-TEE
> To: Jens Wiklander <jens.wiklander(a)linaro.org>
> Cc: "linaro-open-discussions(a)op-lists.linaro.org"
> <linaro-open-discussions(a)op-lists.linaro.org>,
> "Olivier.Deprez(a)arm.com" <Olivier.Deprez(a)arm.com>, Linuxarm
> <linuxarm(a)huawei.com>, "zhouguangwei (C)" <zhouguangwei5(a)huawei.com>
> Message-ID: <d550a78261ec447f8b2d8c62e92ab584(a)huawei.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Jens,
>
>> -----Original Message-----
>> From: Jens Wiklander <jens.wiklander(a)linaro.org>
>> Sent: 11 July 2023 10:02
>> To: Shiju Jose <shiju.jose(a)huawei.com>
>> Cc: linaro-open-discussions(a)op-lists.linaro.org; Olivier.Deprez(a)arm.com;
>> Linuxarm <linuxarm(a)huawei.com>; Jonathan Cameron
>> <jonathan.cameron(a)huawei.com>; Zengtao (B) <prime.zeng(a)hisilicon.com>;
>> zhouguangwei (C) <zhouguangwei5(a)huawei.com>
>> Subject: Re: [PATCH V3 1/1] qemu_v8: add support to run secondary OP-TEE
>>
>> Hi,
>>
>> On Mon, Jun 26, 2023 at 1:13 PM <shiju.jose(a)huawei.com> wrote:
>>>
>>> From: Shiju Jose <shiju.jose(a)huawei.com>
>>>
>>> Add changes to run a secondary OP-TEE at S-EL1 for SPMC_AT_EL=2, where
>>> Hafnium is loaded at S-EL2.
>>>
>>> Signed-off-by: Shiju Jose <shiju.jose(a)huawei.com>
>>
>> With https://github.com/OP-TEE/build/pull/663 I'm trying to upstream the
>> Hafnium setup. Once that's merged please create a pull request against
>> https://github.com/OP-TEE/build and we can continue reviewing this patch
>> there.
>
> Sure.
>
> Thanks,
> Shiju
>>
>> Thanks,
>> Jens
>>
>>>
> ...
>>> 2.25.1
>>>
>
> ------------------------------
>
> Subject: Digest Footer
>
> Linaro-open-discussions mailing list -- linaro-open-discussions(a)op-lists.linaro.org
> To unsubscribe send an email to linaro-open-discussions-leave(a)op-lists.linaro.org
>
>
> ------------------------------
>
> End of Linaro-open-discussions Digest, Vol 34, Issue 3
> ******************************************************
Hi
Today, Tuesday, September 26, it's time for another LOC monthly meeting.
Sorry for the short notice. For
time and connection details see the calendar at
https://www.trustedfirmware.org/meetings/
I have a few items for the agenda:
- Firmware handoff in OP-TEE https://github.com/OP-TEE/optee_os/pull/6308
- We've started to upstream a few PRs that will require bumping the
major version to 4
Any other topics?
Thanks,
Jens
[+] Adding back LOD mailing list
Hi Leo,
> From: Leonardo Augusto Guimarães Garcia <leonardo.garcia(a)linaro.org>
> Sent: Wednesday, August 30, 2023 1:59 PM
> To: Marcin Juszkiewicz <marcin.juszkiewicz(a)linaro.org>; Jonathan Cameron
> <jonathan.cameron(a)huawei.com>; Salil Mehta <salil.mehta(a)huawei.com>
> Cc: Lorenzo Pieralisi <lorenzo.pieralisi(a)linaro.org>; Joyce Qi
> <joyce.qi(a)linaro.org>; James Morse <james.morse(a)arm.com>; Salil Mehta
> <salil.mehta(a)opnsrc.net>; Radoslaw Biernacki <rad(a)semihalf.com>; Leif
> Lindholm <quic_llindhol(a)quicinc.com>; Peter Maydell <peter.maydell(a)linaro.org>
> Subject: Re: [Linaro-open-discussions] Re: About the LOD next week (Aug 22th)
>
>
> On 2023/08/30 09:11, Marcin Juszkiewicz wrote:
> > On 30.08.2023 at 13:59, Jonathan Cameron wrote:
> >> Salil Mehta <salil.mehta(a)huawei.com> wrote:
> >
> >>> But I am not sure about qemu-sbsa. Maybe someone can help throw
> >>> light on the goals of qemu-sbsa and who wants it?
> >>
> >> Architecture verification - it is not for use as a VM except for that purpose.
> >> It's not a replacement for virt - purpose is completely different.
> >>
> >> Of the current machines in QEMU the only one that I can see vCPU HP
> >> mattering for is virt as that's the one targeting cloud VMs etc.
> >
> > Clouds use 'virt' as it was made for that purpose. And vCPU hotplug
> > may make sense there.
> >
> > SBSA Reference Platform (sbsa-ref in QEMU) is an emulation of a physical
> > computer (similar to 'q35' in the x86-64 world). It does not use any
> > virtio extensions but emulates expansion cards, so you get
> > 'bochs-display' instead of 'virtio-gpu', 'ahci' instead of
> > 'virtio-blk/scsi', etc.
> >
> > The goal is to have a virtual system which can be used to check how
> > operating systems behave on SBSA hardware.
> >
> > We are going for SBSA level 3 compliance and we are getting closer.
> >
> > Does vCPU hotplug make sense for sbsa-ref? Probably not as you do not
> > hotplug cpus into your physical box, right?
>
>
> This is correct. Today we don't have Arm physical machines that support
> CPUs hot plug/unplug.
>
> Given that, I agree with Jonathan that the vCPU hot plug/unplug code
> doesn't need to worry about the SBSA QEMU implementation. There shouldn't be
> any use case for that in the foreseeable future.
Agreed for the reasons mentioned above. Thanks again for the confirmation.
Thanks
Salil.
>
> Cheers,
>
> Leo
[+] Adding the LOD list for the obvious reasons
Hi Marcin,
> From: Marcin Juszkiewicz <marcin.juszkiewicz(a)linaro.org>
> Sent: Wednesday, August 30, 2023 1:12 PM
> To: Jonathan Cameron <jonathan.cameron(a)huawei.com>; Salil Mehta
> <salil.mehta(a)huawei.com>
> Cc: Leonardo Augusto Guimarães Garcia <leonardo.garcia(a)linaro.org>; Lorenzo
> Pieralisi <lorenzo.pieralisi(a)linaro.org>; Joyce Qi <joyce.qi(a)linaro.org>;
> James Morse <james.morse(a)arm.com>; Salil Mehta <salil.mehta(a)opnsrc.net>;
> Radoslaw Biernacki <rad(a)semihalf.com>; Leif Lindholm
> <quic_llindhol(a)quicinc.com>; Peter Maydell <peter.maydell(a)linaro.org>
> Subject: Re: [Linaro-open-discussions] Re: About the LOD next week (Aug
> 22th)
>
> On 30.08.2023 at 13:59, Jonathan Cameron wrote:
> > Salil Mehta <salil.mehta(a)huawei.com> wrote:
>
> >> But I am not sure about qemu-sbsa. Maybe someone can help throw
> >> light on the goals of qemu-sbsa and who wants it?
> >
> > Architecture verification - it is not for use as a VM except for that purpose.
> > It's not a replacement for virt - purpose is completely different.
> >
> > Of the current machines in QEMU the only one that I can see vCPU HP
> > mattering for is virt as that's the one targeting cloud VMs etc.
>
> Clouds use 'virt' as it was made for that purpose. And vCPU hotplug may
> make sense there.
>
> SBSA Reference Platform (sbsa-ref in QEMU) is an emulation of a physical
> computer (similar to 'q35' in the x86-64 world). It does not use any
> virtio extensions but emulates expansion cards, so you get 'bochs-display'
> instead of 'virtio-gpu', 'ahci' instead of 'virtio-blk/scsi', etc.
>
> The goal is to have a virtual system which can be used to check how
> operating systems behave on SBSA hardware.
>
> We are going for SBSA level 3 compliance and we are getting closer.
Thanks for confirming this and sharing this useful information.
>
> Does vCPU hotplug make sense for sbsa-ref? Probably not as you do not
> hotplug cpus into your physical box, right?
Agreed, for the reference verification platform it does not.
Thanks
Salil.
Hi James,
Do we have a Bugzilla ID for the vCPU hotplug related _OSC capability bits (Present and Enabled) change proposed for the ACPI specification?
Thanks
Salil
Hi,
Time flies; on Tuesday, August 22 it's time for another LOC monthly meeting. For
time and connection details see the calendar at
https://www.trustedfirmware.org/meetings/
I'm happy to report that the Xen patches needed to run OP-TEE with
FF-A have just been merged [1] and will be included in the next Xen
release. With this, we may need to focus more on how long we may
hog the CPU with non-secure interrupts masked.
[1] https://patchew.org/Xen/20230731121536.934239-1-jens.wiklander@linaro.org/#…
Any other topics?
Thanks,
Jens
Hi Vishnu,
I got tied down with refactoring the patches in preparation for the
patch-set and hence could not follow this bug last week.
Thanks for your testing efforts and for suggesting a probable fix.
> From: Vishnu Pajjuri <vishnu(a)amperemail.onmicrosoft.com>
> Sent: Tuesday, August 8, 2023 12:53 PM
> To: Salil Mehta <salil.mehta(a)huawei.com>; Vishnu Pajjuri OS
> <vishnu(a)os.amperecomputing.com>; Salil Mehta <salil.mehta(a)opnsrc.net>;
> Miguel Luis <miguel.luis(a)oracle.com>
> Cc: linaro-open-discussions(a)op-lists.linaro.org
> Subject: Re: [External] : RE: vCPU hotplug/unplug for the ARMv8 processors
>
> Hi Salil,
> Below patch resolves VM live migration CPU stall issues.
Good catch.
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index e9ced9f172..23b06d44cc 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -3224,6 +3224,11 @@ static void virt_cpu_plug(HotplugHandler
> *hotplug_dev, DeviceState *dev,
> fw_cfg_modify_i16(vms->fw_cfg, FW_CFG_NB_CPUS, vms->boot_cpus);
>
> cs->disabled = false;
> + if (qemu_enabled_cpu(cs)) {
> + cs->vmcse =
> qemu_add_vm_change_state_handler(kvm_arm_vm_state_change, cs);
> + }
> +
This is not the correct location of the fix.
> return;
> fail:
> error_propagate(errp, local_err);
> diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
> index 80a456ef16..510f829e4b 100644
> --- a/target/arm/kvm64.c
> +++ b/target/arm/kvm64.c
> @@ -883,9 +883,6 @@ int kvm_arch_init_vcpu(CPUState *cs)
> return -EINVAL;
> }
>
> - if (qemu_enabled_cpu(cs))
> - cs->vmcse =
> qemu_add_vm_change_state_handler(kvm_arm_vm_state_change,
> - cs);
We cannot remove this. Its corresponding destruction leg also has a
similar problem and needs to be fixed as well. Installation and
uninstallation must happen while realization of the vCPUs is
taking place.
>
> /* Determine init features for this CPU */
> memset(cpu->kvm_init_features, 0, sizeof(cpu->kvm_init_features));
>
> Root Cause:
> It's too early to attach the kvm_arm_vm_state_change handler in
> kvm_arch_init_vcpu,
> since vCPUs are not yet enabled at this stage.
Yes, the problem is related to the updating of virtual time, but it
is not so much about when these handlers get installed.
The VM state change handler must be installed during the realization phase
of the vCPU thread, and when the vCPU exits these handlers must be
uninstalled as well. Even now, this is being done at the
right place and time.
The real problem is that, due to the wrong qemu_enabled_cpu() check, none
of the VM state change handlers are getting installed. Hence the jump in
virtual time becomes large during migration, because the adjustment
of the virtual time against the host KVM is not happening.
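For background, the effect of the missing handlers can be modelled simply: the handler is meant to capture the guest's virtual counter when the VM stops and re-derive the offset when it resumes, so guest virtual time does not absorb the migration downtime. A simplified model (our own illustration, not QEMU code):

```python
class VTimer:
    """Simplified model of virtual-time save/restore across migration.
    guest_cnt = host_cnt - offset; without the state-change handler the
    offset is never recomputed and the guest counter jumps by the downtime."""
    def __init__(self):
        self.host_cnt = 0        # monotonically increasing host counter
        self.offset = 0
        self.saved = 0

    def guest_cnt(self):
        return self.host_cnt - self.offset

    def on_vm_stopped(self):     # handler: freeze the guest counter
        self.saved = self.guest_cnt()

    def on_vm_running(self):     # handler: re-derive offset from host counter
        self.offset = self.host_cnt - self.saved

t = VTimer()
t.host_cnt = 1000
t.on_vm_stopped()        # source side: VM paused for migration
t.host_cnt += 500_000    # migration downtime elapses on the host
t.on_vm_running()        # destination side: VM resumed
print(t.guest_cnt())     # 1000 -- no jump in guest virtual time
```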
>
> Because of this, QEMU is not fetching the virtual time from KVM before
> saving the VM state,
> and is not able to restore the virtual time back to KVM after restoring the
> VM state.
> This causes misconfiguration of the vtimer offset, leading to CPU stall
> issues.
This is correct. But the reason this is happening is the absence of the
VM state change handlers in the first place. What you
just mentioned is the visible side effect of the real cause described
above.
> Attaching the kvm_arm_vm_state_change handler in virt_cpu_plug after
> enabling each vCPU
> seems more reasonable to me. Any suggestions are welcome here.
As mentioned above, we should not move the installation of the handler to
any context other than the vCPU thread. This is because when the vCPU
exits, its symmetric destruction leg ensures uninstallation of the VM state
change handler. Having these in two different places is an invitation
to many problems and is actually not required at all.
> This helps to properly configure the vtimer offset. And I didn't see any
> timing issues.
Sure, I have pushed the fix in the commit below. Please pull the
change and check whether everything works for you.
It ensures that the handlers are only installed when vCPU threads
are being spawned and the vCPU is being realized.
https://github.com/salil-mehta/qemu/commit/a220a9e22f8d6452e8c9759e05a12b75…
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index 80a456ef16..96ceebc7cc 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -883,7 +883,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
return -EINVAL;
}
- if (qemu_enabled_cpu(cs))
+ if (cs->thread_id)
cs->vmcse = qemu_add_vm_change_state_handler(kvm_arm_vm_state_change,
cs);
@@ -962,7 +962,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
int kvm_arch_destroy_vcpu(CPUState *cs)
{
- if (qemu_enabled_cpu(cs))
+ if (cs->thread_id)
qemu_del_vm_change_state_handler(cs->vmcse);
return 0;
Many thanks,
Salil.
>
> Regards,
> -Vishnu.
> On 26-07-2023 15:15, Salil Mehta wrote:
> Hi Vishnu,
> Got it. I think I am now able to recreate the condition successfully with
> just 2 vCPUs, even when all of them are present and online.
> I am analyzing the issue and will get back to you soon.
>
> Thanks
> Salil
> From: Vishnu Pajjuri OS mailto:vishnu@os.amperecomputing.com
> Sent: Friday, July 21, 2023 10:39 AM
> To: Salil Mehta mailto:salil.mehta@opnsrc.net; Salil Mehta
> mailto:salil.mehta@huawei.com
> Cc: Vishnu Pajjuri OS mailto:vishnu@os.amperecomputing.com; Miguel Luis
> mailto:miguel.luis@oracle.com
> Subject: Re: [External] : RE: vCPU hotplug/unplug for the ARMv8 processors
>
> Hi Salil,
> On 20-07-2023 22:56, Salil Mehta wrote:
> [+] For your reference, I am attaching the scripts which I am using. Note
> the subtle difference between the two.
> Thanks for sharing your scripts for VM live migration.
>
>
>
> Thanks!
> On Thu, Jul 20, 2023 at 12:57 PM Salil Mehta
> <mailto:salil.mehta@huawei.com> wrote:
> Hi Vishnu,
> Aah, I see; it looks like you have 2 physical machines? I am using a
> same-host setup and use the commands below to migrate VMs:
>
> Src VM (Same Host)
> Qemu> migrate tcp:0:4444
>
> Dst VM (Same Host)
> $ qemu-arm64-nosmmu […] incoming tcp:0:4444
> I have prepared setup like below on the same Host with the help of qemu
> commands only
> [VM1 (P4:C2:H0:U2:O2)] --> [VM2 (P4:C2:H0:U2:O2)] --> [VM3
> (P4:C2:H0:U2:O2)]
> From VM1 to VM2 works fine. But when I tried to do live migration from
> VM2(same VM migrated from VM1) to VM3
> I observed CPU stall issues in this scenario also.
> One more observation is that the time always jumps in the VM2-to-VM3
> iteration:
> [root@fedora ~]# date
> Wed Nov 21 06:34:54 AM EST 2114
> VM1.sh
> TOPDIR=/home/vishnu/vcpu_hp
> BASECODEPATH=$TOPDIR/pre_v2
> QEMUPATH=$BASECODEPATH/qemu/
> QEMUBIN=$QEMUPATH/build/aarch64-softmmu/qemu-system-aarch64
> ROOTFS_IMG=/home/vishnu/vms/images/vm11.raw
> BIOS=/usr/share/AAVMF/AAVMF_CODE.fd
> LOG_FILE=/tmp/serial1.log
>
> $QEMUBIN -M virt,accel=kvm,gic-version=3 \
> -smp cpus=2,maxcpus=4 \
> -m 4g \
> -cpu host \
> -nodefaults \
> -nographic \
> -drive if=pflash,format=raw,readonly,file=$BIOS \
> -boot c -hda $ROOTFS_IMG \
> -chardev stdio,id=char0,logfile=$LOG_FILE,signal=off \
> -serial chardev:char0 \
> -monitor tcp:127.0.0.1:55555,server,nowait
> VM2.sh
> TOPDIR=/home/vishnu/vcpu_hp
> BASECODEPATH=$TOPDIR/pre_v2
> QEMUPATH=$BASECODEPATH/qemu/
> QEMUBIN=$QEMUPATH/build/aarch64-softmmu/qemu-system-aarch64
> ROOTFS_IMG=/home/vishnu/vms/images/vm11.raw
> BIOS=/usr/share/AAVMF/AAVMF_CODE.fd
> LOG_FILE=/tmp/serial1.log
>
> $QEMUBIN -M virt,accel=kvm,gic-version=3 \
> -smp cpus=2,maxcpus=4 \
> -m 4g \
> -cpu host \
> -nodefaults \
> -nographic \
> -drive if=pflash,format=raw,readonly,file=$BIOS \
> -boot c -hda $ROOTFS_IMG \
> -chardev stdio,id=char0,logfile=$LOG_FILE,signal=off \
> -serial chardev:char0 \
> -monitor tcp:127.0.0.1:55556,server,nowait \
> -incoming tcp:0:4444
> VM3.sh
> TOPDIR=/home/vishnu/vcpu_hp
> BASECODEPATH=$TOPDIR/pre_v2
> QEMUPATH=$BASECODEPATH/qemu/
> QEMUBIN=$QEMUPATH/build/aarch64-softmmu/qemu-system-aarch64
> ROOTFS_IMG=/home/vishnu/vms/images/vm11.raw
> BIOS=/usr/share/AAVMF/AAVMF_CODE.fd
> LOG_FILE=/tmp/serial1.log
>
> $QEMUBIN -M virt,accel=kvm,gic-version=3 \
> -smp cpus=2,maxcpus=4 \
> -m 4g \
> -cpu host \
> -nodefaults \
> -nographic \
> -drive if=pflash,format=raw,readonly,file=$BIOS \
> -boot c -hda $ROOTFS_IMG \
> -chardev stdio,id=char0,logfile=$LOG_FILE,signal=off \
> -serial chardev:char0 \
> -monitor tcp:127.0.0.1:55557,server,nowait \
> -incoming tcp:0:4445
> Log:-
> [root@fedora ~]# [ 189.884992] rcu: INFO: rcu_sched detected stalls on
> CPUs/tasks:
> [ 189.887474] rcu: 1-...!: (0 ticks this GP) idle=c268/0/0x0
> softirq=13837/13837 fqs=0 (false positive?)
> [ 189.891022] rcu: (detected by 0, t=6002 jiffies, g=7521, q=764
> ncpus=2)
> [ 189.893494] Task dump for CPU 1:
> [ 189.894875] task:swapper/1 state:R running
> task stack:0 pid:0 ppid:1 flags:0x00000008
> [ 189.898597] Call trace:
> [ 189.899505] __switch_to+0xc8/0xf8
> [ 189.900825] 0xffff8000080ebde0
> [ 189.902236] rcu: rcu_sched kthread timer wakeup didn't happen for 6001
> jiffies! g7521 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
> [ 189.906499] rcu: Possible timer handling issue on cpu=1 timer-
> softirq=3803
> [ 189.909125] rcu: rcu_sched kthread starved for 6002 jiffies! g7521 f0x0
> RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=1
> [ 189.912934] rcu: Unless rcu_sched kthread gets sufficient CPU time,
> OOM is now expected behavior.
> [ 189.916226] rcu: RCU grace-period kthread stack dump:
> [ 189.918122] task:rcu_sched state:I
> stack:0 pid:15 ppid:2 flags:0x00000008
> [ 189.921241] Call trace:
> [ 189.922165] __switch_to+0xc8/0xf8
> [ 189.923453] __schedule+0x248/0x6c0
> [ 189.924821] schedule+0x58/0xf0
> [ 189.926010] schedule_timeout+0x90/0x178
> [ 189.927483] rcu_gp_fqs_loop+0x124/0x438
> [ 189.928969] rcu_gp_kthread+0x164/0x1c8
> [ 189.930401] kthread+0xe0/0xf0
> [ 189.931544] ret_from_fork+0x10/0x20
> [ 189.932874] rcu: Stack dump where RCU GP kthread last ran:
> [ 189.935207] Task dump for CPU 1:
> [ 189.936338] task:swapper/1 state:R running
> task stack:0 pid:0 ppid:1 flags:0x00000008
> [ 189.939811] Call trace:
> [ 189.940673] __switch_to+0xc8/0xf8
> [ 189.941888] 0xffff8000080ebde0
> Let's first use the QEMU prompt to migrate, instead of the web interface,
> if you can. This will also help us determine whether the problem is an
> interface problem or whether we are really missing something.
> Could you please share your Linux and QEMU source repos as well?
> I'll try to figure out whether I'm using any wrong source repositories.
>
> Regards,
> -Vishnu.
> Thanks
> Salil
> From: Vishnu Pajjuri OS <mailto:vishnu@os.amperecomputing.com>
> Sent: Thursday, July 20, 2023 12:49 PM
> To: Salil Mehta <mailto:salil.mehta@huawei.com>; Vishnu Pajjuri OS
> <mailto:vishnu@os.amperecomputing.com>; Miguel Luis
> <mailto:miguel.luis@oracle.com>
> Cc: Salil Mehta <mailto:salil.mehta@opnsrc.net>
> Subject: Re: [External] : RE: vCPU hotplug/unplug for the ARMv8 processors
>
> Hi Salil,
> On 20-07-2023 16:58, Salil Mehta wrote:
> Hi Vishnu,
> Can I request you to reduce the number of vCPUs to 6?
>
> Test Case:
> [VM1 (P6:C4:H0:U2:O4)] --> [VM2 (P6:C4:H0:U2:O4)]
> <Exit from qemu prompt. Restart VM1 and then execute below>
> With virsh(libvirt) commands,
> After migrating VM1 to DstHost, there is nothing left on SrcHost; even the
> ps command shows no qemu process on SrcHost.
> So operating VM1 on SrcHost after migrating VM1 to DstHost is not possible.
> On the DstHost side VM1 is live, and I am able to access VM1's console.
> I can manipulate VM1 on DstHost after migration.
> Regards,
> -Vishnu.
> [VM1 (P6:C4:H0:U2:O4)] <-- [VM2 (P6:C4:H0:U2:O4)]
>
> [...‘n’ times...]
>
> [VM1 (P6:C4:H0:U2:O4)] --> [VM2 (P6:C4:H0:U2:O4)]
> <Exit from qemu prompt. Restart VM1 and then execute below>
> [VM1 (P6:C4:H0:U2:O4)] <-- [VM2 (P6:C4:H0:U2:O4)]
>
> Thanks
> Salil.
>
> From: Vishnu Pajjuri OS mailto:vishnu@os.amperecomputing.com
> Sent: Thursday, July 20, 2023 12:16 PM
> To: Salil Mehta mailto:salil.mehta@huawei.com; Vishnu Pajjuri OS
> mailto:vishnu@os.amperecomputing.com; Miguel Luis
> mailto:miguel.luis@oracle.com
> Cc: Salil Mehta mailto:salil.mehta@opnsrc.net
> Subject: Re: [External] : RE: vCPU hotplug/unplug for the ARMv8 processors
>
> Hi Salil,
> On 20-07-2023 15:52, Salil Mehta wrote:
> Hi Vishnu,
>
> From: Vishnu Pajjuri OS mailto:vishnu@os.amperecomputing.com
> Sent: Tuesday, July 18, 2023 12:39 PM
>
> [...]
>
> @Miguel/Vishnu BTW, I was able to reproduce the CPU stall issue, and it
> looks
> to me like it only happens when the number of online vCPUs being migrated
> at the source VM
> does not match the number of vCPUs being onlined during boot at the
> destination VM.
> This means the "maxcpus=N" kernel parameter at the destination VM should
> have a value equal to the number of onlined vCPUs being migrated.
>
> I still need to check if this matches the x86_64 behavior. If possible, can
> you guys confirm below:
>
> [1] always keep the online vcpus same at source and destination and then
> wait
> and check for the CPU stall messages?
> Hopefully the below is the test scenario that you are looking at, but I
> still observed CPU stall issues.
>
> P: Possible vCPUs
> C: Cold Plugged vCPUs
> H: Hot Plugged vCPUs
> U: Unplugged vCPUs
> O: Onlined vCPUs
> sVM: Source VM
> dVM: Destination VM
>
>
> Test Case:
> [sVM (P80:C72:H0:U8:O72)] --> [dVM (P80:C72:H0:U8:O72)] - PASS
>
> Migrate the same VM back
>
> [sVM (P80:C72:H0:U8:O72)] <-- [dVM (P80:C72:H0:U8:O72)] - Observed CPU
> stalls on VM
>
>
> I have now repeated the above scenario at least 5 times and cannot
> reproduce the stalls in the case you have mentioned.
> Good to hear that you didn't observe any CPU stall issues with your setup.
> I'm using virsh commands to create the live VM setup. If possible I'll
> adapt your qemu commands to libvirt; otherwise I would like to reproduce
> your setup. For that, could you please share your kernel and qemu repos,
> along with the complete QEMU launch and migration commands you have been
> using on both the source and destination side?
>
> A few questions:
> Q1: In the above case, there is no hotplug happening at all, right?
> Ans: Yes.
>
> Q2: Do you restart the source VM after it has been migrated once to the
> destination?
> Ans: No.
>
>
>
> Q3: Can you please paste the complete QEMU launch and migration commands
> you have been using on both the source and destination side?
> Ans: I'm using virsh commands to migrate VMs.
>
> Commands on SrcHost:
> The qemu command line after starting the VM with virsh:
> # virsh start vm1 --console
> /usr/local/bin/qemu-system-aarch64 -name guest=vm1,debug-threads=on -S -
> object {"qom-
> type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qe
> mu/domain-4-vm1/master-key.aes"} -blockdev
> {"driver":"file","filename":"/usr/share/edk2/aarch64/QEMU_EFI-silent-
> pflash.raw","node-name":"libvirt-pflash0-storage","auto-read-
> only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash0-
> format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} -
> blockdev
> {"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/vm1_VARS.fd","node
> -name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"} -
> blockdev {"node-name":"libvirt-pflash1-format","read-
> only":false,"driver":"raw","file":"libvirt-pflash1-storage"} -machine virt-
> 7.2,usb=off,dump-guest-core=off,gic-version=3,pflash0=libvirt-pflash0-
> format,pflash1=libvirt-pflash1-format,memory-backend=mach-virt.ram -accel
> kvm -cpu host -m 32768 -object {"qom-type":"memory-backend-ram","id":"mach-
> virt.ram","size":34359738368} -overcommit mem-lock=off -smp
> 2,maxcpus=4,sockets=1,dies=1,cores=4,threads=1 -uuid ea78094e-de32-4caa-
> 827d-7628afff6524 -display none -no-user-config -nodefaults -chardev
> socket,id=charmonitor,fd=23,server=on,wait=off -mon
> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -
> boot strict=on -device {"driver":"pcie-root-
> port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true
> ,"addr":"0x1"} -device {"driver":"pcie-root-
> port","port":9,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x1.0x1"} -
> device {"driver":"pcie-root-
> port","port":10,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x2"} -
> device {"driver":"pcie-root-
> port","port":11,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x3"} -
> device {"driver":"pcie-root-
> port","port":12,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x4"} -
> device {"driver":"pcie-root-
> port","port":13,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x5"} -
> device {"driver":"pcie-root-
> port","port":14,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x1.0x6"} -
> device {"driver":"pcie-root-
> port","port":15,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x1.0x7"} -
> device {"driver":"pcie-root-
> port","port":16,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":tru
> e,"addr":"0x2"} -device {"driver":"pcie-root-
> port","port":17,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x2.0x1"}
> -device {"driver":"pcie-root-
> port","port":18,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x2.0x2"}
> -device {"driver":"pcie-root-
> port","port":19,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x2.0x3"}
> -device {"driver":"pcie-root-
> port","port":20,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x2.0x4"}
> -device {"driver":"pcie-root-
> port","port":21,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x2.0x5"}
> -device {"driver":"qemu-
> xhci","p2":15,"p3":15,"id":"usb","bus":"pci.2","addr":"0x0"} -device
> {"driver":"virtio-scsi-pci","id":"scsi0","bus":"pci.3","addr":"0x0"} -
> device {"driver":"virtio-serial-pci","id":"virtio-
> serial0","bus":"pci.4","addr":"0x0"} -blockdev
> {"driver":"file","filename":"/mnt/nfs/vm11.raw","node-name":"libvirt-2-
> storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-
> name":"libvirt-2-format","read-
> only":false,"discard":"unmap","driver":"raw","file":"libvirt-2-storage"} -
> device {"driver":"virtio-blk-
> pci","bus":"pci.5","addr":"0x0","drive":"libvirt-2-format","id":"virtio-
> disk0","bootindex":1} -blockdev
> {"driver":"file","filename":"/home/vishnu/vms/Fedora-Server-dvd-aarch64-36-
> 1.5.iso","node-name":"libvirt-1-storage","auto-read-
> only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-
> format","read-only":true,"driver":"raw","file":"libvirt-1-storage"} -device
> {"driver":"scsi-cd","bus":"scsi0.0","channel":0,"scsi-
> id":0,"lun":0,"device_id":"drive-scsi0-0-0-0","drive":"libvirt-1-
> format","id":"scsi0-0-0-0"} -netdev
> tap,fd=24,vhost=on,vhostfd=26,id=hostnet0 -device {"driver":"virtio-net-
> pci","netdev":"hostnet0","id":"net0","mac":"52:54:00:4d:8d:9e","bus":"pci.1
> ","addr":"0x0"} -chardev pty,id=charserial0 -serial chardev:charserial0 -
> chardev socket,id=charchannel0,fd=22,server=on,wait=off -device
> {"driver":"virtserialport","bus":"virtio-
> serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu
> .guest_agent.0"} -chardev socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/4-
> vm1-swtpm.sock -tpmdev emulator,id=tpm-tpm0,chardev=chrtpm -device
> {"driver":"tpm-tis-device","tpmdev":"tpm-tpm0","id":"tpm0"} -audiodev
> {"id":"audio1","driver":"none"} -device {"driver":"virtio-balloon-
> pci","id":"balloon0","bus":"pci.6","addr":"0x0"} -object {"qom-type":"rng-
> random","id":"objrng0","filename":"/dev/urandom"} -device
> {"driver":"virtio-rng-
> pci","rng":"objrng0","id":"rng0","bus":"pci.7","addr":"0x0"} -msg
> timestamp=on
> virsh command to migrate vm1 from SrcHost to DstHost:
> # virsh migrate --live vm1 qemu+ssh://vCPU-HP-Host2/system --unsafe --
> verbose
> Commands on DstHost:
> The qemu command line after vm1 has been migrated to DstHost:
> /usr/local/bin/qemu-system-aarch64 -name guest=vm1,debug-threads=on -S -
> object {"qom-
> type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qe
> mu/domain-1-vm1/master-key.aes"} -blockdev
> {"driver":"file","filename":"/usr/share/edk2/aarch64/QEMU_EFI-silent-
> pflash.raw","node-name":"libvirt-pflash0-storage","auto-read-
> only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash0-
> format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} -
> blockdev
> {"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/vm1_VARS.fd","node
> -name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"} -
> blockdev {"node-name":"libvirt-pflash1-format","read-
> only":false,"driver":"raw","file":"libvirt-pflash1-storage"} -machine virt-
> 7.2,usb=off,gic-version=3,dump-guest-core=off,memory-backend=mach-
> virt.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format -
> accel kvm -cpu host -m 32768 -object {"qom-type":"memory-backend-
> ram","id":"mach-virt.ram","size":34359738368} -overcommit mem-lock=off -smp
> 2,maxcpus=4,sockets=1,dies=1,cores=4,threads=1 -uuid ea78094e-de32-4caa-
> 827d-7628afff6524 -display none -no-user-config -nodefaults -chardev
> socket,id=charmonitor,fd=23,server=on,wait=off -mon
> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -
> boot strict=on -device {"driver":"pcie-root-
> port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true
> ,"addr":"0x1"} -device {"driver":"pcie-root-
> port","port":9,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x1.0x1"} -
> device {"driver":"pcie-root-
> port","port":10,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x2"} -
> device {"driver":"pcie-root-
> port","port":11,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x3"} -
> device {"driver":"pcie-root-
> port","port":12,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x4"} -
> device {"driver":"pcie-root-
> port","port":13,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x5"} -
> device {"driver":"pcie-root-
> port","port":14,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x1.0x6"} -
> device {"driver":"pcie-root-
> port","port":15,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x1.0x7"} -
> device {"driver":"pcie-root-
> port","port":16,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":tru
> e,"addr":"0x2"} -device {"driver":"pcie-root-
> port","port":17,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x2.0x1"}
> -device {"driver":"pcie-root-
> port","port":18,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x2.0x2"}
> -device {"driver":"pcie-root-
> port","port":19,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x2.0x3"}
> -device {"driver":"pcie-root-
> port","port":20,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x2.0x4"}
> -device {"driver":"pcie-root-
> port","port":21,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x2.0x5"}
> -device {"driver":"qemu-
> xhci","p2":15,"p3":15,"id":"usb","bus":"pci.2","addr":"0x0"} -device
> {"driver":"virtio-scsi-pci","id":"scsi0","bus":"pci.3","addr":"0x0"} -
> device {"driver":"virtio-serial-pci","id":"virtio-
> serial0","bus":"pci.4","addr":"0x0"} -blockdev
> {"driver":"file","filename":"/mnt/nfs/vm11.raw","node-name":"libvirt-2-
> storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-
> name":"libvirt-2-format","read-
> only":false,"discard":"unmap","driver":"raw","file":"libvirt-2-storage"} -
> device {"driver":"virtio-blk-
> pci","bus":"pci.5","addr":"0x0","drive":"libvirt-2-format","id":"virtio-
> disk0","bootindex":1} -blockdev
> {"driver":"file","filename":"/home/vishnu/vms/Fedora-Server-dvd-aarch64-36-
> 1.5.iso","node-name":"libvirt-1-storage","auto-read-
> only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-
> format","read-only":true,"driver":"raw","file":"libvirt-1-storage"} -device
> {"driver":"scsi-cd","bus":"scsi0.0","channel":0,"scsi-
> id":0,"lun":0,"device_id":"drive-scsi0-0-0-0","drive":"libvirt-1-
> format","id":"scsi0-0-0-0"} -netdev
> {"type":"tap","fd":"24","vhost":true,"vhostfd":"26","id":"hostnet0"} -
> device {"driver":"virtio-net-
> pci","netdev":"hostnet0","id":"net0","mac":"52:54:00:4d:8d:9e","bus":"pci.1
> ","addr":"0x0"} -chardev pty,id=charserial0 -serial chardev:charserial0 -
> chardev socket,id=charchannel0,fd=22,server=on,wait=off -device
> {"driver":"virtserialport","bus":"virtio-
> serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu
> .guest_agent.0"} -chardev socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/1-
> vm1-swtpm.sock -tpmdev emulator,id=tpm-tpm0,chardev=chrtpm -device
> {"driver":"tpm-tis-device","tpmdev":"tpm-tpm0","id":"tpm0"} -audiodev
> {"id":"audio1","driver":"none"} -incoming defer -device {"driver":"virtio-
> balloon-pci","id":"balloon0","bus":"pci.6","addr":"0x0"} -object {"qom-
> type":"rng-random","id":"objrng0","filename":"/dev/urandom"} -device
> {"driver":"virtio-rng-
> pci","rng":"objrng0","id":"rng0","bus":"pci.7","addr":"0x0"} -msg
> timestamp=on
> virsh command to migrate vm1 back to SrcHost:
> # virsh migrate --live vm1 qemu+ssh://vCPU-HP-Host1/system --unsafe --
> verbose
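[Editor's note: for stress-testing the back-and-forth case discussed above, the two virsh migrate commands can be wrapped in a small loop. A sketch only, assuming passwordless SSH between the hosts; the `migrate_uri` and `ping_pong` helpers are invented here, and using `virsh -c` for the return leg is an assumption, not something shown in the thread:]

```shell
#!/bin/sh
# Build the libvirt migration target URI for a given host.
migrate_uri() {
    echo "qemu+ssh://$1/system"
}

# Ping-pong a domain between two hosts N times, stopping on the first
# failure. Intended to be run on host_a, where the domain starts out.
ping_pong() {
    domain=$1 host_a=$2 host_b=$3 rounds=$4
    i=0
    while [ "$i" -lt "$rounds" ]; do
        # forward leg: local host -> host_b
        virsh migrate --live "$domain" "$(migrate_uri "$host_b")" \
            --unsafe --verbose || return 1
        # return leg: connect to host_b's libvirtd and migrate back
        virsh -c "$(migrate_uri "$host_b")" migrate --live "$domain" \
            "$(migrate_uri "$host_a")" --unsafe --verbose || return 1
        i=$((i + 1))
    done
}

# Example (hostnames from this thread):
#   ping_pong vm1 vCPU-HP-Host1 vCPU-HP-Host2 5
```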
>
> Thanks
> Salil.