Hi Vishnu,

I got tied down with refactoring the patches in preparation for the patch-set and hence could not follow this bug last week.
Thanks for your testing efforts and also for suggesting a probable fix.
From: Vishnu Pajjuri <vishnu@amperemail.onmicrosoft.com>
Sent: Tuesday, August 8, 2023 12:53 PM
To: Salil Mehta <salil.mehta@huawei.com>; Vishnu Pajjuri OS <vishnu@os.amperecomputing.com>; Salil Mehta <salil.mehta@opnsrc.net>; Miguel Luis <miguel.luis@oracle.com>
Cc: linaro-open-discussions@op-lists.linaro.org
Subject: Re: [External] : RE: vCPU hotplug/unplug for the ARMv8 processors
Hi Salil,

The below patch resolves the VM live migration CPU stall issues.
Good catch.
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index e9ced9f172..23b06d44cc 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -3224,6 +3224,11 @@ static void virt_cpu_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
     fw_cfg_modify_i16(vms->fw_cfg, FW_CFG_NB_CPUS, vms->boot_cpus);

     cs->disabled = false;
+    if (qemu_enabled_cpu(cs)) {
+        cs->vmcse = qemu_add_vm_change_state_handler(kvm_arm_vm_state_change,
+                                                     cs);
+    }
+
This is not the correct location for the fix.
     return;

fail:
    error_propagate(errp, local_err);

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index 80a456ef16..510f829e4b 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -883,9 +883,6 @@ int kvm_arch_init_vcpu(CPUState *cs)
         return -EINVAL;
     }
-    if (qemu_enabled_cpu(cs))
-        cs->vmcse = qemu_add_vm_change_state_handler(kvm_arm_vm_state_change,
-                                                     cs);
We cannot simply remove this. Its corresponding destruction leg has a similar problem and needs to be fixed as well. Installation and uninstallation must both happen while the vCPU is being realized.
     /* Determine init features for this CPU */
     memset(cpu->kvm_init_features, 0, sizeof(cpu->kvm_init_features));
Root cause: it is too early to attach the kvm_arm_vm_state_change handler in kvm_arch_init_vcpu, since the vCPUs are not yet enabled at this stage.
Yes, the problem is related to the updating of virtual time, but it is not really a question of when these handlers must be installed. The VM state change handler must be installed during the realization phase of the vCPU thread, and when the vCPU exits, the handler must be uninstalled as well. Even now, this is being done at the right place and at the right time.
The problem is that, due to the wrong qemu_enabled_cpu() check, none of the VM state change handlers are getting installed. Hence, the jump in virtual time becomes large during migration, as the virtual time is never adjusted against the host KVM.
Because of this, QEMU does not fetch virtual time from KVM before saving the VM state, and it also cannot restore virtual time back into KVM after restoring the VM state. This misconfigures the vtimer offset and leads to the CPU stall issues.
This is correct. But the reason why this is happening is the absence of the VM state change handlers in the first place. What you have just mentioned is the visible side effect of the real cause described above.
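For anyone else following this thread: the handler being (not) installed is the virtual-time adjustment hook. A simplified sketch of it, based on upstream target/arm/kvm.c (details may differ slightly in this patch-set):

void kvm_arm_vm_state_change(void *opaque, bool running, RunState state)
{
    CPUState *cs = opaque;
    ARMCPU *cpu = ARM_CPU(cs);

    if (running) {
        /*
         * VM is resuming: write the saved virtual counter back into KVM
         * (KVM_REG_ARM_TIMER_CNT), so guest virtual time does not include
         * the period the VM was stopped or in transit.
         */
        if (cpu->kvm_adjvtime) {
            kvm_arm_put_virtual_time(cs);
        }
    } else {
        /*
         * VM is stopping (e.g. just before migration saves device state):
         * snapshot the virtual counter from KVM.
         */
        if (cpu->kvm_adjvtime) {
            kvm_arm_get_virtual_time(cs);
        }
    }
}

If this handler is never registered for a vCPU, neither leg runs, the vtimer offset ends up including the whole migration downtime, and the guest sees exactly the forward time jump and RCU stalls reported below.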
Attaching the kvm_arm_vm_state_change handler in virt_cpu_plug, after each vCPU has been enabled, looks more reasonable to me. Any suggestions are welcome here.
As mentioned above, we should not move the installation of the handler to any context other than the vCPU thread. This is because when the vCPU exits, its symmetric destruction leg ensures uninstallation of the VM state change handler. Having these in two different places is an invitation to many problems, and it is actually not required at all.
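For reference, the VM state change API is symmetric by design (signatures as in recent QEMU's include/sysemu/runstate.h; quoting from memory, so please check against your tree):

typedef void VMChangeStateHandler(void *opaque, bool running, RunState state);

VMChangeStateEntry *qemu_add_vm_change_state_handler(VMChangeStateHandler *cb,
                                                     void *opaque);
void qemu_del_vm_change_state_handler(VMChangeStateEntry *e);

The entry returned by the add call is exactly what the del call consumes (stored in cs->vmcse here), which is another reason the install and uninstall legs should live in the same, symmetric code paths.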
This helps to configure the vtimer offset properly, and I didn't see any timing issues.
Sure, I have pushed the fix in the below commit. Please pull the change and check if everything works for you.
This will ensure that the handlers are only installed when vCPU threads are being spawned and the vCPU is being realized.
https://github.com/salil-mehta/qemu/commit/a220a9e22f8d6452e8c9759e05a12b759...
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index 80a456ef16..96ceebc7cc 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -883,7 +883,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
         return -EINVAL;
     }

-    if (qemu_enabled_cpu(cs))
+    if (cs->thread_id)
         cs->vmcse = qemu_add_vm_change_state_handler(kvm_arm_vm_state_change,
                                                      cs);

@@ -962,7 +962,7 @@ int kvm_arch_init_vcpu(CPUState *cs)

 int kvm_arch_destroy_vcpu(CPUState *cs)
 {
-    if (qemu_enabled_cpu(cs))
+    if (cs->thread_id)
         qemu_del_vm_change_state_handler(cs->vmcse);

     return 0;
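To spell out the reasoning behind the cs->thread_id guard (my understanding of the invariant, so treat this as a sketch): thread_id stays 0 until a vCPU thread has actually been spawned during realization, which is also when kvm_arch_init_vcpu()/kvm_arch_destroy_vcpu() run for that vCPU. The destroy leg, annotated:

int kvm_arch_destroy_vcpu(CPUState *cs)
{
    /*
     * cs->thread_id is assigned only once a vCPU thread has been
     * spawned, i.e. only for vCPUs that actually went through
     * realization. A still-parked (disabled) vCPU never gets a thread,
     * so no handler was installed for it and none must be removed --
     * install and uninstall remain symmetric within the vCPU thread's
     * lifetime.
     */
    if (cs->thread_id) {
        qemu_del_vm_change_state_handler(cs->vmcse);
    }

    return 0;
}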
Many thanks, Salil.
Regards,
-Vishnu.

On 26-07-2023 15:15, Salil Mehta wrote:
Hi Vishnu,

Got it. I think I am now able to recreate the condition successfully with just 2 vCPUs, even when all of them are present and online. I am analyzing the issue and will get back to you soon.
Thanks
Salil

From: Vishnu Pajjuri OS <vishnu@os.amperecomputing.com>
Sent: Friday, July 21, 2023 10:39 AM
To: Salil Mehta <salil.mehta@opnsrc.net>; Salil Mehta <salil.mehta@huawei.com>
Cc: Vishnu Pajjuri OS <vishnu@os.amperecomputing.com>; Miguel Luis <miguel.luis@oracle.com>
Subject: Re: [External] : RE: vCPU hotplug/unplug for the ARMv8 processors
Hi Salil,

On 20-07-2023 22:56, Salil Mehta wrote:
[+] For your reference, I am attaching the scripts which I am using. Note the subtle difference between the two.

Thanks for sharing your scripts for VM live migration.
Thanks!

On Thu, Jul 20, 2023 at 12:57 PM Salil Mehta <salil.mehta@huawei.com> wrote:
Hi Vishnu,

Aah, I see, it looks like you have 2 physical machines? I am using a same-host setup and use the below commands to migrate the VMs:
Src VM (same host):
Qemu> migrate tcp:0:4444
Dst VM (same host):
$ qemu-arm64-nosmmu […] -incoming tcp:0:4444

I have prepared a setup like below on the same host, with the help of qemu commands only:

[VM1 (P4:C2:H0:U2:O2)] --> [VM2 (P4:C2:H0:U2:O2)] --> [VM3 (P4:C2:H0:U2:O2)]

Migration from VM1 to VM2 works fine. But when I tried to live migrate from VM2 (the same VM migrated from VM1) to VM3, I observed CPU stall issues in this scenario as well. One more observation is that the time always jumps in the VM2 to VM3 iteration:

[root@fedora ~]# date
Wed Nov 21 06:34:54 AM EST 2114

VM1.sh
TOPDIR=/home/vishnu/vcpu_hp
BASECODEPATH=$TOPDIR/pre_v2
QEMUPATH=$BASECODEPATH/qemu/
QEMUBIN=$QEMUPATH/build/aarch64-softmmu/qemu-system-aarch64
ROOTFS_IMG=/home/vishnu/vms/images/vm11.raw
BIOS=/usr/share/AAVMF/AAVMF_CODE.fd
LOG_FILE=/tmp/serial1.log
$QEMUBIN -M virt,accel=kvm,gic-version=3 \
    -smp cpus=2,maxcpus=4 \
    -m 4g \
    -cpu host \
    -nodefaults \
    -nographic \
    -drive if=pflash,format=raw,readonly,file=$BIOS \
    -boot c -hda $ROOTFS_IMG \
    -chardev stdio,id=char0,logfile=$LOG_FILE,signal=off \
    -serial chardev:char0 \
    -monitor tcp:127.0.0.1:55555,server,nowait

VM2.sh
TOPDIR=/home/vishnu/vcpu_hp
BASECODEPATH=$TOPDIR/pre_v2
QEMUPATH=$BASECODEPATH/qemu/
QEMUBIN=$QEMUPATH/build/aarch64-softmmu/qemu-system-aarch64
ROOTFS_IMG=/home/vishnu/vms/images/vm11.raw
BIOS=/usr/share/AAVMF/AAVMF_CODE.fd
LOG_FILE=/tmp/serial1.log
$QEMUBIN -M virt,accel=kvm,gic-version=3 \
    -smp cpus=2,maxcpus=4 \
    -m 4g \
    -cpu host \
    -nodefaults \
    -nographic \
    -drive if=pflash,format=raw,readonly,file=$BIOS \
    -boot c -hda $ROOTFS_IMG \
    -chardev stdio,id=char0,logfile=$LOG_FILE,signal=off \
    -serial chardev:char0 \
    -monitor tcp:127.0.0.1:55556,server,nowait \
    -incoming tcp:0:4444

VM3.sh
TOPDIR=/home/vishnu/vcpu_hp
BASECODEPATH=$TOPDIR/pre_v2
QEMUPATH=$BASECODEPATH/qemu/
QEMUBIN=$QEMUPATH/build/aarch64-softmmu/qemu-system-aarch64
ROOTFS_IMG=/home/vishnu/vms/images/vm11.raw
BIOS=/usr/share/AAVMF/AAVMF_CODE.fd
LOG_FILE=/tmp/serial1.log
$QEMUBIN -M virt,accel=kvm,gic-version=3 \
    -smp cpus=2,maxcpus=4 \
    -m 4g \
    -cpu host \
    -nodefaults \
    -nographic \
    -drive if=pflash,format=raw,readonly,file=$BIOS \
    -boot c -hda $ROOTFS_IMG \
    -chardev stdio,id=char0,logfile=$LOG_FILE,signal=off \
    -serial chardev:char0 \
    -monitor tcp:127.0.0.1:55557,server,nowait \
    -incoming tcp:0:4445

Log:-

[root@fedora ~]#
[  189.884992] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[  189.887474] rcu: 1-...!: (0 ticks this GP) idle=c268/0/0x0 softirq=13837/13837 fqs=0 (false positive?)
[  189.891022] rcu: (detected by 0, t=6002 jiffies, g=7521, q=764 ncpus=2)
[  189.893494] Task dump for CPU 1:
[  189.894875] task:swapper/1 state:R running task stack:0 pid:0 ppid:1 flags:0x00000008
[  189.898597] Call trace:
[  189.899505]  __switch_to+0xc8/0xf8
[  189.900825]  0xffff8000080ebde0
[  189.902236] rcu: rcu_sched kthread timer wakeup didn't happen for 6001 jiffies! g7521 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[  189.906499] rcu: Possible timer handling issue on cpu=1 timer-softirq=3803
[  189.909125] rcu: rcu_sched kthread starved for 6002 jiffies! g7521 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=1
[  189.912934] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
[  189.916226] rcu: RCU grace-period kthread stack dump:
[  189.918122] task:rcu_sched state:I stack:0 pid:15 ppid:2 flags:0x00000008
[  189.921241] Call trace:
[  189.922165]  __switch_to+0xc8/0xf8
[  189.923453]  __schedule+0x248/0x6c0
[  189.924821]  schedule+0x58/0xf0
[  189.926010]  schedule_timeout+0x90/0x178
[  189.927483]  rcu_gp_fqs_loop+0x124/0x438
[  189.928969]  rcu_gp_kthread+0x164/0x1c8
[  189.930401]  kthread+0xe0/0xf0
[  189.931544]  ret_from_fork+0x10/0x20
[  189.932874] rcu: Stack dump where RCU GP kthread last ran:
[  189.935207] Task dump for CPU 1:
[  189.936338] task:swapper/1 state:R running task stack:0 pid:0 ppid:1 flags:0x00000008
[  189.939811] Call trace:
[  189.940673]  __switch_to+0xc8/0xf8
[  189.941888]  0xffff8000080ebde0

Let's first use the Qemu prompt to migrate instead of the web interface, if you can? This will also help us converge on whether this is an interface problem or whether we are really missing something.

Could you please share your Linux and Qemu source repos as well? I'll try to figure out if I'm using any wrong source repositories.
Regards,
-Vishnu.

Thanks
Salil

From: Vishnu Pajjuri OS <vishnu@os.amperecomputing.com>
Sent: Thursday, July 20, 2023 12:49 PM
To: Salil Mehta <salil.mehta@huawei.com>; Vishnu Pajjuri OS <vishnu@os.amperecomputing.com>; Miguel Luis <miguel.luis@oracle.com>
Cc: Salil Mehta <salil.mehta@opnsrc.net>
Subject: Re: [External] : RE: vCPU hotplug/unplug for the ARMv8 processors
Hi Salil,

On 20-07-2023 16:58, Salil Mehta wrote:
Hi Vishnu,

Can I request you to reduce the number of vCPUs to 6?
Test Case:
[VM1 (P6:C4:H0:U2:O4)] --> [VM2 (P6:C4:H0:U2:O4)]
<Exit from qemu prompt. Restart VM1 and then execute below>

With virsh (libvirt) commands, after migrating VM1 to DstHost there is nothing left on SrcHost; even ps shows nothing on SrcHost for the qemu process. So operating VM1 on SrcHost after migrating it to DstHost is not possible. On the DstHost side VM1 is live; there I am able to access VM1's console and manipulate VM1 after migration.

Regards,
-Vishnu.

[VM1 (P6:C4:H0:U2:O4)] <-- [VM2 (P6:C4:H0:U2:O4)]
[...‘n’ times...]
[VM1 (P6:C4:H0:U2:O4)] --> [VM2 (P6:C4:H0:U2:O4)]
<Exit from qemu prompt. Restart VM1 and then execute below>
[VM1 (P6:C4:H0:U2:O4)] <-- [VM2 (P6:C4:H0:U2:O4)]
Thanks Salil.
From: Vishnu Pajjuri OS <vishnu@os.amperecomputing.com>
Sent: Thursday, July 20, 2023 12:16 PM
To: Salil Mehta <salil.mehta@huawei.com>; Vishnu Pajjuri OS <vishnu@os.amperecomputing.com>; Miguel Luis <miguel.luis@oracle.com>
Cc: Salil Mehta <salil.mehta@opnsrc.net>
Subject: Re: [External] : RE: vCPU hotplug/unplug for the ARMv8 processors
Hi Salil,

On 20-07-2023 15:52, Salil Mehta wrote:
Hi Vishnu,
From: Vishnu Pajjuri OS <vishnu@os.amperecomputing.com>
Sent: Tuesday, July 18, 2023 12:39 PM
[...]
@Miguel/Vishnu BTW, I was able to reproduce the CPU stall issue, and it looks to me like it only happens when the number of online vCPUs being migrated at the source VM does not match the number of vCPUs onlined during boot at the destination VM. This means the "maxcpus=N" kernel parameter at the destination VM should be equal to the number of onlined vCPUs being migrated.
I still need to check if this matches the x86_64 behavior. If possible, can you guys confirm below:
[1] Always keep the online vCPUs the same at source and destination, and then wait and check for the CPU stall messages?

Hopefully the below is the test scenario that you are looking for, but I still observed CPU stall issues.
P: Possible vCPUs
C: Cold Plugged vCPUs
H: Hot Plugged vCPUs
U: Unplugged vCPUs
O: Onlined vCPUs
sVM: Source VM
dVM: Destination VM
Test Case: [sVM (P80:C72:H0:U8:O72)] --> [dVM (P80:C72:H0:U8:O72)] - PASS
Migrate the same VM back
[sVM (P80:C72:H0:U8:O72)] <-- [dVM (P80:C72:H0:U8:O72)] - Observed CPU stalls on VM
I have now repeated the above scenario at least 5 times and cannot reproduce the stalls in the case you have mentioned.

Good to hear that you didn't observe any CPU stall issues with your setup. I'm using virsh commands to create the live migration setup. If possible I'll adapt your qemu commands to libvirt; otherwise I would like to reproduce your setup. For that, could you please provide your kernel and qemu repos, and also the complete Qemu command for VM launch and the migration commands you have been using on both the source and destination sides?
A few questions:

Q1: In the above case, there is no hotplug happening at all, right?
Ans: Yes.
Q2: Do you restart the source VM after it has been migrated once to the destination?
Ans: No.
Q3: Can you please paste the complete Qemu command for VM launch and the migration commands you have been using on both the source and destination sides?
Ans: I'm using virsh commands to migrate the VMs.
Commands on SrcHost:

qemu command after running with the virsh command "virsh start vm1 --console":

/usr/local/bin/qemu-system-aarch64 -name guest=vm1,debug-threads=on -S \
-object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-4-vm1/master-key.aes"} \
-blockdev {"driver":"file","filename":"/usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"} \
-blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} \
-blockdev {"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/vm1_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"} \
-blockdev {"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"} \
-machine virt-7.2,usb=off,dump-guest-core=off,gic-version=3,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format,memory-backend=mach-virt.ram \
-accel kvm -cpu host -m 32768 \
-object {"qom-type":"memory-backend-ram","id":"mach-virt.ram","size":34359738368} \
-overcommit mem-lock=off \
-smp 2,maxcpus=4,sockets=1,dies=1,cores=4,threads=1 \
-uuid ea78094e-de32-4caa-827d-7628afff6524 \
-display none -no-user-config -nodefaults \
-chardev socket,id=charmonitor,fd=23,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc -no-shutdown -boot strict=on \
-device {"driver":"pcie-root-port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x1"} \
-device {"driver":"pcie-root-port","port":9,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x1.0x1"} \
-device {"driver":"pcie-root-port","port":10,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x2"} \
-device {"driver":"pcie-root-port","port":11,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x3"} \
-device {"driver":"pcie-root-port","port":12,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x4"} \
-device {"driver":"pcie-root-port","port":13,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x5"} \
-device {"driver":"pcie-root-port","port":14,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x1.0x6"} \
-device {"driver":"pcie-root-port","port":15,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x1.0x7"} \
-device {"driver":"pcie-root-port","port":16,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":true,"addr":"0x2"} \
-device {"driver":"pcie-root-port","port":17,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x2.0x1"} \
-device {"driver":"pcie-root-port","port":18,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x2.0x2"} \
-device {"driver":"pcie-root-port","port":19,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x2.0x3"} \
-device {"driver":"pcie-root-port","port":20,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x2.0x4"} \
-device {"driver":"pcie-root-port","port":21,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x2.0x5"} \
-device {"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pci.2","addr":"0x0"} \
-device {"driver":"virtio-scsi-pci","id":"scsi0","bus":"pci.3","addr":"0x0"} \
-device {"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.4","addr":"0x0"} \
-blockdev {"driver":"file","filename":"/mnt/nfs/vm11.raw","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"} \
-blockdev {"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","driver":"raw","file":"libvirt-2-storage"} \
-device {"driver":"virtio-blk-pci","bus":"pci.5","addr":"0x0","drive":"libvirt-2-format","id":"virtio-disk0","bootindex":1} \
-blockdev {"driver":"file","filename":"/home/vishnu/vms/Fedora-Server-dvd-aarch64-36-1.5.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"} \
-blockdev {"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"} \
-device {"driver":"scsi-cd","bus":"scsi0.0","channel":0,"scsi-id":0,"lun":0,"device_id":"drive-scsi0-0-0-0","drive":"libvirt-1-format","id":"scsi0-0-0-0"} \
-netdev tap,fd=24,vhost=on,vhostfd=26,id=hostnet0 \
-device {"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"52:54:00:4d:8d:9e","bus":"pci.1","addr":"0x0"} \
-chardev pty,id=charserial0 -serial chardev:charserial0 \
-chardev socket,id=charchannel0,fd=22,server=on,wait=off \
-device {"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"} \
-chardev socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/4-vm1-swtpm.sock \
-tpmdev emulator,id=tpm-tpm0,chardev=chrtpm \
-device {"driver":"tpm-tis-device","tpmdev":"tpm-tpm0","id":"tpm0"} \
-audiodev {"id":"audio1","driver":"none"} \
-device {"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.6","addr":"0x0"} \
-object {"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"} \
-device {"driver":"virtio-rng-pci","rng":"objrng0","id":"rng0","bus":"pci.7","addr":"0x0"} \
-msg timestamp=on

virsh command to migrate vm1 from SrcHost to DstHost:

# virsh migrate --live vm1 qemu+ssh://vCPU-HP-Host2/system --unsafe --verbose

Commands on DstHost:

qemu command after vm1 migrated to DstHost:

/usr/local/bin/qemu-system-aarch64 -name guest=vm1,debug-threads=on -S \
-object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-vm1/master-key.aes"} \
-blockdev {"driver":"file","filename":"/usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"} \
-blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} \
-blockdev {"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/vm1_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"} \
-blockdev {"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"} \
-machine virt-7.2,usb=off,gic-version=3,dump-guest-core=off,memory-backend=mach-virt.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-accel kvm -cpu host -m 32768 \
-object {"qom-type":"memory-backend-ram","id":"mach-virt.ram","size":34359738368} \
-overcommit mem-lock=off \
-smp 2,maxcpus=4,sockets=1,dies=1,cores=4,threads=1 \
-uuid ea78094e-de32-4caa-827d-7628afff6524 \
-display none -no-user-config -nodefaults \
-chardev socket,id=charmonitor,fd=23,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc -no-shutdown -boot strict=on \
-device {"driver":"pcie-root-port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x1"} \
-device {"driver":"pcie-root-port","port":9,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x1.0x1"} \
-device {"driver":"pcie-root-port","port":10,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x2"} \
-device {"driver":"pcie-root-port","port":11,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x3"} \
-device {"driver":"pcie-root-port","port":12,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x4"} \
-device {"driver":"pcie-root-port","port":13,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x5"} \
-device {"driver":"pcie-root-port","port":14,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x1.0x6"} \
-device {"driver":"pcie-root-port","port":15,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x1.0x7"} \
-device {"driver":"pcie-root-port","port":16,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":true,"addr":"0x2"} \
-device {"driver":"pcie-root-port","port":17,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x2.0x1"} \
-device {"driver":"pcie-root-port","port":18,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x2.0x2"} \
-device {"driver":"pcie-root-port","port":19,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x2.0x3"} \
-device {"driver":"pcie-root-port","port":20,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x2.0x4"} \
-device {"driver":"pcie-root-port","port":21,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x2.0x5"} \
-device {"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pci.2","addr":"0x0"} \
-device {"driver":"virtio-scsi-pci","id":"scsi0","bus":"pci.3","addr":"0x0"} \
-device {"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.4","addr":"0x0"} \
-blockdev {"driver":"file","filename":"/mnt/nfs/vm11.raw","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"} \
-blockdev {"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","driver":"raw","file":"libvirt-2-storage"} \
-device {"driver":"virtio-blk-pci","bus":"pci.5","addr":"0x0","drive":"libvirt-2-format","id":"virtio-disk0","bootindex":1} \
-blockdev {"driver":"file","filename":"/home/vishnu/vms/Fedora-Server-dvd-aarch64-36-1.5.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"} \
-blockdev {"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"} \
-device {"driver":"scsi-cd","bus":"scsi0.0","channel":0,"scsi-id":0,"lun":0,"device_id":"drive-scsi0-0-0-0","drive":"libvirt-1-format","id":"scsi0-0-0-0"} \
-netdev {"type":"tap","fd":"24","vhost":true,"vhostfd":"26","id":"hostnet0"} \
-device {"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"52:54:00:4d:8d:9e","bus":"pci.1","addr":"0x0"} \
-chardev pty,id=charserial0 -serial chardev:charserial0 \
-chardev socket,id=charchannel0,fd=22,server=on,wait=off \
-device {"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"} \
-chardev socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/1-vm1-swtpm.sock \
-tpmdev emulator,id=tpm-tpm0,chardev=chrtpm \
-device {"driver":"tpm-tis-device","tpmdev":"tpm-tpm0","id":"tpm0"} \
-audiodev {"id":"audio1","driver":"none"} \
-incoming defer \
-device {"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.6","addr":"0x0"} \
-object {"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"} \
-device {"driver":"virtio-rng-pci","rng":"objrng0","id":"rng0","bus":"pci.7","addr":"0x0"} \
-msg timestamp=on

virsh command to migrate vm1 back to SrcHost:

# virsh migrate --live vm1 qemu+ssh://vCPU-HP-Host1/system --unsafe --verbose
Thanks Salil.