Hello,
I am trying to connect a RISC-V VisionFive 2 board to the LAVA web
server. I do not have a PDU at the moment and I do not know which one
would be appropriate for this device setup.
The LAVA master and worker are running on the same machine, so I have
localhost set up. I added the following device dictionary for the VisionFive 2.
{% extends 'jh7100-visionfive.jinja2' %}
{% set console_backend = 'serial' %}
{% set serial_port = '/dev/ttyUSB0' %}
{% set serial_baud_rate = 115200 %}
The dictionary template that the VisionFive 2 extends comes from the
VisionFive 1 device type. As I won't be needing an initramfs or the other
U-Boot environment variables, I am reusing the VisionFive 1 template just
to get the setup working.
Following is the VisionFive 1 dictionary template.
{% extends 'base-uboot.jinja2' %}
{% set uboot_mkimage_arch = 'riscv' %}
{% set console_device = console_device|default('ttyS0') %}
{% set baud_rate = baud_rate|default(115200) %}
{% set booti_kernel_addr = '0x84000000' %}
{% set booti_dtb_addr = '0x88000000' %}
{% set booti_ramdisk_addr = '0x88300000' %}
{% set uboot_initrd_high = '0xffffffffffffffff' %}
{% set uboot_fdt_high = '0xffffffffffffffff' %}
{% set bootloader_prompt = 'VisionFive#' %}
{% set uboot_tftp_commands = [
"tftpboot {KERNEL_ADDR} {KERNEL}",
"tftpboot {RAMDISK_ADDR} {RAMDISK}",
"tftpboot {DTB_ADDR} {DTB}"]
-%}
I was able to run a job on QEMU, but the VisionFive 2 is not detected by
the LAVA web server. The physical connection is fine, because I can
attach to the VisionFive 2 from my computer with picocom, using sudo
picocom -b 115200 /dev/ttyUSB0. I created several job definitions just to
test the setup, but they all just keep loading, and the device health
check is bad. The following is the simplest job that I am trying to run
to see if it completes.
job_name: simple-uboot-test
device_type: visionfive2
priority: medium
visibility: public
timeouts:
  job:
    minutes: 5
actions:
- deploy:
    timeout:
      minutes: 1
    to: u-boot
- boot:
    method: bootloader
    bootloader: u-boot
    connection: serial
    commands:
    - printenv
But all the jobs load indefinitely. This is my first time connecting a
physical device to a LAVA web server, so I would appreciate it if
someone could tell me what the issue is here.
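One thing worth checking before digging into LAVA itself is whether the dispatcher can actually open the serial port. A minimal sketch, assuming the `/dev/ttyUSB0` path from above (the `dialout` group name is a typical Debian default, not something confirmed by the post):

```shell
# Report whether a serial device node exists and is read/write
# accessible to the current user (a sanity check, not a LAVA tool).
check_port() {
    port="$1"
    if [ ! -e "$port" ]; then
        echo "missing: $port"
        return 1
    fi
    if [ -r "$port" ] && [ -w "$port" ]; then
        echo "accessible: $port"
    else
        echo "no access: $port (is the dispatcher user in the dialout group?)"
    fi
}

check_port /dev/ttyUSB0
```

Also make sure picocom is fully disconnected while LAVA runs a job; a second process reading the same port can steal or garble the output LAVA is waiting for.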
Regards,
Ali
Hello Team,
I have a custom script that spins up a PXE server in QEMU; when we then
start a VM on the same network, it boots from the network and fetches its
OS image from the PXE server. I want to do the same in LAVA and run all
my automated OS test cases on the freshly booted VM.
I have a pipeline that sets up the bridge network (the PXE network for
server and host), and I want to trigger a LAVA job that spins up a VM on
that same network. But for that to work, the deploy action requires
image_arg, which in turn requires a mandatory image download. I even
tried removing the image from image_arg, but then it won't boot from the
network.
I am also unable to find an option to pass QEMU arguments when I use
tftp or nfs.
Could you please suggest how to PXE-boot a QEMU VM in LAVA?
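For reference, this is roughly the shape of the extra qemu options a network boot needs, independent of LAVA (a sketch; the bridge name br0 is an assumption based on the bridge pipeline described above):

```shell
# -boot n asks the firmware to try network boot (the PXE/iPXE ROM) first;
# the bridge netdev puts the VM on the same L2 segment as the PXE server.
NETBOOT_OPTS="-boot n -netdev bridge,id=net0,br=br0 -device virtio-net-pci,netdev=net0"
echo "$NETBOOT_OPTS"
```

Whether these can be injected through a LAVA job (for example via the job context of the qemu device type) depends on the instance and release, so that part would need confirming against the device-type template in use.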
Hi, I am trying to use environment variables defined on my lava-server
inside interactive tests.
I have had some success with this, but I am not sure why one method works
and the other does not.
For example, I do something like the following:
echo 'wget --user=some-name --password=${SECRET_PASSWORD}' > somefile.sh
And that works just fine. But when a script already exists on the DUT and
just takes the value as a parameter, it doesn't work.
Example:
some_script.sh -p ${SECRET_PASSWORD} -u some-name
I have also tried accessing the variable directly inside the script,
similar to what somefile.sh looks like on the inside, but had no success
with that.
I have also tried exporting the environment variable under another name
to see if that would work, but to no avail.
Is there something I'm missing? What can I do to achieve this without
hardcoding the value anywhere in my scripts?
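Part of the difference may simply be shell quoting, i.e. *when* the variable gets expanded. A small illustration (the value 's3cret' is a stand-in, not anything from the post):

```shell
# Work in a scratch directory so nothing is left behind.
tmp=$(mktemp -d)
cd "$tmp"

SECRET_PASSWORD='s3cret'

# Single quotes: the literal text ${SECRET_PASSWORD} lands in the file
# and is only expanded later, by whatever shell eventually runs it.
echo 'echo ${SECRET_PASSWORD}' > somefile.sh

# Double quotes: expanded right now, so the file holds the actual value.
echo "echo ${SECRET_PASSWORD}" > otherfile.sh

cat somefile.sh     # -> echo ${SECRET_PASSWORD}
cat otherfile.sh    # -> echo s3cret
```

So when the expansion has to happen on the DUT (as with `some_script.sh -p ${SECRET_PASSWORD}` typed into an interactive session), the DUT's shell must actually have that variable in its environment at that moment; if nothing on the DUT side has exported it, the reference expands to an empty string. Whether your instance substitutes server-side variables into interactive command strings is worth checking in its configuration.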
Regards,
Michael
Hi,
I upgraded my LAVA instance to 2025.04 yesterday. I skipped one release,
so the upgrade was from 2024.09. Most of the jobs I run use
deploy-to-flasher with a custom script handling the board flashing. Up
until 2024.09 (I don't know about 2025.02), "downloads" URLs resulted
in all files being stored in a flat directory. My flashing scripts made
use of this "feature". In 2025.04 this has changed. This commit:
https://gitlab.com/lava/lava/-/commit/4d9f0ebdae9ca53baf6633f4a35e716183bd2…
stores the files in separate directories. It feels like a step in the
right direction, but it broke my flashing scripts. As a quick fix I
added this at the beginning of the script:
# Hardlink every file from the per-download directories back into the
# old flat top-level layout (assumes one file per directory).
DOWNLOAD_DIRS=$(find . -maxdepth 1 -mindepth 1 -type d -printf '%f ')
for d in $DOWNLOAD_DIRS; do
    FILE_TO_COPY=$(find "./$d" -maxdepth 1 -mindepth 1 -type f -printf '%f')
    ln "$PWD/$d/$FILE_TO_COPY" "$PWD/$FILE_TO_COPY"
done
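To see what the loop above achieves, here is the same idea reproduced in a scratch directory (the directory and file names are made up for the demo, not LAVA's real download layout):

```shell
# Simulate the 2025.04 per-download directory layout...
tmp=$(mktemp -d)
cd "$tmp"
mkdir kernel rootfs
touch kernel/Image rootfs/rootfs.ext4

# ...then flatten it: hardlink each file into the top-level directory,
# so scripts expecting the old flat layout keep working.
for d in */; do
    for f in "$d"*; do
        ln "$f" "$(basename "$f")"
    done
done

ls "$tmp"    # Image and rootfs.ext4 now sit next to kernel/ and rootfs/
```

The nested loop also copes with more than one file per download directory, which the quick fix above does not.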
This isn't great, but it solves the issue for now. After a short
discussion with Chase we came to the conclusion that the "more correct"
solution is to implement support for "uniquify" for "downloads" URLs.
Chase sent a patch here:
https://gitlab.com/lava/lava/-/merge_requests/2795
This should be available in the next release.
I hope this message helps someone with similar issues :)
Best Regards,
Milosz