Hi,
I have been asked by our hardware dev team to stand up a server that will
run LAVA.
I was advised that Debian was the only supported OS.
But since they will be running the official Docker images, is there any
reason I couldn't use Ubuntu 24.04 LTS instead?
Thank you
Greg
Hello,
I would like to replace the legacy -net values with -netdev in the /etc/lava-server/dispatcher-config/device-types/qemu.jinja2 file, but even after creating a new jinja file my device does not pick it up.
I have tried restarting all the LAVA services, but it still does not work. Could you please share how I can change or update an existing jinja file?
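For reference, the kind of change I am after on the qemu command line is the following (argument forms taken from the QEMU documentation; net0 and the NIC model are just examples, and the trailing "..." stands for the rest of the template's arguments):

```
# Legacy (deprecated) form currently emitted by the template:
qemu-system-x86_64 -net nic,model=virtio -net user ...

# Modern form I would like the template to emit instead:
qemu-system-x86_64 -netdev user,id=net0 -device virtio-net-pci,netdev=net0 ...
```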
Thanks,
Sweta
Hi LAVA users,
I have set up a LAVA server and dispatcher, and connected my device (a banana-pi board) serially via UART to the host machine on which the dispatcher is running.
I have defined a job which uses the "tmpfs" deploy action to download an .itb file from a local file server. After that, under the "boot" action, I am using the u-boot method to boot the device.
Output:
1. The deploy images section runs successfully.
2. The u-boot action starts.
3. After connecting to the device using telnet, the device is reset (pdu-reboot).
4. The device enters the Y-modem state, waiting for an image to be transferred.
5. After some time: "CCC CCCCCCC[ 99.943] spl: ymodem err - Timed out"
Issue/Question:
6. At this stage, I want the LAVA dispatcher (host machine) to send the .itb file over UART to the device using Y-modem.
7. But LAVA is unable to execute the command "sz -Y --ymodem /uboot-opensbi.itb < /dev/ttyUSB0 > /dev/ttyUSB0" to transfer the opensbi.itb file to the device.
8. I need to know what I am missing in the job definition for this case.
9. Which device template should I use, and how should it be extended?
10. Is there any job definition syntax to fulfil my requirement?
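A sketch of the job definition structure described above (the URL and the command-set name are hypothetical; the exact keys depend on the device template in use):

```yaml
actions:
- deploy:
    to: tmpfs
    images:
      itb:
        url: http://fileserver.local/uboot-opensbi.itb  # placeholder local file server
- boot:
    method: u-boot
    commands: ramdisk       # placeholder command-set name from the device template
    prompts:
    - "=>"
```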
Regards,
Hello Team,
I am trying to use Azure AD for user authentication in LAVA. I followed the document https://validation.linaro.org/static/docs/v2/authentication.html#using-open… and created a yaml file with all details under /etc/lava-server/settings.d and restarted lava-server-gunicorn.service.
Additionally, I added the SECURE_PROXY_SSL_HEADER configuration so that the X-Forwarded-Proto header is trusted for detecting HTTPS.
cat secure_proxy_ssl_header.yaml
SECURE_PROXY_SSL_HEADER:
- HTTP_X_FORWARDED_PROTO
- https
During sign-in I am able to get the Microsoft login page, but after successful login I get "Bad Request (400)".
Could you please suggest whether LAVA can work with mozilla-django-oidc, or whether I am missing any configuration?
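A minimal sketch of the settings.d fragment involved, assuming LAVA sits behind a TLS-terminating reverse proxy. The hostname below is a placeholder; note that in Django a plain "Bad Request (400)" frequently means the request's Host header is not listed in ALLOWED_HOSTS, so that is worth checking alongside the proxy header:

```yaml
# /etc/lava-server/settings.d/secure_proxy_ssl_header.yaml (sketch)
SECURE_PROXY_SSL_HEADER:
- HTTP_X_FORWARDED_PROTO
- https
ALLOWED_HOSTS:
- lava.example.com   # placeholder: must match the Host header the proxy forwards
```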
Hello Team,
I have a custom script which spins up a PXE server in QEMU; when we spin up a VM on the same network, it boots from the network and takes its OS image from the PXE server. I want to do the same in LAVA and execute all my automated OS test cases on the newly booted VM.
I have a pipeline which sets up the bridge network (the PXE network for server and host), and I want to trigger a LAVA job to spin up a VM using the same network. But for that to work, the deploy action requires image_arg, which in turn requires a mandatory image download. I even tried to remove the image from image_arg, but then it won't boot from the network.
Also, I am not able to find an option to pass QEMU arguments when using tftp or nfs.
Could you please suggest how to PXE-boot a QEMU VM in LAVA?
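For the plain qemu boot method, raw QEMU arguments can be passed through the job context via extra_options. A sketch of a network-boot attempt under that assumption (the bridge name br0 is specific to my setup and must already exist on the worker; whether this interacts correctly with the mandatory image download is exactly the open question):

```yaml
context:
  arch: amd64
  extra_options:
  - "-netdev bridge,id=net0,br=br0"       # assumes a preconfigured host bridge
  - "-device virtio-net-pci,netdev=net0"
  - "-boot n"                             # ask the firmware to try network boot first
```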
Hi, I am trying to make use of environment variables defined on my lava-server inside interactive tests.
I have had some success with this, but I am not sure why one method works over the other.
For example, I do something like the following:
echo 'wget --user=some-name --password=${SECRET_PASSWORD}' > somefile.sh
And that works just fine. But when I already have a script (say it already exists on the DUT) that takes the password as a parameter, it doesn't work.
Example:
some_script.sh -p ${SECRET_PASSWORD} -u some-name
I have also tried accessing it directly in the script, similar to what somefile.sh would look like on the inside, but had no success with that.
I have tried exporting the environment variable as another one to see if that would work, but to no avail.
Is there something I'm missing? What can I do to achieve this without hardcoding the password anywhere in my scripts?
Regards,
Michael
Hi,
I upgraded my LAVA instance to 2025.04 yesterday. I skipped one release,
so the upgrade was from 2024.09. Most of the jobs I run use
deploy-to-flasher with a custom script handling the board flashing. Up
until 2024.09 (I don't know about 2025.02), "downloads" URLs resulted
in all files being stored in a flat directory. My flashing scripts made use
of this "feature". In 2025.04 this has changed. This commit:
https://gitlab.com/lava/lava/-/commit/4d9f0ebdae9ca53baf6633f4a35e716183bd2…
stores the files in separate directories. It feels like a step
in the right direction, but it broke my flashing scripts. As a quick
fix I added this at the beginning of the script:
DOWNLOAD_DIRS=$(find . -maxdepth 1 -mindepth 1 -type d -printf '%f ')
for d in $DOWNLOAD_DIRS; do
    # link each file individually, so directories holding more than
    # one download are also handled correctly
    for f in "$d"/*; do
        [ -f "$f" ] && ln "$PWD/$f" "$PWD/$(basename "$f")"
    done
done
This isn't great, but it solves the issue for now. After a short
discussion with Chase we came to the conclusion that a "more correct"
solution is to implement support for "uniquify" for "downloads"
URLs. Chase sent a patch here:
https://gitlab.com/lava/lava/-/merge_requests/2795
This should be available in the next release.
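Once that merge request lands, I would expect the job syntax to look roughly like this (this is my reading of the patch; the exact parameter name and placement may differ in the released version):

```yaml
- deploy:
    to: downloads
    images:
      rootfs:
        url: https://example.com/rootfs.img  # placeholder URL
        uniquify: false  # keep the old flat layout instead of per-image directories
```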
I hope this message helps someone with similar issues :)
Best Regards,
Milosz
Hi,
I am trying to write a test that conditionally extracts files from a zip archive for use in some scripts: unzip the file, copy/move the files I need out, then remove the zip file and the directory created by unzipping it.
However, the zip file size exceeds the disk space of the DUT.
I have considered something like a MultiNode job, where the DUT is booted alongside a QEMU instance, and the file is downloaded and extracted
on the QEMU instance and the needed files are moved onto the DUT, but I would prefer not to do that if possible.
Is there any way to download the zip file mid-test to the worker instead, extract the files I need, and move them onto the DUT? Ideally without modifications to
the LAVA code, but if that is necessary I can figure out an interim solution and come back to it later.
Regards,
Michael
Hi,
I am trying to get LDAP working in my LAVA instance, and have managed to get logging in working. The issue comes when I try to use django-auth-ldap's MIRROR_GROUPS setting.
I am already aware that LAVA does not expose it as configurable, and I have already taken steps to add the relevant lines in lava_server/settings/common.py to make it work (initialise it to None, fetch it with values.get() in the update method, then run eval on the value in the LDAP section), but it still won't work. All other required settings are clearly working just fine, and I can even set USER_FLAGS_BY_GROUP; however, I would prefer to mirror certain groups that users are members of and assign permissions to those groups.
Do I need to pre-create the groups before logging in to LAVA, or am I missing something else / doing something wrong?
Original common.py source: https://gitlab.com/lava/lava/-/blob/master/lava_server/settings/common.py