Hi,
Recently my "lava-worker" host name was changed and showed the same offline
in Lava-server.
Is there any easy way to edit the old hostname with the new hostname (in
lava-server side software) to bring back the lava-worker/dispatcher online ?
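In case there is no built-in way, one thing I was considering is re-pointing the
existing devices to a new worker record from the Django shell. This is only a
sketch, assuming the Worker.hostname and Device.worker_host fields of
lava_scheduler_app are still laid out the way I remember them (run via
"lava-server manage shell"):

# Sketch only: re-point devices from the old worker name to the new one.
# Assumes Device has a "worker_host" foreign key to Worker and that Worker
# is keyed by "hostname"; the hostnames below are placeholders.
from lava_scheduler_app.models import Device, Worker

new_worker, _ = Worker.objects.get_or_create(hostname="new-hostname")
Device.objects.filter(worker_host__hostname="old-hostname").update(worker_host=new_worker)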
Regards,
Koti
Hi,
I would like to install lava-dispatcher 2021.10 instead of the latest 2021.11
release.
"apt install lava-dispatcher" installs 2021.11, but I have not found a way to
install 2021.10.
Is there a way to install lava-dispatcher 2021.10 at this point, or do we
always have to install the latest version?
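As far as I know apt can select a specific version with
"apt install lava-dispatcher=<version string>", and "apt list -a lava-dispatcher"
shows which versions the configured repository still carries, but that only
helps if 2021.10 is still published there.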
Regards,
koti
Hi Nagendra,
I would recommend sending a patch now that the issue is resolved, so that the
LAVA team can review it.
Regards,
Koti
On Thu, 25 Nov 2021 at 17:30, <lava-users-request(a)lists.lavasoftware.org>
wrote:
>
> Message: 1
> Date: Wed, 24 Nov 2021 18:02:59 +0530
> From: Nagendra Singamsetti <nag.singam91(a)gmail.com>
> To: lava-users(a)lists.lavasoftware.org
> Subject: Re: [Lava-users] Constant failure_retry does not exist in the
> device config 'constants'
>
> Hi all,
> The issue was solved by applying the latest change to the base.jinja2
> device-type template:
>
> # Set the failure retry to default or override it
> failure_retry: {{ failure_retry|default(1) }}
> # Override boot_retry
> boot_retry: {{ boot_retry }}
> {% endblock constants -%}
>
> I am not sure why base.jinja2 was not updated during the 2021.10
> lava-server upgrade. If anyone hits the same issue, please try this change.
>
> thanks
>
> Regards
>
> Nagendra S
>
>
Hi All,
I am stuck with a strange issue on the latest version, 2021.10:
- *error_msg*: Constant failure_retry does not exist in the device
config 'constants' section.
- *error_type*: Configuration
ConfigurationError: The LAVA instance is not configured correctly. Please
report this error to LAVA admins.
My setup was working fine with 2021.05, but after upgrading to the latest
version, 2021.10, none of the jobs work. Every job fails with the
ConfigurationError exception above.
Looking forward to your kind response!
Regards
Nagendra S
Hi,
I'm running a project which requires changes in device status in LAVA.
It requests that a device be put into the 'maintenance' state and later back
into the 'good' state. Initially I was using my personal API token; since I
was a superuser, it all worked. A few weeks ago I tried to move to a more
permanent solution with a dedicated user. I granted all the required access
rights to the new user, but PUT calls to
api/v0.2/devices/<device_name> were rejected. After looking at the code it
seems that only a superuser is allowed to make such calls. Is there a reason
for that?
Changes were introduced in this commit:
https://git.lavasoftware.org/lava/lava/-/commit/2bdbd462d745b45308faf86dd37…
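For reference, the call that gets rejected for the dedicated user looks roughly
like this; the instance URL, device name, token and the exact "health" field
name/value below are illustrative rather than copied from my setup:

import requests

# Hedged sketch of the PUT that is rejected for a non-superuser; all values
# below are placeholders.
resp = requests.put(
    "https://lava.example.com/api/v0.2/devices/my-device/",
    headers={"Authorization": "Token 0123456789abcdef"},
    json={"health": "Maintenance"},  # later changed back to "Good"
)
print(resp.status_code, resp.text)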
Best Regards,
Milosz
Hello,
I received an error message when using the 'uuu' boot action on v2021.09.
I tried poking around in the LAVA source, but I'm still fairly new to the repo,
so I'm a bit stumped.
Any comments or suggestions would be greatly appreciated.
Thank you,
Davis
Traceback (most recent call last):
  File "/bin/lava-run", line 240, in main
    job = parse_job_file(logger, options)
  File "/bin/lava-run", line 151, in parse_job_file
    env_dut=env_dut,
  File "/usr/lib/python3/dist-packages/lava_dispatcher/parser.py", line 163, in parse
    test_counts[namespace],
  File "/usr/lib/python3/dist-packages/lava_dispatcher/parser.py", line 47, in parse_action
    cls = Boot.select(device, parameters)
  File "/usr/lib/python3/dist-packages/lava_dispatcher/logical.py", line 276, in select
    res = c.accepts(device, parameters)
  File "/usr/lib/python3/dist-packages/lava_dispatcher/actions/boot/uuu.py", line 119, in accepts
    params = device["actions"]["boot"]["methods"]["uuu"]["options"]
KeyError: 'uuu'
LAVABug: This is probably a bug in LAVA, please report it.
---------------------------------------------------------------------------------------------------------------------------
device_type: imx8m
job_name: flash with uuu
timeouts:
  job:
    minutes: 10
  action:
    minutes: 10
  connection:
    minutes: 5
visibility: public
actions:
- deploy:
    to: uuu
    images:
      boot:
        url: file:///home/davis/uuu/imx-boot-64.bin
      system:
        url: file:///home/davis/uuu/sd.img
- boot:
    method: uuu
    commands:
    - uuu: -b emmc_all {boot} {system}
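From the traceback, accepts() looks up
device["actions"]["boot"]["methods"]["uuu"]["options"], so I assume the device
dictionary rendered for my board is missing a uuu entry under the boot methods.
This is the quick check I intend to run against an exported copy of the device
dictionary (the file name is just an example):

import yaml

# Load an exported/rendered device dictionary and check the key path that
# the traceback fails on; "imx8m-01.yaml" is a placeholder file name.
with open("imx8m-01.yaml") as f:
    device = yaml.safe_load(f)

methods = device.get("actions", {}).get("boot", {}).get("methods", {})
print("uuu" in methods)                      # False would reproduce the KeyError
print(methods.get("uuu", {}).get("options"))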
Hi,
I'm using https://git.lavasoftware.org/lava/pkg/docker-compose to build
example setups with LAVA, and I want to add a second, remote worker.
To do so I set URL and DC_DISPATCHER_HOSTNAME in .env
and run `docker-compose up lava-dispatcher`.
Sadly, the dispatcher always tries to connect to "lava-server", even though
the correct URL is passed into the container via environment variables.
It looks like the value gets overwritten in
https://git.lavasoftware.org/lava/lava/-/blob/master/docker/share/entrypoin…
because the lava-worker file looks like this inside the container:
...
# Logging level should be uppercase (DEBUG, INFO, WARN, ERROR)
# LOGLEVEL="DEBUG"
# Server connection
URL="http://lava-server:8000/"
# TOKEN="--token <token>"
WS_URL="--ws-url http://lava-publisher:8001/ws/"
After digging around a bit I found out that this environment file comes from
https://git.lavasoftware.org/lava/pkg/docker-compose/-/blob/master/overlays…
This effectively makes it impossible to configure the worker via the .env of
the docker-compose repo.
The order in which defaults are loaded in
https://git.lavasoftware.org/lava/lava/-/blob/master/docker/share/entrypoin…
should be changed, so that the lava-worker file is imported first and the
environment variables are applied afterwards.
Overall there are two sets of defaults and two ways to configure the worker
here, which is a bit confusing and redundant.
In my opinion the .env of the docker-compose repo is a well-known and
established way of configuring containers and should take precedence.
Regards,
Beni
--
Benedikt Braunger
emlix GmbH, http://www.emlix.com
Hi
I set up a LAVA server a few years ago on Debian 9.5 Stretch (with Python
3.5), and it was working correctly.
Recently I updated Python to 3.6, but lava-server stopped working correctly;
for example, the HTTP server does not work.
So I fell back to Python 3.5 and reinstalled the LAVA server at the same time,
but unfortunately the device does not work this time.
The device status is set to "Bad (Invalid device configuration)" by
lava-health without any real check being run.
The device used to work correctly and only changed to 'Bad' when a health
check job failed.
Has some device-related option been changed from optional to mandatory that
could cause this issue? How can I track this down? I can't see any errors in
the log files under /var/log/lava-server.
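In case it is relevant, I was going to try dumping the device dictionary over
XML-RPC to see whether it still renders; this assumes the
scheduler.devices.get_dictionary call (and its render argument) is available on
this rather old lava-server, and all names below are placeholders:

import xmlrpc.client

# Placeholder user, token, host and device name; the second argument is
# assumed to ask the server to render the jinja2 template.
server = xmlrpc.client.ServerProxy("http://myuser:mytoken@localhost/RPC2")
print(server.scheduler.devices.get_dictionary("my-device", True))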
Thanks,
- Kever