Hi all,
I'm basically repeating [1] here, as there has been no reaction for some
months now. Maybe I used the wrong communication channel, let's see...
We have a testsuite that is able to trigger an RCU warning inside the
Linux kernel. My expectation was that whenever a kernel warning / oops
/ call stack dump / ... occurs, the LAVA job is marked as "failed".
This assumption seems to be wrong. It took us some time to realize that we
have a real problem, as manual inspection of the test logs only happens
from time to time.
After scanning the code my understanding is that the output of the
connection (serial connection in my case) is only parsed during kernel
boot (until the login action takes over). That is not sufficient for
detecting problems that happen during test execution.
Is there a way to scan the full log for the same patterns that are used
by the boot action? If so, how can that be configured? Whenever a kernel
problem occurs, my test run should be marked as "failed".
Any ideas? Did I overlook something?
Best regards,
Florian
[1] https://git.lavasoftware.org/lava/lava/-/issues/576
Hi All,
I am trying to set up LAVA 2023.08 (lava-server and lava-dispatcher) on a
single machine. I installed it successfully and was able to add workers and a
QEMU device.
From the logs, it seems the workers are communicating with the lava-server. I
am trying to execute a simple QEMU sample job, but it always stays in the
Submitted state. I verified that the device dictionary is correct.
As per my understanding, if communication is happening between
lava-server and the worker, then a device configured on
the worker should be assigned; but in my case, no device is assigned to the
test job and it always remains in the Submitted state.
Can someone let me know if any special settings are required for running a
test job on a QEMU device, or share any docs links to resolve this? Any
input will be appreciated.
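For anyone debugging the same symptom: my understanding is that a job stays Submitted when no matching device is both idle and healthy, so the device state is the first thing I would check. A sketch of the checks (assuming lavacli is configured with an identity; the device name and job id are placeholders):

```shell
# Is the qemu device state Idle and its health Good/Unknown?
lavacli devices list
lavacli devices show qemu01     # "qemu01" is a placeholder hostname

# What does the scheduler report for the stuck job?
lavacli jobs show 1234          # "1234" is a placeholder job id
```

A device left in Maintenance or Bad health (for example after a failed health check) will never be assigned, which matches the "stuck in Submitted" behaviour.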
My Testjob YAML:
https://docs.lavasoftware.org/lava/examples/test-jobs/qemu-amd64-standard-s…
Thanks,
Ankit
Hello,
Is there a standard way to reboot the ECU by calling the reboot command defined in the device type (hard_reset_command)
between test cases, so we are sure the tests are not impacting each other?
Thanks.
Hi All,
I am setting up an HTTPS instance of LAVA. I can access the LAVA UI and am
able to log in as well. I am using a self-signed SSL certificate.
I have added URL="https://172.16.60.178/" in the
/etc/lava-dispatcher/lava-worker file and restarted the service. It gives
me the below error:
2023-10-30 07:10:47,383 ERROR -> server error: code 503
2023-10-30 07:10:47,383 DEBUG --> HTTPSConnectionPool(host='172.16.60.178', port=443): Max retries exceeded with url: /scheduler/internal/v1/workers/debian/?version=2023.10 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1123)')))
Do we need to configure some extra settings when using a
self-signed certificate? Any help will be appreciated.
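One approach that should work for a self-signed certificate is to add it to the worker's system trust store instead of configuring LAVA itself. This is a sketch, assuming a Debian-based worker; the certificate file name is a placeholder:

```shell
# Copy the server's certificate into the local CA store and rebuild it
cp lava-server.crt /usr/local/share/ca-certificates/lava-server.crt
update-ca-certificates

# Restart the worker so its HTTPS client picks up the updated CA bundle
systemctl restart lava-worker
```

Note that on Debian, python3-requests is patched to use the system CA store; on other distributions you may additionally need to point REQUESTS_CA_BUNDLE at the generated bundle (usually /etc/ssl/certs/ca-certificates.crt).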
Thanks,
Ankit
Hello everyone,
There seems to be a bug in LAVA. I was on version 2022.04 and have also tried 2023.03; both versions show the same bug.
The same configuration works in a 2018 build of LAVA on an old machine.
I am trying to connect to an always-on board via SSH.
The healthcheck is failing with this error :
lava-dispatcher, installed at version: 2023.03
start: 0 validate
Start time: 2023-04-12 14:07:00.373707+00:00 (UTC)
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/lava_dispatcher/job.py", line 198, in validate
    self._validate()
  File "/usr/lib/python3/dist-packages/lava_dispatcher/job.py", line 183, in _validate
    self.pipeline.validate_actions()
  File "/usr/lib/python3/dist-packages/lava_dispatcher/action.py", line 190, in validate_actions
    action.validate()
  File "/usr/lib/python3/dist-packages/lava_dispatcher/actions/deploy/ssh.py", line 81, in validate
    if "serial" not in self.job.device["actions"]["deploy"]["connections"]:
KeyError: 'connections'
validate duration: 0.00
case: validate
case_id: 244238
definition: lava
result: fail
Cleaning after the job
Root tmp directory removed at /var/lib/lava/dispatcher/tmp/8857
LAVABug: This is probably a bug in LAVA, please report it.
case: job
case_id: 244239
definition: lava
error_msg: 'connections'
error_type: Bug
result: fail<https://10.1.52.17/results/testcase/244239>
The health check looks like this:
job_name: SSH check
timeouts:
  job:
    minutes: 10
  action:
    minutes: 2
priority: medium
visibility: public
actions:
- deploy:
    timeout:  # timeout for the connection attempt
      seconds: 30
    to: ssh
    os: oe
- boot:
    timeout:
      minutes: 2
    prompts: ['root@(.*):~#']
    method: ssh
    connection: ssh
- test:
    timeout:
      minutes: 5
    definitions:
    - repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: smoke-tests-basic
          description: "Basic smoke test"
        run:
          steps:
          - lava-test-case linux-linaro-ubuntu-pwd --shell pwd
          - lava-test-case linux-linaro-ubuntu-uname --shell uname -a
          - lava-test-case linux-linaro-ubuntu-vmstat --shell vmstat
          - lava-test-case linux-linaro-ubuntu-ip --shell ip a
      from: inline
      name: smoke-tests-basic
Any ideas?
Best regards,
Sebastian
Hello everyone,
I have a worker in a docker container, and the container runs on an Ubuntu x86 PC.
Now I want to make the Ubuntu x86 PC a device and connect that device to the worker running on it, so I can execute job test action commands on the device.
Which kind of connection should I choose?
I am new to LAVA, and I am trying to create a new device.
There is no u-boot, kernel, etc. on my system.
For boot, we just need to run echo "reset 1" > /dev/ttyUSB0 on the host; that is the mcu serial port.
Then I can get the boot log from /dev/ttyUSB1, which is the soc serial port.
The following is the job:
device_type: orinshort
job_name: for orinshort device
timeouts:
  job:
    minutes: 20
  action:
    minutes: 15
priority: medium
visibility: public
actions:
- deploy:
    timeout:
      minutes: 10
    to: flasher
    images:
      kernel:
        url: http://10.19.207.190/static/docs/v2/contents.html#contents-first-steps-using
The following is my device dictionary; it is very simple:
{# orin test short #}
{% extends 'orinshort.jinja2' %}
{% set connection_list = ['usb1'] %}
{% set connection_tags = {'usb1': ['primary', 'telnet']} %}
{% set connection_commands = {'usb1': 'telnet localhost 10009'} %}
{% set flasher_reset_commands = ['/tmp/test.sh'] %}
{% block body %}
actions:
  deploy:
    methods:
      flasher:
        commands: {{ flasher_reset_commands }}
{% endblock body %}
I don't know how to set up boot with the serial port.
1. About the command echo "reset 1" > /dev/ttyUSB0:
1.1 When I run the command in the host shell (echo "reset 1" > /dev/ttyUSB0), the soc is reset.
1.2 When I add it to test.sh and run test.sh via the following device configuration, the soc is reset too:
actions:
  deploy:
    methods:
      flasher:
        commands: {{ flasher_reset_commands }}
1.3 When I use {% set flasher_reset_commands = ['echo "reset 1" > /dev/ttyUSB0) '] %}, it doesn't work.
{% set flasher_reset_commands = ['echo \"reset 1\" > /dev/ttyUSB0) '] %} doesn't work either.
So how can I echo a command to /dev/ttyUSB0 via flasher -> commands?
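A guess, not a verified answer: if LAVA executes flasher commands without a shell, the redirection `>` is never interpreted, which would explain why the same line works inside test.sh but not inline. Wrapping the command in an explicit shell might help (untested sketch):

```jinja
{# Untested workaround: let sh interpret the redirection #}
{% set flasher_reset_commands = ['sh -c \'echo "reset 1" > /dev/ttyUSB0\''] %}
```

Also note the stray ')' inside the attempts quoted above, which would break the command even under a shell.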
2. How do I set up boot with the serial port in the device jinja2 file?
If I add the following code, the test can't run:
boot:
  connections:
    serial: usb1
3. I think the normal process is to configure deploy, boot and the other operations in the job, but I don't know how to do that in my case.
We just run echo "reset 1" > /dev/ttyUSB0 on the host to start the system; how do I select the boot method,
and how do I run flasher_reset_commands from the job yaml file?
4. If the job can't start:
the job state stays Submitted and the test never runs. How do I debug this? Where is the log?
I am new to LAVA and my system is special: like docker, there is no u-boot or kernel, and we need special commands to deploy and boot.
So it is hard for me; please give me a hand.
Thanks Paweł Wieczorek,
I wasn't aware of this new feature. Magic job by you guys, thanks.
-----Original Message-----
From: lava-users-request(a)lists.lavasoftware.org <lava-users-request(a)lists.lavasoftware.org>
Sent: Thursday, October 12, 2023 8:00 AM
To: lava-users(a)lists.lavasoftware.org
Subject: [EXT] lava-users Digest, Vol 61, Issue 10
Date: Wed, 11 Oct 2023 12:43:51 +0200
From: Paweł Wieczorek <pawiecz(a)collabora.com>
Subject: [lava-users] Re: About timeouts.
To: lava-users(a)lists.lavasoftware.org
Hi Larry,
On 11.10.2023 08:04, Larry Shen wrote:
>
> Hi, guys, I have a question about timeout:
>
> 1) For next job, the boot action block's timeout will be 5 minutes,
> while pdu-reboot timeout will be 10 seconds.
>
> timeouts:
>   job:
>     minutes: 10
>   action:
>     minutes: 5
>   actions:
>     pdu-reboot:
>       seconds: 10
>   connection:
>     minutes: 2
>
> actions:
> - boot:
>     failure_retry: 4
>     method: bootloader
>     bootloader: u-boot
>     commands: []
>     prompts: ['=>']
>
> 2. But after we add a timeout to the boot action, the boot action
> timeout is 2 minutes now, that's OK. But the individual pdu-reboot
> timeout becomes whatever is left of those 2 minutes.
>
> actions:
> - boot:
>     failure_retry: 4
>     method: bootloader
>     bootloader: u-boot
>     commands: []
>     prompts: ['=>']
>     timeout:
>       minutes: 2
>
> My question is: for the second item, is it possible to let
> pdu-reboot keep the value of "10 seconds"?
>
> The above is just an example to explain my question. What I really want
> to achieve is: sometimes I want to specify the timeout for "u-boot wait
> for interrupt" and have that individual sub-action fail quickly, so we
> can retry it quickly without waiting for the whole boot action timeout.
>
>
You can also set action block timeouts for individual actions [0][1]; in your case it could look like:
actions:
- boot:
    failure_retry: 4
    method: bootloader
    bootloader: u-boot
    commands: []
    prompts: ['=>']
    timeout:
      minutes: 2
    timeouts:
      pdu-reboot:
        seconds: 10
This feature should be available if you're running LAVA 2023.01 or newer [2].
[0]
https://lava.collabora.dev/static/docs/v2/actions-timeout.html#individual-a…
[1]
https://lava.collabora.dev/static/docs/v2/timeouts.html#action-block-overri…
[2]
https://gitlab.com/lava/lava/-/commit/15650f11aa10931c9b2a148ae16561b748a38…
Kind regards,
Paweł
> Any idea? Thanks.
>
> Regards,
>
> Larry
>
>
> _______________________________________________
> lava-users mailing list --lava-users(a)lists.lavasoftware.org
> To unsubscribe send an email tolava-users-leave(a)lists.lavasoftware.org
Hi, guys, I have a question about timeout:
1) For the next job, the boot action block's timeout will be 5 minutes, while the pdu-reboot timeout will be 10 seconds.
timeouts:
  job:
    minutes: 10
  action:
    minutes: 5
  actions:
    pdu-reboot:
      seconds: 10
  connection:
    minutes: 2
actions:
- boot:
    failure_retry: 4
    method: bootloader
    bootloader: u-boot
    commands: []
    prompts: ['=>']
2) But after we add a timeout to the boot action, the boot action timeout is 2 minutes now, that's OK. But the individual pdu-reboot timeout becomes whatever is left of those 2 minutes.
actions:
- boot:
    failure_retry: 4
    method: bootloader
    bootloader: u-boot
    commands: []
    prompts: ['=>']
    timeout:
      minutes: 2
My question is: for the second item, is it possible to let pdu-reboot keep the value of "10 seconds"?
The above is just an example to explain my question. What I really want to achieve is: sometimes I want to specify the timeout for "u-boot wait for interrupt" and have that individual sub-action fail quickly, so we can retry it quickly without waiting for the whole boot action timeout.
Any idea? Thanks.
Regards,
Larry
Hi,
There are an mcu and a soc on my board, with two serial ports for them (ttyUSB0: mcu, ttyUSB1: soc). The host PC (used as the LAVA worker) is connected to both serial ports, and the command to reboot the soc is echo "soc boot" > /dev/ttyUSB0; we can then see the boot log on ttyUSB1.
We want to use LAVA to test the soc system; it is arm64 with Linux, without u-boot, and we hope to add a build action on the device side.
So the deploy and boot steps should be:
1. Run build.sh on the host PC (the LAVA worker) and check the result (pass/fail).
2. Run echo "soc burn" > /dev/ttyUSB0 on the host PC to switch the soc to burn mode, and check the result (pass/fail).
3. Run burn.sh on the host PC to flash the soc, and check the result (pass/fail).
4. Run echo "soc boot" > /dev/ttyUSB0 to reboot the soc, and check the result (pass/fail).
5. Connect to /dev/ttyUSB1 to get the soc boot log, and check the result (pass/fail).
6. SSH into the Linux system on the soc.
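To make the question concrete, this is a rough sketch of how I imagine the device dictionary could look. Everything here is hypothetical: the template name, the script paths, and the flasher_deploy_commands variable are assumptions based on flasher examples, not a verified configuration:

```jinja
{# Hypothetical sketch only: host-side scripts do the build/burn steps,
   the soc console on ttyUSB1 is exposed via ser2net and reached via telnet #}
{% extends 'socboard.jinja2' %}
{% set connection_list = ['soc'] %}
{% set connection_tags = {'soc': ['primary', 'telnet']} %}
{% set connection_commands = {'soc': 'telnet localhost 10001'} %}
{% set flasher_deploy_commands = ['/usr/local/bin/build.sh',
                                  '/usr/local/bin/soc-burn-mode.sh',
                                  '/usr/local/bin/burn.sh'] %}
{% set hard_reset_command = '/usr/local/bin/soc-reboot.sh' %}
```

The job could then use a deploy `to: flasher` for steps 1-3, and a boot method such as `minimal` over the serial connection for steps 4-5.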
What I want to know is:
1. Is the above design feasible in LAVA?
2. What do I need to do for this? Are there any device-type templates that I can refer to?
The following is my LAVA system; I can run test jobs with a QEMU device now.
~/work/src/lava $ dpkg -l |grep lava
ii lava 2022.11.1+10+buster all Linaro Automated Validation Architecture metapackage
ii lava-common 2022.11.1+10+buster all Linaro Automated Validation Architecture common
ii lava-coordinator 2022.11.1+10+buster all LAVA coordinator daemon
ii lava-dev 2022.11.1+10+buster all Linaro Automated Validation Architecture developer support
ii lava-dispatcher 2022.11.1+10+buster all Linaro Automated Validation Architecture dispatcher
ii lava-dispatcher-host 2022.11.1+10+buster all LAVA dispatcher host tools
ii lava-server 2022.11.1+10+buster all Linaro Automated Validation Architecture server
ii lava-server-doc 2022.11.1+10+buster all Linaro Automated Validation Architecture documentation
ii lava-tool 0.25-2 all deprecated command line utility for LAVA
ii lavacli 0.9.7+buster all LAVA XML-RPC command line interface
~/work/src/lava $
Hi, guys,
I currently have 2 jobs:
Job1:
actions:
- boot:
    failure_retry: 4
    method: bootloader
    bootloader: u-boot
    commands: []
    prompts: ['=>']
    timeout:
      minutes: 2
- test:
    timeout:
      minutes: 4
    interactive:
    - name: check-uboot
      prompts: ["=> ", "/ # "]
      script:
      - command: "printenv"
        name: printenv
        successes:
        - message: "soc_type=imx93"
Job2:
actions:
- deploy:
    to: uuu
    images:
      boot:
        url: /path/to/bootloader
- boot:
    method: uuu
    commands:
    - bcu: reset usb
    - uuu: -b sd {boot}
    - bcu: deinit
    timeout:
      minutes: 2
- boot:
    method: bootloader
    bootloader: u-boot
    commands: []
    prompts: ['=>']
    timeout:
      minutes: 2
- test:
    interactive:
    - name: check-uboot
      prompts: ["=> ", "/ # "]
      script:
      - command: "printenv"
        name: printenv
        successes:
        - message: "soc_type=imx93"
    timeout:
      minutes: 2
The first job just boots the board and checks whether the bootloader is OK; the second job flashes a new bootloader to the board every time before checking it.
I wonder whether it is possible to combine the two and define the following logic in one job:
1. Boot the board to check u-boot.
2. If u-boot is OK, the job finishes.
3. If u-boot is not OK, or the action times out, go to a flash action that flashes a new bootloader to the device, then the job finishes.
The idea is that step 3 is optional: we only flash a new bootloader when the previous boot action fails.
Regards,
Larry
Hi all.
I am trying to set up my own embedded device using the multinode API.
The scenario is simple: the target needs to wait until the host role is done.
The test action that waits for the host role looks like this:
```
- test:
    interactive:
    - name: send_target_ready
      prompts:
      - 'Generate Erased Block'
      script:
      - command: null
        name: result
      - lava-send: booted
      - lava-wait: done
    role:
    - target
```
The multinode job runs very well, but the test action I mentioned does not show live logs from the device UART connection.
Is it possible to show live logs from the device while waiting with 'lava-wait'?
Thanks.
Hello Team,
I'm using the notify action in the job definition to notify users about the
status of the job. I'm already a registered user in LAVA, and after
completion of my job, the administration site shows the status as "*not
sent*".
Please let me know what the reason can be, and how I can achieve this.
[image: lava-notify.PNG]
Below is the job definition I'm using:
device_type: ADT-UNIT1
job_name: sample test to notify user
timeouts:
  job:
    minutes: 15
  action:
    minutes: 10
  connection:
    minutes: 5
visibility: public
actions:
- command:
    name: relay_pwr_on
    timeout:
      minutes: 1
- deploy:
    to: flasher
    images:
      package:
        url: https://artifactory.softwaretools.com/artifactory/gop-generic-stable-local/…
- boot:
    method: u-boot
    commands:
    - setenv factorymode 1
    - boot
    auto_login:
      login_prompt: 'login:'
      username: root
      password_prompt: 'Password:'
      password: root
      login_commands:
      - touch /home/root/test_file
      - ifconfig
    prompts:
    - 'root@hon-grip'
    - 'root@GRIP'
notify:
  recipients:
  - to:
      method: email
      user: pavan
  criteria:
    status: finished
  verbosity: verbose
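My understanding (possibly wrong) is that LAVA sends notification emails through Django's email backend, so "not sent" can simply mean the server has no working SMTP configuration. A sketch of the relevant settings in /etc/lava-server/settings.conf, with placeholder values; the key names are standard Django email settings, but please double-check they are honoured by your LAVA version:

```json
{
  "EMAIL_HOST": "smtp.example.com",
  "EMAIL_PORT": 587,
  "EMAIL_HOST_USER": "lava",
  "EMAIL_HOST_PASSWORD": "change-me",
  "EMAIL_USE_TLS": true,
  "DEFAULT_FROM_EMAIL": "lava@example.com"
}
```

The registered user also needs a valid email address set in their profile for `user:`-based recipients to resolve.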
Thanks & Regards,
Pavan
Hi. all
I am struggling with a situation where one of the multinode jobs gets stuck in
the Scheduling state forever.
[image: image.png]
It occurs about 1 in 10 times.
Sometimes one of the multinode jobs gets stuck in the 'Scheduling' state and the
other job times out waiting for the first one.
- lava-scheduler log
[image: image.png]
When this issue occurs, the lava-scheduler log indicates that only one of the
multinode jobs was scheduled, and the other was not.
- lava-dispatcher log
[image: image.png]
And when the issue occurs, the lava-dispatcher log shows that only one job was
triggered.
I am using lava-server and lava-dispatcher in docker instances (version
2023.08).
It occurs in 2023.06 too.
The issue seems related to the lava-scheduler. What should I check to resolve
this issue?
Please advise.
Thank you
Hi.
I am using the lava-server and lava-dispatcher docker images, version 2023.08.
I have an issue when submitting a multinode job: sometimes lava-server runs the jobs with abnormal job IDs such as 230, 230.1.
Sometimes lava-server runs the jobs with normal job IDs such as 230.0, 230.1.
When I submit the definition, sometimes lava-server shows the message 'Invalid job definition: expected a dictionary' on the web page.
Is this because of a syntax error in my definition?
When I submit via the web UI or lavacli, there is no warning or error indicating a syntax error in the definition.
The job definition yaml looks like this.
```
job_name: multinode test job
timeouts:
  job:
    minutes: 60
  action:
    minutes: 60
  connection:
    minutes: 60
priority: medium
visibility: public
protocols:
  lava-multinode:
    roles:
      target:
        count: 1
        device_type: customdevice
      host:
        count: 1
        device_type: docker
actions:
- deploy:
    role:
    - target
    to: flasher
    images:
      fw:
        url: http://example.com/repository/customdevice/test/test.bin
- boot:
    role:
    - target
    method: minimal
    prompts:
    - 'root:'
- test:
    interactive:
    - name: send_target_ready
      prompts:
      - 'root:'
      script:
      - command: ls
      - lava-send: booted
      - lava-wait: done
    role:
    - target
- deploy:
    role:
    - host
    to: docker
    os: debian
    image: testimage:2023.08
- boot:
    role:
    - host
    method: docker
    command: /bin/bash -c 'service ssh start; bash'
    prompts:
    - 'root@lava:'
- test:
    interactive:
    - name: wait_target_ready
      prompts:
      - 'root@lava:'
      script:
      - command: ls
      - lava-wait: booted
    role:
    - host
- test:
    role:
    - host
    definitions:
    - repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: run-command
          description: "Run command"
          os:
          - ubuntu
          scope:
          - functional
        run:
          steps:
          - lava-test-case pwd-command --shell 'pwd'
          - lava-send done
      from: inline
      name: run-command
      path: inline/run-command.yaml
```
Hello,
I tried upgrading from lava-server 2023.03 to 2023.06 on Debian 11 and got the following error:
Setting up lava-server (2023.06+11+bullseye) ...
/var/run/postgresql:5432 - accepting connections
Updating configuration:
* generate SECRET_KEY [SKIP]
* generate DATABASES [SKIP]
Run fixups:
* fix permissions:
- /var/lib/lava-server/home/
- /var/lib/lava-server/default/
- /var/lib/lava-server/default/media/
- /var/lib/lava-server/default/media/job-output/
- /etc/lava-server/dispatcher-config/
- /etc/lava-server/dispatcher.d/
- /var/lib/lava-server/default/media/job-output/2017/
- /etc/lava-server/dispatcher-config/devices/
- /etc/lava-server/dispatcher-config/devices/*
- /etc/lava-server/dispatcher-config/device-types/
- /etc/lava-server/dispatcher-config/device-types/*
- /etc/lava-server/dispatcher-config/health-checks/
- /etc/lava-server/dispatcher-config/health-checks/*
* drop duplicated templates:
* fix permissions:
- /etc/lava-server/settings.conf
- /etc/lava-server/instance.conf
- /var/log/lava-server/
- /var/log/lava-server/*
- /etc/lava-server/secret_key.conf
Create database:
psql -q
NOTICE: not creating role lavaserver -- it already exists
NOTICE: not creating role devel -- it already exists
lava-server manage migrate --noinput --fake-initial
Operations to perform:
  Apply all migrations: admin, auth, authtoken, contenttypes, lava_results_app, lava_scheduler_app, linaro_django_xmlrpc, sessions, sites
Running migrations:
  Applying lava_results_app.0019_auto_20230307_1545...Traceback (most recent call last):
  File "/usr/bin/lava-server", line 55, in <module>
    main()
  File "/usr/bin/lava-server", line 51, in main
    execute_from_command_line([sys.argv[0]] + options.command)
  File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 364, in execute
    output = self.handle(*args, **options)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 83, in wrapped
    res = handle_func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/django/core/management/commands/migrate.py", line 232, in handle
    post_migrate_state = executor.migrate(
  File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 117, in migrate
    state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
  File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
    state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
  File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 245, in apply_migration
    state = migration.apply(state, schema_editor)
  File "/usr/lib/python3/dist-packages/django/db/migrations/migration.py", line 114, in apply
    operation.state_forwards(self.app_label, project_state)
  File "/usr/lib/python3/dist-packages/django/db/migrations/operations/models.py", line 256, in state_forwards
    state.remove_model(app_label, self.name_lower)
  File "/usr/lib/python3/dist-packages/django/db/migrations/state.py", line 100, in remove_model
    del self.models[app_label, model_name]
KeyError: ('lava_results_app', 'actiondata')
dpkg: error processing package lava-server (--configure):
 installed lava-server package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 lava-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
Does anybody know what I can do in this case?
Uninstalling lava-server and installing it again does not resolve the issue.
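I cannot claim this is the supported fix, but in ordinary Django projects an inconsistent migration state can sometimes be worked around by fake-applying the failing migration and re-running the package configuration. A sketch (back up the database first; the dump file name is a placeholder):

```shell
# Back up the database before touching migration state
sudo -u postgres pg_dump lavaserver > lava-backup.sql

# Mark the failing migration as applied without executing it
lava-server manage migrate lava_results_app 0019_auto_20230307_1545 --fake

# Then let dpkg finish configuring the package
dpkg --configure -a
```

`migrate <app> <migration> --fake` is a standard Django mechanism; whether it is safe here depends on why the actiondata model state diverged, so treat it as a last resort.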
Kind regards,
Tim
--
Tim Jaacks
SOFTWARE DEVELOPER
SECO Northern Europe GmbH
Schlachthofstrasse 20
21079 Hamburg
Germany
T: +49 40 791899-183
E: tim.jaacks(a)seco.com
Register: Amtsgericht Hamburg, HRB 148893 Represented by: Dirk Finstel, Marc-Michael Braun, Massimo Mauri
Hi,
is it possible to show the "total count of test cases listed in the test
definition" on the Results GUI page?
Right now it shows only the executed test cases (either pass or fail) but does
not show the count of cases yet to be executed.
Regards,
Koti
Hello,
I am trying to understand how the result levels are constructed. For example, when I run "lavacli results <job-id>"
I get an output looking like this:
Results:
* lava.git-repo-action [pass]
* lava.test-overlay [pass]
* lava.test-install-overlay [pass]
* lava.test-runscript-overlay [pass]
* lava.my_test_case_1 [pass]
* lava.my_test_case_2 [pass]
What I want to do is to have test categories, for example:
* lava.category1.my_test_case_1 [pass]
* lava.category3.my_test_case_3 [pass]
and so on. I tried adding tags and namespaces, but the results are not affected.
Can someone please guide me through this, or point me to the part of the documentation that might be helpful?
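One mechanism that looks close to this is LAVA's test sets, which add a grouping level between the suite and the test case via the `lava-test-set` helper. An untested sketch of inline definition steps (the category and case names are examples):

```yaml
run:
  steps:
  - lava-test-set start category1
  - lava-test-case my_test_case_1 --shell true
  - lava-test-set stop category1
  - lava-test-set start category3
  - lava-test-case my_test_case_3 --shell true
  - lava-test-set stop category3
```

My expectation (worth verifying on your instance) is that cases then appear grouped under their set name in the results rather than flat under the suite.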
Thanks in advance.
Hi Team,
Is there any provision in LAVA for passing the UUID for qemu platforms? I am getting an error while mounting the rootfs: "Can't find the root device with the matching UUID!". Does LAVA provide any way to pass the UUID along with the "image_arg" for qemu, or is it something I am missing in the job definition file?
- deploy:
    timeout:
      minutes: 5
    to: tmpfs
    images:
      kernel:
        image_arg: -kernel {kernel} -append "console=ttyS0,115200 root=/dev/sda vga=0x305 rw"
        url: <URL_to_vmlinuz>/vmlinuz
        type: zimage
      ramdisk:
        image_arg: -initrd {ramdisk}
        url: <URL_to_initrd>/initrd.img
        compression: gz
      rootfs:
        image_arg: -drive file={rootfs},format=raw
        url: <URL_to_rootfs>/rootfs-qemu-amd64.squashfs
        os: debian
        root_partition: 1
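To frame the question: my assumption is that the UUID would have to go into the kernel command line inside image_arg, since qemu itself does not take a filesystem UUID. A sketch with a placeholder UUID:

```yaml
kernel:
  image_arg: -kernel {kernel} -append "console=ttyS0,115200 root=UUID=<uuid-of-rootfs> rw"
```

But I do not know whether LAVA offers anything beyond free-form image_arg for this, hence the question.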
Regards ,
Sarath P T
Hi Team,
Is there any implementation in LAVA of a reboot mechanism for USB-booted devices, basically x86 architecture? I found there are options for SD-card-booted devices using "uuu", but is there a similar approach for USB-booted devices?
As far as I can tell, LAVA has no provision to identify the "login prompt" without any "boot" method.
Regards,
Sarath P T
Hello Team,
Good Day to you!
I have a requirement to send an "enter" key press after the u-boot process
completes, in order to enter the system.
Without pressing any key, the prompt does not appear and the boot stays stagnant.
Please let me know how to send an "enter" key press at the end of the u-boot log.
Thanks
Pavan kumar
Hi, team,
I remember I once saw a video about how to use docker in LAVA to test zephyr, by Kumar Gala or someone else, I'm not sure. But I forget the details; can you kindly share something about that? A sample job, a video, or how others test zephyr with LAVA would be highly appreciated. Thanks in advance.
Hello,
I have 2 versions of a device which are identical in terms of the deploy, boot and DUT control commands,
but they need different images to boot; I differentiate between them with tags.
How can I have the same health-check job running on both of them, but using different images?
Or shall I create 2 device types for them? (That doesn't sound like a clean solution.)
Thanks
Vote +1, we have the same requirement. Either LAVA could support different health checks for different chip versions, or, one step back, allow disabling the health check on old-version devices and enabling it on the newest version only. Currently, to avoid too many variants, we disable all health checks for chipV1, chipV2, chipV3, etc.
------------------------------------------------------------------
Sender:lava-users-request <lava-users-request(a)lists.lavasoftware.org>
Sent At:2023 May 13 (Sat.) 08:00
Recipient:lava-users <lava-users(a)lists.lavasoftware.org>
Subject:Lava-users Digest, Vol 56, Issue 4
Hello,
I am using the deploy action to download a file from an http server and decompress it.
I want to pass the path where the file is decompressed to a user-defined function in the device.jinja2.
Is there a way to do so? Is this variable exposed to the environment somehow?
Thanks
Hey everyone!
As I got my last problems solved, I am able to boot my device, but it doesn't turn out to be reliable.
During inspection of the console output, I see "random" chars where I don't expect them to be; these chars are the next command being sent before the previous command has finished.
To me it seems as if the U-Boot bootloader commands don't wait for the prompt. Can this be the case here? Or is something else going wrong?
Configuration basically the same as in https://lists.lavasoftware.org/archives/list/lava-users@lists.lavasoftware.…
The serial console of the device is connected via a USB-to-serial adapter, and ser2net makes it available over Telnet (which seems to be the LAVA default way to go).
In the console I can see things like this: e.g. when the tftp command shows its progress, only '#' should be displayed, but there are chars in between:
Loading: *################s#####################################e############
###############t######################################e############
########################n#####################################v####
The chars in there spell "setenv", which is the next command that LAVA wants to execute.
This causes the boot to fail in some cases, e.g. when the dhcp command has not yet fully executed and the zImage is already being loaded; the load fails and so does the boot.
See attached (stripped to relevant parts) log for details.
As a workaround I set the character delay to 100 milliseconds, which makes it a bit more reliable, but that can only be a temporary solution, as the characters still appear in the wrong place.
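For context, a character delay like LAVA's test_character_delay effectively paces the writes so a slow console can drain its input buffer between characters. The idea can be sketched like this (a hypothetical helper, not LAVA's actual implementation):

```python
import time

def send_with_delay(write, command, delay_s=0.1):
    """Send a command one character at a time, pausing between characters.

    `write` is any callable that transmits a single character (e.g. a
    serial/telnet write). The pause gives a slow console time to consume
    each character before the next arrives, so input is not interleaved
    with the device's own output.
    """
    for ch in command:
        write(ch)
        time.sleep(delay_s)

# Example: collect the "sent" characters instead of writing to a serial port
sent = []
send_with_delay(sent.append, "setenv", delay_s=0)
print("".join(sent))  # -> setenv
```

Note that this only slows the sender down; it does not make LAVA wait for the U-Boot prompt, which is why the characters can still land in the wrong place.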
Thanks in advance!
Stefan
I was trying to follow the section on testing a new device template here:
https://validation.linaro.org/static/docs/v2/development-intro.html#develop…
but I am getting this error:
root@master1:/# lava-server manage device-dictionary
Unknown command: 'device-dictionary'. Did you mean device-tags?
Type 'lava-server help' for usage.
Some version information:
master1:/#lava-server manage version
2.2.28
lab-slave-0:/# lavacli system version
2023.01
Is the documentation outdated, or am I trying to execute the command in the wrong place?
Also, what is the best way to debug and check for errors in a new device template?
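As far as I know, recent LAVA releases dropped the device-dictionary management command in favour of lavacli. A sketch of the round trip (hostname is a placeholder; requires a configured lavacli identity):

```shell
# Fetch the current device dictionary (jinja2) for a device
lavacli devices dict get my-device-01 > my-device-01.jinja2

# Edit the template locally, then push it back to the server;
# template errors are typically reported when the job validates.
lavacli devices dict set my-device-01 my-device-01.jinja2
```
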
Thanks in advance.
G.Emad
Hello everyone,
There seems to be a bug in LAVA. I was on version 2022.04 and have also tried 2023.03; both versions show the same bug.
The same configuration works in a 2018 build of LAVA on an old machine.
I am trying to connect to an always-on board via ssh.
The health check is failing with this error:
lava-dispatcher, installed at version: 2023.03
start: 0 validate
Start time: 2023-04-12 14:07:00.373707+00:00 (UTC)
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/lava_dispatcher/job.py", line 198, in validate
    self._validate()
  File "/usr/lib/python3/dist-packages/lava_dispatcher/job.py", line 183, in _validate
    self.pipeline.validate_actions()
  File "/usr/lib/python3/dist-packages/lava_dispatcher/action.py", line 190, in validate_actions
    action.validate()
  File "/usr/lib/python3/dist-packages/lava_dispatcher/actions/deploy/ssh.py", line 81, in validate
    if "serial" not in self.job.device["actions"]["deploy"]["connections"]:
KeyError: 'connections'
validate duration: 0.00
case: validate
case_id: 244238
definition: lava
result: fail
Cleaning after the job
Root tmp directory removed at /var/lib/lava/dispatcher/tmp/8857
LAVABug: This is probably a bug in LAVA, please report it.
case: job
case_id: 244239
definition: lava
error_msg: 'connections'
error_type: Bug
result: fail
The health check looks like this:
job_name: SSH check
timeouts:
job:
minutes: 10
action:
minutes: 2
priority: medium
visibility: public
actions:
- deploy:
timeout: # timeout for the connection attempt
seconds: 30
to: ssh
os: oe
- boot:
timeout:
minutes: 2
prompts: ['root@(.*):~#']
method: ssh
connection: ssh
- test:
timeout:
minutes: 5
definitions:
- repository:
metadata:
format: Lava-Test Test Definition 1.0
name: smoke-tests-basic
description: "Basic smoke test"
run:
steps:
- lava-test-case linux-linaro-ubuntu-pwd --shell pwd
- lava-test-case linux-linaro-ubuntu-uname --shell uname -a
- lava-test-case linux-linaro-ubuntu-vmstat --shell vmstat
- lava-test-case linux-linaro-ubuntu-ip --shell ip a
from: inline
name: smoke-tests-basic
Any ideas ?
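Judging only by the traceback, validate() in lava_dispatcher/actions/deploy/ssh.py indexes self.job.device["actions"]["deploy"]["connections"], so the rendered device configuration apparently needs a "connections" map under actions.deploy. A guess at the missing shape, inferred from the error alone and not from a known-good template:

```yaml
# Inferred from the KeyError - the device dictionary should render
# something like this under actions.deploy (exact keys unverified).
actions:
  deploy:
    connections:
      serial:
      ssh:
```

Comparing the rendered device config from the working 2018 instance against the new one should show which keys actually went missing.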
Best regards,
Sebastian
Hi Everyone,
My board has multiple UARTs, which are designed to act as different serial
consoles of the platform OS. So I am trying to select the desired (serial)
connection when calling the respective platform boot/flash method from the
job description. That means I need to use the same board and pick the
suitable serial console from the connection list to get the boot logs.
{% set UART_PORTS = {"SE-UART": "AB0KOQF7", "UART2": "A10LOBA4", "UART4": "AB0LPSO7"} %}
{% set connection_list = ['uart2', 'uart4'] %}
{% set connection_tags = {'uart2': ['primary', 'telnet'], 'uart4': ['telnet']} %}
{% set connection_commands = {'uart2': 'telnet 10.10.4.140 4004', 'uart4': 'telnet 10.10.4.140 4002'} %}
How can I select the required serial port (uart2 or uart4) from the
connection list in the job description?
Regards
Nagendra S
You may want to have a look at Antonio's MR: https://git.lavasoftware.org/lava/lava/-/merge_requests/2038 - not sure if it covers your case.
-----Original Message-----
From: lava-users-request(a)lists.lavasoftware.org <lava-users-request(a)lists.lavasoftware.org>
Sent: Friday, March 24, 2023 8:00 AM
To: lava-users(a)lists.lavasoftware.org
Subject: [EXT] Lava-users Digest, Vol 54, Issue 8
Send Lava-users mailing list submissions to
lava-users(a)lists.lavasoftware.org
To subscribe or unsubscribe via email, send a message with subject or body 'help' to
lava-users-request(a)lists.lavasoftware.org
You can reach the person managing the list at
lava-users-owner(a)lists.lavasoftware.org
When replying, please edit your Subject line so it is more specific than "Re: Contents of Lava-users digest..."
Today's Topics:
1. Re: Not able to unpack the overlay other than "/" directory
(P T, Sarath)
----------------------------------------------------------------------
Message: 1
Date: Thu, 23 Mar 2023 07:14:49 +0000
From: "P T, Sarath" <Sarath_PT(a)mentor.com>
Subject: [Lava-users] Re: Not able to unpack the overlay other than
"/" directory
To: Milosz Wasilewski <milosz.wasilewski(a)foundries.io>
Cc: "lava-users(a)lists.lavasoftware.org"
<lava-users(a)lists.lavasoftware.org>, "Bhola, Bikram (PLM)"
<bikram.bhola(a)siemens.com>
Message-ID: <035b0a2cc7dd40f19552ce319acae2d9(a)mentor.com>
Content-Type: text/plain; charset="utf-8"
Hi Milosz,
Any update on the query below?
I changed "lava_test_results_dir" from "/lava-%s" to "/etc/lava-%s" in the code section above. Somehow the lava_test_results_dir variable we defined in the job definition does not override the default value; that is why we made the change. Can you post your code snippet from the worker? And do you know why the job definition is not able to set the lava_test_results_dir value?
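For reference, the job-context route (rather than patching the dispatcher sources) would look something like this; whether it takes effect can depend on the deploy method, and the path here is only an example:

```yaml
# Job definition excerpt - override the on-DUT results directory
# via the job context instead of editing deployment defaults.
context:
  lava_test_results_dir: /etc/lava-%s   # example path; %s becomes the job ID
```

If the context value is being ignored, that itself may be the bug worth chasing, since editing deployment_data in the installed dispatcher will be lost on every upgrade.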
Regards,
Sarath PT
-----Original Message-----
From: P T, Sarath
Sent: 15 March 2023 19:21
To: 'Milosz Wasilewski' <milosz.wasilewski(a)foundries.io>
Cc: lava-users(a)lists.lavasoftware.org; Bhola, Bikram (PLM) <bikram.bhola(a)siemens.com>
Subject: RE: [Lava-users] Re: Not able to unpack the overlay other than "/" directory
Hi Milosz,
Thanks for the support you have given. I was able to achieve the scenario by changing the code on the worker side as below.
Navigate to the directory: cd /usr/lib/python3/dist-packages/lava_dispatcher
debian = {
    "TESTER_PS1": r"linaro-test [rc=$(echo \$?)]# ",
    "TESTER_PS1_PATTERN": r"linaro-test \[rc=(\d+)\]# ",
    "TESTER_PS1_INCLUDES_RC": True,
    "boot_cmds": "boot_cmds",
    "line_separator": "\n",
    # for lava-test-shell
    "distro": "debian",
    "tar_flags": "--warning no-timestamp",
    "lava_test_sh_cmd": "/bin/bash",
    "lava_test_dir": "/lava-%s",
    "lava_test_results_part_attr": "root_part",
    "lava_test_results_dir": "/etc/lava-%s",
    "lava_test_shell_file": "~/.bashrc",
}
I changed "lava_test_results_dir" from "/lava-%s" to "/etc/lava-%s" in the code section above. Somehow the lava_test_results_dir variable we defined in the job definition does not override the default value; that is why we made the change. Can you post your code snippet from the worker? And do you know why the job definition is not able to set the lava_test_results_dir value?
Regards,
Sarath P T
-----Original Message-----
From: Milosz Wasilewski <milosz.wasilewski(a)foundries.io>
Sent: 13 March 2023 14:50
To: P T, Sarath <Sarath_PT(a)mentor.com>
Cc: lava-users(a)lists.lavasoftware.org; Bhola, Bikram (PLM) <bikram.bhola(a)siemens.com>
Subject: Re: [Lava-users] Re: Not able to unpack the overlay other than "/" directory
Sarath,
Sorry, I can't reproduce any of your issues. You need to debug yourself.
Best Regards,
Milosz
On Mon, Mar 13, 2023 at 6:11 AM P T, Sarath <Sarath_PT(a)mentor.com> wrote:
>
> Hi Milosz,
>
> Any update ?
>
> Regards,
> Sarath P T
>
> -----Original Message-----
> From: P T, Sarath
> Sent: 08 March 2023 15:53
> To: 'Milosz Wasilewski' <milosz.wasilewski(a)foundries.io>
> Cc: lava-users(a)lists.lavasoftware.org; Bhola, Bikram (PLM)
> <bikram.bhola(a)siemens.com>
> Subject: RE: [Lava-users] Re: Not able to unpack the overlay other
> than "/" directory
>
> Hi Milosz,
>
> Now we are able to unpack the overlay in the path given in the job definition. But after exporting the shell (export SHELL=/bin/bash) it is not able to proceed with ". /<path_defined_in_definition>/lava-13030/environment" and "/<path_defined_in_definition>/lava-13030/bin/lava-test-runner /<path_defined_in_definition>/lava-13030/0". The job definition is given below.
>
>
> priority: high
> visibility: public
> device_type: industrial-mtda-simatic-ipc227e-01
> visibility: public
> timeouts:
> job:
> minutes: 30
> action:
> minutes: 20
> connection:
> minutes: 20
> job_name: SLLL_CI_MTDA_SIMATIC_IPC227E_DEPLOY_BOOT_TEST_JOB
> actions:
> - deploy:
> to: overlay
> os: debian
> - boot:
> method: minimal
> reset: true
> failure_retry: 2
> auto_login:
> login_prompt: 'ebsy-isar login:'
> username: root
> password_prompt: "Password:"
> password: root
> prompts:
> - 'root@ebsy-isar:~#'
> transfer_overlay:
> download_command: sleep 10; cd /home/test ; wget
> unpack_command: tar -C /home/test -xzf
> - test:
> role:
> - node1
> timeout:
> minutes: 5
> definitions:
> - repository:
> metadata:
> description: basic test for some devices on board
> format: Lava-Test Test Definition 1.0
> name: kernel-version
> os:
> - debian
> run:
> steps:
> - cd /usr/libexec/ebsy-qa-suites/swupdate
> - ./swupdate_new.sh
> from: inline
> name: mtda-restart
> path: inline/mtda.yaml
> context:
> lava_test_results_dir: /home/test/lava-%s
> test_character_delay: 10
> tags:
> - slll-industrial-mtda-simatic-ipc227e-01
> timeouts:
> action:
> minutes: 20
> connection:
> minutes: 20
> job:
> hours: 7
> visibility: public
> job_name: SLLL_X86_TEST_JOB
> metadata:
> Description: '"Lava jobs for deploy, boot and usb test"'
> DEVICES: slll-simatic-ipc227e-mtda-01
>
> Regards,
> Sarath P T
>
>
>
> -----Original Message-----
> From: Milosz Wasilewski <milosz.wasilewski(a)foundries.io>
> Sent: 01 March 2023 14:27
> To: P T, Sarath <Sarath_PT(a)mentor.com>
> Cc: lava-users(a)lists.lavasoftware.org; Bhola, Bikram (PLM)
> <bikram.bhola(a)siemens.com>
> Subject: Re: [Lava-users] Re: Not able to unpack the overlay other
> than "/" directory
>
> On Wed, Mar 1, 2023 at 5:51 AM P T, Sarath <Sarath_PT(a)mentor.com> wrote:
> >
> > Hi Milosz,
> >
> > Yes, it is strange behaviour that no command is sent over the serial line to start the overlay decompression. Are there any server configuration changes you made to accomplish this?
>
> No, I'm running vanilla LAVA dispatcher in a container from dockerhub
> https://hub.docker.com/r/lavasoftware/lava-dispatcher/
>
> Best Regards,
> Milosz
------------------------------
End of Lava-users Digest, Vol 54, Issue 8
*****************************************
Hi everyone,
I'm experiencing this error on setting up my U-Boot based device:
Unable to extract cpio archive '/var/lib/lava/dispatcher/tmp/17/extract-overlay-ramdisk-tv17mriz/ramdisk.cpio': Command '['cpio', '--extract', '--make-directories', '--unconditional', '--file', '/var/lib/lava/dispatcher/tmp/17/extract-overlay-ramdisk-tv17mriz/ramdisk.cpio']' returned non-zero exit status 2.
As the documentation at https://docs.lavasoftware.org/lava/actions-deploy.html#deploy-action-roles states, the overlay is only used when a test action is defined in the job - so I removed the test action (and the log states "[common] skipped lava-overlay - no test action.") but the error still appears.
For debugging I commented out the _cleanup() function in /usr/lib/python3/dist-packages/lava_dispatcher/job.py (and can see in the job output that the dispatcher tmp directory isn't cleaned anymore), but somehow the directory still disappears, so I don't know how to debug this further.
So what's going wrong here? How to get the overlay working?
Running LAVA 2023.01 on Debian 11.6, full log see attached.
Thanks in advance!
Hi Team,
I am facing an issue while unpacking the overlay to a custom location other than the "/" directory. From the LAVA/Linaro documentation I learned that the "-C /" argument to tar is essential, or the test shell will not be able to start. Below are the issues I am facing because of this:
* My / directory is read-only, and I want the overlay in /etc or /var instead.
* The LAVA job gets stuck while unpacking, using the commands below in the job definition.
transfer_overlay:
download_command: cd /tmp; wget
unpack_command: tar -C /tmp -xzf
Below I am attaching the test log from LAVA.
How can we solve the read-only filesystem issue so the overlay can be unpacked somewhere other than the "/" directory?
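One commonly suggested approach (assuming /tmp is writable on your image, e.g. a tmpfs) is to point the download, the unpack, and the test-shell results directory at the same writable location. A sketch combining the transfer_overlay commands above with the context override that appears later in this thread:

```yaml
actions:
  - deploy:
      to: overlay
  - boot:
      # ... boot method, prompts, etc. ...
      transfer_overlay:
        download_command: cd /tmp ; wget
        unpack_command: tar -C /tmp -xzf
context:
  # keep the test shell pointing at the same writable location
  lava_test_results_dir: /tmp/lava-%s
```

The key point is that all three paths must agree; if the overlay is unpacked in /tmp but the test shell still looks for /lava-<job-id>, it will hang exactly as described.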
Regards,
Sarath P T
Hey there,
I set up LAVA and ran some jobs (e.g. qemu from the tutorial), which went fine - but I noticed an HTTP 500 status code on the /api/help page. That wasn't severe, so I thought I'd ask about it later. But this Monday I discovered that /accounts/login/ isn't working anymore, also with HTTP 500! Of course I didn't change anything... (everybody says that) and I honestly can't say when this broke, as I only logged in once after the install. The Django log says "django.contrib.sites.models.Site.DoesNotExist: Site matching query does not exist." and as far as I can see, Sites reside in the database?
As I'm new to LAVA I don't know how to debug and solve this - can you please help me? Is there a way to re-install the sites/templates?
Logs that might be relevant are attached.
After the install (and qemu setup) I noticed that /var was full. This is resolved (I deleted some docker images), but it might be relevant.
Running on current Debian 11 as apt install, recent LAVA version.
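That DoesNotExist error usually means the django_site table has no row matching the configured SITE_ID. Assuming Django's default of SITE_ID = 1, one way to recreate the row from the server (domain and name below are placeholders, adjust for your instance):

```shell
lava-server manage shell -c "
from django.contrib.sites.models import Site
Site.objects.get_or_create(id=1, defaults={'domain': 'lava.example.com', 'name': 'LAVA'})
"
```

If the row was lost when /var filled up, it is worth checking the database for other damage from the same incident.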
Thanks in advance
Stefan
Hey everyone,
in the announcement of LAVA 2023.01 it says:
> The support for Debian Buster has been dropped as Debian Buster does
> not provide support for the latest pyyaml versions.
So I updated my LAVA machines from buster to bullseye.
Now I have the problem that I cannot install lava-dispatcher anymore due to the following error:
The following packages have unmet dependencies:
libbpf0 : Depends: linux-libc-dev (>= 5.14) but 5.10.162-1 is to be installed
libbpf0 seems to come from the LAVA repository:
libbpf0:
Installed: (none)
Candidate: 1:0.5.0-1~bpo11+1~lava1
Version table:
1:0.5.0-1~bpo11+1~lava1 500
500 http://apt.lavasoftware.org/release bullseye/main amd64 Packages
1:0.3-2 500
500 http://ftp.de.debian.org/debian bullseye/main amd64 Packages
While linux-libc-dev is part of the standard debian repositories:
linux-libc-dev:
Installed: 5.10.162-1
Candidate: 5.10.162-1
Version table:
*** 5.10.162-1 500
500 http://security.debian.org/debian-security bullseye-security/main amd64 Packages
100 /var/lib/dpkg/status
5.10.158-2 500
500 http://ftp.de.debian.org/debian bullseye/main amd64 Packages
This is the LAVA repository I am using:
deb http://apt.lavasoftware.org/release bullseye main
Is this a known issue? How do I correctly install LAVA 2023.01 on Debian Bullseye?
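Not a proper fix, but one possible workaround (an untested assumption on my side) is to pin libbpf0 so apt prefers Debian's own 1:0.3-2 over the LAVA repository's backport, e.g. in /etc/apt/preferences.d/libbpf0:

```
Package: libbpf0
Pin: origin apt.lavasoftware.org
Pin-Priority: -1
```

This only helps if lava-dispatcher is satisfied with the older libbpf0; if it strictly requires the backported version, the dependency chain itself needs fixing on the repository side.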
Thanks in advance and kind regards,
Tim
--
Tim Jaacks
SOFTWARE DEVELOPER
SECO Northern Europe GmbH
Schlachthofstrasse 20
21079 Hamburg
Germany
T: +49 40 791899-183
E: tim.jaacks(a)seco.com
Register: Amtsgericht Hamburg, HRB 148893 Represented by: Dirk Finstel, Marc-Michael Braun, Massimo Mauri
Hello Lava Users,
On deploying the latest LAVA release (2022.11.1) packages after building them on a Debian 11 host, we are facing an issue deploying the lava-dispatcher-host package on Debian 11 (11.6).
On checking further we noticed in https://git.lavasoftware.org/lava/lava/-/blob/master/debian/control that lava-dispatcher-host depends on base-files (<< 11.1), but base-files on the system is 11.1+deb11u6.
To support deploying on the latest Debian Bullseye release, should this be updated, or are we missing something?
$ sudo dpkg -i lava-dispatcher-host_2022.11.1+11+bullseye_all.deb
(Reading database ... 283093 files and directories currently installed.)
Preparing to unpack lava-dispatcher-host_2022.11.1+11+bullseye_all.deb ...
Unpacking lava-dispatcher-host (2022.11.1+11+bullseye) over (2021.10+10+buster) ...
dpkg: dependency problems prevent configuration of lava-dispatcher-host:
lava-dispatcher-host depends on base-files (<< 11.1) | python3-bpfcc (>= 0.21); however:
Version of base-files on system is 11.1+deb11u6.
Package python3-bpfcc is not installed.
lava-dispatcher-host depends on base-files (<< 11.1) | linux-headers-amd64 | linux-headers-arm64 | linux-headers-generic; however:
Version of base-files on system is 11.1+deb11u6.
....
Thanks,
Hemanth.
Hi All,
I'm looking to automate the overall testing process and post the test
results into JIRA instead of going to the LAVA CI.
Is there any way I can use Robot Framework to run the tests on LAVA, so
that Robot Framework produces results in an XML file that can be posted
into the issue tracker?
Any suggestions would be helpful
Thanks,
Pavan
Hello Team,
Good Day to All!
We are setting up a new device in LAVA automation with the following
requirements:
Flash the panel to the R4 image
Copy the delta to R5
Deploy R5
Reboot
Make sure the panel boots to R5
The test definition I'm using is as follows:
actions:
- deploy:
to: flasher
images:
package:
url:
https://artifactory.softwaretools.com/artifactory/mfgtools-***-0v3
- boot:
timeout:
minutes: 15
commands:
- boot
method: u-boot
prompts:
- 'root@hon-grip'
auto_login:
login_prompt: 'login: '
username: root
password_prompt: 'Password:'
password: root
login_commands:
- coredump --enable
- sysinfo
- ifconfig
- networkctl status
- wget http://192.100.**.**/oslat
- ostree static-delta apply-offline /home/root/oslat
- ostree admin deploy
35b3297cf3e4bc59d2a21e2ae9f7a02ef3f7a940e37389a7e9ae66a610c60b7
- reboot
After the reboot command is executed, the panel again prompts for login and
password, but I cannot declare auto_login a second time.
Please let me know how to disable auto-login after the reboot (for the
second login).
Best Regards
*Pavan Kumar*
Hi Team,
I have a query about using lava-test-shell and other binaries like
lava-test-runner.
My board boots Linux and has a POSIX environment, but it supports neither
ssh nor nfs, due to the low memory footprint available and an ethernet
driver that is not fully functional.
How can I use lava-test-runner / lava-test-shell to run my test-suite
drivers? Is it possible to test our suite with lava-test-shell/runner when
the DUT has no ethernet/nfs support?
I am getting a lava-test-shell timeout on the DUT console, whereas the
LAVA worker has all the binaries available via the lava-overlay method.
Please find the attached test job definition and lava-job log files for
your reference. Kindly let me know the solution.
Hi teams,
I wonder if you could share an Android CTS LAVA job definition with me - is there one in https://validation.linaro.org/?
I would like to have a reference, thanks!
Hi.
It looks like I am facing the same problem: the job does not exit even
after the timeout.
I guess there might be a communication gap between the dispatcher and the server.
Dispatcher log screenshot: (/var/log/lava-dispatcher/lava-worker.log)
Any solution to resolve this?
Regards,
Koti
On Sat, 26 Feb 2022 at 05:30, <lava-users-request(a)lists.lavasoftware.org>
wrote:
>
> Today's Topics:
>
> 1. Re: Job is not exiting after the timeout (P T, Sarath)
> 2. Re: Job is not exiting after the timeout (Antonio Terceiro)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 25 Feb 2022 05:10:58 +0000
> From: "P T, Sarath" <Sarath_PT(a)mentor.com>
> Subject: [Lava-users] Re: Job is not exiting after the timeout
> To: Antonio Terceiro <antonio.terceiro(a)linaro.org>
> Cc: "lava-users(a)lists.lavasoftware.org"
> <lava-users(a)lists.lavasoftware.org>
> Message-ID:
> <7b18ad8ebf54460e935b147659d2da99(a)svr-orw-mbx-01.mgc.mentorg.com>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi Antonio,
>
> These are the logs for the server connection:
>
> Worker side log ( /var/log/lava-dispatcher/lava-worker.log )
> ------------------------------------------------------------
>
> 2022-02-24 05:56:58,718 INFO [3834] FINISHED => server
> 2022-02-24 05:57:01,233 ERROR [3834] -> server error: code 404
> 2022-02-24 05:57:01,233 DEBUG [3834] --> {"error": "Unknown job '3834'"}
> 2022-02-24 05:57:18,246 INFO PING => server
> 2022-02-24 05:57:18,729 INFO [3834] FINISHED => server
> 2022-02-24 05:57:18,965 ERROR [3834] -> server error: code 503
> 2022-02-24 05:57:18,965 DEBUG [3834] --> ('Connection aborted.',
> RemoteDisconnected('Remote end closed connection without response'))
> 2022-02-24 05:57:38,248 INFO PING => server
> 2022-02-24 05:57:38,737 INFO [3834] FINISHED => server
> 2022-02-24 05:57:38,977 ERROR [3834] -> server error: code 503
> 2022-02-24 05:57:38,977 DEBUG [3834] --> ('Connection aborted.',
> RemoteDisconnected('Remote end closed connection without response'))
> 2022-02-24 05:57:58,250 INFO PING => server
> 2022-02-24 05:57:58,731 INFO [3834] FINISHED => server
> 2022-02-24 05:57:58,968 ERROR [3834] -> server error: code 503
> 2022-02-24 05:57:58,969 DEBUG [3834] --> ('Connection aborted.',
> RemoteDisconnected('Remote end closed connection without response'))
> 2022-02-24 05:58:18,252 INFO PING => server
> 2022-02-24 05:58:18,745 INFO [3834] FINISHED => server
> 2022-02-24 05:58:21,739 ERROR [3834] -> server error: code 502
> 2022-02-24 05:58:21,740 DEBUG [3834] --> <!DOCTYPE HTML PUBLIC
> "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>502 Bad Gateway</title>
> </head><body>
> <h1>Bad Gateway</h1>
> <p>The proxy server received an invalid
> response from an upstream server.<br />
> </p>
> <hr>
> <address>Apache/2.4.38 (Debian) Server at 132.186.71.148 Port 80</address>
> </body></html>
>
>
> 2022-02-24 05:58:38,253 INFO PING => server
> 2022-02-24 05:58:38,735 INFO [3834] FINISHED => server
> 2022-02-24 05:58:38,971 ERROR [3834] -> server error: code 503
> 2022-02-24 05:58:38,971 DEBUG [3834] --> ('Connection aborted.',
> RemoteDisconnected('Remote end closed connection without response'))
> 2022-02-24 05:58:58,254 INFO PING => server
> 2022-02-24 05:58:58,738 INFO [3834] FINISHED => server
> 2022-02-24 05:58:58,973 ERROR [3834] -> server error: code 503
> 2022-02-24 05:58:58,973 DEBUG [3834] --> ('Connection aborted.',
> RemoteDisconnected('Remote end closed connection without response'))
> 2022-02-24 05:59:18,256 INFO PING => server
>
>
> Server side log ( /var/log/apache2/lava-server.log )
> ------------------------------------------------------
>
> 134.86.62.69 - - [24/Feb/2022:19:39:46 +0530] "GET /ws/ HTTP/1.1" 500 804
> "-" "lava-worker 2021.10"
> ::1 - - [24/Feb/2022:19:39:46 +0530] "POST /scheduler/internal/v1/workers/
> HTTP/1.1" 400 68338 "-" "lava-worker 2021.10"
> [Thu Feb 24 19:39:46.711251 2022] [proxy:warn] [pid 9108:tid
> 140199652738816] [client 134.86.62.139:42968] AH01144: No protocol
> handler was valid for the URL /ws/ (scheme 'ws'). If you are using a DSO
> version of mod_proxy, make sure the proxy submodules are included in the
> configuration using LoadModule.
> 134.86.62.139 - - [24/Feb/2022:19:39:46 +0530] "GET /ws/ HTTP/1.1" 500 804
> "-" "lava-worker 2021.10"
> [Thu Feb 24 19:39:47.054716 2022] [proxy:warn] [pid 9151:tid
> 140199132653312] [client 134.86.61.20:43200] AH01144: No protocol handler
> was valid for the URL /ws/ (scheme 'ws'). If you are using a DSO version of
> mod_proxy, make sure the proxy submodules are included in the configuration
> using LoadModule.
> 134.86.61.20 - - [24/Feb/2022:19:39:47 +0530] "GET /ws/ HTTP/1.1" 500 804
> "-" "lava-worker 2021.10"
> [Thu Feb 24 19:39:47.919417 2022] [proxy:warn] [pid 9108:tid
> 140200256718592] [client 134.86.62.69:45566] AH01144: No protocol handler
> was valid for the URL /ws/ (scheme 'ws'). If you are using a DSO version of
> mod_proxy, make sure the proxy submodules are included in the configuration
> using LoadModule.
> 134.86.62.69 - - [24/Feb/2022:19:39:47 +0530] "GET /ws/ HTTP/1.1" 500 804
> "-" "lava-worker 2021.10"
> [Thu Feb 24 19:39:48.202295 2022] [proxy:warn] [pid 9151:tid
> 140199661131520] [client 134.86.62.139:42970] AH01144: No protocol
> handler was valid for the URL /ws/ (scheme 'ws'). If you are using a DSO
> version of mod_proxy, make sure the proxy submodules are included in the
> configuration using LoadModule.
> 134.86.62.139 - - [24/Feb/2022:19:39:48 +0530] "GET /ws/ HTTP/1.1" 500 804
> "-" "lava-worker 2021.10"
> [Thu Feb 24 19:39:48.515377 2022] [proxy:warn] [pid 9108:tid
> 140200655480576] [client 134.86.61.20:43202] AH01144: No protocol handler
> was valid for the URL /ws/ (scheme 'ws'). If you are using a DSO version of
> mod_proxy, make sure the proxy submodules are included in the configuration
> using LoadModule.
> 134.86.61.20 - - [24/Feb/2022:19:39:48 +0530] "GET /ws/ HTTP/1.1" 500 804
> "-" "lava-worker 2021.10"
>
>
> Server side log ( /var/log/lava-server/gunicorn.log )
> --------------------------------------------------------
>
> [2022-02-24 14:02:17 +0000] [704] [DEBUG] GET
> /scheduler/internal/v1/workers/slll-worker-testing/
> [2022-02-24 14:02:18 +0000] [704] [DEBUG] POST
> /scheduler/internal/v1/workers/
> [2022-02-24 14:02:19 +0000] [704] [DEBUG] GET
> /scheduler/internal/v1/workers/bng-test-worker/
> [2022-02-24 14:02:20 +0000] [722] [DEBUG] GET
> /scheduler/internal/v1/workers/Test-worker/
> [2022-02-24 14:02:20 +0000] [704] [DEBUG] POST
> /scheduler/internal/v1/jobs/3879/
> [2022-02-24 14:02:23 +0000] [722] [DEBUG] POST
> /scheduler/internal/v1/workers/
> [2022-02-24 14:02:28 +0000] [721] [DEBUG] POST
> /scheduler/internal/v1/workers/
> [2022-02-24 14:02:29 +0000] [704] [DEBUG] GET
> /scheduler/job/3966/job_status
> [2022-02-24 14:02:29 +0000] [721] [DEBUG] GET
> /scheduler/job/3966/log_pipeline_incremental
> [2022-02-24 14:02:33 +0000] [704] [DEBUG] POST
> /scheduler/internal/v1/workers/
> [2022-02-24 14:02:37 +0000] [704] [DEBUG] GET
> /scheduler/internal/v1/workers/slll-worker-testing/
> [2022-02-24 14:02:38 +0000] [720] [DEBUG] POST
> /scheduler/internal/v1/workers/
> [2022-02-24 14:02:38 +0000] [704] [DEBUG] POST
> /scheduler/internal/v1/jobs/3834/
> [2022-02-24 14:02:39 +0000] [704] [DEBUG] GET
> /scheduler/internal/v1/workers/bng-test-worker/
> [2022-02-24 14:02:40 +0000] [704] [DEBUG] GET
> /scheduler/internal/v1/workers/Test-worker/
> [2022-02-24 14:02:43 +0000] [722] [DEBUG] POST
> /scheduler/internal/v1/workers/
> [2022-02-24 14:02:48 +0000] [722] [DEBUG] POST
> /scheduler/internal/v1/workers/
>
>
> Regards
> Sarath P T
>
> -----Original Message-----
> From: Antonio Terceiro [mailto:antonio.terceiro@linaro.org]
> Sent: 24 February 2022 18:37
> To: P T, Sarath <Sarath_PT(a)mentor.com>
> Cc: lava-users(a)lists.lavasoftware.org
> Subject: Re: [Lava-users] Re: Job is not exiting after the timeout
>
> On Thu, Feb 24, 2022 at 09:40:22AM +0000, P T, Sarath wrote:
> > Hi Team,
> >
> > I was able to find the root cause of the issue; just giving my
> observation:
> >
> > 1. I deleted a `cancelling` job with the ID 3834 from the GUI.
> > 2. And for the next test run it gives an error log on the worker like
> this:
> >
> > 2022-02-24 01:18:57,502 ERROR [3834] -> server error: code 503
> > 2022-02-24 01:18:57,502 DEBUG [3834] --> ('Connection aborted.',
> RemoteDisconnected('Remote end closed connection without response'))
> > 2022-02-24 01:19:16,795 INFO PING => server
> > 2022-02-24 01:19:17,268 INFO [3834] FINISHED => server
> > 2022-02-24 01:19:18,666 ERROR [3834] -> server error: code 404
> > 2022-02-24 01:19:18,666 DEBUG [3834] --> {"error": "Unknown job
> '3834'"}
> > 2022-02-24 01:19:36,797 INFO PING => server
> > 2022-02-24 01:19:37,274 INFO [3834] FINISHED => server
> > 2022-02-24 01:19:37,509 ERROR [3834] -> server error: code 503
> > 2022-02-24 01:19:37,509 DEBUG [3834] --> ('Connection aborted.',
> RemoteDisconnected('Remote end closed connection without response'))
>
> Is the server receiving the connections normally? If you look at the
> server logs (apache and/or gunicorn) there should be corresponding error
> messages in there telling you what went wrong.
>
> ------------------------------
>
> Message: 2
> Date: Fri, 25 Feb 2022 10:37:00 -0300
> From: Antonio Terceiro <antonio.terceiro(a)linaro.org>
> Subject: [Lava-users] Re: Job is not exiting after the timeout
> To: "P T, Sarath" <Sarath_PT(a)mentor.com>
> Cc: "lava-users(a)lists.lavasoftware.org"
> <lava-users(a)lists.lavasoftware.org>
> Message-ID: <YhjbfBGnnyO67EIY(a)linaro.org>
> Content-Type: multipart/signed; micalg=pgp-sha256;
> protocol="application/pgp-signature"; boundary="U431ChLU/1f+Fa7u"
>
> On Fri, Feb 25, 2022 at 05:10:58AM +0000, P T, Sarath wrote:
> > Server side log ( /var/log/apache2/lava-server.log )
> > ------------------------------------------------------
> >
> > 134.86.62.69 - - [24/Feb/2022:19:39:46 +0530] "GET /ws/ HTTP/1.1" 500
> 804 "-" "lava-worker 2021.10"
> > ::1 - - [24/Feb/2022:19:39:46 +0530] "POST
> /scheduler/internal/v1/workers/ HTTP/1.1" 400 68338 "-" "lava-worker
> 2021.10"
> > [Thu Feb 24 19:39:46.711251 2022] [proxy:warn] [pid 9108:tid
> 140199652738816] [client 134.86.62.139:42968] AH01144: No protocol
> handler was valid for the URL /ws/ (scheme 'ws'). If you are using a DSO
> version of mod_proxy, make sure the proxy submodules are included in the
> configuration using LoadModule.
> > 134.86.62.139 - - [24/Feb/2022:19:39:46 +0530] "GET /ws/ HTTP/1.1" 500
> 804 "-" "lava-worker 2021.10"
> > [Thu Feb 24 19:39:47.054716 2022] [proxy:warn] [pid 9151:tid
> 140199132653312] [client 134.86.61.20:43200] AH01144: No protocol handler
> was valid for the URL /ws/ (scheme 'ws'). If you are using a DSO version of
> mod_proxy, make sure the proxy submodules are included in the configuration
> using LoadModule.
> > 134.86.61.20 - - [24/Feb/2022:19:39:47 +0530] "GET /ws/ HTTP/1.1" 500
> 804 "-" "lava-worker 2021.10"
> > [Thu Feb 24 19:39:47.919417 2022] [proxy:warn] [pid 9108:tid
> 140200256718592] [client 134.86.62.69:45566] AH01144: No protocol handler
> was valid for the URL /ws/ (scheme 'ws'). If you are using a DSO version of
> mod_proxy, make sure the proxy submodules are included in the configuration
> using LoadModule.
> > 134.86.62.69 - - [24/Feb/2022:19:39:47 +0530] "GET /ws/ HTTP/1.1" 500
> 804 "-" "lava-worker 2021.10"
> > [Thu Feb 24 19:39:48.202295 2022] [proxy:warn] [pid 9151:tid
> 140199661131520] [client 134.86.62.139:42970] AH01144: No protocol
> handler was valid for the URL /ws/ (scheme 'ws'). If you are using a DSO
> version of mod_proxy, make sure the proxy submodules are included in the
> configuration using LoadModule.
> > 134.86.62.139 - - [24/Feb/2022:19:39:48 +0530] "GET /ws/ HTTP/1.1" 500
> 804 "-" "lava-worker 2021.10"
> > [Thu Feb 24 19:39:48.515377 2022] [proxy:warn] [pid 9108:tid
> 140200655480576] [client 134.86.61.20:43202] AH01144: No protocol handler
> was valid for the URL /ws/ (scheme 'ws'). If you are using a DSO version of
> mod_proxy, make sure the proxy submodules are included in the configuration
> using LoadModule.
> > 134.86.61.20 - - [24/Feb/2022:19:39:48 +0530] "GET /ws/ HTTP/1.1" 500
> 804 "-" "lava-worker 2021.10"
>
> Your apache is not configured correctly, you are probably missing
> enabling mod_proxy and/or mod_proxy_http. See
>
> https://master.lavasoftware.org/static/docs/v2/installing_on_debian.html#pr…
>
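An aside on the AH01144 lines above: the 'ws' scheme specifically needs mod_proxy_wstunnel loaded, in addition to mod_proxy and mod_proxy_http. On a Debian-based server the usual fix is the following (a configuration sketch; assumes the stock Apache packaging):

```shell
# Enable the proxy modules Apache needs to forward the /ws/ websocket
# endpoint, then restart; `apache2ctl -M` confirms they are loaded.
sudo a2enmod proxy proxy_http proxy_wstunnel
sudo systemctl restart apache2
apache2ctl -M | grep proxy
```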
In fact, I haven't seen any performance drop, since I'm still trying this at a small, initial scale; I just want to eliminate any risk before moving to Debian 11. If an issue only shows up in a production environment, it may be really hard to debug.
Back to the question: in LAVA, if we use Debian 11, which enables cgroup v2 by default, then when using docker-test-shell LAVA attaches a custom BPF device program to the container, replacing the default one from Docker.
Everything looks fine, except that if I run "adb devices" in the container, the trace_pipe is flooded with the following:
```
device poll-7289 [001] d... 103054.767620: bpf_trace_printk: Device access: major = 189, minor = 261
device poll-7289 [001] d... 103055.767851: bpf_trace_printk: Device access: major = 189, minor = 261
device poll-7289 [001] d... 103056.768117: bpf_trace_printk: Device access: major = 189, minor = 261
device poll-7289 [001] d... 103057.768354: bpf_trace_printk: Device access: major = 189, minor = 261
device poll-7289 [001] d... 103058.768590: bpf_trace_printk: Device access: major = 189, minor = 261
device poll-7289 [001] d... 103059.768819: bpf_trace_printk: Device access: major = 189, minor = 261
device poll-7289 [001] d... 103060.769053: bpf_trace_printk: Device access: major = 189, minor = 261
```
This means the BPF function is being called frequently (at intervals of less than one second).
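For context: major 189 is the USB (usbfs) character-device class, so these entries are adb's device poll touching a USB device node. A small sketch decoding the pair, using the kernel's usbfs minor numbering:

```python
# Decode the major/minor pair from the bpf_trace_printk lines above.
# Major 189 is the USB device (usbfs) class; the kernel numbers usbfs
# minors as (bus - 1) * 128 + (devnum - 1).
major, minor = 189, 261

bus = minor // 128 + 1
devnum = minor % 128 + 1
print(f"/dev/bus/usb/{bus:03d}/{devnum:03d}")  # -> /dev/bus/usb/003/006
```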
On the other hand, if I run the following, the BPF program is unregistered from the Linux kernel, yet "adb devices" still appears to work:
```
/sys/fs/cgroup/system.slice/docker-a9354f54a8c6a56932e15b4d577432abf86c897630d5e94da442474e938bf875.scope
78 device multi lava_docker_dev
$ bpftool cgroup detach /sys/fs/cgroup/system.slice/docker-a9354f54a8c6a56932e15b4d577432abf86c897630d5e94da442474e938bf875.scope device id 78
```
So I just want to confirm: have you noticed this behavior, and can you confirm it is OK?
(To be honest, I'm not sure how BPF performs when it is called this frequently, so this is just an enquiry.)
Or does LAVA have a better way to handle this?
I need your confirmation to decide whether I should downgrade to cgroup v1 when I migrate. Thanks!
Regards,
Larry
Hi,
I'm facing an issue after updating the base-uboot file on the server.
*Configuration Error: missing or invalid template.*
*Jobs requesting this device type will not be able to start until a
template is available on the master.*
I have restarted the server and the dispatcher, but nothing changed. All
the devices went offline automatically.
Any advice to resolve this issue would be appreciated.
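One quick way to narrow down a "missing or invalid template" error is to parse the edited template source with jinja2 directly (a minimal sketch; LAVA renders device-type templates with jinja2):

```python
# Sketch: run a device-type template through jinja2's parser to surface
# the syntax error behind "missing or invalid template".
from jinja2 import Environment, TemplateSyntaxError

def check_template(source: str) -> str:
    try:
        Environment().parse(source)
        return "OK"
    except TemplateSyntaxError as exc:
        return f"syntax error at line {exc.lineno}: {exc.message}"

# A valid fragment parses cleanly; an unclosed block does not.
print(check_template("{% set method = 'u-boot' %}{{ method }}"))
print(check_template("{% block body %}missing endblock"))
```

To check the real file, read its contents (on a Debian install the templates usually live under /etc/lava-server/dispatcher-config/device-types/; adjust for your setup) and pass them to check_template.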
Thank you
Hello Team
After hitting an "InfrastructureError: bootloader interrupt", I made
changes to the base-uboot.jinja file and restarted the server.
But it now says "*Configuration Error: missing or invalid template.*
Jobs requesting this device type will not be able to start until a template
is available on the master."
Please advise how to fix this.
Thank you
Hi Team,
Good day to you!
I'm setting up a dispatcher to connect new devices and run tests. To do
this, I have connected all the required hardware and created the
configuration files by following the LAVA documentation. I am able to
control the device with the relay board automatically and can log in using
telnet.
I'm trying to run my initial test job definition. It downloads the image,
but fails to flash it to the device with the message below:
- {"dt": "2022-08-22T09:07:45.799863", "lvl": "debug", "msg": "Calling:
'nice' 'flash' 'pk_raspi-unit1' '{package}'"}
- {"dt": "2022-08-22T09:07:45.809654", "lvl": "debug", "msg": ">>
/var/lib/lava/dispatcher/tmp/466/deploy-flasher-p2k8tvri
/var/lib/lava/dispatcher/tmp/466/deploy-flasher-p2k8tvri\r"}
- {"dt": "2022-08-22T09:07:45.811510", "lvl": "debug", "msg": ">> \r"}
- {"dt": "2022-08-22T09:07:45.811674", "lvl": "debug", "msg": ">> 7-Zip
[64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21\r"}
- {"dt": "2022-08-22T09:07:45.811955", "lvl": "debug", "msg": ">> p7zip
Version 16.02 (locale=C.UTF-8,Utf16=on,HugeFiles=on,64 bits,8 CPUs Intel(R)
Core(TM) i5-8265U CPU @ 1.60GHz (806EB),ASM,AES-NI)\r"}
- {"dt": "2022-08-22T09:07:45.812045", "lvl": "debug", "msg": ">> \r"}
- {"dt": "2022-08-22T09:07:45.812121", "lvl": "debug", "msg": ">> Scanning
the drive for archives:\r"}
- {"dt": "2022-08-22T09:07:45.812195", "lvl": "debug", "msg": ">> 0M
Scan\b\b\b\b\b\b\b\b\b \b\b\b\b\b\b\b\b\b\r"}
- {"dt": "2022-08-22T09:07:45.812266", "lvl": "debug", "msg": ">> ERROR: No
more files\r"}
- {"dt": "2022-08-22T09:07:45.812366", "lvl": "debug", "msg": ">>
{package}\r"}
- {"dt": "2022-08-22T09:07:45.812434", "lvl": "debug", "msg": ">> \r"}
- {"dt": "2022-08-22T09:07:45.812712", "lvl": "debug", "msg": ">> \r"}
- {"dt": "2022-08-22T09:07:45.812790", "lvl": "debug", "msg": ">> \r"}
- {"dt": "2022-08-22T09:07:45.812860", "lvl": "debug", "msg": ">> System
ERROR:\r"}
- {"dt": "2022-08-22T09:07:45.812929", "lvl": "debug", "msg": ">> Unknown
error -2147024872\r"}
- {"dt": "2022-08-22T09:07:45.813221", "lvl": "debug", "msg": "Returned 2
in 0 seconds"}
- {"dt": "2022-08-22T09:07:45.813306", "lvl": "error", "msg": "Unable to
run 'nice' '['flash', 'pk_raspi-unit1', '{package}']'"}
- {"dt": "2022-08-22T09:07:45.813411", "lvl": "exception", "msg": "Unable
to flash the device"}
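A note on the failure above: the flash command receives the literal string '{package}' rather than a file path, which suggests a placeholder in the device's flash command that was never substituted; 7-Zip then scans for a nonexistent archive, hence "ERROR: No more files". A Python illustration (not LAVA's actual code; the path below is made up):

```python
# Illustration only: how a literal "{package}" can reach a command line
# when placeholder substitution never runs.
cmd_template = "nice flash pk_raspi-unit1 {package}"

# With the key supplied, the real path is substituted (hypothetical path):
args = {"package": "/var/lib/lava/dispatcher/tmp/466/image.7z"}
print(cmd_template.format(**args))

# With the key missing, str.format raises; a naive passthrough would
# instead hand the tool the literal text "{package}".
try:
    cmd_template.format()
except KeyError as exc:
    print(f"unsubstituted placeholder: {exc}")
```

So the thing to check is where the device dictionary's flash command gets its parameters from, and whether the job definition actually supplies a value for that placeholder.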
Any suggestion to overcome this issue would be very helpful.
The log file is attached.
Best regards.