Hello Team,
I'm using the notify action in the job definition to notify users about the
status of the job. I'm already a registered user in LAVA, and after my job
completes, the administration site shows the status as "*not sent*".
Please let me know what the reason could be and how I can achieve this.
Below is the job definition I'm using:
```
device_type: ADT-UNIT1
job_name: sample test to notify user
timeouts:
  job:
    minutes: 15
  action:
    minutes: 10
  connection:
    minutes: 5
visibility: public
actions:
- command:
    name: relay_pwr_on
    timeout:
      minutes: 1
- deploy:
    to: flasher
    images:
      package:
        url: https://artifactory.softwaretools.com/artifactory/gop-generic-stable-local/…
- boot:
    method: u-boot
    commands:
    - setenv factorymode 1
    - boot
    auto_login:
      login_prompt: 'login:'
      username: root
      password_prompt: 'Password:'
      password: root
      login_commands:
      - touch /home/root/test_file
      - ifconfig
    prompts:
    - 'root@hon-grip'
    - 'root@GRIP'
notify:
  recipients:
  - to:
      method: email
      user: pavan
  criteria:
    status: finished
  verbosity: verbose
```
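In case it is relevant: I understand notification emails go through Django's
email backend, so the server presumably needs working SMTP settings. A minimal
sketch of what I believe has to be configured, assuming a packaged install
that reads extra settings from /etc/lava-server/settings.d/*.yaml (the
filename and all values below are placeholders, not my real configuration):
```
# /etc/lava-server/settings.d/smtp.yaml -- filename is an assumption
EMAIL_HOST: "smtp.example.com"          # placeholder SMTP relay
EMAIL_PORT: 587                         # placeholder port
EMAIL_HOST_USER: "lava"                 # placeholder credentials
EMAIL_HOST_PASSWORD: "secret"
EMAIL_USE_TLS: true
DEFAULT_FROM_EMAIL: "lava@example.com"  # sender address for notifications
```
Is this the right direction, or is something else required when the status
shows "not sent"?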
Thanks & Regards,
Pavan
Hi all,
I am struggling with a situation where one of the jobs in a multinode job
gets stuck in the 'scheduling' state forever.
It occurs about 1 time in 10: one of the multinode jobs gets stuck in the
'scheduling' state and the other job times out waiting for it.
- lava-scheduler log
When this issue occurs, the lava-scheduler log indicates that only one of the
multinode jobs was scheduled and the other was not.
- lava-dispatcher log
And when the issue occurs, the lava-dispatcher log suggests that only one job
was triggered.
I am running lava-server and lava-dispatcher in Docker instances (version
2023.08). The issue occurred with 2023.06 too.
The issue seems to be related to lava-scheduler. What should I check to
resolve it?
Please advise.
Thank you
Hi,
I am using the lava-server and lava-dispatcher Docker images, version 2023.08.
I have an issue when submitting a multinode job: sometimes lava-server runs the jobs with abnormal job IDs such as 230 and 230.1, and sometimes with normal job IDs such as 230.0 and 230.1.
When I submit the definition, the web page sometimes shows the message 'Invalid job definition: expected a dictionary'.
Is this because of a syntax error in my definition? When I submit via the web UI or lavacli, there is no warning or error indicating a syntax error.
The job definition YAML looks like this:
```
job_name: multinode test job
timeouts:
  job:
    minutes: 60
  action:
    minutes: 60
  connection:
    minutes: 60
priority: medium
visibility: public
protocols:
  lava-multinode:
    roles:
      target:
        count: 1
        device_type: customdevice
      host:
        count: 1
        device_type: docker
actions:
- deploy:
    role:
    - target
    to: flasher
    images:
      fw:
        url: http://example.com/repository/customdevice/test/test.bin
- boot:
    role:
    - target
    method: minimal
    prompts:
    - 'root:'
- test:
    interactive:
    - name: send_target_ready
      prompts:
      - 'root:'
      script:
      - command: ls
      - lava-send: booted
      - lava-wait: done
    role:
    - target
- deploy:
    role:
    - host
    to: docker
    os: debian
    image: testimage:2023.08
- boot:
    role:
    - host
    method: docker
    command: /bin/bash -c 'service ssh start; bash'
    prompts:
    - 'root@lava:'
- test:
    interactive:
    - name: wait_target_ready
      prompts:
      - 'root@lava:'
      script:
      - command: ls
      - lava-wait: booted
    role:
    - host
- test:
    role:
    - host
    definitions:
    - repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: run-command
          description: "Run command"
          os:
          - ubuntu
          scope:
          - functional
        run:
          steps:
          - lava-test-case pwd-command --shell 'pwd'
          - lava-send done
      from: inline
      name: run-command
      path: inline/run-command.yaml
```
Hello,
I tried upgrading from lava-server 2023.03 to 2023.06 on Debian 11 and got the following error:
Setting up lava-server (2023.06+11+bullseye) ...
/var/run/postgresql:5432 - accepting connections
Updating configuration:
* generate SECRET_KEY [SKIP]
* generate DATABASES [SKIP]
Run fixups:
* fix permissions:
- /var/lib/lava-server/home/
- /var/lib/lava-server/default/
- /var/lib/lava-server/default/media/
- /var/lib/lava-server/default/media/job-output/
- /etc/lava-server/dispatcher-config/
- /etc/lava-server/dispatcher.d/
- /var/lib/lava-server/default/media/job-output/2017/
- /etc/lava-server/dispatcher-config/devices/
- /etc/lava-server/dispatcher-config/devices/*
- /etc/lava-server/dispatcher-config/device-types/
- /etc/lava-server/dispatcher-config/device-types/*
- /etc/lava-server/dispatcher-config/health-checks/
- /etc/lava-server/dispatcher-config/health-checks/*
* drop duplicated templates:
* fix permissions:
- /etc/lava-server/settings.conf
- /etc/lava-server/instance.conf
- /var/log/lava-server/
- /var/log/lava-server/*
- /etc/lava-server/secret_key.conf
Create database:
psql -q
NOTICE: not creating role lavaserver -- it already exists
NOTICE: not creating role devel -- it already exists
lava-server manage migrate --noinput --fake-initial
Operations to perform:
  Apply all migrations: admin, auth, authtoken, contenttypes, lava_results_app, lava_scheduler_app, linaro_django_xmlrpc, sessions, sites
Running migrations:
  Applying lava_results_app.0019_auto_20230307_1545...Traceback (most recent call last):
File "/usr/bin/lava-server", line 55, in <module>
main()
File "/usr/bin/lava-server", line 51, in main
execute_from_command_line([sys.argv[0]] + options.command)
File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 364, in execute
output = self.handle(*args, **options)
File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 83, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/django/core/management/commands/migrate.py", line 232, in handle
post_migrate_state = executor.migrate(
File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 245, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/lib/python3/dist-packages/django/db/migrations/migration.py", line 114, in apply
operation.state_forwards(self.app_label, project_state)
File "/usr/lib/python3/dist-packages/django/db/migrations/operations/models.py", line 256, in state_forwards
state.remove_model(app_label, self.name_lower)
File "/usr/lib/python3/dist-packages/django/db/migrations/state.py", line 100, in remove_model
del self.models[app_label, model_name]
KeyError: ('lava_results_app', 'actiondata')
dpkg: error processing package lava-server (--configure):
installed lava-server package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
lava-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
Does anybody know what I can do in this case?
Uninstalling lava-server and installing it again does not resolve the issue.
Kind regards,
Tim
--
Tim Jaacks
SOFTWARE DEVELOPER
SECO Northern Europe GmbH
Schlachthofstrasse 20
21079 Hamburg
Germany
T: +49 40 791899-183
E: tim.jaacks(a)seco.com
Register: Amtsgericht Hamburg, HRB 148893
Represented by: Dirk Finstel, Marc-Michael Braun, Massimo Mauri
Hi,
is it possible to show the total count of test cases listed in the test
definition on the Results GUI page?
Right now it shows only the executed test cases (either PASS or FAIL) but
does not show the count of cases yet to be executed.
Regards,
Koti
Hello,
I am trying to understand how the result levels are constructed, for example
when I run "lavacli results <job-id>" I get an output looking like this:
Results:
* lava.git-repo-action [pass]
* lava.test-overlay [pass]
* lava.test-install-overlay [pass]
* lava.test-runscript-overlay [pass]
* lava.my_test_case_1 [pass]
* lava.my_test_case_2 [pass]
What I want to do is to have test categories, for example:
* lava.category1.my_test_case_1 [pass]
* lava.category3.my_test_case_3 [pass]
and so on. I tried adding tags and namespaces, but the results are not affected.
Can someone please guide me through this or point me to the part of the documentation that might be helpful?
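For reference, a minimal sketch of the kind of test action I am running, with
illustrative names. My current understanding is that the result path is
<suite>.<case>, with the suite derived from the test definition's name
(possibly prefixed with an index like 0_) rather than from tags or namespaces:
```
- test:
    definitions:
    - from: inline
      name: category1            # illustrative: this name seems to become the suite
      path: inline/category1.yaml
      repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: category1
          description: "illustrative category of test cases"
        run:
          steps:
          - lava-test-case my_test_case_1 --shell 'true'
```
Is splitting the cases into one definition per category the intended way to
get category1.my_test_case_1, or is there a dedicated mechanism?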
Thanks in advance.
Hi Team,
Is there any provision in LAVA for passing the UUID for QEMU platforms? I am getting an error while mounting the rootfs: "Can't find the root device with the matching UUID!". Does LAVA provide any way to pass the UUID along with "image_arg" for QEMU, or is it something I am missing in the job definition file?
```
- deploy:
    timeout:
      minutes: 5
    to: tmpfs
    images:
      kernel:
        image_arg: -kernel {kernel} -append "console=ttyS0,115200 root=/dev/sda vga=0x305 rw"
        url: <URL_to_vmlinuz>/vmlinuz
        type: zimage
      ramdisk:
        image_arg: -initrd {ramdisk}
        url: <URL_to_initrd>/initrd.img
        compression: gz
      rootfs:
        image_arg: -drive file={rootfs},format=raw
        url: <URL_to_rootfs>/rootfs-qemu-amd64.squashfs
        os: debian
        root_partition: 1
```
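What I would like to end up with is roughly the following, if image_arg
accepts it (the UUID value below is a placeholder, not a real one):
```
kernel:
  image_arg: -kernel {kernel} -append "console=ttyS0,115200 root=UUID=00000000-0000-0000-0000-000000000000 rw"
  url: <URL_to_vmlinuz>/vmlinuz
  type: zimage
```
Since image_arg appears to be free-form text substituted into the QEMU command
line, I assume the UUID can simply be embedded in the -append string, but I
have not been able to confirm this.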
Regards ,
Sarath P T
Hi Team,
Is there any implementation in LAVA of a reboot mechanism for USB-booted devices, basically x86 architecture? I found there are options for SD-card-booted devices using "uuu", but is there a similar approach for USB-booted devices?
As far as I can tell, LAVA has no provision for identifying the "login prompt" without a "boot" method.
Regards,
Sarath P T
Hello Team,
Good Day to you!
I have a requirement to send an "Enter" key press after the u-boot process
completes, in order to enter the system.
Without pressing a key, the prompt does not appear and the console stays
stagnant.
Please let me know how to send an "Enter" key press at the end of the u-boot log.
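One idea I am considering is an interactive test action that sends a bare
newline and then waits for the prompt. A sketch, assuming an empty command
results in just a newline being written to the console (the prompt string
below is a guess):
```
- test:
    interactive:
    - name: press-enter
      prompts:
      - 'login:'          # guess at the prompt that appears after Enter
      script:
      # assuming the runner terminates every command with a newline,
      # an empty command should amount to pressing Enter
      - command: ''
        name: send-enter
```
Is this a reasonable approach, or is there a dedicated option for this?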
Thanks
Pavan kumar