Hi Everyone,
In the BeagleBone Black health check, `bbb_debian_ramdisk_test.yaml`,
located in the Linaro master repository
(https://git.linaro.org/lava-team/refactoring.git), there are the
following lines in the "action:" block:
---
kernel:
  url: http://snapshots.linaro.org/components/lava/standard/debian/jessie/armhf/4/…
ramdisk:
  url: http://snapshots.linaro.org/components/lava/standard/debian/jessie/armhf/4/…
  compression: gz
  # the bootloader needs a u-boot header on the modified ramdisk
  add-header: u-boot
modules:
  url: http://snapshots.linaro.org/components/lava/standard/debian/jessie/armhf/4/…
  compression: gz
---
How is the `initramfs.cpio.gz` generated? KernelCI's build.py script
doesn't generate it. None of the Lava scripts generate it, yet it is
required to perform the boot test of a kernel on the Beaglebone Black. I
can't find it mentioned anywhere in the documentation either.
How did you generate this so it is compatible with the BBB? We want to
follow Linaro's standards, guidelines and recommendations as closely as
we can, but this particular part seems to be missing.
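For anyone else hitting this: one common way to build a plain `initramfs.cpio.gz` by hand is sketched below. The rootfs contents here are a placeholder, not Linaro's actual build recipe; note that the job's `add-header: u-boot` line makes LAVA add the U-Boot header itself, so the ramdisk only needs to be a plain gzipped cpio archive.

```shell
# Populate a staging directory; in a real ramdisk this would contain
# busybox, device nodes, an init script, etc.
mkdir -p rootfs/bin rootfs/dev rootfs/etc rootfs/proc rootfs/sys
printf '#!/bin/sh\nexec /bin/sh\n' > rootfs/init
chmod +x rootfs/init

# newc is the archive format the kernel expects for an initramfs
( cd rootfs && find . | cpio -o -H newc ) | gzip -9 > initramfs.cpio.gz

# Optional: wrap with a U-Boot header yourself. With LAVA v2 this is
# unnecessary because "add-header: u-boot" in the job does it for you.
if command -v mkimage >/dev/null 2>&1; then
    mkimage -A arm -T ramdisk -C gzip -d initramfs.cpio.gz initramfs.cpio.uboot
fi
```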
Any help you can offer would be greatly appreciated.
Thank you!
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Hi Everyone,
This week one of my teammates discovered that storage was very low on
our LAVA Server. After he investigated, he found that
/var/lib/lava/dispatcher gradually increases in size. He realized that
when a test runs, files accumulate in
/var/lib/lava/dispatcher/slave/tmp for each job but are never removed.
Does LAVA have a setting, or some kind of automation, that will remove
these files after, say, X days or by some other criteria? Or do we need
to remove them manually?
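In the meantime, a sketch of a cron-driven sweep; the path is the one from the message above, while the script name and the 7-day threshold are assumptions to adapt:

```shell
#!/bin/sh
# Remove per-job temp directories older than N days.
# Usage: cleanup_job_tmp <root> <days>
cleanup_job_tmp() {
    root="$1"    # e.g. /var/lib/lava/dispatcher/slave/tmp
    days="$2"    # directories older than this many days are removed
    # Only the top-level per-job directories, never the root itself
    find "$root" -mindepth 1 -maxdepth 1 -type d -mtime "+$days" \
        -exec rm -rf {} +
}
```

Saved as, say, /usr/local/sbin/lava-tmp-cleanup and run daily from cron, this assumes completed jobs no longer need their dispatcher temp files.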
I appreciate any guidance you can offer.
Thank you!
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Feature request:
Please add an option to "lava-server manage device-types" to add an
alias. Currently this can be done from the Django interface, but the
command-line interface is much more automation- and
administrator-friendly.
Thanks,
Kevin
Hi, we're attempting to use lava-ci to submit a test to LAVA. I've
cloned it from
https://github.com/kernelci/lava-ci.git
But when I attempt to submit a simple test
../lava-job-runner.py --username lavauser --token ... --server http://localhost:8080/RPC2
I get
Connecting to Server...
Connection Successful!
connect-to-server : pass
Gathering Devices...
Gathered Devices Successfully!
Gathering Device Types...
Gathered Device Types Successfully!
Submitting Jobs to Server...
but I don't see any submitted jobs in the LAVA v2 web interface. Is there
anything obvious elsewhere I should be checking? Or does the absence of a
'Submitted Jobs Successfully' message (if there should be one) mean that
nothing was submitted?
Robert
Hi,
In LAVA v1, one could declare login commands to be run after logging in and
before starting any of the tests. For example:
"actions": [
{
"command": "deploy_image",
"parameters": {
"image": "https://some-url-to-a-rootfs.ext4.img.gz",
"bootfstype": "ext2",
"login_commands": [
"sudo su"
],
"login_prompt": "login:",
"username": "username",
"password_prompt": "Password:",
"password": "password"
}
},
In this case, "sudo su" is needed to open a regular user session and inherit
the user's environment while also having root privileges.
In LAVA v2, there is no way to do anything like this directly. One could
define a test with inline commands, but this is not ideal. The login
commands are not a test but part of how the job sets up the environment in
which the tests are run - i.e. it's part of the initial conditions. Also it's
quite a convoluted and lengthy way of running some commands, and it relies on
the side effects of that "login commands test" to persist when running
subsequent tests.
So I've made a couple of patches to see how this could be implemented in LAVA
v2 with an extra parameter in auto_login:
https://review.linaro.org/#/q/topic:T5703-login-commands
For example:
- boot:
    method: qemu
    auto_login:
      login_prompt: 'login:'
      username: user
      password_prompt: 'Password:'
      password: password
      login_commands:
      - sudo su
Essentially, this makes auto_login more flexible. At the moment,
auto_login already sets the shell prompt after logging in: that is in
effect a hard-coded login command. Some jobs need to run other things,
such as "sudo su", to stick with the same example.
Another login command we've used is "systemctl --failed" to show if any systemd
units (services) failed to load during boot.
Notes from the Gerrit reviews:
* The login commands can't be part of a device definition as they are not
related to the device hardware or the boot configuration. For example, when
running Android, one would not want to run "sudo su" but maybe "setprop ..."
or something else - to be defined in each job.
* The login commands should not be fixed in a given distro / userspace
configuration as each job may need to set up a different initial environment.
For example, some jobs may need to be run with a regular user and would not
use the "sudo su" login command.
* Some documentation and unit tests would need to be added for this to be
merged. This is to first discuss the idea and review the code changes.
Any thoughts? Would it make sense to add this feature or maybe implement it
differently?
Best wishes,
Guillaume
Hi Everyone,
I have a co-worker who wants to use our KernelCI & LAVA virtual
machine. He says he wants to boot the VM, log in, and run a command that
downloads a kernel and then tests multiple defconfigs and multiple
versions of the Linux kernel. What is the best tool to do this (lava-ci,
lava-tool, or a different tool)?
Can you point me to some examples of the tool you recommend?
Any help you can offer would be greatly appreciated.
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Hello,
I'm trying to set one timeout per test in a job. To do so I'm declaring one test block per test.
Unfortunately, it seems that only the first timeout declaration is taken into account. Did I miss something in my job definition?
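For reference, in a LAVA v2 job each action block can carry its own timeout override; a sketch (repository paths, names and values are invented for illustration, not taken from the job in question):

```yaml
timeouts:
  job:
    minutes: 60        # overall cap for the whole job
actions:
- test:
    timeout:
      minutes: 5       # applies to this test block only
    definitions:
    - repository: https://git.example.com/tests.git
      from: git
      path: first-test.yaml
      name: first-test
- test:
    timeout:
      minutes: 15      # a different timeout for the second block
    definitions:
    - repository: https://git.example.com/tests.git
      from: git
      path: second-test.yaml
      name: second-test
```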
Best regards,
Denis
Dear all,
This is my first post on the mailing list, I hope I'm at the right place.
Using Lava V2, I'm trying to install packages in the DUT following the guidelines from
https://validation.linaro.org/static/docs/v2/writing-tests.html#test-defini…
My job looks like this:
metadata:
  (...)
install:
  sources:
    - http://<local_network_package_server>/sti
    - http://<local_network_package_server>/all
    - http://<local_network_package_server>/cortexa9hf-neon
  deps:
    - python-pytest
    - python-lxml
    - packagegroup-core-buildessential*
run:
  steps:
    - step1
    - step2
parse:
  pattern: "^(?P<test_case_id>\\w+) RESULT:(?P<result>(pass|fail|unknown))"
  fixupdict:
    FAILED: fail
    SUCCESS: pass
    ABORTED: unknown
Running this test, I get the following error:
<LAVA_TEST_RUNNER>: started<LAVA_TEST_RUNNER>: looking for work in /lava-3715/0/lava-test-runner.conf-1484266027
<LAVA_TEST_RUNNER>: running 0_TC_BENCH_DISK_AIO_STRESS under lava-test-shell...
<LAVA_TEST_RUNNER>: running 0_TC_BENCH_DISK_AIO_STRESS installer ...
/lava-3715/0/tests/0_TC_BENCH_DISK_AIO_STRESS/install.sh: line 5: lava-add-sources: command not found
/lava-3715/0/tests/0_TC_BENCH_DISK_AIO_STRESS/install.sh: line 6: lava-add-sources: command not found
/lava-3715/0/tests/0_TC_BENCH_DISK_AIO_STRESS/install.sh: line 7: lava-add-sources: command not found
Error: OPKG package manager not found in the path.
It seems lava-add-sources is not copied to the target. Do I understand the log correctly?
Best regards,
Denis
Hello Team,
My name is guru. i am very new to lava and i am very much interested using
lava concept for embedded linux boards for auto deployment and testing
concepts.
I tried to set up LAVA for a BBB device. I followed these steps:
1. Installed Debian on a VM, plus lava-server and its components
(jessie-backports, 2016).
2. Just for understanding purposes I tried to add a KVM job; it loaded
successfully.
3. Now I am trying to add the BBB device to LAVA.
4. For that I added the BBB device to the dispatcher. The conf file:
name: beaglebone-black01.conf
content:
device_type = beaglebone-black
hostname = beaglebone-black01
connection_command = telnet localhost 2003
hard_reset: /usr/bin/reset.sh
power_off: /usr/bin/off.sh
power_on: /usr/bin/on.sh
Note: I am not using pduclient; I use my own scripts for the control
commands. However, it does not work when LAVA executes the hard_reset
command - see the log for more details.
5. My current setup controls the BBB with a serial-controlled relay
driven from the VM host machine (Debian).
For that I wrote my own custom Python scripts to power on, power off
and reset the board via the relay.
6. After that I tried to submit the JSON test job below; find my job
definition attached. I used the following JSON for reference:
https://git.linaro.org/lava-team/lava-functional-tests.git/tree/lava-test-shell/single-node/armmp-bbb-daily.json
7. After that I submitted the job - see the job log for more details.
8. I have no idea what is going on or what went wrong in my setup.
Please help me get the BBB booting with LAVA.
Regards,
Guru
On 27 March 2017 at 14:54, Ковалёв Сергей <SKovalev(a)ptsecurity.com> wrote:
> Thank you Neil for your reply.
Please keep the list in CC:
>
>> Compare with: https://staging.validation.linaro.org/scheduler/job/168802
>
> I have tried https://staging.validation.linaro.org/scheduler/job/168802/definition but iPXE stuck on it. I have amd64 machine with UEFI.
"stuck" ? This is a standard amd64 Debian kernel with modules and
initramfs. It is already UEFI-aware. Does the machine run Debian
natively? Is there a Debian kernel you can use in your LAVA
submissions (with modules and ramdisk)?
>> First step is to replace these files with images which work on the x86 DUT on staging.validation.linaro.org
>
> I perform kernel development with my colleagues so I have to load our kernels.
Yes, however, to debug what is going on, you should switch to known
working files so that you have a valid comparison with known working
test jobs. Once debugging has produced some results, then switch back
to the locally built kernels. Change one thing at a time.
>> That just isn't going to work. The initrd needs to come via TFTP but this is an absolute path.
>
> The 'initrd' does come via TFTP. In the context block I supply additional kernel boot options.
Your original email quoted:
context:
extra_kernel_args: initrd=/rootfs.cpio.gz root=/dev/ram0
rootfs.cpio.gz does not exist when the machine boots. The initramfs
will have been downloaded by TFTP and loaded directly into memory, it
simply does not exist as a cpio.gz any longer. /dev/ram0 shouldn't be
needed with modern kernels. At best, it would seem that these options
are ignored.
Debian initramfs log:
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... done.
Warning: fsck not present, so skipping unknown file system
mount: can't find /root in /etc/fstab
done.
Begin: Running /scripts/local-bottom ... done.
Begin: Running /scripts/init-bottom ... mount: mounting /dev on
/root/dev failed: No such file or directory
mount: mounting /dev on /root/dev failed: No such file or directory
done.
mount: mounting /run on /root/run failed: No such file or directory
run-init: current directory on the same filesystem as the root: error 0
Target filesystem doesn't have requested /sbin/init.
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
No init found. Try passing init= bootarg.
BusyBox v1.22.1 (Debian 1:1.22.0-19) built-in shell (ash)
Enter 'help' for a list of built-in commands.
Matched prompt #5: \(initramfs\)
> These boot options were determined before the effort to automate the process with LAVA. Without them we saw a kernel panic. With them we successfully load the kernel and rootfs (from Buildroot). Maybe at Linaro you embed those boot options at compile time?
No, we do not embed anything in V2 (it's one of the key changes from
V1: we don't hide magic like that any more).
The files were prepared with:
https://git.linaro.org/lava-team/refactoring.git/tree/scripts/x86_64-nfs.sh
You can also see the build log for the original Debian kernel package
if relevant.
https://tracker.debian.org/pkg/linux-signed
https://buildd.debian.org/status/fetch.php?pkg=linux-signed&arch=amd64&ver=…
Running x86_64-nfs.sh in an empty directory will provide access to the
config of the kernel itself as well as the initramfs and supporting
tools.
It's possible these context arguments are hiding some other problem in
the kernel but, as described so far, the options seem to make no
sense.
The command line used in our tests is simply:
ip=dhcp console=ttyS0,115200n8 lava_mac={LAVA_MAC}
(where LAVA_MAC does not need to be defined for these devices.)
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hello.
I'm trying to start LXC Debian hacking sessions on our V2 LAVA server.
This is the related configuration:
http://pastebin.com/index/DNGpJfc6
And I'm mostly doing what's in here:
https://git.linaro.org/lava-team/hacking-session.git
The problem I'm facing is that inside a script the environment seems to be broken, so there is no way to copy to ~/.ssh.
Regarding the environment I get this output:
$ echo $HOME
$ echo $USER
$ cat /etc/passwd | grep root
root:x:0:0:root:/root:/bin/bash
$ ls -al /root
total 16
drwx------ 2 root root 4096 Dec 16 15:33 .
drwxrwxrwx 19 root root 4096 Dec 23 13:18 ..
-rw-r--r-- 1 root root 570 Jan 31 2010 .bashrc
-rw-r--r-- 1 root root 148 Aug 17 2015 .profile
$ env
TESTRUN_ID=1_hacksession-debian
SHLVL=4
OLDPWD=/
container=lxc
_=defs/hacksession-debian/setup_session
COLUMNS=80
PATH=/lava-248/1/../bin:/usr/local/bin:/usr/local/sbin:/bin:/usr/bin:/usr/sbin:/sbin
LAVA_RESULT_DIR=/lava-248/1/results/1_hacksession-debian-1482499502
LANG=C
LC_ALL=C.UTF-8
PWD=/lava-248/1/tests/1_hacksession-debian
LINES=24
If I mimic the LAVA LXC container creation commands (lxc-create) and attach to the container, I get a sane environment.
Is this expected behavior?
BR,
Rafael Gago
Hi,
I've installed LAVA and created 'qemu' device type.
$ sudo lava-server manage add-device-type '*'
$ sudo lava-server manage add-device --device-type qemu qemu01
Then, I downloaded an example of yaml to submit a job for the qemu image.
$ wget
https://staging.validation.linaro.org/static/docs/v2/examples/test-jobs/qem…
./
$ lava-tool submit-job http://<name>@localhost qemu-pipeline-first-job.yaml
The error occurs while running 'image.py'.
(http://woogyom.iptime.org/scheduler/job/15)
Traceback (most recent call last):
File "/usr/bin/lava", line 9, in <module>
load_entry_point('lava-tool==0.14', 'console_scripts', 'lava')()
File "/usr/lib/python2.7/dist-packages/lava/tool/dispatcher.py", line
153, in run
raise SystemExit(cls().dispatch(args))
File "/usr/lib/python2.7/dist-packages/lava/tool/dispatcher.py", line
143, in dispatch
return command.invoke()
File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py",
line 216, in invoke
job_runner, job_data = self.parse_job_file(self.args.job_file,
oob_file)
File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py",
line 265, in parse_job_file
env_dut=env_dut)
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/parser.py",
line 165, in parse
test_action, counts[name])
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/parser.py",
line 66, in parse_action
Deployment.select(device, parameters)(pipeline, parameters)
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/logical.py",
line 203, in select
willing = [c for c in candidates if c.accepts(device, parameters)]
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/actions/deploy/image.py",
line 116, in accepts
if 'image' not in device['actions']['deploy']['methods']:
KeyError: 'actions'
It seems no 'methods' key is found under the actions->deploy block when
parsing the YAML file, but I'm not sure whether this error means my YAML
usage is wrong or something else.
Best regards,
Milo
Hi Williams,
I want to get the submitter of a LAVA job via lava-tool.
When I use the command "lava-tool job-details", the submitter is displayed as "submitter_id". How can I convert the submitter id to the submitter username?
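One possible workaround, assuming shell access to the server itself (this resolves the id against the Django user table directly; it is not a documented lava-tool feature, and 42 stands in for the submitter_id you saw):

```shell
lava-server manage shell <<'EOF'
from django.contrib.auth.models import User
print(User.objects.get(pk=42).username)
EOF
```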
Thanks.
Gitweb (which depends on apache2) and LAVA are installed on the same
host, but port 80 is used by LAVA, so gitweb cannot be reached with a
browser. So I want to change LAVA's port to another one, such as 8088,
but after changing the file
/etc/apache2/sites-enabled/lava-server.conf, LAVA no longer works.
Does anyone know how to make lava-server use another port?
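For reference, the standard Apache recipe is to add a Listen directive for the new port and bind the virtual host to it; file names here are from a default Debian install, and apache2 needs a restart afterwards:

```apache
# /etc/apache2/ports.conf
Listen 8088

# /etc/apache2/sites-enabled/lava-server.conf
<VirtualHost *:8088>
    # ... keep the existing lava-server configuration unchanged ...
</VirtualHost>
```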
By the way, I cannot find the "DocumentRoot" of lava-server. The config
file defines "DocumentRoot" as "/usr/share/lava-server/static/lava-server/",
but I cannot see a default index.html there. (I only see the template
in /usr/lib/python2.7/dist-packages/lava_server/templates/index.html.)
Could someone tell me where the lava-server default index page is?
--
王泽超
TEL: 13718389475
北京威控睿博科技有限公司 <http://www.ucrobotics.com>
Hi Everyone,
I am trying to set up a standalone Lava V2 Server by following the
instructions on the Linaro website and so far things have gone smoothly.
I have Lava installed, a superuser created and I can access the
application through a web browser. But, I have the following issues:
ISSUE #1:
- When I tried to submit a simple Lava V2 test job, I got an error
message stating that the "beaglebone black" device type is not
available.
- I found the directory where the .jinja2 files were stored including
the beaglebone-black.jinja2 file, but regardless of what I tried, I
couldn't get the web application to see the device type definitions.
- It seems like the application isn't pointing to the directory where
those device type files are stored.
- What do I need to do to make the Lava Server "see" those device type
files?
ISSUE #2:
- When I tried to submit a job, I pasted a small .yaml file and the
validation failed because it didn't recognize the data['run'] in the
job. I tried a few others and then I tried a V1 .json file and it
validated just fine.
- What do I have to do to allow Lava to accept V2 .yaml files? Am I
missing something simple?
As always, I greatly appreciate any feedback you may have to help me
out.
Thank you in advance!
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Hi Williams,
The submitted time is 8 hours behind my local time. How do I change the job submitted time displayed on the LAVA web pages?
I have tried to modify TIMEZONE of the file "/usr/lib/python2.7/dist-packages/lava_server/settings/common.py" and "/usr/lib/python2.7/dist-packages/django/conf/global_settings", and then restarted the LAVA server, but it seemed nothing changed.
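For background, Django stores timestamps in UTC when USE_TZ is enabled and converts them for display using TIME_ZONE, and edits under /usr/lib/python2.7/dist-packages are overwritten on upgrade. A sketch of the settings that matter, assuming your instance gives you a place for local Django settings overrides (the zone name is an example):

```python
# Django settings fragment (illustrative; where local overrides live
# depends on how your lava-server instance loads its settings)
TIME_ZONE = "Asia/Shanghai"  # tz database name for your local zone
USE_TZ = True                # store UTC, render in TIME_ZONE
```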
Thanks.
Hello.
I have configured a LAVA server and I set up a local Django account to start configuring things:
sudo lava-server manage createsuperuser --username <user> --email=<mail>
Then I want to add LDAP support by adding the relevant fields to /etc/lava-server/settings.conf:
"AUTH_LDAP_SERVER_URI": "ldaps://server.domain.se:636",
"AUTH_LDAP_BIND_DN": "CN=company_ldap,OU=Service Accounts,OU=Resources,OU=Data,DC=domain,DC=se",
"AUTH_LDAP_BIND_PASSWORD": "thepwd",
"AUTH_LDAP_USER_ATTR_MAP": {
"first_name": "givenName",
"email": "mail"
},
"DISABLE_OPENID_AUTH": true
I have restarted both apache2 and lava-server.
I was expecting to get a Sign In page like this one:
https://validation.linaro.org/static/docs/v1/_images/ldap-user-login.png
Unfortunately I'm familiar with neither Django (nor web development in general) nor LDAP, and I don't know how to debug this. I have tried to grep for ldap|LDAP in /var/log/lava-server but nothing pops up.
Unfortunately I couldn't find a way to browse the mailing list for previous answers. GMANE search doesn't work today.
How should I proceed?
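One way to get debug output is django-auth-ldap's own logger; this is the logging hook that library documents, though where exactly to add it depends on how your lava-server instance loads extra Django settings:

```python
# Add to the Django settings used by lava-server so the LDAP
# conversation shows up in the server logs.
import logging

logger = logging.getLogger("django_auth_ldap")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
```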
I have a multi-node test involving 13 roles that is no longer syncing properly after upgrading to 2016.11 this morning. It seems that 2 or 3 nodes end up waiting for a specific message while the other ones finish the message and move on to the next. Looking at the dispatcher log, I don't see any errors, but it's only logging that it's sending to some of the nodes. For example, I see a message like this for the nodes that work in a run:
2016-11-10 13:10:37,295 Sending wait messageID 'qa-network-info' to /var/lib/lava/dispatcher/slave/tmp/7615/device.yaml in group 2651c0a0-811f-4b77-bc07-22af31744fe5: {"/var/lib/lava/dispatcher/slave/tmp/7619/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7613/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7623/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tm
p/7620/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7611/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7621/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7622/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7617/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7618/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7614/device.yaml": {},
"/var/lib/lava/dispatcher/slave/tmp/7615/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7616/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7612/device.yaml": {}}
2016-11-10 13:10:37,295 Sending wait response to /var/lib/lava/dispatcher/slave/tmp/7615/device.yaml in group 2651c0a0-811f-4b77-bc07-22af31744fe5: {"message": {"/var/lib/lava/dispatcher/slave/tmp/7619/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7613/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7623/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7620/
device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7611/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7621/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7622/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7617/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7618/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7614/device.yaml": {}, "/var/l
ib/lava/dispatcher/slave/tmp/7615/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7616/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7612/device.yaml": {}}, "response": "ack"}
For the nodes that get stuck, there is no message like above.
All of the nodes are qemu type, all on the same host. The nodes that fail are not consistent, but there seems to be always 2 or 3 that fail in every run I tried.
Is there anything I can look at here to figure out what is happening?
--
James Oakley
james.oakley(a)multapplied.net
[Moving to lava-users as suggested by Neil]
On 11/07/2016 03:20 PM, Neil Williams (Code Review) wrote:
> Neil Williams has posted comments on this change.
>
> Change subject: Add support for the depthcharge bootloader
> ......................................................................
>
>
>
> Patch Set 3:
>
> (1 comment)
>
> https://review.linaro.org/#/c/15203/3/lava_dispatcher/pipeline/actions/depl…
>
> File lava_dispatcher/pipeline/actions/deploy/tftp.py:
>
> Line 127: def _ensure_device_dir(self, device_dir):
>> Cannot say that I have fully understood it yet. Would it be correct
>> if the
>
> The Strategy classes must not set or modify anything. The accepts
> method does some very fast checks and returns True or False. Anything
> which the pipeline actions need to know must be specified in the job
> submission or the device configuration. So either this is restricted
> to specific device-types (so a setting goes into the template) or it
> has to be set for every job using this method (for situations where
> the support can be used or not used on the same hardware for
> different jobs).
>
> What is this per-device directory anyway and how is it meant to work
> with tftpd-hpa which does not support configuration modification
> without restarting itself? Jobs cannot require that daemons restart -
> other jobs could easily be using that daemon at the same time.
So each firmware image containing Depthcharge will also contain
hardcoded values for the IP of the TFTP server and for the paths of a
cmdline.txt file and a FIT image. The FIT image contains a kernel and
a DTB file, and optionally a ramdisk.
Because the paths are set when the FW image is flashed, we cannot use
the per-job directory. Thus we add a parameter to the device that is to
be set in the device-specific template of Chrome devices. If that
parameter is present, then a directory in the root of the TFTP files
tree will be created with the value of that parameter.
The TFTP server doesn't need to be restarted because its configuration
is left unchanged, we just create a directory where depthcharge will
look for the files.
Thanks,
Tomeu
> I think this needs to move from IRC and gerrit to a thread on the
> lava-users mailing list where the principles can be checked through
> more easily.
>
>
Hi everyone,
As I have probably mentioned in previous emails, I'm using the Yocto
Project to generate some Linux images that I want to test using LAVA as
part of continuous integration.
So far so good: I can submit the job description to LAVA using lava-tool
and it will start the tests. I'm happy so far with all the results.
Now my question is what the correct way to do this would be. Do you
think it is reasonable to have a lava-tool submit-job followed by a
waiting step using lava-tool job-status to report the final build
result, or is there a nicer way to do this?
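For what it's worth, the submit-then-poll pattern can also be sketched directly against the XML-RPC API that lava-tool wraps; this assumes scheduler.job_status is available on your instance, and the host, user and token below are placeholders:

```python
import time

try:
    import xmlrpc.client as xmlrpclib  # Python 3
except ImportError:
    import xmlrpclib  # Python 2

# Terminal job states in LAVA
FINAL_STATES = {"Complete", "Incomplete", "Canceled"}

def is_finished(status):
    """True once a job has reached a terminal state."""
    return status in FINAL_STATES

def wait_for_job(server, job_id, poll_seconds=30):
    # scheduler.job_status is the call behind `lava-tool job-status`
    while True:
        status = server.scheduler.job_status(job_id)["job_status"]
        if is_finished(status):
            return status
        time.sleep(poll_seconds)

# Against a real instance (placeholders, not real values):
# server = xmlrpclib.ServerProxy("http://user:TOKEN@lava.example.com/RPC2")
# print(wait_for_job(server, 1234))
```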
Thanks a lot for your help in advance :)
Best,
Alfonso
By default, a uboot header is automatically added to the ramdisk image.
For bootloaders without INITRD_ATAG support, the ramdisk needs to be
passed on the command line and cannot have the uboot header added.
To enable this feature, add a "ramdisk_raw" option that device files can
set so that a uboot header is not added.
Signed-off-by: Kevin Hilman <khilman(a)baylibre.com>
---
Patch applies on 2016.9
lava_dispatcher/config.py | 1 +
lava_dispatcher/device/bootloader.py | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/lava_dispatcher/config.py b/lava_dispatcher/config.py
index 66a9e70021fa..c91c5634280d 100644
--- a/lava_dispatcher/config.py
+++ b/lava_dispatcher/config.py
@@ -312,6 +312,7 @@ class DeviceSchema(schema.Schema):
uimage_xip = schema.BoolOption(default=False)
append_dtb = schema.BoolOption(default=False)
prepend_blob = schema.StringOption(default=None)
+ ramdisk_raw = schema.BoolOption(default=False)
# for dynamic_vm devices
dynamic_vm_backend_device_type = schema.StringOption(default='kvm')
diff --git a/lava_dispatcher/device/bootloader.py b/lava_dispatcher/device/bootloader.py
index 634d22ef3311..c88fba8937e6 100644
--- a/lava_dispatcher/device/bootloader.py
+++ b/lava_dispatcher/device/bootloader.py
@@ -208,7 +208,7 @@ class BootloaderTarget(MasterImageTarget):
decompress=False)
extract_overlay(overlay, ramdisk_dir)
ramdisk = create_ramdisk(ramdisk_dir, self._tmpdir)
- if self._is_uboot():
+ if self._is_uboot() and not self.config.ramdisk_raw:
# Ensure ramdisk has u-boot header
if not self._is_uboot_ramdisk(ramdisk):
ramdisk_uboot = ramdisk + ".uboot"
--
2.5.0
Hello everyone,
Can you help me on below two questions?
1. I set up email notifications to send emails after a job completes or is incomplete.
How can I get the full logs of the email sending process, and where are they? I need to debug email sending.
2. I want to use a script to control device state periodically.
How can I set a device to maintenance state from the command line, with something like lava-tool?
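On question 2, a sketch against the XML-RPC API; the scheduler.put_into_maintenance_mode / put_into_online_mode calls are an assumption on my part to verify against the api/help page of your own instance, and the credentials and hostname are placeholders:

```python
try:
    import xmlrpc.client as xmlrpclib  # Python 3
except ImportError:
    import xmlrpclib  # Python 2

def endpoint(user, token, host):
    """Authenticated RPC2 URL, the same endpoint lava-tool talks to."""
    return "http://%s:%s@%s/RPC2" % (user, token, host)

# Against a real instance, from cron or a monitoring script:
# server = xmlrpclib.ServerProxy(endpoint("admin", "TOKEN", "lava.example.com"))
# server.scheduler.put_into_maintenance_mode("qemu01")
# server.scheduler.put_into_online_mode("qemu01")
```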
Thanks in advance.
Hello everyone,
just a simple question. I might be wrong, but I understand that
submitting a job through lava-dispatch and through lava-tool should lead
to the same process. With the dispatcher you can already specify a
specific target device encoded in YAML format. Does lava-tool at some
point reach a similar target configuration? Does it generate it, or is
it stored somewhere? In the latter case, where is it stored?
thanks in advance :)
Best,
Alfonso
Hello everyone,
just a quick question.
I am trying to run a job to test a QEMU image which is stored remotely.
The image is packed in a tar.xz file containing both the kernel and the
file system.
Is there a way to specify in the JSON job description that, before the
deploy action, it must unpack this tar.xz file and then use the kernel
and filesystem inside?
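As a sketch of one workaround, assuming the deploy action can only fetch individual URLs and cannot unpack a combined bundle itself: extract the bundle once where your web server can serve the pieces, then reference the extracted files in the job. The file names here are invented, and the first two lines only fabricate a demo bundle so the sketch is self-contained:

```shell
# Demo bundle; in practice image-bundle.tar.xz comes from your build
mkdir -p build && touch build/bzImage build/rootfs.ext4
tar -cJf image-bundle.tar.xz -C build bzImage rootfs.ext4

# The actual step: unpack once into the web server's document tree and
# point the job's deploy action at the extracted kernel and filesystem
mkdir -p images
tar -xJf image-bundle.tar.xz -C images/
```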
Thanks a lot :)
Best,
Alfonso
Hello guys,
I am currently trying to install the lava-server/dispatcher on my local
PC with Ubuntu 16.04. Unfortunately, I had little success installing the
source projects from github.com/linaro. I just wanted to ask if you
could recommend the best approach for me.
Thanks for your help.
Best regards,
Alfonso
Hi,
I'm trying to get the proper relationship between requested tests and
results in LAVA v2. Here is example job:
https://validation.linaro.org/scheduler/job/1109234
and results for this job:
https://validation.linaro.org/results/1109234
How can I tell:
- which result matches which test?
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
In LAVA v1 the matching was a very arduous process. One had to
download the job definition, look for lava-test-shell actions, pull
the test definition YAML sources and match the YAML ID to the ID found
in the result bundle. How does this work in v2?
milosz
Hello,
Google has released the latest version of Tradefed with the Android N
release.
https://source.android.com/devices/tech/test_infra/tradefed/index.html
Lots of dispatcher/slave features which LAVA already supports.
Given this update, is LAVA exploring adopting the new mechanism, or will
it continue developing its own architecture?
Thanks
Sandeep
Hi,
Chase did an excellent job and put together a piece of code that
allows local execution of lava-test-shell. This means we can use
our 'lava' test definitions on boards that are not deployed in any
lab. There are two main reasons for doing this:
- prototyping tests for LAVA
- semi-automated execution of tests on targets that are not deployed in any lab
The major part of this code is taken directly from the lava dispatcher.
There are slight modifications, but I would like to keep them to a
minimum or remove them altogether (if possible). So the question
follows: is there a way to achieve the same goal with only LAVA code?
One of the biggest problems was the ACKing that lava-test-shell
requires. This makes the tests 'interactive', which isn't best for
semi-automated execution. This bit was removed and now we're able to
use the test shell locally in non-interactive mode. The README file
describes the use cases we're covering. Any comments are welcome. The
code can be found here:
https://git.linaro.org/qa/lava-local-test.git
milosz
Hi Mahesh,
On Tuesday 19 July 2016 05:58 PM, Umamaheswara Rao Lankoti wrote:
> I am Umamaheswara Rao working for Innominds and I am trying to evaluate
> LAVA framework as part of Continuous Integration job.
Nice to know.
> I am looking at automating the smoke tests with LAVA, downloading a
> newly generated build from Jenkins, flash it on android phone, boot into
> home screen and run a minimal set of usecases, report success/failure
> for the test cases, etc..
There is no direct integration, such as a plugin, which does this in LAVA.
But you can submit jobs to LAVA via scripts once the builds are ready
in Jenkins. This is already done as part of many Jenkins-based CI loops
used within Linaro and elsewhere.
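For example, a build step can push the job over LAVA's XML-RPC API. A
minimal sketch follows; the host name, user and token are placeholders,
and scheduler.submit_job() returns the id of the newly created job:

```python
# Rough sketch: submitting a LAVA job from a Jenkins build step over
# XML-RPC. Host name, user and token below are placeholders.
import xmlrpc.client


def rpc_url(host, user, token):
    """Build the token-authenticated RPC2 endpoint URL."""
    return "https://%s:%s@%s/RPC2" % (user, token, host)


def submit_job(host, user, token, job_definition):
    """Submit a job definition string and return the new job id."""
    server = xmlrpc.client.ServerProxy(rpc_url(host, user, token))
    return server.scheduler.submit_job(job_definition)
```

A Jenkins shell step would then read the generated job file and call
submit_job() with an API token created under the submitting user's
profile on the LAVA server.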
> Looking at the documentation, I came to know that Ubuntu support is
> stopped. Would Debian Jessie be supported in future?
Yes, you are right: Ubuntu is deprecated. Debian will be supported in
future; we support Debian Jessie and Testing as of today.
PS: I've added the lava-users mailing list, so that you get more
input on this topic, in case I've missed anything.
Thank You.
--
Senthil Kumaran S
http://www.stylesen.org/ | http://www.sasenthilkumaran.com/
Hi,
I've just made a fresh Debian (Jessie) installation.
Then, I've added jessie-backports and installed lava-server from there.
Once the installation completed, I've rebooted and the GUI desktop
environment doesn't come up.
This happened twice already, so it's definitely the Lava installation
that's breaking something there. Is this a known issue? Any suggestions?
Regards,
matallui
Hi,
I had a boot issue caused by using SPACE to stop the autoboot. A quick fix for this was to add a \" to a variable declaration, such as "setenv dummy variable\"". Do you know of any other fix for this? The default for LAVA seems to be to look for "Hit any key to stop autoboot" and just send a "b" to stop it. In my case, I had to change the device configuration file to look for "Press SPACE to abort autoboot in 2 seconds" and then send a " ". The device configuration looks as follows: http://pastebin.ubuntu.com/18790420/.
Now I have another issue where the job keeps going even though the image booted successfully. I want it to stop once it sees "am335x-evm login:". Is there a way to do this? I changed master_str to "am335x-evm login:".
This is my complete log http://pastebin.ubuntu.com/18790288/.
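For reference, the kind of device configuration change described above might look roughly like the following fragment. This is a sketch only: `interrupt_boot_prompt` and `interrupt_boot_command` are the v1 dispatcher option names as I understand them, and the prompt text is board-specific, so verify both against the defaults shipped with your dispatcher:

```
# Sketch of a v1 device config override (option names and prompt text
# are assumptions to check against your dispatcher's default config).
interrupt_boot_prompt = Press SPACE to abort autoboot
interrupt_boot_command = " "
```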
Thanks,
Josue
Dear Lava team,
We're deploying Lava V2. So far we've been working on old servers to prototype our installation; we're now almost ready to order our final PCs.
We have assessed some key features for our Lava server. Still, we're not 100% sure how powerful it should be.
Is it possible for you to share the main characteristics of your current Lava servers (Number of cores, RAM, size of disk)? That would be helpful.
Thanks and best regards,
Denis Humeau
Hi,
I'm currently trying to configure a Switched Rack PDU to the lava instance.
How do I know if the driver for my PDU, an APC AP7901, is supported in the framework?
I got it connected both via serial and telnet. My /etc/lavapdu/lavapdu.conf file is attached.
Also, when I restart the lavapdu-runner it fails with the error that is attached.
As for the BeagleBone Black support: I have a device dictionary, but where should I save it so LAVA can access it? I've seen that the PDU hostname is always similar to pdu05. How do I know my PDU's hostname?
I appreciate the help, thanks.
Regards,
Josue Albarran
Hello,
I have a LAVA setup and am currently running two single-instance servers.
These are pretty high-end servers, each handling 4-5 Android devices over USB.
I need to expand that to a multinode setup with the maximum number of
DUTs connected to one dispatcher.
The target is to have only Android DUTs, which implies a very stable USB
hub and stack on the dispatcher machines.
I used a couple of off-the-shelf desktops in the past and the USB stack
could not handle more than one device at a time.
Any suggestions for hardware that is proven to be solid for dispatchers?
Thanks
Sandeep
Hello,
For the 2016.4 release I had created a custom LAVA extension to add 2
commands and 2 xmlrpc methods. The extension mechanism was very
convenient. All I had to do was install an egg on the system and restart
the lava service. When I upgraded to 2016.6 the extension did not work
due to commit b6fd045cc2b320ed34a6fefd713cd0574ed7b376 "Remove the need
for extensions".
I was not able to find a way to add my extension to INSTALLED_APPS and
register the xmlrpc methods without modifying settings/common.py and
urls.py. I looked through common.py and distro.py and could not find
support in settings.conf for extensions. I also looked for a
local_settings import which is referenced on the Internet as a common
way to extend django, but did not find it. If there is a way to extend
LAVA without modifying the LAVA python code, please let me know and I
will be happy to send in a documentation patch.
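The local_settings pattern referenced above would look roughly like the following (illustrative only; lava-server does not currently contain such a hook, which is the problem being reported):

```python
# Illustration of the common django "local_settings" extension hook.
# A settings module would call this at the end of its definitions to
# merge in site-local overrides when a local_settings module exists.


def load_local_settings(namespace):
    """Merge UPPER_CASE names from an optional local_settings module
    into the given settings namespace; silently skip when absent."""
    try:
        import local_settings
    except ImportError:
        return False
    namespace.update(
        {k: v for k, v in vars(local_settings).items() if k.isupper()}
    )
    return True
```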
It would have been nice if functionality such as the extension
mechanism, which is part of the external interface of LAVA, had gone
through a deprecation cycle. A reworked demo app showing how the new
extension mechanism works would have also been helpful.
Thank you for your time.
--
Konrad Scherer, MTS, Linux Products Group, Wind River
---------- Forwarded message ----------
From: Steve McIntyre <steve.mcintyre(a)linaro.org>
Date: 31 May 2016 at 16:46
Subject: Re: pipeline vland help
To: Christian Ziethén <christian.ziethen(a)linaro.org>
Cc: neil.williams(a)linaro.org
On Tue, May 31, 2016 at 03:29:45PM +0200, Christian Ziethén wrote:
>Hi,
Hi Christian,
You'd be much better asking on the lava-users mailing list rather than
just directly to me, I'll be honest.
>Been struggling with setting up a vland between two arndale targets. I have
>managed to create a multinode job that uses the pipeline model:
>https://lng.validation.linaro.org/scheduler/job/9702/multinode_definition
>Have also managed to create valid yaml that seems to conform to the code in
>lava-server/lava_scheduler_app/schema.py
>https://lng.validation.linaro.org/scheduler/job/9743/multinode_definition
>This one does however not do anything after being submitted, I tried putting
>100M in the tags for the vland, I also tried requiring that the arndale targets
>in the multinode protocol had the 100M tag, but that didn't work. According to
>the device dictionary for lng-arndale-01, it should have a 100M tag on
>iface1.
Hmmm, OK. I can see that it's sat waiting for devices to be
assigned. Looking at the definition, you've got one group (role)
defined with 2 entries. I believe that for VLANd jobs you need to have
individual roles for each machine. Neil can confirm.
>Also have this job (v1):
>https://lng.validation.linaro.org/scheduler/job/9118/multinode_definition
>Which runs my test using iface1 (I think) but it doesn't use vland.
Right, v1 doesn't do vland.
>I am unsure how to debug this.
>
>It was my assumption that I could create a vlan with the vland protocol and
>then query which interface is on that vlan in my test-definition. That
would be
>my end goal for this exercise.
Sure, that's what we expect to have working for you.
Cheers,
--
Steve McIntyre steve.mcintyre(a)linaro.org
<http://www.linaro.org/> Linaro.org | Open source software for ARM SoCs
Hi,
For the last couple of days I've been trying to set up a device in our lab for deployment and installation of a Linux OS.
However, I've come across a few issues and lots of questions, and hopefully I can get some answers here.
Considerations:
1. I’ve decided to give it a go with LAVA v2 and try using the pipeline model
2. I’m not testing any “boards”, so it’s hard to find a similar example
3. I am not deploying any Linaro images (not even debian based)
4. My team develops and supports a reference Linux OS, based on RHEL, so that means we have total control of our images, kickstart scripts, etc.
5. We already have a PXE server in our network, which is where our servers (the targets in this context) get booted from by default
6. Once booted from PXE, we get a PXE menu, where we can either select an option or press ESC and add a custom command (at this point syslinux is running)
7. We have access to the serial console of every device via serial port server (telnet <ipaddr> <port>)
8. We have power control over every device via IPMI
Issues:
1. I couldn't find the documentation on how to add devices and device types to the server (which location to put the file in, and which format - yaml/conf)
2. In the above described environment, I suppose we would skip the deployment phase, since the devices already boot into syslinux from PXE (is this correct?). Either way, it would be nice to be able to run 'ipmitool chassis bootdev pxe' before rebooting the system.
3. Either way (via boot or deploy), how can I make sure to detect the PXE (syslinux) menu, send the ESC key, and send the command I need to trigger the kickstart installation?
To sum up, the workflow I'm trying to achieve after having completed the whole setup goes roughly like this:
1. Reboot target device into PXE (the PXE itself will download and start syslinux here)
2. Wait for PXE menu (expect some sort of string)
3. Send ESC key (to get into the boot shell)
4. Send command (this will trigger our kickstart script and the installation will take around 5 minutes). The images that are needed are fetched automatically from our sftp server.
5. Wait for boot prompt after system installation succeeds
6. Login using credentials (username, password)
7. Run whatever tests we need from here
Any help here would be much appreciated.
Thanks in advance!
--
matallui
Hello,
I just started to set up a LAVA server to test custom devices, and I have some questions about testing custom or third-party boards.
Does anyone know of good documentation or a tutorial for the integration of custom boards? There doesn't seem to be much documented on this topic.
Is there any documentation for setting up a custom device configuration file and the commands that can be used, like client_type?
Thank you and best regards
Stefan
Hi there,
When I run a LAVA hacking session on Juno, I found that sometimes Juno
cannot be allocated an IP address properly:
- I created a multinode definition for Juno:
https://validation.linaro.org/scheduler/job/845471.0: this
definition launches kvm so I can run test suites on it;
https://validation.linaro.org/scheduler/job/845471.1: this
definition launches "deploy_linaro_image" on the Juno board;
- After launching these two images, the kvm usually works well and I
can smoothly log in to it with ssh;
- But the Juno board produces the ssh log below:
395.1 ./setup_session_oe: line 38: /etc/init.d/ssh: No such file or directory
395.2 <LAVA_SIGNAL_TESTCASE TEST_CASE_ID=sshd-restart RESULT=fail>
395.3 sshd re-start failed
395.4 Target's Gateway: 10.0.0.1
395.5 ip: RTNETLINK answers: Network is unreachable
395.6
395.7
395.8 *********************************************************************************************
395.9 Please connect to: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@ (juno-07)
395.10 *********************************************************************************************
395.11
So finally I cannot get Juno's IP address and cannot log in to it
with ssh. I cannot reproduce this failure every time; sometimes I am
lucky and get a correct IP.
- As a workaround, I found that if I create two separate definitions
for Juno and kvm independently, then Juno's IP issue is resolved:
https://validation.linaro.org/scheduler/job/845552
https://validation.linaro.org/scheduler/job/845561
Could you help give suggestions for this?
Thanks,
Leo Yan
Hi,
I have made a test setup with LAVA to run some test cases on my
AM335X-based platform. I want to execute some simple test cases, like
testing the status of the "ifconfig eth0" command. I don't want to
deploy any image; my goal is to execute only this test directly (the
target is booted with the root file system mounted). I have submitted
the job below to the LAVA worker. ethernet.yaml contains the shell
script which actually performs the ifconfig test.
=============================================================================================
{
    "actions": [
        {
            "command": "dummy_deploy",
            "parameters": {
                "target_type": "oe"
            }
        },
        {
            "command": "lava_test_shell",
            "parameters": {
                "testdef_repos": [
                    {
                        "testdef": "/root/lava_tests/yaml/ethernet.yaml"
                    }
                ],
                "timeout": 60
            }
        },
        {
            "command": "submit_results_on_host",
            "parameters": {
                "server": "http://10.24.252.13:81/RPC2/",
                "stream": "/anonymous/lavaserver/"
            }
        }
    ],
    "device_type": "ELIN",
    "health_check": false,
    "job_name": "elin-ethernet-test",
    "logging_level": "DEBUG",
    "timeout": 60
}
=======================================================================================================
The LAVA dispatcher is able to connect to the target and start the job,
but finally I get a timeout. Please find the attached log for
details. I can see that after connecting to the target the dispatcher
is checking/finding some information related to the file system, and
it looks like it is timing out there. How can I avoid this and make
the dispatcher perform the test directly on the target platform?
Any help would be appreciated.
Best Regards,
Pradeepkumar Soman
Hi all,
We are using LAVA to run jobs from KernelCI, which is a test automation
system used to test the latest versions of the kernel (upstream, next,
stable, ...). Our main goal is to put into our lab as many boards as
possible which are not yet in KernelCI, to provide boot reports for
these boards. However, for some of these we own a single unit, while we
definitely need to work on them to add new features or improvements.
What we could do (and what we are currently doing) is manually put a
board into maintenance mode, take it physically out of the lab, work
on it, and put it back in the lab when we have finished. This is not
really efficient, and while the board is physically outside of the lab,
not a single job (from KernelCI or ours) can run.
We would like to create a tool to remotely control our boards while
they are still physically in the lab. We need to be able to do
everything we could do if the board were on our desk. This means
getting the serial connection, powering the board on or off and sending
files (kernel, DTB, initramfs, ...) to it.
For the latter, we just have to copy the files in the directory used by
the TFTP server of LAVA master node.
I would like to know if it is possible to add an endpoint in the API to
power on or off a given board? Is it possible to get the serial
connection over the API?
To put a board virtually outside of the lab, we need to put it into
maintenance mode in LAVA. As of yet, this is only possible from the
browser, right? It would be great if we could add two endpoints to the
API: one for putting a board offline and one for putting a board online,
so we can remotely manage from our program whether a board is in the lab.
We may have a few people working on the same board. Therefore, we need
a way to ensure only one person is able to use the board at a given
time. I've seen the "created_by" attribute in
DeviceStateTransition[1], which could help us find who last put the
board virtually outside of the lab and thus deny access to other
users. However, we do not have a way to get this information via
the API today. Is it possible to add an endpoint to get the status of a
given device (like in 'all_devices'[2] but for one device) and the
associated user responsible for the last device state transition?
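Until such an endpoint exists, a per-device lookup can only be approximated client-side on top of scheduler.all_devices(). A rough sketch follows; the row layout (hostname first, status third) is an assumption to verify against the server version in use:

```python
# Sketch: approximating a per-device status lookup client-side on top
# of the existing scheduler.all_devices() XML-RPC call.
import xmlrpc.client


def find_device(devices, hostname):
    """Return the scheduler.all_devices() row for one hostname, or None."""
    for row in devices:
        if row[0] == hostname:
            return row
    return None


def device_status(server_url, hostname):
    """Fetch all devices from a LAVA server and look one up."""
    server = xmlrpc.client.ServerProxy(server_url)
    return find_device(server.scheduler.all_devices(), hostname)
```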
I can help with patches if you agree to add these endpoints to the API.
Thanks,
Quentin
[1]
https://github.com/Linaro/lava-server/blob/release/lava_scheduler_app/model…
[2]
https://github.com/Linaro/lava-server/blob/release/lava_scheduler_app/api.p…
The Cambridge LAVA lab has a transparent squid proxy, which saves having
to configure each dispatcher and device to use it. Outgoing HTTP traffic
from the lab has no choice, as it is intercepted at the internet gateway.
We did this because, even after configuring the dispatcher and devices,
it's almost impossible to make all test shell tasks use the proxy. LAVA
sets a shell environment inside a job, but many of the clients in the
various different types of job simply ignore it. Chasing every test
writer was not feasible as the lab usage is so large, but that might be
OK in a smaller lab with tighter control of the jobs.
We don't proxy HTTPS requests because that becomes very complicated,
with faking certificates etc.
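As a rough illustration (not necessarily the exact lab setup; the interface name, port 3129 and the `intercept` option are assumptions to verify against your squid version), interception at the gateway is typically a squid interception port plus a NAT redirect:

```
# squid.conf: listen on an interception port alongside the normal 3128
http_port 3129 intercept

# gateway firewall: redirect LAN-originated HTTP into squid
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
         -j REDIRECT --to-ports 3129
```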
>>Marc Titinger <mtitinger at baylibre.com> writes:
>>
>> I had to make this change to get squid3 going with our LAVA 1.0 machine.
>> I thought this could be useful. I did not test extensively though.
>
>FWIW, I had problems getting lava-dispatcher to use my local squid proxy
>also. Seems setting LAVA_PROXY in lava-dispatcher.conf was working for
>the devices (lava set the environment variable after booting the
>target), but lava-dispatcher itself was not using the proxy.
>
>I'll give this a try as well.
>
>Kevin
From: Marc Titinger <mtitinger(a)baylibre.com>
search_substr_from_array would return a false positive
in lava_dispatcher/downloader.py when trying to match the
'no_proxy' exclusion list against the download object URL.
This now uses a (too?) simple substring match rather
than a greedy regex.
Signed-off-by: Marc Titinger <mtitinger(a)baylibre.com>
---
lava_dispatcher/utils.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lava_dispatcher/utils.py b/lava_dispatcher/utils.py
index f2fd79a..037f6b5 100644
--- a/lava_dispatcher/utils.py
+++ b/lava_dispatcher/utils.py
@@ -640,7 +640,7 @@ def search_substr_from_array(string, array, sep=','):
Strings in array are separated by default with ','
"""
for obj in array.split(sep):
- if re.search('.*' + str(obj) + '.*', str(string)):
+ if str(string).find(str(obj)) != -1:
return True
return False
--
2.5.0
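For reference, the intended no_proxy behaviour after this fix can be restated and exercised standalone (a minimal sketch of the helper's contract, not the dispatcher code itself):

```python
# Intended no_proxy matching: a URL bypasses the proxy when any
# comma-separated entry of the exclusion list occurs in it as a
# plain substring (no regex involved).


def search_substr_from_array(string, array, sep=','):
    """Return True if any entry of `array` is a substring of `string`."""
    for obj in array.split(sep):
        if str(obj) in str(string):
            return True
    return False
```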