Hi all,
I used LAVA V1 before and I want to migrate to LAVA V2, following the "First steps installing LAVA V2" guide.
I hit a problem when adding pipeline workers to the master. After entering the command "sudo lava-server manage workers add <HOSTNAME>", I got the error "Unknown command: 'workers'", and I couldn't find any help by typing "lava-server -h manage".
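For reference, this is how I'm checking what's available (assuming the Django-style help subcommand works through the lava-server wrapper):
```
# list the management subcommands this lava-server build provides
sudo lava-server manage help
# check the installed package version, in case 'workers' arrived later
dpkg -s lava-server | grep '^Version'
```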
Can anybody help me on this step?
BTW, I'd like to know whether multiple workers can have the same hostname?
Thanks in advance.
Hi,
The tests in https://git.linaro.org/qa/test-definitions.git in some
cases have licences in their subdirectories, but there's no top-level
licence file covering the whole repository. Would you like a PR
submitted to cover this, or do you envisage the maintainers making this
change themselves? We're hoping to use some of these tests and so wanted
the licences to be consistent. We'd also be prepared to review licences
in future repository changes.
The licences that I've found all seem to be GPL v2, so I assume that is
the licence flavour which should cover the whole repository.
Robert
Hi Lava users,
I just updated LAVA from 2016.12 to 2017.6-1, and I found that my health-check
job fails (Incomplete) after the update.
Here is the result logs before:
https://pastebin.com/KWAnYQvD
and after:
https://pastebin.com/mqXDfR3H
I checked the log and I think the cause of the issue is:
```
case: job
definition: lava
error_msg: constants section not present in the device config.
error_type: Configuration
result: fail
```
What I have done:
* Tried to find the "constants section", with no luck.
* Looked at the logs in /var/lava-server/ and didn't find a log specific
to the reason for this.
* Asked on the IRC channel and was answered by codehelp (thanks :)). I then tried
replacing my device-type config
in /etc/lava-server/dispatcher-config/device-types with x86.jinja2, and the
error message stayed the same.
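For reference, my rough understanding from that discussion (which may be wrong) is that the constants section is inherited from the base template, so a device-type template needs to extend it, roughly:
```
{# hypothetical my-device.jinja2 under /etc/lava-server/dispatcher-config/device-types #}
{# extending base.jinja2 (directly, or via e.g. base-uboot.jinja2) is what
   provides the constants section the error refers to #}
{% extends 'base.jinja2' %}
```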
Any input would be much appreciated, thank you.
Regards,
-Michael Chen
Hi Lava Team,
I am using the LAVA image on Debian Stretch from https://images.validation.linaro.org/production-repo stretch-backports main, and in it the chart/query API is working.
Then I took the latest code from the lava-server release branch https://github.com/Linaro/lava-server/commits/release into my local branch.
Tag taken: c0b431f1b56227f6b97ba0e7859bfcc3fb29b62f refs/tags/2017.7
I built it, but LAVA charts do not appear when I run the server ("python manage.py runserver") using the latest code from git.
Can you please share where the latest code used on Debian Stretch is located?
Thanks,
Varun
Hi,
In LAVA v1, as I remember, I can use either "device" or "device_type" to
request a device for the test.
If I use "device", for example "device": "beaglebone-black03", then the
specific device, beaglebone-black03, will be used for the test.
Can I do the same thing in LAVA v2, i.e. request a device by its device
name?
Thanks,
Arthur
Hi,
I've got 2 jobs stuck as canceled which are preventing any other job from running.
I'm running LAVA (2017.7) in a VM and have tried rebooting the VM to
clear the issue, but without success (i.e. the jobs still block the queue).
An extract from /var/log/lava-server/django.log
is attached.
I get a 500 error when viewing the results for the job.
Is there a manual way of clearing this? The health check has a
notification associated with it (set to verbose), and every time I
reboot I get an email and an IRC message saying that it's finished!
Robert
Hi everyone,
I'm implementing an NFS root for my devices, but it seems that when
extracting my rootfs.tar.xz, LAVA keeps the parent folder rootfs instead of
extracting all the files directly into extract-nfsrootfs-XXXXX, so the LAVA test
overlay is placed outside the rootfs folder and raises an error during
execution.
Is there a way, in the job definition, to ask LAVA to put the test overlay in the
rootfs folder? (See the sketch below for what I mean.)
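Something like this in the deploy action is what I'm hoping for, assuming a prefix (or similar) parameter exists in my LAVA version to strip the leading directory (the URL and directory name here are placeholders):
```
- deploy:
    to: tftp
    nfsrootfs:
      url: http://example.com/rootfs.tar.xz   # placeholder URL
      compression: xz
      # hypothetical: strip the top-level 'rootfs/' directory on extraction
      prefix: rootfs/
```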
Thanks
Benjamin AUCLAIR
--
- Confidential -
Hi everyone,
hope you're doing well! I'm quite stuck in my platform development: I
succeeded in adding my own device type, and I'm able to boot Linux over TFTP
and to perform auto-login actions.
However, I'm facing difficulties with the test shell.
I have the following error:
https://pastebin.com/grPcvb14
And the definition of stage is:
https://pastebin.com/ArV11Gbb
The stage value seems to be None, and I also realized that my test shell definition isn't
downloaded from git during server-side processing. Thus I think stage is
empty because the test shell definition isn't in the temporary files. Am I
wrong?
Even though I think I've found my problem, I don't know how to solve it. Could it be
due to my device-type config?
Thanks a lot and have a nice day !
Benjamin AUCLAIR
--
- Confidential -
Hi All,
I am adding a BBB board to my LAVA server and I want to change
"UBOOT_AUTOLOAD_MESSAGE" in constant.py. I used the "interrupt_prompt"
parameter in the job submission, but it still took the message written in
constant.py. If I change the message in constant.py it works, but I know
this is not the right way to do it. Please suggest, if anyone has an idea,
what I am doing wrong.
Below is my job:
device_type: beaglebone-black
# NFS fails on panda and arndale.
job_name: BBB smoke test

timeouts:
  job:
    minutes: 240
  action:
    minutes: 240
  connection:
    minutes: 2
priority: medium
visibility: public

metadata:
  source: https://git.linaro.org/lava-team/refactoring.git
  path: health-checks/beaglebone-black-health.yaml
  build-readme: http://images.validation.linaro.org/snapshots.linaro.org/components/lava/st…
  build-console: https://ci.linaro.org/view/lava-ci/job/lava-debian-armmp-armhf/1/console
  build-script: http://images.validation.linaro.org/snapshots.linaro.org/components/lava/st…

actions:
- deploy:
    timeout:
      minutes: 40
    to: tftp
    kernel:
      url: file:////home/pi/lava/dl/vmlinuz
    ramdisk:
      url: file:////home/pi/lava/dl/initramfs.cpio.gz
      compression: gz
      # the bootloader needs a u-boot header on the modified ramdisk
      add-header: u-boot
    modules:
      url: file:////home/pi/lava/dl/modules.tar.gz
      compression: gz
    nfsrootfs:
      url: file:////home/pi/lava/dl/jessie-armhf-nfs.tar.gz
      compression: gz
    os: debian
    dtb:
      url: file:////home/pi/lava/dl/am335x-boneblack.dtb

- boot:
    method: u-boot
    commands: nfs
    parameters:
      shutdown-message: 'INIT: Sending processes the TERM signal'
      interrupt_prompt: 'Press SPACE to abort autoboot in 10 seconds'
      interrupt_char: ' '
      send_char: False
      type: bootz
    auto_login:
      login_prompt: 'beaglebone login: '
      username: root
    prompts:
    - 'root@jessie:'
    timeout:
      minutes: 10

- test:
    timeout:
      minutes: 50
    definitions:
    - repository: git://git.linaro.org/qa/test-definitions.git
      from: git
      path: ubuntu/smoke-tests-basic.yaml
      name: smoke-tests
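For reference, my understanding (from other threads on this list) is that the interrupt prompt is normally overridden per device in the device dictionary rather than in the job's boot parameters. A hedged sketch of such an override (the file name and extended template are illustrative):
```
{# hypothetical bbb01.jinja2 device dictionary #}
{% extends 'beaglebone-black.jinja2' %}
{% set interrupt_prompt = 'Press SPACE to abort autoboot in 10 seconds' %}
```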
Hi Lava Team
I am trying to mount a directory of the worker (host computer) into an LXC
instance running on it via a job.
I have added this entry to the LXC default configuration file on the worker
computer.
The default configuration file path is: /etc/lxc/default.conf
The entry is:
lxc.mount.entry = /var/lib/nfsdir var/nfsmnt none bind,create=dir 0 0
After restarting the LXC service, I executed a LAVA job, but the worker's
"/var/lib/nfsdir" directory is not mounted into the LXC instance at
"/var/nfsmnt".
However, if I manually create and start an LXC instance on the worker, the
worker's "/var/lib/nfsdir" directory is mounted into the LXC instance.
Can anyone assist me with how to resolve this issue?
--
Thanks & Regards
Chetan Sharma
Hi Lava Team
Can you assist me with this use case: how can I share LXC data with the
DUT?
We have LXC and DUT tests defined in a job. The LXC tests generate
some data and logs which need to be accessed by the DUT test.
--
Thanks & Regards
Chetan Sharma
There are delays getting packages into stretch-backports, as expected.
In the meantime, this is a reminder of how to use backports: first
start with stable.
So when installing LAVA on Stretch, even if what you want is the
latest release from production-repo or staging-repo (currently
2017.7), your first step is to install 2016.12 from Stretch.
# apt -y install lava-server
# apt -y install vim
# wget http://images.validation.linaro.org/production-repo/production-repo.key.asc
# apt-key add production-repo.key.asc
# vim /etc/apt/sources.list.d/lava.list
# # edit the file to specify:
# #   deb http://images.validation.linaro.org/production-repo jessie-backports main
Yes, that is jessie-backports - we don't have packages in
stretch-backports at this time.
# apt update
# apt upgrade
If you specify backports too early, you'll get a range of other
packages from backports - you don't actually need jessie-backports or
stretch-backports from Debian at this time.
Packages for jessie-backports are built on jessie. Packages for
stretch-backports are built on stretch. It's the same source code in
each case. Right now, there aren't any problems with installing from
jessie-backports on stretch - particularly if you install lava-server
from stretch in the first place so that the majority of your packages
come from stretch.
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hi Lava Team,
Can anyone assist me with these 2 queries:
1. Can we enable an interface inside the LXC to access the host network, so
that we can reach any device on the host machine's network and access the
internet inside the LXC to execute scripts? (See the sketch after this list.)
2. Can we pass parameters in a LAVA job which propagate to the
lava-test-action job or YAML?
If possible, can you guide me through the process to perform these
actions, and share a reference job which performs these tasks?
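For query 1, what I have in mind is something like this in the container config (a sketch; lxcbr0 is the stock LXC bridge name and may differ on the worker):
```
# hypothetical additions to /etc/lxc/default.conf on the worker
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
```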
--
Thanks & Regards
Chetan Sharma
Dear all,
Do you have any example or recommendation on the way to implement the following use case in Lava?
- a DUT running Linux
- a PC running Linux to use as a "server" for VLC or iPerf test cases
I guess MultiNode can be used to manage these use cases, but I don't know how to deal with the constraints linked to the PC: no reboot, no dedicated test kernel.
Thanks,
Denis
Hi Neil, Kevin !
> On 20 July 2017 at 02:57, Kevin Hilman <khilman at baylibre.com> wrote:
> > Hello,
> >
> > There are many configurable options when starting QEMU that are
> > controlled by environment variables, and I'm trying to figure out how to
> > persuade LAVA v2 to set them.
>
> It's not something which should be done with the QEMU device type
> because that is installed on the worker - it is not a device that can
> be reconfigured to suit test writers and much of the functionality of
> QEMU must remain off-limits when QEMU is running on the worker.
>
> > As an example, configuring the host audio driver used by QEMU is set by
> > running QEMU with: QEMU_AUDIO_DRV=<option> qemu-system-x86_64.
>
> The worker is typically a server, it might not have any audio support.
> Even if it does, that functionality is off-limits to test writers. The
> available options to QEMU must be tightly constrained to protect other
> tasks on the worker.
I see. There is a valid point here in that there should not be control of the
environment from the 'outside' through a job description.
>
> > Unfortunately, device-types/qemu.jinja2 doesn't provide anyway to
> > override the qemu-system binary used (it's conditional based on arch),
>
> This is deliberate, to protect the worker from aberrant processes and
> to give the admins sufficient control over how the worker behaves.
Now for the above case:
QEMU_AUDIO_DRV=<option> qemu-system-*
I see this as a *very valid env var* for a server-side deployment - exactly as
you said, a server does not have a sound card, and QEMU_AUDIO_DRV=none is what
we need to use there to prevent qemu from looking for a host sound card. We
should even set this by default - we don't expect any sound to reach the
worker, do we ;) ? I don't think so.
IMHO this should be an option set on the worker node in
lava-dispatcher.conf (e.g. LAVA_EXTRA_QEMU_ENV="QEMU_AUDIO_DRV=none").
Enable the admins to choose this.
>
> No. This is not something that the worker can support. It needs to
> happen only on a test device. The worker is not itself a test device.
As I said, I think this belongs to the worker setup and we should enable the
admins to make their choice.
Dipl.-Ing.
Jan-Simon Möller
jansimon.moeller(a)gmx.de
Hello,
There are many configurable options when starting QEMU that are
controlled by environment variables, and I'm trying to figure out how to
persuade LAVA v2 to set them.
As an example, configuring the host audio driver used by QEMU is set by
running QEMU with: QEMU_AUDIO_DRV=<option> qemu-system-x86_64.
Unfortunately, device-types/qemu.jinja2 doesn't provide any way to
override the qemu-system binary used (it's conditional based on arch),
but even with a quick hack to allow it to be overridden[1], adding the env
var as a prefix didn't work because LAVA assumes the first item is an actual
binary expected in $PATH. My attempt led to:
Invalid job definition
Cannot find command 'QEMU_AUDIO_DRV=none qemu-system-x86_64' in $PATH
It seems like there should be a more general way to pass environment
variables to QEMU that I must be missing. If there's not, what would be the
recommended way to add this feature?
Kevin
[1]
diff --git a/lava_scheduler_app/tests/device-types/qemu.jinja2 b/lava_scheduler_app/tests/device-types/qemu.jinja2
index 786f53bdb30d..e7c265a3048b 100644
--- a/lava_scheduler_app/tests/device-types/qemu.jinja2
+++ b/lava_scheduler_app/tests/device-types/qemu.jinja2
@@ -41,7 +41,7 @@ actions:
{% elif arch == 'arm' %}
qemu-system-arm
{% elif arch == 'amd64' or arch == 'x86_64' %}
- qemu-system-x86_64
+ {{ qemu_command|default('qemu-system-x86_64') }}
{% elif arch == 'i386' %}
qemu-system-i386
{% endif %}
With that change, I added the following to the job definition:
context:
  qemu_command: "QEMU_AUDIO_DRV=none qemu-system-x86_64"
Dear lava Community,
I want to use Image Charts 2.0 for viewing my LAVA job results.
I am using "lava-test-case" to check pass/fail, and I am getting results:
steps:
- lava-test-case linux-linaro-ubuntu-pwd --shell pwd
- lava-test-case linux-linaro-ubuntu-uname --shell uname -a
- lava-test-case linux-linaro-ubuntu-vmstat --shell vmstat
I want to know how to get these results into image charts. I can see it asks to add a chart and then a filter, but no data is available when I try to add a filter.
Similarly, if I have to use the query API, what kind of query should be used to fetch test data from a LAVA test suite?
Detailed info on using image charts (or a link to it) would be appreciated, as I am new to charts/LAVA.
Thanks,
Varun
Hi everyone,
I'm trying to add my own device-type to my lab, but I'm facing some
difficulties when running jobs. I have the following log error:
https://pastebin.com/Eznq6RLA
I clearly understand the log, but I'm not able to figure out what to do: I
thought it would be enough to describe boot/power actions in the device-type,
but it seems not... Do I need to create a .conf file in the
/usr/lib/python2.7/dist-packages/lava_dispatcher/default-config/lava-dispatcher/device-types
folder?
By the way, I'm not sure I understand the purpose of the .conf files. Are they
only there as defaults?
I attached my device-type and my job, maybe it will help you !
Thanks a lot ;)
P.S.: I already ran some jobs on QEMU and BBB and read the whole
documentation.
--
- Confidential -
Hi,
I'm not entirely sure this job definition is correct, but the only
error I'm getting is "Problem with submitted job data: Invalid
YAML - check for consistent use of whitespace indents.". The YAML
validates just fine, so I'm unsure what is wrong. Any hints? The YAML
in question is enclosed.
milosz
Hello,
We use a Python script, LAVA_via_API, to trigger our test jobs.
I must say that I don't know whether this script is a purely internal creation or whether it was inspired by a Linaro script.
Its role is simple: create a LAVA job with quite a few parameters (job name, server, worker, kernel, rootfs, dtb, device, device_type, and so on), submit the job, and get the results and logs.
Anyway, before completely reworking this script, I assume that a reference one exists in one of the Linaro git repositories. Can you tell me where to find it?
Thanks,
Denis
ALWAYS keep the list in CC please.
On 7 July 2017 at 10:28, Chetan Sharma <chetan.sharma(a)timesys.com> wrote:
> Hi Neil
> Thanks for sharing detailed information to work with LXC.
> 1. I am following sample pipeline job with BBB.
> https://git.linaro.org/lava/lava-dispatcher.git/tree/lava_dispatcher/pipeli…
>
> I have modified this job's details with the following values, but I am getting an
> error: ['Invalid job - missing protocol']
Check with the valid job:
https://staging.validation.linaro.org/scheduler/job/178130
> I have defined the protocol as "lava-lxc", which is a valid protocol, but the job
> object does not have any protocol details; I have verified this by printing
> self.job.protocols, which is []
Then that is an invalid job. Your modifications have got something
wrong. There are a lot of changes in your file against the example.
Change only one thing at a time.
> 2. How do test actions execute on the LXC and the device?
Run a test action in the same namespace as the LXC. In the case of the
example, namespace: probe.
https://staging.validation.linaro.org/scheduler/job/178130/definition#defli…
> Can we execute test actions in this order:
> first the LXC test action executes ---> Device1 test action execution starts
> -> Device2 test action execution starts
>
>
> ==================================================
> device_type: beaglebone-black
>
> job_name: LXC and bbb
Please attach files to emails to the list. There's no need to quote
the entire file to the list.
Take time to understand namespaces. The LXC is transparent and the
namespaces are used to tie the different actions together into one
sequence in the LXC and one sequence for the test device.
LXC protocol support is not the same as MultiNode - operations happen
in series. The LXC protocol is not a substitute for MultiNode either.
If you need parallel execution, you have to use MultiNode.
Split up your test shell definitions if necessary.
Also, attach (not include) the full test job log because that contains
details of the package versions being used and other information.
> On Fri, Jul 7, 2017 at 1:32 PM, Neil Williams <neil.williams(a)linaro.org>
> wrote:
>>
>> On 7 July 2017 at 07:23, Chetan Sharma <chetan.sharma(a)timesys.com> wrote:
>> > Hi Everyone
>> > Hopefully everyone is doing well in this group.
>> > The main intent of writing this email is to seek assistance regarding
>> > one feature of LAVA, lava-lxc, which helps to create an LXC instance
>> > on a worker
>>
>>
>> https://staging.validation.linaro.org/scheduler/job/174215/definition#defli…
>>
>> > and then we can execute any script on the worker and propagate its
>> > characteristics and results to other actions of the job on the board.
>>
>> That isn't entirely clear. What are you trying to do in the LXC?
>>
>> You need to deploy an LXC, start it with a boot action for the LXC and
>> then start a test shell in the LXC where you can install the tools you
>> need to execute.
>>
>> Talking to the device from the LXC can be difficult depending on how
>> the device is configured. To use networking, you would have to have a
>> fixed IP for the device. To use USB, you need to use the device_info
>> support in the device dictionary to add the USB device to the LXC.
>>
>> > I have gone through the documentation shared on the LAVA V2 instance
>> > for LXC job creation, but I am not able to successfully execute a job
>> > on the Debian Jessie release.
>> >
>> > https://validation.linaro.org/static/docs/v2/deploy-lxc.html#index-0
>> >
>> > Can you assist me and share a reference process/document so I can
>> > proceed further with creating a job using this feature.
>>
>> More information is needed on exactly what you are trying to do, how
>> the LXC is to connect to the device and what support the device offers
>> to allow for those operations.
>>
>> --
>>
>> Neil Williams
>> =============
>> neil.williams(a)linaro.org
>> http://www.linux.codehelp.co.uk/
>
>
>
>
> --
> Thanks & Regards
> Chetan Sharma
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hi Everyone
Hopefully everyone is doing well in this group.
The main intent of writing this email is to seek assistance regarding one
feature of LAVA, lava-lxc, which helps to create an LXC instance on a worker
so that we can execute any script on the worker and propagate its
characteristics and results to other actions of the job on the board.
I have gone through the documentation shared on the LAVA V2 instance for LXC
job creation, but I am not able to successfully execute a job on the Debian
Jessie release.
https://validation.linaro.org/static/docs/v2/deploy-lxc.html#index-0
Can you assist me and share a reference process/document so I can proceed
further with creating a job using this feature?
Looking forward to a positive response.
--
Thanks & Regards
Chetan Sharma
Hi,
I have been investigating LAVA for use in our organisation, and I'm stuck trying to get a hello-world test case running on our hardware, so I'm looking for some help. We looked at the Yocto test tools; however, they can only use devices with a fixed IP, which we can't guarantee or want during our testing, as we also test network settings. They're also limited in configuration. The LAVA package seems to meet all our requirements; however, I'm still unsure how to do a few things.
We use Yocto images and boot with the GRUB bootloader.
All our devices are connected via Ethernet only, and power and peripheral switching is controlled via USB relays.
After reading through all the documentation, I'm still unsure how to set up and actually run a test on our hardware. What tools do I need to install in the test image, and how do I get it to communicate with GRUB? I assume a base image is one that includes nothing but the tools and GRUB. We have a recovery partition with Tiny Core which could facilitate that, but it's not required for the automated testing.
I've used the akbennet/lava-server Docker image and it is up and running, although test jobs are scheduled but never run on the QEMU devices, so I'm a little stuck there.
Basically, I need help getting LAVA to connect to one of our devices to load the image and run tests.
Choosing the image, writing tests, and mostly configuring the pipeline I understand.
After 2 weeks, I'm posting here hoping someone can assist me.
Regards,
Elric
Elric Hindy
Test Engineer
T +61 2 6103 4700
M +61 413 224 841
E elric.hindy(a)seeingmachines.com
W www.seeingmachines.com<http://www.seeingmachines.com>
Hi All,
I am trying to set up a remote lab using a Raspberry Pi on my local network.
I installed lava-server and a worker on my laptop and it's working fine.
I installed Raspbian on the R-Pi and followed the instructions given on the LAVA site,
but when the slave tries to connect to the master it gets no response.
I am able to ping the master from my R-Pi board, and the default port 3079 is open
on my machine.
I used no encryption and the following URLs to connect to the master:
MASTER_URL="tcp://10.42.0.24:3079"
LOGGER_URL="tcp://10.42.0.24:3079"
I continuously get log messages like:
DEBUG Sending new HELLO_RETRY message to the master (are they both running
the same version?)
INFO Waiting for the master to reply
DEBUG Sending new HELLO_RETRY message to the master (are they both running
the same version?)
INFO Waiting for the master to reply
DEBUG Sending new HELLO_RETRY message to the master (are they both running
the same version?)
If anyone has an idea why I am not able to connect, please help.
Thanks,
Ankit
On Mon, 3 Jul 2017 23:50:25 +0300
Paul Sokolovsky <paul.sokolovsky(a)linaro.org> wrote:
> Hello Milosz,
>
> I appreciate getting at least some response ;-). Some questions
> however could use a reply from LAVA team, I guess.
>
> On Mon, 3 Jul 2017 13:34:49 +0100
> Milosz Wasilewski <milosz.wasilewski(a)linaro.org> wrote:
>
> []
>
> > > jobs submit a number of tests to LAVA (via
> > > https://qa-reports.linaro.org/) for the following boards:
> > > arduino_101, frdm_k64f, frdm_kw41z, qemu_cortex_m3. Here's an
> > > example of cumulative test report for these platforms:
> > > https://qa-reports.linaro.org/lite/zephyr-upstream/tests/
> > >
> > > That's really great! (Though the list of tests to run in LAVA
> > > seems to be hardcoded:
> > > https://git.linaro.org/ci/job/configs.git/tree/zephyr-upstream/submit_for_t…)
> > >
> >
> > It is, as I wasn't really sure what to test. The build job needs to
> > prepare the test templates to be submitted to LAVA. In the case of
> > zephyr, each test is a separate binary. So we end up with a
> > number of file paths to substitute in the template. Hardcoding was
> > the easiest thing to get things running. But I see no reason why it
> > couldn't be changed with some smarter code to discover the
> > binaries. The problem with this approach is that some of these
> > tests are just build time. They have no meaning when running on the
> > board and need to be filtered out somehow.
Running the build tests within the Jenkins build makes a lot of sense.
Typically, the build tests will have a different command syntax to the
runtime tests (otherwise Jenkins would attempt to run both), so
filtering should be possible. If the build tests are just a different
set of binary blobs from the runtime tests, that may need a fix
upstream in Zephyr to distinguish between the two modes.
> I see, that makes some sense. But thinking further, I'm not entirely
> sure about "build only" tests. Zephyr's sanitycheck test has such
> concept, but I'd imagine it comes from the following reasons: a)
> sanitycheck runs tests on QEMU, which has very bare hardware support,
> so many tests are not runnable; b) sanitycheck can operate on
> "samples", not just "tests", as sample can be interactive, etc. it
> makes sense to only build them, not run.
>
> So, I'm not exactly sure about build-only tests on real HW boards. The
> "default" idea would be that they should run, but I imagine in
> reality, some may need to be filtered out. But then blacklisting
> would be better approach than whitelisting. And I'm not sure if
> Zephyr has concept of "skipped" tests which may be useful to handle
> hardware variations. (Well, I actually dunno if LAVA supports skipped
> tests!)
Yes, LAVA has support for pass, fail, skip, unknown.
For POSIX shell tests, the test writer just calls lava-test-case name
--result skip
For monitor tests, like Zephyr, it's down to the pattern but skip is as
valid as pass and fail (as is unknown) for the result of the matches
within the pattern.
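For example, in a POSIX shell test definition (going back to the shell case; the test names here are illustrative):
```
# explicit result reporting from a test shell
lava-test-case usb-audio --result skip
# or derive pass/fail from the exit status of a command
lava-test-case check-uname --shell uname -a
```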
> Anyway, these are rough ideas for the future. I've spent couple of
> weeks of munging with LITE CI setup, there're definitely some
> improvements, but also a Pandora box of other ideas and improvements
> to make. I'm wrapping up for now, but hope to look again in some time
> (definitely hope to look before the Connect, so we can discuss further
> steps there). In the meantime, I hope that more boards will be
> installed in the Lab and stability of them improves (so far they seem
> to be pretty flaky).
There are known limitations with the USB subsystem and associated
hardware across all architectures, affecting test devices and the
workers which run the automation. LAVA has to drive that subsystem very
hard for both fastboot devices and IoT devices. There are also problems
due to the design of methods like fastboot and some of the IoT support
which result from a single-developer model, leading to buggy
performance when used at scale and added complexity in deploying
workarounds to isolate such protocols in order to prevent interference
between tests. The protocols themselves often lack robust error
handling or retry support.
Other deployment methods which rely on TFTP/network deployments are
massively more reliable at scale, so comparing reliability across
different device types is problematic.
> []
>
> > - test:
> >     monitors:
> >     - name: foo
> >       start: ""
> >       end: Hello, ZJS world!
> >       pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.
> > >
> > > So, the "start" substring is empty, and perhaps matches a line
> > > output by a USB multiplexer or board bootloader. "End" substring
> > > is actually the expected single-line output. And "pattern" is
> > > unused (dunno if it can be dropped without def file syntax
> > > error). Is there a better way to handle single-line test
> > > output?
> >
> > You're making a silent assumption that if there is a matching line,
> > the test is passed. In case of other tests (zephyr unit tests), it's
> > not the case. The 'start' matches some line which is displayed when
> > zephyr is booting. End matches the line which is displayed after all
> > testing is done. The pattern follows the unit test pattern.
>
> Thanks, but I'm not sure I understand this response. I don't challenge
> that Zephyr unittests need this support, or the way they're handled.
> LITE however needs to test more things than "batch" Zephyr unittests.
> I present another use case which, albeit simple, is barely supported by
> LAVA. (That's a question to the LAVA team, definitely.)
LAVA result handling is ultimately a pattern matching system. Patterns
must have a unique and reliable start string and a unique and reliable
end string. An empty start string is just going to cause misleading
results and bad pattern matches as the reality is that most boards emit
some level of random junk immediately upon connection which needs to be
ignored. So there needs to be a reliable, unique, start string emitted
by the test device. It is not enough to *assume* a start at line zero,
doing so increases the problems with reliability.
>
> > > Well, beyond a simple output matching, it would be nice even for
> > > the initial "smoke testing" to actually make some input into the
> > > application and check the expected output (e.g., input: "2+2",
> > > expected output: "4"). Is this already supported for LAVA "v2"
> > > pipeline tests? I may imagine that would be the same kind of
> > > functionality required to test bootloaders like U-boot for Linux
> > > boards.
> >
> > I didn't use anything like this in v2 so far, but you're probably
> > best off doing sth like
> >
> > test 2+2=4 PASS.
> >
> > then you can easily create a pattern that will filter the output. In
> > case of zephyr, the pattern is the only way to filter things out as there
> > is no shell (?) on the board.
>
> So, the problem, for starters, is how to make LAVA *feed* the
> input, as specified in the test definition (like "2+2") into a board.
That will need code changes, so please make a formal request for this
support at CTT
https://projects.linaro.org/servicedesk/customer/portal/1 so that we
can track exactly what is required.
>
> As there was no reply from the LAVA team (I imagine they're busy with
> other things), I decided to create a user story in Jira for them, as I
> couldn't create a LAVA-* ticket, I created it as
> https://projects.linaro.org/browse/LITE-175 . Hopefully that won't go
> unnoticed and LAVA team would get to it eventually.
That JIRA story is in the LITE project. Nobody in the LAVA team can
manage those stories. It needs a CTT issue which can then be linked to
the LITE story and from which a LAVA story can also be linked.
Sadly, any story in the LITE project would go completely unnoticed by
the LAVA software team until it is linked to CTT so that the work can
be prioritised and the relevant LAVA story created. That's just how
JIRA works.
>
> >
> > milosz
>
> Thanks!
>
> --
> Best Regards,
> Paul
>
> Linaro.org | Open source software for ARM SoCs
> Follow Linaro: http://www.facebook.com/pages/Linaro
> http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
> _______________________________________________
> linaro-validation mailing list
> linaro-validation(a)lists.linaro.org
> https://lists.linaro.org/mailman/listinfo/linaro-validation
--
Neil Williams
=============
http://www.linux.codehelp.co.uk/
Hi,
On 2017-07-04 15:18, Agustin Benito Bethencourt wrote:
> Dear CIP friends,
>
> please check below. This is a use case we will meet in a few
> weeks/months. It is important to see others walking the same route.
My simple tests are running now thanks to the help of the nice people
from the #linaro-lava chat.
1) Crazy me decided to use upstream u-boot 2017.05 instead of running
some ancient version from 2014 ;)
1.1) It happens to have a different AUTOBOOT_PROMPT than the one LAVA
expects: "Press SPACE to abort autoboot in %d seconds\n" instead of "Hit
any key to stop autoboot". Since I would like to stay as close as
possible to upstream LAVA 2017.06, I patched u-boot[1]. Note
that this could be fixed in LAVA as well - interrupt_prompt: {{
interrupt_prompt|default('Hit any key to stop autoboot') }}
1.2) Also, the SYS_PROMPT of upstream u-boot is different from the one
expected by LAVA, and again I made a u-boot patch[2]. Note that this
could be fixed in LAVA as well - {% set bootloader_prompt =
bootloader_prompt|default('U-Boot') %}
2) After some searching, it turned out that LAVA sets some funny variables
in u-boot which made my kernel crash. (Crazy me decided to use a 4.9.x
uImage without a baked-in load address.)
Adding this:
{% set base_high_limits = false %}
to my bbb03.jinja2 file fixed it.
... obviously ;)
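For anyone following along, my understanding is that the LAVA-side equivalents of all three fixes would live in the device dictionary, something like this (a sketch; the prompt strings depend on the u-boot build, and '=>' is the stock upstream u-boot prompt):
```
{# hypothetical bbb03.jinja2 device dictionary #}
{% extends 'beaglebone-black.jinja2' %}
{% set interrupt_prompt = 'Press SPACE to abort autoboot' %}
{% set bootloader_prompt = '=>' %}
{% set base_high_limits = false %}
```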
Regards,
Robert
[1]
https://github.com/RobertBerger/meta-mainline/blob/pyro-training-v4.9.x/u-b…
[2]
https://github.com/RobertBerger/meta-mainline/blob/pyro-training-v4.9.x/u-b…
Hello Milosz,
Thanks for routing this thread to lava-users - when I made the initial post
to linaro-validation, I checked my archive and saw that e.g. Neil posts
there frequently, but I missed that it's not the official LAVA list.
On Mon, 3 Jul 2017 22:25:31 +0100
Milosz Wasilewski <milosz.wasilewski(a)linaro.org> wrote:
[]
> > So, I'm not exactly sure about build-only tests on real HW boards.
> > The "default" idea would be that they should run, but I imagine in
> > reality, some may need to be filtered out. But then blacklisting
> > would be better approach than whitelisting. And I'm not sure if
> > Zephyr has concept of "skipped" tests which may be useful to handle
> > hardware variations. (Well, I actually dunno if LAVA supports
> > skipped tests!)
>
> As far as I can tell they actually run on the board, but usually output
> just 'Hello world!' or something similar. As we discussed with Kumar, this
> is still OK. What Kumar requested (and I still didn't deliver) is that
> whenever the LAVA test job completes, the test should be considered
> 'passed'. So we wouldn't have to do any parsing of patterns. I'm not
> sure if that will work, but it's worth a try.
Hmm, I wonder what the criteria would be for such tests to be
"failed"... Anyway, thanks for sharing - I'm not familiar with all the Zephyr
tests/samples myself; I will keep such issues in mind when looking into
them.
[]
> > more boards will be installed in the Lab and stability of them
> > improves (so far they seem to be pretty flaky).
> >
>
> You're absolutely right. This is a pretty big task to work on and IMHO
> requires someone to work full time at least for couple of weeks. The
> second part is also true, the boards don't behave as they should. I
> guess Dave can elaborate more on that. I can only see the result -
> boards (frdm-kw41z) don't run the tests they're requested.
Matt Hart actually showed me a ticket on that, so at least it's a
confirmed/known issue being worked on. But even with arduino_101 and
frdm_k64f, I hit cases more than once when board(s) were stuck for an
extended time, but still had jobs routed to them (which either failed or
timed out). So, there may be a problem with the health checks, which either
don't run frequently enough or aren't robust enough. arduino_101 is all
alone, so if something happens to it, there's no backup. Etc., etc.
[]
> > So, the problem, for starters, is how to make LAVA *feed* the
> > input, as specified in the test definition (like "2+2") into a
> > board.
>
> Right. What I proposed was coding all the inputs in the test itself.
Well, that would require a bunch of legwork, but the biggest problem is
that it wouldn't test what's actually required. E.g. both the JerryScript
and MicroPython Zephyr ports are actually interactive apps working over a
serial connection, and functional testing of them would mean feeding
something over this serial connection and checking that the results are as
expected. I'll keep the idea of "builtin" tests in mind though.
Thanks!
--
Best Regards,
Paul
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
I was too quick to hit reply. CCing lava-users for comments from LAVA team.
milosz
On 3 July 2017 at 21:50, Paul Sokolovsky <paul.sokolovsky(a)linaro.org> wrote:
> Hello Milosz,
>
> I appreciate getting at least some response ;-). Some questions however
> could use a reply from LAVA team, I guess.
>
> On Mon, 3 Jul 2017 13:34:49 +0100
> Milosz Wasilewski <milosz.wasilewski(a)linaro.org> wrote:
>
> []
>
>> > jobs submit a number of tests to LAVA (via
>> > https://qa-reports.linaro.org/) for the following boards:
>> > arduino_101, frdm_k64f, frdm_kw41z, qemu_cortex_m3. Here's an
>> > example of cumulative test report for these platforms:
>> > https://qa-reports.linaro.org/lite/zephyr-upstream/tests/
>> >
>> > That's really great! (Though the list of tests to run in LAVA seems
>> > to be hardcoded:
>> > https://git.linaro.org/ci/job/configs.git/tree/zephyr-upstream/submit_for_t…)
>> >
>>
>> It is, as I wasn't really sure what to test. The build job needs to
>> prepare the test templates to be submitted to LAVA. In the case of zephyr,
>> each test is a separate binary. So we end up with a number of file
>> paths to substitute in the template. Hardcoding was the easiest thing
>> to get things running. But I see no reason why it couldn't be changed
>> with some smarter code to discover the binaries. The problem with this
>> approach is that some of these tests are just build time. They have no
>> meaning when running on the board and need to be filtered out somehow.
>
> I see, that makes some sense. But thinking further, I'm not entirely
> sure about "build only" tests. Zephyr's sanitycheck test has such
> concept, but I'd imagine it comes from the following reasons: a)
> sanitycheck runs tests on QEMU, which has very bare hardware support,
> so many tests are not runnable; b) sanitycheck can operate on "samples",
> not just "tests", as sample can be interactive, etc. it makes sense to
> only build them, not run.
>
> So, I'm not exactly sure about build-only tests on real HW boards. The
> "default" idea would be that they should run, but I imagine in reality,
> some may need to be filtered out. But then blacklisting would be better
> approach than whitelisting. And I'm not sure if Zephyr has concept of
> "skipped" tests which may be useful to handle hardware variations.
> (Well, I actually dunno if LAVA supports skipped tests!)
>
> Anyway, these are rough ideas for the future. I've spent couple of
> weeks of munging with LITE CI setup, there're definitely some
> improvements, but also a Pandora box of other ideas and improvements to
> make. I'm wrapping up for now, but hope to look again in some time
> (definitely hope to look before the Connect, so we can discuss further
> steps there). In the meantime, I hope that more boards will be
> installed in the Lab and stability of them improves (so far they seem
> to be pretty flaky).
>
> []
>
>> > - test:
>> >     monitors:
>> >     - name: foo
>> >       start: ""
>> >       end: Hello, ZJS world!
>> >       pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.
>> >
>> > So, the "start" substring is empty, and perhaps matches a line
>> > output by a USB multiplexer or board bootloader. "End" substring is
>> > actually the expected single-line output. And "pattern" is unused
>> > (dunno if it can be dropped without def file syntax error). Is
>> > there a better way to handle single-line test output?
>>
>> You're making a silent assumption that if there is a matching line,
>> the test is passed. In case of other tests (zephyr unit tests), it's
>> not the case. The 'start' matches some line which is displayed when
>> zephyr is booting. End matches the line which is displayed after all
>> testing is done. The pattern follows the unit test pattern.
>
> Thanks, but I'm not sure I understand this response. I don't challenge
> that Zephyr unittests need this support, or the way they're handled.
> LITE however needs to test more things than "batch" Zephyr unittests. I
> present another use case which, albeit simple, is barely supported by LAVA.
> (That's a question to the LAVA team, definitely.)
>
>> > Well, beyond a simple output matching, it would be nice even for the
>> > initial "smoke testing" to actually make some input into the
>> > application and check the expected output (e.g., input: "2+2",
>> > expected output: "4"). Is this already supported for LAVA "v2"
>> > pipeline tests? I may imagine that would be the same kind of
>> > functionality required to test bootloaders like U-boot for Linux
>> > boards.
>>
>> I didn't use anything like this in v2 so far, but you're probably best
>> off doing sth like
>>
>> test 2+2=4 PASS.
>>
>> then you can easily create a pattern that will filter the output. In
>> case of zephyr, the pattern is the only way to filter things out as there
>> is no shell (?) on the board.
>
> So, the problem, for starters, is how to make LAVA *feed* the
> input, as specified in the test definition (like "2+2") into a board.
>
> As there was no reply from the LAVA team (I imagine they're busy with
> other things), I decided to create a user story in Jira for them, as I
> couldn't create a LAVA-* ticket, I created it as
> https://projects.linaro.org/browse/LITE-175 . Hopefully that won't go
> unnoticed and LAVA team would get to it eventually.
>
>>
>> milosz
>
> Thanks!
>
> --
> Best Regards,
> Paul
>
> Linaro.org | Open source software for ARM SoCs
> Follow Linaro: http://www.facebook.com/pages/Linaro
> http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
Hi all,
If a device has already booted successfully, can I skip the deploy and boot steps in job.yaml?
Hello LAVA experts,
I am currently working with LAVA and was asked to find out whether there is a way to get data from the device dictionary inside a running job. Details below:
I have a device named demo-01, and its device dictionary has one line like "{% set my_property = 'my_prop' %}". I have a job running on the demo-01 device, and I would like the string 'my_prop' to be passed into a script at run time. Is it possible to get device-dictionary data directly from the job definition (Job Submitter webpage) or the test definition (YAML file)? If yes, how could I do this? If not, is there any good way you would like to share to solve this problem?
Yifan
Hi all,
Is there any limitation on test duration when using LAVA?
Recently, I found my tests were automatically canceled after running for 24 hours, and I could not find any clue about why the job was canceled. Can anybody give me some help?
(Sorry, my LAVA version is V1.)
12794.0 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM INFO: Cancel operation
12794.1 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM DEBUG: finally status fail
12794.2 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM WARNING: [ACTION-E] boot_image is finished with error (Cancel).
12794.3 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM WARNING: Target : power_off being call
Thanks.
Hi,
We've hit an issue when running lava-master on Debian Jessie and
lava-slave on Debian Stretch: after a few minutes the slave would
stop working. After some investigation, it turned out to be due
to a difference of the libzmq versions in Jessie (4.0.5+dfsg-2)
and Stretch (4.2.1-4) causing some protocol errors.
The line that detects the error in Stretch is:
https://github.com/zeromq/libzmq/blob/7005f22726d4a6ca527f27560a0a132394fdb…
This appears to be due to how the "once" counter gets written
into memory and into the zmq packets: the libzmq version from
Jessie uses memcpy whereas the one in Stretch calls put_uint64.
As a result the byte endianness has changed from little to big,
causing the packets to work until "once" reaches 255 which
translates into 0xff << 56, after which it overflows to 0 and
causes the error.
This is not a LAVA bug as such, rather a libzmq one, but it
impacts interoperability between Jessie and Stretch for LAVA so
it may need to be documented or resolved somehow. We've
installed the new version of libzmq onto our Jessie servers to
align them with Stretch; doing this does fix the problem.
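A minimal illustration of the byte-order mismatch (not libzmq code, just the two encodings of the same counter value side by side):
```
import struct

once = 255
# little-endian, as Jessie's libzmq effectively wrote it via memcpy on x86:
print(struct.pack('<Q', once).hex())  # ff00000000000000
# big-endian, as Stretch's libzmq writes it via put_uint64:
print(struct.pack('>Q', once).hex())  # 00000000000000ff
```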
Best wishes,
Guillaume
Hi All,
We have a lab, and the DUTs in the lab will be used for both automated testing (LAVA) and manual usage (development, debug, manual testing).
We will develop a tool for the manual usage; please find the basic requirements for the tool in the attachment.
I list below possible solutions for how to develop the lab tool and let it cooperate with LAVA on the same lab hardware. Which one is better? Could you give your suggestions?
1. Develop the lab tool based on the LAVA framework.
Need to modify the LAVA DB: add some new tables for network ports, port attributes, port connections, usage logs, and notes, and also add columns to the existing lava_scheduler_app_device table.
Need to modify the LAVA command line, adding functions like lava-server reserve dut1 and lava-server connect dut1.
Need to add new code for the features which LAVA doesn't support; part of the code may be reusable. I need to look into LAVA and check how to reuse it.
The tool would be developed based on https://github.com/Linaro/lava-server and would be installed on the LAVA master, right?
Most probably we would maintain the code changes in a local repository, because it is difficult to upstream thousands of lines of changes to the Linaro master repository. We would need to merge changes from the master repository into the local repository.
2. Develop the lab tool as a separate tool with a separate DB. There are two ways to avoid DUT conflicts between the lab tool and LAVA:
a) The lab tool DB maintains all DUTs and the LAVA DB also maintains all DUTs; when a user wants to do automated testing on one DUT, he needs to reserve this DUT via the lab tool in advance, then he can run automated testing on the specified DUT.
b) Divide the DUTs into two groups: one group is for automated testing and is added to the LAVA DB; the other group is for manual usage and is added to the lab tool DB.
Another question is how to link/unlink two network ports dynamically (see requirement #7 in the attachment) during automated testing. I am not sure whether LAVA can support this feature later; the simple way is to support it in the test script:
Do the link/unlink in the test script: subprocess.Popen(["lab-tool","link","dut1.eth_1g_1","dut2.eth_1g_1"]), where the test script needs to get the DUT name from LAVA in advance.
Does this work?
BR
Yongtao
Hi,
I am trying to run a multinode test job
<https://staging.validation.linaro.org/scheduler/job/175378/definition> in
LAVA v2.
It failed <https://staging.validation.linaro.org/scheduler/job/175378#L555>.
After a short discussion with Chase, I realized that if I want to run a
multinode test job, I have to use the "lava-multinode" protocol.
However, I still need "lava-lxc" to deploy the test image for the hikey.
I am wondering if it is possible to use two protocols in the same test job:
lava-lxc to deploy the test image and lava-multinode to run the test.
If yes, could you please provide some examples?
Thanks,
Arthur
Hi,
I know I can add device_tags via the web UI; can I also add device_tags from the command line?
Best Regards
XuHongyu
Hello Neil,
1. Can you give me a deploy flow for an x86 device?
2. I want to use the web UI to submit N jobs at one time (N > 10).
How can I do it?
Thanks.
Amanda
Hi all,
Can multiple databases be used with LAVA? And is there any introduction to this?
Best Regards
XuHongyu
Hi,
Someone with the nickname "Nurav" pinged the #linaro-lava IRC channel
last week regarding a problem installing LAVA on Jessie using
the jessie-backports repository. The person also diligently followed up
with me in private messages to check what was wrong with his/her
installation. I found some time to do the testing today. Since I do not
know any contact details for 'Nurav', I am writing my findings here, based
on LAVA installations I did in fresh Jessie containers; hopefully the
person is on this mailing list and will see this message.
The fresh installation went fine both with plain Jessie and with
jessie-backports plus the LAVA production-repo. I've put up the complete
logs of the installations I tried in the following files:
* Fresh installation of lava from jessie-backports -
http://people.linaro.org/~senthil.kumaran/logs/lava-jessie-bp-installation.…
* Installing 2017.5 lava from jessie to jessie-backports to
production-repo -
http://people.linaro.org/~senthil.kumaran/logs/lava-jessie-bp-installation-…
HTH. Thank You.
--
Senthil Kumaran
http://www.stylesen.org/
http://www.sasenthilkumaran.com/
In LAVA V1, we can use `target: {device hostname}` to submit a job to one specific device, but I find that V2 does not support this.
------------------
Best Regards.
Bo Wang
Hi,
warning: long mail
CIP is a Linux Foundation[1] initiative to create a commodity Linux-based
platform for train/railway control systems, power plants, etc., which needs
to be maintained for a very long time - in extreme cases, for 50 years. The
first outcome is the CIP kernel, based on 4.4 LTS and maintained for now by
Ben Hutchings, a colleague of mine.
This week, within the CIP (Civil Infrastructure Platform) project, we
have published a VM where we have integrated LAVAv2 and KernelCI on
Debian so any developer can test a specific kernel on a board attached
to his/her laptop[2]. You have probably read in this mailing list some
questions coming from some colleagues of mine at Codethink.
Since the project is initially focused on the CIP kernel, it was natural
to select LAVA and KernelCI as our default tools. We would like to see
in the near future our CIP kernel as part of kernelci.org We move slowly
though since CIP is still a young project with very limited resources
but for now, and due to the very unique requirements CIP needs to
address[3], safety critical requirements among them, we need to get
absolute control of the testing infrastructure/service we will use.
As a previous step towards building our own testing/reporting
infrastructure and service, understanding LAVAv2 and promoting it among
the developers involved in this initiative is essential. I think that
B@D will be useful for this purpose, allowing us to start testing and
sharing the results among CIP devs. Codethink has invested a significant
amount of effort in creating a step by step deployment-configure-test
guide[4]. Any feedback is welcome.
B@D is meant to significantly reduce the entry-level effort to use both
LAVA and KernelCI. We consider B@D a downstream project of LAVAv2 and
KernelCI. Hopefully, once we start using it within CIP we will be able
to provide meaningful feedback on this list.
The team behind B@D is subscribed to this list and the #linaro-lava IRC
channel. We have our own mailing list, cip-dev, for general support of
CIP-related topics, B@D included, but our idea is to route users here for
LAVA-specific questions that are unrelated to the setup of the environment,
and to participate in supporting them to the best of our knowledge,
if you think that is a good idea. We are unsure yet about the level of
success we will get with this effort, though.
Since probably for many of you this is the first news you get about CIP
and B@D, feel free to ask me or my colleagues. We will do our best to
answer your questions.
You can get all the technical information about B@D at feature page[5].
Feel free to download it[6]. The integration scripts are located in
gitlab.com under AGPLv3 license[7]. Our cip-dev mailing list is
obviously an open one[8].
I would like to finish by thanking the devs for the great work done on LAVA,
and those of you who have helped the Codethink guys get to this
point. LAVAv2 and KernelCI are complicated beasts, hard to swallow
without support, and at the same time very useful. I guess that is part
of the beauty.
[1] https://www.cip-project.org/about
[2] https://www.cip-project.org/news/2017/05/30/bd-v0-9-1
[3] https://wiki.linuxfoundation.org/_media/cip/osls2017_cip_v1_0.pdf
[4]
https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptestingboar…
[5]
https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptestingboar…
[6] https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipdownload
[7] https://gitlab.com/cip-project/cip-testing/board-at-desk-single-dev
[8] https://lists.cip-project.org/mailman/listinfo/cip-dev
Best Regards
--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito(a)codethink.co.uk
Hi,
I've hit an issue after upgrading LAVA on Debian Stretch from
2017.5-6486.59742fe-1 to 2017.5-6509.d71f267-1, trying to add or
update device dictionaries using the "lava-tool
device-dictionary" command then always failed.
Between these two revisions, the following change was introduced:
commit ae4d4776ca7b7454f5406159226e3c9327dd207f
Author: Rémi Duraffort <remi.duraffort(a)linaro.org>
Date: Tue May 2 15:22:33 2017 +0200
LAVA-757 Move device dictionaries to file system
The device dictionaries are now saved in
/etc/lava-server/dispatcher-config/devices,
which was previously installed as owned by the root user. The LAVA
server instance runs as the "lavaserver" user, which
didn't have write access to this directory, so I had to manually
change the permissions, and then it all worked.
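Concretely, what I ran was along these lines:
```
# give the LAVA server user write access to the new device dictionary directory
sudo chown -R lavaserver:lavaserver /etc/lava-server/dispatcher-config/devices
```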
Could you please add a step in the Debian package installation to
change the permissions and ownership of the files in
/etc/lava-server/ to fix this issue?
Note: It may also be a good idea to propagate the IOError into
the xmlrpclib.Fault error message rather than replacing it with a
rather mysterious "Unable to store the configuration for %s on
disk". I had to do this to find out the root cause of the error;
I can send a patch, let me know.
Best wishes,
Guillaume
Hey all,
I'm currently hitting an interesting issue: we're deploying Debian-based
images in our tests which have most of / mounted read-only.
Unfortunately, the default deployment data for Debian indicates that the
various lava-test-shell related directories should live in /lava-xxxxx, to
which, in our booted image, of course nothing can write.
It would be good to give test writers the freedom to specify a sensible
base path for LAVA's usage to avoid these types of issues. The option
to add a Debian variant to the deployment data on our setup is somewhat
less attractive, as that requires server configuration. (Changing the
images to mount / read-write is not an option, as that would defeat
the point of the test.)
--
Sjoerd Simons
Collabora Ltd.
Hi,
With the May release, we can now define a timeout for each test of a job.
Some of our tests are not correctly executed, and the timeout then expires.
But we want to execute the next test rather than stop the job with an Incomplete verdict.
How can we configure this behavior in the YAML?
See the following example of the YAML used.
Thanks in advance for your answer.
Regards
Florence ROUGER-JUNG | TINA: 166 7356 | Tel: +33 2 44 02 73 56 | Mobile: +33 6 13 49 38 02
STMicroelectronics | MCD
Auriga Building | 9-11, rue Pierre-Félix Delarue | 72100 Le Mans | FRANCE
I currently have an issue with starting lava-publisher. Whenever I run "python manage.py lava-publisher" I get this output log:
2017-05-22 20:10:28,061 INFO publisher Dropping priviledges
2017-05-22 20:10:28,061 ERROR publisher Unable to lookup the user or the group
2017-05-22 20:10:28,062 ERROR publisher Unable to drop priviledges
I have the same binaries on my own workstation, and there I am able to successfully connect to lava-publisher; the output log shows:
2017-03-27 19:34:43,909 INFO publisher Dropping priviledges
2017-03-27 19:34:43,909 DEBUG publisher Switching to (lavaserver(126), lavaserver(133))
2017-03-27 19:34:43,909 INFO publisher Creating the input socket at ipc:///tmp/lava.events
2017-03-27 19:34:43,910 INFO publisher Creating the publication socket at tcp://*:5500
2017-03-27 19:34:43,910 INFO publisher Creating the additional sockets:
2017-03-27 19:34:43,910 INFO publisher Starting the proxy
I followed the same installation steps from the same repository on both machines. I somehow think this has to do with the user or group not being stored in the database, but I am not sure.
BTW: lava-server/dispatcher both work well on both machines; I just want to have an event notification client working on my second machine.
Thank you.
- Randy
Hello,
I am trying to set up Android tests on a B2260 board using LAVA V2
(version 2017-05).
I have read the LAVA documentation about Android tests and investigated
existing Android jobs
and devices that have Android tests developed for them.
My questions:
* Is it essential to use a Linux container to perform Android tests?
* Do you have device-types, devices, and jobs that would help me
achieve my Android tests?
Best regards
Philippe
Hello,
Until now I've installed extra test packages from within embedded test scripts.
Using the latest V2 version, I'm trying to manage this in my test jobs,
e.g.:
install:
  deps:
  - python-pytest
  - python-lxml
  - packagegroup-core-buildessential
  - phoronix-test-suite
At execution time, I get an error: "OPKG package manager not found in the path".
Does this mean that opkg is the only installer supported? Or is it the default one, meaning that I can select dpkg or aptitude instead?
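For context, my understanding (possibly wrong) is that the installer comes from the deployment data selected by the os: field in the deploy action, e.g.:
```
- deploy:
    to: tftp
    # 'oe' would select opkg-based deployment data, 'debian' apt/dpkg
    # (my assumption, based on the error above)
    os: oe
```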
Best regards,
Denis
Hello Team,
This is just a request as a LAVA user.
It would be better if LAVA supported static IP configuration for deploy and
related steps with the target board.
For users who don't have a DHCP setup, this would be very useful for working
with targets locally.
I hope you will consider this request and support static IP configuration for
LAVA users.
Regards,
Guru