Hello Milosz,
Thanks for routing this thread to lava-users - when I made the initial post
to linaro-validation, I checked my archive and saw that e.g. Neil posts
there frequently, but I missed that it's not the official LAVA list.
On Mon, 3 Jul 2017 22:25:31 +0100
Milosz Wasilewski <milosz.wasilewski(a)linaro.org> wrote:
[]
> > So, I'm not exactly sure about build-only tests on real HW boards.
> > The "default" idea would be that they should run, but I imagine in
> > reality, some may need to be filtered out. But then blacklisting
> > would be better approach than whitelisting. And I'm not sure if
> > Zephyr has concept of "skipped" tests which may be useful to handle
> > hardware variations. (Well, I actually dunno if LAVA supports
> > skipped tests!)
>
> As far as I can tell they actually run on the board, but usually output
> just 'Hello world!' or something similar. As we discussed with Kumar, this
> is still OK. What Kumar requested (and I still didn't deliver) is that
> whenever the LAVA test job completes, the test should be considered
> 'passed'. So we wouldn't have to do any parsing of patterns. I'm not
> sure if that will work, but it's worth a try.
Hmm, I wonder what the criteria for being "failed" would be for such
tests... Anyway, thanks for sharing - I'm not familiar with all the Zephyr
tests/samples myself; I'll keep such issues in mind when looking into
them.
[]
> > more boards will be installed in the Lab and stability of them
> > improves (so far they seem to be pretty flaky).
> >
>
> You're absolutely right. This is a pretty big task to work on and IMHO
> requires someone to work full time for at least a couple of weeks. The
> second part is also true: the boards don't behave as they should. I
> guess Dave can elaborate more on that. I can only see the result -
> boards (frdm-kw41z) don't run the tests they're asked to run.
Matt Hart actually showed me a ticket on that, so at least it's a
confirmed/known issue being worked on. But even with arduino_101 and
frdm_k64f, I hit cases more than once where board(s) were stuck for an
extended time but still had jobs routed to them (which either failed or
timed out). So, there may be a problem with the health checks, which either
don't run frequently enough or aren't robust enough. arduino_101 is the
only one of its kind, so if something happens to it, there's no backup. Etc,
etc.
[]
> > So, the problem, for starters, is how to make LAVA *feed* the
> > input, as specified in the test definition (like "2+2") into a
> > board.
>
> Right. What I proposed was coding all the inputs in the test itself.
Well, that would require a bunch of legwork, but the biggest problem is
that it wouldn't test what's actually required. E.g. both the JerryScript
and MicroPython Zephyr ports are actually interactive apps working over a
serial connection, and functional testing of them would mean feeding
something over this serial connection and checking that the results are as
expected. I'll keep the idea of "builtin" tests in mind though.
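For illustration, the kind of interactive check I have in mind could look
roughly like this with pexpect driving the serial console (a sketch only -
the device path, baud rate and ">>>" prompt are assumptions, and this is
not how LAVA does it today):

import pexpect

# Attach to a MicroPython/JerryScript-style REPL over serial.
child = pexpect.spawn("picocom -b 115200 /dev/ttyACM0")
child.expect(">>> ")    # wait for the interactive prompt
child.sendline("2+2")   # feed the input from the test definition
child.expect("4")       # check the expected output
child.expect(">>> ")    # back at the prompt: test passed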
Thanks!
--
Best Regards,
Paul
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
I was too quick to hit reply. CCing lava-users for comments from LAVA team.
milosz
On 3 July 2017 at 21:50, Paul Sokolovsky <paul.sokolovsky(a)linaro.org> wrote:
> Hello Milosz,
>
> I appreciate getting at least some response ;-). Some questions however
> could use a reply from LAVA team, I guess.
>
> On Mon, 3 Jul 2017 13:34:49 +0100
> Milosz Wasilewski <milosz.wasilewski(a)linaro.org> wrote:
>
> []
>
>> > jobs submit a number of tests to LAVA (via
>> > https://qa-reports.linaro.org/) for the following boards:
>> > arduino_101, frdm_k64f, frdm_kw41z, qemu_cortex_m3. Here's an
>> > example of cumulative test report for these platforms:
>> > https://qa-reports.linaro.org/lite/zephyr-upstream/tests/
>> >
>> > That's really great! (Though the list of tests to run in LAVA seems
>> > to be hardcoded:
>> > https://git.linaro.org/ci/job/configs.git/tree/zephyr-upstream/submit_for_t…)
>> >
>>
>> It is, as I wasn't really sure what to test. The build job needs to
>> prepare the test templates to be submitted to LAVA. In the case of zephyr
>> each test is a separate binary. So we end up with a number of file
>> paths to substitute in the template. Hardcoding was the easiest thing
>> to get things running. But I see no reason why it couldn't be replaced
>> with some smarter code to discover the binaries. The problem with this
>> approach is that some of these tests are build-time only. They have no
>> meaning when running on the board and need to be filtered out somehow.
>
> I see, that makes some sense. But thinking further, I'm not entirely
> sure about "build only" tests. Zephyr's sanitycheck test has such a
> concept, but I'd imagine it exists for the following reasons: a)
> sanitycheck runs tests on QEMU, which has very bare hardware support,
> so many tests are not runnable; b) sanitycheck can operate on "samples",
> not just "tests", and as samples can be interactive, etc., it makes sense
> to only build them, not run them.
>
> So, I'm not exactly sure about build-only tests on real HW boards. The
> "default" idea would be that they should run, but I imagine in reality,
> some may need to be filtered out. But then blacklisting would be better
> approach than whitelisting. And I'm not sure if Zephyr has concept of
> "skipped" tests which may be useful to handle hardware variations.
> (Well, I actually dunno if LAVA supports skipped tests!)
>
> Anyway, these are rough ideas for the future. I've spent a couple of
> weeks munging with the LITE CI setup; there are definitely some
> improvements, but also a Pandora's box of other ideas and improvements to
> make. I'm wrapping up for now, but hope to look again in some time
> (definitely before the Connect, so we can discuss further
> steps there). In the meantime, I hope that more boards will be
> installed in the Lab and their stability improves (so far they seem
> to be pretty flaky).
>
> []
>
>> > - test:
>> >     monitors:
>> >     - name: foo
>> >       start: ""
>> >       end: Hello, ZJS world!
>> >       pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.
>> >
>> > So, the "start" substring is empty, and perhaps matches a line
>> > output by a USB multiplexer or board bootloader. The "end" substring is
>> > actually the expected single-line output. And "pattern" is unused
>> > (dunno if it can be dropped without a def file syntax error). Is
>> > there a better way to handle single-line test output?
>>
>> You're making a silent assumption that if there is a matching line,
>> the test is passed. In case of other tests (zephyr unit tests), it's
>> not the case. The 'start' matches some line which is displayed when
>> zephyr is booting. End matches the line which is displayed after all
>> testing is done. The pattern follows the unit test pattern.
>
> Thanks, but I'm not sure I understand this response. I don't challenge
> that Zephyr unittests need this support, or the way they're handled.
> LITE however needs to test more things than "batch" Zephyr unittests. I
> present another use case which, albeit simple, is barely supported by LAVA.
> (That's a question for the LAVA team, definitely.)
>
>> > Well, beyond a simple output matching, it would be nice even for the
>> > initial "smoke testing" to actually make some input into the
>> > application and check the expected output (e.g., input: "2+2",
>> > expected output: "4"). Is this already supported for LAVA "v2"
>> > pipeline tests? I may imagine that would be the same kind of
>> > functionality required to test bootloaders like U-boot for Linux
>> > boards.
>>
>> I didn't use anything like this in v2 so far, but you're probably best
>> off doing something like
>>
>> test 2+2=4 PASS.
>>
>> then you can easily create a pattern that will filter the output. In
>> the case of zephyr a pattern is the only way to filter things out as there
>> is no shell (?) on the board.
>
> So, the problem, for starters, is how to make LAVA *feed* the
> input, as specified in the test definition (like "2+2") into a board.
>
> As there was no reply from the LAVA team (I imagine they're busy with
> other things), I decided to create a user story in Jira for them; as I
> couldn't create a LAVA-* ticket, I created it as
> https://projects.linaro.org/browse/LITE-175 . Hopefully it won't go
> unnoticed and the LAVA team will get to it eventually.
>
>>
>> milosz
>
> Thanks!
>
> --
> Best Regards,
> Paul
>
> Linaro.org | Open source software for ARM SoCs
> Follow Linaro: http://www.facebook.com/pages/Linaro
> http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
Hi, all:
If the device has already booted successfully, can I skip the deploy and
boot steps in job.yaml?
Hello LAVA experts,
I am currently working with LAVA and was asked to find out whether there is
a way to get data from the device dictionary inside a running job. Details
below:
I have a device named demo-01, and its device dictionary has one line like
"{% set my_property = 'my_prop' %}". I have a job running on the demo-01
device, and I would like to pass the string 'my_prop' into a script while
the job is running. Is it possible to get device-dictionary data directly
from the job definition (Job Submitter webpage) or the test definition
(yaml file)? If yes, how can I do this? If not, is there a good way to
solve this problem that you would like to share?
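For reference, my understanding is that the device dictionary is a jinja2
template, so a value defined with {% set %} only exists while the template
is rendered - a quick standalone sketch to illustrate (plain jinja2, not
LAVA code):

import jinja2

# {% set %} values live only inside the rendered template, which is
# presumably why a running job cannot see them directly.
template = jinja2.Environment().from_string(
    "{% set my_property = 'my_prop' %}rendered value: {{ my_property }}")
print(template.render())  # -> "rendered value: my_prop"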
Thanks and Best Regards,
Yifan
Hi all,
Is there any limit on test duration when using LAVA to run tests?
Recently, I found my tests were automatically cancelled after running for
24 hours, and I could not find any clue about why the job was cancelled.
Can anybody give me some help?
(Very sorry, my LAVA version is V1.)
12794.0 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM INFO: Cancel operation
12794.1 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM DEBUG: finally status fail
12794.2 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM WARNING: [ACTION-E] boot_image is finished with error (Cancel).
12794.3 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM WARNING: Target : power_off being call
Thanks.
Hi,
We've hit an issue when running lava-master on Debian Jessie and
lava-slave on Debian Stretch, after a few minutes the slave would
stop working. After some investigation, it turned out to be due
to a difference of the libzmq versions in Jessie (4.0.5+dfsg-2)
and Stretch (4.2.1-4) causing some protocol errors.
The line that detects the error in Stretch is:
https://github.com/zeromq/libzmq/blob/7005f22726d4a6ca527f27560a0a132394fdb…
This appears to be due to how the "once" counter gets written
into memory and into the zmq packets: the libzmq version from
Jessie uses memcpy whereas the one in Stretch calls put_uint64.
As a result the byte endianness has changed from little to big,
causing the packets to work until "once" reaches 255 which
translates into 0xff << 56, after which it overflows to 0 and
causes the error.
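A standalone sketch of the effect (plain Python, not libzmq code):

import struct

# memcpy of a native uint64 on a little-endian host puts 0xff in the
# first byte when the counter value is 255:
wire = struct.pack('<Q', 255)
# put_uint64 reads and writes the counter in big-endian (network)
# order, so those same bytes come back as 0xff << 56:
print(hex(struct.unpack('>Q', wire)[0]))  # 0xff00000000000000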
This is not a LAVA bug as such, rather a libzmq one, but it
impacts interoperability between Jessie and Stretch for LAVA so
it may need to be documented or resolved somehow. We've
installed the new version of libzmq onto our Jessie servers to
align them with Stretch; doing this does fix the problem.
Best wishes,
Guillaume
Hi All,
We have a Lab, and the DUTs in the Lab will be used for both automated
testing (LAVA) and manual usage (development, debug, manual testing).
We will develop a tool for the manual usage; please find the basic
requirements of the tool in the attachment.
I list possible solutions below for how to develop the lab tool and let it
cooperate with LAVA on the same Lab hardware. Which one is better? Could
you give your suggestions?
1. Develop the lab tool based on the LAVA framework.
We would need to modify the LAVA DB: add some new tables for network ports,
port attributes, port connections, usage logs and notes, and also add
columns to the existing lava_scheduler_app_device table.
We would need to modify the LAVA command line, adding functions like
"lava-server reserve dut1" and "lava-server connect dut1".
We would need to add new code for the features LAVA doesn't support; part
of the code may be reusable - I need to look into LAVA and check how to
reuse it.
The tool would be developed based on https://github.com/Linaro/lava-server
and installed on the LAVA master, right?
Most probably we would maintain the code changes in a local repository,
because it is difficult to upstream thousands of lines of changes to the
Linaro master repository; we would then need to merge changes from the
master repository into our local one.
2. Develop the lab tool as a separate tool with a separate DB. There are
two ways to avoid DUT conflicts between the lab tool and LAVA:
a) The lab tool DB maintains all DUTs and the LAVA DB also maintains all
DUTs; when a user wants to run automated testing on a DUT, they reserve it
via the lab tool in advance, and can then run automated testing on the
specified DUT.
b) Divide the DUTs into two groups: one group for automated testing, added
to the LAVA DB, and the other for manual usage, added to the lab tool DB.
Another question is how to link/unlink two network ports dynamically (see
requirement #7 in the attachment) during automated testing. I am not sure
whether LAVA can support this feature later; the simple way is to support
it in the test script, e.g. do the link/unlink there:
subprocess.Popen(["lab-tool","link","dut1.eth_1g_1","dut2.eth_1g_1"]),
with the test script getting the DUT name from LAVA in advance.
Would this work?
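For example, a slightly fuller sketch of that test-script idea ("lab-tool"
and the port names refer to our planned tool, not anything that exists in
LAVA):

import subprocess

def link_ports(port_a, port_b):
    # check_call raises CalledProcessError if the lab tool fails,
    # so the test step fails loudly instead of silently continuing.
    subprocess.check_call(["lab-tool", "link", port_a, port_b])

link_ports("dut1.eth_1g_1", "dut2.eth_1g_1")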
BR
Yongtao
Hi,
I am trying to run a multinode test job
<https://staging.validation.linaro.org/scheduler/job/175378/definition> in
LAVA v2.
It failed <https://staging.validation.linaro.org/scheduler/job/175378#L555>.
After a short discussion with Chase, I realized that if I want to run a
multinode test job I have to use the "lava-multinode" protocol.
However, I still need "lava-lxc" to deploy the test image for hikey.
I am wondering if it is possible to use two protocols - lava-lxc for
deploying the test image and lava-multinode to run the test - in the same
test job?
If yes, could you please provide some examples?
If yes, could you please provide some examples?
Thanks,
Arthur
Hi ,
I know I can add device_tags via the web UI; can I also add device_tags
from the command line?
Best Regards
XuHongyu
Hello, Neil:
1. Can you give me a deploy flow for an x86 device?
2. I want to use the web UI to submit N jobs at one time (N > 10).
How can I do it?
Thanks.
Amanda
Hi all,
Can multiple databases be used with LAVA? Is there any documentation on
this?
Best Regards
XuHongyu
Hi,
Someone with the nickname "Nurav" pinged the #linaro-lava IRC channel
last week regarding a problem installing LAVA on jessie using
the jessie-backports repository. The person also diligently followed up
with me in private messages to check what was wrong with his/her
installation. I found some time to do the testing today. Since I do not
know any contact details for 'Nurav', I am writing my findings here, based
on LAVA installations I did on fresh jessie containers; hopefully the
person is on this mailing list and will see this message.
The fresh installation went fine both on plain jessie and also using
jessie-backports and the LAVA production-repo. I've put up the complete
logs of the installations I tried in the following files:
* Fresh installation of lava from jessie-backports -
http://people.linaro.org/~senthil.kumaran/logs/lava-jessie-bp-installation.…
* Installing 2017.5 lava from jessie to jessie-backports to
production-repo -
http://people.linaro.org/~senthil.kumaran/logs/lava-jessie-bp-installation-…
HTH. Thank you.
--
Senthil Kumaran
http://www.stylesen.org/http://www.sasenthilkumaran.com/
In LAVA V1, we could use `target: {device hostname}` to submit a job to one
specific device, but I find V2 does not support this.
------------------
Best Regards.
Bo Wang
Hi,
warning: long mail
CIP is a Linux Foundation[1] initiative to create a commodity Linux-based
platform for railway control systems, power plants, etc., which needs to
be maintained for a very long time - in extreme cases, for 50 years. The
first outcome is the CIP kernel, based on 4.4 LTS and maintained for now
by Ben Hutchings, a colleague of mine.
This week, within the CIP (Civil Infrastructure Platform) project, we
have published a VM where we have integrated LAVAv2 and KernelCI on
Debian so any developer can test a specific kernel on a board attached
to his/her laptop[2]. You have probably read in this mailing list some
questions coming from some colleagues of mine from Codethink.
Since the project is initially focused on the CIP kernel, it was natural
to select LAVA and KernelCI as our default tools. We would like to see
our CIP kernel as part of kernelci.org in the near future. We move slowly
though, since CIP is still a young project with very limited resources,
but for now, and due to the very unique requirements CIP needs to
address[3] - safety-critical requirements among them - we need to get
absolute control of the testing infrastructure/service we will use.
As a previous step towards building our own testing/reporting
infrastructure and service, understanding LAVAv2 and promoting it among
the developers involved in this initiative is essential. I think that
B@D will be useful for this purpose, allowing us to start testing and
sharing the results among CIP devs. Codethink has invested a significant
amount of effort in creating a step by step deployment-configure-test
guide[4]. Any feedback is welcome.
B@D is meant to significantly reduce the entry-level effort to use both
LAVA and KernelCI. We consider B@D a downstream project of LAVAv2 and
KernelCI. Hopefully, once we start using it within CIP, we will be able
to provide meaningful feedback on this list.
The team behind B@D is subscribed to this list and the #linaro-lava IRC
channel. We have our own mailing list, cip-dev, for general support of
CIP-related topics, B@D included, but our idea is to route users here for
LAVA-specific questions that are unrelated to the set-up of the
environment, and to participate in supporting them to the best of our
knowledge, if you think that is a good idea. We are unsure yet about the
level of success we will have with this effort, though.
Since for many of you this is probably the first news you have had about
CIP and B@D, feel free to ask me or my colleagues. We will do our best to
answer your questions.
You can get all the technical information about B@D at feature page[5].
Feel free to download it[6]. The integration scripts are located in
gitlab.com under AGPLv3 license[7]. Our cip-dev mailing list is
obviously an open one[8].
I would like to finish by thanking the devs for the great work done on
LAVA, and those of you who have helped the Codethink guys get to this
point. LAVAv2 and KernelCI are complicated beasts, hard to swallow
without support, and at the same time very useful. I guess that is part
of the beauty.
[1] https://www.cip-project.org/about
[2] https://www.cip-project.org/news/2017/05/30/bd-v0-9-1
[3] https://wiki.linuxfoundation.org/_media/cip/osls2017_cip_v1_0.pdf
[4]
https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptestingboar…
[5]
https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptestingboar…
[6] https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipdownload
[7] https://gitlab.com/cip-project/cip-testing/board-at-desk-single-dev
[8] https://lists.cip-project.org/mailman/listinfo/cip-dev
Best Regards
--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito(a)codethink.co.uk
Hi,
I've hit an issue after upgrading LAVA on Debian Stretch from
2017.5-6486.59742fe-1 to 2017.5-6509.d71f267-1: trying to add or
update device dictionaries using the "lava-tool
device-dictionary" command now always fails.
Between these two revisions, the following change was introduced:
commit ae4d4776ca7b7454f5406159226e3c9327dd207f
Author: Rémi Duraffort <remi.duraffort(a)linaro.org>
Date: Tue May 2 15:22:33 2017 +0200
LAVA-757 Move device dictionaries to file system
The device dictionaries are now saved in:
/etc/lava-server/dispatcher-config/devices
which was previously installed with the root user. The LAVA
server instance is running under the "lavaserver" user, which
didn't have write access to this directory. So I had to manually
change the permissions, and then it all worked.
Could you please add a step in the Debian package installation to
change the permissions and ownership of the files in
/etc/lava-server/ to fix this issue?
Note: It may also be a good idea to propagate the IOError into
the xmlrpclib.Fault error message rather than replacing it with a
rather mysterious "Unable to store the configuration for %s on
disk". I had to do this to find out the root cause of the error;
I can send a patch, let me know.
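Something along these lines, roughly (a sketch of the idea only, not the
actual LAVA code; the error string is the one from the current message):

import xmlrpclib

def store_device_config(hostname, path, content):
    try:
        with open(path, 'w') as f:
            f.write(content)
    except IOError as exc:
        # Keep the underlying errno/strerror instead of hiding it.
        raise xmlrpclib.Fault(
            400, "Unable to store the configuration for %s on disk: %s"
            % (hostname, exc))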
Best wishes,
Guillaume
Hey all,
I'm currently hitting an interesting issue: we're deploying debian-
based images in our tests which have most of / mounted read-only.
Unfortunately the default deployment data for Debian indicates that the
various lava-test-shell related directories should live in /lava-xxxxx, to
which, in our booted image, of course nothing can write.
It would be good to give test writers the freedom to specify a sensible
base path for lava's usage to avoid these types of issues. The option
to add a Debian variant to the deployment data on our setup is somewhat
less attractive, as that requires server configuration. (Changing the
images to mount / as read-write is not an option as that would defeat
the point of the test.)
--
Sjoerd Simons
Collabora Ltd.
Hi,
With the May delivery, we can now define a timer for each test of a job.
Some of our tests are not executed correctly and the timeout then expires,
but we want to execute the next test rather than stop the job with an
incomplete verdict.
How can we configure this behavior in yaml?
See the following example of the yaml used.
Thanks in advance for your answer.
Regards
Florence ROUGER-JUNG | TINA: 166 7356 | Tel: +33 2 44 02 73 56 | Mobile: +33 6 13 49 38 02
STMicroelectronics | MCD
Auriga Building | 9-11, rue Pierre-Félix Delarue | 72100 Le Mans | FRANCE
I currently have an issue starting lava-publisher. Whenever I run "python
manage.py lava-publisher" I get this output log:
2017-05-22 20:10:28,061 INFO publisher Dropping priviledges
2017-05-22 20:10:28,061 ERROR publisher Unable to lookup the user or the group
2017-05-22 20:10:28,062 ERROR publisher Unable to drop priviledges
I have the same binaries on my own workstation, where lava-publisher starts
successfully; its output log shows:
2017-03-27 19:34:43,909 INFO publisher Dropping priviledges
2017-03-27 19:34:43,909 DEBUG publisher Switching to (lavaserver(126), lavaserver(133))
2017-03-27 19:34:43,909 INFO publisher Creating the input socket at ipc:///tmp/lava.events
2017-03-27 19:34:43,910 INFO publisher Creating the publication socket at tcp://*:5500
2017-03-27 19:34:43,910 INFO publisher Creating the additional sockets:
2017-03-27 19:34:43,910 INFO publisher Starting the proxy
I followed the same steps to install from the same repository on both
machines. I suspect this has to do with the user or group not existing on
the failing machine, but I am not sure.
BTW: the LAVA server/dispatcher both work well on both machines; I just
want to have an event notification client working on my second machine.
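A quick way to check that suspicion (the "lavaserver" names are the ones
shown in the working log above; adjust if your instance differs):

import grp
import pwd

# Each call raises KeyError if the user/group that lava-publisher
# tries to drop privileges to is missing on this machine.
print(pwd.getpwnam("lavaserver"))
print(grp.getgrnam("lavaserver"))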
Thank you.
- Randy
Hello,
I am trying to set up Android tests on a B2260 board using LAVA V2
(version 2017-05).
I have read the LAVA documentation about Android tests and investigated
existing Android jobs and devices for which Android tests have been
developed.
My questions:
* Is it essential to use a Linux container to perform Android tests?
* Do you have device-types, devices and jobs that would help me get my
Android tests working?
Best regards
Philippe
Hello,
Until now I've installed extra test packages from within the test scripts
themselves. Using the latest V2 version, I'm trying to manage this in my
test job definitions, e.g.:
install:
  deps:
  - python-pytest
  - python-lxml
  - packagegroup-core-buildessential
  - phoronix-test-suite
At execution time, I get "Error: OPKG package manager not found in the
path."
Does this mean that OPKG is the only supported installer? Or is it just
the default, meaning that I can select DPKG or aptitude instead?
Best regards,
Denis
Hello Team,
This is just a request from a LAVA user.
It would be better if LAVA supported static IP configuration for the
deploy and related steps on the target board.
For users who don't have a DHCP setup, this would be very useful for
working with a target locally.
I hope you will consider this request and support static IP configuration
for LAVA users.
Regards,
Guru
Hi Everyone,
In the Beaglebone-Black Health Check, `bbb_debian_ramdisk_test.yaml`,
located in the Linaro master repository
(https://git.linaro.org/lava-team/refactoring.git), there are the
following lines in the "action:" block:
---
kernel:
  url: http://snapshots.linaro.org/components/lava/standard/debian/jessie/armhf/4/…
ramdisk:
  url: http://snapshots.linaro.org/components/lava/standard/debian/jessie/armhf/4/…
  compression: gz
  # the bootloader needs a u-boot header on the modified ramdisk
  add-header: u-boot
modules:
  url: http://snapshots.linaro.org/components/lava/standard/debian/jessie/armhf/4/…
  compression: gz
---
How is the `initramfs.cpio.gz` generated? KernelCI's build.py script
doesn't generate it. None of the Lava scripts generate it, yet it is
required to perform the boot test of a kernel on the Beaglebone Black. I
can't find it mentioned anywhere in the documentation either.
How did you generate this so it is compatible with the BBB? We want to
follow Linaro's standards, guidelines and recommendations as closely as we
can, but this particular part seems to be missing.
Any help you can offer would be greatly appreciated.
Thank you!
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Hi Everyone,
This week one of my teammates discovered that storage was very low on
our LAVA server. After investigating, he found that
/var/lib/lava/dispatcher gradually increases in size: when a test is run,
files accumulate in /var/lib/lava/dispatcher/slave/tmp for each job, but
they are never removed.
Does LAVA have a setting or some kind of automation that will remove these
files, say, after X days or by some other criteria, or do we need to
remove them manually?
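If there is no built-in setting, here is the rough, cron-able stopgap we
are considering (the path is the one above; the 14-day cutoff is
arbitrary, and we would test this carefully before trusting it):

import os
import shutil
import time

TMPDIR = "/var/lib/lava/dispatcher/slave/tmp"
cutoff = time.time() - 14 * 24 * 3600  # 14 days, arbitrary

for entry in os.listdir(TMPDIR):
    path = os.path.join(TMPDIR, entry)
    # Remove per-job directories that have not been touched recently.
    if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
        shutil.rmtree(path)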
I appreciate any guidance you can offer.
Thank you!
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Feature request:
Please add an option to "lava-server manage device-types" to add an
alias. Currently this can be done from the django interface, but the
command-line interface is much more automation-friendly and
administrator-friendly.
Thanks,
Kevin
Hi, we're attempting to use lava-ci to submit a test to LAVA. I've
cloned it from
https://github.com/kernelci/lava-ci.git
But when I attempt to submit a simple test
../lava-job-runner.py --username lavauser --token ... --server http://localhost:8080/RPC2
I get
Connecting to Server...
Connection Successful!
connect-to-server : pass
Gathering Devices...
Gathered Devices Successfully!
Gathering Device Types...
Gathered Device Types Successfully!
Submitting Jobs to Server...
but I don't see any submitted jobs in the LAVA v2 web interface. Is there
anything obvious elsewhere I should be checking? Or does the absence
of a 'Submitted Jobs Successfully' message, if there should be one, mean
nothing has been submitted?
Robert
Hi,
In LAVA v1, one could declare login commands to be run after logging in and
before starting any of the tests. For example:
"actions": [
{
"command": "deploy_image",
"parameters": {
"image": "https://some-url-to-a-rootfs.ext4.img.gz",
"bootfstype": "ext2",
"login_commands": [
"sudo su"
],
"login_prompt": "login:",
"username": "username",
"password_prompt": "Password:",
"password": "password"
}
},
In this case, "sudo su" is needed to open a regular user session and inherit
the user's environment while also having root privileges.
In LAVA v2, there is no way to do anything like this directly. One
could define a test with inline commands, but this is not ideal. The login
commands are not a test but part of how the job sets up the environment in
which the tests are run - i.e. part of the initial conditions. It's also
quite a convoluted and lengthy way of running some commands, and it relies
on the side effects of that "login commands test" persisting when running
subsequent tests.
So I've made a couple of patches to see how this could be implemented in LAVA
v2 with an extra parameter in auto_login:
https://review.linaro.org/#/q/topic:T5703-login-commands
For example:
- boot:
    method: qemu
    auto_login:
      login_prompt: 'login:'
      username: user
      password_prompt: 'Password:'
      password: password
      login_commands:
      - sudo su
Essentially, this makes auto_login more flexible. At the moment, after logging
in auto_login sets the shell prompt: this is already some kind of hard-coded
login command. Some jobs need to run other things such as "sudo su" to stick
to the same example.
Another login command we've used is "systemctl --failed" to show if any systemd
units (services) failed to load during boot.
Notes from the Gerrit reviews:
* The login commands can't be part of a device definition as they are not
related to the device hardware or the boot configuration. For example, when
running Android, one would not want to run "sudo su" but maybe "setprop ..."
or something else - to be defined in each job.
* The login commands should not be fixed in a given distro / userspace
configuration as each job may need to set up a different initial environment.
For example, some jobs may need to be run with a regular user and would not
use the "sudo su" login command.
* Some documentation and unit tests would need to be added for this to be
merged. This is to first discuss the idea and review the code changes.
Any thoughts? Would it make sense to add this feature or maybe implement it
differently?
Best wishes,
Guillaume
Hi Everyone,
I have a co-worker who wants to use our Kernel CI & Lava Virtual
Machine. He says he wants to boot the VM, log in, and run a command that
downloads a kernel and then tests multiple defconfig's and multiple
versions of the Linux kernel. What is the best tool to do this (lava-ci,
lava-tool, or a different tool)?
Can you point me to some examples of the tool you recommend?
Any help you can offer would be greatly appreciated.
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Hello,
I'm trying to set one timeout per test in a job. To do so I'm declaring one test block per test.
Unfortunately, it seems that only the first timeout declaration is taken into account. Did I miss something in my job definition?
Best regards,
Denis
Dear all,
This is my first post on the mailing list; I hope I'm in the right place.
Using LAVA V2, I'm trying to install packages on the DUT following the
guidelines from
https://validation.linaro.org/static/docs/v2/writing-tests.html#test-defini…
My job looks like this:
metadata:
(...)
install:
  sources:
  - http://<local_network_package_server>/sti
  - http://<local_network_package_server>/all
  - http://<local_network_package_server>/cortexa9hf-neon
  deps:
  - python-pytest
  - python-lxml
  - packagegroup-core-buildessential*
run:
  steps:
  - step1
  - step2
parse:
  pattern: "^(?P<test_case_id>\\w+) RESULT:(?P<result>(pass|fail|unknown))"
  fixupdict:
    FAILED: fail
    SUCCESS: pass
    ABORTED: unknown
Running this test, I get the following error:
<LAVA_TEST_RUNNER>: started<LAVA_TEST_RUNNER>: looking for work in /lava-3715/0/lava-test-runner.conf-1484266027
<LAVA_TEST_RUNNER>: running 0_TC_BENCH_DISK_AIO_STRESS under lava-test-shell...
<LAVA_TEST_RUNNER>: running 0_TC_BENCH_DISK_AIO_STRESS installer ...
/lava-3715/0/tests/0_TC_BENCH_DISK_AIO_STRESS/install.sh: line 5: lava-add-sources: command not found
/lava-3715/0/tests/0_TC_BENCH_DISK_AIO_STRESS/install.sh: line 6: lava-add-sources: command not found
/lava-3715/0/tests/0_TC_BENCH_DISK_AIO_STRESS/install.sh: line 7: lava-add-sources: command not found
Error: OPKG package manager not found in the path.
It seems lava-add-sources is not copied to the target. Do I understand the log correctly?
Best regards,
Denis
Hello Team,
My name is Guru. I am very new to LAVA and very much interested in using
it for auto-deployment and testing of embedded Linux boards.
I tried to set up LAVA for a BBB device. I followed the steps below:
> 1. Installed Debian on a VM machine, plus lava-server and its components
(jessie-backports), 2016.
> 2. Just for understanding purposes I tried to add a KVM job; it loaded
successfully.
> 3. Now I am trying to add the BBB device to LAVA.
> 4. For that I added the bbb device to the dispatcher; find the conf file
below:
> name: beaglebone-black01.conf
> content:
> device_type = beaglebone-black
> hostname = beaglebone-black01
> connection_command = telnet localhost 2003
> hard_reset: /usr/bin/reset.sh
> power_off: /usr/bin/off.sh
> power_on: /usr/bin/on.sh
> Note: I am not using pduclient; I am using my own scripts for the control
commands,
> but it is not working while executing the hard_reset command in LAVA;
see the log for more details.
>
> 5. My current setup is that I control the BBB using a serial-controlled
relay from the VM host machine (Debian).
>
> For that I wrote my own custom Python script to turn the relay on/off
and reset it over serial.
> 6. After that I tried to submit the json test job below. Find my job
definition attached.
> I took the json below as reference:
> https://git.linaro.org/lava-team/lava-functional-tests.git/tree/lava-test-shell/single-node/armmp-bbb-daily.json
> 7. After that I submitted the job; find the job log for more details.
> 8. I have no idea what is going on or what went wrong in my setup.
> Help me out to boot the BBB using LAVA.
Regards,
Guru
On 27 March 2017 at 14:54, Ковалёв Сергей <SKovalev(a)ptsecurity.com> wrote:
> Thank you Neil for you reply.
Please keep the list in CC:
>
>> Compare with: https://staging.validation.linaro.org/scheduler/job/168802
>
> I have tried https://staging.validation.linaro.org/scheduler/job/168802/definition but iPXE got stuck on it. I have an amd64 machine with UEFI.
"stuck" ? This is a standard amd64 Debian kernel with modules and
initramfs. It is already UEFI-aware. Does the machine run Debian
natively? Is there a Debian kernel you can use in your LAVA
submissions (with modules and ramdisk)?
>> First step is to replace these files with images which work on the x86 DUT on staging.validation.linaro.org
>
> I perform kernel development with my colleagues so I have to load our kernels.
Yes, however, to debug what is going on, you should switch to known
working files so that you have a valid comparison with known working
test jobs. Once debugging has produced some results, then switch back
to the locally built kernels. Change one thing at a time.
>> That just isn't going to work. The initrd needs to come via TFTP but this is an absolute path.
>
> The 'initrd' does come via TFTP. In the context block I supply additional kernel boot options.
Your original email quoted:
context:
extra_kernel_args: initrd=/rootfs.cpio.gz root=/dev/ram0
rootfs.cpio.gz does not exist when the machine boots. The initramfs
will have been downloaded by TFTP and loaded directly into memory, it
simply does not exist as a cpio.gz any longer. /dev/ram0 shouldn't be
needed with modern kernels. At best, it would seem that these options
are ignored.
Debian initramfs log:
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... done.
Warning: fsck not present, so skipping unknown file system
mount: can't find /root in /etc/fstab
done.
Begin: Running /scripts/local-bottom ... done.
Begin: Running /scripts/init-bottom ... mount: mounting /dev on
/root/dev failed: No such file or directory
mount: mounting /dev on /root/dev failed: No such file or directory
done.
mount: mounting /run on /root/run failed: No such file or directory
run-init: current directory on the same filesystem as the root: error 0
Target filesystem doesn't have requested /sbin/init.
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
No init found. Try passing init= bootarg.
BusyBox v1.22.1 (Debian 1:1.22.0-19) built-in shell (ash)
Enter 'help' for a list of built-in commands.
Matched prompt #5: \(initramfs\)
> This boot option was identified before the effort to automate the process with LAVA. Without it we saw a kernel panic. With it we successfully load the kernel and rootfs (from Buildroot). Maybe at Linaro you embed those boot options at compile time?
No, we do not embed anything in V2 (it's one of the key changes from
V1: we don't hide magic like that anymore).
The files were prepared with:
https://git.linaro.org/lava-team/refactoring.git/tree/scripts/x86_64-nfs.sh
You can also see the build log for the original Debian kernel package
if relevant.
https://tracker.debian.org/pkg/linux-signed
https://buildd.debian.org/status/fetch.php?pkg=linux-signed&arch=amd64&ver=…
Running x86_64-nfs.sh in an empty directory will provide access to the
config of the kernel itself as well as the initramfs and supporting
tools.
It's possible these context arguments are hiding some other problem in
the kernel but, as described so far, the options seem to make no
sense.
The command line used in our tests is simply:
ip=dhcp console=ttyS0,115200n8 lava_mac={LAVA_MAC}
(where LAVA_MAC does not need to be defined for these devices.)
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hello.
I'm trying to start LXC Debian hacking sessions on our V2 LAVA server.
This is the related configuration:
http://pastebin.com/index/DNGpJfc6
And I'm mostly doing what's in here:
https://git.linaro.org/lava-team/hacking-session.git
The problem I'm facing is that inside a script the environment seems to be broken, so there is no way to copy to ~/.ssh.
Regarding the environment I get this output:
$ echo $HOME
$ echo $USER
$ cat /etc/passwd | grep root
root:x:0:0:root:/root:/bin/bash
$ ls -al /root
total 16
drwx------ 2 root root 4096 Dec 16 15:33 .
drwxrwxrwx 19 root root 4096 Dec 23 13:18 ..
-rw-r--r-- 1 root root 570 Jan 31 2010 .bashrc
-rw-r--r-- 1 root root 148 Aug 17 2015 .profile
$ env
TESTRUN_ID=1_hacksession-debian
SHLVL=4
OLDPWD=/
container=lxc
_=defs/hacksession-debian/setup_session
COLUMNS=80
PATH=/lava-248/1/../bin:/usr/local/bin:/usr/local/sbin:/bin:/usr/bin:/usr/sbin:/sbin
LAVA_RESULT_DIR=/lava-248/1/results/1_hacksession-debian-1482499502
LANG=C
LC_ALL=C.UTF-8
PWD=/lava-248/1/tests/1_hacksession-debian
LINES=24
If I mimic the LAVA LXC machine creation commands (lxc-create) and attach
to the machine, I get a sane environment.
Is this expected behavior?
BR,
Rafael Gago
Hi,
I've installed LAVA and created 'qemu' device type.
$ sudo lava-server manage add-device-type '*'
$ sudo lava-server manage add-device --device-type qemu qemu01
Then, I downloaded an example of yaml to submit a job for the qemu image.
$ wget https://staging.validation.linaro.org/static/docs/v2/examples/test-jobs/qem… ./
$ lava-tool submit-job http://<name>@localhost qemu-pipeline-first-job.yaml
The error occurs while running 'image.py'
(http://woogyom.iptime.org/scheduler/job/15):
Traceback (most recent call last):
File "/usr/bin/lava", line 9, in <module>
load_entry_point('lava-tool==0.14', 'console_scripts', 'lava')()
File "/usr/lib/python2.7/dist-packages/lava/tool/dispatcher.py", line
153, in run
raise SystemExit(cls().dispatch(args))
File "/usr/lib/python2.7/dist-packages/lava/tool/dispatcher.py", line
143, in dispatch
return command.invoke()
File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py",
line 216, in invoke
job_runner, job_data = self.parse_job_file(self.args.job_file,
oob_file)
File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py",
line 265, in parse_job_file
env_dut=env_dut)
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/parser.py",
line 165, in parse
test_action, counts[name])
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/parser.py",
line 66, in parse_action
Deployment.select(device, parameters)(pipeline, parameters)
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/logical.py",
line 203, in select
willing = [c for c in candidates if c.accepts(device, parameters)]
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/actions/deploy/image.py",
line 116, in accepts
if 'image' not in device['actions']['deploy']['methods']:
KeyError: 'actions'
It seems no 'methods' key is found under the actions->deploy block when
parsing the yaml file, but I'm not sure whether this error means my yaml
usage is wrong or something else.
Best regards,
Milo
Hi Williams,
I want to get a LAVA job's submitter via lava-tool.
When I use the command "lava-tool job-details", the submitter info is displayed as a "submitter_id". How can I convert the submitter id to the submitter username?
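For reference, with shell access to the server the mapping can be done by
hand via the Django shell (a sketch; this assumes the standard Django auth
User model, which LAVA uses), but I would prefer something available
through lava-tool:

# run inside: lava-server manage shell
from django.contrib.auth.models import User

submitter_id = 42  # the id reported by "lava-tool job-details"
print(User.objects.get(pk=submitter_id).username)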
Thanks.
gitweb (which depends on apache2) and LAVA are installed on the same host,
but port 80 is used by LAVA, so gitweb cannot be visited with a browser.
So I want to change LAVA's port to another one, such as 8088, but after
changing the file
/etc/apache2/sites-enabled/lava-server.conf, LAVA no longer works.
Does anyone know how to make lava-server use another port?
Btw, I cannot find the "DocumentRoot" of lava-server. The config file
defines "DocumentRoot" as "/usr/share/lava-server/static/lava-server/",
but I cannot see a default index.html there. (I only see the template file
in /usr/lib/python2.7/dist-packages/lava_server/templates/index.html.)
Could someone tell me where the lava-server's default index page is?
--
王泽超
TEL: 13718389475
北京威控睿博科技有限公司 <http://www.ucrobotics.com>
Hi Everyone,
I am trying to set up a standalone Lava V2 Server by following the
instructions on the Linaro website and so far things have gone smoothly.
I have Lava installed, a superuser created and I can access the
application through a web browser. But, I have the following issues:
ISSUE #1:
- When I tried to submit a simple Lava V2 test job, I got an error
message stating that the "beaglebone black" device type is not
available.
- I found the directory where the .jinja2 files were stored including
the beaglebone-black.jinja2 file, but regardless of what I tried, I
couldn't get the web application to see the device type definitions.
- It seems like the application isn't pointing to the directory where
those device type files are stored.
- What do I need to do to make the Lava Server "see" those device type
files?
ISSUE #2:
- When I tried to submit a job, I pasted a small .yaml file and the
validation failed because it didn't recognize the data['run'] in the
job. I tried a few others and then I tried a V1 .json file and it
validated just fine.
- What do I have to do to allow Lava to accept V2 .yaml files? Am I
missing something simple?
As always, I greatly appreciate any feedback you may have to help me
out.
Thank you in advance!
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Hi Williams,
The submitted time shown is 8 hours behind my local time. How do I change the job submitted time displayed on the LAVA web pages?
I have tried modifying the TIMEZONE setting in "/usr/lib/python2.7/dist-packages/lava_server/settings/common.py" and "/usr/lib/python2.7/dist-packages/django/conf/global_settings", and then restarted the LAVA server, but nothing seemed to change.
Thanks.
Hello.
I have configured a LAVA server and I set up a local Django account to start configuring things:
sudo lava-server manage createsuperuser --username <user> --email=<mail>
Then I want to add LDAP support by adding the relevant fields to /etc/lava-server/settings.conf:
"AUTH_LDAP_SERVER_URI": "ldaps://server.domain.se:636",
"AUTH_LDAP_BIND_DN": "CN=company_ldap,OU=Service Accounts,OU=Resources,OU=Data,DC=domain,DC=se",
"AUTH_LDAP_BIND_PASSWORD": "thepwd",
"AUTH_LDAP_USER_ATTR_MAP": {
"first_name": "givenName",
"email": "mail"
},
"DISABLE_OPENID_AUTH": true
I have restarted both apache2 and lava-server.
I was expecting to get a Sign In page like this one:
https://validation.linaro.org/static/docs/v1/_images/ldap-user-login.png
Unfortunately I'm not familiar with either django (or Web development in general) or LDAP, and I don't know how to debug this. I have tried to grep for ldap|LDAP in /var/log/lava-server but nothing pops up.
Unfortunately I couldn't find a way to browse the mailing list for previous answers. GMANE search doesn't work today.
How should I proceed?
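One low-level check I can think of is verifying the bind itself with
python-ldap, outside of Django (values copied from the settings above;
django-auth-ldap uses python-ldap underneath, so a failure here would
point at the server or credentials rather than at LAVA):

import ldap

conn = ldap.initialize("ldaps://server.domain.se:636")
# simple_bind_s raises an ldap.LDAPError subclass on failure.
conn.simple_bind_s(
    "CN=company_ldap,OU=Service Accounts,OU=Resources,OU=Data,DC=domain,DC=se",
    "thepwd")
print("bind OK")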
I have a multi-node test involving 13 roles that is no longer syncing properly after upgrading to 2016.11 this morning. It seems that 2 or 3 nodes end up waiting for a specific message while the other ones finish the message and move on to the next. Looking at the dispatcher log, I don't see any errors, but it's only logging that it's sending to some of the nodes. For example, I see a message like this for the nodes that work in a run:
2016-11-10 13:10:37,295 Sending wait messageID 'qa-network-info' to /var/lib/lava/dispatcher/slave/tmp/7615/device.yaml in group 2651c0a0-811f-4b77-bc07-22af31744fe5: {"/var/lib/lava/dispatcher/slave/tmp/7619/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7613/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7623/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7620/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7611/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7621/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7622/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7617/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7618/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7614/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7615/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7616/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7612/device.yaml": {}}
2016-11-10 13:10:37,295 Sending wait response to /var/lib/lava/dispatcher/slave/tmp/7615/device.yaml in group 2651c0a0-811f-4b77-bc07-22af31744fe5: {"message": {"/var/lib/lava/dispatcher/slave/tmp/7619/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7613/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7623/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7620/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7611/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7621/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7622/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7617/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7618/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7614/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7615/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7616/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7612/device.yaml": {}}, "response": "ack"}
For the nodes that get stuck, there is no message like above.
All of the nodes are qemu type, all on the same host. The nodes that fail are not consistent, but there always seem to be 2 or 3 that fail in every run I tried.
Is there anything I can look at here to figure out what is happening?
--
James Oakley
james.oakley(a)multapplied.net
[Moving to lava-users as suggested by Neil]
On 11/07/2016 03:20 PM, Neil Williams (Code Review) wrote:
> Neil Williams has posted comments on this change.
>
> Change subject: Add support for the depthcharge bootloader
> ......................................................................
>
>
>
> Patch Set 3:
>
> (1 comment)
>
> https://review.linaro.org/#/c/15203/3/lava_dispatcher/pipeline/actions/depl…
>
> File lava_dispatcher/pipeline/actions/deploy/tftp.py:
>
> Line 127: def _ensure_device_dir(self, device_dir):
>> Cannot say that I have fully understood it yet. Would it be correct
>> if the
>
> The Strategy classes must not set or modify anything. The accepts
> method does some very fast checks and returns True or False. Anything
> which the pipeline actions need to know must be specified in the job
> submission or the device configuration. So either this is restricted
> to specific device-types (so a setting goes into the template) or it
> has to be set for every job using this method (for situations where
> the support can be used or not used on the same hardware for
> different jobs).
>
> What is this per-device directory anyway and how is it meant to work
> with tftpd-hpa which does not support configuration modification
> without restarting itself? Jobs cannot require that daemons restart -
> other jobs could easily be using that daemon at the same time.
So each firmware image containing Depthcharge will also contain
hardcoded values for the IP of the TFTP server, and for the paths of a
cmdline.txt file and a FIT image. The FIT image contains a kernel and
a DTB file, and optionally a ramdisk.
Because the paths are set when the FW image is flashed, we cannot use
the per-job directory. Thus we add a parameter to the device that is to
be set in the device-specific template of Chrome devices. If that
parameter is present, then a directory in the root of the TFTP files
tree will be created with the value of that parameter.
The TFTP server doesn't need to be restarted because its configuration
is left unchanged, we just create a directory where depthcharge will
look for the files.
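So _ensure_device_dir amounts to little more than this (a simplified
sketch of the intent, not the exact patch):

import os

def _ensure_device_dir(device_dir):
    # Create the fixed per-device directory in the TFTP root where the
    # flashed Depthcharge expects cmdline.txt and the FIT image.
    # tftpd-hpa needs no restart: it serves whatever is on disk.
    if not os.path.isdir(device_dir):
        os.makedirs(device_dir)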
Thanks,
Tomeu
> I think this needs to move from IRC and gerrit to a thread on the
> lava-users mailing list where the principles can be checked through
> more easily.
>
>
Hi everyone,
As I have probably mentioned in previous emails, I'm using the Yocto
Project to generate some Linux images that I want to test using LAVA as
part of continuous integration development.
So far so good: I can submit the job description to LAVA using lava-tool
and it will start the tests. I'm happy so far with all the results.
Now my question is what the correct way to do this would be. Do you think
it is reasonable to have a lava-tool submit-job followed by a waiting step
using lava-tool job-status to report the final build result? Or is there a
nicer way to do this?
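For reference, the variant I'm currently prototyping talks XML-RPC
directly instead of shelling out to lava-tool twice (the URL and token are
placeholders, and scheduler.submit_job/job_status should be double-checked
against your LAVA version):

import time
import xmlrpclib

server = xmlrpclib.ServerProxy("http://user:TOKEN@lava.example.com/RPC2")
job_id = server.scheduler.submit_job(open("job.yaml").read())

status = None
while status not in ("Complete", "Incomplete", "Canceled"):
    time.sleep(30)  # poll rather than busy-wait
    status = server.scheduler.job_status(job_id)["job_status"]
print("job %s finished as %s" % (job_id, status))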
Thanks a lot for your help in advance :)
Best,
Alfonso
By default, a uboot header is automatically added to the ramdisk image.
For bootloaders without INITRD_ATAG support, the ramdisk needs to be
passed on the command line and cannot have the uboot header added.
To enable this feature, add a "ramdisk_raw" option that device files can
set so that a uboot header is not added.
Signed-off-by: Kevin Hilman <khilman(a)baylibre.com>
---
Patch applies on 2016.9
lava_dispatcher/config.py | 1 +
lava_dispatcher/device/bootloader.py | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/lava_dispatcher/config.py b/lava_dispatcher/config.py
index 66a9e70021fa..c91c5634280d 100644
--- a/lava_dispatcher/config.py
+++ b/lava_dispatcher/config.py
@@ -312,6 +312,7 @@ class DeviceSchema(schema.Schema):
uimage_xip = schema.BoolOption(default=False)
append_dtb = schema.BoolOption(default=False)
prepend_blob = schema.StringOption(default=None)
+ ramdisk_raw = schema.BoolOption(default=False)
# for dynamic_vm devices
dynamic_vm_backend_device_type = schema.StringOption(default='kvm')
diff --git a/lava_dispatcher/device/bootloader.py b/lava_dispatcher/device/bootloader.py
index 634d22ef3311..c88fba8937e6 100644
--- a/lava_dispatcher/device/bootloader.py
+++ b/lava_dispatcher/device/bootloader.py
@@ -208,7 +208,7 @@ class BootloaderTarget(MasterImageTarget):
decompress=False)
extract_overlay(overlay, ramdisk_dir)
ramdisk = create_ramdisk(ramdisk_dir, self._tmpdir)
- if self._is_uboot():
+ if self._is_uboot() and not self.config.ramdisk_raw:
# Ensure ramdisk has u-boot header
if not self._is_uboot_ramdisk(ramdisk):
ramdisk_uboot = ramdisk + ".uboot"
--
2.5.0
Hello everyone,
Can you help me on below two questions?
1. I did email notification settings for sending emails after job complete or incomplete.
How can I get whole logs (where are logs?) about email sending process? I need to debug email sending.
2. I want to use script to control device state periodically.
How can I set device to maintenance state using command, like lava-tool command?
Thanks in advance.