Hi Team,
Thanks for your kind support. We are booting *ARM Cortex-M targets* using
a Lauterbach TRACE32 debugger.
http://www.embeddedindia.com/lauterbach-gmbh.html
I would like to know whether the *Lauterbach TRACE32 debugger is supported* in LAVA.
If support is available, please share the supported device-type jinja
file or a reference file.
Please let me know if anything is required from my side.
I am looking forward to your kind support.
thanks
Regards
Nagendra S
Hi Team,
Today I started working with a remote worker. But in the LAVA server admin
page I noticed that the default lava-worker went offline, whereas the remote
worker operations are working fine as expected.
1. Is this expected behavior?
On the command prompt I can see the lava-slave status as Online:
[image: default_worker_onserver_command_line showing_online.png]
Whereas on the web page it is showing Offline:
[image: default_worker_onserver_webpage_status_offline.png]
The devices connected to the default worker are also showing Offline.
2. Can we rename the hostname of a remote worker?
Thanks !!
Regards
Nagendra S
Hi, guys,
I have a question after listening to the talk on the Docker feature for Android testing<https://linarotechdays.sched.com/event/ZZFc/ltd20-103-improved-android-test…>, and the question is a little long.
1. I want to say sorry that I haven't tried it on Android yet, because it is currently not urgent for us to switch from LXC to Docker.
But my real question is related to this feature, at least I think so.
2. The whole story is:
We have a device which uses the "tftpboot (deploy) -> nfs (boot) -> shell (test)" flow for testing, and that's OK.
But now we have a team whose test cases need different logic, with behavior totally different from the current LAVA approach.
There is legacy code on a PC which we want to reuse, so the quicker way for us is to use a docker device, inside which we could do anything to control the device.
What we tried is (a rough sketch follows below):
- Deploy (to: docker)
- Boot (method: docker)
- Test
It's OK; as you see, we could use the connection from boot in the test. But as you know, not all device types accept docker actions.
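In job-definition form, what we tried looks roughly like this (only a sketch; the image, boot command, prompt and test repository are placeholders, not our real values):
actions:
- deploy:
    to: docker
    image: ubuntu:18.04
- boot:
    method: docker
    command: /bin/bash      # placeholder command for the container shell
    prompts:
    - 'root@lava:'          # placeholder prompt
- test:
    definitions:
    - repository: https://example.com/legacy-control-scripts.git   # placeholder
      from: git
      path: control-device.yaml
      name: legacy-control
With this, the test shell runs inside the container and our legacy PC-side code can drive the device from there.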
Then I found the following option after you gave the presentation in the Linaro tech share: use docker test.
3. I tried the v2020.02 release; it's OK to use the following with any device type in our job to do what we need:
- test:
    docker:
      image: ubuntu:18.04
Although some Android-related log lines are printed, like "- ANDROID_SERIAL='xxx'", I can bear that.
4. Things broke when you improved this feature in v2020.04; you added the following to the pipeline:
WaitDeviceBoardID(self.get_board_id())
Then the pipeline has to wait for a udev event, but in our case we don't have one; we control the device via a remote telnet connection, so the job hangs.
5. So, my question here is: what's the roadmap of this docker test action?
a) Is it just for the Android scenario?
b) Why can't we make it work for more common scenarios?
My opinion here is: with this docker test action, do we even still need the old "docker deploy + docker boot"?
c) For this docker test, most of the docker options are hard-coded. Would it be possible to expose them somewhere the user can configure? For example, if I want to add "-t", I can't control that, even though the interface is defined in "DockerRun" with "def tty(self):"; the same goes for things like additional docker bind mounts, etc.
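To make c) concrete, I mean something like the following (purely imagined syntax, not something LAVA supports today):
- test:
    docker:
      image: ubuntu:18.04
      # imagined options so the user can pass extra "docker run" arguments
      tty: true                              # i.e. add "-t" to docker run
      bind_mounts:
      - /opt/legacy-tools:/opt/legacy-tools  # extra volume the tests need
That way the hard-coded DockerRun behaviour could stay as the default while still letting jobs tweak it.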
6. BTW, a side question related to Android (since we will definitely need to switch from LXC to Docker someday, I'm taking this chance to ask).
What will happen if there is an adbd restart or a PDU reboot during a "docker test"? You add "--device" for the USB bus (I'm not sure, I just assume it works the same way as LXC), but during an adbd restart or PDU reboot the USB bus will surely change. I don't see that Docker has the ability to renew the "--device". How would that work? Sorry again, I haven't tried it; I just want to know the mechanism.
7. Anyway, what I currently care about is the roadmap of "docker test".
If your final direction is to make "docker test" more generic, then we are OK to STICK ON 2020.02 for now. If it is just for Android and "WaitDeviceBoardID" has to be there without any user control, then we will give up on this solution and try to find another way to reuse the device in LAVA.
Your direction matters for our next step!
8. Finally,
any other suggestions for the scenario I described in item 2?
Regards,
Larry
Hi,
I have a question related to the u-boot boot action's retry settings. Our job is:
- boot:
    failure_retry: 2
    namespace: test_suite_1
    connection-namespace: burning-uboot_1
    method: u-boot
    commands: nfs
    auto_login:
      login_prompt: '(.*) login:'
      username: root
    prompts:
    - 'root@(.*):~#'
    timeout:
      minutes: 10
1. From the code:
"UBootAction" extends RetryAction, while in its internal pipeline there is an action named "UBootRetry" which also extends RetryAction.
If we define a retry, when an exception happens in the RetryAction, it will first cause "UBootRetry" to retry, and then "UBootAction" to retry again.
This sounds confusing; I wonder for what reason we need a nested retry here?
2. In fact, the real issue for us is the following:
Let's suppose we define failure_retry: 2. Our situation is:
1) The first boot times out because of some random blocking issue.
2) Then it starts "Retrying: 4.4 uboot-retry (599 sec)", but times out again.
3) Then it starts "Retrying: 4 uboot-action (599 sec)", but times out again.
4) Then it starts "Retrying: 4.4 uboot-retry (599 sec)"; this time we get a lucky boot, but before we can be happy, it finishes the last action "export-device-env" in uboot-retry. Then it looks like the "UBootAction" timeout resumes, and the lucky boot becomes useless even though the boot in fact succeeded.
The log is:
start: 4.4.5 expect-shell-connection (timeout 00:07:23) [test_suite_1]
Forcing a shell prompt, looking for ['root@(.*):~#']
root@imx8mnevk:~#
expect-shell-connection: Wait for prompt ['root@(.*):~#'] (timeout 00:10:00)
Waiting using forced prompt support. 299.9747439622879s timeout
end: 4.4.5 expect-shell-connection (duration 00:00:00) [test_suite_1]
start: 4.4.6 export-device-env (timeout 00:07:23) [test_suite_1]
end: 4.4.6 export-device-env (duration 00:00:00) [test_suite_1]
uboot-action timed out after 727 seconds
end: 4.4 uboot-retry (duration 00:02:07) [test_suite_1]
I'm not sure, but it looks like this: for the second "uboot-action" there are two "uboot-retry" runs inside it because of the retry, so when the "uboot-action" timeout resumes, the time difference becomes less than 0, which directly raises the timeout? Is this a bug or am I misunderstanding it?
duration = round(action_max_end_time - time.time())
if duration <= 0:
    signal.alarm(0)
    parent.timeout._timed_out(None, None)
Any suggestions for this?
Hi,
I have some questions about the new adb/fastboot support in LAVA 2020.02 described
in the recent Tech Days LAVA presentation [1]. The main initial use case is an Android boot test,
before moving on to running Android tests. Android is AOSP 9/10.
[1] https://connect.linaro.org/resources/ltd20/ltd20-304/
Taking the boot test use case, I was looking at Antonio's Example 1 from his presentation
for fastboot deploy, boot and test. I found documentation for test in the 2020.02 release notes but
nothing I could see for deploy or boot.
In the LXC approach a typical job definition would have had target and host namespaces,
with deploy and boot for the host namespace to create and boot the LXC container. Looking at
the Example 1 deploy from the presentation, it looks like that is largely a target fragment, as
it contains the image binaries etc. A host deploy is no longer required, as the Docker
container is now created outside LAVA.
Similarly, the Example 1 boot looks like a target fragment, and a host boot fragment is not
required as LAVA simply needs to run the specified Docker container. Then finally, in the
Example 1 test the scope is the specified Docker container, so no namespace is required.
Is my interpretation of the Example 1 slides correct? I tried some fragments that were causing
fundamental errors, which prompted me to check here. Although, as is often the case, writing it out
helps you get it straighter in your mind.
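So, writing it out: my reading of Example 1 is roughly the following (the image names, URLs and test definition are placeholders rather than values from the slides, and I may well have the docker keys in the wrong places):
actions:
- deploy:
    to: fastboot
    docker:
      image: my-adb-fastboot-image          # placeholder; an image providing adb/fastboot
    images:
      boot:
        url: https://example.com/aosp/boot.img   # placeholder
- boot:
    method: fastboot
    docker:
      image: my-adb-fastboot-image          # placeholder
- test:
    docker:
      image: my-adb-fastboot-image          # placeholder
    definitions:
    - repository: https://example.com/android-boot-test.git   # placeholder
      from: git
      path: boot-test.yaml
      name: android-boot-test
i.e. no host namespace at all, with the Docker image named in each action instead.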
If you have reasons to want to control fastboot for the flashing on the host, is that possible?
For example, if you had the host-side process scripted.
Does the fastboot Docker OS need to be specified?
I'm running 2020.02 on the Worker via lava-docker, with Docker support within the Worker
container by sharing the host Docker socket to gain access to the Docker daemon.
Regards
Steve
Hi,
I'm trialling the fastboot support in Docker introduced in LAVA v2020.02 and getting a fundamental job
error about the deploy action of the job definition. I've checked against the examples from the recent
Linaro Tech Day LAVA presentation but I can't see the source of the error. Could someone familiar with
this new support please take a look at the job definition? I'm thinking it must be something obvious.
Error snippet:
error_msg: None of the deployment strategies accepted your deployment parameters, reasons given: overlay:
"to" parameter is not "overlay" docker: 'docker' not in the device configuration deploy methods download: "to"
parameter is not "download" qemu-nfs:
Full error:
https://lava.genivi.org/scheduler/job/718
Job definition:
https://lava.genivi.org/scheduler/job/718/definition
One possible cause I can think of is that the LAVA Worker is running 2020.02, whilst the Server is running 2019.03+stretch.
My assumption is that job definition parsing occurs on the dispatcher, but maybe that is not correct? The Server will be
upgraded to match the Worker of course, but we took a two-step approach whilst we first looked into Android support.
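For what it's worth, my reading of the error snippet is that the dispatcher is looking for a docker entry under the device configuration's deploy methods, i.e. something roughly like this in the rendered device dictionary (a sketch of my understanding, not my actual device configuration):
actions:
  deploy:
    methods:
      fastboot:
      docker:
so perhaps the device-type template rendered by the older Server simply does not advertise docker as a deploy method, which would tie in with the version mismatch above.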
Thanks for any help,
Steve
Hi Team,
I have recently installed the LAVA server (master) on my Debian machine.
My goal is to configure lava-server with a *master and multiple workers*.
*LAVA server: Master*
*[PFA] Debian configuration*:
root@LavaServer:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
Now I am trying to set up multiple workers on other machines:
Q: *Can I configure/install the worker on an operating system other
than Debian?*
If not, please describe the process or share a useful doc.
Thanks
Regards
Nagendra S
Hi,
Initially we plan to connect 5 boards to my lava-master and
lava-dispatchers.
Please can someone let me know the server configurations for this setup, as
mentioned below?
a. Can you let me know the LAVA master server configuration?
(Processor/DDR/memory etc. configuration details)
b. Can you let me know the dispatcher server configuration?
(Processor/DDR/memory etc. configuration details) (I guess the
dispatcher server configuration may be lower than the master server
configuration.)
c. PDU (I guess initially we plan to connect 5 targets). (If possible,
please can you share the Amazon link to order?)
d. Serial port hub (I guess initially we plan to connect 5 targets).
(If possible, please can you share the Amazon link to order?)
Regards,
Koti
Hi, there,
Is there somewhere in LAVA where it is possible to find out who cancelled a job? Or whether it was cancelled with lavacli or manually?
Or the detailed reason why the job was cancelled?
We are struggling to find out why our jobs are sometimes cancelled; we are not sure whether someone did it carelessly or whether our external program did it by chance, especially since many people use the same master.
Regards,
Lary
Sometimes, for some devices, when LAVA enters "/lava-55904/bin/lava-test-runner /lava-55904/0",
it prints:
-sh: /lava-55904/bin/lava-test-runnera-55904/0: No such file or directory
It looks like some characters go missing.
We increased the test_character_delay, which seems to help, but I'm interested in the details, so regarding the following, which you mention in the code:
What exactly does "slow serial problems (iPXE)" mean? Could you explain more about it, or point me to any reference material I could look at?
Then I could know: yes, it's exactly the same problem I have.
>>> Extends pexpect.sendline so that it can support the delay argument which allows a delay
between sending each character to get around slow serial problems (iPXE).
pexpect sendline does exactly the same thing: calls send for the string then os.linesep.
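For reference, the delay we tuned is the one exposed through the device configuration; rendered, it ends up looking roughly like this (the values are just what we picked, and I haven't checked the exact unit the delay uses):
character_delays:
  boot: 30    # applied when sending boot/bootloader commands
  test: 100   # applied when sending test shell commands (the one we raised)
Raising the test value slows down sending lines like the lava-test-runner command above enough for the serial console to keep up.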