Hello,
I ran the same job on staging (debian buster) and it's working fine:
https://staging.validation.linaro.org/scheduler/job/266192
Something is wrong on your system; it's difficult to tell what without more details.
Could you send the full logs?
Rgds
On Tue, 14 Jan 2020 at 07:35, dhanu msys <dhanuskd.palnati(a)gmail.com>
wrote:
> Hi Remi,
>
> Can you please check the logs and let me know what changes I need to
> make to run the LAVA QEMU ARM-based test jobs?
>
> Thanks & Regards,
> Dhanunjaya. P
>
>
> On Mon, Dec 23, 2019 at 5:47 PM dhanu msys <dhanuskd.palnati(a)gmail.com>
> wrote:
>
>> Hi Remi,
>>
>> Could you check the log files and job-related configurations mentioned
>> below, along with the job failure logs?
>>
>> Thanks & Regards,
>> Dhanunjaya. P
>>
>>
>> On Thu, Nov 28, 2019 at 6:01 PM dhanu msys <dhanuskd.palnati(a)gmail.com>
>> wrote:
>>
>>> Hi Remi,
>>>
>>> Here I have attached the test job definition, test summary details, and
>>> QEMU device configuration file, along with the device template for the
>>> qemu arm64 architecture. Please find them attached.
>>>
>>> Regards,
>>> Dhanunjaya. P
>>>
>>>
>>> On Tue, Nov 26, 2019 at 5:19 PM dhanu msys <dhanuskd.palnati(a)gmail.com>
>>> wrote:
>>>
>>>> Hi Remi,
>>>>
>>>> Here is the host information:
>>>>
>>>> root@Dhanu:~# uname -a
>>>> Linux Dhanu 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u1 (2019-09-20)
>>>> x86_64 GNU/Linux
>>>>
>>>> root@Dhanu:~# lsb_release -a
>>>> No LSB modules are available.
>>>> Distributor ID: Debian
>>>> Description: Debian GNU/Linux 10 (buster)
>>>> Release: 10
>>>> Codename: buster
>>>>
>>>> root@Dhanu:~# df -h
>>>> Filesystem Size Used Avail Use% Mounted on
>>>> udev 1.9G 0 1.9G 0% /dev
>>>> tmpfs 383M 18M 365M 5% /run
>>>> /dev/sda1 289G 23G 252G 9% /
>>>> tmpfs 1.9G 375M 1.6G 20% /dev/shm
>>>> tmpfs 5.0M 4.0K 5.0M 1% /run/lock
>>>> tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
>>>> tmpfs 383M 5.7M 377M 2% /run/user/1000
>>>>
>>>> I have also attached the CPU info and memory info for
>>>> your reference.
>>>>
>>>> Regards,
>>>> Dhanunjaya. P
>>>>
>>>>
>>>> On Tue, Nov 26, 2019 at 3:17 PM Remi Duraffort <
>>>> remi.duraffort(a)linaro.org> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>>
>>>>>
>>>>>> https://staging.validation.linaro.org/scheduler/device/staging-qemu03
>>>>>>
>>>>>>
>>>>> This should work, unless your system is way too old. What host are
>>>>> you using?
>>>>>
>>>>>
>>>>> Rgds
>>>>>
>>>>> --
>>>>> Rémi Duraffort
>>>>> LAVA Architect
>>>>> Linaro
>>>>>
>>>>
--
Rémi Duraffort
LAVA Architect
Linaro
Hello, all,
Recently I heard that LAVA will drop stretch support starting with the 2020.01 release, but I couldn't find any related information. Is it true?
I also want to know: if I stay on stretch and keep the 2019.12 release just for the slave, but later upgrade only the master to, for example, the 2020.06 release on buster,
will a 2019.12 slave on stretch be compatible with a 2020.06 master on buster (just an example version)? Can you guarantee that?
Best Regards,
David
Hi there,
We have a job that does some stress testing; the final log is about 270 MB.
We use `lavacli -i production jobs logs --raw 30686` to fetch the log, and it gives the following:
```
$ time lavacli -i production jobs logs --raw 30686
/usr/lib/python3/dist-packages/lavacli/__init__.py:101: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
config = yaml.load(f_conf.read())
Unable to call 'jobs.logs': <ProtocolError for http://larry.shen:xxx@xxx.nxp.com/RPC2: 500 ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))>
real 0m39.006s
user 0m0.989s
sys 0m0.191s
```
I checked the gunicorn log, and it gives me:
[2020-01-08 08:11:12 +0000] [188] [CRITICAL] WORKER TIMEOUT (pid:11506)
Any suggestions?
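One thing worth checking is the gunicorn worker timeout: streaming a 270 MB log can easily take longer than gunicorn's default 30-second timeout, which would match both the WORKER TIMEOUT line above and the ~39 s failure. A minimal sketch of raising it, assuming a Debian package install where the gunicorn options live in /etc/default/lava-server-gunicorn (both the path and the TIMEOUT variable name are assumptions; check how your gunicorn is actually launched):
```
# Sketch only -- how the timeout is raised depends on how gunicorn is started.
# The relevant knob is gunicorn's own --timeout option (default: 30 seconds).
# Path and TIMEOUT variable name below are assumptions; verify against your
# unit file / wrapper script before applying.
echo 'TIMEOUT="300"' | sudo tee -a /etc/default/lava-server-gunicorn
sudo systemctl restart lava-server-gunicorn
```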
I tried updating my lava-docker-compose repo to the latest but hit a
problem mounting the overlay filesystem in qemu as of 2019.11 (2019.10
and earlier work; 2019.11 and later do not).
To reproduce, clone https://github.com/danrue/lava-docker-compose/ and
run "make"; observe the qemu health-check time out.
The health check job can be found at
https://github.com/danrue/lava-docker-compose/blob/master/server-overlay/et…
The problem seems to occur when it tries to mount the overlay
filesystem for the test job. I see in the release notes that the
containers moved to buster in 2019.11 - related?
Raw output below.
Thanks,
Dan
/ # mkdir /lava-2
mkdir /lava-2
mount /dev/disk/by-uuid/16786250-75bd-4703-9d57-7cfa618c725e -t ext2 /lava-2
/ # mount /dev/disk/by-uuid/16786250-75bd-4703-9d57-7cfa618c725e -t ext2 /lava-2
mount /dev/disk/by-uuid/16786250-75bd-4703-9d57-7cfa618c725e -t ext2 /lava-2
mount: mounting /dev/disk/by-uuid/16786250-75bd-4703-9d57-7cfa618c725e
on /lava-2 failed: No such file or directory
ls -la /lava-2/bin/lava-test-runner
/ # ls -la /lava-2/bin/lava-test-runner
ls -la /lava-2/bin/lava-test-runner
ls: /lava-2/bin/lava-test-runner: No such file or directory
Using /lava-2
export SHELL=/bin/sh
/ # export SHELL=/bin/sh
export SHELL=/bin/sh
. /lava-2/environment
/ # . /lava-2/environment
. /lava-2/environment
/bin/sh: .: can't open '/lava-2/environment'
/lava-2/bin/lava-test-runner /lava-2/0
/ # /lava-2/bin/lava-test-runner /lava-2/0
Test shell timeout: 10s (minimum of the action and connection timeout)
/lava-2/bin/lava-test-runner /lava-2/0
/bin/sh: /lava-2/bin/lava-test-runner: not found
Hi!
I'm trying to implement a new method in LAVA. As a reference, I took fastboot.
Now I can run a job with my newly implemented method.
I want to work with an LXC container, as with fastboot. But the same job that works for fastboot with LXC doesn't create the LXC container and skips the image validation step.
Only the methods described in /lava_dispatcher/actions/deploy/qdl.py are started.
What am I missing?
Kind regards,
Ilya
Hi Team,
I have tried to run the QEMU-related jobs for arm64 with the configurations
below, and they throw an error.
Can you please let me know what infrastructure is missing for this job?
Thanks & Regards,
Dhanunjaya. P
Hi,
I'm trying to change the way LAVA searches for and passes USB devices
to LXC containers. Current code depends on arbitrary variables present
in the static_info dictionary that is part of the device dictionary.
This seems to be a problem for some users [1]. So the proposal was
made to get rid of the arbitrary variables entirely and use udev
variables (starting with ID_). The change was pretty simple to
implement but when I started changing the docs I realized there were
other classes of devices supported with static_info. Docs list ACME
Cape that can be connected over LAN [2]. I'm not aware of any users of
this code but I don't want to break the feature if there is anyone
using it. So if you are a LAVA user who connects an ACME Cape using
static_info, please reply to this thread. If I don't hear any replies, I
will remove this support.
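For reference, the udev ID_ properties the proposed matching would rely on can be listed like this (a sketch; /dev/ttyUSB0 is just an example device path, and the properties available vary by device):
```
# List the udev ID_* properties of an attached USB device; these would
# replace the arbitrary static_info keys (/dev/ttyUSB0 is only an example).
udevadm info --query=property --name=/dev/ttyUSB0 | grep '^ID_'
```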
[1] https://git.lavasoftware.org/lava/lava/issues/335
[2] https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#user-c…
Best Regards,
milosz
What kind of command do you run to flash over USB?
On Thu, 28 Nov 2019 at 13:26, dhanu msys <dhanuskd.palnati(a)gmail.com>
wrote:
> Hi Remi,
>
> We were flashing through a USB OTG cable.
>
> The USB deploy method is supported by LAVA, right?
>
> Regards,
> Dhanunjaya
>
> On Thu, Nov 28, 2019, 17:52 Remi Duraffort <remi.duraffort(a)linaro.org>
> wrote:
>
>> Hello,
>>
>> You should give more information if you want a proper answer.
>>
>> How do you flash the board? Is it fully automatic?
>> Is this method already supported by LAVA?
>>
>>
>> Rgds
>>
>> On Thu, 28 Nov 2019 at 07:48, dhanu msys <dhanuskd.palnati(a)gmail.com>
>> wrote:
>>
>>> Hi Team,
>>>
>>> How can I deploy an RTOS-based application image on the target?
>>>
>>> Notes:
>>> Target HW: STM32F429I-DISC1
>>> RTOS: open-source RTOS
>>> Application image: RTOS + application-specific code
>>>
>>> We are able to flash the developed application firmware (.out/.bin)
>>> through the IAR IDE via USB, so we plan to make use of LAVA to run test
>>> applications automatically.
>>>
>>> So, can you please provide some references for deploying this test
>>> scenario in LAVA?
>>>
>>> Thanks & Regards,
>>> Dhanunjaya. P
>>> _______________________________________________
>>> Lava-users mailing list
>>> Lava-users(a)lists.lavasoftware.org
>>> https://lists.lavasoftware.org/mailman/listinfo/lava-users
>>
>>
>>
>> --
>> Rémi Duraffort
>> LAVA Architect
>> Linaro
>>
>
--
Rémi Duraffort
LAVA Architect
Linaro
Hello everyone,
Summary:
We would like to drop support for stretch in the (near) future.
Details:
Currently, lava-server and lava-dispatcher are supported on both Debian
stretch and Debian buster.
Since LAVA 2019.11, the lava-server and lava-dispatcher docker images are
based on Debian buster, not stretch.
Unit tests still run on stretch, buster, and bullseye, but the integration
tests (mainly lavafed and meta-lava) use the docker containers, so they
run only on buster.
Dropping support for stretch will:
* simplify testing
* allow us to use more recent versions of Python and Django
* remove some issues with old dependencies in stretch
But before dropping support, we need to ensure that users/admins have had
some time to migrate to buster or to a docker-based installation.
In an ideal world, I would drop support for stretch on the 1st of
January 2020. But please reply to this mail so we can decide on the right
date together.
Thanks
--
Rémi Duraffort
LAVA Architect
Linaro