Good morning everyone,
I would like your advice on the following subject: attaching a
fastboot device to an LXC container during a LAVA job.
I am currently running jobs on a device through adb without issues. LAVA
adds udev rules which make it possible for the LXC container to
successfully attach the target.
However, when the device switches into fastboot mode, a "fastboot devices"
command inside the LXC returns nothing. In fastboot mode the USB link of the
device is added to the container, but the device is not listed among the
fastboot devices. On the host, a "fastboot devices" command returns the ID
of the device.
Has anyone here faced this situation before?
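For reference, the relevant part of my device dictionary looks roughly like
this (the base template name and serial numbers are placeholders):

{% extends 'nexus5x.jinja2' %}
{# serial used by adb and fastboot; placeholder value #}
{% set adb_serial_number = '0123456789ABCDEF' %}
{% set fastboot_serial_number = '0123456789ABCDEF' %}
{# device_info drives the udev rules that share the USB device with the LXC #}
{% set device_info = [{'board_id': '0123456789ABCDEF'}] %}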
regards,
Hi Team,
Can we trigger another job/task on completion of one TestJob?
Could you share a reference if this is possible?
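For context, I came across the notification callback below, but I am not sure
whether it is the right mechanism for this (the URL and token are placeholders):

notify:
  criteria:
    status: complete
  callback:
    url: https://ci.example.com/job/follow-up-job/build
    method: POST
    token: my-secret-token
    dataset: minimal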
--
Thanks & Regards
Chetan Sharma
Hi Folks,
I'm experimenting with Multinode for distributing tests across multiple
Android DUTs (with a view to using the CTS sharding option at some point).
The problem right now is that the devices are rebooted to fastboot after the
test, although reboot_to_fastboot: false is specified in the test parameters.
Apparently this parameter is not passed on from the multinode job to the
LxcProtocol. Any idea how to fix this?
I attached a basic test shell definition that demonstrates the problem.
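The relevant parts of the job look roughly like this (device type, container
name, repository URL and test path are changed/placeholders):

protocols:
  lava-multinode:
    roles:
      worker:
        device_type: hikey960
        count: 1
        timeout:
          minutes: 10
  lava-lxc:
    worker:
      name: lxc-cts
      distribution: debian
      release: stretch

actions:
- test:
    role:
    - worker
    definitions:
    - repository: https://git.example.com/test-definitions.git
      from: git
      path: automated/android/cts/cts.yaml
      name: cts
      parameters:
        reboot_to_fastboot: "false"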
A side question here: if I set the count of the worker role to something
larger than 1, one of the job instances stops as Incomplete with "error_msg:
Invalid job data: ["Missing protocol 'lava-lxc'"]
<http://localhost/results/testcase/817>", and the other two time out at
"Multinode wait/sync". Am I missing something here, or is this a limitation
of the multinode/lxc protocol combination?
Thank you,
Karsten
Hello to everyone,
Suppose I am able to retrieve my cyclictest results from the DUT, in
the form of a table.
Am I able to plot these results directly from LAVA?
If yes, are there any examples of how to do that in V2 test definitions?
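For context, my current idea is to record each row of the table as a
measurement so that LAVA can query or chart it later, roughly like this
(run-cyclictest.sh is a placeholder for whatever produces the table as
"name value" pairs):

metadata:
  format: Lava-Test Test Definition 1.0
  name: cyclictest-measurements
  description: "Record cyclictest latencies as LAVA measurements"

run:
  steps:
    # produce one "name value" pair per line (placeholder script)
    - ./run-cyclictest.sh > results.txt
    # record each row with a measurement and units so it can be queried/charted
    - while read name value; do lava-test-case "$name" --result pass --measurement "$value" --units us; done < results.txt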
Thank you,
Zoran
by Ros Dos Santos, Alfonso (CT RDA DS EVO OPS DIA SE 1)
Hi everyone,
in the last few months my team and I have been using LAVA in our
Continuous Integration process for the development of a Yocto Linux
image for some development boards. These boards were not compatible with
the available strategies in LAVA, so we had to improvise a little.
These boards are, however, capable of booting from a USB device. Our idea
was therefore to create a new deployment strategy that downloads the
image file onto a Linux device with a USB OTG port and "exposes" it
using the g_mass_storage kernel module. The OTG USB port is connected
to the USB port of the board under test. For the boot strategy we use
the already existing minimal boot, where we simply power up the device
and let it boot from USB.
We would like to know your thoughts about this idea and whether you see
any value in these changes as a possible contribution.
In the board's device configuration we add the host to which the image is
transferred and exposed:
actions:
  deploy:
    methods:
      usbgadget:
        usb_gadget_host: {{ usb_gadget_host }}
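A test job using this method would then look roughly like this (URLs, the
image name and the patch details are placeholders):

actions:
- deploy:
    to: usbgadget
    os: oe
    image:
      url: http://example.com/images/core-image-minimal.wic.gz
      compression: gz
    patch:
      url: http://example.com/patches/usb-boot.patch
      partition: 0
      dst: /boot/extlinux/extlinux.conf
- boot:
    method: minimal
    prompts:
    - 'login:'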
We developed these changes on top of lava-dispatcher version
2017.7-1~bpo9+1 from the stretch-backports repository.
We also added an option to apply a patch to the image's boot partition
to make it USB-bootable; if the image is already USB-bootable this is
not needed.
Here is our patch with the changes:
---
 .../pipeline/actions/deploy/strategies.py |   1 +
 .../pipeline/actions/deploy/usbgadget.py  | 254 +++++++++++++++++++++
 2 files changed, 255 insertions(+)
create mode 100644 lava_dispatcher/pipeline/actions/deploy/usbgadget.py
diff --git a/lava_dispatcher/pipeline/actions/deploy/strategies.py b/lava_dispatcher/pipeline/actions/deploy/strategies.py
index da1e155..cfd6438 100644
--- a/lava_dispatcher/pipeline/actions/deploy/strategies.py
+++ b/lava_dispatcher/pipeline/actions/deploy/strategies.py
@@ -32,3 +32,4 @@ from lava_dispatcher.pipeline.actions.deploy.lxc import Lxc
from lava_dispatcher.pipeline.actions.deploy.iso import DeployIso
from lava_dispatcher.pipeline.actions.deploy.nfs import Nfs
from lava_dispatcher.pipeline.actions.deploy.vemsd import VExpressMsd
+from lava_dispatcher.pipeline.actions.deploy.usbgadget import USBGadgetDeployment
diff --git a/lava_dispatcher/pipeline/actions/deploy/usbgadget.py b/lava_dispatcher/pipeline/actions/deploy/usbgadget.py
new file mode 100644
index 0000000..65347f4
--- /dev/null
+++ b/lava_dispatcher/pipeline/actions/deploy/usbgadget.py
@@ -0,0 +1,254 @@
+import os
+import patch
+import guestfs
+import errno
+import gzip
+
+from paramiko import SSHClient, AutoAddPolicy
+from scp import SCPClient
+from tempfile import mkdtemp
+from shutil import rmtree
+
+from lava_dispatcher.pipeline.action import (
+    Pipeline,
+    Action,
+    InfrastructureError,
+    JobError,
+)
+from lava_dispatcher.pipeline.logical import Deployment
+from lava_dispatcher.pipeline.actions.deploy import DeployAction
+
+from lava_dispatcher.pipeline.actions.deploy.download import (
+ DownloaderAction,
+)
+
+from lava_dispatcher.pipeline.actions.deploy.overlay import (
+ OverlayAction,
+)
+
+from lava_dispatcher.pipeline.actions.deploy.apply_overlay import (
+ ApplyOverlayImage,
+)
+
+
+class PatchFileAction(Action):
+
+ def __init__(self):
+ super(PatchFileAction, self).__init__()
+ self.name = "patch-image-file"
+ self.summary = "patch-image-file"
+ self.description = "patch-image-file"
+
+ def run(self, connection, max_end_time, args=None):
+ connection = super(PatchFileAction, self).run(
+ connection, max_end_time, args)
+
+ decompressed_image = self.get_namespace_data(
+ action='download-action', label='image', key='file')
+ self.logger.debug("Image: %s", decompressed_image)
+
+ partition = self.parameters['patch'].get('partition')
+ if partition is None:
+ raise JobError(
+ "Unable to apply the patch, image without 'partition'")
+
+ patchfile = self.get_namespace_data(
+ action='download-action', label='file', key='patch')
+
+ destination = self.parameters['patch'].get('dst')
+
+        self.patch_file(decompressed_image, partition, destination,
+                        patchfile)
+ return connection
+
+ @staticmethod
+ def patch_file(image, partition, destination, patchfile):
+        """
+        Reads the destination file from the image, patches it and
+        writes it back to the image.
+        """
+ guest = guestfs.GuestFS(python_return_dict=True)
+ guest.add_drive(image)
+ guest.launch()
+ partitions = guest.list_partitions()
+ if not partitions:
+ raise InfrastructureError("Unable to prepare guestfs")
+ guest_partition = partitions[partition]
+ guest.mount(guest_partition, '/')
+
+ # create mount point
+ tmpd = mkdtemp()
+
+ # read the file to be patched
+ f_to_patch = guest.read_file(destination)
+
+ if destination.startswith('/'):
+ # copy the file locally
+ copy_dst = os.path.join(tmpd, destination[1:])
+ else:
+ copy_dst = os.path.join(tmpd, destination)
+
+ try:
+ os.makedirs(os.path.dirname(copy_dst))
+ except OSError as exc:
+ if exc.errno == errno.EEXIST:
+ pass
+ else:
+ raise
+
+ with open(copy_dst, 'w') as dst:
+ dst.write(f_to_patch)
+
+ # read the patch
+ ptch = patch.fromfile(patchfile)
+
+ # apply the patch
+ ptch.apply(root=tmpd)
+
+ # write the patched file back to the image
+ with open(copy_dst, 'r') as copy:
+ guest.write(destination, copy.read())
+
+ guest.umount(guest_partition)
+
+ # remove the mount point
+ rmtree(tmpd)
+
+
+class USBGadgetScriptAction(Action):
+
+ def __init__(self, host):
+ super(USBGadgetScriptAction, self).__init__()
+ self.name = "deploy-usb-gadget"
+ self.summary = "deploy-usb-gadget"
+ self.description = "deploy-usb-gadget"
+ self.host = host
+
+ def validate(self):
+ if 'deployment_data' in self.parameters:
+ lava_test_results_dir = self.parameters[
+ 'deployment_data']['lava_test_results_dir']
+ lava_test_results_dir = lava_test_results_dir % self.job.job_id
+            self.set_namespace_data(action='test', label='results',
+                                    key='lava_test_results_dir',
+                                    value=lava_test_results_dir)
+
+ def print_transfer_progress(self, filename, size, sent):
+ current_progress = (100 * sent) / size
+ if current_progress >= self.transfer_progress + 5:
+ self.transfer_progress = current_progress
+            self.logger.debug(
+                "Transferring file %s. Progress %d%%",
+                filename, current_progress)
+
+ def run(self, connection, max_end_time, args=None):
+ connection = super(USBGadgetScriptAction, self).run(
+ connection, max_end_time, args)
+
+ # # Compressing the image file
+ uncompressed_image = self.get_namespace_data(
+ action='download-action', label='file', key='image')
+ self.logger.debug("Compressing the image %s", uncompressed_image)
+ compressed_image = uncompressed_image + '.gz'
+        with open(uncompressed_image, 'rb') as f_in, \
+                gzip.open(compressed_image, 'wb') as f_out:
+            f_out.writelines(f_in)
+
+ # # Try to connect to the usb gadget host
+ ssh = SSHClient()
+ ssh.set_missing_host_key_policy(AutoAddPolicy())
+ ssh.connect(hostname=self.host, username='root', password='')
+        dest_file = os.path.join('/mnt/',
+                                 os.path.basename(compressed_image))
+
+ # # Clear /mnt folder
+ self.logger.debug("Clearing /mnt directory")
+ stdin, stdout, stderr = ssh.exec_command('rm -rf /mnt/*')
+ exit_code = stdout.channel.recv_exit_status()
+ if exit_code == 0:
+ self.logger.debug("/mnt clear")
+ else:
+ self.logger.error("Could not clear /mnt on secondary device")
+
+ # # Transfer the compressed image file
+        self.logger.debug(
+            "Transferring file %s to the usb gadget host",
+            compressed_image)
+ self.transfer_progress = 0
+
+ scp = SCPClient(ssh.get_transport(),
+ progress=self.print_transfer_progress,
+ socket_timeout=600.0)
+ scp.put(compressed_image, dest_file)
+ scp.close()
+
+ # # Decompress the sent image
+ self.logger.debug("Decompressing the file %s", dest_file)
+        stdin, stdout, stderr = ssh.exec_command('gzip -d %s' % dest_file)
+ exit_code = stdout.channel.recv_exit_status()
+ if exit_code == 0:
+ self.logger.debug("Decompressed file")
+ else:
+ self.logger.error("Could not decompress file: %s",
+ stderr.readlines())
+
+ # # Run the g_mass_storage module
+ dest_file_uncompressed = dest_file[:-3]
+        self.logger.debug(
+            "Exposing the image %s as a usb storage",
+            dest_file_uncompressed)
+
+ stdin, stdout, stderr = ssh.exec_command('rmmod g_mass_storage')
+ exit_code = stdout.channel.recv_exit_status()
+
+ stdin, stdout, stderr = ssh.exec_command(
+ 'modprobe g_mass_storage file=%s' % (dest_file_uncompressed))
+ exit_code = stdout.channel.recv_exit_status()
+ if exit_code == 0:
+ self.logger.debug("Mounted mass storage file")
+ else:
+ self.logger.error("Could not mount file: %s",
+ stderr.readlines())
+
+ ssh.close()
+ return connection
+
+
+class USBGadgetDeploymentAction(DeployAction):
+
+ def __init__(self):
+ super(USBGadgetDeploymentAction, self).__init__()
+ self.name = 'usb-gadget-deploy'
+ self.description = "deploy images using the fake usb device"
+ self.summary = "deploy images"
+
+ def populate(self, parameters):
+ self.internal_pipeline = Pipeline(
+ parent=self, job=self.job, parameters=parameters)
+ path = self.mkdtemp()
+
+ # Download the image
+ self.internal_pipeline.add_action(DownloaderAction('image', path))
+
+ if self.test_needs_overlay(parameters):
+ self.internal_pipeline.add_action(OverlayAction())
+ self.internal_pipeline.add_action(ApplyOverlayImage())
+
+ # Patch it if needed
+ if 'patch' in parameters:
+ self.internal_pipeline.add_action(DownloaderAction('patch', path))
+ self.internal_pipeline.add_action(PatchFileAction())
+
+ host = self.job.device['actions']['deploy'][
+ 'methods']['usbgadget']['usb_gadget_host']
+ self.internal_pipeline.add_action(USBGadgetScriptAction(host))
+
+
+class USBGadgetDeployment(Deployment):
+ """
+ Only for iot2000-usb
+ """
+ compatibility = 4
+
+ def __init__(self, parent, parameters):
+ super(USBGadgetDeployment, self).__init__(parent)
+ self.priority = 1
+ self.action = USBGadgetDeploymentAction()
+ self.action.section = self.action_type
+ self.action.job = self.job
+ parent.add_action(self.action, parameters)
+
+ @classmethod
+ def accepts(cls, device, parameters):
+ """
+ Accept only iot2000-usb jobs
+ """
+ return device['device_type'] == 'iot2000-usb'
--
2.7.4
Dear all,
I'm adding a new device to my test farm. This device supports U-Boot. The device-type and device have been created, and I want to test a simple NFS job.
To get the kernel to start, I need to run two commands from U-Boot before launching the NFS boot command:
dcache off and icache off
My question is how to manage that with LAVA V2.
Should it be done in the device or device-type configuration, or should it be done in the test job?
Any example is welcome.
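For reference, I was wondering whether it is just a matter of prepending the
two commands to the U-Boot NFS command list in the rendered device
configuration (via the device-type template), roughly like this (trimmed; the
surrounding commands are only indicative):

actions:
  boot:
    methods:
      u-boot:
        nfs:
          commands:
          - dcache off
          - icache off
          - setenv autoload no
          # ... the existing tftp / nfsroot / bootargs commands from the
          # template, ending with the boot command ...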
Best regards,
Denis
Hello,
I have another problem (or gap in my knowledge) with LAVA, so I need a bit of
a push here. I ran cyclictest (timer interrupt latency measurements, reaction
time), using hackbench as the application load, which introduces a tremendous
load on the core (at the peak, around 300 in the 1m/5m/15m load averages).
So, poking around, I found the following JSON script:
https://github.com/kernelci/lava-ci/blob/master/templates/cyclictest/generi…
I know (at least this is my understanding) that LAVA V2 works as a
pipeline, so test jobs must be broken into blocks (deploy, boot and test).
I would like to change this JSON script a bit to use hackbench as the load,
but much more important: how can I use this JSON script as a LAVA test job?
I guess I need to convert this script into some YAML format, as
explained/described above... Am I correct?
How do I do this? Any description, example... advice?
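From what I understand so far, it would need to become a pipeline job roughly
like this (the device type, URLs and test definition paths below are just
placeholders on my side):

device_type: beaglebone-black
job_name: cyclictest with hackbench load
visibility: public
priority: medium

timeouts:
  job:
    minutes: 60
  action:
    minutes: 15

actions:
- deploy:
    to: tftp
    kernel:
      url: http://localhost:8010/cip-example/zImage
      type: zimage
    ramdisk:
      url: http://localhost:8010/initramfs/initramfs.cpio.gz
      compression: gz
    dtb:
      url: http://localhost:8010/cip-example/am335x-boneblack.dtb

- boot:
    method: u-boot
    commands: ramdisk
    prompts:
    - 'root@'

- test:
    definitions:
    - repository: https://git.example.com/my-test-definitions.git
      from: git
      path: cyclictest/cyclictest-with-hackbench.yaml
      name: cyclictest-hackbench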
Thank you,
Zoran
_______
Hello Folks,
I have a really interesting problem with LAVA, this time while executing tests.
Lava versions:
root@stretch:/usr/share# dpkg -l lava-server lava-dispatcher
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==========================-==================-==================-=========================================================
ii  lava-dispatcher  2018.2.post2-1+str  amd64  Linaro Automated Validation Architecture dispatcher
ii  lava-server      2018.2-1+stretch    all    Linaro Automated Validation Architecture server
root@stretch:/usr/share#
_______
This problem appears while downloading the initramfs. The initramfs I am
serving from http://localhost:8010 has a fixed size of 46275637 bytes
(decimal), so the ramdisk.cpio.gz.uboot I download (which in theory should be
64 bytes more, exactly 46275701) should be fixed as well. Instead, I am
getting the following for the same job, repeated several times in LAVA:
Job #113: 46597208 (decimal)
Job #114: 46596349 (decimal)
Job #115: 46595788 (decimal)
In other words, each time I am downloading the same ingredients:
http://localhost:8010/initramfs/initramfs.cpio.gz
http://localhost:8010/cip-example/cip_v4.4.120-cyclic/v4.4.120-cip20-rt13/a…
http://localhost:8010/cip-example/cip_v4.4.120-cyclic/v4.4.120-cip20-rt13/a…
and all three times I get the exact same size for:
zImage - 4167704 (3f9818 hex)
am335x-boneblack.dtb - 31552 (7b40 hex)
I removed u-boot-tools, then installed it back, but this did not help.
"service tftpd-hpa restart" did not help either.
I will continue investigating this problem myself, but has anybody
noticed the same?
Thank you,
Zoran
Good morning everyone,
I would like to know if someone here has already faced the following message
while opening a LAVA results page or the LAVA all-jobs page:
I am looking through the LAVA documentation to check whether it is a known
issue, but if someone here already knows something about it, that would help
me as well.
regards,
On 19 March 2018 at 16:26, Neil Williams <neil.williams at linaro.org> wrote:
>The problem of picking a number is that it depends a lot on the resources
>available to the server and the performance of the devices themselves.
I get the point, but that information is helpful as well, so maybe some examples of the dimensions in which LAVA can be (and actually is) used would be worth mentioning. I have read most of the LAVA documentation, but I had no idea that test jobs with 50,000 test cases in them are actually running on your servers.
>Templating.
>
>Check out how Jinja2 is used for the server-side device configuration
>templates - Jinja2 can output any text-based format, we chose YAML. The
>same principles are used by the Linaro QA team to produce the test job
>submissions. Templates live in version control, the commit hash of the
>template gets included into the metadata of the output.
That’s a very good hint, thank you. I hadn’t thought of this yet. Are the repositories containing your templates publicly available?
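Just to check my understanding: you mean something roughly like the sketch
below, kept in version control so that the commit hash of the template can go
into the job metadata (all names here are made up on my side)?

{# job-template.yaml.jinja2 -- illustrative only #}
device_type: {{ device_type }}
job_name: smoke test for build {{ build_id }}

actions:
- deploy:
    to: tftp
    kernel:
      url: {{ artifact_base_url }}/zImage
      type: zimage
    ramdisk:
      url: {{ artifact_base_url }}/rootfs.cpio.gz
      compression: gz
- boot:
    method: u-boot
    commands: ramdisk
    prompts:
    - 'root@'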
Mit freundlichen Grüßen / Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks(a)garz-fricke.com
www.garz-fricke.com
SOLUTIONS THAT COMPLETE!
Sitz der Gesellschaft: D-21079 Hamburg
Registergericht: Amtsgericht Hamburg, HRB 60514
Geschäftsführer: Matthias Fricke, Manfred Garz