
test-configs: fix imx6ul-pico-hobbit boot method to barebox #39

Merged 2 commits on Apr 3, 2019

Conversation

@mgrzeschik (Contributor)

Signed-off-by: Michael Grzeschik [email protected]

@mgrzeschik force-pushed the for-ml branch 3 times, most recently from c7a9eaa to 74d3ec5 on March 28, 2019 at 21:01
@gctucker (Contributor) left a comment


Looks good, just some minor comments in the barebox template. This will be tested on staging today.

@mgrzeschik force-pushed the for-ml branch 2 times, most recently from 624b595 to c216ebf on March 29, 2019 at 10:39
@gctucker (Contributor)

Thanks, it should have been tested by now, but for some reason I don't see any results from the Pengutronix lab here: https://staging.kernelci.org/boot/all/job/staging/branch/kernelci-stable/kernel/v5.0.5-2-g9cc835cc7dd1/

@mgrzeschik (Author)

Can the kernelci side work with barebox jobs? Does it parse the results correctly?

@mgrzeschik (Author)

The job was successful:

https://hekla.openlab.pengutronix.de/scheduler/job/256094

@gctucker (Contributor)

> Can the kernelci side work with barebox jobs? Does it parse the results correctly?

In principle the bootloader type shouldn't matter; I didn't have to change anything when we started using depthcharge. I'll take a look in the server logs to see if there was any issue with receiving the LAVA callbacks. You can also check on your end, in the LAVA server logs, whether the server returned an HTTP error for the callbacks.

@gctucker (Contributor)

> The job was successful:
>
> https://hekla.openlab.pengutronix.de/scheduler/job/256094

Thanks - unfortunately I don't have IPv6 access from where I am right now :P

@mgrzeschik (Author)

/var/log/lava-server/django.log is telling me:
INFO 2019-03-29 10:18:11,903 models Sending request to callback url https://staging-api.kernelci.org/callback/lava/boot?lab_name=lab-pengutronix-dev&status=2&status_string=complete
WARNING 2019-03-29 10:18:12,823 models Problem sending request to https://staging-api.kernelci.org/callback/lava/boot?lab_name=lab-pengutronix-dev&status=2&status_string=complete: 403 Client Error: Operation not permitted: provided token is not authorized for url: https://staging-api.kernelci.org/callback/lava/boot?lab_name=lab-pengutronix-dev&status=2&status_string=complete

I don't think that I changed the kernelci token permissions.
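Incidentally, the failing status can be pulled out of such django.log lines programmatically; a minimal sketch (the log text is abridged from the lines above, the regex is my own, not part of any LAVA tooling):

```python
import re

# Log line copied (abridged) from /var/log/lava-server/django.log above.
log_line = (
    "WARNING 2019-03-29 10:18:12,823 models Problem sending request to "
    "https://staging-api.kernelci.org/callback/lava/boot"
    "?lab_name=lab-pengutronix-dev&status=2&status_string=complete: "
    "403 Client Error: Operation not permitted: provided token is not "
    "authorized for url"
)

# Extract the HTTP status code from the "<code> Client Error" part.
match = re.search(r"(\d{3})\s(?:Client|Server)\sError", log_line)
status = int(match.group(1)) if match else None
print(status)  # → 403
```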

@gctucker (Contributor) commented Apr 2, 2019

There seems to be a problem with the job definitions for barebox:

16:54:47 Job written: lab-pengutronix-dev/staging-staging.kernelci.org-v5.0.5-4-gbb851bdee78a-arm-multi_v7_defconfig-gcc-7-imx6ul-pico-hobbit.dtb-imx6ul-pico-hobbit-boot.yaml
[...]
16:54:51 Loading jobs from lab-pengutronix-dev
16:54:51 LAVA API: https://hekla.openlab.pengutronix.de/RPC2/
16:54:51 Connecting to Server...
16:54:51 Connection Successful!
[...]
16:54:51 Traceback (most recent call last):
16:54:51   File "lava-v2-submit-jobs.py", line 158, in <module>
16:54:51     main(args)
16:54:51   File "lava-v2-submit-jobs.py", line 131, in main
16:54:51     result = submit_jobs(connection)
16:54:51   File "lava-v2-submit-jobs.py", line 57, in submit_jobs
16:54:51     job_info = yaml.safe_load(job_data)
16:54:51   File "/usr/lib/python2.7/dist-packages/yaml/__init__.py", line 93, in safe_load
16:54:51     return load(stream, SafeLoader)
16:54:51   File "/usr/lib/python2.7/dist-packages/yaml/__init__.py", line 71, in load
16:54:51     return loader.get_single_data()
16:54:51   File "/usr/lib/python2.7/dist-packages/yaml/constructor.py", line 37, in get_single_data
16:54:51     node = self.get_single_node()
16:54:51   File "/usr/lib/python2.7/dist-packages/yaml/composer.py", line 36, in get_single_node
16:54:51     document = self.compose_document()
16:54:51   File "/usr/lib/python2.7/dist-packages/yaml/composer.py", line 55, in compose_document
16:54:51     node = self.compose_node(None, None)
16:54:51   File "/usr/lib/python2.7/dist-packages/yaml/composer.py", line 84, in compose_node
16:54:51     node = self.compose_mapping_node(anchor)
16:54:51   File "/usr/lib/python2.7/dist-packages/yaml/composer.py", line 127, in compose_mapping_node
16:54:51     while not self.check_event(MappingEndEvent):
16:54:51   File "/usr/lib/python2.7/dist-packages/yaml/parser.py", line 98, in check_event
16:54:51     self.current_event = self.state()
16:54:51   File "/usr/lib/python2.7/dist-packages/yaml/parser.py", line 439, in parse_block_mapping_key
16:54:51     "expected <block end>, but found %r" % token.id, token.start_mark)
16:54:51 yaml.parser.ParserError: while parsing a block mapping
16:54:51   in "<string>", line 3, column 1:
16:54:51     metadata:
16:54:51     ^
16:54:51 expected <block end>, but found '-'
16:54:51   in "<string>", line 65, column 1:
16:54:51     - boot:
16:54:51     ^
16:54:51 Build step 'Execute shell' marked build as failure
16:54:51 Finished: FAILURE

I guess the jinja2 stage worked, so it did generate a YAML file, but the file contains YAML errors. I'll see if I can get the YAML file from the Jenkins workspace if it hasn't been deleted; otherwise, maybe you can try to run this locally and reproduce the problem?
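For reference, the parser error in that traceback can be reproduced standalone; this is a minimal sketch with made-up content that only mimics the shape the error describes (a top-level block mapping followed by a stray top-level sequence entry), not the real generated job:

```python
import yaml  # PyYAML, as used by lava-v2-submit-jobs.py

# Made-up fragment mimicking the broken shape: a block mapping
# ("metadata:") followed by a top-level sequence entry ("- boot:").
broken_job = """\
metadata:
  job_name: imx6ul-pico-hobbit-boot
- boot:
    method: barebox
"""

try:
    yaml.safe_load(broken_job)
    result = "parsed"
except yaml.parser.ParserError as err:
    result = str(err.problem)

print(result)  # the same "expected <block end>, but found ..." error
```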

@mgrzeschik (Author)

I somehow doubt that the suggestion of "removing all the blocks with super() calls" from the jinja2 templates holds. I find such calls in every template that refers to other files using extends, e.g. templates/base/kernel-ci-base-tftp-deploy.jinja2: {% extends 'base/kernel-ci-base.jinja2' %} {% block metadata %} ...
I don't know how to reproduce the YAML file with the current code stack, like jenkins/lava-boot-v2.sh does, as I don't have any template run to use.

@gctucker (Contributor) commented Apr 3, 2019

I've generated a YAML job definition locally: https://termbin.com/07wj

The actions and deploy parts are missing. So while it's not necessary to mention blocks with no changes, the actions block needs something like this in order to reuse the standard deploy while defining a different boot with the barebox method:

{% block actions %}
{%- block deploy %}
{{ super() }}
{%- endblock %}

- boot:
    timeout:
      minutes: 5
    method: barebox
    commands: ramdisk
    prompts:
      - '{{ rootfs_prompt }}'
{% endblock %}
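To illustrate, the super() mechanism above can be exercised in isolation; a minimal sketch with hypothetical template names and contents (not the real kernelci files), showing the child's actions block reusing the parent's deploy block while appending its own boot action:

```python
from jinja2 import DictLoader, Environment

# Hypothetical templates mimicking the inheritance pattern described above.
templates = {
    "base.jinja2": (
        "{% block actions %}"
        "{% block deploy %}"
        "- deploy:\n    to: tftp\n"
        "{% endblock %}"
        "{% endblock %}"
    ),
    "barebox.jinja2": (
        "{% extends 'base.jinja2' %}"
        "{% block actions %}"
        # super() pulls in the parent's deploy block unchanged.
        "{% block deploy %}{{ super() }}{% endblock %}"
        "- boot:\n    method: barebox\n"
        "{% endblock %}"
    ),
}

env = Environment(loader=DictLoader(templates))
rendered = env.get_template("barebox.jinja2").render()
print(rendered)  # deploy from the base template, then the barebox boot
```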

This pull request contained 2 commits:

- Add barebox variant to the test plans.
  Signed-off-by: Michael Grzeschik <[email protected]>
- add sleep to the test_plan for pico-hobbit
  Signed-off-by: Michael Grzeschik <[email protected]>
@gctucker (Contributor) commented Apr 3, 2019

The sleep test plan is not enabled for lab-pengutronix in the shell script that submits the jobs, but it's not worth fixing now, as that script should be replaced with a cleaner implementation using the YAML configuration. I've actually generated the LAVA job locally for the simple test plan on that board and checked that it passed LAVA validation; it looked fine:
https://termbin.com/tagg

So this is ready to be merged now - thanks!

@gctucker merged commit a7ce3af into kernelci:master on Apr 3, 2019
mattface pushed a commit to mattface/kernelci-core that referenced this pull request on Apr 12, 2019:

When passing an output dir via -t basename:"output" and the directory doesn't exist, debos finishes with an error. Rework so that the output directory gets created.

Signed-off-by: Anders Roxell <[email protected]>