Thank you for contributing to OpenShift Ansible. This document explains how the repository is organized, and how to submit contributions.
Before submitting code changes, familiarize yourself with these documents:
```
.
├── inventory           Contains dynamic inventory scripts, and examples of
│                       Ansible inventories.
├── library             Contains Python modules used by the playbooks.
├── playbooks           Contains Ansible playbooks targeting multiple use cases.
└── roles               Contains Ansible roles, units of shared behavior among
                        playbooks.
```
These are plugins used in playbooks and roles:
```
.
├── ansible-profile
├── callback_plugins
├── filter_plugins
└── lookup_plugins
```
```
.
├── bin                 [DEPRECATED] Contains the bin/cluster script, a
│                       wrapper around the Ansible playbooks that ensures proper
│                       configuration, and facilitates installing, updating,
│                       destroying and configuring OpenShift clusters.
│                       Note: this tool is kept in the repository for legacy
│                       reasons and will be removed at some point.
└── utils               Contains the atomic-openshift-installer command, an
                        interactive CLI utility to install OpenShift across a
                        set of hosts.
```
```
.
└── docs                Contains documentation for this repository.
```
```
.
└── test                Contains tests.
```
See the RPM build instructions.
We use `tox` to manage virtualenvs and run tests. Alternatively, tests can be run using `detox`, which allows for running tests in parallel.

Note: while `detox` may be useful in development to make use of multiple cores, it can be buggy at times and produce flakes, thus we do not use it in our CI.
```
pip install tox detox
```
Note: before running `tox` or `detox`, ensure that the only virtualenvs within the repository root are the ones managed by `tox`, those in a `.tox` subdirectory.

Use this command to list paths that are likely part of a virtualenv not managed by `tox`:
```
$ find . -path '*/bin/python' | grep -vF .tox
```
The reason for this recommendation is that extraneous virtualenvs cause tools such as `pylint` to take a very long time going through files that are part of the virtualenv, and test discovery to go through lots of irrelevant files and potentially fail.
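If that command reports a stray virtualenv, the usual fix is simply to delete its directory before running the tests. The path below is a hypothetical example, not a directory that exists in this repository:

```
$ rm -rf ./some-stray-venv
```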
List the test environments available:
```
tox -l
```
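The exact list depends on the `tox.ini` in your checkout; purely as an illustration (environment names here are guesses based on the ones used later in this document), the output might look something like:

```
py27-flake8
py27-pylint
py27-unit
py35-flake8
py35-pylint
py35-unit
```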
Run all of the tests and linters with:
```
tox
```
Run all of the tests and linters in parallel (may flake):

```
detox
```
Run a particular test environment (`flake8` on Python 2.7 in this case):

```
tox -e py27-flake8
```
Run a particular test environment in a clean virtualenv (`pylint` on Python 3.5 in this case):

```
tox -re py35-pylint
```
If you want to enter a virtualenv created by `tox` to do additional testing/debugging (`py27-flake8` env in this case):

```
source .tox/py27-flake8/bin/activate
```
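Once the virtualenv is activated, the tools installed in it can be invoked directly, and `deactivate` returns you to your regular shell. As a sketch, assuming the `py27-flake8` environment from above:

```
$ flake8 --version
$ deactivate
```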
During development, it might be useful to constantly run just a single test file or test method, or to pass custom arguments to `pytest`:

```
tox -e py27-unit -- path/to/test/file.py
```
Anything after `--` is passed directly to `pytest`. To learn more about what other flags you can use, try:

```
tox -e py27-unit -- -h
```
As a practical example, the snippet below shows how to list all tests in a certain file, and then execute only one test of interest:
```
$ tox -e py27-unit -- roles/lib_openshift/src/test/unit/test_oc_project.py --collect-only --no-cov
...
collected 1 items
<Module 'roles/lib_openshift/src/test/unit/test_oc_project.py'>
  <UnitTestCase 'OCProjectTest'>
    <TestCaseFunction 'test_adding_a_project'>
...
$ tox -e py27-unit -- roles/lib_openshift/src/test/unit/test_oc_project.py -k test_adding_a_project
```
Among other things, this can be used to see the coverage levels of individual modules as we work on improving tests.
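The `--no-cov` flag above suggests the unit environment collects coverage via `pytest-cov`; if so, a sketch like the following narrows the report to a single module (the `--cov` path here is a hypothetical example):

```
$ tox -e py27-unit -- --cov=roles/lib_openshift/library --cov-report=term-missing \
    roles/lib_openshift/src/test/unit/test_oc_project.py
```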
One of the repository maintainers will then review the PR and submit it for testing.
The default test job is publicly accessible at https://ci.openshift.redhat.com/jenkins/job/openshift-ansible/. The other jobs are run on a different Jenkins host that is not publicly accessible; however, the test results are posted to S3 buckets when complete.
The test output of each job is also posted to the Pull Request as comments.
A trend of the time taken by merge jobs is available at https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_openshift_ansible/buildTimeTrend.
If you are contributing Python code, you can use the tool `vulture` to verify that you are not introducing unused code by accident.

This tool is not used in CI or other automation because it may produce both false positives and false negatives. Still, it can be helpful for detecting dead code that escapes our eyes.
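`vulture` is a regular Python package, so a typical invocation would look like the sketch below (this is not an officially supported workflow here; the file path is a placeholder for whatever you changed):

```
pip install vulture
vulture path/to/changed_file.py
```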